Study examines the research that never receives a citation

The article is interesting for several reasons. First, the data showing the surge in citations within the sciences is striking. Second, the corresponding lack of citations in the humanities highlights crucial and, unfortunately, easily misread differences, which, third, Marco Caracciolo, who is quoted in the piece, explains:

For instance, monographs and book chapters “carry a lot of weight in this area of the humanities” and it was much more likely that these — rather than any journal article that first expressed an idea — would be cited.

“The general expectation is that articles pave the way for monographs, which will contain the ‘final’ version of an argument — not the other way around,” said Caracciolo.

He added that the citation culture was also different for scholars on the more theoretical side of literary theory. Here, citation “works by signaling affiliation with a certain movement or theoretical trend.”

“Scholars position their approach not through a comprehensive literature review but by way of strategic citations — which may result in a relatively small number of highly influential publications (typically in book form) receiving the vast majority of citations,” Caracciolo said.

“This is quite different from what happens in the sciences, where the logic would appear to be more incremental,” he said, adding that his own citation rate could be higher because of his primary field of narrative theory having “a more science-like logic.”

As the article's author, Simon Baker, points out, however, "Even with the caveats about the citation patterns seen in different disciplines, there is a danger that such figures could be seized upon by those wanting to question the value of publicly funded research." That is, austerity-minded politicians, administrators, and others who fear or hate publicly funded research and teaching (especially when it runs counter to their beliefs, be they religious or political) will look to the data, give it the yell of truth, and then cut funding for those disciplines that fail the acid test, even though that test is the wrong one for the discipline.

Solutions, like fostering a greater awareness of the issues, only work if the cutters are willing to be schooled. Another, albeit only partial, solution, which I think is practiced in some form at many British universities, is to count publications regardless of their nature: reviews, new in-depth articles, and books are all counted. It's not a good way to evaluate the quality of the scholarship produced, but it does put the humanities scholar on a more even footing with the tech/science scholar, at least when counting publications for academic or departmental purposes. A complication is that in the sciences the rule is collaboration: many authors per paper, and still more for important ones. That is not the case in most of the humanities.


Analysis suggests big differences among disciplines in the volume of scholarship that fails to garner a citation.

Source: Study examines the research that never receives a citation