Reviewing #10--Shingling
You've all heard about shingling and what it is. As a reviewer, you can be very helpful to the editor by pointing out that a paper is a shingle. Keep in mind that, even at specialty journals, editors come from a wide variety of backgrounds and may not be familiar with all corners of the literature. At high-volume journals, it may be very difficult, if not impossible, for the editors to do the legwork--which can take a couple of hours or more per paper--to figure out whether a paper is a shingle, so they really are dependent on the reviewers.
Before I start, though, let me define shingling for those of you who might not be that familiar with the concept. Shingling is publishing basically the same paper as one you've published before, but with some new data added, resulting in, at best, a mere expansion of already-reached conclusions.
Not all papers that have new data added to old data are shingles. I have seen many papers that add new data to previous work, but the new data either substantially change or even falsify previous conclusions. Such papers can be very exciting and certainly useful.
I recently handled a paper that seemed to me to be a shingle. I did not feel qualified, however, to evaluate whether the new data added enough, so I sent it out for review. Although it got a couple of good reviews (one from a reviewer recommended by the authors and one from a reviewer whom I did not expect to know the literature well, but whom I recruited for his expertise in a particular aspect of the paper), the third review slammed the paper on the very grounds I suspected. Because the first two reviewers found value in the work, I rejected the paper but offered the authors the opportunity to resubmit if they could focus on the new data and how it substantially changed the conclusions.
Shingling came about, of course, because P&T committees have tended to focus on numbers of papers published rather than their impact on the science. With the rise of impact statistics (well, citation statistics, which aren't necessarily the same thing), the pressure to produce numbers has eased somewhat in some institutions.