As is, I suppose, an all too common occurrence for an academic, I recently received an email asking me to review a paper – this time for a journal called “Journal of Computer and Communications”, published by an outfit called SCIRP. A bit of googling on that publisher brought up the following:
- Is Scientific Research Publishing (SCIRP) Publishing Pseudo-Science?
- A Wikipedia page listing a large number of controversies surrounding this publishing house
- Two new journals copying the old from nature
- The Chinese Publisher SCIRP (Scientific Research Publishing): A Publishing Empire Built on Junk Science
Ouch!
This journal lists “prof Vicente Milanés” as its Editor-in-Chief. That’s a little odd, given that INRIA doesn’t have “professors” as a job title (it has “research directors”, “junior research scientists” and “R&D Engineers”), and that Vicente Milanés is listed as holding a “Starting Research Position” on his research team’s website. Vicente’s LinkedIn profile does not mention this journal – but his professional WWW-site does…
But, back to the beginning. The email requesting me to do a review looked thus:
Admittedly, I had never heard of this journal, and the grammatically incorrect name was a bit of a red flag – but the topic of the paper is one where I am vaguely competent and within which I have published myself. Furthermore, I was looking for a good excuse to stop doing administration and do a little bit of science – in this case, doing “scientific community service” felt close enough to science. So, despite the warning signs, I accepted – and started the review almost immediately.
About 2h later I was done. The paper was badly written and presented no new, or even valid, results – and it was guilty of almost all the sins I have discussed in my Bad Science series: its bibliography lacked references to the objects studied, it drew generalised (and incorrect) conclusions from a single data point, etc.
My recommendation was a firm reject and, out of respect for the authors, was accompanied by a 3-page review detailing the failings I saw. There’s nothing worse than getting a review “verdict” without an explanation – and a reviewer not willing to explain his or her reasons is not a “peer” to the authors.
The paper, and the review, would actually have made a great topic for the Bad Science series in itself – but as the paper was (and is) not published, it would be unethical of me to post the details. Most reputable journals and conferences also explicitly require reviewer discretion – on their websites, in review solicitation emails, etc. And all reputable scientists honour that code of ethics even when not reminded to do so by the journal publisher.
But, on that topic… the PDF of the paper that I received contained a copyright statement saying that the work was licensed under Creative Commons CC BY 4.0 – a license which allows sharing and adapting the material for any purpose, even commercially.
So, the PDF distributed to the reviewers came with a surprising copyright statement, essentially in direct opposition to what I consider professional conduct – conduct which the IEEE Computer Society guidelines for peer review capture fairly well:
- Assume that manuscripts submitted for publication are not meant to be public
- Do not use material from a manuscript you have reviewed
- Do not share material from a manuscript you have reviewed with others
- Do not distribute copies of a manuscript you have been asked to review unless the material is already public
I would never submit my unpublished work to a publication venue which would not guarantee that it would be treated confidentially: with this CC BY 4.0 licensing of the review copy, a less-than-ethical reviewer could, for example, feel justified in rejecting the paper and publishing the results as his/her own, simply with an attribution along the lines of “inspired by private communication with …”.
On that front alone, “Journal of Computer and Communications” and Scientific Research Publishing are disqualifying themselves as serious academic publishers.
So I’ll look at this “Journal of Computer and Communications”, which is apparently published by something called Scientific Research Publishing, and which claims a “Google-based Impact Factor” of 0.79 – which the publisher and journal, of course, claim to be superior to the traditional Thomson Reuters-calculated, Web of Science-based journal impact factor (in which this journal is not included, btw).
The journal uses the generic SCIRP Peer Review Process, which has a few preliminary steps, including plagiarism checks and a preliminary review by (presumably) an associate editor, before a paper is sent to the peer reviewers. It is not clear precisely what that preliminary review consists of, other than “check data, score manuscript”, but I would assume that it is not a scientific review – but rather a sanity check that a scientific review will be possible.
That review process actually looks complete and appropriate on the face of it (although… see later), and SCIRP could be commended for having a well-defined process, AND for making it available to anyone bothering to look.
One of the first things that struck me when flipping through the paper, before even starting to read the words, was the graphs – presumably presenting results from the study, and included below. These graphs were, literally, low-resolution screenshots – so low-resolution that it is impossible to read the values on the axes, the legends, etc., and therefore simply impossible to interpret the results.
That’s the sort of thing I would expect a “preliminary review” to catch and request corrected before bothering the (presumably more than one – but more on that in a minute) peer reviewers: this is one of the things required before “a scientific review will be possible”.
When I submitted my review online, I also sent an email to the editorial assistant, asking if this paper was representative of the standard of submissions they receive. I promptly got a reply (which is fantastic) – but, sadly, a reply which sent shivers down my spine, because, given the timeline, it seems that they may have made a decision based on my review alone:
- I received the review request on Nov. 3, 2016, at 12:20
- I submitted my review on Nov. 3, 2016, at 14:37
- I received the note (right) on Nov. 4, 2016, at 3:45
So, a decision was made to reject the paper less than 16h after the review request was sent to me.
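For what it’s worth, a minimal sketch of that back-of-the-envelope calculation, using the timestamps as stated above and assuming both are in the same time zone:

```python
from datetime import datetime

# Timestamps as stated in the timeline above (assumed to be in the same time zone).
request_received = datetime(2016, 11, 3, 12, 20)  # review request received
decision_note = datetime(2016, 11, 4, 3, 45)      # rejection note received

# Elapsed time between the review request and the decision note.
print(decision_note - request_received)  # 15:25:00, i.e. roughly 15.4 hours
```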
Now, admittedly, my review was fairly strong in its recommendation to reject the paper. But, even when trying to be neutral, everybody has a bias that may show. It might be that a reviewer has a beef with one of the authors (this review was not double-blind), for example. I have myself received reviews of my papers where it was clear that the reviewer was upset that I didn’t cite enough of his/her work. I even once had a review which simply stated “I don’t like these results”, with a recommendation of “strong reject”.
That’s one reason that reputable publication venues generally have multiple reviewers – which is what the generic SCIRP Peer Review Process also suggests (although it does not state the minimum number of reviews they are willing to operate with for a given paper).
Knowing how hard it is, in general, to extract reviews from peer reviewers, it would be impressive if “JCC” managed to get more than mine within that 16h window. On the other hand, it would be quite worrying if they made a decision based on just my review…
Of course, JCC may have previously requested reviews from a first set of reviewers, not all of whom responded in time, and therefore solicited me (and presumably others) for additional reviews in order to – precisely – have a quorum for making a decision. I reached out to the editorial assistant on this matter on November 5, 2016.
On November 7, 2016, I received this reply to my inquiry:
On the Seriousness of Journals
While I am of course forming my opinion on this particular journal, and this particular publisher, that’s actually not the main point of this posting. Some general considerations on the seriousness of journals are, however, the point:
- I often decry journals for having unduly long publication processing times, and that criticism stands. However, the inverse – in this case, that I saw a “verdict” announced less than 16h after having received the review request – is suspect, and doesn’t inspire confidence that the process is rigorous.
- By the same token, it is immensely useful for building confidence to understand what happens during the review process, including how many reviews are expected. The IEEE Computer Society states on its website, for example, that at least three reviews will be solicited.
- While, of course, ethical conduct of reviewers is (mostly) par for the course, it’s a sign of a lack of seriousness when a publishing venue does nothing to remind the reviewers of proper conduct. The IEEE Computer Society, again, sets a good example of expected reviewer conduct, including reminders of confidentiality. And it is an outright warning-sign when a paper received for review contains a permission-to-share/transform copyright statement.
- When the claimed editor-in-chief/chair makes no mention of the journal/conference on his/her professional profiles, that’s extremely suspect: my interpretation is that the claimed editor-in-chief/chair either is somehow an “accomplice” in the scam (such as: getting a cut of the publication fees in a pay-to-publish venue) – or is completely unaware that his/her name is being used.
- When a journal editor sends a manuscript for peer review which clearly does not meet minimum requirements (as illustrated by the unreadable “graphs” in the paper that triggered this writing), it is a clear indicator of a lack of seriousness. I have, of course, peer reviewed (and rejected) very poor papers submitted to very serious and prestigious journals or conferences – those are the rules of the game. But a serious journal or conference would never request a review of a paper that fails a simple clerical verification.
- And, as with email scams: when there are outright typos and grammatical mistakes in the journal title (“Journal of Computer and Communications”???? seriously…), then we’re probably dealing with something questionable – especially when talking about a “publishing house” which is supposed to have professional editors on staff.