More and more researchers are worried about rumours that articles published in MDPI journals may be generally excluded from the assessment of research careers. Is this happening in your country? In Spain, the discussion is already being held by several commissions in charge of assessing researchers' productivity and CVs.
Congratulations on this discussion.
In my humble opinion, the model adopted by MDPI journals only has "sustainability" for the publisher itself, not for researchers and universities.
The link below to Paolo Crosetto's piece on the topic may inform the discussion:
https://paolocrosetto.wordpress.com/2021/04/12/is-mdpi-a-predatory-publisher/
The growth experienced by MDPI is so fast that it is not sustainable. Some researchers think that MDPI functions as a Ponzi scheme, which will collapse. If this is the potential future of MDPI, will the published papers disappear? Who will maintain the servers? Who is interested in publishing with a publisher taking these risks?
I certainly agree that MDPI is using a suspicious model (even though at least some of its journals are actually of pretty high quality), but wouldn't the concern about disappearing papers apply to other commercial publishers as well (e.g. Frontiers, PLOS ONE)? There are also numerous smaller publishers that are potentially at a higher risk of running out of resources.
This is a key point from Paolo's piece that helps explain the success of MDPI journals:
What if I am wrong?
I might be wrong. There are many good sides of the OA model that MDPI adopts.
It is more inclusive, for one. It breaks the gate-keeping that is done by the small academic elites that control the traditional journals in many disciplines. It probably makes no sense in the 21st century to limit the number of articles in journals to, say, 60 a year, because back in the 20th century we printed the journals and space was at a premium, so breaking the (fictional and self-imposed) quantity constraint is a good thing.
In addition, MDPI gives you discount vouchers after reviewing for them. No other OA journals do the same.
Rodrigo Zacca
Please read the link below to Paolo Crosetto's piece; it explains very well what can be considered predatory...
https://paolocrosetto.wordpress.com/2021/04/12/is-mdpi-a-predatory-publisher/
Opinions can be expressed or objective elements can be pointed out. I vote for the latter:
- Many researchers have a large percentage of their latest JCR-indexed papers in journals from this publisher.
- As with gold or diamonds, the more JCR papers researchers have, the less each of them is worth. In other words, JCR papers become cheaper and, instead of publishing one a year, even one a month won't be enough.
- A large number of articles published in these journals are of low or medium-low quality.
- Not all articles in these journals are of low quality. Some are of very high quality. It is a pity that, as they say in Spain, "pagan justos por pecadores" (the just pay for the sinners).
From this point on, opinions are opinions.
And now for my opinion:
I admit that I myself have taken advantage of the ease of publication with this publisher, thanks, above all, to the special issues that facilitate publication at a lower cost than usual. I also recognise that, once I have published a few articles in these journals, it takes an effort to give up publishing more.
I have reviewed several articles in journals from this publisher and I never cease to be surprised that reviewers can see the names of the authors. Even journals that are not indexed in SJR use anonymous review, but not these journals. This is not acceptable.
My intention is to distance myself as much as possible from this type of publication, although I recognise that it is a temptation.
Hugo Olmedillas
I do not see any problem here. The fact that a journal has an impact factor should not automatically guarantee a positive assessment of the research output. The process of assessing CVs implies sorting people depending on their capacities and outputs. If the scientific production concentrates on fast-easy-publishing MDPI journals, then researchers should be judged accordingly. Journals and publishers could be excluded if they use Ponzi-like schemes to inflate their impact factors or if the editorial board has become a decorative puppet to attract incautious researchers. Assessing panels could also limit the number of papers of this type that can be considered. People involved in assessing CVs should be aware of this situation and act accordingly.
I think we should be honest with ourselves. Even if MDPI publishes good studies, its review process is very questionable, and the short review time of many, many studies is very suspicious. We should not discuss whether the classic journals also have a business model; that is another topic, not related to the review process, and I will be happy to show how university repositories could help to solve that issue. So, we know that the chance of publishing in MDPI is quite high in comparison with other journals because of the review process (of course with exceptions).
I am heading a research center in Sports Sciences that recently discussed and approved regulations to discourage publications in these journals. We cannot go further... but it's becoming unacceptable: within a month they will publish anything, they are flooding the market with low-level science, and now we see a left-skewed distribution in productivity because everybody publishes in Q1 journals.
The problem is that the decision made by ANECA may be totally arbitrary, and it is debatable whether a small group of evaluators should judge which articles are valid or not. Anyone who challenged the decision outside ANECA's channels (for example, in court) would likely win. The problem is fundamental and requires a re-evaluation of the entire publication system.
Very relevant topic. I would add that the business example set by MDPI, with massive numbers of articles and special issues, is undoubtedly affecting other Open Access publishers, who will desperately want to try to emulate MDPI's economic success. I see signs of this happening with e.g. the Frontiers journals. And Frontiers itself is no stranger to controversy, having been under scrutiny with reference to its business model essentially from the start.
So the problem surely does not end with MDPI and therefore the discussion should be broadened to include other Open Access publishers as well. Simply put, the bar must be raised for the Open Access publishers, especially with regard to publication ethics and the standards of the peer review process.
To illustrate some of the potential consequences for science of the current sad state of affairs in the Open Access journals, I will attempt to provide a brief example, again from Frontiers. Frontiers in Physiology used to be one of the best Open Access journals in exercise physiology, but it has in my opinion increasingly become a platform for career-hungry unscrupulous researchers to publish their second-rate descriptive papers instead of truly novel, more thorough, and more mechanistically oriented studies.
In addition, one can too often find papers in which the reviewers/referees of said article get remarkably high numbers of their own articles cited, typically at the expense of the work of other researchers. Coercive citations and "if you scratch my back, I will scratch yours" strategies of course exist in traditional journals as well, but the problem is arguably more rampant in the Open Access journals.
The Editors at Frontiers, who as far as I can tell have generally high integrity, have evidently been powerless to stop these sad developments.
The root of the problem with Open Access publishing is well known. The economic incentive to publish an article is simply outweighing matters of scientific merit and publication ethics in too many OA journals, so again the problem is certainly not limited to MDPI.
One thing that MDPI journals should do is ensure that reviews follow a double-blind review system.
Best regards,
Gustavo.
To summarize my answer: I will exclude any open-access journal, even Nature Communications and similar journals.
Not an easy question to discuss, but a very needed one.
Better and worse journals and papers have always existed. Critical thinking and evaluation from the scientific community are always needed, alongside the more traditional or newer ways of evaluating science (IF, for instance).
I agree that a new publishing paradigm is appearing.
For instance, there is no formal need for thematic journals when people search the web by keyword (so journals can become more and more multi-thematic?).
There is no more need for a limited number of articles per year or per issue (do we need an issue system anymore?) when digital storage is the only limit.
And there is no space for a lengthy revision system, where a paper can take a year or more to be published, when we have a one-week or one-month urge to publish (because of PhD defense deadlines, for example).
Nevertheless, I do believe that we still need lengthy revisions, strictly thematic journals, and a limited number of papers per year, because we need research results to be reviewed by real experts on the topic, and not by anyone who claims to be an expert and has the time and the will to do it (with a reward). The system used by MDPI, Frontiers and PLOS ONE, of inviting reviewers by the hundreds for the same paper and waiting for the first two positive answers (without a proper choice by a specialized editor), is just wrong, and it will have a major impact on the research published by these publishers.
The answer has to be slowing down the competition on publication quantity and increasing the weight of quality in all academic careers, including, and most importantly, in the demand for mandatory publication for PhD defenses. Easy to say, I know...
Dear Daniel A. Boullosa ,
in my opinion you mention a really interesting point. At the beginning of my academic career, which started before the rise of OA publishers, I had many rejections from journals based on limited space, not on the poor quality of the papers. As projects, scholarships, and postgraduate positions are in most cases limited in time, the "delay" in publishing results can have fatal consequences for both the project and the scientists. Therefore, I really like and support your positive argumentation.
However, in my opinion one central point that has to be discussed critically is that some journals (independently of being an OA publisher or not) break the rule of blind review by providing the names of the authors to potential reviewers. This can lead to biased reviews (positive or negative) and consequently influence the quality of peer review.
Best regards,
Pavel
This discussion is truly relevant and we need to unite as a community to advance.
First, every scientific field has standards, and to publish in the best journals we need to conduct the best experiments, right? Well... I think science has become too political, and political influence may be more relevant than the science itself. Nonetheless, the different layers of journal impact are creating castes of researchers, which, combined with grant awards, create a cycle of highly financed science that gets published in highly ranked journals over and over.
If one only publishes with MDPI and other open-access publishers, it certainly raises the question as to why. Every field has traditional and highly respectable journals. If the issue is to offer open-access papers, the highly traditional journals also provide such an option (perhaps more expensive). Science should be available to everyone, and open access should be cheaper at all journals.
One downside of publishing with OA publishers may be that the review process of a manuscript can be quite suboptimal. In highly respectable journals you will most likely get the top authors reading and commenting on your manuscript. Therefore, you may advance a lot and substantially improve the quality of your research outcome by going through the peer-review process. Are the top authors reviewing for publishers such as MDPI? I highly doubt it...
Nonetheless, good science can flourish even in lower-level journals. Ultimately, the impact of the research we publish can be evaluated by the citations the paper receives. But more important than the number of citations is the quality of the authors citing it. If a researcher has several papers in MDPI journals and is being highly cited by the intermediate/high-level authors in the field, I don't see it as a problem. So I think hiring committees should look at the impact of the research far more than at where it was published.
Last but not least. I personally don't believe that a research field should advance by having an infinite number of manuscripts. At the current pace, it is already quite difficult to keep up with publications. We will not generate experts to review papers as fast as we are creating papers. Unfortunately, this will reduce the overall quality of research. But this is only my opinion and I may be wrong....
See our latest published editorial about some issues related to this discussion that occur in the backstage of science:
Article The Right Journal, Editor, and Referees, at the Right Time
I find it surprising that many responses to this question broaden the debate to OA in general. In my experience, most of the recognized journals in OA do a good job, sometimes even an outstanding job (Frontiers comes to mind). On the other hand, I have stopped counting the number of times I have been asked by MDPI to review papers or edit special issues that have NOTHING to do with my area of expertise. As a result, I have completely blacklisted this publisher that I consider as predatory as the many others that flood our mailboxes all day long.
Franck Mars
I was going to add the exact same comment. Not all OA journals are bad; the problem is not the fact of being OA.
Luís Paulo Rodrigues I assume that you are referring to my comment above. Where did I say that all OA journals are bad? Specifically, I wrote: "The economic incentive to publish an article is simply outweighing matters of scientific merit and publication ethics in too many OA journals".
That said, I maintain my view that the problems I outlined above are generally greater in OA journals for the aforementioned reasons.
Franck Mars I respect your point of view and I have personally had some good experiences with Frontiers, in my case Frontiers in Physiology. But in recent years I have also seen some real horror stories in the same journal. Papers that would never ever have been published in traditional journals such as the Journal of Physiology, Acta Physiologica, the Journal of Applied Physiology, the American Journal of Physiology, etc.
Frontiers in Physiology has a relatively high impact factor, higher than e.g. the Journal of Applied Physiology. But Frontiers in Physiology is these days mixing really good, sound papers with horror stories and plenty of papers in between (i.e. "so-what articles"). I simply do not see that in the better traditional journals.
The likely reasons (in my view at least) have been outlined above, although admittedly not in an exhaustive manner. This is not to say that Frontiers and MDPI are the same, but they are on the same continuum and as stated above, Frontiers are showing signs of going in the same direction as MDPI.
I.e., the economic incentive to publish vast numbers of articles far and away outweighs matters of scientific and publication ethics.
Traditional journals also publish bullshit, but only a few specialists in each field are able to perceive it... Further, I have experienced arbitrary decisions in those journals with only 1 report, while in MDPI I've seen 2-3 rounds of review with 4-6 reviewers and the editor finally endorsing the paper (putting their name on the cover of the article). Sorry, but there are too many myths and generalizations in this discussion.
My experience with "Scientific Reports" is incredible. The editor rejected our manuscript, saying that our comparison was not appropriate, following the reviewers' suggestions (biased because of their school/paradigm). I showed him the same comparison published in other traditional journals and he said OK, but those journals are not "Scientific Reports" (sic).
I can only report that my experience with two MDPI journals was excellent in terms of quality and strictness of the reviews. Open process, clear, and without scientific prejudice.
Let me add that I always try to carry out the roles of editor, reviewer and author for MDPI journals in the most honest way I can. I act as editor for eight journals (not only MDPI) and as reviewer for 81.
Daniel A. Boullosa I am assuming that your post above was aimed, if not primarily then at least partially, at my comments. As for "too many myths and generalizations", I took care to provide examples of journals which I consider to be among the better ones in the field of physiology. Notably, these journals typically appoint specialists as reviewers, thereby minimizing the number of "so-what articles" and outright duds.
There are of course also plenty of mediocre and bad journals among the traditional journals, but these generally have the rankings and reputations that they deserve. The distinction between better and less good traditional journals has to be made, although it is admittedly not an entirely easy one.
With Open Access journals, however, the situation is different. Some OA journals have relatively high impact factors despite having clearly suboptimal peer review processes, publication ethics and control over conflicts of interest, etc. Large numbers of articles and special issues, along with Open Access per se, increase the chances of citation. So do exchanges of favours between different groups of authors ("if you cite our paper we will cite yours"), coercive citations, self-citations, etc.
With regard to MDPI, the picture is arguably a bit more complex than a simple "either or" narrative. Some journals have Editors who have high integrity and who appoint expert referees, who in turn perform very good and thorough reviews of the manuscripts. But there are also many reports of the exact opposite, as noted in the multiple threads which have been dedicated to that very topic here on ResearchGate.
Dear Mathias Wernbom, no, I was talking in general and not specifically to you. Reality is complex, and metrics and perceptions are quite different things. Science is self-correcting, so I am not as concerned about new publishing models as others who are more worried about their egos and about losing the power and influence that privilege their dogmas/paradigms/schools.
As an example, retraction rates affect high-impact journals more...
A very interesting question, indeed... and very difficult to respond at the moment.
Both situations are happening at this time: excellent papers by recognized authors are being published in MDPI journals (alongside others of poor quality), and "almost predatory" strategies can be detected (as a reviewer, I have already seen the publication of papers that I had rejected due to their very poor quality).
The majority of MDPI journals with an impact factor (IF) acquired it relatively recently (and it is very variable), but if we look at the MDPI journals that have been receiving an impact factor for ten or more years (such as the International Journal of Molecular Sciences or the International Journal of Environmental Research and Public Health), they show a stable or even growing IF history.
Another aspect to be considered is that OA is already a reality and probably the future of scientific publication, especially when we consider current initiatives such as the European Plan S. I have observed that many Elsevier journals, for example, migrated to OA during the last two years, but often the APCs and services (time to publication, editorial management, etc.) are worse than those of MDPI journals with a similar IF. As clients of a service, authors may become more demanding on these issues in the near future.
So, in my opinion, it would be very premature today to discard all the MDPI journals in the assessment of scientific production, since the area is changing.
For authors, I recommend maintaining a balance: first, publish some papers in recognized journals and, after that, keep MDPI journals as an option for those studies whose conditions (need for faster publication, target audience or topic, etc.) make it worth the risk-benefit of betting on these journals… and then keep your options open with this balance. More than ever, the future is very uncertain.
All the best,
Elena
Let me report my personal experience as reviewer for MDPI.
All articles are reviewed by at least 2 reviewers, and even by 3-4 in more complicated cases (I'm often asked to be the 3rd/4th in such cases). In the past, I've noticed some questionable reviews, but when the Editors and the Editorial Office were briefed about it, they intervened rapidly and quite severely against the "borderline" reviewers.
The only question mark is over the rejection rate. MDPI journals seemingly prefer to request repeated rounds of review until the paper is suitable for publication, where other journals would simply reject it; but honestly, the improvements usually prompted by these rounds of review are substantial, even for the more "difficult" papers.
Dear fellow researchers,
this section has some interesting thoughts on the ongoing discussion of OA and predatory journals (or is it publishers?).
I agree with Daniel A. Boullosa – space limitation appears outdated, as it goes back to having a printed version of a journal. However, without it, the number of articles appears endless, while the number of qualified researchers for review, and the number of high-quality reviews they are willing to provide for free, is limited. At the same time, OA journals broke down the two-sided payment model (authors and readers pay – which is somewhat insane, also considering the high amount of public funding that goes into producing published research), which obviously motivates the publisher to publish as many articles as possible. I also agree that the quality of articles in OA journals is sometimes questionable, but this is at least partly also the fault of established journals, which in recent years have launched OA versions of their titles and send out messages to authors like "your article is not suitable for XY but we will be pleased to transfer it to our OA title…".
Concerning quality, I can say that during the past years I have been receiving requests to review from OA and non-OA journals, and I have the feeling that editors tend to send out an increasing number of articles with considerable limitations in language, research question, and methods, to see if the reviewer is able to "make something out of it". I have started to return those assignments after receiving the article, even if I originally agreed to review. I have had some good experiences with Frontiers in Physiology/Exercise Physiology; however, it appears that their process and review platform design still make it hard to reject a manuscript.
I think we as a research community have some options to improve the quality of OA. First, we need to review for OA journals and provide fair and constructive reviews, but also turn down reviews of articles of unacceptably low quality, communicating this to the editors. Second, we need to change our publishing practice and how we think about OA. "If it is not good enough for a standard non-OA journal, send it to OA" should not be a general understanding, and neither should placing some "left-over" data or secondary (retrospective) analyses in OA journals ("they will publish it anyway"). Attempting to limit the impact of OA by excluding OA papers from the assessment of scientific production will not solve the general problem.
Boris Schmitz Thank you for making several excellent and important points, especially in the penultimate and last paragraphs in your post.
In particular, I share the impression that, when acting as a reviewer, it is more difficult in certain OA journals (including the one that you mention) than in traditional journals to bring about a Reject decision for manuscripts which are simply not good enough. Of course, this decision is ultimately for the Editor-in-Chief to make, and other reviewers may have different opinions about the merits of a given manuscript which may outweigh one's own, etc.
That said, it is arguably more difficult to bring about a Reject decision as a referee in an OA journal vs a traditional one, presumably because of the economic incentive on the part of the OA publishers to publish an article.
Again, MDPI is not alone. The Frontiers journals have also been surrounded by controversy with regard to high acceptance rates (at least up until a few years ago) and reports from Editors about the pressure to publish. Some Editors were even sacked for their relatively high rejection rates, according to the blog For Better Science.
Editor sacked over rejection rate: “not inline with Frontiers core principles” – For Better Science
One would hope that Frontiers have improved since then. Perhaps it is the case. In the Frontiers report from 2019, it is stated that the rejection rate for 2019 was on average 42% for all journals.
Frontiers | Report (frontiersin.org)
However, I could not find the figure for Frontiers in Physiology and I also had no luck in finding acceptance/rejection rates for the section Exercise Physiology. I think reporting acceptance/rejection rates should be mandatory for any serious scientific journal, so I am surprised that this information is not given.
That said, I would also be surprised if Frontiers in Physiology has a rejection rate which is markedly higher than the stated average of 42% for the Frontiers journals. Frontiers in Physiology has an Impact Factor of 3.367. This is very similar to the average Impact Factor of ~3.3 for a Frontiers journal that I calculated based on the figures in the 2019 Frontiers Report. So it seems reasonable to assume a similar 42% rejection rate (and hence a ~58% acceptance rate) for Frontiers in Physiology as for the average Frontiers journal.
For comparison purposes, the Journal of Applied Physiology reports an acceptance rate of 24% for 2020 and an Impact Factor of 3.044 for 2019. I.e., a ~76% rejection rate. Medicine and Science in Sports and Exercise has an IF of 4.48 and an acceptance rate of 27.8%. The European Journal of Applied Physiology has an IF of 2.58 and an acceptance rate of 40%.
There is always the same recurring issue. Traditional journals live on subscriptions; Open Access journals live on publishing fees. In both cases, editors and reviewers are unpaid. MDPI rewards reviews with discounts on publishing fees, yet this is not enough, and it only feeds the MDPI model itself. At both Frontiers and MDPI, authors are clearly very well catered for. Editors in particular – unpaid, and not rewarded in other ways (in my opinion, Publons and ReviewerCredits are not enough yet; when will there be an h-index for editors and reviewers?) – are therefore made to feel solely responsible for a correct review procedure.
This is an interesting topic, which has grown in importance since bibliometrics began to be used for assessment in academia, in particular for the career development of scholars, which (at least in Italy) is based mainly on bibliometric indices. Unfortunately, nowadays your ranking and the level of your indices matter more than what is written in your papers.
For instance, the indices sometimes do not take into consideration the specificity of your research within the academic sector in which you are included. The number of journals has tripled over the last decades, and OA journals have attracted a lot of scholars, since one of the indices taken into consideration is the number of published papers. Fortunately, among the indices there are also the h-index and the number of citations.
Acting as editor and reviewer for both OA and traditional journals, I often come across a lot of self-citations and/or wrong citations in both, and sometimes I have the impression that the cited papers have not been read by the authors who cite them, in both types of journals.
Going back to the topic, I think it will be very difficult for evaluators not to take into consideration OA journals from publishers like MDPI or Frontiers that are indexed in WOS or SCOPUS, and sometimes even ranked in Q1. More simply, the evaluation should be based on a MIXED METHOD, crossing quantitative indices with a qualitative assessment of each paper, considering the original contribution to the field, the advancement of knowledge, and the methodology, and taking on our responsibilities as evaluators for career development in academic evaluation. IN OTHER WORDS, we have TO TAKE RESPONSIBILITY FOR THE EVALUATION when we are ASSESSORS, EDITORS OR REVIEWERS.
Open access journals can increase the democracy of science and make it more transparent; however, this can only work if the individual responsibility of editors and reviewers is assumed and considered. Please read the names of the editors and reviewers on papers that have big flaws and keep them in mind!!!!
IN OTHER WORDS, IT IS OUR RESPONSIBILITY!!!! Peer review only works in science if scientists conscientiously review the manuscripts in both OA and traditional journals!
Dear Mathias Wernbom & Boris Schmitz, sorry but I disagree with some of your points. I have experience as an author, reviewer and editor on both sides of the coin.
1) There is no link between rejection/acceptance rates and IFs in any journal in any field. There is no logic in thinking that a journal is good because it rejects lots of papers (including, e.g., excellent papers that go against the paradigm of the editor who takes the decision). Rejection rates mostly depend on the available space, and OA journals do not have the space problem that print journals have. The IF (with its strengths and limitations) will show the trend for any journal over the next years.
2) I have received/recommended rejections in MDPI, Frontiers, and other OA journals. In fact, I have 2 published papers that were previously rejected by MDPI journals. When I endorse a publication in Frontiers, it means that I am comfortable putting my name on the cover of the article despite its limitations (sorry, but all studies in the biological/health sciences have limitations; this is not math). When I am not, I simply withdraw from the process. I cannot understand your difficulties in finding the buttons for rejecting/withdrawing in the Frontiers editorial system.
3) We citizens always pay the bill. In the traditional model, the bill is paid by governments with money from taxes. In OA, we scientific citizens pay the bill with our grants/salaries. However, MDPI and PeerJ give you discount vouchers for reviewing for them (not sure if others do the same). This is an excellent incentive for collaboration. I have recently accepted to be AE of an MDPI journal because I receive one waived article and discounts for other accepted manuscripts. After receiving an invitation to apply for an AE position at Frontiers, I did not consider it because they don't give me anything. Further, we now have one manuscript under review in an MDPI journal with a good IF that, if accepted, will be published for free after using all the discount vouchers we authors had in our accounts. If you prefer the other model, OK, be happy.
4) Finally, I have several times detected flawed articles in your respected physiology journals on topics in which I have expertise. In fact, I have sometimes sent letters to the editors because of this. Sorry, but your respected journals are managed by humans and not by perfect machines. Further, several times I was surprised to be invited to review bad manuscripts for these journals. The fact that retractions are more often seen in big journals is not a coincidence...
Best of luck,
Daniel
Dear Daniel A. Boullosa ,
thank you for your reply and opinion.
With regard to your point 4: this is absolutely true, and I have done/experienced the same! However, I do not understand the "your respected" in your answer. I did not take any sides in my comment. And personally, I care more about content than about impact. Concerning your point 2: I know perfectly well where the "reject" button is in Frontiers. However, if I remember correctly, it was not always there! I had to look around for a while, but found this older blog post at https://neuroconscience.wordpress.com/2016/01/15/is-frontiers-in-trouble/
with an entry by O.C. Schultheiss describing exactly what I remembered: "Not my experience as a Frontiers review editor at all. For several papers, I only had the choice of withdrawing from the review process completely to make my fundamental problems with a submitted paper known — a “reject” option simply did not exist for a long time. I always explained my reasons in a separate communication to the editor and that my withdrawal should be counted as a “reject”. Maybe Frontiers is changing this now, and I think it would be a step in the right direction."
Thanks Boris Schmitz, and sorry if I wrote something offensive. It was not my intention. I was just trying to capture the "feeling" about traditional journals from previous posts. As I get older and more experienced, it becomes easier for me to see flawed published papers in respected journals, so I cannot accept the statement that only OA journals publish poor papers.
I am not sure whether Frontiers changed their layout for reviewers, but every time I decided to withdraw from a review process, I simply did it. If someone decides to put his/her name as editor or reviewer on the cover of a poor paper, it is his/her problem and not mine. As an author myself, I have put my name on good, average and not-so-good papers for different reasons and in different contexts, and that is not a crime...
Very interesting points have been brought up. I think the discussion here clearly shows that the current publication system is about to collapse. This is because, in its present form, science is understood as a business model rather than a public interest. As a consequence, it is quite obvious that publishers aim to accept as many papers as possible, and in my opinion this pressure is quite similar in OA journals and in rather traditional ones. Actually, in most traditional journals of big publishers (e.g. Springer, Elsevier, etc.) the number of accepted papers is limited not only by printed issues but much rather by the resources made available by the publisher (i.e. the financial margins, how many handling editors are assigned to that journal, etc.). Despite tremendous revenues, in these cases too academic editors and reviewers mainly work for free. A good example is also edited books, in which the share given to authors is minimal compared to the prices these books are sold for (and consequently the revenue).
What is unique to classical OA journals, in my opinion, is the way the resources are used. For example, at Frontiers quite a lot of papers have recently been published that involve 6-10 reviewers! This is because editors send out numerous invitations in an attempt to get it done quickly (and obviously also because of the high number of declined reviews). Consequently, papers are revised back and forth (often due to contradictory reviewer comments), and it becomes even more difficult to judge whether papers should be rejected or accepted if the votes differ between the reviewers. Obviously, this procedure wastes resources at the cost of academics (i.e. 5*2 reviewers would be enough for 5 papers instead of one), especially because when accepting the reviewer invitation it is not shown how many reviewers have already agreed. Also, rejecting papers is possible, but in this case the paper is no longer counted as your editorial contribution. However, visible editorial contributions are the only incentives for these commercial reviews, and this probably discourages reviewers from rejecting papers (after having spent quite some time thoroughly reading the paper). As such, I think it is implausible to assume these journals are primarily looking for high-quality papers (even though this certainly does not exclude that good papers are published in these journals, as has been mentioned earlier).
I think the only solution lies in smaller, university-based journals where the publication charges are marginal and basically just cover the cost of maintaining the server, i.e. non-profit journals. The OSF framework shows that this may well be possible. Since the majority of editorial work is done by unpaid academics anyway, this system could also be adopted for peer-reviewed publications. In that case, the IF discussion would no longer exist (i.e. distinguishing between apparently better and worse journals), since the key is a thorough peer review anyway (rather than being accepted in a prestigious journal). Obviously, it is difficult to be the first to publish in these types of journals when evaluations are based on rather classical scores (IF, etc.)...
Dear Daniel A. Boullosa, with regard to your points 1-4:
1) Although I have implied that I believe there is some sort of relationship between the IF (and/or, more importantly, the overall quality of the journal) and the rejection rate of the journal in question, I never stated that such a relation is a simple one, or without exceptions.
For example, it is well known how even some traditional journals have suddenly made drastic jumps upwards in Impact Factor. I recall one such journal which jumped from about 4 to around 13 in just one year!
This is of course ridiculous. It also suggests that some journals, including traditional ones, focus on IF for its own sake and resort to various measures for raising it. If IFs are to be used, the average IF over a number of years is probably a more reliable index, although this too obviously does not tell the whole story.
As for the claim that "there is no link between rejection/acceptance rates and IFs in any journal in any field", this is a rather strong statement. I disagree with this view, even though as noted above I believe that the relation is not a simple one.
As for Impact Factors of Open Access vs traditional journals: the explosion of OA journals, pressure from journals on authors to cite other papers from the same journal (a problem especially in predatory journals), etc., make comparisons of IFs between OA and traditional journals difficult and perhaps even misleading. An argument can be made that OA and traditional journals should be placed in separate categories with regard to their IFs.
2) First, I would like to emphasize that there are reports of pressure on Editors from some OA publishers, including the ones we are discussing here, to let a manuscript through. And the high acceptance rates in many OA journals mean that chances are high that a poor or mediocre manuscript is published anyway, despite recommendations of Reject from the referees. As noted previously and in other threads here on RG, there are many Referees and Editors who have reported this very problem in journals belonging to the big OA publishers that we are discussing here.
Second, I have indeed withdrawn from reviewing a manuscript for one of the OA journals and publishers that we are talking about, so I have done that while at the same time clearly stating that the manuscript in question was hopelessly poor. There was no Reject option available at the time. The manuscript later appeared in print anyway.
3) To be clear, my preference is for journals with a well-functioning peer review process, regardless of whether it is an OA or traditional journal. I reiterate what I wrote in a RG-thread that was started by Erik G Hansen on the MDPI journal Sustainability.
An example of an openly accessible peer review which accompanies the final paper in an Open Access journal, in this case eLife. First, though, a disclaimer: I have no ties with eLife and no collaborations with the authors; I just happen to think that this was an interesting paper and, more importantly for this discussion, a nice example of what an openly accessible peer review could look like.
The paper, and at the end, the peer review of it:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5338923/
If I saw reviews like this accompanying a new scientific article that I read, both from traditional and Open Access journals, I would be considerably more confident in both the paper and the journal that published it.
4) I did not say that the journals that I referred to are "perfect", I merely referred to them as among "the better traditional journals".
Furthermore, I have also been on the receiving end of (in my view) bad/strange decisions from some of the higher ranked journals in physiology. E.g. Editorial Rejects which have been very poorly supported.
However, I maintain that I generally do not see pure horror stories appearing in these journals and I also see a clearly lower share of "so what-articles" in them.
Boris Schmitz I also recall that Frontiers did not have a Reject option, at least it was not visible to me when I needed it a few years ago. Good to hear that this has now changed.
Dear Mathias Wernbom
Thank you for your clarifications. It is good to see that we all agree about the complexity of the process. My point is about avoiding generalizations and accepting that every publishing model has its strengths and limitations. We are all truth seekers and should remember that our findings are merely portions of the truth, independently of where they are published.
Since the JCR IF is the "gold standard" for record evaluations in most countries, it is obvious that some problems come with this metric. But it is unacceptable to say that one journal is valid and another with the same IF is not, only because of the publishing model.
Cheers,
Daniel
First of all, I am pleasantly surprised by the number of important researchers participating in this interesting discussion. In my personal opinion, the issue with MDPI is the tremendous number of managing editors involved in referee selection. In fact, researchers who are not experts in the field of the paper are often invited to review, which results in low-quality reviews. However, the name of the editor has recently begun to appear in the final version, which to me already seems an improvement in the final quality. Another important step would be to show the referees' names in the final version, as the Frontiers journals do. Lastly, in order to guarantee further improvements in quality, there should be more control over the invited referees, and editors or guest editors should have the possibility to choose the invited referees and discuss them with the managing editors.
I also see the problems that arise from the inflationary expansion of publication numbers at e.g. MDPI. Such an expansion can only be at the expense of quality.
In our discussion, however, I see another aspect that has not been mentioned so far. It may be legitimate to adapt the evaluation criteria for MDPI when it comes to the accumulation and numerical evaluation of scientific achievements. But we should not consider completely ignoring such publications, or even counting them as a malus.
As an example, I would point to our own methodologically oriented publication (necessary for further work in the field) from our working group in Sensors/MDPI - published just before the exponential increase in publications started there.
There is no broad forum for such a technical topic in the "regular" sports science field. On the other hand, engineering has no publication system comparable to that of the natural sciences - to which we could have switched with the paper. Apparently, engineering careers are far more likely to be evaluated by presence and presentations at congresses and the corresponding proceedings.
As a consequence, for topics that are not in the thematic mainstream (such as technical papers in sports science), sometimes only exotic "regular" journals or OA journals are left in which to publish at all. However, the community in the sports sciences (and most other natural and life sciences) requires publications as proof of scientific work and does not really consider pure congress papers or monographs at all.
Francesco Campa mentions an important aspect: too often, people who are not experts in a given area are invited to review a manuscript. Contributing to this problem is the fact that people who are experts increasingly find that they simply do not have the time to be referees of the many manuscripts that they are being invited to review. The main reason is rather obvious: the exponential increases in the number of articles and journals in almost all areas.
Some years ago, I almost never declined an invitation to be a referee as long as the manuscript was within or reasonably close to my areas of expertise. Recently, however, I find that I decline about 67%-75% of all invitations. I simply do not have the time to do a proper thorough review of more than a small fraction of all the manuscripts that I am invited to review (unless I am willing to sacrifice my own precious research time).
I have also observed that the percentages of "so-what manuscripts" (and articles) and decidedly non-novel investigations (including "kicking-in-open-doors" papers) have markedly increased in recent years. Although I do see the value of replication studies and investigations which attempt to confirm findings of previous studies, I argue that vast numbers of papers in today's science are redundant and could thus be deleted without any appreciable loss to the overall body of knowledge.
I am reasonably sure that the above scenario of too little time to be a referee more or less applies to many if not most scientists here on RG. So again, too often a manuscript ends up in the hands of a non-expert.
In addition, with exponential increases in the numbers of papers and even greater relative increases in more-or-less unimportant work, it is increasingly difficult for the more important and significant scientific works to get through to the reader, and of course also for the reader to stay up-to-date in any given field.
All in all, this is not a good situation. For these and other reasons, I am not in favour of the current state-of-affairs of an ever-increasing number of Open Access journals and their typically enormous numbers of "special issues". I argue that we have too many already.
Sometimes it is smarter not to publish secondary results and invest the time spared in more complex studies or just in increasing the statistical power of the studies undertaken. Should not this approach be encouraged and rewarded?
Dear Jose A Calbet,
it's often mass before class, and secondary results bring more first and last authorships for the different people involved. I have also seen this a lot at conferences, with two posters reporting on the same study but on different (sometimes small) aspects. This is where it starts.
Jose A Calbet I have not seen this happen or heard about such a phenomenon, which doesn't mean it isn't happening, of course. The typical focus is on the JCR impact factor (IF), 5-year IF, h-index and h5-index of the journal. I have seen, first hand, some of the "traditional" journals boost their IF by requesting authors to consider citing papers published in the very journal where the paper is being reviewed. Aside from affiliation with a flagship professional society (most traditional journals), the IF and whether a journal is indexed in PubMed are often proxies for legitimacy in our field. I think, increasingly, the "worthiness" of a publication is based upon the IF or some other metric, because who doesn't love data, it's objective, right? However, it would be interesting to see what public opinion would be on a publication in a journal with a lower IF but a longer history/more tradition vs. a publication in a newer but higher-IF open access journal. I think there are two very real issues: 1) the traditional journals are the same "size", but the number of researchers and the number of papers per researcher have increased dramatically, creating a bottleneck (I recently waited 2 years to go from approval to "in print" at a traditional journal); 2) the expectations at the traditional journals seem to be increasing, perhaps as a natural evolution, but this might also signal a desire to have a high rejection rate and/or relate to the issue above. I think the newer journals can be a pressure release valve, reducing the pressure on the traditional, and limited, number of journals with limited paper outputs. However, there are clear and flagrant abusers in the open access system, which can lead to papers such as “What’s the Deal with Birds?” or "Uromycitisis Poisoning Results in Lower Urinary Tract Infection and Acute Renal Failure: Case Report". Open access can be a solution but needs to be managed well (I think Physiological Reports is a decent example).
Really an interesting discussion, thanks to all.
Regarding the question of whether articles published in MDPI journals could be excluded from the assessment of scientific production: this is a big and serious question. However, several years ago I began to doubt the prospects of some journals of this publisher. The reason: among the 4 reviewers, one or even 2 reviewers recommended rejection of my colleagues' papers, but those manuscripts were published anyway.
This is an interesting issue.
As long as these journals are not excluded from the SCI, I do not see what objective justification could lead to their being excluded from the CV. In fact, it is possible that, in this debate, the opinion of other publishers who feel harmed is carrying weight. MDPI can be a game changer and favour an evolution of the scientific publishing model, which is by no means free from inefficiencies.
It is not reasonable to propose that the JIF continue to serve as an evaluation criterion, except for those journals that we do not like. The only possible alternative is for the JIF to cease to be a reference in evaluations. Are we going to abandon this metric? What would be the substitute? A subjective assessment of each article by each evaluator or each committee?
Moreover, if these publications are excluded from the evaluation of CVs without an objective basis, the evaluation committees will probably have to face lawsuits. It's a complex issue, in any case.
I've a real problem with predatory OA publishers and journals, but the issue of defining what makes a particular publisher/journal 'predatory' is not clear cut. In this regard, the publisher MDPI has a mixed track record and reputation. However, I cannot help but think that some of the reaction against "predatory" OA publisher is fuelled by the established publishers and we do need to be careful before we leap to judgement.
Perhaps we all need to take a lesson from the 'responsible metrics movement' and recognise that it is simply illegitimate to judge an article solely on the journal it is published in (http://www.open.ac.uk/blogs/the_orb/?p=3242). It follows that a blanket ban on recognising articles in journals from a particular publisher cannot be right unless the publisher is clearly fraudulent. I've seen and heard of examples of very poor practice in MDPI journals but, as noted by others, not all MDPI journals are implicated and bad practices sometimes appear in more established publishers' journals as well. The issue is how the publisher responds I think. I don't know enough about MDPI to comment.
Some Chinese universities pay special attention to certain MDPI journals. Of course, we have to admit that such OA journals are quite efficient and effective. In the field of economics, one paper published in 2021 (titled "Informational Herding, Optimal Experimentation, and Contrarianism") in the top field journal, RES, took 24 years (first submitted: Oct. 1997; accepted: Jan. 2021; note that the record has now been revised to show Aug. 2005 as the date the first revision was received). Really incredible!!! There are some tradeoffs there.
Marco Aurelio M. Freire
Peng Nie and Maria Elena Crespo-López are making some valid points but are conflating a number of different issues. This is not about open access vs traditional publishing. There are now many established and respectable OA journals and publishers about; often these are owned by the established publishers (BioMed Central, for example). Speed of publication and OA shouldn't be confused either. Online publishing has changed the product fundamentally, and delays are primarily about the editorial process. The worst experiences I've had in recent years have been with an OA journal.
In answer to David Blanco Fernandez: the JIF should never be used to judge the quality of a paper - it was never intended for that, and there is a strong movement to put an end to its use. However, the issue at hand is about a specific exclusion of papers with a particular publisher based on predatory practices. See my earlier post if you are interested in my take on that!
One issue to consider is that the academic community has possibly contributed to the explosion of new reputable (and less reputable) journals through the old adage 'publish or perish'. Indeed, I have heard an academic state that they had published in a predatory journal - I can only assume that this was a genuine mistake due to the similarity in journal names. This same issue (similarity of names), and a long list of publications, will make checking of CVs as in Jose's original comment very difficult. Perhaps, as with the referencing of books, we need to consider including the publisher in journal citations...
Some new information after the release of 2020 Impact Factors.
In 2020, Spain was the 11th country by number of papers published in Science and Nature. In 2020, China was the leading country publishing in Sustainability (a popular MDPI journal) with 6579 articles, followed by Spain (2500 papers) and then South Korea (2230) in third position. The International Journal of Environmental Research and Public Health (IJERPH) is another popular MDPI journal. In this journal, China was the main contributor in 2020 with 3856 papers, followed by the USA (3227) and Spain (2081). Why are there so many scientists from Spain publishing in these two MDPI journals? Denmark published 195 papers in Sustainability in 2020, ranking number 37 on the list of contributors. Should countries like Denmark take measures to avoid climbing in this ranking? Why is Sustainability so attractive to Spanish researchers and so much less so to the Danish?
Answer to the last question:
Because it is easy to publish there and a short path to climbing in academia (call it "acreditación"). Sorry for being so honest.
I totally agree with the above comments: "publish or perish". The pay-to-publish model has introduced a clear conflict of interest into academic publishing. Many Spanish projects have funding to pay for the publication of articles, making it easier to migrate to these types of journals, where rejecting articles does not produce any income.
Just an update: I've rejected the first 2 manuscripts edited by me in the MDPI journal IJERPH with 4 and 5 reviewers participating in each case.
The editorial system and the support by the editorial assistants are simply much better than those I've seen in other OA journals. Here you have another reason for understanding such rapid editorial processes.
Meanwhile, I've received the acceptance of a manuscript in the same journal, and I will not pay because I have a full waiver thanks to my role as AE.
I just think that MDPI is ahead of its competitors.
Journal citation reports and the definition of a predatory journal: The case of the Multidisciplinary Digital Publishing Institute (MDPI)
https://academic.oup.com/rev/advance-article/doi/10.1093/reseval/rvab020/6348133
Diogo Henrique Constantino Coledam, Krzysztof Nowel ,
The author claims that there is no "uniformly accepted criterion for identifying predatory journals", but the conclusion is that MDPI journals are "predatory". It sounds inherently subjective to me.
This conclusion is based mainly on the number of citations received from MDPI journals. Perhaps the reason is that all MDPI journals are open access and, consequently, easily accessed by a higher number of researchers? Perhaps there are reasons related to the rapid publication process and the open access policy that make MDPI so attractive to researchers?
In my humble opinion, the conclusions in this paper could be derived from available data, but the research itself is designed to meet a preconfigured conclusion, avoiding an in-depth analysis of the causes of MDPI's success.
Research and reflection on the evolution and soundness of research publication processes are part of scientific thinking and are important for the construction of science and its appropriate communication. The article reaffirms in its conclusion what was intended from the beginning. A case study (of a publisher) can be flawed from the outset (even, in my opinion, bordering on the limits of scientific ethics). Most of us assume that, in general, the strongest scientific conclusions are drawn from randomised case-control studies, with a good and scrupulous methodology for obtaining the data to be analysed. Without precisely defining a concept (predatory), it is difficult to draw conclusions about that concept.
All this does not invalidate the fact that we must be careful to ensure our good practice and vigilant in our work as scientists.
Thank you Jose A Calbet for raising this interesting discussion.
I have experience with MDPI from several viewpoints, since I am an author, reviewer, guest editor and member of the editorial board for one of their journals.
In my opinion, the discussion with MDPI requires a non-simplistic approach, so I will try to put in context my experience with them.
First, I must say that there are many good professionals acting as editors and reviewers for MDPI journals. My colleagues and I have published in several MDPI journals (mainly Water, but also in Soil Systems and Hydrology), and I received very constructive and thorough reviews most of the time, leading to an improvement in the quality of the papers. However, it is true that, due to the speed of the process, some reviews were not satisfactory.
As an editor, I have received a high number of manuscripts, and some of them do not have enough substance (because they are badly written or because they seem too preliminary) to be published in a scientific journal. In those cases, I rejected the manuscript… and here is when the problems with MDPI offices begin. A couple of times, I received emails from the Editorial Office of an MDPI journal asking me to reconsider my decision and trying to force me to accept the manuscript. This indicates that people at MDPI are more interested in getting the money than in publishing good science.
Another aspect that must be taken into account in this debate is that the number of MDPI journals is increasing at a fast rate, as are the predatory techniques they use for attracting authors and pressuring reviewers. This is like a snowball that keeps growing and, at a given moment, the system will collapse.
On the other hand, there are many good articles published in MDPI journals, so the debate about removing these journals (or articles from these journals) from the evaluation of scientists is clearly unfair, from my viewpoint.
As I said, there are many good scientists acting as editors and reviewers for these journals that are trying to do their best for increasing the quality of the research published within these journals.
Last, but not least, some other “traditional” publishers practice bad publishing habits, likely on a smaller scale, but I have seen several examples in which editors of renowned journals accepted papers because they came from their friends. Should we remove those papers from the evaluation of researchers? How do we detect those bad publication habits?
I agree with Jose Manuel Miras-Avalos;
moreover, I think that the real problem is not what MDPI is doing, but what all (or almost all) publishing groups are doing.
Since publishing something is, for them, a source of great economic profit, it is inevitable that they are pushed to accept as many manuscripts as possible.
So the problem is upstream: we produce scientific data, often using public funding, in non-profit organizations such as universities, and then these data are used to make huge profits?
Although I do not have particular sympathy toward MDPI (where I have not published an article so far and never will), it would seem relatively unfair to me to single them out, when other open access publishers (like Frontiers) have practices that are not very different, not to mention the biggies like Elsevier, Springer, Wiley, etc.
What we should do instead is re-evaluate the entire scholarly publishing business, which to me seems to have taken a wrong turn many years ago, when commercial publishers started publishing for profit the kind of journals that scholarly societies used to publish until then. We, the researchers, did not see clearly at that point that the end result of this transition would amount to us working for free (as editors in some cases, as associate editors most often, and as reviewers definitely) to make available to them material on which they can then put their copyright, and to which they restrict access unless we pay large amounts of money, which goes almost entirely into the pockets of their investors. In a nutshell, we the researchers are helping stockholders of big publishing houses appropriate public money and get even richer, in exchange for a minute amount of work on their part. The situation has gotten worse with the advent of open access journals (some of which charge as much as $9500 per article!).
I think that it is time we change course in this area. We need to take back control of the publishing of our own work, and we need to stop wasting the (woefully) little money the public gives us to do research. If universities and research centres had any vision in the matter, they would launch their own not-for-profit journals, or encourage the editorial boards of existing journals to switch to a not-for-profit mode, and in so doing avoid much of the waste of financial resources. In the meantime, I personally plan to submit manuscripts only to non-profit journals, or at least to ones that are associated with, and contribute financially to, scholarly societies.
I really liked the discussion. Very informative and constructive. To me, the question is how to overcome the bad practices of MDPI, such as giving reviewers only 7 days to make a critical appraisal of a paper, etc. So it is not about destroying the whole thing that already exists, but rather about how to improve it. The second aspect may be how to make the quality of science, and not money, the top priority. This is the central problem of all open-access publishers, not only MDPI.
A recent article has provided evidence suggesting that self-citation is a bigger problem in MDPI journals than in the leading journals listed in the Journal Citation Reports.
https://academic.oup.com/rev/advance-article/doi/10.1093/reseval/rvab020/6348133
Hi Mathias Wernbom
Clarivate has its own algorithms to check this. Therefore, MDPI journals are within JCR's rules, as any journal with an abnormal self-citation rate would be automatically removed. Alternatively, it may be more related to clusters of authors and topics than to induced self-citation, which I've never seen there as editor, author or reviewer.
Cheers,
Daniel
Hi Daniel A. Boullosa ,
so contrary to the conclusions of that paper, you are effectively claiming that MDPI journals do not have a problem with self-citations compared to leading journals in the same fields listed in JCR (such as Cell, Nature Reviews Molecular Cell Biology, etc), as they would presumably otherwise be thrown out of JCR.
Thank you for enlightening me.
Cheers,
Mathias
Mathias Wernbom I don't remember exactly, but JCR has a percentage tolerance for self-citations. So, self-citations are allowed up to a level that is acceptable for any journal in any field.
Some years ago, JCR suspended 4 journals which had a self-citation scheme among them that was easily identified by the algorithm of Thomson Reuters (the owner of JCR at that time). Presumably, Clarivate uses the same algorithm.
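To make the idea of a tolerance threshold concrete, here is a minimal sketch (in Python, with made-up numbers and a purely illustrative 25% threshold) of what a self-citation check boils down to; it only illustrates the arithmetic, not Clarivate's actual algorithm, whose details and thresholds are not public.

```python
# Illustrative sketch only: NOT Clarivate's actual algorithm or threshold.
# Assumption: for a hypothetical journal we know how many of the citations
# counted toward its Impact Factor come from the journal itself.

def self_citation_rate(citations_from_self: int, citations_total: int) -> float:
    """Fraction of a journal's counted citations that come from the journal itself."""
    if citations_total == 0:
        return 0.0
    return citations_from_self / citations_total


def flag_for_review(citations_from_self: int, citations_total: int,
                    threshold: float = 0.25) -> bool:
    """Flag a journal when its self-citation rate exceeds a tolerance threshold.

    The 0.25 value is purely illustrative; the real JCR checks do not use a
    single fixed number and also look at citation stacking between journals.
    """
    return self_citation_rate(citations_from_self, citations_total) > threshold


# Made-up example: 300 self-citations out of 2000 counted citations is below the
# illustrative tolerance, whereas 900 out of 2000 would be flagged.
print(self_citation_rate(300, 2000))  # 0.15
print(flag_for_review(900, 2000))     # True
```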
The reasons for the elevated self-citation rate described in that paper (which I did not read) may be different from bad practices. That's my suggestion. It would seem that you and others are cherry-picking all the time to demonstrate that MDPI is the Demon's publisher on Earth.
Thank you very much for this very informative discussion. Personally, only one criterion is enough for me to get an idea about an open access journal: the rejection rate!
As a reviewer for several journals (including some MDPI journals until recently), I have rarely recommended against the publication of a manuscript, and the few times I did, the other reviewers were as skeptical as I was about the ms, so it was eventually rejected by the journal's editor.
Yet, in my experience as a reviewer, things are different in MDPI journals: some papers were published even though I recommended rejection or revision (because the other reviewer considered the papers were ok).
I do not know whether MDPI is "the Demon's publisher on Earth" or not, but I do know that the reviewing process of MDPI journals sometimes leads to very surprising outcomes. In other journals, I have never seen a paper published while I recommended a major revision or a rejection, and my comments are generally in line with other reviewers' comments.
From this experience, I became quite skeptical about the quality of the reviewing process at MDPI. While reviewers do not necessarily need 2 or 3 months to properly evaluate a ms, 10 days or so are definitely not enough for a high-quality evaluation. While authors might not need 2 months to perform a major revision of their ms, 10 days or so are definitely not enough to substantially improve a ms.
Thus, IMHO, the MDPI reviewing process does not really allow a ms to be improved, which would explain why there are both "good" and "bad" papers published in these journals.
My quick reply is: why only MDPI?
Other publishers have done the same.
They have biases toward certain aspects.
I have come across, several times, a journal that only favours research from Europe, even though the research has no novelty and does not fill any profound research gap.
Dear M. Rizaludin Mahmud: please see my comments from June 2 and June 5 with regard to Frontiers. And I maintain that this discussion should be broadened to include OA publishers other than MDPI, including Frontiers.
In my view, the current and recent publishing practices of Frontiers are just as open to question as those of MDPI.
Dear colleagues, I have read all the comments, which are very constructive, and the answer is maybe a combination of many of them. I have heard the same thing in relation to MDPI in Spain. The key problem, from my point of view, is when a CV contains only publications in MDPI journals, usually in the same journal, and the Committees for Accreditation do not accept it. However, it is not legal to question this, and one can file a complaint, because Clarivate & Elsevier apply the same rules for assessing and following up all indexed journals. If the criterion is a paper published in a Q1-ranked journal, Committees cannot remove any specific journal; rules are rules, Jose A Calbet.
In my country, when scientific productivity is assessed, what matters is the databases in which the journal is indexed, not the publisher or the name of the journal.
Is the MDPI journal indexed in the WoS Science Citation Index Expanded and SCOPUS? Does it have an IF? Yes! Then this is a fully legitimate journal. It is legitimate because it has been accredited by these indexing databases. That is what these databases were created and are used for!
I am not sure I follow how a university, a group of individuals, or a single individual can say that a publisher or a journal is unacceptable if it has been accredited and ranked by this acknowledged system, which represents a basis of modern science.
I also do not follow why the MDPI working model is any different from any other OA journal model. They all work the same way, so there is no reason to single out MDPI and its journals.
If someone thinks that this OA model is not working, then the problem is far greater than we think: then the OA model itself is flawed and the indexing databases (WoS, Scopus) are not doing their job.
I personally have published 2 papers in MDPI journals. Each time, reviewers needed over 2 months for the initial round of revision and then another month for the second round. Both times, the comments were very constructive. Both times, we were given a 100% waiver.
Thank you for all the responses. Such an interesting discussion made me realize that, although there are different opinions, the higher IF also suggests that there are many excellent articles. My recent experience reviewing MDPI manuscripts makes me feel that the publisher seems to be moving onto a path of correction, such as allowing sufficient review time and a more regular (normal) article processing period.
Good news. This may signify the start of some positive changes in the editorial policies of most OA journals:
https://blog.frontiersin.org/2021/09/21/former-mdpi-ceo-dr-franck-vazquez-joins-frontiers/
MDPI broke the rules and the competitors have understood it...
Thanks for the answers, I agree with M. Rizaludin Mahmud.
In any case, such an evaluation should be done by the Ministry of University, which in Italy has established by law that the values of the H-index, citations and number of papers must be taken from Elsevier's Scopus and ISI Web of Science.
Shall we talk about it?
bravo Spain! excellent analysis https://asepuc.org/wp-content/uploads/2021/10/210930_Openaccess.pdf
PS. More and more facts are emerging about the relatively low standards of MDPI and their unethical practices strongly focused on making money, e.g., their tricks to pump up IFs:
- https://twitter.com/search?q=mdpi%20predatory&src=typed_query&f=live
- https://academic.oup.com/rev/advance-article/doi/10.1093/reseval/rvab020/6348133
- https://paolocrosetto.wordpress.com/2021/04/12/is-mdpi-a-predatory-publisher/
- https://retractionwatch.com/2020/06/16/failure-fails-as-publisher-privileges-the-privileged/
- https://widgren.blogspot.com/2019/02/is-mdpi-serious-publisher-or-predatory.html
- https://forbetterscience.com/2020/12/29/mdpi-and-racism/
- https://en.wikipedia.org/wiki/MDPI
- ...
Even the famous Jeffrey Beall is no longer afraid to openly call them a predatory publisher: https://twitter.com/Jeffrey_Beall/status/1376534050656018435.
Dear Krzysztof Nowel, in Spain ANECA (our quality agency) tried to analyze the MDPI (and other open-access publishers) phenomenon (the first link you posted), but their conclusion is that only some MDPI journals show an "abnormal" behavior regarding the increasing number of publications and self-citation. In fact, it was also revealed that other important non-open-access journals show the same behavior. At this moment, a black list, a brown list and a red list have been developed, including the journals of any publisher with abnormal behavior.
Dear Alfonso, thanks for your reply, but I am afraid you might be somewhat biased on the MDPI topic, as might other people with many publications in this factory. I refer you to the current facts, not to more or less biased opinions. For example, do you know that MDPI has 40,000 special issues this year? Yes, I am not mistaken: 40,000, not 40! And, one can assume, many more predatory spam emails inviting scientists, more or less suitable, to be authors, reviewers, and/or guest editors of these 'special' issues! If this is not a predatory practice focused mainly on making money, then what is?
Dear Krzysztof Nowel, thank you for your reply. My comment was not my opinion; it was just an attempt to better explain the content of the report for any researcher who is not familiar with the Spanish language. You can check what I said on pages 87-112 of the report. Sorry if you consider my contribution biased; that was not my aim.
Dear Alfonso, that's ok, thanks for your contribution to the discussion. I just consider predatory MDPI a cancer, and sometimes I react too emotionally about this money-making machine, which is fatal to science.
Krzysztof Nowel, thank you for your understanding. Most of us want the development that Science deserves, free of economic interests. These publishers are taking advantage of an outdated publishing system, which usually takes months for a first review and sometimes years until the manuscript is published. I am not in favor of MDPI, but the system needs an update in line with these times.
As long as the trend is for researchers to publish (super fast) or ... perish, the publishing system will be increasingly questionable.
After all, publishers run their own business and have no interest in science.
When I was a young PhD, a serious experimental biologist published one or two papers a year. Today, if you don't publish 3-4 or more, you lose funding and credibility, and maybe you don't even have a career.
Either this unhealthy trend is reversed, or it will get worse and worse.
Canadian universities appear to be in favor of MDPI, and professors are publishing more papers in MDPI journals. For example, the University of Toronto published only 686 papers in MDPI journals in the 10 years from 2010-2019, but 842 papers in MDPI journals in the last year alone (and counting). There are two contributing factors. Firstly, the quality of MDPI journals has significantly improved. Secondly, the burden of APCs has been much reduced. For example, my own university offers 50% reimbursement for publishing in MDPI, so I am encouraging all my students to publish in MDPI journals. The journal impact factor will (unfortunately) continue to play a role when researchers choose journals to publish in. I have little doubt that MDPI journals will have still higher impact factors next year. For example, I published a paper in Viruses early this year (Article Domains and Functions of Spike Protein in SARS-Cov-2 in the ...), and it has already recorded 50 citations according to Google Scholar.
Krzysztof Nowel, I share with you the same feelings about MDPI.
I receive several emails inviting me to publish there for a trifling 1800 Swiss francs (CHF)! In my country, Brazil, with the economic devaluation of our currency, the real, this amount could pay a doctoral student's research grant for approximately five months. A tremendous nonsense. Still, there are those who think it is fair to exchange ~18 manuscript reviews for a waiver to publish an article in some journal of this publisher.
Only MDPI? How about PLOS and Frontiers? If this blame is based just on open access charges!
Traditional Journals vs MDPI?
Well, in my recent experience with some Elsevier journals, the first question in the submission process is: Open Access or subscription? When subscription was checked, I am still waiting for an answer (for over a year now). Looking at the published articles, it is obvious that the review process for Open Access is much faster (judging by the submission dates). With MDPI, authors find out the result in a very short time. An obvious advantage... And which articles have priority in traditional journals: those from established authors or from no-name authors? For a no-name researcher coming from a no-name university, MDPI offers the opportunity to submit an article and receive feedback, either positive or negative, within a reasonable time... this is also an obvious advantage.
Quality of articles... Yes, you can surely find bad articles in MDPI journals... but the same story is present even in traditional journals...
P.S. I have not published anything with MDPI.
Hello, I have commented once on this thread, but I think the topic is still going and we need to discuss this issue as a community.
I believe there are fewer and fewer differences between open access and the so-called "traditional" journals, as there are more and more efforts to make the latter as open access as possible. I personally don't have a problem with MDPI or other publishers like it; there are pretty good manuscripts from these publishers as well... There is bullshit being published in every journal, we just don't realize it as fast as we publish bullshit altogether.
My overall suggestion to students and colleagues is to assess the quality of the research, not the journal, nor the authors... A study with an appropriate research question, methodological control, adequate sample size/statistical analysis, as well as conclusions true to the results, should not be overlooked. Otherwise, we start to insert barriers to the way we access science, and this is nonsensical (if not unethical), considering readers cherry-picking articles to generate their own science (which, by definition, should be done as neutrally as possible, right?).
My question here is: are we (all) being neutral researchers when it comes to determining the quality of a manuscript?