In contrast to most other fields, CS uses conference proceedings as the main outlet for its research results. This leads to intense competition for conference acceptance (acceptance rates of 10-15% are quite common at top conferences) and to long papers (say, 12 or even 14 pages, double-column). And because the papers are already seen as a final product, the incentive to write a journal version seems to be low, especially when other researchers go on citing the conference version anyway.
I find that this special position has some negative impact on CS:
- there is only a single round of reviews, leaving little room to improve the paper (shepherding does not fix this problem)
- good papers get rejected at good conferences (not a "hot topic," reviewer randomness)
- papers must be "perfect" at the time of submission; large but fixable problems lead to rejection
- reviews are not consistent: if you fix all the problems, a new set of reviewers will find different ones (or even want you to revert the paper to its previous state), wasting your time and the reviewers'
- the evaluation of research quality across fields is skewed, because the widely used impact factor works poorly for CS and well for everyone else
- the editing of papers is done by the authors themselves in a short timeframe, resulting in poor style and print quality
A good point:
- the time-to-print of journals is high (1-2 years), while conferences have fixed deadlines and take only about 6 months (and you can also tell that something is happening)
So what is your opinion on this? Should we change to the classical model of journals (maybe with faster journals), try to improve the proceedings, or switch to something else entirely?
All the conferences and journals that I have published in have a review process and all are aiming to improve the process to ensure quality publications.
However, since I struggle to find funding for my research, I am increasingly looking at publishing materials for open review. Why not have a review/rating system similar to social-networking "likes"? Take the emphasis off meeting standards for publication and let the research community review, rate, and cite as appropriate. Under such a system, the cream would rise to the top, and those who publish at every opportunity, regardless of the quality of the research, just to obtain a research ranking would slowly drop out of the mix.
We have technologies that enable new publishing models, and I wonder why we are not exploring these possibilities. We are not just publishers of research; we should also be consumers of research, and as a consequence be willing to rate the articles that we consume for the benefit of others. I suspect that if we put such processes in place we might actually obtain better-quality research and publications.
The model that we use is based on an outdated print-based model, and we should be looking at alternatives.
Looking at it from the perspective of a reader of research, I find many conference papers (especially short ones) lack the detail necessary for the reader to understand the depth or nature of the research. This means that I often cannot see the detail that I need in order to use the material appropriately.
As a reviewer, I also find this frustrating, as it makes a lot of the research seem more superficial than it really is. When reviewing conference papers, the feedback process often doesn't give me much room to provide helpful ideas to the author(s). I find myself under a lot of pressure to get the reviews through, with a consequent impact on my review comments.
As an author, conferences are about making people aware of my work and connecting with others doing similar work. I see them less as a place to publish the detail of any in-depth research. In contrast, a journal gives the space to provide that detail and sound reasoning. I expect the process of producing a good journal paper to take longer and to involve more stages.
Thank you for your answers so far!
@Manaschai: I guess it is true that the main issue with journals is speed. However, I think there should be ways to improve the current situation. I have done some reviews, and the deadlines were always very generous. Say you have 3 months: what happens is that everyone waits 2 months and 3 weeks and then does the review in a day or two (a significant fraction of reviewers seem to wait even longer). Why not change the deadline to one week: either you have time to do the review right away, or you simply decline? With enough reviewers, this would push the speed even past conferences.
OK, I agree that journal reviewing is an extremely thankless job; for this to work, the incentives to spend your time on reviewing must be increased.
@Mohammed: It is true that conferences have served CS well until now, but the papers may not be as good as they could be, and writing polished, long papers also slows down the publication process. What about shorter conference papers to make the contribution known, with the complete evaluation following later in a journal, as a second step? This way, you can also include feedback from discussions at the conference, or even work done in collaboration after you outlined the initial idea. My impression is that with current conferences such improvements rarely happen, because the papers are almost finished already. It is more as if the conferences are acting as journals, and they are less of a forum for discussion than in other fields (of course this depends on the conference; some are better than others).
@Errol: In CS, my impression is that papers are very wordy because every problem must be described starting from zero (basically, everyone defines their own set of new problems). In math you can write that you prove Conjecture X, and then just do it; in CS you first have to write three pages of motivation for why routing in the Internet is of interest (or something like that). With good problem statements, it should be easier to grasp the contribution in fewer words. This may be a general problem of CS (at least it is in networking).
I agree that conferences should be a place to discuss research ideas and approaches in order to improve the research, but at most conferences it is more like take it or leave it.
My impression is that education conferences depend on abstracts for acceptance. It is assumed that you will then write a longer paper for journal publication. Would this be a possibility within CS?
When a field of research is starting out, I would agree that it may be necessary to define terms, but as a field matures, we should be able to assume that some things are givens. However, I am conscious that in an education paper that I wrote, I had to be clear about my definitions simply because the audience would come with different understandings of those terms. Some would agree with my definitions while others would want to argue with them. So the problem of having to define things first isn't unique to CS. We always need to ensure that our audience understands the terminology we use in the way that we intend it.
I will take an example from software testing, which I am teaching at the moment. Behaviour-driven development uses the term "scenarios" for what I would see more as functional or unit tests. This is quite different from what is meant by scenario testing, where you are looking at an overall user interaction scenario with the system. Because any audience in this domain may contain people with these different ideas of a scenario, I have to make sure they understand in which sense I am using the word.
My experience in CS is that we do use some terms in different ways in different contexts. Sometimes, in a given context, we might assume that our readers know the definition of a term, but can we assume that when it is used in a slightly different sense in another context?
In astronomy, journal reviewer-author interaction has a typical time scale of 3 months. In CS it is a year, in the best case. On the one hand, we have the so-called "scientific result inflation" (you need to show more results than before). On the other, CS is not really considered a "natural science", so results do not come from measuring natural phenomena, and validating them is more difficult.
Both reasons make conferences the more natural venue for testing results against reviewer comments and an expert audience. CS conferences give much quicker answers than journals. However, audience opinion is only present in a paper's "score" insofar as people consider the overall conference. I think crowd evaluation should be a more important part of the process than it is now.
Pablo: I think this is already the case. We already evaluate the value of a conference or workshop based on the attendance and the quality of previous papers. Exciting results are lined up for high-profile conferences: USENIX is a great example here. Sending papers to lower-quality conferences, such as IEEE GLOBECOM (where the acceptance rate is over 40% in some years), is not recommended, at least not when you have a great result.
I think the same internal selection process applies to journals: if you're submitting an article, you'll probably try to get it into the highest possible journal (e.g., ACM Computing Surveys), or into the journal relating to the field (e.g., IEEE Transactions on ...).
There are numerous lists documenting the rough quality of conferences (or at least their acceptance rates and so on over recent years). For example, this site for networking conferences, which also has references to other index pages:
http://www.cs.ucsb.edu/~almeroth/conf/stats/
For security, there's this excellent list:
http://icsd.i2r.a-star.edu.sg/staff/jianying/conference-ranking.html
A few words on acceptance rate: this metric is very, very biased. The problem is that it is based on how many submissions there are. If there are a few high-quality submissions in a small set, the acceptance rate will be high. If there are (near-)zero good papers in a very large submission set, the acceptance rate will be low, but the quality will still be bad. Secondly, there is the issue of short papers, which are usually included in the acceptance rate of a conference (see for example WiSec in the rankings above). The metric generally holds up, but we should keep in mind that it is not an exact quality metric.
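To make the bias concrete, here is a toy calculation in Python; the venue names and all numbers are invented purely for illustration, not real data:

venues = [
    # (name, submissions, genuinely good submissions, accepted papers)
    ("Small venue, strong field", 60, 25, 25),
    ("Large venue, weak field", 500, 5, 50),
]

for name, submitted, good, accepted in venues:
    rate = accepted / submitted  # the usual "acceptance rate" metric
    # Assume the committee finds all genuinely good papers first,
    # then fills the remaining slots with mediocre ones.
    mediocre = max(0, accepted - good)
    print(f"{name}: acceptance rate {rate:.0%}, "
          f"{mediocre} of {accepted} accepted papers are mediocre")

The "selective" 10% venue ends up accepting mostly mediocre papers, while the 42% venue accepts only good ones: the rate alone cannot distinguish the two.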
Overall, I don't see a problem with conference publications. From my admittedly limited experience in submitting, it seems that the process is solid. I had a discussion about this a short time ago with a professor who did some (statistical) analysis on a specific conference and the review process associated with it: he indeed found significant review "randomness", but it mostly canceled out over the three reviews. I suspect that's also the trend, at least for the higher-tier conferences (as reviewers invest sufficient time there). I think perfect-at-submission is a good thing - it puts some weight on the authors. It's not as if you're required to submit to conferences; you can also focus on journal publications. Or are there different opinions on this?
If conferences perform better than journals in their revision schedule, then they should be used as a source of journal paper candidates. This could help journals improve their rates by accepting a larger number of "selected papers" from conferences, or at least by considering them as partially reviewed. This mechanism is already used, but not massively. Perhaps journal editors should also consider that a presentation proven "popular" at a conference could become a "popular" paper in a journal, positively affecting its impact factor. Such "popular-rated" papers are sometimes missed by conference committees and could increase future demand for that kind of work in a journal. Otherwise, how can you predict the "future importance" of a given work early on?
Perhaps it is because I work in a developing country, but no one has mentioned the travel-allowance issue. From what I've heard, travel funding is being cut everywhere, sometimes limiting your participation to once a year, or forcing you to write at least a couple of papers for the same conference before you get financed. And since this is mostly a CS issue, it is hard to justify to others that it should be handled differently than in other fields, where conferences may be more readily seen as minor contributions.
Also, in terms of "points" there are problems, because while there are standard ways of assigning points (for promotion) based on whether or not an article is in a first- or second-tier venue within a known index, for conferences this is not so standard, and the ISI conference index (CPCI) is neither as easily accepted nor contested by other indexes in the way journal indexes are.
In any case, this discussion, in my opinion, is fast becoming obsolete with open access and collaborative authorship, reviews and publications. It's all about sharing knowledge, really.
The practice of submitting papers from certain conferences to journals as revised versions is quite common. In addition, there are conferences that invite the authors of specific papers to submit a revision of their paper to a variety of journals, although these journals usually tend to be lower-tier (i.e., non-IEEE/ACM).
Requiring the publication of code is not a feasible solution. Sure, it'd be great, but there are two important reasons why this will not happen in the near future:
1. A large majority of papers are based on (partly) proprietary software (particularly network simulators, such as VSimRTI and NCTUns), often with conflicting licenses (components of JiST/SWANS), or none at all (typically discontinued student projects). There is also a lot of code and papers written by industry, where the publication of papers is already a very difficult experience -- getting code from industry is practically impossible without an NDA. Basically: publishing most of the research-related code bases I know about would probably trigger a chain of lawsuits.
2. Releasing code also releases all the associated ideas and errors written into comments/documentation. And if code is required to submit a particular paper, copying ideas becomes far too easy. Meanwhile, the review process is not helped: reviewing code is an extremely time-consuming and unrewarding process that adds nothing to the actual research the authors have done. It also makes the time required for writing the code (even) longer, because as an author you know it will be reviewed (i.e., your paper will be rejected if the code is not styled according to the standards of the reviewer).
Sticking with journals and changing their approach and timeline for reviewing would be more useful. Conference publications are useful when your developed model is practical, with an implementation that can attract potential sponsors so the research can be conducted on a larger scale. Otherwise, journal publications are good.
I agree with most of the answers here, specifically Errol's and Rens's. However, I think a way to meet somewhere in the middle would be to shorten the period between submission and notification. I bet I share Matthias's feelings whenever I check my email every second for the notification email, to know whether I need to submit to another conference 5 days after the reject notification. Some good conferences (e.g., SIGIR, WWW) make sure you can only submit to one of them per year by sending the notification emails after the deadline of the other has passed, even though the results are already there. This is really frustrating and, in my opinion, the #1 cause of stress in graduate studies.
I prefer to take my time and publish in a journal. I like a hard topic that requires a decade of work. However, I admire those who can publish quickly.
It seems to be part of building experience: the first papers are rejected, and then you learn the trade and the skills. Later in your career, your papers are rejected less and less, and you learn where to publish and where not to.
The number of researchers in computer-science-related areas in particular has been increasing; I think it is 10 times what it was a decade ago. This number of conferences is required to accommodate them and disseminate information very fast. The only thing is that the organizers should see to it that the acceptance criteria are not driven by the response, etc.
A conference paper is a good way to start the publication process if the work is not fully materialized and/or has been tested only on limited data sets. Addressing the review comments, presenting the paper at the conference, and incorporating the suggestions from the participants give the research more potential. However, one should ensure that publishing at a conference is not merely about getting the count (number of papers published) ticking.
One further problem with the conference-focused publication tradition is that it frequently falls outside the common (mostly journal-based) publication metrics. This is actually a fault of the metrics, not of the tradition, but researchers and research institutions do not have the option to choose how they are evaluated. There are (still) many areas in CS where the best publications are at conferences, not in journals. I certainly prefer publication metrics that are determined not by the "quality" of the outlet, but by the quality of the content.
There was a time -- many years ago -- when we published only if we had something new to say... What is the number of publications by Alan Turing, for example?
It might be useful to consider how the field of physics advances using arXiv as one of the mechanisms to disseminate work at the pre-print stage. The recent acceleration of new results in physics might be, at least in part, due to the wide adoption of arXiv.
Perhaps authors of rejected conference papers could submit them to arXiv. That way, quality work that failed to meet some conference's restrictions could still provide stepping stones for other researchers and assert the original authors' precedence.
The problem with things like arXiv is that reading all the not-accepted-for-a-conference work simply costs too much time. Reading thousands of papers a year just isn't feasible if you also want to actually do your own research and teach students. This is why we have conferences and journals in the first place: to act as a filter.
I'd rather have people (re)submit to different/less prominent conferences, or to prominent workshops in their field. There are plenty of conferences out there that accept almost anything (*), and workshops that are really restrictive (such as ACM VANET). You just have to know what goes where in your field, and that's something you learn by reading and through a supervisor who knows their stuff.
(*) I don't really want to mention a specific conference here, but have a look at these statistics:
http://www.cs.ucsb.edu/~almeroth/conf/stats/
In my view, most publications in conferences, workshops, journals, etc. demonstrate only a qualification level, without any scientific contribution. Please have a look at these links: https://www.researchgate.net/post/What_are_the_measures_used_in_different_countries_to_stimulate_publication_activity
https://www.researchgate.net/post/Is_the_existing_paradigm_of_scientific_communication_exhausting_itself
OK, let's take only the best research. Please have a look at this link:
https://www.researchgate.net/post/Why_dont_developers_use_the_best_research_on_software_development
I want to discuss this question from the point of view of the large research-practice gap. Sometimes it seems to me that computer science is in a deep crisis in this respect. In my opinion, this gap is a tragedy for CS. Let me describe my usual situation. Suppose I read a conference (workshop, journal, etc.) paper and am interested in using the author's algorithm to solve my problem, but the author's program is not accessible to users. What must I do now? My answer: I will try to develop my own algorithm and program to solve my problem. Each time, I do just that. Thus, in my understanding, discussions of this question are only "fluctuations of air" as far as outcomes are concerned.
Researchers aren't and shouldn't be developing software. That's not what they do. Many researchers are average programmers at best, and have no experience managing large projects (myself included). The goal of doing research is not to write programs, it is to develop new ideas. New ideas can be adapted into practice by programmers and companies. Writing proof of concepts, which is what most researchers do, is an entirely distinct thing from writing real programs.
Also, there's a big difference between 'program' and 'algorithm'. An algorithm contains the essence of a program, the reason why it works. Using the algorithm text in the paper you should be able to write a program that performs the same task. There's a parallel with copyright law here too - you're free to do whatever you want with your implementation of the authors' algorithm, unlike an author-supplied program.
Developing your own algorithm for every program is not a good idea. It is like inventing your own cryptography for every application you write. You shouldn't do this, because there is a lot of research necessary to build a secure algorithm/cryptosystem. In addition, there's an entirely different aspect that you should be working on - correct implementation! You can't have a good program without both. This is the same in other fields - you don't expect material scientists or mechanical engineers to build houses, right?
Moral of the story: research and programming are not the same thing. Research is figuring out how to do it, programming is actually doing it. Sometimes we can do both, but we're not wizards. Sometimes we need to leave the task to good programmers.
Ouch! Saying researchers shouldn't develop SW is akin to saying they shouldn't make slides or write papers. These are tools that are essential in some research contexts. SW development is essential in many areas of research, not just CS. As are other tools: paper and scissors for paper protos, co-creative design, etc. in HCI (which is widely classified as a subset of CS). Sorry about the confrontational stance, but I have seen how explicit leadership neglect of SW creation and formal project management wasted so much research funding, so many researchers and, more importantly, the majority of the research potential of a 1200+ person mobile, telecom and web research org (which Darwinistically became 200+ and shrinking).
Some research does not demand craft and engineering build skills. However, the majority of non-mathematical CS contributions do demand these things. How you resource and skill-share these things is another issue. But the issue of easily reusable research in software, algorithms and theory is probably the proof point. Even including FOSS SW doesn't always improve a contribution. Anyone making use of the original SIFT CV work would, however, thank the heavens that the SW works, as the peer-reviewed papers were obviously not peer-implemented. There are plenty of other examples - this one just caused me personal pain and wasted research time.
Confrontational is not a problem at all; it is essential to a good discussion :-).
I'm not saying they shouldn't make any software; I'm saying that it isn't their primary task. I'm also saying that software engineers are generally (way, way, way) better at building software than researchers are, because that's where their expertise is. Of course there are exceptions, and of course it's great if researchers find time to write software, but it is much more important (in my opinion!) to come up with novel solutions and to communicate them to other researchers and companies that can put them into practice (building slides and writing papers is more important). When I come up with a new solution, I'd probably write a proof of concept, and then focus on why it is a better solution and how it can be made even better or applied in other fields. I wouldn't "waste" my time on writing a program that other people will just copy, or worse, complain about when there's a bug. It isn't my job to do that either, and I don't want to waste society's money writing software that could just as well be written by those who will later make money off it. The output of a research project is the associated publications (and deliverables, although no one reads those), not software. Otherwise it would be a software project. You can try to do both - sometimes it works, but it really isn't easy.
To put it another way: you say "easily reusable research". Have you ever studied the source code associated with an average research project? I've worked on a few myself, and it's not exactly pretty, let alone well-documented. Such code bases often contain old comments, obsolete solutions, bugs in obscure features that no one stumbled on, or just poor coding standards. That may be caused by the conference-driven approach to publication (because writing proper code takes time), or by a lack of experience with large software projects; I think it's a little bit of both. A lot of the code ends up unmaintained, because people turn in their thesis or stop working in the field.
I think there is very little pure research code out there that a real software company would consider "production-ready". Or maybe I just have too high expectations of the software industry.
Dear Rens van der Heijden,
I want to ask you: is your point of view the opinion of most researchers? If it is, then (in my humble opinion) this is a huge tragedy for our community. Please read attentively the discussion at this link: https://www.researchgate.net/post/Why_dont_developers_use_the_best_research_on_software_development
Some of my answers are there. In particular, I used and discussed the notion of a "Transitional Developer" as an immediate link between researchers and developers. My opinion on this topic can also be viewed at this link:
https://www.researchgate.net/post/How_can_the_theories_of_computer_science_be_made_more_useful
You wrote: "Developing your own algorithm for every program is not a good idea". OK, then please place your program on a server where users can access it. If you cannot do this, then please give me some advice on how to solve my problem; otherwise I will have no other choice but to develop my own algorithm and program, to guarantee that my problem actually gets solved.
To bring the discussion back to publications: Gennady and Rod, how should the current system be changed? Conferences that accept code submissions, where reviewers analyze the code and assess its quality? Should all papers be accompanied by code?
I'm with Rens that, at present, ideas should be distilled as much as possible, so that they are still valid even in 20 years and reusable in completely different application domains (looking at math, you never know where packing bullets into a crate might be applied). To achieve this, the results should be abstracted as much as possible from their implementation; otherwise lots of technicalities obstruct the idea. For example, old systems papers from the nineties describe lots of workarounds for hardware limitations, with very little to no relevance nowadays. And if progress continues as it has, the same will be true for papers written today. If a paper does not fully convey its idea, it is because it is a bad paper, not because conference proceedings/journals are a bad form of science communication.
So I think that while software is a valuable tool and definitely necessary (and publishing it helps to recreate and validate results, or to save other researchers' time), it is not the scientific (end-)result we should strive for. Academic research rarely leads to working products, and I don't see why it should. There should be experts who find good ideas and polish them; industry simply seems to neglect this.
If a tool is usable by several researchers, it could be developed collaboratively to save implementation time. One example is the ACM SIGCOMM Community Projects:
http://www.sigcomm.org/content/acm-sigcomm-community-projects
@Jairo: Conference-journal hybrids look very interesting; I hope that more experiments like this will be performed in the future. It seems that most people in charge feel that the current system is "good enough," so there is no real pressure to change it in a short timeframe.
Dear Matthias Wilhelm,
I want to comment on your phrase "Academic research rarely leads to working products, and I don't see why it should. There should be experts who find good ideas and polish them, the industry simply seems to neglect this". I ask you to read the discussion at this link attentively once again: https://www.researchgate.net/post/Why_dont_developers_use_the_best_research_on_software_development
You can find an answer to this phrase in that discussion. Why do you believe that experts must "find good ideas and polish them" and that developers must create commercial programs from your ideas? Nobody will do these things, I am 100% sure, NOBODY.
(Note, I'm mostly on the same page as Rens - and thanks for the dissonance that activated me :)
CS publishing isn't broken in the first place, so as an engineer I'd not suggest fixing it. Rather, feed in the right stuff and let natural selection do the rest - sounds theoretical, right?
I'm not offering a silver bullet, but ideas where well-documented algorithms have an easier path to a subsequent journal or to stupidly-low-acceptance-rate conferences make sense, and maybe well-cited and well-regarded publications should not be offered this yellow brick road unless they are genuinely peer-usable (cf. the IETF's genetically independent implementations). This kind of thinking would need to be driven at the ACM level. Also, the "mass" of researcher-implementors are often at the lowest point in the food chain (which is why reusing code is the better value choice), but getting their feedback (including stupid repeated mistakes) into the dialogue would add much value to the original publications. The current process does not afford this, and, outside geological discovery in the Victorian period, a faster and friendlier channel than paper publication would be desirable to tease these experiences into the public domain. (Yep, I know such things already reach the public domain through actual software in use, but getting the long tail of insights from the use of academic publications into circulation would be a massive step forward.)
A little more reward for honest incremental and integration contribution and a little less acceptance of artificial novelty could make the author-reviewer partnership more content-beneficial. Artificial novelty appears all over HCI where "a significant contribution scoring high on metrics" is really a tiny one newly dressed in the conference-specific rhetoric and tastes, and too often selecting an alien lexicon outside that conference/community which actually reduces multi-disciplinary reuse.
Likewise, contributions which are "merely" much better implementations of a known approach are actually novel and of scientific value, and represent both new knowledge and another shoulder to stand upon. Excellent code isn't my expertise, so my frustration comes from Rens's point that research often produces poor SW that we're stuck with - and that I have to use without the skills to beef it up in a reasonable time. Why not reward not only the algorithm contributions "with our science currency" but also the implementation contributions, with a view to the same scientific benefits of peer utility and progress?
Incidentally, from the CS world I'm more exposed to HCI conferences than others, and I believe the pathological need to keep incredibly low acceptance rates is truly broken. It's like Darwin without Mendel - only effective on remote islands without much inter-realm exchange, and even then not for the long term.
@Gennady Fedulov: The answer is that a research contribution should be evidently better than the state-of-the-art, and companies have a natural incentive to integrate scientific advances into their products to improve them. If they strive to find better solutions, then they should try to understand research results instead of waiting for good things to copy and paste. On the other hand researchers shouldn't mind whether an idea is usable at once or maybe in 100 years, it should just be correct and add to the knowledge of the field. It is good research when others can reuse its contribution, maybe even in unintended ways.
But overall this development discussion seems off-topic to me: how does it translate to science communication? Conference proceedings and journals may not be optimal for transferring knowledge, but what is the alternative you propose for publishing results?
A last comment:
A turnaround of 2 years for accepting a paper in a CS journal is an ill condition for a field that changes every 6 months. Even worse, any paper accepted three years after first submission is necessarily outdated soon after it appears in print. You would not believe how many CS papers are printed after three years or more.
This fact cannot be ignored, and the efficiency of the journal revision procedure should be improved; otherwise CS output is condemned to be driven through other diffusion channels.
Simplify things. If the input is massive, your filtering algorithm cannot be expensive. Use conference results as a pre-filter for your journal, and at a higher rate than now.
If you are not sure about conference acceptance, ask for another "agile" rating, like crowd validation. But remember that a publishing window of more than a year in CS means poor quality, as the content is outdated.
@Gennady: I think you missed my point. There is a distinction between a program (implementation) and an algorithm (theory). Research develops the theory, not the implementation. Proofs of concept exist, but as I said, those are rarely useful to others, and sometimes they contain things that fall under NDAs! Which brings us back to the discussion at hand - how do we improve conferences? Unfortunately, requiring code with submissions isn't going to help, for the many reasons we have discussed earlier.
@Pablo:
Could you provide an example of such a paper? My understanding is that the publisher that takes the most time (Elsevier) takes at most a year from first submission to online appearance. As far as I'm aware, the various IEEE Transactions journals are typically in the 6-12 month range. And I think that some delay in this process is important for maintaining a focus on quality. Of course the process can be streamlined (there is always room for improvement), but I don't think radical changes are necessary at this point. At least not for journals.
I would also argue against the claim that the field changes every 6 months. Yes, things change quite quickly in our field. Yes, there are a lot of hypes, a lot of buzzwords and a lot of nonsense sometimes. But looking at the evolution of VANET-related papers (which I've been following since my bachelor thesis three years ago), I'm fairly certain that a longer turnaround isn't that problematic. Those interested in the real state of the art will obviously look at conference and workshop publications rather than journal articles. For those looking for an introduction or for the wider implications of results from a particular field, quality is much more important than novelty. This is why we have journals and journal articles (at least in computer science). The current three-level (journal/conference/workshop) system seems pretty stable from a reader's point of view. As I pointed out a while ago, there are plenty of conferences out there that will accept practically anything. As long as this is the case, I would argue we don't need to worry too much about papers being rejected.
@Rens: In my experience from several journal submissions (mainly to various Transactions), you can expect to wait at least 6 months before *anything* happens, and then often only after you query the editor about the status of the paper. It is really necessary to bother the editor, who in turn tries to kick the slow reviewers a bit, or in the worst case searches for new reviewers. Of course this may vary, but judging from the dates in the "bottomstuff" of published journal articles, you can expect 1-2 years until publication. Conferences are simply more predictable, which is one reason they turned into "journals that meet in a hotel."
In my view, reviewers are to blame for this; having no strict deadlines seems to have a bad effect. But you cannot blame them (us?), because there is not much incentive to work promptly. There is no reputation increase, no pay, no benefits. To improve journals, maybe we should think about how to better reward committed reviewers.
Maybe the field is not changing every 6 months, but it is likely that some other researcher will publish a paper similar to your work in that timeframe, effectively voiding all chances of publishing your paper in its current form, even if it is clearly superior to the other. So you have to be fast instead of thorough, but at the same time you must aim for good visibility. Of course it is no problem to get your paper accepted somewhere, but if no one finds it, the effort is also wasted.
You talk about a three-level system, but effectively there are only two levels, or even one: journal articles are optional at best, and the emphasis is on conference papers. This takes us full circle back to the original problem; the current system works, but maybe it could be better with a different publication system (e.g., one really using all three levels).
@Rens van der Heijden: I asked you to answer my question. To remind you: I asked you to give me some advice regarding my phrase above.
Please answer me if you can.
@Matthias Wilhelm: My answer to your question "Conference proceedings and journals may not be optimal to transfer knowledge, but what is the alternative you propose to publish results?" is the following: we must create a database of Transitional Developments (non-commercial online tools) on a server, ACCESSIBLE TO USERS AND PRACTITIONERS to solve their problems. I want to underline that I mean creating such a database only in CS.
@Matthias: (I meant to write two years, you're right about the time schedule)
I agree with the review problem. In my personal opinion, the fact that there is (almost) no credit for writing good reviews is a core component. I love writing extensive and helpful reviews (at least so far, but I've only written a handful), but there's no way for anyone beyond the chairs and the authors to see these reviews. If you're lucky, you're noted down in a TPC list somewhere that no one will ever look at. My impression is that it doesn't help your CV much either.
@Gennady:
The algorithms are normally contained within the paper, as pseudo-code or at least an extensive description. Apart from that, it is usually a good idea to write an email to the authors of the paper. Often they are very willing to help you, especially if you show interest in what they're doing (at least, I'd be more than happy to send my code, or at least extra details, if someone asked me).
To be absolutely clear - I don't think your concept of a transitional developer is bad. On the contrary, it is a great idea, but the problem is: who will do the work? Who will pay for it? From my (perhaps naive) point of view, this should sit inside those communities where development is performed, whether open source or at a company. This is where university-graduated students and "ex-researchers" can contribute a lot and really have an edge in their interviews. Of course, this only works if the company actually wants to pay for it. I think that is where the problem lies.
Of course there are examples where things are different. For example, MIT has a bunch of projects that are fully open-sourced (Roofnet and COPE, just to name two examples), and most simulation engines we use are open source (ns-2, ns-3, JiST/SWANS, OMNeT++). However, for smaller-scale projects, students write code for the PhD students or researchers as a project. The problem lies in the fact that programming these things requires a deep understanding of how the field works (and in the inexperience of the students). Programming is done by bachelor and master students as projects, because the researchers are busy doing research. After the code is complete, the students understand the ideas and realize how the code should have looked -- but by now the code base is a big mess. Then the next batch of students comes, the problem repeats, and the code base becomes more and more chaotic. It would be great to have proper project management, but there's just no money for it. And we researchers do not all feel like spending our free time and weekends writing code -- we want to have at least some pretense of a social life too!
Bottom line: the transitional developer should be someone who works at a company or on an open-source project, because universities won't pay for it, and companies don't like giving out their research. That seems to be the somewhat sad truth.
There is often a culture gap between academic R&d and commercial r&D. Might I suggest that more incentives for academics to produce quickly testable and reusable results would be a good thing, just as more incentives for "commercial authors" to produce well-referenced and scientifically published results would be a good thing. In specific niches this works: Google's high PhD intake ensures that idea, proto and product development involves people with an excellent scientific appreciation (and often a continuing scientific motivation) in any case. (Google is just an obvious example - I certainly don't mean to imply that other large or successful IT companies combine academic and commercial quite so well in general.)
@Rod Walsh: Can you give a more detailed explanation of your phrases "more incentives for the academics" and "more incentives for the commercial authors"? What do you mean by "incentives"? If I understood you correctly, the higher the academic salary, the better the "quickly testable and reusable results". Do you think that a high salary can solve this problem? I don't think so. I also want to ask: do "commercial authors" need "well referenced and scientifically published results"? I don't think so either. I would say that "commercial authors" will not be interested in sharing their commercial secrets with researchers. You also wrote, "In specific niches this works ...". Can you give me more detailed examples (links) so I can understand your suggestions better?
@Gennady: Absolutely nothing to do with salary, and probably nothing to do with any direct financial incentive. On the academic side, reward comes otherwise. In terms of publications, there are various social rewards (which are hard to engineer until the community has formed) and various "badges". The publication on the researcher's home-page bio is the obvious one, but there are so many more - spend a week asking each researcher you meet what one thing makes them achieve more and what one thing they could not achieve without (and you'll get an insight into the motivators and hygiene factors in that rewards economy).
A quick note on "hard to engineer social rewards": it is rather amazing how little effort is made at conferences to reward and socially facilitate people outside the normal inner circle of a conference. Hard, but with low-hanging fruit - it equates to making researchers want to get to that (family of) conference(s) because they appreciate that group, are developing friendships there, etc.
The industrial side is much more complex, as the context (which can change) affects the possibility of and motivation for engaging in public. But obviously, every sector of professional life enjoys recognition and proof points of its abilities and achievements. It just boils down to lowering the barriers to participation, and making these kinds of recognition accessible beyond research groups.
I think that we need to recognise that academic R&D is more "pure research" oriented, while commercial R&D is more development oriented, i.e., meant to lead to revenue with an "acceptable" lead time.
The incentives for academic research come in the form of tenure, grants and promotions - mostly determined by the publications produced, which unfortunately can lead to carefully apportioned incremental results for publication.
On the commercial side, maintaining competitiveness is a key concern. Trade secrets, patents, IP agreements, etc are used to fortress any gains made. Early and detailed publication is contrary to accepted business practices.
Whilst industry-academia collaborations are possible, reaching a mutually acceptable middle ground takes considerable negotiation.
Perhaps giving collaborative industry track papers / articles a greater weighting during evaluations for grants and promotions could be a missing incentive.
@Eugene:
I think this varies per university, but promotions aren't granted on the number of papers you publish, but rather on the size of the contribution you make (which is why you write a dissertation in the first place, instead of just printing your papers).
With respect to collaboration, I'd like to add that the problem typically lies not with the researchers on either side, but rather with the administration and legal departments of the university and the company, which cannot agree on things.
With respect to "industry tracks": I would be very careful with this (or any other classification of papers before review). Industry papers can be just as good as academic papers - so treat them the same, and judge them the same. Don't create an industry track just because you want to accept more industry papers; this can only lead to a downward trend in conference quality [*].
Better ideas are ones that already exist -- colocated workshops, demo events and so on -- but I think the topic area of a conference ultimately decides how much industry will show up. One option that might also improve attendance is a special price for industry, because the rule of thumb seems to be that industry doesn't want to spend money; an "easy" way would thus be to decrease the cost.
[*] To be clear: I'm not saying industry papers are bad - the same would apply to an academic-only track.
The review model is broken altogether.
Reviewers should not be anonymous, I think. Anonymous papers are right; but anonymous reviewers can afford to be complete jacka**es. That is how you get conference reviews in which it is clear that the reviewer didn't even bother to read the paper.
I would say that improving the conference model, and tying conference publications to journals better (the mixed model), looks like a good way forward.
Of course this cannot solve the real issue on its own, which is the horrible lack of originality in most CS publications.
This is a topic that is actively discussed in many fora. A collection of these can be found at http://cra.org/scholarlypub/ if you wish to read more. I have also written a SIGMOD blog entry on this explaining my personal views. In case you are interested, it can be found here: http://wp.sigmod.org/?p=488
Thanks for the great essay. It does go to the heart of the matter.
There is an idea that is relevant to your 3rd point about the structure of conferences.
It would be great to consolidate the entire review and in-depth Q&A process on a single website. Presentation material can be uploaded to the website, and in-depth questions may be directed to the authors there, where they will find ample time to respond in detail.
Also, the whole multi-round review process could be continuous. I've seen some new conferences adopt a multi-stage submission process, whereby you submit to the first round and, based on the quality of the submissions, additional stages are announced. That kind of adaptivity can be useful, but there is an even better idea: make the submission process itself continuous, with deadlines pertaining only to the physical gatherings, not to the conference per se. And every quarter, some papers are invited to a journal issue.
I suppose it cannot get any faster than that.
Also, I do think that the review process must be *transparent*. Make the papers available as they are submitted, for viewing by the public, as on arXiv.org. This way, all submissions are archived and timestamped publicly, and you prevent theft by reviewers, which is not as amusing as it sounds. Furthermore, every review should be signed, so that reviewers put their reputation on the line if they write malicious reviews. This would force them to be 10 times more careful. They would also feel more incentive to deliver quality reviews on time, especially if there is a meta-review process.
I like the direction of the discussion. Let's consider a hypothetical situation. Assume that I listened to two conference reports on the same problem (model). The first report is a good piece of Research, and the second is a bad piece of Development. The research program (e.g., in C++) is not accessible to users and practitioners, but the development program (e.g., in ASP.NET with C#) is accessible to users and practitioners, who can solve their problem from a server. Now I want to ask you: which of these reports do you prefer? From the point of view of researchers, you will choose the first report; from the point of view of practitioners, the second.

Could there be a third way? How can we join the good research and the bad development together to get a quality product for both researchers and practitioners? What is the use of the best research for ordinary users? Assume that I am a practitioner without the special knowledge needed to understand the best research. What must I do? As for me, I will choose the bad development, to solve my task from a server. OK, now assume that I am a researcher on this topic, interested in solving my task. Please advise me what to do. As for me, I will develop my own algorithm and program. You may ask: why should I develop my own algorithm and program? I will answer: this is the best guarantee of solving my problem within a time limit. I don't like to understand a "research algorithm" in detail; usually I try to understand only the general idea. But in any case, I will develop my own algorithm. Reading a research paper, I am interested in knowing only two things very well: the problem description (mathematical model) and the experimental results. If the researcher's program is not accessible, then I am not interested in reading the research solution approach attentively and in detail.

From this point of view, in my opinion, today's paper presentations are "yesterday's news". Today we must consider a new paper structure: "paper + accessibility for users". We must create a database of accessible programs on the Internet, to give practitioners the opportunity to solve their problem (at that moment) from a server. If you are interested in supporting this idea, I offer my contribution for your consideration: a "Bin Packing Online Tool" at http://www.fedulov.ge . This is a Transitional (Not Commercial) Development (see the discussion at https://www.researchgate.net/post/Why_dont_developers_use_the_best_research_on_software_development ). I want to underline that this is only my first experiment. Please don't hesitate to share your opinions. I thank you in advance.
Conferences are, for PhD students, a good way to understand the academic world and an entry point into it. So our PhD thesis directors ask us to try a couple of conferences before we send anything to a journal.
Journals are a more serious challenge for PhD students, since journal reviewers expect a sound research approach and a real contribution to science; at least, that is what our directors say.
Why do conferences take preference over journals in computer science? Maybe it is because computer science competes directly with the use of information technology in the business world, and that world doesn't want to wait. It can't wait until some reviewers say the job is scientifically well done. It needs information technology improvements now, and if something goes wrong with a solution, people spot the problem, find a way to resolve it, fix it, and move forward. So our area of expertise is under greater pressure to show short-term results, and the only way to show those results is through conferences.
Knowledge is power, but applied knowledge is more powerful. So conferences give us a way to apply knowledge faster and gain that power.
The main advantages of conference publication, I believe, are the shorter publication period and the exchange of ideas with others at the conference, though this paradigm is not aligned with other disciplines. But the big issue, I think, is that the cost for an author to register and attend a conference is too high, especially for conferences held abroad. Why not lower this cost, e.g., by reducing registration fees?
Publishing your work at some good conference is a good deal. However, if your work has even a little novelty, I argue for sending it to a good journal. In recent years, running a conference has become a business. Hence, one should opt for a very good conference, for instance ICIP, INDICON, INFOCOM, etc.
Attending a conference offers more than publishing a paper. On the technology as well as the science track, the conference loads you with new information and puts new ideas forward, whether or not you are aware of it.
For raising synergy and keeping up with techniques, I think conferences are much more valuable than other publications. Reading a well-tended journal paper is quite valuable in essence, but it may not capture your mind for a new solution as well as the person who spent time on the work, speaking about what the reason for the work was and what the outcome has been.
Regards
Present-day conferences publish anything without review. You don't find any comments other than on alignment. So improvement is needed.
Coming originally from physics (20 years ago), I found the CS publishing tradition bizarre. Physics was such a fast-moving field that, despite the relatively rapid turnaround of journals like Physical Review Letters - typically less than 3 months - it was a tradition to circulate widely a preprint that could be cited. This was a pre-review version of the final, reviewed and accepted journal version. With the Web and Paul Ginsparg, this preprint tradition became the arXiv e-Print repository. This is where physicists look every day for new results, and the journals provide the final place of record.
Computer science is, if anything, an even faster-moving field, yet our journals typically have long turnaround times. Perhaps for this reason, a workshop and conference publishing tradition has grown up, often resulting in essentially the same work being published two or three times in varying versions. In addition, it is not so easy to find all the workshop and conference papers - despite the valiant job being done by our colleagues at DBLP.
Curiously, instead of being in the vanguard of promoting open access, the CS community seems indifferent to this movement. With the recent moves towards Open Access in the US, in Europe, and around the world, this will clearly have to change. For a personal account of my reasons for supporting open access, interested readers can see my series of 6 blog articles at http://tonyhey.net
My field is applied mathematics, and I attended only a few conferences as a grad student (my Prof's), and I was not allowed to present (WHAT do you think my Prof's conferences are?). Finally, some six years after I graduated, I presented something at my Prof's conference. Conferences are not bad but, really, should not be the basis for evaluating "output". They are not really reviewed (if at all); sometimes acceptance is based only on the abstract (only the prestigious ones ask to see the paper), so they are not even a reliable source of citations! Additionally, for some people, it's an added source of income, with the per diem. Sorry state ... let's just count the papers, guys!
I agree with Tony's point of view. Why is the journal review process in physics or other scientific fields comparatively so fast? I also come from physics, and I guess that spurious research in CS is more difficult to confirm. In physics we have equations that fit in a couple of pages; in CS we have programs that fill several megabytes of code. Not all CS journals provide public links to code and/or data. Without such a concrete proof of concept, can you or others easily validate a CS paper's claims? Openness is becoming a necessary condition for the CS journal review process to improve its timing.
The speed of turnaround is certainly a boon in CS, although some of the online journals are now getting quite snappy at reviewing papers too. Perhaps it is in the nature of CS that we like to share and get feedback, which adds to the conference culture :-).
Do you all believe that the conferences/journals that are chaired/organized/edited etc. by certain groups really publish papers on merit?
In my opinion, the answer is "NO". Most conferences are organized by research groups headed by one or more people, who try to assign reviewers accordingly, and some papers go through without any review process at all. The journals, specifically the SPECIAL ISSUES, have the same story. And the story doesn't end here. Many conferences and journals (not all) simply prefer papers from authors of the same region. I think this is an ALARMING situation and we all have to think about it.
If anyone disagrees with my opinion, please feel free to criticize. Positive criticism is welcome.
I agree. I have no real assessment of the problem, but it is very plausible. However, I guess the force driving this phenomenon is academic inflation, and it is circularly reinforced. How can you stop this? Let your peers see your research results openly, and take into account and publish their critiques. That will unavoidably destroy closed groups pushing their own papers. I am not saying that journals should offer papers for free, but they could expose code and data publicly. In this setting, conferences should work more fairly and efficiently.
With social networks for science, I never need to meet anyone in person anymore.
https://www.researchgate.net/post/Is_a_ResearchGate_score_scientific_clout_Is_it_a_commodity_Un_score_dinfluence_ResearchGate_constitue-t-il_un_poids_scientifique
Publish quality research, but such work should not be dumped in cold storage.
Bertrand Meyer sums up his ideas about this topic on his blog. I suggest you read it.
http://bertrandmeyer.com/2013/02/06/conferemces-publication-communication-sanction/
Can we please take corrupt publishers out of the picture? They do nothing but stifle research by enabling a kind of solidarity around low-quality research. And their "open access" idea is offensive: they say it is "convenient" that the authors pay for open access, really making their publication model a payola system.
My point is that we should not let anyone earn inordinate amounts of money from our contributions. Research should be free, as in beer, and as in liberty.
Also, Safdar is right, but many people who have not personally experienced the biased review process at some conferences would not know of this. Hasn't any of you received the third "anonymous random review" that just makes up an irrelevant excuse to reject the paper?
Errol Thompson is right, as well, we should build open/free web-based systems for all publication, review, meta-review, and ranking. It's disappointing that the Computer Science community does little to solve this problem.
In my view, we are discussing this in order to find feasible solutions. With this purpose, I would like to see most of the proposals from our discussion realized. Now I want to comment on some phrases from the discussion and try to work out how to realize them.
1. Marc Herbstritt, a phrase from "Publication Culture in Computing Research" at the link http://www.dagstuhl.de/12452:
"A dispassionate observer, perhaps visiting from another planet, would surely be dumbfounded by how, in an age of multimedia, smartphones, 3D television and 24/7 social network connectivity, scholars and researchers continue to communicate their thoughts and research results primarily by means of the selective distribution of ink on paper, or at best via electronic facsimiles of the same. Modern technologies
enable vastly improved knowledge transfer and far wider impact".
2. Matthias Wilhelm
"Conference proceedings and journals may not be optimal to transfer knowledge, but what is the alternative you propose to publish results?"
3. Safdar Bouk
Do you all believe that the conferences/journals that are chaired/organized/edited etc. by certain groups really publish papers on merit? In my opinion, the answer is "NO".
4. Errol Thompson
"We have technologies that enable new publishing models. I wonder why we are not exploring these possibilities. We are not just publishers of research, we should be consumers of research and as a consequence be willing to rate the articles that we consume for the benefit of others".
"The model that we use is based on an outdated printing based model and we should be looking at alternatives".
5. Önder Gürcan
"Errol Thompson is right, as well, we should build open/free web-based systems for all publication, review, meta-review, and ranking. It's disappointing that the Computer Science community does little to solve this problem".
6. Rens van der Heijden
"To be absolutely clear - I don't think your concept of a transitional developer is bad. On the contrary- it is a great idea, but the problem is who will do the work? Who will pay for it? From my (perhaps naive) point of view, this should be inside those communities where development is performed, whether it is open source or at a company". "Bottom-line: the transitional developer should be one that works in a
company or in an open-source project, because universities won't pay for it, and companies don't like giving out their research. That seems to be the somewhat sad truth".
_____________________________________________________
My very short comment is: ALL SHOULD BE FREE ONLINE: free online publications, free online conferences, free online discussions (such as ResearchGate), free online tools (such as transitional developments), and so on. Now I want to cite a phrase from Vladimir Moskovkin, "Is the existing paradigm of scientific communication exhausting itself?", at the link
https://www.researchgate.net/post/Is_the_existing_paradigm_of_scientific_communication_exhausting_itself
"At the present time, when the processes of production, analysis and dissemination of scientific knowledge is under the control of neo-liberal forces , we are not capable to control the quality of this knowledge. It seems that the existing paradigm of formal scientific communications which originates from the 17th century exhausted itself. This gave an origin to a new paradigm in scientific communications - Liquid publications".
In my view, we must welcome "Liquid publications" as a form of free online publication on the Internet. Please share your proposals about other free online models; I'm sure this is an effective way to solve our problem. We (as a scientific community) must control the "processes of production, analysis and dissemination of scientific knowledge", not the "neo-liberal forces". As for "Transitional Developments", I hope this problem can be solved in the near future, and I am satisfied that it is topical enough to interest a sponsor in paying for the transformation of an offline research program into an online transitional development (an online tool on the Internet). I want to note that the author of this idea is the well-known scientist Donald Norman; see "Designing For People" at the link
http://www.jnd.org/dn.mss/the_research-practice_gap_1.html .
Maybe we could use Donald Norman's authority to solve this problem.
@Errol Thompson: I find open review an important part of a more open science community. Luckily, some initiatives are under way which I hope will gain traction:
Open review of already (traditionally) published papers: http://pubpeer.com/
Open review of (AFAIK) papers anywhere (e.g. arXiv): https://publons.com/
Up-coming open review platform: http://episciences.org/
@Eray Ozkural: One possible place to look for corrupt publishers could be: http://scholarlyoa.com/publishers/
I agree with the open access movement, to enable open and free circulation of scientific knowledge.
It is shameful that publishers expect the authors to pay to be published. The authors, professors, and researchers have already made their contribution by producing new knowledge; its diffusion should be free and on the web.
I also agree that we should build open-source/free systems for the web, and in this sense the computing community must work on classification and analysis, and the library community must get involved.
It varies according to the subject area. If authors have the opportunity to publish a paper in a good journal after sending an abstract or paper to such a conference, I think that is a good option for authors, but we must check the authenticity of the organizer.
Mohammad, I believe an open review system will discourage plagiarism of others' work, or the writing of papers with inadequate practical research, simply because it is open for anyone to review. I believe it will ultimately take us away from a model based on publication counts and reference counts to a model where genuine quality research rises to the top, and where those who have a good idea but struggle to make headway through lack of access to resources get the opportunity to work with others of similar interest. However, my view also depends on removing the competitive nature of so much research activity and funding.
Recently I participated in a workshop at Dagstuhl on exactly this point, titled "Perspectives Workshop: Publication Culture in Computing Research". Annexed to this answer is my position paper (https://www.researchgate.net/publication/236661264_Books_conferences_open_publications_culture_and_quality?ev=prf_pub). To access the seminar, go to: http://www.dagstuhl.de/en/program/calendar/semhp/?semnr=12452
I think many reputable journals in CS and CE now have much shorter review times. Journals like many of the IEEE Transactions, Pattern Recognition, Bioinformatics, etc. have a policy of getting reviews done within a month. Moreover, many of these journals put out an online version of the paper almost immediately once it is accepted, months ahead of the printed copy.
The open access journals that have appeared en masse recently are mostly of dubious quality. The same remark applies to many conferences. One can easily check that some self-proclaimed "scientific organization" organizes literally hundreds of conferences covering almost every field every year. Having many publications in these dubious "journals" or "conferences" actually impacts one's CV negatively. Any good researcher knowledgeable in the field will easily spot this kind of low-quality publication in one's CV.
I think that due to the "publish or perish" pressure, many researchers send their work to open access journals that have almost 100% acceptance rates. Even though they need to pay for publication, at least they get a "journal" paper, as opposed to just a "conference" paper (for which they would need to pay registration anyway). It is a shame that some people take advantage of this and see it as an opportunity for making mega-bucks quickly, to the detriment of scientific research and taxpayers' money. PhD students suffer too, as they fail to learn to do good-quality research from supervisors who choose to take the easy route.
Although I am wary of most open access journals, there are some rare exceptions, like the PLoS journals and the BMC journals. To judge the merit of an open access journal, you would need to check the reputation of the publisher and, most importantly, the quality of the papers the journal has published. For experienced researchers this is easy, but it may not be so for prospective PhD students just starting their research careers.
Great discussion and very interesting points of view. I'm especially interested in the insights of Errol Thompson.
Moreover, I would like to highlight the interesting approach of Manuel Hermenegildo on the "Publication Culture in Computing Research" Seminar (http://www.dagstuhl.de/12452):
"[...] CS journal papers can be of two types: rapid publication papers as well as the longer journal papers that are traditional in CS. [...] So, instead of publishing the traditional conference proceedings, have the papers submitted to a special issue of a journal which is ready online in time for the conference."
I think the different publishing model in a field such as CS, compared to, for example, a field such as physics, is due to several factors, which are not really intrinsic to the research itself, but nonetheless are not likely to change in the near future. First, I think history plays a significant role here. The American Physical Society, and the journal Physical Review, for example, date back to a time before travel across the country was easy, so the conference model of publishing was really out of the question. These journals have a well-established prestige and significant weight in the community, and the tradition of publishing in such journals is highly self-reinforcing by the community. The field of CS, in contrast, is mostly relatively modern, and it seems like the conference publication model established itself as the field expanded rapidly.
Another factor may be that much of the basic physics research is done by academics and less so by industrial labs, and I think this is somewhat in contrast to the field of CS. Many folks carrying out research in industrial labs are often involved directly with product development as well, and have less time to devote to producing lengthier, more time-consuming journal publications.
Finally, the conference model itself offers a type of interaction which the journals do not. Basically, there is a lot of human interaction that goes on: the presentation of papers in technical sessions is only a small part of all that goes on in a large conference. This experience cannot be duplicated in a journal, and may be particularly suited to a large, rapidly expanding field of research.
One more word on the open access movement. I think this movement will slowly change the publishing industry, but the key word is slow. No matter what anyone says, I believe that publishing a quality journal will always incur non-trivial expenses, and people will put up with it because of the exclusivity offered. But the profit margins are bound to come down, and more and more stuff will become easier and easier to access over time.
Hi Nooruldeen Qadir,
Regarding the Impact Factor metric: although there are problems with it, I believe it is still an objective quality measure for a journal. There are studies indicating that among all papers published in a journal with an IF, only around 15% or fewer have citation counts greater than the IF score (the IF is the average number of citations per paper published in the journal within the last 2 years), while the rest have very few or zero citations. Hence it has been argued that the IF of a journal is inflated by a few highly cited papers. But, by the same token, you can also say that it is deflated by the many non-cited papers; the deflation effect is actually MORE serious when you consider the 85% of barely cited papers. Hence, it is in the interest of a journal to accept only high-quality work with the potential to attract the attention of the research community and accumulate citations, while avoiding papers that are not going to be cited. The top journals with high IFs can attract good submissions in large numbers, and hence can afford to reject a large portion of the submitted manuscripts while still getting enough papers to fill their pages. But journals at the lower end of the spectrum are not so lucky, since they get fewer submissions and even fewer high-quality works. Hence, I am of the opinion that the IF is still the most relevant metric for measuring the quality of a journal, especially for journals in the first half or three quarters of the ranking within their field (this information is available from the Journal Citation Reports from ISI).
However, I would argue that the IF is a less relevant metric for measuring the quality of a paper. Contrary to the deflation in the case of a journal, the IF more often inflates the apparent quality of a paper (unless the paper is in the rare 15% group). So a more appropriate measure for papers, and for a researcher's credentials, is the citation count: after all, it is the impact your papers generate that counts toward your credentials. By the same argument, the h-index, total citations, total citations of the top ten papers, and average citations per paper are better metrics for judging the merit of a researcher. Many of these paper-centered metrics are complementary and reflect different quality aspects of a researcher. For example, the average citations per paper would reduce the urge to publish papers of low value, to break up a piece of work into several papers, or, worst of all, to submit basically the same piece of work multiple times to multiple venues in order to boost the paper count. Multi-venue publishing of essentially the same work is quite rampant in CS/IT, and it artificially inflates the number of outputs a researcher has. But this is a waste of time (time to write up the paper, reviewers' time, the time of other researchers reading multiple versions of basically the same work, etc.) and of journal/conference page space. If a researcher has lots of this type of paper, it actually harms his/her reputation, as this type of output is very easy to detect.
To sum up, there are no perfect metrics, but the IF is a good objective metric for a journal, and the citation count (together with the paper-centered metrics discussed earlier) is a good metric for an individual researcher.
I should add that there is a 5-year IF besides the 2-year one, too. If you look in the Journal Citation Reports, there are a number of other metrics associated with journals as well.
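To make the arithmetic behind these metrics concrete, here is a minimal sketch in Python of how a 2-year IF and an h-index are computed. The numbers and function names are purely illustrative assumptions of mine, not any official implementation:

# Minimal sketch of the two metrics discussed above. All data is made up
# for illustration; only the standard definitions are implemented.

def impact_factor(citations_this_year, items_last_two_years):
    # 2-year IF for year Y: citations received in Y to items the journal
    # published in Y-1 and Y-2, divided by the number of those items.
    return citations_this_year / items_last_two_years

def h_index(citation_counts):
    # Largest h such that at least h papers have h or more citations.
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
    return h

# A journal with 200 items over the last two years and 300 citations to
# them this year has IF = 1.5, even if a handful of papers drew most of
# the citations and the median paper was barely cited.
print(impact_factor(300, 200))               # 1.5
print(h_index([120, 45, 9, 3, 2, 1, 0, 0]))  # 3

The second example shows the skew discussed above: two papers carry almost all the citations, the mean (IF-style) figure looks healthy, yet the h-index of the set is only 3.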
The ranking debate rages across academia (journals, conferences, departments, faculties, and universities, especially in Australia/NZ/UK) and influences not only the 'reputation' of the individual but also the funding of institutions. Ranking metrics do tell us something, but there are also factors not captured by even the most sophisticated ones. A simple example: perhaps you publish in a basic research area that is currently extremely narrow and has no present application; citations are unlikely now, but in ten years' time it may be a 'fundamental' source.
I want to examine the logic of our discussion from the point of view of the end result. What is our aim? What do we want to achieve? Let me track the chronology of events to express my thought clearly. First, I should note that I mean only experimental papers in CS.
1. Assume that a student solved a mathematical exercise from a textbook. Who would be interested in this solution? I, for one, would not be interested in seeing it: in my view, the purpose of a student's solution is only to demonstrate a qualification level. Now assume that a researcher presented a solution to a CS problem. Who would be interested in that solution? I would be interested in knowing about it in detail if the solution were innovative enough. But from my personal observation, a huge number of proposed solutions are not innovative but ordinary (not high-quality, not professional, sometimes naive) solutions of the "publish or perish" kind. In my view, the goal of such solutions is only to demonstrate a qualification (educational) level via a publication, without any scientific contribution; these papers are merely qualification papers written to gain Impact Factor (the more the better). I want to repeat my main thought: if you (as a researcher) think that you have developed a "good algorithm" to solve a CS model, then you should create an accessible Online Tool on the Internet. Your Online Tool could be used by everyone (researchers, users, practitioners) from a server to solve their tasks (with their own data) and to confirm the legitimacy of the claim of a "good algorithm". Otherwise, in my opinion, your experimental paper will have little value for most researchers.
2. Assume that you want to buy a household appliance. The appliance comes with a text description so that users can apply it correctly. I want to ask you: which do you want more, to use the appliance in practice, or to read the description alone without ever using the appliance? Why do I give this example? Sometimes it seems to me that most CS papers offer and present only the appliance's description, and care only about the quality of that description. But where is the appliance, that is, the program for solving the problem? The answer is "nowhere". Thus, the typical CS paper is the description without the appliance, and I don't think that is good. In my understanding, the better the description without the appliance, the more it mocks the users and practitioners. We must offer not only the description (the paper) but also the appliance (the program); that is, we must present an updated version of the paper as "PROGRAM + PAPER", like "APPLIANCE + DESCRIPTION". In other words, the paper has to be integrated with the program into a single object, with the paper as an addition to and subordinate to the program: the program is primary, the paper secondary. I have already offered a variant of presenting "PROGRAM + PAPER" as "ONLINE TOOL + PAPER", as sketched below. Then all users could solve their tasks using the ONLINE TOOL, and researchers could also read the theory (methodology, solution approach, and so on) in the paper. Thus, all interested people would be happy (or at least satisfied).
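As a toy illustration of this "ONLINE TOOL + PAPER" pairing, here is a minimal sketch of a web service that exposes a paper's algorithm for anyone to run on their own data. Flask, the solve() stub, and the URLs are all hypothetical choices of mine, not something anyone in this thread has implemented:

# Toy sketch: expose the algorithm from a paper as an online tool.
# The solve() body is a placeholder; a real tool would run the
# algorithm that the accompanying paper describes.
from flask import Flask, request, jsonify

app = Flask(__name__)

def solve(data):
    # Placeholder for the paper's actual algorithm.
    return sorted(data)

@app.route("/solve", methods=["POST"])
def solve_endpoint():
    payload = request.get_json()     # e.g. {"data": [3, 1, 2]}
    result = solve(payload["data"])  # run the published algorithm
    return jsonify({
        "result": result,
        "paper": "https://example.org/paper.pdf",  # hypothetical link back to the paper
    })

if __name__ == "__main__":
    app.run()  # users POST their task data and get the tool's answer back

Anyone could then test the "good algorithm" claim by POSTing their own data to the tool, while the linked paper supplies the theory.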
3. In my personal opinion, today's journals and conferences are a business (e.g., see the link https://www.researchgate.net/post/Is_the_existing_paradigm_of_scientific_communication_exhausting_itself ).
The papers are evaluated by two or three anonymous reviewers, not by the community (e.g., see the link https://www.researchgate.net/post/How_valid_are_the_papers_published_in_US_scientific_journals ). Why must we depend on the opinion of two or three people? Why can we not depend on the community's opinion? From this point of view, I want to thank Marc Herbstritt and Jose Palazzo Moreira de Oliveira for their proposals from "Publication Culture in Computing Research". But I see many difficulties in realizing these proposals, since today's journals and conferences are indeed a huge business. If I understand correctly, today's journals act as providers of Impact Factors; in other words, researchers depend on a business. I want to recall that Grigory Perelman published his three well-known papers on the Internet (arXiv) without any reviewers. The second thing is the Citation Index, which influences a researcher's reputation; but this parameter depends on business too (e.g., see the link https://www.researchgate.net/post/Authorss_citation_cartel-Anyone_seen_or_heard_about_it ).
In my understanding, modern productive forces (such as Internet technologies) have entered into acute contradiction with the old relations of production (such as today's journals and conferences). If that is true, what has to happen next? Must there be a revolution?
Journals are coming around...Why?
I understand Impact Factors are based on citations within a two-year window, counted from the appearance of an article. If I cite a newly appeared article, submit my paper, and it takes two years to publish, the citation will not count for impact-factor purposes. So Journal X, which has many citations, will want to speed up the publication of articles containing references to Journal X, since that increases its impact factor. Witness what is happening in many IEEE journals (which a few years ago did not seem to mind unusually low impact factors and unusually high publication times). A small sketch of this timing argument follows.
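This is an illustrative reading of the two-year rule in Python; the function name is my own, and this is a sketch of the window arithmetic, not the official JCR procedure:

# A citation appearing in year Y counts toward a journal's 2-year IF
# only if the cited article appeared in year Y-1 or Y-2.
def counts_for_impact_factor(cited_year, citing_year):
    return citing_year - cited_year in (1, 2)

# Cite a 2011 article in a paper that appears in 2012: inside the window.
print(counts_for_impact_factor(2011, 2012))  # True
# Same citation, but the citing paper spends two years in review and
# only appears in 2014: the window is missed, the citation never counts.
print(counts_for_impact_factor(2011, 2014))  # False

Hence a journal that shortens its own time-to-publication makes citations to its recent articles land inside the window more often, which is exactly the incentive described above.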