Knowledge is evolving fast, which also implies that the contents of some older publications are no longer supported by recent publications.
Why do people continue citing older publications not supported by recent publications?
- Those who cite older publications do not believe the contents of recent publications.
- Those who cite older publications do not know the contents of recent publications.
- Older publications that are not supported by recent publications may address aspects that the recent publications do not cover.
Actually, there may be a corrective factor in effect, since at least in the humanities and interpretive social sciences, most scholars think that nothing over a decade old is worth reading, and are seriously into recreating wheels. It's very annoying to read "there is no work on X" where, two decades ago, there was a large cluster on precisely that.
Sometimes it's a matter of historical record. Ideas have a history and it is often useful to trace (or explain) the history of an idea, or set of ideas, that one is working on. Some older publications may be very elegant, or alternatively, may have been overlooked and therefore deserve to be cited. It's true, of course, that there are many older works that should just be forgotten because the work was either incorrect or has been superseded. I sometimes look at the current rush of literature, however, and wonder whether some of the writers have ever read the earlier works of relevance. By not doing so, there is a risk that paradigms already proven false will be re-invented, just to be squashed again, in an endless cycle. At least in the fields I work in, I have not noticed a tendency to cite too much of the old stuff (but then, I am old myself).
Martin's answer highlights a number of interesting points. As far as economics is concerned I may suggest the following reference: Blaug, Mark. 2001. "No History of Ideas, Please, We're Economists." Journal of Economic Perspectives, 15(1): 145-164.
As the reader will see, the usefulness of historical records is a controversial question.
Everyone's contributions have value in this discussion. However, this is a large topic that isn't fruitfully reduced to yes/no, or 1 or 2. One of the most wonderful aspects of the availability of the internet is the access to original works (some partial, some with translation problems, I know), which reveal that what we consider a psychological trait has very likely been handled by philosophers, sufi mystics, prophets, religious leaders, professors and naturalists over the past several thousand years, orally or in writing. What 'modern' psychology, a purely American invention, claims to have added to this eternal dialogue is a scientific approach. The topics aren't new, and the sheer amount of material evaluated and kept safe century after century completely dwarfs modern psychology. We also seem to believe we discovered many concepts that were already known, in forms more usable by everyday people (a very important process we still struggle with), built into many systems as a preventative method, and which, if understood properly today, would lead to a dramatic difference in how we see ourselves and reality.
For instance, Al-Ghazali's definition, observations and recommendations about conditioning and its effect on people read as if they were written in recent history, not centuries ago. His approach by far involves more genuine understanding (not moralizing, which science often does) and a real way to reduce the deleterious effects of conditioning while accepting it's a part of life, essential even, as to him it provided a 'skin' between a new, pristine you and what you learn via social interactions. I wasn't making a blank-slate comment, more a reference to a time when you had unique talents, quirks, reactions... genuinely you... things that assisted in making you human. This would be before you heard you were clumsy, before you wondered if you were good enough, etc. If you didn't have contact with the world, yes, even going through some of the most traumatic events imaginable, you would not be complete, and in layman's terms, you would appear to have been raised by wolves. So now that we have established conditioning happens, nearly all the time, on purpose and by accident, without the need to differentiate classical and operant, we can hear Al-Ghazali when he gives his recommendations. The same can be found about mental illnesses (anxiety, depression), perception, love, so many topics.
I would like to emphasize just how important it is not to see any field, scientific or not, as accurately represented only by what the current research demonstrates. So I agree that one should read the history of a concept, delve into the methods used to measure it, and trace how results and observations were interpreted over time, keeping in mind the overlay that is always present: the church, reason, industry, technology and, for the sciences, to borrow a quote, "Worshipping at the Scientific Method".
I think the question is whether a field of specialization is one of competing theories (and ideas) or not. Maybe physics and mathematics (apart from intuitionism) are not. In contrast, economics, philosophy, the social sciences and others appear to be battlefields of scientific camps.
If anybody has the time, please take a look at our new Lexicon of Arguments (www.philosophy-science-humanities-controversies.com) which presents synopses of controversies and helps you find arguments by entering VsName and NameVs or ismVs and VsIsm.
Our aim is to keep statements in the discussion, even when they are older than just a few months. And you don’t have to read a full text to be informed.
You may contribute your own new statements and arguments with just a few clicks.
After that, your name will be discoverable via Google search.
At the moment we are preparing new Lexicons of Arguments for economics, psychology, art theory, history, law.
Sometimes older publications are much clearer and more to the point: judge for yourself, and candidly ask whether you know a clearer explanation of PCA.
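(For readers who have not met it, here is a minimal sketch of what PCA computes: center the data, then take the directions of maximal variance from the eigendecomposition of the covariance matrix. It is only an illustration with made-up names and data, not taken from any publication, old or new.)

```python
import numpy as np

def pca(X, n_components=2):
    """Minimal PCA: project data onto the directions of maximal variance.

    X: (n_samples, n_features) data matrix.
    Returns the projected data and the principal directions.
    """
    # Center each feature at zero mean.
    X_centered = X - X.mean(axis=0)
    # Covariance matrix of the features.
    cov = np.cov(X_centered, rowvar=False)
    # Eigendecomposition; eigh is appropriate because cov is symmetric.
    eigenvalues, eigenvectors = np.linalg.eigh(cov)
    # Sort directions by decreasing explained variance.
    order = np.argsort(eigenvalues)[::-1]
    components = eigenvectors[:, order[:n_components]]
    return X_centered @ components, components

# Toy usage: 100 correlated 2-D points projected onto 1 component.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)) @ np.array([[3.0, 1.0], [1.0, 0.5]])
projected, directions = pca(X, n_components=1)
```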
Dear Marcel
A bibliographic reference fulfills four essential functions:
- It acknowledges the merits of the author of the text consulted.
- It gives more credibility to what the author writes.
- It lets readers locate, confirm and explore the source from which the information was extracted.
- It works as a kind of "memory aid" for the author, allowing him or her to return to the source later.
Dear Nelson,
I agree with your analysis from the author's point of view, but then there is the analysis from the reader's point of view.
I can give an extreme example to stimulate thinking:
When you take the word 'Testosterone' you will get >100,000 references in Web of Science, of which, let's say, no more than 10 will be cited in a publication dealing with testosterone (in birds, for instance). How can a reader handle this deluge of literature information to pick the most relevant ones for the manuscript being prepared? And what to do with relevant publications not accessible to many readers (e.g. local journals, foreign languages)...
The most basic argument is as follows: in the human sciences at large we distinguish the theoretical corpus and the secondary literature. Well, ancient texts, i.e. classical texts, are primary sources. The most recent literature should be somehow based on those sources and then discuss the interpretations of them.
Classical sources should by no means be taken for granted.
Perhaps there is a difference between philosophy-based literature and science-based literature in how old publications are perceived and how knowledge evolution is tracked. Clifford apparently refers to philosophy.
Dear Marcel, There is this fantastic (real) anecdote in physics:
When D. Bohm was silenced because of the McCarthy policy in the times of the cold war (he was accused of being a communist, which he was not, etc.), Bohm lost his courses at Princeton and was totally at a loss. Well, he decided to go to (in those times) the end of the world, namely Brazil.
In those times with no internet, Bohm didn't have journals and updated books. But (a great but!) he kept with himself the old basic papers by people like Planck, Bohr, Born, Einstein, and the like. (I shall not mention here his meetings and encounters with R. Feynman in Brazil.)
The point is, thanks to his having not the updated but the older publications, and thanks to D. Bohm himself, quantum physics was re-born. Let me just remind you that at the time the paradigm was nuclear or atomic physics. Via Bohm, later on Bell would contribute to the revival of quantum physics.
My point is: let's consider older publications in the light of a sensible and intelligent mind. This notwithstanding, recent publications keep science and philosophy alive...
Hello C.,
It's always good to know the history of a given field to understand the final outcome, that is, what is produced today. I don't know if we are smarter. At least there is more written literature today than, let's say, 2000 years ago, but there are also many more people all contributing to the knowledge construction. And a construction sometimes has to be repaired and cleaned.
That is indeed the case! Provided one implicit assumption: even though it is almost never openly declared, every generation defines itself as smarter than the older ones. Perhaps with the notable exception of the Renaissance thinkers, who looked back to ancient Greece to feed their own impulses to step forward, away from the dark Middle Ages of their (recent) past.
Seen from the standpoint of non-monotonicity, the answer is yes: monotonic assumptions are those that do not suffer frame change, whereas non-monotonic assessments are those where new information modifies previous claims.
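As a toy illustration of that distinction (the rule and names are hypothetical, just a sketch): under monotonic deduction, adding facts can only add conclusions, whereas under a non-monotonic default rule, a new fact can retract an earlier conclusion:

```python
# Toy non-monotonic (default) reasoning: "birds fly" unless the bird
# is known to be a penguin. Adding a fact retracts a conclusion,
# which can never happen under monotonic deduction.

def conclusions(facts):
    """Apply one default rule to a set of string facts."""
    derived = set(facts)
    for name in {f.split()[0] for f in facts}:
        if f"{name} is_bird" in facts and f"{name} is_penguin" not in facts:
            derived.add(f"{name} flies")  # default conclusion
    return derived

facts = {"tweety is_bird"}
print(conclusions(facts))       # includes 'tweety flies'
facts.add("tweety is_penguin")  # new information arrives
print(conclusions(facts))       # 'tweety flies' is no longer derived
```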
The issue pertaining to the very possibility of the progress of knowledge should not be left far behind.
@ Clifford
What an absolutely sad picture of science you draw there. And I'm not sure it isn't true. But isn't there an inconsistency when you speak of the human condition never changing? What about the scholarly tradition that yielded results that are not the object of political power play, such as relativity theory? If I understand you right, RT was developed in the same environment of the human condition. Do you think these times are over now? What changed then?
With respect to recent discussions here about the nature of "science", I do not think it is mostly about "irrefutable proof", which can rarely, if ever, be obtained. Practical science is about finding the best interpretation of the current evidence, and is always subject to re-interpretation when new evidence arises. Scientific minds are supposed to be open. In practice, scientific knowledge progresses as a result of social processes that involve debate. These processes are naturally flawed by the nature of human social interactions, which are highly subjective. I see this as reason to integrate understanding of the social and psychological process of learning into the interpretation of evidence in drawing conclusions. I also see this as reason to maintain debate until reasonable consensus is reached. In medical, legal and environmental practice, it may mean that motive has to be assessed. This may mean asking the question "Why are you doing this?", a question which scientists tend to avoid in my experience.
I have never had a problem with finding out that I was wrong, because it is usually the foundation of discovery, and the joy of discovery is generally what motivates scientists, in my opinion.
Dear Marcel, you have raised a very genuine question which I myself have raised on various occasions while listening to PhD viva presentations by different students. In the geosciences, ever since the advent of plate tectonics, the explanations for mountain building and valley formation have witnessed such drastic changes that it can simply be called a paradigm shift. There are researchers who either do not take into account these changes or fail to appreciate the magnitude of the changes the modern theories imply. This happens because, in spite of calling the degree 'Doctor of Philosophy', in various research programs the 'Philosophy' part is highly neglected. Data collection, plotting and interpretation are done in a highly routine manner. On a number of occasions, interpretations are based on certain published work by some other authors and then made to conform to their findings. The logic is identical to that of law practitioners who cite different verdicts to make their arguments stronger, most of the time avoiding the context. In a 'publish or perish' world, with certain schemes of points determining the intellectual existence of academic workers, the issue raised in this thread hardly carries any significance. However, for any serious researcher, referencing should follow a policy rational enough to separate paradigms.
Hello Clifford
Regarding the question: "And if theories are the best interpretation of the evidence that leaves a great deal of scope for dispute about which theories are the best of competing theories. That leaves it to an individual's judgement whether there is a majority consensus or not?"
Yes, of course! Scientists are human, and anyway it's more fun to be in the minority than the majority.
It is useful to discuss science with a lawyer because rational argument is used in both contexts, while some aspects of the associated social systems are quite different. In practice, scientists often make their hypotheses (not theories, which are too grand for most of us) after having had a hunch which they test to see whether it's a foolish guess or a possibility. In my experience, these hunches tend to be based on observations of natural phenomena that are puzzlingly inconsistent with the model (paradigm) that is in my head. And so I devise an experiment, or opportunity for specific observation, that will test an aspect of an alternative model. This may be a very elegant and complex experiment, but more often is just a simple test allowing me to verify doubt (i.e. discover whether there is an opportunity to falsify an aspect of the model). There are usually complex series of such tests.
Not being a lawyer, I don't know how the legal system works, but it does seem to me that lawyers operating within the court system, at least, may take positions that they don't actually believe in for the sake of their clients. Academic scientists (and probably academic lawyers) don't do that in my experience. Most of the time we muddle along until we come across an opportunity to design a fine experiment and we then put a lot of effort into doing it in such a way that only one interpretation is possible.
Experiments are inevitably flawed, however, and so other scientists come along and exploit the flaw. Scientists compete with each other (as do lawyers), but in a sense are mainly competing with something greater than other scientists: with 'Nature'. We are trying to discover 'Nature's' secrets, and Nature has no stake in the game. Perhaps this is our fundamental difference, in that the 'laws' of Nature (over which we have no control) are fundamentally different from the mutable moral laws of legal practice.
Hello Clifford.
Regarding your second question: "So in that case if one subscribes to one view and not another is it valid to cite the prior papers which favour that view over papers which favour another view when one is writing about a theory to which one subscribes?"
Properly, in the situation you describe, scientists should begin a publication by explaining that there are competing hypotheses, and should describe the basis for each and how they differ. There are often review articles that make it easier to do this. Because many scientists have a stake in one side or another, they are biased and the bias is presented by writing something like "In our previous work ...". The reader is then properly informed of the socio-historical context.
"Science" is rife with works in which this is done poorly, and with bias, but those works get weeded out over time. The peer-review process is supposed to keep them out of the literature, but peer-review is a messy process and often fails to do so.
Clifford,
Yes, of course, such situations are commonplace and generally non-contentious. We sometimes call it "descriptive" work and we set it in context so as to point out its utility. This is commonly the case in applied science, in which utility is more important than theory.
@ Siddhartha
I am in agreement with the question of where the "Philosophy" of a "Ph.D." can be found, and have woven it into every class I've taught. I have been told by mentors that a Ph.D. is a scholar: a person who loves to learn new things (research in some form), shares what they know (teaching) and uses what they have learned to improve mankind's lot (application). I have been told by philosophy professors that philosophy was chosen for this highest degree because, like philosophers in general, someone pursuing a Ph.D. loves knowledge and thus reads, discusses, discovers, and is open to any and all information related to a specific topic; he is relentless, as happy as a philosopher can be delving into some aspect of human nature. Also, like philosophers, doctoral candidates must be exposed to semantics and learn to use language effectively, given the role of communication in the candidate's theoretical future. If what I have been told is accurate, where is the education, exposure, discussion and training in these areas?
@Kelly
If by philosophy in the applied sense we understand 'investigation of the nature of being', and while practicing find ourselves standing in front of different 'systems of philosophy' from which we are supposed to choose to validate certain arguments/data-sets/models, dabbling with trans-paradigm nomenclature does not help to facilitate better communication, nor does it expand the boundary of semantics. You are right: there is a serious lack of education/exposure/discussion/training in these areas. These days, PhD candidates are supposed to do some course work. But without proper counselling and in the absence of faculty in the field of epistemology, most institutes routinely offer certain subjects which at best help to improve certain technical skills; the enrichment of philosophy as a common pursuit remains a far cry. As a result, 'referencing' in the majority of situations does not follow a rational policy, a right concern expressed in this thread.
I was not aware that a 'movement' (maybe?) was in the works to attempt to add philosophy to doctoral training. However, I completely agree that, like statistics, to be useful, one can't simply add a class or two. You do need guidance, background, lower division courses...context. Thanks!
In fact the Ph.D. became "Ph.D." for historical reasons. Ancient learning was divided into medicine, law, theology and philosophy. The first three were "professional" in that they trained students to take up those professions, so that they became medical doctors, lawyers or priests. Philosophy, on the other hand, was the program that did not train students to become anything. In fact, if they "became" anything they would be "philosophers", meaning, of course, that they could not do anything. And in medieval times there was no distinction between natural scientists and humanist scholars, so everybody became a "philosopher"; natural science was known as "natural philosophy" then. Those who are really good at these things are thus known as "doctors of philosophy".
The original question seems to be why researchers do cite older publications that are not supported or backed by recent publications.
Certain statements are quoted very often although they were only peripheral in the work of an author. They are quoted just because they are easy to remember, e.g. Adam Smith's "invisible hand". New publications try to argue in the name of the famous author but at the same time try to be independent of him. New contexts and their implications will then overgrow the intentions of the author. The loss of the original intention will be perpetuated if subsequent quotations stem from later sources. After some time the original author is confronted with more or less contradictory opinions he never uttered. It is good to go back to the roots then, even if it's only in order to assess how peripheral the original statement was.
Martin's example of Adam Smith's invisible hand can be somewhat generalized. To put the matter as simply as possible, let me remind the reader of how, in our undergraduate days, all of us were impressed by the apparent depth of what our white-haired professor of logic was teaching us about the basic rules of deductive reasoning, the inconclusiveness of inductive inferences, the pros and cons of logical empiricism and, for good measure, the logical fallacies of appeals to authority, ad personam arguments, and the like. A few years later, as graduate students or young assistant professors, we soon caught ourselves browsing journals in our libraries and selecting articles as worthy of reading on the basis of these very same fallacious patterns of reasoning. To sum up: many quotations of older publications are simply attempts to appeal to some kind of authority.
The sceptical viewpoint: we are in an era where we often ask ill-thought-out questions. Asking questions supposedly shows we have an inquiring mind, and taken at face value they support our roles as 'academics', 'professors' or 'philosophers' (great comments above). Put many of these questions in front of an objectively critical panel, however (like a funding stream), and see how many of them are considered 'worthy' of investing in.
It is normal to see instructions for 'recent' references or 'references not more than 10 years old', but what is important is not their age but their relevance. If a certain element has been adopted into the common orthodoxy of the field, then unless a specific angle of the original work is being raised, IMO there is no need to cite 'dated' work. On the other hand, if 'modern' literature does not show its provenance, then simply being 'recent' does not make it of any greater value. Martin Schulz covers this point well (above).
An important point in Marcel's question is the bit about 'Why do people continue citing older publications not supported by recent publications?' I think that 'not supported by' could perhaps be better interpreted as the more modern publication actually takes issue with or contradicts this earlier work. Simply not mentioning an established thinker that relates to the field does not mean you are offering unsupported opinion. Lots of superfluous references are seen in university-driven papers (especially to methodology textbooks ;-)), often by students who have been told to 'support' their statements with references. To me, a 'supporting reference' shows that the author has based their thought on information drawn from x,y,z, and that is how they reached their conclusion - their sources contributed to their 'new' perspective. We have plenty of current examples however, where the practice of 'repeating' the words of others is in conflict with demonstrating new knowledge. I think relevance is a key issue here.
How can we fix relevance? We may count citations, but I think it is not so easy. Today we often use automatic search. Problems for automatic search are new concepts, outdated concepts, concepts with a slightly different meaning. An old concept with a changed meaning may then show up in the context of automatic search but the finding will not always make clear that the meaning of this concept is slightly changed.
Concepts may occur more often in irrelevant contexts than in the field under consideration. This creates a lot of misunderstanding about the scientific value of a contribution that uses concepts in a new manner.
An example of diverging use of concepts without any controversy between scientific camps is this: meaning, reference, sense. G. Frege, who established a whole new field of philosophy at the end of the 19th century, used these concepts in a way that is outdated nowadays. Every author writing today about the issue has to quote him and make clear in which way he uses the concepts. This is annoying and causes a lot of headaches. It leads to a need for clarifications like this:
- "meaning" = Frege's "Bedeutung", but not today's "sense"
- Frege's "Sinn" = today's "meaning", but not today's "sense"
- "reference" = Frege's "Bedeutung", but not Frege's "Sinn" and not today's "sense"
And so on. You can imagine that automatic translations of cases like this produce chaos. Most publications in our time are forced to make some preliminary comments on the terminology even when they do not quote the "old" author. So texts will get longer and longer. I think this will continue for some time.
Nicholas, you asked about relevance: I don't know how relevance can be fixed given the problems mentioned above. I propose a knowledge base where terminologies are documented, in order to show which concepts became outdated, which concepts changed their meanings, or why an author doesn't use a concept at all. (This may be because he belongs to a different scientific camp.) It should be a knowledge base kept up to date by researchers with a few clicks, allowing shorter publications with a link to the knowledge base for agreed terminologies. This would make quite a few quotations superfluous. Something like this can easily be set up for every field of specialization:
http://philosophy-science-humanities-controversies.com/table-scientific-camps.php
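To make the proposal concrete, here is a minimal sketch of such a terminology lookup; the schema and entries are hypothetical illustrations, not the actual structure of the site linked above:

```python
# Hypothetical terminology knowledge base: each concept records how
# different authors (or eras) use it, so a paper can simply declare
# "we use X as author A did" and link here instead of a long preamble.
terminology = {
    "Bedeutung": {
        "Frege": "what a term stands for (close to today's 'reference')",
        "today": "roughly 'meaning'; no longer used as Frege used it",
    },
    "Sinn": {
        "Frege": "mode of presentation (close to today's 'meaning')",
        "today": "often rendered as 'sense', which diverges from Frege",
    },
}

def usage(concept, author):
    """Look up how a given author uses a concept, if documented."""
    return terminology.get(concept, {}).get(author, "not documented")

print("Bedeutung, as Frege used it:", usage("Bedeutung", "Frege"))
```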
Martin - I am not sure I want to fix relevance, as it is fluid and contextual to individual perception. Luckily, in my writings and research nobody has been so restrictive as to impose any definitive 'anchors'. Likewise, I extend the same to my own observations and readings. I can only offer the best explanation or rationale I have... and let others make of it what they will ;-)
Marcel,
I think it's absolutely tricky, and it became even trickier in times of automatic search and automatic translation. As far as I can see, it is not realistic to aim for a consensus.
My talk of "agreed terminologies" was misleading. I meant a synopsis that is permanently kept up to date for one field and at the same time shows the conceptual boundaries in relation to other fields. The synopsis in my link confronts the diverging uses and shows the reasons for them. It's not a consensus but a bunch of controversies! And you do not have to read 20 papers to get it. Our aim should be shorter publications with references to documented uses of concepts. So one might simply write "I am using the concept X in the way author A was using it and not in the way of author B" without having to make further explanations that sometimes take up one third of the whole text. The reader may then look it up in the tables if he likes. The main result of the new paper can then be shown in the tables thereafter. This should save a lot of time and show the whole of the discussion and the state of the art in a condensed form. Does anyone have another idea?
Dear Martin,
To me, your proposition is very constructive. It happens already with books devoted to one word like 'Adaptation', but the book approach is probably not dynamic enough?
History is important, since it shows how developments occurred. By reading history you frequently find that your "own ideas" are only partially your "own ideas". To predict the future you need to know history. Please read the true story of Ignaz Semmelweis, who, long before germs were discovered, introduced hand-washing before examining pregnant women and thereby drastically reduced maternal mortality. Suddenly, in 2014, the importance of hand-washing is again realised, as shown by several publications in medical journals, including the South African Medical Journal. This is only one of numerous examples. Regards, Johan (JT) Nel.
Do we have any papers that support this? I think the first statement is very true, but the measurement of quality is problematic, especially when the internet renders so much more material available (reliable and otherwise).
Concerning the second statement of Abderrahmane about internet accessibility and the response from Nicholas: there is much more material available today via the internet, but how can we truly know which old or new publication is right and of high quality versus wrong and of low quality on a long-term basis? Few studies are truly replicated from a methodological point of view. It's not about the mass of information; it's about the quality of information, is it not?
Mass of information and its consequences:
It's like a needle (an excellent publication) in a haystack (the total mass of information produced in RG, Google Scholar, Web of Knowledge...). Is the needle (the excellent publication) more difficult to find when the 'haystack' (e.g. the internet and its exposed information) becomes larger and larger and larger...?
Dear A.,
But then you displace the problem to another scale. What might be a high-quality source for some is perhaps not a high-quality source for others, and you need to be an expert to judge what a high-quality versus low-quality source is, do you not?
If important early work represents a foundation from which knowledge of a particular subject has evolved, it is relevant and should be cited. Failure to cite older papers may be an indicator that the researcher is unaware of earlier work.
There is an interesting (I think) example of the "older publications that are not supported by recent publications" as cited in cutting-edge quantum physics literature (a quick look through various databases can only give a rough idea, but in the APS journals alone over 1000 papers have cited it since 2010, and that number grows as high as ~4,000 for some bigger academic databases I looked at). The paper is generally referred to as EPR (for its authors' surnames: Einstein, Podolsky, Rosen) but was published under the title "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?" (Phys. Rev. 47, 777, 15 May 1935). It was one of Einstein's last and most devastating attempts to show beyond doubt that either quantum mechanics wasn't a complete theory or it couldn't be a physical theory. Even better, the argument it used to do so formed the basis of another famous publication known mostly by its product: Bell's inequality. EPR argued that quantum mechanics allowed for a physical system to have classically incompatible states (or "observables") simultaneously. One such consequence (which Einstein derisively called "spukhafte Fernwirkung", commonly translated "spooky action-at-a-distance") led to Bell's incredibly frequently cited 1964 paper, in which he devised an inequality whereby measurements of a particular type necessarily entailed nonlocality (for the purist: I'm simplifying here, as assumptions like realism aren't exactly relevant).
Empirical support, however, was rather scant until the groundbreaking experiments by Alain Aspect and colleagues, esp. as published in Aspect et al. (1982), "Experimental Realization of Einstein-Podolsky-Rosen-Bohm Gedankenexperiment: A New Violation of Bell's Inequalities" (Phys. Rev. Lett. 49, 1804). The problem was that the empirical support, now repeated in hundreds upon hundreds of other experiments in various ways and at distances of ~15 kilometers, showed basically what Einstein and his co-authors had argued quantum physics entailed, but not what their argument concluded. Their argument was that such experiments could not and would never take place, because nonlocality was clearly and obviously false; hence the conclusion that QM must be either incomplete or wrong. Instead, the logic of the argument, combined with a couple of later papers including Bell's, is still frequently used to show that what Einstein had considered absurd even in theory can be demonstrated empirically.
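For readers who want the formula behind this, here is a sketch using the CHSH form of the inequality (a standard later variant, not Bell's original 1964 expression): any local hidden-variable theory must satisfy |S| <= 2 for a certain combination S of four correlations, while quantum mechanics on an entangled singlet pair, whose predicted correlation between analyzers at angles a and b is E(a, b) = -cos(a - b), reaches 2*sqrt(2):

```python
import math

# CHSH sketch: quantum correlation for the singlet state between
# analyzers set at angles a and b.
def E(a, b):
    return -math.cos(a - b)

# Angle settings that maximize the quantum value of S.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, -math.pi / 4

S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)
print(abs(S))  # ~2.828 = 2*sqrt(2), above the local-realist bound of 2
```

This violation, at the level Aspect-type experiments observe, is exactly the "spooky" consequence that EPR thought reduced quantum mechanics to absurdity.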
The literally thousands and thousands of physics papers, monographs, conference proceedings, etc., that have cited EPR since 1982 alone have done so almost always to use the logic of the paper's argument in a way Einstein never intended. This is especially true of the thousands of peer-reviewed empirical studies citing EPR, which generally report experimental realizations of precisely the sort EPR argued couldn't be shown, as they were (supposedly) impossible. Here, then, is a paper not in classics, comparative linguistics, philosophy, history, etc., but in modern physics, and cited not simply in historical works, textbooks, and philosophy-of-physics scholarship but in studies published by teams at places like CERN; physics labs at universities like Cornell, JHU, and Oxford; institutes like the Max Planck Institutes and IQOQI; collaborations like BESIII; military/academic research labs (e.g., ESSD at the Naval Surface Warfare Center in VA, USA); and on and on. Moreover, other papers that would seem likewise outdated are frequently cited alongside as well as independently, such as Bell's paper from the 60s (or even Aspect et al.'s study), although unlike EPR these are not inconsistent with over 50 years of research and two decades of vast empirical evidence.
There are certainly other good examples of papers that are not consistent with modern theories or are outdated but are frequently cited. Examples that spring to mind are Shannon's 1948 paper, which basically founded information theory but for various reasons (not the least of which being quantum information theory) simply can't be cited for anything other than as the founding work in all things information-theory related; Chomsky's hopelessly outdated Syntactic Structures and his review of Skinner's attempt to explain language via the framework of behaviorism (a paper more famous and far more cited, not to mention read, than the book it reviewed); McCulloch & Pitts' 1943 publication of a study which is still used both as a model for neuronal dynamics and for artificial neural networks, and likewise the far more neurophysiologically oriented paper cited everywhere as perhaps the basic neuronal model, by Hodgkin & Huxley (1952); the thought experiments from Schrödinger's famous cat to Wheeler's delayed choice (the former dating back as far as EPR) that have now been realized empirically; Miller's "The Magical Number 7" (on display in a case of pictures, original copies of papers and books, and other historical trinkets, plaques, etc. outside one of Harvard's neuroscience labs in the psychology dept.); Thomas Bayes' posthumously published 1763 paper, recognized as the founding work in Bayesian statistics and cited as the origin (although it isn't, exactly) of Bayes' theorem; Heisenberg's formulation of matrix mechanics (despite his not knowing what a matrix was) as well as the equivalent mathematical formulations of quantum mechanics by Schrödinger and Dirac; the two papers that together form what is commonly referred to as the Sapir-Whorf hypothesis in linguistics; the work by von Neumann and his co-author Oskar Morgenstern, which created game theory in 1944 (Euler's single-handed foundation of graph theory is still cited, but as it is so irrelevant to modern graph theory it's barely cited in modern mathematical research); and far more that I am forgetting, can't think of, or have decided not to mention as this is already wayyyy too long.
Works that are old and somewhat or mostly out of date but are cited in modern research, rather than in historical reviews or similar treatments, tend to share certain traits. They are almost always foundational, making it possible to cite them and ignore subsequent work until the next big leap forward. They tend to contain material that, thanks largely to the increasingly specialized nature of scientific research, contributes to multiple fields. There is no standard historical treatment which could be cited instead, and often no reason to cite one if there were (as there is for most of the developments in mathematics in the 18th and, to a lesser extent, 19th centuries; this is even more true of physics, where we find constant references to, e.g., Young's double-slit experiment without any citation of Young). There is something in them that continues to be relevant even if, as is the case for EPR, it runs entirely against the nature and intent of the work. The more empirical the work, the less likely it is to be old (works from the 40s and 50s that are still regularly cited, like Watson & Crick's, McCulloch & Pitts', Hodgkin & Huxley's, Beadle & Tatum's, and Tolman, Ritchie, & Kalish's, are the exception, and tend to be cited less frequently and in scholarship with a more theoretical and/or historical bent).
The simple fact is that even in cutting-edge applied sciences like nanotechnology and quantum computing, foundational and still relevant works exist written over 50 years ago.
Dear Andrew,
Great of you to mention some general characteristics of studies that may allow them to stand the test of time. It is interesting that different people may interpret the same article content in different ways, also with potential consequences for more recent research directions. Do misinterpretations of article contents create novelty in science?
A novel idea might be one or two sentences hidden in a book of, let's say, 300 pages, and rediscovered many years after the publication of the book.
It's not that the thousands of papers on EPR interpret it differently. EPR clearly argued that QM entailed particular things, and subsequent citations accurately captured those. The difference between EPR and the papers that continuously cite it for conclusions its authors claimed were nonsense isn't the logic of the argument. It is simply that in EPR the argument is used to discredit QM on principle, while in modern citations the logic still holds, except that now it has been shown to support empirical findings. There is no misinterpretation: everybody knows EPR was intended to argue against the view that they cite it for, but nobody cares. The argument is sound. Einstein and his co-authors didn't wish it to be (they considered it valid and therefore unsound, because were it sound then its absurd consequences would have to be true). The logic doesn't change, and thus the only novelty is the realization that the argument does indeed show what it was designed to show, while the consequences of this are not what the authors intended. This isn't misinterpretation.