When should we say that a researcher or a team of scientists makes a "mistake", and when are they committing an "infringement"? Who has the power to determine this difference? The scientific community of peers? The judges of the courts of justice? Or the community of citizens, on the basis of their usefulness?
I'd use the example of a recent [alleged] error, which resulted in severe punishments for some seismologists convicted of committing "errors" [or "infringements"?] by providing a wrong scientific judgment about earthquake probability in the L'Aquila area (2009) — http://en.wikipedia.org/wiki/2009_L'Aquila_earthquake
The issue of "scientific errors" and "infringement of rights" by scientists is very complex and, on the other hand, clearly extends beyond the boundaries of the scientific community.
In my opinion, a researcher does not commit an "infringement" so long as his actions are purely scientific and do not intrude into the public sphere. Such a researcher should be evaluated only by his own environment, taking into account the purely scientific aspects, which may affect his prestige in that environment.
Following this line of thought, an "infringement of rights" may be claimed against a scientist who decides to present his opinion publicly, in the non-scientific community (e.g. political, socio-economic). Experts appointed by various public or government bodies, or scientists acting as forensic experts, may for example find themselves in this type of situation.
I think it depends on the magnitude of the error and the impact it may have on others. That is, if the impact is very small and only affects the reputation of the researcher, it could be a scientific committee that discredits him; but when, as in the case of the earthquake, it affects others, it is right for it to be tried by a judge.
An error is something you do not make on purpose. An infringement is something you do although you know that it is wrong.
OK with Tiia's answer. But this would excuse, e.g., doing a bad medical job because you are lazy and disinterested. I am sure such a lazy doctor is convinced of his own behaviour, yet his behaviour is nonetheless an infringement! I think it is better to define infringement as acting in clear neglect of established rules.
I'm with Hanno. Some mistakes are made in good faith, yet stem from negligence. Such mistakes should not be taken in stride when they have disastrous consequences, but examined under law.
Scientific errors are not always harmful. Some generate new and very important findings, such as penicillin.
Collective arrogance among those who have published a lot in support of an error leads to the formation of a cult over the course of time, and it is difficult to break this nexus.
I agree with Hemanta. There is an example of this difficult nexus: the pseudoscientific movement named Lysenkoism - http://en.wikipedia.org/wiki/Trofim_Lysenko. In that case only TIME served as the referee of the mistakes in those "scientific" results.
Seismology, as a science with a very high level of uncertainty, produces results with the same level of uncertainty and has the right to make a mistake (within the range of this uncertainty).
The intent of an action is what determines whether a person is guilty or not in a possible trial. If a person is tried for an error he committed, one can then see whether the error was intentional or not. The important thing here is that this error was very serious and cost the lives of other people. I feel that in this case it is important to know who bore the greater responsibility and then make a decision about whom to judge, because this is not simply judging a researcher but a person in public service who holds the safety of others in his hands.
In this case I think it was right that it came to a public trial.
Who should detect errors?:
1) author(s) and collaborators within the same project (project management methodologies provide such tools within the research team; harmful procedures should be supervised),
2) colleagues and superiors of the author(s) within the same research center(s),
3) peer-reviewers and editors,
4) all readers - other scientists.
Of course, it may depend on the kind of error.
Who should detect infringements?:
2)-4).
When?
As early as possible, even when the errors are not harmful.
Why?
Because we look for true knowledge, and early detected errors may spare efforts, time, and funds.
Going back to the initial question, these Italian scientists had no chance at all. Had they announced this earthquake six days ahead, they would probably have been called "alarmists", as was the Italian technician from the article. Besides, the losses seem to have been caused by bad construction of the buildings.
Thus, if they relied only on their scientific experience, they probably committed only an error. But if they had exactly set rules for announcing the danger, and the responsibility for doing so, and they failed to follow them, then they probably committed a crime.
I am submitting for further thought the so-called "Contergan" case. The reasons are these: a. the drug was the result of research, b. it has been useful in a lot of cases (leprosy, cancer) except one, c. market pressure was approximately the same in all countries, d. some countries kept their rules, some countries didn't.
http://en.wikipedia.org/wiki/Thalidomide
This article can be useful:
de Jager Cornelis (March 1990). "Science, fringe science and pseudo-science". Quarterly Journal of the Royal Astronomical Society 31 (1): 31–45. Bibcode:1990QJRAS..31...31D. ISSN 0035-8738.
Dear Joseph and all,
thank you very much for your valuable contributions.
I have chosen the example of the six Italian seismologists and the L'Aquila earthquake because I find that this singular situation overturns the classical distinction treated in the Quarterly Journal article, revealing a conflict in the relation between science and truth.
In particular, what made that situation different was that the six seismologists fell into the paradox of asserting **a scientific truth that never came true**, to counter **a scientific falsehood (maybe a fraud) that came true afterwards**.
Those seismologists, in fact, tried to turn off the alarm generated by a pseudo-expert who did not belong to the accredited scientific community and who was forecasting the L'Aquila earthquake on the basis of an unproven and totally fortuitous pseudo-precursor.
Hence, the seismologists were condemned because they used the scientific method correctly. What is, then, the scientific error?
Dear Giuseppe
Since the beginning of time, man has suffered from the "why", questioning how he was created, what the basic elements of his nature are, the reason for his existence, etc.
Even the greatest geniuses make mistakes. However, their flaws often proved very useful for new discoveries later on.
Here are five of the greatest scientific mistakes in history, which ended up being of paramount importance to science:
1) Darwin's notion of inheritance
2) Kelvin's estimate of the age of the Earth
3) Pauling's triple helix
4) Hoyle and the Big Bang
5) Einstein's cosmological constant
Scientific errors do not always end in defeat and squandered money. Some generate interesting new discoveries. One of the most famous examples is Viagra, which was originally created to treat angina pectoris. Although the drug failed to increase blood flow in the heart, patients noticed a curious side effect: another part of the body had an increase in blood flow.
Hi Nelson,
In a previous answer I also emphasized that penicillin was an "accidental discovery", deriving from a scientific error in 1928 related to an oversight by Fleming.
But from another view of the matter, review according to criteria established by the journal can create a niche, a kind of monopoly in scientific publishing that greatly limits the entry of potential competitors. On the other hand, when I act as a reviewer for different journals, I note that the selection criteria may take different approaches, particularly in the theoretical reasoning, the method, the results... This heterogeneity has implications for the rejection of articles. I think the criteria should be more consensual and homogeneous to ensure more justice and fairness in terms of quality in the review and approval of articles.
During the review of articles you can detect multiple faults, weaknesses or even methodological errors, depending on the experience and rigor of the researcher, but in my opinion it is plagiarism that matters most.
This is not my subject at all, but I want to share a small anecdote. A friend of mine works in high-energy physics, work that has no foreseeable use and is rather costly too, so the chance of some practical consequence is slight. Still, some of their articles are written by five people, signed by three thousand, and purportedly read by a handful. So my friend, and I think he is right, holds that in some fields one could as well do away with authorship and individual responsibility altogether. If science is much like a factory ..... (the reader might well try to finish this sentence him/herself).
Giuseppe.
I have been following this question for some time, but waited to reflect before contributing on a subject that cannot be given an irresponsible answer: ethics.
I will make a contribution about the posture adopted by a Brazilian researcher in 1950 that may serve as an example to all.
In Revista Brasileira de Geografia, January-March 1950, vol. 12, no. 1, J. Sampaio Ferraz published an article entitled Iminência duma "Grande" Seca Nordestina (Imminence of a "Great" Northeastern Drought). With an extremely limited amount of data, and based on a theory that was then very new, this author drew a correlation between severe droughts in Northeast Brazil and sunspot cycles; anyone who follows climate science knows how controversial this theory remains in the present day (and it is considered recent by many).
Despite the novelty (at the time) of the theory, and despite the risk of loss of credibility (the researcher said he was aware of the professional risks he was taking), he published this article. He had presented the idea at the Eighth Scientific Congress, May 1940, Washington, D.C. (proceedings 1942, Vol. VII, p. 333): Suggestions for Explanation of Probable Connections between Solar Activity and Rainfall Variation in Southeastern Brazil.
In the introduction of the first article mentioned above, he makes it clear that empirical relations are subject to errors and that he was not completely sure about his prediction. But at the same time, he knew of the hundreds or thousands of deaths that could occur if the government and the people of the region were not prepared for the event.
What happened? The author published a letter in 1953 in the same journal, titled "The Current Northeastern Drought", where he reports that the drought did occur, though to a slightly lesser extent; the government did nothing, and only one researcher continued to study the area. This latter article implies that his work had fallen into disrepute, but he kept trying to follow the prediction line.
So let's see: a researcher who had no obligation to make predictions, basing himself on a new theory, dared to put his name at risk because of the social problems his work could help mitigate, and dared to put his credibility at risk, even making it clear at the time that his work had limitations.
Now what did the Italian scientists, who had the obligation to make predictions, do? They hid behind the uncertainty inherent in any earthquake risk prediction and ventured nothing, so as to safeguard their scientific credibility! It was a typical violation of the precautionary principle!
Until recently I worked for the oil industry, where jobs with a high risk of error are accepted because of the immense benefit if they work out; oil companies accept the risk for the sake of probable success.
Why, if this is acceptable for business, can it not be acceptable for human lives?
The miscalculation by NASA that resulted in losing the Mars orbiter (the miles vs. kilometres issue!) is what I would consider a scientific error: it was not done on purpose and cannot be referred to as an infringement.
Regards
Theodora Issa
In this book:
"Heisenberg and the Nazi Atomic Bomb Project: A Study in German Culture"
one of the famous scientific errors (the fast-neutron fission cross-section) is described.
I have read an article regarding pseudoscience; let me check the link, it may be helpful.
In science we have allowable margins of error; what we need to know is whether these errors are within the set limits. Any good scientific process comes with certain assumptions.
Assumptions, accidental errors or miscalculations might be tolerated; however, infringement (knowing that you are fabricating something) is a major NO NO NO!
Regards
Theodora Issa
In the Netherlands, where I live, there is much attention in the newspapers for a professor who publishes weekly, but often with quotations from his own work and from work he purportedly did with others, and without acknowledgments. Self-plagiarism it is called, and it illustrates that scientists can be very human and that ethics is larger than correct methodology. The university used to be very proud of him, and no doubt used his impressive 'productivity' to tell his colleagues to publish or perish. They are reticent, to say the least, to really investigate whether it is only one thousand pages (of the six thousand pages he published) or much more. So it is not just the individual scientist but also institutes and finally society as a whole that should fight corruption and strive to live a moral life.
Well, I agree in full with the big picture emerging from your comments.
And I'm intrigued by the particular symmetry that continually emerges from all our points of view.
You are suggesting that the scientist is subject to the chance of making mistakes, "the scientific error", and that he has to use ethics, and even courage.
You said that even evaluators, or contractors [industries] receiving or commissioning scientific research, should be subject to the same risks of error. In fact, to accept or to reject new hypotheses or theories is a risk for the reviewer and, through his choice, a risk for the entire apparatus of Science. Symmetrically, even for industrial players, to accept or reject the intuition of scientists/experts involves the risk of errors, losing or making money on investments. Are we interested in this last kind of "error"? Yes, we are: look at the Stamina case.
However, I must say, and I hope to express it without any arrogance, that I am not fascinated at all by this kind of symmetry. On the contrary, I feel a sense of rejection; I believe there is nothing useful in it. A kind of pseudo-inquiry.
My refusal is based on a principle of realism, which dissolves at once the perfect symmetry of epistemological value.
It seems clear to me that in the social ecosystem, and throughout history, Science is a system increasingly treated as a mere instrument.
Treating it as a pure instrument, modern Constitutions did not grant Science a power equivalent to that of the other social organisms. They subordinated the ecosystem of Science to other entities, to which the modern Constitutions assigned the constitutional role of "powers". As owners of ontological "power", all the other social subsystems are deemed "essential" for societies and "constitutional" in society, and prevail over Science.
Science, in fact, suffers everywhere from social deficits: in education, economy and resources.
Consequently, the scientist's protection against the powers of Law and the requirements of Finance is fragilely entrusted to his individual ethics, and defended only by the internal self-regulatory policies of his own closed community.
For this, I wonder:
#1. Couldn't it be because of this disproportion that classical scientists [of course, not the outsider described by Rogerio] commit more willingly to the precautionary principle?
#2. Isn't it for this reason that scientists facing critical social issues [emergencies, safety, health] prefer to exercise enormous caution in exposing disruptive theories, instead of exercising a disciplined yet free right to carry out normal experimental attempts?
#3. Couldn't it be for this reason that the researcher has learned to fear the "scientific error"?
[##. And isn't this paradox what effectively describes the reasons underlying Joris's first anecdote?]
A mistake is a relative concept, probably scale-dependent, and also related to the human-defined precision required or imposed. Scientists often make claims addressed to citizens in order to make messages clearer, but which are scientifically incorrect. For instance, much research dealing with body color in wildlife measures reflectance patterns using spectrometers, yet keeps calling what is measured 'colors' to make scientific publications more readable or digestible. Very critical scientists might call this a mistake, but for less critical scientists this is OK.
Dear Giuseppe!
Thanks for your question; my opinion is that no one has the right to sit in judgment over another's fault. I may call attention to it, saying that I think it is a mistake, but I do not offend, and I especially do not hurt others for it.
There is an old Hungarian saying: only the one who works makes mistakes. Those who do not do anything never make a mistake.
I think the errors, as well as the good work, are mine and belong to me. Nobody needs to damn me or hurt me for them. Of course, this is my opinion.
Yours sincerely
Istvan
There are conscious 'mistakes' and 'unconscious' mistakes. I do not judge the 'unconscious' mistakes.
Philosophers might argue that humans and their methods are true products of nature. The mismatch between a phenomenon and the measurement of that phenomenon at at least one scale of analysis might be considered a natural phenomenon deserving more scientific study, to identify the underlying mechanisms of human observer effects (e.g. caused by 'unconscious', individually based perception constraints).
And of course, a large-scale collaboration network, also using 'replicated' research, might more easily identify 'mistakes' and filter them out in time.
Hi Marcel, it might also create more groupthink, conservatism, false pretense, social pressure and social talk.
There is also intended and unintended killing. The punishment for the latter is usually lower, conditional, or none at all. But the result for the aggrieved person(s) is the same in both cases.
I agree with Joris when the large scale network is guided by 'feelings' independent from education backgrounds.
@Giuseppe, I like your thread on scientific errors! Of course, I am not going to treat small errors which have no bad impact on the community, only on the scientist. No one makes mistakes consciously, but the consequences of these errors are sometimes very harmful to the community. Huge mistakes with large-scale consequences (floods, fires, earthquakes ...) are subject to condemnation by society and the state, through the courts, as well as by the scientific community!
Hello Ljubomir,
Decisions perceived as 'mistakes' are evidently context-dependent and perceiver-dependent. Decisions without perceived mistakes, that is, those that maximize benefits and minimize costs, are more likely when people can perfectly predict the future...
The perceived costs and benefits of a decision, and thus whether the decision is perceived as a mistake, are perceiver- and environment-dependent. For instance, if a decision is taken in country A, it might be perceived as an excellent decision in country A and as a 'mistake' in country B, etc. This applies in 'Politics'; does it apply in 'Science Politics'?
Marcel,
your environmental, or contextualist, point is very important. But I cannot reconcile it with a hermeneutical normative for science, which I think is totally absent from, or at odds with, your vision at the moment.
Please, could you explain in more detail? Thanks.
Hello,
please explain in this public discussion what you mean by a 'hermeneutical normative for science'.
Sorry. Could I give you a URL? We could discuss it shortly: http://download.springer.com/static/pdf/767/bfm%253A978-1-4020-4713-8%252F1.pdf?auth66=1390640113_3aa6217c81f9a15cd09799375eac9267&ext=.pdf
Best,
g
Hello G,
this text is a product of human nature and is therefore context/environment/biology dependent.
Mh, yes, it is the product of a human [prof. Dimitri Ginev].
So, you're talking about the well-known, paradoxical "hermeneutical spiral". The problem is that a contextualist/environmentalist view should avoid falling into a labyrinthine paradox.
g
The tools you have (e.g. a brain imprinted in a given education environment) will influence what you will produce (e.g. a philosophical product), whatever the complexity of the language produced.
Ok, I see. Maybe a philosopher could contribute, I'm not an expert in this.
Thank you for your words.
g
And to go back to the initial question: the tools you have in environment A (e.g. brain A developed in environment A) will determine whether the philosophical product produced by another tool in another environment (e.g. brain B developed in environment B) will be perceived as a 'mistake' or not. How many philosophy books have been written to show this? Thousands...
And a concrete example: ornithologist A studies species A in the North, whereas ornithologist B studies species A in the South. Ornithologists A and B, working on the same model species and the same problem (e.g. reproductive investment), do not observe the same phenomena because of the spatiotemporal dynamics of the biotic or abiotic environment. Ornithologist A might judge the observations of ornithologist B as a mistake because A never observed what B reported. In this case the mistake is at the level of the judgement of results from unfamiliar environments... Etc.
Thank you very much for referring to my study (2006). With respect to the foregoing considerations it is appropriate to distinguish between two basic perspectives on scientific error. In the perspective of (normative) epistemology, an error is made when an explicit or implicit norm/standard/criterion of coherent scientific behavior gets violated. The assumption is that there is a universally valid (de-contextualized) and possibly invariant codex of rational scientific conduct. In the perspective of the hermeneutics of scientific research (which is not to be confused with a kind of epistemic relativism), an error is committed by choosing possibilities for doing research. The choice always takes place in a hermeneutic circle of a projected totality (horizon) of possibilities and the actualization of particular possibilities. This is why the choice (including the wrong one, which leads to an error) is ineluctably contextualized by scientific practices that at once project and actualize possibilities. Accordingly, the error is defined not as a violation of a normative codex, but rather as a transgression of a pre-normative ethos shaped by a contextualizing hermeneutic circle. The two perspectives I am speaking about offer different strategies for differentiating between error and (legal) infringement. In the framework of normative epistemology, one is not able to distinguish effectively between the two. Violation of the codex of scientific rationality might be treated in many cases as implying infringement. From the viewpoint of hermeneutics, committing a scientific error is by no means to be recast in legal terms. The context of scientific practices in which an inadequate choice has been made is untranslatable into juridical language. If one goes on to insist on translatability, then one violates the autonomy of scientific research. For me, the case of L'Aquila is a classical example of such a violation.
Regarding Marcel's ornithologists:
In fact both ornithologists made a mistake by not specifying exactly the initial conditions of their research, i.e. at least the area of research and the area where the results are valid.
Not doing so makes as much sense as research on the average temperature at the Equator and at the North Pole. Both sets of results might be right, but neither is generally applicable.
I am thinking of the case of "cold fusion", where researchers infringed the conventional wisdom about the declaration of results while the claim was not yet substantiated with enough scientific evidence. The race was over which of two research teams would go to the media first to announce the breakthrough!
As for scientific errors, this relates to the same case: the researchers declared the energy generated as resulting from reactions within the setup of the experiment, but in reality it resulted from external agents or from insufficient evidence!
Tomy indirectly mentions the problem of 'pseudoreplication'. There is a mismatch between the scale of the empirical results obtained (which are scientifically OK) and the titles, abstracts and conclusions of publications focusing on wider scales of analysis (which are scientifically not OK).
Thus, one error is that decision-makers who are not familiar with science practice can ask too much.
If competition for grants is severe, and jury members do not truly master the topics they judge (like editors receiving hundreds of manuscripts to be judged), communication becomes more important than the true scientific message, with the risk of supporting only overambitious projects. Both the scientists and the decision-makers make errors in these cases.
Dear Radu Leca,
I agree with most of your arguments.
But, as a critical point on my side, I would like to know whether the following series of questions can follow from your argument.
#1. Given the triple helix of a well-balanced social system [the hermeneutical circle gives the same steady-state power equilibrium], and facing a situation of scientific error, (a) who says that the judges of the modern courts of justice, using modern Law, are in the right? (b) Who says that the community of citizens, with their pragmatic needs, is in the right? (c) What works as a warranty for the scientist? i. The pre-normative ethos of law? ii. Common sense? iii. The rigor of truth? iv. The essential goods, such as that of life?
Yes, I think all of these 4.
#2. But aren't those four principles the same ones that are at the basis of Science? If not, then all of Science is guilty.
#3. Thus, it could be true, as Nature commented after that exemplary judgment [see Nature 490, 446; 2012: http://www.nature.com/news/shock-and-law-1.11643], that this was a case of "Science on trial", similar to Galileo's persecution by the Catholic Church in 1610-33.
#4. And if that was such a trial, what can we expect now? That the next scientist will be defended by lawyers who will dismantle the scientific truth, trying to build a merely "useful" truth?
If so, we are in a bad position.
g
Can lawyers defend science practices for which they have not had adequate scientific training (at least 4 years of university courses, with additional specialisation for another >3 years...)? And do jury members sufficiently master science practice to judge the arguments of the lawyers? Etc.
In such cases, maybe the court should rely on the legal firms involved in filing for patents. These firms have scientists in their ranks who provide advice and direction, in addition to the crew that does the background research in the scientific publications. If this is the case, then the lawyer is capable of presenting the material in court in a down-to-earth manner for the jury to understand.
Actually these legal offices have contacts (possibly not employees, but contractors) covering all fields of science and engineering. At least, I was involved in such a case when I worked at Syracuse University for my PhD and we filed for a patent in superconductivity. The events then were drastic, since I had to sue my advisor over infringement of the invention process and the registration of the patent. Luckily I won the case, backed by the research conducted by the legal firm of the University, which, after all disagreements were clarified, continued the filing process for the patent.
http://www.pseudology.org/science/ObmanNauke.pdf
Dear Giuseppe,
Your question is topical. I found a very interesting book (I think it is a rare popular-science book) called "Theft and Deception in Science" by S. G. Bernatosyan, St. Petersburg, 1998. This book shows how mean people of all ranks and titles cause damage to humanity. I think you can translate it with a translator.
Sure, there is theft and deception and even misuse of state power as in the times of Lysenko or artificial stupidity as in the case of Benveniste.
But there is also neglect and lack of honesty and integrity, and some suggest that it is rampant wherever you look, and not just in science!
In this discussion there is also the issue of graphomania: a scientist is not sure that the results obtained are correct, but he wants to publish these results of several years of "blood, sweat and tears".
Dear Joris and dear Joseph, you are right! There are a lot of deceivers and sharks in science (and not only in science). They would use their elbows for the sake of a place in the sun.
Thanks Irina, but... not every idealist is a holy man or woman either.
I have repeatedly encountered situations where I was convinced a result was valid and interesting ... but could not find the means, or the support, needed to really prove it beyond reasonable doubt and according to standard methodology. This is also a frequent situation. Or one has insufficient time to do a job right. Not infrequent in medicine, either...
You are right, Joris. "There is no poison, there is only a dose" (Paracelsus). One of my professions is hospital nurse; besides, I am a healer in our healers' association (I chose another fate). Even simple manipulations (massage, injections, poor care) can have complicated consequences. Mistakes in teaching can harm students, too. Once I was shocked when my ill father, despite the presence of high protein levels, was pronounced "healthy" in hospital. Only my friend, a highly qualified doctor, saved my father. Another example: early in my practice I met a student who had been made to repeat his education 4 (!) times. It was terrible. I saved him only through support and a kind attitude. Nowadays he is a fully socialized adult. The most important thing: he remembers my name and gives me a smile.
I believe that we should clearly distinguish between the concepts of "scientific error" and "scientific fraud", if only because the very genesis of these terms rests on diametrically different connotations (in the first case positive, meaning acceptance of the possibility of scientific error; in the second case negative, clearly condemning and leaving no margin for tolerance of scientific fraud).
Dear Giuseppe, thank you for your kind words. I think, following Protagoras, that "man is the measure of all things"; that is why the medical principle "Do no harm" ("Noli nocere!") is applicable to all spheres of life, including science. I remember the sensation around the "scientific" idea of reversing rivers. We know about the bitter fate of the Aral Sea (now dead) because of "scientific" intensive irrigation. Another bitter lesson is the building of a pulp-and-paper mill near my favorite, absolutely unique, Lake Baikal, or the failed (once again "scientific") land-reclamation experiments turning desert into swamp and vice versa. As Henly put it, "One can't pluck a flower without troubling a star". All is interconnected in our global village. Human greed can't be justified.
Having read Irina's comment, I decided to add another class: "pure stupidity". On average it is more dangerous than "scientific error" and "scientific infringement" together, because it offers a simplistic solution/happy future without any necessity to think. As it is usually based only on nice-to-hear phrases, it can be accepted widely and extremely quickly.
@ Tomy
I think that the "pure stupidity" of which you write is, however, not able to heavily compromise the professional scientific community.
I do notice, however, the risk of views based on "pure stupidity" entering (as you also noticed) the so-called circulation of popular science. In this context it can be a truly terrible plague, devastating the minds of a wide range of people who are rather scientific hobbyists, whose scientific interests are not always satisfied by studying the source scientific literature.
Dear Andrzej, true science, like any top true human value, can't be bought and can't be sold. Science can't be a conformist (or a slave of the pillars of society); it must be a free, independent and true creator. The main aim of science is human welfare, isn't it? Sorry, but the following wasn't scientific literature. I'll give a nonsense source; I think it is even more stupid than described. Sorry, once more. I was shocked by the phrase "We must go on with our brightest forgotten projects".
http://www.strf.ru/material.aspx?CatalogId=221&d_no=60685#.UuvKgEDXbuU
http://www.diletant.ru/articles/8281751/
The sources are not fabricated. It's a fact.
True scientific findings are necessary for human comfort, and I personally believe the majority of findings are of this kind. Honesty is the basis of research, and I would never like to think otherwise.
Dear Ali,
I have the same opinion, but some will say there is a difference between 'theory' and 'practice', for one or more reasons.
@ Andrzej
You are right, because it is not possible to forbid thinking.
But to delay or to block research is not so difficult. It usually takes only one fool (or a self-supporting group of fools) in a key decision-making position who is able to block funding and/or publishing. I sincerely hope that the days of direct censorship are really over (HA-HA-HA).
But there is a story dealing with research and science directly.
When the world started to get involved in semiconductor research, the topic was submitted to the appropriate party member here for approval. His reaction was this: "So they are starting with semiconductor (the exact translation from Czech would be half-conductor) research, right? Well, we'll wait until they invent full-conductors, and we will start our research after that!"
It probably didn't stop thinking, but it delayed real research and development in the given area for years.
A scientific error is anything that is done and published and is later discovered not to be completely true.
Dear Hatem,
some organisations support risk-taking in science, which includes what you would call 'errors'.
Meteorologists are scientists who work for the public, transport and industry. They make errors quite often; mistakes are part of their job. But they have exact rules about how they may make errors. If they break those rules they face some punishment, sometimes even court trials. I believe every scientist has the right to make errors, but he must think about the consequences and know very well the rules for making errors in his own scientific area.
An error is an error, and to err is human! But to repeat it is inhuman. Infringement inclines towards a lazy, careless, purposeful attitude. Anyone can err, but an error that is responsible for someone's life or death can be considered a severe scientific error. If it is reversible, I believe the punishment can be rethought or reconsidered; but if it is irreversible and has led to something very serious, then the punishment may be different. The issue of whether it was an error, not an infringement, still needs to be considered, though!
Thank you dear S. El.Aleem, I believe you're not wrong.
However, even in your intervention, as in Radu Leca's above, you refer to the sphere of Law as if it were: #1. immovable, and #2. a superset of the other spheres, including Science.
But in real life Law does not determine any truth, but regulates necessities.
Law is not a natural science, therefore it has to build methods and synoptic rules for the governance of things and phenomena.
There is nothing similar to *truth* needed to establish a willful infringement; nothing but the *need* to attribute a causal link, and the synallagma between an active and an injured party. This is right, but not necessarily *true*.
Now, how the sphere of needs and necessities could be a superset of the sphere of truth does not seem so simple to me. Finally, I don't believe we can use guinea pigs to test out a complete solution, even when those guinea pigs are superb scientists.
g
I think that when practicing research, scientists are human, and like any human they can make errors in measurements, in data, in calculations, etc. Many of these errors can lead to some inadvertent and unexpected data, or perhaps to other discoveries. So an error in science on a scientific subject is not problematic in itself, even if it is published. What is problematic is intentional errors managed or disclosed in final products, or patent rights blocked by 'economic or political' interests.
Hallo Fairouz,
and what if you made no errors and the error occurred anyway?
At the next trial, the next lawyer will relate a sort of false-negative paradox: see p. 94, "Say you have a new disease, called Super-AIDS"... — http://www.jus.uio.no/sisu/little_brother.cory_doctorow/portrait.letter.pdf (Little Brother, Cory Doctorow, 2008)
Aftab
There is a saying that 'lying is short-lived'... so the error will sooner or later become evident... and there have been several stories about this.
In addition, when you are in an organization with the necessary ethics approval for conducting the research, you will be monitored, later on!
But, no matter what, it comes down to the conscience of the researcher: how would researchers live with themselves if they have published something FABRICATED?
Regards
Theodora Issa
Dear Theodora,
the hope for the mental self-control of your lying researcher is irrational, because a person who values a really good conscience would not lie!
I agree with you that the question is hugely complex. And I see that many of you are led to entangle the concepts of scientific error, mistake, violation of rights, and truth. Preliminarily, I would point out that the last concept (I mean truth) is by itself largely elusive, because, in my opinion, it is not possible to introduce a concept of absolute truth. I mean that in any human discipline only a soft concept of truth can be defined, and it is not obvious that the truth emerging within a given context can be immediately shared when the perspective changes. I will try to explain my meaning better (while apologizing for my poor English).
In Science, the truth is the best explanation for a given phenomenon (or even for the whole Universe) that we can propose in view of what we actually know. The explanation has to preserve the internal coherence of the formal language of Science, and must be experimentally validated. If the formal coherence is not preserved, the proposed explanation is surely an error. But even if coherence is preserved, we can never be sure that the explanation will not be invalidated tomorrow, when the collection of new data requires a correction of the old theory or a new theory. So what is the truth today can become a scientific mistake tomorrow, when new knowledge shows that the old truth had been obtained from a limited perspective. This is not only the case in Science; it is the same for any human activity. Also in a Court, the aim is not the search for absolute truth but the search for the best explanation coherent with the facts that become known during the legal action. The legal action is not too different from the mechanism adopted in Science to obtain scientific truth. In both cases, people try to obtain the explanation of the observed facts that has the highest probability of being the correct one.
When the truths obtained in different fields begin to intersect, the situation becomes highly complicated. An example is immediately found by looking at a topic which is currently agitating the Italian public. I am referring to the Stamina case. From the scientific point of view the situation is quite clear: the proposed method is scientifically unfounded. The proponents of the method do not explain how the method can work; strictly speaking, they do not even say what the method really consists of. So there are no data which Science can discuss, and the Stamina case is not a scientific case at all. In spite of this, some courts determined that the clinical trial of the method must be continued. Scientifically, this decision appears irrelevant. From the point of view of the law, the decision arises from the observation that the procedure adopted to stop the trials apparently violated some legal rules. I do not know if this is correct (law is not my field), but it is possible. And this means that while Science says the method is unfounded, law says it must be applied. Both conclusions can be correct, each within its own framework, but they clearly do not match. This conflicting situation leads to other striking results, because the case has ethical implications and calls for political decisions (two additional points of view are brought in). I do not wish to go on with this exploration of different kinds of truth. It is clear that we have to limit ourselves to the concept of scientific truth.
To determine what the scientific truth is, the decision of politics is not relevant, just as the decision of a Court is not relevant. Public opinion is not relevant either, because Science is not a Democracy. For the same reason, it is not relevant what most scientists believe. If only one of them is able to unambiguously show that he has formulated a better theory, it is just a question of time: the new theory will become, sooner or later, the new truth. Even limiting ourselves to the scientific realm, we obtain the result that Science is not a Democracy. But if we limit ourselves to the scientific world, the answer can be easy: it is enough to follow the scientific method rigorously. And the scientific method imposes that any result must be published and discussed within the scientific community. In the end, it will be validated or not; if validated, it will become another brick in the scientific building. To talk about truth is not fruitful. A different point is the requirement that scientific knowledge must be shared with the non-scientific world. It must be, for sure, because Science is the product of human intellect and must be part of shared knowledge. This means that any scientist should devote part of his time to serious popularization activity. This is the only way we can pursue to make scientific truth coherent with the different (political, ethical, etc.) perspectives.
Dear Aftab,
indeed, worlds 'invisible' versus 'visible' to the human eye may be quite different!
Cheers
I don't know these stories, but they cannot be excluded just on the basis of what other people tell.
Stories, legends about unfair behavior... The scientific world is full of them, like the human world as a whole. I believe we have to distinguish what is a mistake from what is fabrication or unfair rivalry.