At the end of her historical overview of the theory and policy of international trade, Carmen Elena Dorobat (RG member) cites Necker (2014):
cherry-picking of findings that conform to a desired hypothesis may be interpreted as the 'quest for positive results' but not exactly as the 'quest for truth'. (Dorobat 2015, p. 124)
While interesting and nicely phrased, the quotation relates more to the general approach to doing research than to econometrics, which is just one instrument in a researcher's toolkit (at least, that is what follows from both the quotation and the context in which Dorobat placed it; unfortunately, she did not include the Necker work she cites in the bibliography, so I do not know precisely the context in which he made the remark). As an instrument, econometrics cannot force the researcher to engage in 'cherry-picking' or in the 'quest for positive results'. True, there may be some psychological pressure to suppress results that run against one's hypothesis, but it remains the researcher's choice whether to submit to this temptation or to publish the actual results. Econometrics as such is not at all to be blamed (or praised), just as a pencil should not be blamed (or praised) for whether the writer produces a good poem or unreadable rubbish with it.
I mainly work in the domain of international trade. I have a feeling that economists who work with econometric tools are more interested in positive/negative results than in the theoretical implications those results carry. I got this feeling from reactions to questions like the following. I have many other question pages that I think support this impression.
Dr. Tahsen Alqatawni's question
What are the pragmatic problems with the Heckscher-Ohlin model?
This is the main reason I began to think that people who work with econometrics are more interested in the 'quest for positive results' than in the 'quest for truth'.
This is an almost unsolvable problem of economics, and of econometrics in particular. Researchers frequently arrive at their desired results after an extensive search over specifications and tests, and then claim that the evidence supports their hypotheses. With more complex tests, the opportunities for bluffing seem to rise rather than decline. My approach is to look at the various hypotheses carefully, decide which might be more appropriate, and then subject it to simple tests. But if publishing is your main objective, you might be forced to go for sophisticated tests that corroborate some bigwig's hypothesis. Good luck.
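To make the "extensive search" point concrete, here is a minimal sketch in Python (my own illustration, with made-up parameters, not anyone's actual study). If we regress a purely random outcome on enough unrelated candidate regressors, a few will look "significant" at the 5% level by chance alone; reporting only those is precisely the cherry-picking under discussion.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_obs = 100        # observations per regression (arbitrary choice)
n_candidates = 50  # number of unrelated regressors "searched" over

# The outcome is pure noise: by construction there is no true effect.
y = rng.standard_normal(n_obs)

false_positives = 0
for _ in range(n_candidates):
    x = rng.standard_normal(n_obs)   # an unrelated candidate regressor
    fit = stats.linregress(x, y)     # simple OLS of y on x
    if fit.pvalue < 0.05:            # conventional significance threshold
        false_positives += 1

print(f"{false_positives} of {n_candidates} unrelated regressors "
      f"look 'significant' at the 5% level")
```

With 50 independent tests at the 5% level, we expect roughly 50 × 0.05 = 2.5 spurious "findings"; a paper that reports only those, and not the other 47-odd attempts, is exactly the problem described above.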
One problem with empirical studies in economics is that most of them aim to establish too big a hypothesis. For example, many such works use a general equilibrium model. A General Equilibrium Model (GEM) includes everything. This kind of attempt has never been tried in natural science.
Galileo, in his time, tried to find a law of falling bodies; that is a simple, controllable experiment. A GEM, by contrast, is comparable to the Earth's atmosphere. With the development of meteorology and computers, weather forecasts have become very accurate, yet even in meteorology there is no simple law that covers the whole atmosphere. Economic studies seem to be trying to do just that. It is impossible.
We have to change the basic strategy of our economic research. What do you think of my dogmatic judgement?
Focusing exclusively on findings supporting a desired hypothesis would seem to be inconsistent with the scholarly process. Having a desired hypothesis in the first place would seem to indicate a lack of objectivity. Exacerbating the issue is the so-called cherry-picking of findings.
Although I am not sure it is enough, the scholarly process also involves experts in the field and peer review. Their experience and expertise are incredibly valuable in this process. Furthermore, replication and further study might reveal some of these issues.
Other than this, I am not sure I can add much to the previous answers. Alexander Tarvid's answer, especially about not blaming the tool, was right on target; the metaphor was outstanding!
Picking the right cherry among others is a difficult job. Econometrics is just a statistical tool used in economics to validate a result and to corroborate empirical findings against theoretical implications. I think economics and econometrics should go together; they are incomplete without each other.
If we consult Wikipedia, cherry-picking stands for picking suitable results from among many disparate data. It is, of course, what we should avoid. But I have a feeling that there are much more subtle kinds of cherry-picking: to search out a small, easy question and write a paper on it, even though you know well that there are many more important but difficult questions.
Econometrics is helping this kind of cherry-picking, isn't it?
Finding an answer to a difficult question is not an easy job. Theory alone cannot help; you need to validate the theory through empirical findings. Only then will it have applicability. As for those who search out easy questions and write papers on them, I will only comment that a question is always a question, even if no answer to it has been found for a long time. Identifying the right question is important, even if it is a small question. And writing a paper is never an easy job; at least you are doing something you did not do before.
If a person has a hammer, everything looks like a nail; and if an economist has modern tools, then every issue looks like a chance to apply those tools.
This must be an old parable in economics. I found it on p. 152 of an article by Mark Blaug (2001), "No History of Ideas, Please, We're Economists," Journal of Economic Perspectives 15(1): 145-164. However, I have heard or read similar stories in other places.
Where there is high "publish or perish" pressure, it is natural that we see every issue as a nail. Econometrics gives us a good hammer.
I feel we are required to have a kind of work ethic that prevents cherry-picking and easy nail-hitting. To refrain from this easy way is also, I believe, a true path to innovation in economic research.
as topics. But this link was cut off some days ago. Is that permissible? This is at least a question that all specialists in econometrics should consider once. It is an offense against academic freedom and a form of disguised censorship.
I found a phrase (which makes the same point as the aphorism) in Phelps Brown:
it [clinical commitment] removes the temptation to seek the job for the tools instead of the tools for the job. (Phelps Brown 1972, Economic Journal, p. 8)
This was cited by Hollander in his preface to Helen Boss Heslop's book Theories of Surplus and Transfer (Routledge, 1990). Phelps Brown's paper was in fact his 1971 presidential address to the Royal Economic Society. The decade around 1970 was what Frank Hahn called "a winter of discontent." Many eminent economists warned about the state of economics, pointing to dangerous trends found everywhere from theoretical economics to applications. This was the general atmosphere of the time, but after 1975 new trends emerged, first with the Rational Expectations Hypothesis and then with Dynamic Stochastic General Equilibrium analysis. They (REH and DSGE) provided many jobs for economists, but economic science lost its relevance to reality. A major part of economic theorists became trapped in these tools. Instead of seeking a "job" with a "tool" in hand, Phelps Brown recommends "clinical commitment."