I mean a formal assessment of research output, as in the UK and other countries. If you know of papers that talk about this, that would be useful too. Thanks in advance.
I don't know, but my best guesses would be:
1. Scale. The UK exercise costs a fortune for just 180 higher education institutions.
2. History. The UK exercise evolved alongside political moves to dissolve the old binary divide between universities on the one hand and polytechnics and higher education colleges on the other. In 1992 the former polytechnics were allowed to call themselves universities, and the first research assessment exercise followed shortly afterwards as a means of allocating central (state) research funding on merit.
3. Reason. The USA's different HE funding structure means there is really no reason to have one.
4. Outcome. Many feel that the UK's experiment has ultimately been a costly failure: research funding remains largely concentrated amongst the traditionally elite universities.
Some authors have assessed this, for example with respect to funding of the private R&D sector. See: http://www.jstor.org/discover/10.2307/2098599?uid=3737592&uid=2&uid=4&sid=21104638720863
And for all activities? Over the last decade, ex post research assessment at the program level in the United States has seemed much less active than the equivalent activities in Europe, both west and east. This apparent lull was the result of a decline in program evaluation activity across the U.S. government in the 1980s, which slowed the rate of formal evaluations. Program review activities within agencies were nonetheless common, especially at mission-oriented research-funding organizations such as the Department of Energy and the Office of Naval Research. Review processes at these agencies relied primarily on expert assessment, sometimes at the project level, supplemented by user input; quantitative performance measures were seldom used. That situation is about to change. In 1993, Congress passed the Government Performance and Results Act, which requires all agencies, including those that support research, to set quantitative performance targets and report annually on their progress toward them. Agencies with clear technological goals are rapidly developing sets of indicators for this purpose, including peer assessments, bibliometric measures including patents, and customer satisfaction ratings. But fundamental research agencies do not find such measures satisfactory and are only beginning to develop alternative ones.
For more recent developments, see: http://www.akademiai.com/content/jt5r523248835u24/
And this one as well:
The US National Research Council released A Data-Based Assessment of Research-Doctorate Programs on September 28, 2010. The report consists of a descriptive volume and a comprehensive Excel data table containing data on the characteristics and ranges of rankings of over 5,000 programs in 62 fields at 212 institutions. See: http://sites.nationalacademies.org/PGA/Resdoc/
Thank you very much for all the information and reflections. I have read the papers and explored the STAR METRICS website. With the advent of research evaluation systems, and the many papers showing their failure all over the world, I wonder whether there is room for conceptions of scientific knowledge other than productivity. Isn't it time to develop new metaphors and approaches to the "production" of knowledge?
This issue needs to be examined in the light of interesting works that have appeared on how public universities were systematically destroyed: http://junctrebellion.wordpress.com/2012/08/12/how-the-american-university-was-killed-in-five-easy-steps/
I think the major cause is the US federal system. Research evaluation systems are created by governments that are responsible for research because they fund it from taxes. In the US, universities are either private or funded by the individual states. As for the evaluation of federal agencies, you might want to look at:
Cozzens, S. E. (2007). Death by Peer Review? The Impact of Results-Oriented Management in U.S. Research. In R. Whitley & J. Gläser (Eds.), The Changing Governance of the Sciences: The Advent of Research Evaluation Systems (pp. 225-242). Dordrecht: Springer.