I am looking for standard test collections in the context of collaborative information retrieval (i.e., containing annotations or comments from collaborators) to test my approach:
- the SMART collections: ADI, CACM, CISI, CRAN, MED, and NPL
- the TREC QA (question answering) collection, prepared for the Question Answering track held at the TREC-9 conference
- the QA-AP90 collection, containing only those questions that have a relevant answer document in the AP90 (Associated Press articles) document collection
- the QA-AP90S collection, extracted from QA-AP90, which keeps only questions with a similarity of 0.65 or above to at least one other question (see the sketch after this list)
- the QA-2001 collection, prepared for the Question Answering track held at the TREC-10 conference
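To make the QA-AP90S selection criterion concrete, here is a minimal sketch of that kind of filter. The actual similarity measure used to build QA-AP90S is not specified above, so the TF-IDF cosine similarity below (and the helper name `filter_similar_questions`) are assumptions for illustration; only the 0.65 threshold comes from the description.

```python
# Hypothetical sketch: keep only questions whose similarity to at least one
# OTHER question is >= 0.65. TF-IDF cosine similarity is an assumed measure;
# the original collection's measure may differ.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def filter_similar_questions(questions, threshold=0.65):
    vectors = TfidfVectorizer().fit_transform(questions)
    sims = cosine_similarity(vectors)   # pairwise similarity matrix
    np.fill_diagonal(sims, 0.0)         # ignore each question's self-similarity
    keep = sims.max(axis=1) >= threshold  # similar to some other question?
    return [q for q, k in zip(questions, keep) if k]

if __name__ == "__main__":
    qs = [
        "Who invented the telephone?",
        "Who was the inventor of the telephone?",
        "What is the capital of France?",
    ]
    # The first two near-duplicate questions survive; the third is dropped.
    print(filter_similar_questions(qs))
```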