Kamal H Karim started this discussion here on RG six months ago: “Why are there so many false citations in research papers published in international and local journals?” My impression is that this problem has grown rapidly over the last decades. Many causes, reasons, and problems have been discussed here by many researchers. Many participants in the discussion said that responsibility for correct citation lies primarily with the authors, but that the peer reviewers and the publishers share responsibility as well. Many participants also voiced the wish that things should get better again. The implementation, however, is difficult for many reasons, for example:

1. A culture change in universities over the past two decades: students are expected to start doing research in their first year, and in some cases they are required to have publications before graduating. This puts more pressure on students to do things faster; they feel they don't have the time to do things the 'long' way.

2. In the age of Big Literature, knowledge is published at a prolific rate. The magnitude and rapid expansion of the literature make it increasingly difficult to conduct comprehensive and transparent assessments.

During my studies at the university and my first scientific work in the 1980s, I learned from my old professor that I have to read all literature sources in their entirety, not only the summary, and that I always have to read the primary sources, the original papers, not only the extracts cited in a secondary source. I have worked this way over all the years, and I still do today. But this means I am not able to write and publish 10 or more papers a year, as some researchers do. However, the evaluation of researchers and the allocation of funds for projects often depend on the quantity of published papers. That is not a good development, and it may well contribute to false citations and plagiarism. Somehow, there is often a lack of time to be thorough and conscientious, and that is not good at all.

Can modern technology help us with this? Can computer algorithms now make fewer mistakes than we do, or will they make more? On the one hand, at the moment it is often still like this: when we use automatic literature search systems like Web of Science advanced search or Scopus, we get many hits, but many of them turn out to be useless because they do not really contain what was queried. It is still difficult to rely on them. On the other hand, innovations in research and assessment practices and tools are needed. We urgently need big data methods for analyzing and assessing the literature. But caution: we should also ensure comprehensiveness. That is a real problem.

What do you all think: in the age of big literature, can new computer-aided methods and procedures reduce or increase the number of false citations in research papers?
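To make the question a bit more concrete: here is a minimal sketch, in Python, of one kind of computer-aided check that is already possible today. It sends a free-text reference string to the public Crossref REST API and asks whether the reference resolves to an indexed record at all. The helper name and the example reference string are my own illustrative choices, not something from the tools mentioned above.

```python
import requests

def check_reference(ref_text):
    """Query Crossref's free-text bibliographic search and return the
    top-ranked matching record, or None if nothing comes back."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": ref_text, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return items[0] if items else None

# Illustrative reference string (my own example, not from the discussion):
match = check_reference("Einstein 1905 Zur Elektrodynamik bewegter Koerper")
if match:
    print("Best match:", match.get("title"), "DOI:", match.get("DOI"))
else:
    print("No match found - the reference may be false or badly garbled.")
```

Of course, this only illustrates the easy half of the problem. Crossref returns its closest match by relevance, so in practice one would still have to compare the returned metadata against the original reference, and no such lookup can tell us whether a real, correctly cited paper actually supports the claim it is attached to. That, for me, is still the hard part that requires reading the primary source.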