Scientists should be able to make their own judgements about quality when reading a paper. The same holds for the general quality of journals: when you know the publications in your field and their quality, you also know something about the journals in which those papers appeared. This seems to me a better approach than looking at the impact factors of journals and drawing conclusions from them about the quality of a paper.
A lot of research out there is good research. It is just not relevant to me. I might enjoy reading about cat behavioral biology, but it's not relevant to me as a molecular biologist. But it's still good stuff.
Some open access journals now publish articles together with the correspondence between the authors and the reviewers. That is really valuable, because nothing is hidden. I am somewhat biased toward citing research articles that are highly cited, have reproducible methods, have descriptive figures, and use the fewest parameters to fit the data. Still, some research articles are so good that I can't believe no one has cited them, but I definitely will! I tend to search for research articles mainly on Google Scholar or Web of Science.
I don't really read every article in Nature or Science. I don't have the time, and I don't think it is empirically better.
Questionable research lacks negative controls, lacks reproducible protocols, and has a conflict-of-interest section that is not blank.
There are qualitative (expert opinion) and quantitative approaches to evaluating research work. The best way is to combine both approaches, especially when it comes to academic advancement. However, only bibliometric indicators (impact factor, number of citations) are easily available for individual articles.
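As a point of reference, the classic two-year journal impact factor for a year Y is simply the citations received in Y to items published in Y-1 and Y-2, divided by the number of citable items published in Y-1 and Y-2. A minimal sketch of that arithmetic, with made-up numbers rather than real journal data:

```python
def impact_factor(citations_this_year: int, citable_items_prev_two_years: int) -> float:
    """Two-year journal impact factor for year Y: citations received in Y
    to items published in Y-1 and Y-2, divided by the citable items
    published in Y-1 and Y-2."""
    if citable_items_prev_two_years == 0:
        raise ValueError("no citable items in the two-year window")
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 300 citations in Y to 150 items from Y-1 and Y-2
print(impact_factor(300, 150))  # → 2.0
```

Note that this is an average over the whole journal, which is exactly why it says so little about any individual article.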
In my experience, the journal impact factor only determines the probability that an article is noticed and cited, under conditions of intense "competition for attention" (Franck 1999; Science 286: 53, 55).
Its prognostic value was tested in the article "Rank-normalized journal impact factor as a predictive tool".
I do not believe that the impact factor of a journal (or of the articles published in it) is in any way a measure of research quality or significance. Both good and very bad papers (negative examples of how not to deal with a topic) are cited very often. Impact factors also just push the mainstream. There are wide fields of science, especially in palaeontology, where hardly any specialists are left. If you work on Devonian bivalves or nautiloids, who is left to cite you (ca. three or four persons globally)? Good papers will stand the test of time and continue to be cited for decades and more, wherever they were published. The annual impact is insignificant and adored only by the too many "fashion scientists".
Many aspects of evaluating the quality of papers have already been mentioned above (thanks!). One can also look at the degree to which the results remain accepted over time by the majority of specialists in that particular field of study. Two additional considerations: (1) whether all data are available as supplementary material or in a data repository, and (2) whether the specimens used in the paper have been deposited and remain available for study.
Applause to Thomas Becker; I am adding my voice to his comments. Fashion science has indeed overwhelmed the metric tools, not in favor of lifetime contributors and too often not in favor of a real increase in knowledge. I have also found that the authoritative citation engines are frustratingly incomplete. As an example, Scopus states that it covers 150,000+ book titles, but I see no sign that it indexes the text contents of the books in my selection.
I am sure this incompleteness is a growing pain of new global technologies rather than something intrinsically unfair or dysfunctional. The fairness of engine coverage is bound to improve substantially.
Meanwhile, I would adhere to the good old way of building specific research networks through collegial contacts, based on the merits of their publications.
Scopus CiteScore is another option. You can track a journal's CiteScore in real time. The higher a journal's CiteScore, the more influential it is considered. It is more transparent than the impact factor, I think.
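For comparison with the impact factor, Scopus's current CiteScore uses a four-year window: citations received in years Y-3 through Y to documents published in those same years, divided by the number of those documents. A minimal sketch of the arithmetic, with hypothetical numbers rather than real journal data:

```python
def cite_score(citations_in_window: int, documents_in_window: int) -> float:
    """CiteScore for year Y: citations received in Y-3..Y to documents
    published in Y-3..Y, divided by the number of those documents."""
    if documents_in_window == 0:
        raise ValueError("no documents in the four-year window")
    return citations_in_window / documents_in_window

# Hypothetical journal: 1200 citations to 400 documents over 2020-2023
print(cite_score(1200, 400))  # → 3.0
```

Because both the numerator and the denominator are published by Scopus for every indexed journal, the figure is easy to verify, which is the transparency advantage mentioned above.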
I have said it above: the number of citations is NOT a measure of quality, because papers with bad data and bad interpretations are also often cited, frequently by authors pointing out that a paper is bad and contains mistakes. And you only get cited often when you work in a field with many researchers. Work on dinosaurs and you get many citations; work on Paleozoic bivalves, and it won't happen.