Has anyone considered computationally identifying repeat patterns in publishing sites (e.g., via Google site mining), building a site-similarity rank index, and then letting millions of scholars judge for themselves? Most of these sites are clones from bulk predatory publishers (i.e., they self-plagiarize their own sites, because doing otherwise is not as profitable as slumlording). Legitimate journal sites are far more distinctive.
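As a rough illustration of what that similarity index could look like, here is a minimal sketch (assuming Python; the URLs are placeholders and the shingle size is an uncalibrated choice) that compares the visible text of two journal homepages using Jaccard overlap of word 5-grams. Near-clone sites score close to 1.0.

```python
# Minimal sketch of a site-similarity check: fetch two pages, strip markup,
# and compare word 5-gram "shingles". Placeholder URLs; not a full crawler.
import re
import urllib.request

def page_text(url: str) -> str:
    """Fetch a page and crudely strip scripts/styles/tags to get visible text."""
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
    html = re.sub(r"(?s)<(script|style).*?</\1>", " ", html)  # drop scripts/styles
    text = re.sub(r"<[^>]+>", " ", html)                       # drop remaining tags
    return re.sub(r"\s+", " ", text).lower()

def shingles(text: str, k: int = 5) -> set:
    """Word k-grams: tolerant of small edits, sensitive to copied boilerplate."""
    words = text.split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def similarity(url_a: str, url_b: str) -> float:
    """Jaccard overlap of shingle sets; values near 1.0 suggest near-clones."""
    a, b = shingles(page_text(url_a)), shingles(page_text(url_b))
    return len(a & b) / len(a | b) if a | b else 0.0

if __name__ == "__main__":
    # Hypothetical pair; a real index would score all pairs and rank clusters.
    score = similarity("https://example-journal-a.test", "https://example-journal-b.test")
    print(f"shingle similarity: {score:.2f}")
```

Scaled up, the same pairwise scores could feed the rank index, with scholars then confirming or rejecting the flagged clusters rather than any one person maintaining a blacklist.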

I just Google-searched "Submission of a manuscript implies that authors have met the requirements of the editorial policy" (a string I selected from twasp.info/journal). Dozens (hundreds? thousands?) of sites popped up. There is no Jeffrey Beall to attack in such a model, because scholars are only confirming or rejecting given sites.
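The same phrase-fingerprint check can be automated. A small sketch, assuming Python and a hypothetical list of candidate URLs, using the exact boilerplate sentence quoted above:

```python
# Flag candidate journal sites whose homepage contains a known boilerplate
# sentence (quoted from twasp.info/journal). Candidate URLs are placeholders.
import urllib.request

FINGERPRINT = ("submission of a manuscript implies that authors "
               "have met the requirements of the editorial policy")

def contains_fingerprint(url: str) -> bool:
    """Return True if the page's HTML contains the boilerplate sentence."""
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
    return FINGERPRINT in html.lower()

if __name__ == "__main__":
    candidates = ["https://example-journal-a.test", "https://example-journal-b.test"]
    flagged = [u for u in candidates if contains_fingerprint(u)]
    print("sites sharing the boilerplate phrase:", flagged)
```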
