Some journals and universities use this software to check for plagiarism. In some cases the results are not as accurate as expected, and bad decisions are then made on the basis of inaccurate test results.
Zakaria - I view it differently. It's not that one anti-plagiarism software is fairer than another. Fairness, as applied to students, operates more at the human level - in the interpretation of what the software reports and in the institutional policy that governs the process. Institutional policy is often very 'grey', meaning there is a lot of room for subjective interpretation at all levels and the process is never very clear. This mainly applies to the interpretation of what constitutes deliberate versus non-deliberate plagiarism.
Most software uses a similar text-matching cross-reference matrix.
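To make that concrete, here is a minimal sketch of the kind of text matching such tools rely on. It assumes a simple word n-gram overlap measure, which is an illustrative simplification; commercial products use much more sophisticated indexing and fingerprinting against large databases.

```python
def ngrams(text, n=3):
    """Return the set of word n-grams (default trigrams) in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity_percent(document, source, n=3):
    """Percentage of the document's word n-grams that also appear in the source.

    This is only a toy approximation of a similarity score; real checkers
    compare against many sources at once and report matched passages.
    """
    doc_grams = ngrams(document, n)
    if not doc_grams:
        return 0.0
    src_grams = ngrams(source, n)
    return 100.0 * len(doc_grams & src_grams) / len(doc_grams)
```

Identical texts score 100%, texts with no shared three-word phrases score 0%, and partially overlapping texts fall in between, which mirrors the percentage figures discussed below.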
Zakaria - I think that similarity or plagiarism checking software can't be fair in all cases unless, as a rule, all cases are checked by a single tool. The problem is that institutions mostly use more than one tool. As a guide, a returned percentage below 15% would probably indicate that plagiarism has not occurred; however, if that 15% of matching text is one continuous block, it could still be considered plagiarism. A high percentage would probably be anything over 25%. The colored square on an originality report indicates how much of the text in a paper matches something already in the database: yellow (25%-49%), orange (50%-74%), or red (75% and above). There is a lack of consensus or clear-cut rules on what percentage of similarity is acceptable in a manuscript, but by convention a text similarity below 15% is acceptable to journals, while a similarity above 25% is considered a high level of plagiarism.
I think that many people take a very extreme position on plagiarism. In all kinds of research we have to refer to other writers' ideas and suggestions. However, we should acknowledge the owner of the idea, and we need to paraphrase what we have copied.
I think it is not the software we should blame; it is the human who makes the decision based on the results the software reports.
I have seen cases where a paper is rejected just because the methodology is somewhat similar to another paper's (which is fine, since you are not inventing your own methodology). In other cases, journal editors go easy on a plagiarized discussion section.