ROC curves reveal more information about the performance of a binary classifier than any single number can. Still, summary indices of the ROC curve, such as the AUC, can help you analyse the curve.
Be careful with the AUC, though. It has been criticized by experts such as Professor David Hand (Imperial College London), who has published a number of articles criticizing the area under the curve (AUC). He argues that the AUC can give potentially misleading results when ROC curves cross, and he proposes a more coherent alternative in this article:
Hand, David J. "Measuring classifier performance: a coherent alternative to the area under the ROC curve." Machine Learning 77.1 (2009): 103-123.
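To see what "misleading when ROC curves cross" means in practice, here is a minimal sketch (Python, assuming NumPy and scikit-learn are available). The two synthetic score distributions and their parameters are illustrative choices of mine, not taken from Hand's paper; the mean of classifier B's positives is tuned so that both classifiers have the same theoretical AUC while their ROC curves cross:

```python
# Two synthetic classifiers: (nearly) equal AUC, but crossing ROC curves.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
n = 100_000
y = np.r_[np.zeros(n), np.ones(n)]  # n negatives followed by n positives

# Classifier A: positive and negative scores with equal variance.
scores_a = np.r_[rng.normal(0.0, 1.0, n), rng.normal(1.0, 1.0, n)]

# Classifier B: positives with larger variance; the mean 1.904 is chosen so
# that the theoretical (binormal) AUC matches classifier A's (~0.76).
scores_b = np.r_[rng.normal(0.0, 1.0, n), rng.normal(1.904, 2.5, n)]

print(f"AUC A: {roc_auc_score(y, scores_a):.3f}")
print(f"AUC B: {roc_auc_score(y, scores_b):.3f}")

# The curves cross: each classifier wins in a different FPR region,
# which a single AUC value cannot show.
for name, s in [("A", scores_a), ("B", scores_b)]:
    fpr, tpr, _ = roc_curve(y, s)
    print(f"{name}: TPR at FPR=0.05 is {np.interp(0.05, fpr, tpr):.2f}, "
          f"TPR at FPR=0.50 is {np.interp(0.50, fpr, tpr):.2f}")
```

Both AUCs come out around 0.76, yet classifier B is clearly better at low false-positive rates and classifier A at higher ones. If your application has to operate at, say, FPR ≤ 0.05, the AUC ranking tells you nothing useful; that is exactly the information the single number throws away.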
You might also be interested in the following articles:
Hand, David J., and Christoforos Anagnostopoulos. "When is the area under the receiver operating characteristic curve an appropriate measure of classifier performance?" Pattern Recognition Letters 34.5 (2013): 492-495.
Lobo, Jorge M., Alberto Jiménez-Valverde, and Raimundo Real. "AUC: a misleading measure of the performance of predictive distribution models." Global Ecology and Biogeography 17.2 (2008): 145-151.
This confusion is another instance of the old problem of trying to summarize a more or less complex property in a single number.
The ROC curve gives the complete information.
Once you take a particular point on it, or a summary statistic of it (like the AUC), you have a single value that cannot carry the complete information: it necessarily highlights one particular aspect and ignores others. Whether or not the highlighted aspect is relevant for you depends on your aims.
Similar problems occur when people try to use only p-values, only r² values, or only mean values to interpret something that is actually too complex to be represented by a single numeric value. All of these summaries can be helpful under particular circumstances; the bad thing is that people tend to forget their limitations.
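A classic illustration of this point outside ROC analysis is Anscombe's quartet: four small datasets deliberately constructed so that their means, variances, and r² values are practically identical while their structures are completely different. A short sketch (Python, NumPy only; the numbers are Anscombe's published values):

```python
# Anscombe's quartet: near-identical summary statistics, very different data.
import numpy as np

x123 = np.array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5], float)
datasets = {
    "I":   (x123, np.array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96,
                            7.24, 4.26, 10.84, 4.82, 5.68])),
    "II":  (x123, np.array([9.14, 8.14, 8.74, 8.77, 9.26, 8.10,
                            6.13, 3.10, 9.13, 7.26, 4.74])),
    "III": (x123, np.array([7.46, 6.77, 12.74, 7.11, 7.81, 8.84,
                            6.08, 5.39, 8.15, 6.42, 5.73])),
    "IV":  (np.array([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8], float),
            np.array([6.58, 5.76, 7.71, 8.84, 8.47, 7.04,
                      5.25, 12.50, 5.56, 7.91, 6.89])),
}

# All four print mean_y ~ 7.50, var_y ~ 4.1, r^2 ~ 0.67, yet one is linear,
# one is curved, one has a single outlier, and one is a vertical cluster.
for name, (x, yv) in datasets.items():
    r = np.corrcoef(x, yv)[0, 1]
    print(f"{name}: mean_y={yv.mean():.2f}, "
          f"var_y={yv.var(ddof=1):.2f}, r^2={r**2:.2f}")
```

The summary statistics agree to two decimal places; only looking at the full data (a scatter plot, or the full ROC curve in the classifier case) reveals which summary is actually meaningful.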