Recently, many researchers have been using the predicted probabilities from logistic regression to assess the combined effect of two or more diagnostic tests [Test Variable] in ROC analysis. Is this a good idea?
It is a good idea to do that, provided it helps you answer the research question; otherwise, not necessarily.
If you only want to compare different tests and do not intend to use them together, there is no need for it.
If you want to harness the combined power of several diagnostic tests, then it helps.
If you want to see how much a test improves reclassification on top of a group of other tests/markers, then you might use the integrated discrimination improvement (IDI) and the net reclassification improvement (NRI). http://www.ncbi.nlm.nih.gov/pubmed/22679181
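In case a concrete illustration helps, here is a minimal sketch of the combined-marker approach in R using the pROC package; the data frame `dat` and the columns `outcome`, `test1`, and `test2` are hypothetical names, not taken from any of the attached materials.

```r
# Sketch: combine two diagnostic tests with logistic regression and use the
# predicted probability as a single combined marker in an ROC analysis.
# Assumes a data frame `dat` with a binary `outcome` and numeric `test1`, `test2`.
library(pROC)

fit <- glm(outcome ~ test1 + test2, data = dat, family = binomial)
dat$p_combined <- predict(fit, type = "response")  # predicted probabilities

roc_combined <- roc(dat$outcome, dat$p_combined)
auc(roc_combined)  # AUC of the combined marker
```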
Yes, it is a good idea. What you can also do is determine the added diagnostic value that a second test brings. You can run one logistic model with a single test and compare the AUC of its ROC curve with that of a second model that includes both tests. I use a program called MedCalc, which allows you to run a statistical test comparing AUC values to determine whether they are significantly different. You can also compare the pseudo R-squared values of the two models to see how much more variance is accounted for. I've attached one of my publications that provides an example of what I just described.
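A rough R equivalent of that AUC comparison (a DeLong test via the pROC package, rather than MedCalc) might look like the following sketch; the model and variable names are placeholders:

```r
# Sketch: does adding test2 improve discrimination over test1 alone?
library(pROC)

fit1 <- glm(outcome ~ test1,         data = dat, family = binomial)  # single test
fit2 <- glm(outcome ~ test1 + test2, data = dat, family = binomial)  # both tests

roc1 <- roc(dat$outcome, predict(fit1, type = "response"))
roc2 <- roc(dat$outcome, predict(fit2, type = "response"))

roc.test(roc1, roc2, method = "delong", paired = TRUE)  # compare the two AUCs
```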
Leucuta raises important points about this depending on the research question. The papers by Pencina and colleagues introducing the NRI and IDI are worth a read if you don't mind the statistical theory. The most recent is particularly good: Leening, M. J. G., Steyerberg, E. W., Van Calster, B., D'Agostino, R. B., Sr, & Pencina, M. J. (2014). Net reclassification improvement and integrated discrimination improvement require calibrated models: relevance from a marker and model perspective. Statistics in Medicine, 33(19), 3415–3418. doi:10.1002/sim.6133
I've attached a link to some R code for comparing AUCs and for calculating NRIs, IDIs, and the Risk Assessment Plot, which I described in the attached publication.
Article New Metrics for Assessing Diagnostic Potential of Candidate Biomarkers
Data R function for Risk Assessment Plot & reclassification metri...
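For anyone who cannot access the linked code, the IDI and the category-free (continuous) NRI can also be computed directly from their published definitions; a hedged sketch in R, where `p1` and `p2` are hypothetical vectors of predicted probabilities from the baseline and expanded models and `y` is the 0/1 outcome:

```r
# Sketch: IDI and category-free NRI computed from their definitions.
# p1 = predicted probabilities from the baseline model,
# p2 = predicted probabilities from the model with the added test/marker,
# y  = binary outcome (1 = event, 0 = non-event).
idi_nri <- function(p1, p2, y) {
  events    <- y == 1
  nonevents <- y == 0
  # IDI: gain in mean predicted risk among events minus gain among non-events
  idi <- (mean(p2[events]) - mean(p1[events])) -
    (mean(p2[nonevents]) - mean(p1[nonevents]))
  # Continuous NRI: net proportion of events whose risk moves up plus
  # net proportion of non-events whose risk moves down
  nri <- (mean(p2[events] > p1[events]) - mean(p2[events] < p1[events])) +
    (mean(p2[nonevents] < p1[nonevents]) - mean(p2[nonevents] > p1[nonevents]))
  c(IDI = idi, NRI_continuous = nri)
}
```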
I know this answer comes a lot later than the others. John and Leucuta have interesting points and raise important issues with this approach. In general, I would add the following. One must imagine how these tests would be combined in practice. Would they replace another test? Be used interchangeably? Be combined in series or in parallel? If you just run something like outcome ~ test1 + test2, then you are probably assuming that the overall accuracy represents a combination in parallel, meaning that both tests must be conducted and a positive result on either test may count as a positive result for the outcome. In general, regressions are conducted to adjust the effect of one predictor by another, so in this way you may be estimating how much the second test adds to the overall accuracy once the first has been done. So you must give some thought to how the tests would in fact be combined in order to interpret the results properly. These thoughts may lead you to a slightly different analysis approach, for example in how to combine the tests or how to define the outcome.
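To make the series/parallel distinction concrete, here is a small sketch comparing the two explicit decision rules, assuming `test1` and `test2` are already dichotomized (0/1) results in a hypothetical data frame `dat`:

```r
# Sketch: sensitivity and specificity of two explicit combination rules.
sens_spec <- function(pred, outcome) {
  c(sensitivity = mean(pred[outcome == 1]),      # positives detected among the diseased
    specificity = mean(1 - pred[outcome == 0]))  # negatives among the non-diseased
}

parallel_rule <- as.numeric(dat$test1 == 1 | dat$test2 == 1)  # positive if either test is positive
series_rule   <- as.numeric(dat$test1 == 1 & dat$test2 == 1)  # positive only if both tests are positive

sens_spec(parallel_rule, dat$outcome)  # parallel combination typically raises sensitivity
sens_spec(series_rule,   dat$outcome)  # series combination typically raises specificity
```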
The second comment is that you might be thinking of prediction models, where you are not interested in how much accuracy one test adds to another, but in how accurate the predictions of the outcome are. This comes to mind because the alternative measures proposed by John are often used for prediction models. If the interest is in predicting how likely a subject is to have the condition given these test results, then you are no longer talking about the tests' accuracy, but about individual predictions. Calibration-in-the-large, net benefit, and other decision-curve measures may also be a good addition to ROC curve results. If this is the case, I suggest reading Steyerberg's book "Clinical Prediction Models".
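As a quick illustration of those prediction-model measures, here is a sketch of calibration-in-the-large and of net benefit (the quantity plotted in a decision curve); `p` and `y` are hypothetical vectors of predicted probabilities and observed 0/1 outcomes:

```r
# Sketch: calibration-in-the-large and net benefit at selected risk thresholds.
# p = predicted probabilities from the model, y = observed binary outcome.
calibration_in_the_large <- mean(y) - mean(p)  # observed minus mean predicted event rate

net_benefit <- function(p, y, threshold) {
  n  <- length(y)
  tp <- sum(p >= threshold & y == 1)  # true positives at this threshold
  fp <- sum(p >= threshold & y == 0)  # false positives at this threshold
  tp / n - fp / n * threshold / (1 - threshold)
}

# Net benefit across a few thresholds, the basis of a decision curve
sapply(c(0.1, 0.2, 0.3), function(t) net_benefit(p, y, t))
```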