Yes, there is a need, driven by the rise of Big Data and the emergence of data sets containing billions of data points for analysis. New tools are emerging all the time, and there are already software packages that support this work: http://bigdata-madesimple.com/top-big-data-tools-used-to-store-and-analyse-data/
My response to this issue is oriented particularly toward language-based social research, e.g. corpus linguistics, lexicography, and (critical) discourse analysis, where modern tools for analyzing huge quantities of data bring both advantages and drawbacks.
Large quantities of data are especially well suited to corpus linguistics and lexicographic research that aims to identify and/or describe language patterns, including the lexical collocations and grammatical colligations of selected linguistic items. If, however, the objective of the research is, as in discourse analysis, to unravel the social tensions between two competing ideologies, then there is no need for large quantities of data and modern tools, because ideological import, hegemonic visions, and exclusion, for example, are observable even in a short text.
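To make the corpus-linguistic use case concrete, here is a minimal sketch of collocation extraction using NLTK; the corpus file name (my_corpus.txt), the frequency threshold, and the choice of PMI as the association measure are illustrative assumptions, not a prescribed workflow.

```python
# Minimal collocation-extraction sketch with NLTK (illustrative only).
import nltk
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

nltk.download("punkt", quiet=True)  # tokenizer model

# Assumed corpus file; replace with your own data.
with open("my_corpus.txt", encoding="utf-8") as f:
    tokens = nltk.word_tokenize(f.read().lower())

measures = BigramAssocMeasures()
finder = BigramCollocationFinder.from_words(tokens)
finder.apply_freq_filter(5)  # ignore rare pairs; threshold is arbitrary

# Rank candidate collocations by pointwise mutual information (PMI).
for pair, score in finder.score_ngrams(measures.pmi)[:20]:
    print(pair, round(score, 2))
```

The larger the corpus, the more stable such association scores become, which is one reason big data sets strengthen this kind of pattern-oriented research.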
For lexicography and corpus linguistics research, validity claims are strengthened when they are supported by huge quantities of data analyzed with modern tools. For discourse analysis, on the other hand, this reliance is risky because the corpus can be vulnerable to contextual distortion: the probability that the software picks up any "keyed" item, whether a typical, borderline, or failure example, is high. To minimize this risk, and for the sake of increasing justifiability and reducing falsifiability, checks and balances are strongly required.
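One simple form of such a check is to read concordance (KWIC) lines for each "keyed" item so that borderline and failure examples can be spotted by hand. Below is a minimal sketch using NLTK; the corpus file name and the node word "migrant" are illustrative assumptions.

```python
# Minimal concordance (KWIC) check sketch with NLTK (illustrative only).
import nltk

nltk.download("punkt", quiet=True)  # tokenizer model

# Assumed corpus file; replace with your own data.
with open("my_corpus.txt", encoding="utf-8") as f:
    tokens = nltk.word_tokenize(f.read())

text = nltk.Text(tokens)

# Display up to 25 lines of context around the node word so the analyst
# can judge each occurrence in context rather than trusting the counts alone.
text.concordance("migrant", width=80, lines=25)
```

Reading the retrieved contexts in this way keeps the quantitative output of the software answerable to qualitative, discourse-level judgment.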