I recommend using the Pandas library and processing the data in batches. In addition, this link provides a set of data science libraries in Python:
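A minimal sketch of batch processing with pandas: `read_csv` with `chunksize` returns an iterator of DataFrames, so the full file never has to fit in memory. Here a tiny in-memory CSV stands in for the real on-disk file (an assumption; replace `io.StringIO(...)` with your file path and the sum with your per-batch analysis).

```python
import io
import pandas as pd

# Tiny in-memory CSV standing in for a huge on-disk file
# (assumption: your real data is CSV-like).
csv_data = io.StringIO("x\n" + "\n".join(str(i) for i in range(10)))

# chunksize makes read_csv yield DataFrames of at most that many
# rows, so the whole file never has to be loaded at once.
total = 0
for chunk in pd.read_csv(csv_data, chunksize=4):
    total += chunk["x"].sum()  # replace with your per-batch analysis

print(total)  # sum of 0..9 -> 45
```

The same pattern works for aggregations, filtering, or writing each processed chunk back out to a compressed format (e.g. Parquet), which also addresses the compression question.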
Artur Jordão, I have been working with pandas and other Python libraries and am familiar with their capabilities. My problem is that I have a huge database (2 TB). Importing and handling such a huge file in Python is very challenging and inefficient. I am looking for data compression options or other solutions. Thanks for the answer, though.
Depending on the type of analysis you want to perform, it may suffice to draw a representative sample from the data. Otherwise, if you need simple exact counts, you could set up a Hadoop infrastructure and parallelize your analytics via map/reduce.
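Drawing a representative sample from a file too large to load can be done in one streaming pass with reservoir sampling, which keeps a uniform sample of k items in O(k) memory. A sketch (the `range` stream is a stand-in; with real data you would iterate over the lines of the 2 TB file instead):

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Keep a uniform random sample of k items from a stream of
    unknown length, using O(k) memory (classic reservoir sampling)."""
    rng = random.Random(seed)
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)           # fill the reservoir first
        else:
            j = rng.randint(0, i)         # replace with decreasing probability
            if j < k:
                sample[j] = item
    return sample

# Stand-in stream; with real data this would be e.g. a line
# iterator over the big file: reservoir_sample(open("data.csv"), 10_000)
sample = reservoir_sample(range(1_000_000), k=100)
print(len(sample))  # 100
```

The sample can then be analyzed entirely in memory with pandas, at the cost of exactness.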
I would recommend using cloud computing services (AWS, Google Cloud, etc.). They provide suitable solutions to this problem, and Hadoop and Spark are available in the cloud as well.
As Mr. Ahmad mentioned, you can use cloud computing; fortunately, almost every university has at least one such system, especially in the electrical engineering, mathematics, and physics departments. I would not suggest an online cloud computing service, because with big data, transferring it is very time-consuming. You are better off finding a computing cluster at your university and using that.