For big data processing, you need to take care of CPU speed and the amount of RAM. At a personal research level, you can take the latest processor available on the market (e.g. a Core i7) and a good amount of RAM (8 GB or higher).
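If you want to check quickly whether your current machine meets that bar, a small Python snippet (assuming the psutil package is installed) will do:

```python
# Quick sanity check of CPU cores and RAM before working on larger datasets.
import psutil

cores = psutil.cpu_count(logical=False)            # physical CPU cores
ram_gb = psutil.virtual_memory().total / (1024 ** 3)

print(f"Physical cores: {cores}")
print(f"Total RAM: {ram_gb:.1f} GB")

if ram_gb < 8:
    print("Consider upgrading to 8 GB of RAM or more.")
```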
I recommend using cloud infrastructure such as AWS, Google Cloud Platform or Azure, where you can rent massive computing power for big data/ML/DL jobs. There are plenty of preconfigured options for such jobs, e.g. the AWS Deep Learning AMI for EC2, SageMaker, etc.
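As a minimal sketch (assuming your AWS credentials are already configured; the AMI ID and key pair name below are placeholders, not real values), spinning up such an instance with boto3 looks roughly like this:

```python
# Sketch only: launch a GPU instance from a Deep Learning AMI via boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",   # placeholder: Deep Learning AMI ID for your region
    InstanceType="p3.2xlarge",         # single-GPU instance; pick what your budget allows
    KeyName="my-key-pair",             # placeholder: your existing EC2 key pair
    MinCount=1,
    MaxCount=1,
)

print("Launched instance:", response["Instances"][0]["InstanceId"])
```

Remember to stop or terminate the instance when the job is done, since GPU instances are billed by the hour.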
If you are going to analyze really huge amounts of data, take a look at horizontally scalable tools/frameworks (like Kafka, Cassandra and Spark; on premise or in the cloud), organized in a Lambda architecture. Thanks to that horizontal scaling, you can even start with a small server and scale out later on.
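As a rough sketch of the speed layer of such an architecture (assuming a Kafka topic called events on localhost:9092 and the spark-sql-kafka package on the classpath), Spark Structured Streaming could maintain running counts like this:

```python
# Sketch of a Lambda "speed layer": read events from Kafka and keep a
# running count per key with Spark Structured Streaming.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("speed-layer-sketch").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker address
    .option("subscribe", "events")                        # assumed topic name
    .load()
)

# Kafka delivers keys/values as binary; cast the key to a string and count per key.
counts = (
    events.select(col("key").cast("string"))
    .groupBy("key")
    .count()
)

query = (
    counts.writeStream
    .outputMode("complete")
    .format("console")   # in a real setup this would feed Cassandra or another serving store
    .start()
)

query.awaitTermination()
```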
If your data is not that huge, take a look at the Python libraries around (pandas, NumPy, Dask, etc.); you will find a lot of great tools.
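For example, a quick pandas sketch (the file and column names here are just illustrative) already covers a lot of ground for data that fits in memory:

```python
# Load a CSV and aggregate revenue per region with pandas.
import pandas as pd

df = pd.read_csv("sales.csv")                 # assumed input file
summary = (
    df.groupby("region")["revenue"]           # assumed column names
    .agg(["count", "sum", "mean"])
    .sort_values("sum", ascending=False)
)
print(summary.head())
```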
What counts as "the best" will depend on your budget and the project's requirements. However, you can try the Hadoop environment in a virtualized context. Using at least 3 nodes with HBase, Pig, etc. will give you enough scalability to demonstrate some kind of performance before making an investment. You can even incorporate some data from your customers and visualize it using Zeppelin (among others).
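Python can still act as the glue in that setup; here is a hedged sketch using happybase (it needs the HBase Thrift server running, and the host, table and column family names are only placeholders):

```python
# Write and read one row in HBase from Python via happybase.
import happybase

connection = happybase.Connection("hbase-master")   # assumed Thrift server host

# Create a demo table with one column family if it does not exist yet.
if b"customers" not in connection.tables():
    connection.create_table("customers", {"info": dict()})

table = connection.table("customers")
table.put(b"cust-001", {b"info:name": b"Acme Corp", b"info:country": b"DE"})

row = table.row(b"cust-001")
print(row[b"info:name"].decode())
```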