This depends on the type of data. In general, if you have no prior idea of how the data should be classified, or what classes they should be clustered into, use a SOM (Self-Organizing Map). Otherwise, use backpropagation (BP) or any other supervised ANN. For more details on how to use numerical data with an ANN, check the following Web page:
The above link is an example of how to load numerical data into the ANN toolbox in Matlab to cluster data using a SOM. You can also use deep learning in Matlab; see the following link.
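If you prefer Python over Matlab, here is a minimal sketch of SOM clustering using the third-party MiniSom package; the grid size, parameters, and toy data below are my own illustrative assumptions, not part of the original answer:

```python
# A minimal sketch of SOM clustering in Python using the third-party
# MiniSom package (pip install minisom). Grid size, learning rate,
# and toy data are illustrative assumptions.
import numpy as np
from minisom import MiniSom

# Toy numerical data: 100 samples with 4 features each.
data = np.random.rand(100, 4)

# A 6x6 map of neurons; input_len must match the feature count.
som = MiniSom(6, 6, input_len=4, sigma=1.0, learning_rate=0.5)
som.random_weights_init(data)
som.train_random(data, num_iteration=1000)

# Each sample is assigned to its best-matching unit (a grid cell),
# which acts as its cluster label.
clusters = [som.winner(x) for x in data]
print(clusters[:5])
```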
Since the question mentions 'classify' rather than 'cluster', I will assume it is a supervised classification problem.
The most popular framework for neural networks right now is TensorFlow. Keras is a high-level wrapper for TensorFlow, and its Python API is very well documented and readable.
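As a rough illustration of how compact the Keras API is, here is a minimal supervised classifier; the layer sizes, activations, and toy data are assumptions for the sketch, not prescriptions:

```python
# A minimal sketch of a supervised binary classifier in Keras.
# Layer sizes, activations, and the toy data are illustrative only.
import numpy as np
from tensorflow import keras

# Toy data: 200 samples, 10 numerical features, binary labels.
X = np.random.rand(200, 10)
y = np.random.randint(0, 2, size=200)

model = keras.Sequential([
    keras.Input(shape=(10,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))
```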
I'd like to add to the other suggestions that the data play a vital role in selecting an adequate algorithm. How much data do you have (enough for DL methods)? How complex are the data? Are they, for example, linearly separable? If so, you may not need a neural network but a simpler method (you don't have to throw TensorFlow at every single problem). It would be helpful to provide this information in your future questions. One cheap sanity check is to fit a linear model first, as sketched below.
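For instance, here is a rough sketch of such a linear-baseline check with scikit-learn's LogisticRegression; the toy data and cross-validation setup are my own illustrative assumptions:

```python
# A quick check for (near-)linear separability: if a simple linear
# classifier already scores well, a neural network may be overkill.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X = np.random.rand(200, 10)          # replace with your features
y = np.random.randint(0, 2, 200)     # replace with your labels

baseline = LogisticRegression(max_iter=1000)
scores = cross_val_score(baseline, X, y, cv=5)
print("linear baseline accuracy: %.2f +/- %.2f"
      % (scores.mean(), scores.std()))
```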
I want to build a decision-making system in the medical field using a neural network. The collected data are blood sample analyses of patients. If using DL methods, how many data samples are needed? Also, what are the steps required for developing a decision-making system?
If you are working with blood tests (I assume you mean blood test results by blood samples), it would be possible to build a neural network, but it is not recommended: neural networks are black boxes and cannot be interpreted easily. It would therefore be a better option to use a random forest or a boosting algorithm, which lets you see the feature importances assigned by the algorithm.
Another major issue with medical data is that there are usually many missing values, and in these scenarios neural networks do not perform well. A sketch combining imputation with a random forest follows below.
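To make the suggestion concrete, here is a minimal sketch combining imputation of missing values with a random forest in scikit-learn; the toy data, imputation strategy, and hyperparameters are my own illustrative assumptions:

```python
# A minimal sketch: impute missing values, fit a random forest, and
# inspect feature importances. Toy data and settings are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline

# Toy blood-test-like data with missing values (NaN).
X = np.random.rand(150, 5)
X[np.random.rand(150, 5) < 0.1] = np.nan   # ~10% missing entries
y = np.random.randint(0, 2, 150)           # e.g. diagnosis yes/no

clf = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
])
clf.fit(X, y)

# Feature importances make the model easier to interpret than a
# black-box neural network.
print(clf.named_steps["forest"].feature_importances_)
```

Wrapping the imputer and the model in one pipeline ensures the same preprocessing is applied consistently at prediction time.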
Hi, I would like to add that, before applying deep learning techniques that require substantial resources such as a GPU, it is better to first try traditional machine learning techniques like an SVM or a decision tree.
In my area, several papers have shown that when training data are limited, traditional machine learning techniques can achieve similar (or even better) performance than deep learning, as shown in the following papers:
Easy over Hard: A Case Study on Deep Learning.
Neural-machine-translation-based commit message generation: how far are we?
Compared with deep learning, I think that traditional ML techniques are easier to learn. Also, if you do not have much data, traditional ML techniques should be the better choice; a quick comparison sketch is given below.
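As a concrete starting point, here is a minimal sketch comparing the two traditional baselines mentioned above (SVM and decision tree) with cross-validation in scikit-learn; the toy data and hyperparameters are illustrative assumptions only:

```python
# A minimal sketch comparing an SVM and a decision tree with
# cross-validation. Toy data and hyperparameters are illustrative.
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(120, 8)            # replace with your features
y = np.random.randint(0, 2, 120)      # replace with your labels

for name, model in [("SVM", SVC(kernel="rbf")),
                    ("decision tree", DecisionTreeClassifier(max_depth=5))]:
    scores = cross_val_score(model, X, y, cv=5)
    print("%s: %.2f +/- %.2f" % (name, scores.mean(), scores.std()))
```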
You can use the following platforms; many tutorials for them have been published.
R (https://www.r-project.org/)
Weka (https://www.cs.waikato.ac.nz/ml/weka/)
scikit-learn (https://scikit-learn.org/)
No matter which technique you use (deep learning or traditional machine learning), you always need to prepare a dataset first and use it to build models. For all of the mentioned platforms, you need to prepare a CSV file (or another format such as ARFF).
If you are familiar with Python, you can use scikit-learn. There are many examples on its website. I think that after trying a few of them, it will be easy for you to apply the techniques to your problem. A minimal end-to-end sketch is shown below.
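To give an idea of what that looks like, here is a minimal end-to-end sketch with scikit-learn; the file name blood_tests.csv and the label column diagnosis are hypothetical placeholders for your own prepared dataset:

```python
# A minimal end-to-end sketch: load a prepared CSV dataset, split it,
# train a scikit-learn model, and evaluate. The file name and label
# column below are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("blood_tests.csv")        # hypothetical file
X = df.drop(columns=["diagnosis"])         # hypothetical label column
y = df["diagnosis"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = DecisionTreeClassifier()
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```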