Basically, you would need to train your AI on a representative sample of data from your system (the training set), then test its predictions against a separate dataset (the testing set). Splitting your initial dataset in a 70:30 ratio to generate these two sets is generally considered a fair starting point. In principle, this works for datasets with enough data points to establish correlations and trends between parameters or groups of parameters. If you plan to use this in real-time analysis of a process, then as more results accumulate over time, the accuracy of your predictions can be further fine-tuned and improved.
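As a minimal sketch of that 70:30 split (real projects typically use a library helper such as scikit-learn's train_test_split; the sensor data here is synthetic and purely illustrative):

```python
import random

def split_dataset(rows, test_ratio=0.3, seed=42):
    """Shuffle and split a dataset into (training, testing) sets.

    Shuffling first avoids ordering bias, e.g. sensor readings
    logged in time order.
    """
    rng = random.Random(seed)        # fixed seed for a reproducible split
    shuffled = rows[:]               # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_ratio)
    return shuffled[n_test:], shuffled[:n_test]

# Example with 100 synthetic sensor readings
data = [{"sensor_a": i, "target": 2 * i} for i in range(100)]
train, test = split_dataset(data)
print(len(train), len(test))  # → 70 30
```

The model is then fit on `train` only, and its predictions are scored on `test`, which it has never seen.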
Whether this is a fruitful exercise, and what results it would generate, depends on what your constituent variables are. Of course, since there is underlying physics in the system, the dependencies you find will inevitably satisfy some constraints and relationships.
You may want to start by identifying the target variables you want to predict and the potentially influencing parameters in the system that can be obtained from your sensors.
AI is often used to classify patterns or to make predictions on continuous or time-domain data. It is first necessary to define the purpose of your project: what are you trying to accomplish with your collected data, and do you believe the data contains information that can contribute to your desired result?