One possible way is to use a method like principal component analysis to reduce the number of dimensions (variables). In some cases feature selection methods could be useful as well. One may then use any data mining method on the reduced set of variables.
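For illustration, here is a minimal sketch of that idea in Python with scikit-learn (the matrix X below is just a simulated placeholder for your own data; the same can be done in R with prcomp()):

```python
# Sketch: reduce a high-dimensional numeric data set with PCA.
# X is a simulated placeholder; replace it with your own
# observations-by-variables matrix.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 5))                        # 5 hidden factors
loadings = rng.normal(size=(5, 50))
X = latent @ loadings + 0.1 * rng.normal(size=(200, 50))  # 50 correlated variables

X_std = StandardScaler().fit_transform(X)    # PCA is sensitive to scale
pca = PCA(n_components=0.95)                 # keep enough components for 95% of the variance
X_reduced = pca.fit_transform(X_std)

print(X_reduced.shape)                       # far fewer columns than X
print(pca.explained_variance_ratio_[:5])     # variance explained by the leading components
```

Any downstream data mining method can then be run on X_reduced instead of the original variables.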
It depends on what you mean by non-linear data. For each property your data have, there are specific methods that can help.
Are these data a time series?
Are these questionnaire data or something physiological?
Is the non-linearity something theoretically meaningful for you or not?
If the non-linearity matters and you want to predict something out of something else, and cannot decide which are the most relevant variables for your model, it will be instructive to start by running a series of Box-Cox analyses. They will tell you the degree of non-linearity in the data by finding the exponent of your predictor/criterion that maximizes the R^2. The output of these analyses will help you understand whether all variables are non-linear in the same way, i.e., whether they have more or less the same exponent or not.
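To make that concrete, here is a rough sketch of the grid-search idea (in Python; the variables x and y are made up for illustration): try a range of exponents, transform the predictor, and keep the exponent that maximizes the R^2 of a simple regression. In R, MASS::boxcox() performs the closely related likelihood-based version for the response of a linear model.

```python
# Sketch: find the Box-Cox exponent (lambda) of a predictor that maximizes
# R^2 against a criterion. x and y are simulated stand-ins for your data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.uniform(0.5, 10, 300)              # predictor (must be positive for Box-Cox)
y = 3 * x**0.4 + rng.normal(0, 0.2, 300)   # criterion with a non-linear link

def boxcox_transform(x, lam):
    """Box-Cox power transform; log when lambda is (near) zero."""
    return np.log(x) if abs(lam) < 1e-8 else (x**lam - 1) / lam

lambdas = np.arange(-2, 2.01, 0.1)
r2 = []
for lam in lambdas:
    xt = boxcox_transform(x, lam)
    slope, intercept, r, p, se = stats.linregress(xt, y)
    r2.append(r**2)

best = lambdas[int(np.argmax(r2))]
print(f"exponent maximizing R^2: {best:.1f}, R^2 = {max(r2):.3f}")
```

Running this for every predictor gives you a quick overview of whether the variables share roughly the same exponent.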
So, you may then consider applying some transformation to your data before trying to reduce the dimensionality. This would reduce the effect of non-linearity and improve the output of the principal component analysis. If your data come from a time series, you might consider looking at periodic dynamics and Fourier analysis. This would simplify your life a lot. All the tools you need for that can be found in the free software R.
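If the time-series route applies, a quick first check for periodic dynamics is a periodogram. Below is a small sketch in Python (the series is simulated purely for illustration); in R the equivalent tools are fft() and spectrum().

```python
# Sketch: inspect periodic structure in a time series via its periodogram.
# The series below is simulated; replace it with your own data.
import numpy as np
from scipy.signal import periodogram

fs = 12.0                                   # e.g. 12 samples per year (monthly data)
t = np.arange(240) / fs
series = np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.random.default_rng(2).normal(size=t.size)

freqs, power = periodogram(series, fs=fs)
dominant = freqs[np.argmax(power[1:]) + 1]  # skip the zero-frequency term
print(f"dominant frequency ~ {dominant:.2f} cycles per unit time")
```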
I think Genetic Programming (GP) may be useful to evolve a mathematical (white-box) model for high-dimensional non-linear data. Moreover, GP has the advantage that it keeps only the relevant variables in the final model, and all irrelevant variables are removed automatically. Techniques such as fitness sharing have been developed by researchers to gauge the importance of different variables during the evolutionary process. To evolve a more accurate model, you can increase the number of data instances (records) by over-sampling.
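As one concrete (and by no means only) way to try this in Python, the gplearn library implements symbolic regression by GP. In the synthetic example below only two of the ten inputs actually drive the target, so a well-evolved formula should drop the irrelevant ones automatically.

```python
# Sketch: symbolic regression with genetic programming via gplearn
# (assumes gplearn is installed). Only x0 and x3 drive the simulated
# target, so the evolved formula should ignore the other inputs.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(500, 10))
y = X[:, 0] ** 2 - 2 * X[:, 3] + rng.normal(0, 0.05, 500)

est = SymbolicRegressor(
    population_size=1000,
    generations=20,
    function_set=("add", "sub", "mul", "div"),
    parsimony_coefficient=0.001,   # penalize overly long expressions
    random_state=0,
)
est.fit(X, y)
print(est._program)                # the evolved "white box" formula
```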