If you consider machine learning as part of big data analytics, there have been quite a few proposals. Most of these are theoretical, but a handful of papers also describe experimental realizations. Outside the lab, the most viable option is D-Wave's adiabatic quantum optimizer, but that paradigm has its own set of problems: it will be a while before it becomes large-scale, and it is, of course, prohibitively expensive. In any case, I believe we will see practical quantum machine learning far sooner than a universal quantum computer.
I have written an introductory-level monograph on the subject; you might want to take a look.
Book Quantum Machine Learning: What Quantum Computing Means to Data Mining
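To make the adiabatic-optimizer remark concrete: a machine like D-Wave's minimizes QUBO objectives (quadratic functions of binary variables). Here is a minimal, purely classical sketch of that problem format, brute-forced on a hypothetical 3-variable instance (the matrix Q is made up for illustration):

```python
import itertools
import numpy as np

# Hypothetical 3-variable QUBO instance: minimize x^T Q x over x in {0,1}^3.
# This is the kind of objective an adiabatic optimizer targets; here we
# simply enumerate all 2^3 assignments classically.
Q = np.array([[-1.0,  2.0,  0.0],
              [ 0.0, -1.0,  2.0],
              [ 0.0,  0.0, -1.0]])

best = min(itertools.product([0, 1], repeat=3),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print(best)  # -> (1, 0, 1)
```

The point of the quantum hardware, of course, is that this enumeration becomes infeasible classically once the number of variables grows, while the annealer samples low-energy assignments directly.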
@Remi, yes, the question looks too general. I did not mean using quantum computers. However, my question is: is it possible to use quantum mechanics in big data, more precisely, to search through big data using the concept of superposition?
What you call "classical" in reality means macroscopic.
Macroscopic waves differ from individual, quantum waves in that they have many, many absorbers, whereas an individual wave has only one emitter and only one absorber.
Born and Heisenberg's "probability amplitude" waves differ from physical waves: they have no absorbers, nor are they physical in any way. They only represent the half-knowledge of a Göttingen-København physicist, within the limits of the surreptitious postulates he or she has inherited from the Göttingen-København sect.
I did not speak of "absorption" but of the absorber, one for each individual wave (photon or fermion). Said otherwise: the annihilation reaction, if we use the terminology in which, in a quantum graph, an arc joins a creation vertex (or reaction) to an annihilation vertex (or reaction).
I'm with Stam on this: Grover's algorithm is a good place to start for you. Deciding what "big data" means may be another! In particular, I have experience using simulated quantum tunnelling for the fast recognition of images. So yes, you can use simulated quantum mechanics to gain speedups over classical algorithms, which may have applications in big data analytics.
Research Neuroplasticity demonstrated in a Zero Logic Quantum Neural Network
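Since Grover's algorithm keeps coming up, here is a small numpy simulation of it on a toy unstructured search (the problem size and marked index are made up for illustration). It shows the two steps of each Grover iteration, the oracle phase flip and the inversion about the mean, and why only about sqrt(N) iterations are needed:

```python
import numpy as np

# Toy simulation of Grover search over N = 2^n items (illustrative example).
N = 16
marked = 11                            # index of the item we are looking for

state = np.full(N, 1 / np.sqrt(N))     # start in the uniform superposition

# Roughly (pi/4) * sqrt(N) iterations maximize the marked amplitude.
iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    state[marked] *= -1                # oracle: flip the marked amplitude's sign
    state = 2 * state.mean() - state   # diffusion: inversion about the mean

print(np.argmax(state**2))  # -> 11 with probability > 0.9
```

For N = 16 this takes 3 iterations instead of the ~8 expected classical probes; the quadratic gap is what makes the algorithm interesting for search over large data sets, assuming the data can be queried in superposition in the first place.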
If I understood correctly, there are two very different situations.
The first one is: you have obtained big statistical data through a large number of measurements of absolutely identical small quantum systems. You can use quantum mechanics and classical statistical methods to analyse these data.
The second one is: you have obtained big data from a single measurement of a quantum system with a large number of particles. In that case you can use quantum statistics to analyse these data.
I suppose that also depends on future results. Concerning the algorithms mentioned above (Peter Wittek): since most of them rely on (a) data in the form of quantum information and (b) QRAM, neither of which has a practical implementation yet, the question of whether big data analysis can maintain a speedup through quantum computing is still an open one. Additionally, there are issues with the prefactors in recent Quipper implementations of, e.g., the HHL algorithm, which might also make quantum computing useless for big data applications (if big is not big enough).
Your question is rather vague. I would define the essential elements of quantum mechanics as the representation of physical objects and interactions as operators and vectors in a vector space (often the space of L^2-integrable functions, but many other spaces work just as well). Additionally, the specific operators come from the Lie algebra of the symmetry group related to specific conserved quantities (Noether's theorem).

Certainly, anyone is free to take analogies from this and make big data algorithms, with examples given in other answers. But I would posit that this is not "using quantum mechanics" but learning from it. In big data, the underlying symmetry is difficult to define (though the submanifold implied by dimensionality reduction might count), and finding the correct Lie algebra and symmetry group for an empirical data set is also not well defined.

A different question is whether quantum processes could be useful in big data, i.e., quantum computing. There is a very large research community that has been working on that question for years, and I assume they will continue to work on it for some time.
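As a minimal illustration of the "operators and vectors" picture described above (a toy example, not tied to any particular data set): an observable is a Hermitian matrix, a state is a unit vector, and the expected measurement outcome is the inner product ⟨ψ|A|ψ⟩.

```python
import numpy as np

# Toy example of the operator/vector formalism: the Pauli-Z observable
# measured on an equal superposition of its two eigenstates.
sigma_z = np.array([[1, 0],
                    [0, -1]], dtype=complex)           # Hermitian observable

psi = np.array([1, 1], dtype=complex) / np.sqrt(2)     # unit state vector

# Expected measurement outcome: <psi| sigma_z |psi>
expectation = np.vdot(psi, sigma_z @ psi).real
print(expectation)  # -> 0.0 (outcomes +1 and -1 are equally likely)
```

This is the structure the answer refers to; the open problem it points out is that empirical big data comes with no given Hermitian operators or symmetry group, so the formalism does not transfer directly.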