There is a growing literature on quantum machine learning. However, I do not understand how the data points are supposed to be read in.

The reading problem: The amplitude distribution of a quantum state is initialised by reading N data points. The quantum algorithms themselves may require far fewer than O(N) steps and be faster than their classical counterparts (square-root or even logarithmic speedups), but preparing the state still means reading the N data points, which takes O(N). Moreover, the quantum state representing the points is destroyed (collapses) during measurement, so it cannot be reused.
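To make the bottleneck concrete, here is a minimal classical simulation (my own illustration, not from any specific QML paper) of amplitude encoding and measurement. Normalising the data vector necessarily touches all N entries, and a single measurement returns only one basis index rather than the full dataset:

```python
import numpy as np

def amplitude_encode(data):
    """Encode N classical values as amplitudes of |psi> = sum_i a_i |i>.
    Computing the norm reads every entry, so preparation is O(N)."""
    a = np.asarray(data, dtype=float)
    return a / np.linalg.norm(a)

def measure(state, rng):
    """Simulate one projective measurement in the computational basis.
    The outcome is a single index i, drawn with probability |a_i|^2;
    afterwards the superposition is gone (collapse)."""
    probs = np.abs(state) ** 2
    return int(rng.choice(len(state), p=probs))

data = [3.0, 1.0, 2.0, 4.0]
state = amplitude_encode(data)          # O(N) preparation
outcome = measure(state, np.random.default_rng(0))
# outcome is one index in {0, ..., 3}; recovering all N values
# would require re-preparing and re-measuring the state many times.
```

The sketch shows why a sub-O(N) quantum subroutine does not automatically beat a classical algorithm end to end: the O(N) loading step dominates unless the data is already available in quantum form (e.g. via a hypothetical QRAM).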

Where, then, does the advantage over a classical algorithm come from?

Has this reading/measurement problem already been solved? If not, why is it ignored?
