They're certainly related - both can be seen as applications of dynamic programming.
However, there are some differences:
A* needs a model of how states transition (a successor function and step costs); Q-learning can learn directly from experience, without any prior model of the problem.
A* needs a heuristic estimate of the remaining cost; Q-learning only requires cost (reward) feedback from the environment, picked up implicitly as it acts.
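To make the first two points concrete, here's a minimal sketch of a tabular Q-learning backup (my illustration, not part of the original comparison; the names `ALPHA`, `GAMMA`, and `ACTIONS` are assumptions). Notice it consumes nothing but one sampled transition `(s, a, r, s_next)`, whereas A* can't even be invoked without a successor model and a heuristic:

```python
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.99        # hypothetical learning rate and discount
ACTIONS = [0, 1, 2, 3]          # hypothetical discrete action set
Q = defaultdict(float)          # Q[(state, action)] -> estimated value

def q_update(s, a, r, s_next):
    """One Q-learning backup from a single experienced transition.

    No transition model, no heuristic: just the sample itself.
    """
    best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

# By contrast, a typical A* signature needs the problem spelled out up front:
# a_star(start, goal_test, successors, step_cost, heuristic)
```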
A* is (in some sense) optimal in the number of backups it performs. Q-learning, not so much.
A* is excellent at exploring the space, because the heuristic directs the search. Q-learning, not so much - its exploration is typically undirected (e.g. ε-greedy).
Q-learning can be applied to cyclic (infinite "maintenance") tasks through the use of a discount factor that keeps the value function finite. A* assumes a terminal state (otherwise a non-discounted cost function may be infinite), and so is not suitable for non-terminating processes.
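To see why the discount factor keeps things finite: if the per-step cost is bounded by $r_{\max}$ and $0 \le \gamma < 1$, the discounted return converges geometrically, whereas with $\gamma = 1$ on a non-terminating task it can diverge:

$$\left|\sum_{t=0}^{\infty} \gamma^t r_t\right| \;\le\; \sum_{t=0}^{\infty} \gamma^t \, r_{\max} \;=\; \frac{r_{\max}}{1-\gamma} \;<\; \infty.$$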
It's not immediately obvious to me that A* handles stochastic processes, although I'm sure if you look you'll find some work on it.
Q-learning generalizes to continuous and high-dimensional state spaces (sort of, via function approximation). It's less obvious that this works for A* (again, though, I'm sure if you look you'll find some work).
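For the continuous/high-dimensional case, the usual trick is function approximation. Here's a sketch of semi-gradient Q-learning with a linear approximator (again my illustration, under assumed sizes `N_FEATURES` and `N_ACTIONS`; in practice the feature map `phi(s)` would come from tile coding, RBFs, or a neural network):

```python
import numpy as np

N_FEATURES, N_ACTIONS = 8, 4            # hypothetical problem sizes
ALPHA, GAMMA = 0.01, 0.99
w = np.zeros((N_ACTIONS, N_FEATURES))   # one weight vector per action

def q(phi_s, action):
    """Linear value estimate Q(s, a) = w_a . phi(s)."""
    return w[action] @ phi_s

def q_update(phi_s, a, r, phi_next):
    """Semi-gradient Q-learning step; states enter only through phi(s)."""
    td_target = r + GAMMA * max(q(phi_next, a2) for a2 in range(N_ACTIONS))
    w[a] += ALPHA * (td_target - q(phi_s, a)) * phi_s  # grad of Q wrt w[a] is phi(s)
```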
In short: if you can apply A* to your problem, there's probably nothing better. If Q-learning is all you can apply, then almost anything else would probably be better.