Indeed, when solving an optimal control problem there are alternatives to Pontryagin's maximum principle, such as discretizing the dynamics with an explicit or implicit Euler method with a fixed step size, or using Reinforcement Learning (RL) and Dynamic Programming (DP).
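To make the first alternative concrete, here is a minimal sketch (not from the original post; the system `x' = -x + u`, the step size `h`, and all function names are illustrative assumptions) of discretizing continuous-time dynamics with an explicit Euler scheme with fixed step size, which is the first step of a direct transcription approach:

```python
import numpy as np

def f(x, u):
    """Continuous-time dynamics x' = f(x, u); a toy linear system here."""
    return -x + u

def euler_step(x, u, h):
    """One explicit (forward) Euler step: x_{k+1} = x_k + h * f(x_k, u_k)."""
    return x + h * f(x, u)

def simulate(x0, controls, h):
    """Roll the discretized dynamics forward under a given control sequence."""
    xs = [x0]
    for u in controls:
        xs.append(euler_step(xs[-1], u, h))
    return np.array(xs)

# With zero control, each step multiplies x by (1 - h), so after 10 steps
# of h = 0.1 the state is 0.9**10 times its initial value.
traj = simulate(1.0, [0.0] * 10, 0.1)
```

Once the dynamics are discretized this way, the optimal control problem becomes a finite-dimensional optimization over the control sequence, which can be handed to a standard nonlinear programming solver.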
For more information about this subject, I suggest you see the link and the file attached to the topic.
The state-of-the-art methodologies are moving toward Reinforcement Learning. To briefly explain how RL works: it solves the Dynamic Programming problem, but with an approximated cost-to-go. One drawback of the DP solution is the curse of dimensionality, which arises for models with large state spaces: as the problem size grows, the computational time increases exponentially. RL addresses this issue by approximating the recursive cost-to-go computation of DP. MPC is another solution technique for optimal control problems, and it can be viewed as a simplified version of rollout (an RL technique for solving finite-horizon optimal control problems).
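To illustrate the recursive cost-to-go computation that RL approximates, here is a small sketch (all names, the five-state chain, the unit step cost, and the discount factor are my own illustrative assumptions, not from the original post) of exact tabular value iteration, i.e. the DP solution that becomes intractable when the state space is large:

```python
import numpy as np

n_states = 5          # states 0..4 on a chain; state 4 is the goal
actions = [-1, +1]    # move left or right (clipped at the boundaries)
gamma = 0.95          # discount factor
step_cost = 1.0       # cost per step until the goal is reached

J = np.zeros(n_states)            # cost-to-go estimate J(s)
for _ in range(200):              # value-iteration sweeps until convergence
    J_new = np.zeros(n_states)
    for s in range(n_states):
        if s == n_states - 1:     # goal state: zero cost-to-go
            continue
        # Bellman backup: J(s) = min_a [ cost + gamma * J(s') ]
        J_new[s] = min(
            step_cost + gamma * J[max(0, min(n_states - 1, s + a))]
            for a in actions
        )
    J = J_new
```

Here every state is enumerated in each sweep, which is exactly what blows up exponentially in higher dimensions; RL methods replace the table `J` with a parametric approximator (e.g. a neural network) fitted from sampled trajectories.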