Local search methods increase the exploitation capability of an optimization or metaheuristic algorithm. They can also help to escape local optima.
There are two types of local search you can hybridize with metaheuristic methods: deterministic and stochastic algorithms. Examples include interpolation methods (which approximate gradients) and direct search methods. With regard to uncertain decision models, the dominant approaches to modeling stochastic programming problems are models with probabilistic constraints and robust models.
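As a rough illustration of hybridizing a deterministic direct search with a stochastic outer loop (a minimal sketch; the function name and constants are my own, not from any particular library), random restarts supply exploration while a compass/coordinate search does the local exploitation:

```python
import random

def compass_search(f, x, step=0.5, shrink=0.5, tol=1e-6):
    """Deterministic direct search: try +/- moves along each coordinate,
    shrink the step when no move improves f."""
    x, fx = list(x), f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = x[:]
                y[i] += d
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= shrink
    return x, fx

# Stochastic outer loop (random restarts) + deterministic local step,
# on an illustrative quadratic with minimum at (1, -2).
f = lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2
best = min((compass_search(f, [random.uniform(-5, 5) for _ in range(2)])
            for _ in range(5)), key=lambda r: r[1])
```

In a real hybrid, the outer loop would be the metaheuristic (GA, PSO, ...) and the local step would polish its candidate solutions.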
Local search cannot escape local minima - by the very definition of the term. See my course book "An Introduction to Continuous Optimization" (Dover Publications).
There are many ways to combine local search with other metaheuristics to balance exploitation and exploration, depending on the characteristics of each problem.
But, as highlighted above, local search on its own is not an appropriate way to escape local optima. However, there are extensions (e.g. iterated local search) that can be useful for this.
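The iterated local search idea can be sketched in a few lines (this is a minimal illustration; the helper `hill_climb`, the kick size, and the test function are my own choices): run a local search, perturb ("kick") the incumbent optimum, re-run the local search from the perturbed point, and keep the better result.

```python
import math
import random

random.seed(0)  # for reproducibility of this illustration

def hill_climb(f, x, step=0.1, iters=300):
    """Simple stochastic local search: accept random nearby moves that improve f."""
    x, fx = list(x), f(x)
    for _ in range(iters):
        y = [xi + random.gauss(0, step) for xi in x]
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
    return x, fx

def iterated_local_search(f, x0, n_kicks=30, kick=1.0):
    """ILS: kick the incumbent out of its basin, re-run local search,
    keep the better of the two results."""
    best, fbest = hill_climb(f, x0)
    for _ in range(n_kicks):
        start = [xi + random.gauss(0, kick) for xi in best]
        cand, fcand = hill_climb(f, start)
        if fcand < fbest:
            best, fbest = cand, fcand
    return best, fbest

# 1-D Rastrigin: many local minima. Plain hill climbing from x = 4 stalls
# in a poor basin; the kicks let ILS hop between basins toward better ones.
rastrigin = lambda v: v[0] ** 2 + 10 - 10 * math.cos(2 * math.pi * v[0])
x, fx = iterated_local_search(rastrigin, [4.0])
```

The key design choice is the kick size: too small and you fall back into the same basin, too large and you lose the "memory" of the good region you already found.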
In a non-convex problem, even iterated local search will most likely fail - unless you know something about the problem, typically from experience with similar ones.
General nonlinear optimisation is NP-hard.
You do have a pretty good chance if you know more about the functions involved; Lipschitz continuity helps a lot!
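To see why a Lipschitz constant helps so much: it gives a valid lower bound on the function between sample points, which turns local information into a global optimality certificate. Here is a minimal 1-D sketch in the spirit of the Piyavskii-Shubert method (the function name and iteration budget are my own; this is not a production implementation):

```python
import math

def shubert_piyavskii_min(f, a, b, L, n_iter=60):
    """With Lipschitz constant L, the sawtooth lower bound
    f(x_i) - L*|x - x_i| tells us where the global minimum could still
    hide; we repeatedly sample where that bound is most promising."""
    xs = [a, b]
    ys = [f(a), f(b)]
    for _ in range(n_iter):
        best_gap, x_new = -1.0, None
        pts = sorted(zip(xs, ys))
        for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
            # Intersection of the two cones from (x1, y1) and (x2, y2):
            xm = 0.5 * (x1 + x2) + (y1 - y2) / (2 * L)
            lower = 0.5 * (y1 + y2) - 0.5 * L * (x2 - x1)
            gap = min(ys) - lower  # how much this interval could still improve
            if gap > best_gap:
                best_gap, x_new = gap, xm
        xs.append(x_new)
        ys.append(f(x_new))
    return min(zip(ys, xs))  # (best value found, its location)

# sin on [0, 2*pi] is 1-Lipschitz; its global minimum is -1 at 3*pi/2.
y_best, x_best = shubert_piyavskii_min(math.sin, 0.0, 2 * math.pi, L=1.0)
```

Without the Lipschitz constant there is no such certificate: any finite set of samples could miss an arbitrarily deep, narrow valley.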
A hint: it is not OK to be ignorant of nonlinear mathematical optimisation. First you absolutely need to understand optimality criteria, as those conditions are the goal to fulfil. Mathematical optimisation tools are DESIGNED so that accumulation points of the iterative scheme WILL satisfy the optimality criteria, and in most cases you will be able to watch the "fault" in the optimality criterion shrink.
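That shrinking "fault" is concrete and measurable. A minimal sketch (my own illustrative code, using plain gradient descent on a quadratic): the residual of the first-order condition grad f(x) = 0 is the gradient norm, and the iteration stops precisely when that residual is small enough.

```python
def grad_descent(grad, x, lr=0.1, tol=1e-8, max_iter=10000):
    """Iterate until the residual of the first-order optimality condition
    grad f(x) = 0 drops below tol: the gradient norm is the measurable
    "fault" in the criterion, and we can watch it shrink."""
    for _ in range(max_iter):
        g = grad(x)
        gnorm = sum(gi * gi for gi in g) ** 0.5
        if gnorm < tol:
            break
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x, gnorm

# f(x, y) = (x - 3)^2 + (y + 1)^2; its gradient vanishes at (3, -1).
grad_f = lambda v: [2 * (v[0] - 3.0), 2 * (v[1] + 1.0)]
x_star, residual = grad_descent(grad_f, [0.0, 0.0])
```

This is exactly the "guiding light" the next answer contrasts with metaheuristics: a population-based method has no comparable residual telling it how far it is from satisfying an optimality condition.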
In contrast, metaheuristics have no such "guiding light" at all - they fumble in the dark! Few sensible scholars with knowledge of mathematical optimisation would use these tools.