You can take a generalized inverse, such as the Moore-Penrose pseudoinverse, of your numerical realization of the Jacobian matrix at the singular points, provided that taking a generalized inverse is consistent with the physical problem you are dealing with.
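As a minimal sketch of this idea, assuming a hypothetical map F whose Jacobian loses rank at the chosen point, `numpy.linalg.pinv` gives the minimum-norm least-squares Newton step where `numpy.linalg.solve` would fail:

```python
import numpy as np

# Hypothetical example system; its Jacobian is singular at x = (0, 0.5)
# because the first column of J vanishes there.
def F(x):
    return np.array([x[0]**2 - x[1], x[1]**2])

def J(x):
    return np.array([[2.0 * x[0], -1.0],
                     [0.0,         2.0 * x[1]]])

x = np.array([0.0, 0.5])
# Pseudoinverse step: the minimum-norm solution of the least-squares
# problem min ||J(x) d + F(x)||, defined even though J(x) is singular.
step = -np.linalg.pinv(J(x)) @ F(x)
```

Note that `pinv` silently projects the right-hand side onto the range of J, so whether this step is meaningful depends on the problem, as said above.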
It depends quite significantly on what you want to do: computing the Jacobian matrix is a means to an end, not an end in itself.
In addition, the question is whether the singular behavior is a numerical artifact or reflects a property of the system under study. In the latter case it is useful to recognize that the reduction in rank describes a symmetry of the system and to deal with that symmetry explicitly: this leads to an equivalent system with a non-singular Jacobian.
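One practical way to probe whether the rank deficiency is genuine or just roundoff is to look at the singular values of the computed Jacobian against a tolerance scaled by the largest one. A sketch, using a hypothetical stand-in matrix whose second row is an exact multiple of the first:

```python
import numpy as np

# Hypothetical Jacobian evaluation: second row = 2 * first row,
# so the matrix is structurally rank 1, not rank deficient by roundoff.
Jx = np.array([[1.0, 2.0],
               [2.0, 4.0]])

s = np.linalg.svd(Jx, compute_uv=False)
# Tolerance in the style of numpy.linalg.matrix_rank:
tol = s[0] * max(Jx.shape) * np.finfo(float).eps
numerical_rank = int(np.sum(s > tol))  # 1 here: the deficiency is structural
```

Singular values far below `tol` are indistinguishable from zero at working precision; a singular value that hovers just above it suggests a nearly singular, ill-conditioned system rather than a true symmetry.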
An interesting question is what happens near a singularity of the Jacobian. This affects the numerical behavior and may be related to Arnold's singularity theory; see https://en.wikipedia.org/wiki/Singularity_theory
Presumably, you're talking about solving a square system F(x) = 0. In Newton's method, you could get lucky: the system J(x)d = -F(x) may have a solution even if J(x) is singular, and that solution is still a descent direction for the residual merit function ||F(x)||^2, so you can perform a linesearch and move on. But you're in trouble if J(x)d = -F(x) is inconsistent. In that case, you can use the method of Levenberg and Marquardt to minimize ||F(x)||^2: instead of solving J(x)d = -F(x), you solve a regularized least-squares problem.
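A minimal sketch of one such regularized step, on a hypothetical system whose Jacobian is singular at the current iterate (the damping parameter `lam` and the test point are illustrative choices, not part of any particular implementation):

```python
import numpy as np

# Hypothetical system with a singular Jacobian at x = (0, 0.5).
def F(x):
    return np.array([x[0]**2 - x[1], x[1]**2])

def J(x):
    return np.array([[2.0 * x[0], -1.0],
                     [0.0,         2.0 * x[1]]])

def lm_step(x, lam=1e-3):
    """One Levenberg-Marquardt step for min ||F(x)||^2."""
    Jx, Fx = J(x), F(x)
    # Regularized normal equations: (J^T J + lam I) d = -J^T F.
    # The matrix on the left is positive definite for lam > 0,
    # so the step exists even when J is singular.
    A = Jx.T @ Jx + lam * np.eye(Jx.shape[1])
    return np.linalg.solve(A, -Jx.T @ Fx)

x = np.array([0.0, 0.5])
d = lm_step(x)
```

In practice `lam` is adapted per iteration (increased when a step fails to reduce ||F||^2, decreased when it succeeds), and for well-scaled production use one would reach for an existing implementation such as `scipy.optimize.least_squares` rather than forming the normal equations explicitly.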