The term formal verification refers to the theory and practice of computer-supported mathematical analysis methods for ensuring the correctness of software (and hardware) systems. Modeling and verifying systems this way matters most for safety-critical systems, where errors can be disastrous: loss of life, major financial losses, and so on. Testing can identify problems, especially if done in a rigorous fashion, but it is generally not sufficient to guarantee a satisfactory level of quality. Such a guarantee additionally requires proving the correctness of the system via automated deduction, e.g., using theorem proving or model checking as the core reasoning method.
Formally verifying machine learning algorithms, including deep learning models, is a challenging and complex task. The key reasons are the non-linearity and the sheer size of the mathematical models involved. Nevertheless, researchers have been exploring several techniques that provide some degree of formal verification for these algorithms.
Here's an overview of some methods that have been applied to this problem:
Bounded Model Checking: This involves exhaustively checking all possible executions up to a certain length and verifying that the system satisfies specific properties along each of them. For a system containing a deep learning component, this typically means unrolling its behaviour for a bounded number of steps and checking that the stated properties hold for every input and execution within that bound (a small sketch follows below). This becomes computationally expensive, however, especially as the input dimension and the bound grow.
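As a rough illustration, not tied to any particular tool, the sketch below uses the Z3 SMT solver to bounded-model-check a toy closed-loop system whose "controller" is a single ReLU neuron. The weights W and B, the bound K, and the safe interval are all invented for the example.

```python
# Hedged sketch: bounded model checking of a toy closed-loop system whose
# controller is a single ReLU neuron u = relu(W*x + B). The dynamics
# x_{t+1} = x_t - u_t are unrolled for K steps, and the solver is asked
# whether any execution of that length leaves the safe interval [0, 10].
from z3 import Real, Solver, If, Or, sat

W, B = 0.5, -1.0          # invented controller weights
K = 5                     # unrolling bound

s = Solver()
x = [Real(f"x_{t}") for t in range(K + 1)]
s.add(x[0] >= 0, x[0] <= 10)              # every allowed initial state

for t in range(K):
    pre = W * x[t] + B
    u = If(pre > 0, pre, 0)               # ReLU, encoded as an if-then-else
    s.add(x[t + 1] == x[t] - u)           # one unrolled transition

# Violation: some state within the bound leaves the safe interval.
s.add(Or(*[Or(x[t] < 0, x[t] > 10) for t in range(1, K + 1)]))

if s.check() == sat:
    print("counterexample within the bound:", s.model())
else:
    print(f"property holds for all executions of length <= {K}")
```

An unsatisfiable result only rules out violations up to the bound K; deeper executions remain unchecked, which is the characteristic limitation of bounded model checking.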
Satisfiability Modulo Theories (SMT) Solvers: These determine whether logical formulas over one or more background theories (such as linear real arithmetic) are satisfiable. By encoding the deep learning model together with the negation of a desired property as such a formula, an SMT solver can provide formal guarantees: if the query is unsatisfiable, no input violating the property exists. This approach has been used to prove properties such as local robustness against adversarial examples; a small sketch follows below.
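For concreteness, here is a hedged sketch of such a robustness query using the Z3 Python API. The 2-2-1 ReLU network, its weights, the nominal input x0, and the radius eps are all made up for the illustration; real verifiers encode much larger networks and use specialised decision procedures for the ReLU case splits.

```python
# Sketch of an SMT robustness query for a tiny 2-2-1 ReLU network with
# invented weights. The nominal output at x0 is negative; the query asks
# whether any input within an L-infinity ball of radius eps can push the
# output to zero or above. An unsat answer certifies local robustness.
from z3 import Reals, Solver, If, And, sat

def relu(v):
    return If(v > 0, v, 0)

W1 = [[1.0, -1.0], [0.5, 0.5]]    # hidden-layer weights (illustrative)
b1 = [0.0, -0.25]
w2 = [1.0, -2.0]                  # output-layer weights
b2 = 0.1

x0, eps = [0.6, 0.4], 0.05        # nominal input and perturbation radius

x = Reals("x1 x2")
s = Solver()
s.add(And(*[And(x[i] >= x0[i] - eps, x[i] <= x0[i] + eps) for i in range(2)]))

h = [relu(sum(W1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(2)]
y = sum(w2[j] * h[j] for j in range(2)) + b2

s.add(y >= 0)    # negation of the property "the output stays negative"

print("locally robust" if s.check() != sat else f"counterexample: {s.model()}")
```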
Inductive Synthesis: This involves generating candidate models from examples so that they satisfy a given specification, and then proving that the candidates are indeed correct with respect to that specification, typically in a counterexample-guided loop. The technique is well established in software synthesis and verification and has been adapted to machine learning models; a toy version of the loop is sketched below.
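The sketch below shows a minimal counterexample-guided inductive synthesis (CEGIS) loop with Z3. The "model" being synthesised is just a single scalar gain c, and the specification is invented for the example; the structure of the loop, not the particular specification, is the point.

```python
# Minimal counterexample-guided inductive synthesis (CEGIS) loop, using Z3.
# The "model" is a single gain c and the (invented) specification requires
# x <= c*x <= 2*x for every x in [0, 10]; real systems synthesise richer
# models against richer specifications, but the loop has the same shape.
from z3 import Real, Solver, And, Not, sat

c, x = Real("c"), Real("x")
domain = And(x >= 0, x <= 10)
spec = lambda cv, xv: And(cv * xv >= xv, cv * xv <= 2 * xv)

examples = [1.0]                       # finite set of example inputs
for _ in range(10):
    # Synthesis step: find a candidate c consistent with the spec on examples.
    syn = Solver()
    syn.add(And(*[spec(c, e) for e in examples]))
    if syn.check() != sat:
        print("no candidate satisfies the examples")
        break
    cand = syn.model()[c]

    # Verification step: does the candidate satisfy the spec for all inputs?
    ver = Solver()
    ver.add(domain, Not(spec(cand, x)))
    if ver.check() != sat:
        print("synthesised and verified c =", cand)
        break
    examples.append(ver.model()[x])    # counterexample refines the next round
```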
Interval Arithmetic: This involves performing computations on sets of real numbers (intervals) instead of individual real numbers. It can be used to verify properties of neural networks: propagating intervals layer by layer yields an over-approximation of the set of outputs the network can produce for a given set of inputs (see the sketch below).
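As a minimal sketch, with weights, biases, and bounds invented for illustration, the following propagates an input box through a single dense ReLU layer with NumPy; stacking such steps gives bounds for a whole network.

```python
# Interval bound propagation (IBP) through one dense ReLU layer with NumPy.
# The returned interval over-approximates every output the layer can produce
# when each input coordinate ranges over [lo, hi].
import numpy as np

def interval_dense_relu(lo, hi, W, b):
    """Propagate elementwise input bounds [lo, hi] through relu(W @ x + b)."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    out_lo = W_pos @ lo + W_neg @ hi + b   # smallest possible pre-activation
    out_hi = W_pos @ hi + W_neg @ lo + b   # largest possible pre-activation
    return np.maximum(out_lo, 0.0), np.maximum(out_hi, 0.0)

# Toy 2-input, 3-unit layer and an input box of radius 0.1 around x0.
W = np.array([[1.0, -2.0], [0.5, 0.5], [-1.0, 1.0]])
b = np.array([0.1, 0.0, -0.2])
x0, eps = np.array([0.3, 0.7]), 0.1

lo, hi = interval_dense_relu(x0 - eps, x0 + eps, W, b)
print("output bounds per unit:", list(zip(lo, hi)))
```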
Symbolic Methods: These methods build a symbolic representation of the computations performed by the neural network, which can then be reasoned about exactly rather than by sampling; one concrete flavour is sketched below. This approach can be computationally expensive, however, and might not scale well to large networks.
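The sketch below illustrates one such flavour, in the spirit of symbolic interval propagation: each hidden neuron keeps its exact affine expression in the input variables, and only "unstable" ReLUs (whose sign cannot be resolved) are collapsed to concrete intervals. All weights, biases, and input bounds are invented for the example.

```python
# Hedged sketch of symbolic propagation for a toy one-hidden-layer network:
# neurons whose ReLU is provably active keep their exact affine expression,
# provably inactive neurons contribute zero, and only unstable neurons fall
# back to a concrete interval. Everything numeric here is made up.
import numpy as np

def concretize(coeffs, const, lo, hi):
    """Tightest interval of coeffs @ x + const over the input box [lo, hi]."""
    pos, neg = np.maximum(coeffs, 0.0), np.minimum(coeffs, 0.0)
    return pos @ lo + neg @ hi + const, pos @ hi + neg @ lo + const

# Toy network: hidden = relu(W1 @ x + b1), output = w2 @ hidden + b2
W1 = np.array([[1.0, 1.0], [1.0, -0.2], [-1.0, 1.0]])
b1 = np.array([0.1, 0.2, 0.0])
w2 = np.array([1.0, -1.0, 0.5])
b2 = 0.0
lo, hi = np.array([0.0, 0.0]), np.array([0.5, 0.5])     # input box

out_coeffs = np.zeros(2)               # symbolic (affine) part of the output
out_lo_const, out_hi_const = b2, b2    # constant offsets of lower/upper bounds
for j in range(3):
    l, u = concretize(W1[j], b1[j], lo, hi)
    if l >= 0:                         # stably active: keep the exact expression
        out_coeffs += w2[j] * W1[j]
        out_lo_const += w2[j] * b1[j]
        out_hi_const += w2[j] * b1[j]
    elif u <= 0:                       # stably inactive: contributes exactly 0
        continue
    else:                              # unstable ReLU: fall back to [0, u]
        if w2[j] >= 0:
            out_hi_const += w2[j] * u
        else:
            out_lo_const += w2[j] * u

final_lo, _ = concretize(out_coeffs, out_lo_const, lo, hi)
_, final_hi = concretize(out_coeffs, out_hi_const, lo, hi)
print(f"output lies in [{final_lo:.3f}, {final_hi:.3f}]")
```

On this toy network the symbolic bounds come out to roughly [-0.1, 0.75], whereas plain interval propagation of the same layers gives the looser [-0.6, 1.25]; keeping the affine expressions is what buys the precision, and also where the extra cost of symbolic methods comes from.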
Reachability Analysis: This computes (an over-approximation of) the set of states a system can reach and checks that it stays within a desired safe region. It is typically applied to systems with well-defined states and transitions, but it can be adapted to neural networks, and to closed-loop systems containing them, by treating the layers' activations, or the controlled system's states over time, as the states being propagated; a sketch follows below.
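As a hedged sketch, the loop below over-approximates the reachable states of an invented discrete-time system whose transition function is itself a small ReLU layer, reusing the interval propagation idea from above; the weights, the initial set, and the safe bound are all made up.

```python
# Reachability sketch for a toy "neural dynamics" x_{t+1} = relu(W x_t + b):
# the set of reachable states is over-approximated by an axis-aligned box
# that is pushed through the dynamics step by step and checked against an
# invented safety bound.
import numpy as np

W = np.array([[0.5, 0.2], [-0.3, 0.6]])
b = np.array([0.5, 0.1])

def reach_step(lo, hi):
    """Interval over-approximation of the states reachable in one step."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    nlo = np.maximum(Wp @ lo + Wn @ hi + b, 0.0)
    nhi = np.maximum(Wp @ hi + Wn @ lo + b, 0.0)
    return nlo, nhi

lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])   # initial state set
safe_hi = np.array([3.0, 3.0])                        # required invariant
for t in range(1, 6):
    lo, hi = reach_step(lo, hi)
    print(f"step {t}: lower={lo.round(3)}, upper={hi.round(3)}")
    assert np.all(hi <= safe_hi), "over-approximation leaves the safe set"
```

In practice the box abstraction is usually replaced by richer set representations (e.g., zonotopes or polytopes) to keep the over-approximation error from growing across steps.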
However, these methods are generally computationally expensive and may not scale to large, complex deep learning models. Moreover, they often provide guarantees that hold only under specific assumptions, such as the absence of numerical (floating-point) errors during computation. The development of practical formal verification methods for deep learning therefore remains an active area of research.