Reservoir Simulation
Does the (standard) ‘history matching’ problem remain mathematically ‘well-posed’?
If so, then how do we end up with ‘numerical instabilities’ in the estimated parameters?
If so, are we then handling an infinite number of parameters (as would be required for a full description of the reservoir) during reservoir simulation?
If so, do the reservoir rock and fluid properties vary significantly in space (and time), thereby requiring an infinite number of parameters to describe them?
If so, given that a reservoir simulator contains only a finite number of parameters, one set per grid block in the spatial domain, could ML/AI help us handle the potentially large dimensionality of the unknown parameters associated with an anisotropic and heterogeneous carbonate reservoir?
If ML/AI does help us (a) alleviate the ill-posed nature of the problem and (b) deduce an efficient computational algorithm for solving huge coefficient matrices, will the improved reservoir model then give the best fit to the measured well pressure and production data (history matching) without reducing the number of unknown parameters (either through zonation or by assuming a probability density function) in order to alleviate ill-conditioning in the parameter estimation?
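On the instability question above, the following is a minimal numerical sketch, with an entirely hypothetical 3-observation, 2-parameter system (not taken from any reservoir study), of how an ill-conditioned parameter-estimation problem amplifies tiny data noise into large swings in the estimates, and how Tikhonov regularization stabilizes the estimate without reducing the parameter count:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensitivity matrix G with nearly collinear columns:
# two parameters (e.g. permeabilities of adjacent blocks) affect the
# measured pressures in almost the same way -> ill-conditioned problem.
G = np.array([[1.0, 1.0001],
              [1.0, 0.9999],
              [1.0, 1.0000]])
m_true = np.array([2.0, 3.0])           # "true" parameters
d = G @ m_true                          # noise-free synthetic data

noise = 1e-3 * rng.standard_normal(3)   # tiny measurement noise
m_ls = np.linalg.lstsq(G, d + noise, rcond=None)[0]

# Tikhonov-regularized estimate: minimize ||G m - d||^2 + alpha ||m||^2
alpha = 1e-4
m_reg = np.linalg.solve(G.T @ G + alpha * np.eye(2), G.T @ (d + noise))

print("condition number of G:", np.linalg.cond(G))
print("plain least squares  :", m_ls)    # unstable: can land far from (2, 3)
print("regularized          :", m_reg)   # stable, biased toward smaller norm
```

Regularization does not recover the "true" parameters; it trades a small bias for stability, which is one formal sense in which the ill-posedness can be alleviated without zonation.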
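On the dimensionality question, one ML-style route, sketched here under purely illustrative assumptions (a synthetic 20 x 20 grid and an ensemble of smoothed random fields standing in for geostatistical realizations), is to parameterize the grid-block property field by the leading principal components of an ensemble, so that history matching searches over a few coefficients instead of one value per grid block:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ensemble: 200 realizations of a log-permeability field
# on a 20 x 20 grid (400 parameters per realization). Spatial correlation
# is mimicked by a crude moving-average filter on white noise.
n_real, nx, ny = 200, 20, 20
fields = rng.standard_normal((n_real, nx * ny))
kernel = np.ones(15) / 15.0
fields = np.apply_along_axis(
    lambda f: np.convolve(f, kernel, mode="same"), 1, fields)

mean = fields.mean(axis=0)
U, S, Vt = np.linalg.svd(fields - mean, full_matrices=False)

# Keep the few components that explain ~90% of the ensemble variance.
energy = np.cumsum(S**2) / np.sum(S**2)
k = int(np.searchsorted(energy, 0.90)) + 1
print(f"parameters per field: {nx * ny}, retained components: {k}")

# Any field is now approximated by k coefficients instead of 400 values.
coeffs = Vt[:k] @ (fields[0] - mean)
approx = mean + Vt[:k].T @ coeffs
rel_err = np.linalg.norm(approx - fields[0]) / np.linalg.norm(fields[0])
print(f"relative reconstruction error of one realization: {rel_err:.3f}")
```

The same idea underlies the PCA/autoencoder parameterizations often paired with ensemble history matching: the search space shrinks to the retained coefficients while every grid block still receives a value.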
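On point (b), the huge coefficient matrices assembled by a simulator are sparse, so efficient algorithms avoid ever forming them densely. Below is a toy, matrix-free conjugate-gradient sketch; the 1-D tridiagonal pressure system is purely illustrative and far smaller than a field-scale model:

```python
import numpy as np

n = 500  # hypothetical number of grid blocks in a 1-D pressure equation

def matvec(x):
    """y = A x for A = tridiag(-1, 2, -1), applied without storing A."""
    y = 2.0 * x
    y[:-1] -= x[1:]
    y[1:] -= x[:-1]
    return y

def conjugate_gradient(matvec, b, tol=1e-10, maxiter=10_000):
    """Plain conjugate gradients for a symmetric positive-definite system."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

b = np.ones(n)                      # stand-in source/boundary terms
p_sol = conjugate_gradient(matvec, b)
print("relative residual:",
      np.linalg.norm(matvec(p_sol) - b) / np.linalg.norm(b))
```

Memory and per-iteration cost stay O(n) because only the matrix's action on a vector is needed; production codes add preconditioning, but the matrix-free structure is the point here.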