Reservoir Engineering: Data Handling with ML

In reservoir characterization and reservoir engineering applications, we must deal with data from multiple sources, gathered at various scales. How can we make efficient use of such inconsistent databases, using (a) consistent query answering frameworks; (b) inconsistency management policies; (c) interactive data repairing and cleaning systems; and (d) interactive data exploration tools?
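As a starting point, a minimal sketch of the data-cleaning side of this question is shown below: merging measurements from two hypothetical sources (core plugs and well logs) recorded at different depth resolutions and flagging where they conflict. The column names, depths, and the 0.03 tolerance are illustrative assumptions only.

```python
# Flag conflicts when merging multi-source, multi-scale data.
# Hypothetical core-plug and well-log porosity tables.
import pandas as pd

core = pd.DataFrame({
    "depth_m":  [2001.2, 2003.5, 2005.1],
    "phi_core": [0.21,   0.18,   0.25],   # core-plug porosity (fraction)
})
logs = pd.DataFrame({
    "depth_m": [2001.0, 2003.4, 2005.0],
    "phi_log": [0.20,   0.24,   0.24],    # log-derived porosity (fraction)
})

# Align the two scales by nearest depth (both tables must be depth-sorted).
merged = pd.merge_asof(core.sort_values("depth_m"),
                       logs.sort_values("depth_m"),
                       on="depth_m", direction="nearest")

# Mark measurements that disagree beyond a tolerance as conflicting.
merged["conflict"] = (merged["phi_core"] - merged["phi_log"]).abs() > 0.03
print(merged)
```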

Can ML measure the amount of inconsistency in a given data set (i.e., quantify and characterize inconsistency)?
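One simple way to quantify inconsistency is the fraction of records that violate a set of integrity constraints. The sketch below assumes illustrative range constraints and synthetic records; the specific rules are not field-calibrated.

```python
# Quantify inconsistency as the share of records violating integrity constraints.
import pandas as pd

data = pd.DataFrame({
    "porosity":     [0.22, 0.35, -0.05, 0.18],   # fraction
    "permeability": [120.0, 450.0, 80.0, -3.0],  # mD
    "sw":           [0.30, 1.20, 0.45, 0.25],    # water saturation, fraction
})

constraints = {
    "0 <= porosity <= 0.45": data["porosity"].between(0.0, 0.45),
    "permeability >= 0":     data["permeability"] >= 0.0,
    "0 <= sw <= 1":          data["sw"].between(0.0, 1.0),
}

violations = ~pd.concat(constraints, axis=1)   # True where a rule is broken
per_rule   = violations.mean()                 # violation rate per constraint
overall    = violations.any(axis=1).mean()     # share of inconsistent records

print(per_rule)
print(f"overall inconsistency degree: {overall:.2f}")
```

The per-rule violation rates also hint at the next question: which constraints, and hence which sources or property types, drive most of the conflicts.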

If so, could ML help us understand the primary sources of these conflicts and devise ways to deal with such inconsistent data?

Could ML and data-driven approaches help ensure the quality of the reservoir data we use, given that poor-quality data can have serious adverse consequences for the quality of reservoir management decisions made using AI?

Also, is it feasible to compare the amount of inconsistency between reservoir rock properties and fluid properties (and rock/fluid interaction properties) using ML/AI, for example via inconsistency-tolerant integrity checking?
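A minimal sketch of such a comparison is shown below: checking rock-property and fluid-property columns against simple range constraints and comparing the resulting violation rates per group. The column names, ranges, and data are illustrative assumptions, not a published inconsistency-tolerant checking algorithm.

```python
# Compare inconsistency degrees between rock-property and fluid-property groups.
import pandas as pd

data = pd.DataFrame({
    "porosity":      [0.22, 0.48, 0.18],    # rock, fraction
    "net_to_gross":  [0.80, 0.75, 1.30],    # rock, fraction
    "oil_viscosity": [2.5, -1.0, 3.2],      # fluid, cP
    "bubble_point":  [180.0, 210.0, 90.0],  # fluid, bar
})

# Range constraints, grouped by property type.
groups = {
    "rock":  {"porosity": (0.0, 0.45), "net_to_gross": (0.0, 1.0)},
    "fluid": {"oil_viscosity": (0.0, 1e4), "bubble_point": (1.0, 1e3)},
}

for name, rules in groups.items():
    checks = pd.concat(
        {col: data[col].between(lo, hi) for col, (lo, hi) in rules.items()},
        axis=1,
    )
    rate = (~checks).any(axis=1).mean()  # share of records violating this group's rules
    print(f"{name} inconsistency degree: {rate:.2f}")
```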

In addition, would it be feasible to build this as an online learning system using ML, in which model training on a central server uses data arriving in mini-batches from core and well-log measurements as production evolves over time? A minimal sketch of such a loop follows.
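The sketch below shows one way an incremental (online) learning loop could look, with a central model updated from mini-batches as they arrive. The synthetic data, feature names, and the choice of scikit-learn's SGDRegressor are illustrative assumptions, not a prescribed design.

```python
# Online learning from streaming mini-batches (e.g., well-log features -> production target).
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
model  = SGDRegressor(learning_rate="constant", eta0=0.01)
scaler = StandardScaler()

def next_mini_batch(n=32):
    """Stand-in for a mini-batch streamed to the central server."""
    X = rng.normal(size=(n, 3))  # e.g. porosity, permeability, saturation proxies
    y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=n)
    return X, y

for step in range(100):                     # updates as new data arrive over time
    X, y = next_mini_batch()
    X = scaler.partial_fit(X).transform(X)  # incrementally track feature scaling
    model.partial_fit(X, y)                 # incremental gradient update

print("learned coefficients:", model.coef_)
```

The key design choice is that both the scaler and the regressor support partial_fit, so neither requires the full history of data to be stored centrally.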
