Data stream engines receive and process an unbounded data stream using the available resources (memory, processors, etc.). Suppose that each data stream carries a tag that allows you to associate a given measure with its related concept. The question is how to recognize patterns or situations from previous knowledge or experience stored in some repository (e.g. an Organizational Memory), in order to avoid risk situations or to catch a given event.
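To make the scenario concrete, here is a minimal sketch in Python, assuming a purely illustrative rule-based memory: the concept tags, thresholds, and the process_stream helper are hypothetical stand-ins for whatever representation is actually chosen, not a proposed solution.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Iterable, List, Tuple

@dataclass
class Rule:
    description: str
    predicate: Callable[[float], bool]   # evaluated against a single measure

# Organizational Memory sketched as a map: concept tag -> known risk patterns.
# Both the tags and the thresholds below are invented for illustration only.
organizational_memory: Dict[str, List[Rule]] = {
    "engine_temperature": [Rule("overheating risk", lambda v: v > 95.0)],
    "vibration_level":    [Rule("bearing wear pattern", lambda v: v > 7.5)],
}

def process_stream(stream: Iterable[Tuple[str, float]]) -> None:
    """Read each tagged measure, match it to its concept, and reason on the fly."""
    for tag, measure in stream:
        for rule in organizational_memory.get(tag, []):
            if rule.predicate(measure):
                print(f"[{tag}] measure={measure}: matched '{rule.description}'")

# An unbounded stream in practice; truncated here for illustration.
process_stream([("engine_temperature", 97.2), ("vibration_level", 3.1)])
```

In this toy form the "reasoning" is just threshold checking per event; the open question is which richer representation (rules, ontologies, cases, etc.) still allows this per-measure, on-the-fly evaluation.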

In a data stream context, what do you think is the best way to represent the knowledge so as to allow dynamic reasoning on the fly, at the moment each measure is read and matched with its concept?
