We are working on adaptive assistance and training systems, e.g. for aviation pilot training. For this purpose, we try to monitor the trainees' activities: gaze behavior, cognitive load, progress in the task, accuracy and efficiency of task execution, the context, and so on; basically everything we can think of.

From these raw sensor data, we infer features that represent certain skills, such as knowing where to find information, accuracy, automation, and efficiency.
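
To make this concrete, here is a minimal sketch of the kind of gaze-based feature we mean; the function name, data format, and the altimeter example are purely illustrative, not our actual pipeline:

import numpy as np

def information_finding_latency(fixations, target_aoi):
    """Time (s) from task onset until the first fixation lands in the
    relevant area of interest; a proxy for 'knowing where to look'.
    `fixations` is a list of (timestamp_s, aoi_label) tuples."""
    for t, aoi in fixations:
        if aoi == target_aoi:
            return t
    return np.inf  # never looked at the relevant instrument

# hypothetical gaze trace: trainee finds the altimeter after 2.4 s
fixations = [(0.8, "horizon"), (1.5, "airspeed"), (2.4, "altimeter")]
print(information_finding_latency(fixations, "altimeter"))  # 2.4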

Our problem is how to get from these individual skill representations to a higher-level representation of more abstract but associated competencies, such as spatial awareness, communication, and decision-making.
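
One generic way to frame this step is as a latent-variable problem: treat the competencies as unobserved factors that explain the correlations among the observed skill features. A minimal sketch with scikit-learn, using placeholder data and an assumed count of three latent competencies (the labels would have to come from theory, not from the model):

import numpy as np
from sklearn.decomposition import FactorAnalysis

# rows: training sessions, columns: skill features
# (info-finding latency, accuracy, efficiency, automation, ...)
rng = np.random.default_rng(0)
skill_features = rng.normal(size=(200, 6))  # placeholder data

# assume 3 latent competencies, e.g. spatial awareness,
# decision-making, communication
fa = FactorAnalysis(n_components=3, random_state=0)
competency_scores = fa.fit_transform(skill_features)

print(fa.components_.shape)     # (3, 6): loadings of skills on competencies
print(competency_scores.shape)  # (200, 3): per-session competency estimates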

Currently we are trying out heuristics, manual rule-based associations and evaluations, and comparisons to expert reference executions.
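
As an illustration of the expert-reference comparison, here is a sketch that z-scores a trainee's skill features against an expert baseline and aggregates them with hand-set weights; all names, numbers, and weights are hypothetical, not validated:

import numpy as np

def competency_rule_score(trainee, expert_mean, expert_std, weights):
    """Rule-based aggregation: z-score each skill feature against the
    expert reference distribution, then take a weighted sum."""
    z = (np.asarray(trainee) - expert_mean) / expert_std
    return float(np.dot(weights, z))

expert_mean = np.array([2.0, 0.95, 0.80])  # latency, accuracy, efficiency
expert_std  = np.array([0.5, 0.03, 0.10])
weights     = np.array([-0.4, 0.3, 0.3])   # lower latency is better

# negative score = below the expert reference on this competency
print(competency_rule_score([3.1, 0.90, 0.70], expert_mean, expert_std, weights))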

We would like to use computational cognitive models of competencies to find the multi-modal associations between single observations and more abstract representations of competence.
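
For context, one well-studied family here is Bayesian Knowledge Tracing from the intelligent-tutoring literature, where mastery of a single skill is a hidden state updated from noisy task outcomes; abstract competencies could then sit one layer above as parents in a Bayesian network. A minimal BKT update step, with illustrative (not fitted) parameters:

def bkt_update(p_mastery, correct, guess=0.2, slip=0.1, learn=0.15):
    """One Bayesian Knowledge Tracing step: posterior over skill
    mastery given one observed outcome, then a learning transition."""
    if correct:
        post = p_mastery * (1 - slip) / (
            p_mastery * (1 - slip) + (1 - p_mastery) * guess)
    else:
        post = p_mastery * slip / (
            p_mastery * slip + (1 - p_mastery) * (1 - guess))
    return post + (1 - post) * learn

p = 0.3  # prior belief that the skill is mastered
for outcome in [True, False, True, True]:  # observed task outcomes
    p = bkt_update(p, outcome)
print(round(p, 3))  # estimated mastery after four observations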

Do you have any ideas or models to look at?

thank you very much

Benedikt
