What do you think about an artificial intelligence that doesn't feel anything itself, but can understand human emotions and infer what a person really wants rather than only what they say?
I'm investigating a cognitive AI that interprets affect without "feeling" it. The formulation is mathematical and continuous: emotions are latent state vectors evolving in time, modeled with ODE/SDE dynamics; stability is shown via Lyapunov criteria (including on manifolds); the state space carries a Riemannian metric, with curvature used as a geometric regularizer; spectral analysis extracts modes and energy; and inference from multimodal signals (voice, face, text, physiology) is grounded in observability. The time evolution of the state distribution is handled with a Fokker–Planck equation. In short: it's not vibes, it's controllable, interpretable dynamics.
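For concreteness, here is a minimal version of the core objects (the notation $x_t$, $f$, $\sigma$, $h$, $V$, $p$ is illustrative shorthand, not the full model): the latent affect state follows a stochastic differential equation with a noisy multimodal read-out,

$$
\mathrm{d}x_t = f(x_t)\,\mathrm{d}t + \sigma(x_t)\,\mathrm{d}W_t, \qquad y_t = h(x_t) + \varepsilon_t,
$$

for the deterministic part of the dynamics, stability is certified by a Lyapunov function $V$ with $\dot V(x) = \nabla V(x)^{\top} f(x) \le 0$ along trajectories, and the density $p(x,t)$ of the state evolves under the Fokker–Planck equation

$$
\partial_t p = -\nabla \cdot \big(f\,p\big) + \tfrac{1}{2}\sum_{i,j}\partial_{x_i}\partial_{x_j}\big[(\sigma\sigma^{\top})_{ij}\,p\big].
$$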
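And here is a toy, runnable sketch of the observability-based inference step in the simplest possible setting (linear drift, Euclidean state space); the matrices, dimensions, and noise levels are made-up placeholders, and the real system uses the nonlinear/manifold machinery above rather than a plain Kalman filter:

```python
# Minimal sketch (assumptions, not the full model): a linear latent "affect"
# state x in R^n driven by an SDE, observed through a multimodal read-out
# y = C x + noise (rows standing in for voice, face, text, physiology features).
# We check observability of (A, C) via the rank of the observability matrix,
# then recover the latent state with a discrete-time Kalman filter.
import numpy as np

rng = np.random.default_rng(0)

n, m = 3, 4        # latent dim (hypothetical: valence, arousal, dominance), observation dim
dt, T = 0.05, 400  # step size, number of steps

A = np.array([[-0.5,  1.0,  0.0],   # stable drift: all eigenvalues have negative real part
              [-1.0, -0.5,  0.0],
              [ 0.0,  0.0, -0.2]])
C = rng.standard_normal((m, n))     # multimodal read-out matrix (placeholder weights)
Q = 0.05 * np.eye(n)                # process noise covariance (discretized diffusion)
R = 0.10 * np.eye(m)                # measurement noise covariance

# Observability check: rank of [C; CA; ...; CA^{n-1}] must equal n
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
assert np.linalg.matrix_rank(O) == n, "latent state not observable from these signals"

# Euler–Maruyama simulation of the latent SDE and noisy multimodal observations
F = np.eye(n) + dt * A              # discretized transition matrix
x = np.zeros(n)
xs, ys = [], []
for _ in range(T):
    x = F @ x + rng.multivariate_normal(np.zeros(n), dt * Q)
    ys.append(C @ x + rng.multivariate_normal(np.zeros(m), R))
    xs.append(x.copy())

# Kalman filter: infer the hidden state from the observations alone
x_hat, P = np.zeros(n), np.eye(n)
estimates = []
for y in ys:
    # predict
    x_hat = F @ x_hat
    P = F @ P @ F.T + dt * Q
    # update
    S = C @ P @ C.T + R
    K = P @ C.T @ np.linalg.inv(S)
    x_hat = x_hat + K @ (y - C @ x_hat)
    P = (np.eye(n) - K @ C) @ P
    estimates.append(x_hat.copy())

rmse = np.sqrt(np.mean((np.array(estimates) - np.array(xs)) ** 2))
print(f"state-estimation RMSE: {rmse:.3f}")
```

The rank check is exactly the observability claim: if it fails, no amount of filtering recovers the hidden state from those particular signals.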
Under explicit assumptions (stability, identifiability, bounded noise) I've proved that such interpretation is possible; preprints are in preparation. I'd love your take: once AIs can reliably interpret humans this way, what are the philosophical and social consequences for agency and privacy, education and mental health, governance and product design, UX and consent, and acceptable use and its limits? References, critiques, and concrete scenarios are very welcome.