I am exploring the idea of dynamically modeling user identity and intent by analyzing behavior across generative AI tools (e.g., ChatGPT, Claude, Notion AI). The goal is to infer high-level cognitive or professional trajectories from a user's queries and interaction patterns without directly accessing raw prompt content, working instead from signals such as embeddings, token-level patterns, and usage metadata.
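To make that concrete, here is a minimal sketch of what I mean by a behavioral fingerprint built from embeddings alone. Everything in it is my own illustrative choice, not an existing system: the function names (`update_fingerprint`, `trajectory_drift`), the 0.95 decay, and the 384-dimensional vectors standing in for real sentence embeddings.

```python
import numpy as np

def update_fingerprint(fingerprint, query_embedding, decay=0.95):
    # Exponential moving average over query embeddings; the raw prompt
    # text never enters the model, only its vector representation.
    return decay * fingerprint + (1.0 - decay) * query_embedding

def trajectory_drift(fingerprint, query_embedding):
    # Cosine distance between a new query and the accumulated fingerprint.
    # Sustained high drift would suggest a shift in professional focus.
    denom = np.linalg.norm(fingerprint) * np.linalg.norm(query_embedding)
    if denom == 0.0:
        return 0.0
    return 1.0 - float(fingerprint @ query_embedding) / denom

# Toy usage: random unit vectors stand in for real query embeddings.
rng = np.random.default_rng(0)
fingerprint = np.zeros(384)  # 384 = a common sentence-embedding size
for _ in range(20):
    emb = rng.normal(size=384)
    emb /= np.linalg.norm(emb)
    print(f"drift={trajectory_drift(fingerprint, emb):.3f}")
    fingerprint = update_fingerprint(fingerprint, emb)
```

What interests me is how informative even a statistic this simple becomes over many sessions, and where such a fingerprint would have to live for multiple tools to read and write it.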
I’m particularly curious about three areas:
1. Has there been research on creating intent graphs or behavioral fingerprints using data from interactions with large language models (LLMs)? (The first sketch after this list shows the kind of structure I have in mind.)
2. Are there known architectures or agent-based frameworks for building persistent identity models that work across multiple tools and platforms?
3. What approaches are available (or emerging) for zero-knowledge sharing or federated learning that keep user modeling private? (See the second sketch below.)
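Here is the first sketch, for question 1: a toy intent graph whose nodes are coarse intent clusters (say, cluster IDs from k-means over query embeddings, rather than the queries themselves) and whose edges count observed transitions between consecutive queries. The `IntentGraph` class and the toy cluster sequence are hypothetical, just to pin down the data structure I'm asking about.

```python
from collections import defaultdict

class IntentGraph:
    # Nodes are coarse intent-cluster IDs; edges count how often one
    # intent follows another. No prompt text is stored anywhere.
    def __init__(self):
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.last_intent = None

    def observe(self, intent_cluster):
        # Record a transition from the previous query's intent cluster.
        if self.last_intent is not None:
            self.transitions[self.last_intent][intent_cluster] += 1
        self.last_intent = intent_cluster

    def likely_next(self, intent_cluster):
        # Most frequently observed follow-up intent, if any.
        outgoing = self.transitions.get(intent_cluster)
        if not outgoing:
            return None
        return max(outgoing, key=outgoing.get)

g = IntentGraph()
for cluster in [3, 3, 7, 3, 7, 1]:  # toy sequence of intent-cluster IDs
    g.observe(cluster)
assert g.likely_next(3) == 7
```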
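And the second sketch, for question 3: a single round of federated averaging in the style of FedAvg (McMahan et al., 2017), where each client fits a model locally and only the resulting weights, never the raw interaction data, reach the server. The linear model, data shapes, and learning rate are illustrative stand-ins for a real user model.

```python
import numpy as np

def local_update(global_weights, X, y, lr=0.1, epochs=5):
    # One client's gradient steps on a linear model, computed on-device;
    # only the updated weights leave the client, never (X, y).
    w = global_weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(global_weights, clients):
    # Server-side step of FedAvg: average client weights,
    # weighted by local dataset size.
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    updates = np.stack([local_update(global_weights, X, y) for X, y in clients])
    return (sizes[:, None] * updates).sum(axis=0) / sizes.sum()

# Toy run: three clients with private linear-regression data.
rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(40, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=40)))
w = np.zeros(2)
for _ in range(30):
    w = fedavg_round(w, clients)
# w now approximates true_w without any client sharing raw data.
```

In a real deployment I assume this would be combined with secure aggregation or differential privacy so that individual updates are also protected from the server, which is part of what I'm asking about in question 3.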
I am especially interested in applying this work to professional development or networking, but I welcome insights from anyone researching multi-agent learning, identity in AI systems, or private user modeling. Papers, frameworks, and even counterarguments are all welcome.