Can AI LLMs be used to establish REST API connections between engineering tools like DOORS, JAMA, Epsilon3, and PLM platforms (Siemens or PTC) in a cloud environment, without the need for a data-broker software layer (like Synera or Syndeia)?
In my view, we can rely on LLMs to make these REST API connections as an initial step: a general, straightforward approach suited to agile deployment, where new prompts can even handle maintenance. But once the integration grows larger and more complex (serving many users), and specific, critical problems start to appear, we should use broker software layers. LLMs can still support and add flexibility to the generation of these connections, but middleware (brokers) built by developers handles large, complex integrations more efficiently for many users.
The integration of Large Language Models (LLMs) into Digital Thread ecosystems represents a significant leap forward in how engineering tools, from requirements management systems like IBM DOORS to PLM platforms such as Siemens Teamcenter, communicate and synchronize data. Rather than relying solely on traditional middleware solutions, organizations now have the opportunity to leverage LLMs as intelligent, adaptive connectors that understand both the technical and semantic layers of system interoperability.
At their core, LLMs bring a transformative capability: the ability to dynamically interpret API specifications, generate context-aware requests, and mediate data transformations, all through natural language or structured prompts.
Imagine an engineer simply instructing, "Mirror these DOORS requirements in JAMA with traceability," and the LLM autonomously resolving schema differences, constructing the appropriate API calls, and handling versioning nuances. This shift could drastically reduce the time and complexity traditionally associated with point-to-point integrations, making cross-platform data flows more agile and accessible.
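To make the "Mirror these DOORS requirements in JAMA" scenario concrete, here is a minimal Python sketch of what an LLM-generated translation layer might look like. The field names (`Object Heading`, `Object Text`, `Absolute Number`, `legacy_id`) are illustrative assumptions, not the actual DOORS or Jama Connect REST schemas; in practice the LLM would propose this mapping from the two tools' API specifications.

```python
def doors_to_jama(doors_req: dict, project_id: int) -> dict:
    """Map an (assumed) DOORS requirement record to an (assumed) JAMA item payload.

    This is the kind of schema-resolution code the LLM would generate:
    renaming fields, filling defaults, and preserving the source identifier
    so traceability links survive the sync.
    """
    return {
        "project": project_id,
        "itemType": "requirement",
        "fields": {
            "name": doors_req.get("Object Heading", "Untitled"),
            "description": doors_req.get("Object Text", ""),
            # Carry the DOORS identifier forward for round-trip traceability.
            "legacy_id": doors_req.get("Absolute Number"),
        },
    }

req = {
    "Absolute Number": 42,
    "Object Heading": "Telemetry rate",
    "Object Text": "The system shall downlink telemetry at 1 Hz.",
}
payload = doors_to_jama(req, project_id=101)
```

The resulting `payload` dict is what would be POSTed to the target tool's item-creation endpoint; the interesting part is that the mapping itself, not the HTTP plumbing, is where the LLM adds value.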
However, as with any emerging technology, pragmatic considerations must temper enthusiasm. While LLMs excel at semantic understanding and ad-hoc translation, enterprise Digital Threads demand transactional reliability, robust security, and auditability: qualities that mature middleware platforms have spent years refining. Challenges such as OAuth2 token management, real-time error recovery, and data integrity verification remain areas where purpose-built orchestration layers still prove indispensable.
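The OAuth2 token management mentioned above is a good example of plumbing that is tedious for a prompt but trivial for purpose-built code. A minimal sketch, assuming a hypothetical token-endpoint call: cache the access token and refresh it shortly before expiry.

```python
import time

class TokenCache:
    """Cache an OAuth2 access token and refresh it before it expires.

    `fetch` is a stand-in for a real call to a token endpoint; it must
    return (token, lifetime_in_seconds).
    """

    def __init__(self, fetch, skew: float = 30.0):
        self._fetch = fetch
        self._skew = skew          # refresh this many seconds early
        self._token = None
        self._expires_at = 0.0

    def get(self, now=None) -> str:
        now = time.time() if now is None else now
        if self._token is None or now >= self._expires_at - self._skew:
            self._token, lifetime = self._fetch()
            self._expires_at = now + lifetime
        return self._token

# Demonstration with a fake token endpoint (no network involved).
calls = []
def fake_fetch():
    calls.append(1)
    return f"tok-{len(calls)}", 3600.0

cache = TokenCache(fake_fetch)
t1 = cache.get(now=0.0)      # no token cached yet: fetches tok-1
t2 = cache.get(now=100.0)    # still valid: reuses tok-1
t3 = cache.get(now=3575.0)   # inside the 30 s skew window: refreshes to tok-2
```

The point is not that this logic is hard, but that it must run reliably on every request, which is exactly the kind of guarantee an orchestration layer provides and a one-off generated script usually does not.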
The most viable path forward lies in a hybrid architecture, where LLMs handle the cognitive heavy lifting, mapping ontologies, generating API logic, and resolving contextual mismatches, while lightweight orchestration services provide the necessary governance, monitoring, and fallback mechanisms. This approach doesn’t just reduce dependency on monolithic middleware; it reimagines integration as a collaborative process between human intent, AI adaptability, and enterprise-grade operational controls.
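The division of labor described above can be sketched in a few lines: the LLM-generated sync step does the cognitive work, while a thin orchestration wrapper supplies governance in the form of bounded retries, an audit trail, and a fallback. The step and fallback functions here are illustrative placeholders, not any vendor's API.

```python
def orchestrate(step, fallback, retries: int = 2, audit=None):
    """Run an integration step under lightweight governance.

    Retries transient failures up to `retries` times, records every
    attempt in `audit`, and invokes `fallback` if all attempts fail.
    """
    audit = audit if audit is not None else []
    for attempt in range(1, retries + 1):
        try:
            result = step()
            audit.append(("ok", attempt))
            return result
        except Exception as exc:
            audit.append(("error", attempt, str(exc)))
    audit.append(("fallback",))
    return fallback()

# Demonstration: a sync step that fails once (e.g. a transient 502), then succeeds.
attempts = {"n": 0}
def flaky_sync():
    attempts["n"] += 1
    if attempts["n"] < 2:
        raise RuntimeError("transient 502 from target API")
    return "synced"

log = []
result = orchestrate(flaky_sync, fallback=lambda: "queued-for-review",
                     retries=2, audit=log)
```

The audit list doubles as the traceability record enterprises require: every attempt, failure, and fallback decision is captured regardless of how the step itself was generated.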
In this evolving landscape, the question isn't whether LLMs will change Digital Thread connectivity (they already are!) but how we can harness their potential responsibly. By combining their linguistic and reasoning strengths with the stability of proven integration patterns, we can achieve a new standard of interoperability: one that is as intuitive as it is industrially robust.
Mike Arnold, Accuris’ Senior Director of Product and Technology, talks about digital threading, model-based systems engineering, and how AI plays a critical role in both. You can watch the full video here or read a summary of Mike’s answers below.
An industry-leading artificial intelligence expert, Mike Arnold is responsible for addressing engineering firms’ greatest challenges with cutting-edge technology – all while helping us understand the benefits and limitations of emerging technology like AI.