Rippletide at Adopt AI 2025: Building the Foundation for Trustworthy AI Agents

On November 25–26, 2025, Rippletide took part in the Adopt AI event at the Grand Palais in Paris to showcase AI agents that are reliable by design. As technology leaders (CTOs, CDOs, CIOs and others) seek to harness autonomous AI agents while keeping risks under control, one central challenge has emerged: trust.

How can organizations deploy intelligent conversational agents capable of autonomous decision-making in the enterprise without fearing hallucinations? This question of reliability was at the heart of the discussions at Adopt AI.

In this article, we explore why trust is the cornerstone of enterprise adoption of AI agents, and how innovations such as Rippletide's hypergraph database and our new hallucination evaluation module are laying the foundations for trustworthy AI at industrial scale.

Trust as the cornerstone of today's enterprise agentic needs

While business leaders are enthusiastic about the potential of AI agents, they also see a significant trust gap around these emerging technologies. On one hand, technical advances point toward increasingly autonomous agents: Gartner projects that 33% of enterprise software will include AI agents by 2028, and that these agents will independently handle 15% of daily decisions. On the other hand, organizations remain reluctant to give them free rein without guardrails. There is a clear difference between what agents are technically capable of and what companies are willing to entrust to them. This divide, the "trust gap," is the number-one challenge to overcome in order to deploy agents in production.

The data confirms that current skepticism is proportional to perceived risk. Gartner reports that 74% of companies see AI agents as a new security attack vector, that only 13% believe they have adequate internal governance frameworks, and that 81% do not trust vendors to prevent hallucinations in their models (only 19% express confidence). Similarly, a 2024 PwC survey revealed that 80% of executives do not trust AI agents to autonomously manage sensitive interactions, such as those involving employees or finances, due to doubts about accuracy and reliability. It is therefore unsurprising that, in a recent Forrester survey, 29% of AI decision makers cited lack of trust as the primary barrier to adopting generative AI within their organization.

How can the necessary trust be built? Experts highlight several levers. First, establishing robust and cross-functional AI governance. Most executives agree that without an appropriate governance framework, trust cannot scale across the enterprise. Gartner therefore recommends concrete measures such as “design for transparency and trust,” meaning that explainability and full auditability must be integrated into agents by design. An AI system that can justify each of its actions and recommendations will be far more readily accepted, particularly in regulated industries where controls and audits are standard.

In summary, trust has become the sine qua non for moving from limited pilots to widespread adoption of AI agents. This requires technology that places control, traceability, and transparency at the heart of AI agents. This is precisely what enterprises need: trustworthy AI, built on innovations designed to meet the reliability requirements of large organizations.

→ To go deeper, a must-read for executives looking to decode the agentic market: The State of AI Agents 2025.


Hypergraph database: the trust layer for agentic governance, auditability and compliance in the enterprise


Schematic representation of a hypergraph modeling interconnected knowledge. This type of advanced structure enables an AI agent to navigate a rich network of facts and rules, providing contextualized memory and more reliable reasoning.


The benefits of Rippletide's hypergraph architecture are immediately visible in the reliability of outputs. By combining memory with logical reasoning, the agent drastically reduces the risk of hallucinations, reaching nearly 99% accuracy. In addition, the hypergraph provides an integrated governance framework: every action taken by the agent is constrained by explicit rules recorded in the database, including company policies, business rules, and legal restrictions. In practice, every decision is made within defined boundaries and guardrails are systematically respected; in other words, Rippletide's architecture itself enforces the rules and prevents the agent from operating outside the authorized scope. This compliance by design is a major advantage for sectors such as finance or healthcare, where no hallucination can be tolerated.
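To make this concrete, here is a minimal sketch, in Python, of what rule enforcement at decision time can look like. All names here (ActionRequest, Rule, check_action) and the refund-cap policy are illustrative assumptions, not Rippletide's actual API; the point is that the check happens in the architecture itself, before any action executes.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ActionRequest:
    scope: str      # e.g. "refunds" (hypothetical example)
    amount: float

@dataclass
class Rule:
    scope: str
    predicate: Callable[[ActionRequest], bool]  # True if the action is allowed

def check_action(action: ActionRequest, rules: list[Rule]) -> bool:
    """Allow an action only if every applicable rule permits it."""
    applicable = [r for r in rules if r.scope == action.scope]
    return all(r.predicate(action) for r in applicable)

# Illustrative company policy: refunds capped at 500
rules = [Rule("refunds", lambda a: a.amount <= 500.0)]
print(check_action(ActionRequest("refunds", 250.0), rules))   # True: within guardrails
print(check_action(ActionRequest("refunds", 5000.0), rules))  # False: blocked by design
```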

Finally, the very nature of the hypergraph database delivers full auditability. All information used and all chains of reasoning followed by the agent are formalized and traceable. It is therefore possible, after the fact, to explain every decision made by the agent by retracing the knowledge nodes and rules that led to it. This total transparency reassures risk management and compliance teams: AI is no longer a black box; it becomes a system that can be continuously governed and verified.
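As a rough illustration of what such a trace might contain, here is a sketch of a single audit record written as a JSON log entry; the schema and all field names are our own assumptions for illustration, not Rippletide's actual format.

```python
import json
from datetime import datetime, timezone

def audit_entry(decision: str, nodes: list[str], rules: list[str]) -> str:
    """Serialize one decision together with the knowledge and rules behind it."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision": decision,
        "knowledge_nodes": nodes,  # hypergraph nodes that grounded the decision
        "rules_applied": rules,    # explicit guardrails that constrained it
    })

print(audit_entry("approve_refund",
                  ["policy:refunds_v3", "order:1042"],
                  ["max_refund_amount"]))
```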

In summary, through its hypergraph database, Rippletide delivers a technological answer to the three pillars of trust: reliability through a drastic reduction in hallucinations, compliance through strict respect of guardrails, and explainability through end-to-end traceability. This is the foundation on which it finally becomes possible to confidently delegate operational decisions to an AI agent.

→ To explore the challenges of agentic explainability in the enterprise: read our article or watch this video.


Product Focus: the Rippletide module for evaluating hallucinations in AI agents

At the Adopt AI 2025 conference, the Rippletide team was proud to unveil a preview of our innovative hallucination evaluation module for AI agents, concretely illustrating our "Agentic Trust by Design" philosophy. This module delivers a novel response to a universal challenge in AI agents: evaluating agents to detect and identify hallucinations. Hallucinations can occur at every stage of an agent's decision-making process, resulting in fallacious assertions in the generated outputs. Until now, enterprises had very limited means to detect these hallucinations in real time, before they impacted the agent's responses. This is precisely the gap Rippletide's module is designed to fill.



Rather than assessing the damage after the fact, Rippletide has built a module that evaluates the hallucination rate of any agent in real time. To do so, the module extracts all factual claims made by the agent (for example, a numerical value, a reference to a policy, or a document citation). Each of these assertions is then verified against the organization’s unified knowledge hypergraph. For every fact, the system determines whether it is sourced and supported by data, unsupported (unknown), or directly contradicted by reference information. Based on this process, the module can detect hallucinations in real time, flag them, and subsequently optimize the agent.
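A minimal sketch of this verification step, using a simple key-value lookup as a stand-in for the knowledge hypergraph (claim extraction itself is elided, and all names are illustrative assumptions), might look like this:

```python
from enum import Enum

class Verdict(Enum):
    SUPPORTED = "supported"        # sourced and backed by reference data
    UNSUPPORTED = "unsupported"    # unknown: no evidence either way
    CONTRADICTED = "contradicted"  # conflicts with reference information

def verify_claim(key: str, value: str, knowledge: dict[str, str]) -> Verdict:
    """Check one extracted factual claim against the reference knowledge."""
    if key not in knowledge:
        return Verdict.UNSUPPORTED
    return Verdict.SUPPORTED if knowledge[key] == value else Verdict.CONTRADICTED

kb = {"refund_window_days": "30"}  # toy stand-in for the hypergraph
print(verify_claim("refund_window_days", "30", kb))  # Verdict.SUPPORTED
print(verify_claim("refund_window_days", "60", kb))  # Verdict.CONTRADICTED
print(verify_claim("warranty_years", "2", kb))       # Verdict.UNSUPPORTED
```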

The outcome of this runtime evaluation is expressed as a deterministic reliability score. For each analyzed response, the module calculates a hallucination rate and derives an overall score on a scale from 1 to 4 (with 4 being the highest level). For example, a score of 4 corresponds to an agent with a very low hallucination rate, typically below 1%. Conversely, a score of 1 indicates a response riddled with inaccuracies and therefore unacceptable. Importantly, this scoring is not a vague opinion generated by another AI model; it is entirely objective, as it is based exclusively on the company’s own data. The tool also explicitly highlights the portions of the response that may be hallucinated, enabling the organization to understand exactly where the agent diverged from reality.
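As a sketch of how such a deterministic score could be derived from claim-level verification counts: only the "score 4 corresponds to a hallucination rate typically below 1%" threshold comes from the description above; the intermediate cut-offs are illustrative assumptions.

```python
def hallucination_rate(flagged: int, total: int) -> float:
    """Fraction of extracted claims that failed verification."""
    return flagged / total if total else 0.0

def reliability_score(rate: float) -> int:
    """Map a hallucination rate onto the 1-4 scale (4 = most reliable)."""
    if rate < 0.01:    # below 1%: stated above
        return 4
    if rate < 0.05:    # assumed cut-off
        return 3
    if rate < 0.20:    # assumed cut-off
        return 2
    return 1           # response riddled with inaccuracies

print(reliability_score(hallucination_rate(0, 120)))   # 4
print(reliability_score(hallucination_rate(30, 120)))  # 1
```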

→ To learn more about AI agent evaluation methods: watch this video or read our article on micro- and macro-determinism in AI agents.



In addition, in offline evaluation mode, the module enables cold benchmarking of an agent prior to deployment. By running the agent on question-and-answer datasets or test scenarios, organizations can obtain its average hallucination rate and a “readiness” score. This makes it possible to compare multiple prompt versions, measure the impact of a new knowledge base, or decide whether a given agent meets the required reliability threshold for production. Here again, Rippletide’s deterministic approach stands in sharp contrast to the subjective or partial evaluations that previously prevailed.
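A hypothetical sketch of this offline benchmarking loop follows; `agent`, `evaluate_answer`, and the 1% readiness threshold are all assumptions made for illustration.

```python
from typing import Callable

def benchmark(agent: Callable[[str], str],
              dataset: list[dict],
              evaluate_answer: Callable[[str, dict], float]) -> dict:
    """Run the agent over a Q&A test set and average per-answer hallucination rates."""
    rates = [evaluate_answer(agent(ex["question"]), ex) for ex in dataset]
    avg = sum(rates) / len(rates) if rates else 0.0
    return {"avg_hallucination_rate": avg,
            "production_ready": avg < 0.01}  # assumed reliability threshold
```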

→ To learn more about our hallucination evaluation module: read here.

Big thanks to the La French Tech Grand Paris team, as well as to the other startups present at Adopt AI. The exchanges, feedback, and shared energy throughout the event made it a particularly valuable experience!

And last but not least, a highlight of the event: a visit from Emmanuel Macron, President of the French Republic. Thanks again for coming to the La French Tech Grand Paris booth!



Other resources:

Rippletide at Adopt AI: how to deploy trustworthy AI agents?
https://www.youtube.com/shorts/d4az3WGGcek

How to enforce guardrails for AI agents?
https://www.youtube.com/shorts/dNtNEyHfcoU

What are the biggest challenges facing AI agents today?
https://www.youtube.com/shorts/x_I-TVGQKsg

How to evaluate AI agent hallucinations?
https://www.youtube.com/shorts/0Qcj8WzDFNc

The 3 challenges to deploy AI agents in production
https://www.youtube.com/shorts/KW8h03hMnio


