Oraichain’s AI Oracle: More than a Data Feed

Apr 25, 2023 · 9 min read

How many new AI projects have you come across in the last week? Did you know that there were more than 2000 new AI-based tools and services launched in March alone?

While the emergence of AI technology is certainly one of the most exciting developments of 2023, generalization when discussing these technologies can lead to confusion surrounding the particular purpose of certain systems. In the context of blockchain, the issue is compounded by the complexity of smart contracts and the finality of execution.

In truth, the rapid speed at which this space is evolving continues to generate more questions than answers. There is no clear regulatory framework for DeFi. There is no clear regulatory framework for AI. This lack of clarity ultimately leads to general distrust and fear of these frontier technologies, hindering adoption, spurring confrontation and diminishing the ultimate goal: innovation towards transparency and inclusivity. The successful integration of AI and smart contracts should enhance usability without further obfuscating the intended purpose of blockchain technologies.

Oraichain’s AI Oracle is the only existing decentralized solution for Trustworthy AI. Typically, blockchain-based oracles are responsible for last mile aggregation and delivery of data, in many cases anchored by immutable on-chain data or verified off-chain data sources. The unique challenge of creating an oracle solution for artificial intelligence and machine learning is far more complex, requiring systems that go beyond delivery to provide performance and security monitoring for the underlying software.

The Trilemma

There is always a Trilemma. Blockchain solutions are, generally, solving for security, scalability and interoperability. Oraichain’s AI Oracle is solving for trust, identity and decentralization. For the purpose of this discussion, let’s zoom out a bit more to understand the demand for these solutions within the macro context, establishing a trilemma that can help guide development in an ever-shifting landscape: Innovation, Integrity and Regulation.


Innovation

  • How do we build a flexible solution for Trustworthy AI that is both compatible with existing systems and accounts for rapid evolution?
  • Can a modular approach support both internal and external innovations without disrupting the core functionality of the system itself?

Integrity

  • Can a software-agnostic solution be developed to ensure AI models are producing reliable, accurate, explainable and unbiased output?
  • How do we mitigate the potential risk that AI could diminish the integrity of on-chain actions, especially given new attack vectors introduced by reliance on off-chain software?

Regulation

  • How do we create a risk management framework that aligns with emerging international standards and can actually help to drive the compliance agenda?
  • Can this risk management framework be operationalized, productized and democratized, ensuring that government regulators do not have exclusive control over innovation?

As we observe the complexity of these issues, it’s important to be aware that there are no agreed upon solutions or frameworks currently required for AI products and their developers. Blockchain projects face a similarly murky compliance landscape, especially as regulators continue to overgeneralize the industry without a deep understanding of the technology. In both cases, companies are going straight to market, pushing forward with innovation and disregarding the inaction of centralized authorities.

As a result, end-users (consumers and institutions) are subject to a wide range of risks, some of which are completely unknown because of the lack of disclosure requirements. There is a growing expectation that decentralized technologies will bring transparency to capitalism, but to do this, the solutions must be all-inclusive, encompassing technology, risk management and compliance innovations. In many ways, this is the void that Oraichain’s AI Oracle is looking to fill.

The Dream

Like any new industry, the exponential growth of the AI sector has attracted thousands of entrepreneurs and developers looking to cash in on the gold rush. For some, the intention is pure and the innovation is real, but it’s important to be cautious and aware of “snake oil” salesmen. Many will attempt to sell you a dream that may not come to fruition in the next ten years, never mind being ready with a product today. And in cases where AI solutions are built to automate high-risk transactions, the relative lack of transparency can expose users to unreasonable risks.

Since many in the crypto world are focused on DeFi, trading and ultimately profit, let’s just put all the cards on the table: if a product offers fully automated management of your investment strategies, you are entitled to know how that system works and its reliability. The expectation of proper disclosure should not stop at DeFi and FinTech products, but should expand to all AI-based products. Importantly, the AI sector can take it upon itself to lead this agenda, setting standards from within rather than waiting for them to be imposed.

“The Dream” is not simply visions of an AI-infused future led by opaque monolithic corporations, but rather, radical transformation of capitalism as we see it today, driven by the desire for transparency, inclusivity and equitable opportunities. Oraichain’s AI Oracle is a key element of our future as a society, providing a method for oversight that is decentralized and democratized, with applications in Web2 and Web3.

The Risk

While it’s impossible to provide a truly exhaustive list of risks associated with AI, there are some overarching themes that should be considered before engaging with these technologies. To the extent possible, AI-centric businesses should attempt to address these risks to the best of their ability:

Explainability of Output

In general, our expectation is that systems will perform as intended, but since AI and machine learning deal heavily in approximation, there will always be a varying degree of accuracy and margin of error. In the event of low quality output, it's vital to understand why the AI model has produced the given result. Accurate reporting can help end-users make informed decisions on which products to trust, as well as best practices when using certain products.

As we move deeper into the era of self-learning systems, it is also important to disclose the highly experimental nature of AI technologies. In many instances, AI systems may produce output or perform actions that are not entirely explainable, requiring extensive research and subsequent debriefing to bring consumers up to speed. In the event that these systems involve high-risk activities, a failsafe may be necessary in production environments to prevent damages.

Technical and Institutional Opacity

Capitalism is built on intellectual property. Open-source technologies continue to flourish, but there will always be centralized entities that choose to keep the specific composition of their AI-based systems private. These black box instances should not be excluded from disclosure requirements. In fact, the opaque nature of these products should require thorough reporting on intention, operations and security, as well as detailed results from quality assurance testing.

The Potential for Bias

As the reliance on AI technologies grows, the need to ensure systematic fairness becomes more urgent. Bias can emerge as a result of training data and human contributions; discrimination has already been observed in some AI products, producing results that could have long term implications for underserved and underprivileged populations. Diversity of both personnel and training data can help to mitigate this risk, but ultimately there will need to be a comprehensive framework for documenting and reporting risks associated with bias.

Beyond bias in the model itself, there is also an issue of bias towards a particular end-user. All AI-focused companies should have clear guidance on who the product is for, and when possible, work towards providing broad accessibility regardless of race, gender, nationality or socio-economic status.

Privacy and Data Protections

The rise of zero-knowledge technologies presents an opportunity for users to access AI products without exposing private data to third-party operators, but in situations when this data is exposed to centralized parties, end-users deserve to know how their data is collected, stored and used. Where user data is vital to a company's continued development, there is an opportunity for fair trade; in these cases, compensation should be clearly defined. Data privacy disclosures are not new, but a fresh approach to this topic may help to mitigate potential damages and legal fallout resulting from exploitation.

Security + Adversarial Attacks

In the blockchain space, smart contract audits have become a standard security measure to provide objective technical assessment of systematic risks. These reports are often issued once and provide little ongoing monitoring of critical changes in logic that may expose end users to risks previously unknown. All AI products should be required to disclose complete security controls, helping consumers understand the measures taken throughout the software’s lifecycle. This disclosure should address the following needs:

  • Identify: Complete a full risk assessment of the system. This includes both technical and operational risks that may be exploited.
  • Protect: Ensure proper controls are in place to eliminate and mitigate risks through access controls, monitoring regimes and maintenance.
  • Detect: Implement continuous monitoring to maximize awareness, helping to reduce the response time in the event of breaches. Test these controls regularly.
  • Respond: In the event of an exploit or anomaly, communication is key. There should be an explicit plan in place to halt operations, mitigate damages and report throughout the event.
  • Recover: Once the exploit has been mitigated, there needs to be a plan to patch security and learn from the event. In the case where damages are material, there also needs to be explicit redress plans in place.
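These five functions mirror the structure of widely used cybersecurity risk frameworks. As a minimal sketch of how such a disclosure could be checked for completeness (the class, field names and example controls below are hypothetical, not part of any Oraichain API):

```python
from dataclasses import dataclass, field

# The five functions named above: a disclosure should cover all of them.
REQUIRED_FUNCTIONS = ("identify", "protect", "detect", "respond", "recover")

@dataclass
class SecurityDisclosure:
    """Hypothetical disclosure record for an AI product's security controls."""
    product: str
    controls: dict = field(default_factory=dict)  # function name -> description

    def missing_functions(self):
        """Return the functions this disclosure does not yet address."""
        return [f for f in REQUIRED_FUNCTIONS if not self.controls.get(f)]

    def is_complete(self):
        """A disclosure is complete only when all five functions are covered."""
        return not self.missing_functions()

disclosure = SecurityDisclosure(
    product="example-ai-model",
    controls={
        "identify": "Annual technical and operational risk assessment.",
        "protect": "Role-based access control and continuous maintenance.",
        "detect": "24/7 anomaly monitoring with quarterly control tests.",
    },
)
print(disclosure.is_complete())        # False: respond/recover not yet covered
print(disclosure.missing_functions())  # ['respond', 'recover']
```

A structured record like this could be published alongside the product description, making gaps in coverage immediately visible rather than buried in legal prose.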

In sum, these five categories comprise the “disclosure” framework for AI + blockchain products, which, in addition to a detailed product description, can give users a complete understanding prior to engaging. It would be prudent to present such a disclosure in a more digestible fashion than a simple check box at the bottom of the page (like existing privacy policies and user agreements), treating the document instead as an opportunity to educate the future generation of users and technologists.

Within the context of Oraichain’s AI Oracle, disclosures can help to strengthen trust for individual software products published to the network. On a technical level, the AI Oracle system itself can contribute to transparency and reporting, enabling real-time monitoring, preventative security and even expedited response in the event of an exploit.

The AI Oracle

Oraichain’s effort to expand on blockchain-based oracle workflows to produce case-specific solutions for the AI industry has resulted in a robust system for AI-model performance and security monitoring, as well as last mile aggregation and delivery. To understand this system in complete detail, it is helpful to break it down into individual modules:

1.   API Onboarding and Security

  • Service provider: Publish AI Models via APIs on DINO Hub. These models can be either open-source or black box.
  • DINO: Provide pre-trained and custom Utility AI services to analyze security, performance and resilience. These “sentinel” services will run continuously.

2.   Decentralized Test Cases

  • Service provider: Publish a “call for testing prompts” on DINO Center, providing incentives for users to contribute to these tasks.
  • DINO Community: The community will work together to complete the task, contributing testing prompts to a dataset which will be used to provide test cases.
  • DINO: Randomize the contributed testing prompts to provide test cases supporting the assessment of each AI model's accuracy, reliability and consistency.

3.   Verification Script

  • DINO: This script combines results from decentralized test cases and Utility AI “sentinel” services in preparation for submission to the AI Executor subnetwork.

4.   Validation

  • AI Executors: Verification is executed across several AI executor nodes to avoid biased results. An aggregation algorithm will summarize the verification results and validate the winning AI model for use by consumers.

5.   Zero Knowledge Engine (case-specific)

  • Consumers: End users can utilize AI models without exposing their private input to any third party, i.e. infrastructure providers, service providers or other centralized entities.

6.   Decentralized File Storage (case-specific)

  • Consumers: Consumers can retain full ownership of all private data (input and output) related to AI interactions. This data is stored with encryption and access controls, allowing users to share/revoke permissions at will and monetize if they see fit.
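The article does not specify the aggregation algorithm used in the validation step (module 4). As an illustrative sketch only, under the assumptions that each executor node reports a score in [0, 1] and that a median is used to resist a biased minority of nodes (the function names, quorum and threshold are all hypothetical):

```python
import statistics

def aggregate_verifications(scores_by_model, quorum=3, threshold=0.8):
    """Aggregate per-model verification scores reported by independent
    executor nodes, using the median so a biased minority cannot
    drag a model's result up or down.

    scores_by_model: dict mapping model id -> list of scores in [0, 1],
    one score per executor node.
    Returns the winning model id, or None if no model passes.
    """
    summaries = {}
    for model_id, scores in scores_by_model.items():
        if len(scores) < quorum:
            continue  # not enough independent verifications to judge
        summaries[model_id] = statistics.median(scores)

    # Only models at or above the threshold are eligible for validation.
    passing = {m: s for m, s in summaries.items() if s >= threshold}
    if not passing:
        return None
    # Validate the highest-scoring model for use by consumers.
    return max(passing, key=passing.get)

# Three executor nodes score each candidate model on the test cases.
reports = {
    "model-a": [0.92, 0.88, 0.90],
    "model-b": [0.95, 0.40, 0.35],  # one outlier node cannot rescue it
    "model-c": [0.70, 0.72, 0.68],  # consistently below threshold
}
print(aggregate_verifications(reports))  # model-a
```

The key design point is independence: because each executor scores the model separately and only the aggregate is trusted, no single node can bias which model is validated.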

As we can see, Oraichain’s AI Oracle workflow goes beyond the relatively simple task of feeding data to a smart contract: it integrates cybersecurity measures to protect end users, monitoring that keeps service providers ahead of potential adversarial attacks, and on-chain proofs that create a permanent record of performance. In combination with proper disclosures and reporting, these systems bring us one step closer to a sustainable risk management framework for AI-based products. Ultimately, continued democratization of these systems will ensure a high degree of transparency and put users in control of their decisions, even as AI is introduced into their task management systems.

The Conversation

To close out this discussion, it's time for “The Conversation”:

We live in a time of instant gratification. The world is at our fingertips and yet, individuals are rarely satisfied with their circumstances in life. Per usual, humans are in a rush to reach the next stage of technological innovation, but rarely stop to consider how to protect themselves and, more importantly, how to protect disadvantaged populations from being left behind. Accessibility is not about sales, it's about availability of information. AI should not be forced upon the world. AI should not be integrated by default into every product we interact with. AI should be a choice and that choice should be made with full knowledge of the potential benefits and risks associated.

This article should stand as a reminder that we have a long way to go. Everyone is selling you the AI future, but very few are working to make sure that it's done safely. This is an opportunity to build our future better than our past, to rewrite the rules of capitalism and make the world better for our children.
