Oracle Solutions for Trustworthy AI Systems

Mar 23, 2023 · 4 min read

Oraichain’s novel oracle mechanism enables the integration of AI into smart contracts, providing Proof-of-Execution while taking a chain-agnostic approach to AI service delivery. To fully appreciate Oraichain’s AI Oracle, we must understand its fundamental goal: ensuring the trustworthiness of AI when it is integrated into decentralized applications (dApps). So how do we define trustworthiness in this context? And how can the AI Oracle increase transparency to improve the trustworthiness of AI in dApps?

Trust in Blockchain and AI systems

In essence, blockchain systems are trustless because they enable the peer-to-peer transfer of data and assets without third-party intervention, eliminating censorship and providing an immutable record of balances. In Oraichain’s delegated proof-of-stake (DPoS) architecture, consensus is a responsibility shared among the network’s validators, who confirm the authenticity of transactions in each proposed block. This decentralization ensures a tamper-proof ledger: a “trustless” system that we can rely on to work as expected. Hence, we can say the blockchain is “trustworthy”. (I hope you can bear this little wordplay. :) )

The trustworthiness of a system, such as a blockchain network, can be established through transparency, explainability, and controllability. These characteristics make decentralized systems more desirable than their centralized counterparts.

Artificial intelligence (AI) is another important frontier technology, dominating headlines and creating new opportunities for both personal and business development. By now, you’ve likely heard of (or even tried) ChatGPT. You’ve probably read about how AI can transform our lives, how adopting AI tools into your daily workflow can save you hours every day, and even how automation threatens to replace workers. Inevitably, AI and blockchain will merge to create more powerful, innovative, and intelligent decentralized applications (dApps).

When AI comes to blockchain, “trustworthiness” becomes even more critical for the safety and security of the whole system. Inaccurate face recognition can cause the loss of a digital identity or wallet account. Unavailable AI services can delay the system or even expose it to denial-of-service conditions. Unexplainable AI results can destroy the transparency that makes blockchain systems admirable. Attacks on AI models can propagate into the integrated dApps and increase the risk of using these applications.

Trusted execution environments (TEEs) at the hardware level can protect machine learning (ML) systems from disclosure and modification of code and data during training and inference. TEEs can help AI developers train and deploy their models even in a multi-tenant cloud. However, when dApp developers have no access to the training or inference processes of AI models, how can they confirm that the models are working properly? For example, ChatGPT is a black-box API: we can integrate it into a dApp, but we cannot control how it responds to our input. For such black-box AI, must we always put our trust in the performance of a centralized entity?

AI Oracle

Oraichain’s signature technology, AI Oracle, was developed to support and improve the trustworthiness of AI integrated into dApps and blockchains. AI Oracle provides oracle solutions that process and feed AI results and data to smart contracts in order to build dApps. Several layers of verification and validation are carried out during an AI execution, but in general they can be categorized into two types:

  • Verification of AI:

AI models are tested for availability, speed, and accuracy, and then for robustness against adversarial attacks and other types of vulnerabilities. Only models that pass these tests, or that achieve the highest-ranking performance, can be considered reliable to use.

Oraichain implements test case-based verification and utility AI-based verification mechanisms to carry out this kind of quality assurance (see the sketch after this list).

  • Validation of verification:

The verification above is executed by several AI executor nodes in order to avoid biased verification. An aggregation algorithm then summarizes the verification results and validates the winning AI model (a minimal aggregation sketch follows Figure 1).
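To make the verification step concrete, here is a minimal sketch of test case-based verification in Python. Everything in it is illustrative: `model_api` is a hypothetical callable standing in for any black-box AI endpoint, the test cases stand in for data fetched from a marketplace, and the accuracy and latency thresholds are placeholders rather than Oraichain’s actual parameters.

```python
import time
from typing import Callable, List, Tuple

# A test case pairs an input with its expected output,
# e.g. fetched from a data marketplace. Illustrative only.
TestCase = Tuple[str, str]

def verify_model(model_api: Callable[[str], str],
                 test_cases: List[TestCase],
                 min_accuracy: float = 0.9,
                 max_avg_latency_s: float = 1.0) -> dict:
    """Run test case-based verification against a black-box model API.

    Returns a report with accuracy, average latency, and a pass/fail
    verdict. The thresholds are hypothetical defaults.
    """
    correct = 0
    total_latency = 0.0
    for prompt, expected in test_cases:
        start = time.monotonic()
        prediction = model_api(prompt)  # black-box call; no access to internals
        total_latency += time.monotonic() - start
        if prediction == expected:
            correct += 1

    accuracy = correct / len(test_cases)
    avg_latency = total_latency / len(test_cases)
    return {
        "accuracy": accuracy,
        "avg_latency_s": avg_latency,
        "passed": accuracy >= min_accuracy and avg_latency <= max_avg_latency_s,
    }
```

Each executor node would run such a script independently against the same model, producing one report per node.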

Figure 1 briefly describes the verification and validation process that enhances trustworthiness. Black-box AI APIs are published to the DINO Hub AI Marketplace, and test case data is fetched from the DINO Hub Data Marketplace. These test cases are used to verify the accuracy and speed of AI models; they are created automatically on the fly or randomly augmented to ensure the diversity and reliability of the tests. Utility AI for the verification process is combined with the test cases to form the verification script, which each validator node runs independently. Through the decentralized AI and Data Marketplaces on DINO Hub, Oraichain promotes transparency of verification and thereby improves the trustworthiness of the overall AI-integrated system.

Figure 1: Oraichain’s AI Oracle: verification and validation of black-box AI models to support and improve trustworthiness
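As one illustration of the validation step, the sketch below aggregates the independent verification reports produced by several executor nodes using a simple majority vote: a model is validated only if most nodes agree that it passed. The majority rule is an illustrative stand-in, not Oraichain’s actual aggregation algorithm.

```python
from collections import Counter
from typing import Dict, List

def aggregate_reports(reports: List[Dict]) -> Dict:
    """Validate verification results from independent executor nodes.

    Each report is the output of one node's verification run (see
    verify_model above). A model is validated only when a strict
    majority of nodes report that it passed; this majority rule is a
    simplified stand-in for the real aggregation algorithm.
    """
    votes = Counter(r["passed"] for r in reports)
    validated = votes[True] > len(reports) / 2
    mean_accuracy = sum(r["accuracy"] for r in reports) / len(reports)
    return {
        "validated": validated,
        "votes_for": votes[True],
        "votes_against": votes[False],
        "mean_accuracy": mean_accuracy,
    }

# Example: three nodes verify the same model independently; two agree.
reports = [
    {"accuracy": 0.95, "avg_latency_s": 0.4, "passed": True},
    {"accuracy": 0.93, "avg_latency_s": 0.5, "passed": True},
    {"accuracy": 0.70, "avg_latency_s": 2.1, "passed": False},
]
print(aggregate_reports(reports))  # validated: True (2 of 3 votes)
```

Running the vote over several independent nodes is what prevents any single, possibly biased, verifier from deciding the outcome alone.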

Standard compatibility

Trustworthiness is vital and should be considered early in the planning of AI-integrated dApps. Notably, large organizations are paying attention and building standards for trustworthy AI systems.

To list a few, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) have published ISO/IEC TR 24028:2020, which gives an overview of trustworthiness in artificial intelligence. In this document, ISO/IEC defines “trustworthiness” as the “ability to meet stakeholder expectations in a verifiable way… Characteristics of trustworthiness include, for instance, reliability, availability, resilience, security, privacy, safety, accountability, transparency, integrity, authenticity, quality, usability.”

The National Institute of Standards and Technology (NIST) of the US government has conducted research and produced reports on “trustworthy and responsible AI”. The essential characteristics of such AI are accuracy, explainability and interpretability, privacy, reliability, robustness, safety, resilience, and mitigation of harmful bias.

In similar research, but for IIoT systems, the Industry IoT Consortium (IIC) defines trustworthiness as “the degree of confidence one has that the system performs as expected. Characteristics include safety, security, privacy, reliability, and resilience in the face of environmental disturbances, human errors, system faults, and attacks.”

Oraichain follows these and other standards closely to ensure that the design of AI Oracle is compatible with them and, overall, improves the performance of AI-integrated dApps.

Final note

The trustworthiness of AI systems and of dApps built with AI and blockchain technology, the vulnerabilities of such systems and applications, and the typical attacks against them are all significant topics that require thorough attention and consideration. Oraichain Academy hopes to bring insights into these topics in future discussions. Stay tuned.


