🏠 DIN Architecture

In the DIN protocol, three types of participants are continuously involved in handling the network's data:

Data Collectors: Targeting Both On-Chain and Off-Chain Data

Our data collection approach bridges the gap between on-chain data (transactions, wallet addresses, smart contracts) and off-chain data (market sentiment, regulatory changes, social media trends), offering comprehensive insights. This strategy serves a broad spectrum of users, from casual enthusiasts to professional analysts, across sectors such as crypto, medical, academic, and industrial. Through our two data-aggregation products, Analytics and xData, we provide access to actionable, up-to-date information, supporting informed decisions across public and private domains.
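The split between on-chain and off-chain sources can be pictured as a single collector interface producing uniformly tagged records. Below is a minimal Python sketch; `DataRecord`, `collect_on_chain`, `collect_off_chain`, and the source labels are hypothetical illustrations, not DIN APIs:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

@dataclass
class DataRecord:
    """One record gathered by a collector, tagged with its origin."""
    source: str              # hypothetical label, e.g. "ethereum_rpc" or "social_feed"
    kind: str                # "on_chain" or "off_chain"
    payload: dict[str, Any]
    collected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def collect_on_chain(tx_hashes: list[str]) -> list[DataRecord]:
    """Wrap transaction identifiers; a real collector would query a node here."""
    return [DataRecord(source="ethereum_rpc", kind="on_chain", payload={"tx_hash": h})
            for h in tx_hashes]

def collect_off_chain(posts: list[str]) -> list[DataRecord]:
    """Wrap off-chain signals (e.g. social-media text) in the same record type."""
    return [DataRecord(source="social_feed", kind="off_chain", payload={"text": p})
            for p in posts]
```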

Data Validators: Ensuring Model Accuracy with Blockchain

The Decentralized Prediction with Sharing Updatable Models (SUM) framework leverages the blockchain's decentralized nature to validate data: model updates are transparent, immutable, and collectively refined, which improves prediction accuracy and reduces the risk of data tampering. SUM fosters a collaborative ecosystem of continuous model improvement, enabling accurate, secure, and transparent predictive analytics.
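The validator role can be illustrated with a toy local model: the validator predicts on incoming data, accepts records its model agrees with, and refines the model afterward. A minimal sketch, assuming a simple perceptron; the class and method names are illustrative, not the SUM specification:

```python
import numpy as np

class SumValidator:
    """Toy validator holding a locally deployed model (a perceptron here).

    Class and method names are illustrative, not the SUM specification.
    """

    def __init__(self, dim: int):
        self.weights = np.zeros(dim)

    def predict(self, x: np.ndarray) -> int:
        """Binary prediction from the local model."""
        return 1 if float(self.weights @ x) >= 0.0 else 0

    def validate(self, x: np.ndarray, claimed_label: int) -> bool:
        """Accept a record only when the local model agrees with its claimed label."""
        return self.predict(x) == claimed_label

    def update(self, x: np.ndarray, label: int, lr: float = 0.1) -> None:
        """Perceptron-style update; in SUM the refined model is shared with peers."""
        error = label - self.predict(x)
        self.weights += lr * error * x
```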

Data Vectorizers: Streamlining AI Data Preparation

Vector conversion is crucial for AI readiness: it transforms raw data into a structured format that AI models can process effectively. This step is essential for encoding data, normalizing numerical values, managing high-dimensional data, and optimizing AI training and prediction. By making data AI-ready, vector conversion accelerates AI application development while improving model accuracy and scalability.
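For instance, a record with one categorical and one numeric field might be encoded as a one-hot vector plus a min-max-normalized value. The function below is a generic illustration of this step, not DIN's vectorizer:

```python
import numpy as np

def vectorize(record: dict, categories: list[str],
              value_min: float, value_max: float) -> np.ndarray:
    """Encode one raw record as a fixed-length vector:
    a one-hot category plus a min-max-normalized numeric value."""
    one_hot = np.zeros(len(categories))
    one_hot[categories.index(record["category"])] = 1.0
    scaled = (record["value"] - value_min) / (value_max - value_min)
    return np.concatenate([one_hot, [scaled]])

# Example: a record with one categorical and one numeric field.
vec = vectorize({"category": "defi", "value": 42.0},
                categories=["defi", "nft", "dao"],
                value_min=0.0, value_max=100.0)
# vec -> array([1. , 0. , 0. , 0.42])
```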

The DIN protocol streamlines data processing through a series of concise steps, ensuring data integrity and privacy (a minimal end-to-end sketch follows the list):

  1. Data Collection: Collectors gather on-chain and off-chain data from diverse sources.

  2. Validation Routing: Data is forwarded to selected validators based on their locally deployed models.

  3. Verification: Validators employ computational resources to predict and ascertain data accuracy.

  4. Privacy Processing (Dataset): Validated data undergoes privacy enhancement via the ZK processor.

  5. Model Update: The relevant model is refined with the latest data and updated across validators.

  6. Vector Conversion: Computation nodes transform the validated data into vectors.

  7. Privacy Processing (Vector): Vectors are processed through the ZK processor for privacy.

  8. Data Finalization: The finalized dataset and vectors are stored on IPFS, making them accessible to third parties.
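The eight steps can be read as a single pipeline over one record. The sketch below strings them together; every collaborator passed in (`zk_processor`, `vectorizer`, `ipfs`) is a placeholder interface whose names and methods are hypothetical, not real DIN APIs:

```python
def process_record(record, validators, zk_processor, vectorizer, ipfs):
    """Sketch of one record moving through the DIN steps.

    All collaborators (validators, zk_processor, vectorizer, ipfs)
    are placeholder interfaces for illustration, not real DIN APIs.
    """
    # 1. Collection happens upstream: `record` arrives from a collector.
    # 2. Validation routing: pick validators whose local models cover this data.
    selected = [v for v in validators if v.can_handle(record)]
    # 3. Verification: require a majority of the selected validators to agree.
    votes = [v.validate(record) for v in selected]
    if sum(votes) <= len(votes) // 2:
        return None  # record rejected, pipeline stops here
    # 4. Privacy processing (dataset): pass validated data through the ZK processor.
    private_record = zk_processor.process(record)
    # 5. Model update: refine each selected validator's local model.
    for v in selected:
        v.update(record)
    # 6. Vector conversion: a computation node turns the data into vectors.
    vector = vectorizer.convert(private_record)
    # 7. Privacy processing (vector): the vector also goes through the ZK processor.
    private_vector = zk_processor.process(vector)
    # 8. Data finalization: store dataset and vector on IPFS for third parties.
    return ipfs.store({"dataset": private_record, "vector": private_vector})
```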

As shown in Fig. 2, data passes through three phases: collection, validation, and computation. An incentive mechanism rewards all participants for their contributions to the network. More details are presented in the following chapters.
