DLPs may choose to rely on a network of DLP Validators to run their DLP's proof-of-contribution (PoC). After running PoC, these validators form a consensus with each other and write the proof-of-contribution assessment back on-chain. In this model, DLPs are responsible for deploying and maintaining their validators. DLP Validators earn DLP token rewards for accurate and consistent data evaluations.
Verify user data within DLPs according to standards set by DLP owners.
Use Vana’s Proof-of-Contribution system to assess data legitimacy and value.
Attest to the validity of the data and write the attestation back on-chain.
Participate in the Nagoya Consensus to ensure consistent and accurate data scoring.
Perform accurate and consistent data evaluations and disincentivize bad actors through slashing.
Evaluate the performance of other validators to maintain network integrity.
Back evaluations with personal stake to ensure accuracy and reliability.
Respond to queries from data consumers, including decrypting data and validating results.
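The responsibilities above can be sketched as a single evaluation pass: verify the data against DLP-set standards, score it via proof-of-contribution, and produce an attestation to write back on-chain. The sketch below is illustrative only; the checks, field names, and scoring rule are assumptions, since each DLP defines its own standards and metrics.

```python
import hashlib

def evaluate_contribution(data: bytes, min_size: int = 10) -> dict:
    # 1. Verify: apply a DLP-defined standard (here, a hypothetical size floor).
    is_valid = len(data) >= min_size
    # 2. Score: proof-of-contribution assigns a legitimacy/value score.
    score = min(len(data) / 100.0, 1.0) if is_valid else 0.0
    # 3. Attest: bind the verdict to a hash of the data so the
    #    assessment can be written on-chain without exposing the data.
    return {
        "data_hash": hashlib.sha256(data).hexdigest(),
        "is_valid": is_valid,
        "score": score,
    }

attestation = evaluate_contribution(b"example user data contribution")
```

In practice the verification and scoring steps run inside the DLP's proof-of-contribution logic, and the attestation is submitted to the DLP smart contract rather than returned as a plain dictionary.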
Each DLP owner is responsible for deploying a smart contract specific to the DLP's needs. We provide contract templates that offer a starting point for registering DLP validators, recording and verifying data transactions on-chain, and reaching validator consensus through Nagoya Consensus. We also provide a template implementation for the corresponding validators.
The provided templates include:
A smart contract: https://github.com/vana-com/vana-dlp-smart-contracts
A sample validator node (transacts with contract): https://github.com/vana-com/vana-dlp-hotdog
The Vana framework (used by validators): https://github.com/vana-com/vana-framework
The Vana Framework is a library designed to streamline the process of building a DLP.
This object encapsulates interactions with the blockchain.
The state contains information about the current state of the DLP, including nodes in the network (and how they can be reached), scores of the nodes, when it was last synced, current block number, etc.
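The description above can be illustrated with a minimal state object. This is a hypothetical sketch, not the framework's actual API: the field names (node addresses, node scores, sync and block tracking) mirror the information listed above, and the `sync` method only records when syncing happened rather than reading from the chain.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class DLPState:
    # hotkey -> "host:port", so nodes in the network can be reached
    node_addresses: Dict[str, str] = field(default_factory=dict)
    # hotkey -> current score of that node
    node_scores: Dict[str, float] = field(default_factory=dict)
    last_synced_block: int = 0
    current_block: int = 0

    def sync(self, block: int) -> None:
        # In the real framework this would pull fresh data from the
        # blockchain; here we only record the block of the sync.
        self.last_synced_block = block
        self.current_block = block

state = DLPState()
state.node_addresses["validator-1"] = "10.0.0.5:8000"
state.node_scores["validator-1"] = 0.97
state.sync(block=123456)
```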
The Vana framework provides a node abstraction that simplifies the creation and management of a peer-to-peer network that operates the DLP.
A node is a network participant responsible for validating, querying, scoring, or performing any arbitrary task necessary for the DLP to perform proof-of-contribution. A node can be a validator tasked with ensuring a data point belongs to the data contributor and is not fraudulent. A node can also be a miner responsible for aggregating data points to respond to a data query. A DLP is responsible for defining who the DLP participants are, and how they're incentivized for good behavior and penalized for bad.
Nodes can communicate with each other by encapsulating information in a Message object, and sending that object back and forth using a client-server relationship over HTTP.
A NodeClient is responsible for building the inputs of a Message object, and sending it to one or more NodeServers.
The NodeServer runs a FastAPI server that is responsible for responding to API requests sent from a NodeClient. It will perform a task, then fill the outputs of the Message object and send it back to the NodeClient that requested it.
The Message object is sent back and forth between nodes, providing a vehicle for communication between nodes. It wraps the inputs and outputs of a communication exchange sent between nodes.
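The Message / NodeClient / NodeServer pattern can be sketched as follows. In the real framework the server side runs FastAPI and messages travel over HTTP; to keep this self-contained, the "server" here is called in-process, and all class and method names are assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict

@dataclass
class Message:
    # Wraps the inputs and outputs of one exchange between nodes.
    inputs: Dict[str, Any] = field(default_factory=dict)
    outputs: Dict[str, Any] = field(default_factory=dict)

class NodeServer:
    def __init__(self, task: Callable[[Dict[str, Any]], Dict[str, Any]]):
        self.task = task  # the work this node performs

    def handle(self, msg: Message) -> Message:
        # Perform the task, fill the outputs, and return the message.
        msg.outputs = self.task(msg.inputs)
        return msg

class NodeClient:
    def query(self, server: NodeServer, **inputs: Any) -> Message:
        # Build the inputs of a Message and send it to a server.
        return server.handle(Message(inputs=dict(inputs)))

# Example: a server whose task scores a data point, queried by a client.
server = NodeServer(task=lambda inp: {"score": len(inp["data"]) * 0.1})
reply = NodeClient().query(server, data="hello")
```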
Writer's note: We had to design a new consensus mechanism to handle the fuzziness of data contributions. For example, if I believe your data deserves a score of 100, and another validator believes your data deserves a score of 102, we could both be pretty much right. Neither of us as validators is acting maliciously or incorrectly. But crypto consensus mechanisms are generally designed for exact consensus only. Bittensor proposed an early version of this fuzzy consensus, which we have modified to work for private data and proof of contribution.
To reach a state of agreement on data contributions and disincentivize malicious validators, the Proof-of-Contribution system employs Nagoya Consensus. In Nagoya Consensus, each DLP Validator expresses their perspective on the quality and value of data contributions as a rating. These ratings are then used to score validators against one another, weighted by each validator's stake.
Nagoya Consensus rewards validators for producing data contribution scores that are in agreement with the evaluations of other validators. This disincentivizes divergence from the consensus majority while incentivizing validators to converge on honest assessments of data contribution.
By requiring validators to put stake behind their evaluations and rewarding convergence weighted by stake, Nagoya Consensus makes it economically unfavorable for even a significant minority of validators to collude and manipulate the state of the DLP. As long as an honest majority of stake-weighted validators participate, the system can come to consensus on data contribution scores that accurately reflect the quality and value of data in the DLP.
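The stake-weighted agreement idea can be sketched numerically. This is a simplified illustration of the shape of the mechanism, not the actual Nagoya Consensus algorithm: the consensus value here is the stake-weighted mean of ratings, and a validator's share of rewards shrinks with its distance from that consensus.

```python
def consensus_score(ratings: dict, stakes: dict) -> float:
    """Stake-weighted mean of validator ratings for one contribution."""
    total_stake = sum(stakes.values())
    return sum(ratings[v] * stakes[v] for v in ratings) / total_stake

def agreement_rewards(ratings: dict, stakes: dict, pool: float = 100.0) -> dict:
    """Split a reward pool by closeness to the stake-weighted consensus."""
    c = consensus_score(ratings, stakes)
    # A validator's weight shrinks as it diverges from consensus.
    weights = {v: 1.0 / (1.0 + abs(r - c)) for v, r in ratings.items()}
    total = sum(weights.values())
    return {v: pool * w / total for v, w in weights.items()}

# Two honest validators disagree slightly (100 vs 102, the "fuzzy"
# case from the writer's note); one outlier reports 10. The honest
# pair lands near consensus and out-earns the outlier.
ratings = {"alice": 100.0, "bob": 102.0, "mallory": 10.0}
stakes = {"alice": 50.0, "bob": 50.0, "mallory": 10.0}
rewards = agreement_rewards(ratings, stakes)
```

Note how small, honest disagreement (100 vs 102) costs almost nothing, while a large divergence is penalized heavily, and how the outlier's low stake limits its pull on the consensus value in the first place.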
Validate user data for specific Data Liquidity Pools
Start to validate user data for specific Data Liquidity Pools.
Please join our Discord to get an overview of all existing DLPs and how to access them.
If you need to meet specific minimum staking requirements for a DLP, please reach out to the community in our #testnet-help channel for support as well.
You can run a validator on your own hardware or on a cloud provider such as GCP or AWS, ensuring the quality of data in the pool and earning rewards accordingly.
Minimum hardware requirements: 1 CPU, 8GB RAM, 10GB free disk space
See example integration of a Validator here.
Choose the DLP you'd like to run a validator for.
You can run validators in multiple DLPs
Register as a validator through the DLP via its smart contract.
You must meet the minimum staking requirements for the DLP
Wait for your registration request to be approved by the DLP.
Run the validator node specific to the DLP. Confirm that your validator is running correctly. Your logs should look something like this, which will vary by DLP:
See DLP-specific instructions for running a validator node
Congratulations, your validator is up and running! You can keep track of your stats and trust score by looking onchain.
Validators earn rewards for validating uploaded files. For a given data validation request, each validator scores data based on metrics that are relevant to the data type. The scores are aggregated and written onchain, but how does the DLP decide how to reward its validators?
Every 1800 blocks (~3 hours), a DLP epoch concludes and the DLP contract sends a chunk of rewards to its validators. The precise amount of rewards a given validator receives is determined by the Nagoya consensus process.
In Nagoya consensus, each validator submits a score for every other validator to the DLP smart contract. Validators score each other based on the quality of their assessments and their operational performance. For instance, a validator that gives a high score to an uploaded file that appears fraudulent or low quality may receive low scores from other validators.
Somewhat more formally: a validator peer's emissions are the product of two scores, rank and consensus, integrating both the peer's individual valuation by the network (rank) and the collective agreement on that valuation (consensus). Multiplying the two ensures that emissions are allocated in a manner that considers both the quality of contributions and the degree of communal support for those contributions.
When an epoch concludes, the consensus process converts the most recent validator scores into a distribution for that epoch's emissions, and the rewards are distributed to the validators accordingly. Low-ranking validators quickly realize fewer rewards for their contributions, ensuring that an honest majority of validators is able to out-earn dishonest actors and therefore uphold the integrity of the DLP over time.
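The rank-times-consensus allocation described above can be sketched as follows. The epoch length (1800 blocks) comes from the text; the reward pool size and the per-validator rank and consensus values are hypothetical illustrations, not on-chain data.

```python
EPOCH_BLOCKS = 1800          # a DLP epoch, roughly 3 hours
EPOCH_REWARD_POOL = 1000.0   # hypothetical token amount per epoch

def epoch_emissions(rank: dict, consensus: dict,
                    pool: float = EPOCH_REWARD_POOL) -> dict:
    """Allocate the epoch's pool proportionally to rank * consensus."""
    combined = {v: rank[v] * consensus[v] for v in rank}
    total = sum(combined.values())
    return {v: pool * c / total for v, c in combined.items()}

# v3 is both ranked low and poorly supported by its peers, so it
# realizes far fewer rewards than the well-regarded v1 and v2.
rank = {"v1": 0.9, "v2": 0.8, "v3": 0.2}
consensus = {"v1": 0.95, "v2": 0.90, "v3": 0.30}
emissions = epoch_emissions(rank, consensus)
```

Because the allocation is multiplicative, a validator needs both a high individual valuation and broad communal support to earn well; being strong on only one axis is not enough.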