Welcome to Polygon CDK Tech Docs
Welcome to the official documentation for the Polygon CDK (Chain Development Kit). This guide will help you get started with building and deploying rollups using the Polygon CDK.
Getting Started
To get started with Polygon CDK, follow these steps:
Documentation
Explore the comprehensive documentation to understand the various features and capabilities of the Polygon CDK:
Support
Happy coding with Polygon CDK!
Set up an environment for local debugging in VS Code
Requirements
- A working, running kurtosis-cdk environment.
- In `test/scripts/env.sh`, set `KURTOSIS_FOLDER` to point to your setup.
Tip: Use your WIP branch in Kurtosis as needed.
Create the configuration for this Kurtosis environment:
`scripts/local_config`
Stop the cdk-node started by Kurtosis:
`kurtosis service stop cdk-v1 cdk-node-001`
Add to VS Code launch.json
After running `scripts/local_config`, it suggests an entry to add to the `configurations` section of your `launch.json`.
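For illustration only, the resulting entry might look like the sketch below. The actual values (program path, arguments, config file location) come from the output of `scripts/local_config`; everything shown here is a placeholder.

```jsonc
// Hypothetical launch.json entry; use the one suggested by scripts/local_config.
{
    "name": "Debug cdk-node (example)",
    "type": "go",
    "request": "launch",
    "mode": "debug",
    "program": "${workspaceFolder}/cmd",
    "args": ["<arguments suggested by scripts/local_config>"]
}
```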
CDK DA Integration
The purpose of this document is to explain how a third-party Data Availability (DA) solution can integrate with CDK.
Considerations
The code outlined in this document is under heavy development. While we're confident that it will be ready for production in a few weeks, expect it to change in the meantime.
For the first iteration of integrations, on-chain verification is not expected, although this document shows how it could be done at the contract level (doing such a thing inside the ZKPs is out of scope right now). In any case, the Agglayer will assert that the data is actually available before settling ZKPs.
Smart Contracts
The versions of the smart contracts being targeted for the DA integrations are found in zkevm-contracts @ feature/banana. This new version of the contracts allows for multiple "consensus" implementations; two are included by default:
- zkEVM, to implement a rollup.
- Validium, to implement a validium.
A DA can integrate either by reusing the Validium consensus or by adding a custom solution. This document only considers the first approach, reusing the PolygonValidium consensus. That being said, the PolygonValidium implementation allows a custom smart contract to be used in the relevant interaction, which DAs could use to add custom on-chain verification logic. While verifying the DA integrity is optional, any new protocol will need to develop a custom smart contract in order to be successfully integrated (more details below).
This is by far the most relevant part of the contract for DAs:
function sequenceBatchesValidium(
ValidiumBatchData[] calldata batches,
uint32 indexL1InfoRoot,
uint64 maxSequenceTimestamp,
bytes32 expectedFinalAccInputHash,
address l2Coinbase,
bytes calldata dataAvailabilityMessage
) external onlyTrustedSequencer {
And in particular this piece of code:
// Validate that the data availability protocol accepts the dataAvailabilityMessage
// note This is a view function, so there's not much risk even if this contract was vulnerable to reentrant attacks
dataAvailabilityProtocol.verifyMessage(
accumulatedNonForcedTransactionsHash,
dataAvailabilityMessage
);
It's expected that any protocol builds its own contract that follows this interface, in the same way that the PolygonDataCommittee does. The implementation of verifyMessage is dependent on each protocol, and in a first iteration it could be a "dummy", since the AggLayer will ensure that the DA is actually available anyway. That being said, we expect protocol integrations to evolve towards "trustless verification".
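As a minimal sketch of such a first-iteration contract, assuming the interface is shaped like the verifyMessage call shown above (the real IDataAvailabilityProtocol definition lives in zkevm-contracts @ feature/banana and may include additional members):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity 0.8.20;

// Interface inferred from the verifyMessage call above; check
// zkevm-contracts @ feature/banana for the actual definition.
interface IDataAvailabilityProtocol {
    function verifyMessage(bytes32 hash, bytes calldata dataAvailabilityMessage) external view;
}

// First-iteration "dummy" protocol: accepts any message. The AggLayer still
// asserts that the data is actually available before settling ZKPs.
contract DummyDAProtocol is IDataAvailabilityProtocol {
    function verifyMessage(bytes32, bytes calldata) external view override {}
}
```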
Set up the node
In order to integrate a DA solution into CDK, the most fundamental part is for the node to be able to post and retrieve data from the DA backend.
Until now, DAs would fork the cdk-validium-node repo to make such an integration. But maintaining forks can be really painful, so the team is proposing this solution, which allows the different DAs to be first-class citizens that live on the official cdk repo.
These items need to be implemented for a successful integration:
- Create a repository to host the package that implements this interface. You can check how it's done for the DAC case as an example.
- Add a new entry to the supported backend strings.
- [OPTIONAL] Add a config struct in the new package, and add that struct to the main data availability config struct; this way your package will be able to receive custom configuration through the main config file of the node.
- `go get` and instantiate your package, and use it to create the main data availability instance, as done in the Polygon implementation.
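As a rough sketch, a new backend package might look like this. The method set mirrors the `PostSequence`/`GetSequence` flow described in the example flow below; the names and signatures here are assumptions, so check the actual interface in the cdk repo:

```go
// Hypothetical sketch of a custom DA backend living in its own repo.
// Signatures follow the PostSequence/GetSequence shape described in this
// document; the exact interface is defined in the cdk repo.
package mydabackend

import (
	"context"

	"github.com/ethereum/go-ethereum/common"
)

// Config is a hypothetical custom configuration struct that the main node
// config file could populate (see the [OPTIONAL] step above).
type Config struct {
	RPCURL string // endpoint of the DA layer
}

type MyDABackend struct {
	cfg Config
}

func New(cfg Config) *MyDABackend { return &MyDABackend{cfg: cfg} }

// Init opens connections to the DA layer.
func (b *MyDABackend) Init() error {
	// connect to b.cfg.RPCURL here
	return nil
}

// PostSequence stores the batches on the DA layer and returns the
// dataAvailabilityMessage that will be posted on L1.
func (b *MyDABackend) PostSequence(ctx context.Context, batchesData [][]byte) ([]byte, error) {
	// 1. split batchesData into blobs, 2. submit them, 3. build a compact
	// pointer (e.g. block height + commitment) as the message.
	return []byte{}, nil
}

// GetSequence retrieves the batch data referenced by the message.
func (b *MyDABackend) GetSequence(ctx context.Context, batchHashes []common.Hash, dataAvailabilityMessage []byte) ([][]byte, error) {
	// decode the message, fetch the blobs, verify against batchHashes.
	return nil, nil
}
```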
Tip: By default, all E2E tests run using the DAC. It's possible to run the E2E tests against other DA backends by changing the test config file.
Test the integration
- Create an E2E test that uses your protocol, following the test/e2e/datacommittee_test.go example.
- Follow the instructions on Local Debug to run a Kurtosis environment for local testing.
- Deploy the new contract to the L1 running in Kurtosis.
- Call `setDataAvailabilityProtocol` on the validium consensus contract to use the newly deployed contract.
- Modify the `Makefile` to be able to run your test, taking the DAC test as an example here.
Example flow
- The sequencer groups N batches of arbitrary size into a sequence.
- The sequencer calls `PostSequence`.
- The DA backend implementation decides to split the N batches into M chunks, so that they fit as well as possible into the size of the DA blobs of the protocol (or according to other considerations the protocol may have).
- The DA backend crafts the `dataAvailabilityMessage`. This is optional, but it could be used to:
  - Verify the existence of the data on the DA backend on L1 (this message will be passed down to the DA smart contract, and it could include merkle proofs, ...). Realistically speaking, we don't expect this to be implemented in a first iteration.
  - Help the data retrieval process, for instance by including the block height or root of the blobs used to store the data. If many DA blobs are used to store a single sequence, one interesting trick would be to post some metadata, in another blob or in the latest used blob, that points to the other used blobs. This way only the pointer to the metadata needs to be included in the `dataAvailabilityMessage` (since this message is posted as part of the calldata, it's worth minimizing its size).
- The sequencer posts the sequence on L1, including the `dataAvailabilityMessage`. On that call, the DA smart contract is invoked; this can be used to validate that the DA protocol has been used as expected (optional).
- After that happens, any node synchronizing the network will notice it through an event of the smart contract, and will be able to retrieve the hashes of each batch and the `dataAvailabilityMessage`.
- The node can then call `GetSequence(hashes common.Hash, dataAvailabilityMessage []byte)` on the DA backend.
- The DA backend retrieves the data and returns it.
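As a sketch of the "pointer" trick described above, the message could simply encode where the metadata blob lives. Everything below (package, field names, layout) is a hypothetical illustration, not a CDK-defined format:

```go
// Hypothetical compact dataAvailabilityMessage: a pointer to the blob that
// holds the sequence metadata. The layout is illustrative only.
package mydabackend

import (
	"encoding/binary"
	"fmt"

	"github.com/ethereum/go-ethereum/common"
)

type daPointer struct {
	BlockHeight uint64      // DA-layer block containing the metadata blob
	BlobRoot    common.Hash // commitment of the metadata blob
}

// encode packs the pointer into the bytes posted as calldata on L1.
func (p daPointer) encode() []byte {
	buf := make([]byte, 8+common.HashLength)
	binary.BigEndian.PutUint64(buf[:8], p.BlockHeight)
	copy(buf[8:], p.BlobRoot.Bytes())
	return buf
}

// decodePointer is what GetSequence would use to locate the metadata blob.
func decodePointer(msg []byte) (daPointer, error) {
	if len(msg) != 8+common.HashLength {
		return daPointer{}, fmt.Errorf("unexpected message length %d", len(msg))
	}
	return daPointer{
		BlockHeight: binary.BigEndian.Uint64(msg[:8]),
		BlobRoot:    common.BytesToHash(msg[8:]),
	}, nil
}
```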
Integrating non-EVM systems
This guide explains how to connect a third-party execution environment to the AggLayer using the CDK.
Important note
The following information is experimental, and there aren't any working examples of non-EVM integrations with the AggLayer yet. While we know what needs to be done conceptually, the implementation details are likely to evolve. Think of this as a rough overview of the effort involved, rather than a step-by-step guide towards a production deployment.
Key Concepts
Any system (chain or not) should be able to interact with the unified LxLy bridge and settle using the AggLayer, especially when using the Pessimistic Proof option. Support for additional proofs, such as consensus, execution, or data availability, is planned for the future. But for now, this guide is based solely on using the Pessimistic Proof for settlement.
The CDK client handles the integration with both the unified LxLy bridge and AggLayer. Think of it as an SDK to bring your project into the AggLayer ecosystem. You'll need to write some custom code in an adapter/plugin style so that the CDK client can connect with your service.
In some cases, you might need to write code in Go. When that happens, the code should live in a separate repo and be imported into the CDK as a dependency. The goal is to provide implementations that can interact with the smart contracts of the system being integrated, allowing the CDK client to reuse the same logic across different systems. Basically, you'll need to create some adapters for the new system, while the existing code handles the rest.
Components for integration
Smart contracts
For EVM-based integrations, there are two relevant smart contracts. The integrated system needs to implement similar functionality: it doesn't have to be a smart contract per se, and it doesn't need to be split into two parts, but it should perform the functions listed here:
- Bridge assets and messages to other networks.
- Handle incoming asset/message claims.
- Export local exit roots (a hash needed for other networks to claim assets).
- Import global exit roots (a hash needed for processing bridge claims).
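To make the list above concrete, here is the same functionality summarized as a hypothetical Go interface. Every name here is illustrative, since the actual shape depends entirely on the system being integrated:

```go
// Hypothetical, illustrative summary of the bridge-side functionality a
// non-EVM system must expose. Not an actual CDK interface.
package nonevmbridge

import "math/big"

type bridgeFunctionality interface {
	// Bridge assets/messages to other networks.
	Bridge(destNetwork uint32, destAddress []byte, amount *big.Int, metadata []byte) error
	// Handle incoming asset/message claims.
	Claim(smtProof [][32]byte, globalIndex *big.Int, originNetwork uint32, destAddress []byte, amount *big.Int, metadata []byte) error
	// Export the local exit root for other networks to claim against.
	GetLocalExitRoot() ([32]byte, error)
	// Import global exit roots so incoming bridge claims can be processed.
	ImportGlobalExitRoot(ger [32]byte) error
}
```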
AggOracle
This component imports global exit roots into the smart contract(s). It should be implemented as a Go package, using the EVM example as a reference, and it should implement the `ChainSender` interface defined here.
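A minimal sketch of what such a package could look like, assuming the interface is roughly about checking for and injecting global exit roots (the method names below are illustrative assumptions, not the actual `ChainSender` definition):

```go
// Hypothetical AggOracle sender for a non-EVM system. The real ChainSender
// interface is defined in the cdk repo; these method names are assumptions.
package aggoracleadapter

import (
	"context"

	"github.com/ethereum/go-ethereum/common"
)

// chainSender mirrors the job of the component: check whether a global exit
// root is already imported into the system, and inject it when it is not.
type chainSender interface {
	IsGERInjected(ctx context.Context, ger common.Hash) (bool, error)
	InjectGER(ctx context.Context, ger common.Hash) error
}

// myChainSender would wrap the integrated system's client/SDK.
type myChainSender struct{}

var _ chainSender = (*myChainSender)(nil) // compile-time conformance check

func (s *myChainSender) IsGERInjected(ctx context.Context, ger common.Hash) (bool, error) {
	// query the non-EVM analogue of the global exit root contract
	return false, nil
}

func (s *myChainSender) InjectGER(ctx context.Context, ger common.Hash) error {
	// submit the operation that records the GER on the system
	return nil
}
```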
BridgeSync
BridgeSync synchronizes information about bridges and claims originating from the L2 service attached to the CDK client. In other words, it monitors what's happening with the bridge smart contract, collects the necessary data for interacting with the AggLayer, and feeds the bridge service to enable claims on destination networks.
Heads up: These interfaces may change.
To process events from non-EVM systems, you'll need a `downloader` and a `driver`. The current setup needs some tweaks to support custom implementations. In short, you need to work with the `Processor`, particularly the `ProcessorInterface` found here. The `Events` in `Block` are just interfaces, which should be parsed as the `Event` structs defined in the `Processor`.
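For illustration, a non-EVM downloader/driver pair would pull raw events from the system, parse them into the processor's `Event` structs, and hand them over block by block. The shapes below are simplified assumptions modeled on the description above, not the actual cdk definitions:

```go
// Hypothetical sketch of feeding non-EVM bridge events into the BridgeSync
// processor. Block/Event/interface shapes are simplified assumptions.
package bridgesyncadapter

import "context"

// block mirrors the generic sync Block: a block number plus opaque events.
type block struct {
	Num    uint64
	Events []interface{}
}

// processorInterface mirrors the kind of contract the sync driver expects.
type processorInterface interface {
	GetLastProcessedBlock(ctx context.Context) (uint64, error)
	ProcessBlock(ctx context.Context, b block) error
	Reorg(ctx context.Context, firstReorgedBlock uint64) error
}

// bridgeEvent is the kind of Event struct the processor parses events into
// (fields are illustrative).
type bridgeEvent struct {
	DepositCount       uint32
	DestinationNetwork uint32
	Amount             []byte
}

// rawSystemEvent stands in for whatever the non-EVM system emits.
type rawSystemEvent struct{ payload []byte }

func toBridgeEvent(rawSystemEvent) bridgeEvent { return bridgeEvent{} }

// downloadAndParse is what a downloader/driver pair would do: pull raw
// events and hand them to the processor as parsed Event structs.
func downloadAndParse(ctx context.Context, p processorInterface, raw []rawSystemEvent, blockNum uint64) error {
	b := block{Num: blockNum}
	for _, ev := range raw {
		b.Events = append(b.Events, toBridgeEvent(ev))
	}
	return p.ProcessBlock(ctx, b)
}
```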
Claim sponsor
This component performs claims on behalf of users, which is crucial for systems with "gas" fees (transaction costs). Without it, gas-based systems could face a chicken/egg situation: How can users pay for a claim if they need a previous claim to get the funds to pay for it?
The claim sponsor is optional and may not be needed in some setups. The bridge RPC includes a config parameter to enable or disable it. To implement a claim sponsor that can perform claim transactions on the bridge smart contract, you'll need to implement the `ClaimSender` interface, defined here.
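A sketch of the shape such an implementation could take; the `claim` fields and method names are illustrative assumptions, since the actual `ClaimSender` interface is defined in the cdk repo:

```go
// Hypothetical claim sponsor sender for a non-EVM system. The real
// ClaimSender interface lives in the cdk repo; this shape is an assumption.
package claimsponsoradapter

import "context"

// claim carries the data needed to execute a bridge claim on the attached
// system (fields are illustrative).
type claim struct {
	GlobalIndex []byte
	ProofLocal  [][]byte
	ProofRollup [][]byte
	Metadata    []byte
}

// claimSender mirrors the job of the component: submit the claim
// transaction on behalf of the user and report its status.
type claimSender interface {
	SendClaim(ctx context.Context, c claim) (txID string, err error)
	ClaimStatus(ctx context.Context, txID string) (success bool, err error)
}
```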
Last GER sync
Warning: These interfaces may also change.
This component tracks which global exit roots have been imported. It helps the bridge service know when incoming bridges are ready to be claimed. The work needed is similar to that for the bridge sync: implement the `ProcessorInterface`, with events of the `Event` type defined here.
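As a tiny sketch of the `Event` side (field names are assumptions mirroring the description above, not the actual cdk definition):

```go
// Hypothetical Event struct a last-GER-sync processor would consume: which
// global exit root was imported, and where it sits in the L1 info tree.
package lastgersyncadapter

import "github.com/ethereum/go-ethereum/common"

type gerEvent struct {
	GlobalExitRoot  common.Hash // the imported GER
	L1InfoTreeIndex uint32      // position of the GER in the L1 info tree
}
```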
Additional considerations
Bridge
Once all components are implemented, the network should be connected to the unified LxLy bridge. However, keep in mind:
- Outgoing bridges should work with current tools and UIs, but incoming bridges may not. When using the claim sponsor, things should just work; however, the claim sponsor is optional. The point is that the existing UIs are built to send EVM transactions to make the claim in the absence of a claim sponsor, so any claim interaction beyond the auto-claim functionality will need UIs and tooling that are out of the scope of the CDK.
- Bridging assets/messages to another network is specific to the integrated system. You'll need to create mechanisms to interact with the bridge smart contract of your service for these actions.
- We’re moving towards an in-CDK bridge service (spec here), replacing the current separate service (here). There's no stable API yet, and SDKs/UIs are still in development.
AggLayer
AggLayer integration will work once the components are ready, but initially, it will only support Pessimistic Proof. Later updates will add more security features like execution proofs, consensus proofs, data availability, and forced transactions. These will be optional, while Pessimistic Proof will remain mandatory.