The best products provide the best user experiences. However, creating these experiences often introduces significant complexity, with many moving parts and glue that holds them together.
At Nitro Labs, we built Termina - a network extension platform that scales Solana and empowers teams to develop seamless experiences with less friction. The platform can be broken down into three main modules: the SVM Engine, the zkSVM, and the Data Module.
These modules can be used independently, but we've integrated them into a rollup stack to simplify adoption for teams that require a more traditional scaling solution, removing the need to assemble the components themselves.
Instead of developing the stack from scratch, we decided to leverage an existing rollup framework. We considered different options like the OP Stack and Rollkit but ultimately chose the Sovereign SDK for its flexibility and built-in ZK provability.
In this blog, we describe how we connected our stack with Sovereign to build a full-scale rollup on top of the Solana L1.
Before we dive into the specifics of how we built an SVM rollup, let’s quickly go over what a rollup is.
A rollup is a layer-2 blockchain scaling solution that executes transactions off-chain and bundles them into compressed batches and state roots, which are then submitted to and verified on the main layer-1 blockchain. To prove the authenticity of this data, the rollup sequencer posts zero-knowledge proofs (ZKPs) with the bundles. By aggregating transactions this way, rollups dramatically improve scalability, reduce fees, and increase throughput, all while inheriting the security guarantees of the underlying blockchain.
A state transition function (STF) is a fundamental part of a rollup and defines how state changes occur. It takes the previous state and a batch of transactions as input and produces the new state that is committed to the underlying chain. This process is extremely complex and must be managed carefully to prevent logic bugs and security attacks such as malicious transactions or DDoS attacks.
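To make the idea concrete, here is a toy STF over a balance map - a deliberately minimal sketch, with our own simplified types, not the rollup's actual state model:

```rust
use std::collections::HashMap;

// Hypothetical, simplified state: a map from account address to lamport balance.
type State = HashMap<String, u64>;

// A toy transaction: move `amount` lamports from `from` to `to`.
pub struct Transfer {
    pub from: String,
    pub to: String,
    pub amount: u64,
}

// A minimal state transition function: given the previous state and a batch
// of transactions, produce the new state. Invalid transactions (e.g. with
// insufficient funds) are skipped rather than aborting the whole batch.
pub fn apply_batch(mut state: State, batch: &[Transfer]) -> State {
    for tx in batch {
        let from_balance = state.get(&tx.from).copied().unwrap_or(0);
        if from_balance < tx.amount {
            continue; // reject: signer cannot cover the transfer
        }
        *state.entry(tx.from.clone()).or_insert(0) -= tx.amount;
        *state.entry(tx.to.clone()).or_insert(0) += tx.amount;
    }
    state
}
```

A real STF additionally produces a state root and handles far more transaction types; this only shows the "old state + batch → new state" shape.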
The Sovereign SDK streamlines this problem by providing a robust abstraction layer and built-in safeguards. Instead of building everything from the ground up, all we had to do was integrate our components, and the SDK took care of the rest.
In Sovereign’s system, the runtime is defined by each customer and provides the required building blocks for their particular rollup to work as intended. Naturally, this is where most of our time went: building our runtime implementation.
The runtime needs to store all the components responsible for producing a new state given the previous state and new transaction batches.
Sovereign provides several default modules off the shelf, such as the bank module, which tracks token balances.
We were able to plug these into our runtime and rock ‘n’ roll.
To execute Solana transactions, we had to build our own “transaction processor” module. One of the core extensions in our offering is the SVM Engine, so it was the perfect piece to use to build out this module.
Because the Sovereign SDK needs to serialize objects (in order to prove the STF), we had to construct a new SVM engine instance to process each rollup transaction, which meant the engine needed a storage layer to access account data. Our solution was to create an SVMStorage abstraction that lets our engine take any implementor of this interface and blast through transactions.
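The shape of that abstraction looks roughly like this - a hedged sketch with illustrative names and a trimmed-down account type, not our actual trait:

```rust
use std::collections::HashMap;

// Hypothetical account record: a small subset of what the SVM engine needs.
#[derive(Clone, Default)]
pub struct Account {
    pub lamports: u64,
    pub data: Vec<u8>,
}

// Sketch of the storage interface: any backend implementing this trait can
// feed account data to an SVM engine instance. The real trait carries more
// methods (owners, executable flags, rent epochs, ...).
pub trait SvmStorage {
    fn get_account(&self, address: &str) -> Option<Account>;
    fn set_account(&mut self, address: &str, account: Account);
}

// In-memory implementor, e.g. for tests.
#[derive(Default)]
pub struct MemStorage {
    accounts: HashMap<String, Account>,
}

impl SvmStorage for MemStorage {
    fn get_account(&self, address: &str) -> Option<Account> {
        self.accounts.get(address).cloned()
    }
    fn set_account(&mut self, address: &str, account: Account) {
        self.accounts.insert(address.to_string(), account);
    }
}
```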
But to implement this abstraction inside the rollup, we had to combine existing modules and new data structures to store all the information the engine required.
‘Simple!’ we say. Just store all the data in our own map and fetch it by account address.
Amazing, it works!
Well, for a short time.
As soon as we started creating accounts that didn’t exist at genesis and using them to sign transactions, the rollup runtime ran into problems again. As mentioned before, balances are expected to be kept in the bank module, but our precious lamports were stored in our own accounts map. So we had to adapt our implementation of SVMStorage to work with the bank module in addition to our accounts map and keep the two in sync.
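The sync logic can be sketched roughly like this - both types are simplified stand-ins for the real Sovereign modules, and the method names are ours:

```rust
use std::collections::HashMap;

// Stand-in for the bank module: it owns the lamport balances.
#[derive(Default)]
pub struct Bank {
    balances: HashMap<String, u64>,
}

// Stand-in for our accounts map: it owns everything except balances.
#[derive(Clone, Default)]
pub struct AccountMeta {
    pub data: Vec<u8>,
}

#[derive(Default)]
pub struct RollupStorage {
    pub bank: Bank,
    pub accounts: HashMap<String, AccountMeta>,
}

impl RollupStorage {
    // Reads stitch the two sources together...
    pub fn lamports(&self, address: &str) -> u64 {
        self.bank.balances.get(address).copied().unwrap_or(0)
    }

    // ...and writes keep them in sync: the balance goes to the bank,
    // the account data goes to the accounts map, in one operation.
    pub fn set_account(&mut self, address: &str, lamports: u64, data: Vec<u8>) {
        self.bank.balances.insert(address.to_string(), lamports);
        self.accounts.insert(address.to_string(), AccountMeta { data });
    }
}
```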
It works, again!
This covers the basic logic of our rollup execution, and we’re working on performance optimizations for this process - specifically trying to avoid creating the whole engine instance for each individual transaction. We’re collaborating closely with the team over at Sovereign Labs and will cover this topic in a separate article.
Let’s talk about APIs.
Building a Solana rollup means we want the users of the product to have an experience that just works. Users should be able to simply change the RPC URL and get the same familiar behavior of the base chain (deploying programs, transferring tokens, sending instructions, etc.).
To achieve this compatibility, we had to develop a custom module that would work inside of the Sovereign SDK and provide the required API experience. This was the largest piece of work we had to build ourselves within the framework.
The module we mentioned earlier was basic, storing only account data, engine initialization configuration, and the associated logic. However, it also needs to let users query this data and submit new transactions to the system in the first place. After all, we can’t process something that doesn’t exist.
To give users access to the data in a familiar way, we needed to conform to the Solana RPC specification. Although tedious, this was not terribly difficult since the Sovereign SDK provides tools to generate RPC methods and hook them into the rollup's server. So we went and implemented all the functions we needed: getting balances, account data, transactions, and so on.
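As one small illustration of the spec's shape: Solana RPC responses wrap their result in a context object carrying the slot they were read at, so even a simple getBalance handler looks something like the sketch below (simplified stand-in types, not our generated code):

```rust
use std::collections::HashMap;

// Simplified response envelope, mirroring the Solana RPC convention of
// returning { context: { slot }, value } for state-reading methods.
pub struct RpcContext {
    pub slot: u64,
}

pub struct RpcResponse<T> {
    pub context: RpcContext,
    pub value: T,
}

// Toy getBalance: look up the balance at the current slot; unknown
// accounts report a balance of 0.
pub fn get_balance(balances: &HashMap<String, u64>, slot: u64, address: &str) -> RpcResponse<u64> {
    RpcResponse {
        context: RpcContext { slot },
        value: balances.get(address).copied().unwrap_or(0),
    }
}
```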
Ahh, wait!
We weren't able to respond to RPC queries for transactions because we didn’t store transaction details. We had only saved the resulting state changes, because transaction metadata was not required for our module to work properly or for its execution to be provable. Luckily, the SDK provides the ability to store accessory state which is separate from the main state and doesn't need to be ZK proven. This allowed us to read and write data from the accessory state liberally without having to worry about memory constraints of the zkVM.
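The resulting split can be pictured like this - field names here are illustrative, not the SDK's accessory-state API:

```rust
use std::collections::HashMap;

// Sketch of the split: provable state is what the zkVM must replay;
// accessory state holds transaction metadata that only the RPC needs.
#[derive(Default)]
pub struct ModuleState {
    // Proven inside the zkVM: account balances after execution.
    pub provable_balances: HashMap<String, u64>,
    // Accessory: full transaction records, written liberally because
    // they never enter the proof.
    pub accessory_txs: HashMap<String, String>, // signature -> metadata
}

impl ModuleState {
    // Recording a transaction touches both sides: the balance change is
    // part of the provable state transition, the metadata is not.
    pub fn record_tx(&mut self, signature: &str, metadata: &str, payer: &str, new_balance: u64) {
        self.provable_balances.insert(payer.to_string(), new_balance);
        self.accessory_txs.insert(signature.to_string(), metadata.to_string());
    }
}
```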
What about RPC endpoints for blocks…
Wait, again!
Why can’t we store blocks just the same? Well, blocks are only built once we finish processing a batch of transactions. But how do we know when this happens? Sovereign provides hooks that we can implement on a per-module level, allowing us to pack Solana blocks once a transaction batch is complete. We accumulate processed transactions and move them into the confirmed state, which reduces the number of transactions we store in the provable part of the flow - namely, the processed and processing transactions.
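In sketch form, the hook logic looks like this - names are illustrative, not Sovereign's actual hook signatures:

```rust
// Processed transactions accumulate during execution; when the end-of-batch
// hook fires, we drain them into a block and move them to the confirmed set.
#[derive(Default)]
pub struct BlockBuilder {
    processed: Vec<String>,                 // provable: still part of the STF
    pub confirmed_blocks: Vec<Vec<String>>, // accessory: served over RPC
}

impl BlockBuilder {
    // Called per transaction during batch execution.
    pub fn on_tx_processed(&mut self, signature: &str) {
        self.processed.push(signature.to_string());
    }

    // Called from the per-module end-of-batch hook: pack a block and
    // empty the provable accumulator.
    pub fn on_batch_end(&mut self) {
        let block: Vec<String> = self.processed.drain(..).collect();
        if !block.is_empty() {
            self.confirmed_blocks.push(block);
        }
    }
}
```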
And as for sending transactions…
Hold on!
Transaction submission is more complex because transactions affect the state of the rollup, so we can’t store them in the same place as the blocks - they modify account data and thus have to go through the whole state transition function (STF) flow. This is where the sequencer comes in.
The sequencer is responsible for accepting transactions from users, batching them into blobs, and forwarding them to the DA layer. These data blobs are then read by the rollup nodes and executed.
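That pipeline can be sketched as follows; the length-prefixed blob encoding here is purely illustrative, not the actual wire format:

```rust
// Toy sequencer: accept transactions, batch them into a single blob,
// and hand the blob to the DA layer.
#[derive(Default)]
pub struct Sequencer {
    pending: Vec<Vec<u8>>,
}

impl Sequencer {
    // Users submit serialized transactions here.
    pub fn accept(&mut self, tx: Vec<u8>) {
        self.pending.push(tx);
    }

    // Drain pending transactions into one blob for the DA layer, using a
    // naive u32 length prefix per transaction.
    pub fn build_blob(&mut self) -> Vec<u8> {
        let mut blob = Vec::new();
        for tx in self.pending.drain(..) {
            blob.extend_from_slice(&(tx.len() as u32).to_le_bytes());
            blob.extend_from_slice(&tx);
        }
        blob
    }
}
```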
This was the place to add our state-mutating endpoints, so we implemented all the methods that change our state (send transaction, request airdrop, etc.) and attached them to the sequencer.
By using the built-in sequencer instead of writing a custom one, we get the benefit of easily switching between a basic and “soft confirmation” kernel that can provide faster response times for users.
📖 Soft confirmations are a process by which the sequencer validates the transaction (usually by running it but not committing any changes to state) and responds with a confirmation that this transaction will be accepted or rejected before even posting the blob, which allows the user to have a near-instant update on their transaction status.
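As a toy illustration of that flow - the check and names are ours, not Sovereign's kernel:

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
pub enum SoftConfirm {
    WillAccept,
    WillReject(&'static str),
}

// The sequencer dry-runs the transaction against current state without
// committing anything, and answers immediately. Here the "dry run" is
// just a funds check on a balance map.
pub fn soft_confirm(balances: &HashMap<String, u64>, from: &str, amount: u64) -> SoftConfirm {
    match balances.get(from) {
        Some(b) if *b >= amount => SoftConfirm::WillAccept,
        _ => SoftConfirm::WillReject("insufficient funds"),
    }
}
```

Note that `balances` is taken by shared reference: the simulation can never mutate state, which is the point of a soft confirmation.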
Storing batches and proofs is where our data module comes into play. The Sovereign SDK has native support for storing data in an SQLite database or posting it to Celestia, but building a Solana rollup means the data should live on the Solana L1, so our data module was the best fit to plug into the rollup.
To integrate our data module with Sovereign, we simply had to implement the SDK's DaService interface, which was abstract enough to give us all the flexibility we needed in terms of custom behavior and performance.
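The shape of the interface we implemented against is roughly the following; the real Sovereign trait is async and richer, so this is a simplified, synchronous stand-in:

```rust
// Sketch of a DA service: submit blobs to the DA layer and read them
// back by height.
pub trait DaService {
    fn submit_blob(&mut self, blob: Vec<u8>) -> u64; // returns height
    fn get_blobs_at(&self, height: u64) -> Vec<Vec<u8>>;
}

// Mock backend for illustration; our real implementation writes to the
// Solana L1 via the Data Module instead.
#[derive(Default)]
pub struct MockDa {
    blobs: Vec<Vec<u8>>,
}

impl DaService for MockDa {
    fn submit_blob(&mut self, blob: Vec<u8>) -> u64 {
        self.blobs.push(blob);
        (self.blobs.len() - 1) as u64
    }
    fn get_blobs_at(&self, height: u64) -> Vec<Vec<u8>> {
        self.blobs.get(height as usize).cloned().into_iter().collect()
    }
}
```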
🔗 If you want to find out more about our Data Module, check out our data module page.
After reading the blobs from the DA layer, the node validates the transactions to ensure there are no repeated transactions, that signatures are correct, and that signers have enough funds, among other checks.
Our main responsibility was to implement the transaction authorization logic by creating custom SVM transaction authorization functions. In EVM land, replay protection is done with account nonces - an incrementing integer that ensures transactions from the same signer aren’t repeated. In SVM land, however, it is done with the recent blockhash, which lets the engine check how recently a transaction was created and whether it has since become too old.
Since there is no direct replacement for nonces in Solana, and our SVM engine checks the blockhash value, we decided to simply ignore it and use a default instead. We’ll see how that is handled below.
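The workaround can be sketched like this; the sentinel value and the age constant are illustrative (on Solana mainnet a blockhash stays valid for roughly 150 blocks, but the exact cutoff is not something this sketch depends on):

```rust
// Hypothetical default blockhash accepted by the rollup path.
pub const DEFAULT_BLOCKHASH: &str = "11111111111111111111111111111111";
// Illustrative recency window, in slots.
pub const MAX_AGE: u64 = 150;

// Solana-style check: reject a transaction whose recent blockhash is too
// old. Since the rollup has no rolling blockhash queue, transactions
// carrying the default value bypass the age check entirely.
pub fn blockhash_ok(tx_blockhash: &str, tx_slot: u64, current_slot: u64) -> bool {
    if tx_blockhash == DEFAULT_BLOCKHASH {
        return true; // rollup path: age check bypassed
    }
    current_slot.saturating_sub(tx_slot) <= MAX_AGE
}
```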
Everything should work fine now, right? Well, due to the removal of nonces, the rollup had to implement some behaviors (e.g. the StandardProvenRollupCapabilities) in a custom way. Fortunately, most of that work was hitting the ol’ copy/pasta and surgically removing the parts that involved nonces.
The final piece of the puzzle is generating the proof of execution for the rollup’s transaction batches. While this isn’t specific to Sovereign, we had to adapt the SVM and other parts of the Agave codebase to be executable inside general-purpose zkVMs, which we shared in this article.
Once that was in place, the rest of the work was simply patching dependencies and passing the right feature flags where needed. The actual logic for creating the verifier was handled by Sovereign’s StfVerifier
, which takes in our custom STF as input.
Building a rollup on Sovereign taught us valuable lessons about combining different blockchain ecosystems. By integrating our Solana network extensions (SVM Engine, zkSVM, and Data Module) with Sovereign's framework, we've created a solution that maintains Solana's familiar developer experience while leveraging the benefits of rollup technology.
The journey wasn't without its challenges - from implementing custom storage solutions and handling transaction uniqueness to adapting our system for zkVM compatibility. However, each obstacle pushed us to create more elegant solutions and better understand our customers' needs.
So here is how the rollup ended up looking from a bird's-eye view:
And here are the resulting performance numbers for this rollup (we are currently in an optimization phase, so these are preliminary numbers):
⚠️ The benchmarks were run on an Apple MacBook Pro (M3), using only SOL transfers between accounts as instructions.
Moving forward, we're continuing to optimize performance and explore new possibilities at the intersection of Solana and rollup technology. Our experience shows that while integrating different blockchain architectures is complex, it's also incredibly rewarding when you can maintain the best aspects of both worlds.