Network Extensions: Termina's Blueprint for Solana Scaling

The future of scaling on Solana isn’t blindly following Ethereum’s rollup-centric roadmap and shifting all execution off the L1. We’ve seen how copy-paste, general-purpose rollups can create fragmentation and complicate both the developer and user experience. Solana’s 400ms block times and sub-cent transaction fees already outperform many Ethereum L2s like Base, making the network more than sufficient for most use cases.

Last summer, we challenged the idea that Solana doesn’t need L2s by pointing out that they already exist. Projects like Zeta and Cube have built sovereign rollups and “meta L2s” for their DEXs to reduce latency and compete with centralized exchanges while still preserving self-custody. Code designed an L2 sequencer for their micropayments app to provide the responsiveness and privacy that the L1 lacks. Grass developed a data rollup to prove data lineage off-chain while verifying it on-chain because the L1 didn’t allow the kind of throughput they needed. These are only a small handful of examples, but they illustrate how Solana teams have organically adopted L2-like architectures as the solution for specific issues.

It’s important to note that these projects are consumer apps, not infrastructure layers, and they built their architectures to resolve their own pain points rather than those of others. To prevent teams from reinventing the wheel with redundant L2 architectures, we designed and developed a reusable, customizable stack that anyone can use to scale their project. This frees teams to focus on their core product rather than spending time and resources building custom infrastructure.

Network Extensions ≠ Rollups

Unlike Ethereum’s rollups, which often create UX and fragmentation challenges, the above teams’ systems aren’t general-purpose chains that complicate the user experience; they’re tailored for specific applications. By using L2s as backend blockspace and asynchronously committing the results on-chain, developers don’t face confusing deployment choices and users don’t have to bridge funds between multiple networks.

Many teams are most familiar with a rollup-based architecture, so we do support SVM Rollups, particularly for use cases where interoperability is not a priority and an isolated environment is preferable. In certain settings, concerns about liquidity or state fragmentation are overstated. For example, in web2, Notion’s servers don’t need to read or write data to Netflix’s servers; they operate in completely separate domains. In fact, the transparency of the underlying blockchain can leave a project vulnerable to snipers and other attacks.

But rollups aren’t the ultimate goal. They’re just one of many tools for scaling. Network Extensions (NEs) is a holistic term that goes beyond rollups and validiums to encompass a broader set of scaling architectures.

Modular NEs

We see three core types of NEs for scaling Solana, each of which can be used independently or together in a plug-n-play fashion to build not only traditional rollups but entirely new architectures.

SVM Engine

The SVM Engine is a pure execution environment that’s focused on transaction processing without the concept of consensus, networking, or even blocks. In rollups, this serves as the heart of the sequencer and is responsible for executing transactions and updating the global state. Outside of the rollup context, the engine can be used as a low-latency and private setting to process transactions off-chain.
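To make the idea concrete, here is a minimal, hypothetical sketch (not the SVM Engine’s actual API) of what “execution without consensus, networking, or blocks” means: transactions are applied directly to state in whatever order the operator chooses, with no block production or voting in the loop.

```typescript
// Hypothetical sketch of a pure execution environment: transactions are
// applied straight to in-memory state, with no blocks or consensus.
// The types and names here are illustrative, not Termina's real API.
type State = Map<string, number>; // account -> lamport balance

interface Transfer {
  from: string;
  to: string;
  lamports: number;
}

// Apply a transfer directly to state. Ordering is whatever the operator
// chooses, since there is no consensus layer deciding it.
function execute(state: State, tx: Transfer): boolean {
  const fromBal = state.get(tx.from) ?? 0;
  if (fromBal < tx.lamports) return false; // reject on insufficient funds
  state.set(tx.from, fromBal - tx.lamports);
  state.set(tx.to, (state.get(tx.to) ?? 0) + tx.lamports);
  return true;
}

const state: State = new Map([["alice", 100], ["bob", 0]]);
execute(state, { from: "alice", to: "bob", lamports: 40 });
```

Because nothing waits on networking or block intervals, latency is bounded only by execution itself, which is the property low-latency and private off-chain settings are after.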

While Solana’s architecture offers high throughput and parallel execution, some applications require greater performance guarantees and isolation to avoid shared resource bottlenecks. For instance, HFT platforms and data-intensive applications need consistent transaction finality and ultra-low latency, which can be difficult to achieve in a global execution environment, especially during periods of peak demand.

zkSVM

Zero-knowledge proofs (ZKPs) secure off-chain computation and can also enable seamless interaction between the L1 and NEs.

In the context of a rollup, the zkSVM prover can generate non-interactive rollup proofs that allow disputes to complete in a few hours, drastically shortening existing challenge windows that currently span several days.

In addition, it can secure high-throughput and privacy-preserving applications outside of rollups. Instead of executing slow and expensive transactions directly on the L1, users can generate a ZKP over the same computation off-chain and simply verify the proof on-chain. On successful verification, the app can run additional actions on-chain. This significantly improves efficiency and performance while also providing privacy guarantees with advanced ZKPs (such as Groth16 and PLONK) that don’t leak any information.
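The compute-off-chain, verify-on-chain flow can be sketched as below. A hash commitment stands in for the proof here purely to show the message flow; unlike a real SNARK verifier, a hash check does not enforce that the result is correct, so treat every name in this snippet as illustrative rather than the zkSVM’s actual interface.

```typescript
import { createHash } from "node:crypto";

// Stand-in for a real SNARK (e.g. Groth16): we merely commit to the
// inputs and output with a hash. A production zkSVM would emit a
// succinct proof whose verification enforces correctness of the result.
function sha256(data: string): string {
  return createHash("sha256").update(data).digest("hex");
}

// Off-chain: run the heavy computation and attach a "proof".
function proveOffChain(inputs: number[]): { result: number; proof: string } {
  const result = inputs.reduce((a, b) => a + b, 0); // the computation
  return { result, proof: sha256(JSON.stringify({ inputs, result })) };
}

// On-chain (conceptually): check the proof, then trigger follow-up actions.
function verifyOnChain(inputs: number[], result: number, proof: string): boolean {
  return proof === sha256(JSON.stringify({ inputs, result }));
}

const { result, proof } = proveOffChain([1, 2, 3, 4]);
```

The on-chain side touches only a constant-size proof and result, regardless of how heavy the off-chain computation was; that asymmetry is what makes the pattern pay off.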

Data Module

The Data Module allows the creation of “data rollups,” which are often used by DePIN projects that need to post proofs. These don’t require direct interoperability with the Solana mainnet, but instead function as specialized backends for data storage.

In the context of a rollup, the data module is responsible for storing transaction batches. To minimize the on-chain footprint and rent cost, this data needs to be packed, compressed, and stored efficiently in the cheap ledger history, with only commitments retained in the expensive accounts space. However, the data module isn’t limited to transaction storage; it can manage arbitrary data, such as raw data points and proofs of computation or availability.
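A rough sketch of the pack/compress/commit split described above, assuming nothing about the data module’s real interface: the compressed blob is what would land in cheap ledger history, while only the fixed-size commitment would occupy rent-paying account space.

```typescript
import { createHash } from "node:crypto";
import { gzipSync } from "node:zlib";

// Illustrative sketch: pack a batch of transactions, compress it for
// the cheap ledger history, and keep only a 32-byte commitment for the
// expensive accounts space. Names here are hypothetical.
function commitBatch(txs: string[]): { blob: Buffer; commitment: string } {
  const packed = Buffer.from(JSON.stringify(txs));      // pack the batch
  const blob = gzipSync(packed);                        // compress for ledger history
  const commitment = createHash("sha256")               // fixed-size commitment
    .update(packed)
    .digest("hex");
  return { blob, commitment };
}

const { blob, commitment } = commitBatch(["tx1", "tx2", "tx3"]);
```

However large the batch grows, the account-space footprint stays at one hash, which is the economic point of the design.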

Beyond rollups, this is particularly useful for data-intensive use cases like DePIN projects, which can generate a massive volume of metrics. These networks often involve hundreds of thousands of nodes sending concurrent data points, which translates into tens of millions of requests per minute. At this scale, it’s infeasible to persist all of the data on-chain, both technically and economically.

The key distinction is that a data rollup can be considered a subset of a regular rollup: minimal or no execution occurs. Limited computation, such as the generation of small ZK proofs or Merkle paths, happens off-chain instead of the general-purpose execution of a complete SVM, and the resulting proofs can be stored on-chain via the data module.
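The Merkle-path case can be shown end to end. This is a generic sketch (not Termina’s implementation): the batch’s root is the only thing that needs to live on-chain, and any single data point can later be proven against it with a logarithmic-size sibling path.

```typescript
import { createHash } from "node:crypto";

const h = (s: string) => createHash("sha256").update(s).digest("hex");

// Build the Merkle root of a batch plus the sibling path for one leaf.
// On odd-length layers, the last node is paired with itself.
function rootAndPath(leaves: string[], index: number): { root: string; path: string[] } {
  let layer = leaves.map(h);
  const path: string[] = [];
  let i = index;
  while (layer.length > 1) {
    path.push(layer[i ^ 1] ?? layer[i]); // sibling (or self at the edge)
    const next: string[] = [];
    for (let j = 0; j < layer.length; j += 2) {
      next.push(h(layer[j] + (layer[j + 1] ?? layer[j])));
    }
    layer = next;
    i = Math.floor(i / 2);
  }
  return { root: layer[0], path };
}

// Re-hash the leaf up the path; matching the on-chain root proves inclusion.
function verifyPath(leaf: string, index: number, path: string[], root: string): boolean {
  let node = h(leaf);
  for (const sib of path) {
    node = index % 2 === 0 ? h(node + sib) : h(sib + node);
    index = Math.floor(index / 2);
  }
  return node === root;
}

const { root, path } = rootAndPath(["a", "b", "c", "d"], 2);
// verifyPath("c", 2, path, root) → true
```

For a batch of a million data points, the path is only 20 hashes, which is why this kind of limited off-chain computation is all a data rollup needs.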

Looking Forward

Instead of moving workloads off-chain, our goal is to bring off-chain workloads onto the L1 by batching, proving, or attesting to them in a verifiable way. This ensures that the security of these processes remains anchored to Solana mainnet while scaling efficiently.

Rather than forcing builders into a rigid rollup-first model, we’re focused on empowering them with flexible tools to scale however they need.