Optimism - A Holistic Research Report
A complete research report on Optimism made by redacted minds
Executive Summary:
Optimism is an innovative scaling solution designed to address Ethereum's scalability challenges through the creation of an optimistic rollup. The chain has been in development since 2018, launched its token in 2022 and has attracted more than $595M in TVL to its ecosystem.
In the form of the OP Stack, Optimism provides a highly versatile and customizable framework that abstracts complexity for builders who want to spin up (application-specific) rollups. In the context of the Superchain vision, these rollups will all be combined into an interoperable ecosystem of chains that provides the end user with a UX reminiscent of a single, logical chain. This seamless interoperability will be enabled by shared sequencing and cross-rollup message passing protocols.
Thanks to the high degree of EVM equivalence, Optimism enables the reuse of code, tooling and infrastructure from Ethereum L1, thereby offering developers and users an experience akin to Ethereum's mainnet and enabling seamless migration of Solidity contracts, all while providing high throughput and low cost. Moreover, the versatility and customizability offered by the Bedrock implementation of the OP Stack enable builders to address some of the core issues rollup-based scaling solutions face today, allowing them to customize every layer of the tech stack, from sequencer to data availability solution and settlement layer.
However, both rollups in general as well as Optimism and the OP Stack in particular are very nascent technologies and are still in an early stage of development. This is evidenced by the current lack of a state validation mechanism on OP Mainnet and other OP Stack based chains, as well as by the absence of a mechanism to address proposer failure. Additionally, the upgradability of the rollup contract through fast upgrade keys and the currently centralized sequencer introduce additional risk from a user perspective and centralize control over the network. However, there is an ambitious roadmap in place that aims to address these issues and take off the training wheels as Optimism and the OP Stack continue to evolve.
The product-market fit of the technology is showcased by the traction the OP stack has been gaining over the course of the past months, resulting in a fast-growing ecosystem of OP stack based chains that will form the backbone of the Superchain in the future.
Aside from the technological standpoint, Optimism successfully developed an ecosystem with hundreds of dApps that attracted users, developers and bright teams. The development of this ecosystem has played a crucial role in establishing Optimism as a leading Layer 2 solution within the industry.
Introduction
Aiming to solve Ethereum’s scalability problem, Optimism’s OP Mainnet was one of the first general purpose EVM rollups to go live on mainnet. Since its launch, Optimism has consistently been at the forefront of L2 innovation, evolving into an ambitious vision of an interoperable ecosystem of L2s based on the OP stack termed the “Superchain”. This shared technological foundation of OP Mainnet and its fast growing ecosystem is what provides the basis for this Superchain vision to become reality, shaping the future of a more scalable Ethereum ecosystem.
Based on hundreds of hours of in-depth research, this report breaks down the Optimism ecosystem, exploring the technology behind Optimism and the Superchain vision, its unique features, and how it sets itself apart from other L2 solutions.
Whether you're a user or a builder, whether you're a blockchain enthusiast already deeply familiar with the underlying technology or simply want to learn about the latest developments in the industry, this report will provide you with valuable insights into the past, present and future of scaling Ethereum, and of Optimism's optimistic rollup tech in particular.
The report walks you through the history of scaling, explaining in detail how we got from a monolithic Ethereum to a rollup-centric vision for Ethereum 2.0 that will move users and on-chain activity from L1 to highly scalable rollup-based execution layers on L2. We introduce you to the modular thesis and dive deep into rollup mechanics and limitations, as well as potential solutions to overcome the obstacles this nascent scaling technology is facing. Subsequently, the OP Stack and the Superchain vision are broken down in a digestible way and covered in-depth to enable you to stay ahead of the curve. Finally, we have a look at network metrics, ecosystem, tokenomics, team, funding and financials to provide you with a comprehensive assessment of the overall state of the ecosystem and serve you with actionable information.
History of Scaling
Scaling the throughput of monolithic blockchain networks while maintaining decentralization and network security has been a key problem that researchers and developers have been trying to solve for years. It is indisputable that to reach true “mass adoption”, blockchains need to be able to scale, meaning they have to be able to process a large number of transactions quickly and at low cost. This means that as more use cases arise and network adoption accelerates, the performance of the blockchain doesn't suffer. Based on this definition, Ethereum lacks scalability.
With increasing network usage, gas prices on Ethereum have skyrocketed to unsustainably high levels, ultimately pricing out many smaller users from interacting with decentralized applications entirely. Examples include the BAYC land mint (which drove gas fees up to 8,000 gwei) or the Art Blocks NFT drop (which drove gas fees to over 1,000 gwei) - as a reference, gas sits at 6 gwei at the time of writing. Instances such as these gave alternative, more “scalable” L1 blockchains (e.g. Solana) a chance to eat into Ethereum's market share. However, this also spurred innovation around increasing the throughput of the Ethereum network.
However, the scaling approaches these Alt-L1s are taking often come at the cost of decentralization and security. Alt-L1 chains like Solana, for example, have chosen to go with a smaller validator set and have increased hardware requirements for validators. While this improves the network's throughput, it reduces how many people can actually verify the chain themselves and raises the barriers to entry for doing so. This conflict is also referred to as the blockchain trilemma (visualized below). The concept is based on the idea that a blockchain cannot achieve all three core qualities that any blockchain network should strive for (scalability, security & decentralization) at once.
Since decentralization and inclusion are two core values of the Ethereum community and Ethereum is built on a culture of users being able to verify the chain, it is not very surprising that running the chain on a small set of high-spec nodes is not a suitable path for scaling Ethereum. Even Vitalik Buterin argues that it is “crucial for blockchain decentralization for regular users to be able to run a node”. As other ecosystems share this ethos, different scaling approaches gained traction.
Sidechains, Plasma & State Channels
The idea behind side chains is to operate an additional blockchain in conjunction with a primary blockchain (Ethereum). This means the two blockchains can communicate with each other, facilitating the movement of assets between the two chains. A side-chain operates as a distinct blockchain that functions independently from Ethereum and links to Ethereum mainnet through a two-way bridge. Side-chains generally have their own block parameters and consensus algorithms, which are frequently tailored for streamlined transaction processing and increased throughput. However, utilizing a side-chain also means making a trade-off as it does not inherit Ethereum's security features.
That said, in the unlikely event of a collusion scenario where the majority of a side-chain's validators engage in malicious activities, the community can still collaborate to redeploy the contracts on Ethereum and implement a fork that eliminates the malicious validators, enabling the chain to continue operating as intended.
One example of a side-chain, or perhaps rather a “commit chain”, is Polygon PoS. The network uses a checkpointing technique to increase network security, in which a single Merkle root is periodically published to the Ethereum layer 1, but it is otherwise basically a standalone network with its own validator set. This published state is referred to as a checkpoint. Checkpoints are important as they provide finality on the Ethereum chain. The Polygon PoS Chain contract deployed on the Ethereum layer 1 is considered to be the ultimate source of truth, and therefore all validation is done via querying the Ethereum main chain contract.
Similarly, plasma chains also utilize a proprietary consensus mechanism to generate blocks. However, unlike side-chains, the "root" of each plasma chain block is broadcast to Ethereum. This is very similar to checkpointing on Polygon PoS but demands more communication with L1. The "root" in this context is essentially a small piece of information that enables users to demonstrate certain aspects of the L2 block's contents.
But while Polygon PoS has gained significant traction as priced-out users fled Ethereum in search of lower transaction fees, the overall adoption of side-chains and plasma chains as a scaling technology has remained limited.
The same goes for a scaling approach referred to as state channels, which enables off-chain transactions between two or more parties. State channels allow participants to engage in a series of interactions, such as payments or game moves, without requiring each transaction to be recorded on the L1 blockchain.
The process begins with the creation of a smart contract on the Ethereum main chain. This contract includes the rules that govern the interactions and specifies the parties involved. Once the contract is established, the participants can open a state channel, which is an off-chain communication channel for executing transactions.
During the state channel's lifespan, the parties can engage in multiple interactions, updating the state of the contract through signed messages. These state changes are not immediately recorded on the blockchain, but each signed update can be verified by the smart contract and enforced at a later time if necessary.
When the participants are finished with the interactions, they can close the state channel and publish the final state of the contract to the Ethereum main chain, which executes the contract and finalizes the transaction. This approach reduces the number of on-chain transactions needed to complete a series of interactions, allowing for faster and more efficient processing of transactions.
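To make this lifecycle concrete, here is a minimal, illustrative Python sketch of the open → update → close flow. The `Channel` class is invented for this example, and an HMAC stands in for the ECDSA signatures a real channel contract would verify on-chain:

```python
import hashlib
import hmac
from dataclasses import dataclass

def sign(key: bytes, message: bytes) -> bytes:
    # Stand-in for an ECDSA signature; a real channel contract verifies
    # participant signatures on-chain when the final state is published.
    return hmac.new(key, message, hashlib.sha256).digest()

@dataclass
class Channel:
    balances: dict  # current off-chain balances per participant
    keys: dict      # participant -> secret key (stand-in for private keys)
    nonce: int = 0  # strictly increasing state version number

    def update(self, new_balances: dict, signatures: dict) -> None:
        # Every participant must co-sign each new state before it is accepted.
        message = f"{sorted(new_balances.items())}|{self.nonce + 1}".encode()
        for name, key in self.keys.items():
            assert hmac.compare_digest(signatures[name], sign(key, message)), \
                f"invalid signature from {name}"
        self.balances, self.nonce = new_balances, self.nonce + 1

    def close(self) -> dict:
        # Publishing this final state on-chain settles all balances in a
        # single L1 transaction, however many off-chain updates occurred.
        return self.balances

# Usage: Alice pays Bob 3 units off-chain, then the channel settles.
keys = {"alice": b"alice-key", "bob": b"bob-key"}
channel = Channel(balances={"alice": 10, "bob": 5}, keys=keys)
new_state = {"alice": 7, "bob": 8}
message = f"{sorted(new_state.items())}|{channel.nonce + 1}".encode()
channel.update(new_state, {n: sign(k, message) for n, k in keys.items()})
print(channel.close())  # {'alice': 7, 'bob': 8} settled on L1
```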
While state channels initially seemed like a promising solution for scaling blockchain networks like Ethereum, they require a certain level of trust between the participants, and there is a considerable risk of disputes arising if one party fails to follow the rules outlined in the smart contract. Therefore, state channels are primarily well suited for interactions between parties who trust each other, which limits the number of use cases the technology can feasibly support. Consequently, adoption has remained low as other scaling technologies took the spotlight.
Homogeneous Execution Sharding
While the Ethereum community was experimenting with side chains, plasma and state channels (which all have certain drawbacks that render them sub-optimal solutions), a scaling approach that many alternative L1 blockchains have chosen to take is referred to as homogenous execution sharding. This also seemed like the most promising solution to Ethereum’s scalability issues for quite some time and was the core idea behind the old Ethereum 2.0 roadmap.
Homogeneous execution sharding is a blockchain scaling approach that seeks to increase the throughput and capacity of a blockchain network by splitting its transaction processing workload among multiple, smaller units (validator sub-sets) called shards. Each shard operates independently and in parallel, processing its own set of transactions and maintaining a separate state. The goal is to enable parallel execution of transactions across shards, thereby increasing the overall network capacity and speed. Examples of Alt-L1 chains that have opted for this approach are Harmony and NEAR.
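As a toy illustration of the general idea (not of any specific chain's implementation), the sketch below randomly shuffles validators into shards and processes disjoint transaction sets in parallel; all numbers are arbitrary:

```python
import random
from concurrent.futures import ThreadPoolExecutor

NUM_SHARDS = 4
validators = [f"validator-{i}" for i in range(16)]
transactions = [{"id": i, "amount": i * 10} for i in range(100)]

# Randomly shuffle validators into shards, mirroring the random
# assignment idea described above.
random.shuffle(validators)
shard_validators = [validators[s::NUM_SHARDS] for s in range(NUM_SHARDS)]

# Route each transaction to one shard (here: by id), so shards hold
# disjoint workloads and maintain disjoint local state.
shard_txs = [[tx for tx in transactions if tx["id"] % NUM_SHARDS == s]
             for s in range(NUM_SHARDS)]

def process_shard(shard_id: int) -> dict:
    # Each shard computes its own local state independently.
    state = {"shard": shard_id, "tx_count": 0, "volume": 0}
    for tx in shard_txs[shard_id]:
        state["tx_count"] += 1
        state["volume"] += tx["amount"]
    return state

# Shards execute in parallel; total throughput scales with NUM_SHARDS.
with ThreadPoolExecutor(max_workers=NUM_SHARDS) as pool:
    for state in pool.map(process_shard, range(NUM_SHARDS)):
        print(state, "| validators:", shard_validators[state["shard"]][:2])
```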
Harmony is an L1 blockchain platform that aims to provide a scalable, secure, and energy-efficient infrastructure for decentralized applications (dApps). It uses a sharding-based approach in which the network is divided into multiple shards, each with its own set of validators who are responsible for processing transactions and maintaining a local state. Validators are randomly assigned to shards, ensuring a fair and balanced distribution of resources. The network uses a consensus algorithm called Fast Byzantine Fault Tolerance (FBFT), a variant of the Practical Byzantine Fault Tolerance (PBFT) consensus mechanism, to achieve fast and secure transaction validation across shards. Cross-shard communication is facilitated through a mechanism called "receipts," which allows shards to send information about the state changes resulting from a transaction to other shards. This enables seamless interactions between dApps and smart contracts residing on different shards.
Similarly, the NEAR network uses an execution sharding technology called Nightshade, which splits the work of processing transactions across many validator subsets. Nightshade utilizes block producers and validators to process transaction data in parallel across multiple shards so that each shard produces a fraction of the next block (called a chunk). These chunks are then processed and stored on the NEAR main blockchain to finalize the transactions they contain.
This resembles the old Ethereum 2.0 roadmap, in which Ethereum would have consisted of a Beacon Chain and 64 shard chains. The Beacon Chain was designed to manage the PoS protocol, validator registration, and cross-shard communication. The shard chains on the other hand were thought of as individual chains responsible for processing transactions and maintaining separate states in parallel. Validators would have been assigned to a shard and would rotate periodically to maintain the security and decentralization of the network with the Beacon Chain keeping track of validator assignments and managing the process of finalizing shard chain data. Cross-shard communication was planned to be facilitated through a mechanism called "crosslinks," which would periodically bundle shard chain data into the Beacon Chain, allowing state changes to be propagated across the network. However, the Ethereum 2.0 roadmap has since evolved, and execution sharding has been replaced by an approach referred to as data sharding that aims to provide a scalable basis for a more complex scaling technology known as rollups (more on this later on).
But while homogeneous execution sharding promises great scalability, it does come at the cost of security trade-offs, as the validator set is split into smaller subsets and hence the network's decentralization is impaired. Additionally, the value at stake that provides crypto-economic security on each shard is reduced. This is part of the reason why execution sharding is not the approach that Ethereum takes to solve the scalability problem in the current (rollup-centric) roadmap of Ethereum 2.0.
Heterogeneous Execution Sharding
Heterogeneous execution sharding is a blockchain scaling approach that connects multiple, independent blockchains with different consensus mechanisms, state models, and functionality into a single, interoperable network. This approach allows each connected blockchain to maintain its unique characteristics while, depending on the design, benefiting from the security and scalability of the overall ecosystem. Two prominent examples of projects that employ heterogeneous execution sharding are Polkadot (v1) and Cosmos.
Polkadot (v1) is a decentralized platform designed to enable cross-chain communication and interoperability among multiple blockchains. Its architecture consists of a central Relay Chain, multiple Parachains, and Bridges.
Relay Chain: The main chain in the Polkadot ecosystem, responsible for providing security, consensus, and cross-chain communication. Validators on the Relay Chain are in charge of validating transactions and producing new blocks.
Parachains: Independent blockchains that connect to the Relay Chain to benefit from its security and consensus mechanisms, as well as enable interoperability with other chains in the network. Each parachain can have its own state model, consensus mechanism, and specialized functionality tailored to specific use cases.
Bridges: Components that link Polkadot to external blockchains (like Ethereum) and enable communication and asset transfers between these networks and the Polkadot ecosystem.
Polkadot uses a hybrid consensus mechanism called Nominated Proof-of-Stake (NPoS) on the Relay Chain to secure its network. Validators on the Relay Chain are nominated by the community, and they, in turn, validate transactions and produce blocks. Parachains can use different consensus mechanisms run by what are referred to as collators, depending on their requirements. An important feature of Polkadot's network architecture is that, by design, all parachains share security with the Relay Chain, hence inheriting its security guarantees.
Similarly, Cosmos aims to create an Internet of “Blockchains”, facilitating seamless communication and interoperability between different application-specific blockchains. Its architecture is similar to Polkadot's and is composed of a central Hub, multiple Zones, and Bridges.
Hub: The central blockchain in the Cosmos ecosystem enabling cross-chain communication and soon inter-chain security (shared security similar to Polkadot). Cosmos Hub uses a Proof-of-Stake (PoS) consensus mechanism called Tendermint, which offers fast finality and high throughput. Theoretically, there can be multiple hubs. But especially with ATOM 2.0 and inter-chain security coming up, the Cosmos Hub will likely remain the center of the Cosmos-enabled internet of blockchains.
Zones: Independent blockchains connected to the Hub, each with its own consensus mechanism, state model, functionality and generally also validator set. Zones can communicate with each other through the Hub using a standardized protocol called Inter-Blockchain Communication (IBC).
Bridges: Components that link the Cosmos ecosystem to external blockchains, allowing asset transfers and communication between Cosmos Zones and other networks.
The main difference between the Polkadot and the Cosmos approach is the security model. While Cosmos goes for an approach in which the app-specific chains (heterogeneous shards) have to spin up and maintain their own validator sets, Polkadot opts for a shared security model. Under this shared security model, the app-chains inherit security from the Relay Chain that stands at the center of the ecosystem. The latter is much closer to the rollup-based scaling approach that Ethereum is taking.
Modularity & Rollup-based Scaling
The Modular Thesis
As we have seen, building decentralized applications that are sovereign & unconstrained by the limitations of base layers is a complex endeavor. It requires coordinating hundreds of node operators, which is both difficult & costly. Moreover, it is hard to scale monolithic chains without making significant tradeoffs on security and decentralization.
While frameworks such as the Cosmos SDK and Polkadot's Substrate make it easier to abstract certain software components, they don't allow for a seamless transition from code into the actual physical network of p2p hardware. Moreover, heterogeneous sharding approaches might fragment ecosystem security, which can introduce additional friction & risk. So what is the alternative? Can we solve the blockchain trilemma by taking a different approach?
The answer is: Yes! This is exactly where the aforementioned rollups come to the rescue! But before we dive into what rollups are and how they work, let's have a look at the bigger picture. Rollups sit at the core of what is often referred to as the “Modular Thesis”, which is basically the antithesis to monolithic scaling.
Modular blockchain design is a broad approach that separates a blockchain’s core functions (execution, settlement & consensus/DA) into distinct, interchangeable components. Within these functional areas, specialized providers arise that jointly facilitate building scalable and secure rollup execution layers, broad app design flexibility, and enhanced adaptability for evolving technological demands.
In monolithic networks, on the other hand (e.g. the aforementioned Ethereum, Solana, Harmony or NEAR), execution, settlement & consensus/DA are unified in one layer. Let's unpack what these terms mean:
Data Availability: The guarantee that data published to a network is accessible and retrievable by all network participants (at least for a certain time).
Execution: Defines how nodes on the blockchain process transactions to transition the blockchain between states.
Settlement: Finality (probabilistic or deterministic) is a guarantee that a transaction committed to the chain is irreversible. This only happens when the chain is convinced of transaction validity. Hence, settlement means validating transactions, verifying proofs & arbitrating disputes.
Consensus: The mechanism by which nodes come to an agreement about what data on the blockchain can be verified as true & accurate.
But while this monolithic design approach has some advantages of its own (e.g. with regards to reduced complexity & simple composability), we have already covered why it doesn't necessarily scale well. Consequently, modularists strip these things apart and have them done by separate, specialized layers.
The modular design space hence consists of:
Execution layers (rollups)
Settlement layers (e.g. Ethereum)
Consensus/DA layers (e.g. Ethereum or Celestia)
By separating these functions we can create a landscape in which rollups can pick and choose the ideal components for their use case, enabling a high degree of use-case-tailored customization. This enables a broad design space for rollup-based scaling solutions. Depending on the exact definition, the term can be quite broad, including smart contract rollups, sovereign rollups as well as Validiums/Optimiums.
Smart Contract Rollups: Smart Contract Rollups are a type of blockchain that publish their entire blocks to a settlement layer like Ethereum. The settlement layer’s primary functions are to order the blocks and verify the correctness of the transactions.
Sovereign Rollups: Sovereign Rollups are a type of blockchain that publishes its transactions to another blockchain, usually for data availability purposes, but handles its own settlement. Unlike Smart Contract Rollups, Sovereign Rollups do not use a settlement layer to determine their canonical chain and validity rules. The canonical chain of the Sovereign Rollup is determined by the nodes in the rollup's peer-to-peer network.
Validiums: Validiums are scaling solutions similar to “pure” rollups, but designed to improve throughput by processing transactions off the Ethereum Mainnet and specifically using an off-chain data availability solution. Similar to validity rollups, Validiums publish validity (zero-knowledge) proofs on-chain to verify off-chain transactions. However, they keep the data off-chain, which (depending on the design) can be posted to a committee of trusted parties or a separate chain.
Optimiums: Same as Validiums but using an optimistic state validation mechanism.
Rollup Mechanics
Before diving into the rollup mechanics, let’s have a look at the network participants. Depending on the exact implementation, a rollup can have various actors involved. Generally though there are three main actors which are defined below.
Light clients: Only receive the block headers and do not download or process any transaction data (else they would fulfill the tasks of a full node). Light clients can check state validity via fault proofs, validity proofs or data availability sampling.
Full nodes: Retrieve complete sets of rollup block headers and transaction data. They handle and authenticate all transactions to compute the rollup's state and ensure the validity of every transaction. If a full node finds an incorrect transaction in a block, the transaction is rejected and omitted from the block.
Sequencers: Specific rollup node with the primary tasks of batching transactions and generating new blocks for the rollup chain.
The two core principles that underlie the concept of rollups are:
Aggregation: Rollup nodes collect multiple transactions and create a compressed summary, known as a rollup block, which contains the essential information needed for transaction verification and state updates.
Verification: These rollup blocks are then submitted to the main blockchain, where validator nodes verify the validity of the transactions within the block and ensure that they comply with the predefined rules.
Once the block is validated, the state of the rollup is updated on-chain, reflecting the outcome of the transactions. This allows rollups to achieve significant scalability improvements by reducing the computational transaction load and hence blockspace demand on the base layer blockchain while still being able to maintain secure & trustless transaction processing that is rooted in the main chain’s security guarantees. This effectively means moving transaction processing off-chain, while keeping the resulting state on-chain. The amount of data posted to the L1 can be further reduced using compression tricks such as superior encoding, omitting the nonce, replacement of 20-byte addresses with an index and more.
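The per-field byte counts below are rough, illustrative figures loosely based on public rollup compression discussions; the point is the relative saving, not the exact numbers:

```python
# Rough, illustrative per-field byte counts for a simple transfer.
naive = {
    "nonce": 3,       # can be omitted: the rollup state already tracks it
    "gasprice": 8,    # can be replaced by a small fee-market parameter
    "gas": 3,
    "to": 20,         # full 20-byte address
    "value": 9,
    "signature": 65,  # can be aggregated (e.g. BLS) across a whole batch
}
compressed = {
    "nonce": 0,       # omitted entirely
    "gasprice": 0.5,
    "gas": 0.5,
    "to": 4,          # index into an on-chain address table
    "value": 3,       # more compact encoding
    "signature": 0.5, # amortized share of one aggregate signature
}
n, c = sum(naive.values()), sum(compressed.values())
print(f"naive: {n} bytes, compressed: ~{c} bytes, ~{n / c:.0f}x smaller")
```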
So to put this into context with the scaling solutions discussed previously, rollups basically take heterogeneous sharding within a shared security paradigm (the Polkadot approach) to the next level. Transactions are processed off-chain and (as the name suggests) rolled up into batches. Sequencers collect transactions from the users and submit the data to a smart contract on Ethereum L1 that enforces correct transaction execution on the L2, storing the transaction data on L1. As hinted at before, this enables rollups to inherit the security of the battle-tested Ethereum base layer (abstracting the potential risk arising from sequencers and other actors in the system).
What were essentially shards in the old Ethereum 2.0 roadmap are now completely decoupled from the base layer. This leaves devs with a wide open space to customize their L2 chain however they want (similar to Polkadot's parachains or Cosmos' zones), while still being able to rely on Ethereum L1's security. Compared to side-chains or Cosmos zones, this comes with the advantage that a rollup does not need a validator set and consensus mechanism of its own. A rollup only needs a set of sequencers (responsible for transaction ordering), with only one sequencer needing to be live at any given time. With weak assumptions like this, rollups can actually run on a small set of high-spec server-grade machines or even a single sequencer, allowing for great scalability (though this comes at the cost of decentralization and, subsequently, security).
However, most rollups try to design their systems to be as decentralized as possible (more on that later). And while rollups generally don't need a consensus mechanism (as finality guarantees come from L1 consensus, where state validation happens), they can have coordination mechanisms with rotation schedules to rotate through a set of sequencers, or fully fledged PoS mechanisms in which a set of sequencers reaches consensus on transaction inclusion & ordering, thereby increasing decentralization & improving security.
Optimistic vs. Validity Rollups
Generally, there are two types of rollup systems, which differ with regards to the state validation mechanism they use.
Aside from the approach to state validation, the mechanics of optimistic and validity rollups are quite similar. Both have a sequencer node that collects transactions from users, subsequently submitting this raw data to the DA layer (Ethereum L1 in most cases) alongside the new L2 state root. However, in order to ensure that the new state root submitted to Ethereum L1 is correct, optimistic rollups take a crypto-economic fault-proving (a.k.a. fraud-proving) approach. In this model, transactions are generally assumed to be valid unless proven otherwise (hence the term “optimistic”).
To detect faults, verifier nodes watch the sequencer and compare the new state root they compute themselves to the one submitted by the sequencer. If there is a difference, they will initiate what is called a fault proving process. If the fault proving mechanism proves that the actual state root is different from the one submitted by the sequencer, the sequencer's stake (a.k.a. bond) will be slashed (similar to slashing in PoS networks). However, this only holds true if the sequencer is actually required to put up a stake, which is generally not the case when the sequencer is run centrally by the team. The state roots from the fraudulent/faulty transaction onward will be erased and the sequencer will have to recompute the lost state roots (more details on this later).
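A heavily simplified simulation of this watch-and-challenge loop, with a hash standing in for real state-transition execution and the interactive bisection game abstracted away; class names and values are invented for illustration:

```python
import hashlib

def state_root(prev_root: str, batch: list) -> str:
    # Stand-in for actually executing the batch against the previous state.
    return hashlib.sha256((prev_root + str(batch)).encode()).hexdigest()

class Sequencer:
    def __init__(self, bond: int, malicious: bool = False):
        self.bond, self.malicious = bond, malicious

    def post_root(self, prev_root: str, batch: list) -> str:
        # A malicious sequencer claims a root for a tampered batch.
        if self.malicious:
            return state_root(prev_root, batch + ["steal_funds_tx"])
        return state_root(prev_root, batch)

def verify(prev_root: str, batch: list, claimed_root: str, seq: Sequencer) -> str:
    # A verifier recomputes the root from the published data and compares.
    honest_root = state_root(prev_root, batch)
    if claimed_root != honest_root:
        # Fault proven on L1: slash the bond, revert to the honest root.
        seq.bond = 0
        print("fault proven -> sequencer slashed, faulty root discarded")
    return honest_root

seq = Sequencer(bond=100, malicious=True)
batch = ["tx1", "tx2"]
claimed = seq.post_root("genesis", batch)
canonical = verify("genesis", batch, claimed, seq)
print("canonical root:", canonical[:16], "... | sequencer bond:", seq.bond)
```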
Validity rollups (also called zero knowledge rollups) on the other hand rely on validity proofs in the form of zero knowledge proofs (e.g. SNARKs or STARKs) instead of fault proving mechanisms. Similar to optimistic rollup systems, a sequencer collects user transactions and is responsible for submitting the validity proof to L1. Depending on the design, the sequencer’s stake can also be slashed if they act maliciously, which incentivizes them to post valid blocks (and proofs). The prover (a role that doesn’t exist in the optimistic setup) generates unforgeable proofs of the execution of transactions, proving that these new states and executions are correct.
The sequencer subsequently submits these validity proofs to L1, providing it to the verifier smart contract on Ethereum mainnet in the form of a verifiable hash. The validity proof is also posted to the data availability/consensus layer alongside the transaction data, enabling state reconstruction in case something goes wrong on the L2. Theoretically, the responsibilities of sequencers and provers can be combined into one role. However, because proof generation and transaction ordering each require highly specialized skills to perform well, splitting these responsibilities prevents unnecessary centralization in a rollup’s design.
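The data flow (though not the cryptography) can be mocked in a few lines. Here a plain hash stands in for a real SNARK/STARK; unlike this mock, a real validity proof can be checked without re-executing the batch. All names are invented for illustration:

```python
import hashlib

def mock_proof(prev_root: str, batch: list, new_root: str) -> str:
    # Placeholder for a SNARK/STARK binding the state transition together.
    # A real proof would be verifiable WITHOUT re-executing the batch.
    return hashlib.sha256(f"{prev_root}|{batch}|{new_root}".encode()).hexdigest()

class VerifierContract:
    """Toy stand-in for the on-chain verifier: a new state root is only
    accepted alongside a valid proof; no challenge window is needed."""
    def __init__(self, genesis: str):
        self.canonical_root = genesis

    def submit(self, batch: list, new_root: str, proof: str) -> bool:
        ok = proof == mock_proof(self.canonical_root, batch, new_root)
        if ok:
            self.canonical_root = new_root  # finalized immediately
        return ok

verifier = VerifierContract("genesis")
batch, new_root = ["tx1", "tx2"], "root_1"
proof = mock_proof("genesis", batch, new_root)    # produced by the prover
print(verifier.submit(batch, new_root, proof))    # True: instant finality
print(verifier.submit(["bad"], "root_x", "junk")) # False: rejected outright
```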
Determining which state validation mechanism is superior is difficult. However, let's briefly explore some key differences:
Firstly, because validity proofs can be verified mathematically, the Ethereum network can verify the legitimacy of batched transactions instantly and in a trustless manner through a verifier contract. This differs from optimistic rollups, where Ethereum relies on verifier nodes to validate transactions and execute fault proofs if necessary. Hence, some may argue that validity rollups are more secure. Furthermore, the instant confirmation of rollup transactions on the main chain allows users to transfer funds seamlessly between the rollup and the base blockchain (as well as other zk-rollups) without experiencing friction or delays. In contrast, optimistic rollups impose a waiting period before users can withdraw funds to L1 (7 days in the case of Optimism & Arbitrum), as the verifiers need to be able to verify the transactions and initiate the fault proving mechanism if necessary. While there are ways to enable fast withdrawals, they are generally not a native feature and often come with additional trust assumptions for the user.
However, validity proofs are computationally expensive to generate and often costly to verify on-chain (depending on the proof size). By forgoing proof generation and verification altogether, optimistic rollups gain an edge over validity rollups in terms of cost.
Rollup Limitations & Issues
But while rollups hold great promise with regards to solving the scalability issues Ethereum is facing, they are still a very nascent scaling technology with some limitations to overcome. This section focuses on the problems that rollup-based scaling solutions have to solve and the approaches that can potentially help to do so. Please note that this data availability focused section is an excerpt from an in-depth research report written by zerokn0wledge for Castle Capital.
Data Availability
The Data Availability Problem
The Data Availability problem refers to the question of how peers in a blockchain network can be sure that all the data of a newly proposed block is actually available. If the data is not available, the block might contain malicious transactions which are being hidden by the block producer. Even if the block contains non-malicious transactions, hiding them might compromise the security of the system, as nobody can verify the state of the chain anymore. Therefore, one can’t be sure all transactions are actually correct. So in other words, data availability refers to the ability of all participants (nodes) to access and validate the data on a network. This is a prerequisite for functioning blockchains or L2 scaling solutions, and ensures that the networks are transparent, secure, and decentralized.
This data availability problem is especially prominent in the context of rollup systems, which inherit security from the base layer they are built upon and post their transaction data to. Hence, it is very important that sequencers make transaction data available, because DA is needed to keep the chain progressing (ensure liveness) and to catch invalid transactions (safety in the optimistic case). Additionally, data needs to be available to ensure that the sequencer (basically the rollup block producer) doesn't misbehave and exploit its privileged position and the power it has over transaction ordering.
Data availability poses limitations to contemporary rollup systems: even if the sequencer of a given L2 were an actual supercomputer, the number of transactions per second it can actually process is limited by the data throughput of the underlying data availability solution/layer it relies on. Put simply, if the data availability solution/layer used by a rollup is unable to keep up with the amount of data the rollup’s sequencer wants to dump on it, then the sequencer (and the rollup) can’t process more transactions even if it wanted to.
Hence, strong data availability guarantees are critical to ensure rollup sequencers behave. Moreover, maximizing the data throughput of a data availability solution/layer is crucial to enable rollups to process the number of transactions necessary to become execution layers supporting applications ready for mass adoption.
How can we ensure that data on a given data availability layer is actually available, or in other words, that a sequencer has actually published complete transaction data?
The obvious solution would be to simply force the full nodes in the network to download all the data dumped onto the DA layer by the sequencer. However, this would require full nodes to keep up with the sequencer’s rate of transaction computation, thereby raising the hardware requirements and consequently worsening the network’s decentralization (as barriers of entry to participate in block production/verification increase).
Not everyone has the resources to spin up a high-spec block-producing full node. But blockchains also can't scale with low validator requirements. Consequently, there is a conflict here between scalability and decentralization. Additionally, centralizing forces in PoS networks (as stake naturally concentrates with a small number of validators), augmented by centralization tendencies caused by MEV, have led researchers to the conclusion that blockchains and decentralized validator sets don't guarantee all the nice security properties we hoped for in the first place. Hence, many researchers argue that the key to scaling a blockchain network while preserving decentralization is to scale block verification, not production. This idea follows the notion that if the actions of a smaller group of consensus nodes can be audited by a very large number of participants, blockchains will continue to operate as trustless networks, even if not everyone can participate in block production (consensus). This has been the core thesis of Vitalik's Endgame article (December 2021), where he states:
“Block production is centralized, block validation is trustless and highly decentralized, and censorship is still prevented.”
Luckily, there are approaches that address the data availability issue by leveraging error-correcting techniques like erasure coding and probabilistic verification methods such as data availability sampling (DAS). These core concepts play a key role in building scalable networks that can support the data availability needs of rollup-based execution layers.
Data Availability & Calldata on Ethereum
Before diving into these concepts, let’s have a look at the status quo of data availability on Ethereum to understand why these solutions are actually needed.
As already mentioned in the previous section, the capacity of the DA layer to handle the data that a rollup sequencer posts onto the DA layer is crucial. Currently, rollups utilize calldata to post data to Ethereum L1, ensuring both data availability and storage. Transaction calldata is the hexadecimal data passed along with a transaction that allows us to send messages to other entities or interact with smart contracts.
However, calldata is limited to ~10 KB per block and, at a fixed price of 16 gas per byte, is very expensive. The limited data space on Ethereum and the high cost of L1 block space limit rollup scalability and are currently the main bottlenecks L2 scaling solutions face. To quantify this, calldata costs account for roughly 90% of rollup transaction costs.
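A back-of-the-envelope calculation of what this means in practice; the per-transaction byte count, gas price, and ETH price are illustrative assumptions:

```python
CALLDATA_GAS_PER_BYTE = 16   # L1 cost per non-zero calldata byte
TX_BYTES = 112               # assumed size of one compressed rollup tx
GAS_PRICE_GWEI = 20          # assumed L1 gas price
ETH_PRICE_USD = 1_800        # assumed ETH price

gas_per_tx = CALLDATA_GAS_PER_BYTE * TX_BYTES
cost_eth = gas_per_tx * GAS_PRICE_GWEI * 1e-9
print(f"calldata gas per tx: {gas_per_tx}")
print(f"DA cost per tx: {cost_eth:.8f} ETH (~${cost_eth * ETH_PRICE_USD:.3f})")

# Using the ~10 KB of calldata per block cited above and ~12 s block times,
# DA alone caps rollup throughput regardless of sequencer horsepower:
txs_per_block = 10_000 // TX_BYTES
print(f"max ~{txs_per_block} rollup txs per block (~{txs_per_block / 12:.1f} TPS)")
```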
To overcome these limitations and reduce transaction costs on their execution layer, some L2 teams have come up with the idea of using off-chain DA solutions, instead of posting the data to Ethereum L1. This can be a fully centralized solution (such as a centralized storage system) or what is referred to as a data availability committee (DAC).
In the case of a DAC, the protocol relies on a set of trusted entities that guarantee to make data available to network participants and users. When we remove data availability requirements from Ethereum, transactions on the L2 become much cheaper. However, having DA off-chain comes with a trade-off between cost and security. While Ethereum L1 provides very high liveness and censorship resistance guarantees, off-chain solutions require trust in parties that are not part of the crypto-economic security system that secures Ethereum mainnet.
Not surprisingly, there are initiatives aiming to reduce the cost rollups incur when posting data to L1. One of them is EIP-4488, which reduces the calldata cost from 16 to 3 gas per byte, with a cap on calldata per block to mitigate the security risks arising from state bloat. However, it does not directly increase the L1 data capacity limit, but rather rebalances the cost of L1 execution against the cost of posting data from L2 execution layers in favor of rollups, while retaining the same maximum capacity. This rebalancing means that the average data usage of the available capacity will go up, while max capacity doesn't. The need to increase max capacity still remains; however, at least in the short term, EIP-4488 would allow us to enjoy lower rollup fees (as it could reduce costs by up to 80%).
The long-term solution to this is Danksharding. The intermediate solution, on the other hand, is EIP-4844, which aims to increase the blockspace available to rollups and is expected to massively reduce the rollup cost base. But more on this in the next section.
EIP-4844 a.k.a. Proto-Danksharding
With transaction fees on L1 spiking with increased on-chain activity, there is great urgency to facilitate an ecosystem-wide move to rollup-based L2 solutions. However, as already outlined earlier in this report, rollups need data. Right now, the cheapest data option for rollups is calldata. Unfortunately, even this option is very expensive (costing 16 gas per byte).
So how can this issue be addressed? Simply lowering calldata costs might not be a good idea, as it would increase the burden on nodes. While archive nodes need to store all data, full nodes store the most recent 128 blocks. An increased amount of calldata to store would therefore raise storage requirements even more, worsening decentralization and introducing bloat risks if the amount of data dumped onto L1 continues to increase - at least until Ethereum reaches the ETH 2.0 stage referred to as “The Purge”. This stage involves a series of processes that remove old and excess network history and simplify the network over time. Aside from reducing historical data storage, it also significantly lowers the hard disk requirements for node operators and the technical debt of the Ethereum protocol.
Nevertheless, rollups are already significantly reducing fees for many Ethereum users. Optimism and Arbitrum for example frequently provide fees that are ~3-10x lower than the Ethereum base layer itself. Application-specific validity rollups, which have better data compression, have achieved even lower fees in the past.
Similarly, Metis, an unconventional optimistic rollup solution, has achieved $0.01 transaction fees by implementing an off-chain solution for DA & data storage. Nonetheless, current rollup fees are still too expensive for many use cases (or, in the case of Metis, they come with certain security trade-offs).
The long-term solution to this is danksharding, a specific implementation of data sharding, which will add ~16 MB per block of dedicated data space that rollups could use. However, data sharding will still take some time until it’s fully implemented and deployed.
Until that stage, EIP-4844 provides an interim solution by implementing the transaction format that will be used in danksharding (data blobs), without actually implementing any sharding on the validator set. Instead, data blobs (Binary Large Objects) are simply part of the beacon chain and are fully downloaded by all consensus nodes.
To prevent state bloat issues, these data blobs can be deleted after one month. The new transaction type, which is referred to as a blob-carrying transaction, is similar to a regular transaction, except it also carries these extra pieces of data called blobs. Blobs are fairly large (~125 kB) and much cheaper than equivalent amounts of calldata. EIP-4844 primarily addresses L2 transaction fee issues and is expected to have a large impact on those (see figure 3).
So, EIP-4844 (also referred to as proto-danksharding) is a proposal to implement most of the groundwork (e.g. transaction formats, verification rules) of full danksharding and is a pre-stage to the actual danksharding implementation. Rollups will have to adapt to switch to EIP-4844, but they won't have to adapt again when full sharding is rolled out, as it will use the same transaction format.
In a proto-danksharding paradigm, all validators and users still have to directly validate the availability of the full data. Because validators and clients still have to download full blob contents, data bandwidth in proto-danksharding is targeted to 0.375 MB/block instead of the full 16 MB that are targeted with danksharding.
However, this already allows for significant scalability gains, as rollups won't be competing for this data space with the gas usage of existing Ethereum transactions. Basically, this new data type comes with its own independent EIP-1559-style pricing mechanism. Hence, layer 1 execution can remain congested and expensive, while blobs themselves will be very cheap in the short to medium term.
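The blob fee in the EIP-4844 specification is derived from a running "excess blob gas" counter via an integer exponential approximation. The sketch below follows the spec's `fake_exponential` helper; the constants shown have varied across spec revisions and should be treated as illustrative:

```python
MIN_BLOB_BASE_FEE = 1                 # wei; floor price per unit of blob gas
BLOB_FEE_UPDATE_FRACTION = 3_338_477  # controls how fast the fee reacts

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    # Integer-only approximation of factor * e^(numerator / denominator),
    # as used by the EIP-4844 spec to price blob gas.
    i, output, accum = 1, 0, factor * denominator
    while accum > 0:
        output += accum
        accum = (accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def blob_base_fee(excess_blob_gas: int) -> int:
    return fake_exponential(MIN_BLOB_BASE_FEE, excess_blob_gas,
                            BLOB_FEE_UPDATE_FRACTION)

# The fee rises exponentially while blocks persistently exceed the blob
# target and decays when demand falls - independently of execution gas.
for excess in (0, 10_000_000, 50_000_000):
    print(f"excess blob gas {excess:>11,} -> base fee {blob_base_fee(excess):,} wei")
```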
While nodes can safely remove the data after a month, it's extremely unlikely for it to truly be lost. Even a single party that stored a blob can use the on-chain commitment to prove to everyone else that “this blob was committed inside block X” (this is the 1-of-N trust assumption). Block explorers will likely keep this data, and rollups themselves will have an incentive to keep it too since they might need it. In the long term, rollups are likely to incentivize their own nodes to keep relevant blobs accessible.
Rollup-Centric Ethereum 2.0 & Danksharding
Alongside the Proof of Stake (PoS) consensus mechanism, the other central feature in the ETH 2.0 design is Danksharding. While in the old ETH 2.0 roadmap the key idea of how to scale the network was execution sharding, where L1 execution was split among a set of 64 shards, all capable of general-purpose EVM execution, Ethereum has pivoted to a rollup-centric roadmap, introducing a limited form of sharding called data sharding: these shards store data and attest to the availability of ~250 kB-sized blobs of data (double the size of EIP-4844 blobs), transforming the Ethereum base layer into a secure, high-throughput data availability layer for rollup-based L2s.
In a post in the Ethereum Research forum, Vitalik Buterin pointed out how this differs from execution sharding and how rollups fit into the data sharding approach.
But let’s have a look at how data sharding (more specifically danksharding) works in detail. As we know from earlier sections, making data available is a key responsibility of any rollup’s sequencer and ensures security & liveness of the network. However, keeping up with the amount of data that sequencers post onto the data availability layer can be challenging, especially if we want to preserve the network’s decentralization.
Under the Danksharding paradigm we use two intertwined techniques to verify the availability of high volumes of data without requiring any single node to download all of it:
Attestations by randomly sampled committees (shards)
Data availability sampling (DAS) on erasure coded blobs
Suppose that:
You have a large amount of data (e.g. 16 MB, which is the average amount that the ETH 2.0 chain will actually process per block initially)
You represent this data as 64 “blobs” of 256 kB each.
You have a proof of stake system, with ~6400 validators.
How do you check all of the data without…
…requiring anyone to download the whole thing?
…opening the door for an attacker who controls only a few validators to sneak an invalid block through?
We can solve the first problem by splitting up the work: validators 1…100 have to download and check the first blob, validators 101…200 have to download and check the second blob, and so on. But this still doesn’t address the issue that the attacker might control some contiguous subset of validators. Random sampling solves the second issue by using a random shuffling algorithm to select these committees.
DAS is in some ways a mirror image of randomly sampled committees. There is still sampling going on, in that each node only ends up downloading a small portion of the total data, but the sampling is done client-side, and within each blob rather than between blobs. Each node (including client nodes that are not participating in staking) checks every blob, but instead of downloading the whole blob, it privately selects N random indices in the blob (e.g. N = 20) and attempts to download the data at just those positions.
To cover the extreme case where an attacker only makes 50-99% of the data available, Danksharding relies on a technology called erasure coding, an error correction technique. The key property is that, if the redundant data is available, the original data can be reconstructed in the event some part of it gets lost. Even more importantly though, it doesn’t matter which part of the data is lost: as long as X% (tolerance threshold) of the data is available, the full original data can be reconstructed.
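The interplay of the two techniques can be quantified with simple probability. Assuming a 50% reconstruction threshold, an attacker who wants to prevent reconstruction can make at most about half of the erasure-coded data available, so each random sample a client draws succeeds with probability at most ~1/2:

```python
# Probability that a single light client is fooled: all N of its random
# samples happen to land on available chunks. With a 50% reconstruction
# threshold, an attacker blocking reconstruction can make at most ~half
# of the erasure-coded data available.
AVAILABLE_FRACTION = 0.5  # illustrative; depends on the coding rate

for n_samples in (5, 10, 20, 30):
    p_fooled = AVAILABLE_FRACTION ** n_samples
    print(f"{n_samples:>2} samples -> fooled with probability {p_fooled:.2e}")

# At 20 samples the attacker fools one client less than once in a million
# tries; with thousands of independently sampling clients, withheld data
# is detected (and reconstructed from the redundancy) almost surely.
```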
Rollups under the Danksharding paradigm offer similar scalability to the computational shards in the old roadmap. This is possible thanks to the final piece of the puzzle: data shards. Data sharding essentially means that not every validator will continue to download the same transaction data as nodes do currently (validators also have to run nodes). Instead, Ethereum will essentially split its network of validators into different partitions (or committees) called “shards.”
Let’s say Ethereum has 1000 validators that currently store the same transaction data. If you split them into 4 groups of 250 validators each that will now store different data, you have suddenly quadrupled the amount of space available for rollups to dump data to.
This introduces a new problem. If validators within a shard only download and store the transaction data that is dumped to their shard, they won't have guarantees that the entirety of the data dumped by a sequencer was indeed made available. They would only have guarantees for their own shard, but not that the rest of the data was made available to other shards.
For that reason, we run into a situation where validators in one shard cannot be sure that the sequencer was not misbehaving, because they do not know what is happening in other shards. Luckily, this is where the concept of DAS comes in. If you are a validator in one shard, you can sample for data availability using data availability proofs in every other shard! This essentially gives you the same guarantees as if you were a validator for every shard, thereby allowing Ethereum to safely pursue data sharding.
With executable shards (the old vision of 64 EVM shards) on the other hand, secure cross-shard interoperability was a key issue. While there are schemes for asynchronous communication, there’s never been a convincing proposal for true composability. This is different in a scenario where Ethereum L1 acts as a data availability layer for rollups. A single rollup can now remain composable while leveraging data from multiple shards. Data shards can continue to expand, enabling faster and more rollups along with it. With innovative solutions like data availability sampling, extremely robust security is possible with data across up to a thousand shards.
Now that we understand the concept of data sharding, let's have a look at what the specific danksharding implementation looks like. To make the data sharding approach outlined in the previous section feasible on Ethereum, there are some important concepts we should be aware of.
In simplified terms, danksharding is the combination of PBS (proposer-builder separation) and inclusion lists. The inclusion list component is a censorship resistance mechanism that prevents builders from abusing their magical powers and forces non-censorship of user transactions. PBS splits the tasks of bundling transactions (building blocks) and gossiping the bundle to the network (broadcasting). There can be many builders, of course, but if all builders choose to censor certain transactions, there might still be censorship risks within the system. With the crList, block proposers can force builders to include transactions. So, PBS allows builders to compete to provide the best block of transactions to the next proposer, and stakers to capture as much of the economic value of their block space as possible, trustlessly and in protocol (currently this happens out of protocol).
On a high level, proposers collect transactions from the mempool and create a “crList”, which is essentially a list that contains the transaction information to be included in the block. The proposer conveys the crList to a builder who reorders the transactions in the crList as they wish to maximize MEV. In this way, although block proposers have no say when it comes to how transactions are ordered, proposers can still make sure all transactions coming from mempool enter the block in a censorship-resistant manner, by forcing builders to include them. In conclusion, the proposer-builder separation essentially builds up a firewall and a market between proposers and builders.
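A minimal illustrative check of the crList rule: the builder may order transactions freely (e.g. to extract MEV), but a block that omits a crList transaction while space remains is invalid. The function and transaction names here are invented for illustration:

```python
def validate_block(block_txs: list, cr_list: list, capacity: int) -> bool:
    """Illustrative crList rule: the builder orders transactions freely,
    but may not censor crList transactions while the block has room."""
    included = set(block_txs)
    missing = [tx for tx in cr_list if tx not in included]
    # Omitting a crList tx is only acceptable if the block is truly full.
    return not missing or len(block_txs) >= capacity

cr_list = ["tx_user_a", "tx_user_b"]
print(validate_block(["tx_mev_1", "tx_user_a", "tx_user_b"], cr_list, 5))  # True
print(validate_block(["tx_mev_1", "tx_mev_2"], cr_list, 5))  # False: censorship
print(validate_block(["t1", "t2", "t3", "t4", "t5"], cr_list, 5))  # True: full
```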
Additionally, in Danksharding's latest design, the beacon chain will contain all the data from shards. This is achieved by having both the beacon chain and sharding data validated by “committees” composed of validators (the shards). In this way, transactions on the beacon chain can freely access shard data, which can be synchronized between rollups and the Ethereum base layer, greatly improving data availability. This also simplifies rollup structure, as problems such as confirmation delay will no longer exist, and opens up new possibilities for cross-rollup composability (e.g. synchronous calls between various L2s via an L2-powered but L1-based AMM contract).
Alternative DA Solutions
So the good news is that Ethereum is working on upgrades that will mitigate the DA problem on Ethereum (at least to some degree). The bad news though is that it will still take some time until EIP-4844 and especially full danksharding are implemented on Ethereum L1. And even once these solutions are implemented, DA on Ethereum might still be too limited or expensive for some highly cost-sensitive use cases. Consequently, alternative DA solutions have emerged. While covering those in depth would go beyond the scope of this report, we will briefly provide an overview here:
Data Availability Committee (DAC): DACs are committees consisting of multiple nodes run by trusted entities that guarantee to store data off-chain and ensure data availability for rollups. Nodes in DACs attest on-chain that the data of an L2 is available. DACs are utilized by projects like DeversiFi and ImmutableX, with a relatively low operation cost. However, the low cost comes with a security trade-off inherent to the centralized design (if the DAC goes offline, the rollup won't have access to its transaction data anymore).
Celestia: Celestia operates as a modular Data Availability (DA) layer separate from Ethereum (its own PoS L1 built on the Cosmos SDK), allowing for data availability attestations with stronger economic guarantees and fewer trust assumptions compared to DACs. It employs a mechanism known as Data Availability Sampling (DAS) to enable scalability & end-user verification. As it operates independently, it is a more neutral solution for DA compared to DACs.
Avail: Similar to Celestia, Avail is a modular DA layer that functions as its own L1. It's built on Substrate (the Polkadot framework) and uses KZG commitments instead of fault proofs to secure data availability sampling for light nodes.
Eigenlayer’s EigenDA: Developed by EigenLabs, EigenDA is a secure, high throughput, and decentralized Data Availability (DA) service built atop Ethereum using the EigenLayer restaking primitive. It's aimed at reducing gas fees on Layer 2s (L2s) and enhancing data availability bandwidth, thereby reducing data storage costs for Layer 2 Ethereum rollups. EigenDA is part of the broader EigenLayer ecosystem, which aims to facilitate securing middleware services like oracles, bridges or in this case a hyperscale Data Availability (DA) layer.
Decentralization of Sequencer
Another issue rollups are currently facing is decentralizing certain components in the system. The primary issue in this regard is the sequencer, which undoubtedly plays an essential role within any rollup system.
Current State of Rollup Sequencing
We already know what the role and responsibility of sequencers is within rollups. So let's look at the current sequencer design landscape and the trade-offs these designs come with. L2Beat offers an overview of the status of the sequencers of the most prominent L2s out there. As is evident from the below screenshot, there is still some work to be done here. Luckily, a lot of development is happening on this as you are reading this report.
Currently, all major rollups rely on a single centralized operator, often managed by the team responsible for the respective rollup. While that makes sense in an initial stage (as it is very easy to set up & maintain), most rollups aim to decentralize the sequencer in the future. However, rollups like Arbitrum can hardly be blamed for continuously stating that they are going to decentralize their sequencer “soon”, considering the revenue it generates. As of May 2023, over 5,900 ETH in fee revenue was reserved for the sequencer, according to Arbiscan.
From a UX perspective, it doesn't really matter whether the sequencer is centralized or not (centralized sequencers actually tend to have faster pre-confirmation times), assuming the sequencer operates as intended. But having a single centralized sequencer does introduce additional trust assumptions. The issues that come with a centralized sequencer design are:
Single Point of Failure: There is no viable alternative if the sequencer goes offline due to any of a variety of potential issues.
Monopolistic behavior: To maximize profit, the sequencer could artificially inflate the price that must be paid to get transactions included in a block.
Weak censorship resistance: There is usually no mechanism to circumvent transaction exclusion or other dubious behavior.
No atomic composability: These ‘siloed’ setups need a cross-sequencer bridge to communicate with other rollups and won’t be atomically composable in any way.
Luckily, there are mechanisms that can help mitigate the risks caused by a centralized sequencer. Even though suboptimal, an escape hatch (like the one Arbitrum has implemented) can prevent transactions from being excluded from blocks entirely.
Sequencer Setups of Major Rollups
As rollups scale and grow, one risk attracting significant attention is the potential failure of sequencers and validators. Arbitrum for example operates a centralized sequencer, run by the Foundation, with 13 whitelisted validators exclusively running the state validation mechanism.
Summary of the current sequencer landscape. (Source: Twitter)
Like Arbitrum, Optimism operates a centralized sequencer run by the Optimism Foundation. If the sequencer goes offline, users can still force transactions into the L2 network via Ethereum L1, providing a sort of safety net. Optimism plans to decentralize the sequencer in the future using a crypto-economic incentive model and governance mechanisms, but the exact architecture has not been agreed upon as of now (more on this later). A while ago, the Optimism Foundation even put out an RFP to enhance its sequencer design.
zkSync Era also has a centralized operator serving as the network’s sequencer. Given the project's early stage, it even lacks a contingency plan for operator failure. Future plans include decentralizing the operator role by establishing validators and guardians, though.
Finally, Polygon’s zkEVM features a centralized sequencer operated by the foundation as well. If the sequencer fails, user funds are frozen, meaning users can’t access them until the sequencer is back online. However, Polygon plans to decentralize sequencing as part of the Polygon 2.0 roadmap.
Approaches to Sequencer Decentralization
As a detailed analysis of decentralized sequencing models would go beyond the scope of this article, we won’t cover the different approaches in depth. It is important to note, however, that a decentralized sequencer is key to addressing the aforementioned issues of a centralized sequencer design.
While we will briefly touch upon Optimism’s plans to decentralize the sequencer later in this report, here we give just a short overview of other siloed (not shared) but decentralized sequencing proposals currently being discussed in other ecosystems:
A while ago, Aztec released an RFP asking the public to come up with decentralized sequencer designs. In total, 8 designs were deemed interesting (they can be found in this post), but only 2, Fernet and B52, have been selected for further research. Metis, another L2 working on its own sequencer, came up with a design dubbed the ‘decentralized sequencer pool’, part of its goal to become a fully decentralized and functional rollup.
Decentralization of State Validation Mechanism
To ensure security in the context of state validation, it’s important that proof generation (in the case of validity rollups) or fault proving mechanisms (in the context of optimistic rollups) are permissionless and decentralized. Let’s have a look at this in more detail:
Decentralized Provers in Validity Rollups:
Integrity Assurance: In Validity rollups, provers are responsible for generating proofs of validity for batches of transactions. The decentralization of provers ensures that no single entity or a small group of entities can control the validation process.
Resilience and Redundancy: Decentralization among provers also creates a system of redundancy. If a prover were to go offline or act maliciously, others in the network can continue to generate proofs and ensure the rollup can still validate transactions. This improves liveness guarantees and ensures uninterrupted service.
Competitive Verification: A decentralized network of provers promotes competition in generating validity proofs. This competition can lead to faster validation and better security as provers vie to provide accurate and timely validations to maintain their reputation and earn incentives.
Trust Minimization: The nature of decentralization minimizes trust assumptions. Users don't need to trust a single centralized entity. Instead, the mathematically verifiable proofs generated by a decentralized network of provers provide a trustless validation system.
Permissionless Fraud Proving in Optimistic Rollups:
Open Participation: Optimistic rollups operate on the principle of optimistic execution, where transactions are assumed to be valid unless proven otherwise. The permissionless nature allows anyone to challenge potentially fraudulent transactions. This open participation is crucial for maintaining network integrity and trust (if you can’t verify that the sequencer isn’t posting invalid state roots, you have to trust someone else to do so).
Incentivized Honesty: Fault proofs not only allow for the detection and correction of fraudulent activities but also incentivize honesty. Malicious actors are penalized if a fault proof succeeds, and this penalty serves as a deterrent for dishonest behavior.
Enhanced Security: Permissionless and decentralized fault proving creates a self-regulating ecosystem where malicious actions are likely to be caught and corrected. This collective vigilance enhances the security and robustness of the network.
Community-Driven Regulation: Permissionless fault proving empowers the community to act as a regulatory force. This decentralized approach to regulation promotes transparency, fairness, and a sense of collective ownership over the network's security and integrity.
Both decentralized provers in Validity Rollups and permissionless fault proving in Optimistic Rollups embody the ethos of blockchain technology, fostering a decentralized, secure, and open environment conducive for transparent interactions and scalable solutions.
Rollup Landscape
The current state of Smart Contract Rollups is subdivided into Optimistic Rollups and Validity Rollups (ZK Rollups).
Optimistic Rollups bundle and execute transactions off-chain and publish the transaction data back to mainnet, where consensus is reached, while Validity Rollups bundle transactions, run the computation off-chain to reduce the amount of data published to mainnet, and submit a cryptographic proof (validity proof) to prove the correctness of their state changes.
Optimistic Rollups can be further subdivided into pure Optimistic Rollups and Optimiums.
The difference between pure Optimistic Rollups and Optimiums lies in their Data Availability (DA) solution, as both use fault proof systems. While Optimiums publish data off-chain, pure Optimistic Rollups publish data on-chain (to Ethereum).
Validity Rollups can likewise be subdivided into pure Validity Rollups and Validiums. Here too, the difference is the DA solution: pure Validity Rollups publish data on-chain, while Validiums publish data off-chain.
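This 2×2 taxonomy can be captured compactly. The following sketch encodes it as a small TypeScript type, purely as a reading aid for the classification above.

```typescript
// The rollup taxonomy described above: proof system x data availability.
type ProofSystem = "fault" | "validity";
type DataAvailability = "on-chain" | "off-chain";

interface RollupClass {
  name: string;
  proof: ProofSystem;
  da: DataAvailability;
}

const taxonomy: RollupClass[] = [
  { name: "Pure Optimistic Rollup", proof: "fault", da: "on-chain" },
  { name: "Optimium", proof: "fault", da: "off-chain" },
  { name: "Pure Validity Rollup", proof: "validity", da: "on-chain" },
  { name: "Validium", proof: "validity", da: "off-chain" },
];
```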
The OP Stack - Bedrock of the Ecosystem
Introduction to Optimism Bedrock
The OP Stack, as it stands, is the foundational software set that powers Optimism. It's the tech stack that the Optimism Mainnet is built on. However, its evolutionary trajectory is geared towards eventually supporting the Optimism Superchain and its associated governance. Oversight and continual refinement of the OP Stack are diligently carried out under the guidance of the Optimism Collective. The overarching vision of the OP Stack is to position it as a public good, bringing value to both the Ethereum and Optimism communities.
A crucial USP of the OP Stack is its versatility. While its contemporary design has been a big step in simplifying the process of creating L2 blockchains, it doesn't singularly define the OP Stack. Rather, the OP Stack encapsulates all the software components that exist in the Optimism universe. And, as Optimism evolves, so does the OP Stack.
The OP Stack is composed of software elements that either distinctly define a layer within the Optimism ecosystem or can be integrated as a module within these layers. At the center stands infrastructure that facilitates the operation of L2 blockchains. However, it goes beyond the foundational layers of a blockchain as higher layers might include tools such as block explorers, message passing mechanisms, governance systems, and more.
An intriguing architectural distinction of the OP Stack is exactly this layering. The foundational layers, for instance the Data Availability Layer, are very strictly defined. But as one moves higher up the stack, the layers like the Governance Layer tend to be more flexible and only loosely defined.
The current version and most recent iteration of the OP Stack is “Optimism Bedrock”. Bedrock equips builders with all the tools necessary to build out a production-ready optimistic rollup. The Bedrock upgrade significantly improved the OP Stack in various regards:
Reduced Transaction Costs: Transaction fees are drastically lowered through optimized data compression while using Ethereum for data availability. Additionally, the elimination of EVM execution-related gas costs during L1 data submission further lowered fees by roughly 10%.
Fast Deposit Times: Bedrock’s node software now supports L1 re-orgs, thereby drastically reducing deposit wait times from a previous maximum of 10 minutes to around 3 minutes.
Modular Proof Systems: A critical enhancement in Bedrock is its ability to detach the proof system from the main stack, allowing rollups to utilize either fault or validity proofs (such as zk-SNARKs) to verify execution validity.
Superior Node Performance: Node software performance has seen a significant boost. Nodes can now execute several transactions within a single rollup block, a departure from the previous "one transaction per block" protocol. In conjunction with improved data compression, this efficiency means state growth is cut down by an estimated 15GB annually.
Greater Ethereum Compatibility: Bedrock's design is closely aligned with Ethereum. This approach means several deviations present in the older versions have been rectified, including the one-transaction-per-block model mentioned above, unique opcodes for accessing L1 block data, separate fee structures for L1/L2 in the JSON-RPC API, and custom representations of ETH balances.
Additionally, Optimism Bedrock is an important step towards the realization of Optimism’s Superchain vision.
SystemConfig Contract: This new introduction aims to directly define the L2 chain with L1 smart contracts. The objective is to encapsulate all details defining the L2 chain, including the creation of unique chain IDs, setting block gas limits, etc. (a read-only sketch follows after this list).
Sequencer Flexibility: A distinctive feature in Bedrock is the capacity to designate sequencer addresses via the SystemConfig contract. This innovation introduces the concept of modular sequencing, allowing various chains with unique SystemConfig contracts to have their sequencer address determined by the deployer (more on this soon).
DA Flexibility: Bedrock also provides flexibility with regards to Data Availability (DA). This flexibility allows builders to opt for optimized DA solutions, significantly reducing cost compared to DA on Ethereum L1 (more on this soon).
Multi-Proof System Support: Bedrock will adopt Cannon as its primary fault proof mechanism. Yet, it remains flexible, allowing future integration with various optimistic as well as validity proof based systems. This multiplicity in proof systems ensures a broader spectrum of security and optimization options, enhancing the resilience and adaptability of the network.
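To make the SystemConfig idea concrete, here is a minimal read-only sketch in TypeScript (ethers v6) that queries a few of the parameters discussed above. The getter names (gasLimit, batcherHash, unsafeBlockSigner) reflect our understanding of the Bedrock SystemConfig interface and should be treated as assumptions; the RPC URL and contract address are placeholders.

```typescript
import { ethers } from "ethers";

// Assumed (hedged) subset of the Bedrock SystemConfig getters.
const SYSTEM_CONFIG_ABI = [
  "function gasLimit() view returns (uint64)",
  "function batcherHash() view returns (bytes32)",
  "function unsafeBlockSigner() view returns (address)",
];

async function readSystemConfig(rpcUrl: string, systemConfigAddr: string) {
  const provider = new ethers.JsonRpcProvider(rpcUrl);
  const config = new ethers.Contract(systemConfigAddr, SYSTEM_CONFIG_ABI, provider);

  // Each value is defined on L1 and consumed by the L2 derivation pipeline.
  console.log("L2 block gas limit:", await config.gasLimit());
  console.log("Batcher hash:", await config.batcherHash());
  console.log("Sequencer (unsafe block signer):", await config.unsafeBlockSigner());
}

// Example: readSystemConfig("https://eth.example.org", "0x...SystemConfigAddress");
```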
Breaking down the OP Stack
Sequencing
Sequencer Dynamics in Bedrock and OP Mainnet
The Sequencing Layer in the OP Stack determines how user transactions on an OP Stack chain are collected & published to the DA layer(s) in use. Central to Optimism block production is the role of the "sequencer." The sequencer provides crucial services for the network:
Providing transaction confirmations (soft confirmations) and early state updates.
Constructing and executing L2 blocks.
Submitting user transactions to L1.
In the Bedrock iteration of the OP Stack, the sequencer maintains a mempool, bearing similarities to that of L1 Ethereum. To ward off potential MEV exploitation, this mempool is kept private, though. On OP Mainnet, blocks are produced every two seconds, regardless of whether they are empty, filled to the block gas limit with transactions, or anything in between.
User transactions are channeled to the sequencer via two methods:
L1 Submissions: Transactions submitted on L1, also known as deposits (even if they don't have assets attached), are included in the corresponding L2 block. Each L2 block is uniquely identified by its "epoch" (the associated L1 block) and its serial number within that epoch. If the sequencer attempts to censor a valid L1 transaction, the result is a state different from what the verifiers compute. The fault proof mechanism that is consequently initiated equips OP Mainnet with L1-Ethereum-level censorship resistance (as seen earlier in this report). A minimal deposit sketch follows after this list.
Direct Submissions: Transactions can be dispatched directly to the sequencer. These bypass the expense of a separate L1 transaction but aren't censorship-resistant to the same degree, since only the sequencer is aware of them until they are published in a batch.
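As referenced above, the following hedged sketch forces a deposit into an OP Stack chain by calling depositTransaction on the Bedrock OptimismPortal contract from L1, bypassing the sequencer. The function signature matches the Bedrock specification to the best of our knowledge; the portal address, signer, and amounts are placeholders.

```typescript
import { ethers } from "ethers";

// Assumed Bedrock OptimismPortal entry point for deposits (forced inclusion).
const PORTAL_ABI = [
  "function depositTransaction(address _to, uint256 _value, uint64 _gasLimit, bool _isCreation, bytes _data) payable",
];

async function forceDeposit(l1Signer: ethers.Signer, portalAddr: string) {
  const portal = new ethers.Contract(portalAddr, PORTAL_ABI, l1Signer);

  // Deposit 0.1 ETH to ourselves on L2 without touching the sequencer.
  const tx = await portal.depositTransaction(
    await l1Signer.getAddress(),        // _to: recipient on L2
    ethers.parseEther("0.1"),           // _value: ETH credited on L2
    100_000n,                           // _gasLimit: L2 gas limit for the deposit
    false,                              // _isCreation: not deploying a contract
    "0x",                               // _data: empty calldata for a plain transfer
    { value: ethers.parseEther("0.1") } // ETH locked on L1
  );
  await tx.wait();
}
```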
At present, the sole responsibility of sequencing (a.k.a. block production) on OP Mainnet rests with the Optimism Foundation. There's a vision however to eventually decentralize the Sequencer role. We will explore this in more depth in the next section.
Progression towards a decentralized Sequencing Model
In the default rollup configuration of the OP Stack (which is what OP Mainnet is built on), sequencing is overseen by a singular, centralized sequencer. The final vision, though, envisions sequencing as modular, allowing chains flexibility in selecting or altering their sequencer configuration. An evolutionary step from the single sequencer model will be to add a “multiple sequencer” module to the OP Stack, in which the sequencer is selected from a predefined pool of candidates. The precise mechanisms for this set's creation and the node selection can be tailored to the needs of the individual OP Stack-based chains.
However, for now the default Sequencer module for the OP Stack (and hence OP Stack based chains) remains the single sequencer module, allowing a governance mechanism to decide the sequencer's identity. Through the SystemConfig contract, teams building on the OP Stack have the option to choose an alternative sequencing solution such as a shared sequencer (e.g. Espresso, Astria or Radius) though.
The first decentralization step of the default sequencer module aims to implement sequencer rotation. The sequencer rotation mechanism, though not finalized yet, combines an economic mechanism, promoting competitive sequencing, and a governance mechanism safeguarding the network's long-term interests. Subsequent steps aim to support multiple concurrent sequencers, drawing inspiration from standard BFT consensus protocols adopted by other L1 protocols like Polygon and Cosmos.
Shared Sequencing in the Superchain Vision
In the Superchain vision, OP Stack based chains are conceptualized as app-specific shards of what feels to the user like a unified, logical chain. Thanks to a shared sequencer producing blocks across different OP chains, atomic interactions between these networks become feasible.
While this offers considerable potential, there are inherent challenges, particularly since the shared sequencer doesn't actually execute the proposed transactions. The lack of execution guarantees limits the degree to which atomic composability can be leveraged within the framework. Potential solutions like Shared Validity Sequencing (SVS) and SUAVE could offer pathways to address these concerns.
Shared Validity Sequencing (SVS): Shared Validity Sequencing, as proposed by Umbra Research, introduces an advanced block-building algorithm tailored for the shared sequencer to ensure atomicity and conditional execution guarantees. It also postulates the implementation of shared fault proofs among the participating rollups to enhance the integrity of cross-chain actions. While geared primarily towards optimistic rollups, SVS can also be applied to validity rollups (which are naturally asynchronously composable) to enable synchronous composability.
Flashbots’ SUAVE: A solution like SUAVE can help by providing economic atomicity (as any builder can, basically). However, only proposers (validators) have the power to guarantee transaction inclusion. SUAVE executors are not necessarily validators of other chains and hence can’t guarantee atomic inclusion of cross-chain transactions. Shared sequencers, on the other hand, act as proposers of the rollups that opt into the sequencing solution. But since the shared sequencer doesn't execute transactions, it can’t guarantee that a transaction won’t revert upon execution. That’s why shared sequencers will need stateful builders such as SUAVE to sit in front of them. SUAVE anticipates that different cross-chain atomicity approaches will emerge and looks to support preference expression for all of them, which essentially makes SUAVE a demand-side aggregator for cross-domain preferences.
In conclusion, sequencing within the OP Stack is continually evolving, transitioning from a centralized single sequencer model towards a shared, decentralized sequencer framework, mirroring the broader aspirations of the Optimism ecosystem in line with the Superchain vision.
Execution environment
Introduction to the Execution Layer in the OP Stack
Central to the architecture of the OP Stack is the Optimistic Virtual Machine (OVM), a specialized execution layer module. Notably, it draws extensively from the Ethereum Virtual Machine (EVM) in both state representation and the state transition function, offering a harmonious integration between the two systems.
However, this is not a 1:1 copy of the EVM. Rather, it's a lightly modified version of the EVM that incorporates:
Support for L2 transactions initialized on Ethereum.
An additional L1 data fee applied to each transaction to offset the cost of publishing transaction data to Ethereum (a toy fee calculation follows below).
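As referenced above, the L1 data fee can be approximated with a toy calculation. The sketch below follows the Bedrock-style formula (calldata gas plus a fixed overhead, multiplied by the L1 base fee and a scalar with six decimals); treat the parameter names and scaling as illustrative assumptions rather than the exact on-chain implementation.

```typescript
// Illustrative sketch of an L1 data fee in the style described above:
// l1_fee = l1_base_fee * (calldata_gas + overhead) * scalar / 1e6
function l1DataFee(
  txData: Uint8Array,
  l1BaseFeeWei: bigint,
  overhead: bigint, // fixed per-tx calldata-gas overhead (assumed)
  scalar: bigint    // dynamic overhead, scaled by 1e6 (assumed)
): bigint {
  let calldataGas = 0n;
  for (const byte of txData) {
    calldataGas += byte === 0 ? 4n : 16n; // EVM calldata gas pricing
  }
  return (l1BaseFeeWei * (calldataGas + overhead) * scalar) / 1_000_000n;
}
```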
The OVM doesn't rigidly enforce transaction validity. Instead, it adopts an optimistic approach, processing transactions under the assumption of validity and leaning on the L1 chain for arbitration in cases of state transition discrepancies via so-called fault proofs.
Transaction Dynamics within the OVM
Transaction processing within the OVM exhibits strong parallels with EVM transactions. A transaction can be initiated by an externally owned account (EOA) or a contract account. When initiated by EOAs, transactions can manifest as:
Asset transfers (Example: Alice transferring 5 ETH to Bob).
Contract deployments, evidenced when compiled bytecode is relayed as a data component of a transaction.
Smart contract interactions (e.g. Alice buying 5 UNI tokens).
Contracts can initiate transactions, known as “message calls”, as well. These can be directed at an EOA or another contract. Furthermore, within the OVM's ecosystem, a contract can even deploy other contracts via a specialized contract-creation transaction.
Like its counterpart (the EVM), the OVM employs the concept of “gas”, a mechanism to meter the execution steps of each transaction and prevent potentially damaging infinite execution loops.
Bytecode & EVM Equivalence
Bytecode serves as the foundational low-level instructions, indispensable for the OVM's functioning. Prior to their deployment within the OVM, high-level EVM-compatible programming languages (such as Solidity) undergo a transformation into bytecode. This bytecode then becomes operational, executing a sequence of opcodes based on the respective transaction inputs. More specifically, Optimism uses a Solidity compiler that converts Solidity to Yul, then to EVM instructions, and finally to bytecode.
However, it's pivotal to distinguish between EVM compatibility and true EVM equivalence. While the OVM's architecture is EVM-compatible, it isn’t a perfect duplicate of the EVM. With mere compatibility, developers often still need to adjust or re-engineer the fundamental code that Ethereum-based infrastructure relies on.
Attaining true EVM equivalence on the other hand means achieving comprehensive alignment with the Ethereum Yellow Paper, the rigorous definition of the Ethereum protocol, and EVM bytecode support. This holistic integration ensures a seamless meld between existing Ethereum toolsets and the L2 system. For developers, a high degree of compatibility (or rather equivalence) with the EVM is hence beneficial, significantly reducing the developer overhead and making migration of dApps from L1 to L2 a seamless experience.
Optimism always took the approach that if L2s want to leverage Ethereum’s infrastructural network effects, they must be EVM equivalent. Hence, Optimism built their software on Geth, Ethereum’s most robust and popular execution layer client implementation from the beginning. But let’s see how a transaction typically gets executed in go-ethereum.
On each block, the state processor’s Process method is called, which calls ApplyTransaction on each transaction. Internally, transactions are converted to messages, and messages are applied to the current state. The newly produced state is finally stored back on the blockchain.
This core data flow remains the same on Optimistic Geth, with some modifications to make transactions “OVM friendly”:
Modification 1: Converting Transactions to OVM Messages via Sequencer Entrypoint
Transactions are transformed into OVM (Optimistic Virtual Machine) Messages, with signatures being replaced by message data inclusive of the original transaction signature. The "to" field is substituted with the address of the "sequencer entrypoint" contract to achieve a compressed transaction format for Ethereum publication, adding scaling benefits.
Modification 2: Enabling OVM Sandboxing through Execution Manager
For transactions to be processed within the OVM sandbox, they are required to be sent to the Execution Manager's run function. This modification ensures all messages are internally directed to the Execution Manager, replacing the message’s “to” field with the Execution Manager’s address, and packing the original data as arguments for the run function, simplifying the process for users.
Modification 3: State Manager Call Interceptions
The StateManager, a specific contract absent on Optimistic Geth, is only deployed during fault proofs. During run call argument packing, a hardcoded State Manager address is included by Optimism’s Geth, which then becomes the final destination for any ovmSSTORE or ovmSLOAD calls. On Layer 2, messages aimed at the State Manager are intercepted and rerouted directly to Geth’s StateDB, or otherwise disregarded.
Modification 4: Transition to Epoch-based Batches from Blocks
Unlike traditional block structures, the OVM keeps an ordered transaction list, eliminating block gas limits. Gas consumption is instead regulated through time segments known as epochs. Transactions undergo a pre-execution check for new epoch initiation and, post-execution, their gas usage is added to the epoch's cumulative gas tally. Distinct gas limits are set for sequencer-submitted transactions and “L1 to L2” transactions within each epoch, with transactions surpassing these limits being returned prematurely.
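As a conceptual illustration of this epoch-based metering, the sketch below tracks cumulative gas per epoch with separate limits for sequencer-submitted and L1-to-L2 transactions. All names and numbers are illustrative, not taken from the actual implementation.

```typescript
// Conceptual sketch of epoch-based gas metering (illustrative only).
type TxOrigin = "sequencer" | "l1ToL2";

class EpochGasMeter {
  private used: Record<TxOrigin, bigint> = { sequencer: 0n, l1ToL2: 0n };

  constructor(private readonly limits: Record<TxOrigin, bigint>) {}

  // Returns false (transaction is returned prematurely) if this epoch's
  // cumulative gas for the given origin would exceed its limit.
  record(origin: TxOrigin, gasUsed: bigint): boolean {
    if (this.used[origin] + gasUsed > this.limits[origin]) return false;
    this.used[origin] += gasUsed;
    return true;
  }

  // Starting a new epoch resets the cumulative tallies.
  startNewEpoch(): void {
    this.used = { sequencer: 0n, l1ToL2: 0n };
  }
}
```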
Modification 5: Introduction of Rollup Sync Service
The Rollup Sync Service, a novel process, operates in tandem with standard Geth operations. It is tasked with monitoring Ethereum logs, processing them, and injecting the corresponding L2 transactions into the L2 state via Geth’s worker, enhancing synchronization between L1 and L2.
State Validation Mechanism
As outlined in previous sections of this report, OP Mainnet and other OP Stack based chains are optimistic rollups. In such a system, state commitments are made public on a settlement layer, which, in the case of OP Mainnet (and most other OP Stack based chains), is Ethereum L1. What differentiates this system from validity rollups is that these commitments don't come with any direct proof of validity. They remain in a tentative state during a designated "challenge window". If these commitments face no challenges throughout this window, which currently spans seven days, they attain finality (a toy illustration follows below). Following this finality, Ethereum's smart contracts can securely process withdrawals based on that commitment. In this chapter, we will delve deeper into the nuances of state validation in the context of the OP Stack.
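As referenced above, the snippet below is a toy illustration of the challenge window: a commitment becomes final once the window elapses unchallenged. This is a conceptual sketch only; the real logic lives in the L1 contracts.

```typescript
// Toy finality check for the 7-day challenge window described above.
const CHALLENGE_WINDOW_SECONDS = 7 * 24 * 60 * 60;

function isFinalized(committedAt: number, now: number, challenged: boolean): boolean {
  // A commitment attains finality once the window elapses unchallenged.
  return !challenged && now >= committedAt + CHALLENGE_WINDOW_SECONDS;
}
```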
Fault Proof Mechanism
When a state commitment is subject to a challenge, it can be deemed invalid via a process known as "fault proof" (also called "fraud proof"). If a challenge against a commitment succeeds, this commitment is purged from the State Commitment Chain (SCC), only to be replaced eventually by a new proposed commitment. It's important to be aware that while the commitment might be revoked, the state of OP Mainnet remains intact and won’t be affected by the challenge.
In the previous chapter, we looked at the OVM running in its “simple” execution mode on L2. When running on L1, however, the OVM is in fault proof mode and additional components get enabled (both the Execution Manager and the Safety Checker are deployed on both L1 and L2, though):
Fraud Verifier: Contract that coordinates the fault proof verification process. It calls the State Transitioner Factory to initialize a new fault proof process and if the fault proof was successful it prunes any batches from the State Commitment Chain that were published after the dispute point.
State Transitioner: Activated by the Fraud Verifier when there's a dispute involving a pre-state root and the contested transaction. It works by reaching out to the Execution Manager to accurately carry out the transaction on-chain based on established guidelines, aiming to generate the right post-state root for the contentious transaction. A fault proof that's executed correctly will show a discrepancy between the post-state root in the state transitioner and the one in the State Commitment Chain. The state transitioner can exist in one of these three states:
Pre-Execution
Post-Execution
Complete
State Manager: This is where data supplied by users is kept. It acts as a "temporary" state manager, set up exclusively for the fault proof, holding only details about the state affected by the debated transaction.
The OVM running in fault proof mode. (Source: Paradigm)
Fault proofs are broken down into a few steps:
Step 1: Declare which state transition you’re disputing
The user calls the Fraud Verifier’s initializeFraudVerification function, providing the pre-state root (and proof of its inclusion in the State Commitment Chain) and the transaction being disputed (and proof of its inclusion in the Transaction chain).
A State Transitioner contract is deployed via the State Transitioner Factory.
A State Manager (SM) contract is deployed via the State Manager Factory. It will not contain the entire L2 state, but will be populated with only the parts required by the transaction; you can think of it as a “partial state manager”.
The State Transitioner is now in the Pre-Execution phase.
Initializing a fault proof deploys a new State Transitioner & State Manager, unique to the state root and transaction in dispute. (Source: Paradigm)
Step 2: Upload all the transaction pre-state
Executing the disputed transaction directly will result in an INVALID_STATE_ACCESS error. This is because the L2 state it references isn't loaded onto the newly-established L1 State Manager from Step 1. If the State Manager isn't pre-filled with the necessary state, the OVM sandbox identifies this and mandates that all required states be loaded before proceeding.
To illustrate, if the contested transaction involved an elementary ERC20 token exchange, the beginning steps would include:
Deploy the ERC20 on L1: To ensure consistent execution across L1 and L2, the contract bytecode for both must be the same. This alignment is achieved by adding a special prefix to the bytecode, which then replicates it in memory and saves it to the given address.
Initiate proveContractState: This action connects the L2 OVM contract with the newly launched L1 OVM contract. Even though the contract is now operational and linked, its storage remains empty. The "linking" process involves using the OVM address as an identifier in a map, where its corresponding value showcases the contract's account status.
Call proveStorageSlot: Regular ERC20 transfers decrease the sender's balance and increase the recipient's balance by an equivalent amount, with balances usually stored in a map. This action introduces the pre-transaction balances for both parties. Following Solidity's storage layout for mappings, the storage identifier of each balance is keccak256(address . slot), i.e. the key concatenated with the mapping's slot (a derivation sketch follows below the figure caption).
During the fault proof pre-execution phase, all state that is touched must be uploaded. (Source: Paradigm)
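For the proveStorageSlot step of the ERC20 example, the storage slot of a balance entry can be derived off-chain as follows. This sketch assumes the balances mapping sits at slot 0, which varies per contract; it simply applies Solidity's documented mapping layout, keccak256(abi.encode(key, slot)).

```typescript
import { ethers } from "ethers";

// Derive the storage slot of balances[holder] for a Solidity
// mapping(address => uint256) stored at `mappingSlot` (assumed 0 here).
function balanceSlot(holder: string, mappingSlot: bigint = 0n): string {
  return ethers.keccak256(
    ethers.AbiCoder.defaultAbiCoder().encode(
      ["address", "uint256"],
      [holder, mappingSlot]
    )
  );
}

// Example: balanceSlot("0x00000000000000000000000000000000DeaDBeef");
```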
Step 3: Once all pre-state has been provided, run the transaction
The user must then trigger the transaction’s execution by calling the State Transitioner’s applyTransaction. In this step, the Execution Manager starts to execute the transaction using the fault proof’s State Manager. After execution is done, the State Transitioner transitions to the Post-Execution phase.
When the L2 transaction gets executed on L1, it uses the State Manager which was deployed for the fault proof and contains all the uploaded state from the pre-execution phase. (Source: Paradigm)
Step 4: Provide the post-state
During the L1 execution (Step 3), alterations will occur in the contract storage slots or account status (like nonces). These modifications should prompt a shift in the State Transitioner’s subsequent state root. Nonetheless, due to the State Transitioner / State Manager duo's lack of full knowledge of the L2 state, they can't automatically determine the updated post-state root.
To address this, whenever the value of a storage slot or an account's status changes, it is flagged as "modified", and a counter of uncommitted storage slots or accounts is incremented. For each altered item, users must present a Merkle proof from the L2 state confirming the observed value. Whenever a change to a storage slot is committed, the storage root of the contract's account gets refreshed. Once all modified storage slots are recorded, the contract's state is also confirmed, updating the transitioner's post-state root. For every piece of post-state that is provided, the counter is decremented accordingly.
Consequently, it's anticipated that after committing the state adjustments for every contract affected by the transaction, the derived subsequent state root will be accurate.
In the post execution phase, any state that was modified must be uploaded. (Source: Paradigm)
Step 5: Complete the state transition & finalize the fault proof
Finalizing the state transition is straightforwardly achieved by invoking completeTransition. This ensures that all accounts and storage slots from Step 4 are recorded, confirmed by verifying that the counter for uncommitted state stands at 0.
Subsequently, the finalizeFraudVerification function is activated on the Fraud Verifier contract. It assesses if the state transitioner has concluded its operations. If affirmative, it triggers the deleteStateBatch function, which then removes all state root batches post and inclusive of the contentious transaction from the SCC. The Canonical Transaction Chain (CTC) stays intact, ensuring the initial transactions are processed again in their original sequence.
Once the State Transitioner is complete, the fault proof is finalized & the invalid state roots get pruned from the state commitment chain. (Source: Paradigm)
With the rollout of the OVM 2.0 upgrade, though, Optimism's fault proof mechanism was temporarily suspended. Presently, users of OP Mainnet and other OP Stack based chains must trust the Sequencer node, overseen by the Optimism Foundation, to reliably publish valid state roots to Ethereum.
However, the team argues that the addition of fault proofs doesn't significantly bolster the security of a system if upgrades to that system can be made within the 7-day challenge window via “fast upgrade keys”. For OP Mainnet, the security heavily depends on these upgrade keys. The ultimate objective for the OP Mainnet is to pioneer the deployment of fault proofs that can independently ensure the system's security, devoid of the need for fast upgrade keys.
But let’s have a look at how this might be achieved in the future. As mentioned before, there are multiple state validation mechanisms that can be used in the context of the OP stack.
Attestation-based Fault Proof
An Attestation-based Fault Proof mechanism uses an optimistic protocol to establish a view of an OP Stack chain. In optimistic settlement mechanisms generally, Proposer entities can propose what they believe to be the current valid state of the OP Stack chain. If these proposals are not invalidated within a certain period of time (the “challenge period”), then the proposals are assumed by the mechanism to be correct. In the Attestation Proof mechanism in particular, a proposal can be invalidated if some threshold of pre-defined parties provide attestations to a valid state that is different than the state in the proposal. This places a trust assumption on the honesty of at least a threshold number of the pre-defined participants.
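A conceptual sketch of the threshold logic described above: a proposal counts as invalidated once a sufficient number of pre-defined attestors sign off on a different output root. The signature scheme and names here are illustrative assumptions, not the actual attestation contract.

```typescript
import { ethers } from "ethers";

// Conceptual attestation check (illustrative only): a proposed output root is
// considered invalidated if `threshold` known attestors signed a different root.
function isInvalidated(
  proposedRoot: string,
  attestedRoot: string,
  signatures: string[],
  attestors: Set<string>, // lowercase attestor addresses
  threshold: number
): boolean {
  if (proposedRoot === attestedRoot) return false; // nothing to dispute

  // EIP-191 digest of the attested root (the real scheme may differ).
  const digest = ethers.hashMessage(ethers.getBytes(attestedRoot));

  let valid = 0;
  for (const sig of signatures) {
    const signer = ethers.recoverAddress(digest, sig).toLowerCase();
    if (attestors.has(signer)) valid++;
  }
  return valid >= threshold;
}
```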
Fault Proof Optimistic Settlement (proposed)
The proposed Fault Proof Optimistic Settlement mechanism closely mirrors the current Attestation-based system but replaces the MultiSig challenger with an open and permissionless fault proving procedure. A fault proof, when correctly constructed, should efficiently negate any incorrect propositions within the designated challenge period. Trust in this model hinges on the accuracy of the fault proof's design. A notable development in this domain is Cannon. OP intends to utilize Cannon as an interactive fault proof system within Bedrock and the OP Stack.
Cannon's Features:
EVM Equivalence: Cannon is a pioneering EVM-equivalent fault proof system.
Reuse of geth EVM: Instead of reinventing the wheel, Cannon taps into the existing EVM implementation, namely geth. More specifically, minigeth, a streamlined subset of go-ethereum, is compiled to MIPS, a Reduced Instruction Set Computer (RISC) architecture. This design is remarkably simple, resonating with Optimism's love for simplicity.
Stateless Approach: Cannon deviates from traditional fault proof designs. Instead of re-running the entire EVM transaction on L1, it executes just one minigeth MIPS command on-chain. Minigeth's state database has been replaced with a new entity termed the preimage oracle. This ingenious abstraction allows the fault proof software to access any information within the L1 or L2 state, irrespective of state size.
Cost Efficiency: Existing rollups impose costs as they submit transaction data to a smart contract. Cannon's approach reduces L1 gas costs for an L2 transaction, leading to significant user savings.
Validity Proof Settlement (proposed)
The Validity Proof Settlement mechanism leverages mathematical proofs to vouch for the accuracy of a proposed view. Only with a valid proof is a proposed state accepted, making trust dependent on the proof's accuracy.
In summary, state validation in the context of the OP Stack is a dynamic and evolving field. As technologies and methods mature, they promise a more secure, efficient, and reliable Optimism Mainnet.
Settlement Layer
The Settlement Layer is the place where proof verification (in the validity rollup case) or rather fault proving disputes (in the optimistic case) take place. Hence, settlement is a mechanism on external blockchains (the settlement layer) that establishes a view of the state of an OP Stack chain on those external chains. For each OP Stack chain, there may be one or more Settlement mechanisms on one or more external chains. Settlement layer mechanisms are read-only and allow parties external to the network to make decisions based on the state of an OP Stack chain.
The term “Settlement Layer” has its origins in the fact that settlement layer mechanisms are often used to handle withdrawals of assets out of a blockchain. This sort of withdrawal system first involves proving the state of the target blockchain to some third-party chain and then processing a withdrawal based on that state. However, the settlement layer is not strictly (or even predominantly) financial and, at its core, simply allows a third-party chain to become aware of the state of the target chain.
Once a transaction is published and finalized on the corresponding data availability (DA) layer, the transaction is also finalized on the OP Stack chain. Short of breaking the underlying DA layer, it can no longer be modified or removed. It may not be accepted by the settlement layer yet because the settlement layer needs to be able to verify transaction results, but the transaction itself is already immutable.
While most of the OP Stack based chains use Ethereum for settlement, it’s also possible to opt for alternative settlement layers. One example is opBNB that uses BNB Chain for settlement (fault proving) as well as DA.
Data Availability Solution
The data availability (DA) layer defines where the raw inputs (transaction data) of an OP Stack based chain are published. An OP Stack chain can use one or more DA modules to source its input data. Because an OP Stack chain’s state is derived from the DA layer, the DA module(s) used have a significant impact on the security model of the system. For example, if a certain piece of data can no longer be retrieved from the DA layer, it may not be possible to sync the chain anymore (see chapter on DA Problem).
Currently, Ethereum is the most widely used Data Availability module for the OP Stack. When using the Ethereum DA module, source data can be derived from any piece of information accessible on the Ethereum blockchain. This includes Ethereum calldata, events, and will include EIP-4844 data blobs after successful implementation of EIP-4844.
However, OP Stack based chains can also opt for alternative DA solutions such as the ones discussed earlier in this report, including Celestia, Avail, EigenDA, or a more centralized data availability committee (DAC). Just like Optimism combines settlement and DA on one layer (Ethereum), projects building on the OP Stack can also opt for an alternative smart contract chain for settlement and DA in one. An example is the aforementioned opBNB, which uses BNB Chain for this purpose.
Projects building on OP Stack
The OP Stack technology has gained impressive traction in the L2 space and since inception has attracted a substantial number of builders whose aims are strongly tied to the Optimism Collective initiative.
The chains built with the OP Stack are approaching a cumulative $3B market cap (and some of them don’t even have a token yet).
Optimism
An EVM-equivalent Optimistic Rollup chain.
Optimism is live on the OP Stack and has been running on Bedrock since June 2023.
According to L2BEAT, it’s ranked 2nd with a TVL of $2.61B.
More than 150 projects are live on Optimism.
Base
An Optimistic Rollup incubated by Coinbase utilizing OP stack technology.
Base is live on the OP Stack and according to L2BEAT, it’s ranked 3rd with a TVL of $549M.
More than 100 projects have been deployed on Base to date.
opBNB
An optimized L2 chain by Binance with lower fees and high throughput utilizing OP Stack.
The average transfer on opBNB costs about $0.005 in gas, with throughput of over 4,000 transactions per second (TPS).
opBNB is live on mainnet and according to DefiLlama, it’s ranked 16th among rollups with a TVL of $917k.
More than 20 projects are now live on opBNB.
Manta Pacific
Manta Pacific is an Optimistic Rollup chain offering ZKaaS for dApps, utilizing the OP Stack and deployed through Caldera.
Manta Pacific is live on mainnet and according to L2BEAT, it’s ranked 16th with a TVL of $11.04M.
More than 20 projects are already building on Manta Pacific.
Aevo
A Decentralized Options Exchange utilizing OP Stack.
Aevo is already live on the OP Stack and according to L2BEAT, it’s ranked 21st with a TVL of $6.76M.
You can start trading on Aevo today.
Zora
A fast, scalable and cost-efficient L2 chain for NFTs utilizing OP Stack.
Zora is already live on the OP Stack and according to L2BEAT, it’s ranked 22nd with a TVL of $6.46M.
You can start using Zora today.
Public Goods Network
A recurring funding initiative for public goods utilizing the OP Stack.
Public Goods Network is already live on the OP stack and according to L2BEAT, it’s ranked 28th with a TVL of $553k.
Start using Public Goods Network today.
Worldcoin
A global identity and financial network intended to serve as a public utility, giving ownership to everyone.
Worldcoin is already live on the OP Stack.
Start using Worldcoin today.
The Superchain Vision
Introduction to the Superchain
Unification, not fragmentation: this is the Optimism Superchain vision of scaling together rather than apart.
The Optimism Superchain allows new chains to be deployed permissionlessly which enables them to share security, bridging, upgrades, decentralized governance, and a communication layer while maintaining an open source development stack.
The Superchain introduces a new revenue model which allows developers to benefit from fees generated by their chain through rewards, extending to protocol developers of public goods.
The Superchain will be built on the OP Stack codebase and, upon launch, will merge Optimism Mainnet with other chains utilizing the OP Stack into a single unified network of OP Chains.
This Superchain concept will generally introduce massive scalability to the OP Stack.
How the Superchain becomes a Reality
Interoperability between chains ultimately traces back to the root concern of scalability, and it is this concern that led to the Superchain concept.
The three key properties of scaling a blockchain are:
Blockchain Execution
Blockchain Storage and
Blockchain Consensus
The Blockchain Execution refers to the computation required to execute transactions and perform state changes.
The Blockchain Storage refers to the storage requirements of full nodes that maintain and store a copy of the ledger.
The Blockchain Consensus refers to how nodes in a decentralized network reach an agreement on the present state of a blockchain.
These three key properties can be scaled. In scaling the storage layer, the main issue to solve is how to allow blockchains to process and validate more data without increasing the storage requirements for full nodes, i.e. where data can be stored long-term without major changes to the trust assumptions of blockchains (also see the chapter “Data Availability”).
In scaling the consensus layer, the main issue to solve is how to reach finality faster, cheaper and with a more trust-minimized approach.
In scaling the execution layer, the main issue to solve is how to achieve more TPS or in other words how to achieve more computation per second without increasing the hardware requirement for individual full nodes.
Furthermore, five different approaches are currently being considered across the broader blockchain space to scale the execution layer. These include:
Vertical Scaling of Validator Hardware Requirements
Horizontal Scaling via Multi-Chain Ecosystems
Horizontal Scaling via Execution Sharding
Horizontal Scaling via Modularity and
Payment and State Channels.
We have already discussed these approaches in previous chapters, but will, in the context of the Superchain, revisit what is referred to as Horizontal Scaling via Multi-Chain ecosystems here. This is basically the foundation for the Superchain vision and Interoperability.
Vertical scaling can be achieved by increasing the hardware requirements for block producers. The trade-off with this approach is that it limits network decentralization, given the higher cost of running a validator or full node.
Horizontal scaling, on the other hand, is an alternative to vertical scaling that uses multiple independent blockchains across the ecosystem, with each chain having its own block production mechanism and execution capacity.
The benefit of horizontal scaling is that each individual chain has fully customizable features such as privacy features, gas token usage, node hardware requirements, permission settings, etc. To achieve horizontal scalability, chains must run in parallel.
In previous approaches to Horizontal Scaling via a multi-chain ecosystem however, each blockchain often had to bootstrap its own security through a native token issued in an inflationary manner (see Cosmos for example).
Traditional approaches to multi-chain architecture are hence fundamentally flawed as each chain introduces a new security model which results in compounding systemic risk as new chains are introduced into the ecosystem. Additionally, new chains are costly to set up as they have to spin up and maintain proprietary validator sets.
As discussed earlier, this problem can be solved by using L2s instead of trying to scale a monolithic L1 vertically or build out a network of standalone L1 chains. With a network of L2 chains that benefit from a shared source of security, chains don't require a new set of validators, as the L1 chain can serve as the consensus layer. Subsequently, adding new L2s to the ecosystem will not introduce systemic risks in the system. Additionally, the shared base layer (in combination with a shared sequencer and cross-rollup message passing protocols) will enable interoperability between chains. This allows a network of interoperable chains to be treated as a single unit, thereby supercharging the superchain vision and every participating OP stack based L2 chain.
The Superchain enables interoperability through the following properties:
Shared bridge for all OP chains: OP chains now have standardized security properties.
Shared Sequencer: Transaction ordering across all OP Chains (see chapter “Shared Sequencing in the Superchain Vision”).
Secure transaction and cross-chain messaging: Users can safely migrate assets between L2 chains, without having to worry about fragmented security.
Cheap OP chain deployment: Abstracting away high L1 transaction fees, enabling cheap and easy deployment.
Configuration options for OP chains: OP Chains can configure their DA provider, sequencer address, etc. (see chapter “The OP Stack - Bedrock of the Ecosystem”).
The Road to the Superchain
Optimism in its current state is still in Bedrock, which lays the groundwork for the Superchain vision. However, the Superchain vision is yet to reach its full potential with more key components to be added to the framework in the future.
For Optimism to make the full Superchain vision a reality, the below changes to the tech stack still need to be implemented:
Upgrade the Bedrock Bridge to be a chain factory
This will put all information defining the L2 chain on-chain, such as a unique chain ID and configuration values like the block gas limit. Once all of this data is available on-chain, the factory can deploy all required contracts and configurations.
Derive the OP chain data using the chain factory
This will allow the state of an OP chain to be computed locally from on-chain data. Since chain derivation is fully secure and permissionless, no proof system is required for it: invalid transactions are simply ignored by local node computation.
Permissionless proof system to enable withdrawals
Just like the sequencer, which orders, batches and includes user transactions in a block and submits them to mainnet, the proposer submits user withdrawals to L1. In the context of the Superchain thesis, this introduces overhead as the number of chains in the Superchain increases, as well as an upper bound on the number of chains due to limited resources on L1.
This can be resolved by:
Adding permissionless proposers (withdrawal claims), allowing anyone to submit a proposal (for withdrawal) and thereby removing the previously permissioned entity from the system.
Enabling withdrawal claims to be made only when a user needs to withdraw, thereby removing the overhead of deploying a new chain.
The above in practice is achieved by introducing a permissionless proof system to the Optimism bridge contract. The proof now can be either fault proof or validity proof due to the modular design already implemented in Bedrock (see chapter “State Validation Mechanism”).
As validity proofs have not yet been implemented, fault proofs will be the initial state validation mechanism (see chapter “State Validation Mechanism”). In a permissionless system where anyone can be a proposer and submit withdrawal claims, a bond is attached to each claim, acting as collateral. If a challenger successfully proves a claim invalid, the proposer's bond is slashed and paid out to the challenger.
Initially, a set of trusted attestors acts as the final arbiter of disputes: challengers request attestations from a large number of attestors, and these attestations are combined into a single transaction, the “Attestation Proof”, which is used to challenge invalid claims (see chapter “Attestation-based Fault Proof”).
The tradeoff of the Attestation Proof system is added latency on withdrawals in case attestors act maliciously, but in the future Optimism plans to replace it with trust-minimized proofs (the Cannon proof system).
Additionally, as discussed earlier, the Superchain comes with additional tech stack improvements, which include:
Configurable sequencer per OP chain
This will enable each OP chain deployer to configure their own sequencer address (modular sequencing), since Bedrock already allows sequencer addresses to be set in the SystemConfig contract. This means OP chains can be sequenced by different entities while retaining the standard Superchain bridge security model. The Superchain bridge ensures liveness (censorship resistance) and safety (validity): if any OP chain sequencer were to misbehave, users could submit transactions via L1 or migrate to a new OP chain with a functional sequencer, all while staying within the Superchain.
One shared upgrade path for all OP chains
This will introduce a Security Council that governs upgrades: it can initiate contract upgrades with a delay, update the set of chain attestors, and pause bridge contracts (cancelling pending upgrades) in case of emergency, following the design principle of safety over liveness. If funds are frozen during an emergency, they can be recovered via an L1 soft fork. The “L1 Soft Fork Upgrade Recovery” mechanism would enable L1 to initiate a bridge upgrade that bypasses all permissions within the Superchain bridge contract. This may introduce systemic risk to Ethereum and is still under research; the research may also be discontinued.
Implementing these features will enhance the Superchain's properties and enable Optimism to realize its Superchain vision.
Law of Chains
The Law of Chains is the ideological foundation of Optimism’s Superchain vision, aiming at unifying its collective of chains.
At its core, Optimism’s Law of Chains is a neutrality framework designed to inform and enhance governance decisions, promoting Optimism’s commitment to user protection and decentralized economic autonomy and supercharging the vision of a unified Superchain.
According to the draft in the Optimism forum, the Law of Chains is grouped into sections, with each section covering its own defined entity with respect to what is being ensured.
The Law of Chains lays out guiding principles for all OP Chains that opt in to become part of the Superchain, all of which will benefit from a shared commitment and decentralized blockspace.
But what does shared & unified refer to exactly?
Security
Quality and
Neutrality
The benefits the Law of Chains brings to the Superchain include:
Constant Improvement
If the best tech comes to Optimism, it comes to all OP chains, without each having to worry about maintenance.
Better and more available infrastructure.
This will allow all OP chains to work together and benefit from shared resources, extending to indexing, sequencing, etc.
Blockspace remains homogeneous, neutral, and open
This will protect chain users, end developers and stakeholders within the Superchain.
Metrics
When assessing the efficiency and efficacy of a blockchain network, a critical aspect to consider is its metrics. These metrics offer valuable information regarding the network's operational efficiency, user engagement, and security measures in place.
Wallets created
The quantity of wallets created within a blockchain network is frequently regarded as a crucial parameter for gauging its user community and adoption. Although a portion of these wallets might be created with malicious intent, such as Sybil attacks or artificially boosting statistics, they nonetheless furnish important information about the network's general vitality and growth.
Keeping this in mind, it's important to highlight that Optimism has seen a remarkable 35 million wallets created on its network to date. As a comparison, Polygon has 365M unique wallets created, Ethereum 247M and Arbitrum 13.5M.
Total Value Bridged
Aside from the total number of wallets created, an additional important metric to evaluate the overall health and adoption of a blockchain is the Total Value Bridged (TVB) to the chain.
This metric is different from the TVL as it takes into account all the funds bridged on Optimism, and not solely the funds locked in underlying dApps.
In the case of Optimism, the total value bridged to the chain amounts to $2.63 billion, of which roughly $1B stems from the $OP token itself.
The TVB is split between natively minted value ($1.14B) and canonically bridged value ($1.49B).
Total Value Locked
Optimism is ranked second in terms of TVB, trailing just behind Arbitrum and its TVB of $5.79 billion. When we consider the TVL metric, Optimism also ranks second among L2s on DefiLlama with $595 million.
This highlights Optimism’s growing importance in the Ethereum ecosystem. The competition with Arbitrum and its role in the DeFi landscape showcase its versatility and the value it brings to users. However, such a large difference between TVL and TVB indicates a lack of interest from users in depositing and using their liquidity in Optimism’s dApps.
The majority of the TVL is spread among 3 main projects: Synthetix (leverage trading), Velodrome (ve(3,3) DEX) and AAVE (lending/borrowing).
We will dive into those projects in the ecosystem part of this research report. The important factor to notice here is the dominance a few projects have over the global TVL, as they handle almost half of it.
While this is encouraging regarding the adoption of native projects (Synthetix and Velodrome), it could indicate a lack of market fit for the rest of the DeFi projects built on Optimism, as they struggle to attract liquidity to the chain or away from their peers.
Having a diverse and well-developed ecosystem is crucial for a chain to succeed over the long run. Users are looking for opportunities and innovation, which in turn fosters the general activity and attractiveness of a chain.
Daily activity
Daily activity, in terms of unique addresses or transactions, serves as another pivotal metric, and it has demonstrated a relatively consistent pattern over the last year.
After reaching a peak transaction volume in August 2023, activity has since decreased by almost 50%, a phenomenon also observed on its closest competitor, Arbitrum, and attributable to the relatively calm market witnessed during the period.
Funding
Having proper funding is essential for any blockchain project, given the substantial resources needed to develop, operate, and maintain the network and its ecosystem. These projects are typically built for the long term, often taking years to gain significant adoption and generate sufficient revenue. Founded in 2018, with its token live since 2022, Optimism is not yet profitable, emphasizing the importance of a substantial funding runway for sustainability and continuous development.
Sufficient funds enable blockchain projects to invest in crucial areas like research, development, hiring top talent, and expanding network infrastructure. Moreover, funds can facilitate partnerships and collaborations with other projects, boosting the network's user base and enhancing its value proposition.
Fundraising
To ensure continuous growth and development, Optimism has been proactively securing funds through a series of fundraising rounds since 2020. Over this duration, Optimism has successfully completed four distinct rounds, amassing a disclosed sum of $178.5 million. This substantial funding underlines the platform's widespread recognition and attractiveness within the industry.
The main rounds were:
Jan 15, 2020 - Seed round for $3.5M including Paradigm and IDEO CoLab Ventures
Feb 24, 2021 - Series A for $25M led by a16z and joined by Wintermute and Nascent
Mar 17, 2022 - Series B for $150M with a pre-money valuation of $1.5B, led by Paradigm and a16z which were joined again by Wintermute and Nascent.
Nov 30, 2022 - Funding Round by Cynegetic Investment Management for an undisclosed amount.
According to CryptoRank, the seed round was raised at a $64.4M valuation ($0.015 per $OP), which granted investors an ATH return of 216x and currently stands at an 80x ROI.
Recently, in September 2023, Optimism sold 116M OP tokens locked 2 years for $157M to seven different purchasers citing “treasury management purposes”. The supply comes from an unallocated portion of the OP token treasury.
This treasury management operation brings the total funds raised by Optimism to $335.5M, giving it ample funding and capacity to navigate the bear market while developing the ecosystem.
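As a sanity check on these totals, below is a sketch using only the disclosed figures; the round labels are ours.

```python
# Disclosed fundraising rounds in USD millions (the Nov 2022 round is undisclosed).
rounds = {"Seed (Jan 2020)": 3.5, "Series A (Feb 2021)": 25.0, "Series B (Mar 2022)": 150.0}
disclosed = sum(rounds.values())
print(f"Disclosed rounds: ${disclosed}M")                           # $178.5M

treasury_sale = 157.0                                               # Sep 2023 sale of 116M locked OP
print(f"Total incl. treasury sale: ${disclosed + treasury_sale}M")  # $335.5M

# Implied investor returns from the CryptoRank seed price
seed_price, ath_price, oct_2023_price = 0.015, 3.22, 1.20
print(f"ATH return: {ath_price / seed_price:.0f}x")       # ~215x (the report quotes 216x)
print(f"Current return: {oct_2023_price / seed_price:.0f}x")  # ~80x
```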
Treasury
While Optimism may not yet be profitable as a blockchain, the network does have a significant advantage in the form of its substantial treasury, currently valued at $1.2 billion (made up of 907M $OP tokens).
It is, however, unclear how much remains from the various funding rounds and the $335.5M raised.
One of the notable advantages of holding a treasury is that it provides the chain with a secure runway, removing the need for frequent fundraising across market phases (bear and bull), even though Optimism recently conducted a private sale/OTC deal as treasury management (a planned sale). This stability holds paramount importance in today's dynamic market, characterized by rapid and unpredictable shifts in conditions.
With a treasury, chains can prioritize the development of new products and services, broaden their range of offerings, and explore potential partnerships and collaborations. This is especially critical in such a fiercely competitive industry, where innovation and ecosystem attractiveness are key to maintaining a competitive edge and onboarding new users.
Financial Situation
The generation of revenue through fees and transactions is a pivotal element when assessing the effectiveness and long-term viability of a blockchain. In this respect, Optimism has been generating continuous fees over 2023, averaging $3.02M per month. Annualizing this gross revenue, we can estimate that Optimism will generate roughly $36 million in fees this year, solidifying its position as a Layer 2 generating substantial revenue in the current market.
However, it's important to note that revenue generation alone doesn't automatically translate into profitability. When we analyze Optimism's financial reports, we observe substantial Layer 1 settlement costs and token emissions directed towards various incentives and protocols building on the chain.
For 2023, Optimism is expected to record $358 million in operating expenses, with $27.8M allocated to Layer 1 settlement costs and $331 million to token emissions and incentives.
While these investments are crucial for the network's sustained growth and success in the long run, they represent a significant expenditure that requires prudent management to reach profitability. Presently, Optimism operates at a gross deficit and isn't yet profitable once token incentives are taken into account. Nonetheless, its revenue from fees (gas paid by users) exceeds the expense of settling transactions on Layer 1, leaving $9.27M in net profit in 2023. This figure still excludes the cost of operating the centralized sequencer, which is unknown but should not exceed the profits generated.
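To see how these figures fit together, here is a back-of-the-envelope income statement using the report's numbers; the structure and labels are ours, and the small gap versus the quoted $9.27M presumably reflects actual rather than annualized monthly figures.

```python
# 2023 back-of-the-envelope figures from the report (USD millions).
monthly_fees = 3.02                 # average monthly fee revenue (user gas)
annual_fees = monthly_fees * 12     # ~$36.2M annualized gross revenue
l1_settlement_cost = 27.8           # cost of posting data/state roots to Ethereum L1
token_emissions = 331.0             # OP incentives & grants (paid in tokens, not cash)

print(f"Fees - L1 settlement: ${annual_fees - l1_settlement_cost:.1f}M")  # ~$8.4M (report: $9.27M)
print(f"Incl. token emissions: ${annual_fees - l1_settlement_cost - token_emissions:.1f}M")  # deeply negative
```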
In an article published in 2021 and co-written with Vitalik Buterin, Optimism committed to redistribute 100% of the profits made from sequencing (prior to its decentralization) to public good funding. Later in 2022, Optimism mentioned allocating those funds to Ethereum Protocol development.
Ultimately, as with any blockchain network, enduring success and profitability hinge on striking the right balance between generating revenue and managing expenses and token emissions. Simultaneously, it's crucial to attract and retain users and developers within the ecosystem, as they are key to the chain's growth, and that requires incentives.
Tokenomics
Tokens are always an important component of blockchain ecosystems and particularly Layer 2s as they can represent multiple use cases. Let’s dive into the $OP token, its use cases, the distribution and the current token status.
Use cases
The $OP token solely grants holders governance rights, enabling them to actively engage in on-chain governance proposals. The network operates on a community-driven approach, allowing token holders to cast their votes on proposals concerning network upgrades, fee adjustments, and other critical matters.
As with most other Ethereum L2s, the OP token isn't used as the gas token, since settling transactions on L1 would require selling OP for ETH, creating constant selling pressure.
This also comes as a natural decision regarding Optimism's close relationship with Ethereum and their vision of scaling the chain.
Supply
At genesis, the total supply of OP tokens stood at 4,294,967,296 OP and inflates at 2% per year. Out of this, approximately 880,444,572 tokens are currently circulating, corresponding to 20.5% of the total supply.
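As a side note, the genesis supply is exactly 2^32 tokens. A minimal sketch of the supply math, assuming (as a simplification) that the 2% inflation compounds on the total supply:

```python
GENESIS_SUPPLY = 2**32        # 4,294,967,296 OP: the genesis total supply
circulating = 880_444_572     # approximate circulating supply today

print(f"Circulating share: {circulating / GENESIS_SUPPLY:.1%}")  # -> 20.5%

# Projected total supply under 2% yearly inflation (assumed compounding)
for year in (1, 2, 3):
    print(f"After year {year}: {GENESIS_SUPPLY * 1.02 ** year:,.0f} OP")
```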
User Airdrops Allocation: 19%
The Optimism Foundation allocated 19% of the supply to airdrops, split across multiple rounds. The first round allocated 5% of the total supply and was received by more than 250,000 users who actively engaged with the chain and participated in its ecosystem.
Airdrops 2 and 3 distributed 11.7M and 9.4M tokens respectively to the community, leaving around 570M tokens available for future airdrops (13.2% of the total supply).
Ecosystem Fund Allocation: 25%
The Ecosystem Fund, a crucial allocation for a chain growth, is an incentive program designed to directly fund the communities and projects driving the expansion of the ecosystem.
The allocation for the Ecosystem Fund is further divided into the following categories:
Governance Fund: 5.4% of the supply. Projects showcasing usage on Optimism can request tokens from this fund through the Optimism Grants Council.
Partner Fund: 5.4% of the supply. These funds will be strategically distributed by the Optimism Foundation to nurture the Optimism ecosystem.
Seed Fund: 5.4% of the supply, dedicated to supporting early-stage projects launching within the Optimism ecosystem.
Unallocated: 8.8% of the supply, reserved for future programs to be determined by the Foundation and/or the governance community as deemed appropriate.
How well these funds are allocated will define how Optimism attracts and retains projects, users and liquidity.
Retroactive Public Goods Funding Allocation: 20%
Retroactive Public Goods Funding (RetroPGF) will be distributed by the Citizens' House.
During the initial RetroPGF round, 76 projects were nominated, out of which 58 were granted funding by the badgeholders.
The median funding received by a project in RetroPGF 1 amounted to $14,670, while the top 10% of projects received more than $36,919.
In the second RetroPGF round, all 195 nominated projects received funding. Badgeholders, assessing impact, voted for an average of 40 projects each. The median amount received per project was 22,825 OP, while the top 10% received over 140k OP.
The third round is currently live: projects and individuals that contributed to the chain can apply until October 23rd for a share of 30M $OP tokens. The voting period will run from November 6th to December 7th, with results and token allocations distributed in early January.
Core Contributors Allocation: 19%
The Core Contributors allocation will be distributed to individuals instrumental in bringing Optimism and the Optimism Collective from conception to realization.
Sugar Xaddies Allocation: 17%
This allocation is for investors, the "Sugar Xaddies," who invested in Optimism’s early rounds. As covered above, $178.5M was raised through 4 different rounds, each subject to a different valuation and release schedule that we’ll touch on in the following parts.
As mentioned earlier, the supply is not yet fully distributed, and dilution started increasing this year as the cliff period for the core contributor and investor allocations ended.
The current release schedule is monthly (the 30th of each month), with roughly 24.1M $OP tokens released each month, corresponding to about 2.74% of the circulating supply ($28.99M at current valuations).
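A quick sketch of the unlock math with the figures above; note that 2.74% refers to the circulating supply, not the total, and the OP price is the approximate October 2023 level.

```python
monthly_unlock = 24_100_000   # OP released on the 30th of each month
circulating = 880_444_572     # approximate circulating supply
op_price = 1.20               # approximate October 2023 price in USD

print(f"Share of circulating supply: {monthly_unlock / circulating:.2%}")  # ~2.74%
print(f"Monthly unlock value: ${monthly_unlock * op_price / 1e6:.2f}M")    # ~$28.9M (report: $28.99M)
```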
Governance
Optimism is pushing the boundaries of governance and decentralization by introducing new governance models and ways of sustaining development through public retroactive grant fundings.
The Optimism Collective is a group of companies, communities and users collaborating to reward public goods while building the Ethereum ecosystem and future.
The Optimism Foundation and the Optimism Collective work together to shape the constantly evolving rules of governance at Optimism. This system, introduced in April 2022, remains flexible over time, allowing future adaptations to the model. The Collective is composed of two equal components: the Token House and the Citizens’ House.
The Token House consists of $OP token holders, who hold governance power for as long as they hold the token. The Citizens’ House is made up of individuals who have been granted “citizenship” by the Optimism Foundation thanks to their constant involvement in the chain’s development or through major contributions. Their on-chain identity is recognized through a specific badge issued by the Optimism Foundation, hence their name of badgeholders.
Striking a balance between governance obtained through monetary means and governance earned through merit is crucial. This ensures that highly funded malicious third parties cannot unilaterally execute plans that contradict Optimism's short and long-term vision.
The Token House and Citizens’ House have two clear missions in the governance mechanism. The Token House, and thus token holders, votes on project grants, protocol upgrades or modifications, governance changes and potential uses of the treasury. The Citizens’ House badgeholders have the mission of voting on RetroPGF allocations and assessing the work done by the various contributors.
These models are currently in the experimental phase and may undergo future development or modification. It is encouraging to witness the collaboration between the Optimism foundation and Collective in decentralizing and rewarding the progress made within the Ethereum ecosystem throughout the year.
Current status
The $OP token was airdropped in May 2022 and reached an ATH price of ~$3.22 per token in 2023, before declining by more than 60% to around $1.20 per token in October 2023.
The current circulating supply is valued at $1.06B, with a Fully Diluted Valuation (FDV) of $5.1B.
Despite this drop, $OP holds 38th place among the most capitalized tokens on the market, behind MATIC (Polygon) in 13th place with a $4.8B circulating market cap, but above ARB (Arbitrum) in 40th place with a $1B circulating market cap.
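Both valuations follow directly from price times supply; a sketch with the figures above:

```python
op_price = 1.20               # approximate October 2023 price in USD
circulating = 880_444_572
total_supply = 4_294_967_296

print(f"Circulating market cap: ${op_price * circulating / 1e9:.2f}B")  # ~$1.06B
print(f"FDV: ${op_price * total_supply / 1e9:.2f}B")                    # ~$5.15B (report: $5.1B)
```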
Team
One of the key advantages that sets the Optimism team apart on this project is the wealth of experience they have gained over the years by creating and monitoring a chain that has grown to become one of the leaders in the L2 landscape.
Optimism is organized into two entities: the Optimism Foundation, a non-profit organization whose objective is developing the chain, and OP Labs (PBC), the private corporation taking care of protocol development. This separation allows Optimism as a chain to evolve as a decentralized entity “without” risking legal action.
The total publicly known workforce at Optimism consists of 86 individuals, with 25 working at the Optimism Foundation and 61 at OP Labs.
To run such a project, Optimism onboarded a multitude of talents; here are some of the main actors:
Jinglan Wang (@jinglejamOP) - Co-founder of Optimism and Executive Director of the Optimism Foundation. Jinglan is a former Microsoft software engineer who made significant contributions to the Ethereum, Ethereum Classic, Zcash, and Polkadot blockchain projects. She's known for her expertise in blockchain development and operations.
Benjamin Jones (@ben_chain) - Co-founder and Chief Scientist at OP Labs. Benjamin worked at Underscore VC as a blockchain associate, now providing strong support on the investment side.
Karl Floersch (@karl_dot_tech) - CEO at OP Labs. Karl was a full-time researcher at the Ethereum Foundation for more than 2 years and a blockchain engineer at Consensys for 2 years.
Kevin Ho (@kevinjho_) - Co-founder and Protocol Product Manager at OP Labs. Prior to OP Labs, Kevin led the development of Cryptoeconomics.study, an open-source blockchain course in Greater Los Angeles, promoting blockchain technology education.
Prithvi Subburaj - COO of OP Labs, formerly at Google, with more than 15 years of experience as a general manager. Prithvi excels in strategic vision, organizational leadership, business transformation, and cross-functional collaboration.
Tasia Potasinski (@tasiagatchi) - Head of Marketing at OP Labs, with 5 years of experience as VP of Product Marketing at Nivas.
Bobby Dresser (@b0bby) - Head of Product at OP Labs and General Manager at the Optimism Foundation. Bobby is a former MetaMask Head of Product; he focuses on tech for human wellbeing, leadership in crypto and online fundraising.
Emmanuel Aremu (@earemu) - Growth Marketing Manager at OP Labs, with half a decade of experience as head of marketing at multiple companies.
In summary, the Optimism team gathers a wealth of experience, talent, and unwavering dedication to this project. Their strong commitment to innovation and excellence is poised to propel the project forward and yield outstanding outcomes for all stakeholders involved.
Ecosystem
The Optimism ecosystem is flourishing, with an extensive range of dApps and protocols deployed on the network. According to DappRadar, Optimism hosts 194 dApps, attracting a TVL of $594.54M, as seen previously. Each of these dApps provides distinctive features and capabilities to users.
The range of dApps available on Optimism is remarkable, spanning from decentralized exchanges (DEX) and lending platforms to gaming and NFT marketplaces. This broad spectrum of use cases underscores Optimism’s capacity to attract a broad ecosystem.
Optimism has a relatively diversified DeFi ecosystem, with a multitude of applications, ranging from DEXs and derivatives to further sophisticated protocols like options or active liquidity management.
Having a broad range of DeFi protocols and sectors is crucial for an ecosystem to thrive. The more innovative and thus attractive projects a chain can have, the more users and liquidity it will attract.
In that sense Optimism is competing with Arbitrum’s DeFi ecosystem by offering some of the broadest products and opportunities for users. This competition fosters innovation, ultimately benefiting users who have more choices and avenues for participating in the DeFi space.
For a chain to succeed, it’s important to have both native and non-native projects driving attention to the chain. Native projects bring financial opportunities for users as well as innovative products and designs, while non-native ones bring their reputation and security to the chain.
Here are some of the leading projects by TVL and category on the chain:
DEX:
@VelodromeFi TVL -> $120M
@Uniswap TVL -> $38.1M
@beethoven_x TVL -> $15.8M
@CurveFinance TVL -> $13.05M
@KyberNetwork TVL -> $12.23M
@KromatikaFi TVL -> $951k
@SushiSwap TVL -> $800k
Derivatives:
@synthetix_io TVL -> $138.1M
@perpprotocol TVL -> $12.8M
@thalesmarket TVL -> $1.15M
Lending/borrowing:
@AaveAave TVL -> $67.9M
@TarotFinance TVL -> $2.31M
ALM:
@GammaStrategies TVL -> $5.8M
@ArrakisFinance TVL -> $3.2M
@DefiEdge TVL -> $770k
Options:
@lyrafinance TVL -> $8.6M
@thalesmarket TVL -> $2.03M
@aevoxyz TVL -> $230k
The Optimism network hosts a diverse array of projects that have been developed and deployed, contributing to a thriving DeFi ecosystem. These projects present users with ample opportunities for trading and maximizing the use of the optimistic rollup.
As mentioned above, Optimism has a very diverse ecosystem which brings a wide range of opportunities to users and TVL.
A focus has been placed on bringing new, innovative projects to the chain to foster activity; however, those projects have not yet reached mass attention and would require more visibility to show their full potential.
This is shown by the difference between the TVB ($2.63B) and the TVL ($595M) noticed in the metrics section: only about 22% of the value bridged to Optimism is actually used and locked in its dApps.
As a comparison, Arbitrum has 30% of its TVB locked in dApps, which suggests users are more willing to lock funds in Arbitrum projects than in Optimism ones, potentially indicating a lack of interest in the current offering on Optimism (see the sketch below).
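The utilization figures quoted here are simply TVL divided by TVB; a quick sketch with the report's numbers:

```python
# Capital utilization: share of bridged value actually deployed in dApps.
optimism_tvl = 0.595          # USD billions (DefiLlama)
optimism_tvb = 2.63           # USD billions (Total Value Bridged)
arbitrum_tvb = 5.79
arbitrum_utilization = 0.30   # share quoted in the report

print(f"Optimism utilization: {optimism_tvl / optimism_tvb:.1%}")            # ~22.6%
print(f"Implied Arbitrum TVL: ${arbitrum_tvb * arbitrum_utilization:.2f}B")  # ~$1.74B
```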
Fostering the ecosystem through incentives and grants happens two to three times per year, with projects receiving $OP tokens from the ecosystem fund allocation. This drives rewards and higher DeFi activity on the chain, as projects redistribute part or all of their allocation to users based on specific criteria (trading competitions, providing liquidity, ...).
In summary, the Optimism chain provides a dynamic and user-friendly environment, ensuring a smooth and convenient experience for participating in its DeFi ecosystem while having potential for improvement in terms of dApp adoption.
Community & Marketing
To thrive in the Layer 2 market, projects must implement effective communication strategies and establish their narrative early on. The crypto market's irrationality is an important factor to take into account: having a great product and technology helps a project get recognized, but in order to attract liquidity, users and developers, chains need to build an attractive environment where all the different stakeholders are willing to join and develop a thriving ecosystem.
It is thus fundamental to assess Optimism’s communication strategy and community-building efforts.
Optimism is present on Twitter and Discord, the main platforms used in the crypto industry to develop and nurture communities.
Optimism’s Twitter account @optimismFND has gathered 594K followers (while following 69 accounts, a nice crypto joke), and its Discord counts more than 160k members.
Their communication style consists of sharing news, weekly updates and education content through their Twitter account while posting long form content on their Mirror for a more detailed explanation.
This is complemented by OP Radio, hosted every Friday on their Twitter account, which discusses the latest news around the chain and its ecosystem. OP Radio allows users and community members to stay updated about the market, while fostering activity and a close relationship with the chain.
Comparing Optimism’s recent Twitter growth to other Layer 2s, the chain ranks 4th behind zkSync, Arbitrum and Scroll in terms of total followers. In terms of recent growth, Optimism is lagging behind some of the competitors that are currently running specific growth campaigns or are about to launch new features.
While the number of followers is one metric for assessing a community's size, analyzing its overall engagement and interest is far more essential. Ultimately, a community's quality holds more significance than its quantity.
Analyzing the audiences of the last ten OP Radio sessions, we find that on average 437 listeners tune in, which is relatively low given how large Optimism’s ecosystem is.
To continue developing its growth, Optimism implemented, as we've seen above, the Retroactive Public Goods Funding (RetroPGF) and allocated 20% of its total supply.
Accessible to all the different stakeholders a chain can have (developers, projects, users, ...), this retroactive grant mechanism incentivizes users and community members to contribute to the chain all year long and apply for a grant at the end of the given year.
Such a model ensures constant coverage and engagement from the community while also inducing competition and quality to receive the highest grant, awarded by the Badge holders.
From this initiative emerged multiple media outlets and accounts focusing on sharing knowledge, information and updates about the chain. Here are some of the main accounts covering the Optimism ecosystem:
https://twitter.com/Subli_Defi
https://twitter.com/Th3Optimist
https://twitter.com/OptimismHub
https://twitter.com/RetroPGFHub
https://twitter.com/Optimism_TR
https://twitter.com/OptimistMage
https://twitter.com/OptimismPT
https://twitter.com/OPambassador
https://twitter.com/thankoptimism
https://twitter.com/Reformed_Normie
https://twitter.com/stephanie_vee
https://twitter.com/OptimismGrants
https://twitter.com/OptimismGov
https://twitter.com/banklessDAO
This list is not exhaustive and will continue to evolve over the years as more and more users join the ecosystem and contribute to Optimism’s growth.
For Optimism to thrive in this highly competitive market, it's imperative to draw in and foster innovative protocols while also offering compelling marketing, communication, and financial incentives to entice users and community members to the chain.
It's equally crucial to keep in mind that markets can be irrational, and even possessing the best technology doesn't ensure a protocol's success in the market. The crypto industry remains relatively immature, and users are often more driven by marketing, hype, and financial prospects than by technology.
Hence, achieving widespread adoption and TVL will necessitate a dual focus on both the technology and marketing aspects of Optimism’s solution.
Roadmap
Ideated in 2018, Optimism has come a long way and achieved milestones over time, regardless of market conditions. The latest major milestones were the Open Mainnet launch and the Bedrock upgrade, both of which have since proven highly successful.
Looking ahead to what awaits Optimism’s users, the foundation announced the 4 remaining major milestones to achieve before completing its 11-step masterplan.
The next steps consist in:
Developing a next-generation fault proof system (see the chapter “State Validation Mechanism” for reference).
Developing multi-proofs, which will add even more customizability to the OP Stack, as projects can opt for a state validation mechanism of their choice and even combine multiple mechanisms in one implementation (see the “Sequencing” section in the “The OP Stack - Bedrock of the Ecosystem” chapter).
Decentralizing the Sequencer as outlined in the chapter “Progression towards a decentralized Sequencing Model”.
Having L1 governed Fault proofs to improve network robustness & security.
The development prospects for Optimism are poised to bring about a profound transformation in multiple dimensions. This evolution will necessitate substantial effort and commitment to achieve the envisioned milestones. As a result, Optimism has diligently assembled a team comprising not only developers but also adept marketers. This comprehensive team composition is strategic, positioning Optimism to effectively address both the technical and promotional aspects of their project.
Pumpamentals
In order to establish a competitive advantage in the market and gain market shares, Optimism must prioritize the cultivation of what we call 'pumpamentals.' This concept pertains to the narratives and strategies that a project should actively deploy to efficiently market and promote itself.
Optimism is a project that is highly aligned with Ethereum and its values, giving it a unique position in the overall Ethereum ecosystem and likely making it one of the primary choices of L2s among Ethereum-native users. The Ethereum alignment is also mirrored in the high degree of EVM equivalence that OP Mainnet provides developers, making it easy to re-use existing code, tools and infrastructure. This is important with regards to developer experience and allows for a seamless transition of existing EVM/Solidity applications onto Optimism’s execution layer.
In order to drive adoption and incentivize users, it is also worth mentioning that Optimism still has 13.2% of its supply left for future airdrops. Additionally, the grant programs from the Optimism Collective are likely to spur growth both in the community and in the ecosystem of applications.
Having a retroactive grant program ensures that Optimism is allocating funds to actions, persons or projects that provide a net positive impact to the chain, while a traditional grant system simply bet on a future net positive impact.
From a technical perspective, what is definitely a strong driver for the Optimism ecosystem is the highly versatile OP Stack that abstracts a large part of the complexity that is otherwise involved in building rollup L2s. Thanks to the high degree of customizability, teams can easily build application-specific rollup systems and optimize for their use case of choice. This includes flexibility with regards to sequencer designs, DA solutions, settlement layers and even state validation mechanisms.
What adds to the attractiveness of the OP Stack is that the Superchain vision aims to combine all OP Stack based chains into one interoperable ecosystem that feels like one unified, logical chain to the end user, enabling novel use cases and seamless composability across chains, enabled through a shared sequencing mechanism and cross-rollup message passing protocols. The fast growing number of teams opting to build on top of the OP Stack is strong evidence to the demand for such a solution that exists in the market. With the fast growing ecosystem of OP chains, network effects are likely to emerge and further strengthen the position of the OP stack in the L2 space.
Optimism acted as a pioneer by introducing a relatively novel model of governance, combining the Foundation and the Collective to pursue the chain's long-term plan. While this model might not yet be perfect, it decentralizes governance and helps ensure that no well-funded third party can carry out malicious governance actions that would go against Optimism's objectives and core values.
Optimism has developed a rich and varied DeFi ecosystem that placed it 2nd in terms of L2 TVL. This notable achievement shows users' interest in some of the native projects built on Optimism.
Optimism boasts a team of blockchain developers and management experts, setting them apart from competitors. The team's extensive knowledge and proficiency have been key drivers of their remarkable success, as shown by their substantial market capitalization and one of the highest TVL among L2s.
Generally, Optimism has a very ambitious roadmap with regards to their tech stack and will implement a lot of features in the future that are likely to keep the OP stack (and OP Mainnet) among the leaders in the realm of modular execution layers, taking center-stage in shaping the multi-chain future. With the generally strong innovation & growth in the modular space and the emergence of Rollup-as-a-Service (RaaS) providers, the trend towards (app-specific) L2s or L3s will only accelerate from here onwards and the OP Stack is well positioned to gain significant market share.
Risks
However, there are also considerations that have to be made on the risk side. From a technical perspective users of OP mainnet (as well as other OP stack based chains that use the default configuration) should keep in mind that:
Currently the system permits invalid state roots as the fault proof mechanism is still under development and is not yet deployed on mainnet. This means that there is effectively no state validation happening.
Additionally, the code that secures the system can be changed arbitrarily and without notice using fast upgrade keys. This means the team (or any malicious actor with access to the fast upgrade keys) could theoretically upgrade the rollup contract maliciously and thereby compromise security (e.g. steal funds).
Finally, since only whitelisted proposers can publish state roots on L1, withdrawals are frozen, effectively locking user funds on L2 if there is a proposer failure.
What Optimism has implemented already, though, is a mechanism to address sequencer failure. In case of sequencer failure or censorship, users can force transactions to be included in the chain by submitting them on L1, although there is a 12h delay on this operation; a sketch of this escape hatch follows below.
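To make this concrete, here is a minimal Python/web3.py sketch of force-including a transaction from L1 via the Bedrock OptimismPortal contract's depositTransaction function. The RPC endpoint, sender and recipient are placeholders, the portal address should be verified against the official Optimism docs, and real usage additionally needs signing and careful gas handling.

```python
from web3 import Web3

# Placeholders: supply your own L1 RPC endpoint and sender account.
w3 = Web3(Web3.HTTPProvider("https://YOUR-ETHEREUM-L1-RPC"))
SENDER = "0x0000000000000000000000000000000000000001"  # placeholder sender address

# OptimismPortal proxy on Ethereum mainnet (verify against the official docs).
PORTAL = "0xbEb5Fc579115071764c7423A4f12eDde41f106Ed"
PORTAL_ABI = [{  # minimal ABI: only the depositTransaction entry point
    "name": "depositTransaction", "type": "function", "stateMutability": "payable",
    "inputs": [
        {"name": "_to", "type": "address"},
        {"name": "_value", "type": "uint256"},
        {"name": "_gasLimit", "type": "uint64"},
        {"name": "_isCreation", "type": "bool"},
        {"name": "_data", "type": "bytes"},
    ],
    "outputs": [],
}]

portal = w3.eth.contract(address=PORTAL, abi=PORTAL_ABI)

# Force-include a simple L2 ETH transfer. Even if the sequencer censors the
# sender, this deposit must be reflected on L2 within the ~12h sequencer window.
amount = Web3.to_wei(0.01, "ether")
tx = portal.functions.depositTransaction(
    "0x000000000000000000000000000000000000dEaD",  # L2 recipient (placeholder)
    amount,    # value of the forced L2 transaction
    100_000,   # L2 gas limit for the deposit
    False,     # not a contract creation
    b"",       # no calldata
).build_transaction({"from": SENDER, "value": amount})  # msg.value mints the ETH on L2
# `tx` can then be signed and broadcast via w3.eth.send_raw_transaction.
```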
Also, Optimism is a “true” rollup: all the data needed for proof construction is published on-chain on Ethereum L1.
According to L2Beat’s trust framework for rollups, Optimism is currently still a Stage 0 rollup. To qualify for Stage 0, Optimism fulfills the following requirements:
The project calls itself a rollup.
L2 state roots are posted to Ethereum L1.
Inputs for the state transition function are posted to L1.
A source-available node exists that can recreate the state from L1 data.
Optimism does not yet reach Stage 1, as the following criteria remain unmet:
The proof system is still under development.
Users’ withdrawals can be censored by the permissioned operators.
Upgrades executed by actors with more centralized control than a Security Council provide less than 7d for users to exit if the permissioned operator is down or censoring.
Beyond the purely technical risks arising from the tech stack’s design that users face, one might also consider risks from an investor (token holder) perspective. What investors might want to consider in this context is the rather limited token utility. Currently, OP tokens serve only a governance purpose. While this might change in the future (as the token could gain additional utility in decentralizing the sequencer), it might impair the value the market assigns to the token.
Regarding the OP token, the mechanism governing the 2% yearly supply inflation remains unclear, and very little communication has been made around this model.
As seen in the ecosystem part, 3 projects hold more than half of the total TVL on the chain (roughly 55% based on the TVL figures above), giving those actors major dominance over the market. Even though Optimism is ranked 2nd in terms of L2 TVL, such figures also highlight the limited appetite the community has for smaller and newer projects, potentially indicating disinterest in the current offering. This disinterest is demonstrated by the difference between TVL and TVB (only ~22% of the TVB is locked as TVL). An effort thus has to be made to attract and develop innovative projects that would foster the ecosystem's growth.
The previous point also extends to the community and its interest in the chain: several figures show a potential mismatch between the current offering and users' interests. Moreover, RetroPGF could be perceived as a way to financially incentivize the community, and one could question users' real intentions.
Furthermore, the 3rd round of RetroPGF, corresponding to the 3rd year, will distribute 30M $OP (~$36.6M) to selected participants, taken from the 20% allocation in the tokenomics. Considering the constant growth of the global grant allocation, one could wonder what will happen once the full 20% of the supply has been allocated. Even though the Optimism Foundation announced redistributing 100% of its sequencer profits to Ethereum protocol development, the ~$9.27M of sequencer profits does not cover the grant expenditures.
Additionally, while Optimism is definitely building some amazing technology, competition in the L2 space is heating up quickly. There are multiple other rollup teams (both optimistic and validity rollups) that are fighting for market share as well. While Optimism was definitely a pioneer with the OP Stack, there is an increasing number of tech stacks that also aim to reduce the complexity that devs face when building rollup L2s. This includes Arbitrum but also Starkware (Starknet) or zkSync and Polygon.
With its strong focus on Ethereum alignment, Optimism also risks losing its edge to ecosystems that innovate beyond the EVM on their execution layer. With the emergence of modular rollups like Eclipse and others, more performant & parallelizable virtual machine implementations are coming to Ethereum, enabling novel use cases that are not feasible in the single-threaded EVM.
Conclusion / Opinion
Optimism has established a prominent presence in the blockchain industry and is among the leading Ethereum scaling solutions. They have a reputation for pioneering and pushing the boundaries of what's achievable with blockchain technology. Their Bedrock upgrade of the OP Stack serves as a testament to this innovative track record.
Thankfully, the Optimism team is well-equipped to lead this ambitious endeavor. They boast a sizable and highly skilled team that has elevated Optimism to a prominent position within the blockchain industry. Their track record is marked by noteworthy past roles and contributions to the ecosystem, including work at internationally renowned projects.
The Optimism ecosystem is in the midst of a remarkable expansion, characterized by the rapid proliferation of native projects within the chain. This dynamic growth is unmistakably revealing the vast potential for DeFi adoption and the continual enlargement of the user community, igniting an atmosphere of innovation and providing an attractive ecosystem for developers to conceive innovative applications that leverage Optimism's cutting-edge technology.
Despite the considerable progress achieved thus far, Optimism is still on a journey towards the realization of its ambitious long-term master plan. Nevertheless, it is apparent that the team has managed to assemble highly talented individuals and secure substantial financial resources to advance their vision.
We are highly confident that Optimism will continue to thrive over the next few years and are looking forward to the development of the chain, ecosystem, governance and OP-Stack.
Disclaimer & Limitations
We hold $OP and tokens from projects on the Optimism blockchain. It's essential to clarify that this article was written objectively and independently, with no affiliations or associations with the team.
While this article was not created in collaboration with Optimism, we would like to express our gratitude to Subli DeFi for guiding us in composing this report. His invaluable contribution and extensive knowledge of the Optimism ecosystem ensured comprehensive coverage of all aspects in this report.
The content presented in this article is solely for educational and informational purposes and should not be construed as investment advice (or any advice at all).
Given that Optimism is constantly evolving and some details are retained by the team, this article has limitations regarding the financial situation of the chain (revenue and funds available), specific team members, the current burn rate, roadmap, and any critical information that the company cannot share.
There might be occasional inaccuracies or errors in the information provided, but we welcome discussion and corrections.
Don’t trust, verify! 🀄️
This article has been written by the Redacted Research team consisting of Louround, ZeroKnowledge and DeFi Saint, we would be delighted to get your feedback on this initiative.
We also released a thread highlighting the most important parts of this article, any support is highly appreciated:
https://x.com/RedactedRes/status/1715431066604028318?s=20
Do not forget to follow us on Twitter so you don't miss our next research pieces:
https://twitter.com/RedactedRes
Sources / Links
https://community.optimism.io/docs/governance/retropgf-1/
https://community.optimism.io/docs/governance/retropgf-2/
https://community.optimism.io/docs/governance/retropgf-3/
https://www.crunchbase.com/organization/optimism/investor_financials
https://finance.yahoo.com/news/optimism-foundation-sells-157m-op-064316298.html
https://optimism.mirror.xyz/x4LGFwa6RJ_opOaCOwr_VGk04Lp3of41H8ynWaFB27E
https://medium.com/ethereum-optimism/retroactive-public-goods-funding-33c9b7d00f0c
https://community.optimism.io/docs/governance/token-house/
https://community.optimism.io/docs/governance/citizens-house/
https://community.optimism.io/docs/governance/#
https://tokenterminal.com/terminal/projects/optimism/financial-statement
https://defillama.com/chain/Optimism
https://token.unlocks.app/optimism
https://coinmarketcap.com/currencies/optimism-ethereum/
https://optimistic.etherscan.io/charts
https://app.artemis.xyz/comparables?tab=chains
https://gov.optimism.io/t/token-sale-september-2023/6846/4
https://cryptorank.io/ico/optimism
https://crypto-fundraising.info/projects/optimism/
Polkadot Docs
Cosmos Docs
Near Docs
Harmony Docs
Optimism Docs
Celestia Docs
https://www.alexbeckett.xyz/convergent-scaling/
https://members.delphidigital.io/reports/the-hitchhikers-guide-to-ethereum
https://medium.com/ethereum-optimism/introducing-evm-equivalence-5c2021deb306
https://www.alchemy.com/overviews/optimistic-virtual-machine
https://polynya.medium.com/why-calldata-gas-cost-reduction-is-crucial-for-rollups-30e663577d3a
https://vitalik.ca/general/2021/01/05/rollup.html
https://www.gemini.com/cryptopedia/blockchain-trilemma-decentralization-scalability-definition#section-what-is-scalability
https://www.seba.swiss/insights/the-blockchain-trilemma/
https://medium.com/infinitism/optimistic-time-travel-6680567f1864
https://blog.pantherprotocol.io/zk-rollup-projects-inner-workings-importance-analysis/
https://medium.com/ethereum-optimism/cannon-cannon-cannon-introducing-cannon-4ce0d9245a03
https://medium.com/plasma-group/introducing-the-ovm-db253287af50
https://members.delphidigital.io/reports/the-complete-guide-to-rollups
https://members.delphidigital.io/reports/pay-attention-to-celestia
https://ethereum.org/en/roadmap/danksharding/
https://www.alchemy.com/overviews/danksharding
https://www.eip4844.com/
https://www.quicknode.com/guides/ethereum-development/transactions/eip4844-explained
https://threesigma.xyz/blog/optimistic-rollups-challenging-periods-reimagined-part-two
https://ethereum.org/en/developers/docs/scaling/optimistic-rollups/
https://medium.com/starkware/redefining-scalability-5aa11ffc5880
https://ethereum.org/en/developers/docs/scaling/validium/
https://celestia.org/learn/sovereign-rollups/an-introduction/
https://www.paradigm.xyz/2021/01/how-does-optimisms-rollup-really-work#fault-proofs
https://gov.optimism.io/t/final-upgrade-proposal-bedrock-v2/5548
https://l2beat.com/scaling/summary
https://stack.optimism.io/docs/understand/explainer/
https://gov.optimism.io/t/law-of-chains-v0-1-full-draft/6514
https://gov.optimism.io/t/law-of-chains-v0-1-section-by-section-overview/6515
https://optimism.mirror.xyz/JfVOJ1Ng2l5H6JbIAtfOcYBKa4i9DyRTUJUuOqDpjIs