A complete guide to MegaETH.
Ok so by now you are probably sick of all these L2s coming to market and performing like a moldy dog turd. I get it, I feel the same way. With over 50 L2s in existence, one could certainly argue that enough is enough.
That said, MegaETH could be a different type of player, and its backers seem to think the same.
Peter Thiel made it clear in his book “Zero to One” that in order for a new product to have any impact on a marketplace it needs to improve on the current thing by a factor of at least 10.
This is exactly what MegaETH hopes to achieve.
Bringing web2 performance to the blockchain is no easy feat. It requires a lot of rethinking and some serious technical adjustments. Real-time payments, transactions, and derivatives are just some of the possible outcomes that could become a reality if MegaETH can achieve its goals.
These guys raised $20 million in their seed round at the end of last month, led by Dragonfly Capital. Other big backers, such as Big Brain Holdings, Figment Capital, and Robot Ventures, also got involved.
Add to this some of the biggest angels to be found on CT, namely Cobie, Mert, Hasu, and Vitalik himself, and surely you have the recipe for success. Now all you have to do is cook the damn thing.
So, is this just another ponzi scheme with a strange rabbit theme, or will MegaETH bring Vitalik’s “Endgame” into the realm of reality?
Decide for yourself.
What is MegaETH?
As stated by the MegaETH team themselves, “MegaETH aims to push the performance metrics of ETH L2s to the limits of the hardware they run on, bridging the gap between blockchains and traditional cloud computing servers.”
In its simplest form, this means a real-time blockchain capable of pumping out transactions at speeds of 100,000 per second with near-instant confirmation, even under the strain of a congested network.
Speed and scalability are the key.
Now, to achieve this, it takes some serious adjustments under the hood. To truly understand this, we first need to look at what current L1s and L2s are doing to get an idea of what makes MegaETH that much more sophisticated.
The inner workings of L1s and L2s, and how they relate to MegaETH
In order for a blockchain to change its state, transactions need to be ordered and then applied sequentially. The consensus mechanism agrees on that order, and the execution mechanism applies the transactions to the state. Nodes perform these tasks.
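For the code-curious, here’s a toy Python sketch of that order-then-execute loop. The names and numbers are made up for illustration; this isn’t any chain’s actual code.

```python
# Toy sketch of the order-then-execute loop every blockchain follows.
# All names and values are hypothetical.

state = {"alice": 100, "bob": 50}  # account balances

def execute(tx, state):
    """Apply a single transfer to the state."""
    sender, receiver, amount = tx
    if state.get(sender, 0) >= amount:
        state[sender] -= amount
        state[receiver] = state.get(receiver, 0) + amount

# Consensus's job: fix ONE canonical ordering of the transactions.
ordered_txs = [("alice", "bob", 30), ("bob", "alice", 10)]

# Execution's job: apply them one by one. A different order can
# produce a different final state, which is why ordering matters.
for tx in ordered_txs:
    execute(tx, state)

print(state)  # {'alice': 80, 'bob': 70}
```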
As it currently stands in the L1 landscape, all nodes perform the same tasks to ensure consensus and execution. However, each L1 has different hardware requirements that these nodes must meet, and raising those requirements creates a trade-off between decentralization and performance.
This issue is eloquently described in the following clip by a MegaETH co-founder, in true rabbit fashion.
Now, as mentioned in the above short clip, L2s have managed to find a way to get around this “straggler” problem by differentiating the tasks performed by nodes into areas of specialization. In other words, some nodes do some things better than others.
This can be seen in special sequencer nodes and different types of roll-ups, which use different tech stacks to get the job done.
So what makes MegaETH better?
The MegaETH crew hasn’t just jumped in head first and started spinning up new redacted ideas. Instead, they have taken a “measure, then build” approach that involves carefully examining how certain changes will affect performance and only changing what is really necessary.
Firstly, they found that by offloading security to the base layer itself, in this case Ethereum for settlement and EigenDA for data availability, they can seriously improve aspects of the L2’s performance metrics. Nothing new, right?
The next step to increasing speed and scalability is to remove the task of execution from full nodes.
In the MegaETH network, they have created three main roles: provers, full nodes, and sequencers. By giving nodes specialized tasks, these tasks are spread out, and the load on nodes is drastically reduced.
A sequencer’s job is still ordering and executing transactions, which just so happens to be the most compute-intensive task.
It’s for this reason that sequencer nodes run on servers with vastly more power and higher-end hardware requirements. Moreover, MegaETH only runs one active sequencer at a time, which gets rid of the consensus overhead during execution.
Full nodes need only verify proofs before updating their view of the network state, and this task is pretty tame compared to the work a sequencer has to do. Therefore, as you may have guessed by now, the hardware needed to run a full node is far less demanding than that of a sequencer.
The basic transaction flow can be seen in the diagram below.
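Diagram aside, here’s a rough Python sketch of how that division of labor could look. The function names and data structures below are hypothetical stand-ins for the concept, not MegaETH’s actual implementation.

```python
# Sketch of the three-role split: one sequencer executes, provers prove,
# and full nodes merely verify. All names here are hypothetical.
import hashlib, json
from dataclasses import dataclass

def hash_state(state: dict) -> str:
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

@dataclass
class Block:
    txs: list
    state_root: str

def sequencer(txs: list, state: dict) -> Block:
    # The heavy lifting: order and execute every transaction.
    # This is why the sequencer runs on high-end hardware.
    for tx in txs:
        state[tx["to"]] = state.get(tx["to"], 0) + tx["value"]
    return Block(txs=txs, state_root=hash_state(state))

def prover(block: Block) -> dict:
    # Stand-in for a real prover, which would emit a succinct validity proof.
    return {"root": block.state_root, "ok": True}

def full_node(block: Block, proof: dict) -> str:
    # The cheap part: check the proof instead of re-executing every tx,
    # which is why full nodes get away with modest hardware.
    assert proof["ok"] and proof["root"] == block.state_root
    return block.state_root  # accept the new state

state = {}
blk = sequencer([{"to": "0xabc", "value": 5}], state)
print(full_node(blk, prover(blk)))
```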
This concept of node specialization was covered in Vitalik’s “Endgame” blog post as having the potential to drastically improve the speed and trustlessness of block validation, despite making block production more centralized in the process.
The team at MegaETH believes that the current EVM model's TPS speeds are simply not enough for crypto to compete with the web2 world.
By increasing the hardware requirements for sequencer nodes, these machines have enough RAM to hold the entire blockchain state in memory, removing the latency of SSD reads and making the entire process much faster.
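To get a feel for why this matters, here’s a crude benchmark sketch comparing reads from an in-memory dict with reads from a disk-backed store. The absolute numbers will vary by machine, and OS caching will narrow the gap, but the point is the gap itself.

```python
# Crude illustration: state reads from RAM vs. reads that go through
# a disk-backed store. shelve stands in for SSD-resident state.
import os, shelve, tempfile, time

ram_state = {f"key{i}": i for i in range(100_000)}   # whole state in memory

db_path = os.path.join(tempfile.gettempdir(), "state_db")
with shelve.open(db_path) as disk_state:
    disk_state.update((f"key{i}", i) for i in range(1_000))

    t0 = time.perf_counter()
    for i in range(1_000):
        _ = ram_state[f"key{i}"]
    ram_t = time.perf_counter() - t0

    t0 = time.perf_counter()
    for i in range(1_000):
        _ = disk_state[f"key{i}"]
    disk_t = time.perf_counter() - t0

print(f"RAM reads: {ram_t:.6f}s, disk-backed reads: {disk_t:.6f}s")
```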
So, sequencer nodes aside, some other issues need to be addressed to bring the first real-time blockchain to life. The tech talk continues.
There is this little process known as “state sync” that every blockchain needs so that full nodes can stay in step with the sequencer, and making this mechanism highly efficient is quite the challenge in itself.
Long story short, naively broadcasting the state changes produced by something as simple as an ERC-20 transfer currently takes up more bandwidth than sending the data in its original, raw form.
In order to reduce the bandwidth needed, these state updates need to be seriously compressed somehow.
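As a toy illustration of the idea, here’s a sketch that serializes a batch of state diffs and compresses them before they would be broadcast. zlib is just a stand-in; whatever compression scheme MegaETH actually lands on will be far more specialized.

```python
# Toy state-sync compression: serialize a batch of state diffs and
# compress them before broadcasting to full nodes. zlib is a stand-in
# for whatever specialized scheme a real chain would use.
import json, zlib

state_diffs = [
    {"address": f"0x{i:040x}", "slot": "0x0", "value": i * 10}
    for i in range(1_000)
]

raw = json.dumps(state_diffs).encode()
compressed = zlib.compress(raw, level=9)

print(f"raw: {len(raw):,} bytes, compressed: {len(compressed):,} bytes")
# Repetitive diffs compress well, meaning less bandwidth per state update.
```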
A typical blockchain data structure can be thought of as a tree, a Merkle tree, for that matter. The state root is the key element of the tree structure that both sequencers and full nodes need to maintain.
In its simplest form, from a non-dev, left-curver like myself, *deep breath*... every time a change in state takes place, the leaves on the tree that store key values in the form of hashes, which point to the “child nodes” related to the change in state, need to be updated and the hashes recomputed all the way up to the root.
This results in the need for millions of data reads and a shitload of compute power.
To make this process more efficient, MegaETH will group these leaves together into “subtrees” to reduce the number of data reads needed to update the state root.
This is a highly complex process that even the MegaETH team says is still suboptimal for the kind of performance they are aiming for.
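For the curious, here’s a toy Merkle tree in Python that shows why even a single state change is expensive: every hash above the changed leaf has to be recomputed. This is a simplified illustration of the general technique, not MegaETH’s actual data structure.

```python
# Toy Merkle tree: changing one leaf invalidates every hash above it,
# so the hashes up to the root must be recomputed. Simplified sketch.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

leaves = [f"account-{i}".encode() for i in range(8)]
root = merkle_root(leaves)

leaves[3] = b"account-3-updated"   # one state change...
new_root = merkle_root(leaves)     # ...and the root must be recomputed
assert new_root != root

# Grouping leaves into subtrees lets you cache each subtree's root and
# recompute only the affected subtree plus the top of the tree, cutting
# down the reads needed per state update.
```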
Then, there is the problem presented by block gas limits. These are the caps on the maximum amount of gas that can be consumed within a single block.
These limits exist to ensure that all nodes can keep pace with each other and nobody gets left behind in the race of the rabbits.
They also ensure that blocks can be reliably produced within the block time as failing to do so would open up a variety of attack vectors for would-be exploiters.
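Here’s a minimal sketch of how a gas limit constrains block building. The numbers are illustrative; actual limits vary by chain.

```python
# Toy block builder honoring a gas limit: pack transactions until the
# next one would push cumulative gas over the cap. Illustrative numbers.
BLOCK_GAS_LIMIT = 30_000_000  # mainnet-scale figure; varies by chain

pending = [{"id": i, "gas": 500_000} for i in range(100)]

block, gas_used = [], 0
for tx in pending:
    if gas_used + tx["gas"] > BLOCK_GAS_LIMIT:
        break  # leave it for the next block so nodes can keep pace
    block.append(tx)
    gas_used += tx["gas"]

print(f"packed {len(block)} txs using {gas_used:,} gas")  # 60 txs
```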
The MegaETH team is looking at these issues very closely, and solving these problems will be the key to producing the real-time blockchain in Vitalik's wet dreams.
Real giga-brained stuff, no doubt!
Why does any of this MegaETH stuff matter?
In order for EVM chains to reach their full potential and put up a fight with the web2 world, things need to be faster.
The low transaction throughput of current EVM chains simply does not compare to modern database servers in web2, which can process millions of transactions per second.
If crypto is really going to eat the world, then this needs to change fast, and MegaETH is looking to take big steps toward making this a reality.
The speed of the underlying chain, in turn, determines the capability of the dApps built on it. Speed and scalability are paramount for decentralized applications to compete with those in the traditional web world.
Compute power needs to be increased or made more efficient, and block time will need to be drastically reduced.
Current block intervals of a second or more will need to be boiled down into milliseconds to keep pace with the traditional web and bring things like real-world physics and high-frequency trading on-chain.
MegaETH has all these things in mind and is working to solve these problems in order to produce a real-time blockchain capable of instantly processing transactions and publishing the resulting updates in real-time, even when congestion levels are high.
Closing thoughts on MegaETH
Oversaturation of the entire crypto market is definitely a valid concern this cycle, and in order for a new project to stand out, it will need something really special to set it apart from the rest.
The concept of a real-time blockchain that will allow users to experience never-before-seen levels of performance and builders to build without the limitations of the past is certainly an intriguing product in itself.
MegaETH has clarified that it will be more than just a supercharged centralized sequencer.
The team has looked deeper at the problems faced by current blockchain technology and aims to build something that, quote, “leaves little room for further performance improvements to the crypto infrastructure, so the industry can finally divert resources to other challenges hindering adoption.”
Maybe put a hat on it?
With the live testnet coming in the next few months, degens and nerds alike will be able to experience the future for themselves.
Let’s just hope this is what is needed to send us to those ungodly levels we have all been patiently waiting for.