
Auditor's Wishlist for Ethereum Security in 2026 by ChainSecurity

May 4, 2026

We've been auditing smart contracts since 2017 (before DeFi existed) and have watched smart contract auditing evolve from simple reentrancy checks to securing protocols holding billions. Yet certain issues persist. To address them, we've compiled a "wishlist" of five critical security improvements focused on essential public goods, sourced from our research team, including our seven ETH Security Badge holders. The list isn't exhaustive, and we decided not to repeat some of the more obvious solutions.

1. Whitehat fuzzing infrastructure that runs on real chain state

A meaningful fraction of the bugs disclosed and exploited in the past two years were found by fuzzing. Attackers run fuzzers continuously against critical contracts, while whitehats run them sporadically, during audit windows, on local forks.

Local fuzzers don't have months of mainnet history to draw on. Instead, they synthesize states that look plausible but never occur on chain, and miss the ones that do.

What we'd suggest:

Shared fuzzing infrastructure where any whitehat can submit a target (a contract address plus a set of invariants or assertions) and have it fuzzed continuously against forked mainnet state, with real prices, real liquidity, and real upgrade history. The bugs that take attackers months of patient observation to surface should be findable by a coordinated whitehat effort in days.

This exists in pieces through Echidna, Medusa, Foundry's invariant testing, various proprietary internal fuzzers, and of course excellent paid solutions like Tenderly that target developers. However, none of them runs as a coordinated, persistent, mainnet-state service aimed at whitehats.
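To make the idea concrete, here is a minimal, self-contained sketch of an invariant fuzzing loop. The `ToyPool` model and its constant-product invariant are hypothetical stand-ins: a real harness of the kind described above would drive a forked-mainnet deployment instead of an in-memory model.

```python
import random

# Toy constant-product AMM: a hypothetical stand-in for a real target
# contract deployed on a mainnet fork.
class ToyPool:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def swap_x_for_y(self, dx):
        # Integer division mirrors EVM math; rounding favors the pool.
        dy = (self.y * dx) // (self.x + dx)
        self.x += dx
        self.y -= dy

def fuzz_invariant(seed=0, iterations=1000):
    """Drive random swaps and assert the invariant after each one."""
    rng = random.Random(seed)
    pool = ToyPool(1_000_000, 1_000_000)
    k = pool.x * pool.y
    for _ in range(iterations):
        pool.swap_x_for_y(rng.randint(1, 10_000))
        # Invariant: rounding must never let x*y decrease.
        assert pool.x * pool.y >= k, "invariant violated"
        k = pool.x * pool.y
    return k
```

The same shape (random inputs, invariant re-checked after every state change) is what Echidna and Foundry's invariant testing automate; the missing piece is running it persistently against live chain state.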

2. Fix problems once in the compiler, not a hundred times in user code

When a non-trivial protocol exceeds the EVM contract size limit (24,576 bytes, per EIP-170), developers split their logic across multiple contracts and immediately inherit a new class of problems: access control between modules, state consistency across calls, cross-contract reentrancy, and upgrade coordination, to name a few. Each split is an opportunity to introduce a bug that wouldn't have existed in a monolithic version.

Compiler work is invisible, slow, and benefits everyone except the engineers doing it. There's no tokenomic incentive aligned with making solc and vyper better at handling the size limit, and no obvious customer to pay for it. The Ethereum Foundation funds what gets funded.

What we'd suggest:

A developer should be able to write a contract of arbitrary logical size and have the compiler handle the splitting. The compiler decides what goes where, manages the cross-module access patterns, and proves the resulting deployment is equivalent to the source. Get this right twice, once in Solidity and once in Vyper, and an entire category of bugs disappears.

This is more ambitious than it sounds. It requires the compiler to reason about state layout, call patterns, and reentrancy across a deployed contract graph rather than a single bytecode blob. But it's the right layer to fix the problem at, and it's tractable.

3. Composability is a lifetime risk, not an audit-time check

A contract that passes an audit today is exposed tomorrow when one of its dependencies changes. A vault you integrate adjusts its share calculation, an oracle you trust changes its underlying source, a pool you depend on rebalances or adds a new asset type, and so on. None of these require code changes on the protocol side, but all of them can move you from "safe" to "exploitable".

We originally disclosed read-only reentrancy in 2022 against Curve and Balancer V2 pools. The same bug class kept resurfacing for years afterward, not because anyone deployed new vulnerable code, but because new integrators kept connecting safe-looking code to pools that exposed the underlying state.

The same dynamic now shows up around ERC-4626. The standard is widely adopted, and also widely misunderstood: rounding behavior, share-to-asset edge cases, donation attacks, and integration assumptions that hold in one vault and break in another. We see near-misses in this space monthly. The standard shipped without a conformance test suite, so every integrator interprets it slightly differently. Bugs, of course, follow.
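The rounding and donation issues can be shown in a few lines. The following is an illustrative toy model of ERC-4626-style share math, not a reference implementation; it shows the round-down direction and how a direct "donation" inflates the exchange rate until a victim's deposit mints zero shares.

```python
# Toy ERC-4626-style conversions. The standard requires rounding in the
# vault's favor: both directions round down here (integer division).
def convert_to_shares(assets, total_assets, total_shares):
    # Round down: a depositor must never mint more shares than deserved.
    return assets * total_shares // total_assets

def convert_to_assets(shares, total_assets, total_shares):
    # Round down: a redeemer must never withdraw more than deserved.
    return shares * total_assets // total_shares

# Donation attack sketch: the attacker seeds the vault with a dust
# deposit, then transfers assets directly (no shares minted), inflating
# the share price so a victim's deposit rounds down to 0 shares.
total_assets, total_shares = 1, 1      # attacker's dust deposit
total_assets += 10_000                  # direct transfer, no mint
victim_shares = convert_to_shares(5_000, total_assets, total_shares)
# victim_shares == 0: the deposit is absorbed by existing shareholders.
```

Real mitigations (virtual shares/assets offsets, minimum initial deposits) exist precisely because integrators kept rediscovering this edge case.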

There's no canonical map of the dependency graph. When a protocol pushes an upgrade, integrators find out by watching governance forums, scanning transaction traces, or getting a call from a security researcher who happened to notice.

What we'd suggest:

Two things, ideally together:

  1. A real dependency map of the ecosystem. Protocol-level integrations, not just import graphs. This would also make coordinated disclosure dramatically more efficient. When a bug class is identified, you should be able to query "who else is exposed?" and get an answer in minutes rather than weeks.
  2. Runtime invariant checking as a standard practice, not an afterthought. The "this assert should never trigger, but just in case" line has stopped exploits. Defensive depth is unfashionable because it implies you don't trust your own code. It should be table stakes.
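The second point can be sketched as a thin wrapper that re-checks a core invariant after every state-changing call. All names here (`check_solvency`, the `Vault` fields, the deliberately buggy `borrow`) are hypothetical examples of the pattern, not anyone's production code.

```python
# Defensive depth as a reusable pattern: wrap state-changing operations
# so a core invariant is re-verified after each call, and violations
# halt execution instead of silently corrupting state.
def enforce(invariant):
    def wrap(fn):
        def inner(self, *args, **kwargs):
            result = fn(self, *args, **kwargs)
            # The "this should never trigger, but just in case" line.
            assert invariant(self), f"invariant broken after {fn.__name__}"
            return result
        return inner
    return wrap

def check_solvency(vault):
    return vault.assets >= vault.liabilities

class Vault:
    def __init__(self):
        self.assets = 100
        self.liabilities = 0

    @enforce(check_solvency)
    def borrow(self, amount):
        self.liabilities += amount  # deliberately missing collateral check
```

The buggy `borrow` succeeds for small amounts, but the moment it would make the vault insolvent, the invariant check stops it. In Solidity the same idea is an `assert`/`require` in a modifier run after the function body.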


4. A CVE for Ethereum, and funding for the boring infrastructure

Smart contract vulnerabilities are scattered across blog posts, governance forums, post-mortems, and Twitter threads. The quality varies wildly. Some are public and well-documented. Others are public but buried. Others are technically public, disclosed in a forum nobody reads, but functionally invisible to anyone who'd benefit from knowing.

There is no Ethereum equivalent of the CVE database. There is no canonical place to look up "has this exact bug pattern been seen before, and where, and how was it fixed?" New protocols rediscover the same classes of bugs because there's no shared memory.
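What would such a registry's minimum viable record look like? A hypothetical sketch (every field name and the `ECVE-` identifier scheme are invented here for illustration, not an existing schema):

```python
from dataclasses import dataclass, field

# Hypothetical registry entry: the minimum structure needed to answer
# "has this exact bug pattern been seen before, and how was it fixed?"
@dataclass
class VulnRecord:
    ident: str                  # e.g. "ECVE-2022-0001" (invented scheme)
    bug_class: str              # e.g. "read-only-reentrancy"
    affected: list = field(default_factory=list)    # contract addresses
    fix_summary: str = ""
    references: list = field(default_factory=list)  # post-mortems, PRs

def find_by_class(registry, bug_class):
    # The query integrators lack today: prior art by bug pattern.
    return [r for r in registry if r.bug_class == bug_class]
```

The value is not the data structure; it is the maintenance commitment and the shared namespace, which is exactly what CVE provides for traditional software.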

The same pattern shows up in standards. ERC-4626 doesn't ship with conformance test vectors. SEAL 911 and SEAL Frameworks run on volunteer goodwill. Client testing infrastructure is chronically underfunded compared to its importance. Every one of these is the same problem: nobody owns shared infrastructure, so it gets built once by a hero and then slowly rots.

Public goods problems are hard. The work doesn't generate revenue, the people who benefit are diffuse, and the people who'd fund it are usually focused on their own protocol's roadmap. The Ethereum Foundation and various grant programs fund some of this.

What we'd suggest:

A maintained, structured vulnerability registry. Conformance test suites for major EIPs (starting with 4626, since that's where we see the most preventable issues). Sustained funding for SEAL 911-style coordination, not goodwill, but actual organizational capacity. Sustained funding for client testing and fuzzing infrastructure, at the level befitting code that secures hundreds of billions of dollars.

5. Account compromises and signing flows

A signed-transaction trick drained a multi-billion-dollar exchange, frontends have been taken over via compromised dependencies, phishing flows look identical to legitimate ones, and private keys have been lifted from compromised devices, malicious browser extensions, and social engineering. This is the elephant in the room for ETH security.

This is not new, and it is being addressed from many angles. MetaMask, Rabby, and others include some form of alerting, transaction simulation, argument decoding, and much more. Still, it all feels like a band-aid: it is often circumvented by inventive attackers, or (more often) skipped by overwhelmed users.

Secure UX requires the user to understand what they're approving. EIP-712 makes structured signing possible, but the gap between "structured signing is possible" and "users actually understand what they're signing" is enormous. Hardware wallets still show truncated addresses, and browser wallets are racing for engagement, not safety. Frontends can be replaced. Extensions can lie.

What we'd suggest:

Signing flows where dangerous actions are difficult by default, not where security is an advanced setting most users never find. Wallets that simulate the actual outcome of a transaction, show it in human-readable terms, e.g. "you will lose 12 ETH and gain 0 USDC", and refuse to proceed without an explicit second confirmation when the simulation looks unusual. Hardware that displays meaningful transaction summaries, not raw calldata. Frontend integrity checks that don't depend on the user noticing.
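The wallet-side logic described above can be sketched in a few lines. The balance deltas here are hardcoded stand-ins for a real simulation backend, and the `looks_unusual` heuristic is one hypothetical rule among many a wallet could apply:

```python
# Sketch of the flow: simulate, summarize in human terms, and gate
# lopsided outcomes behind an explicit second confirmation.
def summarize(deltas):
    """Render simulated balance changes as plain-language statements."""
    parts = []
    for token, amount in deltas.items():
        verb = "gain" if amount >= 0 else "lose"
        parts.append(f"you will {verb} {abs(amount)} {token}")
    return ", ".join(parts)

def looks_unusual(deltas):
    # Hypothetical heuristic: the user only loses value and gains
    # nothing in return -- the signature of approval phishing.
    values = deltas.values()
    return all(a <= 0 for a in values) and any(a < 0 for a in values)

deltas = {"ETH": -12, "USDC": 0}   # stand-in for a simulation result
message = summarize(deltas)
require_second_confirmation = looks_unusual(deltas)  # True here
```

A production wallet would derive `deltas` from an actual execution trace; the point is that the refusal logic runs on the simulated outcome, not on decoded calldata the user is expected to interpret.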

6. Conclusion

To recap, here is what we'd most like to see prioritized to meaningfully raise the floor for Ethereum security:

  • Better fuzzing infrastructure for whitehats
  • Fixing the contract size problem at the compiler level
  • Treating composability as a runtime concern, not an audit-time one
  • Funding the shared infrastructure nobody owns
  • And the one thing that could change the game: raising the floor on wallet UX

Obviously the list isn't complete, and plenty of other things matter too. These are just what's been on our minds lately.


Because a more secure Ethereum benefits everyone, we believe in funding the work that protects it and are also supporting today’s community efforts. Every ChainSecurity employee has been granted funds to donate to the projects they deem most vital in the current ETH Security Quadratic Funding round by The DAO Fund & Giveth.