The numbers are still disturbing
Over $2 billion was lost to smart contract exploits in 2025. That number has been roughly consistent for three years running. Despite better tooling, more security researchers, more public post-mortems, and more standardised patterns - the problem persists.
Why?
Because most of the losses aren't coming from novel, sophisticated zero-day attacks. They're coming from known vulnerability classes that were present in the code from the start. Reentrancy, price oracle manipulation, flash loan exploits, access control failures - the same patterns, exploited again and again, on protocols that either didn't audit or didn't audit thoroughly enough.
For founders building in Web3, smart contract security isn't a technical nicety. It's product. It's trust. It's whether your users get to keep their money.
Here's where the risk actually lives, and what you can do about it.
Understand the threat model before writing a line
Security decisions can't be made in a vacuum. They require understanding what you're protecting, from whom, and at what cost.
A DEX holding $500m in liquidity has a fundamentally different threat model from a token presale contract handling $100k. The attack surface is different. The incentive for attackers is different. The appropriate security spend is different.
Before any audit or security review, every Web3 team should be able to answer:
- What is the maximum value at risk at any given moment?
- Which functions are privileged, and who controls them?
- What external dependencies does this contract have (oracles, other protocols), and what happens if they're manipulated or compromised?
- What is the worst-case outcome of a breach, and can users be made whole?
If you can't answer these questions, you can't brief an auditor effectively - and you'll miss the gaps in their coverage.
The persistent vulnerability classes of 2025–2026
The threat landscape evolves, but some vulnerability classes refuse to go away.
Reentrancy. Still here. Still killing projects. The fix (checks-effects-interactions pattern, or OpenZeppelin's ReentrancyGuard) is well understood and costs almost nothing to implement. There is no excuse for a new contract in 2026 to be vulnerable to reentrancy. And yet.
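The pattern is easy to see in miniature. This is a conceptual sketch in Python, not real contract code - the class names and amounts are illustrative - but it captures exactly why ordering matters: the vulnerable vault sends funds before updating state, so the attacker's receive-hook can re-enter `withdraw()` while the old balance is still on the books.

```python
class VulnerableVault:
    """Sends funds before updating state: an attacker's receive-hook
    can re-enter withdraw() while the old balance is still recorded."""
    def __init__(self):
        self.balances = {}

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount

    def withdraw(self, who, send):
        amount = self.balances.get(who, 0)
        if amount == 0:
            return
        send(amount)                    # external call FIRST (the bug)
        self.balances[who] = 0          # state update LAST


class SafeVault(VulnerableVault):
    """Checks-effects-interactions: zero the balance before the call."""
    def withdraw(self, who, send):
        amount = self.balances.get(who, 0)
        if amount == 0:
            return                      # check
        self.balances[who] = 0          # effect
        send(amount)                    # interaction


def attack(vault, depth=3):
    """Attacker's payment callback re-enters withdraw() up to `depth` times."""
    stolen = []
    def malicious_send(amount):
        stolen.append(amount)
        if len(stolen) < depth:
            vault.withdraw("attacker", malicious_send)
    vault.deposit("attacker", 100)
    vault.withdraw("attacker", malicious_send)
    return sum(stolen)


print(attack(VulnerableVault()))  # 300: the 100 deposit is drained three times
print(attack(SafeVault()))        # 100: the re-entrant call sees a zero balance
```

The fix is a one-line reordering - which is precisely why shipping this bug in 2026 is inexcusable.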
Price oracle manipulation. If your protocol's behaviour depends on an asset price - for liquidations, for collateral ratios, for fee calculations - that price can be manipulated. Spot prices from a single AMM pool can be moved by a flash loan in a single transaction. TWAP oracles are harder to manipulate but not immune. Chainlink feeds are robust but introduce centralisation risk. Know your oracle, know its assumptions, and know what happens when it lies.
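The gap between a spot price and a time-weighted average is worth seeing with numbers. The sketch below (illustrative figures, not any real pool) models a constant-product (x·y = k) pool and a naive 31-sample TWAP: a single-block swap moves the spot price 100x, while the average barely registers.

```python
class Pool:
    """Constant-product AMM pool: token reserve x, USD reserve y, x*y = k."""
    def __init__(self, token_reserve, usd_reserve):
        self.x, self.y = token_reserve, usd_reserve

    def spot_price(self):
        return self.y / self.x           # USD per token

    def swap_in_usd(self, usd_in):
        """Buy tokens with USD; reserves move along x*y = k."""
        k = self.x * self.y
        self.y += usd_in
        self.x = k / self.y


pool = Pool(1_000_000, 1_000_000)        # price starts at $1.00
twap_window = [pool.spot_price()] * 30   # 30 prior observations at $1.00

pool.swap_in_usd(9_000_000)              # attacker shoves the price in one block
twap_window.append(pool.spot_price())

spot = pool.spot_price()
twap = sum(twap_window) / len(twap_window)
print(f"spot after swap: ${spot:.2f}")   # $100.00 - a 100x manipulation
print(f"31-sample TWAP:  ${twap:.2f}")   # $4.19 - heavily dampened
```

Dampened is not immune: an attacker who can hold the price up across many observation windows still moves a TWAP, which is why the window length is itself a security assumption.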
Access control failures. A function that should be owner-only isn't. An initializer that can be called by anyone. A proxy upgrade mechanism with insufficient guards. These mistakes are embarrassingly common and often exploitable in minutes by anyone who reads your contract. Review every privileged function. Use a multi-sig for anything that matters. Consider a timelock on critical operations.
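The unguarded initializer deserves a sketch, because it is the most common flavour of this bug in upgradeable contracts. The Python below is an illustration of the pattern, not real proxy code; the names `initialize` and `owner` mirror common conventions.

```python
class UpgradeableContract:
    """BUG: initialize() has no caller check and no already-initialized
    check, so the first stranger to call it becomes owner."""
    def __init__(self):
        self.owner = None
        self.initialized = False

    def initialize(self, caller):
        self.owner = caller
        self.initialized = True

    def upgrade(self, caller, new_impl):
        if caller != self.owner:
            raise PermissionError("not owner")
        self.implementation = new_impl


class SafeContract(UpgradeableContract):
    """Guarded: one-shot initializer, restricted to the deployer."""
    def __init__(self, deployer):
        super().__init__()
        self.deployer = deployer

    def initialize(self, caller):
        if self.initialized:
            raise RuntimeError("already initialized")
        if caller != self.deployer:
            raise PermissionError("only deployer")
        super().initialize(caller)


victim = UpgradeableContract()
victim.initialize("attacker")   # nothing stops this
print(victim.owner)             # attacker - who now controls upgrades
```

Anyone who reads the deployed bytecode and notices the missing guard can take ownership in one transaction, which is why these failures are exploited in minutes rather than weeks.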
Logic errors in business rules. This category is the hardest to catch with automated tools because the vulnerability isn't in the code's syntax - it's in the gap between what the code does and what the developer intended it to do. Rounding errors in reward calculations, off-by-one errors in vesting schedules, incorrect ordering of operations. These require human auditors who understand the protocol's intent, not just its implementation.
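A rounding bug is the simplest member of this family. Since Solidity has no floating point, reward maths is integer maths - and computing each user's slice independently silently drops remainders. The figures below are illustrative:

```python
def naive_share(reward_pool, user_stake, total_stake):
    # Each user's slice computed independently with integer division;
    # the remainder simply vanishes.
    return reward_pool * user_stake // total_stake


stakes = [333, 333, 334]
total = sum(stakes)            # 1000
pool = 100

paid = [naive_share(pool, s, total) for s in stakes]
print(paid, sum(paid))         # [33, 33, 33] 99 - one unit stranded forever
```

One stranded unit looks harmless; the same arithmetic inside a vesting schedule or a share-price calculation can strand funds permanently or, worse, round in the attacker's favour on every call. Automated tools see perfectly valid syntax here - only a human who knows the payouts were supposed to sum to the pool will flag it.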
Flash loan-enabled attacks. Flash loans are legitimate features that can be weaponised to amplify any other vulnerability. A price manipulation attack that would require $10m to execute with normal capital might require $0 with a flash loan. When modelling attacks, assume the attacker has infinite capital for the duration of a single transaction.
The audit is not a security guarantee
A common misconception: that "we've been audited" is equivalent to "we're secure."
Audits are an important and necessary part of a security program. They are not sufficient on their own, and treating them as a security certificate is a category error.
Audit coverage is bounded by the auditor's time, the scope defined in the brief, and the quality of the specification. Auditors can only find vulnerabilities in code they read and in the threat model they're given. If your specification is incomplete, if you update the code after the audit, or if the auditor misunderstands the intended function of a critical component - vulnerabilities will survive the audit process.
Practical implications:
- Never deploy code that differs from what was audited without re-auditing the diff
- Provide auditors with a detailed specification that explains not just what the code does but what it's supposed to do
- Run automated tooling (Slither, Mythril, Echidna) before and after the audit - not as a substitute, but as a complementary layer
- Consider multiple auditors for high-value contracts; different firms find different things
And understand that audits find problems in the code as it exists. They can't account for how the system behaves after it integrates with protocols that didn't exist when the audit was performed.
On-chain monitoring is not optional
Most exploits don't happen instantly. Large attacks often start with a test transaction - a small probe to confirm the vulnerability exists - before the attacker scales up.
On-chain monitoring exists precisely to catch this window. Services like Forta, Tenderly, and OpenZeppelin Defender can watch for anomalous transaction patterns, unexpected privilege escalations, large unusual withdrawals, or function calls that don't match expected behaviour.
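The simplest useful rule in any of these systems is "this withdrawal is wildly out of line with recent history." Here's a toy version of that rule in Python - the event shape, window size, and threshold are all assumptions for illustration, not any particular service's API:

```python
from collections import deque

class WithdrawalMonitor:
    """Flags withdrawals far above the rolling average of recent ones."""
    def __init__(self, window=100, multiple=10):
        self.recent = deque(maxlen=window)
        self.multiple = multiple

    def observe(self, amount):
        """Return an alert string if `amount` is anomalous, else None."""
        alert = None
        if self.recent:
            baseline = sum(self.recent) / len(self.recent)
            if amount > baseline * self.multiple:
                alert = (f"withdrawal {amount} exceeds "
                         f"{self.multiple}x baseline {baseline:.0f}")
        self.recent.append(amount)
        return alert


mon = WithdrawalMonitor()
for amt in [120, 95, 140, 110]:    # routine activity: no alerts
    assert mon.observe(amt) is None
print(mon.observe(50_000))         # the probe-then-drain fires an alert
```

Real deployments layer many such rules - privilege changes, paused-state toggles, known attacker addresses - but even this one, wired to a pager, buys you the minutes between an attacker's test transaction and the full drain.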
If you expect to hold funds at risk, have monitoring in place before those funds arrive. Not after.
Additionally, a well-publicised emergency contact and response process - a security disclosure email, a public commitment to respond within X hours, a defined process for pausing the contract if needed - signals to security researchers that responsible disclosure is worth their time. Many exploits are caught by researchers who would rather report them than run them. Make it easy for them to reach you.
The economics of security spending
Here is a useful heuristic: your security budget should scale with your maximum value at risk.
For a contract expecting to hold up to $1m: a single reputable audit, automated tooling, and basic on-chain monitoring make a reasonable baseline. Budget $30–60k.
For a contract expecting to hold $10m+: two independent audits from firms with complementary methodologies, a formal verification engagement for critical components, a bug bounty program post-launch, and active on-chain monitoring. Budget accordingly.
The framing we recommend to clients: what would it cost to make your users whole after a catastrophic exploit? Usually, that number is larger than the cost of any security measure you'd consider. The economics of prevention almost always beat the economics of recovery - and recovery often isn't possible at all.
The trust dimension
Everything above is technical. But smart contract security is ultimately a trust problem.
Your users are putting funds into a system they can't fully audit, operated by a team they may not know, built on code they may not be able to read. The fact that the code is on-chain and visible is a feature - but most users aren't equipped to evaluate it.
Security practices are part of how you communicate that the system is trustworthy. Publishing audit reports. Running a bug bounty. Being transparent about what's been reviewed and what's in scope. Responding publicly and promptly to security questions. These aren't just technicalities - they're signals that influence whether sophisticated users are willing to deploy capital into your protocol.
In a space where trust is scarce and catastrophic failures are highly visible, security isn't a cost center. It's a product differentiator. Build accordingly.
