The dApp is not the smart contract
One of the most common misconceptions we see among first-time Web3 founders: a conviction that building a dApp means writing smart contracts, and that once the contracts are deployed the product is mostly done.
The smart contract is the settlement layer. It's the part that executes trustlessly, that lives on-chain, that can't be changed (or can only be changed in the ways you explicitly designed). It's important - often the most security-critical part of the whole system.
But it's almost never the majority of the work.
The majority of the work is everything else: the off-chain infrastructure that makes the contracts usable, the indexing layer that gives you fast query access to on-chain state, the frontend that makes actions feel safe and simple, the wallet integration that works across browsers and devices, the testing environment that lets you simulate complex scenarios before deploying with real funds.
First-time founders consistently underestimate this part of the build. Understanding the full scope before you start is how you avoid building yourself into a corner six weeks in.
Choosing a chain
The first real product decision in any Web3 build. The honest answer is: unless you have a specific reason to choose otherwise, start on an EVM-compatible chain.
Ethereum is the most battle-tested, has the deepest developer tooling ecosystem, and is where the audit firms you'll eventually hire have done the bulk of their work. The problem is cost and speed at the base layer.
That's why most new products build on an L2 - Arbitrum, Base, Optimism, or Polygon. These give you EVM compatibility (meaning your Ethereum knowledge transfers directly, your Solidity tooling works, your auditors have experience) with dramatically lower transaction costs and faster finality.
If you're building something that needs high-frequency transactions or very low latency - a trading protocol, an on-chain game, something where actions per second matter - you'll look at chains like Solana that trade EVM familiarity for raw throughput. That trade-off is real: the developer ecosystem is smaller, the tooling is younger, and finding experienced auditors requires more work.
In almost every case, the right answer is the chain where your target users already are, where the liquidity or infrastructure your product depends on already lives, and where you can find security expertise. Start there.
What actually lives on-chain vs. off-chain
This is where many dApp architectures go wrong. There's a temptation - especially for developers coming from a smart contract background - to push everything on-chain. Trustless by default.
The problem is that on-chain storage and computation are expensive, slow compared to databases, and fundamentally limited in what they can express. On-chain state is for the things where trustless settlement matters. Everything else should live off-chain.
A reasonable mental model:
On-chain (lives in smart contracts):
- Asset ownership and transfer logic
- Core business rules where trustless enforcement matters
- Access control for privileged operations
- Treasury management and fund movements
- Governance votes and outcomes
Off-chain (lives in your backend/infrastructure):
- User profiles, preferences, metadata
- Search and discovery
- Notification systems
- Analytics
- Anything that needs to be fast, cheap to update, or frequently queried
The indexing layer (sits between them):
This is the part founders forget. Smart contracts emit events when state changes. Querying those events directly from a node is slow and expensive. An indexing service - The Graph, Ponder, or a custom event listener - watches for contract events and writes them into a queryable database. Your frontend then queries the index, not the chain directly.
Without this layer, your UI either loads slowly (polling the chain) or doesn't load at all (too complex to reconstruct state client-side). Build it in from the start.
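What the indexing layer does can be sketched in a few lines: fold decoded contract events into a derived view the frontend can query instantly. The event shape and addresses below are hypothetical - a real indexer decodes logs from an RPC node or uses The Graph or Ponder - but the core idea is just this fold:

```typescript
// Minimal sketch of an indexer's core job: fold decoded contract
// events into a queryable view. Event shape is illustrative.
type TransferEvent = { from: string; to: string; amount: bigint; block: number };

function indexBalances(events: TransferEvent[]): Map<string, bigint> {
  const balances = new Map<string, bigint>();
  for (const e of events) {
    // Mint-style transfers come from the zero address; don't debit it.
    if (e.from !== "0x0") {
      balances.set(e.from, (balances.get(e.from) ?? 0n) - e.amount);
    }
    balances.set(e.to, (balances.get(e.to) ?? 0n) + e.amount);
  }
  return balances;
}

// The frontend queries this derived state, not the chain.
const view = indexBalances([
  { from: "0x0", to: "0xalice", amount: 100n, block: 1 },
  { from: "0xalice", to: "0xbob", amount: 40n, block: 2 },
]);
console.log(view.get("0xalice")); // 60n
console.log(view.get("0xbob")); // 40n
```

A production indexer also handles chain reorganizations (rolling back events from orphaned blocks), which is a good reason to reach for The Graph or Ponder rather than hand-rolling.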
The wallet integration layer
Your users interact with your contracts through a wallet. The wallet connection layer is where you'll spend a surprising amount of time.
The options range in complexity. At the simple end: wagmi + RainbowKit gives you a React hook library for wallet interactions and a pre-built wallet connection modal that handles most popular wallets. This is the right starting point for most products.
What to think about:
Mobile. Desktop browser extension wallets don't work on mobile. WalletConnect is the bridge - it lets mobile wallets connect to dApps via QR code or deep link. Your wallet library should handle this, but test it explicitly. Mobile wallet UX is where most dApps have the most friction.
Multi-chain. If your product works across multiple chains, the wallet connection needs to handle chain switching. This sounds simple but breaks in subtle ways if the user is on the wrong chain and tries to submit a transaction.
Transaction state. Submitted, pending, confirmed, failed - your UI needs to handle all four states gracefully. A transaction submitted to the mempool might sit there for minutes. Users need feedback. A transaction that fails with a revert reason needs a human-readable error, not a hex error code.
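Modeling the four states as a discriminated union makes it hard for the UI to forget one. A sketch - the revert-reason strings and their translations are illustrative; real dApps decode custom errors from the contract ABI:

```typescript
// The four transaction states the UI must render, as a discriminated
// union so TypeScript enforces exhaustive handling.
type TxState =
  | { status: "submitted"; hash: string }
  | { status: "pending"; hash: string }
  | { status: "confirmed"; hash: string; blockNumber: number }
  | { status: "failed"; hash: string; revertReason: string };

// Illustrative mapping from raw revert reasons to human-readable copy.
const FRIENDLY_ERRORS: Record<string, string> = {
  "Pausable: paused": "The protocol is temporarily paused. Try again later.",
};

function describe(tx: TxState): string {
  switch (tx.status) {
    case "submitted": return "Transaction sent to the network…";
    case "pending": return "Waiting for confirmation…";
    case "confirmed": return `Confirmed in block ${tx.blockNumber}.`;
    case "failed":
      // Translate known revert reasons; fall back to the raw string,
      // which is still better than a hex error code.
      return FRIENDLY_ERRORS[tx.revertReason] ?? `Transaction failed: ${tx.revertReason}`;
  }
}

console.log(describe({ status: "failed", hash: "0xabc", revertReason: "Pausable: paused" }));
```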
Testing before deployment
One of the most dangerous things about smart contract development is the cost of mistakes. You can't roll back a deployed contract. You can't patch a loss of funds after the fact.
This makes testing more important here than almost anywhere else in software development. The minimum bar for any contract handling real funds:
Unit tests. Every function, every edge case, happy paths and failure paths. Forge (part of the Foundry toolkit) has become the standard here - it's fast, runs in Solidity, and makes parameterised fuzzing easy.
Fork testing. Deploy your contracts against a forked copy of mainnet state and run integration tests against real protocol conditions. This catches issues that synthetic test environments miss - oracle prices, liquidity conditions, protocol interactions.
Invariant testing. Define properties that should always be true ("the total supply never exceeds X", "a user's balance never exceeds their deposit") and let the fuzzer try to break them. This surfaces logical errors that directed tests miss because they test combinations of actions you didn't think to write manually.
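The principle translates to any language, even though in practice you'd write this in Forge. A toy version in TypeScript - the vault model and the linear-congruential "fuzzer" are illustrative - shows the shape: random action sequences, with the invariant asserted after every step:

```typescript
// Toy invariant test: run random deposit/withdraw sequences against
// a model and assert a property after every step. The invariant here:
// the vault's recorded total always equals the sum of user balances.
class Vault {
  private balances = new Map<string, number>();
  total = 0;
  deposit(user: string, amount: number) {
    this.balances.set(user, (this.balances.get(user) ?? 0) + amount);
    this.total += amount;
  }
  withdraw(user: string, amount: number) {
    const bal = this.balances.get(user) ?? 0;
    if (amount > bal) throw new Error("insufficient balance");
    this.balances.set(user, bal - amount);
    this.total -= amount;
  }
  balanceOf(user: string) { return this.balances.get(user) ?? 0; }
}

function fuzz(steps: number, seed = 42): number {
  // Deterministic pseudo-random source (minstd), so failures reproduce.
  let s = seed;
  const rand = () => (s = (s * 48271) % 2147483647) / 2147483647;
  const vault = new Vault();
  const users = ["alice", "bob"];
  for (let i = 0; i < steps; i++) {
    const user = users[Math.floor(rand() * users.length)];
    const amount = Math.floor(rand() * 100);
    try {
      if (rand() < 0.5) vault.deposit(user, amount);
      else vault.withdraw(user, amount);
    } catch { /* reverts are expected; the invariant must hold anyway */ }
    const sum = users.reduce((acc, u) => acc + vault.balanceOf(u), 0);
    if (sum !== vault.total) throw new Error(`invariant broken at step ${i}`);
  }
  return vault.total;
}

fuzz(1000); // throws if the invariant ever breaks
```

Forge's invariant mode does the same thing with real calldata against your deployed contracts, and shrinks failing sequences to a minimal reproduction.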
Before deploying to mainnet: deploy to testnet, run through every critical user flow manually, and have someone who didn't write the code try to break it. Pay for an audit if there's any real value at stake. The cost of the audit is almost always less than the cost of the breach.
The deployment and upgrade strategy
One more thing nobody mentions clearly: once deployed, a smart contract is immutable by default. This sounds like a feature - and it is, for the trustless properties that matter. But it means you need a strategy for upgrades before you need one.
The options:
Proxy pattern. Deploy a proxy contract that forwards calls to an implementation contract. Upgrade by pointing the proxy at a new implementation. Your contract address stays the same, your state persists, your logic can be updated. The risk: upgrade mechanisms are themselves attack surface. A compromised admin key can upgrade your contract to one that drains all funds. Use a multi-sig or a timelock, or both.
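On-chain this is implemented with delegatecall (OpenZeppelin's upgradeable proxies are the standard reference), but the shape is easy to see in a TypeScript sketch - state lives with the proxy, logic can be swapped, and the upgrade path is gated. The fee logic in v2 is purely illustrative:

```typescript
// The proxy pattern, sketched: state persists with the proxy while
// the implementation it forwards to can be replaced by an admin.
type Implementation = (state: Map<string, number>, user: string, amount: number) => void;

const depositV1: Implementation = (state, user, amount) => {
  state.set(user, (state.get(user) ?? 0) + amount);
};

// v2 adds a 1% fee -- illustrative upgrade.
const depositV2: Implementation = (state, user, amount) => {
  const fee = Math.floor(amount * 0.01);
  state.set(user, (state.get(user) ?? 0) + amount - fee);
};

class UpgradeableProxy {
  private state = new Map<string, number>(); // survives upgrades
  constructor(private impl: Implementation, private admin: string) {}
  upgrade(caller: string, next: Implementation) {
    // The upgrade path is the attack surface: guard it. On-chain,
    // "admin" should be a multi-sig and/or timelock, not one key.
    if (caller !== this.admin) throw new Error("not authorized");
    this.impl = next;
  }
  deposit(user: string, amount: number) { this.impl(this.state, user, amount); }
  balanceOf(user: string) { return this.state.get(user) ?? 0; }
}

const proxy = new UpgradeableProxy(depositV1, "multisig");
proxy.deposit("alice", 100);
proxy.upgrade("multisig", depositV2); // state survives the upgrade
proxy.deposit("alice", 100);
console.log(proxy.balanceOf("alice")); // 199
```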
Immutable with escape hatches. Deploy without upgrade capability but with admin functions that can pause the contract or update critical parameters (oracle addresses, fee rates). Smaller attack surface, less flexibility.
Modular architecture. Separate your logic into components with clear interfaces. This doesn't solve upgradability but makes it easier to reason about what needs to change.
The right choice depends on your risk profile and how much certainty you have about your core logic. A protocol that's been externally audited multiple times can reasonably move toward immutability. A product in active development should probably retain the ability to fix things.
Shipping the thing
After all of this: ship. Iteratively, carefully, with real users who interact with contracts where the stakes are real but not catastrophic.
The temptation is to wait until everything is perfect - until the audits are done, the mobile experience is flawless, the indexer handles every edge case. Don't wait. Deploy to mainnet with a meaningful but not catastrophic TVL ceiling. Learn with real conditions. Expand the ceiling as the system proves itself.
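The ceiling itself is a one-line guard in the contract's deposit path. A sketch - the cap value and function name are illustrative:

```typescript
// TVL ceiling check: reject any deposit that would push total value
// locked past the cap while the system is in a guarded launch.
function checkDeposit(currentTvl: bigint, amount: bigint, ceiling: bigint): boolean {
  return currentTvl + amount <= ceiling;
}

const CEILING = 500_000n; // e.g. a $500k cap during guarded launch
console.log(checkDeposit(450_000n, 40_000n, CEILING)); // true
console.log(checkDeposit(450_000n, 60_000n, CEILING)); // false
```

Keeping the ceiling an admin-updatable parameter (behind a multi-sig) is what lets you expand it as the system proves itself, without a redeploy.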
The best production environments are the ones that have handled everything - the edge cases, the adversarial inputs, the network congestion, the weird wallet behavior. You can't simulate all of that. At some point you have to ship to find out what you missed.
Just make sure what you're shipping has been as thoughtfully constructed as the stakes deserve.
