ChainScore Labs

Testing Smart Contracts with Adversarial Scenarios

Chainscore © 2025

Core Adversarial Testing Methods

Essential techniques to simulate malicious actors and uncover vulnerabilities in smart contract logic and state.

Fuzz Testing

Fuzzing automatically generates a massive volume of random, invalid, or unexpected inputs to test contract functions.

  • Uses tools like Echidna or Foundry's fuzzer to provide random uint256 values or malformed calldata.
  • Discovers edge cases like integer overflows, underflows, and unexpected state reverts.
  • This matters as it automates the discovery of vulnerabilities that manual testing often misses, especially for complex input validation.
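As a hedged sketch of how this looks in practice, the harness below exposes Echidna-style boolean properties; the base `Token` contract and its `INITIAL_SUPPLY` constant are assumptions for illustration:

```solidity
// Sketch only: Token and INITIAL_SUPPLY are hypothetical.
// Echidna fuzzes all public functions with random inputs, then checks that
// every function prefixed `echidna_` still returns true after each call.
contract TokenFuzzHarness is Token {
    function echidna_supply_constant() public view returns (bool) {
        return totalSupply() == INITIAL_SUPPLY;
    }

    function echidna_no_zero_address_balance() public view returns (bool) {
        return balanceOf(address(0)) == 0;
    }
}
```

A typical invocation would be along the lines of `echidna . --contract TokenFuzzHarness` (exact flags vary by Echidna version).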

Symbolic Execution

Symbolic execution analyzes code by using symbolic variables instead of concrete values to explore all possible execution paths.

  • Tools like Manticore model the entire state space to find reachable program conditions.
  • Identifies whether certain problematic states (e.g., a broken invariant) are theoretically accessible.
  • This is critical for proving the absence of certain bug classes and verifying complex logical constraints in DeFi protocols.
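To make the idea concrete, here is an illustrative target of the kind a symbolic engine analyzes: the tool treats `x` as a symbolic value and either finds a concrete input that reaches a failing assert or proves none exists. This is a sketch, not a tool-specific harness:

```solidity
// Illustrative target for symbolic execution (hypothetical contract).
contract SymbolicTarget {
    uint256 public total;

    function addBounded(uint256 x) external {
        require(x < 1000, "input too large");
        total += x;
        // A symbolic engine explores every feasible path and reports
        // whether any input or call sequence can violate this assertion.
        assert(total >= x);
    }
}
```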

Formal Verification

Formal verification uses mathematical proofs to demonstrate a contract's correctness relative to a formal specification.

  • Involves writing properties in a language like Act or using a verifier for Solidity (e.g., Certora Prover).
  • Proves that invariants (e.g., "total supply equals sum of balances") hold under all conditions.
  • This matters for high-value contracts where absolute assurance on specific behaviors is required, beyond testing.
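For flavor, a hedged sketch of what a Certora-style CVL rule might look like (syntax simplified; the rule name and method setup are illustrative):

```cvl
// Prove that transfer never changes total supply, for all states and inputs.
rule transferPreservesTotalSupply(address to, uint256 amount) {
    env e;
    mathint supplyBefore = totalSupply();
    transfer(e, to, amount);
    assert totalSupply() == supplyBefore,
        "transfer must not mint or burn tokens";
}
```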

Invariant Testing

Invariant testing asserts that certain properties of a system must always hold true, regardless of any sequence of actions.

  • In Foundry, you define invariant functions and the fuzzer attempts to break them by calling the contract's public functions in random order.
  • Tests properties like "the sum of all user balances equals the total supply" or "an admin can never be removed."
  • This is essential for testing the integrity of a system's core state after arbitrary interactions, simulating a live network.

Differential Testing

Differential testing compares the behavior of two similar implementations against the same inputs to find discrepancies.

  • Runs a reference implementation (e.g., a simple, audited contract) and a new optimized version with the same random inputs.
  • Flags any difference in output state or event emissions as a potential bug in the new code.
  • This matters for safely upgrading contracts or verifying that a complex optimization hasn't introduced subtle logic errors.
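A hedged Foundry sketch of the pattern: `ReferenceMath` and `OptimizedMath` are hypothetical contracts implementing the same `sqrt` function.

```solidity
// Differential fuzz test sketch (forge-std's Test provides assertEq).
contract DifferentialTest is Test {
    ReferenceMath internal ref; // simple, audited implementation
    OptimizedMath internal opt; // new gas-optimized implementation

    function setUp() public {
        ref = new ReferenceMath();
        opt = new OptimizedMath();
    }

    // The fuzzer feeds identical random inputs to both implementations;
    // any divergence in output is flagged as a bug in the new code.
    function testFuzz_differential_sqrt(uint256 x) public view {
        assertEq(opt.sqrt(x), ref.sqrt(x));
    }
}
```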

Stateful Property Testing

Stateful property testing validates system properties across sequences of state-changing transactions, not just single calls.

  • Tools like Echidna or Foundry's invariant tester generate random sequences of function calls to a contract.
  • Checks that high-level properties (e.g., "liquidity can never be negative") remain true throughout the sequence.
  • This is crucial for finding bugs that only emerge from specific, multi-step interactions, such as reentrancy or broken state machines.
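As an illustration, an Echidna configuration tuned for stateful exploration might look like the following (a sketch; defaults and available keys depend on the Echidna version):

```yaml
# echidna.yaml (sketch)
testMode: property   # check echidna_* boolean properties
testLimit: 50000     # total transactions to attempt across the campaign
seqLen: 100          # calls per sequence before resetting state
shrinkLimit: 5000    # effort spent minimizing a failing sequence
```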

Implementing Property-Based Fuzzing

Process for defining and testing invariant properties of a smart contract using a fuzzer to generate random inputs.

1. Define Core Invariant Properties

Identify and formalize the fundamental rules your contract must always uphold.

Detailed Instructions

Start by analyzing the contract's business logic to define invariants—properties that must hold true for all possible states and inputs. For a lending protocol, a key invariant is that the total borrowed assets cannot exceed the total supplied assets. For an ERC-20 token, the sum of all balances must equal the total supply. Write these as clear, testable statements. Avoid testing implementation details; focus on high-level system correctness. This step requires deep protocol understanding to prevent logical flaws in the properties themselves, which would render the fuzzing ineffective.

  • Sub-step 1: Review the contract's state variables and their intended relationships.
  • Sub-step 2: Formalize an invariant, e.g., assert(totalBorrows <= totalSupply).
  • Sub-step 3: Document edge cases the invariant must cover, like zero-value transfers or admin actions.
```solidity
// Example invariant for a vault: user shares can never exceed total shares.
function invariant_shares_leq_totalSupply(address user) public view {
    assert(vault.balanceOf(user) <= vault.totalSupply());
}
```

Tip: Use assert statements within helper functions that the fuzzer can call to check the property.

2. Set Up the Fuzzing Test Harness

Configure the testing environment and write the property test function.

Detailed Instructions

Create a dedicated test file using a framework like Foundry's Forge, which has built-in fuzzing support. Write a test function that accepts the fuzzer's randomly generated arguments. For example, a test for a token transfer might accept random address sender, address recipient, and uint256 amount values. The function should set up a valid initial state (e.g., mint tokens to sender), perform the action under test, and then assert your invariants. Use the vm.assume cheatcode to filter out invalid inputs that would cause reverts for trivial reasons, allowing the fuzzer to focus on interesting cases.

  • Sub-step 1: Import necessary testing libraries and cheatcode interfaces.
  • Sub-step 2: Declare a test function with function testFuzz_PropertyName(params) public.
  • Sub-step 3: Use vm.assume to constrain inputs, e.g., vm.assume(amount > 0 && amount <= startBalance).
```solidity
// Foundry fuzz test example for transfer invariance.
function testFuzz_transfer_invariant(address sender, address recipient, uint256 amount) public {
    vm.assume(sender != address(0) && recipient != address(0));
    vm.assume(sender != recipient);
    uint256 senderInitialBalance = token.balanceOf(sender);
    vm.assume(amount <= senderInitialBalance && amount > 0);
    vm.prank(sender); // Make the transfer originate from the fuzzed sender.
    token.transfer(recipient, amount);
    // Invariant: Total supply remains constant.
    assert(token.totalSupply() == INITIAL_SUPPLY);
}
```

Tip: Start with a small number of fuzzing runs (e.g., 1000) for quick iteration, then increase for final validation.
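Following the tip above, one way to keep a fast local profile alongside a heavier CI profile is via foundry.toml (a sketch; the `ci` profile name is a convention, not a requirement):

```toml
# foundry.toml (sketch)
[fuzz]
runs = 1000               # quick local iteration

[profile.ci.fuzz]
runs = 50000              # deeper exploration for final validation
max_test_rejects = 65536  # budget for inputs discarded by vm.assume
```

The heavy profile is then selected with `FOUNDRY_PROFILE=ci forge test`.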

3. Execute Fuzzing and Analyze Counterexamples

Run the fuzzer to discover inputs that violate your invariants.

Detailed Instructions

Run the fuzzing command (e.g., forge test --match-test testFuzz_PropertyName). The fuzzer will execute the test thousands of times with random inputs. If an invariant fails, the framework will report a counterexample—the specific input values that caused the assertion to revert. Carefully analyze this failing case. Does it reveal a genuine bug, or is it a false positive due to an overly strict invariant? Use debugger tools or add console.log statements to trace the contract's state at the moment of failure. Save the counterexample seed to deterministically reproduce the issue.

  • Sub-step 1: Execute the fuzz test suite and monitor for failures.
  • Sub-step 2: When a failure occurs, note the provided seed and calldata.
  • Sub-step 3: Reproduce the failure locally using the seed: forge test --match-test testFuzz_PropertyName --fuzz-seed <seed>.
```bash
# Example Forge command to run a specific fuzz test.
forge test --match-test testFuzz_transfer_invariant -vvv
```

Tip: The -vvv flag provides verbose output, showing the sequence of calls leading to the failure, which is crucial for debugging.

4. Refine Properties and Increase Coverage

Iterate on your invariants and test configuration based on fuzzing results.

Detailed Instructions

Use the insights from counterexamples to improve your test suite. If a failure was a false positive, refine the invariant logic or add more vm.assume conditions. If it was a real bug, fix the contract and ensure the test now passes. Next, increase coverage by adding more complex, stateful properties. Test sequences of actions rather than single operations. For example, after a deposit and a withdrawal, the user's net asset position should be correct. Use Foundry's invariant test mode for this, which runs random sequences of function calls against a deployed contract, checking invariants between each call.

  • Sub-step 1: Modify the property test or contract code to address the discovered issue.
  • Sub-step 2: Add stateful fuzzing tests using the invariant keyword to test interaction sequences.
  • Sub-step 3: Increase the number of fuzzing runs (e.g., to 50,000+) and seed corpus size for deeper exploration.
```solidity
// Example of a stateful invariant test setup in Foundry.
contract StatefulInvariantTest {
    TargetContract target;

    function setUp() public {
        target = new TargetContract();
    }

    // The fuzzer will randomly call these functions in sequences.
    function deposit(uint256 amount) public { target.deposit(amount); }
    function withdraw(uint256 amount) public { target.withdraw(amount); }

    // This invariant is checked between every fuzzer-generated call.
    function invariant_totalAssetsMatch() public view {
        assert(target.totalAssets() == address(target).balance);
    }
}
```

Tip: Integrate fuzzing into your CI/CD pipeline to run property tests on every commit, guarding against regressions.

Defining and Testing System Invariants

A systematic process for identifying, formalizing, and validating the core properties that must always hold true for a smart contract system.

1. Identify Core System Properties

Document the fundamental rules and constraints that define correct system behavior.

Detailed Instructions

Begin by analyzing the protocol's specification and business logic to list its invariants. These are properties that must hold true before and after any state transition. Common categories include value conservation (e.g., total token supply is constant), access control (e.g., only the owner can pause), and state consistency (e.g., user's balance never exceeds total supply). For a lending protocol, a key invariant is that the sum of all user collateral balances equals the total collateral held by the contract. Write these in plain English first, specifying the conditions under which they apply.

  • Sub-step 1: Review whitepaper and smart contract comments for stated rules.
  • Sub-step 2: Interview protocol developers to uncover implicit assumptions.
  • Sub-step 3: Categorize each invariant as state-based, transaction-based, or economic.
```solidity
// Example: A simple invariant for an ERC20 token.
// Invariant: Total supply must equal the sum of all balances.
function checkSupplyInvariant() public view returns (bool) {
    uint256 totalBalances;
    for (uint256 i = 0; i < users.length; i++) {
        totalBalances += balanceOf(users[i]);
    }
    return totalSupply == totalBalances;
}
```

Tip: Focus on properties whose violation would lead to a critical failure, like fund loss or system halt.

2. Formalize Invariants into Testable Assertions

Translate conceptual properties into executable code assertions for your test suite.

Detailed Instructions

Convert each textual invariant into a pure function that queries the contract state and returns a boolean. This function is the invariant handler. Use Foundry's invariant test infrastructure or a similar fuzzing framework. The handler should access all relevant storage variables. For economic invariants, consider using boundary values and precise mathematical checks. For example, an invariant stating "interest rates are non-negative" becomes assert(apr >= 0). Ensure your assertions are gas-efficient to run thousands of times during fuzzing.

  • Sub-step 1: Write a Solidity function for each invariant that performs the check.
  • Sub-step 2: Ensure the function is view and has no side effects.
  • Sub-step 3: Integrate the handler function into your test contract's invariant block.
```solidity
// Example: Formalized invariant for a vault.
contract InvariantTest {
    Vault public vault;

    function invariant_totalAssetsGTESharesValue() public view {
        // Total assets deposited must be >= total share supply * share value.
        // Using >= due to rounding and fee considerations.
        assert(vault.totalAssets() >= vault.totalSupply() * vault.convertToAssets(1e18) / 1e18);
    }
}
```

Tip: Use assert for invariants; a failed assert signals state corruption rather than bad input. (Before Solidity 0.8.0 a failed assert also consumed all remaining gas; since 0.8.0 it reverts with a Panic error.)

3. Configure and Run Targeted Invariant Fuzzing

Set up a fuzzing campaign that randomly calls functions to attempt to break the defined invariants.

Detailed Instructions

Use a framework like Foundry's invariant testing to stress-test your assertions. Configure the test by specifying a target contract and a set of sender addresses the fuzzer will act as when performing random sequences of calls. Set a high number of runs (e.g., 10,000+) and depth (e.g., 50 calls per sequence) to explore the state space. The fuzzer will call any public function on the target contract in any order with random data, checking your invariant handlers after each call. Watch for shrinking: the fuzzer's process of minimizing a failing call sequence to its simplest form for debugging.

  • Sub-step 1: In your test contract's setUp(), deploy the target and register it with targetContract(address).
  • Sub-step 2: Use excludeContract or excludeSender to filter out irrelevant addresses.
  • Sub-step 3: Run the suite with forge test --match-contract InvariantTest; invariant tests are picked up automatically.
```solidity
// Foundry test setup example.
contract VaultInvariants is Test {
    Vault public vault;

    function setUp() public {
        vault = new Vault();
        // Target the specific contract for fuzzing.
        targetContract(address(vault));
    }

    // ... invariant handlers from previous step
}
```

Tip: Start with a lower run count to verify setup, then increase aggressively. Use --fail-fast to stop on the first broken invariant.
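The runs and depth mentioned above can also be pinned in foundry.toml rather than passed ad hoc (a sketch; exact defaults vary by Foundry version):

```toml
# foundry.toml (sketch)
[invariant]
runs = 256             # number of independent fuzzing campaigns
depth = 50             # calls per campaign before the state is reset
fail_on_revert = false # only broken invariants fail the test, not reverts
```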

4. Analyze Failures and Harden the System

Diagnose broken invariants, patch vulnerabilities, and update specifications.

Detailed Instructions

When an invariant fails, the fuzzer provides a counterexample sequence. Analyze this trace step-by-step. Identify the specific function call and state values that led to the violation. Common root causes include reentrancy, integer overflow/underflow, incorrect access control, or oracle manipulation. Fix the vulnerability in the contract logic. After patching, re-run the invariant tests to ensure the fix works and doesn't introduce new breaks. Additionally, consider if the broken invariant reveals a flaw in your initial specification; update the documentation accordingly. This process turns testing into a feedback loop for improving both code and design.

  • Sub-step 1: Examine the minimized failing call sequence printed by the test runner.
  • Sub-step 2: Reproduce the failure in a standard unit test for precise debugging.
  • Sub-step 3: Implement the fix, often requiring a logic change or adding a check.
  • Sub-step 4: Re-run the full invariant suite and all other tests for regression.
```solidity
// Example: Patching a broken invariant found via fuzzing.
// Broken: User could withdraw more than their balance due to underflow
// (exploitable pre-0.8.0 or inside an unchecked block).
function withdraw(uint256 amount) public {
    // Old, vulnerable code:
    // balances[msg.sender] -= amount;

    // Fixed code with an explicit check:
    require(balances[msg.sender] >= amount, "Insufficient balance");
    balances[msg.sender] -= amount;
}
```

Tip: Treat every broken invariant as a critical bug. Document the failure and fix as a case study for future audits.

Tools for Scenario Simulation

Understanding the Toolbox

Scenario simulation tools allow you to model potential attacks or failures before deploying a contract. They create a controlled environment to test how your system behaves under stress or malicious conditions.

Key Capabilities

  • Forking Mainnet: Tools like Foundry's forge can create a local copy of Ethereum's state, letting you test your contract's interaction with live protocols like Aave or Compound using real data.
  • Invariant Testing: This checks for properties that should always hold true in your system, such as "total supply never decreases" or "user balances sum to total supply."
  • Fuzz Testing: Automated tools provide random, unexpected inputs to functions to uncover edge cases a developer might not have considered manually.

Practical Example

When testing a new DeFi vault, you would use a forked mainnet to simulate a sudden 50% drop in the price of ETH on Chainlink oracles, observing if your liquidation logic triggers correctly without causing insolvency.
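A hedged Foundry sketch of that scenario: the feed address is the commonly cited mainnet ETH/USD Chainlink aggregator, and the `vault` and `borrower` variables are hypothetical.

```solidity
// Sketch: simulate a 50% ETH price drop on a mainnet fork via vm.mockCall.
function test_liquidation_survives_price_crash() public {
    vm.createSelectFork(vm.envString("MAINNET_RPC_URL"));
    address ethUsdFeed = 0x5f4eC3Df9cbd43714FE2740f5E3616155c5b8419; // ETH/USD
    (, int256 price,,,) = AggregatorV3Interface(ethUsdFeed).latestRoundData();

    // Override every subsequent latestRoundData() call with a halved price.
    vm.mockCall(
        ethUsdFeed,
        abi.encodeWithSelector(AggregatorV3Interface.latestRoundData.selector),
        abi.encode(uint80(1), price / 2, block.timestamp, block.timestamp, uint80(1))
    );

    vault.liquidate(borrower); // hypothetical liquidation entry point
    assertGe(vault.totalAssets(), vault.totalLiabilities()); // solvency check
}
```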

Comparing Test Coverage Levels

A comparison of different testing methodologies and their effectiveness in identifying vulnerabilities in smart contracts.

| Coverage Metric | Unit Testing | Integration Testing | Formal Verification |
|---|---|---|---|
| Gas Cost Validation | Limited to function scope | Cross-contract interactions | Mathematical proof of bounds |
| Reentrancy Detection | Manual mock setup required | Detects in integration flow | Formally verifies non-reentrancy |
| State Invariant Checks | Per-function assertions | End-to-end state validation | Proves invariants hold universally |
| Edge Case Coverage | Developer-defined inputs | Simulated user journeys | Exhaustive input domain analysis |
| Oracle Manipulation | Not typically covered | Can test with mock oracles | Can prove correctness of price feeds |
| Upgrade Safety | Tests individual versions | Tests migration paths | Formal spec compliance across versions |
| Time-Based Logic | Mocked block timestamps | Test with forked mainnet | Temporal logic verification |

Integrating Tests into CI/CD

Process for automating adversarial test execution in development pipelines.

1. Configure the CI/CD Environment

Set up the pipeline runner with necessary dependencies and secrets.

Detailed Instructions

Begin by configuring your CI/CD runner (e.g., GitHub Actions, GitLab CI) with the required environment. This includes installing the correct version of Node.js, Python, or Rust, and the specific testing frameworks like Foundry or Hardhat. Securely inject environment variables such as RPC endpoint URLs (e.g., https://eth-mainnet.g.alchemy.com/v2/...) and private keys for forking and deployment using the platform's secrets management. Set up caching for dependencies like node_modules or ~/.foundry to significantly speed up subsequent pipeline runs.

  • Sub-step 1: Create a .github/workflows/test.yml file for GitHub Actions.
  • Sub-step 2: Define a job that runs on pushes to main and pull requests.
  • Sub-step 3: Use the actions/setup-node@v4 action and run npm ci or forge install.
```yaml
# .github/workflows/test.yml snippet
env:
  FOUNDRY_PROFILE: ci
  MAINNET_RPC_URL: ${{ secrets.MAINNET_RPC_URL }}
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Foundry
        uses: foundry-rs/foundry-toolchain@v1
```

Tip: Use a dedicated, funded test wallet for CI deployments and store its private key as a secret. Never hardcode keys.

2. Structure the Test Command Sequence

Define the order and flags for running unit and adversarial tests.

Detailed Instructions

Craft the command sequence in your pipeline to execute tests with the appropriate configuration. Start with fast, standard unit tests before running more computationally intensive fuzz tests and invariant tests. For Foundry, run forge test --match-test testNormalOperation first, then forge test --match-contract AdversarialTest to isolate the adversarial suite (add --ffi only if those tests shell out to external scripts). Enable verbose failure reports with -vvv and set a high fuzz run count (e.g., --fuzz-runs 10000) in CI to increase coverage. Ensure the command fails the pipeline on any test failure.

  • Sub-step 1: Run standard unit tests to catch basic regressions.
  • Sub-step 2: Execute fuzz tests with an elevated iteration count for broader input exploration.
  • Sub-step 3: Run invariant tests against forked mainnet state to simulate real conditions.
```bash
# Example command sequence in a CI script
forge test --no-match-path "*Adversarial*"  # Standard tests
forge test --match-path "*Adversarial*" --fuzz-runs 10000 -vvv
forge test --match-path "*Invariant*" --fork-url $MAINNET_RPC_URL
```

Tip: Use the --gas-report flag in CI to monitor for unexpected gas cost increases, which can indicate new vulnerabilities.

3. Implement State Forking for Realistic Tests

Run tests against a forked blockchain state to simulate live network conditions.

Detailed Instructions

Adversarial scenarios often depend on real-world state, such as Uniswap pool balances or Compound's interest rates. Configure your tests to fork from a live network using an RPC provider like Alchemy or Infura. In Foundry, use the --fork-url and --fork-block-number flags to create a deterministic, pinned state. This allows tests to interact with live contract addresses (e.g., 0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48 for USDC) and complex DeFi interactions. Cache the forked state if possible to avoid rate limits and speed up tests.

  • Sub-step 1: Set the MAINNET_RPC_URL secret in your CI environment.
  • Sub-step 2: In your test command, append --fork-url $MAINNET_RPC_URL --fork-block-number 19500000.
  • Sub-step 3: Write tests that use vm.prank and vm.deal to manipulate sender addresses and balances on the fork.
```solidity
// Example Foundry test using a fork.
function test_Exploit_Compound_FlashLoan() public {
    vm.createSelectFork(vm.envString("MAINNET_RPC_URL"), 19500000);
    address attacker = makeAddr("attacker");
    vm.deal(attacker, 100 ether);
    // ... adversarial logic against forked cToken contract
}
```

Tip: Pinning a specific block number ensures test reproducibility across all CI runs, preventing failures due to upstream state changes.
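To avoid repeating raw URLs in test code, Foundry also lets you alias endpoints in foundry.toml; tests can then fork by name, e.g. vm.createSelectFork("mainnet", 19500000). A sketch:

```toml
# foundry.toml (sketch)
[rpc_endpoints]
mainnet = "${MAINNET_RPC_URL}"  # resolved from the CI secret at run time
```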

4. Generate and Archive Test Reports

Produce actionable artifacts from test runs for review and analysis.

Detailed Instructions

Configure the pipeline to generate and save detailed test reports as artifacts. For Foundry, use forge test --gas-report --json > report.json to output a JSON file containing test results, gas usage, and failure traces. For Hardhat, generate a JUnit report with --reporter junit. These artifacts should be uploaded using CI steps like actions/upload-artifact. Additionally, integrate a coverage report using tools like forge coverage or solidity-coverage to track which lines of your smart contracts are exercised by adversarial tests, highlighting untested code paths.

  • Sub-step 1: Run tests with JSON output and gas reporting enabled.
  • Sub-step 2: Generate a coverage report in LCOV format.
  • Sub-step 3: Use the CI platform's upload-artifact command to save the report files.
```yaml
# GitHub Actions steps to upload artifacts
- name: Upload Test Report
  uses: actions/upload-artifact@v4
  if: always()  # Upload even if tests fail
  with:
    name: forge-test-report
    path: report.json
- name: Generate & Upload Coverage
  run: |
    forge coverage --report lcov
    lcov --list lcov.info
```

Tip: Use the if: always() condition to ensure reports are uploaded even when tests fail, which is critical for debugging adversarial test failures.

5. Enforce Security Gates with Automated Checks

Define pipeline failure conditions based on test results and metrics.

Detailed Instructions

Establish security gates that automatically fail the CI/CD pipeline if adversarial tests uncover issues. This goes beyond simple test pass/fail. Integrate automated checks for specific criteria: a sudden drop in test coverage percentage (e.g., below 95%), the discovery of a new high-severity invariant violation, or a regression in gas costs for critical functions. Use scripts to parse the JSON test report and coverage output, setting an exit code if thresholds are breached. This ensures vulnerabilities cannot be merged without explicit override.

  • Sub-step 1: Write a script that parses report.json for failed tests and checks severity tags.
  • Sub-step 2: Set a minimum coverage threshold and fail the build if not met.
  • Sub-step 3: Integrate the check script as a final step in the CI job.
```bash
#!/bin/bash
# example-check.sh
COVERAGE=$(lcov --summary lcov.info 2>&1 | grep "lines.*%" | awk '{print $2}' | sed 's/%//')
if (( $(echo "$COVERAGE < 95" | bc -l) )); then
  echo "Coverage dropped below 95%: $COVERAGE%"
  exit 1
fi
if jq -e '.failures > 0' report.json > /dev/null; then
  echo "Test failures detected."
  exit 1
fi
```

Tip: For critical projects, require all adversarial test runs (fuzz/invariant) to pass on a forked mainnet before allowing a merge to the main branch.
