Verify

The Verify (or Testing) phase ensures that the software is defect-free and meets the requirements defined in the planning phase. It is critical for catching defects before they reach users. Testing is not a phase you bolt on at the end - it is a continuous activity that should start as early as requirements and continue through production.

The Test Pyramid

The test pyramid is the foundational mental model for structuring your test suite:

  • Unit Tests (base): Fast, isolated, and cheap. Test individual functions and classes. Aim for the majority of your tests to be here.
  • Integration Tests (middle): Verify that modules, services, or APIs work together correctly. Slower than unit tests but catch contract and communication issues.
  • End-to-End Tests (top): Test the complete user journey through the real UI. Slowest and most brittle - use sparingly for critical paths only.
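As a concrete sketch of the pyramid's base, here is a hypothetical `apply_discount` function with plain-assert unit tests. They are fast, isolated, and do no I/O (a test runner like pytest would collect functions named this way automatically):

```python
# Hypothetical business-logic function under test.
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, clamped to a valid range."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Unit tests: fast, isolated, no network or database -- the base of the pyramid.
def test_applies_discount():
    assert apply_discount(100.0, 20) == 80.0

def test_rejects_invalid_percent():
    try:
        apply_discount(100.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass

test_applies_discount()
test_rejects_invalid_percent()
```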
Rule of Thumb

A healthy ratio is roughly 70% unit / 20% integration / 10% E2E. If your E2E suite takes hours to run and breaks constantly, you probably have an inverted pyramid - push more testing down to the unit and integration layers.

Test Strategy

A test strategy defines what you test, how you test it, and how much is enough.

  • Risk-Based Testing: Not all code is equally risky. Focus testing effort on code that handles money, auth, data mutations, and core business logic. Low-risk UI tweaks need less coverage.
  • Coverage Targets: Code coverage is a useful signal, not a goal. A team chasing 100% coverage wastes time testing getters and setters. Aim for meaningful coverage of business logic (70-80% is a pragmatic target for most teams).
  • Test Data Management: Tests need realistic data. Plan how you will create, seed, and reset test data. Avoid coupling tests to production data.
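A minimal sketch of the factory approach to test data, using a hypothetical `make_user` helper that yields a fresh, isolated record for each test instead of relying on shared mutable state or a production database:

```python
import itertools

_seq = itertools.count(1)

def make_user(**overrides) -> dict:
    """Factory: build a unique, realistic test user; overrides take precedence."""
    n = next(_seq)
    user = {
        "id": n,
        "email": f"user{n}@test.example",  # never a real address
        "name": f"Test User {n}",
        "active": True,
    }
    user.update(overrides)
    return user

# Each test gets fresh data -- no coupling between tests, no production data.
alice = make_user(name="Alice")
bob = make_user(active=False)
assert alice["email"] != bob["email"]
assert bob["active"] is False
```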

Shift-Left Testing

Shift-left testing means moving testing activities earlier in the lifecycle:

  • Test-Driven Development (TDD): Write the test first, then the code to make it pass, then refactor. Forces clear thinking about requirements and produces well-tested code by default.
  • Behavior-Driven Development (BDD): Express tests in natural language (Given/When/Then) so product and engineering can collaborate on acceptance criteria before development begins.
  • Static Analysis: Catch entire categories of bugs before code ever runs - type errors (TypeScript, mypy), security vulnerabilities (Snyk), and code smells (SonarQube).
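The TDD loop can be sketched as follows, with a hypothetical `Cart` class and the test body structured as Given/When/Then:

```python
# Step 1 (red): write the test first, structured as Given/When/Then.
def test_cart_total_includes_tax():
    # Given a cart with two items
    cart = Cart()
    cart.add("book", 10.00)
    cart.add("pen", 2.00)
    # When the total is computed with 10% tax
    total = cart.total(tax_rate=0.10)
    # Then tax is applied to the sum of the items
    assert total == 13.20

# Step 2 (green): write just enough code to make the test pass.
class Cart:
    def __init__(self):
        self.items: list[tuple[str, float]] = []

    def add(self, name: str, price: float) -> None:
        self.items.append((name, price))

    def total(self, tax_rate: float = 0.0) -> float:
        subtotal = sum(price for _, price in self.items)
        return round(subtotal * (1 + tax_rate), 2)

test_cart_total_includes_tax()  # Step 3: refactor while this stays green.
```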

Testing Methodologies

Testing compares actual behavior against the specified requirements to identify gaps, errors, and missing functionality.

  • Unit Testing: Testing individual components or functions in isolation.
  • Integration Testing: Verifying that different modules work together.
  • End-to-End (E2E) Testing: Testing the complete user journey from start to finish.
  • System Testing: Testing the complete and integrated software product.
  • Performance Testing: Checking speed, responsiveness, and stability under load.
  • Security Testing: Ensuring the application is secure against threats.

Non-Functional Testing

Functional correctness is necessary but not sufficient. Non-functional quality attributes must be tested explicitly:

  • Performance Testing: Define benchmarks for response time, throughput, and resource usage. Use load testing tools (k6, Gatling, JMeter) to simulate realistic traffic patterns.
  • Accessibility Testing: Ensure your product is usable by everyone. Automated tools (axe, Lighthouse) catch ~30% of issues; manual testing with screen readers catches the rest. Target WCAG 2.1 AA as a minimum.
  • Security Testing: Test against the OWASP Top 10. Combine automated scanning (Snyk, Trivy) with manual penetration testing for critical applications.
  • Load and Stress Testing: Understand your breaking point before your users find it. Test at 2-3x expected peak load.
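As an illustrative sketch only (real load tests would use the tools named above, driving real traffic), here is a simple in-process latency-budget check over a hypothetical `handler` function, asserting on the 95th percentile:

```python
import time
import statistics

def handler() -> int:
    # Stand-in for the code path under test.
    return sum(range(1000))

# Measure per-call latency over many iterations.
samples = []
for _ in range(200):
    start = time.perf_counter()
    handler()
    samples.append((time.perf_counter() - start) * 1000)  # milliseconds

# quantiles(n=100) yields 99 cut points; index 94 is the 95th percentile.
p95 = statistics.quantiles(samples, n=100)[94]
assert p95 < 50, "p95 latency budget exceeded"
```

Benchmarking against a defined budget, rather than eyeballing numbers, is what lets performance checks run as an automated gate.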
By Company Stage
  • Startup: Focus on integration and E2E tests for critical user paths (sign-up, payment, core workflow). Manual is acceptable and often faster at this stage. Smoke tests after every deploy. Do not chase coverage metrics - chase confidence in your core flows.
  • Growth Stage: Introduce test coverage thresholds (e.g. 70% for new code). Hire dedicated QA or test automation engineers. Build automated suites. Implement performance benchmarks and accessibility audits. Make tests a required gate in CI/CD.
  • Established: Establish formal test governance with test plans reviewed alongside requirements. Performance budgets defined and automatically validated. Compliance testing for relevant regulatory standards baked into the pipeline. Regular penetration testing by third parties. Chaos testing in staging environments.

Common Pitfalls

Anti-patterns to Avoid
  • Testing Only the Happy Path: Most bugs live in edge cases - empty inputs, concurrent access, network failures, boundary values. Explicitly test error scenarios.
  • Flaky Tests: Tests that pass sometimes and fail sometimes destroy trust in the test suite. Teams start ignoring failures, and real bugs slip through. Fix or delete flaky tests immediately.
  • No Test Data Strategy: Tests that depend on shared mutable data or production databases are fragile and unpredictable. Use factories, fixtures, or synthetic data generation.
  • Testing in Production Only: "We'll test it live" is not a strategy. It is a gamble that trades user trust for developer convenience.
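A small sketch of testing beyond the happy path, using a hypothetical `parse_quantity` helper: the happy-path assertion alone would pass while every edge case below went unchecked.

```python
def parse_quantity(raw: str) -> int:
    """Parse a positive item quantity from user input."""
    value = int(raw.strip())  # raises ValueError on non-integer input
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

# Happy path -- necessary but not sufficient.
assert parse_quantity("3") == 3

# Edge cases: empty input, whitespace, zero, negatives, junk, non-integers.
for bad in ["", "   ", "0", "-1", "abc", "1.5"]:
    try:
        parse_quantity(bad)
        assert False, f"expected failure for {bad!r}"
    except ValueError:
        pass
```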

Testing Key Deliverables

  • Test Strategy Document
  • Test Plans and Test Cases
  • Bug Reports
  • Test Execution Reports
  • Coverage Reports
How AI Can Help: Verification

AI is reimagining verification by analyzing vast amounts of data to spot patterns and anomalies, and by automating the most tedious parts of the testing process:

  • Static Analysis: Beyond the AI-assisted IDEs covered in the Code section, advanced static analysis tools (e.g., SonarQube) use AI to identify complex bugs and code smells that traditional rules might miss.
  • Test Generation: Beyond the AI-assisted IDEs covered in the Code section, AI can create comprehensive test suites and edge cases automatically from code.
  • Synthetic Data: Privacy laws often prevent testing with real user data. AI tools like Delphix can generate comprehensive synthetic datasets that mimic production data without the privacy risks.
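As a minimal illustration of the idea (dedicated tools like Delphix do far more, including preserving production data distributions), here is a seeded, stdlib-only generator of synthetic users that contains no real PII and is deterministic for repeatable tests:

```python
import random

def synth_users(n: int, seed: int = 42) -> list[dict]:
    """Generate deterministic synthetic users with no real PII."""
    rng = random.Random(seed)  # seeded so test runs are repeatable
    first = ["Ada", "Grace", "Alan", "Edsger", "Barbara"]
    last = ["Lovelace", "Hopper", "Turing", "Dijkstra", "Liskov"]
    users = []
    for i in range(n):
        users.append({
            "id": i + 1,
            "name": f"{rng.choice(first)} {rng.choice(last)}",
            "email": f"user{i + 1}@synthetic.example",
            "age": rng.randint(18, 90),
        })
    return users

sample = synth_users(3)
assert len(sample) == 3
assert all(u["email"].endswith("@synthetic.example") for u in sample)
```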