Validation and verification (V&V) are essential quality assurance processes for PDE simulations. Verification ensures your code solves the equations correctly (solving the equations right). Validation confirms your model accurately represents real-world physics (solving the right equations). A robust V&V framework combines code verification via methods like the Method of Manufactured Solutions, solution verification with mesh convergence studies, and validation against benchmark problems or experimental data. Following established standards like ASME V&V 10/20 helps ensure credibility and reproducibility in scientific computing.
Introduction
Partial differential equation (PDE) simulations underpin countless scientific and engineering decisions—from materials science to fluid dynamics. Yet a simulation is only as trustworthy as the verification and validation (V&V) processes behind it. The consequences of insufficient V&V can be severe: incorrect research conclusions, faulty engineering designs, and wasted computational resources.
This guide provides a practical, implementation-focused framework for V&V in PDE simulations. We’ll cut through the theoretical jargon and give you actionable procedures you can apply to your finite volume solvers (like FiPy), finite element codes, or any PDE-based simulation software.
Understanding the V&V Distinction
Before diving into procedures, it’s critical to understand the fundamental difference between verification and validation. This distinction is often confused but forms the foundation of any credible V&V program.
Verification: Are We Solving the Equations Correctly?
Verification asks: “Did we build the model right?” It is a mathematical and code-quality process that confirms:
- The numerical implementation correctly solves the governing PDEs
- Discretization errors are properly estimated and controlled
- Iterative solvers converge to the correct solution
- Programming bugs and logic errors are eliminated
Verification is about internal consistency—ensuring the code behaves exactly as the mathematical model dictates, independent of whether that model is physically correct.
Validation: Are We Solving the Right Equations?
Validation asks: “Did we build the right model?” It assesses whether the simulation accurately represents the real physical system for its intended purpose:
- Do the model outputs match experimental observations within uncertainty bounds?
- Are the chosen physics and boundary conditions appropriate?
- Does the model perform reliably across its expected operating range?
Validation is about external accuracy—building confidence that the model can be trusted for decision-making.
Key takeaway: Verification must precede validation. You cannot validate an unverified code; doing so merely validates the buggy implementation.
Code Verification: Proving Your Implementation is Correct
Code verification is the rigorous process of demonstrating that your PDE solver correctly implements the underlying mathematical model. Two primary approaches are used in practice.
1. Method of Manufactured Solutions (MMS)
The Method of Manufactured Solutions is the gold standard for code verification in computational science. It provides a systematic, mathematically rigorous way to test your solver without relying on pre-existing analytical solutions that may not match your specific boundary conditions or equation forms.
How MMS Works
- Choose an analytical solution \( u_m(\mathbf{x}, t) \) that is smooth and sufficiently differentiable (e.g., \( u_m = \sin(x)\cos(y)e^{-t} \))
- Substitute \( u_m \) into your governing PDE operator \( \mathcal{L} \) to compute the required source term \( S \):
\[
S(\mathbf{x}, t) = \mathcal{L}(u_m)
\]
- Run your simulation with the manufactured source term \( S \) and appropriate boundary/initial conditions derived from \( u_m \)
- Compare the numerical solution \( u_h \) to the exact manufactured solution \( u_m \)
- Perform mesh refinement studies to verify that the observed order of accuracy matches the theoretical order of your discretization scheme
The power of MMS lies in its universality: you can verify any code for any PDE, regardless of boundary conditions, because you manufacture a compatible solution on demand.
Practical tip: Use symbolic computation tools (SymPy, Mathematica, Maple) to analytically compute the derivatives needed for \( S \), especially for complex nonlinear PDEs. Automate the process with scripts to generate test cases systematically.
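As a concrete illustration, here is a minimal SymPy sketch that derives the MMS source term for a transient diffusion equation \( u_t = D \nabla^2 u + S \), using the manufactured solution from the example above. The choice of governing equation and symbol names are illustrative assumptions, not tied to any particular solver.

```python
# Sketch: deriving an MMS source term with SymPy, assuming the
# governing PDE is transient diffusion  u_t = D * (u_xx + u_yy) + S.
import sympy as sp

x, y, t, D = sp.symbols("x y t D")

# Manufactured solution: smooth and infinitely differentiable.
u_m = sp.sin(x) * sp.cos(y) * sp.exp(-t)

# Apply the PDE operator: S = u_t - D * laplacian(u_m).
S = sp.diff(u_m, t) - D * (sp.diff(u_m, x, 2) + sp.diff(u_m, y, 2))
S = sp.simplify(S)

# Mathematically, S equals (2*D - 1) * sin(x) * cos(y) * exp(-t).
print(S)
```

Feeding this `S` (evaluated numerically, e.g. via `sp.lambdify`) into your solver as a source term, with boundary and initial conditions sampled from `u_m`, completes the MMS setup.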
MMS Resources
For detailed MMS procedures and examples:
- Code Verification by the Method of Manufactured Solutions (OSTI) – foundational paper by Salari & Knupp
- COMSOL Blog: Verify Simulations with MMS – practical implementation guide
- PyLith MMS Documentation – geophysics code examples
2. Order of Accuracy Testing
Order of accuracy testing verifies that your code achieves the expected convergence rate under mesh refinement. For a second-order finite volume scheme, halving the mesh spacing should reduce the error by approximately a factor of 4.
Procedure
- Choose a problem with a known exact solution (can be from MMS or textbook benchmarks)
- Solve on a sequence of increasingly refined meshes (e.g., 32×32, 64×64, 128×128)
- Compute the error norm (L1, L2, or L∞) at each refinement level
- Plot error vs. mesh size on a log-log plot and compute the observed order:
\[
\text{order} = \frac{\log(e_{\text{coarse}}/e_{\text{fine}})}{\log(h_{\text{coarse}}/h_{\text{fine}})}
\]
- Confirm the observed order matches the theoretical discretization order within tolerance (±0.1–0.2, allowing for boundary effects and machine precision)
This is a minimum requirement for any PDE code claiming correctness.
Common pitfall: Using meshes that are too coarse, or problems so simple that they fail to exercise all terms in the PDE. Test complex, realistic scenarios, including nonlinearities, discontinuities, and coupled physics.
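The observed-order formula above reduces to a few lines of code. The error and mesh-spacing values below are illustrative; errors scaling as \( h^2 \) should yield an observed order near 2.

```python
# Sketch: computing the observed order of accuracy from errors on
# two meshes.  Input values below are illustrative placeholders.
import math

def observed_order(e_coarse, e_fine, h_coarse, h_fine):
    """Observed order of accuracy from error norms on two meshes."""
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

# Errors that scale as h^2 give an observed order of 2.
p = observed_order(e_coarse=4.0e-3, e_fine=1.0e-3, h_coarse=0.1, h_fine=0.05)
print(round(p, 2))  # -> 2.0
```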
3. Cross-Code Comparison (Caution)
Comparing results between two different codes can be useful but is not a substitute for verification against analytical solutions. Both codes could contain the same systematic error. Use cross-code comparison only as a supplementary check after proper MMS or order testing.
Solution Verification: Quantifying Numerical Error
Even a verified code produces numerical approximations with inherent errors. Solution verification estimates these errors for a given simulation run.
Discretization Error Estimation
The primary sources of numerical error in PDE simulations are:
- Discretization error (mesh/grid resolution)
- Iteration error (solver convergence tolerances)
- Round-off error (floating-point precision)
Discretization error is typically dominant. Richardson extrapolation can provide an error estimate using results from two meshes:
\[
\varepsilon_{h_1} \approx \frac{u_{h_1} - u_{h_2}}{r^{p} - 1}
\]
where \( h_1 \) is the finer mesh, \( h_2 \) the coarser one, \( r = h_2/h_1 \) is the refinement ratio, and \( p \) is the observed order of accuracy; the estimate applies to the fine-mesh solution \( u_{h_1} \).
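A minimal sketch of this estimate for a scalar quantity of interest; the input values are illustrative, not from any particular simulation:

```python
# Sketch: Richardson extrapolation for a scalar quantity of interest.
def richardson(u_fine, u_coarse, r, p):
    """Return (error estimate for u_fine, extrapolated value)."""
    err = (u_fine - u_coarse) / (r**p - 1.0)
    return err, u_fine + err

# Quantity computed on two meshes with refinement ratio r = 2 and
# observed order p = 2 (illustrative numbers).
err, u_extrap = richardson(u_fine=1.010, u_coarse=1.040, r=2, p=2)
print(err, u_extrap)  # err ≈ -0.01, extrapolated value ≈ 1.0
```

Note that the extrapolated value is itself only an estimate of the exact solution; it inherits the assumption that the solutions are in the asymptotic convergence range.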
Convergence Criteria
Always verify that iterative solvers (e.g., Newton’s method, linear system solvers) have converged to the desired tolerance before trusting results. Check:
- Residual reduction by several orders of magnitude
- Solution changes between iterations below threshold
- Conservation errors (for finite volume methods)
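The first two checks above can be bundled into a small helper; the tolerance defaults here are illustrative, not prescriptive, and should be tightened to match your application.

```python
# Sketch: a convergence check combining residual reduction (relative
# to the initial residual) and the solution change between the last
# two iterations.  Default tolerances are illustrative assumptions.
def converged(residuals, solution_change, rtol=1e-6, change_tol=1e-8):
    """True if the residual dropped by at least 1/rtol and the
    latest solution update is below change_tol."""
    residual_ok = residuals[-1] <= rtol * residuals[0]
    change_ok = solution_change <= change_tol
    return residual_ok and change_ok

print(converged([1.0, 1e-3, 1e-7], solution_change=1e-9))  # -> True
print(converged([1.0, 1e-2, 1e-2], solution_change=1e-9))  # -> False
```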
Warning: “Converged” does not mean “correct.” An iterative solver can converge to a wrong solution if the initial guess is poor or the problem is ill-conditioned. This is why verification and validation are both necessary.
Validation: Comparing to Reality
Validation assesses whether your simulation model is sufficiently accurate for its intended purpose by comparing predictions to independent experimental data.
Benchmark Problems
Benchmark problems are standardized test cases with well-characterized experimental or high-fidelity reference data. They serve as objective validation targets.
Common PDE Benchmark Categories
- Fluid Dynamics: Flow past a cylinder (Re=20–1000), lid-driven cavity flow, Rayleigh-Bénard convection
- Transport Equations: 1D advection-diffusion with known analytical solutions
- Phase Field: Allen-Cahn, Cahn-Hilliard patterns validated against microscopy
- Diffusion-Reaction: Fisher-KPP wave speed validation
Repositories like PDEBench provide standardized datasets for comparing machine learning and numerical methods.
Validation Best Practices
- Use independent data: Never validate with the same dataset used for calibration or code verification
- Quantify uncertainty: Experimental measurements have uncertainty; model predictions have numerical error. Compare within combined uncertainty bounds
- Multiple validation points: Test across the parameter space, not just one operating condition
- Document validation basis: Record which problems were used, results, and pass/fail criteria
Critical mistake: Using experimental data to tune model parameters (calibration) and then claiming validation with the same data. This is circular reasoning. Reserve a separate validation dataset.
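One way to operationalize the combined-uncertainty comparison, loosely following the structure of ASME V&V 20 (comparison error versus combined validation uncertainty), is sketched below. All numerical values are illustrative; a full V&V 20 assessment involves more careful uncertainty characterization than this.

```python
# Sketch: comparison error E vs. combined validation uncertainty,
# in the simplified spirit of ASME V&V 20.  Inputs are illustrative.
import math

def validation_assessment(sim, exp, u_num, u_input, u_exp):
    """Return (comparison error E, combined validation uncertainty)."""
    E = sim - exp
    u_val = math.sqrt(u_num**2 + u_input**2 + u_exp**2)
    return E, u_val

E, u_val = validation_assessment(sim=10.3, exp=10.0,
                                 u_num=0.1, u_input=0.2, u_exp=0.3)
# |E| = 0.3 vs u_val ≈ 0.37: the discrepancy is within the combined
# uncertainty, so no model-form error is demonstrated at this level.
print(abs(E) <= u_val)  # -> True
```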
When Experimental Data Is Unavailable
For many research problems, high-quality experimental data is scarce. In such cases:
- Use high-fidelity reference solutions (e.g., DNS for turbulent flows) if available
- Compare with analytical solutions for simplified cases
- Perform cross-code comparison with multiple independent, well-verified codes
- Be transparent about the limitation and characterize predictive uncertainty through sensitivity analysis
Standards and Frameworks
Adopting established standards lends credibility to your V&V process and ensures completeness.
ASME V&V Standards
The American Society of Mechanical Engineers (ASME) has developed a suite of standards for computational modeling credibility:
- ASME V&V 10 – Computational solid mechanics
- ASME V&V 20 – CFD and heat transfer
- ASME V&V 40 – Risk-based framework for medical devices (adaptable to other fields)
- VVUQ 1 – Terminology standardization
These standards provide structured procedures for:
- Planning V&V activities
- Quantifying numerical uncertainty
- Assessing model credibility based on evidence
- Documentation requirements
Framework Hierarchy
A comprehensive V&V framework follows this hierarchy:
- Code verification → Prove the code is correct
- Solution verification → Estimate numerical error for this run
- Validation → Compare to experimental/reference data
- Uncertainty quantification → Propagate input uncertainty to outputs
- Predictive capability → Establish confidence for decision use
Common Mistakes and How to Avoid Them
Based on the literature and expert consensus, here are the most frequent V&V errors:
Verification Mistakes
- Assuming the code is bug-free – Even widely-used codes contain undetected bugs. Regular regression testing with MMS cases catches new errors.
- Neglecting order accuracy – Without confirming theoretical convergence rates, you cannot be confident in the error estimates.
- Using cross-code comparison as the sole verification – Two wrong codes can agree. Always include analytical or manufactured solutions.
- One-time verification – Verification should be continuous, not a one-off activity. Every code change requires re-verification of affected modules.
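Continuous re-verification is easiest when order-of-accuracy checks run as ordinary automated tests. The sketch below shows the shape of such a test; `run_mms_case` is a placeholder for your solver's MMS driver, stubbed here with an error that scales as \( h^2 \) so the harness itself is runnable.

```python
# Sketch: an automated regression test asserting the observed order
# of accuracy.  run_mms_case is a hypothetical stand-in for a real
# MMS driver returning the L2 error on an n x n mesh.
import math

def run_mms_case(n):
    h = 1.0 / n
    return 0.5 * h**2  # a verified 2nd-order code behaves like this

def test_observed_order_is_second():
    e_coarse, e_fine = run_mms_case(32), run_mms_case(64)
    p = math.log(e_coarse / e_fine) / math.log(2.0)
    assert abs(p - 2.0) < 0.2, f"observed order {p:.2f} deviates from 2"

test_observed_order_is_second()  # runs directly; pytest would collect it too
```

Wired into CI, a test like this fails the build whenever a code change degrades convergence, catching regressions before they reach production runs.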
Validation Mistakes
- Calibration vs. validation confusion – Tuning parameters to experimental data and then “validating” with the same data inflates confidence artificially. Keep calibration and validation datasets separate.
- Ignoring experimental uncertainty – A 5% discrepancy may be statistically insignificant if experimental error is 10%. Always propagate measurement uncertainty.
- Extrapolation – Validating a model in one regime (e.g., low Reynolds number) and then using it in a vastly different regime (high Re, turbulence) without additional validation is unjustified.
- Poor documentation – Without detailed records of validation problems, results, and decisions, credibility cannot be assessed by others (or yourself months later).
General V&V Mistakes
- Poorly characterized inputs – Garbage in, garbage out. Uncertainty in material properties, boundary conditions, or geometry must be quantified and propagated.
- No independent peer review – V&V should be reviewed by experts not involved in the development. This catches confirmation bias and oversights.
- Lack of reproducibility – All V&V cases should be automated with version-controlled input files and scripts so others can reproduce your results exactly.
Implementing a Practical V&V Workflow for PDE Codes
Here is a step-by-step framework you can implement for your PDE simulation projects:
Phase 1: Code Verification (Before Any Production Runs)
- Develop MMS test suite covering:
- Each PDE type your code solves (diffusion, advection, reaction, coupled)
- All boundary condition types (Dirichlet, Neumann, mixed)
- Complex geometries if applicable
- Automate order accuracy tests on representative problems. Integrate into continuous integration (CI) so every code commit runs these tests.
- Verify convergence of iterative solvers to strict tolerances (e.g., relative residual reduced to ≤ 10⁻⁶ for linear solves).
- Document verification results with convergence plots, error tables, and pass/fail criteria.
Phase 2: Solution Verification (For Each Simulation Case)
- Perform mesh convergence study with at least 3 mesh levels (coarse, medium, fine)
- Compute discretization error estimate (Richardson extrapolation or multiple-grid methods)
- Check solver convergence – confirm residuals and solution changes meet tolerances
- Record mesh quality metrics (orthogonality, aspect ratios) – poor mesh quality can corrupt results even with fine meshes
- Report estimated numerical uncertainty in key quantities of interest
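The mesh-study arithmetic in Phase 2 — observed order from three meshes plus a Grid Convergence Index (GCI) in the style of Roache — can be sketched as follows. The quantity-of-interest values are illustrative, and the factor of safety of 1.25 is the commonly cited default, not a universal requirement.

```python
# Sketch: observed order from three meshes (constant refinement ratio
# r) and a fine-grid GCI.  Quantity values are illustrative.
import math

def observed_order_3(f_fine, f_med, f_coarse, r):
    """Observed order from solutions on three systematically refined meshes."""
    return math.log(abs((f_coarse - f_med) / (f_med - f_fine))) / math.log(r)

def gci_fine(f_fine, f_med, r, p, Fs=1.25):
    """Fine-grid Grid Convergence Index with factor of safety Fs."""
    e_rel = abs((f_med - f_fine) / f_fine)
    return Fs * e_rel / (r**p - 1.0)

# Illustrative quantity of interest on fine, medium, coarse meshes.
f1, f2, f3, r = 1.001, 1.004, 1.016, 2.0
p = observed_order_3(f1, f2, f3, r)
print(round(p, 2))            # -> 2.0, since (1.016-1.004)/(1.004-1.001) = 4
print(gci_fine(f1, f2, r, p)) # relative numerical uncertainty, ~0.1%
```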
Phase 3: Validation (Building Credibility)
- Select appropriate benchmark problems matching your application domain
- Run validation cases with the same mesh resolution and solver settings as production runs
- Quantify validation error against reference data, accounting for experimental/benchmark uncertainty
- Assess model form error – if discrepancies exceed numerical uncertainty, identify missing physics or incorrect assumptions
- Document validation evidence with comparison plots, error metrics, and conclusions about adequacy for intended use
Phase 4: Uncertainty Quantification (Advanced)
For critical decisions, propagate input uncertainties (material properties, boundary conditions) through to outputs using:
- Sampling methods (Monte Carlo, Latin hypercube)
- Polynomial chaos expansions for efficient propagation
- Sensitivity analysis to identify dominant uncertainty sources
The comprehensive VVUQ framework by Roy et al. integrates V&V with uncertainty quantification.
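A toy Monte Carlo propagation illustrates the idea; the "model" here is a cheap stand-in for a full PDE solve, and the input distribution is an assumed one.

```python
# Sketch: Monte Carlo propagation of an uncertain input through a
# toy model standing in for an expensive PDE solve.
import random, statistics

random.seed(0)  # reproducibility

def model(diffusivity):
    # Placeholder model: output sensitive to the uncertain input.
    return 100.0 / diffusivity

# Assumed uncertain input: diffusivity ~ Normal(mean=2.0, sd=0.1).
samples = [model(random.gauss(2.0, 0.1)) for _ in range(10_000)]

# The output spread quantifies propagated uncertainty; first-order
# theory predicts sd ≈ |dQ/dD| * sd_D = 25 * 0.1 = 2.5 here.
print(statistics.mean(samples), statistics.stdev(samples))
```

Latin hypercube sampling and polynomial chaos follow the same propagate-and-summarize pattern, but need far fewer model evaluations, which matters when each sample is a full simulation.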
Special Considerations for Finite Volume PDE Solvers
FiPy and similar finite volume codes have specific V&V considerations:
Discretization Verification
- Verify face gradient calculations with manufactured solutions that produce non-zero source terms
- Test different mesh types (structured vs. unstructured) separately, as error constants differ
- Validate flux conservation by computing integral balances – finite volume methods should conserve quantities exactly (to within solver tolerance)
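The integral-balance check above can be demonstrated on a minimal 1D explicit finite volume diffusion step with zero-flux boundaries, where the total conserved quantity should be preserved to round-off. The grid and coefficients are illustrative; this is a sketch of the check, not of FiPy internals.

```python
# Sketch: exact conservation check for a 1D explicit finite volume
# diffusion step with zero-flux boundaries.  Parameters illustrative;
# dt satisfies the explicit stability limit dt <= dx^2 / (2*D).
nx, dx, D, dt = 50, 0.02, 1.0, 1e-5
u = [1.0 if 20 <= i < 30 else 0.0 for i in range(nx)]

def step(u):
    # F[i] is the flux through the face between cells i-1 and i;
    # the two boundary faces carry zero flux.
    F = [0.0] + [-D * (u[i] - u[i - 1]) / dx for i in range(1, nx)] + [0.0]
    return [u[i] + dt / dx * (F[i] - F[i + 1]) for i in range(nx)]

total_before = sum(u) * dx
for _ in range(100):
    u = step(u)
total_after = sum(u) * dx

print(abs(total_after - total_before))  # conserved to round-off
```

Because each face flux is added to one cell and subtracted from its neighbor, the interior contributions telescope away, so any drift beyond round-off signals a flux-assembly bug.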
Common FiPy-Specific Checks
If using FiPy, ensure you have verified:
- Correct implementation of boundary condition types (fixed value, gradient, etc.)
- Handling of anisotropic or tensor-valued diffusion coefficients
- Time-stepping schemes for transient problems (CFL condition adherence)
- Coupled physics interactions (e.g., electrochemistry + diffusion)
Refer to the FiPy benchmark documentation for built-in verification problems.
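For the time-stepping check in particular, the explicit-diffusion stability limit is a quick sanity bound when explicit sweeps are used (implicit schemes relax it). A minimal helper, with an illustrative safety factor:

```python
# Sketch: stability-limited time step for explicit 1D diffusion.
# The safety factor is an illustrative choice, not a standard value.
def stable_dt_diffusion(dx, D, safety=0.9):
    """Explicit 1D diffusion is stable for dt <= dx**2 / (2*D)."""
    return safety * dx**2 / (2.0 * D)

print(stable_dt_diffusion(dx=0.01, D=1.0))  # ≈ 4.5e-05
```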
Case Study: Verifying a Phase-Field Simulation
Let’s walk through a concrete example. Suppose you’re implementing a Cahn-Hilliard phase-field model in FiPy:
- Code Verification with MMS:
- Choose a manufactured solution that satisfies the Cahn-Hilliard equation with a forcing term
- Generate exact solution and source term symbolically
- Run on a 2D mesh and verify L2 error converges at second order
- Solution Verification:
- Run mesh refinement (e.g., 50×50, 100×100, 200×200)
- Compute interface width and total free energy on each mesh
- Use Richardson extrapolation to estimate discretization error in energy
- Validation:
- Simulate spinodal decomposition and compare characteristic length scale against analytical prediction or published results
- Verify that the coarsening kinetics match theory (characteristic length L ∝ t^{1/3})
- Documentation:
- Save all input files, scripts, and results in a version-controlled repository
- Generate a verification report with plots and tables
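The coarsening-kinetics check from the validation step amounts to fitting a slope on log-log axes. The sketch below uses synthetic stand-in data (roughly proportional to \( t^{1/3} \)) purely to show the fit; in practice the lengths would come from your simulation output.

```python
# Sketch: checking L ~ t^(1/3) coarsening by least-squares fitting
# the slope of log(L) vs log(t).  Data are synthetic placeholders.
import math

t = [10, 20, 40, 80, 160]
L = [2.15, 2.71, 3.42, 4.31, 5.43]  # stand-in characteristic lengths

xs = [math.log(ti) for ti in t]
ys = [math.log(Li) for Li in L]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))

print(round(slope, 2))  # ≈ 0.33, consistent with the 1/3 exponent
```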
Decision Guide: When to Use Which V&V Method
| Situation | Recommended Approach |
|---|---|
| New PDE code or major revision | Full MMS suite + order accuracy testing |
| Minor bug fix | Targeted MMS tests for affected modules |
| Production simulation | Solution verification (mesh study) mandatory |
| Model development phase | Validation against 3+ benchmark problems |
| Regulatory or high-stakes use | Full ASME V&V 40 framework with independent peer review |
| Research code with no experimental data | Document limitations; use multiple independent verification methods |
| Performance optimization | Re-verify after optimization to ensure no correctness regression |
Bottom line: The rigor of V&V should match the consequences of failure. Academic research code still needs basic verification, but full V&V 40 compliance may be overkill. Industrial or safety-critical applications demand comprehensive, documented V&V.
Related Guides
For related topics in scientific simulation workflows:
- From Equations to Simulations: The Modeling Pipeline – end-to-end simulation development process
- Using FiPy for Phase-Field Modeling – practical PDE implementation guide
- Introduction to Materials Modeling for Beginners – foundational concepts for newcomers
Summary and Next Steps
Validation and verification are not optional add-ons; they are integral to credible PDE simulations. The practical framework outlined here provides a roadmap:
- Start with code verification using the Method of Manufactured Solutions to establish baseline correctness
- Quantify numerical errors through solution verification for every production run
- Build validation evidence with benchmark problems and independent data
- Adopt standards like ASME V&V 10/20 to structure your process
- Avoid common mistakes—especially the verification/validation confusion and circular calibration
- Document everything for reproducibility and peer review
Implementing even a basic V&V program (MMS tests + mesh convergence) dramatically increases confidence in your simulation results and saves time catching errors early.
Next steps: Audit your current simulation workflow. Are you skipping verification entirely? Running only single-mesh calculations? Add at least one verification test (MMS or order accuracy) to your next project and measure the difference in confidence and bug detection.
Need Help Implementing V&V for Your Simulation Project?
Establishing a robust V&V process requires expertise and upfront investment. If you’re struggling with:
- Setting up manufactured solution tests for your PDE code
- Designing mesh convergence studies for complex geometries
- Interpreting validation results and quantifying uncertainty
- Preparing V&V documentation for publication or regulatory submission
Our team of computational science experts can help. We specialize in building verification frameworks for scientific Python codes, including FiPy-based simulations. Get in touch via our issue tracking system to discuss your project’s credibility needs.
References and Further Reading
- Roy, C. J. (2005). “Review of code and solution verification procedures for computational simulation.” Journal of Computational Physics.
- Oberkampf, W. L., & Roy, C. J. (2010). Verification and validation in scientific computing. Cambridge University Press.
- AIAA Guide for the Verification and Validation of Computational Fluid Dynamics Simulations (1998).
- ASME V&V Standards: V&V 10, V&V 20, VVUQ 1.