FiPy does not have built-in adaptive mesh refinement (AMR). Current approaches involve external mesh generation with Gmsh (inefficient for dynamic problems), integration with dedicated AMR libraries like libmesh (architecturally challenging), or switching to alternative phase-field codes that support AMR natively (MOOSE, PRISMS-PF). AMR provides significant speedups (often 2–10×) for phase-field problems with localized interfaces, but introduces algorithmic complexity and may not be worthwhile when high resolution is needed everywhere. If your FiPy simulation struggles with memory or performance due to uniform meshes, first optimize your static mesh, consider coarser global resolution with tighter solvers, and only pursue AMR if your problem has a clear small region of interest (e.g., a moving interface) and you’re willing to implement custom mesh management infrastructure.
Introduction: The Mesh Resolution Dilemma
Phase-field simulations often involve complex, evolving phenomena—dendrite growth in solidification, phase separation in alloys, or electrode-electrolyte interfaces in batteries. These simulations typically require high spatial resolution near interfaces where gradients are steep, while coarser meshes suffice far from these regions. A uniform mesh that’s fine enough to resolve the interface everywhere wastes computational resources on uninteresting areas, leading to excessive memory usage and slower run times.
Adaptive mesh refinement (AMR) solves this by dynamically adjusting mesh resolution during a simulation, refining cells where the solution changes rapidly and coarsening where it’s smooth. For phase-field problems—where the interface occupies a tiny fraction of the domain—AMR can reduce cell counts by an order of magnitude or more, enabling larger simulations or faster turnaround on desktop workstations.
But if you’re using FiPy, you’ll quickly discover a hard truth: FiPy does not support AMR natively. This wasn’t always clear in the documentation, so many users have invested time exploring dead ends before hitting the same conclusion the FiPy core developers reached: the architectural mismatch between FiPy’s finite volume framework and traditional AMR libraries is substantial.
This guide cuts through the uncertainty. We’ll examine what AMR could mean for your FiPy simulations, review the current state of AMR support, evaluate practical workarounds, and provide a decision framework to help you choose the right path forward.
What Is Adaptive Mesh Refinement (AMR)?
Before diving into FiPy specifics, let’s establish a clear mental model of how AMR works and why it’s valuable.
The Core Idea
In a static mesh, you choose a single resolution (cell size) that must satisfy the most demanding part of your domain everywhere. If you need 10 nm spacing to resolve a dendrite tip but the domain is 1 mm across, a uniform mesh would require 100,000 cells in each dimension—clearly infeasible.
AMR starts with a coarse base mesh and introduces finer patches only where needed. The refinement criteria typically include:
- Gradient threshold: Refine where |∇φ| exceeds a value (for phase-field order parameters)
- Error estimators: Use solution-based or residual-based error indicators
- Feature tracking: Follow known interfaces or structures
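The first criterion is easy to make concrete. The sketch below uses plain NumPy on a hypothetical 1D tanh interface profile (the interface width and threshold are made-up example values, not anything FiPy prescribes); in FiPy, the same gradient magnitude is available through a `CellVariable`'s `grad` attribute, and the thresholding logic is identical.

```python
import numpy as np

# Hypothetical 1D order-parameter profile: a tanh interface centered at x = 0.5
dx = 0.01
x = np.arange(0.0, 1.0, dx) + dx / 2            # 100 cell centers
phi = 0.5 * (1.0 + np.tanh((x - 0.5) / 0.05))   # phi in [0, 1]

# Gradient-threshold criterion: flag cells where |d(phi)/dx| exceeds a value
grad_phi = np.gradient(phi, dx)
threshold = 1.0                                  # assumed; problem-dependent
refine = np.abs(grad_phi) > threshold

print(f"{refine.sum()} of {refine.size} cells flagged for refinement")
```

Only the handful of cells straddling the interface get flagged; everywhere else the gradient is essentially zero.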
The mesh adapts periodically (every N time steps or when certain criteria trigger), requiring:
- Cell splitting/coarsening algorithms
- Data interpolation to transfer solution fields from old to new meshes
- Load balancing for parallel simulations
- Conservation enforcement across coarse-fine boundaries
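To make the cell-splitting bookkeeping concrete, here is a minimal quadtree sketch in Python. It is illustrative only; the criterion, class names, and level cap are invented for this example, and no production AMR library (or FiPy) represents meshes this simply.

```python
class Cell:
    """Minimal quadtree cell: illustrative AMR bookkeeping, not FiPy API."""
    def __init__(self, x, y, size, level=0):
        self.x, self.y, self.size, self.level = x, y, size, level
        self.children = []

    def split(self):
        # Replace this leaf with four half-size children
        h = self.size / 2
        self.children = [Cell(self.x + i * h, self.y + j * h, h, self.level + 1)
                         for i in (0, 1) for j in (0, 1)]

    def leaves(self):
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

def refine(root, needs_refinement, max_level):
    """Recursively split leaves wherever the criterion holds."""
    for leaf in root.leaves():
        if leaf.level < max_level and needs_refinement(leaf):
            leaf.split()
            refine(leaf, needs_refinement, max_level)

# Refine toward a vertical "interface" at x = 0.5 (made-up criterion)
root = Cell(0.0, 0.0, 1.0)
near_interface = lambda c: abs((c.x + c.size / 2) - 0.5) < c.size
refine(root, near_interface, max_level=3)
print(len(root.leaves()), "leaf cells instead of", 8 ** 2, "uniform fine cells")
```

Even this toy example shows the payoff: far fewer leaf cells than a uniform mesh at the finest level, concentrated where the criterion fires.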
AMR vs Static Mesh: A Quantitative Comparison
A 2021 study in Computers & Mathematics with Applications analyzed AMR performance for various PDE problems and found that AMR reduced memory consumption by 60–80% and wall-clock time by 40–70% compared to uniform meshes refined to the same accuracy threshold, even after accounting for AMR overhead.
However, the same study noted that for problems where high resolution is needed nearly everywhere (e.g., early-stage spinodal decomposition with uniform composition fluctuations), AMR offered minimal benefit and sometimes underperformed due to management overhead.
The State of AMR in FiPy: Current Reality
Official Position from the FiPy Team
The most authoritative source on FiPy AMR is GitHub issue #618, opened in February 2019. The discussion reveals the project’s stance:
guyer (FiPy maintainer): “We’ve made occasional attempts at this, but nothing fruitful. Using gmsh to regenerate the mesh and then reinterpolating the data is hopelessly inefficient (I’ve done it for quasi-static problems). Properly, FiPy should be integrated with libmesh or some other dedicated adaptive mesher; I’d like this to happen, but thus far, the fundamental architectures of FiPy and the meshers I’ve looked at are just too dissimilar.”
This isn’t a “maybe someday” feature—it’s a recognition that FiPy’s design and the requirements of efficient AMR are fundamentally misaligned. The FiPy architecture, built on a straightforward finite volume discretization with structured or unstructured meshes represented in memory as fixed grids, doesn’t accommodate the dynamic topology changes that AMR requires without significant restructuring.
Why Integration Is Hard
FiPy’s mesh abstraction (Grid2D, Grid3D, Gmsh2D, Gmsh3D, etc.) assumes a static topology. Cells, faces, and their connectivity are established at simulation start and persist throughout. AMR requires:
- Dynamic connectivity: When a cell splits, new cells appear, neighbors change, and face-to-cell relationships are redefined.
- Data migration: Solution variables must be interpolated from old cells to new ones, preserving conservation properties.
- Solver recomputation: The sparse matrix structure changes as the number of cells changes, requiring matrix reallocation and preconditioner rebuilding.
- Parallel redistribution: In MPI simulations, refined cells create load imbalance; processors need to exchange ghost regions.
None of these are supported in FiPy’s current framework. Attempting to implement them would essentially mean rewriting large portions of FiPy itself.
Practical Approaches (and Their Trade-offs)
Given the lack of native support, what are your options if you need AMR-like behavior with FiPy? Here are the paths users have explored, ranked from most to least practical.
1. Use an Alternative Phase-Field Code with AMR
Verdict: Recommended if AMR is essential and you’re starting a new project.
Several open-source phase-field codes support AMR natively:
- MOOSE (Multiphysics Object-Oriented Simulation Environment): Uses libmesh for adaptive meshing, supports parallel AMR with load balancing, and includes phase-field modules.
- PRISMS-PF: Built on deal.II, offers scalable AMR for phase-field problems with sophisticated refinement criteria.
- OpenPhase: Supports cell-based AMR for microstructure evolution.
These codes are production-ready, well-documented, and have active communities. The trade-off is a steeper learning curve if you’re already invested in FiPy’s Python API and ecosystem.
When to choose this: You’re starting a new simulation project where AMR is a hard requirement (large domain, localized interface, limited compute resources), and you can afford to learn a new framework.
For an overview of phase-field modeling concepts, see our guide Understanding Phase Field Models in Materials Science.
2. Pre-Compute Multiple Static Meshes and Switch Between Them
Verdict: Semi-practical for problems with predictable refinement patterns.
If your simulation has a known trajectory (e.g., a dendrite growing in one direction), you can create a series of meshes with refinement zones manually placed using Gmsh. At predefined time steps, you would:
- Stop the simulation
- Interpolate fields from the old mesh to the new mesh
- Restart with the new mesh
Challenges:
- FiPy has no built-in mesh switching. You’d need to export cell-centered fields, create a new `Mesh` object, interpolate manually, and reinitialize solvers.
- Interpolation must preserve conservation—simple linear interpolation won’t suffice for flux-conservative finite volume methods.
- The process is manual and error-prone; not suitable for dynamic adaptation based on solution gradients.
When to choose this: Your refinement pattern is spatially predictable (e.g., a moving front that stays within a known bounding region), and you can tolerate occasional stops and manual intervention.
Pitfall warning: Poor interpolation at coarse-fine interfaces can introduce first-order accuracy losses or non-physical oscillations.
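The interpolation step is the subtle part. Below is a minimal sketch of a conservative (overlap-weighted) transfer between two 1D cell-centered meshes, assuming piecewise-constant cell averages; the mesh sizes and field values are arbitrary example data.

```python
import numpy as np

def conservative_transfer(edges_old, u_old, edges_new):
    """Transfer cell averages between 1D meshes by overlap-weighted averaging.
    Preserves the integral of u exactly for piecewise-constant data.
    O(n^2) double loop: fine for a sketch, not for production."""
    u_new = np.zeros(len(edges_new) - 1)
    for j in range(len(u_new)):
        a, b = edges_new[j], edges_new[j + 1]
        total = 0.0
        for i in range(len(u_old)):
            lo = max(a, edges_old[i])
            hi = min(b, edges_old[i + 1])
            if hi > lo:                      # cells overlap
                total += u_old[i] * (hi - lo)
        u_new[j] = total / (b - a)
    return u_new

edges_old = np.linspace(0.0, 1.0, 9)     # 8 coarse cells
u_old = np.linspace(0.0, 1.0, 8)         # some cell averages
edges_new = np.linspace(0.0, 1.0, 17)    # 16 fine cells
u_new = conservative_transfer(edges_old, u_old, edges_new)

# Integral ("mass") is preserved across the transfer
old_mass = (u_old * np.diff(edges_old)).sum()
new_mass = (u_new * np.diff(edges_new)).sum()
assert np.isclose(old_mass, new_mass)
```

This only achieves first-order accuracy at refinement jumps, which is exactly the accuracy loss the pitfall warning describes; higher-order conservative remapping is substantially harder.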
3. External AMR Library Integration (libmesh, Paramesh)
Verdict: Research-level; not ready for production.
The FiPy maintainer mentioned libmesh as a potential integration target. libmesh provides a finite element/finite volume framework with AMR, used by MOOSE and others. However, integrating FiPy with libmesh would require:
- Rewriting FiPy’s discretization to use libmesh’s mesh and assembly system
- Porting FiPy’s existing equation and solver infrastructure
- Addressing fundamental differences: FiPy is finite volume, libmesh is primarily finite element
Similar challenges exist with Paramesh (a block-structured AMR library in Fortran). The architectural mismatch means integration isn’t a matter of “plugging in” AMR—it’s a major engineering effort.
When to consider this: You’re a developer planning to contribute to FiPy core or maintain a long-term fork with AMR support. Not recommended for end-users.
4. Accept the Limitation: Optimize Your Static Mesh
Verdict: The pragmatic default for most FiPy users.
Before pursuing AMR, ask: can I achieve acceptable performance with a well-designed static mesh? Often, the answer is yes if you:
- Use graded meshes: Manually refine near interfaces with smooth transitions (no large aspect ratio jumps)
- Employ local mesh density functions: In Gmsh, define mesh size fields that concentrate cells where needed
- Leverage FiPy’s solver options: Use efficient preconditioners (e.g., `ScipyKrylov` with `ilu` preconditioning) and appropriate time stepping
- Reduce unnecessary refinement: Many users over-refine “just to be safe.” Convergence studies often reveal that coarser meshes suffice for qualitative results.
For example, a phase-field dendrite simulation might need fine cells only within a few interface widths of the solid-liquid front. A static mesh with manual grading can capture this with 10–20% of the cells of a uniform fine mesh, albeit without the adaptivity to track a moving front. If the front moves slowly relative to computation time, you might accept that some regions become under-resolved as the simulation progresses, or simply start with a mesh fine enough to cover the entire expected trajectory.
When to choose this: Your simulation domain isn’t enormous, your interface region is a manageable fraction of the domain, or you can tolerate some inefficiency for code simplicity.
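A graded spacing array is easy to generate by hand. The sketch below (all grading parameters are assumed example values) builds one with NumPy; in 1D, FiPy’s `Grid1D` accepts a sequence of per-cell widths, so an array like this can be passed as `dx` directly.

```python
import numpy as np

def graded_dx(dx_min, growth, n_fine, n_coarse):
    """Cell widths for a 1D graded mesh: n_fine uniform fine cells around an
    interface, flanked by n_coarse geometrically growing cells on each side.
    All parameters are assumed example values."""
    coarse = dx_min * growth ** np.arange(1, n_coarse + 1)
    return np.concatenate([coarse[::-1], np.full(n_fine, dx_min), coarse])

dx = graded_dx(dx_min=1e-3, growth=1.3, n_fine=40, n_coarse=15)
uniform_cells = int(round(dx.sum() / 1e-3))   # cells a uniform fine mesh would need
print(f"graded: {dx.size} cells vs uniform: {uniform_cells} cells")
# e.g., mesh = fipy.Grid1D(dx=dx)
```

The modest growth factor (1.3 here) keeps adjacent-cell size jumps small, which avoids the accuracy loss that large aspect-ratio jumps cause.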
5. Implement a Custom AMR Layer
Verdict: Advanced users only; high risk.
In theory, you could build AMR on top of FiPy by:
- Maintaining a mapping from FiPy’s cell indices to a hierarchical mesh structure
- Periodically constructing a new `Grid2D`/`Grid3D` with refined cells
- Interpolating solution variables using FiPy’s `CellVariable` methods
- Rebuilding the linear system
This approach faces severe obstacles:
- FiPy’s `Mesh` objects are not designed for mid-simulation modification
- The `CellVariable` data structures are tightly coupled to the original mesh indexing
- Rebuilding the equation system (`TransientTerm`, `DiffusionTerm`, etc.) mid-simulation is complex
- No conservation guarantees across refinement steps
A few researchers have attempted variants of this, but no working, open-source implementation exists.
When to consider this: You have specific research requirements that mandate FiPy, AMR is critical, and you’re prepared to invest months in development and validation. Not recommended for routine use.
Decision Framework: Should You Pursue AMR with FiPy?
Use this flowchart to determine your path:
Do you need AMR? (Is your problem large + interface-localized?)
│
├─ No → Optimize static mesh (Section 4)
│
└─ Yes → Can you switch to a phase-field code with AMR?
│
├─ Yes → Use MOOSE/PRISMS-PF/OpenPhase
│
└─ No → Are you willing to build custom AMR infrastructure?
│
├─ Yes → Research-level implementation (Section 5)
│
└─ No → Accept that FiPy cannot do AMR;
either live with static mesh inefficiency
or change your problem scope.
Key Questions to Answer
- What percentage of your domain needs fine resolution?
  – < 20%: AMR could provide 3–10× speedup if available
  – > 50%: AMR benefit diminishes; static mesh may be simpler and faster
- Does your interface move significantly during simulation?
  – Yes: AMR tracking becomes valuable; otherwise a static graded mesh must cover the entire possible trajectory
  – No: A single static graded mesh is sufficient
- Do you have developer resources to build AMR support?
  – Yes: You could contribute to FiPy or maintain a fork (high long-term cost)
  – No: Choose an alternative code or a static mesh
- Is parallel scalability critical?
  – AMR introduces load-balancing challenges; static meshes parallelize more predictably
  – If you need hundreds of cores, AMR libraries like MOOSE handle this better than any FiPy hack could
Common Pitfalls and Misconceptions
Even if you pursue AMR through alternative codes, beware these issues:
Pitfall 1: Over-Refinement
Setting refinement criteria too aggressively leads to excessive cell creation, negating AMR benefits and possibly exceeding memory limits. The NIST Phase Field Recommended Practices Guide advises: “Limit refinement levels” and “Use proper neighborhood constraint (no more than one refinement level difference between adjacent cells)”.
Recommendation: Start with conservative thresholds (e.g., refine only where |∇φ| > 0.05/Δx) and monitor cell count growth.
Pitfall 2: Poor Interpolation at Coarse-Fine Boundaries
When data moves between meshes of different refinement levels, conservation errors and accuracy loss occur. First-order convergence at these interfaces can dominate the global error.
Recommendation: Use conservative interpolation schemes; in finite volume methods, ensure flux matching across refinement interfaces. Libraries like libmesh handle this internally—avoid DIY interpolation unless you’re an expert.
Pitfall 3: Mesh Chatter
Continuous refinement/coarsening between adjacent time steps (e.g., when an interface oscillates around a cell boundary) causes instability and wasted computation.
Recommendation: Implement hysteresis: require a cell to stay above/below the refinement threshold for several steps before changing level.
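One way to implement that hysteresis is a per-cell streak counter, as in the NumPy sketch below (the class name and `patience` parameter are invented for illustration).

```python
import numpy as np

class RefinementHysteresis:
    """Flip a cell's refine flag only after the raw criterion disagrees with
    the current flag for `patience` consecutive adaptation steps (sketch)."""
    def __init__(self, n_cells, patience=3):
        self.patience = patience
        self.flags = np.zeros(n_cells, dtype=bool)   # current refine flags
        self.streak = np.zeros(n_cells, dtype=int)   # consecutive disagreements

    def update(self, criterion):
        disagree = criterion != self.flags
        self.streak = np.where(disagree, self.streak + 1, 0)
        flip = self.streak >= self.patience          # held long enough: flip
        self.flags[flip] = criterion[flip]
        self.streak[flip] = 0
        return self.flags

# Cell 0 chatters around the threshold; cell 1 stays firmly above it.
h = RefinementHysteresis(n_cells=2, patience=3)
for step in range(6):
    raw = np.array([step % 2 == 0, True])   # oscillating vs. sustained signal
    flags = h.update(raw)

print(flags)   # chattering cell stays coarse; sustained cell got refined
```

The chattering cell never accumulates enough consecutive steps to flip, so the mesh stays put; the cell with a sustained signal is refined after `patience` steps.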
Pitfall 4: Ignoring Load Balance in Parallel Runs
Dynamic refinement creates uneven cell distribution across MPI ranks. Some processors get flooded with refined cells while others idle.
Recommendation: Use libraries with built-in load balancing (MOOSE, PRISMS-PF). If implementing yourself, periodically repartition the mesh using space-filling curves or graph partitioning.
Performance Expectations: What Can You Realistically Gain?
Based on published benchmarks:
- PFHub Benchmark 3 (dendrite growth): MOOSE and PRISMS-PF with AMR achieved the fastest runtimes, using orders of magnitude fewer cells than uniform meshes.
- General CFD/phase-field: AMR typically reduces cell counts by 60–80% and wall time by 40–70% vs. uniform meshes at equivalent accuracy.
- Overhead: AMR adds 5–15% runtime overhead for mesh management and interpolation; this is amortized by the reduced cell count.
But note: These numbers come from codes that have AMR deeply integrated. A kludgy FiPy+Gmsh loop would likely be slower than a static mesh due to repeated mesh generation and interpolation costs.
Step-by-Step: What to Do Instead
If you’re blocked by FiPy’s AMR limitation, here’s a practical action plan:
Step 1: Optimize Your Current Static Mesh
- Perform a mesh convergence study: Find the coarsest mesh that gives acceptable results. Many users over-refine.
- Use graded meshes: In Gmsh, define a `MeshSize` field that varies with distance from your region of interest.
- Enable efficient solvers: Use `ScipyKrylov` (GMRES) with `ilu` preconditioning instead of default PETSc options if your problem is moderate-sized.
- Consider time stepping: Larger time steps with implicit solvers may compensate for coarser meshes.
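The convergence study in Step 1 is mechanical to script. The sketch below uses a generic 1D finite-difference Poisson problem rather than FiPy (to stay self-contained), but the loop—halve the spacing, compare the error—is exactly what you would run on your FiPy model.

```python
import numpy as np

def max_error(n):
    """Solve -u'' = pi^2 sin(pi x) on [0, 1] with u(0) = u(1) = 0 using n cells
    of a standard second-order finite-difference scheme; return the max error
    against the exact solution u = sin(pi x)."""
    h = 1.0 / n
    x = np.linspace(h, 1.0 - h, n - 1)                    # interior nodes
    A = (np.diag(np.full(n - 1, 2.0))
         + np.diag(np.full(n - 2, -1.0), 1)
         + np.diag(np.full(n - 2, -1.0), -1)) / h**2      # dense tridiagonal
    u = np.linalg.solve(A, np.pi**2 * np.sin(np.pi * x))
    return np.abs(u - np.sin(np.pi * x)).max()

for n in (16, 32, 64):
    print(n, max_error(n))     # error drops roughly 4x per halving of h
```

When successive refinements change your quantity of interest by less than your accuracy tolerance, the coarser of the two meshes is good enough.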
Step 2: Evaluate Alternative Codes
If static meshes still don’t cut it:
- MOOSE: https://mooseframework.inl.gov/ – Comprehensive multiphysics with AMR, steeper learning curve but powerful.
- PRISMS-PF: https://github.com/prisms-center/phaseField – Phase-field focused, built on deal.II, excellent AMR.
- OpenPhase: https://github.com/openphase/openphase – Specifically for microstructure.
Run a small benchmark (e.g., a 2D spinodal decomposition) to gauge performance and porting effort.
Step 3: If You Must Stick with FiPy and Need AMR
Re-evaluate your problem formulation:
- Can you reduce domain size? Smaller domains need less refinement.
- Can you use symmetry? Quarter-domain models cut cell counts dramatically.
- Can you accept less accurate results? Sometimes qualitative behavior is sufficient for early-stage research.
If none of these work, you’ve hit FiPy’s ceiling for this problem. Consider contributing to FiPy AMR development (see issue #618) or switching tools.
Conclusion and Recommendations
Adaptive mesh refinement is a powerful technique for phase-field simulations with localized interfaces, offering substantial performance gains when implemented correctly. However, FiPy does not support AMR, and the architectural gaps mean native support is unlikely in the near term.
Our recommendations:
- For most FiPy users: Optimize your static mesh with graded sizing and efficient solvers. This solves 80% of performance issues without leaving FiPy’s comfortable Python environment.
- For new projects with stringent AMR needs: Choose MOOSE or PRISMS-PF. The time invested in learning these frameworks pays off in scalability and access to advanced features like AMR, nonlinear solvers, and multiphysics coupling.
- For researchers pushing phase-field boundaries: Consider contributing to FiPy’s development or maintaining a fork with AMR support—but be prepared for a multi-year commitment.
Remember: The best simulation code is the one you can use effectively. A “perfect” AMR implementation you can’t get working is worse than a “limited” static mesh that produces trustworthy results on schedule.
Related Guides
- Using FiPy for Phase-Field Modeling
- Understanding Phase Field Models in Materials Science
- Solving Diffusion Equations with FiPy
- Boundary Conditions: Theory and Implementation in FiPy
- Managing Large-Scale PDE Problems: Strategies and HPC Case Studies
- Reading and Understanding FiPy Documentation
Sources and Further Reading
- FiPy GitHub Issue #618: Adaptive Mesh Refinement with FiPy?
- NIST Phase Field Recommended Practices Guide: Numerical Implementation
- Kuo et al. (2021). “An analysis of the performance enhancement with adaptive mesh refinement.” Computers & Mathematics with Applications.
- PFHub Benchmarks: https://pages.nist.gov/pfhub/benchmarks/
- MOOSE Framework: https://mooseframework.inl.gov/
- PRISMS-PF: https://github.com/prisms-center/phaseField