MODFLOW 6 API Integration — Example 1b (Modern Update)¶
Modern Update: This example replaces the classic file-based Example 1b from the original mod-PATH3DU distribution with a fully API-driven workflow. No binary file I/O required.
What Makes This Modern¶
The original Example 1b relied on a chain of file-based operations: write MF6 input files, run the MF6 executable, read binary .HDS/.CBB outputs, then invoke the C++ particle tracker. This update eliminates every intermediate file by coupling three Python APIs directly in memory.
| Component | API | What It Replaces |
|---|---|---|
| MODFLOW 6 | modflowapi (shared library) | Reading .HDS, .CBB binary files |
| MP3DU-RS | mp3du Python bindings | C++ executable with file-based I/O |
| FloPy | flopy | Manual text file editing |
Key Improvements Over the Original¶
| Original Approach | Modern API Approach |
|---|---|
| Read legacy boundary-condition files to define BCs | Use exact CHD cell IDs from the original Example 1b setup |
| Run MF6 executable → read .HDS/.CBB binary | Solve via modflowapi → extract heads/flows from memory |
| Invoke C++ mp3du executable | Call mp3du.run_simulation() directly from Python |
| Multiple disk I/O operations | Zero intermediate files |
| Batch processing only | Real-time coupling possible |
Output¶
The plot shows the Voronoi grid, MF6 head contours, constant head boundaries (blue), the pumping well (red), particle start locations, and a comparison between mp3du-rs pathlines (solid) and legacy C++ pathlines (dashed).
Model Description¶
This example uses the same conceptual model as the original Example 1b:
- Grid: Voronoi DISV (~400 cells, single layer)
- Domain: 500 m × 100 m × 50 m
- Hydraulic conductivity: K = 10 m/d, K33 = 1 m/d
- Porosity: 0.30
- Boundary conditions:
- Left boundary: Constant head = 50 m
- Right boundary: Constant head = 45 m
- One pumping well: Q = −50 m³/d (cell 361)
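As a quick plausibility check on these parameters (not part of the original example), Darcy's law gives the regional through-flow driven by the two constant-head boundaries:

```python
# Back-of-envelope Darcy estimate of the regional through-flow (no well):
K = 10.0                            # horizontal hydraulic conductivity, m/d
gradient = (50.0 - 45.0) / 500.0    # head drop across the 500 m domain
area = 100.0 * 50.0                 # cross-section: 100 m wide x 50 m thick, m^2
Q = K * gradient * area             # Darcy: Q = K * i * A
print(Q)  # 500.0 m^3/d
```

At 50 m³/d, the well therefore intercepts only about a tenth of the regional flow, so most pathlines pass the well and exit at the downgradient boundary.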
Key Concepts for LLMs and Developers¶
1. FLOW-JA-FACE Sign Convention¶
MODFLOW 6 FLOWJA uses positive = INTO cell. The mp3du-rs API also uses positive = INTO cell. Pass the FLOWJA array directly to both hydrate_cell_flows() and hydrate_waterloo_inputs() without any negation.
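To make the convention concrete, here is a minimal, self-contained sketch of reading a FLOWJA entry for a given cell/neighbor pair. The two-cell CSR arrays are synthetic (not taken from the example model); real values come from GWF/CON/IA, GWF/CON/JA, and GWF/FLOWJA, converted to 0-based indexing:

```python
import numpy as np

# Toy 2-cell system: 1.5 m^3/d flows from cell 0 into cell 1.
ia = np.array([0, 2, 4])        # 0-based CSR row pointer (diagonal included)
ja = np.array([0, 1, 1, 0])     # row 0 -> {0, 1}; row 1 -> {1, 0}
flowja = np.array([0.0, -1.5, 0.0, 1.5])  # positive = INTO the owning cell

def flow_into(cell, nbr):
    """Flow rate into `cell` across its face with `nbr` (MF6 sign convention)."""
    for pos in range(ia[cell], ia[cell + 1]):
        if ja[pos] == nbr:
            return flowja[pos]
    raise KeyError((cell, nbr))

print(flow_into(1, 0))  # 1.5  (cell 1 gains water from cell 0)
print(flow_into(0, 1))  # -1.5 (the same face, seen from cell 0, loses water)
```

Because mp3du-rs uses the same positive-into-cell convention, these values are passed through unchanged.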
2. Extracting Boundary Flows via API¶
When reading boundary-condition flows (such as CHD) via the MF6 API, use SIMVALS, which holds the computed flow rates. The RHS array holds the right-hand side of the matrix equation, not the flow itself.
# Correct: use SIMVALS after mf6.update()
simvals = mf6.get_value("GWF/CHD_0/SIMVALS").copy()
# Wrong: RHS is not the flow rate
# rhs = mf6.get_value("GWF/CHD_0/RHS").copy()
3. IFACE-Based Capture for CHD Cells¶
Do not mark CHD cells as domain boundaries (is_domain_boundary_arr = True). Particles entering a domain-boundary cell are immediately terminated with CapturedAtModelEdge. Instead, use IFACE=2 for lateral CHD flow, which lets particles travel through the cell and captures them only when they exit through the appropriate face.
# CHD cells: IFACE=2 (lateral boundary capture)
bc_iface_list.append(2)
# Well cells: IFACE=0 (point sink capture)
bc_iface_list.append(0)
4. Exact Boundary Mapping from the Original Example¶
The updated create_model.py does not use heuristic boundary detection. Instead, it preserves the original Example 1b boundary-condition layout by hardcoding the exact left and right CHD cell IDs that correspond to the original Voronoi setup.
This keeps the modern API workflow self-contained while still reproducing the intended capture behavior and pathline geometry from the legacy example.
Workflow¶
create_model.py:
1. Parse mp3du.gsf geometry
2. Apply exact CHD/WEL mapping
3. Build MF6 DISV model via FloPy (no text file editing)
4. Write sim/ + grid_meta.json + cell_polygons.json

run_tracking.py:
1. Load grid_meta.json + cell_polygons.json
2. Initialize MF6 via modflowapi
3. mf6.update() → solve steady-state flow
4. Extract heads + FLOWJA from memory
5. Read WEL/CHD SIMVALS from memory
6. Build mp3du grid + hydrate flows
7. fit_waterloo() → velocity field
8. run_simulation() → particle paths
9. Save trajectories.csv + capture_summary.csv + particle_paths.svg
Requirements¶
The scripts depend on the Python packages flopy, modflowapi, mp3du, numpy, matplotlib, and pyshp (imported as shapefile). You also need the MODFLOW 6 shared library (libmf6.dll on Windows, libmf6.so on Linux): set the LIBMF6_PATH environment variable or place the library in %TEMP%/mf6api/.
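If you do not already have the shared library, one way to fetch it is FloPy's downloader (the same command the script's error message suggests). The target directory below is just an example; adjust the filename per platform:

```shell
# Download the MODFLOW 6 distribution (includes libmf6) with FloPy's helper,
# then point LIBMF6_PATH at the shared library:
python -m flopy.utils.get_modflow "$HOME/mf6api" --repo modflow6
export LIBMF6_PATH="$HOME/mf6api/libmf6.so"   # use libmf6.dll on Windows
```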
Usage¶
# Step 1: Create the MF6 model (writes to sim/ folder)
python create_model.py
# Step 2: Run flow + tracking via API
python run_tracking.py
Scripts¶
1. Model Creation (create_model.py)¶
Builds the MODFLOW 6 DISV model using flopy. Reads the Voronoi geometry from the GSF file and applies the original Example 1b boundary-condition mapping using exact CHD cell IDs plus the original pumping well location.
"""
Create a MODFLOW 6 DISV model for Example 1b (MF6 API version).
This script builds the MF6 model from the Voronoi geometry in mp3du.gsf,
replicating the original Example 1b conceptual model:
- Voronoi DISV grid (~400 cells, single layer)
- Domain: 500 m × 100 m × 50 m
- Top = 50 m, bottom = 0 m
- Constant-head boundaries: Left = 50 m, Right = 45 m
- One pumping well: Q = -50 m³/d (cell 361)
- Uniform HK = 10 m/d, VKA = 1 m/d, porosity = 0.30
Outputs written to the local sim/ folder:
- mfsim.nam and MF6 input files
- grid_meta.json (metadata for mp3du)
- cell_polygons.json (cell geometry for mp3du)
"""
from __future__ import annotations
import json
from pathlib import Path
import flopy
import numpy as np
SCRIPT_DIR = Path(__file__).resolve().parent
SIM_WS = SCRIPT_DIR / "sim"
GSF_PATH = SCRIPT_DIR / "mp3du.gsf"
# Model parameters matching original Example 1b
TOP = 50.0
BOT = 0.0
HK = 10.0
VKA = 1.0
POROSITY = 0.30
# Boundary conditions from original Example 1b (Voronoi.CHD / Voronoi.WEL)
LEFT_CHD_HEAD = 50.0
RIGHT_CHD_HEAD = 45.0
WELL_Q = -50.0
WELL_CELL = 360 # 0-based (361 in 1-based)
# Exact CHD cell IDs (0-based) from the original Voronoi.CHD file.
# These must match the GSF cell ordering — do NOT use heuristic selection.
LEFT_CHD_CELLS = [0, 1, 4, 5, 10, 16, 17, 18, 19, 20, 21, 28, 29, 30, 31, 32, 49]
RIGHT_CHD_CELLS = [643, 660, 684, 707, 727, 744, 760, 773, 786, 797, 807, 816, 824, 831, 836, 840, 842]
def signed_area_xy(vertices: list[tuple[float, float]]) -> float:
"""Compute signed area of a polygon (positive = CCW, negative = CW)."""
area = 0.0
n = len(vertices)
for i in range(n):
x0, y0 = vertices[i]
x1, y1 = vertices[(i + 1) % n]
area += x0 * y1 - x1 * y0
return 0.5 * area
def parse_gsf(path: Path):
"""Parse a GSF (Grid Specification File) to extract vertices and cells."""
with path.open() as f:
lines = f.readlines()
n_verts = int(lines[1].strip())
vertices_xy: dict[int, tuple[float, float]] = {}
for i in range(2, 2 + n_verts):
parts = lines[i].split()
vid = int(parts[0])
vertices_xy[vid] = (float(parts[1]), float(parts[2]))
cells = []
for i in range(2 + n_verts, len(lines)):
line = lines[i].strip()
if not line:
continue
parts = line.split()
cell_id = int(parts[0])
cx, cy = float(parts[1]), float(parts[2])
nv = int(parts[3])
vert_ids = [int(parts[4 + j]) for j in range(nv)]
neighbor_ids = [int(parts[4 + nv + j]) for j in range(nv)]
verts = [vertices_xy[vid] for vid in vert_ids]
# MF6 DISV expects clockwise vertex order in CELL2D.
if signed_area_xy(verts) > 0.0:
vert_ids = list(reversed(vert_ids))
neighbor_ids = list(reversed(neighbor_ids))
verts = list(reversed(verts))
cells.append(
{
"id": cell_id,
"cx": cx,
"cy": cy,
"vert_ids": vert_ids,
"neighbor_ids": neighbor_ids,
"verts": verts,
}
)
return vertices_xy, cells
def build_disv_geometry(vertices_xy, cells):
"""Convert GSF geometry to MF6 DISV format."""
vertex_ids = sorted(vertices_xy)
vid_to_zero = {vid: i for i, vid in enumerate(vertex_ids)}
vertices = [(vid_to_zero[vid], xy[0], xy[1]) for vid, xy in sorted(vertices_xy.items())]
cell2d = []
polygons = []
centers_xy = []
for icell2d, cell in enumerate(cells):
iverts = [vid_to_zero[vid] for vid in cell["vert_ids"]]
cell2d.append((icell2d, cell["cx"], cell["cy"], len(iverts), *iverts))
polygons.append([[float(x), float(y)] for x, y in cell["verts"]])
centers_xy.append([float(cell["cx"]), float(cell["cy"])])
return vertices, cell2d, polygons, centers_xy
def build_model():
"""Build and write the MF6 DISV model."""
SIM_WS.mkdir(exist_ok=True)
vertices_xy, cells = parse_gsf(GSF_PATH)
vertices, cell2d, polygons, centers_xy_list = build_disv_geometry(vertices_xy, cells)
ncpl = len(cells)
# Use exact CHD cell IDs from the original Voronoi.CHD file
left_chd_cells = list(LEFT_CHD_CELLS)
right_chd_cells = list(RIGHT_CHD_CELLS)
# Create MF6 simulation
sim = flopy.mf6.MFSimulation(
sim_name="example1b_mf6",
sim_ws=str(SIM_WS),
exe_name="mf6",
)
flopy.mf6.ModflowTdis(sim, nper=1, perioddata=[(1.0, 1, 1.0)], time_units="days")
flopy.mf6.ModflowIms(
sim,
complexity="SIMPLE",
outer_maximum=200,
inner_maximum=200,
outer_dvclose=1.0e-8,
inner_dvclose=1.0e-10,
linear_acceleration="BICGSTAB",
)
gwf = flopy.mf6.ModflowGwf(sim, modelname="gwf", save_flows=True)
flopy.mf6.ModflowGwfdisv(
gwf,
nlay=1,
ncpl=ncpl,
nvert=len(vertices),
top=np.full(ncpl, TOP),
botm=np.full((1, ncpl), BOT),
vertices=vertices,
cell2d=cell2d,
length_units="meters",
)
flopy.mf6.ModflowGwfic(gwf, strt=np.full(ncpl, TOP))
flopy.mf6.ModflowGwfnpf(
gwf,
icelltype=np.zeros((1, ncpl), dtype=int),
k=np.full((1, ncpl), HK),
k33=np.full((1, ncpl), VKA),
save_specific_discharge=True,
)
# CHD package - constant head boundaries
chd_spd = []
for node0 in left_chd_cells:
chd_spd.append(((0, node0), LEFT_CHD_HEAD))
for node0 in right_chd_cells:
chd_spd.append(((0, node0), RIGHT_CHD_HEAD))
flopy.mf6.ModflowGwfchd(gwf, stress_period_data={0: chd_spd}, pname="CHD_0")
# WEL package - pumping well
wel_spd = [((0, WELL_CELL), WELL_Q)]
flopy.mf6.ModflowGwfwel(gwf, stress_period_data={0: wel_spd}, pname="WEL_0")
flopy.mf6.ModflowGwfoc(gwf)
sim.write_simulation()
# Write metadata for mp3du
well_xy = [float(cells[WELL_CELL]["cx"]), float(cells[WELL_CELL]["cy"])]
meta = {
"grid_type": "DISV",
"nlay": 1,
"ncpl": ncpl,
"layer_top": [TOP],
"layer_bot": [BOT],
"hk": [HK],
"vka": [VKA],
"porosity": [POROSITY],
"well_q": WELL_Q,
"well_cell_2d": WELL_CELL,
"well_xy": well_xy,
"left_chd_cells": left_chd_cells,
"right_chd_cells": right_chd_cells,
"chd_cells": left_chd_cells + right_chd_cells,
"left_chd_head": LEFT_CHD_HEAD,
"right_chd_head": RIGHT_CHD_HEAD,
}
with (SIM_WS / "grid_meta.json").open("w") as f:
json.dump(meta, f, indent=2)
poly_data = {
"ncpl": ncpl,
"centers_xy": centers_xy_list,
"polygons": polygons,
}
with (SIM_WS / "cell_polygons.json").open("w") as f:
json.dump(poly_data, f)
print(f"Created MF6 DISV model with {ncpl} cells")
print(f" Left CHD cells: {len(left_chd_cells)} (head = {LEFT_CHD_HEAD} m)")
print(f" Right CHD cells: {len(right_chd_cells)} (head = {RIGHT_CHD_HEAD} m)")
print(f" Well cell: {WELL_CELL} at ({well_xy[0]:.2f}, {well_xy[1]:.2f}), Q = {WELL_Q} m³/d")
print(f" Simulation dir: {SIM_WS}")
if __name__ == "__main__":
build_model()
2. Tracking Execution (run_tracking.py)¶
Runs the MODFLOW 6 simulation via the shared-library API, extracts heads and flows directly from memory, and runs mp3du-rs particle tracking. The script also writes a capture summary and generates an SVG comparison figure against the legacy C++ pathlines.
"""
Run MODFLOW 6 API + mp3du-rs particle tracking for Example 1b.
This script demonstrates the modern, all-API workflow for coupled groundwater
flow and particle tracking—no binary file I/O required.
Key Concepts:
-------------
1. **FLOW-JA-FACE Sign Convention**: MODFLOW 6 `FLOWJA` uses positive = INTO cell.
The mp3du-rs API also uses positive = INTO cell. Pass `FLOWJA` directly without
any negation.
2. **Extracting Boundary Flows via API**: Use `SIMVALS` (computed flow rates),
NOT `RHS` (matrix right-hand side) when reading CHD flows.
3. **IFACE-Based Capture**: CHD cells use IFACE=2 for lateral boundary capture.
This allows particles to exist within CHD cells and only captures them when
they exit through the appropriate face.
Workflow:
1. Load the DISV geometry written by create_model.py
2. Initialize MODFLOW 6 through the shared-library API
3. Read heads, FLOW-JA-FACE, and package rates directly from MF6 memory
4. Build mp3du flow structures and fit the Waterloo velocity field
5. Track particles with mp3du.run_simulation()
6. Save trajectories.csv, capture_summary.csv, and particle_paths.svg
"""
from __future__ import annotations
import json
import math
import os
import sys
from pathlib import Path
import matplotlib.pyplot as plt
import mp3du
import numpy as np
import shapefile
from matplotlib.collections import LineCollection, PatchCollection
from matplotlib.patches import Patch, Polygon as MplPolygon
from matplotlib.tri import LinearTriInterpolator, Triangulation
SCRIPT_DIR = Path(__file__).resolve().parent
SIM_WS = SCRIPT_DIR / "sim"
META_PATH = SIM_WS / "grid_meta.json"
POLY_PATH = SIM_WS / "cell_polygons.json"
PARTICLE_SHP = SCRIPT_DIR / "PartStart_Intersect.shp"
CPP_PATHLINE_SHP = SCRIPT_DIR / "voronoi_v2_Pathline.shp"
def find_libmf6():
"""Locate the MODFLOW 6 shared library."""
env = os.environ.get("LIBMF6_PATH")
if env and Path(env).exists():
return str(env)
candidate = Path(os.environ.get("TEMP", "/tmp")) / "mf6api" / "libmf6.dll"
if candidate.exists():
return str(candidate)
candidate_linux = Path(os.environ.get("TEMP", "/tmp")) / "mf6api" / "libmf6.so"
if candidate_linux.exists():
return str(candidate_linux)
raise FileNotFoundError(
"Cannot find libmf6. Set LIBMF6_PATH or run:\n"
" python -m flopy.utils.get_modflow <dir> --repo modflow6"
)
def load_grid_data():
"""Load grid metadata and cell polygons from JSON files."""
with META_PATH.open() as f:
meta = json.load(f)
with POLY_PATH.open() as f:
poly_data = json.load(f)
cell_polygons_2d = [np.array(p, dtype=float) for p in poly_data["polygons"]]
centers_2d = np.array(poly_data["centers_xy"], dtype=float)
return meta, poly_data, cell_polygons_2d, centers_2d
def signed_area_xy(poly):
"""Compute signed area of a polygon."""
area = 0.0
n = len(poly)
for i in range(n):
x0, y0 = poly[i]
x1, y1 = poly[(i + 1) % n]
area += x0 * y1 - x1 * y0
return 0.5 * area
def build_mp3du_grid(meta, cell_polygons_2d, centers_2d):
"""Build mp3du grid structures from MF6 geometry."""
cell_vertices_list = []
cell_centers_list = []
top = meta["layer_top"][0]
bot = meta["layer_bot"][0]
zmid = 0.5 * (top + bot)
for ic in range(meta["ncpl"]):
verts = [(float(v[0]), float(v[1])) for v in cell_polygons_2d[ic]]
if signed_area_xy(verts) > 0.0:
verts = list(reversed(verts))
cell_vertices_list.append(verts)
cell_centers_list.append((float(centers_2d[ic, 0]), float(centers_2d[ic, 1]), zmid))
return cell_vertices_list, cell_centers_list
def build_cell_properties(meta):
"""Build mp3du cell properties array."""
n_cells = meta["ncpl"] * meta["nlay"]
return mp3du.hydrate_cell_properties(
top=np.full(n_cells, meta["layer_top"][0], dtype=float),
bot=np.full(n_cells, meta["layer_bot"][0], dtype=float),
porosity=np.full(n_cells, meta["porosity"][0], dtype=float),
retardation=np.ones(n_cells, dtype=float),
hhk=np.full(n_cells, meta["hk"][0], dtype=float),
vhk=np.full(n_cells, meta["vka"][0], dtype=float),
disp_long=np.zeros(n_cells, dtype=float),
disp_trans_h=np.zeros(n_cells, dtype=float),
disp_trans_v=np.zeros(n_cells, dtype=float),
)
def precompute_geometry(meta, cell_polygons_2d):
"""Precompute geometric quantities for flow extraction."""
ncpl = meta["ncpl"]
nlay = meta["nlay"]
n_cells = ncpl * nlay
edge_lookup = {}
for ic, poly in enumerate(cell_polygons_2d):
nv = len(poly)
for j in range(nv):
p1 = tuple(np.round(poly[j], 10))
p2 = tuple(np.round(poly[(j + 1) % nv], 10))
key = tuple(sorted((p1, p2)))
edge_lookup.setdefault(key, []).append((ic, j))
neighbor_2d = [[] for _ in range(ncpl)]
for ic, poly in enumerate(cell_polygons_2d):
neighbor_2d[ic] = [-1] * len(poly)
for entries in edge_lookup.values():
if len(entries) == 2:
(c0, f0), (c1, f1) = entries
neighbor_2d[c0][f0] = c1
neighbor_2d[c1][f1] = c0
areas_3d = np.zeros(n_cells)
perimeters_3d = np.zeros(n_cells)
radii_3d = np.zeros(n_cells)
centers_xy_3d = np.zeros((n_cells, 2))
face_offsets = [0]
face_vx1 = []
face_vy1 = []
face_vx2 = []
face_vy2 = []
face_length = []
face_neighbor = []
noflow_mask = []
for ic, poly in enumerate(cell_polygons_2d):
verts = [(float(v[0]), float(v[1])) for v in poly]
if signed_area_xy(verts) > 0.0:
verts = list(reversed(verts))
nbrs = list(reversed(neighbor_2d[ic]))
else:
nbrs = list(neighbor_2d[ic])
area = 0.0
perimeter = 0.0
nv = len(verts)
for j in range(nv):
x0, y0 = verts[j]
x1, y1 = verts[(j + 1) % nv]
area += x0 * y1 - x1 * y0
fl = math.hypot(x1 - x0, y1 - y0)
perimeter += fl
face_vx1.append(x0)
face_vy1.append(y0)
face_vx2.append(x1)
face_vy2.append(y1)
face_length.append(fl)
face_neighbor.append(nbrs[j])
noflow_mask.append(nbrs[j] < 0)
areas_3d[ic] = abs(area) / 2.0
perimeters_3d[ic] = perimeter
radii_3d[ic] = math.sqrt(areas_3d[ic] / math.pi)
centers_xy_3d[ic] = [float(np.mean(poly[:, 0])), float(np.mean(poly[:, 1]))]
face_offsets.append(face_offsets[-1] + nv)
return {
"ncpl": ncpl,
"nlay": nlay,
"n_cells": n_cells,
"neighbor_2d": neighbor_2d,
"areas_3d": areas_3d,
"perimeters_3d": perimeters_3d,
"radii_3d": radii_3d,
"centers_xy_3d": centers_xy_3d,
"face_offsets": face_offsets,
"face_offset_arr": np.array(face_offsets, dtype=np.uint64),
"face_vx1": np.array(face_vx1, dtype=float),
"face_vy1": np.array(face_vy1, dtype=float),
"face_vx2": np.array(face_vx2, dtype=float),
"face_vy2": np.array(face_vy2, dtype=float),
"face_length": np.array(face_length, dtype=float),
"face_neighbor_arr": np.array(face_neighbor, dtype=np.int64),
"noflow_mask": np.array(noflow_mask, dtype=np.bool_),
}
def precompute_flow_mappings(geo, ia, ja):
"""Map MF6 FLOWJA indices to mp3du face indices."""
n_cells = geo["n_cells"]
nbr_to_japos = [None] * n_cells
for ci in range(n_cells):
lookup = {}
for pos in range(ia[ci], ia[ci + 1]):
nbr = ja[pos]
if nbr != ci:
lookup[nbr] = pos
nbr_to_japos[ci] = lookup
hface_idx = []
hface_japos = []
for ci in range(n_cells):
lookup = nbr_to_japos[ci]
start = geo["face_offsets"][ci]
stop = geo["face_offsets"][ci + 1]
for local_face, nbr in enumerate(geo["face_neighbor_arr"][start:stop]):
face_idx = start + local_face
hface_idx.append(face_idx)
hface_japos.append(lookup.get(int(nbr), -1) if nbr >= 0 else -1)
return {
"hface_idx": np.array(hface_idx, dtype=np.intp),
"hface_japos": np.array(hface_japos, dtype=np.intp),
}
def extract_steady_flows(
geo,
flow_map,
head_arr,
flowja,
q_well_arr,
has_well_arr,
q_chd_arr,
bc_cell_ids_arr,
bc_iface_arr,
bc_flow_arr,
bc_type_id_arr,
bc_type_names,
is_domain_boundary_arr,
):
"""Extract flows from MF6 and build mp3du flow structures."""
n_cells = geo["n_cells"]
face_flow_into_arr = np.zeros(len(geo["face_neighbor_arr"]), dtype=float)
interior_mask = flow_map["hface_japos"] >= 0
# MF6 FLOWJA positive = INTO owning cell. Keep the same positive-into-cell
# convention for both hydrate calls.
face_flow_into_arr[flow_map["hface_idx"][interior_mask]] = flowja[flow_map["hface_japos"][interior_mask]]
q_top_arr = np.zeros(n_cells, dtype=float)
q_bot_arr = np.zeros(n_cells, dtype=float)
q_vert_arr = q_top_arr - q_bot_arr
cell_flows = mp3du.hydrate_cell_flows(
head=head_arr,
water_table=head_arr.copy(),
q_top=q_top_arr,
q_bot=q_bot_arr,
q_vert=q_vert_arr,
q_well=q_well_arr,
q_other=q_chd_arr,
q_storage=np.zeros(n_cells, dtype=float),
has_well=has_well_arr,
face_offset=geo["face_offset_arr"],
face_flow=face_flow_into_arr,
face_neighbor=geo["face_neighbor_arr"],
bc_cell_ids=bc_cell_ids_arr,
bc_iface=bc_iface_arr,
bc_flow=bc_flow_arr,
bc_type_id=bc_type_id_arr,
bc_type_names=bc_type_names,
is_domain_boundary=is_domain_boundary_arr,
)
waterloo_inputs = mp3du.hydrate_waterloo_inputs(
centers_xy=geo["centers_xy_3d"],
radii=geo["radii_3d"],
perimeters=geo["perimeters_3d"],
areas=geo["areas_3d"],
q_vert=q_vert_arr,
q_well=q_well_arr,
q_other=q_chd_arr,
face_offset=geo["face_offset_arr"],
face_vx1=geo["face_vx1"],
face_vy1=geo["face_vy1"],
face_vx2=geo["face_vx2"],
face_vy2=geo["face_vy2"],
face_length=geo["face_length"],
face_flow=face_flow_into_arr,
noflow_mask=geo["noflow_mask"],
)
return cell_flows, waterloo_inputs
def load_particles():
"""Load particle starting locations from shapefile."""
sf = shapefile.Reader(str(PARTICLE_SHP))
particles = []
for i, rec in enumerate(sf.iterShapeRecords()):
attrs = rec.record.as_dict()
x, y = rec.shape.points[0]
cell_id = int(attrs["P3D_CellID"]) - 1
z = float(attrs.get("ZLOC", 0.5) or 0.5)
particles.append(mp3du.ParticleStart(id=i, x=x, y=y, z=z, cell_id=cell_id, initial_dt=0.1))
return particles
def save_outputs(results, meta, particles, cell_vertices_list, left_chd_cells, right_chd_cells, well_cells, head_arr):
"""Save trajectory data and create visualization."""
# Save trajectories
traj_path = SCRIPT_DIR / "trajectories.csv"
with traj_path.open("w") as f:
header_written = False
for r in results:
for rec in r.to_records():
if not header_written:
f.write("particle_id," + ",".join(rec.keys()) + "\n")
header_written = True
f.write(",".join([str(r.particle_id)] + [str(v) for v in rec.values()]) + "\n")
# Save capture summary
capture_summary = []
for r in results:
records = r.to_records()
last = records[-1] if records else {}
capture_summary.append(
{
"particle_id": r.particle_id,
"final_status": r.final_status,
"x": last.get("x"),
"y": last.get("y"),
"z": last.get("z"),
"cell_id": last.get("cell_id"),
"steps": len(records),
}
)
cap_path = SCRIPT_DIR / "capture_summary.csv"
with cap_path.open("w") as f:
cols = list(capture_summary[0].keys())
f.write(",".join(cols) + "\n")
for rec in capture_summary:
f.write(",".join(str(rec[c]) for c in cols) + "\n")
# ── Modern, clean visualization ──────────────────────────────────────────
plt.rcParams.update({
"font.family": "sans-serif",
"axes.spines.top": False,
"axes.spines.right": False,
})
# Figure sized for 5:1 domain aspect ratio with breathing room
fig, ax = plt.subplots(figsize=(13, 4.5))
fig.patch.set_facecolor("#f8f8f8")
ax.set_facecolor("#f8f8f8")
# ── Head contours on a clipped regular grid (no edge artifacts) ──────────
centers_xy = np.array([[c[0], c[1]] for c in meta["centers_xy"]], dtype=float)
triang = Triangulation(centers_xy[:, 0], centers_xy[:, 1])
interp = LinearTriInterpolator(triang, head_arr)
# Regular grid strictly inside the domain
xi = np.linspace(0, 500, 400)
yi = np.linspace(0, 100, 160)
Xi, Yi = np.meshgrid(xi, yi)
Zi = interp(Xi, Yi)
n_levels = 10
contour_levels = np.linspace(float(np.nanmin(Zi)), float(np.nanmax(Zi)), n_levels + 1)
tric = ax.contour(
Xi, Yi, Zi,
levels=contour_levels,
colors="#4a6fa5",
linewidths=0.75,
alpha=0.7,
zorder=2,
)
# Sparse, non-overlapping contour labels
ax.clabel(
tric,
inline=True,
fontsize=6.5,
fmt="%.1f m",
inline_spacing=4,
use_clabeltext=True,
)
# ── Voronoi grid edges (very subtle) ─────────────────────────────────────
grid_segments = []
for verts in cell_vertices_list:
n = len(verts)
for j in range(n):
grid_segments.append([verts[j], verts[(j + 1) % n]])
grid_lc = LineCollection(
grid_segments, linewidths=0.15, colors="#cccccc", zorder=1
)
ax.add_collection(grid_lc)
# ── Boundary condition cells ──────────────────────────────────────────────
legend_handles = []
# ── Left CHD cells ────────────────────────────────────────────────────────
left_patches = [MplPolygon(cell_vertices_list[ci], closed=True) for ci in sorted(left_chd_cells)]
if left_patches:
pc_left = PatchCollection(
left_patches, facecolor="#5b9bd5", edgecolor="none", alpha=0.35, zorder=3
)
ax.add_collection(pc_left)
legend_handles.append(Patch(facecolor="#5b9bd5", alpha=0.5, label="Constant Head"))
# ── Right CHD cells ───────────────────────────────────────────────────────
right_patches = [MplPolygon(cell_vertices_list[ci], closed=True) for ci in sorted(right_chd_cells)]
if right_patches:
pc_right = PatchCollection(
right_patches, facecolor="#5b9bd5", edgecolor="none", alpha=0.35, zorder=3
)
ax.add_collection(pc_right)
# ── CHD head value annotations ────────────────────────────────────────────
left_head = float(meta.get("left_chd_head", 50.0))
right_head = float(meta.get("right_chd_head", 45.0))
if left_chd_cells:
verts = np.array(cell_vertices_list[sorted(left_chd_cells)[len(left_chd_cells) // 2]], dtype=float)
cx, cy = float(np.mean(verts[:, 0])), float(np.mean(verts[:, 1]))
ax.text(cx + 3, cy - 10, f"Left CHD = {left_head:.1f} m",
color="navy", fontsize=7.5, fontweight="bold", zorder=7,
bbox=dict(boxstyle="round,pad=0.2", fc="white", ec="none", alpha=0.7))
if right_chd_cells:
verts = np.array(cell_vertices_list[sorted(right_chd_cells)[len(right_chd_cells) // 2]], dtype=float)
cx, cy = float(np.mean(verts[:, 0])), float(np.mean(verts[:, 1]))
ax.text(cx - 45, cy - 10, f"Right CHD = {right_head:.1f} m",
color="navy", fontsize=7.5, fontweight="bold", zorder=7,
bbox=dict(boxstyle="round,pad=0.2", fc="white", ec="none", alpha=0.7))
# ── Well cells ────────────────────────────────────────────────────────────
well_patches = [MplPolygon(cell_vertices_list[ci], closed=True) for ci in sorted(well_cells)]
if well_patches:
pc_well = PatchCollection(
well_patches, facecolor="#e05c5c", edgecolor="none", alpha=0.55, zorder=3
)
ax.add_collection(pc_well)
legend_handles.append(Patch(facecolor="#e05c5c", alpha=0.6, label="Well"))
# ── Particle start locations ──────────────────────────────────────────────
start_xs = [p.x for p in particles]
start_ys = [p.y for p in particles]
h_start = ax.scatter(
start_xs, start_ys,
s=22, facecolors="none", edgecolors="#2ca02c",
linewidths=1.2, zorder=6, label="Start locations",
)
legend_handles.append(h_start)
# ── mp3du-rs pathlines — per-particle, solid lines ────────────────────────
colors = plt.cm.tab10.colors
for i, r in enumerate(results):
records = r.to_records()
if not records:
continue
xs = [rec["x"] for rec in records]
ys = [rec["y"] for rec in records]
h, = ax.plot(
xs, ys,
color=colors[i % len(colors)],
linewidth=1.1, alpha=0.9,
zorder=5,
label=f"mp3du-rs P{r.particle_id}",
)
legend_handles.append(h)
# ── C++ legacy pathlines for comparison — per-particle, dashed ───────────
if CPP_PATHLINE_SHP.exists():
cpp_sf = shapefile.Reader(str(CPP_PATHLINE_SHP))
for i, sr in enumerate(cpp_sf.iterShapeRecords()):
pts = sr.shape.points
attrs = sr.record.as_dict()
xs = [p[0] for p in pts]
ys = [p[1] for p in pts]
h, = ax.plot(
xs, ys,
color=colors[i % len(colors)],
linewidth=1.4, linestyle="--", alpha=0.6,
zorder=4,
label=f"C++ P{attrs.get('PID', i)}",
)
legend_handles.append(h)
# ── Axes, labels, legend ──────────────────────────────────────────────────
ax.set_xlim(0, 500)
ax.set_ylim(0, 100)
ax.set_aspect("equal")
ax.set_xlabel("x (m)", fontsize=9, labelpad=4)
ax.set_ylabel("y (m)", fontsize=9, labelpad=4)
ax.set_title(
"Example 1b — MF6 API · mp3du-rs vs C++ Particle Paths (MF6 Head Contours)",
fontsize=10, fontweight="bold", pad=8,
)
ax.tick_params(labelsize=8)
leg = ax.legend(
handles=legend_handles,
loc="upper left",
fontsize=7,
ncol=3,
framealpha=0.88,
edgecolor="#cccccc",
borderpad=0.6,
labelspacing=0.35,
columnspacing=1.0,
)
leg.get_frame().set_linewidth(0.5)
fig.tight_layout(pad=0.8)
fig.savefig(SCRIPT_DIR / "particle_paths.svg", bbox_inches="tight")
print(f"Saved plot to {SCRIPT_DIR / 'particle_paths.svg'}")
def main():
"""Main workflow: MF6 API solve → mp3du tracking."""
print(f"Running from {SCRIPT_DIR}")
if not (SIM_WS / "mfsim.nam").exists():
print("ERROR: Model not found. Run create_model.py first.")
sys.exit(1)
print("Locating libmf6...")
libmf6_path = find_libmf6()
print(f"Using libmf6: {libmf6_path}")
from modflowapi import ModflowApi
print("Loading grid data...")
meta, _, cell_polygons_2d, centers_2d = load_grid_data()
cell_vertices_list, cell_centers_list = build_mp3du_grid(meta, cell_polygons_2d, centers_2d)
geo = precompute_geometry(meta, cell_polygons_2d)
print("Initializing MF6 via API...")
mf6 = ModflowApi(libmf6_path, working_directory=str(SIM_WS))
mf6.initialize()
try:
print("Reading IA/JA connectivity...")
ia = mf6.get_value("GWF/CON/IA").copy() - 1
ja = mf6.get_value("GWF/CON/JA").copy() - 1
flow_map = precompute_flow_mappings(geo, ia, ja)
print("Solving flow (one time step)...")
mf6.update()
print("Extracting heads and FLOWJA from memory...")
head_arr = mf6.get_value("GWF/X").copy().astype(np.float64)
flowja = mf6.get_value("GWF/FLOWJA").copy().astype(np.float64)
n_cells = geo["n_cells"]
q_well_arr = np.zeros(n_cells, dtype=float)
has_well_arr = np.zeros(n_cells, dtype=np.bool_)
q_chd_arr = np.zeros(n_cells, dtype=float)
is_domain_boundary_arr = np.zeros(n_cells, dtype=np.bool_)
bc_type_names = ["CONSTANT HEAD", "WELLS"]
bc_cell_ids_list = []
bc_iface_list = []
bc_flow_list = []
bc_type_id_list = []
print("Reading WEL package data from API...")
nbound_wel = int(mf6.get_value("GWF/WEL_0/NBOUND")[0])
if nbound_wel > 0:
nodelist = mf6.get_value("GWF/WEL_0/NODELIST").copy()
q_vals = mf6.get_value("GWF/WEL_0/Q").copy()
for i in range(nbound_wel):
node = int(nodelist[i]) - 1
q = float(q_vals[i])
q_well_arr[node] += q
has_well_arr[node] = True
bc_cell_ids_list.append(node)
bc_iface_list.append(0)
bc_flow_list.append(q)
bc_type_id_list.append(1)
print("Reading CHD package data from API...")
nbound_chd = int(mf6.get_value("GWF/CHD_0/NBOUND")[0])
if nbound_chd > 0:
nodelist = mf6.get_value("GWF/CHD_0/NODELIST").copy()
simvals = mf6.get_value("GWF/CHD_0/SIMVALS").copy()
for i in range(nbound_chd):
node = int(nodelist[i]) - 1
q = float(simvals[i])
q_chd_arr[node] += q
bc_cell_ids_list.append(node)
bc_iface_list.append(2)
bc_flow_list.append(q)
bc_type_id_list.append(0)
bc_cell_ids_arr = np.array(bc_cell_ids_list, dtype=np.int64)
bc_iface_arr = np.array(bc_iface_list, dtype=np.int32)
bc_flow_arr = np.array(bc_flow_list, dtype=np.float64)
bc_type_id_arr = np.array(bc_type_id_list, dtype=np.int32)
print("Building mp3du grid and properties...")
grid = mp3du.build_grid(cell_vertices_list, cell_centers_list)
cell_props = build_cell_properties(meta)
cell_flows, waterloo_inputs = extract_steady_flows(
geo,
flow_map,
head_arr,
flowja,
q_well_arr,
has_well_arr,
q_chd_arr,
bc_cell_ids_arr,
bc_iface_arr,
bc_flow_arr,
bc_type_id_arr,
bc_type_names,
is_domain_boundary_arr,
)
print("Fitting Waterloo velocity field...")
waterloo_cfg = mp3du.WaterlooConfig(order_of_approx=35, n_control_points=122)
field = mp3du.fit_waterloo(waterloo_cfg, grid, waterloo_inputs, cell_props, cell_flows)
config_dict = {
"velocity_method": "Waterloo",
"solver": "DormandPrince",
"adaptive": {
"tolerance": 1e-6,
"safety": 0.9,
"alpha": 0.2,
"min_scale": 0.2,
"max_scale": 5.0,
"max_rejects": 10,
"min_dt": 1e-10,
"euler_dt": 1e-20,
},
"dispersion": {"method": "None"},
"retardation_enabled": False,
"capture": {
"max_time": 365250.0,
"max_steps": 1000000,
"stagnation_velocity": 1e-12,
"stagnation_limit": 100,
"capture_radius": 0.5,
},
"initial_dt": 0.1,
"max_dt": 100.0,
"direction": 1.0,
}
config = mp3du.SimulationConfig.from_json(json.dumps(config_dict))
particles = load_particles()
print(f"Running particle tracking for {len(particles)} particles...")
results = mp3du.run_simulation(config, field, particles, parallel=True)
meta["centers_xy"] = centers_2d.tolist()
save_outputs(
results, meta, particles, cell_vertices_list,
meta["left_chd_cells"],
meta["right_chd_cells"],
{meta["well_cell_2d"]},
head_arr,
)
print("Tracking workflow completed successfully.")
finally:
print("Finalizing MF6...")
mf6.finalize()
if __name__ == "__main__":
main()
Outputs¶
| Output | Description |
|---|---|
| trajectories.csv | Full particle trajectory data (x, y, z, time, cell_id per step) |
| capture_summary.csv | Final status and capture location for each particle |
| particle_paths.svg | Visualization with MF6 head contours, start points, and mp3du-rs vs legacy C++ pathlines |
Technical Notes¶
MF6 Memory Address Reference¶
| Variable | MF6 Address | Description |
|---|---|---|
| Heads | GWF/X | Hydraulic head per cell |
| Cell-to-cell flows | GWF/FLOWJA | Compressed sparse row flow array |
| Connectivity (row ptr) | GWF/CON/IA | CSR row pointer (1-based) |
| Connectivity (col idx) | GWF/CON/JA | CSR column indices (1-based) |
| CHD flow rates | GWF/CHD_0/SIMVALS | Computed BC flow rates (use this, not RHS) |
| CHD node list | GWF/CHD_0/NODELIST | 1-based cell indices for CHD cells |
| WEL pumping rates | GWF/WEL_0/Q | Pumping rates per well |
| WEL node list | GWF/WEL_0/NODELIST | 1-based cell indices for well cells |
Transient Extension¶
The same pattern extends to transient simulations. Instead of calling mf6.update() once, loop over stress periods and call mp3du.run_simulation() after each solve to track particles through time-varying flow fields.
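A minimal sketch of that loop, with a stub standing in for a real ModflowApi instance so the control flow is runnable here. track_step is a hypothetical callback; get_current_time() and get_end_time() are standard modflowapi/BMI methods:

```python
def track_transient(mf6, track_step):
    """Advance MF6 one time step per update(), tracking after each solve."""
    out = []
    while mf6.get_current_time() < mf6.get_end_time():
        mf6.update()                 # solve the next time step
        out.append(track_step(mf6))  # e.g. extract heads/FLOWJA, run mp3du tracking
    return out

# Stub with the same interface as modflowapi.ModflowApi, for demonstration only:
class StubMf6:
    def __init__(self, nper):
        self.t, self.nper = 0.0, float(nper)
    def get_current_time(self):
        return self.t
    def get_end_time(self):
        return self.nper
    def update(self):
        self.t += 1.0

print(track_transient(StubMf6(3), lambda m: m.get_current_time()))  # [1.0, 2.0, 3.0]
```

In the real workflow, track_step would re-extract GWF/FLOWJA and the package SIMVALS after each update() and re-hydrate the mp3du flow structures before tracking.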