
Conversation

@ChrisZYJ
Contributor

@ChrisZYJ ChrisZYJ commented Dec 13, 2025

User description

Description

Implements the correct hyperbolic divergence cleaning for 2D/3D MHD in place of the current Powell's method. It is fully working and validated (with results below).

It is based on an outdated MFC branch, so it needs to be rebased, cleaned up, and made GPU-ready. @danieljvickers might be interested in collaborating on this; otherwise, I will get this ready as time permits.

Formulation

(Formulation figures attached to the PR.)
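
As context for the figures, a minimal sketch of the mixed hyperbolic/parabolic GLM system of Dedner et al. (2002), written with this PR's parameter names as assumptions (hyper_cleaning_speed playing the role of the cleaning speed c_h, hyper_cleaning_tau the damping time scale τ):

  ∂B/∂t + ∇·(u⊗B - B⊗u) + ∇ψ = 0
  ∂ψ/∂t + c_h² ∇·B = -ψ/τ

Divergence errors are advected away at speed c_h and damped on the time scale τ; setting ψ ≡ 0 recovers the uncleaned induction equation.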

Type of change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Something else

Scope

  • This PR comprises a set of related changes with a common goal

How Has This Been Tested?

All the test cases are temporarily placed in the case_new folder.

  • Gaussian divergence pulse
(Result figures attached to the PR.)

  • Slanted Shock Tube
(Result figures attached to the PR.)

  • Orszag-Tang Vortex

(Figure: orszag_tang_all)

Orszag-Tang result at final time; comparison across WENO variants. Top row is p. Bottom row is ∇·B. The last column shows the last saved time step before crashing, for a simulation without hyperbolic cleaning. 3Z is a typo; it should be 5Z.


  • 2D Rotor Test
(Result figure attached to the PR.)

Checklist

  • I have added comments for the new code
  • I added Doxygen docstrings to the new code
  • I have made corresponding changes to the documentation (docs/)
  • I have added regression tests to the test suite so that people can verify in the future that the feature is behaving as expected
  • I have added example cases in examples/ that demonstrate my new feature performing as expected.
    They run to completion and demonstrate "interesting physics"
  • I ran ./mfc.sh format before committing my code
  • New and existing tests pass locally with my changes, including with GPU capability enabled (both NVIDIA hardware with NVHPC compilers and AMD hardware with CRAY compilers) and disabled
  • This PR does not introduce any repeated code (it follows the DRY principle)
  • I cannot think of a way to condense this code and reduce any introduced additional line count

If your code changes any code source files (anything in src/simulation)

To make sure the code is performing as expected on GPU devices, I have:

  • Checked that the code compiles using NVHPC compilers
  • Checked that the code compiles using CRAY compilers
  • Ran the code on either V100, A100, or H100 GPUs and ensured the new feature performed as expected (the GPU results match the CPU results)
  • Ran the code on MI200+ GPUs and ensure the new features performed as expected (the GPU results match the CPU results)
  • Enclosed the new feature via nvtx ranges so that they can be identified in profiles
  • Ran a Nsight Systems profile using ./mfc.sh run XXXX --gpu -t simulation --nsys, and have attached the output file (.nsys-rep) and plain text results to this PR
  • Ran a Rocprof Systems profile using ./mfc.sh run XXXX --gpu -t simulation --rsys --hip-trace, and have attached the output file and plain text results to this PR.
  • Ran my code using various numbers of different GPUs (1, 2, and 8, for example) in parallel and made sure that the results scale similarly to what happens if you run without the new code/feature

PR Type

Enhancement, Bug fix


Description

  • Replace Powell's method with hyperbolic (GLM) divergence cleaning for MHD

  • Add new state variable psi_idx for tracking magnetic divergence

  • Implement hyperbolic cleaning in Riemann solver flux calculations

  • Add three new MHD test cases: rotor, Gaussian pulse, tilted shock tube
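
As a usage illustration of the above, a hypothetical case-file excerpt (key names follow the case_dicts.py additions in this PR; the speed and damping values follow the updated Orszag-Tang example; all other parameters are omitted):

# Hypothetical case-dictionary excerpt; not a complete, runnable MFC case file.
case = {
    "mhd": "T",
    "hyper_cleaning": "T",
    "hyper_cleaning_speed": 2.5,
    "hyper_cleaning_tau": 0.004,
    # ... grid, time stepping, fluid, and patch parameters as in the example cases ...
}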


Diagram Walkthrough

flowchart LR
  A["Powell's Method"] -->|Replace| B["Hyperbolic Cleaning"]
  B --> C["psi State Variable"]
  C --> D["Riemann Solver Fluxes"]
  D --> E["MHD Test Cases"]
  B --> F["GPU Support"]

File Walkthrough

Relevant files
Enhancement
19 files
2dHardcodedIC.fpp
Add three new MHD hardcoded initial conditions                     
+99/-2   
m_global_parameters.fpp
Add hyperbolic cleaning parameters and psi index                 
+22/-5   
m_riemann_solvers.fpp
Implement hyperbolic cleaning in flux calculations             
+18/-4   
m_mpi_proxy.fpp
Add hyperbolic cleaning to MPI broadcast variables             
+5/-3     
m_global_parameters.fpp
Add hyperbolic cleaning support to post-processor               
+10/-0   
m_start_up.fpp
Initialize psi state variable for hyperbolic cleaning       
+14/-2   
m_global_parameters.fpp
Add hyperbolic cleaning parameters to pre-processor           
+8/-0     
m_rhs.fpp
Replace Powell RHS with hyperbolic cleaning damping term 
+19/-5   
m_time_steppers.fpp
Allocate and deallocate psi state variable arrays               
+12/-0   
m_start_up.fpp
Add hyperbolic cleaning to post-processor initialization 
+9/-1     
m_variables_conversion.fpp
Add psi conversion between primitive and conservative       
+3/-0     
m_mpi_proxy.fpp
Broadcast hyperbolic cleaning flag in pre-processor           
+1/-1     
m_mpi_proxy.fpp
Broadcast hyperbolic cleaning flag in post-processor         
+1/-1     
m_initial_condition.fpp
Initialize psi to zero in initial conditions                         
+7/-0     
m_data_output.fpp
Add psi variable to database output count                               
+3/-0     
case.py
Update Orszag-Tang case to use hyperbolic cleaning             
+7/-5     
case_validator.py
Update validation to check hyperbolic cleaning constraints
+5/-8     
case_dicts.py
Add hyperbolic cleaning parameters to case dictionary       
+3/-1     
case.py
Update magnetic vortex case to use new case ID                     
+1/-1     
Bug fix
1 files
m_start_up.fpp
Remove Powell's method, add hyperbolic cleaning imports   
+2/-6     
Tests
2 files
case.py
New 2D MHD rotor test case with hyperbolic cleaning           
+84/-0   
cases.py
Replace Powell test with hyperbolic cleaning test               
+1/-1     
Documentation
2 files
case.md
Document hyperbolic cleaning parameters and remove Powell
+13/-10 
references.md
Replace Powell reference with Dedner et al. reference       
+1/-1     
Additional files
7 files
m_mhd.fpp +0/-149 
golden-metadata.txt +185/-0 
golden.txt +20/-0   
golden-metadata.txt +193/-0 
golden.txt +20/-0   
golden-metadata.txt +185/-0 
golden.txt +20/-0   

Summary by CodeRabbit

Release Notes

  • New Features

    • Added hyperbolic divergence cleaning (hyper_cleaning) for MHD simulations with configurable speed and damping parameters.
    • Introduced new 2D MHD test cases: Rotor simulation and updated Orszag-Tang Vortex configuration.
    • Expanded hardcoded initial condition library with rotor, Gaussian divergence, blob, and tilted shock-tube patterns.
  • Deprecation

    • Removed Powell's method for MHD divergence correction.
  • Documentation

    • Updated parameter documentation to reflect hyperbolic cleaning parameters and compatibility constraints.


@coderabbitai
Contributor

coderabbitai bot commented Dec 13, 2025

📝 Walkthrough

Walkthrough

The PR replaces Powell divergence correction with hyperbolic (Dedner) divergence cleaning for MHD simulations. This involves removing the Powell module, introducing hyper_cleaning feature flags and parameters, extending the state vector with a psi component to track divergence, updating initialization/finalization paths, modifying Riemann solver wave speed constraints, and adjusting RHS computations to propagate cleaning via the psi variable.

Changes

Cohort / File(s) Summary
Global Parameters and Flags
src/simulation/m_global_parameters.fpp, src/pre_process/m_global_parameters.fpp, src/post_process/m_global_parameters.fpp
Added hyper_cleaning (logical), psi_idx (integer), hyper_cleaning_speed (real), and hyper_cleaning_tau (real) as new public entities; extended sys_size allocation logic to accommodate psi state when hyper_cleaning is enabled; removed powell from public declarations and GPU reflections.
Start-up and Input Parsing
src/simulation/m_start_up.fpp, src/pre_process/m_start_up.fpp, src/post_process/m_start_up.fpp
Removed Powell module initialization/finalization calls and powell from namelists; added hyper_cleaning, hyper_cleaning_speed, hyper_cleaning_tau to user input configuration; removed bc_y from pre_process namelist.
State Management and Initialization
src/pre_process/m_initial_condition.fpp, src/simulation/m_time_steppers.fpp
Added conditional allocation and zero-initialization of q_cons_vf(psi_idx) and q_prim_vf(psi_idx) when hyper_cleaning is enabled.
Variable Conversion
src/common/m_variables_conversion.fpp
Added bidirectional psi variable mapping between conservative and primitive states when hyper_cleaning is active.
RHS and Riemann Solver Core
src/simulation/m_rhs.fpp, src/simulation/m_riemann_solvers.fpp
Removed m_mhd module dependency; introduced hyper_cleaning-driven psi RHS update (-q_prim_vf(psi_idx)/hyper_cleaning_tau); constrained fast magnetosonic wave speeds with ±hyper_cleaning_speed and added psi flux propagation in Riemann solver flux updates.
MPI Communication
src/simulation/m_mpi_proxy.fpp, src/pre_process/m_mpi_proxy.fpp, src/post_process/m_mpi_proxy.fpp
Replaced powell with hyper_cleaning in broadcast lists; added hyper_cleaning_speed and hyper_cleaning_tau to broadcast parameters.
Data Output
src/post_process/m_data_output.fpp, src/post_process/m_start_up.fpp
Incremented binary output variable count when hyper_cleaning is enabled; added conditional psi variable output to formatted database.
Deleted Module
src/simulation/m_mhd.fpp
Entire Powell-based MHD source term module removed (149 lines), including s_initialize_mhd_powell_module, s_compute_mhd_powell_rhs, and s_finalize_mhd_powell_module.
Configuration and Case Definition
toolchain/mfc/run/case_dicts.py, toolchain/mfc/case_validator.py, toolchain/mfc/test/cases.py
Added hyper_cleaning (COMMON), hyper_cleaning_speed, hyper_cleaning_tau (SIMULATION) parameter entries; removed powell; updated validation to require mhd flag and disallow 1D when hyper_cleaning enabled; renamed 2D MHD test case from Powell to hyper_cleaning.
Examples
examples/2D_mhd_rotor/case.py, examples/2D_orszag_tang_hyper_cleaning/case.py, examples/2D_mhd_magnetic_vortex/case.py
New rotor example with hyper_cleaning enabled; updated Orszag-Tang example with hyper_cleaning parameters (speed=2.5, tau=0.004), WENO order 5, and adjusted time stepping; updated magnetic vortex case parameter from hcid 252 to 253.
Documentation
docs/documentation/case.md, docs/documentation/references.md
Replaced Powell parameter documentation with hyper_cleaning section; updated reference from Powell94 to Dedner02; removed fd_order and Powell compatibility notes.
Test Metadata
tests/B4DC99F9/golden-metadata.txt, tests/EC8F87D9/golden-metadata.txt, tests/F057F8E6/golden-metadata.txt
Added comprehensive golden metadata files capturing test configuration, CMake flags, compiler details, and hardware specification for three test runs.
Hardcoded IC Patterns
src/common/include/2dHardcodedIC.fpp
Repurposed case 252 as 2D MHD Rotor (rotor core with r ≤ 0.1, ambient outside, rotational momentum); reinterpreted case 253 as MHD Smooth Magnetic Vortex; added cases 260–262 for Gaussian divergence pulse, blob, and tilted shock-tube with associated vector variable declarations.

Sequence Diagram

sequenceDiagram
    participant TS as Time Stepper
    participant RS as Riemann Solver
    participant RHS as RHS Computation
    participant State as State Vector
    
    Note over TS,State: Hyper-Cleaning Flow
    TS->>RS: Compute fluxes<br/>(constrain c_fast by ±hyper_cleaning_speed)
    RS->>RS: Update flux_rs for psi<br/>(propagate B divergence)
    RS->>RHS: Return constrained fluxes
    RHS->>State: Compute psi RHS<br/>= -psi/hyper_cleaning_tau
    RHS->>State: Update rhs_vf(psi_idx)<br/>with divergence & relaxation
    State->>State: Advance q_cons_vf(psi_idx)<br/>via standard time integration
    TS->>State: q_cons_vf(psi_idx) moderates<br/>∇·B via source term

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

The changes span multiple critical paths (Riemann solver, RHS, state initialization) across 25+ files, introducing a new physical feature with interdependent logic (hyper_cleaning parameters, psi state variable, wave speed constraints, flux propagation). While the changes are cohesive and scoped to hyperbolic cleaning, they require understanding of MHD discretization, state vector extension, and numerical wave propagation.

Possibly related PRs

  • CI test #1054: Modifies src/simulation/m_riemann_solvers.fpp flux indexing and wave-speed computation logic, overlapping with hyperbolic cleaning wave speed constraint changes in this PR.
  • Make fluid variables uniform through the code #1063: Modifies src/common/m_variables_conversion.fpp conservative↔primitive conversion routines, directly overlapping with psi variable conversion additions in this PR.

Suggested labels

size:XXL

Suggested reviewers

  • wilfonba

Poem

🐰 The Powell way fades into mist,
As Dedner's psi emerges to persist,
Wave speeds constrained, divergence tamed,
A cleaner vortex, beautifully framed!

🚥 Pre-merge checks | ✅ 2 passed | ❌ 1 failed
❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 50.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
  • Title check (✅ Passed): The title accurately and concisely describes the main change: replacing Powell's method with hyperbolic divergence cleaning for MHD.
  • Description check (✅ Passed): The PR description is largely complete with formulation diagrams, test results, and implementation details. However, most checklist items remain unchecked, indicating work in progress on documentation, GPU testing, and test integration.



@sbryngelson
Member

beautiful! @ChrisZYJ
minus the old branching!

Maybe something we can look at...

@danieljvickers
Member

Yeah, this is the hyperbolic solver I told you I was going to add. The one I put in this week is an outdated elliptic solver.

I was hoping to update this one as I have time and redo our 2D convergence test that I added previously.

@danieljvickers
Member

danieljvickers commented Jan 23, 2026

Hey, @ChrisZYJ, I think I almost have this merged with master. I wanted to ask about this variable rkck_adap_dt that you added to the code. From what I can see, it was never explicitly declared as a global, nor is it used anywhere in the code. Is this something that you intend to keep around? If so, can you point me to where it is supposed to be used?

This appears to work fine if I just remove it, but I wanted to see if there is a purpose to keeping it around.

@ChrisZYJ
Contributor Author

Hi @danieljvickers , thanks for the help with the PR!!

rkck_adap_dt is unrelated to this commit and has been present since at least v5.0.0. I can clean it up for the merge if you’d like.

@danieljvickers
Member

I don't mind finishing it up. It looks like it's just an issue casting double to complex. You've really done most of the heavy lifting already.


@ChrisZYJ
Contributor Author

Thanks! Please let me know if you need me to work on anything.

@danieljvickers
Member

MacOS failures are due to issues with updated gcc in the latest brew update. I implemented a fix on #1111, which currently is passing all tests. Once that gets merged, we can pull that in and resolve the failure.

@codecov

codecov bot commented Jan 24, 2026

Codecov Report

❌ Patch coverage is 48.97959% with 25 lines in your changes missing coverage. Please review.
✅ Project coverage is 44.14%. Comparing base (944aa2f) to head (9e04bd7).
⚠️ Report is 1 commits behind head on master.

Files with missing lines Patch % Lines
src/simulation/m_rhs.fpp 44.44% 4 Missing and 1 partial ⚠️
src/simulation/m_riemann_solvers.fpp 37.50% 4 Missing and 1 partial ⚠️
src/pre_process/m_initial_condition.fpp 0.00% 2 Missing and 1 partial ⚠️
src/pre_process/m_start_up.fpp 40.00% 2 Missing and 1 partial ⚠️
src/simulation/m_time_steppers.fpp 0.00% 2 Missing and 1 partial ⚠️
src/post_process/m_start_up.fpp 60.00% 1 Missing and 1 partial ⚠️
src/common/m_variables_conversion.fpp 0.00% 1 Missing ⚠️
src/post_process/m_data_output.fpp 0.00% 1 Missing ⚠️
src/post_process/m_global_parameters.fpp 75.00% 0 Missing and 1 partial ⚠️
src/pre_process/m_global_parameters.fpp 75.00% 0 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1086      +/-   ##
==========================================
- Coverage   44.16%   44.14%   -0.03%     
==========================================
  Files          71       70       -1     
  Lines       20417    20416       -1     
  Branches     1991     2002      +11     
==========================================
- Hits         9018     9013       -5     
+ Misses      10252    10244       -8     
- Partials     1147     1159      +12     

☔ View full report in Codecov by Sentry.


@danieljvickers
Member

I am going to put this PR down for a couple of days, but I managed to make at least some progress toward a solution.

I also tried damping in the RHS module by adding lines like this:

if (hyper_cleaning) then
    do l = 0, p
        do k = 0, n
            do j = 0, m
                rhs_vf(psi_idx)%sf(j, k, l) = rhs_vf(psi_idx)%sf(j, k, l) * exp(-dt/hyper_cleaning_tau)
            end do
        end do
    end do
end if

When I have this term in AND the term in the time stepper, I do actually get the rotor problem to converge for m = n = 400 when I drop hyper_cleaning_tau to dt:
(Screenshot from 2026-01-26 attached.)
You can see in this image that even with that, we start to get a bit of checkerboarding in the central rotor. Every time this case fails for me, it is due to checkerboarding. If I had to bet money, I would say this comes down to how we are damping down psi or how psi interacts with the B field. That limits it to about a dozen lines of code. Hopefully it is one of those. If I'm on the wrong track, we can revert the most recent commit.
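
For reference on the two damping forms compared here: the source-term form used in the RHS, dψ/dt = -ψ/τ, has the exact one-step solution

  ψ(t + Δt) = ψ(t)·exp(-Δt/τ)

so the multiplicative exp(-dt/hyper_cleaning_tau) factor and the -ψ/hyper_cleaning_tau source term are two discretizations of the same Dedner damping, one operator-split and one handled by the time integrator.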

@ChrisZYJ
Contributor Author

Thanks so much for helping to debug!! The checkerboarding usually means something is completely wrong. Might be something stupid I changed after it first worked. I'll study your latest PR and get the bug fixed.

@ChrisZYJ
Contributor Author

I've narrowed down the bug:

  1. The 2D rotor case worked in my last version (f0b7239)
  2. Whether you add smoothing (to exactly match the reference paper) is irrelevant to the bug

I'm comparing the versions line by line and should have a fix soon.

@ChrisZYJ
Contributor Author

Should be fixed. The bug was that Flux(B_normal) = 0 is no longer true with hyperbolic cleaning enabled.

In this PR I removed (1 - dir_flg(i + 1)) from
flux_rs${XYZ}$_vf(j, k, l, B_idx%beg + i) = (1 - dir_flg(i + 1))*( ...
but it was added back during the merge, causing the bug. Making sure this term is identically zero only when hyperbolic cleaning is disabled fixes the issue.
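
A compact way to state the same point, sketched from the GLM form rather than this code's exact expression: in a sweep along direction n, the uncleaned system has a zero flux for the normal field component, F(B_n) = 0, whereas GLM replaces it with

  F(B_n) = ψ,   F(ψ) = c_h² B_n

so zeroing the B_normal flux is correct only when hyperbolic cleaning is disabled, which is exactly the gating described above.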

@ChrisZYJ
Contributor Author

ChrisZYJ commented Jan 27, 2026

I've also added docs, tests, and examples. Also deleted Powell's method as it's no longer relevant.

If the checks for the new cases fail, increasing the tolerance should fix them (hyperbolic cleaning can be sensitive to machine errors).

I don't think all the cases in test_new need to be included as examples. If we agree that test_new can now be deleted, the only remaining work is to remove the unused cases and clean up the comments in the 2D hard-coded IC.

@danieljvickers
Member

Ahh darn, you are right. I even dug through the git diff yesterday and completely missed it. Thanks for taking time to look at it. Another set of eyes was helpful. Since this PR is good to go (I hope), I deleted the new_case files and generated one golden file that was missing. Thanks for looking at this with me @ChrisZYJ

@ChrisZYJ
Contributor Author

@danieljvickers No problem, glad we caught it! By the way, thanks for moving the damping equations to the RHS so they align better with the literature.

I see the Frontier compilation is still failing, but don't want to cause merge conflicts with overlapping changes. Let me know if you want me to take a look!

@danieljvickers
Member

Yeah, I saw the failure yesterday and restarted it to see if that would fix it. I saw it pass before I added the GPU change, and I didn't touch anything in common or post process. So my bet is that these are spurious failures on Frontier. But I'll take a look.

@danieljvickers
Member

danieljvickers commented Jan 28, 2026

@sbryngelson This PR is now ready for merge, as long as there isn't any other input from @ChrisZYJ

@sbryngelson
Member

very good, waiting on @anandrdbz for the AMDFlang compiler stuff first.

@sbryngelson sbryngelson marked this pull request as ready for review January 28, 2026 17:07
Copilot AI review requested due to automatic review settings January 28, 2026 17:07
@qodo-code-review
Contributor

PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 4 🔵🔵🔵🔵⚪
🧪 PR contains tests
🔒 No security concerns identified
⚡ Recommended focus areas for review

Possible Issue

The new MHD magnetic-field flux computation no longer masks out the normal component during the per-component loop, and then later conditionally overwrites/adds to the normal component depending on hyper_cleaning. This can lead to the normal component being computed twice (baseline term plus GLM adjustment) when hyper_cleaning is enabled, and to different behavior than intended compared to the previous dir_flg gating. Please verify the intended GLM/HLL(HLLD) flux form for B_normal and ensure only the correct contribution is present.

! B_z d/d${XYZ}$ flux = (1 - delta(z,${XYZ}$)) * (v_${XYZ}$ * B_z - v_z * B_${XYZ}$)
$:GPU_LOOP(parallelism='[seq]')
do i = 0, 2
    flux_rs${XYZ}$_vf(j, k, l, B_idx%beg + i) = (s_M*(vel_R(dir_idx(1))*B%R(i + 1) - vel_R(i + 1)*B%R(norm_dir)) - &
                                                 s_P*(vel_L(dir_idx(1))*B%L(i + 1) - vel_L(i + 1)*B%L(norm_dir)) + &
                                                 s_M*s_P*(B%L(i + 1) - B%R(i + 1)))/(s_M - s_P)
end do

if (hyper_cleaning) then
    ! propagate magnetic field divergence as a wave
    flux_rs${XYZ}$_vf(j, k, l, B_idx%beg + norm_dir - 1) = flux_rs${XYZ}$_vf(j, k, l, B_idx%beg + norm_dir - 1) + &
                                                           (s_M*qR_prim_rs${XYZ}$_vf(j + 1, k, l, psi_idx) - s_P*qL_prim_rs${XYZ}$_vf(j, k, l, psi_idx))/(s_M - s_P)

    flux_rs${XYZ}$_vf(j, k, l, psi_idx) = (hyper_cleaning_speed**2*(s_M*B%R(norm_dir) - s_P*B%L(norm_dir)) + s_M*s_P*(qL_prim_rs${XYZ}$_vf(j, k, l, psi_idx) - qR_prim_rs${XYZ}$_vf(j + 1, k, l, psi_idx)))/(s_M - s_P)
else
    flux_rs${XYZ}$_vf(j, k, l, B_idx%beg + norm_dir - 1) = 0._wp ! Without hyperbolic cleaning, make sure flux of B_normal is identically zero
end if
Possible Issue

psi is initialized to zero in the general IC path, but pre_process startup additionally injects a hard-coded nonzero Gaussian psi field whenever hyper_cleaning is enabled. This is surprising, case-specific behavior in a generic startup routine and may desynchronize pre_process vs simulation assumptions, affect regression determinism, and break users’ expectations that psi defaults to zero unless explicitly specified by an IC.

! hard-coded psi
if (hyper_cleaning) then
    do j = 0, m
        do k = 0, n
            q_cons_vf(psi_idx)%sf(j, k, 0) = 1d-2*exp(-(x_cc(j)**2 + y_cc(k)**2)/(2.0*0.05**2))
            q_prim_vf(psi_idx)%sf(j, k, 0) = q_cons_vf(psi_idx)%sf(j, k, 0)
        end do
    end do
end if
Edge Case

New hardcoded ICs use user-provided parameters (e.g., sigma for the Gaussian divergence pulse) in divisions and in erf((x-0.5)/sigma) without visible validation. If sigma is unset/zero/very small, this can produce NaNs/Infs. Consider guarding against invalid values (or documenting required patch_icpp(... )%a(*) constraints) to prevent hard-to-debug failures.

case (260)  ! Gaussian Divergence Pulse
    !  Bx(x) = 1 + C * erf((x-0.5)/σ)
    !  ⇒  ∂Bx/∂x = C * (2/√π) * exp[-((x-0.5)/σ)**2] * (1/σ)
    !  Choose C = ε * σ * √π / 2  ⇒ ∂Bx/∂x = ε * exp[-((x-0.5)/σ)**2]
    !  ψ is initialized to zero everywhere.

    eps_mhd = patch_icpp(patch_id)%a(2)
    sigma = patch_icpp(patch_id)%a(3)
    C_mhd = eps_mhd*sigma*sqrt(pi)*0.5_wp

    ! B-field
    q_prim_vf(B_idx%beg)%sf(i, j, 0) = 1._wp + C_mhd*erf((x_cc(i) - 0.5_wp)/sigma)
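
One possible shape for such a guard, as a sketch only (the abort routine name s_mpi_abort is taken from elsewhere in this thread; the placement and message are assumptions):

! Hypothetical check before sigma is used in divisions for case 260
if (sigma <= 0._wp) then
    call s_mpi_abort('Gaussian divergence pulse: sigma (patch_icpp a(3)) must be positive')
end if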


High-level Suggestion

Refactor the new hardcoded initial conditions in Fortran files like 2dHardcodedIC.fpp into configurable input files for the test cases. This improves modularity and separates test configuration from core logic. [High-level, importance: 6]

Solution Walkthrough:

Before:

! File: src/common/include/2dHardcodedIC.fpp
...
case (251)
    ...
! case 252 is for the 2D MHD Rotor problem
case (252) ! 2D MHD Rotor Problem
    r_sq = (x_cc(i) - 0.5_wp)**2 + (y_cc(j) - 0.5_wp)**2
    if (r_sq <= 0.1**2) then
        q_prim_vf(contxb)%sf(i, j, 0) = 10._wp
        ...
    else if (r_sq <= 0.115**2) then
        ...
    end if
case (253) ! MHD Smooth Magnetic Vortex
    ...
case (260) ! Gaussian Divergence Pulse
    ...

After:

! File: src/common/include/2dHardcodedIC.fpp
...
case (251)
    ...
! case (252) ! MHD Smooth Magnetic Vortex (ID shifted)
!    ...
! Other hardcoded cases removed and logic moved to python case files.
...

! File: examples/2D_mhd_rotor/case.py
...
# Instead of hcid, use python functions to define ICs
# and pass them to the simulation, for example via a file
# that the Fortran code can read.
# (This framework does not seem to support python-defined ICs directly,
# so the best practice would be to generalize the existing IC system
# if possible, rather than adding more hardcoded cases.)

Comment on lines +847 to +855
! hard-coded psi
if (hyper_cleaning) then
    do j = 0, m
        do k = 0, n
            q_cons_vf(psi_idx)%sf(j, k, 0) = 1d-2*exp(-(x_cc(j)**2 + y_cc(k)**2)/(2.0*0.05**2))
            q_prim_vf(psi_idx)%sf(j, k, 0) = q_cons_vf(psi_idx)%sf(j, k, 0)
        end do
    end do
end if

Suggestion: Relocate the hard-coded psi initialization to prevent it from being overwritten by a subsequent zero-initialization. [possible issue, importance: 9]

Suggested change
-! hard-coded psi
+! This block should be moved to 'src/pre_process/m_initial_condition.fpp'
+! after the zero-initialization of psi, or that initialization should be removed/made conditional.
 if (hyper_cleaning) then
     do j = 0, m
         do k = 0, n
             q_cons_vf(psi_idx)%sf(j, k, 0) = 1d-2*exp(-(x_cc(j)**2 + y_cc(k)**2)/(2.0*0.05**2))
             q_prim_vf(psi_idx)%sf(j, k, 0) = q_cons_vf(psi_idx)%sf(j, k, 0)
         end do
     end do
 end if

Comment on lines +192 to +225
case (252) ! 2D MHD Rotor Problem
! Ambient conditions are set in the JSON file.
! This case imposes the dense, rotating cylinder.
!
! gamma = 1.4
! Ambient medium (r > 0.1):
! rho = 1, p = 1, v = 0, B = (1,0,0)
! Rotor (r <= 0.1):
! rho = 10, p = 1
! v has angular velocity w=20, giving v_tan=2 at r=0.1

! Calculate distance squared from the center
r_sq = (x_cc(i) - 0.5_wp)**2 + (y_cc(j) - 0.5_wp)**2

! inner radius of 0.1
if (r_sq <= 0.1**2) then
! -- Inside the rotor --
! Set density uniformly to 10
q_prim_vf(contxb)%sf(i, j, 0) = 10._wp

! Set vup constant rotation of rate v=2
! v_x = -omega * (y - y_c)
! v_y = omega * (x - x_c)
q_prim_vf(momxb)%sf(i, j, 0) = -20._wp*(y_cc(j) - 0.5_wp)
q_prim_vf(momxb + 1)%sf(i, j, 0) = 20._wp*(x_cc(i) - 0.5_wp)

! taper width of 0.015
else if (r_sq <= 0.115**2) then
! linearly smooth the function between r = 0.1 and 0.115
q_prim_vf(contxb)%sf(i, j, 0) = 1._wp + 9._wp*(0.115_wp - sqrt(r_sq))/(0.015_wp)

q_prim_vf(momxb)%sf(i, j, 0) = -(2._wp/sqrt(r_sq))*(y_cc(j) - 0.5_wp)*(0.115_wp - sqrt(r_sq))/(0.015_wp)
q_prim_vf(momxb + 1)%sf(i, j, 0) = (2._wp/sqrt(r_sq))*(x_cc(i) - 0.5_wp)*(0.115_wp - sqrt(r_sq))/(0.015_wp)
end if

Suggestion: Add an else branch to the MHD Rotor problem to initialize the ambient region's density and momentum. [possible issue, importance: 8]

Suggested change
case (252) ! 2D MHD Rotor Problem
! Ambient conditions are set in the JSON file.
! This case imposes the dense, rotating cylinder.
!
! gamma = 1.4
! Ambient medium (r > 0.1):
! rho = 1, p = 1, v = 0, B = (1,0,0)
! Rotor (r <= 0.1):
! rho = 10, p = 1
! v has angular velocity w=20, giving v_tan=2 at r=0.1
! Calculate distance squared from the center
r_sq = (x_cc(i) - 0.5_wp)**2 + (y_cc(j) - 0.5_wp)**2
! inner radius of 0.1
if (r_sq <= 0.1**2) then
! -- Inside the rotor --
! Set density uniformly to 10
q_prim_vf(contxb)%sf(i, j, 0) = 10._wp
! Set vup constant rotation of rate v=2
! v_x = -omega * (y - y_c)
! v_y = omega * (x - x_c)
q_prim_vf(momxb)%sf(i, j, 0) = -20._wp*(y_cc(j) - 0.5_wp)
q_prim_vf(momxb + 1)%sf(i, j, 0) = 20._wp*(x_cc(i) - 0.5_wp)
! taper width of 0.015
else if (r_sq <= 0.115**2) then
! linearly smooth the function between r = 0.1 and 0.115
q_prim_vf(contxb)%sf(i, j, 0) = 1._wp + 9._wp*(0.115_wp - sqrt(r_sq))/(0.015_wp)
q_prim_vf(momxb)%sf(i, j, 0) = -(2._wp/sqrt(r_sq))*(y_cc(j) - 0.5_wp)*(0.115_wp - sqrt(r_sq))/(0.015_wp)
q_prim_vf(momxb + 1)%sf(i, j, 0) = (2._wp/sqrt(r_sq))*(x_cc(i) - 0.5_wp)*(0.115_wp - sqrt(r_sq))/(0.015_wp)
end if
case (252) ! 2D MHD Rotor Problem
...
if (r_sq <= 0.1_wp**2) then
...
else if (r_sq <= 0.115_wp**2) then
...
else
! -- Ambient medium --
q_prim_vf(contxb)%sf(i, j, 0) = 1._wp
q_prim_vf(momxb)%sf(i, j, 0) = 0._wp
q_prim_vf(momxb + 1)%sf(i, j, 0) = 0._wp
end if


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🤖 Fix all issues with AI agents
In `@src/simulation/m_global_parameters.fpp`:
- Around line 853-855: When hyper_cleaning is true, add validation that
hyper_cleaning_speed and hyper_cleaning_tau are finite and > 0 to avoid
division-by-zero and invalid wave speeds; locate where hyper_cleaning,
hyper_cleaning_speed and hyper_cleaning_tau are initialized in
m_global_parameters.fpp (the routine that sets defaults/reads parameters) and
insert checks that use isfinite() (or the Fortran equivalent) and value > 0, and
if a check fails call the existing error/stop logging path with a clear message
naming hyper_cleaning_speed or hyper_cleaning_tau (so m_rhs.fpp line 993 and
m_riemann_solvers.fpp line 956 won’t receive invalid values), or set safe
fallback values and log a warning if that matches project behavior.
- Around line 294-295: The psi_idx integer (used by GPU kernels in
m_riemann_solvers and m_rhs) is missing from the device mirroring lists; update
the GPU_DECLARE and GPU_UPDATE macros/declarations to include psi_idx so the
variable is copied to/from the device and kept current for kernels. Locate the
GPU_DECLARE and GPU_UPDATE blocks where global parameters are listed (alongside
other indices) and add psi_idx to both the declaration and update lists to
ensure proper device mirroring for GPU kernels that reference psi_idx.
- Around line 1164-1168: The runtime flag hyper_cleaning unconditionally adds
psi_idx to sys_size even when MHD support (mhd) is not compiled in; add a
runtime guard that aborts when hyper_cleaning is true but mhd is false (e.g.,
check hyper_cleaning .and. .not. mhd and call s_mpi_abort with a descriptive
message) before altering psi_idx/sys_size so the extra equation cannot be
enabled without MHD; update the block that sets psi_idx/sys_size to first
validate mhd and call s_mpi_abort if invalid.

In `@src/simulation/m_rhs.fpp`:
- Around line 987-998: The divergence-cleaning damping term is using the stale
passed-in primitive array q_prim_vf instead of the up-to-date primitive computed
during c2p; in the hyper_cleaning block update the subtraction to read from
q_prim_qp%vf(psi_idx)%sf(j,k,l) divided by hyper_cleaning_tau so
rhs_vf(psi_idx)%sf is reduced using the current primitive psi value (keep the
surrounding loop and GPU pragmas and reference psi_idx, rhs_vf, q_prim_vf,
q_prim_qp, and hyper_cleaning_tau when making the change).

In `@src/simulation/m_riemann_solvers.fpp`:
- Around line 946-958: The normal-component magnetic flux
(flux_rs${XYZ}$_vf(..., B_idx%beg + norm_dir - 1)) is being computed before
applying the psi correction which reintroduces HLL dissipation; update the block
inside the hyper_cleaning branch to restore the original gating by applying (1 -
dir_flg(i + 1)) to the B_normal flux (or explicitly set flux_rs${XYZ}$_vf(...,
B_idx%beg + norm_dir - 1)=0._wp when dir_flg(i + 1) is true) before you add the
psi contribution, ensuring the psi update that uses qL_prim_rs${XYZ}$_vf,
qR_prim_rs${XYZ}$_vf and psi_idx only modifies the non-gated value.
🧹 Nitpick comments (1)
src/common/include/2dHardcodedIC.fpp (1)

256-266: Consider removing commented-out code.

The blob implementation is correct. However, lines 263-265 contain commented-out alternative formulations that appear to be debug/exploration artifacts rather than documentation.

🧹 Suggested cleanup
     case (261)  ! Blob
         r0 = 1._wp/sqrt(8._wp)
         r2 = x_cc(i)**2 + y_cc(j)**2
         r = sqrt(r2)
         alpha = r/r0
         if (alpha < 1) then
             q_prim_vf(B_idx%beg)%sf(i, j, 0) = 1._wp/sqrt(4._wp*pi)*(alpha**8 - 2._wp*alpha**4 + 1._wp)
-            ! q_prim_vf(B_idx%beg)%sf(i,j,0) = 1._wp/sqrt(4000._wp*pi) * (4096._wp*r2**4 - 128._wp*r2**2 + 1._wp)
-            ! q_prim_vf(B_idx%beg)%sf(i,j,0) = 1._wp/(4._wp*pi) * (alpha**8 - 2._wp*alpha**4 + 1._wp)
-            ! q_prim_vf(E_idx)%sf(i,j,0) = 6._wp - q_prim_vf(B_idx%beg)%sf(i,j,0)**2/2._wp
         end if

Comment on lines +294 to 295
integer :: psi_idx !< Index of hyperbolic cleaning state variable for MHD
!> @}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Find all usages of psi_idx with context
rg -n "psi_idx" -B 2 -A 2 src/simulation

Repository: MFlowCode/MFC

Length of output: 4223


🏁 Script executed:

# Check the variable declaration in m_global_parameters.fpp
head -n 300 src/simulation/m_global_parameters.fpp | tail -n 20

Repository: MFlowCode/MFC

Length of output: 1770


🏁 Script executed:

# Search for GPU_DECLARE and GPU_UPDATE in the codebase to understand the pattern
rg -n "GPU_DECLARE|GPU_UPDATE" src/simulation | head -20

Repository: MFlowCode/MFC

Length of output: 2026


🏁 Script executed:

# Check if psi_idx appears in any GPU_UPDATE declarations
rg -n "GPU_UPDATE.*psi_idx" src/simulation

Repository: MFlowCode/MFC

Length of output: 39


🏁 Script executed:

# Get the full GPU_DECLARE and GPU_UPDATE sections in m_global_parameters.fpp
sed -n '290,320p' src/simulation/m_global_parameters.fpp

Repository: MFlowCode/MFC

Length of output: 1671


🏁 Script executed:

# Check if there are additional GPU_UPDATE declarations related to global parameters elsewhere
rg -n "GPU_UPDATE.*device=.*momxb" src/simulation -A 3

Repository: MFlowCode/MFC

Length of output: 500


Add psi_idx to GPU_DECLARE and GPU_UPDATE declarations. The variable is used in GPU device code (e.g., m_riemann_solvers.fpp:954–956 to index array elements in flux calculations, and m_rhs.fpp:992–993 in GPU-enabled loops), but is missing from both GPU_DECLARE and GPU_UPDATE lists. Without device mirroring, GPU kernels access stale or uninitialized values.

GPU declaration fix
-    $:GPU_DECLARE(create='[tensor_size,species_idx,c_idx]')
+    $:GPU_DECLARE(create='[tensor_size,species_idx,c_idx,psi_idx]')
GPU update fix
         $:GPU_UPDATE(device='[momxb,momxe,advxb,advxe,contxb,contxe, &
             & bubxb,bubxe,intxb,intxe,sys_size,buff_size,E_idx, &
             & alf_idx,n_idx,adv_n,adap_dt,pi_fac,strxb,strxe, &
-            & chemxb,chemxe,c_idx,adap_dt_tol,adap_dt_max_iters]')
+            & chemxb,chemxe,c_idx,psi_idx,adap_dt_tol,adap_dt_max_iters]')
🤖 Prompt for AI Agents
In `@src/simulation/m_global_parameters.fpp` around lines 294 - 295, The psi_idx
integer (used by GPU kernels in m_riemann_solvers and m_rhs) is missing from the
device mirroring lists; update the GPU_DECLARE and GPU_UPDATE
macros/declarations to include psi_idx so the variable is copied to/from the
device and kept current for kernels. Locate the GPU_DECLARE and GPU_UPDATE
blocks where global parameters are listed (alongside other indices) and add
psi_idx to both the declaration and update lists to ensure proper device
mirroring for GPU kernels that reference psi_idx.

Comment on lines +853 to 855
hyper_cleaning_speed = dflt_real
hyper_cleaning_tau = dflt_real


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

rg -n "hyper_cleaning" -C 3 src --type f90 --type fpp

Repository: MFlowCode/MFC

Length of output: 84


🏁 Script executed:

rg -n "hyper_cleaning" -C 3 src

Repository: MFlowCode/MFC

Length of output: 21189


Add validation for hyper_cleaning_speed and hyper_cleaning_tau when hyper_cleaning = .true. These parameters default to dflt_real and are used in critical calculations without bounds checking. Specifically, hyper_cleaning_tau is used as a divisor in the RHS computation (m_rhs.fpp line 993) and hyper_cleaning_speed is squared in flux calculations (m_riemann_solvers.fpp line 956). Enforce that both are positive and finite to prevent division-by-zero and invalid wave speeds.

🤖 Prompt for AI Agents
In `@src/simulation/m_global_parameters.fpp` around lines 853 - 855, When
hyper_cleaning is true, add validation that hyper_cleaning_speed and
hyper_cleaning_tau are finite and > 0 to avoid division-by-zero and invalid wave
speeds; locate where hyper_cleaning, hyper_cleaning_speed and hyper_cleaning_tau
are initialized in m_global_parameters.fpp (the routine that sets defaults/reads
parameters) and insert checks that use isfinite() (or the Fortran equivalent)
and value > 0, and if a check fails call the existing error/stop logging path
with a clear message naming hyper_cleaning_speed or hyper_cleaning_tau (so
m_rhs.fpp line 993 and m_riemann_solvers.fpp line 956 won’t receive invalid
values), or set safe fallback values and log a warning if that matches project
behavior.
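
A minimal sketch of what that check could look like (the placement in the existing input-checking path and the exact wording are assumptions; since both parameters default to dflt_real, an explicit is-set check may also be needed depending on that sentinel's value):

! Hypothetical validation of the GLM cleaning parameters
if (hyper_cleaning) then
    if (hyper_cleaning_speed <= 0._wp .or. hyper_cleaning_tau <= 0._wp) then
        call s_mpi_abort('hyper_cleaning requires positive hyper_cleaning_speed and hyper_cleaning_tau')
    end if
end if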

Comment on lines +1164 to +1168
if (hyper_cleaning) then
    psi_idx = sys_size + 1
    sys_size = psi_idx
end if


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

rg -n "hyper_cleaning" -C3 src/

Repository: MFlowCode/MFC

Length of output: 21189


🏁 Script executed:

rg -n "hyper_cleaning.*=" -B5 -A5 src/pre_process/m_start_up.fpp | head -100

Repository: MFlowCode/MFC

Length of output: 39


🏁 Script executed:

# Search for error checks or validations related to hyper_cleaning
rg -n "if.*hyper_cleaning.*mhd|if.*mhd.*hyper_cleaning" -B3 -A3 src/

Repository: MFlowCode/MFC

Length of output: 745


🏁 Script executed:

# Search for where hyper_cleaning is read from input file or parsed
rg -n "hyper_cleaning" src/pre_process/m_start_up.fpp -B10 -A10 | grep -A10 -B10 "read\|input\|parse"

Repository: MFlowCode/MFC

Length of output: 1420


🏁 Script executed:

# Check if mhd flag exists and how it's validated
rg -n "if.*mhd.*then" src/simulation/m_global_parameters.fpp | head -20

Repository: MFlowCode/MFC

Length of output: 205


🏁 Script executed:

# Search for any validation/error checking in the initialization
rg -n "s_mpi_abort" src/simulation/m_global_parameters.fpp -B5 -A2

Repository: MFlowCode/MFC

Length of output: 39


🏁 Script executed:

# Find where input parameters are read (likely a namelist)
rg -n "namelist\|read.*\(" src/pre_process/m_start_up.fpp | head -30

Repository: MFlowCode/MFC

Length of output: 39


🏁 Script executed:

# Look for the actual input parameter reading section around line 155
sed -n '130,200p' src/pre_process/m_start_up.fpp

Repository: MFlowCode/MFC

Length of output: 3291


🏁 Script executed:

# Check for any validation that checks mhd and hyper_cleaning together
rg -n "mhd\|hyper_cleaning" src/simulation/m_global_parameters.fpp | grep -A2 -B2 "1164\|1165\|1166"

Repository: MFlowCode/MFC

Length of output: 39


🏁 Script executed:

# Look at the section around 1164 in simulation/m_global_parameters.fpp
sed -n '1140,1180p' src/simulation/m_global_parameters.fpp

Repository: MFlowCode/MFC

Length of output: 1451


🏁 Script executed:

# Check if there's any validation checking mhd vs hyper_cleaning in the entire m_global_parameters.fpp
rg -n "mhd\|hyper_cleaning" src/simulation/m_global_parameters.fpp -C2 | grep -A5 -B5 "if\|validation\|error"

Repository: MFlowCode/MFC

Length of output: 39


🏁 Script executed:

# Look for any error handling around the sys_size computation
sed -n '1100,1200p' src/simulation/m_global_parameters.fpp

Repository: MFlowCode/MFC

Length of output: 3997


🏁 Script executed:

# Check what model_eqns == 2 or 3 represents and if it's related to MHD
rg -n "model_eqns" src/simulation/m_global_parameters.fpp | head -40

Repository: MFlowCode/MFC

Length of output: 512


🏁 Script executed:

# Search for model_eqns definitions/documentation
rg -n "model_eqns.*=" src/simulation/m_global_parameters.fpp -B3 -A3 | head -50

Repository: MFlowCode/MFC

Length of output: 1820


🏁 Script executed:

# Look for any input validation that ties hyper_cleaning to mhd
rg -n "hyper_cleaning\|mhd" src/simulation/m_start_up.fpp -B2 -A2 | head -80

Repository: MFlowCode/MFC

Length of output: 39


🏁 Script executed:

# Search for where MHD is enabled (mhd flag setting)
rg -n "mhd.*=" src/simulation/m_global_parameters.fpp -B3 -A3 | head -100

Repository: MFlowCode/MFC

Length of output: 916


🏁 Script executed:

# Check if hyper_cleaning has any input validation
rg -n "hyper_cleaning" src/simulation/m_start_up.fpp -B5 -A5

Repository: MFlowCode/MFC

Length of output: 741


🏁 Script executed:

# Check if there's any runtime validation that checks hyper_cleaning against mhd
rg -n "if.*hyper_cleaning" src/simulation/m_start_up.fpp -B10 -A10 | head -100

Repository: MFlowCode/MFC

Length of output: 39


🏁 Script executed:

# Look for error checking or validation subroutines in simulation startup
rg -n "subroutine.*validation\|subroutine.*check" src/simulation/m_start_up.fpp -A30

Repository: MFlowCode/MFC

Length of output: 39


🏁 Script executed:

# Check if hyper_cleaning is validated/constrained when reading input
sed -n '150,400p' src/simulation/m_start_up.fpp

Repository: MFlowCode/MFC

Length of output: 9639


🏁 Script executed:

# Look at the s_check_inputs subroutines to see if there's validation
rg -n "subroutine s_check_inputs" src/simulation/m_start_up.fpp -A50 | head -100

Repository: MFlowCode/MFC

Length of output: 39


🏁 Script executed:

# Search all files for validation of hyper_cleaning with mhd
rg -n "hyper_cleaning.*mhd|mhd.*hyper_cleaning" src/

Repository: MFlowCode/MFC

Length of output: 156


🏁 Script executed:

# Look at the entire check_inputs subroutine in simulation startup
sed -n '800,1200p' src/simulation/m_start_up.fpp | grep -A100 "subroutine s_check_inputs"

Repository: MFlowCode/MFC

Length of output: 39


🏁 Script executed:

# Check what mhd really is (parameter vs variable)
sed -n '120,140p' src/simulation/m_global_parameters.fpp

Repository: MFlowCode/MFC

Length of output: 1922


🏁 Script executed:

# Check if mhd is compile-time or runtime
rg -n "logical.*mhd\|integer.*mhd" src/simulation/m_global_parameters.fpp | head -20

Repository: MFlowCode/MFC

Length of output: 39


🏁 Script executed:

# Look for how hyper_cleaning relates to actual MHD physics
rg -n "hyper_cleaning" src/simulation/m_riemann_solvers.fpp -B5 -A5

Repository: MFlowCode/MFC

Length of output: 2561


🏁 Script executed:

# Check if there's any CMake or build-time constraint
fd "CMakeLists.txt" | head -5

Repository: MFlowCode/MFC

Length of output: 109


🏁 Script executed:

# Understand what happens when hyper_cleaning is true but mhd is not compiled
# Check if B_idx exists in non-MHD configurations
rg -n "B_idx" src/simulation/m_global_parameters.fpp | head -20

Repository: MFlowCode/MFC

Length of output: 515


🏁 Script executed:

# Check what code blocks are guarded by mhd check
rg -n "if.*mhd.*then" src/simulation/m_global_parameters.fpp -A5 | head -50

Repository: MFlowCode/MFC

Length of output: 771


Guard hyper_cleaning to require MHD at compile-time. The hyper_cleaning flag is a runtime parameter that unconditionally adds psi_idx to the system (lines 1164–1167) but is only meaningful when mhd is compiled in. While the code at line 649 in m_riemann_solvers.fpp has an inline comment linking hyper_cleaning to MHD, there is no runtime validation preventing hyper_cleaning=true when mhd is not compiled, which would introduce an unused equation. Add a runtime check (e.g., if (hyper_cleaning .and. .not. mhd) call s_mpi_abort(...)) to prevent this configuration.

🤖 Prompt for AI Agents
In `@src/simulation/m_global_parameters.fpp` around lines 1164 - 1168, The runtime
flag hyper_cleaning unconditionally adds psi_idx to sys_size even when MHD
support (mhd) is not compiled in; add a runtime guard that aborts when
hyper_cleaning is true but mhd is false (e.g., check hyper_cleaning .and. .not.
mhd and call s_mpi_abort with a descriptive message) before altering
psi_idx/sys_size so the extra equation cannot be enabled without MHD; update the
block that sets psi_idx/sys_size to first validate mhd and call s_mpi_abort if
invalid.
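
Applied to the block quoted at the top of this comment (lines +1164 to +1168), a sketch of the guard (message and exact placement are assumptions):

! Hypothetical guard: refuse hyper_cleaning without MHD before extending the state vector
if (hyper_cleaning .and. (.not. mhd)) then
    call s_mpi_abort('hyper_cleaning = T requires mhd = T')
end if

if (hyper_cleaning) then
    psi_idx = sys_size + 1
    sys_size = psi_idx
end if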

Comment on lines +987 to +998
if (hyper_cleaning) then
    $:GPU_PARALLEL_LOOP(private='[j,k,l]', collapse=3)
    do l = 0, p
        do k = 0, n
            do j = 0, m
                rhs_vf(psi_idx)%sf(j, k, l) = rhs_vf(psi_idx)%sf(j, k, l) - &
                    q_prim_vf(psi_idx)%sf(j, k, l)/hyper_cleaning_tau
            end do
        end do
    end do
    $:END_GPU_PARALLEL_LOOP()
end if

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check for assignments between q_prim_vf and q_prim_qp in m_rhs.fpp
rg -n -B5 -A5 'q_prim_vf.*=.*q_prim_qp|q_prim_qp.*=.*q_prim_vf' src/simulation/m_rhs.fpp

Repository: MFlowCode/MFC

Length of output: 646


🏁 Script executed:

#!/bin/bash
# View the context around the hyper_cleaning block and check for earlier q_prim_vf updates
sed -n '950,1090p' src/simulation/m_rhs.fpp | cat -n

Repository: MFlowCode/MFC

Length of output: 6817


🏁 Script executed:

#!/bin/bash
# Search for updates to q_prim_vf in the entire file to understand when it's synchronized
# Show line numbers and context
rg -n 'q_prim_vf.*=' src/simulation/m_rhs.fpp | head -30

Repository: MFlowCode/MFC

Length of output: 148


🏁 Script executed:

#!/bin/bash
# Find the subroutine definition that contains the hyper_cleaning block
# Look backwards from line 987 to find the subroutine definition
sed -n '1,987p' src/simulation/m_rhs.fpp | grep -n 'subroutine\|function' | tail -5

Repository: MFlowCode/MFC

Length of output: 449


🏁 Script executed:

#!/bin/bash
# View the subroutine signature and variable declarations
sed -n '644,750p' src/simulation/m_rhs.fpp | cat -n

Repository: MFlowCode/MFC

Length of output: 5819


🏁 Script executed:

#!/bin/bash
# Search for psi_idx definition and usage
rg -n 'psi_idx' src/simulation/m_rhs.fpp | head -20

Repository: MFlowCode/MFC

Length of output: 501


Use current primitive psi value in divergence cleaning source term.

The damping term at line 993 reads from q_prim_vf(psi_idx)%sf, which is the passed-in parameter not synchronized with the current state. Since q_prim_qp%vf(psi_idx) is computed at line 71 during the conservative-to-primitive conversion and is already GPU-enabled, use it instead to ensure the damping applies the current divergence cleaning potential.

Change line 993 from:

q_prim_vf(psi_idx)%sf(j, k, l)/hyper_cleaning_tau

to:

q_prim_qp%vf(psi_idx)%sf(j, k, l)/hyper_cleaning_tau
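
Applied to the loop quoted at the top of this comment, the change would read as follows (a sketch; q_prim_qp is assumed to be in scope as described above):

if (hyper_cleaning) then
    $:GPU_PARALLEL_LOOP(private='[j,k,l]', collapse=3)
    do l = 0, p
        do k = 0, n
            do j = 0, m
                rhs_vf(psi_idx)%sf(j, k, l) = rhs_vf(psi_idx)%sf(j, k, l) - &
                    q_prim_qp%vf(psi_idx)%sf(j, k, l)/hyper_cleaning_tau
            end do
        end do
    end do
    $:END_GPU_PARALLEL_LOOP()
end if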
🤖 Prompt for AI Agents
In `@src/simulation/m_rhs.fpp` around lines 987 - 998, The divergence-cleaning
damping term is using the stale passed-in primitive array q_prim_vf instead of
the up-to-date primitive computed during c2p; in the hyper_cleaning block update
the subtraction to read from q_prim_qp%vf(psi_idx)%sf(j,k,l) divided by
hyper_cleaning_tau so rhs_vf(psi_idx)%sf is reduced using the current primitive
psi value (keep the surrounding loop and GPU pragmas and reference psi_idx,
rhs_vf, q_prim_vf, q_prim_qp, and hyper_cleaning_tau when making the change).

Comment on lines +946 to +958
    flux_rs${XYZ}$_vf(j, k, l, B_idx%beg + i) = (s_M*(vel_R(dir_idx(1))*B%R(i + 1) - vel_R(i + 1)*B%R(norm_dir)) - &
                                                 s_P*(vel_L(dir_idx(1))*B%L(i + 1) - vel_L(i + 1)*B%L(norm_dir)) + &
                                                 s_M*s_P*(B%L(i + 1) - B%R(i + 1)))/(s_M - s_P)
end do

if (hyper_cleaning) then
    ! propagate magnetic field divergence as a wave
    flux_rs${XYZ}$_vf(j, k, l, B_idx%beg + norm_dir - 1) = flux_rs${XYZ}$_vf(j, k, l, B_idx%beg + norm_dir - 1) + &
                                                           (s_M*qR_prim_rs${XYZ}$_vf(j + 1, k, l, psi_idx) - s_P*qL_prim_rs${XYZ}$_vf(j, k, l, psi_idx))/(s_M - s_P)

    flux_rs${XYZ}$_vf(j, k, l, psi_idx) = (hyper_cleaning_speed**2*(s_M*B%R(norm_dir) - s_P*B%L(norm_dir)) + s_M*s_P*(qL_prim_rs${XYZ}$_vf(j, k, l, psi_idx) - qR_prim_rs${XYZ}$_vf(j + 1, k, l, psi_idx)))/(s_M - s_P)
else
    flux_rs${XYZ}$_vf(j, k, l, B_idx%beg + norm_dir - 1) = 0._wp ! Without hyperbolic cleaning, make sure flux of B_normal is identically zero

⚠️ Potential issue | 🟠 Major

Restore normal-B flux gating before adding psi

Line 946-948 now compute the normal-component induction flux before the psi correction, which reintroduces a non-zero HLL dissipation term in B_normal (this matches the rotor failure noted in the PR discussion). Reapply the (1 - dir_flg(i + 1)) gating (or explicitly zero B_normal) before adding the psi term.

🔧 Suggested fix
-                                        flux_rs${XYZ}$_vf(j, k, l, B_idx%beg + i) = (s_M*(vel_R(dir_idx(1))*B%R(i + 1) - vel_R(i + 1)*B%R(norm_dir)) - &
-                                                                                     s_P*(vel_L(dir_idx(1))*B%L(i + 1) - vel_L(i + 1)*B%L(norm_dir)) + &
-                                                                                     s_M*s_P*(B%L(i + 1) - B%R(i + 1)))/(s_M - s_P)
+                                        flux_rs${XYZ}$_vf(j, k, l, B_idx%beg + i) = (1._wp - dir_flg(i + 1))*( &
+                                                                                     s_M*(vel_R(dir_idx(1))*B%R(i + 1) - vel_R(i + 1)*B%R(norm_dir)) - &
+                                                                                     s_P*(vel_L(dir_idx(1))*B%L(i + 1) - vel_L(i + 1)*B%L(norm_dir)) + &
+                                                                                     s_M*s_P*(B%L(i + 1) - B%R(i + 1)))/(s_M - s_P)
🤖 Prompt for AI Agents
In `@src/simulation/m_riemann_solvers.fpp` around lines 946 - 958, The
normal-component magnetic flux (flux_rs${XYZ}$_vf(..., B_idx%beg + norm_dir -
1)) is being computed before applying the psi correction which reintroduces HLL
dissipation; update the block inside the hyper_cleaning branch to restore the
original gating by applying (1 - dir_flg(i + 1)) to the B_normal flux (or
explicitly set flux_rs${XYZ}$_vf(..., B_idx%beg + norm_dir - 1)=0._wp when
dir_flg(i + 1) is true) before you add the psi contribution, ensuring the psi
update that uses qL_prim_rs${XYZ}$_vf, qR_prim_rs${XYZ}$_vf and psi_idx only
modifies the non-gated value.


Copilot AI left a comment


Pull request overview

Implements GLM (hyperbolic) divergence cleaning for 2D/3D MHD by introducing a new psi state variable and associated runtime parameters, replacing the prior Powell-based approach. Updates the toolchain schema/validation, simulation/pre/post-processing plumbing, documentation, and example/test cases to support the new capability.

Changes:

  • Introduces hyper_cleaning + hyper_cleaning_speed/hyper_cleaning_tau and threads a new psi_idx state variable through pre/sim/post pipelines.
  • Updates the MHD HLL flux path and adds psi damping behavior in the RHS; removes the Powell module.
  • Adds/updates example cases and test harness references; updates docs to cite Dedner et al. (2002).

Reviewed changes

Copilot reviewed 28 out of 31 changed files in this pull request and generated 5 comments.

Summary per file:

  • toolchain/mfc/test/cases.py: Updates MHD test entry to use the hyper-cleaning Orszag–Tang example.
  • toolchain/mfc/run/case_dicts.py: Adds hyper_cleaning/speed/tau to the toolchain parameter schema; removes Powell key.
  • toolchain/mfc/case_validator.py: Switches validation from powell to hyper_cleaning constraints.
  • tests/F057F8E6/golden-metadata.txt: Adds generated golden metadata for a new/updated regression test.
  • tests/EC8F87D9/golden-metadata.txt: Adds generated golden metadata for a new/updated regression test.
  • tests/B4DC99F9/golden-metadata.txt: Adds generated golden metadata for a new/updated regression test.
  • src/simulation/m_time_steppers.fpp: Allocates/deallocates psi primitive storage when hyper_cleaning is enabled.
  • src/simulation/m_start_up.fpp: Removes Powell module init/finalize wiring; reads hyper-cleaning params.
  • src/simulation/m_riemann_solvers.fpp: Adds hyper-cleaning wave-speed adjustment and GLM flux terms in HLL path.
  • src/simulation/m_rhs.fpp: Adds psi handling in RHS setup and a decay source term.
  • src/simulation/m_mpi_proxy.fpp: Broadcasts hyper_cleaning and its parameters via MPI.
  • src/simulation/m_mhd.fpp: Removes Powell source-term module implementation.
  • src/simulation/m_global_parameters.fpp: Adds hyper_cleaning, psi_idx, and GLM parameter definitions / sys_size accounting.
  • src/pre_process/m_start_up.fpp: Reads hyper_cleaning and introduces a psi initialization hook.
  • src/pre_process/m_mpi_proxy.fpp: Broadcasts hyper_cleaning via MPI in pre_process.
  • src/pre_process/m_initial_condition.fpp: Initializes psi to zero when enabled.
  • src/pre_process/m_global_parameters.fpp: Adds hyper_cleaning and psi_idx sizing in pre_process globals.
  • src/post_process/m_start_up.fpp: Writes psi to formatted database output when enabled.
  • src/post_process/m_mpi_proxy.fpp: Broadcasts hyper_cleaning via MPI in post_process.
  • src/post_process/m_global_parameters.fpp: Adds hyper_cleaning and psi_idx sizing in post_process globals.
  • src/post_process/m_data_output.fpp: Increments DB variable count when psi is included.
  • src/common/m_variables_conversion.fpp: Propagates psi through primitive↔conservative conversion.
  • src/common/include/2dHardcodedIC.fpp: Adds new hardcoded 2D MHD ICs (rotor, Gaussian divergence pulse, tilted shock tube) and reassigns the vortex case id.
  • examples/2D_orszag_tang_hyper_cleaning/case.py: Adds Orszag–Tang example configured for hyperbolic cleaning.
  • examples/2D_mhd_rotor/case.py: Adds a 2D MHD rotor example configured for hyperbolic cleaning.
  • examples/2D_mhd_magnetic_vortex/case.py: Updates hardcoded IC id to match the vortex case renumbering.
  • docs/documentation/references.md: Replaces Powell reference with Dedner et al. (2002).
  • docs/documentation/case.md: Documents hyper_cleaning and its parameters in the case-input reference.
Comments suppressed due to low confidence (1)

src/simulation/m_global_parameters.fpp:298

  • psi_idx is used in GPU kernels (e.g., Riemann solver / RHS), but it is not included in the module’s GPU_DECLARE lists (and therefore won’t be present under default(present)), nor is it included in the GPU_UPDATE index updates. Add psi_idx to the appropriate GPU_DECLARE/GPU_UPDATE lists, and consider setting it to dflt_int when hyper_cleaning is false to avoid an uninitialized index.
    integer :: psi_idx                                 !< Index of hyperbolic cleaning state variable for MHD
    !> @}
    $:GPU_DECLARE(create='[sys_size,E_idx,n_idx,bub_idx,alf_idx,gamma_idx]')
    $:GPU_DECLARE(create='[pi_inf_idx,B_idx,stress_idx,xi_idx,b_size]')
    $:GPU_DECLARE(create='[tensor_size,species_idx,c_idx]')
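
A minimal sketch of what the suggested fix could look like, simply extending one of the declaration lists shown above (which list psi_idx joins, and where the corresponding device update of the index happens, is a maintainer choice):

! Sketch only: make psi_idx device-resident alongside the other index variables,
! and update it on the device wherever those indices are pushed after assignment.
$:GPU_DECLARE(create='[tensor_size,species_idx,c_idx,psi_idx]')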

self.prohibit(powell and fd_order is None,
"fd_order must be set if Powell's method is enabled")
self.prohibit(hyper_cleaning and not mhd,
"Hyperbolic cleaning requires mhd to be enabled")
Copilot AI Jan 28, 2026

check_mhd_simulation currently allows hyper_cleaning = T with riemann_solver = 4 (HLLD), but the simulation-side hyperbolic cleaning fluxes appear to be implemented only in the HLL path. This can silently run with incorrect (missing) GLM fluxes. Add a constraint to prohibit hyper_cleaning when riemann_solver == 4 (or implement the corresponding HLLD+GLM fluxes).

Suggested change (add after the existing mhd constraint):
self.prohibit(hyper_cleaning and riemann_solver == 4,
              "Hyperbolic cleaning is not supported with HLLD (riemann_solver = 4)")

Comment on lines +688 to +691
self.prohibit(hyper_cleaning and not mhd,
"Hyperbolic cleaning requires mhd to be enabled")
self.prohibit(hyper_cleaning and n is not None and n == 0,
"Hyperbolic cleaning is not supported for 1D simulations")
Copilot AI Jan 28, 2026

When hyper_cleaning is enabled, hyper_cleaning_speed and hyper_cleaning_tau are required for stability/correctness (and hyper_cleaning_tau is used as a divisor in the RHS). Add validation to require both parameters to be provided and strictly positive.

Comment on lines +847 to +856
! hard-coded psi
if (hyper_cleaning) then
do j = 0, m
do k = 0, n
q_cons_vf(psi_idx)%sf(j, k, 0) = 1d-2*exp(-(x_cc(j)**2 + y_cc(k)**2)/(2.0*0.05**2))
q_prim_vf(psi_idx)%sf(j, k, 0) = q_cons_vf(psi_idx)%sf(j, k, 0)
end do
end do
end if

Copilot AI Jan 28, 2026

This hard-coded psi initialization runs for every case with hyper_cleaning = T, overriding whatever the initial condition module produced (including the earlier "psi initialized to zero" logic). This makes psi nonzero by default for unrelated MHD cases. Consider removing this, or gating it behind a specific hard-coded IC/case id/patch parameter so it only applies to the intended validation case(s).

Suggested change (remove this block):
! hard-coded psi
if (hyper_cleaning) then
do j = 0, m
do k = 0, n
q_cons_vf(psi_idx)%sf(j, k, 0) = 1d-2*exp(-(x_cc(j)**2 + y_cc(k)**2)/(2.0*0.05**2))
q_prim_vf(psi_idx)%sf(j, k, 0) = q_cons_vf(psi_idx)%sf(j, k, 0)
end do
end do
end if

Comment on lines +851 to +852
q_cons_vf(psi_idx)%sf(j, k, 0) = 1d-2*exp(-(x_cc(j)**2 + y_cc(k)**2)/(2.0*0.05**2))
q_prim_vf(psi_idx)%sf(j, k, 0) = q_cons_vf(psi_idx)%sf(j, k, 0)
Copilot AI Jan 28, 2026

The Gaussian psi initialization uses default-kind literals (1d-2, 2.0, 0.05) and only writes l=0, which is inconsistent with the codebase’s real(wp) usage and will not correctly initialize 3D cases (p>0). Use _wp literals and loop over l=0,p (or assign the full 3D array) if this initialization is kept.
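
A minimal sketch of a kind-consistent, 3D-aware version, should the Gaussian pulse be kept; the guard flag shown is hypothetical and stands in for whatever case-specific gating is chosen in response to the previous comment:

! Sketch only: wp-kind literals, the full l = 0, p extent, and a case-specific
! guard (pulse_ic is a hypothetical flag, not an existing input parameter).
if (hyper_cleaning .and. pulse_ic) then
    do l = 0, p
        do k = 0, n
            do j = 0, m
                q_cons_vf(psi_idx)%sf(j, k, l) = 1.0e-2_wp* &
                    exp(-(x_cc(j)**2 + y_cc(k)**2)/(2.0_wp*0.05_wp**2))
                q_prim_vf(psi_idx)%sf(j, k, l) = q_cons_vf(psi_idx)%sf(j, k, l)
            end do
        end do
    end do
end if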

Comment on lines +987 to +991
if (hyper_cleaning) then
$:GPU_PARALLEL_LOOP(private='[j,k,l]', collapse=3)
do l = 0, p
do k = 0, n
do j = 0, m
Copilot AI Jan 28, 2026

The psi damping source term is applied inside the dimensional-splitting loop, so in 2D/3D it will be applied num_dims times per RHS evaluation (over-damping by a factor of 2 or 3). Move this source term outside the directional loop, or split it consistently (e.g., divide by num_dims) if that’s the intended operator splitting.
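
A sketch of the decay term applied once per RHS evaluation, outside the directional sweeps; rhs_vf and q_prim_vf are assumed names for the accumulated right-hand side and the primitive variables, and the -psi/tau source form follows the description above rather than the exact code:

! Sketch only: add the psi relaxation source a single time, after all
! directional flux contributions have been accumulated.
if (hyper_cleaning) then
    $:GPU_PARALLEL_LOOP(private='[j,k,l]', collapse=3)
    do l = 0, p
        do k = 0, n
            do j = 0, m
                rhs_vf(psi_idx)%sf(j, k, l) = rhs_vf(psi_idx)%sf(j, k, l) &
                    - q_prim_vf(psi_idx)%sf(j, k, l)/hyper_cleaning_tau
            end do
        end do
    end do
end if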

Contributor

@cubic-dev-ai cubic-dev-ai bot left a comment

7 issues found across 31 files

  • src/pre_process/m_start_up.fpp:851 (P2): Hard-coded Gaussian psi initialization runs for every hyper_cleaning case after IC setup, overwriting any user/loaded psi values and only setting the z=0 plane. This makes non-Gaussian or 3D initial conditions incorrect. Gate this behind a specific case/parameter or move it to a case-specific IC routine.
  • src/pre_process/m_global_parameters.fpp:892 (P2): hyper_cleaning can be enabled when model_eqns is not 2/3, leaving psi_idx uninitialized while other modules still use it under hyper_cleaning guards. Add validation or disable hyper_cleaning for non-MHD configurations to avoid invalid indexing.
  • tests/F057F8E6/golden-metadata.txt:29 (P2): Redact machine-specific absolute paths in committed golden metadata to avoid leaking developer environment details and to keep snapshots portable.
  • tests/EC8F87D9/golden-metadata.txt:29 (P3): Avoid committing user-specific absolute paths in golden metadata; redact or replace with a portable placeholder so the test artifact doesn't leak local environment details.
  • src/simulation/m_riemann_solvers.fpp:650 (P1): c_fast%L is forced negative via min(..., -hyper_cleaning_speed) even though c_fast is a positive speed, which can invert the left wave speed and destabilize the solver. Use a positive magnitude expansion instead.
  • src/simulation/m_global_parameters.fpp:294 (P2): psi_idx is introduced for hyperbolic cleaning but never added to GPU_DECLARE/GPU_UPDATE lists, even though it is used inside GPU_PARALLEL_LOOP kernels. This can leave psi_idx undefined on the device and cause incorrect indexing or GPU runtime failures when hyper_cleaning is enabled.
  • tests/B4DC99F9/golden-metadata.txt:29 (P2): Golden metadata embeds a developer-specific absolute path (username and local directory). This leaks local environment details and makes the test artifact machine-specific. Consider redacting or replacing with a portable placeholder before committing.


end if

if (hyper_cleaning) then ! mhd
c_fast%L = min(c_fast%L, -hyper_cleaning_speed)
Contributor

@cubic-dev-ai cubic-dev-ai bot Jan 28, 2026

P1: c_fast%L is forced negative via min(..., -hyper_cleaning_speed) even though c_fast is a positive speed, which can invert the left wave speed and destabilize the solver. Use a positive magnitude expansion instead.

File context (src/simulation/m_riemann_solvers.fpp:650):

@@ -646,6 +646,11 @@ contains
                             end if

+                            if (hyper_cleaning) then ! mhd
+                                c_fast%L = min(c_fast%L, -hyper_cleaning_speed)
+                                c_fast%R = max(c_fast%R, hyper_cleaning_speed)
+                            end if
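
A sketch of the positive-magnitude form this comment suggests, assuming c_fast%L and c_fast%R hold nonnegative fast-wave magnitudes that are later combined with the flow velocity (u - c_fast on the left, u + c_fast on the right) to form the HLL signal speeds:

! Sketch only: widen the fast-wave magnitudes to at least the GLM cleaning
! speed instead of flipping the sign of the left value.
if (hyper_cleaning) then ! mhd
    c_fast%L = max(c_fast%L, hyper_cleaning_speed)
    c_fast%R = max(c_fast%R, hyper_cleaning_speed)
end if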

if (hyper_cleaning) then
do j = 0, m
do k = 0, n
q_cons_vf(psi_idx)%sf(j, k, 0) = 1d-2*exp(-(x_cc(j)**2 + y_cc(k)**2)/(2.0*0.05**2))
Contributor

@cubic-dev-ai cubic-dev-ai bot Jan 28, 2026

P2: Hard-coded Gaussian psi initialization runs for every hyper_cleaning case after IC setup, overwriting any user/loaded psi values and only setting the z=0 plane. This makes non-Gaussian or 3D initial conditions incorrect. Gate this behind a specific case/parameter or move it to a case-specific IC routine.


sys_size = damage_idx
end if

if (hyper_cleaning) then
Contributor

@cubic-dev-ai cubic-dev-ai bot Jan 28, 2026

P2: hyper_cleaning can be enabled when model_eqns is not 2/3, leaving psi_idx uninitialized while other modules still use it under hyper_cleaning guards. Add validation or disable hyper_cleaning for non-MHD configurations to avoid invalid indexing.

File context (src/pre_process/m_global_parameters.fpp:892):

@@ -886,6 +889,11 @@ contains
                 sys_size = damage_idx
             end if

+            if (hyper_cleaning) then
+                psi_idx = sys_size + 1
+                sys_size = psi_idx
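
A minimal sketch of the guarded sizing this comment asks for, combining the dflt_int default mentioned in the suppressed Copilot comment with the assignment shown in the context above (whether the guard uses the mhd flag or a validator constraint is a maintainer choice):

! Sketch only: default to a sentinel so stray uses fail loudly, and only
! claim a slot in sys_size when MHD hyperbolic cleaning is actually active.
psi_idx = dflt_int
if (mhd .and. hyper_cleaning) then
    psi_idx = sys_size + 1
    sys_size = psi_idx
end if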

OpenACC : OFF
OpenMP : OFF

Fypp : /home/chris/source/MFC_mhd_hypercleaning/build/venv/bin/fypp
Contributor

@cubic-dev-ai cubic-dev-ai bot Jan 28, 2026

P2: Redact machine-specific absolute paths in committed golden metadata to avoid leaking developer environment details and to keep snapshots portable.


type(int_bounds_info) :: species_idx !< Indexes of first & last concentration eqns.
integer :: c_idx !< Index of color function
integer :: damage_idx !< Index of damage state variable (D) for continuum damage model
integer :: psi_idx !< Index of hyperbolic cleaning state variable for MHD
Contributor

@cubic-dev-ai cubic-dev-ai bot Jan 28, 2026

P2: psi_idx is introduced for hyperbolic cleaning but never added to GPU_DECLARE/GPU_UPDATE lists, even though it is used inside GPU_PARALLEL_LOOP kernels. This can leave psi_idx undefined on the device and cause incorrect indexing or GPU runtime failures when hyper_cleaning is enabled.


OpenACC : OFF
OpenMP : OFF

Fypp : /home/chris/source/MFC_mhd_hypercleaning/build/venv/bin/fypp
Contributor

@cubic-dev-ai cubic-dev-ai bot Jan 28, 2026

P2: Golden metadata embeds a developer-specific absolute path (username and local directory). This leaks local environment details and makes the test artifact machine-specific. Consider redacting or replacing with a portable placeholder before committing.


OpenACC : OFF
OpenMP : OFF

Fypp : /home/dan/Documents/repos/MFC/build/venv/bin/fypp
Contributor

@cubic-dev-ai cubic-dev-ai bot Jan 28, 2026

P3: Avoid committing user-specific absolute paths in golden metadata; redact or replace with a portable placeholder so the test artifact doesn’t leak local environment details.

