Conversation


@huebleruwm huebleruwm commented Nov 24, 2025

There are two parts here: getting a new version of CLUBB in, and enabling the descending grid mode in clubb_intr. Both goals were split up over many commits. Fixes #1411

New CLUBB

The first commits are dedicated to getting a new version of CLUBB in. Because clubb_intr had diverged between cam_development (which had significant changes to enable GPUization) and UWM's branch (which had redimensioning changes), the merging was done manually.

The first 6 commits here include changes that can be matched with a certain version of CLUBB release:
3d40d8c0e03a298ae3925564bc4db2f0df5e9437 works with clubb hash 1632cf12
e67fc4719fa21f8e449b024a0e3b6df2d0a7f8cb works with clubb hash 673beb05
187d7b536c2f36968fc7f5e1b9d1167e430ad03f works with clubb hash dc302b95
e4b71220b33aeaddb0afc68c9103555edccb59eb works with clubb hash dc302b95
703aca60ed1e0b6b24f2cd890c3a4497041d25b8 works with clubb hash d5957b30
4d9b1b8a528ca532d964c1799e1860e96e068a12 works with clubb hash d5957b30
(to use a given clubb hash, go to src/physics/clubb and run the git checkout there)

These commits all have to do with just getting a new version of clubb in, so we need to ensure that at least 4d9b1b8a528ca532d964c1799e1860e96e068a12 is working correctly. The later commits have more complicated changes to clubb_intr, so if we find any problems with this branch, we should test commit 4d9b1b8a528ca532d964c1799e1860e96e068a12 as a next step.

clubb_intr improvements

The next commits after getting new clubb in are increasingly ambitious changes aimed at simplifying clubb_intr and reducing its runtime cost.

e60848b4ec4df90a3060ffd7f664fab42e847509 introduces a parameter in clubb_intr, clubb_grid_dir, that controls the grid direction used when calling advance_clubb_core. When compiling with -O0 and setting l_test_grid_generalization = .true. in src/clubb/src/CLUBB_core/model_flag.F90, the results are BFB in either grid direction.

dddff494966bf2bf4341fa4a7526b2f8b0f3d16e separates the flipping code from the copying code (copying CAM-sized arrays to CLUBB-sized arrays). Should all be BFB.

055e53f70741531a58d0f7da788b824c76fef087 pushes the flipping code inward until it sits directly around the call to advance_clubb_core, and the grid direction is now controlled by a flag, l_ascending_grid. This should all be BFB as well, and I tested with clubb_cloudtop_cooling, clubb_rainevap_turb, and do_clubb_mf all true to ensure BFBness in either ascending or descending mode. One caveat: the clubb_mf code assumes an ascending grid, so before calling it we flip the arrays, then flip the outputs back to descending.
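
As a standalone sketch of the pattern (dummy names and sizes, not the actual clubb_intr arrays), the idea is to reverse the vertical index only when ascending mode is requested, immediately before and immediately after the core call:

    program flip_sketch
      implicit none
      integer, parameter :: r8 = selected_real_kind(12)
      integer, parameter :: ncol = 4, nzt = 5
      logical  :: l_ascending_grid = .true.
      real(r8) :: thlm(ncol,nzt)
      integer  :: k

      do k = 1, nzt
        thlm(:,k) = real(k, r8)                        ! dummy data in CAM's descending (top-down) order
      end do

      if (l_ascending_grid) thlm = thlm(:,nzt:1:-1)    ! flip inputs to ascending before the core call
      ! ... advance_clubb_core would be called here ...
      if (l_ascending_grid) thlm = thlm(:,nzt:1:-1)    ! flip outputs back to descending afterwards

      print *, thlm(1,:)                               ! same order either way; the double flip is an identity
    end program flip_sketch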

2fb2ba9bd6b1ec5e5c60039f90d0c1020663d0c9 is a pretty safe intermediate commit, mainly moving stuff around in preparation for redimensioning.

bded8a561131e4dbbccad293f14226e5e8c0e856 is some very safe redimensioning.

fce8e1b1b5e8d3e93232c2129be7671fae79db23 is the big one: it redimensions most pbuf arrays and uses them in place of the CLUBB-sized local arrays. This lets us avoid the vast majority of the data copying and delete the local versions of the clubb arrays.
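
Roughly, the idea (sketched from memory with a hypothetical field name, so the exact physics_buffer arguments may differ) is to register the pbuf field with CLUBB's own vertical extent and hand the pbuf pointer straight to clubb, with no local copy or flip:

    ! register the field at CLUBB's momentum-grid size nzm instead of pver/pverp
    call pbuf_add_field('WP2_nadv', 'global', dtype_r8, (/pcols, nzm, dyn_time_lvls/), wp2_idx)

    ! ... later, inside clubb_tend_cam ...

    ! point directly at the pbuf memory for the current time level and pass it to clubb
    call pbuf_get_field(pbuf, wp2_idx, wp2_inout, start=(/1,1,itim_old/), kount=(/pcols,nzm,1/))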

The rest are mainly safe and easy changes, or commits to make the GPU code work.

Testing

I plan to run an ECT test to compare the cam_development head I started with to the head of this branch. Answers are expected to change slightly due to the ghost point removal in clubb, so I think it unlikely that this passes, but if it does that would be fantastic, and might be the only testing we really need.

If that first ECT test doesn't pass, then I'll assume that the difference is due to the ghost point removal (or other small bit-changing commits in clubb over the past year), and rely on @adamrher to check if the results are good.

The biggest concern is whether the answer changes are acceptable, and the only real differences expected are from the new version of CLUBB. If the answers from this branch look bad, we should go back to commit 4d9b1b8a528ca532d964c1799e1860e96e068a12 and check the answers from that, since it only includes the new version of CLUBB, and NO unnecessary clubb_intr changes. If the answers still look bad in that commit, then we have a few more we can step back through to try to figure out which commit introduces the differences. If the hypothetical problem is present in the first commit (3d40d8c0e03a298ae3925564bc4db2f0df5e9437), then the problem is harder, because that commit includes (pretty much only) the removal of the ghost point in clubb, which is expected to change answers, but hopefully not significantly.

Again, if that first ECT test fails, I can still run another ECT test between the version where clubb is up to date (4d9b1b8a528ca532d964c1799e1860e96e068a12) and the head of this branch. The changes between that commit and the head may be slightly bit-changing without -O0, but definitely shouldn't be answer-changing. If this ECT test fails, then I've made a mistake in the changes meant to avoid the flipping/copying, and I'll have to step back through and figure out what bad thing I did.

Some other tests I've run along the way help confirm at least some of these changes:

  • I've ensured BFBness between ascending and descending modes since it was introduced in commit e60848b4ec4df90a3060ffd7f664fab42e847509
  • The ERP test passes
  • Commit 4d9b1b8a528ca532d964c1799e1860e96e068a12 should be working on the GPU, but later commits are untested

Next steps

I left a number of things messy for now, such as comments and gptl timers. Once we confirm these changes, I'd like to go through and make some of those nicer as a final step.

Performance

In addition to the clubb_intr changes improving performance, we should use this as an opportunity to try flags that should be significantly faster (example namelist settings follow this list):

  • clubb_penta_solve_method = 2 uses our custom pentadiagonal matrix solvers, which should be significantly faster than lapack and should pass an ECT test
  • clubb_tridiag_solve_method = 2 uses our custom tridiagonal solver, which should also be faster and pass an ECT test
  • clubb_fill_holes_type = 4 uses a different hole-filling algorithm that in theory is just all-around better and faster than our current one; I suspect it will pass ECT, but I have yet to test it
  • clubb_l_call_pdf_closure_twice = .false. will avoid a pdf_closure call (which is significant in cost) and reduce the memory footprint. I think it will have minimal effect (based on my visual analysis of what it affects in the code), but it is the most likely to break the ECT test
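
For reference, trying these out would presumably just be user_nl_cam entries along these lines (same names and values as listed above):

    ! candidate CLUBB performance settings (not defaults yet)
    clubb_penta_solve_method       = 2
    clubb_tridiag_solve_method     = 2
    clubb_fill_holes_type          = 4
    clubb_l_call_pdf_closure_twice = .false.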

@cacraigucar cacraigucar marked this pull request as draft November 24, 2025 20:36
@cacraigucar
Collaborator

@huebleruwm - After reading through this text, I moved this PR to draft. Once it is ready for the SEs to review and process it for bringing into CAM, please move it out of draft.

…grid, and flipping sections have been consolidated to directly around the time stepping loop. Next is to push them inward until the only clubb grid calculations are done inside advance_clubb_core.
…s descending BFB (except wp3_ta but that's internal and doesn't matter) - this is even true with clubb_cloudtop_cooling, clubb_rainevap_turb, and do_clubb_mf.
…ending mode, even though there is no notion of ascending in it. Perhaps there is some bug with the flipper? Otherwise it's an internal interaction between clubb and silhs somehow. It's very upsetting, but I think it's time to give up and come back to it someday.
…oning yet. All changes should be BFB, but a handful of field outputs are different above top_lev because I made it zero everything above top_lev. Among the fields that differ: some (RTM_CLUBB RTP2_CLUBB THLM_CLUBB UP2_CLUBB WP2_CLUBB) were being initialized to a tolerance above top_lev, and others (THLP2_CLUBB RTP2_CLUBB UM_CLUBB VM_CLUBB) were never initialized or set above top_lev, so we were outputting random garbage.
…ootprint and data copying steps in clubb_intr, mainly by switching pbuf variables (which are mostly clubb_inouts) to have nzm or nzt dimensions, allowing them to be passed into clubb directly and making the copying/flipping step unnecessary. This was tested with a top_lev > 1, so it should be well tested. There were some above-top_lev interactions found, which seem erroneous, so I've marked them with a TODO and a little explanation.
@huebleruwm huebleruwm force-pushed the cam_dev_clubb_nov2025 branch from ed11ffc to b2b3232 on November 26, 2025 10:11
@huebleruwm
Author

@adamrher I have done a number of ECT tests and it was pretty much as expected. I made a baseline from ESCOMP/cam_development commit 4c480c8e8c7e3138d8d5cb3118f635c71886ecf7 (which has cam6_4_128 as the last tag). The commit from this branch that just gets the newest version of clubb in (4d9b1b8a528ca532d964c1799e1860e96e068a12) fails the ECT test when compared to the cam_development baseline. So we will need some help confirming the changes as acceptable.

To check the more "dangerous" changes to clubb_intr, I made another baseline from commit 4d9b1b8a528ca532d964c1799e1860e96e068a12 (with new clubb), then compared that to the head of this branch, and that passed the ECT test. I also made sure to run with settings that have top_lev = 3 so that we can test the redimensioning changes, which I did by setting trop_cloud_top_press to 1.D3 in the namelist instead of 1.D2.

Other things I tested:

  • When running with -O0 and l_test_grid_generalization = .true. (a clubb internal parameter), the code is BFB between the ascending and descending modes in clubb_intr (changed by setting l_ascending_grid)
  • Also, when running with -O0, the code is BFB between 4d9b1b8a528ca532d964c1799e1860e96e068a12 and the head of this branch, apart from the few output variables that differ above top_lev; see the comment from this commit for more detail
  • The ERP test passes
  • The GPU code runs to completion. I haven't checked results with an ECT test yet, but that's the only thing left on the list to do, and any further changes to make that work (if needed) won't affect CPU results.

So I think the clubb_intr changes are pretty well tested and confirmed. But the changes from new clubb don't pass the ECT test, and we'll need to confirm them as acceptable.

Note

Also, I found a number of examples where clubb_intr is interacting above top_lev, and I'm not sure exactly what we should do about that. For now I've left the errors in place for BFBness, but I've made little code comments explaining the issue (all contain a TODO). There's also a TODO about how we're setting concld_pbuf, which seems suspicious, but is not a clear bug. @adamrher, could you take a look at these "TODO" comments and see if you have any insight, please? I added what I think the "correct" lines should be, and what the errors were that I left in for BFBness.

@huebleruwm
Author

I ran some ECT tests with the performance options:

  • clubb_tridiag_solve_method = 2 passes the ECT test and should be significantly faster
  • clubb_penta_solve_method = 2 passes the ECT test and should be significantly faster, but requires clubb_release commit bba14c86a748da7993af72c056a9f6505cf43f1b or newer, since I just fixed a bug in it
  • clubb_fill_holes_type = 4 FAILS the ECT test, which was expected since it fills holes in the same conservative way, but over completely different ranges, making it answer-changing. This should also be decently faster, so it would be nice to test if possible.

@adamrher
Collaborator

adamrher commented Dec 5, 2025

@huebleruwm I checked out this PR branch, and while the clubb_intr.F90 changes seem to be there, the .gitmodules file is still pointing to the clubb external currently on cam_development:

[submodule "clubb"]
        path = src/physics/clubb
        url = https://github.com/larson-group/clubb_release
        fxrequired = AlwaysRequired
        fxsparse = ../.clubb_sparse_checkout
        fxtag = clubb_4ncar_20240605_73d60f6_gpufixes_posinf
        fxDONOTUSEurl = https://github.com/larson-group/clubb_release

This should be pointing to a different tag / hash containing support for descending / ascending options, no?

@huebleruwm
Author

This should be pointing to a different tag / hash containing support for descending / ascending options, no?

Yes. In hindsight, it would've made much more sense to update that with the right hash each commit, rather than just including it in the commit comment.

I've just been going to src/physics/clubb and running git checkout master to get the newest version, or git checkout <hash> when testing the first handful of commits:
cam 3d40d8c0e03a298ae3925564bc4db2f0df5e9437 works with clubb hash 1632cf12
cam e67fc4719fa21f8e449b024a0e3b6df2d0a7f8cb works with clubb hash 673beb05
cam 187d7b536c2f36968fc7f5e1b9d1167e430ad03f works with clubb hash dc302b95
cam e4b71220b33aeaddb0afc68c9103555edccb59eb works with clubb hash dc302b95
cam 703aca60ed1e0b6b24f2cd890c3a4497041d25b8 works with clubb hash d5957b30 or newer
cam 4d9b1b8a528ca532d964c1799e1860e96e068a12 works with clubb hash d5957b30 or newer

@adamrher
Collaborator

That's what I was thinking too - it's for debugging rather than speed, and it's minimally invasive when it's directly around the advance_clubb_core call.

We could put if statements around the l_ascending_grid blocks - if t=1 for the before block and if t=nadv for the after block?
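
Something along these lines, just to illustrate the placement (flip_to_ascending/flip_to_descending are hypothetical stand-ins for the existing flip blocks):

    do t = 1, nadv
       if ( l_ascending_grid .and. t == 1 )    call flip_to_ascending()    ! "before" block runs only on the first substep
       ! ... advance_clubb_core_api call and the rest of the substep ...
       if ( l_ascending_grid .and. t == nadv ) call flip_to_descending()   ! "after" block runs only on the last substep
    end do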

I also saw advance_clubb_core increasing by ~5% - I think because of the added slicing in the call when pcols/=ncol

Is this added slicing in 1a, or just 6a?

The big discrepancy is definitely interesting though, I wonder if there's anything we're doing different besides the run length.

By big discrepancy, you mean that 4.7% slowdown I'm getting for 1a v. 6a w/ ascending on? Could that be the added advance_clubb_core costs from the slicing?

Would you be willing to move the timers, specifically the clubb_tend_cam:advance_clubb_core_api around the advance_clubb_core call, and clubb_tend_cam:flip-index inside the l_ascending_grid blocks?

@huebleruwm
Author

huebleruwm commented Dec 18, 2025

One of the reasons the new ascending code is slower is that the flipping step used to be combined with the data copy step. When we copy the CAM data structures (e.g. state_loc) to arrays for calling clubb (e.g. cloud_frac_inout), we used to copy and flip within the same loop, but now we copy and then flip separately. This was to make the flipping option minimally invasive, affecting only advance_clubb_core, but it adds a large amount of cost when turning on l_ascending_grid.
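
As a standalone sketch of the difference (dummy names and sizes, not the real clubb_intr arrays), the old combined loop touches each element once, while the new copy-then-flip touches it twice, which is where the extra ascending-mode cost comes from:

    program copy_flip_sketch
      implicit none
      integer, parameter :: r8 = selected_real_kind(12)
      integer, parameter :: ncol = 4, pver = 6
      real(r8) :: state_q(ncol,pver)      ! CAM-ordered (descending) source
      real(r8) :: rtm_in(ncol,pver)       ! CLUBB-ordered (ascending) destination
      integer  :: i, k

      call random_number(state_q)

      ! old approach: copy and flip in the same loop (one pass over the data)
      do k = 1, pver
        do i = 1, ncol
          rtm_in(i,pver-k+1) = state_q(i,k)
        end do
      end do

      ! new approach: copy first, then flip in a separate step (two passes),
      ! so the flip can live directly around advance_clubb_core only
      rtm_in = state_q
      rtm_in = rtm_in(:,pver:1:-1)

      print *, rtm_in(1,:)
    end program copy_flip_sketch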

We are also flipping more arrays now: some of the pdf_params need flipping too because SILHS is descending-only now. Some other things didn't (and don't) need flipping at all because they weren't (and aren't) used in clubb_intr, but they are flipped now just in case they do get used in the future.

The double flipping also adds cost, but I believe the only ways to reduce the number of times we flip will make it more invasive and require more code to work in ascending mode. For example, if we flip the arrays only once per nadv loop, then all the code inside the nadv loop (like integrate_mf) will need to support both ascending and descending modes, or have their own grid-direction conditional flipping step.

So we could definitely reduce the cost in ascending mode by just not flipping certain things or by reducing the frequency of flips, but I vote we just don't worry about it since it's only a debugging option.

By big discrepancy, you mean that 4.7% slowdown I'm getting for 1a v. 6a w/ ascending on? Could that be the added advance_clubb_core costs from the slicing?

By discrepancy I meant why you saw a cost increase in descending mode, but I'm seeing a decrease. The last test you did in descending mode found it slower overall than 0a and 1a. But now that I think about it, that wasn't using the clubb_tend_cam timers directly, was it? I'm not sure why that would give such a different answer, but it might have something to do with it.

@adamrher
Collaborator

So we could definitely reduce the cost in ascending mode by just not flipping certain things or by reducing the frequency of flips, but I vote we just don't worry about it since it's only a debugging option.

That's fine by me.

From 1a to 6a I found clubb_intr reducing in cost by ~29% though. I also saw advance_clubb_core increasing by ~5%

I was able to reproduce this result (labeled 6d below), with clubb_intr (clubb_tend_cam - advance_clubb_core) reducing costs by 25% and advance_clubb_core increasing by 5%. This is based on 4 month runs with 512 tasks.

Turning on the performance options (new solvers and fixed version of new hole filler) reduced advance_clubb_core cost by ~12%

I was able to get basically the same number, a 15% reduction in advance_clubb_core costs (case 7d-new_fast below).

Here's my stacked bar plot (sorry about the lack of color). The bottom is advance_clubb_core and the top is clubb_intr = clubb_tend_cam - advance_clubb_core. Overall, the clubb_tend_cam cost going from 0a (clean cam6_4_128) to 7d-new_fast is a 15% reduction.

[stacked bar plot image: temp_bar]

I'll move on to the science validation for 7d-new_fast.

@huebleruwm
Author

I was able to get basically the same number, a 15% reduction in advance_clubb_core costs (case 7d-new_fast below).

Oh excellent, maybe the difference was in the timers being looked at.

Would you be willing to move the timers, specifically the clubb_tend_cam:advance_clubb_core_api around the advance_clubb_core call, and clubb_tend_cam:flip-index inside the l_ascending_grid blocks?

I think it would be nice to move the timers a little too, at least to have the clubb_tend_cam:advance_clubb_core_api timer directly around the call, since then we can actually compare the time in ascending vs descending without the flipping cost. I realized that another explanation for why advance_clubb_core could be slightly slower is that it's slower in descending mode - 5% increase overall just for that little slicing in the call seems odd, and ascending vs descending performance is something I haven't explored in detail yet. I'll move the timers and do a couple more runs to check.
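
Roughly what I have in mind for the placement, assuming the usual perf_mod t_startf/t_stopf calls (flip code abbreviated):

    if (l_ascending_grid) then
       call t_startf('clubb_tend_cam:flip-index')
       ! ... flip inputs to ascending ...
       call t_stopf('clubb_tend_cam:flip-index')
    end if

    call t_startf('clubb_tend_cam:advance_clubb_core_api')
    ! ... advance_clubb_core_api call goes here ...
    call t_stopf('clubb_tend_cam:advance_clubb_core_api')

    if (l_ascending_grid) then
       call t_startf('clubb_tend_cam:flip-index')
       ! ... flip outputs back to descending ...
       call t_stopf('clubb_tend_cam:flip-index')
    end if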

@huebleruwm
Author

I realized that another explanation for why advance_clubb_core could be slightly slower is that it's slower in descending mode - 5% increase overall just for that little slicing in the call seems odd, and ascending vs descending performance is something I haven't explored in detail yet.

I moved the timing statements around slightly and found that advance_clubb_core is about 1% slower in descending mode (assuming that's not just random error). But that doesn't explain the 5% slowdown, so I think we can chalk it up mainly to the added slicing. Nothing we can really do about it at the moment though, just mentioning it here to document the finding.

@adamrher
Collaborator

adamrher commented Jan 7, 2026

@huebleruwm I started working on converting clubb_mf to descending mode, but I'm having trouble compiling the head of your branch. Here are the first few errors:

/glade/campaign/cgd/amp/aherring/src/cam_dev_clubb_nov2025_clean/src/physics/cam/clubb_intr.F90(1466): error #6580: Name in only-list does not exist or is not accessible.   [ERR_INFO_TYPE]

/glade/campaign/cgd/amp/aherring/src/cam_dev_clubb_nov2025_clean/src/physics/cam/clubb_intr.F90(1467): error #6580: Name in only-list does not exist or is not accessible.   [INIT_DEFAULT_ERR_INFO_API]

/glade/campaign/cgd/amp/aherring/src/cam_dev_clubb_nov2025_clean/src/physics/cam/clubb_intr.F90(1468): error #6580: Name in only-list does not exist or is not accessible.   [CLEANUP_ERR_INFO_API]

/glade/campaign/cgd/amp/aherring/src/cam_dev_clubb_nov2025_clean/src/physics/cam/clubb_intr.F90(1470): error #6580: Name in only-list does not exist or is not accessible.   [INIT_CLUBB_PARAMS_API]

And here's my create_newcase command:

/glade/campaign/cgd/amp/aherring/src/cam_dev_clubb_nov2025_clean/cime/scripts/create_newcase --case /glade/derecho/scratch/aherring/cam_dev_clubb_nov2025_clean_FHISTC_LTso_ne30pg3_ne30pg3_mg17_512pes_260107_clean --compset FHISTC_LTso --res ne30pg3_ne30pg3_mg17 --driver nuopc --walltime 00:29:00 --mach derecho --pecount 512 --project P93300042 --compiler intel --queue main --run-unsupported

Are you able to reproduce these errors?

@huebleruwm
Author

I started working on converting clubb_mf to descending mode, but I'm having trouble compiling the head of your branch.

Is that using the newest version of clubb? Those errors could be explained if you didn't check out the newest clubb branch after checking out externals.

@adamrher
Collaborator

adamrher commented Jan 8, 2026

Oof, you're right. I'm a little rusty from the break.

@adamrher
Collaborator

adamrher commented Jan 8, 2026

@huebleruwm I did a simple commit for that bug fix in clubb_intr.F90 but it seems to have also committed src/clubb, since I did a git checkout master. Do you have any advice going forward as I plan to make several commits?

@huebleruwm
Author

Oof, you're right. I'm a little rusty from the break.

I blame myself for not having updated the tag along the way.

@huebleruwm I did a simple commit for that bug fix in clubb_intr.F90 but it seems to have also committed src/clubb, since I did a git checkout master. Do you have any advice going forward as I plan to make several commits?

I've done that before too. I do wish there was a nicer way, but I've gotten used to just running git add inside src/cam or adding files individually.

@huebleruwm
Author

@adamrher Also, we just merged in a waiting PR to clubb that adds a couple new options. I'd like to make one final commit to this branch to get those in, and we can make a new tag for this version that I'll (finally) update the Externals_CAM with.

The change set will be very small, so it shouldn't interfere with anything you're doing, but I can hold off on it until you're done making changes if you'd prefer. Otherwise I can get that done by tomorrow.

@adamrher
Collaborator

adamrher commented Jan 8, 2026

I just pushed my last commit, so it's all yours now. It changes clubb_mf to operate on descending arrays, and I've confirmed it's b4b with the ascending version of the code. We can now get rid of the ascending code entirely. I'll leave that to you, along with making a new clubb tag and putting it in .gitmodules (Externals_CAM is deprecated).

I should be able to finish up the science validation by tomorrow -- I'll let you and Vince know.

@adamrher
Collaborator

adamrher commented Jan 8, 2026

Is the clubb update @HingOng's PR that incorporates non-traditional Coriolis terms? That would be great to include here.

@huebleruwm
Author

Is the clubb update @HingOng's PR that incorporates non-traditional Coriolis terms? That would be great to include here.

Yes, that's the one. Just waiting on the nightly tests to make sure they all pass. It should all be BFB with the new flags set to false.

@vlarson

vlarson commented Jan 8, 2026

I just pushed my last commit, so it's all yours now. It changes clubb_mf to operate on descending arrays, and I've confirmed it's b4b with the ascending version of the code. We can now get rid of the ascending code entirely. I'll leave that to you

Which ascending code is that? Is this ascending code you're referring to specific to CLUBB-MF or does it apply more generally to CLUBB? I thought we were going to leave in some ascending code just to allow testing of the ascending version of CLUBB within CAM.

@adamrher
Collaborator

adamrher commented Jan 8, 2026

Which ascending code is that? Is this ascending code you're referring to specific to CLUBB-MF or does it apply more generally to CLUBB? I thought we were going to leave in some ascending code just to allow testing of the ascending version of CLUBB within CAM.

I forgot that we did agree to keep the ascending code option for CLUBB for debugging purposes. As for CLUBB-MF, the integrate_mf call in clubb_intr.F90 operates on ascending arrays, and so here I just flipped it so it operates on descending arrays. I suppose we can either add an endrun statement if clubb_l_ascending_grid=.true. and do_clubb_mf=.true. or I can add logic so that CLUBB-MF can run in ascending mode as well. I would need to make clubb_l_ascending_grid available in clubb_mf.F90, which contains the relevant integrate_mf changes.

@huebleruwm
Author

I suppose we can either add an endrun statement if clubb_l_ascending_grid=.true. and do_clubb_mf=.true. or I can add logic so that CLUBB-MF can run in ascending mode as well.

Setting clubb_l_ascending_grid only flips things directly before and after the advance_clubb_core call, so it won't affect integrate_mf at all. So I don't think any further changes are needed. I suppose we could confirm this by testing clubb_l_ascending_grid=.true. with clubb_mf, since they should work together and produce the same answer, but it's probably safe to skip.

@adamrher
Collaborator

adamrher commented Jan 9, 2026

Setting clubb_l_ascending_grid only flips things directly before and after the advance_clubb_core call, so it won't affect integrate_mf at all. So I don't think any further changes are needed. I suppose we could confirm this by testing clubb_l_ascending_grid=.true. with clubb_mf, since they should work together and produce the same answer, but it's probably safe to skip.

Ah, yeah, you're right. Bottom line is that I changed integrate_mf to operate on descending arrays, which allowed me to remove the array flipping before and after the call to integrate_mf in clubb_intr.F90. This reduces the CLUBB-MF footprint in clubb_intr.F90 and so I think it's a welcome change, and is independent of whether or not CLUBB operates in ascending or descending mode.

@huebleruwm
Author

huebleruwm commented Jan 10, 2026

I added the changes needed for the new clubb coriolis code and made a tag that I updated the gitmodules with (clubb_4ncar_20260109_ddf5110).

@huebleruwm huebleruwm force-pushed the cam_dev_clubb_nov2025 branch from 9236e32 to fbcaf32 on January 10, 2026 20:23
@huebleruwm
Author

huebleruwm commented Jan 17, 2026

I did some more targeted timings with the options we're considering turning on by default. All speedups are measured against a baseline run with the current default settings.

  • setting clubb_penta_solve_method = 2 reduces the cost of advance_clubb_core by ~10%
  • setting clubb_tridiag_solve_method = 2 reduces the cost of advance_clubb_core by ~4%
  • setting clubb_fill_holes_type = 4 reduces the cost of advance_clubb_core by a disappointing ~2%

The small speedup from the new fill_holes method seems to suggest that filling is either rare in CAM or being done on sparser hole patterns than what's seen in clubb_standalone. The upside is that if the new hole filler is both changing answers and barely a speedup, then we can just leave it set to 2 and we still get most of the speedup we were hoping for and no answer changes - assuming the new matrix solver options aren't also causing answer changes.

@adamrher
Collaborator

The upside is that if the new hole filler is both changing answers and barely a speedup, then we can just leave it set to 2 and we still get most of the speedup we were hoping for and no answer changes

This turns out to be the case. I ran a 20 year F-case with the new solver options and using the default hole filling option 2 (7d_pe2), and the answers aren't any different from the control. The table below shows that the area of the globe with significant answer changes is not outside of natural variability (0a*-0a) for most variables (note I don't put much weight on the T850/T950 result since the areas of significance occur where my interpolation of T to a fixed pressure level intersects the surface, which I don't treat properly for this field).

[screenshot: table of significant-answer-change area by variable]

@vlarson

vlarson commented Jan 18, 2026

The upside is that if the new hole filler is both changing answers and barely a speedup, then we can just leave it set to 2 and we still get most of the speedup we were hoping for and no answer changes

This turns out to be the case. I ran a 20 year F-case with the new solver options and using the default hole filling option 2 (7d_pe2), and the answers aren't any different from the control.

For this configuration, what speed-up do you see?

@adamrher
Collaborator

@vlarson just looking at one of the 2-year segments for each case, the mean clubb_tend_cam time in 7d_pe2 is reduced by 9.5% relative to 6d. Compare that to 7d_pe3, which is reduced by 10.3% relative to 6d. Gunther's numbers would suggest that combining the two new solvers would give a 14% reduction in 7d_pe2 (provided clubb_intr is identical in all these runs, which I believe is true). There's an envelope of error due to load imbalancing, which I tend to think means longer runs have more reliable timings.

Collaborator

@adamrher adamrher left a comment


@huebleruwm this is my first pass at a code review, making it only to the beginning of clubb_tend_cam. I had a few questions and clarifications but no major issues. There's a lot of code cleanup and variable clarification, which is awesome.

I'm not sure how much time I'll have in the next two weeks to finish this code review, with AMWG coming up two weeks from today. I'll get to it as soon as I can.

w_down_in_cloud_out, &
cloudy_updraft_frac_out, &
cloudy_downdraft_frac_out, &
rcm_in_layer, & ! CLUBB output of in-cloud liq. wat. mix. ratio [kg/kg]
Collaborator


curious why you removed the _out modifier?

Author


Mainly because I wasn't sure what the point of them was. It seemed like they were meant to signify the purpose of those variables in the clubb call, but that's also not something we do anywhere else in clubb and, given that we already organize the variable order in the advance_clubb_core call, I'm not sure how it helps. Also, not all of them had suffixes, and after going through it for a while I realized that the only ones that had a suffixed version were ones that were duplicated due to there being a pbuf version and a local version (which is no longer the case). So my guess is that it was done just because we needed multiple variables and adding a suffix is a nice and simple way to name them.

qrl_clubb, &
qclvar_out, & ! cloud water variance [kg^2/kg^2]
zt_g, & ! Thermodynamic grid of CLUBB [m]
Lscale, &
Collaborator


Should Lscale have an _out modifier?

Author


I don't think it would be a bad idea. The suffixes were (and still are) inconsistent - some inouts have _inout, but most don't. Most outputs do have _out though. Both rcm_in_layer and Lscale could be suffixed with _out if the suffixes are preferred.

inv_exner_tmp, & ! Inverse exner function consistent with CLUBB [-]
dlf2, & ! Detraining cld H20 from shallow convection [kg/kg/day]
dum1, & ! dummy variable [units vary]
invrs_hdtime, &
Collaborator


Is the reason you're introducing more of these invrs_ constants to avoid division inside loops? Does this still matter for modern compilers?

Author


Yes, just for performance. I can't say I've investigated how capable modern compilers are at optimizing this out, but I suspect it still helps.
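
As a trivial standalone example of the pattern (dummy names, same idea as invrs_hdtime): one division hoisted out of the loop, multiplies inside it.

    program invrs_sketch
      implicit none
      integer, parameter :: r8 = selected_real_kind(12)
      integer, parameter :: n = 1000
      real(r8) :: dq(n), tend(n), hdtime, invrs_hdtime
      integer  :: k

      hdtime = 1800.0_r8
      call random_number(dq)

      invrs_hdtime = 1.0_r8 / hdtime      ! one division, computed outside the loop
      do k = 1, n
        tend(k) = dq(k) * invrs_hdtime    ! multiply by the reciprocal instead of dividing by hdtime
      end do

      print *, sum(tend)
    end program invrs_sketch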

end if

! Initialize err_info with parallelization and geographical info
call init_err_info_api(ncol, lchnk, iam, state_loc%lat*rad2deg, state_loc%lon*rad2deg, err_info)
Collaborator


Curious what more information is provided by this new error functionality. The clubb currently on cam_development couldn't tell you the geographical info of the actual column giving the error, b/c the bad column index isn't propagated outside of advance_clubb_core, and geographical info was computed outside advance_clubb_core in clubb_intr. Seems like this geo info is now taken inside clubb and so that issue is resolved here? And it's also giving us the lchnk and rank as well? If so, awesome.



Yes, the goal is to print out the lat, lon, chunk, and column within the chunk. However, this functionality hasn't been tested. Is there an easy way to test whether the error message is printed and is correct?
