
Conversation

@jnnsbrr
Collaborator

@jnnsbrr jnnsbrr commented Feb 5, 2026

This PR introduces new features for compatibility with the latest pycopanlpjml version:

  • default model name set to copan instead of copan:CORE; it can be overridden via CoupledConfig
  • functionality to kill remaining processes on the coupled port, to avoid a blocked port in the next simulation
  • fix for reading historic years during coupling
  • adjusted slurm.jcf file creation for older LPJmL versions, to avoid Slurm ending the job before the coupled program has finished
  • deprecation warnings for old object names and attributes
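The deprecation warnings mentioned above could be emitted through one small shared helper; a minimal sketch (names hypothetical, not the actual pycoupler API):

```python
import warnings

def deprecation_warning(old_name, new_name):
    """Hypothetical helper: emit one standardized DeprecationWarning
    pointing callers from an old object/attribute name to the new one."""
    warnings.warn(
        f"'{old_name}' is deprecated; use '{new_name}' instead.",
        DeprecationWarning,
        stacklevel=2,
    )
```

Centralizing the message format keeps every deprecated alias consistent and makes the warnings easy to filter in tests.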

@codecov

codecov bot commented Feb 6, 2026

Codecov Report

❌ Patch coverage is 86.02484% with 45 lines in your changes missing coverage. Please review.
✅ Project coverage is 81.36%. Comparing base (9490c5e) to head (8038fe8).

Files with missing lines (Patch % / Missing lines):
  • pycoupler/data.py: 88.06% / 21 Missing ⚠️
  • pycoupler/coupler.py: 81.35% / 11 Missing ⚠️
  • pycoupler/run.py: 85.29% / 10 Missing ⚠️
  • pycoupler/config.py: 40.00% / 3 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main      #17      +/-   ##
==========================================
+ Coverage   78.47%   81.36%   +2.88%     
==========================================
  Files           7        7              
  Lines        1640     1937     +297     
==========================================
+ Hits         1287     1576     +289     
- Misses        353      361       +8     


@jnnsbrr jnnsbrr requested review from Copilot and zner0L and removed request for Copilot February 6, 2026 12:41
@jnnsbrr changed the title from "pycopanlpjml update" to "update for pycopanlpjml" Feb 6, 2026

Copilot AI left a comment


Pull request overview

This PR updates pycoupler for compatibility with newer pycopanlpjml/LPJmL coupling expectations, including a new default coupled model name, improved Slurm coupled-job handling, additional port/process cleanup, and expanded test coverage for utilities, run submission, and NetCDF export helpers.

Changes:

  • Default coupled model name updated to copan, with support for overriding via a coupled config.
  • Slurm coupled-job submission now patches slurm.jcf to ensure the coupler process is waited on (and submits via sbatch when needed).
  • Adds NetCDF writing + grid transform utilities for LPJmLData/LPJmLDataSet, plus substantial new tests.

Reviewed changes

Copilot reviewed 16 out of 17 changed files in this pull request and generated 10 comments.

Summary per file:
  • pycoupler/config.py: Adjusts coupled config defaults/behavior (model default + coupled config support).
  • pycoupler/coupler.py: Adds port cleanup utilities; fixes historic years handling; improves input sending conversion logic.
  • pycoupler/run.py: Introduces start_lpjml and a deprecated alias; patches the Slurm submission flow for coupled runs.
  • pycoupler/utils.py: Adds a standardized deprecation warning helper.
  • pycoupler/data.py: Implements transform() and NetCDF export helpers for LPJmL data structures.
  • tests/test_config.py: Updates expectations for the new default coupled model.
  • tests/test_couple.py: Adjusts time assignment and historic years expectation in the coupling test.
  • tests/test_run.py: Extends Slurm submission tests to cover slurm.jcf patching and the sbatch flow.
  • tests/test_run_additional.py: Adds additional run-related tests, incl. deprecated alias warning behavior.
  • tests/test_utils.py: Extends detect_io_type test coverage for invalid UTF-8 bytes.
  • tests/test_utils_additional.py: Adds coverage for create_subdirs and read_json.
  • tests/test_data.py: Adds comprehensive coverage for grid transforms and NetCDF writing.
  • tests/test_coupler_utils.py: Adds coverage for new port cleanup utilities.
  • tests/data/invalid_utf8.bin: Adds a sample binary file (currently appears redundant with the updated test).
  • pycoupler/release.py: Clarifies release script instructions.
  • pycoupler/__init__.py: Minor formatting cleanup.
  • CITATION.cff: Bumps version to 1.7.0.
Comments suppressed due to low confidence (1)

pycoupler/run.py:265

  • slurm_jcf_dir is documented/used as the directory where slurm.jcf should be written, but the lpjsubmit subprocess is still executed without cwd=.... If lpjsubmit writes slurm.jcf to its working directory (typical), _patch_slurm_and_submit() will look in slurm_jcf_dir/slurm.jcf and fail even though the file exists elsewhere. Consider ensuring the directory exists and running lpjsubmit with cwd=slurm_jcf_dir (or otherwise directing lpjsubmit output) so slurm_jcf_path matches where the file is actually created.
    # run in coupled mode and pass coupling program/model
    needs_coupler_wait = bool(couple_to)
    if slurm_jcf_dir is None:
        slurm_jcf_dir = os.getcwd()
    slurm_jcf_path = Path(slurm_jcf_dir) / "slurm.jcf"

    couple_file = None

    if couple_to:
        python_path = "python3"
        if venv_path:
            python_path = os.path.join(venv_path, "bin/python")
            if not os.path.isfile(python_path):
                raise FileNotFoundError(
                    f"venv path contains no python binary at '{python_path}'."
                )

        bash_script = f"""#!/bin/bash

# Define the path to the config file
config_file="{config_file}"

# Call the Python script with the config file as an argument
{python_path} {couple_to} $config_file
"""

        couple_file = f"{output_path}/copan_lpjml.sh"

        with open(couple_file, "w") as file:
            file.write(bash_script)

        # Change the permissions of the file to make it executable
        run(["chmod", "+x", couple_file])

        cmd.extend(["-couple", couple_file])

    if needs_coupler_wait:
        cmd.append("-norun")

    cmd.extend([str(ntasks), config_file])

    # Initialize submit_status in higher scope
    submit_status = None
    # set LPJROOT to model_path to be able to call lpjsubmit
    try:
        os.environ["LPJROOT"] = config.model_path
        # call lpjsubmit via subprocess and return status if successful
        submit_status = run(cmd, capture_output=True)
    except Exception as e:

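The reviewer's suggested direction could be sketched like this (hypothetical helper name; the real flow wraps the lpjsubmit call shown above):

```python
import subprocess
from pathlib import Path

def submit_in_jcf_dir(cmd, slurm_jcf_dir):
    """Hedged sketch of the reviewer's suggestion: create the directory
    and run the submit command with cwd set to it, so a slurm.jcf written
    to the working directory lands where it is later looked up."""
    Path(slurm_jcf_dir).mkdir(parents=True, exist_ok=True)
    return subprocess.run(cmd, capture_output=True, cwd=slurm_jcf_dir)
```

With this, `Path(slurm_jcf_dir) / "slurm.jcf"` and the file lpjsubmit creates refer to the same location.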

Comment on lines 611 to +626
# read all historic outputs
hist_years = list()
output_years = list()
for year in self.get_historic_years():
    hist_years.append(year)
    if year == self._config.outputyear:
        output_dict = self.read_output(year=year, to_xarray=False)
        output_years.append(year)
    elif year > self._config.outputyear:
        output_dict = append_to_dict(
            output_dict, self.read_output(year=year, to_xarray=False)
        )
        output_years.append(year)

if not output_dict:
    raise ValueError(
        f"No historic output found for year {self._config.outputyear}"
    )

Copilot AI Feb 6, 2026


read_historic_output() uses output_dict before it is guaranteed to be initialized. If self._config.outputyear is not encountered in get_historic_years() (e.g., outputyear < firstyear), the elif year > self._config.outputyear: branch will reference output_dict before assignment, and the later if not output_dict: check can also raise UnboundLocalError. Initialize output_dict up front (e.g., to {}) and guard the append path until the first read succeeded; also consider raising a clearer error when no output years were collected.

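The suggested fix can be sketched as a standalone function (hypothetical names and parameters; the real method lives on the coupler class and takes no such arguments):

```python
def read_historic_output_fixed(historic_years, outputyear, read_output, append_to_dict):
    """Sketch of the reviewer's fix: output_dict is initialized before the
    loop, so no branch can reference it unbound, and a clear error is
    raised when no output years were collected."""
    output_dict = {}
    output_years = []
    for year in historic_years:
        if year < outputyear:
            continue
        chunk = read_output(year)
        # First successful read seeds the dict; later years are appended.
        output_dict = chunk if not output_dict else append_to_dict(output_dict, chunk)
        output_years.append(year)
    if not output_years:
        raise ValueError(f"No historic output found from output year {outputyear}")
    return output_dict, output_years
```

If `outputyear` lies outside the historic range, this raises a descriptive ValueError instead of an UnboundLocalError.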
Comment on lines +460 to +466
enc: dict[str, object] = {}
target_fill = fill_value
if target_fill is None:
    if np.issubdtype(dtype, np.floating):
        target_fill = DEFAULT_NETCDF_FILL_VALUE
    elif np.issubdtype(dtype, np.integer):
        target_fill = -999

Copilot AI Feb 6, 2026


Default integer _FillValue is inconsistent between LPJmLData.to_netcdf() (uses -999) and _netcdf_encoding() / LPJmLDataSet.to_netcdf() (uses -9999). This can lead to mixed conventions depending on which writer is used. Consider standardizing on a single integer fill value (and ideally reusing one constant) across both code paths.

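One way to standardize, sketched with hypothetical constant names and values (the PR's actual defaults differ between writers, which is the point of the comment):

```python
import numpy as np

# Hypothetical shared constants; the actual pycoupler values may differ.
DEFAULT_NETCDF_FILL_VALUE = 1.0e20
DEFAULT_INT_FILL_VALUE = -9999

def default_fill(dtype, fill_value=None):
    """One fill-value rule reused by every writer, so LPJmLData and
    LPJmLDataSet emit the same _FillValue convention."""
    if fill_value is not None:
        return fill_value
    if np.issubdtype(dtype, np.floating):
        return DEFAULT_NETCDF_FILL_VALUE
    if np.issubdtype(dtype, np.integer):
        return DEFAULT_INT_FILL_VALUE
    return None
```

Both `LPJmLData.to_netcdf()` and `_netcdf_encoding()` could then call this helper instead of hard-coding their own integer fill.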
Comment on lines +44 to +52
# Test binary file that can't be decoded as UTF-8 (triggers UnicodeDecodeError)
# Create a binary file that's not valid UTF-8
from pathlib import Path

binary_file = Path(test_path) / "data" / "invalid_utf8.bin"
binary_file.parent.mkdir(parents=True, exist_ok=True)
binary_file.write_bytes(b"\xff\xfe\x00\x01") # Invalid UTF-8 sequence
result = detect_io_type(str(binary_file))
assert result == "raw" # Should default to 'raw' when decode fails

Copilot AI Feb 6, 2026


This test writes invalid_utf8.bin into the checked-in tests/data directory via test_path, which can fail in environments where the source tree is read-only and also dirties the working tree during test runs. Prefer using tmp_path (or tmp_path_factory) for generated files, and keep test_path usage read-only.

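A tmp_path-based variant could look like this (the detect_io_type stub below is a hypothetical stand-in for pycoupler.utils.detect_io_type, included only so the sketch is self-contained):

```python
from pathlib import Path

def detect_io_type_stub(path):
    """Hypothetical stand-in: like the real function, fall back to 'raw'
    when the file cannot be decoded as UTF-8."""
    try:
        Path(path).read_text(encoding="utf-8")
        return "text"
    except UnicodeDecodeError:
        return "raw"

def test_detect_io_type_invalid_utf8(tmp_path):
    """pytest-style sketch: tmp_path keeps generated files out of the
    checked-in tests/data tree and leaves the working tree clean."""
    binary_file = tmp_path / "invalid_utf8.bin"
    binary_file.write_bytes(b"\xff\xfe\x00\x01")  # invalid UTF-8 sequence
    assert detect_io_type_stub(str(binary_file)) == "raw"
```

pytest injects a fresh `tmp_path` per test, so nothing needs cleanup and read-only source trees are never written to.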
Comment on lines +282 to 288
    coupled_config=None,
    dependency=None,
    coupled_year=None,
    temporal_resolution="annual",
    write_output=[],
    write_file_format="cdf",
    append_output=True,

Copilot AI Feb 6, 2026


Adding coupled_config as a positional parameter before dependency changes the positional argument order of set_coupled(), which can break existing callers that used positional args. To preserve backward compatibility, consider making the new parameter keyword-only (e.g., by adding * before it) or appending it at the end of the signature.

Suggested change
-    coupled_config=None,
     dependency=None,
     coupled_year=None,
     temporal_resolution="annual",
     write_output=[],
     write_file_format="cdf",
     append_output=True,
+    *,
+    coupled_config=None,

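Why keyword-only preserves positional callers, in a toy example (simplified, hypothetical signature, not the real set_coupled()):

```python
# Everything after `*` must be passed by keyword, so inserting
# coupled_config there cannot shift existing positional arguments.
def set_coupled(sim_name, dependency=None, coupled_year=None, *, coupled_config=None):
    return dependency, coupled_year, coupled_config

# An old positional caller still binds dependency/coupled_year correctly:
assert set_coupled("run", "job-42", 2023) == ("job-42", 2023, None)

# The new parameter must be passed by keyword:
assert set_coupled("run", coupled_config={"model": "copan"})[2] == {"model": "copan"}
```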
Comment on lines +47 to +49
    subprocess.run(["kill", "-9", pid.strip()], timeout=5)
    killed_count += 1
except subprocess.TimeoutExpired:

Copilot AI Feb 6, 2026


kill_process_on_port() increments killed_count even if the kill command fails (non-zero return code), because subprocess.run() is called without check=True and its return value is ignored. This can report success while leaving the port blocked. Consider checking the returncode (or using check=True and handling CalledProcessError) before counting a PID as killed.

Suggested change
-    subprocess.run(["kill", "-9", pid.strip()], timeout=5)
-    killed_count += 1
+    kill_result = subprocess.run(
+        ["kill", "-9", pid.strip()],
+        timeout=5,
+    )
+    if kill_result.returncode == 0:
+        killed_count += 1
 except subprocess.TimeoutExpired:
+    # If killing a specific PID times out, skip counting it
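A runnable version of that suggestion, as a standalone sketch (hypothetical function name; the real logic sits inside kill_process_on_port()):

```python
import subprocess

def kill_pid_checked(pid, timeout=5):
    """Sketch of the reviewer's fix: only report success when the kill
    command actually exits with returncode 0."""
    try:
        result = subprocess.run(["kill", "-9", str(pid).strip()], timeout=timeout)
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        # Best-effort cleanup: a timed-out kill is not counted as a success.
        return False
```

The caller can then increment `killed_count` only when this returns True, so the count reflects PIDs that were actually signalled.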
Comment on lines +70 to +80
@contextmanager
def safe_port_binding(host, port):
    """Context manager for safe port binding with automatic cleanup."""
    # Clean up any existing processes on the port first
    kill_process_on_port(port)

    try:
        yield port
    finally:
        # Clean up on exit
        kill_process_on_port(port)

Copilot AI Feb 6, 2026


safe_port_binding(host, port) currently never uses host and does not actually bind/check the port; it only performs pre/post cleanup. This makes the context manager name/signature misleading. Either remove the unused host parameter and rename to reflect its behavior (cleanup-only), or implement an actual bind/check step to ensure the port is available.

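If an actual bind/check is wanted, a sketch using a plain socket (hypothetical replacement, not current pycoupler code):

```python
import socket
from contextlib import contextmanager

@contextmanager
def bound_port(host, port):
    """Sketch of an actual bind/check variant: reserving the socket up
    front makes a port conflict fail fast instead of being masked."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    try:
        sock.bind((host, port))  # raises OSError if the port is taken
        yield sock
    finally:
        sock.close()
```

This also gives the `host` parameter a real purpose, addressing both halves of the comment.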
Comment on lines +449 to +474
global_attrs = _ensure_cf_metadata(lpjml)
dataset = lpjml.to_dataset()
_suppress_coordinate_fill(dataset)
if global_attrs:
    dataset.attrs.update(dict(global_attrs))
dataset.attrs.setdefault("Conventions", "CF-1.8")

var_name = lpjml.name or "__lpjml_data__"
dtype = lpjml.dtype
encoding: dict[str, dict[str, object]] = {}

enc: dict[str, object] = {}
target_fill = fill_value
if target_fill is None:
    if np.issubdtype(dtype, np.floating):
        target_fill = DEFAULT_NETCDF_FILL_VALUE
    elif np.issubdtype(dtype, np.integer):
        target_fill = -999
if target_fill is not None:
    enc["_FillValue"] = target_fill

if compression and np.issubdtype(dtype, np.number):
    enc["zlib"] = True
    enc["complevel"] = complevel

encoding[var_name] = enc

Copilot AI Feb 6, 2026


LPJmLData.to_netcdf() builds encoding using var_name = lpjml.name or "__lpjml_data__", but then calls dataset = lpjml.to_dataset() without forcing that variable name. When lpjml.name is None (allowed when path is provided), xarray will use its own default variable name, so the encoding entry may not apply to the written variable. Consider calling lpjml.to_dataset(name=var_name) (or renaming before to_dataset) so the encoding key always matches the dataset variable name.

import tempfile
import copy
import warnings
import subprocess

Copilot AI Feb 6, 2026


Module 'subprocess' is imported with both 'import' and 'import from'.

try:
    subprocess.run(["kill", "-9", pid.strip()], timeout=5)
    killed_count += 1
except subprocess.TimeoutExpired:

Copilot AI Feb 6, 2026


'except' clause does nothing but pass and there is no explanatory comment.

Suggested change
 except subprocess.TimeoutExpired:
+    # Ignore timeout errors during best-effort port cleanup.

raise ValueError("Test exception")

# Verify cleanup was called twice (start and finally)
assert mock_kill.call_count == 2

Copilot AI Feb 6, 2026


This statement is unreachable.

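One reachable rewrite, shown with a toy stand-in for safe_port_binding (the real cleanup calls kill_process_on_port): catch the expected exception so the assertion actually executes.

```python
from contextlib import contextmanager
from unittest import mock

@contextmanager
def cleanup_on_exit(cleanup):
    """Toy stand-in for safe_port_binding: cleanup runs on entry and on
    exit, even when the body raises."""
    cleanup()
    try:
        yield
    finally:
        cleanup()

cleanup = mock.Mock()
try:
    with cleanup_on_exit(cleanup):
        raise ValueError("Test exception")
except ValueError:
    pass  # catching the exception makes the following assert reachable

# Reachable now: cleanup ran once on entry and once in the finally block.
assert cleanup.call_count == 2
```

In a pytest suite the try/except would normally be `with pytest.raises(ValueError):`, with the assertion placed after the block.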