
A scalable Python framework that transforms algorithm practice into a data-driven, testable, and high-performance workflow—built to help developers grow faster and understand algorithms more deeply.


🧩 NeetCode Practice Framework



Solve. Forget. Repeat. Let’s Fix That.

🎯 Build Algorithmic Intuition

NeetCode is a scalable Python practice framework for algorithm learning and interview prep — build intuition and pattern recognition, turn ideas into clean implementations, and accumulate verifiable evidence (tests, stress cases, benchmarks, complexity checks) so your progress is real, repeatable, and interview-ready.

  • Learn the transferable skills: modeling, state/invariants, edge cases, complexity awareness, and reusable solution templates.
  • Interview-ready practice: time-boxed workflows, explain-while-coding, fewer “small bugs”, stronger trade-off discussions.
  • Prove correctness & robustness: static + seeded random + edge-case stress tests, custom judges, failure reproduction.
  • Measure and compare: benchmark multiple implementations and empirically estimate complexity.
  • See the big picture: ontology + AI mind maps reveal pattern relationships and learning paths.

📚 Docs · 🧪 Testing & Validation · 🤖 AI Mind Maps · 🧠 Interactive Mind Maps · 🚀 Quick Start · 📐 Patterns

English | 繁體中文


Topics: knowledge-graph ai-powered mind-map pattern-recognition leetcode neetcode-150 blind-75 stress-testing algorithm-engineering performance-benchmarking data-driven-testing random-test-generation judge-function algorithm-debugging competitive-programming python vscode-integration test-automation pre-commit local-automation coding-interview


💎 Core Philosophy

"Algorithm mastery is not about memorizing 300 solutions — it's about internalizing 15 fundamental patterns and knowing precisely when to apply each one."

This framework embodies three transformative principles:

🧬 Knowledge Graph Architecture

Traditional LeetCode practice treats problems as isolated units. We built an interconnected ontology system where:

  • API Kernels define reusable algorithmic primitives (SubstringSlidingWindow, GridBFS, BacktrackExplore)
  • Patterns compose kernels into higher-level strategies
  • Problem Families reveal structural relationships across 300+ problems
  • AI Synthesis discovers non-obvious connections humans miss

This is how experts think — in abstractions, not in solutions.

⚙️ Production-Grade Validation

Your solution passes LeetCode's tests. But is it correct? Is it optimal? We provide ICPC/Codeforces-caliber testing infrastructure:

Capability | What It Proves
🎲 Seeded Random Generation | Your code handles cases you never imagined
⚖️ Custom Judge Functions | Multiple valid answers are all accepted
📊 Multi-Solution Benchmarking | Which approach is actually faster
📈 Empirical Complexity Estimation | Your O(n log n) claim is verified

This is how Google engineers validate — through exhaustive, reproducible testing.

🤖 AI-Augmented Understanding

We don't just store knowledge — we synthesize insight:

  • AI analyzes the entire ontology to generate creative, interconnected mind maps
  • Multi-perspective synthesis: Architect × Professor × Engineer × Competitor
  • Problems link to GitHub solutions (when available) or LeetCode (fallback)

This is how the next generation learns — with AI as a thinking partner.


🌟 What Sets Us Apart

💡 "Great algorithmic skill isn’t about finding an answer — it’s about building systems that make correctness, performance, and learning provable."

📦 Other LeetCode Repos | 🚀 NeetCode
❌ Binary feedback ("Accepted / Wrong") | 🧩 Evidence-driven loop: golden tests + seeded fuzz + edge-case stress
❌ Single solution, unknown behavior | 🧩 Multiple implementations + side-by-side benchmarks
❌ Flat, tag-only pattern labels | 🧩 Interactive mind maps linking problems, patterns, and kernels
❌ No AI-assisted discovery | 🤖 AI-powered connections across related problems, patterns, and approaches
❌ Patterns limited to static notes | 🧠 Dual learning paths per pattern: intuition-driven explanations for mental models, plus reusable templates for interviews and fast recall
❌ Manual runs, inconsistent environments | ⚙️ Deterministic CLI + VS Code tasks/debug
❌ "Accepted" without proof | 🔍 Invariant-aware solutions + explicit failure modes
❌ Ad-hoc edge cases | 🧠 Systematic edge-case taxonomy
❌ Solution-first memorization | 🧠 Pattern-first transfer learning (interview-ready)
❌ Big-O as documentation only | 📊 Measured time / space trade-offs under identical inputs
❌ Complexity claimed, not verified | 📊 Complexity + empirical benchmarks under identical conditions
❌ Results hard to reproduce | ⚙️ Deterministic, reproducible experiments
❌ Flat problem collection | 🧩 Skill & pattern progression tracking
❌ Silent failures | 🔍 Auto-captured counterexamples for debugging
❌ Human-written notes only | 🤖 AI-augmented reasoning layer (summaries, maps, kernels)

Legend — Capability Categories
🧠 Learning & reasoning layer
🧩 System architecture & structure
⚙️ Execution & tooling infrastructure
📊 Empirical measurement & benchmarks
🔍 Debugging & correctness analysis
🤖 AI-assisted augmentation

🧠 The Knowledge Graph Advantage

Most people practice algorithms in isolation. We built an interconnected knowledge system:

Mind Map | Description | Link
🤖 AI Ontology Analysis (Evolved) | Generated via a multi-agent pipeline | 🔗 EN · 🔗 中文
🤖 AI Ontology Analysis | AI-powered deep pattern synthesis | 🔗 EN · 🔗 中文
📐 Pattern Hierarchy | API kernels → patterns → solutions | 🔗
👨‍👩‍👧‍👦 Family Derivation | Base templates → derived variants | 🔗
Algorithm Usage | Know which algorithm applies where | 🔗
🏢 Company Coverage | Target preparation for specific companies | 🔗
🗺️ Learning Roadmaps | NeetCode 150, Blind 75, etc. | 🔗

→ Explore 10+ Interactive Mind Maps

⚙️ Industrial-Strength Testing

Built on principles from Codeforces, ICPC, and Google's engineering practices:

Capability | What It Does | Why It Matters
🎲 Random Test Generation | Seeded generators for reproducibility | Find edge cases you never imagined
⚖️ Custom Judge Functions | ICPC-style validation logic | Multiple correct answers? No problem
📊 Multi-Solution Benchmark | Compare N approaches automatically | Know which is actually faster
📈 Complexity Estimation | Empirical Big-O analysis | Verify your theoretical claims
🔧 VS Code Integration | One-click debug, tasks, shortcuts | Debug algorithms like real software


⭐ Why This Framework?

The Problem with Traditional Practice

You solve a problem on LeetCode. It passes. But do you really know if your solution is correct? What about:

  • That edge case with empty input you didn't test?
  • The subtle off-by-one error that only appears with large N?
  • Whether your O(n log n) claim is actually true?

Traditional practice leaves these questions unanswered. This framework answers them definitively.

What Makes Us Different

Capability | This Framework | Typical Repos
Reproducible Random Tests | ✅ Seeded generators | ❌ Manual only
Custom Judge Functions | ✅ ICPC/Codeforces style | ❌ String match
Multi-Solution Benchmarking | ✅ Compare N approaches | ❌ Single solution
VS Code Integration | ✅ Tasks, Debug, Shortcuts | ❌ CLI only
Stress Testing | ✅ Generate 1000+ cases | ❌ Limited
Complexity Estimation | ✅ Automatic Big-O | ❌ None

Built For Excellence

Audience | How We Help
🏆 Competitive Programmers | Train like Codeforces grandmasters — stress test until you break your code, then fix it
💼 FAANG Engineers | Build interview confidence by proving your solutions work, not just hoping they do
🎓 CS Students | Learn algorithms the right way — through experimentation, not memorization
👨‍🏫 Educators | Give students industrial-grade tools to validate their understanding
🔬 Researchers | Benchmark algorithm variants at scale with reproducible methodology

🚀 Quick Start

1. Setup Environment

Windows (PowerShell)

# Clone the repository and navigate into it
git clone https://github.com/lufftw/neetcode.git
cd neetcode

# Install Python 3.11 (if needed)
py install 3.11

# Create and activate virtual environment
py -3.11 -m venv leetcode
leetcode\Scripts\activate

# Install dependencies
pip install -r requirements.txt

Linux / macOS

# Using pyenv (recommended)
pyenv install 3.11
pyenv local 3.11

# Create and activate virtual environment
python -m venv leetcode
source leetcode/bin/activate

# Install dependencies
pip install -r requirements.txt

# Make scripts executable
chmod +x scripts/run_tests.sh scripts/run_case.sh scripts/new_problem.sh

2. Create Your First Problem

# Windows
scripts\new_problem.bat 0001_two_sum

# Linux/macOS
./scripts/new_problem.sh 0001_two_sum

This creates:

  • solutions/0001_two_sum.py — Your solution file
  • tests/0001_two_sum_1.in — Test input
  • tests/0001_two_sum_1.out — Expected output

3. Run Tests

# Windows
scripts\run_tests.bat 0001_two_sum

# Linux/macOS
./scripts/run_tests.sh 0001_two_sum

4. Debug in VS Code

  1. Open any solution file in solutions/
  2. Press F5 to debug with test case #1
  3. Or press Ctrl+Shift+B to run all tests

That's it! You're ready to solve problems. 🎉


✨ Key Features

Feature | Description
🧪 Testing & Validation Engine | Core Feature — Automated testing, benchmarking, random test generation, complexity estimation. See Testing & Validation Guide
🤖 AI Ontology Analysis | AI-powered knowledge graph synthesis — discover pattern relationships humans miss
🎲 Random Test Generation | Seeded generators for reproducibility, stress test with 1000+ cases, auto-save failing cases
⚖️ Custom Judge Functions | Validate multiple correct answers, ICPC-style validation, works without expected output
📊 Performance Analysis | Benchmark multiple solutions, automatic time complexity estimation, side-by-side comparison
🔧 VS Code Integration | One-click test execution, integrated debugging, custom tasks and shortcuts
🧠 Interactive Mind Maps | Visualize algorithm patterns, track learning progress — Explore →

🧠 Interactive Mind Maps

Visualize algorithm patterns, problem relationships, and learning paths:

🤖 AI-Powered Ontology Analysis (NEW!)

"Let AI synthesize what takes humans years to internalize."

Our AI Ontology Analyzer processes the entire knowledge graph — API Kernels, Patterns, Algorithms, Data Structures, Problem Families — and generates creative, interconnected mind maps that reveal insights human-curated lists miss.

Language | Description | Links
English (Evolved) | Generated via a multi-agent pipeline | Static · Interactive ✨
繁體中文 (Evolved) | Generated via a multi-agent pipeline | Static · Interactive ✨
English | AI-synthesized pattern relationships | Static · Interactive ✨
繁體中文 | AI-synthesized pattern relationships | Static · Interactive ✨

What makes it special:

  • 🧬 Deep Pattern Synthesis — AI identifies non-obvious connections between patterns
  • 🎯 Smart Linking — Problems link to GitHub solutions (when available) or LeetCode
  • 🌐 Multi-language — Generate in English and 繁體中文
  • ♻️ Regeneratable — Run python tools/mindmaps/generate_mindmaps_ai.py to create fresh insights

📚 Curated Mind Maps

Mind Map | Description | Links
📐 Pattern Hierarchy | API Kernels → Patterns → Problems | Static · Interactive ✨
👨‍👩‍👧‍👦 Family Derivation | Base templates → Derived variants | Static · Interactive ✨
Algorithm Usage | Problems by algorithm | Static · Interactive ✨
🏗️ Data Structure Usage | Problems by data structure | Static · Interactive ✨
🏢 Company Coverage | Company-specific problems | Static · Interactive ✨
🗺️ Learning Roadmaps | NeetCode 150, Blind 75, etc. | Static · Interactive ✨
🔗 Problem Relations | Related problems network | Static · Interactive ✨
🔀 Solution Variants | Multiple approaches | Static · Interactive ✨
📊 Difficulty × Topics | Topics by difficulty | Static · Interactive ✨

👉 View All Interactive Mind Maps


🤖 AI Mind Map Generation

"Let AI synthesize what takes humans years to internalize."

Two Generation Modes

Mode | Description | Quick Start
🤖 Evolved Agent | Multi-expert refinement with consensus voting | cd tools/mindmaps/ai-markmap-agent && python main.py
🤖 Basic AI | Single-pass synthesis from knowledge graph | python tools/mindmaps/generate_mindmaps_ai.py

Key Features

  • 🧬 Multi-Expert Synthesis — Architect + Professor + Engineer perspectives
  • 🎯 Smart Linking — GitHub solution (if exists) → LeetCode fallback
  • 🌐 Multi-language — EN / 繁體中文
  • ♻️ Regeneratable — Version history with auto-increment

Output Files

Type | Output Path
Evolved | docs/mindmaps/neetcode_ontology_agent_evolved_{lang}.md
Basic | docs/mindmaps/neetcode_ontology_ai_{lang}.md
HTML | docs/pages/mindmaps/*.html

📖 Evolved Agent: See tools/mindmaps/ai-markmap-agent/README.md for architecture, expert roles, and configuration.

📖 Basic AI: See tools/README.md for configuration options.


📐 Pattern Documentation

"Don't memorize 200 problems. Master 10 patterns."

Each pattern provides two learning paths:

Path | Purpose | Best For
💡 Intuition | Understand the "why" through stories and visual explanations | First-time learners, building mental models
🛠️ Templates | Production-ready implementations with problem-specific variations | Interview prep, quick reference

API Kernel | Learning Resources | Problems
SubstringSlidingWindow | 💡 Intuition · 🛠️ Templates | LeetCode 3, 76, 159, 209, 340, 438, 567
TwoPointersTraversal | 💡 Intuition · 🛠️ Templates | LeetCode 1, 11, 15, 16, 21, 26, 27, 75, 88, 125, 141, 142, 167, 202, 283, 680, 876
BacktrackingExploration | 💡 Intuition · 🛠️ Templates | LeetCode 39, 40, 46, 47, 51, 77, 78, 79, 90, 93, 131, 216
GridBFSMultiSource | coming soon | LeetCode 994, 286, 542
KWayMerge | coming soon | LeetCode 23, 21, 88
BinarySearchBoundary | coming soon | LeetCode 4, 33, 34, 35
LinkedListInPlaceReversal | coming soon | LeetCode 25, 206, 92
MonotonicStack | coming soon | LeetCode 84, 85, 496
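
For a taste of what the 🛠️ Templates path provides, here is a minimal sliding-window sketch in the spirit of the SubstringSlidingWindow kernel, applied to LeetCode 3 (an illustration, not the shipped template):

# Minimal sliding-window sketch (illustrative; the real SubstringSlidingWindow
# templates cover the full set of problem-specific variations).
def length_of_longest_substring(s: str) -> int:
    last_seen = {}  # char -> most recent index
    left = 0        # window start; invariant: s[left..right] has no repeats
    best = 0
    for right, ch in enumerate(s):
        if ch in last_seen and last_seen[ch] >= left:
            left = last_seen[ch] + 1  # shrink past the previous occurrence
        last_seen[ch] = right
        best = max(best, right - left + 1)
    return best

assert length_of_longest_substring("abcabcbb") == 3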

👉 View All Pattern Guides →


📖 Usage Guide

⌨️ VS Code Integration

Keyboard Shortcuts:

Shortcut | Action
Ctrl+Shift+B | Run all tests for current file
F5 | Debug with test case #1

Note: Open a solution file in solutions/ before using shortcuts.

Common Tasks (Ctrl+Shift+P → "Tasks: Run Task"):

Task | Description
Run all tests | Execute all test cases
Run case #1 / #2 / #3 | Run specific test case
Benchmark | Show execution times
Run all solutions | Compare all implementations
Run with generated (10) | Static + 10 generated cases
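
For orientation, a task entry in .vscode/tasks.json could look roughly like the following (an illustrative sketch, not the shipped configuration; the real file defines all 14 tasks):

{
  "version": "2.0.0",
  "tasks": [
    {
      // Runs the test runner on the solution file currently open in the editor
      "label": "Run all tests",
      "type": "shell",
      "command": "python",
      "args": ["runner/test_runner.py", "${fileBasenameNoExtension}"],
      "group": { "kind": "build", "isDefault": true }
    }
  ]
}

Marking the task as the default build task is what binds it to Ctrl+Shift+B.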

📖 Complete Reference: See VSCode Setup Guide for all 14 tasks, 11 debug configurations, workflow examples, and customization.

💻 Command Line Interface

📖 Complete Reference: See Testing & Validation Guide for full CLI options, usage examples, and advanced features. This is the core testing engine that powers automated testing, benchmarking, random test generation, and complexity estimation.

# Run all test cases
python runner/test_runner.py <problem_name>

# Run specific test case
python runner/case_runner.py <problem_name> <case_number>

# Run with benchmarking
python runner/test_runner.py <problem_name> --benchmark

# Run all solutions
python runner/test_runner.py <problem_name> --all

# Generate random tests
python runner/test_runner.py <problem_name> --generate 10

# Estimate time complexity
python runner/test_runner.py <problem_name> --estimate

📝 Solution File Format

# solutions/0001_two_sum.py
from typing import List
from _runner import get_solver

SOLUTIONS = {
    "default": {
        "class": "Solution",
        "method": "twoSum",
        "complexity": "O(n) time, O(n) space",
        "description": "Single pass with hash map",
    },
}

class Solution:
    def twoSum(self, nums: List[int], target: int) -> List[int]:
        seen = {}
        for i, num in enumerate(nums):
            complement = target - num
            if complement in seen:
                return [seen[complement], i]
            seen[num] = i
        return []

def solve():
    import sys
    lines = sys.stdin.read().strip().split('\n')
    
    # Parse input
    nums = list(map(int, lines[0].split(',')))
    target = int(lines[1])
    
    # Run solution (polymorphic dispatch)
    solver = get_solver(SOLUTIONS)
    result = solver.twoSum(nums, target)
    print(result)

if __name__ == "__main__":
    solve()

📖 See docs/solution-contract.md for the complete specification.
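
For intuition only, a dispatcher honoring the SOLUTIONS registry might look roughly like this; the actual get_solver ships with the runner and may differ:

# Hypothetical sketch of a SOLUTIONS dispatcher -- not the framework's code;
# see docs/solution-contract.md for the real contract.
import inspect
import os

def get_solver(solutions: dict):
    name = os.environ.get("SOLVER_METHOD", "default")  # e.g. forwarded from --method
    entry = solutions[name]
    module_ns = inspect.stack()[1].frame.f_globals     # namespace of the solution file
    return module_ns[entry["class"]]()                 # instantiate the registered class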

📋 Test File Format

Specification | Requirement
Line Ending | LF (Unix format, \n)
Encoding | UTF-8
File Ending | Single newline at end
Naming | {number}_{name}_{case}.in/.out

Input file (tests/0001_two_sum_1.in):

2,7,11,15
9

Output file (tests/0001_two_sum_1.out):

[0, 1]
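
The repo ships its own validator (tools/review-code/validation/check_test_files.py); as a rough sketch of the checks the spec above implies:

# Minimal sketch of the test-file spec checks (illustrative helper only).
from pathlib import Path

def check_test_file(path: str) -> list[str]:
    issues = []
    data = Path(path).read_bytes()
    if b"\r" in data:
        issues.append("CR found: use LF (Unix) line endings")
    if not data.endswith(b"\n") or data.endswith(b"\n\n"):
        issues.append("file must end with exactly one newline")
    try:
        data.decode("utf-8")
    except UnicodeDecodeError:
        issues.append("file is not valid UTF-8")
    return issues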

🔧 Advanced Features

🚀 Multi-Solution Benchmarking

Compare multiple approaches for the same problem using the polymorphic pattern:

# solutions/0023_merge_k_sorted_lists.py
from _runner import get_solver

SOLUTIONS = {
    "default": {
        "class": "SolutionHeap",
        "method": "mergeKLists",
        "complexity": "O(N log k)",
        "description": "Min Heap approach"
    },
    "divide": {
        "class": "SolutionDivideConquer",
        "method": "mergeKLists",
        "complexity": "O(N log k)",
        "description": "Divide and Conquer"
    },
    "greedy": {
        "class": "SolutionGreedy",
        "method": "mergeKLists",
        "complexity": "O(kN)",
        "description": "Greedy comparison"
    },
}

class SolutionHeap:
    def mergeKLists(self, lists):
        # Heap implementation
        pass

class SolutionDivideConquer:
    def mergeKLists(self, lists):
        # Divide & Conquer implementation
        pass

class SolutionGreedy:
    def mergeKLists(self, lists):
        # Greedy implementation
        pass

def solve():
    # ... parse input ...
    solver = get_solver(SOLUTIONS)
    result = solver.mergeKLists(lists)
    print(result)

Run commands:

# Run specific solution
python runner/test_runner.py 0023_merge_k_sorted_lists --method heap

# Compare all solutions
python runner/test_runner.py 0023_merge_k_sorted_lists --all --benchmark

Output:

============================================================
📊 Performance Comparison
============================================================
Method               Avg Time     Complexity      Pass Rate
------------------------------------------------------------
heap                    44.36ms   O(N log k)      3/3
divide                  44.48ms   O(N log k)      3/3
greedy                  44.82ms   O(kN)           3/3
============================================================

Create with template: scripts\new_problem.bat 0023_merge_k_sorted_lists --multi

📖 See docs/solution-contract.md for complete SOLUTIONS schema and validation rules.

🔀 Flexible Output Validation

For problems with multiple valid answers ("return in any order"):

Validation Modes:

Mode | Description | Requires .out
[judge] | Custom validation with reference | ✅
[judge-only] | Custom validation only | ❌
[exact] | Exact string match | ✅
[sorted] | Sort before comparison | ✅
[set] | Set comparison | ✅

JUDGE_FUNC (Recommended):

def judge(actual: list, expected, input_data: str) -> bool:
    """Validate N-Queens solution."""
    n = int(input_data.strip())
    
    # Validate each board
    for board in actual:
        if not is_valid_n_queens(board, n):
            return False
    
    # Check count if expected exists
    if expected is not None:
        return len(actual) == len(expected)
    
    return True

JUDGE_FUNC = judge
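
The judge above assumes an is_valid_n_queens helper; one way such a helper could look (a sketch, assuming each board is a list of n strings like ".Q.."):

def is_valid_n_queens(board: list[str], n: int) -> bool:
    # Sketch of the helper assumed above: exactly one queen per row,
    # and no two queens sharing a column or diagonal.
    if len(board) != n or any(len(row) != n or row.count("Q") != 1 for row in board):
        return False
    queens = [(r, row.index("Q")) for r, row in enumerate(board)]
    cols = {c for _, c in queens}
    diagonals = {r - c for r, c in queens}
    anti_diagonals = {r + c for r, c in queens}
    # n distinct columns, diagonals, and anti-diagonals means no queen attacks another.
    return len(cols) == len(diagonals) == len(anti_diagonals) == n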

COMPARE_MODE (Simple Cases):

COMPARE_MODE = "sorted"  # Options: "exact" | "sorted" | "set"
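
For intuition, the order-insensitive modes amount to something along these lines (illustrative; the framework's actual comparison logic lives in runner/compare.py):

# Roughly what "sorted" and "set" modes mean (illustrative sketch).
def compare_sorted(actual: str, expected: str) -> bool:
    return sorted(actual.strip().splitlines()) == sorted(expected.strip().splitlines())

def compare_set(actual: str, expected: str) -> bool:
    return set(actual.strip().splitlines()) == set(expected.strip().splitlines())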

📖 See docs/solution-contract.md for complete JUDGE_FUNC signature and validation rules.

🎲 Random Test Generation

Create a generator file with the same name as your solution:

# generators/0004_median_of_two_sorted_arrays.py
import random
from typing import Iterator, Optional

def generate(count: int = 10, seed: Optional[int] = None) -> Iterator[str]:
    """Generate random test cases."""
    if seed is not None:
        random.seed(seed)
    
    # Edge cases first
    yield "[]\n[1]"
    yield "[1]\n[]"
    
    # Random cases
    for _ in range(max(0, count - 2)):  # guard against count < 2
        m = random.randint(0, 1000)
        n = random.randint(0, 1000)
        nums1 = sorted(random.randint(-10**6, 10**6) for _ in range(m))
        nums2 = sorted(random.randint(-10**6, 10**6) for _ in range(n))
        yield f"{list(nums1)}\n{list(nums2)}".replace(' ', '')

Usage:

# Run static + generated tests
python runner/test_runner.py 0004_median --generate 10

# Only generated tests
python runner/test_runner.py 0004_median --generate-only 100

# Reproducible with seed
python runner/test_runner.py 0004_median --generate 10 --seed 42

# Save failing cases
python runner/test_runner.py 0004_median --generate 10 --save-failed

📖 See docs/generator-contract.md for complete generator specification and best practices.

📈 Time Complexity Estimation

Add a complexity generator function:

# generators/0004_median_of_two_sorted_arrays.py

def generate_for_complexity(n: int) -> str:
    """Generate test case with specific size n."""
    m = random.randint(0, n)
    return _generate_case(m, n - m)

Run estimation:

python runner/test_runner.py 0004_median --estimate

Output:

📈 Running complexity estimation...
   Sizes: [10, 20, 50, 100, 200, 500, 1000, 2000]
   n=   10: 0.0040ms
   n=  100: 0.0082ms
   n= 1000: 0.0685ms
   n= 2000: 0.1796ms

✅ Estimated: O(n log n)
   Confidence: 1.00
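
Under the hood, runner/complexity_estimator.py builds on the big_O package; the general idea is to time the code at growing input sizes and fit standard complexity classes, roughly like this sketch:

# Sketch of the idea behind --estimate, using the big_O package
# (pip install big_O); the framework's exact usage may differ.
import random
import big_o

def data_generator(n: int) -> list[float]:
    return [random.random() for _ in range(n)]  # input of size n for the function under test

best, _ = big_o.big_o(sorted, data_generator, min_n=100, max_n=10_000, n_measures=10)
print(best)  # best-fitting complexity class for the measured timings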

📁 Project Architecture

neetcode/
│
├── solutions/                 # 📝 Your solution files
│   └── 0001_two_sum.py
│
├── tests/                     # 📋 Test cases
│   ├── 0001_two_sum_1.in      # Input file
│   ├── 0001_two_sum_1.out     # Expected output
│   └── *_failed_*.in          # Auto-saved failed cases (--save-failed)
│
├── generators/                # 🎲 Random test generators (optional)
│   └── 0001_two_sum.py        # generate(count, seed) function
│
├── runner/                    # 🧪 Core testing & validation engine
│   ├── test_runner.py         # CLI entry point & main orchestration
│   ├── case_runner.py         # Single case runner (for debugging)
│   ├── executor.py            # Test case execution (subprocess)
│   ├── compare.py             # Output comparison (exact/sorted/set/judge)
│   ├── reporter.py            # Result formatting & benchmark display
│   ├── module_loader.py       # Dynamic module loading
│   ├── complexity_estimator.py # Time complexity estimation (big_O)
│   ├── paths.py               # Path utilities
│   ├── io_utils.py            # File I/O operations
│   ├── util.py                # Re-exports (backward compatible)
│   └── README.md              # Quick reference guide
│
│   📖 See [Testing & Validation Guide](docs/runner/README.md) — Core engine for automated testing, benchmarking, random test generation, and complexity estimation
│
├── templates/                 # 📄 Problem templates
│   ├── template_solution.py       # Single solution template
│   ├── template_solution_multi.py # Multi-solution (polymorphic)
│   └── template_test.txt          # Test case template
│
├── .vscode/                   # 🔧 VS Code integration
│   ├── settings.json          # Python environment settings
│   ├── tasks.json             # Ctrl+Shift+B shortcuts (14 tasks)
│   └── launch.json            # F5 debug configurations (11 configs)
│
│   📖 See [VSCode Setup Guide](docs/contributors/vscode-setup.md) — Tasks, debug configs, workflow examples
│
├── docs/                      # 📚 Documentation (MkDocs)
│   ├── index.md               # Homepage (English)
│   ├── index_zh-TW.md         # Homepage (繁體中文)
│   ├── contributors/          # Maintainer documentation
│   │   ├── README.md          # Full maintainer guide
│   │   ├── testing.md         # Complete testing documentation
│   │   ├── vscode-setup.md    # VS Code tasks & debug configs
│   │   ├── virtual-env-setup.md  # Virtual environment setup
│   │   └── documentation-architecture.md  # Documentation structure
│   ├── tools/                 # Tools documentation
│   │   ├── README.md          # Complete tools reference
│   │   ├── ai-markmap-agent/  # AI Markmap Agent docs
│   │   ├── mindmaps/          # Mind Maps Generator docs
│   │   └── patterndocs/       # Pattern Docs Generator docs
│   ├── mindmaps/              # Generated mind map markdown
│   ├── patterns/              # Generated pattern documentation
│   ├── pages/                 # Generated HTML (gitignored)
│   ├── assets/                # Documentation assets (images, CSS, JS)
│   ├── overrides/             # MkDocs theme overrides
│   ├── getting-started/       # Getting started guides
│   └── stylesheets/           # Custom CSS
│
├── tools/                     # 🛠️ Utility scripts
│   ├── mindmaps/              # 🗺️ Mind map tools (all integrated)
│   │   ├── core/              # Core modules
│   │   ├── ai-markmap-agent/  # 🤖 AI Markmap Agent (multi-agent pipeline)
│   │   ├── ai_mindmap/        # AI mind map modules
│   │   ├── hooks/             # Git hooks
│   │   ├── prompts/           # AI prompts
│   │   ├── shared/            # Shared utilities
│   │   ├── tests/             # Tests
│   │   ├── generate_mindmaps.py       # Rule-based generator (entry)
│   │   ├── generate_mindmaps_ai.py    # AI generator (entry)
│   │   ├── generate_mindmaps.toml     # Rule-based configuration
│   │   ├── generate_mindmaps_ai.toml  # AI configuration
│   │   ├── sync_mindmap_html.py       # Sync HTML
│   │   ├── text_to_mindmap.py         # Text to mindmap
│   │   └── html_meta_description_generator.py  # SEO meta descriptions
│   ├── patterndocs/           # 📚 Pattern documentation generator
│   │   └── generate_pattern_docs.py   # Entry script
│   ├── review-code/           # 🔍 Code review & validation
│   │   └── validation/        # Validation tools
│   │       ├── check_solutions.py
│   │       ├── check_test_files.py
│   │       └── run_format_tests.py
│   ├── docstring/             # 📝 Docstring tools
│   ├── leetcode-api/          # 🔗 LeetCode API
│   │   └── crawler/           # Crawler tools
│   ├── maintenance/           # 🔧 Maintenance tools
│   │   └── doc-naming/        # Documentation naming tools
│   └── _staging/              # 📦 Staging area (to be organized)
│
├── ontology/                  # 🧬 Algorithm ontology (TOML)
│   ├── api_kernels.toml       # API kernel definitions
│   ├── patterns.toml          # Pattern definitions
│   ├── algorithms.toml        # Algorithm definitions
│   ├── data_structures.toml   # Data structure definitions
│   ├── companies.toml         # Company definitions
│   ├── topics.toml            # Topic definitions
│   ├── difficulties.toml      # Difficulty levels
│   ├── families.toml          # Problem family definitions
│   └── roadmaps.toml          # Roadmap definitions
│
├── meta/                      # 📊 Problem & pattern metadata
│   ├── problems/              # Problem metadata (one TOML per problem)
│   │   └── *.toml
│   └── patterns/              # Pattern documentation sources
│       └── <pattern_name>/    # Pattern-specific markdown
│
├── roadmaps/                  # 🗺️ Learning path definitions
│   ├── neetcode_150.toml
│   ├── blind_75.toml
│   └── sliding_window_path.toml
│
├── .dev/                      # 🧪 Maintainer zone (unit tests)
│   ├── tests/                 # Unit test suite (150+ cases)
│   ├── tests_solutions/       # Solution validation tests
│   ├── scripts/run_tests.bat/.sh  # Run runner unit tests
│   ├── run_all_tests.bat/.sh  # Run all unit tests
│   ├── run_tests_solutions.bat/.sh  # Run solution tests
│   ├── testing.md             # Testing documentation
│   ├── virtual-env-setup.md   # Virtual environment guide
│   └── README.md              # Maintainer guide
│
├── .github/                   # 🚀 GitHub configuration
│   └── workflows/
│       └── deploy-pages.yml   # GitHub Pages deployment
│
├── leetcode/                  # 🐍 Python virtual environment (3.11)
│
├── scripts/                   # 🔧 Utility scripts
│   ├── new_problem.bat / .sh  # Create new problem from template
│   ├── run_tests.bat / .sh    # Run all tests for a problem
│   ├── run_case.bat / .sh     # Run single test case
│   └── build_docs.bat / .sh   # Build documentation site
│
├── mkdocs_plugins/            # 🔌 MkDocs plugins
│   └── mindmaps_lastmod.py    # Last modified date plugin
│
├── requirements.txt           # Python dependencies
├── pyproject.toml             # Project configuration
├── mkdocs.yml                 # MkDocs configuration
├── pytest.ini                 # pytest configuration
├── README.md                  # This file (English)
└── README_zh-TW.md            # 繁體中文版

Directory Guide

Directory | Purpose | Target Audience
solutions/ | Write your solutions here | ✅ All users
tests/ | Add test cases (.in/.out) | ✅ All users
generators/ | Random test generators | ✅ All users
runner/ | Test execution engine | 🔧 Contributors
templates/ | Problem templates | ✅ All users
.vscode/ | VS Code configuration | ✅ All users
docs/ | MkDocs documentation | 🔧 Contributors
tools/ | Documentation generators | 🔧 Contributors
ontology/ | Algorithm ontology data | 🔧 Contributors
meta/ | Problem/pattern metadata | 🔧 Contributors
.dev/ | Unit tests (150+ cases) | 🔧 Maintainers

📝 Note: Files in docs/mindmaps/, docs/patterns/, and docs/pages/ are auto-generated. Edit the source files in ontology/, meta/, and tools/ instead.

Documentation Guide

Documentation is organized by target audience:

Location | Purpose | Audience
docs/ | User documentation (published to website) | ✅ Users
tools/README.md | Developer tools reference | 🔧 Contributors
tools/*/README.md | Module technical details | 🔧 Contributors
.dev/ | Maintainer documentation | 🔧 Maintainers

Key Documentation Files:

Document | Description
docs/solution-contract.md | Solution file specification
docs/generator-contract.md | Generator file specification
docs/tools/README.md | Complete tools reference
docs/contributors/README.md | Maintainer guide
docs/contributors/documentation-architecture.md | Documentation structure

❓ Frequently Asked Questions

What problems does this framework solve?

  • Running multiple algorithm implementations automatically
  • Generating reproducible random test data for stress testing
  • Benchmarking solutions to identify performance differences
  • Debugging LeetCode-style problems with VS Code integration
  • Validating outputs using custom logic beyond simple file comparison

How is this different from copying LeetCode solutions?

This is not a solution collection — it's a testing infrastructure. You write solutions, and the framework:

  1. Runs them against static test cases
  2. Generates random test cases automatically
  3. Validates correctness using custom judge functions
  4. Benchmarks multiple solutions against each other
  5. Estimates time complexity empirically

Can I use this for interview preparation?

Absolutely! The framework is perfect for interview prep:

  • Practice writing solutions in real LeetCode format
  • Find edge cases you might miss with random test generation
  • See which approach is actually faster with benchmarking
  • Debug easily with VS Code integration

What Python version is required?

Python 3.11 — matching the LeetCode official environment.


🛠️ For Contributors

Running Unit Tests

# Activate virtual environment
leetcode\Scripts\activate  # Windows
source leetcode/bin/activate  # Linux/macOS

# Run all tests
python -m pytest .dev/tests -v

# With coverage
python -m pytest .dev/tests --cov=runner --cov-report=html

Generate Mind Maps Locally

AI-Powered (Recommended):

# Interactive mode
python tools/mindmaps/generate_mindmaps_ai.py

# With specific goal
python tools/mindmaps/generate_mindmaps_ai.py --goal interview

# Generate multiple languages
# Edit tools/mindmaps/generate_mindmaps_ai.toml: language = ["en", "zh-TW"]
python tools/mindmaps/generate_mindmaps_ai.py

Configuration: tools/mindmaps/generate_mindmaps_ai.toml

Rule-Based:

# Generate Markdown mind maps
python tools/mindmaps/generate_mindmaps.py

# Generate HTML (interactive) mind maps
python tools/mindmaps/generate_mindmaps.py --html

Configuration: tools/mindmaps/generate_mindmaps.toml

Build Documentation Locally

⚠️ Optional: Building documentation locally is not required. Core LeetCode practice functionality works without any documentation build setup.

Recommended Method (Simple):

The easiest way to build documentation locally is using the manual scripts:

# Windows
scripts\build_docs.bat

# Linux/macOS
./scripts/build_docs.sh

# Build and preview locally
scripts\build_docs.bat --serve  # Windows
./scripts/build_docs.sh --serve  # Linux/macOS

📖 See Building Documentation Locally (Manual Method) for complete guide.

Advanced Option (Optional):

If you want to test the exact GitHub Actions workflow locally, you can use act:

📖 See Running GitHub Actions Locally with Act. Note: this requires Docker and the act tool, and is only needed if you want to test CI/CD workflows.


📜 License

MIT License — Free for personal learning and educational use.


Built with ❤️ for the competitive programming community
