rfaes/devcontainer-strategy

DevContainer Strategy Comparison

A comprehensive comparison of different strategies for configuring and deploying development containers.

Overview

This repository explores various approaches to structuring devcontainers, analyzing their trade-offs in terms of build time, image size, maintainability, and developer experience.

Strategies Under Comparison

1. Prebuilt DevContainer with All Features

Building a single, comprehensive devcontainer image that includes all features and tools.

Pros:

  • Consistent environment across all team members
  • Ready to use immediately when opening the container

Cons:

  • Large image size
  • All features must be updated together
  • Requires CI/CD pipeline for prebuild
  • Less flexible for individual developer needs
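
In practice this means building the full image in CI, publishing it to a registry, and referencing it directly. A minimal devcontainer.json sketch, where the registry path ghcr.io/your-org/devcontainer-prebuilt is a hypothetical placeholder for the image your pipeline publishes:

```json
{
  "name": "prebuilt-all",
  // Image built and pushed by CI; every tool and feature is already baked in.
  "image": "ghcr.io/your-org/devcontainer-prebuilt:latest"
}
```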

2. Merged Features (Single Feature File)

Consolidating all feature installations into one custom feature definition.

Pros:

  • Single source of truth for dependencies
  • Easier to understand the complete setup
  • Potential for optimization across installations

Cons:

  • Less modular
  • Harder to selectively enable/disable components
  • Can't leverage official feature updates easily
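
A sketch of what this looks like, assuming a local feature folder local-features/all-tools containing a devcontainer-feature.json and an install.sh that performs every installation in one pass (the folder name is a placeholder):

```json
{
  "name": "merged-features",
  "image": "mcr.microsoft.com/devcontainers/rust",
  "features": {
    // Single local feature; its install.sh installs all tools together.
    "./local-features/all-tools": {}
  }
}
```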

3. Modular Features (Multiple Features)

Composing the environment from individual, independently versioned Features as defined by the Dev Container Features specification.

Pros:

  • Highly modular and reusable
  • Easy to add/remove specific tools
  • Leverages community-maintained features
  • Better caching per feature

Cons:

  • Installation order matters (see strategy 4 below)
  • Potential version conflicts
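
A representative devcontainer.json using published features from the official registry (the specific features chosen here are illustrative):

```json
{
  "name": "modular-features",
  "image": "mcr.microsoft.com/devcontainers/rust",
  "features": {
    // Community-maintained features, each versioned and cached independently.
    "ghcr.io/devcontainers/features/common-utils:2": {},
    "ghcr.io/devcontainers/features/git:1": {},
    "ghcr.io/devcontainers/features/node:1": { "version": "lts" }
  }
}
```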

4. Feature Order Optimization

Analyzing how the order of features affects build time and caching.

Pros:

  • Better layer caching
  • Optimizes for common development workflows

Cons:

  • Requires understanding of Docker layer caching
  • May need periodic re-evaluation
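
The spec exposes this directly through the overrideFeatureInstallOrder property, which lists feature IDs (without version tags) in the desired order. A sketch, with the ordering rationale as a comment:

```json
{
  "image": "mcr.microsoft.com/devcontainers/rust",
  "features": {
    "ghcr.io/devcontainers/features/node:1": {},
    "ghcr.io/devcontainers/features/common-utils:2": {}
  },
  // Install rarely-changing features first so that editing a
  // frequently-updated feature invalidates fewer cached layers.
  "overrideFeatureInstallOrder": [
    "ghcr.io/devcontainers/features/common-utils",
    "ghcr.io/devcontainers/features/node"
  ]
}
```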

5. Hybrid Approach

Combining prebuilt base images with on-demand features.

Pros:

  • Balance between flexibility and convenience
  • Core tools prebuilt, optional tools on-demand
  • Reduced image size for base image

Cons:

  • More complex setup
  • Requires maintaining both prebuild and feature configs
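
Concretely, the devcontainer.json references a CI-built base image and layers optional features on top. A sketch, where ghcr.io/your-org/devcontainer-base is a hypothetical placeholder for the prebuilt base:

```json
{
  "name": "hybrid",
  // Prebuilt base from CI with the core toolchain baked in.
  "image": "ghcr.io/your-org/devcontainer-base:latest",
  "features": {
    // Optional tooling layered on at container creation time.
    "ghcr.io/devcontainers/features/docker-in-docker:2": {}
  }
}
```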

6. Just-In-Time Feature Installation

Installing tools lazily when first needed rather than at container build time.

Pros:

  • Only installs what's actually needed
  • Smaller base image

Cons:

  • Tools not immediately available
  • Potential for environment drift
  • More complex to implement
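
One way to implement this is a shell wrapper that installs a tool the first time it is invoked. A minimal bash sketch, where install_tool is a hypothetical stand-in for the real install step (e.g. cargo install cargo-audit):

```shell
# Lazy wrapper: run a tool, installing it first if it is not yet present.
# `install_tool` is a hypothetical placeholder for the real installer
# (e.g. `cargo install <tool>` or `apt-get install -y <pkg>`).
lazy() {
  local tool="$1"
  shift
  if ! command -v "$tool" >/dev/null 2>&1; then
    echo "first use of $tool: installing..." >&2
    install_tool "$tool"
  fi
  "$tool" "$@"
}

# Usage (e.g. in .bashrc): alias cargo-audit='lazy cargo-audit'
```

After the first invocation the tool is on the PATH, so later calls skip the install branch; the trade-off is that the first use pays the full install cost.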

Evaluation Criteria

Each strategy will be evaluated based on:

  • Build Time: Initial build and rebuild times
  • Startup Time: Time until the container is ready
  • Image Size: Final container image size
  • Maintainability: Ease of updates and changes
  • Developer Experience: Ease of use and flexibility
  • CI/CD Integration: Pipeline complexity
  • Caching Efficiency: How well it leverages Docker layer caching
  • Reproducibility: Consistency across environments

Example Configuration

To ensure consistent comparison across all strategies, we use a standardized configuration:

Base Image

mcr.microsoft.com/devcontainers/rust@sha256:e418f5e96979c3674baf47115366d0e4f5569fefdab6b51bd2817256941ceeb0

Using a digest-based reference ensures exact reproducibility: the image content is immutable and guaranteed to be identical across all environments.
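
In devcontainer.json, the pin is simply the digest-qualified image reference:

```json
{
  // A tag like :latest can move between builds; a digest cannot.
  "image": "mcr.microsoft.com/devcontainers/rust@sha256:e418f5e96979c3674baf47115366d0e4f5569fefdab6b51bd2817256941ceeb0"
}
```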

VS Code Extensions

All strategies will include the following extensions as a representative sample:

  • rust-lang.rust-analyzer - Rust language support
  • vadimcn.vscode-lldb - Native debugger
  • bierner.emojisense - Emoji autocomplete
  • davidanson.vscode-markdownlint - Markdown linting
  • yzhang.markdown-all-in-one - Markdown utilities
  • shd101wyy.markdown-preview-enhanced - Enhanced markdown preview
  • shardulm94.trailing-spaces - Highlight trailing spaces
  • tamasfe.even-better-toml - TOML language support
  • vscode-icons-team.vscode-icons - File icons
  • fill-labs.dependi - Dependency management
  • alexkrechik.cucumberautocomplete - Cucumber support
  • heaths.vscode-guid - GUID generation
  • jebbs.plantuml - PlantUML support
  • redhat.vscode-yaml - YAML language support

This combination provides a realistic development environment with language support, documentation tools, and utilities.
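
The extensions are declared once in the customizations block of devcontainer.json and reused by every strategy; a fragment showing the shape, abbreviated to three entries:

```json
{
  "customizations": {
    "vscode": {
      "extensions": [
        "rust-lang.rust-analyzer",
        "vadimcn.vscode-lldb",
        "tamasfe.even-better-toml"
        // ...remaining extensions from the list above
      ]
    }
  }
}
```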

Repository Structure

/
├── strategies/
│   ├── 01-prebuilt-all/
│   ├── 02-merged-features/
│   ├── 03-modular-features/
│   ├── 04-feature-order/
│   ├── 05-hybrid/
│   └── 06-just-in-time/
├── benchmarks/
│   └── results.md
├── examples/
│   └── sample-projects/
├── CONTRIBUTING.md
├── LICENSE
└── README.md

Performance Test Methodology

To ensure accurate and reproducible benchmarks, we follow a standardized testing approach:

Metrics Collected

  1. Initial Build Time

    • Time from container build start to completion
    • Measured with no cache (clean build)
    • Reported in seconds
  2. Rebuild Time

    • Time to rebuild after changing one feature/extension
    • Tests cache efficiency
    • Measured in seconds
  3. Container Startup Time

    • Time from container start to VS Code ready state
    • Includes extension activation
    • Measured in seconds
  4. Image Size

    • Final compressed image size
    • Reported in MB/GB
    • Includes all layers
  5. Layer Count

    • Number of container layers in final image
    • Affects push/pull performance
  6. System Information

    • CPU model and specifications
    • Total RAM
    • Windows OS version
    • Podman version
    • Captured automatically in each test run

Testing Environment

  • Operating System: Windows (version documented in results)
  • CPU: Processor model and core count
  • RAM: Total system memory
  • Podman Version: Specified in benchmark results
  • Network: Cold pulls (no registry cache) for initial builds
  • Repetitions: Each test run 5 times, median value reported
  • Clean State: Podman system prune between strategy tests

All system specifications are automatically captured and included in the benchmark results.

Benchmark Script

Each strategy includes a benchmark.ps1 PowerShell script that:

  1. Captures system information (CPU, RAM, OS, Podman version)
  2. Clears Podman cache
  3. Pulls base image
  4. Records start time
  5. Builds devcontainer
  6. Records completion time
  7. Measures image size
  8. Tests container startup
  9. Performs rebuild test with minor change
  10. Runs 5 iterations and calculates median
  11. Outputs results in JSON format with system specs
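
The flow above can be sketched in bash as follows. This is illustrative only (the repository's actual scripts are PowerShell): the podman and devcontainer commands are shown as comments, and only the median calculation over the five iterations is concrete.

```shell
# Median of an odd-length list of numbers (the scripts run 5 iterations).
median_of() {
  printf '%s\n' "$@" | sort -n | sed -n "$((($# + 1) / 2))p"
}

# Per iteration, the real script would do roughly:
#   podman system prune --force                # clean state
#   start=$(date +%s)
#   devcontainer build --workspace-folder .    # build the devcontainer
#   end=$(date +%s)
#   build_times+=($((end - start)))
build_times=(212 198 205 231 201)   # example timings in seconds

echo "median build time: $(median_of "${build_times[@]}")s"
# → median build time: 205s
```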

Result Comparison

All results are aggregated in benchmarks/results.md with:

  • Tabular comparison across all strategies
  • Charts showing relative performance
  • Analysis of trade-offs
  • Recommendations based on use case

Getting Started

Each strategy directory contains:

  • .devcontainer/ - DevContainer configuration
  • README.md - Strategy-specific documentation
  • benchmark.ps1 - PowerShell script to measure performance metrics

Contributing

Contributions are welcome! If you have additional strategies or improvements to existing ones, please open an issue or pull request.

See CONTRIBUTING.md for detailed guidelines on how to contribute.

License

MIT License. See the LICENSE file for details.
