Welcome to the official repository for my Computer Science M.Sc. thesis: "A Proactive Container-based Auto-scaling Approach for a Hierarchical Orchestration Framework for Edge Computing."
This project tackles the inherent resource limitations of Edge Computing. When localized edge clusters face traffic bursts, they quickly saturate. This system introduces a Vicinity-Aware, 3-Layer Hierarchical Scaling Engine that dynamically offloads tasks to geographic neighbors or the central cloud before any degradation in Quality of Service (QoS) occurs.
The project is built using Python/Flask microservices orchestrated by Docker Compose, precisely modeling a 3-layer edge topology:
- ☀️ Layer 1: Central Cloud Node (`central_node/app.py`): the global orchestrator.
  - Maps the network topology via a latency matrix and defines neighbor relationships ($Vicinity \le Threshold$).
  - Acts as the final fallback layer for Global Scaling when entire edge regions are saturated.
- 🧠 Layer 2: Cluster Managers (`cluster_node/app.py`): geographically localized managers.
  - Each manages a pool of Edge Worker nodes.
  - Maintains a `MY_VICINITY` list of the neighboring clusters it can ask for help during traffic spikes.
  - Routes container requests using a strict 3-tier fallback logic.
- 👷 Layer 3: Edge Workers (`edge_node/app.py`): the actual machines/VMs running workloads close to the user.
  - Defined by strict physical constraints (e.g., $CPU = 4.0$, $RAM = 8192\,\mathrm{MB}$).
  - Manages task allocation, strictly monitoring $CPU_{allocated}$ and $RAM_{allocated}$.
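The vicinity relation described above can be sketched as follows. Note that the cluster names, latency values, and `VICINITY_THRESHOLD_MS` here are illustrative assumptions, not the thesis's actual configuration:

```python
# Illustrative sketch: deriving each cluster's vicinity from a latency matrix.
# All latencies (ms) and the threshold are hypothetical example values.
VICINITY_THRESHOLD_MS = 20

# Symmetric pairwise latencies between cluster managers (ms).
LATENCY_MATRIX = {
    ("cluster_1", "cluster_2"): 12,
    ("cluster_1", "cluster_3"): 35,
    ("cluster_2", "cluster_3"): 18,
}

def latency(a: str, b: str) -> int:
    """Look up the latency between two clusters regardless of pair ordering."""
    return LATENCY_MATRIX.get((a, b), LATENCY_MATRIX.get((b, a)))

def build_vicinity(clusters: list[str]) -> dict[str, list[str]]:
    """A cluster's vicinity = all other clusters within the latency threshold."""
    return {
        c: [o for o in clusters if o != c
            and latency(c, o) <= VICINITY_THRESHOLD_MS]
        for c in clusters
    }

vicinity = build_vicinity(["cluster_1", "cluster_2", "cluster_3"])
# e.g. cluster_1 and cluster_2 are neighbors (12 ms), but cluster_3 is
# too far from cluster_1 (35 ms) to be in its vicinity.
```

With these example numbers, each cluster's `MY_VICINITY` list would simply be the corresponding entry of this mapping.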
When a burst of traffic requires a new application instance, the routing engine executes the core thesis algorithm:
- Tier 1 (Local Scheduling): The Cluster Manager attempts to place the task on its local Edge Workers.
- Tier 2 (Vicinity Offloading): If all local workers are full, the manager forwards the request (`is_offloaded=True`) to its predefined Vicinity (neighboring clusters with low latency).
- Tier 3 (Global Scaling): If the local cluster and its vicinity are completely saturated, the request is escalated to the Central Cloud, which searches globally for any available resource.
When traffic subsides (Algorithm 5), the system must terminate tasks to save cost/energy. The simulator uses a Cost-Aware Priority Queue to kill instances in this order:
- Highest Cost ($Priority = 10$): Global/Remote instances.
- Medium Cost ($Priority = 5$): Vicinity/Neighbor instances.
- Lowest Cost ($Priority = 1$): Local instances.
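The scale-down order above can be sketched with Python's `heapq`: negating the priority makes the standard min-heap pop the most expensive (global) instances first. The instance names and the exact queue layout are illustrative, not the simulator's actual code:

```python
import heapq

# Priority per placement tier, matching the scale-down policy above.
PRIORITY = {"global": 10, "vicinity": 5, "local": 1}

class ScaleDownQueue:
    """Pop the costliest running instances first when traffic subsides."""
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker: FIFO among equal priorities

    def add(self, instance_id: str, placement: str) -> None:
        # Negate priority so heapq's min-heap yields the highest cost first.
        heapq.heappush(self._heap, (-PRIORITY[placement], self._counter, instance_id))
        self._counter += 1

    def pop(self) -> str:
        return heapq.heappop(self._heap)[2]

q = ScaleDownQueue()
q.add("task_a", "local")
q.add("task_b", "global")
q.add("task_c", "vicinity")
# Pops return task_b (global) first, then task_c (vicinity), then task_a (local).
```

This ordering means the system always releases remote, expensive capacity before touching cheap local instances.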
    edge_computing_thesis/
    ├── central_node/
    │   ├── app.py                      # Cloud manager & network initializer
    │   ├── Dockerfile
    │   └── requirements.txt
    ├── cluster_node/
    │   ├── app.py                      # 3-tier scaling logic & local routing
    │   ├── Dockerfile
    │   └── requirements.txt
    ├── edge_node/
    │   ├── app.py                      # Resource tracking & task execution
    │   ├── Dockerfile
    │   └── requirements.txt
    ├── docker-compose.yml              # Provisions 1 central, 3 clusters, 6 edge nodes
    ├── simulate_thesis_trace.py        # The workload trace simulation engine
    └── test_log_bursts_modified.csv    # The input workload dataset
Ensure you have Docker and Docker Compose installed. The `docker-compose.yml` file will provision the complete 3-layer network on a shared bridge network.
    docker-compose up --build -d

Wait a few seconds for all edge workers to register themselves with their respective cluster managers.
The simulation script reads a trace dataset (`test_log_bursts_modified.csv`) and mimics dynamic traffic patterns, translating the per-minute requests into CPU demand via a conversion formula.
Run the simulation:
    python simulate_thesis_trace.py

As the simulation runs, you will see the system gracefully navigate capacity limits:
    [Minute X] Demand: 3.5 > Current: 1.0 | Scaling UP...
    [Cluster 1] All Local Workers Full. Trying Vicinity...
    [Cluster 1] Vicinity Full. Requesting Global Scale from Central Node...
The final state and the scaling decisions per minute will be saved to `thesis_simulation_results.csv`.
This repository houses my ongoing M.Sc. thesis project. While the core algorithms are complete, I am always open to architectural discussions, optimization ideas, and networking!
Feel free to open an issue or reach out if you're interested in Fog/Edge computing, Docker orchestration, or predictive auto-scaling.
Author: Ehsan Moradi
Advisor: S.A. Javadi