VM to Containers: Modernizing a 3-Tier Application - Part 2



Preparing the Application for Containerization

In the previous article, I introduced the VM-based 3-tier application that I built and explained how the system works across its three tiers:

  • Web tier (Nginx)
  • Application tier (Flask)
  • Database tier (FastAPI + MongoDB)

The architecture itself is solid and does not need to change immediately. However, the deployment model is tightly coupled to virtual machines. Before we introduce containers, we must first remove the VM-specific assumptions embedded in the application code and deployment scripts.

This preparation step is critical. Without it, environment details would be baked into the container images, forcing a rebuild every time the infrastructure changes. In this article, I will walk through the exact files from my original application that need to be reviewed or modified to make the system container-ready.

The Source Code Baseline for Modernization

The application files are available in this GitHub repository.

The VM startup scripts from the same repository are also useful references because they show how the tiers were originally wired together.

Reviewing the Application Tier

The main application logic lives in the Flask application:

App VM/app.py

This file handles:

  • rendering the UI
  • receiving form submissions
  • calling the backend API
  • displaying the results

When reviewing this file for containerization, the first thing that stands out is the way the backend service location is defined.

Current configuration in the Flask app

On line 22 you will find the following entry:

db_fqdn = "db-vm.vercel.app" # Modify the value with the actual database server
api_url_base = "http://" + db_fqdn + "/employee"

In the original VM deployment, a startup script replaces this value during deployment.

That approach works in a VM environment, but it creates a problem for containers: the deployment modifies the application code itself, so the artifact that runs never matches the source in version control, and it can never be immutable.
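The original startup script is not reproduced here, but the technique is easy to sketch. Assuming a deploy-time variable holding the database host (the name DB_HOST and the address below are made up for illustration), the script rewrites app.py in place before starting the service:

```shell
# Stand-in for the deployed app.py, so this sketch is self-contained;
# in the real VM the file is already on disk.
echo 'db_fqdn = "db-vm.vercel.app" # Modify the value with the actual database server' > app.py

# Deploy-time injection: bake the real host into the source before startup.
DB_HOST="10.0.0.12"   # hypothetical address discovered at deploy time
sed -i "s|^db_fqdn = .*|db_fqdn = \"${DB_HOST}\"|" app.py

grep db_fqdn app.py   # db_fqdn = "10.0.0.12"
```

The drawback is visible immediately: the file on disk no longer matches the file in the repository.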

Updated configuration approach

Instead of hardcoding the backend location, the Flask app should read it from environment variables.

import os

db_fqdn = os.getenv("DB_API_HOST", "db-api")
db_port = os.getenv("DB_API_PORT", "8000")

api_url_base = f"http://{db_fqdn}:{db_port}/employee"

This small change allows the application to run in any environment without modifying the source code.
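To see the fallback behavior in action, here is a small self-contained check; the host 10.0.0.12 and port 8080 are illustrative values, not part of the real deployment:

```python
import os

def backend_url():
    # Same logic as the updated app.py: environment first, sensible default second
    db_fqdn = os.getenv("DB_API_HOST", "db-api")
    db_port = os.getenv("DB_API_PORT", "8000")
    return f"http://{db_fqdn}:{db_port}/employee"

print(backend_url())                      # http://db-api:8000/employee

# What a deployment would export before launching the service:
os.environ["DB_API_HOST"] = "10.0.0.12"
os.environ["DB_API_PORT"] = "8080"
print(backend_url())                      # http://10.0.0.12:8080/employee
```

The same code produces the right URL in both cases; only the environment changes.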

Other Files in the Application Tier

Besides app.py, the application tier contains several other files.

App VM/
 ├── app.py
 ├── requirements.txt
 ├── gunicorn.start
 ├── templates/
 │    ├── base.html
 │    ├── homepage.html
 │    └── modals.html
 └── static/

Files to keep

These files are part of the application and remain unchanged for now:

  • templates/base.html
  • templates/homepage.html
  • templates/modals.html

These define the UI that the Flask app renders. Anything under the static/ folder also remains unchanged.

Files to review

requirements.txt

This file will later become the dependency list for the Docker image. For now it remains unchanged.

Files kept only as reference

gunicorn.start

This script starts the Flask service in the VM environment. In the containerized version, this logic will be replaced by a Dockerfile command.
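As a preview, the role of gunicorn.start collapses into a single instruction in the Dockerfile. This is only a sketch (the real Dockerfile comes in the next article); the base image, port, and module path are assumptions:

```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Replaces gunicorn.start: one foreground process, bound to all interfaces
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "app:app"]
```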

Reviewing the Database Tier

The backend API service is located in:

DB VM/

Key files include:

DB VM/
 ├── app.py
 ├── employee_database.py
 ├── employee_routes.py
 ├── employee_models.py
 ├── requirements.txt
 ├── gunicorn.start
 └── MOCK_DATA.json

Most of these files do not require changes for containerization. However, one file contains an infrastructure assumption.

MongoDB connection configuration

In employee_database.py, the database connection is defined on line 15:

MONGO_DETAILS = "mongodb://localhost:27017"

This assumes MongoDB is always running on the same machine.

In the containerized architecture, MongoDB will run in its own container, so this value must also be externalized.

Updated version

import os

MONGO_DETAILS = os.getenv(
    "MONGO_DETAILS",
    "mongodb://mongo:27017"
)

Now the database location can be supplied dynamically.
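A quick way to confirm that a supplied value wins over the default (the 10.0.0.20 address is made up):

```python
import os

os.environ.pop("MONGO_DETAILS", None)   # ensure a clean slate for the demo

# With nothing set, the container-friendly default applies
assert os.getenv("MONGO_DETAILS", "mongodb://mongo:27017") == "mongodb://mongo:27017"

# A deployment-supplied value takes precedence without touching the code
os.environ["MONGO_DETAILS"] = "mongodb://10.0.0.20:27017"
print(os.getenv("MONGO_DETAILS", "mongodb://mongo:27017"))   # mongodb://10.0.0.20:27017
```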

Understanding the Role of the Startup Scripts

In the original VM environment, several startup scripts orchestrate the deployment.

Examples include:

customize-app1-vm.start
customize-app2-vm.start
customize-db-vm.start
customize-web1-vm.start
customize-web2-vm.start

These scripts perform tasks such as:

  • injecting configuration into the application
  • wiring service endpoints together
  • modifying Nginx upstream servers
  • starting the application processes

For example, the web VM scripts update the Nginx configuration with the IP addresses of the application VMs so that traffic can be load balanced.
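A sketch of the resulting Nginx configuration (the IP addresses are the kind of placeholders the scripts fill in at deploy time; they are not real values from the repository):

```nginx
# Upstream pool written by the web-VM startup script at deploy time
upstream app_servers {
    server 10.0.0.11:5000;   # app1 VM
    server 10.0.0.12:5000;   # app2 VM
}

server {
    listen 80;

    location / {
        proxy_pass http://app_servers;   # round-robin load balancing by default
    }
}
```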

While these scripts are essential in the VM environment, they will not be used directly in the containerized version.

Instead, their responsibilities will be replaced by container-native mechanisms.

How Container Platforms Replace These Scripts

The behavior implemented by the VM startup scripts will be translated into container-native configuration.

VM Startup Script Role               Container Replacement
Inject service endpoints             Environment variables
Modify application configuration     Runtime configuration
Start services                       Dockerfile CMD / ENTRYPOINT
Provide service discovery            Docker networking
Configure load balancing             Container networking or orchestration

This translation allows the application images to remain immutable, while the runtime environment handles configuration and service discovery.
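Concretely, everything in the table can be expressed in one container-native file. This is only a sketch of where we are heading; the service names, image tags, and ports are assumptions, not final values:

```yaml
services:
  mongo:
    image: mongo:7                      # service discovery: reachable as "mongo"
  db-api:
    image: db-api:latest                # hypothetical image, built in Part 3
    environment:
      MONGO_DETAILS: mongodb://mongo:27017
  flask-app:
    image: flask-app:latest             # hypothetical image, built in Part 3
    environment:
      DB_API_HOST: db-api
      DB_API_PORT: "8000"
    ports:
      - "80:5000"
```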

The Modernization Baseline

After selecting the correct source directories and removing unnecessary files, the working baseline for the modernization effort looks like this.

project-baseline/
 ├── app/
 │    ├── app.py
 │    ├── requirements.txt
 │    ├── templates/
 │    └── static/
 ├── db/
 │    ├── app.py
 │    ├── employee_database.py
 │    ├── employee_routes.py
 │    ├── employee_models.py
 │    ├── MOCK_DATA.json
 │    └── requirements.txt
 └── vm-scripts-reference/
      ├── customize-app1-vm.start
      ├── customize-app2-vm.start
      ├── customize-db-vm.start
      ├── customize-web1-vm.start
      └── customize-web2-vm.start

The VM scripts remain in the repository only as reference material for understanding the original deployment logic.

Why These Changes Are Important

At this stage we have made only two functional code changes:

  • externalizing the backend API location in the Flask app
  • externalizing the MongoDB connection string in the database service

Even though these changes are small, they are extremely important.

They allow the application to:

  • run without modifying source code during deployment
  • receive configuration dynamically
  • operate correctly in container environments
  • integrate with container networking and service discovery

With these changes complete, the application is finally ready for the next step.

What Comes Next

Now that the application is no longer tied to VM-specific configuration, we can begin the actual containerization process.

In the next article, I will:

  • create Dockerfiles for the Flask and FastAPI services
  • build container images for each tier
  • introduce a MongoDB container
  • prepare the system to run using Docker

This will be the first step toward running the entire application stack anywhere a container runtime is available.

Series Progress

  • Part 1 — Understanding the Original 3-Tier Architecture: Overview of the application and its VM-based deployment model.
  • Part 2 — Preparing the Application for Containerization: Removing VM-specific assumptions and making the application portable.

Next:

  • Part 3 — Containerizing the Application with Docker

This is where the real transformation begins.