Setting Up
As we work through these labs you will, at various points, be creating resources with unique names and copying, modifying and pasting templates and code snippets.
To keep track of these it is highly advisable to create a document on your local PC to act as a scratchpad, holding these pieces of data ready to copy and paste into the SSH terminal and the GCP console.
A simple note-taking app will work well. Visual Studio Code and VSCodium are both excellent for preserving code unmodified (the two products are almost identical: VS Code may receive updates slightly sooner and contains more telemetry to report usage statistics, while VSCodium is released under a more permissive open-source licence; both will work very well for courses on this site).
Microsoft Word is also usable, but be aware that by default Word can modify characters such as the double quotes in quoted strings. This is hard to spot when copying and pasting but can break config files and Python scripts. If you are using Word, search for the instructions for your version on how to turn this off.
Finally, you will need an SSH (Secure Shell) client. There are detailed instructions on configuring these for Mac, Linux and Windows later in this document. GCP also provides a browser-based SSH option via the Cloud Console, but we will set up local SSH for a more production-like experience.
Conventions for Naming
Where we are naming and tagging items, we will keep all names lower case and use a standard hyphen "-" instead of spaces. This follows the naming conventions used across GCP and is consistent with the approach used throughout the Clouds and Light platform.
All the courses on this site are templates which can be easily customised. For this course we are naming all our resources with an "intro-course" prefix but this could be easily modified to your company or organisation.
In general you are safe to copy the exact name for resources suggested in the documentation e.g. "intro-course-vpc". However, there will be some areas you will need to create a customised name and these are highlighted.
Sometimes you will need to copy and paste exact and unique values from the console or from a terminal session, this will be highlighted in the lab notes.
Setting up the GCP Console
Before we begin we will set up the basics of the GCP Cloud Console for enhanced security and to make the later stages of the lab easier to work through.
When you first create a Google Cloud account, you are the Owner of the account and have full access to all services. While this is useful for getting started, it is always best practice to follow the principle of least privilege and create specific service accounts and roles for managing the environment.
If you have already configured your GCP account and linked it to this platform then you can skip this step.
Creating a GCP Project
All resources in GCP live within a project. Projects are the fundamental organisational unit in Google Cloud and serve as the boundary for billing, permissions and resource management. Unlike AWS where resources are organised by region within an account, GCP uses projects as the primary container.
Go to the GCP Console at https://console.cloud.google.com/ and sign in with your Google account.
If this is your first time using GCP, you may be prompted to accept the terms of service and set up billing. GCP offers a generous $300 free trial credit for new accounts which is valid for 90 days. This will be more than sufficient for completing these labs.
Create a New Project
In the top navigation bar of the Cloud Console you will see a project selector dropdown (it may show "Select a project" or the name of an existing project).
Click on the project selector and then click "New Project" in the top right of the dialog.
For the Project name enter "intro-course-cloudlabs"
For the Organisation, if you have one set up you can leave it as is, or select "No organisation" if you are using a personal Google account.
Click "Create" to create the project.
Once the project is created, ensure it is selected in the project selector dropdown at the top of the console. You should see "intro-course-cloudlabs" displayed.
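If you prefer the command line, the project can also be created with gcloud (a sketch, assuming you have the gcloud CLI installed and authenticated — installation is covered later in this document). Note that project IDs must be globally unique across all of GCP, so you may need to append a suffix if "intro-course-cloudlabs" is already taken.

```shell
# Create the project (project IDs must be globally unique; append a suffix if taken)
gcloud projects create intro-course-cloudlabs --name="intro-course-cloudlabs"

# Make it the active project for subsequent gcloud commands
gcloud config set project intro-course-cloudlabs
```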
Enabling Required APIs
GCP requires you to explicitly enable the APIs for services you wish to use. This is different from AWS where most services are available immediately. We need to enable the APIs for the services we will use in this lab.
In the Cloud Console, go to the Navigation Menu (the three horizontal lines in the top left) and select "APIs & Services" then "Library".
Search for and enable each of the following APIs by clicking on them and then clicking the "Enable" button:
- Compute Engine API — this provides access to virtual machine instances, VPC networks and firewall rules
- Cloud SQL Admin API — this provides access to the Cloud SQL managed database service
- Cloud Resource Manager API — this is needed for project-level operations
Note
Enabling the Compute Engine API may take a minute or two. You will see a progress indicator while it initialises. Do not navigate away until the API is fully enabled.
gcloud CLI Equivalent
gcloud services enable compute.googleapis.com
gcloud services enable sqladmin.googleapis.com
gcloud services enable cloudresourcemanager.googleapis.com
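You can confirm the APIs are active by listing the enabled services (a sketch, assuming the gcloud CLI is configured against your project; the grep simply narrows the output to the three APIs we care about):

```shell
# List enabled services and check for the three APIs we just enabled
gcloud services list --enabled | grep -E 'compute|sqladmin|cloudresourcemanager'
```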
IAM Setup
GCP uses Identity and Access Management (IAM) to control who has what access to which resources. Unlike AWS which uses IAM users and groups with attached policies, GCP primarily uses Google accounts (email addresses) and grants them roles on projects or resources.
For a lab environment, your Google account will already have the Owner role on the project which gives full access. However, for a more production-like setup we will create a service account that could be used for programmatic access.
In the Cloud Console, go to "IAM & Admin" in the Navigation Menu, then select "Service Accounts".
Click "Create Service Account" at the top of the page.
For the service account name enter "intro-course-admin"
For the description enter "Service account for Clouds and Light lab administration"
Click "Create and Continue"
Under "Grant this service account access to project", add the role "Editor" by searching for it in the role dropdown.
Click "Continue" then "Done"
Note
In a production environment you would follow the principle of least privilege and grant only the specific roles needed rather than the broad Editor role. GCP has hundreds of predefined roles and also supports custom roles for fine-grained access control.
gcloud CLI Equivalent
gcloud iam service-accounts create intro-course-admin \
--display-name="intro-course-admin" \
--description="Service account for Clouds and Light lab administration"
gcloud projects add-iam-policy-binding intro-course-cloudlabs \
--member="serviceAccount:intro-course-admin@intro-course-cloudlabs.iam.gserviceaccount.com" \
--role="roles/editor"
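To verify the binding, you can inspect the project's IAM policy and filter for the new service account (a sketch using the standard gcloud filter/flatten pattern; gcloud CLI assumed):

```shell
# Show the roles granted to the intro-course-admin service account
gcloud projects get-iam-policy intro-course-cloudlabs \
  --flatten="bindings[].members" \
  --filter="bindings.members:intro-course-admin" \
  --format="table(bindings.role)"
```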
Customising the Console
To make the console easier to use you can pin frequently used services to the Navigation Menu. This is not essential but makes navigation quicker during the lab.
Go to the Navigation Menu (three horizontal lines in the top left corner). You will see a list of all GCP services grouped by category.
Hover over each of the following services and click the pin icon that appears next to them:
- VPC Network (under Networking)
- Compute Engine (under Compute)
- Cloud SQL (under Databases)
- Cloud Storage (under Storage)
- IAM & Admin (under IAM & Admin)
These pinned services will now appear at the top of your Navigation Menu for quick access throughout the lab.
Structure of GCP VPC Networks
GCP's networking model differs significantly from AWS. In GCP, a VPC network is a global resource — it spans all GCP regions worldwide. This is fundamentally different from AWS where a VPC is regional and confined to a single region.
However, the subnets within a GCP VPC are regional resources. Each subnet exists in exactly one region but is automatically available across all availability zones within that region. This is a simpler model than AWS where subnets are tied to individual availability zones.
GCP has two types of VPC networks:
- Auto mode — Automatically creates a subnet in every GCP region with predefined IP ranges. Convenient but less control.
- Custom mode — You define each subnet manually, choosing the region and IP range. This gives you full control and is recommended for production environments.
We will use a custom mode VPC so we have full control over our network design.
Note
GCP also differs from AWS in how firewall rules work. Instead of security groups that are attached to individual instances, GCP uses VPC-level firewall rules that are applied using network tags. A network tag is a label you apply to VM instances, and firewall rules can target instances with specific tags. This is a powerful and flexible model but requires a different way of thinking about network security compared to AWS security groups.
We will describe the subnets using the standard CIDR (Classless Inter-Domain Routing) notation.
For a description see - https://cloud.google.com/vpc/docs/subnets
The networks we will create are as follows:
| Scope | Name | Range | Available IPs |
| Global | intro-course-vpc | Custom Mode (no VPC-level CIDR) | — |
| europe-west2 (London) | intro-course-subnet-public | 10.0.8.0/24 | 252 |
| europe-west2 (London) | intro-course-subnet-private | 10.0.16.0/24 | 252 |
| europe-west2 (London) | intro-course-subnet-management | 10.0.0.0/28 | 12 |
Note
Unlike AWS, GCP custom mode VPCs do not have a VPC-level CIDR block; each subnet defines its own IP range independently. GCP reserves four addresses in each subnet: the network address (first), the default gateway (second), the second-to-last address (reserved for future use) and the broadcast address (last). A /24 subnet therefore provides 252 usable addresses, and a /28 provides 12.
Also note that because GCP subnets are regional (not per-availability-zone like AWS), we only need one public subnet and one private subnet to cover the entire europe-west2 region including all its zones. This is a significantly simpler model than the AWS approach of creating subnets per availability zone.
We are only using IPv4 addresses throughout this course. If you are deploying a new cloud application today that does not need to route to an existing IPv4 network, I would strongly recommend looking at IPv6, but describing how that works is beyond the scope of this course.
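The subnet sizing can be checked with a few lines of shell. GCP documents four reserved addresses per subnet, and this sketch computes the usable count from that assumption:

```shell
#!/usr/bin/env bash
# Usable addresses in a GCP subnet: 2^(32 - prefix) minus the 4 reserved IPs
usable() {
  local prefix=$1
  echo $(( (1 << (32 - prefix)) - 4 ))
}

usable 24   # /24 -> 252
usable 28   # /28 -> 12
```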
Checking for a Default VPC
Go to the console homepage and select "VPC Network" from the Navigation Menu (or search for "VPC" in the top search bar).
Select "VPC networks" in the left hand menu.
You will notice that GCP has created a "default" VPC network automatically. Unlike our approach with AWS, we will not delete this default network — instead we will create our own custom VPC alongside it. However, if you prefer a clean environment, you can delete the default network by selecting it and clicking "Delete VPC Network" at the top of the page.
Warning
If you choose to delete the default VPC network, you must first delete all firewall rules associated with it and ensure no other resources depend on it. For this lab, it is safe to leave the default network in place — we will simply not use it.
Creating the VPC
In the VPC Networks page, click "Create VPC Network" at the top.
For "Name" enter intro-course-vpc
For "Description" enter "VPC for Clouds and Light introductory lab course"
Under "Subnet creation mode" select "Custom"
We will now create our three subnets. For the first subnet:
Subnet 1 — Public Subnet
For "Name" enter intro-course-subnet-public
For "Region" select europe-west2 (London)
For "IPv4 range" enter 10.0.8.0/24
Leave "Private Google Access" as Off for now
Leave "Flow logs" as Off
Click "Add Subnet" to add a second subnet.
Subnet 2 — Private Subnet
For "Name" enter intro-course-subnet-private
For "Region" select europe-west2 (London)
For "IPv4 range" enter 10.0.16.0/24
Leave "Private Google Access" as Off for now
Leave "Flow logs" as Off
Click "Add Subnet" to add a third subnet.
Subnet 3 — Management Subnet
For "Name" enter intro-course-subnet-management
For "Region" select europe-west2 (London)
For "IPv4 range" enter 10.0.0.0/28
Leave "Private Google Access" as Off for now
Leave "Flow logs" as Off
Under "Firewall rules" do not select any of the predefined firewall rules. We will create our own custom firewall rules in the next section.
Under "Dynamic routing mode" leave as "Regional"
Click "Create" to create the VPC network.
You should see the VPC network being created. After a few seconds it will appear in your list of VPC networks.
Click on "intro-course-vpc" in the list to view the details. You should see all three subnets listed under the "Subnets" tab.
Note
In GCP, routing within the VPC is handled automatically. All subnets within a VPC can communicate with each other by default, and GCP automatically creates routes for each subnet range. There is no need to manually create route tables and associate them with subnets as in AWS. However, to provide Internet access, we will need to ensure instances have an external IP address. GCP VPCs have an implicit Internet gateway — any instance with an external IP can reach the Internet without creating a separate gateway resource.
gcloud CLI Equivalent
# Create the custom VPC network
gcloud compute networks create intro-course-vpc \
--subnet-mode=custom \
--description="VPC for Clouds and Light introductory lab course"
# Create the public subnet
gcloud compute networks subnets create intro-course-subnet-public \
--network=intro-course-vpc \
--region=europe-west2 \
--range=10.0.8.0/24
# Create the private subnet
gcloud compute networks subnets create intro-course-subnet-private \
--network=intro-course-vpc \
--region=europe-west2 \
--range=10.0.16.0/24
# Create the management subnet
gcloud compute networks subnets create intro-course-subnet-management \
--network=intro-course-vpc \
--region=europe-west2 \
--range=10.0.0.0/28
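Once created, you can list the subnets to confirm their regions and ranges (gcloud CLI assumed):

```shell
# List all subnets in the custom VPC with their regions and CIDR ranges
gcloud compute networks subnets list \
  --filter="network:intro-course-vpc" \
  --format="table(name,region,ipCidrRange)"
```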
Costs
Setting up a VPC network and subnets in GCP carries no standing charges. This means you can set them up in your personal account or development environments and not worry about charges. However, this is not true of every GCP network service — services such as Cloud NAT and Cloud Armor have hourly and data transfer charges, so if you are using these it makes sense to add automation around deploying and destroying them.
The ratio of compute, network and storage costs varies across the major cloud providers, and each has some apparent outliers in its pricing. While there are always integration advantages to using a cloud provider's native services, it is sometimes worth examining the costs of services like Cloud NAT and architecting with a cost lens.
Setting Up Firewall Rules
In GCP, firewall rules control what network traffic is allowed to and from your VM instances. Unlike AWS where security groups are attached directly to instances, GCP firewall rules are defined at the VPC level and applied to instances using network tags.
A network tag is a simple text label that you assign to a VM instance when you create it. Firewall rules can then target all instances with a particular tag. This means you define the security policy once in the firewall rules and simply tag instances to apply the appropriate rules.
Every VPC in GCP has two implied firewall rules that cannot be deleted:
- An implied allow egress rule that allows all outbound traffic from all instances
- An implied deny ingress rule that blocks all inbound traffic to all instances
These implied rules have the lowest priority (65535), so any rule you create will take precedence over them.
Thinking about the application architecture we are going to need the following:
- The bastion host in the management network will receive inbound SSH connections from the Internet. It will be tagged with "bastion".
- The servers in our public subnet will respond to HTTP and HTTPS connections from the Internet. They will be tagged with "web".
- The servers in our public subnet will allow inbound SSH connections from hosts in the management network only.
- The servers in the private subnet will allow inbound HTTP and HTTPS connections from servers in the public subnet only. They will be tagged with "app".
- The servers in the private subnet will allow inbound SSH connections from the management network only.
- The servers in the private subnet should not be able to initiate outbound connections to the Internet.
Creating the Firewall Rules
In the VPC Network section of the console, select "Firewall" in the left hand menu.
You may see some default firewall rules if you did not delete the default VPC. We will ignore these and create our own rules for the intro-course-vpc network.
For each of the firewall rules below, click "Create Firewall Rule" at the top of the page.
intro-bastion-ssh
This rule allows SSH access to the bastion host from the Internet.
For "Name" enter intro-bastion-ssh
For "Description" enter "Allow SSH access to bastion host from the Internet"
For "Network" select intro-course-vpc
For "Priority" enter 1000
For "Direction of traffic" select "Ingress"
For "Action on match" select "Allow"
For "Targets" select "Specified target tags"
For "Target tags" enter bastion
For "Source filter" select "IPv4 ranges"
For "Source IPv4 ranges" enter 0.0.0.0/0
Under "Protocols and ports" select "Specified protocols and ports", check "TCP" and enter 22
Note
As with the AWS lab, if you are running these lab exercises from a home PC with a long lived static IP address or you are using a VPN with a static IP, you can replace "0.0.0.0/0" with your specific IP address followed by /32 (e.g. "203.0.113.45/32"). This increases security. However, if you find you cannot SSH into your bastion instance later, checking your source IP address is a key debugging step.
Click "Create" to create the rule.
intro-web-http
This rule allows HTTP and HTTPS traffic from the Internet to web server instances.
For "Name" enter intro-web-http
For "Description" enter "Allow HTTP and HTTPS access to web servers from the Internet"
For "Network" select intro-course-vpc
For "Priority" enter 1000
For "Direction of traffic" select "Ingress"
For "Action on match" select "Allow"
For "Targets" select "Specified target tags"
For "Target tags" enter web
For "Source filter" select "IPv4 ranges"
For "Source IPv4 ranges" enter 0.0.0.0/0
Under "Protocols and ports" select "Specified protocols and ports", check "TCP" and enter 80,443
Click "Create" to create the rule.
intro-web-ssh
This rule allows SSH access to web servers from the management subnet only.
For "Name" enter intro-web-ssh
For "Description" enter "Allow SSH access to web servers from the management network"
For "Network" select intro-course-vpc
For "Priority" enter 1000
For "Direction of traffic" select "Ingress"
For "Action on match" select "Allow"
For "Targets" select "Specified target tags"
For "Target tags" enter web
For "Source filter" select "IPv4 ranges"
For "Source IPv4 ranges" enter 10.0.0.0/28
Under "Protocols and ports" select "Specified protocols and ports", check "TCP" and enter 22
Click "Create" to create the rule.
intro-app-http
This rule allows HTTP and HTTPS traffic to application servers from the public subnet only.
For "Name" enter intro-app-http
For "Description" enter "Allow HTTP and HTTPS access to application servers from the public subnet"
For "Network" select intro-course-vpc
For "Priority" enter 1000
For "Direction of traffic" select "Ingress"
For "Action on match" select "Allow"
For "Targets" select "Specified target tags"
For "Target tags" enter app
For "Source filter" select "IPv4 ranges"
For "Source IPv4 ranges" enter 10.0.8.0/24
Under "Protocols and ports" select "Specified protocols and ports", check "TCP" and enter 80,443
Click "Create" to create the rule.
intro-app-ssh
This rule allows SSH access to application servers from the management subnet only.
For "Name" enter intro-app-ssh
For "Description" enter "Allow SSH access to application servers from the management network"
For "Network" select intro-course-vpc
For "Priority" enter 1000
For "Direction of traffic" select "Ingress"
For "Action on match" select "Allow"
For "Targets" select "Specified target tags"
For "Target tags" enter app
For "Source filter" select "IPv4 ranges"
For "Source IPv4 ranges" enter 10.0.0.0/28
Under "Protocols and ports" select "Specified protocols and ports", check "TCP" and enter 22
Click "Create" to create the rule.
intro-app-deny-egress
This rule denies all outbound Internet traffic from application servers. Remember that GCP has an implied allow-all egress rule at priority 65535, so we need to create a higher priority deny rule.
For "Name" enter intro-app-deny-egress
For "Description" enter "Deny all outbound Internet traffic from application servers"
For "Network" select intro-course-vpc
For "Priority" enter 900
For "Direction of traffic" select "Egress"
For "Action on match" select "Deny"
For "Targets" select "Specified target tags"
For "Target tags" enter app
For "Destination filter" select "IPv4 ranges"
For "Destination IPv4 ranges" enter 0.0.0.0/0
Under "Protocols and ports" select "Allow all"
Click "Create" to create the rule.
intro-app-allow-internal-egress
This rule allows the application servers to communicate with other instances within the VPC. This is needed so the application server can respond to requests from the web server and later communicate with the Cloud SQL database. We give this a higher priority (lower number) than the deny rule so it takes precedence for internal traffic.
For "Name" enter intro-app-allow-internal-egress
For "Description" enter "Allow outbound traffic from app servers to VPC internal addresses"
For "Network" select intro-course-vpc
For "Priority" enter 800
For "Direction of traffic" select "Egress"
For "Action on match" select "Allow"
For "Targets" select "Specified target tags"
For "Target tags" enter app
For "Destination filter" select "IPv4 ranges"
For "Destination IPv4 ranges" enter 10.0.0.0/8
Under "Protocols and ports" select "Allow all"
Click "Create" to create the rule.
Note
The combination of the deny-egress rule (priority 900) and the allow-internal-egress rule (priority 800) means that application servers can communicate within the VPC but cannot initiate connections to the Internet. GCP evaluates firewall rules by priority — lower numbers have higher priority. Since 800 is a lower number than 900, the allow rule for internal traffic will be evaluated first.
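The priority ordering can be illustrated with a small shell sketch. This is a simplified model of the evaluation, not GCP's actual engine, and the 10.0.0.0/8 CIDR match is approximated with a string prefix test:

```shell
#!/usr/bin/env bash
# Simplified model of egress evaluation for an "app"-tagged instance:
# the matching rule with the lowest priority number wins.
evaluate_egress() {
  local dest=$1
  # Priority 800: allow traffic to 10.0.0.0/8 (approximated as a "10." prefix)
  if [[ $dest == 10.* ]]; then
    echo "ALLOW (priority 800, internal)"
    return
  fi
  # Priority 900: deny traffic to 0.0.0.0/0 (everything else)
  echo "DENY (priority 900, internet)"
}

evaluate_egress 10.0.8.5       # -> ALLOW (priority 800, internal)
evaluate_egress 93.184.216.34  # -> DENY (priority 900, internet)
```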
Your firewall rules list should now show all seven custom rules for the intro-course-vpc network.
gcloud CLI Equivalent
# Allow SSH to bastion from Internet
gcloud compute firewall-rules create intro-bastion-ssh \
--network=intro-course-vpc \
--direction=INGRESS \
--priority=1000 \
--action=ALLOW \
--rules=tcp:22 \
--source-ranges=0.0.0.0/0 \
--target-tags=bastion \
--description="Allow SSH access to bastion host from the Internet"
... (66 more lines)
Network Testing
Clicking the "Refresh" button below will test your work so far by connecting to your GCP project and testing the VPC setup, the subnets and the firewall rules. This gives you the opportunity to check before you complete the next sections.
Laptop Setup
This section covers the steps needed to set up your laptop to access the GCP compute instances.
Installing the gcloud CLI
The gcloud CLI is Google Cloud's command line tool for managing GCP resources. While not strictly essential for this lab, it is extremely useful and we will use it alongside the console throughout. It also provides a convenient way to SSH into instances.
The homepage for the Google Cloud SDK is here https://cloud.google.com/sdk/docs/install
Follow the instructions for your operating system to install the SDK. Once installed you should be able to test by running gcloud --version.
Once installed, initialise the SDK with:
gcloud init
This will guide you through signing in with your Google account and selecting your project. Choose "intro-course-cloudlabs" when prompted for the default project. Set the default compute region to europe-west2 and zone to europe-west2-a.
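If you skipped any of these prompts, the same defaults can be set directly with the standard gcloud config properties:

```shell
gcloud config set project intro-course-cloudlabs
gcloud config set compute/region europe-west2
gcloud config set compute/zone europe-west2-a
```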
You can verify the configuration with:
gcloud config list
Creating SSH Keys
GCP provides multiple ways to access VM instances via SSH:
- gcloud compute ssh — the simplest option, handles SSH key management automatically
- Browser-based SSH — click the SSH button in the Cloud Console
- Manual SSH keys — create your own keys and add them to project or instance metadata
For a production-like setup and to configure ProxyJump through the bastion host, we will use manual SSH keys. This also gives us better control over the SSH configuration.
Configuring Mac / Linux
As the default login user go to your home directory e.g. /Users/Alistair
Create a subdirectory for your ssh keys e.g. "mkdir ./keys"
Check to see if there is a ".ssh" subdirectory using "ls -a", if not create it with "mkdir .ssh"
We are going to create three SSH key pairs for our three server types. In your terminal, run the following commands:
ssh-keygen -t ed25519 -f ~/keys/intro-bastion -C "bastion" -N ""
ssh-keygen -t ed25519 -f ~/keys/intro-web -C "web" -N ""
ssh-keygen -t ed25519 -f ~/keys/intro-application -C "application" -N ""
This creates three key pairs using the Ed25519 algorithm (more secure and faster than RSA). The -N "" flag sets an empty passphrase — in a production environment you would use a passphrase but for lab purposes this simplifies the workflow.
SSH is strict about the file permissions on private keys, so ensure they are readable only by your user login:
chmod 400 ~/keys/intro-bastion
chmod 400 ~/keys/intro-web
chmod 400 ~/keys/intro-application
Now we need to add the public keys to GCP. We will add them as project-wide SSH keys so they are available on all instances.
In the GCP Console, go to Compute Engine, then select "Metadata" in the left hand menu.
Click on the "SSH Keys" tab, then click "Edit" at the top.
Click "Add Item" and paste the contents of each public key file. You can view a public key file with the command:
cat ~/keys/intro-bastion.pub
Add all three public keys (intro-bastion.pub, intro-web.pub, intro-application.pub). The console will derive the username for each entry from the comment field of the key.
Click "Save" to apply the changes.
Note
When you add SSH keys to project metadata, the username is derived from the key comment (the -C flag we used when generating the keys). The usernames will be "bastion", "web" and "application" respectively. Alternatively you can edit the username prefix before the key text in the metadata entry.
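For reference, an entry in the project metadata SSH keys list takes the form USERNAME:KEY_TYPE KEY_DATA COMMENT. The example below is illustrative only, with the key material truncated:

```
bastion:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA... bastion
```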
Once this is done, change to your ".ssh" directory
Create a new file called "config" using your favourite text editor (VS Code for graphical editing; vi, vim, emacs or nano if you prefer to work in the terminal).
Insert the text below
Host bastion
User bastion
HostName
Port 22
IdentityFile ~/keys/intro-bastion
Host web
User web
HostName 10.0.8.10
Port 22
IdentityFile ~/keys/intro-web
ProxyJump bastion
Host application
User application
HostName 10.0.16.10
Port 22
IdentityFile ~/keys/intro-application
ProxyJump bastion
Save and exit
We will need to edit this file one more time to add the bastion host's external IP address, but this will allow us to seamlessly and securely access all the VM instances we will set up in GCP.
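You can preview how OpenSSH will resolve these host aliases without opening a connection, using the -G flag against a copy of the config. The bastion IP below is a placeholder from the documentation range, used purely for illustration:

```shell
# Write a throwaway config and ask ssh to print the resolved options for "web"
cat > /tmp/ssh-config-demo <<'EOF'
Host bastion
  User bastion
  HostName 203.0.113.45
  IdentityFile ~/keys/intro-bastion

Host web
  User web
  HostName 10.0.8.10
  IdentityFile ~/keys/intro-web
  ProxyJump bastion
EOF

# -G resolves the configuration without connecting to anything
ssh -G -F /tmp/ssh-config-demo web | grep -E '^(user|hostname|proxyjump) '
```

This should show the user, hostname and proxyjump values that will apply when you later run "ssh web".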
Note
If you prefer, you can use gcloud compute ssh to connect to instances. For the bastion host this would be gcloud compute ssh intro-bastion-host --zone=europe-west2-a. However, gcloud does not natively support ProxyJump through a bastion, so the manual SSH config approach gives us more flexibility for our three-tier setup.
gcloud CLI Equivalent for Adding SSH Keys
# Add SSH keys to project metadata
gcloud compute project-info add-metadata \
--metadata-from-file=ssh-keys=<(
echo "bastion:$(cat ~/keys/intro-bastion.pub)"
echo "web:$(cat ~/keys/intro-web.pub)"
echo "application:$(cat ~/keys/intro-application.pub)"
)
Configuring Windows
First check that SSH is installed on your system. Open the PowerShell console and run the command ssh. If you see a list of usage flags, continue with the next steps; if not, install SSH using the guide "How to Enable and Use Windows 10's New Built-in SSH Commands".
As the default login user go to your home directory e.g. C:\Users\User1\, make a note of this Directory
Create a subdirectory for your ssh keys e.g. "mkdir keys"
Check to see if there is a ".ssh" subdirectory using "dir .", if not create it with "mkdir .ssh"
Generate three key pairs in PowerShell:
ssh-keygen -t ed25519 -f C:\Users\User1\keys\intro-bastion -C "bastion" -N ""
ssh-keygen -t ed25519 -f C:\Users\User1\keys\intro-web -C "web" -N ""
ssh-keygen -t ed25519 -f C:\Users\User1\keys\intro-application -C "application" -N ""
Change the path C:\Users\User1\ to the actual Windows path to your home directory.
Add the public keys to GCP project metadata following the same console steps described in the Mac/Linux section above. You can view a public key file in PowerShell with:
type C:\Users\User1\keys\intro-bastion.pub
Once this is done, change to your ".ssh" directory, in PowerShell "cd .ssh".
Create a new file called "config" using a text editor, from PowerShell you can use "notepad config";
Insert the text below, note change the path C:\Users\User1\keys\ to the actual Windows path to your keys directory;
Host bastion
User bastion
HostName
Port 22
IdentityFile C:\Users\User1\keys\intro-bastion
Host web
User web
HostName 10.0.8.10
Port 22
IdentityFile C:\Users\User1\keys\intro-web
ProxyJump bastion
Host application
User application
HostName 10.0.16.10
Port 22
IdentityFile C:\Users\User1\keys\intro-application
ProxyJump bastion
Save and exit. If you used notepad it may insist on saving the file with a .txt extension, you can remove this in file explorer or in PowerShell by using "mv config.txt config".
We will need to edit this file one more time but this will allow us to seamlessly and securely access all the VM instances we will set up in GCP.
Create a Compute Engine Instance
Creating our first Compute Engine instance for the Bastion Host
In the console, search for Compute Engine and go to the Compute Engine homepage. Select "VM Instances" in the left hand menu. Ensure that your chosen GCP project "intro-course-cloudlabs" is selected in the project dropdown at the top of the console.
Click the "Create Instance" button at the top of the page.
For "Name" enter intro-bastion-host
For "Region" select europe-west2 (London) and for "Zone" select europe-west2-a
Under "Machine configuration", select the "E2" series and for "Machine type" select e2-micro
Under "Boot disk" click "Change". Select Debian as the operating system and Debian GNU/Linux 12 (bookworm) as the version. Leave the disk type as "Balanced persistent disk" and size as "10 GB". Click "Select".
Under "Firewall" do not check either of the "Allow HTTP traffic" or "Allow HTTPS traffic" checkboxes — we have already created custom firewall rules for this.
Click "Advanced options" at the bottom to expand it, then click "Networking".
Under "Network tags" enter bastion — this is how our firewall rules will target this instance.
Under "Network interfaces", click on the default interface to edit it.
For "Network" select intro-course-vpc
For "Subnetwork" select intro-course-subnet-management (10.0.0.0/28)
For "External IPv4 address" select "Ephemeral" — this will assign a public IP address that may change when the instance is stopped and restarted.
Note
In a production environment you would use a static external IP address for the bastion host. You can reserve a static IP in the VPC Network section under "IP addresses". For this lab, an ephemeral IP is sufficient but you should be aware it may change if you stop and restart the instance.
Leave all other settings as default.
Click "Create" to launch the instance.
Once the instance has launched it will have an external IP address. Copy this to your scratchpad under "Bastion External IP Address".
Go back to your ".ssh/config" file on your laptop and change the first block to include the external IP address in the HostName field as shown in the example below. If you did not make a note of the instance's external IP address go to the Compute Engine instances list and it will be displayed in the "External IP" column.
Host bastion
User bastion
HostName 34.89.100.52
Port 22
IdentityFile ~/keys/intro-bastion
Save and exit
This instance will be our Bastion or Jump host. It is the only instance we will connect to directly; routing all interactive shell access through a single hardened host is a common security pattern.
Because we set up the ssh config file, you should now be able to connect just by typing "ssh bastion" on your command line. If it asks you to accept the host key, type "yes"
Once you are logged in we do not need to do much more, so just type "exit" to leave the login.
gcloud CLI Equivalent
gcloud compute instances create intro-bastion-host \
--zone=europe-west2-a \
--machine-type=e2-micro \
--image-family=debian-12 \
--image-project=debian-cloud \
--boot-disk-size=10GB \
--boot-disk-type=pd-balanced \
--network=intro-course-vpc \
--subnet=intro-course-subnet-management \
--tags=bastion
Creating our Web Server
Once we have created our bastion host, we can create the web server.
In the Compute Engine console, select "VM Instances" and click "Create Instance".
For "Name" enter intro-web-server
For "Region" select europe-west2 and for "Zone" select europe-west2-a
Under "Machine configuration", select the "E2" series and for "Machine type" select e2-micro
Under "Boot disk" click "Change". Select Debian GNU/Linux 12 (bookworm). Leave disk type as "Balanced persistent disk" and size as "10 GB". Click "Select".
Do not check "Allow HTTP traffic" or "Allow HTTPS traffic" under Firewall — our custom rules handle this.
Click "Advanced options" then "Networking".
Under "Network tags" enter web
Under "Network interfaces", click on the default interface to edit it.
For "Network" select intro-course-vpc
For "Subnetwork" select intro-course-subnet-public (10.0.8.0/24)
For "Primary internal IPv4 address" select "Reserve a static internal IPv4 address" or choose "Custom" and enter 10.0.8.10
For "External IPv4 address" select "Ephemeral" — we need a public IP as this is a web server.
Leave all other settings as default.
Click "Create" to launch the instance.
It will take a few minutes to start the instance. You can use the time to look at the instance details or have a coffee.
Once the instance has started, the status will show a green tick mark. Make a note of the External IP address — copy this to your scratchpad under "Webserver External IP Address".
You can now connect using the following command on your laptop:
ssh web
This opens a connection to the bastion host, then uses this as a proxy to connect to the web server instance. You should see a Debian welcome message and a command prompt.
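For reference, the proxied connection works because the ssh config file set up earlier contains a block along the lines of the sketch below. The exact User, HostName and key path depend on your earlier edits (the values shown are assumptions based on the addresses used in this lab); ProxyJump is the OpenSSH option that routes the connection through the "bastion" entry.

```
Host web
    User bastion
    HostName 10.0.8.10
    ProxyJump bastion
    IdentityFile ~/keys/intro-bastion
```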
We now have our first server to build on.
gcloud CLI Equivalent
gcloud compute instances create intro-web-server \
--zone=europe-west2-a \
--machine-type=e2-micro \
--image-family=debian-12 \
--image-project=debian-cloud \
--boot-disk-size=10GB \
--boot-disk-type=pd-balanced \
--network=intro-course-vpc \
--subnet=intro-course-subnet-public \
--private-network-ip=10.0.8.10 \
--tags=web
Building the Webserver
We can now install software on our web server. Unlike the AWS lab where we needed to add a temporary security group for software updates, in GCP the web server already has an external IP address and the implied allow-all egress rule means it can download packages from the Internet by default. Our custom firewall rules only restrict egress for instances tagged "app", not "web".
Updating the Webserver
Going back to our terminal logged into the web server instance, we can now install the Apache webserver. On Debian we use apt-get instead of the yum package manager used on Amazon Linux. The Apache package on Debian is called apache2 rather than httpd.
First, update the package lists:
sudo apt-get update
Then install Apache:
sudo apt-get install -y apache2
The -y flag automatically answers yes to the installation prompts.
Once the webserver is installed, it will be started and enabled automatically on Debian. We can verify this with:
sudo systemctl status apache2
This should return a status report that looks like:
bastion@intro-web-server:~$ sudo systemctl status apache2
● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; preset: enabled)
Active: active (running) since Mon 2026-02-16 10:30:07 UTC; 25s ago
Docs: https://httpd.apache.org/docs/2.4/
Main PID: 1842 (apache2)
Status: "Total requests: 0; Idle/Busy workers 100/0; Requests/sec: 0; Bytes served/sec: 0 B/sec"
Tasks: 55 (limit: 1129)
Memory: 10.2M
CPU: 52ms
CGroup: /system.slice/apache2.service
├─1842 /usr/sbin/apache2 -k start
├─1844 /usr/sbin/apache2 -k start
└─1845 /usr/sbin/apache2 -k start
Warning
You may need to press the q key at this point to exit the status display.
If you do not see this status, double check the steps above and ensure you installed the apache2 package. Running dpkg -l | grep apache2 should list the package if it is installed.
To see your website in action, open your web browser and go to http://34.89.100.52/ (changing the IP address to the External IP address of your web server instance from your scratchpad; note this is http not https).
This should now show the default Apache Debian page with the heading "Apache2 Debian Default Page — It works!"
We are now ready to add content to our server, and gradually link it to other services.
Installing Additional Packages
Now that we have installed the webserver, we should install the additional software packages we will use for connecting to a SQL database in later stages.
We will install:
- The MySQL/MariaDB command line tools
- Python 3 and the PIP package manager (Python 3 is pre-installed on Debian 12 but PIP may not be)
- Using PIP we will install the Python MySQL connector package
To manage the database we will need to install the MariaDB command line tools. For the purposes of our exercise, MariaDB and MySQL are compatible.
These can be installed using the command
sudo apt-get install -y mariadb-client
You can test they have installed correctly using the command mariadb --version. You should see a response of the form "mariadb Ver 15.1 Distrib 10.11.6-MariaDB, for debian-linux-gnu (x86_64) using EditLine wrapper".
To install PIP and the Python venv module use:
sudo apt-get install -y python3-pip python3-venv
Once this is done we can install the mysql.connector module. On Debian 12, pip requires the use of a virtual environment or the --break-system-packages flag. For simplicity in this lab environment we will use the flag:
sudo pip3 install mysql-connector-python --break-system-packages
Warning
The --break-system-packages flag overrides a safety check in Debian 12 that prevents pip from installing packages system-wide. This is acceptable for a lab environment but in production you should use virtual environments (venv) to isolate Python dependencies.
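To illustrate the virtual environment alternative mentioned above, here is a minimal Python sketch (illustrative only, the lab itself uses the flag for simplicity; the "labenv" name is our own):

```python
# Sketch of the production-friendly alternative: create an isolated virtual
# environment instead of using --break-system-packages.
import os
import tempfile
import venv

env_dir = os.path.join(tempfile.mkdtemp(), "labenv")
venv.create(env_dir, with_pip=True)  # equivalent to: python3 -m venv labenv

# The venv gets its own pip; packages installed with it stay isolated
# from the system Python (paths shown are for Linux):
pip_path = os.path.join(env_dir, "bin", "pip")
print(os.path.exists(pip_path))  # True
# On the server you would then run:
#   labenv/bin/pip install mysql-connector-python
```

In practice you would simply run python3 -m venv from the shell; the module call above does the same thing.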
We have now installed all the additional software we need for the web and application server.
Adding Website Content
In your ssh session to the VM instance, go to the webserver content directory. Note that on Debian the Apache document root is at /var/www/html/ which is the same as Amazon Linux.
cd /var/www/html/
We will now create a new home page. First we need to remove the default Apache page:
sudo rm index.html
Then create our new home page. If you are familiar with vi (or vim) as a Unix text editor, use that. If not we can use nano.
Either type "sudo vi index.html" or "sudo nano index.html"
In your new document enter content like the below
<HTML>
<HEAD>
<TITLE>Internet Banking Test Site</TITLE>
</HEAD>
<BODY>
<H2>Online Banking</H2>
<H3>Past Month's Transactions</H3>
<TABLE BORDER=2 CELLSPACING=5 CELLPADDING=5>
<TR>
<TD>Transaction Name</TD><TD>Amount</TD>
</TR>
</TABLE>
</BODY>
</HTML>
To save and exit in vi, press ESC then type ":wq". In nano, press CTRL+O then Enter to save, then CTRL+X to exit. Note if you are using a terminal session on a Mac, some key mappings may be different.
If you reload your home page, you should now see your content has replaced the default Apache page. Right now it is not very impressive, and certainly is not going to win any design awards, but it is fine as a simple testbed.
Adding a dynamic webpage element
Our next step is to add a dynamic element to our webpage, so our list of current transactions is generated by a script.
Fortunately, we can do this by using Apache Webserver Server Side Includes and we will build an initial (very simple) Python script to generate a list of transactions.
These steps are a little complex so please follow carefully and go back and debug if needed.
Install the first Python script
To save time we are now going to change our ssh user to the Linux root account. Run the following command
sudo su
You should now see your command prompt has changed to "root".
First we will change to the webserver's CGI scripts directory. On Debian, the default CGI directory is:
cd /usr/lib/cgi-bin/
Note
On Debian-based systems with Apache, the CGI directory is /usr/lib/cgi-bin/ rather than /var/www/cgi-bin/ which is used on Red Hat based systems like Amazon Linux. The Apache configuration already has a ScriptAlias pointing /cgi-bin/ URLs to this directory.
The directory may already contain some default files. We will create our first script.
Using the editor of your choice enter "vi transactions.py" or "nano transactions.py"
Enter the following script (CHECK FOR PASTE ERRORS, watch out for Unicode characters). Press i in vi to enter insert mode.
#!/usr/bin/env python3
names = ["Gails Bakery", "Transport for London", "Octopus Energy", "Uber", "Cancer Research UK", "Netflix", "Amazon", "Boots", "Transport for London"]
amounts = [7.44, 8.10, 54.20, 21.00, 10.00, 14.99, 72.12, 4.49, 3.90]
n=len(names)
total = sum(amounts)
print ("Content-Type: text/plain\n")
for i in range (n):
    print ("<TR><TD>", end='')
    print(names[i], end='')
    print ("</TD><TD>", end='')
    print (f"{amounts[i]:0,.2f}", end='')
    print ("</TD></TR>")
print ("<TR><TD> Total </TD><TD>", end='')
print (f"{total:0,.2f}", end='')
print ("</TD></TR>")
Save and exit the file
The script does the following things:
- First the shebang line tells the system to run the script with Python 3, located via the env command
- Then we create two lists of values, names contains the list for merchants we have transactions with and the second the value of the transaction
- len(names) gives us the number of items in the names list for our loop
- sum(amounts) gives us a sum of the numeric values in amounts
- We print out the content type the script is returning, note in the case of Server Side Includes this is text/plain rather than text/html
- We then have a loop for each item in the names, based on the count n
- For each item we print out a HTML table row with the merchant name and transaction value
- When the loop is finished we print a final row with the total of the transaction values
Note, this is not necessarily the best way to construct a Python script, we are using very simple examples for legibility.
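To see exactly what the format specifier in the script does, here is a small standalone example (the amounts list is copied from the script above):

```python
# A closer look at the f-string format specifier used in the script.
# :0,.2f means: use "," as the thousands separator and show exactly
# two decimal places.
amounts = [7.44, 8.10, 54.20, 21.00, 10.00, 14.99, 72.12, 4.49, 3.90]
total = sum(amounts)
formatted_total = f"{total:0,.2f}"
print(formatted_total)           # 196.24
print(f"{1234567.5:0,.2f}")      # 1,234,567.50
```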
Now we need to make the script executable and then run it to test it works
Make the script executable using the command "chmod 755 ./transactions.py"
Then run it using the command "./transactions.py"
You should see an output of the form
root@intro-web-server:/usr/lib/cgi-bin# ./transactions.py
Content-Type: text/plain
<TR><TD>Gails Bakery</TD><TD>7.44</TD></TR>
<TR><TD>Transport for London</TD><TD>8.10</TD></TR>
<TR><TD>Octopus Energy</TD><TD>54.20</TD></TR>
<TR><TD>Uber</TD><TD>21.00</TD></TR>
<TR><TD>Cancer Research UK</TD><TD>10.00</TD></TR>
<TR><TD>Netflix</TD><TD>14.99</TD></TR>
<TR><TD>Amazon</TD><TD>72.12</TD></TR>
<TR><TD>Boots</TD><TD>4.49</TD></TR>
<TR><TD>Transport for London</TD><TD>3.90</TD></TR>
<TR><TD> Total </TD><TD>196.24</TD></TR>
If you see errors it may be because Unicode characters were introduced during copy and paste. The most common problem is substitution of left and right curly double quotes; Python expects the plain straight double quote ("). The other error to watch out for is indentation: Python treats indentation as significant (see W3 Schools for a useful explanation), so double check that when you copied the code the indentation below the line "for i in range (n):" was preserved.
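If you want to hunt for pasted curly quotes systematically, a small helper like the one below (not part of the lab scripts, just a debugging aid) reports the line and column of every "smart" quote in a piece of text:

```python
# Helper to spot the "smart" quotes that word processors substitute,
# the most common paste error when copying code.
SMART_QUOTES = "\u201c\u201d\u2018\u2019"  # curly double and single quotes

def find_smart_quotes(text):
    """Return (line, column, character) for each curly quote found."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            if ch in SMART_QUOTES:
                hits.append((lineno, col, ch))
    return hits

print(find_smart_quotes('print("hello")'))            # [] -- straight quotes are fine
print(find_smart_quotes('print(\u201chello\u201d)'))  # [(1, 7, '“'), (1, 13, '”')]
```

You could run this over a saved script with find_smart_quotes(open("transactions.py").read()).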
Next, as this is a development environment, we are going to add some debug information to our page using a simple shell script which pulls back the metadata for our Compute Engine instance and displays it on the webpage. Many production applications have functionality of this type, normally activated with a special cookie or form parameter, but in this case we will display it every time.
Still in the "/usr/lib/cgi-bin/" directory we will create a new shell script called hostname.sh
Edit this with "vi hostname.sh" or "nano hostname.sh"
Enter the following data and save the file
#!/usr/bin/sh
ECHOSTNAME=$(curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/hostname)
ECINSTANCEID=$(curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/id)
ECZONE=$(curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/zone)
COLOUR1=$(echo $ECINSTANCEID | md5sum | head -c 6)
COLOUR2=$(echo $ECINSTANCEID | md5sum | head -c 32 | tail -c 6)
printf "Content-Type: text/plain\n\n"
printf "<BR><BR><TABLE WIDTH=100%%><TR>\n"
printf "<TD COLSPAN=2 BGCOLOR=#$COLOUR1>Server Info </TD> </TR>\n"
printf "<TR><TD>The GCE Hostname is </TD><TD>$ECHOSTNAME</TD></TR>\n"
printf "<TR><TD>The GCE Instance ID is </TD><TD>$ECINSTANCEID</TD></TR>\n"
printf "<TR><TD>The GCE Zone is </TD><TD>$ECZONE</TD></TR>\n"
printf "<TR><TD COLSPAN=2 BGCOLOR=#$COLOUR2>Script Ends </TD></TR></TABLE>\n"
Note
GCP's metadata service is accessed at http://metadata.google.internal/computeMetadata/v1/ and requires the header Metadata-Flavor: Google. This is different from AWS which uses http://169.254.169.254/latest/meta-data/ with a token-based authentication system. The GCP metadata service header is a security measure to prevent SSRF attacks. The colour generation uses md5sum because GCP instance IDs are purely numeric, unlike AWS instance IDs which contain hex characters.
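To make the colour derivation concrete, here is a Python mirror of the two pipeline commands (a sketch, assuming GNU md5sum semantics where echo appends a trailing newline before the data is hashed; the instance ID is the example from the sample output below):

```python
# Mirror of the colour derivation in hostname.sh: hash the instance ID,
# then take the first and last six hex digits as two HTML colours.
import hashlib

instance_id = "2817349561023456789"  # example ID, yours will differ
digest = hashlib.md5((instance_id + "\n").encode()).hexdigest()  # 32 hex chars
colour1 = digest[:6]   # first six hex digits  -> BGCOLOR=#<colour1>
colour2 = digest[-6:]  # last six hex digits   -> BGCOLOR=#<colour2>
print(len(colour1), len(colour2))  # 6 6
```

Any six hex digits form a valid HTML colour, which is why a hash makes a convenient per-instance colour generator.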
Now we need to make the file executable
Enter "chmod 755 hostname.sh"
You can now test this file by simply entering "./hostname.sh"
You should see output as follows:
Content-Type: text/plain
<BR><BR><TABLE WIDTH=100%><TR>
<TD COLSPAN=2 BGCOLOR=#a3c1f5>Server Info </TD> </TR>
<TR><TD>The GCE Hostname is </TD><TD>intro-web-server.europe-west2-a.c.intro-course-cloudlabs.internal</TD></TR>
<TR><TD>The GCE Instance ID is </TD><TD>2817349561023456789</TD></TR>
<TR><TD>The GCE Zone is </TD><TD>projects/123456789/zones/europe-west2-a</TD></TR>
<TR><TD COLSPAN=2 BGCOLOR=#f5c1a3>Script Ends </TD></TR></TABLE>
This code uses the GCP Compute Engine metadata service. This allows us to retrieve internal information about the running instance including its instance ID, hostname and zone.
The use of colour in the HTML output is a simple visual indicator for debugging. If we were looking at multiple instances built from the same image, the colours would give us an immediate visual clue that the same content was being served from two different instances. This will be used in later courses.
Configuring the Webserver to Process the Files
Now we will configure our homepage to process the script and include it in our home page. To do this we will use an Apache feature called Server Side Includes ( https://httpd.apache.org/docs/current/howto/ssi.html ). These allow us to run a script on the webserver and include the results of that script in a webpage. Again, note that this is not how you would build a production web application today, but it allows us to demonstrate some key cloud functionality.
Change to the server content home page "cd /var/www/html"
Now edit the index.html file using "nano index.html" or "vi index.html"
Add the two lines starting with "<!--#include" shown below.
<HTML>
<HEAD>
<TITLE>CLO - Internet Banking Test Site</TITLE>
</HEAD>
<BODY>
<H2>Online Banking</H2>
<H3>Transactions February 2026</H3>
<TABLE BORDER=2 CELLSPACING=5 CELLPADDING=5>
<TR>
<TD>Transaction Name</TD><TD>Amount</TD>
<!--#include virtual="/cgi-bin/transactions.py"-->
</TR>
</TABLE>
<!--#include virtual="/cgi-bin/hostname.sh"-->
</BODY>
</HTML>
The "<!--# -->" syntax is a Server Side Include directive, a special command which tells the webserver to execute the referenced script and insert its output into the page
Save the file
Now that we have added a Server Side Include we need to make the file executable. With Apache's XBitHack option enabled, the execute bit tells the webserver to process the webpage for included statements
Enter "chmod 755 index.html"
If this works we are 90% of the way there.
Configuring Apache to process server side include directives
By default the installed version of Apache on Debian will not process Server Side Includes, so we need to enable the required modules and update the configuration.
On Debian, Apache uses a modular configuration system with commands to enable and disable modules and sites. This is different from the single httpd.conf file used on Amazon Linux.
First, enable the required Apache modules:
a2enmod include
a2enmod cgi
These commands enable the Server Side Includes module and the CGI module respectively. On Debian, Apache modules are enabled/disabled using the a2enmod and a2dismod commands rather than editing the main configuration file directly.
Now we need to modify the site configuration to enable SSI processing. Edit the default site configuration:
nano /etc/apache2/sites-available/000-default.conf
Find the section with DocumentRoot /var/www/html and add a directory block below it so the file looks like:
<VirtualHost *:80>
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html
<Directory /var/www/html>
Options +Includes +ExecCGI
XBitHack On
</Directory>
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
Save and exit the file.
We also need to add the CGI handler for .py and .sh files. Edit the CGI configuration:
nano /etc/apache2/conf-available/serve-cgi-bin.conf
If this file does not exist, note that /etc/apache2/conf-enabled/ normally holds a symlink to the same file, so you can edit it there instead:
nano /etc/apache2/conf-enabled/serve-cgi-bin.conf
Ensure the ScriptAlias and Directory block look like:
<IfModule mod_alias.c>
<IfModule mod_cgi.c>
Define ENABLE_USR_LIB_CGI_BIN
</IfModule>
<IfModule mod_cgid.c>
Define ENABLE_USR_LIB_CGI_BIN
</IfModule>
<IfDefine ENABLE_USR_LIB_CGI_BIN>
ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
<Directory "/usr/lib/cgi-bin">
AllowOverride None
Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
AddHandler cgi-script .cgi .py .sh
Require all granted
</Directory>
</IfDefine>
</IfModule>
The key addition is the AddHandler cgi-script .cgi .py .sh line which tells Apache to execute files with these extensions as CGI scripts.
Save and exit the file.
Now we just need to restart the Apache service:
systemctl restart apache2
You can check it has restarted with "systemctl status apache2" (remember you may need to type "q" to exit).
If all is good, reload your homepage and you should see the transactions table populated by the script, followed by the server info table from the debug script.
Note that the instance ID and hence banner colours will be different for every instance.
To experiment go to your script at /usr/lib/cgi-bin/transactions.py ( cd /usr/lib/cgi-bin/ )
Edit it with vi or nano and experiment with adding or changing the values in the lists. If you save the file and reload the webpage you should see the new values. Note that if you do not have an equal number of transaction names and values the script may break; it has no error checking and is not ready for production yet.
Building a more secure service
So far we have built a webserver with a script as a very simple example of how to run a server in the cloud. But to make the environment more secure (and scalable) it would be better to run the public webserver in a subnet facing the Internet, and the server running the scripts and holding customer transaction data in a private network. Again this is massively simplified from how this would be run in a real production environment, but the core ideas are here.
Creating a Machine Image
Google Cloud (like all the major public clouds) has a very useful capability to create images of your VMs and all the software running on them. This is very useful for horizontal scaling architectures where we treat multiple virtual machines as a pool of compute. In GCP these are referred to as Machine Images.
We will create a machine image from our web server.
First, ensure the web server instance is stopped: go to Compute Engine in the console, select the "intro-web-server" instance, click the "Stop" button at the top of the page (the square icon), then confirm.
Once the instance status has changed to "Stopped" (the gcloud CLI reports this state as TERMINATED; this generally takes no more than 90 seconds), go to Compute Engine and select "Machine images" in the left hand menu.
Click "Create Machine Image" at the top.
For "Name" enter intro-webserver-image
For "Description" enter "Intro Webserver Image — February 2026"
For "Source VM instance" select intro-web-server
For "Location" you can leave as the default (multi-regional)
Click "Create" to create the image.
After a couple of minutes you should see the machine image status change to "Ready".
You can now go back to your list of VM instances under Compute Engine and restart the Web Server instance. Select the "intro-web-server" instance and click the "Start/Resume" button (the play icon). It will take two to three minutes to start. Note when it restarts it will have the same internal IP address but a new external IP address (since we used an ephemeral IP).
gcloud CLI Equivalent
# Stop the instance
gcloud compute instances stop intro-web-server --zone=europe-west2-a
# Create a machine image
gcloud compute machine-images create intro-webserver-image \
--source-instance=intro-web-server \
--source-instance-zone=europe-west2-a \
--description="Intro Webserver Image - February 2026"
# Start the instance again
gcloud compute instances start intro-web-server --zone=europe-west2-a
Creating the Application Server
We will now create our application hosting server. In the real world this would be a feature rich runtime environment capable of managing complex application functionality and holding session state for multi stage web workflows. But in our case we are going to use a few simple Python scripts on a webserver to demonstrate the cloud architecture concepts.
We can now create a new application server instance from the machine image.
Go to Compute Engine in the console and select "Machine images" in the left hand menu.
Find the "intro-webserver-image" and click on the three dots menu on the right, then select "Create instance".
For "Name" change it to intro-application-server
For "Region" ensure europe-west2 is selected and for "Zone" select europe-west2-a
For "Machine type" ensure e2-micro is selected.
Click "Advanced options" then "Networking".
Under "Network tags" change the tag to app — this is important as it determines which firewall rules apply to this instance.
Under "Network interfaces", click on the default interface to edit it.
For "Network" select intro-course-vpc
For "Subnetwork" select intro-course-subnet-private (10.0.16.0/24)
For "Primary internal IPv4 address" enter 10.0.16.10
For "External IPv4 address" select "None" — this instance is going to be in a private subnet with no Internet access.
Leave all other settings as default.
Click "Create" to launch the instance.
Go back to the VM instances view and wait for the instance to launch; this should take 2 to 3 minutes.
Once it is launched you should be able to log into it simply by typing "ssh application" from your laptop. This will log into our bastion host then launch a second connection to the newly created application server (again you may have to accept the host key when you first connect, just type "yes").
gcloud CLI Equivalent
gcloud compute instances create intro-application-server \
--zone=europe-west2-a \
--machine-type=e2-micro \
--source-machine-image=intro-webserver-image \
--network=intro-course-vpc \
--subnet=intro-course-subnet-private \
--private-network-ip=10.0.16.10 \
--no-address \
--tags=app
Once logged in check to see if the webserver is running with "ps -ef | grep apache2"
You should see output of the form
root 1440 1 0 16:11 ? 00:00:00 /usr/sbin/apache2 -k start
www-data 1558 1440 0 16:11 ? 00:00:00 /usr/sbin/apache2 -k start
www-data 1559 1440 0 16:11 ? 00:00:00 /usr/sbin/apache2 -k start
At present the application server is an exact copy of the web server. So to demonstrate which server we are serving code from, we are going to make a change to our transactions script.
cd /usr/lib/cgi-bin
Edit "transactions.py" using sudo vi transactions.py or sudo nano transactions.py
Under the line
"names = ["Gails Bakery", ...."
change some of the values in double quotes to new values, you may even want to change one to "Application Server" just so we can see the script is now running on the application rather than web server.
Finally we can test if the script is working by typing "curl http://127.0.0.1/cgi-bin/transactions.py"
You should see the HTML for our banking transactions, including the new transaction names you added.
curl is a Unix command line tool for issuing HTTP requests. 127.0.0.1 is the reserved loopback IP address, which routes back to the running instance itself, allowing us to test local services.
So now we can test if we can access our other webserver in the public subnet
Try typing "curl http://10.0.8.10/cgi-bin/transactions.py"
You should see that the command now hangs with no response. Although the server is up and running, the firewall rules we applied to instances tagged "app" deny all outbound traffic to destinations outside the VPC. The internal egress rule allows responses to established connections, but the application server cannot initiate new connections to the Internet. You should be able to escape with CTRL+c.
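If you would rather not wait for curl to hang, a short connectivity probe with an explicit timeout fails fast instead. The helper below is hypothetical (not a lab step); on the application server you could point it at 10.0.8.10 port 80, where the dropped egress traffic would make it return False after the timeout:

```python
# Hypothetical TCP connectivity probe with a short timeout, so a silently
# dropped connection fails fast instead of hanging like curl.
import socket

def can_connect(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

# Example against a local port that is almost certainly closed:
print(can_connect("127.0.0.1", 9))  # False
```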
Finally we will make changes to the public web server to connect it to the "application" server.
Exit from the application server (just type exit) to return to your laptop command line.
SSH to the web server using "ssh web"
Change to the web server content directory cd /var/www/html
You should now be able to edit the server home page which is index.html
So type "sudo vi index.html" or "sudo nano index.html"
We are going to change the line
<!--#include virtual="/cgi-bin/transactions.py"-->
To
<!--#exec cmd="curl http://10.0.16.10/cgi-bin/transactions.py" -->
(Note in vi "dd" will delete a line, "o" will insert a line below the current line). Your file should now look like:
<HTML>
<HEAD>
<TITLE>CLO - Internet Banking Test Site</TITLE>
</HEAD>
<BODY>
<H2>Online Banking</H2>
<H3>Transactions February 2026</H3>
<TABLE BORDER=2 CELLSPACING=5 CELLPADDING=5>
<TR>
<TD>Transaction Name</TD><TD>Amount</TD>
<!--#exec cmd="curl http://10.0.16.10/cgi-bin/transactions.py" -->
</TR>
</TABLE>
<!--#include virtual="/cgi-bin/hostname.sh"-->
</BODY>
</HTML>
Save and exit
So now we have changed the web server from running a local script using the cgi-bin function to calling a script on the remote application server, which has no Internet access. This is a very ugly way to do this — do not build production apps this way — but it is fine for demonstration purposes. Our debug script will still run on the local web server.
If you use your web browser to access the public Internet facing website again you should see the script is now running on the application server, and you should see the new transaction names we added. Note that the web server instance will have a new external IP address once it has restarted — you can look this up by viewing the instance in the Compute Engine instances list and checking the "External IP" column.
Section Conclusion
While what we have built is very simplistic it demonstrates some key cloud architecture concepts.
We have built a network infrastructure with publicly routable (i.e. Internet facing) subnets and private subnets in the europe-west2 region.
Unlike AWS where subnets need route tables manually associated, GCP automatically handles routing within the VPC. Instances with an external IP address can reach the Internet, and instances without one cannot — combined with our firewall rules this gives us effective network segregation.
We have created firewall rules using network tags which only allow inbound connections on specific ports from specific IP address ranges. We have also created egress rules to prevent the application server from initiating outbound Internet connections while still allowing internal VPC communication.
The combination of network tags, firewall rules and the absence of external IP addresses on private instances comes close to best practice for deploying public networks for static content alongside private networks for dynamic application code. A real public application with confidential data might add Cloud Armor as a WAF, but we have the basics covered.
Setting Up Private Service Access
Before creating the Cloud SQL instance, we need to set up Private Service Access. This is a GCP networking feature that allows your VPC to connect privately to Google managed services like Cloud SQL without going over the public Internet. It works by allocating an IP range from your VPC for Google's use and creating a private connection.
In the GCP Console, go to "VPC Network" and select "VPC networks" from the left menu.
Click on "intro-course-vpc" to view its details.
Select the "Private Service Connection" tab.
Under "Allocated IP ranges for services", click "Allocate IP Range".
For "Name" enter intro-google-services
For "IP range" select "Automatic" and for "Prefix length" enter 20 — this allocates a /20 block (4,096 addresses) for Google managed services.
Click "Allocate".
Now under "Private connections to services" click "Create Connection".
For "Assigned allocation" select intro-google-services
Click "Connect" and wait for the connection to be created. This may take a minute or two.
Once completed you will see the private connection listed with a status of "Connected".
gcloud CLI Equivalent
# Allocate an IP range for Google services
gcloud compute addresses create intro-google-services \
--global \
--purpose=VPC_PEERING \
--addresses=10.1.0.0 \
--prefix-length=20 \
--network=intro-course-vpc
# Create the private service connection
gcloud services vpc-peerings connect \
--service=servicenetworking.googleapis.com \
--ranges=intro-google-services \
--network=intro-course-vpc
Creating the Database in GCP
In the GCP Console, go to "Cloud SQL" from the Navigation Menu (or search for "Cloud SQL" in the top search bar).
Click "Create Instance" at the top of the page.
Select "Choose MySQL" as the database engine.
For "Instance ID" enter intro-db-mysql
For the root password, enter a password of your choice. Make a note of this password in your scratchpad. DO NOT use the same password as your login password.
For "Database version" leave as the default MySQL 8.0.
For "Cloud SQL edition" select "Enterprise".
For "Region" select europe-west2 (London)
For "Zonal availability" select "Single zone" — this is the cheapest option and sufficient for a lab environment.
Click "Show configuration options" to expand the settings.
Under "Machine configuration" select the cheapest option available: if "Shared core" with db-f1-micro (1 vCPU, 0.6 GB) is offered, choose that; otherwise select Lightweight with 1 vCPU, 3.75 GB (db-n1-standard-1).
Warning
Cloud SQL does not have a free tier. Even the smallest db-f1-micro instance will incur hourly charges (approximately $7-10 per month for the London region). Make sure to delete the Cloud SQL instance when you are finished with the lab to avoid ongoing charges. The $300 free trial credit will cover this cost during the trial period.
Under "Storage" leave as "SSD" with "10 GB" capacity. Uncheck "Enable automatic storage increases" to avoid unexpected costs.
Under "Connections" expand this section. Check "Private IP" and uncheck "Public IP".
For "Network" select intro-course-vpc
You should see a message confirming that Private Service Access is already configured. If you see a message asking you to set up Private Service Access, go back to the previous section and complete those steps first.
For "Allocated IP range" select intro-google-services
Under "Data protection" you can uncheck "Enable deletion protection" and uncheck "Automate backups" since this is a lab environment and we want to be able to easily clean up later.
Click "Create Instance" at the bottom of the page.
The Cloud SQL instance will take 5 to 10 minutes to create. This is significantly longer than creating a Compute Engine instance because GCP is provisioning a managed database with its own infrastructure.
Once the instance is created, click on its name to view the details. Make a note of the Private IP address in your scratchpad under "Cloud SQL Private IP". This will be an IP address in the range you allocated for Private Service Access, for example something like 10.1.0.3.
gcloud CLI Equivalent
gcloud sql instances create intro-db-mysql \
--database-version=MYSQL_8_0 \
--tier=db-f1-micro \
--region=europe-west2 \
--network=projects/intro-course-cloudlabs/global/networks/intro-course-vpc \
--no-assign-ip \
--allocated-ip-range-name=intro-google-services \
--storage-type=SSD \
--storage-size=10GB \
--no-backup \
--root-password=(your chosen password)
Updating Firewall Rules for Cloud SQL Access
Our application server needs to be able to reach the Cloud SQL instance on port 3306 (MySQL). The Cloud SQL instance uses a private IP from the allocated 10.1.0.0/20 range. We already have the intro-app-allow-internal-egress rule, which allows outbound traffic from "app" tagged instances to 10.0.0.0/8, and since 10.1.0.0/20 falls entirely within 10.0.0.0/8, the existing egress rule already allows this traffic.
However, we do need to verify that the Cloud SQL instance can receive connections from the application server. Since Cloud SQL is a managed service, GCP handles the ingress firewall rules on the database side. The private service connection we established ensures connectivity.
Note
Cloud SQL instances with private IP connectivity use VPC peering under the hood. GCP manages the routing and firewall rules on the Cloud SQL side automatically. As long as your application server can send traffic to the Cloud SQL private IP (which our egress rule allows) and the Cloud SQL instance is configured to accept connections, everything should work.
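If you want to double-check the subnet containment described above, Python's standard ipaddress module can verify it (a quick local sanity check, nothing GCP-specific):

```python
import ipaddress

# The range allocated for Google services and the egress rule's destination.
google_services = ipaddress.ip_network("10.1.0.0/20")
egress_destination = ipaddress.ip_network("10.0.0.0/8")

# subnet_of() confirms every address in 10.1.0.0/20 falls inside 10.0.0.0/8,
# so the existing egress rule already covers traffic to the Cloud SQL IP.
print(google_services.subnet_of(egress_destination))  # True

# An example Cloud SQL private IP from that range is also covered.
print(ipaddress.ip_address("10.1.0.3") in google_services)  # True
```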
Connecting to the Database
We will now connect to the database using the command line tools from the application server.
Connect to the application server instance using "ssh application"
Once logged in you can connect using the following command (replace the IP address with the Cloud SQL Private IP from your scratchpad):
mysql -h 10.1.0.3 -P 3306 -u root -p
Enter the root password you created above as recorded in your scratchpad.
Note
In GCP Cloud SQL, the default administrative user is "root" rather than "admin" as in AWS RDS. You connect using the private IP address of the Cloud SQL instance rather than a DNS hostname.
If all has worked correctly you should see
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 42
Server version: 8.0.36-google (Google)
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MySQL [(none)]>
You are now logged into the Cloud SQL database.
Populating the Database
We will now create a database and a table for the transactions, then populate the table with the first transactions.
We will start by running the "show databases;" command to list the existing databases. Note that for all the following commands the trailing semicolons are important.
MySQL [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
+--------------------+
4 rows in set (0.003 sec)
This shows the default databases. To create a new database for this application, enter
create database intro;
If you enter "show databases;" you can check it has been added.
Now we can switch to this database using the "use" command
use intro;
You should see that the prompt has changed to
MySQL [intro]>
Now we will create a table to hold the transactions
Enter the following command (across multiple lines, hit enter at the end of each line)
MySQL [intro]> CREATE TABLE if not exists transactions (
-> sequence_number int(10) NOT NULL AUTO_INCREMENT,
-> description varchar(50) NOT NULL DEFAULT '',
-> value double(10,2) NOT NULL DEFAULT '0',
-> PRIMARY KEY(sequence_number)
-> ) ;
Query OK, 0 rows affected, 2 warnings (0.045 sec)
This has created a table with three columns.
The first is a sequence number, a simple counter which can be used to refer to any row in the table as it will be unique. The (10) is a display width hint rather than a size limit; MySQL 8.0 deprecates integer display widths, which is one source of the warnings reported above.
The second is used for the description of each transaction, it is a string with a max. length of 50 characters.
The third column is used for the value of each transaction. It is a floating point number with up to 10 digits in total, two of which sit after the decimal point (so up to 8 before it).
Finally we specify "sequence_number" as the primary key for the table.
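To see what the (10,2) constraint means in practice, here is a small illustration using Python's decimal module (this mimics the column definition for teaching purposes; it is not how MySQL implements the type internally):

```python
from decimal import Decimal, ROUND_HALF_UP

def fits_double_10_2(value):
    """Return True if value fits a (10,2) column: at most 10 digits
    in total once rounded to 2 decimal places."""
    rounded = Decimal(str(value)).quantize(Decimal("0.01"),
                                           rounding=ROUND_HALF_UP)
    return len(rounded.as_tuple().digits) <= 10

print(fits_double_10_2(6.70))          # True  - 3 digits (6.70)
print(fits_double_10_2(12345678.99))   # True  - exactly 10 digits
print(fits_double_10_2(123456789.99))  # False - 11 digits
```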
We can check the status of our tables using the "show tables;" command
MySQL > show tables;
+------------------+
| Tables_in_intro |
+------------------+
| transactions |
+------------------+
1 row in set (0.002 sec)
We can see more details of the table structure using "describe transactions;"
MySQL > describe transactions;
+-----------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-----------------+--------------+------+-----+---------+----------------+
| sequence_number | int | NO | PRI | NULL | auto_increment |
| description | varchar(50) | NO | | | |
| value | double(10,2) | NO | | 0.00 | |
+-----------------+--------------+------+-----+---------+----------------+
3 rows in set (0.003 sec)
We have now constructed our table; it is time to populate it.
Populating the Table
There is useful information on working with MySQL at https://www.mysqltutorial.net/mysql-generated-columns/
To add values to the table we use the INSERT command.
The format is:
INSERT INTO table_name (column names separated by commas) VALUES (values in single quotes separated by commas);
e.g.
INSERT INTO transactions (description,value) values ('Transport for London','6.70');
At any point you can check what has been added to the table using "select * from transactions;"
MySQL [intro]> select * from transactions;
+-----------------+----------------------+-------+
| sequence_number | description | value |
+-----------------+----------------------+-------+
| 1 | Transport for London | 6.70 |
+-----------------+----------------------+-------+
1 row in set (0.001 sec)
Rather than typing each INSERT statement individually, we can create a SQL script file to populate all the sample data in one go. This is also good practice for any database work — keeping your data setup in a script makes it repeatable and less error prone.
Exit the MySQL session by typing "exit"
Create a new file called "populate-transactions.sql" using "vi populate-transactions.sql" or "nano populate-transactions.sql" (sudo is not needed for a file in your home directory)
Enter the following content:
USE intro;
INSERT INTO transactions (description, value) VALUES
('Transport for London', 6.70),
('Pret a Manger', 9.40),
('Cancer Research UK', 10.00),
('Black Sheep Coffee', 4.25),
('Thames Water', 53.24),
('Amazon UK', 32.45),
('Transport for London', 4.30),
('ATM', 100.00),
('Boisdale of Canary Wharf', 40.40),
('O2', 20.36),
('Justgiving', 101.00),
('Mr Fox', 49.00);
Save and exit the file.
Note that this uses a single INSERT statement with multiple value rows separated by commas. This is more efficient than individual INSERT statements and is the standard approach for bulk data loading in SQL.
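The multi-row pattern is also easy to generate programmatically. A sketch in Python (for illustration only; real applications should use parameterized queries, such as cursor.executemany with mysql.connector, rather than string formatting):

```python
# Sample rows as (description, value) pairs, matching the lab data.
rows = [
    ("Transport for London", 6.70),
    ("Pret a Manger", 9.40),
]

# Build one multi-row INSERT statement. Descriptions are assumed not to
# contain single quotes; real code should use parameterized queries.
values = ",\n".join(f"('{desc}', {value:.2f})" for desc, value in rows)
sql = f"INSERT INTO transactions (description, value) VALUES\n{values};"
print(sql)
```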
Now run the script against the database using the following command (replacing the IP address with your Cloud SQL Private IP from your scratchpad):
mysql -h 10.1.0.3 -P 3306 -u root -p < populate-transactions.sql
Enter your database password when prompted. If successful the command will complete without output.
Now log back into the database to verify the data was loaded:
mysql -h 10.1.0.3 -P 3306 -u root -p
Once logged in, switch to the intro database and check the data:
use intro;
MySQL > select * from transactions;
+-----------------+--------------------------+--------+
| sequence_number | description | value |
+-----------------+--------------------------+--------+
| 1 | Transport for London | 6.70 |
| 2 | Pret a Manger | 9.40 |
| 3 | Cancer Research UK | 10.00 |
| 4 | Black Sheep Coffee | 4.25 |
| 5 | Thames Water | 53.24 |
| 6 | Amazon UK | 32.45 |
| 7 | Transport for London | 4.30 |
| 8 | ATM | 100.00 |
| 9 | Boisdale of Canary Wharf | 40.40 |
| 10 | O2 | 20.36 |
| 11 | Justgiving | 101.00 |
| 12 | Mr Fox | 49.00 |
+-----------------+--------------------------+--------+
12 rows in set (0.001 sec)
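As an extra sanity check you can total the value column with SQL, e.g. "select sum(value) from transactions;", which should return approximately 431.10. The same total computed over the sample data in Python:

```python
# The twelve sample transaction values loaded above.
values = [6.70, 9.40, 10.00, 4.25, 53.24, 32.45, 4.30,
          100.00, 40.40, 20.36, 101.00, 49.00]

# round() compensates for binary floating-point accumulation error.
total = round(sum(values), 2)
print(total)  # 431.1
```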
At this point we have done everything we need to set up the simple database, we can now write a script to query it.
Exit from the SQL interactive session using "exit".
Create the Python Script to Read from the Cloud SQL Database
We will now use the Python "mysql.connector" library to read the transaction values from the database and present them as HTML tables on the web page.
On the application server, change to the webserver scripts directory:
cd /usr/lib/cgi-bin
We will now create a Python script to read the transactions from the Cloud SQL MySQL database; we will call this "mysqltransactions.py"
Before editing the script you will need:
The private IP address for the Cloud SQL instance, this should be in your scratchpad or this can be found by viewing your database in the Cloud SQL section of the GCP console, it should look like "10.1.0.3"
The password for the database, again you should have recorded this in your scratchpad
Edit the script using "sudo vi mysqltransactions.py" or "sudo nano mysqltransactions.py"
Enter the following (items in brackets require you to substitute the values gathered above); note the commas and quotes in lines 6-10. Storing passwords in scripts is not recommended for any personal data or production applications; it is recommended you look at Google Cloud Secret Manager for anything more advanced than a service demonstration.
#!/usr/bin/env python3
import mysql.connector
mydb = mysql.connector.connect(
host="(enter your Cloud SQL private IP here)",
user="root",
password="(enter your database password here)",
database="intro"
)
... (22 more lines)
Save the file
Make the file executable by entering
sudo chmod 755 mysqltransactions.py
Now you should be able to run the file by just typing "./mysqltransactions.py"; you should see output like the example below
Content-Type: text/plain
<TR><TD> Transport for London </TD><TD> 6.70 </TD></TR>
<TR><TD> Pret a Manger </TD><TD> 9.40 </TD></TR>
<TR><TD> Cancer Research UK </TD><TD> 10.00 </TD></TR>
<TR><TD> Black Sheep Coffee </TD><TD> 4.25 </TD></TR>
... (19 more lines)
Finally check that the local web server is running and processing this script by running "curl http://127.0.0.1/cgi-bin/mysqltransactions.py"
Once this works we are ready to make the change to our web server to point at this script.
Modifying the Webserver to point at the Database
For our final step in this stage we will modify the web server to point at this database script on the application server.
Exit from the application server you are logged into, using "exit"
Now you should be able to log back in to the web server using "ssh web"
cd /var/www/html
Edit the homepage using "sudo vi index.html" (or "sudo nano index.html")
Change the script name to mysqltransactions.py as shown below, save and exit
<HTML>
<HEAD>
<TITLE>CLO - Internet Banking Test Site</TITLE>
</HEAD>
<BODY>
<H2>Online Banking</H2>
<H3>Transactions February 2026</H3>
<TABLE BORDER=2 CELLSPACING=5 CELLPADDING=5>
<TR>
<TD>Transaction Name</TD><TD>Amount</TD></TR>
<!--#exec cmd="curl http://10.0.16.10/cgi-bin/mysqltransactions.py" -->
</TABLE>
</BODY>
</HTML>
Now in your desktop web browser, revisit the website for the web server instance. Remember this will be "http://(your webserver external IP address)", not https.
You should see the page is now updated with the values you added to the database, and the final line of the table shows the transactions were generated from a Cloud SQL MySQL database.
If you want to experiment, you can now log into the application server and use SQL INSERT commands to add new transactions to the list.