Monday, May 24, 2021

A personal photo gallery application running in the cloud on RHEL using Podman containers and Amazon EFS

I will create a photo gallery running in a Podman container on a RHEL EC2 instance, where the photos displayed by the web site are stored on an AWS Elastic File System (EFS) that EC2 instances can access across multiple Availability Zones (AZs).

So what is Podman? Podman is a daemonless container engine for developing, managing, and running OCI containers on your Red Hat Enterprise Linux (RHEL) system.


As far as Amazon Elastic File System (Amazon EFS) goes, it provides a simple, serverless, set-and-forget, elastic file system that lets you share file data without provisioning or managing storage. It can be used with AWS Cloud services and on-premises resources, and is built to scale on demand to petabytes without disrupting applications. With Amazon EFS, you can grow and shrink your file systems automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.


In this post we will use the linuxserver/photoshow container image for the photo gallery: https://hub.docker.com/r/linuxserver/photoshow


What we will cover in this post:

  1. The Podman container will run in a RHEL EC2 instance, and use the local filesystem on the EC2 instance to store the images. (No HA yet)

  2. We will do exactly what we did in Step 1, but this time we store the images on an EFS file system. (Storage-level HA)

  3. We make the solution HA by adding a second EC2 instance in another AZ, and adding an Application Load Balancer in front of them. (Compute-level HA added)

  4. We take care of scaling the solution by adding an Auto Scaling Group. (Scaling added) 


Step 1:


Let's start by downloading the photoshow image
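If the image is not already on the host, it can be pulled from the registry first; the registry path below matches the listing that follows:

[ec2-user@ip-172-31-58-150 ~]$ sudo podman pull ghcr.io/linuxserver/photoshow:latest

Then confirm the image is present locally: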


[ec2-user@ip-172-31-58-150 ~]$ sudo podman images

REPOSITORY                                       TAG       IMAGE ID      CREATED        SIZE

ghcr.io/linuxserver/photoshow                    latest    eb0ad054517e  5 days ago     222 MB


Just making sure that there are no containers running on the EC2 instance.


[ec2-user@ip-172-31-58-150 ~]$ sudo podman ps -a

CONTAINER ID  IMAGE   COMMAND  CREATED  STATUS  PORTS   NAMES


We start by creating a photo directory on the host machine, with config, pictures, and thumb subdirectories inside it.
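A minimal sketch of creating that layout (the paths match the listing below):

[ec2-user@ip-172-31-58-150 ~]$ mkdir -p ~/photo/{config,pictures,thumb}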


[ec2-user@ip-172-31-58-150 ~]$ pwd

/home/ec2-user

[ec2-user@ip-172-31-58-150 ~]$ ls -l photo/

total 8

drwxrwxr-x. 7 ec2-user ec2-user   64 May 20 00:16 config

drwxrwxr-x. 2 ec2-user ec2-user 4096 May 20 03:12 pictures

drwxrwxr-x. 4 ec2-user ec2-user   32 May 20 00:16 thumb



Now we run the container with the podman run command, using -v to specify the source directory on the host and the path where we want it mounted inside the container.


[ec2-user@ip-172-31-58-150 ~]$ sudo podman run -d   \

--name=photoshow   \

-e PUID=1000   -e PGID=1000   -e TZ=Europe/London   -p 8080:80   \

-v /home/ec2-user/photo/config:/config:Z   \

-v /home/ec2-user/photo/pictures:/Pictures:Z   \

-v /home/ec2-user/photo/thumb:/Thumbs:Z   \

--restart unless-stopped   linuxserver/photoshow


The :Z suffix ensures that the proper SELinux context is set on the bind-mounted directories.

[ec2-user@ip-172-31-58-150 ~]$ sudo podman ps -a

CONTAINER ID  IMAGE                  COMMAND  CREATED        STATUS            PORTS                 NAMES

0d011645d26d  linuxserver/photoshow           9 minutes ago  Up 9 minutes ago  0.0.0.0:8080->80/tcp  photoshow




We download a few images into the ~/photo/pictures directory of the host EC2 instance using wget.
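For example (the source URL here is a hypothetical placeholder; substitute the actual location of your images):

[ec2-user@ip-172-31-58-150 ~]$ wget -P ~/photo/pictures/ https://example.com/images/FIFA_World_Cup.jpg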


[ec2-user@ip-172-31-58-150 ~]$ ls photo/pictures/

Argentina_WC.png  FIFA_World_Cup.jpg  Germany_WC.png  Korea-Japan_WC.png  Qatar_WC.png   SouthAfrica_WC.png  USA_WC.png

Brasil_WC.png     France_WC.png       Italia_WC.png   Mexico_WC.png       Russia_WC.png  Spain_WC.png        WGermany_WC.png

[ec2-user@ip-172-31-58-150 ~]$



This is just to confirm that no named volumes were created, since the -v flags above produced bind mounts instead:


[ec2-user@ip-172-31-58-150 images]$ sudo podman volume ls


Let's do an “inspect” of the container to see what was mounted:


[ec2-user@ip-172-31-58-150 images]$ sudo podman inspect photoshow


…….

…….

…….

          "Mounts": [

            {

                "Type": "bind",

                "Name": "",

                "Source": "/home/ec2-user/photo/thumb",

                "Destination": "/Thumbs",

                "Driver": "",

                "Mode": "",

                "Options": [

                    "rbind"

                ],

                "RW": true,

                "Propagation": "rprivate"

            },

            {

                "Type": "bind",

                "Name": "",

                "Source": "/home/ec2-user/photo/config",

                "Destination": "/config",

                "Driver": "",

                "Mode": "",

                "Options": [

                    "rbind"

                ],

                "RW": true,

                "Propagation": "rprivate"

            },

            {

                "Type": "bind",

                "Name": "",

                "Source": "/home/ec2-user/photo/pictures",

                "Destination": "/Pictures",

                "Driver": "",

                "Mode": "",

                "Options": [

                    "rbind"

                ],

                "RW": true,

                "Propagation": "rprivate"

            }

        ],



…….

…….



Let's log into the container and check the directories that we created:


[ec2-user@ip-172-31-58-150 ~]$ sudo podman exec -it photoshow /bin/bash

root@0d011645d26d:/#

root@0d011645d26d:/#

root@0d011645d26d:/# ls

Pictures  app  config    dev          etc   init  libexec  mnt  proc  run   srv  tmp  var

Thumbs    bin  defaults  docker-mods  home  lib   media    opt  root  sbin  sys  usr  version.txt

root@0d011645d26d:/# ls -l /Pictures /config /Thumbs

/Pictures:

total 2456

-rw-rw-r-- 1 abc users  70015 Apr  9 00:56 Argentina_WC.png

-rw-rw-r-- 1 abc users 173472 May 20 03:53 Brasil_WC.png

-rw-rw-r-- 1 abc users 879401 May 20 03:53 FIFA_World_Cup.jpg

-rw-rw-r-- 1 abc users  81582 Jul 17  2018 France_WC.png

-rw-rw-r-- 1 abc users 124180 May 20 03:53 Germany_WC.png

-rw-rw-r-- 1 abc users  84614 Jul 17  2018 Italia_WC.png

-rw-rw-r-- 1 abc users 126259 Sep 13  2019 Korea-Japan_WC.png

-rw-rw-r-- 1 abc users 157670 Jul 17  2018 Mexico_WC.png

-rw-rw-r-- 1 abc users 125000 May 20 03:53 Qatar_WC.png

-rw-rw-r-- 1 abc users 188832 May 20 03:53 Russia_WC.png

-rw-rw-r-- 1 abc users 248316 May 20 03:53 SouthAfrica_WC.png

-rw-rw-r-- 1 abc users 104383 May 19 10:36 Spain_WC.png

-rw-rw-r-- 1 abc users  98021 Jul 18  2018 USA_WC.png

-rw-rw-r-- 1 abc users  26622 Jul 18  2018 WGermany_WC.png


/Thumbs:

total 4

drwxr-x--- 2 abc users   67 May 20 03:54 Conf

drwxr-x--- 2 abc users 4096 May 20 04:13 Thumbs


/config:

total 0

drwxr-xr-x 2 abc users 38 May 20 01:16 keys

drwxr-xr-x 4 abc users 54 May 20 02:00 log

drwxrwxr-x 3 abc users 42 May 20 01:16 nginx

drwxr-xr-x 2 abc users 44 May 20 01:16 php

drwxrwxr-x 3 abc users 41 May 20 01:16 www

root@0d011645d26d:/#



Go to http://54.202.174.232:8080 (the public IP of the EC2 instance, with the published port 8080) to check the photo gallery.



Gooooal!!!



Step 2: 


This is cool, but is the data highly available? In other words, what happens if the EC2 instance goes down? Can I still access my images in that case?


Our application was using the local filesystem in the previous scenario. To make the data HA (Highly Available), let's use EFS to store our images. You will see below how to set up EFS and use it with our application container running in Podman.


Create an EFS filesystem, in this case called “demo”.
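For reference, an equivalent filesystem can be created with the AWS CLI; a minimal sketch (the creation token is an arbitrary idempotency string):

aws efs create-file-system --creation-token demo \
    --tags Key=Name,Value=demo --region us-west-2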



On the EC2 host, mount the EFS filesystem. First add an entry for it to /etc/fstab, as shown below:


[ec2-user@ip-172-31-58-150 pictures]$ cat /etc/fstab


#

# /etc/fstab

# Created by anaconda on Sat Oct 31 05:00:52 2020

#

# Accessible filesystems, by reference, are maintained under '/dev/disk/'.

# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.

#

# After editing this file, run 'systemctl daemon-reload' to update systemd

# units generated from this file.

#

UUID=949779ce-46aa-434e-8eb0-852514a5d69e /                       xfs     defaults        0 0


fs-33656734.efs.us-west-2.amazonaws.com:/   /mnt/efs_drive nfs  defaults,vers=4.1  0 0
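Alternatively, if the amazon-efs-utils package is installed, the same mount can be expressed with the EFS mount helper, which also supports encryption in transit; a sketch:

fs-33656734:/   /mnt/efs_drive efs  _netdev,tls  0 0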



Let's mount /mnt/efs_drive:


[ec2-user@ip-172-31-58-150 pictures]$ sudo mount /mnt/efs_drive

[ec2-user@ip-172-31-58-150 pictures]$ df -h

Filesystem                                 Size  Used Avail Use% Mounted on

devtmpfs                                   3.8G     0  3.8G   0% /dev

tmpfs                                      3.8G  168K  3.8G   1% /dev/shm

tmpfs                                      3.8G   17M  3.8G   1% /run

tmpfs                                      3.8G     0  3.8G   0% /sys/fs/cgroup

/dev/xvda2                                  10G  9.6G  426M  96% /

tmpfs                                      777M   68K  777M   1% /run/user/1000

fs-33656734.efs.us-west-2.amazonaws.com:/  8.0E     0  8.0E   0% /mnt/efs_drive


Run the container using Podman, this time bind-mounting the config, pictures, and thumb directories from the EFS filesystem. Note that the :Z relabeling option from Step 1 is omitted here, since NFS-backed filesystems such as EFS generally do not support SELinux relabeling.


[ec2-user@ip-172-31-58-150 ~]$ sudo podman run -d   \

--name=photoshow   \

-e PUID=1000   -e PGID=1000   -e TZ=Europe/London   -p 8080:80   \

--mount type=bind,source=/mnt/efs_drive/photo/config,destination=/config   \

--mount type=bind,source=/mnt/efs_drive/photo/pictures/,destination=/Pictures   \

--mount type=bind,source=/mnt/efs_drive/photo/thumb/,destination=/Thumbs   \

--restart unless-stopped   ghcr.io/linuxserver/photoshow

95c78443d893334c4d5538dc03761f828d5e7a59427c87ae364ab1e7f6d30e15

[ec2-user@ip-172-31-58-150 ~]$


[ec2-user@ip-172-31-58-150 ~]$ sudo podman ps -a

CONTAINER ID  IMAGE                          COMMAND  CREATED         STATUS             PORTS                 NAMES

95c78443d893  ghcr.io/linuxserver/photoshow           18 seconds ago  Up 17 seconds ago  0.0.0.0:8080->80/tcp  photoshow

[ec2-user@ip-172-31-58-150 ~]$



Let's inspect the container


[ec2-user@ip-172-31-58-150 ~]$ sudo podman inspect photoshow

……………

……………

……………

        "Mounts": [

            {

                "Type": "bind",

                "Name": "",

                "Source": "/mnt/efs_drive/photo/config",

                "Destination": "/config",

                "Driver": "",

                "Mode": "",

                "Options": [

                    "rbind"

                ],

                "RW": true,

                "Propagation": "rprivate"

            },

            {

                "Type": "bind",

                "Name": "",

                "Source": "/mnt/efs_drive/photo/pictures",

                "Destination": "/Pictures",

                "Driver": "",

                "Mode": "",

                "Options": [

                    "rbind"

                ],

                "RW": true,

                "Propagation": "rprivate"

            },

            {

                "Type": "bind",

                "Name": "",

                "Source": "/mnt/efs_drive/photo/thumb",

                "Destination": "/Thumbs",

                "Driver": "",

                "Mode": "",

                "Options": [

                    "rbind"

                ],

                "RW": true,

                "Propagation": "rprivate"

            }

        ],


……………

……………

……………


Step 3:


This is fine, but what if the EC2 instance goes down? I have data in an EFS filesystem, but how are clients going to access it?


For this, we have to make both our compute and our storage HA. Our storage is already HA thanks to EFS, so now let's make the compute layer HA as well.


For that, we first create an image (AMI) of our running EC2 instance, as shown below in the screenshots, and bring up a new EC2 instance from that AMI in a different Availability Zone (AZ). Both of our instances will now access the same data stored on EFS.
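For reference, the same steps can be sketched with the AWS CLI (all IDs below are hypothetical placeholders):

aws ec2 create-image --instance-id i-0123456789abcdef0 --name photoshow-ami
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t3.micro \
    --subnet-id subnet-in-second-az --security-group-ids sg-0123456789abcdef0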









Let's add an Application Load Balancer to distribute the client requests to the two EC2 instances in the two AZs.






The Application Load Balancer forwards requests to the target group that includes the two EC2 instances hosting our application containers.
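A sketch of the equivalent AWS CLI calls (names, IDs, and ARNs are placeholders; the target group and listener use port 8080 to match the container's published port):

aws elbv2 create-load-balancer --name photoshow-lb \
    --subnets subnet-az1 subnet-az2 --security-groups sg-0123456789abcdef0
aws elbv2 create-target-group --name photoshow-tg \
    --protocol HTTP --port 8080 --vpc-id vpc-0123456789abcdef0
aws elbv2 register-targets --target-group-arn <target-group-arn> \
    --targets Id=<instance-1-id> Id=<instance-2-id>
aws elbv2 create-listener --load-balancer-arn <load-balancer-arn> \
    --protocol HTTP --port 8080 \
    --default-actions Type=forward,TargetGroupArn=<target-group-arn>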




Enter the DNS name of the Load Balancer with port 8080 in the web browser (photoshow-lb-207083175.us-west-2.elb.amazonaws.com:8080) to connect to the application.
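You can also verify from the command line that the load balancer is responding:

curl -I http://photoshow-lb-207083175.us-west-2.elb.amazonaws.com:8080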






Goooal!!!



Step 4:


So far so good, but what happens when our requests increase and we need additional resources to handle the client load?


Ah ha, that is exactly where Auto Scaling comes into the picture. I added an Auto Scaling group called photoshow-asg with a desired capacity of 1, a minimum capacity of 1, and a maximum capacity of 3 to handle any increase in user requests.
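A minimal sketch of the same setup with the AWS CLI, assuming a launch template named photoshow-lt was created from the AMI above (the template name, subnet IDs, and target group ARN are placeholders):

aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name photoshow-asg \
    --launch-template LaunchTemplateName=photoshow-lt \
    --min-size 1 --max-size 3 --desired-capacity 1 \
    --vpc-zone-identifier "subnet-az1,subnet-az2" \
    --target-group-arns <target-group-arn>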




I tested that the photo gallery could still be accessed from the URL, and that the EC2 instances scaled in response to the load.



Goooal!!!


Ok, but I don’t want to be giving the DNS name of a Load Balancer to family and friends to check out my photos, how uncool is that!


Valid point! That is where Route 53 can help. I have a domain registered with Route 53, and I'm going to use it to access the photo gallery.


Go to Route 53 -> Hosted Zones -> <your registered domain> and create a CNAME record pointing to the Load Balancer DNS name.
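The same record can be created with the AWS CLI; a sketch with a hypothetical hosted zone ID and record name:

aws route53 change-resource-record-sets --hosted-zone-id Z0123456789ABC \
    --change-batch '{
      "Changes": [{
        "Action": "CREATE",
        "ResourceRecordSet": {
          "Name": "photos.example.com",
          "Type": "CNAME",
          "TTL": 300,
          "ResourceRecords": [{"Value": "photoshow-lb-207083175.us-west-2.elb.amazonaws.com"}]
        }
      }]
    }'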






Conclusion:


In this post we have seen how a containerized application running under Podman on RHEL can use Amazon EFS for highly available, shared storage. In the following posts, we will look into other storage features for Podman containers, such as volumes, and how we can use them in a containerized solution.


