
How to manage Docker Container Persistence and Storage – CloudSavvy IT




Docker is a containerization service, designed to run apps in their own isolated environment on any system. It is intended to be platform agnostic, but if you need to store data on disk, you can do so with volume and bind mounts.

Use an external database or an object store

This is the method that most people will recommend. Storing persistent state as files on disk does not fit Docker's model, and although it can be done, it is always best to ask yourself first: do you really need to?

For example, let’s say you’re running a web application in Docker that needs to store data in a database. It doesn’t make much sense to run MySQL in a Docker container, so instead you should deploy MySQL on RDS or EC2 and let the Docker container connect to it directly. The Docker container is then completely stateless, as it is meant to be; it can be stopped, started, or hit with a sledgehammer, and a new one can be spun up in its place, all without data loss. With IAM permissions, this can be done securely, entirely within your VPC.
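In practice, the container would usually receive the database endpoint and credentials through its environment rather than storing anything locally. A minimal sketch; the hostname, user, password, and image name here are placeholders, and passing a password on the command line is for illustration only:

# Pass the external RDS endpoint to the container as environment variables (all values are placeholders)
docker run -d \
  --name my-web-app \
  -e DB_HOST=mydb.cluster-abc123.us-east-1.rds.amazonaws.com \
  -e DB_USER=appuser \
  -e DB_PASSWORD=example-password \
  my-web-app:latest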

If you really do need to store files, such as photos and videos uploaded by users, you should use AWS’s Simple Storage Service (S3). It is much cheaper than EBS-based storage, and far cheaper than EFS storage, which is your primary choice for a shared file system for ECS containers. Instead of storing a file on disk, you upload it directly to S3. This method also allows you to run additional processing with Lambda functions on uploaded content, such as image or video compression, which can save a lot on bandwidth costs.
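For example, the upload step can go through the AWS CLI or SDK instead of a local write; a rough sketch with the CLI, using a placeholder bucket and file name:

# Upload a user file straight to S3 instead of the container's disk (bucket and key are placeholders)
aws s3 cp ./user-upload.jpg s3://my-app-uploads/photos/user-upload.jpg

# Optionally generate a time-limited link so a client can fetch the object directly
aws s3 presign s3://my-app-uploads/photos/user-upload.jpg --expires-in 3600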

Simple solution: Mount a volume on a container

Docker has two ways to achieve persistence: volume mounts and bind mounts. With bind mounts, you can mount a specific location on the host’s file system to a location inside the Docker container. This mount can be read-only, or read/write, in which case files written by the Docker container will persist on disk.

You can bind individual host directories to target directories in the Docker container, which is useful, but the recommended method is to create a new “volume” managed by Docker. This makes it easier to back up, transfer, and share volumes between different instances of containers.
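For comparison, a plain bind mount looks like the sketch below; the host path and container name are placeholders, and the readonly option can be dropped if the container needs to write:

# Bind a host directory directly into the container (host path is a placeholder)
docker run -d \
  --name devtest-bind \
  --mount type=bind,source=/srv/nginx-config,target=/etc/nginx,readonly \
  nginx:latest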

A word of caution: If you do not have direct access to the server you are running Docker on, as is the case with managed deployments such as AWS’s Elastic Container Service (ECS) and Kubernetes, you will want to be careful with this. Mounts are tied to the server’s own disk space, which is usually volatile. You will want to use external file storage such as EFS to achieve real persistence with ECS (more on that later).

Still, bind and volume mounts work well if you are simply using Docker to run an installation of an app on your own server, or just want quick persistence for testing purposes. Either way, the method of creating volumes is the same no matter where you store them.

You can create a new volume from the command line with:

docker volume create nginx-config
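You can confirm that the volume exists, and see where Docker stores it on disk, with:

docker volume ls
docker volume inspect nginx-config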

And then, when you go to run your Docker container, link it to the target path inside the container with the --mount flag:

docker run -d \
  --name devtest \
  --mount source=nginx-config,target=/etc/nginx \
  nginx:latest

If you run docker inspect on the container, you will see the volume listed under the Mounts section.
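If you only want the mount information, you can filter the output with a format string, for example:

docker inspect -f '{{ json .Mounts }}' devtest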

If you use Docker Compose, setup is also easy. Just add a volumes key for each container’s service, and map a volume name to a location inside the container. You also need to list the volumes under a top-level volumes key for Compose to provision them.

version: "3.0"
services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - nginx-config:/etc/nginx/
volumes:
  nginx-config:

This automatically creates the volume for this Compose file. If you want to use a pre-made volume from outside Compose, specify external: true in the volume configuration:

volumes:
  cms-content:
    external: true
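In that case, the volume must already exist before you bring the stack up, for example by creating it manually first:

docker volume create cms-content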

If you instead just want to make a bind mount and don’t care about named volumes, simply enter a path in place of the volume name and omit the top-level volumes key.

version: "3.0"
services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - /docker/nginx-config/:/etc/nginx/

You can read Docker’s complete documentation on using volumes with Compose if your use case requires something more specific than this.

For managed deployments, use a shared file system (AWS EFS)

If you deploy on AWS ECS, you cannot use a normal bind or volume mount, because when you shut down the container, it will probably not be running on the same machine the next time it starts, which defeats the purpose of persistence.

But you can still achieve persistence with another AWS service: Elastic File System (EFS). EFS is a shared network file system. You can mount it on multiple EC2 servers, and the data is synchronized across all of them. For example, you could use it to host the static content and code of your site, and run all of your worker code in ECS containers to handle the actual serving of your content. This gets around the limitation of not being able to store data on disk, because the volume mount is tied to an external drive that persists across ECS deployments.

To configure this, you will need to create an EFS file system. This is fairly simple and can be done from the EFS Management Console, but you will want to note down the file system ID, because you need it to work with the volume.
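If you prefer the command line, the same file system can be created with the AWS CLI; this is a sketch, and the creation token and tag value are arbitrary placeholders:

# Create the EFS file system and note the FileSystemId in the output
aws efs create-file-system \
  --creation-token ecs-shared-storage \
  --tags Key=Name,Value=ecs-shared-storage

# List existing file systems and their IDs later if needed
aws efs describe-file-systems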

If you need to manually add or modify files in your EFS volume, you can mount it on any EC2 instance. You will need to install amazon-efs-utils:

sudo yum install -y amazon-efs-utils

And then mount it with the following command, using your file system ID:

sudo mount -t efs fs-12345678:/ /mnt/efs

This way, you can directly view and modify the contents of your EFS volume as if it were just another hard drive on your server. You will want to make sure you have nfs-utils installed for everything to work properly.
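If you want the mount to survive reboots, you can also add an entry to /etc/fstab; this line is a sketch using the EFS mount helper from amazon-efs-utils, with the same placeholder file system ID as above:

# /etc/fstab entry using the EFS mount helper (file system ID is a placeholder)
fs-12345678:/ /mnt/efs efs defaults,_netdev 0 0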

Next, you need to connect ECS to this volume. Create a new task definition in the ECS Management Console, scroll to the bottom, and select “Configure via JSON.” Then replace the empty “volumes” key with the following JSON, and add the “family” key at the end:

"volumes": [
        {
            "name": "efs-demo",
            "host": null,
            "dockerVolumeConfiguration": {
                "autoprovision": true,
                "labels": null,
                "scope": "shared",
                "driver": "local",
                "driverOpts": {
                    "type": "nfs",
                    "device": ":/",
                    "o": "addr=fs-XXXXXX.efs.us-east-1.amazonaws.com,nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport"
                }
            }
        }
    ],
"family":"nginx",

Replace fs-XXXXXX.efs.us-east-1.amazonaws.com with the correct address of your EFS volume. You should then see the new volume listed in the task definition.


You can use this volume in your container definitions as a mount point. Select “Add container” (or edit an existing one), and under “Storage and logging,” select the newly created volume and enter a container path.


Save the task definition, and when you launch a cluster with this new definition, all of the containers will be able to access your shared file system.
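If you want to verify the mount from inside a running container, one option (assuming ECS Exec is enabled for the service; the cluster and container names here are placeholders) is to open a shell in a task and check the container path you configured:

aws ecs execute-command \
  --cluster my-cluster \
  --task <task-id> \
  --container web \
  --interactive \
  --command "/bin/sh"

Any file written to the mounted path from that shell should also be visible from an EC2 instance that mounts the same EFS file system.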

