
How to run NGINX Inside Docker (for easy automatic scaling) – CloudSavvy IT




One of the most common workloads for Docker is containerizing web servers such as NGINX and Apache to run a high-performance content delivery fleet that can be scaled and managed automatically. Here, we show you how to set one up with NGINX.

Setting up NGINX Inside Docker

Docker is a containerization platform used to package your application and all of its code into an easily manageable container image. The process is quite similar to setting up a new server: the container is a blank slate, so you need to install dependencies, build your code, copy over the build artifacts, and copy over any configuration. Fortunately, you don't have to automate very much of this yourself. NGINX already publishes an official Docker container, which you can use as a starting point for your application.
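Before writing any custom build configuration, you can try the stock NGINX image on its own. A minimal sketch, assuming you have a directory of static files to serve (the port and directory here are arbitrary):

```shell
# Pull the official NGINX image from Docker Hub
docker pull nginx:stable

# Serve the current directory as read-only static content on http://localhost:8080
docker run -d --name nginx-test -p 8080:80 \
  -v "$(pwd)":/usr/share/nginx/html:ro \
  nginx:stable
```

This is enough to confirm the base image works before layering your own application on top of it.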

Depending on which application you're containerizing, this can of course be a little more involved. If you're deploying a CMS like WordPress, you'll probably need an external database, as containers are not designed to be persistent. A good place to start for WordPress in particular is the official WordPress Docker container.
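As a sketch of what that looks like, the official WordPress image accepts its database connection details through environment variables; the hostname, credentials, and database name below are placeholders you would replace with your own:

```shell
# Run the official WordPress image against an external MySQL database.
# All -e values here are placeholders for your real connection details.
docker run -d -p 8080:80 \
  -e WORDPRESS_DB_HOST=db.example.com:3306 \
  -e WORDPRESS_DB_USER=wordpress \
  -e WORDPRESS_DB_PASSWORD=changeme \
  -e WORDPRESS_DB_NAME=wordpress \
  wordpress:latest
```

Because the database lives outside the container, you can destroy and recreate the WordPress container freely without losing content.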

To have something a little more involved than a simple Hello World website, we'll create a new project directory and initialize a basic Vue.js application. Your configuration will differ depending on the content you serve, but the general idea is the same.
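If you want to follow along, one way to scaffold the example app is with the Vue CLI; the project name my-app is arbitrary:

```shell
# Install the Vue CLI and scaffold a new project
npm install -g @vue/cli
vue create my-app

# Verify the app runs locally before containerizing it
cd my-app
npm run serve
```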


Create a new file named Dockerfile at the root of your project, with no extension. This serves as the build configuration. By default, the container is empty and includes only the applications and dependencies that come installed with the base image, so you must copy your application's code into it. Serving only static content is easy, but if you're working with server-side applications such as WordPress, you may need to install additional dependencies.

The following configuration is pretty basic. Since this is a Node application, we have to run npm run build to get a production-ready build. We can handle all of this in the Dockerfile by setting up a two-stage container build:

FROM node:latest as build-stage
WORKDIR /src
COPY package*.json ./
RUN npm install
COPY ./ .
RUN npm run build

FROM nginx as production-stage
RUN mkdir /src
COPY --from=build-stage /src/dist /src
COPY nginx.conf /etc/nginx/nginx.conf

The first FROM command pulls the node container from Docker Hub and creates a new stage called build-stage. The next line sets the working directory to /src, and then package.json is copied over. Docker then runs npm install, copies over the app's code, and starts the build process. If your application needs to be built from source, you'll want to do something like this.

The next stage pulls the nginx container to act as the production deployment. It creates the /src directory, then copies the /src/dist folder containing the build artifacts from the build-stage container into /src on the production container. Finally, it copies over an NGINX configuration file.

You'll also want to create a new file named .dockerignore, which tells Docker to ignore node_modules as well as any build artifacts from local builds:

**/node_modules
**/dist

The Dockerfile references an nginx.conf, which you must also create. If you're running a more complex setup with multiple configuration files in /sites-available, you may want to create a new folder for your NGINX configuration and copy the whole thing over.

user  nginx;
worker_processes  1;
error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;
events {
  worker_connections  1024;
}
http {
  include       /etc/nginx/mime.types;
  default_type  application/octet-stream;
  log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
  access_log  /var/log/nginx/access.log  main;
  sendfile        on;
  keepalive_timeout  65;
  server {
    listen       80;
    server_name  localhost;
    location / {
      root   /src;
      index  index.html;
      try_files $uri $uri/ /index.html;
    }
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
      root   /usr/share/nginx/html;
    }
  }
}

This is just an HTTP web server. The easiest way to set up HTTPS is to run LetsEncrypt's Certbot locally and copy the certificate from /etc/letsencrypt/live/example.com/fullchain.pem into the production container. These certificates are valid for 90 days, so you'll need to renew them regularly. You can automate this as part of the container build process.
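As a sketch, assuming Certbot has already issued certificates for example.com and your Dockerfile copies fullchain.pem and privkey.pem into /etc/nginx/certs/, the additional HTTPS server block could look like this (the domain and paths are illustrative):

```
server {
  listen       443 ssl;
  server_name  example.com;

  # Certificates copied into the image at build time (illustrative paths)
  ssl_certificate     /etc/nginx/certs/fullchain.pem;
  ssl_certificate_key /etc/nginx/certs/privkey.pem;

  location / {
    root   /src;
    index  index.html;
    try_files $uri $uri/ /index.html;
  }
}
```

Keep in mind that baking certificates into the image means rebuilding it on each renewal; mounting them as a volume at runtime is a common alternative.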

When everything is in order, you can run docker build:

docker build . -t my-app

This will build the container as my-app, after which you're free to tag it and push it to ECS or a container registry for eventual deployment. You should, of course, test it locally first with docker run, binding localhost:8080 to port 80 on the NGINX instance:

docker run -d -p 8080:80 my-app
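Once you're happy with the image, tagging and pushing it to a registry is a two-command affair; the registry URL and version tag below are placeholders for your own:

```shell
# Tag the local image for a registry (URL and version are placeholders)
docker tag my-app registry.example.com/my-app:1.0.0

# Push it so your deployment platform can pull it
docker push registry.example.com/my-app:1.0.0
```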

Once you have a built image, it's quite easy to deploy it in production. You can read our guide to creating an auto-scaling container deployment on AWS ECS to learn more, or read our guide to setting up a CI/CD pipeline with containers to handle automated builds and deployments.

