Lukas Marx
June 01, 2018

Run Angular in a Docker Container using Multi-Stage builds

In this tutorial, we are going to take a close look at Docker and its containers.

We will discover how we can use Docker to build and host a simple Angular application.


For that, we will create a simple image to build the Angular app and then use the Docker multi-stage build feature to create another image that hosts our application using NGINX.

We will then learn how we can build our Docker image and run it in a container using Docker itself and Docker-Compose.

At the end of this tutorial, you will have an Angular application running in your own Docker container.


Let's get started!


Creating an application to Dockerize

For this example, we are going to use an Angular application. If you already have the angular-cli installed, you can go ahead and create a new angular-cli project right away.

ng new angular-docker

Otherwise, you can just clone this Git repository.

git clone

It contains the same files as if you had used the angular-cli.

Also, don't worry if you are not familiar with Angular at all! After all, we are just publishing some HTML, CSS and JavaScript files. That they were generated by Angular does not matter at all.


Defining a Docker image with a Dockerfile

Now that we have something to run inside a container, it is time to set it up. But before we can run anything, we need to define a Docker image first.

What is a Docker image?

You can think of the Docker image as a blueprint. It contains all the information that is required to set up the Docker container.

The image defines the operating system we are using, which programs will be installed and which directories exist.

To define the image, we are using a special file called Dockerfile. It typically sits right at the root of the project.

To follow along, please create a file called "Dockerfile" (yes, without a file extension) in the root directory of our Angular project.

To define the image, we are using a Docker-specific syntax.

Let's take a look at all the commands we need to create a working image!

Choosing the Base of our image

Probably the first line of every Dockerfile is the "FROM" statement. This is because every image needs a base image to inherit from. Typically this is an image containing only the base operating system.

Using the "FROM" command, we can choose the operating system our image (and later our container) is based on.

Also, we need to define the version of the base image. While you could be lazy and just choose :latest, this is not desirable in most cases, because you want to prevent any unwanted and uncontrollable changes to your image. Instead, we define a fixed release for our image.

For this example, we are going to inherit from the Node image in version 8.11.2, based on the Alpine distribution. In this case, the first line of our Dockerfile looks like this:

FROM node:8.11.2-alpine as node

Here, we are using a Docker feature called multi-stage builds. In the first stage, we use the Node image to compile our Angular app. In the second stage, we will use that output to host the application itself.

This is why we need to define a name for that stage ("as node") so we can reference it later.

Choosing a WORKDIR

Using the WORKDIR command, we define our current path inside of the container. This is important for other commands which use the relative path. In this case, we choose a directory called "/usr/src/app".

WORKDIR /usr/src/app

Copying our project into the image

In the next step, we copy both the package.json and the package-lock.json into the image.

COPY package*.json ./

We then run npm install to install all dependencies of the project.

RUN npm install

Only after that, we copy the rest of the project into the image using:

COPY . .

We do so in separate steps because this way we can take advantage of Docker caching each step (also called a layer). That way, subsequent builds of the image will be faster, as long as the package.json did not change.

Afterward, we run the build script of the Angular project using:

RUN npm run build

Ignoring files using .dockerignore

While our first stage is almost complete, we need to do one more thing to make it work properly.

Notice that the COPY command copies everything? This includes the node_modules folder and the dist directory. Copying these directories can completely overwrite the work we have done with the "RUN npm install" and "RUN npm run build" commands. This can lead to problems because, for example, certain node_modules need to be compiled on the platform they are used on.

To prevent this from happening, we can create a file called ".dockerignore" in the root directory of the project. Much like a .gitignore, this file tells Docker to exclude the listed files and folders from the build context, so the COPY command skips them.

To exclude our two directories, we edit the .dockerignore file like this:

dist
node_modules

Finishing stage one

Now we have defined a Docker image that contains the compiled version of our Angular app. The finished Dockerfile for that stage should look like this:

FROM node:8.11.2-alpine as node

WORKDIR /usr/src/app

COPY package*.json ./

RUN npm install

COPY . .

RUN npm run build


Creating multi-stage images with Docker

We have successfully created a Docker image that contains the compiled version of our app. Great!

Next, we want to serve that application using a web server. We are going to use NGINX for that.

Before multi-stage builds were introduced in Docker version 17.05, we would have had to modify our Dockerfile to install NGINX alongside Node.

Especially for day-to-day Windows users, that can be a challenging task. Such a Dockerfile is also very hard to read and maintain.

Thankfully, with Docker multi-stage builds, we can just create a new image that has NGINX already installed. All we then have to do is copy the compiled output from our first-stage image into the new image.

For this image we are going to use the official nginx alpine image:

# Stage 2
FROM nginx:1.13.12-alpine

Afterward, we copy the dist-output from our first image (called node, remember?) into our new image, precisely into the NGINX public folder.

COPY --from=node /usr/src/app/dist/angular-docker /usr/share/nginx/html

Finally, we copy the required nginx configuration file into our image. We have not created this file yet, but we will do so in the next chapter.

COPY ./nginx.conf /etc/nginx/conf.d/default.conf
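
Putting both stages together, the complete Dockerfile looks like this:

```dockerfile
# Stage 1: build the Angular app
FROM node:8.11.2-alpine as node
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: serve the compiled output with NGINX
FROM nginx:1.13.12-alpine
COPY --from=node /usr/src/app/dist/angular-docker /usr/share/nginx/html
COPY ./nginx.conf /etc/nginx/conf.d/default.conf
```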


Configuring NGINX using a config file

To tell NGINX which files to serve under which domain, we need to provide it with a configuration file.

As we defined in the last command of our Dockerfile, this configuration has to be in a file called "nginx.conf" located in the project's root. Let's go ahead and create that file!

server {
  listen 80;
  location / {
    root /usr/share/nginx/html;
    index index.html index.htm;
    try_files $uri $uri/ /index.html =404;
  }
}

Inside of that file, we configure NGINX to listen on port 80 and serve the index.html file from the defined directory. The try_files directive makes requests for unknown paths fall back to index.html, which is important for a single-page application like ours.


How to create a container from the Docker image

Congratulations! You have created your first (two) image(s)!

Now that we have the blueprint for our container, we can finally start thinking about the container itself.

There is only one question:

What is a Docker container?

"A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings." - Docker

The main advantage of a Docker container over a regular virtual machine is that containers can share resources (like the image). This allows containers to run in large clusters with minimal resource overhead.

Building the Docker image from the Dockerfile

Now it is time to run our first container. But before we can do that, we need to build the image first. We can do so using this command:

docker build -t angular .

With the -t argument, we define the name (tag) of the image. The second argument (".") defines the build context, the location of the Dockerfile. This command can take a while because base images have to be downloaded and the Angular app has to be compiled.

Spinning up a Docker container from our image

To start a Docker container using our image, we use this command:

docker run -p 80:80 --name angular-container -d angular

With -p we define a port mapping: port 80 of our container is published to port 80 of our host machine. With --name we define the name of the container, in this case "angular-container". With -d we detach and let Docker run the container in the background. The last argument is the name of the image ("angular") we want to use.
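
To check that everything worked, we can list the running containers and request the page from the host machine. These commands assume a running Docker daemon; the exact output depends on your setup:

```shell
# List running containers; "angular-container" should show up
# with port 80 of the container mapped to port 80 of the host
docker ps

# Request the app from the host machine
curl http://localhost:80
```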


How to use docker-compose to simplify the process

Docker has a tool called docker-compose that makes working with Docker a lot easier.

Using Docker-Compose, we can define a file containing all the information we passed into the run command. That way, we don't have to pass it in every time.

Docker-compose uses a file called "docker-compose.yml". Create a file with that name in the root directory of the project.

Afterward, pass in the required information like so:

version: '2.3'
services:
  angular:
    hostname: localhost
    container_name: angular-container
    build: .
    ports:
      - 80:80

With Docker-Compose, it is also very easy to start multiple containers at the same time. But that is another story...
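
As a quick sketch of what that could look like, here is a second service added to the same file. Note that the api service, its image and its command are made up purely for illustration and are not part of this project:

```yaml
version: '2.3'
services:
  angular:
    container_name: angular-container
    build: .
    ports:
      - 80:80
  # hypothetical backend service, just for illustration
  api:
    image: node:8.11.2-alpine
    command: node server.js
```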

To start our container using Docker-compose, we can use the following command:

docker-compose up



In this tutorial, we discovered how we can use Docker and Docker-compose to create a Docker image/container to build and serve an Angular application.


If you want to see the full application, you can check out the GitHub repository for this tutorial!

I hope you like this article. If you did, please share it with your friends!

Happy docking!
