In this segment, we will discuss containers. Containerization is a technology that many ASP.NET developers may not yet have much experience with. Docker is the most popular containerization tool today, so I'll explain how Docker works at a high level and why it's a good choice for deploying ASP.NET Core applications. When you set up a server or virtual machine to host your application, think about everything you need to do: install dependencies like .NET and third-party libraries, configure things like IIS or NGINX, add environment variables, and so on. If you have multiple servers, you have to repeat this setup manually on each one.
Because of this, adding and maintaining servers becomes a complex task. With Docker, you instead build an image of your application that includes all of the required dependencies, files, and setup steps. The Docker image contains everything needed to take a machine from a blank slate all the way up to running your application. You can then use this image to create one or more Docker containers. To use a programming metaphor, think of images as classes. A container is a process that runs on the Docker host, isolated from the other processes on the machine.
The container is a live version of the image, so to continue the programming analogy, think of containers as instances of those classes. The Docker host can run many containers at once, all isolated from one another. This approach has a few advantages. Instead of managing servers or virtual machines that have been carefully set up to run your application, you just need a server that can run Docker. The dependencies and setup steps required to host and run your application are explicitly defined in the Docker image.
That means Docker images become the fundamental unit of deployment. When you build a new version of your application, you create an updated Docker image and push it out to your running Docker containers. Images can be version-tagged and swapped in and out of containers easily, so adding more servers just means spinning up more containers from the same image. All of this makes deploying and managing your application servers much easier. The .NET Core team at Microsoft has created a set of base images that make it straightforward to deploy ASP.NET Core applications with Docker.
The first step is to install Docker on your machine. You can find all the installation steps here: https://www.docker.com/get-started.
Since I am on a Windows 10 machine, I installed the Windows version. Once the installation finishes, typing docker --version in your command prompt should print the installed version. This is the confirmation that Docker is installed and running correctly!
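The quick check looks like this (the exact version string in the comment is only an example and depends on the release you installed):

```shell
docker --version
# Prints something like:
#   Docker version 18.03.0-ce, build 0520e24
# Any version output means the Docker CLI is installed and on your PATH.
```
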
The next thing you need is a Dockerfile. A Dockerfile is like a recipe that tells Docker how to build an image for your application. Below, I have created a simple ASP.NET Core app.
I can also add Docker support from Visual Studio, as shown below.
You can add this support while creating the project too, but I wanted to add it manually, so I skipped that step. Now, let's go ahead and write the Dockerfile.
Make sure that after adding this file you remove the .txt extension, otherwise it won't work.
Now, my Dockerfile looks like this:
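Putting the steps explained in the rest of this section together, the full Dockerfile reads roughly like this (a sketch based on my walkthrough; the project name HelloDocker matches the sample app in this post, so adjust the csproj and dll names for your own project):

```dockerfile
# Build stage: use the .NET Core SDK image to compile the app
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src

# Copy just the project file and restore packages first,
# so this layer is cached until the csproj changes
COPY HelloDocker.csproj .
RUN dotnet restore

# Copy the rest of the sources and publish a Release build to /app
COPY . .
RUN dotnet publish -c Release -o /app

# Runtime stage: the smaller ASP.NET Core runtime image, no SDK needed
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS runtime
WORKDIR /app
COPY --from=build /app .

# Tell ASP.NET Core which URLs/ports to bind to
ENV ASPNETCORE_URLS http://*:5000

ENTRYPOINT ["dotnet", "HelloDocker.dll"]
```
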
Let me explain the code a bit.
The Dockerfile starts with FROM microsoft/dotnet:2.1-sdk AS build (2.1 was the latest version at the time of writing). This tells Docker that we're starting from the Microsoft .NET Core SDK base image. Then we say WORKDIR /src to move into a virtual directory inside the Docker image.
We want to copy all of our source files into the Docker image temporarily so we can build the application. We start with just the project file: COPY the .csproj into the Docker image, then RUN dotnet restore to pull down any packages we need to build the application.
After that restore step, we COPY the rest of the source files, then RUN dotnet publish -c Release and say that the output should go into another virtual directory called /app.
Splitting the restore and publish into separate steps lets Docker cache the restored NuGet packages in their own layer, so they only get pulled down again when the csproj file changes. Now that we've built the application, we say FROM microsoft/dotnet:2.1-aspnetcore-runtime AS runtime, since we don't need the SDK any longer. We switch into the /app directory and COPY the published output from the build stage into /app. We also need to set an environment variable, so we say ENV; the variable is called ASPNETCORE_URLS, and it tells ASP.NET Core which ports and URLs it should bind to.
We'll stick with the default configuration of binding to port 5000 by setting it to http://*:5000. We could also configure HTTPS if we had an HTTPS certificate available inside the Docker container. Finally, we add an ENTRYPOINT with the command that starts the application: "dotnet", "HelloDocker.dll".
Now, I will switch to cmder and build the image. I run docker build, give it a tag with -t hellodocker as a name for the image, and then a dot to say that we want to build from the current directory. Once the image has been built, it can be run locally on this machine, or on any Docker host. I have pasted screenshots of the output below.
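The build command described above, run from the directory that contains the Dockerfile:

```shell
# Build an image from the Dockerfile in the current directory (.)
# and tag it "hellodocker"
docker build -t hellodocker .
```
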
Now that we've built an image with docker build, we can test it on our local machine. First, use docker images to list the images that are available.
I can run this image using docker run. I'll use the -it flag to attach the container's output to my console window. I'll also use the -p flag to map port 5000 inside the container, which is the port exposed in the Dockerfile, to port 5000 on my local machine.
And finally I'll specify the name of the image that I want to spin up. When I hit Enter, Docker takes this image, creates a container from it, then runs the container and shows me the output.
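The two commands from the steps above look like this (hellodocker is the tag we used during the build):

```shell
# List locally available images to confirm the build succeeded
docker images

# Run a container from the image:
#   -it           attaches the container's output to this terminal
#   -p 5000:5000  maps host port 5000 to container port 5000
docker run -it -p 5000:5000 hellodocker
```
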
This is very fast. Now when I browse to http://localhost:5000, I see the sample app.
I hope you found this discussion useful. In the coming session, we will dig deeper and look at the finer details.
Also published on Medium.