Almost done.
If you've been reading along, you know we got Docker for Windows running, created a Dockerfile in Visual Studio, and published our app to a directory where Docker could find it. Running our web app in the container lets us access it at localhost:80, and it runs fine.
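As a quick recap, the local run looked something like this (the image name web-app and the port mapping are assumptions carried over from earlier parts of this series; adjust if yours differ):

```shell
# Run the web-app image in the background, mapping container port 80
# to host port 80 so the app is reachable at localhost:80
docker run -d -p 80:80 web-app
```

Docker assigns the container a random friendly name (mine got "gracious_mendel"), which we'll use again shortly.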
If you're starting here, you might want to go back to Part 1 and read from the beginning.
Now that our app is running in a local container, we can move that container image anywhere the Docker for Windows environment is available and run it there. The beauty of that--and I can't stress this enough--is that the image, being completely self-contained, will run anywhere the service is available. So there's basically no way to hose it when you put it into production, or move it from one deployment environment to another.
As a refresher, let's look at the contents of our newly created container.
The container instance runs inside the Docker service, which in turn runs on--in this particular case--a Windows OS. Notice I didn't write Windows Server or Windows 10. Any Windows OS that the Docker service can run on can also run this container. In the case of Docker for Windows Community Edition, it has to run on Windows 10 64-bit Pro, Enterprise, or Education (not Home). But the container can run on any supported Windows OS (as we'll see when we push it to Azure).
No particular reason, except it's a little easier to get this lashup running on Azure than it is on AWS, and I wanted to close this thread by moving the app to the cloud. If there's interest, I'll do one on AWS in the future. But this time it's Azure.
Azure gives you--in its typically mind-numbingly confusing way--two different ways to run a Docker container. There could be 20, who knows? The one I found first is to use a container registry, but you can also run standalone container instances. At least I think so. Who knows? So for this tutorial, we will use a container registry. Supposedly this is designed to support swarms, which is a plurality of Docker containers (like a murder of crows or a parliament of owls, I suppose). As mentioned earlier, one of the benefits of containers over VMs is how much faster you can spin one up, so having a swarm of identical containers lets you exercise some of that vaunted cloud elasticity in a more responsive way.
First you need an Azure account--and there are an equally mind-numbing number of choices in that regard. You're on your own, Bucky. Once you are logged in, either go to your resource group or create a resource group (a billing unit) and click Add. So many choices! Like when I go to buy earrings for my wife--they show me like 500 pairs and eventually I just close my eyes and point. So in the search box, type "container registry" and you find it:
When you click on it, you get a nice explanation from Microsoft about what this is, and a button to create one. Here's what they say:
Azure Container Registry is a private registry for hosting container images. Using the Azure Container Registry, you can store Docker-formatted images for all types of container deployments. Azure Container Registry integrates well with orchestrators hosted in Azure Container Service, including Docker Swarm, DC/OS, and Kubernetes. Users can benefit from using familiar tooling capable of working with the open source Docker Registry v2.
Use Azure Container Registry to:
Clicking "Create" takes you to a form where you have to specify the name, resource group, and some other stuff. For SKU select "Standard."
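If you'd rather skip the portal clicking, the same thing can be scripted with the Azure CLI. This is a sketch, reusing the names from this post (Marketing, JBDemo); the location is an arbitrary example:

```shell
# Create the resource group (the billing unit), then the registry itself.
# Substitute your own names and region.
az group create --name Marketing --location eastus
az acr create --resource-group Marketing --name JBDemo --sku Standard
```

If you plan to log in to the registry later with a username and password, you may also want to pass --admin-enabled true to az acr create.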
Notice, in tiny little letters under the Registry Name field, the suffix .azurecr.io. Your container registry will be assigned a login server named &lt;registry name&gt;.azurecr.io. Since I created a registry called JBDemo (such hubris), my login server name is jbdemo.azurecr.io. You'll need this name later to push the container up to Azure.
Here's what you should see next:
If you don't see this, just go to your resource group (mine is called "Marketing") and you'll see all your resources:
You can see there is one container registry (JBDemo) and three container instances (jbsks, sksazure, and sksweb). Click on any of these to open it up--I'm going to open my container registry. You'll notice there are no containers in it, nor is there any obvious way to upload one.
Yeah, that stalled me for a while.
Ok, now we have to go back to our PowerShell window where we ran our container. At this point I don't really need this container running on my local machine anymore, so I can stop it with this command:
docker container stop gracious_mendel
Docker will respond with the name of the container ("gracious_mendel"), indicating it has stopped. I can confirm that by running the docker ps command, which will show no running containers:
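Spelled out, the confirmation step looks like this:

```shell
# List running containers; an empty table means nothing is running
docker ps

# Add -a to also list stopped containers (gracious_mendel shows up here)
docker ps -a
```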
Now we are ready to push this container image up to Azure. But how will Docker know which image to push? The answer is the tag command:
docker tag web-app jbdemo.azurecr.io/repos/sks:latest
All this command does is take the image called web-app (see the output of the docker images command above) and associate it with a name on my container registry, in the form &lt;server name&gt;/&lt;repo name&gt;/&lt;image name&gt;:&lt;tag&gt;.
Ok, let's have Docker copy this image to our container registry:
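The push itself is two commands: authenticate against the registry, then push the tagged image. The names below match the ones used in this post; the username here assumes the registry's admin user is enabled (see the Access keys blade in the portal, which is also where the password comes from):

```shell
# Log in to the Azure registry (you'll be prompted for the password)
docker login jbdemo.azurecr.io --username jbdemo

# Push the tagged image up to the registry
docker push jbdemo.azurecr.io/repos/sks:latest
```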
Assuming no errors, you should see something like the screenshot above. When it's finished, go back to your Azure container registry and click "Repositories" in the sidebar menu:
There are a number of options here, like access control and webhooks; I'll leave those for you to experiment with. For now, we want to look at our repo. Don't worry that you never created a repository in Azure--the push command created one for you:
There's nothing magical about the name repos/sks: it's just a name. Let's click on it:
And here we'll see all the tags we've used for this repo. Could be "nightly" or "latest" or "foobar", doesn't matter.
All we have to do is right-click the "latest" tag (or click the three dots on the right) and select "Run instance."
That opens a config screen:
Notice the whinging about my garbage name. I did that so you could see the rules: only lowercase letters, no spaces. Seems pretty restrictive but there it is. Be sure to select Windows as the OS type and Yes for Public IP address. For some reason you have to specify the Resource Group again, so pick the one you started with (in my case it's Marketing).
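For the record, this step can also be scripted with the Azure CLI instead of the portal form. A sketch, assuming the names used in this post (the blogdemo instance, the Marketing resource group, and the jbdemo registry) and the registry's admin credentials from the Access keys blade:

```shell
# Create a Windows container instance from the image in the registry,
# with a public IP address so we can browse to it
az container create \
    --resource-group Marketing \
    --name blogdemo \
    --image jbdemo.azurecr.io/repos/sks:latest \
    --os-type Windows \
    --ip-address Public \
    --ports 80 \
    --registry-login-server jbdemo.azurecr.io \
    --registry-username jbdemo \
    --registry-password <your-registry-password>
```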
Now go get a cup of coffee, or your beverage of choice; this will take a few minutes. Eventually, in the notification area (the bell icon, top right), you'll get a message that the deployment succeeded. Then go to the instance: go back to your resource group and find it, like below (mine is called blogdemo):
And there it is. Let's click on it to get the public IP address:
In addition, you have controls to stop, delete, or restart the container. Let's go to a browser and connect to this IP address:
and there's our app. Since I literally never touched the source code between my laptop and Azure, I don't need any tests to know this is a good build: assuming the version on my laptop was good, this one will be too, because it's bit-for-bit identical. And that's the coolness of containers.
We began with something horrifying yet common: a VB6 desktop app still in use. We made it a modern web app (using WebMAP) with Angular 6 for the client and ASP.NET Core on the server side, with all business logic intact. Then we put that whole enchilada into a Docker container, ran it locally, then pushed it to Azure and ran it there.
The reasons for moving off the desktop to the web just keep getting stronger, and CI/CD--when you really get it--is one of the best. But you can't do CI/CD with a desktop app, not really. Containers like Docker make CI/CD so much easier, safer, and more reliable--but you have to have a web app first. So take another look at WebMAP, because every day it makes it a little easier to get off the desktop and into the 21st century.