This article is NOT good advice and should be completely disregarded by any serious sysadmins.<p>You absolutely should not <i>remote</i> into the web server box from the agent box! This goes entirely against the grain of how modern Azure DevOps pipeline deployments are designed to work... hence the security issue that the hapless blogger is unnecessarily trying to solve.<p>The correct approach is to install the DevOps Agent <i>directly</i> onto the IIS web hosts, linking them to a named Environment such as "Production Web App Farm A" or whatever. See: <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/process/environments-virtual-machines?view=azure-devops&tabs=windows" rel="nofollow">https://learn.microsoft.com/en-us/azure/devops/pipelines/pro...</a><p>In your pipelines, you can then utilise Deployment Jobs linked to that named environment: <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/process/deployment-jobs?view=azure-devops" rel="nofollow">https://learn.microsoft.com/en-us/azure/devops/pipelines/pro...</a><p>Deployment jobs have all sorts of fancy built-in capabilities such as pre-deployment tasks, rolling and canary strategies, post-deployment tasks, health checks, etc.<p>They're designed to dynamically pick up the current "pool" of VMs linked to the environment through the agents, so you don't need to inject machine names via pipeline parameters. Especially when you have many apps on a shared pool of servers, this cuts down on meaningless boilerplate.<p>All of the above works even with annoying requirements such as third-party applications where active-passive mode must be used for licensing reasons. (I'm looking at <i>you</i>, ESRI, and your overpriced software.) The trick is to 'tag' the agents during setup; those tags can then be used in pipelines to filter "active" versus "passive" nodes.
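<p>A minimal sketch of what such a deployment job might look like, assuming an environment named "Production Web App Farm A", agents tagged "active" at registration time, and a hypothetical site name and artifact layout:<p><pre><code>stages:
- stage: Deploy
  jobs:
  - deployment: DeployWebFarm
    displayName: Deploy to web farm
    environment:
      name: 'Production Web App Farm A'
      resourceType: VirtualMachine
      tags: active            # only run on agents registered with this tag
    strategy:
      rolling:
        maxParallel: 1        # one VM at a time
        preDeploy:
          steps:
          - script: echo "drain this node from the load balancer here"
        deploy:
          steps:
          # Pipeline artifacts for the run are downloaded automatically
          # into $(Pipeline.Workspace) on each target VM.
          - task: IISWebAppDeploymentOnMachineGroup@0
            inputs:
              WebSiteName: 'MyWebSite'                    # placeholder
              Package: '$(Pipeline.Workspace)/drop/*.zip'
        routeTraffic:
          steps:
          - script: echo "re-add the node and run health checks here"
</code></pre>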
If you have the choice, I'd strongly consider using Kestrel and self-contained deployments.<p>IIS isn't "bad", but it's definitely far more complicated than these newer hosting models.<p>Controlling 100% of the hosting environment from code is a really nice shift in responsibility. It takes all the mess out of your tooling and processes. Most of the scary stuff is resolved at code review time.
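<p>The publish side of this is basically a one-liner; a rough sketch as a pipeline step, where the project path and runtime identifier are placeholders:<p><pre><code>steps:
- script: >
    dotnet publish src/MyApp/MyApp.csproj
    -c Release
    -r win-x64
    --self-contained true
    -o $(Build.ArtifactStagingDirectory)/publish
  displayName: Publish self-contained app (ships its own runtime, hosts on Kestrel)
- publish: $(Build.ArtifactStagingDirectory)/publish
  artifact: drop
</code></pre>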
If you're forced to use IIS for hosting anyway, why not use msdeploy.exe for deployment? I recently used this guide with great success: <a href="https://dennistretyakov.com/setting-up-msdeploy-for-ci-cd-deployments-to-iis/" rel="nofollow">https://dennistretyakov.com/setting-up-msdeploy-for-ci-cd-de...</a><p>I can't find the documentation for it now, but some version of msdeploy also added a way to automatically take the site offline during deployment, so the deployment isn't blocked by files in use.
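<p>If I remember right, that's the AppOffline rule (<i>-enableRule:AppOffline</i>), which drops an app_offline.htm on the target for the duration of the sync so locked files don't block the copy. A rough sketch of invoking it from a pipeline step; the server URL, site name, and credentials are all placeholders:<p><pre><code>- script: >
    "%ProgramFiles%\IIS\Microsoft Web Deploy V3\msdeploy.exe"
    -verb:sync
    -source:package="$(Pipeline.Workspace)\drop\site.zip"
    -dest:auto,computerName="https://webhost:8172/msdeploy.axd?site=MySite",userName="deployuser",password="$(DeployPassword)",authType=Basic
    -enableRule:AppOffline
  displayName: Deploy via Web Deploy
</code></pre>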
It's mind-blowing to me that people still ship software by copying a file to a machine and restarting a service.<p>I'm very unfamiliar with IIS hosting, though: does it support any kind of containerisation or deployment immutability at all?
Author here - very surprised to see this on the front page after posting it a few days ago. Thanks for the resurrect!<p>For those wondering how anyone is dealing with such an ancient process, I've written a piece about the history of automation in our org that might shed some light: <a href="https://rewiring.bearblog.dev/automation-journey-of-a-legacy-organization/" rel="nofollow">https://rewiring.bearblog.dev/automation-journey-of-a-legacy...</a>
If you’re doing ASP.NET Core, you should be able to get away without restarting the IIS app pool. You can just create an `app_offline.htm` file, wait until the process fully shuts down, deploy the new code, and finally remove the .htm file.
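<p>A rough sketch of that sequence as pipeline steps, with the physical site path as a placeholder:<p><pre><code>steps:
- powershell: |
    # The ASP.NET Core Module watches for this file, shuts the app down,
    # and serves the file's contents to incoming requests in the meantime.
    Set-Content -Path 'C:\inetpub\MyApp\app_offline.htm' -Value 'Down for maintenance'
    Start-Sleep -Seconds 10   # give the worker process time to release file locks
  displayName: Take app offline
- task: CopyFiles@2
  inputs:
    SourceFolder: '$(Pipeline.Workspace)/drop'
    TargetFolder: 'C:\inetpub\MyApp'
    OverWrite: true
  displayName: Copy new build
- powershell: Remove-Item -Path 'C:\inetpub\MyApp\app_offline.htm'
  displayName: Bring app back online
</code></pre>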
If you’re still forced to deal with IIS and Windows Services deployments, then I’d strongly suggest moving to Octopus Deploy for this. It saves so much headache. A Starter edition license is just $360 per year.