Category Archives: Azure

Making WordPress Fast on Azure App Service Linux PHP 8

TL;DR:

To make WordPress fast on a vanilla Linux Web App, add the app setting WEBSITES_ENABLE_APP_CACHE=true, redeploy your app, and ensure that you are not relying on the local filesystem (i.e. use Azure Blob Storage or S3 for media, deploy plugin updates rather than updating them live, etc.).
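If you prefer to script the setting, it is something like this (a sketch using the Azure CLI; the resource group and app names are placeholders):

# Enable App Cache – remember the setting only takes effect after a redeploy
az webapp config appsettings set `
  --resource-group my-rg `
  --name my-wordpress-app `
  --settings WEBSITES_ENABLE_APP_CACHE=true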

App Service on Linux has been around for some time now, and as of PHP 8, running PHP on App Service on Windows is no longer supported. This blog was hosted on PHP 7.4 on Linux App Service and was very easy to deploy manually – I just created an app service (B1) and a MySQL Flexible Server (B1s), both on a basic tier, and copied the files and DB over. It cost only 60p per day, approx. £18 a month, well within my Visual Studio inclusive Azure credits. I was very happy… then I switched to PHP 8.

Firstly, be aware that once you change, you can’t go back! Secondly, be aware that the stack has changed from Apache to Nginx, so the .htaccess file is no longer used – you will get a 404 on any URL not ending in .php. There is guidance here on how to implement a rule in the nginx configuration to fix this, using a custom startup script. Note that you don’t need to create a custom .sh file; you can just put this into the Azure startup command:

cp /home/site/default /etc/nginx/sites-enabled/default; service nginx restart
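For reference, the essential change inside that custom default file is the standard WordPress permalink rule (a sketch – start from a copy of the image’s stock /etc/nginx/sites-enabled/default and adjust only the location block):

# Inside the server { } block of the copied default file
location / {
    # Fall back to index.php so pretty permalinks don't 404
    try_files $uri $uri/ /index.php?$args;
}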

I’m happy with this; the blog runs quickly (TTFB = 0.4s). I like the simplicity of sticking as close as possible to the vanilla PaaS service.

Which brings me to the main subject of this blog post: another WordPress site I manage, this time a WooCommerce setup with 16 plugins. This site had been running fine on another platform, but when we moved it onto Azure App Service Linux the performance turned abysmal – the time to return just the homepage HTML document (TTFB) went up from about 0.8s to about 2.8s, with admin and shop pages even worse.

What I tried:

  1. Set the app to Always On
  2. Scaled the web app up from S1 all the way to P1v3 – made no difference
  3. Scaled MySQL up from B1ms to D2ads_v5 – made no difference
  4. Checked the metrics (CPU and memory) – nothing seemed overloaded
  5. Set FPM_MAX_CHILDREN=20 and some other settings after spotting a warning in the logs – made no difference
  6. Used the Debug Bar plugin to check for slow MySQL queries and external HTTPS dependencies – but these were only taking about 200ms
  7. Enabled Redis caching using the Redis Object Cache plugin – no difference
  8. Enabled the WP-Optimize plugin – no difference unless I enabled page-level caching, which only benefits anonymous users

Eventually I discovered that the bottleneck is disk I/O, because App Service runs the app from a network share. There are many discussions stating this is the cause, but not many with solutions.

This post hints at the PHP performance issues caused by disk I/O and suggests you can use a custom container image, because then persistent storage is not used. This is probably another solution, but I wanted to solve this while still running a vanilla web app, so as not to have to maintain any Docker images. The second option, making the app run from outside of /home, isn’t really viable. The third option is to use another hosting product! I was sure there must be an easier way to get the app running from local disk.

This post contains WordPress-specific app settings for Azure, but I believe (some of) these only apply when using Microsoft’s WordPress marketplace template, which uses a custom Docker image (that aligns with the above post). Specifically, I had read that the app setting WORDPRESS_LOCAL_STORAGE_CACHE_ENABLED = true would solve the performance issue, but it made no difference to my vanilla app. I spun up a WordPress app via the portal to see how it is configured. Sure enough, it uses the container mcr.microsoft.com/appsvc/wordpress-alpine-php:8.2 and loads of the app settings referred to in the post.

I can also see that they install W3 Total Cache and configure it to cache pages using Redis, and that it uses Azure CDN. Fair enough, it is fast – under 300ms TTFB. This is all great, but I don’t want to add all this complexity into my existing app that hasn’t needed it before; I just want it to run as well as it can without page caching. Anyway, half the shop can’t use this type of caching, and the back office can’t use it at all. I considered trying to use their image and deploy my WordPress into it, but decided to pursue a more vanilla fix first (and there doesn’t seem to be much documentation for using it)… if you do decide to pursue this, then the articles here would be useful.

This post validates that storage is slow on a Linux web app, but the only suggestion is to mount premium SSD storage, which is not supported for /home. Many people seem to have given up and hosted elsewhere, or are relying on caching plugins to improve performance to an acceptable level.

Finally I came across App Cache, which looks to be the Linux equivalent of Local Cache on Windows. The post has been around since 2021, so I have no idea why it isn’t better known on the web. I tried it straight away – set WEBSITES_ENABLE_APP_CACHE=true. After enabling it, my web app went back to the default state, which was expected, because the post says you need to redeploy after changing the setting.

Then I redeployed the app (via Azure DevOps – zip deploy) and it came back up as normal. Except – the homepage now returns in 500-600ms! That is even faster than it was on the previous hosting, and 4x faster than before adding the setting.

The only issue I’ve noticed so far is that the nginx configuration I mentioned at the start of this post seems to have been lost. I will SSH in and investigate.

It turns out I am now logged into an “empty” instance, with a vanilla nginx config in /etc/nginx/sites-enabled/default and only the web files under /home/site/wwwroot.

Using nano, I added try_files $uri $uri/ /index.php?$args; into the location / block, then ran service nginx restart, and sure enough the 404 issue was fixed. But I assumed it would return on the next deploy/restart – and sure enough, after a restart I was back to 404 errors.

It seems the Startup Command does not run when App Cache is enabled… which is rather annoying, as this is so close to a solution. Then I found this great post about deploying Drupal using App Cache, from only last month! The trick is to put your custom nginx default file inside the repo (the opposite of the usual approach!), so that it gets deployed into wwwroot within the container. I assume that means my Startup Command was probably failing because the file didn’t exist at /home/site/default! Yep, it works, with this startup command:

cp /home/site/wwwroot/nginx-default /etc/nginx/sites-enabled/default; service nginx restart

During my investigation I noted down these alternatives to investigate:

  • App Cache vs Run from Package – can the latter also solve this (if it even supports Linux)?
  • Check whether the disk is really read-only with App Cache (as stated here), and if so, whether this causes errors. UPDATE – I don’t think it is read-only, as I am able to upload media successfully, which is stored temporarily on disk.
  • App Cache caveat – you can’t rely on local storage, so you must avoid doing “live” plugin updates etc., and use Azure Blob Storage or S3 for media file storage.
  • Can we achieve this without App Cache, by setting a custom site root and a startup script that copies the files out of /home?
  • Try setting WEBSITES_ENABLE_APP_SERVICE_STORAGE=false to force /home not to use shared storage.

Strangely, that last setting also appears to work, without App Cache. But when trying to replicate this in another environment, I could not get the performance improvement unless I enabled and then disabled App Cache! I think there is something strange going on. When I diff the environment variables, the only difference is that APPSVC_RUN_ZIP changes to ‘true’ once you do this. So I think this is doing something behind the scenes which also makes it fast (sounds a bit like ‘run from package’, doesn’t it…), although I can’t find any documentation about this.

While testing this setup I found I could no longer upload media files, and got an error that the disk is read-only: “Unable to create directory wp-content/uploads/2023/11. Is its parent directory writable by the server?” So I think it is a non-starter anyway; given that App Cache is at least mostly documented, I will use that approach.

Azure Private DNS will break your network if used incorrectly

I am not a networking expert, but in configuring some Azure cloud services I came across the need to use Azure Private DNS to create a private DNS zone for an App Service Environment v3 (an isolated web app that sits within a vnet).

This is needed so that the web apps hosted in the ASE can be reached from elsewhere in the virtual network (e.g. by other web apps in the ASE) without hosts file entries (which are impossible to create on PaaS services).

Then I realised that my web app also needs to connect to other existing web apps which are hosted internally. These apps have been set up on Windows VMs using IIS, with a proper HTTPS binding on their proper URL (public traffic is routed via an Application Gateway). In this particular restricted environment, outgoing HTTP/HTTPS requests to the internet are restricted to pre-approved domains, hence the existing VMs have hosts entries so that they can access each other via their internal IPs.

My plan was: Create an Azure Private DNS zone and then create A records matching the app URLs with the internal IP of each app.

However, it turns out that once you create a Private DNS zone, no public records under that domain are resolvable from resources within your vnet any more. You would have to duplicate all the public DNS records into the private zone to keep the rest of the domain working.

I did read about Split-Horizon / Split-Brain DNS, and I was hoping that if a DNS entry isn’t resolved by the private zone it would be recursively resolved by public DNS, but that isn’t the case. I believe you could use Azure DNS Private Resolver to do this, which is much cleverer, but a much bigger thing to add to a vnet.

Here’s an example.

From a VM in the vnet in question I can do some DNS queries for my company’s public domain:
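The queries were along these lines (PowerShell’s Resolve-DnsName, run on the VM; the original screenshot showed the actual responses):

# Before the private zone exists, both resolve via public DNS as normal
Resolve-DnsName greatstate.co
Resolve-DnsName www.greatstate.co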

Then, in the Azure Portal (in an account/subscription that has nothing to do with Great State, but has a virtual network and VMs already) I can create a Private DNS zone for greatstate.co – just like the documentation about split-horizon functionality suggests doing for contoso.com:

I have added a test.greatstate.co DNS record so I can be sure it is working.

I then link this up to my vnet from the Virtual network links tab.
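In script form, the whole setup amounts to something like this (a sketch using the Az.PrivateDns PowerShell module; the resource names and IP address are placeholders):

# Create the private zone – once linked, it shadows the public zone for the vnet
New-AzPrivateDnsZone -ResourceGroupName 'my-rg' -Name 'greatstate.co'

# Add the test A record
New-AzPrivateDnsRecordSet -ResourceGroupName 'my-rg' -ZoneName 'greatstate.co' `
    -Name 'test' -RecordType A -Ttl 3600 `
    -PrivateDnsRecords (New-AzPrivateDnsRecordConfig -IPv4Address '10.0.0.4')

# Link the zone to the vnet (resolution only, no auto-registration)
$vnet = Get-AzVirtualNetwork -ResourceGroupName 'my-rg' -Name 'my-vnet'
New-AzPrivateDnsVirtualNetworkLink -ResourceGroupName 'my-rg' -ZoneName 'greatstate.co' `
    -Name 'my-vnet-link' -VirtualNetworkId $vnet.Id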

Then I go back to my VM, flush the DNS cache, and I can query the test.greatstate.co DNS record. Unfortunately, I can no longer resolve greatstate.co or www.greatstate.co, so I have totally broken DNS for the entire domain within the entire virtual network!

This is such a dangerous thing that I’m surprised there isn’t a warning in the docs, or in the Azure Portal when creating a Private DNS zone.

This page does suggest that it can have consequences, since Microsoft has blocked their own domains from being used as Private DNS zones!

Reboot Node if Needed in Azure Automation State configuration (DSC)

In Azure Automation State Configuration, when onboarding a server you have the choice of whether to “Reboot Node if Needed”, which many tutorials will suggest you should tick.

However, you might wonder about production servers – do you really want them rebooting themselves?

Of course, you could onboard a server with “Reboot Node if Needed” ticked for the initial setup, then re-onboard it with the box unticked so that it doesn’t get rebooted once in production. But what if you forgot to change it? For this reason I’d rather onboard prod servers without automatic reboot in the first place.

So I wondered: what actually happens if you don’t tick the box but the node requires a reboot? Is it still possible to apply configurations that need a reboot by rebooting manually? And how would you know when a reboot is required?

What I found was the node gets “stuck” in “In progress” state:

If I click into the node then (after some time has passed) I see that no status is shown even though reports are coming through (normally these would show Compliant or Failed):

If I click on either of these then it says:

So there are no failures logged; it is simply unable to say whether the configuration is fully applied or not.

On the server I tried to interrogate the DSC status:
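These are the kinds of commands I mean (a sketch – as you’ll see below, even here the output doesn’t exactly shout that a reboot is needed):

# Ask the Local Configuration Manager what state it is in –
# a pending reboot can show up as LCMState = 'PendingReboot'
(Get-DscLocalConfigurationManager).LCMState

# The last run report includes a RebootRequested flag
Get-DscConfigurationStatus | Select-Object Status, RebootRequested, ResourcesNotInDesiredState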

It isn’t totally clear that the node requires rebooting. But if you are aware that this is the reason the configuration “has not converged yet” then it is easy enough to manually reboot the node.

After reboot, the node goes to (in my case) Failed or Compliant like usual.

So it is fine not to allow reboots, but keep in mind that the node will show In Progress indefinitely until it is rebooted, if the configuration requires a reboot. At least this is the case with my example, which is waiting for the WebAdministration role to be installed.

Deploying Sitecore Unicorn Items to Azure Web App using VSTS Build and Release

For years we had been using Unicorn to manage and deploy our developer-owned Sitecore items using TeamCity. Nowadays we use VSTS for source code, build and release management, and Microsoft Azure for web app hosting. We struggled to set up a process akin to our TeamCity deployment process, and Darren Guy’s epic blog series only covers Octopus Deploy.

I won’t go into setting up Unicorn, as this is covered in the documentation. I will assume you have it saving serialized files into source control, and will just outline the solution we have used to deploy the items.

We decided to put the Unicorn files into /App_Data/Unicorn, so they sit beneath the webroot and can be deployed to, but are protected from public access.

1. Configure the build to include Unicorn files

We used a Copy Files step to copy the Unicorn folder from source control into the artefact staging directory, so the files are available at release time. We copy it into a new folder, “UnicornWWW”, under the website path that we want to deploy the items to (App_Data/Unicorn).

We also zip this folder up into a UnicornWWW.zip, which contains the path within the website inside the zip, and we delete the temporary folder as we don’t need it any more.

Now the UnicornWWW.zip is a build artefact available at release time. You can check by going to the completed build summary page and checking the Artefacts tab, where the file should be listed.

2. Deploy the Unicorn files

This was tricky, as we want not only to deploy the new files but also to delete any old ones that have been removed from source control. We already had an Azure App Service Deploy step to deploy our web deploy package to the web app. What we did was add a post deployment script to this step which removes the Unicorn folder:
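The script itself is nothing clever; in PowerShell terms it is essentially this (a sketch – the post deployment action runs on the web app itself, where $env:HOME points at the app’s home directory):

# Delete the previously deployed Unicorn items so that items removed from
# source control don't linger on the web app
Remove-Item -Recurse -Force "$env:HOME\site\wwwroot\App_Data\Unicorn" -ErrorAction SilentlyContinue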


Then we have a follow-up deployment which deploys the UnicornWWW.zip directly to the web app. As it has the App_Data/Unicorn folder inside it, the Unicorn files are deployed to the right place, and the previous versions have already been deleted beforehand.

I believe it should be possible to do a manual msdeploy call inside a PowerShell script, where you could target a subfolder and do a ‘sync’ to delete existing files, much like we do with TeamCity. We haven’t got that working yet, though, as we’re not sure how the publish credentials would be obtained from the service principal setup that connects VSTS and Azure – it “just works” when you use the Azure App Service Deploy task, so for now this is what we are sticking with.
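For anyone who wants to attempt it, the shape of the call would be something like this (an untested sketch – the app name is a placeholder, and $kuduPassword would have to come from the app’s publishing credentials, which is exactly the part we haven’t solved):

# Sync just the Unicorn subfolder, deleting remote files that no longer exist locally
& "C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" -verb:sync `
  -source:contentPath="$env:SYSTEM_DEFAULTWORKINGDIRECTORY\drop\UnicornWWW\App_Data\Unicorn" `
  -dest:contentPath="myapp/App_Data/Unicorn",ComputerName="https://myapp.scm.azurewebsites.net/msdeploy.axd?site=myapp",UserName='$myapp',Password=$kuduPassword,AuthType='Basic'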

3. Sync the Unicorn files

First we need a copy of the Unicorn PowerShell API scripts available to the release agent. To do this, we go back to the build and add a step to copy the Unicorn Tools PSAPI folder from the NuGet package location into the build artefact folder:


We then use a PowerShell step, running on the release agent, which triggers a Unicorn sync using the Unicorn PowerShell API. This is based on the sample.ps1 from the PSAPI folder. Here we specify the environment URL + /unicorn.aspx and the shared secret that is configured as per Unicorn.UI.config.
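Stripped down, the script looks something like this (a sketch based on sample.ps1; the drop path and variable names are our own):

# Load the Unicorn PowerShell API module from the build artefacts
Import-Module "$env:SYSTEM_DEFAULTWORKINGDIRECTORY\drop\PSAPI\Unicorn.psm1"

# Trigger the sync – the shared secret must match the one in Unicorn.UI.config
Sync-Unicorn -ControlPanelUrl "$($env:TargetUrl)/unicorn.aspx" -SharedSecret $env:UnicornSecret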


The URL and secret can be made environment variables and then you have a fully automated deployment of Unicorn items to your Azure web app environment!

Using environment variables for configuration with VSTS build and release

29/09/17 – Updated to cover the new build/release editor. Screenshots still show the old editor.

We are using Visual Studio Team Services online for source code, task and release management.

For build and deployment we typically create multiple configurations in the project, one per environment (e.g. DevCMS, DevWWW, etc.), and use config transforms to make changes to the config. While this is easy to set up and maintain, there are a few downsides:

  • Potentially sensitive configuration settings are checked into source control (passwords, connection strings)
  • A separate build is required for each environment

Azure websites do allow you to override app settings and connection strings at runtime via the portal, but this doesn’t help with other settings or files. Better would be to use environment variables within the release process. VSTS releases allow just this: in the release definition you can set up variables and set them per environment:

The question is how to get the variables set here to update your config files at deploy time. One way I have found to do this is as follows.

Step 1. Add a Parameters.xml file to the website project.

This file defines the values that need to be updated. Tokens can be replaced in any text file, but XML files have better support, using XPath to target the attributes/values to replace.

<?xml version="1.0" encoding="utf-8" ?>
<parameters>
  <parameter name="DemoEnv" description="DemoEnv" defaultValue="#{DemoEnv}#" tags="">
    <parameterEntry kind="XmlFile"
                    scope="obj\\Release\\Package\\PackageTmp\\Web\.config$"
                    match="/configuration/appSettings/add[@key='DemoEnv']/@value" />
  </parameter>
</parameters>

For the defaultValues, set the value to a token that can be replaced during release. In this example I am using the default token format used by the Replace Tokens utility described later.

I have to credit this post from Colin’s ALM Corner for explaining this; although it is about TFS, the first half still applies.

http://colinsalmcorner.com/post/webdeploy-and-release-management–the-proper-way

Note that connection strings are special: their values are replaced with tokens automatically during package creation, and the original value is placed into the SetParameters.xml file. So to get your own token in there, you either need to add a config transform that replaces your local connection string with a token, or follow the link above and set up a publish profile that defines a connection string token to be used in the package.

Step 2. Configure the build to generate a web deploy package

You can achieve this by using a build task with MSBuild arguments as follows:

/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:PackageLocation="$(build.artifactstagingdirectory)\\"

On build it will generate a zip file and a SetParameters.xml file which has just the values from the Parameters.xml that you created earlier:

<?xml version="1.0" encoding="utf-8"?>
<parameters>
  <setParameter name="DemoEnv" value="#{DemoEnv}#" />
</parameters>

This is ideal to replace at release time with a simple token replace task.

Step 3. Set up release definition

In the release definition, add an environment for each of your deployment environments.

Each environment has a list of tasks. First I have added a Replace Tokens task using this add-on from the marketplace.

This is configured with the root directory set to the drop folder from the build, and the target files set to the name of the SetParameters.xml file output by the build.

This will replace any variables defined with the values set in the release variables, which you can edit via Configure variables (old editor only):

Or on the Variables tab as shown at the top of this post (old and new editor – for old editor, switch using the dropdown at the top right from Release variables to Environment variables).

Step 4. Set up deployment step

The deploy step for us is a normal Azure website deployment which uses web deploy. This defines the subscription, web app, slot, and web deploy package.

The important thing to set is the SetParameters file additional option which needs to point to the SetParameters.xml file that you did the token replacement on:

NOTE: You have to tick ‘Publish using Web Deploy’ for the SetParameters box to appear.

If you have issues with the token replacement, you can use a command-line “type” command to output the replaced version of the SetParameters.xml file to the release log, so you can check it has been replaced properly.
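For example, a PowerShell step like this (the drop path and file name are placeholders) dumps the file into the log:

# Print the token-replaced SetParameters.xml so you can eyeball the values
Get-Content "$env:SYSTEM_DEFAULTWORKINGDIRECTORY\drop\MySite.SetParameters.xml"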

References:

http://www.colinsalmcorner.com/post/config-per-environment-vs-tokenization-in-release-management

http://colinsalmcorner.com/post/webdeploy-and-release-management–the-proper-way

One thing to note: I found that if you change the environment variables and trigger a release, it won’t overwrite the web.config – it requires a new build. I think this is because web deploy uses the file date/size from the package to compare against the site, and these haven’t changed, so it doesn’t replace the file. If anyone knows a way around this, let me know!

How to get Azure web app deployment to not delete existing files

If you’ve tried the ‘new scriptable build system’ on Visual Studio Online and used the wizard to set up an Azure Website build/deployment, you might find that by default it deletes any existing files/folders upon deployment. If you rely on uploaded App_Data files etc. being kept in the wwwroot, you might want to disable this.

So if you set up a build using this button:

Choose Azure Website:

Then, when configuring your deploy build step, make sure to add -doNotDelete into the Additional Arguments:

Otherwise it will delete existing files!