nginx and PHP on Azure Linux Web Apps using custom docker containers

This is Part I of a series on hosting PHP apps using custom Docker containers.

Linux Web Apps on Azure have been in public preview for a while, and we finally decided to give them a spin by moving a WordPress site over. Linux Web Apps run on Docker containers and let you either use one of the existing Microsoft “blessed” container images for .NET Core, LAMP, or Node.js, or pull a custom image from a Docker registry. App Service also supports diagnostics via the Kudu Console, web SSH (if included in the Docker container), and continuous deployment using KuduSync.

Given that the built-in PHP container uses Apache and we prefer nginx/PHP-FPM, we decided to create our own custom container. Docker containers for App Service have certain prerequisites that need to be taken care of; some of these are:

  1. Volume linking is not supported, so containers cannot be used for persistent storage. Linux Web Apps (like their Windows counterparts) have an SMB share mounted under /home, so all persistent storage (app files, assets, logs, etc.) has to go there.

  2. SSH is only supported with a predefined port (2222) and password. This is generally not a security issue given that access is firewalled and can only be done via the Kudu Console.

  3. Settings defined in the Web App's Application Settings are passed on to the container as environment variables, so all configurable settings should go there. Any change to a setting causes the container to restart.

I believe the key to DevOps and stable environments is to make the containers as non-volatile (reproducible) as possible by always installing a specific version of a package: instead of just doing $ apt-get install xxx, one should do $ apt-get install xxx=version.
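
For illustration (the package and version string below are just examples), one can list the candidate versions available from the configured repos and then pin one explicitly:

$ apt-cache madison nginx                      # list the versions the configured repos offer
$ apt-get install -y nginx=1.13.8-1~xenial     # pin an exact version (example only; use what madison shows)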

Also, the packages provided by distros are generally outdated (e.g., the Node.js version provided by Xenial is 4.2), so it is better to either compile them from source or pull them from the official maintainer's repo. The Dockerfile is fairly simple; it just does the following (a rough sketch appears after the list):

  • installs nginx from the nginx repo

  • installs a specific version of PHP-FPM from the official repo

  • installs OpenSSH

  • installs the PHP modules/extensions we need

  • creates symlinks from the log folders to /home/LogFiles

  • executes a shell file as the Docker CMD, which starts FPM and SSH, tweaks the conf files based on environment variables, and finally starts nginx in the foreground
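
A stripped-down sketch of such a Dockerfile, assuming Ubuntu Xenial and PHP 7.0 (the paths, package versions and the sshd_config/init.sh files are illustrative; our actual image differs in the details):

FROM ubuntu:16.04

# Add the nginx.org repo so a current nginx can be pinned (repo signing key import omitted for brevity)
RUN echo "deb http://nginx.org/packages/mainline/ubuntu/ xenial nginx" > /etc/apt/sources.list.d/nginx.list

# Pin explicit versions so rebuilds stay reproducible (version strings here are examples only)
RUN apt-get update && apt-get install -y --no-install-recommends \
        nginx=1.13.8-1~xenial \
        php7.0-fpm php7.0-mysql php7.0-curl php7.0-xml \
        openssh-server \
    && rm -rf /var/lib/apt/lists/*

# Only /home is persisted by App Service, so point the logs there
RUN rm -rf /var/log/nginx && ln -s /home/LogFiles /var/log/nginx

# App Service expects SSH on port 2222 with the root password "Docker!"
COPY sshd_config /etc/ssh/sshd_config
RUN echo "root:Docker!" | chpasswd

COPY init.sh /usr/local/bin/init.sh
EXPOSE 80 2222

# init.sh starts php-fpm and sshd, rewrites the conf files from environment variables,
# and finally runs nginx in the foreground
CMD ["/usr/local/bin/init.sh"]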

We could have also installed MariaDB or MySQL within the container (with a symlink between /home/data and the DB data_dir) but decided to give Azure Database for MySQL a try as well. It is also in preview and only provides two service tiers for now. The biggest issue is connection security: there is no “Allow Azure Services” checkbox, so you either have to (a) allow all endpoints to connect, or (b) figure out the outbound IP addresses that the App Service uses (they are listed under App Service > Properties) and allow access to only those. These IPs are not permanent and could change if the App Service is restarted, so one option is to allow an IP address range based on what you see under the App Service properties, which is what we did.
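
For reference, such a firewall rule can also be scripted; a rough sketch with the Azure CLI (the resource group, server name and IP range below are made up, and the exact command may differ for the preview service):

$ az mysql server firewall-rule create \
    --resource-group my-resource-group \
    --server-name my-mysql-server \
    --name allow-appservice-range \
    --start-ip-address 13.75.0.0 \
    --end-ip-address 13.75.255.255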

So we had the DB running on Azure MySQL, the App Service container running nginx proxying PHP-FPM, and the DB connection string set up in Application Settings, and though we hadn't integrated the front-end pipeline yet, we decided to test the setup so far. We hooked up our git repo using Continuous Deployment for App Service, which triggers a Kudu deployment on any push to a particular branch. Kudu is responsible for keeping the website in sync with the repo; it makes a smart guess based on the type of repo and triggers an npm install for a Node site or a composer install for a PHP site after deployment. Since ours was a PHP site with no composer dependencies, all Kudu had to do was sync the files on a commit, which was a breeze. Navigating to our *.azurewebsites.net URL gave us the not-so-great “Could not connect to the database” exception. A bit of debugging (using phpinfo) helped us figure out that the issue was FPM not having access to the environment variables; even adding

clear_env = no

in www.conf didn't help. Maybe it was an issue with Xenial's FPM package, or something we weren't doing right; in any case, we decided to copy all *required* environment variables into www.conf at container startup using another shell file. Once that was done, nginx/PHP worked great!
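
A minimal sketch of that startup step, assuming bash and Xenial's PHP 7.0 pool config path (the variable names are made up):

# append the settings PHP needs as env[] entries in the FPM pool config
for var in DB_HOST DB_NAME DB_USER DB_PASSWORD; do
    echo "env[$var] = ${!var}" >> /etc/php/7.0/fpm/pool.d/www.conf
done
service php7.0-fpm start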

But this was only the first part of the puzzle: this PHP site uses a Bower/Gulp/npm front-end pipeline, with all files combined, minified, uglified, etc. as part of the gulp build process. Front-end dependencies are managed using npm and bower (yeah, it's still on Bower and not Yarn). So even though we had a working backend, our frontend was pretty much broken since there weren't any dist/build front-end files yet. More on that, and on our (not-so-great) experiences with Kudu, in Part II.

In case you are interested in the Docker image used, below are the repo links:

GitHub | Docker Hub

#azure #docker #linux-web-apps #container #php #nginx #devops #kudu
