
How to Use a Template to Use Docker with PHP

by Alex Hernández, January 3rd, 2022

Too Long; Didn't Read

A few years ago, when Docker was emerging and very few people were using it, I started to work at a company where everything had been built on Docker. Over the years, I’ve learned how to set up Docker in a way that’s easy to use without needing to know every detail (which is interesting anyway). Today I want to give you an easy-to-use template for using Docker with PHP, explained so you can understand how it works in just 10 minutes. The services are defined in a file called docker-compose.yaml.

Several years ago, when Docker was just emerging and very few people were using it (most of us relied on Vagrant as a local development environment), I started working at a company where everything had been built on Docker.


I had never used it before. I was very comfortable with Vagrant, and I had little desire to learn something new that I didn’t think would bring me anything better. As we all know today, I was wrong.


But anyway, there I was and I needed to learn. So I opened the first project I was going to work on and started the usual archaeology to understand how Docker was going to help me run it.


After two hours without a clue, I asked one of my colleagues. Given enough time, I got used to this new container technology I had never used before, but it probably took longer than it should have… because the project was not friendly at all.


Over the years, I’ve learned how to set up Docker in a way that’s easy to use without needing to know every detail (which is interesting anyway). Today I want to give you an easy-to-use template for using Docker with PHP, explained so you can understand how it works in just 10 minutes.


Let’s go!

The file structure

First things first, we need to understand how files are distributed in the workspace folder and, more importantly, where we should put our project files.


There are three important folders: app, bin, and docker.


  • app: where your project files go. If you are using Symfony, Laravel, or another similar framework, the contents of its root folder should be inside the app folder.

  • bin: some handy scripts to run Docker commands much faster.

  • docker: where the Docker setup lives. I’ll explain more later.


In the app folder, you will find two more: bin and public. These folders mimic how frameworks usually distribute their console and server entry points, and you can change this with each framework’s own tools. I have included two fake files, index.php and console.php, to test each option.
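
To recap, the workspace looks roughly like this (a sketch based on the description above; the exact names of the helper scripts in bin are not spelled out in this post, so check the repo for the real ones):


.
├── app/                   # your project files (here, the fake app)
│   ├── bin/
│   │   └── console.php    # fake console entry point
│   └── public/
│       └── index.php      # fake web entry point
├── bin/                   # helper scripts (up.sh, stop.sh, and a console wrapper)
└── docker/
    ├── docker-compose.yaml
    └── host.conf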

Configuring Docker

Docker provides a CLI tool to build and run container images separately, one at a time. You know, docker build -t tag . or docker run tag.
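
For reference, with the plain Docker CLI that would look something like this (the image tag is just an example):


# Build an image from the Dockerfile in the current directory and tag it
docker build -t my-php-app .

# Run a container from that image
docker run my-php-app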


If you want to run more than one container (and we want to run php-fpm and nginx at the same time), the best option, at least in a local environment, is docker-compose. With docker-compose you just declare what you need and then run it. This plan is set up in a file called docker-compose.yaml. This is our docker-compose.yaml file (docker/docker-compose.yaml):


version: "3.7"
services:
  console:
    image: php:8.1.1-cli-alpine3.15
    working_dir: /var/www
    volumes:
      - ../app:/var/www:delegated
  php-fpm:
    image: php:8.1.1-fpm-alpine3.15
    working_dir: /var/www
    volumes:
      - ../app:/var/www:delegated
  nginx:
    image: nginx:1.21.5-alpine
    working_dir: /var/www
    volumes:
      - ../app:/var/www:delegated
      - ./host.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "80:80"


Let’s analyze the file:


  • version: the version of the Compose file format you want to use. You can find more details in Docker’s Compose file reference.
  • services: here you define the services you need.
  • console: the service for php-cli, used for running the console. It defines the Docker image you want to use, the folders you want to share with the container (more below), and the working dir (the default folder).
  • php-fpm: the service for php-fpm. Again: the image, the shared folders, and the working dir.
  • nginx: the service for nginx: the image, the shared folders, and the working dir. It also defines which ports you want to map between the host and the container (port 80 on the host points to port 80 in the container).
  • shared folders: we share the app folder in all the services (yes, nginx needs it too), and we share the host.conf file to serve our app/public/index.php file. You can see the host file below:


server {
    listen 80 default_server;
    server_name _;
    root /var/www/public;

    location / {
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/index\.php(/|$) {
        include fastcgi_params;
        fastcgi_pass php-fpm:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        fastcgi_param SCRIPT_FILENAME /var/www/public$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        internal;
    }

    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
}


As the intention of this post is not to explain how nginx works, I will just point out the fastcgi_pass php-fpm:9000 part: it tells nginx to pass PHP requests to the php-fpm service on port 9000.


You may be wondering how networking works between the two services that need to talk to each other: php-fpm and nginx.


The answer is actually very simple: if you don’t define any networking, Docker creates a default network, puts all the services inside it, and they all have visibility of each other. Which happens to be exactly what we need.
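
If you are curious, you can check this once the services are up. A minimal sketch, assuming docker-compose’s default naming (the network name is derived from the folder that contains docker-compose.yaml, so yours may differ):


# List the networks; docker-compose creates one named after the project folder
# (here "docker"), typically something like docker_default
docker network ls

# Inside that network, every service can reach the others by its service name,
# which is why nginx can simply use "php-fpm" as a hostname.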


So now we have defined which services we want and how they are built (we use Docker images from Docker Hub). Now, how can we launch these services in order to run the console?

Running the console

To run the console, we would have to type something like docker-compose run followed by the service and its params. But typing this every time we want to run the console is the opposite of productivity, so I have created a simple bash file:


#!/usr/bin/env bash

SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )

docker-compose -f $SCRIPT_DIR/../docker/docker-compose.yaml pull console
docker-compose -f $SCRIPT_DIR/../docker/docker-compose.yaml run console php bin/console.php "$@"


Line 1 is the usual shebang, and line 3 resolves the folder where the script is saved. This is handy when you want to run the script from somewhere other than the project root.


Lines 5 and 6 are the ones that actually run docker-compose: line 5 pulls the console image, and line 6 runs the console service.


We provide the path to the docker-compose.yaml file, then run, then the service we want to run (in our case, console), then the actual command, which is php bin/console.php followed by whatever params we pass in ("$@").


So we can just run the console script in bin with our params. When we do, Docker will pull the php-cli image from Docker Hub and then execute app/bin/console.php, outputting This is the console, which is what we have built as a fake console.
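
For example, a quick run might look like this (I’m assuming the wrapper is called bin/console.sh here; use whatever name the script has in the repo):


# Pulls the php-cli image if needed and runs the fake console
bin/console.sh some-param

# Expected output from the fake app/bin/console.php:
# This is the console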

Serving the application

What if we want to serve the application with nginx and php-fpm? We have another handy bash file:


#!/usr/bin/env bash

SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )

docker-compose -f $SCRIPT_DIR/../docker/docker-compose.yaml pull php-fpm nginx
docker-compose -f $SCRIPT_DIR/../docker/docker-compose.yaml up -d php-fpm nginx


Lines 5 and 6 do the trick.


First of all, we pull the images. Then, again, we provide the path to the docker-compose.yaml file, then up -d (the -d param detaches the process), then the services we want to run: php-fpm and nginx.


And we can just run bin/up.sh to do the trick.


The images are pulled and run, and we can go to localhost and see the Hello World!, which, again, is what we have faked.
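
Put together, a quick check from the command line might look like this (the exact output depends on the fake index.php):


# Start php-fpm and nginx in the background
bin/up.sh

# Hit the app through nginx on port 80
curl http://localhost/
# Hello World!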


If you want to stop the services, run bin/stop.sh. You can see the stop script below, and I don’t think I need to explain it, as it is quite straightforward:


#!/usr/bin/env bash

SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )

docker-compose -f $SCRIPT_DIR/../docker/docker-compose.yaml down

Seeing the logs

If you want to see the logs, you can just run docker logs -f container, where -f tails the log. To find the container ID, you can just run docker ps, which outputs the list of containers; the ID is the first column.


But how do logs work in this setup?


Well, basically, everything the containers write to stderr and stdout goes to the logs. That’s it.
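
A minimal sketch (the container name is a placeholder; use whatever docker ps shows for the nginx or php-fpm service):


# List the running containers; the first column is the container ID
docker ps

# Tail the logs of one of them, e.g. the nginx container
docker logs -f <container-id-or-name>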

Where to find the template

I have pushed my template to a public repo on GitHub, where you can fork or download it.

Final thoughts

Infrastructure can be tough. It is very important to build easy-to-use frameworks and tooling so developers can be productive from minute zero.


With this template, for instance, any developer can run the project just by running two different commands (console and server) without even knowing how things work behind the scenes.


Then, with enough time and confidence, learning the details is easier.
