I've used the WAMP server for as long as I care to remember. Recently I decided it was time to move on from WAMP to using Docker. It seems every developer nowadays is using Docker, and of course it makes sense really.
The advantages speak volumes and underscore why the WAMP (and XAMPP) server has no place in the modern PHP developer's toolbox. With Docker you can simply download images of different PHP versions, or even different databases, and build a new image to develop with in isolation from your other application images.
Anyway, here is how I built my PHP Laravel setup with a MySQL database and PHPMyAdmin. I'm actually happy for once not to be using the Apache webserver too. First of all, the directory structure I work with (I start from C:\dev\, by the way):
./laravelapp
+-.docker
+---mysql
+------my.cnf
+---nginx
+------app.conf
+---php
+------local.ini
+---pma
+------config.inc.php
+-db
+-src
+-docker-compose.yml
+-Dockerfile
Here, all the configuration is found under one root, .docker, bar the obvious Dockerfile and docker-compose.yml files. The Laravel application cloned (from GitHub) is found under the src directory, the database under db, and that's it basically. The docker compose structure makes provision for configuration of PHP, MySQL and the NGINX server, but whether or not you use them is up to you. As you'll see, my configuration is very sparse.
Moving on, we have the Dockerfile, which is used to build the application image.
FROM php:8.2-fpm
# Get argument defined in docker-compose file
ARG user
ARG uid
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
curl \
libpng-dev \
libonig-dev \
libxml2-dev \
zip \
unzip \
&& docker-php-ext-install pdo_mysql \
&& docker-php-ext-install mbstring \
&& docker-php-ext-install exif \
&& docker-php-ext-install pcntl \
&& docker-php-ext-install bcmath \
&& docker-php-ext-install gd \
&& docker-php-source delete
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Get latest Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# Create system user to run Composer and Artisan Commands
RUN useradd -G www-data,root -u $uid -d /home/$user $user
RUN mkdir -p /home/$user/.composer && \
chown -R $user:$user /home/$user
# Set working directory
WORKDIR /var/www
USER $user
I can't say I fully understand everything that's going on under the hood, but the more you mess about with it (and break something), the more you learn as you go along. I may do another blog post in the future covering the Dockerfile in greater detail. In any case, copy this file and adjust it to suit your own situation, such as installing other PHP extensions as needed. The most important parts to understand, I guess, are those dealing with Composer and file ownership.
Notice the PHP image I use doesn't come with the Apache webserver included? You do get Docker images with PHP and Apache bundled together (php:8.2-apache, for instance), but the Dockerfile configuration would differ as well. In my opinion it's better to go with a bare-minimum PHP image and use another image for your web server.
Following on, we are interested in the configuration files for PHP and so on. In no particular order, they are:
// found at /.docker/php/local.ini
file_uploads = On
memory_limit = 64M
upload_max_filesize = 64M
post_max_size = 64M
max_execution_time = 900
Put what you want in your configuration. I adjusted mine to accommodate greater file upload limits and execution time, and that's it really. Do a phpinfo(); to see what's already set in the PHP image and adjust as you please (either in the configuration or the Dockerfile).
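If you want to confirm the overrides are actually picked up once the containers are running, a quick sketch from the terminal works too, assuming the php service name used in the docker-compose.yml further down:
docker-compose exec php sh -c "php -i | grep -E 'memory_limit|upload_max_filesize|post_max_size|max_execution_time'"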
// found at /.docker/mysql/my.cnf
[mysqld]
general_log = 1
general_log_file = /var/lib/mysql/general.log
max_allowed_packet=512M
When using PHPMyAdmin you will run into issues uploading large .sql files, so do consider that. The best solution is to set the file upload limit in the docker compose file and not bother yourself with the database configuration too much. You can confirm the packet size took effect with the quick check below, then it's on to the server.
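A minimal check once everything is up, again assuming the mysql service name from the docker-compose.yml further down:
docker-compose exec mysql mysql -u root -p -e "SHOW VARIABLES LIKE 'max_allowed_packet';"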
// found at /.docker/nginx/app.conf
server {
    listen 80;
    index index.php index.html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/public;

    # set to 0 will disable checks, as per the documentation (not recommended)
    # @see http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size
    client_max_body_size 64M;
    client_body_buffer_size 64M;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        # php is the name of the service specified in the docker-compose.yml file,
        # typically built from the Dockerfile
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }
}
The configuration for the NGINX server is pretty basic. I've far more experience with Apache and its virtual hosts and so on, so I guess it'll take time for me to adjust to how NGINX works internally. Two points to note are: a) no virtual hosting (for custom domain names) and b) no https:// protocol. For local development purposes I guess those two things aren't important, but some cloud services are not available to http:// domains (local or otherwise).
One thing you must notice is the fastcgi_pass directive though. It must match the name of your PHP service in your docker-compose.yml file. I've commented on that point in the configuration. And unlike other developers who've blogged about their Docker experience, I have ladled lots of comments into my docker compose file. Very helpful I am.
// found at /.docker/pma/config.inc.php
$cfg['ShowPhpInfo'] = true;
$cfg['ExecTimeLimit'] = 0;
The bulk of your Docker build comes from the docker-compose.yml file, which determines the functionality you end up with and how capable your PHP Laravel development setup will be. The setup I am happy with is below. Some words of wisdom first: name your containers and stick to a naming convention so no two containers are named the same. I go with the application (or project) name followed by the service name for naming my containers.
version: "3"
services:
  php:
    build:
      args:
        user: sysuser
        # on the command line, enter id to get the uid of user
        uid: 1001
      context: .
      dockerfile: Dockerfile
    # image name is what you want it to be, as coming from result of what is in the Dockerfile
    image: application
    # use the container name to access the bash, ie docker exec -it laravelphp bash
    # use the service to access the bash, ie docker-compose exec -it php bash
    # likewise for other containers or services
    container_name: laravelphp
    restart: unless-stopped
    working_dir: /var/www/
    volumes:
      - ./src:/var/www
      - ./.docker/php/local.ini:/usr/local/etc/php/conf.d/local.ini
    networks:
      - laravelnetwork
    depends_on:
      - mysql
    tty: true
  mysql:
    image: mysql:latest
    container_name: laravelmysql
    restart: unless-stopped
    tty: true
    ports:
      # host:container
      - 3306:3306
    environment:
      MYSQL_DATABASE: 'laraveldb'
      MYSQL_ROOT_PASSWORD: 'password'
    volumes:
      # directory structure is relative to this docker-compose.yml file
      # ./db must be a directory that is read and write for user
      - ./db:/var/lib/mysql
      - ./.docker/mysql/my.cnf:/etc/mysql/my.cnf
    networks:
      - laravelnetwork
  phpmyadmin:
    image: phpmyadmin:latest
    container_name: laravelpma
    ports:
      # access http://127.0.0.1:80
      - 80:80
    environment:
      # use either the mysql database service name, or container name for the phpmyadmin host
      # fiddle around to see which one works for you
      PMA_HOST: mysql
      PMA_PORT: 3306
      MYSQL_USERNAME: 'laraveluser'
      MYSQL_ROOT_PASSWORD: 'password'
      # change the upload limit for PHPMyAdmin
      UPLOAD_LIMIT: 128M
    depends_on:
      - mysql
    networks:
      - laravelnetwork
  nginx:
    image: nginx:latest
    container_name: laravelnginx
    restart: unless-stopped
    tty: true
    ports:
      - 8080:80
    volumes:
      - ./src:/var/www
      - ./.docker/nginx:/etc/nginx/conf.d
    networks:
      - laravelnetwork
    depends_on:
      - mysql
      - php
networks:
  # a network allows the various containers to communicate with each other
  laravelnetwork:
    driver: bridge
volumes:
  db:
    driver: local
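Before building anything, it's worth letting Docker parse the file for you; the docker-compose config command prints the resolved configuration or tells you exactly where the YAML is broken:
docker-compose config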
See the comments about how to access PHP and MySQL from the bash terminal. You use docker-compose exec -it php bash, where php is the name of the service for your PHP image built from the Dockerfile. Likewise for MySQL, you run docker-compose exec -it mysql bash and follow this with mysql -u root -p, entering your password to gain access to the database.
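You don't even have to drop into bash for one-off commands; docker-compose exec will run them directly, as in this quick sketch (again assuming the php service name above):
docker-compose exec php php -v               # PHP version inside the container
docker-compose exec php composer --version   # Composer installed by the Dockerfile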
There's nothing to stop you running docker-compose exec -it phpmyadmin bash either, is there? Play around and see what you can break, because that's how you learn. And that's what is so great about Docker: you can break something and not beat yourself up about it, because you just tear it down and start again.
With the WAMP server, if it's broken (for whatever reason) it can take hours (or even longer) to determine the cause and find a solution. I remember the woes trying to make WAMP work nicely with http://localhost:443. What a right -beep- nightmare. For comparison, here is docker ps showing this Docker setup up and running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f44858bff31b nginx:latest "/docker-entrypoint.…" 4 hours ago Up 4 hours 0.0.0.0:8080->80/tcp laravelnginx
7188847b9b04 phpmyadmin:latest "/docker-entrypoint.…" 4 hours ago Up 4 hours 0.0.0.0:80->80/tcp laravelpma
802b31c4aaac application "docker-php-entrypoi…" 4 hours ago Up 4 hours 9000/tcp laravelphp
f583c7a9436f mysql:latest "docker-entrypoint.s…" 4 hours ago Up 4 hours 0.0.0.0:3306->3306/tcp, 33060/tcp laravelmysql
Once you've got those vital configuration files sorted you will be eager to see the default Laravel screen. From the ./laravelapp/ root directory you can use git clone https://github.com/laravel/laravel.git src to create the default application structure. Or clone your own existing repository if you have one. It only takes a few seconds to clone a repository, so you are ready for the next steps moments later.
Fine tune the composer.json file as required, making your adjustments. If memory serves me correctly, you first need to build your Docker images prior to running Composer (regardless of what's inside the src directory). So do that next: from the terminal command line run docker-compose build --no-cache and wait until done.
Then run docker-compose up -d to create your containers. Following this, run docker ps to check those containers are indeed running. If you see any of your containers saying they're "restarting..." then there's something wrong with the docker compose file. Check and double check each service. Try running docker logs laravelphp, for example, where laravelphp is the container in question that's restarting. The log files are very helpful.
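For troubleshooting, these are the commands I reach for (laravelphp being one of the container names defined above):
docker ps                         # list running containers and their status
docker logs laravelphp            # dump a container's log output
docker logs --follow laravelphp   # keep streaming the log while you investigate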
You only need to run docker-compose build --no-cache once, when you're satisfied with your build. From that point on, only run docker-compose up -d to fire up those containers and docker-compose down to shut them down and remove them. Because of those volumes created in the docker compose file, all your data will persist afterwards. Peeking inside ./laravelapp/db/ you clearly see all the standard MySQL data files.
When there are no issues you can think about using Composer to update your Laravel application. Do so with docker-compose exec -it php bash and, from the command line, run composer install (or even composer update) to see Composer do its thing. I also figure you'll run into the issue of Composer timing out? It happens all the time, and the only way to solve the issue is by running composer config --global process-timeout 1800 before you attempt to install or update using Composer.
This gives Composer a maximum 30 minute window to complete the installation, and it needs that length of time because everything takes longer when all the I/O is between Windows and Ubuntu (the default Linux distro). Go off and make yourself lunch and by the time you come back it'll be complete.
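As a sketch, the order of operations inside the php container would be:
composer config --global process-timeout 1800   # give Composer a 30 minute window
composer install                                # or composer update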
Clear the terminal (use clear) and begin with cp .env.example .env to copy over the environment file. Then php artisan key:generate, followed by php artisan storage:link (if a new Laravel application) and php artisan optimize. Set up your configuration for the MySQL database first of all before venturing on with php artisan migrate.
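In one place, that first-run sequence inside the php container looks like this (php artisan migrate waits until the database configuration below is sorted):
cp .env.example .env        # copy over the environment file
php artisan key:generate    # set the application key in .env
php artisan storage:link    # only needed for a new Laravel application
php artisan optimize        # cache the configuration and routes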
In my ./config/database.php configuration I have the following. Notice I'm not using 127.0.0.1 for the host? You'll have pot luck if you try localhost with Docker containers. You must use your network IP address, and you'll find that by running ipconfig on the command line, or try ip a if the former fails to produce results for you.
'mysql' => [
    'driver' => 'mysql',
    'url' => env('DATABASE_URL'),
    'host' => env('DB_HOST', '172.x.x.1'),
    'port' => env('DB_PORT', '3306'),
    'database' => env('DB_DATABASE', 'laraveldb'),
    'username' => env('DB_USERNAME', 'laraveluser'),
    'password' => env('DB_PASSWORD', 'password'),
    'unix_socket' => env('DB_SOCKET', ''),
    'charset' => 'utf8mb4',
    'collation' => 'utf8mb4_unicode_ci',
    'prefix' => '',
    'prefix_indexes' => true,
    'strict' => true,
    'engine' => 'InnoDb ROW_FORMAT=DYNAMIC',
    'options' => extension_loaded('pdo_mysql') ? array_filter([
        PDO::MYSQL_ATTR_SSL_CA => env('MYSQL_ATTR_SSL_CA'),
    ]) : [],
],
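For completeness, the matching entries in the .env file would look something like this; the host is the same placeholder as above, so substitute the address you found with ipconfig or ip a:
DB_CONNECTION=mysql
DB_HOST=172.x.x.1
DB_PORT=3306
DB_DATABASE=laraveldb
DB_USERNAME=laraveluser
DB_PASSWORD=password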
If you have an existing Laravel application, enter http://localhost:80 to reach the PHPMyAdmin tool. To see your freshly installed Laravel application, use http://localhost:8080 in the browser address bar. If you have errors being reported by the PHPMyAdmin tool, then check your docker-compose.yml file and that PMA_HOST: mysql is actually using your database service.
It was trial and error for me to get the darned thing working to start with, and I found using the database service name for the PMA host was the best solution. Fingers crossed for you.
You are now ready to migrate your database; however, you must first grant privileges to your user. Usually (by default) that user is root, but I guess it'd be more secure to create a new user for your application and grant privileges just to that user on the host (172.x.x.1, remember?). You can run docker-compose exec -it mysql bash on the command line (from the root directory of your application, ie ./laravelapp) followed by mysql -u root -p and the password.
Once you see the mysql> prompt you can use create user "laraveluser"@"172.x.x.1" identified with mysql_native_password by "password"; where "172.x.x.1" is the host you figured out with the ip a or ipconfig command. The password is what you have put in the docker-compose.yml file, obviously. Now that you've created a user (and are not using the root user) you need to grant all privileges to this user, using:
grant all privileges on laraveldb.* to "laraveluser"@"172.x.x.1" with grant option;
Notice we are only granting to the host and not to "%". Using the wildcard is dangerous and ill practice in my opinion; even though it's only local development, it's a bad habit. Run the php artisan migrate command and your output should be similar in nature to this:
sysuser@93a2800d2495:/var/www$ php artisan migrate
INFO Preparing database.
Creating migration table ................................................................................. 82ms DONE
INFO Running migrations.
2014_10_12_000000_create_users_table ..................................................................... 94ms DONE
2014_10_12_100000_create_password_reset_tokens_table ..................................................... 85ms DONE
2019_08_19_000000_create_failed_jobs_table ............................................................... 67ms DONE
2019_12_14_000001_create_personal_access_tokens_table .................................................... 65ms DONE
That brings to a close my early experience of building Docker containers from images for my Laravel development setup. I'm finding I am best pleased having dumped the WAMP server; however, there are still ongoing issues. Namely, there is a lag between page loads, and that's because of the I/O back and forth between Windows and the underlying WSL2 system.
There is a solution I've read about but, as of yet, not tried. I'm sure I will try it at some point and blog about it afterwards, but for now that's all I have to offer from my experience using Docker for PHP Laravel application development.
As it turns out, there are a few things missing from the build script in regards to the GD extension. You only really discover these failings afterwards, when you continue to develop a project or application after moving it from a WAMP server to a Docker environment. The case in point for me on this occasion was that I couldn't create a JPEG image.
Digging around SO (Stack Overflow) and elsewhere for a few hours, I worked out I needed to make a few adjustments and those are below. If you come across this blog post in a search, then use the following build script and not the one above.
FROM php:8.2-fpm
# Get argument defined in docker-compose file
ARG user
ARG uid
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
curl \
zlib1g-dev \
libpng-dev \
libwebp-dev \
libjpeg-dev \
libfreetype6-dev \
libonig-dev \
libxml2-dev \
zip \
unzip \
&& docker-php-ext-install pdo_mysql \
&& docker-php-ext-install mbstring \
&& docker-php-ext-install exif \
&& docker-php-ext-install pcntl \
&& docker-php-ext-install bcmath \
&& docker-php-ext-configure gd --with-freetype=/usr/include/ --with-jpeg=/usr/include/ --with-webp=/usr/include/ \
&& docker-php-ext-install gd \
&& docker-php-source delete
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Get latest Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# Create system user to run Composer and Artisan Commands
RUN useradd -G www-data,root -u $uid -d /home/$user $user
RUN mkdir -p /home/$user/.composer && \
chown -R $user:$user /home/$user
# Set working directory
WORKDIR /var/www
USER $user
If you are using any Docker image with -alpine, I would wager a bet you may need to make your own modifications to this build script as well. Out of curiosity, I said you could use phpinfo(); to output what extensions are installed, but you don't have to (as I found out recently). Having used docker-compose exec -it php bash on the command line, you can simply enter php -i to output the same information that phpinfo(); does.
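A quicker way to double-check the rebuilt image, rather than wading through the whole php -i dump, is this sketch using the same php service name:
docker-compose exec php php -m                           # list the extensions compiled into the image
docker-compose exec php php -r "var_dump(gd_info());"    # confirm JPEG and WebP support in GD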