As with so many of my projects, a few years passed between the doing and the posting. The text on this page was mostly written in January 2020, then augmented in April 2022 and finished August 2022. Hopefully, finally posting this will be at least a little interesting to some people. For instructions on how to use the container, see the section further down this page.
PICT, the drawing game I made, has been pretty well received.
The front-end was specifically made to be platform agnostic: it has to work on any phone or tablet, without installing anything. Despite the initial weirdness with iPhone touch behaviour, we mostly succeeded there.
The back-end was written in PHP (groan) because that's the environment I have here for my site, and it saves me having to pay for more server space. But bizarrely, the very first game of PICT was hosted on my laptop, with a hastily hacked-together LAMP install and broadcasting a wifi hotspot. It worked. But had I known that was how it'd be played, I would have designed the back-end quite differently.
That game was played on a long train journey, where mobile internet was unreliable. There have been several other situations where we've been in a group, wanting to play PICT, but thwarted by unreliable or nonexistent internet. It is a hassle to install PICT locally, which is why producing a PICT Container was suggested.
"Containers" are well outside of my usual jurisdiction so I asked Tom to give me a hand. You can tell how familiar he is with Docker by how much he hates it. But Docker is currently the de-facto standard for making code portable ("not quite as inefficient as a VM") so turning PICT into a Docker image seems like a worthwhile exercise.
I don't normally release PHP source code, because I'm paranoid about exploits and the primary defence for PHP projects is to keep the source code under wraps. There are almost certainly exploits in my code and I don't have the patience to find all of them. On the other hand, five nines of the attacks this site sees are generic wordpress exploits (I don't even use wordpress) so maybe my fears are unfounded.
Note: PICT was originally written over a weekend. We eventually spent the best part of a month dicking around with this container stuff, so it would definitely have been faster to just rewrite it from scratch.
The database credentials, in true PHP style, are declared in the main file and committed to the repo. We moved those into a separate PHP file, db.php, which gets included, and the repo can contain a template db.php.example file. For deployment to a webserver that can be filled out, and in the container setup we can dump a localhost/root template in there.
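As a rough sketch, the template might contain something like this (the variable names here are illustrative, not necessarily what PICT actually uses):

<?php
// db.php.example: copy to db.php and fill in for the environment in question
$db_host = 'localhost';
$db_user = 'root';
$db_pass = '';
$db_name = 'pict';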
But the credentials still exist in the git history so we need to filter those out. There may be newer tools to do this in a "safer" way, but the way I'm familiar with is the dreaded git filter-branch. It's very slow and comes with all the usual warnings about manipulating history, but makes it easy to apply a search/replace on every file in every commit. I don't remember the exact command used, but it was something along the lines of
git filter-branch --tree-filter 'sed -i "s/old/new/g" index.php' -- --all
The --all after the separator means to rewrite all branches and tags. Instead of the sed in-place command, we may actually have used a patch file prepared beforehand. Whatever the method, all instances of connecting to the database in all the history were changed to a require('db.php');
To check it worked it should be enough to just git log -S password to search for the existence of password in the history, but we all know I'm far too paranoid to trust that. After checking out the initial commit and various intermediate commits I convinced myself the process was successful. Note that the original git objects are still present, and will remain so until garbage collection (or invoking git gc). Naturally when we push/publish, the unreferenced objects will be left behind.
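If you did want to purge them locally straight away, the usual recipe is to drop the backup refs that filter-branch leaves under refs/original, expire the reflog, and force a collection:

rm -rf .git/refs/original/
git reflog expire --expire=now --all
git gc --prune=now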
The remainder of the starting work is straightforward and can be committed on top of the history. Pretty much every reference to mitxela.com needs to be swapped out with something generic (excepting, of course, the hyperlink on the home page). The idea here was not to make a new, containerized version of PICT, but to clean up the repo in a way that lets it work either in the container or in the original environment (labelled NFS in the git messages). setCookie is one example where the hostname is used explicitly; redirects are another. Replacing mitxela.com with localhost fixes the container but breaks the original. The correct thing to do is use environment variables. HTTP_HOST is set by Apache, so $_SERVER['HTTP_HOST'] can be dropped in wherever we need it. Note that this doesn't tie us to using Apache in the container, since in the container we can set whatever env vars we want.
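For example, a join link built this way comes out right whichever host served the page (a hypothetical line, not taken from the actual code):

$joinLink = 'http://' . $_SERVER['HTTP_HOST'] . '/join/' . $gameId;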
Other environment variables can be used to cover different behaviours. For instance, the containerized version won't be served over HTTPS, as the intended use case is hosting on a local network and certificates can't be issued for private IPs (note: that doesn't preclude us from using a domain name for a private IP, but more on that later). A PICT_NO_SSL env var lets us instruct the code about whether or not HTTPS is being used. It can be checked as a RewriteCond in the .htaccess file, and accessed directly from PHP with getenv().
# in .htaccess
RewriteCond %{ENV:PICT_NO_SSL} ^$
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule .* https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

The X-Forwarded-Proto is a header added by my webhost's reverse proxy service. For setting cookies, Chrome really hates the hostname being set to "localhost" or an IP, but not setting the hostname at all works fine. Back in the day, browsers used to behave oddly if you did this (I'm talking circa 2006) but it seems those days are gone. The main reason for setting hostname is if you need the cookie to cross subdomains, which we do not.
// in PHP
setcookie("pict", $s, 0, "/", null, !getenv('PICT_NO_SSL'), TRUE);
The PICT repository is really just the live folder on the webserver; the code was written first and I ran git init in that folder later. For small personal projects, there is really nothing wrong with this approach.

For some projects it might be enough to chuck a Dockerfile in the root of the repository, but I don't want our docker stuff, extra config and shell scripts to get plonked on the regular webserver. There's no deploy script, and I don't want one.
Instead, let's build our container elsewhere and chuck the entirety of the PICT source code in a subfolder of it. We spent a while thinking of the best way to do this, and changed our minds halfway through.
The first plan was to use a git submodule. Submodules are designed for almost exactly this use case: cloning a separate repository (the PICT source code) into another repository (the container). However, for reasons I can't fully recall, the result was pretty unsatisfactory. At the very least, it means that the pict source code is self-contained, and has to be pushed/pulled independently from the container repo.
Our final creation is a lot more of a hack. The applicability of git worktrees may not be immediately apparent, but a worktree allows one to check out multiple branches of a repository at the same time. It transpires that nothing prevents you from placing one git worktree within another. We created two branches, the main branch being the pict source code, and a container branch that adds the main branch as a worktree in a subfolder (called src). In this configuration, both branches make use of the same .git directory (it's all one repository), but cd'ing into the source directory gives you a separate history, and you can happily commit to either branch without interference. They are indelibly linked: git branch highlights an active worktree in cyan, with a plus sign instead of an asterisk.
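The setup boils down to something like this (a sketch; the source branch is called master in the actual repo):

git checkout container
git worktree add src master   # check out the source branch inside the container branch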
LAMP, for Linux-Apache-MySQL-PHP, is an acronym for a once-popular webdev environment. As PICT runs in a LAMP environment, the fastest way to get up and running is to take an existing LAMP docker image and squirt our source code and configuration files into it. This is the first thing we attempted, and despite how ugly the result was, it technically worked.
A docker "image" is a template, while a "container" is an instance of an image. You first build the image, by writing a Dockerfile and running docker build ...
. The first line of our Dockerfile will define the base image, in this case
FROM mattrayner/lamp:latest-1804means that we start with the ready-made LAMP image. That LAMP image itself starts from an Ubuntu base image. The rest of the Dockerfile copies in our code and sets up the database and so on.
Once built, the container can be run with docker run ... . The commands can soon get quite long and tedious, especially if running multiple containers (or building "multi-container apps"), so another tool named docker-compose is often useful. This is a wrapper that lets us write a yaml file to configure our environment, which for now just consists of the pict image, the exposed ports, some environment variables, and potentially a mounted volume containing the source code (more on that in a bit).
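A docker-compose.yml covering that might look roughly like this (a sketch, not the actual file; the bind mount comes later):

version: "3"
services:
  pict:
    image: mitxela/pict
    ports:
      - "80:80"
    environment:
      - PICT_NO_SSL=1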
If that all sounds a bit too simple and easy, it's apparently standard (or useful, or convention, or tradition?) to then wrap the docker-compose command in a shell script so we don't need to remember or type out the arguments each time. In this case, Tom produced a script named start-dev.sh that both runs the initial setup and spits out another shell script that contains the actual docker-compose command. I suppose one benefit of wrapping everything in scripts is that you can then version control the commands used to invoke things. But as a newbie to all of this, I'll simply accept that this is a good way to do it.
Speaking of wrapper scripts, one of the complaints about docker is that it needs to run as root, even in situations where it really shouldn't need to. You can fiddle with permissions to ease this but sadly the best solution seems to be just plonking a script like this in ~/usr/bin:
#!/bin/bash -e
exec sudo /usr/bin/docker "$@"
It's worth mentioning that these days (in 2022) there are alternatives such as podman which act as a drop-in replacement for docker with less of the permissions hassle.
Images on the system can be listed with docker images. When we rebuild an image the old one isn't deleted, so we can quite quickly eat up a lot of diskspace if we're not careful. You can use docker image rm [image] to remove one, assuming it's not in use, or docker image prune to autodelete. Running docker images on my old laptop shows something like this (truncated for clarity):
REPOSITORY        TAG           IMAGE ID       CREATED       SIZE
...
<none>            <none>        df0a24d3b13c   2 years ago   843MB
<none>            <none>        cc4aa1dd8fe1   2 years ago   843MB
<none>            <none>        d7d4b649adc0   2 years ago   843MB
<none>            <none>        64d8988e7a3d   2 years ago   843MB
<none>            <none>        adc7bbd66ad6   2 years ago   843MB
<none>            <none>        327fd27d17e7   2 years ago   844MB
<none>            <none>        f2aac11d1c8b   2 years ago   844MB
...
mattrayner/lamp   latest-1804   70d76b7d843f   3 years ago   843MB
The commands and interface are a bit of a mess; there seem to be multiple ways to invoke most things. The above output can also be had by typing docker image ls (docker image being a subgroup for dealing with images) and individual images can also be removed with docker rmi [image].
To see containers (which are instances of images) we run docker ps -a. Without the -a it will only show running containers, which can be confusing since containers often seem to end up in various "exited" states, which is not the same as stopped. This is usually as a result of things failing. The exited container will then hang around and prevent you from removing the image until you remove the container.
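They can at least be cleared out in one go:

docker rm $(docker ps -aq -f status=exited)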
To see processes running within a container, docker top [container] is the command you want.
docker run is the command to create and start a new container; docker start would be to resume a stopped container. In our case we're creating/starting it via our wrapper script around docker-compose anyway. docker ps -a shows something like this:
CONTAINER ID   IMAGE          COMMAND     CREATED          STATUS         PORTS                               NAMES
b01e356d1bf6   mitxela/pict   "/run.sh"   14 minutes ago   Up 3 seconds   0.0.0.0:80->80/tcp, :::80->80/tcp   container-pict-1
The name container-pict-1 is autogenerated. On my old laptop some of these use underscores instead of hyphens, but notably a lot of the names are completely nonsensical, "helpfully" chosen from random dictionary words. It's this friendly name you need to supply to docker stop or docker rm.
In our case, we want to start and stop the container using the docker-compose command. The wrapper script (src/dev) ends up a bit like this:
#!/bin/bash -e
docker-compose -f docker-compose.yml "$@"
so we can start it by running src/dev up -d and kill it with src/dev rm.
A running container stays there even through a reboot (or will come back once you start the docker service). At least once I found that rebuilding and starting a container was still launching the old one, but after rm'ing it once, rebuilding and starting launches the most recent build. Everything seems especially messy when it comes to jumping around the git history and trying to compare different versions.
Perhaps the most useful thing I can put here is the procedure for flushing everything and getting back to a clean state. First delete all containers:
docker rm $(docker ps -a -q)
Then delete all images:
docker rmi -f $(docker images -a -q)
Alternatively it's probably fine to merely stop the docker service and rm -rf /var/lib/docker. Incidentally, on my old laptop, since my main partition was running short of space, I symlinked /var/lib/docker to a directory on my secondary drive, so all of the docker objects end up there instead.
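That relocation is nothing more than (the destination path being whatever you like):

systemctl stop docker
mv /var/lib/docker /mnt/storage/docker
ln -s /mnt/storage/docker /var/lib/docker
systemctl start docker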
The initial way of chucking our code in the container works fine in that the code ends up in the container. We could even edit this code while the container is running by launching an interactive shell to the container, but aside from being awkward, the main problem there is that to keep our changes we'd need to extract the files out of the container at the end. Otherwise, when we rebuild the image it'll just copy the old files from disk into the new container.
Obviously the correct thing to do is mount the src folder into the container so that changes are directly reflected. There are multiple ways to do this, a "bind mount" is the simplest and what we went for, and amounts to adding four lines into our docker-compose yaml file. The source is still copied in as part of building the container (last line of the dockerfile), so the image will still work if run directly, the bind-mount just acts as an overlay.
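In the long-hand compose syntax those four lines would be something like this (the target path depends on where the image serves from):

    volumes:
      - type: bind
        source: ./src
        target: /app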
At this point, the container mostly worked, so I think it's time for an intermission, where I'll talk a bit about Apache, PHP and MySQL.
UTF8 is a simple enough encoding. The lowest 128 code points are identical to ascii, but using variable-length multibyte encoding it can represent any character in unicode. Fantastic! What could go wrong?
For a start, for reasons beyond comprehension UTF8 isn't the default on many systems. I'm not talking about transfer encoding, though the problems there are equally stupid: if a webserver fails to specify an encoding in the HTTP headers, the browser "falls back" to ascii, or more specifically ISO 8859-1. Non-ascii characters are rendered as question marks, but where this really hits problems is submitting form data, which gets mangled on transmission, as the browser believes the server won't support anything non-ascii. It's ridiculous that you can fix the encoding by adding an HTML meta tag, because by the time the meta tag is parsed the content is already being read – I digress.
A number of PHP functions, such as json_encode, get very upset by these unicode issues. But it's generally true that once every component is informed that the data is in UTF8, the problems dissolve away. It's UTF8 in one place, it's handled as UTF8 in another, everything's dandy. Not so, in the world of MySQL.
In MySQL, the UTF8 text type is not actually UTF8 compliant. It is a subset of UTF8 that only supports up to three-byte characters. If a text string contains a four-byte UTF-8 character, MySQL throws a fit, the data is truncated and the query fails.
Instead of fixing what is obviously a complete failure in the implementation, they added a new text type, utf8mb4, which should really be called utf8. Words cannot describe my confusion and frustration about this.
utf8mb4 wasn't even added that long ago. As my webserver also hosts some archived old code which I don't want to modify, and I'm a cheapskate, I've held the environment here on an old version of MySQL (specifically MariaDB 5.3) which is from before four-byte utf8 support existed. Did I mention that emojis are generally four-byte unicode characters? I tested PICT with several unicode messages, so imagine my surprise when the first person to use an emoji on PICT faced an error message.
Our only options are to store names and descriptions as raw binary (making inserts and queries very tedious) or to escape the four-byte characters before inserting. The nature of this process means escaping only the four-byte characters is substantially more effort than escaping everything that's not part of the ascii charset.
TL;DR: In order to support emojis, every single non-ascii character has to be escaped as an HTML entity before inserting into the database.
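In PHP that can be done wholesale with something like this (a sketch, not necessarily the exact call PICT uses):

// encode every non-ascii character as a numeric HTML entity before it goes anywhere near MySQL
$safe = mb_encode_numericentity($text, [0x80, 0x10FFFF, 0, 0xFFFFFF], 'UTF-8');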
On a related note, files like db.php shouldn't be directly accessible from the outside. There's nothing exactly wrong with enforcing this in PHP, and I'd always leave some kind of check in place just in case the server configuration is broken at a later date, but the more elegant way of achieving the same result is to name your hidden PHP files consistently and add a simple rule to the server config. The following .htaccess rule serves all files beginning with an underscore as a 404 page:
RedirectMatch 404 /_
We now return you to the main narrative.
The right base image to start from would be Alpine Linux. This is a lightweight distro designed specifically for containers. It uses busybox and for me brings back a wave of buildroot nostalgia.
The LAMP image we started with was 843MB, and I'm not sure where all that is coming from, given that the base Ubuntu image is about 188MB. Alpine is about 5MB. To this we need to add all our software packages, which now come from the Alpine package repositories using apk: PHP, MySQL (MariaDB) and a webserver.
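In the Dockerfile that amounts to a line along these lines (the exact package names are assumptions, give or take):

RUN apk add --no-cache nginx php7 php7-fpm php7-mysqli php7-json php7-pecl-imagick mariadb mariadb-client s6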
One of the docker "philosophies" is to have one process per container. That process can be launched as PID 1, and if you want multiple services that talk to each other, you run multiple containers. This maximises the separation between processes, but again isn't really what we were hoping for with our lightweight container. There are lots of situations where running multiple services in a container makes sense, but for that we need, at the very least, an init system that will launch all our services, and hopefully supervise them (restart them if they fail, etc).
The init system is also supposed to reap zombie processes, since any orphaned process becomes a child of PID 1. Accumulating zombie processes is the kind of problem that affects long-running containers, I don't think we'll need to worry about it for the few hours PICT runs at a time, but this is one of the given reasons for not setting any old process as PID 1, that it won't reap orphaned processes if it wasn't designed to be an init system.
When I played with buildroot on raspberry pi, I used busybox as the init system and listed my scripts in /etc/inittab. This is technically possible in a docker image, but there are plenty of other options.
If we take a closer look at the mattrayner/lamp image, we see that it uses phusion/baseimage as its base. This phusion baseimage is a minimal version of Ubuntu that uses a custom init system written in python (which they have named "my_init"). The command we gave to docker to start the lamp image was /run.sh, and the last line of that script is exec supervisord -n. Supervisord is a process supervisor written in python.
There are, in fact, a whole load of different process supervisors we could choose from, many of which can also function as an init system. Tom seemed quite excited about a tool called s6 which is a very lightweight supervisor. The s6 site does a better job of comparing it to alternatives than I could do. One of the distinctions they make is between a supervision suite and a service manager, with s6 being the former and another project, s6-rc, being the latter. The salient point is that a supervision suite alone doesn't manage dependencies.
Pretty much the only startup dependency in our system is the database setup: a run-once script that installs the db and applies the schema. It needs the database (MariaDB) to be running in order to apply the schema, but the database needs to be installed before it can run. The services are all launched in parallel by s6, but rather than adding a full service manager for this one case, we did a kind of hack to get the database to come up correctly. The setup script first installs the db, then waits for the database to be up and running before creating the pict database and applying the schema. Meanwhile, the database service script first tries to cd into the database directory – if it is not yet created, the service will fail and continue to auto-restart until it's ready.
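The run script for the database service therefore looks something like this (paths and binary name assumed):

#!/bin/sh
# fails until the setup script has created the data directory; s6 keeps restarting it until then
cd /var/lib/mysql || exit 1
exec mysqld --user=mysql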
I honestly don't have enough context to compare what we created here to other approaches, but the field of init scripts, supervision suites and service managers seems to be filled with hotly debated approaches and controversial software. The skarnet site makes for interesting reading.
In hindsight we probably should have put the nginx config in a bind-mount as we did with the source code, but the plan was never to be editing it frequently. Instead, while trying to get it to work (especially later, when I started fiddling with captive portal stuff) the update process went like this:
docker exec -it container-pict-1 sh
vi /opt/nginx.conf
pkill -HUP nginx
That is, launch an interactive shell into the currently running container, then make our changes with vi and send the hangup signal to nginx so it reloads the config file. This is probably the fastest way to iterate (notwithstanding the terrible editor) but once it's working, we then need to extract our config file out of the container for posterity with another docker exec command or it'll all be overwritten when we next build it.
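Pulling it out is one more command (docker cp would also do the job):

docker exec container-pict-1 cat /opt/nginx.conf > nginx.conf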
So – with nginx, Alpine and s6, how big is our container?
$ docker images
REPOSITORY     TAG      IMAGE ID       CREATED         SIZE
mitxela/pict   latest   19e4c3874c86   6 seconds ago   195MB
alpine         3.12.8   48b8ec4ed9eb   11 months ago   5.58MB
Not too shabby! Certainly a lot nicer than the >800MB images we were creating before. The bind-mount lets us edit the source code with tracked changes in an environment similar to the live webserver, so everything seems to be shaping up. You'll notice I'm not using the latest Alpine base image though, I went for one about a year old as I write this in 2022. I'll explain why in the conclusion section.
But the real challenge is to play PICT on a moving vehicle. I recently ran the PICT container successfully on an aeroplane, where the only option is to hotspot. It's also worthwhile doing this on a bus or train, because all sorts of annoying problems happen when phones switch cell tower.
Note that hotspotting on a laptop like I'm about to describe does not provide internet access to any of the connected devices. Not a problem on an aeroplane but if you want to continue to allow messages to get through, you could hotspot on the laptop and then tether that laptop to a phone's internet. It might also be good enough to use, for instance, Android's mobile hotspot feature, and connect the laptop to it, running PICT as we did on the wifi previously. I haven't confirmed if this works.
To set up a hotspot on linux, given I'm already using Network Manager, the simplest way is to let it handle everything. You can even do this through the GUI (nm-applet). For it to work, you also need to have dnsmasq and nftables installed, which are optional dependencies on Arch.
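For reference, what the GUI sets up corresponds roughly to these nmcli commands (interface name and password are placeholders; the 10.42.0.1 address is Network Manager's default for shared connections):

nmcli con add type wifi ifname wlp108s0 con-name pict ssid pict
nmcli con modify pict 802-11-wireless.mode ap 802-11-wireless.band bg ipv4.method shared ipv4.addresses 10.42.0.1/24
nmcli con modify pict wifi-sec.key-mgmt wpa-psk wifi-sec.psk "changeme"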
To launch the hotspot, you need to manually choose "connect to a hidden network..." in the applet menu and select pict. Easy!
All the players then need to connect to that network and access the IP address you specified earlier. So long as you accessed it via the IP (instead of e.g. just going to localhost), the QR code and join link will be generated correctly. On some phones, you may need to turn off mobile data for it to work.
Since we need to share the join link anyway, this is easily good enough. But on the subject of things being easy, we're using dnsmasq, so if we wanted to use a custom URL it should be as easy as sticking it in /etc/hosts. To my surprise, this didn't work. We can see that Network Manager launched dnsmasq with the --no-hosts option, from running this command:
$ ps -uax | grep dnsmasq
nobody  63848  0.0  0.0  18832  6728 ?  S  12:06  0:00 /usr/bin/dnsmasq --conf-file=/dev/null --no-hosts --keep-in-foreground --bind-interfaces --except-interface=lo --clear-on-reload --strict-order --listen-address=10.42.0.1 --dhcp-range=10.42.0.10,10.42.0.254,60m --dhcp-lease-max=50 --dhcp-leasefile=/var/lib/NetworkManager/dnsmasq-wlp108s0.leases --pid-file=/var/run/nm-dnsmasq-wlp108s0.pid --conf-dir=/etc/NetworkManager/dnsmasq-shared.d
Fair enough, the --no-hosts option clearly tells it not to make use of the hosts file. It's not worth the effort of trying to figure out why it's launched like this. But it's also explicitly adding a conf dir at /etc/NetworkManager/dnsmasq-shared.d, so we can force it to look at /etc/hosts by sticking addn-hosts=/etc/hosts in, e.g., /etc/NetworkManager/dnsmasq-shared.d/hosts.conf. Remember to restart Network Manager after this change.
I stuck the following line in my hosts file:
10.42.0.1 pict.test
...and hooray, with my phone connected to the hotspot, accessing http://pict.test takes us to pict.
We don't even need a TLD, it works fine if you make the hostname just "pict", but without a dot, phones have a habit of attempting to search for the phrase instead of realizing it's a URL.
We now have the possibility, as mentioned earlier, of serving the page over HTTPS. Self-signed certificates tend to cause more problems than they solve (often being flagged as somehow less secure than an unencrypted connection) but we could serve pict on a domain we really own, and get a Let's Encrypt certificate for it. There's even the possibility of pulling down the current certificate and private key for mitxela.com, and serving pict as if it was on the real URL of mitxela.com/pict. Hilarious, but it would probably be very confusing for anyone who wasn't connected to the hotspot, or whose phone tried to connect via mobile data, and then attempted to join the wrong instance of pict.
Let's Encrypt certificates expire after three months, which is shorter than the mean time between containerized PICT sessions. I hate the idea of having to renew our certificates for something like this, and let's be honest, SSL really doesn't matter for PICT.
But something I would like to add is automatic redirect, aka captive portal. It should be as simple as adding a wildcard into the dnsmasq config, such as
address=/#/10.42.0.1
To my surprise, trying this (in 2022) also didn't work (even though it used to!). I eventually figured out this is a glitch in dnsmasq 2.86, which has been patched. For now I downgraded to 2.85 via the Arch Linux Archive.
With the wildcard working, everything is served as PICT. If you head to google.com on your phone, it will play PICT. We can redirect everything using nginx config if we want, but I'm cautious not to break compatibility with the non-hotspot version. If we don't control the DNS server, redirecting to our custom URL is going to break everything.
Phones detect captive portals by hitting certain URLs. Android hits a bunch of google domains with /generate_204 (which should return a 204). If we serve valid pages for these, the phone will think it has internet. One possibly desirable behaviour is for connecting to the wifi network to show a popup, saying "Sign in to WiFi network", which then launches PICT. To make this happen, we need the URLs pinged to not have a valid response. But with our wildcard redirect, this doesn't work, and I found the phone would simply display "Internet may not be available." To correctly show the sign-in page, apparently we need these google domains to time out. One solution is to specifically give those domains a bogus IP in /etc/hosts. This does indeed work on my Android phone, but I'm not entirely sure if it's worth it. It all seems flaky enough that it'll break soon anyway, and running PICT within the sign-in browser may lead to lost session data, which is never fun.
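The entries were along these lines: point the connectivity-check domains at an address nothing will answer on (the exact domain list varies by phone and Android version):

10.66.66.66 connectivitycheck.gstatic.com
10.66.66.66 connectivitycheck.android.com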
#!/bin/bash cmd="systemd-inhibit --what=handle-lid-switch sleep infinity" pid=$(pgrep -f "$cmd") if [[ "$pid" ]]; then kill $pid notify-send "Lid-switch action re-enabled" else notify-send "Lid-switch action inhibited" eval "$cmd" fi
The systemd-inhibit command will temporarily disable sleep, and this script means I can tap the key to toggle the behaviour on and off. I am certain it will come in handy eventually.
Interacting with hardware from a docker container is possible by launching with --privileged, but then the word "container" no longer makes much sense. There are a few projects running captive portals with hostapd and iptables; here's one.
We could launch our PICT container and hotspot container together, potentially keeping the PICT container identical to when we want to run it on a local network. I think the best way to do this might involve setting up a reverse proxy on the hotspot container so that the captive portal stuff is all completely separated out.
The main reason I haven't done this is that there are hardware dependencies; I'm not convinced you can launch and stop the hotspot container and be sure the network hardware is as it was beforehand. The other reason is that we just don't need it: for the three times I've ever needed to run PICT on a hotspot, Network Manager has been fine.
The problem with merging locally-hosted games back into the main site is game ID collisions. Had I written the game with this in mind, UUIDs for each game would have been a better choice. In that case, the join link could be a temporary ID for shortness, with a local mapping of temp IDs to UUIDs.
If we wrote a "push to archive" system, it could of course re-enumerate the IDs to avoid collisions.
This is something I'm unlikely to get round to doing (and there is probably no demand for it).
First run build-container.sh to build the image, then start-dev.sh to run the initial setup and start the dev environment.
You can then use the wrapper script, like src/dev stop and src/dev start, if you want. Alternatively just stop the docker service and start it again when you next want it. To remove the container (once it's stopped) you can src/dev rm.
src/dev up (without the -d) will start the container and show the log output in the terminal; ctrl+C will stop it.
If you've no interest in running the dev environment, once the image is built you can instead run the image directly with docker run -p 80:80 mitxela/pict. If I ever push the image to dockerhub this would be the fastest way to launch the container.
To tear down/clean up:
docker ps -a to show containers, docker rm to get rid of them
docker images to show images, docker rmi to get rid of them
rm -rf the pict repository, good riddance
Was any of this fun? Hmm... not really, and I still can't figure out why I thought it would be to begin with. The most enjoyable bit was probably the git worktree-within-a-worktree hack.
However, the container technically works, and passed the test when I used it on the skiing trip. On a different laptop, all I had to do was install docker, clone the repo and fire it up. Despite the fact I'd forgotten most of the details, it worked perfectly, so that's something to be celebrated.
Disappointingly, as I come to write this up, it transpires that the earlier success may simply have been coincidence. Building the container from a clean slate, now, in August 2022, fails due to some php7-pecl-imagick dependency problems. If that had happened on the trip, I probably would have just given up and done something fun instead.
The problem is that basing our image on alpine/latest puts us at the mercy of whatever is going on in the alpine package repositories. Docker won't by default pull the latest version of an image when you use it as a base, so even though we had an "alpine/latest" image, it was actually getting progressively out of date, and I only noticed the build was now failing when I manually deleted everything for the sake of testing the examples I was putting in this writeup.
Furthermore, trying different versions of Alpine gives wildly different results: I notice that basing on 3.15.5 gives a 328MB image, whereas 3.12.8 gives a 195MB image, and 3.16.1 fails entirely. I'm not sure what's going on but it's clearly all related to the Alpine package repositories. It does make you wonder: if the whole thing is this fragile, what exactly is the point of using a container? For now I've pinned it to 3.12.8 and I don't really have the heart to dig deeper.
The source code for everything is on github, with the default branch being container and the master branch being the PHP source code. Some commits under Tom's name are obviously mine, some commits under my name are obviously Tom's. I had to do some serious restructuring of the git history to make it look even remotely coherent, in 2020 we'd left the repo with multiple orphans and several repeated histories after all the filtering and rebasing that took place.
PICT is remarkably popular. I think most people playing it were introduced by other PICT players. If I were the type of person to do analytics on visitors, it would have been fun to track which players returned for more games – we could have drawn a kind of map between games of PICT showing the degrees of separation from the initial handful of people I introduced it to.