Chris W Jones

Hobbyist Programmer

I've been hunting down a problem this morning that keeps causing my laptop to totally lock up. I suspected it was related to Docker because it happened every time I hit a particular endpoint on a local webserver running in Docker.

When I was running Docker on an MBP, I could easily limit the total resources Docker used through a program that lived in the menu bar. It was more challenging on Ubuntu, but after some poking around I found a StackOverflow answer by user Leltir that solved it. I've modified the answer below by adding comments and my particular limits.

  1. Create the file /etc/systemd/system/docker_limit.slice with the following contents:

    [Unit]
    Description=Slice that limits docker resources
    Before=slices.target
    [Slice]
    # Turn on CPU limit
    CPUAccounting=true
    # Set CPU limit to use 400% of total CPU resources
    # I have 8 cores so this limits Docker to using 4 of them
    CPUQuota=400%
    # Turn on memory limit
    MemoryAccounting=true
    # Set memory limit to 8GB
    MemoryLimit=8G
    
  2. Run the following to reload systemd and start the new slice:

    sudo systemctl daemon-reload
    sudo systemctl start docker_limit.slice
    
  3. Add the following to /etc/docker/daemon.json (merging it into any existing JSON in that file):

    {"cgroup-parent": "/docker_limit.slice"}
    
  4. Restart docker with sudo systemctl restart docker

For help with what limits can be set in the slice file, see the Red Hat doc "Modifying Control Groups".
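
To double-check that the limits took effect, you can make sure the slice is loaded and that Docker's cgroups end up underneath it. Something like this should confirm it (the cgroup path just assumes the slice name used above):

systemctl status docker_limit.slice
# after restarting Docker and starting a container, its scope should show up under the slice
systemd-cgls /docker_limit.slice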

I run a hobbyist's infrastructure out of my basement. I have a Raspberry Pi 3 (amy), a Raspberry Pi 2 (hermes), an old gaming rig (theprofessor), and a DigitalOcean droplet (zoidberg). Each of them has a purpose and they are each precious to me.

Managing them has been a pain lately so I've been working on reducing the pain. For starters, I've moved a lot of the boring configuration management over to Ansible. That has been a big help actually. Now I can use templates for the configuration files and keep everything up-to-date quickly and easily.

Another thing that I've done is install Tailscale on all of my servers and devices. That gave me a quick and easy way to connect from my laptop no matter where I am. Under the hood, it uses WireGuard so it's secure and fast. Anecdotally, it was about a 10x speedup for me. I have gigabit fiber coming into my house and a gigabit LAN infrastructure. When I was running OpenVPN, the max connection speed through the VPN I could get was around 5 Mb/s. Now that I've switched over to WireGuard, I regularly see around 50 Mb/s. It was a nice improvement.

Each server in my infrastructure has a specific purpose. amy runs Home Assistant and its attendant services for local control of IoT devices. hermes runs Pi-hole for whole-house ad-blocking. zoidberg powers this website and several others. And theprofessor is my NAS. Pretty much all of those services run in Docker. To manage them, I've essentially been acting as my own container orchestration service. Last week, I read the post "Running Nomad for home server" and figured it was time to get a proper container orchestration system going.

I've tried running k3s on my infra before, but on my severely-limited nodes it would never seem to start properly or, once started, could never actually run another container. Nomad seemed promising, though. I'm happy to report that I got it running in about an hour on all of my nodes except hermes, and I suspect that's due to the ridiculously old ARM version that hermes has. I'm still working on migrating my services over to it, but so far I'm enjoying Nomad much more than managing all of the services myself.

I'm working on migrating this blog from Ghost to WriteFreely so that I can participate fully in the IndieWeb. I also found Ghost to be kind of a pain to deal with so hopefully WriteFreely will be easier.

This post continues from part 1 of this series.

In this post, we'll create a Docker container for our Satis server that can be deployed into a Docker Swarm or Kubernetes cluster.

Storing the dist archives

So now that we've got Satis building locally (or on a server somewhere), let's put it into a container and deploy it. There are a few ways we could do this. We could build all of the archive files into the container itself, or we can put them onto a CDN or some sort of storage service. I tried both of these at work. We have a pretty large list of dependencies for our project, and when we were building the archives into the container, we ended up with a container over 5GB in size. Docker handled it just fine, but since I was building every couple of hours to make sure that we had the latest packages in Satis, our servers started hitting storage limits because of all of the previous Docker images lying around. After getting a stern talking-to from our DevOps team, I moved the packages into a Google Storage bucket, so that's what we'll do here too.

I'm going to assume that you have gsutil installed and set up. If you don't, Google has a quickstart [https://cloud.google.com/storage/docs/quickstart-gsutil] that should get everything installed.

To start, we'll create a bucket to store all of our archives. Don't forget to replace the name of the bucket with the name of your bucket!

gsutil mb gs://cwj-satis-bucket
Creating gs://cwj-satis-bucket
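
One thing to keep in mind: Composer clients will download the archives straight from the bucket, so the objects need to be readable by whoever consumes your Satis repo. The simplest (but most permissive) option is to make the bucket publicly readable; if your packages are internal, you'll want a narrower grant instead. The group address below is just a placeholder:

# public read access -- only do this if the archives aren't sensitive
gsutil iam ch allUsers:objectViewer gs://cwj-satis-bucket
# or grant read access to a specific group (placeholder address)
gsutil iam ch group:dev-team@example.com:objectViewer gs://cwj-satis-bucket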

Now that we've got a storage bucket, we need to tell Satis that's where we'll be storing all of our archives. Satis will then rewrite its index file to point clients to the bucket instead of itself.

Before we start updating everything, here's what our current composer.json file looks like:

{
    "name": "work/satis-repo",
    "homepage": "https://satis.work.com",
    "repositories": [
        { "type": "vcs", "url": "ssh://bitbucket.work.com/mirrors/99designs.phumbor.git" }
    ],
    "require": {
        "99designs/phumbor": "*"
    },
    "require-dependencies": true,
    "require-dev-dependencies": true,
    "archive": {
        "directory": "dist",
        "format": "zip"
    }
}

We're going to change the archive section so that it looks like this:

    "archive": {
        "directory": "dist",
        "format": "zip",
        "prefix-url": "https://cwj-satis-bucket.storage.googleapis.com"
    }

Once again, don't forget to update the bucket name to your bucket's name.

Now that we have composer.json updated to point to our bucket, we need to copy our archives to the bucket. Our build is getting complicated so I'm going to take a moment to create a Makefile that will handle this for us.

Build script

I'm going to start with a simple Makefile that will do all of our building from now on. Here we go:

.PHONY: all build-satis push-assets
all: build-satis push-assets

build-satis:
	cp ~/.ssh/id_rsa .
	cp ~/.ssh/id_rsa.pub .
	docker run --rm --init -it \
		-e GIT_SSH_COMMAND="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no" \
		--volume ${PWD}:/build \
		--volume "$${COMPOSER_HOME:-$$HOME/.composer}:/composer" \
		--volume ${PWD}/id_rsa:/root/.ssh/id_rsa \
		--volume ${PWD}/id_rsa.pub:/root/.ssh/id_rsa.pub \
		composer/satis --no-ansi -v build /build/composer.json /build/output
	rm id_rsa
	rm id_rsa.pub
        
push-assets:
	gsutil -m rsync -r ./output/ gs://cwj-satis-bucket/

There, now we can run make in our project directory and we'll build satis and push the archives to our bucket. If you don't want to put this in docker, you could stop right here and have a perfectly workable Satis server that could be hosted on a VM or probably out of a bucket (but I haven't tried that).

Dockerize

I mentioned at the start of this article that I tried putting all of the archives in a container first: a simple nginx container with the archives stored in its filesystem. I'm going to keep the same idea, but instead of storing the archives locally, nginx will only serve the index and some basic metadata for our Satis repository while proxying any archive requests to our bucket.

I'm going to use the bitnami nginx image [https://hub.docker.com/r/bitnami/nginx] to start from because they have a lot of good, sane defaults.

Here's what our Dockerfile should look like:

FROM bitnami/nginx:1.16

# This is where all of our files will live
WORKDIR /app
# We'll get to this next but this is the nginx config for our server
COPY ./server_blocks/satis.christopherjones.us.conf /opt/bitnami/nginx/conf/server_blocks/
# Copy over some of the small files that Satis uses but none of our archives
COPY ./output/index.html ./index.html
COPY ./output/packages.json ./packages.json
COPY ./output/p ./p
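
Here's a minimal sketch of what that server block can look like, assuming the Bitnami image's default non-root port of 8080 and the bucket name from earlier; adjust the server_name, port, and proxy target to match your setup:

server {
    listen 8080;
    server_name satis.christopherjones.us;

    # Serve the Satis index and package metadata copied into /app
    root /app;
    index index.html;

    # Proxy archive downloads through to the storage bucket
    location /dist/ {
        proxy_ssl_server_name on;
        proxy_pass https://cwj-satis-bucket.storage.googleapis.com/dist/;
    }
}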

Next, let's update our Makefile by adding another block to build our Docker container:

TAG:=satis-server:latest

.PHONY: all build-satis push-assets build-nginx
all: build-satis push-assets build-nginx

build-satis:
	cp ~/.ssh/id_rsa .
	cp ~/.ssh/id_rsa.pub .
	docker run --rm --init -it \
		-e GIT_SSH_COMMAND="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no" \
		--volume ${PWD}:/build \
		--volume "$${COMPOSER_HOME:-$$HOME/.composer}:/composer" \
		--volume ${PWD}/id_rsa:/root/.ssh/id_rsa \
		--volume ${PWD}/id_rsa.pub:/root/.ssh/id_rsa.pub \
		composer/satis --no-ansi -v build /build/composer.json /build/output
	rm id_rsa
	rm id_rsa.pub
        
push-assets:
	gsutil -m rsync -r ./output/ gs://cwj-satis-bucket/
    
build-nginx:
	docker build -t ${TAG} .

Great, let's tie it all together by building our container:

$ make
cp ~/.ssh/id_rsa .
cp ~/.ssh/id_rsa.pub .
docker run --rm --init -it \
	...
Scanning packages
Enter passphrase for key '/root/.ssh/id_rsa':
Selected 99designs/phumbor (0.1.0)
Selected 99designs/phumbor (0.1.1)
...
wrote packages to /build/output/p/99designs/phumbor$7883d94b94e047487e4616ff6a4e56ed7963e37cceec02e739451868f3087e24.json
Writing packages.json
Pruning include directories
Writing web view
rm id_rsa
rm id_rsa.pub
docker build -t satis-server:latest .
Sending build context to Docker daemon   1.04MB
Step 1/6 : FROM bitnami/nginx:1.16
 ---> a58e62787db2
Step 2/6 : WORKDIR /app
 ---> Using cache
 ---> 36710f3e54af
Step 3/6 : COPY ./server_blocks/satis.cnet.com.conf /opt/bitnami/nginx/conf/server_blocks/
 ---> Using cache
 ---> f8b1611f4292
Step 4/6 : COPY ./output/index.html ./index.html
 ---> c40177a2752a
Step 5/6 : COPY ./output/packages.json ./packages.json
 ---> a4da11c92972
Step 6/6 : COPY ./output/p ./p
 ---> 8efbd61365bc
Successfully built 8efbd61365bc
Successfully tagged satis-server:latest

I've omitted part of the output, but at the end you should have a tagged Docker image that you can now deploy wherever you like!
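
If you want to smoke-test the image locally before deploying it (the Bitnami base image listens on 8080 by default), something like this should do it:

docker run --rm -p 8080:8080 satis-server:latest
curl http://localhost:8080/packages.json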

Intro

While I was reading the Composer docs looking for a way to speed up our setup, I discovered Satis. It allows you to generate a private package repository. Here's how I got it set up.

I work on a Symfony project at my day job. One of the pain points in developing it was any time we needed to add or update a dependency. Like most PHP projects, we use Composer for managing our dependencies. The problem was that instead of using packages from Packagist, most of our packages are internal and hosted in Bitbucket, so any time we ran composer update, Composer would visit each repo, do a git fetch, and then have to parse all of the versions in that repo. It took forever.

Getting Started

To start off with, I created a new folder for this project:

mkdir satis-server
cd satis-server
git init
git commit --allow-empty -m "Initial commit"

Setting up the Satis composer.json

Satis is configured through a composer.json file just like a regular PHP project. The format looks pretty similar. Here's an example:

{
    "name": "work/satis-repo",
    "homepage": "https://satis.work.com",
    "repositories": [
        { "type": "vcs", "url": "ssh://bitbucket.work.com/mirrors/99designs.phumbor.git" }
    ],
    "require": {
        "99designs/phumbor": "*"
    },
    "require-dependencies": true,
    "require-dev-dependencies": true,
    "archive": {
        "directory": "dist",
        "format": "zip"
    }
}

Let's go over what each of these sections means.

  • name is pretty self-explanatory.
    • It will be shown at the top of your repo page.
  • homepage is very important.
    • Set this to the domain name that your server will live at. Whatever you put here shows up in any projects' composer.lock files that point to your satis server.
  • repositories lists where your server should look for packages.
    • These should be your private sources. You can use any of the usual repository types for a composer.json file.
  • require lists the versions of the packages that you want included in your satis server.
    • In my example, I'm including all versions of the phumbor package.
    • You can either list out all of the versions manually like I've done in my example, or you can replace it with the option "require-all": true, which will require all versions of all packages.
  • require-dependencies and require-dev-dependencies tell Satis to also include all of the dependencies and dev dependencies of the listed packages.
    • These aren't required but since my goal was speeding up our local composer functions, I wanted my server to have all of the packages that we'd need.
  • archive says to create local copies of dist files.
    • This is technically optional.
    • Using this option, our satis server will download dist files (zip files) of the dependencies when possible and host them. Without this option, you're relying on the remote repos to always be available.
    • There are more options that are possible including hosting all of the package files on a CDN. See the official docs for all of the possible options.

Building your server

Now that we've got a composer.json setup for satis, we need to build our server. I'm using Docker to do that but you can also install satis locally.

cp ~/.ssh/id_rsa .
cp ~/.ssh/id_rsa.pub .
docker run --rm --init -it \
  -e GIT_SSH_COMMAND="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no" \
  --volume ${PWD}:/build \
  --volume "${COMPOSER_HOME:-$HOME/.composer}:/composer" \
  --volume ${PWD}/id_rsa:/root/.ssh/id_rsa \
  --volume ${PWD}/id_rsa.pub:/root/.ssh/id_rsa.pub \
  composer/satis --no-ansi -v build /build/composer.json /build/output
rm id_rsa
rm id_rsa.pub

You'll note that I'm mounting in my personal ssh credentials. This is only required because the repos that I use at work aren't public. If all of the repos that you use are public, you probably don't need to do that.

Once you run that, it'll take a while to download everything but at the end you should have a new folder called output with contents like this:

😎  λ ~/projects/satis-server/ master* ls -havl output
total 192K
drwxr-xr-x 6 chrisj chrisj  204 Sep 11 10:46 .
drwxr-xr-x 7 chrisj chrisj  238 Sep 11 10:46 ..
drwxr-xr-x 3 chrisj chrisj  102 Sep 11 10:46 dist
drwxr-xr-x 3 chrisj chrisj  102 Sep 11 10:46 include
-rw-r--r-- 1 chrisj chrisj 186K Sep 11 10:46 index.html
-rw-r--r-- 1 chrisj chrisj  192 Sep 11 10:46 packages.json

That's a fully functioning satis repo! Throw those files into the root of your favorite server and you'll be able to point any PHP projects to it.
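
For example, pointing a project at this repo is just a matter of adding a composer-type repository to that project's composer.json, using the homepage URL from the Satis config (swap in your own domain):

{
    "repositories": [
        { "type": "composer", "url": "https://satis.work.com" }
    ]
}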

In the next post of this series, we'll explore how to dockerize this project.

The other morning, I was looking over a recent Jenkins build at work and realized that I had no idea why it took 5 minutes to build the container. When using Assetic [https://symfony.com/doc/2.8/frontend/assetic/asset_management.html] to build our assets, I can add a --profile flag and get line-by-line timings for each asset generated. I wanted that for docker build.

After a little research, I discovered the answer in a StackOverflow post [https://stackoverflow.com/a/51760937/1695439]. The ts program, which is part of the moreutils package, accepts data from a pipe and echoes a timestamp plus the original text.

I tried running it but got a "command not found" error in my terminal. To install the ts program, you need to install the moreutils package. On my Mac, I ran brew install moreutils. If you're not on a Mac, you should be able to install it through your OS's package manager.
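
On Debian or Ubuntu, for example, that should just be:

sudo apt install moreutils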

Now that I had the ts program installed, I was finally able to time my build.

λ ~/projects/foobar/ docker build . | ts
Sep 04 09:57:36 Sending build context to Docker daemon  4.096kB
Sep 04 09:57:36 Step 1/5 : FROM nginx:latest
Sep 04 09:57:36  ---> e445ab08b2be
Sep 04 09:57:36 Step 2/5 : WORKDIR /var/www/html
Sep 04 09:57:36  ---> Running in 045f73fdcb03
Sep 04 09:57:36 Removing intermediate container 045f73fdcb03
Sep 04 09:57:36  ---> 0cd92b8984a1
Sep 04 09:57:36 Step 3/5 : ADD index.html .
Sep 04 09:57:37  ---> a7f065a0e84d
Sep 04 09:57:37 Step 4/5 : USER www-data
Sep 04 09:57:37  ---> Running in e8058a99f42a
Sep 04 09:57:37 Removing intermediate container e8058a99f42a
Sep 04 09:57:37  ---> af9a4e545394
Sep 04 09:57:37 Step 5/5 : HEALTHCHECK CMD curl --fail http://localhost/ || exit 1
Sep 04 09:57:37  ---> Running in dffef986d061
Sep 04 09:57:37 Removing intermediate container dffef986d061
Sep 04 09:57:37  ---> 00c8c3bcbd61
Sep 04 09:57:37 Successfully built 00c8c3bcbd61
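
One extra trick: if you care more about how long each step takes than about wall-clock times, ts also has an -i flag that prints the time elapsed since the previous line instead of an absolute timestamp, which makes slow steps jump right out:

docker build . | ts -i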

Hello visitor! I want to be the first to welcome you to my new site. You might see a few pages break over the next few days while I get everything sorted out.

During week 4 of my class in Operating Systems, I learned about thread synchronization. Since this whole class is based in Java, we've been implementing all of the OS ideas in Java.

I've taken a Java class before where we dealt with threads, but it was quite a while ago. We used things like new Thread() and Thread.sleep(). I've now learned that both of those are quite outdated ways to deal with threads.

Anyway, I learned about CyclicBarriers [https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CyclicBarrier.html] and CountDownLatches [https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CountDownLatch.html] . A CyclicBarrier works in a similar manner to a CountDownLatch except that a CyclicBarrier can be reset and waited on several times. In practice, I couldn't get my threads to synchronize after resetting the CyclicBarrier so I had to use both the CountDownLatch and CyclicBarrier.

My own prediction is that TDD is the deciding factor. You don't need static type checking if you have 100% unit test coverage. And, as we have repeatedly seen, unit test coverage close to 100% can, and is, being achieved. What's more, the benefits of that achievement are enormous. Therefore, I predict, that as TDD becomes ever more accepted as a necessary professional discipline, dynamic languages will become the preferred languages.

Robert Martin (Uncle Bob) in Type Wars [http://blog.cleancoder.com/uncle-bob/2016/05/01/TypeWars.html]

Over the Christmas break, Ruby dropped a new version. I installed it as soon as I heard and I've been exploring it since.

There are a few things in it that I think will be super useful so I thought I'd write a bit about them.

Lonely Operator

This is a neat operator that Ruby borrowed from other languages. It looks kind of strange: &.. It allows you to collapse a bunch of nil checks into a single statement.

Since nil is treated the same as false in a Ruby if statement, if any link in the chain is nil then the whole condition evaluates to false instead of raising an error. An example:

Before 2.3

if person && person.address && person.address.country
  puts person.address.country
end

In 2.3

if person&.address&.country
  puts person.address.country
end

As you can see, the lonely operator makes for much cleaner-looking code.

New enumerable methods

I found out about these by reading a rossta.net [https://rossta.net/blog/whats-new-in-ruby-2-3-enumerable.html] blog post. The two new methods are grep_v and chunk_while.

Ruby Enumerables have had grep for quite a while. It has the same usage as the command line program grep: it lets you search through an enumerable for elements that match a pattern. The new grep_v does the opposite: it returns the elements that don't match.

I'll use the alphabet

alphabet = ("a".."z")

To get only the vowels, I'll use grep

alphabet.grep(/a|e|i|o|u/) => ["a", "e", "i", "o", "u"]

To get only the consonants, I'll use grep_v

alphabet.grep_v(/a|e|i|o|u/) => ["b", "c", "d", "f", "g", "h", "j", "k", "l", "m", "n", "p", "q", "r", "s", "t", "v", "w", "x", "y", "z"]

On to chunk_while. I understand it, but I can't think of a use case for it. It chunks consecutive elements together as long as the block returns true for each adjacent pair, and it is the opposite of slice_when, which starts a new group whenever the block returns true.

Our good old alphabet

alphabet = ("a".."z")

Using slice_when

alphabet.slice_when { |i| i =~ /a|e|i|o|u/ }.to_a => [["a"], ["b", "c", "d", "e"], ["f", "g", "h", "i"], ["j", "k", "l", "m", "n", "o"], ["p", "q", "r", "s", "t", "u"], ["v", "w", "x", "y", "z"]]

Using chunk_while gives the opposite

alphabet.chunk_while { |i| i =~ /a|e|i|o|u/ }.to_a => [["a", "b"], ["c"], ["d"], ["e", "f"], ["g"], ["h"], ["i", "j"], ["k"], ["l"], ["m"], ["n"], ["o", "p"], ["q"], ["r"], ["s"], ["t"], ["u", "v"], ["w"], ["x"], ["y"], ["z"]]

As you can see, slice_when starts a new array right after each vowel, while chunk_while with the same condition keeps each vowel grouped together with the letter that follows it.

Frozen strings

This is yet another change in 2.3. It allows you to freeze strings, which means you can't modify a string after it has been frozen. The main benefit is that Ruby can allocate fewer objects when a string is frozen. The performance benefits are so good that they're talking about making strings frozen by default in Ruby 3.0.

With a normal string

:004 > my_string = "Harry Potter"
 => "Harry Potter"
:005 > my_string[0] = "G"
 => "G"
:006 > puts my_string
Garry Potter
 => nil

With a frozen string

:009 > my_string = "Harry Potter"
 => "Harry Potter"
:010 > my_string.freeze
 => "Harry Potter"
:011 > my_string[0] = "G"
RuntimeError: can't modify frozen String
	from (irb):11:in `[]='
	from (irb):11
	from /home/chrisj/.rvm/rubies/ruby-2.3.0/bin/irb:11:in `<main>'

There are a ton of other changes both big and small. For further reading, check out this great post by Nithin Bekal [http://nithinbekal.com/posts/ruby-2-3-features/].