September 30, 2018

Testing Docker Containers, images and services

Filed under: ResearchAndDevelopment — Ryan Wilcox @ 11:10 am


Containers give us a way to package software and ensure that the software we expect to be installed is, in fact, installed.

We are back to the days of a single binary to run your application, even though we all know there's a language runtime, third party dependencies, and half a Unix userspace in that carefully crafted container.

But do we know it's carefully crafted?

Can we test our containers to make sure? Even if we do the simplest of checks: is this software installed, might it run when it is deployed to an environment?

In fact, we can use conu, a Python module from Red Hat, to write tests that tell us!

I've created a Github repo showing this: rwilcox/testing_containers_with_conu, but this blog article annotates all the pieces.

Behaviors of Containers

It's often important to know what behaviors we are testing before we write code or tests. In this case I want to provide examples on how to write the following tests:

  1. As a tester I should be able to know if Ruby really is installed on this Ruby container
  2. As a tester I should be able to know if a microservice will start and return something, anything. A health check! Anything!

I'm not talking about integration tests in Docker containers – although I'll get close to that in this article. I'm talking simple tests: does this container really have the JRE? Does Ruby run, or segfault on launch because something else isn't installed?

As a tester I should be able to know if Ruby really is installed on this Ruby container

We all have base images for our applications: one general place to do 90% of the software installation all of our microservices need. Our service Dockerfiles should not be doing something as low level as installing Ruby; use a base image for that.
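As a sketch of that split (the image names here are made up for illustration), the base image installs the language runtime once, and every service builds FROM it:

```dockerfile
# hypothetical base image, published as mycompany/ruby-base
FROM debian:stretch-slim
RUN apt-get update && apt-get install -y ruby && rm -rf /var/lib/apt/lists/*

# a service Dockerfile then stays high level:
# FROM mycompany/ruby-base:1.0.0
# COPY . /app
```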

But testing these base images is somewhat hard: all we can do is check for the presence of some binary: there's really nothing to run, nothing to poke at.

Conu to the rescue

This is actually pretty easy in Conu. From my example on how to do this:

from conu import DockerBackend, DockerRunBuilder

def execute_container_testing_cmd():
    with DockerBackend() as backend:
        image = backend.ImageClass("nginix_rpw", "1.0.0")
        cmd = DockerRunBuilder(command=["which", "nginx"])
        our_container = image.run_via_binary(cmd)

        assert our_container.exit_code() == 0, "command not found"
        print("******* nginx was installed in container *******")

This isn't a great test, but it is a test: we know that nginx is installed. Does it run? We could test that too, but for now knowing that things could work is way better than "build and wish".

How do we integrate this in CI?

In your testing pipeline create a convention ("Docker tests go in docker/tests/") and call that Python script from your CI pipeline. You could use a Python testing framework if you wanted to (I also layered on the unittest module's test discovery features), but a failed Python assert raises an uncaught exception, so the script exits with a non-zero code, which is enough to trigger build step failure in most test runners.
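To see why a bare assert is enough for CI, here's a minimal demonstration (no conu involved): an uncaught AssertionError makes the interpreter exit non-zero, which is what most build runners key off of.

```python
# A script whose assert fails exits non-zero; one whose assert passes exits 0.
import subprocess
import sys

passing = subprocess.run([sys.executable, "-c", "assert True"]).returncode
failing = subprocess.run([sys.executable, "-c", "assert False, 'nginx not found'"]).returncode

print("passing exit code:", passing)
print("failing exit code:", failing)
```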

As a tester I should be able to know if a microservice will start and return something, anything. A health check! Anything!

This is a little bit harder than the first solution. We can't just check to see if some binary is installed, we want to know if it runs. But a microservice running is sometimes hard: often it requires a database, redis, some secret store, who knows.

So, how can a developer make sure these things exist for test? The answer: a Docker Compose file for Docker build testing!

Docker Compose as a solution for getting a valid "can it run?" test

Docker Compose lets you set containers that depend on other containers, so you can instruct Docker Compose to launch your database container before your microservice container.

Docker Compose is awesome because it does a number of things for us:

  1. Gives our Docker containers essentially a namespace. (This is based on the name of the containing folder: if you declare a 'postgres' container, Docker Compose will translate that to my_service_postgres.)
  2. Wires up the Docker networking so that these microservices can talk to each other. They also get their own separate Docker network, so they can/will only talk to services in the same compose file (by default).
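A minimal docker-compose-tests.yml sketch showing both points plus depends_on (service and image names are made up for illustration):

```yaml
# hypothetical docker-compose-tests.yml, living in a my_service/ folder
version: "3"
services:
  db:
    image: postgres:10      # Compose names the container my_service_db_1
  railsapp:
    image: railsapp:latest
    depends_on:
      - db                  # Compose launches db before railsapp
    ports:
      - "8081:8081"
```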

Outside the fairy-tale of Docker Compose

In the simplest world, those features plus the docker-compose.yml's depends_on statement work. More likely your microservice startup looks like:

  1. Launch database container for microservice
  2. Wait for the database to boot up, because it's that kind of database. I'm looking at you, Cassandra. (You could do this with conu too, which would be an improvement over the 'just wait 10 seconds and hope' I've seen everyone do.)
  3. Perform database migration to get the newly launched / blank database with some database structure or seed data
  4. Launch microservice container

From a CI/CD perspective, that's a lot of garbage we can't standardize.

Establish a convention

From a CI/CD perspective I want to run one thing to get the microservice up and running in Docker Compose, so I can run some simple health checks against it.

Let's call that file docker/

$ ls docker/

And an example contents of


set -e 

replace_railsapps_image_statement_with $1
docker-compose -f docker-compose-tests.yml start db
sleep 10
docker-compose -f docker-compose-tests.yml run railsapp rake db:setup
docker-compose -f docker-compose-tests.yml start railsapp
python3 # or call everything above this docker/

We want to test the container we've built, so from a CI/CD perspective we want to pass the tag/label of the image we built into this shell script. The shell script should do something clever with that, with sed or with i-don't-care.
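The real replace_railsapps_image_statement_with is the author's to write; here's one hypothetical way it could work, rewriting the image: line of the compose file with a regex instead of sed:

```python
import re

def replace_image_statement(compose_text, new_image):
    """Point the first image: line in a compose file at the image CI just built."""
    return re.sub(r"(image:\s*)\S+", r"\g<1>" + new_image, compose_text, count=1)

print(replace_image_statement("  image: railsapp:latest", "railsapp:build-abc123"))
```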

Now CI/CD just calls docker/ and is abstracted away from the mess of creating databases or whatever.

Conu for Docker Compose based, running, containers

It took me a couple days of background thought to realize how to get conu to test Docker Compose launched applications. I (eventually!) remembered that Docker Compose containers are just namespaced containers, at least from a docker ps perspective.

Given that, we could write some clever conu code to find our container and run a health check against it.

from conu import DockerBackend

def iterate_containers(name):
    with DockerBackend() as backend:
        for current in backend.list_containers():

            # need the name of the container, not the name of the image here,
            # as we may be running containers whose image name is the same (ie on a CI server)
            # BUT Docker Compose namespaces _container_ names
            docker_name = current.get_metadata().name
            if docker_name.find(name) > -1:
                return current

def is_container_running(containerId, containerName):
    with DockerBackend() as backend:
        container = backend.ContainerClass(None, containerId, containerName)

        assert container.is_running(), ("Container found, but is not running (%s)" % containerName)

        with container.http_client(port=8081) as request:
            res = request.get("/health")  # HAHA, http-echo returns what we say EXCEPT for /health. That is special. Thanks(??) Hashicorp. WD-rpw 09-29-2018
            text = res.text.strip()
            assert text == """{"status":"ok"}""", ("Text was %s" % text)

docker_compose_namespace = "test_docker_compose_service"
docker_compose_container_name = "%s_sit" % docker_compose_namespace
found = iterate_containers( docker_compose_container_name )

assert found is not None, ("No container found for %s" % docker_compose_container_name)
is_container_running( found.get_id(), found.get_image_name() )

print("****** TESTS DONE EVERYTHING IS FINE *********************")

Now it's same as it ever was: let Python's assert statements return non-zero error codes if something is borked.


We can now test Docker base images for their validity, and launch just enough of a microservice to test how that works. As a microservice may fail early if it can't find some of its dependencies (say, a connection to the database), we want to make sure those are there too: and in fact we can, with Docker Compose!

Conu is pretty awesome software, and with some conventions it gives us a nice CI/CD pipeline to make sure that higher deployment environments get images that at least launch. (QA testers get mad when they run into issues because the stupid Docker container didn't launch!)

Test code, test containers!

May 17, 2018

Jenkins, Groovy init scripts and custom Tools

Filed under: General Information,ResearchAndDevelopment — Ryan Wilcox @ 10:56 pm

I’ve been working with Jenkins quite a bit lately.

When I set up a system I want it to be as reproducible as possible: you can never trust hardware, especially when it’s virtual.

I found Accenture’s Jenkins Docker configuration. It’s super good, especially as a basis for sample code. Based on this code I was able to install and configure plugins (e.g. I set up a Node.js Jenkins tool, etc etc).

My Jenkins installation also uses the CustomTool Plugin extensively, to provide CLI tools to my Jenkins pipelines. So I wanted to add my custom tool configuration to my Jenkins init scripts.

There’s plenty of documentation on installing tools based on plugins (even a section of my learning notes!) but the custom tools plugin seems to be left out of this flurry of documentation. No longer!

Installing custom tools is a bit different from installing tools that come explicitly as part of a plugin, and here is some code that worked for me:

import jenkins.model.*
import com.cloudbees.jenkins.plugins.customtools.CustomTool;
import com.synopsys.arc.jenkinsci.plugins.customtools.versions.ToolVersionConfig;

def a = Jenkins.instance.getDescriptorByType(CustomTool.DescriptorImpl.class)
def installs = a.getInstallations()
def found = installs.find { it.getName() == "gcc" }

if ( found ) {
    println "gcc is already installed"
} else {
    println "installing gcc tool"

    def newI = new CustomTool("gcc", "/usr/local/gcc/", null, "bin", null, ToolVersionConfig.DEFAULT, null)
    installs += newI
    a.setInstallations( (com.cloudbees.jenkins.plugins.customtools.CustomTool[])installs );
}


March 25, 2018

Gatsby.js: in his house and in my house

Filed under: General Information,ResearchAndDevelopment — Ryan Wilcox @ 8:42 am


I’ve been playing with Gatsby.js. Gatsby is a static site renderer that uses React and GraphQL to make neat sites that a modern 2016+ web developer is gonna love… then renders these sites to plain ol’ static HTML.

So I started building something…

My goal: render my reading notes into a website

I’ve kept a wiki for a long time, and have been keeping notes in books for a very long time. Previously I would buy PDF copies of books and highlight and write notes in the PDF book. Since O’Reilly stopped selling PDF books I’ve started keeping notes in markdown files.

So, thus my goal: using gatsby to render my markdown notes into a pretty website.

Challenge One: create a simple site that renders my markdown files

This turns out to be well documented on the Gatsby site. This worked super well – I followed that almost exactly and made something kind of nice.

Challenge accomplished pretty easily!

Challenge Two: “Hmmm… what if I want to embed a Gatsby site into another site?”

I’ve worked on so many Rails apps where I end up writing a simple blog component because the site needed a technical or marketing blog in the same style as the rest of the site.

Likewise, my existing site is statically rendered, and if I used the learning / Gatsby site as a replacement for the Wiki then I’d want the styles to match. Which probably means recreating a pretty old design on old tech.

Soo…. what if I could:

  1. Get Gatsby to render my Markdown files
  2. Pull away enough React code to get the markup I care about
  3. Present it on my page

Because my Gatsby pages come from Markdown there’s no fancy React components or liveness on the site: just rendered text.

How Gatsby renders markup pages

Gatsby generates both a rendered React component for the file / path AND also generates the HTML statically.

Given the following structure:


And given a path for that document of /learning/gatsby, you can fetch the rendered HTML with curl http://localhost:9000/learnings/gatsby/index.html

Problem solved? I can make an AJAX request for the index.html page, then cleverly insert it into the host page, right?

Inserting Markdown HTML content into a host page: simple insert

Again, I suspect this works because these pages are rendered markdown. No React components on my content pages: I just wanted the rendered HTML.

So I created a sample host page and busted out some jQuery.

In my Gatsby template – what the Markdown walkthrough calls src/templates/blogTemplate.js – I gave a class to the returned div: <div className='learning-container'>.

In my host site I wrote the following Javascript function:

function htmlFromGatsby( url, callback ) {

    $.ajax( url, { complete: function( jqXHR, textStatus ) {

        var info = $.parseHTML( jqXHR.responseText, document, { keepScripts: false } )
        var whereItIs = $(info).find("div.learning-container") // seek until we find the container in our learningTemplate
        callback( whereItIs )
    } } )
}
Tada! My AJAX function retrieves the statically rendered Gatsby page, discards all the React stuff Gatsby added, goes into the container that has all my content, pulls it out, then adds it to the host document. Neat!!!

The index page of my notes site has a list of all my learnings. Because it's Gatsby, I construct that list with GraphQL and a custom React component that abstracts real HTML links.

My first tests started with a simple page of notes content. My next test was the index page: how would linking content work in a host website? And because the React Router takes care of all the page routing / location bar changing… well that’s a problem.

When I added my index page – chock-full of these links – my links just 404ed. OK, time to break out some more jQuery…

In my src/pages/index.js (my Gatsby site) I added an ID to the returned div: <div id="learning_site_index">, and I have my PostLink component create a <Link to={...} className="learning-link">.

Here’s the code I have on my host page:

$.ajax("http://localhost:8000/index.html", {complete: function( jqXHR, textStatus) {
    var info = $.parseHTML( jqXHR.responseText, document, { keepScripts: false } )

    var whereItIs = $(info).find("div#learning_site_index_content")
    $("#destination").html( whereItIs )

    // gotta do it here, attaching to all, because can't use live events to override an event handled by the control itself. WD-rpw 04-24-2018
    $("a.learning-link").on("click", function(evt) {
        var defaultWhere = $(this).attr("href")

        htmlFromGatsby("http://localhost:8000" + defaultWhere + "/index.html", function(outHtml) {
            $("#destination").html( outHtml )
        } )
        return false
    } )
} } )
There: we display the index page, and clicking a link will fetch and display that content on the page.

Conclusions and Problems

So now I have a simple solution for embedding Gatsby content into other sites. There are some problems:

  1. It works for mostly static pages, like Markdown Gatsby rendered into HTML. Your fancy React components won’t work here.
  2. My sample code is 2006 style jQuery. There’s little (no?) SEO friendliness, the back button is broken, etc etc.

But it’s a great little proof of concept for when you need to make a simple blog on a simple site and want to get something out there quickly.

Wishes for the future

I understand the complexities involved, but I’d love to have a way to use my Gatsby created components as part of an already React-ified site. To be able to plop a <GatsbyRenderComponent> into my site somewhere and load data, or run a GraphQL query about something Gatsby knows about, inside and alongside my custom “host” site components would be super cool.

February 18, 2018

Sending Heroku logs to CloudWatch Logs

Filed under: ResearchAndDevelopment — Ryan Wilcox @ 10:26 pm

For early stage experiments I like deploying to Heroku. Heroku’s free model will spin down servers that haven’t gotten traffic in a while, and none of my free experiments have enough continuous load to require anything more.

When I set up a Heroku app I usually set up Cloudfront as a CDN and S3 as blob storage. Heroku is already just a layer over AWS EC2, so these choices are staying in the ecosystem, and easy to set up too.

The other week I realized I could set up logging the same way: send to Cloudwatch Logs. I’ve used Cloudwatch Logs for other production applications, and it’s an OK common denominator tool. Cheap, and I can hook up subscription filters to send it to some other microservice for further processing.

So I wrote heroku-cloudwatch-sync: you deploy the app, set up a Heroku drain, and it goes to the specified Cloudwatch Log Group and Log Stream.

This was my first time with AWS Lambda functions, and the current state of the world allows pretty rapid prototyping by just editing the file through a web based editor. For more complex things I ended up writing a complex Makefile to take care of packaging and deploying the script, and Cloudformation templates to set up the infrastructure.

When I do further serverless work I’ll have to check to see if the serverless frameworks have any wins here: I don’t know if they make the deployment story any better or worse than what I figured out.

Anyway, my heroku cloudwatch sync provides an easy and cheap way to get logs from Heroku into an easy and cheap log store!

May 14, 2017

Bitbucket Pipelines, Heroku and Rails

Filed under: Uncategorized — Ryan Wilcox @ 3:43 pm

This weekend I took the time to play with Bitbucket Pipelines, a new feature from Bitbucket.

Often, my goal with my setup is “get something simple up and running simply”. This is why I like hosting on Bitbucket (free private repositories), and the pipeline’s feature now lets me get a simple CI pipeline up, without involving extra tools.

With a little bit of work, now I have a template for Rails apps deploying on Heroku. (I’m not using Heroku Pipelines for this because it assumes Github repositories. I may use that part in the future to promote some code from staging to production… but right now the app isn’t that fancy.)

bitbucket-pipelines.yml file

image:
  name: rwilcox/rails-mysql-node:latest

pipelines:
  default:
      - step:
          script:
            - bundle install
            - cp config/sample_database.yml config/database.yml
            - "sed -i 's/  host: mysql/  host:' config/database.yml"
            - RAILS_ENV=test rake db:create
            - RAILS_ENV=test rake db:schema:load
            - rake spec
            - echo "NOW DEPLOYING THE APP...."
            - deploy-scripts/heroku/ myapp-staging
            - deploy-scripts/heroku/ myapp-staging
            - deploy-scripts/heroku/ myapp-staging
            - echo "app deployed, now priming the cache..."
            - curl -s ""
          services:
            - database

definitions:
  services:
    database:
      image: mysql

Let’s break this big file up into smaller pieces.

The image section: getting the test environment

Bitbucket Pipelines are built on top of Docker. Awesome, as my (new) development workflow is built on Docker too.

Bitbucket Pipelines has a standard Docker image it uses to build your app. Included are things like Node, Python (2), Java, and Maven.

In our case – a Rails app – that doesn’t work: the standard image doesn’t come with Ruby. I also want to use mysql as the data store, and I know the mysql2 gem requires a C library for mysql bindings.

Thus, I could install those dependencies in my build pipeline, or I could just use a Docker container to run my tests with the full suite of required software. Docker!!

Bitbucket Pipelines don’t (yet) allow you to build a Docker image and then docker run that built container, so I can’t build the container in the pipeline and run the tests inside it. That seemed like the easiest way, but it's not currently allowed.

So I thought about publishing my development Docker container to Amazon Elastic Container Registry. There’s some problems with that: ECR generates a password that’s only good for 12 hours. So I either run a cron job to update an environmental variable in the Bitbucket Pipeline…

… or I just create a Makefile, based on my development Docker environment, that publishes the image to Docker Hub.

For one private repository Docker Hub is free, and Bitbucket Pipelines can interact even with private images stored there.

Makefile (for building and pushing development Docker environment to Docker Hub)

# Builds and uploads our dev image to Docker Hub.
# Required right now because Bitbucket pipelines can't build then run Docker containers
# (if it could then we would just build the container there then attach and run the tests).
    docker login

    docker build -t rwilcox/rails-mysql-node -f Dockerfile.devel .

    docker push rwilcox/rails-mysql-node:latest

all: login build push

The steps section

Currently a pipeline can have only one step, so I jam testing and deployment into the same step. Normally I’d separate these, as they’re separate actions….

cp config/sample_database.yml config/database.yml

I gitignore config/database.yml, so the pipeline must generate it.

sed -i ‘s/ host: mysql/ host:’ config/database.yml

My config/sample_database.yml file assumes I have another Docker container (thanks to Docker Compose) named mysql. Bitbucket Pipeline services are accessed via localhost instead, so I use sed to rewrite the mysql hostname. (Specifically I target the loopback IP here, because mysql2 assumes that localhost means socket communication, not TCP/IP.)

The deployment steps

For any Heroku Rails deployment there are three steps:

  1. Deploy the code to Heroku, usually via the famous “git push” based deployment model.
  2. Run database migrations (rake db:migrate) on Heroku
  3. Restart the applications on Heroku, as now the database is correctly migrated for that app version.

We can duplicate these in code here, but we can’t use the normal heroku command line tool. There’s warnings about how using HEROKU_API_KEY environmental variable can interfere with some operations of the heroku CLI tool.

There’s an awesome SO answer on the various ways you can get a headless CI server authenticating with Heroku, which discusses feeding the username and password to heroku login (which I don’t think will work if you have 2FA turned on!), just using HEROKU_API_KEY anyway, and writing your own .netrc file.

None of these alternatives is super great. Heroku does provide a rich API, and (with a bit of fiddling) I have several API scripts that will do all three steps.

Deploy to Heroku (deploy-scripts/heroku/

# FROM:  
# Bash script to deploy to Heroku from Bitbucket Pipelines (or any other build system, with
# some simple modifications)
# This script depends on two environment variables to be set in Bitbucket Pipelines

git archive --format=tar.gz -o deploy.tgz $BITBUCKET_COMMIT

HEROKU_VERSION=$BITBUCKET_COMMIT # BITBUCKET_COMMIT is populated automatically by Pipelines

echo "Deploying Heroku Version $HEROKU_VERSION"

URL_BLOB=`curl -s -n -X POST$APP_NAME/sources \
-H 'Accept: application/vnd.heroku+json; version=3' \
-H "Authorization: Bearer $HEROKU_API_KEY"`

echo $URL_BLOB | python -c 'import sys, json; print(json.load(sys.stdin))'
PUT_URL=`echo $URL_BLOB | python -c 'import sys, json; print(json.load(sys.stdin)["source_blob"]["put_url"])'`
GET_URL=`echo $URL_BLOB | python -c 'import sys, json; print(json.load(sys.stdin)["source_blob"]["get_url"])'`

curl $PUT_URL  -X PUT -H 'Content-Type:' --data-binary @deploy.tgz

REQ_DATA="{\"source_blob\": {\"url\":\"$GET_URL\", \"version\": \"$HEROKU_VERSION\"}}"

BUILD_OUTPUT=`curl -s -n -X POST$APP_NAME/builds \
-d "$REQ_DATA" \
-H 'Accept: application/vnd.heroku+json; version=3' \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $HEROKU_API_KEY"`

STREAM_URL=`echo $BUILD_OUTPUT | python -c 'import sys, json; print(json.load(sys.stdin)["output_stream_url"])'`


Straightforward coding, and I’m glad I found this snippet on the Internet.

Migrate Database (deploy-scripts/heroku/


mkdir -p tmp/

newDyno=$(curl -n -s -X POST$1/dynos \
   -H "Accept: application/json" \
   -H "Authorization: Bearer $HEROKU_API_KEY"\
   -H 'Accept: application/vnd.heroku+json; version=3' \
   -H 'Content-Type: application/json' \
   -d '{"command": "rake db:migrate; echo \"MIGRATION COMPLETE\"", "attach": "false"}' | tee tmp/migration_command |
python -c 'import sys, json; myin=sys.stdin; print( json.load(myin)["name"] )')

cat tmp/migration_command

echo "One-Shot dyno created for migration at: $newDyno"

# create a log session so we can monitor the completion of the command
logURL=$(curl -n -s -X POST$1/log-sessions \
  -H "Accept: application/json" \
  -H "Authorization: Bearer $HEROKU_API_KEY" \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/vnd.heroku+json; version=3' \
  -d "{\"lines\": 100, \"dyno\": \"$newDyno\"}" | tee tmp/log_session_command | python -c 'import sys, json; myin=sys.stdin; print(json.load(myin)["logplex_url"])')

cat tmp/log_session_command

echo "sleeping for 30 "
echo "LOG STREAM AT $logURL"
sleep 30

curl -s $logURL > tmp/logfile
cat tmp/logfile
cat tmp/logfile | grep "MIGRATION COMPLETE" # MUST be last, exit status will trigger if text not found

Technically, when you run the heroku run command, you’re creating another dyno to run whatever your command is. We do the same thing here: we create a dyno, give it a command to run, then get the log information and see if the migration completed or not.

This is not the best shell script: if the database migration takes longer than 30 seconds to complete we may get a false failure. I may need to tweak this part of the script in the future.

Restart the app (deploy-scripts/heroku/


curl -n -s -X DELETE$1/dynos \
  -H "Content-Type: application/json" \
  -H "Accept: application/vnd.heroku+json; version=3" \
  -H "Authorization: Bearer $HEROKU_API_KEY"

sleep 10

This restarts the app (very abruptly, by deleting all the running dynos). The last stage in the pipeline goes and performs the first web request on the Heroku box, an operation that sometimes takes “longer than normal”.

(Service) definitions

Bitbucket has good documentation on the provided service definitions.


With Bitbucket Pipelines I have a simple, one stop, place for CI/CD, requiring very little in the way of extra services. I like to keep simple, experimental projects simple, then migrate away from simple when that fails. I’ve also created useful scripts that can be used if I decide to move away from Bitbucket Pipelines to something more robust (while still targeting Heroku).

January 9, 2017

Installing and exploring SpaceVim

Filed under: ResearchAndDevelopment — Ryan Wilcox @ 12:52 am


Posted on this blog as comments may be insightful. My other – more personal – blog doesn’t have comments, so this one will have to do. I’m hoping future comments will be helpful for getting this running on Ubuntu 18 or something.

I’m interested in text editors again. My main driver for the last 5(??) years only runs on OS X, but I’m not sure if my next laptop will be an OS X laptop. Maybe if I sit out the big couple-year USB-C transition….

Time to look into alternative text editors then. For the last 6 months I’ve been using Atom at work to write React. Atom doesn't feel like a good long term fit for me, even if its React / JSX autocomplete stuff is exceptional (and the only reason I stuck with it).

Then SpaceVim caught my eye. It’s a set of configurations for Vim that Makes Vim Better (Again?). Decided to try it, and spent a couple afternoons setting up Ubuntu 16 and SpaceVim.

Spoilers: I like SpaceVim. (Slightly undecided about Ubuntu.)

Install Vim 8 on Ubuntu 16:

  1. Follow instructions on Vim 8 on Ubuntu page
  2. On Xubuntu (what I run on an old netbook), you may need to install vim-gtk3. However, this seemed to Just Work on my Ubuntu 16 install on my (VM) desktop machine.
  3. sudo apt install exuberant-ctags

Install SpaceVim on Ubuntu 16

  1. Install dein, paste what it tells you into your .vimrc, fire up vim and run the install command
  2. Install SpaceVim
  3. Install fcitx: apt install fcitx
  4. Configure fcitx (run im-config; yes, change your config; set it to fcitx)

After all this, run vim. It will download some plugins. Now run vim something.js: it will download more/different plugins. Do this for other languages you use that SpaceVim supports, to cache their plugins.

Neat SpaceVim things

One of the reasons I persisted with SpaceVim was both because this group knows more about the state of the Vim art than I do, and because the GUI stuff looked really good. I hate remembering a ton of keyboard commands, so I like picking command options out of a menu, and SpaceVim, like Spacemacs before it, uses a neat type-ahead menu window to let me select what command I want to trigger. This certainly helps in a console app like Vim (even though I run with the menus on).

Some examples:

  • See / trigger all commands in a GUI search-ahead style menu: (normal mode) f+.
  • See all plugins installed: (normal mode): leader+lp

SpaceVim for Markdown

I wrote this blog entry in SpaceVim – it has some neat features specifically for writers, like :Goyo, which puts Vim into a distraction free writing mode. Toggle out by re-executing the ex command.

I created a user defined command alias for this called :Focus, because I’ll never remember :Goyo.

Making SpaceVim yours

SpaceVim has some opinionated defaults, some of which I don’t like. Check out ~/.vim/autoload/SpaceVim/default.vim. A couple of these opinions are:

  1. Menu bar off.
  2. Hide Toolbar
  3. Mouse off
  4. Font (DejaVu Sans Mono for Powerline, 11pt)

Luckily, SpaceVim has a couple customization points: put your customized settings in ~/.local.vim OR in ./.local.vim (allowing you to have project specific Vim settings).

To reverse these customizations:

  • Menu bar on: set guioptions+=m
  • Mouse: set mouse=a
  • Font: set guifont=Courier\ 10

Disabling plugins you don’t like (or can’t use)

SpaceVim lets you disable plugins you’re never going to, or can’t, use. For example, I found that the version of Vim I installed had Python 3 support instead of Python 2. IndentLines requires Python 2, and that choice is made exclusively at compile time.

We can disable the plugin per normal SpaceVim conventions by adding this line to our ~/.local.vim:

let g:spacevim_disabled_plugins=['identline']

Force disabling IndentLines

IndentLines is a pretty foundational bit of SpaceVim tech, so SpaceVim doesn’t actually like us doing this. We need to provide a stub implementation for IndentLinesToggle, the main Vim command in question. Also added to the ~/.local.vim file:

command! IndentLinesToggle ;

Loading your own Vim plugins

While most settings can be tweaked in the ~/.local.vim, this file is exec-ed before SpaceVim loads. This means that dein, or your Vim plugin manager of choice, hasn’t loaded yet.

I’ve added some new functionality to my local copy of SpaceVim, to load a .local.after.vim (in ~/ or ., just like the built in version). Here’s how:

To ~/.vim/vimrc add:

function! LoadCustomAfterConfig() abort
    let custom_confs = SpaceVim#util#globpath(getcwd(), '.local.after.vim')
    let custom_glob_conf = expand('~/.local.after.vim')
    if filereadable(custom_glob_conf)
        exec 'source ' . custom_glob_conf
    endif

    if !empty(custom_confs)
        exec 'source ' . custom_confs[0]
    endif
endfunction

call LoadCustomAfterConfig()

This allows me to keep my customizations – whether they need to be loaded before SpaceVim, or after SpaceVim – in a separate file, with only this one change to the core configuration.

One thing worth noting: if you add a new package to your .local.after.vim you’ll need to run :call dein#install() to download the package (perhaps it loads too late for the auto-downloader to pick it up).


SpaceVim is pretty good. There are some neat things I didn’t know Vim could do, and some settings that my previous Vim configuration had set too. So it’s like coming home to Vim, with the old familiar armchair and a nice hypermodern sofa too.

TL;DR: pretty neat, especially its use of Unite-style Vim plugins that give a nice type-ahead GUI for commands, buffers, etc. A welcome jumpstart back into the world of Vim, and better / easier than anything I could have put together myself.

Feels like there’s optimization that could be done (it lags pretty hard on my almost 8 year old netbook… on the other hand, 8 year old netbook….)

But that’s SpaceVim, and how to set it up on Ubuntu 16. Up, up and away!

July 31, 2016

Single file web components on legacy projects

Filed under: ResearchAndDevelopment — Ryan Wilcox @ 10:33 pm

I’m constantly looking for new innovations that I can use on brown-field projects. You know the ones: you come in and there’s a large engineering investment in a project, so architectural decisions have been made, and to change those decisions would require large amounts of capital (both political and money capital).

So we have legacy code. But we can still learn from other modern, greenfield projects, and apply new concepts from front end frameworks into our code.

We’ll examine one of these new front end ideas here: Single File Components.

Single File Components: The State of the Art in Javascript Land

Traditionally CSS, HTML and Javascript have been in separate files. Separate languages, separate files. React.js and Vue.js have an interesting idea: that often these pieces interact with each other at a component level – such that having the three different pieces in the same file makes sense.

For example, you’ve been given a task to implement a toolbar. Normally you create a couple of CSS classes and put them in a CSS file. You create an HTML structure, which maybe you put in a template somewhere. Documentation… somewhere else. Some Javascript to tie it all together, in a fourth place.

But what if everything lived in the same file, instead of being spread out? Because ultimately, you’re building a toolbar – a single piece of UI.

React.js solves this with JSX – a Javascript preprocessor that lets you splat HTML in the middle of your Javascript. JSX is somewhat interesting, but assumes you buy into (the DOM building parts of) React.

React is pretty cool, and – in my mind – is easier to integrate with pre-existing sites than say Angular. If you want a new, complicated, individual component on your site, maybe React is worth looking into.

But Vue.JS has another solution to the “let’s keep stuff about a single thing together”.

Vue.JS solves this with Single File Components.

Vue.js Component files take cues from HTML: Javascript lives in a script tag, CSS lives in a style tag, HTML templates live in a template tag. This might remind you of your first web pages, before you worried about separating out everything (for your one page/one file “HI MOM” page with the blink tags…).

Bringing the state of the art back home, to legacy-ville

With a little help from Gulp and Gulp VueSplit we can split out Vue Component files into their individual files.

The awesome thing about Gulp-Vuesplit is that it’s a simple text extraction: pulling out the different parts of a component file then writing them out to separate files.
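To make that concrete, here’s a toy sketch of that text-extraction idea (my own illustration, NOT gulp-vuesplit’s actual code):

```javascript
// Toy sketch of splitting a .vue component's source text by top-level tag.
// Not gulp-vuesplit's real implementation -- just the idea behind it.
function splitComponent(source) {
    var parts = {}
    var tags = ['template', 'script', 'style']

    tags.forEach(function(tag) {
        // non-greedy match of everything between <tag> and </tag>
        var match = source.match(new RegExp('<' + tag + '>([\\s\\S]*?)</' + tag + '>'))
        if (match) { parts[tag] = match[1].trim() }
    })
    return parts
}

var source = '<script>var x = 1</script>\n<style>.a { color: red }</style>'
console.log(splitComponent(source)) // { script: 'var x = 1', style: '.a { color: red }' }
```

Each extracted part then gets written out as its own file.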

Let’s take a look at a simple component:

// components/clicker.vue
<script>
function autoClickers() {
    var clickers = [].slice.call( document.querySelectorAll( '.clicker-clickers' ) ) // turn into iterable array

    clickers.forEach( function(current) {
        current.addEventListener( 'click', function() { alert( current.innerHTML ) }, false );
    } )
}

document.addEventListener( 'DOMContentLoaded', autoClickers )
</script>

<style>
.clickers {
}
</style>

We can split this file up with a simple Gulpfile:

var gulp     = require( 'gulp' )
var vueSplit = require( 'gulp-vuesplit' ).default

gulp.task('vuesplit', function() {
    return gulp.src('components/*.vue').
        pipe( vueSplit( { cssFilenameScoped: true } ) ).
        pipe( gulp.dest('dist/') )
} )

This generates two files – one for the JS and one for the CSS. (It would also generate a file for anything in a template tag, but we don’t have one here).

The generated JS is nothing remarkable – it’s just everything in the script tag, but the generated CSS is:

.clicker-clickers {
}

Notice the .clicker- prefix of our CSS class? That was generated by Gulp-VueSplit. It performs a kind of auto name-spacing, either with the file name prefixed, or with a unique hash suffix.
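As a toy illustration of the filename-prefix flavor (assumed behavior, not the plugin’s actual source):

```javascript
// Prefix every class selector in a component's CSS with the component's
// file name -- a sketch of the auto name-spacing described above.
function scopeCss(filename, css) {
    var base = filename.replace(/\.vue$/, '')
    return css.replace(/\.([A-Za-z][\w-]*)/g, '.' + base + '-$1')
}

console.log(scopeCss('clicker.vue', '.clickers { cursor: pointer }'))
// .clicker-clickers { cursor: pointer }
```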

Now that we’ve generated separate files, we can use them in an HTML page, even a simple one with no extra Javascript frameworks!

    <link rel="stylesheet" type="text/css" href="dist/clicker.css" />
    <script src="dist/clicker.js"></script>

    <p class="clicker-clickers">Hello world</p>
    <p>Not clickable</p>

January 16, 2016

Migrating Vagrant setup from Puppet 3 to Puppet 4 (manifestdir)

Filed under: ResearchAndDevelopment — Ryan Wilcox @ 10:11 pm

I like Puppet with Vagrant. Puppet 4 removed an option I really liked: manifestdir

You see, often when I’d start a greenfield project, I’d include a Vagrantfile so getting a new developer set up is one command. I’ve talked about this in the past.

Now-a-days I want to keep my puppet scripts in a folder, organized and slightly away from the main code.

Because I’m boring I call this puppet/.

manifestdir let me do that: shove everything into puppet/ and not see it. One simple flag passed into Puppet from Vagrant.

I knew Puppet 4 was going to remove manifestdir, but I could ignore the problem as long as Vagrant base boxes shipped with Puppet 3.7. Which they no longer do – it seems to be 4.x now-a-days.

Bitrot is rough in the DevOps world.

It also means I had to revisit territory from the first half of an earlier blog post.

I figured out how to solve my problem by abusing Puppet’s Environment system

Skip ahead and see the diff

In my Vagrant setup I’ll have Puppet modules specific to my app: telling Puppet to install this version of Ruby, that database, this Node package, whatever. I’ll also have third party modules: actually doing the heavy lifting of downloading the right Linux package for whatever, etc.

So, I’m building on some abstraction.

I use puppet module install to pull in third party modules. Puppet 4 puts them in a new place; I specify the environment when installing, to keep the cognitive dissonance low. I don’t strictly have to do this, but I think it’s good.

Note that we don’t want our third party modules to be in the same place as our app-specific modules: if we installed them in the same place, we’d have to deal with these extra files in our source tree.

You see, when Vagrant starts up it creates a folder: /tmp/vagrant-puppet/ – it’s a shared folder so anything extra put in there shows up in our source directory.

So puppet module install installs third party modules in one place, and Vagrant installs our modules in another place.

Here’s where environments come in:

  1. We set our environment_path in the Vagrantfile to be ./. This is where Puppet will go looking for environments to load
  2. We set our environment – aka the environment Vagrant will tell Puppet to use – to a folder named puppet in the source directory. (remember that?)
  3. Puppet environments can contain three things: a config file, a modules folder and a manifests folder
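In Vagrantfile terms, steps 1 and 2 look something like this (a sketch of the options described above):

```ruby
config.vm.provision "puppet" do |puppet|
  puppet.environment_path = "."       # step 1: where Puppet looks for environments
  puppet.environment      = "puppet"  # step 2: our environment is the ./puppet folder
end
```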

Our config file sets the path for modules: we tell Puppet to look in our puppet/modules folder, then in the directory where puppet module install downloads its modules, then in the base module directory.

We need our config file because by default Puppet will look for modules in our environment’s module path, and the base module directory… and not where puppet module install puts things. (or so it seems…)
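Put together, my environment.conf looks something like this (the middle path is an assumption; wherever puppet module install dropped modules on your box is what belongs there):

```ini
# puppet/environment.conf
# first our own modules, then the third-party install location, then the defaults
modulepath = modules:/etc/puppetlabs/code/environments/puppet/modules:$basemodulepath
```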

So that’s how you mis-use environments to get manifestdir “working” again.

May 24, 2015

Rapid System Level Development in Groovy

Filed under: ResearchAndDevelopment — Ryan Wilcox @ 2:53 pm

Introduction: Setting the stage

Lately I’ve found myself turning to Groovy for — oddly enough — system level development tasks. This is an unexpected turn of events, and a seemingly mad choice of technologies, to say the very least.

Why Groovy?:

  1. I can’t assume OS (so Unix command line tools are out). One of my recent tasks involved something pretty complex, so shell script was out anyway.
  2. I can’t assume Ruby or Python is installed on any of these machines, but I can assume the JVM is installed.
  3. Groovy is a not bad high level language that I’ve also been using for other larger (non system level) programs.
  4. Since I’m on the JVM I can make executable jars, bundling up Groovy and all other dependencies into an easy to run program.

That last point is the real kicker. I want these programs to be easy to run. Even so, as recently as three days ago I wouldn’t have imagined doing programming like this in Groovy.

But this article isn’t about my poor choices in system languages: it’s about a workflow for small Groovy tools, from inception to ending up with an executable jar.

“But, but, what about?”

“But, but, what about Go?” I hear you, and I almost wrote my scripts in Go. Especially with the new easy cross-compilation stuff coming in Go 1.5. I expect to write tools like this in Go in the latter half of 2015 (or: whenever Go 1.5 is released, plus probably a couple months). I don’t have the patience to learn how cross compilation works today (Go 1.4).

“But, but what… executable jars with Groovy?! Aren’t your jars huge?” Yeah, about 6MB a pop. I’ll admit this feels pretty outrageous for a 50 line Groovy program. I’m also typing this on a machine with 3/4ths of a TB of storage… so 6MB is not a dealbreaker for me at current scale. But it does make me sad.

“But, but what about JVM startup costs?” Yup, this is somewhat of a problem, even in my situation. Especially when in rapid development mode. This is another place where I almost wish I were writing in Go (cheap startup and compile times).

But this article is about rapid development in Groovy: going from an idea to releasing an executable jar, maybe for systems programming, maybe for other things.

Fast, initial development of small Groovy scripts

As a newcomer to the Groovy scene I’ve Googled for this information, and found a couple of disjointed (and in some cases bitrotted) pieces on how to do these things (primarily packaging executable jars created from Groovy source). I hope another person (newcomer or otherwise) finds it useful.

Create your maven structure (which we’ll promptly ignore)

$ mvn archetype:generate -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false -DgroupId=com.wilcoxd -DartifactId=RapidDev (with values appropriate for your groupId and artifactId)

Dig into the generated src/main/java/com/wilcoxd and:

  1. make a new Groovy file
  2. Open the new Groovy file in your editor
  3. Add package com.wilcoxd as the first line of the file. Substitute com.wilcoxd with the groupId you specified in the mvn archetype:generate command.

Semantically you should rename your java folder to groovy, but that doesn’t seem to work with the packaging process that creates the executable jar. Just leave it be (I guess).

Rapidly develop your Groovy project (with two tricks)

The nice thing about Groovy is that you can write your Groovy program just like you would write Ruby or Python or Javascript: just type code into a file and Groovy will Figure It Out(TM).

Trick One: develop running your script directly

  1. cd into /src/main/groovy/com/wilcoxd/
  2. Write your script in the body of your .groovy file.
  3. Occasionally pop out to the command line and run groovy RapidDev.groovy (or whatever your script is called)

Groovy does a fair bit of work to execute your (even unstructured!) code. There’s some magic here that I don’t fully understand, but whatever.

$ vi RapidDev.groovy
.... type type type...

$ cat RapidDev.groovy
package com.wilcoxd

println "hey world"

$ groovy RapidDev.groovy
hey world

Crazy talk! No class needed, I don’t even need a public static void main style function!

Trick Two: dependency “management” with @Grab

If you find yourself needing a third party module, use @Grab to get it.

We’ll set things up properly with Maven later. Right now we’re concentrating on getting our program working, and it turns out we need to make a RESTful API request (or whatever). We just need a third party module.

$ cat RapidDev.groovy
package com.wilcoxd

@Grab(group='com.github.groovy-wslite', module='groovy-wslite', version='1.1.0')

println("hello world!!!")

@Grab pulls in your dependencies even without Maven. I don’t want to introduce Maven here, because then I’d have to build and run via Maven (I guess?? Again, newbie in JVM land here…). Magic-ing them in with @Grab is probably good enough.

I’m sure @Grab is not sustainable for long term programs. But this isn’t a long term proposition: in fact, we’re going to comment out the @Grab the second we get this script done.

... iterate: type type, pop out and run, type type type...

$ groovy RapidDev.groovy

... IT WORKED! ...

We’re done coding! Now time to Set up pom.xml!

Yay, we’re done. Our rapid, iterative development cycle let us quickly explore a concept and get a couple dozen or a couple hundred lines of maybe unstructured code out. Whatever, we built a thing! Development in the small is nicer, sometimes, than development in the large: different rules apply.

But now we need to set up pom.xml, so it builds a jar for us.

Specify your main class property

Add this as a child of the <project> in your pom.xml:

<properties>
    <start-class>com.wilcoxd.RapidDev</start-class>
</properties>
Adjust the value of start-class as appropriate for your class / artifact ID from the mvn archetype:generate part of this.

Add Groovy and other third party modules you @Grab-ed into your <dependencies> section

Something like this (with the @Grab syntax for the wslite module we grabbed above translated to Maven syntax; the groovy-all version here is an example, so use the Groovy version you develop against), in the <dependencies> section:

<dependency>
    <groupId>org.codehaus.groovy</groupId>
    <artifactId>groovy-all</artifactId>
    <version>2.4.3</version>
</dependency>
<dependency>
    <groupId>com.github.groovy-wslite</groupId>
    <artifactId>groovy-wslite</artifactId>
    <version>1.1.0</version>
</dependency>
Once you have this in your pom, comment out the @Grab declaration in your source file

Add build plugin dependencies (another child of <project>). What follows is a typical maven-shade-plugin configuration (the plugin version is an example):

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>2.3</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                    <configuration>
                        <transformers>
                            <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                                <mainClass>${start-class}</mainClass>
                            </transformer>
                        </transformers>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

Know your Groovy code will be groovy, unstructured and all

As mentioned before, Groovy performs some magic to implicitly wrap a class around unstructured code. In fact, it will use the name of the file as the name of the class (so name your files like you would name Java classes!).

In our example, we’ve been editing RapidDev.groovy, which Groovy will wrap up in a class RapidDev declaration… or something. That package com.wilcoxd means Groovy will actually wrap our unstructured code into a class com.wilcoxd.RapidDev… which is a fine name and what we specified in our pom’s start-class property.


With a simple mvn package we can bundle our Groovy script up to an executable jar. A java -jar target/RapidDev-1.0-SNAPSHOT.jar runs it.

Which is awesome! I can take this and run it on any system with the JVM! I can write my “complex” systems level program once and run anywhere! I can reach deep into the Java ecosystem for spare parts to make my development easier, and still have a rapid development cycle one expects out of Python or Ruby.

Pretty neat!

March 2, 2015

A Deep Dive Into Vagrant, Puppet, and Hiera

Filed under: ResearchAndDevelopment — Ryan Wilcox @ 1:08 am

A Vagrant setup that supports my blog entry on “Vagrant / Puppet and Hiera Deep Dive”. Below is a reproduction of that article.


This weekend I spent far more time than I’d like diving deep into Puppet, Hiera and Vagrant.

Puppet is a configuration/automation tool for installing and setting up machines. I prefer Puppet to other competitors (such as Chef) for Reasons, even though I also use Chef.

Hiera is an interesting tool for Puppet (with no equivalent I’ve found in Chef): instead of setting variables in your configuration source, you do it in YAML (or JSON, or other backend) files. This ideally keeps your Puppet manifests (your configuration source code) more sharable and easier to manage. (Ever had a situation in general programming where you need to pass a variable into a function because it’s passed to another function three calls down the stack trace? Hiera also avoids that.)

However, documentation on Puppet and Hiera is pretty scarce – especially when used with Vagrant, which is how I like to use Puppet.

This article assumes you’re familiar with Vagrant.

My Vagrant use cases

I use (or have used) Vagrant for two things:

  1. To create local development VMs with exactly the tools I need for a project. (Sample)
  2. To create client serving infrastructure (mostly early stage stuff).

For use case #2, usually this is a client with a new presence just getting their online site ramped up. So I’m provisioning only a couple of boxes this way: I know this wouldn’t work for more than a couple dozen instances, but by then they’d be serving serious traffic.

My goal is to use Vagrant and 99% the same Puppet code to do both tasks, even though these are two very different use cases.

Thanks to Vagrant’s Multi VM support I can actually have these two VMs controlled in the same Vagrantfile

First, general Vagrant Puppet Setup Tricks

File Organization

I set my Vagrantfile’s puppet block to look like this:

config.vm.provision "puppet" do |puppet|
  puppet.manifests_path = "puppet/manifests"
  puppet.manifest_file  = "site.pp"

  puppet.module_path   = "puppet/modules"
end
Note how my manifests and modules folders are in a puppet folder. Our directory structure now looks like:

.
├── Vagrantfile
└── puppet/
    ├── manifests/
    │   └── site.pp
    └── modules/
Why? Vagrant, for me, is a tool that ties a bunch of other tools together: uniting virtual machine running with various provisioning tools locally and remotely. Plus the fact that the Vagrantfile is just Ruby means that I’m often pulling values out into a vagrantfile_config pattern, or writing tools or something. Thus, the more organization I can have at the top level the better.

Modules vs Manifests

I tend to create one module per project I’m trying to deploy. By that I mean if I’m deploying a Rails bookstore app, I’ll create a bookstore module. This module will contain all the manifests I need to get the bookstore up and running: manifests to configure MySQL, Rails, Redis, what-have-you.

Sometimes these individual manifests are simple (and honestly could probably be replaced with clever Hiera configs, once I dig into that more), and sometimes a step means configuring two or three things. (A “configure mysql” step, yes, needs to use an open source module to install MySQL, but it may also need to create a mysql user, create a folder with the correct permissions for the database files, set up a cron job to back up the database, etc.)
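As a sketch, the “configure mysql” manifest inside that bookstore module might look like this (all names here are hypothetical; the mysql classes come from a community module):

```puppet
# bookstore/manifests/mysql.pp -- hypothetical example
class bookstore::mysql {
  class { '::mysql::server': }        # the open source module does the heavy lifting

  mysql::db { 'bookstore':
    user     => 'bookstore',
    password => 'secret',             # in real life, look this up with Hiera
  }

  cron { 'backup-bookstore-db':
    command => '/usr/local/bin/backup_bookstore_db.sh',
    hour    => 2,
    minute  => 0,
  }
}
```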

I also assume I’ll be git subtree-ing a number of community modules directly into my codebase.

My puppet/manifests/ folder then ends up looking like a poor man’s Roles and Profiles setup. I take some liberties, but it’s likely that author is dealing with waaaaay more Puppet nodes than I’d ever imagine with this setup.

Pulling in third party Puppet modules

The third party Puppet community has already created infrastructure pieces I can use and customize, and has created a package manager to make installation easy. Except we need to run these package managers before we run Puppet on the instance!

Vagrant to the rescue! We can run multiple provisioning tasks (per instance!) in a Vagrantfile!

Before the config.vm.provision "puppet" line, we use a shell provisioner to install the Puppet modules we’ll need later:

    config.vm.provision :shell, :inline => "test -d /etc/puppet/modules/rvm || puppet module install maestrodev/rvm"

Because the shell provisioner will always run, we want to test that a Puppet module is not installed before we try to install it.

There are other ways to manage Puppet modules, but this simple inline shell command works for me. I’ll often install 4 or 5 third party modules this way, simply copy/pasting and changing the directory path and module name. As long as these lines come before the puppet configuration block, the modules will be installed before Puppet runs.
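Since the pattern is identical each time, those copy/pasted lines could also be folded into a loop. A sketch (the module names are just examples):

```shell
# Idempotently install several third-party Puppet modules.
# module_dir derives the install path from an "author/module" name.
module_dir() {
  # e.g. maestrodev/rvm -> /etc/puppet/modules/rvm
  echo "/etc/puppet/modules/$(echo "$1" | cut -d/ -f2)"
}

for mod in maestrodev/rvm puppetlabs/apt; do
  dir="$(module_dir "$mod")"
  # only install when missing, and only when puppet itself is available
  if [ ! -d "$dir" ] && command -v puppet >/dev/null 2>&1; then
    puppet module install "$mod"
  fi
done
```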

Uninstalling Old Puppet Versions (and installing the latest)

This weekend I discovered an Ubuntu 12 LTS box with a very old version of Puppet on it (2.7). I have a love/hate relationship with Ubuntu LTS: the LTS means Long Term Support, so nothing major changes over the course of maybe 5 years. Great for server stability. However, that also means the preinstalled software I depend on may be super old… and I may want or need a newer version.

I ended up writing the following bash script:

#!/usr/bin/env bash
# This removes ancient Puppet versions on the VM - if there IS any ancient
# version on it - so we can install the latest.
# It is meant to be run as part of a provisioning run by Vagrant
# so it must ONLY delete old versions (not current versions other stages have installed)
# It assumes that we're targeting Puppet 3.7 (modern as of Feb 2015...)

INSTALLED_PUPPET_VERSION=$(apt-cache policy puppet | grep "Installed: " | cut -d ":" -f 2 | xargs)
echo "Currently installed version: $INSTALLED_PUPPET_VERSION"

if [[ $INSTALLED_PUPPET_VERSION != 3.7* ]] ; then
  apt-get remove -y puppet=$INSTALLED_PUPPET_VERSION puppet-common=$INSTALLED_PUPPET_VERSION
  echo "Removed old Puppet version: $INSTALLED_PUPPET_VERSION"
fi
It assumes your desired Puppet version is 3.7.x, which should be good until Puppet 4.

I also have a script that installs Puppet if it’s not there (maybe it’s not there on the box/instance, OR our script above removed it). I got it from the makers of Vagrant themselves: puppet-bootstrap.

Again, added before the config.vm.provision :puppet bits:

config.vm.provision :shell, path: "vagrant_tools/"  # in case the VM has old crap installed...
config.vm.provision :shell, path: "vagrant_tools/"

Notice that both these shell scripts I store in a vagrant_tools directory, in the same folder as my Vagrantfile. My directory structure now looks like:

.
├── Vagrantfile
├── puppet/
│   ├── manifests/
│   └── modules/
└── vagrant_tools/
Puppet + Hiera

Using Hiera and Vagrant together is slightly awkward, especially since many of the Hiera conventions are meant to support dozens or hundreds of nodes… but we’re using Vagrant, so we may have one VM, or maybe a few more, but in the grand scheme of things the limit is pretty low. Low enough that Hiera gets in the way.


The way I figured out how to do this is to create a hiera folder in our puppet folder. My directory structure now looks like this:

.
├── Vagrantfile
├── puppet/
│   ├── hiera/
│   ├── manifests/
│   └── modules/
└── vagrant_tools/
A reminder at this point: the VM (and thus Puppet) has its own file system, dissociated from the file system on your host machine. Vagrant automates the creation of specified shared folders: opening a directory portal back to the host machine.

Implicitly, Vagrant creates shared folders for the manifests_path and module_path folders. (In fact, these can be arrays of paths to share, not just single paths!)

Anyway, our hiera folder must be shared manually.

Note here that Vagrant throws a curveball our way and introduces a bit of arbitrariness to where it creates the manifest and module folders. You’re going to have to watch the vagrant up console spew to see where this is: with the vagrant_hiera_deep_dive VM the output was as follows:

==> default: Mounting shared folders...
    default: /vagrant => /Users/rwilcox/Development/GitBased/vagrant_hiera_deep_dive
    default: /tmp/vagrant-puppet-3/manifests => /Users/rwilcox/Development/GitBased/vagrant_hiera_deep_dive/puppet/manifests
    default: /tmp/vagrant-puppet-3/modules-0 => /Users/rwilcox/Development/GitBased/vagrant_hiera_deep_dive/puppet/modules 

Notice the /tmp/vagrant-puppet-3/? That’s your curveball: it may be different for different VM names (but it is consistent: it’ll never change).

So, create the shared folder in the Vagrantfile:

config.vm.synced_folder("puppet/hiera", "/tmp/vagrant-puppet-3/hiera")

Likewise, we’ll want to add the following lines to the puppet block

puppet.hiera_config_path = "puppet/hiera/node_site_config.yaml"
puppet.working_directory = "/tmp/vagrant-puppet-3/"

Important notes about the hiera config

It’s important to know that Hiera only likes the .yaml extension, not .yml.

It’s also true that yes, having both the node_site_data.yaml and node_site_config.yaml files does feel a bit silly, especially at our current scale of one machine. Sadly this is not something we can fight and win; it’s a limitation of the system. Hiera’s documentation goes more into config vs data files.

But also note that the node_site_config file points to node_site_data, via Hiera’s config file format.
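A minimal sketch of that config file, in Hiera’s (pre-Puppet 4) config format, with the datadir matching the synced folder from earlier:

```yaml
# puppet/hiera/node_site_config.yaml
---
:backends:
  - yaml
:yaml:
  :datadir: "/tmp/vagrant-puppet-3/hiera"
:hierarchy:
  - node_site_data   # points at node_site_data.yaml in the datadir
```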


I’ve been using Vagrant and Puppet, at a very basic level, for a very long time (something like 5 years, I think). From best practices I’ve been using for years to new things I’ve just pieced together today, I hope this was helpful to someone.

Explore this article more by looking at the Vagrant setup on Github

Next Page »