Wilcox Development Solutions Blog

Bitbucket Pipelines, Heroku and Rails

May 14, 2017

This weekend I took the time to play with Bitbucket Pipelines, a new feature from Bitbucket.

Often, my goal with my setup is “get something simple up and running simply”. This is why I like hosting on Bitbucket (free private repositories), and the Pipelines feature now lets me stand up a simple CI pipeline without involving extra tools.

With a little bit of work, I now have a template for Rails apps deploying to Heroku. (I’m not using Heroku Pipelines for this because they assume GitHub repositories. I may use that part in the future to promote some code from staging to production… but right now the app isn’t that fancy.)

bitbucket-pipelines.yml file

image:
  name: rwilcox/rails-mysql-node:latest
  username: $DOCKERHUB_USERNAME
  password: $DOCKERHUB_PASSWORD
  email: $DOCKERHUB_EMAIL

pipelines:
  branches:
    master:
      - step:
          script:
            - bundle install
            - cp config/sample_database.yml config/database.yml
            - "sed -i 's/  host: mysql/  host: 127.0.0.1/' config/database.yml"
            - RAILS_ENV=test rake db:create
            - RAILS_ENV=test rake db:schema:load
            - rake spec
            - echo "NOW DEPLOYING THE APP...."
            - deploy-scripts/heroku/package_and_deploy.sh myapp-staging
            - deploy-scripts/heroku/migrate.sh myapp-staging
            - deploy-scripts/heroku/restart.sh myapp-staging
            - echo "app deployed, now priming the cache..."
            - curl -s "http://myapp-staging.herokuapp.com"
          services:
            - database
definitions:
  services:
    database:
      image: mysql
      environment:
        MYSQL_ROOT_PASSWORD: CHANGEME

Let’s break this big file up into smaller pieces.

The image section: getting the test environment

Bitbucket Pipelines are built on top of Docker. Awesome, as my (new) development workflow is built on Docker too.

Bitbucket Pipelines provides a standard Docker image it uses to build your app. Included are things like Node, Python (2), Java, and Maven.

In our case - a Rails app - that doesn’t work: the standard image doesn’t come with Ruby. I also want to use MySQL as the data store, and I know the mysql2 gem requires the MySQL C client library for its bindings.

Thus, I could install those dependencies in my build pipeline, or I could just run my tests in a Docker container that already has the full suite of required software. Docker!!
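For contrast, the install-in-the-pipeline alternative would mean something like this at the top of every build (a hypothetical sketch, assuming the default Ubuntu-based image; the exact package names are my guess):

# hypothetical inline alternative: install the toolchain on every single build
# (this repeated work is exactly what the prebuilt image avoids)
apt-get update
apt-get install -y ruby ruby-dev build-essential libmysqlclient-dev nodejs
gem install bundler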

Bitbucket Pipelines don’t (yet) allow you to build a Docker image and then docker run the result, so building the container in the pipeline and running the tests inside it - which seemed like the easiest way - is currently off the table.
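If it were allowed, the flow I wanted would have been roughly this (a sketch; app-test is a hypothetical tag, Dockerfile.devel is the same file the Makefile below builds from):

# what I wanted: build the dev image in the pipeline, then run the tests inside it
docker build -t app-test -f Dockerfile.devel .
docker run --rm app-test rake spec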

So I thought about publishing my development Docker container to Amazon Elastic Container Registry. There’s a problem with that: ECR generates a password that’s only good for 12 hours, so I’d have to run a cron job to update an environment variable in the Bitbucket Pipeline…

… or I just create a Makefile, based on my development Docker environment, that publishes the image to Docker Hub.

Docker Hub is free for one private repository, and Bitbucket Pipelines can pull even private images stored there.

Makefile (for building and pushing development Docker environment to Docker Hub)

# Builds and uploads our dev image to Docker Hub.
# Required right now because Bitbucket pipelines can't build then run Docker containers
# (if it could then we would just build the container there then attach and run the tests).
#
login:
    docker login

build:
    docker build -t rwilcox/rails-mysql-node -f Dockerfile.devel .

push:
    docker push rwilcox/rails-mysql-node:latest

all: login build push
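With that, make all logs in, builds, and pushes a fresh image to Docker Hub whenever the development environment changes.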

The steps section

Currently a pipeline can have only one step, so I jam testing and deployment into the same step. Normally I’d separate these, as they’re separate actions…

cp config/sample_database.yml config/database.yml

I gitignore config/database.yml, so the pipeline must generate it.

sed -i 's/  host: mysql/  host: 127.0.0.1/' config/database.yml

My config/sample_database.yml file assumes I have another Docker container (thanks to Docker Compose) named mysql. Use sed to rewrite the mysql hostname to 127.0.0.1: Bitbucket Pipeline services are reached over localhost, so I must target that. (Specifically 127.0.0.1 rather than localhost, because the mysql2 gem assumes localhost means socket communication, not TCP/IP.)
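For illustration, here’s roughly what that rewrite does (a sketch; my real sample_database.yml has more in it than this minimal stand-in):

# minimal stand-in for config/sample_database.yml, just to show the sed rewrite
mkdir -p config
cat > config/sample_database.yml <<'EOF'
test:
  adapter: mysql2
  host: mysql
  username: root
EOF

cp config/sample_database.yml config/database.yml
sed -i 's/  host: mysql/  host: 127.0.0.1/' config/database.yml
grep 'host:' config/database.yml    # now prints:   host: 127.0.0.1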

The deployment steps

For any Heroku Rails deployment there are three steps:

  1. Deploy the code to Heroku, usually via the famous “git push” based deployment model.
  2. Run database migrations (rake db:migrate) on Heroku.
  3. Restart the application on Heroku, so the running code matches the freshly migrated database.

We can duplicate these steps in code here, but we can’t use the normal heroku command line tool: there are warnings about how setting the HEROKU_API_KEY environment variable can interfere with some operations of the heroku CLI.

There’s an awesome SO answer on the various ways you can get a headless CI server authenticating with Heroku. It discusses feeding the username and password to heroku login (which I don’t think will work if you have 2FA turned on!), just using HEROKU_API_KEY anyway, and writing your own .netrc file.
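For the record, the .netrc approach looks something like this (a sketch, assuming HEROKU_EMAIL and HEROKU_API_KEY are set as secured pipeline variables; note the deploy scripts below pass -n to curl, which reads a .netrc if one exists):

# write a .netrc so headless tools can authenticate against Heroku
cat > ~/.netrc <<EOF
machine api.heroku.com
  login $HEROKU_EMAIL
  password $HEROKU_API_KEY
machine git.heroku.com
  login $HEROKU_EMAIL
  password $HEROKU_API_KEY
EOF
chmod 600 ~/.netrc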

None of these alternatives are super great. Heroku does provide a rich API, though, and (with a bit of fiddling) I have several API scripts that will do all three steps.

Deploy to Heroku (deploy-scripts/heroku/package_and_deploy.sh)

#!/bin/bash
#
# FROM: https://bitbucket.org/rjst/heroku-deploy  
# Bash script to deploy to Heroku from Bitbucket Pipelines (or any other build system, with
# some simple modifications)
#
# This script depends on one environment variable being set in Bitbucket Pipelines:
# 1. $HEROKU_API_KEY - https://devcenter.heroku.com/articles/platform-api-quickstart
#

git archive --format=tar.gz -o deploy.tgz $BITBUCKET_COMMIT

HEROKU_VERSION=$BITBUCKET_COMMIT # BITBUCKET_COMMIT is populated automatically by Pipelines
APP_NAME=$1

echo "Deploying Heroku Version $HEROKU_VERSION"

URL_BLOB=`curl -s -n -X POST https://api.heroku.com/apps/$APP_NAME/sources \
-H 'Accept: application/vnd.heroku+json; version=3' \
-H "Authorization: Bearer $HEROKU_API_KEY"`

# debugging aid: dump the whole sources response into the build log
echo $URL_BLOB | python -c 'import sys, json; print(json.load(sys.stdin))'
PUT_URL=`echo $URL_BLOB | python -c 'import sys, json; print(json.load(sys.stdin)["source_blob"]["put_url"])'`
GET_URL=`echo $URL_BLOB | python -c 'import sys, json; print(json.load(sys.stdin)["source_blob"]["get_url"])'`

# upload the source tarball to the pre-signed PUT URL
curl $PUT_URL  -X PUT -H 'Content-Type:' --data-binary @deploy.tgz

REQ_DATA="{\"source_blob\": {\"url\":\"$GET_URL\", \"version\": \"$HEROKU_VERSION\"}}"

BUILD_OUTPUT=`curl -s -n -X POST https://api.heroku.com/apps/$APP_NAME/builds \
-d "$REQ_DATA" \
-H 'Accept: application/vnd.heroku+json; version=3' \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $HEROKU_API_KEY"`

STREAM_URL=`echo $BUILD_OUTPUT | python -c 'import sys, json; print(json.load(sys.stdin)["output_stream_url"])'`

# stream the remote build's output into the pipeline log
curl $STREAM_URL

Straightforward coding, and I’m glad I found this snippet on the Internet.
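One thing the snippet doesn’t do is check whether the build actually succeeded. A possible follow-up (a sketch, using the build id from $BUILD_OUTPUT and Heroku’s build-info endpoint, and assuming the stream has finished by the time we ask):

# hypothetical follow-up: ask the API for the build's final status
BUILD_ID=`echo $BUILD_OUTPUT | python -c 'import sys, json; print(json.load(sys.stdin)["id"])'`
BUILD_STATUS=`curl -s -X GET https://api.heroku.com/apps/$APP_NAME/builds/$BUILD_ID \
-H 'Accept: application/vnd.heroku+json; version=3' \
-H "Authorization: Bearer $HEROKU_API_KEY" | \
python -c 'import sys, json; print(json.load(sys.stdin)["status"])'`

echo "Build status: $BUILD_STATUS"
[ "$BUILD_STATUS" = "succeeded" ]    # non-zero exit status fails the pipeline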

Migrate Database (deploy-scripts/heroku/migrate.sh)

#!/bin/bash

mkdir -p tmp/

newDyno=$(curl -n -s -X POST https://api.heroku.com/apps/$1/dynos \
   -H "Accept: application/json" \
   -H "Authorization: Bearer $HEROKU_API_KEY"\
   -H 'Accept: application/vnd.heroku+json; version=3' \
   -H 'Content-Type: application/json' \
   -d '{"command": "rake db:migrate; echo \"MIGRATION COMPLETE\"", "attach": false}' | tee tmp/migration_command |
python -c 'import sys, json; myin=sys.stdin; print( json.load(myin)["name"] )')

cat tmp/migration_command

echo "One-Shot dyno created for migration at: $newDyno"

# create a log session so we can monitor the completion of the command
logURL=$(curl -n -s -X POST https://api.heroku.com/apps/$1/log-sessions \
  -H "Accept: application/json" \
  -H "Authorization: Bearer $HEROKU_API_KEY" \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/vnd.heroku+json; version=3' \
  -d "{\"lines\": 100, \"dyno\": \"$newDyno\"}" | tee tmp/log_session_command | python -c 'import sys, json; myin=sys.stdin; print(json.load(myin)["logplex_url"])')

cat tmp/log_session_command

echo "sleeping for 30 "
echo "LOG STREAM AT $logURL"
sleep 30

curl -s $logURL > tmp/logfile
cat tmp/logfile
cat tmp/logfile | grep "MIGRATION COMPLETE" # MUST be last, exit status will trigger if text not found

Technically, when you run the heroku run command, you’re creating another dyno to run whatever your command is. We do the same thing here: we create a dyno, give it a command to run, then fetch the log output and check whether the migration completed.

This is not the best shell script: if the database migration takes longer than 30 seconds to complete we may get a false failure. I may need to tweak this part of the script in the future.
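A sketch of a more patient alternative: poll the log session in a loop instead of sleeping once (same variables as the script above):

# poll the log for the sentinel line, up to ~2 minutes, instead of one fixed sleep
for attempt in $(seq 1 12); do
  curl -s "$logURL" > tmp/logfile
  if grep -q "MIGRATION COMPLETE" tmp/logfile; then
    echo "migration finished after $attempt poll(s)"
    exit 0
  fi
  sleep 10
done
echo "migration did not report completion in time" >&2
exit 1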

Restart the app (deploy-scripts/heroku/restart.sh)

#!/bin/bash

curl -n -s -X DELETE https://api.heroku.com/apps/$1/dynos \
  -H "Content-Type: application/json" \
  -H "Accept: application/vnd.heroku+json; version=3" \
  -H "Authorization: Bearer $HEROKU_API_KEY"

sleep 10

This restarts the app (very abruptly, by deleting all the running dynos). The last step in the pipeline then performs the first web request against the Heroku app, an operation that sometimes takes “longer than normal”.
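If that first request ever starts timing out, the single curl in the pipeline could become a retry loop (a sketch, using the same staging URL as the pipeline file):

# retry the cache-priming request until the app answers, rather than curl-ing once
for attempt in $(seq 1 12); do
  if curl -sf "http://myapp-staging.herokuapp.com" > /dev/null; then
    echo "app is answering"
    break
  fi
  sleep 5
done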

(Service) definitions

Bitbucket has good documentation on the provided service definitions; the definitions block at the bottom of the pipeline file declares the mysql service that the build step attaches to.

Conclusion

With Bitbucket Pipelines I have a simple, one-stop place for CI/CD, requiring very little in the way of extra services. I like to keep simple, experimental projects simple, and migrate away from simple only when that stops working. Along the way I’ve created useful scripts I can take with me if I ever move from Bitbucket Pipelines to something more robust (while still targeting Heroku).