May 14, 2017

Bitbucket Pipelines, Heroku and Rails

Filed under: Uncategorized — Ryan Wilcox @ 3:43 pm

This weekend I took the time to play with Bitbucket Pipelines, a new feature from Bitbucket.

Often, my goal with my setup is “get something simple up and running simply”. This is why I like hosting on Bitbucket (free private repositories), and the Pipelines feature now lets me get a simple CI pipeline up without involving extra tools.

With a little bit of work, I now have a template for Rails apps deploying on Heroku. (I’m not using Heroku Pipelines for this because it assumes GitHub repositories. I may use that part in the future to promote some code from staging to production… but right now the app isn’t that fancy.)

bitbucket-pipelines.yml file

image:
  name: rwilcox/rails-mysql-node:latest

pipelines:
  default:
    - step:
        script:
          - bundle install
          - cp config/sample_database.yml config/database.yml
          - "sed -i 's/  host: mysql/  host:/' config/database.yml"
          - RAILS_ENV=test rake db:create
          - RAILS_ENV=test rake db:schema:load
          - rake spec
          - echo "NOW DEPLOYING THE APP...."
          - deploy-scripts/heroku/ myapp-staging
          - deploy-scripts/heroku/ myapp-staging
          - deploy-scripts/heroku/ myapp-staging
          - echo "app deployed, now priming the cache..."
          - curl -s ""
        services:
          - database

definitions:
  services:
    database:
      image: mysql

Let’s break this big file up into smaller pieces.

The image section: getting the test environment

Bitbucket Pipelines are built on top of Docker. Awesome, as my (new) development workflow is built on Docker too.

Bitbucket Pipelines has a standard Docker image it uses to build your app. Included are things like Node, Python (2), Java, and Maven.

In our case – a Rails app – that doesn’t work: the standard image doesn’t come with Ruby. I also want to use MySQL as the data store, and I know the mysql2 gem requires a C library for the MySQL bindings.

Thus, I could install those dependencies in my build pipeline, or I could just use a Docker container that already has the full suite of required software to run my tests. Docker!!

Bitbucket Pipelines doesn’t (yet) allow you to build a Docker image and then docker run in that built container, so I can’t build the container in the pipeline and run my tests inside it. That would have been the easiest way, but it’s not currently possible.

So I thought about publishing my development Docker container to Amazon Elastic Container Registry. There’s a problem with that: ECR generates a password that’s only good for 12 hours. So I’d either have to run a cron job to update an environment variable in the Bitbucket Pipeline…

… or I just create a Makefile, based on my development Docker environment, that publishes the image to Docker Hub.

For one private repository Docker Hub is free, and Bitbucket Pipelines can interact even with private images stored there.

Makefile (for building and pushing development Docker environment to Docker Hub)

# Builds and uploads our dev image to Docker Hub.
# Required right now because Bitbucket Pipelines can't build then run Docker containers
# (if it could then we would just build the container there then attach and run the tests).

login:
	docker login

build:
	docker build -t rwilcox/rails-mysql-node -f Dockerfile.devel .

push:
	docker push rwilcox/rails-mysql-node:latest

all: login build push

The steps section

Currently a pipeline can have only one step, so I jam testing and deployment into the same step. Normally I’d separate these, as they’re separate actions…

cp config/sample_database.yml config/database.yml

I gitignore config/database.yml, so the pipeline must generate it.

sed -i 's/  host: mysql/  host:/' config/database.yml

My config/sample_database.yml file assumes I have another Docker container (thanks to Docker Compose) named mysql. Bitbucket Pipeline services are accessed via localhost, so I use sed to rewrite the mysql hostname to target that. (I specifically target here because the mysql2 gem assumes that localhost means socket communication, not TCP/IP.)
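To make the rewrite concrete, here’s a minimal sketch of what that sed does. (The sample database.yml contents below are invented for illustration; only the host: line matters.)

```shell
# A stand-in for config/sample_database.yml, which assumes a Docker
# Compose service named "mysql" (file contents invented for this demo).
cat > /tmp/database.yml <<'EOF'
test:
  adapter: mysql2
  host: mysql
  database: myapp_test
EOF

# Rewrite the Compose hostname to -- the address Pipelines
# services answer on. Note the two leading spaces in the pattern (matching
# the YAML indentation), and that we deliberately avoid the bare word
# "localhost", which mysql2 would interpret as "use a socket".
sed -i 's/  host: mysql/  host:/' /tmp/database.yml

grep 'host:' /tmp/database.yml
```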

The deployment steps

For any Heroku Rails deployment there are three steps:

  1. Deploy the code to Heroku, usually via the famous “git push” based deployment model.
  2. Run the database migrations (rake db:migrate) on Heroku.
  3. Restart the application on Heroku, now that the database is correctly migrated for that app version.

We can duplicate these steps in code here, but we can’t use the normal heroku command line tool. There are warnings about how using the HEROKU_API_KEY environment variable can interfere with some operations of the heroku CLI tool.

There’s an awesome Stack Overflow answer on the various ways you can get a headless CI server authenticating with Heroku. It discusses feeding the username and password to heroku login (which I don’t think will work if you have 2FA turned on!), just using HEROKU_API_KEY anyway, and writing your own .netrc file.

None of these alternatives are super great. But Heroku does provide a rich API, and (with a bit of fiddling) I have several API scripts that will do all three steps.

Deploy to Heroku (deploy-scripts/heroku/

# FROM:  
# Bash script to deploy to Heroku from Bitbucket Pipelines (or any other build system, with
# some simple modifications)
# This script depends on two environment variables to be set in Bitbucket Pipelines

git archive --format=tar.gz -o deploy.tgz $BITBUCKET_COMMIT

HEROKU_VERSION=$BITBUCKET_COMMIT # BITBUCKET_COMMIT is populated automatically by Pipelines

echo "Deploying Heroku Version $HEROKU_VERSION"

URL_BLOB=`curl -s -n -X POST https://api.heroku.com/apps/$APP_NAME/sources \
-H 'Accept: application/vnd.heroku+json; version=3' \
-H "Authorization: Bearer $HEROKU_API_KEY"`

echo $URL_BLOB | python -c 'import sys, json; print(json.load(sys.stdin))'
PUT_URL=`echo $URL_BLOB | python -c 'import sys, json; print(json.load(sys.stdin)["source_blob"]["put_url"])'`
GET_URL=`echo $URL_BLOB | python -c 'import sys, json; print(json.load(sys.stdin)["source_blob"]["get_url"])'`

curl $PUT_URL  -X PUT -H 'Content-Type:' --data-binary @deploy.tgz

REQ_DATA="{\"source_blob\": {\"url\":\"$GET_URL\", \"version\": \"$HEROKU_VERSION\"}}"

BUILD_OUTPUT=`curl -s -n -X POST https://api.heroku.com/apps/$APP_NAME/builds \
-d "$REQ_DATA" \
-H 'Accept: application/vnd.heroku+json; version=3' \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $HEROKU_API_KEY"`

STREAM_URL=`echo $BUILD_OUTPUT | python -c 'import sys, json; print(json.load(sys.stdin)["output_stream_url"])'`


Straightforward coding, and I’m glad I found this snippet on the Internet.
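The python one-liner pattern in that script is worth calling out on its own. Here it is against a made-up payload shaped like Heroku’s source-creation response (the URLs are fake and I’m invoking python3 explicitly; only the structure matters):

```shell
# A fake response body with the same shape as Heroku's "create source" reply.
URL_BLOB='{"source_blob": {"put_url": "https://example.com/put", "get_url": "https://example.com/get"}}'

# Pull individual fields out of the JSON with a tiny inline Python program --
# handy on CI images where jq may not be installed.
PUT_URL=$(echo "$URL_BLOB" | python3 -c 'import sys, json; print(json.load(sys.stdin)["source_blob"]["put_url"])')
GET_URL=$(echo "$URL_BLOB" | python3 -c 'import sys, json; print(json.load(sys.stdin)["source_blob"]["get_url"])')

echo "$PUT_URL"   # → https://example.com/put
echo "$GET_URL"   # → https://example.com/get
```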

Migrate Database (deploy-scripts/heroku/


mkdir -p tmp/

newDyno=$(curl -n -s -X POST https://api.heroku.com/apps/$1/dynos \
   -H "Accept: application/json" \
   -H "Authorization: Bearer $HEROKU_API_KEY"\
   -H 'Accept: application/vnd.heroku+json; version=3' \
   -H 'Content-Type: application/json' \
   -d '{"command": "rake db:migrate; echo \"MIGRATION COMPLETE\"", "attach": "false"}' | tee tmp/migration_command |
python -c 'import sys, json; myin=sys.stdin; print( json.load(myin)["name"] )')

cat tmp/migration_command

echo "One-Shot dyno created for migration at: $newDyno"

# create a log session so we can monitor the completion of the command
logURL=$(curl -n -s -X POST https://api.heroku.com/apps/$1/log-sessions \
  -H "Accept: application/json" \
  -H "Authorization: Bearer $HEROKU_API_KEY" \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/vnd.heroku+json; version=3' \
  -d "{\"lines\": 100, \"dyno\": \"$newDyno\"}" | tee tmp/log_session_command | python -c 'import sys, json; myin=sys.stdin; print(json.load(myin)["logplex_url"])')

cat tmp/log_session_command

echo "sleeping for 30 "
echo "LOG STREAM AT $logURL"
sleep 30

curl -s $logURL > tmp/logfile
cat tmp/logfile
cat tmp/logfile | grep "MIGRATION COMPLETE" # MUST be last, exit status will trigger if text not found

Technically, when you run the heroku run command, you’re creating another dyno to run whatever your command is. We do the same thing here: we create a dyno, give it a command to run, then get the log information and see if the migration completed or not.

This is not the best shell script: if the database migration takes longer than 30 seconds to complete we may get a false failure. I may need to tweak this part of the script in the future.
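The “MUST be last” comment in the script deserves a quick demonstration: grep’s exit status is what fails the build when the marker line never shows up. A self-contained sketch (log contents invented):

```shell
# grep exits 0 when the pattern is found and 1 when it is not, and a CI
# step fails when its last command exits non-zero -- which is why the grep
# has to be the final line of the migration script.
printf 'starting migration\nMIGRATION COMPLETE\n' > /tmp/logfile_ok
printf 'starting migration\nboom\n'               > /tmp/logfile_bad

grep -q "MIGRATION COMPLETE" /tmp/logfile_ok;  ok_status=$?
grep -q "MIGRATION COMPLETE" /tmp/logfile_bad; bad_status=$?

echo "ok=$ok_status bad=$bad_status"   # → ok=0 bad=1
```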

Restart the app (deploy-scripts/heroku/


curl -n -s -X DELETE https://api.heroku.com/apps/$1/dynos \
  -H "Content-Type: application/json" \
  -H "Accept: application/vnd.heroku+json; version=3" \
  -H "Authorization: Bearer $HEROKU_API_KEY"

sleep 10

This restarts the app (very abruptly, by deleting all the running dynos). The last stage in the pipeline then performs the first web request on the Heroku box, an operation that sometimes takes “longer than normal”.

(Service) definitions

Bitbucket has good documentation on the provided service definitions.


With Bitbucket Pipelines I have a simple, one-stop place for CI/CD, requiring very little in the way of extra services. I like to keep simple, experimental projects simple, then migrate away from simple when that fails. I’ve also created useful scripts that can be reused if I decide to move away from Bitbucket Pipelines to something more robust (while still targeting Heroku).

March 10, 2012

A Rails Development Environment with Puppet

Filed under: Uncategorized — Ryan Wilcox @ 4:45 pm


I really enjoy using Vagrant to do my development. I’ve even posted my Base Vagrant Package.

But I found I was still doing the same things over and over again: setting up postgres, and setting up RVM. Yes, I had automated 80% of my “get a new Rails box up” process, but that extra 20% eluded me.

Today I said, “No more”

I’ve improved on my package – now we set up postgres and RVM.

Now, my strategy

My strategy for new virtual machines is now:

$ git clone git:// rpw_NEWPROJECT
$ vi Vagrantfile # tweak project paths etc
$ vi manifests/lucid32. # search for PROJECT_NAME, start adding packages and editing the RVM name
$ vagrant up

(go get a sandwich, because the first vagrant up will take 15 minutes)

Take a look

The best place to look at what I do is to dig into the manifest file.

I believe this provides one of the few examples on the net on how to do all these things together.


The RVM Puppet manifests require Puppet 2.7.11, released Feb 23, 2012. You may need to update the Puppet in your base boxes (sudo gem install puppet), then package them up again.
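If you’re unsure whether a base box is new enough, a quick sort -V comparison of version strings works. (A sketch: the installed version below is hard-coded for the demo where you’d really capture it from puppet --version.)

```shell
REQUIRED="2.7.11"
INSTALLED="2.7.19"   # hypothetical; really: INSTALLED=$(puppet --version)

# sort -V orders version strings numerically, so if the required version
# sorts first (or ties), the installed Puppet is new enough.
lowest=$(printf '%s\n%s\n' "$REQUIRED" "$INSTALLED" | sort -V | head -n 1)
if [ "$lowest" = "$REQUIRED" ]; then
    echo "Puppet $INSTALLED is new enough"
else
    echo "Puppet $INSTALLED is too old: sudo gem install puppet"
fi
```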

It’s important to remember that Puppet is a great tool for machine setup, but Capistrano (or your favorite deploy tool flavor) is better for deploying actual code.

Capistrano as a machine setup tool isn’t all that great. Even a simple task can get caught up in headbashing, as I found out when trying to set up RVM and gemsets. It’s so much easier when you’re using a tool (Puppet) that’s meant for those things.

Likewise, I wouldn’t want to use Puppet to deploy, bundle install, etc. The Capistrano community does all that stuff really well, thank you.

How to make your Rails deployments better with Puppet

Vagrant and virtual machines are great for development environments, but I’m also a big fan of reusing everything I can. Puppet was really meant to automate the management of servers… so why don’t we use the same Puppet files to manage our servers?

You can! Use the puppet modules and scripts in my vagrant base and Supply Drop.

If you’re using Vagrant to test this locally, do NOT set a provisioner – supply_drop will take care of that for us. (Normally you’d want a provisioner, because you don’t cap deploy:dev, but this is a simulated deploy test).

Your cap deploy:setup should call cap puppet:bootstrap:(appropriate OS here), in addition to creating a folder for your Git repo to land in. Nothing else.

cap deploy should call cap puppet:apply. (You may want to create a cap deploy:pre task, which checks the syntax of the puppet files as well as doing anything else you need).

The thing to watch out for here is that your Puppet file must be named puppet.pp in current versions of Supply Drop.

Using Puppet both ensures your development environments are consistent with your production environment, AND uses a tool that’s declarative instead of procedural (avoiding Capfile hell).

Supply Drop needs some improvements (for example, different machine roles in Capistrano should be able to have different puppet configs), but it’s a tool worth keeping an eye on.

Next Episode: A local QA stack with Vagrant

Over the upcoming weeks I hope to write about using Puppet, Vagrant and Supply Drop to bring up a local QA stack for your application. Your application stack probably has multiple machines (web front end, application server, database server), and you can replicate the entire stack with Vagrant. For now, there’s an excellent starter article: Bootstrapping Vagrant with Puppet.

February 24, 2012

My Standard Estimating (and project work) Workflow

Filed under: Uncategorized — Ryan Wilcox @ 4:45 pm

Today I decided to record what my workflow is when I get a new potential project.

Maybe you’re a new client and want to see what goes on behind the scenes when you say, “Ryan, can you do this project for me?” – especially if this is your first time working with me.

For most of my client projects I use a tool (being released today, I believe) called Projector PM.

In the estimation stage I use Projector PM to create a list of behaviors (features, essentially) I see in the app. I’ll send you that list of behaviors and you can review them.

In the implementation stage I’ll invite you to Projector PM. Here you can see some graphs: progress of the app vs. allocated budget remaining. You’ll also get emails about what I worked on.

I ended up making two videos:

January 19, 2012

Whitepaper on early stage startup advice / “So, you wanna do a startup”

Filed under: Uncategorized — Ryan Wilcox @ 1:53 pm

Today I ended up writing a white-paper in which I collect links for early startup people.

Particularly, early, non-technical, startup people.

My standard operating procedure, next time I hear a pitch about some startup needing a technical co-founder for equity, is to send them to this whitepaper.

I’ve helped a lot of startups and small businesses launch products – sometimes even minimally viable products that only might cost a few thousand dollars in development time. I’d love to do even more of that in the future.

However, the current startup environment seems to contain a lot of people who think, “I’ll get some coder who will gladly develop my killer idea, for a cut of the millions of dollars we’ll make! Who could resist that!!!”

This whitepaper, I hope, serves as an education device: doing a startup doesn’t mean brainstorming all day (brainstorming about the app before noon, then taking a long lunch and brainstorming about which small island you should buy with all your startup IPO money until bed-time).

Startups, sadly, take actual work.

So, I present: Early Stage Startups: Advice for founding a startup

October 29, 2011

Whitepaper on node.js

Filed under: Uncategorized — Ryan Wilcox @ 9:02 am

I spent some time last night giving node.js a serious look. I was looking for best practices from the node community, and didn’t find any resources there.

So I dug in and wrote my own.

Initially this was for internal research, but it seemed too good to keep secret.

So, I present: Node.js: research, analysis, and best practices

February 11, 2011

The Workflow Git Flow enables

Filed under: Uncategorized — Ryan Wilcox @ 10:31 am

Git flow is awesome, and easily accomplishes everything you might want.

This is (part) of an email I sent out today about the patterns git flow sets you up for, and what that means.

The Broken Build

> Hi guys,
> Can you guys huddle and see about getting the build working again?
> Thanks!
> – Ed The Manager

My Response

I think it’s important to use this time to highlight that most of the work happens in the “develop” branch in GitHub. There’s another branch, “master”, that is synced up with the latest when [redacted] does a release.

Master is where everything (sometimes the integration environment) gets its code from.

Take Home Point

Even if the build is broken on the develop branch [ed: which is what the original email was referring to], we’ll always have a production-ready build (on master).

Changes when the build is broken (for production)

If we absolutely must make a production change while the develop build is broken, that’s also possible with our process and this “git flow” tool we use. (We do our work and put it on master, not on develop.)

BUT if we release when we have a broken build, we:

(a) lose that “known good” state from which we can make critical fixes, and

(b) possibly introduce bugs (that the tests are trying to point out to us).

In Production

The coming pressure of “a bug in production” will also try to force the “release this branch now, broken build or not!” decision. Again: we have a process for that (production-critical bugs should happen as what git flow calls “hotfixes”).

The point: if we, as a whole Agile team, don’t release (putting the develop branch into the master branch) when the build is broken, we’ll always have a “known good” point from which we can issue emergency hotfixes for production issues.
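That hotfix flow can be simulated with plain git in a throwaway repository (git flow automates exactly these branch mechanics; the branch and file names below are invented for the demo):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b master
git config user.email "demo@example.com"
git config user.name "Demo"

# master holds the last known-good release; develop holds ongoing work.
echo "v1" > app.txt
git add app.txt
git commit -qm "release 1.0.0"
git branch develop

# A production bug arrives: branch the hotfix from master, NOT develop.
git checkout -qb hotfix/1.0.1 master
echo "v1 + critical fix" > app.txt
git commit -qam "critical production fix"

# Merge the hotfix back into BOTH master (to deploy) and develop (so the
# fix isn't lost when the next release happens).
git checkout -q master
git merge -q --no-ff -m "hotfix 1.0.1" hotfix/1.0.1
git checkout -q develop
git merge -q --no-ff -m "hotfix 1.0.1" hotfix/1.0.1

git show master:app.txt   # → v1 + critical fix
```

Note that develop could be completely broken during all of this: the hotfix never touches it until the final merge.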

And we can deploy this “master with the new hotfix” and start the deployment process with it.

Too Long; Didn’t Read

The develop build could stay broken for days, and it doesn’t matter,
with the pattern we are using. Production fixes can go in and the
client will see responsiveness.

The master build, yes – that’s what concerned stakeholders should (or could) watch to remain informed of the state of the production-ready code. Broken builds here are “drop everything” critical issues.

Hope this explains our process, and how we can accomplish change even
if we’re waiting on the build to be fixed elsewhere.


Git flow is a standard tool at Wilcox Development Solutions. We love that it enables developer productivity and forward motion, while letting us stay responsive to the customer on critical issues.

While having a broken develop build is bad, and should be fixed, ideally the team should manage this itself (via peer pressure and professionalism), instead of it being a critical management issue. Because management has other things to worry about.

January 11, 2011

Announcing: BBEdit-DSL (A Proof of concept script)

Filed under: Uncategorized — Ryan Wilcox @ 7:24 pm

BBEdit is a pretty awesome editor. While I’ve moved to TextMate for most of my text editing, I’ll occasionally come back to BBEdit because it has the tools I need.

In a magazine article I once compared TextMate to a M*A*S*H style surgery theatre, and BBEdit to a hospital operating room. A lot of the time I just need it done (M*A*S*H style), and sometimes I need other features (like a split-pane editor) where I need to pull out the big guns.

Today I needed to edit four 1,000-line Ruby files, so I opened them up in BBEdit and split the pane for each file. This worked awesomely until I needed to jump to a function anywhere in a file.

You see, the files were written in the Domain Specific Language “Shoulda” – which essentially adds a new way to declare functions to Ruby. TextMate sees these shoulda function declarations and adds them to the function popup, but BBEdit doesn’t.

A quick conversation with BBEdit support, and I realized I needed to find the solution myself. So I wrote a proof-of-concept Python script: BBEdit-DSL.

If you’re using Ruby on Rails with BBEdit, and specifically Shoulda, you should check it out.

December 14, 2010

Announcing: jQuery-xpath-ify bookmarklet

Filed under: Uncategorized — Ryan Wilcox @ 2:45 pm

Want to feed XPath expressions into jQuery in Firebug? Me too, so I wrote a bookmarklet: jquery-xpath-ify

June 28, 2010

Turbogears Example Code Repository

Filed under: Turbogears,Uncategorized — Ryan Wilcox @ 9:57 pm

So I’ve noticed there’s not one central place for Turbogears newbies to go to see a bunch of sample applications.

I’m trying to fix that.

To gauge interest, and as an easy/no hassle way to collect all these links, I started a public Google Doc that lists all the Turbogears examples on the web that I know about.

Check it out at: Turbogears Examples Public Google Doc.

At some point in time I’d love to work with the Turbogears team to make this part of their website, but I think showing them that there’s enough examples out there is an important first step.

Update: if Google docs gives you an error just hit refresh – it should sort itself out then.

May 29, 2010

Site Refresh

Filed under: Uncategorized — Ryan Wilcox @ 9:59 pm

When I first designed the Wilcox Development Solutions website it was meant to say one thing, stylistically: I design simple, clean websites that look functional.

It was 2003, after all: things should look simple. We still had people on 56K modems, running Internet Explorer on the Mac, and nobody had really figured out a good way to do layout anyway.

About a year ago I noticed that these problems had been solved: a whole lot more people got broadband, we have hyper-modern browsers now, and gosh are there some good-looking websites out there (thanks in part, I think, to Blueprint CSS). And the world revolves around web apps way more than it did in 2003.

So, time for a refresh. I started work towards the refresh in September, 2008.

Ever heard that saying, “The cobbler’s children have no shoes”? Yeah, it’s like that.

Tonight I put the finishing touches on the site and deployed it.

Because people are curious, here’s how it used to look:

And how it looks now:

Mostly the same content, but with a much more pleasing layout. The old layout had a good seven-year run, but it was time for this refresh.

In addition to BlueprintCSS, I have to thank my other partner in crime: Webby. This made (and will make) layout-level changes so much easier.

The cobbler’s kids now have nice looking shoes!
