Open a GitHub Issue From Slack!

The other day one of my co-workers opined that it’d be fantastic if we could open a GitHub issue from Slack. Fifteen minutes later the channel got to bask in the awesomeness… of this!

Read on to discover how to use Zapier (shameless plug: yes, I work on this) to whip this up quickly as well!

Opening the Issue

First up, we need to log in to Zapier and set up our first of two Zaps, the one that will create a new issue from Slack.

Now we’ll select our two services and the desired actions:

Next up, connect Slack and GitHub to Zapier.

When we get to step four, we’ll want to set up a custom filter so that we only trigger on Slack messages that contain !gh_issue.

At step five we’ll map the values from Slack into the GitHub issue. If you scroll back you’ll remember we used a specific format for our issue:

!gh_issue title(Junk Issue) description(Junk Issue!) repo(zapier/zapier-infra)

In Zapier-land, the elements in parentheses are extracted as variables. So when pulling from the trigger we get the raw text plus the extracted variables, with names like {{text__title}}, {{text__description}}, etc.

At step six we’ll load some samples.

Hrmph. All filtered out. Ah! We haven’t actually tried to create an issue from Slack. Let’s go do that now!

Now we go back to step six and refresh, and we should see a new unfiltered sample; clicking “See filter sample” on it shows exactly what will go to GitHub.

Looks good! Let’s go ahead and click “Test” and verify that the issue was actually created on GitHub.

Great! Let’s go ahead and name this Zap!

But that’s only half the story. It’d also be nice if there was some notification in the channel that it had been created. Not 100% needed, but it would be nice!

The Webhook

There are a few ways we could approach this:

  • Create a Zap that polls GitHub issues and alerts the channel of new issues
  • Set up a webhook through Zapier to push new issues instantly to Slack
  • Use the native Slack/GitHub integration on Slack to send the new issue notification

I’ll admit I didn’t have much luck using the native integration, despite wanting it to work since it would have required the least amount of setup. Polling was easy to set up, but it means anywhere from a one- to fifteen-minute delay between opening the issue and it being posted back to Slack. So I opted for the webhook route.

The Webhook Trigger on Zapier is immensely powerful. You can use it to poll a URL, catch incoming webhooks, and even send webhooks back out to other services. It’s pretty raw but it gets the job done, and it gets it done instantly.

Like last time, for step one we will select our services: Webhook to Slack!

In step two, we’ll be given a webhook URL we can copy and plug into GitHub. Let’s navigate to GitHub really quick to add it.

In our repository settings page on GitHub, let’s add a new webhook.

By default this will fire on all events. We don’t want that; we just want issue events.

This will be grayed out until an event fires, so let’s go back to Zapier and continue working on our Zap.

On step four, we’ll want to add a custom filter so that the Zap only triggers when the issue action is equal to “opened”. Otherwise this would fire on any issue activity, such as issues being closed or reopened.
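For reference, a new-issue delivery from GitHub’s issues webhook looks roughly like this (heavily trimmed; the field the filter keys on is action):

{
  "action": "opened",
  "issue": {
    "number": 42,
    "title": "Junk Issue",
    "html_url": "https://github.com/zapier/zapier-infra/issues/42"
  }
}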

The first time through you may get a modal pop up prompting you to go create a new issue when you try to select a field. This is because webhooks are instant and require a user interaction to take place first. So go create an issue (manually or from Slack, it doesn’t matter) and follow the instructions to get it caught by Zapier. Now we can select the field we need and move on. :-)

At step five it’s time to set up the channel the message will be sent to and what the message will be. I typically prefer to alert the channel of a new issue opened on a repository and then link to it.

There is also a field for Icon URL that can be used to plug in a specific icon for the Slack bot that broadcasts the message. I usually use a character of ours (Zapbot!) that is similar to Hubot, but Octocat fits well here too!

Now we’ll test the Zap and if all goes well, name it and set it live!

Welp, that wraps it up for us… hope you find these Zaps as useful as we have!

Installing Elasticsearch Plugins on Graylog2

Thought I’d share this since it was something I unfortunately spent a good portion of my afternoon wrestling with. So you want to use an elasticsearch plugin within graylog2-server? I don’t care what your reasons are, but this will help you do it. I’m going to go out on a limb and assume you want to use the kopf plugin to view cluster state, but this will work for any plugin.

1. Download the Plugins

This can be slightly tricky… I’ve found that the best option is to install ES 0.90.10 (or whatever version is compatible with your version of graylog2) and use it to install plugins. You’ll then move the plugins you want from the elasticsearch plugins directory into the graylog2 plugins directory you define (more on that below). But if you are familiar with the plugin structure that will be created, you can manually download and unzip the plugins into the graylog2 plugins directory.

So, for example, I’d do the following to install kopf (a site plugin) and cloud-aws (a java-based plugin).
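Something along these lines; a sketch that assumes a deb-installed ES living in /usr/share/elasticsearch and plugin versions that match your ES version:

# use the stock ES plugin tool to fetch the plugins
cd /usr/share/elasticsearch
sudo bin/plugin -install lmenezes/elasticsearch-kopf
sudo bin/plugin -install elasticsearch/elasticsearch-cloud-aws/1.16.0

# then move them over to the graylog2 plugins directory (defined below)
sudo mv plugins/kopf plugins/cloud-aws /opt/graylog2-server/plugins/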

2. Specify an Elasticsearch Config File For Graylog2

This one is easy: just make sure that elasticsearch_config_file = /etc/graylog2-elasticsearch.yml is set in your graylog2.conf. You can also just run a quick sed against the stock config file.
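Something like this, assuming the stock graylog2.conf ships with that line commented out:

sudo sed -i 's|^#\?\s*elasticsearch_config_file.*|elasticsearch_config_file = /etc/graylog2-elasticsearch.yml|' /etc/graylog2.conf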

3. Specify a Plugin Dir

You’ll need to tell elasticsearch where to actually look for the plugins, so add this to /etc/graylog2-elasticsearch.yml:


path.plugins: /opt/graylog2-server/plugins

4. Put Any Plugin Specific Configuration in graylog2-elasticsearch.yml

This is pretty much plugin specific, but you’ll do this following the plugin’s installation instructions.

I’m currently using this method to make my graylog2-server instance autojoin a specific cluster based on security group and EC2 tag and it works pretty well so far. :-)
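For the curious, the cloud-aws side of that lives in the same yml file and looks something like this (keys per the plugin’s README; the values here are made up):

discovery.type: ec2
discovery.ec2.groups: my-es-security-group
discovery.ec2.tag.cluster: my-cluster-name
cloud.aws.access_key: YOUR_ACCESS_KEY
cloud.aws.secret_key: YOUR_SECRET_KEY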

An Even Better Way to Use Puppet Modules in a Vagrant Project

I previously blogged about what I thought was a good way to tie librarian-puppet and vagrant together so that one could use librarian-puppet without dealing with rubygems on their host system (which, despite the excellent tooling, can be a pain for non-rubyists).

Today I discovered there is a handy vagrant plugin for this and found that using it is a much better approach.

For the quick and dirty on how to use it:
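Install the plugin once on the host, and vagrant will run librarian-puppet for the project automatically on vagrant up (this assumes a Puppetfile sits in the project root):

vagrant plugin install vagrant-librarian-puppet
vagrant up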

Suffice it to say, you should just use this instead.

Before You Start Your Day

Before you start your day, think: what is one thing you can do to enrich someone’s life? Whether it’s your family, schoolteachers, co-workers, or even a random person on the street. If you could find just one thing to do that would enrich someone’s life, and then multiply that across each and every day, can you imagine what kind of impact that could have?

Effective Puppet Module Management in Vagrant

I still remember my early forays into using vagrant and puppet together to provision local development environments. Everything was easy except figuring out a proper way to bundle puppet modules with a project. Basically it was a three-phase process of discovery.

1. Run “puppet module install” and add the modules to the git repo (not the brightest idea, but simple).
2. Add puppet modules as git submodules in the project. This turned out to be even more troublesome as adding/removing/updating modules became a real pain.
3. Use librarian-puppet to manage puppet modules as the dependencies they are.

The third option was the best… we could now simply add, remove, or upgrade puppet module versions in a Puppetfile and run “librarian-puppet install” to install the modules. But a final caveat wound up being that users had to install rubygems on their host machine, which can bring its own troubles. So why not just install the modules within the vagrant box when it comes up and be done with it?
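The trick is a small shell provisioner that runs before the puppet provisioner. A minimal sketch (the script name and module path are my own choices):

#!/usr/bin/env bash
# install-puppet-modules.sh: runs inside the guest before puppet apply
gem install librarian-puppet --no-ri --no-rdoc
cd /vagrant
librarian-puppet install --path=modules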

This effectively adds the Puppetfile in the root of the project to the guest machine and installs the modules, referencing the modules directory when running puppet apply. This works great as you can guarantee the same install across multiple environments where developers may or may not be familiar with rubygems. ;)

Securing Docker’s Remote API

One piece of docker that is AMAZING is the Remote API that can be used to programmatically interact with docker. I recently had a situation where I wanted to run many containers on a host, with a single container managing the others through the API. But the problem I soon discovered is that, at the moment, networking is an all-or-nothing kind of thing… you can’t turn networking off selectively on a container-by-container basis. You can disable IPv4 forwarding, but you can still reach the docker Remote API on the machine if you can guess its IP address.

One solution I came up with for this is to use nginx to expose the unix socket for docker over HTTPS and utilize client-side ssl certificates to only allow trusted containers to have access. I liked this setup a lot so I thought I would share how it’s done. Disclaimer: assumes some knowledge of docker!

Generate The SSL Certificates

We’ll use openssl to generate and self-sign the certs. Since this is for an internal service we’ll just sign it ourselves. We also remove the password from the keys so that we aren’t prompted for it each time we start nginx.

Another option may be to leave the passphrase in and provide it as an environment variable when running a docker container or through some other means as an extra layer of security.
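Here’s roughly what that looks like; a minimal sketch that builds a CA plus server and client certs, generating the keys without passphrases up front (tweak subjects and days to taste):

# certificate authority
openssl genrsa -out ca.key 4096
openssl req -new -x509 -days 365 -key ca.key -out ca.crt -subj "/CN=docker-ca"

# server certificate, signed by our CA
openssl genrsa -out server.key 4096
openssl req -new -key server.key -out server.csr -subj "/CN=localhost"
openssl x509 -req -days 365 -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt

# client certificate, also signed by our CA
openssl genrsa -out client.key 4096
openssl req -new -key client.key -out client.csr -subj "/CN=docker-client"
openssl x509 -req -days 365 -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt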

We’ll move ca.crt, server.key and server.crt to /etc/nginx/certs.

Setup Nginx

The nginx setup for this is pretty straightforward. We just listen for traffic on localhost on port 4242. We require client-side ssl certificate validation and reference the certificates we generated in the previous step. And most important of all, set up an upstream proxy to the docker unix socket. I simply overwrote what was already in /etc/nginx/sites-enabled/default.
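Here’s a sketch of that config; the cert paths match where we moved things in the previous step:

upstream docker {
    server unix:/var/run/docker.sock;
}

server {
    listen 127.0.0.1:4242 ssl;

    ssl_certificate        /etc/nginx/certs/server.crt;
    ssl_certificate_key    /etc/nginx/certs/server.key;

    # require a client certificate signed by our CA
    ssl_client_certificate /etc/nginx/certs/ca.crt;
    ssl_verify_client      on;

    location / {
        proxy_pass http://docker;
    }
}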

One important piece to make this work is you should add the user nginx runs as to the docker group so that it can read from the socket. This could be www-data, nginx, or something else!
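On a stock Debian/Ubuntu setup that’s typically:

sudo usermod -aG docker www-data
sudo service nginx restart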

Hack It Up!

With this setup in place and nginx restarted, let’s first run a few curl commands to make sure everything is set up correctly. First we’ll make calls without a proper client cert to double-check that we get denied access, then a proper one.
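Something like the following, assuming the v1.8 remote API paths of this era (adjust the version to your docker):

# 1. no client certificate at all: nginx should deny us
curl -k https://localhost:4242/v1.8/containers/json

# 2. a certificate our CA never signed: denied again
curl -k --cert untrusted.crt --key untrusted.key https://localhost:4242/v1.8/containers/json

# 3. the trusted client certificate pair: JSON!
curl -k --cert client.crt --key client.key https://localhost:4242/v1.8/containers/json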

For the first two we should get run-of-the-mill 400 HTTP response codes before we get a proper JSON response from the final command! Woot!

But wait there’s more… let’s build a container that can call the service to launch other containers!

For this example we’ll simply build two containers: one that has the client certificate and key and one that doesn’t. The code for these examples is pretty straightforward, and to save space I’ll leave the untrusted container out. You can view the untrusted container on github (although it is nothing exciting).

First, the node.js application that will connect and display information:
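A minimal sketch (the host, port, and API version assume the nginx proxy above):

// app.js
var https = require('https');
var fs = require('fs');

var options = {
  host: 'localhost',
  port: 4242,
  path: '/v1.8/containers/json',
  // the client certificate pair baked into the trusted container
  cert: fs.readFileSync('client.crt'),
  key: fs.readFileSync('client.key'),
  // the server cert is self-signed, so don't reject it
  rejectUnauthorized: false
};

https.get(options, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    console.log(JSON.parse(body));
  });
}).on('error', console.error);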

And the Dockerfile used to build the container. Notice we add the client.crt and client.key as part of building it!
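A sketch again; swap in whatever base image and node install you prefer (some distro packages install the binary as nodejs rather than node, so adjust the CMD accordingly):

FROM ubuntu:12.04
RUN apt-get update && apt-get install -y nodejs

# bake in the app plus the trusted client certificate pair
ADD app.js /app/app.js
ADD client.crt /app/client.crt
ADD client.key /app/client.key

WORKDIR /app
CMD ["node", "app.js"]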

That’s about it. Run docker build . and docker run -n <IMAGE ID> and we should see a JSON dump of the actively running containers in the console. Doing the same in the untrusted directory should present us with a 400 error about not providing a client ssl certificate. :)

I’ve shared a project with all this code plus a vagrant file on github for your own perusal. Enjoy!

Parameterized Docker Containers

I’ve been hacking a lot on docker at Zapier lately, and one thing I found somewhat cumbersome is that it seemed difficult to customize published containers without extending them and modifying files within them, or some other such mechanism. What I have come to discover is that you can publish containers that end users can customize without modification by utilizing one of the most important concepts from 12-factor application development: Store Configuration in the Environment.

Let’s use a really good example of this, the docker-registry application used to host docker images internally. When docker first came out I whipped up a puppet manifest to configure this bad boy but then realized that the right way would be to run this as a container (which was published). Unfortunately the Dockerfile as it was didn’t fit my needs.

The gunicorn setup was hardcoded, and to make matters more complicated, the configuration defaulted to the development settings, which stored images in /tmp, rather than the recommended production setting that stores images in S3 (where I wanted them).

The solution was easy: create a couple of bash scripts that utilize environment variables which can be set when calling `docker run`.

First we generate the configuration file:
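Something in this spirit; the config keys mirror docker-registry’s config.yml, while the environment variable names are my own:

#!/bin/bash
# generate-config.sh: render config.yml from the environment at container start
cat > /docker-registry/config.yml <<EOF
prod:
    storage: s3
    s3_access_key: ${AWS_ACCESS_KEY_ID}
    s3_secret_key: ${AWS_SECRET_ACCESS_KEY}
    s3_bucket: ${S3_BUCKET}
    storage_path: ${STORAGE_PATH:-/registry}
EOF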

And wrap the gunicorn run call:
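Again a sketch, with the port and worker count pulled from the environment (names are mine) and sane defaults supplied:

#!/bin/bash
# run-gunicorn.sh: exec gunicorn so it runs as PID 1 and receives signals
cd /docker-registry
exec gunicorn --access-logfile - \
    -b 0.0.0.0:${REGISTRY_PORT:-5000} \
    -w ${GUNICORN_WORKERS:-4} \
    wsgi:application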

Finally the Dockerfile is modified to call these scripts with CMD, meaning that they are called when the container starts.
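The relevant Dockerfile lines end up looking something like:

ADD generate-config.sh /docker-registry/generate-config.sh
ADD run-gunicorn.sh /docker-registry/run-gunicorn.sh
CMD /docker-registry/generate-config.sh && /docker-registry/run-gunicorn.sh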

Since we use puppet-docker, the manifest for our dockerregistry server role simply sets these environment variables when it runs the container to configure it to our liking.
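With puppet-docker that’s along these lines (the image name and values are placeholders):

docker::run { 'dockerregistry':
  image => 'our/docker-registry',
  ports => ['5000:5000'],
  env   => [
    'AWS_ACCESS_KEY_ID=YOUR_KEY',
    'AWS_SECRET_ACCESS_KEY=YOUR_SECRET',
    'S3_BUCKET=our-registry-bucket',
  ],
}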

I’m really a big fan of this concept. This means people can publish docker containers that can be used as standalone application appliances with users tweaking to their liking via environment variables.

EDIT: Although I used puppet in this example to run docker, you don’t need to. You can easily do the following as well.
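That is, passing the configuration straight in with docker run, using the same hypothetical variable names as above:

docker run -d -p 5000:5000 \
    -e AWS_ACCESS_KEY_ID=YOUR_KEY \
    -e AWS_SECRET_ACCESS_KEY=YOUR_SECRET \
    -e S3_BUCKET=our-registry-bucket \
    our/docker-registry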

Handy Hub Alias

I’ve recently become a big fan of Hub and use a lot of its commands to interact with GitHub from the comfort of my command line. One of my personal favorites is pull-request, as we often use PRs as a form of both code review and code promotion. Here’s a handy alias I have for the common task of issuing a PR for promotion.
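Mine looks something like this; the branch names are stand-ins for whatever your promotion flow uses (ours here is hypothetically master → production):

alias promote='hub pull-request -b production -h master'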

Now I just need to figure out how to make it open the URL for the pull request that it dumps to the console. :)