Testing Ansible roles for CentOS on Travis-CI with Docker

Wow, that title looks so buzzword-y

Testing your code is always a good thing, and ansible-galaxy init (since Ansible 2.0) includes a simple test setup using Travis-CI, a free continuous integration service that integrates with GitHub. The problem is that their infrastructure is all Ubuntu based, and my recently created role is for CentOS. So how do I test my role in an automated way?

The solution is Travis-CI with Docker. Piecing together a couple of different sources online, I came up with a config that uses the centos images for 6 and 7 (not 5, since its Python 2.4 is too old for Ansible) to test my role. And what’s nifty is that Ansible Galaxy has a webhook Travis can use to report build status on the role’s description page.

I also use the env directive to create a build matrix that launches sub-jobs, in this case one per CentOS image version. Take a look at my .travis.yml and see for yourself.
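The gist of it looks something like this (a simplified sketch rather than my exact file; the OS_VERSION variable name and the tests/ paths are just illustrative):

sudo: required

services:
  - docker

env:
  - OS_VERSION=6
  - OS_VERSION=7

before_install:
  # each sub-job in the matrix pulls its own CentOS image
  - docker pull centos:${OS_VERSION}

script:
  # install ansible from EPEL inside the container, syntax-check the
  # test playbook, then run it against the container itself
  - >-
    docker run --rm -v ${PWD}:/role centos:${OS_VERSION} bash -c
    "yum -y install epel-release &&
    yum -y install ansible &&
    ansible-playbook /role/tests/test.yml -i /role/tests/inventory --syntax-check &&
    ansible-playbook /role/tests/test.yml -i /role/tests/inventory --connection=local"

notifications:
  webhooks: https://galaxy.ansible.com/api/v1/notifications/

The notifications block at the end is what feeds build results to the Ansible Galaxy webhook I mentioned.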

Yum Repository Priorities and Ansible

I’ve been investigating setting up a VPN gateway on my home server, but so far my search has only turned up implementations that run on Linux or BSD, not SmartOS/Illumos (Solaris). It’s planted some doubt in my choice of SmartOS as my server’s OS, but so what! I can still run a Linux VM via KVM. I’ve gone with my old standby of CentOS, a very solid OS that I trust to run my eventual VPN gateway securely.

When I get to it, I’ll let you all know how I go about setting up strongSwan, but I’m not there yet. First I need to lay some groundwork, and that’s installing the EPEL yum repo. I’m very familiar with how to do so manually, adding the repo and then using the priorities plugin, but it’s all about Ansible today. I’ve created an Ansible Galaxy role to install and configure the priorities plugin; go check it out.

davidmnoriega/yum-plugin-priorities
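If you want to give it a spin, usage is roughly the following (the host group is a placeholder; see the role’s README for the variables it actually exposes):

# pull the role down from Galaxy
ansible-galaxy install davidmnoriega.yum-plugin-priorities

# playbook.yml
- hosts: vpn-gateway
  roles:
    - davidmnoriega.yum-plugin-priorities

Under the hood there isn’t much magic to the plugin itself: it gets enabled via /etc/yum/pluginconf.d/priorities.conf, and then each .repo file gets a priority=N line, where a lower number wins. That’s what keeps EPEL from overriding packages in the base repos.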

iPXE with Serial Console

My SmartOS based home server is already configured to use the serial console in case I ever need to physically plug into it, but there was a missing piece. iPXE, the boot loader I use to boot the SmartOS image, can use the serial console, but this is not the default. I couldn’t find an existing binary built with serial support, so I built it myself.

It’s pretty easy to git clone their repo, un-comment a line, and build the binary, but I wanted an automated way to do so. I’ve seen Travis-CI around, and one of its features is the ability to build artifacts and post them to your repo as a release. I created a git repo, ipxe_serial, for this project.
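For reference, the manual build is roughly the following (the sed line is just a shortcut for the un-commenting step; double check the exact line in config/console.h for your checkout):

git clone git://git.ipxe.org/ipxe.git
cd ipxe/src
# turn  //#define CONSOLE_SERIAL  into  #define CONSOLE_SERIAL
sed -i 's|^//\(#define\s*CONSOLE_SERIAL\)|\1|' config/console.h
make bin/undionly.kpxe

The resulting bin/undionly.kpxe is the same chainloadable binary as the stock one, just with serial output compiled in.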

Once you have your GitHub account linked with Travis, you’ll enable it for a repo, though you’ll probably want to change the settings to only build if the config file, .travis.yml, exists. Travis has a CLI client that you’ll need, as it makes setting up the GitHub Releases integration easy. Install the client and authorize it to use your GitHub account, then from within your repo directory, run travis setup releases and follow the prompts. It creates a GitHub token for your repo, encrypts it, and writes it to your Travis config file.

Afterwards, I think it’s a good idea to add the following:

deploy:
  skip_cleanup: true
  ...
  on:
    tags: true

From reading the docs, these look like good things to have: skip_cleanup stops Travis from wiping the build artifacts before the deploy step, and tags: true means a release is only created for tagged commits. With that, I can tag a commit and Travis will run my build script and attach the binary to a release on my repo.

Running a Local Docker Registry Proxy Cache

I was talking with a fellow engineer and found out that their team was testing services by running them locally on the host, not within a Vagrant VM. I thought this was odd, and the explanation was that it came down to convenience and speed: bringing up a new Vagrant VM and provisioning it was too slow. I thought about this and wondered how I could help.

My first thought was: how can we cache the needed bits? Vagrant has vagrant-cachier, a plugin that will cache things like apt packages, but we already use that. My second thought was Docker images: how could we cache those? Some googling showed that using a generic bucket provided by vagrant-cachier wouldn’t work due to how the Docker daemon stores images. Some more googling turned up the pull-through cache feature of the v2 Docker registry.

Using Vagrant and the Vagrant Docker provisioner, I’ve created a vagrant that runs a local registry proxy cache that can be used by things like docker-machine. You can find it here: https://github.com/davidmnoriega/docker-registry-proxy
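The interesting part is tiny: the v2 registry just needs a proxy section in its config to act as a pull-through cache. Something like this (a minimal sketch; the port and storage path are whatever you choose):

# config.yml for the registry container
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
proxy:
  # pull from the official hub and cache whatever comes back
  remoteurl: https://registry-1.docker.io

Then any Docker daemon that should use the cache gets pointed at it as a mirror, for example docker-machine create -d virtualbox --engine-registry-mirror=http://<vagrant ip>:5000 dev.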

Linksys E4200 and DD-WRT

To further support my home server project, I needed my network environment to support PXE booting. I first tried running dnsmasq on my Raspberry Pi, but it didn’t seem to respond to any DHCP requests; only my Linksys router would. I’ve always been curious about running DD-WRT, and now I had a real reason to do so.

Following the steps in the wiki entry for my E4200 was pretty simple, and I had it up and working quickly. The next step was getting it to serve up the boot image via TFTP, but as it turns out, the version of dnsmasq shipped with DD-WRT has TFTP disabled. Its config syntax also doesn’t follow the usual form found in the man page, maybe due to the version. I didn’t dig too deep, as I found that installing OptWare brings more software options, like a TFTP server, avahi, and zabbix.

TFTP Server

I won’t go over how to install OptWare as that’s documented well elsewhere, but I will mention that I followed the newer guide, OptWare The Right Way 2. With that installed, I first experimented with atftp, but quickly found out it wouldn’t work for my needs, as it has a fairly small limit on the size of the files it can serve. I then switched to tftp-hpa and configured it to serve files from a directory on the small USB thumb drive I plugged into the router.

The router mounted the partition labeled opt to the correct location, but mounted the data partition to a generic location under /tmp/mnt based on the partition number. Disappointing, but not horrible, as that wouldn’t be changing. Now that I had a TFTP server (FYI, it runs under xinetd), I needed to configure dnsmasq to provide the right information. Following another guide, I decided to use iPXE as my bootloader of choice. It supports a menu system and, if I get around to recompiling it, will support output to a serial console.
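For the curious, the xinetd entry ends up looking something like this (paths shown are illustrative; the server path assumes OptWare’s /opt prefix, and server_args should point at wherever your USB partition actually mounts under /tmp/mnt):

service tftp
{
    socket_type = dgram
    protocol    = udp
    wait        = yes
    user        = root
    server      = /opt/sbin/in.tftpd
    # -s chroots the daemon into the directory on the usb drive
    server_args = -s /tmp/mnt/sda_part2/tftpboot
    disable     = no
}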

I used these options to tell dnsmasq to:

  • Serve up the iPXE bootloader to plain PXE boot requests
  • Serve up the boot menu to iPXE bootloaders
dhcp-match=ipxe,175
dhcp-boot=net:#ipxe,undionly.kpxe,,192.168.1.1
dhcp-boot=menu.ipxe,,192.168.1.1

Note the use of net: and # instead of the usual tag: and ! directives, presumably another quirk of the older dnsmasq version.
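And menu.ipxe itself is just an iPXE script. A stripped-down example (the SmartOS entry is a placeholder; point the chain command at wherever you host your boot script or kernel):

#!ipxe
menu Home Server Boot Menu
item smartos  Boot SmartOS
item shell    Drop to iPXE shell
choose target && goto ${target}

:smartos
chain tftp://192.168.1.1/smartos/boot.ipxe

:shell
shell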

I now had a functioning PXE boot environment on my main network, yay! Later on I’ll post about how I got VLANs working, and about the zabbix agent that comes with OptWare (FYI, the zabbix agent built into my version of DD-WRT wasn’t great, as I couldn’t figure out how to configure it).

Tiny Home Server

It’s been a long while since my last post and a lot has happened. I’ve moved out to California, the East Bay to be exact, started a new job with a great new company, and have just been enjoying life. Now, after all that, I’ve gotten the itch for a project to work on, and I’ve always wanted to have a server at home.

Now you may ask, how can someone who calls himself a system administrator not have a server at home? Well, I had all the servers I could ever want to work on when I was at my university, but now I wanted something all my own. Something special, so I set out to build as small a server as I could. Small enough to fit on the shelf in the living room, since that’s where the cable gateway is.

Besides that, I also wanted something that could support virtualization but not cost too much. My last hardware build was my desktop PC, another project where I set out to build something not super extravagant, but still special. Poking around, I found a cool mini-ITX server board from Supermicro that supports a max of 64GB of RAM! But 16GB SO-DIMM modules are kind of hard to find, so I just went with 32GB.

To go along with that, I got two 1TB HGST 2.5-inch disks, a 16GB SATADOM, and to wrap it all up, a mini-ITX case. Not just any case, but this tiny thing, the M350 case with a PicoPSU 80 power supply. It’s great how tiny this thing is. Just in case, I added a 40mm PWM fan. The case doesn’t come with any mounting hardware for it, so I also got some rubber “screws” that hold it in nicely, though they do require some trimming. One issue is that the case has a USB 2.0 header cable, while the motherboard only has a USB 3.0 header. There are adapters out there, but as I had no need for it, I left it unconnected.

As for my plans for this sweet little thing? Well, you’ll have to check back later 🙂

fast2phy: Convert aligned FASTA to interleaved PHYLIP

Where I work, many of our users are involved in bioinformatics, and recently one user was concerned with the time it took to convert an aligned FASTA file into an interleaved PHYLIP file for phylogenetic analysis. Using BioPython took a very long time, and its in-memory representation was many times larger than the actual file itself, which added to the difficulties the user was facing.

So I thought I could help out. Luckily, an existing project, pyfasta, fit the bill. This great tool uses NumPy’s mmap to access a FASTA file without reading it completely into memory, and with some loops I was able to convert to the PHYLIP format. I’m also happy to report that the user is very satisfied with this program.
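The core of the conversion is short. Here’s a rough sketch of the idea, not the actual fast2phy code, and it assumes pyfasta’s slice access returns plain sequence strings:

from pyfasta import Fasta

def fasta_to_phylip(fasta_path, out_path, width=60):
    """Write an aligned FASTA file out as interleaved PHYLIP."""
    f = Fasta(fasta_path)      # mmap-backed, nothing big ends up in RAM
    names = sorted(f.keys())
    nchars = len(f[names[0]])  # aligned input: all sequences are the same length
    with open(out_path, "w") as out:
        out.write(" %d %d\n" % (len(names), nchars))
        for start in range(0, nchars, width):
            for name in names:
                # PHYLIP wants names (padded to 10 chars) in the first block only
                label = name[:10].ljust(10) if start == 0 else ""
                out.write(label + str(f[name][start:start + width]) + "\n")
            out.write("\n")    # blank line between interleaved blocks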

fast2phy can be found on GitHub.

Link

From Maryn McKenna of Wired:

“A couple of unpleasant and deeply dismaying things have happened in the science blogosphere in the past 36 hours or so. I’m posting on it, along with a growing number of other science bloggers, in order to stand in solidarity with a fellow blogger and to ensure her voice is not silenced.”

The treatment of women in science and technology needs to change. I stand with Dr Lee.