Testing Ansible roles for CentOS on Travis-CI with Docker

Wow that title looks so buzzword-y

Testing your code is always a good thing, and ansible-galaxy init (since 2.0) includes a simple test setup using Travis-CI, a free continuous integration service that integrates with GitHub. The problem is that their infrastructure is all Ubuntu based, and my recently created role is for CentOS. So how do I test my role in an automated way?

The solution is Travis-CI with Docker. Using a couple of different sources online, I pieced together a config that uses the CentOS 6 and 7 images (not 5, since its Python 2.4 is too old for Ansible) to test my Ansible role. And what's nifty is that Ansible Galaxy has a webhook that Travis can use to report build status on the role description page.

I also use the env directive to create a build matrix that launches sub-jobs, in this case one per CentOS image version. Take a look at my travis.yml and see for yourself.
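As a rough sketch (this isn't the file verbatim, and the test playbook path is just the tests/test.yml that ansible-galaxy init generates), the shape of it is something like this:

sudo: required

services:
  - docker

env:
  - OS_VERSION=6
  - OS_VERSION=7

before_install:
  - docker pull centos:${OS_VERSION}

script:
  # Keep a container running, install Ansible from EPEL inside it, then check
  # the role's test playbook (the mount path should match whatever role name
  # tests/test.yml expects)
  - docker run -d --name test -v "${PWD}":/etc/ansible/roles/role_under_test:ro centos:${OS_VERSION} /bin/sleep 600
  - docker exec test yum -y install epel-release
  - docker exec test yum -y install ansible
  - docker exec test ansible-playbook /etc/ansible/roles/role_under_test/tests/test.yml --syntax-check

notifications:
  webhooks: <the Ansible Galaxy notification URL>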

Yum Repository Priorities and Ansible

I’ve been investigating setting up a VPN gateway on my home server, but so far my search has only turned up implementations that run on Linux or BSD, not SmartOS/Illumos (Solaris). It’s planted some doubt about my choice of SmartOS as my server’s OS, but so what! I can still run a Linux VM via KVM. I’ve gone with my old standby of CentOS, a very solid OS that I trust to run my eventual VPN gateway securely.

When I get to it, I’ll let you all know how I go about setting up strongSwan, but I’m not there yet. I first need to lay some groundwork, and that’s installing the EPEL yum repo. I’m very familiar with how to do so manually, adding the repo and then using the priorities plugin, but it’s all about Ansible today. I’ve created an Ansible Galaxy role to install and configure the priorities plugin, so go check it out:

davidmnoriega/yum-plugin-priorities
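Using it follows the usual Galaxy pattern; here's a minimal sketch (the Galaxy role name is my reading of the repo name, and any role variables are documented in its README):

ansible-galaxy install davidmnoriega.yum-plugin-priorities

Then reference it from a play:

- hosts: all
  become: yes
  roles:
    - davidmnoriega.yum-plugin-priorities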

iPXE with Serial Console

My SmartOS-based home server is already configured to use the serial console in case I need to physically plug into it, but there was a missing piece. iPXE, the boot loader I use to boot the SmartOS image, can use the serial console, but it’s not the default. I couldn’t find an existing binary built with serial support, so I built it myself.

It’s pretty easy to git clone their repo, un-comment a line, and build the binary, but I wanted an automated way to do it. I’ve seen Travis-CI around, and one of its features is being able to build artifacts and post them to your repo as a release. I created a git repo, ipxe_serial, for this project.
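For reference, the manual version of that looks roughly like this; from memory, the line to un-comment is the CONSOLE_SERIAL define in config/console.h, and undionly.kpxe is the target I care about:

git clone https://github.com/ipxe/ipxe.git
cd ipxe/src
# Enable the serial console by un-commenting CONSOLE_SERIAL
sed -i 's|//#define\(.*CONSOLE_SERIAL\)|#define\1|' config/console.h
make bin/undionly.kpxe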

Once you have your GitHub account linked with Travis, you enable it per repo, though you’ll probably want to change the settings to only build if the config file, .travis.yml, exists. Travis has a CLI client that you’ll need, as it makes setting up the GitHub Releases integration easy. Install the client and authorize it to use your GitHub account, then from within your repo directory run travis setup releases and follow the prompts. It creates a GitHub token for your repo, encrypts it, and writes it into your Travis config file.

Afterwards, I think it’s a good idea to have the following added:

deploy:
  skip_cleanup: true
  ...
  on:
    tags: true

From reading the docs, this looks like a good thing to have: skip_cleanup keeps Travis from wiping the build artifacts before the deploy step, and tags: true restricts deploys to tagged commits. With that, I can tag a commit and Travis will run my build script and create a release on my repo.
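With that in place, cutting a release is just a matter of tagging a commit and pushing the tag (the tag name here is only an example):

git tag v1.0
git push origin v1.0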

Running a Local Docker Registry Proxy Cache

I was talking with a fellow engineer and found out that their team was testing services by running them locally on the host, not within a Vagrant VM. I thought this was odd, and the explanation came down to convenience and speed: bringing up a new Vagrant box and provisioning it was too slow. I thought about this and wondered how I could potentially help.

My first thought was: how can we cache the things we need? Vagrant has vagrant-cachier, a plugin that will cache things like apt packages, but we already use that. My second thought was Docker images: how could we cache those? Some googling showed that using a generic bucket provided by vagrant-cachier wouldn’t work due to how the Docker daemon stores images. Some more googling turned up the pull-through cache feature of the v2 Docker registry.

Using Vagrant and the vagrant docker provisioner, I’ve created a vagrant that runs a local registry proxy cache that can be used by things like docker-machine. You can find it here: https://github.com/davidmnoriega/docker-registry-proxy
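Outside of the Vagrant wrapper, the heart of it is just the official registry image with its proxy setting pointed at Docker Hub, plus an engine told to use it as a mirror. A bare-bones sketch (the IP and port are placeholders for wherever the proxy ends up):

# Run the v2 registry as a pull-through cache of Docker Hub
docker run -d -p 5000:5000 --name registry-proxy \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2

# Create a docker-machine VM that pulls images through the cache
docker-machine create -d virtualbox \
  --engine-registry-mirror http://192.168.99.1:5000 default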

Linksys E4200 and DD-WRT

To further support my home server project, I needed my network environment to support PXE booting. I first tried running dnsmasq on my Raspberry Pi, but it didn’t seem to respond to any DHCP requests; only my Linksys router would. I’ve always been curious about running DD-WRT, and now I had a real reason to do so.

Following the steps in the wiki entry for my E4200 was pretty simple, and I had it up and working quickly. The next step was getting it to serve up the boot image via TFTP, but as it turns out, the version of dnsmasq shipped in DD-WRT has TFTP disabled. Also, its config syntax doesn’t follow the usual form found in the man page, maybe due to the version. I didn’t dig too deep, as I found that installing OptWare brings more software options, like a TFTP server, avahi, and zabbix.

TFTP Server

I won’t go over how to install OptWare, as that’s documented well elsewhere, but I will mention that I followed the newer guide, OptWare the Right Way 2. With that installed, I first experimented with atftp, but quickly found that it wouldn’t work for my needs, as it has a pretty small limit on the size of the files it will serve. I then switched to tftp-hpa and configured it to serve from a directory on the small USB thumb drive I plugged into the router.

The router mounted the partition labeled opt to the correct location, but mounts the data partition to a generic spot under /tmp/mnt based on the partition number. Disappointing, but not horrible, since that wouldn’t be changing. Now that I had a TFTP server (FYI, it runs under xinetd), I needed to configure dnsmasq to provide the right information. Following information from another guide, I decided on iPXE as my bootloader of choice. It supports a menu system and, if I get around to recompiling it, will support output to a serial console.
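Before getting to dnsmasq, the xinetd entry for tftp-hpa ends up looking roughly like this (the server path and serve directory are illustrative; OptWare lives under /opt and the thumb drive lands somewhere under /tmp/mnt):

# tftp-hpa under xinetd; adjust server and server_args to your paths
service tftp
{
    socket_type = dgram
    protocol    = udp
    wait        = yes
    user        = root
    server      = /opt/sbin/in.tftpd
    server_args = -s /tmp/mnt/sda_part1/tftpboot
    disable     = no
}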

I used these options to tell dnsmasq to:

  • Serve up the iPXE bootloader to plain PXE boot requests
  • Serve up the boot menu to iPXE bootloaders

dhcp-match=ipxe,175
dhcp-boot=net:#ipxe,undionly.kpxe,,192.168.1.1
dhcp-boot=menu.ipxe,,192.168.1.1

Note the use of net: and # instead of the usual tag: and ! directives.

I now had a functioning PXE boot environment on my main network, yay! Later on I’ll post about how I got VLANs working, and about the zabbix agent that comes with OptWare (FYI, the zabbix agent built into my version of DD-WRT wasn’t great, as I couldn’t figure out how to configure it).

Tiny Home Server

It’s been a long while since my last posting and a lot of things have happened. I’ve moved out to California, the East Bay area to be exact. Started a new job with a great new company, and have just been enjoying life. Now after all that, I’ve gotten the itch for a project to work on and I’ve always wanted to have a server at home.

Now you may ask, how can someone who calls himself a system administrator not have a server at home? Well, I’ve had all the servers I could ever work on when I was at my university, but now I wanted something all my own. Something special, so I set out to build as small a server as I could. Small enough to fit on the shelf in the living room, since that’s where the cable gateway is.

Besides that, I also wanted something that could support virtualization but not cost too much. My last hardware build was my desktop PC, another project where I set out to build something not super extravagant, but still special. Poking around, I found this cool Mini-ITX server board from Supermicro that supports a max of 64GB of RAM! But finding 16GB SO-DIMM modules is kind of hard, so I just went with 32GB.

To go along with that, I got two 1TB HGST 2.5-inch disks, a 16GB SATADOM, and to wrap it all up, a Mini-ITX case. Not just any case, but this tiny thing, the M350 case with a PicoPSU 80 power supply. It’s great how tiny this thing is. Just in case, I added a 40mm PWM fan. The case doesn’t come with any mounting hardware, so I also got some rubber “screws” that hold everything in nicely, though they do require some trimming. One issue is that the case has a USB 2.0 header cable and the motherboard only has a 3.0 header. There are adapters out there, but as I had no need for it, I left it out.

As for my plans for this sweet little thing? Well, you’ll have to check back later  🙂

fast2phy: Convert aligned FASTA to interleaved PHYLIP

Where I work, many of our users are involved in bioinformatics, and recently one user was concerned with the time it took to convert an aligned FASTA file into an interleaved PHYLIP file for phylogenetic analysis. Using BioPython took a very long time, and its in-memory representation was many times larger than the actual file itself, which added to the difficulties the user was facing.

So I thought I could help out. Luckily, a project that does much of the heavy lifting already existed: pyfasta. This great tool uses NumPy’s memmap to access a FASTA file without having to read it completely into memory, and with some loops on top of that, I was able to convert to the PHYLIP format. I’m also happy to report that the user is very satisfied with this program.

fast2phy can be found on GitHub.

Building exabayes(1.2.1) for Rocks 6.1

To build exabayes (note: this is for version 1.2.1; 1.3 just came out and doesn’t build for me just yet) on Rocks 6.1, which is based on CentOS 6.3, LLVM’s clang and libc++ need to be installed. I have a previous blog post about this.

The available prebuilt binaries do not work on CentOS 6.3, but once clang and libc++ are installed, rebuilding it is fairly straightforward. Download and extract exabayes and go into its directory. Use the following commands to configure and build both the serial and parallel versions of exabayes:

CC=clang CXX=clang++ CXXFLAGS="-std=c++11 -stdlib=libc++" ./configure
make
OMPI_CC=clang OMPI_CXX=clang++ OMPI_CXXFLAGS="-std=c++11 -stdlib=libc++" CC=mpicc CXX=mpic++ ./configure --enable-mpi
make clean
OMPI_CC=clang OMPI_CXX=clang++ OMPI_CXXFLAGS="-std=c++11 -stdlib=libc++" CC=mpicc CXX=mpic++ make

mpicc and mpic++ are just wrappers for gcc, but with those environment variables they can be pointed at another compiler without having to build a separate version of Open MPI. Once that is done, the top-level directory contains all the exabayes binaries. Ignore the ones in bin/bin; those are the prebuilt ones that don’t work.

Building libc++ on CentOS 6

For the cluster I manage, a user needed exabayes (there will be another post on building that later), but the prebuilt binaries didn’t work on Rocks 6.1, which is based on CentOS 6.3. GCC is too old to build it, since exabayes uses C++11, but luckily clang 3.4 is available from EPEL. The only thing is, it still wouldn’t compile. I got the following two errors:

/usr/bin/../lib/gcc/x86_64-redhat-linux/4.4.6/../../../../include/c++/4.4.6/exception_ptr.h:143:13: error:
unknown type name 'type_info'
const type_info*
./src/Density.cpp:79:34: error: use of undeclared identifier 'begin'
double sum = std::accumulate(begin(values), end(values), 0. );

While there was a potential workaround for the first error, nothing viable was to be found for the second. But this research pointed in the next direction: building LLVM’s libc++, since these errors have to do with GCC’s old version of the standard C++ library. It’s a bit complicated and rather hackish, but it looks like it works, so here we go.

Download libc++ via svn, but instead of following their directions for building, do this:

cd libcxx/lib
./buildit

Thanks go to this blog post, which is in Chinese, but the commands are easy to understand. After building the library, copy it to /usr/lib (or, since this is 64-bit, I put it in /usr/lib64) and create the needed symlinks. Then copy libcxx/include to /usr/include/c++/v1. Remember this, as we’ll be replacing libc++ later with a rebuilt version.
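For the record, the copy and symlink step amounts to something like this (the library file name is whatever buildit produced; double-check yours):

# From inside libcxx/lib, after ./buildit has finished
cp libc++.so.1.0 /usr/lib64/
ln -s libc++.so.1.0 /usr/lib64/libc++.so.1
ln -s libc++.so.1 /usr/lib64/libc++.so
ldconfig
cp -r ../include /usr/include/c++/v1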

Next is building libc++abi. Again, download it from svn, build it like above, copy the library to /usr/lib64, and make the symlinks. The include directory doesn’t need to be copied. Now it’s time to rebuild libc++ against libc++abi. This requires CMake, and I opted for the newer version available from EPEL, so the command is cmake28. I also started with a fresh download of libc++:

cd libcxx
mkdir build
cd build
CC=clang CXX=clang++ cmake28 -G "Unix Makefiles" -DLIBCXX_CXX_ABI=libcxxabi -DLIBCXX_LIBCXXABI_INCLUDE_PATHS="<libc++abi-source-dir>/include" -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr ../
make

Since I don’t like to mess with the system install, I used DESTDIR during the make install step. This then lets me build an RPM package using rocks create package. I also created a package for libc++abi.
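A sketch of that step (the staging path and package name are just examples, and the rocks create package arguments are from memory, so check its help output):

# Stage the install into a scratch tree instead of the live system
make DESTDIR=/tmp/libcxx-root install

# Roll the staged tree into an RPM with the Rocks tooling
rocks create package /tmp/libcxx-root/usr libcxx prefix=/usr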

With this, it’s now possible to compile with clang and C++11. Test it out like so: clang++ -stdlib=libc++ -std=c++11 input.cpp

Setting up LAM Pro with SELinux

It’s been a while since I had to install LAM Pro, and now that I have the new version, I ran into a couple of issues. First there is SELinux, which has saved my butt a couple of times, so I work to always have it enabled. I cringe whenever I see that the first step in someone’s howto is to disable it. It’s complicated, and I don’t profess to be anywhere near an expert in it, but I get by. Second is securing the install, and while SELinux helps in that regard, there are still steps to take.

Installing:

LAM Pro is the paid version of LAM, LDAP Account Manager. Starting with a fresh install of CentOS 6.4, I downloaded the LAM Pro source and used its install script:

# ./configure --with-httpd-user=apache \
              --with-httpd-group=apache \
              --with-web-root=/var/www/html/lam
# make install

By default it will install everything into /usr/local/lam and use httpd as the user/group name. Here I fix that to use the CentOS default of apache as the user/group. It’s also recommended to separate the web interface from the more sensitive parts, like the config files and session information.

SELinux Configuration:

If you were to start up a browser and load LAM, even with SELinux disabled, there is the issue of writing config files. They are stored in /usr/local/lam/etc, where the web server doesn’t have permission to write. There are also the session and tmp directories under var that it uses to cache data from LDAP. So use chown to fix that:

# chown -R apache:apache /usr/local/lam/etc
# chown -R apache:apache /usr/local/lam/var/sess
# chown -R apache:apache /usr/local/lam/var/tmp

But with SELinux enabled, apache still doesn’t have permissions to write outside of its own root directory. We need to tell the system the right context to use to allow apache access.

# semanage fcontext -a -t httpd_sys_content_t "/usr/local/lam/var(/.*)?"
# semanage fcontext -a -t httpd_sys_content_t "/usr/local/lam/etc(/.*)?"

Next is to apply these settings to the files themselves:

# restorecon -R /usr/local/lam/var
# restorecon -R /usr/local/lam/etc
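
A quick ls -Z confirms the new contexts took:

# ls -dZ /usr/local/lam/etc /usr/local/lam/var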

Now apache can run LAM with SELinux enabled and not run into any issues.

Apache Configuration:

The next part is to secure the installation. LAM comes with htaccess files ready to protect the config files and other sensitive files. It’s my preference, and you can do it differently, but I start by creating /etc/httpd/conf.d/lam.conf to hold these directives:

<Directory /var/www/html/lam>
   AllowOverride All
</Directory>

Now you could just change the server-wide setting in httpd.conf, but I like to follow the principle of least privilege.

Next would be to set it up to use SSL to keep prying eyes from grabbing your passwords and user information, but that’s a whole other topic.