Setting up LAM Pro with SELinux

It's been a while since I had to install LAM Pro, and now that I have the new version, I ran into a couple of issues. First there is SELinux, which has saved my butt a couple of times, so I always work to keep it enabled. I cringe whenever I see that the first step in someone's howto is to disable it. It's complicated, and I don't profess to be anywhere near an expert in it, but I get by. Second is securing the install, and while SELinux helps in that regard, there are still steps to take.

Installing:

LAM Pro is the paid version of LAM, the LDAP Account Manager. Starting with a fresh install of CentOS 6.4, I downloaded the LAM Pro source and used its install script:

# ./configure --with-httpd-user=apache \
              --with-httpd-group=apache \
              --with-web-root=/var/www/html/lam
# make install

By default it will install everything into /usr/local/lam and use httpd as the user/group name. The configure flags above fix that to use the CentOS default of apache as the user/group. It's also recommended to separate the web interface from the more sensitive parts, like the config files and session information.
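As a quick sanity check, you can confirm the install landed where expected (paths assume the default /usr/local/lam prefix and the web root chosen above):

```shell
# Confirm the install layout: config, runtime data, and web root.
# Paths assume the configure options shown above.
ls -ld /usr/local/lam/etc /usr/local/lam/var /var/www/html/lam
```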

SELinux Configuration:

If you were to start up a browser to load LAM, even with SELinux disabled, there is the issue of writing config files. They are stored in /usr/local/lam/etc, where the web server doesn't have permission to write. There are also the session and tmp directories in /usr/local/lam/var that it uses to cache data from LDAP. So use chown to fix the ownership:

# chown -R apache:apache /usr/local/lam/etc
# chown -R apache:apache /usr/local/lam/var/sess
# chown -R apache:apache /usr/local/lam/var/tmp

But with SELinux enabled, apache still doesn't have permission to write outside of its own root directory. We need to tell the system the right context to use to allow apache read/write access.

# semanage fcontext -a -t httpd_sys_rw_content_t "/usr/local/lam/var(/.*)?"
# semanage fcontext -a -t httpd_sys_rw_content_t "/usr/local/lam/etc(/.*)?"

Next is to apply these settings to the files themselves:

# restorecon -R /usr/local/lam/var
# restorecon -R /usr/local/lam/etc
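To confirm the relabeling took, and to debug any remaining denials, something like this helps (output format varies by policy version):

```shell
# Show the SELinux context on the relabeled directories.
ls -Zd /usr/local/lam/etc /usr/local/lam/var

# If apache is still being denied, look for recent AVC messages.
ausearch -m avc -ts recent
```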

Now apache can run LAM with SELinux enabled and not run into any issues.

Apache Configuration:

The next part is to secure the installation. LAM comes with htaccess files ready to protect the config files and other sensitive areas. It's my preference, and you can do it differently, but I start by creating /etc/httpd/conf.d/lam.conf to hold these directives:

<Directory /var/www/html/lam>
   AllowOverride All
</Directory>

Now you could just change the server-wide setting in httpd.conf, but I like to follow the principle of least privilege.
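Before relying on the new lam.conf, it's worth checking that httpd parses it cleanly and then reloading (CentOS 6 uses SysV service scripts):

```shell
# Verify the configuration syntax, then reload httpd to pick up lam.conf.
apachectl configtest
service httpd reload
```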

Next would be to set it up to use SSL to keep prying eyes from grabbing your passwords and user information, but that’s a whole other topic.


Rocks Cluster: Changing the external IP address

Seems simple enough, yes? We just recently had to move our entire infrastructure out of the main university server room and into the new research datacenter built just for groups like the one I work for. This also meant a change in the network.

For those who use Rocks already, you'll know right away that Rocks doesn't use the nifty GUI interface to manage the network devices, but goes straight to the network startup scripts in /etc/sysconfig/network-scripts. Don't worry, this is the easy part that any sysadmin should know: just edit the corresponding ifcfg-ethX file for your external network interface and change the information to what it needs to be (and don't forget /etc/hosts).

Second, update the Rocks database entry for the external IP of the head node like so:

rocks set host interface ip xxxxxxx ethX x.x.x.x

Where, of course, you fill in the blanks with your relevant information.
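To double-check that the change took, you can list the interfaces straight from the Rocks database:

```shell
# List the interface information stored in the Rocks database;
# the head node's entry should now show the new external IP.
rocks list host interface
```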

This next part wasn’t so obvious and I didn’t know anything was wrong until later.

With the head node back online with its new IP, I started booting up the nodes, only to find they were not finding their way back into the grid engine. When I SSH'd to the nodes, I found they were still referencing the old external IP address when trying to communicate back to the master grid engine process. Where was it even getting this information? Turns out, from the Rocks database. But didn't I just fix that?

Not really; there is more. The database stores IP information for all the nodes, as well as for Kickstart, which is why the nodes were using the old external IP address. Use rocks list attr to list all attributes and you'll see the Kickstart entries with the old IP information. I used the following to fix that:

rocks set attr Kickstart_PublicAddress x.x.x.x 
rocks set attr Kickstart_PublicNetwork x.x.x.x 
rocks set attr Kickstart_PublicBroadcast x.x.x.x 
rocks set attr Kickstart_PublicGateway x.x.x.x 
rocks set attr Kickstart_PublicNetmask x.x.x.x
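After updating the attributes, it's worth verifying them and then pushing the change out, since Rocks generates several configuration files from the database:

```shell
# Verify the Kickstart attributes now hold the new network information.
rocks list attr | grep Kickstart_Public

# Regenerate the configuration files Rocks derives from the database.
rocks sync config
```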
 

Ta Da! All done.