jonEbird

February 19, 2012

Automatically positioning and sizing windows with Devilspie

Filed under: administration,blogging,linux — jonEbird @ 1:48 pm

I often talk about my media PC in the living room and how I like to stream music via Pandora in the house. Since I am using Pithos, I can maximize the desktop real estate and enjoy whatever wallpaper suits my mood. To best accomplish this, I tend to resize Pithos so that it only shows the current album and song, and move it off to the corner of the desktop. Although this only takes a moment, I’ve grown tired of doing it each and every time I relaunch Pithos. What I really want is for the window to be resized and positioned automatically the moment I launch Pithos. Someone has to have solved this before, and it’s bugging me enough that I need to solve it once and for all.

I started looking at wmctrl to solve my problem per the recommendation of Patrick Shuff. While familiarizing myself with the utility, I stumbled upon a wmctrl tips page which included a list of other similar tools, and the description of devilspie sounded exactly like what I needed:

devilspie is a window-matching utility. It can be configured to detect windows as they are created, and match the window to a set of rules. If the window matches the rules, it can perform a series of actions on that window.

My needs are modest. A resize and a repositioning can probably be accomplished with a single declaration setting the window geometry. Let’s install it and start playing.

Fortunately I could install devilspie directly from yum. While the manpage is quite sparse, it is packaged with a sufficient README, and after a few trial-and-error iterations I had a basic configuration working which just ran devilspie in debug mode, reporting on the windows currently open and on new ones being launched.

sudo yum -y install devilspie
[ ! -d ~/.devilspie ] && { mkdir -m 0755 ~/.devilspie; echo "(debug)" > ~/.devilspie/debug.ds; }
devilspie

That will continue to run in your terminal, so you may want to open another tab/terminal as we explore.

Since I am a devoted Emacs user, I certainly wasn’t disappointed to learn that devilspie uses s-expressions for its configuration. I also found a good unofficial devilspie documentation page written up by Gina Häußge which helps round out the education. Indeed, it looks like I’ll be able to solve my problem with devilspie. After meticulously positioning my Pithos window, I used xwininfo to record the current window geometry. I then created the simple config file “~/.devilspie/pithos.ds” with the following:

(if (is (application_name) "Pithos") (begin (geometry "481x163--20-17") (unshade) ))

Some more background on setting up your own configurations: when you launched devilspie after the initial install, you could see information on the current windows and their associated application names. For example, here is what I see for Pithos:

Window Title: 'Pithos - Bankrupt On Selling by Modest Mouse'; Application Name: 'Pithos'; Class: 'Pithos'; Geometry: 483x197+797+523

That should make it easy for coming up with rules to match application names, window names or classes when targeting your applications.

The frustrating thing for me was how each utility reported a different geometry value for the windows currently launched. Between xwininfo, devilspie and wmctrl, they all had differing opinions on what I should use. You’d think that I could use what devilspie was telling me, since that is the utility I’m intent on using, but it didn’t work out for me. I’ve used xwininfo for over a decade and I guess I’m sticking with it.

Finally, I added a few last touches to my desktop configuration before calling it complete. I created a simple shell script to relaunch Pithos indefinitely, because occasionally I need to restart it but I never actually intend to leave it off. (See installing pithos with virtualenv if you want to see how I installed Pithos.)

#!/bin/bash

PITHOS_HOME=~/pithos
PITHOS_VENV=~/pithos_venv

#--------------------------------------------------
source "${PITHOS_VENV}/bin/activate"
cd "$PITHOS_HOME"
while :; do
    pithos
    sleep 1
done

My cheap way of launching apps indefinitely is to run them within my main screen session. I’ll create one for the Pithos restart script and another for devilspie. Here is an excerpt from my ~/.screenrc:

screen -t emacs     0  /usr/bin/emacs -nw
screen -t local     1  /bin/bash
screen -t local     2  /bin/bash
screen -t local     3  /bin/bash
screen -t local     4  /bin/bash
screen -t local     5  /bin/bash
screen -t synergy   10 /usr/bin/synergyc -f 192.168.1.23:6700
screen -t devilspie 11 /usr/bin/devilspie
screen -t pithos    12 ~/bin/run_pithos.sh

And all is well in my household again. Should something get borked with Pithos, perhaps due to being paused for too long or whatever, I can simply kill the window, which triggers the restart script to launch another copy, and devilspie will position and resize it perfectly for me. How will you use devilspie?

December 29, 2011

Installing emacs v24 on Fedora

Filed under: administration,blogging,emacs,linux,usability — jonEbird @ 10:05 pm

I’ve been reading about other people giving the yet-to-be-released version 24 of emacs a try for some time now. When I decided to upgrade my systems to v24, I was a bit surprised to not find anything about configuring a Fedora system for it. Guess I gotta do it myself…

This tutorial is part editorial and part instructional. I thought it would be helpful, for others’ edification, to include some of the techniques I used to get emacs up and running quickly without pulling my hair out.

After realizing I wasn’t going to be able to just grab a pre-built binary, I went looking for the official sources and ended up finding the pretest download location. First things first: let’s pull down the latest emacs-24 tarball and extract it.

PRETEST_URL="http://alpha.gnu.org/gnu/emacs/pretest/"
FILENAME=$(curl -s ${PRETEST_URL} | sed -n 's/^.*a href="\(emacs-24.[0-9\.]*tar.gz\)".*$/\1/p' )
curl -o ${FILENAME} ${PRETEST_URL}${FILENAME}
tar -xzof $FILENAME
cd ${FILENAME%.tar.gz}
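
If that sed expression looks opaque: it just scrapes the tarball’s href out of the directory listing. Here is the same idea sketched in Python against a made-up snippet of the listing (the version number is illustrative, not the actual pretest release):

```python
import re

# A made-up fragment of the pretest directory listing (illustrative only)
html = '<tr><td><a href="emacs-24.0.92.tar.gz">emacs-24.0.92.tar.gz</a></td></tr>'

# Same pattern the sed one-liner uses: an emacs-24.* tarball inside an href
match = re.search(r'href="(emacs-24\.[0-9.]*tar\.gz)"', html)
print(match.group(1))  # → emacs-24.0.92.tar.gz
```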

If that worked, you are now sitting in the extracted directory of the latest emacs-24 pretest source. Now for some instructional information. Any significantly large project needs a decent number of development packages installed for a successful compile, and those can be a pain to identify. Earlier I claimed that I didn’t pull my hair out, which means I cheated: I grabbed the latest Fedora source RPM. I didn’t actually want to install the src.rpm, but rather to extract the emacs.spec file, which acts like a blueprint for the build. I’ll give you the answer below, but if you’d like to know how to extract the specfile, try this:
Note: You do not need to do this step. Instructional only.

SRCRPM=~/Download/emacs-23.3-7.fc16.src.rpm
# Your SRCRPM may differ depending on what you end up downloading.
mkdir tmp && cd tmp
rpm2cpio $SRCRPM | cpio -ivd
sed -n -e 's/,/ /g' -e 's/^BuildRequires: //p' emacs.spec | xargs sudo yum -y install
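
If sed isn’t your thing, here is the same BuildRequires extraction sketched in Python against a made-up specfile fragment (the package names are illustrative):

```python
import re

# A made-up fragment of an emacs.spec file (illustrative only)
spec = """\
Name: emacs
BuildRequires: atk-devel, cairo-devel
BuildRequires: ncurses-devel
%build
"""

packages = []
for line in spec.splitlines():
    m = re.match(r'BuildRequires:\s*(.*)', line)
    if m:
        # Commas separate packages on a single line, just like the sed above
        packages.extend(m.group(1).replace(',', ' ').split())

print(packages)  # → ['atk-devel', 'cairo-devel', 'ncurses-devel']
```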

Note that the last command in that section installs the necessary development packages for our build. Since I’m not requiring you to do the step above, here is the command for you:

sudo yum -y install atk-devel cairo-devel freetype-devel \
  fontconfig-devel dbus-devel giflib-devel glibc-devel gtk2-devel \
  libpng-devel libjpeg-devel libtiff-devel libX11-devel libXau-devel \
  libXdmcp-devel libXrender-devel libXt-devel libXpm-devel \
  ncurses-devel xorg-x11-proto-devel zlib-devel librsvg2-devel \
  m17n-lib-devel libotf-devel autoconf automake bzip2 cairo texinfo \
  gzip GConf2-devel alsa-lib-devel desktop-file-utils python2-devel \
  python3-devel util-linux

The other part of the specfile you’ll typically want to look at, if you’re cheating like me, is the %build section. That is where you’ll find the actual commands used to configure and build the binaries. There I found the configure switches used so I don’t have to pick out which ones I’ll need. Again, just like figuring out the development packages, figuring out configure options can also be a chore. Let’s get to configuring, building and installing it now.

./configure --prefix=/usr/local/emacs24 --with-dbus --with-gif --with-jpeg --with-png \
  --with-rsvg --with-tiff --with-xft --with-xpm --with-x-toolkit=gtk
make
./src/emacs --version # Look good? The INSTALL doc suggests testing: ./src/emacs -Q
sudo make install

Well, that worked for me and hopefully it worked for you too. Notice that I used the --prefix=/usr/local/emacs24 option on my configure line, which means everything got cleanly installed into its own separate base directory of /usr/local/emacs24. Since you won’t want to type that path explicitly each time you launch emacs, we’ll have to inform Fedora of our new alternative.

sudo alternatives --install /usr/bin/emacs emacs /usr/local/emacs24/bin/emacs 20000
sudo alternatives --install /usr/bin/emacsclient emacsclient /usr/local/emacs24/bin/emacsclient 20000

And there, we’re done. Congratulations. You have installed emacs version 24 on your Fedora system. Let me know if you’ve had any problems or have a better recommendation.

December 23, 2011

Installing Pithos on Fedora within a Virtualenv

Filed under: administration,blogging,linux,python,usability — jonEbird @ 12:40 pm

I listen to a lot of music while at home. I am a Pandora user and have been very happy with my Pandora One subscription now for over two years. The machine used for playing my music is what I call my “media PC”. It is called that because it sits in my entertainment stand and is connected to my Sony receiver via HDMI, making the multimedia experience as good as I can get. If you put those two facts together, you can see that I stare at my desktop a lot, and I thought it would be nice to integrate my TV into the rest of the decor of the house. I primarily do that by being very selective in finding desktop pictures and generally clearing the desktop of any clutter. Think of the large 47″ LCD television as one big painting for the living room.

Which leads me to my one, sole problem with Pandora: I like to look up and read the Artist and Title of the track being played but I don’t want the browser to also consume my visual space. (I also don’t want to mess around with Adobe Air for the desktop version of Pandora) Enter Pithos. By this point, I should point out that my media PC is running Fedora Core 15 and I’m a Gnome user (let’s not talk about Gnome3). That is important because Pithos was written for gnome users.

Pithos is great. It has a simple UI design, still allows normal Pandora song control, has an easy drop-down for my stations, and can still star (thumbs up) songs, all while being small and unobtrusive. And now we are at the subject of this blog post: installing Pithos on a Fedora Core machine.

This installation guide will follow my other guides in the same “copy & paste” format. That is, you should be able to simply open a shell, copy the block of shell code below, paste it into your terminal and be ready to launch Pithos. The one configurable item I left in there is whether you’d like to install Pithos within a virtualenv. I won’t go into detail about what virtualenv is for this discussion, but suffice to say that you’d choose it if you want to install Pithos in an alternative path that you own instead of /usr/local/bin/. Below, when you copy & paste the instructions, you can simply leave out the "I_LIKE_VIRTUALENV" variable, or change its value from “yes” to anything else, to install the “normal” way. I chose to install via virtualenv to 1. keep my system site-packages clean and 2. keep /usr/local uncluttered. When I do this, I mostly only have to worry about backing up my home directory between rebuilds.

Again: If you’d like to use virtualenv, keep the "I_LIKE_VIRTUALENV" variable set to “yes”.
Furthermore, when using virtualenv you can control the env path by setting the VIRTUALENV variable. Some people keep a separate directory for their virtualenvs. E.g. VIRTUALENV=virtualenvs/pithos
(Copy and paste away!)

# Keep this variable to install within a virtualenv.
#   otherwise, skip this line or change from "yes" to anything else.
I_LIKE_VIRTUALENV="yes"
VIRTUALENV="" # Set this to control where your virtualenv is created
# --- Rest is pure copy & paste gold ---
sudo yum -y install python pyxdg pygobject2 \
  gstreamer-python notify-python pygtk2 dbus-python \
  gstreamer-plugins-good gstreamer-plugins-bad \
  bzr python-virtualenv
# FYI, those last two are not direct requirements but tools to complete this
cd; bzr branch lp:pithos pithos
if [ "${I_LIKE_VIRTUALENV}" == "yes" ]; then
  virtualenv ${VIRTUALENV:-pithos_venv}
  source ${VIRTUALENV:-pithos_venv}/bin/activate
  # The money shot... fingers crossed
  cd pithos; python setup.py install
else
  cd pithos; sudo python setup.py install --prefix=/usr/local
fi

And there you have it. A clean, aesthetically pleasing music experience. Enjoy.
Desktop Shot with Pithos

July 10, 2011

64bit Google Chrome with Flash on Fedora – Take2

Filed under: administration,linux,usability — jonEbird @ 10:08 am

Early last year I wrote up my procedure for getting Chrome set up with Flash on a 64bit Fedora Core 13 build. It is now dated and I hope people are no longer using it. However, if you like the copy & paste style of my direct installation guidelines, I thought I’d offer an updated post after a recent Fedora Core 15 install.

First things first, we still need to create new Yum repositories like we did last time. The biggest differences from last time are that I would no longer recommend installing the beta build of Chrome, and that Adobe has moved the 64bit Flash player out of a lab project into an official release. The fine folks working on the Fedora project have lent their help in instructing people on how to set up Flash as well, and you may want to check that page for a more detailed explanation of what we’re doing. In particular, I’ll be using a repository hosted by Fedora member leigh123linux.

Here is the final copy & paste version: (Like last time, if you are already root, take out the “sudo”)

# Creating the Google repo
cat <<\EOF | sudo tee /etc/yum.repos.d/google.repo
[google64]
name=Google - x86_64
baseurl=http://dl.google.com/linux/rpm/stable/x86_64
enabled=1
gpgcheck=1
gpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub
EOF
# Slip this guy in here... who doesn't want the gchat plugin?
cat <<\EOF | sudo tee /etc/yum.repos.d/google-talkplugin.repo
[google-talkplugin]
name=google-talkplugin
baseurl=http://dl.google.com/linux/talkplugin/rpm/stable/x86_64
enabled=1
gpgcheck=1
EOF
# Now to install the latest flash release yum repo from "leigh123linux"
leighURL="http://www.linux-ati-drivers.homecall.co.uk/flashplayer.x86_64/"
latest_release=$(wget -qO- $leighURL |\
sed -n '/flash-release/s/^<LI><A HREF="\([^"]*\)".*$/\1/p' | sort -n | tail -1)
sudo yum -y localinstall --nogpgcheck ${leighURL}${latest_release}
# Actually installing Chrome and the Flash player
sudo yum -y install google-chrome-stable google-talkplugin flash-plugin
# Creating a plugins directory
[ ! -d /opt/google/chrome/plugins ] && sudo mkdir /opt/google/chrome/plugins
sudo ln -s /usr/lib64/flash-plugin/libflashplayer.so /opt/google/chrome/plugins/

If you noticed, I slipped in the Google chat plugin. That is the necessary RPM for enabling video chat. If you really didn’t want it installed, simply run "sudo yum erase google-talkplugin". Enjoy your surfing.

May 19, 2010

Happy Belated Birthday to Me

Filed under: administration,architecture,blogging — jonEbird @ 9:34 pm

A little play on blog titles going on here. Today I enter a new chapter in my career as I accept the Linux and Unix Architect role at Nationwide. I originally applied for the position back in February and had hoped to land it on my birthday in April, but the process was delayed for reasons beyond my control. I like to be able to work towards a promotion as a birthday present for myself, in recognition of all the hard work, just like four years ago when I congratulated myself on achieving Senior Systems Administrator.

I have been preparing for this position for the last few years, and yet in some respects it feels like I don’t know what I’m entering. That might terrify some people, but it is this very fact that most energizes me. The last time I felt this way, I was transitioning from a purely development role into Administration. At the time, my high school friend Cory Sanders encouraged me, saying, “You’ll do fine.” I’ve received the same kind of encouragement recently and I appreciate the support. I’m energized about this opportunity because I will be learning so much. When I switched over to System Administration, I spent hours studying admin books while my wife (then girlfriend) was working at Tim Horton’s. Fueled by coffee and donuts, I climbed the learning curve as fast as I could.

My best description of an Architect is someone who leverages their broad technical experience to help the business decide what to pursue, where to invest and ultimately where to focus further development. I have always had a great deal of success influencing people outside of my control, but I plan to study that art in the book Influence: The Psychology of Persuasion. I am currently learning about personalities via Personality Plus. Somewhere in the middle of those books, I’ve already borrowed The New Rational Manager, from which I hope to gain a better systematic approach to problem resolution. I already consider my troubleshooting skills to be very good, but I was impressed by how my now-fellow Architects were able to keep a room full of technicians focused on the resolution in such an organized manner.

Beyond the academic focus, this new position is largely about building relationships. I need to devote time to teams and individuals at lunch, over coffee and in the hallways. They need to be comfortable coming to me with their problems, and I need them to be receptive to the directions I set for them. This will be the “easy” part of the job: as a person of a Peaceful Phlegmatic personality disposition, building relationships is naturally easy.

Finally, I hope to greatly increase my business acumen as the next Linux and Unix Architect. I am looking forward to having strategic conversations with our business partners such as RedHat, HP, Sun Oracle, Novell, IBM, Veritas and many more. I expect to be working more closely with existing management on budgetary decisions. Frankly, I am struggling to enumerate further business categories on which I should be focusing, which underscores my ignorance in the field. I have had a subscription to Entrepreneur Magazine for nearly a year, watch business videos online from Stanford University and try to participate in local TechColumbus networking and business-related activities. I will look for any further opportunities that present themselves and take it from there.

When I started mentoring with our previous Architect, I told him my goal was to take his job, “…but don’t worry, I want you to move on to bigger and better work first.” I need those kinds of goals to keep me motivated. This next goal will be quite lofty and I have no idea how or when I might achieve it, but the next position I am setting my sights on is CTO. I have a feeling it’s going to be longer than four years before I’ll be able to write that blog post.

February 9, 2010

Deciphering Caught Signals

Filed under: administration,linux,python — jonEbird @ 6:49 pm

Have you ever wondered which signal handlers a particular process has registered? A friend of mine was observing different behavior when spawning a new process from his Python script vs. invoking the command in the shell. Actually, he was consulting me about the best way to shut down the process after spawning it from his Python script. You see, the program is actually just a shell wrapper which then kicks off the real program. His program would learn the process id (pid) of the wrapper, and sending a kill signal to it effectively terminated the wrapper while leaving the actual program running. By comparison, I asked him what happens in the shell when he tries to kill the program. Unlike when spawned from the Python script, this time the program and wrapper together would shut down cleanly. My initial question was, “Are there different signal handlers being caught between the two scenarios?” He wasn’t sure, and our dialog afterwards is what I’d like to explain to you now.

A pretty straightforward way to query which signal handlers a process has is to use “ps”. Let’s use my shell as an example:

$ ps -o pid,user,comm,caught -p $$
  PID USER     COMMAND                   CAUGHT
 3508 jon      bash            000000004b813efb

My shell is currently catching the signals represented by the signal mask 0x000000004b813efb. Pretty straightforward, right? Yeah, unless you haven’t done much C programming, like my friend. He was not used to seeing hexadecimal numbers where each bit represents an on/off flag for each available signal. To follow along, make sure you understand binary representation of numbers first, and note that our number 0x000000004b813efb is represented in binary as 01001011100000010011111011111011. Now, reading that number from the right (least significant bit) to the left, note which nth bits are set to one. You can see that they are the 1st, 2nd, 4th, 5th, etc. Now all we have to do is associate those positions with the signals they represent. The easiest way to see which numeric values are assigned to which signals is the “kill” command:

$ kill -l
 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL
 5) SIGTRAP      6) SIGABRT      7) SIGBUS       8) SIGFPE
 9) SIGKILL     10) SIGUSR1     11) SIGSEGV     12) SIGUSR2
13) SIGPIPE     14) SIGALRM     15) SIGTERM     16) SIGSTKFLT
17) SIGCHLD     18) SIGCONT     19) SIGSTOP     20) SIGTSTP
21) SIGTTIN     22) SIGTTOU     23) SIGURG      24) SIGXCPU
25) SIGXFSZ     26) SIGVTALRM   27) SIGPROF     28) SIGWINCH
29) SIGIO       30) SIGPWR      31) SIGSYS      34) SIGRTMIN
35) SIGRTMIN+1  36) SIGRTMIN+2  37) SIGRTMIN+3  38) SIGRTMIN+4
39) SIGRTMIN+5  40) SIGRTMIN+6  41) SIGRTMIN+7  42) SIGRTMIN+8
43) SIGRTMIN+9  44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12
47) SIGRTMIN+13 48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14
51) SIGRTMAX-13 52) SIGRTMAX-12 53) SIGRTMAX-11 54) SIGRTMAX-10
55) SIGRTMAX-9  56) SIGRTMAX-8  57) SIGRTMAX-7  58) SIGRTMAX-6
59) SIGRTMAX-5  60) SIGRTMAX-4  61) SIGRTMAX-3  62) SIGRTMAX-2
63) SIGRTMAX-1  64) SIGRTMAX

Armed with this knowledge, you can now produce a human-readable report of which signals my shell is catching: it has signal handlers set up for SIGHUP(1), SIGINT(2), SIGILL(4), SIGTRAP(5), etc.
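
To check the bit arithmetic yourself, here is a quick Python sketch that pulls the set bits (i.e. the signal numbers) out of the mask above:

```python
mask = 0x000000004b813efb

# Bit n (counting from 1) corresponds to signal number n;
# shift and test each bit to list the caught signals.
caught = [n for n in range(1, 65) if (mask >> (n - 1)) & 1]
print(caught[:6])  # → [1, 2, 4, 5, 6, 7]
```

Note the missing 3: SIGQUIT is not being caught, which comes up again below.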

A quick note about signal handlers. A signal handler is basically a jump location for your program to go to after receiving a particular signal. Think of it as an asynchronous function call, or more succinctly as a callback. That is, your program’s execution will jump to the function you’ve registered as your signal handler immediately upon receiving said signal, no matter where in your program’s execution you currently are. Since the call is asynchronous, a lot of people have a signal handler merely toggle a global flag, let the program resume its processing, and check on that flag at a more convenient time.
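
That flag-toggling pattern looks something like this in Python (using SIGUSR1 and a self-delivered signal purely for demonstration):

```python
import os
import signal

got_usr1 = False

def handler(signum, frame):
    # Do as little as possible here: just record that the signal arrived.
    global got_usr1
    got_usr1 = True

# Register the handler, then deliver SIGUSR1 to ourselves.
signal.signal(signal.SIGUSR1, handler)
os.kill(os.getpid(), signal.SIGUSR1)

# Back in the normal flow of the program, check the flag at our convenience.
print(got_usr1)  # → True
```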

Now that we know how to see which signals are being caught by a program, and what signal handlers are, let’s create a new signal handler for my shell and note the changed signal mask. Reviewing my currently caught signals again, I notice I’m not doing anything for the 3rd signal, SIGQUIT. I want to assign a signal handler to this signal so we can see the changed signal mask. I’m going to have the shell execute a simple function upon receipt of the SIGQUIT signal.

$ function sayhi { echo "hi there"; }
$ trap sayhi 3
$ trap sayhi SIGQUIT # same thing as the number 3
$ kill -QUIT $$
hi there

Now, how about our signal mask? Has it changed?

$ ps -o pid,user,comm,caught -p $$
  PID USER     COMMAND                   CAUGHT
 3508 jon      bash            000000004b813eff

The signal mask has changed from 0x000000004b813efb to 0x000000004b813eff. The new signal mask, converting from hexadecimal to binary, is 01001011100000010011111011111111. Notice how the 3rd bit from the right is now a "1" where before it was a "0".
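
Comparing two masks by eye gets old; XOR-ing them shows exactly which bits (i.e. which signals) changed. A quick Python sketch using the two masks above:

```python
before = 0x000000004b813efb
after = 0x000000004b813eff

# XOR leaves only the bits that differ between the two masks.
diff = before ^ after
changed = [n for n in range(1, 65) if (diff >> (n - 1)) & 1]
print(changed)  # → [3]  i.e. SIGQUIT
```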

Understanding how the signal masks are represented is good, but it’s still a pain if you want to quickly compare the signals caught by two different processes. To that point, I created a little Python script to do the work for me:

#!/usr/bin/env python

import sys, signal

def dec2bin(N):
    binary = ''
    while N:
        N, r = divmod(N,2)
        binary = str(r) + binary
    return binary

def sigmask(binary):
    """Take a string representation of a binary number and return the signals associated with each bit.
       E.g. '10101' => ['SIGHUP','SIGQUIT','SIGTRAP']
            This is because SIGHUP is 1, SIGQUIT is 3 and SIGTRAP is 5
    """
    sigmap = dict([ (getattr(signal, sig), sig) for sig in dir(signal) if (sig.startswith('SIG') and '_' not in sig) ])
    signals = [ sigmap.get(n+1,str(n+1)) for n, bit in enumerate(reversed(binary)) if bit == '1' ]
    return signals

if __name__ == '__main__':

    if sys.argv[1].startswith('0x'):
        N = int(sys.argv[1], 16)
    else:
        N = int(sys.argv[1])

    binstr = dec2bin(N)
    print '"%s" (0x%x,%d) => %s; %s' % (sys.argv[1], N, N, binstr, ','.join(sigmask(binstr)) )

To use my signals.py program, copy it to a file, make it executable and run it, passing the signal mask of your program.

$ wget -O ~/bin/signals.py http://jonebird.com/signals.py
$ chmod 755 ~/bin/signals.py # assuming ~/bin is in your PATH
$ signals.py "0x$(ps --no-headers -o caught -p $$)"
"0x000000004b813eff" (0x4b813eff,1266761471) => 1001011100000010011111011111111;
 SIGHUP,SIGINT,SIGQUIT,SIGILL,SIGTRAP,SIGIOT,SIGBUS,SIGFPE,SIGUSR1,SIGSEGV,SIGUSR2,
 SIGPIPE,SIGALRM,SIGCLD,SIGXCPU,SIGXFSZ,SIGVTALRM,SIGWINCH,SIGSYS

Now back to my friend and his problem. I asked him to fire off the program both from his Python script and again directly from the shell. Each time, I asked him to check the caught-signal mask of both the wrapper program and the actual binary and report the masks to me. The wrapper was consistently catching only SIGINT and SIGCLD, but the story was not as clear for the binary.
When kicked off via Python, the binary was catching the following signals:

  SIGQUIT,SIGBUS,SIGFPE,SIGSEGV,SIGTERM

whereas when invoked directly from the shell, the binary was catching:

  SIGINT,SIGQUIT,SIGBUS,SIGFPE,SIGSEGV,SIGTERM

Initially, I thought, “Ah ha, see, it’s catching SIGINT in addition to the other signals when invoked from the shell!”, but I quelled my excitement as I realized it didn’t explain why both the wrapper and the binary shut down in the shell. If you send a SIGINT to the wrapper via “kill -INT <wrapperpid>”, nothing happens. Any other signal the wrapper was not catching, such as SIGTERM (the default sent by “kill” when you do not specify a signal), would cause the wrapper to terminate and orphan the binary, leaving it running.

The explanation lies within the shell code. We went through the various cases, and when the behavior wasn’t explained by the wrapper handling some signal and shutting down the binary, I was left presuming the interactive shell was doing something unique. I initially observed this by running strace against the binary and seeing the SIGINT arrive, and later confirmed the behavior by consulting the bash source code. When you hit control-c in the shell, the shell sends a SIGINT to both processes because they are in the same process group (pgrp). I literally downloaded the bash source code to confirm this; quoting a comment from the source, “keyboard signals are sent to process groups”.* That means a SIGINT is sent to both the wrapper and the binary. When that happens, the wrapper does nothing, as seen in the prior experiments, but the binary catches it and does a clean shutdown, which then allows the wrapper to complete and exit as well.

– Jon Miller

* How to efficiently root through source code is a subject for another blog. Within the bash-3.2.48.tar.gz source bundle, look at line 3230 in jobs.c.

September 16, 2009

Server Death

Filed under: administration,blogging,linux — jonEbird @ 7:41 pm

I often joke that the only people who read my weblog are bots, so it shouldn’t bother me when my site is down, but it does. Last week the server, which was also doubling as a workstation for the wife, died. “The computer is not working,” the Wife explained. I didn’t check it out immediately, as I just assumed that X had crashed or something else was preventing her from using firefox. Like I said, I’m not overly concerned with my site’s uptime.

But when I finally did check it out, sure enough, it was not looking good: absolutely no display on the monitor. Considering I had replaced my video card not too long ago and I could no longer ssh into the machine, I’m thinking either the CPU and/or the motherboard are dead.

DeadPC
Hercules taking a look

After Hercules and I surveyed the situation, we decided to pull the sheet over its head. It’s had a nice long life (in PC years), since 2004.

I headed to microcenter today to check out what kind of motherboards, CPUs and even memory they had on sale. If you consider that my last machine was running with only 756M of memory and an ageing AMD 2GHz processor on an abit KV8 motherboard, while happily serving my website and handling the Wife’s facebook usage, then you can understand I was looking for the smallest, cheapest solution I could find. That solution was looking to be somewhere around $225.

Not willing to rush into a $200+ investment, I instead bought an IDE enclosure capable of serving my data via USB for a mere $21.

Now for the Restoration of my Website

I really shouldn’t even be talking about this: I should have had regular MySQL dumps, along with the full web content, backed up to another machine. Aside from a laptop, the other “real” PC in the house is an Acer I bought as a media machine, which sits in my entertainment center. It was never intended to run 24×7, so I only did on-demand backups of my important files, which were actually outside of my website. Another justification for not having regular backups was that I had two internal Seagate drives configured in a software mirror. I always figured if I had some sort of hardware problem, I’d be able to replace it and, in the worst case, never really lose my data.

So I have my hard drive and am now looking to get my WordPress site back online with the PC in the living room. After plugging in the hard drive, I need to activate the MD device and mount my filesystem:

[jon@pc ~]$ sudo mdadm --assemble --scan
mdadm: /dev/md/0_0 has been started with 1 drive (out of 2).
[jon@pc ~]$ cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sdd1[1]
      241665664 blocks [2/1] [_U]

unused devices: <none>
[jon@pc ~]$ sudo mount /dev/md127 /mnt

My two machines were two Fedora releases apart. I wondered if I could chroot, start up MySQL and get a fresh, clean dump of the database…

[jon@pc ~]$ sudo su -
[root@pc ~]# chroot /mnt
[root@pc /]# ls
bin  boot  dev  etc  home  lib  lib64  lost+found  media  mnt
opt  proc  root  sbin  selinux  srv  sys  tmp  usr  var

[root@pc ~]# mount -t proc none /proc
[root@pc ~]# /etc/init.d/mysqld status
mysqld dead but subsys locked
[root@pc ~]# /etc/init.d/mysqld restart
Stopping MySQL:                                            [  OK  ]
Starting MySQL:                                            [  OK  ]
[root@pc ~]# /etc/init.d/mysqld status
mysqld (pid 9394) is running...
[root@pc ~]# mysqldump -u root -p wordpress > wordpress.mysqldump
Enter password:
[root@pc ~]# wc -l wordpress.mysqldump
354 wordpress.mysqldump

Cool!

The rest of the migration involved an rsync of the /var/www/html/ content, adjustments to the default Apache config, granting my WordPress user access to the database, and finally updating my router to direct requests for port 80 to my media PC.

At this point, I guess I’ll be running this site from the living room until I decide what to do about my server / workstation. I’ve always wanted to build a slimmed-down, efficient virtual server to host my website, then migrate it between server and laptop during maintenance / patching of my machines, but my AMD processor didn’t support hardware virtualization assistance, so it was painfully slow. I think I’ll keep an eye out for a used, server-class machine. Let me know if you find any, bots. Thanks. ;-)

June 15, 2009

Intern Regiment

Filed under: adminstration,blogging,linux — jonEbird @ 10:06 pm

Today was Patrick Shuff‘s first day with our team. He is our intern for the summer, and I actually recommended that we steal him from another team after meeting him last year. From my half-day assessment of him back then, I thought he was much better suited to our Linux team than the Windows provisioning team. He found my GNU screen, emacs and script-automation tricks fascinating, and right there invalidated himself as a legit Windows guy. The Windows experience he picked up last year was no doubt useful, but it’s not something you enjoy returning to. Just like it’s very useful to have learned C as your first programming language, an awesome basis for a solid understanding of the computing innards, but you don’t want to return to it after programming in Python.

I have been trying to brainstorm good ideas for him to work on in the team. I suppose the main reason I want his experience to be as positive as possible is that I myself was an intern for about three years. One thought I had was to turn the three-month schedule into an intense one-assignment-per-week ordeal, where I throw a new task at him each week intended to inject new insights into all facets of becoming a well-rounded Linux administrator. Of course, one week is not enough time to properly study most of the categories of topics I was thinking of, but it would have a nice organized structure and would be nearly guaranteed to provide an intense experience worthy of writing home about. Okay, if he ends up writing home about it, then we know he’s a dork, but it would also mean he’s probably found a career in which he’d never have to work a day of his life because it’s enjoyable.

I started brainstorming my categories in a quick outline. Of course, this list is subject to change, and if we actually end up going through with this I’ll naturally have to report back on the actual topics covered each week and what the assignments were. If nothing else, it should keep my weblog busier than normal, which isn’t hard. So, here’s what I think constitutes a well-rounded Linux administrator:

  • ever-improving efficiencies
    • editor
      - Pick one: emacs or vim. Just don’t settle for being able to modestly edit text.
    • shell
      - An essential, stereotypical Linux admin skill. And yes, it is important. Study up.
  • organizational skills
      - Cannot be overstated. Aren’t we always improving our organizational skills?
      - Develop consistent habits in note taking. Try reading Getting Things Done.
    • project notes
    • meeting notes
    • hallway conversations
    • company hierarchy
  • technical expertise
    • operating systems
    • programming languages
    • architectural design
    • applications administration
  • staying current
    • awesome RSS feeds
    • key social article-sharing sites
      - Looking at you, reddit.
    • magazines
    • books
  • soft skills
    • working within a team
    • speech / presentation
    • written communication
      - tech writing, effective email communication
  • career, career stuff
    • resume writing
    • networking
    • staying driven
    • finding your path

Sorry for the lack of details on each item, but it’s kind of silly to populate the list further now. For now it remains an idea for a summer internship. Only once the plan comes to fruition will I report back with juicier details.

November 4, 2008

Management Tools for Multi-Vendors

Filed under: adminstration,blogging — jonEbird @ 4:44 pm

Building a tool which manages multiple vendors and platforms by piggybacking off their technology is a losing battle. Be it provisioning, patching, monitoring, etc., it doesn’t matter. When you choose such a tool, you end up paying big bucks for other people to constantly watch and react to what the various vendors are doing. Combine that realization with the fact that a tool will almost never perfectly suit the unique requirements of your business, and you’d be in denial not to realize that it sucks. Beyond the sheer money of the endeavour, you are also wasting your associates’ time, which will probably never be recouped.

I will never say anything is impossible. You can build such a tool, and it can have the necessary hooks to allow your associates to customize it to suit your needs. My point is that work is much harder to pull off than the naive observer might realize. Imagine you are abstracting the details of Suse’s automated installer, AutoYast. But let’s say the OpenSuse project decides to drastically change how the unattended installer works. Their efforts, no doubt, will be motivated by improving their end users’ experience, presumably making it quicker, simpler and overall a better product. Depending on how drastic the change is, it could represent an entirely different philosophical approach to OS installs. As the tool builder trying to provide a layer of abstraction, you have just stuck yourself with a large effort to refactor those pieces of your application to handle the radical changes being made. That is a given risk, if that is what you’re providing. My point is, as a customer, just don’t buy that product.

To purchase such a product, you are basically stating that you believe that particular team of developers is going to continue to accurately and intuitively abstract those details for you. Don’t forget you’re still paying a lot of money for this. But this is how management thinks: “I’m going to buy this tool, let my associates use one tool, and let them spend their time elsewhere.” It doesn’t happen. Instead, the associates shift their energies to learning a new tool, figuring out how to customize it for their needs, and probably end up with one FTE dedicated to maintaining it.

Please, don’t waste your time and money. Spend your time collaborating with teammates. Decide upon OS and install standards. Each OS installer provides the ability to perform basic configuration of disk, network, software, etc., and then allows for a final post-install hook. That hook then leans upon your team’s efforts. You will end up spending the same amount of work creating your post-install scripts as it takes to merely install and train folks on an “all in one” tool. The big difference with rolling your own is that you now own the tool set: it already exactly meets your needs, everyone knows and understands how it works, updates are easy, the knowledge and skill gained are more widely recognized, and all the while you haven’t spent more money.
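To make the hand-off concrete, here is a rough sketch of what a RedHat kickstart post-install hook can look like; the server name and script path are invented for illustration, and Jumpstart and AutoYast offer equivalent hooks:

```shell
# Hypothetical kickstart %post fragment: the vendor installer handles disk,
# network and package selection, then hands control to the team's own scripts.
%post --log=/root/post-install.log
# build-server.example.com and the script path are placeholders
curl -o /root/post-install.sh http://build-server.example.com/post-install.sh
sh /root/post-install.sh
%end
```

Everything site-specific lives in that one downloaded script, which your team owns and versions centrally.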

Now for the counter-point: you have to have a good team to pull this off. Team members will require enough experience to demonstrate the proper discernment in building out a quality framework. So what if you maintain a Solaris Jumpstart, RedHat kickstart, Suse autoyast, etc. all together? Keep your data and configs centrally managed, parallel the concepts between each one, maintain like directory hierarchies, and write straightforward documentation on using and performing builds. Doesn’t it make sense to be proficient in the OS tool which comes directly from the vendor, at least from a personal development perspective?

October 25, 2008

PyWorks Stuff

Filed under: adminstration,python,usability — jonEbird @ 12:00 am

For the 2008 PyWorks convention, I will be presenting on LDAP and Python. The presentation is really about demystifying LDAP and encouraging people to use and extend LDAP for their config-file needs. To make my point, the last half of my presentation will be a demo. This entry is your basic landing point, presuming you are looking for a copy of the scripts and/or slides after seeing my presentation. (Oh! Never mind, your Google search landed you here.)

PyWorks Speakers Badge

For the demo, I will be leveraging the fail2ban project. It is a Python-based application which scans typical application logs for security failures and bans offending IPs from being able to connect again. It also uses the built-in ConfigParser module for reading its 30+ config files, which is why I have chosen to use it. For the demo, I have created two scripts:

The first one, configparser2ldap.py is used to process a set of config files and automatically generate LDAP schema as well as LDIF data.
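The general idea behind the generation step can be sketched as follows; note this is my illustration, not the script itself: the DN layout, object class and attribute names here are invented, and the real configparser2ldap.py also emits LDAP schema, not just LDIF.

```python
from configparser import ConfigParser  # Python 3 name; the original targeted Python 2's ConfigParser

def config_to_ldif(parser, base_dn="ou=fail2ban,dc=example,dc=com"):
    """Turn each config section into an LDIF entry (illustrative layout)."""
    lines = []
    for section in parser.sections():
        lines.append(f"dn: cn={section},{base_dn}")
        lines.append("objectClass: applicationProcess")
        lines.append(f"cn: {section}")
        for opt, val in parser.items(section):
            # A real schema would define one attribute per option; we fake it
            lines.append(f"description: {opt}={val}")
        lines.append("")  # blank line separates LDIF entries
    return "\n".join(lines)

cfg = ConfigParser()
cfg.read_string("[sshd]\nenabled = true\nmaxretry = 5\n")
print(config_to_ldif(cfg).splitlines()[0])  # → dn: cn=sshd,ou=fail2ban,dc=example,dc=com
```

The resulting LDIF can then be loaded into the directory with a standard tool such as ldapadd.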

Next, I have my ldapconfig.py module, where I extended the ConfigParser module to support making queries to LDAP. I am basically overriding only the read() method and leaving the rest of the module alone. This way the only modification to the fail2ban application is how it instantiates the ConfigParser, and I won’t have to become a full-time fail2ban developer if I want to centralize the configuration data in LDAP.
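The override-read() approach looks roughly like this. This is a sketch, not ldapconfig.py itself: it uses the Python 3 configparser name, and the fetch callable stands in for a real python-ldap search (ldap.initialize() plus search_s()) so the shape of the subclass is visible without a directory server.

```python
from configparser import ConfigParser  # Python 3 name for the ConfigParser module

class LDAPConfigParser(ConfigParser):
    """Sketch of the ldapconfig.py idea: let read() pull sections and
    options from LDAP instead of flat files. The fetch callable is a
    stand-in for a real python-ldap query."""

    def __init__(self, fetch=None, **kwargs):
        super().__init__(**kwargs)
        self._fetch = fetch  # callable returning {section: {option: value}}

    def read(self, filenames, encoding=None):
        if self._fetch is not None:
            # Populate sections from the directory, ignoring the filenames
            for section, options in self._fetch().items():
                if not self.has_section(section):
                    self.add_section(section)
                for opt, val in options.items():
                    self.set(section, opt, val)
            return []
        return super().read(filenames, encoding=encoding)

# Hypothetical stand-in for an LDAP search; a real version would map
# directory entries to sections and options.
def fake_ldap_fetch():
    return {"Definition": {"loglevel": "3", "logtarget": "SYSLOG"}}

cfg = LDAPConfigParser(fetch=fake_ldap_fetch)
cfg.read("ignored.conf")
print(cfg.get("Definition", "loglevel"))  # → 3
```

Since the rest of the ConfigParser interface is untouched, the calling application only changes where it constructs the parser.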

And that is really the main point of my presentation: The power of centralizing your configuration data and how it can drastically change how you administer your large scale server farm.

Downloads

LDAP + Python Slides.

configparser2ldap.py script to auto-generate LDAP schema and LDIF from ConfigParser compatible config files.

ldapconfig.py python module which inherits the ConfigParser and supports optionally pulling config data from LDAP.

Next Page »