May 15, 2010

Sun’s Future under Oracle

Filed under: blogging,opinion — jonEbird @ 8:34 am

Nice insight into the transformation of Sun under Oracle’s helm. The article discusses Exadata 2, the appliance built on Oracle’s newly acquired SPARC hardware.

The machine costs more than $1 million, stands over 6 feet tall, is two feet wide and weighs a full ton. It is capable of storing vast quantities of data, allowing businesses to analyze information at lightning-fast speeds or instantly process commercial transactions.

The part I don’t like about that statement is that they’re trying to build both a data warehousing solution and an OLTP system into the same machine. That sounds horribly inefficient. What I’ve taken from the article is that Larry seems to be a very savvy businessman, but it’s interesting that Oracle has failed to develop new software for a decade and that its revenue increases came from acquisitions.

He plans on continuing to buy up more companies in the hardware space, but once he’s done he’ll have to produce products, and that is where I question their ability to deliver. It’s nice to see that he’s stopped the bleeding within Sun, but now you’ve got a bruised and battered, over-the-hill player recovering from fractured ribs and a couple of concussions available to you on the bench. It just seems like they’re taking Brett Favre and using him to coach rugby. I guess that could work?

February 24, 2010

64bit Google Chrome with Flash on Fedora

Filed under: blogging,linux — jonEbird @ 9:22 pm

[ UPDATE as of 2011-07-10 This post is no longer advisable. Please see my updated post on setting up 64bit Chrome with Flash player for recent Fedora releases. ]

This is a quick howto on getting 64bit Flash working with your 64bit Google Chrome browser on Fedora. The unfortunate part is that I feel obligated to write this down for people, but it’s really not that complicated once you figure out a few details.

First things first, you need to get Chrome installed. I find it funny that the top hit on Google for "chrome yum repo" suggests a yum repo pointing to a web server containing only a Readme stating it’s not serving Chrome RPMs due to “legal concerns”. Google’s top hit should be its own page for the Google Yum Repository. There you will find a block of text for your Yum repository configuration, which I personally put in /etc/yum.repos.d/google.repo.

Currently, the RPM does not create a plugins directory, so we have to create one at /opt/google/chrome/plugins/. Once you have done that, you can visit Adobe’s 64bit Flash page and download the compressed tarball. Inside that tarball is a single library, which you will now want to either symlink into the plugins directory or simply copy there.

With all that in place, you are ready to fire up Chrome and tell it about your manually installed plugin. Do that via "google-chrome --enable-plugins". All should be well, and instead of testing it on some dull test page, let’s go to Pandora and listen to the “M.I.A.” channel. That funky channel seems appropriate for this procedure.

Here is the copy & paste version: (remove the "sudo" if you are root)

# Creating the repo (contents per Google's own Yum repository page)
cat <<EOF | sudo tee /etc/yum.repos.d/google.repo
[google]
name=Google - x86_64
baseurl=http://dl.google.com/linux/rpm/stable/x86_64
enabled=1
gpgcheck=1
gpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub
EOF
# Actually installing Chrome
sudo yum install google-chrome-beta.x86_64
# Creating a plugins directory
[ ! -d /opt/google/chrome/plugins ] && sudo mkdir -p /opt/google/chrome/plugins
# Grabbing Adobe's 64 bit Flash player page (URL missing from the original post)
wget -qO /tmp/flash.html "$ADOBE_FLASH_PAGE_URL"
DLURL=$(sed -n '/^.*a href.*libflashplayer.*tar.gz/s/^.*<a href="\([^"]*\)".*/\1/p' /tmp/flash.html)
wget -qO- "$DLURL" | sudo tar -C /opt/google/chrome/plugins/ -xzvof -
# Fire up Chrome with the new plugin
google-chrome --enable-plugins
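For readers squinting at that sed one-liner, here is the same href-extraction logic sketched in Python against a made-up sample line (the actual markup, host and filename on Adobe's page may differ):

```python
import re

# Hypothetical sample of the sort of line the sed expression matches;
# the real filename and host on Adobe's download page may differ.
html = '<li><a href="http://download.example.com/libflashplayer-10.0.tar.gz">64-bit tarball</a></li>'

# Mirror the sed logic: find the anchor whose href names the
# libflashplayer tarball and capture the href value.
match = re.search(r'<a href="([^"]*libflashplayer[^"]*\.tar\.gz)"', html)
if match:
    print(match.group(1))  # the URL handed to the second wget
```

The capture group grabs everything between the quotes of the href, just as the sed backreference does.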

And because I’ve been playing around with the combination of desktop background, Chrome theme and a Pandora skin in a nice, aesthetic color scheme, I’ll share a desktop screenshot of my 64bit Chrome playing some tunes.

September 16, 2009

Server Death

Filed under: administration,blogging,linux — jonEbird @ 7:41 pm

I often joke that the only people who read my weblog are bots, so it shouldn’t bother me if my site is down, but it does. Last week the server, which was also doubling as a workstation for the Wife, died. “The computer is not working,” the Wife explained. I didn’t check it out immediately, as I just assumed that X had crashed or something else was preventing her from using Firefox. Like I said, I’m not too overly concerned with my site’s uptime.

But when I finally did check it out, sure enough, it was not looking good. Absolutely no display on the monitor. Considering I had replaced the video card not too long ago and I could no longer ssh into the machine, I am thinking that either the CPU or the motherboard, or both, are dead.

Hercules taking a look

After Hercules and I surveyed the situation, we decided to pull the sheet over its head. It’s had a nice long life (in PC years), going since 2004.

I headed to Microcenter today to check out what kind of motherboards, CPUs and even memory they had on sale. If you consider that my last machine was happily serving my website and handling the Wife’s Facebook usage with only 756M of memory and an aging 2GHz AMD processor on an Abit KV8 motherboard, then you can understand I was looking for the smallest, cheapest solution I could find. That solution was looking to be somewhere around $225.

Not willing to rush into a $200+ investment, I instead bought an IDE enclosure, capable of serving my data via USB, for a mere $21.

Now for the Restoration of my Website

I really shouldn’t even be talking about this. I should have had regular MySQL dumps, along with the full web content, backed up to another machine. Aside from a laptop, the other “real” PC in the house is an Acer I bought as a media machine, which sits in my entertainment center. It was never intended to run 24×7, so I only did on-demand backups of my important files, which were actually outside of my website. Another justification for not having regular backups was that I had two internal Seagate drives configured in a software mirror. I always figured that if I had some sort of hardware problem, I’d be able to replace the failed part and, in the worst case, never really lose my data.

So I have my hard drive and am now looking to get my WordPress site back online with the PC in the living room. After plugging in the hard drive, I need to activate the MD device and mount up my filesystem:

[jon@pc ~]$ sudo mdadm --assemble --scan
mdadm: /dev/md/0_0 has been started with 1 drive (out of 2).
[jon@pc ~]$ cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sdd1[1]
      241665664 blocks [2/1] [_U]

unused devices: <none>
[jon@pc ~]$ sudo mount /dev/md127 /mnt

My two machines were two Fedora releases apart. I wondered if I could chroot in, start up MySQL and get a fresh, clean dump of the database…

[jon@pc ~]$ sudo su -
[root@pc ~]# chroot /mnt
[root@pc /]# ls
bin  boot  dev  etc  home  lib  lib64  lost+found  media  mnt
opt  proc  root  sbin  selinux  srv  sys  tmp  usr  var

[root@pc ~]# mount -t proc none /proc
[root@pc ~]# /etc/init.d/mysqld status
mysqld dead but subsys locked
[root@pc ~]# /etc/init.d/mysqld restart
Stopping MySQL:                                            [  OK  ]
Starting MySQL:                                            [  OK  ]
[root@pc ~]# /etc/init.d/mysqld status
mysqld (pid 9394) is running...
[root@pc ~]# mysqldump -u root -p wordpress > wordpress.mysqldump
Enter password:
[root@pc ~]# wc -l wordpress.mysqldump
354 wordpress.mysqldump


The rest of the migration involved an rsync of /var/www/html/ content, adjustments of the default Apache config, granting access for my WordPress user to use the database and finally updating my router to now direct requests for port 80 to my media pc.

At this point, I guess I’ll be running this site from the living room until I decide what to do about my server / workstation. I’ve always wanted to build a slimmed down, efficient virtual server to host my website and then migrate it between server and laptop during maintenance / patching of my machines, but my AMD processor didn’t support the Virtualization assistance, so it was painfully slow. I think I’ll keep an eye out for a used, server-class machine. Let me know if you find any, bots. Thanks. ;-)

September 4, 2009

Finding My Strengths

Filed under: blogging — jonEbird @ 3:17 pm

Strengths Finder 2.0 being a #1 best-selling book, there is a decent chance you’ve heard of it. At my work, our whole team got a copy. I’m busy finishing up another book, but meanwhile everyone else has completed the online assessment of their strengths and started sharing amongst the team. So, I have decided to at least take the online assessment and share my top five strengths as well.

I must admit I was skeptical of the effectiveness of the assessment, but I am now pleasantly surprised at how accurate a description of my strengths it revealed. Enough so to encourage me to hurry up and start reading the book. Once you complete the assessment, it provides a guideline, or action plan, for further taking advantage of your strengths. I particularly liked that it provides a nice, printable HTML version of that action plan, which I can then easily share with others. So, without further ado, I give you my top 5 strengths:

Jon Miller’s Strengths Finder 2.0 top five strengths

August 10, 2009

Hadoop Elephant Makes a Big Splash

Filed under: blogging,hadoop,python — jonEbird @ 5:27 pm

Big news in the world of Hadoop today: my article Running Large Python Tasks With Hadoop is published in the July edition of Python Magazine. This marks my second article with the magazine, and I had a lot of fun writing it. My interest in the anti-RDBMS movement will continue as I find more interesting ways to organize data in the enterprise.

While providing a gentle introduction to Hadoop, my article also introduces readers to my HadoopCalculator, which you can install a couple of different ways. The first way is via git, where you can pull my HadoopUtils repo from github:

git clone git://

That will bring down a few more scripts than just my HadoopCalculator. The second way to install it is to use the Python setuptools utility easy_install, or to pull down the source package from the Cheese Shop.

Thank you for reading this far. I lied. The big news today in the Hadoop world is Doug Cutting joining Cloudera. Had you going, didn’t I? Recently, while Doug was still with Yahoo!, the Microsoft and Yahoo! partnership had people wondering what impact it would have on the Hadoop ecosystem. Today, Yahoo! is the largest Hadoop user and, for obvious reasons, has contributed a lot to the community. Cloudera was already a well-known player in the Hadoop community, but their stock has risen immensely with the addition of Doug Cutting. If they were selling stock, I’d buy.

June 24, 2009

Emacs Registers and Bookmarks

Filed under: blogging,emacs,usability — jonEbird @ 5:02 pm

Every once in a while I like to re-read manuals for software I already feel quite comfortable using, in hopes of learning a new trick or two. Today I was browsing the freely available GNU Emacs Manual and was breezing through it until I hit the registers section.

Have you ever copied a region of text, planning to yank it back in multiple locations within your current buffer, but in between that work realized you needed to do some intermediate killing and yanking, and therefore overwrote the region you had originally copied? Using registers is one way to solve that problem.

Bookmarks serve a similar function to registers within Emacs. I am grouping them together in this weblog entry because their key sequences are so similar. Bookmarks are, like the name implies, a way to keep track of a position within a buffer. They can be saved and later referenced to not only re-open the particular file you were editing but also take you back to your position within that file.

I cannot seem to remember anything, from Emacs commands and keystrokes to basic shell commands, without coming up with some mnemonic for memorizing it. I have just come up with one such mnemonic for using registers and bookmarks, and I thought I’d share it with my bots (machines which read this weblog).

Register and Bookmark Mnemonics

Each key sequence starts with:
C-x r – Think “r” for register.
The general pattern is:
C-x r <key> <register> – “register” is any single digit or letter.

“key” is what I’m calling each category of register usage. Let’s explore them:

  • “<space>” – mark – Just like C-spc to set the mark, this asks that the mark be set in our register.
  • “s” – save – save the region’s text into the register.
  • “n” – number – save a particular number into the register.
  • “m” – bookmark – We’ll explore bookmarks further, but notice how you set bookmarks with what I consider the register key sequence.

Those are some of the basic storing actions, but with the same C-x r prefix you can perform other actions with the contents of the registers:

  • “+” – increment – When the register holds a number, this increments its value. Convenient to use with macros.
  • “i” – insert – This action can be used for numbers, regions of text and even marks!
  • “j” – jump – jump to a mark. This assumes you have already stored a mark in the particular register.

We’ll shift a bit into bookmarks but stay succinct within the mnemonics section here.

  • “m” – bookmark – Repeated from above. Note that this is an interactive function which allows you to name your bookmark with an intelligent name. The default will be the basename of your current buffer’s filename. So, if you’re editing /path/to/emacs_notes.txt it will default to storing the bookmark under “emacs_notes.txt”, but maybe you want it to be called “emacs notes”. If so, go ahead and type that out and hit RET.
  • “l” – list bookmarks – This opens a new pseudo buffer with the list of all of your bookmarks.
  • “b” – bookmark jump – Jump to the named bookmark. This is an interactive function as well.

Bringing it Together – Examples

Save current mark to register “l”:
C-x r <space> l
Move to mark saved in register “l”:
C-x r j l
Save number in register “n”:
C-x r n n
Now, increment that number and then insert it:
C-x r + n C-x r i n

Love the emacs notes you’re editing, bookmark it:
C-x r m emacsnotes RET
Buried into multiple install READMEs for a particular product and want to return later:
C-x r m installreadme RET

Finally, the prettiest of the commands, let’s review our bookmarks:
C-x r l

Final Note on Registers and Bookmarks

There is a variable named bookmark-save-flag which, when set to a value of 1, will automatically update your ~/.emacs.bmk file with any changes to your bookmarks. I recommend setting this in your ~/.emacs file so you don’t have to run “M-x bookmark-save” periodically. Add the following to your ~/.emacs file:
(set-variable (quote bookmark-save-flag) 1 nil)

Finally, I’d be remiss if I didn’t mention how I generated that one-liner of Emacs Lisp, even if it’s off topic for this weblog entry. I like to use various interactive functions and then capture their effective execution, in Emacs Lisp form, via the “repeat-complex-command” function, which is bound to “C-x M-:”. In this situation, I used the “set-variable” interactive function to set “bookmark-save-flag” to 1, then punched in “C-x M-:” and copied the one-liner for my ~/.emacs file. So, there you go, a bonus tip for those who’ve read this far.

June 15, 2009

Intern Regiment

Filed under: administration,blogging,linux — jonEbird @ 10:06 pm

Today was Patrick Shuff’s first day with our team. He is our intern for the summer, and I actually recommended that we steal him from another team after meeting him last year. From my half-day assessment of him last year, I thought he was much better suited to work with our Linux team than the Windows provisioning team. He found my GNU screen, Emacs and script-automation tricks fascinating, and right there invalidated himself as a legit Windows guy. The Windows experience he picked up last year was no doubt useful, but it’s not something you enjoy returning to. It’s like learning C as your first programming language: an awesome basis for a solid understanding of computing innards, but not something you want to return to after programming in Python.

I have been trying to brainstorm good ideas for him to work on in the team. I suppose the main reason I want his experience to be as positive as possible is because I myself was an intern for about three years. One thought I had was to turn the three-month schedule into an intense one-assignment-per-week ordeal, where each week I throw a new task at him intended to inject new insights into all facets of becoming a well-rounded Linux administrator. Of course, one week is not enough time to properly study most of the categories of topics I was thinking of, but it would have a nice organized structure and would be nearly guaranteed to provide an intense experience worthy of writing home about. Okay, if he ends up writing home about it, then we know he’s a dork, but it would also mean he’s probably found a career in which he’d never have to work a day in his life because it’s enjoyable.

I started brainstorming my categories of areas in a quick outline mode. Of course, this list is subject to change, and if we actually do go through with this I’ll naturally have to report back on the topics covered in each week and what the assignments were. If nothing else, it should keep my weblog busier than normal, which isn’t hard. So, here is what I’m thinking constitutes a well-rounded Linux administrator:

  • Ever improve efficiencies
    • editor
      - pick one: emacs or vim. Just don’t settle at being able to modestly edit text.
    • shell
      - An essential, stereotypical Linux Admin skill. And yes, it is important. Study up.
  • organizational skills
      - Cannot be overstated. Aren’t we always improving our organizational skills?
      - Develop consistent habits in note taking. Try reading Getting Things Done.

    • project notes
    • meeting notes
    • hallway conversations
    • company hierarchy
  • technical expertise
    • operating systems
    • programming languages
    • architectural design
    • applications administration
  • staying current
    • awesome rss feeds
    • key social article sharing sites
      - Looking at you, reddit.
    • magazines
    • books
  • soft skills
    • working within a team
    • speech / presentation
    • written communication
      - tech writing, effective email communication
  • career, career stuff
    • resume writing
    • networking
    • staying driven
    • finding your path

Sorry for the lack of detail on each item, but it would be silly to populate the list further now. For now it remains an idea for a summer internship. Only once the plan comes to fruition will I report back with juicier details.

November 16, 2008

Pyworks In Summation

Filed under: blogging,PHP,python — jonEbird @ 7:10 pm

I sit in the Atlanta airport reminiscing over the events of PyWorks ’08. This was the first year for PyWorks, but MTA combined the conference with PHP Architect, and I believe everyone was happy with the combination. At a minimum, people had engaging conversations between the groups, and a significant number of them cross-attended sessions. I attended two PHP sessions and one neutral session, and the rest Python. Some people were a bit disappointed in the lack of Python attendees, and it is true that we didn’t make up a large part of the conference’s 148 total attendees. But with the quality of talks staying superbly high, not having a full room wasn’t a bad thing.

The quality of the talks was superb, indeed. Probably over half of the presenters are either principal developers on high-profile projects, have written a book, or own their own consulting company. On day zero, which consisted of 3-hour tutorial sessions, I spent the morning in Mark Ramm’s TurboGears tutorial, but then switched over to the PHP side in the afternoon to catch Scott MacVicar and Helgi Þormar Þorbjörnsson’s Caching for Cash.

At the start of day one, the first day of the normal sessions, I think everyone was expecting a lot more people. There were, in fact, more people, just not as many as I was expecting, but again that’s perfectly okay. This day was a full one, starting off with the keynote by Kevin Dangoor about Growing your Community. After a break, I attended Decorators are Fun by Matt Wilson and learned that he is not that far away from me in Cleveland. Next I attended another Mark Ramm talk, about WSGI, where he explained how easy it is to build a web framework. It was given a bit tongue in cheek, since he is the primary maintainer of TurboGears. Following that, I attended a middle-track session about Distributed version control with GIT by Travis Swicegood. Travis had just finished writing a book about using Git, Pragmatic Version Control Using Git, and not surprisingly gave an authoritative explanation of the tool. Following lunch, I attended another PHP-track presentation, though it could have been in the neutral middle track: Map, Filter, Reduce In the Small and in the Cloud by Sebastian Bergmann, where he explained the functional programming techniques popularized by Google for computing over large quantities of data. Sebastian gave me another reason to check out Hadoop, and in fact I’m now thinking of another Python Magazine article about using Hadoop with Jython. For the last session of the day I decided to attend Michael Foord’s talk about IronPython. I didn’t think I’d ever check out IronPython on my own, so I thought I’d get a crash course from Michael, who also just finished work on his book IronPython in Action.

Still not done with day one. After all of the normal presentations concluded, we had happy hour while gearing up for the Pecha Kucha competition sessions. Pecha Kucha is a format where you provide 20 slides and set them to auto-advance every 20 seconds, making your session a little over six minutes. Apparently people have found that you can convey the same quality bits of information in that format as in a full hour session. At least that is what the Japanese have concluded. As for PHP/PyWorks, we mostly had fun with the sessions. There were talks about web security, general ranting, LOLCode, and many others which I’m having a problem remembering. At the end, the LOLCode talk took the prize, an Xbox 360 gaming system, as awarded by our judges, and if you’d really like to see what went on, you may be able to watch the streamed video captured by Travis Swicegood’s iPhone. Before I went to bed, I rehearsed my presentation one more time.

By the time day two started, it felt like I had been there a full week, and yet we still had a full day of presentations ahead. I started the morning in Chris Perkins’s talk about the Sphinx documentation system. We all understand the importance of documentation, and it’s not always fun, but I figured investing 45 minutes catching up on some of the Python “best practices” for documentation would be well worth the time. Afterwards, I stayed in the same room for Jacob Taylor’s talk, Exploring Artificial Intelligence with Python. Jacob didn’t get around to showing any Python code, but he had good attendance for being a founder of SugarCRM. Next, the highlight of the conference: my presentation about LDAP and Python. Attendance for my presentation was average for the Python sessions, and by this point I felt like I knew everyone, which removed any pressure or nervousness. We’ll see how interested people were by seeing who downloads my presentation and/or scripts. After lunch, I attended Kevin Dangoor’s Paver talk, where he explained the motivations for Paver and showed numerous examples of what pain points it solves. Finally, the last session I attended at PyWorks was Jonathan LaCour’s talk about Elixir, the Python module which makes the introduction to SQLAlchemy an easy one. Elixir helps kick-start your DB code by simplifying SQLAlchemy, making a lot of sane choices for you as well as providing other conveniences. Jonathan had to work hard to fit all of his content into his hour, mostly because he gave a decent overview of SQLAlchemy before covering his Elixir module.

As with the previous day, this day concluded with another happy hour while waiting for our closing keynote. The closing keynote was given by Jay Pipes, about “living in the gray areas” and not sticking to the extreme black and white of our technologies. He praised the joint efforts being made by the PHP and Python folks and criticized people who are too biased to learn from other communities. Jay is working on Drizzle, while working for Sun, where they are challenging all of the preconceived notions held by the MySQL community. Drizzle is basically a fork of MySQL, and their goal is to provide a much more streamlined version of the database. Jay explained that forks are good (as well as “sporks”) because they keep people on their toes and keep the level of competition up. Finally, Jay’s last point was that we need to spend more time listening to other people and less time preaching our biased opinions.

I overheard PHP and Python people echoing Jay’s message after the keynote. I’m glad to have participated in such a successful conference where I truly believe boundaries were crossed. With as much time as I spent with the PHP folks, I was repeatedly asked, “So, you coming over to the PHP side?” I think the last time I was asked that was in the hotel pool, where again I was playing the role of the “token Python guy” amongst the PHP folks. To be honest, those PHP folks know how to have fun, and if my criterion for choosing a programming language were the amount of fun the community had, I would be doing PHP development. I definitely want to attend next year’s PyWorks and PHP conference, and I have an entire year to come up with my presentation proposals.

November 6, 2008

2D Barcodes

Filed under: blogging,usability — jonEbird @ 12:06 am

In anticipation of heading down to PyWorks 2008, I have been thinking about creating a business card for the sake of keeping in contact with people I meet. One of my main goals, while attending and speaking at PyWorks, is to network with people and mark 2008 as the year I start participating and contributing within the OSS community. While exercising my creativity in designing a nice business card, I have also been reading about Google’s Android mobile platform, and I came across an interesting intersection between the two when I saw a demonstration video.

A Google developer working on the zxing project (pronounced “zebra crossing”) has printed a 2D barcode encoding of his personal information on the back of his business card. With the built-in camera on his Android phone, he can scan a barcode and immediately use the encoded data. It is an impressive demonstration of integrating technology with our mobile devices. Check out the video, which has inspired me to not only do the same but also write this small informational note about 2D barcodes.

If you didn’t catch it, the format of the 2D barcode on his business card is QR Code. Among the 2D barcode formats, QR Code is most popular in Japan, where it was invented by Denso-Wave. The popularity of QR Codes in Japan has grown to the point of their being supported by nearly every mobile device there, which also means QR Codes appear on an increasing amount of printed media, from fliers to magazines and coupons.

There are other competing 2D barcode formats I could choose from, but after doing some research and not seeing any distinctive advantages, I have decided to follow suit with the Google developer and hope that the Android platform’s applications and popularity will help propel QR Code past the other 2D barcode formats.

Since 2D barcodes are nothing more than encoded and decoded data, the first thing to decide is what data we would like to encode. Since I do not actually have a business of my own, and furthermore use a work-issued phone, the data I encode will probably be the URL of my website. There are other interesting encodings, though, which include email addresses, SMS, geographic locations, etc. See zxing’s wiki page BarcodeContents for a better discussion of suggested formats, including their primary suggestion of using the MECARD format, which is typically a composite of name, address, phone number and email address.
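To give a feel for what a MECARD payload looks like, here is a small sketch of assembling one in Python. The helper function and the contact details are hypothetical, not from the zxing wiki; the punctuation follows the common MECARD convention of semicolon-separated fields with a double-semicolon terminator.

```python
def mecard(name, phone=None, email=None, url=None):
    """Assemble a MECARD string: 'MECARD:' plus ';'-separated fields, ending ';;'."""
    fields = ["N:" + name]  # name is the one required field
    if phone:
        fields.append("TEL:" + phone)
    if email:
        fields.append("EMAIL:" + email)
    if url:
        fields.append("URL:" + url)
    return "MECARD:" + ";".join(fields) + ";;"

print(mecard("Miller,Jon", url="http://www.example.com"))
# MECARD:N:Miller,Jon;URL:http://www.example.com;;
```

The resulting string is what you would hand to whatever barcode generator you use.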

Once you know what you would like to encode in your 2D barcode, I’m guessing you will need software to help with that. With the assumption that you are not going to be encoding and decoding barcodes with great frequency, my suggestion is that you use online utilities. Interestingly, it turns out that the Google Chart API can now generate QR Codes online. That is convenient both for repeated generation of QR Codes and for dynamic generation of barcodes. Better yet, Jason Delport has created a Google App Engine application which records your text and generates the QR Code by building the Google Chart API link for you. At that point, you can either use the supplied URL or simply download the PNG image. Finally, for performing online decoding of a barcode, I have found the zxing online decoder to be the best and least intrusive one available.
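For the curious, a Chart API QR Code was just a URL with three parameters. Here is a sketch of building one; the endpoint and parameter names are as I recall them from the Chart API documentation of the era, and the example payload is a placeholder:

```python
from urllib.parse import urlencode

def qr_chart_url(data, size=200):
    # cht=qr selects the QR Code chart type, chs is the WxH pixel size,
    # and chl carries the (URL-encoded) payload to encode.
    params = urlencode({"cht": "qr", "chs": f"{size}x{size}", "chl": data})
    return "http://chart.apis.google.com/chart?" + params

print(qr_chart_url("http://www.example.com"))
```

Fetching that URL returned a PNG of the barcode, which is exactly what the App Engine application assembles for you.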

The main reason 2D barcodes have not really taken off here in the States is that people have not yet come up with a really good idea to propel them into the mainstream. That, my friends, is going to be up to you and me to accomplish. Or, wait, we could just let Google usher it in for us? But seriously, I think support for mobile devices to read 2D barcodes is a great step forward. From there, I can envision graphic designers coming up with clever barcode prints and ways of intriguing people into scanning the code for more details, but then it would be people like us who come up with new categories of data to encode in the barcodes for new, innovative uses.

To learn more, try the collection of interesting links provided again by the zxing folks.

November 4, 2008

Management Tools for Multi-Vendors

Filed under: administration,blogging — jonEbird @ 4:44 pm

Building a tool which manages multiple vendors and platforms by piggybacking off their technology is a losing battle. Be it provisioning, patching, monitoring, whatever, it doesn’t matter. By choosing such a tool, you end up paying big bucks for other people to constantly watch and react to what the various vendors are doing. Combine that realization with the fact that such a tool will almost never perfectly suit the unique requirements of your business, and you’d be in denial not to admit that it sucks. Beyond the sheer money of the endeavour, you are also wasting your associates’ time, which will probably never be recouped.

I will never say anything is impossible. You can build such a tool, and it can have the necessary hooks to allow your associates to customize it to suit your needs. My point is that the work is much harder to pull off than the naive observer might realize. Imagine you are abstracting the details of SUSE’s automated installer, AutoYaST, but the openSUSE project decides to drastically change how the unattended installer works. Their efforts, no doubt, will be motivated by improving their end users’ experience, presumably making it quicker, simpler and overall a better product. Depending on how drastic the change is, it could represent an entirely different philosophical approach to OS installs. As the tool builder, trying to provide a layer of abstraction, you have just stuck yourself with a large endeavour to refactor those pieces of your application to handle the radical changes being made. It’s a given risk, if that is what you’re providing. My point is, as a customer, just don’t buy that product.

To purchase such a product, you are basically stating that you believe that particular team of developers is going to continue to accurately and intuitively abstract those details for you. Don’t forget, you’re still paying a lot of money for this. But this is how management thinks: “I’m going to buy this tool, let my associates use one tool, and let them spend their time elsewhere.” It doesn’t happen. Instead, the associates shift their energies to learning the new tool, figuring out how to customize it for their needs, and probably end up with one FTE dedicated to maintaining it.

Please, don’t waste your time and money. Spend your time collaborating with teammates. Decide upon OS and install standards. Each OS installer provides the ability to perform basic configuration of disk, network, software, etc., and then allows for a final post-install hook. That hook can then lean upon your team’s own scripts. You will end up spending about the same amount of work creating your post-install scripts as it takes to merely install and train folks on an “all in one” tool. The big difference with rolling your own is that you now own the tool set: it already exactly meets your needs, everyone knows and understands how it works, updates are easy, the knowledge and skills gained are more widely recognized, and all the while you haven’t spent more money.
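To make the post-install hook idea concrete, here is a minimal sketch of what it looks like in a Red Hat Kickstart file. The build server URL and script path are hypothetical placeholders; the point is simply that the vendor installer does the basic configuration and then hands control to your team’s own framework.

```
%post
# Fetch the team's own post-install framework and hand control to it
wget -qO /root/post-install.sh http://buildserver.example.com/post-install.sh
sh /root/post-install.sh
%end
```

JumpStart finish scripts and AutoYaST post-scripts play the exact same role, which is what makes it practical to keep the three maintained with parallel concepts.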

Now for the counter-point: you have to have a good team to pull this off. Team members will require enough experience to demonstrate the proper discernment in building out a quality framework. So what if you maintain Solaris JumpStart, Red Hat Kickstart, SUSE AutoYaST, etc. all together? Keep your data and configs centrally managed, parallel the concepts between each one, maintain like directory hierarchies, and write straightforward documentation on using them and performing builds. Doesn’t it make sense to be proficient in the OS tool which comes directly from the vendor, at least from a personal development perspective?
