Friday, December 5, 2008

Building a CentOS 4 AMI: Part 3 of 2

First, I described how to prepare a system to be used to create the AMI.

Next, I outlined the process of creating a CentOS 4.7 AMI.

Finally, I'm going to show how to upload, register, and start the instance. I shouldn't have to say it, though I will: there are numerous variations on how to perform all of these tasks and get your system and application into the cloud.

In my last installment I created the image itself on my server. After doing so, it needs to be broken up into smaller chunks and a manifest created before it can be uploaded. This is done with one command: ec2-bundle-image.
ec2-bundle-image \
-k $EC2_PRIVATE_KEY \
--cert $EC2_CERT \
--user 123412341234 \
-i centos-4.7-i386.img \
-r i386 \
-B "ami=sda1,root=/dev/sda1,swap=sda3"
NOTE: If you do not have the EC2_PRIVATE_KEY and EC2_CERT shell variables set, then you didn't follow the instructions in my first post.

There are some options here that are obviously necessary: -B for the block device mapping and -r to specify the architecture. There are a few others that may be useful, such as --kernel to specify the kernel ID to boot with and --ramdisk in case one is needed.
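For example, a bundle pinned to a particular kernel and ramdisk might look like the sketch below. The aki-/ari- IDs are placeholders; substitute IDs valid for your region and architecture.
# Same bundle command as above, with hypothetical kernel/ramdisk IDs added
ec2-bundle-image \
-k $EC2_PRIVATE_KEY \
--cert $EC2_CERT \
--user 123412341234 \
-i centos-4.7-i386.img \
-r i386 \
-B "ami=sda1,root=/dev/sda1,swap=sda3" \
--kernel aki-xxxxxxxx \
--ramdisk ari-xxxxxxxx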

NOTE: I should mention at this point that if the system you're working on has a clock that is too far out of sync with Amazon's, you may receive the message
Client.InvalidSecurity: Request has expired
Your best bet is to get your system's NTP service running; see ntpd(8) for details on synchronizing your system's clock.
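On a CentOS or Red Hat build host, something along these lines will usually do the trick (assuming the ntp package is installed):
# One-time sync against a public pool, then keep ntpd running across reboots
ntpdate pool.ntp.org
service ntpd start
chkconfig ntpd on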

Uploading the chunks...

You'll need your access key and secret key at this point for the -a and -s parameters below. Amazon S3 stores your files in what it calls buckets, and every file in a bucket must have a unique name. I'm not 100% sure of the rules, but the bucket name needs to be alphanumeric and can also use periods; beyond that, I'm not certain. For more information on the full S3 API and its capabilities and constraints, check out the Amazon S3 documentation pages.

OK. We're uploading. The chunkified image and manifest must be sent to a bucket (-b) in the S3 storage cloud.
ec2-upload-bundle \
-b my.first.centos.4.7 \
-m /tmp/centos-4.7-i386.img.manifest.xml \
-a "sssssshhhhhh" \
-s "itsre/allyr/eally/secret"
In this case, the full path to the files will have the form
my.first.centos.4.7/centos-4.7-i386.img.manifest.xml
Simple. Oh, and if the files (they're not an AMI yet) aren't in use and you want to delete them, check out ec2-delete-bundle. There is help available online as well as via the --help command line option.
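As a sketch, deleting the bundle we just uploaded would look something like this, reusing the same bucket, manifest, and keys as the upload above:
ec2-delete-bundle \
-b my.first.centos.4.7 \
-m /tmp/centos-4.7-i386.img.manifest.xml \
-a "sssssshhhhhh" \
-s "itsre/allyr/eally/secret"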

The files are in the storage cloud, but the EC2 service needs to know about them. To do so, it needs the path to the manifest file. Registering is simple:
ec2-register \
-K $EC2_PRIVATE_KEY \
-C $EC2_CERT \
my.first.centos.4.7/centos-4.7-i386.img.manifest.xml
Conversely, when you get bored with your image and eventually want to remove it from S3 (so you don't get charged every month), there is an ec2-deregister command. Again, like most of the EC2 commands in the toolkit, you may use the --help command line option.
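For example, using the AMI ID that registration returned (ami-89abcdef in my case, as shown below), deregistering would look roughly like this:
ec2-deregister \
-K $EC2_PRIVATE_KEY \
-C $EC2_CERT \
ami-89abcdef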

A listing of all AMIs is available using the command below, or by using Elasticfox, the Firefox browser extension that provides a GUI for EC2.
ec2-describe-images
IMAGE ami-89abcdef my.first.centos.4.7/centos-4.7-i386.img.manifest.xml 495219933132 available private
The security group used will be your "default" policy. Since we're expecting to connect to the instance via SSH we ought to enable port 22/tcp.
ec2-authorize default -P tcp -p 22 -s 0.0.0.0/0
NOTE: Keep this command in mind. Depending on what you intend to do with your server in the cloud, additional ports may need to be opened.
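For example, if this server will also serve web pages, HTTP can be opened the same way:
ec2-authorize default -P tcp -p 80 -s 0.0.0.0/0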

Finally, we can start the AMI. By default, the m1.small instance type will be used, providing the instance with a single-core processor and roughly 1.7 GB of memory. You can start as many as you like.
ec2-run-instances \
-K $EC2_PRIVATE_KEY \
-C $EC2_CERT \
ami-89abcdef
Use ec2-describe-instances to check on the instance; once it's running, the output will contain the address assigned to it. Earlier, we opened port 22, so starting an SSH connection now should work just fine. Log in and check out your server in the cloud.
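A quick check might look like this. The host name is just a placeholder for whatever address ec2-describe-instances reports, and the login works because the image holds the public half of the key pair generated in the last post:
ec2-describe-instances
# Once the instance shows "running", connect as root from the build host
ssh root@ec2-67-202-0-1.compute-1.amazonaws.com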

If you find the m1.small lacking in memory or horsepower, other instance types with more cores and memory are available. I'd recommend starting with m1.small and, if it isn't working out, shutting down the instance and bringing it back up with a different type such as m1.large or c1.medium. These are described on the AWS web site.
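Relaunching the same AMI as a larger type is just a matter of adding the instance type option (-t); a sketch of what that might look like:
ec2-run-instances \
-K $EC2_PRIVATE_KEY \
-C $EC2_CERT \
-t m1.large \
ami-89abcdef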

All commands are well documented via the --help command line option as well as on the AWS web site.

That's all I have for now.

Building a CentOS 4 AMI: Part 2 of 2

In my last post, I outlined the preparation of a virtual system to be used to create the AMI. Everything done here is on the virtual machine and references the exact settings in the previous post.

NOTE: If you have not signed up for an AWS account and agreed to the EC2 and S3 terms of service, you won't get too far here. Your own private key, certificate, and AWS user ID are required to complete all of these steps.

Here, I will describe the actual image creation, uploading, and then starting the AMI in the cloud. The steps I outline are excerpts from a script I use to automate all the steps. If you copy all the steps below into your own script, it ought to work.

Every system, whether virtual or real, requires a disk or disk image. I've opted to create a 2 GB image. I recognize that this is not the only way, and you can accomplish this through other processes and means.
# Making the image and formatting the file system
dd if=/dev/zero of=centos-4.7-i386.img bs=1M count=2000
mke2fs -F -j centos-4.7-i386.img

# Mounting the image file
mount -o loop centos-4.7-i386.img /mnt
NOTE: You may create a larger image to suit your needs, and it may be more economical than downloading things after the instance is started, but remember that Amazon charges not just for uploads but also for monthly storage. Creating an image larger than what you need may cost you.

The next step is creating some necessary directories and device files in the image. These steps are basic but provide the foundation for the software installation and the running system later. More information can be had from any one of the many build-your-own-distribution web sites.
# Creating the necessary directories
mkdir /mnt/dev
mkdir /mnt/proc
mkdir /mnt/etc

# Creating some minimal device files
for i in console null zero
do
/sbin/MAKEDEV -d /mnt/dev -x $i
done
More system basics here. The fstab file needs to be created, and the image's proc file system is also necessary for loading the software.
# Create fstab
cat <<EOFSTAB > /mnt/etc/fstab
/dev/sda1 / ext3 defaults 1 1
none /dev/pts devpts gid=5,mode=620 0 0
none /dev/shm tmpfs defaults 0 0
none /proc proc defaults 0 0
none /sys sysfs defaults 0 0
/dev/sda2 /mnt ext3 defaults 1 2
/dev/sda3 swap swap defaults 0 0
EOFSTAB

# Mount the image proc now
mount -t proc none /mnt/proc
I'm using yum to perform the installation. It's a great tool for automatically resolving software dependencies and installing what's needed without the need for searching countless Google hit pages, cruising RPMfind.net or hunting for your distribution installation images.

You can expand on the yum.conf file as needed to customize your image. What I've included here are the basics but you may add the configuration for any other bit of software.

# Custom yum.conf for our image
cat <<EOCONF > /tmp/yumec2.conf
[main]
cachedir=/var/cache/yum
debuglevel=2
logfile=/var/log/yum.log
exclude=*-debuginfo
gpgcheck=0
obsoletes=1
reposdir=/dev/null

[base]
name=CentOS-4.7 - Base
mirrorlist=http://mirrorlist.centos.org/?release=4.7&arch=i386&repos=os
protect=1

[update]
name=CentOS-4.7 - Update
mirrorlist=http://mirrorlist.centos.org/?release=4.7&arch=i386&repos=updates
protect=1
EOCONF
NOTE: For those who plan on using or tweaking their image, I would recommend downloading all the packages you need from the distribution and creating your own internal mirror site. Mirrors come and go, and should your required version 1.2.3-5-EL4 get replaced and cause problems with package wyzzwg-4.3.2-1-EL4, you may need to invest in Tylenol.
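If you do set up an internal mirror, the repository sections in the yum.conf above can point at it with a baseurl instead of the public mirrorlist. The URL below is a placeholder for wherever you host your copy:
[base]
name=CentOS-4.7 - Base (local mirror)
baseurl=http://mirror.example.internal/centos/4.7/os/i386/
protect=1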

Finally, let's install some software. This may take a while depending on your internet connection. So, start now and go get some coffee or do your Christmas shopping... just not online. Go to the store. They're those big brick, stone or stucco buildings you see in real life alongside highways.
# Avoid a yum lock file error
mkdir -p /mnt/var/lock/rpm

# Install the base
yum -c /tmp/yumec2.conf --installroot=/mnt -y groupinstall Base

# Cleanup
yum -c /tmp/yumec2.conf --installroot=/mnt -y clean packages
Done? No errors? Das ist gut.

We're in the home stretch. We need to solve our access problem. I, like the rest of the world, prefer nothing less than SSH for accessing systems. If you haven't generated a public-private key pair yet, please do so. This will store the key pair in /root/.ssh.
ssh-keygen -t dsa -C '' -N ''
The next few steps I've refined so that the instance, whenever it's started, is ready for me to log in. After all, this is like a freshly installed system: we can't expect it to know our password, nor do we want some password stored out there in S3-land.
# Make sure TLS is disabled
mv /mnt/lib/tls /mnt/lib/tls-disabled

# When this instance boots, the keys to the instance need to be there
if [ ! -d /mnt/root/.ssh ] ; then
mkdir -p /mnt/root/.ssh
chmod 700 /mnt/root/.ssh
fi

# Copy public key to instance
cp /root/.ssh/id_dsa.pub /mnt/root/.ssh/authorized_keys2
chmod 644 /mnt/root/.ssh/authorized_keys2
NOTE: No private keys are installed. This is critically important. Like your password, which does not exist in the /etc/shadow file on the image, your private key should be kept out of the cloud.

The SSH daemon needs some tweaking to keep us from being shut out.
cat <<EOCONF >> /mnt/etc/ssh/sshd_config
UseDNS no
PermitRootLogin without-password
EOCONF
Our network configuration. Kinda pointless if we don't let the AMI have a network connection.
cat <<EOCONF > /mnt/etc/sysconfig/network
NETWORKING=yes
HOSTNAME=localhost.localdomain
EOCONF
cat <<EOCONF > /mnt/etc/sysconfig/network-scripts/ifcfg-eth0
ONBOOT=yes
DEVICE=eth0
BOOTPROTO=dhcp
EOCONF
Let's not forget to unmount things. Hate to spend all this time creating a nice, neat image just to corrupt it later.
# Unmount image
sync
umount /mnt/proc
umount /mnt
Putting the ribbon around the image now. Here you can go crazy telling the instance, on boot, to use wget or curl to download your custom scripts, data, or whatever, then install it, run it, or just have it spin in circles. It's all completely customizable just by adding the scripts to the /etc/init.d directory and the appropriate links in the /etc/rc3.d directory. I believe run level 3 is the default.

NOTE: If you're going to put in custom boot scripts that leverage commands such as wget or curl, make sure those commands exist in the image. If they do not, just add another yum command similar to the ones above and you'll be all set. A rough sketch of such a boot script follows.
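As a rough sketch, and while the image is still mounted at /mnt (i.e., before the unmount step above), a hypothetical first-boot script could be dropped in like this. The script name and URL are made up for illustration:
# Hypothetical first-boot script: fetch and run a site-specific bootstrap
cat <<'EOSCRIPT' > /mnt/etc/init.d/firstboot
#!/bin/sh
case "$1" in
start)
wget -q -O /tmp/bootstrap.sh http://example.com/bootstrap.sh && sh /tmp/bootstrap.sh
;;
esac
exit 0
EOSCRIPT
chmod 755 /mnt/etc/init.d/firstboot

# Run it at boot in run level 3
ln -s ../init.d/firstboot /mnt/etc/rc3.d/S99firstboot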

Okay. This post ran a little long, so my two-post guide to building your AMI is going to spill over into a third. That should give you time to customize your image and dream of conquering the world.

Thursday, December 4, 2008

Building a CentOS 4 AMI: Part 1 of 2

Building a new AMI for Amazon's Web Services takes as much time prepping your target system as it does creating the AMI itself. There are system requirements, some of which aren't taken care of for you, depending on the distribution you're using.

Case in point: I'm building a CentOS 4 AMI. Why? Because CentOS / Red Hat 5 is everywhere now, and there are some applications that require older versions. ISOdx, the project I've been on for the past 6 years, grew up on Red Hat 3 and now runs on Red Hat and CentOS 4. I'd love to invest the time in getting the RPMs up to snuff so it'd run on v5, but I have bigger fish to fry.

What I settled on was CentOS 4.7. It's the most recent release in the v4 series. After downloading the DVD copy of the distribution, I built a VMware virtual machine and installed almost everything; after all, you never know what you're going to need. There was some additional software, and some updates, required. Here are the ones I've identified.

From http://aws.amazon.com/
  • ec2-ami-tools.noarch.rpm
  • ec2-api-tools.zip
From http://www.java.com/
  • jre-6u7-linux-i586-rpm.bin
Located using http://RPMfind.net/
  • tar-1.15.1-1.i386.rpm
From http://dev.centos.org/centos/4/testing/i386/RPMS
  • ruby-1.8.5-1.el4.centos.i386.rpm
  • ruby-libs-1.8.5-1.el4.centos.i386.rpm
I won't bore you with the downloading and installing; that ought to be straightforward enough. It will be evident from some paths where I unpacked things on my system. It may not be the perfect place, but it works.
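Roughly, the installs boil down to something like this; exact file names and paths will vary with what and where you downloaded:
# Prerequisites and the EC2 AMI tools (RPM-based)
rpm -Uvh tar-1.15.1-1.i386.rpm
rpm -Uvh ruby-libs-1.8.5-1.el4.centos.i386.rpm ruby-1.8.5-1.el4.centos.i386.rpm
rpm -Uvh ec2-ami-tools.noarch.rpm

# The EC2 API tools are a zip of scripts and jars; the JRE is Sun's self-extracting .bin
unzip ec2-api-tools.zip -d /root
sh jre-6u7-linux-i586-rpm.bin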

Next, I had to tweak my shell environment.
export EC2_HOME=/root/ec2-api-tools-1.3-24159
export JAVA_HOME=/usr/java/jre1.6.0_07
Yes. I'm logged in as root. Don't panic. It's a virtual machine that I may trash and I've been doing this since 1988. Yes, 1988. Older than some who may read this so you youngsters just settle down. Where's my spectacles?

Needless to say, you'll need an AWS account as well as the private key and cert. Having them on the system is very helpful; I stored them in /root/.amazon and set up the following shell variables.
export EC2_PRIVATE_KEY=/root/.amazon/pk.pem
export EC2_CERT=/root/.amazon/cert.pem
And finally, we'll add the EC2 tools' bin directory to the path.
export PATH=$PATH:$EC2_HOME/bin
In my next post, which I hope occurs before the next turn of the century, I'll explain a simple process for building the actual CentOS 4 AMI, uploading it, and eventually starting the thing.

Wednesday, October 8, 2008

Economy Connect The Dots

Yes, it's that season. Every four years we dig up all the crud we can on the party from "across the aisle" and try to pin it on them.

"It's their fault."

"We tried to prevent these awful events from occurring but ... "

"Had I been in office ... "

Well, I believe hindsight is 20-20, and both sides can agree with that. So let's connect the events, or dots, going backwards.

2008 ...

In America, it is now 7 years after quite a few institutions took advantage of the historically low prime rate and relaxed lending practices. These institutions were offering sub-prime rates and interest-only loans in the form of 5 and 7 year ARM mortgages "to individuals whose credit is generally not good enough to qualify for conventional loans"(1).

February 2006 ...

Greenspan is out. Bernanke is in. Since 2004, Greenspan and the Fed had been raising the prime rate at every opportunity to head off a credit crisis. The rate increases, as we'll see later, were a continuation of rate hikes that happened back in the days of irrational exuberance. This time, the exuberance was happening in real estate.

So, Bernanke comes in and follows Greenspan's lead for a short time, raising the prime rate until it's up another 0.75% to 6.25% in June 2006(2). There it stayed for almost 14 months, until it was obvious that the credit crisis was unavoidable. It's been cut back drastically to 1.75% since. The prime rate is now at its lowest point since December 2001. We all know what happened that got us there... or do we?

July 2003 - June 2004 ...

Let's make a brief stop here before proceeding further into the past. After all, this is when the rates were historically low. The prime rate hadn't been this low since midway through President Kennedy's term in July of 1961.(3)

In fact, from December 2001 until November 2004 the rate bounced around between 2.00% and 1.00%, never going over 2.00% and spending more time fighting to stay above 1.00%. Real estate was on fire, though. Sub-prime and loose lending practices continued through to the end of this period. Home sales reached record levels.

Rates... lending... home sales... there seem to be a few recurring themes.

Someone in the Bush administration saw it coming. (4)

December 11, 2001 ...

Three months to the day after that awful day when Osama bin Ladin became a household name forever engraved in the minds of every American, the federal prime rate reached 1.25%. This was the third cut in those three months. It was in response to the attacks... or was it? The rate was at just 2.50% prior to the attacks. If the 9/11 attacks didn't cause the drop, what could have?

Late 2000 ...

It wasn't long before someone saw a storm brewing. Edward M. Gramlich, a Federal Reserve governor who died in September 2007, warned "that a fast-growing new breed of lenders was luring many people into risky mortgages they could not afford."(5)

May 2000...

It was on its way down long before September 11th. In May of 2000 the rate was at 9.50%. Greenspan and the Federal Reserve Board had been trying to burst the Dot Com Bubble and stop the irrational exuberance on Wall St. When the bubble did finally burst, the rate started getting slashed, by as much as 0.50% at a time. Why?

In a span of less than 14 months Greenspan had it down to 6.50%. Down 3.00% in 14 months compared to a drop of 2.50% over almost 2.5 years after 9/11. Why?

Perhaps, instead of bursting the dot com bubble, he had a hand in crippling the economy. Fortunately, there were new lending rules at Freddie Mac and Fannie Mae, instituted a year earlier by the Clinton administration(1).

And so, we've come full circle.

  1. 1999, Clinton administration pressures the Freddie's to throw out best practices when it comes to lending.

  2. 2000, Greenspan bursts the dot com bubble and at the same time the American economy.

  3. Late 2000, there was already evidence that bad loans were being made. These 7-year ARM's would become the leading edge of the storm.

  4. 2001, 1.5% of new loans were interest only(6)

  5. 2001, September 11th attacks which forces the economy and prime rate down further.

  6. 2002, 6% of new loans were interest only(6)

  7. 2003, Bush administration and the Republican minority in Congress calls for additional oversight but is killed by the Democrat majority.

  8. 2003, 13% of new loans were interest only(6)

  9. 2004, 31% of new loans were interest only(6)

  10. 2006, An estimated $1 trillion in adjustable rate mortgages will reset in 2007(7). Those who do not refinance could see their payments increase by 25%.

  11. By the end of the third quarter of 2006, the total U.S. mortgage debt outstanding was $10.7 trillion(8).



My conclusion? The leading edge of the storm (foreclosures from 7-year ARM's in 2000 and 2001 and 5-year ARM's from 2003) is behind us and now we're in the thick of it. Potentially, we may have 4 years of the worst economy since the Great Depression. Just don't point at people on the right of the aisle. Look at recent history and you'll find those responsible. Some of them can be counted among Obama's 'trusted' advisors.

Quotes and citations:

1. Fannie Mae Eases Credit To Aid Mortgage Lending, New York Times, Steven A. Holmes, September 30, 1999

2. Historical Changes of the Target Federal Funds and Discount Rates, Federal Reserve Bank of New York

3. Effective Federal Funds Rate

4. New Agency Proposed to Oversee Freddie Mac and Fannie Mae, New York Times, Stephen Labaton, September 11, 2003

5. Fed Shrugged as Subprime Crisis Spread, New York Times, Edmund L. Andrews, December 18, 2007

6. A Growing Tide of Risky Mortgages, Business Week, Peter Coy, May 18, 2005

7. A House of Cards: Refinancing the American Dream, Demos, November 2006

8. Federal Reserve data


Thursday, February 28, 2008

Components In A Successful Open Source Business Strategy


I've been scratching my head on this. How does a business survive and thrive when its product is, in fact, not its own but belongs to the community at large?

There's a more fundamental question that I think needs to be answered first. Which open source project offers the most for a business to adopt and build a strategy around? Why does one succeed while another, very similar one, fails?

Infrastructure is my first answer. The project must be a major component of the enterprise IT infrastructure. Whatever the project is, it must replace a key, existing, commercial component. By replace I mean, it achieves parity with the product it is replacing.

Certainly there are more but the examples below provide some very good talking points. They are all what I would categorize as successful businesses whose core existence is built around an open source project.
  • Red Hat and SuSe
  • MySQL and EnterpriseDB
  • WSO2
  • Zmanda
What components do we have in the list? Operating systems. Databases. Web servers. File systems. Backup software. All are core components that are must-haves in datacenters today. Each example fills a need, has at least achieved parity with what it replaces, is expendable, adheres to standards, or is recognized as a leader in its segment.

Red Hat and Linux are the obvious poster children for this discussion. Red Hat was a free Linux distro, but they recognized the need for standardization. At the time of their emergence there were distros popping up all over, and many were based on theirs, so they obviously had a good model, or at least one worth copying. Their IPO rang out across Wall Street, and not long after, everyone knew Red Hat was going to achieve something, even if they were still confused about how to pronounce Linux. They filled a need by replacing proprietary and costly operating systems, achieved parity by providing a stable environment, and they extended the open source piece through additional offerings and services. Services run the gamut from professional onsite services to phone support to training and certifications. And JBoss! It has had the effect of catapulting Red Hat into competing step for step with the likes of IBM and Oracle.

Having followed and used Linux since the mid 90's, I have seen a dramatic shift in the UNIX gear deployed in datacenters today. I mentioned SuSe but don't feel the need to type much about it. Not to belittle SuSe, but they were only mentioned to lend additional credence to the Red Hat example.

MySQL produced such a sound model that they were recently purchased by Sun Microsystems to the tune of $1 billion. I'm not sure I'd be thrilled about that. Sun doesn't exactly have an outstanding track record when it comes to purchasing software companies and making them better. This doesn't detract from the fact that MySQL has been a favorite among the open source community for being lite and simple but also expendable and versatile. Their offerings mimic Red Hat's, filling needs for the serious datacenters with everything from cluster offerings to professional services, support and training.

EnterpriseDB offers a database "that is compatible with applications written for Oracle". Whoa. They had better watch it lest they provoke Ellison.

WSO2 has a huge bag of tricks for extending and supporting Apache based sites.

Zmanda has a datacenter ready offering that competes with the likes of NetBackup and Networker.

I could keep going, but that would make this a very long-winded post. What I need to point out are the things that I have found again and again in every scenario. Services, services, services. Sure, ancillary products help and highlight the capabilities of the base product, but every one of these businesses has a complete set of services. Professional services to help with onsite needs and augment existing staff. Phone support to give management the warm fuzzy feeling that comes with having someone else to lean on when things go wrong. Training to keep fulltimers happy.

There are many popular open source projects out there that could benefit from having a corporate sponsor. The problem may be in picking the right one.

Next post I'll talk a bit more about this.

Updated March 6, 2008 per spelling Nazi email.

Creating Content

Content is key. What you put out there everyone can see, so make sure it's clear and that anyone, or at least your intended audience, gets the point right away. Many blogs I've read have way too much of a personal touch, so I've often viewed blogging as an online diary of sorts.

Dear Blog,
I went to W3C today and found the niftiest pair of tags in the latest revision. They were so well marked up ...

Well, I'd rather not fall into that trap. Instead, I have stuff rattling around in my head that I'd like answers to, and I'll discuss it via my blog. Comments I'll try to address through later posts, and I'll have the occasional poll just because ... it's Web 2.0, man! Content! Interactivity! Is that really a word?

A few of the things I'm cobbling together right now include,
  • What are the components to a successful open source business strategy?
  • Why take a software project open source?
  • How to sell to the community?
You can see where I'm coming from. Hopefully, I'll get some feedback and that will of course provide ... content!