Audio Blog Post – IT History: An Interview with Brent’s Mom

Today, I got to do something pretty cool! I got to record a quick interview about the history of IT and what some of today’s technologies look like through the eyes of someone who has done IT for the last 40 years. Even cooler than that, I got to interview MY MOM! 

Check it out as she discusses mainframes, punch cards and tape vaults, shares insights about mainframe authentication, and even covers quality control in the mainframe environment. She also gives advice to IT folks approaching retirement age and shares her thoughts on the cloud.

She closes with a humorous insight into what she thinks of my career and when she knew I might be a hacker. 🙂

It’s good stuff, and you can download the audio file (m4a format) by clicking here.

Thanks for listening, and let me know if you have other IT folks, past or present, you think we should be talking to. I’m on Twitter (@lbhuston), or you can respond in the comments.

Are Your Disaster Recovery Plans Ready For A Disaster?

One data center just found out that theirs wasn’t, and many of its customers were caught with no backup servers, relying only on the data center’s disaster recovery. On Saturday, ThePlanet’s data center experienced an explosion in its power room that knocked approximately 9,000 servers offline, affecting over 7,500 customers. ThePlanet was unable to restore power to those servers for over a day because the fire department would not let them turn on the backup power.

Two separate issues can be seen here. First, the data center’s disaster recovery plan failed to recover it from a disaster. While quite unlikely, an explosion in the power room can happen, as seen here, and they were not prepared for it. Perhaps they could have worked with the fire department during disaster recovery policy creation to identify ways backup power could be supplied while the power room was down. Or, with five data centers (as ThePlanet has), they could have kept spare hot servers at the other sites to send backups to. We don’t yet know the details of their policy or exactly what happened, so we can only speculate about ways the downtime could have been prevented.

Second, many customers found out the hard way that you should not rely on someone else’s disaster recovery plans. These sites could have failed over to a server at another data center, or even to a backup at their own site, but they weren’t prepared, having assumed that nothing could happen to the data center hosting their servers.

The lesson learned from this mistake is that disasters happen, and you need to be prepared. No disaster scenario should be ignored just because “it’s not likely to happen.” So take a look at your plans, and if you host at a data center and your website is critical, make sure there is a backup at a separate data center or at your own site.
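Even a simple external monitor can tell you the moment a failover is needed. Below is a minimal sketch in Python of that first step: it polls a primary site and emails the on-call team after several consecutive failures. The URL, email addresses, and local mail relay are all hypothetical placeholders, and the actual failover (DNS changes, data replication, bringing up the backup site) is assumed to happen elsewhere.

```python
#!/usr/bin/env python3
"""Minimal health-check sketch: poll the primary site and alert when it is
unreachable, so failover to the backup data center can begin.
All hostnames and addresses below are hypothetical placeholders."""

import smtplib
import time
import urllib.request
from email.message import EmailMessage

PRIMARY_URL = "https://www.example.com/health"  # hypothetical primary site
CHECK_INTERVAL = 60                             # seconds between polls
FAILURES_BEFORE_ALERT = 3                       # don't alert on a single blip


def primary_is_up() -> bool:
    """Return True if the primary site answers with HTTP 200."""
    try:
        with urllib.request.urlopen(PRIMARY_URL, timeout=10) as resp:
            return resp.status == 200
    except Exception:
        return False


def send_alert() -> None:
    """Email the on-call team; assumes a mail relay on localhost."""
    msg = EmailMessage()
    msg["Subject"] = "Primary site down - begin failover to backup data center"
    msg["From"] = "monitor@example.com"   # hypothetical addresses
    msg["To"] = "oncall@example.com"
    msg.set_content(
        "Primary site failed %d consecutive checks." % FAILURES_BEFORE_ALERT
    )
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)


failures = 0
while True:
    if primary_is_up():
        failures = 0
    else:
        failures += 1
        if failures == FAILURES_BEFORE_ALERT:
            send_alert()
    time.sleep(CHECK_INTERVAL)
```

Of course, this only covers detection. Pairing it with regularly tested restores and a ready-to-go copy of your site at the secondary location is what actually gets you back online when your primary data center goes dark.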