It has been a turbulent few days around MSI and in Central Ohio at large. On Sunday, we experienced the leftovers of Hurricane Ike. Being in the Midwest, we were not quite prepared for what came our way. Wide swaths of the state experienced winds as high as those associated with a minor hurricane or tropical storm. Hundreds of thousands were left without power, and damage to homes and property took a heavy toll. While the damage to our area was minor compared to the beating that Houston and parts of Texas took, it was nevertheless a major event for us.
Today is three full days after the storm hit Columbus, and many remain without power and other “conveniences”. Grocery stores, gas stations and restaurants are just beginning to reopen in many parts of the city. Problems with various services and businesses abound. For example, many schools are still closed, several doctors' and dentists' offices still have no power, and telephone services and ISPs continue to have ups and downs.
Around MSI, we have been fighting the damage from the storm and Mr. Murphy. While we have been among the lucky ones to keep power on a stable basis, many of our team have been spending long hours in the conference room watching TV and playing video games after business hours. Many of them have no electricity at home, so this seems to be an easy way for them to spend some time. Our ISP has had two outages in the last two days. One lasted around 5 hours, due to a power failure in some of the equipment that manages the “last mile”; the other was less than an hour this morning, when the generator for their local data center developed an oil leak. Thankfully, both were repaired within their SLA windows and neither has interfered with our progress on engagements.
We have prepped our warm site for extended outages, and just as we were about to activate it for these ISP outages, connectivity returned. We have learned some lessons over the last couple of days about dealing with email outages and web presence outages, and we certainly gained deeper insight into a few subtle dependencies that had escaped us, even during our twice-yearly DR testing. We still have some kinks to work out, but thankfully, our plans and practice paid off. We were prepared, the team knew the SLA windows for our vendors and our clients, and our processes for ensuring continuation of engagements worked well!
We got to know firsthand exactly how good prep and good processes for DR/BC pay off. We took our own medicine, and the taste wasn’t all that bad.
The moral of the story, I guess, is that DR/BC is a very worthwhile process. So the next time we are doing an assessment for you and ask some tough questions about yours, don’t take it personally. As we said, we have learned firsthand just how worthwhile the front-end investment can be.
Learn more about the storm:
News about the storm here.
American Electric Power outage map.
Local news.
PS: Special thanks to the folks who signed up for the State of the Threat presentation this morning. Sorry for the need to postpone it. We worked with Platform Labs throughout the day yesterday attempting to coordinate the event, but at the end of the day they still had no power; thus, the postponement. Thanks for your patience and understanding. The good news is that Steve at Platform says they are back up and running as of this morning! Good news for Steve and everyone else!