Sales personnel have a mantra, “ABC” or “Always Be Closing,” as a reminder to continually drive conversations to selling conclusions or move on. In a world where business conditions remain helter-skelter, traditional IT capacity management techniques are proving insufficient. It’s time to think different – or “ABC”: Always Be (Thinking) Cloud.
Getting more for your IT dollar is a smart strategy, but running your IT assets at the upper limits of utilization—without a plan to get extra and immediate capacity at a moment’s notice—isn’t so brainy. Let me explain why.
Author Nassim Taleb writes in his latest tome, “Antifragile,” about how humans are often unprepared for randomness and thus fooled into believing that tomorrow will be much like today. He says we often expect linear outcomes in a complex and chaotic world, where responses and events are frequently not dished out in a straight line.
What exactly does this mean? Dr. Taleb often bemoans our preoccupation with efficiency and optimization at the expense of reserving some “slack” in systems.
For example, he cites London’s Heathrow as one of the world’s most “over-optimized” airports. At Heathrow, when everything runs according to plan, planes depart on time and passengers are satisfied with airline travel. However, Dr. Taleb says that because of over-optimization, “the smallest disruption in Heathrow causes 10-15 hour delays.”
Bringing this back to the topic at hand: when a business runs its IT assets at continually high utilization rates, that is usually perceived as a positive outcome. However, running systems at near 100% utilization leaves little spare capacity, or “slack,” to respond to changing market conditions without affecting the expectations (i.e. service levels) of existing users.
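To see why that slack matters, here is a minimal sketch (my illustration, not from the article) using the classic M/M/1 queueing formula, where mean response time is service time divided by (1 − utilization); the 10 ms service time is an assumed figure:

```python
# Illustration (not from the article): the classic M/M/1 queueing formula
# shows why "slack" matters. Mean response time R = S / (1 - rho) grows
# nonlinearly as utilization rho approaches 100%.

SERVICE_TIME_MS = 10.0  # assumed average time to serve one request

def mean_response_time_ms(utilization: float) -> float:
    """M/M/1 mean response time: R = S / (1 - rho)."""
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return SERVICE_TIME_MS / (1.0 - utilization)

for rho in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"{rho:.0%} utilized -> {mean_response_time_ms(rho):6.0f} ms")
```

At 99% utilization the same system responds fifty times slower than at 50%; the last few points of “efficiency” are exactly where service levels collapse.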
For example, in the analytics space, running data warehouse and BI servers at high utilization rates makes great business sense, until you realize that business needs constantly change: new users and new applications come online (often as mid-year requests), and data volumes continue to grow at an exponential pace. And we haven’t even mentioned corporate M&A activities, special projects from the C-suite, or unexpected bursts of product and sales activity. In a complex and evolving world, relying solely on statistical forecasts (e.g. linear or multiple linear regression analysis) isn’t going to cut it for capacity planning purposes.
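To illustrate that limitation with invented numbers (the data, burst size, and forecast horizon below are assumptions for the example), a straight-line regression fit to steady historical demand simply cannot anticipate a step change:

```python
# Illustration with invented numbers: a linear regression forecast fit to
# steady historical demand cannot anticipate a sudden burst in demand.
import numpy as np

months = np.arange(12)                                 # one year of history
rng = np.random.default_rng(0)
demand = 100 + 5 * months + rng.normal(0, 3, size=12)  # steady, near-linear growth

slope, intercept = np.polyfit(months, demand, 1)       # ordinary least squares
forecast_m15 = slope * 15 + intercept                  # extrapolate 3 months out

actual_m15 = (100 + 5 * 15) * 2.0  # hypothetical burst: demand suddenly doubles
print(f"linear forecast: {forecast_m15:.0f}  vs. bursty actual: {actual_m15:.0f}")
```

The straight-line model dutifully extrapolates the past; the M&A deal or the hit product that doubles demand never appears in its forecast.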
On-premises “capacity on demand” pricing models and cloud computing are possible remedies, letting a business react to changing needs by bursting into extra compute, storage and analytic processing when required. Access to cloud computing can definitely help “reduce the need to forecast” traffic.
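What might a “burst when needed” trigger look like? Here is a hypothetical sketch; the threshold, window size, and function names are assumptions for illustration, not any particular vendor’s API:

```python
# Hypothetical sketch (names and thresholds are assumptions): if utilization
# stays above a threshold for a sustained window, request temporary cloud
# capacity instead of queueing work behind a saturated system.
from collections import deque

BURST_THRESHOLD = 0.85   # assumed: burst once sustained utilization exceeds 85%
WINDOW = 5               # assumed: consecutive samples required before bursting

recent = deque(maxlen=WINDOW)

def should_burst(utilization_sample: float) -> bool:
    """Return True once utilization stays above threshold for WINDOW samples."""
    recent.append(utilization_sample)
    return len(recent) == WINDOW and min(recent) > BURST_THRESHOLD

# e.g. fed from a monitoring system once per minute:
for sample in (0.70, 0.88, 0.90, 0.91, 0.89, 0.92):
    if should_burst(sample):
        print("sustained pressure -> provision burst capacity in the cloud")
```

The point isn’t the code; it’s that the decision rule, the procurement path, and the provisioning steps behind that print statement all have to exist before the spike arrives.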
However, many businesses won’t have a plan in place, much less the capability or designed processes—at the ready—to access extra computing power or storage at a moment’s notice. In other words, many IT shops know “the cloud” is out there, but they have no idea how they’d access what they need without a whole lot of research and planning first. By then, the market opportunity may have passed.
Businesses must be ready to scale (where possible) to more capacity in minutes or hours—not days, weeks or months. This likely means having a cloud strategy in place, completing vendor negotiations (if necessary), building adaptable and agile business processes, identifying and architecting workloads for the cloud, and testing a “battle plan” so that when demands for extra resources filter in, you’re ready to respond to whatever the volatile marketplace requires.