Version: 1.0.0

Introduction

I’m sure we’re all familiar with the quote “Those who don’t know history are doomed to repeat it.” In technology, we’ve seen wave after wave of solutions repeating themselves. Each wave tends to evolve in a way that solves the previous wave’s issues, so the drawbacks of the earlier waves no longer apply, or are at least mitigated.

In this article, I’m going to talk about the infrastructure trends I’ve seen over my career. We are repeating ourselves, and in many ways, repeating the very problems we tried to avoid.

History

Dedicated Infrastructure

Let’s rewind the IT landscape about 20 to 25 years. Your infrastructure ran on dedicated physical servers with their own local storage. Larger organizations may have used a SAN to share some storage components.

Problems

  • Underutilized infrastructure
  • Large amounts of floor space consumed
  • Lots of wasted electricity and cooling
  • Slow approval time
  • Slow buy time
  • Slow provisioning time

Shared Infrastructure

Now let’s rewind only 15 years. Virtualization was in full swing, and you had the ability to share your servers and storage. Running dedicated servers and storage became a rarity, and was actively avoided.

Problems

  • Slow approval time
  • Slow buy time
  • Infrastructure costs are difficult to separate per dimension (customer, department, application)

Dedicated Infrastructure

Finally, let’s look at where we are now: “the cloud,” which started around 10 years ago. We now have the option to run a multitude of deployment models: dedicated, shared, or some combination of the two. You have the freedom to create whatever makes the most sense for you.

So if “the cloud” is a hybrid of shared and dedicated infrastructure, why am I calling this a dedicated wave? The devil is in the details, as they say. You see, we now have a much easier time deploying infrastructure. Leveraging automation, infrastructure as code, and so on has made this possible compared to 25 years ago. Back then, it would have taken a great deal of time and effort to approve, buy, and provision. Now, we only need the approval, and the provisioning part is easy.
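To give a sense of how low the provisioning barrier has become, here is a minimal, purely illustrative sketch in Python, assuming AWS and the boto3 SDK; the AMI ID, instance type, and tag values are hypothetical placeholders. The point isn’t the specific API; it’s that what once took weeks of purchasing and racking now fits in a dozen lines.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch a "dedicated" server for a single team in one call.
    # The AMI ID, instance type, and tag are placeholder values.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="m5.xlarge",
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "department", "Value": "billing"}],
        }],
    )
    print(response["Instances"][0]["InstanceId"])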

The Problem Repeats

Where does the issue come in? Well, since we now have the ability to deploy dedicated infrastructure, why wouldn’t we? Unlike “way back in the day,” it’s easier than ever. Plus, we can now see exactly how much a customer, department, division, application, etc. costs. Isn’t this exactly what we’ve been striving for? Yes, but at what cost?

This is a case of “just because you can, doesn’t mean you should.” What the business needs to ask itself is one simple question: “Do I care about cost efficiency and profit?” If the answer is yes, then dedicated infrastructure should be avoided. You see, we’re now back to islands of underutilized compute and storage. Maybe your thought is that you’re charging it back, so who cares? In fact, now you can see exactly how much to charge back, so things are great.

What’s interesting is that we’ve now identified the actual problem you’re trying to solve. It was never about deploying dedicated infrastructure based on need. Instead, it is about isolating infrastructure so we can have easier accounting.

An Alternative

What if, instead of deploying dedicated infrastructure, we went back to a shared model? Customers, departments, and apps all sharing compute and storage. From a cost perspective, you’d likely see your cloud costs drop considerably. It’s very rare that every customer, department, and app is using compute and storage 100% of the time. This is why virtualization was so popular back in the day: you were actually using what you paid for, which led to much lower operating costs. Lower costs led to more profit, or more funds available for R&D.
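To make the utilization argument concrete, here is a tiny back-of-the-envelope sketch in Python. The hourly demand numbers are entirely made up for illustration; the point is simply that when teams don’t all peak at the same time, a shared pool needs far less capacity than the sum of every team’s dedicated peak.

    # Hypothetical hourly peak CPU demand (in vCPUs) for three teams over a day.
    team_a = [4, 4, 6, 20, 18, 6, 4, 4]
    team_b = [2, 2, 16, 14, 4, 2, 2, 2]
    team_c = [8, 10, 4, 4, 4, 12, 16, 8]

    # Dedicated model: each team provisions for its own peak.
    dedicated_capacity = max(team_a) + max(team_b) + max(team_c)

    # Shared model: the pool only needs to cover the combined peak,
    # because the teams rarely peak at the same hour.
    combined = [a + b + c for a, b, c in zip(team_a, team_b, team_c)]
    shared_capacity = max(combined)

    print(f"Dedicated capacity needed: {dedicated_capacity} vCPUs")  # 20 + 16 + 16 = 52
    print(f"Shared capacity needed:    {shared_capacity} vCPUs")     # peak of the sums = 38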

Another win for sharing infrastructure is that you tend to build bigger and better. When resources are isolated, you have to pinch every penny. Imagine a case where everyone has their own home. Some folks can afford fantastic chef-level kitchens, huge swimming pools, and so on. Others are lucky if they have a washing machine and dryer for their clothes. If we all pooled our resources together, we’d have access to an Olympic-sized swimming pool and a kitchen fit for a five-star restaurant, and it would still all be a bit underutilized. In IT infrastructure, the same is true. Pooling resources together means we all get access to better things. And chances are good we’d coexist just fine.

I hear what you’re thinking: what about knowing how much x, y, and z are costing us? In some ways, I would challenge why that is important from a big-picture perspective. What you care about is profit. In life, we sometimes need to help those who struggle with our extra resources. It’s good for humanity, and I contend it’s good for business in the right context. This isn’t to say we shouldn’t figure out a way to understand the infrastructure cost for x, y, and z, more that it shouldn’t be the primary focus. What we need to focus on is how to measure value. That is a problem we can solve independently of sharing infrastructure. If truly knowing the cost of infrastructure per billable dimension is a need, focus on how to solve that. Take the savings you’ll reap and divert some to cost-tracking tooling, whether that’s buying off-the-shelf tools or building your own.
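If per-dimension cost reporting really is a requirement, it can be layered on top of shared infrastructure with cost-allocation tags rather than dictating the architecture. Here is a minimal sketch, assuming AWS Cost Explorer via boto3 and a hypothetical “department” tag that has already been activated as a cost-allocation tag; the dates and tag key are placeholders.

    import boto3

    ce = boto3.client("ce")

    # Group last month's spend by the (hypothetical) "department" cost-allocation tag.
    response = ce.get_cost_and_usage(
        TimePeriod={"Start": "2023-01-01", "End": "2023-02-01"},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "TAG", "Key": "department"}],
    )

    for group in response["ResultsByTime"][0]["Groups"]:
        tag_value = group["Keys"][0]                         # e.g. "department$billing"
        amount = group["Metrics"]["UnblendedCost"]["Amount"]
        print(f"{tag_value}: ${float(amount):.2f}")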

Closing

Sharing is caring, as they say. We teach our children this, and somewhere along the way, we’ve lost sight of it. I’m not suggesting we give away everything, rather that there are other ways we can determine value. We can always figure out a way to charge back infrastructure. However, let’s first focus on making it as cost efficient as possible.