"The reason that God was able to create the world in seven days is that he didn't have to worry about the installed base."
Enzo Torresi. 1945-2016.
“I consider the bicycle to be the most dangerous thing to life and property ever invented. The gentlest of horses are afraid of it.”
Samuel G. Hough, General Manager of the Monarch Line of steam ships. July 14, 1881.
“Where a calculator on the ENIAC is equipped with 18,000 vacuum tubes and weighs 30 tons, computers in the future may have only 1,000 vacuum tubes and perhaps weigh 1.5 tons.”
Popular Mechanics. March, 1949.
A few days ago I published a blog post offering career advice based on some questions I’ve been getting from former colleagues. Interestingly, the part that garnered the most comments and feedback was about public vs. private clouds and shrink-wrapped on-prem software vs. cloud-based services:
Run, don't walk, away from any company recruiting you to work on shrink-wrapped on-prem software. The days of the private cloud are numbered, as are the days of legacy shrink-wrapped software that needs to be installed, maintained, and managed by an IT department. All of those companies are busy trying to figure out how to move their software to the public cloud.
If you are working for a hardware company, I feel for you. As the industry figures out how to offer anything-and-everything-as-a-service and as hardware becomes more and more commoditized, it’s no wonder that hardware companies are struggling - even the ones trying to “add value” with their “differentiated software stacks”. Proprietary software tied to a specific vendor’s hardware is a recipe for obsolescence. No, thank you.
The last company I worked for, Cloudflare, offers DDoS-protection-as-a-service, firewall-as-a-service, DNS-as-a-service, load-balancing-as-a-service, failover-as-a-service, caching-as-a-service, you name it. Just take out your credit card, go to their website, and you can sign up for any of those services in just a few minutes. No need to buy any hardware or maintain a random collection of servers and routers and switches and disk arrays in your own data center, and no need to hire an IT staff to manage the aforementioned infrastructure. Oh wait. Did I say take out your credit card? Never mind. They offer most of these services for free. Why would I want to run my own data center full of servers and stale software, again?
“Private cloud” is just a euphemism for “IT department unwilling to let go of the past”. Even companies in heavily regulated industries are busy trying to figure out how to get out of running their own data centers. The public cloud is the future, no question in my mind. And the only way to win there is to be “cloud native”.
I was asked to expand on my comments and justify my rationale, hence this update, seen primarily from the enterprise customer’s point of view.
I believe the companies that will thrive in the next generation of computing, in the cloud era, are not the ones that will sell you hardware, then sell you the operating system to run on that hardware, then sell you the app to run on that OS, then sell you the backend database needed to scale that app, then sell you the management solutions to manage that app, then sell you the backup solution that integrates “deeply” with that app, then sell you the identity solution that also integrates “deeply” with that app as well as the management solution, then sell you the load balancers and firewall to put in front of that app, then sell you … I can keep going but I think you get the idea.
Don't you see that the minute you buy a piece of hardware and put it in your data center, you start building a bespoke stack, and that nothing you do afterward will solve that problem or make it cheaper or easier to maintain? Every step you take from that moment on only adds to the complexity of managing that piece of hardware sitting in your data center.
Similarly, when you buy a piece of shrink-wrapped software - any software, be it an operating system or a firewall appliance or an HR application or an Oracle database - you are not buying just that piece of software. You are buying into an ecosystem that will, sooner or later, become a ball and chain forcing you to continue to invest in it.
Worse yet, that software running in your data center is guaranteed to be stale the minute you deploy it. Much like a car that loses a huge chunk of its value as soon as you drive it off the dealer lot, shrink-wrapped, on-prem software by definition starts rotting the minute you start using it. The problem is that it takes us vendors (let’s be generous and say) three to five years to design and build the various components needed to get your bespoke solution working, and it takes you at least a year to do the integration and qualification testing needed to get the end-to-end solution working in your environment.
And during those three to five years, the rest of the industry has moved forward by leaps and bounds with respect to best practices in reliability, availability, security, and maintainability - just to name a few “abilities”. Every “enterprise” feature has to make it through the sausage factory: it must be designed, implemented, and tested - not just by one engineering team or even by one company but by an entire slew of companies. Worse yet, by the time you’ve run the gauntlet and gotten all the various pieces of hardware and software working together, it’s already time to install patches and service packs and agents and plugins and connectors and start the next round of upgrades. In essence, you sign up to be the system integrator and you have to keep the beast fed, all in order to pull together a solution that is unique to your company when viewed end to end.
You don't believe me about “bespoke stacks”? Try comparing your Exchange Server implementation with that of your competitor across the street. He has Exchange 2003 SP3 running on Dell servers with EMC Symmetrix and Windows Server 2008 SP2, integrated with Active Directory for identity and a Digital Rights Management add-on for better security. He is running it on vSphere 6.0 and using OpenStack for management. His backup solution is Symantec NetBackup 6.3, and he is using a four-node cluster. He is also using F5 for load balancing and a Cisco ASA for the firewall. Except for that one division that came through an acquisition. They have a different version of Exchange running on HP servers with NetApp filers and HP OpenView...
Your setup, let’s just say, is slightly different.
Any wonder none of those vendors can replicate your problem when you call them at 3:00 AM on a Saturday because you can't restore from a daily backup, or when your perimeter is breached and your emails show up on WikiLeaks? These are just half a dozen variables in the hardware and software soup (dare I say cesspool?) that is running in your “private cloud enabled data center”. There are hundreds, if not thousands, of such variables in each data center running shrink-wrapped on-prem “enterprise class” software. Each of those components has gone through rigorous testing, but I can pretty much guarantee they have never been put together in exactly the combination that you have chosen. Every time you add a single variable, you multiply the complexity and the testing matrix for the companies involved.
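To make the combinatorics concrete, here is a minimal back-of-the-envelope sketch - the component names and counts are entirely hypothetical, not drawn from any real deployment - of how the space of distinct end-to-end configurations multiplies as each layer of the stack is chosen independently:

```python
from math import prod

# Hypothetical example: number of vendor/version choices in play
# at each layer of a typical enterprise stack.
stack = {
    "server_hardware": 3,
    "os_version": 4,
    "hypervisor": 2,
    "storage_array": 3,
    "backup_product": 3,
    "load_balancer": 2,
    "firewall": 2,
}

# Each added layer multiplies, not adds, the number of distinct
# end-to-end combinations a vendor would have to test.
combinations = prod(stack.values())
print(combinations)  # 864 distinct configurations from just 7 layers
```

With only seven layers and a handful of options each, you already have hundreds of unique combinations - which is why no vendor has ever tested yours in particular.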
Every single enterprise company out there is running a bespoke software stack - nay, a dozen bespoke software stacks - in their data centers, one for each enterprise application. The example above just covered email. Add to that HR and CRM and Finance and a dozen other stacks. And every one of those stacks, I claim, offers less reliability, availability, security, performance, manageability, and worse overall TCO than any of the current generation of public clouds and comparable SaaS solutions.
Here's a simple analogy: If I ask you to take me from point A to point B, would you take out your phone and call for an Uber, or would you start ordering vehicle parts so you can assemble a car to fulfill the request? Even if you choose the latter route, I bet you wouldn’t order parts from a dozen different car companies. Then why are you doing exactly that when you want to run your most critical business apps, the ones your company depends on? On-prem shrink-wrapped software is evil. Private clouds solve only part of the problem and don’t address the fundamental issues I’ve described above. Hybrid clouds? Those don't even really exist.
What I've said here is probably obvious to most industry pundits. What amazes me is that so many multi-billion dollar companies continue to employ thousands of engineers and build on top of the same ancient delivery model. Once they’ve sold you all the bits and pieces, they also promise that they will “hide” the complexity by giving you “universal management tools” and “one pane of glass visibility”.
At the end of the day, every piece of software you install and maintain in that ecosystem requires plugins and patches and “agents” and “connectors” in order to integrate with the other pieces of software. The complexity increases exponentially every time you add one more variable, all the way from hardware to operating system to applications to storage system to firewall to identity system to backup system to management solution to … you name it. And you are the system integrator. I guarantee no one else in the world is running the same exact mix of hardware and software that you are.
If you can find a single enterprise IT department that offers the same levels of availability, reliability, security, and performance as the public cloud, I would urge you to short their stock, because they are obviously spending way too much on their IT budget instead of their core business. The reality is that most enterprise IT organizations make compromises based on budget constraints, time constraints, political constraints, lack of information, and often even the whims of their personnel. “I absolutely hate Microsoft and refuse to buy any of their software.” These same IT organizations also change their requirements every once in a while - as disruptive industry trends catch up and their associated costs drop, as they go through acquisitions or mergers, as new CIOs come and go, as solution providers go out of business, and for a dozen other reasons. So you end up with spaghetti in the data center. You end up with a dozen miscellaneous unpatched operating systems on “appliances” because, as we all know, “appliances don’t count because I don't have to worry about the OS.” You end up with a dozen competing management solutions that promise to make your life easier but, in fact, often only add to expenses without delivering sufficient ROI.
At best, you end up building a system that works well during normal operations but falls apart as soon as any single component hits a problem. Any such deployment doesn't just have a Single Point of Failure. It has many Single Points of Failure. Compare that to the current generation of best-of-breed public clouds that are designed from the ground up for redundancy, designed for availability, designed for maintainability. Designed for Failure. Remember that these are enterprise application deployments we are talking about - ones that your business depends on. Which environment would you rather depend on?
Hosted solutions are a step in the right direction, as they remove several variables from the on-prem equation. The right long-term solution is to re-architect all these applications for the cloud; to make them “cloud native”, not to use a forklift to move the legacy monolithic applications to the cloud because your IT department is “comfortable with the current tools”. It’s hard to let go of legacy, but I argue it’s always better to understand the core requirements for that application, for that workload, and find the closest commercially available SaaS solution on the market. I don't even have to do the math to show you that such an answer is always the right one in the long run, based not just on CapEx and OpEx savings but also on overall service availability and reduced attack surface. But the IT department is usually the last one to tell you that. Their jobs are not best served by that answer. Nor are the hardware vendors. Nor are the database companies. Nor are the operating system companies. Nor are the application providers. Nor are the management solution providers.
The promise of the cloud is obvious. Utility computing. Simplicity. Fewer variables. We standardize on one piece of hardware, one operating system, one set of management tools. And, for all intents and purposes, we will always run the latest version of the software. And we will offer you an SLA - which means we have to constantly monitor service levels, something your IT department is probably not doing. And we will do immediate postmortems and Root Cause Analysis in the case of a service failure and share the findings with the public. In such a world, the fewer variables the better. Choice is the enemy of simplicity and reliability.
Seems obvious. Yet a lot of people are still hanging on to the old delivery model. Every excuse is used to perpetuate the old world: compatibility, regulatory compliance, training costs, budgetary constraints, etc. That model (bespoke stacks running in your data center) made sense ten or twenty years ago, when we didn't have public clouds offering utility computing, when high-speed connectivity didn't exist, and when we didn't have such demanding service availability requirements. It makes no sense in the new world.
One other thing that these companies seem to miss is that customers usually make decisions based on applications, not on infrastructure. Updating and re-architecting an enterprise application (email or HR or finance, for example) is an arduous multi-year journey. Enterprise companies make these decisions one application at a time and they do so infrequently - for good reasons. If I'm looking to upgrade my aging Exchange email infrastructure, I want to look into all the new architectures that have come along since Exchange was architected over twenty years ago. Wouldn't it make a lot more sense, and be a lot cheaper in the long run, to switch to Gmail, for example, than to some Frankenstein Exchange solution virtualized to run in a VM so we can continue to run Exchange 2003 SP9 and back it up with Backup Exec 3.2.5b?
The right answer is not to perpetuate the old model but rather to cap investment in existing on-prem solutions and to implement tools that help squeeze every ounce of value from your sunk cost in the existing on-prem infrastructure while re-architecting those applications for the public cloud if they are core to your business or outsourcing them to SaaS providers if they are not.
Seen through this lens, a whole slew of companies are doomed in the long term - unless they reinvent themselves. It is just as hard for these companies to do so as it is for their enterprise customers to move off existing infrastructure and solutions. Too much inertia, too many engineers and executives who are happy making incremental improvements to existing products rather than rethinking their value prop and business model. Very few companies have been able to successfully maneuver through such a transition in the past. Adobe seems to be a good recent example - a company that went from delivering shrink-wrapped software running on your home PC to a cloud-based service without losing a large percentage of its customer base in the transition to the new architecture. It was able to do so because it drew a clear line in the sand and stopped supporting its “legacy” installed base pretty quickly. This is an extremely hard thing for a company to do - walking away from its installed base, its cash cow, its revenue stream - and committing itself to a disruptive new business model. The classic Innovator's Dilemma.
As I was trying to say earlier: the companies that will survive in this new generation will be the ones that help you re-architect your applications to fit the cloud world, not the ones that keep selling you solutions for the old world. The journey to the cloud needs to happen application by application, not company by company. The trick, of course, is to avoid making the same mistakes again in the cloud - swapping one bespoke, brittle, proprietary stack for another hidden behind an API. That, I’m afraid, will have to be a topic for another day.