

Why are distributed applications the developer's problem, not the cloud platform's?

Last week, Iron.io announced Picasso, a project for Functions as a Service, and IBM announced general availability of OpenWhisk, their own "serverless applications" project. For those unfamiliar, "serverless" apps are distributed applications that do, of course, still run on servers, but they run in such a way that the developer doesn't have to think about the server very much.
Now, we're all treating this like it's brand new, and a wonder of innovation.  How incredible that the developer just uploads their code and it runs where it needs to! We've been trying to get to that point with OpenStack and live migration for six years.
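To make that concrete, here's roughly what "just upload your code" looks like: a minimal sketch in the style of an OpenWhisk Python action. The main(params) entry point follows OpenWhisk's Python convention, and the wsk commands in the comments are illustrative assumptions, not a deployment guide.

```python
# hello.py -- a minimal function in the OpenWhisk Python action style.
# Other FaaS platforms differ in details, but the shape is the same:
# a small, stateless function in, a result out.

def main(params):
    """Take a dict of parameters, return a dict as the result.

    There is no server to configure here: the platform decides where
    this runs, how many copies to run, and when to run none at all.
    """
    name = params.get("name", "world")
    return {"greeting": "Hello, {}!".format(name)}

# Deployment is essentially an upload -- roughly (commands are illustrative):
#   wsk action create hello hello.py
#   wsk action invoke hello --result --param name developer
```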
Here's the thing: this isn't new. VMware users have had this for years, of course.
But that's not new either.
Most of you are too young to remember, but as early as 1983, VMS systems enabled developers to -- say it with me -- upload code to a distributed system and not worry about where it went. If a system went down, the workloads moved to a healthy node.
Systems stayed up for literally decades at a time, and operators could upgrade or destroy entire datacenters without disturbing running applications.
So what is actually new?

Distributed applications, then versus now

It wouldn't be entirely accurate to say that serverless computing -- or "cloud native" computing in general -- is just like the old VMS (or the subsequent OpenVMS) days.  
It's harder.
See, in the old days, developers created large, monolithic applications, and the operating system worried about running different portions of them in different places. Now distributing workloads is the developer's problem. The developer has to architect the application so that it can be broken up, so that the different portions can be reached when needed, so that persistence is handled, and so on.
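For a sense of what that shift means in practice, here's a minimal sketch of one such portion: a tiny service that keeps its state in an external store so that any copy of it can run anywhere. Flask for HTTP and a Redis instance reachable at hostname "redis" are assumptions made purely for illustration; no particular platform prescribes them.

```python
# counter_service.py -- a sketch of one small piece of a distributed app.
# Assumptions for illustration: Flask for the HTTP layer, and a Redis
# instance at hostname "redis" for state.

from flask import Flask, jsonify
import redis

app = Flask(__name__)

# Persistence lives *outside* the process, so any replica of this
# service can be started, stopped, or moved without losing state --
# the kind of property the operating system used to provide for free.
store = redis.Redis(host="redis", port=6379)

@app.route("/hits")
def hits():
    # The service itself stays stateless; reachability is just HTTP,
    # and the platform (or a load balancer) decides which copy answers.
    count = store.incr("hit_count")
    return jsonify(hits=int(count))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```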
With microservices, we're already making the developer do a lot of work and thinking about things they've never had to think about before; Functions as a Service promises to make that task even more complicated.
So why are we making developers do this?

What we get out of it

Some of the advantages of these architectural changes are intrinsic to the architecture itself. For example, I probably don't need to elaborate on the advantages of modularity in application design and how it makes maintenance and upgrades easier.
But there's another advantage of cloud-native computing, one inherent in applications that are modularized by the developer rather than by the operating system, and that's freedom. Those VMS systems weren't running on your laptop; they were running on machines costing half a million dollars in today's money and accessible only to the largest of organizations.
By deliberately building applications to be distributable, developers can get almost that same flexibility without being tied to a particular operating system or particular hardware. Instead, a properly virtualized application can run on a private cloud, a public cloud, or even a combination of the two.
In fact, when developers take the time and effort to build properly virtualized and distributed applications, decisions can be made based on criteria that are outside of the application itself. For example, some components might run in-house for regulatory reasons, while others run in the public cloud for performance or proximity reasons.
Perhaps the biggest advantage of virtualized applications, however, is the speed and agility that they enable. By taking the time at the beginning to build applications that handle distributed environments, you are saving time later, both in terms of support and maintenance. More than that, you are making it possible to much more easily add new functionality going forward.

Is it worth it?

So that brings us back to our original question: is it worth creating all of these systems that make the developer do the work of making their applications distributed? The answer depends on your point of view, of course, but consider this: by deliberately architecting the application to be distributed, rather than relying on the operating system, you gain all of the agility such an architecture provides, as well as the ability to run your application in multiple environments and in multiple ways, with virtually unlimited scalability -- and none of the traditional roadblocks of monolithic, all-powerful systems.
Isn't that worth it?
