Serverless Computing Isn't Simple to Explain

Serverless computing is a concept that's difficult to encapsulate in a few short words, as illustrated in our expert discussion.

Charles Babcock, Editor at Large, Cloud

March 2, 2017

5 Min Read


Last week, InformationWeek hosted a Twitter chat on serverless computing with experts who will be speaking about the topic at Interop ITX in May. Some of the tweets from participants shed light on the subject, some raised important questions, and others expressed continued bafflement over what all the fuss was about.

The expert participants were Keith Townsend, author of The CTO Advisor and infrastructure architect at AbbVie, and Joe Emison, CIO of Xceligent, a commercial real estate information firm. Emison is a frequent speaker and writer on serverless and believes it will come to play a big role in IT. He explains more in this ServerlessConf presentation.

The Twitter chat started with the question, "What is serverless?" Emison's response extended the familiar meme that legacy servers are pets and cloud servers are cattle. Instead of worrying about one or the other, serverless allows you to disregard caring for beasts of any sort.

Kenneth Hui, a cloud architect at Rackspace, also made a central point: "serverless" describes how the developer sees the application, not how the user or IT operations manager sees it. The developer is composing business logic as if he no longer cares about where it runs. Under serverless, he doesn't have to.

IT operations, on the other hand, still cares very much about the servers it has under its jurisdiction, and it will always have some, in my opinion. When it comes to serverless apps, IT most likely has no on-premises servers hosting the app components, nor is it responsible for commissioning virtual servers in the cloud. The event-triggered microservice in the cloud is running on cloud infrastructure, which includes servers, needless to say, but not any servers that have been initiated by IT operations.

Townsend might argue that serverless doesn't necessarily exclude on-premises operations, and that a hybrid serverless operation, part on premises and part in the cloud, is in your future. In my opinion, that's a long way off.


Under today's serverless operations, it's the cloud provider that has to worry about the servers behind serverless. In effect, it needs to keep the thousands of servers that make up the cloud functioning. Inside the cloud, Google Cloud Functions, Microsoft Azure Functions, or AWS Lambda takes advantage of that infrastructure, but at no time will a customer be able to point to a specific server and say it's running his serverless application.

In that sense, the term serverless is an apt one. From the customer's point of view, the hardware running his application is invisible, anonymous, and somewhat irrelevant. The customer has no need to know and no interest in knowing, even out of curiosity. If one of the servers dies, neither the cloud provider nor the customer cares. If the cloud is working properly, the function rolls over to a new server and continues as before.

A related concept is that serverless means the developer is just as free of operating system worries as he is of server infrastructure issues. For example, an app function sits at rest in AWS Lambda. It waits to be awakened by a software event that triggers a call for it to come to life and process a piece of data. A developer using serverless might have 100 functions accessible via Lambda without knowing what version of the operating system they run on, or whether it's even the same operating system. It no longer matters. They exist as a web service, or more accurately, a web microservice, callable from a mobile app, a web application, or another source.
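As a rough illustration of that model, a minimal event-triggered function written in Python for AWS Lambda might look like the sketch below. The shape of the incoming event and the process_record helper are assumptions made for illustration; a real function's event format depends on whatever service triggers it.

import json

def handler(event, context):
    # AWS Lambda calls this entry point when a triggering event arrives,
    # such as an API request or a storage notification. The function's
    # owner never sees, or chooses, the server it runs on.
    records = event.get("Records", [])  # event shape varies by source (assumption)
    results = [process_record(r) for r in records]
    return {
        "statusCode": 200,
        "body": json.dumps({"processed": len(results)}),
    }

def process_record(record):
    # Hypothetical business logic: in a real function this would transform
    # or store the piece of data carried by the event.
    return record

Once deployed, a function like this simply waits; it consumes no compute time, and incurs no compute charges, until an event wakes it.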

In addition, serverless computing advances the state of the art for cloud use. In the cloud, we want to pay only for what we use, and there's always the possibility that an application that ran 24 hours a day in the enterprise data center can be switched off in the cloud at midnight and back on again at 4 am, saving four hours of charges. Serverless carries this approach a giant step forward.

The microservices or application functions are not sitting on a server that idles away until certain software events activate them. They are inactive but able to be loaded into memory quickly and run, possibly as containerized microservices. If containerized, no operating system needs to be spun up and no virtual machine woven around both workload and operating system, steps that would add many seconds of latency to the application's operation.

Instead, the function springs to life in an instant, is used, and just as quickly is decommissioned and goes back to sleep, with the owner paying only for the time the function ran. There are significant overhead savings in this approach compared with an enterprise server chugging along mindlessly 24 hours a day, whether in use or not.

That is, serverless is a chance to enjoy near-continuous availability of an application but pay for only those fractions of a minute or hour that the specific parts of the application are actually in use. For some heavily used applications, that could still mean 24 hours of charges, but for many, it means paying for a fraction of the day instead of the whole day.
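To make that arithmetic concrete, a back-of-the-envelope comparison might look something like the Python sketch below. The hourly rate, invocation count, average duration, and per-invocation prices are illustrative assumptions, not actual provider rates.

# Back-of-the-envelope comparison: always-on server vs. pay-per-use functions.
# All figures below are illustrative assumptions, not real provider pricing.

HOURS_PER_MONTH = 730

# Always-on virtual server billed around the clock (assumed hourly rate).
vm_hourly_rate = 0.10  # dollars per hour (assumption)
vm_monthly_cost = vm_hourly_rate * HOURS_PER_MONTH

# Serverless functions billed only for the time they actually run (assumed rates).
invocations_per_month = 1_000_000      # assumption
avg_duration_seconds = 0.2             # assumption
price_per_invocation = 0.0000002       # dollars (assumption)
price_per_compute_second = 0.0000167   # dollars (assumption)

serverless_monthly_cost = (
    invocations_per_month * price_per_invocation
    + invocations_per_month * avg_duration_seconds * price_per_compute_second
)

print(f"Always-on server:  ${vm_monthly_cost:.2f}/month")
print(f"Serverless usage:  ${serverless_monthly_cost:.2f}/month")

Under these assumed numbers the always-on server costs roughly $73 a month while the function-based approach costs a few dollars; a heavily used application would narrow that gap, which is the point of the paragraph above.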

The Twitter chat covered the basics, but there is much more to explore in serverless. For a transcript of the chat, read the compiled comments on Storify.

To dig deeper into serverless computing, attend Interop ITX, where Keith Townsend will discuss Integrating Serverless Computing into Your Hybrid Infrastructure and Joe Emison will present How You Can Benefit from Software Eating the World.


About the Author(s)

Charles Babcock

Editor at Large, Cloud

Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld and former technology editor of Interactive Week. He is a graduate of Syracuse University where he obtained a bachelor's degree in journalism. He joined the publication in 2003.
