NetworkWorld

Serverless computing’s future is now – and why you should care

By Peter Horadan, EVP of Engineering and CTO, Avalara

February 15, 2017

Serverless computing, a disruptive application development paradigm that reduces the need for programmers to spend time focused on how their hardware will scale, is rapidly gaining momentum for event-driven programming. Organizations should begin exploring this opportunity now to see if it will help them dramatically reduce costs while ensuring applications run at peak performance.

For the last decade, software teams have been on a march away from the practice of directly managing hardware in data centers toward renting compute capacity from Infrastructure as a Service (IaaS) vendors such as Amazon Web Services (AWS) and Microsoft Azure. It is rare that a software team creates unique value by managing hardware directly, so the opportunity to offload that undifferentiated heavy lifting to IaaS vendors has been welcomed by software teams worldwide.

The first wave of moving to IaaS involved replicating data center practices in the cloud. For example, a team that had 10 machines in its data center might create 10 VMs in an IaaS and copy each server to the cloud one by one. This worked well enough, but it didn’t take long for the industry to realize that IaaS is not just a way to offload hardware management. Instead, it is a fundamentally different way to build applications, offering far greater opportunities.

Serverless computing is the next step in this journey. With serverless computing, rather than allocating virtual machines and deploying code to them, the software team just uploads functions and lets the IaaS vendor figure out how to deploy and run those functions. The IaaS provider is also responsible for scaling the infrastructure so functions perform as expected no matter how frequently they are called. All the software team has to worry about is writing the code and uploading it to the IaaS vendor.
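As a concrete sketch of what "just uploading functions" looks like, here is a minimal handler in Python following AWS Lambda's handler convention. The `"name"` field in the event is a hypothetical example payload, not part of any fixed event format; the vendor decides where and how this code actually runs.

```python
# A complete serverless "deployment unit": one function, no server code.
# The platform (here following AWS Lambda's convention) calls
# handler(event, context) once per invocation; scaling, patching, and
# routing requests to instances are entirely the vendor's responsibility.

def handler(event, context):
    # 'event' carries the invocation payload. The "name" field is a
    # hypothetical example field, not a fixed part of the event format.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

Uploading this one file is the entire deployment: there is no virtual machine to provision, patch, or monitor.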

The promise of serverless computing is to allow teams to entirely stop thinking about the machinery the code runs on: how many machines are needed at peak times, whether those machines have been patched, whether the machines have the right security settings, and so on. Instead, the team just focuses on making the code great, while the IaaS vendor is responsible for running it at scale.

As a practical example, consider an application that allows users to upload photographs for automatic redeye removal. If the team manages its own hardware and the number of servers dedicated to the application is over-specified – and relatively few photos are uploaded – then the servers spend most of their time idle, a significant waste of resources. However, if the number of servers is under-specified, users will experience significant delays during peak usage. While auto-scaling services are available, they take extra effort to manage. Serverless computing eliminates all these concerns.
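The redeye example above can be sketched as an event-driven serverless function. This is an illustrative sketch only: the event shape is modeled on S3-style storage notifications, and `remove_redeye` is a stub standing in for the real image processing.

```python
# Sketch of the redeye-removal example as an event-driven function.
# The event shape is modeled on S3-style storage notifications; the
# remove_redeye() step is a stub standing in for real image processing.

def remove_redeye(image_bytes):
    # Placeholder: a real implementation would detect and recolor red pupils.
    return image_bytes

def handle_upload(event, context):
    """Invoked by the platform once per uploaded photo. Whether one photo
    arrives per hour or thousands per second, the team does no capacity
    planning: the vendor runs as many copies of this function as needed."""
    processed = []
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        image_bytes = b""  # in practice: fetched from object storage by key
        remove_redeye(image_bytes)
        processed.append(key)
    return {"processed": processed}
```

Note that the function itself contains no notion of servers, queues, or concurrency; idle time costs nothing because no instance exists between invocations.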

Serverless computing not only benefits software teams by removing the need to think about hardware, but can also dramatically reduce costs. In the world of managing VMs directly, there is nearly always excess capacity in the system, which has a direct cost. Moreover, most IaaS vendors offer a discount for teams that enter into a contract to buy “reserved” capacity.

In the world of managing machines, this imposes an unwelcome burden on software teams: they must not only manage the machines, but also make bets about the kinds of capacity they will need for the next year and enter into long-term contracts with their IaaS vendor. If a team overestimates the need, it wastes money buying reserved capacity that won’t be used. Conversely, if a team underestimates the need, it has to pay full “retail” price to add capacity outside the contract. This financial gamesmanship is a well-known and unpleasant reality for people managing IaaS spend.

Serverless computing eliminates the need for such gamesmanship. In the serverless world, the team just uploads code, and there is no need to think in advance about capacity or make years-long server reservation contracts.

The cost savings here can be dramatic. In our own experience, we’ve seen projects that cost $5,000 a month on reserved VM instances drop to roughly $200 a month in fees under the serverless computing model.
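To make the arithmetic concrete, here is an illustrative comparison, assuming Lambda-style pay-per-use pricing (the per-request and per-GB-second rates AWS published in 2017) and a hypothetical workload; the workload numbers are assumptions chosen for illustration, not figures from the example above.

```python
# Illustrative cost comparison: reserved VMs vs. pay-per-invocation serverless.
# Pricing constants mirror AWS Lambda's published 2017 rates; the workload
# (invocation count, duration, memory) is a hypothetical assumption.

PRICE_PER_MILLION_REQUESTS = 0.20  # USD per 1M invocations
PRICE_PER_GB_SECOND = 0.00001667   # USD per GB-second of compute

def serverless_monthly_cost(invocations, avg_seconds, memory_gb):
    request_cost = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_cost = invocations * avg_seconds * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

reserved_vm_cost = 5000.0  # monthly reserved-instance spend, per the example
lambda_cost = serverless_monthly_cost(
    invocations=40_000_000, avg_seconds=0.3, memory_gb=1.0)
print(f"VMs: ${reserved_vm_cost:,.2f}/mo  serverless: ${lambda_cost:,.2f}/mo")
```

Because the serverless bill scales with actual usage rather than provisioned peak capacity, the gap widens further for spiky or low-traffic workloads.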

Another area of cost savings and increased efficiency is scale. Returning to the redeye removal application: with a standard IaaS, a developer codes the application, tests it on a local computer, rents a server from an IaaS provider, makes sure the server has all the recent patches (an ongoing requirement), and then starts planning strategically, fiscally, and contractually for scale. With serverless computing, the vendor publishes an API that allows the developer to upload the function, and the vendor handles all the server maintenance and scaling. The vendor then provides a URL for the user to access the application. That’s it.

Given its simplicity and cost savings, serverless computing would seem to be the ideal development environment, but there are some important considerations to note. First, you need to place a lot of trust in the vendor. The benefit of serverless computing is that you don’t have to sweat the details; the downside is that you don’t know anything about the details. You must have confidence the vendor can instantly scale as needed without degrading performance.

So today, most organizations offering an enterprise-class, low-latency, high-availability service may still prefer to manage their own servers, or at most reserve servers from an IaaS vendor. For applications that don’t have such stringent requirements, serverless computing may already be a terrific, lower-cost alternative.

Another limitation of serverless computing is that if a company has a large application with many functions to stitch together, there is no “compiler” in the IaaS system to do the stitching. Instead, each function is uploaded separately, and the software team must manage how the functions work together.

This is much less efficient than linking to a function in the same executable. Testing and debugging are also more challenging, since functions are managed individually and may be on different versions in different environments.

Finally, only a limited number of programming languages are currently supported by IaaS vendors, which could mean additional training for the existing team or the need to bring on new team members. New tools are being delivered regularly, and I expect these problems to start to go away, but at least for now, serverless computing is still “some assembly required.”

The adoption rate for serverless computing will likely accelerate dramatically as vendors overcome or eliminate these obstacles. Eventually, even the most mission-critical workloads will be moved to this environment as teams continue to gain trust that IaaS vendors are much better at managing hardware than they are.

Ultimately, every company benefits from having developers spend less time worrying about infrastructure and more time implementing differentiated features and functionality. Whether it’s the start-up that goes from idea to product in a fraction of the time at a fraction of the cost, or an existing business that can drive down costs and increase agility, “serverless computing” will likely soon be just “computing,” and a programmer born today may never encounter the term “server” at all.

Although vendor-written, this contributed piece does not advocate a position that is particular to the author’s employer and has been edited and approved by Network World editors.


This article was written by Peter Horadan, EVP of Engineering and CTO, Avalara, from NetworkWorld and was legally licensed through the NewsCred publisher network.