by Nathan Noble –
Have you ever wanted to just stop dealing with servers? They take work to set up and maintain, and they sometimes demand extra attention. There was a reason they were called “server farms” and “web gardens”: they had to be “tended” to keep them healthy, occasionally “pruned”, and upgraded or patched once in a while. If you are, or once were, a system administrator like me, you might remember the times you had to wake up in the middle of the night because somebody called to say a server went down. In this era of cloud computing and auto-scaling clusters you can sleep better at night, thanks to virtualization and not having to manage physical servers. Still, you have to deal with the “server” as a concept and perform tasks to keep it in top shape.
Serverless computing is not a brand-new concept; it has been around for a while. A few years ago Amazon Web Services introduced the AWS Lambda service. The new architecture allowed developers to “run code without thinking about servers and pay only for the compute time consumed.” To put it simply, it is like uploading your code to the cloud, where it magically runs, and you only pay when you call it. Some call it FaaS, for “Function as a Service,” since all it is is a function in the cloud waiting to be called. Even though these platforms are called serverless, there are still many servers behind it all. The difference is that you do not need to provision, manage, or think about them at all. Ignorance is bliss.
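To make the “function in the cloud” idea concrete, here is a minimal sketch of what such a function can look like, using AWS Lambda’s Python handler convention. The event payload (the "name" field) is a hypothetical example, not part of any real trigger:

```python
import json

# A minimal AWS Lambda-style handler. The platform invokes this function
# once per event; there is no server process for you to start or manage.
def lambda_handler(event, context):
    # "event" carries the trigger payload; "context" carries runtime metadata.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

You upload code like this to the provider, wire it to a trigger (an HTTP request, a file upload, a queue message), and the platform runs it on demand.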
Serverless platforms, of which AWS Lambda is just one of many, promise to be scalable and highly available, and to charge you only for the time your function is actually running, metered in subsecond increments. If it is not running, you pay nothing. This is a big advantage over running server instances, where you pay by the hour whether or not the server is doing actual work.
Continuous Scaling. Processing power on demand is one of the biggest reasons serverless computing is becoming popular. When an application demands more resources, the service automatically scales it by running code in response to each trigger. Your code runs in parallel and processes each trigger individually, scaling precisely with the size of the workload.
High Availability and Fault Tolerance. Some services maintain compute capacity across multiple Availability Zones in each region to help protect your code against individual machine or data center facility failures, providing predictable and reliable operational performance. AWS Lambda, for example, is designed to provide high availability for both the service itself and the functions it operates. There are no maintenance windows or scheduled downtimes.
Zero Administration. The service manages all the infrastructure to run your code on highly available, fault-tolerant infrastructure, freeing you to focus on building differentiated back-end services. With Lambda, you never have to update the underlying OS when a patch is released or worry about resizing or adding new servers as your usage grows. The service seamlessly deploys your code, does all the administration, maintenance, and security patches, and provides built-in logging and monitoring.
Integrated Security. The service allows your code to securely access other services through a built-in software development kit (SDK) and integration with identity and access management. The service runs your code within a virtual private cloud (VPC) by default. AWS Lambda, for example, is SOC, HIPAA, PCI, and ISO compliant.
Lower Cost. You pay only for the requests served and the compute time required to run your code. Billing is metered in increments of 100 milliseconds, making it cost-effective and easy to scale automatically from a few requests per day to thousands per second. With AWS Lambda, for example, the first 1 million requests per month are free; thereafter, requests cost $0.20 per million ($0.0000002 per request) as of this writing.
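Using only the request pricing quoted above (compute-duration charges are billed separately and are not shown here), a rough monthly request-cost estimate can be sketched like this:

```python
# Request-cost estimate using the AWS Lambda figures quoted above:
# first 1 million requests per month free, then $0.20 per million.
FREE_REQUESTS = 1_000_000
PRICE_PER_MILLION = 0.20

def monthly_request_cost(requests: int) -> float:
    """Return the monthly request charge in dollars for a given request count."""
    billable = max(0, requests - FREE_REQUESTS)
    return billable / 1_000_000 * PRICE_PER_MILLION

# 10 million requests in a month -> 9 million billable -> $1.80
```

At this scale the request charge is nearly negligible, which is why the free tier alone covers many small workloads.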
Vendor Lock-in. The biggest concern is that it can lock you into one vendor. Although the code may be portable to other providers' systems, some work is still required to move the functions elsewhere. Code that runs on a server, VM, or Docker container remains more portable.
Debugging Challenges. AWS Lambda and Google Cloud Functions code can be run and debugged locally using tools such as lambda-local for Node.js or the Serverless Framework, but these still cannot match the native debuggers available for .NET and Java, which are easy to use and which developers know well. Integrated development environment (IDE) debugging tools for serverless code are not yet widely available or extensive, although they are making great progress and more are coming soon.
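Pending better tooling, one low-tech workaround is to import the handler and invoke it directly with a fabricated event, then step through it in any ordinary debugger (pdb, or an IDE). The handler and event shape below are hypothetical stand-ins for your own function:

```python
# A Lambda-style handler, invoked locally with a hand-built event so it
# can be stepped through like any other Python function.
def lambda_handler(event, context):
    return {"statusCode": 200, "body": event.get("name", "world")}

if __name__ == "__main__":
    # import pdb; pdb.set_trace()  # uncomment to step through interactively
    print(lambda_handler({"name": "local-test"}, None))
```

This covers business logic only; trigger wiring and IAM behavior still have to be tested against the real platform.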
Code Size Limits. There are limits that the services impose on the size of your deployment package thus limiting the code and dependencies that you can include.
Resource Limits. Functions run within fixed resource limits. For AWS Lambda, for example, memory allocation ranges from a minimum of 128 MB to a maximum of 3008 MB, in 64 MB increments; if a function exceeds its memory allocation, the invocation is terminated. Other limits include ephemeral disk capacity (“/tmp” space) of 512 MB, 1,024 file descriptors, 1,024 processes and threads (combined), a maximum execution duration of 300 seconds per request, and 1,000 concurrent executions per region. The application design will have to work around these limits.
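One way to design around such limits is to check headroom before starting a large piece of work. The sketch below assumes a Linux-style /tmp and takes the remaining execution time as a plain number of milliseconds (in AWS Lambda's Python runtime this value comes from the context object's get_remaining_time_in_millis() method); the 5-second safety margin is an arbitrary illustrative choice:

```python
import shutil

TMP_LIMIT_BYTES = 512 * 1024 * 1024  # the 512 MB ephemeral-disk limit noted above

def can_process_chunk(needed_bytes: int, remaining_ms: int,
                      min_time_ms: int = 5_000) -> bool:
    """Return True only if there is enough /tmp space and enough time left."""
    # shutil.disk_usage returns (total, used, free) for the given path.
    free = shutil.disk_usage("/tmp").free
    enough_space = needed_bytes <= min(free, TMP_LIMIT_BYTES)
    enough_time = remaining_ms >= min_time_ms
    return enough_space and enough_time
```

A function structured this way can bail out cleanly (for example, by re-queuing the remaining work) instead of being killed mid-write by the platform.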
AWS Lambda is not the only game in town. Azure Functions, Google Cloud Functions, and IBM Cloud Functions are the other top providers. A comparison of highlighted features from the four top providers as published is shown below.
AWS Lambda
- 1 million invocations free per month
- Pay per 100 milliseconds ($0.20 per 1 million requests thereafter)
- See https://aws.amazon.com/lambda/pricing/

Azure Functions
- 1 million invocations free per month
- Per-second billing ($0.20 per 1 million executions thereafter)
- See https://azure.microsoft.com/en-us/pricing/details/functions/

Google Cloud Functions
- 2 million invocations free per month
- $0.40 per million invocations thereafter
- See https://cloud.google.com/functions/pricing

IBM Cloud Functions
- $0.000017 per second of execution, per GB of memory allocated
- See https://console.bluemix.net/openwhisk/learn/pricing
Since serverless computing leverages existing programming languages such as C#, Java, Python, and JavaScript, there is no reason existing developers will not be able to adapt quickly. The skill set is similar to what is required for developing functions for server architectures.
As can be seen in the list of execution environments and language support, there is hardly anything new to learn except the frameworks themselves and possibly the newer cloud-based databases that are better suited for these functions to use. Many developers are becoming full-stack, front-to-back generalists rather than specialists, and more companies prefer this profile, as job postings show. Serverless computing is attractive to these professionals because it reduces the need for specialization and the number of layers to go through to deploy a function.
Definitely. As individuals and companies become more comfortable using the cloud for their products and applications, more and more will see the practicality of serverless architectures. Cloud adoption is growing at around 21% per year [1], and serverless architectures just make sense in the cloud.
A big YES. Many companies are still just trying to adopt DevOps, and one of the things that makes it harder to implement is the existing mixed ecosystem of technologies. Serverless computing allows the enterprise to adopt a single framework for future applications and services more easily. It eliminates server flavors and versions, and the need to keep track of each one's quirks and deprecations. DevOps aims to make organizations more agile in delivering software and services, and serverless enables them to do that.
1. O'Reilly (2021 Nov 8) https://www.oreilly.com/pub/pr/3333