The bad news: There are servers used in serverless computing. Real servers, with whirring fans and lots of blinking lights, installed in racks inside data centers inside the enterprise or up in the cloud.
The good news: You don’t need to think about those servers in order to use their functionality to write and deploy enterprise software. Your IT administrators don’t need to provision or maintain those servers, or think about their processing power, memory, storage, or underlying software infrastructure. It’s all invisible, abstracted away.
That’s why serverless computing is such a powerful concept, one that’s poised to explode as software architects and software developers come to grips with this emerging paradigm.
Serverless Computing 101: Containers
The whole point of serverless computing is small blocks of code that each do one thing very efficiently. Those blocks of code are designed to run in containers so that they are scalable, easy to deploy, and able to run in virtually any computing environment. The open Docker platform has become the de facto industry standard for containers, and as a general rule, developers are seeing the benefits of writing code that targets Docker containers instead of, say, Windows servers, Red Hat Linux servers, SUSE Linux servers, or any other specific run-time environment. Docker can be hosted in a data center or in the cloud, and containers can be easily moved from one Docker host to another, adding to the platform's appeal.
Currently, applications written for Docker containers still need to be managed by enterprise IT developers or administrators. That means deciding where to create the containers, ensuring that each container has sufficient resources (such as memory and processing power) for the application, installing the application into the container, monitoring the application while it runs, and adding more resources if required. Kubernetes, an open-source container orchestration system, helps with much of that work. So while containers greatly assist developers and admins in creating portable code, the containers themselves still need to be managed.
That’s where serverless comes in. Developers write their bits of code (say, to read from or write to a database, encrypt or decrypt data, search the internet, authenticate users, or format output) to run in a Docker container. However, instead of deploying directly to Docker, or using Kubernetes to handle deployment, they write the code as a function and deploy that function onto a serverless platform, such as the new Fn project. Other applications can call the function (perhaps through a RESTful API) to perform the required operation, and the serverless platform takes care of everything else automatically behind the scenes, running the code when it’s needed and idling it when it’s not.
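To make the idea concrete, here is a minimal sketch of what such a single-purpose function might look like in Python. The handler name and the event format are invented for illustration; the Fn project's actual function development kits define their own signatures.

```python
import hashlib
import json

# A single-purpose function: the unit of deployment on a serverless
# platform. The handler signature and event shape here are illustrative,
# not the Fn project's actual FDK API.
def handler(event: dict) -> dict:
    """Return a SHA-256 digest of the supplied text."""
    text = event.get("text", "")
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    return {"sha256": digest}

# The platform would invoke the handler once per incoming request;
# calling it directly simulates a single invocation.
if __name__ == "__main__":
    print(json.dumps(handler({"text": "hello"})))
```

The function holds no state of its own, which is what lets the platform start, stop, and replicate it freely.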
Serverless Computing 102: The Fn Project
The Fn project is an open-source framework for deploying serverless functions in enterprise data centers or in the cloud. Developers write applications just as if they were targeting an ordinary Docker container, and can use a variety of languages: currently Java, Go, Ruby, Python, PHP, and Node.js.
The Fn project packages that code (now called “functions”) into containers that can be run on any platform supporting Docker. Each function, as mentioned above, represents some fairly straightforward operation—image processing, video encoding, data processing, semantic analysis, and so on. When the function is deployed, it looks like a single instance of that function (“call this function to access the database”), but in reality, the serverless system manages the execution of that function. Nobody is using it? Kill all the running instances of that function’s container. An application is calling the function? Deploy the function in a container. Lots of apps are calling it? Provision lots of containers with that function.
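The scaling decisions described above can be sketched as a toy policy in a few lines of Python. The thresholds and the per-container capacity figure are invented for illustration; they are not how the Fn project actually tunes scaling.

```python
import math

def desired_containers(calls_per_second: float,
                       calls_per_container: float = 10.0,
                       max_containers: int = 100) -> int:
    """Toy scale-to-zero policy: no traffic means no containers;
    otherwise, provision just enough containers to absorb the
    current call rate, up to a cap. All numbers are illustrative."""
    if calls_per_second <= 0:
        return 0  # nobody is calling: kill all running instances
    # lots of callers: provision proportionally more containers
    return min(max_containers, math.ceil(calls_per_second / calls_per_container))
```

With this policy, zero traffic yields zero containers, a trickle of calls yields one, and a surge yields many, which is exactly the behavior the serverless platform automates behind the scenes.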
A benefit of that scalability becomes apparent when functions run on a pay-as-you-go cloud platform. Because the scaling is so dynamic, costs are tied directly to utilization. Early users of serverless functions in PaaS (platform as a service) cloud environments have found their costs lower than with other ways of running containers in the cloud.
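The cost difference can be illustrated with some back-of-the-envelope arithmetic. The rates below are hypothetical, chosen only to show the shape of the comparison, not any provider's actual pricing.

```python
def monthly_cost_always_on(containers: int, hourly_rate: float,
                           hours: float = 730.0) -> float:
    """Cost of keeping a fixed pool of containers running all month.
    The hourly rate is a made-up figure for illustration."""
    return containers * hourly_rate * hours

def monthly_cost_serverless(invocations: int, seconds_per_call: float,
                            rate_per_second: float) -> float:
    """Cost when billed only for actual execution time.
    The per-second rate is a made-up figure for illustration."""
    return invocations * seconds_per_call * rate_per_second
```

At these invented rates, a million 0.2-second invocations cost roughly $2 for the month, versus roughly $73 to keep two small containers running around the clock; a lightly used function pays almost nothing when idle.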
The Fn project is hosted on GitHub, and there’s a rich discussion community around it on Medium. According to Chad Arimura, who before joining Oracle (where he is now vice president of software development) was founder and CEO of Iron.io, one of the early pioneers in the serverless industry, “The way we package software is fundamentally different, thanks to containers. But they aren’t without difficulties, especially at scale. The Fn Project gives developers a ‘containerless experience’ by abstracting out the complexities—yet exposing their power.”
“With Docker and Kubernetes leading the way in creating an open, cloud neutral application stack, developers are now looking for a complementary, open serverless project that leverages these container-native components,” adds Bob Quillin, vice president of developer relations at Oracle. “One of the major reasons developers are demanding an open source stack in the first place is to avoid cloud lock-in—which many fear is exactly where AWS Lambda’s proprietary solution will lead them.”
The Servers Are Abstracted Away
There are still servers in serverless, but nobody sees them. Code is written as functions, which are deployed into Docker containers, but nobody sees those, either. Serverless computing removes a lot of the friction and run-time overhead from creating new applications, especially cloud-native ones. The resulting applications are automatically scalable, portable between data centers and clouds, and less expensive to operate. That’s why, if your team is building new applications, you should care about serverless computing.
Alan Zeichick is principal analyst at Camden Associates, a tech consultancy in Phoenix, Arizona, specializing in software development, enterprise networking, and cybersecurity. Follow him @zeichick.