Insight by Red Hat

Serverless computing goes open source to meet the customer where they are

This content is provided by Red Hat.

Serverless computing is having a moment. Although it’s been around for several years, recent shifts away from proprietary models toward open source have built momentum. At the same time, the standardization of containers, especially with Kubernetes, has opened up new possibilities and use cases and fueled innovation.

“It’s really this iteration on this promise that’s been around for what seems like decades now, which is if you outsource to, for instance, a cloud provider, you don’t necessarily have to know or care or manage things like servers or databases,” said John Osborne, chief architect for North America Public Sector at Red Hat. “A couple of the key traits of serverless are that the code is called on demand, usually when some event happens, and that the code can scale down to zero when it’s no longer needed. Essentially, you’ve offloaded part of your infrastructure to a platform or public cloud provider.”

The term serverless is a little misleading. There are servers, of course; you just don’t have to know or care about them, because they’re owned and managed by the platform. Osborne likens it to the term wireless: because a laptop isn’t plugged into a wall, we call it wireless, even though the signal may travel 10,000 miles over fiber-optic cable. The only part that’s actually wireless is your living room, but that’s the only part you have to care about.

One of the main benefits of adopting serverless is that it enables a faster time to market. There’s no need to worry about procurement or installation, which also saves money. Developers can just start writing code.

“It’s almost seen as a little bit of an easy button, because you’re going to increase some of the velocity for developers, and just get code into production a lot faster,” Osborne said. “In a lot of cases, you’re not necessarily worried about managing servers, so you’re offloading some liability to whoever’s managing that serverless platform for you. If your provider can manage their infrastructure with really high uptime and reliability, you inherit that for your application as well.”

The main roadblock to adoption thus far has been that proprietary solutions, while FedRAMP certified, haven’t done a good job of meeting customers where they are. These function-as-a-service platforms are primarily suited to greenfield applications, Osborne said, but the public sector has many applications that can’t simply be rewritten. The proprietary model also breaks existing workflows and carries a high education barrier.

Containers have now become the de facto mechanism for shipping software. It’s easy to package applications, even most older ones, in a container. Kubernetes then does much of the heavy lifting for that container-based workload, such as monitoring application health and handling service discovery. And Kubernetes runs anywhere: in a public cloud, on premises, at the edge, or any combination thereof. That makes it an optimal choice for users who want to run serverless applications with the flexibility to run existing applications in any environment. While Kubernetes itself isn’t a serverless platform, there has been a lot of innovation in this area, particularly with the Knative project, which is essentially a serverless extension for Kubernetes.
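To make the idea concrete, here is a minimal sketch of a Knative Service manifest that deploys a containerized application with scale-to-zero behavior. The service name and image are placeholders, not from any real deployment:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                 # hypothetical service name
spec:
  template:
    metadata:
      annotations:
        # allow the service to scale down to zero replicas when idle,
        # and cap how far it can scale up under load
        autoscaling.knative.dev/min-scale: "0"
        autoscaling.knative.dev/max-scale: "10"
    spec:
      containers:
        - image: registry.example.com/hello:latest   # placeholder image
          ports:
            - containerPort: 8080
```

Applied with `kubectl apply -f`, Knative spins up containers when requests arrive and removes them when traffic stops, which is the scale-to-zero trait Osborne describes.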

“The idea is that you can run these kinds of serverless applications in any environment, so you’re not necessarily locked into just what the public cloud is giving you, but anywhere Kubernetes can run, you can run serverless,” Osborne said. “And since it’s running containers, you can take legacy workloads and run them on top as well, which opens the door for the public sector to a lot of use cases. Traditionally, public sector IT orgs have handled applications with scaling requirements by just optimizing for the worst case scenario. They would provision infrastructure, typically virtual machines, to handle the highest spike and leave those machines running 24/7.”

Serverless can help alleviate some of this pain; the application can spin up when it’s needed, and spin back down when it’s not.

Osborne said he’s seen use cases at agencies that receive one huge file – say, a 100 GB data file – each day, so they keep server capacity running all day just to process that one file. In other cases, he said, agencies have bought complicated and expensive ETL tools simply to transform simple data sets. Both are good use cases for serverless. And because serverless is event-based, it’s a great fit for DevSecOps initiatives: when new code is merged into a repo, it can trigger containers to spin up and handle tests, builds, integrations and so on.
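The simple data-transform case above can be sketched as an event-driven function. This is an illustrative sketch only: the event shape and field names here are hypothetical, and a real platform (Knative Eventing, a cloud function service, etc.) would deliver its own event format:

```python
# Sketch of an event-driven transform: a function that runs only when a
# new data file arrives, instead of a server sitting idle all day.
import csv
import io

def handle_event(event: dict) -> str:
    """Triggered when a data file arrives; returns the transformed CSV.

    `event["data"]` (file contents as a string) is a hypothetical field,
    standing in for whatever payload the platform actually delivers.
    """
    reader = csv.DictReader(io.StringIO(event["data"]))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["id", "value"])
    writer.writeheader()
    for row in reader:
        # trivial transform: keep two fields and normalize the value
        writer.writerow({"id": row["id"], "value": row["value"].strip().upper()})
    return out.getvalue()

if __name__ == "__main__":
    sample = "id,value,extra\n1, foo ,x\n2, bar ,y\n"
    print(handle_event({"data": sample}))
```

The point is less the transform itself than the model: the code holds no server state, so the platform can start a container per event and scale back to zero afterward.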

“Once you go down the serverless path, you realize that there are a lot of trickle-down ramifications, from using existing tools and frameworks up through workflows and architecture models. If you’re using containers, it’s just a much better way to meet you wherever you are in terms of those tools and workflows, such as logging operations and so forth,” Osborne said. “Open source is really where all the momentum is right now. It’s a big wave; I tell customers to get ahead of it as much as they can. At least start to look into this kind of development model.”
