Serverless Computing accelerates time to value and reduces costs
We are a long way from Remote Procedure Calls (RPC). We are in serverless land. Here is a quick look at what that means for developers and IT, and its impact on business velocity.
In the serverless world, clients send messages to servers. Servers process messages upon arrival. Servers send responses back to clients. Servers are spun up as needed. Servers are auto-scaled to the required load. You get charged for the duration the servers run. And for the number of messages processed.
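That model can be sketched as a minimal function handler, in the style of AWS Lambda's Python runtime (the event shape and names here are illustrative, not a specific platform's contract):

```python
import json

def handler(event, context):
    """Invoked once per incoming message. The platform spins up and
    scales instances as needed, and bills per invocation and duration."""
    # 'event' carries the client's message; there is no server to provision.
    order = json.loads(event["body"])
    total = sum(item["price"] * item["qty"] for item in order["items"])
    # The return value becomes the response sent back to the client.
    return {"statusCode": 200, "body": json.dumps({"total": total})}
```

All the surrounding machinery, the provisioning, routing, scaling, and metering, is the platform's problem, which is the whole point.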
RPC to Serverless
While some of this sounds like the ’90s RPC, event-driven data processing has come a long way since then. Now, as developers, we can simply focus on the logic of processing events. The logic of the business. The logic needed for the real-time, programmable, event-driven enterprise.
This is prefabricated PaaS. This is PaaS-pampering of developers.
No messy server provisioning, message passing, autoscaling, load balancing, securing… i.e. no operational headaches. It is just you, the logic, and the code.
Prefabricated PaaS reduces the time from concept to code. Call it co-co time.
Costs and vendor lock-in considerations aside, it is goodness.
Point of Arrival
Broadly speaking, we got here in two steps. Middleware evolved into PaaS first. Then, fine-grained PaaS services were further abstracted into coarse-grained services, and the surrounding support and management operations were automated.
Everyone is on the bandwagon:
- Amazon has had serverless services for a while. Beyond its usual cloud regions, AWS Lambda also runs at AWS edge locations (as Lambda@Edge) and on your local devices via AWS Greengrass. (See this article on proximity computing for what this means for edge processing.)
- Microsoft Azure has Azure Functions, which some technically refer to as FaaS (functions-as-a-service).
- Google has Google Cloud Functions.
- IBM Bluemix has OpenWhisk.
It is a serverless nirvana out there!
Will It Get Better?
How? And will it accelerate AI? Aye.
Two more things get layered on to this pampering:
- Analytics and AI libraries that your serverless code can call at whim keep getting better. Yes, those fantastic ML beasts that take time to tame can now be put to work on your precious datasets as serverless functions.
- Integration tools for composing complex business logic that stitches workflows across microservices. Drag-and-drop studios that let you compose away.
GE Digital’s acquisition of IQP is an example of the latter. It further democratizes the creation of codified business logic. In short, the democratization continues.
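The first of those two points, ML accessed as a serverless function, can be sketched like this. The "model" below is a trivial hypothetical stand-in; a real deployment would load a trained model or call a managed AI service, but the handler shape is the same:

```python
import json

def load_model():
    # Hypothetical stand-in for a pre-trained model, used purely for
    # illustration: a toy word-overlap "sentiment" scorer.
    positive = {"great", "good", "love"}
    def score(text):
        words = text.lower().split()
        hits = sum(w in positive for w in words)
        return hits / max(len(words), 1)
    return score

# Loaded once per warm container, then reused across invocations --
# this is how serverless code amortizes an expensive model load.
MODEL = load_model()

def handler(event, context):
    """Score each incoming message with the shared model."""
    text = json.loads(event["body"])["text"]
    return {"statusCode": 200, "body": json.dumps({"score": MODEL(text)})}
```

The caller never sees the model's care and feeding; it just invokes a function and gets a score back.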
Prefabricated PaaS delivers A.I. (Analytics + Integration). So it will accelerate AI and consequently drive business agility. [Q.E.D.]
What is the Catch?
One could say lack of portability (“lock-in”) and unpredictable costs are downsides. Those objections exist for IaaS in general. Determining which applications should go serverless is the other. That concern, too, is just the next refrain of the familiar question: “which workloads should we move to the cloud?”
Ok. Next big question please?
If serverless can lower co-co time (concept-to-code time), how should I organize my internal resources to take advantage of it? What are the tradeoffs between using internal IT operations and app development teams vs. adopting serverless and unleashing the masses on it?
Time for another co-co talk. This time it is co-co for core-vs-context.
Accelerate your business velocity AND optimize your resources. Think co-co to reduce time from concept to code and to make some good ol’ core-vs-context decisions.
It’s co-co time in the land of the serverless! Make those decisions well, lest you go cuckoo.
Originally published on medium.com/cerebrus.