The journey to the Clouds.
It was last April when my colleagues from Imagicle’s R&D department and I set off on a long journey.
Our destination? Cloud-native architecture for our UC licences. Something that would make even Frodo and his valiant Sam go pale, I’m telling you.
Today, we constantly hear about Cloud: Cloud technologies, Amazon, Google, RedHat, Oracle and many other players, but what’s really going on out there?
In the real world, whoever you are, you probably fall into one of these categories:
- You are a startup starting from scratch, and you’re dealing with some of the latest technologies like Kubernetes, Docker, Lambda, etc.
- You are a big company with one or more monolithic products; you probably keep using old technologies like Cobol, or ’90s tech like Java and C#, and you want to migrate to the Cloud with nothing more than a drag-and-drop effort.
- You are Imagicle: you have one monolithic ApplicationSuite, and you have a big dream: building a true cloud-native system. This means you can experiment with the latest technologies while drawing on a wealth of consolidated experience in software development.
Luckily, my mates and I are in the third scenario!
Well, pack your bags and get ready: I want to tell you a fantastic story and take you straight to the Mount Doom of technology.
What does Serverless mean?
Serverless computing (or serverless for short) is an execution model where the cloud provider (AWS, Google Cloud, or Azure) is responsible for executing a piece of code by dynamically allocating the resources, charging only for the amount of resources actually used to run the code.
So, despite the name, serverless architecture doesn’t mean running the code without servers.
Instead, it means that there’s no need to buy or use servers/virtual machines. It’s a way of building and running applications and services without managing the infrastructure.
All you need to do is develop your application and distribute it with serverless architecture by choosing the runtime.
Server software, server hardware, auto-scaling, networking, provisioning: all of it becomes the Cloud Provider’s responsibility.
It all happens auto-magically.
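Just to give you an idea, a function in its simplest form is little more than an entry point that the provider invokes on demand. Here’s a minimal sketch, assuming AWS Lambda’s Python runtime (the event shape and names are purely illustrative):

```python
import json

def handler(event, context):
    # The provider invokes this entry point on demand; provisioning,
    # scaling, and teardown all happen around it, not inside it.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

That’s the whole deployable unit: no server to start, no port to listen on.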
Remember when you needed a costly bare-metal server hosting a sprawl of painfully slow VMs (oh… how little I like VMs!), plus hours, even days, to set up the right environment… and then, just when you thought you were ready, you couldn’t scale?!
Take a breath. It’s prehistory!
Today you can create an environment with only a few lines of code (i.e., Terraform or CloudFormation) and deploy your application in a few seconds on a magic infrastructure entirely controlled by your Cloud Provider.
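To give a taste, a few lines of Terraform are enough to declare a function. This is only a sketch: the resource names, the zip path, and the IAM role are placeholders (and the role is assumed to be declared elsewhere):

```hcl
# Illustrative only: names, paths, and the IAM role are placeholders.
resource "aws_lambda_function" "license_api" {
  function_name = "license-api"
  runtime       = "python3.9"
  handler       = "app.handler"
  filename      = "build/app.zip"
  role          = aws_iam_role.lambda_exec.arn # role declared elsewhere
}
```

One `terraform apply` later, the Cloud Provider takes care of everything underneath.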
You can scale up as much as you want, to infinity and beyond!
How did we get rid of servers?
How did we manage to perform the same (or more) operations without servers?
Simple. We stripped away everything superfluous (we handed all that stuff to the Cloud Provider, remember?) to focus on the one fundamental thing: development.
After bare metal and VMs, containers arrived and allowed us to create fantastic microservice applications. But this is not the last technological stage; in fact, there is still something we must distance ourselves from: the container itself!
Functions allow us to focus only on the development, without thinking about other matters like:
- choosing the right Docker image;
- setting up our k8s cluster with correct policies;
- setting up application servers like JBoss or servlet containers like Tomcat;
- provisioning the required hardware.
Basically, the function does nothing more than put your application into a container before you can say Jack Robinson.
Why choose a Serverless Architecture?
Let’s go back to the journey of us brave R&D adventurers.
We were simply looking for a cloud license management system when we decided to develop one ourselves: it was at that very moment that the questions that led us to serverless architecture emerged.
- How many requests should we handle?
- How much traffic will be generated?
- How many times should we call that API?
The Fellowship of the Clouds hesitated for a moment.
That day we didn’t know how to answer yet. The good news, though, is that the first time you’re unsure which architecture to choose but you know it will be based on microservices, you can take the questions above into consideration: if you can’t establish with any precision how much you’ll need to scale and provision, the architecture you need is probably serverless.
See, many people ask me: when should serverless architecture be used?
The answer is very simple: when should it not be used?
In fact, it’s much easier to say when you shouldn’t use it than when you need to use it. Here is a list of general cases when serverless is not a good fit:
- a very complex task, with high memory needs and peculiar requirements;
- a level of fine-grained customization/flexibility/control beyond what functions allow;
- a long-running task that can’t be broken into smaller functions (remember that you pay as you go);
- when your functions are always up and running, and their cost is greater than a container’s (excluding the cost of the cluster);
- in case your Cloud provider doesn’t offer a good FaaS (Function as a Service) platform.
The Imagicle recipe.
Now, you’re probably thinking: “Ok, Chris, but which runtime should I pick?”
Today there are a lot of runtimes in the Serverless scenario.
For example, AWS, currently the most advanced Cloud Provider, supports Go, Python, Node.js, Java, and .NET Core 2.
There is not a single right answer to this question, but perhaps it can help you to know how the Imagicle team approached this dilemma.
When we began developing the functions, we asked ourselves:
- Which language will be more efficient and fast?
- What language will allow us to save money?
- Which language will be the most suitable for our future in the Cloud?
Keep in mind that a function’s cost is calculated considering 3 different factors:
- allocated memory;
- seconds of computing time;
- number of requests.
The first 2 points are very simple to manage, while the last one may be tricky because, in our case, it is in our clients’ hands.
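To give you a back-of-the-envelope idea, here’s how such a cost estimate can be sketched. The rates below are illustrative only; real per-GB-second and per-request prices vary by provider and region:

```python
def estimate_monthly_cost(requests, avg_duration_s, memory_gb,
                          gb_second_rate=0.0000166667,        # illustrative rate per GB-second
                          per_request_rate=0.20 / 1_000_000): # illustrative rate per request
    """Rough serverless bill: compute time (memory x duration) plus request count."""
    compute_cost = requests * avg_duration_s * memory_gb * gb_second_rate
    request_cost = requests * per_request_rate
    return compute_cost + request_cost

# 5 million calls a month, 200 ms each, 128 MB of memory
print(round(estimate_monthly_cost(5_000_000, 0.2, 0.125), 2))  # → 3.08
```

Notice how the number of requests, the one factor in our clients’ hands, drives both terms of the bill.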
After some technical experiments, we adopted an open source programming language, because:
- it’s faster than other languages (less execution time = less Cloud cost);
- it is very simple to develop with;
- it produces only one distributable file (easy to deploy);
- in my opinion, it’s the language of the Cloud, just as Python was the language of data science and ML, Scala the language of Big Data, Java the language of enterprise backends, PHP of the web, and so on.
To complete the recipe, we seasoned it with a pinch of , which lets us manage the whole lifecycle of our functions and all our IaC (infrastructure as code).
Well, well, guys.
The Fellowship of the Imagicle Clouds is on track: we can already see the slopes of Mount Doom.
Today our Cloud Licensing backend, based on a set of functions, got a clean bill of health.
When there’s nothing to do, our architecture “turns off”, and when our Imagicle ApplicationSuite needs to call our service, the infrastructure turns on by magic!
What we have built during these months has a very specific name: .
Starting from the last Summer Release 2019, in fact, Imagicle partners and customers can set up a Smart Account and manage all Imagicle licenses directly from our Cloud, taking full advantage of easy activation, a real-time view of activities, and centralized management.
It’s a huge change. And it’s 100% serverless.
As we said, we are only on the slopes. As we proceed in our technological ascent, many other innovations will come, getting us (and you!) closer and closer to the Clouds.
So, guys, what do you think? Do you still love your old architecture?
P.S. Wanna join the Fellowship? Check out our openings!