Ignoring such advancements in the application development space can lead to sub-optimal results when adopting Microservices-based architectures.
The focus of this post is on two such technologies, Node and Go. Why those? I was intrigued by a somewhat strange fact: they share the same date of birth, almost to the day. And maybe not by chance.
With Microservices you'd better optimise the concurrent processing of many small tasks, which is not what languages like Java and C# were designed for

Such architectures have proved to scale at sustainable costs as long as they are able to process many small tasks at "the same time", maximising the use of CPU cycles and memory of commodity multicore processors. Traditional languages like Java, born in a different era, were not designed with horizontally scalable distributed architectures in mind. Applications in the '90s were monolithic: monolithic application servers running monolithic processes. By 2009, the new requirement was to concurrently run many small tasks on many small machines (massively concurrent/parallel systems). So there was clearly a mismatch. Both Node and Go came in to address this mismatch, even if from very different directions.
In distributed architectures concurrency is crucial if you want to optimise the use of infrastructure and minimise its costs

So, what does this mean for our computing power? It means that it will remain under-utilised, unless we do something about it: unless we make sure that more than one request can be managed by our CPU "at the same time". And this is exactly what concurrency is all about. This is not a new problem. Traditionally, in the Java world, this was the task of the Application Servers. But Application Servers are not a good fit for distributed and horizontally scalable architectures. And this is where the likes of Node and Go can come to the rescue.
Concurrency and parallelism are related but distinct concepts. Here I focus on concurrency, since this is what really matters in this context.
In the above example, the first request Req1 runs some initial logic (the first dark green bar) and then starts an I/O operation (I/O operation 1.1) specifying the function that will have to be called when I/O completes (the function cb11).
At that point, the processing of Req1 halts and Node can start processing another request, e.g. Req2. When I/O Operation 1.1 completes, Node is ready to resume the processing of Req1 and will invoke cb11. cb11 will itself start another I/O operation (I/O Operation 1.2) passing cb12 as callback function, which will be invoked when the second I/O operation completes. And so on until Req1 processing ends and the response Resp1 is sent back to the client.
In this way, with a single thread, Node can serve many requests at the same time, i.e. concurrently. The non-blocking model is the key to concurrency in Node.

Being single-threaded, though, means that we cannot use more than one core (for multi-core scenarios it is possible to use Node clusters, but going down this path inevitably adds some complexity to the overall solution).

Another aspect to note is that the non-blocking model implies an asynchronous style of programming, which at the beginning may be hard to reason about and can lead to complicated code, the so-called "callback hell", unless properly managed.

The Go approach to concurrency is based on goroutines, which communicate with each other via channels.
Programs can launch many goroutines and the Go runtime will take care of scheduling them on the CPU cores available to Go, according to its optimised algorithm. Goroutines are not Operating System threads: they require far fewer resources and can be spawned very fast and in very high numbers (hundreds of thousands of goroutines in a single process are not unusual).
Go is also non-blocking, but here it is all handled transparently by the runtime. For instance, if a goroutine fires a network I/O operation, its state is changed from "executing" to "waiting" and the Go runtime scheduler picks another goroutine for execution.
So, from a concurrency perspective, this is similar to what Node does, but with two main differences: the Go runtime can use all the cores available, not just one, and the code of each goroutine is a plain sequential flow rather than a chain of callbacks.

In the above example, the Go runtime has 2 cores available. Both cores are used to serve incoming requests, and each incoming request is processed by a goroutine.
For instance, Req1 is processed by goroutine gr1 on Core1. When gr1 issues an I/O operation, the Go runtime scheduler moves gr1 to “waiting” state and starts processing another goroutine. When the I/O operation completes, gr1 is put in “runnable” state and the Go scheduler will resume its execution as soon as possible.
A similar thing happens with Core2. So, if we look at a single core, we have a picture similar to that of Node. The switch of goroutine state (from "running" to "waiting" to "runnable" to "running" again) is handled transparently by the Go runtime, and the code is a simple flow of statements to be performed sequentially, which is different from the callback-based mechanism imposed by Node.
In addition to all of the above, Go provides a very simple and powerful mechanism of communication among goroutines, based on channels and mutexes, which allows smooth synchronisation and orchestration of different goroutines.
Node and the Holy Grail of one language for Front End and Back End
Javascript/Typescript dominates the world of Front End. It is almost impossible to imagine a SW shop that has to build some Front End software without using Javascript/Typescript extensively.

But what if you also need to build the Back End? With Node you can leverage the same language, the same constructs and the same ideas (asynchronous programming) to build the Back End as well. Even in the serverless space Node plays a central role, having been the first platform to be supported by all major Cloud providers for their FaaS offerings (Function as a Service, i.e. AWS Lambda, Google Cloud Functions and Azure Functions).

The ease of switching between Front End and Back End may be one of the reasons for the incredible success of Node, which has led to a super vast ecosystem of packages (you have a Node package for practically everything) and a super vibrant community.

At the same time, not all types of Back End processing are efficiently supported by Node. For instance, CPU-intensive logic is not for Node, given its single-threaded nature, and you should not fall into the trap of using it for such workloads. Node, with the enormous amount of packages available in its ecosystem, can also be seen as "the Wild West of programming", a place where the quality and safety of what you import have to be constantly checked. But this is probably true any time we leverage external libraries: the broader the ecosystem, the higher the attention level has to be.

Still, in many cases, especially for I/O-bound concurrent scenarios, Node can be a good choice and would help maximise the Javascript/Typescript skills that may already be in house.

Rust. It is an open source language, backed by Mozilla, which presented it in 2010 (so it is one year younger than Node and Go). The main goal of Rust is to match the performance of C/C++ with a much safer programming model, i.e. with a much lower probability of stumbling into obnoxious runtime bugs.
The new way of thinking Rust introduces, specifically around owning/borrowing memory, is often considered challenging in terms of learning curve. If performance is super critical, though, then Rust is definitely an option to consider.
Kotlin. It is a language that runs on the JVM, developed by JetBrains and released in 2016 (in 2017 Google announced its support for Kotlin as a first-class language for Android development). It is more concise than Java and embeds in its original design concepts like functional programming and coroutines, making it part of the modern languages league. It can be seen as a natural evolution of Java, with a low entry barrier for developers coming from that world.
GraalVM. This is a new promising approach to Java and other JVM-based languages. GraalVM Native Image allows compiling Java code to native executable binaries. This can produce smaller images and improve performance both at startup and at execution time. The technology is still pretty young (released in 2019) and, at the moment, it shows some limitations. Given Java's popularity, it is likely to see significant improvements as it evolves towards maturity.