"I like this ball" – Harry Potter (aspiring wizard)
"Ah, you like it now. Just wait, it's wicked fast and damn near impossible to see" – Oliver Wood (Gryffindor Quidditch Captain)
I'm no expert on wizard pastimes, but it strikes me that ensuring Microservices stay performant is like trying to catch the Golden Snitch in a game of Quidditch. Like the snitch, Microservices are small, fast and dart about at incredible speeds.
But unlike Harry Potter, IT operations can't rely on state-of-the-art broomsticks, potions and spells. With applications now decomposed into hundreds, perhaps thousands of "snitches," new monitoring methods must be employed to ensure the Microservice promise is fully realized.
Small is Beautiful – but There is a Cost
Though not especially new, the idea of developing and delivering small, granular, independent but collaborating services makes perfect sense. Independence reduces development bottlenecks, and when supported by DevOps and Continuous Delivery, Microservices become a perfect architectural platform to conduct more frequent releases – all without impacting the stability of the entire system.
Now that all sounds great in theory, but as with any tech potion there are just a few side-effects to consider.
With Microservices, the scale of the monitoring problem increases exponentially. Suddenly, that static monolithic application deployed to a single cluster might now be composed of hundreds (even thousands) of separate containerized services. It gets even more complex when services are written in different programming languages and use separate data stores (possibly SQL, but more likely NoSQL like Cassandra or MongoDB). Throw in a few cloud load balancers (AWS Elastic Load Balancer or maybe Nginx) together with a dash of auto-scaling for resilience, and what used to be the relatively simple task of maintaining application performance has become an operational nightmare.
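To make the scale problem concrete, here's a minimal Python sketch of the monitoring fan-out: polling hundreds of service health endpoints in one sweep. The service names and the in-process `check_health` stub are illustrative only; a real monitor would issue HTTP requests to each container's health endpoint and pull the service list from a discovery system, not a static list.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical service registry -- in a real deployment this would come
# from an orchestrator or service-discovery system, not a static list.
SERVICES = [f"service-{i}" for i in range(500)]

def check_health(name):
    # Stand-in for an HTTP GET to the service's /health endpoint.
    return (name, "UP")

def poll_all(services):
    # Fan the checks out concurrently; polling hundreds of endpoints
    # serially would quickly exceed any reasonable scrape interval.
    with ThreadPoolExecutor(max_workers=50) as pool:
        return dict(pool.map(check_health, services))

statuses = poll_all(SERVICES)
print(len(statuses))  # 500 statuses collected in one sweep
```

Even this toy version hints at the operational shift: the monitoring system itself becomes a distributed workload that must be scaled, scheduled and kept current as services come and go.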
Of course scale isn't the only concern. When you break applications into many collaborating services, there'll be thorny new communication issues to consider – not least network latency and asynchronous messaging. Add to the mix issues around API performance and monitoring, and the discipline might just as well become part of the Hogwarts School of Witchcraft and Wizardry curriculum.
Modern Application Performance Management for Microservice Monitoring
Wizardry aside, deploying Microservices at the pace demanded by a modern digital business requires new operational skills, tools and methods. Simply throwing thousands of services over the wall to operations just doesn't work, while attempting to stem the tide with unwieldy change control and standardization causes friction and conflict. Modern Application Performance Management solutions can help, but they must move beyond the traditional "break-fix" approach associated with siloed monitoring. Doing so means giving developers modern techniques that help them become operationally focused in the context of their Microservice coding efforts – never enforcing overly complex practices. And, since Microservice-style applications are tightly integrated into their environmental context (e.g. containers and cloud services), leveraging APM to build production quality in right from the development get-go is an important consideration.
From a production perspective, modern APM will also deliver many new capabilities. Rather than trying to build complex topology maps that are out-of-date as soon as they're presented, new tools will aggregate any number of services into higher-level business abstractions. Using new visualizations, cross-functional teams will gain clear business insight into the operational impact of Microservices, with role-based views helping drive improvements from any practitioner perspective.
Developers, for example, will invoke assisted triage workflows to pinpoint where code improvements are needed, while support analysts will adopt APM analytics and proven statistical methods such as differential analysis to better predict Microservice performance problems and anomalies.
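The post doesn't spell out what differential analysis looks like in practice, but one simplified interpretation is comparing a current metric window against a learned baseline and flagging large deviations. A minimal Python sketch, with hypothetical latency figures and a function name of my own invention:

```python
import statistics

def is_anomalous(baseline_samples, current_samples, threshold=3.0):
    # Learn "normal" behaviour from historical samples, then flag the
    # current window if its mean drifts more than `threshold` baseline
    # standard deviations away (a simple z-score style check).
    base_mean = statistics.mean(baseline_samples)
    base_stdev = statistics.stdev(baseline_samples)
    current_mean = statistics.mean(current_samples)
    if base_stdev == 0:
        return current_mean != base_mean
    return abs(current_mean - base_mean) / base_stdev > threshold

# Hypothetical response-time samples (milliseconds) for one service.
baseline = [102, 98, 105, 101, 99, 103, 100, 97, 104, 102]
spike = [180, 210, 195, 205]

print(is_anomalous(baseline, baseline[:4]))  # False – within normal range
print(is_anomalous(baseline, spike))         # True  – latency spike
```

Real APM analytics are far more sophisticated – seasonal baselines, multi-metric correlation – but the principle is the same: let the data define "normal" per service rather than hand-tuning thousands of static thresholds.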
All of these new methods are now table stakes. Apart from the sheer number of components under management, Microservices architectures place greater emphasis on the relationships between elements, which has a massive effect on alarms, alerts (and noise). Add cloud and mobile app delivery, where demand is difficult to predict, and performance baselining across modern fluid platforms becomes at best guesswork – or yet another operational dark art.
But guesswork has no place in modern application delivery. Digital experience engineers require modern APM methods that truly exploit the value of this brilliant architectural model.
Succeed and the undoubted benefits will quickly outweigh all the management overhead and costs.
Fail and you'll be left chasing the tech equivalent of a Golden Snitch – only without all the wizardry.
Pete Waterhouse is Advisor, Product Marketing, at CA Technologies.