Containerization, orchestration and open standards for metrics, logs, traces and flows have ushered in a new era of monitoring tools that will form the foundation of enterprise instrumentation. Read on for our view.
The widespread adoption of cloud computing has fundamentally transformed the IT industry since AWS launched in 2006. The changes span the development, deployment and management of applications and create new demands on operational support systems.
Docker and others introduced an advanced, lightweight alternative to the virtual machine (VM): the container. Containers encapsulate application code and isolate it from the underlying hardware, enabling portability and scalability. Containers also publish metrics in a consistent format regardless of the VM on which they are running.
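That consistent format is, in practice, the Prometheus exposition format that container runtimes and exporters such as cAdvisor emit. A minimal stdlib-only sketch of parsing one such line (the metric name and labels below are illustrative, and the comma-splitting of labels is a simplification that ignores escaped commas):

```python
# Minimal sketch: parsing a Prometheus exposition-format metric line,
# the consistent format container exporters emit regardless of host.
import re

METRIC_RE = re.compile(
    r'^(?P<name>[a-zA-Z_:][a-zA-Z0-9_:]*)'   # metric name
    r'(?:\{(?P<labels>[^}]*)\})?'             # optional {label="value",...}
    r'\s+(?P<value>\S+)$'                     # sample value
)

def parse_metric(line):
    """Parse one metric line into (name, labels dict, float value)."""
    m = METRIC_RE.match(line.strip())
    if not m:
        raise ValueError(f"not a metric line: {line!r}")
    labels = {}
    if m.group("labels"):
        # Simplification: assumes no commas inside quoted label values.
        for pair in m.group("labels").split(","):
            key, _, val = pair.partition("=")
            labels[key.strip()] = val.strip().strip('"')
    return m.group("name"), labels, float(m.group("value"))

# Example line in the style a container exporter produces:
line = 'container_cpu_usage_seconds_total{container="web",pod="web-1"} 42.5'
name, labels, value = parse_metric(line)
```

Because every container publishes samples in this one text format, a single collector can scrape heterogeneous workloads without per-application adapters.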
Before the cloud, managing the configuration and deployment of applications was a problem for the operations team. A whole culture of scripting tools, languages and skills mushroomed as the volume of application components increased, aggravated by the rampant use of VMs as a way to control the environment.
When the cloud came along, the cloud vendors could not survive with the same approach. AWS, and then the others, created proprietary environments to define, configure, test, deploy and operate applications.
The next logical step came in the form of open source orchestration tools. Within two years of its introduction, Kubernetes became the de facto orchestration layer, bringing standardization to architecture, formats, tools and more.
While hardware and operating systems were always instrumented for metrics and control, they provided proprietary configuration mechanisms. Over time, application virtual machines such as the JVM provided structured instrumentation frameworks like JMX, which were adopted, albeit slowly.
The advent of open, standard orchestration has led to a similar trend. Interest in, and the emergence of, standards for emitting metrics, alerts and distributed traces is now widespread.
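Distributed tracing is a good illustration: the W3C Trace Context standard defines a `traceparent` header that every service propagates, so any vendor's tooling can stitch spans into one trace. A minimal sketch of splitting that header into its fields, using only the standard library:

```python
# Minimal sketch: parsing a W3C Trace Context "traceparent" header,
# the standard carrier of trace identity between microservices.
# Format: version-traceid-parentid-flags (all lowercase hex).

def parse_traceparent(header):
    """Split a traceparent header into its four standard fields."""
    parts = header.split("-")
    if len(parts) != 4:
        raise ValueError("malformed traceparent header")
    version, trace_id, parent_id, flags = parts
    if len(trace_id) != 32 or len(parent_id) != 16:
        raise ValueError("bad trace-id or parent-id length")
    return {
        "version": version,
        "trace_id": trace_id,    # shared by every span in the request
        "parent_id": parent_id,  # the span that made this call
        "sampled": int(flags, 16) & 0x01 == 1,  # sampling decision bit
    }

ctx = parse_traceparent(
    "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
)
```

Because the header format is standardized, services instrumented with different libraries can still participate in the same end-to-end trace.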
Legacy application environments had a smaller number of thicker components. Topology was simple, and resolving incidents involved looking in depth, and often intrusively, into the components.
Today, there are a much larger number of thinner components (i.e., microservices and containers). While each component is simpler, the network effect of multitudes of dynamic, interacting components creates challenges for operations.
Addressing the scale and complexity creates a need for autonomous and self-healing systems, such as Kubernetes Operators and if-this-then-that (ITTT) tools. Automation requires trusted monitoring systems, and open access to that data.
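At the heart of a Kubernetes Operator is a reconcile loop: compare the desired state against the observed state reported by monitoring, and emit the actions that close the gap. A minimal sketch, with hypothetical state maps standing in for the API server and metrics data:

```python
# Minimal sketch of an Operator-style reconcile step: diff desired vs.
# observed replica counts and return the corrective actions. The state
# dictionaries are hypothetical stand-ins for live cluster data.

def reconcile(desired, observed):
    """Return the actions needed to drive observed state to desired."""
    actions = []
    for name, want in desired.items():
        have = observed.get(name, 0)
        if have < want:
            actions.append(("scale_up", name, want - have))
        elif have > want:
            actions.append(("scale_down", name, have - want))
    return actions

# One pass of the loop: "web" is under-replicated, "worker" is healthy.
actions = reconcile({"web": 3, "worker": 2}, {"web": 1, "worker": 2})
```

This is why trusted monitoring matters: the loop is only as safe as the observed-state data it acts on.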
The adoption of open source components for building applications, and of standards in instrumentation, has led to the adoption of open source tools for monitoring. Cloud native applications added to the impetus, and a forum for standardization was created in the Cloud Native Computing Foundation (CNCF). OpsCruise leverages and contributes to the CNCF open source monitoring tools increasingly deployed in modern environments.
High quality open source and cloud provider tools now exist for each of these information types: metrics, logs, and flows and traces.