Over the past four years, I’ve consolidated a representative list of network observability vendors, but had not yet considered any modeling-based solutions. That changed when Forward Networks and NetBrain requested inclusion in the network observability report.
These two vendors have built their products on top of network modeling technology, and both met the report’s table stakes, which qualified them for inclusion. In this, the fourth iteration of the report, including the two modeling-based vendors did not have a huge impact. Vendors have shifted around on the Radar chart, but generally speaking, the report is consistent with the third iteration.
However, these modeling solutions are a fresh take on observability, which is a category that has so far been evolving incrementally. While there have been occasional leaps forward, driven by the likes of ML and eBPF, there hasn’t been an overhaul of the whole solution.
I cannot foresee any future version of network observability that does not include some degree of modeling, so I’ve been thinking about the evolution of these technologies, the current vendor landscape, and whether modeling-based products will overtake non-modeling-based observability products.
Even though it’s still early days for modeling-based observability, I want to explore and validate these two ideas:
- It’s harder for observability-only tools to pivot into modeling than the other way around.
- Modeling products offer some distinct advantages.
Pivoting to Modeling
Modeling solutions are rooted in observability—specifically, in collecting information about the configuration and state of the network. With this information, these solutions create a digital twin, which can simulate traffic to understand how the network currently behaves or would behave under hypothetical conditions.
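To make the digital twin idea concrete, here is a toy sketch of the underlying pattern: build a model of the network from collected state, then run "what-if" queries against the model instead of the live network. This is illustrative only—the topology, node names, and the simple reachability check are my own assumptions, not any vendor's implementation, and real products model configs, routing tables, and ACLs, not just links.

```python
from collections import deque

# Toy "digital twin": an adjacency map built from collected topology state.
# (Hypothetical node names; real modeling products ingest device configs,
# routing state, and policy, not just physical links.)
topology = {
    "core1": {"core2", "dist1"},
    "core2": {"core1", "dist2"},
    "dist1": {"core1", "dist2", "access1"},
    "dist2": {"core2", "dist1", "access2"},
    "access1": {"dist1"},
    "access2": {"dist2"},
}

def reachable(topo, src, dst):
    """BFS reachability check over the modeled topology."""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nbr in topo.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return False

def what_if_link_down(topo, a, b):
    """Return a copy of the model with the a<->b link removed."""
    sim = {n: set(nbrs) for n, nbrs in topo.items()}
    sim[a].discard(b)
    sim[b].discard(a)
    return sim

# Current behavior: access1 can reach access2.
print(reachable(topology, "access1", "access2"))  # True

# Hypothetical condition: the dist1<->dist2 link fails.
degraded = what_if_link_down(topology, "dist1", "dist2")
print(reachable(degraded, "access1", "access2"))  # True, rerouted via the core
```

The key design point is that both queries run against the model, so the second one answers a question about a failure that never happened on the real network.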
Observability tools do not need to simulate traffic to do their job. They can report near-real-time network performance information, giving network operations center (NOC) analysts what they need to maintain performance levels. Observability tools can certainly incorporate modeling features (and some solutions already do), but the point here is that they don’t have to.
My understanding of today’s network modeling tools is that they cannot yet deliver the same set of features as network observability tools. This is to be expected, as many network observability tools have benefited from more than three decades of continuous development.
However, when looking at future developments, we need to consider that network modeling tools use proprietary algorithms, developed over many years and requiring a highly specific set of skills. I do not expect developers and engineers equipped with network modeling skills to be readily available in the job market, and these use cases are not as trendy as other topics. For example, AI developers are in demand too, but their supply will also increase steadily over the next few years as younger generations choose to specialize in the subject.
In contrast, modeling tools can tap into existing observability knowledge and mimic a very mature set of products to implement comparable features.
Modeling Advantages
In the vendor questionnaires, I’ve been asking these two questions for a few years:
- Can the tool correlate changes in network performance with configuration changes?
- Can the tool learn from the administrator’s decisions and remediation actions to autonomously solve similar incidents or propose resolutions?
The majority of network observability vendors don’t focus on these sorts of features. But the modeling solutions do, and they do so very well.
These questions are by no means exhaustive; I highlight them because I’ve been asking myself whether such features are out of scope for network observability tools. This is the first time since I started researching this space that the responses to these questions went from “we sort of do that” to “yes, this is our core strength.”
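As a rough sketch of what the first question asks for, the pattern is to join a config-change log against performance time series and flag changes that are followed by a degradation. Everything below is hypothetical—the event shapes, device names, thresholds, and the latency-jump heuristic are my own illustration, not how any vendor actually implements this.

```python
from datetime import datetime, timedelta

# Hypothetical inputs -- real tools pull config diffs and metrics from
# their own collectors; these shapes and names are illustrative only.
config_changes = [
    {"device": "edge1", "at": datetime(2024, 5, 1, 10, 0), "diff": "mtu 9000 -> 1500"},
]
latency_samples = [  # (device, timestamp, p95 latency in ms)
    ("edge1", datetime(2024, 5, 1, 9, 55), 12.0),
    ("edge1", datetime(2024, 5, 1, 10, 5), 48.0),
    ("edge1", datetime(2024, 5, 1, 10, 10), 51.0),
]

def correlate(changes, samples, window=timedelta(minutes=15), factor=2.0):
    """Flag config changes followed by a latency jump within `window`.

    A change is suspect if peak latency after it exceeds `factor` times
    the average latency in the window before it (a naive heuristic).
    """
    suspects = []
    for change in changes:
        before = [v for d, t, v in samples
                  if d == change["device"] and change["at"] - window <= t < change["at"]]
        after = [v for d, t, v in samples
                 if d == change["device"] and change["at"] <= t <= change["at"] + window]
        if before and after and max(after) > factor * (sum(before) / len(before)):
            suspects.append(change)
    return suspects

for change in correlate(config_changes, latency_samples):
    print(f'{change["device"]}: latency jump after "{change["diff"]}"')
```

The interesting part is not the heuristic but the join itself: a modeling tool already holds structured config history, which makes this correlation a natural query rather than a bolt-on feature.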
This leads me to think there is an extensive set of features benefiting NOC analysts that could be developed on top of the right underlying technology, and that technology may very well be network modeling.
Next Steps
Whether modeling tools can displace today’s observability tools is something that remains to be determined. I expect that the answer to this question will lie with the organizations whose business model heavily relies on network performance. If such an organization deploys both an observability and modeling tool, and increasingly favors modeling for observability tasks to the point where they decommission the observability tool, we’ll have a much clearer indication of the direction of the market.
To learn more, take a look at GigaOm’s network observability Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.
If you’re not yet a GigaOm subscriber, sign up here.