Network engineering has transformed from a domain of granular, component-level expertise into one that takes a more systemic approach, prioritizing overall system performance and the customer experience. That shift has changed how network teams are set up, with a greater emphasis on systems engineering thinking and less on component-level specialization.

When enterprises today are looking for the skill set to do this tradecraft, they’re looking for more of a systems engineer: someone responsible for the whole operation and all its moving parts, ensuring it produces the outcome the organization needs.

Network observability used to be “the domain of the hedgehog,” said Josh Mayfield, senior director of product marketing at network observability provider Kentik, “and now it’s the domain of the fox. Watching for minute jittering behavior of a protocol in a particular setting has been displaced.”


Mayfield noted that some seven to 10 years ago, in an organization’s network engineering and architecture group of 20 people, 18 of them would have been hedgehogs: “the super granular expert at the component level,” he said. Today, it has flipped, and he estimated the group is now five hedgehogs and 15 foxes. “What’s the makeup of the individual? What’s their skill set? It’s several layers up in more systems understanding … more of a fox,” he said. And that’s because the mandate is to offer an outstanding customer experience, which he said requires more systems thinking than component-level thinking.

When it comes to systems thinking, organizations are addressing goals such as reaching network self-healing capability within the next 24 months. That’s a big lift, and it has little to do with diagnosing why a single application isn’t performing. Mayfield said he has heard repeatedly over the last six months that teams have cascading goals tied to sustainability targets. “So, what are you doing in the network to optimize energy consumption, to optimize efficiency there?”
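
As a loose illustration of what optimizing energy and efficiency in the network can look like in practice (a hedged sketch under assumed data, not a description of Kentik’s product or any specific team’s approach), the snippet below flags chronically underutilized links as candidates for consolidation or power-down. The threshold, link names and utilization figures are invented for the example.

```python
# Illustrative sketch only: flag links whose average utilization over a
# reporting window falls below a threshold, as consolidation candidates.
# All names, numbers and the 5% threshold are assumptions for the example.

UTILIZATION_THRESHOLD = 0.05  # flag links averaging under 5% utilization

link_utilization = {          # link name -> average utilization over the window
    "core1:eth1/1": 0.62,
    "core1:eth1/2": 0.03,
    "edge4:eth0/3": 0.01,
}

candidates = [link for link, util in link_utilization.items()
              if util < UTILIZATION_THRESHOLD]

print("Consolidation candidates:", candidates)
```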

Mayfield described that as more like “forward panic” than responding to outages or downtime. And forward panic, he said, is a good thing in that network teams are now being asked to anticipate what might happen if certain goals and benchmarks aren’t hit. So network observability, he added, is “certainly not sitting out in isolation anymore. And that’s indicative of the systems aspect to it; it touches so many more stakeholders in the organization.”

What organizations are watching regarding network performance has evolved from the days of single data centers. Networks have become more compositional, and even in on-premises infrastructures, today’s data centers look different from organization to organization. Some are running LLMs and AI workloads, and that will look different in terms of flow, compute and storage. And that’s before architectural concerns are even looked at. “It’s almost like we’ve discovered that there are other galaxies than just the Milky Way and that we’re actually floating around in a much bigger space,” Mayfield said. “And then you get to the cloud. Who really uses one cloud? Anyone? No, everyone’s multicloud. And so you really are floating around in cloudy space, on-prem data centers, and … within the whole multiverse of the internet.” 

And that, he said, is how the tradecraft of network observability has evolved. “You take [Border Gateway Protocol] monitoring and reporting and understanding exactly how I am interconnecting with the rest of everything else, and doing so in an efficient way,” Mayfield summed up.
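
To make that concrete with a generic sketch (not Kentik’s tooling or the speaker’s exact workflow), one basic form of BGP-level awareness is simply noticing when the AS path that peers report for your prefixes changes, which is often the first sign that your interconnection with everything else has shifted. The record format, prefixes and AS numbers below are assumptions made up for the example.

```python
# Illustrative sketch only: detect when the AS path observed for our
# prefixes changes at a given BGP peer. Record format and values are
# assumptions for the example, not any vendor's schema.

from collections import defaultdict

MY_PREFIXES = {"203.0.113.0/24", "198.51.100.0/24"}  # hypothetical prefixes we originate

def track_path_changes(updates):
    """Yield (prefix, peer_asn, old_path, new_path) whenever the AS path
    observed for one of our prefixes changes at a given peer."""
    last_path = defaultdict(dict)  # prefix -> {peer_asn: as_path tuple}
    for u in updates:
        if u["type"] != "announce" or u["prefix"] not in MY_PREFIXES:
            continue
        peer, path = u["peer_asn"], tuple(u["as_path"])
        old = last_path[u["prefix"]].get(peer)
        if old is not None and old != path:
            yield u["prefix"], peer, old, path
        last_path[u["prefix"]][peer] = path

# Toy stream: the second announcement shows the route shifting upstream.
sample = [
    {"type": "announce", "prefix": "203.0.113.0/24", "peer_asn": 6447,
     "as_path": [6447, 174, 64512]},
    {"type": "announce", "prefix": "203.0.113.0/24", "peer_asn": 6447,
     "as_path": [6447, 3356, 64512]},
]

for prefix, peer, old, new in track_path_changes(sample):
    print(f"{prefix}: path at peer AS{peer} changed {old} -> {new}")
```

In practice this kind of signal would come from route collectors or a vendor’s BGP telemetry rather than hand-built records; the point is only that path changes are a tractable, watchable artifact of how a network interconnects with the rest of the internet.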

With all this complexity, it becomes necessary to have an experienced, trusted source to provide the insight. The whole point of observability, Mayfield said, is to take all this data that a machine would understand and make it meaningful to a human, and it has to be done correctly.
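
As a small illustration of that translation step (a hedged sketch, not any particular product’s pipeline), the snippet below rolls machine-level flow records up into a plain-language “top talkers” summary; the field names and byte counts are invented for the example.

```python
# Illustrative sketch only: summarize raw flow records into a short,
# human-readable "top talkers" report. Field names and sample values
# are assumptions for the example.

from collections import Counter

flows = [
    {"src": "10.1.4.7",  "dst": "10.2.0.12", "bytes": 8_400_000},
    {"src": "10.1.4.7",  "dst": "10.2.0.99", "bytes": 1_200_000},
    {"src": "10.3.9.20", "dst": "10.2.0.12", "bytes": 650_000},
]

bytes_by_src = Counter()
for f in flows:
    bytes_by_src[f["src"]] += f["bytes"]

print("Top talkers (last interval):")
for src, total in bytes_by_src.most_common(3):
    print(f"  {src}: {total / 1_000_000:.1f} MB sent")
```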

In choosing a vendor, Mayfield suggested looking for one that breathes network observability. “You can’t just take Splunk and feed it all this data on network and traffic and flow and vision and make sense of it. It’s all meaningless. You have to somehow then train that to translate all the machine information.”

Any number of internal and external factors can affect how networks perform. Internally, he said, it comes down to “best intentions gone sideways, where we thought this production push was going to work just fine, and we thought that everything was all right with that container load, and that the flows were correctly calibrated for the pod… and so on and so on.” Or perhaps sustainability is a forward-looking mission, or the organization wants to do real AIOps, which is a big strategic mission. Or perhaps the organization needs to spin out networks to business units that will break off as individual companies and, by virtue of regulation, need their own network utilities and shared services: a kaleidoscope of compositions the network team has to support, many of which are constantly changing.

Many outside factors can affect the network, including geography as well as human decisions and policies, Mayfield said. “Occasionally, a subsea cable goes awry, which is pretty rare, or it’s cut by militant activity and causes the internet to change its flows and motions, and that really comes down to human decision-making,” he said. “For example, if you’re a North American energy company, some of the supply chains you had with Russia and its geosphere were disrupted a couple of years ago. So creating new supply chain partners changes what the network team needs to create, support and maintain.”