One of the prevailing trends in IT today is distributed architectures. More likely than not, your file systems, virtual server instances, and applications are spread across a variety of host nodes or environments.
From a scalability and reliability standpoint, this is a great thing. Distributed architectures help to meet fluctuating demand for services, while also decreasing downtime.
But distributed architectures don’t make life easier in all ways. From a security standpoint, distributing software and infrastructure across large environments can make things considerably trickier.
If you plan to take advantage of modern, distributed architectures, understanding and addressing the special security challenges that they create is crucial. Here’s how.
What is a distributed architecture?
By “distributed architecture,” I mean any type of infrastructure or software application that is composed of, or hosted on, multiple decentralized components. That distinguishes distributed architectures from conventional ones, in which each environment tended to be centralized on a single host node.
Software-defined file systems that spread data across a cluster of different disks and servers are one example of a distributed architecture. Virtual servers running in public or private cloud environments that are composed of multiple underlying physical servers are another example. And a microservices-based application that is deployed using containers spread across multiple host servers is a third.
As noted above, distributed architectures offer many advantages. When properly designed and implemented, they remove the risk of having a single point of failure. They also make it easier to scale up or down because you can add or remove components as needed without having to restart the entire environment.
Why distributed architectures make IT security hard
From an IT security perspective, things can get complicated quickly when you are dealing with a distributed architecture, as compared to a traditional one.
More moving pieces
The simplest and most obvious challenge is that, in a distributed environment, you have more moving pieces. And not only that, but the pieces tend to move more quickly.
What I mean is that there are more components to worry about. Instead of having a single host server, for example, you have multiple ones. And instead of only having to secure one application service, you might have dozens of microservices. Plus, all of your infrastructure components and services tend to move around quickly as traffic fluctuates or configurations get auto-updated.
This means that security teams have a lot more to monitor and analyze. It’s kind of like the difference between having to secure a small house that has just one exterior door and a few windows, as opposed to a sprawling apartment complex with dozens of entrances and people coming and going all the time.
No “normal”
A second key challenge of distributed architectures is that their configurations tend to change quickly and automatically as orchestration tools move components around in response to fluctuations in traffic and other considerations.
As a result, there is no such thing as “normal.” You can’t establish a fixed baseline and use it to determine which types of activity are standard and which might signal a security problem.
Lack of perimeters
The fact that distributed environments are spread across a wide area, and that they tend to grow and shrink naturally in response to demand, means that they lack clear and fixed perimeters.
For that reason, you can’t simply firewall off your environment and assume that it is safe. Nor can you set strict limits on where a service is allowed to run and where it isn’t.
Fluctuating resource consumption
In the days before distributed architectures became common, a sudden peak in resource consumption by a given service or application was often a sure sign that something was wrong. Whether it was a security breach or a different type of technical problem, your admins knew that they had to look into it.
In distributed, dynamic environments, however, resource consumption might spike for legitimate reasons. You can’t always assume that a sudden increase in the amount of CPU that a microservice is consuming is a sign of a problem, for example. Nor can you set strict limits on resource consumption in an effort to prevent abuse. If you do, you may end up denying resources to services that legitimately need them.
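To make that concrete, here is a minimal Python sketch of why a fixed resource limit misfires in a dynamic environment, while judging a sample against the service’s own recent history does not. The metric values and the 80 percent cap are made-up illustrations, not numbers from any particular monitoring tool.

```python
# Sketch: a fixed CPU threshold vs. a context-relative check.
# All values below are hypothetical.

from statistics import mean, stdev

FIXED_CAP = 80.0  # percent CPU; a static limit an admin might have set


def fixed_limit_alert(cpu_percent: float) -> bool:
    """Old-style check: anything above the cap is treated as suspicious."""
    return cpu_percent > FIXED_CAP


def context_alert(cpu_percent: float, recent_samples: list[float]) -> bool:
    """Flag a sample only if it sits far outside the service's own recent behavior."""
    if len(recent_samples) < 10:
        return False  # not enough history to judge
    mu, sigma = mean(recent_samples), stdev(recent_samples)
    return cpu_percent > mu + 3 * max(sigma, 1.0)


# A service that normally runs hot: 82% is routine, but the fixed cap fires anyway.
busy_history = [70, 74, 78, 72, 76, 75, 79, 73, 77, 74]
print(fixed_limit_alert(82.0))              # True  -> false positive
print(context_alert(82.0, busy_history))    # False -> no alert

# A quiet service suddenly jumping to 60% is far more interesting,
# but the fixed cap never notices it.
quiet_history = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]
print(fixed_limit_alert(60.0))              # False -> missed
print(context_alert(60.0, quiet_history))   # True  -> flagged
```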
How to secure distributed environments
So, what’s a forward-thinking IT team to do in the face of the security challenges described above? Avoiding distributed architectures is not the answer; if you do that, you deprive yourself of the considerable advantages these architectures confer.
Instead, your organization needs to implement strategies that allow you to take advantage of distributed architectures while meeting their security needs. Such strategies should include:
- An obsession with automation. Automation not only helps you orchestrate and manage distributed environments efficiently; it is also essential for gleaning meaningful security insights from the complexity of those environments. No matter how brilliant your security engineers are, you can’t expect them to make sense of all the fast-changing data that your distributed architectures spit out if they don’t automate their security analytics.
- Dynamic baselining. Similarly, your security tools and processes need to be founded on the concept of dynamic baselining: continuously and automatically reassessing what constitutes normal, legitimate behavior and what represents an anomaly that should be investigated. (A minimal sketch of this idea follows the list.)
- Dynamic firewall configurations. Your firewall rules also need to be configured intelligently and updated automatically, so that they keep pace with your distributed environment while still helping to keep out intruders. (A rough rule-generation sketch also appears after the list.)
- Integration. In complex, dynamic, distributed environments, security can’t exist in a silo. It needs to be baked into every stage of the process that is used to deliver and manage applications for those environments. Your developers must embrace security best practices when designing and writing code, your test engineers must understand security issues when testing it, and your Ops engineers must keep security considerations at the fore of their minds when deploying and redeploying services.
- Multi-layered security. Your security strategy also needs to incorporate multiple, complementary layers of defense. It’s not enough to rely on runtime security alone, or only on static vulnerability analysis or firewalls.
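To illustrate the dynamic baselining idea, here is a minimal Python sketch. The alpha, cutoff, and warmup values, the service name, and the metric stream are illustrative assumptions rather than settings from any particular tool; the point is simply that the baseline is re-learned continuously instead of being fixed once.

```python
# Sketch of dynamic baselining: each service's notion of "normal" is
# re-learned continuously with an exponentially weighted moving average,
# so the baseline drifts along with legitimate changes in behavior.

from dataclasses import dataclass
import math


@dataclass
class Baseline:
    mean: float = 0.0
    var: float = 1.0
    samples: int = 0


class DynamicBaseliner:
    def __init__(self, alpha: float = 0.05, cutoff: float = 4.0, warmup: int = 30):
        self.alpha = alpha      # how quickly the baseline adapts
        self.cutoff = cutoff    # z-score beyond which we raise an alert
        self.warmup = warmup    # samples to observe before judging anything
        self.baselines: dict[str, Baseline] = {}

    def observe(self, service: str, value: float) -> bool:
        """Feed one metric sample; return True if it looks anomalous."""
        b = self.baselines.setdefault(service, Baseline(mean=value))
        deviation = value - b.mean
        anomalous = False
        if b.samples >= self.warmup:
            z = abs(deviation) / max(math.sqrt(b.var), 1e-6)
            anomalous = z > self.cutoff
        # Update the baseline afterwards, so a single outlier doesn't
        # instantly become the new normal, but sustained change eventually does.
        b.mean += self.alpha * deviation
        b.var = (1 - self.alpha) * (b.var + self.alpha * deviation ** 2)
        b.samples += 1
        return anomalous


# Usage: stream per-service request rates (hypothetical values) through it.
detector = DynamicBaseliner()
for i in range(200):
    detector.observe("checkout-svc", 100 + (i % 5))   # steady traffic
print(detector.observe("checkout-svc", 400))           # sudden jump -> True
```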
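And as a rough illustration of dynamically generated firewall rules, the sketch below derives an allow-list from a hypothetical service inventory. In a real environment the inventory would come from your orchestrator or service registry, and the output might target a cloud security-group API rather than iptables; the point is that the rules are regenerated from live topology instead of being maintained by hand.

```python
# Sketch: regenerate firewall allow-rules from a (made-up) service inventory
# every time the orchestrator moves or scales a service.

from dataclasses import dataclass


@dataclass(frozen=True)
class ServiceEndpoint:
    name: str
    ip: str
    port: int


def current_inventory() -> list[ServiceEndpoint]:
    # Placeholder: a real implementation would query the scheduler or registry.
    return [
        ServiceEndpoint("checkout-svc", "10.0.1.12", 8443),
        ServiceEndpoint("inventory-svc", "10.0.2.7", 9000),
    ]


def render_rules(endpoints: list[ServiceEndpoint]) -> list[str]:
    """Build an allow-list of iptables commands for the current topology."""
    # Assumes a dedicated APP-ALLOW chain was created once (iptables -N APP-ALLOW)
    # and is jumped to from INPUT; we only regenerate its contents here.
    rules = ["iptables -F APP-ALLOW"]  # start from a clean chain
    for ep in endpoints:
        rules.append(
            f"iptables -A APP-ALLOW -p tcp -s {ep.ip} --dport {ep.port} -j ACCEPT"
            f"  # {ep.name}"
        )
    rules.append("iptables -A APP-ALLOW -j DROP")  # everything else is denied
    return rules


if __name__ == "__main__":
    # Re-run this on every scheduling event (or on a short timer) so the
    # ruleset tracks the environment instead of lagging behind it.
    for rule in render_rules(current_inventory()):
        print(rule)
```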