My thoughts about network virtualization
In project decisions, security is one of the six main qualifiers (Availability, Manageability, Recoverability, ...) that can change a project radically, from the architecture to the cost perspective. Regulations, data breaches, and possible workload vulnerabilities are the main things that a sysadmin must fight against every day (nights included!!!).
Security gaps cost companies millions of €/$, and many regulations now use a new term to redefine the approach to cyber defense: “Security by Design”.
But the real question isn’t how much to spend on people and/or technology to best control the infrastructure; it’s what the security requirements are and which technology can address them in a virtual environment.
The problem lies between efficiency and security
A simple flat layer 2 network means the largest possible attack surface, and a single physical firewall at the edge is not enough to prevent DoS attacks. Starting from this point, in my experience I have seen many design mistakes. This is because people shift their focus to the end of the project, forgetting infrastructure security. This was already an error in the physical world, and it deserves ten times the attention in the virtual world (more servers and more attack surfaces than in the physical world).
For this reason a single firewall may not be enough, and in some cases it can even become a single point of failure for the virtual datacenter.
Another dangerous problem is data breaches: it is no secret that big social platforms (like LinkedIn) are prime places to steal information from, causing serious financial problems for these companies.
Last but not least, data alteration, defacement, and data destruction are principal causes of long unplanned downtime, long restore periods, and data loss... All of these issues translate into money, money, and more money out of the company’s balance sheet.
Today’s market challenge is delivering security in the same way, and at the same time, as we deliver a virtual machine in a virtual datacenter, and this can be addressed by implementing a software-defined network instead of old (and vintage) physical networking elements like firewalls and routers.
So let’s see where the real security challenges are, what is changing in the networking scenario, and how micro-segmentation could be a new way to think about application security.
Changing your mind…
One of the first things that a system designer must do during cost and time estimation is to treat security as an equal of every other design element: apply true security by design.
Starting from the data, the needs around it are:
- how to keep it
- how to recover it
- how to secure it
In many design approaches, before laying out host and storage scenarios, the system designer can start talking about RPO, RTO, MTBF, and TTR, placing backup and recovery elements and developing the design around these factors.
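To see how these factors drive the numbers, here is a minimal sketch (with purely hypothetical figures) of how availability follows from MTBF and TTR, and how RPO and RTO constrain the backup interval and the restore time:

```python
# Sketch: deriving availability and recovery targets from MTBF/TTR and RPO/RTO.
# All figures are hypothetical examples, not recommendations.

MTBF_HOURS = 8760.0  # mean time between failures (e.g. one failure per year)
TTR_HOURS = 4.0      # time to repair/restore after a failure

# Classic steady-state availability formula.
availability = MTBF_HOURS / (MTBF_HOURS + TTR_HOURS)
print(f"Expected availability: {availability:.4%}")

# RPO bounds the maximum acceptable data loss, so the backup (or replication)
# interval must never exceed it.
RPO_HOURS = 1.0              # maximum tolerable data loss
BACKUP_INTERVAL_HOURS = 0.5  # how often we actually take a restore point
assert BACKUP_INTERVAL_HOURS <= RPO_HOURS, "backup interval violates the RPO"

# RTO bounds how long a restore may take: the expected TTR must fit inside it.
RTO_HOURS = 8.0
assert TTR_HOURS <= RTO_HOURS, "expected restore time violates the RTO"
```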
The next step could be analyzing how to recover from a total data loss, but the best design is one that reduces the probability of data loss happening in the first place. These are matters for storage and network engineers. To determine the networking solution that best fits design concepts like availability, recoverability, and security, it is important to analyze the classic application functional diagram.
Then check and establish the right security zones based on the attack surface of the systems placed in them. Now it’s time to draw the networks and the networking elements like switches, routers, and firewalls. In the prehistoric networking vision there was only one way to address security: a single firewall at the perimeter.
But by rethinking the application, we find that the best way to bring applications into a virtual datacenter is to create multiple network segments, which bring logical isolation and reduce the attack surface exposed by VM application vulnerabilities. This approach is called micro-segmentation.
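To make the idea concrete, here is a minimal sketch of micro-segmentation as a default-deny policy between segments, assuming a hypothetical three-tier application (the segment names and ports are illustrative only):

```python
# Sketch: modeling micro-segments for a hypothetical three-tier application.
# Anything not explicitly allowed between segments is denied by default.

SEGMENTS = {"web", "app", "db"}

# Whitelisted flows: (source segment, destination segment, TCP port).
ALLOWED_FLOWS = {
    ("web", "app", 8443),  # web tier talks to the app tier only
    ("app", "db", 5432),   # app tier talks to the database only
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Default-deny policy: a flow passes only if explicitly whitelisted."""
    return (src, dst, port) in ALLOWED_FLOWS

# A compromised web VM cannot reach the database directly...
assert not is_allowed("web", "db", 5432)
# ...while the legitimate application path still works.
assert is_allowed("app", "db", 5432)
```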
Micro-segmentation
This could be called “how to reduce the attack surface” across a large network area. With VXLAN and the Distributed Firewall (DFW), security governance and performance have changed their rules. The ability to “virtualize” the networking is the key concept: NSX can take more efficient control of every packet transmitted to and from every virtual machine without compromising the application’s networking efficiency.
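For readers new to VXLAN: it encapsulates layer 2 frames in UDP between hypervisors and tags each one with a 24-bit segment ID (the VNI), so logical segments no longer depend on physical VLANs. A conceptual sketch of just that idea (not a real packet encoder):

```python
# Sketch: the core idea of VXLAN encapsulation (conceptual, not a packet codec).
# A 24-bit VNI identifies the logical segment, so ~16 million segments are
# possible versus 4094 traditional VLANs.

MAX_VNI = 2**24 - 1  # 24-bit VXLAN Network Identifier

def encapsulate(l2_frame: bytes, vni: int) -> dict:
    """Wrap an L2 frame with its segment ID; the real protocol carries this
    inside a UDP/IP packet between hypervisor endpoints (VTEPs)."""
    if not 0 <= vni <= MAX_VNI:
        raise ValueError("VNI out of 24-bit range")
    return {"vni": vni, "payload": l2_frame}

# Two VMs on the same hypervisor can sit on different logical segments:
web_segment = encapsulate(b"..frame..", vni=5001)
db_segment = encapsulate(b"..frame..", vni=5002)
assert web_segment["vni"] != db_segment["vni"]
```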
The real “power” doesn’t lie in a single (or dual) virtual firewall component, but in a synergic work model where all hypervisors take part in traffic analysis and all security rules are centrally handled through a single pane of glass (or a self-service portal); see the sketch after the list below.
The results:
- Only legitimate traffic crosses the physical network
- Rules are centrally managed
- Performance can scale simply by adding a vSphere node
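Here is a toy model of that work model (hypothetical names, not the real NSX API): rules live in one central place and every hypervisor enforces the same copy at its VMs’ vNICs, so enforcement capacity grows with the cluster:

```python
# Sketch: centrally managed rules, distributed enforcement (hypothetical model,
# not the NSX API). The management plane owns the rule set; each hypervisor
# gets a copy and filters its own VMs' traffic at the vNIC.

from dataclasses import dataclass, field

@dataclass
class Rule:
    src: str     # source security group
    dst: str     # destination security group
    port: int
    action: str  # "allow" or "deny"

@dataclass
class Hypervisor:
    name: str
    rules: list[Rule] = field(default_factory=list)

    def filter(self, src: str, dst: str, port: int) -> str:
        """Enforce the first matching rule at the vNIC; default deny."""
        for r in self.rules:
            if (r.src, r.dst, r.port) == (src, dst, port):
                return r.action
        return "deny"

class ManagementPlane:
    """Single pane of glass: define rules once, push them everywhere."""
    def __init__(self, hosts: list[Hypervisor]):
        self.hosts = hosts
        self.rules: list[Rule] = []

    def add_rule(self, rule: Rule) -> None:
        self.rules.append(rule)
        for host in self.hosts:  # every hypervisor enforces the same policy
            host.rules = list(self.rules)

# Adding a host scales enforcement without touching the policy:
hosts = [Hypervisor("esx-01"), Hypervisor("esx-02")]
mgmt = ManagementPlane(hosts)
mgmt.add_rule(Rule("app", "db", 5432, "allow"))
assert hosts[1].filter("app", "db", 5432) == "allow"
assert hosts[0].filter("web", "db", 5432) == "deny"  # blocked at the source vNIC
```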
Security roles and governance
One of the fears in a structured company is: who handles network security? There are several ways to transform IT and ICT and steer a company toward the right view of application deployment:
- NSX has a control panel that can sit outside the vSphere environment, so firewall rules and routing can be handled by the networking staff without any interaction with the VMs. I don’t prefer this way! IMHO the virtualization staff must touch every aspect of the workload, from computing to security.
- Network engineers could keep handling the physical world and simplify their day-by-day operations, delegating virtual network and security matters to virtualization engineers.
- Some network engineers could become virtualization engineers, keeping their specialization in datacenter network troubleshooting from the physical to the virtual world.
Remember: IT and ICT are big enough to keep a role for you! Don’t be afraid of change, but keep your knowledge up to date.