After reading this article, I had a few dissenting thoughts; maybe someone can offer their perspective?
The article suggests not running critical workloads virtually, based on a failure scenario of the hosting environment (such as ransomware on the hypervisor).
That does invite the ‘all your eggs in one basket’ argument, so I agree that running at least one instance of a service on physical hardware could be justified, but threat actors will try to time their attacks against both if possible. Adding complexity cuts both ways here.
I don’t really agree with the comments about not patching, however. The premise that a physical workload or instance would be patched or updated more often than a virtual one seems unfounded. Hesitance to patch systems is more about weighing uptime against downtime, breakage, and risk, in my opinion.
Is your organization running critical workloads virtualized like everything else, a combination of physical and virtual, or a combination of all of the above plus off-prem cloud solutions?
That article is SO wrong. You don’t run one instance of a tier-1 application. The instances sit in separate data centers, on separate networks, and the firewall rules allow only application traffic. Management (RDP/SSH) happens from another network, through bastion servers.
At the very least you have daily/monthly/yearly (yes, yearly) backups, and you take snapshots before patching and app upgrades. Or you move to containers, with bare hypervisors deployed in minutes via netinstall and configured via Ansible. Got infected? Too bad: reinstall and redeploy. There will be downtime, but nothing horrible. Databases and storage are another matter, of course, but that’s why you have synchronous and asynchronous replicas, read-only replicas, offsites, etc.
But for the love of whatever you hold dear, don’t run stuff on bare metal because “what if the hypervisor gets infected”. Consider the attack vector and work around that.
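As an illustration of the snapshot-before-patching step, here’s a minimal sketch assuming libvirt/virsh on the hypervisor; the VM name and the wrapper function are hypothetical, not from the article:

```python
import datetime
import subprocess

def snapshot_before_patch(domain: str) -> str:
    """Take a libvirt snapshot of a VM before patching, so a bad
    patch can be rolled back with `virsh snapshot-revert`."""
    name = f"pre-patch-{datetime.date.today().isoformat()}"
    # snapshot-create-as takes the domain, a snapshot name, and an
    # optional positional description.
    subprocess.run(
        ["virsh", "snapshot-create-as", domain, name,
         "automatic pre-patch snapshot"],
        check=True,
    )
    return name

# Hypothetical usage before rolling out updates:
# snapshot_before_patch("tier1-app-01")
```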
Most organizations will avoid patching due to the downtime alone, instead using other mitigations to avoid exploitation.
If you can’t patch because of downtime, maybe you are cheaping out too much on redundancy?
If the virtual one borks, spin it back up. That’s a plus.
Some workloads should run at least one instance on bare metal, like domain controllers.
It’s not a one-size-fits-all.
If the hypervisor or any of its components are exposed to the Internet
Lemme stop you right there, wtf are you doing exposing that to the internet…
(This is directed at the article writer, not OP)
Well. Misconfiguration happens, and sadly, quite often.
Heh, whatever you do, don’t do what everybody in the world has been doing successfully for the past 20 years.
I work for a newspaper. It has been published without fail every single day since 1945 (when my country was still basically just rubble, deservedly).
So even when all our systems are encrypted by ransomware, the newspaper MUST BE ABLE TO BE PRINTED as a matter of principle.
We run all our systems virtualized, because everything else would be unmaintainable and it’s a 24/7 operation. But we also have a copy of the most essential systems running on bare metal, completely air-gapped from everything else and from the internet.
Even I, as the admin, can’t access them remotely in any way; if I want to, I have to walk over to another building. In case of a ransomware attack, the core team meets in a room with internal-only wifi and is given emergency laptops from storage with our software preinstalled. They produce the files for the paper, save them on a USB stick, and deliver that to the printing press.
save them on a USB stick
…which is also kept with the air-gapped system and tossed once used, I assume…
Seems like your org has taken resilience and response planning seriously. I like it.
How do you keep the air-gapped system in sync?
We don’t. It’s a separate, simplified system that only lets the core team members access the layout, editing, and typesetting software that is locally installed on the bare-metal servers.
In emergency mode, they get written articles and images from the reporters via otherwise unused, remotely hosted email addresses, and as a second backup, Signal.
They build the pages from that, send them to the printers, and the paper is printed old-school using photographic plates.
Almost everything everywhere is virtual these days, even when the host hardware is single-tenant. Companies running hosted applications on bare metal are rare. I run personal stuff that way because Proxmox was too much hassle, but a more serious user would have just dealt with it.
“Don’t use virtualization”, says exec whose product doesn’t run on virtualization
If we boil this article down to its most basic point, it actually has nothing to do with virtualization. The true issue here is centralized infra/application management. The article references two ESXi CVEs that deal with compromised management interfaces.
Imagine a scenario where we avoid virtualization by running Kubernetes on bare-metal nodes, with each Pod assigned exclusively to a Node. If a threat actor can reach the Kubernetes management interface and exploit a vulnerability in it, they can immediately compromise everything within that cluster. We don’t even need a container management platform: imagine a collection of bare-metal nodes managed by Ansible via Ansible Automation Platform (AAP). If a threat actor has access to AAP and exploits it, they can compromise everything managed by that AAP instance.
The author fundamentally misattributes the issue to virtualization. The issue is centralized management, and there are significant benefits to using higher-order centralized management solutions.
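To make the blast radius concrete, here’s a minimal sketch assuming the official `kubernetes` Python client and a kubeconfig with cluster-wide read access; it shows that one management credential reaches every workload, whether or not the nodes are bare metal:

```python
# Minimal sketch: a single leaked kubeconfig is enough to
# enumerate every workload in the cluster, regardless of which
# (bare-metal) node each Pod is pinned to.
from kubernetes import client, config

config.load_kube_config()  # one management credential
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.spec.node_name)
```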
Agreed.
Don’t we all use centralized management because there is cost and risk involved when we don’t?
More management complexity, missed systems, etc.
So we’re balancing risk against operational cost.
It makes sense to swap in container or automation solutions for virtualization in this discussion.



