AMD and researchers spar over shocking attack's real-world dangers
Hypothetical threat, or concrete danger?

What you need to know
- Researchers have exposed a vulnerability in AMD SEV (Secure Encrypted Virtualization).
- In response, AMD has cast doubt on the real-world implications of the discovery, citing physical logistical hurdles for threat actors.
- The researchers have responded, disputing the existence of said hurdles.
In one of the more tech-savvy, inside-baseball bits of news to crop up recently, AMD and a group of researchers have begun something of a sparring match, going back and forth over whether a dangerous vulnerability in AMD SEV (Secure Encrypted Virtualization) has just been exposed or whether the findings amount to nothing more than inconsequential hypotheticals.
Here's the idea behind SEV (based on how AMD is positioning it): it keeps a virtual machine's memory encrypted in the cloud so that even a compromised hypervisor or a rogue administrator can't read or tamper with it. However, in a research paper entitled "One Glitch to Rule Them All: Fault Injection Attacks Against AMD's Secure Encrypted Virtualization," researchers shine a spotlight on how SEV can be compromised (via The Register).
"By manipulating the input voltage to AMD systems on a chip (SoCs), we induce an error in the read-only memory (ROM) bootloader of the AMD-SP, allowing us to gain full control over this root-of-trust," the paper says. "This type of attack is commonly referred to as voltage fault injection attacks."
AMD replied that this is not a remote attack scenario, casting doubt on the real-world utility of the attack. The researchers, however, pushed back. Speaking to TechRadar Pro, Robert Buhren, one of the paper's authors, pointed out that "no physical tampering with machines in the data center is required" and that the threat posed by a voltage fault injection attack is very much real.
Furthermore, Buhren highlighted that because the flaw sits in the AMD-SP's ROM bootloader rather than in updatable firmware, a firmware update can't fix it, making it even more dangerous. AMD has yet to publicly reply to the updated researcher response.
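To make the disagreement concrete, the researchers' argument reduces to a two-step flow: perform the glitch once on an Epyc chip the attacker physically owns to pull out its SEV-related key material, then reuse that key material against machines in a data center that are never physically touched. The sketch below only maps that logic; every function, name, and value in it is a hypothetical placeholder, not the published proof of concept.

```python
# Conceptual outline of the "glitch once, reuse remotely" scenario described
# by the researchers. Every identifier here is a hypothetical placeholder.

from dataclasses import dataclass


@dataclass
class ExtractedSecrets:
    """Chip-specific SEV key material pulled from a sacrificial CPU."""
    key_material: bytes


def glitch_and_extract(own_epyc_cpu: str) -> ExtractedSecrets:
    # Step 1 -- physical access, but only to hardware the attacker bought
    # themselves: glitch the AMD-SP's ROM bootloader, run custom code on the
    # security processor, and dump its key material. (Placeholder only.)
    return ExtractedSecrets(key_material=b"<extracted offline>")


def read_protected_vm_memory(secrets: ExtractedSecrets, datacenter_host: str) -> bytes:
    # Step 2 -- no physical tampering inside the data center: present the
    # extracted keys as if they belonged to a genuine SEV platform so that a
    # SEV-protected VM's memory can be obtained in decryptable form.
    # (Placeholder only.)
    return b"<plaintext guest memory>"


if __name__ == "__main__":
    secrets = glitch_and_extract(own_epyc_cpu="chip bought on the open market")
    print(read_protected_vm_memory(secrets, datacenter_host="victim-host.example"))
```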
-
Erm.... Don't voltage changes to the CPU require BIOS/UEFI access by design? No one in their right mind would be installing OC tools like Ryzen Master in a data centre, lol...
-
What you wrote was my first thought too. So I went to the links to see if the researchers provided a more likely scenario where this flaw could be exploited by bad actors. It looks like the initial voltage fault injection does require physical access to a CPU, possibly of the same production part # or possibly just any Ryzen series CPU (that's not entirely clear), but then further physical access is not required. I think this is the relevant text:

"Robert Buhren, one of the authors of the paper, contacted TechRadar Pro to dismiss AMD's supposition, and instead claims that the attacker needs to have physical access to any arbitrary Epyc CPU, and not necessarily to the CPU that executes the targeted virtual machines (VM)."

"'A malicious admin could buy the CPU somewhere and use the extracted keys on systems in the data-center. IMHO, this makes the attack much more dangerous as no physical tampering with machines in the data center is required,' Buhren told us."

… "The PoC shows how an attacker can use the keys from one AMD processor to extract a SEV-protected VM's memory inside a data center."

"He explains that their most recent glitching attack makes it possible to extract details from all three generations of Zen CPUs, in essence enabling the PoC [proof of concept] to work on all AMD processors that support SEV."

Perhaps AMD will have a response to this new line of reasoning as well, but I've not seen one from them yet.
-
Ah I see, thanks for the info. To be honest, if someone with malicious intent was working in a data centre, they wouldn't need to be faffing about with a compromised CPU; they'd already have access to the systems. At first glance, it's not as severe as it's being made out to be. Plus, if management are stupid enough not to secure the server rooms to authorised personnel, then they'd be running afoul of many laws and regs, leaving them and the data centre operator liable. So that leaves one plausible external scenario: compromised servers sold off the shelf. It's highly unlikely they would be deliberately sold, but if they were put into a supply chain by a malicious party... So this flaw does need to be patched.
-
TechFreak1, I believe you are more knowledgeable on this than I (my tech knowledge in security matters is too high level to fully understand these kinds of problems), so this is a question, not an argument: couldn't a bad actor buy one of the applicable CPUs, use this technique to get the codes, then use those remotely (even on a remote data center) to monitor contents of VM RAM? I think that was the researcher's assertion. If that's not possible, then which part do you believe requires direct physical access?
-
I need to have an in-depth look at the documents and delve further into the associated material, as right now anything I say would be mostly speculative. Putting physical access to CPUs aside, the questions I have are: how would a malicious admin extract the relevant keys in the first place? And, given that the researchers' rebuttal is that the extracted keys can be used against any Epyc system, where exactly does the voltage manipulation come into play? So, yeah, I need to look at more information.