[Ed note: Most of the headlines for this story use the security vendor's description of this as a "backdoor", which is being called out as deliberate clickbait and hype given the physical access needed to load malicious code --hubie]
Undocumented commands found in Bluetooth chip used by a billion devices
The ubiquitous ESP32 microchip, made by Chinese manufacturer Espressif and shipped in over 1 billion units as of 2023, contains an undocumented "backdoor" that could be leveraged for attacks.
The undocumented commands allow spoofing of trusted devices, unauthorized data access, pivoting to other devices on the network, and potentially establishing long-term persistence.
This was discovered by Spanish researchers Miguel Tarascó Acuña and Antonio Vázquez Blanco of Tarlogic Security, who presented their findings yesterday at RootedCON in Madrid.
"Tarlogic Security has detected a backdoor in the ESP32, a microcontroller that enables WiFi and Bluetooth connection and is present in millions of mass-market IoT devices," reads a Tarlogic announcement shared with BleepingComputer.
"Exploitation of this backdoor would allow hostile actors to conduct impersonation attacks and permanently infect sensitive devices such as mobile phones, computers, smart locks or medical equipment by bypassing code audit controls."
The researchers warned that ESP32 is one of the world's most widely used chips for Wi-Fi + Bluetooth connectivity in IoT (Internet of Things) devices, so the risk of any backdoor in them is significant.
In their RootedCON presentation, the Tarlogic researchers explained that interest in Bluetooth security research has waned but not because the protocol or its implementation has become more secure.
Instead, most attacks presented last year didn't have working tools, didn't work with generic hardware, and used outdated/unmaintained tools largely incompatible with modern systems.
Tarlogic developed a new C-based USB Bluetooth driver that is hardware-independent and cross-platform, allowing direct access to the hardware without relying on OS-specific APIs.
Armed with this new tool, which enables raw access to Bluetooth traffic, Tarlogic discovered hidden vendor-specific commands (Opcode 0x3F) in the ESP32 Bluetooth firmware that allow low-level control over Bluetooth functions.
In total, they found 29 undocumented commands, collectively characterized as a "backdoor," that could be used for memory manipulation (read/write RAM and Flash), MAC address spoofing (device impersonation), and LMP/LLCP packet injection.
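For readers unfamiliar with how vendor-specific HCI commands are addressed: a Bluetooth HCI opcode packs a 6-bit Opcode Group Field (OGF) and a 10-bit Opcode Command Field (OCF), and OGF 0x3F is the group reserved for vendor-specific commands, which is the space the undocumented ESP32 commands occupy. A minimal sketch of how such a raw command packet is formed (the OCF value below is a made-up placeholder, not one of the 29 commands Tarlogic found):

```rust
// Sketch of HCI command packet layout. OGF 0x3F is the vendor-specific
// group mentioned in the article; the OCF used below is hypothetical.

/// Pack a 6-bit OGF and 10-bit OCF into the 16-bit HCI opcode.
fn hci_opcode(ogf: u16, ocf: u16) -> u16 {
    (ogf << 10) | (ocf & 0x03FF)
}

/// Build a raw HCI command packet: packet indicator 0x01, opcode
/// (little-endian), parameter length, then the parameters themselves.
fn hci_command_packet(ogf: u16, ocf: u16, params: &[u8]) -> Vec<u8> {
    let opcode = hci_opcode(ogf, ocf);
    let mut pkt = vec![
        0x01,                    // HCI command packet indicator
        (opcode & 0xFF) as u8,   // opcode, low byte first
        (opcode >> 8) as u8,
        params.len() as u8,      // parameter total length
    ];
    pkt.extend_from_slice(params);
    pkt
}

fn main() {
    // Hypothetical vendor command: OGF 0x3F, OCF 0x0001, no parameters.
    let pkt = hci_command_packet(0x3F, 0x0001, &[]);
    println!("{:02X?}", pkt);
}
```

This is only the addressing scheme; what made the Tarlogic findings notable is the payloads behind those vendor opcodes (RAM/Flash access, MAC spoofing, packet injection), not the packet framing itself.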
Espressif has not publicly documented these commands, so either they weren't meant to be accessible, or they were left in by mistake.
The risks arising from these commands include malicious implementations on the OEM level and supply chain attacks.
Depending on how Bluetooth stacks handle HCI commands on the device, remote exploitation of the backdoor might be possible via malicious firmware or rogue Bluetooth connections.
This is especially the case if an attacker already has root access, planted malware, or pushed a malicious update on the device that opens up low-level access.
In general, though, physical access to the device's USB or UART interface would be far riskier and a more realistic attack scenario.
"In a context where you can compromise an IoT device with an ESP32 you will be able to hide an APT inside the ESP memory and perform Bluetooth (or Wi-Fi) attacks against other devices, while controlling the device over Wi-Fi/Bluetooth," explained the researchers to BleepingComputer.
"Our findings would allow to fully take control over the ESP32 chips and to gain persistence in the chip via commands that allow for RAM and Flash modification."
"Also, with persistence in the chip, it may be possible to spread to other devices because the ESP32 allows for the execution of advanced Bluetooth attacks."
BleepingComputer has contacted Espressif for a statement on the researchers' findings, but a comment wasn't immediately available.
= https://www.documentcloud.org/documents/25554812-2025-rootedcon-bluetoothtools/
= https://reg.rootedcon.com/cfp/schedule/talk/5
= https://www.tarlogic.com/news/backdoor-esp32-chip-infect-ot-devices/
(Score: 2) by JoeMerchant on Tuesday March 11, @01:16PM (11 children)
Well, I still feel that if there are "more secure" chipsets to choose from, that should be very visible to the end consumers somehow, and the less secure options should not be being sold in the billions.
🌻🌻🌻 [google.com]
(Score: 2) by RamiK on Tuesday March 11, @02:37PM (10 children)
Transistor count and power consumption increase with memory protection, since you need to (at least) do a table lookup for every memory access. So it comes down to the use case: if it's a smart door lock, a camera, or anything that processes user input, then you probably need memory protection and beefy transport encryption. But there are plenty of use cases that come down to just reporting a sensor reading over the wifi/bluetooth that just don't justify the hardware. Like, does my wireless bluetooth mouse really need an MMU and AES256? Does this stupid body scale that (apparently) reports readings over LoRa need to have its stack go through formal verification? What about those soil capacitance (=moisture) modules using an attiny and talking over LoRa using the likes of the RFM95W? etc...
Now, what I WOULD endorse for such MCUs and use cases is the mandatory use of safe languages like Rust for at least the implementation of the network stack. But it will have to come top-to-bottom, with the manufacturers releasing their IDEs as Rust-first or even Rust-only. Cause there's no way in hell I'm going to hunt down crates for an I2C sensor and an http server when the C/C++ IDE and SDK are right there doing it all for me and all I have to do is glue it up with a dozen LoC and a couple of #includes.
compiling...
(Score: 2) by JoeMerchant on Tuesday March 11, @03:49PM (9 children)
> If it's a smart door lock, cameras or anything that processes user input, then you probably need memory protections and beefy transport encryption. But, there's plenty of use cases that come down to just reporting a sensor reading on the wifi/bluetooth that just don't justify the hardware.
I agree, and especially around the battery life considerations: if you don't need the security, why are you sucking additional power from the battery for security?
Still, the distinction should be more end-consumer visible. Good luck getting that to happen on Amazon, of course.
>Does this stupid body scale that (apparently) reports readings over LoRa needs to have their stack go through formal verification?
IMO, yes, at least far enough to demonstrate that the body scale cannot act as a potential backdoor into your other health and financial records.
>What about those soil capacitance (=moisture) modules using attiny and talking over LoRa using the likes of RFM95W
Similarly, yes - at least far enough to demonstrate that the only thing at risk in an attack is the data delivered by the device. Of course, this is more about the receiver architecture than the satellite devices, but nonetheless, it's a system, and I'm sure some developer somewhere thinks it's a clever idea to be able to push receiver firmware updates through the same interface the soil data comes in.
> the mandatory use of safe languages like Rust for at least the implementation of the network stack.
Show me the full network stack implemented in Rust without "unsafe exceptions" and I'll start warming to the position. Until then, it's a bunch of semantics with little actual difference between "scary" C++ using safe APIs and taking a hard look at any raw memory accesses, vs "safe" Rust with piles and piles of unsafe exceptions to get the same job done.
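For concreteness, the pattern being debated here looks something like this: a safe public API whose single `unsafe` block is confined behind a checked boundary. This is a toy illustration of the idiom, not code from any real network stack:

```rust
// Toy illustration of the "safe wrapper over unsafe" idiom: the one
// unsafe raw-pointer read is confined to a single audited spot, and the
// public API cannot be used to read out of bounds.

struct Region {
    base: *const u8,
    len: usize,
}

impl Region {
    /// Wrap a borrowed buffer. (Toy code: the caller must keep the
    /// buffer alive while the Region is in use.)
    fn new(buf: &[u8]) -> Region {
        Region { base: buf.as_ptr(), len: buf.len() }
    }

    /// Safe API: bounds-checked read. The `unsafe` block below is the
    /// kind of "exception" under discussion; its soundness rests
    /// entirely on the length check just above it.
    fn read(&self, offset: usize) -> Option<u8> {
        if offset < self.len {
            Some(unsafe { *self.base.add(offset) })
        } else {
            None
        }
    }
}

fn main() {
    let buf = [10u8, 20, 30];
    let r = Region::new(&buf);
    assert_eq!(r.read(1), Some(20));
    assert_eq!(r.read(7), None); // out of bounds: rejected, not UB
}
```

Whether concentrating the raw access like this is meaningfully different from disciplined C++ with safe APIs is exactly the disagreement in this subthread.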
>when the C/C++ IDE and SDK are right there going it all for me and all I have to do is glue it up with a dozen LoC and a couple of #includes.
Agreed. And, if we had some independent review of the IDE and SDK and libraries that your 12 LoC are leaning on, I don't think it would really matter whether they were in Rust or C++ or Fortran.
(Score: 2) by RamiK on Tuesday March 11, @05:39PM (8 children)
It can't do any better or worse than any other thingy with an antenna. And asking more than that is equivalent to shutting down all websites and forbidding all software that hasn't been formally verified to be secure, since it might be compromised and will then compromise something else that will then compromise your browser that will then compromise your bank account that kills your dog... Which, admittedly, is exactly why we do air gapping... Different threat modeling, I suppose.
There are formally verified network stacks all over aerospace, so once the work on the new trait solver is complete ( https://github.com/rust-lang/rust/issues/107374 [github.com] ) and Rust can move forward adding optional formal verification to unsafe uses, there will be fewer and fewer uses of unsafe and more and more instances of unsafe accompanied by a formal specification. e.g. there are already open source projects working towards such goals: https://asterinas.github.io/2025/02/13/towards-practical-formal-verification-for-a-general-purpose-os-in-rust.html [github.io]
Anyhow, it took ASN.1 and Ada a couple of decades to get through all of this in aerospace. Rust is already past the halfway point and has the vast majority of its code base in safe and verified code already. So, it won't take long.
Why the IDE?
Regardless, I don't see why you'd put equal burden of proof on both C++ and (safe) Rust. I mean, do you go through perfectly working python code with GDB to hunt down potential garbage collector bugs?
(Score: 2) by JoeMerchant on Tuesday March 11, @06:17PM (7 children)
>And asking more than that is equivalent to shutting down all websites and forbidding all software that hasn't been formally verified to be secure since they might be compromised and will then compromise something else
Yeah, that's kinda where we are at the moment. I'm not advocating doing it all tomorrow by close of business, but we need to start moving in that direction if we're going to actually use this internet thingy to do more than fling poo at each other.
It's not actually "that hard" to make server software that requires secure positive identification of "the caller at the other end of the IP connection." The harder nuance is secure key management.
When devices and software start segregating along lines of "security is important" and not, that will make the whole exercise a lot more feasible. As things are, I'm typing this to you on the same device, the very same piece of application software in fact, that I also use to manage my retirement accounts. That's a whole lot of faith in something that doesn't deserve it. But I don't have much choice in today's marketplace.
>Rust is already past the half way point and has the vast majority of its code base in safe and verified code already.
I'll stay on this side of the fence where we still think about when we use unsafe things and not just throw a blanket "Oh, they're speaking Rust, it must be safe" statement at it. Call me when Rust is claiming to be past the 99% point and passing 99/100 audits verifying that claim.
>Why the IDE?
Not so much the IDE, you're right. My main IDE is Qt Creator, and with that I get clang syntax highlighting, which is basically realtime static analysis as I type. Since that started I have not had, for instance, a single uninitialized-variable issue. It's not the whole solution, but when I encounter problems with other people's code that clang static analysis would have stopped if they had bothered, I'm pretty amazed that they're still ignoring tools that were state of the art 10 years ago.
> I mean, do you go through perfectly working python code with GDB to hunt down potential garbage collector bugs?
No, but I have plenty of not perfectly working python code which comes down to garbage collector timing issues.
> I don't see why you'd put equal burden of proof on both C++ and (safe) Rust.
Right now, because (safe) Rust isn't - in practice. And, back at'cha if I'm using C++ with verified safe libraries and only (safe) constructs in-between my library calls, which is easily verified in real time as I type with static analysis.
In the practical world, I'd love to have a tool that gives me "green light" based on static analysis of my code, and if I feel that I need to color outside those lines, then procedurally we get to have a formal (documented) code review to explain why the "not recognized as intrinsically safe" operations are, after all, 100% perfectly safe due to input sanitizing, null pointer checking, lack of other threads involved or proper mutexing, etc.
(Score: 2) by RamiK on Tuesday March 11, @06:41PM (6 children)
Regrettably, C++ static analysis doesn't guarantee anything anywhere near what lifetime annotations do, which is why a borrow checker is being proposed for C++: https://safecpp.org/draft.html [safecpp.org]
Keep in mind, that's still not as good as Rust, if only because it doesn't separate safe from unsafe clearly. Still, for the purpose of the conversation, it shows why C++ deserves additional scrutiny compared to Rust even now.
(Score: 2) by JoeMerchant on Tuesday March 11, @08:16PM (5 children)
In theory, I don't disagree.
I don't work in theory, I work in practice.
In practice, my existing code base is all in C++, and if I were even to contemplate a reimplementation in Rust the libraries and other tools aren't on-par yet to support that decision.
Also in practice, a lot of what my day to day C++ code does is launch other standard Linux applications in their own processes, which is a security and safety hole of yet another order of magnitude of trust in the authors of those tools.
(Score: 2) by RamiK on Tuesday March 11, @09:47PM (4 children)
It's not without friction but I'll just leave these two here:
https://github.com/dtolnay/cxx [github.com]
https://github.com/KDAB/cxx-qt [github.com]
But yeah. Maybe one day things will get easier somehow...
(Score: 2) by JoeMerchant on Tuesday March 11, @11:21PM (3 children)
I have done the py-Qt thing a couple of times. Python usually gets in the way eventually and I just end up back in C++ anyway. Being Python, it brings package version hell just like you would expect.
(Score: 2) by RamiK on Wednesday March 12, @08:40AM (2 children)
Yeah, I used python-qt bindings back in the day and they were too high-level for me to get what I wanted. But keep in mind that, unlike Python's cffi bindings, Rust's ffi bindings can subclass C++ code: https://www.kdab.com/cxx-qt-0-5/ [kdab.com]
(Score: 2) by JoeMerchant on Wednesday March 12, @10:18PM (1 child)
Yeah, and if required I have done stuff like that, starting with calling 6502 assembly from BASIC.
I really prefer when a single API and language gets the job done, particularly if you don't have to mess with complex toolchain setup in addition to the source code.
(Score: 3, Insightful) by RamiK on Thursday March 13, @08:22AM
Well, C++ developers do tolerate templates, C, CMake/Makefiles, bash, some Python, maybe some shaders, the odd assembly, probably some JavaScript, a few declarative "languages" like CSS/HTML... So, it's mostly a quantitative issue specific to how complex Qt is and how Rust bindings to it might add to the complexity.
To be honest, we only have data from Microsoft, Google and Nvidia on systems programming with Rust to show that a rust/c/c++ polyglot pays off. So, you might be right in saying that Qt Rust bindings will never get comfortable in Rust. Of course, native Rust GUI toolkits ARE a thing... But yeah. Just because it can be done doesn't mean it should be done.