A wall-mounted metasurface powered by a large language model interprets voice commands and uses ordinary WiFi signals to deliver data and monitor breathing simultaneously and autonomously.
(Nanowerk Spotlight) ‘Talking to a wall’ is universal shorthand for being completely ignored, and it is roughly how walls treat radio waves, too. WiFi signals scatter off surfaces, lose energy passing through building materials, and arrive at your device weakened and distorted. Programmable metasurfaces were designed to fix this.
These flat panels, studded with hundreds of tiny antenna elements whose electromagnetic response can be switched on command, can steer and focus wireless signals in real time. They can redirect a WiFi beam around a corner or concentrate it on a single device. But they have always needed a human operator recalculating configurations every time someone moves, a new device appears, or conditions shift. Take the operator away, and even self-adapting metasurfaces that adjust their own electromagnetic response fall back on limited, pre-programmed routines.
The idea of giving these surfaces the ability to act on their own is not entirely new. In 2025, a separate research group introduced metaAgent, a framework that connected a large language model to a programmable metasurface, allowing it to interpret natural-language commands and execute electromagnetic tasks like beamforming and localization through semantic programming (Light: Science & Applications, “Electromagnetic metamaterial agent”). That was an important proof of concept, but tasks still ran one at a time, and the system lacked real-time environmental awareness.
Now, a team of researchers has built a surface that goes further: it does not just follow orders but actively listens, reasons, perceives its surroundings, and responds. Ask it to send a photo to your partner’s laptop while checking whether your baby in the next room is breathing normally, and it does both simultaneously, using nothing but ordinary WiFi signals. No wearable on the infant, no camera on the crib, no app to configure. Just a spoken request and a spoken answer.
The system, described in Advanced Functional Materials (“Autonomous Intelligent Metasurface for Wireless Communications and Contactless Human Sensing”), is called an autonomous intelligent metasurface, or AIM. It wraps a programmable electromagnetic surface in a closed-loop architecture driven by a large language model, giving it the ability to interpret natural-language commands, perceive its environment, reconfigure wireless signals, monitor human vital signs, and report results back by voice, all without human-in-the-loop control.
Schematic of the autonomous intelligent metasurface (AIM). (a) Working pipeline of AIM for complex EM tasks, which follows a closed-loop process of listening, reasoning, acting, and responding. (b) An example of AIM for integrated sensing and communication, which can simultaneously monitor an individual’s respiration and transmit the customized content to a designated device. (Image: Reproduced with permission from Wiley-VCH Verlag)
At AIM’s core sits a 64-centimeter square panel holding a 32 × 32 grid of small resonant elements operating at 3.5 GHz. Each element contains two PIN diodes that flip its electromagnetic phase by 180°, and by programming different patterns across the grid, the system can steer a wireless beam in a chosen direction or focus it onto a specific spot in a room.
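The article does not reproduce the authors' control code, but the arithmetic behind this kind of 1-bit beam control is standard. As a minimal sketch, assuming the panel dimensions quoted above and an illustrative feed position, the diode pattern for focusing on a point can be computed by working out the ideal path-compensating phase at each element and quantizing it to the two available states:

```python
import numpy as np

# Minimal sketch (not the authors' code) of 1-bit focusing for a
# 32 x 32 PIN-diode metasurface at 3.5 GHz. State 1 means "add 180 deg".
C = 3e8                       # speed of light, m/s
FREQ = 3.5e9                  # operating frequency, Hz
LAMBDA = C / FREQ             # wavelength, ~8.6 cm
N = 32                        # elements per side
PITCH = 0.64 / N              # element spacing for a 64 cm panel, m

def focusing_mask(target, feed):
    """Return an N x N array of diode states (0 or 1).

    target, feed: (x, y, z) points in metres, with the panel centred
    at the origin in the z = 0 plane. Both positions are illustrative.
    """
    k = 2 * np.pi / LAMBDA
    coords = (np.arange(N) - (N - 1) / 2) * PITCH
    x, y = np.meshgrid(coords, coords)
    elems = np.stack([x, y, np.zeros_like(x)], axis=-1)

    # Total path feed -> element -> target; the ideal compensation phase
    # is proportional to this path length, quantized here to one bit.
    path = (np.linalg.norm(elems - np.asarray(feed), axis=-1)
            + np.linalg.norm(elems - np.asarray(target), axis=-1))
    phase = (k * path) % (2 * np.pi)
    return (phase > np.pi).astype(int)

# Example: focus on a chest-height point about 2 m in front of the panel.
mask = focusing_mask(target=(0.5, -0.2, 2.0), feed=(0.0, 0.0, 0.8))
print(mask.shape, int(mask.sum()), "elements in the 180-degree state")
```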
But hardware alone does not make a wall intelligent. AIM wraps this panel in a cognitive architecture inspired by human physiology, with six modules the researchers name after body parts. A microphone (“ear”) captures spoken commands and converts them to text, which a large language model (“brain”) then parses into a sequence of subtasks. A camera paired with a LiDAR depth sensor (“eye”) identifies and tracks people and objects in the room, while the metasurface (“hand”) steers and focuses electromagnetic waves accordingly.
A signal processing module (“neurons”) handles wireless data and extracts physiological signals. A loudspeaker (“mouth”) announces results back to the user. The metaphor is more than decorative: each component feeds the next in a continuous cycle of perception, reasoning, action, and feedback that runs without human intervention.
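The paper describes this cycle in prose, but its structure can be made concrete with a hypothetical sketch; every module object and method name below is ours, chosen to mirror the six-part metaphor, not taken from the published implementation.

```python
# Hypothetical sketch of AIM's closed loop; the module objects and method
# names are illustrative stand-ins for the six components named above.
def aim_loop(ear, brain, eye, hand, neurons, mouth):
    while True:
        command = ear.transcribe()           # "ear": speech -> text
        subtasks = brain.plan(command)       # "brain": LLM parses subtasks
        for task in subtasks:
            scene = eye.locate(task.target)  # "eye": camera + LiDAR tracking
            hand.configure(scene, task)      # "hand": steer/focus the beam
            result = neurons.process(task)   # "neurons": decode data, vitals
            mouth.speak(result.summary)      # "mouth": report back by voice
        # In the real system, communication and sensing subtasks share the
        # same signal and run concurrently rather than one after the other.
```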
What makes this cycle particularly striking is that sensing and communication happen at the same time, on the same signal. Most systems that combine wireless data transfer with health monitoring require dedicated sensing hardware or separate frequency bands. AIM avoids both.
It generates signals following the IEEE 802.11n WiFi standard, shifted to 3.5 GHz to avoid interfering with everyday wireless traffic. The same signal that delivers a file to a laptop also bounces off a person’s chest, where the tiny movements caused by breathing shift the reflected signal in ways similar to a micro-Doppler radar. The signal processing module picks up those shifts and extracts respiration rate, no camera or wearable required. This dual use of a single, unmodified WiFi signal extends the principle behind recent work on metasurface-based wireless sensing from gesture recognition to continuous vital-sign monitoring.
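The exact processing chain is not spelled out in the article, but the micro-Doppler principle it invokes is easy to sketch: track the phase of the chest-reflected signal over time, then find the dominant spectral peak in the human breathing band. The sampling rate, band limits, and synthetic test below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (assumed, not the paper's algorithm): millimetre-scale
# chest motion phase-modulates the reflected signal, and the dominant
# frequency in the 0.1-0.5 Hz band is the respiration rate.
FS = 20.0                                        # slow-time sampling rate, Hz

def respiration_rate_bpm(reflected, fs=FS):
    """Estimate breaths per minute from complex reflected-signal samples."""
    phase = np.unwrap(np.angle(reflected))       # chest-induced phase track
    phase -= phase.mean()                        # drop the static offset
    spectrum = np.abs(np.fft.rfft(phase))
    freqs = np.fft.rfftfreq(len(phase), d=1 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.5)       # 6 to 30 breaths per minute
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic check: 3 mm chest motion at 0.25 Hz -> 15 breaths per minute.
t = np.arange(0, 60, 1 / FS)
displacement = 0.003 * np.sin(2 * np.pi * 0.25 * t)
wavelength = 3e8 / 3.5e9
samples = np.exp(1j * (4 * np.pi / wavelength) * displacement)
print(round(respiration_rate_bpm(samples), 1))   # ~15.0
```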
Did it actually work? When a user said “send this image to the laptop while checking Alice’s breathing,” AIM identified Alice among multiple people, tracked her in real time, focused the beam on her chest, transmitted the file, extracted her respiration rate, and announced the result, all autonomously. Compared against a wearable belt sensor on the subject’s abdomen, the breathing measurements matched closely. With one person in the room, the system’s readings were typically off by fewer than two breaths per minute. With four people moving around, the error roughly doubled but remained within a clinically acceptable range.
The communication improvements were just as tangible. Without AIM, a transmitted image arrived visibly corrupted, with scattered signal clusters indicating heavy distortion. With AIM steering the beam, the same image came through clearly, and a standard signal quality metric improved by roughly 8 dB, significantly reducing transmission errors. Packet error rates fell across a wide range of angles between the surface and the receiver.
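The article does not name the metric, but a back-of-envelope check shows why roughly 8 dB is a large jump. Assuming, purely for illustration, that the gain acts like a signal-to-noise improvement on a QPSK-style link, the textbook bit-error formula drops by many orders of magnitude:

```python
import math

# Illustration only: treat the ~8 dB gain as an SNR improvement on a
# QPSK link, where bit error rate is 0.5 * erfc(sqrt(Eb/N0)).
def qpsk_ber(snr_db):
    snr = 10 ** (snr_db / 10)
    return 0.5 * math.erfc(math.sqrt(snr))

for snr_db in (6.0, 14.0):                 # hypothetical without / with AIM
    print(f"{snr_db:4.1f} dB -> BER {qpsk_ber(snr_db):.1e}")
# 6.0 dB -> BER ~2.4e-03;  14.0 dB -> BER ~7.0e-13
```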
The researchers also tested AIM on spoken commands it had never encountered during development. Simple instructions like sending a file to a specific device succeeded 96 to 98 percent of the time. Compound commands requiring simultaneous data transfer and health monitoring reached 90 to 100 percent. Failures were traced mainly to ambiguous language in multi-person scenes, momentary visual tracking loss, or network latency slowing the language model’s response. The entire closed loop, from spoken command to voice-delivered result, took about 6.3 seconds.
Drawing roughly 450 W in total and still requiring a manual reset about 20 percent of the time, AIM is not yet ready for a consumer product shelf. But the metasurface hardware itself stays simple and inexpensive; the intelligence lives entirely in the surrounding software and sensors.
Future development will tackle interference management when multiple tasks compete for the surface, safety measures against adversarial voice commands, and privacy protections that activate physiological sensing only on explicit request. Getting that balance right between autonomy and safety remains an open problem. But the core demonstration stands: a wall that listens, thinks, and responds is no longer just an idiom for futility. It is an engineering prototype with measured performance data to back it up.