A self-powered fabric with nanostructured materials converts sound into electrical signals and uses deep learning to recognize speech, turning clothing into a soft and responsive voice interface.
(Nanowerk Spotlight) Most voice-controlled systems still depend on rigid microphones housed in plastic or metal. They work well for phones and speakers but sit awkwardly on fabric. The idea of letting clothing itself sense and respond to sound has drawn researchers from materials science, electronics, and human–computer interaction, yet practical progress has been slow. Making a piece of cloth sensitive enough to detect speech means building something that can capture tiny air vibrations while staying soft, washable, and power-efficient.
Attempts to merge audio sensing with textiles have run into physical limits. Microphones rely on membranes that need enclosed air spaces to function, and those structures do not translate easily to woven fibers. Piezoelectric threads that convert pressure into voltage have been woven into experimental fabrics, but the signals they produce are often weak and irregular.
Triboelectric systems, which use the static charge created when materials move against each other, promised a simpler path because they can harvest energy from motion. But most required two surfaces to strike together or a rigid cavity to amplify sound, which made them bulky and unsuitable for everyday clothing.
Recent progress in nanostructured coatings, flexible conductors, and machine learning has changed what is possible. New materials can hold static charge for long periods, and neural networks can extract clear patterns from noisy data. These developments have opened the door to fabrics that not only sense touch and movement but can also interpret the vibrations of speech.
This work describes a self-powered fabric that turns sound into electricity through the natural electrostatic charge that fabrics accumulate. The signals it produces are then analyzed by a compact deep learning model capable of interpreting spoken words.
The researchers demonstrate how the textile can control smartphones, home devices, and a generative AI chatbot through simple voice commands. This combination of materials design and neural modeling creates a single platform for sensing and interpretation rather than separate components joined together.
(A) Schematic showing that the deep learning (DL)-empowered AI acoustic textile (A-Textile) can be seamlessly assembled into clothing, allowing application in the voice-controlled teleoperation of IoT devices, access to cloud information, and chatting with ChatGPT. (B) Photographic image showing the A-Textile–integrated garment for voice perception. Scale bar, 4 cm. (C) Photographic image showing the developed A-Textile (dimensions, 3.3 cm by 3.3 cm). Scale bar, 2 cm. (D) Photographic image showing the flexibility of the A-Textile. Scale bar, 2 cm. (E) Schematic illustration depicting the process used by the DL-empowered A-Textile for smart home usage, GPS navigation, and communicating with generative AI chatbot applications. (F) Final structure of the 2D CNN model after optimization. (G) Schematic representation of the layer-by-layer structure of the A-Textile. (H) Schematic representation of the working principle of the contactless A-Textile. (Image: Reprinted from DOI:10.1126/sciadv.adx3348, CC BY)
At the center of the design is a layered fabric that captures and holds static charge while responding to sound vibrations in air. The key materials are silicone rubber coated with tin disulfide nanoflowers, a carbonized cotton textile that stores charge, and a nylon layer that becomes positively charged during motion. These are arranged with a conductive silver fiber electrode and a thin spacer between them.
When the fabric flexes or brushes against itself, it naturally accumulates opposite charges on the facing surfaces. Incoming sound waves then cause tiny changes in distance and electric field between those charged layers, generating an electrical signal that follows the sound pressure. Because the layers do not need to strike together, the sensor stays thin, soft, and silent while functioning like normal cloth.
The nanostructured coating is central to its sensitivity. The tin disulfide nanoflowers provide a large surface area and favorable electronic properties that help capture and retain electrons. When they are embedded in the silicone rubber and deposited on the carbonized textile, they create sites that trap charge for long periods.
The result is a fabric with a much higher surface potential, around −2,080 volts at the optimal composition, and a charge retention time of more than 50,000 seconds. At low concentrations the nanoflowers strengthen the field; at higher ones they form continuous conductive paths that leak charge, so the composition must be carefully tuned. This stable charge reservoir allows the textile to generate strong, reliable electrical responses even to small sound vibrations.
Performance tests show why this design matters. The textile produces an open-circuit voltage of up to 21 volts under sound excitation and a sensitivity of about 1.2 volts per pascal at 80 decibels of sound pressure. That value is higher than that of many previously reported fabric-based sensors.
The device maintains a smooth, continuous frequency response from 80 to 900 hertz, covering the main range of human speech rather than a few narrow peaks. It can distinguish tone changes as fine as 1 hertz and delivers a signal-to-noise ratio near 59 decibels. These characteristics make its output suitable for direct input into a neural model without external amplification.
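The reported sensitivity is easy to put in perspective with a quick conversion: sound pressure level in decibels maps to pascals via p = p₀·10^(SPL/20), with the standard reference p₀ = 20 µPa, so an 80 dB tone corresponds to 0.2 Pa, and a 1.2 V/Pa sensor should produce roughly a quarter of a volt. A minimal sketch of that arithmetic (the figures come from the article; the helper names are illustrative):

```python
import math

P_REF = 20e-6  # standard reference pressure, 20 micropascals

def spl_to_pascals(spl_db: float) -> float:
    """Convert sound pressure level (dB SPL) to pressure in pascals."""
    return P_REF * 10 ** (spl_db / 20)

def expected_output_volts(spl_db: float, sensitivity_v_per_pa: float) -> float:
    """Estimate sensor output voltage for a given SPL and sensitivity."""
    return sensitivity_v_per_pa * spl_to_pascals(spl_db)

p = spl_to_pascals(80)              # 0.2 Pa at 80 dB
v = expected_output_volts(80, 1.2)  # ~0.24 V from the reported 1.2 V/Pa
print(p, v)
```

A few hundred millivolts from ordinary speech-level sound is comfortably above typical noise floors, which is consistent with the article's claim that no external amplification is needed before the neural model.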
Mechanical tuning further improved performance. Adjusting the thickness of the spacer between layers changed the electric field strength and responsiveness, with an optimal gap of about 60 micrometers. The researchers also perforated the composite top layer to let air move freely. Holes about 1.25 millimeters wide, covering roughly 13 percent of the surface, produced the best balance of amplitude and frequency stability.
The optimized textile withstood more than 7,000 actuation cycles in durability tests, maintained its output after several washings, and operated reliably at 90 percent humidity and after repeated bending. These results suggest it could endure normal wear and cleaning without losing sensitivity.
Collecting electrical signals from fabric is only part of the problem. They must also be interpreted correctly. To translate the fabric’s voltage patterns into meaning, the team trained a convolutional neural network, a model that excels at recognizing patterns in time or space. They converted the textile’s voltage data into two-dimensional representations similar to spectrograms and trained a network with three convolution layers to identify key features. The model was tested on three groups of spoken commands recorded through the textile.
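The paper's exact preprocessing pipeline is not reproduced here, but the general recipe for turning a 1-D voltage trace into a spectrogram-like 2-D input is standard: slice the signal into overlapping frames and take the magnitude of a discrete Fourier transform of each frame, giving a time-by-frequency grid a 2D CNN can consume. A stdlib-only sketch under that assumption (the frame length and hop size are placeholder choices, not the authors' settings):

```python
import cmath
import math

def frames(signal, frame_len, hop):
    """Split a 1-D signal into overlapping frames."""
    return [signal[i:i + frame_len]
            for i in range(0, len(signal) - frame_len + 1, hop)]

def dft_magnitudes(frame):
    """Magnitude of a naive DFT, keeping only non-negative frequencies."""
    n = len(frame)
    out = []
    for k in range(n // 2 + 1):
        s = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                for i, x in enumerate(frame))
        out.append(abs(s))
    return out

def spectrogram(signal, frame_len=64, hop=32):
    """2-D time-frequency grid: rows are frames, columns are frequency bins."""
    return [dft_magnitudes(f) for f in frames(signal, frame_len, hop)]

# A toy 200 Hz tone sampled at 2 kHz stands in for a textile voltage trace.
fs = 2000
sig = [math.sin(2 * math.pi * 200 * t / fs) for t in range(512)]
spec = spectrogram(sig)
print(len(spec), len(spec[0]))  # frames x frequency bins
```

In practice a library FFT would replace the naive DFT; the point is only that each spoken command becomes a small image whose bright regions trace the word's frequency content over time, which is what the three convolution layers learn to discriminate.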
In the first set, participants spoke ten common home-control commands such as “light on,” “television off,” and “fan on.” The model achieved an average accuracy of 93.5 percent. In a second task involving smartphone navigation and cloud searches, it reached 97.5 percent accuracy. A third demonstration linked the system to a generative AI chatbot using spoken prompts like “Explain AI” or “Enter,” again achieving about 97.5 percent accuracy. Visual analysis of the network’s internal layers confirmed that it learned to separate spoken commands into distinct clusters, showing that the signals carried consistent speech information.
The model also performed well under realistic conditions. With background noise between 65 and 75 decibels, about the level of a busy street, it maintained an average accuracy near 94 percent. It distinguished similar-sounding commands such as “light on” and “light off” with better than 90 percent accuracy. These results indicate that the fabric’s strong and continuous frequency response provides features that machine learning models can read accurately even across different speakers and noisy surroundings.
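Robustness tests of this kind are typically built by mixing a clean recording with noise scaled to hit a target signal-to-noise ratio in decibels. The article does not detail its mixing procedure, so the following is only a generic sketch of how such a test condition can be constructed:

```python
import math
import random

def rms(x):
    """Root-mean-square amplitude of a signal."""
    return math.sqrt(sum(v * v for v in x) / len(x))

def mix_at_snr(signal, noise, snr_db):
    """Scale the noise so the signal-to-noise ratio equals snr_db, then add it."""
    gain = rms(signal) / (rms(noise) * 10 ** (snr_db / 20))
    return [s + gain * n for s, n in zip(signal, noise)]

random.seed(0)
sig = [math.sin(2 * math.pi * 5 * t / 100) for t in range(400)]   # clean tone
noise = [random.gauss(0, 1) for _ in range(400)]                  # ambient noise
noisy = mix_at_snr(sig, noise, 10)  # e.g. a 10 dB SNR test condition
```

Sweeping `snr_db` downward and re-scoring the classifier at each level is the usual way to chart how accuracy degrades as a busy-street noise floor rises toward the speech level.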
The researchers demonstrated how the textile could work in practice. They stitched it into a sleeve connected wirelessly to nearby devices. Spoken commands issued through the sleeve could turn lamps, televisions, or fans on and off, start navigation on a smartphone, or trigger responses from a chatbot. These tests still used an external processor to run the model, but they showed that a soft fabric could serve as a reliable voice interface.
This fabric microphone functions without an external power source. It draws energy from natural motion and ambient vibration, converts it to electrical charge, and senses sound without rigid cavities or physical impact between layers. It remains stable when washed or bent, operates in high humidity, and covers the essential frequency range of human speech. When paired with a lightweight neural model, it can recognize small command vocabularies quickly and consistently. Each of these traits addresses a major limitation that has hindered attempts to make fabrics act as acoustic sensors.
The study is careful about scope. The prototype still relies on laboratory instruments to capture voltages and on a computer to process the data. It recognizes short commands rather than open-ended speech. Even so, the concept points toward clear directions for development. Integrating flexible electronics for amplification, wireless communication, and on-fabric processing could make the entire system self-contained.
Compact models trained for limited vocabularies could run locally on small chips while more complex queries route to cloud-based computation. Combining speech sensing with motion or touch data could reduce false triggers and broaden the range of applications.
By grounding sound sensing in static electricity rather than rigid hardware, this research redefines what fabric can do. The study shows that textiles can both capture and interpret speech without sacrificing comfort or flexibility. It does not replace microphones as we know them but offers a new way to interact with digital systems through the materials people already wear.