Stacked carbon nanotube films turn a touch sensor into a self-computing skin


Feb 06, 2026

A flexible electronic skin stacks carbon nanotube layers to sense touch position and pressure simultaneously, performing classification through its physical structure rather than external processors.

(Nanowerk Spotlight) Pick up a coffee cup and your hand performs a series of computations so fast and fluid you never notice them. Nerve endings in the skin fire to report the exact point of contact, the firmness of your grip, and whether your fingers are sliding across the ceramic glaze. The brain fuses all of this into a unified sensation in milliseconds, letting you adjust your hold before the cup slips. This integration of “where,” “how hard,” and “what kind of motion” is the foundation of dexterous manipulation, and it is precisely what engineers have struggled to reproduce in machines.

The dominant approach in artificial touch relies on arrays of discrete pressure-sensing points spread across a surface, much like pixels on a screen. Each point measures force at its location and sends the reading to an external processor, which then stitches together a picture of what happened. The strategy works, but it scales poorly. More sensing points mean more wiring, more data, and more computation. Gaps between points create blind zones that miss fine details. Continuous gestures like a finger sliding across the surface get sampled as a series of isolated snapshots, stripping out the smooth trajectory that carries so much information. And the raw pressure signal still needs to be sorted into useful categories, “gentle” versus “firm,” for instance, by algorithms running on separate hardware. This split between sensing and computing, inherited from the classical von Neumann architecture that underpins most digital systems, imposes a ceiling on how compact, efficient, and responsive an electronic skin can be.

A research effort now published in Advanced Functional Materials (“A Dual-Modal Programmable E-Skin with Tunable In-Sensor Touch Computing”) tackles this bottleneck head-on. A team based primarily at Xiamen University, with collaborators at Xiamen University of Technology and Nanyang Technological University in Singapore, describes a dual-modal programmable intelligent electronic skin that folds sensing and a degree of computing into the same physical structure. Rather than scattering individual pressure pixels across a surface, the system uses vertically stacked position-sensing layers that encode both where a touch lands and how intense it is, directly in the electrical signal, before any data leaves the sensor.

Concept and illustration of the dual-modal programmable intelligent electronic skin (DPI e-skin). (a) Individual and group behaviors of ants, illustrating how cooperation extends recognition and motion capabilities beyond those of a single organism. (b) Multilayer integration of the DPI e-skin for touch sensing, achieving modal expansion and in-sensor computing. (c), (d) Comparison between a discrete sensor array and the DPI e-skin, the latter offering fewer signal channels, continuous feature recognition, and integrated sensing and computing. (e) The process of position and pressure recognition, encompassing signal acquisition, in-sensor computing, and interactive terminal applications; the left side shows the cross-sectional layered structure of the DPI e-skin. (Image: Reproduced with permission from Wiley-VCH Verlag)

The core building block is a single position-sensing layer. Two thin films of polyethylene terephthalate, a common flexible plastic, each coated with a uniform layer of multi-walled carbon nanotubes, face each other across a narrow spacer. Carbon nanotubes are known for their high electrical conductivity and mechanical resilience. They can be applied by simple spraying, which keeps fabrication straightforward. In the resting state, the two conductive coatings do not touch, so no current flows. When a finger presses hard enough to overcome the spacer’s support, the films make contact and complete a circuit.

Where the contact occurs determines the circuit’s total resistance. Pressing near the electrode produces a short conductive path and low resistance; pressing farther away creates a longer path and higher resistance. The result is a single analog signal whose amplitude maps continuously to position, with a resolution finer than 500 µm. Because the conductive films are flat and continuous rather than divided into discrete pixels, there are no blind spots. Sliding a finger produces a smoothly changing resistance trace; tapping produces an abrupt step. The sensor distinguishes between these gesture types from signal shape alone.

Pressure classification emerges from stacking multiple layers. The authors liken the principle to ant colonies: a single ant has limited capabilities, but a group achieves complex collective behavior through cooperation. In the same way, each layer is a simple sensing unit, but stacking them produces capabilities no single layer could deliver. Each layer’s spacer can be made thicker or thinner, which changes the minimum force needed to bring its conductive films into contact. In a three-layer device, the top layer activates under gentle pressure of 9.6 kPa, the second at 200.9 kPa, and the third only above 385.8 kPa, a differentiation ratio greater than 38×. The number of layers producing a signal at any moment tells the system how hard the user is pressing, without any algorithmic threshold calculation. The researchers describe this as in-sensor computing: the device’s physical structure performs the classification that would normally require separate processing hardware.

Position and pressure signals remain decoupled. Once a given layer activates, its resistance reading depends on position but stays essentially flat as force increases further. Signal fluctuation across different pressures remained below 1.5 %. The team reports a response time under 0.6 ms, with even the thickest spacer configuration recovering in under 2 ms, well within real-time interaction requirements. The device maintained stable output over more than 10 000 loading cycles with less than 1 % signal drift and tolerated a range of temperatures and humidity levels.

To demonstrate practical utility, the researchers built several prototypes. An arrow-shaped single-layer version operated a simulated four-axis robotic arm, with eight spatial zones each linked to a specific joint rotation, translating simple finger presses into coordinated multi-axis movement. A two-layer S-shaped device functioned as a compact keyboard: 26 positions along its surface corresponded to the letters of the alphabet, and gentle versus heavy pressure toggled between lowercase and uppercase without a shift key. The three-layer configuration opened the door to security applications. Designed as a two-factor encrypted password lock, it required users to enter a numeric code through touch position while simultaneously encoding a pressure password through the force applied at each digit. Both had to match for authentication.
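
To make the readout concrete, the sketch below shows how a host controller might interpret the two signal types and apply them to the two-factor lock just described. It is a minimal illustration under stated assumptions, a linear resistance-to-position calibration and a ten-key layout; the function names, constants, and data format are hypothetical rather than taken from the paper, but the principle matches the article: position comes from the analog resistance of the activated layer, and the pressure class from how many layers are conducting.

```python
# Illustrative sketch (not the authors' code) of how a host controller might
# interpret the DPI e-skin's signals and use them for a two-factor lock.
# The linear calibration, key layout, and all constants are assumptions.

def decode_position(resistance_ohm, r_min=100.0, r_max=10_000.0, n_keys=10):
    """Map the analog resistance of the activated layer to one of n_keys
    zones along the strip, assuming resistance grows roughly linearly with
    distance from the readout electrode."""
    frac = (resistance_ohm - r_min) / (r_max - r_min)
    frac = max(0.0, min(frac, 0.999))
    return int(frac * n_keys)            # key index 0-9

def decode_pressure_level(layers_active):
    """The in-sensor computing step: the pressure class is simply the number
    of stacked layers whose films have snapped into contact (1 = gentle,
    2 = medium, 3 = firm for the three-layer device)."""
    return sum(bool(a) for a in layers_active)

def check_password(touch_events, position_code, pressure_code):
    """Two-factor check: both the sequence of touched keys and the sequence
    of pressure levels must match the stored codes."""
    keys = [decode_position(e["resistance_ohm"]) for e in touch_events]
    levels = [decode_pressure_level(e["layers_active"]) for e in touch_events]
    return keys == position_code and levels == pressure_code

# Example: the user enters "3-7" with a gentle press followed by a firm one.
events = [
    {"resistance_ohm": 3600.0, "layers_active": [True, False, False]},
    {"resistance_ohm": 7500.0, "layers_active": [True, True, True]},
]
print(check_password(events, position_code=[3, 7], pressure_code=[1, 3]))  # True
```

The same decoding would also separate gestures: a sliding finger shows up as a smooth drift in successive position readings, while a tap appears as a single abrupt value.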
The researchers then pushed this further: they trained a one-dimensional convolutional neural network on tactile data from six volunteers and achieved over 96 % accuracy in user identification and over 97 % in distinguishing simulated healthy from sub-healthy states, based on subtle differences in press duration, timing, and signal stability.

The team acknowledges limitations. The design handles single-point touch most reliably; multi-point contact introduces signal ambiguity. The health-state prediction was tested on a small group with simulated rather than clinically diagnosed conditions, and achieving broader clinical applicability would require larger and more representative datasets.

What makes the device notable is what it removes from the equation. By physically encoding position as a continuous analog signal and pressure as a discrete layer count, it collapses what traditionally required dense arrays and separate processors into a thin, flexible stack of films. Fewer channels, less data, and no need for external threshold logic add up to a leaner architecture, one that could find a natural home in wearable electronics, robotic skins, and assistive technologies where every milliwatt and every millimeter of thickness counts.
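
For readers who want a concrete picture of the machine-learning step mentioned above, a minimal one-dimensional convolutional classifier over tactile time series might look like the following PyTorch sketch. The architecture, channel count, and window length are illustrative assumptions; the authors’ actual network and preprocessing are not reproduced here.

```python
# Minimal 1D-CNN sketch for classifying press waveforms by user, in the spirit
# of the network described above. Architecture and input shapes are assumed.
import torch
import torch.nn as nn

NUM_USERS = 6      # six volunteers in the reported experiment
NUM_CHANNELS = 3   # assumption: one resistance trace per sensing layer
SEQ_LEN = 256      # assumption: fixed-length window around each press

model = nn.Sequential(
    nn.Conv1d(NUM_CHANNELS, 16, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),   # collapse the time axis to one value per filter
    nn.Flatten(),
    nn.Linear(32, NUM_USERS),  # logits over the six enrolled users
)

dummy_batch = torch.randn(8, NUM_CHANNELS, SEQ_LEN)  # eight example presses
logits = model(dummy_batch)
print(logits.shape)  # torch.Size([8, 6])
```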


By Michael Berger – Michael is author of four books by the Royal Society of Chemistry:
Nano-Society: Pushing the Boundaries of Technology (2009),
Nanotechnology: The Future is Tiny (2016),
Nanoengineering: The Skills and Tools Making Technology Invisible (2019), and
Waste not! How Nanotechnologies Can Increase Efficiencies Throughout Society (2025)

Copyright © Nanowerk LLC
