Apr 23, 2026
A combined camera and tactile sensor system enables robots to map fine surface features of millimeter-sized objects when lighting or focus conditions degrade visual input.
(Nanowerk News) Robots equipped with cameras can locate objects in their surroundings, but vision alone struggles with targets that are tiny, irregularly shaped, or partially obscured. A team from Yonsei University in South Korea and the University of Southern California has developed a perception system that pairs a single RGB-Depth camera with a flexible tactile sensor array.
The approach, published in Microsystems & Nanoengineering (“Complementary visual localization and tactile mapping approach for robotic perception of millimeter-sized objects with irregular surfaces”), lets a robot first spot an object visually and then map its surface through touch when camera data becomes unreliable.
Key Findings
- A soft capacitive pressure sensor array fabricated with inkjet-printed silver electrodes detected loads as light as 23 Pa, responded in roughly 48 milliseconds, and maintained stability over 5,000 loading cycles.
- Under illumination below 10 lux, camera detection accuracy dropped sharply, but tactile scanning still resolved surface protrusions and height differences down to 2 millimeters.
- The combined vision-and-touch strategy reconstructed three-dimensional surface profiles of pill-scale objects even under close-range focus failure, occlusion, and very low light.
Robots now operate in settings where human intervention is dangerous or impossible. Space missions, radioactive cleanup, and industrial hazard zones all demand machines that can perceive and manipulate objects on their own. Camera-based systems generally perform well under controlled lighting but lose reliability when illumination drops, objects are blocked from view, or the working distance pushes the lens out of focus.
Touch offers a complementary channel, especially for reading surface shape and local geometry. Yet most flexible tactile sensors have difficulty capturing structural features at the millimeter scale. Tactile sensing on its own also cannot determine where a target sits within a broader scene. Combining the two modalities addresses the weaknesses of each.
The tactile component uses a soft capacitive pressure sensor built with inkjet-printed deformable silver electrodes and an irregularly structured dielectric layer on a flexible substrate. This design achieved a sensitivity of 0.3458 kPa⁻¹ across the 0 to 0.8 kilopascal range, with a response time of approximately 48 milliseconds and a minimum detectable load of about 23 pascals.
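To make the reported figures concrete, here is a minimal sketch of how a linear capacitive sensitivity model turns a relative capacitance change into a pressure estimate. The function name, the linear-model assumption, and the example reading are illustrative; only the sensitivity value, linear range, and detection limit come from the article.

```python
# Sketch: converting a relative capacitance change from a capacitive
# pressure sensor into a pressure estimate, assuming the linear model
# S = (dC/C0) / P over the reported 0-0.8 kPa range.

SENSITIVITY_PER_KPA = 0.3458  # reported sensitivity, kPa^-1
MAX_PRESSURE_KPA = 0.8        # upper end of the reported linear range

def pressure_from_capacitance(c_measured: float, c_baseline: float) -> float:
    """Estimate applied pressure (kPa) from a capacitance reading."""
    relative_change = (c_measured - c_baseline) / c_baseline
    pressure_kpa = relative_change / SENSITIVITY_PER_KPA
    if pressure_kpa > MAX_PRESSURE_KPA:
        raise ValueError("reading outside the sensor's linear range")
    return pressure_kpa

# A 1% capacitance increase maps to roughly 0.029 kPa (29 Pa), just
# above the ~23 Pa minimum detectable load reported for the device.
print(round(pressure_from_capacitance(1.01, 1.00), 3))
```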
Arranged in a 10-by-10 grid, the sensors enabled two-dimensional pressure mapping and shape recognition. The array’s skin-like local deformation improved detection of fine structural variations that rigid sensor surfaces would miss. Over 5,000 loading and unloading cycles, performance remained stable.
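A sketch of the kind of two-dimensional pressure mapping such an array enables: readings above a contact threshold are binarized into a footprint of the pressed shape. The 10×10 grid size matches the article; the threshold value and the simulated L-shaped object are illustrative assumptions.

```python
# Sketch: 2D pressure mapping with a 10x10 tactile array. Cells whose
# pressure exceeds a threshold are marked as in-contact, yielding a
# coarse footprint of the object's shape.
import numpy as np

def contact_map(pressures_kpa: np.ndarray, threshold_kpa: float = 0.05) -> np.ndarray:
    """Binarize a 10x10 pressure map into contact / no-contact cells."""
    assert pressures_kpa.shape == (10, 10)
    return pressures_kpa > threshold_kpa

# Simulate pressing an L-shaped object onto the array (values in kPa).
frame = np.zeros((10, 10))
frame[2:8, 3] = 0.4   # vertical bar of the "L"
frame[7, 3:7] = 0.4   # horizontal bar
footprint = contact_map(frame)
print(int(footprint.sum()))  # number of taxels registering contact
```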
On the vision side, the RGB camera achieved strong object recognition above 100 lux, with a mean average precision score of approximately 0.995. That figure dropped to 0.706 once illumination fell below 10 lux. Adding a depth module improved boundary detection and position estimation but still could not resolve fine surface geometry.
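The complementary strategy can be summarized as a simple modality-selection rule: rely on the camera when conditions support it, and hand off to tactile scanning otherwise. The 10-lux threshold mirrors the illumination level below which the reported accuracy dropped (mAP 0.995 to 0.706); the confidence cutoff and function name are illustrative assumptions, not the paper's implementation.

```python
# Sketch: choosing which sensing channel should drive surface mapping,
# based on scene illuminance and the detector's confidence.

LOW_LIGHT_LUX = 10.0   # below this, reported camera accuracy degraded
MIN_CONFIDENCE = 0.8   # illustrative detector-confidence cutoff

def choose_modality(illuminance_lux: float, detection_confidence: float) -> str:
    """Return which channel ('vision' or 'tactile') to use for mapping."""
    if illuminance_lux < LOW_LIGHT_LUX or detection_confidence < MIN_CONFIDENCE:
        return "tactile"
    return "vision"

print(choose_modality(200.0, 0.95))  # well-lit, confident detection
print(choose_modality(5.0, 0.95))    # low light forces tactile fallback
```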
Tactile scanning filled precisely this gap. The sensor array detected single and multiple protrusions on test objects, distinguished height differences of 2, 3, and 4 millimeters, and reconstructed the surface contours of tablets inside a blister pack. The camera had failed at this task because of close-range focus limitations and low light.
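How a tactile scan could distinguish such height differences can be sketched as converting the array's pressure map into an estimated height map. The linear pressure-to-height factor below is an illustrative assumption, not the paper's calibration; it simply shows how 2, 3, and 4 mm protrusions would produce separable readings.

```python
# Sketch: reconstructing a coarse height profile from a tactile pressure
# map, assuming (for illustration only) that measured pressure scales
# linearly with protrusion height at small indentations.
import numpy as np

KPA_PER_MM = 0.1  # assumed linear pressure-height factor (illustrative)

def heights_from_pressure(pressure_map_kpa: np.ndarray) -> np.ndarray:
    """Convert a tactile pressure map (kPa) into estimated heights (mm)."""
    return pressure_map_kpa / KPA_PER_MM

# Three protrusions of differing height under a small scan patch.
scan = np.array([[0.0, 0.2, 0.0],
                 [0.3, 0.0, 0.4]])
print(heights_from_pressure(scan))  # heights of 2, 3, and 4 mm resolve
```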
“This study points to a practical shift in robot perception: when cameras lose confidence, touch can take over the harder job of reading surface detail,” the researchers suggest. “In that sense, tactile sensing is not just a supporting signal. It becomes a functional backup pathway for recognizing shape, protrusions, and height variation when visual information is incomplete or unavailable.”
The strategy mirrors how humans interact with unfamiliar objects: look first to locate, then touch to refine understanding. Applying that sequence to robotics could strengthen micromanipulation in hazardous response, maintenance, and space operations where unstructured conditions make pure vision unreliable. The researchers frame their work as a step toward sensory substitution, where one sensing channel compensates when another degrades.
Future efforts will focus on fusing visual and tactile signals in real time and pushing tactile resolution further, with the goal of making robotic manipulation more autonomous and adaptable.