Metasurfaces trained like neural networks redefine how holograms store information


Dec 03, 2025

Layered metasurfaces trained as optical neural networks enable multifunctional holograms and security features, integrating neural computation principles with nanostructured optics to create a versatile new device class.

(Nanowerk Spotlight) Metasurfaces are thin optical materials patterned with nanoscale structures that interact with light in precisely engineered ways. Their optical response is determined by geometry rather than thickness: each nanostructure can redirect or reshape light by altering how its phase or polarization changes as it traverses the surface. These surfaces have enabled compact lenses, beam shapers, and high-fidelity holograms, providing greater control than pixel-based digital holography. The information is embedded not in electronics but in the spatial arrangement of the nanostructures.

Optical encryption approaches have made use of this property. A hologram produces an image through diffraction, not as a static bitmap, and the image appears only under the proper illumination. Early holographic security encoded information in phase distributions that could be interpreted only through specific reconstruction algorithms or numerical keys. Subsequent work embedded the key into physical geometry: two metasurfaces could reveal a hidden image only when stacked with precise alignment or spacing. These methods eliminated the need for digital decoding but remained rigid. They were designed for a narrow purpose and could not easily support different functions without redesigning the entire device.

Optical neural networks developed in parallel. In these systems, patterned surfaces act as physical analogs of neural network layers. Light passing through the first layer undergoes a transformation similar to a neural activation; it propagates to the next layer and transforms again. The network is trained digitally to find the phase distributions that produce a target output. When training is complete, the layers are fabricated as metasurfaces, and the optical stack performs inference entirely in light, without electronics. These systems demonstrated classification and logic but were fixed in design.
The individual layers contributed only to the collective function and could not be used independently.

A study in Advanced Functional Materials (“Recomposable Layered Metasurfaces for Wavelength‐Multiplexed Optical Encryption via Modular Diffractive Deep Neural Networks”) describes a framework that combines metasurface holography with diffractive neural networks while addressing this lack of modularity. The researchers introduce a modular diffractive deep neural network, abbreviated MD2NN. In this architecture, each metasurface layer is trained so that it can reconstruct its own image, while pairs or larger combinations of layers reconstruct different images when cascaded. The same physical sheets therefore operate as standalone holograms and as components of a cooperative optical system. Illumination wavelength adds another dimension of control.

Conceptual illustration of the modular diffractive deep neural network (MD2NN) architecture for multi-functional and multi-wavelength optical encryption. (Image: Reproduced from DOI:10.1002/adfm.202523309, CC BY)

The scalability of the system follows a simple combinatorial rule. With N metasurface layers and m illumination wavelengths, the device can express up to m(2ᴺ − 1) holographic outputs. These include images from each individual layer as well as images from every possible layer combination. The authors demonstrate this principle numerically for three layers under three wavelengths, producing 21 outputs. These channels are not patched together: they result from a single optimization process that requires all layers to function correctly alone and in combination.

Training uses a wave optics model. Each metasurface is represented as a phase-shifting layer.
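Before turning to the propagation model, the combinatorial rule is easy to verify with a short script. The sketch below is illustrative only (the layer names are placeholders, not identifiers from the paper): each pairing of a wavelength with a nonempty subset of layers is one holographic channel, giving m(2ᴺ − 1) outputs.

```python
from itertools import chain, combinations

def holographic_channels(layers, wavelengths):
    """Enumerate every (wavelength, layer-subset) channel of an
    N-layer, m-wavelength device: m * (2**N - 1) outputs in total."""
    subsets = chain.from_iterable(
        combinations(layers, k) for k in range(1, len(layers) + 1)
    )
    return [(wl, subset) for subset in subsets for wl in wavelengths]

layers = ["L1", "L2", "L3"]          # hypothetical labels for 3 layers
wavelengths = [450, 550, 650]        # nm, as used in the study
channels = holographic_channels(layers, wavelengths)
print(len(channels))                 # 3 * (2**3 - 1) = 21 outputs
```

For three layers and three wavelengths this reproduces the 21 channels reported in the numerical demonstration.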
Light propagation between layers is simulated using the angular spectrum method, which models how the spatial frequency components of an optical field evolve through space. For each wavelength and each layer configuration, the reconstructed output is compared to its target using mean squared error, and the errors from all objectives are combined into a single loss function. Gradients from this loss update every pixel in every metasurface. Because the loss includes both independent and cooperative targets, the system learns both behaviors simultaneously.

The physical implementation uses meta-atoms that provide continuous phase tuning by rotation. The study employs anisotropic silicon nitride nanorods that impose a geometric phase: when circularly polarized light passes through a rotated nanorod, the rotation angle determines the phase shift applied to the transmitted light. Rotation alone controls the phase profile, which allows pixel-by-pixel tuning without altering the nanorod geometry. Silicon nitride is selected because it maintains a useful refractive index in the visible spectrum and exhibits low absorption, supporting efficient cascaded operation.

To validate the design, the researchers fabricate two metasurfaces on glass substrates coated with silicon nitride. Each metasurface comprises a dense grid of nanorods covering about 585 μm by 585 μm. The optical system uses circularly polarized illumination at 450 nm, 550 nm, and 650 nm. Used alone, the first metasurface reconstructs an identification image and the second reconstructs a QR code. When the two layers are physically combined at their trained spacing, the system reconstructs an encrypted password. The hologram forms directly in transmission; no electronic decoding step is required. The physical configuration itself operates as the decryption key, and the password image appears only at the correct interlayer spacing.
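The cascaded forward model can be sketched in a few lines of NumPy. This is a minimal illustration of the angular spectrum method with random, untrained phase masks; the grid size, pixel pitch, and layer spacing are assumptions for the example, not values from the paper.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dz, dx):
    """Propagate a complex optical field a distance dz using the
    angular spectrum method (all lengths in meters)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                  # spatial frequencies
    fx2 = fx[:, None] ** 2 + fx[None, :] ** 2
    arg = (1.0 / wavelength) ** 2 - fx2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * dz) * (arg > 0)   # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Two cascaded phase-only layers, standing in for a trained MD2NN pair.
rng = np.random.default_rng(0)
n, dx, wl, dz = 256, 0.3e-6, 550e-9, 50e-6       # illustrative parameters
phase1 = rng.uniform(0, 2 * np.pi, (n, n))
phase2 = rng.uniform(0, 2 * np.pi, (n, n))

field = np.ones((n, n), dtype=complex)            # plane-wave input
field = angular_spectrum_propagate(field * np.exp(1j * phase1), wl, dz, dx)
field = angular_spectrum_propagate(field * np.exp(1j * phase2), wl, dz, dx)
intensity = np.abs(field) ** 2                    # reconstructed pattern
```

In training, the loss would compare `intensity` against a target image for each wavelength and layer subset, and gradients through this differentiable pipeline would update the phase masks. Note how the spacing `dz` enters the transfer function directly, which is why the reconstruction is sensitive to the interlayer distance.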
Experiments show a noticeable decline in fidelity when the spacing deviates by more than about 7 μm from the trained value. Lateral misalignment also degrades image quality, indicating that both depth and in-plane positioning influence reconstruction. These sensitivities anchor the encryption in the device geometry rather than in digital post-processing.

The approach differs from static metasurface holography. Instead of designing separate devices for each encoded function, the MD2NN framework embeds multiple outputs in a single set of phase patterns. During training, each layer learns its individual target and its collaborative targets; after fabrication, the device can express those outputs by selecting the wavelength and layer arrangement. This does not make the metasurfaces reprogrammable after manufacture, but it does allow a single physical structure to serve several optical roles without redesign.

The study suggests that metasurfaces can be treated as reusable optical components rather than fixed-function plates. A single system can store identifiers, machine-readable symbols, and encrypted content, reconstructed at the speed of light and without electronic intervention. The combination of modular training, wavelength multiplexing, and geometric phase control demonstrates how security and multifunctionality can be embedded directly in the propagation of light rather than in external computation.
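For readers who want the geometric-phase mechanism made concrete, a small Jones-calculus sketch helps. Treating each nanorod as an ideal half-wave retarder (an idealization for illustration, not a claim from the paper), rotating the element by θ flips the handedness of circularly polarized light and imprints a phase of exactly 2θ:

```python
import numpy as np

def rotated_nanorod(theta):
    # Jones matrix of an ideal half-wave retarder whose fast axis is
    # rotated by theta (overall dynamic phase omitted).
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s], [s, -c]], dtype=complex)

lcp = np.array([1, 1j]) / np.sqrt(2)     # left circularly polarized input
rcp = np.array([1, -1j]) / np.sqrt(2)    # opposite handedness

for theta in (0.0, np.pi / 8, np.pi / 4):
    out = rotated_nanorod(theta) @ lcp
    # Output is the opposite circular state carrying a factor exp(2j*theta).
    acquired = np.angle(np.vdot(rcp, out))
    print(f"rotation {theta:.4f} rad -> geometric phase {acquired:.4f} rad")
```

Because the phase depends only on the rotation angle, every pixel of a metasurface can be set independently by orienting its nanorod, with no change to the nanorod's shape or the layer thickness.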


By Michael Berger – Michael is the author of four books by the Royal Society of Chemistry:
Nano-Society: Pushing the Boundaries of Technology (2009),
Nanotechnology: The Future is Tiny (2016),
Nanoengineering: The Skills and Tools Making Technology Invisible (2019), and
Waste not! How Nanotechnologies Can Increase Efficiencies Throughout Society (2025)
Copyright © Nanowerk LLC
