Ultrafast compact all-fiber imaging via deep learning
High-speed imaging through multimode optical fibers is demonstrated, leveraging high intermodal dispersion to transform 2D spatial information into 1D temporal pulsed signal streams. Deep learning algorithms are applied to reconstruct images of micron-scale objects at ultrahigh frame rates.
Optical fibers are vital media for long-distance information transmission. Recent years have seen increasing interest in applying deep learning algorithms to recover and detect images from the light scattered through single- or multimode fibers [1-3].
Ultrahigh-speed optical-fiber-based endoscopy is frequently required for in vivo and in situ applications, and equipping fiber endoscopy with high imaging speed can be vital for exploring transient biomedical and physical phenomena. However, conventional approaches entail traditional cameras, which generally require a trade-off between imaging speed and frame depth owing to the limited readout speed from the pixel arrays to memory. Inspired by the time-stretching scheme [4], a compact all-fiber imaging scheme is proposed and experimentally demonstrated at very high speeds. As illustrated in Fig. 1, the high-speed imaging is based on the transformation of two-dimensional spatial information into one-dimensional temporal pulsed streams by leveraging the high intermodal dispersion in a multimode fiber. Neural networks are trained to reconstruct images from the temporal waveforms.
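The 2D-to-1D encoding can be sketched with a toy numerical model: each guided mode samples the object with a fixed spatial pattern and arrives at the detector with its own group delay, so the recorded waveform is a train of delayed pulses whose heights encode the image. All dimensions, patterns, and delays below are invented for illustration and do not come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters (illustrative only, not from the paper)
n_pixels = 16 * 16      # flattened 2D object
n_modes = 200           # guided modes in the MMF
n_samples = 512         # time samples in the recorded waveform

# Each mode couples to the object with a fixed (random) spatial pattern
# and reaches the photodiode with its own group delay.
mode_patterns = rng.standard_normal((n_modes, n_pixels))
mode_delays = np.sort(rng.uniform(0, n_samples - 40, n_modes)).astype(int)

def waveform_from_image(image_flat, pulse_width=8.0):
    """Sum one delayed Gaussian pulse per mode, weighted by modal power."""
    amplitudes = mode_patterns @ image_flat          # modal excitation
    t = np.arange(n_samples)
    waveform = np.zeros(n_samples)
    for a, d in zip(amplitudes, mode_delays):
        waveform += a**2 * np.exp(-0.5 * ((t - d) / pulse_width) ** 2)
    return waveform

image = rng.random(n_pixels)                         # toy 2D object, flattened
y = waveform_from_image(image)
print(y.shape)  # (512,)
```

Because the detector measures intensity, the waveform is a nonlinear (here quadratic) function of the image, which is one reason a learned reconstruction is attractive.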
The operation schematics and experimental layout are shown in Fig. 2. Here, we combine the advantages of the time-stretching method [4] and fiber endoscopy to propose a one-pixel method that enables all-fiber high-speed image detection, eliminating the need for expensive high-speed, high-resolution cameras. Leveraging a single multimode fiber (MMF) as the probe, real-time image acquisition at a frame rate of over 15 Mfps with a shutter time of 45.1 ps is experimentally demonstrated, in which 10,000 frames can be recorded in a single shot. Moreover, the maximum system frame rate may be further enhanced to 53.5 Mfps. The 2D spatial information encoded in light scattered from the target images is transformed into one-dimensional (1D) time-domain pulsed waveforms, thanks to the large intermodal dispersion in the MMF. An artificial neural network model is trained to reconstruct images from the temporal waveforms recorded by an ultrafast photodiode connected to the output end of the fiber.
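The waveform-to-image reconstruction step can be illustrated with a minimal sketch in which the fiber is replaced by a fixed random linear map and the paper's neural network by a ridge-regression decoder trained on simulated waveform/image pairs. All sizes, the noise level, and the linear stand-in are assumptions for illustration; the real mapping generally calls for a nonlinear learned model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the fiber: a fixed random linear map from image to waveform.
n_pixels, n_samples, n_train = 64, 256, 2000
T = rng.standard_normal((n_samples, n_pixels))

# Simulated training set: random images and their (noisy) recorded waveforms.
X_img = rng.random((n_train, n_pixels))
Y_wave = X_img @ T.T + 0.01 * rng.standard_normal((n_train, n_samples))

# Ridge-regression decoder, waveform -> image: a linear stand-in
# for the trained neural network in the demonstrated scheme.
lam = 1e-2
A = Y_wave.T @ Y_wave + lam * np.eye(n_samples)
W = np.linalg.solve(A, Y_wave.T @ X_img)             # (n_samples, n_pixels)

# Reconstruct an unseen image from its waveform alone.
x_true = rng.random(n_pixels)
y_meas = x_true @ T.T
x_rec = y_meas @ W
err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")
```

The same train-then-decode workflow carries over when the decoder is a deep network: collect waveform/image pairs through the actual fiber, fit the decoder, then reconstruct each frame from its single-pixel waveform.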
In addition, an all-fiber structure combining a fiber-output pulsed laser, a triple-cladding fiber probe, and a side-pump coupler is applied, which may enable high levels of integration and system stability. The overall imaging performance characterization is presented in Fig. 3. The performance of the demonstrated proof-of-principle system may be further improved, and the scheme could be extended to detect 3D objects by combining it with existing time-of-flight techniques, for instance.
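The frame-rate figures quoted above follow from simple arithmetic, assuming one frame is acquired per laser pulse; the calculation below is a back-of-envelope check, not a description of the actual experimental settings.

```python
# Back-of-envelope check of the quoted figures (one frame per laser pulse
# is an assumption; only the headline numbers come from the text).
frame_rate = 15e6            # demonstrated frame rate, frames per second
frame_period = 1 / frame_rate
record_frames = 10_000       # frames recorded in a single shot
record_time = record_frames * frame_period

shutter = 45.1e-12           # quoted shutter (effective exposure) time

print(f"frame period      : {frame_period * 1e9:.1f} ns")
print(f"single-shot record: {record_time * 1e6:.0f} us")
print(f"duty cycle        : {shutter / frame_rate**-1:.2e}")
```

At 15 Mfps, consecutive frames are about 67 ns apart, so the 10,000-frame single shot spans roughly 0.67 ms, while the 45.1 ps shutter keeps the per-frame exposure far shorter than the frame period.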
[1] Caramazza, P. et al. Transmission of natural scene images through a multimode fibre. Nature Communications 10, 2029 (2019).
[2] Borhani, N. et al. Learning to see through multimode fibers. Optica 5, 960–966 (2018).
[3] Meng, Y. et al. Optical meta-waveguides for integrated photonics and beyond. Light: Science & Applications 10, 235 (2021).
[4] Goda, K. et al. Serial time-encoded amplified imaging for real-time observation of fast dynamic phenomena. Nature 458, 1145–1149 (2009).