A Unified Framework for Compressive Video Recovery from Coded Exposure Techniques
Prasan Shedligeri
Anupama S
Kaushik Mitra
[Paper]
[GitHub]
Teaser figure: input coded measurements and reconstructed videos for Flutter Shutter (8x compression), Pixel-wise coded exposure (16x), and C2B (16x), with reconstruction quality of 27.82 dB / 0.908, 32.29 dB / 0.946, and 34.65 dB / 0.972 (PSNR / SSIM), respectively.

Abstract

Several coded exposure techniques have been proposed for acquiring high frame rate videos at low bandwidth. Most recently, a Coded-2-Bucket (C2B) camera has been proposed that can acquire two compressed measurements in a single exposure, unlike previously proposed coded exposure techniques, which acquire only a single measurement. Although two measurements should be better than one for effective video recovery, their advantage has not yet been established, either quantitatively or qualitatively. Here, we propose a unified learning-based framework to make such a qualitative and quantitative comparison between techniques that capture only a single coded image (Flutter Shutter, Pixel-wise coded exposure) and those that capture two measurements per exposure (C2B). Our learning-based framework consists of a shift-variant convolutional layer followed by a fully convolutional deep neural network. The proposed unified framework achieves state-of-the-art reconstructions for all three sensing techniques. Further analysis shows that when most scene points are static, the C2B sensor has a significant advantage over acquiring a single pixel-wise coded measurement. However, when most scene points undergo motion, the C2B sensor offers only a marginal benefit over the single pixel-wise coded exposure measurement.
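To make the three sensing models concrete, the sketch below shows one way to simulate the corresponding measurements from a short video clip. This is a minimal illustration using random binary codes; the function name, tensor shapes, and code statistics are assumptions for exposition, not the exact acquisition model used in the paper.

```python
import numpy as np

def simulate_measurements(video, rng=None):
    """video: (T, H, W) float array of T sub-frames captured within one exposure."""
    rng = np.random.default_rng() if rng is None else rng
    T, H, W = video.shape

    # Flutter shutter: a single global binary code over time -> one measurement.
    global_code = rng.integers(0, 2, size=T).astype(video.dtype)
    flutter = np.tensordot(global_code, video, axes=(0, 0))          # (H, W)

    # Pixel-wise coded exposure: an independent binary code per pixel,
    # still only one measurement.
    pixel_code = rng.integers(0, 2, size=(T, H, W)).astype(video.dtype)
    pixelwise = (pixel_code * video).sum(axis=0)                     # (H, W)

    # Coded-2-Bucket (C2B): light at each pixel is steered to one of two
    # buckets in every sub-frame, giving two complementary measurements.
    bucket1 = (pixel_code * video).sum(axis=0)                       # (H, W)
    bucket2 = ((1.0 - pixel_code) * video).sum(axis=0)               # (H, W)

    return flutter, pixelwise, (bucket1, bucket2)
```

The key structural difference is that the two C2B buckets use complementary codes, so together they integrate all incident light, whereas the flutter shutter and pixel-wise coded sensors each record a single coded measurement.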


Talk


[Slides]

Key takeaways

  • A shift-variant convolutional layer can be used as an efficient structure for extracting features from coded exposure images.
  • The proposed framework enables an extensive quantitative and qualitative comparison across various coded exposure techniques.
  • The Coded-2-Bucket sensor provides a significant advantage over single pixel-wise coded exposure only when most scene points are static.

Code

    We propose a unified algorithm for reconstructing video sequences from three different coded exposure techniques: flutter shutter, pixel-wise coded exposure, and the coded-2-bucket (C2B) sensor. The unified framework has two main parts: an exposure-aware feature extraction stage and a refinement stage. A shift-variant convolutional layer extracts features from the input coded exposure image(s), and the refinement stage then takes these features as input and outputs the full video sequence. A minimal sketch of this pipeline is given below the repository link.

     [GitHub]
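The sketch below illustrates the two-stage pipeline in PyTorch: an exposure-aware, shift-variant convolutional layer that predicts a different kernel at every pixel from the exposure code, followed by a small fully convolutional refinement network. The class names, layer sizes, and the way the per-pixel kernels are derived from the code are illustrative assumptions rather than the released implementation (see the GitHub repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShiftVariantConv(nn.Module):
    """Convolution whose kernel differs at every spatial location.

    The per-pixel kernels are predicted from the exposure code, so feature
    extraction is aware of which sub-frames each pixel actually integrated.
    """
    def __init__(self, in_ch, code_ch, out_ch, ksize=3):
        super().__init__()
        self.out_ch, self.ksize = out_ch, ksize
        # A 1x1 convolution predicts (out_ch * in_ch * k * k) weights per pixel.
        self.kernel_net = nn.Conv2d(code_ch, out_ch * in_ch * ksize * ksize, 1)

    def forward(self, x, code):
        # x:    (B, in_ch, H, W)   coded measurement(s)
        # code: (B, code_ch, H, W) per-pixel exposure code
        B, C, H, W = x.shape
        k = self.ksize
        kernels = self.kernel_net(code).view(B, self.out_ch, C * k * k, H * W)
        patches = F.unfold(x, k, padding=k // 2)           # (B, C*k*k, H*W)
        feats = (kernels * patches.unsqueeze(1)).sum(2)    # (B, out_ch, H*W)
        return feats.view(B, self.out_ch, H, W)


class UnifiedRecon(nn.Module):
    """Exposure-aware feature extraction followed by a refinement network."""
    def __init__(self, n_meas=2, n_frames=16, feat=64):
        super().__init__()
        self.extract = ShiftVariantConv(n_meas, n_frames, feat)
        self.refine = nn.Sequential(
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, n_frames, 3, padding=1),
        )

    def forward(self, meas, code):
        return self.refine(self.extract(meas, code))
```

For instance, with two C2B measurements and a 16-frame exposure code, UnifiedRecon(n_meas=2, n_frames=16) maps a (B, 2, H, W) measurement tensor and a (B, 16, H, W) code tensor to a (B, 16, H, W) video estimate.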


    Paper and Supplementary Material

    P. Shedligeri, Anupama S., Kaushik Mitra
    A Unified Framework for Compressive Video Recovery from Coded Exposure Techniques
    In WACV, 2021.
    (hosted on ArXiv, CVF OpenAccess)


    [Supplementary Material] [Bibtex]


    Related Publications

  • Anupama S, Prasan Shedligeri, Abhishek Pal & Kaushik Mitra (2020). Video Reconstruction by Spatio-Temporal Fusion of Blurred-Coded Image Pair. Accepted at the IEEE International Conference on Pattern Recognition (ICPR); DOI to be assigned. [Preprint] [Slides] [Supplementary] [Code]
  • Prasan Shedligeri, Anupama S & Kaushik Mitra (2021). CodedRecon: Video Reconstruction for Coded Exposure Imaging Techniques. Accepted at Software Impacts (Elsevier); DOI to be assigned. [Paper] [Code]

Acknowledgements

    The authors would like to thank Sreyas Mohan and Subeesh Vasu for their helpful discussions.
    This template was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here.