Saturday, April 27, 2024

 

ADOBE


Adobe researchers demonstrate progress of VideoGigaGAN AI to upscale low-quality videos while maintaining high detail level

Adobe researchers have demonstrated the progress of their VideoGigaGAN AI, which upscales low-quality videos. The visual quality of the generated videos surpasses that of other AIs tested, but more research is needed to overcome notable limitations.
Adobe researchers have demonstrated the current state of their VideoGigaGAN AI for upscaling low-quality videos. Once fully developed, the AI could generate high-quality videos without resorting to expensive reshoots. Adobe improves upon prior work by reducing artifacts and flicker while retaining fine detail in processed videos.

Image upscaling and super-resolution technology has been used for many years to improve the quality and resolution of low-quality pictures. Some Sony Cybershot cameras use Sony's By Pixel Super Resolution technology to upscale low-resolution images with the help of a database of reference picture data, but the reliance on discrete pixel information limits upscaling to two to three times the original size. More recently, Generative Adversarial Networks (GANs) trained on billions of images can upscale images 8x and beyond.

Adobe Researchers work on upscaling low-quality videos using VideoGigaGAN AI. (Source: Adobe Research)

Applying such techniques to videos is challenging because upscaling introduces aliasing and stutter. Smoothing image details can eliminate these issues, but at the cost of poorer quality. VideoGigaGAN uses several techniques to work around these limitations, including object motion tracking, image blurring, and detail learning and repainting. Still, the AI does not upscale small text or long video clips well, so more research is required. In the meantime, readers can capture high-quality videos with a top-notch DSLR to avoid needless upscaling.

VideoGigaGAN - general system diagram. (Source: Adobe Research)

Technical details: To maintain smooth video flow between frames over time, a flow-guided propagation module is added before the main GAN. It 'learns' how objects move across time in the original input so that the same smooth movement is reproduced in the upscaled video. The upsampling layers in the GAN also incorporate temporal attention layers that help keep frame transitions smooth.
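The core idea of flow-guided propagation can be illustrated with a toy NumPy sketch: a per-pixel optical-flow field tells each pixel in the current frame where to sample from in the previous frame, so temporal information is carried forward along object motion. This is only an illustrative sketch under simple assumptions (raw single-channel pixels, nearest-neighbor sampling); Adobe's module operates on learned features with bilinear warping, and all names here are hypothetical.

```python
import numpy as np

def warp_by_flow(prev_frame, flow):
    """Backward-warp prev_frame toward the current frame using a dense flow.

    prev_frame: (H, W) array of pixel values.
    flow: (H, W, 2) array giving per-pixel (dy, dx) sampling offsets.
    Nearest-neighbor sampling keeps the sketch short; a real system
    would use bilinear sampling on feature maps, not raw pixels.
    """
    H, W = prev_frame.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Each output pixel (y, x) samples prev_frame at (y + dy, x + dx).
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, W - 1)
    return prev_frame[src_y, src_x]

# A frame with one bright pixel, and a flow field that shifts content
# one pixel to the right between frames.
frame = np.zeros((4, 4))
frame[1, 1] = 1.0
flow = np.zeros((4, 4, 2))
flow[..., 1] = -1.0  # current pixel (y, x) samples previous pixel (y, x-1)
warped = warp_by_flow(frame, flow)
```

After warping, the bright pixel appears at column 2 instead of column 1, so the propagated frame is aligned with the object's new position before the GAN upsamples it.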
To tackle aliasing, frames are pushed through an anti-aliasing block in the middle of the GAN, which unfortunately reduces image quality by blurring detail. This results in an upscaled video with smooth motion and no aliasing, but soft image detail. VideoGigaGAN works around this by introducing a high-frequency shuttle that pulls fine detail from the GAN's initial downsampling layers and re-injects it into the later upsampling layers. The result of these multiple layers of image processing is a super-resolution video that retains a high level of detail without aliasing or flicker.
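The high-frequency shuttle is essentially a skip connection that carries the detail a low-pass (anti-aliasing) filter removes. A minimal NumPy sketch of the principle: split a feature map into a blurred low-frequency part and a high-frequency residual, let only the low-frequency part travel the anti-aliased path, then add the residual back at the end. This is a toy decomposition under assumed shapes and a box blur stand-in, not Adobe's actual filter bank.

```python
import numpy as np

def box_blur(x, k=3):
    """Box blur as a stand-in for the anti-aliasing low-pass filter."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.zeros_like(x, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += xp[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out / (k * k)

# Hypothetical encoder feature map: a smooth ramp plus one fine detail
# that the low-pass path alone would blur away.
feat = np.add.outer(np.linspace(0, 1, 8), np.linspace(0, 1, 8))
feat[3, 3] += 0.5

low = box_blur(feat)           # what survives the anti-aliased GAN path
high = feat - low              # high-frequency residual: the shuttle payload
decoder_out = low              # stand-in for the blurred upsampled features
restored = decoder_out + high  # skip connection re-injects the fine detail
```

In the toy case the residual restores the features exactly; in the real network the shuttle feeds encoder detail into decoder layers that have already been upsampled, so the recovery is learned rather than exact.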

mundophone

