The rapid expansion of video streaming, driven by the internet’s global reach, has significantly increased energy consumption and carbon emissions. The energy demands of information and communications technology (ICT), and of video streaming in particular, are rising both in absolute terms and as a share of global energy consumption. The problem is exacerbated by the continued dominance of fossil fuels in global energy production, and it intensified during the COVID-19 pandemic, when lockdowns and remote work drove a surge in demand; by 2022, video streaming accounted for 82% of global internet traffic. Data centers, which are crucial for supporting video streaming, consumed an estimated 205 billion kWh globally in 2020, producing substantial CO2 emissions. For instance, streaming a 30-minute Netflix show generates 1.6 kg of CO2, equivalent to driving 4 miles. This highlights the complex relationship between technological advancement and sustainability: digital technologies offer convenience and connectivity but also contribute to carbon emissions and energy consumption.
In response to this challenge, we aim to reduce video traffic over the network and, in turn, server power consumption. Our approach is a pipeline that intentionally and intelligently degrades the video on the server and then restores it on the client, using less energy than current methods. Several techniques are explored toward this goal.

Video summarization condenses content into keyframes or key fragments. Keyframes are individual frames that provide a snapshot of the video, while key fragments are short segments that retain motion and audio, offering a more dynamic representation. Video frame interpolation improves visual quality by increasing the frame rate, which smooths motion and reduces blur; it computes and inserts new frames between existing ones, typically leveraging deep learning to achieve seamless transitions.

Video resolution modification involves both downscaling and upscaling. Downscaling lowers the resolution so that videos are cheaper to store and transmit, at the cost of some visual detail. Conversely, upscaling, or video super-resolution, raises the resolution of low-quality video, increasing pixel density for a sharper appearance. Recent advances in deep neural networks have markedly improved super-resolution techniques, making them applicable in fields such as medical imaging and surveillance.

Additionally, effective video compression is crucial for transmitting and storing high-resolution content efficiently. Compression can be lossless, which preserves quality, or lossy, which sacrifices some detail to reduce file size. Finally, video restoration techniques such as denoising, deblurring, and compression-artifact reduction are essential for maintaining video quality as higher-resolution content becomes more prevalent. By integrating these methods, this proposal aims to create a more efficient and environmentally friendly video streaming process.
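To make the summarization step concrete, the following is a minimal sketch of naive keyframe extraction using OpenCV: a frame is kept whenever it differs sufficiently from the last kept frame. The function name, threshold, and frame-difference criterion are illustrative assumptions, not part of the proposal; practical summarizers rely on learned features rather than raw pixel differences.

```python
# Hypothetical sketch: naive keyframe extraction for video summarization.
# A frame is kept as a keyframe when its mean absolute difference from the
# previously kept frame exceeds a threshold (illustrative criterion only).
import cv2
import numpy as np

def extract_keyframes(path: str, diff_threshold: float = 30.0) -> list[np.ndarray]:
    """Return a list of keyframes from the video at `path` (illustrative only)."""
    capture = cv2.VideoCapture(path)
    keyframes, previous_gray = [], None
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if previous_gray is None or np.mean(cv2.absdiff(gray, previous_gray)) > diff_threshold:
            keyframes.append(frame)   # keep this frame as a snapshot of the content
            previous_gray = gray      # compare future frames against the last keyframe
    capture.release()
    return keyframes
```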
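Frame interpolation can be sketched, under strong simplifying assumptions, as synthesizing one in-between frame for each pair of consecutive frames. The equal-weight blend below is only a stand-in for the optical-flow or deep-learning models the text refers to; the function names are ours.

```python
# Hypothetical sketch: doubling the frame rate by inserting a synthetic frame
# between consecutive frames. Plain averaging is shown for clarity; real
# interpolation would use optical flow or a learned model instead.
import cv2
import numpy as np

def interpolate_midframe(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Return a frame halfway between frame_a and frame_b by equal-weight blending."""
    return cv2.addWeighted(frame_a, 0.5, frame_b, 0.5, 0)

def double_frame_rate(frames: list[np.ndarray]) -> list[np.ndarray]:
    """Insert one interpolated frame between every pair of consecutive frames."""
    if not frames:
        return []
    output = []
    for a, b in zip(frames, frames[1:]):
        output.append(a)
        output.append(interpolate_midframe(a, b))
    output.append(frames[-1])
    return output
```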
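The degrade-then-restore idea behind resolution modification might look roughly as follows: the server downscales each frame before transmission, and the client upscales it on arrival. Bicubic interpolation stands in here for the learned super-resolution model a real client would use; the scale factor and function names are assumptions for illustration.

```python
# Hypothetical sketch of the degrade-then-restore pipeline for a single frame:
# the server reduces resolution to cut traffic, the client restores it.
import cv2
import numpy as np

def degrade_frame(frame: np.ndarray, scale: float = 0.5) -> np.ndarray:
    """Server side: reduce resolution so the frame costs less to transmit."""
    height, width = frame.shape[:2]
    return cv2.resize(frame, (int(width * scale), int(height * scale)),
                      interpolation=cv2.INTER_AREA)

def restore_frame(frame: np.ndarray, target_size: tuple[int, int]) -> np.ndarray:
    """Client side: upscale back to the original (width, height). A learned
    super-resolution network would replace this bicubic step in practice."""
    return cv2.resize(frame, target_size, interpolation=cv2.INTER_CUBIC)
```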
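Lossy compression on the server side could, for example, be performed by re-encoding with ffmpeg, where the constant rate factor (CRF) trades quality for file size. The specific codec and CRF value below are illustrative choices, not part of the proposal.

```python
# Hypothetical sketch: lossy re-encoding with ffmpeg to shrink a video file.
# A higher CRF yields a smaller file at the cost of more visual loss.
import subprocess

def compress_video(src: str, dst: str, crf: int = 28) -> None:
    """Re-encode `src` to H.264 at the given constant rate factor."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-c:v", "libx264", "-crf", str(crf), dst],
        check=True,
    )
```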