11/08/2020

Neural-enhanced live streaming: Improving live video ingest via online learning

Jaehong Kim, Youngmok Jung, Hyunho Yeo, Juncheol Ye, Dongsu Han

Keywords: super-resolution, live streaming, online learning, video delivery, deep neural networks

Abstract: Live video accounts for a significant volume of today's Internet video. Despite a large number of efforts to enhance user quality of experience (QoE) on both the ingest and distribution sides of live video, the fundamental limitation is that the streamer's upstream bandwidth and computational capacity constrain the quality of experience of thousands of viewers. To overcome this limitation, we design LiveNAS, a new live video ingest framework that enhances the origin stream's quality by leveraging computation at ingest servers. Our ingest server applies neural super-resolution to the original stream, while imposing minimal overhead on ingest clients. LiveNAS employs online learning to maximize the quality gain and dynamically adjusts its resource use to the real-time quality improvement. LiveNAS delivers high-quality live streams up to 4K resolution, outperforming WebRTC by 1.96 dB on average in Peak Signal-to-Noise Ratio (PSNR) on real video streams and network traces, which leads to 12%-69% QoE improvement for live stream viewers.
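The sketch below illustrates the core idea described in the abstract: an ingest server fine-tunes a super-resolution network online, using ground-truth patches from the live stream, and tracks the real-time PSNR gain over a naive upscaler so resource use can follow the quality improvement. This is a minimal illustration, not the authors' implementation; the TinySR model, the 2x scale factor, the patch sizes, and the Adam learning rate are all assumptions for the example.

```python
# Minimal sketch (not the LiveNAS code) of online-learning super-resolution:
# fine-tune a small SR network on (low-res, ground-truth) patch pairs and
# measure the live PSNR gain over bicubic upscaling.
import torch
import torch.nn as nn
import torch.nn.functional as F

SCALE = 2  # assumed upscaling factor for this example


class TinySR(nn.Module):
    """Toy super-resolution CNN; a stand-in for the paper's SR model."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 3 * SCALE * SCALE, 3, padding=1),
        )
        self.shuffle = nn.PixelShuffle(SCALE)

    def forward(self, lr):
        return self.shuffle(self.body(lr))


def psnr(x, y):
    """Peak Signal-to-Noise Ratio for tensors in [0, 1]."""
    mse = F.mse_loss(x, y)
    return -10.0 * torch.log10(mse + 1e-12)


model = TinySR()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)


def online_step(lr_patch, hr_patch):
    """One online-learning step on a patch pair sampled from the stream."""
    opt.zero_grad()
    loss = F.l1_loss(model(lr_patch), hr_patch)
    loss.backward()
    opt.step()
    # Quality gain of SR over bicubic upscaling; a LiveNAS-style scheduler
    # could use this signal to decide whether further training pays off.
    with torch.no_grad():
        baseline = F.interpolate(lr_patch, scale_factor=SCALE,
                                 mode="bicubic", align_corners=False)
        gain = psnr(model(lr_patch), hr_patch) - psnr(baseline, hr_patch)
    return loss.item(), gain.item()


# Simulated stream: random patches stand in for decoded video frames.
for step in range(5):
    hr = torch.rand(1, 3, 64, 64)  # ground-truth patch from the client
    lr = F.interpolate(hr, scale_factor=1 / SCALE,
                       mode="bicubic", align_corners=False)
    loss, gain = online_step(lr, hr)
    print(f"step {step}: L1 loss {loss:.4f}, PSNR gain {gain:+.2f} dB")
```

Online fine-tuning on the stream's own content is what distinguishes this setup from a pre-trained, one-size-fits-all model: the network adapts to the specific streamer's scenes, which is where the quality gain comes from.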

Talk and paper published at the SIGCOMM 2020 virtual conference.
