Abstract: An original input video file is encoded using a machine learning approach. The encoder performs a detailed video analysis and selects encoding parameters using a machine learning algorithm that improves over time. The encoding process uses a multi-pass approach. During a first pass, the entire video file is scanned to extract video property information that does not require in-depth analysis. The extracted data is then fed into an encoding engine, which uses artificial intelligence to produce optimized encoder settings. The video file is split into a set of time-based chunks and, in a second pass, the encoding parameters for each chunk are set and distributed to encoding nodes for parallel processing. These encoder instances probe-encode each chunk to determine its level of complexity and to derive chunk-specific encoding parameters.
Type:
Grant
Filed:
March 29, 2019
Date of Patent:
March 30, 2021
Assignee:
Bitmovin, Inc.
Inventors:
Martin Smole, Armin Trattnig, Christian Feldmann
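The two-pass chunked workflow described in the abstract can be sketched as follows. This is a minimal illustration, not the patented implementation: the chunk length, the placeholder complexity score, and the complexity-to-CRF mapping are all assumptions made for the example.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

# Hypothetical chunk record; field names are illustrative, not from the patent.
@dataclass
class Chunk:
    start: float        # seconds into the video
    duration: float     # chunk length in seconds
    complexity: float = 0.0

def first_pass(total_duration: float, chunk_len: float = 4.0) -> list[Chunk]:
    """Split the timeline into fixed-length, time-based chunks."""
    chunks, t = [], 0.0
    while t < total_duration:
        chunks.append(Chunk(start=t, duration=min(chunk_len, total_duration - t)))
        t += chunk_len
    return chunks

def probe_encode(chunk: Chunk) -> Chunk:
    """Stand-in for a fast probe encode that estimates chunk complexity.
    A real system would run a quick low-resolution encode and inspect the
    resulting bitrate/quality; here we assign a placeholder score."""
    chunk.complexity = 0.5  # placeholder score in [0, 1]
    return chunk

def encode_parameters(chunk: Chunk) -> dict:
    """Map probe complexity to chunk-specific settings (illustrative rule)."""
    crf = 28 - round(6 * chunk.complexity)  # harder content -> lower CRF
    return {"start": chunk.start, "crf": crf}

chunks = first_pass(total_duration=10.0)
with ThreadPoolExecutor() as pool:          # parallel probe encodes
    probed = list(pool.map(probe_encode, chunks))
params = [encode_parameters(c) for c in probed]
```

In a real deployment the probe encodes would run on separate encoding nodes rather than threads; the thread pool here only models the parallel fan-out.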
Abstract: A video streaming system optimizes the buffering of periods of frames of a video presentation in order to achieve a more constant perceptual quality throughout the entire presentation. An adaptation algorithm determines transmission bitrates, transmitting some periods at a lower bitrate than the channel conditions may allow while transmitting other periods at a higher bitrate. The transmission bitrates are determined based on expected-quality metadata signaled in the periods of the bitstream for the current period and following periods, in order to optimize the bitrate and the expected perceptual quality of each version of each period over time.
Type:
Application
Filed:
November 15, 2018
Publication date:
March 11, 2021
Applicant:
BITMOVIN, INC.
Inventors:
Christian FELDMANN, Martin SMOLE, Christopher MUELLER, Daniel WEINBERGER, Armin TRATTNIG
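The selection idea above can be illustrated with a small sketch: each period signals several versions with expected-quality metadata, and the client picks the cheapest version that reaches a quality target, so easy periods use less bitrate than the channel would allow and complex periods use more. The version ladders, quality numbers, and threshold rule are assumptions for illustration, not the patented algorithm.

```python
# Hypothetical expected-quality metadata per period: each period offers
# several versions as (bitrate_kbps, expected_quality) pairs. The numbers
# are illustrative, not from the patent.
periods = [
    [(1000, 70), (2000, 85), (4000, 92)],   # easy content
    [(1000, 55), (2000, 72), (4000, 88)],   # complex content
]

def select_bitrates(periods, target_quality=80):
    """For each period, pick the cheapest version whose signaled expected
    quality reaches the target, keeping perceptual quality roughly
    constant across periods."""
    choices = []
    for versions in periods:
        ok = [v for v in versions if v[1] >= target_quality]
        choices.append(min(ok) if ok else max(versions))  # fall back to best
    return choices

print(select_bitrates(periods))  # easy period at 2000 kbps, complex at 4000 kbps
```

A full implementation would optimize over the current and following periods jointly (e.g. under a buffer or bandwidth budget) rather than period by period.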
Abstract: Media playback may be controlled or adapted using behavioral player adaptation. The user and the user's physical environment are monitored via sensors. Sensor data representative of relevant user behavior and of the physical properties of the environment where the user is located is collected, aggregated, and pre-processed to determine the state of potentially relevant parameters of the sensed environment. The pre-processed sensor data is examined to determine the state of user model parameters. Machine learning may be used for the data examination; a neural network is used to learn the key parameters from the pre-processed data, which are then used for media playback adaptation and/or control.
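The pipeline in this abstract (collect and aggregate sensor data, derive environment-state features, map them to a playback decision) can be sketched as below. A trained neural network would replace the hand-written rule here; all function names, sensor fields, and thresholds are assumptions for the example.

```python
# Illustrative sketch of behavioral player adaptation; not the patented system.
def preprocess(samples: list[dict]) -> dict:
    """Aggregate raw sensor samples into environment-state features."""
    n = len(samples)
    return {
        "ambient_lux": sum(s["lux"] for s in samples) / n,
        "viewer_present": sum(s["face_detected"] for s in samples) / n > 0.5,
    }

def adapt_playback(features: dict) -> dict:
    """Stand-in for the learned model: choose playback settings from the
    inferred user/environment state."""
    if not features["viewer_present"]:
        return {"action": "pause"}                       # nobody watching
    if features["ambient_lux"] < 50:
        return {"action": "play", "brightness": "dim"}   # dark room
    return {"action": "play", "brightness": "normal"}

samples = [{"lux": 30, "face_detected": True}, {"lux": 40, "face_detected": True}]
print(adapt_playback(preprocess(samples)))
```

In the described system the mapping from features to playback settings is learned from data rather than written by hand; the rule above only fixes the interface between sensing and player control.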
Abstract: An original input video file is encoded using a machine learning approach. The encoder performs a detailed video analysis and selects encoding parameters using a machine learning algorithm that improves over time. The encoding process uses a multi-pass approach. During a first pass, the entire video file is scanned to extract video property information that does not require in-depth analysis. The extracted data is then fed into an encoding engine, which uses artificial intelligence to produce optimized encoder settings. The video file is split into a set of time-based chunks and, in a second pass, the encoding parameters for each chunk are set and distributed to encoding nodes for parallel processing. These encoder instances probe-encode each chunk to determine its level of complexity and to derive chunk-specific encoding parameters.
Type:
Application
Filed:
March 29, 2019
Publication date:
October 1, 2020
Applicant:
Bitmovin, Inc.
Inventors:
Martin Smole, Armin Trattnig, Christian Feldmann