Patents by Inventor Anne Aaron

Anne Aaron has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200252666
    Abstract: In various embodiments, an interpolation-based encoding application encodes a first subsequence included in a media title at each encoding point included in a first set of encoding points to generate encoded subsequences. Subsequently, the interpolation-based encoding application performs interpolation operation(s) based on the encoded subsequences to estimate a first media metric value associated with a first encoding point that is not included in the first set of encoding points. The interpolation-based encoding application then generates an encoding recipe based on the encoded subsequences and the first media metric value. The encoding recipe specifies a different encoding point for each subsequence included in the media title. After determining that the encoding recipe specifies the first encoding point for the first subsequence, the interpolation-based encoding application encodes the first subsequence at the first encoding point to generate at least a portion of an encoded version of the media title.
    Type: Application
    Filed: February 3, 2020
    Publication date: August 6, 2020
    Inventors: Glenn Van WALLENDAEL, Anne AARON, Kyle SWANSON, Jan DE COCK, Liwei GUO, Sonia BHASKAR
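The interpolation step in the abstract above can be sketched in a few lines. This is a minimal illustration, assuming the encoding points are bitrates and the media metric is a quality score; the helper name `interpolate_metric` and all numbers are made up, not taken from the patent.

```python
# Minimal sketch of the interpolation idea: estimate a metric value at an
# encoding point that was never actually encoded, from points that were.

def interpolate_metric(measured, target_bitrate):
    """Linearly interpolate a quality-metric value between the nearest
    measured encoding points (here, bitrates)."""
    points = sorted(measured.items())  # [(bitrate, quality), ...]
    for (b0, q0), (b1, q1) in zip(points, points[1:]):
        if b0 <= target_bitrate <= b1:
            t = (target_bitrate - b0) / (b1 - b0)
            return q0 + t * (q1 - q0)
    raise ValueError("target outside measured range")

# Quality measured at a sparse set of encoding points for one subsequence.
measured = {500: 70.0, 2000: 90.0}

# Estimate quality at an unencoded point; only if the final encoding recipe
# selects this point does the subsequence actually get encoded there.
estimate = interpolate_metric(measured, 1250)
```

The point of the scheme is the last comment: the expensive encode at the new point happens only after the recipe, built partly from interpolated values, selects it.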
  • Patent number: 10674180
    Abstract: In one embodiment of the present invention, an encode validator identifies and classifies errors introduced during the parallel chunk-based translation of a source to a corresponding aggregate encode. In operation, upon receiving a source for encoding, a frame difference generator creates a frame difference file for the source. A parallel encoder then distributes per-chunk encoding operations across machines and creates an aggregate encode. The encode validator decodes the aggregate encode and creates a corresponding frame difference file. Subsequently, the encode validator performs phase correlation operations between the two frame difference files to detect errors generated by encoding process faults (e.g., a dropped frame) while suppressing discrepancies inherent in encoding, such as those attributable to low bit-rate encoding.
    Type: Grant
    Filed: February 13, 2015
    Date of Patent: June 2, 2020
    Assignee: NETFLIX, INC.
    Inventors: Anne Aaron, Zhonghua Ma
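The detection idea can be sketched by correlating the two frame-difference signals and looking for a temporal shift. Note the simplification: the patent describes phase correlation (an FFT-based technique); the sketch below substitutes plain cross-correlation, which finds the same kind of lag on a toy 1-D signal. All data is illustrative.

```python
# Toy fault detector: a dropped frame in the aggregate encode shifts its
# frame-difference signal relative to the source's, which shows up as a
# nonzero best-correlation lag.

def best_lag(a, b, max_lag):
    """Return the shift of b relative to a that maximizes correlation."""
    def corr(lag):
        pairs = [(a[i], b[i + lag]) for i in range(len(a))
                 if 0 <= i + lag < len(b)]
        return sum(x * y for x, y in pairs)
    return max(range(-max_lag, max_lag + 1), key=corr)

source_diffs  = [0.1, 5.0, 0.2, 4.0, 0.1, 3.0, 0.2]
# Decoded aggregate encode with one frame dropped: signal shifted by one.
decoded_diffs = [5.0, 0.2, 4.0, 0.1, 3.0, 0.2, 0.1]

lag = best_lag(source_diffs, decoded_diffs, max_lag=2)
# A nonzero lag flags an encoding-process fault such as a dropped frame;
# small amplitude differences (low-bitrate artifacts) do not move the peak.
```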
  • Patent number: 10547856
    Abstract: A bitrate allocation engine allocates bitrates for distributed encoding of source data. Upon receiving a chunk of source data, the bitrate allocation engine generates a curve based on multiple points that each specify a different visual quality level and corresponding encoding bitrate for encoding the chunk. Subsequently, the bitrate allocation engine computes an optimized encoding bitrate based on the generated curve and an optimization factor that is associated with different visual quality levels and corresponding encoding bitrates for multiple chunks of the source data. The bitrate allocation engine then causes the chunk to be encoded at the optimized encoding bitrate. Advantageously, the resulting encoded chunk is optimized with respect to the optimization factor for multiple chunks of the source data.
    Type: Grant
    Filed: October 18, 2016
    Date of Patent: January 28, 2020
    Assignee: NETFLIX, INC.
    Inventors: Jan De Cock, Anne Aaron
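The per-chunk optimization can be illustrated with a standard Lagrangian argument. This is a hedged sketch under an assumed quality model (quality rising logarithmically with bitrate); the model, the function names, and the numbers are not from the patent.

```python
# Sketch: each chunk gets the bitrate at which its marginal quality gain
# equals a single optimization factor shared across all chunks, so complex
# chunks receive more bits than simple ones.

def optimal_bitrate(coefficient, lam):
    """For an assumed model quality(b) = coefficient * ln(b), the slope is
    coefficient / b; setting it equal to the shared factor lam (a Lagrange
    multiplier) gives the optimum b = coefficient / lam."""
    return coefficient / lam

lam = 0.004  # one optimization factor for all chunks of the source
simple_chunk  = optimal_bitrate(coefficient=4.0,  lam=lam)   # ~1000 kbps
complex_chunk = optimal_bitrate(coefficient=12.0, lam=lam)   # ~3000 kbps
```

Sweeping `lam` trades total bitrate against overall quality across the whole title, which is the sense in which each encoded chunk is "optimized with respect to the optimization factor for multiple chunks."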
  • Patent number: 10475172
    Abstract: In one embodiment of the present invention, a quality trainer and quality calculator collaborate to establish a consistent perceptual quality metric via machine learning. In a training phase, the quality trainer leverages machine intelligence techniques to create a perceptual quality model that combines objective metrics to optimally track a subjective metric assigned during viewings of training videos. Subsequently, the quality calculator applies the perceptual quality model to values for the objective metrics for a target video, thereby generating a perceptual quality score for the target video. In this fashion, the perceptual quality model judiciously fuses the objective metrics for the target video based on the visual feedback processed during the training phase. Since the contribution of each objective metric to the perceptual quality score is determined based on empirical data, the perceptual quality score is a more accurate assessment of observed video quality than conventional objective metrics.
    Type: Grant
    Filed: June 25, 2018
    Date of Patent: November 12, 2019
    Assignee: NETFLIX, INC.
    Inventors: Anne Aaron, Dae Kim, Yu-Chieh Lin, David Ronca, Andy Schuler, Kuyen Tsao, Chi-Hao Wu
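The training phase described above fits a model that maps objective metrics to subjective scores. The patent's model fuses several objective metrics with machine learning; as the simplest possible stand-in, the sketch below fits a single-feature least-squares line to illustrative training data (all numbers invented).

```python
# Minimal stand-in for the quality trainer: learn a mapping from one
# objective metric to subjective scores, then apply it to a target video.

def fit_linear(metric_values, subjective_scores):
    """Least-squares fit of subjective scores against one objective metric."""
    n = len(metric_values)
    mx = sum(metric_values) / n
    my = sum(subjective_scores) / n
    slope = (sum((x - mx) * (y - my)
                 for x, y in zip(metric_values, subjective_scores))
             / sum((x - mx) ** 2 for x in metric_values))
    return slope, my - slope * mx

# Objective metric values and subjective scores from rated training clips.
slope, intercept = fit_linear([0.2, 0.4, 0.6, 0.8],
                              [20.0, 40.0, 60.0, 80.0])

# "Quality calculator" step: score a target video from its metric value.
perceptual_score = slope * 0.5 + intercept
```

The key property carries over from this toy version: each metric's contribution to the final score is determined by empirical (subjectively rated) data rather than fixed by hand.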
  • Patent number: 10438335
    Abstract: In one embodiment of the present invention, a quality trainer and quality calculator collaborate to establish a consistent perceptual quality metric via machine learning. In a training phase, the quality trainer leverages machine intelligence techniques to create a perceptual quality model that combines objective metrics to optimally track a subjective metric assigned during viewings of training videos. Subsequently, the quality calculator applies the perceptual quality model to values for the objective metrics for a target video, thereby generating a perceptual quality score for the target video. In this fashion, the perceptual quality model judiciously fuses the objective metrics for the target video based on the visual feedback processed during the training phase. Since the contribution of each objective metric to the perceptual quality score is determined based on empirical data, the perceptual quality score is a more accurate assessment of observed video quality than conventional objective metrics.
    Type: Grant
    Filed: June 25, 2018
    Date of Patent: October 8, 2019
    Assignee: NETFLIX, INC.
    Inventors: Anne Aaron, Dae Kim, Yu-Chieh Lin, David Ronca, Andy Schuler, Kuyen Tsao, Chi-Hao Wu
  • Patent number: 10404986
    Abstract: In one embodiment of the present invention, an encoding bitrate ladder selector tailors bitrate ladders to the complexity of source data. Upon receiving source data, a complexity analyzer configures an encoder to repeatedly encode the source data, setting a constant quantization parameter to a different value for each encode. The complexity analyzer processes the encoding results to determine an equation that relates a visual quality metric to an encoding bitrate. The bucketing unit solves this equation to estimate a bucketing bitrate at a predetermined value of the visual quality metric. Based on the bucketing bitrate, the bucketing unit assigns the source data to a complexity bucket having an associated, predetermined bitrate ladder. Advantageously, sagaciously selecting the bitrate ladder enables encoding that optimally reflects tradeoffs between quality and resources (e.g., storage and bandwidth) across a variety of source data types instead of a single, “typical” source data type.
    Type: Grant
    Filed: March 30, 2015
    Date of Patent: September 3, 2019
    Assignee: NETFLIX, INC.
    Inventors: Anne Aaron, David Ronca, Ioannis Katsavounidis, Andy Schuler
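The bucketing step can be sketched end to end. This assumes a fitted relation of the form quality = a + b·ln(bitrate) from the constant-QP test encodes; the coefficients, thresholds, and ladder names are illustrative, not from the patent.

```python
import math

def bucketing_bitrate(a, b, target_quality):
    """Solve quality = a + b * ln(bitrate) for the bitrate at which the
    source reaches a predetermined quality value."""
    return math.exp((target_quality - a) / b)

def assign_bucket(bitrate, thresholds=(1000, 3000)):
    """Map the bucketing bitrate to a complexity bucket, each of which has
    its own predetermined bitrate ladder (thresholds are invented)."""
    if bitrate < thresholds[0]:
        return "low_complexity_ladder"
    if bitrate < thresholds[1]:
        return "medium_complexity_ladder"
    return "high_complexity_ladder"

# A hard-to-encode source needs a high bitrate to hit the target quality,
# so it lands in the high-complexity bucket.
rate = bucketing_bitrate(a=-10.0, b=10.0, target_quality=80.0)
ladder = assign_bucket(rate)
```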
  • Publication number: 20180343458
    Abstract: In various embodiments, a sequence-based encoding application partitions a set of shot sequences associated with a media title into multiple clusters based on at least one feature that characterizes media content and/or encoded media content associated with the media title. The clusters include at least a first cluster and a second cluster. The sequence-based encoding application encodes a first shot sequence using a first operating point to generate a first encoded shot sequence. The first shot sequence and the first operating point are associated with the first cluster. By contrast, the sequence-based encoding application encodes a second shot sequence using a second operating point to generate a second encoded shot sequence. The second shot sequence and the second operating point are associated with the second cluster. Subsequently, the sequence-based encoding application generates an encoded media sequence based on the first encoded shot sequence and the second encoded shot sequence.
    Type: Application
    Filed: August 3, 2018
    Publication date: November 29, 2018
    Inventors: Ioannis KATSAVOUNIDIS, Anne AARON, Jan DE COCK
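The cluster-to-operating-point mapping can be sketched with a single invented feature. Here each shot is characterized by a made-up "complexity" value, a threshold splits shots into two clusters, and every shot inherits its cluster's operating point (resolution, bitrate); none of these specifics come from the patent.

```python
# Hypothetical per-cluster operating points: (resolution height, kbps).
OPERATING_POINTS = {"low": (720, 1500), "high": (1080, 4000)}

def cluster_of(complexity, threshold=0.5):
    """Assign a shot to a cluster based on one illustrative feature."""
    return "high" if complexity >= threshold else "low"

# Per-shot feature values for one media title (invented).
shots = {"shot_1": 0.2, "shot_2": 0.7, "shot_3": 0.9}

# Every shot in a cluster is encoded with that cluster's operating point;
# concatenating the encoded shots yields the encoded media sequence.
recipe = {name: OPERATING_POINTS[cluster_of(c)] for name, c in shots.items()}
```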
  • Publication number: 20180300869
    Abstract: In one embodiment of the present invention, a quality trainer and quality calculator collaborate to establish a consistent perceptual quality metric via machine learning. In a training phase, the quality trainer leverages machine intelligence techniques to create a perceptual quality model that combines objective metrics to optimally track a subjective metric assigned during viewings of training videos. Subsequently, the quality calculator applies the perceptual quality model to values for the objective metrics for a target video, thereby generating a perceptual quality score for the target video. In this fashion, the perceptual quality model judiciously fuses the objective metrics for the target video based on the visual feedback processed during the training phase. Since the contribution of each objective metric to the perceptual quality score is determined based on empirical data, the perceptual quality score is a more accurate assessment of observed video quality than conventional objective metrics.
    Type: Application
    Filed: June 25, 2018
    Publication date: October 18, 2018
    Inventors: Anne AARON, Dae KIM, Yu-Chieh LIN, David RONCA, Andy SCHULER, Kuyen TSAO, Chi-Hao WU
  • Publication number: 20180302456
    Abstract: In various embodiments, an iterative encoding application generates shot encode points based on a first set of encoding points and a first shot sequence associated with a media title. The iterative encoding application performs convex hull operations across the shot encode points to generate a first convex hull. Subsequently, the iterative encoding application generates encoded media sequences based on the first convex hull and a second convex hull that is associated with both a second shot sequence associated with the media title and a second set of encoding points. The iterative encoding application determines a first optimized encoded media sequence and a second optimized encoded media sequence from the encoded media sequences based on, respectively, a first target metric value and a second target metric value for a media metric. Portions of the optimized encoded media sequences are subsequently streamed to endpoint devices during playback of the media title.
    Type: Application
    Filed: June 22, 2018
    Publication date: October 18, 2018
    Inventors: Ioannis KATSAVOUNIDIS, Anne AARON, Jan DE COCK
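The convex hull operation over shot encode points can be illustrated by its core filtering effect. The sketch below keeps only the Pareto-efficient side of the rate-quality cloud, which is a simplification: a full convex-hull pass would additionally drop efficient but concave points. All data is invented.

```python
def pareto_front(points):
    """Keep only shot encode points that no other point dominates, i.e.
    no other point has lower bitrate AND higher quality."""
    front = []
    best_quality = float("-inf")
    for bitrate, quality in sorted(points):
        if quality > best_quality:
            front.append((bitrate, quality))
            best_quality = quality
    return front

# (bitrate kbps, quality score) for one shot sequence at several
# encoding points -- invented numbers.
shot_encodes = [(500, 60.0), (800, 58.0), (1200, 75.0), (2500, 90.0)]
hull = pareto_front(shot_encodes)
# (800, 58.0) costs more bits than (500, 60.0) yet scores lower, so it
# can never appear in an optimized encoded media sequence and is dropped.
```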
  • Patent number: 10007977
    Abstract: In one embodiment of the present invention, a quality trainer and quality calculator collaborate to establish a consistent perceptual quality metric via machine learning. In a training phase, the quality trainer leverages machine intelligence techniques to create a perceptual quality model that combines objective metrics to optimally track a subjective metric assigned during viewings of training videos. Subsequently, the quality calculator applies the perceptual quality model to values for the objective metrics for a target video, thereby generating a perceptual quality score for the target video. In this fashion, the perceptual quality model judiciously fuses the objective metrics for the target video based on the visual feedback processed during the training phase. Since the contribution of each objective metric to the perceptual quality score is determined based on empirical data, the perceptual quality score is a more accurate assessment of observed video quality than conventional objective metrics.
    Type: Grant
    Filed: May 11, 2015
    Date of Patent: June 26, 2018
    Assignee: NETFLIX, INC.
    Inventors: Anne Aaron, Dae Kim, Yu-Chieh Lin, David Ronca, Andy Schuler, Kuyen Tsao, Chi-Hao Wu
  • Publication number: 20180167619
    Abstract: In various embodiments, a perceptual quality application computes an absolute quality score for encoded video content. In operation, the perceptual quality application selects a model based on the spatial resolution of the video content from which the encoded video content is derived. The model associates a set of objective values for a set of objective quality metrics with an absolute quality score. The perceptual quality application determines a set of target objective values for the objective quality metrics based on the encoded video content. Subsequently, the perceptual quality application computes the absolute quality score for the encoded video content based on the selected model and the set of target objective values. Because the absolute quality score is independent of the quality of the video content, the absolute quality score accurately reflects the perceived quality of a wide range of encoded video content when decoded and viewed.
    Type: Application
    Filed: October 12, 2017
    Publication date: June 14, 2018
    Inventors: Zhi LI, Anne AARON, Anush MOORTHY, Christos BAMPIS
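The model-selection-then-fusion flow described above can be sketched with two invented per-resolution models. The resolutions, metric names, weights, and bias values below are all illustrative assumptions, not the patent's.

```python
# Hypothetical per-resolution models: each maps a set of objective metric
# values to an absolute quality score with its own learned weights.
MODELS = {
    "1080p": {"weights": {"detail": 0.6, "motion": 0.4}, "bias": 5.0},
    "480p":  {"weights": {"detail": 0.8, "motion": 0.2}, "bias": 2.0},
}

def absolute_quality(resolution, objective_values):
    """Select the model matching the source video's spatial resolution,
    then fuse the target objective values into one absolute score."""
    model = MODELS[resolution]
    return model["bias"] + sum(
        w * objective_values[name] for name, w in model["weights"].items())

# Target objective values measured on the encoded video (invented).
score = absolute_quality("1080p", {"detail": 80.0, "motion": 60.0})
```

Because the score is anchored to the model for the source's resolution rather than to the particular source content, it is comparable across titles, which is what "absolute" means here.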
  • Publication number: 20180167620
    Abstract: In various embodiments, a perceptual quality application determines an absolute quality score for encoded video content viewed on a target viewing device. In operation, the perceptual quality application determines a baseline absolute quality score for the encoded video content viewed on a baseline viewing device. Subsequently, the perceptual quality application determines that a target value for a type of the target viewing device does not match a base value for the type of the baseline viewing device. The perceptual quality application computes an absolute quality score for the encoded video content viewed on the target viewing device based on the baseline absolute quality score and the target value. Because the absolute quality score is independent of the viewing device, the absolute quality score accurately reflects the perceived quality of a wide range of encoded video content when decoded and viewed on a viewing device.
    Type: Application
    Filed: October 12, 2017
    Publication date: June 14, 2018
    Inventors: Zhi LI, Anne AARON, Anush MOORTHY, Christos BAMPIS
  • Publication number: 20180109799
    Abstract: In one embodiment of the present invention, a bitrate allocation engine allocates bitrates for distributed encoding of source data. Upon receiving a chunk of source data, the bitrate allocation engine generates a curve based on multiple points that each specify a different visual quality level and corresponding encoding bitrate for encoding the chunk. Subsequently, the bitrate allocation engine computes an optimized encoding bitrate based on the generated curve and an optimization factor that is associated with different visual quality levels and corresponding encoding bitrates for multiple chunks of the source data. The bitrate allocation engine then causes the chunk to be encoded at the optimized encoding bitrate. Advantageously, the resulting encoded chunk is optimized with respect to the optimization factor for multiple chunks of the source data.
    Type: Application
    Filed: October 18, 2016
    Publication date: April 19, 2018
    Inventors: Jan De Cock, Anne Aaron
  • Publication number: 20170295374
    Abstract: In various embodiments, a quality trainer trains a model that computes a value for a perceptual video quality metric for encoded video content. During a pre-training phase, the quality trainer partitions baseline values for metrics that describe baseline encoded video content into partitions based on genre. The quality trainer then performs cross-validation operations on the partitions to optimize hyperparameters associated with the model. Subsequently, during a training phase, the quality trainer performs training operations on the model that includes the optimized hyperparameters based on the baseline values for the metrics to generate a trained model. The trained model accurately tracks the video quality for the baseline encoded video content. Further, because the cross-validation operations minimize any potential overfitting, the trained model accurately and consistently predicts perceived video quality for non-baseline encoded video content across a wide range of genres.
    Type: Application
    Filed: July 11, 2016
    Publication date: October 12, 2017
    Inventors: Anne AARON, Zhi LI, Todd GOODALL
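The genre-partitioned cross-validation in the pre-training phase amounts to leave-one-genre-out splitting: a hyperparameter choice only survives if it works on a genre it never trained on. A minimal sketch, with invented sample data:

```python
def genre_folds(samples):
    """Yield leave-one-genre-out splits. Each fold holds out every clip of
    one genre, so hyperparameters that overfit a genre score poorly."""
    genres = sorted({genre for genre, _ in samples})
    for held_out in genres:
        train = [s for s in samples if s[0] != held_out]
        test = [s for s in samples if s[0] == held_out]
        yield held_out, train, test

# (genre, clip id) pairs standing in for baseline metric values.
samples = [("action", 1), ("action", 2), ("drama", 3), ("anime", 4)]
folds = list(genre_folds(samples))
# Three genres -> three folds; the "action" fold holds out two clips.
```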
  • Publication number: 20160335754
    Abstract: In one embodiment of the present invention, a quality trainer and quality calculator collaborate to establish a consistent perceptual quality metric via machine learning. In a training phase, the quality trainer leverages machine intelligence techniques to create a perceptual quality model that combines objective metrics to optimally track a subjective metric assigned during viewings of training videos. Subsequently, the quality calculator applies the perceptual quality model to values for the objective metrics for a target video, thereby generating a perceptual quality score for the target video. In this fashion, the perceptual quality model judiciously fuses the objective metrics for the target video based on the visual feedback processed during the training phase. Since the contribution of each objective metric to the perceptual quality score is determined based on empirical data, the perceptual quality score is a more accurate assessment of observed video quality than conventional objective metrics.
    Type: Application
    Filed: May 11, 2015
    Publication date: November 17, 2016
    Inventors: Anne Aaron, Dae Kim, Yu-Chieh Lin, David Ronca, Andy Schuler, Kuyen Tsao, Chi-Hao Wu
  • Publication number: 20160295216
    Abstract: In one embodiment of the present invention, an encoding bitrate ladder selector tailors bitrate ladders to the complexity of source data. Upon receiving source data, a complexity analyzer configures an encoder to repeatedly encode the source data, setting a constant quantization parameter to a different value for each encode. The complexity analyzer processes the encoding results to determine an equation that relates a visual quality metric to an encoding bitrate. The bucketing unit solves this equation to estimate a bucketing bitrate at a predetermined value of the visual quality metric. Based on the bucketing bitrate, the bucketing unit assigns the source data to a complexity bucket having an associated, predetermined bitrate ladder. Advantageously, sagaciously selecting the bitrate ladder enables encoding that optimally reflects tradeoffs between quality and resources (e.g., storage and bandwidth) across a variety of source data types instead of a single, “typical” source data type.
    Type: Application
    Filed: March 30, 2015
    Publication date: October 6, 2016
    Inventors: Anne AARON, David RONCA, Ioannis KATSAVOUNIDIS, Andy SCHULER
  • Publication number: 20160241878
    Abstract: In one embodiment of the present invention, an encode validator identifies and classifies errors introduced during the parallel chunk-based translation of a source to a corresponding aggregate encode. In operation, upon receiving a source for encoding, a frame difference generator creates a frame difference file for the source. A parallel encoder then distributes per-chunk encoding operations across machines and creates an aggregate encode. The encode validator decodes the aggregate encode and creates a corresponding frame difference file. Subsequently, the encode validator performs phase correlation operations between the two frame difference files to detect errors generated by encoding process faults (e.g., a dropped frame) while suppressing discrepancies inherent in encoding, such as those attributable to low bit-rate encoding.
    Type: Application
    Filed: February 13, 2015
    Publication date: August 18, 2016
    Inventors: Anne AARON, Zhonghua MA
  • Publication number: 20090327918
    Abstract: A method of formatting information for transmission over a peer-to-peer communication network is provided. The method comprises identifying a graphical nature of the information, and capturing the information based on the graphical nature. The method further comprises identifying a graphical content type associated with the information, and encoding the information based on the graphical content type.
    Type: Application
    Filed: April 30, 2008
    Publication date: December 31, 2009
    Inventors: Anne Aaron, Siddhartha Annapureddy, Pierpaolo Baccichet, Bernd Girod, Vivek Gupta, Iouri Poutivski, Uri Raz, Eric Setton
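The content-type-driven encoding choice in the abstract above can be sketched as a simple dispatch table. The media types, content types, and codec labels below are invented placeholders; the patent specifies neither.

```python
# Hypothetical dispatch: pick an encoding strategy from the identified
# graphical content type before transmission over the peer-to-peer network.
ENCODER_TABLE = {
    ("graphics", "text"):   "lossless",          # crisp edges need lossless
    ("graphics", "image"):  "still_image_codec",
    ("video",    "motion"): "video_codec",
}

def choose_encoder(media_type, content_type):
    """Map the captured information's identified types to an encoder,
    falling back to a generic codec for unknown combinations."""
    return ENCODER_TABLE.get((media_type, content_type), "generic")

codec = choose_encoder("graphics", "text")
```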
  • Publication number: 20090327917
    Abstract: A method of sharing information associated with a selected application is provided. The method comprises identifying a media type associated with the information, and capturing the information based on the media type. The method further comprises identifying a content type associated with the information, the content type being related to the media type, encoding the information based on the content type, and providing access to the encoded information over a communication network.
    Type: Application
    Filed: April 30, 2008
    Publication date: December 31, 2009
    Inventors: Anne Aaron, Siddhartha Annapureddy, Pierpaolo Baccichet, Bernd Girod, Vivek Gupta, Iouri Poutivski, Uri Raz, Eric Setton