Patents Assigned to Euclid Discoveries, LLC
-
Patent number: 11350105
Abstract: Videos may be characterized by objective metrics that quantify video quality. Embodiments are directed to target bitrate prediction methods in which one or more objective metrics may serve as inputs into a model that predicts a mean opinion score (MOS), a measure of perceptual quality, as a function of metric values. The model may be derived by generating training data through conducting subjective tests on a set of video encodings, obtaining MOS data from the subjective tests, and correlating the MOS data with metric measurements on the training data. The MOS predictions may be extended to predict the target (encoding) bitrate that achieves a desired MOS value. The target bitrate prediction methods may be applied to segments of a video. The methods may be made computationally faster by applying temporal subsampling.
Type: Grant
Filed: January 8, 2021
Date of Patent: May 31, 2022
Assignee: Euclid Discoveries, LLC
Inventors: Dane P. Kottke, Katherine H. Cornog, John J. Guo, Myo Tun, Jeyun Lee, Nigel Lee
-
Patent number: 10757419
Abstract: Videos may be characterized by objective metrics that quantify video quality. Embodiments are directed to target bitrate prediction methods in which one or more objective metrics may serve as inputs into a model that predicts a mean opinion score (MOS), a measure of perceptual quality, as a function of metric values. The model may be derived by generating training data through conducting subjective tests on a set of video encodings, obtaining MOS data from the subjective tests, and correlating the MOS data with metric measurements on the training data. The MOS predictions may be extended to predict the target (encoding) bitrate that achieves a desired MOS value. The target bitrate prediction methods may be applied to segments of a video. The methods may be made computationally faster by applying temporal subsampling.
Type: Grant
Filed: May 23, 2019
Date of Patent: August 25, 2020
Assignee: Euclid Discoveries, LLC
Inventors: Dane P. Kottke, Katherine H. Cornog, John J. Guo, Myo Tun, Jeyun Lee, Nigel Lee
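The two abstracts above describe fitting a model from objective metric values to MOS and then inverting it to choose an encoding bitrate. A minimal sketch of that idea, with an illustrative linear model and made-up training data and bitrate curve (none of it taken from the patents), could look like:

```python
# Hypothetical sketch: fit a model mapping an objective quality metric to
# MOS, then invert it to pick the lowest bitrate meeting a desired MOS.
# All values and the linear model choice are illustrative assumptions.

def fit_linear(xs, ys):
    """Ordinary least squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Training data: objective metric scores and MOS from subjective tests.
metric_scores = [30.0, 34.0, 38.0, 42.0]   # e.g. a PSNR-like metric
mos_scores    = [2.1, 3.0, 3.9, 4.6]

a, b = fit_linear(metric_scores, mos_scores)

def predict_mos(metric):
    return a * metric + b

def target_bitrate(desired_mos, bitrate_to_metric):
    """Scan a bitrate -> metric curve for the lowest bitrate whose
    predicted MOS reaches the target; None if no bitrate suffices."""
    for kbps, metric in sorted(bitrate_to_metric.items()):
        if predict_mos(metric) >= desired_mos:
            return kbps
    return None

curve = {500: 31.0, 1000: 35.0, 2000: 39.0, 4000: 43.0}
print(target_bitrate(3.5, curve))  # → 2000
```

Per-segment prediction, as the abstract mentions, would simply apply `target_bitrate` to each segment's own bitrate-to-metric curve.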
-
Patent number: 10097851
Abstract: Perceptual statistics are used to compute importance maps that indicate which regions of a video frame are important to the human visual system. Importance maps may be generated from encoders that produce motion vectors and employ motion estimation for inter-prediction. The temporal contrast sensitivity function (TCSF) may be computed from the encoder's motion vectors. Quality metrics may be used to construct a true motion vector map (TMVM), which refines the TCSF. Spatial complexity maps (SCMs) can be calculated from simple metrics (e.g., block variance, block luminance, SSIM, and edge detection). Importance maps with TCSF, TMVM, and SCM may be used to modify the standard rate-distortion optimization criterion for selecting the optimum encoding solution. Importance maps may modify encoder quantization. The spatial information for the importance maps may be provided by a lookup table based on block variance, where negative and positive spatial QP offsets for block variances are provided.
Type: Grant
Filed: November 18, 2016
Date of Patent: October 9, 2018
Assignee: Euclid Discoveries, LLC
Inventors: Nigel Lee, Sangseok Park, Myo Tun, Dane P. Kottke, Jeyun Lee, Christopher Weed
-
Patent number: 10091507
Abstract: Perceptual statistics may be used to compute importance maps that indicate which regions of a video frame are important to the human visual system. Importance maps may be applied to the video encoding process to enhance the quality of encoded bitstreams. The temporal contrast sensitivity function (TCSF) may be computed from the encoder's motion vectors. Motion vector quality metrics may be used to construct a true motion vector map (TMVM) that can be used to refine the TCSF. Spatial complexity maps (SCMs) can be calculated from metrics such as block variance, block luminance, SSIM, and edge strength, and the SCMs can be combined with the TCSF to obtain a unified importance map. Importance maps may be used to improve encoding by modifying the criterion for selecting optimum encoding solutions or by modifying the quantization for each target block to be encoded.
Type: Grant
Filed: September 3, 2015
Date of Patent: October 2, 2018
Assignee: Euclid Discoveries, LLC
Inventors: Nigel Lee, Sangseok Park, Myo Tun, Dane P. Kottke, Jeyun Lee, Christopher Weed
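The importance-map abstracts above mention combining a temporal weight with spatial complexity and deriving per-block QP offsets from a block-variance lookup table. A toy sketch of those two mechanics, with hypothetical thresholds and a simplified combination rule that is not taken from the patents, might be:

```python
# Illustrative sketch: combine a temporal weight (a stand-in for the TCSF
# value at a block) with spatial complexity into a per-block importance
# score, and map block variance to a QP offset via a lookup table.
# Thresholds and the combining formula are assumptions, not patent text.

def block_variance(block):
    n = len(block)
    m = sum(block) / n
    return sum((v - m) ** 2 for v in block) / n

def qp_offset(variance, table=((50.0, -2), (200.0, 0))):
    """Smooth (low-variance) blocks get a negative QP offset, i.e. finer
    quantization; busy blocks get a positive one."""
    for threshold, offset in table:
        if variance < threshold:
            return offset
    return +2

def importance(temporal_weight, spatial_complexity):
    """Unified importance: temporally visible and spatially simple
    regions are weighted most heavily."""
    return temporal_weight / (1.0 + spatial_complexity)

smooth = [100, 101, 99, 100]
busy = [10, 200, 30, 250]
print(qp_offset(block_variance(smooth)), qp_offset(block_variance(busy)))
# prints: -2 2
```

An encoder would fold `importance` into its rate-distortion criterion (weighting distortion per block) and add `qp_offset` to each block's base QP.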
-
Patent number: 9743078
Abstract: A model-based compression codec applies higher-level modeling to produce better predictions than can be found through conventional block-based motion estimation and compensation. Computer-vision-based feature and object detection algorithms identify regions of interest throughout the video datacube. The detected features and objects are modeled with a compact set of parameters, and similar feature/object instances are associated across frames. Associated features/objects are formed into tracks and related to specific blocks of video data to be encoded. The tracking information is used to produce model-based predictions for those blocks of data, enabling more efficient navigation of the prediction search space than is typically achievable through conventional motion estimation methods. A hybrid framework enables modeling of data at multiple fidelities and selects the appropriate level of modeling for each portion of video data.
Type: Grant
Filed: March 12, 2013
Date of Patent: August 22, 2017
Assignee: Euclid Discoveries, LLC
Inventors: Darin DeForest, Charles P. Pace, Nigel Lee, Renato Pizzorni
-
Patent number: 9578345
Abstract: A model-based compression codec applies higher-level modeling to produce better predictions than can be found through conventional block-based motion estimation and compensation. Computer-vision-based feature and object detection algorithms identify regions of interest throughout the video datacube. The detected features and objects are modeled with a compact set of parameters, and similar feature/object instances are associated across frames. Associated features/objects are formed into tracks and related to specific blocks of video data to be encoded. The tracking information is used to produce model-based predictions for those blocks of data, enabling more efficient navigation of the prediction search space than is typically achievable through conventional motion estimation methods. A hybrid framework enables modeling of data at multiple fidelities and selects the appropriate level of modeling for each portion of video data.
Type: Grant
Filed: December 21, 2012
Date of Patent: February 21, 2017
Assignee: Euclid Discoveries, LLC
Inventors: Darin DeForest, Charles P. Pace, Nigel Lee, Renato Pizzorni
-
Patent number: 9532069
Abstract: Systems and methods of improving video encoding/decoding efficiency may be provided. A feature-based processing stream is applied to video data having a series of video frames. Computer-vision-based feature and object detection algorithms identify regions of interest throughout the video datacube. The detected features and objects are modeled with a compact set of parameters, and similar feature/object instances are associated across frames. Associated features/objects are formed into tracks, and each track is given a representative, characteristic feature. Similar characteristic features are clustered and then stored in a model library, for reuse in the compression of other videos. A model-based compression framework makes use of the preserved model data by detecting features in a new video to be encoded, relating those features to specific blocks of data, and accessing similar model information from the model library.
Type: Grant
Filed: October 29, 2014
Date of Patent: December 27, 2016
Assignee: Euclid Discoveries, LLC
Inventors: Charles P. Pace, Darin DeForest, Nigel Lee, Renato Pizzorni, Richard Wingard
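The abstracts above repeatedly describe associating feature instances across frames into tracks and reducing each track to a representative, characteristic feature for a model library. A greatly simplified sketch, where a "feature" is just a 2-D position and association is greedy nearest-neighbour matching (both simplifying assumptions, not the patented algorithms), could look like:

```python
# Hypothetical sketch of the track-forming step: feature detections are
# associated frame to frame by nearest neighbour within a radius, and
# each track yields one representative "characteristic" feature (here,
# its centroid) for the model library.

def associate(tracks, detections, max_dist=10.0):
    """Greedily extend each existing track with its closest detection;
    unmatched detections start new tracks."""
    remaining = list(detections)
    for track in tracks:
        last = track[-1]
        best = min(remaining, default=None,
                   key=lambda d: (d[0] - last[0]) ** 2 + (d[1] - last[1]) ** 2)
        if best and (best[0] - last[0]) ** 2 + (best[1] - last[1]) ** 2 <= max_dist ** 2:
            track.append(best)
            remaining.remove(best)
    tracks.extend([d] for d in remaining)

def characteristic(track):
    """Representative feature: the centroid of the track's instances."""
    xs = [p[0] for p in track]
    ys = [p[1] for p in track]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

tracks = []
for frame in [[(5, 5), (50, 50)], [(7, 6), (52, 49)], [(9, 8)]]:
    associate(tracks, frame)

library = [characteristic(t) for t in tracks]
print(len(tracks))  # → 2
```

In the patented framework, clustering similar characteristic features and matching new videos against the library would replace the trivial centroid step.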
-
Patent number: 9106977
Abstract: Personal object based archival systems and methods are provided for processing and compressing video. By analyzing features unique to a user, such as face, family, and pet attributes associated with the user, an invariant model can be determined to create object model adapters personal to each user. These personalized video object models can be created using geometric and appearance modeling techniques, and they can be stored in an object model library. The object models can be reused for processing other video streams. The object models can be shared in a peer-to-peer network among many users, or the object models can be stored in an object model library on a server. When the compressed (encoded) video is reconstructed, the video object models can be accessed and used to produce quality video with nearly lossless compression.
Type: Grant
Filed: December 30, 2011
Date of Patent: August 11, 2015
Assignee: Euclid Discoveries, LLC
Inventor: Charles P. Pace
-
Patent number: 8964835
Abstract: Systems and methods of processing video data are provided. Video data having a series of video frames is received and processed. One or more instances of a candidate feature are detected in the video frames. The previously decoded video frames are processed to identify potential matches of the candidate feature. When a substantial number of portions of the previously decoded video frames include instances of the candidate feature, the instances of the candidate feature are aggregated into a set. The candidate feature set is used to create a feature-based model. The feature-based model includes a model of deformation variation and a model of appearance variation of instances of the candidate feature. The compression efficiency of the feature-based model is compared with that of conventional video compression.
Type: Grant
Filed: December 30, 2011
Date of Patent: February 24, 2015
Assignee: Euclid Discoveries, LLC
Inventor: Charles P. Pace
-
Patent number: 8942283
Abstract: Systems and methods of processing video data are provided. Video data having a series of video frames is received and processed. One or more instances of a candidate feature are detected in the video frames. The previously decoded video frames are processed to identify potential matches of the candidate feature. When a substantial number of portions of the previously decoded video frames include instances of the candidate feature, the instances of the candidate feature are aggregated into a set. The candidate feature set is used to create a feature-based model. The feature-based model includes a model of deformation variation and a model of appearance variation of instances of the candidate feature. The compression efficiency of the feature-based model is compared with that of conventional video compression.
Type: Grant
Filed: October 6, 2009
Date of Patent: January 27, 2015
Assignee: Euclid Discoveries, LLC
Inventor: Charles P. Pace
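These two abstracts describe aggregating instances of a candidate feature into a set, modeling their appearance variation, and comparing the model's compression efficiency against conventional coding. A toy sketch of that comparison, using a mean-plus-residuals appearance model and invented byte-cost constants (the real patents' models and cost measures are far richer), might be:

```python
# Illustrative sketch: aggregate feature instances, store a mean
# appearance plus small per-instance residuals, and compare the model's
# coding cost against coding each instance directly. The bit costs are
# hypothetical constants chosen only to make the comparison concrete.

def build_model(instances):
    """Appearance model: mean vector plus per-instance residuals."""
    n = len(instances)
    mean = [sum(col) / n for col in zip(*instances)]
    residuals = [[v - m for v, m in zip(inst, mean)] for inst in instances]
    return mean, residuals

def model_cost(mean, residuals, bits_per_value=8, bits_per_residual=2):
    """Residuals are small, so they are assumed cheaper to code."""
    return (len(mean) * bits_per_value
            + sum(len(r) for r in residuals) * bits_per_residual)

def raw_cost(instances, bits_per_value=8):
    return sum(len(i) for i in instances) * bits_per_value

# Three near-identical instances of one candidate feature.
instances = [[100, 120, 110], [102, 121, 109], [99, 119, 112]]
mean, residuals = build_model(instances)
print(model_cost(mean, residuals) < raw_cost(instances))  # → True
```

A deformation model would add a second component (per-instance warps around a canonical shape) alongside the appearance residuals.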
-
Patent number: 8908766
Abstract: A method and apparatus for image data compression includes detecting a portion of an image signal that uses a disproportionate amount of bandwidth compared to other portions of the image signal. The detected portion of the image signal results in determined components of interest. The method and apparatus normalize the determined components of interest relative to a certain variance to generate an intermediate form of the components of interest. The intermediate form represents the components of interest reduced in complexity by the certain variance and enables a compressed form of the image signal in which the determined components of interest maintain saliency. In one embodiment, the video signal is a sequence of video frames.
Type: Grant
Filed: January 4, 2008
Date of Patent: December 9, 2014
Assignee: Euclid Discoveries, LLC
Inventor: Charles P. Pace
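The abstract above turns on detecting the component that consumes disproportionate bandwidth and normalizing it relative to a variance into a reduced-complexity intermediate form. A minimal sketch, using variance as a stand-in complexity measure and plain zero-mean/unit-variance normalization (both assumptions, not the patented formulation), could be:

```python
# Hypothetical sketch: pick the component of a signal with
# disproportionate variance, then normalize it to an intermediate form
# while retaining the (mean, scale) parameters needed to invert the
# normalization. Variance stands in for "bandwidth" here.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def detect_component(components):
    """The component with the largest variance is the one of interest."""
    return max(components, key=variance)

def normalize(component):
    """Intermediate form: zero-mean, unit-variance values plus the
    parameters needed to restore the original."""
    m = sum(component) / len(component)
    s = variance(component) ** 0.5 or 1.0
    return [(x - m) / s for x in component], (m, s)

components = [[1, 1, 2, 1], [10, 90, 20, 80]]
busy = detect_component(components)
normalized, (m, s) = normalize(busy)
restored = [round(x * s + m) for x in normalized]
print(restored == busy)  # → True
```

The normalized values live on a common, lower-complexity scale, which is what lets the compressed form keep the component's saliency.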
-
Patent number: 8902971
Abstract: Systems and methods of improving video encoding/decoding efficiency may be provided. A feature-based processing stream is applied to video data having a series of video frames. Computer-vision-based feature and object detection algorithms identify regions of interest throughout the video datacube. The detected features and objects are modeled with a compact set of parameters, and similar feature/object instances are associated across frames. Associated features/objects are formed into tracks, and each track is given a representative, characteristic feature. Similar characteristic features are clustered and then stored in a model library, for reuse in the compression of other videos. A model-based compression framework makes use of the preserved model data by detecting features in a new video to be encoded, relating those features to specific blocks of data, and accessing similar model information from the model library.
Type: Grant
Filed: February 20, 2013
Date of Patent: December 2, 2014
Assignee: Euclid Discoveries, LLC
Inventors: Charles P. Pace, Darin DeForest, Nigel Lee, Renato Pizzorni, Richard Wingard
-
Patent number: 8842154
Abstract: Systems and methods for processing video are provided. Video compression schemes are provided to reduce the number of bits required to store and transmit digital media in video conferencing or videoblogging applications. A photorealistic avatar representation of a video conference participant is created. The avatar representation can be based on portions of a video stream that depict the conference participant. A face detector is used to identify, track, and classify the face. Object models including density, structure, deformation, appearance, and illumination models are created based on the detected face. An object based video compression algorithm, which uses machine learning face detection techniques, creates the photorealistic avatar representation from parameters derived from the density, structure, deformation, appearance, and illumination models.
Type: Grant
Filed: July 3, 2012
Date of Patent: September 23, 2014
Assignee: Euclid Discoveries, LLC
Inventor: Charles P. Pace
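The avatar abstracts describe reconstructing a participant's face from a small set of model parameters rather than transmitting pixels. A toy sketch of that parameters-instead-of-pixels idea, with an invented one-basis "appearance model" (the patents' density/structure/deformation/appearance/illumination models are far more elaborate), might be:

```python
# Illustrative sketch: instead of sending face pixels each frame, send a
# single expression weight and reconstruct the face from a stored model.
# The base face and basis vector are invented toy data.

BASE = [100, 100, 100, 100]   # stored model: mean face "pixels"
SMILE = [0, 5, -5, 0]         # stored model: one expression basis

def encode(frame_face):
    """Project the observed face onto the basis: the best-fit weight
    (one float) replaces len(frame_face) pixel values."""
    num = sum(s * (f - b) for s, f, b in zip(SMILE, frame_face, BASE))
    den = sum(s * s for s in SMILE)
    return num / den

def decode(weight):
    """Reconstruct the avatar face from the model plus the weight."""
    return [b + weight * s for b, s in zip(BASE, SMILE)]

face = [100, 104, 96, 100]
w = encode(face)
print(round(w, 2), decode(w) == face)  # → 0.8 True
```

With many basis vectors per model (and pose, illumination, etc.), the same projection/reconstruction loop yields a photorealistic avatar from a handful of parameters per frame.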
-
Patent number: 8553782
Abstract: Personal object based archival systems and methods are provided for processing and compressing video. By analyzing features unique to a user, such as face, family, and pet attributes associated with the user, an invariant model can be determined to create object model adapters personal to each user. These personalized video object models can be created using geometric and appearance modeling techniques, and they can be stored in an object model library. The object models can be reused for processing other video streams. The object models can be shared in a peer-to-peer network among many users, or the object models can be stored in an object model library on a server. When the compressed (encoded) video is reconstructed, the video object models can be accessed and used to produce quality video with nearly lossless compression.
Type: Grant
Filed: January 4, 2008
Date of Patent: October 8, 2013
Assignee: Euclid Discoveries, LLC
Inventor: Charles P. Pace
-
Publication number: 20130230099
Abstract: A model-based compression codec applies higher-level modeling to produce better predictions than can be found through conventional block-based motion estimation and compensation. Computer-vision-based feature and object detection algorithms identify regions of interest throughout the video datacube. The detected features and objects are modeled with a compact set of parameters, and similar feature/object instances are associated across frames. Associated features/objects are formed into tracks and related to specific blocks of video data to be encoded. The tracking information is used to produce model-based predictions for those blocks of data, enabling more efficient navigation of the prediction search space than is typically achievable through conventional motion estimation methods. A hybrid framework enables modeling of data at multiple fidelities and selects the appropriate level of modeling for each portion of video data.
Type: Application
Filed: March 12, 2013
Publication date: September 5, 2013
Applicant: Euclid Discoveries, LLC
Inventors: Darin DeForest, Charles P. Pace, Nigel Lee, Renato Pizzorni
-
Publication number: 20130114703
Abstract: A model-based compression codec applies higher-level modeling to produce better predictions than can be found through conventional block-based motion estimation and compensation. Computer-vision-based feature and object detection algorithms identify regions of interest throughout the video datacube. The detected features and objects are modeled with a compact set of parameters, and similar feature/object instances are associated across frames. Associated features/objects are formed into tracks and related to specific blocks of video data to be encoded. The tracking information is used to produce model-based predictions for those blocks of data, enabling more efficient navigation of the prediction search space than is typically achievable through conventional motion estimation methods. A hybrid framework enables modeling of data at multiple fidelities and selects the appropriate level of modeling for each portion of video data.
Type: Application
Filed: December 21, 2012
Publication date: May 9, 2013
Applicant: Euclid Discoveries, LLC
-
Publication number: 20130107948
Abstract: A model-based compression codec applies higher-level modeling to produce better predictions than can be found through conventional block-based motion estimation and compensation. Computer-vision-based feature and object detection algorithms identify regions of interest throughout the video datacube. The detected features and objects are modeled with a compact set of parameters, and similar feature/object instances are associated across frames. Associated features/objects are formed into tracks and related to specific blocks of video data to be encoded. The tracking information is used to produce model-based predictions for those blocks of data, enabling more efficient navigation of the prediction search space than is typically achievable through conventional motion estimation methods. A hybrid framework enables modeling of data at multiple fidelities and selects the appropriate level of modeling for each portion of video data.
Type: Application
Filed: December 21, 2012
Publication date: May 2, 2013
Applicant: Euclid Discoveries, LLC
-
Patent number: 8243118
Abstract: Systems and methods for processing video are provided. Video compression schemes are provided to reduce the number of bits required to store and transmit digital media in video conferencing or videoblogging applications. A photorealistic avatar representation of a video conference participant is created. The avatar representation can be based on portions of a video stream that depict the conference participant. A face detector is used to identify, track, and classify the face. Object models including density, structure, deformation, appearance, and illumination models are created based on the detected face. An object based video compression algorithm, which uses machine learning face detection techniques, creates the photorealistic avatar representation from parameters derived from the density, structure, deformation, appearance, and illumination models.
Type: Grant
Filed: January 4, 2008
Date of Patent: August 14, 2012
Assignee: Euclid Discoveries, LLC
Inventor: Charles P. Pace
-
Publication number: 20120163446
Abstract: Personal object based archival systems and methods are provided for processing and compressing video. By analyzing features unique to a user, such as face, family, and pet attributes associated with the user, an invariant model can be determined to create object model adapters personal to each user. These personalized video object models can be created using geometric and appearance modeling techniques, and they can be stored in an object model library. The object models can be reused for processing other video streams. The object models can be shared in a peer-to-peer network among many users, or the object models can be stored in an object model library on a server. When the compressed (encoded) video is reconstructed, the video object models can be accessed and used to produce quality video with nearly lossless compression.
Type: Application
Filed: December 30, 2011
Publication date: June 28, 2012
Applicant: Euclid Discoveries, LLC
Inventor: Charles P. Pace
-
Publication number: 20120155536
Abstract: Systems and methods of processing video data are provided. Video data having a series of video frames is received and processed. One or more instances of a candidate feature are detected in the video frames. The previously decoded video frames are processed to identify potential matches of the candidate feature. When a substantial number of portions of the previously decoded video frames include instances of the candidate feature, the instances of the candidate feature are aggregated into a set. The candidate feature set is used to create a feature-based model. The feature-based model includes a model of deformation variation and a model of appearance variation of instances of the candidate feature. The compression efficiency of the feature-based model is compared with that of conventional video compression.
Type: Application
Filed: December 30, 2011
Publication date: June 21, 2012
Applicant: Euclid Discoveries, LLC
Inventor: Charles P. Pace