Patents by Inventor Albert E. Keinath

Albert E. Keinath has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230396819
    Abstract: A video delivery system generates and stores reduced bandwidth videos from source video. The system may include a track generator that executes functionality of application(s) to be used at sink devices, in which the track generator generates tracks from execution of the application(s) on the source video, the tracks having a reduced data size as compared to the source video. The track generator may execute a first instance of application functionality on the source video, which identifies region(s) of interest from the source video. The track generator further may downsample the source video according to downsampling parameters, and execute a second instance of application functionality on the downsampled video. The track generator may determine, from a comparison of outputs from the first and second instances of the application, whether the output from the second instance of application functionality is within an error tolerance of the output from the first instance of application functionality.
    Type: Application
    Filed: June 1, 2023
    Publication date: December 7, 2023
    Inventors: Ke ZHANG, Xiaoxia SUN, Shujie LIU, Xiaosong ZHOU, Jian LI, Xun SHI, Jiefu ZHAI, Albert E. KEINATH, Hsi-Jung WU, Jingteng XUE, Xingyu ZHANG, Jun XIN
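The track-generation pipeline described in the abstract above (publication 20230396819) can be pictured with a small, purely hypothetical sketch: run a toy region-of-interest detector on a source frame and on a downsampled copy, then check whether the downsampled result, rescaled to source coordinates, stays within an error tolerance. The detector, the decimation downsampler, and the tolerance value are all invented for illustration and are not taken from the patent application.

```python
# Hypothetical sketch of the comparison step: run an ROI detector on the
# source video and on a downsampled copy, then check whether the two
# outputs agree within an error tolerance. Names are illustrative only.

def downsample(frame, factor):
    """Keep every `factor`-th pixel in each dimension (naive decimation)."""
    return [row[::factor] for row in frame[::factor]]

def roi_center(frame):
    """Toy 'application functionality': return the centroid of bright pixels."""
    points = [(r, c) for r, row in enumerate(frame)
                     for c, v in enumerate(row) if v > 128]
    if not points:
        return None
    return (sum(p[0] for p in points) / len(points),
            sum(p[1] for p in points) / len(points))

def within_tolerance(source_frame, factor=2, tolerance=1.5):
    """Compare the first-instance output (source resolution) with the
    second-instance output (downsampled, rescaled back to source coordinates)."""
    full = roi_center(source_frame)
    down = roi_center(downsample(source_frame, factor))
    if full is None or down is None:
        return full is None and down is None
    scaled = (down[0] * factor, down[1] * factor)
    err = max(abs(full[0] - scaled[0]), abs(full[1] - scaled[1]))
    return err <= tolerance

frame = [[255 if 3 <= r <= 6 and 4 <= c <= 9 else 0 for c in range(16)]
         for r in range(16)]
print(within_tolerance(frame))  # True when the downsampled ROI tracks the source ROI
```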
  • Publication number: 20230300341
    Abstract: Techniques are disclosed for generating virtual reference frames that may be used for prediction of input video frames. The virtual reference frames may be derived from already-coded reference frames and thereby incur reduced signaling overhead. Moreover, signaling of virtual reference frames may be avoided until an encoder selects the virtual reference frame as a prediction reference for a current frame. In this manner, the techniques proposed herein contribute to improved coding efficiencies.
    Type: Application
    Filed: January 20, 2023
    Publication date: September 21, 2023
    Inventors: Yeqing WU, Yunfei ZHENG, Alexandros TOURAPIS, Alican NALCI, Yixin DU, Hilmi Enes EGILMEZ, Albert E. KEINATH, Jun XIN, Hsi-Jung WU
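A hedged sketch of the general idea in publication 20230300341: derive a candidate reference by blending two already-decoded references, and emit its description only when it is actually selected for prediction of the current frame. The blending rule, the SAD cost, and the signaled field names are assumptions for illustration, not the claimed derivation.

```python
# Illustrative sketch (not the patent's actual derivation): build a "virtual"
# reference by blending two already-decoded references, and only signal its
# use when it wins the prediction-cost comparison for the current frame.

def blend(ref_a, ref_b):
    """Virtual reference: pixel-wise average of two decoded references."""
    return [[(a + b) // 2 for a, b in zip(ra, rb)] for ra, rb in zip(ref_a, ref_b)]

def sad(frame_a, frame_b):
    """Sum of absolute differences, used here as the prediction cost."""
    return sum(abs(a - b) for ra, rb in zip(frame_a, frame_b) for a, b in zip(ra, rb))

def choose_reference(current, decoded_refs):
    candidates = {f"ref{i}": r for i, r in enumerate(decoded_refs)}
    candidates["virtual"] = blend(decoded_refs[0], decoded_refs[1])
    best = min(candidates, key=lambda name: sad(current, candidates[name]))
    # The virtual frame incurs no extra signaling until it is actually selected.
    signaled = {"ref_idx": best}
    if best == "virtual":
        signaled["virtual_ref_params"] = {"sources": ["ref0", "ref1"], "mode": "average"}
    return signaled

ref0 = [[10, 10], [10, 10]]
ref1 = [[30, 30], [30, 30]]
cur  = [[20, 20], [20, 21]]
print(choose_reference(cur, [ref0, ref1]))  # selects the blended virtual reference
```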
  • Patent number: 11606574
    Abstract: Techniques are disclosed for coding video data in which frames from a video source are partitioned into a plurality of tiles of common size, and the tiles are coded as a virtual video sequence according to motion-compensated prediction, each tile treated as having a respective temporal location in the virtual video sequence. The coding scheme permits relative allocation of coding resources to tiles that are likely to have greater significance in a video coding session, which may cause certain tiles that have low complexity or low motion content to be skipped during coding of the tiles for select source frames. Moreover, coding of the tiles may be ordered to achieve low coding latencies during a coding session.
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: March 14, 2023
    Assignee: Apple Inc.
    Inventors: Dazhong Zhang, Peikang Song, Beibei Wang, Giribalan Gopalan, Albert E. Keinath, Christopher M. Garrido, David R. Conrad, Hsi-Jung Wu, Ming Jin, Hang Yuan, Xiaohua Yang, Xiaosong Zhou, Vikrant Kasarabada, Davide Concion, Eric L. Chien, Bess C. Chan, Karthick Santhanam, Gurtej Singh Chandok
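The tile-skipping behavior summarized in patent 11606574 can be approximated with a minimal sketch: split each frame into fixed-size tiles, measure per-tile change against the previous frame, and mark low-activity tiles as skipped so coding resources concentrate on active tiles. The activity metric and threshold are placeholders, not the patented allocation scheme.

```python
# A minimal sketch, not the patented coder: partition a frame into fixed-size
# tiles, estimate per-tile change versus the previous frame, and skip tiles
# whose change falls below a threshold so coding effort goes to active tiles.

def tiles(frame, size):
    h, w = len(frame), len(frame[0])
    for top in range(0, h, size):
        for left in range(0, w, size):
            yield (top, left), [row[left:left + size] for row in frame[top:top + size]]

def tile_activity(tile_cur, tile_prev):
    return sum(abs(a - b) for ra, rb in zip(tile_cur, tile_prev) for a, b in zip(ra, rb))

def plan_tile_coding(current, previous, size=2, skip_threshold=4):
    plan = []
    prev_tiles = dict(tiles(previous, size))
    for pos, tile in tiles(current, size):
        activity = tile_activity(tile, prev_tiles[pos])
        plan.append((pos, "code" if activity > skip_threshold else "skip"))
    return plan

prev = [[0, 0, 5, 5], [0, 0, 5, 5], [9, 9, 1, 1], [9, 9, 1, 1]]
cur  = [[0, 0, 5, 5], [0, 1, 5, 5], [2, 2, 1, 1], [3, 3, 1, 1]]
print(plan_tile_coding(cur, prev))
# Only the lower-left tile changed enough to be coded for this source frame.
```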
  • Publication number: 20220360814
    Abstract: An encoder or decoder can perform enhanced motion vector prediction by receiving an input block of data for encoding or decoding and accessing stored motion information for at least one other block of data. Based on the stored motion information, the encoder or decoder can generate a list of one or more motion vector predictor candidates for the input block in accordance with an adaptive list construction order. The encoder or decoder can predict a motion vector for the input block based on at least one of the one or more motion vector predictor candidates.
    Type: Application
    Filed: May 4, 2022
    Publication date: November 10, 2022
    Inventors: Yeqing Wu, Alexandros Tourapis, Yunfei Zheng, Hsi-Jung Wu, Jun Xin, Albert E. Keinath, Mei Guo, Alican Nalci
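A rough sketch in the spirit of publication 20220360814's adaptive list construction: order the motion vector predictor sources by how often each source was selected for previously coded blocks, then fill the candidate list in that order. The source names, the frequency heuristic, and the list size are illustrative assumptions, not the disclosed rules.

```python
# A hedged sketch of adaptive MVP list construction (names and the ordering
# heuristic are illustrative, not the patented rules): order candidate sources
# by how often each source was chosen for previously coded blocks.

from collections import Counter

class MvpListBuilder:
    def __init__(self, max_candidates=2):
        self.max_candidates = max_candidates
        self.selection_history = Counter()   # stored statistics drive the order

    def build_list(self, stored_mvs):
        """stored_mvs maps a neighbor position ('left', 'above', ...) to its MV."""
        order = sorted(stored_mvs, key=lambda pos: -self.selection_history[pos])
        candidates = []
        for pos in order:                    # adaptive construction order
            mv = stored_mvs[pos]
            if mv is not None and mv not in candidates:
                candidates.append(mv)
            if len(candidates) == self.max_candidates:
                break
        return order, candidates

    def record_selection(self, position):
        self.selection_history[position] += 1

builder = MvpListBuilder()
builder.record_selection("above")            # 'above' was picked for earlier blocks
order, cands = builder.build_list({"left": (4, 0), "above": (3, 1), "corner": (4, 0)})
print(order, cands)                          # 'above' is tried first this time
```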
  • Publication number: 20200382806
    Abstract: Techniques are disclosed for coding video data in which frames from a video source are partitioned into a plurality of tiles of common size, and the tiles are coded as a virtual video sequence according to motion-compensated prediction, each tile treated as having a respective temporal location in the virtual video sequence. The coding scheme permits relative allocation of coding resources to tiles that are likely to have greater significance in a video coding session, which may cause certain tiles that have low complexity or low motion content to be skipped during coding of the tiles for select source frames. Moreover, coding of the tiles may be ordered to achieve low coding latencies during a coding session.
    Type: Application
    Filed: May 26, 2020
    Publication date: December 3, 2020
    Inventors: Dazhong ZHANG, Peikang SONG, Beibei WANG, Giribalan GOPALAN, Albert E. KEINATH, Christopher M. GARRIDO, David R. CONRAD, Hsi-Jung WU, Ming JIN, Hang YUAN, Xiaohua YANG, Xiaosong ZHOU, Vikrant KASARABADA, Davide CONCION, Eric L. CHIEN, Bess C. CHAN, Karthick SANTHANAM, Gurtej Singh CHANDOK
  • Patent number: 10452713
    Abstract: Systems and processes for improved video editing, summarization and navigation based on generation and analysis of metadata are described. The metadata may be content-based (e.g., differences between neighboring frames, exposure data, key frame identification data, motion data, or face detection data) or non-content-based (e.g., exposure, focus, location, time) and used to prioritize and/or classify portions of video. The metadata may be generated at the time of image capture or during post-processing. Prioritization information, such as a score for various portions of the image data, may be based on the metadata and/or image data. Classification information, such as the type or quality of a scene, may be determined based on the metadata and/or image data. The classification and prioritization information may be metadata and may be used to automatically remove undesirable portions of the video, generate suggestions during editing, or automatically generate a summary video.
    Type: Grant
    Filed: December 3, 2014
    Date of Patent: October 22, 2019
    Assignee: Apple Inc.
    Inventors: Shujie Liu, Ke Zhang, Xiaosong Zhou, Hsi-Jung Wu, Chris Y. Chung, James O. Normile, Douglas S. Price, Albert E. Keinath
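The metadata-driven prioritization in patent 10452713 can be pictured with a toy sketch: score each video segment from its metadata (faces, motion, exposure) and keep the highest-scoring segments for the summary. The metadata field names and scoring weights are made up for illustration.

```python
# A simplified sketch of metadata-driven summarization (scoring weights and
# field names are invented for illustration): score each segment from its
# metadata and keep the highest-scoring segments as the summary.

def score_segment(meta):
    score = 0.0
    score += 2.0 if meta.get("faces_detected") else 0.0
    score += min(meta.get("motion", 0.0), 1.0)          # some motion is interesting
    score -= 1.5 if meta.get("exposure") == "poor" else 0.0
    return score

def summarize(segments, keep=2):
    """segments: list of (segment_id, metadata) pairs captured or post-processed."""
    ranked = sorted(segments, key=lambda s: score_segment(s[1]), reverse=True)
    return [seg_id for seg_id, _ in ranked[:keep]]

clips = [
    ("intro", {"exposure": "poor", "motion": 0.1}),
    ("party", {"faces_detected": True, "motion": 0.8}),
    ("pan",   {"motion": 2.0}),
    ("blank", {"exposure": "poor", "motion": 0.0}),
]
print(summarize(clips))   # highest-priority clips survive into the summary video
```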
  • Patent number: 10366497
    Abstract: Techniques for cropping images containing an occlusion are presented. A method for image editing is presented comprising, when an occlusion is detected in an original digital image, determining an area occupied by the occlusion, assigning importance scores to different content elements of the original digital image, defining a cropping window around an area of the original digital image that does not include the area occupied by the occlusion and that is based on the importance scores, and cropping the original digital image to the cropping window.
    Type: Grant
    Filed: June 9, 2017
    Date of Patent: July 30, 2019
    Assignee: Apple Inc.
    Inventors: Ke Zhang, Jiefu Zhai, Yunfei Zheng, Shujie Liu, Albert E. Keinath, Xiaosong Zhou, Chris Y. Chung, Hsi-Jung Wu
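A toy sketch of the occlusion-aware cropping in patent 10366497, assuming a grid of per-cell importance scores and a boolean occlusion mask (both invented here): choose the fixed-size window with the highest total importance that contains no occluded cells, then crop to it.

```python
# A toy sketch of the cropping idea (grid, scores, and window size are
# illustrative): pick the fixed-size window with the highest total importance
# that contains no occluded cells, then crop to it.

def best_crop(importance, occluded, win_h, win_w):
    """importance: 2D scores per cell; occluded: 2D booleans marking the occlusion."""
    h, w = len(importance), len(importance[0])
    best, best_window = None, None
    for top in range(h - win_h + 1):
        for left in range(w - win_w + 1):
            cells = [(r, c) for r in range(top, top + win_h)
                            for c in range(left, left + win_w)]
            if any(occluded[r][c] for r, c in cells):
                continue                      # window must exclude the occlusion
            total = sum(importance[r][c] for r, c in cells)
            if best is None or total > best:
                best, best_window = total, (top, left, win_h, win_w)
    return best_window

importance = [[1, 1, 5, 5],
              [1, 1, 5, 5],
              [2, 2, 3, 3]]
occluded   = [[True,  False, False, False],
              [True,  False, False, False],
              [False, False, False, False]]
print(best_crop(importance, occluded, 2, 2))  # (0, 2, 2, 2): high-importance, unoccluded area
```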
  • Patent number: 10282633
    Abstract: A method for processing media assets includes, given a first media asset, deriving characteristics from the first media asset, searching for other media assets having characteristics that correlate to the characteristics of the first media asset, when a match is found, deriving content corrections for the first media asset or a matching media asset from the other of the first media asset or the matching media asset, and correcting content of the first media asset or the matching media asset based on the content corrections.
    Type: Grant
    Filed: October 20, 2016
    Date of Patent: May 7, 2019
    Assignee: Apple Inc.
    Inventors: Shujie Liu, Jiefu Zhai, Chris Y. Chung, Hsi-Jung Wu, Yunfei Zheng, Albert E. Keinath, Xiaosong Zhou, Ke Zhang
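The asset-matching and correction flow of patent 10282633 might look roughly like the following sketch, assuming a deliberately simple characteristic (a mean-normalized sample shape) and a brightness-offset correction; neither detail comes from the patent.

```python
# A minimal sketch, assuming a simple brightness characteristic (the patent
# does not specify this): match a degraded asset to a better copy and derive
# a brightness correction from the better copy.

def characteristics(asset):
    """Toy characteristic: (length, mean-normalized shape) of a 1-D sample sequence."""
    samples = asset["samples"]
    mean = sum(samples) / len(samples)
    return len(samples), tuple(round(s - mean, 3) for s in samples)

def find_match(asset, library):
    target = characteristics(asset)
    for other in library:
        if other is not asset and characteristics(other) == target:
            return other
    return None

def correct(asset, library):
    match = find_match(asset, library)
    if match is None:
        return asset
    # Correction derived from the matching asset: align mean brightness.
    offset = (sum(match["samples"]) - sum(asset["samples"])) / len(asset["samples"])
    corrected = [s + offset for s in asset["samples"]]
    return {"name": asset["name"], "samples": corrected}

dark = {"name": "phone_copy",  "samples": [10, 20, 30, 20]}
good = {"name": "camera_copy", "samples": [60, 70, 80, 70]}
print(correct(dark, [dark, good]))   # phone copy lifted to the camera copy's brightness
```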
  • Patent number: 10062412
    Abstract: Methods for organizing media data by automatically segmenting media data into hierarchical layers of scenes are described. The media data may include metadata and content having still image, video or audio data. The metadata may be content-based (e.g., differences between neighboring frames, exposure data, key frame identification data, motion data, or face detection data) or non-content-based (e.g., exposure, focus, location, time) and used to prioritize and/or classify portions of video. The metadata may be generated at the time of image capture or during post-processing. Prioritization information, such as a score for various portions of the image data, may be based on the metadata and/or image data. Classification information, such as the type or quality of a scene, may be determined based on the metadata and/or image data. The classification and prioritization information may be metadata and may be used to organize the media data.
    Type: Grant
    Filed: June 3, 2016
    Date of Patent: August 28, 2018
    Assignee: Apple Inc.
    Inventors: Shujie Liu, Yunfei Zheng, Xiaosong Zhou, Hsi-Jung Wu, Ke Zhang, Albert E. Keinath, Chris Y. Chung
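The hierarchical scene layers of patent 10062412 can be sketched by thresholding inter-frame difference metadata at progressively looser thresholds, so every coarse boundary also appears in the finer layers below it. The thresholds and the difference metric here are placeholders, not the disclosed segmentation rules.

```python
# A rough sketch of hierarchical scene segmentation (thresholds and the
# frame-difference metric are placeholders): split wherever the inter-frame
# difference exceeds a threshold, using a looser threshold for a finer layer.

def segment(frame_diffs, threshold):
    """Return scene boundaries (frame indices) where the difference spikes."""
    return [i + 1 for i, d in enumerate(frame_diffs) if d > threshold]

def hierarchical_scenes(frame_diffs, thresholds=(50, 20)):
    """Coarser layers use higher thresholds, so every coarse boundary
    also appears in the finer layers below it."""
    return {f"layer_{i}": segment(frame_diffs, t)
            for i, t in enumerate(sorted(thresholds, reverse=True))}

# Differences between neighboring frames, e.g. derived from capture metadata.
diffs = [5, 8, 60, 4, 25, 7, 90, 3]
print(hierarchical_scenes(diffs))
# {'layer_0': [3, 7], 'layer_1': [3, 5, 7]} : coarse scenes split further in layer 1
```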
  • Publication number: 20170358059
    Abstract: Techniques for cropping images containing an occlusion are presented. A method for image editing is presented comprising, when an occlusion is detected in an original digital image, determining an area occupied by the occlusion, assigning importance scores to different content elements of the original digital image, defining a cropping window around an area of the original digital image that does not include the area occupied by the occlusion and that is based on the importance scores, and cropping the original digital image to the cropping window.
    Type: Application
    Filed: June 9, 2017
    Publication date: December 14, 2017
    Inventors: Ke Zhang, Jiefu Zhai, Yunfei Zheng, Shujie Liu, Albert E. Keinath, Xiaosong Zhou, Chris Y. Chung, Hsi-Jung Wu
  • Publication number: 20170109596
    Abstract: A method for processing media assets includes, given a first media asset, deriving characteristics from the first media asset, searching for other media assets having characteristics that correlate to the characteristics of the first media asset, when a match is found, deriving content corrections for the first media asset or a matching media asset from the other of the first media asset or the matching media asset, and correcting content of the first media asset or the matching media asset based on the content corrections.
    Type: Application
    Filed: October 20, 2016
    Publication date: April 20, 2017
    Inventors: Shujie Liu, Jiefu Zhai, Chris Y. Chung, Hsi-Jung Wu, Yunfei Zheng, Albert E. Keinath, Xiaosong Zhou, Ke Zhang
  • Publication number: 20160358628
    Abstract: Methods for organizing media data by automatically segmenting media data into hierarchical layers of scenes are described. The media data may include metadata and content having still image, video or audio data. The metadata may be content-based (e.g., differences between neighboring frames, exposure data, key frame identification data, motion data, or face detection data) or non-content-based (e.g., exposure, focus, location, time) and used to prioritize and/or classify portions of video. The metadata may be generated at the time of image capture or during post-processing. Prioritization information, such as a score for various portions of the image data, may be based on the metadata and/or image data. Classification information, such as the type or quality of a scene, may be determined based on the metadata and/or image data. The classification and prioritization information may be metadata and may be used to organize the media data.
    Type: Application
    Filed: June 3, 2016
    Publication date: December 8, 2016
    Inventors: Shujie Liu, Yunfei Zheng, Xiaosong Zhou, Hsi-Jung Wu, Ke Zhang, Albert E. Keinath, Chris Y. Chung
  • Publication number: 20160092561
    Abstract: Systems and processes for improved video editing, summarization and navigation based on generation and analysis of metadata are described. The metadata may be content-based (e.g., differences between neighboring frames, exposure data, key frame identification data, motion data, or face detection data) or non-content-based (e.g., exposure, focus, location, time) and used to prioritize and/or classify portions of video. The metadata may be generated at the time of image capture or during post-processing. Prioritization information, such as a score for various portions of the image data, may be based on the metadata and/or image data. Classification information, such as the type or quality of a scene, may be determined based on the metadata and/or image data. The classification and prioritization information may be metadata and may be used to automatically remove undesirable portions of the video, generate suggestions during editing, or automatically generate a summary video.
    Type: Application
    Filed: December 3, 2014
    Publication date: March 31, 2016
    Applicant: Apple Inc.
    Inventors: Shujie Liu, Ke Zhang, Xiaosong Zhou, Hsi-Jung Wu, Chris Y. Chung, James O. Normile, Douglas S. Price, Albert E. Keinath
  • Patent number: 8923640
    Abstract: The invention is directed to an efficient way of encoding and decoding video. Embodiments include identifying different coding units that share a similar characteristic. The characteristic can be, for example: quantization values, modes, block sizes, color space, motion vectors, depth, facial and non-facial regions, and filter values. An encoder may then group the units together as a coherence group. An encoder may similarly create a table or other data structure of the coding units. An encoder may then extract the commonly repeating characteristic or attribute from the coding units. The encoder may transmit the coherence groups along with the data structure, and other coding units which were not part of a coherence group. The decoder may receive the data, utilize the shared characteristic by storing it locally in a cache for faster repeated decoding, and decode the coherence group together.
    Type: Grant
    Filed: June 7, 2013
    Date of Patent: December 30, 2014
    Assignee: Apple Inc.
    Inventors: Xiaosong Zhou, Hsi-Jung Wu, Chris Y. Chung, Albert E. Keinath, David R. Conrad, Yunfei Zheng, Dazhong Zhang, Jae Hoon Kim
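Patent 8923640's coherence groups can be illustrated with a small sketch: collect coding units that share a characteristic (here a quantization value), store the shared value once per group, and leave non-matching units outside any group for ordinary handling. The field names and minimum group size are invented for the example.

```python
# An illustrative sketch (field names invented): group coding units that share
# a quantization value into a coherence group, store the shared value once,
# and keep non-matching units as-is.

from collections import defaultdict

def build_coherence_groups(units, key="qp", min_size=2):
    """units: list of dicts such as {'id': 0, 'qp': 28, ...}."""
    by_value = defaultdict(list)
    for unit in units:
        by_value[unit[key]].append(unit)

    groups, leftovers = [], []
    for value, members in by_value.items():
        if len(members) >= min_size:
            groups.append({
                "shared": {key: value},               # characteristic sent once
                "members": [{k: v for k, v in m.items() if k != key} for m in members],
            })
        else:
            leftovers.extend(members)                 # coded outside any group
    return groups, leftovers

units = [{"id": 0, "qp": 28}, {"id": 1, "qp": 28}, {"id": 2, "qp": 35}, {"id": 3, "qp": 28}]
groups, rest = build_coherence_groups(units)
print(groups)   # one group sharing qp=28; the decoder can cache that value
print(rest)     # unit 2 is transmitted outside the coherence group
```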
  • Publication number: 20140362919
    Abstract: The invention is directed to an efficient way of encoding and decoding video. Embodiments include identifying different coding units that share a similar characteristic. The characteristic can be, for example: quantization values, modes, block sizes, color space, motion vectors, depth, facial and non-facial regions, and filter values. An encoder may then group the units together as a coherence group. An encoder may similarly create a table or other data structure of the coding units. An encoder may then extract the commonly repeating characteristic or attribute from the coding units. The encoder may transmit the coherence groups along with the data structure, and other coding units which were not part of a coherence group. The decoder may receive the data, utilize the shared characteristic by storing it locally in a cache for faster repeated decoding, and decode the coherence group together.
    Type: Application
    Filed: June 7, 2013
    Publication date: December 11, 2014
    Inventors: Xiaosong Zhou, Hsi-Jung Wu, Chris Y. Chung, Albert E. Keinath, David R. Conrad, Yunfei Zheng, Dazhong Zhang, Jae Hoon Kim