Patents by Inventor Weilong Yang

Weilong Yang has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9953222
    Abstract: A computer-implemented method for selecting representative frames for videos is provided. The method includes receiving a video and identifying a set of features for each of the frames of the video. The features include frame-based features and semantic features; the semantic features identify likelihoods of semantic concepts being present as content in the frames of the video. A set of video segments for the video is subsequently generated. Each video segment includes a chronological subset of frames from the video, and each frame is associated with at least one of the semantic features. The method generates a score for each frame of the subset of frames for each video segment based at least on the semantic features, and selects a representative frame for each video segment based on the scores of the frames in the video segment. The representative frame represents and summarizes the video segment.
    Type: Grant
    Filed: September 8, 2015
    Date of Patent: April 24, 2018
    Assignee: Google LLC
    Inventors: Sanketh Shetty, Tomas Izo, Min-Hsuan Tsai, Sudheendra Vijayanarasimhan, Apostol Natsev, Sami Abu-El-Haija, George Dan Toderici, Susanna Ricco, Balakrishnan Varadarajan, Nicola Muscettola, WeiHsin Gu, Weilong Yang, Nitin Khandelwal, Phuong Le
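The abstract above describes scoring each frame in a segment from its semantic features and picking the top-scoring frame as the segment's representative. A minimal sketch of that idea follows; the function names (`score_frame`, `select_representatives`) and the sum-of-likelihoods scoring rule are illustrative assumptions, not details from the patent.

```python
def score_frame(semantic_features):
    """Score a frame by the sum of its semantic-concept likelihoods."""
    return sum(semantic_features.values())

def select_representatives(segments):
    """Pick the highest-scoring frame index from each video segment.

    `segments` is a list of segments; each segment is a list of
    (frame_index, semantic_features) pairs in chronological order.
    """
    reps = []
    for segment in segments:
        # the representative frame is the one with the best semantic score
        best_index, _ = max(segment, key=lambda item: score_frame(item[1]))
        reps.append(best_index)
    return reps

segments = [
    [(0, {"dog": 0.2, "park": 0.1}), (1, {"dog": 0.9, "park": 0.4})],
    [(2, {"car": 0.3}), (3, {"car": 0.1})],
]
print(select_representatives(segments))  # [1, 2]
```

In practice the per-frame score would also fold in the frame-based features (e.g. sharpness, motion blur) mentioned in the abstract; the sketch uses semantic likelihoods alone for brevity.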
  • Publication number: 20180089200
    Abstract: Facilitation of content entity annotation while maintaining joint quality, coverage, and/or completeness performance conditions is provided. In one example, a non-transitory computer-readable medium comprises computer-readable instructions that, in response to execution, cause a computing system to perform operations. The operations include aggregating information indicative of initial entities for content and initial scores associated with the initial entities received from one or more content annotation sources and mapping the initial scores to respective values to generate calibrated scores. The operations include applying weights to the calibrated scores to generate weighted scores and combining the weighted scores using a linear aggregation model to generate a final score. The operations include determining whether to annotate the content with at least one of the initial entities based on a comparison of the final score and a defined threshold value.
    Type: Application
    Filed: November 21, 2017
    Publication date: March 29, 2018
    Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Weilong Yang, John Burge, Sanketh Shetty, Omid Madani
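The pipeline in this abstract (and in the related grant 9,830,361 below) is calibrate, weight, linearly combine, then threshold. A hedged sketch, assuming a simple min-max calibration into [0, 1]; the actual calibration mapping and weight selection in the patent are learned to satisfy the joint performance conditions.

```python
def calibrate(score, lo, hi):
    # map a raw annotation-source score into the defined range [0, 1]
    return min(max((score - lo) / (hi - lo), 0.0), 1.0)

def final_score(raw_scores, ranges, weights):
    # weighted linear aggregation of the calibrated per-source scores
    calibrated = [calibrate(s, lo, hi) for s, (lo, hi) in zip(raw_scores, ranges)]
    return sum(w * c for w, c in zip(weights, calibrated))

def should_annotate(raw_scores, ranges, weights, threshold):
    # annotate only when the combined score clears the defined threshold
    return final_score(raw_scores, ranges, weights) >= threshold

raw_scores = [5.0, 0.8]             # scores from two annotation sources
ranges = [(0.0, 10.0), (0.0, 1.0)]  # each source's raw score range
weights = [0.6, 0.4]
print(should_annotate(raw_scores, ranges, weights, threshold=0.5))  # True
```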
  • Publication number: 20180025228
    Abstract: A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method identifies an entity from a plurality of entities identifying characteristics of video items. The computer-implemented method selects a set of features correlated with the entity based on a value of a feature of a plurality of features, determines a classifier for the entity using the set of features, and determines an aggregation calibration function for the entity based on the set of features. The computer-implemented method selects a video frame from a video item, where the video frame has associated features, and determines a probability of existence of the entity based on the associated features using the classifier and the aggregation calibration function.
    Type: Application
    Filed: October 2, 2017
    Publication date: January 25, 2018
    Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Nitin Khandelwal, Sudheendra Vijayanarasimhan, Weilong Yang, Sanketh Shetty
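This abstract (shared with grant 9,779,304 and publication 20170046573 below) pairs a per-entity classifier over a selected feature subset with an aggregation calibration function that turns the raw classifier output into a probability. A minimal sketch under assumed forms: a linear classifier and a logistic calibration; the patent's actual classifier and calibration function are learned per entity.

```python
import math

def classifier_score(frame_features, entity_weights):
    # linear classifier over the feature subset selected for this entity;
    # features absent from the entity's subset contribute nothing
    return sum(entity_weights.get(f, 0.0) * v for f, v in frame_features.items())

def aggregation_calibration(raw):
    # squash the raw classifier output into a probability (logistic form,
    # an illustrative stand-in for the learned calibration function)
    return 1.0 / (1.0 + math.exp(-raw))

def entity_probability(frame_features, entity_weights):
    # probability of existence of the entity in this frame
    return aggregation_calibration(classifier_score(frame_features, entity_weights))

weights = {"fur_texture": 1.5, "bark_audio": 0.8}  # hypothetical "dog" features
print(entity_probability({"fur_texture": 0.9, "bark_audio": 0.4}, weights))
```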
  • Publication number: 20180005666
    Abstract: A method of generating a moving thumbnail is disclosed. The method includes sampling video frames of a video item. The method further includes determining frame-level quality scores for the sampled video frames. The method also includes determining multiple group-level quality scores for multiple groups of the sampled video frames using the frame-level quality scores of the sampled video frames. The method further includes selecting one of the groups of the sampled video frames based on the multiple group-level quality scores. The method includes creating a moving thumbnail using a subset of the video frames that have timestamps within a range from the start timestamp to the end timestamp of the selected group.
    Type: Application
    Filed: June 30, 2016
    Publication date: January 4, 2018
    Inventors: Weilong Yang, Min-Hsuan Tsai, Zheng Sun, Pei Cao, Tomas Izo
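The moving-thumbnail method above rolls frame-level quality scores up into group-level scores and keeps the best group. A sketch under one simple assumption: groups are fixed-size sliding windows and a group's score is the mean of its frame scores (the patent does not specify these choices).

```python
def group_scores(frame_scores, group_size):
    # group-level quality = mean frame-level score over each sliding window
    return [
        sum(frame_scores[i:i + group_size]) / group_size
        for i in range(len(frame_scores) - group_size + 1)
    ]

def best_group(frame_scores, group_size):
    # return (start_index, end_index) of the highest-scoring window; the
    # moving thumbnail would be built from frames in this index range
    scores = group_scores(frame_scores, group_size)
    start = max(range(len(scores)), key=scores.__getitem__)
    return start, start + group_size - 1

frame_scores = [0.1, 0.9, 0.8, 0.2]  # hypothetical sampled-frame qualities
print(best_group(frame_scores, 2))   # (1, 2)
```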
  • Patent number: 9830361
    Abstract: Facilitation of content entity annotation while maintaining joint quality, coverage and/or completeness performance conditions is provided. In one example, a system includes an aggregation component that aggregates signals indicative of initial entities for content and initial scores associated with the initial entities generated by one or more content annotation sources; and a mapping component that maps the initial scores to calibrated scores within a defined range. The system also includes a linear aggregation component that: applies selected weights to the calibrated scores, wherein the selected weights are based on joint performance conditions; and combines the weighted, calibrated scores based on a selected linear aggregation model of a plurality of linear aggregation models to generate a final score. The system also includes an annotation component that determines whether to annotate the content with one of the initial entities based on a comparison of the final score and a defined threshold value.
    Type: Grant
    Filed: December 4, 2013
    Date of Patent: November 28, 2017
    Assignee: Google Inc.
    Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Weilong Yang, John Burge, Sanketh Shetty, Omid Madani
  • Patent number: 9779304
    Abstract: A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method identifies an entity from a plurality of entities identifying characteristics of video items. The computer-implemented method selects a set of features correlated with the entity based on a value of a feature of a plurality of features, determines a classifier for the entity using the set of features, and determines an aggregation calibration function for the entity based on the set of features. The computer-implemented method selects a video frame from a video item, where the video frame has associated features, and determines a probability of existence of the entity based on the associated features using the classifier and the aggregation calibration function.
    Type: Grant
    Filed: August 11, 2015
    Date of Patent: October 3, 2017
    Assignee: Google Inc.
    Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Nitin Khandelwal, Sudheendra Vijayanarasimhan, Weilong Yang, Sanketh Shetty
  • Patent number: 9627004
    Abstract: A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method selects an entity from a plurality of entities identifying characteristics of a video item, where the video item has associated metadata. The computer-implemented method receives probabilities of existence of the entity in video frames of the video item, and selects a video frame determined to comprise the entity responsive to determining that the video frame has a probability of existence of the entity greater than zero. The computer-implemented method determines a scaling factor for the probability of existence of the entity using the metadata of the video item, and determines an adjusted probability of existence of the entity by using the scaling factor to adjust the probability of existence of the entity. The computer-implemented method labels the video frame with the adjusted probability of existence.
    Type: Grant
    Filed: October 14, 2015
    Date of Patent: April 18, 2017
    Assignee: Google Inc.
    Inventors: Balakrishnan Varadarajan, Sanketh Shetty, Apostol Natsev, Nitin Khandelwal, Weilong Yang, Sudheendra Vijayanarasimhan, WeiHsin Gu, Nicola Muscettola
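The distinguishing step in this patent is the metadata-derived scaling factor applied to a frame's entity probability. A hedged sketch, assuming the simplest possible signal (whether the entity name appears in the title or description) and an arbitrary boost value; the patent does not specify either.

```python
def scaling_factor(metadata, entity):
    # boost the probability when the entity appears in the video's metadata;
    # the 1.2 / 1.0 values are illustrative, not from the patent
    text = " ".join([metadata.get("title", ""), metadata.get("description", "")])
    return 1.2 if entity.lower() in text.lower() else 1.0

def adjusted_probability(prob, metadata, entity):
    # scale the frame-level probability, clamped so it stays a probability
    return min(prob * scaling_factor(metadata, entity), 1.0)

print(adjusted_probability(0.5, {"title": "Dog at the park"}, "dog"))
print(adjusted_probability(0.5, {}, "dog"))
```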
  • Publication number: 20170046573
    Abstract: A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method identifies an entity from a plurality of entities identifying characteristics of video items. The computer-implemented method selects a set of features correlated with the entity based on a value of a feature of a plurality of features, determines a classifier for the entity using the set of features, and determines an aggregation calibration function for the entity based on the set of features. The computer-implemented method selects a video frame from a video item, where the video frame has associated features, and determines a probability of existence of the entity based on the associated features using the classifier and the aggregation calibration function.
    Type: Application
    Filed: August 11, 2015
    Publication date: February 16, 2017
    Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Nitin Khandelwal, Sudheendra Vijayanarasimhan, Weilong Yang, Sanketh Shetty
  • Publication number: 20160125034
    Abstract: A system and method of annotating an application, including obtaining input signals associated with a target application, wherein the input signals are obtained from a plurality of sources, obtaining first annotation data from the obtained input signals, generating second annotation data in a machine-understandable form based on the first annotation data, and associating the second annotation data with the target application.
    Type: Application
    Filed: February 5, 2015
    Publication date: May 5, 2016
    Inventors: Huazhong Ning, Weilong Yang, Tianhong Fang, Min-hsuan Tsai, Hrishikesh Balkrishna Aradhye
  • Publication number: 20160070962
    Abstract: A computer-implemented method for selecting representative frames for videos is provided. The method includes receiving a video and identifying a set of features for each of the frames of the video. The features include frame-based features and semantic features; the semantic features identify likelihoods of semantic concepts being present as content in the frames of the video. A set of video segments for the video is subsequently generated. Each video segment includes a chronological subset of frames from the video, and each frame is associated with at least one of the semantic features. The method generates a score for each frame of the subset of frames for each video segment based at least on the semantic features, and selects a representative frame for each video segment based on the scores of the frames in the video segment. The representative frame represents and summarizes the video segment.
    Type: Application
    Filed: September 8, 2015
    Publication date: March 10, 2016
    Inventors: Sanketh Shetty, Tomas Izo, Min-Hsuan Tsai, Sudheendra Vijayanarasimhan, Apostol Natsev, Sami Abu-El-Haija, George Dan Toderici, Susanna Ricco, Balakrishnan Varadarajan, Nicola Muscettola, WeiHsin Gu, Weilong Yang, Nitin Khandelwal, Phuong Le
  • Patent number: 8930288
    Abstract: A tag learning module trains video classifiers associated with a stored set of tags derived from textual metadata of a plurality of videos, the training based on features extracted from training videos. Each of the tag classifiers is comprised of a plurality of subtag classifiers relating to latent subtags within the tag. The latent subtags can be initialized by clustering cowatch information relating to the videos for a tag. After initialization to identify subtag groups, a subtag classifier can be trained on features extracted from each subtag group. Iterative training of the subtag classifiers can be accomplished by identifying the latent subtags of a training set using the subtag classifiers, then iteratively improving the subtag classifiers by training each subtag classifier with the videos designated as conforming closest to that subtag.
    Type: Grant
    Filed: November 11, 2011
    Date of Patent: January 6, 2015
    Assignee: Google Inc.
    Inventors: George D. Toderici, Weilong Yang
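The iterative subtag training loop in this abstract (initialize subtag groups by clustering, fit a classifier per group, reassign videos to their closest subtag, repeat) has the shape of a k-means-style refinement. A minimal sketch under that assumption, using centroids over feature vectors as stand-ins for the subtag classifiers; the patent's initialization actually clusters cowatch data, which is elided here.

```python
def nearest_subtag(video, centroids):
    # assign a video's feature vector to the closest subtag centroid
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(centroids)), key=lambda k: dist(video, centroids[k]))

def train_subtags(videos, centroids, iterations=5):
    # iterative refinement: identify each video's latent subtag, then refit
    # each subtag "classifier" (a centroid here) on its assigned videos
    for _ in range(iterations):
        groups = [[] for _ in centroids]
        for v in videos:
            groups[nearest_subtag(v, centroids)].append(v)
        centroids = [
            [sum(col) / len(g) for col in zip(*g)] if g else c
            for g, c in zip(groups, centroids)
        ]
    return centroids

videos = [[0.0, 0.1], [0.1, 0.0], [0.9, 1.0], [1.0, 0.9]]
init = [[0.0, 0.0], [1.0, 1.0]]  # stand-in for cowatch-based initialization
print(train_subtags(videos, init))  # two well-separated subtag centroids
```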
  • Publication number: 20120123978
    Abstract: A tag learning module trains video classifiers associated with a stored set of tags derived from textual metadata of a plurality of videos, the training based on features extracted from training videos. Each of the tag classifiers is comprised of a plurality of subtag classifiers relating to latent subtags within the tag. The latent subtags can be initialized by clustering cowatch information relating to the videos for a tag. After initialization to identify subtag groups, a subtag classifier can be trained on features extracted from each subtag group. Iterative training of the subtag classifiers can be accomplished by identifying the latent subtags of a training set using the subtag classifiers, then iteratively improving the subtag classifiers by training each subtag classifier with the videos designated as conforming closest to that subtag.
    Type: Application
    Filed: November 11, 2011
    Publication date: May 17, 2012
    Applicant: Google Inc.
    Inventors: George Toderici, Weilong Yang