Patents by Inventor George Dan Toderici

George Dan Toderici has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11042553
    Abstract: Facilitation of content entity annotation while maintaining joint quality, coverage, and/or completeness performance conditions is provided. In one example, a non-transitory computer-readable medium comprises computer-readable instructions that, in response to execution, cause a computing system to perform operations. The operations include aggregating information indicative of initial entities for content and initial scores associated with the initial entities received from one or more content annotation sources, and mapping the initial scores to respective values to generate calibrated scores. The operations include applying weights to the calibrated scores to generate weighted scores and combining the weighted scores using a linear aggregation model to generate a final score. The operations include determining whether to annotate the content with at least one of the initial entities based on a comparison of the final score and a defined threshold value.
    Type: Grant
    Filed: November 21, 2017
    Date of Patent: June 22, 2021
    Assignee: Google LLC
    Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Weilong Yang, John Burge, Sanketh Shetty, Omid Madani
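    A minimal Python sketch of the score-aggregation idea described in the abstract above: calibrate per-source scores, combine them with a weighted linear model, and compare against a threshold. The clamp-style calibration, the weights, and the threshold are illustrative assumptions, not the patented implementation.

      # Illustrative sketch of weighted linear aggregation of calibrated annotation scores.
      # The calibration mapping, weights, and threshold are assumptions, not the patented method.

      def calibrate(score: float) -> float:
          """Map a raw annotation score into [0, 1] (placeholder calibration)."""
          return max(0.0, min(1.0, score))

      def should_annotate(raw_scores: list[float], weights: list[float], threshold: float) -> bool:
          """Combine calibrated scores with a linear model and compare to a threshold."""
          calibrated = [calibrate(s) for s in raw_scores]
          final_score = sum(w * c for w, c in zip(weights, calibrated))
          return final_score >= threshold

      # Example: scores from three annotation sources for one candidate entity.
      print(should_annotate([0.9, 0.4, 1.2], weights=[0.5, 0.3, 0.2], threshold=0.6))  # True
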
  • Publication number: 20210166035
    Abstract: A computer-implemented method for selecting representative frames for videos is provided. The method includes receiving a video and identifying a set of features for each of the frames of the video. The features include frame-based features and semantic features, with the semantic features identifying the likelihoods of semantic concepts being present as content in the frames of the video. A set of video segments for the video is subsequently generated. Each video segment includes a chronological subset of frames from the video, and each frame is associated with at least one of the semantic features. The method generates a score for each frame of the subset of frames for each video segment based at least on the semantic features, and selects a representative frame for each video segment based on the scores of the frames in the video segment. The representative frame represents and summarizes the video segment.
    Type: Application
    Filed: December 14, 2020
    Publication date: June 3, 2021
    Inventors: Sanketh Shetty, Tomas Izo, Min-Hsuan Tsai, Sudheendra Vijayanarasimhan, Apostol Natsev, Sami Abu-El-Haija, George Dan Toderici, Susanna Ricco, Balakrishnan Varadarajan, Nicola Muscettola, WeiHsin Gu, Weilong Yang, Nitin Khandelwal, Phuong Le
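    A minimal sketch of the per-segment selection step from the abstract above: score each frame from its semantic-concept likelihoods and keep the highest-scoring frame per segment. The scoring rule (summing likelihoods) and the toy data are assumptions for demonstration only.

      # Illustrative sketch: score frames per segment from semantic features and pick
      # the highest-scoring frame as the segment's representative.

      def frame_score(semantic_features: dict[str, float]) -> float:
          """Score a frame as the sum of its semantic-concept likelihoods (placeholder rule)."""
          return sum(semantic_features.values())

      def representative_frames(segments: list[list[dict[str, float]]]) -> list[int]:
          """Return, for each segment, the index of its highest-scoring frame."""
          return [max(range(len(seg)), key=lambda i: frame_score(seg[i])) for seg in segments]

      # Two toy segments, each a chronological list of frames with concept likelihoods.
      segments = [
          [{"dog": 0.2, "park": 0.1}, {"dog": 0.9, "park": 0.7}],
          [{"cake": 0.3}, {"cake": 0.8}, {"cake": 0.5}],
      ]
      print(representative_frames(segments))  # -> [1, 1]
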
  • Patent number: 10867183
    Abstract: A computer-implemented method for selecting representative frames for videos is provided. The method includes receiving a video and identifying a set of features for each of the frames of the video. The features include frame-based features and semantic features, with the semantic features identifying the likelihoods of semantic concepts being present as content in the frames of the video. A set of video segments for the video is subsequently generated. Each video segment includes a chronological subset of frames from the video, and each frame is associated with at least one of the semantic features. The method generates a score for each frame of the subset of frames for each video segment based at least on the semantic features, and selects a representative frame for each video segment based on the scores of the frames in the video segment. The representative frame represents and summarizes the video segment.
    Type: Grant
    Filed: April 23, 2018
    Date of Patent: December 15, 2020
    Assignee: Google LLC
    Inventors: Sanketh Shetty, Tomas Izo, Min-Hsuan Tsai, Sudheendra Vijayanarasimhan, Apostol Natsev, Sami Abu-El-Haija, George Dan Toderici, Susanna Ricco, Balakrishnan Varadarajan, Nicola Muscettola, WeiHsin Gu, Weilong Yang, Nitin Khandelwal, Phuong Le
  • Publication number: 20200311548
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving, by a neural network (NN), a dataset for generating features from the dataset. A first set of features is computed from the dataset using at least a feature layer of the NN. The first set of features i) is characterized by a measure of informativeness; and ii) is computed such that the first set of features is compressible into a second set of features that is smaller in size than the first set of features and that has the same measure of informativeness as the first set of features. The second set of features is generated from the first set of features using a compression method that compresses the first set of features to generate the second set of features.
    Type: Application
    Filed: October 29, 2019
    Publication date: October 1, 2020
    Inventors: Abhinav Shrivastava, Saurabh Singh, Johannes Balle, Sami Ahmad Abu-El-Haija, Nicholas Johnston, George Dan Toderici
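    A rough sketch of the two-stage structure described above: a feature layer produces a larger first feature set, which is then compressed into a smaller second set. The random linear maps stand in for the networks and the compression method; the publication does not specify this particular scheme, and nothing here actually guarantees the informativeness property.

      # Illustrative sketch: compute a feature vector, then compress it into a smaller one.
      import numpy as np

      rng = np.random.default_rng(0)

      def feature_layer(x: np.ndarray) -> np.ndarray:
          """Placeholder 'feature layer': a fixed random linear map plus ReLU."""
          w = rng.standard_normal((x.size, 64))
          return np.maximum(x @ w, 0.0)

      def compress(features: np.ndarray, k: int = 16) -> np.ndarray:
          """Compress the first feature set into a smaller second set (random projection stand-in)."""
          p = rng.standard_normal((features.size, k)) / np.sqrt(k)
          return features @ p

      x = rng.standard_normal(32)
      first = feature_layer(x)       # first, larger feature set
      second = compress(first)       # second, smaller feature set
      print(first.shape, second.shape)  # (64,) (16,)
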
  • Patent number: 10713818
    Abstract: Methods and systems, including computer programs encoded on computer storage media, for compressing data items with a variable compression rate. A system includes an encoder sub-network configured to receive a system input image and to generate an encoded representation of the system input image, the encoder sub-network including a first stack of neural network layers including one or more LSTM neural network layers and one or more non-LSTM neural network layers, the first stack configured to, at each of a plurality of time steps, receive an input image for the time step that is derived from the system input image and generate a corresponding first stack output, and a binarizing neural network layer configured to receive a first stack output as input and generate a corresponding binarized output.
    Type: Grant
    Filed: January 28, 2019
    Date of Patent: July 14, 2020
    Assignee: Google LLC
    Inventors: George Dan Toderici, Sean O'Malley, Rahul Sukthankar, Sung Jin Hwang, Damien Vincent, Nicholas Johnston, David Charles Minnen, Joel Shor, Michele Covell
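    A minimal sketch of the variable-rate structure from the abstract above: at each time step the encoder receives a residual derived from the input image and a binarizing layer emits codes, so running more steps produces more bits. The linear "encoder" and "decoder" are untrained placeholders, not the patented LSTM stacks, and the residual will not actually shrink with random weights.

      # Illustrative sketch of iterative residual encoding with a binarizing layer.
      import numpy as np

      rng = np.random.default_rng(0)
      enc = rng.standard_normal((16, 8)) * 0.3   # placeholder encoder weights
      dec = rng.standard_normal((8, 16)) * 0.3   # placeholder decoder weights

      def binarize(z: np.ndarray) -> np.ndarray:
          """Binarizing layer: map activations to {-1, +1}."""
          return np.where(z >= 0.0, 1.0, -1.0)

      image = rng.standard_normal(16)
      residual, bits = image.copy(), []
      for _ in range(4):                         # more time steps -> more bits emitted
          codes = binarize(residual @ enc)
          bits.append(codes)
          residual = residual - codes @ dec      # next step encodes what is left
      print(len(bits) * bits[0].size, "bits emitted")
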
  • Publication number: 20200111238
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for image compression and reconstruction. An image encoder system receives a request to generate an encoded representation of an input image that has been partitioned into a plurality of tiles and generates the encoded representation of the input image. To generate the encoded representation, the system processes a context for each tile using a spatial context prediction neural network that has been trained to process context for an input tile and generate an output tile that is a prediction of the input tile. For each particular tile, the system determines a residual image between the particular tile and the output tile that the spatial context prediction neural network generated by processing the context for the particular tile, and generates a set of binary codes for the particular tile by encoding the residual image using an encoder neural network.
    Type: Application
    Filed: May 29, 2018
    Publication date: April 9, 2020
    Inventors: Michele Covell, Damien Vincent, David Charles Minnen, Saurabh Singh, Sung Jin Hwang, Nicholas Johnston, Joel Eric Shor, George Dan Toderici
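    A small sketch of the per-tile flow described above: predict a tile from its spatial context, then encode only the residual. The mean-fill predictor, sign-bit residual coder, and the assumed context layout are trivial placeholders standing in for the neural networks in the publication.

      # Illustrative sketch of per-tile residual coding with a context-based prediction.
      import numpy as np

      def predict_tile_from_context(context: np.ndarray) -> np.ndarray:
          """Placeholder spatial-context predictor: fill the tile with the context mean."""
          return np.full((4, 4), context.mean())

      def encode_residual(residual: np.ndarray) -> np.ndarray:
          """Placeholder residual coder: coarse sign bits instead of learned binary codes."""
          return (residual >= 0.0).astype(np.uint8)

      rng = np.random.default_rng(0)
      context = rng.random((4, 12))   # assumed layout: pixels above/left of the tile
      tile = rng.random((4, 4))
      prediction = predict_tile_from_context(context)
      codes = encode_residual(tile - prediction)
      print(codes)
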
  • Publication number: 20200082173
    Abstract: A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method identifies an entity from a plurality of entities identifying characteristics of video items. The computer-implemented method selects a set of features correlated with the entity based on a value of a feature of a plurality of features, determines a classifier for the entity using the set of features, and determines an aggregation calibration function for the entity based on the set of features. The computer-implemented method selects a video frame from a video item, where the video frame has associated features, and determines a probability of existence of the entity based on the associated features using the classifier and the aggregation calibration function.
    Type: Application
    Filed: November 18, 2019
    Publication date: March 12, 2020
    Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Nitin Khandelwal, Sudheendra Vijayanarasimhan, Weilong Yang, Sanketh Shetty
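    A minimal sketch of the last step in the abstract above: a per-entity classifier scores a frame's features, and a calibration function maps that score to a probability of existence. The linear classifier and logistic calibration are assumed placeholders, not the patented models.

      # Illustrative sketch: classifier score -> calibrated probability of existence.
      import math

      def classifier_score(features: dict[str, float], weights: dict[str, float]) -> float:
          """Linear classifier over the features selected for this entity (placeholder)."""
          return sum(weights.get(name, 0.0) * value for name, value in features.items())

      def aggregation_calibration(score: float) -> float:
          """Map a raw classifier score to a probability (logistic placeholder)."""
          return 1.0 / (1.0 + math.exp(-score))

      frame_features = {"color_hist_0": 0.7, "motion": 0.2, "audio_energy": 1.1}
      entity_weights = {"color_hist_0": 1.5, "motion": -0.4}
      print(aggregation_calibration(classifier_score(frame_features, entity_weights)))
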
  • Publication number: 20200027247
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for compressing and decompressing data. In one aspect, a method comprises: processing data using an encoder neural network to generate a latent representation of the data; processing the latent representation of the data using a hyper-encoder neural network to generate a latent representation of an entropy model; generating an entropy encoded representation of the latent representation of the entropy model; generating an entropy encoded representation of the latent representation of the data using the latent representation of the entropy model; and determining a compressed representation of the data from the entropy encoded representations of: (i) the latent representation of the data and (ii) the latent representation of the entropy model used to entropy encode the latent representation of the data.
    Type: Application
    Filed: July 18, 2019
    Publication date: January 23, 2020
    Inventors: David Charles Minnen, Saurabh Singh, Johannes Balle, Troy Chinen, Sung Jin Hwang, Nicholas Johnston, George Dan Toderici
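    A rough sketch of the two-level structure from the abstract above: one latent represents the data, and a "hyper" latent parameterizes the entropy model used to code the data latent. Real entropy coding is replaced by an ideal bit count, the networks are random linear maps, and the way the hyper latent sets the per-symbol scales is an assumption.

      # Illustrative sketch: data latent + entropy-model latent, with ideal bit accounting.
      import numpy as np

      rng = np.random.default_rng(0)
      encoder = rng.standard_normal((32, 8))        # placeholder encoder network
      hyper_encoder = rng.standard_normal((8, 4))   # placeholder hyper-encoder network

      def ideal_bits(symbols: np.ndarray, scale: np.ndarray) -> float:
          """Approximate bits under a Gaussian entropy model (stand-in for a real coder)."""
          p = np.clip(np.exp(-0.5 * (symbols / scale) ** 2) / (scale * np.sqrt(2 * np.pi)), 1e-9, None)
          return float(-np.log2(p).sum())

      x = rng.standard_normal(32)
      y = np.round(x @ encoder)                 # quantized latent representation of the data
      z = np.round(y @ hyper_encoder)           # quantized latent representation of the entropy model
      scales = np.exp(np.abs(z)).repeat(2)      # assumed: z sets per-symbol scales for coding y
      total_bits = ideal_bits(z, np.ones_like(z)) + ideal_bits(y, scales)
      print(round(total_bits, 1), "bits for data latent + entropy-model latent")
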
  • Publication number: 20190356330
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for compressing and decompressing data. In one aspect, an encoder neural network processes data to generate an output including a representation of the data as an ordered collection of code symbols. The ordered collection of code symbols is entropy encoded using one or more code symbol probability distributions. A compressed representation of the data is determined based on the entropy encoded representation of the collection of code symbols and data indicating the code symbol probability distributions used to entropy encode the collection of code symbols. In another aspect, a compressed representation of the data is decoded to determine the collection of code symbols representing the data. A reconstruction of the data is determined by processing the collection of code symbols by a decoder neural network.
    Type: Application
    Filed: May 21, 2018
    Publication date: November 21, 2019
    Inventors: David Charles Minnen, Michele Covell, Saurabh Singh, Sung Jin Hwang, George Dan Toderici
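    A small sketch of the idea above: represent data as discrete code symbols, then measure the cost of entropy-coding them under a symbol distribution that would accompany the codes in the compressed representation. A simple quantizer stands in for the encoder neural network, and the ideal code length stands in for a real entropy coder.

      # Illustrative sketch: code symbols plus their empirical distribution.
      from collections import Counter
      import math

      def to_code_symbols(values: list[float], step: float = 0.5) -> list[int]:
          """Placeholder 'encoder': quantize values onto an integer symbol grid."""
          return [round(v / step) for v in values]

      def entropy_coded_bits(symbols: list[int]) -> float:
          """Ideal code length of the symbols under their empirical distribution."""
          counts = Counter(symbols)
          total = len(symbols)
          return sum(-math.log2(counts[s] / total) for s in symbols)

      data = [0.1, 0.4, 0.43, -0.9, 0.38, 0.05, 0.41, -0.88]
      symbols = to_code_symbols(data)
      print(symbols, round(entropy_coded_bits(symbols), 2), "bits")
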
  • Patent number: 10482328
    Abstract: A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method identifies an entity from a plurality of entities identifying characteristics of video items. The computer-implemented method selects a set of features correlated with the entity based on a value of a feature of a plurality of features, determines a classifier for the entity using the set of features, and determines an aggregation calibration function for the entity based on the set of features. The computer-implemented method selects a video frame from a video item, where the video frame has associated features, and determines a probability of existence of the entity based on the associated features using the classifier and the aggregation calibration function.
    Type: Grant
    Filed: October 2, 2017
    Date of Patent: November 19, 2019
    Assignee: Google LLC
    Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Nitin Khandelwal, Sudheendra Vijayanarasimhan, Weilong Yang, Sanketh Shetty
  • Patent number: 10289912
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for classifying videos using neural networks. One of the methods includes obtaining a temporal sequence of video frames, wherein the temporal sequence comprises a respective video frame from a particular video at each of a plurality of time steps; for each time step of the plurality of time steps: processing the video frame at the time step using a convolutional neural network to generate features of the video frame, and processing the features of the video frame using an LSTM neural network to generate a set of label scores for the time step; and classifying the video as relating to one or more of the topics represented by labels in the set of labels from the label scores for each of the plurality of time steps.
    Type: Grant
    Filed: April 29, 2016
    Date of Patent: May 14, 2019
    Assignee: Google LLC
    Inventors: Sudheendra Vijayanarasimhan, George Dan Toderici, Yue Hei Ng, Matthew John Hausknecht, Oriol Vinyals, Rajat Monga
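    A minimal sketch of the per-time-step pipeline described above: frame, then features, then a recurrent update, then label scores, with a final classification aggregated over time steps. The feature extractor and recurrence are toy stand-ins for the convolutional and LSTM networks, and the label count and aggregation rule are assumptions.

      # Illustrative sketch of frame -> features -> recurrent state -> label scores.
      import numpy as np

      rng = np.random.default_rng(0)
      w_feat = rng.standard_normal((64, 16)) * 0.1   # placeholder "CNN" projection
      w_rec = rng.standard_normal((16, 16)) * 0.1    # placeholder recurrent weights (not a real LSTM)
      w_out = rng.standard_normal((16, 5)) * 0.1     # 5 topic labels (assumed)

      frames = rng.standard_normal((10, 64))         # 10 frames of toy pixel features
      state = np.zeros(16)
      step_scores = []
      for frame in frames:
          features = np.tanh(frame @ w_feat)         # per-frame features
          state = np.tanh(features + state @ w_rec)  # recurrent update
          step_scores.append(state @ w_out)          # label scores at this time step
      video_scores = np.mean(step_scores, axis=0)    # aggregate over time steps (assumed rule)
      print("predicted topic:", int(np.argmax(video_scores)))
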
  • Patent number: 10192327
    Abstract: Methods and systems, including computer programs encoded on computer storage media, for compressing data items with a variable compression rate. A system includes an encoder sub-network configured to receive a system input image and to generate an encoded representation of the system input image, the encoder sub-network including a first stack of neural network layers including one or more LSTM neural network layers and one or more non-LSTM neural network layers, the first stack configured to, at each of a plurality of time steps, receive an input image for the time step that is derived from the system input image and generate a corresponding first stack output, and a binarizing neural network layer configured to receive a first stack output as input and generate a corresponding binarized output.
    Type: Grant
    Filed: February 3, 2017
    Date of Patent: January 29, 2019
    Assignee: Google LLC
    Inventors: George Dan Toderici, Sean O'Malley, Rahul Sukthankar, Sung Jin Hwang, Damien Vincent, Nicholas Johnston, David Charles Minnen, Joel Shor, Michele Covell
  • Publication number: 20180239964
    Abstract: A computer-implemented method for selecting representative frames for videos is provided. The method includes receiving a video and identifying a set of features for each of the frames of the video. The features include frame-based features and semantic features, with the semantic features identifying the likelihoods of semantic concepts being present as content in the frames of the video. A set of video segments for the video is subsequently generated. Each video segment includes a chronological subset of frames from the video, and each frame is associated with at least one of the semantic features. The method generates a score for each frame of the subset of frames for each video segment based at least on the semantic features, and selects a representative frame for each video segment based on the scores of the frames in the video segment. The representative frame represents and summarizes the video segment.
    Type: Application
    Filed: April 23, 2018
    Publication date: August 23, 2018
    Inventors: Sanketh Shetty, Tomas Izo, Min-Hsuan Tsai, Sudheendra Vijayanarasimhan, Apostol Natsev, Sami Abu-El-Haija, George Dan Toderici, Susanna Ricco, Balakrishnan Varadarajan, Nicola Muscettola, WeiHsin Gu, Weilong Yang, Nitin Khandelwal, Phuong Le
  • Patent number: 9953222
    Abstract: A computer-implemented method for selecting representative frames for videos is provided. The method includes receiving a video and identifying a set of features for each of the frames of the video. The features include frame-based features and semantic features, with the semantic features identifying the likelihoods of semantic concepts being present as content in the frames of the video. A set of video segments for the video is subsequently generated. Each video segment includes a chronological subset of frames from the video, and each frame is associated with at least one of the semantic features. The method generates a score for each frame of the subset of frames for each video segment based at least on the semantic features, and selects a representative frame for each video segment based on the scores of the frames in the video segment. The representative frame represents and summarizes the video segment.
    Type: Grant
    Filed: September 8, 2015
    Date of Patent: April 24, 2018
    Assignee: Google LLC
    Inventors: Sanketh Shetty, Tomas Izo, Min-Hsuan Tsai, Sudheendra Vijayanarasimhan, Apostol Natsev, Sami Abu-El-Haija, George Dan Toderici, Susanna Ricco, Balakrishnan Varadarajan, Nicola Muscettola, WeiHsin Gu, Weilong Yang, Nitin Khandelwal, Phuong Le
  • Publication number: 20180089200
    Abstract: Facilitation of content entity annotation while maintaining joint quality, coverage, and/or completeness performance conditions is provided. In one example, a non-transitory computer-readable medium comprises computer-readable instructions that, in response to execution, cause a computing system to perform operations. The operations include aggregating information indicative of initial entities for content and initial scores associated with the initial entities received from one or more content annotation sources, and mapping the initial scores to respective values to generate calibrated scores. The operations include applying weights to the calibrated scores to generate weighted scores and combining the weighted scores using a linear aggregation model to generate a final score. The operations include determining whether to annotate the content with at least one of the initial entities based on a comparison of the final score and a defined threshold value.
    Type: Application
    Filed: November 21, 2017
    Publication date: March 29, 2018
    Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Weilong Yang, John Burge, Sanketh Shetty, Omid Madani
  • Publication number: 20180025228
    Abstract: A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method identifies an entity from a plurality of entities identifying characteristics of video items. The computer-implemented method selects a set of features correlated with the entity based on a value of a feature of a plurality of features, determines a classifier for the entity using the set of features, and determines an aggregation calibration function for the entity based on the set of features. The computer-implemented method selects a video frame from a video item, where the video frame has associated features, and determines a probability of existence of the entity based on the associated features using the classifier and the aggregation calibration function.
    Type: Application
    Filed: October 2, 2017
    Publication date: January 25, 2018
    Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Nitin Khandelwal, Sudheendra Vijayanarasimhan, Weilong Yang, Sanketh Shetty
  • Patent number: 9830361
    Abstract: Facilitation of content entity annotation while maintaining joint quality, coverage and/or completeness performance conditions is provided. In one example, a system includes an aggregation component that aggregates signals indicative of initial entities for content and initial scores associated with the initial entities generated by one or more content annotation sources; and a mapping component that maps the initial scores to calibrated scores within a defined range. The system also includes a linear aggregation component that: applies selected weights to the calibrated scores, wherein the selected weights are based on joint performance conditions; and combines the weighted, calibrated scores based on a selected linear aggregation model of a plurality of linear aggregation models to generate a final score. The system also includes an annotation component that determines whether to annotate the content with one of the initial entities based on a comparison of the final score and a defined threshold value.
    Type: Grant
    Filed: December 4, 2013
    Date of Patent: November 28, 2017
    Assignee: Google Inc.
    Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Weilong Yang, John Burge, Sanketh Shetty, Omid Madani
  • Patent number: 9779304
    Abstract: A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method identifies an entity from a plurality of entities identifying characteristics of video items. The computer-implemented method selects a set of features correlated with the entity based on a value of a feature of a plurality of features, determines a classifier for the entity using the set of features, and determines an aggregation calibration function for the entity based on the set of features. The computer-implemented method selects a video frame from a video item, where the video frame has associated features, and determines a probability of existence of the entity based on the associated features using the classifier and the aggregation calibration function.
    Type: Grant
    Filed: August 11, 2015
    Date of Patent: October 3, 2017
    Assignee: Google Inc.
    Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Nitin Khandelwal, Sudheendra Vijayanarasimhan, Weilong Yang, Sanketh Shetty
  • Publication number: 20170046573
    Abstract: A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method identifies an entity from a plurality of entities identifying characteristics of video items. The computer-implemented method selects a set of features correlated with the entity based on a value of a feature of a plurality of features, determines a classifier for the entity using the set of features, and determines an aggregation calibration function for the entity based on the set of features. The computer-implemented method selects a video frame from a video item, where the video frame has associated features, and determines a probability of existence of the entity based on the associated features using the classifier and the aggregation calibration function.
    Type: Application
    Filed: August 11, 2015
    Publication date: February 16, 2017
    Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Nitin Khandelwal, Sudheendra Vijayanarasimhan, Weilong Yang, Sanketh Shetty
  • Publication number: 20160378863
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for selecting representative frames for videos. One of the methods includes receiving a search query; determining a query representation for the search query; obtaining data identifying a plurality of responsive videos for the search query, wherein each responsive video comprises a plurality of frames, wherein each frame has a respective frame representation; selecting, for each responsive video, a representative frame from the responsive video using the query representation and the frame representations for the frames in the responsive video; and generating a response to the search query, wherein the response to the search query includes a respective video search result for each of the responsive videos, and wherein the respective video search result for each of the responsive videos includes a presentation of the representative video frame from the responsive video.
    Type: Application
    Filed: June 24, 2015
    Publication date: December 29, 2016
    Inventors: Jonathon Shlens, George Dan Toderici, Sami Ahmad Abu-El-Haija
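    A minimal sketch of the selection step in the last entry above: for each responsive video, pick the frame whose representation is most similar to the query representation. The random embeddings and cosine similarity are assumptions; the publication does not specify these particular representations or similarity.

      # Illustrative sketch: choose a representative frame per video for a search query.
      import numpy as np

      rng = np.random.default_rng(0)

      def cosine(a: np.ndarray, b: np.ndarray) -> float:
          """Cosine similarity between two representations."""
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

      query_repr = rng.standard_normal(8)                       # query representation (toy)
      videos = [rng.standard_normal((5, 8)) for _ in range(3)]  # 3 responsive videos x 5 frame representations

      for i, frame_reprs in enumerate(videos):
          best = max(range(len(frame_reprs)), key=lambda j: cosine(query_repr, frame_reprs[j]))
          print(f"video {i}: representative frame {best}")
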