Patents by Inventor Sudheendra Vijayanarasimhan

Sudheendra Vijayanarasimhan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20160180200
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for classification using a neural network. One of the methods processes an input through each of multiple layers of a neural network to generate an output, where each of the multiple layers includes a respective plurality of nodes. For a particular layer of the multiple layers, the method includes: receiving, by a classification system, an activation vector as input for the particular layer; selecting one or more nodes in the particular layer using the activation vector and a hash table that maps numeric values to nodes in the particular layer; and processing the activation vector using the selected nodes to generate an output for the particular layer.
    Type: Application
    Filed: November 5, 2015
    Publication date: June 23, 2016
    Inventors: Sudheendra Vijayanarasimhan, Jay Yagnik
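    A minimal sketch of the idea in this abstract, not the patented method: hash the layer's input activation with a locality-sensitive hash, look up candidate nodes in a hash table keyed on the same hash of each node's weight vector, and compute outputs only for the selected nodes. The sign-random-projection hash, layer sizes, and ReLU nonlinearity below are illustrative assumptions.

```python
# Sketch: hash-table node selection inside a single layer (assumptions noted above).
import numpy as np

def srp_hash(vec, planes):
    """Sign-random-projection hash: one bit per random hyperplane."""
    bits = (planes @ vec) > 0
    return int("".join("1" if b else "0" for b in bits), 2)

rng = np.random.default_rng(0)
dim, n_nodes, n_bits = 64, 256, 6
weights = rng.normal(size=(n_nodes, dim))      # per-node weight vectors
biases = rng.normal(size=n_nodes)
planes = rng.normal(size=(n_bits, dim))        # shared hashing hyperplanes

# Hash table: bucket value -> indices of nodes whose weight vectors hash there.
table = {}
for idx, w in enumerate(weights):
    table.setdefault(srp_hash(w, planes), []).append(idx)

def layer_forward(activation):
    """Process the activation vector using only the nodes in the matching bucket."""
    selected = table.get(srp_hash(activation, planes), [])
    output = np.zeros(n_nodes)
    for idx in selected:                        # all non-selected nodes are skipped
        output[idx] = max(0.0, weights[idx] @ activation + biases[idx])  # ReLU
    return output

print(layer_forward(rng.normal(size=dim)).nonzero()[0])  # indices of active nodes
```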
  • Publication number: 20160070962
    Abstract: A computer-implemented method for selecting representative frames for videos is provided. The method includes receiving a video and identifying a set of features for each of the frames of the video. The features include frame-based features and semantic features; the semantic features identify the likelihoods of semantic concepts being present as content in the frames of the video. A set of video segments for the video is subsequently generated. Each video segment includes a chronological subset of frames from the video, and each frame is associated with at least one of the semantic features. The method generates a score for each frame of the subset of frames for each video segment based at least on the semantic features, and selects a representative frame for each video segment based on the scores of the frames in the video segment. The representative frame represents and summarizes the video segment.
    Type: Application
    Filed: September 8, 2015
    Publication date: March 10, 2016
    Inventors: Sanketh Shetty, Tomas Izo, Min-Hsuan Tsai, Sudheendra Vijayanarasimhan, Apostol Natsev, Sami Abu-El-Haija, George Dan Toderici, Susanna Ricco, Balakrishnan Varadarajan, Nicola Muscettola, WeiHsin Gu, Weilong Yang, Nitin Khandelwal, Phuong Le
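    A minimal sketch of the frame-selection idea, under assumptions not stated in the abstract: per-frame semantic concept likelihoods are given as a matrix, segments are fixed-length, a frame's score is a simple sum of its concept likelihoods, and the highest-scoring frame in each segment is taken as its representative.

```python
# Sketch: pick one representative frame per fixed-length segment (assumed scoring).
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_concepts, seg_len = 120, 10, 30
semantic = rng.random((n_frames, n_concepts))   # likelihood of each concept per frame

def representative_frames(semantic_scores, segment_length):
    reps = []
    for start in range(0, len(semantic_scores), segment_length):
        segment = semantic_scores[start:start + segment_length]
        frame_scores = segment.sum(axis=1)       # score each frame in the segment
        reps.append(start + int(frame_scores.argmax()))
    return reps

print(representative_frames(semantic, seg_len))  # one frame index per segment
```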
  • Patent number: 9230159
    Abstract: This disclosure generally relates to systems and methods that employ exemplar Histogram of Oriented Gradients Linear Discriminant Analysis (HOG-LDA) models along with Localizer Hidden Markov Models (HMMs) to train a classification model that classifies actions in videos. The model learns the poses, and the transitions between poses, associated with each action, in view of a continuous state represented by bounding boxes indicating where the action is located in the frames of the video.
    Type: Grant
    Filed: December 9, 2013
    Date of Patent: January 5, 2016
    Assignee: Google Inc.
    Inventors: Sudheendra Vijayanarasimhan, Balakrishnan Varadarajan, Rahul Sukthankar
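    A minimal sketch of only the "poses and transitions" portion of this approach: per-frame pose scores (stand-ins for exemplar HOG-LDA detector responses) are decoded with a small HMM via Viterbi to recover the most likely pose sequence. The pose labels, transition matrix, and scores are illustrative assumptions, and the bounding-box localization state is omitted.

```python
# Sketch: Viterbi decoding of a pose sequence from per-frame pose scores (toy values).
import numpy as np

poses = ["wind-up", "swing", "follow-through"]
log_trans = np.log(np.array([[0.7, 0.3, 0.0],
                             [0.0, 0.7, 0.3],
                             [0.1, 0.0, 0.9]]) + 1e-9)   # pose-to-pose transitions
log_prior = np.log(np.array([0.8, 0.1, 0.1]))            # starting-pose prior

def viterbi(frame_log_scores):
    """frame_log_scores: (n_frames, n_poses) detector responses in log space."""
    n_frames, n_poses = frame_log_scores.shape
    dp = np.full((n_frames, n_poses), -np.inf)
    back = np.zeros((n_frames, n_poses), dtype=int)
    dp[0] = log_prior + frame_log_scores[0]
    for t in range(1, n_frames):
        for j in range(n_poses):
            cand = dp[t - 1] + log_trans[:, j]
            back[t, j] = int(cand.argmax())
            dp[t, j] = cand.max() + frame_log_scores[t, j]
    path = [int(dp[-1].argmax())]
    for t in range(n_frames - 1, 0, -1):                  # trace back the best path
        path.append(back[t, path[-1]])
    return [poses[i] for i in reversed(path)]

rng = np.random.default_rng(0)
print(viterbi(np.log(rng.random((6, 3)) + 1e-9)))         # most likely pose per frame
```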
  • Patent number: 9076065
    Abstract: Techniques for detecting the location of an object of interest in a visual image are presented. A detector component extracts Histogram of Gradient (HOG) features from grid regions associated with the visual image. A trained linear filter model uses a classifier to facilitate differentiating between positive and negative instances of the object in grid regions based on HOG features. A classifier component detects the K top-scoring activations of filters associated with the visual image. The classifier component detects the location of the object in the visual image based on a generalized Hough transform, given filter locations associated with the visual image. The classifier component projects the object location given filter activations and clusters the filter activations into respective clusters. The classifier component classifies whether a cluster is associated with the object based on the weighted sum of the activation scores of filters within the cluster and object detection criteria.
    Type: Grant
    Filed: January 26, 2012
    Date of Patent: July 7, 2015
    Assignee: Google Inc.
    Inventor: Sudheendra Vijayanarasimhan
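    A minimal sketch of the Hough-voting step alone, with made-up offsets, scores, grid size, and threshold: each top-scoring filter activation votes for an object center via its filter's learned offset, votes are clustered on a coarse grid, and a cluster is kept if the weighted sum of its activation scores passes a threshold.

```python
# Sketch: generalized-Hough-style voting from filter activations (toy values).
from collections import defaultdict

# (filter_offset_xy, activation_location_xy, activation_score) for top-K activations
activations = [((-12, -8), (110, 64), 0.9),
               ((-12, -8), (112, 66), 0.8),
               ((40, 50), (20, 10), 0.7),
               ((0, 0), (300, 200), 0.2)]

def detect_objects(acts, cell=16, threshold=1.0):
    clusters = defaultdict(float)
    for (dx, dy), (x, y), score in acts:
        center = (x + dx, y + dy)                       # projected object location
        key = (center[0] // cell, center[1] // cell)    # coarse clustering bin
        clusters[key] += score                          # weighted sum of scores
    return [(kx * cell, ky * cell, total)
            for (kx, ky), total in clusters.items() if total >= threshold]

print(detect_objects(activations))   # surviving clusters ~ detected object locations
```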
  • Patent number: 8977627
    Abstract: This disclosure relates to filter based object detection using hash functions. A hashing component can compute respective hash values for a set of object windows that are associated with an image to be scanned. The hashing component can employ various hash functions in connection with computing the hash values, such as a winner takes all (WTA) hash function. A filter selection component can compare the respective hash values of the object windows against a hash table of object filters, and can select one or more object filters for recognizing or localizing at least one object within the image as a function of the comparison.
    Type: Grant
    Filed: November 1, 2011
    Date of Patent: March 10, 2015
    Assignee: Google Inc.
    Inventors: Sudheendra Vijayanarasimhan, Jay Yagnik
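    A minimal sketch of a winner-takes-all (WTA) hash used to index object filters, assuming toy descriptors and randomly chosen permutations: filters are bucketed by the WTA codes of their descriptors, and a window retrieves candidate filters by hashing its own descriptor the same way.

```python
# Sketch: WTA hashing and a hash-table lookup of candidate filters (toy data).
import numpy as np

rng = np.random.default_rng(0)
dim, k, n_perms = 32, 4, 6

# Fixed random permutations shared by filters and windows.
perms = [rng.permutation(dim) for _ in range(n_perms)]

def wta_hash(vec):
    """For each permutation, record which of the first k permuted entries is largest."""
    return tuple(int(np.argmax(vec[p[:k]])) for p in perms)

# Index object filters by the WTA hash of their (toy) descriptors.
filters = {f"filter_{i}": rng.normal(size=dim) for i in range(50)}
table = {}
for name, desc in filters.items():
    table.setdefault(wta_hash(desc), []).append(name)

# A window whose descriptor is a slightly noisy copy of filter_7 will usually
# hash to the same bucket, since WTA codes depend only on rank order.
window = filters["filter_7"] + 0.01 * rng.normal(size=dim)
print(table.get(wta_hash(window), []))   # candidate filters to evaluate on the window
```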