Patents by Inventor Jay Yagnik

Jay Yagnik has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8959540
    Abstract: User engagement in unwatched videos is predicted by collecting and aggregating data describing user engagement with watched videos. The data are normalized to reduce the influence of factors other than the content of the videos on user engagement. Engagement metrics are calculated for segments of watched videos that indicate user engagement with each segment relative to overall user engagement with the watched videos. Features of the watched videos within time windows are characterized, and a function is learned that relates the features of the videos within the time windows to the engagement metrics for the time windows. The features of a time window of an unwatched video are characterized, and the learned function is applied to the features to predict user engagement with the time window of the unwatched video. The unwatched video can be enhanced based on the predicted user engagement.
    Type: Grant
    Filed: May 19, 2010
    Date of Patent: February 17, 2015
    Assignee: Google Inc.
    Inventors: Ullas Gargi, Jay Yagnik, Anindya Sarkar
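The abstract's pipeline (per-window features → learned function → predicted engagement) can be illustrated with a minimal sketch. The linear model, the ridge regularizer, and all function names below are illustrative assumptions, not the patented method:

```python
import numpy as np

def learn_engagement_model(window_features, engagement_metrics, ridge=1e-3):
    """Fit a linear map from per-window video features to engagement
    metrics via ridge-regularized least squares."""
    X = np.asarray(window_features, dtype=float)
    y = np.asarray(engagement_metrics, dtype=float)
    d = X.shape[1]
    # Closed-form ridge solution: w = (X^T X + r I)^-1 X^T y
    w = np.linalg.solve(X.T @ X + ridge * np.eye(d), X.T @ y)
    return w

def predict_engagement(w, unwatched_window_features):
    """Apply the learned function to time windows of an unwatched video."""
    return np.asarray(unwatched_window_features, dtype=float) @ w
```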
  • Patent number: 8959128
    Abstract: Wiberg minimization operates on a system with two sets of variables described by a linear function and in which some data or observations are missing. The disclosure generalizes Wiberg minimization, solving for a function that is nonlinear in both sets of variables, U and V, iteratively. In one embodiment, a first function ƒ(U, V) may be defined that is nonlinear in both a first set of variables U and a second set of variables V. The first function ƒ(U, V) may be transformed into ƒ(U, V(U)). First assumed values of the first set of variables U may be assigned. The second set of variables V may be iteratively estimated based upon the transformed first function ƒ(U, V(U)) and the assumed values of the first set of variables U such that ƒ(U, V(U)) may be minimized with respect to V. New estimates of the first set of variables U may be iteratively computed.
    Type: Grant
    Filed: November 16, 2011
    Date of Patent: February 17, 2015
    Assignee: Google Inc.
    Inventors: Dennis Strelow, Jay Yagnik
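The two-set structure the abstract describes — hold U fixed, solve for V(U), then re-estimate U — can be conveyed with a simple alternating least-squares sketch for low-rank factorization with missing entries. Note the hedge in the docstring: true Wiberg minimization differentiates through V(U), which this alternation omits; all names here are hypothetical:

```python
import numpy as np

def alternating_factorization(M, mask, rank=1, iters=100, seed=0):
    """Sketch of the alternating scheme: with U fixed, the optimal V(U)
    follows from least squares over the observed entries, then U is
    re-estimated the same way. (Full Wiberg minimization additionally
    differentiates through V(U); this simpler alternation only conveys
    the two-variable-set structure.)"""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = rng.normal(size=(m, rank))
    V = rng.normal(size=(rank, n))
    for _ in range(iters):
        # Solve each column of V from the rows observed in that column.
        for j in range(n):
            rows = mask[:, j]
            if rows.any():
                V[:, j], *_ = np.linalg.lstsq(U[rows], M[rows, j], rcond=None)
        # Then re-estimate each row of U from the observed columns.
        for i in range(m):
            cols = mask[i, :]
            if cols.any():
                U[i, :], *_ = np.linalg.lstsq(V[:, cols].T, M[i, cols], rcond=None)
    return U, V
```

On clean low-rank data the alternation also fills in the entries the mask hides, since the missing value is determined by the recovered factors.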
  • Patent number: 8898148
    Abstract: A computer-implemented information targeting method is disclosed. The method includes receiving a search query from a computing device, where the search query has at least two different meanings, identifying metadata associated with the search query, using the metadata to promote search results corresponding to a first meaning of the at least two meanings of the search query, and providing search results corresponding to the first meaning of the search query to the computing device. Using the metadata to promote search results may comprise analyzing (a) prior search queries that are related to the received search query, (b) metadata associated with the prior search queries, and (c) selections of search results provided in response to the prior search queries; and identifying a correlation between the metadata associated with the prior search queries and selections of search results presented in response to the prior search queries.
    Type: Grant
    Filed: April 10, 2012
    Date of Patent: November 25, 2014
    Assignee: Google Inc.
    Inventors: Jay Yagnik, Niyati Yagnik
  • Patent number: 8880415
    Abstract: A computing device identifies a first codeword in a first codebook to represent short-timescale information of frames in a time-based data item segmented at intervals and identifies a second codeword in a second codebook to represent long-timescale information of the frames. The computing device generates a third codebook based on the first codeword and the second codeword for the frames to add long-timescale information context to the short-timescale information of the frames.
    Type: Grant
    Filed: December 9, 2011
    Date of Patent: November 4, 2014
    Assignee: Google Inc.
    Inventors: Douglas Eck, Jay Yagnik
  • Patent number: 8847951
    Abstract: Methods and systems permit automatic matching of videos with images from dense image-based geographic information systems. In some embodiments, video data including image frames is accessed. The video data may be segmented to determine a first image frame of a segment of the video data. Data representing information from the first image frame may be automatically compared with data representing information from a plurality of image frames of an image-based geographic information data system. Such a comparison may, for example, involve a search for a best match between geometric features, histograms, color data, texture data, etc. of the compared images. Based on the automatic comparing, an association between the video and one or more images of the image-based geographic information data system may be generated. The association may represent a geographic correlation between selected images of the system and the video data.
    Type: Grant
    Filed: October 16, 2013
    Date of Patent: September 30, 2014
    Assignee: Google Inc.
    Inventors: Dragomir Anguelov, Abhijit S. Ogale, Ehud Rivlin, Jay Yagnik
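One of the comparison strategies the abstract mentions — searching for a best match between histograms of the compared images — can be sketched in a few lines. Histogram intersection is one standard choice of similarity; the function names and the choice of measure are assumptions for illustration:

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Similarity of two normalized histograms: the mass they share."""
    return float(np.minimum(h1, h2).sum())

def best_geo_match(frame_hist, gis_hists):
    """Return the index of the geo-referenced image whose histogram best
    matches the video frame's histogram, as a stand-in for associating
    a video segment with an image-based GIS entry."""
    scores = [histogram_intersection(frame_hist, h) for h in gis_hists]
    return int(np.argmax(scores))
```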
  • Patent number: 8819024
    Abstract: A classifier training system learns classifiers for categories by combining data from a category-instance repository comprising relationships between categories and more specific instances of those categories with a set of video classifiers for different concepts. The category-instance repository is derived from the domain of textual documents, such as web pages, and the concept classifiers are derived from the domain of video. Taken together, the category-instance repository and the concept classifiers provide sufficient data for obtaining accurate classifiers for categories that encompass other lower-level concepts, where the categories and their classifiers may not be obtainable solely from the video domain.
    Type: Grant
    Filed: November 19, 2010
    Date of Patent: August 26, 2014
    Assignee: Google Inc.
    Inventors: George Toderici, Hrishikesh Aradhye, Alexandru Marius Pasca, Luciano Sbaiz, Jay Yagnik
  • Patent number: 8805090
    Abstract: Systems and methods for measuring consistency between two objects based upon a rank of object elements instead of based upon the values of those object elements. Objects being compared can be represented by d-dimension feature vectors, U and V, where each dimension includes an associated value. U and V can be converted to rank vectors, P and Q, where values of U and V dimensions are replaced by an ordered rank or a function thereof. Analysis directed to the consistency between U and V can be accomplished by determining consistency between P and Q, which can be more efficient and more accurate, particularly with regard to illumination-invariant comparisons.
    Type: Grant
    Filed: February 7, 2012
    Date of Patent: August 12, 2014
    Assignee: Google Inc.
    Inventors: Jay Yagnik, Sergey Ioffe
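The core idea here — compare rank vectors P and Q instead of raw values, so that monotone changes such as illumination gain drop out — can be shown directly. A minimal sketch, with hypothetical names and the simplest possible consistency score (fraction of agreeing ranks):

```python
import numpy as np

def rank_vector(u):
    """Replace each value with its rank within the vector (0 = smallest)."""
    order = np.argsort(np.asarray(u), kind="stable")
    ranks = np.empty(len(order), dtype=int)
    ranks[order] = np.arange(len(order))
    return ranks

def rank_consistency(u, v):
    """Fraction of dimensions on which the two rank vectors agree.
    Any monotone (e.g. illumination gain/offset) transform of u leaves
    its ranks, and hence this score, unchanged."""
    return float(np.mean(rank_vector(u) == rank_vector(v)))
```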
  • Patent number: 8798438
    Abstract: A computing system may process a plurality of audiovisual files to determine a mapping between audio characteristics and visual characteristics. The computing system may process an audio playlist to determine audio characteristics of the audio playlist. The computing system may determine, using the mapping, visual characteristics that are complementary to the audio characteristics of the audio playlist. The computing system may search a plurality of images to find one or more image(s) that have the determined visual characteristics. The computing system may link or associate the one or more image(s) that have the determined visual characteristics to the audio playlist such that the one or more images are displayed on a screen of the computing system during playback of the audio playlist.
    Type: Grant
    Filed: December 7, 2012
    Date of Patent: August 5, 2014
    Assignee: Google Inc.
    Inventors: Jay Yagnik, Douglas Eck
  • Patent number: 8792732
    Abstract: An object recognition system performs a number of rounds of dimensionality reduction and consistency learning on visual content items such as videos and still images, resulting in a set of feature vectors that accurately predict the presence of a visual object represented by a given object name within a visual content item. The feature vectors are stored in association with the object name which they represent and with an indication of the number of rounds of dimensionality reduction and consistency learning that produced them. The feature vectors and the indication can be used for various purposes, such as quickly determining a visual content item containing a visual representation of a given object name.
    Type: Grant
    Filed: July 26, 2012
    Date of Patent: July 29, 2014
    Assignee: Google Inc.
    Inventors: Ming Zhao, Jay Yagnik
  • Patent number: 8788503
    Abstract: Systems, computer program products, and methods can identify a training set of content, and generate one or more clusters from the training set of content, where each of the one or more clusters represent similar features of the training set of content. The one or more clusters can be used to generate a classifier. New content is identified and the classifier is used to associate at least one label with the new content.
    Type: Grant
    Filed: September 25, 2013
    Date of Patent: July 22, 2014
    Assignee: Google Inc.
    Inventor: Jay Yagnik
  • Patent number: 8774509
    Abstract: A system computes a vectorial representation for each of a set of initial patches in an image and compares the vectorial representation for each initial patch with vectorial representations of nearby patches. Each nearby patch is within a distance from an initial patch. The system applies an ordinal coding algorithm on the comparison results between the vectorial representations for the initial patches and vectorial representations of nearby patches to generate a two-dimensional representation of the image indicating a repeating pattern within the image.
    Type: Grant
    Filed: March 1, 2012
    Date of Patent: July 8, 2014
    Assignee: Google Inc.
    Inventors: Jay Yagnik, Douglas Eck
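The ordinal-coding step — replacing each patch's raw comparison results with rank information relative to its neighbours — can be sketched on scalar pixel values standing in for patch vectors. Function names and the particular code (count of neighbours exceeded) are illustrative assumptions:

```python
import numpy as np

def ordinal_code_map(img, radius=1):
    """For each interior pixel (a scalar stand-in for a patch's
    vectorial representation), compare it with its neighbours within
    `radius` and record how many it exceeds. A repeating texture yields
    a repeating pattern in the resulting 2-D code map."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    codes = np.zeros((h, w), dtype=int)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            patch = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
            codes[y, x] = int(np.sum(patch < img[y, x]))
    return codes
```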
  • Patent number: 8768048
    Abstract: A computing device segments an image into a plurality of segments, wherein each segment of the plurality of segments comprises a set of pixels that share visual characteristics. The computing device then determines expected contexts for the segments, wherein an expected context for a segment comprises at least one of additional segments or features expected to occur in the image together with the segment. The computing device then identifies a probable object based on the expected contexts.
    Type: Grant
    Filed: November 18, 2011
    Date of Patent: July 1, 2014
    Assignee: Google Inc.
    Inventors: Vivek Kwatra, Jay Yagnik, Alexander T. Toshev, Poonam Suryanarayan
  • Patent number: 8738633
    Abstract: This disclosure relates to transformation invariant media matching. A fingerprinting component can generate a transformation invariant identifier for media content by adaptively encoding the relative ordering of interest points in media content. The interest points can be grouped into subsets, and stretch invariant descriptors can be generated for the subsets based on ratios of coordinates of interest points included in the subsets. The stretch invariant descriptors can be aggregated into a transformation invariant identifier. An identification component compares the identifier against a set of identifiers for known media content, and the media content can be matched or identified as a function of the comparison.
    Type: Grant
    Filed: January 31, 2012
    Date of Patent: May 27, 2014
    Assignee: Google Inc.
    Inventors: Matthew Sharifi, Sergey Ioffe, Jay Yagnik, Gheorghe Postelnicu, Dominik Roblek, George Tzanetakis
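The stretch-invariance claim rests on a simple observation: ratios of coordinate differences among interest points are unchanged when an axis is rescaled or shifted. A tiny sketch on a subset of three points (all names hypothetical, and a real fingerprint would aggregate many such descriptors):

```python
def stretch_invariant_descriptor(points):
    """For a subset of three interest points (x, y), encode the ratios
    of coordinate differences. These ratios survive any affine stretch
    x -> a*x + b, y -> c*y + d of the media's axes, which is the basis
    of the transformation-invariant identifier."""
    (x1, y1), (x2, y2), (x3, y3) = points
    return ((x2 - x1) / (x3 - x1), (y2 - y1) / (y3 - y1))
```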
  • Patent number: 8683521
    Abstract: A suggestion server generates suggestions of videos. The suggestion server analyzes log data to create co-watch data identifying pairs of co-watched videos and containing values representing the number of times the pairs of videos were co-watched. The suggestion server uses the co-watch data to create feature vectors for the co-watched videos. The suggestion server uses the feature vectors to train a ranker for each video. When trained, the ranker can be applied to a feature vector for a video to produce a ranking score. To produce suggestions for a given video, a set of candidate videos is defined. The suggestion server applies the feature vectors for the candidates to the ranker for the given video to produce ranking scores. The candidate videos are ranked based on their ranking scores, and the highest-ranked candidates are provided as suggestions for the given video.
    Type: Grant
    Filed: March 31, 2009
    Date of Patent: March 25, 2014
    Assignee: Google Inc.
    Inventors: Ullas Gargi, Jay Yagnik
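The first half of this pipeline — turning watch logs into co-watch counts and ranking candidates by them — is easy to sketch. The patent trains a per-video ranker over co-watch feature vectors; in the hypothetical sketch below, raw counts stand in for that ranker:

```python
from collections import Counter
from itertools import combinations

def co_watch_counts(sessions):
    """Count how often each unordered pair of videos appears in the
    same viewing session (a simple proxy for the co-watch log data)."""
    counts = Counter()
    for session in sessions:
        for a, b in combinations(sorted(set(session)), 2):
            counts[(a, b)] += 1
    return counts

def suggest(video, counts, k=3):
    """Rank candidate videos for `video` by co-watch count, highest
    first. A trained ranker would replace this raw-count scoring."""
    scores = Counter()
    for (a, b), n in counts.items():
        if a == video:
            scores[b] += n
        elif b == video:
            scores[a] += n
    return [v for v, _ in scores.most_common(k)]
```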
  • Patent number: 8676725
    Abstract: Methods, systems and articles of manufacture for identifying semantic nearest neighbors in a feature space are described herein. A method embodiment includes generating an affinity matrix for objects in a given feature space, wherein the affinity matrix identifies the semantic similarity between each pair of objects in the feature space, training a multi-bit hash function using a greedy algorithm that increases the Hamming distance between dissimilar objects in the feature space while minimizing the Hamming distance between similar objects, and identifying semantic nearest neighbors for an object in a second feature space using the multi-bit hash function. A system embodiment includes a hash generator configured to generate the affinity matrix and train the multi-bit hash function, and a similarity determiner configured to identify semantic nearest neighbors for an object in a second feature space using the multi-bit hash function.
    Type: Grant
    Filed: June 4, 2010
    Date of Patent: March 18, 2014
    Assignee: Google Inc.
    Inventors: Ruei-Sung Lin, David Ross, Jay Yagnik
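The interface the abstract describes — map objects to multi-bit codes so that Hamming distance tracks semantic similarity — can be shown with random-projection hashing. To be clear about the swap: the patent trains its bits greedily from an affinity matrix, whereas the sketch below uses untrained random hyperplanes, which merely share the same interface:

```python
import numpy as np

def train_random_hash(dim, bits, seed=0):
    """Sample random hyperplanes; each contributes one hash bit.
    (A stand-in for the patent's greedy, affinity-driven training.)"""
    return np.random.default_rng(seed).normal(size=(bits, dim))

def hash_code(planes, x):
    """One bit per hyperplane: which side of the plane x falls on."""
    return (planes @ np.asarray(x, dtype=float) > 0).astype(int)

def hamming(a, b):
    """Number of differing bits between two codes."""
    return int(np.sum(a != b))
```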
  • Patent number: 8676803
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for clustering images. In one aspect a system includes one or more computers configured to, for each of a plurality of digital images, associate extrinsic image-related information with each individual image, the extrinsic image-related information including text information and co-click data for the individual image, assign images from the plurality of images to one or more of the clusters of images based on the extrinsic information associated with each of the plurality of images, receive in the search system a user query from a user device, identify by operation of the search system one or more clusters of images that match the query, and provide one or more cluster results, where each cluster result provides information about an identified cluster.
    Type: Grant
    Filed: November 4, 2009
    Date of Patent: March 18, 2014
    Assignee: Google Inc.
    Inventors: Thomas Leung, Jay Yagnik
  • Publication number: 20140016706
    Abstract: This disclosure relates to transformation invariant media matching. A fingerprinting component can generate a transformation invariant identifier for media content by adaptively encoding the relative ordering of signal markers in media content. The signal markers can be adaptively encoded via reference point geometry, or ratio histograms. An identification component compares the identifier against a set of identifiers for known media content, and the media content can be matched or identified as a function of the comparison.
    Type: Application
    Filed: September 12, 2013
    Publication date: January 16, 2014
    Applicant: Google Inc.
    Inventors: Jay Yagnik, Sergey Ioffe
  • Patent number: 8611689
    Abstract: A method and system generates and compares fingerprints for videos in a video library. The video fingerprints provide a compact representation of the spatial and sequential characteristics of the video that can be used to quickly and efficiently identify video content. Because the fingerprints are based on spatial and sequential characteristics rather than exact bit sequences, visual content of videos can be effectively compared even when there are small differences between the videos in compression factors, source resolutions, start and stop times, frame rates, and so on. Comparison of video fingerprints can be used, for example, to search for and remove copyright protected videos from a video library. Further, duplicate videos can be detected and discarded in order to preserve storage space.
    Type: Grant
    Filed: December 15, 2010
    Date of Patent: December 17, 2013
    Assignee: Google Inc.
    Inventors: Jay Yagnik, Henry A. Rowley, Sergey Ioffe
  • Patent number: 8611422
    Abstract: A method and system generates and compares fingerprints for videos in a video library. The video fingerprints provide a compact representation of the temporal locations of discontinuities in the video that can be used to quickly and efficiently identify video content. Discontinuities can be, for example, shot boundaries in the video frame sequence or silent points in the audio stream. Because the fingerprints are based on structural discontinuity characteristics rather than exact bit sequences, visual content of videos can be effectively compared even when there are small differences between the videos in compression factors, source resolutions, start and stop times, frame rates, and so on. Comparison of video fingerprints can be used, for example, to search for and remove copyright protected videos from a video library. Furthermore, duplicate videos can be detected and discarded in order to preserve storage space.
    Type: Grant
    Filed: June 19, 2007
    Date of Patent: December 17, 2013
    Assignee: Google Inc.
    Inventors: Jay Yagnik, Henry A. Rowley, Sergey Ioffe
  • Patent number: 8593485
    Abstract: Methods and systems permit automatic matching of videos with images from dense image-based geographic information systems. In some embodiments, video data including image frames is accessed. The video data may be segmented to determine a representative image frame of a segment of the video data. Data representing information from the representative image frame may be automatically compared with data representing information from a plurality of image frames of an image-based geographic information data system. Such a comparison may, for example, involve a search for a best match between geometric features, histograms, color data, texture data, etc. of the compared images. Based on the automatic comparing, an association between the video and one or more images of the image-based geographic information data system may be generated. The association may represent a geographic correlation between selected images of the system and the video data.
    Type: Grant
    Filed: April 28, 2009
    Date of Patent: November 26, 2013
    Assignee: Google Inc.
    Inventors: Dragomir Anguelov, Abhijit Ogale, Ehud Rivlin, Jay Yagnik