Patents by Inventor Jay N. Yagnik

Jay N. Yagnik has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10080042
    Abstract: User engagement in unwatched videos is predicted by collecting and aggregating data describing user engagement with watched videos. The data are normalized to reduce the influence of factors other than the content of the videos on user engagement. Engagement metrics are calculated for segments of watched videos that indicate user engagement with each segment relative to overall user engagement with the watched videos. Features of the watched videos within time windows are characterized, and a function is learned that relates the features of the videos within the time windows to the engagement metrics for the time windows. The features of a time window of an unwatched video are characterized, and the learned function is applied to the features to predict user engagement with the time window of the unwatched video. The unwatched video can be enhanced based on the predicted user engagement.
    Type: Grant
    Filed: May 12, 2017
    Date of Patent: September 18, 2018
    Assignee: GOOGLE LLC
    Inventors: Ullas Gargi, Jay N. Yagnik, Anindya Sarkar
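The core of this claim is learning a function from per-time-window video features to an observed engagement metric, then applying it to an unwatched video. A minimal sketch of that idea follows, using ordinary least squares as the learned function; the feature values, engagement numbers, and function names are invented for illustration and are not the patent's actual model.

```python
import numpy as np

# Features for time windows of watched videos (rows) and the engagement
# metric observed for each window, normalized so 1.0 = average engagement.
X_watched = np.array([[0.2, 1.0], [0.8, 0.5], [0.5, 0.9], [0.9, 0.1]])
y_engagement = np.array([0.7, 1.3, 1.0, 1.2])

# Learn a linear function (least squares with a bias term) relating
# window features to the engagement metric.
A = np.hstack([X_watched, np.ones((len(X_watched), 1))])
w, *_ = np.linalg.lstsq(A, y_engagement, rcond=None)

def predict_engagement(window_features):
    """Apply the learned function to a time window of an unwatched video."""
    f = np.append(np.asarray(window_features, dtype=float), 1.0)
    return float(f @ w)

print(round(predict_engagement([0.6, 0.7]), 3))
```

In practice the learned function could be any regressor; the linear fit is just the simplest stand-in that makes the train-then-predict structure concrete.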
  • Patent number: 9681158
    Abstract: User engagement in unwatched videos is predicted by collecting and aggregating data describing user engagement with watched videos. The data are normalized to reduce the influence of factors other than the content of the videos on user engagement. Engagement metrics are calculated for segments of watched videos that indicate user engagement with each segment relative to overall user engagement with the watched videos. Features of the watched videos within time windows are characterized, and a function is learned that relates the features of the videos within the time windows to the engagement metrics for the time windows. The features of a time window of an unwatched video are characterized, and the learned function is applied to the features to predict user engagement with the time window of the unwatched video. The unwatched video can be enhanced based on the predicted user engagement.
    Type: Grant
    Filed: January 13, 2015
    Date of Patent: June 13, 2017
    Assignee: Google Inc.
    Inventors: Ullas Gargi, Jay N. Yagnik, Anindya Sarkar
  • Patent number: 9122986
    Abstract: A computer-implemented technique of providing relevant search results to a user of a website at a query time. The technique can include receiving, at a computing device having one or more processors, a query from the user, the query corresponding to a description of potential search results desired by the user. The technique can further include retrieving a user history corresponding to previous user interactions with the website and determining a context of the user corresponding to an interaction of the user with the website at the query time. The relevant search results can be determined based on the query, the user history, the context of the user, and a prediction model, and can be provided to the user via updating of a webpage presented to the user. The technique can further include adapting the prediction model based on a prediction event and set of corresponding prediction event features.
    Type: Grant
    Filed: November 5, 2012
    Date of Patent: September 1, 2015
    Assignee: Google Inc.
    Inventor: Jay N. Yagnik
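The claim combines three signals, the query, the user's history, and the current context, through a prediction model to rank results. A toy sketch of that combination is below; the weighted scoring function, the document fields, and all data are invented stand-ins, not the patented model.

```python
# Score a document from query match, history affinity, and context match.
# Weights and features are illustrative assumptions.
def score(doc, query_terms, user_history, context, weights=(0.5, 0.3, 0.2)):
    wq, wh, wc = weights
    query_match = len(query_terms & doc["terms"]) / max(len(query_terms), 1)
    history_affinity = user_history.get(doc["topic"], 0.0)  # prior interactions
    context_match = 1.0 if doc["topic"] == context else 0.0
    return wq * query_match + wh * history_affinity + wc * context_match

docs = [
    {"id": 1, "terms": {"ml", "video"}, "topic": "video"},
    {"id": 2, "terms": {"ml"}, "topic": "news"},
]
history = {"video": 0.9}  # the user often interacts with video pages
results = sorted(docs, key=lambda d: score(d, {"ml", "video"}, history, "video"),
                 reverse=True)
print([d["id"] for d in results])  # doc 1 ranks first
```

The abstract's "adapting the prediction model" step would correspond to updating the weights after each prediction event, which is omitted here for brevity.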
  • Publication number: 20150154493
    Abstract: A computer-implemented technique of providing relevant search results to a user of a website at a query time. The technique can include receiving, at a computing device having one or more processors, a query from the user, the query corresponding to a description of potential search results desired by the user. The technique can further include retrieving a user history corresponding to previous user interactions with the website and determining a context of the user corresponding to an interaction of the user with the website at the query time. The relevant search results can be determined based on the query, the user history, the context of the user, and a prediction model, and can be provided to the user via updating of a webpage presented to the user. The technique can further include adapting the prediction model based on a prediction event and set of corresponding prediction event features.
    Type: Application
    Filed: November 5, 2012
    Publication date: June 4, 2015
    Inventor: Jay N. Yagnik
  • Patent number: 8924993
    Abstract: A demographics analysis trains classifier models for predicting demographic attribute values of videos and users not already having known demographics. In one embodiment, the demographics analysis system trains classifier models for predicting demographics of videos using video features such as demographics of video uploaders, textual metadata, and/or audiovisual content of videos. In one embodiment, the demographics analysis system trains classifier models for predicting demographics of users (e.g., anonymous users) using user features based on prior video viewing periods of users. For example, viewing-period based user features can include individual viewing period statistics such as total videos viewed. Further, the viewing-period based features can include distributions of values over the viewing period, such as distributions in demographic attribute values of video uploaders, and/or distributions of viewings over hours of the day, days of the week, and the like.
    Type: Grant
    Filed: November 10, 2011
    Date of Patent: December 30, 2014
    Assignee: Google Inc.
    Inventors: Juan Carlos Niebles Duque, Hrishikesh Balkrishna Aradhye, Luciano Sbaiz, Jay N. Yagnik, Reto Strobl
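The described system trains a classifier from viewing-period features (e.g. totals and distributions over the viewing period) to a demographic attribute. A minimal sketch follows, with a nearest-centroid classifier standing in for the unspecified model; the feature choices, age buckets, and data are invented for illustration.

```python
# Viewing-period features per user: [videos viewed per week, evening-view fraction].
def centroid(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def train(features_by_label):
    """One centroid per demographic attribute value."""
    return {label: centroid(rows) for label, rows in features_by_label.items()}

def predict(model, features):
    """Assign the label whose centroid is nearest in feature space."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(features, model[label]))
    return min(model, key=dist)

model = train({
    "18-24": [[40, 0.8], [35, 0.7]],
    "45-54": [[10, 0.3], [12, 0.4]],
})
print(predict(model, [30, 0.6]))
```

Any supervised classifier could fill the same role; the point is the mapping from aggregated viewing behavior to a predicted demographic value.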
  • Patent number: 8879862
    Abstract: One embodiment of the present invention provides a system that automatically produces a summary of a video. During operation, the system partitions the video into scenes and then determines similarities between the scenes. Next, the system selects representative scenes from the video based on the determined similarities, and combines the selected scenes to produce the summary for the video.
    Type: Grant
    Filed: February 18, 2014
    Date of Patent: November 4, 2014
    Assignee: Google Inc.
    Inventor: Jay N. Yagnik
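The summarization steps, partition into scenes, measure inter-scene similarity, select representatives, can be sketched as follows. Here scenes are precomputed feature vectors, similarity is cosine similarity, and selection is a greedy medoid pick; these are illustrative choices, not the patent's exact method.

```python
def cosine(a, b):
    """Cosine similarity between two scene feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def summarize(scenes, k):
    """Select the k scenes most similar to all others (greedy medoids)."""
    totals = [(sum(cosine(s, t) for t in scenes), i) for i, s in enumerate(scenes)]
    chosen = sorted(totals, reverse=True)[:k]
    return sorted(i for _, i in chosen)  # indices of representative scenes

scenes = [[1, 0, 0], [0.9, 0.1, 0], [0, 1, 0], [0, 0, 1]]
print(summarize(scenes, 2))
```

Combining the frames of the selected scenes, in order, would then yield the summary video.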
  • Publication number: 20140161351
    Abstract: One embodiment of the present invention provides a system that automatically produces a summary of a video. During operation, the system partitions the video into scenes and then determines similarities between the scenes. Next, the system selects representative scenes from the video based on the determined similarities, and combines the selected scenes to produce the summary for the video.
    Type: Application
    Filed: February 18, 2014
    Publication date: June 12, 2014
    Applicant: Google Inc.
    Inventor: Jay N. Yagnik
  • Patent number: 8699806
    Abstract: One embodiment of the present invention provides a system that automatically produces a summary of a video. During operation, the system partitions the video into scenes and then determines similarities between the scenes. Next, the system selects representative scenes from the video based on the determined similarities, and combines the selected scenes to produce the summary for the video.
    Type: Grant
    Filed: June 15, 2006
    Date of Patent: April 15, 2014
    Assignee: Google Inc.
    Inventor: Jay N. Yagnik
  • Patent number: 8660370
    Abstract: Clustering algorithms such as k-means clustering algorithm are used in applications that process entities with spatial and/or temporal characteristics, for example, media objects representing audio, video, or graphical data. Feature vectors representing characteristics of the entities are partitioned using clustering methods that produce results sensitive to an initial set of cluster seeds. The set of initial cluster seeds is generated using principal component analysis of either the complete feature vector set or a subset thereof. The feature vector set is divided into a desired number of initial clusters and a seed determined from each initial cluster.
    Type: Grant
    Filed: January 31, 2013
    Date of Patent: February 25, 2014
    Assignee: Google Inc.
    Inventors: Sangho Yoon, Jay N. Yagnik, Mei Han, Vivek Kwatra
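The key idea here is that k-means is sensitive to its initial seeds, and the claim generates them from a principal component analysis of the feature vectors. A small sketch under stated assumptions: project onto the first principal component, split the ordered projections into k initial clusters, and seed from each cluster's mean. The equal-size split is an illustrative simplification.

```python
import numpy as np

def pca_seeds(X, k):
    """Generate k initial k-means seeds via PCA of the feature vector set."""
    Xc = X - X.mean(axis=0)
    # First principal component via SVD of the centered data.
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    proj = Xc @ vt[0]
    order = np.argsort(proj)
    # Divide the ordered points into k initial clusters; seed = cluster mean.
    groups = np.array_split(order, k)
    return np.stack([X[g].mean(axis=0) for g in groups])

X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
seeds = pca_seeds(X, 2)
print(seeds)
```

The resulting seeds land inside each natural cluster, so a subsequent k-means run starts near the answer instead of at random, which is the sensitivity the abstract addresses.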
  • Patent number: 8571349
    Abstract: An image processing system enhances the resolution of an original image using higher-resolution image data from other images. The image processing system defines a plurality of overlapping partitions for the original image, each partition defining a set of non-overlapping site patches. During an optimization phase, the system identifies, for site patches of the original images, label patches within related images that are of most relevance. During a rendering phase independent of the optimization phase, an output image with enhanced resolution is synthesized by substituting, for site patches of the original image, the identified relevant label patches from the related images.
    Type: Grant
    Filed: September 14, 2012
    Date of Patent: October 29, 2013
    Assignee: Google Inc.
    Inventors: Vivek Kwatra, Mei Han, Jay N. Yagnik
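The rendering phase substitutes, for each "site" patch of the original image, the most relevant "label" patch from related higher-resolution images. A minimal sketch of that substitution is below; patch extraction, the overlapping partitions, and the optimization phase are omitted, and plain nearest-neighbour matching stands in for the relevance computation.

```python
import numpy as np

def best_label(site_patch, label_patches):
    """Pick the high-res content of the label patch closest to the site patch."""
    dists = [np.sum((site_patch - lp["low"]) ** 2) for lp in label_patches]
    return label_patches[int(np.argmin(dists))]["high"]

# Each label patch pairs a low-res appearance with its high-res content
# (values are invented; real patches would be pixel blocks).
labels = [
    {"low": np.array([0.1, 0.1]), "high": np.array([0.1, 0.1, 0.1, 0.1])},
    {"low": np.array([0.9, 0.8]), "high": np.array([0.9, 0.95, 0.8, 0.85])},
]
site = np.array([0.85, 0.8])
print(best_label(site, labels))
```

Doing this for every site patch and blending the overlaps would synthesize the enhanced output image.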
  • Patent number: 8229156
    Abstract: One embodiment of the present invention provides a computer-based system that automatically characterizes a video. During operation, the system extracts feature vectors from sampled frames in the video. Next, the system uses the extracted feature vectors for successive sampled frames in the video to define a curve. The system then determines a set of invariants for the curve. Next, the system uses the set of invariants to characterize the video. The system can then use the characterization of the video to perform various operations, such as classifying the video with respect to other videos or detecting duplicates of the video.
    Type: Grant
    Filed: August 8, 2006
    Date of Patent: July 24, 2012
    Assignee: Google Inc.
    Inventor: Jay N. Yagnik
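The abstract treats the sequence of per-frame feature vectors as a curve and characterizes the video by invariants of that curve. A sketch of the idea follows, using two simple illustrative invariants (total arc length and the chord-to-arc ratio) chosen so that a lightly perturbed near-duplicate scores almost identically; the patent does not specify these particular invariants.

```python
import numpy as np

def curve_invariants(frames):
    """Invariants of the curve traced by successive frame feature vectors."""
    pts = np.asarray(frames, dtype=float)
    segs = np.diff(pts, axis=0)                     # steps between frames
    arc = float(np.linalg.norm(segs, axis=1).sum()) # total arc length
    chord = float(np.linalg.norm(pts[-1] - pts[0])) # end-to-end distance
    return arc, (chord / arc if arc else 1.0)

video = [[0, 0], [1, 0], [2, 1], [3, 1]]
dup = [[0.01, 0], [1, 0.02], [2, 1.01], [3, 1.0]]  # re-encoded near-duplicate
print(curve_invariants(video), curve_invariants(dup))
```

Because the invariants of the two curves nearly match, comparing them is enough to flag the second video as a likely duplicate of the first.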
  • Patent number: 8065313
    Abstract: One embodiment of the present invention provides a system that automatically annotates an image. During operation, the system receives the image. Next, the system extracts image features from the image. The system then identifies other images which have similar image features. The system next obtains text associated with the other images, and identifies intersecting keywords in the obtained text. Finally, the system annotates the image with the intersecting keywords.
    Type: Grant
    Filed: July 24, 2006
    Date of Patent: November 22, 2011
    Assignee: Google Inc.
    Inventor: Jay N. Yagnik
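The annotation flow, find images with similar features, gather their associated text, keep the intersecting keywords, reduces to a set intersection once the similar images are in hand. The sketch below stubs out feature extraction and similarity search and shows only the intersection step; the example texts are invented.

```python
def annotate(similar_image_texts):
    """Annotate an image with the keywords common to all similar images' text."""
    keyword_sets = [set(text.lower().split()) for text in similar_image_texts]
    common = set.intersection(*keyword_sets) if keyword_sets else set()
    return sorted(common)

# Text obtained for the images whose features matched the input image.
texts = [
    "golden gate bridge at sunset",
    "the golden gate bridge in fog",
    "golden gate bridge panorama",
]
print(annotate(texts))  # → ['bridge', 'gate', 'golden']
```

Intersecting, rather than unioning, the keyword sets is what filters out the incidental words ("sunset", "fog") and keeps only the terms that describe the shared subject.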
  • Publication number: 20080021928
    Abstract: One embodiment of the present invention provides a system that automatically annotates an image. During operation, the system receives the image. Next, the system extracts image features from the image. The system then identifies other images which have similar image features. The system next obtains text associated with the other images, and identifies intersecting keywords in the obtained text. Finally, the system annotates the image with the intersecting keywords.
    Type: Application
    Filed: July 24, 2006
    Publication date: January 24, 2008
    Inventor: Jay N. Yagnik
  • Publication number: 20070245242
    Abstract: One embodiment of the present invention provides a system that automatically produces a summary of a video. During operation, the system partitions the video into scenes and then determines similarities between the scenes. Next, the system selects representative scenes from the video based on the determined similarities, and combines the selected scenes to produce the summary for the video.
    Type: Application
    Filed: June 15, 2006
    Publication date: October 18, 2007
    Inventor: Jay N. Yagnik