Patents by Inventor Jay N. Yagnik
Jay N. Yagnik has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10080042
Abstract: User engagement in unwatched videos is predicted by collecting and aggregating data describing user engagement with watched videos. The data are normalized to reduce the influence of factors other than the content of the videos on user engagement. Engagement metrics are calculated for segments of watched videos that indicate user engagement with each segment relative to overall user engagement with the watched videos. Features of the watched videos within time windows are characterized, and a function is learned that relates the features of the videos within the time windows to the engagement metrics for the time windows. The features of a time window of an unwatched video are characterized, and the learned function is applied to the features to predict user engagement with the time window of the unwatched video. The unwatched video can be enhanced based on the predicted user engagement.
Type: Grant
Filed: May 12, 2017
Date of Patent: September 18, 2018
Assignee: GOOGLE LLC
Inventors: Ullas Gargi, Jay N. Yagnik, Anindya Sarkar
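A minimal sketch of the pipeline this abstract describes, using toy data: per-window engagement is normalized by overall engagement, and a regressor relates window features to that metric. The feature definitions and the ridge-regression choice are illustrative assumptions, not the patented method.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy per-window audiovisual features for watched videos (hypothetical).
rng = np.random.default_rng(0)
n_windows, n_features = 500, 8
X = rng.normal(size=(n_windows, n_features))
watch_fraction = 1 / (1 + np.exp(-X @ rng.normal(size=n_features)))

# Engagement metric: per-window engagement relative to overall engagement,
# which normalizes away video-level popularity effects.
y = watch_fraction / watch_fraction.mean()

# Learn a function from window features to the engagement metric.
model = Ridge(alpha=1.0).fit(X, y)

# Predict engagement for the time windows of an unwatched video.
X_new = rng.normal(size=(10, n_features))
print(model.predict(X_new).round(2))
```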
-
Patent number: 9681158
Abstract: User engagement in unwatched videos is predicted by collecting and aggregating data describing user engagement with watched videos. The data are normalized to reduce the influence of factors other than the content of the videos on user engagement. Engagement metrics are calculated for segments of watched videos that indicate user engagement with each segment relative to overall user engagement with the watched videos. Features of the watched videos within time windows are characterized, and a function is learned that relates the features of the videos within the time windows to the engagement metrics for the time windows. The features of a time window of an unwatched video are characterized, and the learned function is applied to the features to predict user engagement with the time window of the unwatched video. The unwatched video can be enhanced based on the predicted user engagement.
Type: Grant
Filed: January 13, 2015
Date of Patent: June 13, 2017
Assignee: Google Inc.
Inventors: Ullas Gargi, Jay N. Yagnik, Anindya Sarkar
-
Patent number: 9122986
Abstract: A computer-implemented technique of providing relevant search results to a user of a website at a query time. The technique can include receiving, at a computing device having one or more processors, a query from the user, the query corresponding to a description of potential search results desired by the user. The technique can further include retrieving a user history corresponding to previous user interactions with the website and determining a context of the user corresponding to an interaction of the user with the website at the query time. The relevant search results can be determined based on the query, the user history, the context of the user, and a prediction model, and be provided to the user via updating of a webpage presented to the user. The technique can further include adapting the prediction model based on a prediction event and a set of corresponding prediction event features.
Type: Grant
Filed: November 5, 2012
Date of Patent: September 1, 2015
Assignee: Google Inc.
Inventor: Jay N. Yagnik
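One plausible reading of the adaptive prediction model, sketched with toy signals: rank candidates by a score combining query match, user history, and session context, and update the model after each prediction event. The feature builder, the click/ignore labels, and the use of online logistic regression are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Hypothetical feature builder: query-result similarity, user-history
# affinity, and a session-context signal, packed into one vector.
def features(query_sim, history_affinity, context_signal):
    return np.array([query_sim, history_affinity, context_signal])

model = SGDClassifier(loss="log_loss", random_state=0)
classes = np.array([0, 1])  # 0 = result ignored, 1 = result clicked

# Adapt the model online as prediction events (e.g., clicks) arrive.
events = [
    (features(0.9, 0.2, 0.5), 1),
    (features(0.1, 0.8, 0.3), 0),
    (features(0.7, 0.7, 0.9), 1),
]
for x, label in events:
    model.partial_fit(x.reshape(1, -1), [label], classes=classes)

# Rank candidate results for a new query by predicted click probability.
candidates = np.array([features(0.8, 0.1, 0.4), features(0.3, 0.9, 0.6)])
scores = model.predict_proba(candidates)[:, 1]
print(scores.argsort()[::-1])  # candidate indices, best first
```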
-
Publication number: 20150154493
Abstract: A computer-implemented technique of providing relevant search results to a user of a website at a query time. The technique can include receiving, at a computing device having one or more processors, a query from the user, the query corresponding to a description of potential search results desired by the user. The technique can further include retrieving a user history corresponding to previous user interactions with the website and determining a context of the user corresponding to an interaction of the user with the website at the query time. The relevant search results can be determined based on the query, the user history, the context of the user, and a prediction model, and be provided to the user via updating of a webpage presented to the user. The technique can further include adapting the prediction model based on a prediction event and a set of corresponding prediction event features.
Type: Application
Filed: November 5, 2012
Publication date: June 4, 2015
Inventor: Jay N. Yagnik
-
Patent number: 8924993
Abstract: A demographics analysis system trains classifier models for predicting demographic attribute values of videos and users not already having known demographics. In one embodiment, the demographics analysis system trains classifier models for predicting demographics of videos using video features such as demographics of video uploaders, textual metadata, and/or audiovisual content of videos. In one embodiment, the demographics analysis system trains classifier models for predicting demographics of users (e.g., anonymous users) using user features based on prior video viewing periods of users. For example, viewing-period based user features can include individual viewing period statistics such as total videos viewed. Further, the viewing-period based features can include distributions of values over the viewing period, such as distributions in demographic attribute values of video uploaders, and/or distributions of viewings over hours of the day, days of the week, and the like.
Type: Grant
Filed: November 10, 2011
Date of Patent: December 30, 2014
Assignee: Google Inc.
Inventors: Juan Carlos Niebles Duque, Hrishikesh Balkrishna Aradhye, Luciano Sbaiz, Jay N. Yagnik, Reto Strobl
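A rough sketch of training a demographic classifier from viewing-period features, on synthetic data; the particular feature set and the random-forest choice are illustrative guesses, not taken from the patent.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical viewing-period features per anonymous user:
# [total videos viewed, share viewed in the evening,
#  share of uploads by under-25 uploaders, share viewed on weekends]
rng = np.random.default_rng(1)
X = rng.random((200, 4))
# Toy demographic attribute loosely tied to evening viewing.
y = (X[:, 1] + 0.2 * rng.normal(size=200) > 0.5).astype(int)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Predict the attribute for users whose demographics are unknown.
unknown_users = rng.random((3, 4))
print(clf.predict(unknown_users))
```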
-
Patent number: 8879862
Abstract: One embodiment of the present invention provides a system that automatically produces a summary of a video. During operation, the system partitions the video into scenes and then determines similarities between the scenes. Next, the system selects representative scenes from the video based on the determined similarities, and combines the selected scenes to produce the summary for the video.
Type: Grant
Filed: February 18, 2014
Date of Patent: November 4, 2014
Assignee: Google Inc.
Inventor: Jay N. Yagnik
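A toy illustration of the partition/compare/select pipeline, assuming scenes are already represented as feature vectors; the cosine similarity and the greedy selection rule are assumptions, one of many ways to pick representative scenes.

```python
import numpy as np

# Toy scene descriptors: one feature vector per scene (in practice these
# might be color histograms or embeddings; this choice is a guess).
rng = np.random.default_rng(2)
scenes = rng.normal(size=(12, 16))

# Pairwise cosine similarity between scenes.
norm = scenes / np.linalg.norm(scenes, axis=1, keepdims=True)
sim = norm @ norm.T

# Greedy selection: repeatedly pick the scene least similar to those
# already chosen, so the summary covers distinct content.
chosen = [0]
for _ in range(2):  # pick 3 representative scenes in total
    max_sim_to_chosen = sim[:, chosen].max(axis=1)
    max_sim_to_chosen[chosen] = np.inf  # exclude already-chosen scenes
    chosen.append(int(max_sim_to_chosen.argmin()))

print(sorted(chosen))  # scene indices to concatenate into the summary
```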
-
Publication number: 20140161351
Abstract: One embodiment of the present invention provides a system that automatically produces a summary of a video. During operation, the system partitions the video into scenes and then determines similarities between the scenes. Next, the system selects representative scenes from the video based on the determined similarities, and combines the selected scenes to produce the summary for the video.
Type: Application
Filed: February 18, 2014
Publication date: June 12, 2014
Applicant: Google Inc.
Inventor: Jay N. Yagnik
-
Patent number: 8699806
Abstract: One embodiment of the present invention provides a system that automatically produces a summary of a video. During operation, the system partitions the video into scenes and then determines similarities between the scenes. Next, the system selects representative scenes from the video based on the determined similarities, and combines the selected scenes to produce the summary for the video.
Type: Grant
Filed: June 15, 2006
Date of Patent: April 15, 2014
Assignee: Google Inc.
Inventor: Jay N. Yagnik
-
Patent number: 8660370
Abstract: Clustering algorithms such as the k-means algorithm are used in applications that process entities with spatial and/or temporal characteristics, for example, media objects representing audio, video, or graphical data. Feature vectors representing characteristics of the entities are partitioned using clustering methods that produce results sensitive to an initial set of cluster seeds. The set of initial cluster seeds is generated using principal component analysis of either the complete feature vector set or a subset thereof. The feature vector set is divided into a desired number of initial clusters, and a seed is determined from each initial cluster.
Type: Grant
Filed: January 31, 2013
Date of Patent: February 25, 2014
Assignee: Google Inc.
Inventors: Sangho Yoon, Jay N. Yagnik, Mei Han, Vivek Kwatra
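A small sketch of the seeding strategy the abstract describes, assuming the leading principal component is used to split the feature vector set into k initial clusters whose means become the seeds for k-means.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 5))  # toy feature vectors for media objects
k = 4

# Principal component analysis of the feature vector set: project onto
# the leading principal direction.
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
proj = Xc @ vt[0]

# Divide the set into k initial clusters along that direction and take
# each initial cluster's mean as a seed.
order = np.argsort(proj)
seeds = np.array([X[chunk].mean(axis=0) for chunk in np.array_split(order, k)])

# Run k-means from the deterministic PCA-derived seeds.
km = KMeans(n_clusters=k, init=seeds, n_init=1).fit(X)
print(np.bincount(km.labels_))  # cluster sizes
```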
-
Patent number: 8571349
Abstract: An image processing system enhances the resolution of an original image using higher-resolution image data from other images. The image processing system defines a plurality of overlapping partitions for the original image, each partition defining a set of non-overlapping site patches. During an optimization phase, the system identifies, for site patches of the original image, label patches within related images that are of most relevance. During a rendering phase independent of the optimization phase, an output image with enhanced resolution is synthesized by substituting, for site patches of the original image, the identified relevant label patches from the related images.
Type: Grant
Filed: September 14, 2012
Date of Patent: October 29, 2013
Assignee: Google Inc.
Inventors: Vivek Kwatra, Mei Han, Jay N. Yagnik
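A heavily simplified sketch of patch substitution on toy data: the optimization phase that identifies the most relevant label patches is reduced here to a brute-force nearest-neighbor search, which is an assumption, not the patent's optimization.

```python
import numpy as np

rng = np.random.default_rng(4)
site_patches = rng.random((20, 8, 8))    # patches from the original image
label_patches = rng.random((100, 8, 8))  # candidates from related images

sites = site_patches.reshape(20, -1)
labels = label_patches.reshape(100, -1)

# Nearest label patch per site patch by squared Euclidean distance.
d2 = ((sites[:, None, :] - labels[None, :, :]) ** 2).sum(-1)
best = d2.argmin(axis=1)

# Rendering: synthesize output patches by substituting the matches.
output_patches = label_patches[best]
print(best[:5])  # label-patch indices chosen for the first site patches
```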
-
Patent number: 8229156
Abstract: One embodiment of the present invention provides a computer-based system that automatically characterizes a video. During operation, the system extracts feature vectors from sampled frames in the video. Next, the system uses the extracted feature vectors for successive sampled frames in the video to define a curve. The system then determines a set of invariants for the curve. Next, the system uses the set of invariants to characterize the video. The system can then use the characterization of the video to perform various operations, such as classifying the video with respect to other videos or detecting duplicates of the video.
Type: Grant
Filed: August 8, 2006
Date of Patent: July 24, 2012
Assignee: Google Inc.
Inventor: Jay N. Yagnik
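A toy sketch of the curve-based characterization, assuming per-frame feature vectors; the specific invariants computed here (angles between consecutive curve segments and a normalized length spread) are one plausible choice, not the set defined in the patent.

```python
import numpy as np

# Toy per-frame feature vectors defining a curve in feature space.
rng = np.random.default_rng(5)
frames = rng.normal(size=(60, 10)).cumsum(axis=0)

# Segment vectors along the curve and their lengths.
seg = np.diff(frames, axis=0)
lengths = np.linalg.norm(seg, axis=1)

# Angles between consecutive segments are unchanged by rotation,
# translation, and uniform scaling of the feature space.
unit = seg / lengths[:, None]
cos_angles = (unit[:-1] * unit[1:]).sum(axis=1).clip(-1, 1)
angles = np.arccos(cos_angles)

# Summarize the curve by a small invariant signature, comparable across
# videos for classification or duplicate detection.
signature = np.array([angles.mean(), angles.std(),
                      lengths.std() / lengths.mean()])
print(signature.round(3))
```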
-
Patent number: 8065313
Abstract: One embodiment of the present invention provides a system that automatically annotates an image. During operation, the system receives the image. Next, the system extracts image features from the image. The system then identifies other images which have similar image features. The system next obtains text associated with the other images, and identifies intersecting keywords in the obtained text. Finally, the system annotates the image with the intersecting keywords.
Type: Grant
Filed: July 24, 2006
Date of Patent: November 22, 2011
Assignee: Google Inc.
Inventor: Jay N. Yagnik
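A minimal sketch of the annotation flow with a toy image database; the two-dimensional features and the keyword sets are fabricated for illustration.

```python
import numpy as np

# Toy database: image feature vectors and associated text keywords.
db_features = np.array([[0.9, 0.1], [0.85, 0.15], [0.1, 0.9]])
db_keywords = [{"beach", "sand", "sunset"},
               {"beach", "sand", "surf"},
               {"city", "night"}]

def annotate(query_feature, k=2):
    # Identify the k images with the most similar features.
    d = np.linalg.norm(db_features - query_feature, axis=1)
    nearest = d.argsort()[:k]
    # Intersect the keywords of the similar images; the common terms
    # become the annotation for the query image.
    return set.intersection(*(db_keywords[i] for i in nearest))

print(annotate(np.array([0.88, 0.12])))  # -> {'beach', 'sand'}
```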
-
Publication number: 20080021928
Abstract: One embodiment of the present invention provides a system that automatically annotates an image. During operation, the system receives the image. Next, the system extracts image features from the image. The system then identifies other images which have similar image features. The system next obtains text associated with the other images, and identifies intersecting keywords in the obtained text. Finally, the system annotates the image with the intersecting keywords.
Type: Application
Filed: July 24, 2006
Publication date: January 24, 2008
Inventor: Jay N. Yagnik
-
Publication number: 20070245242
Abstract: One embodiment of the present invention provides a system that automatically produces a summary of a video. During operation, the system partitions the video into scenes and then determines similarities between the scenes. Next, the system selects representative scenes from the video based on the determined similarities, and combines the selected scenes to produce the summary for the video.
Type: Application
Filed: June 15, 2006
Publication date: October 18, 2007
Inventor: Jay N. Yagnik