Patents by Inventor Jay Yagnik
Jay Yagnik has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 8589457
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training scoring models. One method includes storing data identifying a plurality of positive and a plurality of negative training images for a query. The method further includes selecting a first image from either the positive group of images or the negative group of images, and applying a scoring model to the first image. The method further includes selecting a plurality of candidate images from the other group of images, applying the scoring model to each of the candidate images, and then selecting a second image from the candidate images according to scores for the images. The method further includes determining that the scores for the first image and the second image fail to satisfy a criterion, updating the scoring model, and storing the updated scoring model.
Type: Grant
Filed: September 14, 2012
Date of Patent: November 19, 2013
Assignee: Google Inc.
Inventors: Samy Bengio, Gal Chechik, Sergey Ioffe, Jay Yagnik
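The pairwise training loop described in this abstract can be sketched as follows. The linear scoring model, hinge-style margin criterion, learning rate, and candidate-set size are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def train_scoring_model(positives, negatives, dim, margin=1.0, lr=0.1,
                        epochs=50, seed=0):
    """Sketch of the loop: pick a first image from the positive group, pick
    candidate images from the negative group, select the best-scoring
    candidate as the second image, and update the stored model whenever the
    pair of scores fails the margin criterion."""
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)  # assumed linear scoring model: score(x) = w . x
    for _ in range(epochs):
        pos = positives[rng.integers(len(positives))]       # first image
        idx = rng.choice(len(negatives), size=min(3, len(negatives)),
                         replace=False)
        candidates = negatives[idx]
        neg = candidates[np.argmax(candidates @ w)]         # second image
        # criterion: the positive should outscore the negative by the margin
        if pos @ w - neg @ w < margin:
            w += lr * (pos - neg)                           # update the model
    return w
```

On a toy set of separable feature vectors, the learned weights score the positive group above the negative group.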
-
Patent number: 8588525
Abstract: This disclosure relates to transformation invariant media matching. A fingerprinting component can generate a transformation invariant identifier for media content by adaptively encoding the relative ordering of signal markers in media content. The signal markers can be adaptively encoded via reference point geometry, or ratio histograms. An identification component compares the identifier against a set of identifiers for known media content, and the media content can be matched or identified as a function of the comparison.
Type: Grant
Filed: November 17, 2011
Date of Patent: November 19, 2013
Assignee: Google Inc.
Inventors: Jay Yagnik, Sergey Ioffe
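A minimal sketch of encoding the relative ordering of signal markers. Using marker magnitudes as the signal markers and a rank tuple as the identifier is an assumption for illustration; any monotonic transformation of the signal (gain, compression) leaves the identifier unchanged, which is the transformation-invariance property the abstract describes.

```python
import numpy as np

def ordinal_fingerprint(marker_values):
    """Identifier for media content: the rank of each signal marker.
    Ranks depend only on relative ordering, so the fingerprint is
    invariant under monotonic transformations of the signal."""
    order = np.argsort(marker_values)
    ranks = np.empty(len(marker_values), dtype=int)
    ranks[order] = np.arange(len(marker_values))
    return tuple(ranks)
```

For example, scaling and offsetting the markers produces the same fingerprint, so a transformed copy still matches the known-content identifier.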
-
Patent number: 8572087
Abstract: Systems, computer program products, and methods can identify a training set of content, and generate one or more clusters from the training set of content, where each of the one or more clusters represents similar features of the training set of content. The one or more clusters can be used to generate a classifier. New content is identified and the classifier is used to associate at least one label with the new content.
Type: Grant
Filed: October 17, 2007
Date of Patent: October 29, 2013
Assignee: Google Inc.
Inventor: Jay Yagnik
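The cluster-then-classify pipeline can be sketched with a toy k-means pass followed by a nearest-cluster classifier. The Euclidean distance, Lloyd-style iteration, and per-cluster labeling scheme are assumptions; the patent does not specify a particular clustering algorithm.

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means: groups training content into clusters such that each
    cluster represents similar features of the training set."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # assign every point to its nearest centroid
        assign = np.argmin(((points[:, None] - centroids) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = points[assign == j].mean(axis=0)
    return centroids, assign

def classify(new_point, centroids, cluster_labels):
    """The classifier associates new content with the label of the
    nearest cluster."""
    j = np.argmin(((centroids - new_point) ** 2).sum(-1))
    return cluster_labels[j]
```

With two well-separated groups of training features, new content picks up the label of whichever cluster it falls nearest to.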
-
Patent number: 8572099
Abstract: The subject matter of this specification can be embodied in, among other things, a method that includes generating content-based keywords based on content generated by users of a social network. The method includes labeling nodes comprising user nodes, which are representations of the users, with advertising labels comprising content-based keywords that coincide with advertiser-selected keywords that are based on one or more terms specified by an advertiser. The method also includes outputting, for each node, weights for the advertising labels based on weights of advertising labels associated with neighboring nodes, which are related to the node by a relationship.
Type: Grant
Filed: January 14, 2011
Date of Patent: October 29, 2013
Assignee: Google Inc.
Inventors: Shumeet Baluja, Yushi Jing, Dandapani Sivakumar, Jay Yagnik
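Deriving a node's label weights from its neighbors' weights can be sketched as simple iterative neighbor averaging. The fixed-point scheme, uniform neighbor weighting, and treatment of seeded nodes are assumptions for illustration, not the patent's prescribed method.

```python
def propagate_labels(neighbors, seed_weights, iters=5):
    """Each node's advertising-label weights are recomputed as the average
    of its neighbors' weights; nodes with explicit (seed) weights keep them."""
    nodes = list(neighbors)
    weights = {n: dict(seed_weights.get(n, {})) for n in nodes}
    for _ in range(iters):
        new = {}
        for n in nodes:
            if n in seed_weights:          # seeded nodes stay fixed
                new[n] = weights[n]
                continue
            acc = {}
            for m in neighbors[n]:
                for label, w in weights[m].items():
                    acc[label] = acc.get(label, 0.0) + w / len(neighbors[n])
            new[n] = acc
        weights = new
    return weights
```

In a three-node chain where only one end node is labeled, the label weight flows through the middle node to the far end over a few iterations.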
-
Patent number: 8537175
Abstract: A video enhancement server enhances a video. A scene segmentation module detects scene boundaries and segments the video into a number of scenes. For each frame in a given scene, a local white level and a local black level are determined from the distribution of pixel luminance values in the frame. A global white level and global black level are also determined from the distribution of pixel luminance values throughout the scene. Weighted white levels and black levels are determined for each frame as a weighted combination of the local and global levels. The video enhancement server then applies histogram stretching and saturation adjustment to each frame using the weighted white levels and black levels to determine enhanced pixel luminance values. An enhanced video comprising the enhanced pixel luminance values is stored to a video server for serving to clients.
Type: Grant
Filed: November 25, 2009
Date of Patent: September 17, 2013
Assignee: Google Inc.
Inventors: George Toderici, Jay Yagnik
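The per-frame enhancement step can be sketched as follows. The 1st/99th percentile choices for the local levels, the 50/50 local/global blend, and the clipping to [0, 1] are illustrative assumptions; saturation adjustment is omitted.

```python
import numpy as np

def enhance_frame(frame, global_black, global_white, alpha=0.5):
    """Blend the frame's local black/white levels with the scene's global
    levels, then stretch the luminance histogram to the full range."""
    local_black = np.percentile(frame, 1)    # assumed percentile estimate
    local_white = np.percentile(frame, 99)
    black = alpha * local_black + (1 - alpha) * global_black
    white = alpha * local_white + (1 - alpha) * global_white
    stretched = (frame - black) / max(white - black, 1e-6)
    return np.clip(stretched, 0.0, 1.0)      # enhanced luminance values
```

A low-contrast frame whose luminances span [0.2, 0.8] is stretched out to cover nearly the full [0, 1] range.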
-
Patent number: 8533236
Abstract: The subject matter of this specification can be embodied in, among other things, a method that includes inferring labels for videos, users, advertisements, groups of users, and other entities included in a social network system. The inferred labels can be used to generate recommendations such as videos or advertisements in which a user may be interested. Inferred labels can be generated based on social or other relationships derived from, for example, profiles or activities of social network users. Inferred labels can be advantageous when explicit information about these entities is not available. For example, a particular user may not have clicked on any online advertisements, so the user is not explicitly linked to any advertisements.
Type: Grant
Filed: July 26, 2012
Date of Patent: September 10, 2013
Assignee: Google Inc.
Inventors: Shumeet Baluja, Yushi Jing, Dandapani Sivakumar, Jay Yagnik
-
Patent number: 8510252
Abstract: A method, a system and a computer program product generate a statistical classification model used by a computer system to determine whether a video contains content in a particular class, such as inappropriate content.
Type: Grant
Filed: October 9, 2008
Date of Patent: August 13, 2013
Assignee: Google, Inc.
Inventors: Ullas Gargi, Jay Yagnik
-
Patent number: 8473500
Abstract: The subject matter of this specification can be embodied in, among other things, a method that includes determining, for a portion of users of a social network, label values each comprising an inferred interest level of a user in a subject indicated by a label, associating a first user with one or more second users based on one or more relationships specified by the first user, and outputting a first label value for the first user based on one or more second label values of the one or more second users.
Type: Grant
Filed: November 7, 2011
Date of Patent: June 25, 2013
Assignee: Google Inc.
Inventors: Shumeet Baluja, Yushi Jing, Dandapani Sivakumar, Jay Yagnik
-
Patent number: 8467607
Abstract: Methods and systems for processing an image to create an object model are disclosed. In accordance with one embodiment, each segment of the image is assigned to a respective bin of a bounding box. For each bin of the bounding box, the value of a feature for the bin is computed based on the values of that feature for each of the segments assigned to the bin. An object model is then created based on the values of the feature for the bin.
Type: Grant
Filed: November 21, 2011
Date of Patent: June 18, 2013
Assignee: Google Inc.
Inventors: Alexander T. Toshev, Jay Yagnik, Vivek Kwatra
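The bin-aggregation step can be sketched as follows. Representing segments as (x, y, feature) triples in a unit-square bounding box, a 2x2 grid of bins, and mean aggregation of the per-segment feature values are all assumptions for illustration.

```python
import numpy as np

def bin_features(segments, grid=(2, 2)):
    """Assign each segment (x, y, feature_value) to a bin of the bounding
    box and compute the bin's feature as the mean over its segments."""
    sums = np.zeros(grid)
    counts = np.zeros(grid)
    for x, y, value in segments:
        i = min(int(x * grid[0]), grid[0] - 1)   # row bin for this segment
        j = min(int(y * grid[1]), grid[1] - 1)   # column bin
        sums[i, j] += value
        counts[i, j] += 1
    # per-bin feature values; empty bins get 0
    return np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
```

The resulting grid of per-bin feature values is the kind of fixed-size summary from which an object model could then be built.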
-
Patent number: 8452778
Abstract: A classifier training system trains adapted classifiers for classifying videos based at least in part on scores produced by application of text-based classifiers to textual metadata of the videos. Each classifier corresponds to a particular category, and when applied to a given video indicates whether the video represents the corresponding category. The classifier training system applies the text-based classifiers to textual metadata of the videos to obtain the scores, and also extracts features from content of the videos, combining the scores and the content features for a video into a set of hybrid features. The adapted classifiers are then trained on the hybrid features. The adaptation of the text-based classifiers from the textual domain to the video domain allows the training of accurate video classifiers (the adapted classifiers) without requiring a large training set of authoritatively labeled videos.
Type: Grant
Filed: September 1, 2010
Date of Patent: May 28, 2013
Assignee: Google Inc.
Inventors: Yang Song, Ming Zhao, Jay Yagnik
-
Publication number: 20130117780
Abstract: A volume identification system identifies a set of unlabeled spatio-temporal volumes within each of a set of videos, each volume representing a distinct object or action. The volume identification system further determines, for each of the videos, a set of volume-level features characterizing the volume as a whole. In one embodiment, the features are based on a codebook and describe the temporal and spatial relationships of different codebook entries of the volume. The volume identification system uses the volume-level features, in conjunction with existing labels assigned to the videos as a whole, to label with high confidence some subset of the identified volumes, e.g., by employing consistency learning or training and application of weak volume classifiers. The labeled volumes may be used for a number of applications, such as training strong volume classifiers, improving video search (including locating individual volumes), and creating composite videos based on identified volumes.
Type: Application
Filed: October 1, 2012
Publication date: May 9, 2013
Inventors: Rahul Sukthankar, Jay Yagnik
-
Publication number: 20130114902
Abstract: A volume identification system identifies a set of unlabeled spatio-temporal volumes within each of a set of videos, each volume representing a distinct object or action. The volume identification system further determines, for each of the videos, a set of volume-level features characterizing the volume as a whole. In one embodiment, the features are based on a codebook and describe the temporal and spatial relationships of different codebook entries of the volume. The volume identification system uses the volume-level features, in conjunction with existing labels assigned to the videos as a whole, to label with high confidence some subset of the identified volumes, e.g., by employing consistency learning or training and application of weak volume classifiers. The labeled volumes may be used for a number of applications, such as training strong volume classifiers, improving video search (including locating individual volumes), and creating composite videos based on identified volumes.
Type: Application
Filed: August 31, 2012
Publication date: May 9, 2013
Applicant: Google Inc.
Inventors: Rahul Sukthankar, Jay Yagnik
-
Publication number: 20130113877
Abstract: A volume identification system identifies a set of unlabeled spatio-temporal volumes within each of a set of videos, each volume representing a distinct object or action. The volume identification system further determines, for each of the videos, a set of volume-level features characterizing the volume as a whole. In one embodiment, the features are based on a codebook and describe the temporal and spatial relationships of different codebook entries of the volume. The volume identification system uses the volume-level features, in conjunction with existing labels assigned to the videos as a whole, to label with high confidence some subset of the identified volumes, e.g., by employing consistency learning or training and application of weak volume classifiers. The labeled volumes may be used for a number of applications, such as training strong volume classifiers, improving video search (including locating individual volumes), and creating composite videos based on identified volumes.
Type: Application
Filed: October 1, 2012
Publication date: May 9, 2013
Inventors: Rahul Sukthankar, Jay Yagnik
-
Publication number: 20130108177
Abstract: A motion manifold system analyzes a set of videos, identifying image patches within those videos corresponding to regions of interest and identifying patch trajectories by tracking the movement of the regions over time in the videos. Based on the patch identification and tracking, the system produces a motion manifold data structure that captures the way in which the same semantic region can have different visual representations over time. The motion manifold can then be applied to determine the semantic similarity between different patches, or between higher-level constructs such as images or video segments, including detecting semantic similarity between patches or other constructs that are visually dissimilar.
Type: Application
Filed: January 9, 2012
Publication date: May 2, 2013
Applicant: Google Inc.
Inventors: Rahul Sukthankar, Jay Yagnik
-
Patent number: 8429212
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training scoring models. One method includes storing data identifying a plurality of positive and a plurality of negative training images for a query. The method further includes selecting a first image from either the positive group of images or the negative group of images, and applying a scoring model to the first image. The method further includes selecting a plurality of candidate images from the other group of images, applying the scoring model to each of the candidate images, and then selecting a second image from the candidate images according to scores for the images. The method further includes determining that the scores for the first image and the second image fail to satisfy a criterion, updating the scoring model, and storing the updated scoring model.
Type: Grant
Filed: January 3, 2012
Date of Patent: April 23, 2013
Assignee: Google Inc.
Inventors: Samy Bengio, Gal Chechik, Sergey Ioffe, Jay Yagnik
-
Patent number: 8417751
Abstract: Convolutions are frequently used in signal processing. A method for performing an ordinal convolution is disclosed. In an embodiment of the disclosed subject matter, an ordinal mask may be obtained. The ordinal mask may describe a property of a signal. A representation of a signal may be received. A processor may convert the representation of the signal to an ordinal representation of the signal. The ordinal mask may be applied to the ordinal representation of the signal. Based upon the application of the ordinal mask to the ordinal representation of the signal, it may be determined that the property is present in the signal. The ordinal convolution method described herein may be applied to any type of signal processing method that relies on a transform or convolution.
Type: Grant
Filed: November 4, 2011
Date of Patent: April 9, 2013
Assignee: Google Inc.
Inventor: Jay Yagnik
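One way to read this abstract is as window-wise rank matching: convert each window of the signal to its ordinal representation and test it against an ordinal mask. Treating the mask as a rank tuple and using exact rank equality as the "property is present" test are assumptions for illustration.

```python
import numpy as np

def ordinal_ranks(window):
    """Ordinal representation of a window: the rank of each sample."""
    order = np.argsort(window)
    ranks = np.empty(len(window), dtype=int)
    ranks[order] = np.arange(len(window))
    return tuple(ranks)

def ordinal_convolve(signal, mask_ranks):
    """Slide the ordinal mask over the signal; a match means the window has
    the same relative ordering as the mask, regardless of scale or offset."""
    n = len(mask_ranks)
    return [i for i in range(len(signal) - n + 1)
            if ordinal_ranks(signal[i:i + n]) == tuple(mask_ranks)]
```

For example, the mask (0, 1, 2) detects every strictly rising run of three samples, whatever their absolute values.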
-
Patent number: 8396286
Abstract: A concept learning module trains video classifiers associated with a stored set of concepts derived from textual metadata of a plurality of videos, the training based on features extracted from training videos. Each of the video classifiers can then be applied to a given video to obtain a score indicating whether or not the video is representative of the concept associated with the classifier. The learning process does not require any concepts to be known a priori, nor does it require a training set of videos having training labels manually applied by human experts. Rather, in one embodiment the learning is based solely upon the content of the videos themselves and on whatever metadata was provided along with the video, e.g., on possibly sparse and/or inaccurate textual metadata specified by a user of a video hosting service who submitted the video.
Type: Grant
Filed: June 24, 2010
Date of Patent: March 12, 2013
Assignee: Google Inc.
Inventors: Hrishikesh Aradhye, George Toderici, Jay Yagnik
-
Patent number: 8396325
Abstract: An image processing system enhances the resolution of an original image using higher-resolution image data from other images. The image processing system defines a plurality of overlapping partitions for the original image, each partition defining a set of non-overlapping site patches. During an optimization phase, the system identifies, for site patches of the original images, label patches within related images that are of most relevance. During a rendering phase independent of the optimization phase, an output image with enhanced resolution is synthesized by substituting, for site patches of the original image, the identified relevant label patches from the related images.
Type: Grant
Filed: April 27, 2009
Date of Patent: March 12, 2013
Assignee: Google Inc.
Inventors: Vivek Kwatra, Mei Han, Jay Yagnik
-
Patent number: 8385662
Abstract: Clustering algorithms such as the k-means algorithm are used in applications that process entities with spatial and/or temporal characteristics, for example, media objects representing audio, video, or graphical data. Feature vectors representing characteristics of the entities are partitioned using clustering methods that produce results sensitive to an initial set of cluster seeds. The set of initial cluster seeds is generated using principal component analysis of either the complete feature vector set or a subset thereof. The feature vector set is divided into a desired number of initial clusters and a seed determined from each initial cluster.
Type: Grant
Filed: April 30, 2009
Date of Patent: February 26, 2013
Assignee: Google Inc.
Inventors: Sangho Yoon, Jay Yagnik, Mei Han, Vivek Kwatra
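The seeding procedure described in the abstract can be sketched as follows: project the feature vectors onto the first principal component, split the sorted projections into k equal groups, and take each group's mean as an initial seed. Splitting into equal-sized groups and using the group mean as the seed are assumptions about details the abstract leaves open.

```python
import numpy as np

def pca_seeds(features, k):
    """PCA-based seed selection for k-means: divide the feature vector set
    into k initial clusters along the first principal component and derive
    one seed from each initial cluster."""
    centered = features - features.mean(axis=0)
    # first principal component via SVD of the centered data
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    projections = centered @ vt[0]
    order = np.argsort(projections)          # sort along the principal axis
    groups = np.array_split(order, k)        # k initial clusters
    return np.array([features[g].mean(axis=0) for g in groups])
```

On two well-separated groups of points, the seeds land near the two group centers, giving k-means a deterministic, well-spread start instead of a random one.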
-
Patent number: 8340449
Abstract: A method and system generates and compares fingerprints for videos in a video library. The video fingerprints provide a compact representation of the spatial and sequential characteristics of the video that can be used to quickly and efficiently identify video content. Because the fingerprints are based on spatial and sequential characteristics rather than exact bit sequences, visual content of videos can be effectively compared even when there are small differences between the videos in compression factors, source resolutions, start and stop times, frame rates, and so on. Comparison of video fingerprints can be used, for example, to search for and remove copyright protected videos from a video library. Further, duplicate videos can be detected and discarded in order to preserve storage space.
Type: Grant
Filed: September 30, 2011
Date of Patent: December 25, 2012
Assignee: Google Inc.
Inventors: Jay Yagnik, Henry A. Rowley, Sergey Ioffe
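A minimal sketch of a spatial-plus-sequential fingerprint: each frame is summarized by the rank ordering of mean intensities over a coarse grid (robust to compression and resolution changes, unlike exact bit sequences), and two videos are compared frame by frame. The 2x2 grid, rank encoding, and mismatch-fraction distance are illustrative assumptions, not the patent's specific scheme.

```python
import numpy as np

def frame_fingerprint(frame, grid=2):
    """Spatial fingerprint of one frame: rank ordering of mean intensities
    over a coarse grid of cells."""
    h, w = frame.shape
    cells = [frame[i * h // grid:(i + 1) * h // grid,
                   j * w // grid:(j + 1) * w // grid].mean()
             for i in range(grid) for j in range(grid)]
    order = np.argsort(cells)
    ranks = np.empty(len(cells), dtype=int)
    ranks[order] = np.arange(len(cells))
    return tuple(ranks)

def video_distance(fp_a, fp_b):
    """Sequential comparison: fraction of aligned frames whose spatial
    fingerprints differ."""
    return sum(a != b for a, b in zip(fp_a, fp_b)) / min(len(fp_a), len(fp_b))
```

A re-encoded copy whose pixel values are scaled and offset produces identical per-frame fingerprints, so its distance to the original is zero and it would be flagged as a duplicate.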