Patents by Inventor Sanketh Shetty
Sanketh Shetty has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9811780
Abstract: A system and method for identifying and predicting subjective attributes for entities (e.g., media clips, images, newspaper articles, blog entries, persons, organizations, commercial businesses, etc.) are disclosed. In one aspect, a first set of subjective attributes for a first entity is identified based on a reaction to the first entity. A classifier is trained on a set of input-output mappings, wherein the set of input-output mappings comprises an input-output mapping whose input is based on a feature vector for the first entity and whose output is based on the first set of subjective attributes. A feature vector for a second entity is then provided to the trained classifier to obtain a second set of subjective attributes for the second entity.
Type: Grant
Filed: March 15, 2013
Date of Patent: November 7, 2017
Assignee: GOOGLE INC.
Inventors: Hrishikesh Aradhye, Sanketh Shetty
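The training step described in this abstract can be sketched as follows. This is a minimal illustration with hypothetical toy data (feature vectors for entities and subjective-attribute sets derived from observed reactions); a 1-nearest-neighbour lookup stands in for whatever classifier the patent actually covers.

```python
# Train on input-output mappings (feature vector -> subjective attributes),
# then query the trained model with a new entity's feature vector.

def train(mappings):
    """Store the (feature_vector, attributes) input-output mappings."""
    return list(mappings)

def predict(model, feature_vector):
    """Return the attribute set of the closest training feature vector."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, attrs = min(model, key=lambda m: dist(m[0], feature_vector))
    return attrs

# Entity 1: reactions yielded attributes {"funny", "upbeat"}; entity 2: {"sad", "mellow"}.
model = train([
    ((0.9, 0.1), {"funny", "upbeat"}),
    ((0.1, 0.8), {"sad", "mellow"}),
])
print(predict(model, (0.85, 0.2)))  # close to the first entity
```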
-
Patent number: 9779304
Abstract: A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method identifies an entity from a plurality of entities identifying characteristics of video items. The computer-implemented method selects a set of features correlated with the entity based on a value of a feature of a plurality of features, determines a classifier for the entity using the set of features, and determines an aggregation calibration function for the entity based on the set of features. The computer-implemented method selects a video frame from a video item, where the video frame has associated features, and determines a probability of existence of the entity based on the associated features using the classifier and the aggregation calibration function.
Type: Grant
Filed: August 11, 2015
Date of Patent: October 3, 2017
Assignee: Google Inc.
Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Nitin Khandelwal, Sudheendra Vijayanarasimhan, Weilong Yang, Sanketh Shetty
-
Patent number: 9659218
Abstract: Implementations disclose predicting video start times for maximizing user engagement. A method includes applying a machine-learned model to audio-visual content features of segments of a target content item, the machine-learned model trained based on user interaction signals and audio-visual content features of a training set of content item segments, calculating, based on applying the machine-learned model, a salience score for each of the segments of the target content item, and selecting, based on the calculated salience scores, one of the segments of the target content item as a starting point for playback of the target content item.
Type: Grant
Filed: April 29, 2015
Date of Patent: May 23, 2017
Assignee: Google Inc.
Inventors: Sanketh Shetty, Apostol Natsev, Balakrishnan Varadarajan, Tomas Izo
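The selection step in this abstract reduces to an argmax over per-segment salience scores. The sketch below assumes a hypothetical stand-in scoring function and a fixed 5-second segment length; the actual machine-learned model is out of scope.

```python
# Score each segment of a video, then start playback at the most salient one.

SEGMENT_SECONDS = 5  # assumed segment length

def salience(segment_features):
    # Stand-in for the machine-learned model: average of toy features.
    return sum(segment_features) / len(segment_features)

def best_start_time(segments):
    """Return the start offset (seconds) of the highest-scoring segment."""
    scores = [salience(s) for s in segments]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best * SEGMENT_SECONDS

segments = [(0.1, 0.2), (0.7, 0.9), (0.4, 0.3)]
print(best_start_time(segments))  # second segment wins -> 5
```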
-
Patent number: 9627004
Abstract: A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method selects an entity from a plurality of entities identifying characteristics of a video item, where the video item has associated metadata. The computer-implemented method receives probabilities of existence of the entity in video frames of the video item, and selects a video frame determined to comprise the entity responsive to determining the video frame having a probability of existence of the entity greater than zero. The computer-implemented method determines a scaling factor for the probability of existence of the entity using the metadata of the video item, and determines an adjusted probability of existence of the entity by using the scaling factor to adjust the probability of existence of the entity. The computer-implemented method labels the video frame with the adjusted probability of existence.
Type: Grant
Filed: October 14, 2015
Date of Patent: April 18, 2017
Assignee: Google Inc.
Inventors: Balakrishnan Varadarajan, Sanketh Shetty, Apostol Natsev, Nitin Khandelwal, Weilong Yang, Sudheendra Vijayanarasimhan, WeiHsin Gu, Nicola Muscettola
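The adjustment step in this abstract can be sketched as below. The specific scaling rule (boost the frame-level probability when the entity appears in the video's title, capped at 1.0) is an assumption for illustration; the patent leaves the metadata-derived factor open.

```python
# Adjust a frame-level entity probability using video metadata.

def scaling_factor(entity, metadata):
    # Assumed rule: title mentions of the entity boost confidence.
    return 1.5 if entity in metadata.get("title", "").lower() else 1.0

def adjusted_probability(entity, p, metadata):
    if p <= 0:  # only frames with probability > 0 are selected for labeling
        return None
    return min(1.0, p * scaling_factor(entity, metadata))

meta = {"title": "Cat plays piano"}
print(adjusted_probability("cat", 0.5, meta))  # boosted by the title match -> 0.75
print(adjusted_probability("dog", 0.5, meta))  # unchanged -> 0.5
```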
-
Patent number: 9607224
Abstract: A solution is provided for temporally segmenting a video based on analysis of entities identified in the video frames of the video. The video is decoded into multiple video frames and multiple video frames are selected for annotation. The annotation process identifies entities present in a sample video frame, and each identified entity has a timestamp and confidence score indicating the likelihood that the entity is accurately identified. For each identified entity, a time series comprising timestamps and corresponding confidence scores is generated and smoothed to reduce annotation noise. One or more segments containing an entity over the length of the video are obtained by detecting boundaries of the segments in the time series of the entity. From the individual temporal segmentation for each identified entity in the video, an overall temporal segmentation for the video is generated, where the overall temporal segmentation reflects the semantics of the video.
Type: Grant
Filed: May 14, 2015
Date of Patent: March 28, 2017
Assignee: Google Inc.
Inventors: Min-hsuan Tsai, Sudheendra Vijayanarasimhan, Tomas Izo, Sanketh Shetty, Balakrishnan Varadarajan
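The smoothing and boundary-detection steps for a single entity can be sketched as follows. The moving-average window and the 0.5 threshold are assumptions; the patent does not fix particular values.

```python
# Smooth an entity's confidence time series, then emit segments where the
# smoothed confidence stays above a threshold.

def smooth(series, window=3):
    """Moving average to reduce annotation noise."""
    half = window // 2
    return [sum(series[max(0, i - half):i + half + 1]) /
            len(series[max(0, i - half):i + half + 1])
            for i in range(len(series))]

def segments(series, threshold=0.5):
    """Return (start, end) index pairs where smoothed confidence > threshold."""
    out, start = [], None
    for i, v in enumerate(smooth(series)):
        if v > threshold and start is None:
            start = i
        elif v <= threshold and start is not None:
            out.append((start, i - 1))
            start = None
    if start is not None:
        out.append((start, len(series) - 1))
    return out

# Confidence that "dog" is on screen, one value per sampled frame.
conf = [0.1, 0.2, 0.9, 0.95, 0.9, 0.2, 0.1, 0.8, 0.9, 0.85]
print(segments(conf))  # two segments: frames 2-4 and 7-9
```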
-
Publication number: 20170046573
Abstract: A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method identifies an entity from a plurality of entities identifying characteristics of video items. The computer-implemented method selects a set of features correlated with the entity based on a value of a feature of a plurality of features, determines a classifier for the entity using the set of features, and determines an aggregation calibration function for the entity based on the set of features. The computer-implemented method selects a video frame from a video item, where the video frame has associated features, and determines a probability of existence of the entity based on the associated features using the classifier and the aggregation calibration function.
Type: Application
Filed: August 11, 2015
Publication date: February 16, 2017
Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Nitin Khandelwal, Sudheendra Vijayanarasimhan, Weilong Yang, Sanketh Shetty
-
Publication number: 20160335499
Abstract: A solution is provided for temporally segmenting a video based on analysis of entities identified in the video frames of the video. The video is decoded into multiple video frames and multiple video frames are selected for annotation. The annotation process identifies entities present in a sample video frame, and each identified entity has a timestamp and confidence score indicating the likelihood that the entity is accurately identified. For each identified entity, a time series comprising timestamps and corresponding confidence scores is generated and smoothed to reduce annotation noise. One or more segments containing an entity over the length of the video are obtained by detecting boundaries of the segments in the time series of the entity. From the individual temporal segmentation for each identified entity in the video, an overall temporal segmentation for the video is generated, where the overall temporal segmentation reflects the semantics of the video.
Type: Application
Filed: May 14, 2015
Publication date: November 17, 2016
Inventors: Min-hsuan Tsai, Sudheendra Vijayanarasimhan, Tomas Izo, Sanketh Shetty, Balakrishnan Varadarajan
-
Publication number: 20160306804
Abstract: Methods, systems, and media for presenting comments based on correlation with content are provided. In some implementations, a method for presenting ranked comments is provided, the method comprising: receiving, using a hardware processor, content data related to an item of content; receiving, using the hardware processor, comment data related to a comment associated with the item of content; determining, using the hardware processor, a degree of correlation between at least a portion of the comment data and one or more portions of the content data; determining, using the hardware processor, a priority for the comment based on the degree of correlation; and presenting, using the hardware processor, the comment based on the priority.
Type: Application
Filed: June 28, 2016
Publication date: October 20, 2016
Inventors: Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan, Sanketh Shetty, Nisarg Dilipkumar Kothari, Nicholas Delmonico Rizzolo
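The correlation-then-rank flow in this abstract can be sketched with a deliberately simple measure. Word-overlap (Jaccard) similarity is an assumption standing in for whatever correlation the application actually claims.

```python
# Rank comments by their degree of correlation with the content data.

def correlation(comment, content):
    """Assumed measure: Jaccard similarity over lowercased words."""
    a, b = set(comment.lower().split()), set(content.lower().split())
    return len(a & b) / len(a | b)

def rank_comments(comments, content):
    """Present comments in priority order (highest correlation first)."""
    return sorted(comments, key=lambda c: correlation(c, content), reverse=True)

content = "highlights of the championship final penalty shootout"
comments = ["great penalty shootout", "first", "loved the final highlights"]
print(rank_comments(comments, content))
```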
-
Patent number: 9384242
Abstract: Techniques identify time-sensitive content and present the time-sensitive content to communication devices of users interested or potentially interested in the time-sensitive content. A content management component analyzes video or audio content, extracts information from the content, and determines whether the content is time-sensitive content, such as recent news-related content, based on analysis of the content and extracted information. The content management component evaluates user-related information and the extracted information, and determines whether a user(s) is likely to be interested in the time-sensitive content based on the evaluation results. The content management component sends a notification to the communication device(s) of the user(s) in response to determining the user(s) is likely to be interested in the time-sensitive content.
Type: Grant
Filed: March 14, 2013
Date of Patent: July 5, 2016
Assignee: Google Inc.
Inventors: Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan, Sanketh Shetty, Nisarg Dilipkumar Kothari, Nicholas Delmonico Rizzolo
-
Publication number: 20160132771
Abstract: A machine learning technique may be applied to applications hosted by an application store to extract features that can be utilized to train one or more classifiers of the applications based on their relative complexity. A processor may receive pairwise comparisons of relative complexity and feature representations for the applications to be used in training of a classifier. The processor may determine a feature set that is correlated with the pairwise comparison of relative complexity and obtain a classifier based thereupon.
Type: Application
Filed: November 12, 2014
Publication date: May 12, 2016
Inventors: Sanketh Shetty, Ibrahim Elbouchikhi
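Learning from pairwise complexity comparisons can be sketched with a perceptron-style ranker on feature differences. The features (screen count, permission count), learning rate, and update rule are toy assumptions, not the application's claimed method.

```python
# Learn a linear complexity score from pairs (a, b) meaning
# "application a is more complex than application b".

def train_ranker(pairs, dim, epochs=50, lr=0.1):
    w = [0.0] * dim
    for _ in range(epochs):
        for more, less in pairs:
            diff = [m - l for m, l in zip(more, less)]
            score = sum(wi * di for wi, di in zip(w, diff))
            if score <= 0:  # ranked the wrong way: nudge w toward the difference
                w = [wi + lr * di for wi, di in zip(w, diff)]
    return w

def complexity(w, features):
    return sum(wi * fi for wi, fi in zip(w, features))

# Feature vectors: (number of screens, permission count), scaled to [0, 1].
pairs = [((0.9, 0.8), (0.2, 0.1)), ((0.7, 0.6), (0.3, 0.2))]
w = train_ranker(pairs, dim=2)
print(complexity(w, (0.9, 0.8)) > complexity(w, (0.2, 0.1)))  # -> True
```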
-
Patent number: 9330171
Abstract: A method includes receiving, by a processing device of a content sharing platform, a video content, selecting at least one video frame from the video content, subsampling the at least one video frame to generate a first representation of the at least one video frame, selecting a sub-region of the at least one video frame to generate a second representation of the at least one video frame, and applying a convolutional neural network to the first and second representations of the at least one video frame to generate an annotation for the video content.
Type: Grant
Filed: January 22, 2014
Date of Patent: May 3, 2016
Assignee: GOOGLE INC.
Inventors: Sanketh Shetty, Andrej Karpathy, George Dan Toderici
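The two frame representations named in this abstract can be sketched directly: a subsampled (low-resolution) copy of the whole frame and a cropped central sub-region. The frame here is a toy 2D grid, and the convolutional network itself is out of scope; the stride and crop size are assumptions.

```python
# Build the two input representations for the network from one video frame.

def subsample(frame, stride=2):
    """First representation: keep every `stride`-th row and column."""
    return [row[::stride] for row in frame[::stride]]

def center_crop(frame, size):
    """Second representation: a sub-region around the frame center."""
    h, w = len(frame), len(frame[0])
    top, left = (h - size) // 2, (w - size) // 2
    return [row[left:left + size] for row in frame[top:top + size]]

frame = [[r * 4 + c for c in range(4)] for r in range(4)]
print(subsample(frame))       # 2x2 low-resolution view of the whole frame
print(center_crop(frame, 2))  # 2x2 central region at full resolution
```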
-
Publication number: 20160070962
Abstract: A computer-implemented method for selecting representative frames for videos is provided. The method includes receiving a video and identifying a set of features for each of the frames of the video, the features including frame-based features and semantic features, the semantic features identifying likelihoods of semantic concepts being present as content in the frames of the video. A set of video segments for the video is subsequently generated. Each video segment includes a chronological subset of frames from the video, and each frame is associated with at least one of the semantic features. The method generates a score for each frame of the subset of frames for each video segment based at least on the semantic features, and selects a representative frame for each video segment based on the scores of the frames in the video segment. The representative frame represents and summarizes the video segment.
Type: Application
Filed: September 8, 2015
Publication date: March 10, 2016
Inventors: Sanketh Shetty, Tomas Izo, Min-Hsuan Tsai, Sudheendra Vijayanarasimhan, Apostol Natsev, Sami Abu-El-Haija, George Dan Toderici, Susanna Ricco, Balakrishnan Varadarajan, Nicola Muscettola, WeiHsin Gu, Weilong Yang, Nitin Khandelwal, Phuong Le
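The per-segment selection step can be sketched as below. Scoring a frame as the sum of its semantic-concept likelihoods is a simplifying assumption; the application's scoring function also uses frame-based features.

```python
# Pick the highest-scoring frame of each video segment as its representative.

def frame_score(semantic_features):
    """Assumed score: sum of semantic-concept likelihoods for the frame."""
    return sum(semantic_features.values())

def representative_frames(video_segments):
    reps = []
    for segment in video_segments:
        best = max(segment, key=lambda f: frame_score(f["semantics"]))
        reps.append(best["index"])
    return reps

video_segments = [
    [{"index": 0, "semantics": {"dog": 0.2}},
     {"index": 1, "semantics": {"dog": 0.9, "park": 0.4}}],
    [{"index": 2, "semantics": {"car": 0.8}},
     {"index": 3, "semantics": {"car": 0.3}}],
]
print(representative_frames(video_segments))  # -> [1, 2]
```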
-
Patent number: 9165255
Abstract: A given set of videos are sequenced in an aesthetically pleasing manner using models learned from human-curated playlists. Semantic features associated with each video in the curated playlists are identified, and a first-order Markov chain model is learned from the curated playlists. In one method, a directed graph is induced from the Markov model, and a sequencing is obtained by finding the shortest path through the directed graph. In another method, a sampling-based approach is implemented to produce paths on the digraph; multiple samples are generated and the best-scoring sample is returned as the output. In a third method, a relevance-based random-walk sampling algorithm is modified to produce a reordering of the playlist.
Type: Grant
Filed: July 26, 2012
Date of Patent: October 20, 2015
Assignee: Google Inc.
Inventors: Sanketh Shetty, Ruei-Sung Lin, David A. Ross, Hrishikesh Balkrishna Aradhye
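The first-order Markov model can be sketched as transition counts over curated playlists; for brevity the ordering step below is a greedy walk following the most probable transition, a simplification of the shortest-path and sampling searches described in the patent. The mood labels are toy stand-ins for semantic features.

```python
# Learn playlist transitions from curated playlists, then sequence a new set.

from collections import defaultdict

def learn_transitions(playlists):
    """Count first-order transitions between consecutive playlist items."""
    counts = defaultdict(lambda: defaultdict(int))
    for pl in playlists:
        for a, b in zip(pl, pl[1:]):
            counts[a][b] += 1
    return counts

def sequence(videos, counts, start):
    """Greedily follow the most frequent learned transition at each step."""
    order, remaining = [start], set(videos) - {start}
    while remaining:
        cur = order[-1]
        nxt = max(remaining, key=lambda v: counts[cur][v])
        order.append(nxt)
        remaining.remove(nxt)
    return order

playlists = [["calm", "upbeat", "energetic"], ["calm", "upbeat", "dance"]]
counts = learn_transitions(playlists)
print(sequence(["energetic", "calm", "upbeat"], counts, start="calm"))
```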
-
Patent number: 9009083
Abstract: A mechanism for automatic quantification of multimedia production quality is presented. A method of embodiments includes assembling data samples from a plurality of users, the data samples indicating a relative production quality of a set of content items based on a comparison of production quality between content items in the set, extracting content features from each of the content items in the set, and learning, based on the data samples from the plurality of users, a statistical model on the extracted content features, wherein the learned statistical model can predict a production quality of another content item that is not part of the set of content items.
Type: Grant
Filed: February 15, 2012
Date of Patent: April 14, 2015
Assignee: Google Inc.
Inventors: Sanketh Shetty, Jonathon Shlens, Hrishikesh Aradhye
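The data-assembly step can be sketched as aggregating user judgments of the form (better item, worse item) into per-item win rates, which could then serve as production-quality targets for a model over content features. The win-rate aggregation rule is an assumption for illustration.

```python
# Turn pairwise production-quality judgments into per-item quality targets.

from collections import Counter

def win_rates(judgments):
    """Fraction of comparisons each item won."""
    wins, appearances = Counter(), Counter()
    for better, worse in judgments:
        wins[better] += 1
        appearances[better] += 1
        appearances[worse] += 1
    return {item: wins[item] / appearances[item] for item in appearances}

# Users judged item "a" better than "b" and "c", and "b" better than "c".
judgments = [("a", "b"), ("a", "c"), ("b", "c"), ("a", "b")]
print(win_rates(judgments))
```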
-
Patent number: 8886723
Abstract: A method for assessing sharing of items within a social network is provided. The method includes identifying a first sharing of a social item by a first user of a social network, and determining one or more second sharings of the social item by one or more second users, the one or more second sharings being based on the first sharing. The method also includes determining a sharing score associated with the first user based on a number of the one or more second sharings, and updating a data structure based on the determined sharing score associated with the first user. The data structure stores respective sharing scores associated with a plurality of users of the social network. Systems and machine-readable media are also provided.
Type: Grant
Filed: June 21, 2012
Date of Patent: November 11, 2014
Assignee: Google Inc.
Inventors: Ullas Gargi, Sanketh Shetty, Tomáš Ižo, Charles Duhadway, Kevin Snow McCurley, Nisarg Dilipkumar Kothari
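The score-update step can be sketched as below. Using the raw count of second sharings as the score is an assumption; the patent leaves the exact scoring function open.

```python
# Update a user's sharing score from re-shares traced back to their share.

def update_sharing_scores(scores, first_user, second_sharings):
    """Add the number of second sharings to the first user's score."""
    scores[first_user] = scores.get(first_user, 0) + len(second_sharings)
    return scores

scores = {}
update_sharing_scores(scores, "alice", ["bob", "carol"])  # two re-shares
update_sharing_scores(scores, "alice", ["dave"])          # one more re-share
print(scores)  # -> {'alice': 3}
```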
-
Publication number: 20110142335
Abstract: An image comparison system includes a memory unit that stores data representative of target apparel images that depict apparel items. An image processing unit is provided to process a query apparel image to extract data representative of a query apparel item depicted in the query apparel image. The image processing unit determines weighted color and pattern differences between the target apparel images and the query apparel image.
Type: Application
Filed: December 11, 2009
Publication date: June 16, 2011
Inventors: Bernard Ghanem, Sanketh Shetty, Esther Resendiz
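The weighted-difference matching in this abstract can be sketched as below. The toy color-histogram and pattern descriptors and the 0.7/0.3 weights are assumptions; the application defines its own feature extraction and weighting.

```python
# Compare a query apparel item against targets by weighted color and
# pattern differences, returning the closest match.

def weighted_difference(query, target, w_color=0.7, w_pattern=0.3):
    color_diff = sum(abs(q - t) for q, t in zip(query["color"], target["color"]))
    pattern_diff = sum(abs(q - t) for q, t in zip(query["pattern"], target["pattern"]))
    return w_color * color_diff + w_pattern * pattern_diff

def best_match(query, targets):
    return min(targets, key=lambda name: weighted_difference(query, targets[name]))

targets = {
    "red_striped": {"color": (0.9, 0.1), "pattern": (1.0, 0.0)},
    "blue_plain":  {"color": (0.1, 0.9), "pattern": (0.0, 1.0)},
}
query = {"color": (0.8, 0.2), "pattern": (0.9, 0.1)}
print(best_match(query, targets))  # -> red_striped
```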