Patents by Inventor Balakrishnan Varadarajan
Balakrishnan Varadarajan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9779304
Abstract: A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method identifies an entity from a plurality of entities identifying characteristics of video items. The computer-implemented method selects a set of features correlated with the entity based on a value of a feature of a plurality of features, determines a classifier for the entity using the set of features, and determines an aggregation calibration function for the entity based on the set of features. The computer-implemented method selects a video frame from a video item, where the video frame has associated features, and determines a probability of existence of the entity based on the associated features using the classifier and the aggregation calibration function.
Type: Grant
Filed: August 11, 2015
Date of Patent: October 3, 2017
Assignee: Google Inc.
Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Nitin Khandelwal, Sudheendra Vijayanarasimhan, Weilong Yang, Sanketh Shetty
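The pipeline the abstract describes, scoring a frame's features with a per-entity classifier and then mapping the raw score to a probability of existence, can be sketched as follows. The feature values, weights, and the sigmoid used as the calibration function are illustrative stand-ins, not the learned models from the patent:

```python
import math

def classifier_score(features, weights, bias):
    # Linear classifier over the feature set selected for this entity
    # (weights and bias are hypothetical, not learned values).
    return sum(w * f for w, f in zip(weights, features)) + bias

def calibrate(score):
    # Map a raw classifier score to a probability of existence.
    # A plain sigmoid stands in for the learned aggregation
    # calibration function.
    return 1.0 / (1.0 + math.exp(-score))

# Hypothetical features of one video frame, correlated with one entity
frame_features = [0.9, 0.2, 0.7]
weights, bias = [1.5, -0.4, 2.0], -1.0
prob = calibrate(classifier_score(frame_features, weights, bias))
```

The frame would then be annotated with `prob` as the entity's probability of existence.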
-
Publication number: 20170221385
Abstract: A system and method for quantifying clinical skill of a user, comprising: collecting data relating to a surgical task done by a user using a surgical device; comparing the data for the surgical task to other data for another similar surgical task; quantifying the clinical skill of the user based on the comparing of the data for the surgical task to the other data for the other similar surgical task; and outputting the clinical skill of the user.
Type: Application
Filed: April 19, 2017
Publication date: August 3, 2017
Applicant: The Johns Hopkins University
Inventors: Carol E. REILEY, Gregory D. HAGER, Balakrishnan VARADARAJAN, Sanjeev Pralhad KHUDANPUR, Rajesh KUMAR, Henry C. LIN
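The compare-then-quantify step can be illustrated with a minimal sketch: a user's task metrics are compared against reference values from a similar task, and the discrepancy is mapped to a skill score. The metric names and the relative-error scoring rule are assumptions for illustration, not the publication's actual model:

```python
def skill_score(user_metrics, expert_metrics):
    # Quantify clinical skill as similarity between a user's task
    # metrics (e.g. completion time, path length, tool motions) and
    # reference values from a similar task performed by an expert.
    # The scoring form is illustrative, not the patented method.
    diffs = [abs(u - e) / e for u, e in zip(user_metrics, expert_metrics)]
    mean_rel_error = sum(diffs) / len(diffs)
    return max(0.0, 1.0 - mean_rel_error)  # 1.0 = expert-like

expert = [120.0, 45.0, 60.0]   # hypothetical: seconds, cm of path, motions
trainee = [180.0, 70.0, 95.0]
score = skill_score(trainee, expert)
```

The output score would then be reported to the user as the quantified clinical skill.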
-
Patent number: 9659218
Abstract: Implementations disclose predicting video start times for maximizing user engagement. A method includes applying a machine-learned model to audio-visual content features of segments of a target content item, the machine-learned model trained based on user interaction signals and audio-visual content features of a training set of content item segments, calculating, based on applying the machine-learned model, a salience score for each of the segments of the target content item, and selecting, based on the calculated salience scores, one of the segments of the target content item as a starting point for playback of the target content item.
Type: Grant
Filed: April 29, 2015
Date of Patent: May 23, 2017
Assignee: Google Inc.
Inventors: Sanketh Shetty, Apostol Natsev, Balakrishnan Varadarajan, Tomas Izo
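The final selection step is straightforward once per-segment salience scores exist: pick the segment with the highest score and start playback there. A minimal sketch, with hypothetical segment times and scores standing in for the machine-learned model's output:

```python
def select_start(segments, salience):
    # Pick the playback start time as the beginning of the segment
    # with the highest salience score (scores would come from the
    # trained model; these are hypothetical).
    best = max(range(len(salience)), key=salience.__getitem__)
    return segments[best][0]

# Hypothetical (start, end) times in seconds and per-segment scores
segments = [(0, 10), (10, 20), (20, 30), (30, 40)]
salience = [0.1, 0.7, 0.9, 0.4]
start_time = select_start(segments, salience)  # → 20
```

Playback of the target item would then begin at `start_time` rather than at zero.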
-
Patent number: 9627004
Abstract: A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method selects an entity from a plurality of entities identifying characteristics of a video item, where the video item has associated metadata. The computer-implemented method receives probabilities of existence of the entity in video frames of the video item, and selects a video frame determined to comprise the entity responsive to determining that the video frame has a probability of existence of the entity greater than zero. The computer-implemented method determines a scaling factor for the probability of existence of the entity using the metadata of the video item, and determines an adjusted probability of existence of the entity by using the scaling factor to adjust the probability of existence of the entity. The computer-implemented method labels the video frame with the adjusted probability of existence.
Type: Grant
Filed: October 14, 2015
Date of Patent: April 18, 2017
Assignee: Google Inc.
Inventors: Balakrishnan Varadarajan, Sanketh Shetty, Apostol Natsev, Nitin Khandelwal, Weilong Yang, Sudheendra Vijayanarasimhan, WeiHsin Gu, Nicola Muscettola
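The metadata-driven adjustment can be sketched as a scaling factor applied to the frame-level probability. The specific rule below (boost when the entity matches title/description keywords, dampen otherwise) and the 1.2/0.8 factors are illustrative assumptions, not the patent's actual method of deriving the scaling factor:

```python
def adjusted_probability(prob, metadata, entity_keywords):
    # Scale a frame-level probability of an entity using the video's
    # metadata. The keyword-match rule and factors are hypothetical
    # stand-ins for the determined scaling factor.
    text = (metadata.get("title", "") + " "
            + metadata.get("description", "")).lower()
    scale = 1.2 if any(k in text for k in entity_keywords) else 0.8
    return min(1.0, prob * scale)

meta = {"title": "A day at the dog park", "description": "puppies playing"}
p = adjusted_probability(0.5, meta, ["dog", "puppy"])  # boosted to 0.6
```

The frame would then be labeled with the adjusted probability `p` instead of the raw classifier output.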
-
Patent number: 9607224
Abstract: A solution is provided for temporally segmenting a video based on analysis of entities identified in the video frames of the video. The video is decoded into multiple video frames and multiple video frames are selected for annotation. The annotation process identifies entities present in a sample video frame, and each identified entity has a timestamp and confidence score indicating the likelihood that the entity is accurately identified. For each identified entity, a time series comprising timestamps and corresponding confidence scores is generated and smoothed to reduce annotation noise. One or more segments containing an entity over the length of the video are obtained by detecting boundaries of the segments in the time series of the entity. From the individual temporal segmentation for each identified entity in the video, an overall temporal segmentation for the video is generated, where the overall temporal segmentation reflects the semantics of the video.
Type: Grant
Filed: May 14, 2015
Date of Patent: March 28, 2017
Assignee: Google Inc.
Inventors: Min-hsuan Tsai, Sudheendra Vijayanarasimhan, Tomas Izo, Sanketh Shetty, Balakrishnan Varadarajan
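The per-entity smoothing and boundary-detection steps can be sketched as follows: smooth the confidence time series with a moving average, then emit a segment wherever the smoothed confidence stays above a threshold. The window size, threshold, and confidence values are illustrative assumptions:

```python
def moving_average(series, k=3):
    # Smooth a per-frame confidence series to reduce annotation noise.
    half = k // 2
    out = []
    for i in range(len(series)):
        window = series[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def segments_for_entity(confidences, threshold=0.5):
    # Detect segment boundaries where the smoothed confidence crosses
    # a threshold; returns (start_frame, end_frame) pairs in which the
    # entity is considered present.
    smoothed = moving_average(confidences)
    segs, start = [], None
    for i, c in enumerate(smoothed):
        if c >= threshold and start is None:
            start = i
        elif c < threshold and start is not None:
            segs.append((start, i - 1))
            start = None
    if start is not None:
        segs.append((start, len(smoothed) - 1))
    return segs

# Hypothetical confidence scores for one entity across 10 frames
conf = [0.1, 0.2, 0.8, 0.9, 0.85, 0.2, 0.1, 0.7, 0.9, 0.8]
segs = segments_for_entity(conf)  # → [(2, 4), (7, 9)]
```

The overall temporal segmentation would then merge such per-entity segment lists across all identified entities.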
-
Publication number: 20170046573
Abstract: A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method identifies an entity from a plurality of entities identifying characteristics of video items. The computer-implemented method selects a set of features correlated with the entity based on a value of a feature of a plurality of features, determines a classifier for the entity using the set of features, and determines an aggregation calibration function for the entity based on the set of features. The computer-implemented method selects a video frame from a video item, where the video frame has associated features, and determines a probability of existence of the entity based on the associated features using the classifier and the aggregation calibration function.
Type: Application
Filed: August 11, 2015
Publication date: February 16, 2017
Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Nitin Khandelwal, Sudheendra Vijayanarasimhan, Weilong Yang, Sanketh Shetty
-
Publication number: 20160335499
Abstract: A solution is provided for temporally segmenting a video based on analysis of entities identified in the video frames of the video. The video is decoded into multiple video frames and multiple video frames are selected for annotation. The annotation process identifies entities present in a sample video frame, and each identified entity has a timestamp and confidence score indicating the likelihood that the entity is accurately identified. For each identified entity, a time series comprising timestamps and corresponding confidence scores is generated and smoothed to reduce annotation noise. One or more segments containing an entity over the length of the video are obtained by detecting boundaries of the segments in the time series of the entity. From the individual temporal segmentation for each identified entity in the video, an overall temporal segmentation for the video is generated, where the overall temporal segmentation reflects the semantics of the video.
Type: Application
Filed: May 14, 2015
Publication date: November 17, 2016
Inventors: Min-hsuan Tsai, Sudheendra Vijayanarasimhan, Tomas Izo, Sanketh Shetty, Balakrishnan Varadarajan
-
Publication number: 20160306804
Abstract: Methods, systems, and media for presenting comments based on correlation with content are provided. In some implementations, a method for presenting ranked comments is provided, the method comprising: receiving, using a hardware processor, content data related to an item of content; receiving, using the hardware processor, comment data related to a comment associated with the item of content; determining, using the hardware processor, a degree of correlation between at least a portion of the comment data and one or more portions of the content data; determining, using the hardware processor, a priority for the comment based on the degree of correlation; and presenting, using the hardware processor, the comment based on the priority.
Type: Application
Filed: June 28, 2016
Publication date: October 20, 2016
Inventors: Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan, Sanketh Shetty, Nisarg Dilipkumar Kothari, Nicholas Delmonico Rizzolo
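The correlation-then-prioritize flow can be sketched with a simple bag-of-words cosine similarity between comment text and content terms; the similarity measure and the example strings are illustrative assumptions, not the publication's actual correlation method:

```python
from collections import Counter
import math

def cosine_similarity(terms_a, terms_b):
    # Cosine similarity between two bag-of-words term-count vectors;
    # stands in for the patented degree-of-correlation computation.
    ca, cb = Counter(terms_a), Counter(terms_b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_comments(content_terms, comments):
    # Order comments by their degree of correlation with the content
    # data, highest priority first.
    return sorted(comments,
                  key=lambda c: cosine_similarity(content_terms, c.split()),
                  reverse=True)

content = "guitar lesson chords beginner".split()
comments = ["great guitar chords", "first!", "which chords for a beginner"]
ranked = rank_comments(content, comments)
```

Comments would then be presented in `ranked` order, so on-topic comments outrank noise like "first!".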
-
Patent number: 9384242
Abstract: Techniques identify time-sensitive content and present the time-sensitive content to communication devices of users interested or potentially interested in the time-sensitive content. A content management component analyzes video or audio content, extracts information from the content, and determines whether the content is time-sensitive content, such as recent news-related content, based on analysis of the content and extracted information. The content management component evaluates user-related information and the extracted information, and determines whether a user(s) is likely to be interested in the time-sensitive content based on the evaluation results. The content management component sends a notification to the communication device(s) of the user(s) in response to determining the user(s) is likely to be interested in the time-sensitive content.
Type: Grant
Filed: March 14, 2013
Date of Patent: July 5, 2016
Assignee: Google Inc.
Inventors: Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan, Sanketh Shetty, Nisarg Dilipkumar Kothari, Nicholas Delmonico Rizzolo
-
Publication number: 20160070962
Abstract: A computer-implemented method for selecting representative frames for videos is provided. The method includes receiving a video and identifying a set of features for each of the frames of the video. The features include frame-based features and semantic features; the semantic features identify likelihoods of semantic concepts being present as content in the frames of the video. A set of video segments for the video is subsequently generated. Each video segment includes a chronological subset of frames from the video, and each frame is associated with at least one of the semantic features. The method generates a score for each frame of the subset of frames for each video segment based at least on the semantic features, and selects a representative frame for each video segment based on the scores of the frames in the video segment. The representative frame represents and summarizes the video segment.
Type: Application
Filed: September 8, 2015
Publication date: March 10, 2016
Inventors: Sanketh Shetty, Tomas Izo, Min-Hsuan Tsai, Sudheendra Vijayanarasimhan, Apostol Natsev, Sami Abu-El-Haija, George Dan Toderici, Susanna Ricco, Balakrishnan Varadarajan, Nicola Muscettola, WeiHsin Gu, Weilong Yang, Nitin Khandelwal, Phuong Le
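The scoring-and-selection step can be sketched as a weighted sum over each frame's semantic-concept likelihoods, picking the highest-scoring frame per segment. The concept names, likelihoods, and weights are hypothetical, standing in for the learned scoring model:

```python
def frame_score(frame_features, concept_weights):
    # Score a frame from its semantic-concept likelihoods; the
    # weights are illustrative, not the publication's learned model.
    return sum(concept_weights.get(c, 0.0) * p
               for c, p in frame_features.items())

def representative_frames(segments, concept_weights):
    # Pick the index of the highest-scoring frame within each segment.
    picks = []
    for seg in segments:
        best = max(range(len(seg)),
                   key=lambda i: frame_score(seg[i], concept_weights))
        picks.append(best)
    return picks

# Hypothetical semantic features (concept -> likelihood) per frame
segment = [
    {"dog": 0.2, "grass": 0.9},
    {"dog": 0.9, "grass": 0.8},
    {"dog": 0.4},
]
weights = {"dog": 1.0, "grass": 0.5}
picks = representative_frames([segment], weights)  # → [1]
```

The selected frame (here the middle one, where the "dog" concept is most confident) would serve as the segment's summary thumbnail.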
-
Patent number: 9230159
Abstract: This disclosure generally relates to systems and methods that facilitate employing exemplar Histogram of Oriented Gradients Linear Discriminant Analysis (HOG-LDA) models along with Localizer Hidden Markov Models (HMM) to train a classification model to classify actions in videos by learning poses and transitions between the poses associated with the actions, in a view of a continuous state represented by bounding boxes corresponding to where the action is located in frames of the video.
Type: Grant
Filed: December 9, 2013
Date of Patent: January 5, 2016
Assignee: Google Inc.
Inventors: Sudheendra Vijayanarasimhan, Balakrishnan Varadarajan, Rahul Sukthankar
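The pose-and-transition idea can be illustrated with a standard Viterbi decode over a tiny two-pose HMM. The per-frame pose scores below are hypothetical stand-ins for HOG-LDA template responses, and this generic Viterbi is only a sketch of how a localizer HMM recovers a pose sequence, not the patented training method:

```python
import math

def viterbi(obs_scores, trans, init):
    # Most likely pose-state sequence given per-frame pose likelihoods
    # (stand-ins for HOG-LDA responses) and pose-transition
    # probabilities, as in a standard HMM decode.
    n_states = len(init)
    prev = [math.log(init[s]) + math.log(obs_scores[0][s])
            for s in range(n_states)]
    back = []
    for t in range(1, len(obs_scores)):
        cur, ptr = [], []
        for s in range(n_states):
            best_p, best_r = max(
                (prev[r] + math.log(trans[r][s]), r)
                for r in range(n_states))
            cur.append(best_p + math.log(obs_scores[t][s]))
            ptr.append(best_r)
        prev, back = cur, back + [ptr]
    # Backtrack from the best final state.
    state = max(range(n_states), key=prev.__getitem__)
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return path[::-1]

# Two hypothetical poses ("wind-up" = 0, "release" = 1) over four frames
obs = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.8], [0.1, 0.9]]
trans = [[0.7, 0.3], [0.1, 0.9]]
poses = viterbi(obs, trans, [0.6, 0.4])  # → [0, 0, 1, 1]
```

The decoded pose sequence (and, in the patent, the accompanying bounding-box state) is what the classifier uses to recognize the action.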