Patents by Inventor Balakrishnan Varadarajan

Balakrishnan Varadarajan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240038097
    Abstract: A system for quantifying the clinical skill of a user, comprising: collecting data relating to a surgical task performed by a user with a surgical device; comparing the data for the surgical task to other data for another, similar surgical task; quantifying the clinical skill of the user based on that comparison; and outputting the clinical skill of the user. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: October 10, 2023
    Publication date: February 1, 2024
    Applicant: The Johns Hopkins University
    Inventors: Carol E. Reiley, Gregory D. Hager, Balakrishnan Varadarajan, Sanjeev Pralhad Khudanpur, Rajesh Kumar, Henry C. Lin
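
The family above (see also publication 20180253994 and patent 10008129 below) describes comparing a user's surgical-task data against data for a similar task and mapping the comparison to a skill score. A minimal sketch of that idea, assuming dynamic time warping over 1-D motion traces and a simple monotone score mapping; neither choice is specified by the abstract:

```python
# Sketch: compare a trainee's motion trace against an expert reference and
# map the distance to a score. DTW is an illustrative stand-in; the patent
# family does not prescribe a specific comparison metric.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic-time-warping distance between two 1-D motion traces."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

def skill_score(user_trace, expert_trace, scale=10.0):
    """Map distance-to-expert into a (0, 1] score; smaller distance = higher skill."""
    return 1.0 / (1.0 + dtw_distance(user_trace, expert_trace) / scale)

user = np.array([0.0, 0.5, 1.2, 1.0, 0.4])    # hypothetical tool-tip positions
expert = np.array([0.0, 0.6, 1.1, 0.9, 0.3])
print(f"skill score: {skill_score(user, expert):.3f}")
```
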
  • Publication number: 20220297728
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for agent trajectory prediction using context-sensitive fusion. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: March 21, 2022
    Publication date: September 22, 2022
    Inventors: Balakrishnan Varadarajan, Ahmed Said Mohammed Hefny, Benjamin Sapp, Khaled Refaat, Dragomir Anguelov
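
A rough sketch of what context-sensitive fusion could look like for trajectory prediction: each agent's feature vector attends over scene-context features, and the fused representation feeds a head that predicts future (x, y) offsets. The attention form, shapes, and the untrained linear head are all assumptions for illustration, not the patented design:

```python
# Each agent attends over scene-context features; the fused vector feeds a
# small linear head predicting a short trajectory of (x, y) offsets.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse(agent_feats, context_feats):
    """Attention-weighted pooling of context, conditioned on each agent."""
    # agent_feats: (A, D); context_feats: (C, D)
    scores = agent_feats @ context_feats.T / np.sqrt(agent_feats.shape[1])  # (A, C)
    weights = softmax(scores, axis=1)
    pooled = weights @ context_feats                                        # (A, D)
    return np.concatenate([agent_feats, pooled], axis=1)                    # (A, 2D)

A, C, D, T = 3, 8, 16, 5                         # agents, context tokens, dim, horizon
agent_feats = rng.normal(size=(A, D))
context_feats = rng.normal(size=(C, D))
W_head = rng.normal(size=(2 * D, T * 2)) * 0.1   # untrained linear head

fused = fuse(agent_feats, context_feats)
trajectories = (fused @ W_head).reshape(A, T, 2) # per-agent (x, y) offsets
print(trajectories.shape)                        # (3, 5, 2)
```
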
  • Publication number: 20220207873
    Abstract: A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method identifies an entity from a plurality of entities identifying characteristics of video items. The computer-implemented method selects a set of features correlated with the entity based on a value of a feature of a plurality of features, determines a classifier for the entity using the set of features, and determines an aggregation calibration function for the entity based on the set of features. The computer-implemented method selects a video frame from a video item, where the video frame has associated features, and determines a probability of existence of the entity based on the associated features using the classifier and the aggregation calibration function. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: December 13, 2021
    Publication date: June 30, 2022
    Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Nitin Khandelwal, Sudheendra Vijayanarasimhan, Weilong Yang, Sanketh Shetty
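
A condensed sketch of the pipeline this abstract (and its granted counterparts below) describes: a per-entity classifier scores a frame from its selected feature subset, and a calibration function maps the raw score to a probability. The linear classifier, the logistic calibration, and the "dog" feature names are illustrative assumptions:

```python
# Per-frame classifier score over an entity's feature subset, then a
# monotone calibration from raw score to probability. Both forms are
# illustrative; the abstract does not fix them.
import math

def classifier_score(frame_features, weights):
    """Linear classifier over the entity's selected feature subset."""
    return sum(w * frame_features.get(name, 0.0) for name, w in weights.items())

def calibrate(raw_score, a=1.5, b=-0.5):
    """Calibration from raw score to probability (illustrative logistic fit)."""
    return 1.0 / (1.0 + math.exp(-(a * raw_score + b)))

# Hypothetical feature subset correlated with the entity "dog".
dog_weights = {"fur_texture": 0.8, "bark_audio": 1.2, "snout_shape": 0.6}
frame = {"fur_texture": 0.9, "bark_audio": 0.4, "snout_shape": 0.7}

p = calibrate(classifier_score(frame, dog_weights))
print(f"P(dog in frame) = {p:.2f}")
```
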
  • Patent number: 11295171
    Abstract: A MapReduce-based training framework exploits both data parallelism and model parallelism to scale training of complex models. Particular model architectures facilitate and benefit from use of such training framework. As one example, a machine-learned model can include a shared feature extraction portion configured to receive and process a data input to produce an intermediate feature representation and a plurality of prediction heads that are configured to receive and process the intermediate feature representation to respectively produce a plurality of predictions. For example, the data input can be a video and the plurality of predictions can be a plurality of classifications for content of the video (e.g., relative to a plurality of classes). (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: October 18, 2019
    Date of Patent: April 5, 2022
    Assignee: Google LLC
    Inventors: Joonseok Lee, Balakrishnan Varadarajan, Ariel Gordon, Apostol Ivanov Natsev, Seong Jae Hwang
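
A minimal sketch of the shared-trunk/multi-head architecture in the abstract: one feature extractor produces an intermediate representation that several independent prediction heads consume, which is what allows the heads to be trained in parallel on separate workers. Layer shapes and the ReLU trunk are assumptions:

```python
# Shared feature extractor feeding several independent prediction heads.
# Because heads share only the trunk output, each head's parameters can in
# principle be updated on a separate worker (model parallelism).
import numpy as np

rng = np.random.default_rng(1)

D_IN, D_FEAT, N_CLASSES, N_HEADS = 64, 32, 10, 4
W_trunk = rng.normal(size=(D_IN, D_FEAT)) * 0.1
W_heads = [rng.normal(size=(D_FEAT, N_CLASSES)) * 0.1 for _ in range(N_HEADS)]

def forward(x):
    feat = np.maximum(x @ W_trunk, 0.0)       # shared intermediate representation
    return [feat @ W for W in W_heads]        # one prediction per head

video_embedding = rng.normal(size=(D_IN,))    # stand-in for pooled video features
predictions = forward(video_embedding)
print(len(predictions), predictions[0].shape) # 4 heads, 10 class logits each
```
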
  • Patent number: 11200423
    Abstract: A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method identifies an entity from a plurality of entities identifying characteristics of video items. The computer-implemented method selects a set of features correlated with the entity based on a value of a feature of a plurality of features, determines a classifier for the entity using the set of features, and determines an aggregation calibration function for the entity based on the set of features. The computer-implemented method selects a video frame from a video item, where the video frame has associated features, and determines a probability of existence of the entity based on the associated features using the classifier and the aggregation calibration function.
    Type: Grant
    Filed: November 18, 2019
    Date of Patent: December 14, 2021
    Assignee: Google LLC
    Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Nitin Khandelwal, Sudheendra Vijayanarasimhan, Weilong Yang, Sanketh Shetty
  • Patent number: 11042553
    Abstract: Facilitation of content entity annotation while maintaining joint quality, coverage, and/or completeness performance conditions is provided. In one example, a non-transitory computer-readable medium comprises computer-readable instructions that, in response to execution, cause a computing system to perform operations. The operations include aggregating information indicative of initial entities for content and initial scores associated with the initial entities received from one or more content annotation sources and mapping the initial scores to respective values to generate calibrated scores. The operations include applying weights to the calibrated scores to generate weighted scores and combining the weighted scores using a linear aggregation model to generate a final score. The operations include determining whether to annotate the content with at least one of the initial entities based on a comparison of the final score and a defined threshold value. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: November 21, 2017
    Date of Patent: June 22, 2021
    Assignee: Google LLC
    Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Weilong Yang, John Burge, Sanketh Shetty, Omid Madani
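
The aggregation steps in this abstract map directly to a few lines of code: calibrate each source's initial score, weight the calibrated scores, combine them linearly, and annotate only if the final score clears the threshold. The calibration functions, weights, and threshold below are illustrative values:

```python
# Calibrate per-source scores, apply weights, combine linearly, and compare
# the final score against a defined threshold, as the abstract describes.
def aggregate(initial_scores, calibrations, weights):
    calibrated = {s: calibrations[s](v) for s, v in initial_scores.items()}
    return sum(weights[s] * calibrated[s] for s in calibrated)  # linear model

scores = {"ocr": 0.9, "audio": 0.4, "visual": 0.7}  # hypothetical annotation sources
calibrations = {
    "ocr": lambda v: v ** 2,               # this source over-reports; squash low scores
    "audio": lambda v: v,
    "visual": lambda v: min(1.0, 1.2 * v),
}
weights = {"ocr": 0.3, "audio": 0.2, "visual": 0.5}
THRESHOLD = 0.5

final = aggregate(scores, calibrations, weights)
print(f"final={final:.2f}", "annotate" if final >= THRESHOLD else "skip")
```
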
  • Publication number: 20210166035
    Abstract: A computer-implemented method for selecting representative frames for videos is provided. The method includes receiving a video and identifying a set of features for each of the frames of the video. The features include frame-based features and semantic features; the semantic features identify likelihoods of semantic concepts being present as content in the frames of the video. A set of video segments for the video is subsequently generated. Each video segment includes a chronological subset of frames from the video, and each frame is associated with at least one of the semantic features. The method generates a score for each frame of the subset of frames for each video segment based at least on the semantic features, and selects a representative frame for each video segment based on the scores of the frames in the video segment. The representative frame represents and summarizes the video segment. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: December 14, 2020
    Publication date: June 3, 2021
    Inventors: Sanketh Shetty, Tomas Izo, Min-Hsuan Tsai, Sudheendra Vijayanarasimhan, Apostol Natsev, Sami Abu-El-Haija, George Dan Toderici, Susanna Ricco, Balakrishnan Varadarajan, Nicola Muscettola, WeiHsin Gu, Weilong Yang, Nitin Khandelwal, Phuong Le
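
A sketch of the representative-frame selection this abstract (and granted patents 10867183 and 9953222 below) describes: score each frame of a segment from its semantic-concept likelihoods and keep the highest-scoring frame. Scoring a frame by its single strongest concept is an assumed simplification; the patent scores on the semantic features more generally:

```python
# Score each frame from its semantic-concept likelihoods and select the
# highest-scoring frame of the segment as its representative.
def frame_score(semantic_likelihoods):
    """Score a frame by how strongly any semantic concept is present."""
    return max(semantic_likelihoods.values(), default=0.0)

def representative_frame(segment):
    """segment: list of (frame_id, {concept: likelihood}) in chronological order."""
    return max(segment, key=lambda f: frame_score(f[1]))[0]

segment = [
    (0, {"beach": 0.2, "sunset": 0.1}),
    (1, {"beach": 0.7, "sunset": 0.5}),
    (2, {"beach": 0.6, "sunset": 0.9}),  # strongest concept overall
]
print(representative_frame(segment))     # -> 2
```
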
  • Publication number: 20210117728
    Abstract: A MapReduce-based training framework exploits both data parallelism and model parallelism to scale training of complex models. Particular model architectures facilitate and benefit from use of such training framework. As one example, a machine-learned model can include a shared feature extraction portion configured to receive and process a data input to produce an intermediate feature representation and a plurality of prediction heads that are configured to receive and process the intermediate feature representation to respectively produce a plurality of predictions. For example, the data input can be a video and the plurality of predictions can be a plurality of classifications for content of the video (e.g., relative to a plurality of classes).
    Type: Application
    Filed: October 18, 2019
    Publication date: April 22, 2021
    Inventors: Joonseok Lee, Balakrishnan Varadarajan, Ariel Gordon, Apostol Ivanov Natsev, Seong Jae Hwang
  • Patent number: 10867183
    Abstract: A computer-implemented method for selecting representative frames for videos is provided. The method includes receiving a video and identifying a set of features for each of the frames of the video. The features include frame-based features and semantic features; the semantic features identify likelihoods of semantic concepts being present as content in the frames of the video. A set of video segments for the video is subsequently generated. Each video segment includes a chronological subset of frames from the video, and each frame is associated with at least one of the semantic features. The method generates a score for each frame of the subset of frames for each video segment based at least on the semantic features, and selects a representative frame for each video segment based on the scores of the frames in the video segment. The representative frame represents and summarizes the video segment.
    Type: Grant
    Filed: April 23, 2018
    Date of Patent: December 15, 2020
    Assignee: Google LLC
    Inventors: Sanketh Shetty, Tomas Izo, Min-Hsuan Tsai, Sudheendra Vijayanarasimhan, Apostol Natsev, Sami Abu-El-Haija, George Dan Toderici, Susanna Ricco, Balakrishnan Varadarajan, Nicola Muscettola, WeiHsin Gu, Weilong Yang, Nitin Khandelwal, Phuong Le
  • Publication number: 20200082173
    Abstract: A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method identifies an entity from a plurality of entities identifying characteristics of video items. The computer-implemented method selects a set of features correlated with the entity based on a value of a feature of a plurality of features, determines a classifier for the entity using the set of features, and determines an aggregation calibration function for the entity based on the set of features. The computer-implemented method selects a video frame from a video item, where the video frame has associated features, and determines a probability of existence of the entity based on the associated features using the classifier and the aggregation calibration function.
    Type: Application
    Filed: November 18, 2019
    Publication date: March 12, 2020
    Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Nitin Khandelwal, Sudheendra Vijayanarasimhan, Weilong Yang, Sanketh Shetty
  • Patent number: 10482328
    Abstract: A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method identifies an entity from a plurality of entities identifying characteristics of video items. The computer-implemented method selects a set of features correlated with the entity based on a value of a feature of a plurality of features, determines a classifier for the entity using the set of features, and determines an aggregation calibration function for the entity based on the set of features. The computer-implemented method selects a video frame from a video item, where the video frame has associated features, and determines a probability of existence of the entity based on the associated features using the classifier and the aggregation calibration function.
    Type: Grant
    Filed: October 2, 2017
    Date of Patent: November 19, 2019
    Assignee: Google LLC
    Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Nitin Khandelwal, Sudheendra Vijayanarasimhan, Weilong Yang, Sanketh Shetty
  • Patent number: 10390067
    Abstract: Implementations disclose predicting video start times for maximizing user engagement. A method includes receiving a first content item comprising content item segments; processing the first content item using a trained machine learning model that is trained based on interaction signals and audio-visual content features of a training set of training segments of training content items; and obtaining, based on the processing of the first content item using the trained machine learning model, one or more outputs comprising salience scores for the content item segments, the salience scores indicating which content item segment is to be selected as a starting point for playback of the first content item. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: May 12, 2017
    Date of Patent: August 20, 2019
    Assignee: Google LLC
    Inventors: Sanketh Shetty, Apostol Natsev, Balakrishnan Varadarajan, Tomas Izo
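
A sketch of the selection step the abstract describes: the trained model emits a salience score per content item segment, and playback starts at the highest-scoring segment. The fixed segment length and hand-written scores below stand in for model outputs:

```python
# Pick the playback starting point as the segment with the highest
# salience score, per the abstract. Scores here are hypothetical.
def pick_start_segment(salience_scores):
    """Return (segment_index, start_second) of the most salient segment."""
    idx = max(range(len(salience_scores)), key=salience_scores.__getitem__)
    return idx, idx * SEGMENT_SECONDS

SEGMENT_SECONDS = 5                     # assumed fixed segment length
salience = [0.1, 0.3, 0.8, 0.6, 0.2]    # hypothetical per-segment model outputs
seg, start = pick_start_segment(salience)
print(f"start playback at segment {seg} (t={start}s)")
```
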
  • Patent number: 10235428
    Abstract: Techniques identify time-sensitive content and present it to the communication devices of users interested, or potentially interested, in that content. A content management component analyzes video or audio content, extracts information from the content, and determines whether the content is time-sensitive, such as recent news-related content, based on analysis of the content and the extracted information. The content management component evaluates user-related information together with the extracted information and determines whether a user is likely to be interested in the time-sensitive content based on the evaluation results. The content management component sends a notification to the communication device of each user determined to be likely interested in the time-sensitive content. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: June 28, 2016
    Date of Patent: March 19, 2019
    Assignee: Google LLC
    Inventors: Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan, Sanketh Shetty, Nisarg Dilipkumar Kothari, Nicholas Delmonico Rizzolo
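
A sketch of the notification flow, assuming a simple recency window as the time-sensitivity test and keyword overlap as the interest match; the abstract leaves both determinations open:

```python
# Decide whether extracted content is time-sensitive, match it against user
# interests, and notify on a match. Window and matching rule are assumptions.
from datetime import datetime, timedelta, timezone

RECENCY_WINDOW = timedelta(hours=6)  # assumed cutoff for "time-sensitive"

def is_time_sensitive(published_at, topics):
    fresh = datetime.now(timezone.utc) - published_at < RECENCY_WINDOW
    return fresh and "news" in topics

def users_to_notify(topics, users):
    return [u["id"] for u in users if set(u["interests"]) & set(topics)]

item_topics = ["news", "earthquake"]
published = datetime.now(timezone.utc) - timedelta(minutes=30)
users = [{"id": "u1", "interests": ["earthquake", "sports"]},
         {"id": "u2", "interests": ["cooking"]}]

if is_time_sensitive(published, item_topics):
    print("notify:", users_to_notify(item_topics, users))  # -> ['u1']
```
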
  • Publication number: 20180253994
    Abstract: A system for quantifying the clinical skill of a user, comprising: collecting data relating to a surgical task performed by a user with a surgical device; comparing the data for the surgical task to other data for another, similar surgical task; quantifying the clinical skill of the user based on that comparison; and outputting the clinical skill of the user.
    Type: Application
    Filed: May 4, 2018
    Publication date: September 6, 2018
    Applicant: The Johns Hopkins University
    Inventors: Carol E. Reiley, Gregory D. Hager, Balakrishnan Varadarajan, Sanjeev Pralhad Khudanpur, Rajesh Kumar, Henry C. Lin
  • Publication number: 20180239964
    Abstract: A computer-implemented method for selecting representative frames for videos is provided. The method includes receiving a video and identifying a set of features for each of the frames of the video. The features include frame-based features and semantic features; the semantic features identify likelihoods of semantic concepts being present as content in the frames of the video. A set of video segments for the video is subsequently generated. Each video segment includes a chronological subset of frames from the video, and each frame is associated with at least one of the semantic features. The method generates a score for each frame of the subset of frames for each video segment based at least on the semantic features, and selects a representative frame for each video segment based on the scores of the frames in the video segment. The representative frame represents and summarizes the video segment.
    Type: Application
    Filed: April 23, 2018
    Publication date: August 23, 2018
    Inventors: Sanketh Shetty, Tomas Izo, Min-Hsuan Tsai, Sudheendra Vijayanarasimhan, Apostol Natsev, Sami Abu-El-Haija, George Dan Toderici, Susanna Ricco, Balakrishnan Varadarajan, Nicola Muscettola, WeiHsin Gu, Weilong Yang, Nitin Khandelwal, Phuong Le
  • Patent number: 10008129
    Abstract: A system for quantifying the clinical skill of a user, comprising: collecting data relating to a surgical task performed by a user with a surgical device; comparing the data for the surgical task to other data for another, similar surgical task; quantifying the clinical skill of the user based on that comparison; and outputting the clinical skill of the user.
    Type: Grant
    Filed: April 19, 2017
    Date of Patent: June 26, 2018
    Assignee: The Johns Hopkins University
    Inventors: Carol E. Reiley, Gregory D. Hager, Balakrishnan Varadarajan, Sanjeev Pralhad Khudanpur, Rajesh Kumar, Henry C. Lin
  • Patent number: 9953222
    Abstract: A computer-implemented method for selecting representative frames for videos is provided. The method includes receiving a video and identifying a set of features for each of the frames of the video. The features include frame-based features and semantic features; the semantic features identify likelihoods of semantic concepts being present as content in the frames of the video. A set of video segments for the video is subsequently generated. Each video segment includes a chronological subset of frames from the video, and each frame is associated with at least one of the semantic features. The method generates a score for each frame of the subset of frames for each video segment based at least on the semantic features, and selects a representative frame for each video segment based on the scores of the frames in the video segment. The representative frame represents and summarizes the video segment.
    Type: Grant
    Filed: September 8, 2015
    Date of Patent: April 24, 2018
    Assignee: Google LLC
    Inventors: Sanketh Shetty, Tomas Izo, Min-Hsuan Tsai, Sudheendra Vijayanarasimhan, Apostol Natsev, Sami Abu-El-Haija, George Dan Toderici, Susanna Ricco, Balakrishnan Varadarajan, Nicola Muscettola, WeiHsin Gu, Weilong Yang, Nitin Khandelwal, Phuong Le
  • Publication number: 20180089200
    Abstract: Facilitation of content entity annotation while maintaining joint quality, coverage, and/or completeness performance conditions is provided. In one example, a non-transitory computer-readable medium comprises computer-readable instructions that, in response to execution, cause a computing system to perform operations. The operations include aggregating information indicative of initial entities for content and initial scores associated with the initial entities received from one or more content annotation sources and mapping the initial scores to respective values to generate calibrated scores. The operations include applying weights to the calibrated scores to generate weighted scores and combining the weighted scores using a linear aggregation model to generate a final score. The operations include determining whether to annotate the content with at least one of the initial entities based on a comparison of the final score and a defined threshold value.
    Type: Application
    Filed: November 21, 2017
    Publication date: March 29, 2018
    Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Weilong Yang, John Burge, Sanketh Shetty, Omid Madani
  • Publication number: 20180025228
    Abstract: A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method identifies an entity from a plurality of entities identifying characteristics of video items. The computer-implemented method selects a set of features correlated with the entity based on a value of a feature of a plurality of features, determines a classifier for the entity using the set of features, and determines an aggregation calibration function for the entity based on the set of features. The computer-implemented method selects a video frame from a video item, where the video frame has associated features, and determines a probability of existence of the entity based on the associated features using the classifier and the aggregation calibration function.
    Type: Application
    Filed: October 2, 2017
    Publication date: January 25, 2018
    Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Nitin Khandelwal, Sudheendra Vijayanarasimhan, Weilong Yang, Sanketh Shetty
  • Patent number: 9830361
    Abstract: Facilitation of content entity annotation while maintaining joint quality, coverage and/or completeness performance conditions is provided. In one example, a system includes an aggregation component that aggregates signals indicative of initial entities for content and initial scores associated with the initial entities generated by one or more content annotation sources; and a mapping component that maps the initial scores to calibrated scores within a defined range. The system also includes a linear aggregation component that: applies selected weights to the calibrated scores, wherein the selected weights are based on joint performance conditions; and combines the weighted, calibrated scores based on a selected linear aggregation model of a plurality of linear aggregation models to generate a final score. The system also includes an annotation component that determines whether to annotate the content with one of the initial entities based on a comparison of the final score and a defined threshold value.
    Type: Grant
    Filed: December 4, 2013
    Date of Patent: November 28, 2017
    Assignee: Google Inc.
    Inventors: Balakrishnan Varadarajan, George Dan Toderici, Apostol Natsev, Weilong Yang, John Burge, Sanketh Shetty, Omid Madani