Patents by Inventor Harpreet S. Sawhney

Harpreet S. Sawhney has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11397462
    Abstract: A computing system includes a vision-based user interface platform to, among other things, analyze multi-modal user interactions, semantically correlate stored knowledge with visual features of a scene depicted in a video, determine relationships between different features of the scene, and selectively display virtual elements on the video depiction of the scene. The analysis of user interactions can be used to filter the information retrieval and the correlation of the visual features with the stored knowledge.
    Type: Grant
    Filed: October 8, 2015
    Date of Patent: July 26, 2022
    Assignee: SRI International
    Inventors: Jayakrishnan Eledath, Supun Samarasekera, Harpreet S. Sawhney, Rakesh Kumar, Mayank Bansal, Girish Acharya, Michael John Wolverton, Aaron Spaulding, Ron Krakower
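
To make the correlation-and-filtering idea in the abstract above concrete, here is a minimal Python sketch. It is not the patented implementation: the knowledge-base layout, feature labels, and topic-based filter are all invented for illustration.

```python
# Hypothetical sketch: correlate detected scene features with a stored
# knowledge base and select virtual overlay elements, filtered by what
# the user's interaction indicates they care about. Names are invented.

# Stored knowledge: maps a visual feature label to overlay content.
KNOWLEDGE_BASE = {
    "valve_a": {"overlay": "Shutoff valve: turn clockwise", "topic": "plumbing"},
    "panel_3": {"overlay": "Breaker panel: circuits 1-12", "topic": "electrical"},
    "gauge_7": {"overlay": "Pressure gauge: nominal 40-60 psi", "topic": "plumbing"},
}

def correlate(scene_features, user_topic):
    """Match scene features to stored knowledge, keeping only entries
    relevant to the user's current interaction."""
    overlays = []
    for feat in scene_features:
        entry = KNOWLEDGE_BASE.get(feat["label"])
        if entry and entry["topic"] == user_topic:  # interaction-based filter
            overlays.append({"position": feat["bbox"], "text": entry["overlay"]})
    return overlays

# Example: two features detected in the current video frame; the user's
# spoken query was classified (elsewhere) as being about "plumbing".
frame_features = [
    {"label": "valve_a", "bbox": (120, 80, 40, 40)},
    {"label": "panel_3", "bbox": (300, 60, 90, 120)},
]
print(correlate(frame_features, user_topic="plumbing"))
```
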
  • Patent number: 11030458
    Abstract: The disclosure herein describes training a machine learning model to recognize a real-world object based on generated virtual scene variations associated with a model of the real-world object. A digitized three-dimensional (3D) model representing the real-world object is obtained and a virtual scene is built around the 3D model. A plurality of virtual scene variations is generated by varying one or more characteristics. Each virtual scene variation is generated to include a label identifying the 3D model in the virtual scene variation. A machine learning model may be trained based on the plurality of virtual scene variations. The use of generated digital assets to train the machine learning model greatly decreases the time and cost requirements of creating training assets and provides training quality benefits based on the quantity and quality of variations that may be generated, as well as the completeness of information included in each generated digital asset.
    Type: Grant
    Filed: September 14, 2018
    Date of Patent: June 8, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Muhammad Zeeshan Zia, Emanuel Shalev, Jonathan C. Hanzelka, Harpreet S. Sawhney, Pedro U. Escos, Michael J. Ebstyne
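
The generation loop described in the abstract above can be sketched in a few lines of Python. This is a hedged illustration: the varied characteristics (lighting, camera angle, background) and all field names are assumptions rather than the patent's actual parameter set, and a real pipeline would render each scene description into a labeled image.

```python
# Hypothetical sketch: generate labeled virtual-scene variations around a
# digitized 3D model by varying scene characteristics.
import random

def generate_variations(model_id, n):
    """Yield n scene descriptions, each labeled with the 3D model it contains."""
    backgrounds = ["warehouse", "office", "outdoor"]
    for _ in range(n):
        yield {
            "model": model_id,             # the digitized 3D asset
            "light_intensity": random.uniform(0.2, 1.0),
            "camera_azimuth_deg": random.uniform(0, 360),
            "background": random.choice(backgrounds),
            "label": model_id,             # ground truth comes free with generation
        }

# Each variation would be rendered to an image and paired with its label
# to form a training example; here we just print the scene parameters.
for scene in generate_variations("forklift_v2", 3):
    print(scene)
```
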
  • Publication number: 20200089954
    Abstract: The disclosure herein describes training a machine learning model to recognize a real-world object based on generated virtual scene variations associated with a model of the real-world object. A digitized three-dimensional (3D) model representing the real-world object is obtained and a virtual scene is built around the 3D model. A plurality of virtual scene variations is generated by varying one or more characteristics. Each virtual scene variation is generated to include a label identifying the 3D model in the virtual scene variation. A machine learning model may be trained based on the plurality of virtual scene variations. The use of generated digital assets to train the machine learning model greatly decreases the time and cost requirements of creating training assets and provides training quality benefits based on the quantity and quality of variations that may be generated, as well as the completeness of information included in each generated digital asset.
    Type: Application
    Filed: September 14, 2018
    Publication date: March 19, 2020
    Inventors: Muhammad Zeeshan Zia, Emanuel Shalev, Jonathan C. Hanzelka, Harpreet S. Sawhney, Pedro U. Escos, Michael J. Ebstyne
  • Patent number: 9916520
    Abstract: A food recognition assistant system includes technologies to recognize foods and combinations of foods depicted in a digital picture of food. Some embodiments include technologies to estimate portion size and calories, and to estimate nutritional value of the foods. In some embodiments, data identifying recognized foods and related information are generated in an automated fashion without relying on human assistance to identify the foods. In some embodiments, the system includes technologies for achieving automatic food detection and recognition in a real-life setting with a cluttered background, without the images being taken in a controlled lab setting, and without requiring additional user input (such as user-defined bounding boxes). Some embodiments of the system include technologies for personalizing the food classification based on user-specific habits, location and/or other criteria.
    Type: Grant
    Filed: December 11, 2014
    Date of Patent: March 13, 2018
    Assignee: SRI International
    Inventors: Ajay Divakaran, Weiyu Zhang, Qian Yu, Harpreet S. Sawhney
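
The personalization idea in the abstract above (biasing food classification by user-specific habits) can be illustrated with a small Python sketch. The stub classifier, the class names, and the boosting formula are invented for illustration; the patent does not disclose this particular scheme.

```python
# Hypothetical sketch: rank food-class candidates for an image region,
# then re-weight the scores using a user's logged eating history.

# Stand-in for a trained classifier's output: class -> probability.
def classify_region(region_pixels):
    return {"pad_thai": 0.41, "lo_mein": 0.38, "spaghetti": 0.21}

def personalize(scores, user_history):
    """Boost classes the user eats often; renormalize to a distribution."""
    total_meals = sum(user_history.values()) or 1
    boosted = {c: p * (1.0 + user_history.get(c, 0) / total_meals)
               for c, p in scores.items()}
    norm = sum(boosted.values())
    return {c: v / norm for c, v in boosted.items()}

history = {"pad_thai": 12, "spaghetti": 2}  # meals logged by this user
scores = personalize(classify_region(region_pixels=None), history)
print(max(scores, key=scores.get), scores)
```
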
  • Patent number: 9911340
    Abstract: A multi-sensor, multi-modal data collection, analysis, recognition, and visualization platform can be embodied in a navigation-capable vehicle. The platform provides an automated tool that can integrate multi-modal sensor data, including two-dimensional image data, three-dimensional image data, and motion, location, or orientation data, and create a visual representation of the integrated sensor data in a live operational environment. An illustrative platform architecture incorporates modular domain-specific business analytics “plug-ins” to provide real-time annotation of the visual representation with domain-specific markups.
    Type: Grant
    Filed: November 7, 2016
    Date of Patent: March 6, 2018
    Assignee: SRI International
    Inventors: Supun Samarasekera, Raia Hadsell, Rakesh Kumar, Harpreet S. Sawhney, Bogdan C. Matei, Ryan Villamil
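
The modular “plug-ins” mentioned in the abstract above suggest a simple registry pattern, sketched below in Python. The frame fields, plug-in logic, and markup format are hypothetical; only the idea of domain-specific analytics annotating integrated sensor data comes from the abstract.

```python
# Hypothetical sketch: domain-specific analytics register a callback that
# annotates each integrated (fused) sensor frame with markups.

PLUGINS = []

def plugin(fn):
    """Decorator: register a domain-specific analytics plug-in."""
    PLUGINS.append(fn)
    return fn

@plugin
def road_damage(frame):
    if frame.get("lidar_roughness", 0) > 0.7:
        return {"markup": "possible pothole", "at": frame["gps"]}

@plugin
def signage(frame):
    if "stop_sign" in frame.get("detections", []):
        return {"markup": "stop sign", "at": frame["gps"]}

def annotate(frame):
    """Run every registered plug-in over an integrated sensor frame."""
    return [m for p in PLUGINS if (m := p(frame)) is not None]

fused = {"gps": (37.77, -122.42), "lidar_roughness": 0.9,
         "detections": ["stop_sign"]}
print(annotate(fused))
```
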
  • Patent number: 9734426
    Abstract: A food recognition assistant system includes technologies to recognize foods and combinations of foods depicted in a digital picture of food. Some embodiments include technologies to estimate portion size and calories, and to estimate nutritional value of the foods. In some embodiments, data identifying recognized foods and related information are generated in an automated fashion without relying on human assistance to identify the foods. In some embodiments, the system includes technologies for achieving automatic food detection and recognition in a real-life setting with a cluttered background, without the images being taken in a controlled lab setting, and without requiring additional user input (such as user-defined bounding boxes). Some embodiments of the system include technologies for personalizing the food classification based on user-specific habits, location and/or other criteria.
    Type: Grant
    Filed: December 11, 2014
    Date of Patent: August 15, 2017
    Assignee: SRI International
    Inventors: Ajay Divakaran, Weiyu Zhang, Qian Yu, Harpreet S. Sawhney
  • Patent number: 9734730
    Abstract: A multi-modal interaction modeling system can model a number of different aspects of a human interaction across one or more temporal interaction sequences. Some versions of the system can generate assessments of the nature or quality of the interaction or portions thereof, which can be used to, among other things, provide assistance to one or more of the participants in the interaction.
    Type: Grant
    Filed: January 31, 2013
    Date of Patent: August 15, 2017
    Assignee: SRI International
    Inventors: Ajay Divakaran, Behjat Siddiquie, Saad Khan, Jeffrey Lubin, Harpreet S. Sawhney
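
One way to picture an assessment computed over a temporal interaction sequence, as the abstract above describes, is the toy scoring function below. The cues and weights are invented; the patent describes modeling interactions across temporal sequences, not this specific formula.

```python
# Hypothetical sketch: score the quality of a human interaction from a
# temporal sequence of multi-modal observations.

def assess(sequence):
    """Average simple per-timestep cues into one engagement score in [0, 1]."""
    if not sequence:
        return 0.0
    score = 0.0
    for step in sequence:
        cue = 0.0
        cue += 0.5 if step["mutual_gaze"] else 0.0   # visual modality
        cue += 0.3 if step["speaking"] else 0.0      # audio modality
        cue += 0.2 if step["nodding"] else 0.0       # gesture modality
        score += cue
    return score / len(sequence)

interaction = [
    {"mutual_gaze": True,  "speaking": True,  "nodding": False},
    {"mutual_gaze": True,  "speaking": False, "nodding": True},
    {"mutual_gaze": False, "speaking": False, "nodding": False},
]
print(f"engagement: {assess(interaction):.2f}")  # a low score could trigger assistance
```
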
  • Publication number: 20170053538
    Abstract: A multi-sensor, multi-modal data collection, analysis, recognition, and visualization platform can be embodied in a navigation-capable vehicle. The platform provides an automated tool that can integrate multi-modal sensor data, including two-dimensional image data, three-dimensional image data, and motion, location, or orientation data, and create a visual representation of the integrated sensor data in a live operational environment. An illustrative platform architecture incorporates modular domain-specific business analytics “plug-ins” to provide real-time annotation of the visual representation with domain-specific markups.
    Type: Application
    Filed: November 7, 2016
    Publication date: February 23, 2017
    Inventors: Supun Samarasekera, Raia Hadsell, Rakesh Kumar, Harpreet S. Sawhney, Bogdan C. Matei, Ryan Villamil
  • Publication number: 20160378861
    Abstract: A computing system includes a vision-based user interface platform to, among other things, analyze multi-modal user interactions, semantically correlate stored knowledge with visual features of a scene depicted in a video, determine relationships between different features of the scene, and selectively display virtual elements on the video depiction of the scene. The analysis of user interactions can be used to filter the information retrieval and the correlation of the visual features with the stored knowledge.
    Type: Application
    Filed: October 8, 2015
    Publication date: December 29, 2016
    Inventors: Jayakrishnan Eledath, Supun Samarasekera, Harpreet S. Sawhney, Rakesh Kumar, Mayank Bansal, Girish Acharya, Michael John Wolverton, Aaron Spaulding, Ron Krakower
  • Patent number: 9488492
    Abstract: A multi-sensor, multi-modal data collection, analysis, recognition, and visualization platform can be embodied in a navigation-capable vehicle. The platform provides an automated tool that can integrate multi-modal sensor data, including two-dimensional image data, three-dimensional image data, and motion, location, or orientation data, and create a visual representation of the integrated sensor data in a live operational environment. An illustrative platform architecture incorporates modular domain-specific business analytics “plug-ins” to provide real-time annotation of the visual representation with domain-specific markups.
    Type: Grant
    Filed: December 18, 2014
    Date of Patent: November 8, 2016
    Assignee: SRI International
    Inventors: Supun Samarasekera, Raia Hadsell, Rakesh Kumar, Harpreet S. Sawhney, Bogdan C. Matei, Ryan Villamil
  • Patent number: 9476730
    Abstract: A multi-sensor, multi-modal data collection, analysis, recognition, and visualization platform can be embodied in a navigation-capable vehicle. The platform provides an automated tool that can integrate multi-modal sensor data, including two-dimensional image data, three-dimensional image data, and motion, location, or orientation data, and create a visual representation of the integrated sensor data in a live operational environment. An illustrative platform architecture incorporates modular domain-specific business analytics “plug-ins” to provide real-time annotation of the visual representation with domain-specific markups.
    Type: Grant
    Filed: December 18, 2014
    Date of Patent: October 25, 2016
    Assignee: SRI International
    Inventors: Supun Samarasekera, Raia Hadsell, Rakesh Kumar, Harpreet S. Sawhney, Bogdan C. Matei, Ryan Villamil
  • Publication number: 20160071024
    Abstract: Technologies for analyzing temporal components of multimodal data to detect short-term multimodal events, determine relationships between short-term multimodal events, and recognize long-term multimodal events, using a deep learning architecture, are disclosed.
    Type: Application
    Filed: February 25, 2015
    Publication date: March 10, 2016
    Inventors: Mohamed R. Amer, Behjat Siddiquie, Ajay Divakaran, Colleen Richey, Saad Khan, Harpreet S. Sawhney
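
The two-level structure in the abstract above (short-term events detected first, then a long-term event recognized from their sequence) can be sketched with simple rules standing in for the deep learning architecture. Every threshold and event name below is an assumption made for illustration.

```python
# Hypothetical sketch: detect short-term events in sliding windows over
# synchronized modality streams, then recognize a long-term event from
# the resulting event sequence.

def short_term_events(audio, motion, window=3):
    """Label each window of the synchronized streams with a short-term event."""
    events = []
    for i in range(0, len(audio) - window + 1, window):
        a = sum(audio[i:i + window]) / window
        m = sum(motion[i:i + window]) / window
        if a > 0.6 and m > 0.6:
            events.append("cheering")
        elif a > 0.6:
            events.append("speech")
        elif m > 0.6:
            events.append("movement")
        else:
            events.append("quiet")
    return events

def long_term_event(events):
    """Recognize a long-term event from relations between short-term ones."""
    if events.count("cheering") >= 2:
        return "celebration"
    if "speech" in events and "movement" in events:
        return "presentation"
    return "unknown"

audio  = [0.1, 0.2, 0.1, 0.8, 0.9, 0.7, 0.2, 0.1, 0.2]  # activity levels per tick
motion = [0.1, 0.1, 0.2, 0.2, 0.1, 0.3, 0.8, 0.9, 0.7]
evts = short_term_events(audio, motion)
print(evts, "->", long_term_event(evts))
```
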
  • Publication number: 20160063692
    Abstract: A food recognition assistant system includes technologies to recognize foods and combinations of foods depicted in a digital picture of food. Some embodiments include technologies to estimate portion size and calories, and to estimate nutritional value of the foods. In some embodiments, data identifying recognized foods and related information are generated in an automated fashion without relying on human assistance to identify the foods. In some embodiments, the system includes technologies for achieving automatic food detection and recognition in a real-life setting with a cluttered background, without the images being taken in a controlled lab setting, and without requiring additional user input (such as user-defined bounding boxes). Some embodiments of the system include technologies for personalizing the food classification based on user-specific habits, location and/or other criteria.
    Type: Application
    Filed: December 11, 2014
    Publication date: March 3, 2016
    Inventors: Ajay Divakaran, Weiyu Zhang, Qian Yu, Harpreet S. Sawhney
  • Publication number: 20160063734
    Abstract: A food recognition assistant system includes technologies to recognize foods and combinations of foods depicted in a digital picture of food. Some embodiments include technologies to estimate portion size and calories, and to estimate nutritional value of the foods. In some embodiments, data identifying recognized foods and related information are generated in an automated fashion without relying on human assistance to identify the foods. In some embodiments, the system includes technologies for achieving automatic food detection and recognition in a real-life setting with a cluttered background, without the images being taken in a controlled lab setting, and without requiring additional user input (such as user-defined bounding boxes). Some embodiments of the system include technologies for personalizing the food classification based on user-specific habits, location and/or other criteria.
    Type: Application
    Filed: December 11, 2014
    Publication date: March 3, 2016
    Inventors: Ajay Divakaran, Weiyu Zhang, Qian Yu, Harpreet S. Sawhney
  • Publication number: 20150268058
    Abstract: A multi-sensor, multi-modal data collection, analysis, recognition, and visualization platform can be embodied in a navigation-capable vehicle. The platform provides an automated tool that can integrate multi-modal sensor data, including two-dimensional image data, three-dimensional image data, and motion, location, or orientation data, and create a visual representation of the integrated sensor data in a live operational environment. An illustrative platform architecture incorporates modular domain-specific business analytics “plug-ins” to provide real-time annotation of the visual representation with domain-specific markups.
    Type: Application
    Filed: December 18, 2014
    Publication date: September 24, 2015
    Inventors: Supun Samarasekera, Raia Hadsell, Rakesh Kumar, Harpreet S. Sawhney, Bogdan C. Matei, Ryan Villamil
  • Publication number: 20150269438
    Abstract: A multi-sensor, multi-modal data collection, analysis, recognition, and visualization platform can be embodied in a navigation-capable vehicle. The platform provides an automated tool that can integrate multi-modal sensor data, including two-dimensional image data, three-dimensional image data, and motion, location, or orientation data, and create a visual representation of the integrated sensor data in a live operational environment. An illustrative platform architecture incorporates modular domain-specific business analytics “plug-ins” to provide real-time annotation of the visual representation with domain-specific markups.
    Type: Application
    Filed: December 18, 2014
    Publication date: September 24, 2015
    Inventors: Supun Samarasekera, Raia Hadsell, Rakesh Kumar, Harpreet S. Sawhney, Bogdan C. Matei, Ryan Villamil
  • Publication number: 20140212854
    Abstract: A multi-modal interaction modeling system can model a number of different aspects of a human interaction across one or more temporal interaction sequences. Some versions of the system can generate assessments of the nature or quality of the interaction or portions thereof, which can be used to, among other things, provide assistance to one or more of the participants in the interaction.
    Type: Application
    Filed: January 31, 2013
    Publication date: July 31, 2014
    Applicant: SRI International
    Inventors: Ajay Divakaran, Behjat Siddiquie, Saad Khan, Jeffrey Lubin, Harpreet S. Sawhney
  • Patent number: 8439683
    Abstract: A method and system for analyzing at least one food item on a food plate is disclosed. A plurality of images of the food plate is received from an image capturing device. A description of the at least one food item on the food plate is received by a recognition device. The description is at least one of a voice description and a text description. At least one processor extracts a list of food items from the description; classifies and segments the at least one food item from the list using color and texture features derived from the plurality of images; and estimates the volume of the classified and segmented at least one food item. The processor is also configured to estimate the caloric content of the at least one food item.
    Type: Grant
    Filed: January 6, 2010
    Date of Patent: May 14, 2013
    Assignee: SRI International
    Inventors: Manika Puri, Zhiwei Zhu, Jeffrey Lubin, Tom Pschar, Ajay Divakaran, Harpreet S. Sawhney
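
The pipeline in the abstract above (description → food list → segmentation → volume → calories) is sketched below. The calorie densities, parsed vocabulary, and volume figures are invented placeholders; a real system would derive volumes from the plurality of images.

```python
# Hypothetical sketch: extract food names from a spoken/typed description,
# pair them with segmented regions, and convert estimated volumes to calories.

CALORIES_PER_ML = {"rice": 1.3, "chicken": 1.9, "broccoli": 0.35}

def extract_food_list(description):
    """Keep the words from the description that name known foods."""
    words = description.lower().replace(",", " ").split()
    return [w for w in words if w in CALORIES_PER_ML]

def estimate_calories(foods, segments):
    """segments: food -> estimated volume in ml (from the image analysis)."""
    return {f: round(segments[f] * CALORIES_PER_ML[f])
            for f in foods if f in segments}

desc = "A plate of rice, grilled chicken and broccoli"
foods = extract_food_list(desc)
volumes = {"rice": 180.0, "chicken": 120.0, "broccoli": 90.0}  # from segmentation
print(estimate_calories(foods, volumes))
```
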
  • Patent number: 7853072
    Abstract: The present invention provides an improved system and method for object detection with a histogram of oriented gradients (HOG) based support vector machine (SVM). Specifically, the system provides a computational framework to stably detect stationary objects over a wide range of viewpoints. The framework receives sensor input of images, which a “focus of attention” mechanism processes to identify the regions in the image that potentially contain the target objects. These regions are further processed to generate object hypotheses, i.e., selected regions that may contain the target object, together with their positions. Thereafter, these selected regions are verified by an extended HOG-based SVM classifier to produce the detected objects.
    Type: Grant
    Filed: July 19, 2007
    Date of Patent: December 14, 2010
    Assignee: Sarnoff Corporation
    Inventors: Feng Han, Ying Shan, Ryan Cekander, Harpreet S. Sawhney, Rakesh Kumar
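
The HOG-plus-linear-SVM verification stage can be demonstrated with scikit-image and scikit-learn, as sketched below. The training data is synthetic noise purely so the example runs end to end, and the “focus of attention” proposal stage is reduced to a stub; this illustrates the general technique, not the patent's extended classifier.

```python
# Hypothetical sketch: compute HOG features for candidate regions and
# verify them with a linear SVM trained on synthetic patches.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(patch):
    """HOG descriptor for a 64x64 grayscale patch."""
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

rng = np.random.default_rng(0)

def make_patch(positive):
    """Stand-in data: 'objects' are bright blobs, 'background' is noise."""
    patch = rng.random((64, 64))
    if positive:
        patch[16:48, 16:48] += 2.0  # crude synthetic object
    return patch

X = np.array([hog_features(make_patch(i % 2 == 0)) for i in range(40)])
y = np.array([1 if i % 2 == 0 else 0 for i in range(40)])
clf = LinearSVC(C=1.0).fit(X, y)

# "Focus of attention" stub: in the patent this stage proposes likely
# regions; here we simply verify two candidate patches with the SVM.
candidates = [make_patch(True), make_patch(False)]
scores = clf.decision_function([hog_features(p) for p in candidates])
print(["object" if s > 0 else "background" for s in scores])
```
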
  • Publication number: 20100173269
    Abstract: A method and system for analyzing at least one food item on a food plate is disclosed. A plurality of images of the food plate is received from an image capturing device. A description of the at least one food item on the food plate is received by a recognition device. The description is at least one of a voice description and a text description. At least one processor extracts a list of food items from the description; classifies and segments the at least one food item from the list using color and texture features derived from the plurality of images; and estimates the volume of the classified and segmented at least one food item. The processor is also configured to estimate the caloric content of the at least one food item.
    Type: Application
    Filed: January 6, 2010
    Publication date: July 8, 2010
    Inventors: Manika Puri, Zhiwei Zhu, Jeffrey Lubin, Tom Pschar, Ajay Divakaran, Harpreet S. Sawhney