Patents by Inventor Harpreet S. Sawhney
Harpreet S. Sawhney has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11397462
Abstract: A computing system includes a vision-based user interface platform to, among other things, analyze multi-modal user interactions, semantically correlate stored knowledge with visual features of a scene depicted in a video, determine relationships between different features of the scene, and selectively display virtual elements on the video depiction of the scene. The analysis of user interactions can be used to filter the information retrieval and the correlation of the visual features with the stored knowledge.
Type: Grant
Filed: October 8, 2015
Date of Patent: July 26, 2022
Assignee: SRI International
Inventors: Jayakrishnan Eledath, Supun Samarasekera, Harpreet S. Sawhney, Rakesh Kumar, Mayank Bansal, Girish Acharya, Michael John Wolverton, Aaron Spaulding, Ron Krakower
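The filtering idea in this abstract can be illustrated with a small sketch. The knowledge entries, the "focus" signal, and the function name below are all hypothetical; the patent does not prescribe a concrete data model, only that the analysis of user interactions narrows which stored knowledge is correlated with the scene's visual features.

```python
# Hypothetical sketch: a multi-modal interaction analysis is reduced
# here to a single "focus" tag (e.g., from gaze or gesture), which
# filters the stored knowledge correlated with the current scene.
KNOWLEDGE = [
    {"feature": "valve", "note": "shut-off valve, turn clockwise"},
    {"feature": "gauge", "note": "pressure gauge, normal 30-50 psi"},
]

def overlay_elements(scene_features, focus):
    """Correlate scene features with stored knowledge, keeping only
    entries relevant to what the user is currently attending to."""
    return [
        k for k in KNOWLEDGE
        if k["feature"] in scene_features and k["feature"] == focus
    ]

# Both features are visible, but only the focused one is overlaid.
shown = overlay_elements({"valve", "gauge"}, focus="valve")
```

In a full system the returned entries would drive which virtual elements are rendered over the video depiction of the scene.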
-
Patent number: 11030458
Abstract: The disclosure herein describes training a machine learning model to recognize a real-world object based on generated virtual scene variations associated with a model of the real-world object. A digitized three-dimensional (3D) model representing the real-world object is obtained and a virtual scene is built around the 3D model. A plurality of virtual scene variations is generated by varying one or more characteristics. Each virtual scene variation is generated to include a label identifying the 3D model in the virtual scene variation. A machine learning model may be trained based on the plurality of virtual scene variations. The use of generated digital assets to train the machine learning model greatly decreases the time and cost requirements of creating training assets and provides training quality benefits based on the quantity and quality of variations that may be generated, as well as the completeness of information included in each generated digital asset.
Type: Grant
Filed: September 14, 2018
Date of Patent: June 8, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Muhammad Zeeshan Zia, Emanuel Shalev, Jonathan C. Hanzelka, Harpreet S. Sawhney, Pedro U. Escos, Michael J. Ebstyne
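The variation-generation step can be sketched as follows. The varied characteristics (lighting, camera azimuth, background) and all names are illustrative assumptions; the claim is only that each variation is built around the same 3D model and carries its label.

```python
import random
from dataclasses import dataclass

@dataclass
class SceneVariation:
    """One labeled virtual scene built around a fixed 3D model."""
    model_id: str            # label identifying the 3D model in the scene
    light_intensity: float
    camera_azimuth_deg: float
    background: str

def generate_variations(model_id, n, seed=0):
    """Vary scene characteristics around the 3D model, emitting one
    labeled variation per draw (deterministic for a given seed)."""
    rng = random.Random(seed)
    backgrounds = ["office", "warehouse", "outdoor"]
    return [
        SceneVariation(
            model_id=model_id,
            light_intensity=rng.uniform(0.2, 1.0),
            camera_azimuth_deg=rng.uniform(0.0, 360.0),
            background=rng.choice(backgrounds),
        )
        for _ in range(n)
    ]

# Every variation carries the same label, so the set can be fed
# directly to a supervised trainer without manual annotation.
variations = generate_variations("forklift_v1", 1000)
```

Because the label comes for free with each generated asset, the cost of the training set scales with compute rather than with human labeling effort, which is the economic point the abstract makes.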
-
Publication number: 20200089954
Abstract: The disclosure herein describes training a machine learning model to recognize a real-world object based on generated virtual scene variations associated with a model of the real-world object. A digitized three-dimensional (3D) model representing the real-world object is obtained and a virtual scene is built around the 3D model. A plurality of virtual scene variations is generated by varying one or more characteristics. Each virtual scene variation is generated to include a label identifying the 3D model in the virtual scene variation. A machine learning model may be trained based on the plurality of virtual scene variations. The use of generated digital assets to train the machine learning model greatly decreases the time and cost requirements of creating training assets and provides training quality benefits based on the quantity and quality of variations that may be generated, as well as the completeness of information included in each generated digital asset.
Type: Application
Filed: September 14, 2018
Publication date: March 19, 2020
Inventors: Muhammad Zeeshan Zia, Emanuel Shalev, Jonathan C. Hanzelka, Harpreet S. Sawhney, Pedro U. Escos, Michael J. Ebstyne
-
Patent number: 9916520
Abstract: A food recognition assistant system includes technologies to recognize foods and combinations of foods depicted in a digital picture of food. Some embodiments include technologies to estimate portion size and calories, and to estimate nutritional value of the foods. In some embodiments, data identifying recognized foods and related information are generated in an automated fashion without relying on human assistance to identify the foods. In some embodiments, the system includes technologies for achieving automatic food detection and recognition in a real-life setting with a cluttered background, without the images being taken in a controlled lab setting, and without requiring additional user input (such as user-defined bounding boxes). Some embodiments of the system include technologies for personalizing the food classification based on user-specific habits, location and/or other criteria.
Type: Grant
Filed: December 11, 2014
Date of Patent: March 13, 2018
Assignee: SRI International
Inventors: Ajay Divakaran, Weiyu Zhang, Qian Yu, Harpreet S. Sawhney
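One way to read the personalization claim is as a prior over foods learned from the user's habits, blended with the visual classifier's confidences. The blending scheme, weight, and food names below are hypothetical; the patent only says classification is personalized by habits, location, and/or other criteria.

```python
from collections import Counter

def personalize_scores(class_scores, user_history, weight=0.3):
    """Blend visual classifier confidences with a prior derived from
    the user's eating history (one hypothetical scheme among many)."""
    total = sum(user_history.values()) or 1  # avoid division by zero
    return {
        food: (1 - weight) * score + weight * (user_history.get(food, 0) / total)
        for food, score in class_scores.items()
    }

# Visually ambiguous noodle dish: the classifier is nearly undecided,
# but the user's history breaks the tie toward their usual order.
scores = {"pad thai": 0.40, "lo mein": 0.38, "spaghetti": 0.22}
history = Counter({"pad thai": 12, "spaghetti": 1})
personalized = personalize_scores(scores, history)
top = max(personalized, key=personalized.get)
```

The same structure accommodates a location prior (e.g., restaurant menus) by swapping in a different `user_history`-style distribution.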
-
Patent number: 9911340
Abstract: A multi-sensor, multi-modal data collection, analysis, recognition, and visualization platform can be embodied in a navigation capable vehicle. The platform provides an automated tool that can integrate multi-modal sensor data including two-dimensional image data, three-dimensional image data, and motion, location, or orientation data, and create a visual representation of the integrated sensor data, in a live operational environment. An illustrative platform architecture incorporates modular domain-specific business analytics “plug ins” to provide real-time annotation of the visual representation with domain-specific markups.
Type: Grant
Filed: November 7, 2016
Date of Patent: March 6, 2018
Assignee: SRI International
Inventors: Supun Samarasekera, Raia Hadsell, Rakesh Kumar, Harpreet S. Sawhney, Bogdan C. Matei, Ryan Villamil
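The "plug in" architecture mentioned at the end of the abstract can be sketched as a registry of domain-specific annotators applied to each integrated sensor frame. The registry, decorator, domain name, and frame fields below are all assumptions for illustration, not the patent's actual interfaces.

```python
# Sketch of a modular plug-in annotation pipeline (names hypothetical).
PLUGINS = {}

def analytics_plugin(domain):
    """Register a domain-specific business-analytics plug-in."""
    def register(fn):
        PLUGINS[domain] = fn
        return fn
    return register

@analytics_plugin("infrastructure")
def mark_damage(frame):
    # A real plug-in would run a detector over the fused 2D/3D data;
    # here we simply annotate frames an upstream sensor flagged.
    return ["possible damage"] if frame.get("flagged") else []

def annotate(frame, domains):
    """Apply each requested plug-in to one integrated sensor frame,
    attaching its markups for the visualization layer."""
    markups = []
    for d in domains:
        markups.extend(PLUGINS[d](frame))
    return {**frame, "markups": markups}

out = annotate({"gps": (39.1, -77.2), "flagged": True}, ["infrastructure"])
```

Keeping analytics behind a registry like this is what lets new domains be added without touching the collection or visualization layers, which is the point of the modular design.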
-
Patent number: 9734426
Abstract: A food recognition assistant system includes technologies to recognize foods and combinations of foods depicted in a digital picture of food. Some embodiments include technologies to estimate portion size and calories, and to estimate nutritional value of the foods. In some embodiments, data identifying recognized foods and related information are generated in an automated fashion without relying on human assistance to identify the foods. In some embodiments, the system includes technologies for achieving automatic food detection and recognition in a real-life setting with a cluttered background, without the images being taken in a controlled lab setting, and without requiring additional user input (such as user-defined bounding boxes). Some embodiments of the system include technologies for personalizing the food classification based on user-specific habits, location and/or other criteria.
Type: Grant
Filed: December 11, 2014
Date of Patent: August 15, 2017
Assignee: SRI International
Inventors: Ajay Divakaran, Weiyu Zhang, Qian Yu, Harpreet S. Sawhney
-
Patent number: 9734730
Abstract: A multi-modal interaction modeling system can model a number of different aspects of a human interaction across one or more temporal interaction sequences. Some versions of the system can generate assessments of the nature or quality of the interaction or portions thereof, which can be used to, among other things, provide assistance to one or more of the participants in the interaction.
Type: Grant
Filed: January 31, 2013
Date of Patent: August 15, 2017
Assignee: SRI International
Inventors: Ajay Divakaran, Behjat Siddiquie, Saad Khan, Jeffrey Lubin, Harpreet S. Sawhney
-
Publication number: 20170053538
Abstract: A multi-sensor, multi-modal data collection, analysis, recognition, and visualization platform can be embodied in a navigation capable vehicle. The platform provides an automated tool that can integrate multi-modal sensor data including two-dimensional image data, three-dimensional image data, and motion, location, or orientation data, and create a visual representation of the integrated sensor data, in a live operational environment. An illustrative platform architecture incorporates modular domain-specific business analytics “plug ins” to provide real-time annotation of the visual representation with domain-specific markups.
Type: Application
Filed: November 7, 2016
Publication date: February 23, 2017
Inventors: Supun Samarasekera, Raia Hadsell, Rakesh Kumar, Harpreet S. Sawhney, Bogdan C. Matei, Ryan Villamil
-
Publication number: 20160378861
Abstract: A computing system includes a vision-based user interface platform to, among other things, analyze multi-modal user interactions, semantically correlate stored knowledge with visual features of a scene depicted in a video, determine relationships between different features of the scene, and selectively display virtual elements on the video depiction of the scene. The analysis of user interactions can be used to filter the information retrieval and the correlation of the visual features with the stored knowledge.
Type: Application
Filed: October 8, 2015
Publication date: December 29, 2016
Inventors: Jayakrishnan Eledath, Supun Samarasekera, Harpreet S. Sawhney, Rakesh Kumar, Mayank Bansal, Girish Acharya, Michael John Wolverton, Aaron Spaulding, Ron Krakower
-
Patent number: 9488492
Abstract: A multi-sensor, multi-modal data collection, analysis, recognition, and visualization platform can be embodied in a navigation capable vehicle. The platform provides an automated tool that can integrate multi-modal sensor data including two-dimensional image data, three-dimensional image data, and motion, location, or orientation data, and create a visual representation of the integrated sensor data, in a live operational environment. An illustrative platform architecture incorporates modular domain-specific business analytics “plug ins” to provide real-time annotation of the visual representation with domain-specific markups.
Type: Grant
Filed: December 18, 2014
Date of Patent: November 8, 2016
Assignee: SRI International
Inventors: Supun Samarasekera, Raia Hadsell, Rakesh Kumar, Harpreet S. Sawhney, Bogdan C. Matei, Ryan Villamil
-
Patent number: 9476730
Abstract: A multi-sensor, multi-modal data collection, analysis, recognition, and visualization platform can be embodied in a navigation capable vehicle. The platform provides an automated tool that can integrate multi-modal sensor data including two-dimensional image data, three-dimensional image data, and motion, location, or orientation data, and create a visual representation of the integrated sensor data, in a live operational environment. An illustrative platform architecture incorporates modular domain-specific business analytics “plug ins” to provide real-time annotation of the visual representation with domain-specific markups.
Type: Grant
Filed: December 18, 2014
Date of Patent: October 25, 2016
Assignee: SRI International
Inventors: Supun Samarasekera, Raia Hadsell, Rakesh Kumar, Harpreet S. Sawhney, Bogdan C. Matei, Ryan Villamil
-
Publication number: 20160071024
Abstract: Technologies for analyzing temporal components of multimodal data to detect short-term multimodal events, determine relationships between short-term multimodal events, and recognize long-term multimodal events, using a deep learning architecture, are disclosed.
Type: Application
Filed: February 25, 2015
Publication date: March 10, 2016
Inventors: Mohamed R. Amer, Behjat Siddiquie, Ajay Divakaran, Colleen Richey, Saad Khan, Harpreet S. Sawhney
-
Publication number: 20160063692
Abstract: A food recognition assistant system includes technologies to recognize foods and combinations of foods depicted in a digital picture of food. Some embodiments include technologies to estimate portion size and calories, and to estimate nutritional value of the foods. In some embodiments, data identifying recognized foods and related information are generated in an automated fashion without relying on human assistance to identify the foods. In some embodiments, the system includes technologies for achieving automatic food detection and recognition in a real-life setting with a cluttered background, without the images being taken in a controlled lab setting, and without requiring additional user input (such as user-defined bounding boxes). Some embodiments of the system include technologies for personalizing the food classification based on user-specific habits, location and/or other criteria.
Type: Application
Filed: December 11, 2014
Publication date: March 3, 2016
Inventors: Ajay Divakaran, Weiyu Zhang, Qian Yu, Harpreet S. Sawhney
-
Publication number: 20160063734
Abstract: A food recognition assistant system includes technologies to recognize foods and combinations of foods depicted in a digital picture of food. Some embodiments include technologies to estimate portion size and calories, and to estimate nutritional value of the foods. In some embodiments, data identifying recognized foods and related information are generated in an automated fashion without relying on human assistance to identify the foods. In some embodiments, the system includes technologies for achieving automatic food detection and recognition in a real-life setting with a cluttered background, without the images being taken in a controlled lab setting, and without requiring additional user input (such as user-defined bounding boxes). Some embodiments of the system include technologies for personalizing the food classification based on user-specific habits, location and/or other criteria.
Type: Application
Filed: December 11, 2014
Publication date: March 3, 2016
Inventors: Ajay Divakaran, Weiyu Zhang, Qian Yu, Harpreet S. Sawhney
-
Publication number: 20150269438
Abstract: A multi-sensor, multi-modal data collection, analysis, recognition, and visualization platform can be embodied in a navigation capable vehicle. The platform provides an automated tool that can integrate multi-modal sensor data including two-dimensional image data, three-dimensional image data, and motion, location, or orientation data, and create a visual representation of the integrated sensor data, in a live operational environment. An illustrative platform architecture incorporates modular domain-specific business analytics “plug ins” to provide real-time annotation of the visual representation with domain-specific markups.
Type: Application
Filed: December 18, 2014
Publication date: September 24, 2015
Inventors: Supun Samarasekera, Raia Hadsell, Rakesh Kumar, Harpreet S. Sawhney, Bogdan C. Matei, Ryan Villamil
-
Publication number: 20150268058
Abstract: A multi-sensor, multi-modal data collection, analysis, recognition, and visualization platform can be embodied in a navigation capable vehicle. The platform provides an automated tool that can integrate multi-modal sensor data including two-dimensional image data, three-dimensional image data, and motion, location, or orientation data, and create a visual representation of the integrated sensor data, in a live operational environment. An illustrative platform architecture incorporates modular domain-specific business analytics “plug ins” to provide real-time annotation of the visual representation with domain-specific markups.
Type: Application
Filed: December 18, 2014
Publication date: September 24, 2015
Inventors: Supun Samarasekera, Raia Hadsell, Rakesh Kumar, Harpreet S. Sawhney, Bogdan C. Matei, Ryan Villamil
-
Publication number: 20140212854
Abstract: A multi-modal interaction modeling system can model a number of different aspects of a human interaction across one or more temporal interaction sequences. Some versions of the system can generate assessments of the nature or quality of the interaction or portions thereof, which can be used to, among other things, provide assistance to one or more of the participants in the interaction.
Type: Application
Filed: January 31, 2013
Publication date: July 31, 2014
Applicant: SRI International
Inventors: Ajay Divakaran, Behjat Siddiquie, Saad Khan, Jeffrey Lubin, Harpreet S. Sawhney
-
Patent number: 8439683
Abstract: A method and system for analyzing at least one food item on a food plate is disclosed. A plurality of images of the food plate is received by an image capturing device. A description of the at least one food item on the food plate is received by a recognition device. The description is at least one of a voice description and a text description. At least one processor extracts a list of food items from the description; classifies and segments the at least one food item from the list using color and texture features derived from the plurality of images; and estimates the volume of the classified and segmented at least one food item. The processor is also configured to estimate the caloric content of the at least one food item.
Type: Grant
Filed: January 6, 2010
Date of Patent: May 14, 2013
Assignee: SRI International
Inventors: Manika Puri, Zhiwei Zhu, Jeffrey Lubin, Tom Pschar, Ajay Divakaran, Harpreet S. Sawhney
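The claimed pipeline (extract a food list from the description, classify and segment, estimate volume, then calories) can be sketched end to end, with the image-based steps stubbed out. The vocabulary, calorie densities, and volume figures below are illustrative assumptions, not values from the patent; a real system would derive volumes from the segmented images.

```python
# Illustrative calorie densities in kcal per ml (assumed, not from
# the patent).
CALORIES_PER_ML = {"rice": 1.3, "chicken": 1.9, "broccoli": 0.35}

def extract_food_items(description, vocabulary=CALORIES_PER_ML):
    """Pull known food words out of a free-text (voice or typed)
    description of the plate."""
    words = description.lower().replace(",", " ").split()
    return [w for w in words if w in vocabulary]

def estimate_calories(items, volumes_ml):
    """Sum calories over the recognized items given per-item volume
    estimates; in the patented system these volumes come from the
    color/texture-based classification and segmentation of the images."""
    return sum(CALORIES_PER_ML[i] * volumes_ml[i] for i in items)

items = extract_food_items("a plate of rice, chicken and broccoli")
kcal = estimate_calories(items, {"rice": 150, "chicken": 100, "broccoli": 80})
```

The description-driven food list is what keeps the visual classification tractable: the segmenter only has to discriminate among the foods the user mentioned, not the full universe of dishes.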
-
Patent number: 7853072
Abstract: The present invention provides an improved system and method for object detection with a histogram of oriented gradients (HOG) based support vector machine (SVM). Specifically, the system provides a computational framework to stably detect stationary (non-moving) objects over a wide range of viewpoints. The framework includes providing a sensor input of images which are received by a “focus of attention” mechanism to identify the regions in the image that potentially contain the target objects. These regions are further processed to generate hypothesized objects, specifically selected regions containing the target object hypotheses together with their positions. Thereafter, these selected regions are verified by an extended HOG-based SVM classifier to generate the detected objects.
Type: Grant
Filed: July 19, 2007
Date of Patent: December 14, 2010
Assignee: Sarnoff Corporation
Inventors: Feng Han, Ying Shan, Ryan Cekander, Harpreet S. Sawhney, Rakesh Kumar
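The descriptor at the core of the verification stage can be sketched in a few lines. This is a deliberately simplified HOG (per-cell orientation histograms weighted by gradient magnitude, with no block normalization and none of the patent's extensions); cell size, bin count, and the toy patch are all assumptions for illustration.

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Toy histogram-of-oriented-gradients descriptor: for each
    cell, histogram the gradient orientations (mod 180 degrees)
    weighted by gradient magnitude. Real HOG adds block-level
    contrast normalization on top of this."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    h, w = img.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            m = mag[y:y + cell, x:x + cell].ravel()
            a = ang[y:y + cell, x:x + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    return np.concatenate(feats)

# A candidate region from the "focus of attention" stage would be
# described this way, then passed to the SVM for verification.
patch = np.zeros((16, 16))
patch[:, 8:] = 1.0  # vertical edge -> strong horizontal gradients
desc = hog_features(patch)  # 2x2 cells of 9 bins each -> 36 values
```

In the patented framework the descriptor feeds a trained SVM, so only the attention stage's candidate regions, not the full image, pay the cost of classification.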
-
Publication number: 20100173269
Abstract: A method and system for analyzing at least one food item on a food plate is disclosed. A plurality of images of the food plate is received by an image capturing device. A description of the at least one food item on the food plate is received by a recognition device. The description is at least one of a voice description and a text description. At least one processor extracts a list of food items from the description; classifies and segments the at least one food item from the list using color and texture features derived from the plurality of images; and estimates the volume of the classified and segmented at least one food item. The processor is also configured to estimate the caloric content of the at least one food item.
Type: Application
Filed: January 6, 2010
Publication date: July 8, 2010
Inventors: Manika Puri, Zhiwei Zhu, Jeffrey Lubin, Tom Pschar, Ajay Divakaran, Harpreet S. Sawhney