Patents by Inventor Supun Samarasekera

Supun Samarasekera has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO). Brief illustrative code sketches for several recurring techniques follow the listing.

  • Publication number: 20190114507
    Abstract: Techniques are disclosed for improving navigation accuracy for a mobile platform. In one example, a navigation system comprises an image sensor that generates a plurality of images, each image comprising one or more features. A computation engine executing on one or more processors of the navigation system processes each image of the plurality of images to determine a semantic class of each feature of the one or more features of the image. The computation engine determines, for each feature of the one or more features of each image and based on the semantic class of the feature, whether to include the feature as a constraint in a navigation inference engine. The computation engine generates, based at least on features of the one or more features included as constraints in the navigation inference engine, navigation information. The computation engine outputs the navigation information to improve navigation accuracy for the mobile platform. (A sketch of this feature-gating step follows the listing.)
    Type: Application
    Filed: October 17, 2018
    Publication date: April 18, 2019
    Inventors: Han-Pang Chiu, Supun Samarasekera, Rakesh Kumar, Varun Murali
  • Publication number: 20190051056
    Abstract: Techniques for augmenting a reality captured by an image capture device are disclosed. In one example, a system includes an image capture device that generates a two-dimensional frame at a local pose. The system further includes a computation engine executing on one or more processors that queries, based on an estimated pose prior, a reference database of three-dimensional mapping information to obtain an estimated view of the three-dimensional mapping information at the estimated pose prior. The computation engine processes the estimated view at the estimated pose prior to generate semantically segmented sub-views of the estimated view. The computation engine correlates, based on at least one of the semantically segmented sub-views of the estimated view, the estimated view to the two-dimensional frame. Based on the correlation, the computation engine generates and outputs data for augmenting a reality represented in at least one frame captured by the image capture device. (A sketch of the view-matching step follows the listing.)
    Type: Application
    Filed: August 10, 2018
    Publication date: February 14, 2019
    Inventors: Han-Pang Chiu, Supun Samarasekera, Rakesh Kumar, Ryan Villamil, Varun Murali, Gregory Drew Kessler
  • Patent number: 9911340
    Abstract: A multi-sensor, multi-modal data collection, analysis, recognition, and visualization platform can be embodied in a navigation-capable vehicle. The platform provides an automated tool that can integrate multi-modal sensor data, including two-dimensional image data, three-dimensional image data, and motion, location, or orientation data, and create a visual representation of the integrated sensor data in a live operational environment. An illustrative platform architecture incorporates modular domain-specific business analytics “plug-ins” to provide real-time annotation of the visual representation with domain-specific markups. (A sketch of the plug-in registration idea follows the listing.)
    Type: Grant
    Filed: November 7, 2016
    Date of Patent: March 6, 2018
    Assignee: SRI International
    Inventors: Supun Samarasekera, Raia Hadsell, Rakesh Kumar, Harpreet S. Sawhney, Bogdan C. Matei, Ryan Villamil
  • Patent number: 9892563
    Abstract: A system and method for generating a mixed-reality environment are provided. The system and method provide a user-worn sub-system communicatively connected to a synthetic object computer module. The user-worn sub-system may utilize a plurality of user-worn sensors to capture and process data regarding a user's pose and location. The synthetic object computer module may generate and provide to the user-worn sub-system synthetic objects based on information defining the user's real-world scene or environment and indicating the user's pose and location. The synthetic objects may then be rendered on a user-worn display, thereby inserting the synthetic objects into the user's field of view. Rendering the synthetic objects on the user-worn display creates the effect for the user that the synthetic objects are present in the real world. (A sketch of the projection step follows the listing.)
    Type: Grant
    Filed: March 21, 2017
    Date of Patent: February 13, 2018
    Assignee: SRI International
    Inventors: Rakesh Kumar, Taragay Oskiper, Oleg Naroditsky, Supun Samarasekera, Zhiwei Zhu, Janet Yonga Kim Knowles
  • Patent number: 9872968
    Abstract: Biofeedback virtual reality sleep assistant technologies monitor one or more physiological parameters while presenting an immersive environment. The presentation of the immersive environment changes over time in response to changes in the values of the physiological parameters. The changes in the presentation of the immersive environment are configured using biofeedback technology and are designed to promote sleep. (A sketch of the biofeedback loop follows the listing.)
    Type: Grant
    Filed: April 11, 2014
    Date of Patent: January 23, 2018
    Assignee: SRI International
    Inventors: Massimiliano de Zambotti, Ian M. Colrain, Fiona C. Baker, Rakesh Kumar, Mikhail Sizintsev, Supun Samarasekera, Glenn A. Murray
  • Patent number: 9734414
    Abstract: A system and method for efficiently locating in 3D an object of interest in a target scene using video information captured by a plurality of cameras. The system and method provide for multi-camera visual odometry wherein pose estimates are generated for each camera by all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimations with position measurement data captured by one or more secondary measurement sensors, such as, for example, Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units. (A sketch of the multi-sensor fusion idea follows the listing.)
    Type: Grant
    Filed: August 25, 2015
    Date of Patent: August 15, 2017
    Assignee: SRI International
    Inventors: Supun Samarasekera, Rakesh Kumar, Taragay Oskiper, Zhiwei Zhu, Oleg Naroditsky, Harpreet Sawhney
  • Publication number: 20170193710
    Abstract: A system and method for generating a mixed-reality environment are provided. The system and method provide a user-worn sub-system communicatively connected to a synthetic object computer module. The user-worn sub-system may utilize a plurality of user-worn sensors to capture and process data regarding a user's pose and location. The synthetic object computer module may generate and provide to the user-worn sub-system synthetic objects based on information defining the user's real-world scene or environment and indicating the user's pose and location. The synthetic objects may then be rendered on a user-worn display, thereby inserting the synthetic objects into the user's field of view. Rendering the synthetic objects on the user-worn display creates the effect for the user that the synthetic objects are present in the real world.
    Type: Application
    Filed: March 21, 2017
    Publication date: July 6, 2017
    Inventors: Rakesh Kumar, Taragay Oskiper, Oleg Naroditsky, Supun Samarasekera, Zhiwei Zhu, Janet Kim
  • Patent number: 9600067
    Abstract: A system and method for generating a mixed-reality environment are provided. The system and method provide a user-worn sub-system communicatively connected to a synthetic object computer module. The user-worn sub-system may utilize a plurality of user-worn sensors to capture and process data regarding a user's pose and location. The synthetic object computer module may generate and provide to the user-worn sub-system synthetic objects based on information defining the user's real-world scene or environment and indicating the user's pose and location. The synthetic objects may then be rendered on a user-worn display, thereby inserting the synthetic objects into the user's field of view. Rendering the synthetic objects on the user-worn display creates the effect for the user that the synthetic objects are present in the real world.
    Type: Grant
    Filed: October 27, 2009
    Date of Patent: March 21, 2017
    Assignee: SRI International
    Inventors: Rakesh Kumar, Taragay Oskiper, Oleg Naroditsky, Supun Samarasekera, Zhiwei Zhu, Janet Kim
  • Publication number: 20170053538
    Abstract: A multi-sensor, multi-modal data collection, analysis, recognition, and visualization platform can be embodied in a navigation-capable vehicle. The platform provides an automated tool that can integrate multi-modal sensor data, including two-dimensional image data, three-dimensional image data, and motion, location, or orientation data, and create a visual representation of the integrated sensor data in a live operational environment. An illustrative platform architecture incorporates modular domain-specific business analytics “plug-ins” to provide real-time annotation of the visual representation with domain-specific markups.
    Type: Application
    Filed: November 7, 2016
    Publication date: February 23, 2017
    Inventors: Supun Samarasekera, Raia Hadsell, Rakesh Kumar, Harpreet S. Sawhney, Bogdan C. Matei, Ryan Villamil
  • Publication number: 20170024904
    Abstract: Methods and apparatuses for tracking objects comprise one or more optical sensors for capturing one or more images of a scene, wherein the one or more optical sensors capture a wide field of view and a corresponding narrow field of view for the one or more images of the scene; a localization module, coupled to the one or more optical sensors, for determining the location of the apparatus and the location of one or more objects in the one or more images based on the location of the apparatus; and an augmented reality module, coupled to the localization module, for enhancing a view of the scene on a display based on the determined location of the one or more objects. (A sketch of the wide/narrow hand-off follows the listing.)
    Type: Application
    Filed: October 5, 2016
    Publication date: January 26, 2017
    Inventors: Supun Samarasekera, Taragay Oskiper, Rakesh Kumar, Mikhail Sizintsev, Vlad Branzoi
  • Publication number: 20160378861
    Abstract: A computing system includes a vision-based user interface platform to, among other things, analyze multi-modal user interactions, semantically correlate stored knowledge with visual features of a scene depicted in a video, determine relationships between different features of the scene, and selectively display virtual elements on the video depiction of the scene. The analysis of user interactions can be used to filter the information retrieval and the correlation of the visual features with the stored knowledge. (A sketch of the knowledge-correlation step follows the listing.)
    Type: Application
    Filed: October 8, 2015
    Publication date: December 29, 2016
    Inventors: Jayakrishnan Eledath, Supun Samarasekera, Harpreet S. Sawhney, Rakesh Kumar, Mayank Bansal, Girish Acharya, Michael John Wolverton, Aaron Spaulding, Ron Krakower
  • Patent number: 9495783
    Abstract: Methods and apparatuses for tracking objects comprise one or more optical sensors for capturing one or more images of a scene, wherein the one or more optical sensors capture a wide field of view and a corresponding narrow field of view for the one or more images of the scene; a localization module, coupled to the one or more optical sensors, for determining the location of the apparatus and the location of one or more objects in the one or more images based on the location of the apparatus; and an augmented reality module, coupled to the localization module, for enhancing a view of the scene on a display based on the determined location of the one or more objects.
    Type: Grant
    Filed: June 13, 2013
    Date of Patent: November 15, 2016
    Assignee: SRI International
    Inventors: Supun Samarasekera, Taragay Oskiper, Rakesh Kumar, Mikhail Sizintsev, Vlad Branzoi
  • Patent number: 9488492
    Abstract: A multi-sensor, multi-modal data collection, analysis, recognition, and visualization platform can be embodied in a navigation-capable vehicle. The platform provides an automated tool that can integrate multi-modal sensor data, including two-dimensional image data, three-dimensional image data, and motion, location, or orientation data, and create a visual representation of the integrated sensor data in a live operational environment. An illustrative platform architecture incorporates modular domain-specific business analytics “plug-ins” to provide real-time annotation of the visual representation with domain-specific markups.
    Type: Grant
    Filed: December 18, 2014
    Date of Patent: November 8, 2016
    Assignee: SRI International
    Inventors: Supun Samarasekera, Raia Hadsell, Rakesh Kumar, Harpreet S. Sawhney, Bogdan C. Matei, Ryan Villamil
  • Patent number: 9476730
    Abstract: A multi-sensor, multi-modal data collection, analysis, recognition, and visualization platform can be embodied in a navigation-capable vehicle. The platform provides an automated tool that can integrate multi-modal sensor data, including two-dimensional image data, three-dimensional image data, and motion, location, or orientation data, and create a visual representation of the integrated sensor data in a live operational environment. An illustrative platform architecture incorporates modular domain-specific business analytics “plug-ins” to provide real-time annotation of the visual representation with domain-specific markups.
    Type: Grant
    Filed: December 18, 2014
    Date of Patent: October 25, 2016
    Assignee: SRI International
    Inventors: Supun Samarasekera, Raia Hadsell, Rakesh Kumar, Harpreet S. Sawhney, Bogdan C. Matei, Ryan Villamil
  • Publication number: 20160078303
    Abstract: A system and method for efficiently locating in 3D an object of interest in a target scene using video information captured by a plurality of cameras. The system and method provide for multi-camera visual odometry wherein pose estimates are generated for each camera by all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimations with position measurement data captured by one or more secondary measurement sensors, such as, for example, Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units.
    Type: Application
    Filed: August 25, 2015
    Publication date: March 17, 2016
    Inventors: Supun Samarasekera, Rakesh Kumar, Taragay Oskiper, Zhiwei Zhu, Oleg Naroditsky, Harpreet Sawhney
  • Publication number: 20150268058
    Abstract: A multi-sensor, multi-modal data collection, analysis, recognition, and visualization platform can be embodied in a navigation-capable vehicle. The platform provides an automated tool that can integrate multi-modal sensor data, including two-dimensional image data, three-dimensional image data, and motion, location, or orientation data, and create a visual representation of the integrated sensor data in a live operational environment. An illustrative platform architecture incorporates modular domain-specific business analytics “plug-ins” to provide real-time annotation of the visual representation with domain-specific markups.
    Type: Application
    Filed: December 18, 2014
    Publication date: September 24, 2015
    Inventors: Supun Samarasekera, Raia Hadsell, Rakesh Kumar, Harpreet S. Sawhney, Bogdan C. Matei, Ryan Villamil
  • Publication number: 20150269438
    Abstract: A multi-sensor, multi-modal data collection, analysis, recognition, and visualization platform can be embodied in a navigation-capable vehicle. The platform provides an automated tool that can integrate multi-modal sensor data, including two-dimensional image data, three-dimensional image data, and motion, location, or orientation data, and create a visual representation of the integrated sensor data in a live operational environment. An illustrative platform architecture incorporates modular domain-specific business analytics “plug-ins” to provide real-time annotation of the visual representation with domain-specific markups.
    Type: Application
    Filed: December 18, 2014
    Publication date: September 24, 2015
    Inventors: Supun Samarasekera, Raia Hadsell, Rakesh Kumar, Harpreet S. Sawhney, Bogdan C. Matei, Ryan Villamil
  • Patent number: 9121713
    Abstract: A system and method for efficiently locating in 3D an object of interest in a target scene using video information captured by a plurality of cameras. The system and method provide for multi-camera visual odometry wherein pose estimates are generated for each camera by all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimations with position measurement data captured by one or more secondary measurement sensors, such as, for example, Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units.
    Type: Grant
    Filed: April 19, 2012
    Date of Patent: September 1, 2015
    Assignee: SRI International
    Inventors: Supun Samarasekera, Rakesh Kumar, Taragay Oskiper, Zhiwei Zhu, Oleg Naroditsky, Harpreet Sawhney
  • Patent number: 9031809
    Abstract: A method and apparatus for providing three-dimensional navigation for a node comprising an inertial measurement unit for providing gyroscope, acceleration and velocity information (collectively IMU information); a ranging unit for providing distance information relative to at least one reference node; at least one visual sensor for providing images of an environment surrounding the node; a preprocessor, coupled to the inertial measurement unit, the ranging unit and the at least one visual sensor, for generating error states for the IMU information, the distance information and the images; and an error-state predictive filter, coupled to the preprocessor, for processing the error states to produce a three-dimensional pose of the node. (A sketch of the error-state update follows the listing.)
    Type: Grant
    Filed: July 14, 2011
    Date of Patent: May 12, 2015
    Assignee: SRI International
    Inventors: Rakesh Kumar, Supun Samarasekera, Han-Pang Chiu, Zhiwei Zhu, Taragay Oskiper, Lu Wang, Raia Hadsell
  • Publication number: 20140316191
    Abstract: Biofeedback virtual reality sleep assistant technologies monitor one or more physiological parameters while presenting an immersive environment. The presentation of the immersive environment changes over time in response to changes in the values of the physiological parameters. The changes in the presentation of the immersive environment are configured using biofeedback technology and are designed to promote sleep.
    Type: Application
    Filed: April 11, 2014
    Publication date: October 23, 2014
    Inventors: Massimiliano de Zambotti, Ian M. Colrain, Fiona C. Baker, Rakesh Kumar, Mikhail Sizintsev, Supun Samarasekera, Glenn A. Murray
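
Illustrative code sketches

Publication 20190114507 describes keeping or dropping image features as navigation constraints according to their semantic class. Below is a minimal sketch of that gating idea; the `Feature` type, the class whitelist, and the `select_constraints` helper are invented for illustration and are not the patented method.

```python
# Hypothetical sketch (not the patented method): gate image features by their
# semantic class before they become constraints in a navigation filter.
from dataclasses import dataclass

# Invented whitelist: classes assumed to mark static, reliable landmarks.
# Dynamic classes such as "car" or "pedestrian" are deliberately absent.
STATIC_CLASSES = {"building", "road", "pole", "vegetation"}

@dataclass
class Feature:
    u: float             # pixel column of the feature
    v: float             # pixel row of the feature
    semantic_class: str  # label from a per-pixel semantic segmentation model

def select_constraints(features):
    """Keep only features whose class suggests a static landmark."""
    return [f for f in features if f.semantic_class in STATIC_CLASSES]

if __name__ == "__main__":
    detections = [
        Feature(120.0, 44.0, "building"),
        Feature(300.5, 210.0, "car"),        # likely moving: excluded
        Feature(88.2, 199.4, "road"),
        Feature(412.0, 97.0, "pedestrian"),  # likely moving: excluded
    ]
    kept = select_constraints(detections)
    print(f"{len(kept)} of {len(detections)} features kept as constraints")
```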
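Publication 20190051056 correlates an estimated view, rendered from 3D mapping data at a pose prior, with the camera frame. The toy below substitutes a fake one-label renderer and a one-parameter (heading) grid search for the patent's actual correlation machinery; every function and constant here is an assumption.

```python
# Hypothetical sketch of the matching step: compare semantic renderings of a
# 3D map at poses near a pose prior against the segmented camera frame.
import numpy as np

def render_semantic_view(pose, shape=(60, 80)):
    """Stand-in for rendering a semantic label image from map data at `pose`.
    Here: a synthetic 'building' band whose column shifts with heading."""
    img = np.zeros(shape, dtype=np.uint8)
    col = int(shape[1] / 2 + 20 * np.sin(pose))  # heading shifts the band
    img[:, max(0, col - 5):col + 5] = 1          # label 1 = building
    return img

def overlap_score(a, b):
    """Intersection-over-union of two binary semantic masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def refine_pose(frame_mask, pose_prior, search=0.3, steps=13):
    """Grid-search headings around the prior; return the best-scoring pose."""
    candidates = np.linspace(pose_prior - search, pose_prior + search, steps)
    scores = [overlap_score(render_semantic_view(p), frame_mask) for p in candidates]
    return candidates[int(np.argmax(scores))]

if __name__ == "__main__":
    true_pose = 0.15
    frame_mask = render_semantic_view(true_pose)  # pretend camera segmentation
    est = refine_pose(frame_mask, pose_prior=0.0)
    print(f"refined heading: {est:.3f} rad (true {true_pose} rad)")
```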
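The multi-sensor platform patents (9911340, 9488492, 9476730 and the related applications) describe modular, domain-specific analytics "plug-ins" that annotate a fused sensor record in real time. One plausible registration pattern is sketched below; the `register` decorator and both example plug-ins are invented, not taken from the patents.

```python
# Hypothetical sketch of a plug-in registry for domain-specific markups.
from typing import Callable, Dict, List

# A fused record might carry 2D imagery, 3D points, and motion/location data.
SensorRecord = Dict[str, object]
Plugin = Callable[[SensorRecord], List[str]]

_PLUGINS: Dict[str, Plugin] = {}

def register(name: str):
    """Decorator that registers a domain-specific analytics plug-in."""
    def wrap(fn: Plugin) -> Plugin:
        _PLUGINS[name] = fn
        return fn
    return wrap

@register("speed-check")
def speed_markup(rec: SensorRecord) -> List[str]:
    """Flag the record when the vehicle is moving faster than ~13 m/s."""
    return [f"speed {rec['speed_mps']:.1f} m/s"] if rec.get("speed_mps", 0) > 13 else []

@register("obstacle")
def obstacle_markup(rec: SensorRecord) -> List[str]:
    """Flag the record when ranging data shows something within 10 m."""
    return ["obstacle ahead"] if rec.get("range_m", 1e9) < 10 else []

def annotate(rec: SensorRecord) -> List[str]:
    """Run every registered plug-in and collect its markups."""
    return [m for plugin in _PLUGINS.values() for m in plugin(rec)]

if __name__ == "__main__":
    record = {"speed_mps": 15.2, "range_m": 8.0}
    print(annotate(record))  # ['speed 15.2 m/s', 'obstacle ahead']
```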
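The mixed-reality patents (9892563, 9600067 and application 20170193710) render synthetic objects into the user's field of view from a tracked pose. The sketch below shows the basic pinhole-projection step with made-up camera intrinsics; it is a geometry illustration, not the patented rendering pipeline.

```python
# Hypothetical sketch: project a synthetic 3D object into the user's view
# given the tracked pose (R, t), using a standard pinhole camera model.
import numpy as np

K = np.array([[500.0, 0.0, 320.0],   # assumed intrinsics of the worn display
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(point_world, R, t):
    """Project one world-frame 3D point into pixel coordinates for pose (R, t)."""
    p_cam = R @ point_world + t          # world frame -> camera frame
    uvw = K @ p_cam                      # camera frame -> image plane
    return uvw[:2] / uvw[2]              # perspective divide

if __name__ == "__main__":
    R = np.eye(3)                        # user facing down the +Z axis
    t = np.array([0.0, 0.0, 0.0])
    synthetic_corner = np.array([0.5, -0.2, 4.0])  # object vertex 4 m ahead
    u, v = project(synthetic_corner, R, t)
    print(f"draw synthetic vertex at pixel ({u:.1f}, {v:.1f})")
```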
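Patent 9872968 and application 20140316191 close a biofeedback loop between monitored physiological parameters and the immersive presentation. Below is a toy version of such a loop, easing a soundscape tempo down as heart rate falls; the update rule and every constant are invented for illustration.

```python
# Hypothetical sketch of a biofeedback update: a physiological reading
# (heart rate) drives a presentation parameter (soundscape tempo).

def next_tempo(current_tempo_bpm, heart_rate_bpm, resting_hr=60.0, gain=0.1):
    """Move the soundscape tempo toward a target slightly below the user's
    heart rate, a direction intended (in this toy) to promote sleep."""
    target = max(resting_hr - 10.0, heart_rate_bpm - 5.0)
    return current_tempo_bpm + gain * (target - current_tempo_bpm)

if __name__ == "__main__":
    tempo = 70.0
    for hr in [72, 70, 67, 64, 62]:      # simulated readings as the user relaxes
        tempo = next_tempo(tempo, hr)
        print(f"heart rate {hr} bpm -> soundscape tempo {tempo:.1f} bpm")
```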
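The multi-camera visual odometry patents (9734414, 9121713 and application 20160078303) combine per-camera pose estimates and integrate them with IMU/GPS measurements. The sketch below shows one simple combination strategy, inlier-weighted averaging followed by a variance-weighted GPS blend; this strategy is an assumption for illustration, not the patented estimator.

```python
# Hypothetical sketch: fuse per-camera position estimates, then blend in GPS.
import numpy as np

def fuse_camera_poses(positions, inlier_counts):
    """Weighted mean of per-camera position estimates (weights: inlier counts)."""
    w = np.asarray(inlier_counts, dtype=float)
    return (np.asarray(positions) * w[:, None]).sum(axis=0) / w.sum()

def blend_with_gps(vo_position, gps_position, vo_var=1.0, gps_var=4.0):
    """Variance-weighted blend of the visual-odometry and GPS positions."""
    k = vo_var / (vo_var + gps_var)      # how much to pull toward GPS
    return vo_position + k * (gps_position - vo_position)

if __name__ == "__main__":
    cams = [np.array([10.1, 5.0, 0.0]),
            np.array([10.3, 4.9, 0.1]),
            np.array([9.8, 5.2, -0.1])]
    fused = fuse_camera_poses(cams, inlier_counts=[120, 80, 40])
    final = blend_with_gps(fused, gps_position=np.array([10.5, 5.1, 0.0]))
    print("fused VO position:", fused.round(2), "after GPS blend:", final.round(2))
```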
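Patent 9495783 and application 20170024904 pair a wide field of view with a corresponding narrow one and locate objects relative to the apparatus. A geometry-only sketch of that hand-off follows; the fields of view, pixel widths, and range are invented numbers.

```python
# Hypothetical sketch: a wide-FOV detection gives a coarse bearing; if the
# bearing falls inside the narrow FOV, the narrow camera could refine it, and
# the apparatus pose places the object in world coordinates.
import math

WIDE_FOV_DEG, WIDE_WIDTH_PX = 120.0, 1920
NARROW_FOV_DEG = 20.0

def pixel_to_bearing(u, width_px, fov_deg):
    """Horizontal bearing (deg) of pixel column u relative to the optical axis."""
    return (u / width_px - 0.5) * fov_deg

def object_location(apparatus_xy, heading_deg, bearing_deg, range_m):
    """Place the object in world coordinates from the apparatus pose + bearing."""
    az = math.radians(heading_deg + bearing_deg)
    return (apparatus_xy[0] + range_m * math.sin(az),
            apparatus_xy[1] + range_m * math.cos(az))

if __name__ == "__main__":
    bearing = pixel_to_bearing(u=1000, width_px=WIDE_WIDTH_PX, fov_deg=WIDE_FOV_DEG)
    in_narrow = abs(bearing) <= NARROW_FOV_DEG / 2
    print(f"coarse bearing {bearing:.1f} deg; narrow camera can refine: {in_narrow}")
    print("object at", object_location((0.0, 0.0), heading_deg=90.0,
                                       bearing_deg=bearing, range_m=50.0))
```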
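Application 20160378861 semantically correlates stored knowledge with visual features of a scene and filters the resulting overlays by the user's interactions. The sketch below uses an invented three-entry knowledge base and a single "topic" standing in for the analyzed user interaction.

```python
# Hypothetical sketch: match detected scene labels against stored knowledge,
# keeping only entries relevant to what the user is currently asking about.

KNOWLEDGE = {
    "valve": {"note": "shutoff valve V-17", "topics": {"maintenance"}},
    "gauge": {"note": "pressure gauge, max 40 psi", "topics": {"maintenance", "safety"}},
    "exit":  {"note": "emergency exit", "topics": {"safety"}},
}

def overlays_for(detected_labels, user_topic):
    """Correlate detected features with stored knowledge, filtered by the
    topic inferred from the user's multi-modal interaction."""
    return [KNOWLEDGE[l]["note"] for l in detected_labels
            if l in KNOWLEDGE and user_topic in KNOWLEDGE[l]["topics"]]

if __name__ == "__main__":
    labels = ["valve", "gauge", "exit", "pipe"]
    print(overlays_for(labels, user_topic="safety"))
    # -> ['pressure gauge, max 40 psi', 'emergency exit']
```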
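Patent 9031809 fuses IMU, ranging, and visual measurements through an error-state predictive filter. The toy below runs a one-dimensional predict/correct cycle so the structure is visible; a real system would carry full 3D pose and sensor-bias error states, and all numbers here are invented.

```python
# Hypothetical 1-D sketch of an error-state filter: the IMU propagates a
# nominal position while a scalar Kalman filter tracks the drift (error state)
# and folds in range-like corrections.

def error_state_step(nominal, imu_delta, measurement, err, P, Q=0.01, Rm=0.25):
    """One predict/correct cycle of a scalar error-state filter."""
    nominal += imu_delta                   # propagate nominal state with IMU
    P += Q                                 # covariance grows during prediction
    innov = measurement - (nominal + err)  # residual against the measurement
    K = P / (P + Rm)                       # Kalman gain
    err += K * innov                       # correct the error state
    P *= (1 - K)                           # shrink covariance after the update
    return nominal + err, nominal, err, P  # fused pose, plus filter internals

if __name__ == "__main__":
    nominal, err, P = 0.0, 0.0, 1.0
    truth = 0.0
    for step in range(5):
        truth += 1.0                       # node moves 1 m per step
        imu_delta = 1.0 + 0.05             # IMU over-reads slightly (bias)
        meas = truth                       # ranging measurement (noise-free toy)
        fused, nominal, err, P = error_state_step(nominal, imu_delta, meas, err, P)
        print(f"step {step}: truth {truth:.2f}  fused {fused:.2f}")
```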