Patents by Inventor Supun Samarasekera

Supun Samarasekera has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11960994
    Abstract: A method, apparatus and system for artificial intelligence-based hierarchical deep reinforcement learning (HDRL) planning and control for coordinating a team of platforms includes implementing a global planning layer for determining a collective goal and determining, by applying at least one machine learning process, at least one respective platform goal to be achieved by at least one platform; implementing a platform planning layer for determining, by applying at least one machine learning process, at least one respective action to be performed by the at least one platform to achieve the respective platform goal; and implementing a platform control layer for determining at least one respective function to be performed by the at least one platform. In the method, apparatus and system, although information is shared between at least two of the layers, the global planning layer, the platform planning layer, and the platform control layer are trained separately.
    Type: Grant
    Filed: January 18, 2021
    Date of Patent: April 16, 2024
    Assignee: SRI International
    Inventors: Han-Pang Chiu, Jonathan D. Brookshire, Zachary Seymour, Niluthpol C. Mithun, Supun Samarasekera, Rakesh Kumar, Qiao Wang
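
A minimal sketch of the three-layer hierarchy described in the entry above, assuming each layer exposes a simple goal/action/control interface; the class names, the toy policies, and the 2D platform model are my own illustrations, not taken from the patent:

```python
# Hypothetical sketch: three planning/control layers, each with its own
# (stubbed) policy, exchanging goals and actions at run time while being
# trainable independently, per the abstract.
from dataclasses import dataclass

@dataclass
class Platform:
    name: str
    position: tuple  # (x, y)

class GlobalPlanningLayer:
    """Maps a collective goal to per-platform goals (stubbed policy)."""
    def platform_goals(self, collective_goal, platforms):
        # A learned policy would assign distinct goals; here everyone shares one.
        return {p.name: collective_goal for p in platforms}

class PlatformPlanningLayer:
    """Maps a platform goal to a discrete action (stubbed policy)."""
    def action(self, platform, goal):
        dx, dy = goal[0] - platform.position[0], goal[1] - platform.position[1]
        return "move_east" if abs(dx) > abs(dy) else "move_north"

class PlatformControlLayer:
    """Maps an action to a low-level control function (stubbed)."""
    def control(self, action):
        return {"move_east": (1.0, 0.0), "move_north": (0.0, 1.0)}[action]

def coordinate(collective_goal, platforms):
    g, p, c = GlobalPlanningLayer(), PlatformPlanningLayer(), PlatformControlLayer()
    goals = g.platform_goals(collective_goal, platforms)   # layer 1
    for plat in platforms:
        action = p.action(plat, goals[plat.name])          # layer 2
        velocity = c.control(action)                       # layer 3
        print(plat.name, action, velocity)

coordinate((5.0, 2.0), [Platform("uav-1", (0.0, 0.0)), Platform("ugv-1", (1.0, 4.0))])
```

Training each layer separately, as the abstract notes, would mean replacing each stub with an independently trained model that only exchanges goals and actions with its neighbors at run time.
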
  • Publication number: 20240096093
    Abstract: A method for AI-driven augmented reality mentoring includes determining semantic features of objects in at least one captured scene; determining 3D positional information of the objects; combining information regarding the identified objects with respective 3D positional information to determine at least one intermediate representation; completing the determined intermediate representation using machine learning to include additional objects or positional information of the objects not identifiable from the at least one captured scene; determining at least one task to be performed and determining steps to be performed using a knowledge database; generating at least one visual representation relating to the determined steps for performing the at least one task; determining a correct position for displaying the at least one visual representation; and displaying the at least one visual representation on a see-through display in the determined correct position as an augmented overlay to the view of the at least one captured scene.
    Type: Application
    Filed: September 19, 2023
    Publication date: March 21, 2024
    Inventors: Han-Pang CHIU, Abhinav RAJVANSHI, Niluthpol C. MITHUN, Zachary SEYMOUR, Supun SAMARASEKERA, Rakesh KUMAR, Winter Joseph Guerra
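
The pipeline in publication 20240096093 above maps naturally onto a chain of stages. Below is a hedged skeleton of that chain; every function body is a stand-in for a trained model or a database lookup, and all names (detect_objects, plan_task_steps, etc.) are hypothetical:

```python
# Hypothetical end-to-end skeleton of the AR-mentoring stages named in
# the abstract; nothing here reproduces SRI's implementation.

def detect_objects(scene):                       # semantic features
    return [{"label": "valve", "bbox": (10, 20, 40, 60)}]

def estimate_3d_positions(objects):              # 3D positional information
    return [dict(o, xyz=(0.4, 0.1, 1.2)) for o in objects]

def build_intermediate_representation(objects3d):
    return {"objects": objects3d}

def complete_representation(rep):                # ML fills in occluded objects
    rep["objects"].append({"label": "pipe", "xyz": (0.6, 0.1, 1.3), "inferred": True})
    return rep

def plan_task_steps(rep, knowledge_db):          # task + steps from knowledge base
    return knowledge_db.get("close_valve", [])

def render_overlay(step, rep, display):          # anchored visual representation
    anchor = rep["objects"][0]["xyz"]            # position overlay at the object
    display.append({"text": step, "anchor": anchor})

knowledge_db = {"close_valve": ["grip handle", "turn clockwise"]}
display = []
rep = complete_representation(
    build_intermediate_representation(
        estimate_3d_positions(detect_objects(scene="frame-0"))))
for step in plan_task_steps(rep, knowledge_db):
    render_overlay(step, rep, display)
print(display)
```
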
  • Publication number: 20230419410
    Abstract: Systems and methods for providing remote farm damage assessment are provided herein. In some embodiments, a system and method for providing remote farm damage assessment may include determining a set of damage assessment locales for damage assessment; incorporating the set of damage assessment locales into a workflow; providing the workflow to a user device; receiving a first set of damage assessment images from the user device based on the workflow provided, wherein each of the first set of damage assessment images includes geolocation information and camera information; determining a damage assessment based on the first set of damage assessment images using a damage assessment machine learning model; and outputting a damage assessment indication including one or more of whether there is damage, a confidence level of assessing the damage, or a confidence level associated with the level of damage.
    Type: Application
    Filed: December 15, 2021
    Publication date: December 28, 2023
    Inventors: Supun SAMARASEKERA, Rakesh KUMAR, Garbis SALGIAN, Qiao WANG, Glenn A. MURRAY, Avijit BASU, Alison POLKINHORNE
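
A compact sketch of the assessment loop from the entry above, with a random stub standing in for the damage-assessment machine learning model; the record layout and all function names are assumptions made for illustration:

```python
# Toy damage-assessment flow: build a workflow of locales, collect
# geotagged images against it, run a (stubbed) classifier, report results.
import random

def assess(image):                     # stand-in for the ML damage model
    return random.random() > 0.5, round(random.random(), 2)

def build_workflow(locales):
    return [{"locale": loc, "instruction": f"photograph field at {loc}"} for loc in locales]

def run_assessment(images):
    results = []
    for img in images:
        damaged, conf = assess(img["pixels"])
        results.append({
            "locale": img["geolocation"],   # required metadata per image
            "camera": img["camera"],
            "damaged": damaged,
            "confidence": conf,
        })
    return results

workflow = build_workflow([(44.98, -93.26), (45.01, -93.20)])
uploaded = [{"pixels": None, "geolocation": w["locale"], "camera": "phone-rear"}
            for w in workflow]
print(run_assessment(uploaded))
```
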
  • Publication number: 20230394294
    Abstract: A method, apparatus and system for artificial intelligence-based hierarchical deep reinforcement learning (HDRL) planning and control for coordinating a team of platforms includes implementing a global planning layer for determining a collective goal and determining, by applying at least one machine learning process, at least one respective platform goal to be achieved by at least one platform; implementing a platform planning layer for determining, by applying at least one machine learning process, at least one respective action to be performed by the at least one platform to achieve the respective platform goal; and implementing a platform control layer for determining at least one respective function to be performed by the at least one platform. In the method, apparatus and system, although information is shared between at least two of the layers, the global planning layer, the platform planning layer, and the platform control layer are trained separately.
    Type: Application
    Filed: January 18, 2021
    Publication date: December 7, 2023
    Inventors: Han-Pang Chiu, Jonathan D. Brookshire, Zachary Seymour, Niluthpol C. Mithun, Supun Samarasekera, Rakesh Kumar, Qiao Wang
  • Patent number: 11740624
    Abstract: A hybrid control system includes a control agent and a control engine. The control engine is configured to install a master plan to the control agent. The master plan includes a plurality of high-level tasks. The control agent is configured to operate according to the master plan to, for each high-level task of the high-level tasks, obtain one or more low-level controls and to perform the one or more low-level controls to realize the high-level task. The control agent is further configured to operate according to the master plan to transition between the plurality of high-level tasks, thereby transitioning seamlessly, based at least on context for the control agent, between operating at least partially autonomously and operating at least partially based on input from a tele-operator during execution of the master plan.
    Type: Grant
    Filed: August 17, 2018
    Date of Patent: August 29, 2023
    Assignee: SRI International
    Inventors: Bhaskar Ramamurthy, Supun Samarasekera, Thomas Low, Manish Kothari, John Peter Marcotullio, Jonathan Brookshire, Tobenna Arodiogbu, Usman Ghani
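
A toy rendering of the master-plan idea in patent 11740624: high-level tasks expand into low-level controls, and a context test decides, per task, whether controls come from autonomy or from a tele-operator queue. The task names, the context rule, and the queue mechanics are invented for this sketch:

```python
# Hypothetical master plan: each high-level task is realized by low-level
# controls, with a context-based switch between autonomy and tele-operation.

MASTER_PLAN = ["drive_to_site", "inspect_panel", "return_home"]

LOW_LEVEL = {
    "drive_to_site": ["plan_path", "follow_path"],
    "inspect_panel": ["position_arm", "capture_images"],
    "return_home":   ["plan_path", "follow_path"],
}

def needs_teleop(task, context):
    # Illustrative rule: hand fine manipulation to the human operator.
    return task == "inspect_panel" and context.get("visibility") == "poor"

def execute(plan, context, teleop_queue):
    for task in plan:                        # seamless task-to-task transitions
        source = "tele-operator" if needs_teleop(task, context) else "autonomy"
        for control in LOW_LEVEL[task]:
            cmd = teleop_queue.pop(0) if source == "tele-operator" else control
            print(f"{task}: {source} -> {cmd}")

execute(MASTER_PLAN, {"visibility": "poor"}, teleop_queue=["nudge_arm_left", "hold"])
```
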
  • Patent number: 11676296
    Abstract: Techniques for augmenting a reality captured by an image capture device are disclosed. In one example, a system includes an image capture device that generates a two-dimensional frame at a local pose. The system further includes a computation engine executing on one or more processors that queries, based on an estimated pose prior, a reference database of three-dimensional mapping information to obtain an estimated view of the three-dimensional mapping information at the estimated pose prior. The computation engine processes the estimated view at the estimated pose prior to generate semantically segmented sub-views of the estimated view. The computation engine correlates, based on at least one of the semantically segmented sub-views of the estimated view, the estimated view to the two-dimensional frame. Based on the correlation, the computation engine generates and outputs data for augmenting a reality represented in at least one frame captured by the image capture device.
    Type: Grant
    Filed: August 10, 2018
    Date of Patent: June 13, 2023
    Assignee: SRI International
    Inventors: Han-Pang Chiu, Supun Samarasekera, Rakesh Kumar, Ryan Villamil, Varun Murali, Gregory Drew Kessler
  • Publication number: 20230004797
    Abstract: A method, apparatus and system for object detection in sensor data having at least two modalities using a common embedding space includes creating first modality vector representations of features of sensor data having a first modality and second modality vector representations of features of sensor data having a second modality, projecting the first and second modality vector representations into the common embedding space such that related embedded modality vectors are closer together in the common embedding space than unrelated modality vectors, combining the projected first and second modality vector representations, and determining a similarity between the combined modality vector representations and respective embedded vector representations of features of objects in the common embedding space to identify at least one object depicted by the captured sensor data. In some instances, data manipulation of the method, apparatus and system can be guided by physics properties of a sensor and/or sensor data.
    Type: Application
    Filed: February 11, 2021
    Publication date: January 5, 2023
    Inventors: Han-Pang CHIU, Zachary SEYMOUR, Niluthpol C. MITHUN, Supun SAMARASEKERA, Rakesh KUMAR, Yi YAO
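
A toy version of the common-embedding-space idea from the entry above: two random linear maps stand in for trained per-modality encoders, the projections are combined, and object prototypes are ranked by cosine similarity. Nothing here reproduces the patented models; the dimensions and names are arbitrary:

```python
# Two modalities projected into one embedding space, then matched against
# object prototypes; random matrices stand in for trained encoders.
import numpy as np

rng = np.random.default_rng(0)
W_rgb, W_radar = rng.normal(size=(8, 16)), rng.normal(size=(8, 12))

def embed(x, W):
    z = W @ x                          # project modality into common space
    return z / np.linalg.norm(z)

rgb_feat, radar_feat = rng.normal(size=16), rng.normal(size=12)
fused = embed(rgb_feat, W_rgb) + embed(radar_feat, W_radar)  # combine projections
fused /= np.linalg.norm(fused)

prototypes = {"car": rng.normal(size=8), "tree": rng.normal(size=8)}
scores = {k: float(fused @ (v / np.linalg.norm(v))) for k, v in prototypes.items()}
print(max(scores, key=scores.get), scores)   # most similar object class
```
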
  • Publication number: 20220299592
    Abstract: A method, apparatus and system for determining change in pose of a mobile device include determining, from first ranging information received at a first receiver and a second receiver on the mobile device from a stationary node during a first time instance, distances from the stationary node to the first receiver and to the second receiver; determining, from second ranging information received at the first receiver and the second receiver from the stationary node during a second time instance, distances from the stationary node to the first receiver and to the second receiver; and determining, from the distances determined during the first time instance and the second time instance, how far and in which direction the first receiver and the second receiver moved between the first time instance and the second time instance, and thereby a change in pose of the mobile device, where the position of the stationary node is unknown.
    Type: Application
    Filed: March 15, 2022
    Publication date: September 22, 2022
    Inventors: Han-Pang Chiu, Abhinav Rajvanshi, Alex Krasner, Mikhail Sizintsev, Glenn A. Murray, Supun Samarasekera
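
A planar toy of the two-receiver ranging geometry described above. To keep it small, I add one assumption the abstract does not make: the device heading is known (e.g., from an IMU), so the unknowns reduce to the node position and the device translation, which the four range measurements constrain. The solver choice and all names are mine:

```python
# Ranges from one stationary node (unknown position) to two receivers at
# two time instances constrain both the node position and the device
# translation; solved here by nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

offsets = np.array([[0.3, 0.0], [-0.3, 0.0]])        # receivers on the device

def ranges(center, node):
    return np.linalg.norm(center + offsets - node, axis=1)

node_true, move_true = np.array([4.0, 3.0]), np.array([1.0, 0.5])
d1 = ranges(np.zeros(2), node_true)                  # first time instance
d2 = ranges(move_true, node_true)                    # second time instance

def residuals(x):                                    # x = [node_xy, move_xy]
    node, move = x[:2], x[2:]
    return np.concatenate([ranges(np.zeros(2), node) - d1,
                           ranges(move, node) - d2])

# Note: ranges alone admit a mirror solution across the receiver baseline;
# the initial guess breaks that ambiguity in this toy.
sol = least_squares(residuals, x0=np.array([1.0, 1.0, 0.0, 0.0]))
print("estimated motion:", sol.x[2:], "true motion:", move_true)
```
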
  • Patent number: 11423586
    Abstract: Methods and apparatuses for tracking objects comprise one or more optical sensors for capturing one or more images of a scene, wherein the one or more optical sensors capture a wide field of view and a corresponding narrow field of view for the one or more images of the scene; a localization module, coupled to the one or more optical sensors, for determining the location of the apparatus and determining the location of one or more objects in the one or more images based on the location of the apparatus; and an augmented reality module, coupled to the localization module, for enhancing a view of the scene on a display based on the determined location of the one or more objects.
    Type: Grant
    Filed: January 25, 2021
    Date of Patent: August 23, 2022
    Assignee: SRI International
    Inventors: Supun Samarasekera, Taragay Oskiper, Rakesh Kumar, Mikhail Sizintsev, Vlad Branzoi
  • Patent number: 11397462
    Abstract: A computing system includes a vision-based user interface platform to, among other things, analyze multi-modal user interactions, semantically correlate stored knowledge with visual features of a scene depicted in a video, determine relationships between different features of the scene, and selectively display virtual elements on the video depiction of the scene. The analysis of user interactions can be used to filter the information retrieval and correlating of the visual features with the stored knowledge.
    Type: Grant
    Filed: October 8, 2015
    Date of Patent: July 26, 2022
    Assignee: SRI International
    Inventors: Jayakrishnan Eledath, Supun Samarasekera, Harpreet S. Sawhney, Rakesh Kumar, Mayank Bansal, Girish Acharya, Michael John Wolverton, Aaron Spaulding, Ron Krakower
  • Publication number: 20220222824
    Abstract: A method, machine readable medium and system for semantic segmentation of 3D point cloud data includes determining ground data points of the 3D point cloud data, categorizing non-ground data points relative to a ground surface determined from the ground data points to determine legitimate non-ground data points, segmenting the determined legitimate non-ground and ground data points based on a set of common features, applying logical rules, incorporated within a machine learning system, to a data structure of features built on the segmented non-ground and ground data points based on their spatial relationships, and constructing a 3D semantics model from the application of the logical rules to the data structure.
    Type: Application
    Filed: September 15, 2021
    Publication date: July 14, 2022
    Inventors: Anil Usumezbas, Bogdan Calin Mihai Matei, Rakesh Kumar, Supun Samarasekera
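
A skeletal version of the ground/non-ground pipeline in publication 20220222824: estimate a ground surface, filter to legitimate non-ground points, group points by a crude spatial feature, and label groups with a toy logical rule on height. The synthetic cloud, the thresholds, and the rules are all illustrative:

```python
# Synthetic point cloud: a ground plane plus two object blobs; segment
# ground vs. non-ground, then apply a height rule per spatial cluster.
import numpy as np

rng = np.random.default_rng(1)
cloud = np.vstack([
    np.column_stack([rng.uniform(0, 10, 200), rng.uniform(0, 10, 200),
                     rng.normal(0, 0.05, 200)]),     # ground points
    rng.normal([2, 2, 1.5], 0.3, size=(50, 3)),      # low blob (e.g., vehicle)
    rng.normal([7, 7, 5.0], 0.5, size=(50, 3)),      # tall blob (e.g., tree)
])

ground_z = np.median(cloud[:, 2][cloud[:, 2] < 0.5])  # crude ground estimate
is_ground = np.abs(cloud[:, 2] - ground_z) < 0.3
nonground = cloud[~is_ground]
legit = nonground[nonground[:, 2] < 50.0]             # drop implausible outliers

# one-feature "segmentation" + rule-based labels on height above ground
for center in ([2, 2], [7, 7]):
    seg = legit[np.linalg.norm(legit[:, :2] - center, axis=1) < 1.5]
    height = seg[:, 2].max() - ground_z
    label = "vegetation" if height > 3.0 else "vehicle"  # toy logical rule
    print(center, f"height={height:.1f}m", label)
```
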
  • Publication number: 20220198813
    Abstract: A method, apparatus and system for efficient navigation in a navigation space includes determining semantic features and respective 3D positional information of the semantic features for scenes of captured image content and depth-related content in the navigation space, combining information of the determined semantic features of the scene with respective 3D positional information using neural networks to determine an intermediate representation of the scene which provides information regarding positions of the semantic features in the scene and spatial relationships among the semantic features, and using the information regarding the positions of the semantic features and the spatial relationships among the semantic features in a machine learning process to provide at least one of a navigation path in the navigation space, a model of the navigation space, and an explanation of a navigation action by a single, mobile agent in the navigation space.
    Type: Application
    Filed: December 17, 2021
    Publication date: June 23, 2022
    Inventors: Han-Pang CHIU, Zachary SEYMOUR, Niluthpol C. MITHUN, Supun SAMARASEKERA, Rakesh KUMAR, Kowshik THOPALLI, Muhammad Zubair IRSHAD
  • Patent number: 11361470
    Abstract: A method, apparatus and system for visual localization includes extracting appearance features of an image, extracting semantic features of the image, fusing the extracted appearance features and semantic features, pooling and projecting the fused features into a semantic embedding space having been trained using fused appearance and semantic features of images having known locations, computing a similarity measure between the projected fused features and embedded, fused appearance and semantic features of images, and predicting a location of the image associated with the projected, fused features. An image can include at least one image from a plurality of modalities such as a Light Detection and Ranging image, a Radio Detection and Ranging image, or a 3D Computer Aided Design modeling image, and an image from a different sensor, such as an RGB image sensor, captured from a same geo-location, which is used to determine the semantic features of the multi-modal image.
    Type: Grant
    Filed: October 29, 2019
    Date of Patent: June 14, 2022
    Assignee: SRI International
    Inventors: Han-Pang Chiu, Zachary Seymour, Karan Sikka, Supun Samarasekera, Rakesh Kumar, Niluthpol Mithun
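
The fuse-pool-project-retrieve loop of patent 11361470 in miniature: a random projection stands in for the trained embedding network, reference embeddings carry known locations, and the query location is predicted from the nearest embedding by cosine similarity. The dimensions, database, and noise model are invented:

```python
# Fuse appearance + semantic descriptors, project into an embedding
# space, and retrieve the nearest geo-tagged reference.
import numpy as np

rng = np.random.default_rng(2)
P = rng.normal(size=(32, 64))                        # stand-in trained projection

def embed(appearance, semantics):
    fused = np.concatenate([appearance, semantics])  # fuse the two feature types
    z = P @ fused                                    # pool/project into the space
    return z / np.linalg.norm(z)

# geo-tagged reference features (random stand-ins for real descriptors)
refs = {loc: (rng.normal(size=48), rng.normal(size=16))
        for loc in [(40.4, -79.9), (37.8, -122.4), (51.5, -0.1)]}
db = {loc: embed(a, s) for loc, (a, s) in refs.items()}

# query: a noisy view of the second reference location
a, s = refs[(37.8, -122.4)]
query = embed(a + 0.1 * rng.normal(size=48), s)

pred = max(db, key=lambda loc: float(query @ db[loc]))  # cosine similarity
print("predicted location:", pred)
```
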
  • Patent number: 11313684
    Abstract: During GPS-denied/restricted navigation, images proximate a platform device are captured using a camera, and corresponding motion measurements of the platform device are captured using an IMU device. Features of a current frame of the images captured are extracted. Extracted features are matched and feature information between consecutive frames is tracked. The extracted features are compared to previously stored, geo-referenced visual features from a plurality of platform devices. If one of the extracted features does not match a geo-referenced visual feature, a pose is determined for the platform device using IMU measurements propagated from a previous pose and relative motion information between consecutive frames, which is determined using the tracked feature information.
    Type: Grant
    Filed: March 28, 2017
    Date of Patent: April 26, 2022
    Assignee: SRI International
    Inventors: Han-Pang Chiu, Supun Samarasekera, Rakesh Kumar, Mikhail Sizintsev, Xun Zhou, Philip Miller, Glenn Murray
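
A control-flow sketch of the pose update logic in patent 11313684: take an absolute fix when extracted features match the geo-referenced database, otherwise propagate the previous pose from IMU and frame-to-frame visual motion. The 2D pose, the toy matcher, and the naive averaging fusion are simplifications of mine:

```python
# GPS-denied pose update: geo-referenced match -> absolute fix;
# otherwise dead-reckon from IMU + visual frame-to-frame motion.

GEO_DB = {"water_tower": (120.0, 45.0)}              # geo-referenced features

def match_geo(features):
    hits = [f for f in features if f in GEO_DB]
    return GEO_DB[hits[0]] if hits else None

def update_pose(prev_pose, features, imu_delta, visual_delta):
    fix = match_geo(features)
    if fix is not None:                              # matched: absolute correction
        return fix
    # no match: naively average the IMU and visual motion estimates
    dx = 0.5 * (imu_delta[0] + visual_delta[0])
    dy = 0.5 * (imu_delta[1] + visual_delta[1])
    return (prev_pose[0] + dx, prev_pose[1] + dy)

pose = (0.0, 0.0)
frames = [
    (["bush", "rock"], (1.0, 0.2), (0.9, 0.1)),      # no geo match -> propagate
    (["water_tower"],  (1.1, 0.1), (1.0, 0.2)),      # geo match -> absolute fix
]
for features, imu_delta, visual_delta in frames:
    pose = update_pose(pose, features, imu_delta, visual_delta)
    print(pose)
```
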
  • Publication number: 20220108455
    Abstract: A method, machine readable medium and system for RGBD semantic segmentation of video data includes determining semantic segmentation data and depth segmentation data for less than all classes for images of each frame of a first video, determining semantic segmentation data and depth segmentation data for images of each key frame of a second video including a synchronous combination of respective frames of the RGB video and the depth-aware video in parallel to the determination of the semantic segmentation data and the depth segmentation data for each frame of the first video, temporally and geometrically aligning respective frames of the first video and the second video, and predicting semantic segmentation data and depth segmentation data for images of a subsequent frame of the first video based on the determination of the semantic segmentation data and depth segmentation data for images of a key frame of the second video.
    Type: Application
    Filed: October 7, 2021
    Publication date: April 7, 2022
    Inventors: Han-Pang CHIU, Junjiao TIAN, Zachary SEYMOUR, Niluthpol C. MITHUN, Alex KRASNER, Mikhail SIZINTSEV, Abhinav RAJVANSHI, Kevin KAIGHN, Philip MILLER, Ryan VILLAMIL, Supun SAMARASEKERA
  • Patent number: 11270426
    Abstract: Computer aided inspection systems (CAIS) and methods for inspection, error analysis and comparison of structures are presented herein. In some embodiments, a CAIS may include a SLAM system configured to determine real-world global localization information of a user in relation to a structure being inspected using information obtained from a first sensor package; a model alignment system configured to: use the determined global localization information to index into a corresponding location in a 3D computer model of the structure being inspected, and align observations and/or information obtained from the first sensor package to the corresponding local area of the 3D computer model of the structure; a second sensor package configured to obtain fine level measurements of the structure; and a model recognition system configured to compare the fine level measurements and information obtained about the structure from the second sensor package to the 3D computer model.
    Type: Grant
    Filed: May 14, 2019
    Date of Patent: March 8, 2022
    Assignee: SRI International
    Inventors: Garbis Salgian, Bogdan C. Matei, Taragay Oskiper, Mikhail Sizintsev, Rakesh Kumar, Supun Samarasekera
  • Patent number: 11263443
    Abstract: A method, apparatus and system for human skeleton pose estimation includes synchronously capturing images of a human moving through an area from a plurality of different points of view, for each of the plurality of captured images, determining a bounding box that bounds the human in the captured image and identifying pixel locations of the bounding box in the image, for each of the plurality of captured images, determining 2D and single-view 3D skeletons from the pixel locations of the bounding box, determining a first, multi-view 3D skeleton using a combination of the 2D and single-view 3D skeletons, and optimizing the first, multi-view 3D skeleton to determine a final 3D skeleton pose for the human. The method, apparatus and system can further include illuminating the area with structured light during the capturing of the images of the human moving through the area.
    Type: Grant
    Filed: May 7, 2020
    Date of Patent: March 1, 2022
    Assignee: SRI International
    Inventors: Jonathan D. Brookshire, Supun Samarasekera, Kshitij Singh Minhas
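
The multi-view combination step in patent 11263443 reduces, per joint, to triangulating 2D detections from calibrated views; below is a linear (DLT) triangulation of a single joint. The camera models are toy pinholes with identity rotation, and a real system would repeat this per joint and then optimize the full skeleton:

```python
# Linear (DLT) triangulation of one skeleton joint from three views.
import numpy as np

def projection(cam_pos):
    # toy pinhole with identity rotation: P = [I | -position]
    return np.hstack([np.eye(3), -np.asarray(cam_pos, float).reshape(3, 1)])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(Ps, xs):
    A = []
    for P, (u, v) in zip(Ps, xs):
        A.append(u * P[2] - P[0])                 # standard DLT constraints
        A.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]                                    # least-squares null vector
    return X[:3] / X[3]

rng = np.random.default_rng(3)
joint_true = np.array([0.2, 1.5, 4.0])            # e.g., a wrist joint
cams = [projection(c) for c in [(0, 0, 0), (1, 0, 0), (0, 1, 0)]]
detections = [project(P, joint_true) + 0.001 * rng.normal(size=2) for P in cams]
print("recovered joint:", triangulate(cams, detections))
```
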
  • Publication number: 20210142530
    Abstract: Methods and apparatuses for tracking objects comprise one or more optical sensors for capturing one or more images of a scene, wherein the one or more optical sensors capture a wide field of view and a corresponding narrow field of view for the one or more images of the scene; a localization module, coupled to the one or more optical sensors, for determining the location of the apparatus and determining the location of one or more objects in the one or more images based on the location of the apparatus; and an augmented reality module, coupled to the localization module, for enhancing a view of the scene on a display based on the determined location of the one or more objects.
    Type: Application
    Filed: January 25, 2021
    Publication date: May 13, 2021
    Inventors: Supun Samarasekera, Taragay Oskiper, Rakesh Kumar, Mikhail Sizintsev, Vlad Branzoi
  • Patent number: 10991156
    Abstract: A method for providing a real time, three-dimensional (3D) navigational map for platforms includes integrating at least two sources of multi-modal and multi-dimensional platform sensor information to produce a more accurate 3D navigational map. The method receives both a 3D point cloud from a first sensor on a platform with a first modality and a 2D image from a second sensor on the platform with a second modality different from the first modality, generates a semantic label and a semantic label uncertainty associated with a first space point in the 3D point cloud, generates a semantic label and a semantic label uncertainty associated with a second space point in the 2D image, and fuses the first space semantic label and the first space semantic uncertainty with the second space semantic label and the second space semantic label uncertainty to create fused 3D spatial information to enhance the 3D navigational map.
    Type: Grant
    Filed: July 26, 2019
    Date of Patent: April 27, 2021
    Assignee: SRI International
    Inventors: Han-Pang Chiu, Supun Samarasekera, Rakesh Kumar, Bogdan C. Matei, Bhaskar Ramamurthy
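
The fusion step of patent 10991156 can be pictured as combining per-point class distributions weighted by their uncertainties; the inverse-uncertainty weighting below is my illustration of the idea, not the patented rule:

```python
# Fuse class distributions from two modalities, weighting each source
# by the inverse of its reported uncertainty, then renormalize.
import numpy as np

classes = ["road", "vegetation", "building"]

def fuse(lidar_probs, lidar_unc, image_probs, image_unc):
    w_l, w_i = 1.0 / lidar_unc, 1.0 / image_unc       # confidence weights
    fused = w_l * np.asarray(lidar_probs) + w_i * np.asarray(image_probs)
    return fused / fused.sum()

# LiDAR is unsure (flat distribution, high uncertainty); camera is confident
fused = fuse([0.4, 0.3, 0.3], lidar_unc=0.8,
             image_probs=[0.1, 0.8, 0.1], image_unc=0.2)
print(dict(zip(classes, fused.round(3))))             # vegetation should win
```
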
  • Patent number: 10929713
    Abstract: Techniques are disclosed for improving navigation accuracy for a mobile platform. In one example, a navigation system comprises an image sensor that generates a plurality of images, each image comprising one or more features. A computation engine executing on one or more processors of the navigation system processes each image of the plurality of images to determine a semantic class of each feature of the one or more features of the image. The computation engine determines, for each feature of the one or more features of each image and based on the semantic class of the feature, whether to include the feature as a constraint in a navigation inference engine. The computation engine generates, based at least on features of the one or more features included as constraints in the navigation inference engine, navigation information. The computation engine outputs the navigation information to improve navigation accuracy for the mobile platform.
    Type: Grant
    Filed: October 17, 2018
    Date of Patent: February 23, 2021
    Assignee: SRI International
    Inventors: Han-Pang Chiu, Supun Samarasekera, Rakesh Kumar, Varun Murali
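
A bare-bones version of the constraint-selection idea in patent 10929713: features whose semantic class is likely static are kept as constraints for the navigation inference engine, while features on dynamic classes are dropped. The class lists and feature records are illustrative:

```python
# Keep features on static semantic classes as navigation constraints;
# exclude features on classes that are likely to move between frames.

STATIC_CLASSES = {"building", "road", "pole"}          # reliable landmarks
DYNAMIC_CLASSES = {"car", "person"}                    # likely to move

def select_constraints(features):
    return [f for f in features if f["class"] in STATIC_CLASSES]

frame_features = [
    {"id": 1, "class": "building", "px": (40, 80)},
    {"id": 2, "class": "person",   "px": (200, 150)},  # excluded: may move
    {"id": 3, "class": "pole",     "px": (320, 90)},
]
constraints = select_constraints(frame_features)
print([f["id"] for f in constraints])                  # -> [1, 3]
```
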