Patents by Inventor Niluthpol C. Mithun

Niluthpol C. Mithun has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11960994
    Abstract: A method, apparatus and system for artificial intelligence-based HDRL planning and control for coordinating a team of platforms includes implementing a global planning layer for determining a collective goal and determining, by applying at least one machine learning process, at least one respective platform goal to be achieved by at least one platform, implementing a platform planning layer for determining, by applying at least one machine learning process, at least one respective action to be performed by the at least one platform to achieve the respective platform goal, and implementing a platform control layer for determining at least one respective function to be performed by the at least one platform. In the method, apparatus and system, although information is shared between at least two of the layers, the global planning layer, the platform planning layer, and the platform control layer are trained separately.
    Type: Grant
    Filed: January 18, 2021
    Date of Patent: April 16, 2024
    Assignee: SRI International
    Inventors: Han-Pang Chiu, Jonathan D. Brookshire, Zachary Seymour, Niluthpol C. Mithun, Supun Samarasekera, Rakesh Kumar, Qiao Wang
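
The three-layer structure described in the abstract above (a global planning layer that produces per-platform goals, a platform planning layer that turns a goal into actions, and a platform control layer that turns actions into low-level functions) can be pictured with a short sketch. The sketch below is a minimal, hedged illustration assuming small PyTorch MLPs with made-up class names and dimensions; it is not the patented implementation.

```python
# Minimal sketch of a three-layer planning/control stack: goals flow downward
# from the global planner to per-platform planners and controllers. All names
# and dimensions are illustrative assumptions.
import torch
import torch.nn as nn


class GlobalPlanner(nn.Module):
    """Global planning layer: team observation -> one goal vector per platform."""

    def __init__(self, team_obs_dim: int, goal_dim: int, n_platforms: int):
        super().__init__()
        self.n_platforms, self.goal_dim = n_platforms, goal_dim
        self.net = nn.Sequential(
            nn.Linear(team_obs_dim, 128), nn.ReLU(),
            nn.Linear(128, n_platforms * goal_dim),
        )

    def forward(self, team_obs: torch.Tensor) -> torch.Tensor:
        return self.net(team_obs).view(-1, self.n_platforms, self.goal_dim)


class PlatformPlanner(nn.Module):
    """Platform planning layer: (platform observation, platform goal) -> action."""

    def __init__(self, obs_dim: int, goal_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + goal_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, obs: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, goal], dim=-1))


class PlatformController(nn.Module):
    """Platform control layer: high-level action -> low-level control command."""

    def __init__(self, action_dim: int, control_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(action_dim, 64), nn.ReLU(),
            nn.Linear(64, control_dim),
        )

    def forward(self, action: torch.Tensor) -> torch.Tensor:
        return self.net(action)


if __name__ == "__main__":
    planner = GlobalPlanner(team_obs_dim=32, goal_dim=8, n_platforms=3)
    platform = PlatformPlanner(obs_dim=16, goal_dim=8, action_dim=4)
    control = PlatformController(action_dim=4, control_dim=2)

    team_obs = torch.randn(1, 32)            # shared team-level observation
    goals = planner(team_obs)                # (1, 3, 8): one goal per platform
    obs_0 = torch.randn(1, 16)               # observation for platform 0
    action_0 = platform(obs_0, goals[:, 0])  # platform-level action
    command_0 = control(action_0)            # low-level control output
    print(command_0.shape)                   # torch.Size([1, 2])
```

Keeping the layers as separate modules is what makes separate training possible; a sketch of that aspect appears under the related application publication further down this listing.
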
  • Publication number: 20240096093
    Abstract: A method for AI-driven augmented reality mentoring includes determining semantic features of objects in at least one captured scene, determining 3D positional information of the objects, combining information regarding the identified objects with respective 3D positional information to determine at least one intermediate representation, completing the determined intermediate representation using machine learning to include additional objects or positional information of the objects not identifiable from the at least one captured scene, determining at least one task to be performed and determining steps to be performed using a knowledge database, generating at least one visual representation relating to the determined steps for performing the at least one task, determining a correct position for displaying the at least one visual representation, and displaying the at least one visual representation on the see-through display in the determined correct position as an augmented overlay to the view of the at least one captured scene.
    Type: Application
    Filed: September 19, 2023
    Publication date: March 21, 2024
    Inventors: Han-Pang CHIU, Abhinav RAJVANSHI, Niluthpol C. MITHUN, Zachary SEYMOUR, Supun SAMARASEKERA, Rakesh KUMAR, Winter Joseph Guerra
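
A hedged way to picture the pipeline in the abstract above is a skeleton that indexes detected objects with 3D positions into an intermediate representation, looks up task steps in a knowledge base, and anchors each step's visual cue at the position of the relevant object. Everything below (the dataclass names, the toy knowledge base, the placement rule) is an illustrative assumption, not the published method.

```python
# Skeleton of an AR mentoring pipeline: detections -> intermediate scene
# representation -> task steps from a knowledge base -> 3D-anchored overlay cues.
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class SceneObject:
    label: str                            # semantic class, e.g. "valve"
    position: Tuple[float, float, float]  # 3D position in the camera/world frame


@dataclass
class OverlayCue:
    text: str
    anchor: Tuple[float, float, float]    # where to render the cue in 3D


# Toy knowledge base: task name -> ordered steps, each tied to an object label.
KNOWLEDGE_BASE: Dict[str, List[Tuple[str, str]]] = {
    "replace_filter": [
        ("valve", "Close the inlet valve."),
        ("housing", "Unscrew the filter housing."),
        ("filter", "Insert the new filter."),
    ],
}


def build_intermediate_representation(objects: List[SceneObject]) -> Dict[str, SceneObject]:
    """Index detected objects by label; a fuller system would also encode relations."""
    return {obj.label: obj for obj in objects}


def plan_overlays(task: str, scene: Dict[str, SceneObject]) -> List[OverlayCue]:
    """Pair each task step with a 3D anchor taken from the relevant scene object."""
    cues = []
    for label, instruction in KNOWLEDGE_BASE[task]:
        if label in scene:
            cues.append(OverlayCue(instruction, scene[label].position))
    return cues


if __name__ == "__main__":
    detections = [SceneObject("valve", (0.4, 0.1, 1.2)),
                  SceneObject("housing", (0.5, 0.0, 1.1))]
    scene = build_intermediate_representation(detections)
    for cue in plan_overlays("replace_filter", scene):
        print(cue.anchor, cue.text)   # the "filter" step is skipped: not yet visible
```
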
  • Publication number: 20230394294
    Abstract: A method, apparatus and system for artificial intelligence-based HDRL planning and control for coordinating a team of platforms includes implementing a global planning layer for determining a collective goal and determining, by applying at least one machine learning process, at least one respective platform goal to be achieved by at least one platform, implementing a platform planning layer for determining, by applying at least one machine learning process, at least one respective action to be performed by the at least one platform to achieve the respective platform goal, and implementing a platform control layer for determining at least one respective function to be performed by the at least one platform. In the method, apparatus and system, although information is shared between at least two of the layers, the global planning layer, the platform planning layer, and the platform control layer are trained separately.
    Type: Application
    Filed: January 18, 2021
    Publication date: December 7, 2023
    Inventors: Han-Pang Chiu, Jonathan D. Brookshire, Zachary Seymour, Niluthpol C. Mithun, Supun Samarasekera, Rakesh Kumar, Qiao Wang
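
This publication carries the same abstract as granted patent 11960994 above. Rather than repeat the architecture sketch, the snippet below illustrates the separate-training aspect: each layer gets its own optimizer and loss, and outputs passed downward are detached so no gradients cross layer boundaries. The stand-in linear layers, dimensions, and placeholder losses are assumptions for illustration only.

```python
# Hedged illustration of training three layers separately while still sharing
# information between them (one platform shown for brevity).
import torch
import torch.nn as nn

global_planner = nn.Linear(32, 8)         # team observation -> platform goal
platform_planner = nn.Linear(16 + 8, 4)   # (platform obs, goal) -> action
platform_control = nn.Linear(4, 2)        # action -> control command

opt_g = torch.optim.Adam(global_planner.parameters(), lr=1e-3)
opt_p = torch.optim.Adam(platform_planner.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(platform_control.parameters(), lr=1e-3)

team_obs, platform_obs = torch.randn(1, 32), torch.randn(1, 16)

# 1) Global planning layer: its own placeholder loss, its own optimizer step.
goal = global_planner(team_obs)
loss_g = goal.pow(2).mean()
opt_g.zero_grad()
loss_g.backward()
opt_g.step()

# 2) Platform planning layer: consumes the goal, detached so no gradient
#    flows back into the global planner (the layers are trained separately).
action = platform_planner(torch.cat([platform_obs, goal.detach()], dim=-1))
loss_p = action.pow(2).mean()
opt_p.zero_grad()
loss_p.backward()
opt_p.step()

# 3) Platform control layer: consumes the action, again detached.
command = platform_control(action.detach())
loss_c = command.pow(2).mean()
opt_c.zero_grad()
loss_c.backward()
opt_c.step()
```
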
  • Publication number: 20230004797
    Abstract: A method, apparatus and system for object detection in sensor data having at least two modalities using a common embedding space includes creating first modality vector representations of features of sensor data having a first modality and second modality vector representations of features of sensor data having a second modality, projecting the first and second modality vector representations into the common embedding space such that related embedded modality vectors are closer together in the common embedding space than unrelated modality vectors, combining the projected first and second modality vector representations, and determining a similarity between the combined modality vector representations and respective embedded vector representations of features of objects in the common embedding space to identify at least one object depicted by the captured sensor data. In some instances, data manipulation of the method, apparatus and system can be guided by physics properties of a sensor and/or sensor data.
    Type: Application
    Filed: February 11, 2021
    Publication date: January 5, 2023
    Inventors: Han-Pang CHIU, Zachary SEYMOUR, Niluthpol C. MITHUN, Supun SAMARASEKERA, Rakesh KUMAR, Yi YAO
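
The common-embedding-space idea in the abstract above can be sketched as two projection heads, one per modality, whose outputs are normalized, fused, and compared by cosine similarity against embedded object prototypes. The sketch below assumes made-up feature dimensions and a simple additive fusion; the contrastive training that would pull related embeddings closer together is omitted.

```python
# Two sensor modalities projected into one embedding space, combined, and
# matched against embedded object prototypes by cosine similarity.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CommonEmbedding(nn.Module):
    def __init__(self, dim_a: int, dim_b: int, embed_dim: int, n_objects: int):
        super().__init__()
        self.proj_a = nn.Linear(dim_a, embed_dim)   # first-modality projection
        self.proj_b = nn.Linear(dim_b, embed_dim)   # second-modality projection
        # One embedded prototype vector per known object class.
        self.object_prototypes = nn.Parameter(torch.randn(n_objects, embed_dim))

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        za = F.normalize(self.proj_a(feat_a), dim=-1)
        zb = F.normalize(self.proj_b(feat_b), dim=-1)
        combined = F.normalize(za + zb, dim=-1)     # simple fusion of the two views
        protos = F.normalize(self.object_prototypes, dim=-1)
        return combined @ protos.t()                # cosine similarity per object


if __name__ == "__main__":
    model = CommonEmbedding(dim_a=256, dim_b=64, embed_dim=128, n_objects=10)
    sims = model(torch.randn(4, 256), torch.randn(4, 64))
    print(sims.argmax(dim=-1))   # most similar object class for each sample
```
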
  • Publication number: 20220198813
    Abstract: A method, apparatus and system for efficient navigation in a navigation space includes determining semantic features and respective 3D positional information of the semantic features for scenes of captured image content and depth-related content in the navigation space, combining information of the determined semantic features of the scene with respective 3D positional information using neural networks to determine an intermediate representation of the scene which provides information regarding positions of the semantic features in the scene and spatial relationships among the semantic features, and using the information regarding the positions of the semantic features and the spatial relationships among the semantic features in a machine learning process to provide at least one of a navigation path in the navigation space, a model of the navigation space, and an explanation of a navigation action by a single, mobile agent in the navigation space.
    Type: Application
    Filed: December 17, 2021
    Publication date: June 23, 2022
    Inventors: Han-Pang CHIU, Zachary SEYMOUR, Niluthpol C. MITHUN, Supun SAMARASEKERA, Rakesh KUMAR, Kowshik THOPALLI, Muhammad Zubair IRSHAD
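
One hedged reading of the intermediate representation in the abstract above is a small spatial graph: nodes are semantic features with 3D positions, and edges carry pairwise distances and directions that a downstream learned policy could consume. The sketch below shows only the graph construction under those assumptions; the navigation networks themselves are out of scope.

```python
# Build a toy spatial graph from semantic features with 3D positions:
# nodes = features, edges = pairwise (distance, unit direction) relations.
from dataclasses import dataclass
from itertools import combinations
from typing import List, Tuple
import math


@dataclass
class SemanticFeature:
    label: str
    position: Tuple[float, float, float]  # 3D position in the navigation space


def build_spatial_graph(features: List[SemanticFeature]):
    """Return nodes plus pairwise (distance, unit direction) spatial relations."""
    nodes = {i: f for i, f in enumerate(features)}
    edges = {}
    for i, j in combinations(nodes, 2):
        pi, pj = nodes[i].position, nodes[j].position
        delta = tuple(b - a for a, b in zip(pi, pj))
        dist = math.sqrt(sum(d * d for d in delta))
        direction = tuple(d / dist for d in delta) if dist > 0 else (0.0, 0.0, 0.0)
        edges[(i, j)] = (dist, direction)
    return nodes, edges


if __name__ == "__main__":
    scene = [SemanticFeature("door", (2.0, 0.0, 0.0)),
             SemanticFeature("table", (0.0, 1.5, 0.0)),
             SemanticFeature("chair", (0.5, 1.0, 0.0))]
    nodes, edges = build_spatial_graph(scene)
    for (i, j), (dist, _) in edges.items():
        print(f"{nodes[i].label} <-> {nodes[j].label}: {dist:.2f} m")
```
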
  • Publication number: 20220108455
    Abstract: A method, machine readable medium and system for RGBD semantic segmentation of video data includes determining semantic segmentation data and depth segmentation data for less than all classes for images of each frame of a first video, determining semantic segmentation data and depth segmentation data for images of each key frame of a second video including a synchronous combination of respective frames of the RGB video and the depth-aware video in parallel to the determination of the semantic segmentation data and the depth segmentation data for each frame of the first video, temporally and geometrically aligning respective frames of the first video and the second video, and predicting semantic segmentation data and depth segmentation data for images of a subsequent frame of the first video based on the determination of the semantic segmentation data and depth segmentation data for images of a key frame of the second video.
    Type: Application
    Filed: October 7, 2021
    Publication date: April 7, 2022
    Inventors: Han-Pang CHIU, Junjiao TIAN, Zachary SEYMOUR, Niluthpol C. MITHUN, Alex KRASNER, Mikhail SIZINTSEV, Abhinav RAJVANSHI, Kevin KAIGHN, Philip MILLER, Ryan VILLAMIL, Supun SAMARASEKERA
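
The key-frame scheme in the abstract above can be pictured as caching a full (and expensive) segmentation on key frames and predicting labels for the in-between frames from that cache after alignment. In the sketch below the segmentation model is a stub and alignment is reduced to a 2D pixel shift; both are stand-in assumptions, not the published method.

```python
# Key-frame segmentation with cheap propagation to the frames in between.
import numpy as np


def full_segmentation(rgbd_frame: np.ndarray) -> np.ndarray:
    """Placeholder for the expensive key-frame model (all classes)."""
    h, w, _ = rgbd_frame.shape
    return np.zeros((h, w), dtype=np.int64)      # stub: every pixel -> class 0


def align_and_predict(key_labels: np.ndarray, shift: tuple) -> np.ndarray:
    """Predict labels for a later frame by shifting the cached key-frame labels."""
    dy, dx = shift
    return np.roll(np.roll(key_labels, dy, axis=0), dx, axis=1)


class KeyFrameSegmenter:
    def __init__(self, key_interval: int = 10):
        self.key_interval = key_interval
        self.cached_labels = None

    def process(self, frame_idx: int, rgbd_frame: np.ndarray, shift=(0, 0)) -> np.ndarray:
        if frame_idx % self.key_interval == 0 or self.cached_labels is None:
            self.cached_labels = full_segmentation(rgbd_frame)   # expensive path
            return self.cached_labels
        return align_and_predict(self.cached_labels, shift)      # cheap path


if __name__ == "__main__":
    seg = KeyFrameSegmenter(key_interval=5)
    for t in range(7):
        labels = seg.process(t, np.random.rand(120, 160, 4), shift=(1, 2))
    print(labels.shape)   # (120, 160)
```
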
  • Publication number: 20220092366
    Abstract: Techniques are disclosed for an image understanding system comprising a machine learning system that applies a machine learning model to perform image understanding of each pixel of an image, the pixel labeled with a class, to determine an estimated class to which the pixel belongs. The machine learning system determines, based on the classes with which the pixels are labeled and the estimated classes, a cross entropy loss of each class. The machine learning system determines, based on one or more region metrics, a weight for each class and applies the weight to the cross entropy loss of each class to obtain a weighted cross entropy loss. The machine learning system updates the machine learning model with the weighted cross entropy loss to improve a performance metric of the machine learning model for each class.
    Type: Application
    Filed: September 17, 2021
    Publication date: March 24, 2022
    Inventors: Han-Pang Chiu, Junjiao Tian, Zachary Seymour, Niluthpol C. Mithun
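
A minimal sketch of the region-metric-weighted cross entropy described above, assuming the region metric is simply each class's labeled pixel count (so small or rare regions receive larger weights); the publication's actual metrics and weighting scheme may differ.

```python
# Per-class weights derived from a stand-in "region metric" (labeled pixel
# count per class) applied to the cross entropy loss of a segmentation model.
import torch
import torch.nn.functional as F


def region_weighted_ce(logits: torch.Tensor, labels: torch.Tensor, n_classes: int) -> torch.Tensor:
    """logits: (B, C, H, W); labels: (B, H, W) with integer class ids."""
    # Per-class pixel counts act as the region metric.
    counts = torch.bincount(labels.flatten(), minlength=n_classes).float()
    weights = counts.sum() / (counts + 1.0)          # smaller regions weigh more
    weights = weights / weights.sum() * n_classes    # normalize around 1.0
    # Weighted cross entropy over all pixels, one weight per class.
    return F.cross_entropy(logits, labels, weight=weights)


if __name__ == "__main__":
    logits = torch.randn(2, 5, 32, 32, requires_grad=True)
    labels = torch.randint(0, 5, (2, 32, 32))
    loss = region_weighted_ce(logits, labels, n_classes=5)
    loss.backward()                                   # gradients flow to the model
    print(float(loss))
```
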