Patents by Inventor Han-Pang Chiu

Han-Pang Chiu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12361732
    Abstract: A method, apparatus and system for efficient navigation in a navigation space includes determining semantic features and respective 3D positional information of the semantic features for scenes of captured image content and depth-related content in the navigation space, combining information of the determined semantic features of the scene with respective 3D positional information using neural networks to determine an intermediate representation of the scene which provides information regarding positions of the semantic features in the scene and spatial relationships among the semantic features, and using the information regarding the positions of the semantic features and the spatial relationships among the semantic features in a machine learning process to provide at least one of a navigation path in the navigation space, a model of the navigation space, and an explanation of a navigation action by a single mobile agent in the navigation space.
    Type: Grant
    Filed: December 17, 2021
    Date of Patent: July 15, 2025
    Assignee: SRI International
    Inventors: Han-Pang Chiu, Zachary Seymour, Niluthpol C. Mithun, Supun Samarasekera, Rakesh Kumar, Kowshik Thopalli, Muhammad Zubair Irshad
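The intermediate scene representation described in the abstract above — semantic features tied to 3D positions, plus the spatial relationships among them — can be sketched minimally. This is an illustrative stand-in, not the patented neural-network pipeline: the `build_scene_representation` helper and its dictionary layout are hypothetical, and pairwise offsets/distances stand in for learned relational features.

```python
import numpy as np

def build_scene_representation(labels, positions):
    """Combine per-object semantic labels with 3D positions into a simple
    intermediate scene representation: per-object nodes plus pairwise
    spatial relations (offset vectors and Euclidean distances)."""
    positions = np.asarray(positions, dtype=float)
    # Pairwise spatial relationships among the detected semantic features.
    offsets = positions[None, :, :] - positions[:, None, :]   # (N, N, 3)
    distances = np.linalg.norm(offsets, axis=-1)              # (N, N)
    return {
        "nodes": [{"label": lbl, "position": pos}
                  for lbl, pos in zip(labels, positions)],
        "offsets": offsets,
        "distances": distances,
    }

scene = build_scene_representation(
    ["door", "table", "chair"],
    [[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [2.0, 1.0, 0.0]],
)
```

A downstream planner could consume `scene["distances"]` directly; in the patent, neural networks produce the analogous representation from imagery and depth content.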
  • Publication number: 20250224538
    Abstract: A method and system for generating mineral potential maps (MPM) is described in embodiments consistent with the present disclosure. In some embodiments, a method for generating an MPM includes extracting features from mineral mapping data (MMD) from a plurality of data source modalities using one or more feature extraction networks; fusing the features extracted by each of the one or more feature extraction networks to produce fused multimodal features; projecting the fused multimodal features into an embedding space that is trained to classify the features' mineral deposit potential; and generating mineral potential data indicating a spatial output of mineral deposit potential.
    Type: Application
    Filed: December 13, 2024
    Publication date: July 10, 2025
    Inventors: Han-Pang CHIU, Angel Andres DARUNA, Vasily ZADOROZHNYY
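The fusion pipeline in the abstract above — per-modality feature extraction, fusion, projection into an embedding space, classification of deposit potential — can be sketched as follows. The weight matrices are random stand-ins for the trained feature-extraction and projection networks, and the two modalities and class count are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality extractors (stand-ins for trained networks).
W_geophys = rng.normal(size=(8, 4))   # geophysical-survey branch
W_geochem = rng.normal(size=(6, 4))   # geochemical-sample branch
W_project = rng.normal(size=(8, 2))   # projection into the embedding space

def extract(x, W):
    return np.tanh(x @ W)             # toy nonlinear feature extraction

def mineral_potential(geophys, geochem):
    f1 = extract(geophys, W_geophys)           # features, modality 1
    f2 = extract(geochem, W_geochem)           # features, modality 2
    fused = np.concatenate([f1, f2], axis=-1)  # fused multimodal features
    embedded = fused @ W_project               # project into embedding space
    # Softmax over (low, high) deposit-potential classes per spatial cell.
    e = np.exp(embedded - embedded.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

probs = mineral_potential(rng.normal(size=(5, 8)), rng.normal(size=(5, 6)))
```

Each row of `probs` is the spatial output for one map cell; a trained system would render these as the mineral potential map.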
  • Patent number: 12320911
    Abstract: A method, apparatus and system for determining change in pose of a mobile device include determining from first ranging information received at a first and a second receiver on the mobile device from a stationary node during a first time instance, a distance from the stationary node to the first receiver and the second receiver, determining from second ranging information received at the first receiver and the second receiver from the stationary node during a second time instance, a distance from the stationary node to the first receiver and second receiver, and determining from the determined distances during the first time instance and the second time instance, how far and in which direction the first receiver and the second receiver moved between the first time instance and the second time instance to determine a change in pose of the mobile device, where a position of the stationary node is unknown.
    Type: Grant
    Filed: March 15, 2022
    Date of Patent: June 3, 2025
    Assignee: SRI International
    Inventors: Han-Pang Chiu, Abhinav Rajvanshi, Alex Krasner, Mikhail Sizintsev, Glenn A. Murray, Supun Samarasekera
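The ranging geometry above can be illustrated in a simplified 2D, translation-only setting: with the two receivers a known baseline apart, the two ranges place the stationary node in the device frame at each time instance, and because the node does not move, its apparent displacement is the negative of the device's motion. This sketch assumes pure translation and a fixed sign choice for the two-circle intersection ambiguity; the patented method addresses the full pose-change problem.

```python
import math

def node_in_device_frame(a, b, baseline):
    """Locate the stationary node in the device frame from the two ranges.
    Receiver 1 sits at (0, 0), receiver 2 at (baseline, 0); the sign of y is
    ambiguous from ranges alone, so we assume the node stays on the +y side."""
    x = (a * a - b * b + baseline * baseline) / (2.0 * baseline)
    y = math.sqrt(max(a * a - x * x, 0.0))
    return (x, y)

def pose_change(ranges_t1, ranges_t2, baseline):
    """Device translation between the two time instances: the stationary
    node's apparent motion in the device frame, negated."""
    n1 = node_in_device_frame(*ranges_t1, baseline)
    n2 = node_in_device_frame(*ranges_t2, baseline)
    return (n1[0] - n2[0], n1[1] - n2[1])

# Simulated check: node fixed at (3, 4) in the initial device frame;
# the device then translates by (1, 0.5).
t1 = (5.0, math.sqrt(20.0))                 # ranges before the move
t2 = (math.sqrt(16.25), math.sqrt(13.25))   # ranges after the move
dx, dy = pose_change(t1, t2, baseline=1.0)
```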
  • Publication number: 20250094675
    Abstract: A method, apparatus, and system for developing an understanding of at least one perceived environment includes determining semantic features and respective positional information of the semantic features from received data related to images and respective depth-related content of the at least one perceived environment on the fly as changes in the received data occur, for each perceived environment, combining information of the determined semantic features with the respective positional information to determine a compact representation of the perceived environment which provides information regarding positions of the semantic features in the perceived environment and at least spatial relationships among the semantic features, for each of the at least one perceived environments, combining information from the determined compact representation with information stored in a foundational model to determine a respective understanding of the perceived environment, and outputting an indication of the determined respective understanding.
    Type: Application
    Filed: September 13, 2024
    Publication date: March 20, 2025
    Inventors: Han-Pang CHIU, Karan SIKKA, Louise YARNELL, Supun SAMARASEKERA, Rakesh KUMAR
  • Publication number: 20240403649
    Abstract: In an example, a system includes processing circuitry in communication with storage media. The processing circuitry is configured to execute a machine learning system including at least a first module, a second module and a third module. The machine learning system is configured to train one or more machine learning models. The first module is configured to generate augmented input data based on the streaming input data. The second module includes a machine learning model configured to perform a specific task based at least in part on the augmented input data. The third module is configured to adapt a network architecture of the one or more machine learning models based on changes in the streaming input data.
    Type: Application
    Filed: November 28, 2023
    Publication date: December 5, 2024
    Inventors: Han-Pang Chiu, Yi Yao, Zachary Seymour, Alex Krasner, Bradley J. Clymer, Michael A. Cogswell, Cecile Eliane Jeannine Mackay, Alex C. Tozzo, Tixiao Shan, Philip Miller, Chuanyong Gan, Glenn A. Murray, Richard Louis Ferranti, Uma Rajendran, Supun Samarasekera, Rakesh Kumar, James Smith
  • Publication number: 20240404072
    Abstract: A method, machine readable medium and system for RGBD semantic segmentation of video data includes determining semantic segmentation data and depth segmentation data for less than all classes for images of each frame of a first video, determining semantic segmentation data and depth segmentation data for images of each key frame of a second video including a synchronous combination of respective frames of the RGB video and the depth-aware video in parallel to the determination of the semantic segmentation data and the depth segmentation data for each frame of the first video, temporally and geometrically aligning respective frames of the first video and the second video, and predicting semantic segmentation data and depth segmentation data for images of a subsequent frame of the first video based on the determination of the semantic segmentation data and depth segmentation data for images of a key frame of the second video.
    Type: Application
    Filed: August 8, 2024
    Publication date: December 5, 2024
    Inventors: Han-Pang CHIU, Junjiao TIAN, Zachary SEYMOUR, Niluthpol C. MITHUN, Alex KRASNER, Mikhail SIZINTSEV, Abhinav RAJVANSHI, Kevin KAIGHN, Philip MILLER, Ryan VILLAMIL, Supun SAMARASEKERA
  • Publication number: 20240394506
    Abstract: A method, apparatus, and system for determining an uncertainty estimation of at least one layer of a neural network includes identifying a neural network to be analyzed, representing values of each layer of the neural network as respective variable nodes in a graphical representation of the neural network, and modeling connections among each of the layers of the neural network as different respective factors across the variable nodes in the graphical representation, the graphical representation to be used to determine the uncertainty estimation of at least one layer of the neural network. The method, apparatus, and system can further include propagating data through the graphical representation to determine the uncertainty estimation of the neural network.
    Type: Application
    Filed: May 23, 2024
    Publication date: November 28, 2024
    Inventors: Han-Pang CHIU, Yi YAO, Angel DARUNA, Yunye GONG, Abhinav RAJVANSHI, Giedrius BURACHAS
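The core idea above — treating each layer's activations as a variable node and the connections between layers as factors, then propagating belief through the graph — has a closed form for linear layers: a Gaussian with mean μ and covariance Σ maps through y = Wx + b to mean Wμ + b and covariance WΣWᵀ. The sketch below covers only this linear case; handling nonlinear layers is where the patented graphical formulation does the real work.

```python
import numpy as np

def propagate_gaussian(mu, Sigma, W, b):
    """Propagate a Gaussian belief through one linear layer y = W x + b,
    viewing the layer's weights as the factor connecting two variable nodes:
    mu_y = W mu_x + b,  Sigma_y = W Sigma_x W^T."""
    return W @ mu + b, W @ Sigma @ W.T

# Two-layer toy network with an uncertain input.
mu = np.array([1.0, -1.0])
Sigma = 0.1 * np.eye(2)                                   # input uncertainty
W1, b1 = np.array([[1.0, 2.0], [0.0, 1.0]]), np.zeros(2)
W2, b2 = np.array([[1.0, 1.0]]), np.zeros(1)

mu, Sigma = propagate_gaussian(mu, Sigma, W1, b1)
mu, Sigma = propagate_gaussian(mu, Sigma, W2, b2)         # output belief
```

After both layers the scalar output carries mean −2 and variance 1.0: the input uncertainty has been amplified by the weights, which is exactly the per-layer uncertainty estimate the abstract describes.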
  • Publication number: 20240312197
    Abstract: In general, techniques are described for unsupervised domain adaptation of models with pseudo-label curation. In an example, a method includes generating a plurality of pseudo-labels for a dataset of unlabeled data using a source machine learning model; estimating a reliability of each pseudo-label of the plurality of pseudo-labels using one or more reliability measures; selecting a subset of the plurality of pseudo-labels having estimated reliabilities that satisfy a reliability threshold; and training, using one or more curriculum learning techniques, a target machine learning model starting with the selected subset of the plurality of pseudo-labels and the corresponding unlabeled data.
    Type: Application
    Filed: March 14, 2024
    Publication date: September 19, 2024
    Inventors: Han-Pang Chiu, Niluthpol C. Mithun, Supun Samarasekera, Abhinav Rajvanshi, Xingchen Zhao, Md Nazmul Karim
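The curation loop above can be sketched with the simplest possible reliability measure: the source model's maximum softmax probability. This is a stand-in for the patent's "one or more reliability measures", and the easiest-first ordering is a minimal nod to curriculum learning.

```python
import numpy as np

def curate_pseudo_labels(probs, threshold=0.9):
    """Select reliable pseudo-labels from a source model's softmax outputs.
    Reliability here is the maximum class probability; samples below the
    threshold are dropped, and the kept ones are ordered most-reliable-first
    for curriculum-style training of the target model."""
    probs = np.asarray(probs)
    labels = probs.argmax(axis=1)            # pseudo-label per sample
    reliability = probs.max(axis=1)          # confidence as reliability proxy
    keep = np.flatnonzero(reliability >= threshold)
    order = keep[np.argsort(-reliability[keep])]
    return order, labels[order]

# Three unlabeled samples; the middle one is too uncertain to keep.
probs = [[0.97, 0.03], [0.55, 0.45], [0.08, 0.92]]
idx, pseudo = curate_pseudo_labels(probs, threshold=0.9)
```

The target model would then be trained starting from `idx`/`pseudo` and the corresponding unlabeled data, progressively admitting harder samples.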
  • Publication number: 20240303860
    Abstract: A method, apparatus, and system for providing orientation and location estimates for a query ground image include determining spatial-aware features of a ground image and applying a model to the determined spatial-aware features to determine orientation and location estimates of the ground image.
    Type: Application
    Filed: March 8, 2024
    Publication date: September 12, 2024
    Inventors: Niluthpol MITHUN, Kshitij MINHAS, Han-Pang CHIU, Taragay OSKIPER, Mikhail SIZINTSEV, Supun SAMARASEKERA, Rakesh KUMAR
  • Patent number: 12062186
    Abstract: A method, machine readable medium and system for RGBD semantic segmentation of video data includes determining semantic segmentation data and depth segmentation data for less than all classes for images of each frame of a first video, determining semantic segmentation data and depth segmentation data for images of each key frame of a second video including a synchronous combination of respective frames of the RGB video and the depth-aware video in parallel to the determination of the semantic segmentation data and the depth segmentation data for each frame of the first video, temporally and geometrically aligning respective frames of the first video and the second video, and predicting semantic segmentation data and depth segmentation data for images of a subsequent frame of the first video based on the determination of the semantic segmentation data and depth segmentation data for images of a key frame of the second video.
    Type: Grant
    Filed: October 7, 2021
    Date of Patent: August 13, 2024
    Assignee: SRI International
    Inventors: Han-Pang Chiu, Junjiao Tian, Zachary Seymour, Niluthpol C. Mithun, Alex Krasner, Mikhail Sizintsev, Abhinav Rajvanshi, Kevin Kaighn, Philip Miller, Ryan Villamil, Supun Samarasekera
  • Patent number: 11960994
    Abstract: A method, apparatus and system for artificial intelligence-based HDRL planning and control for coordinating a team of platforms includes implementing a global planning layer for determining a collective goal and determining, by applying at least one machine learning process, at least one respective platform goal to be achieved by at least one platform, implementing a platform planning layer for determining, by applying at least one machine learning process, at least one respective action to be performed by the at least one of the platforms to achieve the respective platform goal, and implementing a platform control layer for determining at least one respective function to be performed by the at least one of the platforms. In the method, apparatus and system, although information is shared between at least two of the layers, the global planning layer, the platform planning layer, and the platform control layer are trained separately.
    Type: Grant
    Filed: January 18, 2021
    Date of Patent: April 16, 2024
    Assignee: SRI International
    Inventors: Han-Pang Chiu, Jonathan D. Brookshire, Zachary Seymour, Niluthpol C. Mithun, Supun Samarasekera, Rakesh Kumar, Qiao Wang
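The three-layer hierarchy above can be sketched as separate components that exchange information at run time. The classes and heuristics below are hypothetical placeholders for the separately trained policies; only the layered structure reflects the abstract.

```python
class GlobalPlanner:
    """Global planning layer: splits a collective goal into platform goals.
    A trained policy would replace this even-split heuristic."""
    def plan(self, collective_goal, platforms):
        share = collective_goal / len(platforms)
        return {p: share for p in platforms}

class PlatformPlanner:
    """Platform planning layer: turns a platform goal into actions."""
    def plan(self, goal):
        return [("move_toward", goal)]

class PlatformController:
    """Platform control layer: turns an action into low-level functions."""
    def control(self, action):
        _, goal = action
        return {"throttle": min(1.0, abs(goal)),
                "heading": 0.0 if goal >= 0 else 180.0}

# The layers share information here, but each would be trained separately
# on its own objective, as the abstract specifies.
global_layer = GlobalPlanner()
plat_layer = PlatformPlanner()
ctrl_layer = PlatformController()
goals = global_layer.plan(6.0, ["uav1", "uav2", "uav3"])
cmds = [ctrl_layer.control(a) for a in plat_layer.plan(goals["uav1"])]
```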
  • Publication number: 20240096093
    Abstract: A method for AI-driven augmented reality mentoring includes determining semantic features of objects in at least one captured scene, determining 3D positional information of the objects, combining information regarding the identified objects with respective 3D positional information to determine at least one intermediate representation, completing the determined intermediate representation using machine learning to include additional objects or positional information of the objects not identifiable from the at least one captured scene, determining at least one task to be performed and determining steps to be performed using a knowledge database, generating at least one visual representation relating to the determined steps for performing the at least one task, determining a correct position for displaying the at least one visual representation, and displaying the at least one visual representation on the see-through display in the determined correct position as an augmented overlay to the view of the at least
    Type: Application
    Filed: September 19, 2023
    Publication date: March 21, 2024
    Inventors: Han-Pang CHIU, Abhinav RAJVANSHI, Niluthpol C. MITHUN, Zachary SEYMOUR, Supun SAMARASEKERA, Rakesh KUMAR, Winter Joseph Guerra
  • Publication number: 20230394294
    Abstract: A method, apparatus and system for artificial intelligence-based HDRL planning and control for coordinating a team of platforms includes implementing a global planning layer for determining a collective goal and determining, by applying at least one machine learning process, at least one respective platform goal to be achieved by at least one platform, implementing a platform planning layer for determining, by applying at least one machine learning process, at least one respective action to be performed by the at least one of the platforms to achieve the respective platform goal, and implementing a platform control layer for determining at least one respective function to be performed by the at least one of the platforms. In the method, apparatus and system, although information is shared between at least two of the layers, the global planning layer, the platform planning layer, and the platform control layer are trained separately.
    Type: Application
    Filed: January 18, 2021
    Publication date: December 7, 2023
    Inventors: Han-Pang Chiu, Jonathan D. Brookshire, Zachary Seymour, Niluthpol C. Mithun, Supun Samarasekera, Rakesh Kumar, Qiao Wang
  • Patent number: 11676296
    Abstract: Techniques for augmenting a reality captured by an image capture device are disclosed. In one example, a system includes an image capture device that generates a two-dimensional frame at a local pose. The system further includes a computation engine executing on one or more processors that queries, based on an estimated pose prior, a reference database of three-dimensional mapping information to obtain an estimated view of the three-dimensional mapping information at the estimated pose prior. The computation engine processes the estimated view at the estimated pose prior to generate semantically segmented sub-views of the estimated view. The computation engine correlates, based on at least one of the semantically segmented sub-views of the estimated view, the estimated view to the two-dimensional frame. Based on the correlation, the computation engine generates and outputs data for augmenting a reality represented in at least one frame captured by the image capture device.
    Type: Grant
    Filed: August 10, 2018
    Date of Patent: June 13, 2023
    Assignee: SRI International
    Inventors: Han-Pang Chiu, Supun Samarasekera, Rakesh Kumar, Ryan Villamil, Varun Murali, Gregory Drew Kessler
  • Publication number: 20230004797
    Abstract: A method, apparatus and system for object detection in sensor data having at least two modalities using a common embedding space includes creating first modality vector representations of features of sensor data having a first modality and second modality vector representations of features of sensor data having a second modality, projecting the first and second modality vector representations into the common embedding space such that related embedded modality vectors are closer together in the common embedding space than unrelated modality vectors, combining the projected first and second modality vector representations, and determining a similarity between the combined modality vector representations and respective embedded vector representations of features of objects in the common embedding space to identify at least one object depicted by the captured sensor data. In some instances, data manipulation of the method, apparatus and system can be guided by physics properties of a sensor and/or sensor data.
    Type: Application
    Filed: February 11, 2021
    Publication date: January 5, 2023
    Inventors: Han-Pang CHIU, Zachary SEYMOUR, Niluthpol C. MITHUN, Supun SAMARASEKERA, Rakesh KUMAR, Yi YAO
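The common-embedding idea above — project each modality into a shared space, combine, then compare against embedded object representations by similarity — can be sketched as follows. The random projection matrices stand in for the trained per-modality networks, and the RGB/lidar pairing, dimensions, and object names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical learned projections from each sensor modality into a shared
# 4-D embedding space (stand-ins for trained projection networks).
P_rgb = rng.normal(size=(6, 4))
P_lidar = rng.normal(size=(5, 4))

def normalize(v):
    return v / np.linalg.norm(v)

def embed_pair(rgb_feat, lidar_feat):
    """Project both modality features into the common space and combine."""
    combined = normalize(rgb_feat @ P_rgb) + normalize(lidar_feat @ P_lidar)
    return normalize(combined)

def identify(rgb_feat, lidar_feat, object_embeddings):
    """Cosine similarity of the combined embedding against known objects."""
    q = embed_pair(rgb_feat, lidar_feat)
    sims = {name: float(q @ normalize(e))
            for name, e in object_embeddings.items()}
    return max(sims, key=sims.get), sims

rgb = rng.normal(size=6)
lidar = rng.normal(size=5)
objects = {"vehicle": embed_pair(rgb, lidar),
           "building": -embed_pair(rgb, lidar)}
best, sims = identify(rgb, lidar, objects)
```

In a trained system the projections are learned so that related embeddings land close together, which is what makes the cosine comparison meaningful.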
  • Publication number: 20220299592
    Abstract: A method, apparatus and system for determining change in pose of a mobile device include determining from first ranging information received at a first and a second receiver on the mobile device from a stationary node during a first time instance, a distance from the stationary node to the first receiver and the second receiver, determining from second ranging information received at the first receiver and the second receiver from the stationary node during a second time instance, a distance from the stationary node to the first receiver and second receiver, and determining from the determined distances during the first time instance and the second time instance, how far and in which direction the first receiver and the second receiver moved between the first time instance and the second time instance to determine a change in pose of the mobile device, where a position of the stationary node is unknown.
    Type: Application
    Filed: March 15, 2022
    Publication date: September 22, 2022
    Inventors: Han-Pang Chiu, Abhinav Rajvanshi, Alex Krasner, Mikhail Sizintsev, Glenn A. Murray, Supun Samarasekera
  • Publication number: 20220198813
    Abstract: A method, apparatus and system for efficient navigation in a navigation space includes determining semantic features and respective 3D positional information of the semantic features for scenes of captured image content and depth-related content in the navigation space, combining information of the determined semantic features of the scene with respective 3D positional information using neural networks to determine an intermediate representation of the scene which provides information regarding positions of the semantic features in the scene and spatial relationships among the semantic features, and using the information regarding the positions of the semantic features and the spatial relationships among the semantic features in a machine learning process to provide at least one of a navigation path in the navigation space, a model of the navigation space, and an explanation of a navigation action by a single mobile agent in the navigation space.
    Type: Application
    Filed: December 17, 2021
    Publication date: June 23, 2022
    Inventors: Han-Pang CHIU, Zachary SEYMOUR, Niluthpol C. MITHUN, Supun SAMARASEKERA, Rakesh KUMAR, Kowshik THOPALLI, Muhammad Zubair IRSHAD
  • Patent number: 11361470
    Abstract: A method, apparatus and system for visual localization includes extracting appearance features of an image, extracting semantic features of the image, fusing the extracted appearance features and semantic features, pooling and projecting the fused features into a semantic embedding space having been trained using fused appearance and semantic features of images having known locations, computing a similarity measure between the projected fused features and embedded, fused appearance and semantic features of images, and predicting a location of the image associated with the projected, fused features. An image can include at least one image from a plurality of modalities such as a Light Detection and Ranging image, a Radio Detection and Ranging image, or a 3D Computer Aided Design modeling image, and an image from a different sensor, such as an RGB image sensor, captured from a same geo-location, which is used to determine the semantic features of the multi-modal image.
    Type: Grant
    Filed: October 29, 2019
    Date of Patent: June 14, 2022
    Assignee: SRI International
    Inventors: Han-Pang Chiu, Zachary Seymour, Karan Sikka, Supun Samarasekera, Rakesh Kumar, Niluthpol Mithun
  • Patent number: 11313684
    Abstract: During GPS-denied/restricted navigation, images proximate a platform device are captured using a camera, and corresponding motion measurements of the platform device are captured using an IMU device. Features of a current frame of the images captured are extracted. Extracted features are matched and feature information between consecutive frames is tracked. The extracted features are compared to previously stored, geo-referenced visual features from a plurality of platform devices. If one of the extracted features does not match a geo-referenced visual feature, a pose is determined for the platform device using IMU measurements propagated from a previous pose and relative motion information between consecutive frames, which is determined using the tracked feature information.
    Type: Grant
    Filed: March 28, 2017
    Date of Patent: April 26, 2022
    Assignee: SRI International
    Inventors: Han-Pang Chiu, Supun Samarasekera, Rakesh Kumar, Mikhail Sizintsev, Xun Zhou, Philip Miller, Glenn Murray
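The fallback logic above — snap to a known location when a frame feature matches a geo-referenced database entry, otherwise dead-reckon from the previous pose — can be sketched as a single conditional update. The `geo_db` structure and exact-distance matching are hypothetical simplifications; a real system matches descriptors approximately and fuses IMU and visual odometry in a filter.

```python
import numpy as np

def update_pose(prev_pose, motion_delta, frame_feature, geo_db, tol=1e-6):
    """Pose update with a geo-referenced fallback: if the current frame's
    feature matches a stored geo-referenced feature, take its known pose;
    otherwise propagate the previous pose with the IMU / frame-to-frame
    relative motion estimate (dead reckoning)."""
    for desc, geo_pose in geo_db:
        if np.linalg.norm(frame_feature - desc) < tol:  # feature recognized
            return geo_pose
    return prev_pose + motion_delta                     # propagate instead

# One geo-referenced feature with a known position.
db = [(np.array([1.0, 2.0]), np.array([10.0, 20.0]))]

# Unrecognized feature: the pose is dead-reckoned from the previous one.
pose = update_pose(np.array([0.0, 0.0]), np.array([0.5, 0.5]),
                   np.array([9.0, 9.0]), db)
```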
  • Publication number: 20220108455
    Abstract: A method, machine readable medium and system for RGBD semantic segmentation of video data includes determining semantic segmentation data and depth segmentation data for less than all classes for images of each frame of a first video, determining semantic segmentation data and depth segmentation data for images of each key frame of a second video including a synchronous combination of respective frames of the RGB video and the depth-aware video in parallel to the determination of the semantic segmentation data and the depth segmentation data for each frame of the first video, temporally and geometrically aligning respective frames of the first video and the second video, and predicting semantic segmentation data and depth segmentation data for images of a subsequent frame of the first video based on the determination of the semantic segmentation data and depth segmentation data for images of a key frame of the second video.
    Type: Application
    Filed: October 7, 2021
    Publication date: April 7, 2022
    Inventors: Han-Pang CHIU, Junjiao TIAN, Zachary SEYMOUR, Niluthpol C. MITHUN, Alex KRASNER, Mikhail SIZINTSEV, Abhinav RAJVANSHI, Kevin KAIGHN, Philip MILLER, Ryan VILLAMIL, Supun SAMARASEKERA