Patents by Inventor Karen Yan Ming Leung

Karen Yan Ming Leung has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240400101
Abstract: In various examples, systems and methods are disclosed relating to refinement of safety zones and improving evaluation metrics for the perception modules of autonomous and semi-autonomous systems. Example implementations can exclude areas of the state space that are not safety-critical while retaining those that are. This can be accomplished by leveraging ego maneuver information and conditioning safety zone computations on ego maneuvers. A maneuver-based decomposition of perception safety zones may leverage a temporal convolution operation capable of accounting for collisions at any intermediate time on the way to maneuver completion. This provides a significant reduction in zone volume while maintaining completeness, thus optimizing or otherwise enhancing obstacle perception performance requirements by filtering out regions of state space not relevant to a system's route of travel.
    Type: Application
    Filed: June 2, 2023
    Publication date: December 5, 2024
    Applicant: NVIDIA Corporation
Inventors: Sever Ioan Topan, Yuxiao Chen, Edward Fu Schmerling, Karen Yan Ming Leung, Hans Jonas Nilsson, Michael Cox, Marco Pavone
  • Publication number: 20240160913
Abstract: In various examples, learning responsibility allocations for machine interactions is described herein. Systems and methods are disclosed that train one or more neural networks to generate outputs indicating estimated levels of responsibility associated with interactions between vehicles or machines and other objects (e.g., other vehicles, machines, pedestrians, animals, etc.). In some examples, the neural networks are trained using real-world data, such as data representing scenes depicting actual interactions between vehicles and objects and/or parameters (e.g., velocities, positions, directions, etc.) associated with the interactions. Then, in practice, a vehicle (e.g., an autonomous vehicle, a semi-autonomous vehicle, etc.) may use the neural networks to generate an output indicating a proposed or estimated level of responsibility associated with an interaction between the vehicle and an object. The vehicle may then use the output to determine one or more controls for the vehicle to use when navigating.
    Type: Application
    Filed: October 31, 2022
    Publication date: May 16, 2024
    Inventors: Ryan Cosner, Yuxiao Chen, Karen Yan Ming Leung, Marco Pavone
  • Publication number: 20240085914
    Abstract: In various examples, techniques for determining perception zones for object detection are described. For instance, a system may use a dynamic model associated with an ego-machine, a dynamic model associated with an object, and one or more possible interactions between the ego-machine and the object to determine a perception zone. The system may then perform one or more processes using the perception zone. For instance, if the system is validating a perception system of the ego-machine, the system may determine whether a detection error associated with the object is a safety-critical error based on whether the object is located within the perception zone. Additionally, if the system is executing within the ego-machine, the system may determine whether the object is a safety-critical object based on whether the object is located within the perception zone.
    Type: Application
    Filed: September 12, 2022
    Publication date: March 14, 2024
    Inventors: Sever Ioan Topan, Karen Yan Ming Leung, Yuxiao Chen, Pritish Tupekar, Edward Fu Schmerling, Hans Jonas Nilsson, Michael Cox, Marco Pavone
  • Publication number: 20240010196
Abstract: In various examples, control policies for controlling agents may be learned from demonstrations capturing joint states of entities navigating through the environment. A control policy may be learned mapping joint states to control actions, where the joint states are between agents, and the control actions are those of at least one of the agents. The control policy may be learned to define the mappings as control-invariant sets of the joint states and the control actions. The control policy may be used to determine one or more functions that compute, based at least on a joint state between entities, output indicating a likelihood of collision between the entities operating in accordance with the control policy. Using the output, current and/or potential states of the environment may be evaluated to determine control operations for a machine, such as a vehicle.
    Type: Application
    Filed: March 14, 2023
    Publication date: January 11, 2024
    Inventors: Karen Yan Ming Leung, Sushant Veer, Edward Fu Schmerling, Marco Pavone
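To give a flavor of the perception-zone idea that recurs in the abstracts above, here is a deliberately simplified, hypothetical sketch. It is not the patented method: the function names, the constant-deceleration braking model, and all parameter values are illustrative assumptions. The core notion it captures is that an object counts as safety-critical for perception only if it sits within a gap the ego vehicle could not safely close out of, so perception errors on more distant objects can be filtered from evaluation.

```python
def min_safe_gap(v_ego, v_obj, reaction_time=0.5, a_brake=5.0):
    """Minimum longitudinal gap (m) so the ego, after a reaction delay,
    can brake at a_brake (m/s^2) until it no longer closes on a slower
    lead object. Assumes same-lane, same-direction travel."""
    closing = max(v_ego - v_obj, 0.0)              # relative closing speed (m/s)
    reaction_dist = closing * reaction_time        # gap lost before braking starts
    braking_dist = closing ** 2 / (2.0 * a_brake)  # gap lost while decelerating
    return reaction_dist + braking_dist


def is_safety_critical(gap, v_ego, v_obj, margin=1.0, **kwargs):
    """An object inside this (hypothetical) zone is one where a perception
    error could contribute to a collision; errors on objects outside the
    zone can be excluded from safety-focused evaluation metrics."""
    return gap <= min_safe_gap(v_ego, v_obj, **kwargs) + margin
```

With the default parameters, an object 12 m ahead moving at 10 m/s while the ego travels at 20 m/s falls inside the zone (`is_safety_critical(12.0, 20.0, 10.0)` is `True`), whereas the same object 40 m ahead does not.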