Patents by Inventor Eun Choe

Eun Choe has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12290508
    Abstract: The present invention provides methods for treating cancer in a subject with a trisubstituted benzotriazole derivative with the formula (I) or a pharmaceutically acceptable salt thereof, wherein the variables R1, R2 and R3 are as defined herein.
    Type: Grant
    Filed: June 9, 2023
    Date of Patent: May 6, 2025
    Assignee: Servier Pharmaceuticals LLC
    Inventors: Danielle Ulanet, Sung Eun Choe
  • Patent number: 12235353
    Abstract: In various examples, a hazard detection system fuses outputs from multiple sensors over time to determine a probability that a stationary object or hazard exists at a location. The system may then use sensor data to calculate a detection bounding shape for detected objects and, using the bounding shape, may generate a set of particles, each including a confidence value that an object exists at a corresponding location. The system may then capture additional sensor data by one or more sensors of the ego-machine that are different from those used to capture the first sensor data. To improve the accuracy of the confidences of the particles, the system may determine a correspondence between the first sensor data and the additional sensor data (e.g., depth sensor data), which may be used to filter out a portion of the particles and improve the depth predictions corresponding to the object.
    Type: Grant
    Filed: November 10, 2021
    Date of Patent: February 25, 2025
    Assignee: NVIDIA Corporation
    Inventors: Gang Pan, Joachim Pehserl, Dong Zhang, Baris Evrim Demiroz, Samuel Rupp Ogden, Tae Eun Choe, Sangmin Oh
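The particle step in the abstract above (generate location candidates inside a detection bounding shape, then filter them against independent depth data) can be illustrated with a minimal sketch. This is not the patented implementation; every name, constant, and the confidence-update rule here is a hypothetical stand-in.

```python
import random

def generate_particles(bbox, n, seed=0):
    """Sample candidate object locations (particles) inside a detection
    bounding box, each with an initial confidence value and a depth guess."""
    x0, y0, x1, y1 = bbox
    rng = random.Random(seed)
    return [{"x": rng.uniform(x0, x1),
             "y": rng.uniform(y0, y1),
             "depth": rng.uniform(5.0, 50.0),
             "conf": 0.5} for _ in range(n)]

def filter_particles(particles, measured_depth, tol=2.0):
    """Keep only particles whose depth agrees with an independent
    depth-sensor measurement, boosting the confidence of survivors."""
    kept = []
    for p in particles:
        if abs(p["depth"] - measured_depth) <= tol:
            kept.append(dict(p, conf=min(1.0, p["conf"] + 0.3)))
    return kept

particles = generate_particles((100, 80, 160, 140), n=200)
survivors = filter_particles(particles, measured_depth=20.0)
```

In this toy version, a second sensor's depth reading prunes particles whose depth hypothesis disagrees with it, which is the filtering role the abstract assigns to the additional sensor data.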
  • Publication number: 20240412104
    Abstract: Systems and methods are disclosed that relate to training a machine learning system using simulated objects. A simulated object may be generated based at least on extracting at least a portion of the simulated object from a simulated textured representation. Further, the simulated object may be combined with an existing image to generate a training image. One or more parameters of a machine learning model may be updated based at least on the training image and ground truth data corresponding to the training image.
    Type: Application
    Filed: June 12, 2023
    Publication date: December 12, 2024
    Inventors: Dong ZHANG, Tae Eun CHOE, Seungwoo YOO, Sangmin OH, Minwoo PARK
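The augmentation described in the abstract above (combine a simulated object with an existing image and derive ground truth for training) can be sketched as a simple masked paste. This is a toy stand-in under assumed conventions, not the disclosed system; images are nested lists of pixel values for self-containment.

```python
def composite_simulated_object(real_image, sim_object, mask, top_left):
    """Paste a simulated object crop into a real image wherever the
    binary mask is 1, returning the augmented training image and the
    pasted region's ground-truth bounding box."""
    out = [row[:] for row in real_image]
    r0, c0 = top_left
    h, w = len(sim_object), len(sim_object[0])
    for r in range(h):
        for c in range(w):
            if mask[r][c]:
                out[r0 + r][c0 + c] = sim_object[r][c]
    bbox = (c0, r0, c0 + w, r0 + h)  # x0, y0, x1, y1
    return out, bbox

real = [[0] * 64 for _ in range(64)]   # blank "road" image
obj = [[255] * 8 for _ in range(8)]    # bright simulated object crop
mask = [[1] * 8 for _ in range(8)]     # paste every pixel of the crop
augmented, gt_box = composite_simulated_object(real, obj, mask, (10, 20))
```

The returned box plays the role of the ground truth the abstract says accompanies each generated training image.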
  • Publication number: 20240317263
    Abstract: Systems and methods are disclosed relating to viewpoint adapted perception for autonomous machines and applications. A 3D perception network may be adapted to handle unavailable target rig data by training one or more layers of the 3D perception network as part of a training network using real source rig data and simulated source and target rig data. Feature statistics extracted from the real source data may be used to transform the features extracted from the simulated data during training. The paths for real and simulated data through the resulting network may be alternately trained on real and simulated data to update shared weights for the different paths. As such, one or more of the paths through the training network(s) may be designated as the 3D perception network, and target rig data may be applied to the 3D perception network to perform one or more perception tasks.
    Type: Application
    Filed: May 31, 2024
    Publication date: September 26, 2024
    Inventors: Ahyun SEO, Tae Eun Choe, Minwoo Park, Jung Seock Joo
  • Publication number: 20240320923
    Abstract: Systems and methods are disclosed relating to viewpoint adapted perception for autonomous machines and applications. A 3D perception network may be adapted to handle unavailable target rig data by training the one or more layers of the 3D perception network as part of a training network using simulated source and target rig data. A consistency loss that compares (e.g., top-down) transformed feature maps extracted from simulated source and target rig data may be used to minimize differences across training channels. As such, one or more of the paths through the training network(s) may be designated as the 3D perception network, and target rig data may be applied to the 3D perception network to perform one or more perception tasks.
    Type: Application
    Filed: May 31, 2024
    Publication date: September 26, 2024
    Inventors: Ahyun SEO, Tae Eun Choe, Minwoo Park, Jung Seock Joo
  • Publication number: 20240312123
    Abstract: In various examples, systems and methods are disclosed that relate to data augmentation for training/updating perception models in autonomous or semi-autonomous systems and applications. For example, a system may receive data associated with a set of frames that are captured using a plurality of cameras positioned in fixed relation relative to the machine; generate a panoramic view based at least on the set of frames; provide data associated with the panoramic view to a model to cause the model to generate a high dynamic range (HDR) panoramic view; determine lighting information associated with a light distribution map based at least on the HDR panoramic view; determine a virtual scene; and render an asset and a shadow on at least one of the frames, based at least on the virtual scene and the light distribution map, the shadow being a shadow corresponding to the asset.
    Type: Application
    Filed: February 29, 2024
    Publication date: September 19, 2024
    Applicant: NVIDIA Corporation
    Inventors: Malik Aqeel Anwar, Tae Eun Choe, Zian Wang, Sanja Fidler, Minwoo Park
  • Publication number: 20240265555
    Abstract: Systems and methods are disclosed that use a geometric approach to detect objects on a road surface. A set of points within a region of interest between a first frame and a second frame are captured and tracked to determine a difference in location between the set of points in two frames. The first frame may be aligned with the second frame and the first pixel values of the first frame may be compared with the second pixel values of the second frame to generate a disparity image including third pixels. Subsets of the third pixels that have a disparity image value above a first threshold may be combined, and the third pixels may be scored and associated with disparity values for each pixel of the one or more subsets of the third pixels. A bounding shape may be generated based on the scoring that corresponds to the object.
    Type: Application
    Filed: March 22, 2024
    Publication date: August 8, 2024
    Inventors: Dong Zhang, Sangmin Oh, Junghyun Kwon, Baris Evrim Demiroz, Tae Eun Choe, Minwoo Park, Chethan Ningaraju, Hao Tsui, Eric Viscito, Jagadeesh Sankaran, Yongqing Liang
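The pipeline in the abstract above (compare aligned frames pixel-by-pixel, threshold the resulting disparity image, and derive a bounding shape) can be sketched minimally as follows. All names and the toy frames are hypothetical; the alignment step is assumed already done.

```python
def disparity_image(frame_a, frame_b):
    """Per-pixel absolute difference between two aligned frames."""
    return [[abs(a - b) for a, b in zip(ra, rb)]
            for ra, rb in zip(frame_a, frame_b)]

def bounding_shape(disp, threshold):
    """Axis-aligned bounding box over pixels whose disparity value
    exceeds the threshold; None if no pixel does."""
    hits = [(r, c) for r, row in enumerate(disp)
            for c, v in enumerate(row) if v > threshold]
    if not hits:
        return None
    rows = [r for r, _ in hits]
    cols = [c for _, c in hits]
    return (min(cols), min(rows), max(cols) + 1, max(rows) + 1)

frame1 = [[10] * 10 for _ in range(10)]
frame2 = [row[:] for row in frame1]
for r in range(3, 6):              # a small object appears in frame2
    for c in range(4, 7):
        frame2[r][c] = 80
disp = disparity_image(frame1, frame2)
box = bounding_shape(disp, threshold=30)
```

Pixels unchanged between frames produce zero disparity, so only the newly appeared object survives the threshold and defines the box.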
  • Publication number: 20240148697
    Abstract: The present invention provides methods for treating cancer in a subject with a trisubstituted benzotriazole derivative with the formula (I) or a pharmaceutically acceptable salt thereof, wherein the variables R1, R2 and R3 are as defined herein.
    Type: Application
    Filed: June 9, 2023
    Publication date: May 9, 2024
    Inventors: Danielle Ulanet, Sung Eun Choe
  • Patent number: 11961243
    Abstract: A geometric approach may be used to detect objects on a road surface. A set of points within a region of interest between a first frame and a second frame are captured and tracked to determine a difference in location between the set of points in two frames. The first frame may be aligned with the second frame and the first pixel values of the first frame may be compared with the second pixel values of the second frame to generate a disparity image including third pixels. One or more subsets of the third pixels that have a value above a first threshold may be combined, and the third pixels may be scored and associated with disparity values for each pixel of the one or more subsets of the third pixels. A bounding shape may be generated based on the scoring.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: April 16, 2024
    Assignee: NVIDIA Corporation
    Inventors: Dong Zhang, Sangmin Oh, Junghyun Kwon, Baris Evrim Demiroz, Tae Eun Choe, Minwoo Park, Chethan Ningaraju, Hao Tsui, Eric Viscito, Jagadeesh Sankaran, Yongqing Liang
  • Publication number: 20240001957
    Abstract: In various examples, systems and methods are disclosed that preserve rich, detail-centric information from a real-world image by augmenting the real-world image with simulated objects to train a machine learning model to detect objects in an input image. The machine learning model may be trained, in deployment, to detect objects and determine bounding shapes to encapsulate detected objects. The machine learning model may further be trained to determine the type of road object encountered, calculate hazard ratings, and calculate confidence percentages. In deployment, detection of a road object, determination of a corresponding bounding shape, identification of road object type, and/or calculation of a hazard rating by the machine learning model may be used as an aid for determining next steps regarding the surrounding environment—e.g., navigating around the road debris, driving over the road debris, or coming to a complete stop—in a variety of autonomous machine applications.
    Type: Application
    Filed: September 14, 2023
    Publication date: January 4, 2024
    Inventors: Tae Eun Choe, Pengfei Hao, Xiaolin Lin, Minwoo Park
  • Patent number: 11801861
    Abstract: In various examples, systems and methods are disclosed that preserve rich, detail-centric information from a real-world image by augmenting the real-world image with simulated objects to train a machine learning model to detect objects in an input image. The machine learning model may be trained, in deployment, to detect objects and determine bounding shapes to encapsulate detected objects. The machine learning model may further be trained to determine the type of road object encountered, calculate hazard ratings, and calculate confidence percentages. In deployment, detection of a road object, determination of a corresponding bounding shape, identification of road object type, and/or calculation of a hazard rating by the machine learning model may be used as an aid for determining next steps regarding the surrounding environment—e.g., navigating around the road debris, driving over the road debris, or coming to a complete stop—in a variety of autonomous machine applications.
    Type: Grant
    Filed: January 15, 2021
    Date of Patent: October 31, 2023
    Assignee: NVIDIA Corporation
    Inventors: Tae Eun Choe, Pengfei Hao, Xiaolin Lin, Minwoo Park
  • Publication number: 20230321163
    Abstract: The present invention relates to a composition for preventing or treating Clostridium difficile infection, comprising at least one from the group consisting of cells, cultures, lysates, and extracts of Clostridium scindens, Blautia producta, and Enterococcus faecium.
    Type: Application
    Filed: August 20, 2021
    Publication date: October 12, 2023
    Inventors: Gwangpyo Ko, Hyun Ju You, Jun Sun Yu, Su Eun Choe
  • Publication number: 20230294727
    Abstract: In various examples, a hazard detection system plots hazard indicators from multiple detection sensors to grid cells of an occupancy grid corresponding to a driving environment. For example, as the ego-machine travels along a roadway, one or more sensors of the ego-machine may capture sensor data representing the driving environment. A system of the ego-machine may then analyze the sensor data to determine the existence and/or location of the one or more hazards within an occupancy grid—and thus within the environment. When a hazard is detected using a respective sensor, the system may plot an indicator of the hazard to one or more grid cells that correspond to the detected location of the hazard. Based, at least in part, on a fused or combined confidence of the hazard indicators for each grid cell, the system may predict whether the corresponding grid cell is occupied by a hazard.
    Type: Application
    Filed: March 15, 2022
    Publication date: September 21, 2023
    Inventors: Sangmin Oh, Baris Evrim Demiroz, Gang Pan, Dong Zhang, Joachim Pehserl, Samuel Rupp Ogden, Tae Eun Choe
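The grid-cell plotting and confidence fusion described in the abstract above can be illustrated with a minimal sketch. The quantization scheme, the independence-style fusion rule, and the 0.7 occupancy threshold are all hypothetical choices, not the patented method.

```python
from collections import defaultdict

def plot_indicators(grid, detections, cell_size=1.0):
    """Accumulate per-sensor hazard confidences into occupancy-grid
    cells keyed by the quantized (x, y) location of each detection."""
    for x, y, conf in detections:
        cell = (int(x // cell_size), int(y // cell_size))
        grid[cell].append(conf)

def fuse_cell(confs):
    """Combine indicator confidences as 1 - prod(1 - c), treating the
    per-sensor indicators as independent evidence."""
    p_free = 1.0
    for c in confs:
        p_free *= (1.0 - c)
    return 1.0 - p_free

grid = defaultdict(list)
plot_indicators(grid, [(3.2, 7.8, 0.6)])   # e.g. a camera detection
plot_indicators(grid, [(3.4, 7.5, 0.5)])   # e.g. a radar detection
occupied = {cell: fuse_cell(c) >= 0.7 for cell, c in grid.items()}
```

Two moderate-confidence indicators landing in the same cell fuse to a higher combined confidence than either alone, which is the effect the abstract relies on for predicting occupancy.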
  • Publication number: 20230282005
    Abstract: In various examples, a multi-sensor fusion machine learning model – such as a deep neural network (DNN) – may be deployed to fuse data from a plurality of individual machine learning models. As such, the multi-sensor fusion network may use outputs from a plurality of machine learning models as input to generate a fused output that represents data from fields of view or sensory fields of each of the sensors supplying the machine learning models, while accounting for learned associations between boundary or overlap regions of the various fields of view of the source sensors. In this way, the fused output may be less likely to include duplicate, inaccurate, or noisy data with respect to objects or features in the environment, as the fusion network may be trained to account for multiple instances of a same object appearing in different input representations.
    Type: Application
    Filed: May 1, 2023
    Publication date: September 7, 2023
    Inventors: Minwoo Park, Junghyun Kwon, Mehmet K. Kocamaz, Hae-Jong Seo, Berta Rodriguez Hervas, Tae Eun Choe
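The abstract above describes a learned fusion network; as a much simpler non-learned analogue of its duplicate-suppression effect, overlapping detections from different sensors can be merged greedily by intersection-over-union. This IoU dedup is a plain substitute technique for illustration only, not the DNN the publication discloses.

```python
def iou(a, b):
    """Intersection-over-union of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def fuse_detections(per_sensor_boxes, iou_thresh=0.5):
    """Merge detections from several sensors, dropping any box that
    overlaps an already-kept box (a duplicate of the same object seen
    in an overlap region between fields of view)."""
    fused = []
    for boxes in per_sensor_boxes:
        for box in boxes:
            if all(iou(box, kept) < iou_thresh for kept in fused):
                fused.append(box)
    return fused

cam_a = [(10, 10, 30, 30)]
cam_b = [(12, 11, 31, 30), (50, 50, 60, 60)]  # first box duplicates cam_a's
fused = fuse_detections([cam_a, cam_b])
```

The same object seen by two cameras collapses to a single fused detection, while the object visible to only one camera is kept.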
  • Patent number: 11717512
    Abstract: The present invention provides methods for treating cancer in a subject with a trisubstituted benzotriazole derivative with the formula (I) or a pharmaceutically acceptable salt thereof, wherein the variables R1, R2 and R3 are as defined herein.
    Type: Grant
    Filed: February 19, 2019
    Date of Patent: August 8, 2023
    Assignee: Servier Pharmaceuticals LLC
    Inventors: Danielle Ulanet, Sung Eun Choe
  • Patent number: 11704913
    Abstract: A list of images is received. The images were captured by a sensor of an ADV chronologically while driving through a driving environment. A first image of the images is identified that includes a first object in a first dimension (e.g., larger size) detected by an object detector using an object detection algorithm. In response to the detection of the first object, the images in the list are traversed backwardly in time from the first image to identify a second image that includes a second object in a second dimension (e.g., smaller size) based on a moving trail of the ADV represented by the list of images. The second object is then labeled or annotated in the second image equivalent to the first object in the first image. The list of images having the labeled second image can be utilized for subsequent object detection during autonomous driving.
    Type: Grant
    Filed: July 2, 2019
    Date of Patent: July 18, 2023
    Assignee: BAIDU USA LLC
    Inventors: Tae Eun Choe, Guang Chen, Weide Zhang, Yuliang Guo, Ka Wai Tsoi
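The backward traversal in the abstract above (detect the object once it is large enough, then walk earlier frames to label the smaller, more distant appearances of the same object) can be sketched as follows. The frame representation and the "candidate" flag standing in for the moving-trail association are hypothetical simplifications.

```python
def backward_label(frames, detect):
    """Given chronologically ordered frames, find the first frame where
    the detector fires, then walk backward in time propagating that
    label to earlier frames where the object is visible but too small
    for the detector."""
    labels = {}
    first_hit = next((i for i, f in enumerate(frames) if detect(f)), None)
    if first_hit is None:
        return labels
    labels[first_hit] = "object"
    for i in range(first_hit - 1, -1, -1):
        if frames[i].get("candidate"):   # object present but undetected
            labels[i] = "object"
    return labels

frames = [{"candidate": True},
          {"candidate": True},
          {"candidate": True, "size": 40}]  # detector fires only here
labels = backward_label(frames, detect=lambda f: f.get("size", 0) >= 30)
```

Only the last frame triggers the detector, yet all three frames end up labeled, which is the auto-annotation payoff the abstract describes.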
  • Patent number: 11688181
    Abstract: In various examples, a multi-sensor fusion machine learning model—such as a deep neural network (DNN)—may be deployed to fuse data from a plurality of individual machine learning models. As such, the multi-sensor fusion network may use outputs from a plurality of machine learning models as input to generate a fused output that represents data from fields of view or sensory fields of each of the sensors supplying the machine learning models, while accounting for learned associations between boundary or overlap regions of the various fields of view of the source sensors. In this way, the fused output may be less likely to include duplicate, inaccurate, or noisy data with respect to objects or features in the environment, as the fusion network may be trained to account for multiple instances of a same object appearing in different input representations.
    Type: Grant
    Filed: June 21, 2021
    Date of Patent: June 27, 2023
    Assignee: NVIDIA Corporation
    Inventors: Minwoo Park, Junghyun Kwon, Mehmet K. Kocamaz, Hae-Jong Seo, Berta Rodriguez Hervas, Tae Eun Choe
  • Patent number: 11679764
    Abstract: During autonomous driving, the movement trails or moving history of obstacles, as well as of an autonomous driving vehicle (ADV), may be maintained in a corresponding buffer. For the obstacles and the ADV, the vehicle states at different points in time are maintained and stored in one or more buffers. The vehicle states representing the moving trails or moving history of the obstacles and the ADV may be utilized to reconstruct a history trajectory of the obstacles and the ADV, which may be used for a variety of purposes. For example, the moving trails or history of obstacles may be utilized to determine lane configuration of one or more lanes of a road, particularly in a rural area where the lane markings are unclear. The moving history of the obstacles may also be utilized to predict the future movement of the obstacles, tailgate an obstacle, and infer a lane line.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: June 20, 2023
    Assignee: BAIDU USA LLC
    Inventors: Tae Eun Choe, Guang Chen, Weide Zhang, Yuliang Guo, Ka Wai Tsoi
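The per-vehicle state buffer described in the abstract above can be sketched with a fixed-length deque: newest states are appended, the oldest are evicted, and the retained states reconstruct the history trajectory. Class and field names are hypothetical, and the buffer length is an arbitrary illustration.

```python
from collections import deque

class MovingHistory:
    """Fixed-length buffer of vehicle states recorded at successive
    time steps; the oldest states are evicted first."""
    def __init__(self, maxlen=100):
        self.states = deque(maxlen=maxlen)

    def record(self, state):
        self.states.append(state)

    def trajectory(self):
        """Reconstruct the history trajectory as a list of (x, y)."""
        return [(s["x"], s["y"]) for s in self.states]

history = MovingHistory(maxlen=3)
for t in range(5):                   # only the last 3 states survive
    history.record({"x": float(t), "y": 0.5 * t, "heading": 0.0})
trail = history.trajectory()
```

A trail like this, kept for each observed obstacle, is the raw material the abstract mentions for inferring lane lines and predicting future motion.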
  • Publication number: 20230142299
    Abstract: In various examples, a hazard detection system fuses outputs from multiple sensors over time to determine a probability that a stationary object or hazard exists at a location. The system may then use sensor data to calculate a detection bounding shape for detected objects and, using the bounding shape, may generate a set of particles, each including a confidence value that an object exists at a corresponding location. The system may then capture additional sensor data by one or more sensors of the ego-machine that are different from those used to capture the first sensor data. To improve the accuracy of the confidences of the particles, the system may determine a correspondence between the first sensor data and the additional sensor data (e.g., depth sensor data), which may be used to filter out a portion of the particles and improve the depth predictions corresponding to the object.
    Type: Application
    Filed: November 10, 2021
    Publication date: May 11, 2023
    Inventors: Gang Pan, Joachim Pehserl, Dong Zhang, Baris Evrim Demiroz, Samuel Rupp Ogden, Tae Eun Choe, Sangmin Oh
  • Patent number: 11484589
    Abstract: The present invention relates to a swine fever antigen fused with a porcine Fc fragment and, more particularly, to a vaccine composition having an immune-enhancing effect obtained by binding the Fc fragment to a swine fever antigen, and a preparation method thereof.
    Type: Grant
    Filed: September 16, 2019
    Date of Patent: November 1, 2022
    Assignees: BIOAPPLICATIONS INC., REPUBLIC OF KOREA (ANIMAL AND PLANT QUARANTINE AGENCY)
    Inventors: Yongjik Lee, Dong-Jun An, Se Eun Choe, Jae-Young Song, In-Soo Cho