Patents by Inventor Kuan-Hui Lee

Kuan-Hui Lee has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11922640
    Abstract: A method for 3D object tracking is described. The method includes inferring first 2D semantic keypoints of a 3D object within a sparsely annotated video stream. The method also includes matching the first 2D semantic keypoints of a current frame with second 2D semantic keypoints in a next frame of the sparsely annotated video stream using embedded descriptors within the current frame and the next frame. The method further includes warping the first 2D semantic keypoints to the second 2D semantic keypoints to form warped 2D semantic keypoints in the next frame. The method also includes labeling a 3D bounding box in the next frame according to the warped 2D semantic keypoints in the next frame.
    Type: Grant
    Filed: March 8, 2021
    Date of Patent: March 5, 2024
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Arjun Bhargava, Sudeep Pillai, Kuan-Hui Lee
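    A minimal Python sketch of the descriptor-matching and warping step described above (not the patented implementation); the dense descriptor maps, array shapes, and function names are illustrative assumptions.
```python
# Hypothetical sketch: propagate 2D semantic keypoints from the current frame to
# the next frame by matching per-keypoint descriptors against a dense descriptor
# map of the next frame (nearest neighbor in cosine similarity).
import numpy as np

def warp_keypoints(keypoints, curr_descriptor_map, next_descriptor_map):
    """keypoints: (N, 2) integer (x, y) locations in the current frame.
    curr/next_descriptor_map: (H, W, D) embedded descriptors for each frame.
    Returns (N, 2) matched (x, y) locations in the next frame."""
    h, w, d = next_descriptor_map.shape
    desc = curr_descriptor_map[keypoints[:, 1], keypoints[:, 0]]        # (N, D)
    flat = next_descriptor_map.reshape(-1, d)                           # (H*W, D)
    flat = flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-8)
    desc = desc / (np.linalg.norm(desc, axis=1, keepdims=True) + 1e-8)
    best = (desc @ flat.T).argmax(axis=1)        # highest cosine similarity per keypoint
    ys, xs = np.unravel_index(best, (h, w))
    return np.stack([xs, ys], axis=1)

kp = np.array([[10, 12], [30, 40], [50, 20]])
curr_map = np.random.randn(64, 64, 16)
next_map = np.random.randn(64, 64, 16)
print(warp_keypoints(kp, curr_map, next_map).shape)  # (3, 2)
```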
  • Patent number: 11878684
    Abstract: A system for trajectory prediction using a predicted endpoint conditioned network includes one or more processors and a memory that includes a sensor input module, an endpoint distribution module, and a future trajectory module. The modules cause the one or more processors to obtain sensor data of a scene having a plurality of pedestrians, determine endpoint distributions of the plurality of pedestrians within the scene, the endpoint distributions representing desired end destinations of the plurality of pedestrians from the scene, and determine future trajectory points for at least one of the plurality of pedestrians based on prior trajectory points of the plurality of pedestrians and the endpoint distributions of the plurality of pedestrians. The future trajectory points may be conditioned not only on the pedestrian and their immediate neighbors' histories (observed trajectories) but also on all the other pedestrians' estimated endpoints.
    Type: Grant
    Filed: September 30, 2020
    Date of Patent: January 23, 2024
    Assignee: Toyota Research Institute, Inc.
    Inventors: Karttikeya Mangalam, Kuan-Hui Lee, Adrien David Gaidon
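    A rough Python sketch of endpoint conditioning, assuming a Gaussian endpoint distribution and a placeholder interpolation decoder; the real system uses learned modules, so everything below is illustrative.
```python
# Hypothetical sketch: sample candidate endpoints for a pedestrian from an
# estimated endpoint distribution, then generate future trajectory points
# conditioned on each sampled endpoint (simple interpolation as a stand-in
# for the learned future trajectory module).
import numpy as np

def predict_futures(observed, endpoint_mean, endpoint_cov, horizon=12, samples=5):
    """observed: (T, 2) past positions. Returns (samples, horizon, 2) futures."""
    rng = np.random.default_rng(0)
    start = observed[-1]                                   # last observed position
    endpoints = rng.multivariate_normal(endpoint_mean, endpoint_cov, size=samples)
    steps = np.linspace(1.0 / horizon, 1.0, horizon)[None, :, None]   # (1, H, 1)
    return start + steps * (endpoints[:, None, :] - start)            # (S, H, 2)

observed = np.array([[0.0, 0.0], [0.5, 0.1], [1.0, 0.2]])
futures = predict_futures(observed, endpoint_mean=np.array([6.0, 1.0]),
                          endpoint_cov=0.25 * np.eye(2))
print(futures.shape)  # (5, 12, 2)
```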
  • Patent number: 11810367
    Abstract: Described herein are systems and methods for determining if a vehicle is parked. In one example, a system includes a processor, a sensor system, and a memory. Both the sensor system and the memory are in communication with the processor. The memory includes a parking determination module having instructions that, when executed by the processor, cause the processor to determine, using a random forest model, when the vehicle is parked based on vehicle estimated features, vehicle learned features, and vehicle taillight features of the vehicle that are based on sensor data from the sensor system.
    Type: Grant
    Filed: June 29, 2021
    Date of Patent: November 7, 2023
    Assignee: Toyota Research Institute, Inc.
    Inventors: Chao Fang, Kuan-Hui Lee, Logan Michael Ellis, Jia-En Pan, Kun-Hsin Chen, Sudeep Pillai, Daniele Molinari, Constantin Franziskus Dominik Hubmann, T. Wolfram Burgard
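    The abstract names a random forest over estimated, learned, and taillight features; below is a minimal scikit-learn sketch with placeholder features and toy labels, not the trained model from the patent.
```python
# Hypothetical sketch: a random forest deciding whether an observed vehicle is
# parked from a concatenated feature vector. Feature layout and data are toys.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Columns: [speed estimate, distance to curb, 4-dim learned embedding, taillight-on score]
X_train = rng.random((200, 7))
y_train = (X_train[:, 0] < 0.2).astype(int)       # toy label: low speed -> "parked"

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

observation = rng.random((1, 7))
print("parked probability:", model.predict_proba(observation)[0, 1])
```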
  • Publication number: 20230351244
    Abstract: System, methods, and other embodiments described herein relate to a manner of generating and relating frames that improves the retrieval of sensor and agent data for processing by different vehicle tasks. In one embodiment, a method includes acquiring sensor data by a vehicle. The method also includes generating a frame including the sensor data and agent perceptions determined from the sensor data at a timestamp, the agent perceptions including multi-dimensional data that describes features for surrounding vehicles of the vehicle. The method also includes relating the frame to other frames of the vehicle by track, the other frames having processed data from various times and the track having a predetermined window of scene information associated with an agent. The method also includes training a learning model using the agent perceptions accessed from the track.
    Type: Application
    Filed: April 29, 2022
    Publication date: November 2, 2023
    Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
    Inventors: Chao Fang, Charles Christopher Ochoa, Kuan-Hui Lee, Kun-Hsin Chen, Visak Kumar
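    A minimal Python sketch of the frame-and-track bookkeeping described above; field names, the window size, and array shapes are assumptions for illustration.
```python
# Hypothetical sketch: each frame stores sensor data plus agent perceptions at a
# timestamp; a track keeps a sliding window of frames for one agent and exposes
# the agent perceptions as a training batch.
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class Frame:
    timestamp: float
    sensor_data: np.ndarray         # e.g. a point cloud or image tensor
    agent_perceptions: np.ndarray   # (num_agents, feature_dim) multi-dimensional features

@dataclass
class Track:
    agent_id: int
    window_size: int = 10
    frames: List[Frame] = field(default_factory=list)

    def add(self, frame: Frame) -> None:
        self.frames.append(frame)
        self.frames = self.frames[-self.window_size:]   # keep a fixed window of scene information

    def training_batch(self) -> np.ndarray:
        return np.stack([f.agent_perceptions for f in self.frames])

track = Track(agent_id=7)
track.add(Frame(0.0, np.zeros((100, 3)), np.ones((5, 8))))
print(track.training_batch().shape)  # (1, 5, 8)
```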
  • Publication number: 20230351886
    Abstract: A method for vehicle prediction, planning, and control is described. The method includes separately encoding traffic state information at an intersection into corresponding traffic state latent spaces. The method also includes aggregating the corresponding traffic state latent spaces to form a generalized traffic geometry latent space. The method further includes interpreting the generalized traffic geometry latent space to form a traffic flow map including current and future vehicle trajectories. The method also includes decoding the generalized traffic geometry latent space to predict a vehicle behavior according to the traffic flow map based on the current and future vehicle trajectories.
    Type: Application
    Filed: April 28, 2022
    Publication date: November 2, 2023
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Kuan-Hui LEE, Charles Christopher OCHOA, Arjun BHARGAVA, Chao FANG, Kun-Hsin CHEN
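    A small PyTorch sketch of the encode-aggregate-decode structure described above; the linear layers, dimensions, and mean aggregation are placeholders for the actual learned models.
```python
# Hypothetical sketch: separate encoders map each traffic-state input at an
# intersection into its own latent space, the latents are aggregated into a
# generalized latent, and a decoder reads out future trajectory points.
import torch
import torch.nn as nn

class TrafficLatentAggregator(nn.Module):
    def __init__(self, input_dims=(16, 8, 4), latent_dim=32, horizon=10):
        super().__init__()
        self.encoders = nn.ModuleList([nn.Linear(d, latent_dim) for d in input_dims])
        self.decoder = nn.Linear(latent_dim, horizon * 2)   # (x, y) per future step
        self.horizon = horizon

    def forward(self, states):
        # states: one tensor per traffic-state modality, each of shape (B, d_i)
        latents = [enc(s) for enc, s in zip(self.encoders, states)]
        generalized = torch.stack(latents, dim=0).mean(dim=0)   # aggregate latent spaces
        return self.decoder(generalized).view(-1, self.horizon, 2)

model = TrafficLatentAggregator()
out = model([torch.randn(2, 16), torch.randn(2, 8), torch.randn(2, 4)])
print(out.shape)  # torch.Size([2, 10, 2])
```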
  • Publication number: 20230351739
    Abstract: Systems, methods, and other embodiments described herein relate to a multi-task model that integrates recurrent models to improve handling of multi-sweep inputs. In one embodiment, a method includes acquiring sensor data from multiple modalities. The method includes separately encoding respective segments of the sensor data according to an associated one of the different modalities to form encoded features using separate encoders of a network. The method includes accumulating, in a detector, sparse features associated with sparse sensor inputs of the multiple modalities to densify the sparse features into dense features. The method includes providing observations according to the encoded features and the sparse features using the network.
    Type: Application
    Filed: April 29, 2022
    Publication date: November 2, 2023
    Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
    Inventors: Kuan-Hui Lee, Charles Christopher Ochoa, Arjun Bhargava, Chao Fang, Kun-Hsin Chen
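    A minimal NumPy sketch of the accumulation idea, assuming sparse bird's-eye-view feature maps per sweep; the element-wise max is a stand-in for the detector's learned accumulation.
```python
# Hypothetical sketch: accumulate sparse feature maps from multiple LiDAR sweeps
# so that cells observed in any sweep become populated (densifying the features).
import numpy as np

def accumulate_sweeps(sparse_feature_maps):
    """sparse_feature_maps: (num_sweeps, H, W, C), zeros where a cell is empty.
    Returns (H, W, C) densified features via an element-wise max across sweeps."""
    return np.max(sparse_feature_maps, axis=0)

sweeps = np.zeros((3, 8, 8, 4))
sweeps[0, 1, 1] = 1.0
sweeps[2, 5, 5] = 2.0
dense = accumulate_sweeps(sweeps)
print(dense[1, 1, 0], dense[5, 5, 0])  # 1.0 2.0
```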
  • Publication number: 20230350050
    Abstract: The disclosure generally relates to methods for gathering radar measurements, wherein the radar measurements include one or more angular uncertainties, generating a two-dimensional radar uncertainty cloud, wherein the radar uncertainty cloud includes one or more shaded regions that each represent an angular uncertainty, capturing image data, wherein the image data includes one or more targets within a region of interest, and fusing the two-dimensional radar uncertainty cloud with the image data to overlay the one or more regions of uncertainty over a target.
    Type: Application
    Filed: April 27, 2022
    Publication date: November 2, 2023
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Charles Christopher Ochoa, Arjun Bhargava, Chao Fang, Kun-Hsin Chen, Kuan-Hui Lee
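    A toy Python sketch of rasterizing one radar return's angular uncertainty into a shaded cloud and blending it over an image; the column-to-bearing mapping and blending rule are simplifying assumptions.
```python
# Hypothetical sketch: build a 2D uncertainty cloud from a radar bearing and its
# angular standard deviation, then alpha-blend it over a grayscale camera image
# so shaded regions mark where the target could be.
import numpy as np

def uncertainty_overlay(image, bearing_deg, sigma_deg, fov_deg=90.0, alpha=0.4):
    """image: (H, W) grayscale; bearing_deg: radar bearing; sigma_deg: angular std."""
    h, w = image.shape
    column_bearings = np.linspace(-fov_deg / 2, fov_deg / 2, w)   # bearing per image column
    weight = np.exp(-0.5 * ((column_bearings - bearing_deg) / sigma_deg) ** 2)   # (W,)
    cloud = np.tile(weight, (h, 1))                               # shade whole columns
    return (1 - alpha) * image + alpha * cloud * image.max()

img = np.random.rand(120, 160)
fused = uncertainty_overlay(img, bearing_deg=10.0, sigma_deg=3.0)
print(fused.shape)  # (120, 160)
```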
  • Publication number: 20230351767
    Abstract: A method for generating a dense light detection and ranging (LiDAR) representation by a vision system includes receiving, at a sparse depth network, one or more sparse representations of an environment. The method also includes generating a depth estimate of the environment depicted in an image captured by an image capturing sensor. The method further includes generating, via the sparse depth network, one or more sparse depth estimates based on receiving the one or more sparse representations. The method also includes fusing the depth estimate and the one or more sparse depth estimates to generate a dense depth estimate. The method further includes generating the dense LiDAR representation based on the dense depth estimate and controlling an action of the vehicle based on identifying a three-dimensional object in the dense LiDAR representation.
    Type: Application
    Filed: April 28, 2022
    Publication date: November 2, 2023
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Arjun BHARGAVA, Chao FANG, Charles Christopher OCHOA, Kun-Hsin CHEN, Kuan-Hui LEE, Vitor GUIZILINI
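    A small NumPy sketch of the fusion step, assuming the sparse estimates are already projected into the image grid with NaN where no measurement exists; the blending weight stands in for the learned fusion.
```python
# Hypothetical sketch: blend a dense monocular depth estimate with sparse depth
# estimates wherever the sparse values exist, producing the dense depth map from
# which a dense LiDAR-like representation can be generated.
import numpy as np

def fuse_depth(dense_depth, sparse_depth, sparse_weight=0.8):
    """dense_depth: (H, W); sparse_depth: (H, W) with NaN where no measurement exists."""
    fused = dense_depth.copy()
    mask = ~np.isnan(sparse_depth)
    fused[mask] = sparse_weight * sparse_depth[mask] + (1 - sparse_weight) * dense_depth[mask]
    return fused

dense = np.full((4, 4), 10.0)
sparse = np.full((4, 4), np.nan)
sparse[1, 2] = 8.0
print(fuse_depth(dense, sparse)[1, 2])  # 8.4
```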
  • Publication number: 20230351766
    Abstract: A method for controlling an ego vehicle in an environment includes determining, via a flow model of a parked vehicle recognition system, a flow between a first representation of the environment and a second representation of the environment. The method also includes determining, via a velocity model of the parked vehicle recognition system, a velocity of a vehicle in the environment based on the flow. The method further includes determining, via a parked vehicle classification model of the parked vehicle recognition system, that the vehicle is parked based on the velocity of the vehicle and one or more features associated with the vehicle and/or the environment. The method still further includes planning a trajectory of the ego vehicle based on determining that the vehicle is parked.
    Type: Application
    Filed: April 28, 2022
    Publication date: November 2, 2023
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Kuan-Hui LEE, Charles Christopher OCHOA, Arjun BHARGAVA, Chao FANG
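    A toy Python sketch of the flow-to-velocity-to-classification chain; the thresholds and context features replace the learned flow, velocity, and classification models and are purely illustrative.
```python
# Hypothetical sketch: estimate a vehicle's velocity from frame-to-frame flow
# inside its bounding box, then classify "parked" from the velocity plus simple
# context features.
import numpy as np

def vehicle_velocity(flow, box, dt, meters_per_pixel):
    """flow: (H, W, 2) pixel displacements between two frames; box: (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    mean_flow = flow[y0:y1, x0:x1].reshape(-1, 2).mean(axis=0)
    return np.linalg.norm(mean_flow) * meters_per_pixel / dt      # meters per second

def is_parked(velocity_mps, near_curb, hazards_on):
    score = (velocity_mps < 0.2) + near_curb + hazards_on         # simple voting
    return score >= 2

flow = np.zeros((100, 200, 2))
v = vehicle_velocity(flow, box=(20, 30, 60, 80), dt=0.1, meters_per_pixel=0.05)
print(is_parked(v, near_curb=True, hazards_on=False))  # True
```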
  • Publication number: 20230351774
    Abstract: A method for controlling an ego vehicle in an environment includes associating, by a velocity model, one or more objects within the environment with a respective velocity instance label. The method also includes selectively focusing, by a recurrent network of the taillight recognition system, on a selected region of the sequence of images according to a spatial attention model for a vehicle taillight recognition task. The method further includes concatenating the selected region with the respective velocity instance label of each object of the one or more objects within the environment to generate a concatenated region label. The method still further includes planning a trajectory of the ego vehicle based on inferring, at a classifier of the taillight recognition system, an intent of each object of the one or more objects according to a respective taillight state of each object, as determined based on the concatenated region label.
    Type: Application
    Filed: April 28, 2022
    Publication date: November 2, 2023
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Kuan-Hui LEE, Charles Christopher OCHOA, Arjun BHARGAVA, Chao FANG
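    A compact PyTorch sketch of the attend-then-concatenate structure described above; channel counts, the attention head, and the classifier are illustrative assumptions, not the patented models.
```python
# Hypothetical sketch: a spatial attention map selects a region of the image
# features, and the attended feature is concatenated with the object's velocity
# instance label before classifying the taillight state.
import torch
import torch.nn as nn

class AttendAndConcat(nn.Module):
    def __init__(self, channels=32, velocity_dim=4, num_states=3):
        super().__init__()
        self.attention = nn.Conv2d(channels, 1, kernel_size=1)    # spatial attention logits
        self.classifier = nn.Linear(channels + velocity_dim, num_states)

    def forward(self, feature_map, velocity_label):
        # feature_map: (B, C, H, W); velocity_label: (B, velocity_dim)
        attn = torch.softmax(self.attention(feature_map).flatten(2), dim=-1)   # (B, 1, H*W)
        attended = (feature_map.flatten(2) * attn).sum(dim=-1)                 # (B, C)
        concatenated = torch.cat([attended, velocity_label], dim=1)            # region + label
        return self.classifier(concatenated)                                   # taillight state logits

model = AttendAndConcat()
print(model(torch.randn(2, 32, 16, 16), torch.randn(2, 4)).shape)  # torch.Size([2, 3])
```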
  • Publication number: 20230351773
    Abstract: System, methods, and other embodiments described herein relate to detection of traffic lights corresponding to a driving lane from views captured by multiple cameras. In one embodiment, a method includes estimating, by a first model using images from multiple cameras, positions and state confidences of traffic lights corresponding to a driving lane of a vehicle. The method also includes aggregating, by a second model, the state confidences and a multi-view stereo composition from geometric representations associated with the positions of the traffic lights. The method also includes assigning, by the second model according to the aggregating, a relevancy score computed for a candidate traffic light of the traffic lights to the driving lane. The method also includes executing a task by the vehicle according to the relevancy score.
    Type: Application
    Filed: April 28, 2022
    Publication date: November 2, 2023
    Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
    Inventors: Kun-Hsin Chen, Kuan-Hui Lee, Chao Fang, Charles Christopher Ochoa
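    A toy Python sketch of aggregating per-camera detections and scoring relevance to the driving lane; the confidence-weighted fusion and exponential scoring are placeholders for the learned second model.
```python
# Hypothetical sketch: fuse multi-camera estimates of one traffic light's
# position (weighted by state confidence), then score its relevance to the ego
# driving lane from the fused lateral offset.
import numpy as np

def relevancy_score(detections, lane_center_y=0.0, lane_half_width=1.8):
    """detections: list of (position_xyz, state_confidence) from multiple cameras."""
    positions = np.array([p for p, _ in detections])
    confidences = np.array([c for _, c in detections])
    fused_pos = (positions * confidences[:, None]).sum(axis=0) / confidences.sum()
    lateral_offset = abs(fused_pos[1] - lane_center_y)
    geometric = np.exp(-lateral_offset / lane_half_width)   # nearer the lane -> higher score
    return geometric * confidences.mean()

dets = [(np.array([30.0, 0.5, 5.0]), 0.9), (np.array([30.5, 0.7, 5.1]), 0.8)]
print(round(relevancy_score(dets), 3))
```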
  • Publication number: 20230343109
    Abstract: System, methods, and other embodiments described herein relate to improving the detection of traffic lights associated with a driving lane using a camera instead of map data. In one embodiment, a method includes estimating, from an image using a first model, depth and orientation information of traffic lights relative to a driving lane of a vehicle. The method also includes computing, using a second model, relevancy scores for the traffic lights according to geometric inferences between the depth and the orientation information. The method also includes assigning, using the second model, a primary relevancy score for a light of the traffic lights associated with the driving lane according to the depth and the orientation information. The method also includes executing a control task by the vehicle according to the primary relevancy score and a state confidence, computed by the first model, for the light.
    Type: Application
    Filed: April 22, 2022
    Publication date: October 26, 2023
    Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
    Inventors: Kun-Hsin Chen, Kuan-Hui Lee, Chao Fang, Charles Christopher Ochoa
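    A toy Python sketch of turning per-light depth and orientation into a relevancy score and picking the primary light; the scoring formula is an assumption, not the learned second model.
```python
# Hypothetical sketch: score each detected light by how directly it faces the
# ego vehicle, how close it is, and how centered it is on the driving lane, then
# take the best-scoring light as the primary one.
import numpy as np

def relevancy(depth_m, facing_angle_deg, lateral_offset_m):
    facing = max(0.0, np.cos(np.radians(facing_angle_deg)))   # 1.0 when facing the vehicle
    proximity = np.exp(-depth_m / 80.0)
    centering = np.exp(-abs(lateral_offset_m) / 2.0)
    return facing * proximity * centering

lights = [(40.0, 5.0, 0.5), (35.0, 60.0, 4.0)]    # (depth, facing angle, lateral offset)
scores = [relevancy(*light) for light in lights]
print("primary light index:", int(np.argmax(scores)))  # 0
```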
  • Publication number: 20230334876
    Abstract: A method for an end-to-end boundary lane detection system is described. The method includes gridding a red-green-blue (RGB) image captured by a camera sensor mounted on an ego vehicle into a plurality of image patches. The method also includes generating different image patch embeddings to provide correlations between the plurality of image patches and the RGB image. The method further includes encoding the different image patch embeddings into predetermined categories, grid offsets, and instance identifications. The method also includes generating lane boundary keypoints of the RGB image based on the encoding of the different image patch embeddings.
    Type: Application
    Filed: April 14, 2022
    Publication date: October 19, 2023
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Kun-Hsin CHEN, Shunsho KAKU, Jie LI, Steven PARKISON, Jeffrey M. WALLS, Kuan-Hui LEE
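    A minimal NumPy sketch of the gridding step only (splitting the RGB image into the patches over which embeddings, offsets, and instance identifications would later be predicted); the patch size is an arbitrary assumption.
```python
# Hypothetical sketch: split an RGB image into fixed-size patches and record
# each patch's (row, col) pixel origin.
import numpy as np

def grid_image(rgb, patch=32):
    """rgb: (H, W, 3) with H and W divisible by `patch`.
    Returns (num_patches, patch, patch, 3) patches and their pixel origins."""
    h, w, c = rgb.shape
    patches = (rgb.reshape(h // patch, patch, w // patch, patch, c)
                  .transpose(0, 2, 1, 3, 4)
                  .reshape(-1, patch, patch, c))
    origins = [(row * patch, col * patch)
               for row in range(h // patch) for col in range(w // patch)]
    return patches, origins

img = np.zeros((224, 416, 3), dtype=np.uint8)
patches, origins = grid_image(img)
print(patches.shape, origins[0])  # (91, 32, 32, 3) (0, 0)
```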
  • Publication number: 20230334873
    Abstract: System, methods, and other embodiments described herein relate to accurately distinguishing a traffic light from other illuminated objects in the traffic scene and detecting states using hierarchical modeling. In one embodiment, a method includes detecting, using a machine learning (ML) model, two-dimensional (2D) coordinates of illuminated objects identified from a monocular image of a traffic scene for control adaptation by a control model. The method also includes assigning, using the ML model, computed probabilities to the illuminated objects for categories within a hierarchical ontology of environmental lights associated with the traffic scene, wherein one of the probabilities indicates existence of a traffic light instead of a brake light in the traffic scene. The method also includes executing a task by the control model for a vehicle according to the 2D coordinates and the computed probabilities.
    Type: Application
    Filed: April 15, 2022
    Publication date: October 19, 2023
    Inventors: Kun-Hsin Chen, Kuan-Hui Lee, Chao Fang, Charles Christopher Ochoa
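    A tiny Python sketch of the hierarchical readout, assuming an illustrative two-level ontology; the categories and probabilities are toys, not the model's actual output space.
```python
# Hypothetical sketch: leaf probabilities from the ML model are summed up a
# small ontology so the controller can compare "traffic light" against
# "vehicle light" (e.g. brake light) before acting on a detection.
ontology = {
    "traffic_light": ["red", "yellow", "green"],
    "vehicle_light": ["brake", "turn_signal"],
}

def parent_probabilities(leaf_probs):
    """leaf_probs: dict mapping each leaf category to its predicted probability."""
    return {parent: sum(leaf_probs[leaf] for leaf in leaves)
            for parent, leaves in ontology.items()}

leaf = {"red": 0.55, "yellow": 0.05, "green": 0.10, "brake": 0.25, "turn_signal": 0.05}
print(parent_probabilities(leaf))  # traffic_light ~= 0.70, vehicle_light ~= 0.30
```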
  • Patent number: 11776281
    Abstract: A traffic light classification system for a vehicle includes an image capture device to capture an image of a scene that includes a traffic light with multiple light signals, a processor, and a memory communicably coupled to the processor and storing a first neural network module including instructions that when executed by the processor cause the processor to determine, based on inputting the image into a neural network, a semantic keypoint for each light signal in the traffic light, and determine, based on each semantic keypoint, a classification state of each light signal.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: October 3, 2023
    Assignee: Toyota Research Institute, Inc.
    Inventors: Kun-Hsin Chen, Kuan-Hui Lee, Jia-En Pan, Sudeep Pillai
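    A toy Python sketch of reading a classification state from per-signal semantic keypoints; the keypoint format and threshold are assumptions for illustration.
```python
# Hypothetical sketch: each light signal (bulb) has a semantic keypoint with an
# "illuminated" score; the traffic light's state is whichever signal is lit.
def classify_traffic_light(keypoints, on_threshold=0.5):
    """keypoints: list of dicts with 'name' ('red'/'yellow'/'green') and 'score'."""
    lit = [kp["name"] for kp in keypoints if kp["score"] >= on_threshold]
    if len(lit) == 1:
        return lit[0]
    return "unknown" if not lit else "+".join(lit)

kps = [{"name": "red", "score": 0.92},
       {"name": "yellow", "score": 0.07},
       {"name": "green", "score": 0.11}]
print(classify_traffic_light(kps))  # red
```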
  • Publication number: 20230252799
    Abstract: In one embodiment, a signal light state detection system includes one or more processors and a non-transitory memory module storing computer-readable instructions. The computer-readable instructions are configured to cause the one or more processors to receive a first image of a vehicle and receive a second image of the vehicle, wherein the second image is later in time than the first image, and generate a warped image from the first image and the second image, wherein the warped image has individual pixels of one of the first image and the second image that are shifted to locations of corresponding pixels of the other of the first image and the second image. The one or more processors further generate a difference image from the warped image and one of the first image and the second image, and determine, using a classifier module, a probability of a state of vehicle signal lights.
    Type: Application
    Filed: February 9, 2022
    Publication date: August 10, 2023
    Applicant: Toyota Research Institute, Inc.
    Inventors: Naoki Nagasaka, Blake Wulfe, Kuan-Hui Lee, Jia-En Marcus Pan
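    A small NumPy sketch of the warp-difference-classify chain, assuming an integer pixel flow and a toy logistic readout in place of the classifier module.
```python
# Hypothetical sketch: shift the first image by a per-pixel flow to align it
# with the second (the "warped image"), take the difference image, and map a
# summary feature of the difference to a signal-light-state probability.
import numpy as np

def warp_by_flow(image, flow):
    """image: (H, W); flow: (H, W, 2) integer (dx, dy) shifts toward the second image."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(xs - flow[..., 0].astype(int), 0, w - 1)
    src_y = np.clip(ys - flow[..., 1].astype(int), 0, h - 1)
    return image[src_y, src_x]

def signal_light_probability(img_t0, img_t1, flow):
    warped = warp_by_flow(img_t0, flow)
    diff = np.abs(img_t1 - warped)                        # residual brightness change
    feature = diff.mean()                                 # e.g. a blinker region lighting up
    return 1.0 / (1.0 + np.exp(-10 * (feature - 0.1)))    # toy logistic classifier

a = np.zeros((60, 80))
b = np.zeros((60, 80))
b[20:30, 40:50] = 1.0                                     # a light turns on between frames
print(round(signal_light_probability(a, b, np.zeros((60, 80, 2))), 3))
```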
  • Patent number: 11721065
    Abstract: A method for 3D object modeling includes linking 2D semantic keypoints of an object within a video stream into a 2D structured object geometry. The method includes inputting, to a neural network, the object to generate a 2D NOCS image and a shape vector, the shape vector being mapped to a continuously traversable coordinate shape space. The method includes applying a differentiable shape renderer to the SDF shape and the 2D NOCS image to render a shape of the object corresponding to a 3D object model in the continuously traversable coordinate shape space. The method includes lifting the linked 2D semantic keypoints of the 2D structured object geometry to a 3D structured object geometry. The method includes geometrically and projectively aligning the 3D object model, the 3D structured object geometry, and the rendered shape to form a rendered object. The method includes generating 3D bounding boxes from the rendered object.
    Type: Grant
    Filed: August 25, 2022
    Date of Patent: August 8, 2023
    Inventors: Arjun Bhargava, Sudeep Pillai, Kuan-Hui Lee, Kun-Hsin Chen
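    A minimal Python sketch of just one step described above, lifting 2D semantic keypoints to 3D using per-keypoint depths and pinhole intrinsics; the NOCS/SDF rendering and alignment stages are not reproduced, and all values are illustrative.
```python
# Hypothetical sketch: back-project 2D semantic keypoints to camera-frame 3D
# points given their depths and the camera intrinsics.
import numpy as np

def lift_keypoints(keypoints_2d, depths, fx, fy, cx, cy):
    """keypoints_2d: (N, 2) pixel (u, v); depths: (N,) in meters. Returns (N, 3) points."""
    u, v = keypoints_2d[:, 0], keypoints_2d[:, 1]
    x = (u - cx) / fx * depths
    y = (v - cy) / fy * depths
    return np.stack([x, y, depths], axis=1)

kps = np.array([[320.0, 240.0], [400.0, 260.0]])
print(lift_keypoints(kps, np.array([12.0, 12.5]), fx=700, fy=700, cx=320, cy=240))
```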
  • Patent number: 11625839
    Abstract: Systems and methods for determining velocity of an object associated with a three-dimensional (3D) scene may include: a LIDAR system generating two sets of 3D point cloud data of the scene from two consecutive point cloud sweeps; a pillar feature network encoding data of the point cloud data to extract two-dimensional (2D) bird's-eye-view embeddings for each of the point cloud data sets in the form of pseudo images, wherein the 2D bird's-eye-view embeddings for a first of the two point cloud data sets comprise pillar features for the first point cloud data set and the 2D bird's-eye-view embeddings for a second of the two point cloud data sets comprise pillar features for the second point cloud data set; and a feature pyramid network encoding the pillar features and performing a 2D optical flow estimation to estimate the velocity of the object.
    Type: Grant
    Filed: May 18, 2020
    Date of Patent: April 11, 2023
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Kuan-Hui Lee, Sudeep Pillai, Adrien David Gaidon
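    A NumPy sketch of the pillarization step only (scattering one sweep into a bird's-eye-view pseudo image); the grid extents and the mean-height feature are simplifying assumptions in place of the learned pillar feature network.
```python
# Hypothetical sketch: bin LiDAR points into a bird's-eye-view grid of pillars
# and keep the mean height per pillar, giving a pseudo image that a downstream
# network could compare between two consecutive sweeps to estimate motion.
import numpy as np

def pillar_pseudo_image(points, grid=(100, 100), extent=50.0):
    """points: (N, 3) x, y, z in meters. Returns a (grid_x, grid_y) mean-z pseudo image."""
    gx, gy = grid
    ix = ((points[:, 0] + extent) / (2 * extent) * gx).astype(int).clip(0, gx - 1)
    iy = ((points[:, 1] + extent) / (2 * extent) * gy).astype(int).clip(0, gy - 1)
    total = np.zeros(grid)
    count = np.zeros(grid)
    np.add.at(total, (ix, iy), points[:, 2])
    np.add.at(count, (ix, iy), 1)
    return np.divide(total, count, out=np.zeros(grid), where=count > 0)

sweep = np.random.uniform(-50, 50, size=(10000, 3))
print(pillar_pseudo_image(sweep).shape)  # (100, 100)
```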