Patents by Inventor Kun-Hsin Chen

Kun-Hsin Chen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12148223
    Abstract: A method for generating a dense light detection and ranging (LiDAR) representation by a vision system includes receiving, at a sparse depth network, one or more sparse representations of an environment. The method also includes generating a depth estimate of the environment depicted in an image captured by an image capturing sensor. The method further includes generating, via the sparse depth network, one or more sparse depth estimates based on receiving the one or more sparse representations. The method also includes fusing the depth estimate and the one or more sparse depth estimates to generate a dense depth estimate. The method further includes generating the dense LiDAR representation based on the dense depth estimate and controlling an action of a vehicle based on identifying a three-dimensional object in the dense LiDAR representation.
    Type: Grant
    Filed: April 28, 2022
    Date of Patent: November 19, 2024
    Assignees: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Arjun Bhargava, Chao Fang, Charles Christopher Ochoa, Kun-Hsin Chen, Kuan-Hui Lee, Vitor Guizilini
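
The abstract above describes fusing a dense image-based depth estimate with sparse depth estimates and then building a LiDAR-like representation. The following is a minimal NumPy sketch of that general idea; the confidence-weighted blend, the function names, and the pinhole back-projection are illustrative assumptions rather than the patented method.

```python
import numpy as np

def fuse_depth(mono_depth, sparse_depth, sparse_conf):
    """Blend a dense image-based depth map with sparse depth estimates.

    mono_depth:   (H, W) dense depth predicted from the camera image
    sparse_depth: (H, W) sparse depth estimates, 0 where no estimate exists
    sparse_conf:  (H, W) confidence in [0, 1] for the sparse estimates
    """
    valid = sparse_depth > 0
    fused = mono_depth.copy()
    # Hypothetical fusion rule: confidence-weighted blend where sparse data exist.
    fused[valid] = (sparse_conf[valid] * sparse_depth[valid]
                    + (1.0 - sparse_conf[valid]) * mono_depth[valid])
    return fused

def depth_to_point_cloud(depth, K):
    """Back-project a dense depth map into a LiDAR-like 3-D point cloud (N x 3)."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pixels = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N
    rays = np.linalg.inv(K) @ pixels            # camera-frame ray per pixel
    return (rays * depth.reshape(-1)).T         # scale rays by depth -> N x 3
```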
  • Patent number: 12141235
    Abstract: Datasets for autonomous driving systems and multi-modal scenes may be automatically labeled using previously trained models as priors to mitigate the limitations of conventional manual data labeling. Properly versioned models, including model weights and knowledge of the dataset on which the model was previously trained, may be used to run an inference operation on unlabeled data, thus automatically labeling the dataset. The newly labeled dataset may then be used to train new models, including on sparse data sets, in a semi-supervised or weakly-supervised fashion.
    Type: Grant
    Filed: April 16, 2021
    Date of Patent: November 12, 2024
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Allan Raventos, Arjun Bhargava, Kun-Hsin Chen, Sudeep Pillai, Adrien David Gaidon
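
The auto-labeling workflow above can be illustrated with a short pseudo-labeling loop. This is a generic sketch, assuming a classification-style model and a hypothetical confidence threshold; it is not the model-versioning machinery described in the patent.

```python
import torch

def auto_label(prior_model, unlabeled_loader, conf_threshold=0.9):
    """Use a previously trained (properly versioned) model to label new data.

    Predictions above the (hypothetical) confidence threshold are kept as
    pseudo-labels; the resulting pairs can then be mixed with a smaller
    hand-labeled set to train a new model in a semi- or weakly-supervised way.
    """
    prior_model.eval()
    pseudo_labeled = []
    with torch.no_grad():
        for images in unlabeled_loader:
            probs = torch.softmax(prior_model(images), dim=1)
            confidence, labels = probs.max(dim=1)
            keep = confidence >= conf_threshold
            if keep.any():
                pseudo_labeled.append((images[keep], labels[keep]))
    return pseudo_labeled
```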
  • Publication number: 20240371987
    Abstract: A semiconductor arrangement includes a first well formed to a first depth and a first width in a substrate and a second well formed to a second depth and a second width in the substrate. The first well is formed in the second well, the first depth is greater than the second depth, and the second width is greater than the first width. A source region is formed in the second well and a drain region is formed in the substrate.
    Type: Application
    Filed: July 18, 2024
    Publication date: November 7, 2024
    Inventors: Chi-Fu LIN, Cheng-Hsin CHEN, Ming-I HSU, Kun-Ming HUANG, Chien-Li KUO
  • Publication number: 20240363000
    Abstract: A method for vehicle prediction, planning, and control is described. The method includes separately encoding traffic state information at an intersection into corresponding traffic state latent spaces. The method also includes aggregating the corresponding traffic state latent spaces to form a generalized traffic geometry latent space. The method further includes interpreting the generalized traffic geometry latent space to form a traffic flow map including current and future vehicle trajectories. The method also includes decoding the generalized traffic geometry latent space to predict a vehicle behavior according to the traffic flow map based on the current and future vehicle trajectories.
    Type: Application
    Filed: July 11, 2024
    Publication date: October 31, 2024
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Kuan-Hui LEE, Charles Christopher OCHOA, Arjun BHARGAVA, Chao FANG, Kun-Hsin CHEN
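
The encode, aggregate, and decode pipeline described above can be sketched as a small PyTorch module. The layer sizes, the concatenation-based aggregation, and the waypoint decoder are assumptions made for illustration; the patent does not specify this architecture.

```python
import torch
import torch.nn as nn

class TrafficGeometrySketch(nn.Module):
    """Separately encode traffic-state inputs, aggregate the latents, and
    decode future trajectories. Dimensions are purely illustrative."""

    def __init__(self, state_dims, latent_dim=64, horizon=10):
        super().__init__()
        # One encoder per traffic-state modality (e.g., signals, agents, lanes).
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, 128), nn.ReLU(), nn.Linear(128, latent_dim))
            for d in state_dims)
        # Aggregate per-modality latents into a generalized traffic-geometry latent.
        self.aggregate = nn.Linear(latent_dim * len(state_dims), latent_dim)
        # Decode the aggregated latent into future (x, y) waypoints.
        self.decoder = nn.Linear(latent_dim, horizon * 2)
        self.horizon = horizon

    def forward(self, states):                    # states: list of (B, d_i) tensors
        latents = [enc(s) for enc, s in zip(self.encoders, states)]
        z = torch.relu(self.aggregate(torch.cat(latents, dim=-1)))
        return self.decoder(z).view(-1, self.horizon, 2)
```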
  • Publication number: 20240329361
    Abstract: An optical element driving mechanism is provided and includes a fixed assembly, a movable assembly, a driving assembly and a circuit assembly. The movable assembly is configured to connect an optical element, the movable assembly is movable relative to the fixed assembly, and the optical element has an optical axis. The driving assembly is configured to drive the movable assembly to move relative to the fixed assembly. The circuit assembly includes a plurality of circuits and is affixed to the fixed assembly.
    Type: Application
    Filed: June 7, 2024
    Publication date: October 3, 2024
    Inventors: Sin-Hong LIN, Yung-Ping YANG, Wen-Yen HUANG, Yu-Cheng LIN, Kun-Shih LIN, Chao-Chang HU, Yung-Hsien YEH, Mao-Kuo HSU, Chih-Wei WENG, Ching-Chieh HUANG, Chih-Shiang WU, Chun-Chia LIAO, Chia-Yu CHANG, Hung-Ping CHEN, Wei-Zhong LUO, Wen-Chang LIN, Shou-Jen LIU, Shao-Chung CHANG, Chen-Hsin HUANG, Meng-Ting LIN, Yen-Cheng CHEN, I-Mei HUANG, Yun-Fei WANG, Wei-Jhe SHEN
  • Patent number: 12100754
    Abstract: A semiconductor arrangement includes a first well formed to a first depth and a first width in a substrate and a second well formed to a second depth and a second width in the substrate. The first well is formed in the second well, the first depth is greater than the second depth, and the second width is greater than the first width. A source region is formed in the second well and a drain region is formed in the substrate.
    Type: Grant
    Filed: March 29, 2021
    Date of Patent: September 24, 2024
    Assignee: TAIWAN SEMICONDUCTOR MANUFACTURING COMPANY LIMITED
    Inventors: Chi-Fu Lin, Cheng-Hsin Chen, Ming-I Hsu, Kun-Ming Huang, Chien-Li Kuo
  • Patent number: 12087063
    Abstract: System, methods, and other embodiments described herein relate to detection of traffic lights corresponding to a driving lane from views captured by multiple cameras. In one embodiment, a method includes estimating, by a first model using images from multiple cameras, positions and state confidences of traffic lights corresponding to a driving lane of a vehicle. The method also includes aggregating, by a second model, the state confidences and a multi-view stereo composition from geometric representations associated with the positions of the traffic lights. The method also includes assigning, by the second model according to the aggregating, a relevancy score computed for a candidate traffic light of the traffic lights to the driving lane. The method also includes executing a task by the vehicle according to the relevancy score.
    Type: Grant
    Filed: April 28, 2022
    Date of Patent: September 10, 2024
    Assignees: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
    Inventors: Kun-Hsin Chen, Kuan-Hui Lee, Chao Fang, Charles Christopher Ochoa
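
The multi-camera aggregation step above can be illustrated with a small scoring function. The inputs (triangulated light positions and per-state confidences) and the distance/alignment heuristic below are assumptions for illustration, not the claimed scoring model.

```python
import numpy as np

def score_lights_for_lane(lights, lane_origin, lane_heading):
    """Assign a relevancy score to each candidate traffic light.

    lights:       list of dicts with 'position' (x, y, z from multi-view
                  geometry) and 'state_conf' (dict of state -> confidence)
    lane_origin:  (2,) start of the ego driving lane on the ground plane
    lane_heading: (2,) unit vector in the lane's direction of travel
    """
    scores = []
    for light in lights:
        offset = np.asarray(light['position'][:2]) - lane_origin
        distance = np.linalg.norm(offset) + 1e-6
        # Favor lights ahead of the lane and aligned with its heading.
        alignment = max(float(np.dot(offset / distance, lane_heading)), 0.0)
        best_state_conf = max(light['state_conf'].values())
        scores.append(alignment * best_state_conf / distance)
    return scores
```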
  • Patent number: 12080161
    Abstract: A method for vehicle prediction, planning, and control is described. The method includes separately encoding traffic state information at an intersection into corresponding traffic state latent spaces. The method also includes aggregating the corresponding traffic state latent spaces to form a generalized traffic geometry latent space. The method further includes interpreting the generalized traffic geometry latent space to form a traffic flow map including current and future vehicle trajectories. The method also includes decoding the generalized traffic geometry latent space to predict a vehicle behavior according to the traffic flow map based on the current and future vehicle trajectories.
    Type: Grant
    Filed: April 28, 2022
    Date of Patent: September 3, 2024
    Assignees: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Kuan-Hui Lee, Charles Christopher Ochoa, Arjun Bhargava, Chao Fang, Kun-Hsin Chen
  • Patent number: 12073633
    Abstract: System, methods, and other embodiments described herein relate to improving the detection of traffic lights associated with a driving lane using a camera instead of map data. In one embodiment, a method includes estimating, from an image using a first model, depth and orientation information of traffic lights relative to a driving lane of a vehicle. The method also includes computing, using a second model, relevancy scores for the traffic lights according to geometric inferences between the depth and the orientation information. The method also includes assigning, using the second model, a primary relevancy score for a light of the traffic lights associated with the driving lane according to the depth and the orientation information. The method also includes executing a control task by the vehicle according to the primary relevancy score and a state confidence, computed by the first model, for the light.
    Type: Grant
    Filed: April 22, 2022
    Date of Patent: August 27, 2024
    Assignees: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
    Inventors: Kun-Hsin Chen, Kuan-Hui Lee, Chao Fang, Charles Christopher Ochoa
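
For the single-camera case above, the geometric inference can be illustrated as a simple rule over estimated depth and orientation. The angle threshold and the "nearest facing light" selection are illustrative assumptions, not the patented scoring.

```python
def select_primary_light(lights, max_facing_angle_deg=30.0):
    """Pick the light most likely governing the ego lane from monocular estimates.

    lights: list of dicts with 'depth' (meters along the camera axis) and
            'orientation' (degrees; 0 when the light faces the ego lane).
    Returns the nearest light that roughly faces the lane, or None.
    """
    facing = [l for l in lights if abs(l['orientation']) <= max_facing_angle_deg]
    return min(facing, key=lambda l: l['depth']) if facing else None
```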
  • Patent number: 12073632
    Abstract: Systems and methods are provided for developing/leveraging a hierarchical ontology in traffic light perception. A hierarchical ontology representative of various traffic light characteristics (e.g., states, transitions, colors, shapes, etc.) allows for structured and/or automated annotation (in supervised machine learning), as well as the ability to bootstrap traffic light prediction. Further still, the use of a hierarchical ontology provides the ability to accommodate both coarse and fine-grained model prediction, as well as the ability to generate models that are applicable to different traffic light systems used, e.g., in different geographical regions and/or contexts.
    Type: Grant
    Filed: May 11, 2020
    Date of Patent: August 27, 2024
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Kun-Hsin Chen, Dennis I. Park, Jie Li
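
A hierarchical ontology of traffic-light characteristics can be represented as a simple tree, with fine-grained model probabilities rolled up to coarser categories. The category names and the summation-based roll-up below are illustrative assumptions, shown only to make the coarse/fine-grained idea concrete.

```python
# Hypothetical coarse -> fine ontology of traffic-light states.
ONTOLOGY = {
    "stop":    ["red_solid", "red_arrow"],
    "caution": ["yellow_solid", "yellow_flashing"],
    "go":      ["green_solid", "green_arrow"],
}

def roll_up(fine_probs):
    """Aggregate fine-grained class probabilities into coarse categories,
    so the same model output supports both coarse and fine-grained prediction."""
    return {coarse: sum(fine_probs.get(f, 0.0) for f in fines)
            for coarse, fines in ONTOLOGY.items()}

# Example: a fine-grained prediction rolls up to a confident "go".
print(roll_up({"green_solid": 0.7, "green_arrow": 0.2, "yellow_solid": 0.1}))
```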
  • Patent number: 12046050
    Abstract: System, methods, and other embodiments described herein relate to accurately distinguishing a traffic light from other illuminated objects in the traffic scene and detecting states using hierarchical modeling. In one embodiment, a method includes detecting, using a machine learning (ML) model, two-dimensional (2D) coordinates of illuminated objects identified from a monocular image of a traffic scene for control adaptation by a control model. The method also includes assigning, using the ML model, computed probabilities to the illuminated objects for categories within a hierarchical ontology of environmental lights associated with the traffic scene, wherein one of the probabilities indicates existence of a traffic light instead of a brake light in the traffic scene. The method also includes executing a task by the control model for a vehicle according to the 2D coordinates and the computed probabilities.
    Type: Grant
    Filed: April 15, 2022
    Date of Patent: July 23, 2024
    Assignee: Toyota Research Institute, Inc.
    Inventors: Kun-Hsin Chen, Kuan-Hui Lee, Chao Fang, Charles Christopher Ochoa
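
Downstream of such a hierarchical model, distinguishing a traffic light from other illuminated objects (e.g., brake lights) reduces to comparing per-detection category probabilities. This snippet is a generic filtering sketch with hypothetical field names and margin.

```python
def traffic_light_detections(detections, margin=0.2):
    """Keep 2-D detections whose traffic-light probability exceeds the
    brake-light probability by a (hypothetical) margin.

    detections: list of dicts with 'bbox_2d' and 'probs', where 'probs' maps
                ontology categories (e.g., 'traffic_light', 'brake_light',
                'street_lamp') to probabilities.
    """
    return [d for d in detections
            if d['probs'].get('traffic_light', 0.0)
               - d['probs'].get('brake_light', 0.0) >= margin]
```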
  • Publication number: 20240212364
    Abstract: Systems and methods are provided for developing/updating training datasets for traffic light detection/perception models. V2I-based information may indicate a particular traffic light state/state of transition. This information can be compared to a traffic light perception prediction. When the prediction is inconsistent with the V2I-based information, data regarding the condition(s)/traffic light(s)/etc. can be saved and uploaded to a training database to update/refine the training dataset(s) maintained therein. In this way, an existing traffic light perception model can be updated/improved and/or a better traffic light perception model can be developed.
    Type: Application
    Filed: March 6, 2024
    Publication date: June 27, 2024
    Inventors: Kun-Hsin CHEN, Peiyan GONG, Shunsho KAKU, Sudeep PILLAI, Hai JIN, Sarah YOO, David L. GARBER, Ryan W. WOLCOTT
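
The V2I-consistency check described above amounts to comparing the model's prediction with the infrastructure-reported state and queuing mismatches for the training database. A minimal sketch, with all field names and the uploader callable being hypothetical:

```python
def collect_training_samples(frames, upload_fn):
    """Compare each traffic-light perception prediction with the V2I-reported
    state; on disagreement, save the frame for the training database.

    frames:    iterable of dicts with 'image', 'predicted_state', 'v2i_state'
    upload_fn: callable that persists a sample to the training database
    """
    mismatches = 0
    for frame in frames:
        if frame['predicted_state'] != frame['v2i_state']:
            upload_fn({'image': frame['image'],
                       'label': frame['v2i_state'],          # V2I acts as ground truth
                       'model_prediction': frame['predicted_state']})
            mismatches += 1
    return mismatches
```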
  • Patent number: 12014549
    Abstract: A vehicle light classification system captures a sequence of images of a scene that includes a front/rear view of a vehicle with front/rear-side lights, determines semantic keypoints, in the images and associated with the front/rear-side lights, based on inputting the images into a first neural network, obtains multiple difference images that are each a difference between successive images from among the sequence of images, the successive images being aligned based on their respective semantic keypoints, and determines a classification of the front/rear-side lights based at least in part on the difference images by inputting the difference images into a second neural network.
    Type: Grant
    Filed: March 4, 2021
    Date of Patent: June 18, 2024
    Assignee: Toyota Research Institute, Inc.
    Inventors: Jia-En Pan, Kuan-Hui Lee, Chao Fang, Kun-Hsin Chen, Arjun Bhargava, Sudeep Pillai
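
The difference-image step above can be sketched with a simple keypoint-based alignment: translate each next frame so its keypoint centroid matches the previous frame's, then subtract. A real system would likely use a richer warp; the centroid-shift approximation here is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import shift

def difference_images(frames, keypoints):
    """Compute keypoint-aligned difference images for light-state classification.

    frames:    list of (H, W) grayscale crops of a vehicle's front/rear view
    keypoints: list of (K, 2) arrays of semantic keypoints (x, y) per frame,
               e.g., predicted by a first network for the light housings
    """
    diffs = []
    for prev_img, next_img, prev_kp, next_kp in zip(
            frames[:-1], frames[1:], keypoints[:-1], keypoints[1:]):
        # Translate the next frame so its keypoint centroid matches the previous one.
        dx, dy = prev_kp.mean(axis=0) - next_kp.mean(axis=0)
        aligned = shift(next_img, (dy, dx), order=1, mode='nearest')
        diffs.append(aligned.astype(np.float32) - prev_img.astype(np.float32))
    return diffs  # fed to a second network to classify the light state
```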
  • Patent number: 11954919
    Abstract: Systems and methods are provided for developing/updating training datasets for traffic light detection/perception models. V2I-based information may indicate a particular traffic light state/state of transition. This information can be compared to a traffic light perception prediction. When the prediction is inconsistent with the V2I-based information, data regarding the condition(s)/traffic light(s)/etc. can be saved and uploaded to a training database to update/refine the training dataset(s) maintained therein. In this way, an existing traffic light perception model can be updated/improved and/or a better traffic light perception model can be developed.
    Type: Grant
    Filed: October 8, 2020
    Date of Patent: April 9, 2024
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Kun-Hsin Chen, Peiyan Gong, Shunsho Kaku, Sudeep Pillai, Hai Jin, Sarah Yoo, David L. Garber, Ryan W. Wolcott
  • Publication number: 20240067207
    Abstract: Systems and methods for detecting roadway lane boundaries are disclosed herein. One embodiment receives image data of a portion of a roadway; receives historical vehicle trajectory data for the portion of the roadway; generates, from the historical vehicle trajectory data, a heatmap indicating, for a given pixel in the heatmap, an extent to which the given pixel coincides spatially with vehicle trajectories in the historical vehicle trajectory data; and projects the heatmap onto the image data to generate a composite image that is used in training a neural network to detect roadway lane boundaries, the projected heatmap acting as supervisory data. The trained neural network is deployed in a vehicle to generate and save map data including detected roadway lane boundaries for use by other vehicles or to control operation of the vehicle itself based, at least in part, on roadway lane boundaries detected by the trained neural network.
    Type: Application
    Filed: August 25, 2022
    Publication date: February 29, 2024
    Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
    Inventors: Kun-Hsin Chen, Shunsho Kaku, Jeffrey M. Walls, Jie Li, Steven A. Parkison
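
The supervisory heatmap described above can be illustrated by rasterizing historical trajectories, already projected into the image, into an accumulator the size of the image. The binning scheme and blending weight are illustrative assumptions.

```python
import numpy as np

def trajectory_heatmap(trajectories, height, width):
    """Rasterize historical vehicle trajectories into a per-pixel heatmap.

    trajectories: list of (N, 2) arrays of (x, y) pixel coordinates, i.e.,
                  trajectories already projected into the camera image
    """
    heat = np.zeros((height, width), dtype=np.float32)
    for traj in trajectories:
        xs = np.clip(traj[:, 0].astype(int), 0, width - 1)
        ys = np.clip(traj[:, 1].astype(int), 0, height - 1)
        np.add.at(heat, (ys, xs), 1.0)          # count trajectory hits per pixel
    return heat / heat.max() if heat.max() > 0 else heat

def composite(image, heat, alpha=0.5):
    """Project the heatmap onto the image to form the training composite."""
    return (1 - alpha) * image.astype(np.float32) + alpha * heat[..., None] * 255.0
```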
  • Publication number: 20240037961
    Abstract: System, methods, and other embodiments described herein relate to the detection of lanes in a driving scene through segmenting road regions using an ontology enhanced to derive semantic context. In one embodiment, a method includes segmenting an image of a driving scene, independent of maps, by lane lines and road regions defined by an ontology and a pixel subset from the image has semantics of lane information from the ontology. The method also includes computing pixel depth from the image for the lane lines and the road regions using a model. The method also includes deriving 3D context using relations between the semantics and the pixel depth, the relations infer a driving lane for a vehicle from types of the lane lines and the road regions adjacent to the driving lane. The method also includes executing a task to control the vehicle on the driving lane using the 3D context.
    Type: Application
    Filed: July 26, 2022
    Publication date: February 1, 2024
    Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
    Inventors: Shunsho Kaku, Jeffrey M. Walls, Jie Li, Kun-Hsin Chen, Steven A. Parkison
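
The semantic-context step can be illustrated with a small rule that derives driving-lane semantics from the types of the segmented lane lines on either side of the ego lane. The line-type labels and the rules below are illustrative assumptions, not the ontology or inference used in the patent.

```python
def ego_lane_context(left_line, right_line):
    """Derive driving-lane semantics from the types of adjacent lane lines.

    left_line / right_line: dicts with a 'type' field such as 'dashed_white',
    'solid_white', or 'solid_yellow' (hypothetical ontology labels).
    """
    return {
        'lane_change_left_allowed':  left_line['type'] == 'dashed_white',
        'lane_change_right_allowed': right_line['type'] == 'dashed_white',
        'adjacent_oncoming_traffic': left_line['type'] == 'solid_yellow',
    }
```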
  • Patent number: 11810367
    Abstract: Described herein are systems and methods for determining if a vehicle is parked. In one example, a system includes a processor, a sensor system, and a memory. Both the sensor system and the memory are in communication with the processor. The memory includes a parking determination module having instructions that, when executed by the processor, cause the processor to determine, using a random forest model, when the vehicle is parked based on vehicle estimated features, vehicle learned features, and vehicle taillight features of the vehicle that are based on sensor data from the sensor system.
    Type: Grant
    Filed: June 29, 2021
    Date of Patent: November 7, 2023
    Assignee: Toyota Research Institute, Inc.
    Inventors: Chao Fang, Kuan-Hui Lee, Logan Michael Ellis, Jia-En Pan, Kun-Hsin Chen, Sudeep Pillai, Daniele Molinari, Constantin Franziskus Dominik Hubmann, T. Wolfram Burgard
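
The parking determination above maps naturally onto a standard random forest. A minimal scikit-learn sketch follows; the concatenated feature vector (estimated, learned, and taillight features), its dimensionality, and the hyperparameters are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_parked_classifier(estimated, learned, taillight, labels):
    """Train a random forest that decides whether an observed vehicle is parked.

    estimated, learned, taillight: (N, d1), (N, d2), (N, d3) per-vehicle features
    labels: (N,) array, 1 if the vehicle was parked, 0 otherwise
    """
    features = np.hstack([estimated, learned, taillight])
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(features, labels)
    return model

# At runtime: model.predict(np.hstack([est, lrn, tail]).reshape(1, -1))
```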
  • Publication number: 20230351244
    Abstract: System, methods, and other embodiments described herein relate to a manner of generating and relating frames that improves the retrieval of sensor and agent data for processing by different vehicle tasks. In one embodiment, a method includes acquiring sensor data by a vehicle. The method also includes generating a frame including the sensor data and agent perceptions determined from the sensor data at a timestamp, the agent perceptions including multi-dimensional data that describes features for surrounding vehicles of the vehicle. The method also includes relating the frame to other frames of the vehicle by track, the other frames having processed data from various times and the track having a predetermined window of scene information associated with an agent. The method also includes training a learning model using the agent perceptions accessed from the track.
    Type: Application
    Filed: April 29, 2022
    Publication date: November 2, 2023
    Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
    Inventors: Chao Fang, Charles Christopher Ochoa, Kuan-Hui Lee, Kun-Hsin Chen, Visak Kumar
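
The frame/track bookkeeping described above can be sketched with two small dataclasses: a per-timestamp Frame holding sensor data and agent perceptions, and a Track that keeps a fixed-length window of frames for one agent. Field names and the window size are assumptions.

```python
from collections import deque
from dataclasses import dataclass, field
from typing import Any, Deque, Dict, List

@dataclass
class Frame:
    """Sensor data plus agent perceptions for a single timestamp."""
    timestamp: float
    sensor_data: Dict[str, Any]                  # e.g., camera and LiDAR payloads
    agent_perceptions: List[Dict[str, Any]]      # multi-dimensional per-agent features

@dataclass
class Track:
    """A predetermined window of frames associated with one agent."""
    agent_id: str
    window: int = 20
    frames: Deque[Frame] = field(default_factory=deque)

    def add(self, frame: Frame) -> None:
        self.frames.append(frame)
        while len(self.frames) > self.window:     # keep only the scene window
            self.frames.popleft()

    def training_batch(self) -> List[Dict[str, Any]]:
        """Flatten agent perceptions across the window for model training."""
        return [p for f in self.frames for p in f.agent_perceptions]
```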
  • Publication number: 20230351773
    Abstract: System, methods, and other embodiments described herein relate to detection of traffic lights corresponding to a driving lane from views captured by multiple cameras. In one embodiment, a method includes estimating, by a first model using images from multiple cameras, positions and state confidences of traffic lights corresponding to a driving lane of a vehicle. The method also includes aggregating, by a second model, the state confidences and a multi-view stereo composition from geometric representations associated with the positions of the traffic lights. The method also includes assigning, by the second model according to the aggregating, a relevancy score computed for a candidate traffic light of the traffic lights to the driving lane. The method also includes executing a task by the vehicle according to the relevancy score.
    Type: Application
    Filed: April 28, 2022
    Publication date: November 2, 2023
    Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
    Inventors: Kun-Hsin Chen, Kuan-Hui Lee, Chao Fang, Charles Christopher Ochoa
  • Publication number: 20230351767
    Abstract: A method for generating a dense light detection and ranging (LiDAR) representation by a vision system includes receiving, at a sparse depth network, one or more sparse representations of an environment. The method also includes generating a depth estimate of the environment depicted in an image captured by an image capturing sensor. The method further includes generating, via the sparse depth network, one or more sparse depth estimates based on receiving the one or more sparse representations. The method also includes fusing the depth estimate and the one or more sparse depth estimates to generate a dense depth estimate. The method further includes generating the dense LiDAR representation based on the dense depth estimate and controlling an action of a vehicle based on identifying a three-dimensional object in the dense LiDAR representation.
    Type: Application
    Filed: April 28, 2022
    Publication date: November 2, 2023
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Arjun BHARGAVA, Chao FANG, Charles Christopher OCHOA, Kun-Hsin CHEN, Kuan-Hui LEE, Vitor GUIZILINI