Patents by Inventor Xuewei QI
Xuewei QI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12100220
Abstract: Systems, methods, and embodiments described herein relate to dynamically generating a wide field-of-view three-dimensional pseudo point cloud of an environment around a vehicle. A disclosed method may include capturing, via a camera, a first view in a first image, determining a first depth map based on the first image, obtaining, from an external system, a second image of a second view that overlaps the first view and a second depth map based on the second image, inputting the first image and the second image into a self-supervised homograph network that is trained to output a homographic transformation matrix between the first image and the second image, and generating a three-dimensional pseudo point cloud that combines the first depth map and the second depth map based on the homographic transformation matrix.
Type: Grant
Filed: September 30, 2021
Date of Patent: September 24, 2024
Assignee: Toyota Motor Engineering & Manufacturing North America, Inc.
Inventors: Xuewei Qi, Rui Guo, Prashant Tiwari, Chang-Heng Wang, Takayuki Shimizu
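The geometry behind the abstract above can be sketched briefly: once a homography relating the two overlapping views is known, pixels of the second depth map are warped into the first view, and both depth maps are back-projected into a single wide field-of-view pseudo point cloud. This is an illustrative sketch only, not the patented network; the pinhole intrinsics (fx, fy, cx, cy) and all function names are assumptions.

```python
# Hedged sketch: fuse two depth maps via a known homography.
# Depth maps are dicts {(u, v): depth}; H is a 3x3 row-major list.

def apply_homography(H, u, v):
    """Map pixel (u, v) through a 3x3 homography H."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w

def backproject(u, v, depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Lift a pixel with known depth to a 3D point (assumed pinhole model)."""
    return ((u - cx) * depth / fx, (v - cy) * depth / fy, depth)

def fuse_depth_maps(depth1, depth2, H_2to1):
    """Back-project depth1 directly; warp depth2's pixels into view 1
    first, then back-project, yielding one combined pseudo point cloud."""
    cloud = [backproject(u, v, d) for (u, v), d in depth1.items()]
    for (u, v), d in depth2.items():
        u1, v1 = apply_homography(H_2to1, u, v)
        cloud.append(backproject(u1, v1, d))
    return cloud
```

In the patent the homography itself comes from a trained self-supervised network; here it is simply taken as an input.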
-
Systems and methods for analyzing the in-lane driving behavior of a road agent external to a vehicle
Patent number: 12039861
Abstract: One embodiment of a system for analyzing the in-lane driving behavior of an external road agent generates a sequence of sparse 3D point clouds based on a sequence of depth maps corresponding to a sequence of images of a scene. The system performs flow clustering based on the sequence of depth maps and a sequence of flow maps to identify points across the sequence of sparse 3D point clouds that belong to a detected road agent. The system generates a dense 3D point cloud by combining at least some of those identified points. The system detects one or more lane markings and projects them into the dense 3D point cloud to generate an annotated 3D point cloud. The system analyzes the in-lane driving behavior of the detected road agent based, at least in part, on the annotated 3D point cloud.
Type: Grant
Filed: February 26, 2021
Date of Patent: July 16, 2024
Assignee: Toyota Motor Engineering & Manufacturing North America, Inc.
Inventors: Rui Guo, Kentaro Oguchi, Takamasa Higuchi, Xuewei Qi, Seyhan Ucar, Haritha Muralidharan
-
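A minimal sketch of the flow-clustering step described above: points whose flow vectors are similar across frames are grouped as candidates for the same moving road agent. The greedy grouping strategy and the threshold value are assumptions for illustration, not the patented method.

```python
# Hedged sketch: group points by similarity of their flow vectors.
# flows: dict {point_id: (du, dv)} of per-point flow displacements.

def flow_clusters(flows, threshold=1.0):
    """Greedily assign each point to the first cluster whose flow is
    within `threshold` per axis; otherwise start a new cluster."""
    clusters = []
    for pid, (du, dv) in flows.items():
        for c in clusters:
            cu, cv = c["flow"]
            if abs(du - cu) <= threshold and abs(dv - cv) <= threshold:
                c["points"].append(pid)
                break
        else:
            clusters.append({"flow": (du, dv), "points": [pid]})
    return clusters
```

Points falling in the same cluster across the sequence would then be combined into the dense point cloud for a single road agent.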
Patent number: 12018959
Abstract: Systems and methods are provided for utilizing sensor data from sensors of different modalities and from different vehicles to generate a combined image of an environment. Sensor data, such as a point cloud, generated by a LiDAR sensor on a first vehicle may be combined with sensor data, such as image data, generated by a camera on a second vehicle. The point cloud and image data may be combined to provide benefits over either data source individually and processed to provide an improved image of the environment of the first and second vehicles. Either vehicle can perform this processing when receiving the sensor data from the other vehicle; an external system can also do the processing when receiving the sensor data from both vehicles. The improved image can then be used by one or both of the vehicles to improve, for example, automated travel through, or obstacle identification in, the environment.
Type: Grant
Filed: January 6, 2022
Date of Patent: June 25, 2024
Assignee: Toyota Motor Engineering & Manufacturing North America, Inc.
Inventors: Xuewei Qi, Rui Guo, Prashant Tiwari, Chang-Heng Wang, Takayuki Shimizu
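One common way to combine the two modalities mentioned above is to project each LiDAR point into the camera's image plane and attach the pixel's color to it. The sketch below illustrates that idea only; the assumed pinhole intrinsics and the dict-based image are not from the patent.

```python
# Hedged sketch: colorize a point cloud with a camera image by
# projecting 3D points into the image plane (assumed pinhole model).

def project_point(p, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Project a 3D point (x, y, z) to integer pixel coordinates;
    returns None for points behind the camera."""
    x, y, z = p
    if z <= 0:
        return None
    return (round(fx * x / z + cx), round(fy * y / z + cy))

def colorize_cloud(cloud, image):
    """image: dict {(u, v): rgb}. Pair each point with the color of the
    pixel it projects onto, skipping points that miss the image."""
    out = []
    for p in cloud:
        uv = project_point(p)
        if uv is not None and uv in image:
            out.append((p, image[uv]))
    return out
```

In the described system the point cloud and the image come from different vehicles, so the cloud would first be transformed into the camera vehicle's frame before projection; that step is omitted here.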
-
Patent number: 12014507
Abstract: Systems, methods, and other embodiments described herein relate to training a prediction system for improving depth perception in low light. In one embodiment, a method includes computing, in a first training stage, losses associated with predicting a depth map for a synthetic image of a low-light scene, wherein the losses include a pose loss, a flow loss, and a supervised loss. The method also includes adjusting, according to the losses, a style model and a depth model. The method also includes training, in a second training stage, the depth model using a synthetic representation of a low-light image. The method also includes providing the depth model.
Type: Grant
Filed: June 10, 2021
Date of Patent: June 18, 2024
Assignee: Toyota Motor Engineering & Manufacturing North America, Inc.
Inventors: Rui Guo, Xuewei Qi, Kentaro Oguchi
-
Patent number: 12014520
Abstract: Systems, methods, and other embodiments described herein relate to detecting and localizing objects within an image in a wide-view format using a synthetic representation. The method includes converting a real image in a wide-view format to a synthetic representation using a style model, wherein the synthetic representation depicts a distorted view of an object. The method also includes identifying features of the object using an extraction model that distinguishes different scales of the synthetic representation and a simulated scene to define structures associated with the distorted view. The method also includes detecting the object using a decoder model that identifies an attribute and a bounding box of the object from the features. The method also includes executing a task using the attribute and the bounding box to localize the object in the simulated scene.
Type: Grant
Filed: October 28, 2021
Date of Patent: June 18, 2024
Assignee: Toyota Motor Engineering & Manufacturing North America, Inc.
Inventors: Yongkang Liu, Xuewei Qi, Kentaro Oguchi
-
Patent number: 11941081
Abstract: Systems, methods, and other embodiments described herein relate to training a model to stylize low-light images for improved perception. In one embodiment, a method includes encoding, by a style model, an input image to identify first content information. The method also includes decoding, by the style model, the first content information into an albedo component and a shading component. The method also includes generating, by the style model, a synthetic image using the albedo component and the shading component. The method also includes training the style model according to computed losses between the input image and the synthetic image.
Type: Grant
Filed: June 18, 2021
Date of Patent: March 26, 2024
Assignee: Toyota Motor Engineering & Manufacturing North America, Inc.
Inventors: Kareem Metwaly, Rui Guo, Xuewei Qi, Kentaro Oguchi
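The decomposition described above follows the classic intrinsic-image idea: an image is re-synthesized as the per-pixel product of an albedo (reflectance) component and a shading component, and a loss compares the synthesis with the input. The sketch below illustrates only that relationship; the loss choice (mean absolute error) and list-based "images" are assumptions.

```python
# Hedged sketch: intrinsic recomposition and a reconstruction loss.
# Images are lists of rows of scalar intensities for simplicity.

def synthesize(albedo, shading):
    """Combine albedo and shading into a synthetic image (pixelwise product)."""
    return [[a * s for a, s in zip(ra, rs)] for ra, rs in zip(albedo, shading)]

def reconstruction_loss(image, synthetic):
    """Mean absolute error between the input image and the synthesis."""
    n = sum(len(row) for row in image)
    total = sum(abs(p - q)
                for ri, rs in zip(image, synthetic)
                for p, q in zip(ri, rs))
    return total / n
```

In the patent, the albedo and shading components are produced by the style model's decoder; here they are taken as given inputs.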
-
Patent number: 11935254
Abstract: Systems, methods, and other embodiments described herein relate to improving depth prediction for objects within a low-light image using a style model. In one embodiment, a method includes encoding, by a style model, an input image to identify content information. The method also includes decoding, by the style model, the content information into an albedo component and a shading component. The method also includes generating, by the style model, a synthetic image using the albedo component and the shading component. The method also includes providing the synthetic image to a depth model.
Type: Grant
Filed: June 9, 2021
Date of Patent: March 19, 2024
Assignee: Toyota Motor Engineering & Manufacturing North America, Inc.
Inventors: Rui Guo, Xuewei Qi, Kentaro Oguchi, Kareem Metwaly
-
Patent number: 11886199
Abstract: In accordance with one embodiment of the present disclosure, a method includes obtaining multi-level environment data corresponding to a plurality of driving environment levels, encoding the multi-level environment data at each level, extracting features from the multi-level environment data at each encoded level, fusing the extracted features from each encoded level with a spatial-temporal attention framework to generate a fused information embedding, and decoding the fused information embedding to predict driving environment information at one or more driving environment levels.
Type: Grant
Filed: October 13, 2021
Date of Patent: January 30, 2024
Assignee: Toyota Motor Engineering & Manufacturing North America, Inc.
Inventors: Xuewei Qi, Kentaro Oguchi, Yongkang Liu
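The fusion step above can be illustrated with a toy attention computation: each level's feature vector is scored against a query, the scores are normalized with a softmax, and the fused embedding is the weighted sum. This is a hypothetical sketch of generic attention fusion, not the patented spatial-temporal framework; the dot-product scoring is an assumption.

```python
import math

# Hedged sketch: attention-weighted fusion of per-level feature vectors.

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_levels(level_features, query):
    """level_features: equal-length vectors, one per driving environment
    level. Score each against `query`, then return the weighted sum."""
    scores = [sum(q * f for q, f in zip(query, feat)) for feat in level_features]
    weights = softmax(scores)
    dim = len(level_features[0])
    return [sum(w * feat[i] for w, feat in zip(weights, level_features))
            for i in range(dim)]
```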
-
Publication number: 20230367013
Abstract: Systems, methods, and other embodiments described herein relate to cooperative perception. In one embodiment, a method includes computing, at a first timestep, a base relative pose between an ego vehicle and a remote vehicle based upon respective point clouds of the ego vehicle and the remote vehicle. The method includes computing, at a second timestep, a relative pose between the ego vehicle and the remote vehicle based upon the base relative pose, a first temporal relative pose of the ego vehicle, and a second temporal relative pose received from the remote vehicle. The method includes generating a combined point cloud based upon a first point cloud of the ego vehicle, a second point cloud received from the remote vehicle, and the relative pose.
Type: Application
Filed: May 16, 2022
Publication date: November 16, 2023
Applicants: Toyota Motor Engineering & Manufacturing North America, Inc., Toyota Jidosha Kabushiki Kaisha
Inventors: Xuewei Qi, Qi Chen
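The pose chaining described above can be sketched with planar rigid transforms: rather than re-aligning point clouds at every timestep, the relative pose at a later timestep is derived from the base relative pose and each vehicle's own motion since then. The 2D (x, y, heading) representation and the composition convention below are illustrative assumptions, not the published method.

```python
import math

# Hedged sketch: chain a base relative pose with each vehicle's
# temporal motion to get the current relative pose. Poses are
# (x, y, theta) planar rigid transforms.

def compose(a, b):
    """Return a ∘ b: apply transform b, then a."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

def invert(t):
    """Inverse transform, so compose(invert(t), t) is the identity."""
    x, y, th = t
    return (-x * math.cos(th) - y * math.sin(th),
            x * math.sin(th) - y * math.cos(th),
            -th)

def updated_relative_pose(base, ego_motion, remote_motion):
    """Relative pose at the new timestep: undo the ego vehicle's motion,
    apply the base relative pose, then apply the remote's motion."""
    return compose(compose(invert(ego_motion), base), remote_motion)
```

With the updated relative pose in hand, the remote vehicle's point cloud can be transformed into the ego frame and concatenated with the ego's own cloud.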
-
Patent number: 11763497
Abstract: A method for generating a dataset is provided. The method includes generating, within a simulated environment, a simulated image including one or more distortions, wherein the simulated image includes a plurality of vehicles; generating vehicle image patches and ground truth from the simulated image; performing, using a style transfer module, a style-transfer operation on the vehicle image patches; combining the vehicle image patches, on which the style-transfer operation is performed, with a background image of a real-world location; and generating a dataset based on the ground truth and the combination of the vehicle image patches and the background image.
Type: Grant
Filed: October 19, 2021
Date of Patent: September 19, 2023
Assignee: Toyota Motor Engineering & Manufacturing North America, Inc.
Inventors: Yongkang Liu, Xuewei Qi, Kentaro Oguchi
-
Publication number: 20230274641
Abstract: Systems, methods, and other embodiments described herein relate to improving the performance of a device in different geographic locations by using transfer learning to provide a customized learning model for the different locations. In one embodiment, a method includes receiving segments of a model from separate members in a geographic hierarchy and assembling the segments into the model. The segments include at least a first segment, a second segment, and a third segment. The method includes processing sensor data using the model to provide an output for assisting a device.
Type: Application
Filed: February 25, 2022
Publication date: August 31, 2023
Inventors: Xuewei Qi, Kentaro Oguchi, Yongkang Liu, Emrah Akin Sisbot
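The assembly step above can be pictured as concatenating model segments received from members at different hierarchy positions into one ordered model. Representing each segment as an ordered list of layers, keyed by hierarchy position, is an assumption for illustration only.

```python
# Hedged sketch: assemble a model from segments supplied by members of
# a geographic hierarchy (e.g. 1 = global, 2 = regional, 3 = local).

def assemble_model(segments):
    """segments: dict {position: list_of_layers}. Validates that the
    positions form a contiguous hierarchy, then concatenates in order."""
    if sorted(segments) != list(range(1, len(segments) + 1)):
        raise ValueError("missing segment in hierarchy")
    model = []
    for pos in sorted(segments):
        model.extend(segments[pos])
    return model
```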
-
Publication number: 20230213354
Abstract: Systems and methods are provided for utilizing sensor data from sensors of different modalities and from different vehicles to generate a combined image of an environment. Sensor data, such as a point cloud, generated by a LiDAR sensor on a first vehicle may be combined with sensor data, such as image data, generated by a camera on a second vehicle. The point cloud and image data may be combined to provide benefits over either data source individually and processed to provide an improved image of the environment of the first and second vehicles. Either vehicle can perform this processing when receiving the sensor data from the other vehicle; an external system can also do the processing when receiving the sensor data from both vehicles. The improved image can then be used by one or both of the vehicles to improve, for example, automated travel through, or obstacle identification in, the environment.
Type: Application
Filed: January 6, 2022
Publication date: July 6, 2023
Inventors: Xuewei Qi, Rui Guo, Prashant Tiwari, Chang-Heng Wang, Takayuki Shimizu
-
Patent number: 11661077
Abstract: A method comprises receiving a service request from a vehicle, obtaining environment data with one or more sensors, determining a vehicle type of the vehicle based on the service request, determining service data responsive to the service request based on the vehicle type of the vehicle and the environment data, and transmitting a service message comprising the service data to the vehicle.
Type: Grant
Filed: April 27, 2021
Date of Patent: May 30, 2023
Assignee: Toyota Motor Engineering & Manufacturing North America, Inc.
Inventors: Xuewei Qi, Kentaro Oguchi
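The type-dependent dispatch in the abstract can be sketched as a handler table: the service data returned depends on the requesting vehicle's type and the sensed environment. The vehicle types and payload shapes below are hypothetical examples, not categories from the patent.

```python
# Hedged sketch: select service data by vehicle type (assumed types).

def determine_service_data(vehicle_type, environment):
    """environment: dict of sensed data, e.g. {"obstacles": [...]}.
    An assumed 'autonomous' type gets raw obstacle data; an assumed
    'legacy' type gets only a summary alert count."""
    handlers = {
        "autonomous": lambda env: {"occupancy": env.get("obstacles", [])},
        "legacy": lambda env: {"hazard_alerts": len(env.get("obstacles", []))},
    }
    handler = handlers.get(vehicle_type)
    if handler is None:
        raise ValueError(f"unsupported vehicle type: {vehicle_type}")
    return handler(environment)
```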
-
Publication number: 20230118817
Abstract: A method for generating a dataset is provided. The method includes generating, within a simulated environment, a simulated image including one or more distortions, wherein the simulated image includes a plurality of vehicles; generating vehicle image patches and ground truth from the simulated image; performing, using a style transfer module, a style-transfer operation on the vehicle image patches; combining the vehicle image patches, on which the style-transfer operation is performed, with a background image of a real-world location; and generating a dataset based on the ground truth and the combination of the vehicle image patches and the background image.
Type: Application
Filed: October 19, 2021
Publication date: April 20, 2023
Applicant: Toyota Motor Engineering & Manufacturing North America, Inc.
Inventors: Yongkang Liu, Xuewei Qi, Kentaro Oguchi
-
Publication number: 20230116442
Abstract: In accordance with one embodiment of the present disclosure, a method includes obtaining multi-level environment data corresponding to a plurality of driving environment levels, encoding the multi-level environment data at each level, extracting features from the multi-level environment data at each encoded level, fusing the extracted features from each encoded level with a spatial-temporal attention framework to generate a fused information embedding, and decoding the fused information embedding to predict driving environment information at one or more driving environment levels.
Type: Application
Filed: October 13, 2021
Publication date: April 13, 2023
Applicant: Toyota Motor Engineering & Manufacturing North America, Inc.
Inventors: Xuewei Qi, Kentaro Oguchi, Yongkang Liu
-
Publication number: 20230098141
Abstract: Systems, methods, and embodiments described herein relate to dynamically generating a wide field-of-view three-dimensional pseudo point cloud of an environment around a vehicle. A disclosed method may include capturing, via a camera, a first view in a first image, determining a first depth map based on the first image, obtaining, from an external system, a second image of a second view that overlaps the first view and a second depth map based on the second image, inputting the first image and the second image into a self-supervised homograph network that is trained to output a homographic transformation matrix between the first image and the second image, and generating a three-dimensional pseudo point cloud that combines the first depth map and the second depth map based on the homographic transformation matrix.
Type: Application
Filed: September 30, 2021
Publication date: March 30, 2023
Inventors: Xuewei Qi, Rui Guo, Prashant Tiwari, Chang-Heng Wang, Takayuki Shimizu
-
Publication number: 20230085296
Abstract: Systems, methods, and other embodiments described herein relate to predicting trajectories of multiple vehicles using graphs and multiple decoding models. In one embodiment, a method includes computing, using an encoding model, a graph having a geographic map and vehicle features associated with a plurality of vehicles in an area according to characteristics, prior trajectories, and spatiotemporal interactions. The method also includes processing, using the encoding model, updates for the geographic map and the vehicle features separately in association with encoded features of neighboring vehicles. The method also includes decoding, using a probability model and a regression model, the geographic map and the vehicle features to output estimated trajectories for the plurality of vehicles.
Type: Application
Filed: November 30, 2021
Publication date: March 16, 2023
Inventors: Yongkang Liu, Xuewei Qi, Kentaro Oguchi
-
Publication number: 20230077082
Abstract: Systems, methods, and other embodiments described herein relate to detecting and localizing objects within an image in a wide-view format using a synthetic representation. The method includes converting a real image in a wide-view format to a synthetic representation using a style model, wherein the synthetic representation depicts a distorted view of an object. The method also includes identifying features of the object using an extraction model that distinguishes different scales of the synthetic representation and a simulated scene to define structures associated with the distorted view. The method also includes detecting the object using a decoder model that identifies an attribute and a bounding box of the object from the features. The method also includes executing a task using the attribute and the bounding box to localize the object in the simulated scene.
Type: Application
Filed: October 28, 2021
Publication date: March 9, 2023
Inventors: Yongkang Liu, Xuewei Qi, Kentaro Oguchi
-
Patent number: 11548515
Abstract: Systems and methods for managing driver habits are disclosed herein. One embodiment learns undesirable driving habits of a driver over time as the driver operates a vehicle; identifies, for each learned undesirable driving habit, one or more situational triggers associated with that undesirable driving habit; receives information from one or more vehicle sensors and one or more external sources; predicts that the driver will engage in a particular undesirable driving habit; and carries out one or more of the following avoidance strategies to assist the driver in refraining from engaging in the particular undesirable driving habit: communicating one or more speed advisories to the driver; suggesting an alternate route to the driver; and presenting the driver with one or more of a coupon, an offer, and a discount at a place of business to encourage the driver to take a break by stopping at the place of business.
Type: Grant
Filed: January 22, 2021
Date of Patent: January 10, 2023
Assignee: Toyota Motor Engineering & Manufacturing North America, Inc.
Inventors: Seyhan Ucar, Xuewei Qi, Kentaro Oguchi
-
Publication number: 20220405530Abstract: System, methods, and other embodiments described herein relate to training a model to stylize low-light images for improved perception. In one embodiment, a method includes encoding, by a style model, an input image to identify first content information. The method also includes decoding, by the style model, the first content information into an albedo component and a shading component. The method also includes generating, by the style model, a synthetic image using the albedo component and the shading component. The method also includes training the style model according to computed losses between the input image and the synthetic image.Type: ApplicationFiled: June 18, 2021Publication date: December 22, 2022Inventors: Kareem Metwaly, Rui Guo, Xuewei Qi, Kentaro Oguchi