Patents by Inventor Xuewei QI

Xuewei QI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11941081
    Abstract: Systems, methods, and other embodiments described herein relate to training a model to stylize low-light images for improved perception. In one embodiment, a method includes encoding, by a style model, an input image to identify first content information. The method also includes decoding, by the style model, the first content information into an albedo component and a shading component. The method also includes generating, by the style model, a synthetic image using the albedo component and the shading component. The method also includes training the style model according to computed losses between the input image and the synthetic image.
    Type: Grant
    Filed: June 18, 2021
    Date of Patent: March 26, 2024
    Assignee: Toyota Motor Engineering & Manufacturing North America, Inc.
    Inventors: Kareem Metwaly, Rui Guo, Xuewei Qi, Kentaro Oguchi
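The decode-and-recompose step in the abstract above can be pictured with the common intrinsic-image assumption that an image is the pixelwise product of an albedo (reflectance) component and a shading component. This is a minimal sketch under that assumption; the helper names and toy values are illustrative, not from the patent.

```python
def recompose(albedo, shading):
    """Rebuild a synthetic image as albedo * shading, pixel by pixel."""
    return [[a * s for a, s in zip(ar, sr)] for ar, sr in zip(albedo, shading)]

def l1_loss(image, synthetic):
    """A simple reconstruction loss between the input and synthetic images."""
    diffs = [abs(p - q) for r, t in zip(image, synthetic) for p, q in zip(r, t)]
    return sum(diffs) / len(diffs)

# 2x2 toy "image": albedo carries texture, shading carries illumination.
albedo = [[0.8, 0.4], [0.6, 0.2]]
shading = [[0.5, 0.5], [1.0, 1.0]]
synthetic = recompose(albedo, shading)   # [[0.4, 0.2], [0.6, 0.2]]
loss = l1_loss(synthetic, synthetic)     # perfect reconstruction -> 0.0
```

A real style model would predict albedo and shading with learned decoders and back-propagate losses of this general shape.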
  • Patent number: 11935254
    Abstract: Systems, methods, and other embodiments described herein relate to improving depth prediction for objects within a low-light image using a style model. In one embodiment, a method includes encoding, by a style model, an input image to identify content information. The method also includes decoding, by the style model, the content information into an albedo component and a shading component. The method also includes generating, by the style model, a synthetic image using the albedo component and the shading component. The method also includes providing the synthetic image to a depth model.
    Type: Grant
    Filed: June 9, 2021
    Date of Patent: March 19, 2024
    Assignee: Toyota Motor Engineering & Manufacturing North America, Inc.
    Inventors: Rui Guo, Xuewei Qi, Kentaro Oguchi, Kareem Metwaly
  • Patent number: 11886199
    Abstract: In accordance with one embodiment of the present disclosure, a method includes obtaining multi-level environment data corresponding to a plurality of driving environment levels, encoding the multi-level environment data at each level, extracting features from the multi-level environment data at each encoded level, fusing the extracted features from each encoded level with a spatial-temporal attention framework to generate a fused information embedding, and decoding the fused information embedding to predict driving environment information at one or more driving environment levels.
    Type: Grant
    Filed: October 13, 2021
    Date of Patent: January 30, 2024
    Assignee: Toyota Motor Engineering & Manufacturing North America, Inc.
    Inventors: Xuewei Qi, Kentaro Oguchi, Yongkang Liu
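The fusion step in the abstract above can be illustrated with the basic mechanic behind most attention frameworks: softmax the per-level scores, then take the weighted sum of the per-level feature vectors. The scores, features, and names below are invented for illustration, not taken from the patent.

```python
import math

def attention_fuse(features, scores):
    """Softmax the per-level scores, then weighted-sum the feature vectors."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(features[0])
    return [sum(w * f[i] for w, f in zip(weights, features)) for i in range(dim)]

# Three driving-environment levels, each encoded as a 2-d feature vector.
level_feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
fused = attention_fuse(level_feats, scores=[0.0, 0.0, 0.0])  # equal weights
```

With equal scores each level contributes one third, so the fused embedding is [2/3, 2/3]; learned scores would let the model emphasize whichever level is most informative.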
  • Publication number: 20230367013
    Abstract: Systems, methods, and other embodiments described herein relate to cooperative perception. In one embodiment, a method includes computing, at a first timestep, a base relative pose between an ego vehicle and a remote vehicle based upon respective point clouds of the ego vehicle and the remote vehicle. The method includes computing, at a second timestep, a relative pose between the ego vehicle and the remote vehicle based upon the base relative pose, a first temporal relative pose of the ego vehicle, and a second temporal relative pose received from the remote vehicle. The method includes generating a combined point cloud based upon a first point cloud of the ego vehicle, a second point cloud received from the remote vehicle, and the relative pose.
    Type: Application
    Filed: May 16, 2022
    Publication date: November 16, 2023
    Applicants: Toyota Motor Engineering & Manufacturing North America, Inc., Toyota Jidosha Kabushiki Kaisha
    Inventors: Xuewei Qi, Qi Chen
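The pose bookkeeping in the abstract above can be sketched in 2-D: given the base ego-to-remote relative pose at the first timestep and each vehicle's own motion since then (its "temporal relative pose"), the updated relative pose chains the three transforms. Poses here are (x, y, theta) rigid transforms and all values are illustrative.

```python
import math

def compose(a, b):
    """Rigid-transform composition a ∘ b in 2-D (apply b, then a)."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

def invert(p):
    """Inverse of a 2-D rigid transform (x, y, theta)."""
    x, y, t = p
    return (-x * math.cos(t) - y * math.sin(t),
             x * math.sin(t) - y * math.cos(t),
            -t)

base = (10.0, 0.0, 0.0)          # remote is 10 m ahead of ego at t1
ego_motion = (2.0, 0.0, 0.0)     # ego moved 2 m forward by t2
remote_motion = (5.0, 0.0, 0.0)  # remote moved 5 m forward by t2
updated = compose(invert(ego_motion), compose(base, remote_motion))
# remote is now 13 m ahead of ego: (13.0, 0.0, 0.0)
```

The updated pose is what lets the remote vehicle's point cloud be transformed into the ego frame and merged without re-registering the clouds at every timestep.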
  • Patent number: 11763497
    Abstract: A method for generating a dataset is provided. The method includes generating, within a simulated environment, a simulated image including one or more distortions, the simulated image includes a plurality of vehicles, generating vehicle image patches and ground truth from the simulated image, performing, using a style transfer module, a style-transfer operation on the vehicle image patches, combining the vehicle image patches, on which the style-transfer operation is performed, with a background image of a real-world location, and generating a dataset based on the ground truth and the combination of the vehicle image patches and the background image.
    Type: Grant
    Filed: October 19, 2021
    Date of Patent: September 19, 2023
    Assignee: TOYOTA MOTOR ENGINEERING & MANUFACTURING NORTH AMERICA, INC.
    Inventors: Yongkang Liu, Xuewei Qi, Kentaro Oguchi
  • Publication number: 20230274641
    Abstract: Systems, methods, and other embodiments described herein relate to improving the performance of a device in different geographic locations by using transfer learning to provide a customized learning model for the different locations. In one embodiment, a method includes receiving segments of a model from separate members in a geographic hierarchy and assembling the segments into the model. The segments include at least a first segment, a second segment, and a third segment. The method includes processing sensor data using the model to provide an output for assisting a device.
    Type: Application
    Filed: February 25, 2022
    Publication date: August 31, 2023
    Inventors: Xuewei Qi, Kentaro Oguchi, Yongkang Liu, Emrah Akin Sisbot
  • Publication number: 20230213354
    Abstract: Systems and methods are provided for utilizing sensor data from sensors of different modalities and from different vehicles to generate a combined image of an environment. Sensor data, such as a point cloud, generated by a LiDAR sensor on a first vehicle may be combined with sensor data, such as image data, generated by a camera on a second vehicle. The point cloud and image data may be combined to provide benefits over either data source individually and processed to provide an improved image of the environment of the first and second vehicles. Either vehicle can perform this processing when receiving the sensor data from the other vehicle. An external system can also perform the processing when receiving the sensor data from both vehicles. The improved image can then be used by one or both of the vehicles to improve, for example, automated travel through or obstacle identification in the environment.
    Type: Application
    Filed: January 6, 2022
    Publication date: July 6, 2023
    Inventors: Xuewei Qi, Rui Guo, Prashant Tiwari, Chang-Heng Wang, Takayuki Shimizu
  • Patent number: 11661077
    Abstract: A method comprises receiving a service request from a vehicle, obtaining environment data with one or more sensors, determining a vehicle type of the vehicle based on the service request, determining service data responsive to the service request based on the vehicle type of the vehicle and the environment data, and transmitting a service message comprising the service data to the vehicle.
    Type: Grant
    Filed: April 27, 2021
    Date of Patent: May 30, 2023
    Assignee: TOYOTA MOTOR ENGINEERING & MANUFACTURING NORTH AMERICA, INC.
    Inventors: Xuewei Qi, Kentaro Oguchi
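The request flow in the abstract above amounts to a type-aware dispatch: the vehicle type read from the request, together with sensed environment data, selects the service data returned. This is a made-up minimal sketch; the field names, types, and payloads are invented for illustration.

```python
def determine_vehicle_type(service_request):
    """Read the vehicle type declared in the request (hypothetical field)."""
    return service_request.get("vehicle_type", "unknown")

def determine_service_data(vehicle_type, environment):
    """Pick service data based on vehicle type plus sensed environment."""
    if vehicle_type == "truck":
        # e.g. a truck asking for routing gets clearance-aware data
        return {"route": "avoid_low_bridges", "visibility": environment["visibility"]}
    return {"route": "fastest", "visibility": environment["visibility"]}

request = {"vehicle_type": "truck", "service": "routing"}
environment = {"visibility": "fog"}
message = determine_service_data(determine_vehicle_type(request), environment)
```

The claimed method would then transmit `message` back to the requesting vehicle as the service message.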
  • Publication number: 20230118817
    Abstract: A method for generating a dataset is provided. The method includes generating, within a simulated environment, a simulated image including one or more distortions, the simulated image includes a plurality of vehicles, generating vehicle image patches and ground truth from the simulated image, performing, using a style transfer module, a style-transfer operation on the vehicle image patches, combining the vehicle image patches, on which the style-transfer operation is performed, with a background image of a real-world location, and generating a dataset based on the ground truth and the combination of the vehicle image patches and the background image.
    Type: Application
    Filed: October 19, 2021
    Publication date: April 20, 2023
    Applicant: TOYOTA MOTOR ENGINEERING & MANUFACTURING NORTH AMERICA, INC.
    Inventors: Yongkang Liu, Xuewei Qi, Kentaro Oguchi
  • Publication number: 20230116442
    Abstract: In accordance with one embodiment of the present disclosure, a method includes obtaining multi-level environment data corresponding to a plurality of driving environment levels, encoding the multi-level environment data at each level, extracting features from the multi-level environment data at each encoded level, fusing the extracted features from each encoded level with a spatial-temporal attention framework to generate a fused information embedding, and decoding the fused information embedding to predict driving environment information at one or more driving environment levels.
    Type: Application
    Filed: October 13, 2021
    Publication date: April 13, 2023
    Applicant: Toyota Motor Engineering & Manufacturing North America, Inc.
    Inventors: Xuewei Qi, Kentaro Oguchi, Yongkang Liu
  • Publication number: 20230098141
    Abstract: Systems, methods, and other embodiments described herein relate to dynamically generating a wide field-of-view three-dimensional pseudo point cloud of an environment around a vehicle. A disclosed method may include capturing, via a camera, a first view in a first image, determining a first depth map based on the first image, obtaining, from an external system, a second image of a second view that overlaps the first view and a second depth map based on the second image, inputting the first image and second image into a self-supervised homograph network that is trained to output a homographic transformation matrix between the first image and the second image, and generating a three-dimensional pseudo point cloud that combines the first depth map and the second depth map based on the homographic transformation matrix.
    Type: Application
    Filed: September 30, 2021
    Publication date: March 30, 2023
    Inventors: Xuewei Qi, Rui Guo, Prashant Tiwari, Chang-Heng Wang, Takayuki Shimizu
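The output of the network described above is a 3x3 homography; applying it is just a matrix-vector product with a perspective divide, which is how pixels from the second view would be mapped into the first view's frame. The matrix below is an illustrative pure translation, not anything from the patent.

```python
def apply_homography(h, x, y):
    """Map pixel (x, y) through 3x3 homography h with perspective divide."""
    xs = h[0][0] * x + h[0][1] * y + h[0][2]
    ys = h[1][0] * x + h[1][1] * y + h[1][2]
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return xs / w, ys / w

# A pure-translation homography: shift every pixel by (+50, -20).
H = [[1.0, 0.0, 50.0],
     [0.0, 1.0, -20.0],
     [0.0, 0.0, 1.0]]
mapped = apply_homography(H, 100.0, 100.0)   # -> (150.0, 80.0)
```

Mapping every pixel of the second depth map this way aligns the two views so their depths can be merged into one wide-field pseudo point cloud.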
  • Publication number: 20230085296
    Abstract: Systems, methods, and other embodiments described herein relate to predicting trajectories of multiple vehicles using graphs and multiple decoding models. In one embodiment, a method includes computing, using an encoding model, a graph having a geographic map and vehicle features associated with a plurality of vehicles in an area according to characteristics, prior trajectories, and spatiotemporal interactions. The method also includes processing, using the encoding model, updates for the geographic map and the vehicle features separately in association with encoded features of neighboring vehicles. The method also includes decoding, using a probability model and a regression model, the geographic map and the vehicle features to output estimated trajectories for the plurality of vehicles.
    Type: Application
    Filed: November 30, 2021
    Publication date: March 16, 2023
    Inventors: Yongkang Liu, Xuewei Qi, Kentaro Oguchi
  • Publication number: 20230077082
    Abstract: Systems, methods, and other embodiments described herein relate to detecting and localizing objects within an image in a wide-view format using a synthetic representation. The method includes converting a real image in a wide-view format to a synthetic representation using a style model, wherein the synthetic representation depicts a distorted view of an object. The method also includes identifying features of the object using an extraction model that distinguishes different scales of the synthetic representation and a simulated scene to define structures associated with the distorted view. The method also includes detecting the object using a decoder model that identifies an attribute and a bounding box of the object from the features. The method also includes executing a task using the attribute and the bounding box to localize the object in the simulated scene.
    Type: Application
    Filed: October 28, 2021
    Publication date: March 9, 2023
    Inventors: Yongkang Liu, Xuewei Qi, Kentaro Oguchi
  • Patent number: 11548515
    Abstract: Systems and methods for managing driver habits are disclosed herein. One embodiment learns undesirable driving habits of a driver over time as the driver operates a vehicle; identifies, for each learned undesirable driving habit, one or more situational triggers associated with that undesirable driving habit; receives information from one or more of vehicle sensors and one or more external sources; predicts that the driver will engage in a particular undesirable driving habit; and carries out one or more of the following avoidance strategies to assist the driver in refraining from engaging in the particular undesirable driving habit: communicating one or more speed advisories to the driver; suggesting an alternate route to the driver; and presenting the driver with one or more of a coupon, an offer, and a discount at a place of business to encourage the driver to take a break by stopping at the place of business.
    Type: Grant
    Filed: January 22, 2021
    Date of Patent: January 10, 2023
    Assignee: Toyota Motor Engineering & Manufacturing North America, Inc.
    Inventors: Seyhan Ucar, Xuewei Qi, Kentaro Oguchi
  • Publication number: 20220405530
    Abstract: Systems, methods, and other embodiments described herein relate to training a model to stylize low-light images for improved perception. In one embodiment, a method includes encoding, by a style model, an input image to identify first content information. The method also includes decoding, by the style model, the first content information into an albedo component and a shading component. The method also includes generating, by the style model, a synthetic image using the albedo component and the shading component. The method also includes training the style model according to computed losses between the input image and the synthetic image.
    Type: Application
    Filed: June 18, 2021
    Publication date: December 22, 2022
    Inventors: Kareem Metwaly, Rui Guo, Xuewei Qi, Kentaro Oguchi
  • Publication number: 20220398757
    Abstract: Systems, methods, and other embodiments described herein relate to improving depth prediction for objects within a low-light image using a style model. In one embodiment, a method includes encoding, by a style model, an input image to identify content information. The method also includes decoding, by the style model, the content information into an albedo component and a shading component. The method also includes generating, by the style model, a synthetic image using the albedo component and the shading component. The method also includes providing the synthetic image to a depth model.
    Type: Application
    Filed: June 9, 2021
    Publication date: December 15, 2022
    Inventors: Rui Guo, Xuewei Qi, Kentaro Oguchi, Kareem Metwaly
  • Publication number: 20220398758
    Abstract: Systems, methods, and other embodiments described herein relate to training a prediction system for improving depth perception in low-light. In one embodiment, a method includes computing, in a first training stage, losses associated with predicting a depth map for a synthetic image of a low-light scene, wherein the losses include a pose loss, a flow loss, and a supervised loss. The method also includes adjusting, according to the losses, a style model and a depth model. The method also includes training, in a second training stage, the depth model using a synthetic representation of a low-light image. The method also includes providing the depth model.
    Type: Application
    Filed: June 10, 2021
    Publication date: December 15, 2022
    Inventors: Rui Guo, Xuewei Qi, Kentaro Oguchi
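The first training stage described above combines three named losses into one objective; the usual way to do this is a weighted sum. The weights and loss values below are made up for illustration, not taken from the patent.

```python
def stage_one_loss(pose_loss, flow_loss, supervised_loss,
                   w_pose=1.0, w_flow=1.0, w_sup=1.0):
    """Combined objective used to adjust both the style and depth models."""
    return w_pose * pose_loss + w_flow * flow_loss + w_sup * supervised_loss

total = stage_one_loss(0.2, 0.3, 0.5)   # equal weighting of the three terms
```

In practice the weights would be tuned so no single term dominates the gradient.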
  • Publication number: 20220388522
    Abstract: A system for learning optimal driving behavior for autonomous vehicles comprises a deep neural network, a first stage training module, and a second stage training module. The deep neural network comprises a feature learning network configured to receive sensor data from a vehicle as input and output spatial temporal feature embeddings and a decision action network configured to receive the spatial temporal feature embeddings as input and output an optimal driving policy for the vehicle. The first stage training module is configured to, during a first training stage, train the feature learning network using object detection loss. The second stage training module is configured to, during a second training stage, train the decision action network using reinforcement learning.
    Type: Application
    Filed: June 4, 2021
    Publication date: December 8, 2022
    Applicant: Toyota Motor Engineering & Manufacturing North America, Inc.
    Inventors: Xuewei Qi, Kentaro Oguchi, Yongkang Liu
  • Publication number: 20220340162
    Abstract: A method comprises receiving a service request from a vehicle, obtaining environment data with one or more sensors, determining a vehicle type of the vehicle based on the service request, determining service data responsive to the service request based on the vehicle type of the vehicle and the environment data, and transmitting a service message comprising the service data to the vehicle.
    Type: Application
    Filed: April 27, 2021
    Publication date: October 27, 2022
    Applicant: Toyota Motor Engineering & Manufacturing North America, Inc.
    Inventors: Xuewei Qi, Kentaro Oguchi
  • Publication number: 20220277647
    Abstract: Systems and methods for analyzing the in-lane driving behavior of an external road agent are disclosed herein. One embodiment generates a sequence of sparse 3D point clouds based on a sequence of depth maps corresponding to a sequence of images of a scene; performs flow clustering based on the sequence of depth maps and a sequence of flow maps to identify points across the sequence of sparse 3D point clouds that belong to a detected road agent; generates a dense 3D point cloud by combining at least some of the points across the sequence of sparse 3D point clouds that belong to the detected road agent; detects one or more lane markings and projects them into the dense 3D point cloud to generate an annotated 3D point cloud; and analyzes the in-lane driving behavior of the detected road agent based, at least in part, on the annotated 3D point cloud.
    Type: Application
    Filed: February 26, 2021
    Publication date: September 1, 2022
    Inventors: Rui Guo, Kentaro Oguchi, Takamasa Higuchi, Xuewei Qi, Seyhan Ucar, Haritha Muralidharan
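Once the dense point cloud and the projected lane markings from the last abstract share one frame, in-lane behavior can be summarized by the agent's lateral offset from a lane line. This is a hedged 2-D (top-down) sketch of that final analysis step; the geometry and values are invented for illustration.

```python
def lateral_offset(point, lane_point, lane_dir):
    """Signed distance from `point` to the lane line through `lane_point`
    along unit direction `lane_dir` (positive = left of travel direction)."""
    dx, dy = point[0] - lane_point[0], point[1] - lane_point[1]
    # The 2-D cross product with a unit lane direction is a signed distance.
    return lane_dir[0] * dy - lane_dir[1] * dx

# Lane runs along +x through the origin; the agent sits 0.4 m left of it.
offset = lateral_offset((12.0, 0.4), (0.0, 0.0), (1.0, 0.0))   # -> 0.4
```

Tracking this offset over the sequence of frames is one simple way to characterize weaving or drifting within the lane.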