Patents by Inventor Shuqing Zeng

Shuqing Zeng has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210181350
    Abstract: In an exemplary embodiment, a LiDAR is provided that is configured for installation in a mobile platform. The LiDAR includes a scanner and a light-intensity receiver. The scanner includes a light source configured to direct illumination in an illuminating direction. The light-intensity receiver includes one or more light-intensity sensors and one or more lens assemblies configured with respect to the one or more light-intensity sensors such that at least one sensor plane from the one or more light-intensity sensors is tilted to form a non-zero angle with at least one equivalent lens plane from the one or more lens assemblies, transferring the sensor focal plane so that it is aligned with the main light illumination direction and consistent with the direction of movement of the mobile platform.
    Type: Application
    Filed: December 12, 2019
    Publication date: June 17, 2021
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Wei Zeng, Shuqing Zeng, Scott E. Parrish
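    The tilted-sensor geometry described in the abstract above is closely related to the Scheimpflug condition. The sketch below is only an illustration of that general idea under a thin-lens assumption, not the patented design; the function name, parameters, and the example values are all assumptions.

    ```python
    import math

    def sensor_tilt_deg(focal_length_m: float, object_dist_m: float, focus_plane_tilt_deg: float) -> float:
        """Thin-lens Scheimpflug sketch (illustrative only).

        focal_length_m:       lens focal length
        object_dist_m:        axial distance where the desired plane of sharp focus crosses the optical axis
        focus_plane_tilt_deg: angle between that focus plane and the lens plane

        Returns the sensor-plane tilt (degrees, relative to the lens plane) satisfying
        tan(theta) = m * tan(phi) with magnification m = f / (u - f).
        """
        f, u = focal_length_m, object_dist_m
        m = f / (u - f)                      # image distance / object distance
        phi = math.radians(focus_plane_tilt_deg)
        return math.degrees(math.atan(m * math.tan(phi)))

    # Example: 25 mm lens focused at 20 m, focus plane tilted 60 deg from the lens plane
    print(f"sensor tilt ~ {sensor_tilt_deg(0.025, 20.0, 60.0):.3f} deg")
    ```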
  • Publication number: 20210179115
    Abstract: A method and associated system for monitoring the on-vehicle yaw-rate sensor includes determining a vehicle heading during vehicle operation and determining a first vehicle heading parameter based thereon. A second vehicle heading parameter is determined via the yaw-rate sensor. A yaw-rate sensor bias parameter is determined based upon the first vehicle heading parameter and the second vehicle heading parameter. A first yaw term is determined via the yaw-rate sensor, and a final yaw term is determined based upon the first yaw term and the yaw-rate sensor bias parameter.
    Type: Application
    Filed: December 16, 2019
    Publication date: June 17, 2021
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Jagannadha Reddy Satti, Xiaofeng F. Song, Shuqing Zeng, Abdoul Karim Abdoul Azizou, Azadeh Farazandeh
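    A minimal sketch of the bias-monitoring idea in the abstract above (the class name, the exponential averaging scheme, and the smoothing factor are assumptions, not the claimed method): a heading-derived yaw rate is compared with the yaw-rate sensor output, the running difference is treated as the sensor bias, and the bias is removed from subsequent readings to produce the final yaw term.

    ```python
    class YawRateBiasMonitor:
        """Illustrative running-average estimate of yaw-rate sensor bias."""

        def __init__(self, alpha: float = 0.02):
            self.alpha = alpha          # smoothing factor for the bias estimate
            self.bias = 0.0             # rad/s

        def update(self, heading_yaw_rate: float, sensor_yaw_rate: float) -> float:
            """heading_yaw_rate: yaw rate derived from the vehicle heading (first heading parameter).
            sensor_yaw_rate:   yaw rate from the on-vehicle yaw-rate sensor (second heading parameter).
            Returns the corrected (final) yaw term."""
            residual = sensor_yaw_rate - heading_yaw_rate
            self.bias = (1.0 - self.alpha) * self.bias + self.alpha * residual
            return sensor_yaw_rate - self.bias

    monitor = YawRateBiasMonitor()
    for _ in range(500):
        corrected = monitor.update(heading_yaw_rate=0.10, sensor_yaw_rate=0.12)
    print(f"estimated bias ~ {monitor.bias:.3f} rad/s, corrected yaw ~ {corrected:.3f} rad/s")
    ```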
  • Patent number: 11035945
    Abstract: System and method of controlling operation of a device in real-time. The system includes an optical sensor having a steerable optical field of view for obtaining image data and a radar unit having a steerable radar field of view for obtaining radar data. A controller may be configured to steer a first one of the optical sensor and the radar unit to a first region of interest and a second one of the optical sensor and the radar unit to the second region of interest. The controller may be configured to steer both the optical sensor and the radar unit to the first region of interest. The radar data and the image data are fused to obtain a target location and a target velocity. The controller is configured to control operation of the device based in part on at least one of the target location and the target velocity.
    Type: Grant
    Filed: April 18, 2019
    Date of Patent: June 15, 2021
    Assignee: GM Global Technology Operations LLC
    Inventors: Tzvi Philipp, Shahar Villeval, Igal Bilik, Jeremy A. Salinger, Shuqing Zeng
  • Publication number: 20210129842
    Abstract: Presented are embedded control systems with logic for computation and data sharing, methods for making/using such systems, and vehicles with distributed sensors and embedded processing hardware for provisioning automated driving functionality. A method for operating embedded controllers connected with distributed sensors includes receiving a first data stream from a first sensor via a first embedded controller, and storing the first data stream with a first timestamp and data lifespan via a shared data buffer in a memory device. A second data stream is received from a second sensor via a second embedded controller. A timing impact of the second data stream is calculated based on the corresponding timestamp and data lifespan. Upon determining that the timing impact does not violate a timing constraint, the first data stream is purged from memory and the second data stream is stored with a second timestamp and data lifespan in the memory device.
    Type: Application
    Filed: November 1, 2019
    Publication date: May 6, 2021
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Shige Wang, Wei Tong, Stephen N. McKinnie, Shuqing Zeng
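    The buffering policy in the abstract above lends itself to a small data-structure sketch. The version below is a guess at the mechanics (field names, the staleness-based timing impact, and the purge rule are assumptions): each stream entry carries a timestamp and a lifespan, a new entry is admitted only if its timing impact stays within the constraint, and expired entries are purged when it is stored.

    ```python
    import time
    from dataclasses import dataclass, field

    @dataclass
    class StreamEntry:
        sensor_id: str
        payload: bytes
        timestamp: float          # seconds since epoch
        lifespan: float           # seconds the data stays valid

        def expired(self, now: float) -> bool:
            return now > self.timestamp + self.lifespan

    @dataclass
    class SharedDataBuffer:
        timing_constraint: float              # max allowed staleness (s) for an incoming stream
        entries: list = field(default_factory=list)

        def try_store(self, entry: StreamEntry) -> bool:
            now = time.time()
            timing_impact = now - entry.timestamp       # how stale the stream already is
            if timing_impact > self.timing_constraint:
                return False                            # would violate the timing constraint
            # Purge entries whose lifespan has elapsed, then admit the new one.
            self.entries = [e for e in self.entries if not e.expired(now)]
            self.entries.append(entry)
            return True

    buf = SharedDataBuffer(timing_constraint=0.05)
    ok = buf.try_store(StreamEntry("camera_front", b"...", time.time(), lifespan=0.1))
    print(ok, len(buf.entries))
    ```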
  • Patent number: 10984534
    Abstract: Systems and methods to identify an attention region in sensor-based detection involve obtaining a detection result that indicates one or more detection areas where one or more objects of interest are detected. The detection result is based on using a first detection algorithm. The method includes obtaining a reference detection result that indicates one or more reference detection areas where one or more objects of interest are detected. The reference detection result is based on using a second detection algorithm. The method also includes identifying the attention region as one of the one or more reference detection areas without a corresponding one or more detection areas. The first detection algorithm is used to perform detection in the attention region.
    Type: Grant
    Filed: March 28, 2019
    Date of Patent: April 20, 2021
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Wei Tong, Shuqing Zeng, Upali P. Mudalige
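    One non-authoritative reading of the attention-region logic above: run a reference detector alongside the primary detector, and flag any reference detection box that has no sufficiently overlapping primary detection as an attention region for a second, focused pass with the first algorithm. The IoU matching and the 0.3 threshold below are assumptions.

    ```python
    def iou(a, b):
        """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    def attention_regions(primary_boxes, reference_boxes, iou_thresh=0.3):
        """Reference detections with no matching primary detection."""
        return [r for r in reference_boxes
                if all(iou(r, p) < iou_thresh for p in primary_boxes)]

    primary   = [(10, 10, 50, 50)]
    reference = [(12, 11, 48, 52), (200, 80, 240, 140)]
    print(attention_regions(primary, reference))   # -> [(200, 80, 240, 140)]
    ```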
  • Patent number: 10965857
    Abstract: In various embodiments, cameras and mobile platforms are provided. In one exemplary embodiment, a mobile platform is provided that includes a body and a camera disposed on the body. The camera includes one or more image sensors and one or more lens assemblies. The one or more lens assemblies are configured with respect to the one or more image sensors such that at least one image plane from the one or more image sensors is tilted to form a non-zero angle with at least one equivalent lens plane from the one or more lens assemblies, transferring the image focal plane to be parallel to the movement direction of the mobile platform in which the camera is installed. The use of multiple image sensors or lens assemblies in certain embodiments increases the camera's angle of view.
    Type: Grant
    Filed: January 8, 2020
    Date of Patent: March 30, 2021
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Wei Zeng, Shuqing Zeng, Scott E. Parrish
  • Patent number: 10965856
    Abstract: In various embodiments, cameras and mobile platforms are provided. In one exemplary embodiment, a mobile platform is provided that includes a body and a camera disposed on the body. The camera includes one or more image sensors and one or more lens assemblies. The one or more lens assemblies are configured with respect to the one or more image sensors such that at least one image plane from the one or more image sensors is tilted to form a non-zero angle with at least one equivalent lens plane from the one or more lens assemblies, transferring the image focal plane to be parallel to the movement direction of the mobile platform in which the camera is installed. The use of multiple image sensors or lens assemblies in certain embodiments increases the camera's angle of view.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: March 30, 2021
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Wei Zeng, Shuqing Zeng, Scott E. Parrish
  • Publication number: 20210086695
    Abstract: The present application relates to a method and apparatus for generating a graphical user interface indicative of a vehicle underbody view, including a LIDAR operative to generate a depth map of an off-road surface, a camera for capturing an image of the off-road surface, a chassis sensor operative to detect an orientation of a host vehicle, a processor operative to generate an augmented image in response to the depth map, the image, and the orientation, wherein the augmented image depicts an underbody view of the host vehicle and a graphic representative of a host vehicle suspension system, and a display operative to display the augmented image to a host vehicle operator. A static and dynamic model of the vehicle underbody is compared against the 3-D terrain model to identify contact points between the underbody and the terrain, which are highlighted.
    Type: Application
    Filed: September 24, 2019
    Publication date: March 25, 2021
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Brian Mahnken, Bradford G. Schreiber, Jeffrey Louis Brown, Shawn W. Ryan, Brent T. Deep, Kurt A. Heier, Upali P. Mudalige, Shuqing Zeng
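    The underbody-versus-terrain comparison in the abstract above can be illustrated as a simple clearance check over a grid. The sketch below is an assumption about how such a check might look (grid representation, margin parameter, and values are illustrative, not the patented model).

    ```python
    import numpy as np

    def contact_points(underbody_clearance: np.ndarray,
                       terrain_height: np.ndarray,
                       margin: float = 0.0) -> np.ndarray:
        """Boolean mask of grid cells where the terrain reaches (or exceeds) the
        underbody, i.e. candidate contact points to highlight in the augmented view.

        underbody_clearance: height of the underbody above the ground plane per cell (m)
        terrain_height:      height of the terrain above the same plane per cell (m)
        """
        return terrain_height + margin >= underbody_clearance

    clearance = np.full((4, 6), 0.25)                    # flat 25 cm underbody
    terrain = np.zeros((4, 6)); terrain[2, 3] = 0.30     # a 30 cm obstacle
    print(np.argwhere(contact_points(clearance, terrain)))   # -> [[2 3]]
    ```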
  • Patent number: 10955842
    Abstract: Systems and methods are provided for controlling an autonomous vehicle (AV). A scene understanding module of a high-level controller selects a particular combination of sensorimotor primitive modules to be enabled and executed for a particular driving scenario from a plurality of sensorimotor primitive modules. Each one of the particular combination of the sensorimotor primitive modules addresses a sub-task in a sequence of sub-tasks that address a particular driving scenario. A primitive processor module executes the particular combination of the sensorimotor primitive modules such that each generates a vehicle trajectory and speed profile. An arbitration module selects one of the vehicle trajectory and speed profiles having the highest priority ranking for execution, and a vehicle control module processes the selected one of vehicle trajectory and speed profiles to generate control signals used to execute one or more control actions to automatically control the AV.
    Type: Grant
    Filed: May 24, 2018
    Date of Patent: March 23, 2021
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Shuqing Zeng, Wei Tong, Upali P. Mudalige
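    A minimal sketch of the primitive-selection and arbitration flow described in the abstract above, assuming each enabled primitive proposes a trajectory and speed profile and the arbiter keeps the highest-priority proposal. Names, the priority encoding, and the toy primitives are assumptions.

    ```python
    from dataclasses import dataclass
    from typing import Callable, List, Tuple

    Trajectory = List[Tuple[float, float]]   # (x, y) waypoints

    @dataclass
    class SensorimotorPrimitive:
        name: str
        priority: int                                             # higher wins arbitration
        plan: Callable[[dict], Tuple[Trajectory, List[float]]]    # -> (trajectory, speed profile)

    def arbitrate(enabled: List[SensorimotorPrimitive], scene: dict):
        """Execute each enabled primitive and keep the highest-priority proposal."""
        proposals = [(p.priority, p.name, p.plan(scene)) for p in enabled]
        priority, name, (traj, speed) = max(proposals, key=lambda t: t[0])
        return name, traj, speed

    lane_keep = SensorimotorPrimitive("lane_keep", 1, lambda s: ([(0, 0), (10, 0)], [15.0, 15.0]))
    yield_ped = SensorimotorPrimitive("yield_to_pedestrian", 5, lambda s: ([(0, 0), (4, 0)], [5.0, 0.0]))
    print(arbitrate([lane_keep, yield_ped], scene={}))
    ```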
  • Patent number: 10929719
    Abstract: Systems and methods to generate an adversarial attack on a black box object detection algorithm of a sensor involve obtaining an initial training data set from the black box object detection algorithm. The black box object detection algorithm performs object detection on initial input data to provide black box object detection algorithm output that provides the initial training data set. A substitute model is trained with the initial training data set such that output from the substitute model replicates the black box object detection algorithm output that makes up the initial training data set. Details of operation of the black box object detection algorithm are unknown and details of operation of the substitute model are known. The substitute model is used to perform the adversarial attack. The adversarial attack refers to identifying adversarial input data for which the black box object detection algorithm will fail to perform accurate detection.
    Type: Grant
    Filed: March 28, 2019
    Date of Patent: February 23, 2021
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Wei Tong, Shuqing Zeng, Upali P. Mudalige
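    The substitute-model workflow in the abstract above can be sketched end to end on a toy problem: query the black box to label a training set, fit a substitute whose internals are known, and take a gradient-sign step on the substitute to craft an adversarial input. Everything below (the logistic-regression substitute, the FGSM-style step, the step size) is an illustrative assumption, not the patented procedure.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def black_box(x: np.ndarray) -> int:
        """Stand-in for the unknown detector; its internals are hidden from the attacker."""
        secret_w = np.array([1.5, -2.0, 0.5])
        return int(x @ secret_w > 0)

    # 1) Build the initial training set by querying the black box.
    X = rng.normal(size=(500, 3))
    y = np.array([black_box(x) for x in X])

    # 2) Train a simple substitute (logistic regression via gradient descent).
    w = np.zeros(3)
    for _ in range(2000):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= 0.1 * X.T @ (p - y) / len(y)

    # 3) FGSM-style step on the substitute to craft a candidate adversarial input.
    x0 = rng.normal(size=3)
    label = black_box(x0)
    grad = (1.0 / (1.0 + np.exp(-x0 @ w)) - label) * w    # d(loss)/dx under the substitute
    x_adv = x0 + 0.5 * np.sign(grad)                      # may or may not transfer to the black box
    print("black box on original:", black_box(x0), "on adversarial:", black_box(x_adv))
    ```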
  • Publication number: 20210041712
    Abstract: A system and method for obtaining an overall image that is constructed from multiple sub-images. The method includes: capturing a first sub-image having a first sub-image field of view using an image sensor of an electronically-steerable optical sensor; after capturing the first sub-image, steering light received at the electronically-steerable optical sensor using an electronically-controllable light-steering mechanism of the electronically-steerable optical sensor so as to obtain a second sub-image field of view; capturing a second sub-image having the second sub-image field of view using the image sensor of the electronically-steerable optical sensor; and combining the first sub-image and the second sub-image so as to obtain the overall image.
    Type: Application
    Filed: August 5, 2019
    Publication date: February 11, 2021
    Inventors: Igal Bilik, Tzvi Philipp, Shahar Villeval, Jeremy A. Salinger, Shuqing Zeng
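    The sub-image combination step in the abstract above reduces, in the simplest case, to pasting each captured sub-image into an overall canvas at an offset implied by the commanded steering position. The sketch below assumes known, non-overlapping offsets; blending of overlapping regions is omitted.

    ```python
    import numpy as np

    def combine_subimages(canvas_shape, subimages, offsets):
        """Paste each sub-image into an overall canvas at its (row, col) offset.

        canvas_shape: (H, W) of the overall image
        subimages:    2-D arrays captured at different steering positions
        offsets:      (row, col) top-left positions, assumed known from the steering commands
        """
        canvas = np.zeros(canvas_shape, dtype=subimages[0].dtype)
        for img, (r, c) in zip(subimages, offsets):
            h, w = img.shape
            canvas[r:r + h, c:c + w] = img
        return canvas

    left  = np.full((100, 160), 50,  dtype=np.uint8)   # first sub-image field of view
    right = np.full((100, 160), 200, dtype=np.uint8)   # second, after steering the optics
    overall = combine_subimages((100, 320), [left, right], [(0, 0), (0, 160)])
    print(overall.shape)   # (100, 320)
    ```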
  • Patent number: 10909390
    Abstract: Examples of techniques for using fixed-point quantization in deep neural networks are disclosed. In one example implementation according to aspects of the present disclosure, a computer-implemented method includes capturing a plurality of images at a camera associated with a vehicle and storing image data associated with the plurality of images to a memory. The method further includes dispatching vehicle perception tasks to a plurality of processing elements of an accelerator in communication with the memory. The method further includes performing, by at least one of the plurality of processing elements, the vehicle perception tasks for the vehicle perception using a neural network, wherein performing the vehicle perception tasks comprises quantizing a fixed-point value based on an activation input and a synapse weight. The method further includes controlling the vehicle based at least in part on a result of performing the vehicle perception tasks.
    Type: Grant
    Filed: October 25, 2018
    Date of Patent: February 2, 2021
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Shuqing Zeng, Wei Tong, Shige Wang, Roman L. Millett
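    The quantization step named in the abstract above (a fixed-point value formed from an activation and a synapse weight) can be illustrated with a generic fixed-point multiply-accumulate. The bit widths and rounding/saturation policy below are assumptions for the sketch, not the claimed scheme.

    ```python
    def to_fixed_point(value: float, frac_bits: int = 8, total_bits: int = 16) -> int:
        """Round a real value to a signed fixed-point integer with `frac_bits`
        fractional bits, saturating at the representable range."""
        scaled = round(value * (1 << frac_bits))
        lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
        return max(lo, min(hi, scaled))

    def fixed_point_mac(activation: float, weight: float, frac_bits: int = 8) -> float:
        """Quantize an activation and a synapse weight, multiply in integer
        arithmetic, and rescale the product back to a real value."""
        a_q = to_fixed_point(activation, frac_bits)
        w_q = to_fixed_point(weight, frac_bits)
        return (a_q * w_q) / float(1 << (2 * frac_bits))

    print(fixed_point_mac(0.73, -1.25))   # ~ -0.9125 plus a small quantization error
    ```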
  • Publication number: 20210018928
    Abstract: Methods and systems are provided for detecting objects within an environment of a vehicle. In one embodiment, a method includes: receiving, by a processor, image data sensed from the environment of the vehicle; determining, by a processor, an area within the image data that object identification is uncertain; controlling, by the processor, a position of a lighting device to illuminate a location in the environment of the vehicle, wherein the location is associated with the area; controlling, by the processor, a position of one or more sensors to obtain sensor data from the location of the environment of the vehicle while the lighting device is illuminating the location; identifying, by the processor, one or more objects from the sensor data; and controlling, by the processor, the vehicle based on the one or more objects.
    Type: Application
    Filed: July 18, 2019
    Publication date: January 21, 2021
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Lawrence A. Bush, Upali P. Mudalige, Zachariah E. Tyree, Wei Tong, Shuqing Zeng
  • Patent number: 10893183
    Abstract: An attention-based imaging system is described, including a camera that can adjust its field of view (FOV) and resolution and a control routine that can determine one or more regions of interest (ROI) within the FOV to prioritize camera resources. The camera includes an image sensor, an internal lens, a steerable mirror, an external lens, and a controller. The external lens is disposed to monitor a viewable region, and the steerable mirror is interposed between the internal lens and the external lens. The steerable mirror is arranged to project the viewable region from the external lens onto the image sensor via the internal lens. The steerable mirror modifies the viewable region that is projected onto the image sensor and controls the image sensor to capture an image. The associated control routine can be deployed either inside the camera or in a separate external processor.
    Type: Grant
    Filed: November 18, 2019
    Date of Patent: January 12, 2021
    Assignee: GM Global Technology Operations LLC
    Inventors: Guangyu J. Zou, Shuqing Zeng, Upali P. Mudalige
  • Patent number: 10861176
    Abstract: Systems and methods for depth estimation of images from a mono-camera by use of radar data by: receiving, a plurality of input 2-D images from the mono-camera; generating, by the processing unit, an estimated depth image by supervised training of an image estimation model; generating, by the processing unit, a synthetic image from a first input image and a second input image from the mono-camera by applying an estimated transform pose; comparing, by the processing unit, an estimated three-dimensional (3-D) point cloud to radar data by applying another estimated transform pose to a 3-D point cloud wherein the 3-D point cloud is estimated from a depth image by supervised training of the image estimation model to radar distance and radar doppler measurement; correcting a depth estimation of the estimated depth image by losses derived from differences: of the synthetic image and original images; of an estimated depth image and a measured radar distance; and of an estimated doppler information and a measured radar doppler.
    Type: Grant
    Filed: November 27, 2018
    Date of Patent: December 8, 2020
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Wei Tong, Shuqing Zeng, Mohannad Murad, Alaa M. Khamis
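    The correction step in the abstract above combines several loss terms. The sketch below shows one plausible weighted combination of the three differences it names (photometric, depth-vs-radar-range, doppler); the weights, array shapes, and function name are illustrative assumptions, not the patented loss.

    ```python
    import numpy as np

    def depth_training_loss(synthetic_img, original_img,
                            est_depth_at_radar, radar_range,
                            est_doppler, radar_doppler,
                            w_photo=1.0, w_range=0.1, w_doppler=0.1):
        """Weighted sum of three terms: photometric difference between the
        synthetic and original images, depth-vs-radar-range difference at the
        radar returns, and doppler difference."""
        photometric = np.abs(synthetic_img - original_img).mean()
        range_term = np.abs(est_depth_at_radar - radar_range).mean()
        doppler_term = np.abs(est_doppler - radar_doppler).mean()
        return w_photo * photometric + w_range * range_term + w_doppler * doppler_term

    loss = depth_training_loss(
        synthetic_img=np.random.rand(4, 4), original_img=np.random.rand(4, 4),
        est_depth_at_radar=np.array([19.2, 33.5]), radar_range=np.array([20.0, 33.0]),
        est_doppler=np.array([-4.8]), radar_doppler=np.array([-5.0]))
    print(f"total loss = {loss:.3f}")
    ```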
  • Patent number: 10859673
    Abstract: A disambiguating system for disambiguating between ambiguous detections is provided. The system includes a plurality of modules, wherein each module is configured to disambiguate between ambiguous detections by selecting, as a true detection, one candidate detection in a set of ambiguous detections and wherein each module is configured to apply a different selection technique. The system includes: one or more modules configured to select as the true detection, the candidate detection whose associated position is closer to a position indicated by other data and one or more modules configured to select as the true detection, the candidate detection with the highest probability of being true based on other sensor data.
    Type: Grant
    Filed: November 1, 2018
    Date of Patent: December 8, 2020
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Lawrence A. Bush, Brent N. Bacchus, Shuqing Zeng, Stephen W. Decker
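    The two selection techniques named in the abstract above can be sketched directly: pick the candidate closest to a position indicated by other data, or pick the candidate with the highest probability of being true given other sensor data. The dictionary layout and scores below are assumptions for illustration.

    ```python
    import math

    def select_by_position(candidates, reference_position):
        """Pick the candidate detection whose position is closest to a position
        indicated by other data (e.g., a map or a second sensor)."""
        return min(candidates,
                   key=lambda c: math.dist(c["position"], reference_position))

    def select_by_probability(candidates):
        """Pick the candidate with the highest probability of being the true
        detection, as scored from other sensor data."""
        return max(candidates, key=lambda c: c["probability"])

    ambiguous = [
        {"id": "A", "position": (12.0, 3.1), "probability": 0.40},
        {"id": "B", "position": (25.4, -1.8), "probability": 0.60},
    ]
    print(select_by_position(ambiguous, reference_position=(12.5, 3.0))["id"])  # A
    print(select_by_probability(ambiguous)["id"])                               # B
    ```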
  • Publication number: 20200355820
    Abstract: The vehicle-mounted perception sensor gathers environment perception data from a scene using first and second heterogeneous (different modality) sensors, at least one of the heterogeneous sensors being directable to a predetermined region of interest. A perception processor receives the environment perception data and performs object recognition to identify objects, each with a computed confidence score. The processor assesses the confidence score vis-à-vis a predetermined threshold, and based on that assessment, generates an attention signal to redirect one of the heterogeneous sensors to a region of interest identified by the other heterogeneous sensor. In this way, information from one sensor primes the other sensor to increase accuracy, provide deeper knowledge about the scene, and thus do a better job of object tracking in vehicular applications.
    Type: Application
    Filed: May 8, 2019
    Publication date: November 12, 2020
    Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Shuqing Zeng, Wei Tong, Upali P. Mudalige, Lawrence A. Bush
  • Patent number: 10832450
    Abstract: A method for image style transfer using a Semantic Preserved Generative Adversarial Network (SPGAN) includes: receiving a source image; inputting the source image into the SPGAN; extracting a source-semantic feature data from the source image; generating, by the first decoder, a first synthetic image including the source semantic content of the source image in a target style of a target image using the source-semantic feature data extracted by the first encoder of the first generator network, wherein the first synthetic image includes first-synthetic feature data; determining a first encoder loss using the source-semantic feature data and the first-synthetic feature data; discriminating the first synthetic image against the target image to determine a GAN loss; determining a total loss as a function of the first encoder loss and the first GAN loss; and training the first generator network and the first discriminator network.
    Type: Grant
    Filed: March 27, 2019
    Date of Patent: November 10, 2020
    Assignee: GM Global Technology Operations LLC
    Inventors: Wei Tong, Chengqi Bian, Farui Peng, Shuqing Zeng
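    The total-loss construction in the abstract above, a function of the encoder (semantic-preservation) loss and the GAN loss, can be sketched as a weighted sum. The specific distance, the non-saturating GAN term, and the weighting factor below are assumptions, not the patented formulation.

    ```python
    import numpy as np

    def encoder_loss(source_features: np.ndarray, synthetic_features: np.ndarray) -> float:
        """Semantic-preservation term: distance between the source-semantic features
        and the features extracted from the synthetic image."""
        return float(np.mean((source_features - synthetic_features) ** 2))

    def gan_loss(discriminator_score_on_synthetic: float) -> float:
        """Non-saturating generator GAN term: -log D(synthetic)."""
        return float(-np.log(discriminator_score_on_synthetic + 1e-9))

    def total_loss(source_features, synthetic_features,
                   disc_score, lambda_semantic: float = 10.0) -> float:
        """Total generator objective as a weighted function of the two terms."""
        return lambda_semantic * encoder_loss(source_features, synthetic_features) \
            + gan_loss(disc_score)

    src = np.random.rand(128)
    syn = src + 0.05 * np.random.randn(128)
    print(f"total loss = {total_loss(src, syn, disc_score=0.35):.3f}")
    ```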
  • Patent number: 10824943
    Abstract: Described herein are systems, methods, and computer-readable media for generating and training a high precision low bit convolutional neural network (CNN). A filter of each convolutional layer of the CNN is approximated using one or more binary filters and a real-valued activation function is approximated using a linear combination of binary activations. More specifically, a non-1×1 filter (e.g., a k×k filter, where k>1) is approximated using a scaled binary filter and a 1×1 filter is approximated using a linear combination of binary filters. Thus, a different strategy is employed for approximating different weights (e.g., 1×1 filter vs. a non-1×1 filter). In this manner, convolutions performed in convolutional layer(s) of the high precision low bit CNN become binary convolutions that yield a lower computational cost while still maintaining a high performance (e.g., a high accuracy).
    Type: Grant
    Filed: August 21, 2018
    Date of Patent: November 3, 2020
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Wei Tong, Shuqing Zeng, Upali P. Mudalige, Shige Wang
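    The two approximation strategies named in the abstract above (a scaled binary filter for non-1×1 filters, a linear combination of binary filters for 1×1 filters) can be illustrated with a small numerical sketch. The greedy residual binarization used for the 1×1 case is an assumption about how such a combination might be fit, not the claimed training method.

    ```python
    import numpy as np

    def scaled_binary_filter(W: np.ndarray):
        """Approximate a k x k filter as alpha * sign(W), with alpha = mean(|W|)."""
        B = np.where(W >= 0, 1.0, -1.0)
        alpha = np.abs(W).mean()
        return alpha, B

    def binary_basis_combination(w: np.ndarray, num_bases: int = 3):
        """Approximate a 1 x 1 filter (a vector over input channels) as a linear
        combination of binary filters, via greedy residual binarization."""
        residual, alphas, bases = w.astype(float), [], []
        for _ in range(num_bases):
            B = np.where(residual >= 0, 1.0, -1.0)
            a = np.abs(residual).mean()
            alphas.append(a); bases.append(B)
            residual = residual - a * B
        return alphas, bases

    W = np.random.randn(3, 3)
    a, B = scaled_binary_filter(W)
    print("k x k approximation error:", np.abs(W - a * B).mean())

    w = np.random.randn(64)                 # a 1 x 1 filter across 64 input channels
    alphas, bases = binary_basis_combination(w)
    approx = sum(a * B for a, B in zip(alphas, bases))
    print("1 x 1 approximation error:", np.abs(w - approx).mean())
    ```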
  • Publication number: 20200333454
    Abstract: System and method of controlling operation of a device in real-time. The system includes an optical sensor having a steerable optical field of view for obtaining image data and a radar unit having a steerable radar field of view for obtaining radar data. A controller may be configured to steer a first one of the optical sensor and the radar unit to a first region of interest and a second one of the optical sensor and the radar unit to the second region of interest. The controller may be configured to steer both the optical sensor and the radar unit to the first region of interest. The radar data and the image data are fused to obtain a target location and a target velocity. The controller is configured to control operation of the device based in part on at least one of the target location and the target velocity.
    Type: Application
    Filed: April 18, 2019
    Publication date: October 22, 2020
    Applicant: GM Global Technology Operations LLC
    Inventors: Tzvi Philipp, Shahar Villeval, Igal Bilik, Jeremy A. Salinger, Shuqing Zeng