Patents by Inventor Lincan Zou

Lincan Zou has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11803616
Abstract: Methods and systems for performing function testing for moveable objects. One system includes an electronic processor configured to access a driving scene including a moveable object. The electronic processor is also configured to perform spatial representation learning on the driving scene. The electronic processor is also configured to generate an adversarial example based on the learned spatial representation. The electronic processor is also configured to retrain a deep learning model using the adversarial example and the driving scene.
    Type: Grant
    Filed: March 1, 2021
    Date of Patent: October 31, 2023
    Assignee: Robert Bosch GmbH
    Inventors: Wenbin He, Liang Gou, Lincan Zou, Liu Ren
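The retraining loop described in this abstract can be sketched in miniature. The following is a hypothetical illustration only, not the patented method: a tiny logistic classifier stands in for the deep learning model, and an FGSM-style input perturbation stands in for the learned-spatial-representation adversarial step; all names and constants are invented for the sketch.

```python
import math

def predict(w, b, x):
    # Logistic prediction for a 1-D input.
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

def train(samples, w=0.0, b=0.0, lr=0.5, epochs=200):
    # Plain SGD on logistic loss.
    for _ in range(epochs):
        for x, y in samples:
            p = predict(w, b, x)
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

def adversarial_example(w, b, x, y, eps=0.5):
    # Perturb the input in the direction that increases the loss (FGSM-style).
    p = predict(w, b, x)
    grad_x = (p - y) * w  # d(loss)/dx for logistic loss
    return x + eps * (1 if grad_x > 0 else -1), y

data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]
w, b = train(data)
adv = [adversarial_example(w, b, x, y) for x, y in data]
# Retrain on the original scenes plus the adversarial examples.
w2, b2 = train(data + adv, w, b)
```

The retrained parameters start from the previously trained values, mirroring the abstract's retrain-rather-than-train-from-scratch phrasing.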
  • Publication number: 20230184949
    Abstract: A system and method are disclosed herein for developing robust semantic mapping models for estimating semantic maps from LiDAR scans. In particular, the system and method enable the generation of realistic simulated LiDAR scans based on two-dimensional (2D) floorplans, for the purpose of providing a much larger set of training data that can be used to train robust semantic mapping models. These simulated LiDAR scans, as well as real LiDAR scans, are annotated using automated and manual processes with a rich set of semantic labels. Based on the annotated LiDAR scans, one or more semantic mapping models can be trained to estimate the semantic map for new LiDAR scans. The trained semantic mapping model can be deployed in robot vacuum cleaners, as well as similar devices that must interpret LiDAR scans of an environment to perform a task.
    Type: Application
    Filed: December 9, 2021
    Publication date: June 15, 2023
    Inventors: Xinyu Huang, Sharath Gopal, Lincan Zou, Yuliang Guo, Liu Ren
  • Publication number: 20230085938
    Abstract: Embodiments of systems and methods for diagnosing an object-detecting machine learning model for autonomous driving are disclosed herein. An input image is received from a camera mounted in or on a vehicle that shows a scene. A spatial distribution of movable objects within the scene is derived using a context-aware spatial representation machine learning model. An unseen object is generated in the scene that is not originally in the input image utilizing a spatial adversarial machine learning model. Via the spatial adversarial machine learning model, the unseen object is moved to different locations to fail the object-detecting machine learning model. An interactive user interface enables a user to analyze performance of the object-detecting machine learning model with respect to the scene without the unseen object and the scene with the unseen object.
    Type: Application
    Filed: September 17, 2021
    Publication date: March 23, 2023
    Inventors: Wenbin He, Liang Gou, Lincan Zou, Liu Ren
  • Patent number: 11587330
    Abstract: A visual analytics tool for updating object detection models in autonomous driving applications. In one embodiment, an object detection model analysis system includes a computer and an interface device. The interface device includes a display device. The computer includes an electronic processor that is configured to extract object information from image data with a first object detection model, extract characteristics of objects from metadata associated with image data, generate a summary of the object information and the characteristics, generate coordinated visualizations based on the summary and the characteristics, generate a recommendation graphical user interface element based on the coordinated visualizations and a first one or more user inputs, and update the first object detection model based at least in part on a classification of one or more individual objects as an actual weakness in the first object detection model to generate a second object detection model for autonomous driving.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: February 21, 2023
    Assignee: Robert Bosch GmbH
    Inventors: Liang Gou, Lincan Zou, Nanxiang Li, Axel Wendt, Liu Ren
  • Patent number: 11537901
    Abstract: A system and method for domain adaptation involves a first domain and a second domain. A machine learning system is trained with first sensor data and first label data of the first domain. Second sensor data of a second domain is obtained. Second label data is generated via the machine learning system based on the second sensor data. Inter-domain sensor data is generated by interpolating the first sensor data of the first domain with respect to the second sensor data of the second domain. Inter-domain label data is generated by interpolating first label data of the first domain with respect to second label data of the second domain. The machine learning system is operable to generate inter-domain output data in response to the inter-domain sensor data. Inter-domain loss data is generated based on the inter-domain output data with respect to the inter-domain label data. Parameters of the machine learning system are updated upon optimizing final loss data that includes at least the inter-domain loss data.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: December 27, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Huan Song, Shen Yan, Nanxiang Li, Lincan Zou, Liu Ren
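The inter-domain interpolation step above can be pictured as a mixup-style convex blend of the two domains. This is a minimal sketch under that assumption; the function names are hypothetical and the label vectors for the second domain stand in for the pseudo-labels the abstract says the machine learning system generates.

```python
def interpolate(a, b, lam):
    # Convex combination of two same-length vectors: lam*a + (1-lam)*b.
    return [lam * x + (1.0 - lam) * y for x, y in zip(a, b)]

def make_inter_domain(src_x, src_y, tgt_x, tgt_y, lam=0.5):
    # Blend first-domain sensor data/labels with second-domain
    # sensor data/pseudo-labels to form inter-domain training examples.
    return interpolate(src_x, tgt_x, lam), interpolate(src_y, tgt_y, lam)

inter_x, inter_y = make_inter_domain([1.0, 2.0], [1.0, 0.0],
                                     [3.0, 4.0], [0.0, 1.0])
```

The inter-domain loss would then be computed on `inter_x` against `inter_y`, and folded into the final loss that drives the parameter update.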
  • Publication number: 20220277173
    Abstract: Methods and systems for performing function testing for moveable objects. One system includes an electronic processor configured to access a driving scene including a moveable object. The electronic processor is also configured to perform spatial representation learning on the driving scene. The electronic processor is also configured to generate an adversarial example based on the learned spatial representation. The electronic processor is also configured to retrain a deep learning model using the adversarial example and the driving scene.
    Type: Application
    Filed: March 1, 2021
    Publication date: September 1, 2022
    Inventors: Wenbin He, Liang Gou, Lincan Zou, Liu Ren
  • Publication number: 20220277192
    Abstract: A visual analytics workflow and system are disclosed for assessing, understanding, and improving deep neural networks. The visual analytics workflow advantageously enables interpretation and improvement of the performance of a neural network model, for example an image-based object detection and classification model, with minimal human-in-the-loop interaction. A data representation component extracts semantic features of input image data, such as colors, brightness, background, rotation, etc. of the images or objects in the images. The input image data are passed through the neural network to obtain prediction results, such as object detection and classification results. An interactive visualization component transforms the prediction results and semantic features into interactive and human-friendly visualizations, in which graphical elements encoding the prediction results are visually arranged depending on the extracted semantic features of input image data.
    Type: Application
    Filed: February 26, 2021
    Publication date: September 1, 2022
    Inventors: Liang Gou, Lincan Zou, Wenbin He, Liu Ren
  • Publication number: 20220218230
    Abstract: A system and method for monitoring a walking activity are disclosed, which have three major components: a pre-processing phase, a step detection phase, and a filtering and post-processing phase. In the pre-processing phase, recorded motion data is received, reoriented with respect to gravity, and low-pass filtered. Next, in the step detection phase, walking step candidates are detected from vertical acceleration peaks and valleys resulting from heel strikes. Finally, in the filtering and post-processing phase, false positive steps are filtered out using a composite of criteria, including time, similarity, and horizontal motion variation. The method is advantageously able to detect most walking activities with accurate time boundaries, while maintaining a very low false-positive rate.
    Type: Application
    Filed: January 13, 2021
    Publication date: July 14, 2022
    Inventors: Huan Song, Lincan Zou, Liu Ren
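The three-phase pipeline described above can be sketched on a synthetic vertical-acceleration trace. This is an illustrative stand-in, not the patented method: a moving average substitutes for the low-pass filter, and a fixed peak threshold and minimum peak gap substitute for the composite filtering criteria.

```python
def low_pass(signal, k=3):
    # Simple moving average as a stand-in low-pass filter.
    half = k // 2
    return [sum(signal[max(0, i - half):i + half + 1]) /
            len(signal[max(0, i - half):i + half + 1])
            for i in range(len(signal))]

def detect_steps(acc, peak_thresh=1.2, min_gap=2):
    # Phase 1: smooth; Phase 2: find peaks above a threshold;
    # Phase 3: drop peaks closer together than a plausible step time.
    smoothed = low_pass(acc)
    peaks = [i for i in range(1, len(smoothed) - 1)
             if smoothed[i] > smoothed[i - 1]
             and smoothed[i] > smoothed[i + 1]
             and smoothed[i] > peak_thresh]
    steps = []
    for p in peaks:
        if not steps or p - steps[-1] >= min_gap:
            steps.append(p)
    return steps

steps = detect_steps([0, 1, 3, 1, 0, 1, 3, 1, 0])
```

On this trace the two sharp acceleration spikes survive smoothing and both filtering criteria, yielding two detected step candidates.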
  • Publication number: 20220221482
    Abstract: A system and method for monitoring performance of a repeated activity is described. The system comprises a motion sensing system and a processing system. The motion sensing system includes sensors configured to measure or track motions corresponding to a repeated activity. The processing system is configured to process motion data received from the motion sensing system to recognize and measure cycle durations in the repeated activity. In contrast to the conventional systems and methods, which may work for repeated activities having a high level of standardization, the system advantageously enables recognition and monitoring of cycle durations for a repeated activity, even when significant abnormal motions are present in each cycle. Thus, the system can be utilized in a significantly broader set of applications, compared to conventional systems and methods.
    Type: Application
    Filed: January 14, 2021
    Publication date: July 14, 2022
    Inventors: Lincan Zou, Huan Song, Liu Ren
  • Patent number: 11373356
    Abstract: A method for generating graphics of a three-dimensional (3D) virtual environment includes: receiving, with a processor, a first camera position in the 3D virtual environment and a first viewing direction in the 3D virtual environment; receiving, with the processor, weather data including first precipitation information corresponding to a first geographic region corresponding to the first camera position in the 3D virtual environment; defining, with the processor, a bounding geometry at a first position that is a first distance from the first camera position in the first viewing direction, the bounding geometry being dimensioned so as to cover a field of view from the first camera position in the first viewing direction; and rendering, with the processor, a 3D particle system in the 3D virtual environment depicting precipitation only within the bounding geometry, the 3D particle system having features depending on the first precipitation information.
    Type: Grant
    Filed: March 28, 2018
    Date of Patent: June 28, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Zeng Dai, Liu Ren, Lincan Zou
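The placement step in this abstract reduces to straightforward vector math: offset the bounding geometry a fixed distance from the camera along the normalized viewing direction. A minimal sketch, with hypothetical names, follows; sizing the geometry to cover the field of view is omitted.

```python
def normalize(v):
    # Scale a vector to unit length.
    n = sum(c * c for c in v) ** 0.5
    return [c / n for c in v]

def bounding_center(camera, view_dir, distance):
    # Center of the bounding geometry: camera + distance * unit(view_dir).
    d = normalize(view_dir)
    return [c + distance * x for c, x in zip(camera, d)]

center = bounding_center([0.0, 0.0, 0.0], [0.0, 0.0, 2.0], 10.0)
```

Because the geometry tracks the camera and viewing direction, precipitation particles only ever need to be simulated inside that small volume rather than across the whole scene.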
  • Patent number: 11301724
    Abstract: A system includes a camera configured to obtain image information from objects. The system also includes a processor in communication with the camera and programmed to receive an input data including the image information, encode the input via an encoder, obtain a latent variable defining an attribute of the input data, generate a sequential reconstruction of the input data utilizing at least the latent variable and an adversarial noise, obtain a residual between the input data and the sequential reconstruction utilizing a comparison of at least the input and the reconstruction to learn a mean shift in latent space, and output a mean shift indicating a test result of the input compared to the adversarial noise based on the comparison.
    Type: Grant
    Filed: April 30, 2020
    Date of Patent: April 12, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Liang Gou, Lincan Zou, Axel Wendt, Liu Ren
  • Patent number: 11224359
    Abstract: Abnormal motions are detected in sensor data collected with respect to performance of repetitive human activities. An autoencoder network model is trained based on a set of standard activity data. Repetitive activity is extracted from sensor data. A first score is generated indicative of distance of a repetition of the repetitive activity from the standard activity. The repetitive activity is used to retrain the autoencoder network model, using weights of the autoencoder network model as initial values, the weights being based on the training of the autoencoder network model using the set of standard activity data. A second score is generated indicative of whether the repetition is an outlier as compared to other repetitions of the repetitive activity. A final score is generated based on a weighting of the first score and the second score.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: January 18, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Huan Song, Liu Ren, Lincan Zou
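The two-score fusion at the heart of this abstract can be sketched without the autoencoder itself. In this hypothetical illustration, a Euclidean distance stands in for the first (distance-from-standard) score, a z-score stands in for the second (outlier-among-peers) score, and the weight is an invented constant.

```python
def distance_score(repetition, standard):
    # First score: Euclidean distance of a repetition from the standard pattern.
    return sum((a - b) ** 2 for a, b in zip(repetition, standard)) ** 0.5

def outlier_score(value, peers):
    # Second score: how far a repetition's summary value sits from its peers.
    mean = sum(peers) / len(peers)
    var = sum((p - mean) ** 2 for p in peers) / len(peers)
    return abs(value - mean) / (var ** 0.5 + 1e-9)

def final_score(s1, s2, w=0.6):
    # Weighted blend of the model-based and population-based scores.
    return w * s1 + (1 - w) * s2
```

A repetition that is both far from the standard activity and unlike its own neighbors receives a high final score, flagging it as abnormal.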
  • Patent number: 11199561
    Abstract: Motion windows are generated from a query activity sequence. For each of the motion windows in the query activity sequence, a corresponding motion window in the reference activity sequence is found. One or more difference calculations are performed between the motion windows of the query activity sequence and the corresponding motion windows in the reference activity sequence based on at least one criterion associated with physical meaning. Abnormality of the motion windows is determined based on the one or more difference calculations. A standardized evaluation result of the query activity sequence is output based on the detected abnormal motion windows in the query activity sequence.
    Type: Grant
    Filed: December 31, 2018
    Date of Patent: December 14, 2021
    Assignee: Robert Bosch GmbH
    Inventors: Lincan Zou, Liu Ren, Huan Song, Cheng Zhang
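The window-by-window comparison above can be sketched as follows. The window size, the mean-absolute-difference metric, and the abnormality threshold are all illustrative placeholders, not the patent's criteria "associated with physical meaning".

```python
def windows(seq, size):
    # Split a sequence into consecutive non-overlapping motion windows.
    return [seq[i:i + size] for i in range(0, len(seq) - size + 1, size)]

def window_diff(q, r):
    # Mean absolute difference between a query window and its reference window.
    return sum(abs(a - b) for a, b in zip(q, r)) / len(q)

def abnormal_windows(query, reference, size=2, thresh=1.0):
    # Indices of query windows whose difference from the corresponding
    # reference window exceeds the threshold.
    return [i for i, (q, r) in enumerate(zip(windows(query, size),
                                             windows(reference, size)))
            if window_diff(q, r) > thresh]

flagged = abnormal_windows([1, 1, 5, 5, 1, 1], [1, 1, 1, 1, 1, 1])
```

Here the middle window of the query deviates strongly from the reference, so only it is flagged as abnormal.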
  • Publication number: 20210342647
    Abstract: A system includes a camera configured to obtain image information from objects. The system also includes a processor in communication with the camera and programmed to receive an input data including the image information, encode the input via an encoder, obtain a latent variable defining an attribute of the input data, generate a sequential reconstruction of the input data utilizing at least the latent variable and an adversarial noise, obtain a residual between the input data and the sequential reconstruction utilizing a comparison of at least the input and the reconstruction to learn a mean shift in latent space, and output a mean shift indicating a test result of the input compared to the adversarial noise based on the comparison.
    Type: Application
    Filed: April 30, 2020
    Publication date: November 4, 2021
    Inventors: Liang Gou, Lincan Zou, Axel Wendt, Liu Ren
  • Publication number: 20210201159
    Abstract: A system and method for domain adaptation involves a first domain and a second domain. A machine learning system is trained with first sensor data and first label data of the first domain. Second sensor data of a second domain is obtained. Second label data is generated via the machine learning system based on the second sensor data. Inter-domain sensor data is generated by interpolating the first sensor data of the first domain with respect to the second sensor data of the second domain. Inter-domain label data is generated by interpolating first label data of the first domain with respect to second label data of the second domain. The machine learning system is operable to generate inter-domain output data in response to the inter-domain sensor data. Inter-domain loss data is generated based on the inter-domain output data with respect to the inter-domain label data. Parameters of the machine learning system are updated upon optimizing final loss data that includes at least the inter-domain loss data.
    Type: Application
    Filed: December 31, 2019
    Publication date: July 1, 2021
    Inventors: Huan Song, Shen Yan, Nanxiang Li, Lincan Zou, Liu Ren
  • Publication number: 20210201053
    Abstract: A visual analytics tool for updating object detection models in autonomous driving applications. In one embodiment, an object detection model analysis system includes a computer and an interface device. The interface device includes a display device. The computer includes an electronic processor that is configured to extract object information from image data with a first object detection model, extract characteristics of objects from metadata associated with image data, generate a summary of the object information and the characteristics, generate coordinated visualizations based on the summary and the characteristics, generate a recommendation graphical user interface element based on the coordinated visualizations and a first one or more user inputs, and update the first object detection model based at least in part on a classification of one or more individual objects as an actual weakness in the first object detection model to generate a second object detection model for autonomous driving.
    Type: Application
    Filed: December 31, 2019
    Publication date: July 1, 2021
    Inventors: Liang Gou, Lincan Zou, Nanxiang Li, Axel Wendt, Liu Ren
  • Publication number: 20210177307
    Abstract: Abnormal motions are detected in sensor data collected with respect to performance of repetitive human activities. An autoencoder network model is trained based on a set of standard activity data. Repetitive activity is extracted from sensor data. A first score is generated indicative of distance of a repetition of the repetitive activity from the standard activity. The repetitive activity is used to retrain the autoencoder network model, using weights of the autoencoder network model as initial values, the weights being based on the training of the autoencoder network model using the set of standard activity data. A second score is generated indicative of whether the repetition is an outlier as compared to other repetitions of the repetitive activity. A final score is generated based on a weighting of the first score and the second score.
    Type: Application
    Filed: December 17, 2019
    Publication date: June 17, 2021
    Inventors: Huan Song, Liu Ren, Lincan Zou
  • Patent number: 10996235
    Abstract: Using a global optimization, a cycle within a frame buffer including frames corresponding to one or more cycles of query activity sequences is detected. The detection includes creating a plurality of cycle segmentations by recursively iterating through the frame buffer to identify candidate cycles corresponding to cycles of a reference activity sequence until the frame buffer lacks sufficient frames to create additional cycles, computing segmentation errors for each of the plurality of cycle segmentations, and identifying the detected cycle as the one of the plurality of cycle segmentations having a lowest segmentation error. Cycle duration data for the detected cycle is generated. Frames belonging to the detected cycle are removed from the frame buffer. The cycle duration data is output.
    Type: Grant
    Filed: December 31, 2018
    Date of Patent: May 4, 2021
    Assignee: Robert Bosch GmbH
    Inventors: Lincan Zou, Liu Ren, Cheng Zhang
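The global-optimization idea above — recursively enumerate candidate cycle segmentations of the frame buffer, score each, keep the one with the lowest segmentation error — can be sketched as follows. The error metric here (total deviation from a reference cycle length) is a simplified stand-in for the patent's segmentation error, and the exhaustive recursion is only practical for small buffers.

```python
def segmentations(n, lengths):
    # All ways to split n frames into consecutive cycles whose lengths
    # are drawn from the candidate-length set.
    if n == 0:
        return [[]]
    out = []
    for length in lengths:
        if length <= n:
            out += [[length] + rest for rest in segmentations(n - length, lengths)]
    return out

def best_segmentation(n_frames, ref_len, tolerance=1):
    # Candidate cycles may deviate from the reference length by the tolerance.
    lengths = list(range(ref_len - tolerance, ref_len + tolerance + 1))
    candidates = segmentations(n_frames, lengths)
    return min(candidates, key=lambda seg: sum(abs(l - ref_len) for l in seg))

best = best_segmentation(10, 5)
```

For a 10-frame buffer and a 5-frame reference cycle, the segmentation into two exact-length cycles has zero error and wins; its frames would then be removed from the buffer and the cycle durations output.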
  • Patent number: 10997467
    Abstract: Weaknesses may be exposed in image object detectors. An image object is overlaid onto a background image at each of a plurality of locations, the background image including a scene in which the image object can be present. A detector model is used to attempt detection of the image object as overlaid onto the background image, the detector model being trained to identify the image object in background images, the detection resulting in background scene detection scores indicative of likelihood of the image object being detected at each of the plurality of locations. A detectability map is displayed overlaid on the background image, the detectability map including, for each of the plurality of locations, a bounding box of the image object illustrated according to the respective detection score.
    Type: Grant
    Filed: October 18, 2019
    Date of Patent: May 4, 2021
    Assignee: Robert Bosch GmbH
    Inventors: Bilal Alsallakh, Nanxiang Li, Lincan Zou, Axel Wendt, Liu Ren
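Building the detectability map described above amounts to sliding the object over a grid of locations and recording the detector's score at each one. In this sketch, `toy_detector` is a hypothetical stand-in for a trained detection model scoring an overlaid object; the grid size and weakness threshold are invented for illustration.

```python
def toy_detector(x, y):
    # Pretend the detector struggles near the image border: score falls
    # off with distance to the nearest edge of a 5x5 grid.
    return min(x, y, 4 - x, 4 - y) / 2.0

def detectability_map(size, detector):
    # Score the object overlaid at every grid location.
    return [[detector(x, y) for x in range(size)] for y in range(size)]

grid = detectability_map(5, toy_detector)
# Locations where the detector would likely miss the object.
weak_spots = [(x, y) for y in range(5) for x in range(5) if grid[y][x] < 0.5]
```

Rendering each location's bounding box shaded by its score turns `grid` into exactly the kind of overlay the abstract describes, making the detector's blind spots visible at a glance.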
  • Publication number: 20210117730
    Abstract: Weaknesses may be exposed in image object detectors. An image object is overlaid onto a background image at each of a plurality of locations, the background image including a scene in which the image objects can be present. A detector model is used to attempt detection of the image object as overlaid onto the background image, the detector model being trained to identify the image object in background images, the detection resulting in background scene detection scores indicative of likelihood of the image object being detected at each of the plurality of locations. A detectability map is displayed overlaid on the background image, the detectability map including, for each of the plurality of locations, a bounding box of the image object illustrated according to the respective detection score.
    Type: Application
    Filed: October 18, 2019
    Publication date: April 22, 2021
    Inventors: Bilal Alsallakh, Nanxiang Li, Lincan Zou, Axel Wendt, Liu Ren