Patents by Inventor Liu Ren

Liu Ren has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220277187
    Abstract: Methods and systems for performing concept-based adversarial generation with steerable and diverse semantics. One system includes an electronic processor configured to access an input image. The electronic processor is also configured to perform concept-based semantic image generation based on the input image. The electronic processor is also configured to perform concept-based semantic adversarial learning using a set of semantic latent spaces generated as part of performing the concept-based semantic image generation. The electronic processor is also configured to generate an adversarial image based on the concept-based semantic adversarial learning. The electronic processor is also configured to test a target model using the adversarial image.
    Type: Application
    Filed: March 1, 2021
    Publication date: September 1, 2022
    Inventors: Zijie Wang, Liang Gou, Wenbin He, Liu Ren
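
As a rough illustration of the kind of pipeline the abstract above describes, the sketch below perturbs a semantic latent code by gradient ascent so that the decoded image degrades a target classifier. The ConceptAutoencoder, the toy target model, and the image size are hypothetical stand-ins, not the patented method.

```python
# Illustrative sketch only (not the patented method): adversarial search in a
# semantic latent space with hypothetical stand-in components.
import torch
import torch.nn as nn

class ConceptAutoencoder(nn.Module):
    """Toy stand-in: maps an image to a small 'semantic' latent code and back."""
    def __init__(self, latent_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 3 * 32 * 32),
                                 nn.Unflatten(1, (3, 32, 32)))

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

def adversarial_latent_search(image, target_model, concept_ae, label, steps=20, lr=0.05):
    """Perturb the semantic latent code so the decoded image degrades the target model."""
    with torch.no_grad():
        z0, _ = concept_ae(image)
    delta = torch.zeros_like(z0, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = concept_ae.dec(z0 + delta)          # decode the perturbed latent code
        loss = -nn.functional.cross_entropy(target_model(adv), label)  # push away from the true label
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return concept_ae.dec(z0 + delta)

# Toy usage: generate an adversarial image and test the target model on it.
target = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
ae = ConceptAutoencoder()
image = torch.rand(1, 3, 32, 32)
adv_image = adversarial_latent_search(image, target, ae, label=torch.tensor([3]))
print(target(adv_image).argmax(dim=1))            # prediction on the adversarial image
```
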
  • Publication number: 20220277192
    Abstract: A visual analytics workflow and system are disclosed for assessing, understanding, and improving deep neural networks. The visual analytics workflow advantageously enables interpretation and improvement of the performance of a neural network model, for example an image-based object detection and classification model, with minimal human-in-the-loop interaction. A data representation component extracts semantic features of input image data, such as colors, brightness, background, rotation, etc. of the images or objects in the images. The input image data are passed through the neural network to obtain prediction results, such as object detection and classification results. An interactive visualization component transforms the prediction results and semantic features into interactive and human-friendly visualizations, in which graphical elements encoding the prediction results are visually arranged depending on the extracted semantic features of the input image data.
    Type: Application
    Filed: February 26, 2021
    Publication date: September 1, 2022
    Inventors: Liang Gou, Lincan Zou, Wenbin He, Liu Ren
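
The data-representation step in the abstract above extracts coarse semantic features and groups predictions by them for visualization. The sketch below shows one simple way such a grouping could be computed; the specific features (mean brightness, dominant color channel) and the binning scheme are assumptions for illustration only, not the publication's feature extractor.

```python
# Illustrative sketch only: simple per-image "semantic" features and a grouping
# of predictions by one of them, the kind of tabular input a visualization uses.
import numpy as np

def semantic_features(image):
    """image: HxWx3 float array in [0, 1]. Returns a dict of coarse features."""
    brightness = float(image.mean())
    dominant_channel = ["red", "green", "blue"][int(image.reshape(-1, 3).mean(axis=0).argmax())]
    return {"brightness": brightness, "dominant_channel": dominant_channel}

def arrange_by_feature(images, predictions, feature="brightness", bins=4):
    """Bucket predictions by a scalar semantic feature for visual layout."""
    values = np.array([semantic_features(img)[feature] for img in images])
    edges = np.linspace(values.min(), values.max() + 1e-9, bins + 1)
    buckets = {i: [] for i in range(bins)}
    for value, pred in zip(values, predictions):
        buckets[min(int(np.digitize(value, edges)) - 1, bins - 1)].append(pred)
    return buckets

rng = np.random.default_rng(0)
imgs = [rng.random((16, 16, 3)) for _ in range(8)]
preds = ["car", "person", "car", "sign", "car", "person", "sign", "car"]
print(arrange_by_feature(imgs, preds))
```
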
  • Publication number: 20220277173
    Abstract: Methods and systems for performing function testing for moveable objects. One system includes an electronic processor configured to access a driving scene including a moveable object. The electronic processor is also configured to perform spatial representation learning on the driving scene. The electronic processor is also configured to generate an adversarial example based on the learned spatial representation. The electronic processor is also configured to retrain a deep learning model using the adversarial example and the driving scene.
    Type: Application
    Filed: March 1, 2021
    Publication date: September 1, 2022
    Inventors: Wenbin He, Liang Gou, Lincan Zou, Liu Ren
  • Patent number: 11430146
    Abstract: A system and method are disclosed having an end-to-end two-stage depth estimation deep learning framework that takes one spherical color image and estimates dense spherical depth maps. The contemplated framework may include a view synthesis (stage 1) and a multi-view stereo matching (stage 2). The combination of the two-stage process may provide the advantage of the geometric constraints from stereo matching to improve depth map quality, without the need of additional input data. It is also contemplated that a spherical warping layer may be used to integrate multiple spherical feature volumes into one cost volume with uniformly sampled inverse depth for the multi-view spherical stereo matching stage. The two-stage spherical depth estimation system and method may be used in various applications including virtual reality, autonomous driving, and robotics.
    Type: Grant
    Filed: October 31, 2020
    Date of Patent: August 30, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Zhixin Yan, Liu Ren, Yuyan Li, Ye Duan
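
The stereo-matching stage described above builds a cost volume over uniformly sampled inverse depth. The sketch below illustrates that sampling plus a common variance-based matching cost; the spherical warping layer itself is not reproduced, and the depth range and feature shapes are hypothetical.

```python
# Illustrative sketch only: uniformly sampled inverse-depth hypotheses and a
# variance-based cost volume; the spherical warping is assumed to have already
# produced per-hypothesis warped features.
import numpy as np

def inverse_depth_hypotheses(d_min=0.5, d_max=50.0, num=32):
    """Sample depths uniformly in inverse depth between d_min and d_max."""
    inv = np.linspace(1.0 / d_max, 1.0 / d_min, num)
    return 1.0 / inv

def cost_volume(ref_feat, warped_feats):
    """Matching cost per depth hypothesis.
    ref_feat: (C, H, W); warped_feats: (D, V, C, H, W), source-view features
    warped into the reference view for D depth hypotheses and V views."""
    D = warped_feats.shape[0]
    ref = np.broadcast_to(ref_feat, (D, 1) + ref_feat.shape)
    stack = np.concatenate([ref, warped_feats], axis=1)   # (D, V+1, C, H, W)
    return stack.var(axis=1).mean(axis=1)                 # (D, H, W): feature variance across views

depths = inverse_depth_hypotheses()
ref = np.random.rand(8, 16, 32)
warped = np.random.rand(len(depths), 3, 8, 16, 32)
print(depths[:3], cost_volume(ref, warped).shape)
```
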
  • Publication number: 20220270322
    Abstract: A system for a virtual visor includes one or more sensors configured to receive input data including images, wherein the one or more sensors include at least a camera utilized in the virtual visor, and a processor in communication with the one or more sensors. The processor is programmed to create a training dataset utilizing at least the input data; utilizing the training dataset, create a classification associated with a shadow mask region and a first face region associated with a first face; segment the shadow mask region and the first face region from the training dataset; and output a shadow representation via the virtual visor utilizing the shadow mask region, the first face region, and a second face region associated with a second face.
    Type: Application
    Filed: February 22, 2021
    Publication date: August 25, 2022
    Inventors: Xinyu HUANG, Benzun Pious Wisely BABU, Liu REN, Jason ZINK
  • Publication number: 20220269892
    Abstract: A virtual visor in a vehicle includes a screen with various regions that can alternate between being transparent and being opaque. A camera captures an image of the driver's face. A processor performs facial recognition or the like based on the captured images, and determines which region of the screen is transitioned from transparent to opaque to block out the sun from shining directly into the driver's eyes while maintaining visibility through the remainder of the screen. Low-power monitors can be run independently on the vehicle, asynchronously with the algorithms and image processing that control which region of the screen is made opaque. The monitors consume less power than operating the virtual visor continuously. Based on trigger conditions detected by the monitors, the image processing, and thus the alternating between opaque and transparent, is ceased to save power until the trigger condition is no longer present.
    Type: Application
    Filed: February 22, 2021
    Publication date: August 25, 2022
    Inventors: Xinyu HUANG, Benzun Pious Wisely BABU, Liu REN
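
A minimal sketch of the gating idea in the abstract above: a cheap monitor polls a trigger condition and only invokes the expensive visor pipeline while the trigger is present. The light threshold, polling period, and stand-in functions are assumptions, not values from the publication.

```python
# Illustrative sketch only: a low-rate monitor gates the expensive visor
# image-processing loop on a hypothetical ambient-light trigger.
import time

LIGHT_TRIGGER_LUX = 2000.0   # hypothetical threshold, not from the publication

def read_ambient_light():
    """Stand-in for a cheap light-sensor read."""
    return 2500.0

def run_full_visor_pipeline():
    """Stand-in for the camera + face-tracking + opacity-selection step."""
    print("updating opaque visor region")

def visor_loop(cycles=3, monitor_period_s=0.1):
    for _ in range(cycles):
        if read_ambient_light() >= LIGHT_TRIGGER_LUX:   # trigger present
            run_full_visor_pipeline()                   # expensive path
        # otherwise the heavy pipeline stays idle to save power
        time.sleep(monitor_period_s)

visor_loop()
```
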
  • Publication number: 20220269122
    Abstract: A helmet and a method and system for controlling a digital visor of a helmet are disclosed herein. The helmet includes a visor screen having a plurality of liquid crystal display (LCD) pixels, with each LCD pixel configured to alter in transparency. The helmet also includes a light sensor configured to detect incident light. The helmet also includes a controller coupled to the visor screen and the light sensor. The controller is configured to alter the transparency of the plurality of LCD pixels based on the incident light. In embodiments, the controller can alter the transparency of the LCD pixels based on the direction and/or intensity of the incident light.
    Type: Application
    Filed: February 22, 2021
    Publication date: August 25, 2022
    Inventors: Xinyu HUANG, Benzun Pious Wisely BABU, Liu REN
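
As a toy illustration of the controller behavior described above, the sketch below darkens the LCD cells nearest to where the incident light maps onto the visor, scaling opacity with light intensity. The grid model, projection, radius, and intensity scale are all assumptions for illustration.

```python
# Illustrative sketch only: per-cell opacity for a visor LCD grid, driven by a
# hypothetical mapping of the incident light direction onto the visor plane.
import numpy as np

def opacity_mask(grid_shape, light_dir_xy, intensity, radius=2, max_intensity=100000.0):
    """grid_shape: (rows, cols) of LCD cells; light_dir_xy: normalized (x, y)
    position in [0, 1]^2 where the incident light maps onto the visor."""
    rows, cols = grid_shape
    cy, cx = light_dir_xy[1] * (rows - 1), light_dir_xy[0] * (cols - 1)
    yy, xx = np.mgrid[0:rows, 0:cols]
    near_sun = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    opacity = min(intensity / max_intensity, 1.0)   # stronger light -> darker cells
    mask = np.zeros(grid_shape)
    mask[near_sun] = opacity
    return mask

print(opacity_mask((6, 10), light_dir_xy=(0.7, 0.3), intensity=80000.0))
```
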
  • Patent number: 11410436
    Abstract: A method for operating a vehicle including a vehicle sensing system includes generating a baseline image model of a cabin of the vehicle based on image data of the cabin of the vehicle generated by an imaging device of the vehicle sensing system, the baseline image model generated before a passenger event, and generating an event image model of the cabin of the vehicle based on image data of the cabin of the vehicle generated by the imaging device, the event image model generated after the passenger event. The method further includes identifying image deviations by comparing the event image model to the baseline image model with a controller of the vehicle sensing system, the image deviations corresponding to differences in the cabin of the vehicle from before the passenger event to after the passenger event, and operating the vehicle based on the identified image deviations.
    Type: Grant
    Filed: March 14, 2019
    Date of Patent: August 9, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Mao Ye, Liu Ren
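
The sketch below approximates the baseline-versus-event comparison in the abstract above with a simple block-wise image difference; the patent builds image models rather than comparing raw pixels, so this is illustrative only and the cell size and threshold are assumptions.

```python
# Illustrative sketch only: flag cabin regions that changed between a pre-event
# baseline image and a post-event image (e.g., an item left behind).
import numpy as np

def image_deviation_regions(baseline, event, threshold=0.2, cell=8):
    """Both images: HxW grayscale float arrays in [0, 1]. Returns a boolean grid
    marking cells whose mean absolute difference exceeds the threshold."""
    diff = np.abs(event - baseline)
    h, w = diff.shape
    grid = diff[: h - h % cell, : w - w % cell].reshape(h // cell, cell, w // cell, cell)
    return grid.mean(axis=(1, 3)) > threshold

baseline = np.zeros((32, 32))
event = baseline.copy()
event[8:16, 8:16] = 1.0                 # simulated object left in the cabin
print(np.argwhere(image_deviation_regions(baseline, event)))
```
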
  • Publication number: 20220218230
    Abstract: A system and method for monitoring a walking activity are disclosed, which have three major components: a pre-processing phase, a step detection phase, and a filtering and post-processing phase. In the pre-processing phase, recorded motion data is received, reoriented with respect to gravity, and low-pass filtered. Next, in the step detection phase, walking step candidates are detected from vertical acceleration peaks and valleys resulting from heel strikes. Finally, in the filtering and post-processing phase, false positive steps are filtered out using a composite of criteria, including time, similarity, and horizontal motion variation. The method is advantageously able to detect most walking activities with accurate time boundaries, while maintaining a very low false positive rate.
    Type: Application
    Filed: January 13, 2021
    Publication date: July 14, 2022
    Inventors: Huan Song, Lincan Zou, Liu Ren
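
A minimal sketch of the step-detection phase described above, using a low-pass filter and peak/valley detection on gravity-aligned vertical acceleration. The cut-off frequency, peak height, and minimum step interval are assumed values, not parameters from the publication, and the filtering/post-processing phase is omitted.

```python
# Illustrative sketch only: peak/valley-based step candidate detection on a
# vertical-acceleration trace; all thresholds are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_steps(vertical_acc, fs=50.0, min_step_interval_s=0.3, min_peak_height=0.5):
    """vertical_acc: gravity-aligned vertical acceleration (m/s^2), gravity removed.
    Returns indices of candidate heel strikes."""
    b, a = butter(2, 3.0 / (fs / 2.0), btype="low")   # low-pass at roughly 3 Hz
    smooth = filtfilt(b, a, vertical_acc)
    peaks, _ = find_peaks(smooth, height=min_peak_height,
                          distance=int(min_step_interval_s * fs))
    valleys, _ = find_peaks(-smooth, distance=int(min_step_interval_s * fs))
    # keep peaks that are followed by a valley, a crude proxy for a heel strike
    return [p for p in peaks if np.any(valleys > p)]

t = np.arange(0, 10, 1 / 50.0)
acc = 1.2 * np.sin(2 * np.pi * 1.8 * t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
print(len(detect_steps(acc)), "candidate steps in 10 s")
```
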
  • Publication number: 20220221482
    Abstract: A system and method for monitoring performance of a repeated activity is described. The system comprises a motion sensing system and a processing system. The motion sensing system includes sensors configured to measure or track motions corresponding to a repeated activity. The processing system is configured to process motion data received from the motion sensing system to recognize and measure cycle durations in the repeated activity. In contrast to conventional systems and methods, which may work for repeated activities having a high level of standardization, the system advantageously enables recognition and monitoring of cycle durations for a repeated activity, even when significant abnormal motions are present in each cycle. Thus, the system can be utilized in a significantly broader set of applications, compared to conventional systems and methods.
    Type: Application
    Filed: January 14, 2021
    Publication date: July 14, 2022
    Inventors: Lincan Zou, Huan Song, Liu Ren
  • Patent number: 11373356
    Abstract: A method for generating graphics of a three-dimensional (3D) virtual environment includes: receiving, with a processor, a first camera position in the 3D virtual environment and a first viewing direction in the 3D virtual environment; receiving, with the processor, weather data including first precipitation information corresponding to a first geographic region corresponding to the first camera position in the 3D virtual environment; defining, with the processor, a bounding geometry at a first position that is a first distance from the first camera position in the first viewing direction, the bounding geometry being dimensioned so as to cover a field of view from the first camera position in the first viewing direction; and rendering, with the processor, a 3D particle system in the 3D virtual environment depicting precipitation only within the bounding geometry, the 3D particle system having features depending on the first precipitation information.
    Type: Grant
    Filed: March 28, 2018
    Date of Patent: June 28, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Zeng Dai, Liu Ren, Lincan Zou
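
To make the bounding-geometry idea above concrete, the sketch below places an axis-aligned box a fixed distance ahead of the camera along the viewing direction and spawns precipitation particles only inside it. The box size, distance, and particle density are illustrative assumptions, not values from the patent.

```python
# Illustrative sketch only: spawn rain particles inside a bounding box placed
# ahead of the camera along the viewing direction.
import numpy as np

def precipitation_particles(camera_pos, view_dir, distance=10.0,
                            box_size=(20.0, 10.0, 20.0), density_per_m3=0.05, rng=None):
    rng = rng or np.random.default_rng()
    view_dir = np.asarray(view_dir, dtype=float)
    center = np.asarray(camera_pos, dtype=float) + distance * view_dir / np.linalg.norm(view_dir)
    half = np.asarray(box_size) / 2.0
    count = int(np.prod(box_size) * density_per_m3)   # particle count scales with intensity
    return center + rng.uniform(-half, half, size=(count, 3))

particles = precipitation_particles(camera_pos=(0, 1.5, 0), view_dir=(0, 0, 1))
print(particles.shape)
```
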
  • Publication number: 20220138510
    Abstract: A method to interpret a deep neural network includes receiving a set of images, analyzing the set of images via the deep neural network, selecting an internal layer of the deep neural network, extracting neuron activations at the internal layer, factorizing the neuron activations via a matrix factorization algorithm to select prototypes and generate weights for each of the selected prototypes, replacing the neuron activations of the internal layer with selected prototypes and weights for each of the selected prototypes, receiving a second set of images, and classifying the second set of images via the deep neural network using the weighted prototypes without the internal layer.
    Type: Application
    Filed: October 25, 2021
    Publication date: May 5, 2022
    Inventors: Zeng DAI, Panpan XU, Liu REN, Subhajit DAS
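
The abstract above factorizes internal-layer activations into prototypes and per-input weights. One common way to do this is non-negative matrix factorization, sketched below with scikit-learn; the publication's exact factorization algorithm, layer choice, and dimensions are assumptions here.

```python
# Illustrative sketch only: express each input's internal-layer activations as a
# weighted mix of a few "prototype" activation patterns via NMF.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
activations = rng.random((100, 256))          # 100 images x 256 neurons at the chosen layer

nmf = NMF(n_components=8, init="nndsvda", max_iter=500, random_state=0)
weights = nmf.fit_transform(activations)      # (100, 8): per-image prototype weights
prototypes = nmf.components_                  # (8, 256): prototype activation patterns

# An image's activation row is approximated by weights @ prototypes, so downstream
# layers could be fed the prototype reconstruction instead of the raw activations.
reconstruction = weights @ prototypes
print(np.linalg.norm(activations - reconstruction) / np.linalg.norm(activations))
```
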
  • Publication number: 20220138978
    Abstract: A system and method are disclosed having an end-to-end two-stage depth estimation deep learning framework that takes one spherical color image and estimates dense spherical depth maps. The contemplated framework may include a view synthesis (stage 1) and a multi-view stereo matching (stage 2). The combination of the two-stage process may provide the advantage of the geometric constraints from stereo matching to improve depth map quality, without the need of additional input data. It is also contemplated that a spherical warping layer may be used to integrate multiple spherical feature volumes into one cost volume with uniformly sampled inverse depth for the multi-view spherical stereo matching stage. The two-stage spherical depth estimation system and method may be used in various applications including virtual reality, autonomous driving, and robotics.
    Type: Application
    Filed: October 31, 2020
    Publication date: May 5, 2022
    Inventors: Zhixin YAN, Liu REN, Yuyan LI, Ye DUAN
  • Publication number: 20220138511
    Abstract: A method may include receiving a set of images; analyzing the images; selecting an internal layer; extracting neuron activations; factorizing the neuron activations via a matrix factorization algorithm to select prototypes and generate weights for each of the selected prototypes; replacing the neuron activations of the internal layer with the selected prototypes and the weights for the selected prototypes; receiving a second set of images; classifying the second set of images using the prototypes and weights; displaying the second set of images, selected prototypes, and weights; displaying predicted results and ground truth for the second set of images; providing error images based on the predicted results and ground truth; identifying error prototypes of the selected prototypes associated with the error images; ranking error weights of the error prototypes; and outputting a new image class based on the error prototypes having one of the top-ranked error weights.
    Type: Application
    Filed: October 25, 2021
    Publication date: May 5, 2022
    Inventors: Panpan XU, Liu REN, Zeng DAI, Junhan ZHAO
  • Publication number: 20220138977
    Abstract: A system and method are disclosed having an end-to-end two-stage depth estimation deep learning framework that takes one spherical color image and estimates dense spherical depth maps. The contemplated framework may include a view synthesis (stage 1) and a multi-view stereo matching (stage 2). The combination of the two-stage process may provide the advantage of the geometric constraints from stereo matching to improve depth map quality, without the need of additional input data. It is also contemplated that a spherical warping layer may be used to integrate multiple spherical feature volumes into one cost volume with uniformly sampled inverse depth for the multi-view spherical stereo matching stage. The two-stage spherical depth estimation system and method may be used in various applications including virtual reality, autonomous driving, and robotics.
    Type: Application
    Filed: October 31, 2020
    Publication date: May 5, 2022
    Inventors: Zhixin YAN, Liu REN, Yuyan LI, Ye DUAN
  • Patent number: 11315266
    Abstract: Depth perception has become of increased interest in the image community due to the increasing usage of deep neural networks for the generation of dense depth maps. The applications of depth perception estimation, however, may still be limited due to the need for a large amount of dense ground-truth depth for training. It is contemplated that a self-supervised control strategy may be developed for estimating depth maps using color images and data provided by a sensor system (e.g., sparse LiDAR data). Such a self-supervised control strategy may leverage superpixels (i.e., groups of pixels that share common characteristics, for instance, pixel intensity) as local planar regions to regularize surface normal derivatives from estimated depth together with the photometric loss. The control strategy may be operable to produce a dense depth map that does not require dense ground-truth supervision.
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: April 26, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Zhixin Yan, Liang Mi, Liu Ren
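
The regularizer described above constrains surface normals derived from estimated depth to behave as local planes within superpixels. The sketch below shows one standard way to obtain per-pixel normals from a depth map's spatial gradients; the camera intrinsics and the approximation used are assumptions, and the superpixel grouping and photometric loss are omitted.

```python
# Illustrative sketch only: approximate per-pixel surface normals from depth
# gradients, the kind of quantity a planar (superpixel) regularizer can constrain.
import numpy as np

def normals_from_depth(depth, fx=500.0, fy=500.0):
    """depth: HxW array. Returns HxWx3 unit normals from depth gradients
    (a common approximation, not the publication's exact derivation)."""
    dz_dv, dz_du = np.gradient(depth)               # derivatives along rows, cols
    # Unnormalized normal built from the depth gradients, up to a scale.
    normal = np.dstack([-dz_du * fx, -dz_dv * fy, np.ones_like(depth)])
    return normal / np.linalg.norm(normal, axis=2, keepdims=True)

plane = np.fromfunction(lambda v, u: 2.0 + 0.01 * u, (64, 64))   # tilted planar depth map
n = normals_from_depth(plane)
print(n[32, 32])        # roughly constant normal across the planar region
```
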
  • Patent number: 11301724
    Abstract: A system includes a camera configured to obtain image information from objects. The system also includes a processor in communication with the camera and programmed to receive an input data including the image information, encode the input via an encoder, obtain a latent variable defining an attribute of the input data, generate a sequential reconstruction of the input data utilizing at least the latent variable and an adversarial noise, obtain a residual between the input data and the sequential reconstruction utilizing a comparison of at least the input and the reconstruction to learn a mean shift in latent space, and output a mean shift indicating a test result of the input compared to the adversarial noise based on the comparison.
    Type: Grant
    Filed: April 30, 2020
    Date of Patent: April 12, 2022
    Assignee: ROBERT BOSCH GMBH
    Inventors: Liang Gou, Lincan Zou, Axel Wendt, Liu Ren
  • Patent number: 11250279
    Abstract: Systems, methods, and non-transitory computer-readable media for detecting small objects in a roadway scene. A camera is coupled to a vehicle and configured to capture a roadway scene image. An electronic controller is coupled to the camera and configured to receive the roadway scene image from the camera. The electronic controller is also configured to generate a Generative Adversarial Network (GAN) model using the roadway scene image. The electronic controller is further configured to determine a distribution indicating how likely each location in the roadway scene image can contain a roadway object using the GAN model. The electronic controller is also configured to determine a plurality of locations in the roadway scene image by sampling the distribution. The electronic controller is further configured to detect the roadway object at one of the plurality of locations in the roadway scene image.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: February 15, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Eman Hassan, Nanxiang Li, Liu Ren
  • Patent number: 11224359
    Abstract: Abnormal motions are detected in sensor data collected with respect to performance of repetitive human activities. An autoencoder network model is trained based on a set of standard activity. Repetitive activity is extracted from sensor data. A first score is generated indicative of distance of a repetition of the repetitive activity from the standard activity. The repetitive activity is used to retrain the autoencoder network model, using weights of the autoencoder network model as initial values, the weights being based on the training of the autoencoder network model using the set of standard activity. A second score is generated indicative of whether the repetition is an outlier as compared to other repetitions of the repetitive activity. A final score is generated based on a weighting of the first score and the second score.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: January 18, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Huan Song, Liu Ren, Lincan Zou
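
The sketch below imitates the two-score scheme in the abstract above with a tiny dense autoencoder: score one is reconstruction error under a model trained on standard repetitions, score two is an outlier score among repetitions after fine-tuning from those weights, and the final score blends the two. The architecture, feature layout, and 50/50 weighting are assumptions for illustration.

```python
# Illustrative sketch only: two-score abnormal-motion scoring with a toy autoencoder.
import numpy as np
import torch
import torch.nn as nn

def make_autoencoder(dim=30):
    return nn.Sequential(nn.Linear(dim, 8), nn.ReLU(), nn.Linear(8, dim))

def train(model, data, epochs=200, lr=1e-2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(model(data), data)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

def recon_errors(model, data):
    with torch.no_grad():
        return ((model(data) - data) ** 2).mean(dim=1).numpy()

rng = np.random.default_rng(0)
standard = torch.tensor(rng.normal(0, 1, (64, 30)), dtype=torch.float32)
query = torch.tensor(rng.normal(0, 1, (16, 30)), dtype=torch.float32)
query[0] += 4.0                                   # one abnormal repetition

base = train(make_autoencoder(), standard)        # trained on standard activity
score1 = recon_errors(base, query)                # distance from the standard model

tuned = train(base, query, epochs=50)             # retrain, initialized from base weights
err = recon_errors(tuned, query)
score2 = (err - err.mean()) / (err.std() + 1e-9)  # outlier score among repetitions

final = 0.5 * score1 / score1.max() + 0.5 * score2
print(final.argmax())                             # index 0 should rank as most abnormal
```
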
  • Patent number: 11199561
    Abstract: Motion windows are generated from a query activity sequence. For each of the motion windows in the query activity sequence, a corresponding motion window in a reference activity sequence is found. One or more difference calculations are performed between the motion windows of the query activity sequence and the corresponding motion windows in the reference activity sequence based on at least one criterion associated with physical meaning. Abnormality of the motion windows is determined based on the one or more difference calculations. A standardized evaluation result of the query activity sequence is output based on the detected abnormal motion windows in the query activity sequence.
    Type: Grant
    Filed: December 31, 2018
    Date of Patent: December 14, 2021
    Assignee: Robert Bosch GmbH
    Inventors: Lincan Zou, Liu Ren, Huan Song, Cheng Zhang
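
As a rough illustration of the windowed comparison described above, the sketch below splits query and reference sequences into fixed-length motion windows and flags windows whose difference under simple physically meaningful criteria exceeds a threshold. The specific criteria (range of motion, signal energy), window length, and threshold are illustrative assumptions, not the patent's criteria.

```python
# Illustrative sketch only: window-by-window comparison of a query activity
# sequence against a reference sequence.
import numpy as np

def windows(seq, length=50):
    n = len(seq) // length
    return seq[: n * length].reshape(n, length)

def window_difference(q, r):
    range_diff = abs(np.ptp(q) - np.ptp(r))           # range-of-motion criterion
    energy_diff = abs((q ** 2).mean() - (r ** 2).mean())
    return range_diff + energy_diff

def abnormal_windows(query_seq, reference_seq, threshold=0.5):
    qw, rw = windows(query_seq), windows(reference_seq)
    scores = [window_difference(q, r) for q, r in zip(qw, rw)]
    return [i for i, s in enumerate(scores) if s > threshold], scores

rng = np.random.default_rng(1)
reference = np.sin(np.linspace(0, 20 * np.pi, 500))
query = reference + 0.05 * rng.standard_normal(500)
query[100:150] *= 3.0                                  # exaggerated motion in window 2
flagged, _ = abnormal_windows(query, reference)
print(flagged)                                         # expect window index 2
```
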