Patents by Inventor Liu Ren

Liu Ren has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220218230
    Abstract: A system and method for monitoring a walking activity are disclosed, comprising three major phases: a pre-processing phase, a step detection phase, and a filtering and post-processing phase. In the pre-processing phase, recorded motion data is received, reoriented with respect to gravity, and low-pass filtered. Next, in the step detection phase, walking step candidates are detected from vertical acceleration peaks and valleys resulting from heel strikes. Finally, in the filtering and post-processing phase, false-positive steps are filtered out using a composite of criteria, including time, similarity, and horizontal motion variation. The method is advantageously able to detect most walking activities with accurate time boundaries, while maintaining a very low false-positive rate.
    Type: Application
    Filed: January 13, 2021
    Publication date: July 14, 2022
    Inventors: Huan Song, Lincan Zou, Liu Ren
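
The three phases described in the abstract above can be sketched roughly as follows. This is a minimal illustration, not the patented method: the filter, threshold, and gap values are assumptions, and only the time criterion of the filtering phase is shown.

```python
import numpy as np

def detect_steps(acc_z, fs=50.0, min_gap=0.3, thresh=1.0):
    """Detect walking-step candidates as peaks in vertical acceleration.

    acc_z   -- gravity-aligned vertical acceleration, gravity removed
    fs      -- sampling rate (Hz); min_gap -- minimum seconds between steps
    thresh  -- minimum peak height to count as a heel strike
    All parameter values are illustrative, not taken from the patent.
    """
    # Pre-processing phase: a simple moving-average low-pass filter.
    smooth = np.convolve(acc_z, np.ones(5) / 5.0, mode="same")

    # Step detection phase: local maxima above the threshold.
    steps, last = [], None
    for i in range(1, len(smooth) - 1):
        is_peak = smooth[i - 1] < smooth[i] >= smooth[i + 1]
        if is_peak and smooth[i] > thresh:
            # Filtering phase (time criterion only): drop peaks that
            # follow the previously accepted step too closely.
            if last is None or (i - last) / fs >= min_gap:
                steps.append(i)
                last = i
    return steps

# Synthetic 2 Hz gait sampled at 50 Hz for 5 s -> ten heel strikes.
t = np.arange(0, 5, 1 / 50.0)
signal = 2.0 * np.sin(2 * np.pi * 2.0 * t)
print(len(detect_steps(signal)))
```

The patented method additionally filters candidates by similarity and horizontal motion variation, which the sketch omits.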
  • Patent number: 11373356
    Abstract: A method for generating graphics of a three-dimensional (3D) virtual environment includes: receiving, with a processor, a first camera position in the 3D virtual environment and a first viewing direction in the 3D virtual environment; receiving, with the processor, weather data including first precipitation information corresponding to a first geographic region corresponding to the first camera position in the 3D virtual environment; defining, with the processor, a bounding geometry at a first position that is a first distance from the first camera position in the first viewing direction, the bounding geometry being dimensioned so as to cover a field of view from the first camera position in the first viewing direction; and rendering, with the processor, a 3D particle system in the 3D virtual environment depicting precipitation only within the bounding geometry, the 3D particle system having features depending on the first precipitation information.
    Type: Grant
    Filed: March 28, 2018
    Date of Patent: June 28, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Zeng Dai, Liu Ren, Lincan Zou
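
Sizing a geometry "so as to cover a field of view" at a given distance is straightforward projective geometry. A hedged sketch for a flat bounding quad (the patent's bounding geometry may be shaped differently; the margin and parameter names are assumptions):

```python
import math

def bounding_quad(distance, fov_h_deg, fov_v_deg, margin=1.1):
    """Size a quad placed `distance` ahead of the camera so that it spans
    the horizontal and vertical fields of view, padded by `margin`.
    width  = 2 * d * tan(fov_h / 2), and likewise for height."""
    w = 2 * distance * math.tan(math.radians(fov_h_deg) / 2) * margin
    h = 2 * distance * math.tan(math.radians(fov_v_deg) / 2) * margin
    return w, h

# A 90 x 60 degree frustum, quad placed 10 m ahead of the camera.
w, h = bounding_quad(10.0, 90.0, 60.0)
print(round(w, 2), round(h, 2))
```

Restricting the particle system to such a quad keeps precipitation rendering cost independent of scene extent.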
  • Publication number: 20220138510
    Abstract: A method for interpreting a deep neural network includes receiving a set of images, analyzing the set of images via a deep neural network, selecting an internal layer of the deep neural network, extracting neuron activations at the internal layer, factorizing the neuron activations via a matrix factorization algorithm to select prototypes and generate weights for each of the selected prototypes, replacing the neuron activations of the internal layer with the selected prototypes and weights for each of the selected prototypes, receiving a second set of images, and classifying the second set of images via the deep neural network using the weighted prototypes without the internal layer.
    Type: Application
    Filed: October 25, 2021
    Publication date: May 5, 2022
    Inventors: Zeng DAI, Panpan XU, Liu REN, Subhajit DAS
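
The factorization step above can be illustrated with non-negative matrix factorization (NMF), a common choice for decomposing activations into interpretable parts; the patent does not specify the algorithm, so NMF and all sizes here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for neuron activations at an internal layer:
# rows are images, columns are flattened channel activations.
activations = rng.random((8, 16))

def factorize_activations(A, k=3, iters=200):
    """NMF A ~= W @ H via multiplicative updates.

    Rows of H act as the 'prototypes'; W holds each image's weights for
    those prototypes, mirroring the factorization step in the abstract.
    """
    m, n = A.shape
    W = rng.random((m, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + 1e-9)
        W *= (A @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

W, H = factorize_activations(activations)
error = np.linalg.norm(activations - W @ H) / np.linalg.norm(activations)
print(round(error, 3))
```

In practice a library implementation such as `sklearn.decomposition.NMF` would replace the hand-rolled update loop.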
  • Publication number: 20220138511
    Abstract: A method may include receiving a set of images, analyzing the images, selecting an internal layer, extracting neuron activations, factorizing the neuron activations via a matrix factorization algorithm to select prototypes and generate weights for each of the selected prototypes, replacing the neuron activations of the internal layer with the selected prototypes and the weights for the selected prototypes, receiving a second set of images, classifying the second set of images using the prototypes and weights, displaying the second set of images, the selected prototypes, and the weights, displaying predicted results and ground truth for the second set of images, providing error images based on the predicted results and the ground truth, identifying error prototypes of the selected prototypes associated with the error images, ranking error weights of the error prototypes, and outputting a new image class based on the error prototypes having the top-ranked error weights.
    Type: Application
    Filed: October 25, 2021
    Publication date: May 5, 2022
    Inventors: Panpan XU, Liu REN, Zeng DAI, Junhan ZHAO
  • Publication number: 20220138978
    Abstract: A system and method is disclosed having an end-to-end two-stage depth estimation deep learning framework that takes one spherical color image and estimates dense spherical depth maps. The contemplated framework may include a view synthesis stage (stage 1) and a multi-view stereo matching stage (stage 2). The two-stage process may provide the advantage of geometric constraints from stereo matching to improve depth map quality, without the need for additional input data. It is also contemplated that a spherical warping layer may be used to integrate multiple spherical feature volumes into one cost volume with uniformly sampled inverse depth for the multi-view spherical stereo matching stage. The two-stage spherical depth estimation system and method may be used in various applications including virtual reality, autonomous driving, and robotics.
    Type: Application
    Filed: October 31, 2020
    Publication date: May 5, 2022
    Inventors: Zhixin YAN, Liu REN, Yuyan LI, Ye DUAN
  • Publication number: 20220138977
    Abstract: A system and method is disclosed having an end-to-end two-stage depth estimation deep learning framework that takes one spherical color image and estimates dense spherical depth maps. The contemplated framework may include a view synthesis stage (stage 1) and a multi-view stereo matching stage (stage 2). The two-stage process may provide the advantage of geometric constraints from stereo matching to improve depth map quality, without the need for additional input data. It is also contemplated that a spherical warping layer may be used to integrate multiple spherical feature volumes into one cost volume with uniformly sampled inverse depth for the multi-view spherical stereo matching stage. The two-stage spherical depth estimation system and method may be used in various applications including virtual reality, autonomous driving, and robotics.
    Type: Application
    Filed: October 31, 2020
    Publication date: May 5, 2022
    Inventors: Zhixin YAN, Liu REN, Yuyan LI, Ye DUAN
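
The "uniformly sampled inverse depth" mentioned in the two abstracts above is a standard trick in stereo cost volumes: sampling 1/depth uniformly allocates more depth planes near the camera, where disparity changes fastest. A minimal sketch (plane counts and depth range are assumptions):

```python
import numpy as np

def inverse_depth_samples(d_min=0.5, d_max=50.0, n=8):
    """Sample n depth hypotheses with uniform spacing in inverse depth,
    from far (d_max) to near (d_min)."""
    inv = np.linspace(1.0 / d_max, 1.0 / d_min, n)
    return 1.0 / inv

depths = inverse_depth_samples()
print(depths)
```

Each depth hypothesis then defines one slice of the matching cost volume into which the spherical feature volumes are warped.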
  • Patent number: 11315266
    Abstract: Depth perception has become of increased interest in the image community due to the increasing usage of deep neural networks for the generation of dense depth maps. The applications of depth perception estimation, however, may still be limited due to the needs of a large amount of dense ground-truth depth for training. It is contemplated that a self-supervised control strategy may be developed for estimating depth maps using color images and data provided by a sensor system (e.g., sparse LiDAR data). Such a self-supervised control strategy may leverage superpixels (i.e., group of pixels that share common characteristics, for instance, pixel intensity) as local planar regions to regularize surface normal derivatives from estimated depth together with the photometric loss. The control strategy may be operable to produce a dense depth map that does not require a dense ground-truth supervision.
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: April 26, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Zhixin Yan, Liang Mi, Liu Ren
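
The superpixel regularizer described above treats each superpixel as a locally planar region whose surface normals, derived from the estimated depth, should agree. A hedged sketch of that idea (camera intrinsics are ignored for brevity, and the loss form is an assumption, not the patent's exact formulation):

```python
import numpy as np

def normals_from_depth(depth):
    """Approximate surface normals from a depth map via finite differences."""
    dzdx = np.gradient(depth, axis=1)
    dzdy = np.gradient(depth, axis=0)
    n = np.stack([-dzdx, -dzdy, np.ones_like(depth)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def planar_consistency_loss(depth, superpixel_mask):
    """Penalize normal variation inside one superpixel, treating it as a
    locally planar region (the regularization idea from the abstract)."""
    n = normals_from_depth(depth)[superpixel_mask]
    return float(np.mean((n - n.mean(axis=0)) ** 2))

# A perfectly planar depth ramp incurs (near-)zero loss.
plane = np.tile(np.linspace(1.0, 2.0, 10), (10, 1))
mask = np.ones((10, 10), dtype=bool)
print(planar_consistency_loss(plane, mask))
```

In the patented training strategy this geometric term would be combined with a photometric loss and sparse LiDAR supervision.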
  • Patent number: 11301724
    Abstract: A system includes a camera configured to obtain image information from objects. The system also includes a processor in communication with the camera and programmed to receive an input data including the image information, encode the input via an encoder, obtain a latent variable defining an attribute of the input data, generate a sequential reconstruction of the input data utilizing at least the latent variable and an adversarial noise, obtain a residual between the input data and the sequential reconstruction utilizing a comparison of at least the input and the reconstruction to learn a mean shift in latent space, and output a mean shift indicating a test result of the input compared to the adversarial noise based on the comparison.
    Type: Grant
    Filed: April 30, 2020
    Date of Patent: April 12, 2022
    Assignee: ROBERT BOSCH GMBH
    Inventors: Liang Gou, Lincan Zou, Axel Wendt, Liu Ren
  • Patent number: 11250279
    Abstract: Systems, methods, and non-transitory computer-readable media for detecting small objects in a roadway scene. A camera is coupled to a vehicle and configured to capture a roadway scene image. An electronic controller is coupled to the camera and configured to receive the roadway scene image from the camera. The electronic controller is also configured to generate a Generative Adversarial Network (GAN) model using the roadway scene image. The electronic controller is further configured to determine a distribution indicating how likely each location in the roadway scene image is to contain a roadway object using the GAN model. The electronic controller is also configured to determine a plurality of locations in the roadway scene image by sampling the distribution. The electronic controller is further configured to detect the roadway object at one of the plurality of locations in the roadway scene image.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: February 15, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Eman Hassan, Nanxiang Li, Liu Ren
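
The "sampling the distribution" step above can be sketched as drawing pixel locations from a per-pixel likelihood map. The GAN that would produce the map is out of scope here; the map, the sample count, and the seed are all illustrative.

```python
import numpy as np

def sample_locations(prob_map, k=5, seed=0):
    """Draw k distinct pixel locations (row, col) from a likelihood map by
    sampling the normalized distribution over pixels."""
    rng = np.random.default_rng(seed)
    p = prob_map.ravel() / prob_map.sum()
    idx = rng.choice(p.size, size=k, replace=False, p=p)
    return np.column_stack(np.unravel_index(idx, prob_map.shape))

# Likelihood concentrated on rows 2-3: every sampled location lands there.
likelihood = np.zeros((6, 8))
likelihood[2:4, :] = 1.0
locs = sample_locations(likelihood)
print(locs)
```

The detector then only needs to be evaluated at the sampled candidate locations rather than densely over the image.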
  • Patent number: 11224359
    Abstract: Abnormal motions are detected in sensor data collected with respect to performance of repetitive human activities. An autoencoder network model is trained based on a set of standard activity data. Repetitive activity is extracted from sensor data. A first score is generated indicative of the distance of a repetition of the repetitive activity from the standard activity. The repetitive activity is used to retrain the autoencoder network model, using weights of the autoencoder network model as initial values, the weights being based on the training of the autoencoder network model using the set of standard activity data. A second score is generated indicative of whether the repetition is an outlier as compared to other repetitions of the repetitive activity. A final score is generated based on a weighting of the first score and the second score.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: January 18, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Huan Song, Liu Ren, Lincan Zou
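
The two-score weighting above can be sketched as follows. The per-repetition reconstruction errors stand in for the autoencoder outputs, and the outlier measure and mixing weight are assumptions, not the patent's formulas.

```python
import numpy as np

def anomaly_scores(recon_errors, alpha=0.6):
    """Combine two cues into one score per repetition:
      score1 -- reconstruction error vs. the standard-activity model
      score2 -- deviation of each repetition from the other repetitions
    alpha is an assumed mixing weight."""
    e = np.asarray(recon_errors, dtype=float)
    score1 = e                          # distance from standard activity
    score2 = np.abs(e - np.median(e))   # outlier-ness among repetitions
    return alpha * score1 + (1 - alpha) * score2

reps = [0.11, 0.09, 0.10, 0.55, 0.12]   # the 4th repetition is abnormal
scores = anomaly_scores(reps)
print(int(np.argmax(scores)))           # index of the most abnormal rep
```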
  • Patent number: 11199561
    Abstract: Motion windows are generated from a query activity sequence. For each of the motion windows in the query activity sequence, a corresponding motion window in the reference activity sequence is found. One or more difference calculations are performed between the motion windows of the query activity sequence and the corresponding motion windows in the reference activity sequence based on at least one criterion associated with physical meaning. Abnormality of the motion windows is determined based on the one or more difference calculations. A standardized evaluation result of the query activity sequence is output based on the detected abnormal motion windows in the query activity sequence.
    Type: Grant
    Filed: December 31, 2018
    Date of Patent: December 14, 2021
    Assignee: Robert Bosch GmbH
    Inventors: Lincan Zou, Liu Ren, Huan Song, Cheng Zhang
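
The window-wise comparison above can be sketched with same-index alignment and a single difference metric; the patent's window matching and physically meaningful criteria are richer than this mean-absolute-error stand-in.

```python
import numpy as np

def window_differences(query, reference, win=20):
    """Split the query sequence into motion windows, pair each with the
    same-index window of the reference sequence, and score the pair's
    difference (mean absolute error, an assumed metric)."""
    n = min(len(query), len(reference)) // win
    diffs = []
    for k in range(n):
        q = np.asarray(query[k * win:(k + 1) * win], dtype=float)
        r = np.asarray(reference[k * win:(k + 1) * win], dtype=float)
        diffs.append(float(np.mean(np.abs(q - r))))
    return diffs

ref = np.zeros(60)
qry = np.concatenate([np.zeros(40), np.ones(20) * 0.8])  # last window abnormal
diffs = window_differences(qry, ref)
print([round(d, 2) for d in diffs])
```

Windows whose difference exceeds a threshold would be flagged as abnormal and summarized in the evaluation result.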
  • Patent number: 11181379
    Abstract: A system and method for generating a tracking state for a device includes synchronizing measurement data from exteroceptive sensors and an inertial measurement unit (IMU). A processing unit is programmed to offset one of the measurement signals by a time offset that minimizes a total error between a change in rotation of the device predicted by the exteroceptive sensor data over a time interval defined by an exteroceptive sensor sampling rate and a change in rotation of the device predicted by the IMU sensor data over the time interval.
    Type: Grant
    Filed: September 12, 2019
    Date of Patent: November 23, 2021
    Assignee: Robert Bosch GmbH
    Inventors: Benzun Pious Wisely Babu, Mao Ye, Liu Ren
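
The time-offset estimation above can be illustrated as a grid search over candidate shifts that minimizes the total error between the two sensors' predicted rotation. This 1-D, integer-shift sketch is a simplification of the patented synchronization; signal names and rates are assumptions.

```python
import numpy as np

def estimate_offset(cam_yaw_rate, imu_yaw_rate, max_shift=10):
    """Find the sample shift of the IMU stream that best matches the
    rotation rate observed by the exteroceptive sensor (camera)."""
    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(imu_yaw_rate, s)
        # Ignore wrapped-around samples at the edges.
        valid = slice(max_shift, len(shifted) - max_shift)
        err = np.sum((cam_yaw_rate[valid] - shifted[valid]) ** 2)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

t = np.linspace(0, 2 * np.pi, 200)
imu = np.sin(t)                 # IMU-predicted yaw rate
cam = np.roll(imu, 4)           # camera stream lags the IMU by 4 samples
print(estimate_offset(cam, imu))
```

With the offset recovered, one stream is shifted before fusing the measurements into the tracking state.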
  • Patent number: 11176422
    Abstract: A computer-program product storing instructions which, when executed by a computer, cause the computer to receive an input data, encode the input via an encoder, during a first sequence, obtain a first latent variable defining an attribute of the input data, generate a sequential reconstruction of the input data utilizing a decoder and at least the first latent variable, obtain a residual between the input data and the reconstruction utilizing a comparison of at least the first latent variable, and output a final reconstruction of the input data utilizing a plurality of residuals from a plurality of sequences.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: November 16, 2021
    Assignee: ROBERT BOSCH GMBH
    Inventors: Shabnam Ghaffarzadegan, Nanxiang Li, Liu Ren
  • Publication number: 20210342647
    Abstract: A system includes a camera configured to obtain image information from objects. The system also includes a processor in communication with the camera and programmed to receive an input data including the image information, encode the input via an encoder, obtain a latent variable defining an attribute of the input data, generate a sequential reconstruction of the input data utilizing at least the latent variable and an adversarial noise, obtain a residual between the input data and the sequential reconstruction utilizing a comparison of at least the input and the reconstruction to learn a mean shift in latent space, and output a mean shift indicating a test result of the input compared to the adversarial noise based on the comparison.
    Type: Application
    Filed: April 30, 2020
    Publication date: November 4, 2021
    Inventors: Liang GOU, Lincan ZOU, Axel WENDT, Liu REN
  • Publication number: 20210304352
    Abstract: An artificial neural network is trained to produce spatial labelling for a three-dimensional environment based on image data. A two-dimensional image representation is produced of omni-direction image data captured by one or more cameras of the three-dimensional environment. The artificial neural network is applied using the two-dimensional image representation as input and producing a first predicted label as output. A rotated two-dimensional image is generated by shifting image pixels of the two-dimensional image representation in a horizontal direction. The artificial neural network is then applied again using the rotated two-dimensional image as input and producing a second predicted label as its output. The artificial neural network is trained based at least in part on a difference between the first predicted label and the second predicted label.
    Type: Application
    Filed: March 31, 2020
    Publication date: September 30, 2021
    Inventors: Zhixin Yan, Yuyan Li, Liu Ren
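
The self-supervision signal in the abstract above relies on a simple fact about panoramas: a horizontal pixel shift is a rotation of the scene, so predictions before and after the shift should agree once un-shifted. A minimal sketch (the network is replaced by an arbitrary per-image function, and the squared-difference loss is an assumption):

```python
import numpy as np

def rotation_consistency_loss(predict_fn, pano, shift):
    """Compare predictions on a panorama and on its horizontally rotated
    copy after undoing the rotation. `predict_fn` maps an H x W image to
    an H x W label map (assumed shape)."""
    rotated = np.roll(pano, shift, axis=1)   # rotate panorama about yaw
    label_a = predict_fn(pano)
    label_b = np.roll(predict_fn(rotated), -shift, axis=1)
    return float(np.mean((label_a - label_b) ** 2))

pano = np.random.default_rng(1).random((4, 16))
equivariant = lambda img: img * 2.0          # shift-equivariant "network"
print(rotation_consistency_loss(equivariant, pano, shift=5))
```

A network that is not shift-equivariant incurs a positive loss, which is exactly the training signal the method exploits.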
  • Patent number: 11093753
    Abstract: A visual SLAM system comprises a plurality of keyframes including a keyframe, a current keyframe, and a previous keyframe, a dual dense visual odometry configured to provide a pairwise transformation estimate between two of the plurality of keyframes, a frame generator configured to create a keyframe graph, a loop constraint evaluator configured to add a constraint to the keyframe graph, and a graph optimizer configured to produce a map with a trajectory.
    Type: Grant
    Filed: June 26, 2017
    Date of Patent: August 17, 2021
    Assignee: Robert Bosch GmbH
    Inventors: Soohwan Kim, Benzun Pious Wisely Babu, Zhixin Yan, Liu Ren
  • Patent number: 11074276
    Abstract: A method for generating a graphical depiction of summarized event sequences includes receiving a plurality of event sequences, each event sequence in the plurality of event sequences including a plurality of events, and generating a plurality of clusters using a minimum description length (MDL) optimization process. Each cluster in the plurality of clusters includes a set of at least two event sequences in the plurality of event sequences that maps to a pattern in the cluster. The pattern in each cluster further includes a plurality of events included in at least one event sequence in the set of at least two event sequences in the cluster. The method includes generating a graphical depiction of a first cluster in the plurality of clusters, the graphical depiction including a graphical depiction of a first plurality of events in the pattern of the first cluster.
    Type: Grant
    Filed: July 27, 2018
    Date of Patent: July 27, 2021
    Assignee: Robert Bosch GmbH
    Inventors: Panpan Xu, Liu Ren, Yuanzhe Chen
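
MDL optimization, as used above, trades off the cost of stating a pattern against the cost of correcting each sequence's deviations from it. A toy sketch of that two-part objective (the edit model here is a plain set difference, an assumption; the patented optimization is more elaborate):

```python
def description_length(pattern, sequences):
    """Two-part MDL cost: symbols to state the pattern, plus symbols to
    correct each sequence's deviations from it (set-difference edits)."""
    model_cost = len(pattern)
    data_cost = sum(len(set(seq) ^ set(pattern)) for seq in sequences)
    return model_cost + data_cost

# One candidate cluster of event sequences and a candidate pattern.
cluster = [["a", "b", "c"], ["a", "b"], ["a", "b", "c", "d"]]
print(description_length(["a", "b", "c"], cluster))
```

A clustering that minimizes the summed description length yields compact patterns that still explain their member sequences well.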
  • Publication number: 20210201053
    Abstract: A visual analytics tool for updating object detection models in autonomous driving applications. In one embodiment, an object detection model analysis system includes a computer and an interface device. The interface device includes a display device. The computer includes an electronic processor that is configured to extract object information from image data with a first object detection model, extract characteristics of objects from metadata associated with the image data, generate a summary of the object information and the characteristics, generate coordinated visualizations based on the summary and the characteristics, generate a recommendation graphical user interface element based on the coordinated visualizations and a first one or more user inputs, and update the first object detection model based at least in part on a classification of one or more individual objects as an actual weakness in the first object detection model to generate a second object detection model for autonomous driving.
    Type: Application
    Filed: December 31, 2019
    Publication date: July 1, 2021
    Inventors: Liang Gou, Lincan Zou, Nanxiang Li, Axel Wendt, Liu Ren
  • Publication number: 20210195981
    Abstract: A helmet includes one or more sensors located in the helmet and configured to obtain cognitive-load data indicating a cognitive load of a rider of a vehicle, a wireless transceiver in communication with the vehicle, and a controller in communication with the one or more sensors and the wireless transceiver, wherein the controller is configured to determine a cognitive load of the rider utilizing at least the cognitive-load data and send a wireless command to the vehicle, utilizing the wireless transceiver, to execute commands to adjust a driver assistance function when the cognitive load is above a threshold.
    Type: Application
    Filed: December 27, 2019
    Publication date: July 1, 2021
    Inventors: Shabnam GHAFFARZADEGAN, Benzun Pious Wisely BABU, Zeng DAI, Liu REN
  • Publication number: 20210201854
    Abstract: A smart helmet includes a heads-up display (HUD) configured to output graphical images within a virtual field of view on a visor of the smart helmet. A transceiver is configured to communicate with a mobile device of a user. A processor is programmed to receive, via the transceiver, calibration data from the mobile device that relates to one or more captured images from a camera on the mobile device, and alter the virtual field of view of the HUD based on the calibration data. This allows a user to calibrate his/her HUD of the smart helmet based on images received from the user's mobile device.
    Type: Application
    Filed: December 27, 2019
    Publication date: July 1, 2021
    Inventors: Benzun Pious Wisely BABU, Zeng DAI, Shabnam GHAFFARZADEGAN, Liu REN