Patents by Inventor Liu Ren

Liu Ren has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11074276
    Abstract: A method for generating a graphical depiction of summarized event sequences includes receiving a plurality of event sequences, each event sequence in the plurality of event sequences including a plurality of events, and generating a plurality of clusters using a minimum description length (MDL) optimization process. Each cluster in the plurality of clusters includes a set of at least two event sequences in the plurality of event sequences that maps to a pattern in each cluster. The pattern in each cluster further includes a plurality of events included in at least one event sequence in the set of at least two event sequences in the cluster. The method includes generating a graphical depiction of a first cluster in the plurality of clusters, the graphical depiction including a graphical depiction of a first plurality of events in the pattern of the first cluster.
    Type: Grant
    Filed: July 27, 2018
    Date of Patent: July 27, 2021
    Assignee: Robert Bosch GmbH
    Inventors: Panpan Xu, Liu Ren, Yuanzhe Chen
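The clustering step described in the abstract above (patent 11074276) can be pictured with a small sketch. This is only a rough illustration, not the patented optimization: it assumes candidate patterns are already given, scores each sequence by an edit-style encoding cost against each pattern, and reports the resulting description length. The event names and cost model are invented for the example.

```python
from difflib import SequenceMatcher

def edit_cost(pattern, sequence):
    # Rough encoding cost of a sequence given a pattern: number of events
    # that must be inserted or deleted relative to the pattern.
    matcher = SequenceMatcher(a=pattern, b=sequence)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return (len(pattern) - matched) + (len(sequence) - matched)

def cluster_by_description_length(sequences, candidate_patterns):
    # Assign each event sequence to the candidate pattern that minimizes
    # its encoding cost, then report the total description length
    # (pattern lengths plus per-sequence edit costs).
    clusters = {tuple(p): [] for p in candidate_patterns}
    for seq in sequences:
        best = min(candidate_patterns, key=lambda p: edit_cost(p, seq))
        clusters[tuple(best)].append(seq)
    total_dl = sum(len(p) for p in candidate_patterns) + sum(
        edit_cost(list(p), s) for p, members in clusters.items() for s in members
    )
    return clusters, total_dl

sequences = [["login", "search", "buy"], ["login", "search", "search", "buy"],
             ["login", "logout"], ["login", "browse", "logout"]]
patterns = [["login", "search", "buy"], ["login", "logout"]]
clusters, dl = cluster_by_description_length(sequences, patterns)
print(dl, {p: len(m) for p, m in clusters.items()})
```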
  • Publication number: 20210201159
    Abstract: A system and method for domain adaptation involves a first domain and a second domain. A machine learning system is trained with first sensor data and first label data of the first domain. Second sensor data of a second domain is obtained. Second label data is generated via the machine learning system based on the second sensor data. Inter-domain sensor data is generated by interpolating the first sensor data of the first domain with respect to the second sensor data of the second domain. Inter-domain label data is generated by interpolating first label data of the first domain with respect to second label data of the second domain. The machine learning system is operable to generate inter-domain output data in response to the inter-domain sensor data. Inter-domain loss data is generated based on the inter-domain output data with respect to the inter-domain label data. Parameters of the machine learning system are updated upon optimizing final loss data that includes at least the inter-domain loss data.
    Type: Application
    Filed: December 31, 2019
    Publication date: July 1, 2021
    Inventors: Huan Song, Shen Yan, Nanxiang Li, Lincan Zou, Liu Ren
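A minimal sketch of the inter-domain interpolation described in publication 20210201159 above, written in PyTorch under several assumptions that are not in the abstract: source and target batches share the same tensor shape, target labels are pseudo-labels produced by the current model, the interpolation coefficient is drawn from a Beta distribution (mixup-style), and the inter-domain loss is cross-entropy against the interpolated soft labels.

```python
import torch
import torch.nn.functional as F

def inter_domain_step(model, x_src, y_src, x_tgt, alpha=0.5):
    # y_src holds integer class labels for the source-domain batch.
    # Pseudo-label the target-domain batch with the current model.
    with torch.no_grad():
        y_tgt = F.softmax(model(x_tgt), dim=1)
    y_src_onehot = F.one_hot(y_src, y_tgt.shape[1]).float()

    # Interpolate sensor data and label data between the two domains.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x_src + (1.0 - lam) * x_tgt
    y_mix = lam * y_src_onehot + (1.0 - lam) * y_tgt

    # Inter-domain loss: cross-entropy between the model's output on the
    # interpolated inputs and the interpolated (soft) labels.
    log_probs = F.log_softmax(model(x_mix), dim=1)
    return -(y_mix * log_probs).sum(dim=1).mean()
```

In practice this term would be added to a source-domain supervised loss to form the final loss that the abstract mentions.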
  • Publication number: 20210201053
    Abstract: A visual analytics tool for updating object detection models in autonomous driving applications is provided. In one embodiment, an object detection model analysis system includes a computer and an interface device. The interface device includes a display device. The computer includes an electronic processor that is configured to extract object information from image data with a first object detection model, extract characteristics of objects from metadata associated with image data, generate a summary of the object information and the characteristics, generate coordinated visualizations based on the summary and the characteristics, generate a recommendation graphical user interface element based on the coordinated visualizations and a first one or more user inputs, and update the first object detection model based at least in part on a classification of one or more individual objects as an actual weakness in the first object detection model to generate a second object detection model for autonomous driving.
    Type: Application
    Filed: December 31, 2019
    Publication date: July 1, 2021
    Inventors: Liang Gou, Lincan Zou, Nanxiang Li, Axel Wendt, Liu Ren
  • Publication number: 20210195981
    Abstract: A helmet includes one or more sensors located in the helmet and configured to obtain cognitive-load data indicating a cognitive load of a rider of a vehicle, a wireless transceiver in communication with the vehicle, a controller in communication with the one or more sensors and the wireless transceiver, wherein the controller is configured to determine a cognitive load of the rider utilizing at least the cognitive-load data and send a wireless command to the vehicle utilizing the wireless transceiver to execute commands to adjust a driver assistance function when the cognitive load is above a threshold.
    Type: Application
    Filed: December 27, 2019
    Publication date: July 1, 2021
    Inventors: Shabnam GHAFFARZADEGAN, Benzun Pious Wisely BABU, Zeng DAI, Liu REN
  • Publication number: 20210201854
    Abstract: A smart helmet includes a heads-up display (HUD) configured to output graphical images within a virtual field of view on a visor of the smart helmet. A transceiver is configured to communicate with a mobile device of a user. A processor is programmed to receive, via the transceiver, calibration data from the mobile device that relates to one or more captured images from a camera on the mobile device, and alter the virtual field of view of the HUD based on the calibration data. This allows a user to calibrate his/her HUD of the smart helmet based on images received from the user's mobile device.
    Type: Application
    Filed: December 27, 2019
    Publication date: July 1, 2021
    Inventors: Benzun Pious Wisely BABU, Zeng DAI, Shabnam GHAFFARZADEGAN, Liu REN
  • Publication number: 20210191518
    Abstract: A helmet includes a transceiver configured to receive vehicle data from one or more sensors located on a vehicle, an inertial measurement unit (IMU) configured to collect helmet motion data of the helmet associated with a rider of the vehicle, and a processor in communication with the transceiver and IMU, and programmed to receive, via the transceiver, vehicle data from the one or more sensors located on the vehicle, determine a gesture in response to the vehicle data from the one or more sensors located on the vehicle and the helmet motion data from the IMU, and output on a display of the helmet a status interface related to the vehicle, in response to the gesture.
    Type: Application
    Filed: December 23, 2019
    Publication date: June 24, 2021
    Inventors: Benzun Pious Wisely BABU, Zeng DAI, Shabnam GHAFFARZADEGAN, Liu REN
  • Publication number: 20210192278
    Abstract: Few-shot learning of repetitive human tasks is performed. Sliding window-based temporal segmentation is performed of sensor data for a plurality of cycles of a repetitive task. Motion alignment is performed of the plurality of cycles, the motion alignment mapping portions of the plurality of cycles to corresponding portions of other of the plurality of cycles. Categories are constructed for each of the corresponding portions of the plurality of cycles according to the motion alignment. Meta-training is performed to teach a model according to data sampled from a labeled set of human motions and the categories for each of the corresponding portions, the model utilizing a bidirectional long short-term memory (LSTM) network to account for length variation between the plurality of cycles. The model is used to perform temporal segmentation on a data stream of sensor data in real time for predicting motion windows within the data stream.
    Type: Application
    Filed: December 19, 2019
    Publication date: June 24, 2021
    Inventors: Huan SONG, Liu REN
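The bidirectional LSTM mentioned in publication 20210192278 above can be sketched as a per-time-step classifier over sliding windows of sensor data. The feature count, hidden size, window length, and number of motion classes below are assumptions for illustration, not values from the patent.

```python
import torch
import torch.nn as nn

class BiLSTMSegmenter(nn.Module):
    # Per-time-step classifier over sliding windows of sensor data; the
    # bidirectional LSTM lets each step's prediction use context from both
    # directions, which helps absorb length variation between cycles.
    def __init__(self, n_features=6, hidden=64, n_motion_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_motion_classes)

    def forward(self, x):          # x: (batch, window_len, n_features)
        out, _ = self.lstm(x)      # (batch, window_len, 2 * hidden)
        return self.head(out)      # per-step motion-class logits

model = BiLSTMSegmenter()
window = torch.randn(8, 100, 6)    # 8 windows of 100 samples, 6 channels
logits = model(window)             # (8, 100, 5)
```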
  • Publication number: 20210181931
    Abstract: A deep sequence model with prototypes may be steered. A prototype overview is displayed, the prototype overview including a plurality of prototype sequences learned by a model through backpropagation, each of the prototype sequences including a series of events, where for each of the prototype sequences, statistical information is presented with respect to use of the prototype sequence by the model. Input is received adjusting one or more of the prototype sequences to fine-tune the model. The model is updated using the plurality of prototype sequences, as adjusted, to create an updated model. The model, as updated, is displayed in the prototype overview.
    Type: Application
    Filed: December 11, 2019
    Publication date: June 17, 2021
    Inventors: Panpan XU, Liu REN, Yao MING, Furui CHENG, Huamin QU
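A minimal sketch of a prototype-based sequence model of the kind publication 20210181931 describes steering. The LSTM encoder, the exponential similarity kernel, and all sizes are assumptions; the point is that the prediction is a linear function of similarities to a small set of learned prototype vectors, so each prototype's contribution can be inspected, edited, and then fine-tuned.

```python
import torch
import torch.nn as nn

class PrototypeSequenceModel(nn.Module):
    # Encodes an event sequence with an LSTM and scores it by similarity
    # to learned prototype vectors; the classifier operates on those
    # similarities, so prototypes act as interpretable, adjustable anchors.
    def __init__(self, n_event_types=20, embed=32, hidden=64,
                 n_prototypes=10, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(n_event_types, embed)
        self.encoder = nn.LSTM(embed, hidden, batch_first=True)
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, hidden))
        self.classifier = nn.Linear(n_prototypes, n_classes)

    def forward(self, events):                  # events: (batch, seq_len) ints
        _, (h, _) = self.encoder(self.embed(events))
        h = h.squeeze(0)                        # (batch, hidden)
        dist = torch.cdist(h, self.prototypes)  # (batch, n_prototypes)
        sim = torch.exp(-dist)                  # similarity to each prototype
        return self.classifier(sim), sim
```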
  • Publication number: 20210183083
    Abstract: Depth perception has become of increased interest in the image community due to the increasing usage of deep neural networks for the generation of dense depth maps. The applications of depth perception estimation, however, may still be limited due to the need for a large amount of dense ground-truth depth for training. It is contemplated that a self-supervised control strategy may be developed for estimating depth maps using color images and data provided by a sensor system (e.g., sparse LiDAR data). Such a self-supervised control strategy may leverage superpixels (i.e., groups of pixels that share common characteristics, for instance, pixel intensity) as local planar regions to regularize surface normal derivatives from estimated depth together with the photometric loss. The control strategy may be operable to produce a dense depth map that does not require dense ground-truth supervision.
    Type: Application
    Filed: December 16, 2019
    Publication date: June 17, 2021
    Inventors: Zhixin YAN, Liang MI, Liu REN
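A minimal sketch of the superpixel-based planar regularizer suggested by publication 20210183083 above. The gradient-based normal approximation and the within-superpixel variance penalty are assumptions for illustration, and the photometric term is omitted; the idea shown is only that normals derived from the estimated depth should vary little inside each superpixel if that superpixel is treated as locally planar.

```python
import torch
import torch.nn.functional as F

def normals_from_depth(depth):
    # depth: (B, 1, H, W); approximate surface normals from depth gradients.
    dz_dx = depth[..., :, 1:] - depth[..., :, :-1]      # (B, 1, H, W-1)
    dz_dy = depth[..., 1:, :] - depth[..., :-1, :]      # (B, 1, H-1, W)
    dz_dx = F.pad(dz_dx, (0, 1))                        # pad back to (H, W)
    dz_dy = F.pad(dz_dy, (0, 0, 0, 1))
    n = torch.cat([-dz_dx, -dz_dy, torch.ones_like(depth)], dim=1)
    return F.normalize(n, dim=1)                        # (B, 3, H, W)

def superpixel_normal_loss(depth, superpixels):
    # superpixels: (B, H, W) integer labels; penalize normal variation
    # inside each superpixel, treating it as a locally planar region.
    normals = normals_from_depth(depth)
    loss, b = 0.0, depth.shape[0]
    for i in range(b):
        labels = superpixels[i]
        for s in labels.unique():
            mask = labels == s
            n = normals[i][:, mask]                     # (3, N) normals in region
            loss = loss + (n - n.mean(dim=1, keepdim=True)).pow(2).mean()
    return loss / b
```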
  • Publication number: 20210177307
    Abstract: Abnormal motions are detected in sensor data collected with respect to performance of repetitive human activities. An autoencoder network model is trained based on a set of standard activity. Repetitive activity is extracted from sensor data. A first score is generated indicative of distance of a repetition of the repetitive activity from the standard activity. The repetitive activity is used to retrain the autoencoder network model, using weights of the autoencoder network model as initial values, the weights being based on the training of the autoencoder network model using the set of standard activity. A second score is generated indicative of whether the repetition is an outlier as compared to other repetitions of the repetitive activity. A final score is generated based on a weighting of the first score and the second score.
    Type: Application
    Filed: December 17, 2019
    Publication date: June 17, 2021
    Inventors: Huan SONG, Liu REN, Lincan ZOU
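A minimal sketch of the two-score combination described in publication 20210177307 above. The autoencoder architecture, the weights, and the names `standard_model` / `adapted_model` are assumptions, and the retraining step that produces the adapted model (initialized from the standard model's weights) is not shown.

```python
import torch
import torch.nn as nn

class MotionAutoencoder(nn.Module):
    def __init__(self, n_inputs=180):          # e.g. a flattened repetition window
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_inputs, 32), nn.ReLU(),
                                     nn.Linear(32, 8))
        self.decoder = nn.Sequential(nn.Linear(8, 32), nn.ReLU(),
                                     nn.Linear(32, n_inputs))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_score(standard_model, adapted_model, repetition, w1=0.6, w2=0.4):
    # First score: reconstruction error under the model trained on standard
    # activity (distance of this repetition from the standard motion).
    # Second score: reconstruction error under the model fine-tuned on the
    # observed repetitions (how much of an outlier this repetition is
    # relative to the others). The final score is a weighted combination.
    with torch.no_grad():
        s1 = torch.mean((standard_model(repetition) - repetition) ** 2).item()
        s2 = torch.mean((adapted_model(repetition) - repetition) ** 2).item()
    return w1 * s1 + w2 * s2
```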
  • Publication number: 20210133447
    Abstract: A helmet includes a transceiver configured to receive vehicle data from one or more sensors located on a vehicle. The helmet also includes an inertial measurement unit (IMU) configured to collect helmet motion data of a rider of the vehicle and a processor in communication with the transceiver and IMU, and programmed to receive, via the transceiver, vehicle data from the one or more sensors located on the vehicle and determine a rider attention state utilizing the vehicle data from the one or more sensors located on the vehicle and the helmet motion data from the IMU.
    Type: Application
    Filed: November 1, 2019
    Publication date: May 6, 2021
    Inventors: Benzun Pious Wisely Babu, Mao Ye, Liu Ren
  • Patent number: 10996235
    Abstract: Using a global optimization, a cycle within a frame buffer including frames corresponding to one or more cycles of query activity sequences is detected. The detection includes creating a plurality of cycle segmentations by recursively iterating through the frame buffer to identify candidate cycles corresponding to cycles of a reference activity sequence until the frame buffer lacks sufficient frames to create additional cycles, computing segmentation errors for each of the plurality of cycle segmentations, and identifying the detected cycle as the one of the plurality of cycle segmentations having a lowest segmentation error. Cycle duration data for the detected cycle is generated. Frames belonging to the detected cycle are removed from the frame buffer. The cycle duration data is output.
    Type: Grant
    Filed: December 31, 2018
    Date of Patent: May 4, 2021
    Assignee: Robert Bosch GmbH
    Inventors: Lincan Zou, Liu Ren, Cheng Zhang
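A rough sketch of the cycle-segmentation idea in patent 10996235 above, not the patented global optimization: it brute-forces the recursion over candidate cycle lengths for a one-dimensional signal and scores each complete segmentation by mean per-cycle error after resampling against the reference cycle. The length bounds and error measure are assumptions.

```python
import numpy as np

def segmentation_error(candidate, reference):
    # Resample the candidate cycle to the reference length and compare.
    idx = np.linspace(0, len(candidate) - 1, len(reference))
    resampled = np.interp(idx, np.arange(len(candidate)), candidate)
    return float(np.mean(np.abs(resampled - reference)))

def best_segmentation(frames, reference, min_len=None, max_len=None):
    # Recursively split the frame buffer into candidate cycles until too few
    # frames remain, scoring each complete segmentation by its mean error.
    min_len = min_len or len(reference) // 2
    max_len = max_len or len(reference) * 2
    if len(frames) < min_len:                      # buffer lacks enough frames
        return [], 0.0
    best_cycles, best_err = None, np.inf
    for length in range(min_len, min(max_len, len(frames)) + 1):
        head, tail = frames[:length], frames[length:]
        rest_cycles, _ = best_segmentation(tail, reference, min_len, max_len)
        errs = [segmentation_error(head, reference)] + \
               [segmentation_error(c, reference) for c in rest_cycles]
        err = float(np.mean(errs))
        if err < best_err:
            best_cycles, best_err = [head] + rest_cycles, err
    return best_cycles, best_err
```

The first cycle of the winning segmentation would then yield the cycle duration data, and its frames would be removed from the buffer before the next detection pass.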
  • Patent number: 10997467
    Abstract: Weaknesses may be exposed in image object detectors. An image object is overlaid onto a background image at each of a plurality of locations, the background image including a scene in which the image objects can be present. A detector model is used to attempt detection of the image object as overlaid onto the background image, the detector model being trained to identify the image object in background images, the detection resulting in background scene detection scores indicative of likelihood of the image object being detected at each of the plurality of locations. A detectability map is displayed overlaid on the background image, the detectability map including, for each of the plurality of locations, a bounding box of the image object illustrated according to the respective detection score.
    Type: Grant
    Filed: October 18, 2019
    Date of Patent: May 4, 2021
    Assignee: Robert Bosch GmbH
    Inventors: Bilal Alsallakh, Nanxiang Li, Lincan Zou, Axel Wendt, Liu Ren
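A minimal sketch of the detectability-map idea shared by patent 10997467 and publication 20210117730 below: the object patch is pasted at a grid of locations over the background and a detector confidence is recorded for each location. The `detect_score` callable is a hypothetical stand-in for the trained detector model, and the grid stride and simple paste (no blending) are assumptions.

```python
import numpy as np

def detectability_map(background, obj, detect_score, stride=32):
    # background: (H, W, 3) array; obj: (h, w, 3) array.
    # detect_score(image) -> float confidence from the trained detector
    # (a stand-in for the real model, which is not shown here).
    H, W, _ = background.shape
    h, w, _ = obj.shape
    ys = list(range(0, H - h + 1, stride))
    xs = list(range(0, W - w + 1, stride))
    scores = np.zeros((len(ys), len(xs)))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            composite = background.copy()
            composite[y:y + h, x:x + w] = obj      # overlay the object patch
            scores[i, j] = detect_score(composite)
    return scores          # low-score cells expose detector weaknesses
```

The resulting score grid is what gets rendered over the background, with each cell drawn as the object's bounding box shaded by its detection score.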
  • Publication number: 20210117730
    Abstract: Weaknesses may be exposed in image object detectors. An image object is overlaid onto a background image at each of a plurality of locations, the background image including a scene in which the image objects can be present. A detector model is used to attempt detection of the image object as overlaid onto the background image, the detector model being trained to identify the image object in background images, the detection resulting in background scene detection scores indicative of likelihood of the image object being detected at each of the plurality of locations. A detectability map is displayed overlaid on the background image, the detectability map including, for each of the plurality of locations, a bounding box of the image object illustrated according to the respective detection score.
    Type: Application
    Filed: October 18, 2019
    Publication date: April 22, 2021
    Inventors: Bilal ALSALLAKH, Nanxiang LI, Lincan ZOU, Axel WENDT, Liu REN
  • Patent number: 10984311
    Abstract: A system includes a display device, a memory configured to store a visual analysis application and image data including a plurality of images including detectable objects; and a processor, operatively connected to the memory and the display device. The processor is configured to execute the visual analysis application to learn generative factors from objects detected in the plurality of images, visualize the generative factors in a user interface provided to the display device, receive grouped combinations of the generative factors and values to apply to the generative factors to control object features, create generated objects by applying the values of the generative factors to the objects detected in the plurality of images, combine the generated objects into the original images to create generated images, and apply a discriminator to the generated images to reject unrealistic images.
    Type: Grant
    Filed: February 27, 2019
    Date of Patent: April 20, 2021
    Assignee: Robert Bosch GmbH
    Inventors: Nanxiang Li, Bilal Alsallakh, Liu Ren
  • Patent number: 10984054
    Abstract: A visual analytics method and system is disclosed for visualizing an operation of an image classification model having at least one convolutional neural network layer. The image classification model classifies sample images into one of a predefined set of possible classes. The visual analytics method determines a unified ordering of the predefined set of possible classes based on a similarity hierarchy such that classes that are similar to one another are clustered together in the unified ordering. The visual analytics method displays various graphical depictions, including a class hierarchy viewer, a confusion matrix, and a response map. In each case, the elements of the graphical depictions are arranged in accordance with the unified ordering. Using the method, a user is better able to understand the training process of the model, diagnose the separation power of the different feature detectors of the model, and improve the architecture of the model.
    Type: Grant
    Filed: July 23, 2018
    Date of Patent: April 20, 2021
    Assignee: Robert Bosch GmbH
    Inventors: Bilal Alsallakh, Amin Jourabloo, Mao Ye, Xiaoming Liu, Liu Ren
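One way to picture the unified ordering described in patent 10984054 above is to cluster classes hierarchically by how the model confuses them and take the leaf order. This is a sketch under assumptions not in the abstract: the similarity hierarchy is built from confusion-matrix row profiles with average-linkage clustering, and the small confusion matrix is invented data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list

def unified_class_order(confusion):
    # confusion: (C, C) matrix of counts; classes that the model confuses
    # with each other have similar row profiles, so hierarchical clustering
    # places them on adjacent leaves and the leaf order groups them together.
    profiles = confusion / confusion.sum(axis=1, keepdims=True)
    return leaves_list(linkage(profiles, method="average"))

confusion = np.array([[50, 8, 1, 0],
                      [9, 48, 2, 1],
                      [0, 1, 55, 4],
                      [1, 0, 5, 52]])
order = unified_class_order(confusion)
reordered = confusion[np.ix_(order, order)]   # confusion matrix in unified order
```

The same leaf order would then be reused across the class hierarchy viewer, the confusion matrix, and the response map so the three views stay aligned.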
  • Patent number: 10959479
    Abstract: A system for providing a rider of a saddle-ride vehicle, such as a motorcycle, with information about helmet usage is provided. A camera is mounted to the saddle-ride vehicle, faces the rider, and is configured to monitor the rider and collect rider image data. A GPS system is configured to detect a location of the saddle-ride vehicle. A controller is in communication with the camera and the GPS system. The controller is configured to receive an image of the rider from the camera, determine if the rider is wearing a helmet based on the rider image data, and output a helmet-worn indicator to the rider, in which the helmet-worn indicator varies based on the determined location of the saddle-ride vehicle.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: March 30, 2021
    Assignee: Robert Bosch GmbH
    Inventors: Benzun Pious Wisely Babu, Zeng Dai, Shabnam Ghaffarzadegan, Liu Ren
  • Publication number: 20210080259
    Abstract: A system and method for generating a tracking state for a device includes synchronizing measurement data from exteroceptive sensors and an inertial measurement unit (IMU). A processing unit is programmed to offset one of the measurement signals by a time offset that minimizes a total error between a change in rotation of the device predicted by the exteroceptive sensor data over a time interval defined by an exteroceptive sensor sampling rate and a change in rotation of the device predicted by the IMU sensor data over the time interval.
    Type: Application
    Filed: September 12, 2019
    Publication date: March 18, 2021
    Inventors: Benzun Pious Wisely Babu, Mao Ye, Liu Ren
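A minimal sketch of the time-offset idea in publication 20210080259 above, simplified to a single rotation axis: for each candidate offset, the gyroscope rate is integrated over the (shifted) camera frame intervals and compared against the rotation change the camera reports, and the offset with the smallest total error wins. The grid search range, the one-axis simplification, and the integration step count are assumptions.

```python
import numpy as np

def estimate_time_offset(cam_times, cam_delta_rot, imu_times, imu_gyro,
                         search=np.linspace(-0.05, 0.05, 201)):
    # cam_delta_rot[i]: rotation change (rad) between camera frames i and i+1,
    # so it has one fewer entry than cam_times.
    # imu_gyro: angular rate (rad/s) sampled at imu_times.
    def gyro_rotation(t0, t1):
        ts = np.linspace(t0, t1, 20)
        return np.trapz(np.interp(ts, imu_times, imu_gyro), ts)

    best_offset, best_err = 0.0, np.inf
    for dt in search:
        pred = np.array([gyro_rotation(cam_times[i] + dt, cam_times[i + 1] + dt)
                         for i in range(len(cam_delta_rot))])
        err = float(np.sum((pred - cam_delta_rot) ** 2))
        if err < best_err:
            best_offset, best_err = dt, err
    return best_offset
```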
  • Publication number: 20210042607
    Abstract: An augmented reality (AR) system and method is disclosed that may include a controller operable to process one or more convolutional neural networks (CNN) and a visualization device operable to acquire one or more 2-D RGB images. The controller may generate an anchor vector in a semantic space in response to an anchor image being provided to a first convolutional neural network (CNN). The anchor image may be one of the 2-D RGB images. The controller may generate a positive vector and negative vector in the semantic space in response to a negative image and positive image being provided to a second CNN. The negative and positive images may be provided as 3-D CAD images. The controller may apply a cross-domain deep metric learning algorithm that is operable to extract image features in the semantic space using the anchor vector, positive vector, and negative vector.
    Type: Application
    Filed: August 5, 2019
    Publication date: February 11, 2021
    Inventors: Zhixin YAN, Mao YE, Liu REN
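A minimal sketch of the cross-domain triplet setup described in publication 20210042607 above: one CNN embeds the 2-D RGB anchor, a second CNN embeds the 3-D CAD-rendered positive and negative images, and a triplet margin loss pulls the anchor toward the positive in the shared semantic space. The tiny backbone, embedding size, image size, and random tensors are placeholders, not anything from the patent.

```python
import torch
import torch.nn as nn

def small_cnn(embed_dim=128):
    # Tiny convolutional encoder used as a stand-in for each domain's CNN.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, embed_dim))

rgb_encoder = small_cnn()     # first CNN: real 2-D RGB anchor images
cad_encoder = small_cnn()     # second CNN: rendered 3-D CAD images
triplet = nn.TripletMarginLoss(margin=1.0)

anchor = torch.randn(4, 3, 64, 64)      # RGB crops
positive = torch.randn(4, 3, 64, 64)    # matching CAD renderings
negative = torch.randn(4, 3, 64, 64)    # non-matching CAD renderings

loss = triplet(rgb_encoder(anchor), cad_encoder(positive), cad_encoder(negative))
loss.backward()   # trains both encoders toward a shared semantic space
```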
  • Publication number: 20210042583
    Abstract: A computer-program product storing instructions which, when executed by a computer, cause the computer to receive input data, encode the input data via an encoder, during a first sequence, obtain a first latent variable defining an attribute of the input data, generate a sequential reconstruction of the input data utilizing a decoder and at least the first latent variable, obtain a residual between the input data and the reconstruction utilizing a comparison of at least the first latent variable, and output a final reconstruction of the input data utilizing a plurality of residuals from a plurality of sequences.
    Type: Application
    Filed: August 8, 2019
    Publication date: February 11, 2021
    Inventors: Shabnam GHAFFARZADEGAN, Nanxiang LI, Liu REN
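A minimal sketch of the sequential residual reconstruction described in publication 20210042583 above: at each step the encoder/decoder reconstructs whatever part of the input is still unexplained, the residual is fed to the next step, and the final reconstruction accumulates the per-step outputs. The dense architecture, latent size, and number of steps are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class ResidualSequenceAutoencoder(nn.Module):
    def __init__(self, n_inputs=64, latent=8, n_steps=3):
        super().__init__()
        self.n_steps = n_steps
        self.encoder = nn.Sequential(nn.Linear(n_inputs, 32), nn.ReLU(),
                                     nn.Linear(32, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(),
                                     nn.Linear(32, n_inputs))

    def forward(self, x):
        residual = x
        reconstruction = torch.zeros_like(x)
        latents = []
        for _ in range(self.n_steps):
            z = self.encoder(residual)          # latent variable for this sequence
            step = self.decoder(z)              # partial reconstruction
            reconstruction = reconstruction + step
            residual = x - reconstruction       # what is still unexplained
            latents.append(z)
        return reconstruction, latents          # final output uses every residual step
```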