Patents by Inventor Liu Ren

Liu Ren has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230085927
    Abstract: A computer-implemented method includes receiving one or more images from one or more sensors, creating one or more image patches utilizing the one or more images, creating one or more latent representations from the one or more image patches via a neural network, outputting, to a concept extractor network, the one or more latent representations utilizing the one or more image patches, defining one or more scores associated with the one or more latent representations, and outputting one or more scores associated with the one or more image patches utilizing at least the concept extractor network.
    Type: Application
    Filed: September 20, 2021
    Publication date: March 23, 2023
    Inventors: Panpan XU, Liu REN, Zhenge ZHAO
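    The patch-to-latent-to-score pipeline in the abstract above can be sketched as follows. This is a minimal illustration, not the patented method: the "encoder" and "concept extractor" here are stand-in functions (`encode`, `concept_scores`) invented for the example, and the 4x4 image and patch size are illustrative.

    ```python
    # Hypothetical sketch: split an image into patches, map each patch to a
    # latent value, then score each latent against a "concept". The real
    # invention uses neural networks; these are toy stand-ins.

    def make_patches(image, patch_size):
        """Split a 2D image (list of rows) into non-overlapping square patches."""
        h, w = len(image), len(image[0])
        patches = []
        for r in range(0, h - patch_size + 1, patch_size):
            for c in range(0, w - patch_size + 1, patch_size):
                patches.append([row[c:c + patch_size]
                                for row in image[r:r + patch_size]])
        return patches

    def encode(patch):
        """Stand-in for the neural network: one latent value per patch (mean intensity)."""
        flat = [v for row in patch for v in row]
        return sum(flat) / len(flat)

    def concept_scores(latents):
        """Stand-in concept extractor: score each latent against a 'bright' concept."""
        return [round(z / 255.0, 3) for z in latents]

    image = [[0, 0, 255, 255],
             [0, 0, 255, 255],
             [255, 255, 0, 0],
             [255, 255, 0, 0]]
    patches = make_patches(image, 2)
    scores = concept_scores([encode(p) for p in patches])
    print(scores)  # one concept score per 2x2 patch
    ```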
  • Publication number: 20230085938
    Abstract: Embodiments of systems and methods for diagnosing an object-detecting machine learning model for autonomous driving are disclosed herein. An input image is received from a camera mounted in or on a vehicle that shows a scene. A spatial distribution of movable objects within the scene is derived using a context-aware spatial representation machine learning model. An unseen object is generated in the scene that is not originally in the input image utilizing a spatial adversarial machine learning model. Via the spatial adversarial machine learning model, the unseen object is moved to different locations to fail the object-detecting machine learning model. An interactive user interface enables a user to analyze performance of the object-detecting machine learning model with respect to the scene without the unseen object and the scene with the unseen object.
    Type: Application
    Filed: September 17, 2021
    Publication date: March 23, 2023
    Inventors: Wenbin HE, Liang GOU, Lincan ZOU, Liu REN
  • Publication number: 20230086327
    Abstract: Systems and methods are disclosed for identifying target graphs that have nodes or neighborhoods of nodes (sub-graphs) that correspond with an input query graph. A visual analytics system supports human-in-the-loop, example-based subgraph pattern search utilizing a database of target graphs. Users can interactively select a pattern of nodes of interest. Graph neural networks encode topological and node attributes in a graph as fixed length latent vector representations such that subgraph matching can be performed in the latent space. Once matching target graphs are identified as corresponding to the query graph, one-to-one node correspondence between the query graph and the matching target graphs is determined.
    Type: Application
    Filed: September 17, 2021
    Publication date: March 23, 2023
    Inventors: Huan SONG, Zeng DAI, Panpan XU, Liu REN
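    The key idea in the abstract above, matching graphs by similarity of fixed-length latent vectors rather than by explicit subgraph isomorphism, can be sketched in a few lines. The embedding below is a toy degree histogram, not the patented graph neural network, and all graph data is illustrative.

    ```python
    # Hypothetical sketch of latent-space graph retrieval: embed each graph as
    # a fixed-length vector, then rank target graphs by cosine similarity to
    # the query embedding.
    import math

    def embed(graph, dim=4):
        """Toy fixed-length embedding: histogram of node degrees (capped at dim-1)."""
        degrees = {n: 0 for n in graph["nodes"]}
        for a, b in graph["edges"]:
            degrees[a] += 1
            degrees[b] += 1
        hist = [0.0] * dim
        for d in degrees.values():
            hist[min(d, dim - 1)] += 1.0
        return hist

    def cosine(u, v):
        dot = sum(x * y for x, y in zip(u, v))
        nu = math.sqrt(sum(x * x for x in u))
        nv = math.sqrt(sum(x * x for x in v))
        return dot / (nu * nv) if nu and nv else 0.0

    query = {"nodes": [0, 1, 2], "edges": [(0, 1), (1, 2), (0, 2)]}  # triangle
    targets = {
        "triangle-like": {"nodes": [0, 1, 2], "edges": [(0, 1), (1, 2), (0, 2)]},
        "path": {"nodes": [0, 1, 2, 3], "edges": [(0, 1), (1, 2), (2, 3)]},
    }
    q = embed(query)
    ranked = sorted(targets, key=lambda k: cosine(q, embed(targets[k])), reverse=True)
    print(ranked[0])  # best-matching target graph
    ```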
  • Publication number: 20230089148
    Abstract: Methods and systems for providing an interactive image scene graph pattern search are provided. A user is provided with an image having a plurality of selectable segmented regions therein. The user selects one or more of the segmented regions to build a query graph. Via a graph neural network, matching target graphs are retrieved that contain the query graph from a target graph database. Each matching target graph has matching target nodes that match with the query nodes of the query graph. Matching target images from an image database are associated with the matching target graphs. Embeddings of each of the query nodes and the matching target nodes are extracted. A comparison of the embeddings of each query node with the embeddings of each matching target node is performed. The user interface displays the matching target images that are associated with the matching target graphs.
    Type: Application
    Filed: September 17, 2021
    Publication date: March 23, 2023
    Inventors: Zeng DAI, Huan SONG, Panpan XU, Liu REN
  • Patent number: 11605222
    Abstract: A helmet includes a transceiver configured to receive vehicle data from one or more sensors located on a vehicle. The helmet also includes an inertial movement unit (IMU) configured to collect helmet motion data of a rider of the vehicle and a processor in communication with the transceiver and IMU, and programmed to receive, via the transceiver, vehicle data from the one or more sensors located on the vehicle and determine a rider attention state utilizing the vehicle data from the one or more sensors located on the vehicle and the helmet motion data from the IMU.
    Type: Grant
    Filed: November 1, 2019
    Date of Patent: March 14, 2023
    Inventors: Benzun Pious Wisely Babu, Mao Ye, Liu Ren
  • Patent number: 11593589
    Abstract: A novel interpretable and steerable deep sequence modeling technique is disclosed. The technique combines prototype learning and RNNs to achieve both interpretability and high accuracy. Experiments and case studies on different real-world sequence prediction/classification tasks demonstrate that the model is not only as accurate as other state-of-the-art machine learning techniques but also much more interpretable. In addition, a large-scale user study on Amazon Mechanical Turk demonstrates that for familiar domains like sentiment analysis on texts, the model is able to select high quality prototypes that are well aligned with human knowledge for prediction and interpretation. Furthermore, the model obtains better interpretability without a loss of performance by incorporating the feedback from a user study to update the prototypes, demonstrating the benefits of involving human-in-the-loop for interpretable machine learning.
    Type: Grant
    Filed: January 31, 2020
    Date of Patent: February 28, 2023
    Assignee: Robert Bosch GmbH
    Inventors: Panpan Xu, Liu Ren, Yao Ming
  • Patent number: 11587330
    Abstract: A visual analytics tool for updating object detection models in autonomous driving applications is disclosed. In one embodiment, an object detection model analysis system includes a computer and an interface device. The interface device includes a display device. The computer includes an electronic processor that is configured to extract object information from image data with a first object detection model, extract characteristics of objects from metadata associated with image data, generate a summary of the object information and the characteristics, generate coordinated visualizations based on the summary and the characteristics, generate a recommendation graphical user interface element based on the coordinated visualizations and a first one or more user inputs, and update the first object detection model based at least in part on a classification of one or more individual objects as an actual weakness in the first object detection model to generate a second object detection model for autonomous driving.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: February 21, 2023
    Assignee: Robert Bosch GmbH
    Inventors: Liang Gou, Lincan Zou, Nanxiang Li, Axel Wendt, Liu Ren
  • Patent number: 11537901
    Abstract: A system and method for domain adaptation involves a first domain and a second domain. A machine learning system is trained with first sensor data and first label data of the first domain. Second sensor data of a second domain is obtained. Second label data is generated via the machine learning system based on the second sensor data. Inter-domain sensor data is generated by interpolating the first sensor data of the first domain with respect to the second sensor data of the second domain. Inter-domain label data is generated by interpolating first label data of the first domain with respect to second label data of the second domain. The machine learning system is operable to generate inter-domain output data in response to the inter-domain sensor data. Inter-domain loss data is generated based on the inter-domain output data with respect to the inter-domain label data. Parameters of the machine learning system are updated upon optimizing final loss data that includes at least the inter-domain loss data.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: December 27, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Huan Song, Shen Yan, Nanxiang Li, Lincan Zou, Liu Ren
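    The inter-domain interpolation step described in the abstract above resembles mixup-style linear blending of samples and labels from the two domains. The sketch below shows only that blending idea; the feature vectors, labels, and mixing coefficient are illustrative, not from the patent.

    ```python
    # Hypothetical sketch of inter-domain data generation: linearly interpolate
    # sensor data and label data from two domains to create intermediate
    # training samples (mixup-like).

    def interpolate(x_a, x_b, lam):
        """Linearly interpolate two equal-length vectors with weight lam on x_a."""
        return [lam * a + (1 - lam) * b for a, b in zip(x_a, x_b)]

    # first-domain sample (sensor data + one-hot label)
    x1, y1 = [1.0, 0.0, 2.0], [1.0, 0.0]
    # second-domain sample (its label generated by the trained model)
    x2, y2 = [0.0, 4.0, 2.0], [0.0, 1.0]

    lam = 0.25
    x_mix = interpolate(x1, x2, lam)  # inter-domain sensor data
    y_mix = interpolate(y1, y2, lam)  # inter-domain label data
    print(x_mix, y_mix)
    ```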
  • Patent number: 11526689
    Abstract: Few-shot learning of repetitive human tasks is performed. Sliding window-based temporal segmentation is performed of sensor data for a plurality of cycles of a repetitive task. Motion alignment is performed of the plurality of cycles, the motion alignment mapping portions of the plurality of cycles to corresponding portions of other of the plurality of cycles. Categories are constructed for each of the corresponding portions of the plurality of cycles according to the motion alignment. Meta-training is performed to teach a model according to data sampled from a labeled set of human motions and the categories for each of the corresponding portions, the model utilizing a bidirectional long short-term memory (LSTM) network to account for length variation between the plurality of cycles. The model is used to perform temporal segmentation on a data stream of sensor data in real time for predicting motion windows within the data stream.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: December 13, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Huan Song, Liu Ren
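    The first step of the pipeline above, sliding-window temporal segmentation of a sensor stream, can be sketched simply. The window size, stride, and signal below are illustrative; the patented method follows this with motion alignment and a bidirectional LSTM, which are omitted here.

    ```python
    # Hypothetical sketch of sliding-window temporal segmentation: cut a 1D
    # sensor stream into fixed-size overlapping windows for downstream
    # alignment and classification.

    def sliding_windows(stream, size, stride):
        """Return overlapping fixed-size windows over a 1D signal."""
        return [stream[i:i + size]
                for i in range(0, len(stream) - size + 1, stride)]

    stream = [0, 1, 2, 3, 2, 1, 0, 1, 2, 3]  # synthetic repetitive motion
    windows = sliding_windows(stream, size=4, stride=2)
    print(windows)
    ```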
  • Patent number: 11513673
    Abstract: A deep sequence model with prototypes may be steered. A prototype overview is displayed, the prototype overview including a plurality of prototype sequences learned by a model through backpropagation, each of the prototype sequences including a series of events, where for each of the prototype sequences, statistical information is presented with respect to use of the prototype sequence by the model. Input is received adjusting one or more of the prototype sequences to fine-tune the model. The model is updated using the plurality of prototype sequences, as adjusted, to create an updated model. The model, as updated, is displayed in the prototype overview.
    Type: Grant
    Filed: December 11, 2019
    Date of Patent: November 29, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Panpan Xu, Liu Ren, Yao Ming, Furui Cheng, Huamin Qu
  • Patent number: 11500470
    Abstract: A helmet includes a transceiver configured to receive vehicle data from one or more sensors located on a vehicle, an inertial measurement unit (IMU) configured to collect helmet motion data of the helmet associated with a rider of the vehicle, and a processor in communication with the transceiver and IMU, and programmed to receive, via the transceiver, vehicle data from the one or more sensors located on the vehicle, determine a gesture in response to the vehicle data from the one or more sensors located on the vehicle and the helmet motion data from the IMU, and output on a display of the helmet a status interface related to the vehicle, in response to the gesture.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: November 15, 2022
    Inventors: Benzun Pious Wisely Babu, Zeng Dai, Shabnam Ghaffarzadegan, Liu Ren
  • Publication number: 20220277187
    Abstract: Methods and systems for performing concept-based adversarial generation with steerable and diverse semantics. One system includes an electronic processor configured to access an input image. The electronic processor is also configured to perform concept-based semantic image generation based on the input image. The electronic processor is also configured to perform concept-based semantic adversarial learning using a set of semantic latent spaces generated as part of performing the concept-based semantic image generation. The electronic processor is also configured to generate an adversarial image based on the concept-based semantic adversarial learning. The electronic processor is also configured to test a target model using the adversarial image.
    Type: Application
    Filed: March 1, 2021
    Publication date: September 1, 2022
    Inventors: Zijie Wang, Liang Gou, Wenbin He, Liu Ren
  • Publication number: 20220277173
    Abstract: Methods and systems for performing function testing for moveable objects. One system includes an electronic processor configured to access a driving scene including a moveable object. The electronic processor is also configured to perform spatial representation learning on the driving scene. The electronic processor is also configured to generate an adversarial example based on the learned spatial representation. The electronic processor is also configured to retrain the deep learning model using the adversarial example and the driving scene.
    Type: Application
    Filed: March 1, 2021
    Publication date: September 1, 2022
    Inventors: Wenbin He, Liang Gou, Lincan Zou, Liu Ren
  • Publication number: 20220277192
    Abstract: A visual analytics workflow and system are disclosed for assessing, understanding, and improving deep neural networks. The visual analytics workflow advantageously enables interpretation and improvement of the performance of a neural network model, for example an image-based object detection and classification model, with minimal human-in-the-loop interaction. A data representation component extracts semantic features of input image data, such as colors, brightness, background, rotation, etc. of the images or objects in the images. The input image data are passed through the neural network to obtain prediction results, such as object detection and classification results. An interactive visualization component transforms the prediction results and semantic features into interactive and human-friendly visualizations, in which graphical elements encoding the prediction results are visually arranged depending on the extracted semantic features of input image data.
    Type: Application
    Filed: February 26, 2021
    Publication date: September 1, 2022
    Inventors: Liang Gou, Lincan Zou, Wenbin He, Liu Ren
  • Patent number: 11430146
    Abstract: A system and method are disclosed having an end-to-end two-stage depth estimation deep learning framework that takes one spherical color image and estimates dense spherical depth maps. The contemplated framework may include a view synthesis (stage 1) and a multi-view stereo matching (stage 2). The combination of the two-stage process may provide the advantage of the geometric constraints from stereo matching to improve depth map quality, without the need of additional input data. It is also contemplated that a spherical warping layer may be used to integrate multiple spherical feature volumes into one cost volume with uniformly sampled inverse depth for the multi-view spherical stereo matching stage. The two-stage spherical depth estimation system and method may be used in various applications including virtual reality, autonomous driving and robotics.
    Type: Grant
    Filed: October 31, 2020
    Date of Patent: August 30, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Zhixin Yan, Liu Ren, Yuyan Li, Ye Duan
  • Publication number: 20220270322
    Abstract: A system of a virtual visor includes one or more sensors configured to receive input data including images, wherein the one or more sensors include at least a camera utilized in the virtual visor, and a processor in communication with the one or more sensors. The processor is programmed to create a training dataset utilizing at least the input data; utilizing the training dataset, create a classification associated with a shadow mask region and a first face region associated with a first face; segment the shadow mask region and the first face region from the training dataset; and output a shadow representation via the virtual visor utilizing the shadow mask region, the first face region, and a second face region associated with a second face.
    Type: Application
    Filed: February 22, 2021
    Publication date: August 25, 2022
    Inventors: Xinyu HUANG, Benzun Pious Wisely BABU, Liu REN, Jason ZINK
  • Publication number: 20220269892
    Abstract: A virtual visor in a vehicle includes a screen with various regions that can alternate between being transparent and being opaque. A camera captures an image of the driver's face. A processor performs facial recognition or the like based on the captured images, and determines which region of the screen is transitioned from transparent to opaque to block out the sun from shining directly into the driver's eyes while maintaining visibility through the remainder of the screen. Low power monitors can be independently run on the vehicle, asynchronously with the algorithms and image processing that controls which region of the screen to be opaque. The monitors consume less power than operating the virtual visor continuously. Based on trigger conditions as detected by the monitors, the image processing and thus the alternating between opaque and transparent is ceased to save power until the trigger condition is no longer present.
    Type: Application
    Filed: February 22, 2021
    Publication date: August 25, 2022
    Inventors: Xinyu HUANG, Benzun Pious Wisely BABU, Liu REN
  • Publication number: 20220269122
    Abstract: A helmet and a method and system for controlling a digital visor of a helmet are disclosed herein. The helmet includes a visor screen having a plurality of liquid crystal display (LCD) pixels, with each LCD pixel configured to alter in transparency. The helmet also includes a light sensor configured to detect incident light. The helmet also includes a controller coupled to the visor screen and the light sensor. The controller is configured to alter the transparency of the plurality of LCD pixels based on the incident light. In embodiments, the controller can alter the transparency of the LCD pixels based on the direction and/or intensity of the incident light.
    Type: Application
    Filed: February 22, 2021
    Publication date: August 25, 2022
    Inventors: Xinyu HUANG, Benzun Pious Wisely BABU, Liu REN
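    The control idea in the visor abstracts above, darkening only the LCD pixels near where incident light falls, scaled by intensity, can be sketched as a small grid update. The grid size, light position, and radius parameter are illustrative inventions for this example.

    ```python
    # Hypothetical sketch of a visor control step: set opacity for LCD pixels
    # within a radius of the incident light position, proportional to intensity.

    def update_visor(grid_size, light_pos, intensity, radius=1):
        """Return a grid of opacity values in [0, 1]; 0 = fully transparent."""
        rows, cols = grid_size
        lr, lc = light_pos
        grid = [[0.0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                if abs(r - lr) <= radius and abs(c - lc) <= radius:
                    grid[r][c] = min(1.0, intensity)
        return grid

    # darken only the center pixel of a 3x3 visor for light of intensity 0.8
    visor = update_visor((3, 3), light_pos=(1, 1), intensity=0.8, radius=0)
    print(visor)
    ```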
  • Patent number: 11410436
    Abstract: A method for operating a vehicle including a vehicle sensing system includes generating a baseline image model of a cabin of the vehicle based on image data of the cabin of the vehicle generated by an imaging device of the vehicle sensing system, the baseline image model generated before a passenger event, and generating an event image model of the cabin of the vehicle based on image data of the cabin of the vehicle generated by the imaging device, the event image model generated after the passenger event. The method further includes identifying image deviations by comparing the event image model to the baseline image model with a controller of the vehicle sensing system, the image deviations corresponding to differences in the cabin of the vehicle from before the passenger event to after the passenger event, and operating the vehicle based on the identified image deviations.
    Type: Grant
    Filed: March 14, 2019
    Date of Patent: August 9, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Mao Ye, Liu Ren
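    The baseline-versus-event comparison in the abstract above reduces, at its simplest, to flagging pixels that changed beyond a threshold between the two cabin images. The sketch below shows only that comparison; the images and threshold are illustrative, and the patented image models are far richer than raw pixel grids.

    ```python
    # Hypothetical sketch of the deviation check: compare an event-time cabin
    # image against a baseline image and report positions that changed.

    def image_deviations(baseline, event, threshold=10):
        """Return (row, col) positions where the two images differ noticeably."""
        return [(r, c)
                for r, row in enumerate(baseline)
                for c, v in enumerate(row)
                if abs(event[r][c] - v) > threshold]

    baseline = [[100, 100], [100, 100]]
    event    = [[100, 100], [100, 180]]  # e.g. an item left on a seat
    print(image_deviations(baseline, event))  # deviating pixel positions
    ```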
  • Publication number: 20220221482
    Abstract: A system and method for monitoring performance of a repeated activity is described. The system comprises a motion sensing system and a processing system. The motion sensing system includes sensors configured to measure or track motions corresponding to a repeated activity. The processing system is configured to process motion data received from the motion sensing system to recognize and measure cycle durations in the repeated activity. In contrast to conventional systems and methods, which may work for repeated activities having a high level of standardization, the system advantageously enables recognition and monitoring of cycle durations for a repeated activity, even when significant abnormal motions are present in each cycle. Thus, the system can be utilized in a significantly broader set of applications, compared to conventional systems and methods.
    Type: Application
    Filed: January 14, 2021
    Publication date: July 14, 2022
    Inventors: Lincan Zou, Huan Song, Liu Ren
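    Measuring cycle durations in a repetitive motion signal, as the abstract above describes, can be sketched with a simple threshold-crossing detector: mark the start of each cycle where the signal crosses a threshold upward, then take differences between consecutive starts. The synthetic signal and threshold are illustrative; the patented system handles far less regular motion than this.

    ```python
    # Hypothetical sketch of cycle-duration measurement: detect cycle starts as
    # upward threshold crossings, then measure the gaps between them.

    def cycle_durations(signal, threshold):
        """Durations (in samples) between successive upward threshold crossings."""
        starts = [i for i in range(1, len(signal))
                  if signal[i - 1] < threshold <= signal[i]]
        return [b - a for a, b in zip(starts, starts[1:])]

    # three repetitions of a simple up-down motion, 5 samples per cycle
    signal = [0, 2, 3, 2, 0] * 3
    print(cycle_durations(signal, threshold=2))  # samples per detected cycle
    ```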