Patents by Inventor Liu Ren

Liu Ren has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230196755
    Abstract: A computer-implemented system and method includes generating first pseudo segment data from a first augmented image and generating second pseudo segment data from a second augmented image. The first augmented image and the second augmented image are in a dataset along with other augmented images. A machine learning system is configured to generate pixel embeddings based on the dataset. The first pseudo segment data and the second pseudo segment data are used to identify a first set of segments to which a given pixel belongs with respect to the first augmented image and the second augmented image. A second set of segments is identified across the dataset. The second set of segments does not include the given pixel. A local segmentation loss is computed for the given pixel based on the corresponding pixel embedding; the loss involves attracting the first set of segments while repelling the second set of segments.
    Type: Application
    Filed: December 22, 2021
    Publication date: June 22, 2023
    Inventors: Wenbin He, Liang Gou, Liu Ren
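The attract/repel objective described above resembles a pixel-level contrastive (InfoNCE-style) loss. As an illustrative sketch only, not code from the patent, the hypothetical `local_segmentation_loss` below computes such a loss for a single pixel embedding against positive and negative segment embeddings:

```python
import numpy as np

def local_segmentation_loss(pixel_emb, positive_segs, negative_segs, tau=0.1):
    """Contrastive loss for one pixel: attract the segments the pixel belongs
    to (across augmented views) and repel segments that do not contain it.
    Segment embeddings could be, e.g., mean pixel embeddings per segment."""
    def sim(a, b):
        # cosine similarity between two embedding vectors
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    pos = sum(np.exp(sim(pixel_emb, s) / tau) for s in positive_segs)
    neg = sum(np.exp(sim(pixel_emb, s) / tau) for s in negative_segs)
    # InfoNCE-style objective: maximize similarity mass on positive segments
    return -np.log(pos / (pos + neg))
```

The loss shrinks when the pixel embedding aligns with its positive segments and grows when it aligns with segments that exclude the pixel.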
  • Publication number: 20230184949
    Abstract: A system and method are disclosed herein for developing robust semantic mapping models for estimating semantic maps from LiDAR scans. In particular, the system and method enable the generation of realistic simulated LiDAR scans based on two-dimensional (2D) floorplans, for the purpose of providing a much larger set of training data that can be used to train robust semantic mapping models. These simulated LiDAR scans, as well as real LiDAR scans, are annotated using automated and manual processes with a rich set of semantic labels. Based on the annotated LiDAR scans, one or more semantic mapping models can be trained to estimate the semantic map for new LiDAR scans. The trained semantic mapping model can be deployed in robot vacuum cleaners, as well as similar devices that must interpret LiDAR scans of an environment to perform a task.
    Type: Application
    Filed: December 9, 2021
    Publication date: June 15, 2023
    Inventors: Xinyu Huang, Sharath Gopal, Lincan Zou, Yuliang Guo, Liu Ren
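The core geometric step in simulating a 2D LiDAR scan from a floorplan is casting rays from a sensor pose against the floorplan's wall segments. The sketch below is a minimal, hypothetical illustration of that geometry, not the patented pipeline:

```python
import math

def ray_segment_distance(ox, oy, angle, seg):
    """Distance along a ray from (ox, oy) at `angle` to a wall segment,
    or None if the ray misses the segment."""
    (x1, y1), (x2, y2) = seg
    dx, dy = math.cos(angle), math.sin(angle)
    ex, ey = x2 - x1, y2 - y1
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:          # ray parallel to the wall
        return None
    # Solve origin + t*direction = p1 + u*edge for t (range) and u (position)
    t = ((x1 - ox) * ey - (y1 - oy) * ex) / denom
    u = ((x1 - ox) * dy - (y1 - oy) * dx) / denom
    return t if t >= 0 and 0 <= u <= 1 else None

def simulate_scan(origin, walls, n_beams=360, max_range=10.0):
    """Simulated LiDAR scan: one range reading per beam, taking the nearest
    wall hit (or max_range if no wall is hit)."""
    ox, oy = origin
    scan = []
    for i in range(n_beams):
        a = 2 * math.pi * i / n_beams
        hits = [d for w in walls
                if (d := ray_segment_distance(ox, oy, a, w)) is not None]
        scan.append(min(hits) if hits else max_range)
    return scan
```

A real simulator would add sensor noise, occlusion from furniture, and pose sampling on top of this ray-casting core.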
  • Publication number: 20230186590
    Abstract: A method and device for performing a perception task are disclosed. The method and device incorporate a dense regression model. The dense regression model advantageously incorporates a distortion-free convolution technique that is designed to accommodate and appropriately handle the varying levels of distortion in omnidirectional images across different regions. In addition to distortion-free convolution, the dense regression model further utilizes a transformer that incorporates a spherical self-attention mechanism, which uses distortion-free image embeddings to compute appearance attention and spherical distance to compute positional attention.
    Type: Application
    Filed: December 13, 2021
    Publication date: June 15, 2023
    Inventors: Yuliang Guo, Zhixin Yan, Yuyan Li, Xinyu Huang, Liu Ren
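The positional attention described above relies on true distances on the sphere rather than distorted pixel offsets. A standard way to compute great-circle distance between two spherical positions is the haversine formula; the following sketch is illustrative and not taken from the patent:

```python
import math

def spherical_distance(p, q):
    """Great-circle distance between two points given as (lat, lon) in
    radians on the unit sphere; usable as a distortion-free positional
    bias between omnidirectional image locations."""
    (lat1, lon1), (lat2, lon2) = p, q
    # Haversine formula: numerically stable for nearby points
    h = math.sin((lat2 - lat1) / 2) ** 2 + \
        math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2
    return 2 * math.asin(min(1.0, math.sqrt(h)))
```

Such a distance could feed a decaying positional-attention weight, e.g. `exp(-spherical_distance(p, q))`, so attention respects true geometry at every latitude.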
  • Publication number: 20230177637
    Abstract: A system and method are disclosed herein for developing a machine perception model in the omnidirectional image domain. The system and method utilize the knowledge distillation process to transfer and adapt knowledge from the perspective projection image domain to the omnidirectional image domain. A teacher model is pre-trained to perform the machine perception task in the perspective projection image. A student model is trained by adapting the pre-existing knowledge of the teacher model from the perspective projection image domain to the omnidirectional image domain. By way of this training, the student model learns to perform the same machine perception task, except in the omnidirectional image domain, using limited or no suitably labeled training data in the omnidirectional image domain.
    Type: Application
    Filed: December 8, 2021
    Publication date: June 8, 2023
    Inventors: Yuliang Guo, Zhixin Yan, Yuyan Li, Xinyu Huang, Liu Ren
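Knowledge distillation of this kind is commonly implemented by matching the student's temperature-softened output distribution to the teacher's. The sketch below shows a generic distillation loss in that spirit; the exact loss is an assumption for illustration and is not specified by the abstract:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over a vector of logits."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()                     # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in the standard distillation recipe."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))
```

For domain adaptation, the teacher would score perspective-projected crops of the omnidirectional input while the student is trained directly on the omnidirectional image.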
  • Publication number: 20230089148
    Abstract: Methods and systems for providing an interactive image scene graph pattern search are provided. A user is provided with an image having a plurality of selectable segmented regions therein. The user selects one or more of the segmented regions to build a query graph. Via a graph neural network, matching target graphs that contain the query graph are retrieved from a target graph database. Each matching target graph has matching target nodes that match the query nodes of the query graph. Matching target images from an image database are associated with the matching target graphs. Embeddings of each of the query nodes and the matching target nodes are extracted. A comparison of the embeddings of each query node with the embeddings of each matching target node is performed. The user interface displays the matching target images that are associated with the matching target graphs.
    Type: Application
    Filed: September 17, 2021
    Publication date: March 23, 2023
    Inventors: Zeng DAI, Huan SONG, Panpan XU, Liu REN
  • Publication number: 20230085938
    Abstract: Embodiments of systems and methods for diagnosing an object-detecting machine learning model for autonomous driving are disclosed herein. An input image is received from a camera mounted in or on a vehicle that shows a scene. A spatial distribution of movable objects within the scene is derived using a context-aware spatial representation machine learning model. An unseen object is generated in the scene that is not originally in the input image utilizing a spatial adversarial machine learning model. Via the spatial adversarial machine learning model, the unseen object is moved to different locations to fail the object-detecting machine learning model. An interactive user interface enables a user to analyze performance of the object-detecting machine learning model with respect to the scene without the unseen object and the scene with the unseen object.
    Type: Application
    Filed: September 17, 2021
    Publication date: March 23, 2023
    Inventors: Wenbin HE, Liang GOU, Lincan ZOU, Liu REN
  • Publication number: 20230085927
    Abstract: A computer-implemented method includes receiving one or more images from one or more sensors, creating one or more image patches utilizing the one or more images, creating one or more latent representations from the one or more image patches via a neural network, outputting, to a concept extractor network, the one or more latent representations utilizing the one or more image patches, defining one or more scores associated with the one or more latent representations, and outputting one or more scores associated with the one or more image patches utilizing at least the concept extractor network.
    Type: Application
    Filed: September 20, 2021
    Publication date: March 23, 2023
    Inventors: Panpan XU, Liu REN, Zhenge ZHAO
  • Publication number: 20230086327
    Abstract: Systems and methods are disclosed for identifying target graphs that have nodes or neighborhoods of nodes (sub-graphs) that correspond with an input query graph. A visual analytics system supports human-in-the-loop, example-based subgraph pattern search utilizing a database of target graphs. Users can interactively select a pattern of nodes of interest. Graph neural networks encode topological and node attributes in a graph as fixed-length latent vector representations such that subgraph matching can be performed in the latent space. Once matching target graphs are identified as corresponding to the query graph, a one-to-one node correspondence is established between the query graph and the matching target graphs.
    Type: Application
    Filed: September 17, 2021
    Publication date: March 23, 2023
    Inventors: Huan SONG, Zeng DAI, Panpan XU, Liu REN
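Because matching happens in latent space, node correspondence reduces to comparing fixed-length vectors. As a hypothetical illustration (the GNN encoder itself is omitted), `match_nodes` pairs each query-node embedding with its most cosine-similar target-node embedding:

```python
import numpy as np

def match_nodes(query_emb, target_emb):
    """Given fixed-length latent vectors for query nodes and target nodes
    (rows of the two matrices, e.g. produced by a GNN encoder), return for
    each query node the index of the most similar target node."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    t = target_emb / np.linalg.norm(target_emb, axis=1, keepdims=True)
    sim = q @ t.T                 # pairwise cosine similarities
    return sim.argmax(axis=1)     # one-to-one greedy correspondence
```

A production system would additionally enforce mutual consistency or solve an assignment problem rather than taking independent argmaxes.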
  • Patent number: 11605222
    Abstract: A helmet includes a transceiver configured to receive vehicle data from one or more sensors located on a vehicle. The helmet also includes an inertial measurement unit (IMU) configured to collect helmet motion data of a rider of the vehicle and a processor in communication with the transceiver and IMU, and programmed to receive, via the transceiver, vehicle data from the one or more sensors located on the vehicle and determine a rider attention state utilizing the vehicle data from the one or more sensors located on the vehicle and the helmet motion data from the IMU.
    Type: Grant
    Filed: November 1, 2019
    Date of Patent: March 14, 2023
    Inventors: Benzun Pious Wisely Babu, Mao Ye, Liu Ren
  • Patent number: 11593589
    Abstract: A novel interpretable and steerable deep sequence modeling technique is disclosed. The technique combines prototype learning and RNNs to achieve both interpretability and high accuracy. Experiments and case studies on different real-world sequence prediction/classification tasks demonstrate that the model is not only as accurate as other state-of-the-art machine learning techniques but also much more interpretable. In addition, a large-scale user study on Amazon Mechanical Turk demonstrates that for familiar domains like sentiment analysis on texts, the model is able to select high quality prototypes that are well aligned with human knowledge for prediction and interpretation. Furthermore, the model obtains better interpretability without a loss of performance by incorporating the feedback from a user study to update the prototypes, demonstrating the benefits of involving human-in-the-loop for interpretable machine learning.
    Type: Grant
    Filed: January 31, 2020
    Date of Patent: February 28, 2023
    Assignee: Robert Bosch GmbH
    Inventors: Panpan Xu, Liu Ren, Yao Ming
  • Patent number: 11587330
    Abstract: A visual analytics tool is disclosed for updating object detection models in autonomous driving applications. In one embodiment, an object detection model analysis system includes a computer and an interface device. The interface device includes a display device. The computer includes an electronic processor that is configured to extract object information from image data with a first object detection model, extract characteristics of objects from metadata associated with image data, generate a summary of the object information and the characteristics, generate coordinated visualizations based on the summary and the characteristics, generate a recommendation graphical user interface element based on the coordinated visualizations and a first one or more user inputs, and update the first object detection model based at least in part on a classification of one or more individual objects as an actual weakness in the first object detection model to generate a second object detection model for autonomous driving.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: February 21, 2023
    Assignee: Robert Bosch GmbH
    Inventors: Liang Gou, Lincan Zou, Nanxiang Li, Axel Wendt, Liu Ren
  • Patent number: 11537901
    Abstract: A system and method for domain adaptation involves a first domain and a second domain. A machine learning system is trained with first sensor data and first label data of the first domain. Second sensor data of a second domain is obtained. Second label data is generated via the machine learning system based on the second sensor data. Inter-domain sensor data is generated by interpolating the first sensor data of the first domain with respect to the second sensor data of the second domain. Inter-domain label data is generated by interpolating first label data of the first domain with respect to second label data of the second domain. The machine learning system is operable to generate inter-domain output data in response to the inter-domain sensor data. Inter-domain loss data is generated based on the inter-domain output data with respect to the inter-domain label data. Parameters of the machine learning system are updated upon optimizing final loss data that includes at least the inter-domain loss data.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: December 27, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Huan Song, Shen Yan, Nanxiang Li, Lincan Zou, Liu Ren
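The interpolation of sensor and label data between domains can be pictured as a mixup-style convex combination of one source sample and one target sample. The hypothetical `interpolate_domains` below sketches that single step; the surrounding training loop and loss computation are omitted:

```python
import numpy as np

def interpolate_domains(x_src, y_src, x_tgt, y_tgt, lam=0.5):
    """Blend a labeled first-domain sample with a (pseudo-)labeled
    second-domain sample. The mixed pair supervises the model on the
    path between the two domains."""
    x_mix = lam * x_src + (1.0 - lam) * x_tgt
    y_mix = lam * y_src + (1.0 - lam) * y_tgt
    return x_mix, y_mix
```

The inter-domain loss would then compare the model's output on `x_mix` against `y_mix`, and that loss term is folded into the final loss being optimized.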
  • Patent number: 11526689
    Abstract: Few-shot learning of repetitive human tasks is performed. Sliding window-based temporal segmentation is performed of sensor data for a plurality of cycles of a repetitive task. Motion alignment is performed of the plurality of cycles, the motion alignment mapping portions of the plurality of cycles to corresponding portions of other of the plurality of cycles. Categories are constructed for each of the corresponding portions of the plurality of cycles according to the motion alignment. Meta-training is performed to teach a model according to data sampled from a labeled set of human motions and the categories for each of the corresponding portions, the model utilizing a bidirectional long short-term memory (LSTM) network to account for length variation between the plurality of cycles. The model is used to perform temporal segmentation on a data stream of sensor data in real time for predicting motion windows within the data stream.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: December 13, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Huan Song, Liu Ren
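The sliding-window temporal segmentation step can be illustrated in a few lines: split the sensor stream into fixed-size, possibly overlapping windows that are then aligned and classified against portions of the repetitive task cycle. A minimal, hypothetical sketch:

```python
def sliding_windows(seq, size, step):
    """Cut a sensor stream into fixed-size windows advanced by `step`
    samples; each window is later aligned to a portion of a task cycle."""
    return [seq[i:i + size] for i in range(0, len(seq) - size + 1, step)]
```

With `step < size` the windows overlap, which helps the downstream model localize cycle boundaries despite length variation between cycles.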
  • Patent number: 11513673
    Abstract: A deep sequence model with prototypes may be steered. A prototype overview is displayed, the prototype overview including a plurality of prototype sequences learned by a model through backpropagation, each of the prototype sequences including a series of events, where for each of the prototype sequences, statistical information is presented with respect to use of the prototype sequence by the model. Input is received adjusting one or more of the prototype sequences to fine-tune the model. The model is updated using the plurality of prototype sequences, as adjusted, to create an updated model. The model, as updated, is displayed in the prototype overview.
    Type: Grant
    Filed: December 11, 2019
    Date of Patent: November 29, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Panpan Xu, Liu Ren, Yao Ming, Furui Cheng, Huamin Qu
  • Patent number: 11500470
    Abstract: A helmet includes a transceiver configured to receive vehicle data from one or more sensors located on a vehicle, an inertial measurement unit (IMU) configured to collect helmet motion data of the helmet associated with a rider of the vehicle, and a processor in communication with the transceiver and IMU, and programmed to receive, via the transceiver, vehicle data from the one or more sensors located on the vehicle, determine a gesture in response to the vehicle data from the one or more sensors located on the vehicle and the helmet motion data from the IMU, and output on a display of the helmet a status interface related to the vehicle, in response to the gesture.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: November 15, 2022
    Inventors: Benzun Pious Wisely Babu, Zeng Dai, Shabnam Ghaffarzadegan, Liu Ren
  • Publication number: 20220277192
    Abstract: A visual analytics workflow and system are disclosed for assessing, understanding, and improving deep neural networks. The visual analytics workflow advantageously enables interpretation and improvement of the performance of a neural network model, for example an image-based object detection and classification model, with minimal human-in-the-loop interaction. A data representation component extracts semantic features of input image data, such as colors, brightness, background, rotation, etc. of the images or objects in the images. The input image data are passed through the neural network to obtain prediction results, such as object detection and classification results. An interactive visualization component transforms the prediction results and semantic features into interactive and human-friendly visualizations, in which graphical elements encoding the prediction results are visually arranged depending on the extracted semantic features of input image data.
    Type: Application
    Filed: February 26, 2021
    Publication date: September 1, 2022
    Inventors: Liang Gou, Lincan Zou, Wenbin He, Liu Ren
  • Publication number: 20220277187
    Abstract: Methods and systems for performing concept-based adversarial generation with steerable and diverse semantics. One system includes an electronic processor configured to access an input image. The electronic processor is also configured to perform concept-based semantic image generation based on the input image. The electronic processor is also configured to perform concept-based semantic adversarial learning using a set of semantic latent spaces generated as part of performing the concept-based semantic image generation. The electronic processor is also configured to generate an adversarial image based on the concept-based semantic adversarial learning. The electronic processor is also configured to test a target model using the adversarial image.
    Type: Application
    Filed: March 1, 2021
    Publication date: September 1, 2022
    Inventors: Zijie Wang, Liang Gou, Wenbin He, Liu Ren
  • Publication number: 20220277173
    Abstract: Methods and systems for performing function testing for moveable objects. One system includes an electronic processor configured to access a driving scene including a moveable object. The electronic processor is also configured to perform spatial representation learning on the driving scene. The electronic processor is also configured to generate an adversarial example based on the learned spatial representation. The electronic processor is also configured to retrain the deep learning model using the adversarial example and the driving scene.
    Type: Application
    Filed: March 1, 2021
    Publication date: September 1, 2022
    Inventors: Wenbin He, Liang Gou, Lincan Zou, Liu Ren
  • Patent number: 11430146
    Abstract: A system and method are disclosed having an end-to-end two-stage depth estimation deep learning framework that takes one spherical color image and estimates dense spherical depth maps. The contemplated framework may include a view synthesis (stage 1) and a multi-view stereo matching (stage 2). The combination of the two-stage process may provide the advantage of using geometric constraints from stereo matching to improve depth map quality, without the need for additional input data. It is also contemplated that a spherical warping layer may be used to integrate multiple spherical feature volumes into one cost volume with uniformly sampled inverse depth for the multi-view spherical stereo matching stage. The two-stage spherical depth estimation system and method may be used in various applications including virtual reality, autonomous driving, and robotics.
    Type: Grant
    Filed: October 31, 2020
    Date of Patent: August 30, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Zhixin Yan, Liu Ren, Yuyan Li, Ye Duan
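"Uniformly sampled inverse depth" refers to spacing the depth hypotheses of the cost volume evenly in 1/d, so near ranges are sampled densely and far ranges sparsely, matching how disparity behaves. A small illustrative sketch, not taken from the patent:

```python
import numpy as np

def inverse_depth_hypotheses(d_min, d_max, n):
    """Depth hypotheses uniformly spaced in inverse depth (1/d), as used
    when building a plane-sweep cost volume: the spacing between returned
    depths grows with distance from the camera."""
    inv = np.linspace(1.0 / d_min, 1.0 / d_max, n)
    return 1.0 / inv
```

Each hypothesis then defines one plane (or sphere, in the spherical case) that candidate features are warped onto before matching costs are aggregated.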
  • Publication number: 20220270322
    Abstract: A system of a virtual visor includes one or more sensors configured to receive input data including images, wherein the one or more sensors include at least a camera utilized in the virtual visor, and a processor in communication with the one or more sensors. The processor is programmed to create a training dataset utilizing at least the input data; utilizing the training dataset, create a classification associated with a shadow mask region and a first face region associated with a first face; segment the shadow mask region and the first face region from the training dataset; and output a shadow representation via the virtual visor utilizing the shadow mask region, the first face region, and a second face region associated with a second face.
    Type: Application
    Filed: February 22, 2021
    Publication date: August 25, 2022
    Inventors: Xinyu HUANG, Benzun Pious Wisely BABU, Liu REN, Jason ZINK