Patents by Inventor Ioan Andrei Bârsan

Ioan Andrei Bârsan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230410404
    Abstract: Three-dimensional object reconstruction for sensor simulation includes performing operations that include rendering, by a differential rendering engine, an object image from a target object model, and computing, by a loss function of the differential rendering engine, a loss based on a comparison of the object image with an actual image and a comparison of the target object model with a corresponding lidar point cloud. The operations further include updating the target object model by the differential rendering engine according to the loss, and rendering, after updating the target object model, a target object in a virtual world using the target object model.
    Type: Application
    Filed: June 14, 2023
    Publication date: December 21, 2023
    Applicant: WAABI Innovation Inc.
    Inventors: Ioan Andrei Barsan, Yun Chen, Wei-Chiu Ma, Sivabalan Manivasagam, Raquel Urtasun, Jingkang Wang, Ze Yang
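    A minimal Python sketch (illustrative only, not the patented implementation) of the optimization loop this abstract describes: render an image from the target object model, compute a loss that combines an image comparison with a lidar point-cloud comparison, and update the model from the gradient of that loss. The toy renderer, the chamfer term, and every shape and hyperparameter below are assumptions.
    ```python
    import torch

    def render_silhouette(vertices, image_size=32):
        """Toy differentiable 'renderer': splat vertex x/y positions onto an
        image grid with soft kernels so gradients flow back to the vertices."""
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, image_size),
            torch.linspace(-1, 1, image_size),
            indexing="ij",
        )
        pixels = torch.stack([xs, ys], dim=-1).reshape(-1, 2)          # (P, 2)
        d2 = torch.cdist(pixels, vertices[:, :2]) ** 2                 # (P, V)
        return torch.exp(-d2 / 0.01).max(dim=1).values.reshape(image_size, image_size)

    def chamfer(a, b):
        """Symmetric nearest-neighbour distance between two point sets."""
        d = torch.cdist(a, b)
        return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

    # Target object model being reconstructed: a free set of 3D vertices.
    vertices = (0.5 * torch.randn(64, 3)).requires_grad_(True)

    # Stand-ins for the real sensor data (camera image and lidar points).
    actual_image = render_silhouette(0.5 * torch.rand(64, 3) - 0.25).detach()
    lidar_points = torch.rand(256, 3) - 0.5

    optimizer = torch.optim.Adam([vertices], lr=0.01)
    for step in range(200):
        optimizer.zero_grad()
        rendered = render_silhouette(vertices)
        loss = torch.nn.functional.mse_loss(rendered, actual_image) \
             + 0.1 * chamfer(vertices, lidar_points)   # image term + lidar term
        loss.backward()
        optimizer.step()
    ```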
  • Patent number: 11820397
    Abstract: A computer-implemented method for localizing a vehicle can include accessing, by a computing system comprising one or more computing devices, a machine-learned retrieval model that has been trained using a ground truth dataset comprising a plurality of pre-localized sensor observations. Each of the plurality of pre-localized sensor observations has a predetermined pose value associated with a previously obtained sensor reading representation. The method also includes obtaining, by the computing system, a current sensor reading representation obtained by one or more sensors located at the vehicle. The method also includes inputting, by the computing system, the current sensor reading representation into the machine-learned retrieval model.
    Type: Grant
    Filed: September 11, 2020
    Date of Patent: November 21, 2023
    Assignee: UATC, LLC
    Inventors: Julieta Martinez Covarrubias, Raquel Urtasun, Shenlong Wang, Ioan Andrei Barsan, Gellert Sandor Mattyus, Alexandre Doubov, Hongbo Fan
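    A small, hypothetical illustration of the retrieval idea in this abstract: embed each pre-localized observation (with its known pose) into a database, embed the current sensor reading the same way, and adopt the pose of the best match. The embed() function is a stand-in for the machine-learned retrieval model, and taking only the single nearest neighbour is a simplification.
    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def embed(sensor_reading: np.ndarray) -> np.ndarray:
        """Placeholder for the learned encoder; returns a unit-norm descriptor."""
        v = sensor_reading.reshape(-1)[:128]
        return v / (np.linalg.norm(v) + 1e-8)

    # Ground-truth database: (descriptor, predetermined pose (x, y, yaw)) pairs.
    database = []
    for _ in range(1000):
        reading = rng.normal(size=(64, 64))          # pre-localized observation
        pose = rng.uniform(-100, 100, size=3)        # its known pose
        database.append((embed(reading), pose))

    descriptors = np.stack([d for d, _ in database])  # (N, 128)
    poses = np.stack([p for _, p in database])        # (N, 3)

    def localize(current_reading: np.ndarray) -> np.ndarray:
        query = embed(current_reading)
        scores = descriptors @ query                   # cosine similarity
        return poses[int(np.argmax(scores))]           # pose of the best match

    print(localize(rng.normal(size=(64, 64))))
    ```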
  • Patent number: 11726208
    Abstract: Aspects of the present disclosure involve systems, methods, and devices for autonomous vehicle localization using a Lidar intensity map. A system is configured to generate a map embedding using a first neural network and to generate an online Lidar intensity embedding using a second neural network. The map embedding is based on input map data comprising a Lidar intensity map, and the online Lidar intensity embedding is based on online Lidar sweep data. The system is further configured to generate multiple pose candidates based on the online Lidar intensity embedding and compute a three-dimensional (3D) score map comprising a match score for each pose candidate that indicates a similarity between the pose candidate and the map embedding. The system is further configured to determine a pose of a vehicle based on the 3D score map and to control one or more operations of the vehicle based on the determined pose.
    Type: Grant
    Filed: June 11, 2019
    Date of Patent: August 15, 2023
    Assignee: UATC, LLC
    Inventors: Shenlong Wang, Andrei Pokrovsky, Raquel Urtasun Sotil, Ioan Andrei Bârsan
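    A rough Python sketch of the matching step this abstract describes: embed the intensity map and the online sweep with two networks, evaluate a grid of (yaw, x, y) pose candidates by cross-correlation, and take the highest-scoring cell of the resulting 3D score map. The tiny convolutional "networks", candidate ranges, and rotation handling are invented for illustration.
    ```python
    import torch
    import torch.nn.functional as F

    map_net = torch.nn.Conv2d(1, 8, 3, padding=1)      # stand-in for the map network
    sweep_net = torch.nn.Conv2d(1, 8, 3, padding=1)    # stand-in for the sweep network

    map_intensity = torch.rand(1, 1, 128, 128)          # prebuilt Lidar intensity map
    online_sweep = torch.rand(1, 1, 32, 32)             # rasterized online Lidar sweep

    with torch.no_grad():
        map_emb = map_net(map_intensity)                 # (1, 8, 128, 128)
        yaw_candidates = torch.linspace(-0.1, 0.1, steps=5)   # radians
        score_slices = []
        for yaw in yaw_candidates:
            # Rotate the online sweep for this yaw candidate, then embed it.
            c, s = float(torch.cos(yaw)), float(torch.sin(yaw))
            theta = torch.tensor([[[c, -s, 0.0], [s, c, 0.0]]])
            grid = F.affine_grid(theta, list(online_sweep.shape), align_corners=False)
            rotated = F.grid_sample(online_sweep, grid, align_corners=False)
            sweep_emb = sweep_net(rotated)               # (1, 8, 32, 32)
            # Cross-correlate over x/y offsets: one 2D score slice per yaw.
            score_slices.append(F.conv2d(map_emb, sweep_emb))   # (1, 1, 97, 97)
        score_map = torch.cat(score_slices, dim=1)       # (1, 5, 97, 97): (yaw, x, y)
        n_yaw, H, W = score_map.shape[1:]
        best = score_map[0].flatten().argmax()            # best pose candidate
        yaw_idx, dx, dy = best // (H * W), (best % (H * W)) // W, best % W
    ```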
  • Patent number: 11715012
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices associated with object localization and generation of compressed feature representations are provided. For example, a computing system can access source data and target data. The source data can include a source representation of an environment including a source object. The target data can include a compressed target feature representation of the environment. The compressed target feature representation can be based on compression of a target feature representation of the environment produced by machine-learned models. A source feature representation can be generated based on the source representation and the machine-learned models. The machine-learned models can include machine-learned feature extraction models or machine-learned attention models. A localized state of the source object with respect to the environment can be determined based on the source feature representation and the compressed target feature representation.
    Type: Grant
    Filed: October 10, 2019
    Date of Patent: August 1, 2023
    Assignee: UATC, LLC
    Inventors: Raquel Urtasun, Xinkai Wei, Ioan Andrei Barsan, Julieta Martinez Covarrubias, Shenlong Wang
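    Illustrative only: one way to picture localizing against a compressed map as in this abstract. The stored target features are kept as compact binary codes, a learned decoder reconstructs dense features on the fly, and the online (source) features are matched against them. The module names, code sizes, and matching-by-correlation step are assumptions for the sketch.
    ```python
    import torch
    import torch.nn.functional as F

    feature_net = torch.nn.Conv2d(1, 16, 3, padding=1)   # feature extraction model
    decoder = torch.nn.Conv2d(4, 16, 1)                   # binary codes -> features

    # What is stored: a compact binary code grid for the map tile (target data).
    compressed_target = (torch.rand(1, 4, 64, 64) > 0.5).float()

    # Online sensor observation of the environment containing the source object.
    source_observation = torch.rand(1, 1, 16, 16)

    with torch.no_grad():
        target_features = decoder(compressed_target)       # reconstructed map features
        source_features = feature_net(source_observation)  # (1, 16, 16, 16)
        # Match the source features at every offset within the map tile.
        score = F.conv2d(target_features, source_features) # (1, 1, 49, 49)
        offset = torch.argmax(score)                        # localized state (best offset)
    ```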
  • Patent number: 11461583
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices associated with object localization and generation of compressed feature representations are provided. For example, a computing system can access training data including a source feature representation and a target feature representation. An encoded target feature representation can be generated based on the target feature representation and a machine-learned encoding model. A binarized target feature representation can be generated based on the encoded target feature representation and lossless binarization operations. A reconstructed target feature representation can be generated based on the binarized target feature representation and a machine-learned decoding model. A matching score for the source feature representation and the reconstructed target feature representation can be determined. A loss associated with the matching score can be determined.
    Type: Grant
    Filed: October 10, 2019
    Date of Patent: October 4, 2022
    Assignee: UATC, LLC
    Inventors: Raquel Urtasun, Xinkai Wei, Ioan Andrei Barsan, Julieta Martinez Covarrubias, Shenlong Wang
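    A loose training-loop sketch of the pipeline this abstract lays out: encode the target features, binarize the code, decode it back, score the reconstruction against the source features, and backpropagate a loss on that matching score. The straight-through sign used below is only a common stand-in for pushing gradients through binarization; the patent's "lossless binarization operations", the matching score, and all shapes are assumptions here.
    ```python
    import torch
    import torch.nn.functional as F

    encoder = torch.nn.Conv2d(16, 4, 1)                  # features -> compact code
    decoder = torch.nn.Conv2d(4, 16, 1)                  # code -> reconstructed features
    params = list(encoder.parameters()) + list(decoder.parameters())
    optimizer = torch.optim.Adam(params, lr=1e-3)

    for step in range(100):
        target_features = torch.rand(1, 16, 32, 32)      # stand-in target features
        source_features = target_features.clone()        # perfectly matching pair
        gt_score = torch.tensor(1.0)                      # its ground-truth matching score

        code = encoder(target_features)
        binary = code + (torch.sign(code) - code).detach()   # straight-through binarization
        reconstructed = decoder(binary)

        # Matching score: cosine similarity between source and reconstruction.
        score = F.cosine_similarity(
            source_features.flatten(1), reconstructed.flatten(1)
        ).mean()
        loss = (score - gt_score) ** 2                    # loss on the matching score
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    ```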
  • Patent number: 11449713
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices associated with object localization and generation of compressed feature representations are provided. For example, a computing system can access training data including a target feature representation and a source feature representation. An attention feature representation can be generated based on the target feature representation and a machine-learned attention model. An attended target feature representation can be generated based on masking the target feature representation with the attention feature representation. A matching score for the source feature representation and the target feature representation can be determined. A loss associated with the matching score and a ground-truth matching score for the source feature representation and the target feature representation can be determined. Furthermore, parameters of the machine-learned attention model can be adjusted based on the loss.
    Type: Grant
    Filed: October 10, 2019
    Date of Patent: September 20, 2022
    Assignee: UATC, LLC
    Inventors: Raquel Urtasun, Xinkai Wei, Ioan Andrei Barsan, Julieta Martinez Covarrubias, Shenlong Wang
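    A minimal sketch of the attention-masking training described above: an attention network produces a per-location mask for the target features, the masked target is scored against the source, and the attention parameters are updated from the gap between the predicted and ground-truth matching scores. The network, the cosine score, and the synthetic data are all placeholders.
    ```python
    import torch
    import torch.nn.functional as F

    attention_net = torch.nn.Sequential(
        torch.nn.Conv2d(16, 1, 1), torch.nn.Sigmoid()    # per-location attention in [0, 1]
    )
    optimizer = torch.optim.Adam(attention_net.parameters(), lr=1e-3)

    for step in range(100):
        target_features = torch.rand(1, 16, 32, 32)
        source_features = torch.rand(1, 16, 32, 32)
        gt_score = torch.rand(())                         # ground-truth matching score

        attention = attention_net(target_features)        # attention feature representation
        attended_target = target_features * attention     # mask the target features
        score = F.cosine_similarity(
            source_features.flatten(1), attended_target.flatten(1)
        ).mean()
        loss = F.mse_loss(score, gt_score)                # loss vs. ground-truth score
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    ```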
  • Publication number: 20220137636
    Abstract: Systems and methods for the simultaneous localization and mapping of autonomous vehicle systems are provided. A method includes receiving a plurality of input image frames from a plurality of asynchronous image devices triggered at different times to capture the plurality of input image frames. The method includes identifying one or more reference image frames corresponding to a respective input image frame by matching the field of view of the respective input image frame to the fields of view of the one or more reference image frames. The method includes determining one or more associations between the respective input image frame and one or more three-dimensional map points based on a comparison of the respective input image frame to the one or more reference image frames. The method includes generating an estimated pose for the autonomous vehicle based on the one or more three-dimensional map points. The method includes updating a continuous-time motion model of the autonomous vehicle based on the estimated pose.
    Type: Application
    Filed: November 1, 2021
    Publication date: May 5, 2022
    Inventors: Anqi Joyce Yang, Can Cui, Ioan Andrei Bârsan, Shenlong Wang, Raquel Urtasun
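    Not the patented method, just a small Python illustration of the last two steps: keep an estimated pose for each asynchronously triggered camera frame, and maintain a continuous-time motion model so the vehicle pose can be queried at any timestamp, not only at frame times. Linear interpolation and the fabricated per-frame pose estimates below stand in for the real model and the 2D-3D association step.
    ```python
    import numpy as np

    class ContinuousTimeMotionModel:
        def __init__(self):
            self.timestamps = []          # seconds
            self.poses = []               # (x, y, yaw) per estimated frame pose

        def update(self, t: float, pose: np.ndarray) -> None:
            """Fold a newly estimated frame pose into the trajectory."""
            self.timestamps.append(t)
            self.poses.append(np.asarray(pose, dtype=float))

        def pose_at(self, t: float) -> np.ndarray:
            """Query the trajectory at an arbitrary time by interpolation."""
            ts = np.asarray(self.timestamps)
            ps = np.stack(self.poses)
            return np.array([np.interp(t, ts, ps[:, i]) for i in range(ps.shape[1])])

    model = ContinuousTimeMotionModel()
    # Asynchronous cameras fire at slightly different times; each frame yields a
    # pose estimate (fabricated here) from its three-dimensional map-point associations.
    for t, pose in [(0.00, (0.0, 0.0, 0.00)),
                    (0.03, (0.6, 0.0, 0.01)),
                    (0.07, (1.4, 0.1, 0.02))]:
        model.update(t, np.array(pose))

    print(model.pose_at(0.05))            # interpolated vehicle pose between frames
    ```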
  • Publication number: 20210146949
    Abstract: A computer-implemented method for localizing a vehicle can include accessing, by a computing system comprising one or more computing devices, a machine-learned retrieval model that has been trained using a ground truth dataset comprising a plurality of pre-localized sensor observations. Each of the plurality of pre-localized sensor observations has a predetermined pose value associated with a previously obtained sensor reading representation. The method also includes obtaining, by the computing system, a current sensor reading representation obtained by one or more sensors located at the vehicle. The method also includes inputting, by the computing system, the current sensor reading representation into the machine-learned retrieval model.
    Type: Application
    Filed: September 11, 2020
    Publication date: May 20, 2021
    Inventors: Julieta Martinez Covarrubias, Raquel Urtasun, Shenlong Wang, Ioan Andrei Barsan, Gellert Sandor Mattyus, Sasha Doubov, Hongbo Fan
  • Publication number: 20200160104
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices associated with object localization and generation of compressed feature representations are provided. For example, a computing system can access training data including a source feature representation and a target feature representation. An encoded target feature representation can be generated based on the target feature representation and a machine-learned encoding model. A binarized target feature representation can be generated based on the encoded target feature representation and lossless binarization operations. A reconstructed target feature representation can be generated based on the binarized target feature representation and a machine-learned decoding model. A matching score for the source feature representation and the reconstructed target feature representation can be determined. A loss associated with the matching score can be determined.
    Type: Application
    Filed: October 10, 2019
    Publication date: May 21, 2020
    Inventors: Raquel Urtasun, Xinkai Wei, Ioan Andrei Barsan, Julieta Covarrubias Martinez, Shenlong Wang
  • Publication number: 20200160151
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices associated with object localization and generation of compressed feature representations are provided. For example, a computing system can access source data and target data. The source data can include a source representation of an environment including a source object. The target data can include a compressed target feature representation of the environment. The compressed target feature representation can be based on compression of a target feature representation of the environment produced by machine-learned models. A source feature representation can be generated based on the source representation and the machine-learned models. The machine-learned models can include machine-learned feature extraction models or machine-learned attention models. A localized state of the source object with respect to the environment can be determined based on the source feature representation and the compressed target feature representation.
    Type: Application
    Filed: October 10, 2019
    Publication date: May 21, 2020
    Inventors: Raquel Urtasun, Xinkai Wei, Ioan Andrei Barsan, Julieta Covarrubias Martinez, Shenlong Wang
  • Publication number: 20200160117
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices associated with object localization and generation of compressed feature representations are provided. For example, a computing system can access training data including a target feature representation and a source feature representation. An attention feature representation can be generated based on the target feature representation and a machine-learned attention model. An attended target feature representation can be generated based on masking the target feature representation with the attention feature representation. A matching score for the source feature representation and the target feature representation can be determined. A loss associated with the matching score and a ground-truth matching score for the source feature representation and the target feature representation can be determined. Furthermore, parameters of the machine-learned attention model can be adjusted based on the loss.
    Type: Application
    Filed: October 10, 2019
    Publication date: May 21, 2020
    Inventors: Raquel Urtasun, Xinkai Wei, Ioan Andrei Barsan, Julieta Covarrubias Martinez, Shenlong Wang
  • Publication number: 20190383945
    Abstract: Aspects of the present disclosure involve systems, methods, and devices for autonomous vehicle localization using a Lidar intensity map. A system is configured to generate a map embedding using a first neural network and to generate an online Lidar intensity embedding using a second neural network. The map embedding is based on input map data comprising a Lidar intensity map, and the online Lidar intensity embedding is based on online Lidar sweep data. The system is further configured to generate multiple pose candidates based on the online Lidar intensity embedding and compute a three-dimensional (3D) score map comprising a match score for each pose candidate that indicates a similarity between the pose candidate and the map embedding. The system is further configured to determine a pose of a vehicle based on the 3D score map and to control one or more operations of the vehicle based on the determined pose.
    Type: Application
    Filed: June 11, 2019
    Publication date: December 19, 2019
    Inventors: Shenlong Wang, Andrei Pokrovsky, Raquel Urtasun Sotil, Ioan Andrei Bârsan