Patents by Inventor Alexander Popov

This listing covers patent filings that name Alexander Popov (or closely matching inventor names) as an inventor. It includes patent applications that are still pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11821560
    Abstract: A fluid manifold apparatus, in particular a water distribution apparatus, comprises a fluid chamber, at least one fluid inlet, a plurality of fluid outlets, and a manifold unit, including a plurality of valve units for closing the fluid outlets and a camshaft supported to be rotatable about a rotary axis, comprising a plurality of lobes for operating the valve units, wherein each of the valve units has a cam follower and a sealing unit, and the lobes of the camshaft are provided to lift the cam followers of the valve units as a function of a rotary position of the camshaft. Each of the cam followers has a receiver for receiving one sealing unit, wherein the sealing units are arranged to be linearly moveable in the receivers.
    Type: Grant
    Filed: November 29, 2021
    Date of Patent: November 21, 2023
    Assignee: Minebea Mitsumi Inc.
    Inventors: Eric Häuser, Christian Schmid, Robert Rottweiler, Vladimir Popov, Alexander Zwetkow
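    A minimal sketch of the mechanism this abstract describes: each valve unit's cam follower is lifted, opening its outlet, only while the camshaft's rotary position falls inside that lobe's angular window. The angular windows and outlet layout below are illustrative assumptions; the patent does not specify them.

        # Model: outlet i is open while the camshaft angle lies inside
        # lobe i's (hypothetical) angular window.
        def open_outlets(cam_angle_deg, lobe_windows):
            """Return outlet indices whose valves are lifted at this rotary position."""
            angle = cam_angle_deg % 360.0
            opened = []
            for outlet, (start, end) in lobe_windows.items():
                if start <= end:
                    lifted = start <= angle <= end
                else:  # window wraps past 360 degrees
                    lifted = angle >= start or angle <= end
                if lifted:
                    opened.append(outlet)
            return opened

        # Three outlets served by lobes phased around one shaft.
        windows = {0: (0.0, 90.0), 1: (120.0, 210.0), 2: (300.0, 30.0)}
        print(open_outlets(15.0, windows))   # [0, 2]
        print(open_outlets(150.0, windows))  # [1]
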
  • Patent number: 11811767
    Abstract: Techniques for streamlined, secure deployment of cloud services in cloud computing environments are disclosed herein. In one embodiment, a method can include, in response to receiving an instruction to deploy a cloud service in the cloud computing system, creating a deployment subscription to resources in the cloud computing system, the deployment subscription being owned by the deployment service, and instantiating one or more computing resources accessible by the deployment service in the cloud computing system in accordance with the created deployment subscription. The method also includes retrieving, with the instantiated one or more computing resources, one or more components of an application corresponding to the cloud service based on a manifest, and installing the retrieved one or more components of the application in the cloud computing system in accordance with an installation sequence identified in the manifest.
    Type: Grant
    Filed: June 10, 2022
    Date of Patent: November 7, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Vladimir Pogrebinsky, Sergei Popov, Alexander Wayne Eager
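    A hypothetical sketch of the deployment flow in this abstract: create a subscription owned by the deployment service itself, stand up its helper resources, then fetch and install application components in the manifest's order. The manifest schema and the method names on the cloud object are assumptions for illustration, not Microsoft APIs.

        import json

        def deploy_cloud_service(cloud, manifest_path):
            with open(manifest_path) as f:
                manifest = json.load(f)
            # Subscription owned by the deployment service itself, isolating
            # deployment-time resources from user subscriptions.
            subscription = cloud.create_subscription(owner="deployment-service")
            # Helper resources the deployment service will work through.
            resources = cloud.instantiate_resources(subscription, manifest["resources"])
            # Retrieve and install components in the manifest's installation order.
            for name in manifest["installation_sequence"]:
                component = resources.fetch_component(manifest["components"][name])
                cloud.install(subscription, component)
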
  • Publication number: 20230348634
    Abstract: This invention relates to a homogeneous process to produce propylene polymers using transition metal complexes of a dianionic, tridentate ligand that features a central neutral heterocyclic Lewis base and two phenolate donors, where the tridentate ligand coordinates to the metal center to form two eight-membered rings. Preferably the bis(phenolate) complexes are represented by Formula (I), where M, L, X, m, n, E, E′, Q, R1, R2, R3, R4, R1′, R2′, R3′, R4′, A1, A1′, A2, and A2′ are as defined herein, and where A1QA1′ is part of a heterocyclic Lewis base containing 4 to 40 non-hydrogen atoms that links A2 to A2′ via a 3-atom bridge with Q being the central atom of the 3-atom bridge.
    Type: Application
    Filed: August 11, 2020
    Publication date: November 2, 2023
    Inventors: Jo Ann M. Canich, Ru Xie, Gregory J. Smith-Karahalis, Sarah J. Mattler, Mikhail I. Sharikov, Alexander Z. Voskoboynikov, Vladislav A. Popov, Dmitry V. Uborsky, Georgy P. Goryunov, John R. Hagadorn, Peijun Jiang
  • Publication number: 20230281847
    Abstract: In various examples, methods and systems are provided for estimating depth values for images (e.g., from a monocular sequence). Disclosed approaches may define a search space of potential pixel matches between two images using one or more depth hypothesis planes based at least on a camera pose associated with one or more cameras used to generate the images. A machine learning model(s) may use this search space to predict likelihoods of correspondence between one or more pixels in the images. The predicted likelihoods may be used to compute depth values for one or more of the images. The predicted depth values may be transmitted and used by a machine to perform one or more operations.
    Type: Application
    Filed: February 3, 2022
    Publication date: September 7, 2023
    Inventors: Yiran Zhong, Charles Loop, Nikolai Smolyanskiy, Ke Chen, Stan Birchfield, Alexander Popov
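    A minimal plane-sweep sketch of the depth-hypothesis search space described above: each candidate depth plane induces a homography that warps the source image into the reference view, and a per-pixel matching cost is scored across depths. In the publication a machine learning model predicts match likelihoods over this space; a simple absolute-difference cost stands in for it here. The camera conventions (R, t map reference-frame points into the source frame; plane n^T X = d in the reference frame) are stated assumptions.

        import numpy as np
        import cv2  # OpenCV, used only for the homography warp

        def plane_sweep_cost_volume(ref, src, K, R, t, depths):
            """Return a (num_depths, H, W) photometric cost volume."""
            h, w = ref.shape
            n = np.array([[0.0, 0.0, 1.0]])   # fronto-parallel plane normal (1x3)
            K_inv = np.linalg.inv(K)
            volume = np.empty((len(depths), h, w), dtype=np.float32)
            for i, d in enumerate(depths):
                # Homography induced by the plane n^T X = d (reference frame):
                # x_src ~ K (R + t n^T / d) K^{-1} x_ref
                H_d = K @ (R + t.reshape(3, 1) @ n / d) @ K_inv
                # H_d maps reference pixels to source pixels, so sample the
                # source image through it (WARP_INVERSE_MAP).
                warped = cv2.warpPerspective(
                    src, H_d, (w, h), flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
                volume[i] = np.abs(ref.astype(np.float32) - warped)
            return volume

        # A depth map is then, e.g., depths[np.argmin(volume, axis=0)] per pixel.
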
  • Patent number: 11732063
    Abstract: The present disclosure relates to Lewis base catalysts. Catalysts, catalyst systems, and processes of the present disclosure can provide high temperature ethylene polymerization, propylene polymerization, or copolymerization. In at least one embodiment, the catalyst compounds belong to a family of compounds comprising amido-phenolate-heterocyclic ligands coordinated to group 4 transition metals. The tridentate ligand may include a central neutral heterocyclic donor group, an anionic phenolate donor, and an anionic amido donor. In some embodiments, the present disclosure provides a catalyst system comprising an activator and a catalyst of the present disclosure. In some embodiments, the present disclosure provides a polymerization process comprising a) contacting one or more olefin monomers with a catalyst system comprising: i) an activator and ii) a catalyst of the present disclosure.
    Type: Grant
    Filed: February 11, 2021
    Date of Patent: August 22, 2023
    Assignee: ExxonMobil Chemical Patents Inc.
    Inventors: Georgy P. Goryunov, Mikhail I. Sharikov, Vladislav A. Popov, Dmitry V. Uborsky, Alexander Z. Voskoboynikov, John R. Hagadorn, Jo Ann M. Canich
  • Publication number: 20230245899
    Abstract: The current disclosure relates to a method of depositing a metal halide-comprising material on a substrate by a cyclic deposition process. The method comprises providing a substrate in a reaction chamber, providing a metal precursor into the reaction chamber in a vapor phase and providing a halogen precursor into the reaction chamber in a vapor phase to form the metal halide-comprising material on the substrate. In the method, the metal precursor comprises a metal atom having an oxidation state of +1 bonded to an organic ligand. Also, a deposition assembly for depositing a metal halide-comprising material is disclosed.
    Type: Application
    Filed: January 31, 2023
    Publication date: August 3, 2023
    Inventors: Georgi Popov, Alexander Weiss, Mikko Ritala, Marianna Kemell
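    A hypothetical recipe loop for the cyclic deposition process this abstract describes: alternately pulse the metal precursor and the halogen precursor into the reaction chamber in the vapor phase, purging between pulses, and repeat for the desired number of cycles. The reactor interface and precursor names are illustrative assumptions.

        def run_deposition(reactor, cycles):
            for _ in range(cycles):
                reactor.pulse("metal_precursor")    # metal atom in the +1 oxidation state
                reactor.purge()                     # remove excess precursor and byproducts
                reactor.pulse("halogen_precursor")  # forms the metal halide on the substrate
                reactor.purge()
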
  • Publication number: 20230049567
    Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used for input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing less than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
    Type: Application
    Filed: October 28, 2022
    Publication date: February 16, 2023
    Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
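    A sketch of the ground-truth generation step described above: propagate LIDAR-derived labels to the time-aligned RADAR frame and omit any label covering fewer than a threshold number of RADAR detections. The 2D point and axis-aligned box layout is a simplifying assumption made to keep the example small.

        import numpy as np

        def propagate_labels(radar_points, lidar_boxes, min_detections=3):
            """radar_points: (N, 2) x/y detections; lidar_boxes: list of
            (x_min, y_min, x_max, y_max). Returns boxes kept as RADAR ground truth."""
            kept = []
            for x_min, y_min, x_max, y_max in lidar_boxes:
                inside = ((radar_points[:, 0] >= x_min) & (radar_points[:, 0] <= x_max) &
                          (radar_points[:, 1] >= y_min) & (radar_points[:, 1] <= y_max))
                if inside.sum() >= min_detections:  # else: too few detections, omit label
                    kept.append((x_min, y_min, x_max, y_max))
            return kept
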
  • Publication number: 20220415059
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: August 25, 2022
    Publication date: December 29, 2022
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
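    A structural sketch of the chained multi-view pipeline in this abstract: a first stage segments in the perspective view, its output is re-projected top-down, and a second stage segments and regresses instance geometry in that view. Layer sizes and the projection step are placeholders, not the architecture claimed in the publication.

        import torch.nn as nn

        class MultiViewPerception(nn.Module):
            def __init__(self, in_ch=3, classes=4):
                super().__init__()
                self.perspective_stage = nn.Sequential(   # stage 1: first view
                    nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, classes, 1),            # per-pixel class logits
                )
                self.top_down_stage = nn.Sequential(      # stage 2: second view
                    nn.Conv2d(classes, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, classes + 4, 1),        # class logits + box regression
                )

            def forward(self, image, project_to_top_down):
                persp = self.perspective_stage(image)     # perspective-view segmentation
                bev = project_to_top_down(persp)          # view transform (assumed given)
                return self.top_down_stage(bev)           # top-down outputs to decode
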
  • Patent number: 11531088
    Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used for input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing less than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: December 20, 2022
    Assignee: NVIDIA Corporation
    Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
  • Patent number: 11532168
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: December 20, 2022
    Assignee: NVIDIA Corporation
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
  • Publication number: 20220012470
    Abstract: An intelligent assistant records speech spoken by a first user and determines a self-selection score for the first user. The intelligent assistant sends the self-selection score to another intelligent assistant, and receives a remote-selection score for the first user from the other intelligent assistant. The intelligent assistant compares the self-selection score to the remote-selection score. If the self-selection score is greater than the remote-selection score, the intelligent assistant responds to the first user and blocks subsequent responses to all other users until a disengagement metric of the first user exceeds a blocking threshold. If the self-selection score is less than the remote-selection score, the intelligent assistant does not respond to the first user.
    Type: Application
    Filed: September 27, 2021
    Publication date: January 13, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Kazuhito Koishida, Alexander A. Popov, Uros Batricevic, Steven Nabil Bathiche
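    The arbitration rule in this abstract transcribes almost directly into code; the sketch below does so under illustrative names (none are from the filing). Equal scores are treated as a loss for the local assistant, an assumption the abstract leaves unspecified.

        def arbitrate(self_score, remote_score, disengagement, blocking_threshold):
            """Decide whether this assistant answers the first user."""
            if self_score > remote_score:
                if disengagement <= blocking_threshold:
                    return "respond_and_block_other_users"
                return "respond"   # user has disengaged; stop blocking others
            return "stay_silent"   # defer to the other assistant
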
  • Patent number: 11194998
    Abstract: An intelligent assistant records speech spoken by a first user and determines a self-selection score for the first user. The intelligent assistant sends the self-selection score to another intelligent assistant, and receives a remote-selection score for the first user from the other intelligent assistant. The intelligent assistant compares the self-selection score to the remote-selection score. If the self-selection score is greater than the remote-selection score, the intelligent assistant responds to the first user and blocks subsequent responses to all other users until a disengagement metric of the first user exceeds a blocking threshold. If the self-selection score is less than the remote-selection score, the intelligent assistant does not respond to the first user.
    Type: Grant
    Filed: July 24, 2017
    Date of Patent: December 7, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kazuhito Koishida, Alexander A. Popov, Uros Batricevic, Steven Nabil Bathiche
  • Publication number: 20210342608
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: July 15, 2021
    Publication date: November 4, 2021
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
  • Publication number: 20210342609
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: July 15, 2021
    Publication date: November 4, 2021
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
  • Publication number: 20210156963
    Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used for input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing less than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
    Type: Application
    Filed: March 31, 2020
    Publication date: May 27, 2021
    Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
  • Publication number: 20210156960
    Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space, in both highway and urban scenarios. RADAR detections may be accumulated, ego-motion-compensated, orthographically projected, and fed into a neural network(s). The neural network(s) may include a common trunk with a feature extractor and several heads that predict different outputs such as a class confidence head that predicts a confidence map and an instance regression head that predicts object instance data for detected objects. The outputs may be decoded, filtered, and/or clustered to form bounding shapes identifying the location, size, and/or orientation of detected object instances. The detected object instances may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: March 31, 2020
    Publication date: May 27, 2021
    Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
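    A sketch of the RADAR pre-processing described above: accumulate detections over several frames, compensate each frame for ego-motion into the current ego frame, and orthographically project the points onto a top-down grid a network could consume. The 2D poses and grid parameters are simplifying assumptions.

        import numpy as np

        def accumulate_bev(frames, poses, grid_size=256, cell_m=0.5):
            """frames: list of (N_i, 2) xy detections in each frame's ego frame;
            poses: list of (3, 3) homogeneous ego-to-world transforms per frame.
            Returns a (grid_size, grid_size) top-down occupancy grid in the
            last frame's ego frame."""
            to_current = np.linalg.inv(poses[-1])
            compensated = []
            for pts, pose in zip(frames, poses):
                rel = to_current @ pose           # this frame -> current ego frame
                homog = np.hstack([pts, np.ones((len(pts), 1))])
                compensated.append((rel @ homog.T).T[:, :2])
            pts = np.vstack(compensated)          # ego-motion-compensated accumulation
            grid = np.zeros((grid_size, grid_size), dtype=np.float32)
            cells = np.floor(pts / cell_m).astype(int) + grid_size // 2  # orthographic projection
            ok = (cells >= 0).all(axis=1) & (cells < grid_size).all(axis=1)
            grid[cells[ok, 1], cells[ok, 0]] = 1.0  # occupied BEV cells
            return grid
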
  • Publication number: 20210150230
    Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: June 29, 2020
    Publication date: May 20, 2021
    Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
  • Publication number: 20180233142
    Abstract: An intelligent assistant records speech spoken by a first user and determines a self-selection score for the first user. The intelligent assistant sends the self-selection score to another intelligent assistant, and receives a remote-selection score for the first user from the other intelligent assistant. The intelligent assistant compares the self-selection score to the remote-selection score. If the self-selection score is greater than the remote-selection score, the intelligent assistant responds to the first user and blocks subsequent responses to all other users until a disengagement metric of the first user exceeds a blocking threshold. If the self-selection score is less than the remote-selection score, the intelligent assistant does not respond to the first user.
    Type: Application
    Filed: July 24, 2017
    Publication date: August 16, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Kazuhito Koishida, Alexander A. Popov, Uros Batricevic, Steven Nabil Bathiche
  • Patent number: 6068637
    Abstract: A method and devices are provided for performing end-to-side anastomoses between the severed end of a first hollow organ and the side wall of a second hollow organ utilizing a transluminal approach with endoscopic assistance, wherein the first and second hollow organs can be secured utilizing a biocompatible glue, clips, or sutures. In an alternative embodiment, the method utilizes a modified cutter catheter, which is introduced into the first hollow organ, in combination with a receiver catheter, which is introduced into the second hollow organ. The distal end of the receiver catheter includes a receiver cavity and a selectively activatable magnetic material.
    Type: Grant
    Filed: August 29, 1996
    Date of Patent: May 30, 2000
    Assignee: Cedars-Sinai Medical Center
    Inventors: Alexander Popov, Peter A. Barath
  • Patent number: 5702412
    Abstract: A method and devices are provided for performing end-to-side anastomoses between the severed end of a first hollow organ and the side wall of a second hollow organ utilizing a transluminal approach with endoscopic assistance. In particular, the method utilizes a catheter, having a selectively operable cutter, which is introduced into the first hollow organ until the distal end of the catheter is substantially adjacent to the severed end of the first hollow organ. The severed end of the first hollow organ is positioned in proximity to the site for anastomosis on the side wall of the second hollow organ, and the severed end is secured in sealing engagement with the side wall, thereby defining a region of securement on the side wall of the second hollow organ.
    Type: Grant
    Filed: October 3, 1995
    Date of Patent: December 30, 1997
    Assignee: Cedars-Sinai Medical Center
    Inventors: Alexander Popov, Peter Barath