Patents by Inventor Alexander Popov
Alexander Popov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11821560
Abstract: A fluid manifold apparatus, in particular a water distribution apparatus, comprises a fluid chamber, at least one fluid inlet, a plurality of fluid outlets, and a manifold unit including a plurality of valve units for closing the fluid outlets and a camshaft supported to be rotatable about a rotary axis and comprising a plurality of lobes for operating the valve units, wherein each of the valve units has a cam follower and a sealing unit, and the lobes of the camshaft are provided to lift the cam followers of the valve units as a function of a rotary position of the camshaft. Each of the cam followers has a receiver for receiving one sealing unit, wherein the sealing units are arranged to be linearly moveable in the receivers.
Type: Grant
Filed: November 29, 2021
Date of Patent: November 21, 2023
Assignee: Minebea Mitsumi Inc.
Inventors: Eric Häuser, Christian Schmid, Robert Rottweiler, Vladimir Popov, Alexander Zwetkow
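The core mechanism is that each lobe lifts its cam follower (and with it the sealing unit) only over a window of camshaft angles, so the shaft's rotary position selects which outlets open. A minimal sketch of that relationship, assuming an illustrative cosine lobe profile and made-up dimensions; the patent specifies neither:

```python
import math

def valve_lift(camshaft_angle_deg, lobe_center_deg, lobe_width_deg=60.0, max_lift_mm=3.0):
    """Illustrative lift of one cam follower as a function of camshaft angle.

    A cosine lobe profile is assumed purely for illustration; the patent
    does not specify the lobe geometry.
    """
    # Angular distance from the lobe center, wrapped to [-180, 180)
    offset = (camshaft_angle_deg - lobe_center_deg + 180.0) % 360.0 - 180.0
    if abs(offset) >= lobe_width_deg / 2.0:
        return 0.0  # follower rests on the base circle; the outlet stays closed
    # Smooth rise and fall across the lobe
    return max_lift_mm * 0.5 * (1.0 + math.cos(2.0 * math.pi * offset / lobe_width_deg))

# One lobe per outlet, phased around the shaft: rotating the camshaft
# selects which outlet's sealing unit is lifted open.
lobe_centers = [0.0, 90.0, 180.0, 270.0]
open_outlets = [i for i, c in enumerate(lobe_centers) if valve_lift(10.0, c) > 0.0]
```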
-
Patent number: 11811767
Abstract: Techniques for streamlined secure deployment of cloud services in cloud computing environments are disclosed herein. In one embodiment, a method can include, in response to receiving an instruction to deploy a cloud service in the cloud computing system, creating a deployment subscription to resources in the cloud computing system, the deployment subscription being owned by the deployment service, and instantiating one or more computing resources accessible by the deployment service in the cloud computing system in accordance with the created deployment subscription. The method also includes retrieving one or more components of an application corresponding to the cloud service based on a manifest, using the instantiated one or more computing resources, and installing the retrieved one or more components of the application in the cloud computing system in accordance with an installation sequence identified in the manifest.
Type: Grant
Filed: June 10, 2022
Date of Patent: November 7, 2023
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Vladimir Pogrebinsky, Sergei Popov, Alexander Wayne Eager
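The claimed flow is essentially a four-step sequence: create a subscription owned by the deployment service, instantiate resources under it, fetch the application components named in a manifest, and install them in the manifest's order. A minimal sketch of that sequence, assuming hypothetical Manifest and cloud-client interfaces; the patent names no concrete API:

```python
from dataclasses import dataclass, field

@dataclass
class Manifest:
    components: dict[str, str]                    # component name -> package location
    install_sequence: list[str] = field(default_factory=list)

def deploy_cloud_service(cloud, service_name: str, manifest: Manifest):
    # 1. Create a subscription owned by the deployment service itself.
    subscription = cloud.create_subscription(owner="deployment-service")

    # 2. Instantiate computing resources under that subscription.
    resources = cloud.instantiate_resources(subscription, service_name)

    # 3. Retrieve each component listed in the manifest.
    packages = {name: cloud.fetch(manifest.components[name])
                for name in manifest.install_sequence}

    # 4. Install components in the order the manifest prescribes.
    for name in manifest.install_sequence:
        resources.install(packages[name])
    return subscription
```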
-
Publication number: 20230348634
Abstract: This invention relates to a homogeneous process to produce propylene polymers using transition metal complexes of a dianionic, tridentate ligand that features a central neutral heterocyclic Lewis base and two phenolate donors, where the tridentate ligand coordinates to the metal center to form two eight-membered rings. Preferably the bis(phenolate) complexes are represented by Formula (I), where M, L, X, m, n, E, E′, Q, R1, R2, R3, R4, R1′, R2′, R3′, R4′, A1, A1′, A2, and A2′ are as defined herein, and where A1QA1′ are part of a heterocyclic Lewis base containing 4 to 40 non-hydrogen atoms that links A2 to A2′ via a 3-atom bridge with Q being the central atom of the 3-atom bridge.
Type: Application
Filed: August 11, 2020
Publication date: November 2, 2023
Inventors: Jo Ann M. Canich, Ru Xie, Gregory J. Smith-Karahalis, Sarah J. Mattler, Mikhail I. Sharikov, Alexander Z. Voskoboynikov, Vladislav A. Popov, Dmitry V. Uborsky, Georgy P. Goryunov, John R. Hagadorn, Peijun Jiang
-
Publication number: 20230281847
Abstract: In various examples, methods and systems are provided for estimating depth values for images (e.g., from a monocular sequence). Disclosed approaches may define a search space of potential pixel matches between two images using one or more depth hypothesis planes based at least on a camera pose associated with one or more cameras used to generate the images. A machine learning model(s) may use this search space to predict likelihoods of correspondence between one or more pixels in the images. The predicted likelihoods may be used to compute depth values for one or more of the images. The predicted depth values may be transmitted and used by a machine to perform one or more operations.
Type: Application
Filed: February 3, 2022
Publication date: September 7, 2023
Inventors: Yiran Zhong, Charles Loop, Nikolai Smolyanskiy, Ke Chen, Stan Birchfield, Alexander Popov
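The search-space construction described here resembles a classic plane sweep: each depth hypothesis plane, combined with the relative camera pose, induces a homography, and warping image coordinates through it yields the candidate pixel matches whose likelihoods a model can then score. A minimal sketch under that reading, with illustrative intrinsics, pose, and plane depths:

```python
import numpy as np

def plane_sweep_coords(K, R, t, depths, height, width):
    """For each depth plane, map every pixel in image 1 to its candidate
    match in image 2 via the planar homography H = K (R + t n^T / d) K^-1,
    using fronto-parallel planes (n = [0, 0, 1])."""
    K_inv = np.linalg.inv(K)
    ys, xs = np.mgrid[0:height, 0:width]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T  # 3 x N
    coords = []
    for d in depths:
        n = np.array([[0.0, 0.0, 1.0]])
        H = K @ (R + (t.reshape(3, 1) @ n) / d) @ K_inv
        warped = H @ pix
        warped = warped[:2] / warped[2:]            # perspective divide
        coords.append(warped.T.reshape(height, width, 2))
    return np.stack(coords)  # (num_planes, H, W, 2) candidate match locations

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.array([0.1, 0.0, 0.0])        # relative pose between frames
candidates = plane_sweep_coords(K, R, t, depths=[1.0, 2.0, 4.0, 8.0],
                                height=480, width=640)
```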
-
Patent number: 11732063
Abstract: The present disclosure relates to Lewis base catalysts. Catalysts, catalyst systems, and processes of the present disclosure can provide high temperature ethylene polymerization, propylene polymerization, or copolymerization. In at least one embodiment, the catalyst compounds belong to a family of compounds comprising amido-phenolate-heterocyclic ligands coordinated to group 4 transition metals. The tridentate ligand may include a central neutral heterocyclic donor group, an anionic phenolate donor, and an anionic amido donor. In some embodiments, the present disclosure provides a catalyst system comprising an activator and a catalyst of the present disclosure. In some embodiments, the present disclosure provides a polymerization process comprising a) contacting one or more olefin monomers with a catalyst system comprising: i) an activator and ii) a catalyst of the present disclosure.
Type: Grant
Filed: February 11, 2021
Date of Patent: August 22, 2023
Assignee: ExxonMobil Chemical Patents Inc.
Inventors: Georgy P. Goryunov, Mikhail I. Sharikov, Vladislav A. Popov, Dmitry V. Uborsky, Alexander Z. Voskoboynikov, John R. Hagadorn, Jo Ann M. Canich
-
Publication number: 20230245899
Abstract: The current disclosure relates to a method of depositing a metal halide-comprising material on a substrate by a cyclic deposition process. The method comprises providing a substrate in a reaction chamber, providing a metal precursor into the reaction chamber in a vapor phase and providing a halogen precursor into the reaction chamber in a vapor phase to form the metal halide-comprising material on the substrate. In the method, the metal precursor comprises a metal atom having an oxidation state of +1 bonded to an organic ligand. Also, a deposition assembly for depositing a metal halide-comprising material is disclosed.
Type: Application
Filed: January 31, 2023
Publication date: August 3, 2023
Inventors: Georgi Popov, Alexander Weiss, Mikko Ritala, Marianna Kemell
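The cyclic process alternates vapor-phase pulses of the two precursors, conventionally separated by purges, with film thickness controlled by the number of cycles. A minimal sketch of such a cycle loop, assuming a hypothetical chamber controller and illustrative timings; the abstract fixes no process parameters:

```python
import time

# One illustrative cycle: metal precursor pulse, purge, halogen precursor
# pulse, purge. Valve names and timings are assumptions for illustration.
CYCLE = [
    ("metal_precursor", 0.5),    # pulse metal precursor vapor into the chamber
    ("purge", 2.0),              # purge excess precursor and byproducts
    ("halogen_precursor", 0.5),  # pulse halogen precursor to form the halide
    ("purge", 2.0),
]

def run_deposition(chamber, cycles: int):
    # Film thickness scales with the number of completed cycles.
    for _ in range(cycles):
        for valve, seconds in CYCLE:
            chamber.open_valve(valve)
            time.sleep(seconds)
            chamber.close_valve(valve)
```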
-
Publication number: 20230049567
Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used for input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing less than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
Type: Application
Filed: October 28, 2022
Publication date: February 16, 2023
Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
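The key labeling step is a filter: a LIDAR-derived label survives as RADAR ground truth only if at least a threshold number of RADAR detections fall inside it. A minimal sketch of that propagation step, simplified to axis-aligned 2D boxes with an illustrative threshold:

```python
import numpy as np

def propagate_lidar_labels(radar_points, lidar_boxes, min_detections=3):
    """radar_points: (N, 2) ego-motion-compensated x/y detections;
    lidar_boxes: list of (x_min, y_min, x_max, y_max) label boxes from the
    matching time slice."""
    kept = []
    for (x0, y0, x1, y1) in lidar_boxes:
        inside = ((radar_points[:, 0] >= x0) & (radar_points[:, 0] <= x1) &
                  (radar_points[:, 1] >= y0) & (radar_points[:, 1] <= y1))
        if inside.sum() >= min_detections:   # omit labels with too few detections
            kept.append((x0, y0, x1, y1))
    return kept  # remaining labels become RADAR ground truth

radar = np.random.uniform(-50, 50, size=(200, 2))
labels = propagate_lidar_labels(radar, [(-5, -5, 5, 5), (30, 30, 31, 31)])
```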
-
Publication number: 20220415059
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: August 25, 2022
Publication date: December 29, 2022
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
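Architecturally, this is a chain of stages operating in different views: perspective-view class segmentation feeding a top-down stage with a shared trunk and separate class and instance-geometry heads. A minimal PyTorch sketch of that layout, with illustrative channel sizes and a stubbed perspective-to-top-down reprojection; the publication does not specify the architecture details:

```python
import torch
import torch.nn as nn

class PerspectiveStage(nn.Module):
    def __init__(self, in_ch=3, num_classes=4):
        super().__init__()
        self.seg = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_classes, 1))

    def forward(self, x):
        return self.seg(x)  # per-pixel class logits in the perspective view

class TopDownStage(nn.Module):
    def __init__(self, in_ch=4, num_classes=4):
        super().__init__()
        self.trunk = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.cls_head = nn.Conv2d(16, num_classes, 1)  # class segmentation
        self.geom_head = nn.Conv2d(16, 6, 1)           # instance geometry regression

    def forward(self, x):
        f = self.trunk(x)
        return self.cls_head(f), self.geom_head(f)

def reproject_to_top_down(perspective_logits):
    # Placeholder for the perspective -> top-down transform (which would use
    # range/depth information); a plain resize stands in for illustration.
    return nn.functional.interpolate(perspective_logits, size=(128, 128))

stage1, stage2 = PerspectiveStage(), TopDownStage()
logits = stage1(torch.randn(1, 3, 64, 256))
cls_map, geom_map = stage2(reproject_to_top_down(logits))
```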
-
Patent number: 11531088
Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used for input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing less than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
Type: Grant
Filed: March 31, 2020
Date of Patent: December 20, 2022
Assignee: NVIDIA CORPORATION
Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
-
Patent number: 11532168
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Grant
Filed: June 29, 2020
Date of Patent: December 20, 2022
Assignee: NVIDIA CORPORATION
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Publication number: 20220012470
Abstract: An intelligent assistant records speech spoken by a first user and determines a self-selection score for the first user. The intelligent assistant sends the self-selection score to another intelligent assistant, and receives a remote-selection score for the first user from the other intelligent assistant. The intelligent assistant compares the self-selection score to the remote-selection score. If the self-selection score is greater than the remote-selection score, the intelligent assistant responds to the first user and blocks subsequent responses to all other users until a disengagement metric of the first user exceeds a blocking threshold. If the self-selection score is less than the remote-selection score, the intelligent assistant does not respond to the first user.
Type: Application
Filed: September 27, 2021
Publication date: January 13, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Kazuhito KOISHIDA, Alexander A. POPOV, Uros BATRICEVIC, Steven Nabil BATHICHE
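The arbitration logic reduces to a comparison plus a latch: respond only when the local score beats the remote score, then hold the floor for that user until a disengagement metric crosses the blocking threshold. A minimal sketch of that rule with illustrative class and attribute names; the publication does not prescribe an implementation:

```python
class IntelligentAssistant:
    def __init__(self, blocking_threshold=0.8):
        self.blocking_threshold = blocking_threshold
        self.engaged_user = None

    def arbitrate(self, user, self_score, remote_score):
        # While an engaged user's disengagement metric stays at or below the
        # blocking threshold, responses to all other users are blocked.
        if self.engaged_user and user != self.engaged_user:
            if self.disengagement(self.engaged_user) <= self.blocking_threshold:
                return False          # block responses to all other users
            self.engaged_user = None  # engaged user has disengaged

        if self_score > remote_score: # this device heard the user best
            self.engaged_user = user
            return True               # respond locally
        return False                  # defer to the other assistant

    def disengagement(self, user):
        # Hypothetical metric (e.g. time since last utterance, gaze away);
        # the publication leaves its computation open.
        return 0.0
```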
-
Patent number: 11194998
Abstract: An intelligent assistant records speech spoken by a first user and determines a self-selection score for the first user. The intelligent assistant sends the self-selection score to another intelligent assistant, and receives a remote-selection score for the first user from the other intelligent assistant. The intelligent assistant compares the self-selection score to the remote-selection score. If the self-selection score is greater than the remote-selection score, the intelligent assistant responds to the first user and blocks subsequent responses to all other users until a disengagement metric of the first user exceeds a blocking threshold. If the self-selection score is less than the remote-selection score, the intelligent assistant does not respond to the first user.
Type: Grant
Filed: July 24, 2017
Date of Patent: December 7, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kazuhito Koishida, Alexander A Popov, Uros Batricevic, Steven Nabil Bathiche
-
Publication number: 20210342608
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: July 15, 2021
Publication date: November 4, 2021
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Publication number: 20210342609
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: July 15, 2021
Publication date: November 4, 2021
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Publication number: 20210156963
Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used for input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing less than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
Type: Application
Filed: March 31, 2020
Publication date: May 27, 2021
Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
-
Publication number: 20210156960
Abstract: In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space, in both highway and urban scenarios. RADAR detections may be accumulated, ego-motion-compensated, orthographically projected, and fed into a neural network(s). The neural network(s) may include a common trunk with a feature extractor and several heads that predict different outputs, such as a class confidence head that predicts a confidence map and an instance regression head that predicts object instance data for detected objects. The outputs may be decoded, filtered, and/or clustered to form bounding shapes identifying the location, size, and/or orientation of detected object instances. The detected object instances may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: March 31, 2020
Publication date: May 27, 2021
Inventors: Alexander Popov, Nikolai Smolyanskiy, Ryan Oldja, Shane Murray, Tilman Wekel, David Nister, Joachim Pehserl, Ruchi Bhargava, Sangmin Oh
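Unlike the companion training-data filing above, this one centers on the network's multi-head output and its decoding: threshold the class confidence map, apply per-cell instance regressions, and cluster the surviving cells into oriented bounding shapes. A minimal NumPy sketch of such a decode step, with an illustrative tensor layout and a greedy clustering rule standing in for whatever the filing actually uses:

```python
import numpy as np

def decode_detections(confidence, regression, threshold=0.5, merge_dist=2.0):
    """confidence: (H, W) class confidence map;
    regression: (H, W, 5) per-cell instance data = (dx, dy, w, l, yaw)."""
    ys, xs = np.where(confidence > threshold)        # filter low-confidence cells
    centers = np.stack([xs, ys], axis=1).astype(float)
    centers += regression[ys, xs, :2]                # apply predicted center offsets

    boxes, used = [], np.zeros(len(centers), dtype=bool)
    for i in range(len(centers)):                    # greedy clustering of centers
        if used[i]:
            continue
        close = np.linalg.norm(centers - centers[i], axis=1) < merge_dist
        used |= close
        cx, cy = centers[close].mean(axis=0)
        w, l, yaw = regression[ys[close], xs[close], 2:].mean(axis=0)
        boxes.append((cx, cy, w, l, yaw))            # one oriented box per cluster
    return boxes

conf = np.random.rand(64, 64)
reg = np.random.randn(64, 64, 5)
detections = decode_detections(conf, reg)
```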
-
Publication number: 20210150230
Abstract: A deep neural network(s) (DNN) may be used to detect objects from sensor data of a three dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: June 29, 2020
Publication date: May 20, 2021
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Publication number: 20180233142
Abstract: An intelligent assistant records speech spoken by a first user and determines a self-selection score for the first user. The intelligent assistant sends the self-selection score to another intelligent assistant, and receives a remote-selection score for the first user from the other intelligent assistant. The intelligent assistant compares the self-selection score to the remote-selection score. If the self-selection score is greater than the remote-selection score, the intelligent assistant responds to the first user and blocks subsequent responses to all other users until a disengagement metric of the first user exceeds a blocking threshold. If the self-selection score is less than the remote-selection score, the intelligent assistant does not respond to the first user.
Type: Application
Filed: July 24, 2017
Publication date: August 16, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Kazuhito KOISHIDA, Alexander A. POPOV, Uros BATRICEVIC, Steven Nabil BATHICHE
-
Patent number: 6068637
Abstract: A method and devices are provided for performing end-to-side anastomoses between the severed end of a first hollow organ and the side wall of a second hollow organ utilizing a transluminal approach with endoscopic assistance, wherein the first and second hollow organs can be secured utilizing a biocompatible glue, clips, or sutures. In an alternative embodiment, the method utilizes a modified cutter catheter which is introduced into the first hollow organ in combination with a receiver catheter which is introduced into the second hollow organ. The distal end of the receiver catheter includes a receiver cavity and a selectively activatable magnetic material.
Type: Grant
Filed: August 29, 1996
Date of Patent: May 30, 2000
Assignee: Cedars-Sinai Medical Center
Inventors: Alexander Popov, Peter A. Barath
-
Patent number: 5702412
Abstract: A method and devices are provided for performing end-to-side anastomoses between the severed end of a first hollow organ and the side wall of a second hollow organ utilizing a transluminal approach with endoscopic assistance. In particular, the method utilizes a catheter, having a selectively operable cutter, which is introduced into the first hollow organ until the distal end of the catheter is substantially adjacent to the severed end of the first hollow organ. The severed end of the first hollow organ is positioned in proximity to the site for anastomosis on the side wall of the second hollow organ, and the severed end is secured in sealing engagement with the side wall, thereby defining a region of securement on the side wall of the second hollow organ.
Type: Grant
Filed: October 3, 1995
Date of Patent: December 30, 1997
Assignee: Cedars-Sinai Medical Center
Inventors: Alexander Popov, Peter Barath