Patents by Inventor Juri Platonov
Juri Platonov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11709542
Abstract: A multi-user virtual reality system for providing a virtual reality experience to a plurality of users in a body of water includes a reference system adapted for emitting and/or receiving signals. The multi-user virtual reality system includes equipment configured to be mounted to a first user in the body of water, wherein the equipment includes a first display unit and a first signal emitting or receiving system adapted for emitting and/or receiving signals. The multi-user virtual reality system includes equipment configured to be mounted to a second user in the body of water, wherein the equipment includes a second display unit and a second signal emitting or receiving system adapted for emitting and/or receiving signals. The multi-user virtual reality system includes a data processing system including one or more data processing units.
Type: Grant
Filed: July 30, 2021
Date of Patent: July 25, 2023
Assignee: Shhuna GmbH
Inventors: Juri Platonov, Pawel Dürr
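The claimed apparatus is a composition of a shared reference system, per-user equipment, and a data processing system. A minimal sketch of that composition in Python; all class and field names here are illustrative, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class UserEquipment:
    """Equipment mounted to one user in the body of water."""
    display_unit: str   # head-mounted display
    signal_unit: str    # emits and/or receives localisation signals

@dataclass
class MultiUserVRSystem:
    """Shared reference system plus per-user equipment, served by one
    or more data processing units."""
    reference_system: str
    users: list
    processing_units: int = 1
```

For example, a two-user setup would carry two `UserEquipment` instances in `users`, each with its own display and signal unit.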
-
Patent number: 11378427
Abstract: A method of sensor fusion for systems including at least one sensor and at least one target object is provided. The method includes receiving configuration data at a processing device. The configuration data includes a description of a first system including one or more sensors and one or more target objects. The configuration data includes an indication that one or more geometric parameters and/or one or more sensor parameters of the first system are unknown. The method includes receiving an instruction at the processing device that the received configuration data is to be adjusted into adjusted configuration data. The adjusted configuration data includes a description of a second system including one or more sensors and one or more target objects, wherein the second system is different from the first system. The adjusted configuration data includes an indication that one or more geometric parameters and/or one or more sensor parameters of the second system are unknown.
Type: Grant
Filed: May 27, 2020
Date of Patent: July 5, 2022
Assignee: Shhuna GmbH
Inventors: Juri Platonov, Pawel Kaczmarczyk
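The abstract describes configuration data in which individual geometric or sensor parameters can be flagged as unknown, plus an adjustment step that derives a second system description from the first. A minimal sketch, assuming a simple dataclass representation; all names are hypothetical, not from the patent:

```python
from dataclasses import dataclass, replace

UNKNOWN = None  # marks a parameter the fusion back-end must estimate

@dataclass(frozen=True)
class Sensor:
    name: str
    pose: tuple = UNKNOWN          # geometric parameter (e.g. mounting pose)
    focal_length: float = UNKNOWN  # sensor parameter

@dataclass(frozen=True)
class SystemConfig:
    """Description of a system: one or more sensors, one or more targets."""
    sensors: tuple
    targets: tuple

def adjust(first: SystemConfig, **changes) -> SystemConfig:
    """Derive a second, different system description from the first."""
    return replace(first, **changes)
```

For instance, `adjust(first, targets=first.targets + ("trailer",))` yields a second configuration while the unknown-parameter flags carry over unchanged.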
-
Publication number: 20220043507
Abstract: A multi-user virtual reality system for providing a virtual reality experience to a plurality of users in a body of water includes a reference system adapted for emitting and/or receiving signals. The multi-user virtual reality system includes equipment configured to be mounted to a first user in the body of water, wherein the equipment includes a first display unit and a first signal emitting or receiving system adapted for emitting and/or receiving signals. The multi-user virtual reality system includes equipment configured to be mounted to a second user in the body of water, wherein the equipment includes a second display unit and a second signal emitting or receiving system adapted for emitting and/or receiving signals. The multi-user virtual reality system includes a data processing system including one or more data processing units.
Type: Application
Filed: July 30, 2021
Publication date: February 10, 2022
Inventors: Juri Platonov, Pawel Dürr
-
Publication number: 20200378808
Abstract: A method of sensor fusion for systems including at least one sensor and at least one target object is provided. The method includes receiving configuration data at a processing device. The configuration data includes a description of a first system including one or more sensors and one or more target objects. The configuration data includes an indication that one or more geometric parameters and/or one or more sensor parameters of the first system are unknown. The method includes receiving an instruction at the processing device that the received configuration data is to be adjusted into adjusted configuration data. The adjusted configuration data includes a description of a second system including one or more sensors and one or more target objects, wherein the second system is different from the first system. The adjusted configuration data includes an indication that one or more geometric parameters and/or one or more sensor parameters of the second system are unknown.
Type: Application
Filed: May 27, 2020
Publication date: December 3, 2020
Inventors: Juri Platonov, Pawel Kaczmarczyk
-
Patent number: 9262828
Abstract: A method for determining the orientation of a video camera attached to a vehicle relative to the vehicle coordinate system is disclosed. Using a video camera, an incremental motion is measured based on the optical flow of the video stream. A linear motion component and a rotational motion component are determined. Based on a determined rotational angle, the incremental motion is classified as linear or as rotational. The directional vector of the linear motion is used to estimate the longitudinal axis of the vehicle, and the rotational vector of the rotational motion is used for estimating the normal to the vehicle plane. Based thereon, the orientation of the camera follows.
Type: Grant
Filed: January 17, 2013
Date of Patent: February 16, 2016
Assignee: ESG Elektroniksystem- und Logistik-GmbH
Inventors: Christopher Pflug, Juri Platonov, Thomas Gebauer, Pawel Kaczmarczyk
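The described pipeline can be sketched as follows: each incremental motion is classified by its rotational angle, linear directions vote for the vehicle's longitudinal axis, and rotation axes vote for the vehicle-plane normal. This is an illustrative sketch, not the patented implementation; the threshold and the simple averaging are assumptions:

```python
import numpy as np

def classify_motion(rot_angle, angle_thresh=0.01):
    """Classify an incremental motion by its rotational angle (radians)."""
    return "rotational" if abs(rot_angle) > angle_thresh else "linear"

def estimate_camera_axes(motions, angle_thresh=0.01):
    """motions: iterable of (direction_vector, rotation_axis, rotation_angle)
    measured from optical flow, in camera coordinates.  Linear motions vote
    for the vehicle's longitudinal axis; rotational motions (turns) vote
    for the normal of the vehicle plane."""
    linear, rotational = [], []
    for direction, axis, angle in motions:
        if classify_motion(angle, angle_thresh) == "linear":
            linear.append(direction / np.linalg.norm(direction))
        else:
            rotational.append(axis / np.linalg.norm(axis))
    longitudinal = np.mean(linear, axis=0)
    normal = np.mean(rotational, axis=0)
    return (longitudinal / np.linalg.norm(longitudinal),
            normal / np.linalg.norm(normal))
```

With both axes known in camera coordinates, the camera-to-vehicle rotation follows, which is the point of the claimed method.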
-
Patent number: 9129164
Abstract: A driver assist system is provided that generates a video signal representing a vehicle environment outside a vehicle. At least one feature is extracted from the video signal. A reference is selected from a plurality of reference features stored as location attributes in a map database. The extracted feature is compared to at least one reference feature. An object in the vehicle environment is identified based on the comparison of the extracted feature and the reference feature. An indication is provided to a driver of the vehicle on the basis of the identified object. In one example, the system includes a video capturing device, an indicating device, a vehicle-based processing resource and access to a map database server. Processing tasks may be distributed among the vehicle-based processing resource and an external processing resource.
Type: Grant
Filed: February 3, 2010
Date of Patent: September 8, 2015
Assignee: Harman Becker Automotive Systems GmbH
Inventors: Juri Platonov, Alexey Pryakhin, Peter Kunath
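One cycle of the described assist loop (extract a feature, select location-referenced candidates from the map database, compare, indicate the identified object) might look like the following sketch. The `references_near` query, the similarity threshold, and all callback names are assumptions for illustration, not the patented interfaces:

```python
def assist_step(frame, vehicle_pos, map_db, extract, compare, indicate,
                thresh=0.8):
    """One assist cycle: extract a feature from the video frame, fetch
    reference features stored as location attributes near the vehicle,
    compare, and indicate the identified object to the driver."""
    feature = extract(frame)
    candidates = map_db.references_near(vehicle_pos)  # hypothetical query
    best = max(candidates, key=lambda r: compare(feature, r.descriptor),
               default=None)
    if best is not None and compare(feature, best.descriptor) >= thresh:
        indicate(best.object_name)
        return best.object_name
    return None
```

Because `extract`, `compare`, and `map_db` are passed in, the same loop runs whether those tasks execute on the vehicle-based processing resource or are delegated to an external one, matching the distribution the abstract mentions.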
-
Publication number: 20150036885
Abstract: A method for determining the orientation of a video camera attached to a vehicle relative to the vehicle coordinate system, comprising: measuring an incremental vehicle motion in the coordinate system of the video camera based on the optical flow of the video stream of the video camera; representing the incremental vehicle motion by a motion consisting of a linear motion component and a rotational motion, wherein the linear motion component is represented by a directional vector and the rotational motion is represented by a rotation axis and a rotational angle; classifying the measured incremental vehicle motion as a linear motion or as a rotational motion based on the determined rotational angle; measuring at least one incremental motion which has been classified as a linear motion; measuring at least one incremental motion which has been classified as a rotational motion; using the directional vector of the at least one measured linear motion for estimating the longitudinal axis of the vehicle; and using the rotation axis of the at least one measured rotational motion for estimating the normal to the vehicle plane.
Type: Application
Filed: January 17, 2013
Publication date: February 5, 2015
Inventors: Christopher Pflug, Juri Platonov, Thomas Gebauer, Pawel Kaczmarczyk
-
Patent number: 8929604
Abstract: A vision system comprises a camera that captures an image and a processor coupled to process the received image to determine at least one feature descriptor for the image. The processor includes an interface to access annotated map data that includes geo-referenced feature descriptors. The processor is configured to perform a matching procedure between the at least one feature descriptor determined for the at least one image and the retrieved geo-referenced feature descriptors.
Type: Grant
Filed: November 9, 2011
Date of Patent: January 6, 2015
Assignee: Harman Becker Automotive Systems GmbH
Inventors: Juri Platonov, Alexey Pryakhin, Peter Kunath
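The matching procedure between image descriptors and retrieved geo-referenced descriptors could, in the simplest case, be a brute-force nearest-neighbour search in descriptor space. A sketch under that assumption; the Euclidean metric and the distance threshold are illustrative choices, not taken from the patent:

```python
import numpy as np

def match_descriptors(query, geo_referenced, max_dist=0.5):
    """Match each image feature descriptor (rows of `query`) against
    geo-referenced descriptors retrieved from annotated map data
    (rows of `geo_referenced`).  Returns (query_idx, reference_idx)
    pairs for the nearest reference within `max_dist`."""
    matches = []
    for qi, q in enumerate(query):
        dists = np.linalg.norm(geo_referenced - q, axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            matches.append((qi, j))
    return matches
```

A production system would replace the linear scan with an approximate nearest-neighbour index, but the matching contract stays the same.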
-
Publication number: 20120114178
Abstract: A vision system comprises a camera that captures an image and a processor coupled to process the received image to determine at least one feature descriptor for the image. The processor includes an interface to access annotated map data that includes geo-referenced feature descriptors. The processor is configured to perform a matching procedure between the at least one feature descriptor determined for the at least one image and the retrieved geo-referenced feature descriptors.
Type: Application
Filed: November 9, 2011
Publication date: May 10, 2012
Inventors: Juri Platonov, Alexey Pryakhin, Peter Kunath
-
Publication number: 20110090071
Abstract: A driver assist system is provided that generates a video signal representing a vehicle environment outside a vehicle. At least one feature is extracted from the video signal. A reference is selected from a plurality of reference features stored as location attributes in a map database. The extracted feature is compared to at least one reference feature. An object in the vehicle environment is identified based on the comparison of the extracted feature and the reference feature. An indication is provided to a driver of the vehicle on the basis of the identified object. In one example, the system includes a video capturing device, an indicating device, a vehicle-based processing resource and access to a map database server. Processing tasks may be distributed among the vehicle-based processing resource and an external processing resource.
Type: Application
Filed: February 3, 2010
Publication date: April 21, 2011
Applicant: Harman Becker Automotive Systems GmbH
Inventors: Juri Platonov, Alexey Pryakhin, Peter Kunath
-
Patent number: 7889193
Abstract: A data model which is designed for being superposed with an image of a real object in an optical object tracking process is determined by the following steps: providing a three-dimensional CAD model (10) for representing the real object, and thereafter generating different synthetic two-dimensional views (31 to 34) of said CAD model (10). Each generated view (31 to 34) is subjected to edge extraction for determining at least one extracted edge (38, 39) in the respective view, with the edges (38, 39) extracted from said respective views (31 to 34) being transformed to a three-dimensional contour model (85, 91) corresponding to said data model to be determined. This permits rapid and efficient generation of a contour model as a data model intended for being superposed with an image of a real object.
Type: Grant
Filed: January 30, 2007
Date of Patent: February 15, 2011
Assignee: Metaio GmbH
Inventors: Juri Platonov, Marion Langer
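The generation pipeline (render synthetic 2-D views of the CAD model, extract edges in each view, lift the edges back into a 3-D contour model) can be outlined as follows. `render`, `extract_edges`, and `back_project` are placeholder callables standing in for the rendering, edge-extraction, and transformation steps, not the patented algorithms themselves:

```python
def build_contour_model(cad_model, viewpoints, render, extract_edges,
                        back_project):
    """Generate synthetic 2-D views of the CAD model from several
    viewpoints, extract edges in each view, and transform the 2-D
    edges into segments of a 3-D contour model."""
    contour = []
    for vp in viewpoints:
        view = render(cad_model, vp)                 # synthetic 2-D view
        for edge in extract_edges(view):             # 2-D edge segments
            contour.append(back_project(edge, vp))   # lift to 3-D
    return contour
```

The resulting contour model is what later gets superposed with camera images of the real object during tracking.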
-
Patent number: 7592997
Abstract: The invention relates to a system for determining the position of a user and/or a moving device by means of tracking methods, in particular for augmented reality applications, with an interface (9) to integrate at least one sensor type and/or data generator (1, 2, 3, 4) of a tracking method, a configuration unit (20) to describe communication between the tracking methods and/or tracking algorithms, and at least one processing unit (5, 6, 7, 8, 10, 11, 12, 13, 16) to calculate the position of the user and/or the moving device based on the data supplied by the tracking methods and/or tracking algorithms.
Type: Grant
Filed: June 1, 2005
Date of Patent: September 22, 2009
Assignee: Siemens Aktiengesellschaft
Inventors: Jan-Friso Evers-Senne, Jan-Michael Frahm, Mehdi Hamadou, Dirk Jahn, Peter Georg Meier, Juri Platonov, Didier Stricker, Jens Weidenhausen
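The architecture (an interface to integrate sensor types/data generators, a configuration unit describing how the tracking methods interact, and processing units that compute the position) can be illustrated with a toy one-dimensional fusion sketch. The weighted-average fusion and all names are assumptions for illustration, not the claimed method:

```python
class TrackingSystem:
    """Toy sketch: `register` is the interface for sensor types / data
    generators; `config` plays the role of the configuration unit
    (method name -> fusion weight); `position` is the processing unit
    that fuses the supplied data into a 1-D position estimate."""

    def __init__(self, config):
        self.config = config   # e.g. {"marker": 1.0, "inertial": 1.0}
        self.sources = {}

    def register(self, name, source):
        """Integrate a sensor/data generator as a zero-argument callable."""
        self.sources[name] = source

    def position(self):
        """Weighted average of the positions supplied by all methods."""
        total_w = sum(self.config.values())
        acc = sum(w * self.sources[name]() for name, w in self.config.items())
        return acc / total_w
```

Swapping the configuration dictionary changes how the same registered tracking methods are combined, which mirrors the role of the configuration unit in the abstract.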
-
Publication number: 20070182739
Abstract: A data model which is designed for being superposed with an image of a real object in an optical object tracking process is determined by the following steps: providing a three-dimensional CAD model (10) for representing the real object, and thereafter generating different synthetic two-dimensional views (31 to 34) of said CAD model (10). Each generated view (31 to 34) is subjected to edge extraction for determining at least one extracted edge (38, 39) in the respective view, with the edges (38, 39) extracted from said respective views (31 to 34) being transformed to a three-dimensional contour model (85, 91) corresponding to said data model to be determined. This permits rapid and efficient generation of a contour model as a data model intended for being superposed with an image of a real object.
Type: Application
Filed: January 30, 2007
Publication date: August 9, 2007
Inventors: Juri Platonov, Marion Langer
-
Publication number: 20050275722
Abstract: The invention relates to a system for determining the position of a user and/or a moving device by means of tracking methods, in particular for augmented reality applications, with an interface (9) to integrate at least one sensor type and/or data generator (1, 2, 3, 4) of a tracking method, a configuration unit (20) to describe communication between the tracking methods and/or tracking algorithms, and at least one processing unit (5, 6, 7, 8, 10, 11, 12, 13, 16) to calculate the position of the user and/or the moving device based on the data supplied by the tracking methods and/or tracking algorithms.
Type: Application
Filed: June 1, 2005
Publication date: December 15, 2005
Inventors: Jan-Friso Evers-Senne, Jan-Michael Frahm, Mehdi Hamadou, Dirk Jahn, Peter Meier, Juri Platonov, Didier Stricker, Jens Weidenhausen