Patents by Inventor Oleg Stavitskii
Oleg Stavitskii has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12248289
Abstract: Disclosed are systems and techniques for creating a personalized sound environment for a user. Output is received from a plurality of sensors, wherein the sensor output detects a state of a user and an environment in which the user is active. Two or more sound sections for presentation to the user are selected from a plurality of sound sections, the selecting based on the sensor output and automatically determined sound preferences of the user. A first sound phase is generated, wherein the first sound phase includes the two or more selected sound sections. A personalized sound environment for presentation to the user is generated, wherein the personalized sound environment includes at least the first sound phase and a second sound phase. The personalized sound environment is presented to the user on a user device.
Type: Grant
Filed: July 26, 2022
Date of Patent: March 11, 2025
Assignee: Endel Sound GmbH
Inventors: Oleg Stavitskii, Kyrylo Bulatsev, Philipp Petrenko, Dmitry Bezugly, Evgeny Gurzhiy, Dmitry Evgrafov
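The abstract above describes selecting sound sections from sensor output and learned preferences, then arranging them into phases. The following is a minimal illustrative sketch, not the patented implementation: the tag-overlap scoring, the two-phase layout, and all names (`SoundSection`, `select_sections`, `build_environment`) are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class SoundSection:
    name: str
    tags: frozenset  # descriptive tags, e.g. {"calm", "rain"}

def select_sections(sections, sensor_tags, preferred_tags, k=2):
    """Score each candidate section by overlap with the sensed state and
    the user's learned preferences, then keep the top-k sections."""
    def score(s):
        return 2 * len(s.tags & preferred_tags) + len(s.tags & sensor_tags)
    return sorted(sections, key=score, reverse=True)[:k]

def build_environment(sections, sensor_tags, preferred_tags):
    """Here a personalized environment is simply an ordered list of phases,
    each phase holding the sections selected for it."""
    first_phase = select_sections(sections, sensor_tags, preferred_tags)
    remaining = [s for s in sections if s not in first_phase]
    second_phase = select_sections(remaining, sensor_tags, preferred_tags)
    return [first_phase, second_phase]

catalog = [
    SoundSection("rain", frozenset({"calm", "rain"})),
    SoundSection("birds", frozenset({"calm", "nature"})),
    SoundSection("drums", frozenset({"energetic"})),
    SoundSection("waves", frozenset({"calm", "water"})),
]
env = build_environment(catalog, frozenset({"calm"}), frozenset({"calm"}))
```

With a "calm" sensor reading and preference, the first phase collects the two best-matching calm sections and the second phase is filled from what remains, mirroring the abstract's first-phase/second-phase structure.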
-
Publication number: 20230367281
Abstract: Disclosed are systems and techniques for creating a personalized sound environment for a user. A process can include obtaining text data comprising a plurality of words. A plurality of text frames are generated based on the text data, each respective text frame including a subset of the plurality of words. A machine learning network can be used to analyze each respective text frame to generate one or more features corresponding to the respective text frame and the subset of the plurality of words. Two or more sound sections can be determined for presentation to a user, each sound section corresponding to a particular text frame of the plurality of text frames and generated based at least in part on the one or more features of the particular text frame. A personalized sound environment is generated to include at least the two or more sound sections and is presented to the user on a user computing device.
Type: Application
Filed: July 24, 2023
Publication date: November 16, 2023
Applicant: Endel Sound GmbH
Inventors: Oleg Stavitskii, Vladimir Terekhov, Dmitry Evgrafov, Kyrylo Bulatsev, Philipp Petrenko, Dmitry Bezugly, Evgeny Gurzhiy, Igor Skovorodkin
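This abstract describes a text-to-sound pipeline: split the text into word frames, extract features per frame with a machine learning network, and map each frame to a sound section. A minimal sketch of that shape follows; the keyword table standing in for the ML network, the fixed frame size, and the names (`make_frames`, `frame_features`, `soundtrack_for`) are all assumptions for illustration only.

```python
def make_frames(words, frame_size=4):
    """Split the word list into fixed-size frames (the last may be shorter)."""
    return [words[i:i + frame_size] for i in range(0, len(words), frame_size)]

# Stand-in for the machine learning network: a keyword lookup that maps a
# frame to a coarse mood feature. A real system would use a trained model.
MOOD_WORDS = {"storm": "tense", "calm": "calm", "sun": "bright"}

def frame_features(frame):
    """Produce one feature dict per frame from its words."""
    moods = [MOOD_WORDS[w] for w in frame if w in MOOD_WORDS]
    return {"length": len(frame), "mood": moods[0] if moods else "neutral"}

def soundtrack_for(text):
    """One sound section per text frame, driven by that frame's features."""
    frames = make_frames(text.lower().split())
    return [f"section:{frame_features(f)['mood']}" for f in frames]

sections = soundtrack_for("The calm sea then a storm broke out")
```

The point of the sketch is the correspondence the abstract states: each generated sound section maps back to exactly one text frame via that frame's features.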
-
Publication number: 20220365504
Abstract: Disclosed are systems and techniques for creating a personalized sound environment for a user. Output is received from a plurality of sensors, wherein the sensor output detects a state of a user and an environment in which the user is active. Two or more sound sections for presentation to the user are selected from a plurality of sound sections, the selecting based on the sensor output and automatically determined sound preferences of the user. A first sound phase is generated, wherein the first sound phase includes the two or more selected sound sections. A personalized sound environment for presentation to the user is generated, wherein the personalized sound environment includes at least the first sound phase and a second sound phase. The personalized sound environment is presented to the user on a user device.
Type: Application
Filed: July 26, 2022
Publication date: November 17, 2022
Applicant: Endel Sound GmbH
Inventors: Oleg Stavitskii, Kyrylo Bulatsev, Philipp Petrenko, Dmitry Bezugly, Evgeny Gurzhiy, Dmitry Evgrafov
-
Publication number: 20220310049
Abstract: Disclosed are systems and techniques for creating a personalized sound environment for a user. Output is received from a plurality of sensors, wherein the sensor output detects a state of a user and an environment in which the user is active. Two or more sound sections for presentation to the user are selected from a plurality of sound sections, the selecting based on the sensor output and automatically determined sound preferences of the user. A first sound phase is generated, wherein the first sound phase includes the two or more selected sound sections. A personalized sound environment for presentation to the user is generated, wherein the personalized sound environment includes at least the first sound phase and a second sound phase. The personalized sound environment is presented to the user on a user device.
Type: Application
Filed: June 15, 2022
Publication date: September 29, 2022
Applicant: Endel Sound GmbH
Inventors: Oleg Stavitskii, Kyrylo Bulatsev, Philipp Petrenko, Dmitry Bezugly, Evgeny Gurzhiy, Dmitry Evgrafov
-
Publication number: 20220229408
Abstract: A system and method of creating a personalized sounds and visuals environment to address a person's individual environment and state by receiving output from a plurality of sensors, the sensors detecting the activity of the user and the environment in which the user is active. Sounds and/or visuals to be transmitted to the user for listening and watching on the user's device are determined based on one or more of the sensor outputs, a user profile, a user mode, a user state, and a user context. The determined sounds and/or visuals are transmitted and presented to the user, and the determined sounds and/or visuals are automatically and dynamically modified in real time based on changes in the output from one or more of the plurality of sensors and/or changes in the user's profile.
Type: Application
Filed: February 4, 2022
Publication date: July 21, 2022
Applicant: Endel Sound GmbH
Inventors: Oleg Stavitskii, Kyrylo Bulatsev, Philipp Petrenko, Dmitry Bezugly, Evgeny Gurzhiy, Dmitry Evgrafov
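This family of filings (and the granted patents below that share this abstract) adds dynamic real-time modification: the sound/visual selection is re-evaluated as sensor output or the profile changes. A minimal sketch of that update loop follows; the specific selection rules and all names (`pick_output`, `stream_environment`) are illustrative assumptions, not the claimed method.

```python
def pick_output(sensor_state, profile):
    """Choose a sound/visual pairing from the sensed state and user profile."""
    if sensor_state.get("motion") == "running":
        return {"sound": "uptempo", "visual": "pulse"}
    if profile.get("prefers") == "nature":
        return {"sound": "forest", "visual": "leaves"}
    return {"sound": "ambient", "visual": "gradient"}

def stream_environment(sensor_readings, profile):
    """Re-evaluate the selection on every new sensor reading, emitting a
    change only when the chosen sounds/visuals actually differ."""
    current = None
    for reading in sensor_readings:
        nxt = pick_output(reading, profile)
        if nxt != current:
            current = nxt
            yield current

readings = [{"motion": "still"}, {"motion": "still"}, {"motion": "running"}]
changes = list(stream_environment(readings, {"prefers": "nature"}))
```

Yielding only on change models the abstract's "automatically and dynamically modified in real time": the presented environment stays stable until a sensor reading actually alters the selection.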
-
Patent number: 11275350
Abstract: A system and method of creating a personalized sounds and visuals environment to address a person's individual environment and state by receiving output from a plurality of sensors, the sensors detecting the activity of the user and the environment in which the user is active. Sounds and/or visuals to be transmitted to the user for listening and watching on the user's device are determined based on one or more of the sensor outputs, a user profile, a user mode, a user state, and a user context. The determined sounds and/or visuals are transmitted and presented to the user, and the determined sounds and/or visuals are automatically and dynamically modified in real time based on changes in the output from one or more of the plurality of sensors and/or changes in the user's profile.
Type: Grant
Filed: November 5, 2019
Date of Patent: March 15, 2022
Assignee: Endel Sound GmbH
Inventors: Oleg Stavitskii, Kyrylo Bulatsev, Philipp Petrenko, Dmitry Bezugly, Evgeny Gurzhiy, Dmitry Evgrafov
-
Patent number: 10948890
Abstract: A system and method of creating a personalized sounds and visuals environment to address a person's individual environment and state by receiving output from a plurality of sensors, the sensors detecting the activity of the user and the environment in which the user is active. Sounds and/or visuals to be transmitted to the user for listening and watching on the user's device are determined based on one or more of the sensor outputs, a user profile, a user mode, a user state, and a user context. The determined sounds and/or visuals are transmitted and presented to the user, and the determined sounds and/or visuals are automatically and dynamically modified in real time based on changes in the output from one or more of the plurality of sensors and/or changes in the user's profile.
Type: Grant
Filed: September 18, 2020
Date of Patent: March 16, 2021
Assignee: Endel Sound GmbH
Inventors: Oleg Stavitskii, Kyrylo Bulatsev, Philipp Petrenko, Dmitry Bezugly, Evgeny Gurzhiy, Dmitry Evgrafov
-
Publication number: 20210003980
Abstract: A system and method of creating a personalized sounds and visuals environment to address a person's individual environment and state by receiving output from a plurality of sensors, the sensors detecting the activity of the user and the environment in which the user is active. Sounds and/or visuals to be transmitted to the user for listening and watching on the user's device are determined based on one or more of the sensor outputs, a user profile, a user mode, a user state, and a user context. The determined sounds and/or visuals are transmitted and presented to the user, and the determined sounds and/or visuals are automatically and dynamically modified in real time based on changes in the output from one or more of the plurality of sensors and/or changes in the user's profile.
Type: Application
Filed: September 18, 2020
Publication date: January 7, 2021
Applicant: Endel Sound GmbH
Inventors: Oleg Stavitskii, Kyrylo Bulatsev, Philipp Petrenko, Dmitry Bezugly, Evgeny Gurzhiy, Dmitry Evgrafov
-
Publication number: 20200142371
Abstract: A system and method of creating a personalized sounds and visuals environment to address a person's individual environment and state by receiving output from a plurality of sensors, the sensors detecting the activity of the user and the environment in which the user is active. Sounds and/or visuals to be transmitted to the user for listening and watching on the user's device are determined based on one or more of the sensor outputs, a user profile, a user mode, a user state, and a user context. The determined sounds and/or visuals are transmitted and presented to the user, and the determined sounds and/or visuals are automatically and dynamically modified in real time based on changes in the output from one or more of the plurality of sensors and/or changes in the user's profile.
Type: Application
Filed: November 5, 2019
Publication date: May 7, 2020
Applicant: Endel Sound GmbH
Inventors: Oleg Stavitskii, Kyrylo Bulatsev, Philipp Petrenko, Dmitry Bezugly, Evgeny Gurzhiy, Dmitry Evgrafov