Patents by Inventor Dmitry Evgrafov

Dmitry Evgrafov has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230367281
    Abstract: Disclosed are systems and techniques for creating a personalized sound environment for a user. A process can include obtaining text data comprising a plurality of words. A plurality of text frames are generated based on the text data, each respective text frame including a subset of the plurality of words. A machine learning network can be used to analyze each respective text frame to generate one or more features corresponding to the respective text frame and the subset of the plurality of words. Two or more sound sections can be determined for presentation to a user, each sound section corresponding to a particular text frame of the plurality of text frames and generated based at least in part on the one or more features of the particular text frame. A personalized sound environment is generated to include at least the two or more sound sections and is presented to the user on a user computing device.
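    (A minimal, hypothetical sketch of this text-to-sound pipeline appears after this listing.)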
    Type: Application
    Filed: July 24, 2023
    Publication date: November 16, 2023
    Applicant: Endel Sound GmbH
    Inventors: Oleg Stavitskii, Vladimir Terekhov, Dmitry Evgrafov, Kyrylo Bulatsev, Philipp Petrenko, Dmitry Bezugly, Evgeny Gurzhiy, Igor Skovorodkin
  • Publication number: 20220365504
    Abstract: Disclosed are systems and techniques for creating a personalized sound environment for a user. Output is received from a plurality of sensors, wherein the sensor output detects a state of a user and an environment in which the user is active. Two or more sound sections for presentation to the user are selected from a plurality of sound sections, the selecting based on the sensor output and automatically determined sound preferences of the user. A first sound phase is generated, wherein the first sound phase includes the two or more selected sound sections. A personalized sound environment for presentation to the user is generated, wherein the personalized sound environment includes at least the first sound phase and a second sound phase. The personalized sound environment is presented to the user on a user device.
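    (The second sketch after this listing illustrates this sensor-driven selection flow; publication 20220310049 below shares the same abstract.)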
    Type: Application
    Filed: July 26, 2022
    Publication date: November 17, 2022
    Applicant: Endel Sound GmbH
    Inventors: Oleg Stavitskii, Kyrylo Bulatsev, Philipp Petrenko, Dmitry Bezugly, Evgeny Gurzhiy, Dmitry Evgrafov
  • Publication number: 20220310049
    Abstract: Disclosed are systems and techniques for creating a personalized sound environment for a user. Output is received from a plurality of sensors, wherein the sensor output detects a state of a user and an environment in which the user is active. Two or more sound sections for presentation to the user are selected from a plurality of sound sections, the selecting based on the sensor output and automatically determined sound preferences of the user. A first sound phase is generated, wherein the first sound phase includes the two or more selected sound sections. A personalized sound environment for presentation to the user is generated, wherein the personalized sound environment includes at least the first sound phase and a second sound phase. The personalized sound environment is presented to the user on a user device.
    Type: Application
    Filed: June 15, 2022
    Publication date: September 29, 2022
    Applicant: Endel Sound GmbH
    Inventors: Oleg Stavitskii, Kyrylo Bulatsev, Philipp Petrenko, Dmitry Bezugly, Evgeny Gurzhiy, Dmitry Evgrafov
  • Publication number: 20220229408
    Abstract: A system and method of creating a personalized sounds and visuals environment to address a person's individual environment and state by receiving output from a plurality of sensors, the sensors detecting the activity of the user and the environment in which the user is active. Sounds and/or visuals to be transmitted to the user for listening and watching on the user's device are determined based on one or more of the sensor outputs, a user profile, a user mode, a user state, and a user context. The determined sounds and/or visuals are transmitted and presented to the user, and the determined sounds and/or visuals are automatically and dynamically modified in real time based on changes in the output from one or more of the plurality of sensors and/or changes in the user's profile.
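    (The third sketch after this listing illustrates this adaptive feedback loop, which this application shares with the granted patents and earlier publications below.)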
    Type: Application
    Filed: February 4, 2022
    Publication date: July 21, 2022
    Applicant: Endel Sound GmbH
    Inventors: Oleg Stavitskii, Kyrylo Bulatsev, Philipp Petrenko, Dmitry Bezugly, Evgeny Gurzhiy, Dmitry Evgrafov
  • Patent number: 11275350
    Abstract: A system and method of creating a personalized sounds and visuals environment to address a person's individual environment and state by receiving output from a plurality of sensors, the sensors detecting the activity of the user and the environment in which the user is active. Sounds and/or visuals to be transmitted to the user for listening and watching on the user's device are determined based on one or more of the sensor outputs, a user profile, a user mode, a user state, and a user context. The determined sounds and/or visuals are transmitted and presented to the user, and the determined sounds and/or visuals are automatically and dynamically modified in real time based on changes in the output from one or more of the plurality of sensors and/or changes in the user's profile.
    Type: Grant
    Filed: November 5, 2019
    Date of Patent: March 15, 2022
    Assignee: Endel Sound GmbH
    Inventors: Oleg Stavitskii, Kyrylo Bulatsev, Philipp Petrenko, Dmitry Bezugly, Evgeny Gurzhiy, Dmitry Evgrafov
  • Patent number: 10948890
    Abstract: A system and method of creating a personalized sounds and visuals environment to address a person's individual environment and state by receiving output from a plurality of sensors, the sensors detecting the activity of the user and the environment in which the user is active. Sounds and/or visuals to be transmitted to the user for listening and watching on the user's device are determined based on one or more of the sensor outputs, a user profile, a user mode, a user state, and a user context. The determined sounds and/or visuals are transmitted and presented to the user, and the determined sounds and/or visuals are automatically and dynamically modified in real time based on changes in the output from one or more of the plurality of sensors and/or changes in the user's profile.
    Type: Grant
    Filed: September 18, 2020
    Date of Patent: March 16, 2021
    Assignee: Endel Sound GmbH
    Inventors: Oleg Stavitskii, Kyrylo Bulatsev, Philipp Petrenko, Dmitry Bezugly, Evgeny Gurzhiy, Dmitry Evgrafov
  • Publication number: 20210003980
    Abstract: A system and method of creating a personalized sounds and visuals environment to address a person's individual environment and state by receiving output from a plurality of sensors, the sensors detecting the activity of the user and the environment in which the user is active. Sounds and/or visuals to be transmitted to the user for listening and watching on the user's device are determined based on one or more of the sensor outputs, a user profile, a user mode, a user state, and a user context. The determined sounds and/or visuals are transmitted and presented to the user, and the determined sounds and/or visuals are automatically and dynamically modified in real time based on changes in the output from one or more of the plurality of sensors and/or changes in the user's profile.
    Type: Application
    Filed: September 18, 2020
    Publication date: January 7, 2021
    Applicant: Endel Sound GmbH
    Inventors: Oleg Stavitskii, Kyrylo Bulatsev, Philipp Petrenko, Dmitry Bezugly, Evgeny Gurzhiy, Dmitry Evgrafov
  • Publication number: 20200142371
    Abstract: A system and method of creating a personalized sounds and visuals environment to address a person's individual environment and state by receiving output from a plurality of sensors, the sensors detecting the activity of the user and the environment in which the user is active. Sounds and/or visuals to be transmitted to the user for listening and watching on the user's device are determined based on one or more of the sensor outputs, a user profile, a user mode, a user state, and a user context. The determined sounds and/or visuals are transmitted and presented to the user, and the determined sounds and/or visuals are automatically and dynamically modified in real time based on changes in the output from one or more of the plurality of sensors and/or changes in the user's profile.
    Type: Application
    Filed: November 5, 2019
    Publication date: May 7, 2020
    Applicant: Endel Sound GmbH
    Inventors: Oleg Stavitskii, Kyrylo Bulatsev, Philipp Petrenko, Dmitry Bezugly, Evgeny Gurzhiy, Dmitry Evgrafov
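
The most recent filing, publication 20230367281, describes a text-to-sound pipeline: the text is split into frames, a machine learning network derives features for each frame, and each frame's features drive a generated sound section. The Python sketch below is a minimal, hypothetical illustration of that flow; the frame size, the word-statistics heuristic standing in for the machine learning network, and all function and field names are invented for illustration, not Endel's implementation.

```python
# Hypothetical sketch of the text-to-soundscape pipeline in publication
# 20230367281. The frame size, the feature heuristic, and all names are
# illustrative; a trained network would replace extract_features().
from dataclasses import dataclass


@dataclass
class SoundSection:
    tempo_bpm: float
    intensity: float  # 0.0 (calm) .. 1.0 (energetic)
    duration_s: float


def frame_text(words: list[str], frame_size: int = 50) -> list[list[str]]:
    """Split the word stream into text frames, each a subset of the words."""
    return [words[i:i + frame_size] for i in range(0, len(words), frame_size)]


def extract_features(frame: list[str]) -> dict:
    """Stand-in for the machine learning network: a trivial heuristic
    mapping word statistics to per-frame feature values."""
    avg_word_len = sum(len(w) for w in frame) / max(len(frame), 1)
    return {"density": len(frame), "complexity": avg_word_len}


def features_to_section(features: dict) -> SoundSection:
    """Map per-frame features to parameters of a generated sound section."""
    return SoundSection(
        tempo_bpm=60 + 4 * features["complexity"],
        intensity=min(features["density"] / 100, 1.0),
        duration_s=30.0,
    )


def build_environment(text: str) -> list[SoundSection]:
    """Generate one sound section per text frame; together the sections
    form the personalized sound environment."""
    words = text.split()
    return [features_to_section(extract_features(f)) for f in frame_text(words)]


if __name__ == "__main__":
    for section in build_environment("Once upon a midnight dreary " * 40):
        print(section)
```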
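
Publications 20220365504 and 20220310049 share an abstract describing a sensor-driven flow: sensor output captures the user's state and environment, two or more sound sections are selected from a library based on that output and automatically determined preferences, and the selected sections form the first of at least two sound phases. The sketch below is one plausible reading of that selection step; the sensor fields, the intensity-matching score, and the way the second phase is derived are assumptions, not the claimed method.

```python
# Hypothetical sketch of the selection flow in publications 20220365504 and
# 20220310049. Sensor fields, the scoring rule, and the phase structure are
# illustrative assumptions.
import random


def read_sensors() -> dict:
    """Stand-in for device sensors capturing the user's state and environment."""
    return {"motion": random.random(), "ambient_noise_db": 40 + 30 * random.random()}


def select_sections(sensors: dict, preferences: dict, library: list[dict]) -> list[dict]:
    """Select two or more sound sections whose intensity best matches the
    sensed user state, weighted by automatically determined preferences."""
    target = 0.7 * sensors["motion"] + 0.3 * preferences.get("energy_bias", 0.5)
    ranked = sorted(library, key=lambda s: abs(s["intensity"] - target))
    return ranked[:2]  # "two or more" sections; two keeps the sketch minimal


def build_environment(sensors: dict, preferences: dict, library: list[dict]) -> dict:
    """Assemble a personalized sound environment from two sound phases."""
    first = select_sections(sensors, preferences, library)
    # A real system would derive the second phase from later sensor readings;
    # here we simply bias the preferences toward a calmer target.
    second = select_sections(sensors, dict(preferences, energy_bias=0.2), library)
    return {"phases": [{"sections": first}, {"sections": second}]}


if __name__ == "__main__":
    library = [{"name": f"section-{i}", "intensity": i / 9} for i in range(10)]
    env = build_environment(read_sensors(), {"energy_bias": 0.4}, library)
    for phase in env["phases"]:
        print([s["name"] for s in phase["sections"]])
```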
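
Publication 20220229408, granted patents 11275350 and 10948890, and publications 20210003980 and 20200142371 share an abstract describing a closed feedback loop: sounds and/or visuals are determined from sensor output, a user profile, mode, state, and context, then automatically and dynamically modified in real time as the sensors or profile change. The sketch below shows one hypothetical shape such a loop could take; the polling design, the sensor and profile fields, and the present() stub are assumptions rather than the patented method.

```python
# Hypothetical sketch of the closed feedback loop shared by patents 11275350
# and 10948890: output is re-derived whenever sensor readings change. All
# sensor names, profile fields, and the present() call are illustrative.
import random
import time


def read_sensors() -> dict:
    """Stand-in for device sensors (motion, ambient light, ...)."""
    return {"motion": round(random.random(), 1),
            "ambient_light": round(random.random(), 1)}


def derive_output(sensors: dict, profile: dict, mode: str) -> dict:
    """Map sensor state, user profile, and mode to playback parameters."""
    brightness = (1.0 - sensors["ambient_light"]) if mode == "relax" else sensors["ambient_light"]
    return {"volume": profile["base_volume"] * (0.5 + sensors["motion"] / 2),
            "visual_brightness": brightness}


def present(output: dict) -> None:
    """Stand-in for the actual audio/visual rendering on the user device."""
    print("presenting:", output)


def run_loop(profile: dict, mode: str = "relax", ticks: int = 5, poll_s: float = 0.2) -> None:
    """Poll the sensors and re-present output whenever the derived
    parameters change, approximating real-time dynamic modification."""
    last = None
    for _ in range(ticks):
        output = derive_output(read_sensors(), profile, mode)
        if output != last:  # modify presentation only when inputs change
            present(output)
            last = output
        time.sleep(poll_s)


if __name__ == "__main__":
    run_loop({"base_volume": 0.8})
```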