Patents by Inventor Dmytro Rudchenko

Dmytro Rudchenko has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11921966
    Abstract: Systems and methods related to intelligent typing and responses using eye-gaze technology are disclosed herein. In some example aspects, a dwell-free typing system is provided to a user typing with eye-gaze. A prediction processor may intelligently determine the desired word or action of the user. In some aspects, the prediction processor may contain elements of a natural language processor. In other aspects, the systems and methods may allow quicker response times from applications due to application of intelligent response algorithms. For example, a user may fixate on a certain button within a web-browser, and the prediction processor may present a response to the user by selecting the button in the web-browser, thereby initiating an action. In other example aspects, each gaze location may be associated with a UI element. The gaze data and associated UI elements may be processed for intelligent predictions and suggestions.
    Type: Grant
    Filed: January 31, 2022
    Date of Patent: March 5, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dmytro Rudchenko, Eric N. Badger, Akhilesh Kaza, Jacob Daniel Cohen, Harish S. Kulkarni
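The gaze-to-UI-element association described in the abstract above can be sketched roughly as follows. This is a toy illustration, not the patented implementation: the element names, coordinates, and the nearest-element voting rule are all illustrative assumptions standing in for the prediction processor.

```python
# Associate each gaze sample with the closest UI element, then predict the
# intended target as the element most samples land nearest to.
from dataclasses import dataclass

@dataclass
class UIElement:
    name: str
    x: float  # element center x
    y: float  # element center y

def nearest_element(gaze_point, elements):
    """Associate one gaze sample with the closest UI element."""
    gx, gy = gaze_point
    return min(elements, key=lambda e: (e.x - gx) ** 2 + (e.y - gy) ** 2)

def predict_target(gaze_points, elements):
    """Predict the intended element by a simple nearest-element vote
    (a stand-in for the trained prediction processor)."""
    counts = {}
    for p in gaze_points:
        e = nearest_element(p, elements)
        counts[e.name] = counts.get(e.name, 0) + 1
    return max(counts, key=counts.get)

buttons = [UIElement("send", 10, 10), UIElement("close", 90, 10)]
samples = [(11, 9), (12, 11), (9, 10), (88, 12)]
print(predict_target(samples, buttons))  # most samples cluster near "send"
```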
  • Patent number: 11880545
    Abstract: Systems and methods disclosed herein relate to assigning dynamic eye-gaze dwell-times. Dynamic dwell-times may be tailored to the individual user. For example, a dynamic dwell-time system may be configured to receive data from the user, such as the duration of time the user takes to execute certain actions within applications (e.g., read a word suggestion before actually selecting it). The dynamic dwell-time system may also prevent users from making unintended selections by providing different dwell times for different buttons. Specifically, on a user interface, longer dwell times may be established for the critical keys (e.g., “close” program key, “send” key, word suggestions, and the like) and shorter dwell times may be established for the less critical keys (e.g., individual character keys on a virtual keyboard, spacebar, backspace, and the like).
    Type: Grant
    Filed: June 24, 2021
    Date of Patent: January 23, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dmytro Rudchenko, Eric N. Badger, Akhilesh Kaza, Jacob Daniel Cohen, Peter John Ansell, Jonathan T. Campbell, Harish S. Kulkarni
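The per-key dynamic dwell-time idea in the abstract above can be sketched as a threshold lookup that is longer for critical keys and adapts to the individual user. The base values and the adaptation rule below are invented for illustration.

```python
# Longer dwell thresholds for critical keys, shorter for character keys,
# with the critical-key threshold scaled by how long this user typically
# takes to read a suggestion before selecting it.
CRITICAL_KEYS = {"close", "send", "word_suggestion"}

def dwell_threshold_ms(key, user_read_time_ms=300):
    base = 800 if key in CRITICAL_KEYS else 250
    if key in CRITICAL_KEYS:
        # Adapt to the user: never shorter than twice their read time.
        base = max(base, user_read_time_ms * 2)
    return base

def is_selected(key, dwell_ms, user_read_time_ms=300):
    """A key counts as selected once the gaze dwell exceeds its threshold."""
    return dwell_ms >= dwell_threshold_ms(key, user_read_time_ms)

print(is_selected("a", 300))      # character key, short threshold: True
print(is_selected("close", 300))  # critical key, long threshold: False
```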
  • Publication number: 20230401486
    Abstract: The subject technology receives, from a first sensor of a device, first sensor output of a first type. The subject technology receives, from a second sensor of the device, second sensor output of a second type, the first and second sensors being non-touch sensors. The subject technology provides the first sensor output and the second sensor output as inputs to a machine learning model, the machine learning model having been trained to output a predicted touch-based gesture based on sensor output of the first type and sensor output of the second type. The subject technology provides a predicted touch-based gesture based on output from the machine learning model. Further, the subject technology adjusts an audio output level of the device based on the predicted gesture, the device being an audio output device.
    Type: Application
    Filed: May 30, 2023
    Publication date: December 14, 2023
    Inventors: Keith P. AVERY, Jamil DHANANI, Harveen KAUR, Varun MAUDGALYA, Timothy S. PAEK, Dmytro RUDCHENKO, Brandt M. WESTING, Minwoo JEONG
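The two-sensor pipeline in the abstract above can be sketched as follows. The tiny rule-based "model" is a deliberate stand-in for the trained machine learning model, and the gesture names, feature names, and volume steps are illustrative assumptions.

```python
# Two non-touch sensor readings feed a predictor; the predicted
# touch-like gesture then drives an audio-output-level change.
def predict_gesture(accel_energy: float, mic_tap_score: float) -> str:
    """Stand-in for a model trained on two types of sensor output."""
    if accel_energy > 0.5 and mic_tap_score > 0.5:
        return "double_tap"
    if accel_energy > 0.5:
        return "swipe"
    return "none"

def adjust_volume(volume: int, gesture: str) -> int:
    """Map the predicted gesture to a clamped volume adjustment."""
    steps = {"double_tap": +10, "swipe": -10, "none": 0}
    return max(0, min(100, volume + steps[gesture]))

g = predict_gesture(0.8, 0.9)
print(g, adjust_volume(50, g))  # double_tap 60
```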
  • Patent number: 11704592
    Abstract: The subject technology receives, from a first sensor of a device, first sensor output of a first type. The subject technology receives, from a second sensor of the device, second sensor output of a second type, the first and second sensors being non-touch sensors. The subject technology provides the first sensor output and the second sensor output as inputs to a machine learning model, the machine learning model having been trained to output a predicted touch-based gesture based on sensor output of the first type and sensor output of the second type. The subject technology provides a predicted touch-based gesture based on output from the machine learning model. Further, the subject technology adjusts an audio output level of the device based on the predicted gesture, the device being an audio output device.
    Type: Grant
    Filed: July 23, 2020
    Date of Patent: July 18, 2023
    Assignee: Apple Inc.
    Inventors: Keith P. Avery, Jamil Dhanani, Harveen Kaur, Varun Maudgalya, Timothy S. Paek, Dmytro Rudchenko, Brandt M. Westing, Minwoo Jeong
  • Publication number: 20220155912
    Abstract: Systems and methods related to intelligent typing and responses using eye-gaze technology are disclosed herein. In some example aspects, a dwell-free typing system is provided to a user typing with eye-gaze. A prediction processor may intelligently determine the desired word or action of the user. In some aspects, the prediction processor may contain elements of a natural language processor. In other aspects, the systems and methods may allow quicker response times from applications due to application of intelligent response algorithms. For example, a user may fixate on a certain button within a web-browser, and the prediction processor may present a response to the user by selecting the button in the web-browser, thereby initiating an action. In other example aspects, each gaze location may be associated with a UI element. The gaze data and associated UI elements may be processed for intelligent predictions and suggestions.
    Type: Application
    Filed: January 31, 2022
    Publication date: May 19, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Dmytro RUDCHENKO, Eric N. BADGER, Akhilesh KAZA, Jacob Daniel COHEN, Harish S. KULKARNI
  • Publication number: 20220155911
    Abstract: Systems and methods related to intelligent typing and responses using eye-gaze technology are disclosed herein. In some example aspects, a dwell-free typing system is provided to a user typing with eye-gaze. A prediction processor may intelligently determine the desired word or action of the user. In some aspects, the prediction processor may contain elements of a natural language processor. In other aspects, the systems and methods may allow quicker response times from applications due to application of intelligent response algorithms. For example, a user may fixate on a certain button within a web-browser, and the prediction processor may present a response to the user by selecting the button in the web-browser, thereby initiating an action. In other example aspects, each gaze location may be associated with a UI element. The gaze data and associated UI elements may be processed for intelligent predictions and suggestions.
    Type: Application
    Filed: January 31, 2022
    Publication date: May 19, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Dmytro RUDCHENKO, Eric N. BADGER, Akhilesh KAZA, Jacob Daniel COHEN, Harish S. KULKARNI
  • Patent number: 11237691
    Abstract: Systems and methods related to intelligent typing and responses using eye-gaze technology are disclosed herein. In some example aspects, a dwell-free typing system is provided to a user typing with eye-gaze. A prediction processor may intelligently determine the desired word or action of the user. In some aspects, the prediction processor may contain elements of a natural language processor. In other aspects, the systems and methods may allow quicker response times from applications due to application of intelligent response algorithms. For example, a user may fixate on a certain button within a web-browser, and the prediction processor may present a response to the user by selecting the button in the web-browser, thereby initiating an action. In other example aspects, each gaze location may be associated with a UI element. The gaze data and associated UI elements may be processed for intelligent predictions and suggestions.
    Type: Grant
    Filed: December 13, 2017
    Date of Patent: February 1, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dmytro Rudchenko, Eric N. Badger, Akhilesh Kaza, Jacob Daniel Cohen, Harish S. Kulkarni
  • Publication number: 20210318794
    Abstract: Systems and methods disclosed herein relate to assigning dynamic eye-gaze dwell-times. Dynamic dwell-times may be tailored to the individual user. For example, a dynamic dwell-time system may be configured to receive data from the user, such as the duration of time the user takes to execute certain actions within applications (e.g., read a word suggestion before actually selecting it). The dynamic dwell-time system may also prevent users from making unintended selections by providing different dwell times for different buttons. Specifically, on a user interface, longer dwell times may be established for the critical keys (e.g., “close” program key, “send” key, word suggestions, and the like) and shorter dwell times may be established for the less critical keys (e.g., individual character keys on a virtual keyboard, spacebar, backspace, and the like).
    Type: Application
    Filed: June 24, 2021
    Publication date: October 14, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Dmytro RUDCHENKO, Eric N. BADGER, Akhilesh KAZA, Jacob Daniel COHEN, Peter John ANSELL, Jonathan T. CAMPBELL, Harish S. KULKARNI
  • Patent number: 11079899
    Abstract: Systems and methods disclosed herein relate to assigning dynamic eye-gaze dwell-times. Dynamic dwell-times may be tailored to the individual user. For example, a dynamic dwell-time system may be configured to receive data from the user, such as the duration of time the user takes to execute certain actions within applications (e.g., read a word suggestion before actually selecting it). The dynamic dwell-time system may also prevent users from making unintended selections by providing different dwell times for different buttons. Specifically, on a user interface, longer dwell times may be established for the critical keys (e.g., “close” program key, “send” key, word suggestions, and the like) and shorter dwell times may be established for the less critical keys (e.g., individual character keys on a virtual keyboard, spacebar, backspace, and the like).
    Type: Grant
    Filed: December 13, 2017
    Date of Patent: August 3, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dmytro Rudchenko, Eric N. Badger, Akhilesh Kaza, Jacob Daniel Cohen, Peter John Ansell, Jonathan T. Campbell, Harish S. Kulkarni
  • Publication number: 20210027199
    Abstract: The subject technology receives, from a first sensor of a device, first sensor output of a first type. The subject technology receives, from a second sensor of the device, second sensor output of a second type, the first and second sensors being non-touch sensors. The subject technology provides the first sensor output and the second sensor output as inputs to a machine learning model, the machine learning model having been trained to output a predicted touch-based gesture based on sensor output of the first type and sensor output of the second type. The subject technology provides a predicted touch-based gesture based on output from the machine learning model. Further, the subject technology adjusts an audio output level of the device based on the predicted gesture, the device being an audio output device.
    Type: Application
    Filed: July 23, 2020
    Publication date: January 28, 2021
    Inventors: Keith P. AVERY, Jamil DHANANI, Harveen KAUR, Varun MAUDGALYA, Timothy S. PAEK, Dmytro RUDCHENKO, Brandt M. WESTING, Minwoo JEONG
  • Patent number: 10698587
    Abstract: Embodiments are disclosed for a method of providing a user interface on a computing device. The method includes presenting a virtual keyboard on a display of the computing device and detecting input to the virtual keyboard. The method further includes, for each detected input, determining whether the input selects any of one or more delimiter keys, displaying a placeholder for the input responsive to the input not selecting any of the one or more delimiter keys, and receiving suggested candidate text from a word-level recognizer and replacing all currently displayed placeholders with the suggested candidate text responsive to the input selecting any of the one or more delimiter keys.
    Type: Grant
    Filed: April 11, 2019
    Date of Patent: June 30, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Timothy Paek, Dmytro Rudchenko, Vishwas Kulkarni, Asela Jeevaka Ranaweera Gunawardana, Jason Grieves, Daniel Ostrowski, Amish Patel
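The placeholder flow in the abstract above can be sketched as follows: each non-delimiter keystroke displays a placeholder, and a delimiter key triggers word-level recognition that replaces all placeholders at once. The recognizer here is a toy lookup and an assumption, not the patented word-level recognizer, and the sketch handles a single word at a time.

```python
# Show a placeholder per keystroke; on a delimiter, replace all
# placeholders with the recognizer's candidate for the pending word.
DELIMITERS = {" ", ".", ","}
TOY_RECOGNIZER = {4: "word", 5: "hello"}  # keystroke count -> candidate

def process_keys(keys):
    display = []  # what the user currently sees (one word at a time)
    pending = 0   # keystrokes currently shown only as placeholders
    for k in keys:
        if k in DELIMITERS:
            candidate = TOY_RECOGNIZER.get(pending, "?" * pending)
            display = list(candidate) + [k]  # replace the placeholders
            pending = 0
        else:
            pending += 1
            display.append("\u2022")  # one placeholder per keystroke
    return "".join(display)

print(process_keys(list("hxllo ")))  # 5 keystrokes then space -> "hello "
```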
  • Publication number: 20190286300
    Abstract: Embodiments are disclosed for a method of providing a user interface on a computing device. The method includes presenting a virtual keyboard on a display of the computing device and detecting input to the virtual keyboard. The method further includes, for each detected input, determining whether the input selects any of one or more delimiter keys, displaying a placeholder for the input responsive to the input not selecting any of the one or more delimiter keys, and receiving suggested candidate text from a word-level recognizer and replacing all currently displayed placeholders with the suggested candidate text responsive to the input selecting any of the one or more delimiter keys.
    Type: Application
    Filed: April 11, 2019
    Publication date: September 19, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Timothy Paek, Dmytro Rudchenko, Vishwas Kulkarni, Asela Jeevaka Ranaweera Gunawardana, Jason Grieves, Daniel Ostrowski, Amish Patel
  • Patent number: 10261674
    Abstract: Embodiments are disclosed for a method of providing a user interface on a computing device. The method includes presenting a virtual keyboard on a display of the computing device and detecting input to the virtual keyboard. The method further includes, for each detected input, determining whether the input selects any of one or more delimiter keys, displaying a placeholder for the input responsive to the input not selecting any of the one or more delimiter keys, and receiving suggested candidate text from a word-level recognizer and replacing all currently displayed placeholders with the suggested candidate text responsive to the input selecting any of the one or more delimiter keys.
    Type: Grant
    Filed: November 26, 2014
    Date of Patent: April 16, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Timothy Paek, Dmytro Rudchenko, Vishwas Kulkarni, Asela Jeevaka Ranaweera Gunawardana, Jason Grieves, Daniel Ostrowski, Amish Patel
  • Publication number: 20190087084
    Abstract: An apparatus and method are disclosed for providing feedback and guidance to touch screen device users to improve text entry user experience and performance by generating input history data including character probabilities, word probabilities, and touch models. According to one embodiment, a method comprises receiving first input data, automatically learning user tendencies based on the first input data to generate input history data, receiving second input data, and generating auto-corrections or suggestion candidates for one or more words of the second input data based on the input history data. The user can then select one of the suggestion candidates to replace a selected word with the selected suggestion candidate.
    Type: Application
    Filed: November 20, 2018
    Publication date: March 21, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Eric Norman Badger, Drew Elliott Linerud, Itai Almog, Timothy S. Paek, Parthasarathy Sundararajan, Dmytro Rudchenko, Asela J. Gunawardana
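The learn-then-suggest loop in the abstract above can be sketched with a simple frequency model: first input data builds the history, and later input is matched against it. The word-frequency model below is a deliberately minimal stand-in for the character probabilities, word probabilities, and touch models the abstract describes.

```python
# Learn user tendencies from earlier input, then rank suggestion
# candidates for later input by learned frequency.
from collections import Counter

class InputHistory:
    def __init__(self):
        self.word_counts = Counter()

    def learn(self, text: str):
        """First input data: accumulate per-word frequencies."""
        self.word_counts.update(text.lower().split())

    def suggest(self, typed: str, n=3):
        """Second input data: rank known words sharing the typed prefix."""
        prefix = typed.lower()
        candidates = [w for w in self.word_counts if w.startswith(prefix)]
        return sorted(candidates, key=lambda w: -self.word_counts[w])[:n]

h = InputHistory()
h.learn("the meeting moved to monday the meeting ran long")
print(h.suggest("me"))  # ["meeting"]
```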
  • Publication number: 20190034038
    Abstract: Systems and methods related to intelligent typing and responses using eye-gaze technology are disclosed herein. In some example aspects, a dwell-free typing system is provided to a user typing with eye-gaze. A prediction processor may intelligently determine the desired word or action of the user. In some aspects, the prediction processor may contain elements of a natural language processor. In other aspects, the systems and methods may allow quicker response times from applications due to application of intelligent response algorithms. For example, a user may fixate on a certain button within a web-browser, and the prediction processor may present a response to the user by selecting the button in the web-browser, thereby initiating an action. In other example aspects, each gaze location may be associated with a UI element. The gaze data and associated UI elements may be processed for intelligent predictions and suggestions.
    Type: Application
    Filed: December 13, 2017
    Publication date: January 31, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Dmytro RUDCHENKO, Eric N. BADGER, Akhilesh KAZA, Jacob Daniel COHEN, Harish S. KULKARNI
  • Publication number: 20190034057
    Abstract: Systems and methods disclosed herein relate to assigning dynamic eye-gaze dwell-times. Dynamic dwell-times may be tailored to the individual user. For example, a dynamic dwell-time system may be configured to receive data from the user, such as the duration of time the user takes to execute certain actions within applications (e.g., read a word suggestion before actually selecting it). The dynamic dwell-time system may also prevent users from making unintended selections by providing different dwell times for different buttons. Specifically, on a user interface, longer dwell times may be established for the critical keys (e.g., “close” program key, “send” key, word suggestions, and the like) and shorter dwell times may be established for the less critical keys (e.g., individual character keys on a virtual keyboard, spacebar, backspace, and the like).
    Type: Application
    Filed: December 13, 2017
    Publication date: January 31, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Dmytro RUDCHENKO, Eric N. BADGER, Akhilesh KAZA, Jacob Daniel COHEN, Peter John ANSELL, Jonathan T. CAMPBELL, Harish S. KULKARNI
  • Patent number: 10156981
    Abstract: An apparatus and method are disclosed for providing feedback and guidance to touch screen device users to improve text entry user experience and performance by generating input history data including character probabilities, word probabilities, and touch models. According to one embodiment, a method comprises receiving first input data, automatically learning user tendencies based on the first input data to generate input history data, receiving second input data, and generating auto-corrections or suggestion candidates for one or more words of the second input data based on the input history data. The user can then select one of the suggestion candidates to replace a selected word with the selected suggestion candidate.
    Type: Grant
    Filed: April 1, 2017
    Date of Patent: December 18, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Eric Norman Badger, Drew Elliott Linerud, Itai Almog, Timothy S. Paek, Parthasarathy Sundararajan, Dmytro Rudchenko, Asela J. Gunawardana
  • Patent number: 10146404
    Abstract: In a mobile device, the text entered by users is analyzed to determine a set of responses commonly entered by users into text applications such as SMS applications in response to received messages. This set of responses is used to provide suggested responses to a user for a currently received message in a soft input panel based on the text of the currently received message. The suggested responses are provided before any characters are provided by the user. After the user provides one or more characters, the suggested responses in the soft input panel are updated. The number of suggested responses displayed to the user in the soft input panel is limited to a total confidence value to reduce user distraction and to allow for easier selection. An undo feature for inadvertent selections of suggested responses is also provided.
    Type: Grant
    Filed: January 27, 2015
    Date of Patent: December 4, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jason Grieves, Dmytro Rudchenko, Parthasarathy Sundararajan, Tim Paek, Itai Almog, Songming He, Jerome Turner, Masahiro Ami, Kozo Miyano
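The confidence-budget limit on suggested responses described above can be sketched as follows. The candidate responses and their scores are invented for illustration; the real system derives them from responses users commonly enter.

```python
# Show the highest-confidence suggested responses until their summed
# confidence would exceed a total budget, limiting user distraction.
def pick_responses(scored, budget=0.9):
    shown, total = [], 0.0
    for reply, conf in sorted(scored, key=lambda rc: -rc[1]):
        if total + conf > budget:
            break
        shown.append(reply)
        total += conf
    return shown

candidates = [("OK", 0.5), ("On my way", 0.3), ("Call you later", 0.2)]
print(pick_responses(candidates))  # ["OK", "On my way"]
```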
  • Patent number: 9740399
    Abstract: Described herein are various technologies pertaining to shapewriting. A touch-sensitive input panel comprises a plurality of keys, where each key in the plurality of keys is representative of a respective plurality of characters. A user can generate a trace over the touch-sensitive input panel, wherein the trace passes over keys desirably selected by the user. A sequence of characters, such as a word, is decoded based upon the trace, and is output to a display or a speaker.
    Type: Grant
    Filed: January 20, 2013
    Date of Patent: August 22, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Timothy S. Paek, Johnson Apacible, Dmytro Rudchenko, Bongshin Lee, Juan Dai, Yutaka Suzue
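The trace decoding in the shapewriting abstract above can be sketched as follows: collect the keys the trace passes over, then match that key sequence against a lexicon. The three-key layout, the radius test, and the subsequence match are simplifying assumptions, not the patented decoder.

```python
# Decode a shapewriting trace: find the ordered keys the trace passes
# near, then pick the lexicon word whose letters appear in that order.
def keys_under_trace(trace, key_centers, radius=1.5):
    """Keys whose centers the trace passes near, in order, deduplicated."""
    passed = []
    for (tx, ty) in trace:
        for key, (kx, ky) in key_centers.items():
            if (tx - kx) ** 2 + (ty - ky) ** 2 <= radius ** 2 and \
               (not passed or passed[-1] != key):
                passed.append(key)
    return passed

def decode(trace, key_centers, lexicon):
    """Return the first lexicon word whose letters occur, in order,
    within the key sequence swept by the trace."""
    seq = keys_under_trace(trace, key_centers)
    def is_subseq(word, seq):
        it = iter(seq)
        return all(ch in it for ch in word)
    matches = [w for w in lexicon if is_subseq(w, seq)]
    return matches[0] if matches else None

keys = {"c": (0, 0), "a": (2, 0), "t": (4, 0)}
trace = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]
print(decode(trace, keys, ["cat", "dog"]))  # "cat"
```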
  • Publication number: 20170206002
    Abstract: An apparatus and method are disclosed for providing feedback and guidance to touch screen device users to improve text entry user experience and performance by generating input history data including character probabilities, word probabilities, and touch models. According to one embodiment, a method comprises receiving first input data, automatically learning user tendencies based on the first input data to generate input history data, receiving second input data, and generating auto-corrections or suggestion candidates for one or more words of the second input data based on the input history data. The user can then select one of the suggestion candidates to replace a selected word with the selected suggestion candidate.
    Type: Application
    Filed: April 1, 2017
    Publication date: July 20, 2017
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Eric Norman Badger, Drew Elliott Linerud, Itai Almog, Timothy S. Paek, Parthasarathy Sundararajan, Dmytro Rudchenko, Asela J. Gunawardana