Patents by Inventor Edward Bryan Cutrell
Edward Bryan Cutrell has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11847261
Abstract: A computer device is provided that includes a display device, and a sensor system configured to be mounted adjacent to a user's head and to measure an electrical potential near one or more electrodes of the sensor system. The computer device further includes a processor configured to present a periodic motion-based visual stimulus having a changing motion that is frequency-modulated for a target frequency or code-modulated for a target code, detect changes in the electrical potential via the one or more electrodes, identify a corresponding visual evoked potential feature in the detected changes in electrical potential that corresponds to the periodic motion-based visual stimulus, and recognize a user input to the computing device based on identifying the corresponding visual evoked potential feature.
Type: Grant
Filed: July 27, 2022
Date of Patent: December 19, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Andrew D. Wilson, Hakim Si Mohammed, Christian Holz, Adrian Kuo Ching Lee, Ivan Jelev Tashev, Hannes Gamper, Edward Bryan Cutrell, David Emerson Johnston, Dimitra Emmanouilidou, Mihai R. Jalobeanu
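The detection step this abstract describes, recognizing which on-screen stimulus a user is attending to from evoked-potential features in the electrode signal, can be sketched in very simplified form as frequency tagging: each stimulus moves at a distinct frequency, and the electrode signal is scored against each candidate. This is a hypothetical illustration of the general technique, not the patented pipeline; the function name, candidate frequencies, and synthetic signal are all assumptions.

```python
import numpy as np

def detect_target_frequency(eeg, fs, candidates):
    """Pick which candidate stimulus frequency dominates an EEG channel.

    Scores each candidate by spectral power at the frequency and its
    first harmonic, a common heuristic for visual evoked responses.
    """
    n = len(eeg)
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    scores = {}
    for f in candidates:
        idx = np.argmin(np.abs(freqs - f))       # bin at f
        idx2 = np.argmin(np.abs(freqs - 2 * f))  # bin at first harmonic
        scores[f] = spectrum[idx] + spectrum[idx2]
    return max(scores, key=scores.get)

# Synthetic check: a 12 Hz evoked response buried in noise.
fs = 256
t = np.arange(0, 4, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = 0.8 * np.sin(2 * np.pi * 12 * t) + rng.normal(0, 1.0, len(t))
print(detect_target_frequency(eeg, fs, candidates=[8, 10, 12, 15]))
```

A real system would also need the code-modulated variant the abstract mentions (correlating against a target code rather than a frequency) and per-user calibration; this sketch covers only the frequency-modulated case.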
-
Publication number: 20230053925
Abstract: According to a first aspect, there is provided a computer-implemented method of controlling a user interface to selectively communicate perception results to a user, the method comprising: in response to an update instruction, using a current confidence level of each perception result of a set of perception results to determine whether or not to communicate that perception result at the user interface. The perception results are determined by processing sensor signals from a sensor system using at least one perception algorithm. At least one of the perception results is communicated at the user interface together with at least one piece of contextual information, without communicating the current confidence level that caused the perception result to be outputted, the current confidence level having been at least partially derived from the piece of contextual information.
Type: Application
Filed: January 14, 2021
Publication date: February 23, 2023
Inventors: Cecily Peregrine Borgatti Morrison, Martin Philip Grayson, Anja Dunphy, Edward Bryan Cutrell
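The selective-communication idea above can be sketched as a simple confidence gate: the confidence decides whether a result is surfaced on an update instruction, but the confidence value itself is never shown, only the result and its contextual information. The `PerceptionResult` fields and the 0.7 threshold are illustrative assumptions, not details from the filing.

```python
from dataclasses import dataclass

@dataclass
class PerceptionResult:
    label: str          # what was perceived, e.g. "face: Anja"
    context: str        # contextual information shown alongside it
    confidence: float   # 0..1, partially derived from that context

def results_to_communicate(results, threshold=0.7):
    # On each update instruction, the confidence gates whether a
    # result is communicated, but the confidence is never itself
    # shown: only the label and its context reach the user interface.
    return [(r.label, r.context) for r in results
            if r.confidence >= threshold]

updates = [
    PerceptionResult("face: Anja", "2 m, ahead-left", 0.91),
    PerceptionResult("face: unknown", "5 m, right", 0.42),
]
print(results_to_communicate(updates))
```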
-
Publication number: 20230046710
Abstract: There is provided a computer implemented method of extracting information about a person. Incoming sensor signals for monitoring people within a field of view of a sensor system are received and processed. In response to detecting a person located within a notification region, an output device outputs a notification to the detected person. Processing of the incoming sensor signals continues in order to monitor behaviour patterns of the person and determine from his behaviour patterns whether he is currently in a consenting or non-consenting state. An extraction function attempts to extract information about the person irrespective of his determined state. A sharing function determines whether or not to share an extracted piece of information about the person with a receiving entity in accordance with his determined state, the information not being shared unless and until it is subsequently determined that the person is in the consenting state.
Type: Application
Filed: December 14, 2020
Publication date: February 16, 2023
Inventors: Cecily Peregrine Borgatti Morrison, Martin Philip Grayson, Anja Dunphy, Edward Bryan Cutrell
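The split the abstract describes, extraction that always runs versus sharing that is held back until a consenting state is detected, can be sketched as two decoupled functions around a pending queue. Class and method names here are invented for illustration; the filing does not specify this structure.

```python
from enum import Enum, auto

class ConsentState(Enum):
    CONSENTING = auto()
    NON_CONSENTING = auto()

class ConsentGatedSharer:
    """Extraction runs irrespective of consent state; sharing is
    deferred unless and until a consenting state is determined."""

    def __init__(self):
        self._pending = []   # extracted but not yet shareable
        self._shared = []    # released to the receiving entity

    def extract(self, info):
        # The extraction function always records what it can.
        self._pending.append(info)

    def update_state(self, state):
        # The sharing function releases pending items only on consent.
        if state is ConsentState.CONSENTING:
            self._shared.extend(self._pending)
            self._pending.clear()

    @property
    def shared(self):
        return list(self._shared)

sharer = ConsentGatedSharer()
sharer.extract("walking speed")                    # extracted, but held
sharer.update_state(ConsentState.NON_CONSENTING)
print(sharer.shared)                               # nothing released yet
sharer.update_state(ConsentState.CONSENTING)
print(sharer.shared)                               # released on consent
```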
-
Publication number: 20220365599
Abstract: A computer device is provided that includes a display device, and a sensor system configured to be mounted adjacent to a user's head and to measure an electrical potential near one or more electrodes of the sensor system. The computer device further includes a processor configured to present a periodic motion-based visual stimulus having a changing motion that is frequency-modulated for a target frequency or code-modulated for a target code, detect changes in the electrical potential via the one or more electrodes, identify a corresponding visual evoked potential feature in the detected changes in electrical potential that corresponds to the periodic motion-based visual stimulus, and recognize a user input to the computing device based on identifying the corresponding visual evoked potential feature.
Type: Application
Filed: July 27, 2022
Publication date: November 17, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Andrew D. Wilson, Hakim Si Mohammed, Christian Holz, Adrian Kuo Ching Lee, Ivan Jelev Tashev, Hannes Gamper, Edward Bryan Cutrell, David Emerson Johnston, Dimitra Emmanouilidou, Mihai R. Jalobeanu
-
Patent number: 11409361
Abstract: A computer device is provided that includes a display device, and a sensor system configured to be mounted adjacent to a user's head and to measure an electrical potential near one or more electrodes of the sensor system. The computer device further includes a processor configured to present a periodic motion-based visual stimulus having a changing motion that is frequency-modulated for a target frequency or code-modulated for a target code, detect changes in the electrical potential via the one or more electrodes, identify a corresponding visual evoked potential feature in the detected changes in electrical potential that corresponds to the periodic motion-based visual stimulus, and recognize a user input to the computing device based on identifying the corresponding visual evoked potential feature.
Type: Grant
Filed: February 3, 2020
Date of Patent: August 9, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Andrew D. Wilson, Hakim Si Mohammed, Christian Holz, Adrian Kuo Ching Lee, Ivan Jelev Tashev, Hannes Gamper, Edward Bryan Cutrell, David Emerson Johnston, Dimitra Emmanouilidou, Mihai R. Jalobeanu
-
Publication number: 20220230374
Abstract: Generation of expressive content is provided. An expressive synthesized speech system provides improved voice authoring user interfaces by which a user is enabled to efficiently author content for generating expressive output. The system provides an expressive keyboard for entering textual content and for selecting expressive operators, such as emoji objects or punctuation objects, that apply predetermined prosody attributes or visual effects to the textual content. A voicesetting editor mode enables the user to author and adjust particular prosody attributes associated with the content for composing carefully crafted synthetic speech. An active listening mode (ALM) is also provided; when it is selected, a set of ALM effect options is displayed, each associated with a particular sound effect and/or visual effect, enabling the user to respond rapidly with expressive vocal sound effects or visual effects while listening to others speak.
Type: Application
Filed: April 5, 2022
Publication date: July 21, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Ann M. Paradiso, Jonathan Campbell, Edward Bryan Cutrell, Harish Kulkarni, Meredith Morris, Alexander John Fiannaca, Kiley Rebecca Sobel
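The expressive-operator idea, mapping an emoji or punctuation object to predetermined prosody attributes applied to typed text, can be sketched as a lookup that emits SSML-style markup for a speech synthesizer. The operator table and attribute values below are invented for illustration; the filing does not specify them.

```python
# Hypothetical mapping from expressive operators (emoji / punctuation)
# to prosody attributes; the entries are illustrative assumptions.
OPERATORS = {
    "!!": {"rate": "fast", "pitch": "+15%", "volume": "loud"},
    "...": {"rate": "slow", "pitch": "-10%"},
}

def to_ssml(text, operator):
    """Wrap text in SSML-style prosody markup for the chosen operator.

    Unknown operators fall through to plain, unmodified speech.
    """
    attrs = OPERATORS.get(operator, {})
    attr_str = " ".join(f'{k}="{v}"' for k, v in sorted(attrs.items()))
    if not attr_str:
        return f"<speak>{text}</speak>"
    return f"<speak><prosody {attr_str}>{text}</prosody></speak>"

print(to_ssml("See you soon", "!!"))
```

The generated string follows the shape of the W3C SSML `<prosody>` element; a production system would hand it to a text-to-speech engine rather than print it.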
-
Patent number: 11321890
Abstract: Generation of expressive content is provided. An expressive synthesized speech system provides improved voice authoring user interfaces by which a user is enabled to efficiently author content for generating expressive output. The system provides an expressive keyboard for entering textual content and for selecting expressive operators, such as emoji objects or punctuation objects, that apply predetermined prosody attributes or visual effects to the textual content. A voicesetting editor mode enables the user to author and adjust particular prosody attributes associated with the content for composing carefully crafted synthetic speech. An active listening mode (ALM) is also provided; when it is selected, a set of ALM effect options is displayed, each associated with a particular sound effect and/or visual effect, enabling the user to respond rapidly with expressive vocal sound effects or visual effects while listening to others speak.
Type: Grant
Filed: November 9, 2016
Date of Patent: May 3, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ann M. Paradiso, Jonathan Campbell, Edward Bryan Cutrell, Harish Kulkarni, Meredith Morris, Alexander John Fiannaca, Kiley Rebecca Sobel
-
Publication number: 20210240264
Abstract: A computer device is provided that includes a display device, and a sensor system configured to be mounted adjacent to a user's head and to measure an electrical potential near one or more electrodes of the sensor system. The computer device further includes a processor configured to present a periodic motion-based visual stimulus having a changing motion that is frequency-modulated for a target frequency or code-modulated for a target code, detect changes in the electrical potential via the one or more electrodes, identify a corresponding visual evoked potential feature in the detected changes in electrical potential that corresponds to the periodic motion-based visual stimulus, and recognize a user input to the computing device based on identifying the corresponding visual evoked potential feature.
Type: Application
Filed: February 3, 2020
Publication date: August 5, 2021
Applicant: Microsoft Technology Licensing, LLC
Inventors: Andrew D. Wilson, Hakim Si Mohammed, Christian Holz, Adrian Kuo Ching Lee, Ivan Jelev Tashev, Hannes Gamper, Edward Bryan Cutrell, David Emerson Johnston, Dimitra Emmanouilidou, Mihai R. Jalobeanu
-
Publication number: 20180130459
Abstract: Generation of expressive content is provided. An expressive synthesized speech system provides improved voice authoring user interfaces by which a user is enabled to efficiently author content for generating expressive output. The system provides an expressive keyboard for entering textual content and for selecting expressive operators, such as emoji objects or punctuation objects, that apply predetermined prosody attributes or visual effects to the textual content. A voicesetting editor mode enables the user to author and adjust particular prosody attributes associated with the content for composing carefully crafted synthetic speech. An active listening mode (ALM) is also provided; when it is selected, a set of ALM effect options is displayed, each associated with a particular sound effect and/or visual effect, enabling the user to respond rapidly with expressive vocal sound effects or visual effects while listening to others speak.
Type: Application
Filed: November 9, 2016
Publication date: May 10, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Ann M. Paradiso, Jonathan Campbell, Edward Bryan Cutrell, Harish Kulkarni, Meredith Morris, Alexander John Fiannaca, Kiley Rebecca Sobel