Patents by Inventor Ann M. Paradiso
Ann M. Paradiso has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220230374
Abstract: Generation of expressive content is provided. An expressive synthesized speech system provides improved voice authoring user interfaces by which a user can efficiently author content for generating expressive output. The system provides an expressive keyboard for entering textual content and for selecting expressive operators, such as emoji objects or punctuation objects, that apply predetermined prosody attributes or visual effects to the textual content. A voicesetting editor mode enables the user to author and adjust particular prosody attributes associated with the content for composing carefully crafted synthetic speech. An active listening mode (ALM) is also provided; when it is selected, a set of ALM effect options is displayed, each associated with a particular sound effect and/or visual effect, enabling the user to respond rapidly with expressive vocal sound effects or visual effects while listening to others speak.
Type: Application
Filed: April 5, 2022
Publication date: July 21, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Ann M. Paradiso, Jonathan Campbell, Edward Bryan Cutrell, Harish Kulkarni, Meredith Morris, Alexander John Fiannaca, Kiley Rebecca Sobel
-
Patent number: 11321890
Abstract: Generation of expressive content is provided. An expressive synthesized speech system provides improved voice authoring user interfaces by which a user can efficiently author content for generating expressive output. The system provides an expressive keyboard for entering textual content and for selecting expressive operators, such as emoji objects or punctuation objects, that apply predetermined prosody attributes or visual effects to the textual content. A voicesetting editor mode enables the user to author and adjust particular prosody attributes associated with the content for composing carefully crafted synthetic speech. An active listening mode (ALM) is also provided; when it is selected, a set of ALM effect options is displayed, each associated with a particular sound effect and/or visual effect, enabling the user to respond rapidly with expressive vocal sound effects or visual effects while listening to others speak.
Type: Grant
Filed: November 9, 2016
Date of Patent: May 3, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ann M. Paradiso, Jonathan Campbell, Edward Bryan Cutrell, Harish Kulkarni, Meredith Morris, Alexander John Fiannaca, Kiley Rebecca Sobel
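The expressive-keyboard idea above maps expressive operators (emoji or punctuation objects) to predetermined prosody attributes. A minimal sketch of how such a mapping might render text as SSML for a synthesizer follows; the preset table, attribute values, and function name are hypothetical illustrations, not taken from the patent:

```python
# Hypothetical mapping from an expressive operator (emoji/punctuation)
# to prosody attributes, rendered as an SSML <prosody> element.
PROSODY_PRESETS = {
    "😀": {"rate": "fast", "pitch": "+15%", "volume": "loud"},
    "😢": {"rate": "slow", "pitch": "-10%", "volume": "soft"},
    "!":  {"rate": "medium", "pitch": "+5%", "volume": "x-loud"},
}

def to_ssml(text: str, operator: str) -> str:
    """Wrap text in an SSML prosody tag chosen by the expressive operator.

    Unknown operators fall back to plain, unstyled speech.
    """
    attrs = PROSODY_PRESETS.get(operator)
    if attrs is None:
        return f"<speak>{text}</speak>"
    attr_str = " ".join(f'{k}="{v}"' for k, v in attrs.items())
    return f"<speak><prosody {attr_str}>{text}</prosody></speak>"
```

In this sketch the operator selects the whole prosody preset at once, which matches the abstract's notion of "predetermined prosody attributes" applied per operator rather than tuned individually (the voicesetting editor mode would cover the latter).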
-
Patent number: 10496162
Abstract: The systems and methods described herein assist persons in using computers based on eye gaze, allowing such persons to control computing systems with various eye trackers. The systems and methods use eye trackers to control cursor (or other indicator) positioning on an operating system using the gaze location reported by the eye tracker, and employ an interaction model that allows control of a computer using eye gaze and dwell. The data from the eye tracker provides a gaze location on the screen. The systems and methods control a graphical user interface that is part of an operating system with respect to cursor positioning and associated actions such as Left-Click, Right-Click, Double-Click, and the like. The interaction model presents appropriate user interfaces to navigate the user through applications on the computing system.
Type: Grant
Filed: July 26, 2017
Date of Patent: December 3, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Harish Sripad Kulkarni, Dwayne Lamb, Ann M. Paradiso, Eric N. Badger, Jonathan Thomas Campbell, Peter John Ansell, Jacob Daniel Cohen
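The abstract above centers on controlling a cursor through eye gaze and dwell: the pointer follows the reported gaze location, and an action fires only after the gaze holds steady long enough. A minimal sketch of dwell detection over a gaze-sample stream follows; the radius and timing constants, sample format, and function name are hypothetical, not drawn from the patent:

```python
import math

DWELL_RADIUS_PX = 40   # jitter tolerance around the fixation point (assumed)
DWELL_TIME_S = 0.8     # how long gaze must hold to trigger an action (assumed)

def detect_dwell(samples):
    """Scan time-ordered (t, x, y) gaze samples for a dwell.

    Returns the (x, y) anchor of the first fixation that stays within
    DWELL_RADIUS_PX for at least DWELL_TIME_S, or None if gaze never settles.
    """
    anchor = None
    for t, x, y in samples:
        if anchor is None:
            anchor = (t, x, y)
            continue
        t0, x0, y0 = anchor
        if math.hypot(x - x0, y - y0) > DWELL_RADIUS_PX:
            anchor = (t, x, y)      # gaze moved away: restart the dwell timer
        elif t - t0 >= DWELL_TIME_S:
            return (x0, y0)         # fixation held long enough: trigger here
    return None
```

A dwell detected at the anchor point could then be dispatched as whichever action the interaction model currently has armed (Left-Click, Right-Click, Double-Click, and so on).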
-
Publication number: 20190033964
Abstract: The systems and methods described herein assist persons in using computers based on eye gaze, allowing such persons to control computing systems with various eye trackers. The systems and methods use eye trackers to control cursor (or other indicator) positioning on an operating system using the gaze location reported by the eye tracker, and employ an interaction model that allows control of a computer using eye gaze and dwell. The data from the eye tracker provides a gaze location on the screen. The systems and methods control a graphical user interface that is part of an operating system with respect to cursor positioning and associated actions such as Left-Click, Right-Click, Double-Click, and the like. The interaction model presents appropriate user interfaces to navigate the user through applications on the computing system.
Type: Application
Filed: July 26, 2017
Publication date: January 31, 2019
Inventors: Harish Sripad Kulkarni, Dwayne Lamb, Ann M. Paradiso, Eric N. Badger, Jonathan Thomas Campbell, Peter John Ansell, Jacob Daniel Cohen
-
Publication number: 20180130459
Abstract: Generation of expressive content is provided. An expressive synthesized speech system provides improved voice authoring user interfaces by which a user can efficiently author content for generating expressive output. The system provides an expressive keyboard for entering textual content and for selecting expressive operators, such as emoji objects or punctuation objects, that apply predetermined prosody attributes or visual effects to the textual content. A voicesetting editor mode enables the user to author and adjust particular prosody attributes associated with the content for composing carefully crafted synthetic speech. An active listening mode (ALM) is also provided; when it is selected, a set of ALM effect options is displayed, each associated with a particular sound effect and/or visual effect, enabling the user to respond rapidly with expressive vocal sound effects or visual effects while listening to others speak.
Type: Application
Filed: November 9, 2016
Publication date: May 10, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Ann M. Paradiso, Jonathan Campbell, Edward Bryan Cutrell, Harish Kulkarni, Meredith Morris, Alexander John Fiannaca, Kiley Rebecca Sobel
-
Publication number: 20140136626
Abstract: The description relates to interactive presentation feedback. One example can associate multiple mobile devices with a presentation, receive feedback relating to the presentation from at least some of the mobile devices, and aggregate the feedback into a visualization configured to be presented in parallel with the presentation. The example can also generate another visualization for an individual mobile device that generated individual feedback.
Type: Application
Filed: November 15, 2012
Publication date: May 15, 2014
Applicant: Microsoft Corporation
Inventors: Jaime Teevan, Carlos Garcia Jurado Suarez, Daniel J. Liebling, Ann M. Paradiso, Curtis N. Von Veh, Darren F. Gehring, James F. St. George, Anthony Carbary, Gavin Jancke
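The abstract above describes two outputs built from the same feedback stream: an aggregate visualization shown alongside the presentation, and a per-device view for each audience member. A minimal sketch of that aggregation step follows; the event format, reaction labels, and function name are hypothetical, not taken from the application:

```python
from collections import Counter

def aggregate_feedback(events):
    """Aggregate (device_id, reaction) feedback events from audience devices.

    Returns two views of the same stream:
      - overall: reaction counts feeding the presenter-facing visualization
      - per_device: one Counter per device, feeding each individual
        device's own visualization
    """
    overall = Counter(reaction for _, reaction in events)
    per_device = {}
    for device_id, reaction in events:
        per_device.setdefault(device_id, Counter())[reaction] += 1
    return overall, per_device
```

Computing both views in one pass over the event stream keeps the presenter's aggregate and each attendee's personal tally consistent with each other at any point during the talk.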