Patents by Inventor Ann M. Paradiso

Ann M. Paradiso has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220230374
    Abstract: Generation of expressive content is provided. An expressive synthesized speech system provides improved voice authoring user interfaces by which a user is enabled to efficiently author content for generating expressive output. The system provides an expressive keyboard for entering textual content and for selecting expressive operators, such as emoji objects or punctuation objects, that apply predetermined prosody attributes or visual effects to the textual content. A voicesetting editor mode enables the user to author and adjust particular prosody attributes associated with the content for composing carefully crafted synthetic speech. An active listening mode (ALM) is also provided; when the ALM is selected, a set of ALM effect options is displayed, wherein each option is associated with a particular sound effect and/or visual effect. The user is enabled to rapidly respond with expressive vocal sound effects or visual effects while listening to others speak.
    Type: Application
    Filed: April 5, 2022
    Publication date: July 21, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Ann M. Paradiso, Jonathan Campbell, Edward Bryan Cutrell, Harish Kulkarni, Meredith Morris, Alexander John Fiannaca, Kiley Rebecca Sobel
  • Patent number: 11321890
    Abstract: Generation of expressive content is provided. An expressive synthesized speech system provides improved voice authoring user interfaces by which a user is enabled to efficiently author content for generating expressive output. The system provides an expressive keyboard for entering textual content and for selecting expressive operators, such as emoji objects or punctuation objects, that apply predetermined prosody attributes or visual effects to the textual content. A voicesetting editor mode enables the user to author and adjust particular prosody attributes associated with the content for composing carefully crafted synthetic speech. An active listening mode (ALM) is also provided; when the ALM is selected, a set of ALM effect options is displayed, wherein each option is associated with a particular sound effect and/or visual effect. The user is enabled to rapidly respond with expressive vocal sound effects or visual effects while listening to others speak.
    Type: Grant
    Filed: November 9, 2016
    Date of Patent: May 3, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ann M. Paradiso, Jonathan Campbell, Edward Bryan Cutrell, Harish Kulkarni, Meredith Morris, Alexander John Fiannaca, Kiley Rebecca Sobel
  • Patent number: 10496162
    Abstract: The systems and methods described herein assist persons in using computers based on eye gaze, and allow such persons to control such computing systems using various eye trackers. The systems and methods described herein use eye trackers to control cursor (or some other indicator) positioning on an operating system using the gaze location reported by the eye tracker. The systems and methods described herein utilize an interaction model that allows control of a computer using eye gaze and dwell. The data from eye trackers provides a gaze location on the screen. The systems and methods described herein control a graphical user interface that is part of an operating system with respect to cursor positioning and associated actions such as Left-Click, Right-Click, Double-Click, and the like. The interaction model presents appropriate user interfaces to navigate the user through applications on the computing system.
    Type: Grant
    Filed: July 26, 2017
    Date of Patent: December 3, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Harish Sripad Kulkarni, Dwayne Lamb, Ann M. Paradiso, Eric N. Badger, Jonathan Thomas Campbell, Peter John Ansell, Jacob Daniel Cohen
  • Publication number: 20190033964
    Abstract: The systems and methods described herein assist persons in using computers based on eye gaze, and allow such persons to control such computing systems using various eye trackers. The systems and methods described herein use eye trackers to control cursor (or some other indicator) positioning on an operating system using the gaze location reported by the eye tracker. The systems and methods described herein utilize an interaction model that allows control of a computer using eye gaze and dwell. The data from eye trackers provides a gaze location on the screen. The systems and methods described herein control a graphical user interface that is part of an operating system with respect to cursor positioning and associated actions such as Left-Click, Right-Click, Double-Click, and the like. The interaction model presents appropriate user interfaces to navigate the user through applications on the computing system.
    Type: Application
    Filed: July 26, 2017
    Publication date: January 31, 2019
    Inventors: Harish Sripad Kulkarni, Dwayne Lamb, Ann M. Paradiso, Eric N. Badger, Jonathan Thomas Campbell, Peter John Ansell, Jacob Daniel Cohen
  • Publication number: 20180130459
    Abstract: Generation of expressive content is provided. An expressive synthesized speech system provides improved voice authoring user interfaces by which a user is enabled to efficiently author content for generating expressive output. The system provides an expressive keyboard for entering textual content and for selecting expressive operators, such as emoji objects or punctuation objects, that apply predetermined prosody attributes or visual effects to the textual content. A voicesetting editor mode enables the user to author and adjust particular prosody attributes associated with the content for composing carefully crafted synthetic speech. An active listening mode (ALM) is also provided; when the ALM is selected, a set of ALM effect options is displayed, wherein each option is associated with a particular sound effect and/or visual effect. The user is enabled to rapidly respond with expressive vocal sound effects or visual effects while listening to others speak.
    Type: Application
    Filed: November 9, 2016
    Publication date: May 10, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Ann M. Paradiso, Jonathan Campbell, Edward Bryan Cutrell, Harish Kulkarni, Meredith Morris, Alexander John Fiannaca, Kiley Rebecca Sobel
  • Publication number: 20140136626
    Abstract: The description relates to interactive presentation feedback. One example can associate multiple mobile devices with a presentation. This example can receive feedback relating to the presentation from at least some of the mobile devices and aggregate the feedback into a visualization that is configured to be presented in parallel with the presentation. The example can also generate another visualization for an individual mobile device that generated individual feedback.
    Type: Application
    Filed: November 15, 2012
    Publication date: May 15, 2014
    Applicant: Microsoft Corporation
    Inventors: Jaime Teevan, Carlos Garcia Jurado Suarez, Daniel J. Liebling, Ann M. Paradiso, Curtis N. Von Veh, Darren F. Gehring, James F. St. George, Anthony Carbary, Gavin Jancke
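
The expressive-operator mechanism described in patent 11321890 (and its related publications above) can be illustrated with a minimal sketch: an expressive operator such as an emoji or punctuation object maps to predetermined prosody attributes that are applied to the authored text. The specific operator-to-prosody mapping and the SSML-style output below are illustrative assumptions, not values or formats taken from the patent.

```python
# Hypothetical mapping from expressive operators (emoji / punctuation) to
# prosody attributes; the specific values here are illustrative assumptions.
OPERATORS = {
    "😀": {"rate": "fast", "pitch": "+15%"},
    "😢": {"rate": "slow", "pitch": "-10%"},
    "!":  {"volume": "loud"},
}

def apply_operator(text, operator):
    """Wrap text in an SSML-style <prosody> element for the given operator."""
    attrs = OPERATORS.get(operator)
    if attrs is None:
        return text  # unknown operator: leave the text unchanged
    attr_str = " ".join(f'{k}="{v}"' for k, v in sorted(attrs.items()))
    return f"<prosody {attr_str}>{text}</prosody>"

print(apply_operator("See you soon", "😀"))
# → <prosody pitch="+15%" rate="fast">See you soon</prosody>
```

A real system would hand the annotated markup to a speech synthesizer; here the markup itself stands in for that output.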
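
The gaze-and-dwell interaction model described in patent 10496162 can likewise be sketched: a selection fires when successive gaze samples stay within a small radius for a dwell interval. The dwell time, jitter radius, and simulated gaze stream below are illustrative assumptions, not parameters from the patent.

```python
import math

DWELL_TIME = 1.0     # seconds the gaze must stay put to trigger a click (assumed)
DWELL_RADIUS = 30.0  # pixels of allowed jitter around the dwell anchor (assumed)

class DwellDetector:
    """Detects a dwell: gaze samples staying within a radius for a time span."""
    def __init__(self, dwell_time=DWELL_TIME, radius=DWELL_RADIUS):
        self.dwell_time = dwell_time
        self.radius = radius
        self.anchor = None     # (x, y) where the current dwell started
        self.anchor_t = None   # timestamp of the first sample at the anchor

    def feed(self, t, x, y):
        """Feed one gaze sample; return the (x, y) of a click, or None."""
        if self.anchor is None or math.dist(self.anchor, (x, y)) > self.radius:
            # Gaze moved outside the radius: restart the dwell here.
            self.anchor, self.anchor_t = (x, y), t
            return None
        if t - self.anchor_t >= self.dwell_time:
            click_at = self.anchor
            self.anchor, self.anchor_t = None, None  # reset for the next dwell
            return click_at
        return None

# Simulated gaze samples as (timestamp, x, y): a steady fixation, then a saccade.
samples = [(0.0, 100, 100), (0.3, 105, 98), (0.6, 102, 103),
           (1.1, 101, 100), (1.4, 400, 300)]
detector = DwellDetector()
clicks = [c for s in samples if (c := detector.feed(*s)) is not None]
print(clicks)
# → [(100, 100)]
```

A full implementation would route the detected dwell through the operating system's input APIs as a left-, right-, or double-click; the detector above only isolates the dwell decision itself.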
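
The feedback-aggregation step described in publication 20140136626 can be sketched as counting per-device feedback items into a room-wide tally for the presenter's visualization, plus a per-device view. The device IDs and feedback labels below are illustrative assumptions.

```python
from collections import Counter

# Illustrative per-device feedback (labels and device IDs are assumptions).
feedback = {
    "device-1": ["agree", "confused"],
    "device-2": ["agree"],
    "device-3": ["confused", "confused"],
}

# Aggregate all devices' feedback into one tally for the shared visualization.
aggregate = Counter(item for items in feedback.values() for item in items)
print(aggregate.most_common())
# → [('confused', 3), ('agree', 2)]

def device_view(device_id):
    """Per-device visualization data: that device's own feedback vs. the room's."""
    return {"own": Counter(feedback[device_id]), "room": aggregate}
```

The aggregate would drive the visualization shown in parallel with the presentation, while `device_view` supplies the separate visualization generated for an individual device.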