Patents by Inventor Lisa Stifelman

Lisa Stifelman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20120259633
    Abstract: A completely hands-free exchange of messages, especially in portable devices, is provided through a combination of speech recognition, text-to-speech (TTS), and detection algorithms. An incoming message may be read aloud to a user, and the user enabled to respond to the sender with a reply message through audio input upon determining that the audio interaction mode is appropriate. Users may also be provided with options for responding in a different communication mode (e.g., a call) or performing other actions. Users may further be enabled to initiate a message exchange using natural language.
    Type: Application
    Filed: April 7, 2011
    Publication date: October 11, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: Liane Aihara, Shane Landry, Lisa Stifelman, Madhusudan Chinthakunta, Anne Sullivan, Kathleen Lee
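    A minimal sketch of the flow this abstract describes, assuming hypothetical speak/listen helpers in place of real TTS and speech recognition (an illustration, not the patented implementation):

        REPLIES = iter(["reply", "On my way"])   # scripted user speech for the demo

        def speak(text):                         # hypothetical text-to-speech stand-in
            print(f"[TTS] {text}")

        def listen():                            # hypothetical speech-recognition stand-in
            return next(REPLIES)

        def audio_mode_appropriate():            # e.g., headset on, user not in a meeting
            return True

        def handle_incoming(sender, body):
            if not audio_mode_appropriate():
                return                           # fall back to a visual notification
            speak(f"New message from {sender}: {body}")
            speak("Say 'reply', 'call', or 'ignore'.")
            choice = listen()
            if choice == "reply":
                speak("What is your reply?")
                speak(f"Sending to {sender}: {listen()}")
            elif choice == "call":
                speak(f"Calling {sender}...")    # respond in a different mode instead

        handle_incoming("Ana", "Running ten minutes late.")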
  • Publication number: 20120253790
    Abstract: Personalization of user interactions may be provided. Upon receiving a phrase from a user, a plurality of semantic concepts associated with the user may be loaded. If the phrase is determined to comprise at least one of the plurality of semantic concepts associated with the user, a first action may be performed according to the phrase. If the phrase is determined not to comprise at least one of the plurality of semantic concepts associated with the user, a second action may be performed according to the phrase.
    Type: Application
    Filed: March 31, 2011
    Publication date: October 4, 2012
    Applicant: Microsoft Corporation
    Inventors: Larry Paul Heck, Madhusudan Chinthakunta, David Mitby, Lisa Stifelman
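    One way the personalized/generic branch might look in code, with an invented per-user concept store (the abstract does not specify this representation):

        USER_CONCEPTS = {"mom": "Jane Doe", "the office": "1 Main St"}   # assumed store

        def handle_phrase(phrase):
            matches = [c for c in USER_CONCEPTS if c in phrase.lower()]
            if matches:
                resolved = {c: USER_CONCEPTS[c] for c in matches}
                return f"personalized action using {resolved}"   # first action
            return "generic action"                              # second action

        print(handle_phrase("Call mom"))         # phrase contains a personal concept
        print(handle_phrase("Call a plumber"))   # no personal concept matches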
  • Publication number: 20120254227
    Abstract: An augmented conversational understanding architecture may be provided. Upon receiving a natural language phrase from a user, the phrase may be translated into a search phrase and a search action may be performed on the search phrase.
    Type: Application
    Filed: March 31, 2011
    Publication date: October 4, 2012
    Applicant: Microsoft Corporation
    Inventors: Larry Paul Heck, Madhusudan Chinthakunta, David Mitby, Lisa Stifelman
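    A sketch of the translate-then-search step, assuming a simple stop-word rewrite as the translation (the actual translation method is not specified here):

        STOP_WORDS = {"i", "want", "to", "please", "can", "you", "find", "me", "a"}

        def to_search_phrase(natural_phrase):
            # drop conversational filler, keep the content terms
            return " ".join(w for w in natural_phrase.lower().split()
                            if w not in STOP_WORDS)

        def search(query):                       # hypothetical search back end
            return f"results for '{query}'"

        print(search(to_search_phrase("Can you find me a quiet Thai restaurant")))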
  • Publication number: 20120253788
    Abstract: An augmented conversational understanding agent may be provided. Upon receiving, by an agent, at least one natural language phrase from a user, a context associated with the at least one natural language phrase may be identified. The natural language phrase may be associated, for example, with a conversation between the user and a second user. An agent action associated with the identified context may be performed according to the at least one natural language phrase and a result associated with performing the action may be displayed.
    Type: Application
    Filed: March 31, 2011
    Publication date: October 4, 2012
    Applicant: Microsoft Corporation
    Inventors: Larry Paul Heck, Madhusudan Chinthakunta, David Mitby, Lisa Stifelman
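    A toy illustration of identifying a context from a two-party conversation and performing an agent action against it; the keyword heuristic is an assumption for the example only:

        def identify_context(turns):
            text = " ".join(turns).lower()
            return "dining" if "dinner" in text or "restaurant" in text else "general"

        def agent_action(context, phrase):
            if context == "dining":
                return f"searching restaurants relevant to: '{phrase}'"
            return f"no context-specific action for: '{phrase}'"

        conversation = ["Are we still on for dinner?", "Yes, somewhere downtown."]
        result = agent_action(identify_context(conversation), conversation[-1])
        print(result)                            # result displayed to the user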
  • Publication number: 20120253791
    Abstract: Identification of user intents may be provided. A plurality of network applications may be identified, and an ontology associated with each of the plurality of applications may be defined. If a phrase received from a user is associated with at least one of the defined ontologies, an action associated with the network application may be executed.
    Type: Application
    Filed: March 31, 2011
    Publication date: October 4, 2012
    Applicant: Microsoft Corporation
    Inventors: Larry Paul Heck, Madhusudan Chinthakunta, David Mitby, Lisa Stifelman
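    A sketch of routing a phrase to a network application by ontology match; the ontology contents and actions are invented for illustration:

        ONTOLOGIES = {
            "movies_app": {"movie", "ticket", "showtime"},
            "weather_app": {"weather", "forecast", "rain"},
        }

        def route(phrase):
            words = set(phrase.lower().split())
            for app, ontology in ONTOLOGIES.items():
                if words & ontology:             # phrase overlaps this ontology
                    return f"executing {app} action for: '{phrase}'"
            return "no matching application"

        print(route("Buy a movie ticket for tonight"))
        print(route("Will it rain tomorrow"))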
  • Publication number: 20120254810
    Abstract: A user interaction activation may be provided. A plurality of signals received from a user may be evaluated to determine whether the plurality of signals are associated with a visual display. If so, the plurality of signals may be translated into an agent action and a context associated with the visual display may be retrieved. The agent action may be performed according to the retrieved context and a result associated with the performed agent action may be displayed to the user.
    Type: Application
    Filed: March 31, 2011
    Publication date: October 4, 2012
    Applicant: Microsoft Corporation
    Inventors: Larry Paul Heck, Madhusudan Chinthakunta, David Mitby, Lisa Stifelman
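    A sketch of gating agent activation on whether the received signals relate to the current visual display; the signal shapes and context store are assumptions:

        def signals_target_display(signals, display):
            return any(s["target"] == display["id"] for s in signals)

        def perform(action, context):
            return f"performed '{action}' against display context {context}"

        display = {"id": "map_view", "context": {"city": "Seattle"}}
        signals = [{"kind": "gaze", "target": "map_view"},
                   {"kind": "speech", "target": "map_view"}]

        if signals_target_display(signals, display):
            # translate the signals into an agent action, run it against the
            # retrieved display context, and display the result to the user
            print(perform("zoom_to_city", display["context"]))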
  • Publication number: 20120253789
    Abstract: Conversational dialog learning and correction may be provided. Upon receiving a natural language phrase from a first user, at least one second user associated with the natural language phrase may be identified. A context state may be created according to the first user and the at least one second user. The natural language phrase may then be translated into an agent action according to the context state.
    Type: Application
    Filed: March 31, 2011
    Publication date: October 4, 2012
    Applicant: Microsoft Corporation
    Inventors: Larry Paul Heck, Madhusudan Chinthakunta, David Mitby, Lisa Stifelman
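    A minimal sketch of the context-state idea: identify the second user, build a state from both parties, then translate the phrase against it (the resolution rule is invented):

        def create_context_state(first_user, second_user):
            return {"speaker": first_user, "addressee": second_user}

        def to_agent_action(phrase, state):
            # assumed rule: "them" resolves to the second user in the state
            return f"agent action: {phrase.replace('them', state['addressee'])}"

        state = create_context_state("Alice", "Bob")
        print(to_agent_action("send them the directions", state))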
  • Publication number: 20120253802
    Abstract: Location-based conversational understanding may be provided. Upon receiving a query from a user, an environmental context associated with the query may be generated. The query may be interpreted according to the environmental context. The interpreted query may be executed and at least one result associated with the query may be provided to the user.
    Type: Application
    Filed: March 31, 2011
    Publication date: October 4, 2012
    Applicant: Microsoft Corporation
    Inventors: Larry Paul Heck, Madhusudan Chinthakunta, David Mitby, Lisa Stifelman
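    One way the environmental-context step could be sketched; the context fields and rewrite rule are assumptions, not the patented method:

        def environmental_context():             # e.g., derived from GPS and the clock
            return {"city": "Boston", "time": "9pm"}

        def interpret(query, ctx):
            return query.replace("near me", f"near {ctx['city']} at {ctx['time']}")

        ctx = environmental_context()
        print(interpret("coffee shops open near me", ctx))   # then executed as a search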
  • Publication number: 20100318366
    Abstract: The present invention provides a user interface for press-to-talk interaction via a touch-anywhere-to-speak module on a mobile computing device. Upon receiving an indication of a touch anywhere on the screen of a touch screen interface, the touch-anywhere-to-speak module activates the listening mechanism of a speech recognition module to accept audible user input and displays dynamic visual feedback of the measured sound level of the received input. The touch-anywhere-to-speak module may also provide the user with a more convenient and accurate speech recognition experience by applying data about the context of the touch (e.g., its relative location on the visual interface) in correlation with the spoken audible input.
    Type: Application
    Filed: June 10, 2009
    Publication date: December 16, 2010
    Applicant: Microsoft Corporation
    Inventors: Anne K. Sullivan, Lisa Stifelman, Kathleen J. Lee, Su Chuin Leong
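    A sketch of the touch-anywhere-to-speak activation loop; the sound levels are simulated and the region lookup is an invented stand-in for the touch-context data:

        def on_touch(x, y, screen_regions):
            context = next((name for name, (x0, y0, x1, y1) in screen_regions.items()
                            if x0 <= x <= x1 and y0 <= y <= y1), "unknown")
            print(f"listening... (touch context: {context})")
            for level in (0.2, 0.7, 0.4):        # simulated measured sound levels
                print("level: " + "#" * int(level * 10))   # dynamic visual feedback
            return context                       # may bias recognition results

        regions = {"map": (0, 0, 100, 50), "contacts": (0, 50, 100, 100)}
        on_touch(30, 20, regions)                # touch lands over the map region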
  • Publication number: 20100159909
    Abstract: A dynamically created and automatically updated personalized cloud of mobile tasks may be displayed on an interactive visual display via a personalized cloud generator application. The personalized cloud generator application may receive and/or capture information representing a mobile task performed by a mobile computing device user. The personalized cloud generator application may then store the information and determine a relevance of a given performed mobile task. If the relevance of the performed mobile task meets a prescribed threshold, the personalized cloud generator application may display a selectable visual representation (e.g., a selectable icon) of the performed mobile task. Given a user's activity, the visual representation may be automatically updated (displayed, removed, moved, resized, etc.) based on the information received and/or captured. Subsequent selection of the displayed visual representation allows quick and easy access to, or performance of, the associated mobile task.
    Type: Application
    Filed: December 24, 2008
    Publication date: June 24, 2010
    Applicant: Microsoft Corporation
    Inventor: Lisa Stifelman
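    A sketch of the relevance-thresholded task cloud; usage counting stands in for whatever relevance measure the patent actually covers:

        from collections import Counter

        THRESHOLD = 2                            # prescribed relevance threshold
        task_log = ["call:mom", "map:home", "call:mom", "web:news", "call:mom"]

        cloud = {task: n for task, n in Counter(task_log).items() if n >= THRESHOLD}

        for task, n in sorted(cloud.items(), key=lambda kv: -kv[1]):
            print(f"[icon:{task}] size={n}")     # larger icon for more-used tasks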
  • Publication number: 20070133771
    Abstract: Information associated with messages and/or missed calls is provided to a subscriber. Calls received but not answered by the subscriber may be monitored. Each monitored call is classified as either a missed call or a message. The monitored calls may be summarized based on a customizable rule set to create a summary. The summary is provided to the subscriber via, for example, a voice notification.
    Type: Application
    Filed: December 12, 2005
    Publication date: June 14, 2007
    Inventors: Lisa Stifelman, Karen Cross, Sarah Caplener, Rajeev Khurana, Anne Sullivan, Rao Surapaneni, Justin Ward, Angus Davis
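    A sketch of the classify-and-summarize flow; the grouping rule shown is one possible customization, not the patented rule set:

        calls = [
            {"caller": "Ana", "left_message": True},
            {"caller": "Raj", "left_message": False},
            {"caller": "Ana", "left_message": False},
        ]

        messages = [c for c in calls if c["left_message"]]
        missed = [c for c in calls if not c["left_message"]]

        summary = (f"You have {len(messages)} new message(s) and "
                   f"{len(missed)} missed call(s), from "
                   + ", ".join(sorted({c['caller'] for c in missed})) + ".")
        print(summary)                           # e.g., delivered as a voice notification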
  • Patent number: 5889843
    Abstract: A method and a system for audio communication between a plurality of users at a plurality of sites utilize a set of audio input sensors at each site. Each set of audio input sensors binaurally senses an auditory space in proximity thereto. A metaphorical representation of each of the sites is provided. Each metaphorical representation has a position which is variable within a metaphorical space. The metaphorical representations can be based upon, for example, a physical metaphor, a visual metaphor, an auditory metaphor, or a textual metaphor. The auditory space sensed at each site is combined to form at least one synthetic auditory space. The at least one synthetic auditory space is formed in dependence upon the position of each metaphorical representation within the metaphorical space. A binaurally perceivable auditory environment is produced at one or more sites based upon the at least one synthetic auditory space.
    Type: Grant
    Filed: March 4, 1996
    Date of Patent: March 30, 1999
    Assignee: Interval Research Corporation
    Inventors: Andrew Jay Singer, Sean Michael White, Glenn T. Edens, Roger C. Meike, Don Charnley, Debby Hindus, Wayne Burdick, Lisa Stifelman
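    A sketch of combining per-site audio into one synthetic auditory space, panned by each site's position in the metaphorical space. Pure illustration: real binaural spatialization would use head-related transfer functions, not the simple gain panning shown here:

        def mix(sites):
            n = max(len(s["samples"]) for s in sites)
            left, right = [0.0] * n, [0.0] * n
            for site in sites:
                pan = site["position"]           # -1.0 (hard left) .. +1.0 (hard right)
                for i, sample in enumerate(site["samples"]):
                    left[i] += sample * (1 - pan) / 2
                    right[i] += sample * (1 + pan) / 2
            return list(zip(left, right))        # binaurally perceivable output

        sites = [{"position": -0.8, "samples": [0.1, 0.3]},
                 {"position": 0.5, "samples": [0.2, 0.2]}]
        print(mix(sites))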