Patents by Inventor Lisa Stifelman

Lisa Stifelman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11922194
    Abstract: A method of operating a computing device in support of improved accessibility includes displaying a user interface to an application on a display screen of the computing device, wherein the computing device includes an accessibility assistant that reads an audible description of an element of the user interface; initiating, on the computing device, a virtual assistant that conducts an audible conversation between a user and the virtual assistant through at least a microphone and a speaker associated with the computing device, wherein the virtual assistant is not integrated with an operating system of the computing device; inhibiting an ability of the accessibility assistant to read the audible description of the element of the user interface; and upon transition of the virtual assistant from an active state, enabling the ability of the accessibility assistant.
    Type: Grant
    Filed: May 19, 2022
    Date of Patent: March 5, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jaclyn Carley Knapp, Lisa Stifelman, André Roberto Lima Tapajós, Jin Xu, Steven DiCarlo, Kaichun Wu, Yuhua Guan
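To illustrate the mechanism this abstract describes, here is a minimal Python sketch of suppressing a screen reader while a virtual assistant is active and restoring it afterward. All names (`AccessibilityCoordinator`, `on_assistant_state`) are hypothetical and not taken from the patent.

```python
class AccessibilityCoordinator:
    """Toggle a screen reader's speech off while a virtual assistant talks."""

    def __init__(self):
        # By default the accessibility assistant may read UI descriptions.
        self.screen_reader_enabled = True

    def on_assistant_state(self, active):
        # While the assistant conducts an audible conversation, inhibit the
        # screen reader so two voices do not overlap; re-enable it when the
        # assistant transitions out of its active state.
        self.screen_reader_enabled = not active

coord = AccessibilityCoordinator()
coord.on_assistant_state(active=True)
assert coord.screen_reader_enabled is False
coord.on_assistant_state(active=False)
assert coord.screen_reader_enabled is True
```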
  • Publication number: 20230376327
    Abstract: A method of operating a computing device in support of improved accessibility includes displaying a user interface to an application on a display screen of the computing device, wherein the computing device includes an accessibility assistant that reads an audible description of an element of the user interface; initiating, on the computing device, a virtual assistant that conducts an audible conversation between a user and the virtual assistant through at least a microphone and a speaker associated with the computing device, wherein the virtual assistant is not integrated with an operating system of the computing device; inhibiting an ability of the accessibility assistant to read the audible description of the element of the user interface; and upon transition of the virtual assistant from an active state, enabling the ability of the accessibility assistant.
    Type: Application
    Filed: May 19, 2022
    Publication date: November 23, 2023
    Inventors: Jaclyn Carley Knapp, Lisa Stifelman, André Roberto Lima Tapajós, Jin Xu, Steven DiCarlo, Kaichun Wu, Yuhua Guan
  • Patent number: 11218565
    Abstract: A primary virtual assistant and local virtual assistant herein can provide secondary information, including, without limitation, user-specific information, without having direct access to such information. For example, a user may invoke a secondary virtual assistant through the local virtual assistant connected to a primary virtual assistant system. This invocation may be sent through the primary virtual assistant to the third-party provider, i.e., the secondary virtual assistant. The secondary virtual assistant has access to the secondary information, for example, email, calendars, and other types of information specifically associated with the user or learned from past user actions, while this information is not directly available to the primary virtual assistant. This secondary information is then provided to the local virtual assistant in response to the invocation to be provided to the user.
    Type: Grant
    Filed: October 23, 2019
    Date of Patent: January 4, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Alice Jane Bernheim Brush, Lisa Stifelman, James Francis Gilsinan, IV, Karl Rolando Henderson, Jr., Robert Juan Miller, Nikhil Rajkumar Jain, Hanjiang Zhou, Oliver Scholz, Hisami Suzuki
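The relay described above can be sketched in a few lines: the primary assistant forwards an invocation to a secondary assistant that holds the user-specific data, and returns the answer without ever reading that data itself. The names and sample data are illustrative assumptions, not part of the patent.

```python
# Visible only to the secondary (third-party) assistant.
SECONDARY_DATA = {"calendar": "Dentist at 3 pm"}

def secondary_assistant(request):
    # The secondary assistant resolves the request against the
    # user-specific information it alone can access.
    return SECONDARY_DATA.get(request, "unknown request")

def primary_assistant(request):
    # The primary assistant never reads SECONDARY_DATA; it only relays the
    # invocation and passes the response back toward the local assistant.
    return secondary_assistant(request)

assert primary_assistant("calendar") == "Dentist at 3 pm"
```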
  • Patent number: 11178082
    Abstract: Methods, systems, and computer programs are presented for a smart communications assistant with an audio interface. One method includes an operation for getting messages addressed to a user. The messages are from one or more message sources, and each message comprises message data that includes text. The method further includes operations for analyzing the message data to determine a meaning of each message, for generating a score for each message based on the respective message data and the meaning of the message, and for generating a textual summary for the messages based on the message scores and the meaning of the messages. A speech summary is created based on the textual summary, and the speech summary is then sent to a speaker associated with the user. The audio interface further allows the user to verbally request actions for the messages.
    Type: Grant
    Filed: November 15, 2019
    Date of Patent: November 16, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Nikrouz Ghotbi, August Niehaus, Sachin Venugopalan, Aleksandar Antonijevic, Tvrtko Tadic, Vashutosh Agrawal, Lisa Stifelman
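The score-then-summarize pipeline in this abstract can be sketched with a toy relevance score standing in for the meaning analysis; the keyword weights and function names are assumptions for illustration only.

```python
def score_message(text):
    # Toy relevance score: urgent keywords weigh more (a stand-in for the
    # message-meaning analysis described in the abstract).
    keywords = {"urgent": 3, "meeting": 2, "invoice": 2}
    return sum(w for k, w in keywords.items() if k in text.lower())

def summarize(messages, top_n=2):
    # Rank messages by score and keep the most relevant ones; a real system
    # would synthesize this textual summary to speech for the user.
    ranked = sorted(messages, key=score_message, reverse=True)
    return " | ".join(ranked[:top_n])

msgs = ["Lunch photos", "Urgent: server down", "Meeting moved to 4pm"]
assert summarize(msgs) == "Urgent: server down | Meeting moved to 4pm"
```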
  • Publication number: 20210126985
    Abstract: A primary virtual assistant and local virtual assistant herein can provide secondary information, including, without limitation, user-specific information, without having direct access to such information. For example, a user may invoke a secondary virtual assistant through the local virtual assistant connected to a primary virtual assistant system. This invocation may be sent through the primary virtual assistant to the third-party provider, i.e., the secondary virtual assistant. The secondary virtual assistant has access to the secondary information, for example, email, calendars, and other types of information specifically associated with the user or learned from past user actions, while this information is not directly available to the primary virtual assistant. This secondary information is then provided to the local virtual assistant in response to the invocation to be provided to the user.
    Type: Application
    Filed: October 23, 2019
    Publication date: April 29, 2021
    Inventors: Alice Jane Bernheim Brush, Lisa Stifelman, James Francis Gilsinan, IV, Karl Rolando Henderson, Jr., Robert Juan Miller, Nikhil Rajkumar Jain, Hanjiang Zhou, Oliver Scholz, Hisami Suzuki
  • Patent number: 10866785
    Abstract: Input access may be provided. A user interface may be displayed on a user device. Upon receiving a selection of at least one element of the user interface, a plurality of input receiving modes of the user device may be activated.
    Type: Grant
    Filed: January 28, 2019
    Date of Patent: December 15, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Anne Kenny, Lisa Stifelman, Adam Elman, Ken Thai
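The short abstract above reduces to a single rule: selecting a UI element activates several input modes at once. A minimal sketch, with a hypothetical mode set chosen purely for illustration:

```python
def activate_input_modes(element_selected):
    # Selecting a UI element activates a plurality of input-receiving modes
    # at once, so the user can respond by voice, touch, or keyboard.
    return {"voice", "touch", "keyboard"} if element_selected else set()

assert activate_input_modes(True) == {"voice", "touch", "keyboard"}
assert activate_input_modes(False) == set()
```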
  • Patent number: 10848614
    Abstract: A dynamically created and automatically updated personalized cloud of mobile tasks may be displayed on an interactive visual display via a personalized cloud generator application. The personalized cloud generator application may receive and/or capture information representing a mobile task performed by a mobile computing device user. The personalized cloud generator application may then store the information and determine a relevance of a given performed mobile task. If the relevance of the performed mobile task meets a prescribed threshold, the personal cloud generator application may display a selectable visual representation (e.g., selectable icon) of the performed mobile task. Given a user's activity, the visual representation may be automatically updated (displayed, removed, moved, resized, etc.) based on the information received and/or captured. Subsequent selection of the displayed visual representation allows quick and easy access or performance of the associated mobile task.
    Type: Grant
    Filed: March 11, 2014
    Date of Patent: November 24, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Lisa Stifelman
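The thresholding step this abstract describes can be sketched as follows; the usage counts standing in for "relevance" are an assumption, since the patent leaves the relevance measure open.

```python
def visible_tasks(task_counts, threshold):
    # A performed mobile task earns a selectable visual representation only
    # when its relevance (approximated here by a usage count) meets the
    # prescribed threshold.
    return sorted(t for t, n in task_counts.items() if n >= threshold)

usage = {"email": 12, "maps": 3, "camera": 7}
assert visible_tasks(usage, threshold=5) == ["camera", "email"]
```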
  • Patent number: 10642934
    Abstract: An augmented conversational understanding architecture may be provided. Upon receiving a natural language phrase from a user, the phrase may be translated into a search phrase and a search action may be performed on the search phrase.
    Type: Grant
    Filed: March 31, 2011
    Date of Patent: May 5, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Larry Paul Heck, Madhusudan Chinthakunta, David Mitby, Lisa Stifelman
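A toy version of the phrase-to-search translation above: drop filler words, then form the search action as a query string. The stop-word list is a simplifying assumption; the patent does not specify the translation method.

```python
def to_search_action(phrase):
    # Translate a natural language phrase into a search phrase by dropping
    # filler words, then perform the search action (stubbed as building a
    # query string).
    stop = {"i", "want", "to", "a", "the", "find", "me"}
    terms = [w for w in phrase.lower().split() if w not in stop]
    return "search:" + "+".join(terms)

assert to_search_action("Find me a sushi place") == "search:sushi+place"
```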
  • Publication number: 20200084166
    Abstract: Methods, systems, and computer programs are presented for a smart communications assistant with an audio interface. One method includes an operation for getting messages addressed to a user. The messages are from one or more message sources, and each message comprises message data that includes text. The method further includes operations for analyzing the message data to determine a meaning of each message, for generating a score for each message based on the respective message data and the meaning of the message, and for generating a textual summary for the messages based on the message scores and the meaning of the messages. A speech summary is created based on the textual summary, and the speech summary is then sent to a speaker associated with the user. The audio interface further allows the user to verbally request actions for the messages.
    Type: Application
    Filed: November 15, 2019
    Publication date: March 12, 2020
    Inventors: Nikrouz Ghotbi, August Niehaus, Sachin Venugopalan, Aleksandar Antonijevic, Tvrtko Tadic, Vashutosh Agrawal, Lisa Stifelman
  • Patent number: 10585957
    Abstract: Identification of user intents may be provided. A plurality of network applications may be identified, and an ontology associated with each of the plurality of applications may be defined. If a phrase received from a user is associated with at least one of the defined ontologies, an action associated with the network application may be executed.
    Type: Grant
    Filed: November 20, 2017
    Date of Patent: March 10, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Larry Paul Heck, Madhusudan Chinthakunta, David Mitby, Lisa Stifelman
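The intent-identification scheme above pairs each network application with an ontology and fires the application whose ontology a phrase touches. A keyword-set sketch, with hypothetical applications and vocabularies:

```python
# Each network application is described by an ontology (here, a keyword set).
ONTOLOGIES = {
    "movies": {"showtimes", "tickets", "theater"},
    "dining": {"reservation", "restaurant", "menu"},
}

def match_application(phrase):
    words = set(phrase.lower().split())
    for app, vocab in ONTOLOGIES.items():
        if words & vocab:  # the phrase is associated with this ontology
            return app
    return None

assert match_application("book a restaurant reservation") == "dining"
assert match_application("hello there") is None
```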
  • Publication number: 20200036762
    Abstract: Individuals may utilize devices to engage in conversations about topics respectively associated with a location (e.g., restaurants where the individuals may meet for dinner). Often, the individual momentarily withdraws from the conversation in order to issue commands to the device to retrieve and present such information, and may miss parts of the conversation while interacting with the device. Additionally, the individual often explores such topics individually on a device and conveys such information to the other individuals through messages, which is inefficient and error-prone. Presented herein are techniques enabling devices to facilitate conversations by monitoring the conversation for references, by one individual to another (rather than as a command to the device), to a topic associated with a location. In the absence of a command from an individual, the device may automatically present a map alongside a conversation interface showing the location(s) of the topic(s) referenced in the conversation.
    Type: Application
    Filed: August 5, 2019
    Publication date: January 30, 2020
    Inventors: Lisa Stifelman, Madhusudan Chinthakunta, Julian James Odell, Larry Paul Heck, Daniel Dole
  • Patent number: 10516637
    Abstract: Methods, systems, and computer programs are presented for a smart communications assistant with an audio interface. One method includes an operation for getting messages addressed to a user. The messages are from one or more message sources, and each message comprises message data that includes text. The method further includes operations for analyzing the message data to determine a meaning of each message, for generating a score for each message based on the respective message data and the meaning of the message, and for generating a textual summary for the messages based on the message scores and the meaning of the messages. A speech summary is created based on the textual summary, and the speech summary is then sent to a speaker associated with the user. The audio interface further allows the user to verbally request actions for the messages.
    Type: Grant
    Filed: October 17, 2017
    Date of Patent: December 24, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Nikrouz Ghotbi, August Niehaus, Sachin Venugopalan, Aleksandar Antonijevic, Tvrtko Tadic, Vashutosh Agrawal, Lisa Stifelman
  • Patent number: 10389543
    Abstract: A computing device is provided, which may include an input device configured to receive natural user input, and an application program executed by a processor of the computing device, the application program configured to: retrieve an electronic calendar including calendar data for one or more meeting events, each meeting event including a meeting time and meeting data, receive a generic meeting invocation request via a natural user input detected by the input device, based on at least receiving the generic meeting invocation request at a point in time, search the electronic calendar for a meeting event having a meeting time that is within a threshold time period of the point in time that the natural user input was received, and start the meeting event including processing the meeting data for the meeting event.
    Type: Grant
    Filed: June 28, 2016
    Date of Patent: August 20, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Isaiah Ng, Reza Ferrydiansyah, Christopher M. Becker, Chad Roberts, Roberto Sonnino, Lisa Stifelman
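The calendar search this abstract claims is easy to sketch: on a generic "start my meeting" request, find the event whose start time falls within a threshold of the request time. The ten-minute threshold and data shapes are illustrative assumptions.

```python
from datetime import datetime, timedelta

def find_meeting(calendar, now, threshold=timedelta(minutes=10)):
    # Search the calendar for a meeting event whose start time is within
    # the threshold of the moment the generic invocation was received.
    for title, start in calendar:
        if abs(start - now) <= threshold:
            return title
    return None

calendar = [("Standup", datetime(2019, 6, 28, 10, 5)),
            ("Review", datetime(2019, 6, 28, 14, 0))]
assert find_meeting(calendar, datetime(2019, 6, 28, 10, 0)) == "Standup"
```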
  • Patent number: 10375129
    Abstract: Individuals may utilize devices to engage in conversations about topics respectively associated with a location (e.g., restaurants where the individuals may meet for dinner). Often, the individual momentarily withdraws from the conversation in order to issue commands to the device to retrieve and present such information, and may miss parts of the conversation while interacting with the device. Additionally, the individual often explores such topics individually on a device and conveys such information to the other individuals through messages, which is inefficient and error-prone. Presented herein are techniques enabling devices to facilitate conversations by monitoring the conversation for references, by one individual to another (rather than as a command to the device), to a topic associated with a location. In the absence of a command from an individual, the device may automatically present a map alongside a conversation interface showing the location(s) of the topic(s) referenced in the conversation.
    Type: Grant
    Filed: June 17, 2014
    Date of Patent: August 6, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Lisa Stifelman, Madhusudan Chinthakunta, Julian James Odell, Larry Paul Heck, Daniel Dole
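The passive monitoring described above can be sketched with a toy gazetteer: the device watches utterances between people (rather than commands addressed to it) and could pin any referenced locations on a map beside the conversation. The place names and coordinates are invented for illustration.

```python
# Toy gazetteer; a real system would resolve place references far more
# robustly (entity linking, geocoding, context).
KNOWN_PLACES = {"luigi's": (47.61, -122.33), "pho house": (47.62, -122.35)}

def places_mentioned(utterance):
    # Scan an utterance between participants for known place references.
    text = utterance.lower()
    return [p for p in KNOWN_PLACES if p in text]

assert places_mentioned("Want to meet at Luigi's for dinner?") == ["luigi's"]
```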
  • Publication number: 20190155570
    Abstract: Input access may be provided. A user interface may be displayed on a user device. Upon receiving a selection of at least one element of the user interface, a plurality of input receiving modes of the user device may be activated.
    Type: Application
    Filed: January 28, 2019
    Publication date: May 23, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Anne Kenny, Lisa Stifelman, Adam Elman, Ken Thai
  • Patent number: 10296587
    Abstract: An augmented conversational understanding agent may be provided. Upon receiving, by an agent, at least one natural language phrase from a user, a context associated with the at least one natural language phrase may be identified. The natural language phrase may be associated, for example, with a conversation between the user and a second user. An agent action associated with the identified context may be performed according to the at least one natural language phrase and a result associated with performing the action may be displayed.
    Type: Grant
    Filed: June 12, 2017
    Date of Patent: May 21, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Larry Paul Heck, Madhusudan Chinthakunta, David Mitby, Lisa Stifelman
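A minimal sketch of the agent behavior above: resolve an elliptical phrase using context identified from the conversation, then perform and surface the action. The phrase, context, and action are all hypothetical.

```python
def agent_action(phrase, context):
    # The agent resolves the user's phrase against context identified from
    # the surrounding conversation, performs the associated action, and
    # returns a displayable result.
    if phrase == "get directions":
        return "directions to " + context
    return "no action"

assert agent_action("get directions", "the museum") == "directions to the museum"
```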
  • Publication number: 20190116144
    Abstract: Methods, systems, and computer programs are presented for a smart communications assistant with an audio interface. One method includes an operation for getting messages addressed to a user. The messages are from one or more message sources, and each message comprises message data that includes text. The method further includes operations for analyzing the message data to determine a meaning of each message, for generating a score for each message based on the respective message data and the meaning of the message, and for generating a textual summary for the messages based on the message scores and the meaning of the messages. A speech summary is created based on the textual summary, and the speech summary is then sent to a speaker associated with the user. The audio interface further allows the user to verbally request actions for the messages.
    Type: Application
    Filed: October 17, 2017
    Publication date: April 18, 2019
    Inventors: Nikrouz Ghotbi, August Niehaus, Sachin Venugopalan, Aleksandar Antonijevic, Tvrtko Tadic, Vashutosh Agrawal, Lisa Stifelman
  • Patent number: 10209954
    Abstract: Input access may be provided. A user interface may be displayed on a user device. Upon receiving a selection of at least one element of the user interface, a plurality of input receiving modes of the user device may be activated.
    Type: Grant
    Filed: February 14, 2012
    Date of Patent: February 19, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Anne Sullivan, Lisa Stifelman, Adam Elman, Ken Thai
  • Patent number: 10049667
    Abstract: Location-based conversational understanding may be provided. Upon receiving a query from a user, an environmental context associated with the query may be generated. The query may be interpreted according to the environmental context. The interpreted query may be executed and at least one result associated with the query may be provided to the user.
    Type: Grant
    Filed: January 7, 2016
    Date of Patent: August 14, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Larry Paul Heck, Madhusudan Chinthakunta, David Mitby, Lisa Stifelman
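The interpretation step above can be illustrated by rewriting a context-dependent query with the environmental context generated when the query was received; using the user's city as that context is an assumption for this sketch.

```python
def interpret(query, context):
    # Rewrite a context-dependent query using the environmental context
    # (here, the user's current city) before executing it.
    return query.replace("nearby", "in " + context["city"])

assert interpret("coffee nearby", {"city": "Seattle"}) == "coffee in Seattle"
```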
  • Publication number: 20180129646
    Abstract: An augmented conversational understanding agent may be provided. Upon receiving, by an agent, at least one natural language phrase from a user, a context associated with the at least one natural language phrase may be identified. The natural language phrase may be associated, for example, with a conversation between the user and a second user. An agent action associated with the identified context may be performed according to the at least one natural language phrase and a result associated with performing the action may be displayed.
    Type: Application
    Filed: June 12, 2017
    Publication date: May 10, 2018
    Inventors: Larry Paul Heck, Madhusudan Chinthakunta, David Mitby, Lisa Stifelman