Patents by Inventor Lisa Stifelman
Lisa Stifelman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11922194
Abstract: A method of operating a computing device in support of improved accessibility includes displaying a user interface to an application on a display screen of the computing device, wherein the computing device includes an accessibility assistant that reads an audible description of an element of the user interface; initiating, on the computing device, a virtual assistant that conducts an audible conversation between a user and the virtual assistant through at least a microphone and a speaker associated with the computing device, wherein the virtual assistant is not integrated with an operating system of the computing device; inhibiting an ability of the accessibility assistant to read the audible description of the element of the user interface; and upon transition of the virtual assistant from an active state, enabling the ability of the accessibility assistant.
Type: Grant
Filed: May 19, 2022
Date of Patent: March 5, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jaclyn Carley Knapp, Lisa Stifelman, André Roberto Lima Tapajós, Jin Xu, Steven DiCarlo, Kaichun Wu, Yuhua Guan
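The coordination the abstract describes — silencing a screen reader while an app-level voice assistant holds an audio conversation, then restoring it — can be sketched as a small state hand-off. This is a hypothetical illustration; the class and method names are not from the patent.

```python
class AccessibilityAssistant:
    """Reads audible descriptions of UI elements unless inhibited."""

    def __init__(self):
        self.inhibited = False

    def describe(self, element):
        # When inhibited, stay silent so the virtual assistant's
        # audio conversation is not interrupted by screen-reader speech.
        if self.inhibited:
            return None
        return f"Screen reader: {element}"


class VirtualAssistant:
    """An app-level assistant (not part of the OS) that suppresses the
    accessibility assistant while it is in an active state."""

    def __init__(self, accessibility):
        self.accessibility = accessibility
        self.active = False

    def activate(self):
        self.active = True
        self.accessibility.inhibited = True   # inhibit audible descriptions

    def deactivate(self):
        # On transition out of the active state, re-enable the reader.
        self.active = False
        self.accessibility.inhibited = False
```

In use, activating the assistant silences `describe()` and deactivating restores it.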
-
Publication number: 20230376327
Abstract: A method of operating a computing device in support of improved accessibility includes displaying a user interface to an application on a display screen of the computing device, wherein the computing device includes an accessibility assistant that reads an audible description of an element of the user interface; initiating, on the computing device, a virtual assistant that conducts an audible conversation between a user and the virtual assistant through at least a microphone and a speaker associated with the computing device, wherein the virtual assistant is not integrated with an operating system of the computing device; inhibiting an ability of the accessibility assistant to read the audible description of the element of the user interface; and upon transition of the virtual assistant from an active state, enabling the ability of the accessibility assistant.
Type: Application
Filed: May 19, 2022
Publication date: November 23, 2023
Inventors: Jaclyn Carley Knapp, Lisa Stifelman, André Roberto Lima Tapajós, Jin Xu, Steven DiCarlo, Kaichun Wu, Yuhua Guan
-
Patent number: 11218565
Abstract: A primary virtual assistant and local virtual assistant herein can provide secondary information, including, without limitation, user-specific information, without having direct access to such information. For example, a user may invoke a secondary virtual assistant through the local virtual assistant connected to a primary virtual assistant system. This invocation may be sent through the primary virtual assistant to the third-party provider, i.e., the secondary virtual assistant. The secondary virtual assistant has access to the secondary information, for example, email, calendars, and other types of information specifically associated with the user or learned from past user actions, while this information is not directly available to the primary virtual assistant. This secondary information is then provided to the local virtual assistant in response to the invocation, to be provided to the user.
Type: Grant
Filed: October 23, 2019
Date of Patent: January 4, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Alice Jane Bernheim Brush, Lisa Stifelman, James Francis Gilsinan, IV, Karl Rolando Henderson, Jr., Robert Juan Miller, Nikhil Rajkumar Jain, Hanjiang Zhou, Oliver Scholz, Hisami Suzuki
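The routing pattern in this abstract — a local assistant forwarding an invocation through a primary assistant to a third-party secondary assistant that alone holds the user-specific data — can be sketched as three cooperating objects. All class names and the data shape here are assumptions for illustration, not the patent's terminology.

```python
class SecondaryAssistant:
    """Third-party assistant with direct access to user-specific data
    (e.g. email or calendar entries)."""

    def __init__(self, user_data):
        self._user_data = user_data

    def handle(self, request):
        return self._user_data.get(request, "unknown request")


class PrimaryAssistant:
    """Relays invocations to the secondary assistant without ever
    holding the user-specific data itself."""

    def __init__(self, secondary):
        self._secondary = secondary

    def relay(self, request):
        return self._secondary.handle(request)


class LocalAssistant:
    """The device-side endpoint the user actually talks to."""

    def __init__(self, primary):
        self._primary = primary

    def invoke(self, request):
        # Response flows back through the same chain to reach the user.
        return self._primary.relay(request)
```

Note the design point: `PrimaryAssistant` only passes requests along, mirroring the abstract's claim that the primary assistant never has direct access to the secondary information.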
-
Patent number: 11178082
Abstract: Methods, systems, and computer programs are presented for a smart communications assistant with an audio interface. One method includes an operation for getting messages addressed to a user. The messages come from one or more message sources, and each message comprises message data that includes text. The method further includes operations for analyzing the message data to determine a meaning of each message, for generating a score for each message based on the respective message data and the meaning of the message, and for generating a textual summary for the messages based on the message scores and the meaning of the messages. A speech summary is created based on the textual summary, and the speech summary is then sent to a speaker associated with the user. The audio interface further allows the user to verbally request actions for the messages.
Type: Grant
Filed: November 15, 2019
Date of Patent: November 16, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Nikrouz Ghotbi, August Niehaus, Sachin Venugopalan, Aleksandar Antonijevic, Tvrtko Tadic, Vashutosh Agrawal, Lisa Stifelman
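The score-then-summarize pipeline in this abstract can be sketched in a few lines. The scoring heuristic below (a sender weight times keyword matches) is purely an illustrative stand-in; the patent does not specify how scores are computed, and the keyword set and field names are assumptions.

```python
# Illustrative keyword set standing in for a learned "meaning" signal.
KEYWORDS = {"urgent", "deadline", "meeting"}

def score_message(msg):
    """Score a message dict with 'sender', 'text', and optional
    'important_sender' keys. Higher scores rank earlier."""
    words = set(msg["text"].lower().split())
    keyword_hits = len(words & KEYWORDS)
    sender_weight = 2 if msg.get("important_sender") else 1
    return sender_weight * (1 + keyword_hits)

def summarize(messages, top_n=2):
    """Build a textual summary of the top-scoring messages; a speech
    summary would be produced by handing this string to a TTS engine."""
    ranked = sorted(messages, key=score_message, reverse=True)
    lines = [f"{m['sender']}: {m['text']}" for m in ranked[:top_n]]
    return " | ".join(lines)
```
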
-
Publication number: 20210126985
Abstract: A primary virtual assistant and local virtual assistant herein can provide secondary information, including, without limitation, user-specific information, without having direct access to such information. For example, a user may invoke a secondary virtual assistant through the local virtual assistant connected to a primary virtual assistant system. This invocation may be sent through the primary virtual assistant to the third-party provider, i.e., the secondary virtual assistant. The secondary virtual assistant has access to the secondary information, for example, email, calendars, and other types of information specifically associated with the user or learned from past user actions, while this information is not directly available to the primary virtual assistant. This secondary information is then provided to the local virtual assistant in response to the invocation, to be provided to the user.
Type: Application
Filed: October 23, 2019
Publication date: April 29, 2021
Inventors: Alice Jane Bernheim Brush, Lisa Stifelman, James Francis Gilsinan, IV, Karl Rolando Henderson, Jr., Robert Juan Miller, Nikhil Rajkumar Jain, Hanjiang Zhou, Oliver Scholz, Hisami Suzuki
-
Patent number: 10866785
Abstract: Input access may be provided. A user interface may be displayed on a user device. Upon receiving a selection of at least one element of the user interface, a plurality of input receiving modes of the user device may be activated.
Type: Grant
Filed: January 28, 2019
Date of Patent: December 15, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Anne Kenny, Lisa Stifelman, Adam Elman, Ken Thai
-
Patent number: 10848614
Abstract: A dynamically created and automatically updated personalized cloud of mobile tasks may be displayed on an interactive visual display via a personalized cloud generator application. The personalized cloud generator application may receive and/or capture information representing a mobile task performed by a mobile computing device user. The personalized cloud generator application may then store the information and determine a relevance of a given performed mobile task. If the relevance of the performed mobile task meets a prescribed threshold, the personal cloud generator application may display a selectable visual representation (e.g., selectable icon) of the performed mobile task. Given a user's activity, the visual representation may be automatically updated (displayed, removed, moved, resized, etc.) based on the information received and/or captured. Subsequent selection of the displayed visual representation allows quick and easy access or performance of the associated mobile task.
Type: Grant
Filed: March 11, 2014
Date of Patent: November 24, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventor: Lisa Stifelman
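The threshold behavior this abstract describes — show a task in the cloud only once its relevance meets a prescribed threshold — can be sketched as follows. Using a raw usage count as the relevance measure is an assumption for illustration; the patent leaves the relevance computation open.

```python
class TaskCloud:
    """Tracks performed mobile tasks and exposes the ones whose
    relevance (here, a simple usage count) meets the threshold."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self._counts = {}

    def record(self, task):
        """Capture one performance of a mobile task."""
        self._counts[task] = self._counts.get(task, 0) + 1

    def visible_tasks(self):
        """Tasks meeting the threshold, most relevant first — these are
        the candidates for selectable icons in the cloud display."""
        shown = [t for t, c in self._counts.items() if c >= self.threshold]
        return sorted(shown, key=lambda t: -self._counts[t])
```
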
-
Patent number: 10642934
Abstract: An augmented conversational understanding architecture may be provided. Upon receiving a natural language phrase from a user, the phrase may be translated into a search phrase and a search action may be performed on the search phrase.
Type: Grant
Filed: March 31, 2011
Date of Patent: May 5, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Larry Paul Heck, Madhusudan Chinthakunta, David Mitby, Lisa Stifelman
-
Publication number: 20200084166
Abstract: Methods, systems, and computer programs are presented for a smart communications assistant with an audio interface. One method includes an operation for getting messages addressed to a user. The messages come from one or more message sources, and each message comprises message data that includes text. The method further includes operations for analyzing the message data to determine a meaning of each message, for generating a score for each message based on the respective message data and the meaning of the message, and for generating a textual summary for the messages based on the message scores and the meaning of the messages. A speech summary is created based on the textual summary, and the speech summary is then sent to a speaker associated with the user. The audio interface further allows the user to verbally request actions for the messages.
Type: Application
Filed: November 15, 2019
Publication date: March 12, 2020
Inventors: Nikrouz Ghotbi, August Niehaus, Sachin Venugopalan, Aleksandar Antonijevic, Tvrtko Tadic, Vashutosh Agrawal, Lisa Stifelman
-
Patent number: 10585957
Abstract: Identification of user intents may be provided. A plurality of network applications may be identified, and an ontology associated with each of the plurality of applications may be defined. If a phrase received from a user is associated with at least one of the defined ontologies, an action associated with the network application may be executed.
Type: Grant
Filed: November 20, 2017
Date of Patent: March 10, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Larry Paul Heck, Madhusudan Chinthakunta, David Mitby, Lisa Stifelman
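The dispatch described in this abstract — match a user phrase against per-application ontologies and execute the matching application's action — can be sketched minimally. Representing each ontology as a set of trigger terms is a deliberate simplification, and the application names, terms, and actions below are all hypothetical.

```python
# Each network application declares an ontology, simplified here to a
# set of trigger terms.
ONTOLOGIES = {
    "movies_app": {"movie", "showtimes", "ticket"},
    "weather_app": {"weather", "forecast", "rain"},
}

# Action associated with each application.
ACTIONS = {
    "movies_app": lambda phrase: "searching showtimes",
    "weather_app": lambda phrase: "fetching forecast",
}

def dispatch(phrase):
    """Execute the action of the first application whose ontology the
    phrase matches; return None if no ontology matches."""
    words = set(phrase.lower().split())
    for app, terms in ONTOLOGIES.items():
        if words & terms:
            return ACTIONS[app](phrase)
    return None
```
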
-
Publication number: 20200036762
Abstract: Individuals may utilize devices to engage in conversations about topics respectively associated with a location (e.g., restaurants where the individuals may meet for dinner). Often, the individual momentarily withdraws from the conversation in order to issue commands to the device to retrieve and present such information, and may miss parts of the conversation while interacting with the device. Additionally, the individual often explores such topics individually on a device and conveys such information to the other individuals through messages, which is inefficient and error-prone. Presented herein are techniques enabling devices to facilitate conversations by monitoring the conversation for references, by one individual to another (rather than as a command to the device), to a topic associated with a location. In the absence of a command from an individual, the device may automatically present a map alongside a conversation interface showing the location(s) of the topic(s) referenced in the conversation.
Type: Application
Filed: August 5, 2019
Publication date: January 30, 2020
Inventors: Lisa Stifelman, Madhusudan Chinthakunta, Julian James Odell, Larry Paul Heck, Daniel Dole
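The monitoring step above — scanning utterances addressed to another person for place references and collecting their locations for a map panel — can be sketched with a place gazetteer. The gazetteer, place names, and coordinates below are stand-in assumptions; a real system would use a geocoding service and speech understanding rather than substring matching.

```python
# Hypothetical gazetteer mapping place names to (lat, lon).
PLACES = {
    "luigi's": (47.61, -122.33),
    "thai palace": (47.62, -122.35),
}

def locations_mentioned(utterances):
    """Return (place, coords) pairs referenced anywhere in the
    conversation, in first-mention order, without duplicates. These
    would feed the map shown alongside the conversation interface."""
    found = []
    for line in utterances:
        text = line.lower()
        for place, coords in PLACES.items():
            if place in text and (place, coords) not in found:
                found.append((place, coords))
    return found
```
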
-
Patent number: 10516637
Abstract: Methods, systems, and computer programs are presented for a smart communications assistant with an audio interface. One method includes an operation for getting messages addressed to a user. The messages come from one or more message sources, and each message comprises message data that includes text. The method further includes operations for analyzing the message data to determine a meaning of each message, for generating a score for each message based on the respective message data and the meaning of the message, and for generating a textual summary for the messages based on the message scores and the meaning of the messages. A speech summary is created based on the textual summary, and the speech summary is then sent to a speaker associated with the user. The audio interface further allows the user to verbally request actions for the messages.
Type: Grant
Filed: October 17, 2017
Date of Patent: December 24, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Nikrouz Ghotbi, August Niehaus, Sachin Venugopalan, Aleksandar Antonijevic, Tvrtko Tadic, Vashutosh Agrawal, Lisa Stifelman
-
Patent number: 10389543
Abstract: A computing device is provided, which may include an input device configured to receive natural user input, and an application program executed by a processor of the computing device, the application program configured to: retrieve an electronic calendar including calendar data for one or more meeting events, each meeting event including a meeting time and meeting data; receive a generic meeting invocation request via a natural user input detected by the input device; based on at least receiving the generic meeting invocation request at a point in time, search the electronic calendar for a meeting event having a meeting time that is within a threshold time period of the point in time that the natural user input was received; and start the meeting event, including processing the meeting data for the meeting event.
Type: Grant
Filed: June 28, 2016
Date of Patent: August 20, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Isaiah Ng, Reza Ferrydiansyah, Christopher M. Becker, Chad Roberts, Roberto Sonnino, Lisa Stifelman
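The calendar search this abstract describes — given a generic "start my meeting" request at time t, find the event whose meeting time falls within a threshold of t — is a simple nearest-event lookup. The ten-minute threshold and the event dictionary shape are illustrative assumptions.

```python
from datetime import datetime, timedelta

def find_current_meeting(events, now, threshold=timedelta(minutes=10)):
    """Return the first event whose meeting time is within `threshold`
    of `now` (the moment the generic invocation was received), else None."""
    for event in events:
        if abs(event["time"] - now) <= threshold:
            return event
    return None
```

A caller would pass the result's meeting data (dial-in link, attendees, etc.) to whatever starts the meeting.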
-
Patent number: 10375129
Abstract: Individuals may utilize devices to engage in conversations about topics respectively associated with a location (e.g., restaurants where the individuals may meet for dinner). Often, the individual momentarily withdraws from the conversation in order to issue commands to the device to retrieve and present such information, and may miss parts of the conversation while interacting with the device. Additionally, the individual often explores such topics individually on a device and conveys such information to the other individuals through messages, which is inefficient and error-prone. Presented herein are techniques enabling devices to facilitate conversations by monitoring the conversation for references, by one individual to another (rather than as a command to the device), to a topic associated with a location. In the absence of a command from an individual, the device may automatically present a map alongside a conversation interface showing the location(s) of the topic(s) referenced in the conversation.
Type: Grant
Filed: June 17, 2014
Date of Patent: August 6, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Lisa Stifelman, Madhusudan Chinthakunta, Julian James Odell, Larry Paul Heck, Daniel Dole
-
Publication number: 20190155570
Abstract: Input access may be provided. A user interface may be displayed on a user device. Upon receiving a selection of at least one element of the user interface, a plurality of input receiving modes of the user device may be activated.
Type: Application
Filed: January 28, 2019
Publication date: May 23, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Anne Kenny, Lisa Stifelman, Adam Elman, Ken Thai
-
Patent number: 10296587
Abstract: An augmented conversational understanding agent may be provided. Upon receiving, by an agent, at least one natural language phrase from a user, a context associated with the at least one natural language phrase may be identified. The natural language phrase may be associated, for example, with a conversation between the user and a second user. An agent action associated with the identified context may be performed according to the at least one natural language phrase, and a result associated with performing the action may be displayed.
Type: Grant
Filed: June 12, 2017
Date of Patent: May 21, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Larry Paul Heck, Madhusudan Chinthakunta, David Mitby, Lisa Stifelman
-
Publication number: 20190116144
Abstract: Methods, systems, and computer programs are presented for a smart communications assistant with an audio interface. One method includes an operation for getting messages addressed to a user. The messages come from one or more message sources, and each message comprises message data that includes text. The method further includes operations for analyzing the message data to determine a meaning of each message, for generating a score for each message based on the respective message data and the meaning of the message, and for generating a textual summary for the messages based on the message scores and the meaning of the messages. A speech summary is created based on the textual summary, and the speech summary is then sent to a speaker associated with the user. The audio interface further allows the user to verbally request actions for the messages.
Type: Application
Filed: October 17, 2017
Publication date: April 18, 2019
Inventors: Nikrouz Ghotbi, August Niehaus, Sachin Venugopalan, Aleksandar Antonijevic, Tvrtko Tadic, Vashutosh Agrawal, Lisa Stifelman
-
Patent number: 10209954
Abstract: Input access may be provided. A user interface may be displayed on a user device. Upon receiving a selection of at least one element of the user interface, a plurality of input receiving modes of the user device may be activated.
Type: Grant
Filed: February 14, 2012
Date of Patent: February 19, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Anne Sullivan, Lisa Stifelman, Adam Elman, Ken Thai
-
Patent number: 10049667
Abstract: Location-based conversational understanding may be provided. Upon receiving a query from a user, an environmental context associated with the query may be generated. The query may be interpreted according to the environmental context. The interpreted query may be executed and at least one result associated with the query may be provided to the user.
Type: Grant
Filed: January 7, 2016
Date of Patent: August 14, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Larry Paul Heck, Madhusudan Chinthakunta, David Mitby, Lisa Stifelman
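The interpretation step above — rewriting an ambiguous query using an environmental context before executing it — can be illustrated with a toy resolver. The context dictionary shape and the single "near me" rewrite rule are assumptions chosen for brevity; a real system would resolve many kinds of ambiguity.

```python
def interpret(query, context):
    """Rewrite an ambiguous query using the environmental context
    (here, just the user's city); unambiguous queries pass through."""
    if "near me" in query:
        return query.replace("near me", f"near {context['city']}")
    return query
```

The rewritten query would then be executed as-is, so the same user utterance yields different results in different locations.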
-
Publication number: 20180129646
Abstract: An augmented conversational understanding agent may be provided. Upon receiving, by an agent, at least one natural language phrase from a user, a context associated with the at least one natural language phrase may be identified. The natural language phrase may be associated, for example, with a conversation between the user and a second user. An agent action associated with the identified context may be performed according to the at least one natural language phrase, and a result associated with performing the action may be displayed.
Type: Application
Filed: June 12, 2017
Publication date: May 10, 2018
Inventors: Larry Paul Heck, Madhusudan Chinthakunta, David Mitby, Lisa Stifelman