Patents by Inventor Andrew Paul McGovern

Andrew Paul McGovern has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11972095
    Abstract: Various embodiments discussed herein enable client applications to be heavily integrated with a voice assistant in order to both perform commands associated with voice utterances of users via voice assistant functionality and seamlessly cause client applications to automatically perform native functions as part of executing the voice utterance. Such heavy integration also allows particular embodiments to support multi-modal input from a user for a single conversational interaction. In this way, client application user interface interactions, such as clicks, touch gestures, or text inputs, are executed as an alternative to, or in addition to, the voice utterances.
    Type: Grant
    Filed: October 22, 2021
    Date of Patent: April 30, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Tudor Buzasu Klein, Viktoriya Taranov, Sergiy Gavrylenko, Jaclyn Carley Knapp, Andrew Paul McGovern, Harris Syed, Chad Steven Estes, Jesse Daniel Eskes Rusak, David Ernesto Heekin Burkett, Allison Anne O'Mahony, Ashok Kuppusamy, Jonathan Reed Harris, Jose Miguel Rady Allende, Diego Hernan Carlomagno, Talon Edward Ireland, Michael Francis Palermiti, II, Richard Leigh Mains, Jayant Krishnamurthy
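The abstract above describes merging a voice utterance with client-application UI interactions (clicks, touch gestures, text input) into a single conversational command. A minimal Python sketch of that idea, not the patented implementation; all names (`Interaction`, `resolve_command`) are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Interaction:
    """One conversational turn that may mix a voice utterance with UI events."""
    utterance: str = ""
    ui_events: list = field(default_factory=list)  # e.g. clicks, text inputs

def resolve_command(interaction):
    """Combine the utterance with UI events so either modality can
    supply a slot the other leaves implicit (illustrative logic only)."""
    command = {"action": None, "target": None}
    if "reply" in interaction.utterance.lower():
        command["action"] = "reply"
    for event in interaction.ui_events:
        if event.get("type") == "click":
            command["target"] = event.get("item")
    return command

# A user says "Reply to this" while clicking a message in the client app;
# the click supplies the target that the utterance leaves implicit.
cmd = resolve_command(Interaction("Reply to this",
                                  [{"type": "click", "item": "msg-42"}]))
```

The point of the sketch is only that neither modality alone fully specifies the command: the action comes from voice, the target from the UI gesture.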
  • Publication number: 20230401031
    Abstract: Various embodiments discussed herein enable client applications to be heavily integrated with a voice assistant in order to both perform commands associated with voice utterances of users via voice assistant functionality and seamlessly cause client applications to automatically perform native functions as part of executing the voice utterance. For example, some embodiments can automatically and intelligently switch to the page the user needs and populate particular fields of that page based on a user view context and the voice utterance.
    Type: Application
    Filed: August 8, 2023
    Publication date: December 14, 2023
    Inventors: Jaclyn Carley KNAPP, Andrew Paul MCGOVERN, Harris SYED, Chad Steven ESTES, Jesse Daniel Eskes RUSAK, David Ernesto Heekin BURKETT, Allison Anne O'MAHONY, Ashok KUPPUSAMY, Jonathan Reed HARRIS, Jose Miguel Rady ALLENDE, Diego Hernan CARLOMAGNO, Talon Edward IRELAND, Michael Francis PALERMITI, II, Richard Leigh MAINS, Jayant KRISHNAMURTHY
  • Patent number: 11789696
    Abstract: Various embodiments discussed herein enable client applications to be heavily integrated with a voice assistant in order to both perform commands associated with voice utterances of users via voice assistant functionality and seamlessly cause client applications to automatically perform native functions as part of executing the voice utterance. For example, some embodiments can automatically and intelligently switch to the page the user needs and populate particular fields of that page based on a user view context and the voice utterance.
    Type: Grant
    Filed: June 30, 2021
    Date of Patent: October 17, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jaclyn Carley Knapp, Andrew Paul McGovern, Harris Syed, Chad Steven Estes, Jesse Daniel Eskes Rusak, David Ernesto Heekin Burkett, Allison Anne O'Mahony, Ashok Kuppusamy, Jonathan Reed Harris, Jose Miguel Rady Allende, Diego Hernan Carlomagno, Talon Edward Ireland, Michael Francis Palermiti, II, Richard Leigh Mains, Jayant Krishnamurthy
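This abstract describes using the user's current view context, together with the voice utterance, to pick the page to switch to and pre-fill its fields. A short Python sketch of that flow under assumed data shapes; `execute_utterance` and the page/field names are hypothetical, not the patented method:

```python
def execute_utterance(utterance, view_context, pages):
    """Pick the page the utterance implies and pre-fill its fields from
    the utterance plus the user's current view context (illustrative)."""
    target = next((p for p in pages if p["intent"] in utterance.lower()), None)
    if target is None:
        return None
    filled = {}
    for field_name in target["fields"]:
        if field_name in view_context:  # carry over what the user is viewing
            filled[field_name] = view_context[field_name]
    return {"page": target["name"], "fields": filled}

pages = [{"name": "NewMeeting", "intent": "schedule",
          "fields": ["attendee", "subject"]}]

# The user is viewing Jane's profile and says "Schedule a meeting with her":
# the view context resolves "her" into the attendee field.
result = execute_utterance("Schedule a meeting with her",
                           {"attendee": "Jane Doe"}, pages)
```

The design choice the sketch illustrates is that the view context, not the utterance alone, supplies ambiguous slot values such as pronoun referents.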
  • Publication number: 20220308828
    Abstract: Various embodiments discussed herein enable client applications to be heavily integrated with a voice assistant in order to both perform commands associated with voice utterances of users via voice assistant functionality and seamlessly cause client applications to automatically perform native functions as part of executing the voice utterance. For example, some embodiments can automatically and intelligently switch to the page the user needs and populate particular fields of that page based on a user view context and the voice utterance.
    Type: Application
    Filed: June 30, 2021
    Publication date: September 29, 2022
    Inventors: Jaclyn Carley KNAPP, Andrew Paul MCGOVERN, Harris SYED, Chad Steven ESTES, Jesse Daniel Eskes RUSAK, David Ernesto Heekin BURKETT, Allison Anne O'MAHONY, Ashok KUPPUSAMY, Jonathan Reed HARRIS, Jose Miguel Rady ALLENDE, Diego Hernan CARLOMAGNO, Talon Edward IRELAND, Michael Francis PALERMITI, II, Richard Leigh MAINS, Jayant KRISHNAMURTHY
  • Publication number: 20220308718
    Abstract: Various embodiments discussed herein enable client applications to be heavily integrated with a voice assistant in order to both perform commands associated with voice utterances of users via voice assistant functionality and seamlessly cause client applications to automatically perform native functions as part of executing the voice utterance. Such heavy integration also allows particular embodiments to support multi-modal input from a user for a single conversational interaction. In this way, client application user interface interactions, such as clicks, touch gestures, or text inputs, are executed as an alternative to, or in addition to, the voice utterances.
    Type: Application
    Filed: October 22, 2021
    Publication date: September 29, 2022
    Inventors: Tudor Buzasu KLEIN, Viktoriya TARANOV, Sergiy GAVRYLENKO, Jaclyn Carley KNAPP, Andrew Paul MCGOVERN, Harris SYED, Chad Steven ESTES, Jesse Daniel Eskes RUSAK, David Ernesto Heekin BURKETT, Allison Anne O'MAHONY, Ashok KUPPUSAMY, Jonathan Reed HARRIS, Jose Miguel Rady ALLENDE, Diego Hernan CARLOMAGNO, Talon Edward IRELAND, Michael Francis PALERMITI, II, Richard Leigh MAINS, Jayant KRISHNAMURTHY
  • Publication number: 20150278370
    Abstract: One or more techniques and/or systems are provided for facilitating task completion. For example, a natural language input (e.g., "where should we eat") may be received from a user of a client device. The natural language input may be evaluated, using a set of user contextual signals that the user has opted in to expose for facilitating task completion, to identify a user task intent. For example, a user task intent of viewing a local Mexican restaurant menu may be identified based upon a social network post of the user indicating that the user is meeting a friend for Mexican food. Task completion functionality may be exposed to the user based upon the user task intent. For example, a restaurant app may be deep launched to display a menu of a local Mexican restaurant.
    Type: Application
    Filed: April 1, 2014
    Publication date: October 1, 2015
    Inventors: Kevin Niels Stratvert, Yu-Ting Kuo, Andrew Paul McGovern, Xiao Wei, Gaurav Anand, Thomas Lin, Adam C. Lusch
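The abstract's pipeline, natural language input plus opted-in contextual signals, then an inferred task intent, then a deep launch, can be sketched in a few lines of Python. This is a toy illustration of the flow described, using the abstract's own restaurant example; the function names and the `restaurantapp://` link scheme are hypothetical:

```python
def infer_task_intent(query, signals):
    """Evaluate a natural language query against opted-in contextual
    signals (e.g. a social network post) to pick a task intent."""
    if "eat" in query.lower():
        cuisine = next((s["cuisine"] for s in signals
                        if s.get("kind") == "social_post" and "cuisine" in s),
                       None)
        return {"task": "view_menu", "cuisine": cuisine or "any"}
    return {"task": "web_search"}

def deep_launch(intent):
    """Build a deep link that exposes task-completion functionality."""
    if intent["task"] == "view_menu":
        return f"restaurantapp://menu?cuisine={intent['cuisine']}"
    return "browser://search"

# A social post about meeting a friend for Mexican food refines the
# vague query "where should we eat" into a concrete task.
signals = [{"kind": "social_post", "cuisine": "mexican"}]
link = deep_launch(infer_task_intent("where should we eat", signals))
```

Without the contextual signal, the same query would fall back to a generic cuisine, which is the gap the opted-in signals are meant to close.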
  • Patent number: 8977625
    Abstract: Methods, systems, and media are provided for facilitating generation of an inference index. In embodiments, a canonical entity is referenced. The canonical entity is associated with web documents. One or more queries that, when input, result in a selection of at least one of the web documents are identified. An entity document is generated for the canonical entity. The entity document includes the identified queries and/or associated text from the content of a document or from an entity title that result in the selection of the at least one of the web documents. The entity document and corresponding canonical entity can be combined with additional related entity documents and canonical entities to generate an inference index.
    Type: Grant
    Filed: December 15, 2010
    Date of Patent: March 10, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Gregory T. Buehrer, Li Jiang, Paul Alfred Viola, Andrew Paul McGovern, Jakub Jan Szymanski, Sanaz Ahari
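The inference-index abstract describes collecting, for each canonical entity, the queries whose results led to clicks on that entity's web documents, forming an "entity document", and combining entity documents into an index. A minimal Python sketch under assumed data shapes (a click log of `(query, clicked_url)` pairs); the helper names are hypothetical:

```python
from collections import defaultdict

def build_entity_documents(entities, click_log):
    """For each canonical entity, gather the queries that resulted in a
    selection (click) of one of the entity's associated web documents."""
    docs = defaultdict(set)
    for query, clicked_url in click_log:
        for entity, urls in entities.items():
            if clicked_url in urls:
                docs[entity].add(query)
    return {entity: sorted(queries) for entity, queries in docs.items()}

def build_inference_index(entity_documents):
    """Invert the entity documents into a query -> entities index."""
    index = defaultdict(set)
    for entity, queries in entity_documents.items():
        for query in queries:
            index[query].add(entity)
    return index

entities = {"Space Needle": {"spaceneedle.com"}}
click_log = [("seattle tower", "spaceneedle.com"),
             ("observation deck", "spaceneedle.com")]
index = build_inference_index(build_entity_documents(entities, click_log))
```

The inversion step is what makes the structure an inference index: a later query can be mapped directly to the canonical entities that historical click behavior associates with it.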
  • Publication number: 20120158738
    Abstract: Methods, systems, and media are provided for facilitating generation of an inference index. In embodiments, a canonical entity is referenced. The canonical entity is associated with web documents. One or more queries that, when input, result in a selection of at least one of the web documents are identified. An entity document is generated for the canonical entity. The entity document includes the identified queries and/or associated text from the content of a document or from an entity title that result in the selection of the at least one of the web documents. The entity document and corresponding canonical entity can be combined with additional related entity documents and canonical entities to generate an inference index.
    Type: Application
    Filed: December 15, 2010
    Publication date: June 21, 2012
    Applicant: Microsoft Corporation
    Inventors: Gregory T. Buehrer, Li Jiang, Paul Alfred Viola, Andrew Paul McGovern, Jakub Jan Szymanski, Sanaz Ahari
  • Publication number: 20110191014
    Abstract: Maps of a particular location are often generated with an inset area map, rendered in an inset region of the location map, showing the general area including the location at a lower zoom level than the location map. However, this presentation may be disadvantageous in some scenarios (e.g., where a user is interested in examining the extended area around the location, the area between two locations, or the spatial layout of the locations). Instead, a composite map may be generated comprising an area map at an area map zoom level, and an inset location map rendered in an inset region of the area map and illustrating a location at a higher zoom level than the area map zoom level. Such composite maps may also be requested programmatically from a map-generating service, and may be provided in an automated manner for use in an application.
    Type: Application
    Filed: February 4, 2010
    Publication date: August 4, 2011
    Applicant: Microsoft Corporation
    Inventors: Shuangtong Feng, Chad Raynor, Andrew Paul McGovern, Michael John Narayan
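The composite-map abstract inverts the usual inset convention: the main view is the wide-area map, and the specific location appears in an inset at a higher zoom level. A small Python sketch of what a map-generating service might return; `MapView`, `compose_map`, and the zoom arithmetic are illustrative assumptions, not the patented design:

```python
from dataclasses import dataclass

@dataclass
class MapView:
    center: tuple  # (latitude, longitude)
    zoom: int      # higher value = closer in

def compose_map(location, area_zoom, inset_zoom_delta=4):
    """Build a composite map: a wide-area map at a low zoom level, plus an
    inset view of the specific location at a higher zoom level."""
    area = MapView(center=location, zoom=area_zoom)
    inset = MapView(center=location, zoom=area_zoom + inset_zoom_delta)
    return {"area": area, "inset": inset}

# Area map of greater Seattle with the location itself shown close up
# in the inset region.
composite = compose_map((47.62, -122.35), area_zoom=10)
```

The invariant the abstract turns on is simply `inset.zoom > area.zoom`; a conventional location map with an inset area map would have the opposite relationship.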