Patents by Inventor Michael J. Johnston

Michael J. Johnston has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20160330394
    Abstract: Network connectivity is used to share relevant visual and other sensory information between vehicles, and to deliver relevant information provided by network services, creating an enhanced view of the vehicle's surroundings. The enhanced view is presented to the occupants of the vehicle to provide an improved driving experience and/or enable the occupants to take proper action (e.g., avoid obstacles, identify traffic delays, etc.). In one example, the enhanced view comprises information that is not visible to the naked eye and/or cannot currently be sensed by the vehicle's sensors (e.g., due to a partial or blocked view, low visibility conditions, hardware capabilities of the vehicle's sensors, position of the vehicle's sensors, etc.).
    Type: Application
    Filed: July 18, 2016
    Publication date: November 10, 2016
    Inventors: Behzad Shahraray, Alicia Abella, David Crawford Gibbon, Mazin E. Gilbert, Michael J. Johnston, Horst J. Schroeter, Jay Gordon Wilpon
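
As a rough illustration of the sharing scheme this abstract describes, the sketch below merges a vehicle's own sensor observations with observations received over the network. It is a minimal sketch only: the data model (SensorObservation), the distance thresholds, and build_enhanced_view are all invented here, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class SensorObservation:
    source_vehicle: str   # ID of the vehicle that made the observation
    kind: str             # e.g. "obstacle", "traffic_delay"
    position: tuple       # (x, y) metres, relative to the receiving vehicle
    confidence: float     # 0.0 - 1.0

def build_enhanced_view(local, remote, max_distance=200.0):
    """Merge locally sensed observations with observations received over
    the network, keeping remote items that local sensors could not see."""
    def seen_locally(obs):
        return any(abs(obs.position[0] - l.position[0]) < 5.0 and
                   abs(obs.position[1] - l.position[1]) < 5.0
                   for l in local)
    enhanced = list(local)
    for obs in remote:
        distance = (obs.position[0] ** 2 + obs.position[1] ** 2) ** 0.5
        if distance <= max_distance and not seen_locally(obs):
            enhanced.append(obs)  # visible to another vehicle, not to ours
    return enhanced

local = [SensorObservation("self", "obstacle", (10.0, 2.0), 0.9)]
remote = [SensorObservation("truck_17", "traffic_delay", (150.0, 0.0), 0.8)]
print(build_enhanced_view(local, remote))  # both observations survive
```
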
  • Publication number: 20160249107
    Abstract: A portable communication device has a touch screen display that receives tactile input and a microphone that receives audio input. The portable communication device initiates a query for media based at least in part on tactile input and audio input. The touch screen display is a multi-touch screen. The portable communication device sends an initiated query and receives a text response indicative of a speech-to-text conversion of the query. The portable communication device then displays video in response to tactile input and audio input.
    Type: Application
    Filed: May 2, 2016
    Publication date: August 25, 2016
    Inventors: Behzad Shahraray, David Crawford Gibbon, Bernard S. Renger, Zhu Liu, Andrea Basso, Mazin Gilbert, Michael J. Johnston
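
The query flow in this abstract lends itself to a short sketch: wire a touch selection and a speech-to-text result into a single media query. MediaQueryClient and its method names are hypothetical, and the recognizer is stubbed out.

```python
# Hypothetical sketch of the tactile + audio query flow; not the patent's API.
class MediaQueryClient:
    def __init__(self, asr_service):
        self.asr = asr_service  # callable: audio bytes -> recognized text

    def initiate_query(self, touch_region, audio):
        """Build a media query from a touch selection plus spoken audio."""
        spoken_text = self.asr(audio)  # the speech-to-text conversion step
        return {"region": touch_region, "text": spoken_text}

# Usage with a stubbed recognizer:
client = MediaQueryClient(asr_service=lambda audio: "trailers for new movies")
print(client.initiate_query(touch_region=(120, 340), audio=b"fake-audio"))
```
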
  • Patent number: 9403482
    Abstract: Network connectivity is used to share relevant visual and other sensory information between vehicles, and to deliver relevant information provided by network services, creating an enhanced view of the vehicle's surroundings. The enhanced view is presented to the occupants of the vehicle to provide an improved driving experience and/or enable the occupants to take proper action (e.g., avoid obstacles, identify traffic delays, etc.). In one example, the enhanced view comprises information that is not visible to the naked eye and/or cannot currently be sensed by the vehicle's sensors (e.g., due to a partial or blocked view, low visibility conditions, hardware capabilities of the vehicle's sensors, position of the vehicle's sensors, etc.).
    Type: Grant
    Filed: November 22, 2013
    Date of Patent: August 2, 2016
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Behzad Shahraray, Alicia Abella, David Crawford Gibbon, Mazin E. Gilbert, Michael J. Johnston, Horst J. Schroeter, Jay Gordon Wilpon
  • Publication number: 20160179908
    Abstract: Methods, systems, devices, and media for creating a plan through multimodal search inputs are provided. A multimodal virtual assistant receives a first search request which comprises a geographic area. First search results are displayed in response to the first search request being received. The first search results are based on the first search request and correspond to the geographic area. Each of the first search results is associated with a geographic location. The multimodal virtual assistant receives a selection of one of the first search results, and adds the selected one of the first search results to a plan. A second search request is received after the selection, and second search results are displayed in response to the second search request being received. The second search results are based on the second search request and correspond to the geographic location of the selected one of the first search results.
    Type: Application
    Filed: December 19, 2014
    Publication date: June 23, 2016
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Michael J. Johnston, Patrick Ehlen, Hyuckchul Jung, Jay H. Lieske, Jr., Ethan Selfridge, Brant J. Vasilieff, Jay Gordon Wilpon
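
A compact sketch of the plan-building loop described above: search, select, then search again anchored to the selection's location. The catalog, function names, and coordinates are all invented for the example.

```python
plan = []

def search(query, near):
    # Stand-in for a real geo search (this stub ignores `near`);
    # returns (name, location) pairs.
    catalog = {
        "italian restaurants": [("Trattoria", (40.74, -73.99))],
        "jazz clubs": [("Blue Note", (40.73, -74.00))],
    }
    return catalog.get(query, [])

# First search is scoped to a geographic area (here, a point for simplicity).
first_results = search("italian restaurants", near=(40.75, -73.98))
selected = first_results[0]
plan.append(selected)                      # the user selects a result

# Second search is anchored to the selected result's location.
second_results = search("jazz clubs", near=selected[1])
print(plan, second_results)
```
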
  • Publication number: 20160155445
    Abstract: A system, method and computer-readable storage devices are disclosed for using targeted clarification (TC) questions in dialog systems in a multimodal virtual agent system (MVA) providing access to information about movies, restaurants, and musical events. In contrast with open-domain spoken systems, the MVA application covers a domain with a fixed set of concepts and uses a natural language understanding (NLU) component to mark concepts in automatically recognized speech. Instead of identifying an error segment, localized error detection (LED) identifies which of the concepts are likely to be present and correct using domain knowledge, automatic speech recognition (ASR), and NLU tags and scores. If at least one concept is identified as present but not correct, the TC component uses this information to generate a targeted clarification question.
    Type: Application
    Filed: December 1, 2014
    Publication date: June 2, 2016
    Inventors: Ethan Selfridge, Michael J. Johnston, Svetlana Stoyanchev
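
The LED-plus-TC pipeline above can be illustrated in a few lines: score each concept for presence and correctness, then ask about the one concept that looks present but unreliable. The thresholds and field names below are assumptions, not values from the patent.

```python
def targeted_clarification(concepts):
    """concepts: list of dicts with 'name', 'value', and assumed
    'p_present' / 'p_correct' scores derived from ASR and NLU."""
    for c in concepts:
        if c["p_present"] > 0.8 and c["p_correct"] < 0.5:
            # Concept is likely present but its value is unreliable:
            # ask about just that concept instead of "please repeat".
            return f"What {c['name']} did you mean?"
    return None  # nothing to clarify

utterance_concepts = [
    {"name": "genre", "value": "jazz", "p_present": 0.95, "p_correct": 0.90},
    {"name": "neighborhood", "value": "soho", "p_present": 0.90, "p_correct": 0.30},
]
print(targeted_clarification(utterance_concepts))
# -> "What neighborhood did you mean?"
```
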
  • Patent number: 9348908
    Abstract: A portable communication device has a touch screen display that receives tactile input and a microphone that receives audio input. The portable communication device initiates a query for media based at least in part on tactile input and audio input. The touch screen display is a multi-touch screen. The portable communication device sends an initiated query and receives a text response indicative of a speech-to-text conversion of the query. The portable communication device then displays video in response to tactile input and audio input.
    Type: Grant
    Filed: July 19, 2013
    Date of Patent: May 24, 2016
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Behzad Shahraray, David Crawford Gibbon, Bernard S. Renger, Zhu Liu, Andrea Basso, Mazin Gilbert, Michael J. Johnston
  • Publication number: 20160124706
    Abstract: A system, method and computer-readable storage devices are disclosed for multi-modal interactions with a system via a long-touch gesture on a touch-sensitive display. A system operating per this disclosure can receive a multi-modal input comprising speech and a touch on a display, wherein the speech comprises a pronoun. When the touch on the display has a duration longer than a threshold duration, the system can identify an object within a threshold distance of the touch, associate the object with the pronoun in the speech, to yield an association, and perform an action based on the speech and the association.
    Type: Application
    Filed: October 31, 2014
    Publication date: May 5, 2016
    Inventors: Brant J. Vasilieff, Patrick Ehlen, Michael J. Johnston
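
A minimal sketch of the long-touch binding described above: if the touch exceeds a duration threshold, bind the spoken pronoun to the nearest on-screen object within a distance threshold. Both thresholds and all structure names are invented for illustration.

```python
import math

LONG_TOUCH_SECONDS = 0.6   # assumed duration threshold
MAX_DISTANCE_PX = 48       # assumed hit radius around the touch point

def resolve_pronoun(speech, touch, objects):
    """On a long touch, associate the nearest on-screen object with the
    pronoun in `speech`, yielding an association to act on."""
    if touch["duration"] < LONG_TOUCH_SECONDS:
        return None  # an ordinary tap; no pronoun binding
    x, y = touch["position"]
    nearest = min(objects, key=lambda o: math.hypot(o["x"] - x, o["y"] - y))
    if math.hypot(nearest["x"] - x, nearest["y"] - y) <= MAX_DISTANCE_PX:
        return {"pronoun": "this", "object": nearest["id"], "speech": speech}
    return None

objects = [{"id": "restaurant_42", "x": 100, "y": 200}]
touch = {"position": (104, 196), "duration": 0.9}
print(resolve_pronoun("tell me more about this", touch, objects))
```
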
  • Publication number: 20160026627
    Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for approximating responses to a user speech query in voice-enabled search based on metadata that include demographic features of the speaker. A system practicing the method recognizes received speech from a speaker to generate recognized speech, identifies metadata about the speaker from the received speech, and feeds the recognized speech and the metadata to a question-answering engine. Identifying the metadata about the speaker is based on voice characteristics of the received speech. The demographic features can include age, gender, socio-economic group, nationality, and/or region. The metadata identified about the speaker from the received speech can be combined with or override self-reported speaker demographic information.
    Type: Application
    Filed: October 7, 2015
    Publication date: January 28, 2016
    Inventors: Michael J. Johnston, Srinivas Bangalore, Junlan Feng, Taniya Mishra
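
The combine-or-override behavior in this abstract is easy to sketch: voice-derived demographic fields override self-reported ones where available, and the merged metadata travels with the recognized question to the QA engine. Everything below (field names, the stub answer function) is hypothetical.

```python
def merge_demographics(self_reported, voice_derived):
    """Voice-derived metadata overrides self-reported fields when present."""
    merged = dict(self_reported)
    merged.update({k: v for k, v in voice_derived.items() if v is not None})
    return merged

def answer(question, metadata):
    # Stand-in QA engine that could, e.g., bias results by region or age.
    return f"Answering {question!r} with context {metadata}"

speech_text = "where can I watch the game tonight"
voice_meta = {"age_group": "25-34", "region": "northeast", "gender": None}
profile = {"age_group": "18-24", "region": "northeast"}
print(answer(speech_text, merge_demographics(profile, voice_meta)))
```
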
  • Publication number: 20150145995
    Abstract: Network connectivity is used to share relevant visual and other sensory information between vehicles, and to deliver relevant information provided by network services, creating an enhanced view of the vehicle's surroundings. The enhanced view is presented to the occupants of the vehicle to provide an improved driving experience and/or enable the occupants to take proper action (e.g., avoid obstacles, identify traffic delays, etc.). In one example, the enhanced view comprises information that is not visible to the naked eye and/or cannot currently be sensed by the vehicle's sensors (e.g., due to a partial or blocked view, low visibility conditions, hardware capabilities of the vehicle's sensors, position of the vehicle's sensors, etc.).
    Type: Application
    Filed: November 22, 2013
    Publication date: May 28, 2015
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Behzad Shahraray, Alicia Abella, David Crawford Gibbon, Mazin E. Gilbert, Michael J. Johnston, Horst J. Schroeter, Jay Gordon Wilpon
  • Publication number: 20150120288
    Abstract: A method of providing hybrid speech recognition between a local embedded speech recognition system and a remote speech recognition system involves receiving speech from a user at a device communicating with the remote speech recognition system. The system recognizes a first part of the speech by performing a first recognition with the embedded speech recognition system, which accesses private user data that is not available to the remote speech recognition system. The system recognizes the second part of the speech by performing a second recognition with the remote speech recognition system. The final recognition result is a combination of these two recognition processes. The private data can be local information such as a user location, a playlist, frequently dialed numbers, frequently texted contacts, user contact list information, and so forth.
    Type: Application
    Filed: October 29, 2013
    Publication date: April 30, 2015
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: David Thomson, Michael J. Johnston, Vivek Kumar Rangarajan Sridhar
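
A very loose sketch of the hybrid split described above, assuming the utterance can simply be cut in two: one part goes to an embedded recognizer that can see private data, the other to a cloud recognizer, and the results are concatenated. Both recognizers are stubs; a real system would segment the audio by content, not by byte count.

```python
PRIVATE_CONTACTS = {"mom", "alice", "bob"}  # private data; never leaves the device

def local_recognize(audio_part):
    # Embedded recognizer biased toward private data such as contacts.
    return "call mom"  # stub result

def remote_recognize(audio_part):
    # Cloud recognizer with a large general-purpose model.
    return "and text me the directions"  # stub result

def hybrid_recognize(audio):
    """Combine a local and a remote recognition into one final result."""
    first, second = audio[: len(audio) // 2], audio[len(audio) // 2 :]
    return f"{local_recognize(first)} {remote_recognize(second)}"

print(hybrid_recognize(b"fake-audio-bytes"))
```
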
  • Publication number: 20150073811
    Abstract: When using finite-state devices to perform various functions, it is beneficial to use finite-state devices representing regular grammars with terminals having markup-language-based semantics. By using markup-language-based symbols in the finite-state devices, it is possible to generate valid markup-language expressions by concatenating the symbols representing the result of the performed function. The markup-language expression can be used by other applications and/or devices. Finite-state devices are used to convert strings of words and gestures into valid markup-language (for example, XML) expressions that can be used, for example, to provide an application program interface to underlying system applications.
    Type: Application
    Filed: November 13, 2014
    Publication date: March 12, 2015
    Inventors: Michael J. Johnston, Srinivas Bangalore
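
The core trick in this family of patents, output symbols that are XML fragments so that concatenation along an accepting path yields well-formed markup, can be shown with a toy transducer. The states, tokens, and XML schema below are invented for the example.

```python
# Toy finite-state transducer: (state, input token) -> (next state, XML fragment).
transitions = {
    ("q0", "show"):             ("q1", "<command><action>show</action>"),
    ("q1", "restaurants"):      ("q2", "<object>restaurants</object>"),
    ("q2", "<circle_gesture>"): ("q3", "<area>gesture1</area></command>"),
}

def transduce(tokens, start="q0"):
    """Walk the transducer, concatenating the output fragments."""
    state, out = start, []
    for tok in tokens:
        state, fragment = transitions[(state, tok)]
        out.append(fragment)
    return "".join(out)

print(transduce(["show", "restaurants", "<circle_gesture>"]))
# <command><action>show</action><object>restaurants</object><area>gesture1</area></command>
```
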
  • Patent number: 8898202
    Abstract: When using finite-state devices to perform various functions, it is beneficial to use finite-state devices representing regular grammars with terminals having markup-language-based semantics. By using markup-language-based symbols in the finite-state devices, it is possible to generate valid markup-language expressions by concatenating the symbols representing the result of the performed function. The markup-language expression can be used by other applications and/or devices. Finite-state devices are used to convert strings of words and gestures into valid markup-language (for example, XML) expressions that can be used, for example, to provide an application program interface to underlying system applications.
    Type: Grant
    Filed: April 30, 2013
    Date of Patent: November 25, 2014
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Michael J. Johnston, Srinivas Bangalore
  • Publication number: 20140136210
    Abstract: Personalization of speech recognition while maintaining privacy of user data is achieved by transmitting data associated with received speech to a speech recognition service and receiving a result from the speech recognition service. The speech recognition service result is generated from a general-purpose speech language model. The system generates an input finite-state machine from the speech recognition result and composes the input finite-state machine with a phone-edit finite-state machine to yield a resulting finite-state machine. The system composes the resulting finite-state machine with a user-data finite-state machine to yield a second resulting finite-state machine, and uses the best path through the second resulting finite-state machine to yield a user-specific speech recognition result.
    Type: Application
    Filed: November 14, 2012
    Publication date: May 15, 2014
    Applicant: AT&T Intellectual Property I, L.P.
    Inventor: Michael J. Johnston
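
The patent composes weighted finite-state machines; the stand-in below only mimics the end effect with a string substitution against on-device user data, to show where the privacy boundary sits. All names and the contact entry are hypothetical.

```python
# On-device user data (e.g. a contact spelling); never sent to the cloud.
USER_DATA = {"john smyth": "Jon Smith"}

def personalize(generic_result):
    """Rewrite a generic cloud recognition result against private user data.
    A real system composes FSMs; this sketch substitutes phrases directly."""
    out = generic_result
    for heard, contact in USER_DATA.items():
        if heard in out:
            out = out.replace(heard, contact)  # user-specific correction
    return out

print(personalize("call john smyth mobile"))  # -> "call Jon Smith mobile"
```
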
  • Patent number: 8626507
    Abstract: Multimodal utterances contain a number of different modes. These modes can include speech, gestures, and pen, haptic, and gaze inputs, and the like. This invention uses recognition results from one or more of these modes to provide compensation to the recognition process of one or more of the other modes. In various exemplary embodiments, a multimodal recognition system inputs one or more recognition lattices from one or more of these modes, and generates one or more models to be used by one or more mode recognizers to recognize the one or more other modes. In one exemplary embodiment, a gesture recognizer inputs a gesture input and outputs a gesture recognition lattice to a multimodal parser. The multimodal parser generates a language model and outputs it to an automatic speech recognition system, which uses the received language model to recognize the speech input that corresponds to the recognized gesture input.
    Type: Grant
    Filed: November 30, 2012
    Date of Patent: January 7, 2014
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Srinivas Bangalore, Michael J. Johnston
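
As a crude stand-in for deriving a language model from a gesture lattice, the sketch below lets the recognized gesture pick the phrase set the speech recognizer is allowed to output. The gesture-to-phrase table and the scoring are invented purely for illustration.

```python
GESTURE_TO_PHRASES = {
    "point_at_restaurant": ["what are its hours", "show me the menu"],
    "circle_area": ["zoom in here", "restaurants in this area"],
}

def recognize_speech(audio, allowed_phrases):
    # Stand-in ASR: scores each allowed phrase by crude character overlap
    # with the "audio" (a fake score, purely for the demo) and picks the best.
    scores = {p: len(set(p) & set(str(audio))) for p in allowed_phrases}
    return max(scores, key=scores.get)

gesture = "circle_area"                  # output of a gesture recognizer
lm = GESTURE_TO_PHRASES[gesture]         # gesture-conditioned "language model"
print(recognize_speech("restaurants in this area", lm))
```
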
  • Publication number: 20130305301
    Abstract: A portable communication device has a touch screen display that receives tactile input and a microphone that receives audio input. The portable communication device initiates a query for media based at least in part on tactile input and audio input. The touch screen display is a multi-touch screen. The portable communication device sends an initiated query and receives a text response indicative of a speech-to-text conversion of the query. The portable communication device then displays video in response to tactile input and audio input.
    Type: Application
    Filed: July 19, 2013
    Publication date: November 14, 2013
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Behzad Shahraray, David Crawford Gibbon, Bernard S. Renger, Zhu Liu, Andrea Basso, Mazin Gilbert, Michael J. Johnston
  • Publication number: 20130238320
    Abstract: When using finite-state devices to perform various functions, it is beneficial to use finite-state devices representing regular grammars with terminals having markup-language-based semantics. By using markup-language-based symbols in the finite-state devices, it is possible to generate valid markup-language expressions by concatenating the symbols representing the result of the performed function. The markup-language expression can be used by other applications and/or devices. Finite-state devices are used to convert strings of words and gestures into valid markup-language (for example, XML) expressions that can be used, for example, to provide an application program interface to underlying system applications.
    Type: Application
    Filed: April 30, 2013
    Publication date: September 12, 2013
    Applicant: AT&T Intellectual Property II, L.P.
    Inventors: Michael J. Johnston, Srinivas Bangalore
  • Patent number: 8514197
    Abstract: A portable communication device has a touch screen display that receives tactile input and a microphone that receives audio input. The portable communication device initiates a query for media based at least in part on tactile input and audio input. The touch screen display is a multi-touch screen. The portable communication device sends an initiated query and receives a text response indicative of a speech-to-text conversion of the query. The portable communication device then displays video in response to tactile input and audio input.
    Type: Grant
    Filed: August 8, 2012
    Date of Patent: August 20, 2013
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Behzad Shahraray, David Crawford Gibbon, Bernard S. Renger, Zhu Liu, Andrea Basso, Mazin Gilbert, Michael J. Johnston
  • Patent number: 8433731
    Abstract: When using finite-state devices to perform various functions, it is beneficial to use finite-state devices representing regular grammars with terminals having markup-language-based semantics. By using markup-language-based symbols in the finite-state devices, it is possible to generate valid markup-language expressions by concatenating the symbols representing the result of the performed function. The markup-language expression can be used by other applications and/or devices. Finite-state devices are used to convert strings of words and gestures into valid markup-language (for example, XML) expressions that can be used, for example, to provide an application program interface to underlying system applications.
    Type: Grant
    Filed: December 22, 2009
    Date of Patent: April 30, 2013
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Michael J. Johnston, Srinivas Bangalore
  • Patent number: 8355916
    Abstract: Multimodal utterances contain a number of different modes. These modes can include speech, gestures, and pen, haptic, and gaze inputs, and the like. This invention uses recognition results from one or more of these modes to provide compensation to the recognition process of one or more of the other modes. In various exemplary embodiments, a multimodal recognition system inputs one or more recognition lattices from one or more of these modes, and generates one or more models to be used by one or more mode recognizers to recognize the one or more other modes. In one exemplary embodiment, a gesture recognizer inputs a gesture input and outputs a gesture recognition lattice to a multimodal parser. The multimodal parser generates a language model and outputs it to an automatic speech recognition system, which uses the received language model to recognize the speech input that corresponds to the recognized gesture input.
    Type: Grant
    Filed: May 31, 2012
    Date of Patent: January 15, 2013
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Srinivas Bangalore, Michael J. Johnston
  • Publication number: 20120303370
    Abstract: Multimodal utterances contain a number of different modes. These modes can include speech, gestures, and pen, haptic, and gaze inputs, and the like. This invention uses recognition results from one or more of these modes to provide compensation to the recognition process of one or more of the other modes. In various exemplary embodiments, a multimodal recognition system inputs one or more recognition lattices from one or more of these modes, and generates one or more models to be used by one or more mode recognizers to recognize the one or more other modes. In one exemplary embodiment, a gesture recognizer inputs a gesture input and outputs a gesture recognition lattice to a multimodal parser. The multimodal parser generates a language model and outputs it to an automatic speech recognition system, which uses the received language model to recognize the speech input that corresponds to the recognized gesture input.
    Type: Application
    Filed: May 31, 2012
    Publication date: November 29, 2012
    Applicant: AT&T Intellectual Property II, L.P.
    Inventors: Srinivas Bangalore, Michael J. Johnston