Patents by Inventor Michael J. Johnston

Michael J. Johnston has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11914780
    Abstract: A finger-mounted device may include finger-mounted units. The finger-mounted units may each have a body that serves as a support structure for components such as force sensors, accelerometers, and other sensors and for haptic output devices. The body may have sidewall portions coupled by a portion that rests adjacent to a user's fingernail. The body may be formed from deformable material such as metal or may be formed from adjustable structures such as sliding body portions that are coupled to each other using magnetic attraction, springs, or other structures. The body of each finger-mounted unit may have a U-shaped cross-sectional profile that leaves the finger pad of each finger exposed when the body is coupled to a fingertip of a user's finger. Control circuitry may gather finger press input, lateral finger movement input, and finger tap input using the sensors and may provide haptic output using the haptic output device.
    Type: Grant
    Filed: August 11, 2022
    Date of Patent: February 27, 2024
    Assignee: Apple Inc.
    Inventors: Paul X Wang, Alex J. Lehmann, Michael J. Rockwell, Michael Y. Cheung, Ray L. Chang, Hongcheng Sun, Ian M. Bullock, Kyle J. Nekimken, Madeleine S. Cordier, Seung Wook Kim, David H. Bloom, Scott G. Johnston
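
As a rough illustration of the control circuitry the abstract above describes (gathering finger press, lateral movement, and tap input from sensors and driving a haptic output device), here is a minimal sketch. All thresholds, the sensor sample model, and the haptics callback are invented placeholders, not Apple's actual interfaces.

```python
# Hypothetical sketch of the abstract's control loop: classify raw sensor
# samples into press / lateral / tap gestures, then fire haptic output.
from dataclasses import dataclass

PRESS_THRESHOLD_N = 0.5      # assumed force (newtons) treated as a press
TAP_ACCEL_THRESHOLD = 9.0    # assumed acceleration spike treated as a tap
LATERAL_THRESHOLD = 0.2      # assumed sideways displacement treated as a swipe

@dataclass
class SensorSample:
    force: float        # from the force sensor
    accel: float        # from the accelerometer
    lateral: float      # lateral displacement estimate

def classify(sample: SensorSample) -> str | None:
    """Map one raw sample to a gesture, mirroring the abstract's
    press / lateral-movement / tap input categories."""
    if sample.force > PRESS_THRESHOLD_N:
        return "press"
    if abs(sample.accel) > TAP_ACCEL_THRESHOLD:
        return "tap"
    if abs(sample.lateral) > LATERAL_THRESHOLD:
        return "lateral"
    return None

def control_loop(samples, haptics):
    """For each recognized gesture, fire a haptic confirmation."""
    for sample in samples:
        gesture = classify(sample)
        if gesture is not None:
            haptics(gesture)  # e.g., a short vibration pulse

if __name__ == "__main__":
    demo = [SensorSample(0.7, 0.0, 0.0), SensorSample(0.0, 12.0, 0.0)]
    control_loop(demo, haptics=lambda g: print(f"haptic pulse for {g}"))
```
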
  • Patent number: 10853420
    Abstract: Extracting, from user activity data, quantitative attributes and qualitative attributes collected for users having user profiles. The quantitative attributes and the qualitative attributes are extracted during a specified time period determined before the user activity data is collected. Values for the quantitative attributes and the qualitative attributes are plotted, and subsets of the user profiles are clustered into separate groups of users based on the plotted values. Product-related content is then delivered to the groups of users based on the clustering.
    Type: Grant
    Filed: August 21, 2017
    Date of Patent: December 1, 2020
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Srinivas Bangalore, Junlan Feng, Michael J. Johnston, Taniya Mishra
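
A minimal sketch of the clustering pipeline the abstract above describes, assuming scikit-learn (>= 1.2) is available. The attribute names, the sample values, the number of clusters, and the delivery step are all illustrative assumptions.

```python
# Cluster user profiles on combined quantitative and qualitative attributes,
# then deliver product-related content per group, as in the abstract.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import OneHotEncoder

# Quantitative attributes (e.g., minutes of activity, purchases) collected
# during a pre-specified time period; one row per user profile.
quantitative = np.array([[120, 3], [15, 0], [110, 4], [20, 1]], dtype=float)

# Qualitative attributes (e.g., preferred genre), one-hot encoded so they
# can be plotted and clustered alongside the quantitative values.
qualitative = [["drama"], ["sports"], ["drama"], ["sports"]]
encoded = OneHotEncoder(sparse_output=False).fit_transform(qualitative)

values = np.hstack([quantitative, encoded])

# Cluster subsets of the user profiles into separate groups of users.
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(values)

# Placeholder for the content-delivery step.
for user_id, group in enumerate(groups):
    print(f"user {user_id}: deliver content bundle for group {group}")
```
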
  • Patent number: 10739976
    Abstract: Methods, systems, devices, and media for creating a plan through multimodal search inputs are provided. A first search request comprises a first input received via a first input mode and a second input received via a different second input mode. The second input identifies a geographic area. First search results are displayed based on the first search request and corresponding to the geographic area. Each of the first search results is associated with a geographic location. A selection of one of the first search results is received and added to a plan. A second search request is received after the selection, and second search results are displayed in response to the second search request. The second search results are based on the second search request and correspond to the geographic location of the selected one of the first search results.
    Type: Grant
    Filed: January 16, 2018
    Date of Patent: August 11, 2020
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Michael J. Johnston, Patrick Ehlen, Hyuckchul Jung, Jay H. Lieske, Jr., Ethan Selfridge, Brant J. Vasilieff, Jay Gordon Wilpon
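
Here is an illustrative sketch of the plan-building flow in the abstract above: a first search scoped to a geographic area, a selected result added to a plan, and a second search scoped to the selected result's location. The data model, the keyword-and-distance search stub, and the coordinates are invented for the example.

```python
# Two-step multimodal plan search: the second request is anchored at the
# location of the result selected from the first request.
from dataclasses import dataclass, field

@dataclass
class Result:
    name: str
    location: tuple  # (lat, lon) of this result

@dataclass
class Plan:
    items: list = field(default_factory=list)
    def add(self, result: Result):
        self.items.append(result)

def search(query: str, near: tuple, catalog: list, radius: float = 1.0):
    """Stand-in search backend: keyword match filtered by a simple
    distance check against the anchor point."""
    def close(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1]) <= radius
    return [r for r in catalog
            if query in r.name.lower() and close(r.location, near)]

catalog = [Result("thai restaurant", (40.75, -73.99)),
           Result("jazz club", (40.76, -73.98)),
           Result("jazz club uptown", (42.00, -73.00))]

# First request: e.g., speech ("restaurants") plus a gesture marking an area.
area_center = (40.75, -73.99)
first_results = search("restaurant", near=area_center, catalog=catalog)

plan = Plan()
plan.add(first_results[0])  # user selects a result; it joins the plan

# Second request corresponds to the selected result's location, so the
# follow-up results stay near the first stop of the plan.
second_results = search("jazz", near=plan.items[-1].location, catalog=catalog)
print([r.name for r in second_results])  # -> ['jazz club']
```
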
  • Patent number: 10497371
    Abstract: A system, method and computer-readable storage devices are disclosed for multi-modal interactions with a system via a long-touch gesture on a touch-sensitive display. A system operating per this disclosure can receive a multi-modal input comprising speech and a touch on a display, wherein the speech comprises a pronoun. When the touch on the display has a duration longer than a threshold duration, the system can identify an object within a threshold distance of the touch, associate the object with the pronoun in the speech, to yield an association, and perform an action based on the speech and the association.
    Type: Grant
    Filed: April 29, 2019
    Date of Patent: December 3, 2019
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Brant J. Vasilieff, Patrick Ehlen, Michael J. Johnston
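
A hedged sketch of the multimodal fusion rule in the abstract above: a touch longer than a threshold duration, combined with speech containing a pronoun, resolves the pronoun to the nearest on-screen object. The thresholds, object model, and pronoun list are assumptions for illustration.

```python
# Long-touch + pronoun association: find the object within a threshold
# distance of a long touch and bind it to the pronoun in the speech.
import math
from dataclasses import dataclass

LONG_TOUCH_SECS = 0.8     # assumed duration threshold
NEAR_PIXELS = 60.0        # assumed distance threshold
PRONOUNS = {"this", "that", "it", "these", "those"}

@dataclass
class Touch:
    x: float
    y: float
    duration: float

@dataclass
class ScreenObject:
    name: str
    x: float
    y: float

def resolve(speech: str, touch: Touch, objects: list):
    """Associate a pronoun in the speech with the object nearest a long
    touch; the caller performs an action based on the association."""
    words = speech.lower().split()
    pronoun = next((w for w in words if w in PRONOUNS), None)
    if pronoun is None or touch.duration < LONG_TOUCH_SECS:
        return None
    nearest = min(objects, key=lambda o: math.hypot(o.x - touch.x, o.y - touch.y))
    if math.hypot(nearest.x - touch.x, nearest.y - touch.y) > NEAR_PIXELS:
        return None
    return (speech, nearest)  # the association described in the abstract

objects = [ScreenObject("restaurant_pin_42", 105, 210)]
hit = resolve("tell me more about this", Touch(100, 200, 1.2), objects)
print(hit)
```
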
  • Publication number: 20190251969
    Abstract: A system, method and computer-readable storage devices are disclosed for multi-modal interactions with a system via a long-touch gesture on a touch-sensitive display. A system operating per this disclosure can receive a multi-modal input comprising speech and a touch on a display, wherein the speech comprises a pronoun. When the touch on the display has a duration longer than a threshold duration, the system can identify an object within a threshold distance of the touch, associate the object with the pronoun in the speech, to yield an association, and perform an action based on the speech and the association.
    Type: Application
    Filed: April 29, 2019
    Publication date: August 15, 2019
    Inventors: Brant J. Vasilieff, Patrick Ehlen, Michael J. Johnston
  • Patent number: 10276158
    Abstract: A system, method and computer-readable storage devices are disclosed for multi-modal interactions with a system via a long-touch gesture on a touch-sensitive display. A system operating per this disclosure can receive a multi-modal input comprising speech and a touch on a display, wherein the speech comprises a pronoun. When the touch on the display has a duration longer than a threshold duration, the system can identify an object within a threshold distance of the touch, associate the object with the pronoun in the speech, to yield an association, and perform an action based on the speech and the association.
    Type: Grant
    Filed: October 31, 2014
    Date of Patent: April 30, 2019
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Brant J. Vasilieff, Patrick Ehlen, Michael J. Johnston
  • Publication number: 20180190288
    Abstract: A method of providing hybrid speech recognition between a local embedded speech recognition system and a remote speech recognition system relates to receiving speech from a user at a device communicating with a remote speech recognition system. The system recognizes a first part of speech by performing a first recognition of the first part of the speech with the embedded speech recognition system that accesses private user data, wherein the private user data is not available to the remote speech recognition system. The system recognizes the second part of the speech by performing a second recognition of the second part of the speech with the remote speech recognition system. The final recognition result is a combination of these two recognition processes. The private data can be such local information as a user location, a playlist, frequently dialed numbers or texted people, user contact list information, and so forth.
    Type: Application
    Filed: February 26, 2018
    Publication date: July 5, 2018
    Inventors: David Thomson, Michael J. Johnston, Vivek Kumar Rangarajan Sridhar
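
The hybrid split described in the abstract above can be sketched as follows: a local recognizer with access to private data handles one part of the utterance, a remote recognizer handles the rest, and the final result combines the two. Both recognizers are stubs, and the last-word segmentation rule is an invented simplification.

```python
# Hybrid local/remote recognition: the embedded recognizer consults
# private user data (here, a contact list) the remote system never sees.
PRIVATE_CONTACTS = {"mom": "Mom", "alice": "Alice Chen"}  # local-only data

def embedded_recognize(audio_part: str) -> str:
    """Local recognizer: may consult private user data."""
    word = audio_part.strip().lower()
    return PRIVATE_CONTACTS.get(word, word)

def remote_recognize(audio_part: str) -> str:
    """Remote recognizer: large-vocabulary, no private data."""
    return audio_part.strip().lower()

def hybrid_recognize(audio: str) -> str:
    # Toy segmentation: the last word is the contact-like part routed to
    # the embedded recognizer; the rest goes to the remote one.
    head, _, tail = audio.rpartition(" ")
    first = remote_recognize(head)
    second = embedded_recognize(tail)
    return f"{first} {second}"  # combined final recognition result

print(hybrid_recognize("send a message to Alice"))
# -> 'send a message to Alice Chen'
```
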
  • Publication number: 20180157403
    Abstract: Methods, systems, devices, and media for creating a plan through multimodal search inputs are provided. A first search request comprises a first input received via a first input mode and a second input received via a different second input mode. The second input identifies a geographic area. First search results are displayed based on the first search request and corresponding to the geographic area. Each of the first search results is associated with a geographic location. A selection of one of the first search results is received and added to a plan. A second search request is received after the selection, and second search results are displayed in response to the second search request. The second search results are based on the second search request and correspond to the geographic location of the selected one of the first search results.
    Type: Application
    Filed: January 16, 2018
    Publication date: June 7, 2018
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Michael J. Johnston, Patrick Ehlen, Hyuckchul Jung, Jay H. Lieske, Jr., Ethan Selfridge, Brant J. Vasilieff, Jay Gordon Wilpon
  • Patent number: 9953644
    Abstract: A system, method and computer-readable storage devices are disclosed for using targeted clarification (TC) questions in dialog systems in a multimodal virtual agent system (MVA) providing access to information about movies, restaurants, and musical events. In contrast with open-domain spoken systems, the MVA application covers a domain with a fixed set of concepts and uses a natural language understanding (NLU) component to mark concepts in automatically recognized speech. Instead of identifying an error segment, localized error detection (LED) identifies which of the concepts are likely to be present and correct using domain knowledge, automatic speech recognition (ASR), and NLU tags and scores. If at least one concept is identified as present but not correct, the TC component uses this information to generate a targeted clarification question.
    Type: Grant
    Filed: December 1, 2014
    Date of Patent: April 24, 2018
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Ethan Selfridge, Michael J. Johnston, Svetlana Stoyanchev
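
The LED-to-TC handoff in the abstract above can be illustrated with a short sketch: each fixed domain concept carries presence and correctness scores derived from ASR and NLU, and a concept judged present but likely incorrect triggers a targeted question. The scores, thresholds, and question templates below are invented for the example.

```python
# Localized error detection (LED) feeding targeted clarification (TC):
# ask only about the concept that is present but probably wrong.
from dataclasses import dataclass

PRESENT_T = 0.5    # assumed presence threshold
CORRECT_T = 0.5    # assumed correctness threshold

@dataclass
class Concept:
    name: str            # fixed domain concept, e.g. 'movie_title'
    value: str           # value hypothesized by ASR + NLU tagging
    p_present: float     # LED score: concept likely present?
    p_correct: float     # LED score: hypothesized value likely correct?

TEMPLATES = {
    "movie_title": "Which movie did you mean?",
    "city": "Sorry, which city was that?",
}

def targeted_clarification(concepts: list) -> str | None:
    """Return a TC question for the first concept LED marks as present
    but not correct; None means no clarification is needed."""
    for c in concepts:
        if c.p_present >= PRESENT_T and c.p_correct < CORRECT_T:
            return TEMPLATES.get(c.name, f"Could you repeat the {c.name}?")
    return None

hyp = [Concept("city", "Austin", 0.9, 0.8),
       Concept("movie_title", "Inceptor", 0.85, 0.3)]
print(targeted_clarification(hyp))  # -> 'Which movie did you mean?'
```
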
  • Patent number: 9942616
    Abstract: A portable communication device has a touch screen display that receives tactile input and a microphone that receives audio input. The portable communication device initiates a query for media based at least in part on tactile input and audio input. The touch screen display is a multi-touch screen. The portable communication device sends an initiated query and receives a text response indicative of a speech to text conversion of the query. The portable communication device then displays video in response to tactile input and audio input.
    Type: Grant
    Filed: May 2, 2016
    Date of Patent: April 10, 2018
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Behzad Shahraray, David Crawford Gibbon, Bernard S. Renger, Zhu Liu, Andrea Basso, Mazin Gilbert, Michael J. Johnston
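
A hedged sketch of the query flow in the abstract above: combine tactile input from the multi-touch screen with microphone audio, send the query for speech-to-text conversion, and display the matching video. The transcription stub and media index are stand-ins, not the patented system.

```python
# Multimodal media query: tactile input picks a category, audio input
# carries the spoken query, and the device receives a text response.
from dataclasses import dataclass

@dataclass
class MultimodalQuery:
    touch_region: str   # e.g., the on-screen category the user touched
    audio: bytes        # raw microphone capture

def speech_to_text(audio: bytes) -> str:
    """Stand-in for the server-side speech-to-text conversion."""
    return audio.decode("utf-8")  # pretend the bytes were already words

MEDIA_INDEX = {
    ("news", "weather forecast"): "weather_clip.mp4",
    ("sports", "highlights"): "highlights_clip.mp4",
}

def handle_query(q: MultimodalQuery) -> str | None:
    text = speech_to_text(q.audio)                  # text response
    return MEDIA_INDEX.get((q.touch_region, text))  # video to display

clip = handle_query(MultimodalQuery("news", b"weather forecast"))
print(clip)  # -> 'weather_clip.mp4'
```
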
  • Patent number: 9904450
    Abstract: Methods, systems, devices, and media for creating a plan through multimodal search inputs are provided. A multimodal virtual assistant receives a first search request which comprises a geographic area. First search results are displayed in response to the first search request being received. The first search results are based on the first search request and correspond to the geographic area. Each of the first search results is associated with a geographic location. The multimodal virtual assistant receives a selection of one of the first search results, and adds the selected one of the first search results to a plan. A second search request is received after the selection, and second search results are displayed in response to the second search request being received. The second search results are based on the second search request and correspond to the geographic location of the selected one of the first search results.
    Type: Grant
    Filed: December 19, 2014
    Date of Patent: February 27, 2018
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Michael J. Johnston, Patrick Ehlen, Hyuckchul Jung, Jay H. Lieske, Jr., Ethan Selfridge, Brant J. Vasilieff, Jay Gordon Wilpon
  • Patent number: 9905228
    Abstract: A method of providing hybrid speech recognition between a local embedded speech recognition system and a remote speech recognition system relates to receiving speech from a user at a device communicating with a remote speech recognition system. The system recognizes a first part of speech by performing a first recognition of the first part of the speech with the embedded speech recognition system that accesses private user data, wherein the private user data is not available to the remote speech recognition system. The system recognizes the second part of the speech by performing a second recognition of the second part of the speech with the remote speech recognition system. The final recognition result is a combination of these two recognition processes. The private data can be such local information as a user location, a playlist, frequently dialed numbers or texted people, user contact list information, and so forth.
    Type: Grant
    Filed: May 26, 2017
    Date of Patent: February 27, 2018
    Assignee: Nuance Communications, Inc.
    Inventors: David Thomson, Michael J. Johnston, Vivek Kumar Rangarajan Sridhar
  • Patent number: 9866782
    Abstract: Network connectivity is used to share relevant visual and other sensory information between vehicles, as well as to deliver relevant information provided by network services, to create an enhanced view of the vehicle's surroundings. The enhanced view is presented to the occupants of the vehicle to provide an improved driving experience and/or enable the occupants to take proper action (e.g., avoid obstacles, identify traffic delays, etc.). In one example, the enhanced view comprises information that is not visible to the naked eye and/or cannot be currently sensed by the vehicle's sensors (e.g., due to a partial or blocked view, low visibility conditions, hardware capabilities of the vehicle's sensors, position of the vehicle's sensors, etc.).
    Type: Grant
    Filed: July 18, 2016
    Date of Patent: January 9, 2018
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Behzad Shahraray, Alicia Abella, David Crawford Gibbon, Mazin E. Gilbert, Michael J. Johnston, Horst J. Schroeter, Jay Gordon Wilpon
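
The sensor-sharing idea in the abstract above can be sketched as a merge step: a receiving vehicle folds remote observations it cannot currently sense into its own view. The observation format, the visibility test, and the coordinates are invented placeholders.

```python
# Enhanced view: local sensing plus shared reports from other vehicles
# and network services, keeping only what local sensors cannot perceive.
from dataclasses import dataclass

@dataclass
class Observation:
    kind: str          # e.g., 'obstacle', 'traffic_delay'
    position: tuple    # (x, y) road coordinates
    source: str        # reporting vehicle or network service

def enhanced_view(local_obs: list, shared_obs: list, visible) -> list:
    """Combine local sensing with shared reports, adding remote
    observations outside the local sensors' current reach."""
    view = list(local_obs)
    for obs in shared_obs:
        if not visible(obs.position):   # blocked view, low visibility, etc.
            view.append(obs)
    return view

local = [Observation("obstacle", (10, 0), "self")]
shared = [Observation("obstacle", (120, 5), "vehicle_B"),
          Observation("traffic_delay", (500, 0), "traffic_service")]

# Assume local sensors only cover the first 50 meters ahead.
view = enhanced_view(local, shared, visible=lambda p: p[0] < 50)
for obs in view:
    print(obs)
```
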
  • Publication number: 20170344665
    Abstract: Extracting, from user activity data, quantitative attributes and qualitative attributes collected for users having user profiles. The quantitative attributes and the qualitative attributes are extracted during a specified time period determined before the user activity data is collected. Values for the quantitative attributes and the qualitative attributes are plotted, and subsets of the user profiles are clustered into separate groups of users based on the plotted values. Product-related content is then delivered to the groups of users based on the clustering.
    Type: Application
    Filed: August 21, 2017
    Publication date: November 30, 2017
    Applicant: AT&T Intellectual Property I, L.P.
    Inventors: Srinivas Bangalore, Junlan Feng, Michael J. Johnston, Taniya Mishra
  • Publication number: 20170300487
    Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for approximating responses to a user speech query in voice-enabled search based on metadata that include demographic features of the speaker. A system practicing the method recognizes received speech from a speaker to generate recognized speech, identifies metadata about the speaker from the received speech, and feeds the recognized speech and the metadata to a question-answering engine. Identifying the metadata about the speaker is based on voice characteristics of the received speech. The demographic features can include age, gender, socio-economic group, nationality, and/or region. The metadata identified about the speaker from the received speech can be combined with or override self-reported speaker demographic information.
    Type: Application
    Filed: June 29, 2017
    Publication date: October 19, 2017
    Inventors: Michael J. Johnston, Srinivas Bangalore, Junlan Feng, Taniya Mishra
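
A minimal sketch of the pipeline in the abstract above: recognize the speech, infer demographic metadata from voice characteristics, reconcile it with self-reported data, and feed both text and metadata to a question-answering engine. The recognizer, classifier, and QA engine are all stubs.

```python
# Demographic-aware voice search: voice-derived metadata can be combined
# with or override self-reported demographic information, per the abstract.
def recognize(audio: bytes) -> str:
    return audio.decode("utf-8")       # ASR stand-in

def infer_metadata(audio: bytes) -> dict:
    """Stand-in for voice-based demographic classification
    (age, gender, region, etc.)."""
    return {"age_band": "25-34", "region": "southeast"}

def merge(self_reported: dict, inferred: dict) -> dict:
    # Inferred values override self-reported ones where they overlap.
    return {**self_reported, **inferred}

def answer(question: str, metadata: dict) -> str:
    """QA-engine stub: metadata could bias retrieval or ranking."""
    return f"answering {question!r} for profile {metadata}"

audio = b"where can I watch the game tonight"
meta = merge({"region": "northeast", "language": "en"}, infer_metadata(audio))
print(answer(recognize(audio), meta))
```
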
  • Publication number: 20170263253
    Abstract: A method of providing hybrid speech recognition between a local embedded speech recognition system and a remote speech recognition system relates to receiving speech from a user at a device communicating with a remote speech recognition system. The system recognizes a first part of speech by performing a first recognition of the first part of the speech with the embedded speech recognition system that accesses private user data, wherein the private user data is not available to the remote speech recognition system. The system recognizes the second part of the speech by performing a second recognition of the second part of the speech with the remote speech recognition system. The final recognition result is a combination of these two recognition processes. The private data can be such local information as a user location, a playlist, frequently dialed numbers or texted people, user contact list information, and so forth.
    Type: Application
    Filed: May 26, 2017
    Publication date: September 14, 2017
    Inventors: David Thomson, Michael J. Johnston, Vivek Kumar Rangarajan Sridhar
  • Patent number: 9697206
    Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for approximating responses to a user speech query in voice-enabled search based on metadata that include demographic features of the speaker. A system practicing the method recognizes received speech from a speaker to generate recognized speech, identifies metadata about the speaker from the received speech, and feeds the recognized speech and the metadata to a question-answering engine. Identifying the metadata about the speaker is based on voice characteristics of the received speech. The demographic features can include age, gender, socio-economic group, nationality, and/or region. The metadata identified about the speaker from the received speech can be combined with or override self-reported speaker demographic information.
    Type: Grant
    Filed: October 7, 2015
    Date of Patent: July 4, 2017
    Assignee: Interactions LLC
    Inventors: Michael J. Johnston, Srinivas Bangalore, Junlan Feng, Taniya Mishra
  • Patent number: 9666188
    Abstract: A method of providing hybrid speech recognition between a local embedded speech recognition system and a remote speech recognition system relates to receiving speech from a user at a device communicating with a remote speech recognition system. The system recognizes a first part of speech by performing a first recognition of the first part of the speech with the embedded speech recognition system that accesses private user data, wherein the private user data is not available to the remote speech recognition system. The system recognizes the second part of the speech by performing a second recognition of the second part of the speech with the remote speech recognition system. The final recognition result is a combination of these two recognition processes. The private data can be such local information as a user location, a playlist, frequently dialed numbers or texted people, user contact list information, and so forth.
    Type: Grant
    Filed: October 29, 2013
    Date of Patent: May 30, 2017
    Assignee: Nuance Communications, Inc.
    Inventors: David Thomson, Michael J. Johnston, Vivek Kumar Rangarajan Sridhar
  • Patent number: 9563395
    Abstract: When using finite-state devices to perform various functions, it is beneficial to use finite-state devices representing regular grammars with terminals having markup-language-based semantics. By using markup-language-based symbols in the finite-state devices, it is possible to generate valid markup-language expressions by concatenating the symbols representing the result of the performed function. The markup-language expression can be used by other applications and/or devices. Finite-state devices are used to convert strings of words and gestures into valid markup-language expressions (for example, XML) that can be used, for example, to provide an application program interface to underlying system applications.
    Type: Grant
    Filed: November 13, 2014
    Date of Patent: February 7, 2017
    Assignee: AT&T Intellectual Property II, L.P.
    Inventors: Michael J. Johnston, Srinivas Bangalore
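
The abstract above can be illustrated with a toy transducer whose output symbols are markup fragments, so that concatenating the outputs along an accepting path yields a valid XML expression. The grammar, the gesture symbol, and the tag names below are invented examples, not the patented grammar.

```python
# Finite-state transduction from words and gestures to XML: each
# transition emits a markup fragment; an accepting path's outputs
# concatenate into one valid expression.
# States: 0 --word--> 1 --word--> 2 --gesture--> 3 (accepting)
TRANSITIONS = {
    (0, "show"): (1, "<command name=\"show\">"),
    (1, "restaurants"): (2, "<type>restaurant</type>"),
    (2, "<circle_gesture>"): (3, "<area>circled-region</area></command>"),
}
ACCEPTING = {3}

def transduce(symbols: list) -> str:
    """Run the input (word and gesture symbols) through the FST and
    concatenate the markup outputs into one expression."""
    state, outputs = 0, []
    for sym in symbols:
        state, out = TRANSITIONS[(state, sym)]  # KeyError = rejected input
        outputs.append(out)
    if state not in ACCEPTING:
        raise ValueError("input not accepted by the grammar")
    return "".join(outputs)

xml = transduce(["show", "restaurants", "<circle_gesture>"])
print(xml)
# -> <command name="show"><type>restaurant</type><area>circled-region</area></command>
```
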
  • Publication number: 20160330394
    Abstract: Network connectivity is used to share relevant visual and other sensory information between vehicles, as well as to deliver relevant information provided by network services, to create an enhanced view of the vehicle's surroundings. The enhanced view is presented to the occupants of the vehicle to provide an improved driving experience and/or enable the occupants to take proper action (e.g., avoid obstacles, identify traffic delays, etc.). In one example, the enhanced view comprises information that is not visible to the naked eye and/or cannot be currently sensed by the vehicle's sensors (e.g., due to a partial or blocked view, low visibility conditions, hardware capabilities of the vehicle's sensors, position of the vehicle's sensors, etc.).
    Type: Application
    Filed: July 18, 2016
    Publication date: November 10, 2016
    Inventors: Behzad Shahraray, Alicia Abella, David Crawford Gibbon, Mazin E. Gilbert, Michael J. Johnston, Horst J. Schroeter, Jay Gordon Wilpon