Patents by Inventor Michael J. Johnston
Michael J. Johnston has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11914780
Abstract: A finger-mounted device may include finger-mounted units. The finger-mounted units may each have a body that serves as a support structure for components such as force sensors, accelerometers, and other sensors and for haptic output devices. The body may have sidewall portions coupled by a portion that rests adjacent to a user's fingernail. The body may be formed from deformable material such as metal or may be formed from adjustable structures such as sliding body portions that are coupled to each other using magnetic attraction, springs, or other structures. The body of each finger-mounted unit may have a U-shaped cross-sectional profile that leaves the finger pad of each finger exposed when the body is coupled to a fingertip of a user's finger. Control circuitry may gather finger press input, lateral finger movement input, and finger tap input using the sensors and may provide haptic output using the haptic output devices.
Type: Grant
Filed: August 11, 2022
Date of Patent: February 27, 2024
Assignee: Apple Inc.
Inventors: Paul X Wang, Alex J. Lehmann, Michael J. Rockwell, Michael Y. Cheung, Ray L. Chang, Hongcheng Sun, Ian M. Bullock, Kyle J. Nekimken, Madeleine S. Cordier, Seung Wook Kim, David H. Bloom, Scott G. Johnston
-
Patent number: 10853420
Abstract: Extracting, from user activity data, quantitative attributes and qualitative attributes collected for users having user profiles. The quantitative attributes and the qualitative attributes are extracted during a specified time period determined before the user activity data is collected. Values for the quantitative attributes and the qualitative attributes are plotted, and subsets of the user profiles are clustered into separate groups of users based on the plotted values. Product-related content is then delivered to the groups of users based on the clustering.
Type: Grant
Filed: August 21, 2017
Date of Patent: December 1, 2020
Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
Inventors: Srinivas Bangalore, Junlan Feng, Michael J. Johnston, Taniya Mishra
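The clustering step described in the abstract above can be pictured with a minimal sketch. This is not the patented method, only a toy k-means over two-dimensional (quantitative, qualitative) value pairs; the function and its parameters are hypothetical names chosen for illustration.

```python
from collections import defaultdict

def cluster_profiles(points, centroids, iters=10):
    """Toy k-means over plotted (quantitative, qualitative) value pairs:
    assign each profile's point to its nearest centroid, recompute the
    centroids as group means, and repeat."""
    groups = defaultdict(list)
    for _ in range(iters):
        groups = defaultdict(list)
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: (p[0] - centroids[i][0]) ** 2
                                      + (p[1] - centroids[i][1]) ** 2)
            groups[nearest].append(p)
        # Recompute each centroid as the mean of its group;
        # a centroid with no members is kept unchanged.
        new_centroids = []
        for i in range(len(centroids)):
            g = groups[i]
            if g:
                new_centroids.append((sum(p[0] for p in g) / len(g),
                                      sum(p[1] for p in g) / len(g)))
            else:
                new_centroids.append(centroids[i])
        centroids = new_centroids
    return dict(groups), centroids
```

With well-separated profile values, the groups stabilize after a couple of iterations, and content can then be targeted per group.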
-
Patent number: 10739976
Abstract: Methods, systems, devices, and media for creating a plan through multimodal search inputs are provided. A first search request comprises a first input received via a first input mode and a second input received via a different second input mode. The second input identifies a geographic area. First search results are displayed based on the first search request and corresponding to the geographic area. Each of the first search results is associated with a geographic location. A selection of one of the first search results is received and added to a plan. A second search request is received after the selection, and second search results are displayed in response to the second search request. The second search results are based on the second search request and correspond to the geographic location of the selected one of the first search results.
Type: Grant
Filed: January 16, 2018
Date of Patent: August 11, 2020
Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
Inventors: Michael J. Johnston, Patrick Ehlen, Hyuckchul Jung, Jay H. Lieske, Jr., Ethan Selfridge, Brant J. Vasilieff, Jay Gordon Wilpon
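The two-stage search flow in the abstract can be sketched as follows. This is an illustrative approximation only: the venue data, function names, and the equirectangular distance shortcut are all assumptions, not part of the patent.

```python
import math

# Hypothetical venue data: (name, (latitude, longitude)).
VENUES = [
    ("cafe", (40.0, -74.0)),
    ("museum", (40.01, -74.01)),
    ("theater", (41.5, -73.0)),
]

def _dist_km(a, b):
    # Equirectangular approximation; adequate at city scale.
    dx = (a[1] - b[1]) * math.cos(math.radians((a[0] + b[0]) / 2)) * 111.32
    dy = (a[0] - b[0]) * 111.32
    return math.hypot(dx, dy)

def search_in_area(center, radius_km):
    """First search: results inside the user-identified geographic area."""
    return [v for v in VENUES if _dist_km(v[1], center) <= radius_km]

def search_near(anchor, radius_km):
    """Second search: anchored on a selected first result's location."""
    return [v for v in VENUES if _dist_km(v[1], anchor) <= radius_km]
```

A plan is built by selecting a first result and letting its location anchor the next query, e.g. `plan = [search_in_area((40.0, -74.0), 5)[0]]`, then `search_near(plan[0][1], 5)`.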
-
Patent number: 10497371
Abstract: A system, method and computer-readable storage devices are disclosed for multi-modal interactions with a system via a long-touch gesture on a touch-sensitive display. A system operating per this disclosure can receive a multi-modal input comprising speech and a touch on a display, wherein the speech comprises a pronoun. When the touch on the display has a duration longer than a threshold duration, the system can identify an object within a threshold distance of the touch, associate the object with the pronoun in the speech, to yield an association, and perform an action based on the speech and the association.
Type: Grant
Filed: April 29, 2019
Date of Patent: December 3, 2019
Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
Inventors: Brant J. Vasilieff, Patrick Ehlen, Michael J. Johnston
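The long-touch binding logic described above can be sketched directly: hold duration gates the gesture, and the pronoun is bound to the nearest object within a distance threshold. The thresholds, data shapes, and pronoun list are illustrative assumptions, not the claimed implementation.

```python
def interpret(speech_tokens, touch, objects,
              min_hold_s=0.5, max_dist=50.0):
    """If the touch is a long touch (duration over min_hold_s) and the
    speech contains a pronoun, bind that pronoun to the nearest
    on-screen object within max_dist pixels of the touch point."""
    PRONOUNS = {"this", "that", "it", "these", "those"}
    pronoun = next((t for t in speech_tokens if t in PRONOUNS), None)
    if pronoun is None or touch["duration"] < min_hold_s:
        return None
    x, y = touch["x"], touch["y"]
    nearest = min(objects, key=lambda o: (o["x"] - x) ** 2 + (o["y"] - y) ** 2)
    if ((nearest["x"] - x) ** 2 + (nearest["y"] - y) ** 2) ** 0.5 > max_dist:
        return None  # nothing close enough to bind the pronoun to
    return {"pronoun": pronoun, "object": nearest["id"]}
```

The returned association would then drive whatever action the speech requests (e.g. "what is that" plus a long touch on a map pin).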
-
Publication number: 20190251969
Abstract: A system, method and computer-readable storage devices are disclosed for multi-modal interactions with a system via a long-touch gesture on a touch-sensitive display. A system operating per this disclosure can receive a multi-modal input comprising speech and a touch on a display, wherein the speech comprises a pronoun. When the touch on the display has a duration longer than a threshold duration, the system can identify an object within a threshold distance of the touch, associate the object with the pronoun in the speech, to yield an association, and perform an action based on the speech and the association.
Type: Application
Filed: April 29, 2019
Publication date: August 15, 2019
Inventors: Brant J. VASILIEFF, Patrick EHLEN, Michael J. JOHNSTON
-
Patent number: 10276158
Abstract: A system, method and computer-readable storage devices are disclosed for multi-modal interactions with a system via a long-touch gesture on a touch-sensitive display. A system operating per this disclosure can receive a multi-modal input comprising speech and a touch on a display, wherein the speech comprises a pronoun. When the touch on the display has a duration longer than a threshold duration, the system can identify an object within a threshold distance of the touch, associate the object with the pronoun in the speech, to yield an association, and perform an action based on the speech and the association.
Type: Grant
Filed: October 31, 2014
Date of Patent: April 30, 2019
Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
Inventors: Brant J. Vasilieff, Patrick Ehlen, Michael J. Johnston
-
Publication number: 20180190288
Abstract: A method of providing hybrid speech recognition between a local embedded speech recognition system and a remote speech recognition system relates to receiving speech from a user at a device communicating with a remote speech recognition system. The system recognizes a first part of the speech by performing a first recognition of that part with the embedded speech recognition system, which accesses private user data not available to the remote speech recognition system. The system recognizes a second part of the speech by performing a second recognition with the remote speech recognition system. The final recognition result is a combination of these two recognitions. The private data can be local information such as a user's location, a playlist, frequently dialed numbers or texted contacts, and user contact list information.
Type: Application
Filed: February 26, 2018
Publication date: July 5, 2018
Inventors: David THOMSON, Michael J. JOHNSTON, Vivek Kumar RANGARAJAN SRIDHAR
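The local/remote split in the abstract can be illustrated with a minimal sketch. Everything here is an assumption for illustration: real audio decoding is replaced by text stand-ins, and the fuzzy contact match merely stands in for an embedded recognizer with access to private data.

```python
import difflib

def remote_recognize(audio_text):
    # Stand-in for a cloud ASR request; the "audio" here is already
    # text, since actual decoding is out of scope for this sketch.
    return audio_text.lower()

def local_recognize(audio_text, contacts):
    # Stand-in for the embedded recognizer: it resolves the heard name
    # against the private, on-device contact list via fuzzy matching.
    match = difflib.get_close_matches(audio_text, contacts, n=1, cutoff=0.0)
    return match[0] if match else audio_text

def hybrid_recognize(command_part, name_part, contacts):
    """Split recognition: the generic command goes to the remote
    recognizer, while the name is matched locally so the contact
    list never leaves the device; the results are then combined."""
    return f"{remote_recognize(command_part)} {local_recognize(name_part, contacts)}"
```

The point of the split is that the remote system only ever sees the generic part of the utterance, never the private contact list it would need to resolve the name.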
-
Publication number: 20180157403
Abstract: Methods, systems, devices, and media for creating a plan through multimodal search inputs are provided. A first search request comprises a first input received via a first input mode and a second input received via a different second input mode. The second input identifies a geographic area. First search results are displayed based on the first search request and corresponding to the geographic area. Each of the first search results is associated with a geographic location. A selection of one of the first search results is received and added to a plan. A second search request is received after the selection, and second search results are displayed in response to the second search request. The second search results are based on the second search request and correspond to the geographic location of the selected one of the first search results.
Type: Application
Filed: January 16, 2018
Publication date: June 7, 2018
Applicant: AT&T INTELLECTUAL PROPERTY I, L.P.
Inventors: Michael J. JOHNSTON, Patrick EHLEN, Hyuckchul JUNG, Jay H. LIESKE, JR., Ethan SELFRIDGE, Brant J. VASILIEFF, Jay Gordon WILPON
-
Patent number: 9953644
Abstract: A system, method and computer-readable storage devices are disclosed for using targeted clarification (TC) questions in dialog systems, in a multimodal virtual agent system (MVA) providing access to information about movies, restaurants, and musical events. In contrast with open-domain spoken systems, the MVA application covers a domain with a fixed set of concepts and uses a natural language understanding (NLU) component to mark concepts in automatically recognized speech. Instead of identifying an error segment, localized error detection (LED) identifies which of the concepts are likely to be present and correct using domain knowledge, automatic speech recognition (ASR), and NLU tags and scores. If at least one concept is identified as present but not correct, the TC component uses this information to generate a targeted clarification question.
Type: Grant
Filed: December 1, 2014
Date of Patent: April 24, 2018
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Ethan Selfridge, Michael J. Johnston, Svetlana Stoyanchev
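The LED-then-TC flow can be sketched schematically. This is a rough illustration of the idea, not the patented pipeline: the slot names, confidence threshold, and question templates are all invented for the example.

```python
def targeted_clarification(concepts, threshold=0.5):
    """LED-style check: each concept slot carries a recognized value and
    an NLU confidence score. A concept that is present but scored below
    the threshold triggers a question about that slot only, instead of a
    generic 'please repeat'."""
    TEMPLATES = {
        "movie": "Which movie did you mean?",
        "restaurant": "Which restaurant did you mean?",
        "event": "Which musical event did you mean?",
    }
    for slot, (value, score) in concepts.items():
        if value is not None and score < threshold:
            return TEMPLATES.get(slot, f"Sorry, which {slot}?")
    return None  # all present concepts look correct; nothing to clarify
```

The benefit over open-domain clarification is that the user is asked to repair exactly one concept rather than the whole utterance.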
-
Patent number: 9942616
Abstract: A portable communication device has a touch screen display that receives tactile input and a microphone that receives audio input. The portable communication device initiates a query for media based at least in part on tactile input and audio input. The touch screen display is a multi-touch screen. The portable communication device sends an initiated query and receives a text response indicative of a speech-to-text conversion of the query. The portable communication device then displays video in response to tactile input and audio input.
Type: Grant
Filed: May 2, 2016
Date of Patent: April 10, 2018
Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
Inventors: Behzad Shahraray, David Crawford Gibbon, Bernard S. Renger, Zhu Liu, Andrea Basso, Mazin Gilbert, Michael J. Johnston
-
Patent number: 9904450
Abstract: Methods, systems, devices, and media for creating a plan through multimodal search inputs are provided. A multimodal virtual assistant receives a first search request which comprises a geographic area. First search results are displayed in response to the first search request being received. The first search results are based on the first search request and correspond to the geographic area. Each of the first search results is associated with a geographic location. The multimodal virtual assistant receives a selection of one of the first search results, and adds the selected one of the first search results to a plan. A second search request is received after the selection, and second search results are displayed in response to the second search request being received. The second search results are based on the second search request and correspond to the geographic location of the selected one of the first search results.
Type: Grant
Filed: December 19, 2014
Date of Patent: February 27, 2018
Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
Inventors: Michael J. Johnston, Patrick Ehlen, Hyuckchul Jung, Jay H. Lieske, Jr., Ethan Selfridge, Brant J. Vasilieff, Jay Gordon Wilpon
-
Patent number: 9905228
Abstract: A method of providing hybrid speech recognition between a local embedded speech recognition system and a remote speech recognition system relates to receiving speech from a user at a device communicating with a remote speech recognition system. The system recognizes a first part of the speech by performing a first recognition of that part with the embedded speech recognition system, which accesses private user data not available to the remote speech recognition system. The system recognizes a second part of the speech by performing a second recognition with the remote speech recognition system. The final recognition result is a combination of these two recognitions. The private data can be local information such as a user's location, a playlist, frequently dialed numbers or texted contacts, and user contact list information.
Type: Grant
Filed: May 26, 2017
Date of Patent: February 27, 2018
Assignee: Nuance Communications, Inc.
Inventors: David Thomson, Michael J. Johnston, Vivek Kumar Rangarajan Sridhar
-
Patent number: 9866782
Abstract: Network connectivity is used to share relevant visual and other sensory information between vehicles, as well as to deliver relevant information provided by network services, to create an enhanced view of the vehicle's surroundings. The enhanced view is presented to the occupants of the vehicle to provide an improved driving experience and/or enable the occupants to take proper action (e.g., avoid obstacles, identify traffic delays, etc.). In one example, the enhanced view comprises information that is not visible to the naked eye and/or cannot be currently sensed by the vehicle's sensors (e.g., due to a partial or blocked view, low visibility conditions, hardware capabilities of the vehicle's sensors, position of the vehicle's sensors, etc.).
Type: Grant
Filed: July 18, 2016
Date of Patent: January 9, 2018
Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
Inventors: Behzad Shahraray, Alicia Abella, David Crawford Gibbon, Mazin E. Gilbert, Michael J. Johnston, Horst J. Schroeter, Jay Gordon Wilpon
-
Publication number: 20170344665
Abstract: Extracting, from user activity data, quantitative attributes and qualitative attributes collected for users having user profiles. The quantitative attributes and the qualitative attributes are extracted during a specified time period determined before the user activity data is collected. Values for the quantitative attributes and the qualitative attributes are plotted, and subsets of the user profiles are clustered into separate groups of users based on the plotted values. Product-related content is then delivered to the groups of users based on the clustering.
Type: Application
Filed: August 21, 2017
Publication date: November 30, 2017
Applicant: AT&T INTELLECTUAL PROPERTY I, L.P.
Inventors: Srinivas BANGALORE, Junlan FENG, Michael J. JOHNSTON, Taniya MISHRA
-
Publication number: 20170300487
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for approximating responses to a user speech query in voice-enabled search based on metadata that include demographic features of the speaker. A system practicing the method recognizes received speech from a speaker to generate recognized speech, identifies metadata about the speaker from the received speech, and feeds the recognized speech and the metadata to a question-answering engine. Identifying the metadata about the speaker is based on voice characteristics of the received speech. The demographic features can include age, gender, socio-economic group, nationality, and/or region. The metadata identified about the speaker from the received speech can be combined with or override self-reported speaker demographic information.
Type: Application
Filed: June 29, 2017
Publication date: October 19, 2017
Inventors: Michael J. Johnston, Srinivas Bangalore, Junlan Feng, Taniya Mishra
-
Publication number: 20170263253
Abstract: A method of providing hybrid speech recognition between a local embedded speech recognition system and a remote speech recognition system relates to receiving speech from a user at a device communicating with a remote speech recognition system. The system recognizes a first part of the speech by performing a first recognition of that part with the embedded speech recognition system, which accesses private user data not available to the remote speech recognition system. The system recognizes a second part of the speech by performing a second recognition with the remote speech recognition system. The final recognition result is a combination of these two recognitions. The private data can be local information such as a user's location, a playlist, frequently dialed numbers or texted contacts, and user contact list information.
Type: Application
Filed: May 26, 2017
Publication date: September 14, 2017
Inventors: David THOMSON, Michael J. JOHNSTON, Vivek Kumar RANGARAJAN SRIDHAR
-
Patent number: 9697206
Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for approximating responses to a user speech query in voice-enabled search based on metadata that include demographic features of the speaker. A system practicing the method recognizes received speech from a speaker to generate recognized speech, identifies metadata about the speaker from the received speech, and feeds the recognized speech and the metadata to a question-answering engine. Identifying the metadata about the speaker is based on voice characteristics of the received speech. The demographic features can include age, gender, socio-economic group, nationality, and/or region. The metadata identified about the speaker from the received speech can be combined with or override self-reported speaker demographic information.
Type: Grant
Filed: October 7, 2015
Date of Patent: July 4, 2017
Assignee: Interactions LLC
Inventors: Michael J. Johnston, Srinivas Bangalore, Junlan Feng, Taniya Mishra
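The "combined with or override" behavior at the end of the abstract can be sketched as a small merge policy. The function name, keys, and values are illustrative assumptions; the abstract does not specify how the merge is performed.

```python
def merge_metadata(voice_derived, self_reported, override=True):
    """Combine demographic features inferred from voice characteristics
    with self-reported ones. When override is True, a voice-derived
    value wins on conflict; otherwise voice-derived values only fill
    gaps the user left blank."""
    merged = dict(self_reported)
    for key, value in voice_derived.items():
        if override or key not in merged:
            merged[key] = value
    return merged
```

The merged metadata would then accompany the recognized speech into the question-answering engine to bias its responses.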
-
Patent number: 9666188
Abstract: A method of providing hybrid speech recognition between a local embedded speech recognition system and a remote speech recognition system relates to receiving speech from a user at a device communicating with a remote speech recognition system. The system recognizes a first part of the speech by performing a first recognition of that part with the embedded speech recognition system, which accesses private user data not available to the remote speech recognition system. The system recognizes a second part of the speech by performing a second recognition with the remote speech recognition system. The final recognition result is a combination of these two recognitions. The private data can be local information such as a user's location, a playlist, frequently dialed numbers or texted contacts, and user contact list information.
Type: Grant
Filed: October 29, 2013
Date of Patent: May 30, 2017
Assignee: Nuance Communications, Inc.
Inventors: David Thomson, Michael J. Johnston, Vivek Kumar Rangarajan Sridhar
-
Patent number: 9563395
Abstract: When using finite-state devices to perform various functions, it is beneficial to use finite-state devices representing regular grammars with terminals having markup-language-based semantics. By using markup-language-based symbols in the finite-state devices, it is possible to generate valid markup-language expressions by concatenating the symbols representing the result of the performed function. The markup-language expression can be used by other applications and/or devices. Finite-state devices are used to convert strings of words and gestures into valid markup-language expressions, for example in XML, that can be used, for example, to provide an application program interface to underlying system applications.
Type: Grant
Filed: November 13, 2014
Date of Patent: February 7, 2017
Assignee: AT&T Intellectual Property II, L.P.
Inventors: Michael J. Johnston, Srinivas Bangalore
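The core idea, concatenating markup-language output symbols along an accepted finite-state path, can be shown with a toy transducer. The grammar, states, and XML fragments below are invented for illustration and are far simpler than the multimodal grammars the patent concerns.

```python
def emit_markup(tokens, transitions):
    """Walk a finite-state transducer whose output symbols are XML
    fragments; concatenating the outputs along the accepted path
    yields a complete markup-language expression."""
    state, out = 0, []
    for tok in tokens:
        state, fragment = transitions[(state, tok)]  # KeyError = rejected input
        out.append(fragment)
    return "".join(out)

# Toy grammar: the word "zoom" followed by the deictic "here"
# (accompanying a pointing gesture) emits a well-formed command.
TRANSITIONS = {
    (0, "zoom"): (1, "<cmd><zoom/>"),
    (1, "here"): (2, "<loc gesture='point'/></cmd>"),
}
```

Because each output symbol is a markup fragment with balanced structure along any accepted path, the concatenated result is itself a valid expression that downstream applications can parse.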
-
Publication number: 20160330394
Abstract: Network connectivity is used to share relevant visual and other sensory information between vehicles, as well as to deliver relevant information provided by network services, to create an enhanced view of the vehicle's surroundings. The enhanced view is presented to the occupants of the vehicle to provide an improved driving experience and/or enable the occupants to take proper action (e.g., avoid obstacles, identify traffic delays, etc.). In one example, the enhanced view comprises information that is not visible to the naked eye and/or cannot be currently sensed by the vehicle's sensors (e.g., due to a partial or blocked view, low visibility conditions, hardware capabilities of the vehicle's sensors, position of the vehicle's sensors, etc.).
Type: Application
Filed: July 18, 2016
Publication date: November 10, 2016
Inventors: Behzad Shahraray, Alicia Abella, David Crawford Gibbon, Mazin E. Gilbert, Michael J. Johnston, Horst J. Schroeter, Jay Gordon Wilpon