Patents Examined by Mark Villena
  • Patent number: 11288457
    Abstract: Systems and methods are disclosed for determining a move driven by an interaction. In some embodiments, a processor determines an operational state of an interaction with a user based on parameter values of a data structure. The processor identifies a plurality of candidate moves for changing the operational state by determining a domain in which the interaction is occurring, retrieving a set of candidate moves that correspond to the domain from a knowledge graph, and adding the set to the plurality of candidate moves. The processor encodes input of the user received during the interaction into encoded terms, and determines a move for changing the operational state based on a match of the encoded terms to the set of candidate moves. The processor updates the parameter values of the data structure based on the move to reflect the current operational state resulting from the move.
    Type: Grant
    Filed: February 1, 2019
    Date of Patent: March 29, 2022
    Assignee: Interactions LLC
    Inventors: Svetlana Stoyanchev, Michael Johnston
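    Sketch: A minimal Python rendering of the flow in the abstract above, assuming a toy dict-based knowledge graph, a domain-keyed candidate set, and a simple term-overlap match; every name and data structure below is hypothetical.
      # Candidate moves retrieved per domain; a dict stands in for the knowledge graph.
      KNOWLEDGE_GRAPH = {
          "banking": [
              {"move": "check_balance", "terms": {"balance", "account"}},
              {"move": "transfer_funds", "terms": {"transfer", "send", "money"}},
          ],
      }

      def encode(utterance):
          # Stand-in for the encoding step: lowercase word tokens.
          return set(utterance.lower().split())

      def choose_move(state, utterance):
          candidates = KNOWLEDGE_GRAPH.get(state["domain"], [])
          terms = encode(utterance)
          # Pick the candidate whose trigger terms best overlap the encoded terms.
          best = max(candidates, key=lambda c: len(terms & c["terms"]), default=None)
          if best and terms & best["terms"]:
              state["last_move"] = best["move"]          # update the dialog data structure
              state["turn"] = state.get("turn", 0) + 1   # reflect the new operational state
          return state

      print(choose_move({"domain": "banking"}, "I want to transfer some money"))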
  • Patent number: 11273778
    Abstract: Techniques for engaging a drowsy or otherwise impaired driver of a vehicle in a VUI dialog are described. A vehicle computing system sends data (e.g., raw sensor data and/or an indication that a driver is impaired, determined based on the raw sensor data) to a remote server(s). The remote server(s) may separately determine whether the driver is impaired based on the raw sensor data and/or other contextual data. The remote server(s) selects a speechlet to provide output data based on the sensor data, contextual data, and/or a level at which the driver is impaired. The remote server(s) then causes the vehicle computing system to present output audio corresponding to output data provided by the speechlet.
    Type: Grant
    Filed: November 9, 2017
    Date of Patent: March 15, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Hamza Lakhani, Thomas Schaaf, Leah Rose Nicolich-Henkin, Ricardo DeMatos, Mingzhi Yu
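    Sketch: A hedged guess at how sensor data might map to an impairment level and a speechlet choice; the thresholds, level values, and speechlet names are invented for illustration.
      def impairment_level(eye_closure_ratio, lane_deviation_m):
          # Illustrative thresholds only; not taken from the patent.
          level = 0
          if eye_closure_ratio > 0.3:
              level += 1
          if lane_deviation_m > 0.5:
              level += 1
          return level  # 0 = alert, 1 = mildly impaired, 2 = severely impaired

      def select_speechlet(level):
          # The remote server would pick a speechlet based on the level (and context).
          return {0: None, 1: "trivia_game", 2: "rest_stop_navigator"}[level]

      level = impairment_level(eye_closure_ratio=0.4, lane_deviation_m=0.7)
      print(select_speechlet(level))  # -> rest_stop_navigator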
  • Patent number: 11270083
    Abstract: In one example, a processor may: execute a machine-translation script to generate a machine-translation for a set of strings to be displayed upon execution of a subject application; cause a first display including a listing of testing actions to be performed by a test application; cause a second display that includes a GUI of the subject application, the second display including the set of strings; receive a user-translation for each of the strings via the GUI; and update the machine-translation script to include the received user-translations.
    Type: Grant
    Filed: February 26, 2015
    Date of Patent: March 8, 2022
    Assignee: Micro Focus LLC
    Inventors: Avigad Mizrahi, Omer Frieman, Simon Rabinowitz
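    Sketch: One way the claimed update step could look, assuming the machine-translation "script" is just a source-to-target string table; all names here are hypothetical.
      # Placeholder machine-translation output for the subject application's UI strings.
      machine_translation = {"Save": "Sauvegarder", "Cancel": "Annuler"}

      def apply_user_translations(script, corrections):
          # corrections: {source_string: user_translation} collected through the test GUI
          for source, user_text in corrections.items():
              if user_text and user_text != script.get(source):
                  script[source] = user_text  # user translation supersedes the machine output
          return script

      print(apply_user_translations(machine_translation, {"Save": "Enregistrer"}))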
  • Patent number: 11263714
    Abstract: Manual human processing of documents often generates results that are subjective and include human error. The cost and relatively slow speed of manual, human analysis make it effectively impossible or impracticable to perform document analysis at the scale, speed, and cost desired in many industries. Accordingly, it may be advantageous to employ objective, accurate rule-based techniques to evaluate and process documents. This application discloses data processing equipment and methods specially adapted for a specific application: analysis of the breadth of documents. The processing may include context-dependent pre-processing of documents and sub-portions of the documents. The sub-portions may be analyzed based on word count and commonality of words in the respective sub-portions. The equipment and methods disclosed herein improve upon other automated document processing techniques by achieving a result that quantitatively improves upon manual, human processing.
    Type: Grant
    Filed: July 26, 2019
    Date of Patent: March 1, 2022
    Assignee: AON RISK SERVICES, INC. OF MARYLAND
    Inventors: William Michael Edmund, John E. Bradley, III, Daniel Crouse
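    Sketch: A toy breadth score built only from the two signals named in the abstract, word count and word commonality; the reference frequencies and the weighting are invented.
      REFERENCE_FREQ = {"a": 0.9, "method": 0.6, "quantum": 0.01, "widget": 0.05}

      def breadth_score(sub_portion):
          words = sub_portion.lower().split()
          commonality = sum(REFERENCE_FREQ.get(w, 0.001) for w in words) / len(words)
          # Fewer words and more common words -> broader; the weighting is illustrative.
          return commonality / len(words)

      claims = ["A method", "A quantum widget method"]
      print(sorted(claims, key=breadth_score, reverse=True))  # broadest first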
  • Patent number: 11263300
    Abstract: A device and a method for authenticating a user. The method includes selecting a phrase key from a plurality of phrase keys. The method also includes receiving, from a target service, a file that includes parsed data based on speech recognition processing of a phrase spoken by a user. Additionally, the method includes sending a notification to the target service upon a determination that the parsed data matches the phrase key. The method further includes receiving a set of user credentials from the target service and sending the set of user credentials to the virtual assistant device.
    Type: Grant
    Filed: July 25, 2017
    Date of Patent: March 1, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Pei Zheng, Yu Wang
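    Sketch: A guess at the phrase-key handshake, with the speech recognition, the target service, and the credential exchange reduced to plain function calls; all names are hypothetical.
      import secrets

      PHRASE_KEYS = ["blue falcon seven", "quiet river nine"]

      def select_phrase_key():
          return secrets.choice(PHRASE_KEYS)  # select one phrase key from the pool

      def authenticate(phrase_key, parsed_phrase, issue_credentials):
          # parsed_phrase: speech-recognition output received in a file from the target service
          if parsed_phrase.strip().lower() == phrase_key:
              notification = {"status": "match"}        # notification sent to the target service
              return notification, issue_credentials()  # credentials forwarded to the assistant device
          return {"status": "no_match"}, None

      key = select_phrase_key()
      print(authenticate(key, key.upper(), lambda: {"token": secrets.token_hex(8)}))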
  • Patent number: 11250865
    Abstract: Methods, apparatus, systems and articles of manufacture (e.g., physical storage media) to utilize audio watermarking for people monitoring are disclosed. Example people monitoring methods disclosed herein include determining, at a user device, whether a first trigger condition for emitting an audio watermark identifying at least one of the user device or a user of the user device is satisfied. Such example methods also include, in response to determining that the first trigger condition is satisfied, providing a first audio signal including the audio watermark to an audio circuit that is to output an acoustic signal from the user device.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: February 15, 2022
    Assignee: The Nielsen Company (US), LLC
    Inventors: Alexander Topchy, Padmanabhan Soundararajan, Venugopal Srinivasan
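    Sketch: A simplified illustration of emitting a watermark only when a trigger condition holds; the tone-pair encoding and all constants are assumptions, not the patent's scheme.
      import math

      SAMPLE_RATE = 16000

      def watermark_signal(device_id, seconds=0.5, amplitude=0.05):
          # Encode each bit of an 8-bit ID as one of two high-frequency tones.
          bits = format(device_id, "08b")
          per_bit = int(SAMPLE_RATE * seconds / len(bits))
          samples = []
          for bit in bits:
              freq = 18000 if bit == "1" else 17000
              samples.extend(amplitude * math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
                             for n in range(per_bit))
          return samples

      def maybe_emit(trigger_satisfied, device_id):
          # Only provide the watermarked signal to the audio circuit when triggered.
          return watermark_signal(device_id) if trigger_satisfied else None

      audio = maybe_emit(trigger_satisfied=True, device_id=42)
      print(len(audio) if audio else "no watermark emitted")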
  • Patent number: 11238844
    Abstract: Systems and methods for identifying a person's native language and/or non-native language based on code-switched text and/or speech are presented. The systems may be trained using various methods. For example, a language identification system may be trained using one or more code-switched corpora. Text and/or speech features may be extracted from the corpora and used, in combination with a per-word language identity of the text and/or speech, to train at least one machine learner. Code-switched text and/or speech may be received and processed by extracting text and/or speech features. These features may be fed into the at least one machine learner to identify the person's native language.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: February 1, 2022
    Assignee: Educational Testing Service
    Inventors: Vikram Ramanarayanan, Robert Pugh, Yao Qian, David Suendermann-Oeft
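    Sketch: A toy version of the training and prediction steps, using two hand-made features from per-word language tags and a nearest-centroid stand-in for the machine learner; the corpora and feature choices are invented.
      def features(tagged_words):
          # tagged_words: list of (word, language) pairs from code-switched input
          langs = [lang for _, lang in tagged_words]
          switches = sum(1 for a, b in zip(langs, langs[1:]) if a != b)
          return (langs.count("es") / len(langs), switches / len(langs))

      TRAIN = [  # toy labelled examples: (feature vector, native language)
          (features([("hola", "es"), ("friend", "en"), ("como", "es"), ("estas", "es")]), "es"),
          (features([("let's", "en"), ("go", "en"), ("amigo", "es"), ("now", "en")]), "en"),
      ]

      def predict(tagged_words):
          f = features(tagged_words)
          dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
          return min(TRAIN, key=lambda example: dist(example[0], f))[1]

      print(predict([("si", "es"), ("vamos", "es"), ("ok", "en")]))  # -> es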
  • Patent number: 11232803
    Abstract: An encoding device according to the disclosure includes a first encoding unit that generates a first encoded signal in which a low-band signal having a frequency lower than or equal to a predetermined frequency from a voice or audio input signal is encoded, and a low-band decoded signal; a second encoding unit that encodes, on the basis of the low-band decoded signal, a high-band signal having a band higher than that of the low-band signal to generate a high-band encoded signal; and a first multiplexing unit that multiplexes the first encoded signal and the high-band encoded signal to generate and output an encoded signal. The second encoding unit calculates an energy ratio between a high-band noise component, which is a noise component of the high-band signal, and a high-band non-tonal component of a high-band decoded signal generated from the low-band decoded signal and outputs the ratio as the high-band encoded signal.
    Type: Grant
    Filed: March 7, 2019
    Date of Patent: January 25, 2022
    Assignee: Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
    Inventors: Srikanth Nagisetty, Zong Xian Liu, Hiroyuki Ehara
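    Sketch: The side information described in the last sentence of the abstract is an energy ratio; below is a minimal computation over plain sample lists, with hypothetical inputs.
      def energy(signal):
          return sum(s * s for s in signal) or 1e-12  # guard against an all-zero signal

      def high_band_parameter(high_band_noise, high_band_non_tonal):
          # Ratio multiplexed with the low-band bitstream as the high-band encoded signal.
          return energy(high_band_noise) / energy(high_band_non_tonal)

      print(high_band_parameter([0.1, -0.2, 0.05], [0.3, -0.25, 0.1]))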
  • Patent number: 11227124
    Abstract: Methods, apparatus, and computer readable media are described related to utilizing a context of an ongoing human-to-computer dialog to enhance the ability of an automated assistant to interpret and respond when a user abruptly transitions between different domains (subjects). In various implementations, natural language input may be received from a user during an ongoing human-to-computer dialog with an automated assistant. Grammar(s) may be selected to parse the natural language input. The selecting may be based on topic(s) stored as part of a contextual data structure associated with the ongoing human-to-computer dialog. The natural language input may be parsed based on the selected grammar(s) to generate parse(s). Based on the parse(s), a natural language response may be generated and output to the user using an output device. Any topic(s) raised by the parse(s) or the natural language response may be identified and added to the contextual data structure.
    Type: Grant
    Filed: March 7, 2019
    Date of Patent: January 18, 2022
    Assignee: GOOGLE LLC
    Inventor: Piotr Takiel
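    Sketch: A small illustration of grammar selection driven by topics stored in a contextual data structure; the regex "grammars" and topic names are invented.
      import re

      GRAMMARS = {
          "weather": re.compile(r"(?:what about|how is) (?P<city>\w+)"),
          "music":   re.compile(r"play (?P<artist>.+)"),
      }

      def respond(context, utterance):
          for topic in list(context["topics"]):       # try the stored topics first
              match = GRAMMARS[topic].match(utterance)
              if match:
                  # Move the raised topic to the front of the contextual data structure.
                  context["topics"] = [topic] + [t for t in context["topics"] if t != topic]
                  return f"[{topic}] parsed {match.groupdict()}"
          return "Sorry, no grammar matched."

      ctx = {"topics": ["music", "weather"]}
      print(respond(ctx, "what about Boston"))  # abrupt switch back to weather still parses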
  • Patent number: 11217237
    Abstract: At least one exemplary embodiment is directed to a method and device for voice operated control with learning. The method can include measuring a first sound received from a first microphone, measuring a second sound received from a second microphone, detecting a spoken voice based on an analysis of measurements taken at the first and second microphone, learning from the analysis when the user is speaking and a speaking level in noisy environments, training a decision unit from the learning to be robust to a detection of the spoken voice in the noisy environments, mixing the first sound and the second sound to produce a mixed signal, and controlling the production of the mixed signal based on the learning of one or more aspects of the spoken voice and ambient sounds in the noisy environments.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: January 4, 2022
    Assignee: Staton Techiya, LLC
    Inventors: John Usher, Steven Goldstein, Marc Boillot
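    Sketch: A toy two-microphone own-voice detector with a threshold "learned" from labelled frames, plus a simple mix; real systems would use far richer features, and every constant here is an assumption.
      def energy(frame):
          return sum(s * s for s in frame) / len(frame)

      def learn_threshold(voiced_pairs, unvoiced_pairs):
          # Midpoint between the average mic-energy ratios seen in each condition.
          ratio = lambda m1, m2: energy(m1) / max(energy(m2), 1e-9)
          r_voiced = sum(ratio(a, b) for a, b in voiced_pairs) / len(voiced_pairs)
          r_unvoiced = sum(ratio(a, b) for a, b in unvoiced_pairs) / len(unvoiced_pairs)
          return (r_voiced + r_unvoiced) / 2

      def mix(mic1, mic2, voice_detected, voice_gain=0.8):
          # Weight the voice microphone more heavily while the user is speaking.
          w = voice_gain if voice_detected else 0.5
          return [w * a + (1 - w) * b for a, b in zip(mic1, mic2)]

      threshold = learn_threshold(voiced_pairs=[([0.5, 0.4], [0.1, 0.1])],
                                  unvoiced_pairs=[([0.1, 0.1], [0.1, 0.1])])
      speaking = energy([0.5, 0.4]) / energy([0.1, 0.1]) > threshold
      print(mix([0.5, 0.4], [0.1, 0.1], speaking))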
  • Patent number: 11200890
    Abstract: Aspects of the present disclosure relate to distinguishing voice commands. One or more stored blocked directions of background voice noise from one or more audio output devices for a location of a voice command device are accessed. A voice input is received at the voice command device at the location and a determination is made that the voice input is received from a blocked direction. A status of an audio output device is queried to determine whether it is emitting audio. In response to a determination that the audio output device is currently emitting audio, an audio file is obtained from the audio output device, the audio file corresponding to a time when the voice input was received. The obtained audio file is compared with the received voice input. The received voice input is ignored if there is a substantial match with the obtained audio file.
    Type: Grant
    Filed: May 1, 2018
    Date of Patent: December 14, 2021
    Assignee: International Business Machines Corporation
    Inventors: Jack Dunning, Daniel T. Cunnington, Eunjin Lee, Giacomo G. Chiarella, John J. Wood
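    Sketch: A compressed version of the check described above; for brevity it compares transcripts rather than raw audio, and the directions, threshold, and device interface are all invented.
      from difflib import SequenceMatcher

      BLOCKED_DIRECTIONS = {90, 270}  # degrees: stored positions of known audio output devices

      def should_ignore(direction, voice_text, output_device):
          if direction not in BLOCKED_DIRECTIONS:
              return False
          if not output_device["is_playing"]:       # query the device's status
              return False
          played = output_device["transcript"]()    # audio captured for the same time window
          similarity = SequenceMatcher(None, voice_text, played).ratio()
          return similarity > 0.8                   # substantial match -> background audio

      tv = {"is_playing": True, "transcript": lambda: "alexa order a pizza"}
      print(should_ignore(90, "alexa order a pizza", tv))  # True -> ignore the input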
  • Patent number: 11194998
    Abstract: An intelligent assistant records speech spoken by a first user and determines a self-selection score for the first user. The intelligent assistant sends the self-selection score to another intelligent assistant, and receives a remote-selection score for the first user from the other intelligent assistant. The intelligent assistant compares the self-selection score to the remote-selection score. If the self-selection score is greater than the remote-selection score, the intelligent assistant responds to the first user and blocks subsequent responses to all other users until a disengagement metric of the first user exceeds a blocking threshold. If the self-selection score is less than the remote-selection score, the intelligent assistant does not respond to the first user.
    Type: Grant
    Filed: July 24, 2017
    Date of Patent: December 7, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kazuhito Koishida, Alexander A Popov, Uros Batricevic, Steven Nabil Bathiche
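    Sketch: The selection-score comparison and the blocking behaviour, reduced to a single class; the scores arrive precomputed and every threshold value is an assumption.
      class Assistant:
          def __init__(self, blocking_threshold=0.7):
              self.active_user = None
              self.blocking_threshold = blocking_threshold

          def on_speech(self, user, self_score, remote_score, disengagement):
              if self.active_user and self.active_user != user:
                  if disengagement.get(self.active_user, 0.0) <= self.blocking_threshold:
                      return "blocked"              # still engaged with the first user
                  self.active_user = None           # first user disengaged; unblock
              if self_score > remote_score:
                  self.active_user = user
                  return "respond"
              return "defer"                        # the other assistant scored higher

      a = Assistant()
      print(a.on_speech("alice", 0.9, 0.4, {}))             # respond
      print(a.on_speech("bob", 0.9, 0.4, {"alice": 0.2}))   # blocked
      print(a.on_speech("bob", 0.9, 0.4, {"alice": 0.8}))   # respond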
  • Patent number: 11189276
    Abstract: A vehicle includes a communication device configured to communicate with a terminal capable of providing a communication function; a sensor configured to receive voice of a user; a storage configured to store a user pattern related to a call pattern of the user; and a controller configured to search for at least one name candidate corresponding to input voice when receiving the input voice, determine a threshold for a confidence score of the at least one name candidate based on the user pattern, and select a name corresponding to the input voice from among the at least one name candidate based on the determined threshold.
    Type: Grant
    Filed: February 1, 2019
    Date of Patent: November 30, 2021
    Assignees: Hyundai Motor Company, Kia Motors Corporation
    Inventor: Kyung Chul Lee
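    Sketch: One plausible reading of a per-name threshold derived from the stored call pattern; the adjustment formula and the numbers are invented.
      CALL_PATTERN = {"mom": 40, "dr lee": 2}          # calls per month, per contact

      def threshold_for(name, base=0.8, floor=0.5):
          # Frequently called names get a lower confidence threshold.
          return max(floor, base - 0.01 * CALL_PATTERN.get(name, 0))

      def select_name(candidates):
          # candidates: (name, confidence score) pairs from voice recognition
          accepted = [(n, s) for n, s in candidates if s >= threshold_for(n)]
          return max(accepted, key=lambda c: c[1])[0] if accepted else None

      print(select_name([("mom", 0.55), ("dr lee", 0.55)]))  # 'mom' clears its lower threshold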
  • Patent number: 11176946
    Abstract: A speech recognition method includes receiving a sentence generated through speech recognition, calculating a degree of suitability for each word in the sentence based on a relationship of each word with other words in the sentence, detecting a target word to be corrected among the words in the sentence based on the degree of suitability for each word, and replacing the target word with any one of candidate words corresponding to the target word.
    Type: Grant
    Filed: April 6, 2018
    Date of Patent: November 16, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Heeyoul Choi, Hoshik Lee
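    Sketch: A toy co-occurrence table stands in for the learned word-relationship model; the real method would score suitability with a trained model rather than raw counts.
      # Keys are alphabetically sorted word pairs; values are toy co-occurrence counts.
      COOCCUR = {("eat", "pizza"): 5, ("a", "pizza"): 3, ("eat", "i"): 4, ("a", "eat"): 2}

      def score(word, others):
          return sum(COOCCUR.get(tuple(sorted((word, o))), 0) for o in others)

      def correct(sentence, candidates):
          words = sentence.lower().split()
          suitability = {w: score(w, [o for o in words if o != w]) for w in words}
          target = min(suitability, key=suitability.get)             # least suitable word
          best = max(candidates.get(target, [target]),
                     key=lambda c: score(c, [o for o in words if o != target]))
          return sentence.replace(target, best)

      print(correct("i eat a pisa", {"pisa": ["pizza", "visa"]}))    # -> i eat a pizza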
  • Patent number: 11176943
    Abstract: According to an embodiment, a voice recognition device includes one or more processors. The one or more processors are configured to: recognize a voice signal representing a voice uttered by an object speaker, to generate text and meta information representing information that is included in the voice signal but not in the text; generate an object presentation vector including a plurality of parameters representing a feature of a presentation uttered by the object speaker; calculate a similarity between the object presentation vector and a reference presentation vector including a plurality of parameters representing a feature of a presentation uttered by a reference speaker; and output the text. The one or more processors are further configured to determine whether to output the meta information based on the similarity, and upon determining to output the meta information, add the meta information to the text and output the meta information.
    Type: Grant
    Filed: February 14, 2018
    Date of Patent: November 16, 2021
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Kosei Fume, Masahiro Yamamoto
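    Sketch: A cosine-similarity gate over invented prosody features (speaking rate, pitch variance, pause ratio); the direction of the decision (emit the meta information when the speakers are similar) is an assumption, since the abstract only says the decision depends on the similarity.
      import math

      def cosine(a, b):
          dot = sum(x * y for x, y in zip(a, b))
          norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
          return dot / norm

      def emit(text, meta, object_vec, reference_vec, threshold=0.95):
          if cosine(object_vec, reference_vec) >= threshold:
              return f"{text} [{meta}]"   # similar presentation: add the meta information
          return text                     # otherwise output the text alone

      print(emit("hello everyone", "laughs", [4.1, 0.8, 0.2], [4.0, 0.9, 0.2]))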
  • Patent number: 11170182
    Abstract: The present invention relates to a braille editing method using an error output function, a recording medium storing a program for executing the same, and a computer program stored in a recording medium for executing the same. More particularly, the present invention relates to a braille editing method, recording medium, and computer program that are capable of finding the location where a braille translation error has occurred by utilizing index information when the error is detected, thereby facilitating correction.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: November 9, 2021
    Assignee: SENSEE, INC.
    Inventor: In Sik Seo
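    Sketch: The index-information idea reduced to a cell-to-source-offset table, so a detected translation error can be traced back to where it occurred; the braille table and error marker are placeholders.
      def translate(text, table):
          cells, index = [], []
          for pos, ch in enumerate(text):
              cells.append(table.get(ch, "?"))   # "?" marks a failed translation
              index.append(pos)                  # index information: cell -> source offset
          return cells, index

      def report_errors(cells, index, text):
          # For each error cell, output its location in the braille and in the source text.
          return [(i, index[i], text[index[i]]) for i, c in enumerate(cells) if c == "?"]

      TABLE = {"a": "⠁", "b": "⠃", "c": "⠉", " ": " "}
      cells, index = translate("abc x", TABLE)
      print(report_errors(cells, index, "abc x"))  # [(4, 4, 'x')]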
  • Patent number: 11164566
    Abstract: Methods and systems for automatic speech recognition and methods and systems for training acoustic language models are disclosed. In accordance with one automatic speech recognition method, an acoustic input data set is analyzed to identify portions of the input data set that conform to a general language and to identify portions of the input data set that conform to at least one dialect of the general language. In addition, a general language model and at least one dialect language model are applied to the input data set to perform speech recognition by dynamically selecting between the models in accordance with each of the identified portions. Further, speech recognition results obtained in accordance with the application of the models are output.
    Type: Grant
    Filed: May 7, 2018
    Date of Patent: November 2, 2021
    Assignee: International Business Machines Corporation
    Inventors: Fadi Biadsy, Lidia Mangu, Hagen Soltau
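    Sketch: Segment-by-segment switching between a general model and a dialect model; the marker-word tagger and the lambda "models" are trivial stand-ins for real language models.
      DIALECT_MARKERS = {"y'all", "fixin", "ain't"}

      def tag(segment):
          words = set(segment.lower().split())
          return "dialect" if words & DIALECT_MARKERS else "general"

      def recognize(segments, models):
          # Dynamically select the model that matches each identified portion.
          return [models[tag(segment)](segment) for segment in segments]

      models = {"general": lambda s: f"<GEN> {s}", "dialect": lambda s: f"<DIA> {s}"}
      print(recognize(["the meeting starts at nine", "y'all fixin to join"], models))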
  • Patent number: 11164577
    Abstract: The present technology pertains to a voice assistant configured for use in a meeting room environment where the voice assistant can learn speech parameters for a meeting taking place in the meeting room environment. The voice assistant can use the speech parameters to deliver proactive notifications in a manner that is less intrusive to the conversation in the meeting.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: November 2, 2021
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: Vikas Vashisht, Michael A. Ramalho, Mihailo Zilovic, Dario De Santis
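    Sketch: One way "speech parameters" could gate a proactive notification: learn the meeting's typical pause length and speak only during an unusually long lull; the margin and the choice of parameter are assumptions.
      def learned_pause(pause_lengths_s):
          # Average pause length observed so far in this meeting.
          return sum(pause_lengths_s) / len(pause_lengths_s)

      def should_speak_now(current_silence_s, pause_lengths_s, margin=1.5):
          # Deliver the notification only when the room has been quiet longer than usual.
          return current_silence_s > margin * learned_pause(pause_lengths_s)

      print(should_speak_now(3.0, [0.8, 1.2, 1.0]))  # True: long lull, deliver now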
  • Patent number: 11150869
    Abstract: Aspects of the present disclosure relate to voice command filtering. One or more directions of background noise for a location of a voice command device are determined. The one or more directions of background noise are stored as one or more blocked directions. A voice input is received at the location of the voice command device. The direction the voice input is received from is determined and compared to the one or more blocked directions. The voice input is ignored when its direction corresponds to one of the blocked directions, unless the received voice input is in a recognized voice.
    Type: Grant
    Filed: February 14, 2018
    Date of Patent: October 19, 2021
    Assignee: International Business Machines Corporation
    Inventors: Eunjin Lee, Daniel Cunnington, John J. Wood, Giacomo G. Chiarella
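    Sketch: The base filtering rule from the abstract, with the direction estimate and speaker identification assumed to come from elsewhere; the numbers and labels are invented.
      BLOCKED_DIRECTIONS = {45, 180}   # degrees: directions where background noise was observed
      KNOWN_VOICES = {"alice", "bob"}  # speaker labels allowed to override the block

      def accept(direction, speaker_id):
          if direction in BLOCKED_DIRECTIONS and speaker_id not in KNOWN_VOICES:
              return False   # background noise from a blocked direction: ignore
          return True

      print(accept(45, "tv_audio"))  # False -> ignored
      print(accept(45, "alice"))     # True  -> recognized voice overrides the block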
  • Patent number: 11151317
    Abstract: Contextual spelling methods and systems are provided that utilize natural language processing and n-gram frequencies to group documents into logical groups and to provide spelling correction suggestions. For example, a contextual spelling correction system may receive a set of documents, group the documents into separate logical groups, generate dictionaries associated with the logical groups, receive a user input, determine scores for potential spelling correction suggestions regarding the user input, and provide spelling correction suggestions based at least partly on the dictionaries associated with the logical groups.
    Type: Grant
    Filed: January 29, 2019
    Date of Patent: October 19, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Saurabh Kumar Singh, Sichen Zhao
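    Sketch: A compact illustration of grouping by an n-gram signature, keeping one dictionary per logical group, and suggesting a correction from the closest group; here every document is its own group, which is a simplification of the described grouping.
      from difflib import get_close_matches

      def signature(text):
          words = text.lower().split()
          return frozenset(zip(words, words[1:]))        # bigram set as the group signature

      def build_groups(documents):
          return [{"signature": signature(doc), "dictionary": set(doc.lower().split())}
                  for doc in documents]

      def suggest(groups, user_input):
          overlap = lambda g: len(g["signature"] & signature(user_input))
          best = max(groups, key=overlap)                # most similar logical group
          last_word = user_input.lower().split()[-1]
          return get_close_matches(last_word, best["dictionary"], n=1)

      docs = ["order the falcon heavy rocket booster", "bake the chocolate cake batter"]
      print(suggest(build_groups(docs), "order the falcon heavy roket"))  # ['rocket']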