Patents Examined by Bharatkumar S Shah
  • Patent number: 11074417
    Abstract: Embodiments provide a computer implemented method in a data processing system comprising a processor and a memory comprising instructions, which are executed by the processor to cause the processor to implement the method of removing a cognitive terminology from a news article at a news portal, the method including: receiving, by the processor, a first news article from a user; configuring, by the processor, a cognitive terminology filter list to add one or more entities and one or more cognitive terminology types associated with each entity in the cognitive terminology filter list; dividing, by the processor, the first news article into a plurality of text segments; identifying, by the processor, one or more key entities and one or more inter-entity relationships of each text segment; detecting, by the processor, one or more cognitive terminologies in the first news article; and providing, by the processor, one or more suggestions to remove the one or more cognitive terminologies.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: July 27, 2021
    Assignee: International Business Machines Corporation
    Inventors: Manish A. Bhide, Sameep Mehta, Nishtha Madaan, Kuntal Dey
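    A minimal illustrative sketch of this kind of filtering flow, assuming a toy phrase list and simple sentence splitting; the entities, terminology types, and suggestion wording are invented and not taken from the patent:
```python
import re

# Hypothetical filter list: entity -> terminology types to flag (illustrative only).
FILTER_LIST = {"Acme Corp": ["speculative", "loaded"]}
# Hypothetical phrases per terminology type.
PHRASES = {
    "speculative": ["allegedly", "rumored to"],
    "loaded": ["disastrous", "shocking"],
}

def suggest_removals(article: str):
    """Split the article into sentence-like segments and flag filtered phrases."""
    segments = re.split(r"(?<=[.!?])\s+", article)
    suggestions = []
    for i, seg in enumerate(segments):
        for entity, types in FILTER_LIST.items():
            if entity not in seg:
                continue
            for t in types:
                for phrase in PHRASES[t]:
                    if phrase in seg.lower():
                        suggestions.append(
                            (i, entity, t, f"Consider removing '{phrase}' near '{entity}'."))
    return suggestions

print(suggest_removals("Acme Corp allegedly hid losses. Markets rose."))
```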
  • Patent number: 11068669
    Abstract: The present disclosure relates generally to dynamic translation of text and/or audio data. The client instance is hosted by one or more data centers and is accessible by one or more remote client networks. In accordance with the present approach, a translation request is received from a user via a client device, wherein the translation request is associated with an untranslated file and a target language. Further, a source language of the untranslated file is identified. Further still, the untranslated file and the target language are outputted to a third-party translation service. Finally, a translated file based on the target language, the untranslated file, and the source language of the untranslated file is received.
    Type: Grant
    Filed: February 27, 2020
    Date of Patent: July 20, 2021
    Assignee: ServiceNow, Inc.
    Inventors: Michael Dominic Malcangio, Jebakumar Mathuram Santhosm Swvigaradoss, Ankit Goel, Rajesh Voleti, Srikar Bakka, Deepak Garg
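    A rough sketch of the request flow described above, with the language detection and third-party service replaced by stubs; the names and heuristics are placeholders, not the patented implementation:
```python
from dataclasses import dataclass

@dataclass
class TranslationRequest:
    user_id: str
    untranslated_text: str
    target_language: str

def detect_source_language(text: str) -> str:
    # Placeholder heuristic; a real instance would call a language-identification service.
    return "es" if any(w in text.lower() for w in ("hola", "gracias")) else "en"

def call_third_party_translation(text: str, source: str, target: str) -> str:
    # Stub standing in for the external translation service named in the abstract.
    return f"[{source}->{target}] {text}"

def handle_request(req: TranslationRequest) -> str:
    source = detect_source_language(req.untranslated_text)
    return call_third_party_translation(req.untranslated_text, source, req.target_language)

print(handle_request(TranslationRequest("u1", "hola mundo", "en")))
```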
  • Patent number: 11062694
    Abstract: Systems and methods for generating output audio with emphasized portions are described. Spoken audio is obtained and undergoes speech processing (e.g., ASR and optionally NLU) to create text. It may be determined that the resulting text includes a portion that should be emphasized (e.g., an interjection) using at least one of knowledge of an application run on a device that captured the spoken audio, prosodic analysis, and/or linguistic analysis. The portion of text to be emphasized may be tagged (e.g., using a Speech Synthesis Markup Language (SSML) tag). TTS processing is then performed on the tagged text to create output audio including an emphasized portion corresponding to the tagged portion of the text.
    Type: Grant
    Filed: June 7, 2019
    Date of Patent: July 13, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Marco Nicolis, Adam Franciszek Nadolski
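    A small sketch of tagging an emphasized portion with SSML before TTS, assuming a hypothetical interjection list; the actual system derives emphasis from application knowledge and prosodic/linguistic analysis:
```python
INTERJECTIONS = {"wow", "ouch", "hooray"}  # illustrative list, not from the patent

def tag_emphasis(text: str) -> str:
    """Wrap likely interjections in SSML emphasis tags before TTS processing."""
    out = []
    for word in text.split():
        if word.strip(",.!?").lower() in INTERJECTIONS:
            out.append(f'<emphasis level="strong">{word}</emphasis>')
        else:
            out.append(word)
    return "<speak>" + " ".join(out) + "</speak>"

print(tag_emphasis("Wow, that was fast!"))
```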
  • Patent number: 11064006
    Abstract: A listening device identifies, based on receiving a digitized voice stream, a first keyword of a plurality of keywords in the digitized voice stream. For example, the keyword may be a name associated with a service provider (e.g., “Google”). In response to identifying the first keyword of the plurality of keywords in the digitized voice stream, the listening device identifies a first communication address of a first communication server of a first service provider associated with the first keyword of the plurality of keywords in the digitized voice stream. The listening device then routes the digitized voice stream and/or information associated with the digitized voice stream to the first communication server of the first service provider using the first communication address.
    Type: Grant
    Filed: November 13, 2019
    Date of Patent: July 13, 2021
    Assignee: Flex Ltd.
    Inventors: Mesut Gorkem Eraslan, Bruno Dias Leite
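    A toy sketch of keyword-to-provider routing; the keywords and communication addresses are placeholders, not actual service endpoints:
```python
# Hypothetical keyword-to-provider routing table; addresses are made up.
PROVIDERS = {
    "google": "https://assistant.example-google.com/voice",
    "alexa": "https://assistant.example-amazon.com/voice",
}

def route_stream(digitized_voice_text: str):
    """Pick the first recognized provider keyword and return its server address."""
    for keyword, address in PROVIDERS.items():
        if keyword in digitized_voice_text.lower():
            return keyword, address
    return None, None

print(route_stream("Hey Google, what's the weather?"))
```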
  • Patent number: 11056107
    Abstract: A computer-implemented conversational system framework to perform tasks associated with a client request. A conversation application executing on a hardware processor provides application workflow orchestration, the conversation application receiving a client request and sending one or more application requests based on the application workflow orchestration. A conversation system executing on a hardware processor provides conversation workflow orchestration, the conversation system receiving the one or more application requests. The conversation application and the conversation system develop dialog context and store the dialog context in a memory device. The conversation application and the conversation system develop the dialog context by invoking at least one micro-service to perform tasks associated with the one or more application requests. The conversation application generates a response to the client request based on the developed dialog context.
    Type: Grant
    Filed: September 11, 2018
    Date of Patent: July 6, 2021
    Assignee: International Business Machines Corporation
    Inventors: David Nahamoo, Lazaros Polymenakos, Nathaniel Mills, Li Zhu
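    A compressed sketch of application/conversation orchestration building up a shared dialog context through micro-services; the service names and logic are invented for illustration:
```python
# Each micro-service adds to a shared dialog context; the application builds its
# response to the client request from that accumulated context.
dialog_context = {}

def intent_service(request, context):
    context["intent"] = "check_balance" if "balance" in request else "unknown"

def data_service(request, context):
    if context.get("intent") == "check_balance":
        context["balance"] = 42.0  # stand-in for a backend lookup

def handle_client_request(request: str) -> str:
    for micro_service in (intent_service, data_service):
        micro_service(request, dialog_context)
    if dialog_context.get("intent") == "check_balance":
        return f"Your balance is {dialog_context['balance']:.2f}."
    return "Sorry, I didn't understand."

print(handle_client_request("What is my balance?"))
```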
  • Patent number: 11055493
    Abstract: A conversation system allowing for complex, open-ended conversations with users and method of its operation are disclosed. An example method includes identifying a conversation and a user, encoding the user's input message and finding codes in other texts, generating responses, and selecting an appropriate response, and sending the selected response to the user. The conversation system utilizes various methods, scoring, codes, and the like to find the appropriate responses. Example methods also include steps for sending short sentences representing action, or sending a “thought” or “daydream” to the user that reveals the emotional state of the conversation system with respect to the user, and steps for changing that emotional state. The system avoids reliance on artificial neural networks and keeps track of the context of a conversation by saving conversation codes in a database.
    Type: Grant
    Filed: April 25, 2019
    Date of Patent: July 6, 2021
    Inventor: James Lewis
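    A toy sketch of keeping conversation context as codes in a database and picking a canned response by code; the codes, responses, and encoding rule are invented, and the scoring and emotional-state features described above are omitted:
```python
import sqlite3

RESPONSES = {"GREET": "Hello again!", "FAREWELL": "Goodbye for now."}

def encode(message: str) -> str:
    # Stand-in for the system's message-encoding step.
    return "GREET" if "hello" in message.lower() else "FAREWELL"

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE codes (conversation_id TEXT, code TEXT)")

def reply(conversation_id: str, message: str) -> str:
    code = encode(message)
    db.execute("INSERT INTO codes VALUES (?, ?)", (conversation_id, code))  # save context
    return RESPONSES[code]

print(reply("c1", "hello there"))
```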
  • Patent number: 11049504
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for detecting hotwords using a server. One of the methods includes receiving an audio signal encoding one or more utterances including a first utterance; determining whether at least a portion of the first utterance satisfies a first threshold of being at least a portion of a key phrase; in response to determining that at least the portion of the first utterance satisfies the first threshold of being at least a portion of a key phrase, sending the audio signal to a server system that determines whether the first utterance satisfies a second threshold of being the key phrase, the second threshold being more restrictive than the first threshold; and receiving tagged text data representing the one or more utterances encoded in the audio signal when the server system determines that the first utterance satisfies the second threshold.
    Type: Grant
    Filed: May 27, 2020
    Date of Patent: June 29, 2021
    Assignee: Google LLC
    Inventors: Alexander H. Gruenstein, Petar Aleksic, Johan Schalkwyk, Pedro J. Moreno Mengibar
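    A minimal sketch of the two-threshold idea, with the device and server scores supplied as plain numbers rather than computed from acoustic models; the threshold values are illustrative:
```python
DEVICE_THRESHOLD = 0.4   # permissive first threshold on the device
SERVER_THRESHOLD = 0.8   # more restrictive second threshold on the server

def process(audio: dict):
    if audio["local_score"] < DEVICE_THRESHOLD:
        return None                 # audio never leaves the device
    if audio["server_score"] < SERVER_THRESHOLD:
        return None                 # server rejects the candidate key phrase
    return audio["transcript"]      # tagged text returned to the device

print(process({"local_score": 0.55, "server_score": 0.9,
               "transcript": "ok computer turn on lights"}))
```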
  • Patent number: 11037583
    Abstract: A technique for detecting a music segment in an audio signal is disclosed. A time window is set for each section in an audio signal. A maximum and a statistic of the audio signal within the time window are calculated. A density index is computed for the section using the maximum and the statistic. The density index is a measure of the statistic relative to the maximum. The section is estimated as a music segment based, at least in part, on a condition with respect to the density index.
    Type: Grant
    Filed: August 29, 2018
    Date of Patent: June 15, 2021
    Assignee: International Business Machines Corporation
    Inventors: Masayuki Suzuki, Takashi Fukuda, Toru Nagano
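    A small numpy sketch of a density index, here a statistic of the windowed signal relative to its maximum, used to flag music-like sections; the choice of statistic and the threshold are assumptions, not the patent's values:
```python
import numpy as np

def density_index(window: np.ndarray) -> float:
    """Ratio of a statistic (mean absolute amplitude) to the window maximum."""
    peak = np.max(np.abs(window))
    if peak == 0:
        return 0.0
    return float(np.mean(np.abs(window)) / peak)

def is_music(window: np.ndarray, threshold: float = 0.3) -> bool:
    # Music tends to fill the window more densely than sparse speech bursts;
    # the 0.3 threshold is illustrative only.
    return density_index(window) >= threshold

speech_like = np.concatenate([np.zeros(900), np.ones(100)])
music_like = np.sin(np.linspace(0, 200 * np.pi, 1000))
print(is_music(speech_like), is_music(music_like))
```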
  • Patent number: 11030451
    Abstract: A system for performing one or more steps of a method is disclosed. The method includes receiving a first legal clause and a second legal clause, generating, using a segmentation algorithm, a first hidden Markov chain comprising a plurality of first nodes based on the first legal clause, each of the plurality of first nodes corresponding to an element of the first legal clause, generating, using the segmentation algorithm, a second hidden Markov chain comprising a plurality of second nodes based on the second legal clause, each of the plurality of second nodes corresponding to an element of the second legal clause, comparing each of the plurality of first nodes with each of the plurality of second nodes to identify a difference for each of the plurality of first nodes, and determining, based on the comparison, whether the difference for each of the plurality of first nodes exceeds a predetermined difference threshold.
    Type: Grant
    Filed: December 6, 2019
    Date of Patent: June 8, 2021
    Assignee: CAPITAL ONE SERVICES, LLC
    Inventors: Austin Walters, Jeremy Edward Goodsitt, Fardin Abdi Taghi Abad, Mark Watson, Vincent Pham, Anh Truong, Kenneth Taylor, Reza Farivar
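    A simplified sketch of comparing clause elements node by node; plain comma segmentation and difflib similarity stand in for the hidden-Markov-chain segmentation named in the abstract:
```python
import difflib

def segment(clause: str):
    # Stand-in for the patented segmentation: each comma-separated fragment is a "node".
    return [part.strip() for part in clause.split(",") if part.strip()]

def node_differences(first_clause: str, second_clause: str, threshold: float = 0.4):
    """Flag first-clause nodes whose best match in the second clause differs too much."""
    first_nodes, second_nodes = segment(first_clause), segment(second_clause)
    flagged = []
    for node in first_nodes:
        best = max(difflib.SequenceMatcher(None, node, other).ratio()
                   for other in second_nodes)
        if 1.0 - best > threshold:
            flagged.append(node)
    return flagged

a = "Tenant shall pay rent monthly, with a grace period of five days"
b = "Tenant shall pay rent monthly, and keep the premises insured"
print(node_differences(a, b))
```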
  • Patent number: 11030407
    Abstract: A multilingual named-entity recognition system according to an embodiment includes an acquisition unit configured to acquire an annotated sample of a source language and a sample of a target language, a first generation unit configured to generate an annotated named-entity recognition model of the source language by applying Conditional Random Field sequence labeling to the annotated sample of the source language and obtaining an optimum weight for each annotated named entity of the source language, a calculation unit configured to calculate similarity between the annotated sample of the source language and the sample of the target language, and a second generation unit configured to generate a named-entity recognition model of the target language based on the annotated named-entity recognition model of the source language and the similarity.
    Type: Grant
    Filed: June 22, 2016
    Date of Patent: June 8, 2021
    Assignee: Rakuten, Inc.
    Inventors: Masato Hagiwara, Ayah Zirikly
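    A heavily simplified stand-in for the described transfer: source-language feature weights are scaled by a crude sample-similarity score rather than obtained through CRF sequence labeling; all weights and features are invented:
```python
def similarity(source_sample: str, target_sample: str) -> float:
    # Crude character-set Jaccard similarity between the two samples.
    a, b = set(source_sample.lower()), set(target_sample.lower())
    return len(a & b) / len(a | b)

source_weights = {"capitalized": 1.2, "ends_with_city_suffix": 0.8}  # toy source model

def transfer_model(source_sample: str, target_sample: str) -> dict:
    s = similarity(source_sample, target_sample)
    return {feature: weight * s for feature, weight in source_weights.items()}

print(transfer_model("Barack Obama visited Berlin.", "Obama besuchte Berlin."))
```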
  • Patent number: 11011163
    Abstract: Embodiments of the present disclosure disclose a method and apparatus for recognizing voice. A specific implementation of the method comprises: receiving voice information sent by a user through a terminal, and acquiring simultaneously a user identifier of the user; recognizing the voice information to obtain a first recognized text; determining a word information set stored in association with the user identifier of the user based on the user identifier of the user; and processing the first recognized text based on word information in the determined word information set to obtain a second recognized text, and sending the second recognized text to the terminal. The implementation improves the accuracy of voice recognition and meets a personalized need of a user.
    Type: Grant
    Filed: July 30, 2018
    Date of Patent: May 18, 2021
    Assignee: Baidu Online Network Technology (Beijing) Co., Ltd.
    Inventors: Niandong Du, Yan Xie
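    A toy post-processing sketch that rewrites a first recognized text using a per-user word list; the user vocabulary and fuzzy-match cutoff are assumptions:
```python
import difflib

USER_WORDS = {"user42": ["Xiaomi", "Baidu"]}  # hypothetical stored word list

def personalize(user_id: str, first_text: str) -> str:
    """Replace recognized words with close matches from the user's word list."""
    vocab = USER_WORDS.get(user_id, [])
    out = []
    for w in first_text.split():
        match = difflib.get_close_matches(w, vocab, n=1, cutoff=0.6)
        out.append(match[0] if match else w)
    return " ".join(out)

print(personalize("user42", "please open shaomi app"))
```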
  • Patent number: 11011154
    Abstract: A method of performing speech synthesis, includes encoding character embeddings, using any one or any combination of convolutional neural networks (CNNs) and recurrent neural networks (RNNs), applying a relative-position-aware self attention function to each of the character embeddings and an input mel-scale spectrogram, and encoding the character embeddings to which the relative-position-aware self attention function is applied. The method further includes concatenating the encoded character embeddings and the encoded character embeddings to which the relative-position-aware self attention function is applied, to generate an encoder output, applying a multi-head attention function to the encoder output and the input mel-scale spectrogram to which the relative-position-aware self attention function is applied, and predicting an output mel-scale spectrogram, based on the encoder output and the input mel-scale spectrogram to which the multi-head attention function is applied.
    Type: Grant
    Filed: February 8, 2019
    Date of Patent: May 18, 2021
    Assignee: TENCENT AMERICA LLC
    Inventors: Shan Yang, Heng Lu, Shiyin Kang, Dong Yu
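    A numpy sketch of only the relative-position-aware self-attention step, using identity query/key/value projections and a toy distance-decay bias; it omits the encoder stacks, concatenation, and multi-head attention of the full method:
```python
import numpy as np

def relative_position_bias(n: int, scale: float = 0.1) -> np.ndarray:
    """Toy bias that decays with the distance between sequence positions."""
    idx = np.arange(n)
    return -scale * np.abs(idx[:, None] - idx[None, :])

def self_attention_with_relative_positions(x: np.ndarray) -> np.ndarray:
    # x: (sequence_length, dim); a real model would use learned projections.
    scores = x @ x.T / np.sqrt(x.shape[1]) + relative_position_bias(x.shape[0])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x

embeddings = np.random.default_rng(0).normal(size=(5, 8))   # 5 characters, dim 8
print(self_attention_with_relative_positions(embeddings).shape)  # (5, 8)
```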
  • Patent number: 10997971
    Abstract: Techniques for capturing spoken user inputs while a device is prevented from capturing such spoken user inputs are described. When a first device has a status indicating that it is not beneficial for the first device to perform wakeword detection, a second device (e.g., a vehicle) may perform wakeword detection on behalf of the first device. The second device may be unable to send audio data, representing a spoken user input, to a speech processing system. In such an example, the second device may send the audio data to a third device, which may send the audio data to the speech processing system.
    Type: Grant
    Filed: February 11, 2019
    Date of Patent: May 4, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Andrew Mitchell, Gabor Nagy
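    A sketch of the delegation logic as a routing decision; the capability flags and wakeword test are invented placeholders:
```python
def route_audio(first_device, second_device, third_device, audio):
    if first_device["can_detect_wakeword"]:
        return ("first_device_to_speech_system", audio)
    if not second_device["detects_wakeword"](audio):
        return ("dropped", None)
    if second_device["can_reach_speech_system"]:
        return ("second_device_to_speech_system", audio)
    # Second device cannot reach the speech system, so relay via the third device.
    return ("second_device_via_third_device", audio)

devices = (
    {"can_detect_wakeword": False},
    {"detects_wakeword": lambda a: "alexa" in a, "can_reach_speech_system": False},
    {},
)
print(route_audio(*devices, audio="alexa play music"))
```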
  • Patent number: 10984814
    Abstract: A computer-implemented method according to one embodiment includes creating a clean dictionary, utilizing a clean signal, creating a noisy dictionary, utilizing a first noisy signal, determining a time varying projection, utilizing the clean dictionary and the noisy dictionary, and denoising a second noisy signal, utilizing the time varying projection.
    Type: Grant
    Filed: February 24, 2020
    Date of Patent: April 20, 2021
    Assignee: International Business Machines Corporation
    Inventors: Dimitrios B. Dimitriadis, Samuel Thomas, Colin C. Vaz
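    A rough sketch assuming paired clean/noisy dictionary frames (a simplification of the abstract's separately built dictionaries), with a block-wise least-squares projection standing in for the time-varying projection:
```python
import numpy as np

rng = np.random.default_rng(1)
clean_dict = rng.normal(size=(8, 50))              # 8-dim features, 50 clean frames
noisy_dict = clean_dict + 0.3 * rng.normal(size=clean_dict.shape)  # paired noisy frames

def projection(noisy: np.ndarray, clean: np.ndarray) -> np.ndarray:
    """Least-squares map taking noisy frames toward their clean counterparts."""
    return clean @ np.linalg.pinv(noisy)

def denoise(second_noisy: np.ndarray, frames_per_block: int = 10) -> np.ndarray:
    # "Time varying": a separate projection is fit for each block of frames.
    out = np.empty_like(second_noisy)
    for start in range(0, second_noisy.shape[1], frames_per_block):
        block = slice(start, start + frames_per_block)
        w = projection(noisy_dict[:, block], clean_dict[:, block])
        out[:, block] = w @ second_noisy[:, block]
    return out

second_noisy = clean_dict + 0.3 * rng.normal(size=clean_dict.shape)
print(denoise(second_noisy).shape)
```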
  • Patent number: 10965823
    Abstract: An information processing system includes a first display control unit and a second display control unit. The first display control unit displays, on a display unit, a button on which a setting value of a program is displayed. The second display control unit starts up the program when the button is pressed and, in accordance with the started program, displays, on the display unit, a setting screen that corresponds to the button being pressed.
    Type: Grant
    Filed: June 11, 2019
    Date of Patent: March 30, 2021
    Assignee: Ricoh Company, Ltd.
    Inventors: Makoto Sasaki, Tadashi Sato, Fumiyoshi Kittaka
  • Patent number: 10957314
    Abstract: A system that provides a sharable language interface for implementing automated assistants in new domains and applications. A dialogue assistant that is trained in a first domain can receive a specification in a second domain. The specification can include language structure data such as schemas, recognizers, resolvers, constraints and invariants, actions, language hints, generation template, and other data. The specification data is applied to the automated assistant to enable the automated assistant to provide interactive dialogue with a user in a second domain associated with the received specification. In some instances, portions of the specification may be automatically mapped to portions of the first domain. By having the ability to learn new domains and applications through receipt of objects and properties rather than retooling the interface entirely, the present system is much more efficient at learning how to provide interactive dialogue in new domains than previous systems.
    Type: Grant
    Filed: March 2, 2018
    Date of Patent: March 23, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: David Leo Wright Hall, Daniel Klein, David Ernesto Heekin Burkett, Jordan Rian Cohen, Daniel Lawrence Roth
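    An illustrative second-domain specification expressed as plain data and loaded into a toy assistant; the field names echo the abstract's vocabulary, but the structure and values are invented:
```python
# Hypothetical flight-booking domain specification.
flight_domain_spec = {
    "schemas": {"Flight": ["origin", "destination", "date"]},
    "resolvers": {"origin": lambda text: text.split(" from ")[-1].split()[0]},
    "actions": {"book_flight": lambda slots: f"Booking flight {slots}"},
    "generation_templates": {"confirm": "You want to fly from {origin}?"},
}

class Assistant:
    def __init__(self):
        self.domains = {}

    def load_specification(self, name, spec):
        self.domains[name] = spec  # existing first-domain machinery reuses the spec

    def respond(self, domain, utterance):
        spec = self.domains[domain]
        origin = spec["resolvers"]["origin"](utterance)
        return spec["generation_templates"]["confirm"].format(origin=origin)

assistant = Assistant()
assistant.load_specification("flights", flight_domain_spec)
print(assistant.respond("flights", "I need a ticket from Boston to Denver"))
```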
  • Patent number: 10950219
    Abstract: Systems and methods for selecting a virtual assistant are provided. An example system may include at least one memory device storing instructions and at least one processor configured to execute the instructions to perform operations that may include receiving a request from a user for a response, and identifying characteristics of the user based on the user request. The operations may also include determining a type of the user request and based on the determined type of the user request and the identified user characteristics, configuring a virtual assistant to interact with the user through at least one of synthesized speech or visual signals. The operations may also include providing the virtual assistant to interact with the user.
    Type: Grant
    Filed: November 18, 2019
    Date of Patent: March 16, 2021
    Assignee: Capital One Services, LLC
    Inventors: Jeffrey Rule, Kaylyn Gibilterra, Abdelkader Benkreira, Michael Mossoba
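    A small sketch of configuring the assistant from the request type and user characteristics; the categories, heuristics, and personas are invented:
```python
def classify_request(text: str) -> str:
    return "dispute" if "dispute" in text.lower() else "general"

def identify_characteristics(text: str) -> dict:
    return {"prefers_voice": text.endswith("?")}  # toy heuristic only

def configure_assistant(text: str) -> dict:
    request_type = classify_request(text)
    traits = identify_characteristics(text)
    return {
        "persona": "specialist" if request_type == "dispute" else "generalist",
        "channel": "synthesized_speech" if traits["prefers_voice"] else "visual",
    }

print(configure_assistant("How do I dispute a charge?"))
```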
  • Patent number: 10950227
    Abstract: According to one embodiment, a sound processing apparatus extracts a feature of first speech uttered outside an objective area from first speech obtained at positions different from each other in a space of the objective area and a place outside the objective area. The apparatus creates, by learning, a determination model configured to determine whether an utterance position of second speech in the space is outside the objective area based at least in part on the feature uttered outside the objective area. The apparatus eliminates a portion of the second speech uttered outside the objective area from the second speech obtained by a second microphone based at least in part on the feature and the model. The apparatus detects and outputs remaining speech from the second speech.
    Type: Grant
    Filed: February 26, 2018
    Date of Patent: March 16, 2021
    Assignee: Kabushiki Kaisha Toshiba
    Inventor: Takehiko Kagoshima
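    A toy two-microphone sketch in which a learned threshold on a level-difference feature stands in for the determination model, and frames judged to come from outside the objective area are zeroed; all values are invented:
```python
import numpy as np

def frame_feature(inside_mic: np.ndarray, outside_mic: np.ndarray) -> np.ndarray:
    # Level difference between the objective-area microphone and the outside microphone.
    return np.abs(inside_mic) - np.abs(outside_mic)

def learn_threshold(features: np.ndarray, labels_outside: np.ndarray) -> float:
    # Midpoint between the mean feature of outside-area and inside-area training frames.
    return (features[labels_outside].mean() + features[~labels_outside].mean()) / 2

def eliminate_outside_speech(inside_mic, outside_mic, threshold):
    keep = frame_feature(inside_mic, outside_mic) > threshold
    return inside_mic * keep

train_features = np.array([-0.8, -0.6, 0.7, 0.9])        # two outside, two inside frames
train_labels_outside = np.array([True, True, False, False])
threshold = learn_threshold(train_features, train_labels_outside)
print(eliminate_outside_speech(np.array([0.5, 0.4, 0.6]),
                               np.array([0.9, 0.1, 0.05]), threshold))
```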
  • Patent number: 10943069
    Abstract: Artificial intelligence (AI) technology can be used in combination with composable communication goal statements to facilitate a user's ability to quickly structure story outlines in a manner usable by an NLG narrative generation system without any need for the user to directly author computer code. Narrative analytics that are linked to communication goal statements can employ a conditional outcome framework that allows the content and structure of resulting narratives to intelligently adapt as a function of the nature of the data under consideration. This AI technology permits NLG systems to determine the appropriate content for inclusion in a narrative story about a data set in a manner that will satisfy a desired communication goal.
    Type: Grant
    Filed: July 27, 2018
    Date of Patent: March 9, 2021
    Assignee: NARRATIVE SCIENCE INC.
    Inventors: Andrew R. Paley, Nathan D. Nichols, Matthew L. Trahan, Maia Lewis Meza, Michael Tien Thinh Pham, Charlie M. Truong
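    A minimal sketch of a communication goal whose narrative text adapts to conditional outcomes in the data; the goal, thresholds, and wording are invented and no NLG library is used:
```python
def characterize_change(metric: str, data: dict) -> str:
    """Communication goal: 'Present the change in <metric>' with outcome-dependent text."""
    start, end = data["start"], data["end"]
    change = (end - start) / start * 100
    if abs(change) < 1:
        return f"{metric} was essentially flat at {end}."
    direction = "rose" if change > 0 else "fell"
    return f"{metric} {direction} {abs(change):.1f}% from {start} to {end}."

print(characterize_change("Revenue", {"start": 100.0, "end": 112.0}))
print(characterize_change("Revenue", {"start": 100.0, "end": 100.4}))
```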
  • Patent number: 10930280
    Abstract: Disclosed is a system for providing a toolkit for an agent developer. A system for providing a toolkit for an agent developer according to an embodiment of the present invention includes: an interface unit that obtains an utterance input by a user and outputs the utterance; and a support unit that determines intent of the utterance input by the user when the utterance is received through the interface unit, and provides another utterance or response corresponding to the determined intent through the interface unit.
    Type: Grant
    Filed: November 20, 2018
    Date of Patent: February 23, 2021
    Assignee: LG ELECTRONICS INC.
    Inventors: Seulki Jung, Bongjun Choi
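    A toy sketch of the support unit's intent determination via word overlap with example utterances; the intents, example utterances, and responses are invented:
```python
INTENTS = {
    "order_status": {
        "utterances": ["where is my order", "track my package"],
        "response": "Your order is on the way.",
    },
}

def determine_intent(utterance: str):
    """Return the intent whose example utterances share the most words with the input."""
    words = set(utterance.lower().split())
    best, best_overlap = None, 0
    for name, data in INTENTS.items():
        for example in data["utterances"]:
            overlap = len(words & set(example.split()))
            if overlap > best_overlap:
                best, best_overlap = name, overlap
    return best

intent = determine_intent("can you track my package please")
print(intent, INTENTS[intent]["response"] if intent else None)
```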