Patents Examined by Jesse S Pullias
  • Patent number: 12039286
    Abstract: Techniques are disclosed for training and/or utilizing an automatic post-editing model in correcting translation error(s) introduced by a neural machine translation model. The automatic post-editing model can be trained using automatically generated training instances. A training instance is automatically generated by processing text in a first language using a neural machine translation model to generate text in a second language. The text in the second language is processed using a neural machine translation model to generate training text in the first language. A training instance can include the text in the first language as well as the training text in the first language.
    Type: Grant
    Filed: March 21, 2022
    Date of Patent: July 16, 2024
    Assignee: GOOGLE LLC
    Inventors: Markus Freitag, Isaac Caswell, Howard Scott Roy
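The round-trip generation of training instances described in the abstract can be sketched as follows. The `translate()` stub and its toy phrase table are illustrative stand-ins for a real neural machine translation model, which the abstract does not specify:

```python
# Sketch of round-trip training-instance generation for an automatic
# post-editing model. translate() is a hypothetical placeholder for a
# trained NMT model, not part of the patented system.

def translate(text: str, src: str, tgt: str) -> str:
    # Toy lookup table standing in for a real NMT call.
    toy_table = {
        ("en", "de", "the house is small"): "das Haus ist klein",
        ("de", "en", "das Haus ist klein"): "the house is little",
    }
    return toy_table.get((src, tgt, text), text)

def make_training_instance(source_text: str, src_lang: str, pivot_lang: str):
    """Round-trip the source text through the pivot language.

    The pair (round_trip, source_text) trains the post-editor to map
    possibly degraded MT output back toward clean text."""
    pivot = translate(source_text, src_lang, pivot_lang)
    round_trip = translate(pivot, pivot_lang, src_lang)
    return {"input": round_trip, "target": source_text}

instance = make_training_instance("the house is small", "en", "de")
```

Note how the round trip introduces a plausible MT artifact ("little" for "small"), which is exactly the kind of error the post-editing model learns to correct.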
  • Patent number: 12039986
    Abstract: FIG. 1 illustrates a decoder for decoding a current frame to reconstruct an audio signal according to an embodiment. The audio signal is encoded within the current frame. The current frame includes a current bitstream payload. The current bitstream payload includes a plurality of payload bits. The plurality of payload bits encodes a plurality of spectral lines of a spectrum of the audio signal. Each of the payload bits exhibits a position within the current bitstream payload. The decoder includes a decoding module and an output interface. The decoding module is configured to reconstruct the audio signal. The output interface is configured to output the audio signal.
    Type: Grant
    Filed: August 4, 2021
    Date of Patent: July 16, 2024
    Assignee: Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.
    Inventors: Adrian Tomasek, Ralph Sperschneider, Jan Büthe, Conrad Benndorf, Martin Dietz, Markus Schnell, Maximilian Schlegel
  • Patent number: 12033072
    Abstract: A computer implemented method includes building a Positive Knowledge Base with directive words, designated verbs and designated objects. A Negative Knowledge Base with designated phrases and designated legal terms is built. Tasks and phrases from the Positive Knowledge Base and the Negative Knowledge Base are built. Regulations are received. Phrases from the regulations are weighted against the Positive Knowledge Base and the Negative Knowledge Base to isolate positive Maintenance Compliances. The positive Maintenance Compliances are matched to tasks to derive ranked Maintenance Compliances. The ranked Maintenance Compliances are supplied.
    Type: Grant
    Filed: December 14, 2020
    Date of Patent: July 9, 2024
    Inventors: Daniel Cunningham, Baron R. K. Von Wolfshield
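The weighting of regulation phrases against the two knowledge bases can be sketched as below. The word lists and the simple count-difference scoring rule are assumptions for illustration, not the patented method:

```python
# Minimal sketch of scoring regulation phrases against a Positive and a
# Negative Knowledge Base to isolate positive Maintenance Compliances.
# Word lists and the scoring rule are illustrative assumptions.

POSITIVE_KB = {"inspect", "replace", "lubricate", "torque", "check"}
NEGATIVE_KB = {"liability", "warranty", "hereinafter", "pursuant"}

def score_phrase(phrase: str) -> int:
    words = phrase.lower().split()
    pos = sum(w in POSITIVE_KB for w in words)
    neg = sum(w in NEGATIVE_KB for w in words)
    return pos - neg

def isolate_compliances(phrases):
    """Keep phrases that weigh positive, ranked by descending score."""
    scored = [(score_phrase(p), p) for p in phrases]
    return [p for s, p in sorted(scored, reverse=True) if s > 0]

ranked = isolate_compliances([
    "inspect and lubricate the landing gear",
    "warranty pursuant to section 4",
    "check torque on all fasteners",
])
```

Phrases dominated by legal boilerplate score negative and drop out; directive maintenance language survives and is ranked.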
  • Patent number: 12026469
    Abstract: Aspects of the disclosure relate to detecting random and/or algorithmically-generated character sequences in domain names. A computing platform may train a machine learning model based on a set of semantically-meaningful words. Subsequently, the computing platform may receive a seed string and a set of domains to be analyzed in connection with the seed string. Based on the machine learning model, the computing platform may apply a classification algorithm to the seed string and the set of domains, where applying the classification algorithm to the seed string and the set of domains produces a classification result. Thereafter, the computing platform may store the classification result.
    Type: Grant
    Filed: November 18, 2021
    Date of Patent: July 2, 2024
    Assignee: Proofpoint, Inc.
    Inventors: Hung-Jen Chang, Gaurav Mitesh Dalal, Ali Mesdaq
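One simple way to realize the idea of a model "trained on semantically-meaningful words" is a character-bigram likelihood score: labels whose bigrams rarely occur in meaningful words look algorithmically generated. The word list, smoothing, and threshold below are toy assumptions, not the patent's actual model:

```python
# Illustrative sketch: flag algorithmically-generated domain labels by
# character-bigram statistics learned from semantically meaningful words.

from collections import Counter
import math

MEANINGFUL = ["secure", "login", "account", "update", "service",
              "support", "office", "network", "online", "payment"]

def bigrams(s):
    return [s[i:i + 2] for i in range(len(s) - 1)]

counts = Counter(bg for w in MEANINGFUL for bg in bigrams(w))
total = sum(counts.values())

def avg_log_likelihood(label: str) -> float:
    """Mean log-probability of the label's bigrams (add-one smoothed)."""
    bgs = bigrams(label.lower())
    return sum(math.log((counts[b] + 1) / (total + 26 * 26))
               for b in bgs) / max(len(bgs), 1)

def looks_random(label: str, threshold: float = -6.0) -> bool:
    return avg_log_likelihood(label) < threshold
```

Random strings like "xkqzvwpj" hit only unseen bigrams and fall below the threshold, while dictionary-like labels such as "account" score above it.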
  • Patent number: 12027159
    Abstract: Embodiments disclosed are directed to a computing system that performs steps to automatically generate fine-grained call reasons from customer service call transcripts. The computing system extracts, using a natural language processing (NLP) technique, a set of events from a set of text strings of speaker turns. The computing system then identifies a set of clusters of events based on the set of events and labels each cluster of events in the set of clusters of events to generate a set of labeled clusters of events. Subsequently, the computing system assigns each event in the set of events to a respective labeled cluster of events in the set of labeled clusters of events.
    Type: Grant
    Filed: September 8, 2021
    Date of Patent: July 2, 2024
    Assignee: Capital One Services, LLC
    Inventors: Adam Faulkner, Gayle McElvain, John Qui
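The extract-cluster-label-assign pipeline in the abstract can be sketched in miniature. The keyword-based "extraction" and exact-match clustering below are deliberate simplifications standing in for the NLP and clustering techniques the abstract leaves unspecified:

```python
# Toy sketch: extract event strings from speaker turns, cluster them,
# and label each cluster with its shared event string.

from collections import defaultdict

EVENT_KEYWORDS = {"card": "card issue", "payment": "payment issue",
                  "password": "account access"}

def extract_events(turns):
    events = []
    for turn in turns:
        for kw, event in EVENT_KEYWORDS.items():
            if kw in turn.lower():
                events.append(event)
    return events

def cluster_and_label(events):
    """Group identical events; the cluster label is the event string."""
    clusters = defaultdict(list)
    for e in events:
        clusters[e].append(e)
    return dict(clusters)

turns = ["My card was declined", "I also forgot my password",
         "The card never arrived"]
labeled = cluster_and_label(extract_events(turns))
```

In a real system the events would come from an NLP extractor and the clusters from embedding similarity, but the assignment of each event to a labeled cluster follows the same shape.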
  • Patent number: 12020690
    Abstract: Devices and techniques are generally described for adaptive targeting for voice notifications. In various examples, first data representing a predicted likelihood that a first user will interact with first content within a predefined amount of time may be received. A first set of features including features related to past voice notifications sent to the first user may be determined. A second set of features including features related to interaction with the first content when past voice notifications were sent may be received. A first machine learning model may generate a prediction that a voice notification will increase a probability that the first user interacts with the first content based on the first data, the first set of features, and the second set of features. Audio data comprising the voice notification may be sent to a first device associated with the first content.
    Type: Grant
    Filed: September 29, 2021
    Date of Patent: June 25, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Iftah Gamzu, Marina Haikin, Nissim Halabi, Yossi Shasha, Yochai Zvik, Moshe Peretz
  • Patent number: 12019986
    Abstract: Acquisition of an utterance pair for expanding a set of utterance pairs for outputting an output utterance in response to receiving a given utterance is described. A keyword extraction unit is configured to compare a degree of characteristic of a word in expansion source utterance pair data and a degree of characteristic of a word in the given utterance data. The expansion source utterance pair data represents a set of expansion source utterance pairs including an input utterance and an output utterance for the input utterance. The present technology includes extracting, based on a comparison result, a keyword list including a keyword that is characteristic of the expansion source utterance pair data. An utterance pair extraction unit is configured to extract, based on the keyword list, an utterance pair from a set of given utterance pairs as an addition for expanding the set of utterance pairs.
    Type: Grant
    Filed: April 10, 2020
    Date of Patent: June 25, 2024
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Ko Mitsuda, Ryuichiro Higashinaka, Taichi Katayama, Junji Tomita
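The comparison of "degree of characteristic" between the two utterance collections can be sketched as a relative-frequency ratio: a word is characteristic of the expansion-source pairs when it occurs proportionally more often there than in the given utterances. The ratio measure and smoothing are assumed stand-ins for the patent's unspecified metric:

```python
# Sketch of keyword extraction by comparing per-corpus word frequencies.

from collections import Counter

def keyword_list(source_utterances, given_utterances, ratio=2.0):
    src = Counter(w for u in source_utterances for w in u.lower().split())
    giv = Counter(w for u in given_utterances for w in u.lower().split())
    n_src, n_giv = sum(src.values()), sum(giv.values())
    keywords = []
    for w, c in src.items():
        p_src = c / n_src
        p_giv = (giv[w] + 1) / (n_giv + 1)   # add-one smoothing
        if p_src / p_giv >= ratio:
            keywords.append(w)
    return keywords

kws = keyword_list(
    ["do you like ramen", "ramen is great"],
    ["do you like movies", "movies are fun", "do you like music"],
)
```

Shared function words ("do", "you", "like") score near parity and are excluded; only the topical word survives into the keyword list used to pull matching utterance pairs.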
  • Patent number: 12014725
    Abstract: A method of training a language model for rare-word speech recognition includes obtaining a set of training text samples, and obtaining a set of training utterances used for training a speech recognition model. Each training utterance in the set of training utterances includes audio data corresponding to an utterance and a corresponding transcription of the utterance. The method also includes applying rare word filtering on the set of training text samples to identify a subset of rare-word training text samples that include words that do not appear in the transcriptions from the set of training utterances or appear in the transcriptions from the set of training utterances less than a threshold number of times. The method further includes training the language model on the transcriptions from the set of training utterances and the identified subset of rare-word training text samples.
    Type: Grant
    Filed: December 13, 2021
    Date of Patent: June 18, 2024
    Assignee: Google LLC
    Inventors: Ronny Huang, Tara N. Sainath
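The rare-word filtering step is directly codeable: keep a text sample if any of its words is absent from the speech-training transcriptions or appears there fewer than a threshold number of times. Tokenization by whitespace and the threshold value are assumptions:

```python
# Minimal sketch of rare-word filtering over training text samples.

from collections import Counter

def filter_rare_word_samples(text_samples, transcriptions, threshold=2):
    counts = Counter(w for t in transcriptions for w in t.lower().split())
    keep = []
    for sample in text_samples:
        if any(counts[w] < threshold for w in sample.lower().split()):
            keep.append(sample)
    return keep

transcriptions = ["call the office", "call the house",
                  "play music", "play more music"]
samples = ["call the ombudsman",   # "ombudsman" never transcribed -> rare
           "play music"]           # every word is frequent -> filtered out
rare = filter_rare_word_samples(samples, transcriptions)
```

The surviving samples are exactly those that teach the language model words the speech training set cannot.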
  • Patent number: 12001788
    Abstract: Disclosed is a solution for diagnosing problems from logs used in an application development environment. A random sample of log statements is collected. The log statements can be completely unstructured and/or do not conform to any natural language. The log statements are tagged with predefined classifications. A natural language processing (NLP) classifier model is trained utilizing the log statements tagged with the predefined classification. New log statements can be classified into the plurality of predefined classifications utilizing the trained NLP classifier model. From the log statements thus classified, statements having a problem classification can be identified and presented through a dashboard running in a browser. Outputs from the trained NLP classifier model can be provided as input to another trained model for automatically and quickly identifying a type of problem associated with the statements, eliminating a need to manually sift through tens or hundreds of thousands of lines of logs.
    Type: Grant
    Filed: December 22, 2022
    Date of Patent: June 4, 2024
    Assignee: OPEN TEXT CORPORATION
    Inventors: Ankur Sharma, Ravikanth Somayaji
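The tag-train-classify-surface flow can be sketched with a deliberately simple bag-of-words scorer standing in for the NLP classifier model; the tagged sample statements and class names are illustrative:

```python
# Sketch of log triage: fit a tiny bag-of-words scorer on tagged log
# statements, classify new statements, surface those labeled "problem".

from collections import Counter, defaultdict

TAGGED = [("connection refused by host", "problem"),
          ("request completed in 12ms", "normal"),
          ("fatal error in worker pool", "problem"),
          ("cache warmed successfully", "normal")]

word_counts = defaultdict(Counter)
for text, label in TAGGED:
    word_counts[label].update(text.split())

def classify(statement: str) -> str:
    scores = {label: sum(cnt[w] for w in statement.split())
              for label, cnt in word_counts.items()}
    return max(scores, key=scores.get)

def problems(statements):
    return [s for s in statements if classify(s) == "problem"]

found = problems(["worker pool error detected", "request completed fine"])
```

The statements routed to `problems()` are the ones a dashboard would present, sparing a human from scanning the full log volume.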
  • Patent number: 11996083
    Abstract: A computer-implemented method is provided of using a machine learning model for disentanglement of prosody in spoken natural language. The method includes encoding, by a computing device, the spoken natural language to produce content code. The method further includes resampling, by the computing device without text transcriptions, the content code to obscure the prosody by applying an unsupervised technique to the machine learning model to generate prosody-obscured content code. The method additionally includes decoding, by the computing device, the prosody-obscured content code to synthesize speech indirectly based upon the content code.
    Type: Grant
    Filed: June 3, 2021
    Date of Patent: May 28, 2024
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kaizhi Qian, Yang Zhang, Shiyu Chang, Jinjun Xiong, Chuang Gan, David Cox
  • Patent number: 11996116
    Abstract: Examples relate to on-device non-semantic representation fine-tuning for speech classification. A computing system may obtain audio data having a speech portion and train a neural network to learn a non-semantic speech representation based on the speech portion of the audio data. The computing system may evaluate performance of the non-semantic speech representation based on a set of benchmark tasks corresponding to a speech domain and perform a fine-tuning process on the non-semantic speech representation based on one or more downstream tasks. The computing system may further generate a model based on the non-semantic representation and provide the model to a mobile computing device. The model is configured to operate locally on the mobile computing device.
    Type: Grant
    Filed: August 24, 2020
    Date of Patent: May 28, 2024
    Assignee: Google LLC
    Inventors: Joel Shor, Ronnie Maor, Oran Lang, Omry Tuval, Marco Tagliasacchi, Ira Shavitt, Felix de Chaumont Quitry, Dotan Emanuel, Aren Jansen
  • Patent number: 11989659
    Abstract: Artificial intelligence methods and systems for triggering the generation of narratives are disclosed. Specific embodiments relate to real-time evaluation and automated generation of narrative stories based on received data. For example, data can be tested against data representative of a plurality of story angles to determine whether a narrative story incorporating one or more such story angles is to be automatically generated.
    Type: Grant
    Filed: December 5, 2022
    Date of Patent: May 21, 2024
    Assignee: Salesforce, Inc.
    Inventors: Nathan Nichols, Michael Justin Smathers, Lawrence Birnbaum, Kristian Hammond, Lawrence E. Adams
  • Patent number: 11990146
    Abstract: An apparatus for providing a processed audio signal representation on the basis of an input audio signal representation is configured to apply an un-windowing in order to provide the processed audio signal representation on the basis of the input audio signal representation. The apparatus is configured to adapt the un-windowing in dependence on one or more signal characteristics and/or in dependence on one or more processing parameters used for a provision of the input audio signal representation.
    Type: Grant
    Filed: May 4, 2021
    Date of Patent: May 21, 2024
    Assignee: Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
    Inventors: Stefan Bayer, Pallavi Maben, Emmanuel Ravelli, Guillaume Fuchs, Eleni Fotopoulou, Markus Multrus
  • Patent number: 11972217
    Abstract: System and method for displaying a user interface of an evaluation system configured to evaluate predicted answers generated by a machine learning system. For example, the method includes receiving textual data and a predicted answer to a question associated with a text object. The text object includes a structured data field of the textual data. The predicted answer includes a confidence level. The confidence level is determined by a machine learning system. In response to determining the confidence level being larger than or equal to a predetermined confidence threshold, the predicted answer and a reference are stored in a storage for retrieval and display. The reference indicates a location of the text object in the textual data. In response to determining the confidence level being smaller than the predetermined confidence threshold, the question and the text object associated with the question are displayed.
    Type: Grant
    Filed: November 1, 2022
    Date of Patent: April 30, 2024
    Assignee: RELX INC.
    Inventors: Douglas C. Hebenthal, Cesare John Saretto, James Tracy, Richard Clinkenbeard, Christopher Liu
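The confidence gate at the heart of the abstract reduces to a simple branch: at or above the threshold, store the answer with its reference; below it, route the question and text object to display for review. Field names and the threshold value are assumptions:

```python
# Small sketch of the confidence-threshold routing for predicted answers.

CONFIDENCE_THRESHOLD = 0.8
stored, for_review = [], []

def route_prediction(question, predicted_answer, confidence, reference):
    if confidence >= CONFIDENCE_THRESHOLD:
        # High confidence: store answer plus a reference locating the
        # text object in the textual data.
        stored.append({"answer": predicted_answer, "reference": reference})
    else:
        # Low confidence: hand the question and text object to a human.
        for_review.append({"question": question, "reference": reference})

route_prediction("Effective date?", "2024-01-01", 0.93, "section 2, field 4")
route_prediction("Governing law?", "Delaware", 0.55, "section 9, field 1")
```
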
  • Patent number: 11972757
    Abstract: Conversational image editing and enhancement techniques are described. For example, an indication of a digital image is received from a user. Aesthetic attribute scores for multiple aesthetic attributes of the image are generated. A computing device then conducts a natural language conversation with the user to edit the digital image. The computing device receives a series of inputs from the user to refine the digital image as the natural language conversation progresses. The computing device generates natural language suggestions to edit the digital image based on the aesthetic attribute scores as part of the natural language conversation. The computing device provides feedback to the user that includes edits to the digital image based on the series of inputs. The computing device also includes as feedback natural language outputs indicating options for additional edits to the digital image based on the series of inputs and the previous edits to the digital image.
    Type: Grant
    Filed: January 3, 2023
    Date of Patent: April 30, 2024
    Assignee: Adobe Inc.
    Inventors: Frieder Ludwig Anton Ganz, Walter Wei-Tuh Chang
  • Patent number: 11966708
    Abstract: A method, computer program product, and computer system for translating, using a beam search, a source sentence in a source language into a target sentence in a target language by an iterative process.
    Type: Grant
    Filed: October 1, 2021
    Date of Patent: April 23, 2024
    Assignee: International Business Machines Corporation
    Inventors: Sathya Santhar, Sridevi Kannan, Suvedhahari Velusamy, Kothagorla Lakshmana Rao
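Beam search itself is the named mechanism, and its iterative skeleton can be sketched directly. The toy `next_tokens()` distribution is an assumption; a real system would score continuations with the translation model:

```python
# Hedged sketch of beam-search decoding with a toy next-token table.

import math

def next_tokens(prefix):
    # Placeholder: (token, probability) continuations for each prefix.
    table = {
        (): [("the", 0.6), ("a", 0.4)],
        ("the",): [("cat", 0.7), ("<eos>", 0.3)],
        ("a",): [("cat", 0.9), ("<eos>", 0.1)],
        ("the", "cat"): [("<eos>", 1.0)],
        ("a", "cat"): [("<eos>", 1.0)],
    }
    return table.get(prefix, [("<eos>", 1.0)])

def beam_search(beam_width=2, max_len=4):
    beams = [((), 0.0)]            # (prefix, cumulative log-probability)
    finished = []
    for _ in range(max_len):
        candidates = []
        for prefix, lp in beams:
            for tok, p in next_tokens(prefix):
                cand = (prefix + (tok,), lp + math.log(p))
                (finished if tok == "<eos>" else candidates).append(cand)
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
        if not beams:
            break
    best = max(finished, key=lambda c: c[1])
    return " ".join(best[0][:-1])   # drop the <eos> marker

translation = beam_search()
```

Each iteration expands every live hypothesis, keeps only the `beam_width` best by cumulative log-probability, and moves completed hypotheses to `finished`; the highest-scoring finished hypothesis is the target sentence.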
  • Patent number: 11961534
    Abstract: A voice operation apparatus and a control method thereof that can further improve accuracy of talker identification are provided. Provided is a voice operation apparatus including a talker identification unit that identifies a user as a talker of a voice operation based on voice information and a voice quality model of a user registered in advance, and a voice operation recognition unit that performs voice recognition on the voice information and generates voice operation information, wherein the talker identification unit identifies a talker by using, as auxiliary information, at least one of the voice operation information, position information on a voice operation apparatus, direction information on a talker, distance information on a talker, and time information.
    Type: Grant
    Filed: July 20, 2018
    Date of Patent: April 16, 2024
    Assignee: NEC CORPORATION
    Inventors: Noritada Yasumoro, Masanori Mizoguchi
  • Patent number: 11961524
    Abstract: A system for extracting speaker information in an ATC transcription and displaying the speaker information on a graphical display unit is provided. The system is configured to: segment a stream of audio received from an ATC and other aircraft into a plurality of chunks; determine, for each chunk, if the speaker is enrolled in an enrolled speaker database; when the speaker is enrolled in the enrolled speaker database, decode the chunk using a speaker-dependent automatic speech recognition (ASR) model and tag the chunk with a permanent name for the speaker; when the speaker is not enrolled in the enrolled speaker database, assign a temporary name for the speaker, tag the chunk with the temporary name, and decode the chunk using a speaker independent speech recognition model; format the decoded chunk as text; and signal the graphical display unit to display the formatted text along with an identity for the speaker.
    Type: Grant
    Filed: July 16, 2021
    Date of Patent: April 16, 2024
    Assignee: HONEYWELL INTERNATIONAL INC.
    Inventors: Jitender Kumar Agarwal, Mohan M. Thippeswamy
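The per-chunk routing logic in the abstract is a clean two-way branch: enrolled speakers get the speaker-dependent ASR path and their permanent name; unknown speakers get a temporary name and the speaker-independent model. The names, signature keys, and decode stubs below are illustrative assumptions:

```python
# Sketch of enrolled-vs-unknown speaker routing for ATC audio chunks.

import itertools

ENROLLED = {"voice-sig-042": "Tower Controller Smith"}
_temp_ids = itertools.count(1)

def decode_chunk(chunk_signature: str, audio: bytes):
    if chunk_signature in ENROLLED:
        name = ENROLLED[chunk_signature]                 # permanent name
        text = f"[SD-ASR decode of {len(audio)} bytes]"  # speaker-dependent stub
    else:
        name = f"Speaker-{next(_temp_ids)}"              # temporary name
        text = f"[SI-ASR decode of {len(audio)} bytes]"  # speaker-independent stub
    return {"speaker": name, "text": text}

known = decode_chunk("voice-sig-042", b"\x00" * 8)
unknown = decode_chunk("voice-sig-999", b"\x00" * 4)
```

The returned dictionary holds exactly what the display unit needs: formatted text tagged with a speaker identity, permanent or temporary.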
  • Patent number: 11960852
    Abstract: A direct speech-to-speech translation (S2ST) model includes an encoder configured to receive an input speech representation that corresponds to an utterance spoken by a source speaker in a first language and encode the input speech representation into a hidden feature representation. The S2ST model also includes an attention module configured to generate a context vector that attends to the hidden feature representation encoded by the encoder. The S2ST model also includes a decoder configured to receive the context vector generated by the attention module and predict a phoneme representation that corresponds to a translation of the utterance in a second different language. The S2ST model also includes a synthesizer configured to receive the context vector and the phoneme representation and generate a translated synthesized speech representation that corresponds to a translation of the utterance spoken in the second different language.
    Type: Grant
    Filed: December 15, 2021
    Date of Patent: April 16, 2024
    Assignee: Google LLC
    Inventors: Ye Jia, Michelle Tadmor Ramanovich, Tal Remez, Roi Pomerantz
  • Patent number: 11954438
    Abstract: Disclosed embodiments provide techniques to identify the in-context meanings of natural language in order to decipher the evolution or creation of new vocabulary words and create a more holistic user experience. Thus, disclosed embodiments improve the technical field of digital content comprehension. In embodiments, machine learning is used to identify sentiment of text, perform entity detection to determine topics of text, and/or perform image analysis on images used in digital content. Words, symbols, and images that are determined to be potentially unfamiliar to a user are augmented with a supplemental definition indication. Invoking the supplemental definition indication enables rendering of additional definition information for the user. This serves to accelerate understanding of digital content such as webpages and social media posts.
    Type: Grant
    Filed: June 15, 2021
    Date of Patent: April 9, 2024
    Assignee: International Business Machines Corporation
    Inventors: Thomas Jefferson Sandridge, Dasson Tan, Emma Alexandra Vert, Matthew Digman, Jessica L. Zhao