Patents Examined by Michael N. Opsasnick
-
Patent number: 11568155
Abstract: A method and system are disclosed for translating a source phrase in a first language into a second language. The method is executable by a device configured to access an index comprising a set of source sentences in the first language and a set of target sentences in the second language, each of the target sentences corresponding to a translation of a given source sentence. The method comprises: acquiring the source phrase; generating, by a translation algorithm, one or more target phrases, each of the one or more target phrases having a different semantic meaning within the second language; retrieving, from the index, a respective target sentence for each of the one or more target phrases, the respective target sentence comprising one of the one or more target phrases; and selecting each of the one or more target phrases and the respective target sentences for display.
Type: Grant
Filed: May 27, 2020
Date of Patent: January 31, 2023
Assignee: YANDEX EUROPE AG
Inventors: Anton Aleksandrovich Dvorkovich, Ekaterina Vladimirovna Enikeeva
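The retrieval step described in this abstract can be sketched with a toy in-memory index. The index layout, the function names (`build_index`, `retrieve_examples`), and the sentence pairs are illustrative assumptions, not taken from the patent.

```python
# Sketch of the retrieval step: given candidate target phrases produced by a
# translation algorithm, pull one indexed target sentence containing each phrase.

def build_index(pairs):
    """Store (source sentence, target sentence) pairs as a simple list index."""
    return list(pairs)

def retrieve_examples(index, target_phrases):
    """For each candidate translation, find one target sentence containing it."""
    examples = {}
    for phrase in target_phrases:
        for source, target in index:
            if phrase in target:
                examples[phrase] = target
                break
    return examples

index = build_index([
    ("Il fait froid.", "It is cold."),
    ("La glace est froide.", "The ice is cold."),
    ("Il a un rhume.", "He has a cold."),
])
# Two candidate target phrases with different semantic meanings of the source word.
examples = retrieve_examples(index, ["cold", "a cold"])
```

Each candidate phrase is then shown to the user alongside a full sentence that uses it, which is the display step the abstract describes.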
-
Patent number: 11568148
Abstract: Artificial intelligence (AI) technology can be used in combination with composable communication goal statements to facilitate a user's ability to quickly structure story outlines using “explanation” communication goals in a manner usable by an NLG narrative generation system without any need for the user to directly author computer code. This AI technology permits NLG systems to determine the appropriate content for inclusion in a narrative story about a data set in a manner that will satisfy a desired explanation communication goal such that the narratives will express various ideas that are deemed relevant to a given explanation communication goal.
Type: Grant
Filed: November 7, 2018
Date of Patent: January 31, 2023
Assignee: Narrative Science Inc.
Inventors: Nathan D. Nichols, Andrew R. Paley, Maia Lewis Meza, Santiago Santana
-
Patent number: 11568153
Abstract: A system includes a narrative repository which stores a plurality of narratives and, for each narrative, a corresponding outcome. A narrative evaluator receives the plurality of narratives and the outcome for each narrative. For each received narrative, a subset of the narrative to retain is determined based on rules. For each determined subset, an entropy matrix is determined which includes, for each word in the subset, a measure associated with whether the word is expected to appear in a sentence with another word in the subset. For each entropy matrix, a distance matrix is determined which includes, for each word in the subset, a numerical representation of a difference in meaning between the word and another word. Using one or more distance matrices, a first threshold distance is determined for a first word of the subset. The first word and first threshold distance are stored as a first word-threshold pair associated with the first outcome.
Type: Grant
Filed: March 5, 2020
Date of Patent: January 31, 2023
Assignee: Bank of America Corporation
Inventors: Justin Ryan Horowitz, Manohar Reddy Pilli, Xiaolei Wei
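The distance-matrix step can be illustrated with a minimal sketch: pairwise "difference in meaning" is approximated here by cosine distance between toy word vectors, and the threshold rule (mean distance to the other words) is an assumption; the patent does not specify either choice.

```python
import math

def cosine_distance(u, v):
    """1 - cosine similarity: 0 for identical directions, 1 for orthogonal ones."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def distance_matrix(embeddings):
    """For each word, a numerical difference in meaning to every other word."""
    words = list(embeddings)
    return {w1: {w2: cosine_distance(embeddings[w1], embeddings[w2])
                 for w2 in words} for w1 in words}

# Toy 2-d embeddings; real systems would use learned word vectors.
embeddings = {"refund": [1.0, 0.0], "chargeback": [0.9, 0.1], "weather": [0.0, 1.0]}
dm = distance_matrix(embeddings)

# One possible threshold rule: mean distance of the first word to the others.
others = [dm["refund"][w] for w in dm["refund"] if w != "refund"]
threshold = sum(others) / len(others)
```

The stored word-threshold pair would then let the system test whether a new narrative's words fall within the learned distance of "refund" for the associated outcome.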
-
Patent number: 11557304
Abstract: Methods and apparatus for performing variable block length watermarking of media are disclosed. Example apparatus include means for evaluating a masking ability of a first audio block; means for selecting a first frequency to represent a first code, the means for selecting to (i) select the first frequency from a first set of frequencies that are detectable when performing a frequency transformation using a first block length, but are not detectable when performing a frequency transformation using a second block length, and (ii) select a second frequency to represent a second code, the second frequency selected from a second set of frequencies that are detectable when performing a frequency transformation using the second block length; means for synthesizing a first signal having the first frequency with the masking ability of the first audio block; and means for combining the first signal with the first audio block.
Type: Grant
Filed: August 10, 2020
Date of Patent: January 17, 2023
Assignee: THE NIELSEN COMPANY (US), LLC
Inventors: Venugopal Srinivasan, Alexander Topchy
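The frequency-selection idea can be sketched numerically: an N-point transform at sample rate fs resolves bins spaced fs/N apart, so a frequency can sit exactly on a bin of a long block length yet fall between bins of a shorter one. The sample rate, block lengths, and search band below are illustrative assumptions.

```python
def bin_frequencies(fs, block_len):
    """Frequencies that align exactly with analysis bins of this block length."""
    return {k * fs / block_len for k in range(block_len // 2)}

def frequencies_only_in(fs, long_len, short_len, lo, hi):
    """Bin-aligned frequencies of the long block that miss the short block's bins."""
    long_bins = bin_frequencies(fs, long_len)
    short_bins = bin_frequencies(fs, short_len)
    return sorted(f for f in long_bins - short_bins if lo <= f <= hi)

fs = 48000
# 512-point bins are spaced 93.75 Hz; 256-point bins are spaced 187.5 Hz,
# so odd multiples of 93.75 Hz exist only in the longer transform's grid.
candidates = frequencies_only_in(fs, long_len=512, short_len=256, lo=1000, hi=1400)
```

A watermark tone placed at one of these candidate frequencies would be resolvable by the long-block analysis but not cleanly separable by the short-block analysis, which is the asymmetry the abstract exploits.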
-
Patent number: 11544479
Abstract: Provided are a method and apparatus for constructing a compact translation model that may be installed on a terminal on the basis of a pre-built reference model, in which the pre-built reference model is miniaturized through parameter imitation learning and is efficiently compressed through tree search structure imitation learning without degrading translation performance. The compact translation model provides translation accuracy and speed in a terminal environment that is limited in network, memory, and computation performance.
Type: Grant
Filed: January 24, 2020
Date of Patent: January 3, 2023
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Yo Han Lee, Young Kil Kim
-
Patent number: 11544685
Abstract: A multimedia keepsake is created containing multimedia content created by a customer and stored online as content information. After the customer selects the type of keepsake, the content information is converted to keepsake information having a format appropriate for storage in the selected type of keepsake. The keepsake information is stored online so as to be accessible via an access code, and it is downloaded to a vendor providing the access code.
Type: Grant
Filed: August 12, 2014
Date of Patent: January 3, 2023
Inventor: Geoffrey S. Stern
-
Patent number: 11544456
Abstract: Systems and methods for parsing natural language sentences using an artificial neural network (ANN) are described. Embodiments of the described systems and methods may generate a plurality of word representation matrices for an input sentence, wherein each of the word representation matrices is based on an input matrix of word vectors, a query vector, a matrix of key vectors, and a matrix of value vectors, and wherein a number of the word representation matrices is based on a number of syntactic categories; compress each of the plurality of word representation matrices to produce a plurality of compressed word representation matrices; concatenate the plurality of compressed word representation matrices to produce an output matrix of word vectors; and identify at least one word from the input sentence corresponding to a syntactic category based on the output matrix of word vectors.
Type: Grant
Filed: March 5, 2020
Date of Patent: January 3, 2023
Assignee: ADOBE INC.
Inventors: Khalil Mrini, Walter Chang, Trung Bui, Quan Tran, Franck Dernoncourt
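The generate-compress-concatenate flow can be shown with a dimensional sketch: one representation matrix per syntactic category, each compressed and then concatenated back to the original width. The placeholder head (averaging each word with the sentence mean) and the truncation-based compression are stand-ins for the learned attention and projection the abstract describes.

```python
def attention_head(word_vectors):
    """Placeholder head: mixes each word vector with the sentence mean."""
    n = len(word_vectors)
    mean = [sum(col) / n for col in zip(*word_vectors)]
    return [[(x + m) / 2 for x, m in zip(row, mean)] for row in word_vectors]

def compress(matrix, out_dim):
    """Toy compression: keep only the first out_dim columns of each row."""
    return [row[:out_dim] for row in matrix]

def categorical_attention(word_vectors, num_categories):
    """One representation matrix per category, compressed, then concatenated."""
    d = len(word_vectors[0])
    parts = [compress(attention_head(word_vectors), d // num_categories)
             for _ in range(num_categories)]
    return [sum((p[i] for p in parts), []) for i in range(len(word_vectors))]

# Two "words", four dimensions, two syntactic categories.
x = [[1.0, 2.0, 3.0, 4.0], [3.0, 4.0, 5.0, 6.0]]
out = categorical_attention(x, num_categories=2)
```

The point of the sketch is the shape invariant: compressing each of the `num_categories` matrices to `d // num_categories` columns means the concatenated output matrix has the same width as the input, so it can feed downstream layers directly.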
-
Patent number: 11538485
Abstract: A method watermarks speech data by using a generator to generate speech data including a watermark. The generator is trained to generate the speech data including the watermark. The training process generates first speech data from the generator; the first speech data is configured to represent speech and includes a candidate watermark. The training also produces an inconsistency message as a function of at least one difference between the first speech data and at least authentic speech data. The training further includes transforming the first speech data, including the candidate watermark, using a watermark robustness module to produce transformed speech data including a transformed candidate watermark. The training further produces a watermark-detectability message, using a watermark detection machine learning system, relating to one or more desirable watermark features of the transformed candidate watermark.
Type: Grant
Filed: August 14, 2020
Date of Patent: December 27, 2022
Assignee: Modulate, Inc.
Inventors: William Carter Huffman, Brendan Kelly
-
Patent number: 11532302
Abstract: Methods and devices for conducting, based on a clock difference among a plurality of voice collection devices, a synchronization process on voice information collected by the plurality of voice collection devices. After the synchronization process is performed, a voice separation and recognition process is conducted on the voice information that was collected by the plurality of voice collection devices and synchronized based on the clock difference.
Type: Grant
Filed: September 28, 2017
Date of Patent: December 20, 2022
Assignee: Harman International Industries, Incorporated
Inventors: Xiangru Bi, Guoxia Zhang
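A minimal sketch of the synchronization step: once a clock difference between two capture devices is known, the earlier-starting stream is trimmed so both begin at the same instant. Sample-indexed lists stand in for audio, and the offset is assumed to come from comparing device clocks; the function name is illustrative.

```python
def synchronize(stream_a, stream_b, clock_diff_samples):
    """Trim the earlier-starting stream so both streams begin together.

    A positive clock_diff_samples means device A started that many samples
    before device B; negative means the reverse.
    """
    if clock_diff_samples >= 0:
        return stream_a[clock_diff_samples:], stream_b
    return stream_a, stream_b[-clock_diff_samples:]

a = [0, 0, 1, 2, 3]  # device A started two samples early
b = [1, 2, 3]
sync_a, sync_b = synchronize(a, b, clock_diff_samples=2)
```

Only after this alignment do multi-channel separation techniques make sense, since they rely on the same acoustic event appearing at the same index across channels.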
-
Patent number: 11521630
Abstract: A method, system, and computer readable medium for decomposing an audio signal into different isolated sources. The techniques and mechanisms convert an audio signal into K input spectrogram fragments. The fragments are sent into a deep neural network to isolate different sources. The isolated fragments are then combined to form full isolated source audio signals.
Type: Grant
Filed: October 2, 2020
Date of Patent: December 6, 2022
Assignee: AUDIOSHAKE, INC.
Inventor: Luke Miner
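The fragment-process-recombine flow can be sketched in a few lines: split the signal into K fragments, run each through a separation function, and concatenate the results. A toy fixed-gain mask stands in for the deep neural network, and plain sample lists stand in for spectrograms; both are assumptions for illustration only.

```python
def to_fragments(signal, k):
    """Split the signal into k equal-length fragments."""
    size = len(signal) // k
    return [signal[i * size:(i + 1) * size] for i in range(k)]

def separate(fragment, gain=0.5):
    """Toy stand-in for the DNN: apply a fixed attenuation mask."""
    return [x * gain for x in fragment]

def isolate_source(signal, k):
    """Fragment, separate each fragment, and recombine into one signal."""
    return [x for frag in to_fragments(signal, k) for x in separate(frag)]

signal = [2.0, 4.0, 6.0, 8.0]
isolated = isolate_source(signal, k=2)
```

Fragmenting lets the network see fixed-size inputs regardless of the recording's length, which is the practical reason for the split-then-recombine design.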
-
Patent number: 11521639
Abstract: The present disclosure describes a system, method, and computer program for predicting sentiment labels for audio speech utterances using an audio speech sentiment classifier pretrained with pseudo sentiment labels. A speech sentiment classifier for audio speech (“a speech sentiment classifier”) is pretrained in an unsupervised manner by leveraging a pseudo labeler previously trained to predict sentiments for text. Specifically, a text-trained pseudo labeler is used to autogenerate pseudo sentiment labels for the audio speech utterances using transcriptions of the utterances, and the speech sentiment classifier is trained to predict the pseudo sentiment labels given corresponding embeddings of the audio speech utterances. The speech sentiment classifier is then subsequently fine tuned using a sentiment-annotated dataset of audio speech utterances, which may be significantly smaller than the unannotated dataset used in the unsupervised pretraining phase.
Type: Grant
Filed: May 28, 2021
Date of Patent: December 6, 2022
Assignee: ASAPP, INC.
Inventors: Suwon Shon, Pablo Brusco, Jing Pan, Kyu Jeong Han
-
Patent number: 11521618
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for collaboration between multiple voice controlled devices are disclosed. In one aspect, a method includes the actions of identifying, by a first computing device, a second computing device that is configured to respond to a particular, predefined hotword; receiving audio data that corresponds to an utterance; receiving a transcription of additional audio data outputted by the second computing device in response to the utterance; based on the transcription of the additional audio data and based on the utterance, generating a transcription that corresponds to a response to the additional audio data; and providing, for output, the transcription that corresponds to the response.
Type: Grant
Filed: December 17, 2019
Date of Patent: December 6, 2022
Assignee: GOOGLE LLC
Inventors: Victor Carbune, Pedro Gonnet Anders, Thomas Deselaers, Sandro Feuz
-
Patent number: 11514885
Abstract: An automatic dubbing method is disclosed. The method comprises: extracting speeches of a voice from an audio portion of a media content (504); obtaining a voice print model for the extracted speeches of the voice (506); processing the extracted speeches by utilizing the voice print model to generate replacement speeches (508); and replacing the extracted speeches of the voice with the generated replacement speeches in the audio portion of the media content (510).
Type: Grant
Filed: November 21, 2016
Date of Patent: November 29, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Henry Gabryjelski, Jian Luan, Dapeng Li
-
Patent number: 11501079
Abstract: A technique for dynamic generation of a derivative story includes obtaining content preferences from a content consumer. The content preferences indicate preferences for characteristics of the derivative story. A content data structure is identified based at least in part on the content preferences. The content data structure specifies story elements of a preexisting story. The story elements are defined at one or more different levels of story abstraction and associated with metadata constraints that constrain modification or use of the story elements within the derivative story. At least some of the metadata constraints indicate whether associated ones of the story elements are mutable story elements. One or more of the mutable story elements are adapted to the content preferences of the content consumer as constrained by the metadata constraints to generate the derivative story. The derivative story is then rendered via a user interface.
Type: Grant
Filed: December 5, 2019
Date of Patent: November 15, 2022
Assignee: X Development LLC
Inventors: Philip E. Watson, Christian Ervin
-
Patent number: 11495208
Abstract: In some embodiments, recognition results produced by a speech processing system (which may include two or more recognition results, including a top recognition result and one or more alternative recognition results) based on an analysis of a speech input are evaluated for indications of potential errors. In some embodiments, the indications of potential errors may include discrepancies between recognition results that are meaningful for a domain, such as medically-meaningful discrepancies. The evaluation of the recognition results may be carried out using any suitable criteria, including one or more criteria that differ from criteria used by an ASR system in determining the top recognition result and the alternative recognition results from the speech input. In some embodiments, a recognition result may additionally or alternatively be processed to determine whether the recognition result includes a word or phrase that is unlikely to appear in a domain to which the speech input relates.
Type: Grant
Filed: October 23, 2017
Date of Patent: November 8, 2022
Assignee: Nuance Communications, Inc.
Inventors: William F. Ganong, III, Raghu Vemula, Robert Fleming
-
Patent number: 11488588
Abstract: In a control system including a printing apparatus and a server system, the server system includes a transmission unit that, if a voice instruction received by a voice control device is a query regarding the printing apparatus, transmits information concerning the printing apparatus without performing processing of content used for print processing, and a specification unit that, if the received voice instruction is a print instruction for printing the content and includes a print setting value corresponding to a first item but not a print setting value corresponding to a second item, specifies content corresponding to the print instruction, a print setting value corresponding to the first item, and a preset, predetermined print setting value for the second item. The printing apparatus includes a print control unit that performs print processing based on the content, the print setting value corresponding to the first item, and the specified predetermined print setting value.
Type: Grant
Filed: November 8, 2018
Date of Patent: November 1, 2022
Assignee: Canon Kabushiki Kaisha
Inventor: Toshiki Shiga
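The setting-resolution step the abstract describes reduces to a merge: values present in the voice print instruction win, and any item the instruction omits falls back to a preset default. The setting names and defaults below are illustrative assumptions, not values from the patent.

```python
# Preset, predetermined print setting values (illustrative).
PRESET_DEFAULTS = {"paper_size": "A4", "color": "mono", "copies": 1}

def resolve_print_settings(instruction_settings):
    """Combine instruction-provided values with preset defaults for missing items."""
    resolved = dict(PRESET_DEFAULTS)
    resolved.update(instruction_settings)
    return resolved

# The voice instruction specified only the number of copies (the "first item");
# paper size and color (the "second items") fall back to presets.
settings = resolve_print_settings({"copies": 3})
```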
-
Patent number: 11487939
Abstract: Embodiments described herein provide a fully unsupervised model for text compression. Specifically, the unsupervised model is configured to identify an optimal deletion path for each input sequence of texts (e.g., a sentence), and words from the input sequence are gradually deleted along the deletion path. To identify the optimal deletion path, the unsupervised model may adopt a pretrained bidirectional language model (BERT) to score each candidate deletion based on the average perplexity of the resulting sentence and perform a simple greedy look-ahead tree search to select the best deletion for each step.
Type: Grant
Filed: August 23, 2019
Date of Patent: November 1, 2022
Assignee: Salesforce.com, Inc.
Inventors: Tong Niu, Caiming Xiong, Richard Socher
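The greedy deletion search can be sketched without the language model: at each step, try deleting each remaining word, score every resulting sentence, and keep the best deletion. The toy scorer below (reward kept content words, penalize length) is a stand-in assumption for the BERT-based average perplexity in the abstract, and this sketch omits the look-ahead.

```python
# Hypothetical content-word list for the toy scorer.
CONTENT_WORDS = {"cat", "sat", "mat"}

def score(words):
    """Toy fluency score: reward kept content words, penalize sentence length."""
    return sum(2 for w in words if w in CONTENT_WORDS) - 0.5 * len(words)

def greedy_compress(words, steps):
    """Delete one word per step, always choosing the highest-scoring result."""
    path = [list(words)]
    for _ in range(steps):
        candidates = [words[:i] + words[i + 1:] for i in range(len(words))]
        words = max(candidates, key=score)
        path.append(list(words))
    return path

path = greedy_compress(["the", "cat", "sat", "on", "the", "mat"], steps=3)
```

The returned `path` is the deletion path itself: each entry is the sentence after one more deletion, with function words removed before content words under this scorer.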
-
Patent number: 11481444
Abstract: Provided methods and systems allow dynamic rendering of a reflexive questionnaire based on a modifiable spreadsheet for users with little to no programming experience and knowledge. Some methods comprise: receiving, from a first computer, a modifiable spreadsheet with multiple rows, each row comprising rendering instructions for a reflexive questionnaire, such as a data type cell, a statement cell, a logic cell, and a field identifier; rendering a graphical user interface, on a second computer, comprising a label and an input element corresponding to the rendering instructions of a first row of the spreadsheet; receiving an input from the second computer; evaluating the input against the logic cell of the spreadsheet; and, in response to the input complying with the logic cell of the spreadsheet, dynamically rendering a second label and a second input element to be displayed on the graphical user interface based on the logic of the first row.
Type: Grant
Filed: January 28, 2020
Date of Patent: October 25, 2022
Assignee: HITPS LLC
Inventors: Mark Sayre, Harish Krishnaswamy, Sam Elsamman
-
Patent number: 11475333
Abstract: Techniques for generating solutions from aural inputs include identifying, with one or more machine learning engines, a plurality of aural signals provided by two or more human speakers, at least some of the plurality of aural signals associated with a human-perceived problem; parsing, with the one or more machine learning engines, the plurality of aural signals to generate a plurality of terms, each of the terms associated with the human-perceived problem; deriving, with the one or more machine learning engines, a plurality of solution sentiments and a plurality of solution constraints from the plurality of terms; generating, with the one or more machine learning engines, at least one solution to the human-perceived problem based on the derived solution sentiments and solution constraints; and presenting the at least one solution of the human-perceived problem to the two or more human speakers through at least one of a graphical interface or an auditory interface.
Type: Grant
Filed: March 8, 2021
Date of Patent: October 18, 2022
Assignee: X Development LLC
Inventors: Nicholas John Foster, Carsten Schwesig
-
Patent number: 11468231
Abstract: The present disclosure relates generally to systems and methods for analyzing intent. Intents may be analyzed to determine to which device or agent to route a communication. The analyzed intent information can also be used to formulate reports and analyze the accuracy of the identified intents with respect to the received communication.
Type: Grant
Filed: December 18, 2020
Date of Patent: October 11, 2022
Assignee: LIVEPERSON, INC.
Inventors: Matthew Dunn, Joe Bradley, Laura Onu