Patents Examined by Edwin S. Leland, III
-
Patent number: 11699447
Abstract: Systems and methods are provided herein for determining one or more traits of a speaker based on voice analysis to present a content item to the speaker. In one example, the method receives a voice query and determines whether the voice query matches, within a first confidence threshold, a speaker identification (ID) among a plurality of speaker IDs stored in a speaker profile. In response to determining that the voice query matches the speaker ID within the first confidence threshold, the method bypasses a trait prediction engine and retrieves a trait among the plurality of traits in the speaker profile associated with the matched speaker ID. The method further provides a content item based on the retrieved trait.
Type: Grant
Filed: June 22, 2020
Date of Patent: July 11, 2023
Assignee: ROVI GUIDES, INC.
Inventors: Ankur Anil Aher, Jeffry Copps Robert Jose
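The bypass logic described in this abstract can be illustrated with a minimal sketch. All names here (`resolve_trait`, the profile dictionary layout, cosine similarity as the matching measure) are illustrative assumptions, not the patent's actual implementation:

```python
def cosine(a, b):
    # Toy similarity between two voiceprint vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def resolve_trait(query_vec, profiles, trait_predictor, threshold=0.85):
    # Find the best-matching enrolled speaker ID.
    best_id, best_score = None, 0.0
    for speaker_id, profile in profiles.items():
        score = cosine(query_vec, profile["voiceprint"])
        if score > best_score:
            best_id, best_score = speaker_id, score
    # Confident match: bypass the trait prediction engine entirely
    # and reuse the trait stored with the matched speaker ID.
    if best_id is not None and best_score >= threshold:
        return profiles[best_id]["trait"], "bypassed"
    # No confident match: fall back to predicting traits from the voice.
    return trait_predictor(query_vec), "predicted"
```

The design point is that the expensive path (trait prediction) only runs when the cheap path (profile lookup) fails the confidence test.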
-
Patent number: 11694694
Abstract: A method is provided for identifying synthetic "deep-fake" audio samples versus organic audio samples. Methods may include: generating a model of a vocal tract using one or more organic audio samples from a user; identifying a set of bigram-feature pairs from the one or more audio samples; estimating the cross-sectional area of the vocal tract of the user when speaking the set of bigram-feature pairs; receiving a candidate audio sample; identifying bigram-feature pairs of the candidate audio sample that are in the set of bigram-feature pairs; calculating a cross-sectional area of a theoretical vocal tract when speaking the identified bigram-feature pairs; and identifying the candidate audio sample as a deep-fake audio sample in response to the calculated cross-sectional area of the theoretical vocal tract failing to correspond, within a predetermined measure, to the estimated cross-sectional area of the vocal tract of the user.
Type: Grant
Filed: July 27, 2021
Date of Patent: July 4, 2023
Assignee: UNIVERSITY OF FLORIDA RESEARCH FOUNDATION, INCORPORATED
Inventors: Patrick G. Traynor, Kevin Butler, Logan E. Blue, Luis Vargas, Kevin S. Warren, Hadi Abdullah, Cassidy Gibson, Jessica Nicole Odell
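The final comparison step can be sketched as follows. This assumes the acoustic work (estimating per-bigram cross-sectional areas) has already been done upstream; the function name, the dictionary representation, and the relative-tolerance decision rule are all illustrative assumptions:

```python
def is_deepfake(organic_areas, candidate_areas, tolerance=0.15):
    """Compare per-bigram vocal-tract cross-sectional areas.

    organic_areas: {bigram: area estimated from the enrolled user}
    candidate_areas: {bigram: area computed from the candidate sample}
    The candidate is flagged when a majority of shared bigrams fall
    outside the relative tolerance of the organic estimates.
    """
    shared = set(organic_areas) & set(candidate_areas)
    if not shared:
        raise ValueError("no overlapping bigram-feature pairs to compare")
    mismatches = sum(
        1 for b in shared
        if abs(candidate_areas[b] - organic_areas[b]) > tolerance * organic_areas[b]
    )
    return mismatches / len(shared) > 0.5
```

The underlying intuition is physical: a generative model can mimic a waveform, but the waveform implies a vocal-tract geometry, and a synthetic sample tends to imply an anatomically inconsistent one.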
-
Patent number: 11682223
Abstract: Computer-implemented systems and methods, trained through machine learning, score a sentiment expressed in a document. Individual sentences are scored, and then an overall document sentiment score is computed based on the scores of the individual sentences. Sentence scores can be computed with machine learning models. A digital matrix generator can generate an N×M matrix for each sentence, where the matrix comprises vectors of word embeddings for the individual words of the sentence. A classifier computes a sentence sentiment score for each sentence based on the digital matrix for the sentence. Sentence sentiment scores computed by the classifier can be adjusted based on a fuzzy matching of one or more phrases in the sentence to key phrases in a lexicon that are labeled with a sentiment relevant to the context.
Type: Grant
Filed: August 19, 2022
Date of Patent: June 20, 2023
Assignee: Morgan Stanley Services Group Inc.
Inventors: Yu Zhang, Dipayan Dutta, Zhongjie Lin, Shengjie Xia
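The fuzzy-lexicon adjustment and the sentence-to-document aggregation can be sketched with the standard library. This stands in for the patented pipeline only loosely: the classifier is assumed to exist upstream, `difflib` ratio matching is used here as a generic fuzzy matcher, and the additive adjustment and clamping rule are assumptions:

```python
import difflib

def adjust_with_lexicon(sentence, base_score, lexicon, cutoff=0.8):
    """Nudge a classifier's sentence score when a word in the sentence
    fuzzily matches a key phrase in the sentiment-labeled lexicon."""
    words = sentence.lower().split()
    for phrase, polarity in lexicon.items():
        if difflib.get_close_matches(phrase, words, n=1, cutoff=cutoff):
            base_score += polarity
    # Keep the adjusted score in a bounded sentiment range.
    return max(-1.0, min(1.0, base_score))

def document_sentiment(sentence_scores):
    """Overall document score as the mean of per-sentence scores."""
    return sum(sentence_scores) / len(sentence_scores)
```

Fuzzy matching lets a misspelled "excelent" still trigger the "excellent" lexicon entry, which is exactly the kind of noise a domain lexicon has to tolerate.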
-
Patent number: 11669300
Abstract: Systems and methods for wake word detection configuration are disclosed. An electronic device may be configured to detect a wake word in a user utterance based on one or more wake word models. Upon detection, wake word APIs may be utilized to determine if a speech-processing application associated with a remote speech-processing system is installed on the device. If installed, secondary wake word detection may be performed on the audio data representing the user utterance, and if the wake word is detected, the audio data may be sent to the remote system for processing. If not installed, a display of the electronic device may present options for downloading the speech-processing application.
Type: Grant
Filed: May 8, 2020
Date of Patent: June 6, 2023
Assignee: Amazon Technologies, Inc.
Inventors: Michael Douglas, Deepak Suresh Yavagal
-
Patent number: 11664033
Abstract: An electronic apparatus is disclosed. The apparatus includes a memory configured to store at least one pre-registered voiceprint and a first voiceprint cluster including the at least one pre-registered voiceprint, and a processor configured to, based on a user recognition command being received, obtain information of the time at which the user recognition command is received, change the at least one pre-registered voiceprint included in the first voiceprint cluster based on the obtained information of time, generate a second voiceprint cluster based on the at least one changed voiceprint, and, based on a user's utterance being received, perform user recognition with respect to the received user's utterance based on the first voiceprint cluster and the second voiceprint cluster.
Type: Grant
Filed: January 26, 2021
Date of Patent: May 30, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Maneesh Jain, Arun Kumar Singh, Atul Kumar Rai
-
Patent number: 11659102
Abstract: An apparatus that causes a device to print an image includes at least one processor and a display screen. A state of the apparatus is changed from a first state to a second state when an image is settled as a print target by a user, the first state being a state in which the at least one processor does not cause the device to print the image even if the apparatus is put close to the device, and the second state being a state in which the at least one processor can cause the device to print the image if the apparatus is put close to the device. After the apparatus in the second state performs a short distance wireless communication with the device, the device prints the image.
Type: Grant
Filed: May 4, 2021
Date of Patent: May 23, 2023
Assignee: CANON KABUSHIKI KAISHA
Inventor: Koji Yasuzaki
-
Patent number: 11651279
Abstract: Mechanisms are provided for implementing a proximity based candidate answer pre-processor engine that outputs a sub-set of candidate answers to a question and answer (QA) system. The mechanisms receive a lexical answer type (LAT) and an entity specified in an input natural language question, as well as an ontology data structure representing a corpus of natural language content. The mechanisms identify a set of candidate answers having associated nodes in the ontology data structure that are within a predetermined proximity of a node corresponding to the entity, and a sub-set of candidate answers in the set of candidate answers having an entity type corresponding to the LAT. The mechanisms output, to the QA system, the sub-set of candidate answers as candidate answers to the input natural language question for evaluation and selection of a final answer to the input natural language question.
Type: Grant
Filed: January 15, 2020
Date of Patent: May 16, 2023
Assignee: International Business Machines Corporation
Inventors: Timothy A. Bishop, Stephen A. Boxwell, Benjamin L. Brumfield, Stanley J. Vernier
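The two filters in this abstract (proximity in the ontology, then entity type vs. LAT) compose naturally as a breadth-first search followed by a type check. This is a minimal sketch assuming an adjacency-list ontology and hop count as the proximity measure; the real engine's graph representation and distance metric are not specified here:

```python
from collections import deque

def candidate_answers(graph, types, entity, lat, max_hops=2):
    """graph: {node: [neighbor, ...]}, types: {node: entity_type}.

    Returns the sub-set of nodes within max_hops of `entity` whose
    entity type matches the lexical answer type (LAT).
    """
    # Breadth-first search to collect nodes within the proximity bound.
    seen = {entity: 0}
    queue = deque([entity])
    while queue:
        node = queue.popleft()
        if seen[node] == max_hops:
            continue
        for nb in graph.get(node, []):
            if nb not in seen:
                seen[nb] = seen[node] + 1
                queue.append(nb)
    nearby = set(seen) - {entity}                        # proximity-based candidate set
    return {n for n in nearby if types.get(n) == lat}    # LAT-filtered sub-set
```

Pre-filtering this way shrinks the candidate pool before the QA system runs its expensive scoring, which is the point of a pre-processor engine.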
-
Patent number: 11651161
Abstract: Automated detection of reasoning in arguments. A training set is generated by: obtaining multiple arguments, each comprising one or more sentences provided as digital text; automatically estimating a probability that each of the arguments includes reasoning, wherein the estimating comprises applying a contextual language model to each of the arguments; automatically labeling as positive examples those of the arguments which have a relatively high probability to include reasoning; and automatically labeling as negative examples those of the arguments which have a relatively low probability to include reasoning. Based on the generated training set, a machine learning classifier is automatically trained to estimate a probability that a new argument includes reasoning. The trained machine learning classifier is applied to the new argument, to estimate a probability that the new argument includes reasoning.
Type: Grant
Filed: February 13, 2020
Date of Patent: May 16, 2023
Assignee: International Business Machines Corporation
Inventors: Avishai Gretz, Edo Cohen-Karlik, Noam Slonim, Assaf Toledo
-
Patent number: 11646037
Abstract: Systems, methods, and non-transitory computer-readable media can provide audio waveform data that corresponds to a voice sample to a temporal convolutional network for evaluation. The temporal convolutional network can pre-process the audio waveform data and can output an identity embedding associated with the audio waveform data. The identity embedding associated with the voice sample can be obtained from the temporal convolutional network. Information describing a speaker associated with the voice sample can be determined based at least in part on the identity embedding.
Type: Grant
Filed: December 8, 2020
Date of Patent: May 9, 2023
Assignee: OTO Systems Inc.
Inventors: Valentin Alain Jean Perret, Nicolas Lucien Perony, Nándor Kedves
-
Patent number: 11640501
Abstract: A method for verifying whether a queried text of less than 500 characters has been compiled by an author, comprising the following steps: multivariate statistical analysis of the queried text, for example PCA or PCoA, in order to generate a matrix of coordinates in a space with N dimensions; hierarchical clustering of the points of this space, which can be represented by a dendrogram; verification of the author of the queried text on the basis of this clustering.
Type: Grant
Filed: April 12, 2019
Date of Patent: May 2, 2023
Assignee: Orphanalytics SA
Inventors: Guy Genilloud, Alexandre-Pierre Cotty, Antoine Jover, Adrien Donnet-Monay, Florent Devillard, Constanze Andel Rimensberger, Valentin Roten, Stefan Codrescu, Alain Favre, Luc-Olivier Pochon, Lionel Pousaz, Claire Roten, Stéphanie Riand, Serge Nicollerat, Myriam Eugster, Jean-Luc Buhlmann, Léonard Andrè Henri Studer, Claude-Alain Roten
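The core idea (represent short texts as stylometric feature vectors, then check whether the queried text groups with the author's known texts) can be sketched without the patent's specific machinery. Note the substitutions: character-trigram profiles with cosine similarity stand in here for PCA/PCoA followed by hierarchical clustering, so this illustrates the verification idea, not the claimed method:

```python
from collections import Counter

def trigram_profile(text):
    """Character-trigram frequency vector, a common stylometric feature."""
    t = text.lower()
    grams = Counter(t[i:i + 3] for i in range(len(t) - 2))
    total = sum(grams.values())
    return {g: c / total for g, c in grams.items()}

def cosine_sim(p, q):
    dot = sum(p[g] * q.get(g, 0.0) for g in p)
    norm_p = sum(v * v for v in p.values()) ** 0.5
    norm_q = sum(v * v for v in q.values()) ** 0.5
    return dot / (norm_p * norm_q) if norm_p and norm_q else 0.0

def same_author(queried, reference_texts, threshold=0.5):
    """Verify authorship: does the queried text sit close to the
    author's reference texts in stylometric space?"""
    sims = [cosine_sim(trigram_profile(queried), trigram_profile(r))
            for r in reference_texts]
    return max(sims) >= threshold
```

Working at the character level is what makes verification feasible for texts under 500 characters, where word-level statistics are too sparse.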
-
Patent number: 11600397
Abstract: Systems and methods are provided for presenting aggregate data in response to a natural language user input. In one example, a system includes a display and a computing device coupled to the display and storing instructions executable to: receive a natural language user input; process the natural language user input; in response to determining that the user input includes a request to display two different plots of record data specific to the subject, generate, with the virtual assistant, a single graph including the two different plots of record data based on the processed natural language user input, the two different plots of record data plotted from two different record data sets, one or more aspects of the single graph selected based on an overlapping parameter for each of the two different record data sets; and output, to the display, the single graph as part of a communication thread.
Type: Grant
Filed: May 5, 2021
Date of Patent: March 7, 2023
Assignee: General Electric Company
Inventors: Omer Barkol, Renato Keshet, Andreas Tzanetakis, Constance Anne Rathke, Reuth Goldstein, Michelle Townshend
-
Patent number: 11594221
Abstract: A method may include obtaining first audio data originating at a first device during a communication session between the first device and a second device. The method may also include obtaining a first text string that is a transcription of the first audio data, where the first text string may be generated using automatic speech recognition technology using the first audio data. The method may also include obtaining a second text string that is a transcription of second audio data, where the second audio data may include a revoicing of the first audio data by a captioning assistant and the second text string may be generated by the automatic speech recognition technology using the second audio data. The method may further include generating an output text string from the first text string and the second text string and using the output text string as a transcription of the speech.
Type: Grant
Filed: March 25, 2021
Date of Patent: February 28, 2023
Assignee: Sorenson IP Holdings, LLC
Inventors: David Thomson, Jadie Adams, Jonathan Skaggs, Joshua McClellan, Shane Roylance
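Combining two hypotheses of the same speech can be sketched as a word-level alignment. The merge rule below (keep agreements; on disagreement, trust the revoiced transcript, since the captioning assistant's audio is typically cleaner) is a naive assumption, not the patent's actual combination method:

```python
import difflib

def merge_transcripts(asr_words, revoiced_words):
    """Combine two ASR hypotheses of the same speech into one output.

    Aligns the two word sequences; where they agree, the shared words
    are kept, and where they disagree, the revoiced hypothesis wins.
    """
    sm = difflib.SequenceMatcher(a=asr_words, b=revoiced_words)
    out = []
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "equal":
            out.extend(asr_words[i1:i2])
        else:                      # replace / insert / delete
            out.extend(revoiced_words[j1:j2])
    return out
```

A production combiner would weigh per-word confidence scores rather than always preferring one source, but the alignment skeleton is the same.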
-
Patent number: 11587579
Abstract: Methods and apparatuses for detecting user speech are described. In one example, a method for detecting user speech includes receiving a microphone output signal corresponding to sound received at a microphone and identifying a spoken vowel sound in the microphone signal. The method further includes outputting an indication of user speech detection responsive to identifying the spoken vowel sound.
Type: Grant
Filed: August 5, 2021
Date of Patent: February 21, 2023
Assignee: PLANTRONICS, INC.
Inventor: Arthur Leland Schiro
-
Patent number: 11580994
Abstract: A method includes receiving acoustic features of a first utterance spoken by a first user who speaks with typical speech, and processing the acoustic features of the first utterance using a general speech recognizer to generate a first transcription of the first utterance. The method also includes analyzing the first transcription of the first utterance to identify one or more bias terms in the first transcription and biasing an alternative speech recognizer on the one or more bias terms identified in the first transcription. The method also includes receiving acoustic features of a second utterance spoken by a second user who speaks with atypical speech and processing, using the alternative speech recognizer biased on the one or more bias terms identified in the first transcription, the acoustic features of the second utterance to generate a second transcription of the second utterance.
Type: Grant
Filed: January 20, 2021
Date of Patent: February 14, 2023
Assignee: Google LLC
Inventors: Fadi Biadsy, Pedro Jose Moreno Mengibar
-
Patent number: 11574640
Abstract: Implementations set forth herein relate to an automated assistant that can be customized by a user to provide custom assistant responses to certain assistant queries, which may originate from other users. The user can establish certain custom assistant responses by providing an assistant response request to the automated assistant and/or responding to a request from the automated assistant to establish a particular custom assistant response. In some instances, a user can elect to establish a custom assistant response when the user determines or acknowledges that certain common queries are being submitted to the automated assistant, but the automated assistant is unable to resolve the common query. Establishing such custom assistant responses can therefore condense interactions between other users and the automated assistant.
Type: Grant
Filed: July 13, 2020
Date of Patent: February 7, 2023
Assignee: Google LLC
Inventors: Victor Carbune, Matthew Sharifi
-
Patent number: 11574131
Abstract: The present disclosure is directed to systems and methods that include and/or leverage one or more machine-learned language models that generate intermediate textual analysis (e.g., including usage of structural tools such as APIs) in service of contextual text generation. For example, a computing system can obtain a contextual text string that includes one or more contextual text tokens. The computing system can process the contextual text string with the machine-learned language model to generate one or more intermediate text strings that include one or more intermediate text tokens. The computing system can process the one or more intermediate text strings with the machine-learned language model to generate an output text string comprising one or more output text tokens. The one or more intermediate text strings can include textual analysis of the contextual text string that supports the output text string.
Type: Grant
Filed: May 20, 2022
Date of Patent: February 7, 2023
Assignee: GOOGLE LLC
Inventors: Noam Shazeer, Daniel De Freitas Adiwardana
-
Patent number: 11574641
Abstract: A processor-implemented method with data recognition includes: extracting input feature data from input data; calculating a matching score between the extracted input feature data and enrolled feature data of an enrolled user, based on the extracted input feature data, common component data of a plurality of enrolled feature data corresponding to the enrolled user, and distribution component data of the plurality of enrolled feature data corresponding to the enrolled user; and recognizing the input data based on the matching score.
Type: Grant
Filed: April 10, 2020
Date of Patent: February 7, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventors: Sung-Un Park, Kyuhong Kim
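One plausible reading of "common component" and "distribution component" is a per-dimension mean and variance of the enrolled features, with the matching score discounting deviations along dimensions where the user's own samples vary. That interpretation, and every name below, is an assumption for illustration:

```python
def enroll(feature_vectors):
    """Summarize enrolled feature data as a common component (per-dim
    mean) and a distribution component (per-dim variance)."""
    n, dim = len(feature_vectors), len(feature_vectors[0])
    common = [sum(v[d] for v in feature_vectors) / n for d in range(dim)]
    distribution = [
        sum((v[d] - common[d]) ** 2 for v in feature_vectors) / n
        for d in range(dim)
    ]
    return common, distribution

def matching_score(input_vec, common, distribution, eps=1e-6):
    """Higher when the input sits close to the common component;
    deviations along high-variance dimensions are penalized less."""
    d2 = sum((x - m) ** 2 / (s + eps)
             for x, m, s in zip(input_vec, common, distribution))
    return 1.0 / (1.0 + d2)

def recognize(input_vec, common, distribution, threshold=0.5):
    return matching_score(input_vec, common, distribution) >= threshold
```

Summarizing enrollment this way means recognition needs only two small vectors per user rather than the full set of enrolled samples.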
-
Patent number: 11568136
Abstract: A system, method, and computer-readable medium are disclosed for performing a lexicon construction operation. The lexicon construction operation includes: identifying a corpus, the corpus comprising a plurality of training events, each of the plurality of training events comprising a term; grouping terms from the plurality of training events into topic clusters; analyzing the plurality of topic clusters, the analyzing providing a plurality of classified clusters; and deriving a plurality of learned lexicons from the plurality of classified clusters.
Type: Grant
Filed: April 15, 2020
Date of Patent: January 31, 2023
Assignee: Forcepoint LLC
Inventors: Christopher Poirel, Amanda Kinnischtzke
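The group-classify-derive pipeline can be illustrated with a toy version in which terms that co-occur in an event form one topic cluster (connected components over co-occurrence), and each cluster is "classified" by its most frequent term. The clustering and classification choices here are stand-in assumptions, far simpler than whatever the patented operation uses:

```python
def build_lexicons(training_events):
    """training_events: list of term lists, one list per event.

    Returns learned lexicons: {cluster_label: set_of_terms}, where the
    label is the cluster's most frequent term.
    """
    # Union-find over terms: co-occurring terms end up in one cluster.
    parent = {}
    def find(t):
        parent.setdefault(t, t)
        while parent[t] != t:
            parent[t] = parent[parent[t]]   # path halving
            t = parent[t]
        return t
    def union(a, b):
        parent[find(a)] = find(b)

    counts = {}
    for event in training_events:
        for term in event:
            counts[term] = counts.get(term, 0) + 1
            union(event[0], term)

    # Collect clusters, then classify each by its most frequent member.
    clusters = {}
    for term in counts:
        clusters.setdefault(find(term), set()).add(term)
    return {max(members, key=counts.get): members
            for members in clusters.values()}
```

Even this toy version shows why lexicons fall out of clustering for free: once terms are grouped by topic, each group *is* a lexicon, and classification only has to name it.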
-
Patent number: 11568152
Abstract: A computer system configured for autonomous learning of entity values is provided. The computer system includes a memory that stores associations between entities and fields of response data. The computer system also includes a processor configured to: receive a request to process an intent; generate a request to fulfill the intent; transmit the request to a fulfillment service; receive, from the fulfillment service, response data specifying values of the fields; identify the values of the fields within the response data; identify the entities via the associations using the fields; store, within the memory, the values of the fields as values of the entities; and retrain a natural language processor using the values of the entities.
Type: Grant
Filed: July 16, 2020
Date of Patent: January 31, 2023
Inventor: Lampros Dounis
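The field-to-entity learning loop in this abstract is essentially a mapping exercise, sketched below. The class name, the dictionary shapes, and the example field names are all hypothetical; retraining the NLU model on the accumulated values is left as a separate step:

```python
class EntityLearner:
    """Learns entity values from fulfillment-service responses.

    Stored associations map response-data fields to entities; when a
    response arrives, field values are copied onto the associated
    entities so a natural language processor can later be retrained
    on them.
    """
    def __init__(self, field_to_entity):
        self.field_to_entity = field_to_entity   # e.g. {"city": "destination"}
        self.entity_values = {}

    def ingest(self, response_data):
        for field, value in response_data.items():
            entity = self.field_to_entity.get(field)
            if entity is not None:
                self.entity_values.setdefault(entity, set()).add(value)
        return self.entity_values
```

The appeal of the approach is that every successful fulfillment becomes free training data: the assistant learns new entity values (cities, carriers, product names) without anyone labeling them.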
-
Patent number: 11568877
Abstract: [Problem] To provide a system that changes a shared image in real time based on a conversation. [Solution] A system 1 for changing an image based on a voice, including: a voice information input unit 3 configured to input voice information; a voice analysis unit 5 configured to analyze the voice information input by the voice information input unit 3; and an image change unit 7 configured to change a position of content in an image representing the content, using information on the content included in the voice information analyzed by the voice analysis unit 5 and information on a change in the content.
Type: Grant
Filed: February 12, 2021
Date of Patent: January 31, 2023
Assignee: Interactive Solutions Corp.
Inventor: Kiyoshi Sekine
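The image change unit's job (extract a content name and a change from the analyzed voice, then move that content) can be shown with a deliberately tiny sketch. The keyword-spotting "analysis" and the fixed step size are toy assumptions standing in for the system's real voice analysis unit:

```python
def apply_voice_command(positions, utterance, step=10):
    """positions: {content_name: (x, y)} for content in the shared image.

    Spots a content name and a direction word in the utterance and
    moves that content's position accordingly.
    """
    directions = {"left": (-step, 0), "right": (step, 0),
                  "up": (0, -step), "down": (0, step)}
    words = utterance.lower().split()
    for name in positions:
        if name in words:
            for word, (dx, dy) in directions.items():
                if word in words:
                    x, y = positions[name]
                    positions[name] = (x + dx, y + dy)
    return positions
```

In the patented system this would run continuously against live conversation, so the shared image tracks what the speakers are saying in real time.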