Patents Examined by Nicole A K Schmieder
-
Patent number: 11978462
Abstract: Various aspects of the present disclosure are directed to a process for modifying an audio signal. For example, one process for modifying an audio signal is disclosed including the following steps: determining a compression parameter of the audio signal that should be modified; fractionizing the audio signal into different frequency bands; obtaining the values of the compression parameter for each frequency band; and compressing at least a part of the frequency bands as a function of the determined compression parameter. Various other embodiments of the present disclosure are directed to a device for modifying an audio signal.
Type: Grant
Filed: July 10, 2018
Date of Patent: May 7, 2024
Assignee: ISUNIYE LLC
Inventor: Zlatan Ribic
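The per-band compression described in this abstract can be sketched as follows. This is an illustrative sketch only, not the patented method: the threshold and ratio values are assumptions, and the signal is assumed to be already split into bands (the abstract's "fractionizing" step, e.g. by a filter bank, is outside this snippet).

```python
import math

def band_rms(band):
    """Root-mean-square level of one frequency band's samples."""
    return math.sqrt(sum(x * x for x in band) / len(band))

def compress_band(band, threshold_db=-20.0, ratio=4.0):
    """Apply downward compression to a band whose RMS level exceeds the threshold."""
    rms = band_rms(band)
    level_db = 20 * math.log10(max(rms, 1e-12))
    if level_db <= threshold_db:
        return list(band)  # below threshold: pass through unchanged
    # Reduce gain so the excess above the threshold is divided by the ratio.
    excess_db = level_db - threshold_db
    gain_db = -excess_db * (1.0 - 1.0 / ratio)
    gain = 10 ** (gain_db / 20.0)
    return [x * gain for x in band]

def compress_multiband(bands):
    """Compress each frequency band independently, as in the abstract's last step."""
    return [compress_band(b) for b in bands]
```

A loud band is attenuated while a quiet band passes through unchanged, which is the essence of compressing "at least a part of the frequency bands as a function of the determined compression parameter".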
-
Patent number: 11955136
Abstract: Various embodiments of a system and associated method for detecting and localizing gunshots are disclosed herein.
Type: Grant
Filed: March 29, 2021
Date of Patent: April 9, 2024
Assignee: Arizona Board of Regents on behalf of Arizona State University
Inventor: Garth Paine
-
Patent number: 11941154
Abstract: Method and system of securing personally identifiable and sensitive information in conversational AI based communication. The method comprises enabling a first service provider device as a communication channel provider of an incoming communication mode and enabling a second service provider device as a communication channel provider of an outgoing communication mode, at least one of the incoming communication and outgoing communication modes comprising an audio communication, storing content of a conversation in the incoming communication mode in a first storage medium accessible to the first service provider device but not the second service provider device, and storing content of the conversation in the outgoing communication mode at a second storage medium accessible to the second service provider device but not the first service provider device, and anonymizing the audio communication wherein personally identifiable audio characteristics of the user are obfuscated from the service provider devices.
Type: Grant
Filed: March 27, 2023
Date of Patent: March 26, 2024
Assignee: Ventech Solutions, Inc.
Inventors: Ravi Kiran Pasupuleti, Ravi Kunduru
-
Patent number: 11942093
Abstract: A system and method to perform dubbing automatically for multiple languages at the same time using speech-to-text transcriptions, language translation, and artificial intelligence engines to perform the actual dubbing in the voice likeness of the original speaker.
Type: Grant
Filed: March 5, 2020
Date of Patent: March 26, 2024
Assignee: SYNCWORDS LLC
Inventors: Aleksandr Dubinsky, Taras Sereda
-
Patent number: 11914965
Abstract: Disclosed systems relate to generating questions from text. In an example, a method includes forming a first semantic tree from a first reference text and a second semantic tree from a second reference text. The method includes identifying a set of semantic nodes that are in the first semantic tree but not in the second semantic tree. The method includes forming a first syntactic tree for the first reference text and a second syntactic tree for the second reference text. The method includes identifying a set of syntactic nodes that are in the first syntactic tree but not in the second syntactic tree. The method includes mapping the set of semantic nodes to the set of syntactic nodes by identifying a correspondence between a semantic node and a syntactic node, forming a question fragment from a normalized word, and providing the question fragment to a user device.
Type: Grant
Filed: July 30, 2021
Date of Patent: February 27, 2024
Assignee: Oracle International Corporation
Inventor: Boris Galitsky
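The core set-difference step of this abstract, finding nodes present in the first tree but absent from the second, can be illustrated with toy trees. Representing trees as flat sets of node labels is an assumption for brevity; the patent's semantic and syntactic trees carry far richer structure.

```python
def node_difference(tree_a, tree_b):
    """Nodes present in tree_a but absent from tree_b (the abstract's set difference)."""
    return tree_a - tree_b

# Toy semantic trees represented as sets of node labels (hypothetical example data).
first_semantic = {"agent:Alice", "action:buy", "object:ticket", "time:Friday"}
second_semantic = {"agent:Alice", "action:buy", "object:ticket"}

# Only the node missing from the second tree survives; in the patented method it
# would seed the question fragment (e.g. asking about the time of the purchase).
missing = node_difference(first_semantic, second_semantic)
```

The same operation applied to the syntactic trees yields the syntactic node set that the method then maps the semantic nodes onto.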
-
Patent number: 11908447
Abstract: According to an aspect, method for synthesizing multi-speaker speech using an artificial neural network comprises generating and storing a speech learning model for a plurality of users by subjecting a synthetic artificial neural network of a speech synthesis model to learning, based on speech data of the plurality of users, generating speaker vectors for a new user who has not been learned and the plurality of users who have already been learned by using a speaker recognition model, determining a speaker vector having the most similar relationship with the speaker vector of the new user according to preset criteria out of the speaker vectors of the plurality of users who have already been learned, and generating and learning a speaker embedding of the new user by subjecting the synthetic artificial neural network of the speech synthesis model to learning, by using a value of a speaker embedding of a user for the determined speaker vector as an initial value and based on speaker data of the new user.
Type: Grant
Filed: August 4, 2021
Date of Patent: February 20, 2024
Assignee: IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
Inventors: Joon Hyuk Chang, Jae Uk Lee
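The step of selecting the already-learned speaker vector "having the most similar relationship" with the new user's vector could look like the sketch below. The abstract only says "preset criteria"; using cosine similarity as that criterion is an assumption, as are the speaker ids and two-dimensional vectors.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two speaker vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def most_similar_speaker(new_vec, known):
    """Return the id of the learned speaker whose vector is closest to new_vec."""
    return max(known, key=lambda sid: cosine_similarity(new_vec, known[sid]))
```

In the patented method, the chosen speaker's embedding then serves as the initial value when the synthesis network is fine-tuned on the new user's data.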
-
Patent number: 11893354
Abstract: The present invention provides for improving a training dataset by identifying errors in the training dataset and generating improvement recommendations. In operation, the present invention provides for identifying and correcting duplicate utterances in a training dataset comprising utterance-intent pairs. Further, a plurality of Natural Language ML models are trained with the corrected training dataset to obtain a diverse set of trained ML models. Each utterance of the training dataset is fed as input to the trained ML models, and a probability of error associated with each utterance-intent pair of the training dataset is evaluated based on analysis of the respective intent predictions received from each of the trained ML models. Furthermore, spelling errors in the dataset are identified and data imbalances in the training dataset are evaluated.
Type: Grant
Filed: June 15, 2021
Date of Patent: February 6, 2024
Assignee: COGNIZANT TECHNOLOGY SOLUTIONS INDIA PVT. LTD.
Inventors: Jithu R Jacob, Siddhartha Das
-
Patent number: 11894001
Abstract: A multi-channel signal encoding method includes determining a downmixed signal of a first channel signal and a second channel signal, determining an initial reverberation gain parameter of the first channel signal and the second channel signal, determining a target reverberation gain parameter of the first channel signal and the second channel signal based on a correlation between the first channel signal and the downmixed signal, a correlation between the second channel signal and the downmixed signal, and the initial reverberation gain parameter, quantizing the first channel signal and the second channel signal based on the downmixed signal and the target reverberation gain parameter, and writing a quantized first channel signal and a quantized second channel signal into a bitstream.
Type: Grant
Filed: June 10, 2022
Date of Patent: February 6, 2024
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Zexin Liu, Lei Miao
-
Patent number: 11887622
Abstract: The present disclosure generally relates to a system and method for obtaining a diagnosis of a mental health condition. An exemplary system can receive an audio input; convert the audio input into a text string; identify a speaker associated with the text string; based on at least a portion of the audio input, determine a predefined audio characteristic of a plurality of predefined audio characteristics; based on the determined audio characteristic, identify an emotion corresponding to the portion of the audio input; generate a set of structured data based on the text string, the speaker, the predefined audio characteristic, and the identified emotion; and provide an output for obtaining the diagnosis of the mental disorder or condition, wherein the output is indicative of at least a portion of the set of structured data.
Type: Grant
Filed: September 12, 2019
Date of Patent: January 30, 2024
Assignee: United States Department of Veteran Affairs
Inventors: Qian Hu, Brian P. Marx, Patricia D. King, Seth-David Donald Dworman, Matthew E. Coarr, Keith A. Crouch, Stelios Melachrinoudis, Cheryl Clark, Terence M. Keane
-
Patent number: 11869493
Abstract: Embodiments of the disclosure provide methods and apparatuses for processing audio data. The method can include: acquiring audio data by an audio capturing device, determining feature information of an enclosure in which the audio capturing device is located, and reverberating the feature information into the audio data.
Type: Grant
Filed: November 15, 2022
Date of Patent: January 9, 2024
Assignee: Alibaba Group Holding Limited
Inventors: Shaofei Xue, Biao Tian
-
Patent number: 11861317
Abstract: Human-machine dialog is characterized by receiving data comprising a recording of an individual interacting with a dialog application simulating a conversation. Thereafter, the received data is parsed using automated speech recognition to result in text comprising a plurality of words. Features are extracted from the parsed data and then input into an ensemble of different machine learning models, each trained to generate a score characterizing a plurality of different dialog constructs. Thereafter, scores generated by the machine learning models for each of the dialog constructs are fused. A performance score is then generated based on the fused scores which characterizes a conversational proficiency of the individual interacting with the dialog application. Data can then be provided which includes or otherwise characterizes the generated score. Related apparatus, systems, techniques and articles are also described.
Type: Grant
Filed: April 30, 2021
Date of Patent: January 2, 2024
Assignee: Educational Testing Service
Inventors: Vikram Ramanarayanan, Matthew Mulholland, Debanjan Ghosh
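The fusion step in this abstract could be as simple as averaging each construct's score across the ensemble, then collapsing the fused scores into one proficiency score. The plain mean used here (and the construct names) are assumptions for illustration; the patent does not fix a particular fusion rule.

```python
def fuse_scores(model_scores):
    """Average each dialog construct's score across all models in the ensemble.

    model_scores: list of dicts, one per model, mapping construct name -> score.
    """
    constructs = model_scores[0].keys()
    n = len(model_scores)
    return {c: sum(m[c] for m in model_scores) / n for c in constructs}

def performance_score(fused):
    """Collapse the fused per-construct scores into a single proficiency score."""
    return sum(fused.values()) / len(fused)
```

With two models scoring two hypothetical constructs, fusion averages per construct before the overall score is computed.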
-
Patent number: 11854571
Abstract: Apparatuses and methods of transmitting and receiving a speech signal. The method of transmitting a speech signal includes extracting low frequency feature information from an input speech signal by using a first feature extracting network; and transmitting a speech signal corresponding to the low frequency feature information to a receiving end. The method of receiving a speech signal includes receiving a first speech signal transmitted by a transmitting end; extracting low frequency feature information from the first speech signal and recovering high frequency feature information based on the low frequency feature information, by using a second feature extracting network; and outputting a second speech signal including the low frequency feature information and the high frequency feature information.
Type: Grant
Filed: November 27, 2020
Date of Patent: December 26, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Liang Wen, Lizhong Wang, Anxi Yi, Chao Min, Liangxi Yao
-
Patent number: 11842747
Abstract: An example system includes a processor to receive a data set and similarity scores. The processor is to execute an eigen response analysis on eigenvectors calculated for a similarity matrix generated based on the similarity scores for the data set. The processor is to output an estimated number of clusters in the data set based on the eigen response analysis.
Type: Grant
Filed: October 22, 2021
Date of Patent: December 12, 2023
Assignee: International Business Machines Corporation
Inventor: Hagai Aronowitz
-
Patent number: 11837244
Abstract: An analysis filter bank corresponding to multiple sub-bands, which performs frequency-division filtering on an input signal to generate multiple sub-band signals, the analysis filter bank comprising: a sub-band response pre-compensator which performs a linear filtering on the input signal to generate a response pre-compensated signal, multiple sub-filters with different central frequencies, which perform complex-type first-order infinite impulse response filtering respectively on the response pre-compensated signal to generate multiple sub-filter signals, and multiple binomially-combining and rotating devices based on a set of binomial weights, each of which performs a weighted summation on at least two of the sub-filter signals with the set of binomial weights, and rotates a weighted-summation result with a rotating phase according to a corresponding sub-band central frequency to generate one of the sub-band signals, wherein the at least two of the sub-filter signals are generated by at least two of the sub-
Type: Grant
Filed: March 29, 2021
Date of Patent: December 5, 2023
Assignee: Invictumtech Inc.
Inventor: Ming-Luen Liou
-
Patent number: 11822885
Abstract: Systems and methods for contextual natural language censoring are disclosed. For example, configuration data indicating details associated with content provided by a client device and/or about the client device may be received and may be utilized to determine impermissible and permissible exceptions for a given client. One or more queries may be generated utilizing the impermissible and permissible exceptions, and when input data is received from the client device and/or in association with the client identifier, the queries may be utilized to evaluate the input data for impermissible and permissible exceptions. The results may be filtered based on user preferences, the input data may be censored, the input data may be prevented from being exposed to a user device, the application associated with the input data may be removed from availability, and/or a maturity setting may be changed for the application, for example.
Type: Grant
Filed: June 3, 2019
Date of Patent: November 21, 2023
Assignee: Amazon Technologies, Inc.
Inventors: Arunachalam Sundararaman, Mukul Aggarwal, Rajesh Ravindran Nandyaleth, Ankur Gupta, Dilip Sridhar
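The abstract's pattern of a client-specific impermissible list with permissible exceptions can be sketched as a query that masks matches unless they appear in the exception set. The regex-based "query" and the example word lists are assumptions, not the patented implementation.

```python
import re

def build_query(impermissible, permissible_exceptions):
    """Compile the impermissible terms into one case-insensitive whole-word pattern,
    paired with the client's set of permissible exceptions."""
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, impermissible)) + r")\b",
        re.IGNORECASE,
    )
    return pattern, {w.lower() for w in permissible_exceptions}

def censor(text, query):
    """Mask impermissible matches unless they fall in the permissible-exception set."""
    pattern, exceptions = query
    def mask(match):
        word = match.group(0)
        return word if word.lower() in exceptions else "*" * len(word)
    return pattern.sub(mask, text)
```

A client whose configuration marks "shoot" as permissible (say, a sports application) keeps that word while other listed terms are masked.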
-
Patent number: 11810548
Abstract: A speech translation method using a multilingual text-to-speech synthesis model includes acquiring a single artificial neural network text-to-speech synthesis model having acquired learning based on a learning text of a first language and learning speech data of the first language corresponding to the learning text of the first language, and a learning text of a second language and learning speech data of the second language corresponding to the learning text of the second language, receiving input speech data of the first language and an articulatory feature of a speaker regarding the first language, converting the input speech data of the first language into a text of the first language, converting the text of the first language into a text of the second language, and generating output speech data for the text of the second language that simulates the speaker's speech.
Type: Grant
Filed: July 10, 2020
Date of Patent: November 7, 2023
Assignee: NEOSAPIENCE, INC.
Inventors: Taesu Kim, Younggun Lee
-
Patent number: 11797762
Abstract: A computer-implemented method for detecting coordinated propagation of social media content may include calculating, by a computing device, a content similarity score for each social media post in relation to other social media posts in a set of social media posts. The method may also include identifying a related subset of social media posts based on the content similarity score. Additionally, the method may include detecting one or more clusters of social media posts in the related subset by clustering social media posts based on content similarity scores and timing. Furthermore, the method may include determining that a user account associated with a social media post in a detected cluster is in a coordinated network of user accounts. Finally, the method may include performing a security action in response to determining that the user account is in the coordinated network. Various other methods, systems, and computer-readable media are also disclosed.
Type: Grant
Filed: October 30, 2020
Date of Patent: October 24, 2023
Assignee: GEN DIGITAL INC.
Inventor: Daniel Kats
-
Patent number: 11785141
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for determining a transfer option for transferring a call. One of the methods includes receiving, by a call assistant engine, a keyword related to information provided by a user to an agent during a call; generating, by the call assistant engine, follow-up questions to be displayed on a user device of the agent in an interactive format, the first follow-up question being generated based on the keyword, each of the following follow-up questions being generated based on an answer of the agent to the previous question; and determining, by the call assistant engine, based on answers of the agent to the follow-up questions, a transfer option for transferring the call.
Type: Grant
Filed: April 14, 2022
Date of Patent: October 10, 2023
Assignee: United Services Automobile Association (USAA)
Inventors: Philip Ryan Jensen, Everett Russell Freeman James, James Shamlin, Sheryl Lane Niemann, Shanna Limas, Samir Hojat
-
Patent number: 11748566
Abstract: Embodiments are disclosed for automatically evaluating records. In the context of a method, an example embodiment includes receiving a set of text produced from a record, identifying, by block manipulation circuitry and from the set of text, one or more blocks of text that are related to a potential conclusion regarding the set of text, and extracting, by the block manipulation circuitry, the one or more blocks of text. The example method further includes concatenating, by the block manipulation circuitry, the extracted one or more blocks into a sequence of words, inputting the sequence of words into a machine learning model, and, in response to inputting the sequence of words into the machine learning model, producing, using the machine learning model, an indication of whether the potential conclusion regarding the record is supported by the sequence of words. Corresponding apparatuses and computer program products are also provided.
Type: Grant
Filed: December 7, 2018
Date of Patent: September 5, 2023
Assignee: Change Healthcare Holdings, LLC
Inventors: Adrian Lam, Bradley Strauss, John Tornblad, Adam Sullivan, Nick Giannasi
-
Patent number: 11721357
Abstract: A non-transitory computer-readable recording medium having stored therein a program that causes a computer to execute a procedure, the procedure includes detecting a plurality of voice sections from an input sound that includes voices of a plurality of speakers, calculating a feature amount of each of the plurality of voice sections, determining a plurality of emotions, corresponding to the plurality of voice sections respectively, of a speaker of the plurality of speakers for each of the plurality of voice sections, and clustering a plurality of feature amounts, based on a change vector from the feature amount of the voice section determined as a first emotion of the plurality of emotions of the speaker to the feature amount of the voice section determined as a second emotion of the plurality of emotions different from the first emotion.
Type: Grant
Filed: January 14, 2020
Date of Patent: August 8, 2023
Assignee: FUJITSU LIMITED
Inventors: Taro Togawa, Sayuri Nakayama, Jun Takahashi, Kiyonori Morioka
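The change vector this abstract clusters on, from a speaker's first-emotion features to their second-emotion features, can be illustrated as follows. Averaging each emotion's feature vectors before taking the difference is an assumption here, as are the two-dimensional toy features.

```python
def mean_vector(vectors):
    """Elementwise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def change_vector(first_emotion_feats, second_emotion_feats):
    """Vector from the mean features of the sections labeled with the first
    emotion to the mean features of the sections labeled with the second."""
    start = mean_vector(first_emotion_feats)
    end = mean_vector(second_emotion_feats)
    return [b - a for a, b in zip(start, end)]
```

Per the abstract, one such vector is computed per speaker, and the collection of feature amounts is then clustered based on these vectors rather than on raw features alone.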