Patents Examined by Pierre-Louis Desir
  • Patent number: 12293155
    Abstract: A method includes receiving a training set of utterances for training a machine-learning model to identify one or more intents for one or more utterances, and augmenting the training set of utterances with out-of-domain (OOD) examples. The augmenting includes: generating a data set of OOD examples, filtering out OOD examples from the data set of OOD examples, determining a difficulty value for each OOD example remaining within the filtered data set of OOD examples, and generating augmented batches of utterances including utterances from the training set of utterances and utterances from the filtered data set of OOD examples based on the difficulty value for each OOD example. Thereafter, the machine-learning model is trained using the augmented batches of utterances in accordance with a curriculum training protocol.
    Type: Grant
    Filed: April 9, 2024
    Date of Patent: May 6, 2025
    Assignee: Oracle International Corporation
    Inventors: Elias Luqman Jalaluddin, Vishal Vishnoi, Thanh Long Duong, Mark Edward Johnson, Poorya Zaremoodi, Gautam Singaraju, Ying Xu, Vladislav Blinov, Yu-Heng Hong
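The curriculum step described in this abstract can be sketched as follows. This is a minimal illustration, not the patented method: the function name `curriculum_batches`, its parameters, and the easiest-first schedule are all assumptions for illustration.

```python
def curriculum_batches(in_domain, ood_examples, difficulty,
                       batch_size=4, ood_per_batch=1):
    """Build augmented batches mixing in-domain utterances with OOD
    examples, introducing the easiest OOD examples first (curriculum)."""
    # Sort OOD examples by difficulty so early batches get the easiest ones.
    ood_sorted = sorted(ood_examples, key=lambda u: difficulty[u])
    in_per_batch = batch_size - ood_per_batch
    batches, ood_idx = [], 0
    for start in range(0, len(in_domain), in_per_batch):
        batch = list(in_domain[start:start + in_per_batch])
        # Append the next-easiest OOD examples not yet used.
        take = ood_sorted[ood_idx:ood_idx + ood_per_batch]
        batch.extend(take)
        ood_idx += len(take)
        batches.append(batch)
    return batches
```

A real curriculum protocol would typically also re-weight or re-sample OOD examples as training progresses; the sketch only fixes the batch composition once.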
  • Patent number: 12282741
    Abstract: Provided is an information processing system that includes a user information storage unit that stores user information related to a user, a personality information storage unit that stores personality information defining a personality unique to an agent, and a control unit that controls an externalized behavior which is a behavior externalized by the agent, based on the personality information.
    Type: Grant
    Filed: April 7, 2020
    Date of Patent: April 22, 2025
    Assignee: SONY GROUP CORPORATION
    Inventors: Itaru Shimizu, Shouichi Doi, Tomoko Kouno
  • Patent number: 12277939
    Abstract: A method includes receiving, by a first encoder, an original speech segment, receiving, by a second encoder, an augmented speech segment of the original speech segment, generating, by the first encoder, a first speaker representation based on the original speech segment, generating, by the second encoder, a second speaker representation based on the augmented speech segment, and generating a contrastive loss based on the first speaker representation and the second speaker representation.
    Type: Grant
    Filed: May 2, 2022
    Date of Patent: April 15, 2025
    Assignee: TENCENT AMERICA LLC
    Inventors: Chunlei Zhang, Dong Yu
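A contrastive objective over the two speaker representations can be illustrated with an InfoNCE-style loss. The function names, the choice of cosine similarity, and the temperature value here are assumptions for illustration, not details taken from the patent.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors given as lists of floats."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss: pull the representation of the
    augmented segment toward the original speaker representation and
    push it away from representations of other segments."""
    pos = math.exp(cosine(anchor, positive) / temperature)
    neg = sum(math.exp(cosine(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + neg))
```

The loss is near zero when anchor and positive align and large when a negative aligns instead, which is the gradient signal a contrastive speaker encoder needs.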
  • Patent number: 12277933
    Abstract: A computer-implemented method can include: an audio input device of a portable electronic device receiving verbal speech input from a user and converting the received verbal speech input into an audio input signal; an online processing module of the portable electronic device performing at least one speech recognition operation on the audio input signal; an offline processing module of the portable electronic device performing at least one speech recognition operation on the audio input signal; an interactive game module of the portable electronic device generating user feedback based on results from the at least one speech recognition operation performed by the online processing module and the at least one speech recognition operation by the offline processing module; and a user interface of the portable electronic device providing the user feedback to the user.
    Type: Grant
    Filed: January 24, 2022
    Date of Patent: April 15, 2025
    Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventor: Jared Duval
  • Patent number: 12277926
    Abstract: An intelligent medical speech automatic recognition method includes performing a first model training step, a second model training step, a voice receiving step, a signal pre-treatment step and a transforming step. The first model training step is performed to train generic statement data and medical statement data of a database to establish a first model. The second model training step is performed to train medical textbook data of the database to establish a second model. The voice receiving step is performed to receive a speech signal. The signal pre-treatment step is performed to receive the speech signal from the voice receiver and transform the speech signal into a to-be-recognized speech signal. The transforming step is performed to transform and recognize the to-be-recognized speech signal into complete written sentence text according to the first model and the second model.
    Type: Grant
    Filed: September 29, 2021
    Date of Patent: April 15, 2025
    Assignee: China Medical University
    Inventors: Der-Yang Cho, Kai-Cheng Hsu, Ya-Lun Wu, Kai-Ching Chen
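One plausible way to combine the two trained models at recognition time is score interpolation over candidate transcriptions. This sketch, including the name `interpolate_scores` and the toy unigram-probability models, is an illustrative assumption rather than the patented transforming step.

```python
import math

def interpolate_scores(candidates, first_model, second_model, weight=0.5):
    """Score candidate transcriptions under two models (e.g. a statement
    model and a medical-textbook model) and return the best candidate.
    Each model maps a word to a probability; unseen words get a floor."""
    def logprob(model, words):
        return sum(math.log(model.get(w, 1e-6)) for w in words)
    return max(candidates,
               key=lambda c: weight * logprob(first_model, c.split())
                             + (1 - weight) * logprob(second_model, c.split()))
```

The medical-textbook model penalizes out-of-vocabulary medical homophones, which is the intuition behind using two models.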
  • Patent number: 12277149
    Abstract: A method of controlling an electronic device includes, based on obtaining first text information corresponding to a user query, identifying a main query corresponding to the first text information; obtaining a plurality of responses related to the main query; identifying a plurality of phrases included in the first text information; identifying at least one first phrase corresponding to the main query among the plurality of phrases based on a similarity between the plurality of phrases and the main query; identifying, among the plurality of responses, at least one first response corresponding to a remaining second phrase, among the plurality of phrases except the at least one first phrase, based on a similarity between each of the plurality of responses and the remaining second phrase; and providing second text information corresponding to a remaining second response other than the at least one first response among the plurality of responses.
    Type: Grant
    Filed: April 4, 2022
    Date of Patent: April 15, 2025
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jaehyung An, Hoyoung Kim, Dongil Yang, Jiyeon Lee, Cheolseung Jung
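The similarity-based response filtering can be sketched with a toy token-overlap measure standing in for the learned similarity. The function `filter_responses`, the Jaccard measure, and the threshold are illustrative assumptions.

```python
def jaccard(a, b):
    """Token-overlap similarity between two strings (toy stand-in)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def filter_responses(main_query, phrases, responses, threshold=0.3):
    """Identify phrases that match the main query, mark responses that
    correspond to the remaining phrases, and provide the rest."""
    first_phrases = [p for p in phrases if jaccard(p, main_query) >= threshold]
    remaining = [p for p in phrases if p not in first_phrases]
    # Responses similar to a remaining phrase are the "first responses".
    first_responses = {r for r in responses
                       for p in remaining if jaccard(r, p) >= threshold}
    # Provide the remaining second responses.
    return [r for r in responses if r not in first_responses]
```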
  • Patent number: 12271691
    Abstract: Systems may perform analyses of claims included in a patent document. The systems may generate one or more search strings from the patent document and provide the one or more search strings to a third-party searching authority. The third-party searching authority may return a collection of documents responsive to the one or more search strings. In particular situations, the systems may re-rank the documents of the collection to provide a patent centric ranking. The systems may also analyze the documents of the collection with respect to the elements of the claims to generate various types of patent infringement and/or invalidity reports.
    Type: Grant
    Filed: March 11, 2024
    Date of Patent: April 8, 2025
    Assignee: Moat Metrics, Inc.
    Inventor: Lewis C. Lee
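The patent-centric re-ranking can be illustrated by ordering the returned documents by how many claim elements each mentions. The function `rerank` and its substring-match coverage measure are assumptions for illustration only.

```python
def rerank(documents, claim_elements):
    """Re-rank retrieved documents patent-centrically: documents that
    mention more of the claim's elements move to the front; the original
    (third-party) ranking is preserved among ties, since sorted() is stable."""
    def coverage(doc):
        text = doc.lower()
        return sum(el.lower() in text for el in claim_elements)
    return sorted(documents, key=lambda d: -coverage(d))
```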
  • Patent number: 12272352
    Abstract: Various embodiments relate to an electronic device and a voice recognition performing method of an electronic device which are capable of receiving a voice input of a user and executing a function corresponding to a user command generated by the voice input.
    Type: Grant
    Filed: November 1, 2021
    Date of Patent: April 8, 2025
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Ojun Kwon, Hyunjin Park, Kiyong Lee, Yoonju Lee, Jisup Lee, Jaeyung Yeo
  • Patent number: 12272348
    Abstract: A method for speech conversion includes receiving, as input to an encoder of a speech conversion model, an input spectrogram corresponding to an utterance, the encoder including a stack of self-attention blocks. The method further includes generating, as output from the encoder, an encoded spectrogram and receiving, as input to a spectrogram decoder of the speech conversion model, the encoded spectrogram generated as output from the encoder. The method further includes generating, as output from the spectrogram decoder, an output spectrogram corresponding to a synthesized speech representation of the utterance.
    Type: Grant
    Filed: March 16, 2022
    Date of Patent: April 8, 2025
    Assignee: Google LLC
    Inventors: Bhuvana Ramabhadran, Zhehuai Chen, Fadi Biadsy, Pedro J. Moreno Mengibar
  • Patent number: 12272355
    Abstract: A system for improving conversational skills using a virtual speech agent is disclosed, including a virtual speech agent that executes a phone call between the virtual speech agent and a user. The virtual speech agent and user engage in a back-and-forth conversation, and the virtual speech agent generates a summary and a feedback report in view of the conversation.
    Type: Grant
    Filed: April 20, 2022
    Date of Patent: April 8, 2025
    Assignee: ConverzAI, Inc.
    Inventor: Ashwarya Poddar
  • Patent number: 12260184
    Abstract: A translation device includes a storage unit configured to store a plurality of pieces of learning data, a normalized sentence learning unit configured to perform learning on the plurality of pieces of learning data by combining original text for learning and a corresponding normalized sentence for learning, a translated sentence learning unit configured to perform learning on the plurality of pieces of learning data by combining the original text for learning and a corresponding translated sentence for learning, and a model generation unit configured to generate one normalization/translation model on the basis of a result of learning by the normalized sentence learning unit and the translated sentence learning unit, in which, on at least a part of the learning data, the translated sentence learning unit performs learning after the normalized sentence learning unit performs learning.
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: March 25, 2025
    Assignee: NTT DOCOMO, INC.
    Inventors: Toshimitsu Nakamura, Noritaka Okamoto, Wataru Uchida, Yoshinori Isoda
  • Patent number: 12248753
    Abstract: Included are a method and an apparatus comprising computer code configured to cause one or more processors to perform: generating one or more aligned inventories, wherein the one or more aligned inventories are generated using one or more word sense inventories; obtaining a word in a context sentence; determining one or more semantic equivalence scores indicating semantic similarity between the word in the context sentence and each of one or more associated glosses in the one or more aligned inventories using a semantic equivalence recognizer model; and predicting a correct sense of the word in the context sentence based on the determined one or more semantic equivalence scores.
    Type: Grant
    Filed: October 22, 2021
    Date of Patent: March 11, 2025
    Assignee: TENCENT AMERICA LLC
    Inventors: Wenlin Yao, Xiaoman Pan, Lifeng Jin, Jianshu Chen, Dian Yu, Dong Yu
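Predicting the correct sense from the semantic equivalence scores reduces to an argmax over glosses. The function `predict_sense` and the toy word-overlap scorer below are illustrative stand-ins for the semantic equivalence recognizer model.

```python
def predict_sense(word_in_context, glosses, scorer):
    """Predict the sense whose gloss the scorer judges most semantically
    equivalent to the word in its context sentence."""
    scores = {sense: scorer(word_in_context, gloss)
              for sense, gloss in glosses.items()}
    return max(scores, key=scores.get)

def overlap_scorer(context, gloss):
    """Toy scorer: shared-word count between context and gloss."""
    return len(set(context.lower().split()) & set(gloss.lower().split()))
```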
  • Patent number: 12249320
    Abstract: A stable evaluation result is obtained from a voice of speech for any sentence. A speech evaluation device (1) outputs a score for evaluating speech of an input voice signal spoken by a speaker in a first group. A feature extraction unit (11) extracts an acoustic feature from the input voice signal. A conversion unit (12) converts the acoustic feature of the input voice signal to an acoustic feature when a speaker in a second group speaks the same text as text of the input voice signal. An evaluation unit (13) calculates a score indicating a higher evaluation as a distance between the acoustic feature before the conversion and the acoustic feature after the conversion becomes shorter.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: March 11, 2025
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventor: Sadao Hiroya
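The evaluation step, in which a shorter distance between the pre- and post-conversion acoustic features yields a higher score, can be sketched as follows. The Euclidean distance and the exp(-d) mapping are assumptions; the abstract does not fix a particular formula.

```python
import math

def evaluation_score(features_before, features_after):
    """Map the distance between acoustic features before and after
    conversion to a score: the shorter the distance, the higher the
    evaluation (here via exp(-d), which is 1.0 at zero distance)."""
    d = math.sqrt(sum((a - b) ** 2
                      for a, b in zip(features_before, features_after)))
    return math.exp(-d)
```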
  • Patent number: 12242977
    Abstract: This disclosure relates to the extraction of tasks from documents based on a weakly supervised classification technique, where task extraction means identifying mentions of tasks in a document. Several prior works address the problem of event extraction; however, due to crucial distinctions between events and tasks, task extraction stands as a separate problem. The disclosure explicitly defines specific characteristics of tasks and creates word-level labelled data based on a plurality of linguistic rules to train a word-level weakly supervised model for task extraction. The labelled data is created based on the plurality of linguistic rules for a non-negation aspect, a volitionality aspect, an expertise aspect and a plurality of generic aspects. The disclosure also includes a phrase expansion technique to capture the complete meaning expressed by the task, rather than merely the task mention, which may not capture the entire meaning of the sentence.
    Type: Grant
    Filed: July 15, 2022
    Date of Patent: March 4, 2025
    Assignee: Tata Consultancy Services Limited
    Inventors: Sachin Sharad Pawar, Girish Keshav Palshikar, Anindita Sinha Banerjee
  • Patent number: 12236189
    Abstract: Systems and methods are directed to providing personalized text proofing. A user model that is used to personalize generic critiques for text proofing a document is generated based on user signals indicating past user actions. During runtime of an application used to create the document, the user model is accessed and locally cached. User inputs comprising typed components used to create the document are received and a set of one or more generic critiques for the user inputs is accessed from an editor system. The user model is applied to the set which may modify a generic critique of the set. The modifying of the generic critique can cause the generic critique to be automatically applied or suppressed at the client device. The set including the modified generic critique is transmitted to a user device, whereby the user device applies the set to the document including automatically applying or suppressing the modified generic critique.
    Type: Grant
    Filed: October 15, 2021
    Date of Patent: February 25, 2025
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: James Aidan Cogley, Enrico Cadoni, Colin Laird, Shashank Shekhar Gupta, Olivier Gauthier
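Applying the user model to modify generic critiques can be sketched as thresholding a user's past accept/dismiss counts. The function `apply_user_model`, the (accepted, dismissed) representation, and the 80% thresholds are illustrative assumptions.

```python
def apply_user_model(critiques, user_model):
    """Annotate generic critiques with a per-user action: auto-apply
    critique types this user historically accepts, suppress the ones
    they dismiss, and leave the rest to be shown normally."""
    out = []
    for critique in critiques:
        accepted, dismissed = user_model.get(critique, (0, 0))
        total = accepted + dismissed
        if total >= 3 and accepted / total >= 0.8:
            action = "auto-apply"
        elif total >= 3 and dismissed / total >= 0.8:
            action = "suppress"
        else:
            action = "show"   # not enough signal to personalize
        out.append((critique, action))
    return out
```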
  • Patent number: 12230261
    Abstract: A method for controlling an electronic device is provided. The method includes identifying one or more user interface (UI) elements displayed on a screen of the electronic device, determining at least one characteristic of the one or more identified UI elements, generating a database based on the at least one characteristic, where the database comprises predicted natural-language (NL) utterances for the one or more identified UI elements, the NL utterances being predicted based on the at least one characteristic, receiving a voice input of a user of the electronic device, where the voice input comprises an utterance indicative of the at least one characteristic of the one or more identified UI elements present in the database, and automatically accessing a UI element of the one or more UI elements in response to determining that the utterance of the received voice input matches the predicted NL utterances of the one or more identified UI elements.
    Type: Grant
    Filed: October 27, 2021
    Date of Patent: February 18, 2025
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Ranjan Kumar Samal, Praveen Kumar Guvvakallu Sivamoorthy, Purushothama Chowdari Gonuguntla, Rituraj Laxminarayan Kabra, Manjunath Belgod Lokanath
  • Patent number: 12223955
    Abstract: Implementations described herein relate to causing certain reasoning with respect to why an automated assistant performed (or did not perform) certain fulfillment and/or alternate fulfillment of an assistant command. For example, implementations can receive user input that includes the assistant command, process the user input to determine data to be utilized in performance of the certain fulfillment or the alternate fulfillment of the assistant command, and cause the automated assistant to utilize the data to perform the certain fulfillment or the alternate fulfillment of the assistant command. In some implementations, output that includes the certain reasoning can be provided for presentation to a user in response to additional user input that requests the certain reasoning. In some implementations, a selectable element can be visually rendered and, when selected by the user, the output that includes the certain reasoning can be provided for presentation to the user.
    Type: Grant
    Filed: November 22, 2021
    Date of Patent: February 11, 2025
    Assignee: GOOGLE LLC
    Inventors: Felix Weissenberger, Alexander Froemmgen, Bogdan Prisacari
  • Patent number: 12223274
    Abstract: A relational similarity determination engine receives as input a dataset including a set of entities and co-occurrence data that defines co-occurrence relations for pairs of the entities. The relational similarity determination engine also receives as input side information defining explicit relations between the entities. The relational similarity determination engine jointly models the co-occurrence relations and the explicit relations for the entities to compute a similarity metric for each different pair of entities within the dataset. Based on the computed similarity metrics, the relational similarity determination engine identifies a most similar replacement entity from the dataset for each of the entities within the dataset. For a select entity received as an input, the relational similarity determination engine outputs the identified most similar replacement entity.
    Type: Grant
    Filed: October 29, 2021
    Date of Patent: February 11, 2025
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Oren Barkan, Avi Caciularu, Idan Rejwan, Yonathan Weill, Noam Koenigstein, Ori Katz, Itzik Malkiel, Nir Nice
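Jointly using co-occurrence relations and explicit side-information relations to pick a most similar replacement can be sketched as a weighted combination. The function `most_similar_replacement`, the linear mixing weight `alpha`, and the indicator treatment of explicit relations are assumptions for illustration; the patent describes joint modeling, not this specific formula.

```python
def most_similar_replacement(entities, cooccur, explicit, alpha=0.5):
    """Combine normalized co-occurrence counts with an explicit-relation
    indicator from side information, then pick, for each entity, the
    other entity with the highest combined similarity."""
    max_co = max(cooccur.values(), default=1)
    def sim(a, b):
        co = cooccur.get((a, b), cooccur.get((b, a), 0)) / max_co
        ex = 1.0 if (a, b) in explicit or (b, a) in explicit else 0.0
        return alpha * co + (1 - alpha) * ex
    return {e: max((o for o in entities if o != e), key=lambda o: sim(e, o))
            for e in entities}
```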
  • Patent number: 12223944
    Abstract: Implementations relate to dynamically adapting a given assistant output based on a given persona, from among a plurality of disparate personas, assigned to an automated assistant. In some implementations, the given assistant output can be generated and subsequently adapted based on the given persona assigned to the automated assistant. In other implementations, the given assistant output can be generated specific to the given persona and without having to subsequently adapt the given assistant output to the given persona. Notably, the given assistant output can include a stream of textual content to be synthesized for audible presentation to the user, and a stream of visual cues utilized in controlling a display of a client device and/or in controlling a visualized representation of the automated assistant. Various implementations utilize large language models (LLMs), or output previously generated utilizing LLMs, to reflect the given persona in the given assistant output.
    Type: Grant
    Filed: May 13, 2022
    Date of Patent: February 11, 2025
    Assignee: GOOGLE LLC
    Inventors: Martin Baeuml, Thushan Amarasiriwardena, Roberto Pieraccini, Gianluca Martini
  • Patent number: 12223975
    Abstract: The present disclosure provides systems and methods for determining a background noise level. The device may receive audio from two or more microphones. The audio may include a first signal and a second signal, such that each microphone receives its own signal. The time, loudness, and frequency of the first and second signals may be compared to determine the source of the audio, such as whether the audio is the user's voice or background noise. Based on the source of the audio, the audio may be suppressed to reduce false estimations when calculating the background noise level.
    Type: Grant
    Filed: September 15, 2020
    Date of Patent: February 11, 2025
    Assignee: Google LLC
    Inventors: Jae Lee, Priya Kasirajan, Leng Ooi
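Comparing loudness across the two microphones to separate the user's voice from background noise can be sketched via per-frame RMS levels. The function `classify_frame` and the fixed level ratio are illustrative assumptions; the patent also considers time and frequency cues, which are omitted here.

```python
def classify_frame(mic_near, mic_far, level_ratio=2.0):
    """Classify an audio frame as the user's voice or background noise.
    The user's mouth is much closer to one microphone, so voice arrives
    noticeably louder there, while distant background noise reaches both
    microphones at similar loudness."""
    def rms(signal):
        return (sum(s * s for s in signal) / len(signal)) ** 0.5
    near, far = rms(mic_near), rms(mic_far)
    return "voice" if near > level_ratio * far else "noise"
```

Frames labeled "voice" would then be excluded from the background noise estimate, reducing false estimations.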