Patents Examined by Pierre-Louis Desir
  • Patent number: 12272352
    Abstract: Various embodiments relate to an electronic device, and a method of performing voice recognition in an electronic device, capable of receiving a voice input of a user and executing a function corresponding to a user command generated from the voice input.
    Type: Grant
    Filed: November 1, 2021
    Date of Patent: April 8, 2025
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Ojun Kwon, Hyunjin Park, Kiyong Lee, Yoonju Lee, Jisup Lee, Jaeyung Yeo
  • Patent number: 12260184
    Abstract: A translation device includes a storage unit configured to store a plurality of pieces of learning data, a normalized sentence learning unit configured to perform learning on the plurality of pieces of learning data by combining original text for learning and a corresponding normalized sentence for learning, a translated sentence learning unit configured to perform learning on the plurality of pieces of learning data by combining the original text for learning and a corresponding translated sentence for learning, and a model generation unit configured to generate one normalization/translation model on the basis of a result of learning by the normalized sentence learning unit and the translated sentence learning unit, in which, on at least a part of the learning data, the translated sentence learning unit performs learning after the normalized sentence learning unit performs learning.
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: March 25, 2025
    Assignee: NTT DOCOMO, INC.
    Inventors: Toshimitsu Nakamura, Noritaka Okamoto, Wataru Uchida, Yoshinori Isoda
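    Illustrative sketch (not from the patent): the training order described in the abstract above, where one shared model learns original-to-normalized pairs before original-to-translated pairs for each learning-data item; `model` and `train_step` are hypothetical placeholders.
```python
# Hedged sketch of the learning schedule only; the model object and train_step
# callable below are assumed placeholders, not the patented implementation.
def build_normalization_translation_model(learning_data, model, train_step):
    """learning_data: iterable of (original_text, normalized_sentence, translated_sentence)."""
    for original, normalized, translated in learning_data:
        # Normalized-sentence learning unit: original text paired with its normalized sentence.
        train_step(model, source=original, target=normalized, task="normalize")
        # Translated-sentence learning unit runs after the normalization step,
        # pairing the same original text with its translated sentence.
        train_step(model, source=original, target=translated, task="translate")
    # Both objectives update the same parameters, yielding one normalization/translation model.
    return model
```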
  • Patent number: 12248753
    Abstract: A method and apparatus include computer code configured to cause a processor or processors to perform: generating one or more aligned inventories, wherein the one or more aligned inventories are generated using one or more word sense inventories; obtaining a word in a context sentence; determining one or more semantic equivalence scores indicating semantic similarity between the word in the context sentence and each of one or more associated glosses in the one or more aligned inventories using a semantic equivalence recognizer model; and predicting a correct sense of the word in the context sentence based on the determined one or more semantic equivalence scores.
    Type: Grant
    Filed: October 22, 2021
    Date of Patent: March 11, 2025
    Assignee: TENCENT AMERICA LLC
    Inventors: Wenlin Yao, Xiaoman Pan, Lifeng Jin, Jianshu Chen, Dian Yu, Dong Yu
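    Illustrative sketch (not from the patent): the scoring and prediction steps from the abstract above, with a toy bag-of-words cosine similarity standing in for the trained semantic equivalence recognizer model; the inventory contents and example sentence are invented.
```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def predict_sense(word: str, context: str, aligned_inventory: dict) -> str:
    """aligned_inventory maps sense ids to glosses pooled from the word sense inventories."""
    # The word itself appears inside the context sentence; this toy scorer uses only the context.
    context_bag = Counter(context.lower().split())
    scores = {sense: cosine(context_bag, Counter(gloss.lower().split()))
              for sense, gloss in aligned_inventory.items()}
    return max(scores, key=scores.get)      # sense with the highest semantic-equivalence score

glosses = {"bank.1": "financial institution that accepts deposits",
           "bank.2": "sloping land beside a body of water"}
print(predict_sense("bank", "she sat on the bank of the river and watched the water", glosses))
```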
  • Patent number: 12249320
    Abstract: A stable evaluation result is obtained from spoken audio for any sentence. A speech evaluation device (1) outputs a score for evaluating speech of an input voice signal spoken by a speaker in a first group. A feature extraction unit (11) extracts an acoustic feature from the input voice signal. A conversion unit (12) converts the acoustic feature of the input voice signal into the acoustic feature produced when a speaker in a second group speaks the same text as that of the input voice signal. An evaluation unit (13) calculates a score that indicates a higher evaluation as the distance between the acoustic feature before conversion and the acoustic feature after conversion becomes shorter.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: March 11, 2025
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventor: Sadao Hiroya
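    Illustrative sketch (not from the patent): the evaluation rule from the abstract above, where the score rises as the distance between the pre-conversion and post-conversion acoustic features shrinks; the feature vector, converter, and 1/(1+d) mapping are assumptions.
```python
import numpy as np

def evaluate_speech(input_features: np.ndarray, convert_to_group2) -> float:
    # Conversion unit: map the speaker-group-1 acoustic feature to its group-2 counterpart.
    converted = convert_to_group2(input_features)
    # Evaluation unit: shorter distance between the features -> higher score.
    distance = float(np.linalg.norm(input_features - converted))
    return 1.0 / (1.0 + distance)

# Toy usage: a converter that barely changes the feature gives a near-maximal score.
features = np.random.rand(13)                  # e.g. an MFCC-like feature vector
print(evaluate_speech(features, lambda f: f + 0.01))
```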
  • Patent number: 12242977
    Abstract: This disclosure relates to extraction of tasks from documents based on a weakly supervised classification technique, where extraction of tasks means identification of mentions of tasks in a document. Several prior approaches address the problem of extraction of events; however, due to crucial distinctions between events and tasks, task extraction stands as a separate problem. The disclosure explicitly defines specific characteristics of tasks and creates labelled data at a word level, based on a plurality of linguistic rules, to train a word-level weakly supervised model for task extraction. The labelled data is created based on the plurality of linguistic rules for a non-negation aspect, a volitionality aspect, an expertise aspect, and a plurality of generic aspects. Further, the disclosure also includes a phrase expansion technique to capture the complete meaning expressed by the task, rather than only the task mention, which may not capture the entire meaning of the sentence.
    Type: Grant
    Filed: July 15, 2022
    Date of Patent: March 4, 2025
    Assignee: Tata Consultancy Services Limited
    Inventors: Sachin Sharad Pawar, Girish Keshav Palshikar, Anindita Sinha Banerjee
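    Illustrative sketch (not from the patent): word-level weak labelling driven by simple linguistic rules, echoing the non-negation and volitionality aspects in the abstract above; the keyword lists, window size, and label scheme are invented for the example.
```python
# Toy rule-based weak labeller; the real disclosure uses richer linguistic rules
# and a trained word-level model. These sets and the 3-token window are assumptions.
NEGATION = {"not", "never", "no", "cannot"}
VOLITIONAL_VERBS = {"prepare", "submit", "review", "fix", "deploy", "write"}

def weak_task_labels(sentence: str) -> list[tuple[str, str]]:
    tokens = sentence.lower().split()
    labels = []
    for i, tok in enumerate(tokens):
        # Non-negation aspect: look for a negation cue just before the candidate word.
        negated = any(t in NEGATION for t in tokens[max(0, i - 3):i])
        # Volitionality aspect: the word should denote a deliberate action.
        is_task = tok in VOLITIONAL_VERBS and not negated
        labels.append((tok, "TASK" if is_task else "O"))
    return labels

print(weak_task_labels("Please review the design document and do not deploy today"))
```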
  • Patent number: 12236189
    Abstract: Systems and methods are directed to providing personalized text proofing. A user model that is used to personalize generic critiques for text proofing a document is generated based on user signals indicating past user actions. During runtime of an application used to create the document, the user model is accessed and locally cached. User inputs comprising typed components used to create the document are received and a set of one or more generic critiques for the user inputs is accessed from an editor system. The user model is applied to the set which may modify a generic critique of the set. The modifying of the generic critique can cause the generic critique to be automatically applied or suppressed at the client device. The set including the modified generic critique is transmitted to a user device, whereby the user device applies the set to the document including automatically applying or suppressing the modified generic critique.
    Type: Grant
    Filed: October 15, 2021
    Date of Patent: February 25, 2025
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: James Aidan Cogley, Enrico Cadoni, Colin Laird, Shashank Shekhar Gupta, Olivier Gauthier
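    Illustrative sketch (not from the patent): a cached user model that marks generic critiques for automatic application or suppression based on past acceptance, as the abstract above describes; the data shapes and thresholds are assumptions.
```python
from dataclasses import dataclass, field

@dataclass
class Critique:
    category: str             # e.g. "spelling", "passive_voice"
    suggestion: str
    action: str = "show"      # "show", "auto_apply", or "suppress"

@dataclass
class UserModel:
    accept_rate: dict = field(default_factory=dict)    # category -> past acceptance rate

    def personalize(self, critiques):
        for critique in critiques:
            rate = self.accept_rate.get(critique.category, 0.5)
            if rate > 0.9:
                critique.action = "auto_apply"          # user almost always accepts these
            elif rate < 0.1:
                critique.action = "suppress"            # user almost always rejects these
        return critiques

model = UserModel(accept_rate={"spelling": 0.97, "passive_voice": 0.05})
print(model.personalize([Critique("spelling", "teh -> the"),
                         Critique("passive_voice", "prefer active voice")]))
```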
  • Patent number: 12230261
    Abstract: A method for controlling an electronic device is provided. The method includes identifying one or more user interface (UI) elements displayed on a screen of an electronic device, determining at least one characteristic of the one or more identified UI elements, and generating a database based on the at least one characteristic of the one or more identified UI elements, where the database is used to predict natural language (NL) utterances for the one or more identified UI elements, the NL utterances being predicted based on the at least one characteristic of the one or more identified UI elements. The method further includes receiving a voice input of a user of the electronic device, where the voice input comprises an utterance indicative of the at least one characteristic of the one or more identified UI elements represented in the database, and automatically accessing one or more of the UI elements in response to determining that the utterance of the received voice input matches the predicted NL utterances of the one or more identified UI elements.
    Type: Grant
    Filed: October 27, 2021
    Date of Patent: February 18, 2025
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Ranjan Kumar Samal, Praveen Kumar Guvvakallu Sivamoorthy, Purushothama Chowdari Gonuguntla, Rituraj Laxminarayan Kabra, Manjunath Belgod Lokanath
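    Illustrative sketch (not from the patent): building a table of predicted NL utterances from UI-element characteristics and matching a recognized voice command against it; the element fields, utterance templates, and exact-match rule are assumptions.
```python
# Toy utterance database keyed by predicted phrases; real systems predict far
# richer utterances from each UI element's characteristics.
def build_utterance_db(ui_elements: list[dict]) -> dict[str, str]:
    db = {}
    for element in ui_elements:
        for trait in (element.get("label"), element.get("icon_name"), element.get("tooltip")):
            if trait:
                db[f"tap {trait}".lower()] = element["id"]
                db[f"open {trait}".lower()] = element["id"]
    return db

def resolve_voice_command(utterance: str, db: dict[str, str]) -> str | None:
    # Access the UI element whose predicted utterance matches the spoken command.
    return db.get(utterance.lower())

db = build_utterance_db([{"id": "btn_settings", "label": "Settings", "icon_name": "gear"}])
print(resolve_voice_command("Open settings", db))   # -> "btn_settings"
```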
  • Patent number: 12223944
    Abstract: Implementations relate to dynamically adapting a given assistant output based on a given persona, from among a plurality of disparate personas, assigned to an automated assistant. In some implementations, the given assistant output can be generated and subsequently adapted based on the given persona assigned to the automated assistant. In other implementations, the given assistant output can be generated specific to the given persona and without having to subsequently adapt the given assistant output to the given persona. Notably, the given assistant output can include a stream of textual content to be synthesized for audible presentation to the user, and a stream of visual cues utilized in controlling a display of a client device and/or in controlling a visualized representation of the automated assistant. Various implementations utilize large language models (LLMs), or output previously generated utilizing LLMs, to reflect the given persona in the given assistant output.
    Type: Grant
    Filed: May 13, 2022
    Date of Patent: February 11, 2025
    Assignee: GOOGLE LLC
    Inventors: Martin Baeuml, Thushan Amarasiriwardena, Roberto Pieraccini, Gianluca Martini
  • Patent number: 12223274
    Abstract: A relational similarity determination engine receives as input a dataset including a set of entities and co-occurrence data that defines co-occurrence relations for pairs of the entities. The relational similarity determination engine also receives as input side information defining explicit relations between the entities. The relational similarity determination engine jointly models the co-occurrence relations and the explicit relations for the entities to compute a similarity metric for each different pair of entities within the dataset. Based on the computed similarity metrics, the relational similarity determination engine identifies a most similar replacement entity from the dataset for each of the entities within the dataset. For a select entity received as an input, the relational similarity determination engine outputs the identified most similar replacement entity.
    Type: Grant
    Filed: October 29, 2021
    Date of Patent: February 11, 2025
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Oren Barkan, Avi Caciularu, Idan Rejwan, Yonathan Weill, Noam Koenigstein, Ori Katz, Itzik Malkiel, Nir Nice
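    Illustrative sketch (not from the patent): a joint score over co-occurrence relations and explicit side information, followed by a most-similar-replacement lookup; the cosine similarity, 0.7/0.3 weighting, and toy matrices are assumptions rather than the patented joint model.
```python
import numpy as np

def most_similar_replacements(cooc: np.ndarray, explicit: np.ndarray, alpha: float = 0.7):
    # Co-occurrence relations: cosine similarity between entity co-occurrence rows.
    norms = np.linalg.norm(cooc, axis=1, keepdims=True)
    cos = (cooc / norms) @ (cooc / norms).T
    # Jointly combine with explicit relations from side information (assumed weighting).
    score = alpha * cos + (1 - alpha) * explicit
    np.fill_diagonal(score, -np.inf)                 # an entity cannot replace itself
    return score.argmax(axis=1)                      # most similar replacement per entity

cooc = np.array([[5., 1., 0.], [4., 2., 0.], [0., 0., 3.]])
explicit = np.array([[0., 1., 0.], [1., 0., 0.], [0., 0., 0.]])
print(most_similar_replacements(cooc, explicit))
```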
  • Patent number: 12223955
    Abstract: Implementations described herein relate to causing certain reasoning with respect to why an automated assistant performed (or did not perform) certain fulfillment and/or alternate fulfillment of an assistant command. For example, implementations can receive user input that includes the assistant command, process the user input to determine data to be utilized in performance of the certain fulfillment or the alternate fulfillment of the assistant command, and cause the automated assistant to utilize the data to perform the certain fulfillment or the alternate fulfillment of the assistant command. In some implementations, output that includes the certain reasoning can be provided for presentation to a user in response to additional user input that requests the certain reasoning. In some implementations, a selectable element can be visually rendered and, when selected by the user, the output that includes the certain reasoning can be provided for presentation to the user.
    Type: Grant
    Filed: November 22, 2021
    Date of Patent: February 11, 2025
    Assignee: GOOGLE LLC
    Inventors: Felix Weissenberger, Alexander Froemmgen, Bogdan Prisacari
  • Patent number: 12223975
    Abstract: The present disclosure provides systems and methods for determining a background noise level. The device may receive audio from two or more microphones. The audio may include a first signal and a second signal, such that each microphone receives its own signal. The time, loudness, and frequency of the first and second signals may be compared to determine the source of the audio, such as whether the audio is the user's voice or background noise. Based on the source of the audio, the audio may be suppressed to reduce false estimations when calculating the background noise level.
    Type: Grant
    Filed: September 15, 2020
    Date of Patent: February 11, 2025
    Assignee: Google LLC
    Inventors: Jae Lee, Priya Kasirajan, Leng Ooi
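    Illustrative sketch (not from the patent): comparing arrival time and loudness of the two microphone signals to decide whether audio is user voice or background, then measuring level only on what is kept; the thresholds and correlation-based delay estimate are assumptions.
```python
import numpy as np

def classify_source(signal_a: np.ndarray, signal_b: np.ndarray, sample_rate: int = 16000) -> str:
    # Estimate the arrival-time difference between the two microphone signals.
    correlation = np.correlate(signal_a, signal_b, mode="full")
    delay = (int(correlation.argmax()) - (len(signal_b) - 1)) / sample_rate
    # Compare the loudness of the two signals.
    loudness_ratio = (np.abs(signal_a).mean() + 1e-9) / (np.abs(signal_b).mean() + 1e-9)
    if abs(delay) < 0.0005 and 0.7 < loudness_ratio < 1.4:
        return "user_voice"          # would be suppressed before estimating the background level
    return "background_noise"

def background_level_db(kept_signal: np.ndarray) -> float:
    rms = float(np.sqrt(np.mean(kept_signal ** 2))) + 1e-12
    return 20.0 * np.log10(rms)

tone = np.sin(np.linspace(0, 200 * np.pi, 1600))
print(classify_source(tone, tone), round(background_level_db(tone), 1))
```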
  • Patent number: 12217001
    Abstract: Various embodiments of the present invention provide methods, apparatus, systems, computing devices, computing entities, and/or the like for performing natural language processing operations using a multi-context convolutional self-attention machine learning framework that comprises a shared token embedding machine learning model, a plurality of context-specific self-attention machine learning models, and a cross-context representation inference machine learning model, where each context-specific self-attention machine learning model is configured to generate, for each input text token of an input text sequence, a context-specific token representation using a context-specific self-attention mechanism that is associated with the respective distinct context window size for the context-specific self-attention machine learning model.
    Type: Grant
    Filed: April 29, 2022
    Date of Patent: February 4, 2025
    Assignee: Optum Services (Ireland) Limited
    Inventors: Mostafa Bayomi, Ahmed Selim, Kieran O'Donoghue, Michael Bridges
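    Illustrative sketch (not from the patent): single-head windowed self-attention applied with several context window sizes and then averaged, loosely mirroring the context-specific models and the cross-context representation inference step; projections, convolution, and training are omitted, and all shapes are assumptions.
```python
import numpy as np

def windowed_self_attention(x: np.ndarray, window: int) -> np.ndarray:
    n, d = x.shape
    scores = x @ x.T / np.sqrt(d)
    pos = np.arange(n)
    mask = np.abs(pos[:, None] - pos[None, :]) > window      # restrict to the context window
    scores = np.where(mask, -1e9, scores)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x                                        # context-specific token representations

def multi_context_representation(x: np.ndarray, window_sizes=(1, 3, 7)) -> np.ndarray:
    per_context = [windowed_self_attention(x, w) for w in window_sizes]
    return np.mean(per_context, axis=0)                       # cross-context inference (assumed mean)

tokens = np.random.rand(10, 16)                               # 10 shared token embeddings
print(multi_context_representation(tokens).shape)             # (10, 16)
```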
  • Patent number: 12211486
    Abstract: A method includes identifying multiple tokens contained in an input utterance. The method also includes generating slot labels for at least some of the tokens contained in the input utterance using a trained machine learning model. The method further includes determining at least one action to be performed in response to the input utterance based on at least one of the slot labels. The trained machine learning model is trained to use attention distributions generated such that (i) the attention distributions associated with tokens having dissimilar slot labels are forced to be different and (ii) the attention distribution associated with each token is forced to not focus primarily on that token itself.
    Type: Grant
    Filed: January 10, 2022
    Date of Patent: January 28, 2025
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Avik Ray, Yilin Shen, Hongxia Jin
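    Illustrative sketch (not from the patent): the two attention constraints from the abstract above written as simple penalty terms; the dot-product similarity, unweighted sum, and toy inputs are assumptions.
```python
import numpy as np

def attention_regularizers(attn: np.ndarray, slot_labels: list[str]) -> float:
    n = attn.shape[0]                      # attn[i] = attention distribution of token i
    penalty = 0.0
    for i in range(n):
        for j in range(n):
            if i != j and slot_labels[i] != slot_labels[j]:
                # (i) tokens with dissimilar slot labels should attend differently:
                # penalize similarity between their attention distributions.
                penalty += float(attn[i] @ attn[j])
        # (ii) a token's attention should not focus primarily on itself.
        penalty += float(attn[i, i])
    return penalty

attn = np.full((3, 3), 1.0 / 3)
print(attention_regularizers(attn, ["O", "B-city", "O"]))
```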
  • Patent number: 12210827
    Abstract: Ranking a plurality of text elements, each comprising at least one word, by specificity. For each text element to be ranked, such a method includes computing an embedding vector that locates the text element in an embedding space, and selecting a set of text fragments from reference text. Each of these text fragments contains the text element to be ranked and further text elements. For each text fragment, the method calculates respective distances in the embedding space between the further text elements. The method further includes calculating a specificity score for the text element to be ranked and storing the specificity score. After ranking the plurality of text elements, a text data structure may be processed using the specificity scores to extract data having a desired specificity from the data structure.
    Type: Grant
    Filed: August 23, 2021
    Date of Patent: January 28, 2025
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Francesco Fusco, Cesar Berrospi Ramis, Peter Willem Jan Staar
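    Illustrative sketch (not from the patent): a specificity score that grows as the further text elements co-occurring with a term sit closer together in embedding space; the embedding function, averaging, and 1/(1+d) mapping are assumptions.
```python
import itertools
import numpy as np

def specificity_score(fragments: list[list[str]], embed) -> float:
    """fragments: for the element being ranked, each entry lists the *further*
    text elements found alongside it in one reference text fragment."""
    distances = []
    for further_elements in fragments:
        vectors = [embed(t) for t in further_elements]
        distances += [float(np.linalg.norm(a - b))
                      for a, b in itertools.combinations(vectors, 2)]
    # Smaller average neighbour distance -> higher specificity (assumed convention).
    return 1.0 / (1.0 + float(np.mean(distances))) if distances else 0.0

# Toy usage with a stand-in embedding (word length and vowel count).
toy_embed = lambda w: np.array([len(w), sum(c in "aeiou" for c in w)], dtype=float)
print(specificity_score([["bert", "transformer", "token"]], toy_embed))
```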
  • Patent number: 12210844
    Abstract: Included are input means for inputting first data relating to a plurality of letters included in a text string that is a generation target, and generating means for generating, on the basis of the first data, second data relating to a text string that satisfies predetermined constraint conditions, including at least a condition relating to the plausibility of the sequence of letters.
    Type: Grant
    Filed: February 21, 2020
    Date of Patent: January 28, 2025
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Masaaki Nishino, Tsutomu Hirao, Masaaki Nagata
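    Illustrative sketch (not from the patent): treating the first data as per-position letter probabilities and keeping only candidate strings that satisfy a hard constraint, ranked by a simple plausibility product; the exhaustive enumeration and example constraint are assumptions.
```python
import itertools

def generate(first_data: list[dict[str, float]], constraint) -> str | None:
    best, best_score = None, 0.0
    for letters in itertools.product(*[d.keys() for d in first_data]):
        text = "".join(letters)
        if not constraint(text):                       # predetermined constraint conditions
            continue
        score = 1.0
        for pos, letter in enumerate(letters):
            score *= first_data[pos][letter]           # plausibility of the letter sequence
        if score > best_score:
            best, best_score = text, score
    return best

first_data = [{"c": 0.6, "b": 0.4}, {"a": 0.9, "o": 0.1}, {"t": 0.7, "r": 0.3}]
print(generate(first_data, constraint=lambda t: t.endswith("t")))   # -> "cat"
```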
  • Patent number: 12204513
    Abstract: Systems, methods, and apparatuses for an artificial intelligence (AI) toy with improved conversational dialogue and personality development. The AI toy determines responses to stimuli based on user profiles and personality profiles that are developed through user interaction and external media inputs. Natural language processing (NLP) and other semantic interaction processing are paired with the profiles to develop the AI's personality and conversational ability.
    Type: Grant
    Filed: October 27, 2023
    Date of Patent: January 21, 2025
    Inventors: Maria Emma, Gregory Alexander D'Amico
  • Patent number: 12197409
    Abstract: Systems, methods, and apparatuses for an artificial intelligence (AI) toy with improved conversational dialogue and personality development. The AI toy determines responses to stimuli based on user profiles and personality profiles that are developed through user interaction and external media inputs. Natural language processing (NLP) and other semantic interaction processing are paired with the profiles to develop the AI's personality and conversational ability.
    Type: Grant
    Filed: May 23, 2023
    Date of Patent: January 14, 2025
    Inventors: Maria Emma, Gregory Alexander D'Amico
  • Patent number: 12198678
    Abstract: An electronic device of the present disclosure comprises: a communication unit; a memory; and a processor for: detecting a voice section in an audio signal acquired by the electronic device; identifying whether a wake-up word stored in the memory exists in a user voice included in the detected voice section; when it is identified that the wake-up word exists in the user voice, transmitting, via the communication unit, the user voice to a server for providing a voice recognition service; and when response information for the user voice is received from the server, providing a response to the user voice on the basis of the received response information, wherein the processor identifies that the wake-up word exists in the user voice, when a part of the user voice matches the wake-up word. In particular, a method for acquiring a natural language for providing a response may use an artificial intelligence model learned according to at least one of machine learning, a neural network, and a deep learning algorithm.
    Type: Grant
    Filed: July 26, 2019
    Date of Patent: January 14, 2025
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Changhan Kim, Bowon Kim, Jinsuk Lee, Hyeontaek Lim, Jungkwan Seo
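    Illustrative sketch (not from the patent): accepting a wake-up word when only part of the captured voice matches it, then forwarding the voice to a recognition server; the wake word text, prefix rule, and `send_to_server` callable are assumptions.
```python
# Hedged sketch of the flow in the abstract above; names and thresholds are invented.
def contains_wake_word(recognized_text: str, wake_word: str = "hey assistant") -> bool:
    text = recognized_text.lower()
    # Accept a full match, or a match on a leading portion of the wake-up word,
    # standing in for "a part of the user voice matches the wake-up word".
    return wake_word in text or wake_word[: len(wake_word) // 2] in text

def handle_voice_section(recognized_text: str, send_to_server):
    if contains_wake_word(recognized_text):
        # The user voice is sent to the voice recognition service, and the
        # received response information is used to answer the user.
        return send_to_server(recognized_text)
    return None

print(handle_voice_section("hey assi turn on the lights", lambda text: f"ok: {text}"))
```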
  • Patent number: 12197867
    Abstract: Method(s), apparatus, and system(s) are provided for entity type identification and/or disambiguation of entities within a corpus of text, the method including: receiving one or more entity results, each entity result comprising data representative of an identified entity and a location of the identified entity within the corpus of text; identifying an entity type for each entity of the received entity results by inputting text associated with the location of each entity in the corpus of text to a trained entity type (ET) model configured for predicting or extracting an entity type of each entity from the corpus of text; and outputting data representative of the identified entity type of each entity in the received entity results.
    Type: Grant
    Filed: March 23, 2020
    Date of Patent: January 14, 2025
    Assignee: BenevolentAI Technology Limited
    Inventors: Joss Briody, Juha Iso-Sipila, Oliver Oechsle, Theodosia Togia
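    Illustrative sketch (not from the patent): passing the text around each received entity location to a trained entity-type model and collecting the predicted types; the result fields, context window, and `et_model` callable are assumptions.
```python
def identify_entity_types(corpus: str, entity_results: list[dict], et_model, window: int = 80):
    typed = []
    for result in entity_results:
        start, end = result["start"], result["end"]          # location within the corpus
        # Text associated with the entity's location, fed to the trained ET model.
        context = corpus[max(0, start - window): end + window]
        typed.append({"entity": result["entity"],
                      "type": et_model(result["entity"], context)})   # predicted entity type
    return typed

corpus = "Aspirin inhibits COX-1 in platelets."
results = [{"entity": "Aspirin", "start": 0, "end": 7}]
print(identify_entity_types(corpus, results, et_model=lambda entity, context: "Drug"))
```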
  • Patent number: 12197881
    Abstract: Various implementations of the present disclosure relate to text to visualization. In a method, information items are extracted from a natural language sentence. Visual elements associated with the information items are determined. A visual representation of the natural language sentence based on the visual elements is determined, the visual representation indicating information expressed by the natural language sentence.
    Type: Grant
    Filed: June 20, 2019
    Date of Patent: January 14, 2025
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Weiwei Cui, He Huang, Haidong Zhang, Daniel Cheung, Bei Chen, Ishita Gupta, Yu Mao, Jian-Guang Lou, Dongmei Zhang
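    Illustrative sketch (not from the patent): extracting simple information items from a sentence with a hand-written pattern and mapping them to bar-chart visual elements; the regular expression and chart-spec shape are assumptions, whereas the patented method determines visual elements for learned information items.
```python
import re

def extract_information_items(sentence: str) -> list[tuple[str, float]]:
    # Toy extractor for "<value> in <label>" phrases; real systems use learned extraction.
    pattern = r"(\d+(?:\.\d+)?)\s+in\s+([A-Za-z]+)"
    return [(label, float(value)) for value, label in re.findall(pattern, sentence)]

def to_visual_elements(items: list[tuple[str, float]]) -> dict:
    # Map each information item to a visual element of a simple bar chart.
    return {"mark": "bar", "data": [{"label": label, "value": value} for label, value in items]}

sentence = "Sales reached 120 in March and 95 in April"
print(to_visual_elements(extract_information_items(sentence)))
# {'mark': 'bar', 'data': [{'label': 'March', 'value': 120.0}, {'label': 'April', 'value': 95.0}]}
```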