Patents Examined by Abdelali Serrou
  • Patent number: 11947920
    Abstract: Provided are a man-machine dialogue method and system, computer device and medium. A specific implementation of the method includes: obtaining a current dialogue sentence input by a user; using the current dialogue sentence and a goal type and a goal entity of a preceding dialogue sentence obtained before the current dialogue sentence as an input of a first neural network module of a neural network system, and generating the goal type and the goal entity of the current dialogue sentence by performing feature extraction through the first neural network module; and using the current dialogue sentence, the goal type and the goal entity of the current dialogue sentence and knowledge base data as an input of a second neural network module of the neural network system, and generating a reply sentence by performing feature extraction and classification through the second neural network module.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: April 2, 2024
    Assignee: BOE TECHNOLOGY GROUP CO., LTD.
    Inventor: Tianxin Liang
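
The two-module pipeline in the abstract above lends itself to a short sketch. Below is a minimal, hypothetical PyTorch layout, assuming GRU encoders, a toy vocabulary, and invented feature sizes; it mirrors only the data flow (previous goal + current sentence -> current goal; current sentence + goal + knowledge-base features -> reply), not the patented network.

```python
# Illustrative two-stage dialogue sketch; all sizes and modules are assumptions.
import torch
import torch.nn as nn

VOCAB, GOAL_TYPES, GOAL_ENTITIES, KB_SIZE = 1000, 8, 50, 64

class GoalPredictor(nn.Module):
    """Stage 1: current sentence + previous goal type/entity -> current goal."""
    def __init__(self, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.type_embed = nn.Embedding(GOAL_TYPES, hidden)
        self.entity_embed = nn.Embedding(GOAL_ENTITIES, hidden)
        self.type_head = nn.Linear(3 * hidden, GOAL_TYPES)
        self.entity_head = nn.Linear(3 * hidden, GOAL_ENTITIES)

    def forward(self, sentence_ids, prev_type, prev_entity):
        _, h = self.encoder(self.embed(sentence_ids))        # sentence feature
        feat = torch.cat([h[-1], self.type_embed(prev_type),
                          self.entity_embed(prev_entity)], dim=-1)
        return self.type_head(feat), self.entity_head(feat)  # goal logits

class ReplyGenerator(nn.Module):
    """Stage 2: sentence + predicted goal + knowledge-base features -> reply."""
    def __init__(self, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.goal_proj = nn.Linear(GOAL_TYPES + GOAL_ENTITIES, hidden)
        self.kb_proj = nn.Linear(KB_SIZE, hidden)
        self.reply_head = nn.Linear(3 * hidden, VOCAB)        # reply token logits

    def forward(self, sentence_ids, goal_type_logits, goal_entity_logits, kb_feats):
        _, h = self.encoder(self.embed(sentence_ids))
        goal = self.goal_proj(torch.cat([goal_type_logits, goal_entity_logits], -1))
        feat = torch.cat([h[-1], goal, self.kb_proj(kb_feats)], dim=-1)
        return self.reply_head(feat)

# Wiring the two stages together for one dialogue turn.
stage1, stage2 = GoalPredictor(), ReplyGenerator()
sentence = torch.randint(0, VOCAB, (1, 12))                   # tokenised user turn
prev_type, prev_entity = torch.tensor([2]), torch.tensor([17])
kb = torch.randn(1, KB_SIZE)                                   # knowledge-base features
type_logits, entity_logits = stage1(sentence, prev_type, prev_entity)
reply_logits = stage2(sentence, type_logits, entity_logits, kb)
```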
  • Patent number: 11942082
    Abstract: Techniques described herein relate to facilitating end-to-end multilingual communications with automated assistants. In various implementations, speech recognition output may be generated based on voice input in a first language. A first language intent may be identified based on the speech recognition output and fulfilled in order to generate a first natural language output candidate in the first language. At least part of the speech recognition output may be translated to a second language to generate an at least partial translation, which may then be used to identify a second language intent that is fulfilled to generate a second natural language output candidate in the second language. Scores may be determined for the first and second natural language output candidates, and based on the scores, a natural language output may be selected for presentation.
    Type: Grant
    Filed: May 26, 2022
    Date of Patent: March 26, 2024
    Assignee: GOOGLE LLC
    Inventors: James Kuczmarski, Vibhor Jain, Amarnag Subramanya, Nimesh Ranjan, Melvin Jose Johnson Premkumar, Vladimir Vuskovic, Luna Dai, Daisuke Ikeda, Nihal Sandeep Balani, Jinna Lei, Mengmeng Niu, Hongjie Chai, Wangqing Yuan
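
Below is a runnable, toy-scale sketch of the bilingual routing the abstract above describes. All helpers (the fake recognizer, translator, intent parser, fulfiller, and scorer) are hypothetical stubs rather than Google APIs; only the two-path candidate scoring and selection follows the abstract.

```python
# Two-path multilingual response selection; every helper is a stub.
from dataclasses import dataclass

# --- stubs standing in for real speech, translation and NLU components -------
def recognize_speech(audio: bytes, language: str) -> str:
    return "wie spät ist es"                       # pretend ASR output (German)

def translate(text: str, source: str, target: str) -> str:
    return "what time is it"                       # pretend partial translation

def parse_intent(text: str, language: str) -> str:
    return "ask_time"                              # pretend NLU

def fulfill_intent(intent: str, language: str) -> str:
    return {"de": "Es ist 14 Uhr.", "en": "It is 2 pm."}[language]

def score_output(output: str, intent: str) -> float:
    return float(len(output))                      # pretend confidence score

# --- the routing described in the abstract -----------------------------------
@dataclass
class Candidate:
    language: str
    text: str
    score: float

def respond(audio: bytes, first_lang: str = "de", second_lang: str = "en") -> str:
    transcript = recognize_speech(audio, first_lang)

    # Path 1: identify and fulfil an intent in the first language.
    intent_1 = parse_intent(transcript, first_lang)
    candidate_1 = Candidate(first_lang, fulfill_intent(intent_1, first_lang), 0.0)

    # Path 2: translate (at least part of) the transcript, then identify and
    # fulfil an intent in the second language.
    translation = translate(transcript, first_lang, second_lang)
    intent_2 = parse_intent(translation, second_lang)
    candidate_2 = Candidate(second_lang, fulfill_intent(intent_2, second_lang), 0.0)

    # Score both natural-language output candidates and select one to present.
    for cand, intent in ((candidate_1, intent_1), (candidate_2, intent_2)):
        cand.score = score_output(cand.text, intent)
    return max((candidate_1, candidate_2), key=lambda c: c.score).text

print(respond(b""))
```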
  • Patent number: 11922956
    Abstract: An apparatus for decoding an encoded audio signal, includes a spectral domain audio decoder for generating a first decoded representation of a first set of first spectral portions, the decoded representation having a first spectral resolution; a parametric decoder for generating a second decoded representation of a second set of second spectral portions having a second spectral resolution being lower than the first spectral resolution; a frequency regenerator for regenerating every constructed second spectral portion having the first spectral resolution using a first spectral portion and spectral envelope information for the second spectral portion; and a spectrum time converter for converting the first decoded representation and the reconstructed second spectral portion into a time representation.
    Type: Grant
    Filed: March 3, 2022
    Date of Patent: March 5, 2024
    Assignee: Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
    Inventors: Sascha Disch, Frederik Nagel, Ralf Geiger, Balaji Nagendran Thoshkahna, Konstantin Schmidt, Stefan Bayer, Christian Neukam, Bernd Edler, Christian Helmrich
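
The regeneration step above can be illustrated numerically. The NumPy toy below assumes a simple band layout and an energy-per-band envelope; it reuses a decoded low-band tile, rescales it to the transmitted envelope energy, and converts the completed spectrum to time. It is a sketch of the idea, not the Fraunhofer codec.

```python
# Toy spectral gap regeneration; band layout and envelope format are assumed.
import numpy as np

def regenerate_band(source_band: np.ndarray, target_energy: float) -> np.ndarray:
    """Frequency regenerator: reuse decoded coefficients, match the envelope energy."""
    source_energy = np.sum(np.abs(source_band) ** 2) + 1e-12
    return source_band * np.sqrt(target_energy / source_energy)

def decode_frame(first_portions: dict, envelope: dict, n_bins: int) -> np.ndarray:
    """first_portions: {(start, stop): complex coefficients at full resolution}
    envelope: {(start, stop): energy of a second portion at coarse resolution}"""
    spectrum = np.zeros(n_bins, dtype=complex)

    # 1) Place the directly decoded (first) spectral portions.
    for (start, stop), coeffs in first_portions.items():
        spectrum[start:stop] = coeffs

    # 2) Regenerate every second portion from a first portion plus its envelope.
    source = next(iter(first_portions.values()))
    for (start, stop), energy in envelope.items():
        tile = np.resize(source, stop - start)          # reuse low-band content
        spectrum[start:stop] = regenerate_band(tile, energy)

    # 3) Spectrum-to-time conversion of the completed frame.
    return np.fft.irfft(spectrum, n=2 * (n_bins - 1))

# Example: bins 0..63 are sent at full resolution, 64..128 only as energies.
first = {(0, 64): np.random.randn(64) + 1j * np.random.randn(64)}
env = {(64, 96): 3.0, (96, 129): 1.5}
samples = decode_frame(first, env, n_bins=129)
```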
  • Patent number: 11915692
    Abstract: Techniques described herein relate to facilitating end-to-end multilingual communications with automated assistants. In various implementations, speech recognition output may be generated based on voice input in a first language. A first language intent may be identified based on the speech recognition output and fulfilled in order to generate a first natural language output candidate in the first language. At least part of the speech recognition output may be translated to a second language to generate an at least partial translation, which may then be used to identify a second language intent that is fulfilled to generate a second natural language output candidate in the second language. Scores may be determined for the first and second natural language output candidates, and based on the scores, a natural language output may be selected for presentation.
    Type: Grant
    Filed: March 24, 2021
    Date of Patent: February 27, 2024
    Assignee: GOOGLE LLC
    Inventors: James Kuczmarski, Vibhor Jain, Amarnag Subramanya, Nimesh Ranjan, Melvin Jose Johnson Premkumar, Vladimir Vuskovic, Luna Dai, Daisuke Ikeda, Nihal Sandeep Balani, Jinna Lei, Mengmeng Niu
  • Patent number: 11915691
    Abstract: An electronic apparatus includes a communication interface; a memory configured to store at least one instruction; and a processor configured to execute the at least one instruction to: receive a text corresponding to a user utterance and information regarding a first external device; obtain a plurality of weights of a plurality of elements related to the first external device; identify a second external device for obtaining response information; control the communication interface to transmit the text corresponding to the user utterance to the second external device; receive first response information regarding the user utterance from the second external device; obtain second response information; and control the communication interface to transmit the second response information to the first external device.
    Type: Grant
    Filed: October 26, 2021
    Date of Patent: February 27, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Dayoung Kwon, Hyeonmok Ko, Jonggu Kim, Seoha Song, Kyenghun Lee, Hojung Lee, Saebom Jang, Pureum Jung, Changho Paeon, Jiyeon Hong
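
A compact sketch of the routing flow above follows. The element names, weights, and transport callback are assumptions; only the weighted selection of a second device and the relay of its response back to the first device track the abstract.

```python
# Toy device routing; elements, weights, and the send() transport are invented.
def pick_second_device(element_weights: dict[str, float],
                       devices: dict[str, dict[str, float]]) -> str:
    """Score each candidate device by how well it covers the weighted elements."""
    def score(capabilities: dict[str, float]) -> float:
        return sum(w * capabilities.get(element, 0.0)
                   for element, w in element_weights.items())
    return max(devices, key=lambda name: score(devices[name]))

def handle_utterance(text: str, first_device: str,
                     devices: dict[str, dict[str, float]],
                     send) -> None:
    # Elements related to the first device, each with a weight (assumed values).
    weights = {"location": 0.6, "device_type": 0.3, "usage_history": 0.1}

    second_device = pick_second_device(weights, devices)
    first_response = send(second_device, text)                # forward the utterance
    second_response = f"[{second_device}] {first_response}"   # derive second response
    send(first_device, second_response)                       # reply to the requester

# Usage with a fake transport.
devices = {"tv": {"location": 0.9, "device_type": 0.5},
           "speaker": {"location": 0.4, "device_type": 0.9}}
handle_utterance("play jazz", "phone", devices,
                 send=lambda device, payload: f"ok from {device}: {payload}")
```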
  • Patent number: 11908465
    Abstract: An approach for controlling an electronic device is provided. The approach acquires voice information and image information for setting an action to be executed according to a condition, the voice information and the image information being respectively generated from a voice and a behavior associated with the voice of a user. The approach determines an event to be detected according to the condition and a function to be executed according to the action when the event is detected, based on the acquired voice information and the acquired image information. The approach determines at least one detection resource to detect the determined event. In response to the at least one determined detection resource detecting at least one event satisfying the condition, the approach executes the function according to the action.
    Type: Grant
    Filed: June 5, 2020
    Date of Patent: February 20, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Young-chul Sohn, Gyu-tae Park, Ki-beom Lee, Jong-ryul Lee
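
The condition/action setup above can be pictured as a tiny rule engine, sketched below. The sensor names, the voice/gesture matching, and the lamp action are invented for illustration.

```python
# Toy condition/action rule built from voice + image cues; all names are assumed.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    event: str                  # event to detect (the condition)
    action: Callable[[], None]  # function to execute when the event occurs
    resource: str               # detection resource chosen for the event

def build_rule(voice_info: str, image_info: str) -> Rule:
    # Voice carries the condition/action; the gesture (image) disambiguates the target.
    if "when the door opens" in voice_info and image_info == "pointing_at_lamp":
        return Rule(event="door_open",
                    action=lambda: print("turning on the lamp"),
                    resource="door_contact_sensor")
    raise ValueError("unrecognised instruction")

def on_sensor_event(rule: Rule, resource: str, event: str) -> None:
    # Execute the function only when the chosen resource reports the condition.
    if resource == rule.resource and event == rule.event:
        rule.action()

rule = build_rule("turn it on when the door opens", "pointing_at_lamp")
on_sensor_event(rule, "door_contact_sensor", "door_open")   # prints the action
```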
  • Patent number: 11907659
    Abstract: An item recall method includes: behavior data is acquired, where the behavior data includes items and item information of each item; target behavior data containing a retrieval category word is extracted from the behavior data; retrieval words of each item and a retrieval frequency of each retrieval word are acquired in a reverse correlation manner; word segmentation is performed on the item information to obtain multiple item segmented words; a similarity between all retrieval words and the multiple item segmented words is calculated; whether the similarity is greater than a first preset threshold or not is determined, and if yes, then a retrieval word is extracted as an expansion word of the retrieval category word; and item recall is performed according to the retrieval category word and the expansion word.
    Type: Grant
    Filed: January 2, 2020
    Date of Patent: February 20, 2024
    Assignees: Beijing Jingdong Shangke Information Technology Co., Ltd., Beijing Jingdong Century Trading Co., Ltd.
    Inventors: Yitong Hu, Yun Gao, Na Wang, Lili Zuo, Yahong Zhang
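
A runnable toy of the expansion-word step above is shown next. The character-level Jaccard similarity and the naive tokenisation stand in for the patent's word segmentation and similarity calculation; the threshold and catalogue are made up.

```python
# Toy query-expansion recall; similarity measure and data are illustrative.
def jaccard(a: str, b: str) -> float:
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def find_expansion_words(retrieval_words: dict[str, int],
                         item_segments: list[str],
                         threshold: float = 0.4) -> list[str]:
    expansions = []
    for word in retrieval_words:                       # each reverse-indexed query word
        if any(jaccard(word, seg) > threshold for seg in item_segments):
            expansions.append(word)
    return expansions

def recall_items(category_word: str, expansions: list[str],
                 catalogue: dict[str, str]) -> list[str]:
    terms = [category_word, *expansions]
    return [item for item, title in catalogue.items()
            if any(term in title for term in terms)]

retrieval_words = {"sneaker": 120, "running shoe": 80, "kettle": 5}
item_segments = ["running", "shoes", "men"]            # segmented item title
catalogue = {"sku-1": "men running shoes", "sku-2": "electric kettle"}
expansions = find_expansion_words(retrieval_words, item_segments)
print(recall_items("shoe", expansions, catalogue))     # ['sku-1']
```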
  • Patent number: 11900925
    Abstract: An output method includes obtaining voice information, determining whether the voice information is a voice request, in response to the voice information being the voice request, obtaining reply information for replying to the voice request and supplemental information, transmitting the reply information and the supplemental information to an output device for outputting the reply information and the supplemental information using different parameters, such that an output of the reply information is prioritized over an output of the supplemental information, and in response to receiving a predetermined operation, outputting the reply information and the supplemental information using different parameters, such that the output of the supplemental information is prioritized over the output of the reply information. The supplemental information is information that needs to be outputted in association with the reply information.
    Type: Grant
    Filed: May 3, 2022
    Date of Patent: February 13, 2024
    Assignee: LENOVO (BEIJING) CO., LTD.
    Inventors: Wenlin Yan, Shifeng Peng
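
The output-priority behaviour above is easy to sketch as a small policy function. The parameter names (order, volume) and the tap-to-swap trigger are illustrative assumptions.

```python
# Toy reply/supplemental output policy; parameters and trigger are assumed.
def build_outputs(reply: str, supplemental: str, swap_priority: bool = False):
    primary, secondary = (supplemental, reply) if swap_priority else (reply, supplemental)
    return [
        {"text": primary, "order": 1, "volume": 1.0},    # prioritised output
        {"text": secondary, "order": 2, "volume": 0.6},  # associated output
    ]

def handle_voice(voice_text: str, is_request) -> list[dict]:
    if not is_request(voice_text):
        return []                                        # not a voice request
    reply = "The meeting is at 3 pm."
    supplemental = "You also have a conflicting call at 3:30 pm."
    return build_outputs(reply, supplemental)

outputs = handle_voice("when is my meeting?", is_request=lambda t: t.endswith("?"))
# A predetermined operation (e.g. a tap) re-emits them with swapped priority:
outputs_after_tap = build_outputs(outputs[0]["text"], outputs[1]["text"],
                                  swap_priority=True)
```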
  • Patent number: 11893310
    Abstract: Techniques for routing a user command to a speechlet and resolving conflicts between potential speechlets are described. A system determines an intent of an input command. The system also receives context information associated with the input command. The system determines speechlets (e.g., speechlets and/or skills) that may execute with respect to the input command given the intent and the context data. The system then determines whether conditions of routing rules, associated with the speechlets, are satisfied given the context data. If the conditions of only one routing rule are satisfied, the system causes the speechlet associated with the routing rule to execute with respect to the input command. If the conditions of more than one routing rule are satisfied, the system may determine a speechlet to execute with respect to the input command based on the speechlets' priorities in a list of speechlets and/or based on potential output data provided by the speechlets.
    Type: Grant
    Filed: June 7, 2022
    Date of Patent: February 6, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Andres Felipe Borja Jaramillo, David Robert Thomas, Shrish Chandra Mishra, Shijian Zheng, Alberto Milan Gutierrez
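
The rule-based routing and conflict resolution above roughly corresponds to the sketch below. The speechlet names, context fields, and priority list are hypothetical; only the single-match / multi-match logic follows the abstract.

```python
# Toy speechlet routing with priority-based conflict resolution.
from typing import Callable

RoutingRule = Callable[[dict], bool]          # condition over context data

speechlet_rules: dict[str, RoutingRule] = {
    "music":   lambda ctx: ctx.get("device_has_speaker", False),
    "video":   lambda ctx: ctx.get("device_has_screen", False),
    "weather": lambda ctx: ctx.get("intent") == "get_weather",
}
priority = ["video", "music", "weather"]      # tie-break ordering

def route(intent: str, context: dict) -> str:
    context = {**context, "intent": intent}
    satisfied = [name for name, rule in speechlet_rules.items() if rule(context)]
    if len(satisfied) == 1:
        return satisfied[0]                   # only one routing rule matched
    if len(satisfied) > 1:                    # conflict: use the priority list
        return min(satisfied, key=priority.index)
    raise LookupError("no speechlet can handle this command")

print(route("play_song", {"device_has_speaker": True, "device_has_screen": True}))
# -> 'video' wins the conflict by priority; reorder the list to prefer 'music'.
```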
  • Patent number: 11886827
    Abstract: Systems and methods for generating a contextually adaptable classifier model are disclosed. An example method is performed by one or more processors of a system and includes obtaining a dataset, feature values, and labels, transforming each datapoint into a natural language statement (NLS) associating the datapoint's feature values and label with feature identifiers and a label identifier, generating a feature matrix for each NLS, transforming the feature matrix into a global feature vector, generating a target vector for each NLS, transforming the target vector into a global target vector having a same shape, and generating, using the vectors, a similarity measurement operation, and a loss function, a classifier model trained to generate a compatibility score predictive of an accuracy at which the classifier model can classify given data based on at least one of a different feature characterizing the given data or a different label for classifying the given data.
    Type: Grant
    Filed: July 31, 2023
    Date of Patent: January 30, 2024
    Assignee: Intuit Inc.
    Inventor: Itay Margolin
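
The data flow above (datapoint -> natural language statement -> feature matrix -> global vectors -> similarity-based compatibility score) is sketched loosely below in NumPy. The hashing embedder, mean pooling, and cosine score are simplifications, not Intuit's model.

```python
# Loose numerical sketch of the NLS / global-vector / compatibility-score flow.
import numpy as np

DIM = 64

def to_nls(features: dict, label: str) -> str:
    parts = [f"the {name} is {value}" for name, value in features.items()]
    return f"{' and '.join(parts)}, so the label is {label}"

def token_matrix(nls: str) -> np.ndarray:
    """One hashed embedding row per token -> the per-NLS feature matrix."""
    rows = [np.random.default_rng(abs(hash(tok)) % (2**32)).standard_normal(DIM)
            for tok in nls.split()]
    return np.stack(rows)

def pool(matrices: list) -> np.ndarray:
    """Global vector: mean over all rows of all per-NLS matrices."""
    return np.concatenate(matrices).mean(axis=0)

def compatibility(feature_vec: np.ndarray, target_vec: np.ndarray) -> float:
    # Cosine similarity as a stand-in for the similarity measurement operation.
    return float(feature_vec @ target_vec /
                 (np.linalg.norm(feature_vec) * np.linalg.norm(target_vec) + 1e-12))

dataset = [({"amount": 12.5, "merchant": "cafe"}, "meals"),
           ({"amount": 40.0, "merchant": "garage"}, "auto")]
statements = [to_nls(f, y) for f, y in dataset]
global_features = pool([token_matrix(s) for s in statements])
global_targets = pool([token_matrix(f"label {y}") for _, y in dataset])
print(compatibility(global_features, global_targets))   # compatibility score
```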
  • Patent number: 11886821
    Abstract: Automated response generation systems and methods are disclosed. The systems can include a deep learning model specially configured to apply inferencing techniques to redesign natural language querying systems for use over knowledge graphs. The disclosed systems and methods provide a model for inferencing referred to as a Hierarchical Recurrent Path Encoder (HRPE). An entity extraction and linking module as well as a data conversion and generation module process the content of a given query. The output is processed by the proposed model to generate inferred answers.
    Type: Grant
    Filed: April 16, 2021
    Date of Patent: January 30, 2024
    Assignee: Accenture Global Solutions Limited
    Inventors: Shubhashis Sengupta, Annervaz K. M., Gupta Aayushee, Sandip Sinha, Shakti Naik
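
A toy pipeline in the spirit of the abstract above is sketched below: link an entity, enumerate candidate relation paths in a miniature knowledge graph, score them against the question, and return the best path's endpoint. The overlap-based scorer is only a stand-in for the HRPE model.

```python
# Toy knowledge-graph QA; the path scorer is a placeholder for HRPE.
from itertools import product

# Knowledge graph as (head, relation, tail) triples.
KG = [("paris", "capital_of", "france"),
      ("france", "currency", "euro"),
      ("paris", "located_in", "france")]

def link_entity(question: str) -> str:
    return next(e for e in {h for h, _, _ in KG} if e in question.lower())

def candidate_paths(start: str):
    one_hop = [(r, t) for h, r, t in KG if h == start]
    yield from (((r,), t) for r, t in one_hop)              # one-hop paths
    for (r1, mid), (h, r2, t) in product(one_hop, KG):      # two-hop paths
        if h == mid:
            yield (r1, r2), t

def score_path(path: tuple, question: str) -> float:
    # Stand-in scorer: relation-name overlap with the question tokens.
    q_tokens = set(question.lower().replace("?", "").split())
    return sum(len(set(rel.split("_")) & q_tokens) for rel in path)

def answer(question: str) -> str:
    start = link_entity(question)
    best_path, tail = max(candidate_paths(start),
                          key=lambda pt: score_path(pt[0], question))
    return tail

print(answer("Which currency is used where Paris is the capital?"))  # euro
```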
  • Patent number: 11880397
    Abstract: An event argument extraction (EAE) method, an EAE apparatus, and an electronic device relate to the technical field of knowledge graphs. A specific implementation scheme includes acquiring a to-be-extracted event content; and performing argument extraction on the to-be-extracted event content based on a trained EAE model, to obtain a target argument of the to-be-extracted event content; where the trained EAE model is obtained by training a pre-trained model with event news annotation data and a weight of each argument annotated in the event news annotation data.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: January 23, 2024
    Assignee: Beijing Baidu Netcom Science Technology Co., Ltd.
    Inventors: Fayuan Li, Yuguang Chen, Lu Pan, Yuanzhen Liu, Cuiyun Han, Xi Shi, Jiayan Huang
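
The weighted-training idea above can be illustrated with a small PyTorch token tagger, shown below. The GRU encoder stands in for the pre-trained model, and the roles and per-argument weights are invented.

```python
# Toy weighted fine-tuning for argument extraction; sizes and roles are assumed.
import torch
import torch.nn as nn

ROLES = ["O", "time", "place", "person"]          # argument roles (O = none)
ROLE_WEIGHTS = torch.tensor([0.2, 1.0, 1.0, 1.5]) # annotated per-argument weights

class ArgumentTagger(nn.Module):
    def __init__(self, vocab=5000, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)   # stand-in for the pre-trained encoder
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, len(ROLES))

    def forward(self, token_ids):
        out, _ = self.encoder(self.embed(token_ids))
        return self.head(out)                      # per-token role logits

model = ArgumentTagger()
loss_fn = nn.CrossEntropyLoss(weight=ROLE_WEIGHTS) # argument weights enter the loss
optim = torch.optim.Adam(model.parameters(), lr=1e-3)

tokens = torch.randint(0, 5000, (2, 10))           # a toy batch of event news text
labels = torch.randint(0, len(ROLES), (2, 10))     # annotated argument roles

logits = model(tokens)
loss = loss_fn(logits.reshape(-1, len(ROLES)), labels.reshape(-1))
loss.backward()
optim.step()

# Inference: the highest-scoring role per token is the extracted target argument.
predicted_roles = logits.argmax(dim=-1)
```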
  • Patent number: 11875788
    Abstract: Techniques described herein relate to facilitating end-to-end multilingual communications with automated assistants. In various implementations, speech recognition output may be generated based on voice input in a first language. A first language intent may be identified based on the speech recognition output and fulfilled in order to generate a first natural language output candidate in the first language. At least part of the speech recognition output may be translated to a second language to generate an at least partial translation, which may then be used to identify a second language intent that is fulfilled to generate a second natural language output candidate in the second language. Scores may be determined for the first and second natural language output candidates, and based on the scores, a natural language output may be selected for presentation.
    Type: Grant
    Filed: March 24, 2021
    Date of Patent: January 16, 2024
    Assignee: GOOGLE LLC
    Inventors: James Kuczmarski, Vibhor Jain, Amarnag Subramanya, Nimesh Ranjan, Melvin Jose Johnson Premkumar, Vladimir Vuskovic, Luna Dai, Daisuke Ikeda, Nihal Sandeep Balani, Jinna Lei, Mengmeng Niu
  • Patent number: 11875130
    Abstract: Systems and methods are disclosed for managing a generative artificial intelligence (AI) model. Managing the generative AI model may include training or tuning the generative AI model before use or managing the operation of the generative AI model during use. Training or tuning a generative AI model typically requires manual review of outputs from the model based on the queries provided to the model to reduce hallucinations generated by the generative AI model. Once the model is in use, though, hallucinations still occur. Use of a confidence measure (whose generation is described herein) to train or tune the generative AI model and/or manage operation of the model reduces hallucinations, and thus improves performance, of the generative AI model.
    Type: Grant
    Filed: July 25, 2023
    Date of Patent: January 16, 2024
    Assignee: Intuit Inc.
    Inventors: Dusan Bosnjakovic, Anshuman Sahu
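
One way to picture a confidence-gated generative model, in the spirit of the abstract above, is the sample-agreement sketch below. The sampler and agreement metric are hypothetical, not the patented confidence computation.

```python
# Toy confidence gating for a generative model; the agreement metric is a stand-in.
from collections import Counter

def agreement_confidence(answers: list) -> tuple:
    """Most common answer and the fraction of samples that agree with it."""
    best, count = Counter(answers).most_common(1)[0]
    return best, count / len(answers)

def answer_with_guardrail(query: str, sample_answer, n_samples: int = 5,
                          threshold: float = 0.6) -> str:
    samples = [sample_answer(query) for _ in range(n_samples)]
    best, confidence = agreement_confidence(samples)
    if confidence < threshold:
        # Low confidence: likely hallucination; route to review / tuning data.
        return "I'm not sure; this answer needs review."
    return best

# Usage with a fake sampler that is consistent for one query and not the other.
fake = {"capital of France?": ["Paris"] * 5,
        "CEO of Atlantis Corp?": ["A. Smith", "B. Jones", "C. Lee", "A. Smith", "D. Kim"]}
for q, outs in fake.items():
    it = iter(outs)
    print(q, "->", answer_with_guardrail(q, lambda _: next(it)))
```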
  • Patent number: 11869494
    Abstract: A system, an apparatus, and a method for determining distinguishable data include processing input data into a plurality of elements, calculating the distinguishability of the plurality of elements using phonetic vowels, and determining distinguishable elements from among the plurality of elements according to the distinguishability calculation.
    Type: Grant
    Filed: January 10, 2019
    Date of Patent: January 9, 2024
    Assignee: International Business Machines Corporation
    Inventors: Lorin F. Wilde, Aditya Vempaty, Tamer E. Abuelsaad
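
A toy reading of the distinguishability calculation above follows: reduce each element to its vowel pattern and call an element distinguishable when no other element shares that pattern. The vowel signature is a deliberate simplification of the patent's phonetic calculation.

```python
# Toy vowel-based distinguishability; the signature is a crude phonetic proxy.
from collections import Counter

VOWELS = set("aeiou")

def vowel_signature(element: str) -> str:
    return "".join(ch for ch in element.lower() if ch in VOWELS)

def distinguishable_elements(data: str) -> list:
    elements = data.split()
    counts = Counter(vowel_signature(e) for e in elements)
    # An element is distinguishable if no other element shares its vowel pattern.
    return [e for e in elements if counts[vowel_signature(e)] == 1]

print(distinguishable_elements("cat can dog"))   # ['dog']: 'cat'/'can' share 'a'
```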
  • Patent number: 11869482
    Abstract: A method and apparatus for generating a speech waveform. Fundamental frequency information, glottal features and vocal tract features associated with an input may be received, wherein the glottal features include a phase feature, a shape feature, and an energy feature. A glottal waveform is generated based on the fundamental frequency information and the glottal features through a first neural network model. A speech waveform is generated based on the glottal waveform and the vocal tract features through a second neural network model.
    Type: Grant
    Filed: September 30, 2018
    Date of Patent: January 9, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yang Cui, Xi Wang, Lei He, Kao-Ping Soong
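
The two-model synthesis above maps naturally onto two small networks, sketched below in PyTorch. Layer choices, frame length, and feature sizes are assumptions rather than Microsoft's architecture.

```python
# Toy glottal-then-vocal-tract synthesis; all sizes and layers are assumed.
import torch
import torch.nn as nn

FRAME = 80                                   # samples generated per frame (assumed)

class GlottalNet(nn.Module):                 # first neural network model
    def __init__(self, glottal_dim=3, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1 + glottal_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, FRAME))
    def forward(self, f0, glottal_feats):
        return self.net(torch.cat([f0, glottal_feats], dim=-1))

class VocalTractNet(nn.Module):              # second neural network model
    def __init__(self, tract_dim=20, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FRAME + tract_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, FRAME))
    def forward(self, glottal_wave, tract_feats):
        return self.net(torch.cat([glottal_wave, tract_feats], dim=-1))

f0 = torch.full((1, 1), 120.0)               # fundamental frequency (Hz)
glottal = torch.randn(1, 3)                  # phase, shape, energy features
tract = torch.randn(1, 20)                   # vocal tract features

glottal_wave = GlottalNet()(f0, glottal)                 # step 1: glottal waveform
speech_wave = VocalTractNet()(glottal_wave, tract)       # step 2: speech waveform
```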
  • Patent number: 11868725
    Abstract: Provided are a server, a client device, and operation methods thereof for training a language model. The server, the client device, and the operation methods thereof identify a word or phrase including a named entity that is incorrectly pronounced by a user or is difficult for the user to accurately pronounce from an input text for use in training a natural language understanding (NLU) model, generate text candidates for use in training the NLU model by replacing the identified word or phrase with a word or phrase predicted to be uttered by the user and having high phonetic similarity to the identified word or phrase, and train the NLU model by using the generated text candidates.
    Type: Grant
    Filed: January 4, 2021
    Date of Patent: January 9, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Hejung Yang, Kwangyoun Kim, Sungsoo Kim
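
The augmentation step above (swap a hard-to-pronounce named entity for phonetically similar words the user might actually say) is sketched below with a crude sound-alike key; the encoding, threshold, and lexicon are illustrative only.

```python
# Toy phonetic-similarity text augmentation for NLU training candidates.
from difflib import SequenceMatcher

def soundalike(word: str) -> str:
    # Very rough phonetic key: lowercase, drop vowels after the first letter.
    w = word.lower()
    return w[0] + "".join(ch for ch in w[1:] if ch not in "aeiou")

def phonetic_similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, soundalike(a), soundalike(b)).ratio()

def training_candidates(sentence: str, entity: str, lexicon: list,
                        threshold: float = 0.6) -> list:
    candidates = [sentence]                                   # keep the original
    for word in lexicon:
        if phonetic_similarity(word, entity) >= threshold:
            candidates.append(sentence.replace(entity, word))
    return candidates

lexicon = ["Gwangalli", "Gwanghalli", "Kwangali", "Seoul"]
print(training_candidates("navigate to Gwangali beach", "Gwangali", lexicon))
```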
  • Patent number: 11862154
    Abstract: An approach for controlling an electronic device is provided. The approach acquires voice information and image information for setting an action to be executed according to a condition, the voice information and the image information being respectively generated from a voice and a behavior associated with the voice of a user. The approach determines an event to be detected according to the condition and a function to be executed according to the action when the event is detected, based on the acquired voice information and the acquired image information. The approach determines at least one detection resource to detect the determined event. In response to the at least one determined detection resource detecting at least one event satisfying the condition, the approach executes the function according to the action.
    Type: Grant
    Filed: June 5, 2020
    Date of Patent: January 2, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Young-chul Sohn, Gyu-tae Park, Ki-beom Lee, Jong-ryul Lee
  • Patent number: 11860916
    Abstract: Some embodiments may obtain a natural language question, determine a context of the natural language question, and generate a first vector based on the natural language question using encoder neural network layers. Some embodiments may access a data table comprising column names, generate vectors based on the column names, and determine attention scores based on the vectors. Some embodiments may update the vectors based on the attention scores, generate a second vector based on the natural language question, and determine a set of strings comprising a name of the column names and a database language operator based on the vectors. Some embodiments may determine values based on the determined database language operator and the name, using a transformer neural network model. Some embodiments may generate a query based on the set of strings and the values.
    Type: Grant
    Filed: December 2, 2022
    Date of Patent: January 2, 2024
    Assignee: DSilo Inc.
    Inventors: Jaya Prakash Narayana Gutta, Sharad Malhautra, Lalit Gupta
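
A condensed sketch of the column-attention flow above appears below in PyTorch. The bag-of-embeddings encoder, the tiny operator head, and the fixed value are placeholders for the encoder layers and the transformer model the abstract names.

```python
# Toy question-to-query sketch with attention over column names.
import torch
import torch.nn as nn

OPERATORS = ["=", ">", "<"]

class QuerySketcher(nn.Module):
    def __init__(self, vocab=2000, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.op_head = nn.Linear(dim, len(OPERATORS))

    def encode(self, token_ids):                       # mean of token embeddings
        return self.embed(token_ids).mean(dim=0)

    def forward(self, question_ids, column_ids_list):
        q = self.encode(question_ids)                  # question vector
        cols = torch.stack([self.encode(c) for c in column_ids_list])
        attention = torch.softmax(cols @ q, dim=0)     # score each column name
        column_index = int(attention.argmax())
        operator = OPERATORS[int(self.op_head(q).argmax())]
        return column_index, operator

model = QuerySketcher()
question = torch.randint(0, 2000, (6,))                # tokenised question
columns = [torch.randint(0, 2000, (2,)) for _ in range(3)]
col_idx, op = model(question, columns)
column_names = ["age", "salary", "department"]
value = "30"                                           # value would come from the decoder
print(f"SELECT * FROM t WHERE {column_names[col_idx]} {op} {value}")
```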
  • Patent number: 11862147
    Abstract: A system for providing information to a user includes and/or interfaces with a set of models and/or algorithms. Additionally or alternatively, the system can include and/or interface with any or all of: a processing subsystem; a sensory output device; a user device; an audio input device; and/or any other components. A method for providing information to a user includes and/or interfaces with: receiving a set of inputs; processing the set of inputs to determine a set of sensory outputs; and providing the set of sensory outputs.
    Type: Grant
    Filed: August 12, 2022
    Date of Patent: January 2, 2024
    Assignee: NeoSensory, Inc.
    Inventors: Oleksii Abramenko, Kaan Donbekci, Michael V. Perrotta, Scott Novich, Kathleen W. McMahon, David M. Eagleman