Patents Examined by Marcus T. Riley
  • Patent number: 11908481
    Abstract: Provided is a method for encoding live-streaming data, including: acquiring first state information associated with a current data frame; generating backup state information by backing up the first state information; generating a first encoded data frame by encoding the current data frame based on a first bit rate and the first state information; generating reset state information by resetting the updated first state information based on the backup state information; generating a second encoded data frame by encoding the current data frame based on a second bit rate and the reset state information; and generating a first target data frame corresponding to the current data frame based on the first encoded data frame and the second encoded data frame.
    Type: Grant
    Filed: January 24, 2022
    Date of Patent: February 20, 2024
    Assignee: BEIJING DAJIA INTERNET INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Wenhao Xing, Chen Zhang
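    Illustrative sketch: the dual-bit-rate flow in the abstract above can be pictured in a few lines of Python. This is a minimal sketch under assumed names (`EncoderState` and `encode_frame` are stand-ins, not the patented codec); it only shows backing up the pre-encode state, encoding at a first bit rate, restoring the backup, and re-encoding the same frame at a second bit rate.
```python
import copy
from dataclasses import dataclass, field

@dataclass
class EncoderState:
    # Hypothetical codec state that is mutated by each encode call.
    history: list = field(default_factory=list)

def encode_frame(frame: bytes, bitrate: int, state: EncoderState) -> bytes:
    # Stand-in for a real codec: records the frame and tags it with the bitrate.
    state.history.append(frame)
    return f"{bitrate}:".encode() + frame

def encode_dual_rate(frame: bytes, state: EncoderState, rate_a: int, rate_b: int) -> bytes:
    # Back up the first (pre-encode) state information.
    backup = copy.deepcopy(state)
    # First encoded data frame at the first bit rate; this updates `state`.
    first = encode_frame(frame, rate_a, state)
    # Reset the updated state back to the backup before re-encoding.
    reset_state = copy.deepcopy(backup)
    # Second encoded data frame at the second bit rate, from the reset state.
    second = encode_frame(frame, rate_b, reset_state)
    # Combine the two encodings into one target data frame (here, by concatenation).
    return first + b"|" + second

if __name__ == "__main__":
    print(encode_dual_rate(b"frame-0", EncoderState(), rate_a=3000, rate_b=800))
```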
  • Patent number: 11908453
    Abstract: A method and a system for training a machine-learning algorithm (MLA) to determine a user class of a user of an electronic device are provided. The method comprises: receiving a training audio signal representative of a training user utterance; soliciting, by the processor, a plurality of assessor-generated labels for the training audio signal, a given one of the plurality of assessor-generated labels being indicative of whether the training user is perceived to be one of a first class and a second class; generating an amalgamated assessor-generated label for the training audio signal, the amalgamated assessor-generated label being indicative of a label distribution of the plurality of assessor-generated labels between the first class and the second class; and generating a training set of data including the training audio signal and the amalgamated assessor-generated label to train the MLA to determine the user class of the user producing an in-use user utterance.
    Type: Grant
    Filed: August 23, 2021
    Date of Patent: February 20, 2024
    Assignee: Direct Cursus Technology L.L.C
    Inventors: Vladimir Andreevich Aliev, Stepan Aleksandrovich Kargaltsev, Artem Valerevich Babenko
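    Illustrative sketch: one way to read the "amalgamated assessor-generated label" is as a soft label giving the vote distribution over the two classes. The sketch below assumes frequency-based aggregation and invented class names; it is an interpretation, not the patented method.
```python
from collections import Counter
from typing import Dict, List, Tuple

def amalgamate_labels(assessor_labels: List[str],
                      classes: Tuple[str, str] = ("class_1", "class_2")) -> Dict[str, float]:
    # Turn individual assessor votes into a label distribution over the two classes.
    counts = Counter(assessor_labels)
    total = sum(counts.get(c, 0) for c in classes)
    if total == 0:
        raise ValueError("no usable assessor labels")
    return {c: counts.get(c, 0) / total for c in classes}

def build_training_example(audio_features: List[float], assessor_labels: List[str]) -> dict:
    # A training example pairs the audio signal (features) with the soft label.
    return {"audio": audio_features, "label": amalgamate_labels(assessor_labels)}

if __name__ == "__main__":
    example = build_training_example([0.1, 0.4, 0.2], ["class_1", "class_1", "class_2"])
    print(example["label"])  # {'class_1': 0.666..., 'class_2': 0.333...}
```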
  • Patent number: 11906320
    Abstract: The present disclosure provides a method and an apparatus for managing navigation broadcast, and a device, related to the intelligent transportation technology field. A specific implementation solution includes: obtaining a geographical identifier of a user; obtaining a statement-conversion template set corresponding to the geographical identifier based on the geographical identifier of the user; converting a standard navigation broadcast statement based on the statement-conversion template set to generate a geographical navigation broadcast statement; and performing navigation broadcast based on the geographical navigation broadcast statement. Thereby, navigation broadcasts are matched with their respective regions, and users in different regions are provided with diversified and personalized navigation broadcasts.
    Type: Grant
    Filed: April 7, 2020
    Date of Patent: February 20, 2024
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Ran Ji, Jizhou Huang, Ying Li, Yongzhi Ji, Lei Jia
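    Illustrative sketch: the statement-conversion step can be pictured as a per-region template lookup. The region identifiers and template contents below are invented for illustration; they are not taken from the patent.
```python
# A toy lookup of statement-conversion templates keyed by geographical identifier.
TEMPLATE_SETS = {
    "region_a": {"Turn left in {dist} meters.": "In {dist} meters, make a left turn."},
    "region_b": {"Turn left in {dist} meters.": "Left turn coming up in {dist} meters."},
}

def convert_broadcast(standard_statement: str, geo_id: str) -> str:
    # Fetch the template set for the user's region; fall back to the standard statement.
    templates = TEMPLATE_SETS.get(geo_id, {})
    return templates.get(standard_statement, standard_statement)

if __name__ == "__main__":
    std = "Turn left in {dist} meters."
    print(convert_broadcast(std, "region_b").format(dist=200))
```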
  • Patent number: 11907667
    Abstract: A system for assisting sharing of information includes circuitry to: input a plurality of sentences each representing a statement made by one of a plurality of users, the sentence being generated by speaking or writing during a meeting or by extracting from at least one of meeting data, email data, electronic file data, and chat data at any time; determine a statement type of the statement represented by each one of the plurality of sentences, the statement type being one of a plurality of statement types previously determined; select, from among the plurality of sentences being input, one or more sentences each representing a statement of a specific statement type of the plurality of types; and output a list of the selected one or more sentences as key statements of the plurality of sentences.
    Type: Grant
    Filed: August 11, 2022
    Date of Patent: February 20, 2024
    Assignee: RICOH COMPANY, LTD.
    Inventor: Tomohiro Shima
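    Illustrative sketch: a toy version of selecting key statements of a specific statement type from meeting sentences. The statement types and the keyword-based classifier are placeholders; the patent abstract does not specify how the type is determined.
```python
from dataclasses import dataclass
from typing import List

# Hypothetical statement types; the abstract only says the types are determined in advance.
STATEMENT_TYPES = ("decision", "action_item", "question", "other")

@dataclass
class Sentence:
    text: str
    speaker: str

def classify(sentence: Sentence) -> str:
    # Placeholder classifier: a real system would use a trained model.
    lowered = sentence.text.lower()
    if "will" in lowered or "by friday" in lowered:
        return "action_item"
    if lowered.endswith("?"):
        return "question"
    return "other"

def key_statements(sentences: List[Sentence], wanted_type: str = "action_item") -> List[str]:
    # Keep only sentences whose statement type matches the requested type.
    return [s.text for s in sentences if classify(s) == wanted_type]

if __name__ == "__main__":
    meeting = [Sentence("Who owns the rollout?", "A"),
               Sentence("Bob will draft the plan by Friday.", "B")]
    print(key_statements(meeting))
```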
  • Patent number: 11900518
    Abstract: A method of producing an avatar video, the method comprising the steps of: providing a reference image of a person's face; providing a plurality of characteristic features representative of a facial model X0 of the person's face, the characteristic features defining a facial pose dependent on the person speaking; providing a target phrase to be rendered over a predetermined time period during the avatar video and providing a plurality of time intervals t within the predetermined time period; generating, for each of said time intervals t, speech features from the target phrase, to provide a sequence of speech features; and generating, using the plurality of characteristic features and sequence of speech features, a sequence of facial models Xt for each of said time intervals t.
    Type: Grant
    Filed: September 17, 2019
    Date of Patent: February 13, 2024
    Assignee: VirtTari Limited
    Inventors: Peter Alistair Brady, Hayden Allen-Vercoe, Sathish Sankarpandi, Ethan Dickson
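    Illustrative sketch: the per-interval generation loop might look like the following, with the speech featurizer and facial-model generator passed in as opaque functions; both are stand-ins for the learned components the abstract refers to, and the toy lambdas below are purely illustrative.
```python
from typing import Callable, List, Sequence

def generate_facial_models(
    characteristic_features: Sequence[float],
    target_phrase: str,
    time_intervals: Sequence[float],
    speech_featurizer: Callable[[str, float], List[float]],
    face_generator: Callable[[Sequence[float], List[float]], List[float]],
) -> List[List[float]]:
    # For every time interval t, derive speech features from the target phrase,
    # then combine them with the person's characteristic features to get X_t.
    models = []
    for t in time_intervals:
        speech_t = speech_featurizer(target_phrase, t)
        models.append(face_generator(characteristic_features, speech_t))
    return models

if __name__ == "__main__":
    # Toy stand-ins for the two learned components.
    featurize = lambda phrase, t: [len(phrase) * t]
    make_face = lambda x0, s: list(x0) + s
    print(generate_facial_models([0.5, 0.2], "hello", [0.0, 0.04, 0.08], featurize, make_face))
```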
  • Patent number: 11900062
    Abstract: Described are methods and systems for generating dynamic conversational queries. For example, as opposed to being a simply reactive system, the methods and systems herein provide means for actively determining a user's intent and generating a dynamic query based on the determined user intent. Moreover, these methods and systems generate these queries in a conversational environment.
    Type: Grant
    Filed: October 1, 2021
    Date of Patent: February 13, 2024
    Assignee: Capital One Services, LLC
    Inventors: Minh Le, Arturo Hernandez Zeledon, Md Arafat Hossain Khan
  • Patent number: 11893996
    Abstract: Techniques for generating a personalization identifier that is usable by a skill to customize output of supplemental content to a user, without the skill being able to determine an identity of the user based on the personalization identifier, are described. A personalization identifier may be generated to be specific to a skill, such that different skills receive different personalization identifiers with respect to the same user. The personalization identifier may be generated by performing a one-way hash of a skill identifier, and a user profile identifier and/or a device identifier. User-perceived latency may be reduced by generating the personalization identifier at least partially in parallel to performing ASR processing and/or NLU processing.
    Type: Grant
    Filed: June 29, 2021
    Date of Patent: February 6, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Mark Conrad Kockerbeck, Song Chen, Aditi Srinivasan, Ryan Idrogo-Lam, Jilani Zeribi, John Botros
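    Illustrative sketch: a skill-specific personalization identifier computed via a one-way hash. The abstract specifies a one-way hash over a skill identifier and a user profile and/or device identifier; the choice of SHA-256 and the field separator here are assumptions.
```python
import hashlib

def personalization_id(skill_id: str, user_profile_id: str, device_id: str = "") -> str:
    # One-way hash over the skill identifier plus the user profile and/or device
    # identifiers, so the same user yields different identifiers for different skills
    # and a skill cannot recover the user's identity from the value it receives.
    material = "|".join([skill_id, user_profile_id, device_id]).encode("utf-8")
    return hashlib.sha256(material).hexdigest()

if __name__ == "__main__":
    same_user = "user-profile-123"
    print(personalization_id("skill-weather", same_user))
    print(personalization_id("skill-music", same_user))  # differs per skill
```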
  • Patent number: 11893989
    Abstract: A system and method for controlling an electronic eyewear device using voice commands receives audio data from a microphone, processes the audio data to identify a wake word, and upon identification of a wake word, processes the audio data to identify at least one action keyword in the audio data. The audio data is provided to one of a plurality of controllers associated with different action keywords or sets of action keywords to implement an action. For example, the audio data may be provided to a settings controller to adjust settings of the electronic eyewear device when the action keyword is indicative of a request to adjust a setting of the electronic eyewear device or to a navigation controller to navigate to the system information of the electronic eyewear device when the action keyword is indicative of a request to navigate to system information of the electronic eyewear device.
    Type: Grant
    Filed: July 13, 2021
    Date of Patent: February 6, 2024
    Assignee: Snap Inc.
    Inventor: Piotr Gurgul
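    Illustrative sketch: wake-word gating followed by keyword-based routing to controllers. The wake word, keywords, and controller functions are invented placeholders; only the overall dispatch pattern follows the abstract.
```python
from typing import Callable, Dict, Optional

WAKE_WORD = "hey glasses"  # hypothetical wake word

def settings_controller(utterance: str) -> str:
    return f"settings controller handling: {utterance}"

def navigation_controller(utterance: str) -> str:
    return f"navigation controller handling: {utterance}"

# Map action keywords (or keyword sets) to the controller that implements the action.
KEYWORD_CONTROLLERS: Dict[str, Callable[[str], str]] = {
    "volume": settings_controller,
    "brightness": settings_controller,
    "system info": navigation_controller,
}

def handle_audio(transcript: str) -> Optional[str]:
    lowered = transcript.lower()
    # Step 1: only proceed if the wake word is present.
    if WAKE_WORD not in lowered:
        return None
    # Step 2: find an action keyword and route the audio to its controller.
    for keyword, controller in KEYWORD_CONTROLLERS.items():
        if keyword in lowered:
            return controller(transcript)
    return None

if __name__ == "__main__":
    print(handle_audio("Hey glasses, turn the volume down"))
```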
  • Patent number: 11887602
    Abstract: Techniques for performing audio-based device location determinations are described. A system may send, to a first device, a command to output audio requesting a location of the first device be determined. A second device may receive the audio and send, to the system, data representing the second device received the audio, where the received data includes spectral energy data representing a spectral energy of the audio as received by the second device. The system may, using the spectral energy data, determine attenuation data representing an attenuation experienced by the audio as it traveled from the first device to the second device. The system may generate, based on the attenuation data, spatial relationship data representing a spatial relationship between the first device and the second device, where the spatial relationship data is usable to determine a device for outputting a response to a subsequently received user input.
    Type: Grant
    Filed: December 10, 2021
    Date of Patent: January 30, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Brendon Jude Wilson, Henry Michael D Souza, Cindy Angie Hou, Christopher Evans, Sumit Garg, Ravina Chopra
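    Illustrative sketch: computing attenuation from emitted versus received spectral energy and mapping it to a coarse spatial relationship. The decibel formula is standard; the room-level threshold is an invented placeholder, not a value from the patent.
```python
import math

def attenuation_db(emitted_energy: float, received_energy: float) -> float:
    # Attenuation experienced by the audio as it traveled from the emitting device
    # to the receiving device, expressed in decibels.
    return 10.0 * math.log10(emitted_energy / received_energy)

def spatial_relationship(emitted_energy: float, received_energy: float) -> dict:
    # Map the attenuation to a coarse spatial relationship between the two devices.
    loss = attenuation_db(emitted_energy, received_energy)
    proximity = "same room" if loss < 20 else "different room"
    return {"attenuation_db": round(loss, 1), "proximity": proximity}

if __name__ == "__main__":
    print(spatial_relationship(emitted_energy=1.0, received_energy=0.004))
```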
  • Patent number: 11881220
    Abstract: A display device for providing a speech recognition service according to an embodiment of the present disclosure can include a display unit, a network interface unit configured to communicate with a server, and a control unit configured to receive a voice command uttered by a user, acquire usage information of the display device, transmit the voice command and the usage information of the display device to the server through the network interface unit, receive, from the server, an utterance intention based on the voice command and the usage information of the display device, and perform an operation corresponding to the received utterance intention.
    Type: Grant
    Filed: May 15, 2019
    Date of Patent: January 23, 2024
    Assignee: LG ELECTRONICS INC.
    Inventors: Hyangjin Lee, Jaekyung Lee
  • Patent number: 11880651
    Abstract: Taste and smell classification from multilanguage descriptions can be performed by extracting, by one or more processors using natural language processing, a text including one or more words associated with taste and smell perceptions from an input received from a plurality of users. The input includes multilanguage information regarding at least one of changes in smell and changes in taste perceived by each of the plurality of users. Feature vectors are generated for the text extracted from the input using global vectors, and a distance between the feature vectors and a plurality of reference descriptors associated with taste and smell is calculated for determining a similarity between the text and the reference descriptors and creating a training dataset based on which a classification model is generated for categorizing the plurality of users according to the at least one of changes in smell and changes in taste.
    Type: Grant
    Filed: June 23, 2021
    Date of Patent: January 23, 2024
    Assignee: International Business Machines Corporation
    Inventors: Pablo Meyer Rojas, Guillermo Cecchi, Elif Eyigoz, Raquel Norel
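    Illustrative sketch: comparing an extracted word's embedding against reference descriptors by cosine distance. The toy vectors and descriptor names are invented; a real system would use pretrained global vectors (GloVe) and the reference descriptor set described in the patent.
```python
import numpy as np

# Toy embeddings standing in for pretrained global vectors; in practice these would
# be looked up from a real embedding table.
EMBEDDINGS = {
    "metallic": np.array([0.9, 0.1]),
    "bland":    np.array([0.1, 0.9]),
    "bitter":   np.array([0.8, 0.3]),
}
REFERENCE_DESCRIPTORS = {"taste_loss": "bland", "taste_distortion": "metallic"}

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_descriptor(extracted_word: str) -> str:
    # Compare the extracted word's vector against each reference descriptor and pick
    # the most similar one (smallest cosine distance).
    vec = EMBEDDINGS[extracted_word]
    return min(REFERENCE_DESCRIPTORS,
               key=lambda k: cosine_distance(vec, EMBEDDINGS[REFERENCE_DESCRIPTORS[k]]))

if __name__ == "__main__":
    print(nearest_descriptor("bitter"))  # expected: taste_distortion
```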
  • Patent number: 11881215
    Abstract: Various embodiments of the present invention relate to a method for providing an intelligent assistance service, and an electronic device performing same. According to an embodiment, the electronic device includes a display, a communication interface, at least one processor, and at least one memory, wherein the memory is configured to store a task customized by a user and mapped to any one among a selected word, phrase, or sentence. The memory may store instructions which, when executed, cause the processor to: display a user interface, configured to set or change the task, on the display; display at least one utterance related to the task as text on the user interface; identify and display at least one replaceable parameter in the utterance; receive a user input, which may be used as the parameter, for selecting or inputting at least one item; and store the task including the item.
    Type: Grant
    Filed: October 10, 2019
    Date of Patent: January 23, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Suneung Park, Taekwang Um, Jaeyung Yeo
  • Patent number: 11875121
    Abstract: Generating automated conversation responses by receiving a conversation input message, determining an intent associated with the conversation input message, detecting content associated with the intent in a data stream in response to determining the intent, and generating a conversation output according to the content and the intent.
    Type: Grant
    Filed: May 28, 2021
    Date of Patent: January 16, 2024
    Assignee: International Business Machines Corporation
    Inventors: Keith Gregory Frost, Stephen Arthur Boxwell, Kyle Matthew Brake, Stanley John Vernier
  • Patent number: 11868720
    Abstract: Techniques are described for training and/or utilizing sub-agent machine learning models to generate candidate dialog responses. In various implementations, a user-facing dialog agent (202, 302), or another component on its behalf, selects one of the candidate responses which is closest to user defined global priority objectives (318). Global priority objectives can include values (306) for a variety of dialog features such as emotion, confusion, objective-relatedness, personality, verbosity, etc. In various implementations, each machine learning model includes an encoder portion and a decoder portion. Each encoder portion and decoder portion can be a recurrent neural network (RNN) model, such as a RNN model that includes at least one memory layer, such as a long short-term memory (LSTM) layer.
    Type: Grant
    Filed: January 16, 2020
    Date of Patent: January 9, 2024
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Vivek Varma Datla, Sheikh Sadid Al Hasan, Aaditya Prakash, Oladimeji Feyisetan Farri, Tilak Raj Arora, Junyi Liu, Ashequl Qadir
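    Illustrative sketch: choosing, among candidate responses produced by sub-agent models, the one closest to user-defined global priority objectives. The feature names and the squared-error distance are assumptions; the abstract lists features such as emotion, confusion, objective-relatedness, personality, and verbosity but does not specify a distance measure.
```python
from typing import Dict, List, Tuple

# Hypothetical dialog-feature values for the global priority objectives.
GLOBAL_PRIORITY_OBJECTIVES = {"emotion": 0.7, "verbosity": 0.3, "objective_relatedness": 0.9}

def distance(features: Dict[str, float], objectives: Dict[str, float]) -> float:
    # Squared-error distance between a candidate's feature scores and the objectives.
    return sum((features.get(k, 0.0) - v) ** 2 for k, v in objectives.items())

def select_response(candidates: List[Tuple[str, Dict[str, float]]]) -> str:
    # Each sub-agent model contributes one (text, feature-scores) candidate; pick the
    # candidate whose feature scores are closest to the global priority objectives.
    return min(candidates, key=lambda c: distance(c[1], GLOBAL_PRIORITY_OBJECTIVES))[0]

if __name__ == "__main__":
    candidates = [
        ("Sure, happy to help!", {"emotion": 0.8, "verbosity": 0.2, "objective_relatedness": 0.9}),
        ("Processing request.",  {"emotion": 0.1, "verbosity": 0.1, "objective_relatedness": 0.8}),
    ]
    print(select_response(candidates))
```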
  • Patent number: 11868716
    Abstract: One or more computer processors parse a received natural language question into an abstract meaning representation (AMR) graph. The one or more computer processors enrich the AMR graph into an extended AMR graph. The one or more computer processors transform the extended AMR graph into a query graph utilizing a path-based approach, wherein the query graph is a directed edge-labeled graph. The one or more computer processors generate one or more answers to the natural language question through one or more queries created utilizing the query graph.
    Type: Grant
    Filed: August 31, 2021
    Date of Patent: January 9, 2024
    Assignee: International Business Machines Corporation
    Inventors: Srinivas Ravishankar, Pavan Kapanipathi Bangalore, Ibrahim Abdelaziz, Nandana Mihindukulasooriya, Dinesh Garg, Salim Roukos, Alexander Gray
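    Illustrative sketch: the data shapes involved in a path-based transformation into a directed edge-labeled query graph, rendered as a conjunctive triple-pattern query. The AMR parsing and enrichment steps are omitted, and the example predicates are invented; this is not the patented algorithm.
```python
from typing import List, Tuple

# A directed edge-labeled query graph, represented as (subject, edge_label, object) triples.
QueryGraph = List[Tuple[str, str, str]]

def paths_to_query_graph(paths: List[List[Tuple[str, str, str]]]) -> QueryGraph:
    # Path-based transformation: each path from the question variable to a grounded
    # entity contributes its edges to the query graph, with duplicates removed.
    graph: QueryGraph = []
    for path in paths:
        for triple in path:
            if triple not in graph:
                graph.append(triple)
    return graph

def to_query_string(graph: QueryGraph) -> str:
    # Render the query graph as a conjunctive triple-pattern query (SPARQL-like syntax).
    patterns = " . ".join(f"{s} {p} {o}" for s, p, o in graph)
    return f"SELECT ?answer WHERE {{ {patterns} }}"

if __name__ == "__main__":
    paths = [[("?answer", ":directedBy", ":SomeDirector")],
             [("?answer", ":releaseYear", '"1999"')]]
    print(to_query_string(paths_to_query_graph(paths)))
```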
  • Patent number: 11862160
    Abstract: A control method for a display system is provided. The display system includes a display device displaying an image, and a voice processing device which generates first voice data based on a first voice requesting a first-type operation belonging to a part of a plurality of types of operations to the display device and transmits the first voice data to a server device. The display device receives a command to execute the first-type operation from the server device. The display device includes a voice recognition unit recognizing a second voice requesting a second-type operation that is different from the first-type operation, and a control unit controlling execution of the first-type operation and the second-type operation. The voice processing device transmits the first voice data requesting a permission for the execution of the second-type operation, to the server device. The display device receives a command permitting the execution of the second-type operation from the server device.
    Type: Grant
    Filed: October 27, 2021
    Date of Patent: January 2, 2024
    Assignee: SEIKO EPSON CORPORATION
    Inventors: Nona Mimura, Mitsunori Tomono
  • Patent number: 11847422
    Abstract: A system and method implemented on a computing device for analyzing a digital corpus of unstructured interlocutor conversations to discover intents, goals, or both intents and goals of one or more parties to the conversations, by grouping the conversation utterances according to semantic similarity clusters; selecting the best utterance(s) that most likely embody a party's stated goal or intent; creating a set of candidate intent names for each cluster based upon each intent utterance in each conversation in each cluster; rating each candidate intent (or goal) for each intent name; and selecting the most likely candidate intent (or goal) name for the purposes of subsequent automation of future conversations such as, but not limited to, automated electronic responses using Artificial Intelligence and machine learning.
    Type: Grant
    Filed: August 26, 2022
    Date of Patent: December 19, 2023
    Assignee: DISCOURSE.AI, INC.
    Inventors: Pedro Vale Lima, Jonathan E. Eisenzopf
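    Illustrative sketch: rating candidate intent names per cluster and selecting the most likely one. Simple frequency counting stands in for the rating step; the upstream semantic-similarity clustering and utterance selection described in the abstract are not shown.
```python
from collections import Counter
from typing import Dict, List

def name_cluster(candidate_intent_names: List[str]) -> str:
    # Rate each candidate intent name by how often it was proposed across the
    # cluster's conversations and keep the most likely one.
    ratings = Counter(candidate_intent_names)
    return ratings.most_common(1)[0][0]

def name_clusters(clusters: Dict[str, List[str]]) -> Dict[str, str]:
    # `clusters` maps a cluster id to the candidate names generated from its
    # best (goal-bearing) utterances; one intent name is selected per cluster.
    return {cluster_id: name_cluster(names) for cluster_id, names in clusters.items()}

if __name__ == "__main__":
    clusters = {
        "c0": ["cancel_subscription", "cancel_subscription", "close_account"],
        "c1": ["update_address", "change_address", "update_address"],
    }
    print(name_clusters(clusters))
```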
  • Patent number: 11842164
    Abstract: The disclosure discloses a method and an apparatus for training a dialog generation model, and a dialog generation method and apparatus, and relates to the field of artificial intelligence. The method includes: encoding a context sample to obtain a first latent variable, and recognizing the first latent variable to obtain a prior latent variable; encoding a response sample to obtain a second latent variable; encoding a response similar sample to obtain a third latent variable; performing recognition according to a Gaussian mixture distribution of the first latent variable, the second latent variable, and the third latent variable to obtain a posterior latent variable; and matching the prior latent variable with the posterior latent variable, and performing adversarial training on a dialog generation model.
    Type: Grant
    Filed: August 31, 2021
    Date of Patent: December 12, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LTD
    Inventors: Zekang Li, Jin Chao Zhang, Zeyang Lei, Fan Dong Meng, Jie Zhou, Cheng Niu
  • Patent number: 11842735
    Abstract: An electronic apparatus and a control method thereof are provided. A method of controlling an electronic apparatus according to an embodiment of the disclosure includes: receiving input of a first utterance, identifying a first task for the first utterance based on the first utterance, providing a response to the first task based on a predetermined response pattern, receiving input of a second utterance, identifying a second task for the second utterance based on the second utterance, determining the degree of association between the first task and the second task, and setting a response pattern for the first task based on the second task based on the determined degree of association satisfying a predetermined condition. The control method of an electronic apparatus may use an artificial intelligence model trained according to at least one of machine learning, a neural network, or a deep learning algorithm.
    Type: Grant
    Filed: May 31, 2022
    Date of Patent: December 12, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yeonho Lee, Kyenghun Lee, Saebom Jang, Silas Jeon
  • Patent number: 11842732
    Abstract: A voice command resolution apparatus, including a memory configured to store instructions; and a processor configured to execute the instructions to: recognize a voice command of a user in an input sound, analyze a non-speech sound included in the input sound, and determine at least one target Internet of things (IoT) device related to execution of the voice command, based on an analysis result of the non-speech sound.
    Type: Grant
    Filed: February 2, 2021
    Date of Patent: December 12, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Ravibhushan B. Tayshete, Sourabh Tiwari, Vinay Vasanth Patage
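    Illustrative sketch: resolving the target IoT device by combining the recognized voice command with a classification of the non-speech sound in the same input audio. The sound classes and device names are invented placeholders; the analysis of the non-speech sound itself is not shown.
```python
from typing import Dict, Optional

# Hypothetical mapping from classes of non-speech sound to the IoT device most likely
# being referred to.
NON_SPEECH_TO_DEVICE: Dict[str, str] = {
    "water_running": "kitchen_faucet",
    "tv_audio": "living_room_tv",
    "vacuum_noise": "robot_vacuum",
}

def resolve_target_device(voice_command: str, non_speech_class: Optional[str]) -> str:
    # Use the non-speech context to disambiguate a command like "turn it off".
    if non_speech_class in NON_SPEECH_TO_DEVICE:
        return NON_SPEECH_TO_DEVICE[non_speech_class]
    return "default_device"

if __name__ == "__main__":
    print(resolve_target_device("turn it off", non_speech_class="tv_audio"))
```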