Patents Examined by Douglas Godbold
  • Patent number: 11966964
    Abstract: A system including one or more processors and one or more non-transitory computer-readable media storing computing instructions configured to run on the one or more processors and perform receiving a voice command from a user; transforming the voice command, using a natural language understanding and rules execution engine, into (a) an intent of the user to add recipe ingredients to a cart and (b) a recipe descriptor; determining a matching recipe from a set of ingested recipes based on the recipe descriptor; determining items and quantities associated with the items that correspond to a set of ingredients included in the matching recipe using a quantity inference algorithm; and automatically adding all of the items and the quantities associated with the items to the cart. Other embodiments are disclosed.
    Type: Grant
    Filed: January 31, 2020
    Date of Patent: April 23, 2024
    Assignee: WALMART APOLLO, LLC
    Inventors: Snehasish Mukherjee, Deepa Mohan, Haoxuan Chen, Phani Ram Sayapaneni, Ghodratollah Aalipour Hafshejani, Shankara Bhargava Subramanya
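The quantity-inference step in the abstract above can be illustrated with a minimal sketch. The catalog, unit handling, and round-up rule are assumptions for illustration; the patent does not disclose this implementation:

```python
from math import ceil

# Toy catalog: item name -> package size in the ingredient's unit (assumption).
CATALOG = {"egg": 12, "flour_g": 1000}

def infer_cart_quantities(ingredients, catalog):
    """For each (item, required_amount) pair, infer how many catalog
    packages must be added to the cart to cover the recipe's needs."""
    cart = {}
    for item, required in ingredients:
        package_size = catalog[item]
        cart[item] = ceil(required / package_size)  # round up to whole packages
    return cart

# A recipe needing 3 eggs and 250 g of flour maps to one package of each.
print(infer_cart_quantities([("egg", 3), ("flour_g", 250)], CATALOG))
# -> {'egg': 1, 'flour_g': 1}
```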
  • Patent number: 11966700
    Abstract: Embodiments of the described technologies are capable of reading a text sequence that includes at least one word; extracting model input data from the text sequence, where the model input data includes, for each word of the text sequence, segment data and non-segment data; using a first machine learning model and at least one second machine learning model, generating, for each word of the text sequence, a multi-level feature set; outputting, by a third machine learning model, in response to input to the third machine learning model of the multi-level feature set, a tagged version of the text sequence; and executing a search based at least in part on the tagged version of the text sequence.
    Type: Grant
    Filed: March 5, 2021
    Date of Patent: April 23, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yuwei Qiu, Gonzalo Aniano Porcile, Yu Gan, Qin Iris Wang, Haichao Wei, Huiji Gao
  • Patent number: 11967309
    Abstract: Apparatus and methods for leveraging machine learning and artificial intelligence to generate a response to an utterance expressed by a user during an interaction between an interactive response system and the user is provided. The methods may include a natural language processor processing the utterance to output an utterance intent. The methods may also include a signal extractor processing the utterance, the utterance intent and previous utterance data to output utterance signals. The methods may additionally include an utterance sentiment classifier using a hierarchy of rules to extract, from a database, a label, the extracting being based on the utterance signals. The methods may further include a sequential neural network classifier using a trained algorithm to process the label and a sequence of historical labels to output a sentiment score. The methods may further include outputting a response based on the utterance intent, the label, and the score.
    Type: Grant
    Filed: December 1, 2021
    Date of Patent: April 23, 2024
    Assignee: Bank of America Corporation
    Inventors: Isaac Persing, Emad Noorizadeh, Ramakrishna R. Yannam, Sushil Golani, Hari Gopalkrishnan, Dana Patrice Morrow Branch
  • Patent number: 11961529
    Abstract: A method of audio signal processing comprising Hybrid Expansive Frequency Compression (hEFC) via a digital signal processor, wherein the method includes: classifying an audio signal input, wherein the audio signal input includes frication high-frequency speech energy, into two or more speech sound classes, followed by selecting a form of input-dependent frequency remapping function; and performing hEFC, including re-coding one or more input frequencies of the speech sound via the input-dependent frequency remapping function to generate an audio output signal, wherein the output signal is a representation of the audio signal input having a lower sound frequency.
    Type: Grant
    Filed: May 17, 2022
    Date of Patent: April 16, 2024
    Assignee: Purdue Research Foundation
    Inventor: Joshua Michael Alexander
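The input-dependent remapping described above can be sketched as a class-dependent piecewise compression function. The knee frequency, ratios, and class names below are illustrative assumptions, not the patented hEFC functions:

```python
def remap_frequency(f_hz, speech_class, knee_hz=4000.0, ratio=2.0):
    """Toy input-dependent frequency remapping: frequencies above a knee
    are compressed toward it, with a compression ratio chosen per speech
    sound class (illustrative values only)."""
    if speech_class == "sibilant":
        ratio = 3.0  # stronger compression for high-frequency frication
    if f_hz <= knee_hz:
        return f_hz                                # low frequencies pass unchanged
    return knee_hz + (f_hz - knee_hz) / ratio      # compress the excess above the knee

print(remap_frequency(10000.0, "sibilant"))  # -> 6000.0
```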
  • Patent number: 11959262
    Abstract: A faucet is provided that electronically controls the flow volume and temperature of water being dispensed. The faucet illustratively includes a faucet body and a faucet handle. In some embodiments, the faucet may include a faucet body and be voice controlled. The faucet illustratively includes an inertial motion unit sensor mounted in the faucet handle to sense spatial orientation of the faucet handle. The faucet illustratively includes an electronic flow control system to adjust flow volume and temperature of water being dispensed. The faucet illustratively includes a controller configured to receive signals from the inertial motion unit sensor and control the electronic flow control system to adjust flow volume and temperature of water being dispensed based upon the position of the faucet handle.
    Type: Grant
    Filed: February 9, 2021
    Date of Patent: April 16, 2024
    Assignee: ASSA ABLOY Americas Residential Inc.
    Inventors: Chasen Scott Beck, Matthew Lovett, Stephen Blizzard, Evan Benstead, Elena Gorkovenko
  • Patent number: 11961509
    Abstract: Methods and systems are disclosed for improving dialog management for task-oriented dialog systems. The disclosed dialog builder leverages machine teaching processing to improve development of dialog managers. In this way, the dialog builder combines the strengths of both rule-based and machine-learned approaches to allow dialog authors to: (1) import a dialog graph developed using popular dialog composers, (2) convert the dialog graph to text-based training dialogs, (3) continuously improve the trained dialogs based on log dialogs, and (4) generate a corrected dialog for retraining the machine learning.
    Type: Grant
    Filed: April 3, 2020
    Date of Patent: April 16, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Swadheen Kumar Shukla, Lars Hasso Liden, Thomas Park, Matthew David Mazzola, Shahin Shayandeh, Jianfeng Gao, Eslam Kamal Abdelreheem
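Step (2) of the dialog builder above, converting an authored dialog graph to text-based training dialogs, can be sketched as enumerating root-to-leaf paths. The traversal and graph encoding are assumptions for illustration:

```python
def graph_to_training_dialogs(graph, start):
    """Enumerate root-to-leaf paths of an authored dialog graph; each path
    becomes one text-based training dialog (illustrative traversal)."""
    dialogs, stack = [], [(start, [start])]
    while stack:
        node, path = stack.pop()
        children = graph.get(node, [])
        if not children:                      # leaf: one complete dialog
            dialogs.append(path)
        for child in children:
            stack.append((child, path + [child]))
    return dialogs

graph = {"greet": ["ask_account", "ask_card"], "ask_account": ["done"]}
print(graph_to_training_dialogs(graph, "greet"))
```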
  • Patent number: 11955110
    Abstract: The present disclosure describes techniques for identifying languages associated with music. Training data may be received, wherein the training data comprises information indicative of audio data representative of a plurality of music samples and metadata associated with the plurality of music samples. The training data further comprises information indicating a language corresponding to each of the plurality of music samples. A machine learning model may be trained to identify a language associated with a piece of music by applying the training data to the machine learning model until the model reaches a predetermined recognition accuracy. A language associated with the piece of music may be determined using the trained machine learning model.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: April 9, 2024
    Assignee: LEMON, INC.
    Inventor: Keunwoo Choi
  • Patent number: 11955112
    Abstract: A speech-processing system may provide access to one or more virtual assistants via a voice-controlled device. A user may leverage a first virtual assistant to translate a natural language command from a first language into a second language, which the device can forward to a second virtual assistant for processing. The device may receive a command from a user and send input data representing the command to a first speech-processing system representing the first virtual assistant. The device may receive a response in the form of a first natural language output from the first speech-processing system along with an indication that the first natural language output should be directed to a second speech-processing system representing the second virtual assistant. For example, the command may be in the first language, and the first natural language output may be in the second language, which is understandable by the second speech-processing system.
    Type: Grant
    Filed: February 5, 2021
    Date of Patent: April 9, 2024
    Assignee: Amazon Technologies, Inc.
    Inventor: Robert John Mars
  • Patent number: 11955122
    Abstract: Techniques for determining whether audio is machine-outputted or non-machine-outputted are described. A device may receive audio, may process the audio to determine audio data including audio features corresponding to the audio, and may process the audio data to determine audio embedding data. The device may process the audio embedding data to determine whether the audio is machine-outputted or non-machine-outputted. In response to determining that the audio is machine-outputted, the audio may be discarded or not processed further. Alternatively, in response to determining that the audio is non-machine-outputted (e.g., live speech from a user), the audio may be processed further (e.g., using ASR processing).
    Type: Grant
    Filed: September 28, 2021
    Date of Patent: April 9, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Mansour Ahmadi, Udhgee Murugesan, Roger Hau-Bin Cheng, Roberto Barra Chicote, Kian Jamali Abianeh, Yixiong Meng, Oguz Hasan Elibol, Itay Teller, Kevin Kwanghoon Ha, Andrew Roths
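The gating decision in the abstract above can be sketched as a binary classifier over the audio embedding. The logistic model and the function names are stand-ins assumed for illustration; the patent does not specify the classifier:

```python
import numpy as np

def is_machine_outputted(embedding, weights, bias, threshold=0.5):
    """Toy gate over an audio embedding: a logistic score above the
    threshold marks the audio as machine-outputted."""
    score = 1.0 / (1.0 + np.exp(-(np.dot(weights, embedding) + bias)))
    return score > threshold

def handle_audio(embedding, weights, bias):
    # Machine-outputted audio is discarded; live speech goes on to ASR.
    if is_machine_outputted(embedding, weights, bias):
        return "discard"
    return "run_asr"
```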
  • Patent number: 11948557
    Abstract: Aspects of the disclosure relate to using an apparatus for flagging and removing real time workflows that produce sub-optimal results. Such an apparatus may include an utterance sentiment classifier. The apparatus stores a hierarchy of rules. Each of the rules is associated with one or more rule signals. In response to receiving the one or more utterance signals, the classifier iterates through the hierarchy of rules in sequential order to identify a first rule for which the one or more utterance signals are a superset of the rule's one or more rule signals. In response to receiving the one or more alternate utterance signals from the signal extractor, the classifier may iterate through the hierarchy of rules in sequential order to identify the first rule in the hierarchy for which the one or more alternate utterance signals are a superset of the first rule's one or more rule signals.
    Type: Grant
    Filed: December 1, 2021
    Date of Patent: April 2, 2024
    Assignee: Bank of America Corporation
    Inventors: Ramakrishna R. Yannam, Isaac Persing, Emad Noorizadeh
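The rule-matching iteration the abstract describes, finding the first rule in the hierarchy whose rule signals are all contained in the utterance signals, can be sketched directly. The rule names and signals below are hypothetical:

```python
def first_matching_rule(utterance_signals, rule_hierarchy):
    """Iterate through the rule hierarchy in sequential order and return
    the first rule whose required signals are all present in (i.e., a
    subset of) the utterance signals."""
    signals = set(utterance_signals)
    for rule_name, rule_signals in rule_hierarchy:
        if set(rule_signals) <= signals:   # utterance signals are a superset
            return rule_name
    return None

rules = [
    ("escalate", {"negative", "repeat"}),  # higher-priority rule listed first
    ("neutral", {"greeting"}),
]
print(first_matching_rule({"negative", "repeat", "greeting"}, rules))
# -> 'escalate'
```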
  • Patent number: 11948564
    Abstract: Provided is an information processing device including a response control unit that controls a response to a user's utterance based on a first utterance interpretation result and a second utterance interpretation result. The first utterance interpretation result is a result of natural language understanding processing for an utterance text generated by automatic speech recognition processing based on the user's utterance and the second utterance interpretation result is an interpretation result acquired based on learning data in which the first utterance interpretation result and the utterance text used to acquire the first utterance interpretation result are associated with each other. The response control unit further controls the response to the user's utterance based on the second utterance interpretation result in a case where the second utterance interpretation result is acquired based on the user's utterance before acquisition of the first utterance interpretation result.
    Type: Grant
    Filed: March 13, 2019
    Date of Patent: April 2, 2024
    Assignee: SONY CORPORATION
    Inventors: Hiro Iwase, Yuhei Taki, Kunihito Sawai
  • Patent number: 11934432
    Abstract: Systems and methods are described for generating a dynamic label for a real-time communication session. An ongoing communication session is monitored to identify a content characteristic of the communication session. A size of a sliding window is determined based on the content characteristic, where the size of the sliding window defines a segment of the communication session to include in the most recent subset of communications. The most recent subset of communications is analyzed to identify relevant words based on one or more relevancy criteria. A dynamic label associated with the communication session is generated, where the dynamic label includes at least a selected one of the relevant words.
    Type: Grant
    Filed: August 31, 2021
    Date of Patent: March 19, 2024
    Assignee: SHOPIFY INC.
    Inventors: Christopher Landry, Angela Chen, Nancy Cao, Andrew Ni, Jacob Adolphe, Joaquin Fuenzalida Nunez
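The sliding-window labeling above can be sketched with a toy relevancy criterion (word frequency after stopword removal); the real relevancy criteria and window-sizing logic are not disclosed:

```python
from collections import Counter

STOPWORDS = {"the", "a", "to", "and", "is", "i", "my"}

def dynamic_label(messages, window_size):
    """Label an ongoing session from its most recent messages: keep only
    the last `window_size` messages (the sliding window), drop stopwords,
    and pick the most frequent remaining word (toy relevancy criterion)."""
    window = messages[-window_size:]
    words = [w for m in window for w in m.lower().split() if w not in STOPWORDS]
    return Counter(words).most_common(1)[0][0]

msgs = ["Hi", "I need to return my order", "the order arrived damaged"]
print(dynamic_label(msgs, window_size=2))  # -> 'order'
```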
  • Patent number: 11935518
    Abstract: A joint works production method of a joint works production server using collective intelligence includes receiving a subject of joint works from participants of the joint works production, receiving preference information on the received subject from other participants, determining whether to adopt the subject of the joint works according to the received preference information, and classifying, when the subject of the joint works is adopted, the adopted subject of the joint works by subjects and storing the classified subject.
    Type: Grant
    Filed: March 15, 2021
    Date of Patent: March 19, 2024
    Inventor: Bang Hyeon Kim
  • Patent number: 11935532
    Abstract: Aspects of the disclosure relate to receiving a stateless application programming interface (“API”) request. The API request may store an utterance, previous utterance data and a sequence of labels, each label in the sequence of labels being associated with a previous utterance expressed by a user during an interaction. The previous utterance data may, in certain embodiments, be limited to a pre-determined number of utterances occurring prior to the utterance. Embodiments process the utterance, using a natural language processor in electronic communication with the first processor, to output an utterance intent, a semantic meaning of the utterance and an utterance parameter. The utterance parameter may include words in the utterance and be associated with the intent. The natural language processor may append the utterance intent, the semantic meaning of the utterance and the utterance parameter to the API request. A signal extractor processor may append the plurality of utterance signals to the API request.
    Type: Grant
    Filed: December 1, 2021
    Date of Patent: March 19, 2024
    Assignee: Bank of America Corporation
    Inventors: Ramakrishna R. Yannam, Emad Noorizadeh, Isaac Persing, Sushil Golani, Hari Gopalkrishnan, Dana Patrice Morrow Branch
  • Patent number: 11935531
    Abstract: Apparatus and methods for leveraging machine learning and artificial intelligence to assess a sentiment of an utterance expressed by a user during an interaction between an interactive response system and the user is provided. The methods may include a natural language processor processing the utterance to output an utterance intent. The methods may also include a signal extractor processing the utterance, the utterance intent and previous utterance data to output utterance signals. The methods may additionally include an utterance sentiment classifier using a hierarchy of rules to extract, from a database, a label, the extracting being based on the utterance signals. The methods may further include a sequential neural network classifier using a trained algorithm to process the label and a sequence of historical labels to output a sentiment score.
    Type: Grant
    Filed: December 1, 2021
    Date of Patent: March 19, 2024
    Assignee: Bank of America Corporation
    Inventors: Isaac Persing, Emad Noorizadeh, Ramakrishna R. Yannam, Sushil Golani, Hari Gopalkrishnan, Dana Patrice Morrow Branch
  • Patent number: 11935546
    Abstract: Audio streaming devices, systems, and methods may employ adaptive differential pulse code modulation (ADPCM) techniques providing for optimum performance even while ensuring robustness against transmission errors. One illustrative device includes: a difference element that produces a sequence of prediction error values by subtracting predicted values from audio samples; a scaling element that produces scaled error values by dividing each prediction error by a corresponding envelope estimate; a quantizer that operates on the scaled error values to produce quantized error values; a multiplier that uses the corresponding envelope estimates to produce reconstructed error values; a predictor that produces the next audio sample values based on the reconstructed error values; and an envelope estimator.
    Type: Grant
    Filed: May 9, 2022
    Date of Patent: March 19, 2024
    Assignee: SEMICONDUCTOR COMPONENTS INDUSTRIES, LLC
    Inventor: Erkan Onat
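The encoder elements the abstract lists (difference, envelope scaling, quantization, reconstruction, prediction, envelope estimation) form a feedback loop that can be sketched in a few lines. The trivial one-tap predictor, quantizer range, and envelope smoothing constant are illustrative assumptions, not the patented codec:

```python
def adpcm_encode(samples, alpha=0.9):
    """Toy ADPCM loop mirroring the elements the abstract lists:
    difference -> envelope scaling -> quantization -> reconstruction ->
    prediction and envelope update."""
    predicted, envelope = 0.0, 1.0
    codes = []
    for x in samples:
        err = x - predicted                         # difference element
        q = max(-4, min(4, round(err / envelope)))  # scale, then small quantizer
        codes.append(q)
        recon_err = q * envelope                    # multiplier: reconstructed error
        predicted = predicted + recon_err           # trivial last-value predictor
        # The envelope estimator tracks error magnitude from reconstructed
        # values only, so a decoder can resynchronize after transmission errors.
        envelope = alpha * envelope + (1 - alpha) * abs(recon_err) + 1e-6
    return codes
```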
  • Patent number: 11935551
    Abstract: The present invention relates to audio coding systems which make use of a harmonic transposition method for high frequency reconstruction (HFR). A system and a method for generating a high frequency component of a signal from a low frequency component of the signal is described. The system comprises an analysis filter bank providing a plurality of analysis subband signals of the low frequency component of the signal. It also comprises a non-linear processing unit to generate a synthesis subband signal with a synthesis frequency by modifying the phase of a first and a second of the plurality of analysis subband signals and by combining the phase-modified analysis subband signals. Finally, it comprises a synthesis filter bank for generating the high frequency component of the signal from the synthesis subband signal.
    Type: Grant
    Filed: May 3, 2023
    Date of Patent: March 19, 2024
    Assignee: DOLBY INTERNATIONAL AB
    Inventors: Lars Villemoes, Per Hedelin
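The non-linear processing step above, combining phase-modified analysis subband signals into one synthesis subband sample, can be sketched for a single pair of complex subband samples. Summing the phases (so frequencies add) and taking a geometric-mean magnitude are assumptions for illustration, not Dolby's exact method:

```python
import cmath

def synth_sample(a1, a2):
    """Toy cross-product transposition for one synthesis subband sample:
    phases of the two analysis samples add (shifting energy upward in
    frequency); the magnitude rule shown is illustrative."""
    mag = (abs(a1) * abs(a2)) ** 0.5               # geometric-mean magnitude
    phase = cmath.phase(a1) + cmath.phase(a2)      # phases add -> f1 + f2
    return cmath.rect(mag, phase)
```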
  • Patent number: 11935557
    Abstract: Various embodiments set forth systems and techniques for explaining domain-specific terms detected in a media content stream. The techniques include detecting a speech portion included in an audio signal; determining that the speech portion comprises a domain-specific term; determining an explanatory phrase associated with the domain-specific term; and integrating the explanatory phrase associated with the domain-specific term into playback of the audio signal.
    Type: Grant
    Filed: February 1, 2021
    Date of Patent: March 19, 2024
    Assignee: Harman International Industries, Incorporated
    Inventors: Stefan Marti, Evgeny Burmistrov, Joseph Verbeke, Priya Seshadri
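The lookup-and-splice behavior above can be sketched in text form (the patent integrates the explanation into audio playback; a text transcript stands in here, and the glossary entry is hypothetical):

```python
GLOSSARY = {"ebitda": "earnings before interest, taxes, depreciation and amortization"}

def explain_terms(transcript_words, glossary):
    """Scan recognized speech for domain-specific terms and splice an
    explanatory phrase into the playback text right after each hit."""
    out = []
    for word in transcript_words:
        out.append(word)
        key = word.lower().strip(".,")
        if key in glossary:
            out.append(f"(that is, {glossary[key]})")
    return " ".join(out)

print(explain_terms(["strong", "EBITDA", "growth"], GLOSSARY))
```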
  • Patent number: 11935553
    Abstract: It is possible to stably learn, in a short time, a model that can output embedded vectors for identifying the set of time-frequency points at which the same sound source is dominant. Parameters of a neural network, which is a CNN, are learned based on a spectrogram of a signal formed by a plurality of sound sources, such that the embedded vectors the network outputs for time-frequency points at which the same sound source is dominant are similar to one another.
    Type: Grant
    Filed: February 22, 2019
    Date of Patent: March 19, 2024
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Hirokazu Kameoka, Li Li
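The similarity objective above can be sketched with the standard deep-clustering affinity loss, which is assumed here; the abstract does not state the exact loss used:

```python
import numpy as np

def embedding_affinity_loss(V, Y):
    """Deep-clustering-style objective: push embeddings V of time-frequency
    points with the same dominant source (one-hot rows of Y) to be similar,
    by matching the affinity matrices V V^T and Y Y^T."""
    return np.sum((V @ V.T - Y @ Y.T) ** 2)

# Two TF points share a dominant source; identical embeddings give a
# smaller loss than dissimilar ones.
Y = np.array([[1.0, 0.0], [1.0, 0.0]])
same = np.array([[1.0, 0.0], [1.0, 0.0]])
diff = np.array([[1.0, 0.0], [0.0, 1.0]])
print(embedding_affinity_loss(same, Y) < embedding_affinity_loss(diff, Y))  # -> True
```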
  • Patent number: 11929077
    Abstract: Embodiments of systems and methods for user enrollment in speaker authentication and speaker identification systems are disclosed. In some embodiments, the enrollment process includes collecting speech samples that are examples of multiple speech types spoken by a user, computing a speech representation for each speech sample, and aggregating the example speech representations to form a robust overall representation or user voiceprint of the user's speech.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: March 12, 2024
    Assignee: DTS Inc.
    Inventors: Michael M. Goodwin, Teodora Ceanga, Eloy Geenjaar, Gadiel Seroussi, Brandon Smith
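The aggregation step above can be sketched with a length-normalized mean of per-sample embeddings, a common choice assumed here; the patent may aggregate the representations differently:

```python
import numpy as np

def build_voiceprint(sample_embeddings):
    """Aggregate per-sample speech representations (e.g., one per speech
    type collected at enrollment) into a single unit-norm user voiceprint."""
    mean = np.mean(np.stack(sample_embeddings), axis=0)
    return mean / np.linalg.norm(mean)

reps = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # e.g., read vs. free speech
vp = build_voiceprint(reps)
print(np.allclose(np.linalg.norm(vp), 1.0))  # -> True
```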