Patents Examined by Fariba Sirjani
  • Patent number: 11436417
    Abstract: Methods, apparatus, and computer readable media are described herein for allowing a first user to interface with an automated assistant to assign tasks to additional user(s), and/or for causing notification(s) of the assigned task to be rendered to the additional user(s) via corresponding automated assistant interface(s). In various implementations, one or more criteria can be utilized in selecting a group of client device(s), linked to the additional user, via which to provide the notification(s) for the task assigned to the additional user. Also, in various implementations condition(s) for providing the notification(s) for the task can be determined, and the notification(s) provided based on determining satisfaction of the condition(s).
    Type: Grant
    Filed: February 12, 2020
    Date of Patent: September 6, 2022
    Assignee: GOOGLE LLC
    Inventors: Ibrahim Badr, Yariv Adan, Hugo Santos, Shikha Kapoor, Karthik Nagaraj, Glenn Wilson, Arwa Rangwala, Leo Deegan, Peter Krogh
  • Patent number: 11423888
    Abstract: Predicting and learning users' intended actions on an electronic device based on free-form speech input. Users' actions can be monitored to develop a list of carrier phrases having one or more actions that correspond to the carrier phrases. A user can speak a command into a device to initiate an action. The spoken command can be parsed and compared to a list of carrier phrases. If the spoken command matches one of the known carrier phrases, the corresponding action(s) can be presented to the user for selection. If the spoken command does not match one of the known carrier phrases, search results (e.g., Internet search results) corresponding to the spoken command can be presented to the user. The actions of the user in response to the presented action(s) and/or the search results can be monitored to update the list of carrier phrases.
    Type: Grant
    Filed: April 4, 2019
    Date of Patent: August 23, 2022
    Assignee: Google LLC
    Inventors: William J. Byrne, Alexander H. Gruenstein, Douglas H. Beeferman
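The carrier-phrase lookup described in the abstract above can be sketched in a few lines. The phrase table, action names, and prefix-matching rule below are illustrative assumptions, not details from the patent:

```python
# Hypothetical carrier-phrase table; per the abstract, a real system
# would build and update this list by monitoring users' actions.
CARRIER_PHRASES = {
    "call": ["start_phone_call"],
    "navigate to": ["open_maps_directions"],
    "play": ["play_media"],
}

def resolve_command(spoken: str):
    """Return candidate actions when the command begins with a known
    carrier phrase; otherwise fall back to a web search."""
    text = spoken.lower().strip()
    for phrase, actions in CARRIER_PHRASES.items():
        if text.startswith(phrase + " "):
            return ("actions", actions)
    return ("search", [text])  # e.g. hand off to an Internet search
```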
  • Patent number: 11410663
    Abstract: An apparatus for determining an estimated pitch lag is provided. The apparatus includes an input interface for receiving a plurality of original pitch lag values, and a pitch lag estimator for estimating the estimated pitch lag. The pitch lag estimator is configured to estimate the estimated pitch lag depending on a plurality of original pitch lag values and depending on a plurality of information values, wherein for each original pitch lag value of the plurality of original pitch lag values, an information value of the plurality of information values is assigned to the original pitch lag value.
    Type: Grant
    Filed: June 18, 2019
    Date of Patent: August 9, 2022
    Inventors: Jeremie Lecomte, Michael Schnabel, Goran Markovic, Martin Dietz, Bernhard Neugebauer
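The information-weighted combination of original pitch lag values described above might look like the following sketch, where a weighted mean stands in for the unspecified estimator and each information value acts as a reliability weight:

```python
def estimate_pitch_lag(lags, info_values):
    """Estimate a pitch lag from a plurality of original pitch lag
    values, weighting each lag by its assigned information value
    (read here as a reliability score). The weighted mean is one
    plausible estimator; the abstract does not prescribe a formula."""
    total = sum(info_values)
    if total == 0:
        raise ValueError("information values must not all be zero")
    return sum(lag * w for lag, w in zip(lags, info_values)) / total
```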
  • Patent number: 11405466
    Abstract: Systems and processes for operating an intelligent automated assistant are provided. In one example process, a first instance of a digital assistant operating on a first electronic device receives a natural-language speech input indicative of a user request. The first electronic device obtains a set of data corresponding to a second instance of the digital assistant on a second electronic device, and updates one or more settings of the first instance of the digital assistant based on the received set of data. The first instance of the digital assistant performs one or more tasks based on the updated one or more settings and provides an output indicative of whether the one or more tasks are performed.
    Type: Grant
    Filed: May 5, 2020
    Date of Patent: August 2, 2022
    Assignee: Apple Inc.
    Inventors: Benjamin S. Phipps, Gennaro Frazzingaro, Karl F. Schramm
  • Patent number: 11393472
    Abstract: An apparatus and method for executing a voice command in an electronic device. In an exemplary embodiment, a voice signal is detected and speech thereof is recognized. When the recognized speech contains a wakeup command, a voice command mode is activated, and a signal containing at least a portion of the detected voice signal is transmitted to a server. The server generates a control signal or a result signal corresponding to the voice command, and transmits the same to the electronic device. The device receives and processes the control or result signal, and awakens. Thereby, voice commands are executed without the need for the user to physically touch the electronic device.
    Type: Grant
    Filed: May 18, 2020
    Date of Patent: July 19, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Subhojit Chakladar, Sang-Hoon Lee, Hee-Woon Kim
  • Patent number: 11393484
    Abstract: The quality of encoded signals can be improved by reclassifying AUDIO signals carrying non-speech data as VOICED signals when periodicity parameters of the signal satisfy one or more criteria. In some embodiments, only low or medium bit rate signals are considered for re-classification. The periodicity parameters can include any characteristic or set of characteristics indicative of periodicity. For example, the periodicity parameter may include pitch differences between subframes in the audio signal, a normalized pitch correlation for one or more subframes, an average normalized pitch correlation for the audio signal, or combinations thereof. Audio signals which are re-classified as VOICED signals may be encoded in the time-domain, while audio signals that remain classified as AUDIO signals may be encoded in the frequency-domain.
    Type: Grant
    Filed: April 4, 2019
    Date of Patent: July 19, 2022
    Assignee: Huawei Technologies Co., Ltd.
    Inventor: Yang Gao
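A toy version of the periodicity-based re-classification test above, using the two kinds of parameters the abstract lists (pitch differences between subframes and normalized pitch correlations); the thresholds are assumed for illustration:

```python
def reclassify_as_voiced(pitch_lags, norm_corrs, low_rate,
                         max_lag_diff=4, min_avg_corr=0.8):
    """Decide whether an AUDIO-classified frame should be re-classified
    as VOICED: the pitch lag must be stable across subframes and the
    average normalized pitch correlation must be high. Threshold
    values here are illustrative assumptions, not from the patent."""
    if not low_rate:  # only low/medium bit rates are considered
        return False
    diffs = [abs(a - b) for a, b in zip(pitch_lags, pitch_lags[1:])]
    stable_pitch = all(d <= max_lag_diff for d in diffs)
    avg_corr = sum(norm_corrs) / len(norm_corrs)
    return stable_pitch and avg_corr >= min_avg_corr
```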
  • Patent number: 11386896
    Abstract: Systems and methods are disclosed. A digitized human vocal expression of a user and digital images are received over a network from a remote device. The digitized human vocal expression is processed to determine characteristics of the human vocal expression, including: pitch, volume, rapidity, a magnitude spectrum, and/or pauses in speech. Digital images are received and processed to detect characteristics of the user's face, including detecting if one or more of the following is present: a sagging lip, a crooked smile, uneven eyebrows, and/or facial droop. Based at least in part on the human vocal expression characteristics and face characteristics, a determination is made as to what action is to be taken. A cepstrum pitch may be determined using an inverse Fourier transform of a logarithm of a spectrum of a human vocal expression signal. The volume may be determined using peak heights in a power spectrum of the human vocal expression.
    Type: Grant
    Filed: January 29, 2020
    Date of Patent: July 12, 2022
    Assignee: The Notebook, LLC
    Inventor: Karen Elaine Khaleghi
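The cepstrum-pitch computation mentioned above (inverse Fourier transform of the log spectrum, then a peak search) can be sketched with NumPy; the 60-400 Hz search range is an assumption, not taken from the patent:

```python
import numpy as np

def cepstral_pitch(signal, sample_rate, fmin=60.0, fmax=400.0):
    """Estimate pitch via the real cepstrum: take the inverse Fourier
    transform of the log magnitude spectrum, then pick the quefrency
    peak corresponding to a period between 1/fmax and 1/fmin."""
    spectrum = np.fft.rfft(signal)
    log_mag = np.log(np.abs(spectrum) + 1e-12)   # avoid log(0)
    cepstrum = np.fft.irfft(log_mag)
    q_lo = int(sample_rate / fmax)               # shortest period of interest
    q_hi = int(sample_rate / fmin)               # longest period of interest
    peak_q = q_lo + int(np.argmax(cepstrum[q_lo:q_hi]))
    return sample_rate / peak_q                  # pitch estimate in Hz
```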
  • Patent number: 11388516
    Abstract: Systems, apparatuses, and methods are described for a privacy blocking device configured to prevent receipt, by a listening device, of video and/or audio data until a trigger occurs. A blocker may be configured to prevent receipt of video and/or audio data by one or more microphones and/or one or more cameras of a listening device. The blocker may use the one or more microphones, the one or more cameras, and/or one or more second microphones and/or one or more second cameras to monitor for a trigger. The blocker may process the data. Upon detecting the trigger, the blocker may transmit data to the listening device. For example, the blocker may transmit all or a part of a spoken phrase to the listening device.
    Type: Grant
    Filed: February 7, 2020
    Date of Patent: July 12, 2022
    Inventor: Thomas Stachura
  • Patent number: 11379660
    Abstract: A method, system, and computer program product for using a natural language processor is disclosed. Included are importing highlighted and non-highlighted training text each including training nodes, one-hot encoding the training text, training a projection model using the training text, processing the highlighted training text using the projection model, and training a classifier model using the highlighted processed training text. Also included are importing new text including new nodes, one-hot encoding the new text, processing the new text using the projection model, and determining, using the classifier model, whether one of the new nodes is in a sought-after class.
    Type: Grant
    Filed: June 27, 2019
    Date of Patent: July 5, 2022
    Assignee: International Business Machines Corporation
    Inventors: Joshua Cason, Chris Mwarabu, Thomas Hay Rogers, Corville O. Allen
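The front half of the pipeline above (one-hot encoding of text followed by a projection model) can be illustrated as follows; the vocabulary is invented, and the fixed random matrix is a stand-in for a projection model that would actually be trained on the imported text:

```python
import numpy as np

def one_hot(tokens, vocab):
    """One-hot encode a token sequence against a vocabulary index."""
    mat = np.zeros((len(tokens), len(vocab)))
    for i, tok in enumerate(tokens):
        mat[i, vocab[tok]] = 1.0
    return mat

vocab = {"contract": 0, "party": 1, "shall": 2, "terminate": 3}
rng = np.random.default_rng(0)
projection = rng.standard_normal((len(vocab), 2))  # stand-in for the trained projection model

encoded = one_hot(["party", "shall", "terminate"], vocab)
projected = encoded @ projection                   # dense features for the classifier model
```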
  • Patent number: 11380335
    Abstract: A method of encoding samples in a digital signal is provided that includes receiving a frame of N samples of the digital signal, determining L possible distinct data values in the N samples, determining a reference data value in the L possible distinct data values and a coding order of L-1 remaining possible distinct data values, wherein each of the L-1 remaining possible distinct data values is mapped to a position in the coding order, decomposing the N samples into L-1 coding vectors based on the coding order, wherein each coding vector identifies the locations of one of the L-1 remaining possible distinct data values in the N samples, and encoding the L-1 coding vectors.
    Type: Grant
    Filed: November 21, 2019
    Date of Patent: July 5, 2022
    Assignee: Texas Instruments Incorporated
    Inventors: Lorin Paul Netsch, Jacek Piotr Stachurski
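A plain-Python sketch of the decomposition described above: each of the L-1 coding vectors marks the positions of one non-reference value, so the samples can be reconstructed from the vectors plus the reference value. Indicator vectors are one straightforward reading of the abstract, not necessarily the patented encoding:

```python
def decompose(samples, coding_order, reference):
    """Split N samples into L-1 indicator ('coding') vectors, one per
    non-reference value in coding order; vector j marks the positions
    of coding_order[j] in the samples."""
    assert reference not in coding_order
    return [[1 if s == value else 0 for s in samples]
            for value in coding_order]

def reconstruct(vectors, coding_order, reference, n):
    """Invert the decomposition: any position not claimed by a coding
    vector holds the reference data value."""
    out = [reference] * n
    for vec, value in zip(vectors, coding_order):
        for i, bit in enumerate(vec):
            if bit:
                out[i] = value
    return out
```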
  • Patent number: 11379659
    Abstract: A method performed by a device may include identifying a plurality of samples of textual content; performing tokenization of the plurality of samples to generate a respective plurality of tokenized samples; performing embedding of the plurality of tokenized samples to generate a sample matrix; determining groupings of attributes of the sample matrix using a convolutional neural network; determining context relationships between the groupings of attributes using a bidirectional long short term memory (LSTM) technique; selecting predicted labels for the plurality of samples using a model, wherein the model selects, for a particular sample of the plurality of samples, a predicted label of the predicted labels from a plurality of labels based on respective scores of the particular sample with regard to the plurality of labels and based on a nonparametric paired comparison of the respective scores; and providing information identifying the predicted labels.
    Type: Grant
    Filed: February 7, 2020
    Date of Patent: July 5, 2022
    Assignee: Capital One Services, LLC
    Inventors: Jon Austin Osbourne, Aaron Raymer, Megan Yetman, Venkat Yashwanth Gunapati
  • Patent number: 11380351
    Abstract: A method for pulmonary condition monitoring includes selecting a phrase from an utterance of a user of an electronic device, wherein the phrase matches an entry of multiple phrases. At least one speech feature that is associated with one or more pulmonary conditions within the phrase is identified. A pulmonary condition is determined based on analysis of the at least one speech feature.
    Type: Grant
    Filed: January 14, 2019
    Date of Patent: July 5, 2022
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Ebrahim Nematihosseinabadi, Md Mahbubur Rahman, Viswam Nathan, Korosh Vatanparvar, Jilong Kuang, Jun Gao
  • Patent number: 11341958
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training acoustic models and using the trained acoustic models. A connectionist temporal classification (CTC) acoustic model is accessed, the CTC acoustic model having been trained using a context-dependent state inventory generated from approximate phonetic alignments determined by another CTC acoustic model trained without fixed alignment targets. Audio data for a portion of an utterance is received. Input data corresponding to the received audio data is provided to the accessed CTC acoustic model. Data indicating a transcription for the utterance is generated based on output that the accessed CTC acoustic model produced in response to the input data. The data indicating the transcription is provided as output of an automated speech recognition service.
    Type: Grant
    Filed: September 16, 2020
    Date of Patent: May 24, 2022
    Assignee: Google LLC
    Inventors: Kanury Kanishka Rao, Andrew W. Senior, Hasim Sak
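Independent of the patented training scheme, the standard CTC post-processing step that turns per-frame model output into a transcription (merge repeated labels, then drop blanks) looks like this:

```python
BLANK = "_"

def ctc_collapse(frame_labels, blank=BLANK):
    """Collapse per-frame CTC labels into a transcription: merge runs
    of repeated labels, then remove blank symbols. This is generic
    CTC decoding, not the specific model described above."""
    out = []
    prev = None
    for lab in frame_labels:
        if lab != prev and lab != blank:
            out.append(lab)
        prev = lab
    return "".join(out)
```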
  • Patent number: 11323132
    Abstract: An encoding apparatus includes a memory and a processor configured to acquire text data, specify a first dynamic dictionary among a plurality of dynamic dictionaries based on attribute information of a first word included in the text data, register the first word in association with a first dynamic code in the first dynamic dictionary, and encode the first word into the first dynamic code.
    Type: Grant
    Filed: March 8, 2018
    Date of Patent: May 3, 2022
    Assignee: FUJITSU LIMITED
    Inventors: Masahiro Kataoka, Junki Hakamata
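The attribute-routed dynamic dictionaries above can be sketched as follows; the attribute names and the (attribute, code) output format are assumptions for illustration:

```python
class DynamicEncoder:
    """Route each word to a dynamic dictionary chosen by the word's
    attribute information; a word is registered with the next free
    dynamic code on first sight and encoded as that code thereafter."""

    def __init__(self, attributes):
        self.dictionaries = {a: {} for a in attributes}

    def encode(self, word, attribute):
        d = self.dictionaries[attribute]
        if word not in d:
            d[word] = len(d)  # register word under the next dynamic code
        return (attribute, d[word])
```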
  • Patent number: 11314948
    Abstract: A system for converting sequences of words, along with concurrent non-verbal data, into thought representations is disclosed; the system is used in association with a language understanding system wherever words-to-thought transformation is needed. Said system comprises: an entity look-up subsystem that comprises a pre-processing unit, a word database, and a cache; a controller or thought representation formation and reasoning unit; a multi-word entities buffer; an entity knowledge base; a predictive word meaning memory; and an output thought representation unit.
    Type: Grant
    Filed: March 5, 2020
    Date of Patent: April 26, 2022
    Inventors: Baljit Singh, Praveen Prakash
  • Patent number: 11314921
    Abstract: A text error correction method and a text error correction apparatus based on a recurrent neural network of artificial intelligence are provided. The method includes: acquiring text data to be error-corrected; performing error correction on the text data to be error-corrected by using a trained recurrent neural network model so as to generate error-corrected text data.
    Type: Grant
    Filed: December 28, 2017
    Date of Patent: April 26, 2022
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Chunjie Yang, Shujie Yao
  • Patent number: 11308143
    Abstract: Curation of a corpus of a cognitive computing system is performed by reporting to a user a cluster model of a parse tree structure of discrepancies and corresponding assigned confidence factors detected between at least a portion of a first electronic document and a second or more electronic documents in the information corpus. Responsive to a selection by the user of a discrepancy cluster model, drill-down details regarding the discrepancy are returned to the user, for subsequent user selection of an administrative action option for handling the detected discrepancy to curate the information corpus.
    Type: Grant
    Filed: April 4, 2019
    Date of Patent: April 19, 2022
    Assignee: International Business Machines Corporation
    Inventors: Donna K. Byron, Elie Feirouz, Ashok Kumar, William G. O'Keeffe
  • Patent number: 11282529
    Abstract: An approach is described that obtains spectrum coefficients for a replacement frame of an audio signal. A tonal component of the spectrum of the audio signal is detected based on a peak that exists in the spectra of the frames preceding the replacement frame. For the tonal component of the spectrum, spectrum coefficients for the peak and its surroundings in the spectrum of the replacement frame are predicted, and for the non-tonal component of the spectrum, a non-predicted spectrum coefficient for the replacement frame or a corresponding spectrum coefficient of a frame preceding the replacement frame is used.
    Type: Grant
    Filed: September 26, 2019
    Date of Patent: March 22, 2022
    Inventors: Janine Sukowski, Ralph Sperschneider, Goran Markovic, Wolfgang Jaegers, Christian Helmrich, Bernd Edler, Ralf Geiger
  • Patent number: 11270686
    Abstract: A model-pair is selected to recognize spoken words in a speech signal generated from a speech, which includes an acoustic model and a language model. A degree of disjointedness between the acoustic model and the language model is computed relative to the speech by comparing a first recognition output produced from the acoustic model and a second recognition output produced from the language model. When the acoustic model incorrectly recognizes a portion of the speech signal as a first word and the language model correctly recognizes the portion of the speech signal as a second word, a textual representation of the second word is determined and associated with a set of sound descriptors to generate a training speech pattern. Using the training speech pattern, the acoustic model is trained to recognize the portion of the speech signal as the second word.
    Type: Grant
    Filed: March 28, 2017
    Date of Patent: March 8, 2022
    Assignee: International Business Machines Corporation
    Inventors: Aaron K. Baughman, John M. Ganci, Jr., Stephen C. Hammer, Craig M. Trim
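One simple way to realize the degree-of-disjointedness comparison above is to align the two recognition outputs position by position and collect disagreements, which the training step can then turn into new training speech patterns. The fraction-of-mismatches metric below is an illustrative assumption:

```python
def disjointedness(acoustic_words, language_words):
    """Compare the acoustic model's and language model's recognition
    outputs for the same speech; return the fraction of positions
    where they disagree along with the disagreeing word pairs."""
    pairs = list(zip(acoustic_words, language_words))
    mismatches = [(a, l) for a, l in pairs if a != l]
    return len(mismatches) / len(pairs), mismatches
```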
  • Patent number: 11250840
    Abstract: Some embodiments provide a method of training a machine-trained (MT) network to detect a wake expression that directs a digital assistant to perform an operation based on a request that follows the expression. The MT network includes processing nodes with configurable parameters. The method iteratively selects different sets of input values with known sets of output values. Each of a first group of input value sets includes a vocative use of the expression. Each of a second group of input value sets includes a non-vocative use of the expression. For each set of input values, the method uses the MT network to process the input set to produce an output value set and computes an error value that expresses an error between the produced output value set and the known output value set. Based on the error values, the method adjusts configurable parameters of the processing nodes of the MT network.
    Type: Grant
    Filed: April 5, 2019
    Date of Patent: February 15, 2022
    Assignee: PERCEIVE CORPORATION
    Inventor: Steven L. Teig
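The iterative select-process-score-adjust loop above can be miniaturized to a one-parameter "network"; the toy processing node and the finite-difference parameter update are stand-ins for a real multi-node MT network and whatever training algorithm the patent covers:

```python
def train(examples, steps=200, lr=0.1, eps=1e-4):
    """Iteratively process each input set with known outputs, compute
    an error value between produced and known outputs, and adjust the
    configurable parameter to reduce that error."""
    w = 0.0                                   # configurable parameter

    def output(x, w):                         # toy processing node
        return x * w

    for _ in range(steps):
        for x, y in examples:                 # input sets with known outputs
            err = (output(x, w) - y) ** 2     # error value
            err_plus = (output(x, w + eps) - y) ** 2
            w -= lr * (err_plus - err) / eps  # adjust the parameter
    return w
```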