Patents Examined by Bharatkumar S Shah
  • Patent number: 11645457
    Abstract: A method in a data processing system comprising a processor and a memory, for processing data entries, the method comprising receiving, by the data processing system, a data entry, parsing, by the data processing system, the data entry for features by using natural language processing (NLP), identifying, by the data processing system, data sets from a corpus of information that are relevant to the data entry, and linking, by the data processing system, the identified data sets to the data entry.
    Type: Grant
    Filed: August 30, 2017
    Date of Patent: May 9, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Joshua N Andrews, Thomas C Wisehart, Jr.
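
As a rough illustration of the linking step described in the entry above, the following Python sketch extracts simple token features from a data entry and links corpus data sets that share enough of those features. The `extract_features` and `link_datasets` names, the stopword list, and the toy corpus are assumptions for illustration, not the patented NLP pipeline.

```python
import re

STOPWORDS = {"the", "a", "an", "of", "for", "and", "to", "in", "by", "is"}

def extract_features(text: str) -> set[str]:
    """Tiny stand-in for an NLP feature extractor: lowercase tokens minus stopwords."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return {t for t in tokens if t not in STOPWORDS}

def link_datasets(entry: str, corpus: dict[str, str], min_overlap: int = 2) -> list[str]:
    """Link corpus data sets whose descriptions share at least `min_overlap`
    features with the parsed data entry."""
    entry_features = extract_features(entry)
    linked = []
    for dataset_id, description in corpus.items():
        if len(entry_features & extract_features(description)) >= min_overlap:
            linked.append(dataset_id)
    return linked

corpus = {
    "ds-claims-2016": "insurance claims filed for water damage in 2016",
    "ds-weather": "regional weather and rainfall measurements",
    "ds-payroll": "employee payroll and benefits records",
}
entry = "Claim report: water damage after heavy rainfall and severe weather"
print(link_datasets(entry, corpus))   # ['ds-claims-2016', 'ds-weather']
```
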
  • Patent number: 11646031
    Abstract: A method, a device, and a computer-readable storage medium having instructions for processing a speech input. A speech input from a user is received and preprocessed for at least one of two or more available speech-processing services. The preprocessed speech inputs are transferred to one or more of the available speech-processing services.
    Type: Grant
    Filed: November 26, 2018
    Date of Patent: May 9, 2023
    Inventor: Rüdiger Woike
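
A hedged sketch of the dispatch idea in the entry above: one spoken input is preprocessed differently for each available speech-processing service before being transferred to the selected services. The service names, the preprocessing steps, and the `SpeechService` stub are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SpeechService:
    name: str
    preprocess: Callable[[list[float]], list[float]]   # service-specific front end

    def transcribe(self, samples: list[float]) -> str:
        return f"<{self.name} result for {len(samples)} samples>"   # stub transcription

def normalize(samples):            # peak-normalize for a cloud service
    peak = max(abs(s) for s in samples) or 1.0
    return [s / peak for s in samples]

def downsample_by_two(samples):    # crude rate reduction for an embedded service
    return samples[::2]

SERVICES = [
    SpeechService("cloud-assistant", normalize),
    SpeechService("in-car-assistant", downsample_by_two),
]

def process_speech_input(samples: list[float], selected: list[str]) -> dict[str, str]:
    """Preprocess the utterance per service and transfer it to the selected services."""
    return {s.name: s.transcribe(s.preprocess(samples))
            for s in SERVICES if s.name in selected}

print(process_speech_input([0.1, -0.4, 0.3, 0.2], ["cloud-assistant"]))
```
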
  • Patent number: 11620981
    Abstract: According to one embodiment, a speech recognition error correction apparatus includes a correction network memory and an error correction circuitry. The error correction circuitry calculates a difference between a speech recognition result string of an error correction target, which is a result of performing speech recognition on a new series of speech data, and a correction network, in which a speech recognition result string and a correction result by a user for that speech recognition result string are associated, and, when a value indicating the difference is equal to or less than a threshold, performs error correction on a speech recognition error portion in the speech recognition result string of the error correction target by using the correction network to generate a speech recognition error correction result string.
    Type: Grant
    Filed: September 4, 2020
    Date of Patent: April 4, 2023
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Taira Ashikawa, Hiroshi Fujimura, Kenji Iwata
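
A minimal sketch of the threshold test described in the entry above, using plain Levenshtein edit distance as the value indicating the difference between a new recognition result and stored (recognition, user correction) pairs; the patented correction network is richer than this lookup table.

```python
def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

# Stored pairs: earlier recognition result -> correction made by the user.
correction_network = {
    "turn on the hether": "turn on the heater",
    "call my sun": "call my son",
}

def correct(asr_result: str, threshold: int = 3) -> str:
    """Apply a stored correction when the new result is close enough to a
    previously corrected recognition string."""
    best = min(correction_network, key=lambda k: edit_distance(asr_result, k))
    if edit_distance(asr_result, best) <= threshold:
        return correction_network[best]
    return asr_result

print(correct("turn on the hethr"))   # close to a stored error -> corrected
print(correct("open the window"))     # too different -> left unchanged
```
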
  • Patent number: 11610419
    Abstract: A system for performing one or more steps of a method is disclosed. The method includes receiving a first legal clause and a second legal clause; generating, using a segmentation algorithm, a first hidden Markov chain comprising a plurality of first nodes based on the first legal clause, each of the plurality of first nodes corresponding to an element of the first legal clause; generating, using the segmentation algorithm, a second hidden Markov chain comprising a plurality of second nodes based on the second legal clause, each of the plurality of second nodes corresponding to an element of the second legal clause; comparing each of the plurality of first nodes with each of the plurality of second nodes to identify a difference for each of the plurality of first nodes; and determining, based on the comparison, whether the difference for each of the plurality of first nodes exceeds a predetermined difference threshold.
    Type: Grant
    Filed: June 3, 2021
    Date of Patent: March 21, 2023
    Assignee: CAPITAL ONE SERVICES, LLC
    Inventors: Austin Walters, Jeremy Edward Goodsitt, Fardin Abdi Taghi Abad, Mark Watson, Vincent Pham, Anh Truong, Kenneth Taylor, Reza Farivar
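
A hedged sketch of the clause comparison above. The patent builds hidden Markov chains over clause elements; here each clause is merely split into sentence-like elements and every first-clause node is compared against the second clause with a token-overlap distance, then checked against a difference threshold. The splitting rule, the distance measure, and the threshold value are illustrative.

```python
import re

def split_elements(clause: str) -> list[str]:
    """Split a clause into sentence-like elements ("nodes")."""
    return [part.strip() for part in re.split(r"[;.]", clause) if part.strip()]

def node_difference(a: str, b: str) -> float:
    """Jaccard distance between the token sets of two elements."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return 1.0 - len(ta & tb) / max(len(ta | tb), 1)

def flag_changed_nodes(first: str, second: str, threshold: float = 0.5) -> list[tuple[str, float]]:
    """For each node of the first clause, keep its smallest difference against
    any node of the second clause and report nodes exceeding the threshold."""
    second_nodes = split_elements(second)
    flagged = []
    for node in split_elements(first):
        diff = min(node_difference(node, other) for other in second_nodes)
        if diff > threshold:
            flagged.append((node, round(diff, 2)))
    return flagged

first = "Tenant shall pay rent monthly. Tenant shall maintain insurance."
second = "Tenant shall pay rent monthly. Landlord may enter with notice."
print(flag_changed_nodes(first, second))
```
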
  • Patent number: 11600266
    Abstract: Systems and methods of network-based learning models for natural language processing are provided. Information regarding user interaction with network content may be stored in memory. Further, a digital recording of a vocal utterance made by a user may be captured. The vocal utterance may be interpreted based on the stored user interaction information. An intent of the user may be identified based on the interpretation, and a prediction may be made based on the identified intent. The prediction may further correspond to a selected workflow.
    Type: Grant
    Filed: January 5, 2021
    Date of Patent: March 7, 2023
    Assignee: Sony Interactive Entertainment LLC
    Inventor: Stephen Yong
  • Patent number: 11594232
    Abstract: Systems, methods, and computer program products for audio processing based on Adaptive Intermediate Spatial Format (AISF) are described. AISF is an extension to ISF that allows the spatial resolution around an ISF ring to be adjusted dynamically with respect to the content of incoming audio objects. An AISF encoder device adaptively warps each ISF ring during ISF encoding to adjust the angular distance between objects, resulting in a more uniform distribution of energy around the ISF ring. At an AISF decoder device, the matrices that decode sound positions to the output speakers take into account the warping performed at the AISF encoder device, so that the true positions of the sound sources are reproduced.
    Type: Grant
    Filed: November 11, 2020
    Date of Patent: February 28, 2023
    Assignee: DOLBY LABORATORIES LICENSING CORPORATION
    Inventors: Juan Felix Torres, David S. McGrath, Michael William Mason
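
A rough sketch of the warping concept above: object azimuths that crowd together on the ring are remapped toward uniform spacing, and the inverse map lets a decoder restore the original positions. The piecewise-linear warp and the example azimuths are only illustrative assumptions, not the encoder or decoder matrices described in the patent.

```python
import bisect

def build_warp(azimuths_deg: list[float]) -> tuple[list[float], list[float]]:
    """Map the sorted object azimuths onto equally spaced angles around the ring."""
    src = sorted(azimuths_deg)
    dst = [i * 360.0 / len(src) for i in range(len(src))]
    return src, dst

def piecewise(x: float, xs: list[float], ys: list[float]) -> float:
    """Linear interpolation between control points (wrap-around ignored)."""
    i = min(bisect.bisect_left(xs, x), len(xs) - 1)
    if xs[i] == x or i == 0:
        return ys[i]
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

objects = [10.0, 15.0, 20.0, 200.0]          # objects bunched near the front of the ring
src, dst = build_warp(objects)
warped = [piecewise(a, src, dst) for a in objects]
restored = [piecewise(a, dst, src) for a in warped]   # decoder-side inverse warp
print(warped)     # more uniform spacing around the ring
print(restored)   # original azimuths recovered
```
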
  • Patent number: 11586830
    Abstract: A system for reinforcement learning based controlled natural language generation is disclosed. The system includes a token generator subsystem to generate an initial output phrase including a sequence of output tokens. The system includes trained models associated with corresponding predefined tasks. Each trained model includes an attention layer to compute attention-based weights for each output token. The trained models include a scoring layer to generate a phrase sequence level score for the output phrase. The trained models include a reward generation layer to generate dense rewards for each output token based on the attention-based weights and the phrase sequence level score. The trained models include a feedback score generation layer to generate a feedback score based on the dense rewards and reward weights assigned to the dense rewards of the corresponding trained models. The feedback score generation layer provides the feedback score iteratively to the token generator subsystem.
    Type: Grant
    Filed: June 3, 2020
    Date of Patent: February 21, 2023
    Assignee: PM Labs, Inc.
    Inventors: Arjun Maheswaran, Akhilesh Sudhakar, Bhargav Upadhyay
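
A hedged numpy sketch of the reward shaping above: each task model spreads a single phrase-level score over the output tokens using its attention weights, and the per-task dense rewards are combined with reward weights into one feedback score. The two example tasks, the attention weights, and the scores are made up for illustration.

```python
import numpy as np

def dense_rewards(attention: np.ndarray, phrase_score: float) -> np.ndarray:
    """Per-token reward = normalized attention weight * sequence-level score."""
    weights = attention / attention.sum()
    return weights * phrase_score

# Attention weights over a 4-token output phrase from two task models.
fluency_attn = np.array([0.1, 0.4, 0.3, 0.2])
style_attn   = np.array([0.25, 0.25, 0.25, 0.25])

per_task = {
    "fluency": dense_rewards(fluency_attn, phrase_score=0.8),
    "style":   dense_rewards(style_attn,   phrase_score=0.5),
}
reward_weights = {"fluency": 0.7, "style": 0.3}

# Combine per-task dense rewards into a feedback score for the token generator.
feedback = sum(reward_weights[task] * rewards for task, rewards in per_task.items())
print(per_task["fluency"], feedback.sum())
```
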
  • Patent number: 11580094
    Abstract: An audio stream is detected during a communication session with a user. Natural language processing is performed on the audio stream to update a set of attributes, supplementing the set with attributes derived from the audio stream. A set of filter values is updated based on the updated set of attributes. The updated set of filter values is used to query a set of databases to obtain datasets. A probabilistic program is executed during the communication session by determining a set of probability parameters characterizing a probability of an anomaly occurring based on the datasets and the set of attributes. A determination is made as to whether the probability satisfies a threshold. In response to a determination that the probability satisfies the threshold, a record is updated to identify the communication session and to indicate that the threshold is satisfied.
    Type: Grant
    Filed: May 27, 2021
    Date of Patent: February 14, 2023
    Assignee: Capital One Services, LLC
    Inventors: David Beilis, Alexey Shpurov
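
A greatly simplified sketch of the session flow above: attributes derived from the audio stream update filter values, the filters select reference data, an anomaly probability is estimated, and the session record is flagged when the probability crosses a threshold. The attribute names, the toy database, and the probability rule are illustrative assumptions.

```python
def update_attributes(attributes: dict, utterance: str) -> dict:
    """Supplement attributes with information derived from the audio stream."""
    if "wire transfer" in utterance.lower():
        attributes["requested_action"] = "wire_transfer"
    return attributes

def query_datasets(filters: dict, database: dict) -> list[dict]:
    """Use the filter values to pull reference rows from the database."""
    return [row for row in database["transactions"]
            if row["account"] == filters.get("account")]

def anomaly_probability(datasets: list[dict], attributes: dict) -> float:
    """Toy rule: probability grows as the requested amount dwarfs typical amounts."""
    typical = max((row["amount"] for row in datasets), default=0.0)
    requested = attributes.get("amount", 0.0)
    return min(1.0, requested / (10 * typical)) if typical else 1.0

database = {"transactions": [{"account": "A-17", "amount": 120.0},
                             {"account": "A-17", "amount": 90.0}]}
attributes = {"account": "A-17", "amount": 5000.0}
attributes = update_attributes(attributes, "I need a wire transfer right away")
rows = query_datasets({"account": attributes["account"]}, database)
probability = anomaly_probability(rows, attributes)
record = {"session": "s-001", "flagged": probability >= 0.9}
print(probability, record)
```
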
  • Patent number: 11580967
    Abstract: A speech feature extraction apparatus 100 includes a voice activity detection unit 103 that drops non-voice frames from the frames corresponding to an input speech utterance and calculates a posterior of being voiced for each frame, a voice activity detection process unit 106 that calculates, from a given voice activity detection posterior, function values used as weights when pooling frames to produce an utterance-level feature, and an utterance-level feature extraction unit 112 that extracts an utterance-level feature from the frames on the basis of multiple frame-level features, using the function values.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: February 14, 2023
    Assignee: NEC CORPORATION
    Inventors: Qiongqiong Wang, Koji Okabe, Kong Aik Lee, Takafumi Koshinaka
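
A short numpy sketch of the pooling step above: voice activity detection posteriors act as per-frame weights when frame-level features are pooled into an utterance-level feature. The feature dimension, the random posteriors, and the choice of a weighted mean are illustrative assumptions.

```python
import numpy as np

def vad_weighted_pooling(frame_features: np.ndarray, vad_posteriors: np.ndarray) -> np.ndarray:
    """frame_features: (num_frames, feat_dim); vad_posteriors: (num_frames,).
    Returns a single utterance-level feature as a VAD-weighted mean over frames."""
    weights = vad_posteriors / (vad_posteriors.sum() + 1e-8)
    return weights @ frame_features

rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 40))          # 200 frames of 40-dim frame-level features
posteriors = rng.uniform(size=200)           # P(voiced) per frame
utterance_feature = vad_weighted_pooling(frames, posteriors)
print(utterance_feature.shape)               # (40,)
```
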
  • Patent number: 11580956
    Abstract: A method includes receiving a training example that includes audio data representing a spoken utterance and a ground truth transcription. For each word in the spoken utterance, the method also includes inserting a placeholder symbol before the respective word identifying a respective ground truth alignment for a beginning and an end of the respective word, determining a beginning word piece and an ending word piece, and generating a first constrained alignment for the beginning word piece and a second constrained alignment for the ending word piece. The first constrained alignment is aligned with the ground truth alignment for the beginning of the respective word and the second constrained alignment is aligned with the ground truth alignment for the ending of the respective word. The method also includes constraining an attention head of a second pass decoder by applying the first and second constrained alignments.
    Type: Grant
    Filed: March 17, 2021
    Date of Patent: February 14, 2023
    Assignee: Google LLC
    Inventors: Tara N. Sainath, Basi Garcia, David Rybach, Trevor Strohman, Ruoming Pang
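
A hedged sketch of the alignment-constraint idea above: each word's ground-truth start and end frames become constrained alignments for its beginning and ending word pieces, expressed here as an attention mask for a second-pass decoder. The toy word-piece splitter, the frame indices, and the tolerance window are assumptions for illustration.

```python
import numpy as np

def word_pieces(word: str) -> list[str]:
    """Toy splitter: fixed two-character word pieces."""
    return [word[i:i + 2] for i in range(0, len(word), 2)]

def constrained_attention_mask(words, alignments, num_frames, tolerance=2):
    """words: list of words; alignments: list of (start_frame, end_frame) per word.
    Returns one row of allowed audio frames per word piece."""
    rows = []
    for word, (start, end) in zip(words, alignments):
        pieces = word_pieces(word)
        for k in range(len(pieces)):
            row = np.ones(num_frames)                  # middle pieces stay unconstrained
            if k == 0:                                 # beginning word piece -> word start
                row = np.zeros(num_frames)
                row[max(0, start - tolerance): start + tolerance + 1] = 1
            elif k == len(pieces) - 1:                 # ending word piece -> word end
                row = np.zeros(num_frames)
                row[max(0, end - tolerance): end + tolerance + 1] = 1
            rows.append(row)
    return np.stack(rows)

mask = constrained_attention_mask(["hello", "world"], [(3, 10), (12, 20)], num_frames=25)
print(mask.shape)    # (6, 25): one row per word piece, constrained at word boundaries
```
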
  • Patent number: 11568859
    Abstract: A computer-implemented method and apparatus for extracting key information from conversational voice data, where the method comprises receiving a first speaker text corresponding to a speech of a first speaker in a conversation with a second speaker, the conversation comprising multiple turns of speech between the first speaker and the second speaker, the first speaker text comprising multiple question lines, each question line corresponding to the speech of the first speaker at a corresponding turn, arranged chronologically. Feature words are identified, and a frequency of occurrence therefor in each question line is determined. Question lines without any of the feature words are removed, to yield candidate question lines, for each of which a mathematical representation is generated. A similarity score for each candidate question line with respect to each subsequent candidate question line is computed, and the line with the highest score is identified as a key question.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: January 31, 2023
    Assignee: UNIPHORE SOFTWARE SYSTEMS, INC.
    Inventor: Somnath Roy
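
A hedged sketch of the key-question extraction above: question lines from the first speaker are filtered by feature words, each surviving line gets a bag-of-words vector, and the line most similar to a subsequent candidate line is reported as the key question. The feature-word list and the cosine-similarity scoring are illustrative assumptions.

```python
from collections import Counter
import math

FEATURE_WORDS = {"account", "refund", "charge", "order"}

def vectorize(line: str) -> Counter:
    return Counter(line.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def key_question(question_lines: list[str]) -> str:
    """Filter by feature words, then pick the candidate line most similar to a later candidate."""
    candidates = [q for q in question_lines
                  if any(w in q.lower().split() for w in FEATURE_WORDS)]
    best, best_score = candidates[0], -1.0
    for i, q in enumerate(candidates):
        later = candidates[i + 1:]
        score = max((cosine(vectorize(q), vectorize(l)) for l in later), default=0.0)
        if score > best_score:
            best, best_score = q, score
    return best

lines = ["how are you today",
         "can you confirm the charge on your account",
         "what was the order number",
         "so the charge on the account is the issue, correct"]
print(key_question(lines))
```
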
  • Patent number: 11562146
    Abstract: Artificial intelligence (AI) technology can be used in combination with composable communication goal statements to facilitate a user's ability to quickly structure story outlines in a manner usable by an NLG narrative generation system without any need for the user to directly author computer code. Narrative analytics that are linked to communication goal statements can employ a conditional outcome framework that allows the content and structure of resulting narratives to intelligently adapt as a function of the nature of the data under consideration. This AI technology permits NLG systems to determine the appropriate content for inclusion in a narrative story about a data set in a manner that will satisfy a desired communication goal.
    Type: Grant
    Filed: March 3, 2021
    Date of Patent: January 24, 2023
    Assignee: Narrative Science Inc.
    Inventors: Andrew R. Paley, Nathan D. Nichols, Matthew L. Trahan, Maia Lewis Meza, Michael Tien Thinh Pham, Charlie M. Truong
  • Patent number: 11562734
    Abstract: The present disclosure relates to an automatic speech recognition system and a method thereof. The system includes a conformer encoder and a pair of ping-pong buffers. The encoder includes a plurality of encoder layers sequentially executed by one or more graphics processing units. At least one encoder layer includes a first feed forward module, a multi-head self-attention module, a convolution module, and a second feed forward module. The convolution module and the multi-head self-attention module are sandwiched between the first feed forward module and the second feed forward module. The four modules respectively include a plurality of encoder sublayers fused into one or more encoder kernels. The one or more encoder kernels respectively read from one of the pair of ping-pong buffers and write into the other of the pair of ping-pong buffers.
    Type: Grant
    Filed: January 4, 2021
    Date of Patent: January 24, 2023
    Assignee: KWAI INC.
    Inventors: Yongxiong Ren, Yang Liu, Heng Liu, Lingzhi Liu, Jie Li, Kaituo Xu, Xiaorui Wang
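
A small sketch of the ping-pong buffering above: each fused encoder kernel reads from one buffer and writes into the other, and the buffers swap roles after every kernel. The numpy "kernels" here stand in for the fused GPU kernels described in the patent.

```python
import numpy as np

class PingPong:
    def __init__(self, shape):
        self.buffers = [np.zeros(shape), np.zeros(shape)]
        self.read_idx = 0                       # kernel reads here, writes to the other buffer

    def run(self, kernel):
        src, dst = self.buffers[self.read_idx], self.buffers[1 - self.read_idx]
        dst[...] = kernel(src)
        self.read_idx = 1 - self.read_idx       # swap roles for the next kernel

kernels = [
    lambda x: x + 1.0,        # stands in for a fused feed forward kernel
    lambda x: x * 2.0,        # stands in for a fused attention/convolution kernel
]

pp = PingPong((4, 8))
pp.buffers[0][...] = np.ones((4, 8))            # encoder layer input
for kernel in kernels:
    pp.run(kernel)
print(pp.buffers[pp.read_idx][0, 0])            # final output: (1 + 1) * 2 = 4.0
```
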
  • Patent number: 11556782
    Abstract: In a trained attentive decoder of a trained Sequence-to-Sequence (seq2seq) Artificial Neural Network (ANN): obtaining an encoded input vector sequence; generating, using a trained primary attention mechanism of the trained attentive decoder, a sequence of primary attention vectors; for each primary attention vector of the sequence: (a) generating a set of attention vector candidates corresponding to the respective primary attention vector, (b) evaluating, for each attention vector candidate of the set, a structure fit measure that quantifies the similarity of the respective attention vector candidate to a desired attention vector structure, and (c) generating, using a trained soft-selection ANN, a secondary attention vector based on said evaluation and on state variables of the trained attentive decoder; and generating, using the trained attentive decoder, an output sequence based on the encoded input vector sequence and the secondary attention vectors.
    Type: Grant
    Filed: September 19, 2019
    Date of Patent: January 17, 2023
    Assignee: International Business Machines Corporation
    Inventors: Vyacheslav Shechtman, Alexander Sorin
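
A hedged, simplified sketch of the candidate-and-selection step above: for a primary attention vector, a few candidates are generated (temperature-sharpened variants here), each is scored by how close it is to a desired structure (a concentrated peak, measured via entropy), and a soft selection blends them into a secondary attention vector. The candidate generator and the fit measure are illustrative, not the patented ones.

```python
import numpy as np

def sharpen(attn, temperature):
    """Candidate generator: temperature-sharpened copy of the primary attention."""
    p = attn ** (1.0 / temperature)
    return p / p.sum()

def structure_fit(attn):
    """Higher fit = more peaked attention (lower entropy)."""
    entropy = -np.sum(attn * np.log(attn + 1e-12))
    return -entropy

def secondary_attention(primary, temperatures=(1.0, 0.5, 0.25)):
    candidates = np.stack([sharpen(primary, t) for t in temperatures])
    fits = np.array([structure_fit(c) for c in candidates])
    weights = np.exp(fits - fits.max())
    weights /= weights.sum()                 # soft selection over the candidates
    return weights @ candidates

primary = np.array([0.05, 0.15, 0.6, 0.15, 0.05])
print(secondary_attention(primary))          # blended, more sharply peaked vector
```
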
  • Patent number: 11551707
    Abstract: Disclosed are a method for speech processing, an information device, and a computer program product. The method for speech processing, as implemented by a computer, includes: obtaining a mixed speech signal via a microphone, wherein the mixed speech signal includes a plurality of speech signals uttered by a plurality of unspecified speakers at the same time; generating a set of simulated speech signals from the mixed speech signal by using a Generative Adversarial Network (GAN), in order to simulate the plurality of speech signals; determining the number of simulated speech signals in order to estimate the number of speakers in the surroundings; and providing that number as an input to an information application.
    Type: Grant
    Filed: August 27, 2019
    Date of Patent: January 10, 2023
    Assignee: RELAJET TECH (TAIWAN) CO., LTD.
    Inventors: Yun-Shu Hsu, Po-Ju Chen
  • Patent number: 11544467
    Abstract: The present disclosure relates to processing operations configured to provide a linguistic-based approach to evaluating repetition in the content of an electronic document. The approach detects terms, words, and phrases that are likely to be perceived as repetitious by native speakers of a language, rather than merely identifying occurrences of identical words or strings in a document as traditional language checks do. Processing of the present disclosure detects and evaluates terms or phrases using positive linguistic evidence derived from evaluating the syntactic relationships between words in a string. This results in a more accurate and efficient determination of whether a term is truly repetitious at the linguistic level, as compared with traditional language checks.
    Type: Grant
    Filed: June 15, 2020
    Date of Patent: January 3, 2023
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Davide Turcato, Alfredo R. Arnaiz, Domenic Joseph Cipollone, Michael Wilson Daniels
  • Patent number: 11545145
    Abstract: An utterance in any of various languages is processed to derive a predicted label using a generated grammar. The grammar is suitable for deriving the meaning of utterances from several languages (polyglot). The utterance is processed by an encoder using word embeddings, and the encoder and a decoder process the utterance using the polyglot grammar to obtain a machine-readable result. The machine-readable result is well-formed because re-entrancies of intermediate variable references are accounted for, and this well-formedness reduces ambiguity in the decoder's output. A machine then takes action on the machine-readable result. Sparseness of the generated polyglot grammar is reduced by using a two-pass approach in which placeholders are ultimately replaced by edge labels.
    Type: Grant
    Filed: October 26, 2020
    Date of Patent: January 3, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Federico Fancellu, Akos Kadar, Ran Zhang, Afsaneh Fazly
  • Patent number: 11545159
    Abstract: A digital audio quality monitoring device uses a deep neural network (DNN) to provide accurate estimates of signal-to-noise ratio (SNR) from a limited set of features extracted from incoming audio. Some embodiments improve the SNR estimate accuracy by selecting a DNN model from a plurality of available models based on a codec used to compress/decompress the incoming audio. Each model has been trained on audio compressed/decompressed by a codec associated with the model, and the monitoring device selects the model associated with the codec used to compress/decompress the incoming audio. Other embodiments are also provided.
    Type: Grant
    Filed: June 10, 2021
    Date of Patent: January 3, 2023
    Assignee: NICE LTD.
    Inventors: Roman Frenkel, Matan Keret, Michal Daisey Lerer
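
A hedged sketch of the model-selection idea above: one SNR estimator per codec, each assumed to have been trained on audio processed by that codec, with the monitor picking the estimator that matches the codec of the incoming stream. The linear "models" and feature values are placeholders for the DNNs.

```python
import numpy as np

class SnrEstimator:
    """Stand-in for a DNN trained on audio compressed/decompressed by one codec."""
    def __init__(self, weights, bias):
        self.weights, self.bias = np.asarray(weights), bias

    def predict(self, features):
        return float(self.weights @ np.asarray(features) + self.bias)

MODELS = {
    "g711": SnrEstimator([4.0, -2.0, 1.0], bias=10.0),
    "opus": SnrEstimator([3.5, -1.5, 0.8], bias=12.0),
}

def estimate_snr(features, codec):
    """Select the model associated with the incoming stream's codec and estimate SNR."""
    model = MODELS.get(codec)
    if model is None:
        raise ValueError(f"no model trained for codec {codec!r}")
    return model.predict(features)

features = [0.7, 0.2, 1.1]                # limited feature set extracted from incoming audio
print(estimate_snr(features, "opus"))     # SNR estimate from the opus-trained model
```
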
  • Patent number: 11539823
    Abstract: A muffling device includes an acquisition circuit configured to obtain reference sound wave information of a user, a modulation circuit configured to analyze an acoustic wave characteristic of the reference sound wave information to obtain a characteristic parameter of the reference sound wave information, and a muffling circuit configured to generate compensated sound wave information according to the characteristic parameter of the reference sound wave information. The muffling device also includes a correction circuit configured to compare the muffled sound wave information, produced by superimposing the compensated sound wave information on the reference sound wave information, with the reference sound wave information, and to feed the comparison result back to the muffling circuit. The muffling circuit can then adjust the compensated sound wave information according to the fed-back comparison result.
    Type: Grant
    Filed: September 17, 2018
    Date of Patent: December 27, 2022
    Assignees: BEIJING BOE TECHNOLOGY DEVELOPMENT CO., LTD., FUZHOU BOE OPTOELECTRONICS TECHNOLOGY CO., LTD.
    Inventors: Yang Yu, Jiamin Liao, Shijian Luo, Tao Luo, Heyuan Qiu, Xiaowei Liu, Haiguang Li, Fan Chen
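
A hedged sketch of the correction loop above: a compensated wave is generated from the reference wave, the superimposed (muffled) result is compared with the reference, and the comparison is fed back to adjust the compensation. A single adaptive gain on an inverted sine stands in for the patented modulation and muffling circuits.

```python
import math

def reference_wave(n, freq=0.05):
    return [math.sin(2 * math.pi * freq * i) for i in range(n)]

def muffle(reference, gain):
    compensated = [-gain * s for s in reference]             # anti-phase compensation
    return [r + c for r, c in zip(reference, compensated)]   # superimposed result

def residual_energy(signal):
    return sum(s * s for s in signal)

ref = reference_wave(200)
gain, step = 0.2, 0.5
for _ in range(8):                       # correction-circuit feedback loop
    muffled = muffle(ref, gain)
    # compare the muffled result with the reference and feed the result back
    correction = sum(m * r for m, r in zip(muffled, ref)) / sum(r * r for r in ref)
    gain += step * correction            # adjust the compensation toward cancellation
    print(round(residual_energy(muffled), 4), round(gain, 4))
```
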
  • Patent number: 11537791
    Abstract: Techniques are disclosed for generating anomaly scores for a neuro-linguistic model of input data obtained from one or more sources. According to one embodiment, generating anomaly scores includes receiving a stream of symbols generated from an ordered stream of normalized vectors generated from input data received from one or more sensor devices during a first time period. Upon receiving the stream of symbols, generating a set of words based on an occurrence of groups of symbols from the stream of symbols, determining a number of previous occurrences of a first word of the set of words, determining a number of previous occurrences of words of a same length as the first word, and determining a first anomaly score based on the number of previous occurrences of the first word and the number of previous occurrences of words of the same length as the first word.
    Type: Grant
    Filed: February 1, 2021
    Date of Patent: December 27, 2022
    Assignee: Intellective Ai, Inc.
    Inventors: Ming-Jung Seow, Gang Xu, Tao Yang, Wesley Kenneth Cobb
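
A small sketch of the scoring rule above: a word's anomaly score is derived from how often that exact word has been seen before relative to how often words of the same length have been seen. The example symbol stream and the use of one minus the relative frequency are illustrative assumptions.

```python
from collections import Counter

history = Counter()          # counts of previously observed "words" (symbol groups)
length_history = Counter()   # counts of previous words grouped by length

def observe(word: str):
    history[word] += 1
    length_history[len(word)] += 1

def anomaly_score(word: str) -> float:
    same_length = length_history[len(word)]
    if same_length == 0:
        return 1.0                                  # nothing of this length seen before
    return 1.0 - history[word] / same_length        # rare words score high

for w in ["ABC", "ABC", "ABD", "QQ", "QQ"]:         # earlier stream of symbol words
    observe(w)
print(anomaly_score("ABC"))   # frequent 3-symbol word  -> low score (1 - 2/3)
print(anomaly_score("XYZ"))   # unseen 3-symbol word    -> 1.0
print(anomaly_score("QQQQ"))  # unseen length           -> 1.0
```
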