Patents Examined by Richa Mishra
  • Patent number: 11170184
    Abstract: The present disclosure relates to a computer implemented system and method for automatically generating messages. A repository (102) stores information related to chat sessions corresponding to a set of recipients, the information including contents of chats, historical timestamp information, personal information corresponding to each of the recipients, and relationship details of each recipient with a user. A parser (104) parses information to generate parsed data including verbs, nouns and common phrases. An analyzer (106) analyzes the stored information to extract behavioral data of the user. A searching module (108) searches and extracts relevant data from the web based on the parsed data. A message generator (110) generates messages corresponding to each recipient.
    Type: Grant
    Filed: May 26, 2018
    Date of Patent: November 9, 2021
    Inventor: Mohan Dewan
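    A heavily simplified Python sketch of the kind of pipeline this abstract describes: a repository of chats and relationship details feeding a parser and a message generator. All class and function names are illustrative assumptions; the analyzer (106) and web searching module (108) are omitted here.
    ```python
    from collections import Counter
    from dataclasses import dataclass, field

    @dataclass
    class ChatRepository:
        """Stand-in for repository (102): chat history and relationship details per recipient."""
        chats: dict = field(default_factory=dict)          # recipient -> list of (timestamp, text)
        relationships: dict = field(default_factory=dict)  # recipient -> "friend", "manager", ...

    def parse_common_phrases(texts, top_n=3):
        """Stand-in for parser (104): collect the recipient's most frequent words."""
        counts = Counter(w for t in texts for w in t.lower().split())
        return [w for w, _ in counts.most_common(top_n)]

    def generate_message(repo: ChatRepository, recipient: str) -> str:
        """Stand-in for message generator (110): tailor a message per recipient."""
        texts = [t for _, t in repo.chats.get(recipient, [])]
        topics = ", ".join(parse_common_phrases(texts)) or "your week"
        greeting = "Hi" if repo.relationships.get(recipient) == "friend" else "Hello"
        return f"{greeting} {recipient}, I was thinking about {topics}. Shall we catch up?"
    ```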
  • Patent number: 11170794
    Abstract: An apparatus for determining a predetermined characteristic related to a spectral enhancement processing of an audio signal includes a deriver configured for obtaining a spectrum of the audio signal and for deriving a local maximum signal from the spectrum. The apparatus includes a determiner configured for determining a similarity between segments of the local maximum signal and includes a processor for providing an information indicating that the audio signal includes the predetermined characteristic dependent on an evaluation of the similarity.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: November 9, 2021
    Assignee: Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V.
    Inventors: Patrick Gampp, Christian Uhle, Sascha Disch, Antonios Karampourniotis, Julia Havenstein, Oliver Hellmuth, Juergen Herre, Peter Prokein
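    A rough Python sketch of the detection idea in the abstract: derive a local-maximum signal from a magnitude spectrum and check how similar its segments are. Frame length, segment length, lag, and threshold are illustrative assumptions, not values from the patent.
    ```python
    import numpy as np

    def local_maximum_signal(spectrum: np.ndarray) -> np.ndarray:
        """Keep only bins that are local maxima of the magnitude spectrum; zero elsewhere."""
        mag = np.abs(spectrum)
        peaks = np.zeros_like(mag)
        for k in range(1, len(mag) - 1):
            if mag[k] > mag[k - 1] and mag[k] > mag[k + 1]:
                peaks[k] = mag[k]
        return peaks

    def segment_similarity(peaks: np.ndarray, seg_len: int, lag: int) -> float:
        """Normalized correlation between two segments of the local-maximum signal."""
        a = peaks[:seg_len]
        b = peaks[lag:lag + seg_len]
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom > 0 else 0.0

    def has_spectral_enhancement(frame: np.ndarray, seg_len=64, lag=64, threshold=0.8) -> bool:
        """Flag the characteristic when segments of the local-maximum signal are highly similar.
        Assumes len(frame) >= 2 * (seg_len + lag) so both segments are fully populated."""
        spectrum = np.fft.rfft(frame * np.hanning(len(frame)))
        peaks = local_maximum_signal(spectrum)
        return segment_similarity(peaks, seg_len, lag) >= threshold
    ```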
  • Patent number: 11144734
    Abstract: A self-learning natural-language generation (NLG) system receives raw data from Internet-of-Things sensors or other data sources and a set of natural-language reports previously generated from the raw data by a legacy report-generation mechanism. The system divides the reports into two groups that are distinguished by differences in temporal characteristics of the reports or of the raw data from which each report is generated. The system performs a diachronic linguistic analysis that correlates values of the temporal characteristics with differences between linguistic features of each report group's natural-language text. The system creates translation rules that instruct the NLG system how to reproduce these differences and uses the rules to translate the raw data into its own natural-language reports.
    Type: Grant
    Filed: June 12, 2019
    Date of Patent: October 12, 2021
    Assignee: International Business Machines Corporation
    Inventors: Craig M. Trim, Martin G. Keen, Michael Bender, Aaron K. Baughman
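    A minimal sketch of one way to perform the kind of diachronic comparison described above: split reports by a temporal characteristic (weekday vs. weekend is just an example) and surface the words whose usage differs most between the two groups. The rule-creation and translation steps are not shown.
    ```python
    from collections import Counter
    from datetime import datetime

    def split_by_temporal_feature(reports):
        """Divide (timestamp, text) reports into two groups; ts is an ISO-8601 string."""
        weekday, weekend = [], []
        for ts, text in reports:
            (weekend if datetime.fromisoformat(ts).weekday() >= 5 else weekday).append(text)
        return weekday, weekend

    def linguistic_differences(group_a, group_b, top_n=10):
        """Words whose relative frequency differs most between the two report groups."""
        freq_a = Counter(w for t in group_a for w in t.lower().split())
        freq_b = Counter(w for t in group_b for w in t.lower().split())
        total_a, total_b = sum(freq_a.values()) or 1, sum(freq_b.values()) or 1
        vocab = set(freq_a) | set(freq_b)
        scored = {w: freq_a[w] / total_a - freq_b[w] / total_b for w in vocab}
        return sorted(scored.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_n]
    ```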
  • Patent number: 11120810
    Abstract: A recording device comprises a first transmission unit, a switching and control unit, a sound quality sampling and encoding-decoding unit, a second transmission unit, a data access unit, a data writing unit, and a memory unit. A first digital audio source signal of an electronic device is transmitted from the first transmission unit to the switching and control unit. The sound quality sampling and encoding-decoding unit is electrically connected to the switching and control unit, receives and converts the aforementioned signal into a first digital audio signal and a first analog audio source signal. A second analog audio source signal of an audio receiving and transmitting device is converted into a second digital audio signal by the sound quality sampling and encoding-decoding unit. The data writing unit receives the first and second digital audio signals and writes the first and second digital audio signals into the memory unit.
    Type: Grant
    Filed: December 4, 2019
    Date of Patent: September 14, 2021
    Inventor: Yi-Jou Wang
  • Patent number: 11087095
    Abstract: The present invention is a system and method for optimizing the narrative text generated by one or more narrative frameworks that utilize data input from one or more data sources to drive the creation of a narrative text output. Narrative text is generated in accordance with sets of data that provide the scope of text to be generated. A Quality Assurance module presents the narrative text output to a user who reviews both the condition and logic evaluation associated with the scope and the quality of the generated text. A log of Quality Assurance items is created upon review of the generated text. These items are later resolved by locating them in a narrative text generation data structure and addressing the identified issues.
    Type: Grant
    Filed: February 20, 2015
    Date of Patent: August 10, 2021
    Assignee: STATS LLC
    Inventors: Adam Long, Robert Allen, Anne Johnson
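    A small, hypothetical sketch of the Quality Assurance log the abstract mentions: each item records where in the narrative-generation data structure a problem was observed so it can be located and resolved later. The field names and the dictionary-based structure are assumptions.
    ```python
    from dataclasses import dataclass

    @dataclass
    class QAItem:
        """One reviewer finding: which scope produced the text and what looked wrong."""
        scope_id: str
        condition: str
        generated_text: str
        issue: str
        resolved: bool = False

    def resolve_items(qa_log, narrative_structure):
        """Locate each logged item in the narrative-generation data structure and
        flag the corresponding node so an editor can fix the template or logic."""
        for item in qa_log:
            node = narrative_structure.get(item.scope_id)
            if node is not None:
                node["needs_review"] = item.issue
                item.resolved = True
        return [item for item in qa_log if not item.resolved]
    ```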
  • Patent number: 11074419
    Abstract: A method and system for providing online content in braille is provided. The method and system can allow for any online content to be converted from text into electronic braille by tokenizing the online content and determining the electronic braille from the tokenized online content based on a set of rules.
    Type: Grant
    Filed: July 6, 2020
    Date of Patent: July 27, 2021
    Assignee: Morgan Stanley Services Group Inc.
    Inventors: Devanshu Sen, Sumanth Sampath, Pravin Patil, Mohit Pal, Kranthi Darapu, Merav Pepere
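    A toy illustration of the tokenize-then-map approach: the dictionary below covers only a few Grade 1 (uncontracted) braille cells, whereas a real rule set handles contractions, numbers, capitalization, and punctuation. All names are illustrative.
    ```python
    # Partial Grade 1 (uncontracted) braille mapping for illustration only.
    BRAILLE = {"a": "\u2801", "b": "\u2803", "c": "\u2809", "t": "\u281E", " ": "\u2800"}

    def tokenize(online_content: str):
        """Split online content into word tokens (a real system would parse HTML first)."""
        return online_content.lower().split()

    def to_braille(online_content: str) -> str:
        """Apply the character-level rules to each token and join with blank braille cells."""
        words = []
        for token in tokenize(online_content):
            words.append("".join(BRAILLE.get(ch, "?") for ch in token))
        return BRAILLE[" "].join(words)

    print(to_braille("cat tab"))
    ```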
  • Patent number: 11056130
    Abstract: The present disclosure provides a speech enhancement method and apparatus, a device and a storage medium. The method includes: acquiring a first speech signal and a second speech signal; obtaining a signal to noise ratio of the first speech signal; determining, according to the signal to noise ratio of the first speech signal, a fusion coefficient of filtered signals corresponding to the first speech signal and the second speech signal; and performing, according to the fusion coefficient, speech fusion processing on the filtered signals corresponding to the first speech signal and the second speech signal to obtain an enhanced speech signal. In this way, the fusion coefficient of the speech signals from a non-air conduction speech sensor and an air conduction speech sensor is adaptively adjusted according to the environmental noise, improving the signal quality after speech fusion and the effect of speech enhancement.
    Type: Grant
    Filed: October 23, 2019
    Date of Patent: July 6, 2021
    Assignee: SHENZHEN GOODIX TECHNOLOGY CO., LTD.
    Inventors: Hu Zhu, Xinshan Wang, Guoliang Li, Duan Zeng, Hongjing Guo
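    A minimal Python sketch of SNR-driven fusion of an air-conduction and a non-air-conduction (e.g. bone-conduction) microphone signal, in the spirit of the abstract. The linear SNR-to-weight mapping, the thresholds, and the function names are illustrative assumptions rather than the patented procedure.
    ```python
    import numpy as np

    def estimate_snr_db(speech: np.ndarray, noise_floor: np.ndarray) -> float:
        """Rough SNR estimate in dB from a speech frame and a noise-only reference frame."""
        p_s = np.mean(speech ** 2) + 1e-12
        p_n = np.mean(noise_floor ** 2) + 1e-12
        return 10.0 * np.log10(p_s / p_n)

    def fusion_coefficient(snr_db: float, low=0.0, high=20.0) -> float:
        """Map SNR to a weight in [0, 1]: favor the air-conduction mic in quiet,
        the non-air-conduction mic in heavy noise."""
        return float(np.clip((snr_db - low) / (high - low), 0.0, 1.0))

    def fuse(air_filtered: np.ndarray, bone_filtered: np.ndarray, alpha: float) -> np.ndarray:
        """Weighted fusion of the two filtered signals into one enhanced signal."""
        return alpha * air_filtered + (1.0 - alpha) * bone_filtered
    ```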
  • Patent number: 11031003
    Abstract: Technology is disclosed for providing dynamic identification and extraction or tagging of contextually-coherent text blocks from an electronic document. In an embodiment, an electronic document may be parsed into a plurality of content tokens that each corresponds to a portion of the electronic document, such as a sentence or a paragraph. Employing a sliding window approach, a number of token groups are independently analyzed, where each group of tokens has a different number of tokens included therein. Each token group is analyzed to determine confidence scores for various determinable contexts based on content included in the token set. The confidence scores can then be processed for each token group to determine an entropy score for the token group. In this way, one of the analyzed token groups can be selected as a representative text block that corresponds to one of the plurality of determinable contexts.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: June 8, 2021
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Abedelkader Asi, Liron Izhaki-Allerhand, Ran Mizrachi, Royi Ronen, Ohad Jassin
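    A compact sketch of the sliding-window idea: score each token group against a set of candidate contexts, compute the entropy of the resulting confidence distribution, and keep the most decisive (lowest-entropy) group. `score_contexts` is a caller-supplied scorer and is assumed, as are the window sizes.
    ```python
    import math

    def entropy(confidences):
        """Shannon entropy of the normalized context-confidence scores for one token group."""
        total = sum(confidences.values()) or 1.0
        probs = [c / total for c in confidences.values() if c > 0]
        return -sum(p * math.log(p, 2) for p in probs)

    def best_text_block(tokens, score_contexts, window_sizes=(1, 2, 3, 5)):
        """Slide windows of several sizes over the tokens and keep the group whose
        context-confidence distribution has the lowest entropy (most decisive context).
        score_contexts(group) is assumed to return a non-empty dict, e.g.
        {"billing": 0.7, "support": 0.1}."""
        best = None
        for size in window_sizes:
            for start in range(0, max(1, len(tokens) - size + 1)):
                group = tokens[start:start + size]
                confidences = score_contexts(group)
                h = entropy(confidences)
                if best is None or h < best[0]:
                    best = (h, group, max(confidences, key=confidences.get))
        return best  # (entropy, token group, dominant context)
    ```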
  • Patent number: 10978088
    Abstract: A method of processing a signal includes taking a signal recorded by a plurality of signal recorders, applying at least one super-resolution technique to the signal to produce an oscillator peak representation of the signal comprising a plurality of frequency components for a plurality of oscillator peaks, computing at least one Cross Channel Complex Spectral Phase Evolution (XCSPE) attribute for the signal to produce a measure of a spatial evolution of the plurality of oscillator peaks between the signals, identifying a known predicted XCSPE curve (PXC) trace corresponding to the frequency components and at least one XCSPE attribute of the plurality of oscillator peaks and utilizing the identified PXC trace to determine a spatial attribute corresponding to an origin of the signal.
    Type: Grant
    Filed: October 17, 2019
    Date of Patent: April 13, 2021
    Assignee: XMOS INC.
    Inventors: Kevin M. Short, Brian T. Hone, Pascal Brunet
  • Patent number: 10950223
    Abstract: The system and method generally include identifying whether an utterance spoken by a user (e.g., customer) is a complete or incomplete sentence. For example, the system may include a partial utterance detection module that determines whether an utterance spoken by a user is a partial utterance. The detection process may include providing a detection advice code that gives a recommendation for handling the utterance of interest. If it is determined that the utterance is an incomplete sentence, then the system and method can identify the type of utterance. For example, the system may include a partial utterance classification module that predicts the class of a partial utterance. The classification process may include providing a classification advice code that gives a recommendation for handling the utterance of interest.
    Type: Grant
    Filed: August 20, 2018
    Date of Patent: March 16, 2021
    Assignee: Accenture Global Solutions Limited
    Inventors: Poulami Debnath, Shubhashis Sengupta, Harshawardhan Madhukar Wabgaonkar
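    A toy stand-in for the two modules the abstract names: a detector that flags an utterance as partial and returns a detection advice code, and a classifier that labels the fragment and returns a classification advice code. The heuristics, class names, and advice codes are invented for illustration; a real system would rely on learned models rather than keyword rules.
    ```python
    def detect_partial_utterance(utterance: str):
        """Decide whether the utterance looks incomplete and return a detection advice code."""
        tokens = utterance.strip().lower().split()
        is_partial = len(tokens) < 3 or tokens[-1] in {"the", "a", "to", "and", "of", "my"}
        return is_partial, ("ROUTE_TO_CLASSIFIER" if is_partial else "PROCESS_AS_COMPLETE")

    def classify_partial_utterance(utterance: str):
        """Predict a class for the partial utterance and return a classification advice code."""
        if any(w in utterance.lower() for w in ("pay", "bill", "charge")):
            return "billing_fragment", "ASK_FOR_ACCOUNT_DETAILS"
        return "unknown_fragment", "ASK_USER_TO_REPHRASE"
    ```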
  • Patent number: 10902197
    Abstract: Features are disclosed for determining the vocabulary of a user and identifying content items appropriate for the user based on the user's personal vocabulary. The user's vocabulary can be determined by analyzing user-generated textual items. Based on the analysis of such user-generated textual items, a list of words used frequently by the user in the user's own writings can be identified as being in the user's vocabulary. The list of words in the user's vocabulary can be compared to the words in various content items to determine a degree to which the words used in the content are in the user's vocabulary. Content can then be recommended or otherwise determined to be appropriate for the user's vocabulary, identified as challenging, too difficult, or too easy, etc.
    Type: Grant
    Filed: March 30, 2015
    Date of Patent: January 26, 2021
    Assignee: Audible, Inc.
    Inventor: Geetika Tewari Lakshmanan
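    A short sketch of the matching step: build a frequency-based vocabulary from the user's own writing, measure what fraction of a content item's words falls inside it, and label the item accordingly. The thresholds and function names are illustrative assumptions.
    ```python
    from collections import Counter

    def user_vocabulary(user_texts, min_count=3):
        """Words the user employs frequently in their own writing."""
        counts = Counter(w for text in user_texts for w in text.lower().split())
        return {w for w, c in counts.items() if c >= min_count}

    def vocabulary_coverage(content_text, vocabulary):
        """Fraction of the content's words that fall inside the user's vocabulary."""
        words = content_text.lower().split()
        if not words:
            return 0.0
        return sum(w in vocabulary for w in words) / len(words)

    def difficulty_label(coverage, easy=0.95, hard=0.75):
        """Illustrative thresholds for recommending or flagging a content item."""
        if coverage >= easy:
            return "too easy"
        if coverage < hard:
            return "challenging"
        return "appropriate"
    ```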
  • Patent number: 10902187
    Abstract: In an aspect, a computerized method for generating processed files of deposition testimony transcript designations may include accessing a file containing designations of contents of a textual transcript, quarantining errors in the designations, and generating a processed file containing processed designations of contents of the textual transcript having quarantined errors removed therefrom. In another aspect, a computerized method of generating designations for a deposition testimony transcript may include accessing designation information regarding designations made with respect to text of the deposition testimony transcript, accessing rules for generating designations based on the designation information, and generating the designations based on the rules.
    Type: Grant
    Filed: September 21, 2016
    Date of Patent: January 26, 2021
    Assignee: Designation Station, LLC
    Inventor: Christopher John Grimm
  • Patent number: 10884675
    Abstract: An image forming apparatus compares device identification information acquired from XML setting information received from a connected device with device identification information of the image forming apparatus, and determines an import level based on a comparison result. The image forming apparatus extracts a setting according to the import level using each module of a plurality of applications for the image forming apparatus, and stores the extracted setting in a storage that is used for control performed in each application. The device identification information that determines the import level includes at least one of firmware version, destination information, user editing information, accessory connection information, and license information, in addition to model management number and machine body management number.
    Type: Grant
    Filed: June 18, 2018
    Date of Patent: January 5, 2021
    Assignee: Canon Kabushiki Kaisha
    Inventor: Hidetaka Nakahara
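    A simplified sketch of the comparison logic: match the device identification fields received in the XML settings against the local device's fields and derive an import level. The field names and the three levels are assumptions for illustration.
    ```python
    # Illustrative fields; the patent lists model/machine-body numbers plus firmware
    # version, destination, user-editing, accessory, and license information.
    ID_FIELDS = ["model_number", "body_number", "firmware_version",
                 "destination", "user_editing", "accessories", "license"]

    def import_level(received: dict, local: dict) -> str:
        """Compare device identification info from the XML settings with the local
        device's info and pick how much of the settings to import."""
        if all(received.get(f) == local.get(f) for f in ID_FIELDS):
            return "full"          # identical device: import everything
        if received.get("model_number") == local.get("model_number"):
            return "model"         # same model: import model-level settings only
        return "minimal"           # different model: import only device-independent settings
    ```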
  • Patent number: 10885898
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving audio data including an utterance, obtaining context data that indicates one or more expected speech recognition results, determining an expected speech recognition result based on the context data, receiving an intermediate speech recognition result generated by a speech recognition engine, comparing the intermediate speech recognition result to the expected speech recognition result for the audio data based on the context data, determining whether the intermediate speech recognition result corresponds to the expected speech recognition result for the audio data based on the context data, and setting an end of speech condition and providing a final speech recognition result in response to determining the intermediate speech recognition result matches the expected speech recognition result, the final speech recognition result including the one or more expected speech recognition results indicated by the context data.
    Type: Grant
    Filed: September 21, 2017
    Date of Patent: January 5, 2021
    Assignee: Google LLC
    Inventors: Petar Aleksic, Glen Shires, Michael Buchanan
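    A minimal sketch of the comparison step: normalize an intermediate recognition result and check it against the expected results derived from context data, ending the utterance early on a match. Function names and the exact-match criterion are illustrative assumptions.
    ```python
    def normalize(text: str) -> str:
        return " ".join(text.lower().split())

    def check_end_of_speech(intermediate_result: str, expected_results) -> bool:
        """Compare an intermediate recognition result against the expected results
        derived from context data; a match lets the recognizer end-point early."""
        hyp = normalize(intermediate_result)
        return any(hyp == normalize(exp) for exp in expected_results)

    # Usage: if the dialog expects "yes" or "no", endpoint as soon as one appears.
    expected = ["yes", "no"]
    if check_end_of_speech("Yes", expected):
        final_result = "yes"   # set end-of-speech condition and emit the final result
    ```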
  • Patent number: 10839824
    Abstract: During high-frequency interpolation, a harmonic generation unit first generates a harmonic signal for an input compressed audio signal. In parallel with the generation of the harmonic signal, a first HPF unit, having a first cutoff frequency, extracts a high frequency component from the compressed audio signal, and a second HPF unit, having a second cutoff frequency, extracts a high frequency component from the compressed audio signal. An estimation unit estimates a missing band in the compressed audio signal on the basis of a ratio of the signal level of a difference signal to the signal level of an output signal, the difference signal being obtained by subtracting the output signal of one of the two HPF units from the output signal of the other. The estimation unit controls, on the basis of the estimated missing band, the cutoff frequency of a variable HPF unit that extracts a signal component for high-frequency interpolation from the harmonic signal.
    Type: Grant
    Filed: March 27, 2014
    Date of Patent: November 17, 2020
    Assignee: PIONEER CORPORATION
    Inventor: Shin Hasegawa
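    A rough Python sketch of the band estimation idea: run the compressed signal through two high-pass filters, form the difference signal, and use the ratio of its level to a filter output's level to decide where the missing band starts, which in turn sets the cutoff used for interpolation. Cutoff values, the ratio threshold, and the SciPy-based filters are illustrative assumptions.
    ```python
    import numpy as np
    from scipy.signal import butter, sosfilt

    def highpass(x, cutoff_hz, fs, order=4):
        sos = butter(order, cutoff_hz, btype="highpass", fs=fs, output="sos")
        return sosfilt(sos, x)

    def estimate_missing_band_cutoff(x, fs, f_lo=8000.0, f_hi=12000.0):
        """Compare the energy between two highpass outputs; a weak band between the
        two cutoffs suggests that band is missing and the variable HPF cutoff for
        interpolation should move down toward f_lo."""
        y_lo = highpass(x, f_lo, fs)
        y_hi = highpass(x, f_hi, fs)
        diff = y_lo - y_hi                               # energy in roughly [f_lo, f_hi]
        ratio = (np.mean(diff ** 2) + 1e-12) / (np.mean(y_lo ** 2) + 1e-12)
        # Illustrative mapping: the emptier the band, the lower the interpolation cutoff.
        return f_lo if ratio < 0.1 else f_hi
    ```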
  • Patent number: 10832685
    Abstract: According to an embodiment, a speech processing device includes an extractor, a classifier, a similarity calculator, and an identifier. The extractor is configured to extract a speech feature from utterance data. The classifier is configured to classify the utterance data into a set of utterances for each speaker based on the extracted speech feature. The similarity calculator is configured to calculate a similarity between the speech feature of the utterance data included in the set and each of a plurality of speaker models. The identifier is configured to identify a speaker for each set based on the calculated similarity.
    Type: Grant
    Filed: September 1, 2016
    Date of Patent: November 10, 2020
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Ning Ding, Makoto Hirohata
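    A small sketch of the identification step: average the speech features of one clustered utterance set and compare the centroid against each enrolled speaker model by cosine similarity. The use of a centroid and cosine similarity is an assumption for illustration.
    ```python
    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def identify_speaker(cluster_features, speaker_models):
        """Average the speech features of one utterance set and pick the enrolled
        speaker model with the highest similarity."""
        centroid = np.mean(np.stack(cluster_features), axis=0)
        scores = {name: cosine(centroid, np.asarray(model))
                  for name, model in speaker_models.items()}
        return max(scores, key=scores.get), scores
    ```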
  • Patent number: 10803850
    Abstract: Techniques for generating voice with predetermined emotion type. In an aspect, semantic content and emotion type are separately specified for a speech segment to be generated. A candidate generation module generates a plurality of emotionally diverse candidate speech segments, wherein each candidate has the specified semantic content. A candidate selection module identifies an optimal candidate from amongst the plurality of candidate speech segments, wherein the optimal candidate most closely corresponds to the predetermined emotion type. In further aspects, crowd-sourcing techniques may be applied to generate the plurality of speech output candidates associated with a given semantic content, and machine-learning techniques may be applied to derive parameters for a real-time algorithm for the candidate selection module.
    Type: Grant
    Filed: September 8, 2014
    Date of Patent: October 13, 2020
    Inventors: Chi-Ho Li, Baoxun Wang, Max Leung
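    A minimal sketch of the candidate selection step: given emotion embeddings for the generated candidates and for the requested emotion type, pick the closest candidate. Representing emotion as an embedding and using Euclidean distance are assumptions for illustration.
    ```python
    import numpy as np

    def select_candidate(candidate_embeddings, target_emotion_embedding):
        """Pick the generated speech candidate whose emotion embedding is closest
        to the requested emotion type."""
        target = np.asarray(target_emotion_embedding)
        dists = [np.linalg.norm(np.asarray(e) - target) for e in candidate_embeddings]
        return int(np.argmin(dists))
    ```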
  • Patent number: 10796705
    Abstract: A coder and decoder, and methods therein, are provided for coding and decoding of spectral peak positions in audio coding. According to a first aspect, an audio signal segment coding method is provided for coding of spectral peak positions. The method comprises determining which one of two lossless spectral peak position coding schemes requires the least number of bits to code the spectral peak positions of an audio signal segment; and selecting the spectral peak position coding scheme that requires the least number of bits to code the spectral peak positions of the audio signal segment. A first one of the two lossless spectral peak position coding schemes is suitable for periodic or semi-periodic spectral peak position distributions; and a second one is suitable for sparse spectral peak position distributions.
    Type: Grant
    Filed: April 27, 2018
    Date of Patent: October 6, 2020
    Assignee: Telefonaktiebolaget LM Ericsson (publ)
    Inventors: Volodya Grancharov, Sigurdur Sverrisson
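    An illustrative Python sketch of the selection rule: estimate the bit cost of coding a segment's peak positions under a periodicity-based scheme and under a direct (sparse) scheme, then keep whichever needs fewer bits. Both toy schemes below are assumptions that merely stand in for the patent's two lossless schemes.
    ```python
    import math

    def bits_sparse(positions, max_pos):
        """Toy scheme B: code each peak position as a fixed-length binary index."""
        return len(positions) * math.ceil(math.log2(max_pos + 1))

    def bits_periodic(positions, max_pos):
        """Toy scheme A: code a start position and period, plus a small deviation per peak."""
        if len(positions) < 2:
            return bits_sparse(positions, max_pos)
        period = round((positions[-1] - positions[0]) / (len(positions) - 1))
        deviations = [p - (positions[0] + i * period) for i, p in enumerate(positions)]
        dev_range = 2 * max(abs(d) for d in deviations) + 1
        dev_bits = max(1, math.ceil(math.log2(dev_range)))
        return 2 * math.ceil(math.log2(max_pos + 1)) + len(positions) * dev_bits

    def select_scheme(positions, max_pos):
        """Keep whichever lossless scheme needs fewer bits for this segment."""
        a, b = bits_periodic(positions, max_pos), bits_sparse(positions, max_pos)
        return ("periodic", a) if a <= b else ("sparse", b)
    ```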
  • Patent number: 10726828
    Abstract: A method, computer system, and a computer program product for generating a plurality of voice data having a particular speaking style is provided. The present invention may include preparing a plurality of original voice data corresponding to at least one word or at least one phrase. The present invention may also include attenuating a low frequency component and a high frequency component in the prepared plurality of original voice data. The present invention may then include reducing power at a beginning and an end of the prepared plurality of original voice data. The present invention may further include storing a plurality of resultant voice data obtained after the attenuating and the reducing.
    Type: Grant
    Filed: May 31, 2017
    Date of Patent: July 28, 2020
    Assignee: International Business Machines Corporation
    Inventors: Takashi Fukuda, Osamu Ichikawa, Gakuto Kurata, Masayuki Suzuki
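    A compact sketch of the preparation steps the abstract lists: attenuate low and high frequency components with a band-pass filter and reduce power at the beginning and end with short fades, then store the results. Filter design, band edges, and fade length are illustrative assumptions (SciPy/NumPy used for brevity).
    ```python
    import numpy as np
    from scipy.signal import butter, sosfilt

    def prepare_voice_data(x, fs, low_hz=300.0, high_hz=3400.0, fade_ms=50.0):
        """Attenuate low/high frequency components, then fade the edges of the clip."""
        sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=fs, output="sos")
        y = sosfilt(sos, np.asarray(x, dtype=np.float64))
        n_fade = min(int(fs * fade_ms / 1000.0), len(y) // 2)
        if n_fade > 0:
            ramp = np.linspace(0.0, 1.0, n_fade)
            y[:n_fade] *= ramp
            y[-n_fade:] *= ramp[::-1]
        return y

    # Store the resultant voice data alongside the originals for later use.
    # prepared = [prepare_voice_data(clip, fs) for clip, fs in original_clips]
    ```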
  • Patent number: 10698932
    Abstract: The present disclosure provides a method and apparatus for parsing a query based on artificial intelligence, and a storage medium, wherein the method comprises: for a given application domain, obtaining a knowledge library corresponding to the application domain; determining training queries that serve as training language material according to the knowledge library; training a deep query parsing model on the training language material; and using the deep query parsing model to parse a user's query to obtain a parsing result. The solution of the present disclosure can improve the accuracy of the parsing result.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: June 30, 2020
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Shuohuan Wang, Yu Sun, Dianhai Yu
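    A small, hypothetical sketch of the training-data step: expand a domain knowledge library into labeled training queries that a deep parsing model could then be trained on. The knowledge triples, templates, and parse labels are invented for illustration; training and inference of the deep model are not shown.
    ```python
    # Hypothetical knowledge library: (entity, attribute, value) triples for one domain.
    KNOWLEDGE = [("Beijing", "population", "21.5 million"),
                 ("Beijing", "area", "16410 km2")]

    TEMPLATES = ["what is the {attr} of {entity}",
                 "{entity} {attr}"]

    def build_training_queries(knowledge, templates):
        """Expand the knowledge library into labeled training queries for the parser."""
        data = []
        for entity, attr, _ in knowledge:
            for tpl in templates:
                query = tpl.format(attr=attr, entity=entity)
                data.append((query, {"entity": entity, "attribute": attr}))
        return data

    # A deep parsing model (e.g. a sequence tagger) would then be trained on this
    # data and applied to live user queries to produce the same structured parse.
    ```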