Patents Examined by Qi Han
  • Patent number: 9720910
    Abstract: An approach is provided to receive a term that is included in a Business Process Model (BPM) data store, with the term being from a given natural language. The approach identifies that descriptive text of the term is not available in that language. A translated version of the term is then retrieved from a different natural language stored in the BPM data store, in which descriptive text of the term is present. That descriptive text is translated back to the given natural language, and the resulting translated descriptive text is provided as the meaning of the term in the given language. (An illustrative sketch of this fallback follows the entry.)
    Type: Grant
    Filed: November 11, 2015
    Date of Patent: August 1, 2017
    Assignee: International Business Machines Corporation
    Inventors: Donna K. Byron, Lakshminarayanan Krishnamurthy, Ravi S. Sinha, Craig M. Trim
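    Illustrative sketch (not from the patent): a minimal Python outline of the fallback flow described above. The BPM_STORE layout, the translate() helper, and the describe_term() function are assumptions made here for illustration only.
      # Toy BPM store: term -> {language -> {"translation": ..., "description": ... or None}}
      BPM_STORE = {
          "invoice": {
              "en": {"translation": "invoice", "description": None},
              "de": {"translation": "Rechnung", "description": "Dokument, das eine Zahlungsforderung auflistet."},
          },
      }

      def translate(text, source_lang, target_lang):
          # Stand-in for a real machine-translation call (assumption).
          return f"[{source_lang}->{target_lang}] {text}"

      def describe_term(term, language):
          """Return descriptive text for `term` in `language`, falling back to a
          translation of the description found in another language."""
          entry = BPM_STORE[term]
          native = entry.get(language, {}).get("description")
          if native:                                   # description already available
              return native
          for other_lang, data in entry.items():
              if other_lang != language and data.get("description"):
                  # Translate the other-language description into the requested language.
                  return translate(data["description"], other_lang, language)
          return None

      print(describe_term("invoice", "en"))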
  • Patent number: 9720904
    Abstract: A method for generating training data for disambiguation of an entity (a word or word string related to a topic to be analyzed) includes: acquiring messages sent by users, each message including at least one entity from a set of entities; organizing the messages into sets, each set containing the messages sent by one user; identifying sets of messages that include a number of different entities greater than or equal to a first threshold value, and identifying the users corresponding to those sets as hot users; receiving an instruction indicating an object entity to be disambiguated; determining a likelihood of co-occurrence of each keyword with the object entity in the sets of messages sent by the hot users; and determining training data for the object entity on the basis of those co-occurrence likelihoods. (A toy sketch of the hot-user and co-occurrence steps follows the entry.)
    Type: Grant
    Filed: November 30, 2015
    Date of Patent: August 1, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Yohei Ikawa, Akiko Suzuki
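    Illustrative sketch (not from the patent; the data layout, function names, and the crude co-occurrence count are assumptions): selecting hot users and counting keyword co-occurrence with an object entity.
      from collections import Counter

      def hot_users(messages_by_user, entities, threshold):
          """Users whose messages mention at least `threshold` distinct entities."""
          hot = []
          for user, messages in messages_by_user.items():
              seen = {e for m in messages for e in entities if e in m}
              if len(seen) >= threshold:
                  hot.append(user)
          return hot

      def keyword_cooccurrence(messages_by_user, users, object_entity):
          """Count how often each keyword co-occurs with the object entity in
          messages sent by the given users (a crude likelihood proxy)."""
          counts = Counter()
          for user in users:
              for message in messages_by_user[user]:
                  if object_entity in message:
                      for word in message.split():
                          if word != object_entity:
                              counts[word] += 1
          return counts

      messages = {
          "alice": ["apple releases phone", "apple pie recipe", "orange juice sale"],
          "bob": ["apple stock rises"],
      }
      hot = hot_users(messages, entities={"apple", "orange"}, threshold=2)
      print(hot, keyword_cooccurrence(messages, hot, "apple"))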
  • Patent number: 9715490
    Abstract: In an approach to automating multilingual indexing, a computer receives the text of a conversation between at least two users and detects at least one language associated with that text. The computer determines whether the language is detected with a confidence level that exceeds a pre-defined threshold; if not, it retrieves text from one or more previous conversations between the same two users and repeats the language detection on that text. The computer then analyzes the text using at least one of the detected languages to create one or more terms, indexes the terms, and stores a boost value associated with each indexed term corresponding to the confidence level of the detected language. (A minimal sketch of this flow follows the entry.)
    Type: Grant
    Filed: November 6, 2015
    Date of Patent: July 25, 2017
    Assignee: International Business Machines Corporation
    Inventors: Leonid Bolshinsky, Sharon Krisher, Eitan Shapiro
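    Illustrative sketch (not from the patent; detect_language() is a stub and the threshold is an assumed value): indexing terms with a boost tied to language-detection confidence, falling back to earlier conversations when confidence is low.
      def detect_language(text):
          # Stand-in for a real language detector (assumption); returns (language, confidence).
          return ("en", 0.9) if text.isascii() else ("unknown", 0.3)

      def index_conversation(text, previous_texts, threshold=0.8):
          """Index terms from a conversation, storing a boost value equal to the
          confidence of the detected language."""
          lang, conf = detect_language(text)
          if conf < threshold and previous_texts:
              # Fall back to earlier conversations between the same users.
              lang, conf = detect_language(" ".join(previous_texts))
          index = {}
          for term in text.lower().split():      # trivial stand-in for term analysis
              index[term] = {"language": lang, "boost": conf}
          return index

      print(index_conversation("hello how are you", previous_texts=["fine thanks"]))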
  • Patent number: 9715874
    Abstract: Techniques are described for updating an automatic speech recognition (ASR) system that, prior to the update, is configured to perform ASR using a first finite-state transducer (FST) comprising a first set of paths representing recognizable speech sequences. A second FST may be accessed, comprising a second set of paths representing speech sequences to be recognized by the updated ASR system. By analyzing the second FST together with the first FST, a patch may be extracted and provided to the ASR system as an update, capable of being applied non-destructively to the first FST at the ASR system to cause the ASR system using the first FST with the patch to recognize speech using the second set of paths from the second FST. In some embodiments, the patch may be configured such that destructively applying the patch to the first FST creates a modified FST that is globally minimized. (A deliberately simplified, non-FST toy illustration of the non-destructive patch idea follows the entry.)
    Type: Grant
    Filed: October 30, 2015
    Date of Patent: July 25, 2017
    Assignee: Nuance Communications, Inc.
    Inventors: Stephan Kanthak, Jan Vlietinck, Johan Vantieghem, Stijn Verschaeren
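    Illustrative sketch (a heavy simplification, not the patented technique: real FSTs involve weighted transitions, composition, and minimization; here an "FST" is reduced to a plain set of word-sequence paths purely to illustrate applying a patch non-destructively as an overlay).
      # Toy stand-in: each "FST" is just a set of recognizable word sequences.
      OLD_FST = {("call", "home"), ("play", "music")}
      NEW_FST = {("call", "home"), ("play", "podcast")}

      def extract_patch(old_paths, new_paths):
          """Patch = paths to add and paths to retire when moving old -> new."""
          return {"add": new_paths - old_paths, "remove": old_paths - new_paths}

      def recognizes(path, base_paths, patch):
          """Non-destructive application: the base set is never modified;
          lookups consult the patch as an overlay."""
          if path in patch["remove"]:
              return False
          return path in base_paths or path in patch["add"]

      patch = extract_patch(OLD_FST, NEW_FST)
      print(recognizes(("play", "podcast"), OLD_FST, patch))   # True
      print(recognizes(("play", "music"), OLD_FST, patch))     # False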
  • Patent number: 9704479
    Abstract: A speech recognition device starts to generate dictionary data for each type of name based on name data and paraphrase data, and registers the resulting dictionary data. The device also obtains text information that is the same as the text information used to generate the dictionary data the previous time; when back-up data corresponding to that previous text information has been generated, the device registers the dictionary data stored as the back-up data. Further, a dictionary data generation device registers dictionary data based on given name data each time it completes generating the dictionary data from that name data. (A minimal caching sketch follows the entry.)
    Type: Grant
    Filed: January 29, 2013
    Date of Patent: July 11, 2017
    Assignee: DENSO CORPORATION
    Inventors: Hideaki Tsuji, Satoshi Miyaguni
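    Illustrative sketch (not from the patent; the state dictionary and the stand-in generation step are assumptions): reusing backed-up dictionary data when the source text information is unchanged.
      def register_dictionary(text_info, state):
          """Reuse backed-up dictionary data when the source text is unchanged;
          otherwise regenerate it and refresh the backup."""
          if text_info == state.get("last_text_info") and "backup" in state:
              dictionary = state["backup"]                              # skip regeneration
          else:
              dictionary = {name: name.lower() for name in text_info}   # stand-in generation step
              state["backup"] = dictionary
              state["last_text_info"] = text_info
          print("registered", dictionary)
          return dictionary

      state = {}
      register_dictionary(["Alice", "Bob"], state)   # generates and backs up
      register_dictionary(["Alice", "Bob"], state)   # reuses the backup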
  • Patent number: 9697206
    Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for approximating responses to a user speech query in voice-enabled search based on metadata that include demographic features of the speaker. A system practicing the method recognizes received speech from a speaker to generate recognized speech, identifies metadata about the speaker from the received speech, and feeds the recognized speech and the metadata to a question-answering engine. Identifying the metadata about the speaker is based on voice characteristics of the received speech. The demographic features can include age, gender, socio-economic group, nationality, and/or region. The metadata identified about the speaker from the received speech can be combined with or override self-reported speaker demographic information. (A small metadata-merging sketch follows the entry.)
    Type: Grant
    Filed: October 7, 2015
    Date of Patent: July 4, 2017
    Assignee: Interactions LLC
    Inventors: Michael J. Johnston, Srinivas Bangalore, Junlan Feng, Taniya Mishra
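    Illustrative sketch (not from the patent; the metadata classifier is a stub and all field names are assumptions): combining recognized speech with voice-derived demographic metadata, which overrides self-reported values, before handing the query to a question-answering engine.
      def infer_metadata_from_voice(audio_features):
          # Stand-in for classifiers over voice characteristics (assumption).
          return {"age_group": "adult", "gender": "female"}

      def build_query(recognized_text, audio_features, self_reported=None):
          """Voice-derived attributes override self-reported ones."""
          metadata = dict(self_reported or {})
          metadata.update(infer_metadata_from_voice(audio_features))
          return {"question": recognized_text, "speaker_metadata": metadata}

      print(build_query("where is the nearest pharmacy",
                        audio_features=[0.2, 0.7],
                        self_reported={"age_group": "teen", "region": "midwest"}))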
  • Patent number: 9697178
    Abstract: The tools and abstractions of the subject invention function as part of, or configure, a system that uses available data and information to automatically create narrative stories describing domain events, circumstances, and/or entities in a comprehensible, compelling, and audience-customized manner. Computer-executable instructions provide for generating a narrative story using standard and uniform structures, and for receiving domain-related data and a story specification, parsing the story specification to provide constituent components, transforming the constituent components into executable code, instantiating content blocks having at least one feature for the domain according to the story specification, and rendering the narrative story using the constituent components specified by the content blocks. (A toy pipeline sketch follows the entry.)
    Type: Grant
    Filed: May 4, 2012
    Date of Patent: July 4, 2017
    Assignee: Narrative Science Inc.
    Inventors: Nathan Drew Nichols, Lawrence A. Birnbaum, Kristian J. Hammond
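    Illustrative sketch (not from the patent; the specification format, domain data, and all function names are assumptions): a toy parse-instantiate-render pipeline for generating a narrative from a story specification.
      def parse_spec(spec):
          """Split a toy story specification into its constituent components."""
          return [part.strip() for part in spec.split(";") if part.strip()]

      def instantiate_blocks(components, domain_data):
          """Bind each component to a content block populated from domain data."""
          return [{"feature": c, "value": domain_data.get(c, "n/a")} for c in components]

      def render_story(blocks):
          return " ".join(f"The {b['feature']} was {b['value']}." for b in blocks)

      spec = "home_team; away_team; final_score"
      data = {"home_team": "Falcons", "away_team": "Hawks", "final_score": "3-1"}
      print(render_story(instantiate_blocks(parse_spec(spec), data)))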
  • Patent number: 9684650
    Abstract: A penalized loss is optimized over a corpus of language samples with respect to a set of parameters of a language model. The penalized loss includes a function measuring the predictive accuracy of the language model on the corpus of language samples and a penalty comprising a tree-structured norm. The trained language model, with the parameter values produced by the optimization, is applied to predict a symbol following a sequence of symbols of the language modeled by the language model. In some embodiments the penalty comprises a tree-structured ℓp-norm, such as a tree-structured ℓ2-norm or a tree-structured ℓ∞-norm. In some embodiments a tree-structured ℓ∞-norm operates on a collapsed suffix trie in which any series of suffixes of increasing length that are always observed in the same context is collapsed into a single node. The optimizing may be performed using a proximal-step algorithm. (One common way to write the penalized objective appears after the entry.)
    Type: Grant
    Filed: September 10, 2014
    Date of Patent: June 20, 2017
    Assignee: XEROX CORPORATION
    Inventors: Anil Kumar Nelakanti, Guillaume M. Bouchard, Cedric Archambeau, Francis Bach, Julien Mairal
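    For orientation, one standard way to write such a penalized loss (notation assumed here, not quoted from the patent): the language-model parameters w are fit by minimizing the corpus loss plus a tree-structured norm, with one group of parameters per subtree of the suffix trie.
      \min_{w}\; -\sum_{(x,y)\in\mathcal{D}} \log p_w(y \mid x) \;+\; \lambda \sum_{g \in \mathcal{T}} \| w_g \|_p , \qquad p \in \{2, \infty\},
      where \mathcal{D} is the corpus of language samples, \mathcal{T} indexes subtrees of the (possibly collapsed) suffix trie, and w_g is the sub-vector of parameters attached to subtree g. A proximal-step algorithm alternates a gradient step on the loss with the proximal operator of the tree-structured penalty.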
  • Patent number: 9685159
    Abstract: A method for speaker recognition comprising: obtaining speaker information for a target speaker; obtaining speech samples from telephone calls from an unknown speaker; classifying the speech samples according to the unknown speaker, thereby providing speaker-dependent classes of speech samples; extracting speaker information from each of the speaker-dependent classes of speech samples; combining the extracted speaker information; comparing the combined extracted speaker information with the stored speaker information for the target speaker to obtain a comparison result; and determining whether the unknown speaker is identical to the target speaker based on the comparison result. (A simple embedding-averaging sketch follows the entry.)
    Type: Grant
    Filed: August 3, 2015
    Date of Patent: June 20, 2017
    Assignee: Agnitio SL
    Inventors: Marta Garcia Gomar, Johan Nikolaas Langehoven Brummer, Luis Buera Rodriguez
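    Illustrative sketch (not from the patent; representing speaker information as fixed-length embedding vectors, and combining by averaging, are assumptions): combining per-class speaker information and comparing it against the target speaker's profile.
      def average(vectors):
          return [sum(vals) / len(vals) for vals in zip(*vectors)]

      def cosine(a, b):
          dot = sum(x * y for x, y in zip(a, b))
          norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
          return dot / norm if norm else 0.0

      def is_target_speaker(target_embedding, class_embeddings, threshold=0.8):
          """Average the speaker information extracted from each class of calls,
          then compare the combined result with the target speaker's profile."""
          combined = average([average(c) for c in class_embeddings])
          return cosine(combined, target_embedding) >= threshold

      target = [0.9, 0.1, 0.3]
      classes = [[[0.88, 0.12, 0.28], [0.91, 0.09, 0.31]], [[0.86, 0.14, 0.33]]]
      print(is_target_speaker(target, classes))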
  • Patent number: 9684446
    Abstract: In one example, a device includes at least one processor and at least one module operable by the at least one processor to output, for display, a graphical user interface including a graphical keyboard and one or more text suggestion regions, and select, based at least in part on an indication of gesture input, at least one key of the graphical keyboard. The at least one module is further operable by the at least one processor to determine a plurality of candidate character strings, determine past interaction data that comprises a representation of a past user input corresponding to at least one candidate character string while the at least one candidate character string was previously displayed in at least one of the one or more text suggestion regions, and output the at least one candidate character string for display in one of the one or more text suggestion regions. (A small candidate-ranking sketch follows the entry.)
    Type: Grant
    Filed: August 28, 2014
    Date of Patent: June 20, 2017
    Assignee: Google Inc.
    Inventors: Shumin Zhai, Philip Quinn
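    Illustrative sketch (not from the patent; the scoring formula and the shape of the past-interaction data are assumptions): ordering candidate character strings for the text suggestion regions using past user interactions as a boost.
      def rank_candidates(candidates, language_model_scores, past_interactions):
          """Boost candidates the user previously selected from a suggestion region
          and slightly penalize ones that were shown but ignored."""
          def score(word):
              base = language_model_scores.get(word, 0.0)
              history = past_interactions.get(word, {"shown": 0, "selected": 0})
              ignored = history["shown"] - history["selected"]
              return base + history["selected"] - 0.1 * ignored
          return sorted(candidates, key=score, reverse=True)

      lm_scores = {"their": 0.6, "there": 0.7, "they're": 0.5}
      history = {"their": {"shown": 5, "selected": 4}}
      print(rank_candidates(["their", "there", "they're"], lm_scores, history))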
  • Patent number: 9672490
    Abstract: A procurement system may include a first interface configured to receive a query from a user, a command module configured to parameterize the query, an intelligent search and match engine configured to compare the parameterized query with stored queries in a historical knowledge base and, in the event the parameterized query does not match a stored query within the historical knowledge base, search for a match in a plurality of knowledge models, and a response solution engine configured to receive a system response ID from the intelligent search and match engine, the response solution engine being configured to initiate a system action by interacting with sub-system and related databases to generate a system response. (A minimal query-matching sketch follows the entry.)
    Type: Grant
    Filed: July 9, 2015
    Date of Patent: June 6, 2017
    Assignee: GLOBAL EPROCURE
    Inventors: Subhash Makhija, Santosh Katakol, Dhananjay Nagalkar, Siddhaarth Iyer, Ravi Mevcha
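    Illustrative sketch (not from the patent; the query signature, knowledge-base layout, and response IDs are assumptions): parameterizing a query, matching it first against a historical knowledge base and then against knowledge models, and turning the resulting response ID into a system action.
      def parameterize(query):
          """Crude parameterization: a lowercase keyword set as the query signature."""
          return frozenset(query.lower().split())

      def find_response_id(query, historical_kb, knowledge_models):
          params = parameterize(query)
          if params in historical_kb:                  # matches a stored query
              return historical_kb[params]
          for model_params, response_id in knowledge_models.items():
              if model_params <= params:               # fall back to a knowledge-model match
                  return response_id
          return None

      def respond(response_id):
          # Stand-in for the response solution engine driving sub-systems.
          return {"action": "create_purchase_order"} if response_id == "PO_REQUEST" else {"action": "none"}

      historical_kb = {parameterize("order 100 laptops"): "PO_REQUEST"}
      knowledge_models = {frozenset({"order"}): "PO_REQUEST"}
      print(respond(find_response_id("please order 20 monitors", historical_kb, knowledge_models)))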
  • Patent number: 9652449
    Abstract: A method, computer readable medium and apparatus for detecting the sentiment of a short message are disclosed. For example, the method receives the short message, and obtains an abstraction of the short message. The method then determines the sentiment of the short message based upon the abstraction. (A tiny abstraction-then-classify sketch follows the entry.)
    Type: Grant
    Filed: April 20, 2015
    Date of Patent: May 16, 2017
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Luciano De Andrade Barbosa, Junlan Feng
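    Illustrative sketch (not from the patent; the abstraction rules and the tiny word lists are assumptions): abstracting away the surface noise of a short message before scoring its sentiment.
      import re

      def abstract_message(text):
          """Map surface noise in a short message to abstract placeholders."""
          text = re.sub(r"https?://\S+", "<URL>", text)
          text = re.sub(r"@\w+", "<USER>", text)
          text = re.sub(r"(.)\1{2,}", r"\1\1", text)   # "sooooo" -> "soo"
          return text.lower()

      POSITIVE, NEGATIVE = {"love", "great", "good"}, {"hate", "awful", "bad"}

      def sentiment(text):
          words = set(abstract_message(text).split())
          score = len(words & POSITIVE) - len(words & NEGATIVE)
          return "positive" if score > 0 else "negative" if score < 0 else "neutral"

      print(sentiment("I love this phone sooooo much @shop http://example.com"))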
  • Patent number: 9646615
    Abstract: A method of encoding a time-domain audio signal is presented. A device transforms the time-domain signal into a frequency-domain signal including a sequence of sample blocks, wherein each block includes a coefficient for each of multiple frequencies. The coefficients of each block are grouped into frequency bands. For each frequency band of each block, a scale factor is estimated for the band, and the energy of the band for the block is compared with the energy of the band of an adjacent sample block, wherein the blocks may be adjacent to each other in either or both of an interchannel and a temporal sense. If the ratio of the band energy for the first block to the band energy for the adjacent block is less than some value, the scale factor of the band for the first block is increased. The coefficients of the band for each block are quantized based on the resulting scale factor. The encoded audio signal is generated based on the quantized coefficients and the scale factors. (A simplified per-band sketch follows the entry.)
    Type: Grant
    Filed: July 29, 2013
    Date of Patent: May 9, 2017
    Assignee: EchoStar Technologies L.L.C.
    Inventor: Nandury V. Kishore
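    Illustrative sketch (not from the patent; the energy ratio limit, the scale-factor step, and the quantizer are assumptions): raising a band's scale factor when its energy falls well below the same band in an adjacent block, then quantizing with that scale factor.
      def band_energy(coefficients):
          return sum(c * c for c in coefficients)

      def choose_scale_factor(band, adjacent_band, base_scale=1.0, ratio_limit=0.5, step=2.0):
          """Increase the scale factor when this block's band energy is much lower
          than the same band in the adjacent (temporal or inter-channel) block."""
          adjacent = band_energy(adjacent_band)
          if adjacent > 0 and band_energy(band) / adjacent < ratio_limit:
              return base_scale * step
          return base_scale

      def quantize(coefficients, scale):
          return [round(c / scale) for c in coefficients]

      current, adjacent = [0.2, -0.1, 0.05], [1.0, -0.8, 0.6]
      scale = choose_scale_factor(current, adjacent)
      print(scale, quantize(current, scale))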
  • Patent number: 9626970
    Abstract: Embodiments of the present invention relate to speaker identification using spatial information. A method of speaker identification for audio content in a format based on multiple channels is disclosed. The method comprises extracting, from a first audio clip in the format, a plurality of spatial acoustic features across the multiple channels and location information, the first audio clip containing voices from a speaker, and constructing a first model for the speaker based on the spatial acoustic features and the location information, the first model indicating a characteristic of the voices from the speaker. The method further comprises identifying whether the audio content contains voices from the speaker based on the first model. A corresponding system and computer program product are also disclosed.
    Type: Grant
    Filed: December 16, 2015
    Date of Patent: April 18, 2017
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Shen Huang, Xuejing Sun
  • Patent number: 9620118
    Abstract: A method and system for monitoring video assets provided by a multimedia content distribution network includes testing closed captions provided in output video signals. A video and audio portion of a video signal are acquired during a time period that a closed caption occurs. A first text string is extracted from a text portion of a video image, while a second text string is extracted from speech content in the audio portion. A degree of matching between the strings is evaluated based on a threshold to determine when a caption error occurs. Various operations may be performed when the caption error occurs, including logging caption error data and sending notifications of the caption error. (A minimal text-matching sketch follows the entry.)
    Type: Grant
    Filed: August 29, 2014
    Date of Patent: April 11, 2017
    Assignee: Nuance Communications, Inc.
    Inventor: Hung John Pham
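    Illustrative sketch (not from the patent; the matching metric and threshold are assumptions): comparing the caption text extracted from the video image with the text recognized from the audio, and flagging a caption error when they match too poorly.
      from difflib import SequenceMatcher

      def caption_error(image_text, speech_text, threshold=0.8):
          """Return (error_flag, match_ratio) for a caption check."""
          ratio = SequenceMatcher(None, image_text.lower(), speech_text.lower()).ratio()
          return ratio < threshold, ratio

      error, ratio = caption_error("Meet me at noon", "Meet me at the mall")
      if error:
          print(f"caption error logged (match={ratio:.2f})")   # e.g. log data and send a notification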
  • Patent number: 9614690
    Abstract: A smart home interaction system is presented. It is built on a multi-modal, multithreaded conversational dialog engine. The system provides a natural language user interface for the control of household devices, appliances or household functionality. The smart home automation agent can receive input from users through sensing devices such as a smart phone, a tablet computer or a laptop computer. Users interact with the system from within the household or from remote locations. The smart home system can receive input from sensors or any other machines with which it is interfaced. The system employs interaction guide rules for processing reaction to both user and sensor input and driving the conversational interactions that result from such input. The system adaptively learns based on both user and sensor input and can learn the preferences and practices of its users.
    Type: Grant
    Filed: November 23, 2015
    Date of Patent: April 4, 2017
    Assignee: NANT HOLDINGS IP, LLC
    Inventors: Farzad Ehsani, Silke Maren Witt-Ehsani, Walter Rolandi
  • Patent number: 9609117
    Abstract: The present technology concerns improvements to smart phones and related sensor-equipped systems. Some embodiments involve spoken clues, e.g., by which a user can assist a smart phone in identifying what portion of imagery captured by a smart phone camera should be processed, or identifying what type of image processing should be conducted. Some arrangements include the degradation of captured content information in accordance with privacy rules, which may be location-dependent, or based on the unusualness of the captured content, or responsive to later consultation of the stored content information by the user. A great variety of other features and arrangements are also detailed.
    Type: Grant
    Filed: September 22, 2015
    Date of Patent: March 28, 2017
    Assignee: Digimarc Corporation
    Inventors: Bruce L. Davis, Tony F. Rodriguez, Geoffrey B. Rhoads, William Y. Conwell, John Stach
  • Patent number: 9576585
    Abstract: A decoder device decodes a bitstream to produce an audio output signal, the bitstream having audio data and, optionally, loudness metadata containing a reference loudness value. A gain control device has a reference loudness decoder configured to create a loudness value, where the loudness value is the reference loudness value in case that value is present in the bitstream. The gain control device also has a gain calculator configured to calculate a gain value based on the loudness value and on a volume control value provided by an external user interface that allows a user to control it, and a loudness processor configured to control the loudness of the audio output signal based on the gain value. (A simple gain-calculation sketch follows the entry.)
    Type: Grant
    Filed: July 28, 2015
    Date of Patent: February 21, 2017
    Assignee: Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
    Inventor: Robert Bleidt
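    Illustrative sketch (not from the patent; the default reference loudness, the playback target, and the exact gain formula are assumptions): deriving a gain from the bitstream's reference loudness (or a fallback value when it is absent) and the user's volume control value.
      DEFAULT_REFERENCE_LOUDNESS = -23.0   # assumed fallback, in LUFS

      def decode_gain(bitstream_loudness, volume_control_db, target_loudness=-23.0):
          """Gain in dB from the (possibly missing) reference loudness plus the volume control."""
          loudness = bitstream_loudness if bitstream_loudness is not None else DEFAULT_REFERENCE_LOUDNESS
          normalization = target_loudness - loudness   # align the content to the playback target
          return normalization + volume_control_db

      def apply_gain(samples, gain_db):
          linear = 10 ** (gain_db / 20.0)
          return [s * linear for s in samples]

      gain = decode_gain(bitstream_loudness=-31.0, volume_control_db=-6.0)
      print(gain, apply_gain([0.1, -0.2, 0.3], gain))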
  • Patent number: 9564119
    Abstract: A voice converting apparatus and a voice converting method are provided. The method of converting a voice using the voice converting apparatus includes receiving a voice from a counterpart, analyzing the voice and determining whether the voice is abnormal, converting the voice into a normal voice by adjusting a harmonic signal of the voice in response to determining that the voice is abnormal, and transmitting the normal voice.
    Type: Grant
    Filed: October 11, 2013
    Date of Patent: February 7, 2017
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jong-youb Ryu, Yoon-jae Lee, Seoung-hun Kim, Young-tae Kim
  • Patent number: 9548047
    Abstract: An electronic device includes a microphone that receives an audio signal that includes a spoken trigger phrase, and a processor that is electrically coupled to the microphone. The processor measures characteristics of the audio signal, and determines, based on the measured characteristics, whether the spoken trigger phrase is acceptable for trigger phrase model training. If the spoken trigger phrase is determined not to be acceptable for trigger phrase model training, the processor rejects the trigger phrase for trigger phrase model training. (A minimal acceptance-check sketch follows the entry.)
    Type: Grant
    Filed: October 10, 2013
    Date of Patent: January 17, 2017
    Assignee: Google Technology Holdings LLC
    Inventors: Joel A Clark, Tenkasi V Ramabadran, Mark A. Jasiuk
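    Illustrative sketch (not from the patent; the measured characteristics and all thresholds are assumptions): rejecting a spoken trigger phrase that looks too short, clipped, or too quiet to be useful for trigger phrase model training.
      def measure_characteristics(samples, sample_rate):
          peak = max(abs(s) for s in samples)
          energy = sum(s * s for s in samples) / len(samples)
          return {"duration_s": len(samples) / sample_rate, "peak": peak, "energy": energy}

      def acceptable_for_training(samples, sample_rate=16000):
          """Accept the phrase only if its measured characteristics look usable."""
          c = measure_characteristics(samples, sample_rate)
          if c["duration_s"] < 0.5:    # too short
              return False
          if c["peak"] >= 0.99:        # likely clipped
              return False
          if c["energy"] < 1e-4:       # likely too quiet / mostly silence
              return False
          return True

      print(acceptable_for_training([0.005] * 16000))   # 1 s of very quiet audio -> False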