Preliminary Matching Patents (Class 704/252)
  • Patent number: 10877955
    Abstract: Identifying data quality along a data flow. A method includes identifying quality metadata for two or more datasets. The quality metadata defines one or more of quality of a data source, accuracy of a dataset, completeness of a dataset, freshness of a dataset, or relevance of a dataset. At least some of the metadata is based on results of operations along a data flow. Based on the metadata, the method includes creating one or more quality indexes for the datasets. The one or more quality indexes include a characterization of quality of two or more datasets.
    Type: Grant
    Filed: April 29, 2014
    Date of Patent: December 29, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Jeffrey Michael Derstadt
  • Patent number: 10803866
    Abstract: The present disclosure provides an interface intelligent interaction control method, apparatus and system, and a storage medium, wherein the method comprises: receiving user-input speech information, and obtaining a speech recognition result; determining scenario elements associated with the speech recognition result; generating an entry corresponding to each scenario element and sending the speech recognition result and the entries to a cloud server; receiving an entry, returned by the cloud server and selected from the entries it received, that best matches the speech recognition result; performing an interface operation corresponding to the best-matched entry. The solution of the present disclosure can be applied to improve the flexibility and accuracy of speech control.
    Type: Grant
    Filed: August 28, 2018
    Date of Patent: October 13, 2020
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Gaofei Cheng, Xiangtao Jiang, Ben Xu, Linxin Ou, Qin Xiong
  • Patent number: 10770068
    Abstract: A text generating device includes an acquirer that acquires the utterance of the user relative to content provided in real time, a retriever that retrieves from a microblog server data relating to the content, a sentence generator that generates a sentence relating to the content and the acquired utterance of the user, based on the data retrieved by the retriever, and a voice outputter and/or a display for using the generated sentence to reply to the user.
    Type: Grant
    Filed: October 16, 2017
    Date of Patent: September 8, 2020
    Assignee: CASIO COMPUTER CO., LTD.
    Inventor: Takahiro Tanaka
  • Patent number: 10740326
    Abstract: A network resource access system for providing access by a user to network resources over a communications network, the system comprising: a resource registry including stored resource records associated with each of the network resources and a stored user profile containing a list of network resources such that the network resources have a ranking relative to each other based at least in part on user behavior with respect to usage of each of the network resources, the user profile associated with the user such that the list of network resources contains the network resources previously accessed by the user; and a resource service for receiving an access query from a network terminal identifying the user and associated with submission of application data for processing by a selected network resource from the list, the resource service further configured for accessing the user profile to identify a suggested network resource from the list in view of the relative ranking and for sending identification of the su
    Type: Grant
    Filed: December 6, 2018
    Date of Patent: August 11, 2020
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Mark Burns, Michael St. Laurent, Dharmesh Krishnammagaru
  • Patent number: 10739976
    Abstract: Methods, systems, devices, and media for creating a plan through multimodal search inputs are provided. A first search request comprises a first input received via a first input mode and a second input received via a different second input mode. The second input identifies a geographic area. First search results are displayed based on the first search request and corresponding to the geographic area. Each of the first search results is associated with a geographic location. A selection of one of the first search results is received and added to a plan. A second search request is received after the selection, and second search results are displayed in response to the second search request. The second search results are based on the second search request and correspond to the geographic location of the selected one of the first search results.
    Type: Grant
    Filed: January 16, 2018
    Date of Patent: August 11, 2020
    Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: Michael J. Johnston, Patrick Ehlen, Hyuckchul Jung, Jay H. Lieske, Jr., Ethan Selfridge, Brant J. Vasilieff, Jay Gordon Wilpon
  • Patent number: 10692492
    Abstract: Techniques are disclosed for client-side analysis of audio samples to identify one or more characteristics associated with captured audio. The client-side analysis may then allow a user device, e.g., a smart phone, laptop computer, in-car infotainment system, and so on, to provide the one or more identified characteristics as configuration data to a voice recognition service at or shortly after connection with the same. In turn, the voice recognition service may load one or more recognition components, e.g., language models and/or application modules/engines, based on the received configuration data. Thus, latency may be reduced based on the voice recognition engine having “hints” that allow components to be loaded without necessarily having to process audio samples first. The reduction of latency may reduce processing time relative to other approaches to voice recognitions systems that exclusively perform server-side context recognition/classification.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: June 23, 2020
    Assignee: Intel IP Corporation
    Inventors: Piotr Rozen, Tobias Bocklet, Jakub Nowicki, Munir Georges
  • Patent number: 10685650
    Abstract: A mobile terminal including a touch screen; a microphone configured to receive voice information from a user; and a controller configured to analyze the voice information using a voice recognition algorithm, extract a term predicted to be unfamiliar to the user from the analyzed voice information based on a pre-stored knowledge database, search for information on the extracted term based on a context of the analyzed voice information, and display the searched information on the touch screen.
    Type: Grant
    Filed: December 29, 2017
    Date of Patent: June 16, 2020
    Assignee: LG ELECTRONICS INC.
    Inventor: Hyunjoo Jeon
  • Patent number: 10665223
    Abstract: Systems and methods for detecting, classifying, and correcting acoustic (waveform) events are provided. In one example embodiment, a computer-implemented method includes obtaining, by a computing system, audio data from a source. The method includes accessing, by the computing system, data indicative of a machine-learned acoustic detection model. The method includes inputting, by the computing system, the audio data from the source into the machine-learned acoustic detection model. The method includes obtaining, by the computing system, an output from the machine-learned acoustic detection model. The output is indicative of an acoustic event associated with the source. The method includes providing, by the computing system, data indicative of a notification to a user device. The notification indicates the acoustic event and response(s) for selection by a user.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: May 26, 2020
    Assignee: UDIFI, INC.
    Inventor: Jack Edward Neil
  • Patent number: 10657202
    Abstract: A method, computer program product, and computing system for receiving a presentation file including one or more audio portions and one or more textual portions. An audio transcript of the one or more audio portions of the presentation file may be generated. A textual transcript of the one or more textual portions of the presentation file may be generated. One or more rich portions of the presentation file may be determined based upon, at least in part, a comparison of the audio transcript and the textual transcript. At least the one or more rich portions of the presentation file may be presented.
    Type: Grant
    Filed: December 11, 2017
    Date of Patent: May 19, 2020
    Assignee: International Business Machines Corporation
    Inventors: Nan Chen, June-Ray Lin, Ju Ling Liu, Jin Zhang, Li Bo Zhang
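    To illustrate the general idea behind the abstract above (comparing an audio transcript against a textual transcript to find "rich" portions where the speaker adds material beyond the written content), here is a minimal, hypothetical sketch. It is not the patented implementation; the function name, the word-overlap heuristic, and the 0.5 ratio are all illustrative assumptions.

    ```python
    def find_rich_portions(portions):
        """Mark portions where the spoken transcript adds content beyond the text.

        Each portion is an (audio_transcript, textual_transcript) pair.
        Returns the indices of portions judged "rich".
        """
        rich = []
        for i, (audio, text) in enumerate(portions):
            audio_words = set(audio.lower().split())
            text_words = set(text.lower().split())
            # Heuristic: a portion is "rich" when the speaker adds many
            # words that are absent from the accompanying text.
            extra = audio_words - text_words
            if len(extra) > len(text_words) * 0.5:
                rich.append(i)
        return rich

    portions = [
        ("welcome to the talk", "welcome"),  # mostly extra spoken content
        ("agenda", "agenda"),                # spoken content matches the text
    ]
    print(find_rich_portions(portions))
    ```

    Only the first portion is flagged, since its spoken words go well beyond the written text.
    
    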
  • Patent number: 10621970
    Abstract: Systems and methods for identifying content corresponding to a language are provided. The language spoken by a first user is automatically determined with voice recognition circuitry based on verbal input received from the first user. A database of content sources is cross-referenced to identify a content source associated with a language field value that corresponds to the determined language spoken by the first user. The language field in the database identifies the language in which the associated content source transmits content to a plurality of users. A representation of the identified content source is generated for display to the first user.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: April 14, 2020
    Assignee: Rovi Guides, Inc.
    Inventor: Shuchita Mehra
  • Patent number: 10614112
    Abstract: An optimized fact checking system analyzes and determines the factual accuracy of information and/or characterizes the information by comparing the information with source information. The optimized fact checking system automatically monitors information, processes the information, fact checks the information in an optimized manner and/or provides a status of the information. In some embodiments, the optimized fact checking system generates, aggregates, and/or summarizes content.
    Type: Grant
    Filed: September 9, 2016
    Date of Patent: April 7, 2020
    Inventor: Lucas J. Myslinski
  • Patent number: 10602215
    Abstract: Systems and methods are presented herein for recording portions of a media asset relevant to recording criteria. A media application receives input indicating the recording criteria and identifying a first keyword. The media application accesses a data structure to identify a first node associated with the first keyword. The data structure includes the first node and a plurality of nodes connected to the first node via a plurality of paths. The media application receiving audio component data for a portion of the media asset extracts a term from the audio component data, and identifies a second node in the data structure that is associated with the extracted term. The media application calculates a path score for the portion of the media asset based on a path size in the data structure between the first node and the second node. When the score is high enough, the portion of the media asset is recorded.
    Type: Grant
    Filed: December 17, 2018
    Date of Patent: March 24, 2020
    Assignee: Rovi Guides, Inc.
    Inventor: Sean Matthews
  • Patent number: 10578450
    Abstract: A computer-implemented method includes receiving at a computer server system, from a computing device that is remote from the server system, a string of text that comprises a search query. The method also includes identifying one or more search results that are responsive to the search query, parsing a document that is a target of one of the one or more results, identifying geographical address information from the parsing, generating a specific geographical indicator corresponding to the one search result, and transmitting for use by the computing device, data for automatically generating a navigational application having a destination at the specific geographical indicator.
    Type: Grant
    Filed: August 31, 2016
    Date of Patent: March 3, 2020
    Assignee: Google LLC
    Inventors: Michael J. LeBeau, Ole CaveLie, Keith Ito, John Nicholas Jitkoff
  • Patent number: 10580436
    Abstract: The present disclosure provides a method and a device for processing a speech based on artificial intelligence. The method includes: grading a current frame included in a speech packet to be decoded by using an acoustic model to obtain a grading result; identifying whether the current frame is a quasi-silent frame according to the grading result; and skipping the current frame and not decoding the current frame if the current frame is the quasi-silent frame. In the present disclosure, before the current frame included in the speech packet to be decoded is decoded, it is identified whether to decode the current frame according to the grading result obtained with the acoustic model. When there is no need to decode the current frame, the current frame is skipped. Thus, redundant decoding may be avoided, the speed of decoding is improved, and recognition of the speech packet to be decoded is expedited.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: March 3, 2020
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Zhijian Wang, Sheng Qian
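    The quasi-silent-frame skipping described above can be sketched as follows. This is an illustrative toy, not the patented method: the scoring function, threshold, and energy-based "acoustic model" stand-in are all assumptions for demonstration.

    ```python
    def decode_packet(frames, silence_score_fn, silence_threshold=0.9):
        """Decode only frames the acoustic model does not grade as quasi-silent.

        silence_score_fn plays the role of the acoustic model's grading step,
        returning a score in [0, 1] for how silence-like a frame is.
        """
        decoded = []
        for frame in frames:
            if silence_score_fn(frame) >= silence_threshold:
                continue  # quasi-silent frame: skip, saving decoder work
            decoded.append(frame)  # placeholder for the real decoding step
        return decoded

    # Toy "acoustic model": low peak amplitude -> silence score 1.0
    score = lambda frame: 1.0 if max(abs(x) for x in frame) < 0.01 else 0.0
    packet = [[0.0, 0.001], [0.5, -0.3], [0.002, 0.0], [0.2, 0.1]]
    print(len(decode_packet(packet, score)))  # 2 of 4 frames reach the decoder
    ```

    Two near-silent frames are skipped before decoding, which is the redundancy the abstract aims to avoid.
    
    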
  • Patent number: 10558421
    Abstract: A computer-implemented method includes identifying a first set of utterances from a plurality of utterances. The plurality of utterances is associated with a conversation and transmitted via a plurality of audio signals. The computer-implemented method further includes mining the first set of utterances for a first context. The computer-implemented method further includes determining that the first context associated with the first set of utterances is not relevant to a second context associated with the conversation. The computer-implemented method further includes dynamically muting, for at least a first period of time, a first audio signal in the plurality of audio signals corresponding to the first set of utterances. A corresponding computer system and computer program product are also disclosed.
    Type: Grant
    Filed: May 22, 2017
    Date of Patent: February 11, 2020
    Assignee: International Business Machines Corporation
    Inventors: Tamer E. Abuelsaad, Gregory J. Boss, John E. Moore, Jr., Randy A. Rendahl
  • Patent number: 10552118
    Abstract: A computer-implemented method includes identifying a first set of utterances from a plurality of utterances. The plurality of utterances is associated with a conversation and transmitted via a plurality of audio signals. The computer-implemented method further includes mining the first set of utterances for a first context. The computer-implemented method further includes determining that the first context associated with the first set of utterances is not relevant to a second context associated with the conversation. The computer-implemented method further includes dynamically muting, for at least a first period of time, a first audio signal in the plurality of audio signals corresponding to the first set of utterances. A corresponding computer system and computer program product are also disclosed.
    Type: Grant
    Filed: July 1, 2019
    Date of Patent: February 4, 2020
    Assignee: International Business Machines Corporation
    Inventors: Tamer E. Abuelsaad, Gregory J. Boss, John E. Moore, Jr., Randy A. Rendahl
  • Patent number: 10553215
    Abstract: Systems and processes for operating an automated assistant are disclosed. In one example process, an electronic device provides an audio output via a speaker of the electronic device. While providing the audio output, the electronic device receives, via a microphone of the electronic device, a natural language speech input. The electronic device derives a representation of user intent based on the natural language speech input and the audio output, identifies a task based on the derived user intent; and performs the identified task.
    Type: Grant
    Filed: July 2, 2018
    Date of Patent: February 4, 2020
    Assignee: Apple Inc.
    Inventors: Harry J. Saddler, Aimee T. Piercy, Garrett L. Weinberg, Susan L. Booker
  • Patent number: 10468136
    Abstract: Disclosed are embodiments of a method and a system to predict the health condition of a human subject. The method comprises receiving historical human-subject related data including records corresponding to multiple data views. The method estimates one or more latent variables based on: a first value indicative of count of records in a cluster, a second value indicative of count of records, and a third value indicative of a parameter utilizable to predict a fourth value. The fourth value corresponds to selection probability of a D-vine pair copula family, of a D-vine mixture model, utilizable to model a cluster. The method generates the D-vine mixture model based on the estimated one or more latent variables. The method further comprises receiving multi-view data of a second human subject and predicting health condition of the second human subject based on the multi-view data using a classifier trained based on the estimated latent variables.
    Type: Grant
    Filed: August 29, 2016
    Date of Patent: November 5, 2019
    Assignee: CONDUENT BUSINESS SERVICES, LLC
    Inventors: Lavanya Sita Tekumalla, Vaibhav Rajan
  • Patent number: 10460074
    Abstract: Disclosed are embodiments of methods and systems for predicting a health condition of a first human subject. The method comprises receiving a measure of one or more physiological parameters associated with the first human subject. The method estimates one or more latent variables based on a first count indicative of a number of the plurality of d-vines, a second count indicative of a number of the one or more records, a first value that is representative of a number of the one or more records clustered into a d-vine from the plurality of d-vines, and a second value that is representative of a parameter utilizable to predict a third value. The method generates the plurality of d-vines based on the estimated one or more latent variables. The method predicts health condition of the first human subject by utilizing a trained classifier based on the estimated one or more latent variables.
    Type: Grant
    Filed: April 5, 2016
    Date of Patent: October 29, 2019
    Assignee: CONDUENT BUSINESS SERVICES, LLC
    Inventors: Lavanya Sita Tekumalla, Vaibhav Rajan
  • Patent number: 10459008
    Abstract: A method is disclosed for triggering upon signal events occurring in frequency domain signals. The method includes repeatedly sampling a time-varying signal and generating a plurality of digital frequency domain spectrums based on the samples of the time-varying signal. A frequency domain bitmap for the time-varying signal is repeatedly updated via application of the digital frequency domain spectrums. The method further includes selecting a portion of the frequency domain bitmap, determining a signal occupancy in the selected portion, and triggering a capture of the time-varying signal based on and in response to the occupancy determination for the selected portion of the bitmap.
    Type: Grant
    Filed: October 14, 2014
    Date of Patent: October 29, 2019
    Assignee: Tektronix, Inc.
    Inventors: Robert E. Tracy, Kathryn A. Engholm, Alfred K. Hillman, Jr.
  • Patent number: 10448898
    Abstract: Disclosed are embodiments of methods and systems for predicting a health condition of a first human subject. The method comprises extracting a historical data including physiological parameters of second human subjects. Thereafter, a first distribution of a first physiological parameter is determined based on a marginal cumulative distribution of a rank transformed historical data. Further, a second distribution of a second physiological parameter is determined based on the first distribution and a first conditional cumulative distribution of the rank transformed historical data. Further, a latent variable is determined based on the first and the second distributions. Thereafter, one or more parameters of at least one bivariate distribution, corresponding to a D-vine copula, are estimated based on the latent variable. Further, a classifier is trained based on the D-vine copula. The classifier is utilizable to predict the health condition of the first human subject based on his/her physiological parameters.
    Type: Grant
    Filed: July 14, 2015
    Date of Patent: October 22, 2019
    Assignee: CONDUENT BUSINESS SERVICES, LLC
    Inventors: Lavanya Sita Tekumalla, Vaibhav Rajan
  • Patent number: 10360903
    Abstract: According to one embodiment, an apparatus includes a storage unit, a first acquisition unit, a second acquisition unit, an analyzer, and a recognition unit. The storage unit stores first situation information about a situation assumed in advance, a first representation representing a meaning of a sentence assumed, intention information representing an intention to be estimated, and a first value representing a degree of application of the first representation to the first situation information and the intention information. The first acquisition unit acquires a natural sentence. The second acquisition unit acquires second situation information about a situation when acquiring the natural sentence. The analyzer analyzes the natural sentence and generates a second representation representing a meaning of the natural sentence. The recognition unit obtains an estimated value based on the first value associated with the first situation information and the first representation.
    Type: Grant
    Filed: February 15, 2017
    Date of Patent: July 23, 2019
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Hiromi Wakaki, Kenji Iwata, Masayuki Okamoto
  • Patent number: 10354308
    Abstract: Offer listings can be classified as accessory offers or product offers using a classification operation performed on a corpus of offers. Data from the classification operation can be used to classify received queries as either product or accessory, and to classify results as products or accessories for purposes of presenting a relevant list of results to a user.
    Type: Grant
    Filed: July 2, 2015
    Date of Patent: July 16, 2019
    Assignee: GOOGLE LLC
    Inventors: Srinath Sridhar, Ashutosh Garg, Kedar Dhamdhere, Varun Kacholia
  • Patent number: 10339918
    Abstract: An embodiment of a speech endpoint detector apparatus may include a speech detector to detect a presence of speech in an electronic speech signal, a pause duration measurer communicatively coupled to the speech detector to measure a duration of a pause following a period of detected speech, an end of utterance detector communicatively coupled to the pause duration measurer to detect if the pause measured following the period of detected speech is greater than a pause threshold corresponding to an end of an utterance, and a pause threshold adjuster to adaptively adjust the pause threshold corresponding to an end of an utterance based on stored pause information. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: September 27, 2016
    Date of Patent: July 2, 2019
    Assignee: Intel IP Corporation
    Inventors: Joachim Hofer, Munir Georges
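    The adaptive pause threshold in the endpoint detector above can be sketched roughly as follows. This is a minimal illustration of the general idea, not the patented implementation; the class name, the running-average adaptation, and the 1.5 multiplier are hypothetical choices.

    ```python
    class EndpointDetector:
        """Declares end of utterance when a pause exceeds an adaptive threshold."""

        def __init__(self, initial_threshold=0.8):
            self.threshold = initial_threshold  # seconds
            self.pause_history = []

        def observe_pause(self, duration):
            """Record a mid-utterance pause and adapt the threshold to sit
            somewhat above this speaker's typical pause length."""
            self.pause_history.append(duration)
            avg = sum(self.pause_history) / len(self.pause_history)
            self.threshold = 1.5 * avg

        def is_end_of_utterance(self, pause_duration):
            return pause_duration > self.threshold

    det = EndpointDetector()
    for pause in (0.2, 0.4):  # observed mid-utterance pauses
        det.observe_pause(pause)
    print(det.is_end_of_utterance(0.5))  # True: pause exceeds adapted threshold
    ```

    A fast talker with short pauses thus gets a short threshold (lower latency), while a deliberate speaker is not cut off mid-utterance.
    
    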
  • Patent number: 10337878
    Abstract: An in-vehicle terminal sends a spoken voice as a voice signal to a relay server, and the relay server includes a voice recognition unit which converts the received voice signal into a string, and a control unit which searches for information stored in a main database or a temporary storage database by using the string and sends a search result to the in-vehicle terminal, and, upon searching for information stored in the main database, stores the search result in the temporary storage database. Upon receiving a voice signal, when the search result is stored in the temporary storage database, the control unit searches for information stored in the temporary storage database by using the string converted from the received voice signal, and, when the search result is not stored in the temporary storage database, the control unit searches for information stored in the main database by using the string.
    Type: Grant
    Filed: October 2, 2015
    Date of Patent: July 2, 2019
    Assignee: CLARION CO., LTD.
    Inventors: Susumu Kojima, Takashi Yamaguchi, Hideki Takano, Yasushi Nagai
  • Patent number: 10268989
    Abstract: Example medical device data platforms are disclosed herein. In an example, the platform may include at least one integration device to access information originating from a plurality of implantable medical devices manufactured by a plurality of manufacturers and implanted in a plurality of patients. The system may also include an information processor to process the accessed information to generate at least one of patient-oriented information and provider-oriented information. The system may also include at least one communication device providing at least one of a patient portal and a provider portal to provide the patient-oriented information and the provider-oriented information, respectively.
    Type: Grant
    Filed: April 20, 2016
    Date of Patent: April 23, 2019
    Assignee: Murj, Inc.
    Inventors: Richard Todd Butka, Christopher Steven Irving, Patrick Beaulieu
  • Patent number: 10224026
    Abstract: An electronic device comprising circuitry configured to record sensor data that is obtained from data sources and to retrieve information from the recorded sensor data using concepts that are defined by a user.
    Type: Grant
    Filed: March 2, 2017
    Date of Patent: March 5, 2019
    Assignee: SONY CORPORATION
    Inventors: Aurel Bordewieck, Fabien Cardinaux, Wilhelm Hagg, Thomas Kemp, Stefan Uhlich, Fritz Hohl
  • Patent number: 10224030
    Abstract: In speech processing systems personalization is added in the Natural Language Understanding (NLU) processor by incorporating external knowledge sources of user information to improve entity recognition performance of the speech processing system. Personalization in the NLU is effected by incorporating one or more dictionaries of entries, or gazetteers, with information personal to a respective user, that provide the user's information to permit disambiguation of semantic interpretation for input utterances to improve quality of speech processing results.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: March 5, 2019
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Imre Attila Kiss, Arthur Richard Toth, Lambert Mathias
  • Patent number: 10152899
    Abstract: A training tool, method and a system for measuring crew member communication skills are disclosed, wherein an audio data processing terminal is interfaced with a crew training apparatus, typically a crew-operated vehicle simulator. Audio data corresponding to a conversation between at least two crew members is recorded during a training session and stored. Respective audio data of each crew member is extracted from the stored audio data, and a series of measures for at least one prosodic parameter in each respective audio data extracted is computed. A correlation coefficient of the series of measures is then computed, wherein the correlation coefficient is indicative of a level of prosodic accommodation between the at least two crew members. Specific communication skills in addition to prosodic accommodation performance can then be determined or inferred.
    Type: Grant
    Filed: July 31, 2014
    Date of Patent: December 11, 2018
    Assignee: Crewfactors Limited
    Inventors: Brian Vaughan, Celine De Looze
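    The correlation step described above (prosodic accommodation as the correlation between two speakers' series of prosodic measures) can be sketched with a plain Pearson coefficient. This is an illustrative example only, not the patented method; the per-turn mean-pitch series and speaker names are made up.

    ```python
    def pearson(xs, ys):
        """Pearson correlation coefficient between two equal-length series."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    # Hypothetical per-turn mean pitch (Hz) for two crew members; a high
    # correlation suggests the speakers are accommodating to each other.
    speaker_a = [180, 175, 190, 185, 170]
    speaker_b = [160, 158, 172, 168, 150]
    r = pearson(speaker_a, speaker_b)
    print(round(r, 2))
    ```

    A coefficient near 1 indicates the two pitch contours rise and fall together, i.e. strong prosodic accommodation; values near 0 indicate none.
    
    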
  • Patent number: 10134386
    Abstract: Systems and methods for identifying content corresponding to a language are provided. The language spoken by a first user is automatically determined with voice recognition circuitry based on verbal input received from the first user. A database of content sources is cross-referenced to identify a content source associated with a language field value that corresponds to the determined language spoken by the first user. The language field in the database identifies the language in which the associated content source transmits content to a plurality of users. A representation of the identified content source is generated for display to the first user.
    Type: Grant
    Filed: July 21, 2015
    Date of Patent: November 20, 2018
    Assignee: Rovi Guides, Inc.
    Inventor: Shuchita Mehra
  • Patent number: 10108608
    Abstract: A dialog state tracking system. One aspect of the system is the use of multiple utterance decoders and/or multiple spoken language understanding (SLU) engines generating competing results that improve the likelihood that the correct dialog state is available to the system and provide additional features for scoring dialog state hypotheses. An additional aspect is training a SLU engine and a dialog state scorer/ranker DSR engine using different subsets from a single annotated training data set. A further aspect is training multiple SLU/DSR engine pairs from inverted subsets of the annotated training data set. Another aspect is web-style dialog state ranking based on dialog state features using discriminative models with automatically generated feature conjunctions. Yet another aspect is using multiple parameter sets with each ranking engine and averaging the rankings. Each aspect independently improves dialog state tracking accuracy and may be combined in various combinations for greater improvement.
    Type: Grant
    Filed: June 12, 2014
    Date of Patent: October 23, 2018
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Jason D. Williams, Geoffrey G. Zweig
  • Patent number: 10043516
    Abstract: Systems and processes for operating an automated assistant are disclosed. In one example process, an electronic device provides an audio output via a speaker of the electronic device. While providing the audio output, the electronic device receives, via a microphone of the electronic device, a natural language speech input. The electronic device derives a representation of user intent based on the natural language speech input and the audio output, identifies a task based on the derived user intent; and performs the identified task.
    Type: Grant
    Filed: December 20, 2016
    Date of Patent: August 7, 2018
    Assignee: Apple Inc.
    Inventors: Harry J. Saddler, Aimee T. Piercy, Garrett L. Weinberg, Susan L. Booker
  • Patent number: 9911409
    Abstract: A speech recognition apparatus includes a processor configured to recognize a user's speech using any one or combination of two or more of an acoustic model, a pronunciation dictionary including primitive words, and a language model including primitive words; and correct word spacing in a result of speech recognition based on a word-spacing model.
    Type: Grant
    Filed: July 21, 2016
    Date of Patent: March 6, 2018
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Seokjin Hong
  • Patent number: 9805715
    Abstract: A method of recognizing speech commands includes generating a background acoustic model for a sound using a first sound sample, the background acoustic model characterized by a first precision metric. A foreground acoustic model is generated for the sound using a second sound sample, the foreground acoustic model characterized by a second precision metric. A third sound sample is received and decoded by assigning a weight to the third sound sample corresponding to a probability that the sound sample originated in a foreground using the foreground acoustic model and the background acoustic model. The method further includes determining if the weight meets predefined criteria for assigning the third sound sample to the foreground and, when the weight meets the predefined criteria, interpreting the third sound sample as a portion of a speech command. Otherwise, recognition of the third sound sample as a portion of a speech command is forgone.
    Type: Grant
    Filed: December 13, 2013
    Date of Patent: October 31, 2017
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Shuai Yue, Li Lu, Xiang Zhang, Dadong Xie, Haibo Liu, Bo Chen, Jian Liu
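The foreground/background decision above can be sketched as a two-way posterior over the two models' log-likelihoods, with a threshold deciding whether the sample is interpreted as part of a speech command. The function names, the softmax-style combination, and the 0.5 threshold are illustrative assumptions, not the patent's actual precision metrics:

```python
import math

def foreground_weight(log_lik_fg, log_lik_bg):
    """Weight (probability) that a sound sample originated in the
    foreground, computed from its log-likelihood under the foreground
    and background acoustic models."""
    m = max(log_lik_fg, log_lik_bg)  # subtract max for numeric stability
    fg = math.exp(log_lik_fg - m)
    bg = math.exp(log_lik_bg - m)
    return fg / (fg + bg)

def decode(log_lik_fg, log_lik_bg, threshold=0.5):
    """Interpret the sample as a speech-command portion only when the
    foreground weight meets the predefined criterion; otherwise forgo
    recognition, as in the abstract."""
    if foreground_weight(log_lik_fg, log_lik_bg) >= threshold:
        return "command"
    return None
```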
  • Patent number: 9784765
    Abstract: A system and method are provided for graphically actuating a trigger in a test and measurement device. The method includes displaying a visual representation of signal properties for one or more time-varying signals. A graphical user input is received, in which a portion of the visual representation is designated. The method further includes configuring a trigger of the test and measurement device in response to the graphical user input, by setting a value for a trigger parameter of the trigger. The set value for the trigger parameter varies with and is dependent upon the particular portion of the visual representation that is designated by the graphical user input. The trigger is then employed in connection with subsequent monitoring of signals within the test and measurement device.
    Type: Grant
    Filed: November 3, 2009
    Date of Patent: October 10, 2017
    Assignee: Tektronix, Inc.
    Inventors: Kathryn A. Engholm, Cecilia A. Case
  • Patent number: 9703394
    Abstract: In some examples, a method includes outputting a graphical keyboard (120) for display and responsive to receiving an indication of a first input (124), determining a new character string that is not included in a language model. The method may include adding the new character string to the language model and associating a likelihood value with the new character string. The method may include, responsive to receiving an indication of a second input, predicting the new character string, and responsive to receiving an indication of a third input that rejects the new character string, decreasing the likelihood value associated with the new character string.
    Type: Grant
    Filed: October 1, 2015
    Date of Patent: July 11, 2017
    Assignee: Google Inc.
    Inventors: Yu Ouyang, Shumin Zhai
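The add/predict/demote cycle described above can be sketched with a toy unigram model: a new character string is added with an initial likelihood, predicted for later input, and penalized when the user rejects the prediction. The class name and the initial-likelihood and penalty values are arbitrary assumptions for illustration:

```python
class AdaptiveLanguageModel:
    """Toy language model tracking per-string likelihood values."""

    def __init__(self, initial=0.5, penalty=0.5):
        self.likelihood = {}  # character string -> likelihood value
        self.initial = initial
        self.penalty = penalty

    def observe(self, s):
        # First input: a character string not in the model is added
        # and associated with an initial likelihood value.
        if s not in self.likelihood:
            self.likelihood[s] = self.initial

    def predict(self, prefix):
        # Second input: predict the most likely known string for a prefix.
        candidates = {w: p for w, p in self.likelihood.items()
                      if w.startswith(prefix)}
        return max(candidates, key=candidates.get) if candidates else None

    def reject(self, s):
        # Third input: rejection decreases the associated likelihood value.
        if s in self.likelihood:
            self.likelihood[s] *= self.penalty
```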
  • Patent number: 9704482
    Abstract: A method for spoken term detection, comprising generating a time-marked word list, wherein the time-marked word list is an output of an automatic speech recognition system, generating an index from the time-marked word list, wherein generating the index comprises creating a word loop weighted finite state transducer for each utterance, i, receiving a plurality of keyword queries, and searching the index for a plurality of keyword hits.
    Type: Grant
    Filed: March 11, 2015
    Date of Patent: July 11, 2017
    Assignee: International Business Machines Corporation
    Inventors: Brian E. D. Kingsbury, Lidia Mangu, Michael A. Picheny, George A. Saon
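The index-and-search pipeline above can be sketched with a plain inverted index over a time-marked word list. The tuple layout `(utterance_id, word, start, end)` is an assumption, and the dictionary index stands in for the patent's per-utterance word-loop weighted finite state transducers:

```python
from collections import defaultdict

def build_index(time_marked_words):
    """Index a time-marked word list (the ASR system's output):
    each entry is (utterance_id, word, start_time, end_time)."""
    index = defaultdict(list)
    for utt, word, start, end in time_marked_words:
        index[word.lower()].append((utt, start, end))
    return index

def search(index, keyword_queries):
    """Return the keyword hits (utterance and times) for each query."""
    return {q: index.get(q.lower(), []) for q in keyword_queries}
```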
  • Patent number: 9697830
    Abstract: A method for spoken term detection, comprising generating a time-marked word list, wherein the time-marked word list is an output of an automatic speech recognition system, generating an index from the time-marked word list, wherein generating the index comprises creating a word loop weighted finite state transducer for each utterance, i, receiving a plurality of keyword queries, and searching the index for a plurality of keyword hits.
    Type: Grant
    Filed: June 25, 2015
    Date of Patent: July 4, 2017
    Assignee: International Business Machines Corporation
    Inventors: Brian E. D. Kingsbury, Lidia Mangu, Michael A. Picheny, George A. Saon
  • Patent number: 9672201
    Abstract: Systems, methods and apparatus for learning parsing rules and argument identification from crowdsourcing of proposed command inputs. Crowdsourcing techniques are used to generate rules for parsing input sentences. A parse is used to determine whether the input sentence invokes a specific action, and if so, what arguments are to be passed to the invocation of the action.
    Type: Grant
    Filed: April 27, 2016
    Date of Patent: June 6, 2017
    Assignee: Google Inc.
    Inventors: Jakob D. Uszkoreit, Percy Liang
  • Patent number: 9620109
    Abstract: A server and a guide sentence generating method are provided. The method includes receiving user speech, analyzing the user speech, determining a category of the user speech from among a plurality of categories, storing the user speech in the determined category, determining a usage frequency and a popularity of each of the plurality of categories, selecting a category from among the plurality of categories based on the usage frequency and the popularity, and generating a guide sentence corresponding to the selected category.
    Type: Grant
    Filed: February 18, 2015
    Date of Patent: April 11, 2017
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: In-jee Song, Ji-hye Chung
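The category-selection step above can be sketched as scoring each category by its usage frequency and popularity. Equal weighting of the two signals is an assumption; the abstract does not specify how they are combined:

```python
def select_category(category_stats):
    """Select the category with the best combined usage frequency and
    popularity; a guide sentence would then be generated for it."""
    return max(category_stats,
               key=lambda c: (category_stats[c]["frequency"]
                              + category_stats[c]["popularity"]))
```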
  • Patent number: 9542936
    Abstract: A method including: receiving, on a computer system, a text search query, the query including one or more query words; generating, on the computer system, for each query word in the query, one or more anchor segments within a plurality of speech recognition processed audio files, the one or more anchor segments identifying possible locations containing the query word; post-processing, on the computer system, the one or more anchor segments, the post-processing including: expanding the one or more anchor segments; sorting the one or more anchor segments; and merging overlapping ones of the one or more anchor segments; and searching, on the computer system, the post-processed one or more anchor segments for instances of at least one of the one or more query words using a constrained grammar.
    Type: Grant
    Filed: May 2, 2013
    Date of Patent: January 10, 2017
    Assignee: Genesys Telecommunications Laboratories, Inc.
    Inventors: Amir Lev-Tov, Avi Faizakof, Yochai Konig
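The post-processing steps named above (expanding, sorting, and merging overlapping anchor segments) can be sketched as a standard interval-merge pass. Representing a segment as a `(start, end)` pair in seconds and the padding amount are illustrative assumptions:

```python
def post_process(anchor_segments, pad=0.5):
    """Expand each anchor segment by a padding window, sort by start
    time, and merge segments that overlap, returning merged segments
    ready to be searched with a constrained grammar."""
    expanded = [(max(0.0, s - pad), e + pad) for s, e in anchor_segments]
    expanded.sort()
    merged = []
    for s, e in expanded:
        if merged and s <= merged[-1][1]:
            # Overlaps the previous segment: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], e))
        else:
            merged.append((s, e))
    return merged
```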
  • Patent number: 9508339
    Abstract: A method for updating language understanding classifier models includes receiving via one or more microphones of a computing device, a digital voice input from a user of the computing device. Natural language processing using the digital voice input is used to determine a user voice request. Upon determining the user voice request does not match at least one of a plurality of pre-defined voice commands in a schema definition of a digital personal assistant, a GUI of an end-user labeling tool is used to receive a user selection of at least one of the following: at least one intent of a plurality of available intents and/or at least one slot for the at least one intent. A labeled data set is generated by pairing the user voice request and the user selection, and is used to update a language understanding classifier.
    Type: Grant
    Filed: January 30, 2015
    Date of Patent: November 29, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Vishwac Sena Kannan, Aleksandar Uzelac, Daniel J. Hwang
  • Patent number: 9436382
    Abstract: Natural language image editing techniques are described. In one or more implementations, a natural language input is converted from audio data using a speech-to-text engine. A gesture is recognized from one or more touch inputs detected using one or more touch sensors. Performance is then initiated of an operation identified from a combination of the natural language input and the recognized gesture.
    Type: Grant
    Filed: November 21, 2012
    Date of Patent: September 6, 2016
    Assignee: Adobe Systems Incorporated
    Inventors: Gregg D. Wilensky, Walter W. Chang, Lubomira A. Dontcheva, Gierad P. Laput, Aseem O. Agarwala
  • Patent number: 9275139
    Abstract: System and method to search audio data, including: receiving audio data representing speech; receiving a search query related to the audio data; compiling, by use of a processor, the search query into a hierarchy of scored speech recognition sub-searches; searching, by use of a processor, the audio data for speech identified by one or more of the sub-searches to produce hits; and combining, by use of a processor, the hits by use of at least one combination function to provide a composite search score of the audio data. The combination function may include an at-least-M-of-N function that produces a high score when at least M of N function inputs exceed a predetermined threshold value. The composite search score may employ a soft time window, such as a spline function.
    Type: Grant
    Filed: September 27, 2012
    Date of Patent: March 1, 2016
    Assignee: Aurix Limited
    Inventor: Keith Michael Ponting
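The at-least-M-of-N combination function named above is concrete enough to sketch directly. Returning a hard 1.0/0.0 score is a simplifying assumption; the patent's composite score may instead be soft (e.g., smoothed by the spline-based time window it mentions):

```python
def at_least_m_of_n(scores, m, threshold):
    """Produce a high score when at least M of the N input scores
    exceed a predetermined threshold value, else a low score."""
    hits = sum(1 for s in scores if s > threshold)
    return 1.0 if hits >= m else 0.0
```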
  • Patent number: 9251141
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an entity identification model. In one aspect, a method includes obtaining a plurality of complete sentences that each include entity text that references a first entity; for each complete sentence in the plurality of complete sentences: providing a first portion of the complete sentence as input to an entity identification model that determines a predicted entity for the first portion of the complete sentence, the first portion being less than all of the complete sentence; comparing the predicted entity to the first entity; and updating the entity identification model based on the comparison of the predicted entity to the first entity.
    Type: Grant
    Filed: May 12, 2014
    Date of Patent: February 2, 2016
    Assignee: Google Inc.
    Inventors: Maxim Gubin, Sangsoo Sung, Krishna Bharat, Kenneth W. Dauber
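The predict/compare/update training loop above can be sketched with a toy count-based model that learns which entity a sentence prefix refers to. The class and the update rule (only incrementing on a wrong prediction) are illustrative assumptions, not the patent's actual model:

```python
from collections import Counter, defaultdict

class PrefixEntityModel:
    """Toy entity-identification model mapping sentence prefixes to
    the entity most often paired with them."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def predict(self, prefix):
        c = self.counts[prefix]
        return c.most_common(1)[0][0] if c else None

    def update(self, prefix, true_entity):
        # Compare the predicted entity to the true entity and update
        # the model based on that comparison, as in the abstract.
        if self.predict(prefix) != true_entity:
            self.counts[prefix][true_entity] += 1

def train(model, examples):
    """Each example pairs a first portion of a complete sentence
    (less than all of it) with the entity the sentence references."""
    for prefix, entity in examples:
        model.update(prefix, entity)
```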
  • Patent number: 9117452
    Abstract: A language processing system identifies, from log data, command inputs that parsed to a parsing rule associated with an action. If the command input has a signal indicative of user satisfaction, where the signal is derived from data that is not generated from performance of the action (e.g., user interactions with data provided in response to the performance of another, different action; resources identified in response to the performance of another, different action having a high quality score; etc.), then exception data is generated for the parsing rule. The exception data specifies the particular instance of the sentence parsed by the parsing rule, and precludes invocation of the action associated with the rule.
    Type: Grant
    Filed: June 25, 2013
    Date of Patent: August 25, 2015
    Assignee: Google Inc.
    Inventors: Jakob D. Uszkoreit, Percy Liang, Daniel M. Bikel
  • Patent number: 9037470
    Abstract: Apparatus and methods are provided for using automatic speech recognition to analyze a voice interaction and verify compliance of an agent reading a script to a client during the voice interaction. In one aspect of the invention, a communications system includes a user interface, a communications network, and a call center having an automatic speech recognition component. In other aspects of the invention, a script compliance method includes the steps of conducting a voice interaction between an agent and a client and evaluating the voice interaction with an automatic speech recognition component adapted to analyze the voice interaction and determine whether the agent has adequately followed the script. In still further aspects of the invention, the duration of a given interaction can be analyzed, either apart from or in combination with the script compliance analysis above, to seek to identify instances of agent non-compliance, of fraud, or of quality-analysis issues.
    Type: Grant
    Filed: June 25, 2014
    Date of Patent: May 19, 2015
    Assignee: West Business Solutions, LLC
    Inventors: Mark J. Pettay, Fonda J. Narke
  • Patent number: 9026447
    Abstract: A first communication path for receiving a communication is established. The communication includes speech, which is processed. A speech pattern is identified as including a voice-command. A portion of the speech pattern is determined as including the voice-command. That portion of the speech pattern is separated from the speech pattern and compared with a second speech pattern. If the two speech patterns match or resemble each other, the portion of the speech pattern is accepted as the voice-command. An operation corresponding to the voice-command is determined and performed. The operation may perform an operation on a remote device, forward the voice-command to a remote device, or notify a user. The operation may create a second communication path that may allow a headset to join in a communication between another headset and a communication device, several headsets to communicate with each other, or a headset to communicate with several communication devices.
    Type: Grant
    Filed: January 25, 2008
    Date of Patent: May 5, 2015
    Assignee: CenturyLink Intellectual Property LLC
    Inventors: Erik Geldbach, Kelsyn D. Rooks, Sr., Shane M. Smith, Mark Wilmoth
  • Patent number: 9026431
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for semantic parsing with multiple parsers. One of the methods includes obtaining one or more transcribed prompt n-grams from a speech to text recognizer, providing the transcribed prompt n-grams to a first semantic parser that executes on the user device and accesses a first knowledge base for results responsive to the spoken prompt, providing the transcribed prompt n-grams to a second semantic parser that accesses a second knowledge base for results responsive to the spoken prompt, the first knowledge base including first data not included in the second knowledge base, receiving a result responsive to the spoken prompt from the first semantic parser or the second semantic parser, wherein the result is selected from the knowledge base associated with the semantic parser that provided the result to the user device, and performing an operation based on the result.
    Type: Grant
    Filed: July 30, 2013
    Date of Patent: May 5, 2015
    Assignee: Google Inc.
    Inventors: Pedro J. Moreno Mengibar, Diego Melendo Casado, Fadi Biadsy
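The two-parser flow above can be sketched as routing the transcribed prompt n-grams to both semantic parsers and taking the result from whichever knowledge base answers. Preferring the on-device parser first is an assumption for this sketch; the abstract only states that the result is selected from the knowledge base of the parser that provided it:

```python
def parse_prompt(ngrams, device_parser, second_parser):
    """Provide the transcribed prompt n-grams to the on-device parser
    (first knowledge base) and, if it has no result, to the second
    parser; the operation is then performed based on the result."""
    result = device_parser(ngrams)
    if result is not None:
        return result
    return second_parser(ngrams)
```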
  • Patent number: 9015043
    Abstract: A computer-implemented method includes receiving an electronic representation of one or more human voices, recognizing words in a first portion of the electronic representation of the one or more human voices, and sending suggested search terms to a display device for display to a user in a text format. The suggested search terms are based on the recognized words in the first portion of the electronic representation of the one or more human voices. A search query is received from the user, which includes one or more of the suggested search terms that were displayed to the user.
    Type: Grant
    Filed: October 1, 2010
    Date of Patent: April 21, 2015
    Assignee: Google Inc.
    Inventor: Scott Jenson