Creating Patterns For Matching Patents (Class 704/243)
  • Patent number: 11068668
    Abstract: The disclosed computer-implemented method for performing natural language translation in AR may include accessing an audio input stream that includes words spoken by a speaking user in a first language. The method may next include performing active noise cancellation on the words in the audio input stream so that the spoken words are suppressed before reaching a listening user. Still further, the method may include processing the audio input stream to identify the words spoken by the speaking user, and translating the identified words spoken by the speaking user into a second, different language. The method may also include generating spoken words in the second, different language using the translated words, and replaying the generated spoken words in the second language to the listening user. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: October 25, 2018
    Date of Patent: July 20, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Andrew Lovitt, Antonio John Miller, Philip Robinson, Scott Selfon
  • Patent number: 11069342
    Abstract: A method for voice model training is provided. A first test set of data selected from a first voice data set, and a first voice model parameter obtained by performing first voice model training based on the first voice data set, are obtained. Data from a second voice data set is randomly selected to generate a second test set. Further, second voice model training is performed based on the second voice data set and the first voice model parameter when the second test set and the first test set satisfy a similarity condition.
    Type: Grant
    Filed: June 10, 2019
    Date of Patent: July 20, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Tao Sun, Yueteng Kang, Xiaoming Zhang, Li Zhang
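The similarity gate in the abstract above can be illustrated with a toy check: training continues from the first model's parameters only when the two test sets look alike. The mean-based statistic and the tolerance below are illustrative assumptions, not the patent's actual similarity condition.

```python
import numpy as np

def similar_enough(stats_a, stats_b, tol=0.1):
    """Decide whether two test sets are close enough to reuse model
    parameters; compares mean feature values under an assumed tolerance."""
    return float(np.abs(np.mean(stats_a) - np.mean(stats_b))) <= tol

# Hypothetical per-utterance feature summaries for the two test sets.
first_test = np.array([0.52, 0.48, 0.50])
second_test = np.array([0.55, 0.49, 0.51])
reuse_parameters = similar_enough(first_test, second_test)
```

If the check fails, training on the second data set would start from scratch instead of from the first model's parameters.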
  • Patent number: 11062705
    Abstract: According to one embodiment, an information processing apparatus includes one or more processors configured to detect a trigger from a voice signal, the trigger indicating start of voice recognition; and to perform voice recognition of a recognition sound section subsequent to a trigger sound section including the detected trigger, referring to a trigger and voice recognition dictionary corresponding to the trigger.
    Type: Grant
    Filed: February 27, 2019
    Date of Patent: July 13, 2021
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Nayuko Watanabe, Takehiko Kagoshima, Hiroshi Fujimura
  • Patent number: 11055745
    Abstract: Techniques for linguistic personalization of messages for targeted campaigns are described. In one or more implementations, dependencies between keywords and modifiers are extracted, from one or more segment-specific texts and a product-specific text, to build language models for the one or more segment-specific texts and the product-specific text. Modifiers with a desired sentiment are extracted from the product-specific text and transformation points are identified in a message skeleton. Then one or more of the extracted modifiers are inserted to modify one or more identified keywords in the message skeleton to create a personalized message for a target segment of the targeted marketing campaign.
    Type: Grant
    Filed: December 10, 2014
    Date of Patent: July 6, 2021
    Assignee: Adobe Inc.
    Inventors: Rishiraj Saha Roy, J. Guna Prasaad, Aishwarya Padmakumar, Ponnurangam Kumaraguru
  • Patent number: 11048869
    Abstract: Methods and systems for a transportation vehicle are provided. One method includes receiving a user input for a valid communication session by a processor executable, digital assistant at a device on a transportation vehicle; tagging by the digital assistant, the user input words with a grammatical connotation; generating an action context, a filter context and a response context by a neural network, based on the tagged user input; storing by the digital assistant, a key-value pair for a parameter of the filter context at a short term memory, based on an output from the neural network; updating by the digital assistant, the key-value pair at the short term memory after receiving a reply to a follow-up request and another output from the trained neural network; and providing a response to the reply by the digital assistant.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: June 29, 2021
    Assignee: Panasonic Avionics Corporation
    Inventors: Rawad Hilal, Gurmukh Khabrani, Chin Perng
  • Patent number: 11043211
    Abstract: A speech recognition method includes obtaining captured voice information, and determining semantic information of the captured voice information; segmenting the captured voice information to obtain voice segments when the semantic information does not satisfy a preset rule, and extracting voiceprint information of the voice segments; obtaining an unmatched voiceprint information from a local voiceprint database; matching the voiceprint information of the voice segments with the unmatched voiceprint information to determine a set of filtered voice segments having voiceprint information that successfully matches the unmatched voiceprint information; combining the set of filtered voice segments to obtain combined voice information, and determining combined semantic information of the combined voice information; and using the combined semantic information as a speech recognition result when the combined semantic information satisfies the preset rule.
    Type: Grant
    Filed: June 14, 2019
    Date of Patent: June 22, 2021
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Qiusheng Wan
  • Patent number: 11031029
    Abstract: A pitch detection method is provided. The method may use an M-PWVT-TEO algorithm to detect a pitch value from a speech signal, and apply a partial auto-correlation to the current signal with the pitch value to compensate for the delay of the pitch value. The method may also apply a full auto-correlation to the speech signal where no pitch value is detected, in order to recover onsets of the speech signal.
    Type: Grant
    Filed: August 30, 2019
    Date of Patent: June 8, 2021
    Assignee: OmniSpeech LLC
    Inventor: Vahid Khanagha
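A plain full auto-correlation pitch estimate, one building block the abstract above mentions, can be sketched as follows. This is a generic textbook estimator, not the patented M-PWVT-TEO method, and the pitch search range is an assumed typical-speech band.

```python
import numpy as np

def autocorr_pitch(signal, sample_rate, fmin=50.0, fmax=400.0):
    """Estimate the pitch (Hz) of a voiced frame via full auto-correlation."""
    signal = signal - np.mean(signal)
    corr = np.correlate(signal, signal, mode="full")
    corr = corr[len(corr) // 2:]          # keep non-negative lags only
    lo = int(sample_rate / fmax)          # shortest candidate lag
    hi = int(sample_rate / fmin)          # longest candidate lag
    lag = lo + int(np.argmax(corr[lo:hi]))  # lag with strongest self-similarity
    return sample_rate / lag

# A 200 Hz sine frame should yield an estimate near 200 Hz.
sr = 16000
t = np.arange(0, 0.05, 1.0 / sr)
frame = np.sin(2 * np.pi * 200.0 * t)
pitch = autocorr_pitch(frame, sr)
```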
  • Patent number: 11031108
    Abstract: A medicine management method includes acquiring user information of medicine usage corresponding to at least one assigned medicine; acquiring user information of medicine-using reactions; and concurrently displaying the user information of medicine usage, with first time information corresponding to different time points of using the at least one assigned medicine, and the user information of medicine-using reactions, with second time information corresponding to different time points at which the medicine-using reactions occurred.
    Type: Grant
    Filed: February 7, 2017
    Date of Patent: June 8, 2021
    Assignee: HTC Corporation
    Inventors: Tsung-Hsiang Liu, Ya-Han Yang, Hao-Ting Chang, Chih-Wei Cheng, Ting-Jung Chang
  • Patent number: 11017781
    Abstract: Techniques are provided for reverberation compensation for far-field speaker recognition. A methodology implementing the techniques according to an embodiment includes receiving an authentication audio signal associated with speech of a user and extracting features from the authentication audio signal. The method also includes scoring results of application of one or more speaker models to the extracted features. Each of the speaker models is trained based on a training audio signal processed by a reverberation simulator to simulate selected far-field environmental effects to be associated with that speaker model. The method further includes selecting one of the speaker models, based on the score, and mapping the selected speaker model to a known speaker identification or label that is associated with the user.
    Type: Grant
    Filed: October 6, 2018
    Date of Patent: May 25, 2021
    Assignee: INTEL CORPORATION
    Inventors: Gokcen Cilingir, Narayan Biswal
  • Patent number: 11017022
    Abstract: Methods and systems are disclosed in which audio broadcasts are converted into audio segments, for example, based on segment content. These audio segments are indexed, so as to be searchable, as computer searchable segments, for example, by network search engines and other computerized search tools.
    Type: Grant
    Filed: January 29, 2017
    Date of Patent: May 25, 2021
    Assignee: SubPLY Solutions Ltd.
    Inventors: Gal Klein, Rachel Ludmer
  • Patent number: 11010820
    Abstract: Systems, methods, and apparatus are disclosed for generating and processing natural language requests. A request processing system processes a received natural language request to identify an intent of the natural language request and a confidence level of the identified intent. In response to the confidence level of the identified intent not satisfying a threshold level, the request processing system sends the natural language request to the fulfillment computing device for further processing by a person associated with the fulfillment computing device. In response to the confidence level satisfying the threshold level, the request processing system proceeds with fulfilling the request per the identified intent.
    Type: Grant
    Filed: May 5, 2016
    Date of Patent: May 18, 2021
    Assignee: TRANSFORM SR BRANDS LLC
    Inventors: Edward Lampert, Eui Chung, Bharath Sridharan
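The confidence-gated routing described in the abstract above reduces to a simple branch: fulfill automatically when the intent confidence meets a threshold, otherwise hand off to a person. The threshold value and function names below are illustrative assumptions.

```python
CONFIDENCE_THRESHOLD = 0.75  # assumed cutoff for automatic fulfillment

def route_request(identified_intent: str, confidence: float):
    """Fulfill automatically when confident; otherwise escalate to a person."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return ("fulfill", identified_intent)
    return ("escalate", identified_intent)

auto = route_request("order_status", 0.92)     # confident: fulfill directly
manual = route_request("order_status", 0.40)   # uncertain: send to a human
```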
  • Patent number: 10978053
    Abstract: A system determines user intent from a received conversation element. A plurality of distinct intent labels are generated for the received conversation element. The generated plurality of distinct intent labels are divided into a plurality of interpretation partitions with overlapping semantic content. For each interpretation partition of the plurality of interpretation partitions, a set of maximal coherent subgroups is defined that do not disagree on labels for terms in each subgroup, a score is computed for each maximal coherent subgroup of the defined set, and a maximal coherent subgroup is selected from the set based on the computed score. Intent labels are aggregated from the selected maximal coherent subgroup of each interpretation partition to define a multiple intent interpretation of the received conversation element.
    Type: Grant
    Filed: October 13, 2020
    Date of Patent: April 13, 2021
    Assignee: SAS Institute Inc.
    Inventors: Jared Michael Dean Smythe, Richard Welland Crowell
  • Patent number: 10964310
    Abstract: A method of updating speech recognition data including a language model used for speech recognition, the method including obtaining language data including at least one word; detecting a word that does not exist in the language model from among the at least one word; obtaining at least one phoneme sequence regarding the detected word; obtaining components constituting the at least one phoneme sequence by dividing the at least one phoneme sequence into predetermined unit components; determining information regarding probabilities that the respective components constituting each of the at least one phoneme sequence appear during speech recognition; and updating the language model based on the determined probability information.
    Type: Grant
    Filed: March 16, 2020
    Date of Patent: March 30, 2021
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Chi-youn Park, Il-hwan Kim, Kyung-min Lee, Nam-hoon Kim, Jae-won Lee
  • Patent number: 10950229
    Abstract: A voice command application allows a user to configure an infotainment system to respond to customized voice commands. The voice command application exposes a library of functions to the user which the infotainment system can execute via interaction with the vehicle. The voice command application receives a selection of one or more functions and then receives a speech sample of the voice command. The voice command application generates sample metadata that includes linguistic elements of the voice command, and then generates a command specification. The command specification indicates the selected functions and the sample metadata for storage in a database. Subsequently, the voice command application receives the voice command from the user and locates the associated command specification in the database. The voice command application then extracts the associated set of functions and causes the vehicle to execute those functions to perform vehicle operations.
    Type: Grant
    Filed: August 25, 2017
    Date of Patent: March 16, 2021
    Assignee: Harman International Industries, Incorporated
    Inventors: Rajesh Biswal, Arindam Dasgupta
  • Patent number: 10938389
    Abstract: A method for controlling operation of a power switch includes obtaining, by one or more processors of a power switch, data indicative of one or more non-contact gestures. The method includes determining, by the one or more processors, a control action based at least in part on the data indicative of the one or more non-contact gestures. The method includes implementing, by the one or more processors, the control action.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: March 2, 2021
    Assignee: Hubbell Incorporated
    Inventors: Shawn Monteith, Michael Tetreault, Daniel Gould, Nicholas Kraus
  • Patent number: 10922488
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for computing numeric representations of words. One of the methods includes obtaining a set of training data, wherein the set of training data comprises sequences of words; training a classifier and an embedding function on the set of training data, wherein training the embedding function comprises obtaining trained values of the embedding function parameters; processing each word in the vocabulary using the embedding function in accordance with the trained values of the embedding function parameters to generate a respective numerical representation of each word in the vocabulary in the high-dimensional space; and associating each word in the vocabulary with the respective numeric representation of the word in the high-dimensional space.
    Type: Grant
    Filed: March 25, 2019
    Date of Patent: February 16, 2021
    Assignee: Google LLC
    Inventors: Tomas Mikolov, Kai Chen, Gregory S. Corrado, Jeffrey A. Dean
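The core idea in the abstract above, mapping each vocabulary word to a dense numeric vector so that related words land near one another, can be illustrated with a small co-occurrence-plus-SVD factorization. This stand-in is not the patent's trained embedding function; the toy corpus and two-dimensional space are assumptions for illustration.

```python
import numpy as np

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"], ["a", "cat", "ran"]]
vocab = sorted({w for s in corpus for w in s})
idx = {w: i for i, w in enumerate(vocab)}

# Build a symmetric co-occurrence matrix with a +/-1 word window.
cooc = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for i, w in enumerate(sent):
        for j in (i - 1, i + 1):
            if 0 <= j < len(sent):
                cooc[idx[w], idx[sent[j]]] += 1

# Factorize: each row of the scaled left singular vectors is a dense
# numeric representation of one vocabulary word.
U, S, _ = np.linalg.svd(cooc)
embeddings = U[:, :2] * S[:2]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

sim = cosine(embeddings[idx["cat"]], embeddings[idx["dog"]])
```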
  • Patent number: 10915529
    Abstract: A method is provided for generating a classification model configured to select an optimal execution combination for query processing. The method includes providing training queries and different execution combinations for executing the training queries. Each different execution combination involves a respective different query engine and a respective different runtime. The method includes extracting, using Cost-Based Optimizers (CBOs), a set of feature vectors for each training query. The method includes merging the set of feature vectors for the each of the training queries into a respective merged feature vector to obtain a set of merged feature vectors. The method includes adding, to each of the merged feature vectors, a respective label indicative of the optimal execution combination based on actual respective execution times of the different execution combinations, to obtain a set of labels. The method includes training the classification model by learning the merged feature vectors with the labels.
    Type: Grant
    Filed: March 14, 2018
    Date of Patent: February 9, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Tatsuhiro Chiba
  • Patent number: 10909331
    Abstract: Systems and processes for operating an electronic device to train a machine-learning translation system are described. In one process, a first set of training data is obtained. The first set of training data includes at least one payload in a first language and a translation of the at least one payload in a second language. The process further includes obtaining one or more templates for adapting the at least one payload; adapting the at least one payload using the one or more templates to generate at least one adapted payload formulated as a translation request; generating a second set of training data based on the at least one adapted payload; and training the machine-learning translation system using the second set of training data.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: February 2, 2021
    Assignee: Apple Inc.
    Inventors: Stephan Peitz, Udhyakumar Nallasamy, Matthias Paulik, Yun Tang
  • Patent number: 10910000
    Abstract: A method for audio recognition comprises: dividing audio data to be recognized to obtain a plurality of frames of audio data; calculating, based on audio variation trends among the plurality of frames and within each of the plurality of frames, a characteristic value for each frame of the audio data to be recognized; and matching the characteristic value of each frame of the audio data to be recognized with a pre-established audio characteristic value comparison table to obtain a recognition result, wherein the audio characteristic value comparison table is established based on the audio variation trends among the frames and within each of the frames of sample data.
    Type: Grant
    Filed: December 11, 2018
    Date of Patent: February 2, 2021
    Assignee: ADVANCED NEW TECHNOLOGIES CO., LTD.
    Inventors: Zhijun Du, Nan Wang
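The trend-based matching in the abstract above can be sketched by encoding each frame with its variation trend within the frame and relative to the previous frame, then looking the code sequence up in a pre-built comparison table. The specific two-bit encoding and the sample data are illustrative assumptions, not the patent's characteristic-value formula.

```python
import numpy as np

def frame_trend_codes(samples, frame_len=4):
    """Encode each frame by the up/down trend within it and between frames."""
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples) - frame_len + 1, frame_len)]
    energies = [float(np.sum(np.square(f))) for f in frames]
    codes = []
    for k, frame in enumerate(frames):
        within = 1 if frame[-1] >= frame[0] else 0                       # trend inside the frame
        between = 1 if k > 0 and energies[k] >= energies[k - 1] else 0   # trend vs. previous frame
        codes.append((within << 1) | between)
    return codes

# Build the comparison table from known sample data, then match a query.
sample = np.array([1.0, 2.0, 3.0, 4.0, 4.0, 3.0, 2.0, 1.0])
table = {tuple(frame_trend_codes(sample)): "sample-001"}
match = table.get(tuple(frame_trend_codes(sample)))
```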
  • Patent number: 10902211
    Abstract: A system determines intent values based on an object in a received phrase, and detail values based on the object in the received phrase. The system determines intent state values based on the intent values and the detail values, and detail state values and an intent detail value based on the intent values and the detail values. The system determines other intent values based on the intent values and another object in the received phrase, and other detail values based on the detail values and the other object in the received phrase. The system determines a general intent value based on the other intent values, the other detail values, and the intent state values, and another intent detail value based on the other intent values, the other detail values, and the detail state values.
    Type: Grant
    Filed: April 22, 2019
    Date of Patent: January 26, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yu Wang, Yilin Shen, Hongxia Jin
  • Patent number: 10891441
    Abstract: A system for assisting sharing of information includes circuitry to: input a plurality of sentences each representing a statement made by one of a plurality of users, the sentence being generated by speaking or writing during a meeting or by extracting from at least one of meeting data, email data, electronic file data, and chat data at any time; determine a statement type of the statement represented by each one of the plurality of sentences, the statement type being one of a plurality of statement types previously determined; select, from among the plurality of sentences being input, one or more sentences each representing a statement of a specific statement type of the plurality of types; and output a list of the selected one or more sentences as key statements of the plurality of sentences.
    Type: Grant
    Filed: February 20, 2020
    Date of Patent: January 12, 2021
    Assignee: Ricoh Company, Ltd.
    Inventor: Tomohiro Shima
  • Patent number: 10885903
    Abstract: A service for generating textual transcriptions of video content is provided. A textual output generation service utilizes machine learning techniques to provide additional context for textual transcription. The textual output generation service first utilizes a machine learning algorithm to analyze video data from the video content and identify a set of context keywords corresponding to items identified in the video data. The textual output generation service then identifies one or more custom dictionaries of relevant terms based on the identified keywords. The textual output generation service can then utilize a machine learning algorithm to process the audio data from the video content biased with the selected dictionaries. The processing result can be used to generate closed captioning information or textual content streams, or can otherwise be stored.
    Type: Grant
    Filed: December 10, 2018
    Date of Patent: January 5, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Leah Siddall, Bryan Samis, Shawn Przybilla
  • Patent number: 10885912
    Abstract: Methods and systems for providing a correct voice command. One system includes a communication device that includes an electronic processor configured to receive a first voice command via a microphone and analyze the first voice command using a first type of voice recognition. The electronic processor determines that an action to be performed in accordance with the first voice command is unrecognizable based on the analysis using the first type of voice recognition. The electronic processor transmits the first voice command to a remote electronic computing device accompanying a request requesting that the first voice command be analyzed using a second type of voice recognition different from the first type of voice recognition. The electronic processor receives, from the remote electronic computing device, a second voice command corresponding to the action and different from the first voice command, and outputs, with a speaker, the second voice command.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: January 5, 2021
    Assignee: MOTOROLA SOLUTIONS, INC.
    Inventors: Ming Yeh Koh, Hee Tat Goey, Bing Qin Lim, Yan Pin Ong
  • Patent number: 10878199
    Abstract: A word vector processing method is provided. Word segmentation is performed on a corpus to obtain words, and n-gram strokes corresponding to the words are determined. Each n-gram stroke represents n successive strokes of a corresponding word. Word vectors of the words and stroke vectors of the corresponding n-gram strokes are initialized. After the word segmentation is performed, the n-gram strokes are determined, and the word vectors and stroke vectors are initialized, the word vectors and the stroke vectors are trained.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: December 29, 2020
    Assignee: Advanced New Technologies Co., Ltd.
    Inventors: Shaosheng Cao, Xiaolong Li
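The n-gram enumeration behind the abstract above can be sketched as follows. The patent operates on stroke sequences of Chinese words; here characters stand in for strokes as a simplification, and the boundary markers and n-gram sizes are illustrative assumptions in the style of subword embedding models.

```python
def char_ngrams(word, n_values=(3, 4)):
    """Enumerate n-grams of a word; characters stand in for strokes here,
    each n-gram representing n successive units of the word."""
    padded = "<" + word + ">"   # mark word boundaries
    grams = []
    for n in n_values:
        grams.extend(padded[i:i + n] for i in range(len(padded) - n + 1))
    return grams

grams = char_ngrams("word")
```

Each word vector would then be trained jointly with the vectors of its n-grams, so that rare words share information through shared subunits.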
  • Patent number: 10878814
    Abstract: There is provided an information processing apparatus, an information processing method, and a program that make it possible to appropriately determine the cluster segment of a character string group that is specified on the basis of speech recognition of collected speech. The information processing apparatus includes: an acquisition unit that acquires a detection result relating to a variation of a sound attribute of collected speech; and a determination unit that determines, on the basis of the detection result, a cluster segment relating to a character string group that is specified on the basis of speech recognition of the speech.
    Type: Grant
    Filed: April 14, 2017
    Date of Patent: December 29, 2020
    Assignee: SONY CORPORATION
    Inventors: Shinichi Kawano, Yuhei Taki
  • Patent number: 10867604
    Abstract: Systems and methods for distributed voice processing are disclosed herein. In one example, the method includes detecting sound via a microphone array of a first playback device and analyzing, via a first wake-word engine of the first playback device, the detected sound. The first playback device may transmit data associated with the detected sound to a second playback device over a local area network. A second wake-word engine of the second playback device may analyze the transmitted data associated with the detected sound. The method may further include identifying that the detected sound contains either a first wake word or a second wake word based on the analysis via the first and second wake-word engines, respectively. Based on the identification, sound data corresponding to the detected sound may be transmitted over a wide area network to a remote computing device associated with a particular voice assistant service.
    Type: Grant
    Filed: February 8, 2019
    Date of Patent: December 15, 2020
    Assignee: Sonos, Inc.
    Inventors: Connor Kristopher Smith, John Tolomei, Betty Lee
  • Patent number: 10860638
    Abstract: A system and method for processing digital multimedia files to provide searchable results includes the steps of converting a digital multimedia file to a plain text data format, annotating each word in the file with an indicator such as a time stamp to indicate where the word appears in the file, converting each indicator to an encoded indicator using characters that are not indexed by search software, indexing the converted, annotated file, storing the converted, annotated file and a file location of the converted, annotated file, receiving a query from a user's computer, and returning search results to the user's computer that include search snippets comprising unindexed portions of one or more files considered responsive to the query and the file location of those files.
    Type: Grant
    Filed: April 7, 2017
    Date of Patent: December 8, 2020
    Inventor: Uday Gorrepati
  • Patent number: 10824962
    Abstract: Techniques for improving quality of classification models for differentiating different user intents by improving the quality of training samples used to train the classification models are described. Pairs of user intents that are difficult to differentiate by classification models trained using the given training samples are identified based upon distinguishability scores (e.g., F-scores). For each of the identified pairs of intents, pairs of training samples each including a training sample associated with a first intent and a training sample associated with a second intent in the pair of intents are ranked based upon a similarity score between the two training samples in each pair of training samples. The identified pairs of intents and the pairs of training samples having the highest similarity scores may be presented to users through a user interface, along with user-selectable options or suggestions for improving the training samples.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: November 3, 2020
    Assignee: Oracle International Corporation
    Inventors: Gautam Singaraju, Jiarui Ding, Vishal Vishnoi, Mark Joseph Sugg, Edward E. Wong
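The sample-pair ranking in the abstract above can be illustrated with a simple token-overlap similarity: for a hard-to-separate intent pair, cross-intent training samples are ranked so the most similar (most confusable) pairs surface first. Jaccard similarity here is a stand-in for the patent's similarity score, and the utterances are invented examples.

```python
def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two training utterances."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

intent_a = ["check my order status", "where is my order"]
intent_b = ["cancel my order", "stop my order now"]

# Rank cross-intent sample pairs by similarity; the top pairs are the
# likeliest sources of confusion and the best candidates for cleanup.
pairs = sorted(
    ((jaccard(x, y), x, y) for x in intent_a for y in intent_b),
    reverse=True,
)
top_score, top_a, top_b = pairs[0]
```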
  • Patent number: 10789960
    Abstract: Disclosed is a method including a prior phase for referencing an authorized user, during which this user pronounces a reference phrase at least once and the phrase is converted into a series of reference symbols by a statistical conversion common to all of the users to be referenced. An authentication test phase follows, including a first step during which a candidate user pronounces the reference phrase at least once and this pronounced phrase is converted, in the same manner as the reference phrase during the prior phase and using the same conversion, into a series of candidate symbols, and a second step during which the series of candidate symbols is compared to the series of reference symbols to determine a comparison result. This result is compared to at least one predetermined threshold, determining whether the candidate user who pronounced the phrase during the test phase is indeed the authorized user, providing authentication.
    Type: Grant
    Filed: November 3, 2017
    Date of Patent: September 29, 2020
    Assignee: PW GROUP
    Inventors: Gregory Libert, Dijana Petrovski Chollet, Houssemeddine Khemiri
  • Patent number: 10789040
    Abstract: A communication is received. The communication is analyzed to determine a form of the communication and a recipient of the communication. An encoded audio signal is transmitted to the recipient. Responsive to transmitting the encoded audio signal, a response encoded audio signal is received. Responsive to receiving the response encoded audio signal, the communication is transmitted to the recipient based on the response encoded audio signal.
    Type: Grant
    Filed: June 5, 2019
    Date of Patent: September 29, 2020
    Assignee: International Business Machines Corporation
    Inventors: James M. J. Silvester, Livia E. Stacey
  • Patent number: 10783880
    Abstract: A speech processing system includes an input for receiving an input utterance spoken by a user and a word alignment unit configured to align different sequences of acoustic speech models with the input utterance spoken by the user. Each different sequence of acoustic speech models corresponds to a different possible utterance that a user might make. The system identifies any parts of a read prompt text that the user skipped; any parts of the read prompt text that the user repeated; and any speech sounds that the user inserted between words of the read prompt text. The information from the word alignment unit can be used to assess the proficiency and/or fluency of the user's speech.
    Type: Grant
    Filed: November 4, 2016
    Date of Patent: September 22, 2020
    Assignee: THE CHANCELLOR, MASTERS, AND SCHOLARS OF THE UNIVERSITY OF CAMBRIDGE
    Inventors: Thomas William John Ash, Anthony John Robinson
  • Patent number: 10783882
    Abstract: Acoustic change is detected by a method including preparing a first Gaussian Mixture Model (GMM) trained with first audio data of first speech sound from a speaker at a first distance from an audio interface and a second GMM generated from the first GMM using second audio data of second speech sound from the speaker at a second distance from the audio interface; calculating a first output of the first GMM and a second output of the second GMM by inputting obtained third audio data into the first GMM and the second GMM; and transmitting a notification in response to determining at least that a difference between the first output and the second output exceeds a threshold. Each Gaussian distribution of the second GMM has a mean obtained by shifting a mean of a corresponding Gaussian distribution of the first GMM by a common channel bias.
    Type: Grant
    Filed: January 3, 2018
    Date of Patent: September 22, 2020
    Assignee: International Business Machines Corporation
    Inventors: Osamu Ichikawa, Gakuto Kurata, Takashi Fukuda
  • Patent number: 10776712
    Abstract: In various embodiments, the systems and methods described herein relate to generative models. The generative models may be trained using machine learning approaches, with training sets comprising chemical compounds and biological or chemical information that relate to the chemical compounds. Deep learning architectures may be used. In various embodiments, the generative models are used to generate chemical compounds that have desired characteristics, e.g. activity against a selected target. The generative models may be used to generate chemical compounds that satisfy multiple requirements.
    Type: Grant
    Filed: February 3, 2016
    Date of Patent: September 15, 2020
    Assignee: Preferred Networks, Inc.
    Inventors: Kenta Oono, Justin Clayton, Nobuyuki Ota
  • Patent number: 10755709
    Abstract: Systems, methods, and devices for recognizing a user are disclosed. A speech-controlled device captures a spoken utterance, and sends audio data corresponding thereto to a server. The server determines content sources storing or having access to content responsive to the spoken utterance. The server also determines multiple users associated with a profile of the speech-controlled device. Using the audio data, the server may determine user recognition data with respect to each user indicated in the speech-controlled device's profile. The server may also receive user recognition confidence threshold data from each of the content sources. The server may determine user recognition data that satisfies (i.e., meets or exceeds) the most stringent (i.e., highest) of the user recognition confidence threshold data. Thereafter, the server may send data indicating a user associated with the user recognition data to all of the content sources.
    Type: Grant
    Filed: June 27, 2018
    Date of Patent: August 25, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Natalia Vladimirovna Mamkina, Naomi Bancroft, Nishant Kumar, Shamitha Somashekar
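The threshold logic in the abstract above can be sketched briefly: the server adopts the most stringent (highest) confidence threshold reported by any content source, then keeps only users whose recognition confidence satisfies it. All names and scores below are illustrative, not from the patent.

```python
# Hypothetical sketch of the "most stringent threshold" rule described above.
def recognized_users(user_confidences, source_thresholds):
    """Return users whose recognition confidence meets or exceeds the
    highest threshold requested by any content source."""
    strictest = max(source_thresholds)
    return [user for user, conf in user_confidences.items() if conf >= strictest]

confidences = {"alice": 0.92, "bob": 0.71}   # per-user recognition scores
thresholds = [0.6, 0.9, 0.75]                # one threshold per content source
print(recognized_users(confidences, thresholds))  # only users scoring >= 0.9
```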
  • Patent number: 10748344
    Abstract: A method includes acquiring, by a camera, an image frame of an object having known geometry in a real scene, and estimating a pose of the object in the image frame with respect to the camera. A cursor is displayed on a display by rendering the cursor at a 3D position in a 3D coordinate system. An output is presented to a user when a predetermined portion of the object falls at the 3D position. The content of the output is based on the predetermined portion of the object.
    Type: Grant
    Filed: September 12, 2018
    Date of Patent: August 18, 2020
    Assignee: SEIKO EPSON CORPORATION
    Inventor: Xiang Guo
  • Patent number: 10733718
    Abstract: In general, a system is described that includes a set of one or more cameras and a computing device. The computing device receives a plurality of images of a three-dimensional environment captured by the one or more cameras, and a respective camera that captures a respective image is distinctly positioned at a respective particular location and in a respective particular direction. The computing device generates a plurality of image sets that each include at least three images. For each image set, the computing device calculates a plurality of predicted pairwise directions. The computing device compares a first sum of model pairwise directions with a second sum of the plurality of predicted pairwise directions and generates an inconsistency score for the respective image set. The computing device then reconstructs a digital representation of the three-dimensional environment depicted in the images.
    Type: Grant
    Filed: March 27, 2019
    Date of Patent: August 4, 2020
    Assignee: Regents of the University of Minnesota
    Inventors: Gilad Lerman, Yunpeng Shi
  • Patent number: 10720149
    Abstract: Techniques to dynamically customize a menu system presented to a user by a voice interaction system are provided. Audio data from a user that includes the speech of a user can be received. Features can be extracted from the received audio data, including a vocabulary of the speech of the user. The extracted features can be compared to features associated with a plurality of user group models. A user group model to assign to the user from the plurality of user group models can be determined based on the comparison. The user group models can cluster users together based on estimated characteristics of the users and can specify customized menu systems for each different user group. Audio data can then be generated and provided to the user in response to the received audio data based on the determined user group model assigned to the user.
    Type: Grant
    Filed: October 23, 2018
    Date of Patent: July 21, 2020
    Assignee: Capital One Services, LLC
    Inventors: Reza Farivar, Jeremy Edward Goodsitt, Fardin Abdi Taghi Abad, Austin Grant Walters
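The group-assignment step above can be sketched as comparing extracted speech features against per-group feature profiles and picking the closest group. Cosine similarity is an assumption here (the abstract does not name a metric), and all feature vectors are invented for illustration.

```python
# Illustrative sketch: assign a caller to the user group whose feature
# profile is most similar to the caller's extracted features.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def assign_group(user_features, group_models):
    """Pick the group model most similar to the user's features."""
    return max(group_models, key=lambda g: cosine(user_features, group_models[g]))

groups = {"novice": [1.0, 0.2, 0.1], "expert": [0.1, 0.9, 0.8]}
print(assign_group([0.9, 0.3, 0.2], groups))  # nearest group name
```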
  • Patent number: 10714077
    Abstract: An apparatus for calculating acoustic score, a method of calculating acoustic score, an apparatus for speech recognition, a method of speech recognition, and an electronic device including the same are provided. An apparatus for calculating acoustic score includes a preprocessor configured to sequentially extract audio frames into windows and a score calculator configured to calculate an acoustic score of a window by using a deep neural network (DNN)-based acoustic model.
    Type: Grant
    Filed: June 20, 2016
    Date of Patent: July 14, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Inchul Song, Young Sang Choi
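The windowing step described in the abstract above can be sketched as follows: audio frames are grouped into overlapping fixed-size windows, and an acoustic model scores each window. The toy "model" below just averages frame values; a real system would apply a DNN-based acoustic model as the abstract states.

```python
# Minimal sketch of extracting frames into windows and scoring each window.
def windows(frames, size, step):
    """Slide a fixed-size window over the frame sequence."""
    return [frames[i:i + size] for i in range(0, len(frames) - size + 1, step)]

def acoustic_scores(frames, size=3, step=2):
    # Stand-in scorer: mean of each window (a DNN would go here).
    return [sum(w) / len(w) for w in windows(frames, size, step)]

frames = [0.1, 0.4, 0.3, 0.9, 0.2, 0.6, 0.5]
print(acoustic_scores(frames))  # one score per window
```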
  • Patent number: 10706873
    Abstract: Disclosed are machine learning-based technologies that analyze an audio input and provide speaker state predictions in response to the audio input. The speaker state predictions can be selected and customized for each of a variety of different applications.
    Type: Grant
    Filed: June 10, 2016
    Date of Patent: July 7, 2020
    Assignee: SRI International
    Inventors: Andreas Tsiartas, Elizabeth Shriberg, Cory Albright, Michael W. Frandsen
  • Patent number: 10685670
    Abstract: In one example in accordance with the present disclosure, a method for a web technology responsive to mixtures of emotions includes receiving, from a user, voice information related to the web technology. The method includes generating, using a voice analysis service, percentages or levels of different emotions detected in the voice information. The method includes activating, in the web technology, at least one of multiple defined designs or functions based on the different emotions detected. Each design or function may be activated when a particular percentage or level of an emotion is detected or when a particular mixture of different emotions is detected.
    Type: Grant
    Filed: April 22, 2015
    Date of Patent: June 16, 2020
    Assignee: MICRO FOCUS LLC
    Inventors: Elad Levi, Avigad Mizrahi, Ran Bar Zik
  • Patent number: 10672392
    Abstract: A device, system and method for causing an output device to provide information for voice command functionality is provided. A controller determines when a received textual term, received at the controller via one or more of an input device and a communications unit, is phonetically similar to one or more existing textual terms used for activating functionality at a communication device using a voice recognition algorithm. When the received textual term is phonetically similar to one or more existing textual terms, the controller: generates one or more suggested textual terms, related to the received textual term, that minimize phonetic similarities with the one or more existing textual terms; and causes an output device to provide an indication of the one or more suggested textual terms to use in place of the received textual term.
    Type: Grant
    Filed: July 23, 2018
    Date of Patent: June 2, 2020
    Assignee: MOTOROLA SOLUTIONS, INC.
    Inventors: Melanie A. King, Craig F Siddoway
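The conflict check above can be sketched by comparing a new voice-command term phonetically against existing terms and flagging collisions. The key function below (drop vowels, collapse repeated consonants) is a crude stand-in for a real phonetic algorithm such as Soundex or Metaphone; the example terms are invented.

```python
# Hypothetical sketch of detecting phonetically similar command terms.
def phonetic_key(term):
    """Crude phonetic key: keep consonants, collapse adjacent repeats."""
    consonants = [c for c in term.lower() if c.isalpha() and c not in "aeiou"]
    key = []
    for c in consonants:
        if not key or key[-1] != c:
            key.append(c)
    return "".join(key)

def conflicts(new_term, existing_terms):
    """Return existing terms whose phonetic key matches the new term's."""
    k = phonetic_key(new_term)
    return [t for t in existing_terms if phonetic_key(t) == k]

print(conflicts("colour", ["color", "mute", "status"]))  # phonetic collision
```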
  • Patent number: 10643601
    Abstract: A conversational system receives an utterance, and a parser performs a parsing operation on the utterance, resulting in some words being parsed and some words not being parsed. For the words that are not parsed, words or phrases determined to be unimportant are ignored. The resulting unparsed words are processed to determine the likelihood they are important and whether they should be addressed by the automated assistant. For example, if a score associated with an important unparsed word achieves a particular threshold, then a course of action to take for the utterance may include providing a message that the portion of the utterance associated with the important unparsed word cannot be handled.
    Type: Grant
    Filed: January 31, 2018
    Date of Patent: May 5, 2020
    Assignee: Semantic Machines, Inc.
    Inventors: David Leo Wright Hall, Daniel Klein
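The fallback logic above can be sketched as: after parsing, leftover words are filtered against an unimportant-word list, scored for importance, and any word at or over the threshold marks a portion of the utterance the assistant cannot handle. The stopword list and importance scores below are invented for illustration.

```python
# Illustrative sketch of scoring unparsed words against a threshold.
STOPWORDS = {"the", "a", "please", "um"}          # assumed "unimportant" words
IMPORTANCE = {"refund": 0.9, "tomorrow": 0.4}     # hypothetical scores

def unhandled_words(unparsed_words, threshold=0.5):
    """Return unparsed words important enough to warrant a
    'cannot handle this part' message."""
    kept = [w for w in unparsed_words if w not in STOPWORDS]
    return [w for w in kept if IMPORTANCE.get(w, 0.0) >= threshold]

print(unhandled_words(["um", "refund", "tomorrow"]))
```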
  • Patent number: 10637898
    Abstract: A speaker identification system (“system”) automatically assigns a speaker to voiced segments in a conversation, without requiring any previously recorded voice sample or any other action by the speaker. The system enables unsupervised learning of speakers' fingerprints and using such fingerprints for identifying a speaker in a recording of a conversation. The system identifies one or more speakers, e.g., representatives of an organization, who are in conversation with other speakers, e.g., customers of the organization. The system processes recordings of conversations between a representative and one or more customers to generate multiple voice segments having a human voice, identifies the voice segments that have the same or a similar feature, and determines the voice in the identified voice segments as the voice of the representative.
    Type: Grant
    Filed: May 24, 2017
    Date of Patent: April 28, 2020
    Assignee: AffectLayer, Inc.
    Inventors: Raphael Cohen, Erez Volk, Russell Levy, Micha Yochanan Breakstone
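The core idea above can be sketched as a toy: across several recordings involving the same representative, the voice fingerprint that recurs in the most recordings is taken to be the representative's. Fingerprints are simplified to hashable tuples here; a real system would compare embedding vectors by distance, and all data below is invented.

```python
# Toy sketch: the recurring voice fingerprint across calls is the rep's.
from collections import Counter

def representative_fingerprint(recordings):
    """recordings: list of per-call lists of segment fingerprints.
    Returns the fingerprint seen in the most distinct recordings."""
    counts = Counter()
    for segments in recordings:
        for fp in set(segments):   # count each fingerprint once per call
            counts[fp] += 1
    fp, _ = counts.most_common(1)[0]
    return fp

calls = [
    [("rep",), ("cust1",)],
    [("rep",), ("cust2",)],
    [("cust3",), ("rep",)],
]
print(representative_fingerprint(calls))  # fingerprint present in every call
```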
  • Patent number: 10627915
    Abstract: Embodiments described herein includes a system comprising a processor coupled to display devices, sensors, remote client devices, and computer applications. The computer applications orchestrate content of the remote client devices simultaneously across the display devices and the remote client devices, and allow simultaneous control of the display devices. The simultaneous control includes automatically detecting a gesture of at least one object from gesture data received via the sensors. The detecting comprises identifying the gesture using only the gesture data. The computer applications translate the gesture to a gesture signal, and control the display devices in response to the gesture signal.
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: April 21, 2020
    Assignee: Oblong Industries, Inc.
    Inventors: David Minnen, Paul Yarin
  • Patent number: 10614162
    Abstract: A system for assisting sharing of information includes circuitry to: input a plurality of sentences each representing a statement made by one of a plurality of users, the sentence being generated by speaking or writing during a meeting or by extracting from at least one of meeting data, email data, electronic file data, and chat data at any time; determine a statement type of the statement represented by each one of the plurality of sentences, the statement type being one of a plurality of statement types previously determined; select, from among the plurality of sentences being input, one or more sentences each representing a statement of a specific statement type of the plurality of types; and output a list of the selected one or more sentences as key statements of the plurality of sentences.
    Type: Grant
    Filed: May 26, 2017
    Date of Patent: April 7, 2020
    Assignee: Ricoh Company, Ltd.
    Inventor: Tomohiro Shima
  • Patent number: 10607188
    Abstract: Systems and methods described herein utilize supervised machine learning to generate a model for scoring interview responses. The system may access a training response, which in one embodiment is an audiovisual recording of a person responding to an interview question. The training response may have an assigned human-determined score. The system may extract at least one delivery feature and at least one content feature from the audiovisual recording of the training response, and use the extracted features and the human-determined score to train a response scoring model for scoring interview responses. The response scoring model may be configured based on the training to automatically assign scores to audiovisual recordings of interview responses. The scores for interview responses may be used by interviewers to assess candidates.
    Type: Grant
    Filed: March 24, 2015
    Date of Patent: March 31, 2020
    Assignee: Educational Testing Service
    Inventors: Patrick Charles Kyllonen, Lei Chen, Michelle Paulette Martin, Isaac Bejar, Chee Wee Leong, Joanna Gorin, David Michael Williamson
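The supervised setup above can be sketched minimally: extracted delivery/content features paired with human-assigned scores train a model that then scores new responses. Ordinary least squares on a single combined feature is used below purely for brevity; the abstract does not specify the model class, and the feature values and scores are invented.

```python
# Hedged sketch: fit a line mapping a response feature to a human score.
def fit_line(xs, ys):
    """Ordinary least squares for one feature; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# feature = combined delivery/content cue per training response (illustrative)
features = [0.2, 0.5, 0.8, 0.9]
human_scores = [2.0, 3.0, 4.0, 4.5]
slope, intercept = fit_line(features, human_scores)
print(slope * 0.6 + intercept)  # predicted score for a new response
```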
  • Patent number: 10607605
    Abstract: Disclosed are apparatuses and methods for processing a control command for an electronic device based on a voice agent. The apparatus includes a command tagger configured to receive at least one control command for the electronic device from at least one voice agent and to tag additional information to the at least one control command, and a command executor configured to, in response to the command tagger receiving a plurality of control commands, integrate the plurality of control commands based on additional information tagged to each of the plurality of control commands and to control the electronic device based on a result of the integration.
    Type: Grant
    Filed: September 20, 2016
    Date of Patent: March 31, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Joo Hyuk Jeon, Kyoung Gu Woo
  • Patent number: 10582355
    Abstract: In general, the subject matter described in this specification can be embodied in methods, systems, and program products for receiving a voice query at a mobile computing device and generating data that represents content of the voice query. The data is provided to a server system. A textual query that has been determined by a speech recognizer at the server system to be a textual form of at least part of the data is received at the mobile computing device. The textual query is determined to include a carrier phrase of one or more words that is reserved by a first third-party application program installed on the computing device. The first third-party application is selected, from a group of one or more third-party applications, to receive all or a part of the textual query. All or a part of the textual query is provided to the selected first application program.
    Type: Grant
    Filed: January 24, 2018
    Date of Patent: March 3, 2020
    Assignee: Google LLC
    Inventors: Michael J. LeBeau, John Nicholas Jitkoff, William J. Byrne
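The routing described above can be sketched simply: third-party applications register carrier phrases, and a transcribed query beginning with a registered phrase is handed (minus the phrase) to the owning application. The phrases and app names below are made up for illustration.

```python
# Minimal sketch of carrier-phrase routing of a transcribed voice query.
CARRIER_PHRASES = {
    "note to self": "NotesApp",   # hypothetical registrations
    "navigate to": "MapsApp",
}

def route_query(textual_query):
    """Return (app, remainder) for a matching carrier phrase, else (None, query)."""
    for phrase, app in CARRIER_PHRASES.items():
        if textual_query.startswith(phrase):
            return app, textual_query[len(phrase):].strip()
    return None, textual_query

print(route_query("navigate to 123 main street"))
```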
  • Patent number: 10580413
    Abstract: Embodiments of the disclosure disclose a method and apparatus for outputting information. A specific embodiment of the method includes: receiving voice information and analyzing it to generate voiceprint information; matching the voiceprint information with at least one piece of pre-stored voiceprint information; outputting, in response to determining that the voiceprint information fails to match any piece of the pre-stored voiceprint information, a voice questioning message for determining whether to add a new user, and receiving a voice reply message returned from a user based on the questioning message; and outputting, in response to determining that the voice reply message instructs adding the new user, a voice prompt message prompting the user to bind an account. The embodiment improves the flexibility of human-computer interaction.
    Type: Grant
    Filed: September 17, 2018
    Date of Patent: March 3, 2020
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Zhongqi Zhang, Tian Wang
  • Patent number: 10565986
    Abstract: The present disclosure relates to processing domain-specific natural language commands. An example method generally includes receiving a natural language command. A command processor compares the received natural language command to a corpus of known commands to identify a probable matching command in the corpus of known commands to the received natural language command. The corpus of known commands comprises a plurality of domain-specific commands, each of which is mapped to a domain-specific action. Based on the comparison, the command processor identifies the domain-specific action associated with the probable matching command to perform in response to the received command and executes the identified domain-specific action.
    Type: Grant
    Filed: July 20, 2017
    Date of Patent: February 18, 2020
    Assignee: INTUIT INC.
    Inventors: Prateek Kakirwar, Avinash Thekkumpat, Jeffrey Chen
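The matching step above can be sketched as comparing the received command against a corpus of known domain-specific commands and returning the action mapped to the best match. Jaccard token overlap is an assumption (the abstract does not name a similarity measure), and the commands and action names are invented.

```python
# Illustrative sketch: match a natural language command to a known
# domain-specific command and return its mapped action.
KNOWN_COMMANDS = {
    "show my latest invoices": "list_invoices",      # hypothetical corpus
    "create a new expense report": "create_expense",
}

def jaccard(a, b):
    """Token-overlap similarity between two commands."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def match_action(command):
    best = max(KNOWN_COMMANDS, key=lambda known: jaccard(command, known))
    return KNOWN_COMMANDS[best]

print(match_action("show latest invoices"))
```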