Distance Patents (Class 704/238)
  • Patent number: 11915698
    Abstract: A system configured to improve track selection while performing audio type detection using sound source localization (SSL) data is provided. A device processes audio data representing sounds from multiple sound sources to determine SSL data that distinguishes between each of the sound sources. The system detects an acoustic event and performs SSL track selection to select the sound source that corresponds to the acoustic event based on input features. To improve SSL track selection, the system detects current conditions of the environment and determines adaptive weight values that vary based on the current conditions, such as a noise level of the environment, whether playback is detected, whether the device is located near one or more walls, etc. By adjusting the adaptive weight values, the system improves an accuracy of the SSL track selection by prioritizing the input features that are most predictive during the current conditions.
    Type: Grant
    Filed: September 29, 2021
    Date of Patent: February 27, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Borham Lee, Wai Chung Chu
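    A minimal sketch (not taken from the patent) of the adaptive-weight idea described in the abstract of 11915698: per-feature weights are chosen from detected conditions such as noise or playback, and the SSL track with the highest weighted feature score is selected. The feature names and weight values below are invented for illustration.
      # Hypothetical condition-dependent weighting for SSL track selection.
      def pick_weights(noisy: bool, playback: bool) -> dict:
          # Adaptive weights per input feature; values are made up for illustration.
          if playback:
              return {"energy": 0.2, "confidence": 0.5, "direction_stability": 0.3}
          if noisy:
              return {"energy": 0.1, "confidence": 0.6, "direction_stability": 0.3}
          return {"energy": 0.4, "confidence": 0.4, "direction_stability": 0.2}

      def select_track(tracks, noisy: bool, playback: bool) -> dict:
          # Each track carries one score per input feature; the best weighted sum wins.
          weights = pick_weights(noisy, playback)
          return max(tracks, key=lambda t: sum(weights[f] * t[f] for f in weights))

      tracks = [
          {"energy": 0.9, "confidence": 0.3, "direction_stability": 0.5},
          {"energy": 0.4, "confidence": 0.8, "direction_stability": 0.7},
      ]
      print(select_track(tracks, noisy=True, playback=False))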
  • Patent number: 11900059
    Abstract: Methods, apparatuses, systems, computing devices, computing entities, and/or the like are provided. An example method may include retrieving one or more record data elements associated with a client identifier; generating one or more encounter vectors based at least in part on the one or more record data elements; generating a client vector based at least in part on the one or more encounter vectors and a first natural language processing model; generating a prediction data element based at least in part on the client vector and a machine learning model; and performing at least one data operation based at least in part on the prediction data element.
    Type: Grant
    Filed: June 28, 2021
    Date of Patent: February 13, 2024
    Assignee: UnitedHealth Group Incorporated
    Inventor: Irfan Bulu
  • Patent number: 11829920
    Abstract: An intelligent prediction system includes one or more processors, one or more memory components, and machine-readable instructions that cause the intelligent prediction system to: receive text data comprising a plurality of speaker turn segments of a transcription of a conversation, each speaker turn segment of the plurality of speaker turn segments representative of a turn in the conversation, the plurality of speaker turn segments collectively representative of the conversation up to a point of time, generate a point in time bind probability based on a speaker turn segment bind probability of a speaker turn segment at the point in time and memory data associated with the plurality of segments up to the point in time, and generate a speaker turn segment impact score at the point in time by subtracting an immediately preceding point in time bind probability from the point in time bind probability.
    Type: Grant
    Filed: July 13, 2020
    Date of Patent: November 28, 2023
    Assignee: Allstate Insurance Company
    Inventors: Eric Pripstein, Garrett Fiddler
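    A minimal sketch, not from the patent, of the impact-score arithmetic in 11829920's abstract: the impact of a speaker turn is the point-in-time bind probability minus the immediately preceding one. Starting the running probability at 0.0 before the first turn is an assumption.
      # Hypothetical: impact of turn t = bind probability after turn t minus the one before it.
      def impact_scores(bind_probabilities):
          scores = []
          previous = 0.0  # assumed starting point before the first turn
          for current in bind_probabilities:
              scores.append(current - previous)  # subtract the preceding point-in-time probability
              previous = current
          return scores

      print(impact_scores([0.10, 0.35, 0.30, 0.80]))  # -> [0.10, 0.25, -0.05, 0.50]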
  • Patent number: 11790919
    Abstract: Described herein is a system for sentiment detection in audio data. The system is trained using acoustic information and lexical information to determine a sentiment corresponding to an utterance. In some cases when lexical information is not available, the system (trained on acoustic and lexical information) is configured to determine a sentiment using only acoustic information.
    Type: Grant
    Filed: May 10, 2022
    Date of Patent: October 17, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Gustavo Alfonso Aguilar Alas, Viktor Rozgic, Chao Wang
  • Patent number: 11769016
    Abstract: A method includes obtaining user input interaction data. The user input interaction data includes one or more user interaction input values respectively obtained from the corresponding one or more input devices. The user input interaction data includes a word combination. The method includes generating a user interaction-style indicator value corresponding to the word combination in the user input interaction data. The user interaction-style indicator value is a function of the word combination and a portion of the one or more user interaction input values. The method includes determining, using a semantic text analyzer, a semantic assessment of the word combination in the user input interaction data based on the user interaction-style indicator value and a natural language assessment of the word combination. The method includes generating a response to the user input interaction data according to the user interaction-style indicator value and the semantic assessment of the word combination.
    Type: Grant
    Filed: February 24, 2020
    Date of Patent: September 26, 2023
    Assignee: APPLE INC.
    Inventors: Barry-John Theobald, Nicholas Elia Apostoloff, Garrett Laws Weinberg, Russell Y. Webb, Katherine Elaine Metcalf
  • Patent number: 11763836
    Abstract: Disclosed is a hierarchical generated audio detection system, comprising an audio preprocessing module, a CQCC feature extraction module, a LFCC feature extraction module, a first-stage lightweight coarse-level detection model and a second-stage fine-level deep identification model; the audio preprocessing module preprocesses collected audio or video data to obtain an audio clip with a length not exceeding the limit; inputting the audio clip into CQCC feature extraction module and LFCC feature extraction module respectively to obtain CQCC feature and LFCC feature; inputting CQCC feature or LFCC feature into the first-stage lightweight coarse-level detection model for first-stage screening to screen out the first-stage real audio and the first-stage generated audio; inputting the CQCC feature or LFCC feature of the first-stage generated audio into the second-stage fine-level deep identification model to identify the second-stage real audio and the second-stage generated audio, and the second-stage generated au
    Type: Grant
    Filed: February 17, 2022
    Date of Patent: September 19, 2023
    Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Jianhua Tao, Zhengkun Tian, Jiangyan Yi
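    A minimal sketch, not from the patent, of the two-stage cascade described in 11763836's abstract: a lightweight coarse model screens every clip, and only clips it flags as possibly generated are passed to the heavier fine model. The threshold-based stand-in models below are placeholders, not the patent's trained models.
      # Hypothetical two-stage cascade over per-clip features (e.g. CQCC or LFCC).
      def detect(clips, coarse_model, fine_model):
          results = {}
          for clip_id, features in clips.items():
              if not coarse_model(features):      # stage 1: coarse screening
                  results[clip_id] = "real"
              else:                               # stage 2: fine-level identification
                  results[clip_id] = "generated" if fine_model(features) else "real"
          return results

      coarse = lambda f: f["spoof_score"] > 0.3   # toy stand-in for the coarse model
      fine = lambda f: f["spoof_score"] > 0.7     # toy stand-in for the fine model
      clips = {"a": {"spoof_score": 0.2}, "b": {"spoof_score": 0.5}, "c": {"spoof_score": 0.9}}
      print(detect(clips, coarse, fine))          # {'a': 'real', 'b': 'real', 'c': 'generated'}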
  • Patent number: 11735190
    Abstract: To generate substantially domain-invariant and speaker-discriminative features, embodiments may operate to extract features from input data based on a first set of parameters, generate outputs based on the extracted features and on a second set of parameters, and identify words represented by the input data based on the outputs, wherein the first set of parameters and the second set of parameters have been trained to minimize a network loss associated with the second set of parameters, wherein the first set of parameters has been trained to maximize the domain classification loss of a network comprising 1) an attention network to determine, based on a third set of parameters, relative importances of features extracted based on the first parameters to domain classification and 2) a domain classifier to classify a domain based on the extracted features, the relative importances, and a fourth set of parameters, and wherein the third set of parameters and the fourth set of parameters have been trained to minimize
    Type: Grant
    Filed: October 5, 2021
    Date of Patent: August 22, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Zhong Meng, Jinyu Li, Yifan Gong
  • Patent number: 11699430
    Abstract: A system and method for providing a text-to-speech output by receiving user audio data, determining a user region-specific pronunciation classification according to the audio data, determining text for a response to the user according to the audio data, identifying a portion of the text that is included in a region-specific pronunciation dictionary, and using a phoneme string from the dictionary, selected according to the user region-specific pronunciation classification, for that portion in a text-to-speech output to the user.
    Type: Grant
    Filed: April 30, 2021
    Date of Patent: July 11, 2023
    Assignee: International Business Machines Corporation
    Inventors: Andrew R. Freed, Vamshi Krishna Thotempudi, Sujatha B. Perepa
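    A minimal sketch, not from the patent, of the dictionary lookup described in 11699430's abstract: the phoneme string for a word is chosen from a region-specific pronunciation dictionary keyed by the user's region classification, with a fallback to a default pronunciation. The entries and phoneme notation below are invented.
      # Hypothetical region-specific pronunciation dictionary: (word, region) -> phoneme string.
      PRONUNCIATIONS = {
          ("tomato", "en-US"): "T AH M EY T OW",
          ("tomato", "en-GB"): "T AH M AA T OW",
      }

      def phonemes_for(word: str, region: str, default: str) -> str:
          # Fall back to the default phoneme string when no region-specific entry exists.
          return PRONUNCIATIONS.get((word, region), default)

      # A TTS front end would splice this phoneme string into its output for the word.
      print(phonemes_for("tomato", "en-GB", default="T AH M EY T OW"))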
  • Patent number: 11698967
    Abstract: A system for automated malicious software detection includes a computing device, the computing device configured to receive a software component, identify at least an element of software component metadata corresponding to the software component, determine a malicious quantifier as a function of the software component metadata, wherein determining the malicious quantifier further comprises obtaining a source repository, the source repository including at least an element of source metadata, and determining the malicious quantifier as a function of the at least an element of software component metadata and the at least an element of source repository metadata using a malicious machine-learning model, and transmit a notification as a function of the malicious quantifier and a predictive threshold.
    Type: Grant
    Filed: August 12, 2022
    Date of Patent: July 11, 2023
    Assignee: SOOS LLC
    Inventors: Joshua Holden Jennings, Timothy Paul Kenney
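    A minimal sketch, not from the patent, of the decision flow in 11698967's abstract: component metadata and source-repository metadata are combined into a malicious quantifier, and a notification is sent when it crosses a predictive threshold. The weighted sum stands in for the patent's machine-learning model, and every feature name and weight is invented.
      # Hypothetical malicious quantifier from component and repository metadata.
      def malicious_quantifier(component_meta: dict, repo_meta: dict) -> float:
          score = 0.0
          score += 0.4 * (1.0 if component_meta.get("typosquatting") else 0.0)
          score += 0.3 * (1.0 if repo_meta.get("recently_created") else 0.0)
          score += 0.3 * (1.0 - min(repo_meta.get("maintainer_count", 1), 5) / 5)
          return score

      def maybe_notify(component_meta, repo_meta, threshold=0.5):
          q = malicious_quantifier(component_meta, repo_meta)
          return f"ALERT: quantifier={q:.2f}" if q >= threshold else f"ok: quantifier={q:.2f}"

      print(maybe_notify({"typosquatting": True}, {"recently_created": True, "maintainer_count": 1}))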
  • Patent number: 11587558
    Abstract: A computer-implemented method includes generating an empirically derived acoustic confusability measure by processing example utterances and iterating from an initial estimate of the acoustic confusability measure to improve the measure. The method can further include using the acoustic confusability measure to selectively limit phrases to make recognizable by a speech recognition application.
    Type: Grant
    Filed: August 7, 2020
    Date of Patent: February 21, 2023
    Assignee: Promptu Systems Corporation
    Inventors: Harry Printz, Naren Chittar
  • Patent number: 11562123
    Abstract: A method and an apparatus for fusing position information, and a non-transitory computer-readable recording medium are provided. In the method, words of an input sentence are segmented to obtain a first sequence of words in the input sentence, and absolute position information of the words in the first sequence is generated. Then, subwords of the words in the first sequence are segmented to obtain a second sequence including subwords, and position information of the subwords in the second sequence is generated based on the absolute position information of the words in the first sequence to which the respective subwords belong. Then, the position information of the subwords in the second sequence is fused into a self-attention model to perform model training or model prediction.
    Type: Grant
    Filed: March 29, 2021
    Date of Patent: January 24, 2023
    Assignee: Ricoh Company, Ltd.
    Inventors: Yixuan Tong, Yongwei Zhang, Bin Dong, Shanshan Jiang, Jiashi Zhang
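    A minimal sketch, not from the patent, of the position propagation in 11562123's abstract: each subword inherits the absolute position of the word it was split from, so a self-attention model can be fed word-level positions for subword tokens. The toy splitter stands in for a real subword tokenizer.
      # Hypothetical: subwords keep the absolute position of their parent word.
      def subword_positions(words, split):
          subwords, positions = [], []
          for word_pos, word in enumerate(words):   # absolute position in the first sequence
              for piece in split(word):             # subword segmentation into the second sequence
                  subwords.append(piece)
                  positions.append(word_pos)        # subword inherits its word's position
          return subwords, positions

      toy_split = lambda w: [w[:3], w[3:]] if len(w) > 3 else [w]
      print(subword_positions(["machine", "translation", "is", "fun"], toy_split))
      # (['mac', 'hine', 'tra', 'nslation', 'is', 'fun'], [0, 0, 1, 1, 2, 3])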
  • Patent number: 11550783
    Abstract: Provided is a system and method for detecting a SQL command from a natural language input using neural networks which works even when the SQL command has not been seen before by the neural networks. In one example, the method may include storing a candidate set comprising structured query language (SQL) templates paired with respective text values, reducing, via a first predictive network, the candidate set into a subset of candidates based on a natural language input and the text values included in the candidate set, selecting, via a second predictive network, an SQL template from among the subset of candidates based on the natural language input and text values included in the subset of candidates, and determining a SQL command that corresponds to the natural language input based on the selected SQL template and content from the natural language input.
    Type: Grant
    Filed: December 5, 2019
    Date of Patent: January 10, 2023
    Assignee: SAP SE
    Inventors: Dongjun Lee, Jaesik Yoon
  • Patent number: 11527244
    Abstract: A dialogue processing apparatus includes: a speech input device configured to receive a speech signal of a user; a first buffer configured to store the received speech signal therein; an output device; and a controller. The controller is configured to: detect an utterance end time point on the basis of the stored speech signal; generate a second speech recognition result corresponding to a speech signal after the utterance end time point on the basis of whether an intention of the user is to be identified from a first speech recognition result corresponding to a speech signal before the utterance end time point; and control the output device to output a response corresponding to the intention of the user determined on the basis of at least one of the first speech recognition result or the second speech recognition result.
    Type: Grant
    Filed: February 12, 2020
    Date of Patent: December 13, 2022
    Assignees: HYUNDAI MOTOR COMPANY, KIA MOTORS CORPORATION
    Inventors: Jeong-Eom Lee, Youngmin Park, Seona Kim
  • Patent number: 11436330
    Abstract: A system for automated malicious software detection includes a computing device, the computing device configured to receive a software component, identify at least an element of software component metadata corresponding to the software component, determine a malicious quantifier as a function of the software component metadata, wherein determining the malicious quantifier further comprises obtaining a source repository, the source repository including at least an element of source metadata, and determining the malicious quantifier as a function of the at least an element of software component metadata and the at least an element of source repository metadata using a malicious machine-learning model, and transmit a notification as a function of the malicious quantifier and a predictive threshold.
    Type: Grant
    Filed: August 30, 2021
    Date of Patent: September 6, 2022
    Assignee: SOOS LLC
    Inventors: Joshua Holden Jennings, Timothy Paul Kenney
  • Patent number: 11355100
    Abstract: A method for processing information includes that: a current audio is acquired, and a current text corresponding to the current audio is acquired; feature extraction is performed on the current audio through a speech feature extraction portion in a semantic analysis model, to obtain a speech feature of the current audio; feature extraction is performed on the current text through a text feature extraction portion in the semantic analysis model, to obtain a text feature of the current text; semantic classification is performed on the speech feature and the text feature through a classification portion in the semantic analysis model, to obtain a classification result; and recognition of the current audio is rejected in response to the classification result indicating that the current audio is to be rejected for recognition.
    Type: Grant
    Filed: September 26, 2020
    Date of Patent: June 7, 2022
    Assignee: Beijing Xiaomi Pinecone Electronics Co., Ltd.
    Inventors: Zelun Wu, Shiqi Cui, Qiaojing Xie, Chen Wei, Bin Qin, Gang Wang
  • Patent number: 11335337
    Abstract: An information processing apparatus includes a memory; and a processor coupled to the memory and the processor configured to: generate phoneme string information in which a plurality of phonemes included in voice information is arranged in time series, based on a recognition result of the phonemes for the voice information; and learn parameters of a network such that when the phoneme string information is input to the network, output information that is output from the network approaches correct answer information that indicates whether a predetermined conversation situation is included in the voice information that corresponds to the phoneme string information.
    Type: Grant
    Filed: November 6, 2019
    Date of Patent: May 17, 2022
    Assignee: FUJITSU LIMITED
    Inventors: Shoji Hayakawa, Shouji Harada
  • Patent number: 11322019
    Abstract: Techniques for determining a direction of arrival of an emergency are discussed. A plurality of audio sensors of a vehicle can receive audio data associated with the vehicle. An audio sensor pair can be selected from the plurality of audio sensors to generate audio data representing sound in an environment of the vehicle. An angular spectrum associated with the audio sensor pair can be determined based on the audio data. A feature associated with the audio data can be determined based on the angular spectrum and/or the audio data itself. A direction of arrival (DoA) value associated with the sound may be determined based on the feature using a machine learned model. An emergency sound (e.g., a siren) can be detected in the audio data and a direction associated with the emergency relative to the vehicle can be determined based on the feature and the DoA value.
    Type: Grant
    Filed: October 23, 2019
    Date of Patent: May 3, 2022
    Assignee: Zoox, Inc.
    Inventors: Nam Gook Cho, Subasingha Shaminda Subasingha, Jonathan Tyler Dowdall, Venkata Subrahmanyam Chandra Sekhar Chebiyyam
  • Patent number: 11081115
    Abstract: A biometric is formed for at least one enrolled speaker by: obtaining a sample of speech of the enrolled speaker; obtaining a measure of a fundamental frequency of the speech of the enrolled speaker in each of a plurality of speech frames; and forming a first distribution function of the fundamental frequency of the speech of the enrolled speaker. Subsequently, for a speaker to be recognised, a sample of speech of the speaker to be recognised is obtained. Then, a measure of a fundamental frequency of the speech of the speaker to be recognised is obtained in each of a plurality of speech frames. A second distribution function of the fundamental frequency of the speech of the speaker to be recognised is formed, the second distribution function and the first distribution function are compared, and it is determined whether the speaker to be recognised is the enrolled speaker based on a result of comparing the second distribution function and the first distribution function.
    Type: Grant
    Filed: August 30, 2019
    Date of Patent: August 3, 2021
    Assignee: Cirrus Logic, Inc.
    Inventor: John Paul Lesso
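    A minimal sketch, not from the patent, of the comparison in 11081115's abstract: a normalised distribution of per-frame fundamental frequency is formed for the enrolled speaker and for the test speaker, and the two distributions are compared. The bin layout, the total-variation distance, and the decision threshold are all assumptions.
      import numpy as np

      # Hypothetical F0-distribution comparison for speaker recognition.
      def f0_distribution(f0_hz, bins):
          hist, _ = np.histogram(np.asarray(f0_hz), bins=bins)
          return hist / max(hist.sum(), 1)              # normalised histogram

      def same_speaker(enrolled_f0, test_f0, threshold=0.25):
          bins = np.linspace(50, 400, 36)               # 10 Hz bins over a typical F0 range
          d_enrolled = f0_distribution(enrolled_f0, bins)
          d_test = f0_distribution(test_f0, bins)
          distance = 0.5 * np.abs(d_enrolled - d_test).sum()   # total-variation distance
          return distance < threshold

      print(same_speaker([110, 115, 120, 118, 112], [112, 117, 119, 114, 111]))   # True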
  • Patent number: 11068659
    Abstract: Described herein are methods, systems and computer program products for determining a decodability index for one or more words. One of the methods of determining a decodability index for one or more words comprises receiving one or more words for analysis; analyzing the received one or more words using a plurality of effects; and assigning a decodability index to the received one or more words based on the analysis of the received one or more words using the plurality of effects, wherein the assigned decodability index indicates an ability of a person to pronounce or sound out the one or more words.
    Type: Grant
    Filed: May 23, 2018
    Date of Patent: July 20, 2021
    Assignee: Vanderbilt University
    Inventors: Laura Elizabeth Cutting, Neena Marie Saha, Ted Stephen Hasselbring
  • Patent number: 11062621
    Abstract: Techniques are disclosed relating to determining phonetic similarity using machine learning. The techniques include accessing training data that includes a first set of words of a native language and a second set of words corresponding to verified transliterations of the first set of words from the native language to a target language. The techniques further include generating a set of new transliterations of the first set of words from the native language to the target language and storing comparison information based on a comparison between words from the second set of words and words from the set of new transliterations of the first set of words. Finally, a similarity score is determined between a first word of the target language and a second word of the target language based on the comparison information.
    Type: Grant
    Filed: December 26, 2018
    Date of Patent: July 13, 2021
    Assignee: PayPal, Inc.
    Inventors: Rushik Upadhyay, Dhamodharan Lakshmipathy, Nandhini Ramesh, Aditya Kaulagi
  • Patent number: 11055458
    Abstract: Verification for a design can include, for a covergroup corresponding to a variable of the design, generating a state coverage data structure specifying a plurality of transition bins. Each transition bin can include a sequence. Each sequence can specify states of the variable to be traversed in order during simulation of the design. Verification can include generating a state sequence table configured to use state values as keys and one or more of the sequences as data for the respective keys, and during simulation of the design, maintaining a sequence list specifying each sequence that is running based on sample values of the variable. Hit counts for the transition bins can be updated during the simulation.
    Type: Grant
    Filed: June 11, 2020
    Date of Patent: July 6, 2021
    Assignee: Xilinx, Inc.
    Inventors: Aparna Suresh, Tapodyuti Mandal, Vinayak Thonda
  • Patent number: 10963063
    Abstract: There is provided an information processing apparatus, an information processing method, and a program, the information processing apparatus including: an acquisition unit configured to acquire a recognition accuracy related to a recognition based on sensing data; and a control unit configured to make a first user operation recognizable when the recognition accuracy is included in a first range, and make a second user operation recognizable when the recognition accuracy is included in a second range different from the first range, the second user operation being different from the first user operation and related to the first user operation.
    Type: Grant
    Filed: October 26, 2016
    Date of Patent: March 30, 2021
    Assignee: SONY CORPORATION
    Inventors: Ryo Fukazawa, Kuniaki Torii, Takahiro Okayama
  • Patent number: 10861450
    Abstract: A method for managing voice-based interaction in an Internet of things (IoT) network system is provided. The method includes identifying a first voice utterance from a first IoT device among a plurality of IoT devices in the IoT network system. Further, the method includes identifying at least one second voice utterance from at least one second IoT device among the plurality of IoT devices in the IoT network system. Further, the method includes determining a voice command by combining the first voice utterance and the at least one second voice utterance. Furthermore, the method includes triggering at least one IoT device among the plurality of IoT devices in the IoT network system to perform at least one action corresponding to the voice command.
    Type: Grant
    Filed: February 9, 2018
    Date of Patent: December 8, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Vijaya Kumar Tukka, Deepraj Prabhakar Patkar, Rakesh Kumar, Sujay Mohan, Vinay Kumar
  • Patent number: 10854192
    Abstract: An automatic speech recognition (ASR) system detects an endpoint of an utterance based on a domain of the utterance. The ASR system processes a first portion of the utterance to determine the domain and then determines an endpoint of the remainder of the utterance depending on the domain.
    Type: Grant
    Filed: March 30, 2016
    Date of Patent: December 1, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Roland Maas, Ariya Rastrow, Rohit Prasad
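    A minimal sketch, not from the patent, of domain-dependent endpointing as described in 10854192's abstract: once the domain is known from the first portion of the utterance, the amount of trailing silence needed to declare an endpoint depends on that domain. The domain names and silence values are invented.
      # Hypothetical per-domain endpointing thresholds (milliseconds of trailing silence).
      ENDPOINT_SILENCE_MS = {
          "shopping_list": 1200,   # users pause between list items, so wait longer
          "weather": 500,
          "default": 700,
      }

      def utterance_ended(domain: str, trailing_silence_ms: int) -> bool:
          limit = ENDPOINT_SILENCE_MS.get(domain, ENDPOINT_SILENCE_MS["default"])
          return trailing_silence_ms >= limit

      print(utterance_ended("shopping_list", 800))   # False: keep listening
      print(utterance_ended("weather", 800))         # True: endpoint the utterance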
  • Patent number: 10839789
    Abstract: An acoustic coprocessor is provided. The acoustic coprocessor may include an interface for receiving at least one feature vector and a calculating apparatus for calculating distances indicating the similarity between the at least one feature vector and at least one acoustic state of an acoustic model read from an acoustic model memory. The acoustic coprocessor may also include an interface for sending at least one distance calculated by the calculating apparatus.
    Type: Grant
    Filed: August 8, 2018
    Date of Patent: November 17, 2020
    Assignee: Zentian Limited
    Inventors: Guy Larri, Mark Catchpole, Damian Kelly Harris-Dowsett, Timothy Brian Reynolds
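    A minimal sketch, not from the patent, of the kind of distance calculation 10839789's abstract describes: one incoming feature vector is scored against every acoustic-state representative, producing one distance per state. Euclidean distance and the toy state means below are assumptions standing in for whatever similarity measure the coprocessor actually implements.
      import numpy as np

      # Hypothetical: distance from a feature vector to each acoustic state's mean.
      def state_distances(feature, state_means):
          return np.linalg.norm(state_means - feature, axis=1)

      state_means = np.array([[1.0, 0.0, 2.0],
                              [0.5, 0.5, 1.5],
                              [3.0, 1.0, 0.0]])
      feature = np.array([0.9, 0.1, 1.9])
      print(state_distances(feature, state_means))   # one distance per acoustic state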
  • Patent number: 10818298
    Abstract: A method of audio processing comprises receiving an audio signal. A plurality of framed versions of the received audio signal are formed, each of the framed versions having a respective frame start position. One of the plurality of framed versions of the received audio signal is selected. The selected one of the plurality of framed versions of the received audio signal is used in a subsequent process.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: October 27, 2020
    Assignee: Cirrus Logic, Inc.
    Inventors: John Paul Lesso, Gordon Richard McLeod
  • Patent number: 10810452
    Abstract: Methods, apparatuses and systems directed to pattern identification and pattern recognition. In some particular implementations, the invention provides a flexible pattern recognition platform including pattern recognition engines that can be dynamically adjusted to implement specific pattern recognition configurations for individual pattern recognition applications. In some implementations, the present invention also provides for a partition configuration where knowledge elements can be grouped and pattern recognition operations can be individually configured and arranged to allow for multi-level pattern recognition schemes.
    Type: Grant
    Filed: May 19, 2017
    Date of Patent: October 20, 2020
    Assignee: Rokio, Inc.
    Inventor: Jeffrey Brian Adams
  • Patent number: 10770065
    Abstract: A speech recognition method and a speech recognition apparatus that pre-download a speech recognition model predicted to be used and use that model for speech recognition are provided. The speech recognition method, performed by the speech recognition apparatus, includes determining a speech recognition model based on user information, downloading the speech recognition model, performing speech recognition based on the speech recognition model, and outputting a result of performing the speech recognition.
    Type: Grant
    Filed: December 14, 2017
    Date of Patent: September 8, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sang-yoon Kim, Sung-soo Kim, Il-hwan Kim, Kyung-min Lee, Nam-hoon Kim, Jong-youb Ryu, Jae-won Lee
  • Patent number: 10672386
    Abstract: Voice command recognition with dialect translation is disclosed. User voice input can be translated to a standard voice pattern using a dialect translation unit. A control command can then be generated based on the translated user voice input. In certain embodiments, the voice command recognition system with dialect translation can be implemented in a driving apparatus. In those embodiments, various control commands to control the driving apparatus can be generated by a user with a dialect input. The generated voice control commands for the driving apparatus can include starting the driving apparatus, turning on/off A/C unit, controlling the A/C unit, turning on/off entertainment system, controlling the entertainment system, turning on/off certain safety features, turning on/off certain driving features, adjusting seat, adjusting steering wheel, taking a picture of surroundings and/or any other control commands that can control various functions of the driving apparatus.
    Type: Grant
    Filed: March 25, 2019
    Date of Patent: June 2, 2020
    Assignee: Thunder Power New Energy Vehicle Development Company Limited
    Inventor: Yong-Syuan Chen
  • Patent number: 10643032
    Abstract: An output sentence generation apparatus for automatically generating one output sentence from a plurality of input keywords includes a candidate sentence generator incorporating a learned neural network configured to take in the plurality of keywords and generate a plurality of candidate sentences each including at least some of the plurality of keywords, and an evaluation outputter configured to calculate an overlap ratio for each of the plurality of candidate sentences generated by the candidate sentence generator and increase an evaluation of the candidate sentence with a small overlap ratio to thereby determine an output sentence from the plurality of candidate sentences. The overlap ratio is the number of occurrences of an overlapping word with respect to the number of occurrences of all words included in the corresponding candidate sentence.
    Type: Grant
    Filed: February 2, 2018
    Date of Patent: May 5, 2020
    Assignees: TOYOTA JIDOSHA KABUSHIKI KAISHA, Tokyo Metropolitan University
    Inventors: Mamoru Komachi, Shin Kanouchi, Tomoya Ogata, Tomoya Takatani
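    A minimal sketch, not from the patent, of the overlap ratio in 10643032's abstract, reading "overlapping word" as a word that occurs more than once within the candidate sentence (an interpretation, not confirmed by the abstract): candidates with a smaller ratio are preferred.
      from collections import Counter

      # Hypothetical overlap ratio: repeated-word occurrences over all word occurrences.
      def overlap_ratio(candidate: str) -> float:
          counts = Counter(candidate.lower().split())
          total = sum(counts.values())
          overlapping = sum(c for c in counts.values() if c > 1)
          return overlapping / total if total else 0.0

      def best_candidate(candidates):
          return min(candidates, key=overlap_ratio)   # lowest overlap ratio wins

      print(best_candidate(["the dog chased the dog", "the dog chased a cat"]))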
  • Patent number: 10601599
    Abstract: An audio processing device comprises audio input circuitry operable to receive audio input signals and to process the audio input signals to generate audio samples at a first rate. The audio processing device further comprises a first trigger engine operable to detect a keyword within the audio samples. Also, the audio processing device comprises a delay buffer operable to continuously receive and store the audio samples. The delay buffer is further operable to transfer the audio samples that are stored within the delay buffer to a host across a data bus at a second rate, which is faster than the first rate. Further, the delay buffer is operable to transfer the audio samples that are stored within the delay buffer to the host at the first rate, after the stored audio samples are transmitted.
    Type: Grant
    Filed: December 29, 2017
    Date of Patent: March 24, 2020
    Assignee: SYNAPTICS INCORPORATED
    Inventors: Manish J. Patel, Stanton Renna, Vamshi Duligunti, Johnny Wang, Huanqi Chen
  • Patent number: 10572812
    Abstract: According to an embodiment, a detection apparatus detects a partial series similar to a search pattern from a parameter series including a sequence of parameters. The apparatus includes a local score acquirer, a difference score calculator, an accumulative score calculator, and a determiner. The local score acquirer is configured to acquire a local score representing a likelihood of the parameter in the search pattern for each of the parameters. The difference score calculator is configured to calculate a difference score by subtracting a threshold from the local score for each of the parameters. The accumulative score calculator is configured to calculate an accumulative score by accumulating the difference scores. The determiner is configured to compare the accumulative score with a reference value to determine whether the partial series is similar to the search pattern.
    Type: Grant
    Filed: March 16, 2016
    Date of Patent: February 25, 2020
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventor: Yu Nasu
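    A minimal sketch, not from the patent, of the scoring chain in 10572812's abstract: subtract a threshold from each local score, accumulate the differences, and flag the partial series as similar to the search pattern once the accumulated total reaches a reference value. The threshold and reference values below are invented.
      # Hypothetical accumulative difference-score detection.
      def detect_pattern(local_scores, threshold=0.5, reference=1.0) -> bool:
          accumulated = 0.0
          for score in local_scores:
              accumulated += score - threshold    # difference score for this parameter
              if accumulated >= reference:        # compare against the reference value
                  return True
          return False

      print(detect_pattern([0.9, 0.8, 0.9, 0.7]))   # True: local scores stay above threshold
      print(detect_pattern([0.3, 0.4, 0.6, 0.2]))   # False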
  • Patent number: 10553240
    Abstract: A conversation evaluation device includes a storage medium and a processor. The storage medium stores a program configured to evaluate a conversation that includes first voice and second voice as a response to the first voice. The processor executes the program. The program causes a processor to acquire first pitch information related to the first voice. The program also causes the processor to acquire second pitch information related to the second voice. The program also causes the processor to evaluate comfortableness of the second voice based on the acquired first and second pitch information.
    Type: Grant
    Filed: January 29, 2019
    Date of Patent: February 4, 2020
    Assignee: Yamaha Corporation
    Inventor: Hiraku Kayama
  • Patent number: 10397082
    Abstract: The technology disclosed relates to refined survey of Internet infrastructures. A pattern of measurements is disclosed that can improve data collection by increasing the number of measurements per survey session according to a function described in areas that have few measurements, and decreasing the average number of measurements per session in heavily measured areas. These are new problems that arise from implementation of technology developed by these inventors and their colleagues.
    Type: Grant
    Filed: August 7, 2014
    Date of Patent: August 27, 2019
    Assignee: Citrix Systems, Inc.
    Inventors: Martin Kagan, Jacob Wan
  • Patent number: 10397402
    Abstract: A device for determining a behavioral deviation for an individual. The device includes a memory and a processor. The memory may store instructions. The processor may be coupled to the memory. When the processor executes the instructions, the processor may: generate a profile for a first individual using data associated with an identifier for the first individual, wherein the profile comprises behavioral information that matches a characteristic of the data; receive, from a first electronic device, a first multimedia item representing a first communication by the first individual; determine that a characteristic of the first multimedia item does not match the characteristic of the data; and send a first notification to a second electronic device indicating that a behavior of the first individual deviated from the behavioral information of the profile.
    Type: Grant
    Filed: May 1, 2017
    Date of Patent: August 27, 2019
    Inventor: Eric Wold
  • Patent number: 10388275
    Abstract: The present invention relates to a method and apparatus for improving spontaneous speech recognition performance. The present invention is directed to providing a method and apparatus for improving spontaneous speech recognition performance by extracting a phase feature as well as a magnitude feature of a voice signal transformed to the frequency domain, detecting a syllabic nucleus on the basis of a deep neural network using a multi-frame output, determining a speaking rate by dividing the number of syllabic nuclei by a voice section interval detected by a voice detector, calculating a length variation or an overlap factor according to the speaking rate, and performing cepstrum length normalization or time scale modification with a voice length appropriate for an acoustic model.
    Type: Grant
    Filed: September 7, 2017
    Date of Patent: August 20, 2019
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Hyun Woo Kim, Ho Young Jung, Jeon Gue Park, Yun Keun Lee
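    A minimal sketch, not from the patent, of the speaking-rate arithmetic in 10388275's abstract: the rate is the number of detected syllabic nuclei divided by the detected voice-section interval, and a length factor derived from it would drive normalization or time-scale modification. The target rate of 4.0 syllables per second is an assumption.
      # Hypothetical speaking-rate estimate and length factor.
      def speaking_rate(num_syllabic_nuclei: int, voiced_interval_s: float) -> float:
          return num_syllabic_nuclei / voiced_interval_s

      def length_factor(rate: float, target_rate: float = 4.0) -> float:
          # >1 suggests stretching fast speech, <1 suggests compressing slow speech.
          return rate / target_rate

      rate = speaking_rate(num_syllabic_nuclei=22, voiced_interval_s=4.0)   # 5.5 syllables/s
      print(rate, length_factor(rate))                                      # 5.5 1.375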
  • Patent number: 10318633
    Abstract: An approach is provided that receives a word that belongs to a first natural language and retrieves a first set of complexity data pertaining to the word in the first natural language. The approach translates the word to one or more translated words, with each of the translated words corresponding to one or more second natural languages. The approach then retrieves sets of complexity data, with each set of complexity data corresponding to a different translated word. The approach determines a complexity of the word in the first natural language based on an analysis of the first and second sets of complexity data.
    Type: Grant
    Filed: January 2, 2017
    Date of Patent: June 11, 2019
    Assignee: International Business Machines Corporation
    Inventors: Bharath Dandala, Ravi S. Sinha
  • Patent number: 10318634
    Abstract: An approach is provided that returns a simplified set of text to a user of a natural language processing (NLP) system with the simplified set of text having a complexity appropriate to the reading level of the user. The approach receives a word that belongs to a first natural language and retrieves a first set of complexity data pertaining to the word in the first natural language. The approach translates the word to one or more translated words, with each of the translated words corresponding to one or more second natural languages. The approach then retrieves sets of complexity data, with each set of complexity data corresponding to a different translated word. The approach determines a complexity of the word in the first natural language based on an analysis of the first and second sets of complexity data.
    Type: Grant
    Filed: January 2, 2017
    Date of Patent: June 11, 2019
    Assignee: International Business Machines Corporation
    Inventors: Bharath Dandala, Ravi S. Sinha
  • Patent number: 10264317
    Abstract: In various embodiments, a method of content access device geolocation verification includes determining local geolocation information, identifying proximate content access devices that are associated with a content delivery network provider, and transmitting the information to a content delivery network provider device that takes an action if a location of a content access device mismatches a recorded location. In some embodiments, a content delivery network provider device receives local geolocation information and data regarding identified proximate content access devices from an electronic device, analyzes the information to determine whether a location of a content access device mismatches a recorded location, and, if the location of the content access device mismatches the recorded location, takes an action.
    Type: Grant
    Filed: September 28, 2016
    Date of Patent: April 16, 2019
    Assignee: T-MOBILE USA, INC.
    Inventor: Jeffrey Binder
  • Patent number: 10199037
    Abstract: A reduced latency system for automatic speech recognition (ASR). The system can use certain feature values describing the state of ASR processing to estimate how far a lowest scoring node for an audio frame is from a potential node likely to be part of the Viterbi path. The system can then adjust its beam width in a manner likely to encompass the node likely to be on the Viterbi path, thus pruning unnecessary nodes and reducing latency. The feature values and estimated distances may be based on a set of training data, where the system identifies specific nodes on the Viterbi path and determines what feature values correspond to what desired beam widths. Trained models or other data may be created at training and used at runtime to dynamically adjust the beam width, as well as other settings such as a threshold number of active nodes.
    Type: Grant
    Filed: June 29, 2016
    Date of Patent: February 5, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Denis Sergeyevich Filimonov, Yuan Shangguan
  • Patent number: 10171964
    Abstract: A location of a first mobile device associated with a first user is determined, and a location of a second mobile device associated with a second user is determined. A relationship between the first user and the second user is determined, and a proximity of the first mobile device relative to the second mobile device is determined. A location-oriented data service is provided to at least one of the first mobile device and the second mobile device.
    Type: Grant
    Filed: August 7, 2018
    Date of Patent: January 1, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Edith H. Stern, Patrick J. O'Sullivan, Robert C. Weir, Barry E. Willner
  • Patent number: 10134060
    Abstract: The system and method described herein may use various natural language models to deliver targeted advertisements and/or provide natural language processing based on advertisements. In one implementation, an advertisement associated with a product or service may be provided for presentation to a user. A natural language utterance of the user may be received. The natural language utterance may be interpreted based on the advertisement and, responsive to the existence of a pronoun in the natural language utterance, a determination of whether the pronoun refers to one or more of the product or service or a provider of the product or service may be effectuated.
    Type: Grant
    Filed: July 29, 2016
    Date of Patent: November 20, 2018
    Assignee: VB Assets, LLC
    Inventors: Tom Freeman, Mike Kennewick
  • Patent number: 10115393
    Abstract: A computer-readable speaker-adapted speech engine acoustic model can be generated. The generating of the acoustic model can include performing speaker-specific adaptation of one or more layers of the model to produce one or more adaptive layers comprising layer weights, with the speaker-specific adaptation comprising a data size reduction technique. The data size reduction technique can be threshold value adaptation, corner area adaptation, diagonal-based quantization, adaptive matrix reduction, or a combination of these reduction techniques. The speaker-adapted speech engine model can be accessed and used in performing speech recognition on computer-readable audio speech input via a computerized speech recognition engine.
    Type: Grant
    Filed: October 31, 2016
    Date of Patent: October 30, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kshitiz Kumar, Chaojun Liu, Yifan Gong
  • Patent number: 10089989
    Abstract: Aspects of the present disclosure involve a method for a voice trigger device that can be used to interrupt an externally connected system. The current disclosure also presents the architecture for the voice trigger device used for searching and matching an audio signature with a reference signature. In one embodiment a reverse matching mechanism is performed. In another embodiment, the reverse search and match operation is performed using an exponential normalization technique.
    Type: Grant
    Filed: May 6, 2016
    Date of Patent: October 2, 2018
    Assignee: SEMICONDUCTOR COMPONENTS INDUSTRIES, LLC
    Inventors: Mark Melvin, Robert L. Brennan
  • Patent number: 10084758
    Abstract: A method, system, and recording medium for communication comparison including encrypting a first communication and a second communication, determining a list of frequencies and intensities based on the first communication and the second communication, projecting light based on the list of frequencies and intensities of the first communication onto an object, reading the frequencies and intensities of the light based on the first communication from the object, and comparing the light read in the reading with the list of frequencies and intensities of the second communication to calculate a semantic overlap between the frequencies and intensities of the first communication and the second communication.
    Type: Grant
    Filed: October 28, 2015
    Date of Patent: September 25, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Nicholas Stephen Kersting
  • Patent number: 10062377
    Abstract: A speech recognition circuit comprising a circuit for providing state identifiers which identify states corresponding to nodes or groups of adjacent nodes in a lexical tree, and for providing scores corresponding to said state identifiers, the lexical tree comprising a model of words.
    Type: Grant
    Filed: June 30, 2015
    Date of Patent: August 28, 2018
    Assignee: Zentian Limited
    Inventors: Guy Larri, Mark Catchpole, Damian Kelly Harris-Dowsett, Timothy Brian Reynolds
  • Patent number: 10051445
    Abstract: A location of a first mobile device associated with a first user is determined, and a location of a second mobile device associated with a second user is determined. A relationship between the first user and the second user is determined, and a proximity of the first mobile device relative to the second mobile device is determined. A location-oriented data service is provided to at least one of the first mobile device and the second mobile device.
    Type: Grant
    Filed: October 24, 2016
    Date of Patent: August 14, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Edith H. Stern, Patrick J. O'Sullivan, Robert C. Weir, Barry E. Willner
  • Patent number: 9990926
    Abstract: Techniques for passive enrollment of a user in a speaker identification (ID) device are provided. One technique includes: parsing, by a processor of the speaker ID device, a speech sample, spoken by the user, into a keyword phrase sample and a command phrase sample; identifying, by a text-dependent speaker ID circuit of the speaker ID device, the user as the speaker of the keyword phrase sample; associating the command phrase sample with the identified user; determining if the command phrase sample in conjunction with one or more earlier command phrase samples associated with the user is sufficient command phrase sampling to enroll the user in a text-independent speaker ID circuit of the speaker ID device; and enrolling the user in the text-independent speaker ID circuit using the command phrase samples associated with the user after determining there is sufficient command phrase sampling to enroll the user.
    Type: Grant
    Filed: March 13, 2017
    Date of Patent: June 5, 2018
    Assignee: INTEL CORPORATION
    Inventor: David Pearce
  • Patent number: 9959850
    Abstract: Disclosed, inter alia, is a method comprising: determining a divergence measure between a statistical distribution of audio features of a first audio track and a statistical distribution of audio features of at least one further audio track; determining a divergence measure threshold value from at least the divergence measure between the statistical distribution of audio features of a first audio track and the statistical distribution of audio features of the at least one further audio track; and comparing the divergence measure with the divergence measure threshold value.
    Type: Grant
    Filed: June 9, 2014
    Date of Patent: May 1, 2018
    Assignee: Nokia Technologies Oy
    Inventors: Antti Eronen, Jussi Leppänen
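    A minimal sketch, not from the patent, of the comparison in 9959850's abstract: each track's audio features are summarised by a one-dimensional Gaussian, a symmetric Kullback-Leibler divergence is computed between the first track and every other track, and the threshold is derived from those divergences (their mean here, which is an assumption).
      import numpy as np

      # Hypothetical symmetric KL divergence between two 1-D Gaussians.
      def symmetric_kl(mu1, var1, mu2, var2):
          kl12 = 0.5 * (var1 / var2 + (mu2 - mu1) ** 2 / var2 - 1 + np.log(var2 / var1))
          kl21 = 0.5 * (var2 / var1 + (mu1 - mu2) ** 2 / var1 - 1 + np.log(var1 / var2))
          return kl12 + kl21

      def compare_tracks(first, others):
          stats = lambda x: (np.mean(x), np.var(x))
          mu1, var1 = stats(first)
          divergences = [symmetric_kl(mu1, var1, *stats(o)) for o in others]
          threshold = np.mean(divergences)        # threshold derived from the divergences themselves
          return [(round(float(d), 3), bool(d <= threshold)) for d in divergences]

      rng = np.random.default_rng(0)
      first = rng.normal(0.0, 1.0, 500)
      others = [rng.normal(0.1, 1.0, 500), rng.normal(2.0, 1.5, 500)]
      print(compare_tracks(first, others))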
  • Patent number: 9928839
    Abstract: Methods and systems for authenticating a user are described. In some embodiments, a one-time token is used together with a recording of the one-time token read aloud by the user. The voice characteristics derived from the recording of the one-time token are compared with voice characteristics derived from samples of the user's voice. The user may be authenticated when the one-time token is verified and when a match of the voice characteristics derived from the recording of the one-time token and the voice characteristics derived from the samples of the user's voice meet or exceed a threshold.
    Type: Grant
    Filed: April 16, 2014
    Date of Patent: March 27, 2018
    Assignee: United Services Automobile Association (USAA)
    Inventors: Michael Wayne Lester, Debra Randall Casillas, Sudarshan Rangarajan, John Shelton, Maland Keith Mortensen