Subportions Patents (Class 704/254)
-
Patent number: 12170088
Abstract: An electronic device is provided. The electronic device according to an embodiment includes a microphone, a communicator comprising communication circuitry, and a processor configured to control the communicator to transmit a control command to an external audio device for reducing an audio output level of the external audio device in response to a trigger signal for starting a voice control mode being received through the microphone and to control the electronic device to operate in the voice control mode.
Type: Grant
Filed: July 7, 2023
Date of Patent: December 17, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Min-seok Kim, Min-ho Lee
-
Patent number: 12142279
Abstract: A speaker extracting unit extracts a speaker area from an image. A first utterance data generating unit, on the basis of the shape of the lips of the speaker, generates first utterance data indicating the content of the utterance by the speaker. A second utterance data generating unit, on the basis of a speech signal corresponding to the utterance by the speaker, generates second utterance data indicating the content of the utterance by the speaker. A comparison unit compares the first utterance data and the second utterance data with each other.
Type: Grant
Filed: July 29, 2020
Date of Patent: November 12, 2024
Assignee: NEC CORPORATION
Inventor: Kazuyuki Sasaki
-
Patent number: 12130899
Abstract: This application provides a voiceprint recognition method and device. The method includes: calculating, by an electronic device, a first confidence value that an entered voice belongs to a first registered user, and calculating a second confidence value that the entered voice belongs to a second registered user. The method further includes: calculating, by another electronic device, a third confidence value that the entered voice belongs to the first registered user, and calculating a fourth confidence value that the entered voice belongs to the second registered user. A server determines, based on the first confidence value and the third confidence value, a fifth confidence value that a user is the first registered user, and determines, based on the second confidence value and the fourth confidence value, a sixth confidence value that the user is the second registered user.
Type: Grant
Filed: January 28, 2022
Date of Patent: October 29, 2024
Assignee: Huawei Technologies Co., Ltd.
Inventors: Yuan Sun, Shuwei Li, Youyu Jiang, Shen Qu, Ming Kuang
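A minimal sketch of the kind of cross-device confidence fusion described above. The device and user names are invented, and the fusion rule (a simple mean) is an assumption; the abstract only states that the server combines the per-device confidence values.

```python
def fuse_confidences(per_device_scores):
    """per_device_scores: {device: {user: confidence}} -> {user: fused confidence}."""
    fused = {}
    for scores in per_device_scores.values():
        for user, conf in scores.items():
            fused.setdefault(user, []).append(conf)
    return {user: sum(vals) / len(vals) for user, vals in fused.items()}

scores = {
    "device_A": {"user1": 0.82, "user2": 0.35},   # first/second confidence values
    "device_B": {"user1": 0.74, "user2": 0.41},   # third/fourth confidence values
}
fused = fuse_confidences(scores)                   # fifth/sixth confidence values
print(max(fused, key=fused.get), fused)
```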
-
Patent number: 12100407
Abstract: A method for phoneme identification. The method includes receiving an audio signal from a speaker, performing initial processing comprising filtering the audio signal to remove audio features, the initial processing resulting in a modified audio signal, transmitting the modified audio signal to a phoneme identification method and a phoneme replacement method to further process the modified audio signal, and transmitting the modified audio signal to a speaker. Also disclosed is a system for identifying and processing audio signals. The system includes at least one speaker, at least one microphone, and at least one processor, wherein the processor processes audio signals received using a method for phoneme replacement.
Type: Grant
Filed: August 1, 2022
Date of Patent: September 24, 2024
Assignee: DEKA Products Limited Partnership
Inventors: Dean Kamen, Derek G. Kane
-
Patent number: 12095939
Abstract: Techniques are disclosed for connecting third-party accessories to a cellular-capable device to participate in a telephone call. In one example, a user can voice a request to make a call to an accessory device. The accessory device can transmit the request to a controller device. Upon processing the request, the controller device can identify an appropriate cellular-capable device and instruct the cellular-capable device to place the requested call. The controller device can also instruct the cellular-capable device to establish an audio connection with the accessory device to relay the call audio. In another example, the controller device can listen for a word spoken at the accessory device indicating the end of the call. Upon receiving the end of call word, the controller device can instruct the cellular-capable device to terminate the call. While in the listening state, the controller device may continue processing user requests received at other accessory devices.
Type: Grant
Filed: April 12, 2022
Date of Patent: September 17, 2024
Assignee: Apple Inc.
Inventors: Jared S. Grubb, Robert M. Stewart, Gabriel Sanchez, Zaka ur Rehman Ashraf, Anshul Jain
-
Patent number: 12079544
Abstract: A display device according to an embodiment of the present invention may comprise: a display unit for displaying a content image; a microphone for receiving voice commands from a user; a network interface unit for communicating with a natural language processing server and a search server; and a control unit for transmitting the received voice commands to the natural language processing server, receiving intention analysis result information indicating the user's intention corresponding to the voice commands from the natural language processing server, and performing a function of the display device according to the received intention analysis result information.
Type: Grant
Filed: June 26, 2023
Date of Patent: September 3, 2024
Assignee: LG ELECTRONICS INC.
Inventors: Sunki Min, Kiwoong Lee, Hyangjin Lee, Jeean Chang, Seunghyun Heo, Jaekyung Lee
-
Patent number: 12073824Abstract: Two-pass automatic speech recognition (ASR) models can be used to perform streaming on-device ASR to generate a text representation of an utterance captured in audio data. Various implementations include a first-pass portion of the ASR model used to generate streaming candidate recognition(s) of an utterance captured in audio data. For example, the first-pass portion can include a recurrent neural network transformer (RNN-T) decoder. Various implementations include a second-pass portion of the ASR model used to revise the streaming candidate recognition(s) of the utterance and generate a text representation of the utterance. For example, the second-pass portion can include a listen attend spell (LAS) decoder. Various implementations include a shared encoder shared between the RNN-T decoder and the LAS decoder.Type: GrantFiled: December 3, 2020Date of Patent: August 27, 2024Assignee: GOOGLE LLCInventors: Tara N. Sainath, Yanzhang He, Bo Li, Arun Narayanan, Ruoming Pang, Antoine Jean Bruguier, Shuo-Yiin Chang, Wei Li
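A structural sketch only, showing the two-pass data flow (shared encoder, streaming first pass, non-streaming rescoring). The function names and the toy stand-ins for the neural components are invented; they are not the patented model.

```python
def shared_encode(audio_frames):
    # Stand-in for the shared neural encoder; here it just passes frames through.
    return list(audio_frames)

def first_pass_stream(encoded):
    # First pass (RNN-T-style): emit growing partial hypotheses as frames arrive.
    hypothesis = []
    for frame in encoded:
        hypothesis.append(frame)      # stand-in for emitted tokens
        yield list(hypothesis)        # streaming candidate recognition

def second_pass_rescore(encoded, candidates):
    # Second pass (LAS-style): look at the full encoded sequence and revise/pick a candidate.
    return max(candidates, key=len)   # trivial stand-in for attention-based rescoring

audio = ["hel", "lo", "world"]
enc = shared_encode(audio)
partials = list(first_pass_stream(enc))   # shown to the user as they speak
final = second_pass_rescore(enc, partials)
print(partials, final)
```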
-
Patent number: 12073826
Abstract: A method for detecting freeze words includes receiving audio data that corresponds to an utterance spoken by a user and captured by a user device associated with the user. The method also includes processing, using a speech recognizer, the audio data to determine that the utterance includes a query for a digital assistant to perform an operation. The speech recognizer is configured to trigger endpointing of the utterance after a predetermined duration of non-speech in the audio data. Before the predetermined duration of non-speech, the method includes detecting a freeze word in the audio data. In response to detecting the freeze word in the audio data, the method also includes triggering a hard microphone closing event at the user device. The hard microphone closing event prevents the user device from capturing any audio subsequent to the freeze word.
Type: Grant
Filed: May 23, 2023
Date of Patent: August 27, 2024
Assignee: Google LLC
Inventors: Matthew Sharifi, Aleksandar Kracun
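A toy sketch of the endpointing behavior described above: a timeout-based soft endpoint versus an immediate hard microphone close on a freeze word. The freeze word, the non-speech limit, and the event-stream format are all assumptions.

```python
FREEZE_WORD = "thanks"      # hypothetical freeze word
NON_SPEECH_LIMIT = 3        # endpoint after this many consecutive non-speech chunks

def run_microphone(events):
    """events: iterable of recognized words, or None for a non-speech chunk."""
    captured, silence = [], 0
    for ev in events:
        if ev is None:
            silence += 1
            if silence >= NON_SPEECH_LIMIT:
                return captured, "soft endpoint (non-speech timeout)"
            continue
        silence = 0
        if ev == FREEZE_WORD:
            return captured, "hard microphone close (freeze word)"
        captured.append(ev)
    return captured, "stream ended"

# Nothing after the freeze word is captured.
print(run_microphone(["turn", "off", "the", "lights", "thanks", "and", "also"]))
```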
-
Patent number: 12067978
Abstract: Methods and systems are disclosed herein for improvements relating to compressed automatic speech recognition (ASR) systems. The ASR system may comprise a compressed acoustic engine and an adaptive decoder. The adaptive decoder may be dynamically compiled based on characteristics of the compressed acoustic engine and a current state of the application device. In some embodiments, a dynamic command list is used to manage context-specific commands. Two or more commands recognized by the adaptive decoder may be confusable due to compression of the ASR system. Alternate commands may be determined that are semantically equivalent but phonetically different than the confusable commands to reduce classification error of the adaptive decoder. An alternate command may replace one or more of the confusable commands in the adaptive decoder. In some embodiments, a user interface is displayed to a user of the ASR system to select the alternate command for replacement in the decoder.
Type: Grant
Filed: June 1, 2021
Date of Patent: August 20, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Fuliang Weng, Alexei Ivanov, Stephen Cradock
-
Patent number: 12062370
Abstract: An electronic device and a method for controlling the electronic device are disclosed. The method includes receiving a trigger speech of a user, entering a speech recognition mode to recognize a speech command of the user in response, and transmitting information to enter the speech recognition mode to at least one external device located at home. Further, the method includes obtaining a first speech information corresponding to a speech uttered by a user from a microphone included in the electronic device and receiving at least one second speech information corresponding to a speech uttered by the user from the at least one external device; identifying a task corresponding to a speech uttered by the user and an external device to perform the task based on the first speech information and the at least one second speech information; and transmitting, to the identified external device, information about the task.
Type: Grant
Filed: January 29, 2021
Date of Patent: August 13, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventor: Jeonghyun Yun
-
Patent number: 12039280
Abstract: Machine classifiers in accordance with embodiments of the invention capture long-term temporal dependencies in particular tasks, such as turn-based dialogues. Machine classifiers may be used to help users to perform tasks indicated by the user. When a user utterance is received, natural language processing techniques may be used to understand the user's intent. Templates may be determined based on the user's intent in the generation of responses to solicit information from the user. A variety of persona attributes may be determined for a user. The persona attributes may be determined based on the user's utterances and/or provided as metadata included with the user's utterances. A response persona may be used to generate responses to the user's utterances such that the generated responses match a tone appropriate to the task. A response persona may be used to generate templates to solicit additional information and/or generate responses appropriate to the task.
Type: Grant
Filed: April 17, 2023
Date of Patent: July 16, 2024
Assignee: Capital One Services, LLC
Inventors: Oluwatobi Olabiyi, Erik T. Mueller, Rui Zhang, Zachary Kulis, Varun Singh
-
Patent number: 12026453
Abstract: An annotation method, a method of relation extraction, a non-transient computer storage medium, and a computing device are provided. The annotation method includes: traversing each sentence in a text to be annotated to generate a first template and selecting the first template; traversing each sentence in the text to be annotated, based on the selected first template, to match at least one new seed; evaluating the at least one new seed having been matched; repeating the above steps until a selected condition is met, and outputting the matched correct seed and a classification relationship between a first entity and a second entity in the matched correct seed.
Type: Grant
Filed: February 26, 2021
Date of Patent: July 2, 2024
Assignee: BOE TECHNOLOGY GROUP CO., LTD.
Inventor: Yafei Dai
-
Patent number: 12026753
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for speech recognition are disclosed. In one aspect, a method includes receiving a candidate adword from an advertiser. The method further includes generating a score for the candidate adword based on a likelihood of a speech recognizer generating, based on an utterance of the candidate adword, a transcription that includes a word that is associated with an expected pronunciation of the candidate adword. The method further includes classifying, based at least on the score, the candidate adword as an appropriate adword for use in a bidding process for advertisements that are selected based on a transcription of a speech query or as not an appropriate adword for use in the bidding process for advertisements that are selected based on the transcription of the speech query.
Type: Grant
Filed: May 5, 2021
Date of Patent: July 2, 2024
Assignee: Google LLC
Inventors: Petar Aleksic, Pedro J. Moreno Mengibar
-
Patent number: 12002452
Abstract: Implementations relate to techniques for providing context-dependent search results. A computer-implemented method includes receiving an audio stream at a computing device during a time interval, the audio stream comprising user speech data and background audio, separating the audio stream into a first substream that includes the user speech data and a second substream that includes the background audio, identifying concepts related to the background audio, generating a set of terms related to the identified concepts, influencing a speech recognizer based on at least one of the terms related to the background audio, and obtaining a recognized version of the user speech data using the speech recognizer.
Type: Grant
Filed: December 21, 2022
Date of Patent: June 4, 2024
Assignee: Google LLC
Inventors: Jason Sanders, Gabriel Taubman, John J. Lee
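A rough sketch of how terms derived from background-audio concepts could influence a recognizer. The concept-to-term mapping, the boost value, and the idea of rescoring whole hypothesis strings are assumptions; production systems typically bias the recognizer's language model instead.

```python
BACKGROUND_CONCEPTS = {"stadium crowd": ["score", "game", "team"],
                       "kitchen sounds": ["recipe", "timer", "oven"]}

def biased_pick(hypotheses, concepts):
    boost_terms = {t for c in concepts for t in BACKGROUND_CONCEPTS.get(c, [])}
    def score(hyp):
        bonus = sum(0.5 for w in hyp["text"].split() if w in boost_terms)
        return hyp["acoustic_score"] + bonus
    return max(hypotheses, key=score)

hyps = [{"text": "what is the scar", "acoustic_score": 1.0},
        {"text": "what is the score", "acoustic_score": 0.9}]
print(biased_pick(hyps, ["stadium crowd"]))   # background audio tips the choice
```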
-
Patent number: 11990119
Abstract: An interactive system may be implemented in part by an audio device located within a user environment, which may accept speech commands from a user and may also interact with the user by means of generated speech. In order to improve performance of the interactive system, a user may use a separate device, such as a personal computer or mobile device, to access a graphical user interface that lists details of historical speech interactions. The graphical user interface may be configured to allow the user to provide feedback and/or corrections regarding the details of specific interactions.
Type: Grant
Filed: March 8, 2021
Date of Patent: May 21, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Gilles Jean Roger Belin, Charles S. Rogers, III, Robert David Owen, Jeffrey Penrod Adams, Rajiv Ramachandran, Gregory Michael Hart
-
Patent number: 11978440
Abstract: Techniques for processing input data for a detected user are described. Received image data is processed to identify an indicated user. Based on the user, a machine learning model is implemented. The machine learning model is then used to process input data for a user input. An action is performed using the resulting output data.
Type: Grant
Filed: May 25, 2023
Date of Patent: May 7, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Deepak Yavagal, Ajith Prabhakara, John Gray
-
Patent number: 11978113
Abstract: Techniques are described for cross-device presentation. During a speech interaction with a conversational user interface (CUI) executing on an input device, such as a personal assistant (PA) device or other computing device, a user may utter one or more search terms to search for an item, such as a vehicle to purchase. The search term(s) may be employed by a search engine to identify one or more items that correspond to the search term(s). The search engine can generate recommendation information that includes a description of the item(s) corresponding to the search term(s). The recommendation information can be communicated to an output device that is registered to, or otherwise associated with, the user who spoke the search term(s) to the input device. In some instances, the recommendation information can be presented through speech output on the PA device or other device.
Type: Grant
Filed: May 31, 2023
Date of Patent: May 7, 2024
Assignee: United Services Automobile Association (USAA)
Inventors: Philip Andrew Leal, Ricardo Alcantar
-
Patent number: 11978434
Abstract: A computer-implemented technique identifies terms in an original reference transcription and original ASR output results that are considered valid variants of each other, even though these terms have different textual forms. Based on this finding, the technique produces a normalized reference transcription and normalized ASR output results in which valid variants are assigned the same textual form. In some implementations, the technique uses the normalized text to develop a model for an ASR system. For example, the technique may generate a word error rate (WER) measure by comparing the normalized reference transcription with the normalized ASR output results, and use the WER measure as guidance in developing the model. Some aspects of the technique involve identifying occasions in which a term can be properly split into component parts. Other aspects can identify other ways in which two terms may vary in spelling, but nonetheless remain valid variants.
Type: Grant
Filed: September 29, 2021
Date of Patent: May 7, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Satarupa Guha, Ankur Gupta, Rahul Ambavat, Rupeshkumar Rasiklal Mehta
-
Patent number: 11955119
Abstract: A speech recognition method includes receiving speech data, obtaining, from the received speech data, a candidate text including at least one word and a phonetic symbol sequence associated with a pronunciation of a target word included in the received speech data, using a speech recognition model, replacing the phonetic symbol sequence included in the candidate text with a replacement word corresponding to the phonetic symbol sequence, and determining a target text corresponding to the received speech data based on a result of the replacing.
Type: Grant
Filed: December 16, 2022
Date of Patent: April 9, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventor: Jihyun Lee
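A minimal sketch of the replacement step: a phonetic-symbol sequence emitted for a target word is swapped for a word looked up in a pronunciation lexicon. The lexicon contents and the "/.../" token markers are invented for illustration.

```python
LEXICON = {"K AH M P AY L ER": "compiler"}   # phonetic symbol sequence -> replacement word

def replace_phonetic_tokens(candidate_tokens):
    out = []
    for tok in candidate_tokens:
        if tok.startswith("/") and tok.endswith("/"):        # phonetic symbol sequence
            out.append(LEXICON.get(tok.strip("/"), tok))     # replace when a match exists
        else:
            out.append(tok)
    return " ".join(out)

print(replace_phonetic_tokens(["open", "the", "/K AH M P AY L ER/"]))
```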
-
Patent number: 11948566
Abstract: The present disclosure describes systems and methods for extensible search, content, and dialog management. Embodiments of the present disclosure provide a dialog system with a trained intent recognition model (e.g., a deep learning model) to receive and understand a natural language query from a user. In cases where intent is not identified for a received query, the dialog system generates one or more candidate responses that may be refined (e.g., using human-in-the-loop curation) to generate a response. The intent recognition model may be updated (e.g., retrained) accordingly. Upon receiving a subsequent query with similar intent, the dialog system may identify the intent using the updated intent recognition model.
Type: Grant
Filed: March 24, 2021
Date of Patent: April 2, 2024
Assignee: ADOBE INC.
Inventors: Oliver Brdiczka, Kyoung Tak Kim, Charat Maheshwari
-
Patent number: 11935525
Abstract: Systems and methods for utilizing microphone array information for acoustic modeling are disclosed. Audio data may be received from a device having a microphone array configuration. Microphone configuration data may also be received that indicates the configuration of the microphone array. The microphone configuration data may be utilized as an input vector to an acoustic model, along with the audio data, to generate phoneme data. Additionally, the microphone configuration data may be utilized to train and/or generate acoustic models, select an acoustic model to perform speech recognition with, and/or to improve trigger sound detection.
Type: Grant
Filed: June 8, 2020
Date of Patent: March 19, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Shiva Kumar Sundaram, Minhua Wu, Anirudh Raju, Spyridon Matsoukas, Arindam Mandal, Kenichi Kumatani
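A sketch, with invented shapes, of one way microphone-array configuration data could be fed to an acoustic model: append a fixed configuration vector to every frame's acoustic features before inference.

```python
import numpy as np

def add_mic_config(features, mic_config):
    """features: (frames, feat_dim); mic_config: (config_dim,) -> (frames, feat_dim + config_dim)."""
    config = np.tile(mic_config, (features.shape[0], 1))   # repeat the config for every frame
    return np.concatenate([features, config], axis=1)

frames = np.random.randn(100, 40)                 # e.g. 100 frames of 40-dim log-mel features
mic_config = np.array([1, 0, 0, 0.035])           # e.g. one-hot array type plus element spacing (m)
print(add_mic_config(frames, mic_config).shape)   # (100, 44)
```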
-
Patent number: 11906317
Abstract: Systems and methods for presenting information and executing a task. In an aspect, when a user gazes at a display of a standby device, location related information is presented. In another aspect, when a user utters a voice command and gazes or gestures at a device, a task is executed. In another aspect, a voice input, a gesture, and user information are used to determine a destination for a trip or a product for a purchase. In another aspect, a voice input and user information are used to determine a destination when a user hails a vehicle.
Type: Grant
Filed: June 8, 2021
Date of Patent: February 20, 2024
Inventor: Chian Chiu Li
-
Patent number: 11900958
Abstract: Embodiments of the present disclosure provide methods and systems for processing a speech signal. The method can include: processing the speech signal to generate a plurality of speech frames; generating a first number of acoustic features based on the plurality of speech frames using a frame shift at a given frequency; and generating a second number of posteriori probability vectors based on the first number of acoustic features using an acoustic model, wherein each of the posteriori probability vectors comprises probabilities of the acoustic features corresponding to a plurality of modeling units, respectively.
Type: Grant
Filed: December 26, 2022
Date of Patent: February 13, 2024
Assignee: Alibaba Group Holding Limited
Inventors: Shiliang Zhang, Ming Lei, Wei Li, Haitao Yao
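A toy numerical example of the framing idea: a larger frame shift yields fewer acoustic features (and hence posterior vectors) than raw frames. The random "acoustic model" here is a stand-in, not the patented model, and all sizes are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
speech_frames = rng.standard_normal((300, 40))   # 300 x 10 ms frames, 40-dim features each
shift = 3                                        # keep one feature every 3 frames (30 ms shift)
features = speech_frames[::shift]                # "first number" of acoustic features: 100
W = rng.standard_normal((40, 500))               # 500 modeling units
posteriors = softmax(features @ W)               # "second number" of posteriori probability vectors
print(features.shape, posteriors.shape, posteriors[0].sum())   # each row sums to 1
```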
-
Patent number: 11900939
Abstract: A display apparatus includes an input unit configured to receive a user command; an output unit configured to output a registration suitability determination result for the user command; and a processor configured to generate phonetic symbols for the user command, analyze the generated phonetic symbols to determine registration suitability for the user command, and control the output unit to output the registration suitability determination result for the user command. Therefore, the display apparatus may register a user command which is resistant to misrecognition and guarantees a high recognition rate among user commands defined by a user.
Type: Grant
Filed: October 7, 2022
Date of Patent: February 13, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Nam-yeong Kwon, Kyung-mi Park
-
Patent number: 11862166
Abstract: A display apparatus includes an input unit configured to receive a user command; an output unit configured to output a registration suitability determination result for the user command; and a processor configured to generate phonetic symbols for the user command, analyze the generated phonetic symbols to determine registration suitability for the user command, and control the output unit to output the registration suitability determination result for the user command. Therefore, the display apparatus may register a user command which is resistant to misrecognition and guarantees a high recognition rate among user commands defined by a user.
Type: Grant
Filed: October 7, 2022
Date of Patent: January 2, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Nam-yeong Kwon, Kyung-mi Park
-
Patent number: 11854528
Abstract: An apparatus for detecting unsupported utterances in natural language understanding includes a memory storing instructions, and at least one processor configured to execute the instructions to classify a feature that is extracted from an input utterance of a user, as one of in-domain and out-of-domain (OOD) for a response to the input utterance, obtain an OOD score of the extracted feature, and identify whether the feature is classified as OOD. The at least one processor is further configured to execute the instructions to, based on the feature being identified to be classified as in-domain, identify whether the obtained OOD score is greater than a predefined threshold, and based on the OOD score being identified to be greater than the predefined threshold, re-classify the feature as OOD.
Type: Grant
Filed: August 13, 2021
Date of Patent: December 26, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Yen-Chang Hsu, Yilin Shen, Avik Ray, Hongxia Jin
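A minimal sketch of the decision rule described above; the classifier output, the score values, and the threshold are placeholders.

```python
OOD_THRESHOLD = 0.7   # hypothetical predefined threshold

def final_domain_label(predicted_label, ood_score):
    # Re-classify as OOD even when the classifier said in-domain, if the OOD score is high.
    if predicted_label == "in-domain" and ood_score > OOD_THRESHOLD:
        return "OOD"
    return predicted_label

print(final_domain_label("in-domain", 0.85))   # -> OOD
print(final_domain_label("in-domain", 0.40))   # -> in-domain
```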
-
Patent number: 11844043
Abstract: Methods, apparatus, systems and articles of manufacture for near real time out of home audience measurement are disclosed. An example apparatus includes at least one memory; instructions; and processor circuitry to execute the instructions to at least: receive a first data transmission request at a first portable meter; send a second data transmission request from the first portable meter to a second portable meter; determine whether the first portable meter is capable of transmitting at least one data packet, based at least in part on an indication the second portable meter is capable of transmitting the at least one data packet; and in response to determining the first portable meter is capable of transmitting the at least one data packet, transmit the at least one data packet.
Type: Grant
Filed: June 28, 2021
Date of Patent: December 12, 2023
Assignee: The Nielsen Company (US), LLC
Inventors: John T. Livoti, Stanley Wellington Woodruff
-
Patent number: 11842726
Abstract: A computer-implemented method for speech recognition is disclosed. The method includes extracting a feature word associated with location information from a speech to be recognized, and calculating a similarity between the feature word and respective ones of a plurality of candidate words in a corpus. The corpus includes a first sub-corpus associated with at least one user, and the plurality of candidate words include, in the first sub-corpus, a first standard candidate word and at least one first erroneous candidate word. The at least one first erroneous candidate word has a preset correspondence with the first standard candidate word. The method further includes in response to the similarity between the feature word and one or more of the at least one first erroneous candidate word satisfying a predetermined condition, outputting the first standard candidate word as a recognition result based on the preset correspondence.
Type: Grant
Filed: September 8, 2021
Date of Patent: December 12, 2023
Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
Inventors: Jing Pei, Xiantao Chen, Meng Xu
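An illustrative sketch only: similarity is approximated with difflib, and the user sub-corpus with its erroneous-to-standard correspondences is invented example data.

```python
import difflib

USER_SUBCORPUS = {
    "standard": "Zhongguancun Street",
    "erroneous": ["Zhongguancun Stread", "Zhonguancun Street"],   # preset correspondences
}
SIMILARITY_THRESHOLD = 0.85

def correct(feature_word):
    # If the recognized word is close to a known erroneous variant, output the standard word.
    for cand in USER_SUBCORPUS["erroneous"]:
        if difflib.SequenceMatcher(None, feature_word, cand).ratio() >= SIMILARITY_THRESHOLD:
            return USER_SUBCORPUS["standard"]
    return feature_word

print(correct("Zhonguancun Street"))   # -> Zhongguancun Street
```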
-
Patent number: 11837221
Abstract: Systems and methods are described to receive a query from a user and provide a reply that is appropriate for an age group of the user. A query for a media asset is received, where such query comprises an inputted term, and the query is determined to be received from a user belonging to a first age group. A context of the inputted term within the query is identified, and in response to the determining, based on the identified context, that the inputted term of the query is inappropriate for the first age group, a replacement term for the inputted term that is related to the inputted term and is appropriate for the first age group in the context of the query is identified. The query is modified to replace the inputted term with the identified replacement term, and a reply to the modified query is generated for output.
Type: Grant
Filed: February 26, 2021
Date of Patent: December 5, 2023
Assignee: Rovi Guides, Inc.
Inventors: Ankur Anil Aher, Jeffry Copps Robert Jose
-
Patent number: 11810568
Abstract: A computer-implemented method for transcribing an utterance includes receiving, at a computing system, speech data that characterizes an utterance of a user. A first set of candidate transcriptions of the utterance can be generated using a static class-based language model that includes a plurality of classes that are each populated with class-based terms selected independently of the utterance or the user. The computing system can then determine whether the first set of candidate transcriptions includes class-based terms. Based on whether the first set of candidate transcriptions includes class-based terms, the computing system can determine whether to generate a dynamic class-based language model that includes at least one class that is populated with class-based terms selected based on a context associated with at least one of the utterance and the user.
Type: Grant
Filed: December 10, 2020
Date of Patent: November 7, 2023
Assignee: Google LLC
Inventors: Petar Aleksic, Pedro J. Moreno Mengibar
-
Patent number: 11783830
Abstract: Various embodiments contemplate systems and methods for performing automatic speech recognition (ASR) and natural language understanding (NLU) that enable high accuracy recognition and understanding of freely spoken utterances which may contain proper names and similar entities. The proper name entities may contain or be comprised wholly of words that are not present in the vocabularies of these systems as normally constituted. Recognition of the other words in the utterances in question, e.g. words that are not part of the proper name entities, may occur at regular, high recognition accuracy. Various embodiments provide as output not only accurately transcribed running text of the complete utterance, but also a symbolic representation of the meaning of the input, including appropriate symbolic representations of proper name entities, adequate to allow a computer system to respond appropriately to the spoken request without further analysis of the user's input.
Type: Grant
Filed: May 26, 2021
Date of Patent: October 10, 2023
Assignee: Promptu Systems Corporation
Inventor: Harry William Printz
-
Patent number: 11763100
Abstract: A system is provided comprising a processor and a memory storing instructions which configure the processor to process an original sentence structure through an encoder neural network to decompose the original sentence structure into an original semantics component and an original syntax component, process the original syntax component through a syntax variational autoencoder (VAE) to receive a syntax mean vector and a syntax covariance matrix, obtain a sampled syntax vector from a syntax Gaussian posterior parameterized by the syntax mean vector and the syntax covariance matrix, process the original semantics component through a semantics VAE to receive a semantics mean vector and a semantics covariance matrix, obtain a sampled semantics vector from a semantics Gaussian posterior parameterized by the semantics mean vector and the semantics covariance matrix, and process the sampled syntax vector and the sampled semantics vector through a decoder neural network to compose a new sentence.
Type: Grant
Filed: May 22, 2020
Date of Patent: September 19, 2023
Assignee: ROYAL BANK OF CANADA
Inventors: Peng Xu, Yanshuai Cao, Jackie C. K. Cheung
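A small sketch of drawing the sampled syntax and semantics vectors from their Gaussian posteriors via the reparameterization trick. The vector dimensions and the diagonal-covariance simplification are assumptions; the abstract specifies full covariance matrices.

```python
import numpy as np

def sample_gaussian(mean, cov_diag, rng):
    """z = mu + sigma * eps for a diagonal-covariance Gaussian posterior."""
    eps = rng.standard_normal(mean.shape)
    return mean + np.sqrt(cov_diag) * eps

rng = np.random.default_rng(0)
syntax_z = sample_gaussian(np.zeros(16), np.full(16, 0.10), rng)      # sampled syntax vector
semantics_z = sample_gaussian(np.ones(32), np.full(32, 0.05), rng)    # sampled semantics vector
decoder_input = np.concatenate([syntax_z, semantics_z])               # fed to the decoder network
print(decoder_input.shape)
```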
-
Patent number: 11735191
Abstract: This application describes methods and apparatus for speaker recognition. An apparatus according to an embodiment has an analyzer for analyzing each frame of a sequence of frames of audio data which correspond to speech sounds uttered by a user to determine at least one characteristic of the speech sound of that frame. An assessment module determines, for each frame of audio data, a contribution indicator of the extent to which that frame of audio data should be used for speaker recognition processing based on the determined characteristic of the speech sound. Said contribution indicator comprises a weighting to be applied to each frame in the speaker recognition processing. In this way frames which correspond to speech sounds that are of most use for speaker discrimination may be emphasized and/or frames which correspond to speech sounds that are of least use for speaker discrimination may be de-emphasized.
Type: Grant
Filed: June 25, 2019
Date of Patent: August 22, 2023
Assignee: Cirrus Logic, Inc.
Inventors: John Paul Lesso, John Laurence Melanson
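A toy sketch of weighting frames by how useful their speech sound is for speaker discrimination. The per-class weights, the phone classes, and the weighted-average pooling are illustrative assumptions.

```python
import numpy as np

PHONE_WEIGHT = {"vowel": 1.0, "nasal": 0.8, "fricative": 0.3, "silence": 0.0}

def weighted_speaker_embedding(frame_embeddings, frame_phone_classes):
    # Contribution indicator per frame, derived from the frame's speech-sound characteristic.
    w = np.array([PHONE_WEIGHT.get(c, 0.5) for c in frame_phone_classes])
    w = w / w.sum()
    return (frame_embeddings * w[:, None]).sum(axis=0)   # contribution-weighted average

emb = np.random.randn(4, 8)                              # 4 frames, 8-dim frame embeddings
classes = ["vowel", "fricative", "nasal", "silence"]     # characteristic determined per frame
print(weighted_speaker_embedding(emb, classes).shape)
```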
-
Patent number: 11735167
Abstract: Disclosed is an electronic device recognizing an utterance voice in units of individual characters. The electronic device includes: a voice receiver; and a processor configured to: obtain a recognition character converted from a character section of a user voice received through the voice receiver, and recognize a candidate character having high acoustic feature related similarity with the character section among a plurality of acquired candidate characters as an utterance character of the character section based on a confusion possibility with the acquired recognition character.
Type: Grant
Filed: November 24, 2020
Date of Patent: August 22, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Jihun Park, Dongheon Seok
-
Patent number: 11704089
Abstract: A display device according to an embodiment of the present invention may comprise: a display unit for displaying a content image; a microphone for receiving voice commands from a user; a network interface unit for communicating with a natural language processing server and a search server; and a control unit for transmitting the received voice commands to the natural language processing server, receiving intention analysis result information indicating the user's intention corresponding to the voice commands from the natural language processing server, and performing a function of the display device according to the received intention analysis result information.
Type: Grant
Filed: January 11, 2021
Date of Patent: July 18, 2023
Assignee: LG ELECTRONICS INC.
Inventors: Sunki Min, Kiwoong Lee, Hyangjin Lee, Jeean Chang, Seunghyun Heo, Jaekyung Lee
-
Patent number: 11676073
Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed that analyze performance of manufacturer independent devices. An example apparatus includes a software development kit (SDK) deployment engine to deploy an SDK to a manufacturer of a device, the SDK to define heartbeat data to be collected from the device and interfacing techniques to transmit the heartbeat data to a measurement entity. In some examples, the apparatus includes a machine learning engine to predict whether the device is associated with one or more failure modes. The example apparatus also includes an alert generator to generate an alert based on a prediction, the alert to indicate at least one of a type of a first one of the failure modes or at least one component of the device to be remedied according to the first one of the one or more failure modes, and transmit the alert to a management agent.
Type: Grant
Filed: July 12, 2021
Date of Patent: June 13, 2023
Assignee: The Nielsen Company (US), LLC
Inventors: John T. Livoti, Susan Cimino, Stanley Wellington Woodruff, Rajakumar Madhanganesh, Alok Garg
-
Patent number: 11657799
Abstract: Techniques performed by a data processing system for training a Recurrent Neural Network Transducer (RNN-T) herein include encoder pretraining by training a neural network-based token classification model using first token-aligned training data representing a plurality of utterances, where each utterance is associated with a plurality of frames of audio data and tokens representing each utterance are aligned with frame boundaries of the plurality of audio frames; obtaining first cross-entropy (CE) criterion from the token classification model, wherein the CE criterion represent a divergence between expected outputs and reference outputs of the model; pretraining an encoder of an RNN-T based on the first CE criterion; and training the RNN-T with second training data after pretraining the encoder of the RNN-T. These techniques also include whole-network pre-training of the RNN-T.
Type: Grant
Filed: April 3, 2020
Date of Patent: May 23, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Rui Zhao, Jinyu Li, Liang Lu, Yifan Gong, Hu Hu
-
Patent number: 11636146
Abstract: Methods and apparatus for improving speech recognition accuracy in media content searches are described. An advertisement for a media content item is analyzed to identify keywords that may describe the media content item. The identified keywords are associated with the media content item for use during a voice search to locate the media content item. A user may speak one or more of the keywords as a search input and be provided with the media content item as a result of the search.
Type: Grant
Filed: September 23, 2019
Date of Patent: April 25, 2023
Assignee: Comcast Cable Communications, LLC
Inventor: George Thomas Des Jardins
-
Patent number: 11600284
Abstract: A voice morphing apparatus having adjustable parameters is described. The disclosed system and method include a voice morphing apparatus that morphs input audio to mask a speaker's identity. Parameter adjustment uses evaluation of an objective function that is based on the input audio and output of the voice morphing apparatus. The voice morphing apparatus includes objectives that are based adversarially on speaker identification and positively on audio fidelity. Thus, the voice morphing apparatus is adjusted to reduce identifiability of speakers while maintaining fidelity of the morphed audio. The voice morphing apparatus may be used as part of an automatic speech recognition system.
Type: Grant
Filed: January 11, 2020
Date of Patent: March 7, 2023
Assignee: SOUNDHOUND, INC.
Inventor: Steve Pearson
-
Patent number: 11593621
Abstract: An information processing apparatus according to an embodiment includes one or more hardware processors. The hardware processors obtain a first categorical distribution sequence corresponding to first input data and obtain a second categorical distribution sequence corresponding to second input data neighboring the first input data, by using a prediction model outputting a categorical distribution sequence representing a sequence of L categorical distributions for a single input data piece, where L is a natural number of two or more. The hardware processors calculate, for each i of 1 to L, an inter-distribution distance between the i-th categorical distributions in the first and second categorical distribution sequences. The hardware processors calculate a sum of the L inter-distribution distances. The hardware processors update the prediction model's parameters to lessen the sum.
Type: Grant
Filed: January 24, 2020
Date of Patent: February 28, 2023
Assignees: KABUSHIKI KAISHA TOSHIBA, TOSHIBA DIGITAL SOLUTIONS CORPORATION
Inventor: Ryohei Tanaka
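A worked example of the summed distance. The inter-distribution distance is taken here to be KL divergence, which is one possible choice rather than the one fixed by the abstract; the distributions are toy values.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    return float(np.sum(p * np.log(p / q)))

def sequence_distance(seq_p, seq_q):
    """seq_p, seq_q: (L, num_categories) rows summing to 1 -> sum of the L position-wise distances."""
    return sum(kl(p, q) for p, q in zip(seq_p, seq_q))

seq_a = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]   # categorical distributions for the first input
seq_b = [[0.6, 0.3, 0.1], [0.2, 0.7, 0.1]]   # distributions for a neighboring input
print(sequence_distance(seq_a, seq_b))        # training would update parameters to lessen this sum
```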
-
Patent number: 11574127
Abstract: Systems and methods for training a binary classifier model of a natural language understanding (NLU) system are disclosed herein. A determination is made as to whether a text string, with a content entity, includes an obsequious expression. In response to determining the text string includes an obsequious expression, a determination is made as to whether the obsequious expression describes the content entity. The model is trained based on a determination of at least one of: an absence of an obsequious expression in response to determining the obsequious expression describes the content entity; a presence of an obsequious expression in response to determining the obsequious expression describes the content entity; an absence of an obsequious expression in response to determining the obsequious expression does not describe the content entity; and a presence of an obsequious expression in response to determining the obsequious expression does not describe the content entity.
Type: Grant
Filed: February 28, 2020
Date of Patent: February 7, 2023
Assignee: Rovi Guides, Inc.
Inventors: Jeffry Copps Robert Jose, Mithun Umesh
-
Patent number: 11574012
Abstract: The present application provides an error correction method and device for search terms. The method comprises: identifying an incorrect search term; calculating weighted edit distances between the search term and pre-obtained hot terms by using a weighted edit distance algorithm, wherein, during the calculation of the weighted edit distances, different weights are set respectively for the following operations of transforming from the search term to the hot terms: an operation of inserting characters, an operation of deleting characters, an operation of replacing by characters with similar appearance or pronunciation, an operation of replacing by characters with dissimilar appearance or pronunciation, and an operation of exchanging characters; and selecting a predetermined number of hot terms based on the weighted edit distances and popularity of the hot terms for error correction prompt. The method and device of the present application can improve the error correction accuracy for erroneous search terms.
Type: Grant
Filed: August 14, 2017
Date of Patent: February 7, 2023
Assignee: BEIJING QIYI CENTURY SCIENCE & TECHNOLOGY CO., LTD.
Inventors: Jun Hu, Yingjie Chen, Tianchang Wang, Chengcan Ye
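A sketch of a weighted edit distance in which each of the listed operations carries its own cost. The cost values and the "similar character" pairs are illustrative assumptions, not the weights used by the patented method.

```python
SIMILAR = {("0", "o"), ("o", "0"), ("1", "l"), ("l", "1")}
COST = {"insert": 1.0, "delete": 1.0, "sub_similar": 0.3, "sub_dissimilar": 1.5, "swap": 0.5}

def weighted_edit_distance(a, b):
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * COST["delete"]
    for j in range(1, n + 1):
        d[0][j] = j * COST["insert"]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                sub = 0.0
            elif (a[i - 1], b[j - 1]) in SIMILAR:
                sub = COST["sub_similar"]      # replace by a similar-looking/sounding character
            else:
                sub = COST["sub_dissimilar"]   # replace by a dissimilar character
            d[i][j] = min(d[i - 1][j] + COST["delete"],
                          d[i][j - 1] + COST["insert"],
                          d[i - 1][j - 1] + sub)
            # Exchange (transposition) of adjacent characters.
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + COST["swap"])
    return d[m][n]

print(weighted_edit_distance("goggle", "google"))   # dissimilar substitution: 1.5
print(weighted_edit_distance("g0ogle", "google"))   # similar-character substitution is cheaper: 0.3
```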
-
Patent number: 11557280
Abstract: Implementations relate to techniques for providing context-dependent search results. A computer-implemented method includes receiving an audio stream at a computing device during a time interval, the audio stream comprising user speech data and background audio, separating the audio stream into a first substream that includes the user speech data and a second substream that includes the background audio, identifying concepts related to the background audio, generating a set of terms related to the identified concepts, influencing a speech recognizer based on at least one of the terms related to the background audio, and obtaining a recognized version of the user speech data using the speech recognizer.
Type: Grant
Filed: November 23, 2020
Date of Patent: January 17, 2023
Assignee: Google LLC
Inventors: Jason Sanders, Gabriel Taubman, John J. Lee
-
Patent number: 11557286
Abstract: A speech recognition method includes receiving speech data, obtaining, from the received speech data, a candidate text including at least one word and a phonetic symbol sequence associated with a pronunciation of a target word included in the received speech data, using a speech recognition model, replacing the phonetic symbol sequence included in the candidate text with a replacement word corresponding to the phonetic symbol sequence, and determining a target text corresponding to the received speech data based on a result of the replacing.
Type: Grant
Filed: December 30, 2019
Date of Patent: January 17, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventor: Jihyun Lee
-
Patent number: 11557274
Abstract: Embodiments may provide improved techniques to assess model checkpoint stability on unseen data on-the-fly, so as to prevent unstable checkpoints from being saved, and to avoid or reduce the need for an expensive thorough evaluation. For example, a method may comprise passing a set of input sequences through a checkpoint of a sequence-to-sequence model in inference mode to obtain a set of generated sequences of feature vectors, determining whether each of a plurality of generated sequences of feature vectors is complete, counting a number of incomplete generated sequences of feature vectors among the plurality of generated sequences of feature vectors, generating a score indicating a stability of the model based on the count of incomplete generated sequences of feature vectors, and storing the model checkpoint when the score indicating the stability of the model is above a predetermined threshold.
Type: Grant
Filed: March 15, 2021
Date of Patent: January 17, 2023
Assignee: International Business Machines Corporation
Inventor: Vyacheslav Shechtman
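A toy version of the stability check: "incomplete" is approximated here as a generated sequence that never produced an end marker, and the threshold value is made up.

```python
STABILITY_THRESHOLD = 0.9
END_TOKEN = "<eos>"

def stability_score(generated_sequences):
    incomplete = sum(1 for seq in generated_sequences if END_TOKEN not in seq)
    return 1.0 - incomplete / len(generated_sequences)

def maybe_save_checkpoint(generated_sequences, save_fn):
    score = stability_score(generated_sequences)
    if score >= STABILITY_THRESHOLD:
        save_fn()                 # checkpoint looks stable on the probe inputs
        return True, score
    return False, score           # skip saving the unstable checkpoint

outputs = [["h", "i", "<eos>"], ["y", "o", "<eos>"], ["l", "o", "o", "p"]]   # last one never ended
print(maybe_save_checkpoint(outputs, lambda: None))   # (False, 0.666...)
```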
-
Patent number: 11537950
Abstract: This disclosure describes one or more implementations of a text sequence labeling system that accurately and efficiently utilize a joint-learning self-distillation approach to improve text sequence labeling machine-learning models. For example, in various implementations, the text sequence labeling system trains a text sequence labeling machine-learning teacher model to generate text sequence labels. The text sequence labeling system then creates and trains a text sequence labeling machine-learning student model utilizing the training and the output of the teacher model. Upon the student model achieving improved results over the teacher model, the text sequence labeling system re-initializes the teacher model with the learned model parameters of the student model and repeats the above joint-learning self-distillation framework. The text sequence labeling system then utilizes a trained text sequence labeling model to generate text sequence labels from input documents.
Type: Grant
Filed: October 14, 2020
Date of Patent: December 27, 2022
Assignee: Adobe Inc.
Inventors: Trung Bui, Tuan Manh Lai, Quan Tran, Doo Soon Kim
-
Patent number: 11538488
Abstract: Embodiments of the present disclosure provide methods and systems for processing a speech signal. The method can include: processing the speech signal to generate a plurality of speech frames; generating a first number of acoustic features based on the plurality of speech frames using a frame shift at a given frequency; and generating a second number of posteriori probability vectors based on the first number of acoustic features using an acoustic model, wherein each of the posteriori probability vectors comprises probabilities of the acoustic features corresponding to a plurality of modeling units, respectively.
Type: Grant
Filed: November 27, 2019
Date of Patent: December 27, 2022
Assignee: Alibaba Group Holding Limited
Inventors: Shiliang Zhang, Ming Lei, Wei Li, Haitao Yao
-
Patent number: 11538470
Abstract: A system includes at least one communication interface, at least one processor operatively connected to the at least one communication interface, and at least one memory operatively connected to the at least one processor and storing a plurality of natural language understanding (NLU) models. The at least one memory stores instructions that, when executed, cause the processor to receive first information associated with a user from an external electronic device associated with a user account, using the at least one communication interface, to select at least one of the plurality of NLU models, based on at least part of the first information, and to transmit the selected at least one NLU model to the external electronic device, using the at least one communication interface such that the external electronic device uses the selected at least one NLU model for natural language processing.
Type: Grant
Filed: June 29, 2020
Date of Patent: December 27, 2022
Assignee: Samsung Electronics Co., Ltd.
Inventors: Sean Minsung Kim, Jaeyung Yeo
-
Patent number: 11508355
Abstract: Systems and methods are disclosed herein for discerning aspects of user speech to determine user intent and/or other acoustic features of a sound input without the use of an ASR engine. To this end, a processor may receive a sound signal comprising raw acoustic data from a client device, and divide the data into acoustic units. The processor feeds the acoustic units through a first machine learning model to obtain a first output and determines a first mapping, using the first output, of each respective acoustic unit to a plurality of candidate representations of the respective acoustic unit. The processor feeds each candidate representation of the plurality through a second machine learning model to obtain a second output, determines a second mapping, using the second output, of each candidate representation to a known condition, and determines a label for the sound signal based on the second mapping.
Type: Grant
Filed: October 26, 2018
Date of Patent: November 22, 2022
Assignee: Interactions LLC
Inventors: Ryan Price, Srinivas Bangalore
-
Patent number: 11495228
Abstract: An apparatus including a user input receiver; a user voice input receiver; a display; and a processor. The processor is configured to: (a) based on a user input being received through the user input receiver, perform a function corresponding to voice input state for receiving a user voice input; (b) receive a user voice input through the user voice input receiver; (c) identify whether or not a text corresponding to the received user voice input is related to a pre-registered voice command or a prohibited expression; and (d) based on the text being related to the pre-registered voice command or the prohibited expression, control the display to display an indicator that the text is related to the pre-registered voice command or the prohibited expression. A method and non-transitory computer-readable medium are also provided.
Type: Grant
Filed: November 30, 2020
Date of Patent: November 8, 2022
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Nam-yeong Kwon, Kyung-mi Park