Speech Controlled System Patents (Class 704/275)
  • Patent number: 10862686
    Abstract: Provided are an application decryption method, a terminal, and a non-transitory computer-readable storage medium, relating to the technical field of terminals. In the method, a touch operation is acquired through a display screen of a terminal; fingerprint information of the touch operation is acquired through a fingerprint sensor located at a position corresponding to the touch operation, the fingerprint sensor being arranged below the display screen of the terminal; and a target application that is encrypted is decrypted in a case where the fingerprint information of the touch operation matches the encryption fingerprint information of the target application.
    Type: Grant
    Filed: August 16, 2018
    Date of Patent: December 8, 2020
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventor: Zhenzhen Chen
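    The decryption gate described in patent 10862686 reduces to a compare-then-unlock step: the fingerprint captured at the touch position is compared against the fingerprint enrolled when the application was encrypted, and decryption proceeds only on a match. The sketch below illustrates only that step; the `EncryptedApp` type, the byte-level similarity measure, and the 0.95 threshold are illustrative assumptions, not details from the patent.

    ```python
    from dataclasses import dataclass

    @dataclass
    class EncryptedApp:
        name: str
        enrolled_fingerprint: bytes  # template stored when the app was encrypted

    def similarity(template_a: bytes, template_b: bytes) -> float:
        """Toy similarity: fraction of matching bytes (real systems use minutiae matching)."""
        if not template_a or len(template_a) != len(template_b):
            return 0.0
        matches = sum(a == b for a, b in zip(template_a, template_b))
        return matches / len(template_a)

    def maybe_decrypt(app: EncryptedApp, captured_fingerprint: bytes,
                      threshold: float = 0.95) -> bool:
        """Decrypt (open) the target app only if the captured print matches the enrolled one."""
        if similarity(app.enrolled_fingerprint, captured_fingerprint) >= threshold:
            print(f"Decrypting and opening {app.name}")
            return True
        print("Fingerprint mismatch: application stays encrypted")
        return False

    if __name__ == "__main__":
        app = EncryptedApp("gallery", enrolled_fingerprint=bytes(range(16)))
        maybe_decrypt(app, bytes(range(16)))   # matches -> decrypted
        maybe_decrypt(app, bytes([0] * 16))    # mismatch -> stays locked
    ```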
  • Patent number: 10863279
    Abstract: The disclosure is directed to a voice-controlled Bluetooth headset, which includes a receiver, a storage module, an offline voice recognition module, and a Bluetooth module. The offline voice recognition module is used to activate and recognize a preset voice when a preset activation password is received. The Bluetooth module is electrically connected to the other modules and is used for system control, Bluetooth transmission, and processing of instructions output by the offline voice recognition module, and performs the corresponding functions. With the above structure, the present disclosure can implement voice control of the Bluetooth headset according to a simple preset password, without manual operation, and is simple and convenient to use.
    Type: Grant
    Filed: February 19, 2020
    Date of Patent: December 8, 2020
    Assignee: Wudi Industrial (Shanghai) Co., Ltd.
    Inventor: Peng Wu
  • Patent number: 10852155
    Abstract: A network apparatus receives a language request provided by a requesting user apparatus. The network apparatus generates and provides poll requests to responding user apparatuses and receives poll responses from the responding user apparatuses. Each poll response is associated with a language and indicates a current location of the corresponding responding user apparatus. Based on the poll responses, the network apparatus identifies areas or points of interest (POIs) that have a density of poll responses that (a) indicate a current location that corresponds to the area and/or POI and (b) is associated with a particular language. The network apparatus generates and provides a request response comprising information identifying at least one density area and/or POI. The requesting user apparatus receives the request response and provides information regarding the at least one density area and/or POI via an interactive user interface.
    Type: Grant
    Filed: February 4, 2019
    Date of Patent: December 1, 2020
    Assignee: HERE Global B.V.
    Inventors: Sophia Ramirez-Saenz, Chris Dougherty
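    As a rough illustration of the density step in patent 10852155, the sketch below aggregates poll responses by area/POI and keeps areas where the requested language is both frequent and dominant. The field names, thresholds, and sample data are invented for the example and are not taken from the patent.

    ```python
    from collections import Counter, defaultdict

    # Hypothetical poll responses: (responding device id, reported language, current area/POI)
    poll_responses = [
        ("dev1", "es", "Central Station"),
        ("dev2", "es", "Central Station"),
        ("dev3", "fr", "Central Station"),
        ("dev4", "es", "Harbor Market"),
        ("dev5", "es", "Harbor Market"),
        ("dev6", "es", "Harbor Market"),
    ]

    def dense_areas(responses, language, min_count=2, min_share=0.5):
        """Return areas/POIs whose poll responses show a density of the requested language."""
        per_area = defaultdict(Counter)
        for _, lang, area in responses:
            per_area[area][lang] += 1
        result = []
        for area, counts in per_area.items():
            total = sum(counts.values())
            if counts[language] >= min_count and counts[language] / total >= min_share:
                result.append((area, counts[language], total))
        return result

    print(dense_areas(poll_responses, "es"))
    # [('Central Station', 2, 3), ('Harbor Market', 3, 3)]
    ```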
  • Patent number: 10854191
    Abstract: Techniques for optimizing a system to improve an overall user satisfaction in a speech controlled system are described. A user speaks an utterance and the system compares an expected sum of user satisfaction values for each action to make a decision as to how best to process the utterance. As a result, the system may make a decision that decreases user satisfaction in the short term but increases user satisfaction in the long term. The system may estimate a user satisfaction value and associate the estimated user satisfaction value with a current dialog state. By tracking user satisfaction values over time, the system may train machine learning models to optimize the expected sum of user satisfaction values. This improves how the system selects an action or application to which to dispatch the dialog state and how a specific application selects an action or intent corresponding to the command.
    Type: Grant
    Filed: September 20, 2017
    Date of Patent: December 1, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Alborz Geramifard, Shiladitya Roy, Ruhi Sarikaya
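    The core idea of patent 10854191, choosing the action with the best expected sum of user satisfaction rather than the best immediate satisfaction, can be shown in a few lines. The satisfaction estimates, discount factor, and action names below are hypothetical stand-ins for the trained models the abstract mentions.

    ```python
    # Hypothetical per-action estimates for the current dialog state: immediate satisfaction
    # plus a learned estimate of expected future satisfaction (e.g., from a trained value model).
    candidate_actions = {
        "answer_directly":         {"immediate": 0.9, "expected_future": 0.2},
        "ask_clarifying_question": {"immediate": 0.4, "expected_future": 0.8},
    }

    def choose_action(actions, discount=0.9):
        """Pick the action with the highest expected *sum* of satisfaction, not just the
        immediate value -- so a short-term dip can win if it pays off later."""
        def expected_sum(est):
            return est["immediate"] + discount * est["expected_future"]
        return max(actions, key=lambda name: expected_sum(actions[name]))

    print(choose_action(candidate_actions))  # -> 'ask_clarifying_question'
    ```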
  • Patent number: 10855676
    Abstract: One or more techniques and/or systems are provided for audio verification. An audio signal, comprising a code for user verification, may be identified. A second audio signal is created comprising speech. The audio signal and the second audio signal may be altered to comprise a same or similar volume, pitch, amplitude, and/or speech rate. The audio signal and the second audio signal may be combined to generate a verification audio signal. The verification audio signal may be presented to a user for the user verification. Verification may be performed to determine whether the user has access to content or a service based upon user input, obtained in response to the user verification audio signal, matching the code within the user verification audio signal. In an example, the user verification may comprise verifying that the user is human.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: December 1, 2020
    Assignee: Oath Inc.
    Inventors: Manjana Chandrasekharan, Keiko Horiguchi, Amanda Joy Stent, Ricardo Alberto Baeza-Yates, Jeffrey Kuwano, Achint Oommen Thomas, Yi Chang
  • Patent number: 10855486
    Abstract: Embodiments of the present invention provide a method and system for dynamically controlling an appliance based on information received from a wearable device, to regulate the user's health. A wearable device is identified and configured to monitor at least one physiological aspect of the user. A controllable appliance with at least one sensor and at least one controllable setting is also identified. Health information of the user is received and utilized in generating a user profile which comprises parameters related to the health of the user. Data from the wearable device and data from the controllable appliance are analyzed and it is determined whether the data matches the parameters related to the health of the user. If the data does not match the parameters related to the health of the user, then at least one controllable setting of the at least one controllable appliance is adjusted.
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: December 1, 2020
    Assignee: ECOBEE INC.
    Inventors: Sandeep Bazar, Kaustubh I. Katruwar, Sandeep R. Patil, Sachin C. Punadikar
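    A minimal sketch of the control loop in patent 10855486: wearable and appliance sensor data are compared against profile parameters derived from the user's health information, and a controllable setting is adjusted when they do not match. The `HealthProfile` fields, thresholds, and thermostat example are assumptions made for illustration.

    ```python
    from dataclasses import dataclass

    @dataclass
    class HealthProfile:
        max_heart_rate: int = 100        # parameters derived from the user's health information
        preferred_temp_c: float = 21.0

    def adjust_appliance(heart_rate: int, room_temp_c: float, profile: HealthProfile):
        """Compare wearable and appliance sensor data against the profile parameters and
        return a new controllable setting when the data does not match the parameters."""
        if heart_rate > profile.max_heart_rate and room_temp_c > profile.preferred_temp_c:
            return {"thermostat_setpoint_c": profile.preferred_temp_c - 1.0}
        return None  # data matches the profile; leave settings unchanged

    print(adjust_appliance(heart_rate=112, room_temp_c=24.5, profile=HealthProfile()))
    # {'thermostat_setpoint_c': 20.0}
    ```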
  • Patent number: 10850745
    Abstract: An apparatus for recommending a function of a vehicle includes an input module, a memory, an output module, and a processor. The processor obtains intention information indicating an action associated with each of a plurality of sentences, nuance information indicating a positive, neutral, or negative meaning included in each of the plurality of sentences, and one or more keywords for executing a function associated with the intention information among a plurality of functions embedded in the vehicle by analyzing each of the plurality of sentences included in the conversation, determines a task, associated with the function, to be recommended to at least some of the plurality of users, based on the intention information, the nuance information, and the one or more keywords, and outputs a message recommending the task using the output module, when the end of the conversation is recognized.
    Type: Grant
    Filed: September 10, 2018
    Date of Patent: December 1, 2020
    Assignees: HYUNDAI MOTOR COMPANY, KIA MOTORS CORPORATION
    Inventors: Bi Ho Kim, Sung Soo Park
  • Patent number: 10849569
    Abstract: According to one embodiment, a biological information measurement device includes: a biological information measurer configured to carry out intermittent measurement of biological information of a user; a motion information measurer configured to measure motion information of the user; a feature calculator configured to calculate a feature from the motion information; a behavior state determiner configured to determine a behavior state of the user on the basis of the feature; and a measurement interval controller configured to select one intermittent measurement from a plurality of intermittent measurements having different measurement intervals on the basis of the determined behavior state of the user and control the biological information measurer.
    Type: Grant
    Filed: February 28, 2017
    Date of Patent: December 1, 2020
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Takashi Sudo, Takaya Matsuno, Masataka Osada
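    One plausible reading of patent 10849569 in code: a feature computed from motion data determines a behavior state, and the state selects one of several intermittent measurement intervals for the biological information measurer. The variance feature, state labels, and interval values below are assumptions for the example.

    ```python
    import statistics

    # Hypothetical mapping from a determined behavior state to a measurement interval (seconds).
    INTERVALS = {"resting": 300, "walking": 60, "exercising": 10}

    def classify_behavior(accel_samples):
        """Toy feature + behavior determination: variance of accelerometer magnitude."""
        var = statistics.pvariance(accel_samples)
        if var < 0.01:
            return "resting"
        return "walking" if var < 0.5 else "exercising"

    def next_measurement_interval(accel_samples):
        state = classify_behavior(accel_samples)
        return state, INTERVALS[state]

    print(next_measurement_interval([1.0, 1.01, 0.99, 1.0]))  # ('resting', 300)
    print(next_measurement_interval([0.2, 1.8, 0.1, 2.2]))    # ('exercising', 10)
    ```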
  • Patent number: 10847024
    Abstract: A method of controlling an external apparatus includes receiving user input information; obtaining apparatus information regarding a plurality of external apparatuses; selecting one or more external apparatuses, from the plurality of external apparatuses, which are communicable with and controllable based on the user input information; generating control information for controlling the one or more external apparatuses based on a user's input and the apparatus information; and transmitting a control command to the one or more external apparatuses, the control command being generated based on the received control information.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: November 24, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Hang-sik Shin, Jae-woo Ko, Se-jun Park
  • Patent number: 10848392
    Abstract: In one aspect, a first device includes a processor and storage accessible to the processor. The storage includes instructions executable by the processor to receive, via a digital assistant, input pertaining to a second device joining a network. The instructions are also executable by the processor to use the input to assist the second device in joining the network.
    Type: Grant
    Filed: February 15, 2018
    Date of Patent: November 24, 2020
    Assignee: Lenovo (Singapore) Pte. Ltd.
    Inventors: Robert James Kapinos, Scott Wentao Li, Timothy Winthrop Kingsbury, Russell Speight VanBlon
  • Patent number: 10835822
    Abstract: Embodiments of the present invention relate to the field of Internet technologies, and disclose an application control method and a terminal device. The method includes: determining whether a currently running application meets a condition for casting a skill, and if the currently running application meets the condition for casting a skill, outputting a skill name corresponding to at least one castable skill; detecting a target skill name input by a user in voice mode; recognizing the target skill name and determining whether the target skill name belongs to the output skill name corresponding to the at least one castable skill; and if the target skill name belongs to the output skill name corresponding to the at least one castable skill, casting a skill corresponding to the target skill name in the application. By implementing embodiments of the present invention, applications may be controlled easily.
    Type: Grant
    Filed: December 15, 2016
    Date of Patent: November 17, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Delin Zhang
  • Patent number: 10839167
    Abstract: A system described herein may provide for the adaptation and/or expansion of a natural language processing (“NLP”) platform that supports only a limited quantity of intents, such that the described system may support an unlimited (or nearly unlimited) quantity of intents. For example, a hierarchical structure of agents may be used, where each agent includes multiple intents. A top-level (e.g., master) agent may handle initial user interactions, and may indicate a next-level agent to handle subsequent interactions.
    Type: Grant
    Filed: December 4, 2018
    Date of Patent: November 17, 2020
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Trisha Mahajan, Stephen Soltys, Neil Thomas Razzano, Sankar Shanmugam
  • Patent number: 10839803
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for contextual hotwords are disclosed. In one aspect, a method, during a boot process of a computing device, includes the actions of determining, by a computing device, a context associated with the computing device. The actions further include, based on the context associated with the computing device, determining a hotword. The actions further include, after determining the hotword, receiving audio data that corresponds to an utterance. The actions further include determining that the audio data includes the hotword. The actions further include, in response to determining that the audio data includes the hotword, performing an operation associated with the hotword.
    Type: Grant
    Filed: March 25, 2019
    Date of Patent: November 17, 2020
    Assignee: Google LLC
    Inventors: Christopher Thaddeus Hughes, Ignacio Lopez Moreno, Aleksandar Kracun
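    A minimal sketch of the contextual-hotword idea in patent 10839803: the set of active hotwords depends on the device's current context, and an utterance triggers an operation only if it contains a hotword active in that context. The contexts, hotwords, and transcript-based matching shortcut are illustrative assumptions (a real system detects hotwords acoustically, not on transcripts).

    ```python
    # Hypothetical mapping from device context to the extra hotwords active in that context.
    DEFAULT_HOTWORDS = {"ok device"}
    CONTEXTUAL_HOTWORDS = {
        "music_playing": {"next", "pause"},
        "timer_running": {"stop"},
    }

    def active_hotwords(context: str) -> set:
        return DEFAULT_HOTWORDS | CONTEXTUAL_HOTWORDS.get(context, set())

    def handle_utterance(transcript: str, context: str) -> str:
        """If the utterance contains a currently active hotword, perform its operation."""
        words = set(transcript.lower().split())
        hits = words & active_hotwords(context)
        return f"operation for {hits.pop()!r}" if hits else "ignored"

    print(handle_utterance("stop", "timer_running"))  # operation for 'stop'
    print(handle_utterance("stop", "idle"))           # ignored (hotword not active here)
    ```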
  • Patent number: 10839670
    Abstract: Example meters disclosed herein include a prompting indicator to emit a prompt for user input. Disclosed example meters also include a controller to determine whether the meter is to transition to a first prompting mode, or whether the meter is to transition to a second prompting mode different from the first prompting mode, the meter to be able to operate in at least a quiet mode, the first prompting mode or the second prompting mode. The controller is also to activate the prompting indicator and a light projector when the meter is to transition to the second prompting mode, the light projector to project light. The controller is further to activate the prompting indicator, but not activate the light projector, when the meter is to transition to the first prompting mode.
    Type: Grant
    Filed: August 22, 2018
    Date of Patent: November 17, 2020
    Assignee: The Nielsen Company (US), LLC
    Inventors: James Joseph Vitt, Sachin Suresh Nilugal
  • Patent number: 10839799
    Abstract: Methods, systems, and apparatus for receiving data identifying an application and a voice command trigger term, validating the received data, inducting the received data to generate an intent that specifies the application, the voice command trigger term, and one or more other voice command trigger terms that are determined based at least on the voice command trigger term, and storing the intent at a contextual intent database, wherein the contextual intent database comprises one or more other intents.
    Type: Grant
    Filed: May 23, 2018
    Date of Patent: November 17, 2020
    Assignee: GOOGLE LLC
    Inventors: Bo Wang, Sunil Vemuri, Nitin Mangesh Shetti, Pravir Kumar Gupta, Scott B Huffman, Javier Alejandro Rey, Jeffrey A. Boortz
  • Patent number: 10832655
    Abstract: A method for providing a context awareness service is provided. The method includes defining a control command for the context awareness service depending on a user input, triggering a playback mode and the context awareness service in response to a user selection, receiving external audio through a microphone in the playback mode, determining whether the received audio corresponds to the control command, and executing a particular action assigned to the control command when the received audio corresponds to the control command.
    Type: Grant
    Filed: August 7, 2019
    Date of Patent: November 10, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jin Park, Jiyeon Jung
  • Patent number: 10827973
    Abstract: A system and method for measuring an infant's pain intensity is presented. The method for assessing an infant's pain intensity based on facial expressions is comprised of three main stages: detection of an infant's face in video sequence followed by preprocessing operations including face alignment; expression segmentation; and expression recognition or classification. Also presented is a multimodal system for assessing an infant's pain intensity using the following classifiers: facial expression classifier; vital sign classifier; crying recognition classifier; body motion classifier and state of arousal classifier. Each classifier generates an individual score, all of which are normalized and weighed to generate a total pain score that indicates pain intensity.
    Type: Grant
    Filed: January 6, 2016
    Date of Patent: November 10, 2020
    Assignee: University of South Florida
    Inventors: Ghadh A. Alzamzmi, Dmitry Goldgof, Yu Sun, Rangachar Kasturi, Terri Ashmeade
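    The multimodal scoring step of patent 10827973 can be summarized as a normalized, weighted combination of the individual classifier outputs. The per-modality weights and scores below are invented for the example; the patent does not specify these values.

    ```python
    # Hypothetical per-modality scores in [0, 1] and weights; the total pain score is a
    # normalized weighted combination, as the abstract describes at a high level.
    WEIGHTS = {
        "facial_expression": 0.35,
        "crying": 0.25,
        "body_motion": 0.15,
        "vital_signs": 0.15,
        "arousal_state": 0.10,
    }

    def total_pain_score(modality_scores: dict) -> float:
        """Combine individual classifier outputs into a single pain-intensity score."""
        used = {m: w for m, w in WEIGHTS.items() if m in modality_scores}
        norm = sum(used.values()) or 1.0
        return sum(modality_scores[m] * w for m, w in used.items()) / norm

    print(round(total_pain_score({
        "facial_expression": 0.8, "crying": 0.6, "body_motion": 0.3,
        "vital_signs": 0.5, "arousal_state": 0.4,
    }), 3))  # 0.59
    ```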
  • Patent number: 10832145
    Abstract: A technique for resolving entities provided in a question includes creating respective entity context vectors (ECVs) for respective entities in an applicable knowledge graph (KG). A question is received from a user. A first entity is identified in the question. The first entity is associated with a matching one of the entities in the KG. An ECV for the matching one of the entities in the KG is modified. An answer to the question is generated based on the modified ECV.
    Type: Grant
    Filed: October 5, 2015
    Date of Patent: November 10, 2020
    Assignee: International Business Machines Corporation
    Inventors: Swaminathan Chandrasekaran, Joseph M. Kaufmann, Lakshminarayanan Krishnamurthy
  • Patent number: 10832006
    Abstract: A method, apparatus and computer program product for responding to an indirect utterance in a dialogue between a user and a conversational system is described. An indirect utterance is received. A parse structure of the indirect utterance is generated. The indirect utterance is an utterance which does not match a user goal expressed as elements of a knowledge graph. The parse structure is connected through the knowledge graph to a user goal to issue a user request which is not stated in the indirect utterance. The parse structure is connected using a matching process which matches the parse structure with the connected user goal in the knowledge graph according to a similarity of the parse structure and a portion of the knowledge graph including the connected user goal. A system response is performed based on the connected user goal.
    Type: Grant
    Filed: November 7, 2019
    Date of Patent: November 10, 2020
    Assignee: International Business Machines Corporation
    Inventors: Mustafa Canim, Robert G Farrell, Achille B Fokoue-Nkoutche, John A Gunnels, Ryan A Musa, Vijay A Saraswat
  • Patent number: 10829130
    Abstract: Driver assistance is provided. An issue corresponding to a driver of a vehicle is automatically identified based on analysis of collected data. A set of actions is selected to address the identified issue corresponding to the driver based on the analysis of the collected data and preference data of the driver. The driver is notified of the selected set of actions to address the identified issue corresponding to the driver.
    Type: Grant
    Filed: October 30, 2018
    Date of Patent: November 10, 2020
    Assignee: International Business Machines Corporation
    Inventors: Sarbajit K. Rakshit, James E Bostick, John M. Ganci, Jr., Martin G. Keen
  • Patent number: 10832685
    Abstract: According to an embodiment, a speech processing device includes an extractor, a classifier, a similarity calculator, and an identifier. The extractor is configured to extract a speech feature from utterance data. The classifier is configured to classify the utterance data into a set of utterances for each speaker based on the extracted speech feature. The similarity calculator is configured to calculate a similarity between the speech feature of the utterance data included in the set and each of a plurality of speaker models. The identifier is configured to identify a speaker for each set based on the calculated similarity.
    Type: Grant
    Filed: September 1, 2016
    Date of Patent: November 10, 2020
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Ning Ding, Makoto Hirohata
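    Setting aside feature extraction and the clustering step, the final identification stage of patent 10832685 can be sketched as: average the speech features of each per-speaker utterance set and assign the speaker model with the highest similarity. Cosine similarity and the toy embeddings below are assumptions for illustration.

    ```python
    import math

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv) if nu and nv else 0.0

    # Hypothetical speaker models (mean embedding vectors) and utterance sets produced by
    # the classifier step that grouped utterances by speaker.
    speaker_models = {"alice": [0.9, 0.1, 0.0], "bob": [0.1, 0.8, 0.3]}
    utterance_sets = {
        "set_A": [[0.85, 0.15, 0.05], [0.92, 0.08, 0.0]],
        "set_B": [[0.15, 0.75, 0.35]],
    }

    def identify(utterance_sets, speaker_models):
        """For each set of utterances, average its features and pick the most similar model."""
        labels = {}
        for set_id, feats in utterance_sets.items():
            mean = [sum(col) / len(feats) for col in zip(*feats)]
            labels[set_id] = max(speaker_models,
                                 key=lambda spk: cosine(mean, speaker_models[spk]))
        return labels

    print(identify(utterance_sets, speaker_models))  # {'set_A': 'alice', 'set_B': 'bob'}
    ```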
  • Patent number: 10824664
    Abstract: Embodiments of the present disclosure disclose a method and apparatus for processing information. A specific implementation of the method includes: parsing a received voice query request sent by a user to obtain text query information corresponding to the voice query request; obtaining text push information obtained by searching using the text query information; processing the text push information to obtain to-be-pushed information corresponding to the text push information; and playing the to-be-pushed information. The implementation can play the information aloud when it is not convenient for the user to browse it, so that the user can obtain the information in time.
    Type: Grant
    Filed: December 27, 2017
    Date of Patent: November 3, 2020
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Hualong Zu, Haiguang Yuan, Ran Xu, Chen Chen, Lei Shi, Xin Li, Liou Chen
  • Patent number: 10825468
    Abstract: Systems and methods for providing natural language annunciations are provided. In one embodiment, a method can include receiving a set of data indicative of a user input associated with one or more travel modes. Information indicative of the one or more travel modes can be provided for display on a first display device. The method can further include generating an output indicative of a natural language annunciation based at least in part on the first set of data. The natural language annunciation can be indicative of the one or more travel modes using natural language syntax. The method can include sending the output indicative of the natural language annunciation to one or more other computing devices associated with a second display device.
    Type: Grant
    Filed: June 8, 2017
    Date of Patent: November 3, 2020
    Assignee: GE AVIATION SYSTEMS LIMITED
    Inventor: George R. Henderson
  • Patent number: 10818294
    Abstract: A voice activation system for a vehicle. The system includes at least one sound panel capable of transmitting vibrations of a user's voice from the outside of the vehicle into an inside area of the vehicle. A laser listening device is operably connected to the panel for receiving vibrations from the user's voice. A controller receives a pre-identified command of the user from the laser listening device and performs an action in the vehicle in response thereto.
    Type: Grant
    Filed: February 16, 2018
    Date of Patent: October 27, 2020
    Assignee: Magna Exteriors, Inc.
    Inventor: Steven S. Grgac
  • Patent number: 10818197
    Abstract: A construction site status monitoring device is provided including processing circuitry configured to receive teaching data from a construction device in a teaching mode based on an operator performing an operation with the construction device and generate an operation profile based on the teaching data for execution by one or more construction devices. The operation profile defines parameters associated with the operation to enable one or more construction devices to repeat the operation in an operate mode.
    Type: Grant
    Filed: March 3, 2017
    Date of Patent: October 27, 2020
    Assignee: HUSQVARNA AB
    Inventors: Ulf Pettersson, Johan Berg, Anders Erestam
  • Patent number: 10811031
    Abstract: Embodiments of the present disclosure disclose a method and a device for obtaining an amplitude for a sound zone, a related electronic device and a storage medium. The method includes the following. Speech data of a target sound zone is obtained in real time. The speech data includes audio signals corresponding to a plurality of sampling points. The audio signals are stored by comparing an amplitude of a current audio signal to be stored with an amplitude of each stored audio signal to determine whether to store the current audio signal according to a comparison result. A current amplitude for the target sound zone is calculated according to amplitudes of all stored audio signals.
    Type: Grant
    Filed: December 26, 2019
    Date of Patent: October 20, 2020
    Assignee: Baidu Online Network Technology (Beijing) Co., Ltd.
    Inventors: Hanying Peng, Nengjun Ouyang
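    One way to read the compare-then-store logic of patent 10811031 is as a fixed-capacity store of the loudest samples, with the current zone amplitude computed from the stored values. The heap-based store, its capacity, and the use of a mean are assumptions made for this sketch, not details from the patent.

    ```python
    import heapq

    class ZoneAmplitude:
        """Keep only the N largest-amplitude samples seen so far for a sound zone and
        report the zone amplitude as their mean."""

        def __init__(self, capacity: int = 4):
            self.capacity = capacity
            self._stored = []  # min-heap of absolute amplitudes

        def offer(self, sample: float) -> None:
            amp = abs(sample)
            if len(self._stored) < self.capacity:
                heapq.heappush(self._stored, amp)
            elif amp > self._stored[0]:          # louder than the quietest stored sample
                heapq.heapreplace(self._stored, amp)
            # otherwise: discard, as the compare-then-store step describes

        def current_amplitude(self) -> float:
            return sum(self._stored) / len(self._stored) if self._stored else 0.0

    zone = ZoneAmplitude(capacity=3)
    for s in [0.1, -0.8, 0.05, 0.6, -0.9, 0.2]:
        zone.offer(s)
    print(round(zone.current_amplitude(), 3))  # mean of the three largest: (0.9+0.8+0.6)/3 ≈ 0.767
    ```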
  • Patent number: 10811008
    Abstract: A system for processing a user utterance is provided. The system includes at least one network interface; at least one processor operatively connected to the at least one network interface; and at least one memory operatively connected to the at least one processor, wherein the at least one memory stores a plurality of specified sequences of states of at least one external electronic device, wherein each of the specified sequences is associated with a respective one of domains, wherein the at least one memory further stores instructions that, when executed, cause the at least one processor to receive first data associated with the user utterance provided via a first of the at least one external electronic device, wherein the user utterance includes a request for performing a task using the first of the at least one external device.
    Type: Grant
    Filed: June 21, 2018
    Date of Patent: October 20, 2020
    Assignee: Samsung Electronics Co., Ltd
    Inventors: Kyoung Gu Woo, Woo Up Kwon, Jin Woo Park, Eun Taek Lim, Joo Hyuk Jeon, Ji Hyun Kim, Dong Ho Jang
  • Patent number: 10813195
    Abstract: A lighting device includes a microphone, a camera, and a controller. The controller is configured to control a light source of the lighting device and determine whether an utterance captured by the microphone or a gesture captured by the camera corresponds to a wake-word. The controller is further configured to generate a command based on at least an image of an item captured by the camera if the controller determines that the utterance or the gesture corresponds to the wake-word. The controller is also configured to send the command to a cloud server and to provide a response to the command, where the response is received from the cloud server.
    Type: Grant
    Filed: February 7, 2020
    Date of Patent: October 20, 2020
    Assignee: SIGNIFY HOLDING B.V.
    Inventors: Pengseng Tan, Nam Chin Cho, Vaibhav Chavan, Parth Joshi
  • Patent number: 10803251
    Abstract: A method and device for extracting Action of Interest (AOI) from natural language sentences is disclosed. The method includes creating an input vector comprising a plurality of parameters for each target word in a sentence inputted by a user. The method further includes processing, for each target word, the input vector through a trained neural network with RELU activation, which is trained to identify AOI from a plurality of sentences. The method includes assigning AOI tags to each target word in the sentence based on processing of the associated input vector through the trained neural network with RELU activation. The method further includes extracting AOI text from the sentence based on the AOI tags assigned to each target word in the sentence. The method further includes providing a response to the sentence inputted by the user based on the AOI text extracted from the sentence.
    Type: Grant
    Filed: August 24, 2018
    Date of Patent: October 13, 2020
    Assignee: Wipro Limited
    Inventors: Arindam Chatterjee, Kartik Subodh Ballal
  • Patent number: 10803253
    Abstract: A method and device for extracting Point of Interest (POI) from natural language sentences is disclosed. The method includes creating an input vector comprising a plurality of parameters for each target word in a sentence inputted by a user. The method further includes processing, for each target word, the input vector through a trained bidirectional LSTM neural network, which is trained to identify POI from a plurality of sentences. The method includes associating POI tags to each target word in the sentence based on processing of the associated input vector through the trained bidirectional LSTM neural network. The method further includes extracting POI text from the sentence based on the POI tags associated with each target word in the sentence. The method further includes providing a response to the sentence inputted by the user based on the POI text extracted from the sentence.
    Type: Grant
    Filed: August 24, 2018
    Date of Patent: October 13, 2020
    Assignee: Wipro Limited
    Inventors: Arindam Chatterjee, Kartik Subodh Ballal
  • Patent number: 10805465
    Abstract: A system includes one or more processors configured to receive call-specific data during a call between a customer and a customer service representative, and the call-specific data includes a verbal input. The one or more processors are configured to determine one or more characteristics of the verbal input and to determine an initial inquiry of the customer based at least in part on the one or more characteristics of the verbal input. The one or more processors are also configured to determine one or more follow-up inquiries based at least in part on the initial inquiry and to provide information related to the one or more follow-up questions in a window on a display of a computing system for visualization by the customer service representative.
    Type: Grant
    Filed: December 20, 2018
    Date of Patent: October 13, 2020
    Assignee: United Services Automobile Association (USAA)
    Inventors: Emily Kathleen Krebs, Victor Kwak, Rachel Ann Krebs
  • Patent number: 10803869
    Abstract: Methods and devices for enabling and disabling applications using voice are described herein. In some embodiments, an individual speaks an utterance to their electronic device, which may send audio data representing the utterance to a backend system. The backend system may generate text data representing the utterance, and may determine that an intent of the utterance was for an application to be enabled or disabled for their user account on the backend system. If, for instance, the intent was to enable the application, the backend system may receive one or more rules for performing functionalities of the application, as well as one or more sample templates of sample utterances and sample responses that future utterances may use when requesting the application. Furthermore, one or more invocation phrases that may be used within the future utterances to invoke the application may be received, along with slot values for the sample templates.
    Type: Grant
    Filed: June 20, 2019
    Date of Patent: October 13, 2020
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Shaman D'Souza, Ian Suttle, Srikanth Nori, Rajiv Reddy, Amol Kanitkar, Tina Orooji
  • Patent number: 10803252
    Abstract: A method and device for extracting attributes associated with Center of Interest (COI) from natural language sentences is disclosed. The method includes creating an input vector comprising a plurality of parameters for each target word in a sentence inputted by a user. The method further includes processing, for each target word, the input vector through a trained bidirectional GRU neural network, which is trained to identify attributes associated with COI from a plurality of sentences. The method includes associating COI attribute tags to each target word in the sentence based on processing of the associated input vector through the trained bidirectional GRU neural network. The method further includes extracting attributes from the sentence based on the COI attribute tags associated with each target word in the sentence. The method further includes providing a response to the sentence inputted by the user based on the attributes extracted from the sentence.
    Type: Grant
    Filed: August 24, 2018
    Date of Patent: October 13, 2020
    Assignee: Wipro Limited
    Inventors: Arindam Chatterjee, Kartik Subodh Ballal
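    The three Wipro patents above (10803251, 10803253, and 10803252) describe the same per-word tagging pattern with different networks: a plain network with RELU activation, a bidirectional LSTM, and a bidirectional GRU, respectively. A minimal PyTorch sketch of the bidirectional-GRU variant follows; the dimensions, three-tag scheme, and random input vectors are illustrative assumptions, not values from the patents.

    ```python
    import torch
    import torch.nn as nn

    class COIAttributeTagger(nn.Module):
        """Per-word tagger in the style the abstracts describe: each target word's input
        vector passes through a bidirectional recurrent layer, and a linear head assigns
        a tag (e.g., B-ATTR / I-ATTR / O) to every word."""

        def __init__(self, input_dim=16, hidden_dim=32, num_tags=3):
            super().__init__()
            self.bigru = nn.GRU(input_dim, hidden_dim, batch_first=True, bidirectional=True)
            self.tag_head = nn.Linear(2 * hidden_dim, num_tags)

        def forward(self, word_vectors):            # (batch, seq_len, input_dim)
            states, _ = self.bigru(word_vectors)    # (batch, seq_len, 2 * hidden_dim)
            return self.tag_head(states)            # per-word tag scores

    sentence = torch.randn(1, 5, 16)                # 5 words, each as a 16-parameter input vector
    tag_ids = COIAttributeTagger()(sentence).argmax(dim=-1)
    print(tag_ids.shape)                            # torch.Size([1, 5])
    ```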
  • Patent number: 10800043
    Abstract: Disclosed herein are an interaction apparatus and method. The interaction apparatus includes an input unit for receiving multimodal information including an image and a voice of a target to allow the interaction apparatus to interact with the target, a recognition unit for recognizing turn-taking behavior of the target using the multimodal information, and an execution unit for taking an activity for interacting with the target based on results of recognition of the turn-taking behavior.
    Type: Grant
    Filed: November 30, 2018
    Date of Patent: October 13, 2020
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Cheon-Shu Park, Jae-Hong Kim, Jae-Yeon Lee, Min-Su Jang
  • Patent number: 10796098
    Abstract: A new technology is provided for predicting manipulability in response even to an instruction with missing information, in an object manipulation task in which a robot manipulates some kind of object. An instruction understanding system includes an obtaining engine configured to obtain a linguistic expression of a name of an object to be manipulated and a linguistic expression of a situation where the object corresponding to the name is placed in a real environment, and a classifier configured to receive input of the linguistic expression of the name and the linguistic expression of the situation and output manipulability of the object corresponding to the name in the real environment.
    Type: Grant
    Filed: September 6, 2018
    Date of Patent: October 6, 2020
    Assignee: NATIONAL INSTITUTE OF INFORMATION AND COMMUNICATIONS TECHNOLOGY
    Inventors: Komei Sugiura, Hisashi Kawai
  • Patent number: 10789959
    Abstract: Techniques for training a speaker recognition model used for interacting with a digital assistant are provided. In some examples, user authentication information is obtained at a first time. At a second time, a user utterance representing a user request is received. A voice print is generated from the user utterance. A determination is made as to whether a plurality of conditions are satisfied. The plurality of conditions includes a first condition that the user authentication information corresponds to one or more authentication credentials assigned to a registered user of an electronic device. The plurality of conditions further includes a second condition that the first time and the second time are not separated by more than a predefined time period. In accordance with a determination that the plurality of conditions are satisfied, a speaker profile assigned to the registered user is updated based on the voice print.
    Type: Grant
    Filed: June 4, 2018
    Date of Patent: September 29, 2020
    Assignee: Apple Inc.
    Inventor: Sachin S. Kajarekar
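    Patent 10789959 gates the speaker-profile update on two conditions: the authentication credentials belong to the registered user, and the authentication and the utterance are close enough in time. The sketch below shows only that gating logic; the five-minute window, the identifiers, and the list-based "profile" are placeholders for the patent's predefined period and enrollment step.

    ```python
    from datetime import datetime, timedelta

    MAX_GAP = timedelta(minutes=5)   # stand-in for the "predefined time period"

    def maybe_update_speaker_profile(auth_user_id, registered_user_id,
                                     auth_time: datetime, utterance_time: datetime,
                                     voice_print, profile: list) -> bool:
        """Update the registered user's speaker profile only if (1) the authentication
        credentials belong to that user and (2) authentication and utterance are close in time."""
        credentials_ok = auth_user_id == registered_user_id
        recent_enough = abs(utterance_time - auth_time) <= MAX_GAP
        if credentials_ok and recent_enough:
            profile.append(voice_print)   # stand-in for enrolling the new voice print
            return True
        return False

    profile = []
    t0 = datetime(2020, 9, 29, 12, 0, 0)
    print(maybe_update_speaker_profile("user42", "user42", t0, t0 + timedelta(minutes=2),
                                       voice_print=[0.1, 0.3], profile=profile))  # True
    print(maybe_update_speaker_profile("user42", "user42", t0, t0 + timedelta(hours=1),
                                       voice_print=[0.2, 0.4], profile=profile))  # False
    ```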
  • Patent number: 10789953
    Abstract: A system and method for providing a voice assistant including receiving, at a first device, a first audio input from a user requesting a first action; performing automatic speech recognition on the first audio input; obtaining a context of the user; performing natural language understanding based on the speech recognition of the first audio input; and taking the first action based on the context of the user and the natural language understanding.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: September 29, 2020
    Assignee: XBrain, Inc.
    Inventors: Gregory Renard, Mathias Herbaux
  • Patent number: 10783889
    Abstract: The present disclosure is generally related to a data processing system to validate vehicular functions in a voice activated computer network environment. The data processing system can improve the efficiency of the network by discarding action data structures and requests that are invalid prior to their transmission across the network. The system can invalidate requests by comparing attributes of a vehicular state to attributes of a request state.
    Type: Grant
    Filed: October 3, 2017
    Date of Patent: September 22, 2020
    Assignee: GOOGLE LLC
    Inventors: Haris Ramic, Vikram Aggarwal, Moises Morgenstern Gali, David Roy Schairer, Yao Chen
  • Patent number: 10783888
    Abstract: Disclosed is an apparatus and method for determining which controllable device an audible command is directed towards, the method comprising: receiving at each of two or more controlling devices the audible command signal, the audible command being directed to control at least one of two or more controllable devices controlled by a respective one of the two or more controlling devices; digitizing each of the received audible command signals; attaching a unique identifier to each digitized audible command so as to uniquely correlate it to a respective controlling device; determining a magnitude of each of the digitized audible command; determining a digitized audible command with the greatest magnitude, and further determining to which controlling device the audible command is directed to on the basis of the unique identifier associated with the digitized audible command with the greatest magnitude; performing speech recognition on the digitized audible command with the greatest magnitude; and forwarding a com
    Type: Grant
    Filed: February 11, 2019
    Date of Patent: September 22, 2020
    Assignee: Crestron Electronics Inc.
    Inventors: Fred Bargetzi, Ara Seferian, Josh Stene, Mark LaBosco
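    The arbitration step of patent 10783888 can be illustrated directly: each controlling device digitizes the same audible command, tags it with a unique identifier, and the device whose capture has the greatest magnitude is taken as the intended target. The device names, sample values, and peak-magnitude measure below are assumptions for the example.

    ```python
    import uuid

    # Hypothetical digitized captures of the same spoken command at several controlling devices.
    captures = [
        {"device": "living_room_hub", "samples": [0.02, -0.05, 0.04]},
        {"device": "kitchen_hub",     "samples": [0.40, -0.55, 0.47]},  # closest to the speaker
        {"device": "bedroom_hub",     "samples": [0.10, -0.12, 0.09]},
    ]

    def pick_target_controller(captures):
        """Tag each digitized command with a unique identifier, measure its magnitude, and
        route speech recognition to the controller whose capture was loudest."""
        for c in captures:
            c["id"] = str(uuid.uuid4())
            c["magnitude"] = max(abs(s) for s in c["samples"])
        winner = max(captures, key=lambda c: c["magnitude"])
        return winner["device"], winner["id"]

    device, capture_id = pick_target_controller(captures)
    print(device)  # kitchen_hub -- its capture has the greatest magnitude
    ```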
  • Patent number: 10783189
    Abstract: Among other things, this document describes a computer-implemented method for storing and retrieving information about the locations of objects. The method can include receiving a first query that includes one or more terms identifying an object. The first query can be determined to include a command to store location information for the object. The first query can be parsed to determine identifying information for the object, and a location can be determined for the object. The method further includes identifying one or more attributes of the object that are not specified in the first query, and causing a first set of data to be stored that characterizes the identifying information for the objet, the location of the object, and the one or more attributes of the object.
    Type: Grant
    Filed: December 20, 2017
    Date of Patent: September 22, 2020
    Assignee: Google LLC
    Inventor: Ajay Joshi
  • Patent number: 10783872
    Abstract: A speech-enabled dialog system responds to a plurality of wake-up phrases. Based on which wake-up phrase is detected, the system's configuration is modified accordingly. Various configurable aspects of the system include selection and morphing of a text-to-speech voice; configuration of acoustic model, language model, vocabulary, and grammar; configuration of a graphic animation; configuration of virtual assistant personality parameters; invocation of a particular user profile; invocation of an authentication function; and configuration of an open sound. Configuration depends on a target market segment. Configuration also depends on the state of the dialog system, such as whether a previous utterance was an information query.
    Type: Grant
    Filed: January 13, 2019
    Date of Patent: September 22, 2020
    Assignee: SoundHound, Inc.
    Inventors: Monika Almudafar-Depeyrot, Keyvan Mohajer, Mark Stevans
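    At its simplest, the behavior described in patent 10783872 is a lookup from the detected wake-up phrase to a bundle of configuration choices, optionally adjusted by dialog state. The phrases, configuration keys, and dialog-state rule below are invented for illustration.

    ```python
    # Hypothetical per-wake-phrase configurations; detecting a given phrase switches the
    # dialog system's TTS voice, vocabulary/grammar domain, and assistant persona.
    WAKE_PHRASE_CONFIGS = {
        "hey chef":  {"tts_voice": "warm_low",   "grammar": "cooking",   "persona": "playful"},
        "ok butler": {"tts_voice": "crisp_high", "grammar": "home_ctrl", "persona": "formal"},
    }

    def configure_for_wake_phrase(detected_phrase: str, dialog_state: dict) -> dict:
        config = dict(WAKE_PHRASE_CONFIGS.get(detected_phrase, {}))
        # Configuration may also depend on dialog state, e.g. after an information query.
        if dialog_state.get("previous_turn") == "information_query":
            config["open_sound"] = "soft_chime"
        return config

    print(configure_for_wake_phrase("hey chef", {"previous_turn": "information_query"}))
    # {'tts_voice': 'warm_low', 'grammar': 'cooking', 'persona': 'playful', 'open_sound': 'soft_chime'}
    ```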
  • Patent number: 10785365
    Abstract: A system senses audio, imagery, and/or other stimulus from a user's environment, and responds to fulfill user desires. In one particular arrangement, a discovery session is launched when the user speaks a cueing expression, which serves to switch the system from a lower activity state to a heightened alert state. The system may recognize that the speech expresses a user request that requires analysis of camera-captured imagery to fulfill. In response the system can apply an operation, such as a recognition operation (e.g., barcode decoding), to the imagery and take an action based on resulting information. Operation of the system can be aided by collateral information, such as context. A great number of other features and arrangements are also detailed.
    Type: Grant
    Filed: June 12, 2017
    Date of Patent: September 22, 2020
    Assignee: Digimarc Corporation
    Inventors: Tony F. Rodriguez, Geoffrey B. Rhoads, Bruce L. Davis
  • Patent number: 10776977
    Abstract: A device includes a processor and a memory that stores predetermined data including a progressive transition rule and animation models. Each of the animation models corresponds to a respective phoneme. The memory stores instructions including receiving a request from a user and obtaining an answer to the request. The answer includes first and second indicators that correspond to first and second phonemes. The instructions include, according to the first indicator, identifying a first animation model that corresponds to the first phoneme. The instructions include, according to the second indicator, identifying a second animation model that corresponds to the second phoneme. The instructions include generating a transition animation model according to the progressive transition rule using the first and second animation models. The instructions include generating images according to the first, second, and transition animation models. The instructions include outputting the images to the user via a display.
    Type: Grant
    Filed: February 11, 2019
    Date of Patent: September 15, 2020
    Assignee: TD Ameritrade IP Company, Inc.
    Inventor: Abd Alrazzak Habra
  • Patent number: 10769384
    Abstract: A system and method for intelligently configuring a machine learning-based dialogue system includes a conversational deficiency assessment of a target dialog system, wherein implementing the conversational deficiency assessment includes: (i) identifying distinct corpora of mishandled utterances based on an assessment of the distinct corpora of dialogue data; (ii) identifying candidate corpus of mishandled utterances from the distinct corpora of mishandled utterances as suitable candidates for building new dialogue competencies for the target dialogue system if candidate metrics of the candidate corpus of mishandled utterances satisfy a candidate threshold; building the new dialogue competencies for the target dialogue system for each of the candidate corpus of mishandled utterances having candidate metrics that satisfy the candidate threshold; and configuring a dialogue system control structure for the target dialogue system based on the new dialogue competencies, wherein the dialogue system control structure
    Type: Grant
    Filed: March 10, 2020
    Date of Patent: September 8, 2020
    Assignee: Clinc, Inc.
    Inventors: Jason Mars, Lingjia Tang, Michael A. Laurenzano, Johann Hauswald, Parker Hill, Yiping Kang, Yunqi Zhang
  • Patent number: 10770061
    Abstract: A method for confirming a trigger in an audio input detected by a voice-activated intelligent device. The device listens for and detects a trigger in the audio input and confirms whether the trigger is intended to wake the device; if confirmed, the device is instructed to activate. If the trigger cannot be confirmed, the device is instructed to ignore the trigger. The step of confirming whether the trigger is intended to activate the device may include determining whether the audio input is human generated speech, which may also include detecting a fingerprint in the audio input.
    Type: Grant
    Filed: October 6, 2018
    Date of Patent: September 8, 2020
    Assignee: Harman International Industries, Incorporated
    Inventor: Kevin Hague
  • Patent number: 10770075
    Abstract: A method, which is performed in an electronic device, for activating a target application is disclosed. The method may include receiving an input sound stream including an activation keyword for activating the target application and a speech command indicative of a function of the target application. The method may also detect the activation keyword from the input sound stream. If the activation keyword is detected, a portion of the input sound stream including at least a portion of the speech command may be buffered in a buffer memory. In addition, in response to detecting the activation keyword, the target application may be activated to perform the function of the target application.
    Type: Grant
    Filed: April 21, 2014
    Date of Patent: September 8, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Taesu Kim, Minsub Lee
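    Patent 10770075 describes detecting an activation keyword and then buffering the portion of the input stream that carries the speech command before activating the target application. The sketch below shows that flow over already-transcribed frames; the keyword, frame granularity, and buffer size are assumptions, and a real implementation would buffer raw audio rather than text.

    ```python
    from collections import deque

    ACTIVATION_KEYWORD = "hey camera"   # hypothetical keyword and target application

    def process_stream(frames, keyword=ACTIVATION_KEYWORD, buffer_size=16):
        """Scan an incoming stream of (already transcribed) frames for the activation keyword;
        once it is seen, buffer the following frames as the speech command and launch the app."""
        buffer = deque(maxlen=buffer_size)
        activated = False
        for frame in frames:
            if not activated:
                if keyword in frame:
                    activated = True      # keyword detected; start buffering the command
            else:
                buffer.append(frame)
        if activated:
            command = " ".join(buffer).strip()
            return f"activate target app, run command: {command!r}"
        return "keyword not detected"

    print(process_stream(["...", "hey camera", "take a", "selfie"]))
    # activate target app, run command: 'take a selfie'
    ```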
  • Patent number: 10762900
    Abstract: In non-limiting examples of the present disclosure, systems, methods and devices for executing a command by a digital assistant in a group device environment are presented. A plurality of devices with digital assistants may be clustered for the duration of an event. One of the devices of the cluster may be assigned as an arbitrator device for the cluster. A user may issue a verbal command executable by a digital assistant of the cluster. The user that issued the verbal command may be identified via voice analysis. A determination may be made as to whether the verbal command corresponds to an intent to share content with a plurality of members of the cluster, or a specific member of the cluster, and a device of the cluster may be selected for executing a reply to the verbal command based on the determined intent and the executing device's presentation capabilities.
    Type: Grant
    Filed: March 7, 2018
    Date of Patent: September 1, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Karen Master Ben-dor, Roni Karassik, Adi Diamant, Adi Miller
  • Patent number: 10762161
    Abstract: Methods and systems including computer programs encoded on a computer storage medium, for interactive content recommendation. In one aspect, a method includes receiving a request for content by a user, determining a user intent based on the received request, providing to the user a first attribute responsive to the user intent, receiving a first attribute value responsive to the first attribute, providing a second attribute, and receiving a second attribute value responsive to the second attribute. A particular content vector including a first content attribute and a second content attribute for a particular content item is identified where the first content attribute and the second content attribute sufficiently match the first attribute value and the second attribute value. The particular content item is provided as a suggested content item, and, responsive to a user selection of the particular content item, provided for presentation on the user device.
    Type: Grant
    Filed: November 28, 2017
    Date of Patent: September 1, 2020
    Assignee: Accenture Global Solutions Limited
    Inventors: Srikanth G. Rao, Roshni Ramesh Ramnani, Tarun Singhal, Shubhashis Sengupta, Tirupal Rao Ravilla, Dongay Choudary Nuvvula, Soumya Chandran, Sumitraj Ganapat Patil, Rakesh Thimmaiah, Sanjay Podder, Surya Kumar IVG, Ranjana Bhalchandra Narawane
  • Patent number: 10762450
    Abstract: A computer-implemented method for producing healthcare data records from graphical inputs by computer users includes receiving, on a graphical user interface of a computer system, a user identification of a diagnosis for a patient, the user identification produced by user selection on the graphical user interface; identifying one or more parameters that characterize the diagnosis; displaying on the graphical user interface a plurality of selectable values for particular ones of the identified parameters; receiving sequential user selections of representations of particular ones of the values; and generating an electronic medical record representation that represents the identified diagnosis having the selected values for the one or more parameters.
    Type: Grant
    Filed: April 3, 2017
    Date of Patent: September 1, 2020
    Assignee: Zeus Data Solutions, Inc.
    Inventors: Alan J. Sorkey, Steven Allen Conrad
  • Patent number: 10757323
    Abstract: A method, and corresponding electronic device, receives, at a user interface of the electronic device, a command to capture one or more images. An imager of the electronic device initiates capturing the one or more images. One or more sensors of the electronic device, optionally in conjunction with one or more processors, identify a source of the command to capture the one or more images. The one or more processors can then apply a digital data identifier to the one or more images, the digital data identifier identifying the source of the command to capture the one or more images.
    Type: Grant
    Filed: April 5, 2018
    Date of Patent: August 25, 2020
    Assignee: Motorola Mobility LLC
    Inventors: Rachid Alameh, Thomas Merrell, Jarrett Simerson, Amitkumar Balar