Speech Assisted Network Patents (Class 704/270.1)
  • Patent number: 10635463
    Abstract: Methods, systems, and computer program products for adapting the tone of the user interface of a cloud-hosted application based on user behavior patterns are provided herein. A computer-implemented method includes analyzing behavior of a user with respect to one or more software applications; automatically detecting, from a pre-established collection of multiple software tone settings, one or more appropriate software tone settings to be applied to the one or more software applications based on the analyzed behavior; dynamically updating the software tone settings of the one or more software applications, wherein updating comprises (i) defining the value for one or more strings of the one or more software applications as one or more run-time attributes and (ii) resolving the one or more run-time attributes upon detecting the one or more appropriate software tone settings; and outputting the one or more dynamically updated software applications to at least a display.
    Type: Grant
    Filed: May 23, 2017
    Date of Patent: April 28, 2020
    Assignee: International Business Machines Corporation
    Inventors: Manish Kataria, Manu Kuchhal
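The run-time string resolution described in this abstract can be pictured with a short sketch: UI strings are stored per tone setting and resolved only once a tone has been detected from user behavior. This is an illustrative reading only; the tone names, string keys, and behavior heuristic below are invented, not taken from the patent.

```python
# Minimal sketch (not the patented implementation): UI strings are declared as
# run-time attributes and resolved only after a tone setting has been detected.

TONE_STRINGS = {
    "formal": {"greeting": "Good morning. How may I assist you?"},
    "casual": {"greeting": "Hey there! What can I do for you?"},
}

def detect_tone(click_rate: float, error_count: int) -> str:
    """Toy behavior analysis: hurried, error-prone sessions get the casual tone."""
    return "casual" if click_rate > 2.0 or error_count > 3 else "formal"

def resolve_string(key: str, tone: str) -> str:
    """Resolve a run-time string attribute against the detected tone setting."""
    return TONE_STRINGS[tone][key]

tone = detect_tone(click_rate=3.1, error_count=1)
print(resolve_string("greeting", tone))   # -> the casual greeting
```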
  • Patent number: 10628754
    Abstract: In one example, the present disclosure describes a device, computer-readable medium, and method for automatically learning and facilitating interaction routines involving at least one human participant. In one example, a method includes learning an interaction routine conducted between a human user and a second party, wherein the interaction routine comprises a series of prompts and responses designed to identify and deliver desired information, storing a template of the interaction routine based on the learning, wherein the template includes at least a portion of the series of prompts and responses, detecting, in the course of a new instance of the interaction routine, at least one prompt from the second party that requests a response from the human user, and using the template to provide a response to the prompt so that involvement of the human user in the new instance of the interaction routine is minimized.
    Type: Grant
    Filed: June 6, 2017
    Date of Patent: April 21, 2020
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Harry Blanchard, Lan Zhang, Gregory Pulz
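A minimal sketch of the general idea, not the patented system: prompt/response pairs learned from an earlier routine are stored as a template and consulted before the human user is involved. The fuzzy-matching threshold and the SequenceMatcher heuristic are assumptions.

```python
# Illustrative sketch only: a learned interaction routine stored as a
# prompt -> response template, consulted before involving the human user.

from difflib import SequenceMatcher

class RoutineTemplate:
    def __init__(self):
        self.pairs = []                      # (prompt, response) learned earlier

    def learn(self, prompt: str, response: str):
        self.pairs.append((prompt, response))

    def respond(self, prompt: str, threshold: float = 0.6):
        """Return a stored response if the new prompt matches a learned one."""
        def score(pair):
            return SequenceMatcher(None, pair[0].lower(), prompt.lower()).ratio()
        best = max(self.pairs, key=score, default=None)
        if best and score(best) >= threshold:
            return best[1]
        return None                          # fall back to the human user

template = RoutineTemplate()
template.learn("Please state your account number.", "1234567")
print(template.respond("Can you state your account number?"))  # -> "1234567"
```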
  • Patent number: 10630827
    Abstract: A device and method are provided for responding to a user's voice that includes an inquiry, by outputting a response to the user's voice through a speaker and providing a guide screen that includes a response to the user's voice.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: April 21, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Changhwan Choi, Beomseok Lee, Seoyoung Jo
  • Patent number: 10616286
    Abstract: An integrated system for managing changes in regulatory and nonregulatory requirements for business activities at an industrial or commercial facility. Applications of this system to environmental, health, and safety activities, and to food, drug, cosmetic, and medical treatment and device activities, are discussed as examples.
    Type: Grant
    Filed: December 2, 2016
    Date of Patent: April 7, 2020
    Assignee: Applications in Internet Time LLC
    Inventors: Richard Frankland, Christopher M. Mitchell, Joseph D. Ferguson, Anthony T. Sziklai, Ashish K. Verma, Judith E. Popowski, Douglas H. Sturgeon
  • Patent number: 10614029
    Abstract: Computer systems configured to correlate instances of empirical data, gathered from ambient observation of a person, as being potentially relevant to each other vis-à-vis one particular behavior. Such computer systems facilitate transmission of a digital message, the content of which may be determined in response to the correlated instances of empirical data and the particular behavior. The digital message might be used to assess or alter the particular behavior of the person.
    Type: Grant
    Filed: January 20, 2017
    Date of Patent: April 7, 2020
    Inventor: Andrew L. DiRienzo
  • Patent number: 10607599
    Abstract: Described herein are curation of a glossary and its utilization for automatic speech recognition (ASR). In one embodiment, a server receives an audio recording of speech, taken over a period spanning at least two hours. During the first hour, the server generates, utilizing an ASR system, a transcription of a segment of the audio, recorded during the first twenty minutes. The server receives, from a transcriber, a phrase that does not appear in the transcription, but was spoken in the segment, and adds the phrase to a glossary. After the first hour of the period, the server generates, utilizing the ASR system, a second transcription of a second segment of the audio, provides the second transcription and the glossary to a second transcriber, and receives a corrected transcription, in which the second transcriber substituted a second phrase in the second transcription, which was not in the glossary, with the phrase.
    Type: Grant
    Filed: October 7, 2019
    Date of Patent: March 31, 2020
    Assignee: Verbit Software Ltd.
    Inventors: Eric Ariel Shellef, Yaakov Kobi Ben Tsvi, Iris Getz, Tom Livne, Roman Himmelreich, Elad Shtilerman
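To make the glossary step concrete, here is a hedged sketch: a phrase curated during the first hour is used to correct likely mis-recognitions in a later transcription of the same recording. The medical phrases and the similarity cutoff are invented for illustration, not drawn from the patent.

```python
# Assumed flow, not the patented system: a curated glossary corrects likely
# mis-recognitions in a later ASR transcription of the same recording.

from difflib import get_close_matches

glossary = ["myocardial infarction", "atrial fibrillation"]   # curated in hour one

def correct_with_glossary(transcript, glossary):
    """Replace word windows that closely match a glossary phrase with that phrase."""
    corrected = transcript
    for phrase in glossary:
        n = len(phrase.split())
        words = corrected.split()
        for i in range(len(words) - n + 1):
            window = " ".join(words[i:i + n])
            if window != phrase and get_close_matches(window, [phrase], cutoff=0.8):
                corrected = corrected.replace(window, phrase)
    return corrected

print(correct_with_glossary("patient had a myocardial infraction last year", glossary))
```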
  • Patent number: 10607629
    Abstract: A method for hybrid speech enhancement which employs parametric-coded enhancement (or a blend of parametric-coded and waveform-coded enhancement) under some signal conditions and waveform-coded enhancement (or a different blend of parametric-coded and waveform-coded enhancement) under other signal conditions. Other aspects are methods for generating a bitstream indicative of an audio program including speech and other content, such that hybrid speech enhancement can be performed on the program, a decoder including a buffer which stores at least one segment of an encoded audio bitstream generated by any embodiment of the inventive method, and a system or device (e.g., an encoder or decoder) configured (e.g., programmed) to perform any embodiment of the inventive method. At least some of the speech enhancement operations are performed by a recipient audio decoder with Mid/Side speech enhancement metadata generated by an upstream audio encoder.
    Type: Grant
    Filed: October 22, 2018
    Date of Patent: March 31, 2020
    Assignees: Dolby Laboratories Licensing Corporation, Dolby International AB
    Inventors: Jeroen Koppens, Hannes Muesch
  • Patent number: 10600410
    Abstract: According to one embodiment, an edit assisting system includes a server device and a client device. The client device displays a first object, which indicates first speech of a user and a first portion of the first speech, and a second object, which indicates second speech generated by the server device and a second portion of the second speech, on a screen based on a scenario indicated in scenario data. The first and second portions are editable. The client device transmits edit data indicating the first portion which is edited and/or the second portion which is edited to the server device. The server device rewrites the scenario data by changing the first portion of the first speech and/or the second portion of the second speech in the scenario by using the edit data.
    Type: Grant
    Filed: August 30, 2017
    Date of Patent: March 24, 2020
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Hiroshi Fujimura, Kenji Iwata
  • Patent number: 10599729
    Abstract: In general, the subject matter described in this specification can be embodied in methods, systems, and program products for receiving user input that defines a search query, and providing the search query to a server system. Information that a search engine system determined was responsive to the search query is received at a computing device. The computing device is identified as in a first state, and a first output mode for audibly outputting at least a portion of the information is selected. The first output mode is selected from a collection of the first output mode and a second output mode. The second output mode is selected in response to the computing device being in a second state and is for visually outputting at least the portion of the information and not audibly outputting the at least portion of the information. At least the portion of information is audibly output.
    Type: Grant
    Filed: February 5, 2016
    Date of Patent: March 24, 2020
    Assignee: Google LLC
    Inventors: John Nicholas Jitkoff, Michael J. LeBeau, William J. Byrne, David P. Singleton
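A toy sketch of the mode-selection step only, assuming two invented device states ("driving", "docked_in_car") that stand in for the first state; the patent does not prescribe these names or this rule.

```python
# Sketch under assumed state names: pick an audible or visual output mode for
# search results based on the computing device's state.

def choose_output_mode(device_state: str) -> str:
    """First (audible) mode when hands/eyes are likely busy, else visual."""
    return "audible" if device_state in {"driving", "docked_in_car"} else "visual"

def deliver(results: str, device_state: str):
    mode = choose_output_mode(device_state)
    if mode == "audible":
        print(f"[TTS] {results}")          # stand-in for a text-to-speech call
    else:
        print(f"[SCREEN] {results}")

deliver("Weather today: 18 °C and sunny.", "driving")
```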
  • Patent number: 10594837
    Abstract: From intent data of a conversational system, a set of intent sequences and a model predicting a next intent for an intent sequence are constructed. A first intent is received as an input. Using the model, a next intent corresponding to the first intent is predicted. A service required by the next intent is determined. A resource consumption of the service is forecasted. Responsive to the forecasted resource consumption exceeding a present resource allocation to the service, it is concluded that the service requires upscaling before becoming available for use by the next intent. An availability time by which the service is required to be available for use by the next intent is determined. An initial time at which upscaling must begin to ensure that the service is available at the availability time is determined. Upscaling of the service is caused to be scheduled for the initial time.
    Type: Grant
    Filed: November 2, 2018
    Date of Patent: March 17, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Carmine M. Dimascio, Tamer E. Abuelsaad, Bruce Raymond Slawson
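The prediction-and-upscaling flow can be sketched roughly as below. The bigram intent model, service names, replica counts, and three-minute lead time are all illustrative assumptions rather than details from the patent.

```python
# Toy sketch: predict the next intent from historical sequences, and if its
# service needs more capacity than is allocated, schedule upscaling early
# enough for the service to be ready by the required availability time.

from collections import Counter, defaultdict
from datetime import datetime, timedelta

# Build a next-intent model from historical intent sequences.
transitions = defaultdict(Counter)
for seq in [["balance", "transfer"], ["balance", "transfer"], ["balance", "support"]]:
    for a, b in zip(seq, seq[1:]):
        transitions[a][b] += 1

def predict_next(intent: str) -> str:
    return transitions[intent].most_common(1)[0][0]

SERVICE_FOR_INTENT = {"transfer": "payments-svc", "support": "chat-svc"}
ALLOCATED = {"payments-svc": 2}            # current replica counts (illustrative)
FORECAST = {"payments-svc": 5}             # forecast replicas needed (illustrative)
UPSCALE_LEAD = timedelta(minutes=3)        # assumed time needed to scale up

def plan_upscaling(first_intent: str, availability_time: datetime):
    nxt = predict_next(first_intent)
    svc = SERVICE_FOR_INTENT[nxt]
    if FORECAST[svc] > ALLOCATED[svc]:
        start = availability_time - UPSCALE_LEAD
        print(f"Schedule upscaling of {svc} at {start:%H:%M} for intent '{nxt}'")

plan_upscaling("balance", datetime(2020, 3, 17, 10, 30))   # -> upscale at 10:27
```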
  • Patent number: 10580405
    Abstract: A system configured to enable remote control to allow a first user to provide assistance to a second user. The system may receive a command from the second user granting remote control to the first user, enabling the first user to initiate a voice command on behalf of the second user. In some examples, the system may enable the remote control by treating a voice command originating from the first user as though it originated from the second user instead. For example, the system may receive the voice command from a first device associated with the first user but may route the voice command as though it was received by a second device associated with the second user.
    Type: Grant
    Filed: December 27, 2016
    Date of Patent: March 3, 2020
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Peng Wang, Pathivada Rajsekhar Naidu
  • Patent number: 10573310
    Abstract: A method for responding to a voice activated request includes receiving a speech input request from a smart speaker requesting energy management data associated with energy consumption at a premises of the smart speaker. The method also includes generating a voice service request including a first query for a first data source. The first query includes a request for the energy management data. Additionally, the method includes communicating the first query to the first data source and receiving a first response to the first query from the first data source. Further, the method includes generating an audible speech output in response to the speech input request based on the first response to the first query and transmitting the audible speech output to the smart speaker. The smart speaker audibly transmits the audible speech output.
    Type: Grant
    Filed: September 6, 2018
    Date of Patent: February 25, 2020
    Assignee: Landis+Gyr Innovations, Inc.
    Inventors: Keith Mario Torpy, James Randall Turner, David Decker, Ruben E. Salazar Cardozo
  • Patent number: 10574598
    Abstract: Aspects of the present invention disclose a method, computer program product, and system for detecting and mitigating adversarial virtual interactions. The method includes one or more processors detecting a user communication that is interacting with a virtual agent. The method further includes one or more processors determining a risk level associated with the detected user communication based on one or more actions performed by the detected user while interacting with the virtual agent. The method further includes one or more processors, in response to determining that the determined risk level associated with the detected user communication exceeds a risk level threshold, initiating a mitigation protocol on interactions between the detected user and the virtual agent, where the mitigation protocol is based on the actions performed by the detected user while interacting with the virtual agent.
    Type: Grant
    Filed: October 18, 2017
    Date of Patent: February 25, 2020
    Assignee: International Business Machines Corporation
    Inventors: Guillaume A. Baudart, Julian T. Dolby, Evelyn Duesterwald, David J. Piorkowski
  • Patent number: 10567247
    Abstract: An example method can include receiving a traffic report from a sensor and using the traffic report to detect intra-datacenter flows. These intra-datacenter flows can then be compared with a description of historical flows. The description of historical flows can identify characteristics of normal and malicious flows. Based on the comparison, the flows can be classified and tagged as normal, malicious, or anomalous. If the flows are tagged as malicious or anomalous, corrective action can be taken with respect to the flows. A description of the flows can then be added to the description of historical flows.
    Type: Grant
    Filed: May 3, 2016
    Date of Patent: February 18, 2020
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: Ashutosh Kulshreshtha, Supreeth Hosur Nagesh Rao, Navindra Yadav, Anubhav Gupta, Sunil Kumar Gupta, Varun Sagar Malhorta, Shashidhar Gandham
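A hedged sketch of the classification step only: flows are tagged normal, malicious, or anomalous by comparison with a historical description, and non-normal flows trigger corrective action. The flow tuples and historical sets here are invented.

```python
# Illustrative-only classifier: compare observed intra-datacenter flows with a
# description of historical flows and tag each as normal, malicious, or anomalous.

HISTORICAL = {
    "normal":    {("10.0.0.5", "10.0.0.9", 443)},
    "malicious": {("10.0.0.5", "10.0.0.66", 23)},
}

def classify(flow):
    if flow in HISTORICAL["malicious"]:
        return "malicious"
    if flow in HISTORICAL["normal"]:
        return "normal"
    return "anomalous"

observed = [("10.0.0.5", "10.0.0.9", 443),
            ("10.0.0.5", "10.0.0.66", 23),
            ("10.0.0.5", "10.0.0.7", 8080)]

for flow in observed:
    tag = classify(flow)
    if tag != "normal":
        print(f"corrective action for {flow}: tagged {tag}")
    HISTORICAL.setdefault(tag, set()).add(flow)   # extend the historical description
```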
  • Patent number: 10565983
    Abstract: The present disclosure provides an artificial intelligence-based acoustic model training method and apparatus, a device and a storage medium, wherein the method comprises: obtaining manually-annotated speech data; training according to the manually-annotated speech data to obtain a first acoustic model; obtaining unannotated speech data; training according to the unannotated speech data and the first acoustic model to obtain a desired second acoustic model. The solution of the present disclosure can be applied to save manpower costs and improve the training efficiency.
    Type: Grant
    Filed: April 24, 2018
    Date of Patent: February 18, 2020
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Bin Huang, Yiping Peng
  • Patent number: 10567320
    Abstract: A messaging balancing and control (B&C) system is disclosed. The system is configured to handle message transfers having different message exchange patterns, including: in-only exchange patterns, out-only exchange patterns, in-optional-out exchange patterns, out-optional-in exchange patterns, robust in-only exchange patterns, and robust out-only exchange patterns. The system may write a message transfer confirmation in response to a message transfer between a consumer system and a provider system, with the confirmation including at least a first hash of the message. The system may also write a message acknowledgement to the blockchain, with the acknowledgement including at least a second hash of the message. The blockchain may execute a smart contract to compare the first hash of the message to the second hash of the message to identify an out-of-balance message transfer event. A monitoring device of the system may read the out-of-balance message transfer event from the blockchain.
    Type: Grant
    Filed: October 17, 2017
    Date of Patent: February 18, 2020
    Assignee: AMERICAN EXPRESS TRAVEL RELATED SERVICES COMPANY, INC.
    Inventors: Shyamala Chalakudi, Ming Yin
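The hash-comparison idea can be illustrated as follows; the in-memory ledger stands in for the blockchain and smart contract, and the message contents and identifiers are made up.

```python
# Minimal sketch: write a hash at transfer time and at acknowledgement time,
# then compare the two to detect an out-of-balance message transfer.

import hashlib

def h(message: str) -> str:
    return hashlib.sha256(message.encode()).hexdigest()

ledger = []                                    # stand-in for the blockchain

def record_transfer(msg_id: str, message: str):
    ledger.append({"id": msg_id, "kind": "confirmation", "hash": h(message)})

def record_ack(msg_id: str, message_as_received: str):
    ledger.append({"id": msg_id, "kind": "ack", "hash": h(message_as_received)})

def out_of_balance(msg_id: str) -> bool:
    hashes = {e["kind"]: e["hash"] for e in ledger if e["id"] == msg_id}
    return hashes.get("confirmation") != hashes.get("ack")

record_transfer("m1", "pay $10 to Bob")
record_ack("m1", "pay $10 to Bob ")          # trailing space -> different hash
print(out_of_balance("m1"))                   # True: flag for the monitoring device
```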
  • Patent number: 10560974
    Abstract: Disclosed are a method and an apparatus for selecting and connecting a gateway by a user device by using Bluetooth low energy technology, and a voice recognition system including a first device, a first gateway, and a voice recognition server. The first device broadcasts a voice signal to a neighboring gateway, the voice signal is forwarded to the voice recognition server by neighboring gateways, and the voice recognition server transmits a connection request message to an optimal gateway by processing the voice signal. After authentication of the voice recognition server is performed, the optimal gateway receiving the connection request message and a user device are connected to each other.
    Type: Grant
    Filed: September 11, 2017
    Date of Patent: February 11, 2020
    Assignee: LG ELECTRONICS INC.
    Inventors: Taeyoung Song, Jingu Choi, Younghwan Kwon
  • Patent number: 10553216
    Abstract: A system and method for an integrated, multi-modal, multi-device natural language voice services environment may be provided. In particular, the environment may include a plurality of voice-enabled devices each having intent determination capabilities for processing multi-modal natural language utterances in addition to knowledge of the intent determination capabilities of other devices in the environment. Further, the environment may be arranged in a centralized manner, a distributed peer-to-peer manner, or various combinations thereof. As such, the various devices may cooperate to determine intent of multi-modal natural language utterances, and commands, queries, or other requests may be routed to one or more of the devices best suited to take action in response thereto.
    Type: Grant
    Filed: September 26, 2018
    Date of Patent: February 4, 2020
    Assignee: Oracle International Corporation
    Inventors: Robert A. Kennewick, Chris Weider
  • Patent number: 10547747
    Abstract: A technology is described for configurable contact flows implemented using a contact flow service. An example method may include activating a contact flow in response to a request to establish a contact center session. The contact flow may be used to provide automated contact service communications to end users using computing resources hosted within a computing service provider environment. A starting prompt specified by the contact flow may be output using a communication channel. Input data may be received via the communication channel in response to the starting prompt. The input data may be analyzed to identify an intent identifier included in the input data and a contact flow action linked to the intent identifier may be executed.
    Type: Grant
    Filed: April 29, 2019
    Date of Patent: January 28, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Saket Agarwal, Joseph Daniel Sullivan, Pasquale DeMaio, Jon Russell Jay, Jaswinder Singh Randhawa, Nihal Chand Jain
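A small sketch of the prompt/intent/action pattern, with invented intent identifiers and actions; a real contact flow service would use a proper intent classifier rather than the crude substring matching shown here.

```python
# Hedged sketch of the general pattern: output a starting prompt, read input on
# the channel, identify an intent identifier, and execute the linked action.

CONTACT_FLOW = {
    "starting_prompt": "Thanks for calling. Say 'billing' or 'support'.",
    "actions": {
        "billing": lambda: print("Routing to billing queue..."),
        "support": lambda: print("Routing to support queue..."),
    },
}

def run_contact_flow(input_data: str):
    print(CONTACT_FLOW["starting_prompt"])
    for intent_id, action in CONTACT_FLOW["actions"].items():
        if intent_id in input_data.lower():       # crude intent identification
            action()
            return
    print("Sorry, I didn't catch that.")

run_contact_flow("I have a question about billing")
```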
  • Patent number: 10546598
    Abstract: Audio information defining audio content may be accessed. The audio content may have a duration. The audio content may be segmented into audio segments. Individual audio segments may correspond to a portion of the duration. The audio segments may include a first audio segment corresponding to a first portion of the duration. Energy features, entropy features, frequency features, and/or other features of the audio segments may be determined. Energy features may characterize energy of the audio segments. Entropy features may characterize spectral flatness of the audio segments. Frequency features may characterize highest frequencies of the audio segments. One or more of the audio segments may be identified as containing speech based on the energy features, the entropy features, the frequency features, and/or other information. Storage of the identification of the one or more of the audio segments as containing speech in one or more storage media may be effectuated.
    Type: Grant
    Filed: August 16, 2019
    Date of Patent: January 28, 2020
    Assignee: GoPro, Inc.
    Inventor: Tom Médioni
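A rough sketch of the three features named in the abstract (energy, spectral flatness as an entropy feature, and highest significant frequency), computed per segment with NumPy. The segment length, thresholds, and the speech rule are assumptions, not values from the patent.

```python
# Sketch with assumed thresholds: split audio into segments and flag segments
# as speech using an energy feature, a spectral-flatness (entropy) feature,
# and a highest-frequency feature.

import numpy as np

def segment_features(samples: np.ndarray, rate: int, seg_len: float = 0.5):
    n = int(seg_len * rate)
    for start in range(0, len(samples) - n + 1, n):
        seg = samples[start:start + n]
        spectrum = np.abs(np.fft.rfft(seg)) + 1e-12
        energy = float(np.mean(seg ** 2))
        flatness = float(np.exp(np.mean(np.log(spectrum))) / np.mean(spectrum))
        freqs = np.fft.rfftfreq(n, 1.0 / rate)
        highest = float(freqs[spectrum > 0.01 * spectrum.max()].max())
        yield start / rate, energy, flatness, highest

def looks_like_speech(energy, flatness, highest):
    # Illustrative rule: audible, not noise-flat, and band-limited below ~8 kHz.
    return energy > 1e-4 and flatness < 0.5 and 80.0 < highest < 8000.0

rate = 16000
t = np.arange(rate) / rate
audio = 0.1 * np.sin(2 * np.pi * 220 * t)          # toy tonal signal
for t0, e, f, hf in segment_features(audio, rate):
    print(f"{t0:4.1f}s speech={looks_like_speech(e, f, hf)}")
```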
  • Patent number: 10534802
    Abstract: A computer-implemented method of providing text entry assistance data includes receiving, at a system, location information associated with a user; receiving, at the system, information indicative of predictive textual outcomes; generating dictionary data using the location information; and providing the dictionary data to a remote device.
    Type: Grant
    Filed: August 8, 2017
    Date of Patent: January 14, 2020
    Assignee: Google LLC
    Inventors: Shumeet Baluja, Maryam Kamvar, Elad Gil
  • Patent number: 10536402
    Abstract: Examples are generally directed towards context-sensitive generation of conversational responses. Context-message-response n-tuples are extracted from at least one source of conversational data to generate a set of training context-message-response n-tuples. A response generation engine is trained on the set of training context-message-response n-tuples. The trained response generation engine automatically generates a context-sensitive response based on a user generated input message and conversational context data. A digital assistant utilizes the trained response generation engine to generate context-sensitive, natural language responses that are pertinent to user queries.
    Type: Grant
    Filed: August 24, 2018
    Date of Patent: January 14, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michel Galley, Alessandro Sordoni, Christopher John Brockett, Jianfeng Gao, William Brennan Dolan, Yangfeng Ji, Michael Auli, Margaret Ann Mitchell, Jian-Yun Nie
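The extraction step can be sketched as below: consecutive turns of a conversation log are turned into (context, message, response) tuples for training. The conversation text and the context size are invented; the response generation engine itself is not shown.

```python
# Small sketch: extract context-message-response n-tuples from a conversation
# log to build training data for a context-sensitive response generator.

conversation = [
    "A: Want to grab lunch?",
    "B: Sure, where?",
    "A: The noodle place on 5th.",
    "B: Perfect, see you at noon.",
]

def extract_tuples(turns, context_size=1):
    """Yield (context, message, response) n-tuples from consecutive turns."""
    for i in range(context_size, len(turns) - 1):
        context = tuple(turns[i - context_size:i])
        yield context, turns[i], turns[i + 1]

for ctx, msg, resp in extract_tuples(conversation):
    print(ctx, "|", msg, "->", resp)
```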
  • Patent number: 10528670
    Abstract: An amendment source-positioning method and apparatus, a computer device, and a readable medium are provided. The method includes: obtaining a first target word identifying an amendment source and defining parameters of the amendment source, from semantic parsing information of a user-input speech error correction instruction; and positioning the amendment source in a to-be-corrected text according to the first target word and the defining parameters. As compared with the template matching and positioning scheme employed in the prior art, the technical solution of the present disclosure can support a speech error correction instruction in any form and exhibits a more flexible amendment source-positioning manner, thereby effectively improving the amendment source-positioning efficiency.
    Type: Grant
    Filed: May 15, 2018
    Date of Patent: January 7, 2020
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Shujie Yao, Qin Qu, Zejin Hu
  • Patent number: 10529332
    Abstract: At an electronic device with a display, a microphone, and an input device: while the display is on, receiving user input via the input device, the user input meeting a predetermined condition; in accordance with receiving the user input meeting the predetermined condition, sampling audio input received via the microphone; determining whether the audio input comprises a spoken trigger; and in accordance with a determination that audio input comprises the spoken trigger, triggering a virtual assistant session.
    Type: Grant
    Filed: January 4, 2018
    Date of Patent: January 7, 2020
    Assignee: Apple Inc.
    Inventors: Stephen O. Lemay, Brandon J. Newendorp, Jonathan R. Dascola
  • Patent number: 10523667
    Abstract: Methods and systems are disclosed that execute an operation associated with a system. In one aspect, upon receiving a request to execute an operation, a connectivity model establishes a connection with a framework. The framework processes the received request and instantiates a system model to execute a user authentication model to authenticate the user initiating the request. Upon authenticating the user, a request model may be executed at the framework. The execution of the request model may process and route the received request to a specific system. Subsequently, a user session may be established by executing a session model at the framework. Upon establishing the user session and receiving the routed request, the operation to be executed for the request may be determined. The determined operation may be executed via the framework.
    Type: Grant
    Filed: November 8, 2016
    Date of Patent: December 31, 2019
    Assignee: SAP SE
    Inventor: Meenakshi Sundaram P
  • Patent number: 10522136
    Abstract: Embodiments of the present disclosure provide a method and a device for training an acoustic model, a computer device and a storage medium. The method includes obtaining supervised speech data and unsupervised speech data, in which, the supervised speech data is speech data with manual annotation and the unsupervised speech data is speech data with machine annotation; extracting speech features from the supervised speech data and the unsupervised speech data; and performing a multi-task learning having a supervised learning task and an unsupervised learning task on the speech features of the supervised speech data and the unsupervised speech data by using a deep learning network, to train and obtain the acoustic model.
    Type: Grant
    Filed: December 28, 2017
    Date of Patent: December 31, 2019
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Bin Huang, Yiping Peng, Xiangang Li
  • Patent number: 10505734
    Abstract: A method and system for providing unencrypted access to encrypted data that may be stored on a device, sent as a message, or sent as a real-time communications stream. The method may include using public key cryptography to securely enable accessing the encrypted data stored on a device or communicated by a device. For instance, the method may include using a device vendor's public key to securely enable that vendor to enable only authorized parties to themselves decrypt previously-encrypted device storage, messages, or real-time communications streams.
    Type: Grant
    Filed: February 17, 2017
    Date of Patent: December 10, 2019
    Inventor: Raymond Edward Ozzie
  • Patent number: 10482159
    Abstract: Aspects create a multimedia presentation wherein processors are configured to calculate a time it would take to narrate a plurality of words in a document at a specified speech speed; in response to determining that the time it would take to narrate the plurality of words in the document at the specified speech speed exceeds a specified maximum time, generate a long summary of the document as a subset of the plurality of words; generate audio content for a first portion of the plurality of words of the long summary by applying a text-to-speech processing mechanism to the portion of the long summary at the desired speech speed; and create a multimedia slide of a multimedia presentation by adding the generated audio content to a presentation of text from a remainder portion of the plurality of words of the long summary.
    Type: Grant
    Filed: November 2, 2017
    Date of Patent: November 19, 2019
    Assignee: International Business Machines Corporation
    Inventors: Nicolas Bainer, Dario Alejando Falasca, Federico Tomas Gimenez Molinelli, Nicolas O. Nappe, Gaston Alejo Rius, Nicolas Tcherechansky, Facundo J. Tomaselli
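A worked example of the timing check that drives the summary decision, using an assumed speaking rate and maximum time; the proportional summary sizing is only an illustration, not the patented summarization method.

```python
# Worked example: estimate narration time for a document at a given speech
# speed and decide whether a long summary is needed.

def narration_seconds(word_count: int, words_per_minute: float) -> float:
    return word_count / words_per_minute * 60.0

doc_words = 2400
speech_wpm = 160                      # specified speech speed (assumed value)
max_seconds = 300                     # specified maximum time (assumed value)

needed = narration_seconds(doc_words, speech_wpm)
print(f"Full narration: {needed:.0f} s")              # 900 s
if needed > max_seconds:
    # Keep roughly the fraction of words that fits the time budget.
    summary_words = int(doc_words * max_seconds / needed)
    print(f"Generate a long summary of about {summary_words} words")  # ~800
```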
  • Patent number: 10460125
    Abstract: Apparatuses and methods for automatic query processing are disclosed, which include a query analyzer configured to extract the query condition from the input query, a scheduler configured to create one or more sub-queries to verify whether the query condition is satisfied and to determine an execution condition for the one or more sub-queries, and a condition verifier configured to execute the one or more sub-queries according to the determined execution condition and to verify whether the query condition is satisfied using results of executing the one or more sub-queries.
    Type: Grant
    Filed: August 18, 2016
    Date of Patent: October 29, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Byung Kon Kang, Kyoung Gu Woo
  • Patent number: 10431242
    Abstract: Audio information defining audio content may be accessed. The audio content may have a duration. The audio content may be segmented into audio segments. Individual audio segments may correspond to a portion of the duration. The audio segments may include a first audio segment corresponding to a first portion of the duration. Energy features, entropy features, frequency features, and/or other features of the audio segments may be determined. Energy features may characterize energy of the audio segments. Entropy features may characterize spectral flatness of the audio segments. Frequency features may characterize highest frequencies of the audio segments. One or more of the audio segments may be identified as containing speech based on the energy features, the entropy features, the frequency features, and/or other information. Storage of the identification of the one or more of the audio segments as containing speech in one or more storage media may be effectuated.
    Type: Grant
    Filed: November 2, 2017
    Date of Patent: October 1, 2019
    Assignee: GoPro, Inc.
    Inventor: Tom Médioni
  • Patent number: 10431237
    Abstract: A device and method for adjusting speech intelligibility at an audio device is provided. The device comprises a microphone, a transmitter and a controller. The controller is configured to: determine a noise level at the microphone; select a voice tag, of a plurality of voice tags, based on the noise level, each of the plurality of voice tags associated with respective noise levels; determine an intelligibility rating of a mix of the voice tag and noise received at the microphone; and when the intelligibility rating is below a threshold intelligibility rating, enhance speech received at the microphone based on the intelligibility rating prior to transmitting, at the transmitter, a signal representing the intelligibility-enhanced speech.
    Type: Grant
    Filed: September 13, 2017
    Date of Patent: October 1, 2019
    Assignee: MOTOROLA SOLUTIONS, INC.
    Inventors: Glenn Andrew Mohan, Maurice D. Howell, Juan J. Giol, Christian Ibarra
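A hedged sketch of the control flow only: choose a voice tag by noise band, rate intelligibility, and enhance when the rating falls below the threshold. The dB bands, rating formula, and threshold are invented stand-ins for whatever the device actually measures.

```python
# Illustrative control flow: select a voice tag for the measured noise level,
# rate the intelligibility of tag-plus-noise, and enhance speech if needed.

VOICE_TAGS = {                 # voice tag chosen per noise band (assumed values)
    (0, 60): "quiet_tag.wav",
    (60, 80): "street_tag.wav",
    (80, 120): "siren_tag.wav",
}

def select_voice_tag(noise_db: float) -> str:
    for (lo, hi), tag in VOICE_TAGS.items():
        if lo <= noise_db < hi:
            return tag
    raise ValueError("noise level out of range")

def intelligibility_rating(noise_db: float) -> float:
    """Toy stand-in for an intelligibility measure: louder noise, lower rating."""
    return max(0.0, 1.0 - noise_db / 120.0)

def process(noise_db: float, threshold: float = 0.5):
    tag = select_voice_tag(noise_db)
    rating = intelligibility_rating(noise_db)
    enhance = rating < threshold
    print(f"tag={tag} rating={rating:.2f} enhance_before_transmit={enhance}")

process(noise_db=85.0)       # -> siren_tag.wav, low rating, enhancement applied
```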
  • Patent number: 10403280
    Abstract: A lamp device for inputting or outputting a voice signal and a method of driving the same are provided. The method of driving a lamp device includes receiving an audio signal; performing voice recognition of a first audio signal among the received audio signals; generating an activation signal based on the voice recognition result; transmitting the activation signal to the external device; receiving a first control signal from the external device; and transmitting a second audio signal among the received audio signals to the external device in response to the first control signal. Alternatively, various exemplary embodiments may be further included.
    Type: Grant
    Filed: December 1, 2017
    Date of Patent: September 3, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yohan Lee, Jungkyun Ryu, Junho Park, Wonsik Song, Seungyong Lee, Youngsu Lee
  • Patent number: 10397353
    Abstract: A method of enhancing log packets with context metadata is provided. The method, at a redirecting filter on a host in a datacenter, intercepts a packet from a data compute node (DCN) of a datacenter tenant. The method determines that the intercepted packet is a log packet. The method forwards the log packet and a first set of associated context metadata to a proxy logging server. The first set of context metadata is associated with the log packet based on the DCN that generated the packet. The method, at the proxy logging server, associates a second set of context metadata with the log packet. The second set of context metadata is received from a compute manager of the datacenter. The method sends the log packet and the first and second sets of context metadata from the proxy logging server to a central logging server associated with the tenant.
    Type: Grant
    Filed: January 14, 2016
    Date of Patent: August 27, 2019
    Assignee: NICIRA, INC.
    Inventors: Jayant Jain, Anirban Sengupta, Mayank Agarwal, Raju Koganty, Chidambareswaran Raman, Nishant Jain, Jeremy Olmsted-Thompson, Srinivas Nimmagadda
  • Patent number: 10388277
    Abstract: Speech processing tasks may be allocated at least partly to a local device (e.g., user computing device that receives spoken words) and at least partly to a remote device to determine one or more user commands or tasks to be performed by the local device. The remote device may be used to process speech that the local device could not process or understand, or for other reasons, such as for error checking. The local device may then execute or begin to execute locally determined tasks to reduce user-perceived latency. Meanwhile, the entire media input, or a portion thereof, may be sent to the remote device to process speech, verify the tasks and/or identify other user commands in the media input (or portion thereof).
    Type: Grant
    Filed: June 25, 2015
    Date of Patent: August 20, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Sanjoy Ghosh, Pieter Sierd van der Meulen
  • Patent number: 10368333
    Abstract: Dynamically adapting provision of notification output to reduce distractions and/or to mitigate usage of computational resources. In some implementations, an automated assistant application predicts a level of engagement for a user and determines, based on the predicted level of engagement (and optionally future predicted level(s) of engagement), provisioning (e.g., whether, when, and/or how) of output that is based on a received notification. For example, the automated assistant application can, based on predicted level(s) of engagement, determine whether to provide any output based on a received notification, determine whether to suppress provision of output that is based on the received notification (e.g.
    Type: Grant
    Filed: November 20, 2017
    Date of Patent: July 30, 2019
    Assignee: GOOGLE LLC
    Inventors: Vikram Aggarwal, Moises Morgenstern Gali
  • Patent number: 10354650
    Abstract: In one aspect, a method comprises accessing audio data generated by a computing device based on audio input from a user, the audio data encoding one or more user utterances. The method further comprises generating a first transcription of the utterances by performing speech recognition on the audio data using a first speech recognizer that employs a language model based on user-specific data. The method further comprises generating a second transcription of the utterances by performing speech recognition on the audio data using a second speech recognizer that employs a language model independent of user-specific data. The method further comprises determining that the second transcription of the utterances includes a term from a predefined set of one or more terms. The method further comprises, based on determining that the second transcription of the utterance includes the term, providing an output of the first transcription of the utterance.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: July 16, 2019
    Assignee: Google LLC
    Inventors: Alexander H. Gruenstein, Petar Aleksic
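The two-recognizer arbitration can be sketched as follows, with fake recognizers and an assumed trigger-term set; the point is the final check that the user-independent transcription contains a predefined term before the user-specific transcription is output.

```python
# Sketch only (the recognizers here are fakes): run a user-specific recognizer
# and a user-independent recognizer, and output the user-specific transcription
# when the independent one contains a term from a predefined set.

PREDEFINED_TERMS = {"call", "text"}          # assumed trigger terms

def personalized_asr(audio: str) -> str:     # stand-in for recognizer #1
    return "call Aunt Roz"                   # knows the user's contacts

def generic_asr(audio: str) -> str:          # stand-in for recognizer #2
    return "call and ros"                    # no user-specific language model

def transcribe(audio: str) -> str:
    first = personalized_asr(audio)
    second = generic_asr(audio)
    if PREDEFINED_TERMS & set(second.lower().split()):
        return first                          # trust the personalized result
    return second

print(transcribe("<audio bytes>"))            # -> "call Aunt Roz"
```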
  • Patent number: 10338713
    Abstract: Method, apparatus, and computer-readable media for touch and speech interface, with audio location, include structure and/or function whereby at least one processor: (i) receives a touch input from a touch device; (ii) establishes a touch-speech time window; (iii) receives a speech input from a speech device; (iv) determines whether the speech input is present in a global dictionary; (v) determines a location of a sound source from the speech device; (vi) determines whether the touch input and the location of the speech input are both within a same region; (vii) if the speech input is in the dictionary, determines whether the speech input has been received within the window; and (viii) if the speech input has been received within the window, and the touch input and the speech input are both within the same region, activates an action corresponding to both the touch input and the speech input.
    Type: Grant
    Filed: June 6, 2017
    Date of Patent: July 2, 2019
    Assignee: Nureva, Inc.
    Inventors: David Popovich, David Douglas Springgay, David Frederick Gurnsey
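A compact sketch of the acceptance test: dictionary membership, the touch-speech time window, and the same-region check must all pass before the action is activated. The window length, region layout, and dictionary contents are assumptions.

```python
# Minimal sketch: accept a speech command only if it is in the global
# dictionary, arrives within the touch-speech time window, and is localized
# to the same region as the touch input.

GLOBAL_DICTIONARY = {"zoom", "erase", "copy"}
WINDOW_SECONDS = 4.0

def same_region(touch_xy, speech_xy, regions):
    def region_of(p):
        return next((name for name, (x0, y0, x1, y1) in regions.items()
                     if x0 <= p[0] <= x1 and y0 <= p[1] <= y1), None)
    r1, r2 = region_of(touch_xy), region_of(speech_xy)
    return r1 is not None and r1 == r2

def maybe_activate(touch_time, touch_xy, speech_time, speech_xy, word, regions):
    if word not in GLOBAL_DICTIONARY:
        return None
    if not (0.0 <= speech_time - touch_time <= WINDOW_SECONDS):
        return None
    if not same_region(touch_xy, speech_xy, regions):
        return None
    return f"activate '{word}' at {touch_xy}"

regions = {"left_wall": (0, 0, 100, 50), "right_wall": (100, 0, 200, 50)}
print(maybe_activate(10.0, (30, 20), 11.5, (35, 22), "zoom", regions))
```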
  • Patent number: 10331794
    Abstract: A hybrid speech translation system whereby a wireless-enabled client computing device can, in an offline mode, translate input speech utterances from one language to another locally, and also, in an online mode when there is wireless network connectivity, have a remote computer perform the translation and transmit it back to the client computing device via the wireless network for audible outputting by the client computing device. The user of the client computing device can transition between modes, or the transition can be automatic based on user preferences or settings. The back-end speech translation server system can adapt the various recognition and translation models used by the client computing device in the offline mode based on analysis of user data over time, to thereby configure the client computing device with scaled-down, yet more efficient and faster, models than the back-end speech translation server system, while still being adapted for the user's domain.
    Type: Grant
    Filed: August 26, 2016
    Date of Patent: June 25, 2019
    Assignee: Facebook, Inc.
    Inventors: Naomi Aoki Waibel, Alexander Waibel, Christian Fuegen, Kay Rottmann
  • Patent number: 10319379
    Abstract: A voice dialog system includes: a voice input unit which acquires a user utterance, an intention understanding unit that interprets an intention of an utterance of a voice acquired by the voice input unit, a dialog text creator that creates a text of a system utterance, and a voice output unit that outputs the system utterance as voice data. When creating a text of a system utterance, the dialog text creator creates the text by inserting a tag at a position in the system utterance. The intention understanding unit interprets the utterance intention of the user in accordance with whether the user utterance is made before or after the voice output unit outputs the part of the system utterance at the position corresponding to the tag.
    Type: Grant
    Filed: September 14, 2017
    Date of Patent: June 11, 2019
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Atsushi Ikeno, Yusuke Jinguji, Toshifumi Nishijima, Fuminori Kataoka, Hiromi Tonegawa, Norihide Umeyama
  • Patent number: 10311114
    Abstract: Systems, software, and computer implemented methods can be used to present stylized text snippets with search results received from a search query. A search query is received and at least one web-addressable document responsive to the search query is identified. At least a portion of the text associated with the at least one responsive document and including at least a portion of the search term is retrieved. Further, style information associated with the retrieved portion of text is also retrieved. The style information is then applied to the associated portion of text to create a stylized portion of text associated with the at least one responsive document. A set of search query results including a listing of responsive documents and, for at least one of those documents, a stylized portion of text, is presented.
    Type: Grant
    Filed: November 3, 2014
    Date of Patent: June 4, 2019
    Assignee: Google LLC
    Inventor: Vijayakrishna Griddaluru
  • Patent number: 10303741
    Abstract: A method for adapting tabular data for narration is provided in the illustrative embodiments. A set of categories used to organize data is identified in a first tabular portion of a document. A structure of the categories is analyzed. An inference is drawn about data in a first cell in the first tabular portion based on a position of the first cell in the structure. The first tabular portion of the document is transformed into a first narrative form using the inference.
    Type: Grant
    Filed: November 26, 2013
    Date of Patent: May 28, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Donna K. Byron, Alexander Pikovsky, Matthew B. Sanchez
  • Patent number: 10289433
    Abstract: Systems and processes for generating output dialogs for virtual assistants are provided. An output dialog can be generated from multiple output segments that can each include a string of one or more characters or words. The contents of an output segment can be selected from multiple possible outputs based on a predetermined order, conditional logic, or a random selection. The output segments can be concatenated to form the output dialog. In one example, a dialog generation file that defines the possible outputs for each output segment, an ordering of the output segments within the output dialog, and format for the output dialog can be used to generate the output dialog. The dialog generation file can include any number of functional blocks, which can each output an output segment, that can be arranged hierarchically and in a particular order to generate a desired output dialog.
    Type: Grant
    Filed: September 30, 2014
    Date of Patent: May 14, 2019
    Assignee: Apple Inc.
    Inventors: Harry J. Saddler, Nicolas Zeitlin
  • Patent number: 10282165
    Abstract: In an approach for selectively displaying a push notification, audio is captured using a microphone. A processor receives a push notification, wherein the push notification includes information. A processor identifies a keyword associated with the push notification based on the information. A processor determines that the captured audio includes the keyword. A processor determines whether to display the push notification based on the determination of whether the captured audio includes the keyword.
    Type: Grant
    Filed: April 6, 2016
    Date of Patent: May 7, 2019
    Assignee: International Business Machines Corporation
    Inventors: James E. Bostick, John M. Ganci, Jr., Martin G. Keen, Sarbajit K. Rakshit
  • Patent number: 10269371
    Abstract: A computer-implemented technique can include establishing an audio communication session between first and second computing devices and obtaining, by the first computing device, an audio input signal using audio data captured by a microphone. The first computing device can analyze the audio input signal to detect a speech input by its first user and can determine a duration of a detection period from when the audio input signal was obtained until the analyzing has completed. The first computing device can then transmit, to the second computing device, (i) a portion of the audio input signal beginning at a start of the speech input and (ii) the detection period duration, wherein receipt of the portion of the audio input signal and the detection period duration causes the second computing device to accelerate playback of the portion of the audio input signal to compensate for the detection period duration.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: April 23, 2019
    Assignee: Google LLC
    Inventors: Erik Kay, Jonas Erik Lindberg, Serge Lachapelle, Henrik Lundin
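A worked example of the compensation step: the receiver plays the transmitted audio slightly faster so that it catches up by exactly the sender-side detection period. The durations and the rate cap are invented.

```python
# Worked example: speed up playback of the transmitted audio just enough to
# absorb the sender-side detection period.

def playback_rate(audio_seconds: float, detection_period: float,
                  max_rate: float = 1.5) -> float:
    """Rate that finishes playback 'detection_period' seconds early, capped."""
    if audio_seconds <= detection_period:
        return max_rate
    return min(max_rate, audio_seconds / (audio_seconds - detection_period))

# 3.0 s of speech detected 0.5 s late plays back at 1.2x, finishing at the same
# wall-clock time that real-time playback without the delay would have.
print(playback_rate(audio_seconds=3.0, detection_period=0.5))   # 1.2
```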
  • Patent number: 10257362
    Abstract: Provided is a voice gateway, which is in communication with at least one mobile terminal. The voice gateway includes: a terminal connection module configured to establish communication with the mobile terminal; a processor connected with the terminal connection module and configured to process a voice or data service request initiated by the mobile terminal; and a communication module connected with the processor and configured to communicate, according to the voice or data service request, with an external network. The mobile terminal can select a number from the voice gateway as the number to initiate the voice or data service request, and the voice gateway establishes, according to the voice or data service request, voice or data communication with a called party or an external network. Therefore, the mobile terminal can carry out voice or data service communication not only with a local number, but also through the voice gateway.
    Type: Grant
    Filed: January 25, 2017
    Date of Patent: April 9, 2019
    Inventors: Xinming Zheng, Shu Zhou
  • Patent number: 10250685
    Abstract: Techniques for creating layer 2 (L2) extension networks are disclosed. One embodiment permits an L2 extension network to be created by deploying, configuring, and connecting a pair of virtual appliances in the data center and the cloud so that the appliances communicate via secure tunnels and bridge networks in the data center and the cloud. A pair of virtual appliances are first deployed in the data center and the cloud, and secure tunnels are then created between the virtual appliances. Thereafter, a stretched network is created by connecting a network interface in each of the virtual appliances to a respective local network, configuring virtual switch ports to which the virtual appliances are connected as sink ports that receive traffic with non-local destinations, and configuring each of the virtual appliances to bridge the network interface therein that is connected to the local network and tunnels between the pair of virtual appliances.
    Type: Grant
    Filed: August 29, 2017
    Date of Patent: April 2, 2019
    Assignee: VMWARE, INC.
    Inventors: Aravind Srinivasan, Narendra Kumar Basur Shankarappa, Sachin Thakkar, Serge Maskalik, Debashis Basak
  • Patent number: 10250446
    Abstract: The disclosed technology relates to a distributed policy store. A system is configured to locate, in an index, an entry for a network entity, determine, based on the entry, a file identifier for a file containing a record for the network entity and an offset indicating a location of the record in the file. The system is further configured to locate the file in a distributed file system using the file identifier, locate the record in the file using the offset, and retrieve the record.
    Type: Grant
    Filed: March 27, 2017
    Date of Patent: April 2, 2019
    Assignee: Cisco Technology, Inc.
    Inventors: Rohit Prasad, Shashi Gandham, Hai Vu, Varun Malhotra, Sunil Gupta, Abhishek Singh, Navindra Yadav, Ali Parandehgheibi, Ravi Prasad, Praneeth Vallem, Paul Lesiak, Hoang Nguyen
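The index-to-record lookup can be sketched with in-memory stand-ins for the index and the distributed file system; the file identifiers, offsets, and record format are invented for illustration.

```python
# Illustrative sketch: look up a network entity in the index, get the file
# identifier and byte offset, then read the record at that offset.

import io

files = {"policies-001": io.BytesIO(b'{"entity":"web-01","policy":"allow 443"}\n'
                                    b'{"entity":"db-01","policy":"deny all"}\n')}

index = {                      # entity -> (file identifier, offset into file)
    "web-01": ("policies-001", 0),
    "db-01":  ("policies-001", 41),
}

def get_record(entity: str) -> str:
    file_id, offset = index[entity]
    f = files[file_id]               # locate the file in the (distributed) store
    f.seek(offset)                   # jump to the record's location
    return f.readline().decode().strip()

print(get_record("db-01"))
```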
  • Patent number: 10237209
    Abstract: Methods, apparatus, systems, and computer-readable media are provided for invoking an agent module in an automated assistant application in response to user selection of a selectable element presented at a graphical user interface rendered by a non-automated assistant application. The invoked agent module can be associated with other content rendered in the non-automated assistant graphical user interface, and can optionally be invoked with values that are based on user interactions via the non-automated assistant application. Responsive content can be received from the agent module in response to the invocation, and corresponding content provided by the automated assistant application via an automated assistant interface. In these and other manners, selection of the selectable element causes transition from a non-conversational interface, to a conversational automated assistant interface—where an agent (relevant to content in the non-conversational interface) is invoked in the automated assistant interface.
    Type: Grant
    Filed: May 8, 2017
    Date of Patent: March 19, 2019
    Assignee: GOOGLE LLC
    Inventors: Vikram Aggarwal, Dina Elhaddad
  • Patent number: 10229450
    Abstract: Systems and methods are provided for generating a sales transaction from voice data input by a user. A user device may receive voice data including a preference for purchasing an item. The user device may convert the voice data to the preferences and perform a search for a sales transaction corresponding to the preferences. The search may include parameters about the user, such as a location. The sales transaction may include purchase prices, times, locations, or other relevant data. A user may accept or decline the sales transaction with additional user data. If the user accepts the sales transaction, the sales transaction may be completed with a payment provider and a transaction history given to the user for later redemption of the item. If the user declines the sales transaction, further sales transactions with additional items may be presented to the user.
    Type: Grant
    Filed: June 4, 2014
    Date of Patent: March 12, 2019
    Assignee: PAYPAL, INC.
    Inventors: Hyunju Lee, Joel P. Yarbrough, Francisco Vittorio Octavio Joachin D. Barretto, Gokul G Narayana Pillai
  • Patent number: 10220305
    Abstract: A method for handling communication for a computer game is provided. One example method includes executing an application for recording one or more audio messages authored by a user for automatic transmission to one or more recipients during game play of a video game. During game play, the method includes detecting that a qualifying event has occurred, and, in response, selecting an audio message from the one or more audio messages for the automatic transmission. The method further includes transmitting, automatically, the audio message to a client device of a predefined recipient for presentation of the audio message.
    Type: Grant
    Filed: March 13, 2018
    Date of Patent: March 5, 2019
    Assignee: Sony Interactive Entertainment America LLC
    Inventor: Gary Zalewski