Patents Examined by Michael N. Opsasnick
-
Patent number: 11184704
Abstract: Methods and apparatus for identifying a music service based on a user command. A content type is identified from a received user command and a music service is selected that supports the content type. The selected music service can then transmit audio content associated with the content type for playback.
Type: Grant
Filed: February 3, 2020
Date of Patent: November 23, 2021
Assignee: Sonos, Inc.
Inventors: Simon Jarvis, Mark Plagge, Christopher Butts
-
Patent number: 11176324
Abstract: The present disclosure involves systems, software, and computer-implemented methods for creating line item information from tabular data. One example method includes receiving event data values at a system. Column headers of columns in the event data values are identified. At least one column header is not included in the standard line item terms used by the system. Column values of the columns in the event data values are identified. The identified column headers and the identified column values are processed using one or more models to map each column to a standard line item term used by the system. The processing includes using context determination and content recognition to identify standard line item terms. An event is created in the system, including the creation of line items from the identified column values. Each line item includes standard line item terms mapped to the columns.
Type: Grant
Filed: September 26, 2019
Date of Patent: November 16, 2021
Assignee: SAP SE
Inventors: Kumaraswamy Gowda, Nithya Rajagopalan, Nishant Kumar, Panish Ramakrishna, Rajendra Vuppala, Erica Vandenhoek
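The column-mapping idea in this abstract can be sketched in a few lines. This is an illustrative sketch, not SAP's implementation: headers are matched against a synonym table ("context determination"), with a fallback that inspects the column's values ("content recognition"). The synonym table and standard term names are hypothetical.

```python
# Hypothetical header-synonym table: standard line item term -> header variants.
HEADER_SYNONYMS = {
    "quantity": {"qty", "quantity", "count"},
    "unit_price": {"unit price", "price", "rate"},
    "description": {"description", "item", "details"},
}

def looks_numeric(values):
    """Content check: do all values in the column parse as numbers?"""
    try:
        [float(str(v).replace(",", "")) for v in values]
        return True
    except ValueError:
        return False

def map_column(header, values):
    """Return a standard line item term for one column."""
    h = header.strip().lower()
    # Context determination: match the header against known synonyms.
    for term, synonyms in HEADER_SYNONYMS.items():
        if h in synonyms:
            return term
    # Content recognition: fall back to inspecting the column's values.
    return "amount" if looks_numeric(values) else "description"

columns = {"Qty": [1, 2], "Rate": [9.5, 3.0], "Notes": ["blue", "red"]}
mapping = {h: map_column(h, v) for h, v in columns.items()}
print(mapping)  # {'Qty': 'quantity', 'Rate': 'unit_price', 'Notes': 'description'}
```

The claimed method uses trained models for both steps; the lookup-plus-heuristic here only mirrors the two-stage structure.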
-
Patent number: 11163956
Abstract: A natural language processing method and system utilizes a combination of rules-based processes, vector-based processes, and machine learning-based processes to identify the meaning of terms extracted from data management system related text. Once the meaning of the terms has been identified, the method and system can automatically incorporate new forms and text into a data management system.
Type: Grant
Filed: May 23, 2019
Date of Patent: November 2, 2021
Assignee: Intuit Inc.
Inventors: Conrad De Peuter, Karpaga Ganesh Patchirajan, Saikat Mukherjee
-
Patent number: 11159664
Abstract: A method and system on an electronic device which uses speech recognition to initiate a communication from a mobile device having access to contact information for a number of contacts. In one example, the method comprises receiving through an audio input interface a voice input for initiating a communication, extracting from the voice input a type of communication and at least part of a contact name, and outputting, to an output interface, a selectable list of all contacts from the contact information which have the part of the contact name and which have a contact address associated with the type of communication. The mobile device may also be configured to access remote contact information from a remote server.
Type: Grant
Filed: July 24, 2019
Date of Patent: October 26, 2021
Assignees: BlackBerry Limited, 2236008 Ontario Inc.
Inventors: Stephen Lau, Darrin Kenneth John Fry, Jianqiang Shi
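The filtering step described above can be sketched directly: given a communication type and a partial name extracted from the voice input, list every contact whose name contains the partial name and that has an address for that type. The contact records and field names below are invented for illustration.

```python
# Hypothetical contact store; a real device would query its address book.
contacts = [
    {"name": "Alice Wong", "email": "alice@example.com", "phone": "555-0100"},
    {"name": "Ali Hassan", "phone": "555-0101"},
    {"name": "Bob Jones", "email": "bob@example.com"},
]

# Which contact field serves each type of communication (assumed mapping).
TYPE_TO_FIELD = {"email": "email", "call": "phone", "text": "phone"}

def matching_contacts(comm_type, name_part):
    """Selectable list: name contains the part AND an address exists for the type."""
    field = TYPE_TO_FIELD[comm_type]
    return [
        c["name"] for c in contacts
        if name_part.lower() in c["name"].lower() and field in c
    ]

print(matching_contacts("call", "ali"))   # ['Alice Wong', 'Ali Hassan']
print(matching_contacts("email", "ali"))  # ['Alice Wong']
```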
-
Patent number: 11152003
Abstract: Mechanisms are provided to implement an intelligent service broker for routing a voice command from a user to one or more virtual assistants based on each virtual assistant's capability to provide an accurate response. Responsive to receiving a voice command with a wake word associated with the intelligent service broker, the intelligent service broker identifies a subject or category of the voice command. Using the identified subject or category, the intelligent service broker selects one or more virtual assistants using a set of ranking values and a set of characteristics that indicate which ranking values to evaluate. The intelligent service broker sends the voice command to the identified virtual assistants and, responsive to receiving responses from more than one virtual assistant, identifies a confidence ranking for each of the responding virtual assistants and provides one or more of the responses based on a set of user configuration settings.
Type: Grant
Filed: September 27, 2018
Date of Patent: October 19, 2021
Assignee: International Business Machines Corporation
Inventor: Brian O'Donovan
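A minimal sketch of the broker's two decisions, under invented assistant names, categories, and scores: pick assistants whose stored ranking value for the identified category clears a threshold, then choose among their responses by confidence.

```python
# Hypothetical ranking table: category -> {assistant: ranking value}.
RANKINGS = {
    "weather": {"assistant_a": 0.9, "assistant_b": 0.6},
    "music": {"assistant_a": 0.4, "assistant_b": 0.8},
}

def pick_assistants(category, min_rank=0.5):
    """Select assistants whose ranking for the category clears a threshold."""
    ranks = RANKINGS.get(category, {})
    return sorted(a for a, r in ranks.items() if r >= min_rank)

def best_response(responses):
    """responses: {assistant: (text, confidence)}; highest confidence wins."""
    return max(responses.values(), key=lambda rc: rc[1])[0]

print(pick_assistants("weather"))  # ['assistant_a', 'assistant_b']
print(best_response({"assistant_a": ("Sunny", 0.92),
                     "assistant_b": ("Clear", 0.75)}))  # Sunny
```

The claimed mechanism also consults user configuration settings when returning responses; that layer is omitted here.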
-
Patent number: 11145305
Abstract: A method and a device for identifying an utterance from a signal is disclosed. The method includes acquiring a set of features for a respective segment of the signal, and an indication of an end-of-utterance moment in time in the signal corresponding to a moment in time after which the utterance has ended. The method includes determining an adjusted end-of-utterance moment in time, and labels for respective sets of features based on the adjusted end-of-utterance moment in time and time intervals of the corresponding segments. A given label is indicative of whether the utterance has ended during the corresponding segment associated with the respective set of features. The method also includes using the sets of features and the respective labels for training a Neural Network to predict during which segment of the digital audio signal the utterance has ended.
Type: Grant
Filed: August 13, 2019
Date of Patent: October 12, 2021
Assignee: YANDEX EUROPE AG
Inventor: Fedor Aleksandrovich Minkin
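The labeling step in this abstract reduces to an interval check, sketched below: given segment time intervals and an adjusted end-of-utterance time, each segment gets a binary label indicating whether the utterance ended during it. The labels would then pair with per-segment feature sets as training targets; the numbers are invented.

```python
def label_segments(intervals, end_time):
    """intervals: list of (start, end) in seconds; returns one label per segment.
    A label of 1 means the utterance ended during that segment."""
    return [1 if start <= end_time < end else 0 for start, end in intervals]

# Four 0.5 s segments; the adjusted end-of-utterance falls at t = 1.2 s.
intervals = [(0.0, 0.5), (0.5, 1.0), (1.0, 1.5), (1.5, 2.0)]
print(label_segments(intervals, 1.2))  # [0, 0, 1, 0]
```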
-
Patent number: 11138388
Abstract: The present teaching relates to facilitating a user-machine conversation. In one example, a query is obtained from a user. The query is directed to a first conversational bot. A reply in response to the query is obtained from the first conversational bot. A degree of validity of the reply is determined based on the reply and the query. A second conversational bot is determined based on the query and the degree of validity. The conversation is directed to the second conversational bot with the query.
Type: Grant
Filed: December 22, 2016
Date of Patent: October 5, 2021
Assignee: Verizon Media Inc.
Inventors: Michael Emery, Lavanya Colinjivadi Viswanathan
-
Patent number: 11133022
Abstract: A method may include dividing input audio into frames and calculating a characteristic value for each of the frames. The method may include establishing a voting matrix having a first dimension representing a quantity of segments of sample audio and a second dimension representing a quantity of frames of each segment. The method may include marking voting labels in the voting matrix corresponding to frames of the sample audio when the characteristic values of corresponding frames of the input audio and sample audio match. The method may include determining a frame to be a recognition result when a sum of the voting labels at a corresponding position is higher than a threshold.
Type: Grant
Filed: January 6, 2021
Date of Patent: September 28, 2021
Assignee: ADVANCED NEW TECHNOLOGIES CO., LTD.
Inventors: Zhijun Du, Nan Wang
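The voting scheme above can be sketched concretely. In this toy version (not the patented implementation), the sample audio is a grid of S segments by F frames of characteristic values; each input-audio frame whose value matches a sample frame's value marks a vote at that (segment, frame) position, and positions whose vote total exceeds a threshold are reported. All values are invented.

```python
def vote(sample, input_frames, threshold):
    """sample: S x F grid of characteristic values; returns winning positions."""
    S, F = len(sample), len(sample[0])
    votes = [[0] * F for _ in range(S)]           # the voting matrix
    for value in input_frames:                    # each input-audio frame
        for s in range(S):
            for f in range(F):
                if sample[s][f] == value:         # characteristic values match
                    votes[s][f] += 1              # mark a voting label
    return [(s, f) for s in range(S) for f in range(F)
            if votes[s][f] > threshold]

sample = [[3, 7, 7], [7, 1, 3]]                   # 2 segments x 3 frames
input_frames = [7, 7, 3]                          # input characteristic values
print(vote(sample, input_frames, 1))              # [(0, 1), (0, 2), (1, 0)]
```

Positions holding the value 7 collect two votes each and clear the threshold of 1; positions holding 3 collect only one and do not.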
-
Patent number: 11132516
Abstract: A sequence conversion method includes receiving a source sequence, converting the source sequence into a source vector representation sequence, obtaining at least two candidate target sequences and a translation probability value of each of the at least two candidate target sequences according to the source vector representation sequence, adjusting the translation probability value of each candidate target sequence, selecting an output target sequence from the at least two candidate target sequences according to an adjusted translation probability value of each candidate target sequence, and outputting the output target sequence. Hence, loyalty of a target sequence to a source sequence can be improved during sequence conversion.
Type: Grant
Filed: April 26, 2019
Date of Patent: September 28, 2021
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Zhaopeng Tu, Lifeng Shang, Xiaohua Liu, Hang Li
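The adjust-then-select step can be sketched as follows. The adjustment criterion here, a coverage score rewarding candidates that translate more of the source, is a hypothetical stand-in for whatever adjustment the claims cover; candidates and scores are invented.

```python
def select_target(candidates, coverage_weight=0.5):
    """candidates: list of (sequence, translation probability, coverage in [0, 1]).
    Returns the sequence with the best adjusted probability value."""
    def adjusted(item):
        _, prob, coverage = item
        # Adjustment step: blend model probability with source coverage,
        # so faithful (loyal) candidates can overtake merely likely ones.
        return prob * (1 - coverage_weight) + coverage * coverage_weight
    return max(candidates, key=adjusted)[0]

candidates = [
    ("the cat sat", 0.60, 0.5),       # more probable, but drops source words
    ("the cat sat down", 0.55, 1.0),  # slightly less probable, full coverage
]
print(select_target(candidates))  # the cat sat down
```

With the raw probabilities alone the first candidate would win; the adjustment lets the fully covering candidate be selected, which is the "loyalty" improvement the abstract describes.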
-
Patent number: 11100290
Abstract: A method and system for improving linguistic data and storage technology is provided. The method includes receiving data input text from a user and identifying text within the data input text. The data input text is edited and improvements in the data input text are detected via a machine learning process. In response, a modified version of the user interface is generated for allowing additional users to view and modify additional data input text. Change attributes associated with the data input text with respect to the modified version of the user interface are determined and alternative input suggestions are ranked. Editing data and code are generated in response to an editor engine interacting with a hardware controller. The editing data and code are executed, thereby updating and modifying functions associated with software engines to increase an efficiency of future recommendations associated with future data input text analysis.
Type: Grant
Filed: May 30, 2019
Date of Patent: August 24, 2021
Assignee: International Business Machines Corporation
Inventors: Jason Boada, Qin Shirley Held, Rachel Cohen, Munish Goyal, Dangaia Sims
-
Patent number: 11100297
Abstract: One embodiment provides a method, including: receiving, from a user and at a user interface of a conversational agent, a query related to a business process; identifying, using process entity extraction on the query, (i) the business process and (ii) a business object corresponding to an entity of the query; mapping the business object to code corresponding to the business object, wherein the mapping comprises (i) mapping the business object to an object within a business process model using a domain dictionary and (ii) accessing code corresponding to the object within the business process model; generating a natural language response responsive to the received query by (i) extracting the code corresponding to the business object, (ii) identifying a rule within the extracted code corresponding to a variable of the query, and (iii) generating the natural language response from the identified rule; and providing the natural language response.
Type: Grant
Filed: April 15, 2019
Date of Patent: August 24, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Sampath Dechu, Neelamadhav Gantayat, Monika Gupta
-
Patent number: 11074037
Abstract: A voice broadcast method includes: generating a voice broadcast instruction when a voice broadcast operation is received; according to the voice broadcast instruction, starting from a current focus node, detecting nodes in a current web page interface; and when a target node is determined, broadcasting text content of the target node, wherein the target node is a node that has text information, has no sub-nodes, and does not respond to an operation event.
Type: Grant
Filed: February 17, 2017
Date of Patent: July 27, 2021
Assignee: ZTE CORPORATION
Inventor: Gang Cao
-
Patent number: 11074414
Abstract: A test controller submits testing phrases to a text classifier and receives, from the text classifier, classification labels each comprising one or more respective heatmap values each associated with a separate word. The test controller aligns each of the classification labels corresponding with a respective testing phrase. The test controller identifies one or more anomalies of a selection of one or more classification labels that are different from an expected classification label for the respective testing phrase. The test controller outputs a graphical representation in a user interface of the selection of one or more classification labels and one or more respective testing phrases with visual indicators based on one or more respective heatmap values.
Type: Grant
Filed: June 27, 2019
Date of Patent: July 27, 2021
Assignee: International Business Machines Corporation
Inventors: Ming Tan, Saloni Potdar, Lakshminarayanan Krishnamurthy
-
Patent number: 11074928
Abstract: A computer-implemented method includes determining a meeting has initialized between a first user and a second user, wherein vocal and video recordings are produced for at least the first user. The method receives the vocal and video recordings for the first user. The method analyzes the vocal and video recordings for the first user according to one or more parameters for speech and one or more parameters for gestures. The method determines one or more emotions and a role in the meeting for the first user based at least on the analyzed vocal and video recordings. The method sends an output of analysis to at least one of the first user and the second user, wherein the output of analysis includes at least the determined one or more emotions and the role in the meeting for the first user.
Type: Grant
Filed: January 26, 2018
Date of Patent: July 27, 2021
Assignee: International Business Machines Corporation
Inventors: Eli M. Dow, Thomas D. Fitzsimmons, Tynan J. Garrett, Emily M. Metruck
-
Patent number: 11068656
Abstract: A test controller submits testing phrases to a text classifier and receives, from the text classifier, classification labels each comprising one or more respective heatmap values each associated with a separate word. The test controller aligns each of the classification labels corresponding with a respective testing phrase. The test controller identifies one or more anomalies of a selection of one or more classification labels that are different from an expected classification label for the respective testing phrase. The test controller outputs a graphical representation in a user interface of the selection of one or more classification labels and one or more respective testing phrases with visual indicators based on one or more respective heatmap values.
Type: Grant
Filed: April 10, 2019
Date of Patent: July 20, 2021
Assignee: International Business Machines Corporation
Inventors: Ming Tan, Saloni Potdar, Lakshminarayanan Krishnamurthy
-
Patent number: 11069345
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing speech recognition by generating a neural network output from an audio data input sequence, where the neural network output characterizes words spoken in the audio data input sequence. One of the methods includes, for each of the audio data inputs, providing a current audio data input sequence that comprises the audio data input and the audio data inputs preceding the audio data input in the audio data input sequence to a convolutional subnetwork comprising a plurality of dilated convolutional neural network layers, wherein the convolutional subnetwork is configured to, for each of the plurality of audio data inputs: receive the current audio data input sequence for the audio data input, and process the current audio data input sequence to generate an alternative representation for the audio data input.
Type: Grant
Filed: December 18, 2019
Date of Patent: July 20, 2021
Assignee: DeepMind Technologies Limited
Inventors: Aaron Gerard Antonius van den Oord, Sander Etienne Lea Dieleman, Nal Emmerich Kalchbrenner, Karen Simonyan, Oriol Vinyals, Lasse Espeholt
-
Patent number: 11062086
Abstract: A method, computer system, and a computer program product for automatically recommending a plurality of text snippets from a book script for a movie adaptation is provided. The present invention may include receiving, by a user, a piece of input data, wherein the input data includes the book script, and a plurality of past book-to-movie adaptations. The present invention may then include identifying a plurality of text snippets associated with the received book script. The present invention may also include recommending the plurality of text snippets associated with the received book script to include in a movie based on the plurality of past book-to-movie adaptations and a plurality of movie reviews corresponding with the plurality of past book-to-movie adaptations.
Type: Grant
Filed: April 15, 2019
Date of Patent: July 13, 2021
Assignee: International Business Machines Corporation
Inventors: Priyanka Agrawal, Srikanth Govindaraj Tamilselvam, Amrita Saha, Pankaj S. Dayama
-
Patent number: 11049506
Abstract: An apparatus for decoding an encoded audio signal, includes: a spectral domain audio decoder for generating a first decoded representation of a first set of first spectral portions being spectral prediction residual values; a frequency regenerator for generating a reconstructed second spectral portion using a first spectral portion of the first set of first spectral portions, wherein the reconstructed second spectral portion additionally includes spectral prediction residual values; and an inverse prediction filter for performing an inverse prediction over frequency using the spectral residual values for the first set of first spectral portions and the reconstructed second spectral portion using prediction filter information included in the encoded audio signal.
Type: Grant
Filed: May 20, 2019
Date of Patent: June 29, 2021
Assignee: Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
Inventors: Sascha Disch, Frederik Nagel, Ralf Geiger, Balaji Nagendran Thoshkahna, Konstantin Schmidt, Stefan Bayer, Christian Neukam, Bernd Edler, Christian Helmrich
-
Patent number: 11049493
Abstract: [Problem] Conventional technology cannot appropriately support spoken dialog that is carried out in multiple languages.
Type: Grant
Filed: July 24, 2017
Date of Patent: June 29, 2021
Assignee: NATIONAL INSTITUTE OF INFORMATION AND COMMUNICATIONS TECHNOLOGY
Inventors: Atsuo Hiroe, Takuma Okamoto
-
Patent number: 11042713
Abstract: Disclosed herein is computer technology that applies natural language processing (NLP) techniques to training data to generate information used to train a natural language generation (NLG) system to produce output that stylistically resembles the training data. In this fashion, the NLG system can be readily trained with training data supplied by a user so that the NLG system is adapted to produce output that stylistically resembles such training data. In an example, an NLP system detects a plurality of linguistic features in the training data. These detected linguistic features are then aggregated into a specification data structure that is arranged for training the NLG system to produce natural language output that stylistically resembles the training data. Parameters in the specification data structure can be linked to objects in an ontology used by the NLG system to facilitate the training of the NLG system based on the detected linguistic features.
Type: Grant
Filed: June 18, 2019
Date of Patent: June 22, 2021
Assignee: NARRATIVE SCIENCE INC.
Inventors: Daniel Joseph Platt, Nathan D. Nichols, Michael Justin Smathers, Jared Lorince