Patents by Inventor Akash Krishnan

Akash Krishnan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240104153
    Abstract: A method includes receiving, at a search toolbar, a search query from a machine in a network. The machine has an associated machine profile for participating in the network as an entity. The machine profile includes a machine identifier and machine metadata. A query type is determined from the search query. A search context for the machine is determined using a semantic graph of the network. From a set of services for the network, one or more relevant services to respond to the search query are identified based on the query type and the search context. The search query is applied to the one or more relevant services to obtain a set of responses. A set of relevant results for the search query is determined from the set of responses. The set of relevant results is transmitted to the machine.
    Type: Application
    Filed: September 23, 2022
    Publication date: March 28, 2024
    Applicant: SAP SE
    Inventors: Gopi Kishan, Rohit Jalagadugula, Kavitha Krishnan, Sai Hareesh Anamandra, Akash Srivastava
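The abstract above describes routing a search query to the subset of network services that match its query type and the machine's context. A minimal sketch of that routing step (all service names, the `SERVICES` table, and the context shape are hypothetical, not from the patent):

```python
# Hypothetical query-type -> services table; in the patent the relevant
# services are chosen using the query type and a semantic-graph context.
SERVICES = {
    "status":  ["telemetry_service"],
    "catalog": ["catalog_service", "pricing_service"],
}

def route_query(query_type, context, query):
    """Fan a query out to the relevant services and collect responses."""
    relevant = SERVICES.get(query_type, [])
    # The machine's context can narrow the service set further;
    # here it is reduced to a simple allow-list filter.
    relevant = [s for s in relevant if s in context.get("allowed", relevant)]
    # Stand-in for real service calls: tag each response with its source.
    return [f"{s}:{query}" for s in relevant]

results = route_query("catalog", {"allowed": ["catalog_service"]}, "part-42")
```

This only illustrates the dispatch shape; the patent's context determination over a semantic graph is far richer than an allow-list.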
  • Publication number: 20240095105
    Abstract: A method includes receiving a message query from an entity identifier participating in a social network. The message query specifies one or more entities, one or more requirements, and one or more constraints. A set of message query parameters is generated based on the message query. A set of queries for a semantic graph of the social network is generated based on the set of message query parameters. The set of queries is applied to the semantic graph to obtain a set of query results. A message context of the entity identifier is determined based on the set of query results and the set of message query parameters. A set of messages from a message repository is determined based on the message context. The set of messages can be presented on a client computer associated with the entity identifier.
    Type: Application
    Filed: September 20, 2022
    Publication date: March 21, 2024
    Applicant: SAP SE
    Inventors: Sai Hareesh Anamandra, Gopi Kishan, Kavitha Krishnan, Rohit Jalagadugula, Akash Srivastava
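The pipeline in this abstract runs graph query results and the original query parameters into a message context, then selects messages against that context. A toy sketch of those two steps (the data shapes and matching rule are assumptions for illustration, not the patented method):

```python
def build_context(query_results, params):
    """Combine semantic-graph query results with the query parameters."""
    return {"entities": params["entities"], "facts": set(query_results)}

def select_messages(repo, context):
    """Pick messages that mention any entity in the context."""
    return [m for m in repo if any(e in m for e in context["entities"])]

ctx = build_context(["supplier(A, B)"], {"entities": ["A"]})
msgs = select_messages(["A shipped parts", "C invoice"], ctx)
```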
  • Publication number: 20240078495
    Abstract: Systems, methods, and computer media for determining compatible users through machine learning are provided herein. Previous interactions between some users in a group can be used to determine a first set of user-to-user compatibility scores. Both the first set of compatibility scores and attributes for the users in the group can be provided as inputs to a machine learning model that can be used to determine a second set of user-to-user compatibility scores for user pairs who do not have an interaction history. Along with input constraints, the first and second sets of user-to-user compatibility scores can be used to select compatible user groups.
    Type: Application
    Filed: August 29, 2022
    Publication date: March 7, 2024
    Applicant: SAP SE
    Inventors: Sai Hareesh Anamandra, Gopi Kishan, Rohit Jalagadugula, Akash Srivastava, Kavitha Krishnan, Vinay George Roy
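The core idea above is a two-source score table: pairs with an interaction history get a history-derived score, and remaining pairs fall back to a learned model over user attributes. A minimal sketch of that fallback structure (the model here is a stand-in lambda, not the patented machine learning model):

```python
from itertools import combinations

def compatibility_scores(users, history, model):
    """history: known pair scores; model: fallback scorer for unseen pairs."""
    scores = {}
    for a, b in combinations(users, 2):
        pair = frozenset((a, b))
        # Prefer the interaction-derived score when one exists;
        # otherwise fall back to the learned model on user attributes.
        scores[pair] = history.get(pair, model(a, b))
    return scores

scores = compatibility_scores(
    ["ann", "bob", "cat"],
    {frozenset(("ann", "bob")): 0.9},
    lambda a, b: 0.5,  # stand-in for a trained model
)
```

Group selection under input constraints would then optimize over this score table, which the sketch leaves out.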
  • Patent number: 9549068
    Abstract: A method for adaptive voice interaction includes monitoring voice communications between a service recipient and a service representative, measuring a set of voice communication features based upon the voice communications between the service recipient and the service representative, analyzing the set of voice communication features to generate emotion metric values, and generating a response based on the analysis of the set of voice communication features.
    Type: Grant
    Filed: January 28, 2015
    Date of Patent: January 17, 2017
    Assignee: Simple Emotion, Inc.
    Inventors: Akash Krishnan, Matthew Fernandez
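The abstract's pipeline is: measure voice features, turn them into emotion metric values, then generate a response from the analysis. A toy sketch of that flow (the feature names, weights, and threshold are invented for illustration; the patent does not specify this formula):

```python
def emotion_metrics(features):
    """Toy mapping from voice features to an arousal metric in [0, 1]."""
    # High pitch variance and high energy are read here as high arousal.
    arousal = min(1.0, 0.5 * features["pitch_var"] + 0.5 * features["energy"])
    return {"arousal": arousal}

def generate_response(metrics, threshold=0.7):
    """Pick an action for the service representative from the metrics."""
    return "escalate" if metrics["arousal"] > threshold else "continue"

m = emotion_metrics({"pitch_var": 0.9, "energy": 0.9})
action = generate_response(m)
```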
  • Publication number: 20150279364
    Abstract: The invention described here uses a Mouth Phoneme Model that relates phonemes and visemes using audio and visual information. This method allows for the direct conversion between lip movements and phonemes and, furthermore, the lip reading of any word in the English language. A speech API was used to extract phonemes from audio data obtained from a database that consists of video and audio recordings of humans speaking words in different accents. A machine learning algorithm similar to those in WEKA (Waikato Environment for Knowledge Analysis) was used to train the lip reading system.
    Type: Application
    Filed: March 29, 2014
    Publication date: October 1, 2015
    Inventors: Ajay Krishnan, Akash Krishnan
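A key fact behind phoneme-viseme models like the one above is that the mapping is many-to-one: several phonemes look identical on the lips, so a viseme sequence expands to many candidate phoneme sequences that a trained model must disambiguate. A small sketch of that expansion (the viseme table below is a hypothetical fragment, not the patent's Mouth Phoneme Model):

```python
# Hypothetical many-to-one viseme -> phonemes table: e.g. /p/, /b/, /m/
# all close the lips, which is why lip reading needs a statistical model.
VISEME_TO_PHONEMES = {
    "bilabial": ["p", "b", "m"],
    "rounded":  ["o", "w"],
}

def candidate_phoneme_sequences(visemes):
    """Expand a viseme sequence into every phoneme sequence it could be."""
    seqs = [[]]
    for v in visemes:
        seqs = [s + [p] for s in seqs for p in VISEME_TO_PHONEMES[v]]
    return ["".join(s) for s in seqs]

cands = candidate_phoneme_sequences(["bilabial", "rounded"])
```

A lip-reading system would then score these candidates against a language model or lexicon to pick the most likely word.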
  • Publication number: 20150213800
    Abstract: A method for adaptive voice interaction includes monitoring voice communications between a service recipient and a service representative, measuring a set of features based upon the voice communications, and analyzing the set of features to generate emotion metric values.
    Type: Application
    Filed: January 28, 2015
    Publication date: July 30, 2015
    Inventors: Akash Krishnan, Matthew Fernandez
  • Patent number: 8825479
    Abstract: A computerized method, software, and system for recognizing emotions from a speech signal, wherein statistical and MFCC features are extracted from the speech signal, the MFCC features are sorted to provide a basis for comparison between the speech signal and reference samples, the statistical and MFCC features are compared between the speech signal and reference samples, a scoring system is used to compare relative correlation to different emotions, a probable emotional state is assigned to the speech signal based on the scoring system, and the probable emotional state is communicated to a user.
    Type: Grant
    Filed: October 24, 2013
    Date of Patent: September 2, 2014
    Assignee: Simple Emotion, Inc.
    Inventors: Akash Krishnan, Matthew Fernandez
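The scoring step this family of patents describes compares the signal's feature vector against per-emotion reference samples and assigns the emotion with the best score. A minimal sketch of that comparison (the distance-based score and the 2-dimensional reference vectors are illustrative assumptions; real MFCC vectors are higher-dimensional and the patent's scoring system is more involved):

```python
import math

def score_emotions(features, references):
    """Score each emotion by closeness to its reference feature vector."""
    scores = {}
    for emotion, ref in references.items():
        dist = math.dist(features, ref)      # Euclidean distance
        scores[emotion] = 1.0 / (1.0 + dist)  # closer -> higher score
    return scores

def classify(features, references):
    """Assign the probable emotional state with the highest score."""
    scores = score_emotions(features, references)
    return max(scores, key=scores.get)

refs = {"calm": [0.1, 0.1], "angry": [0.9, 0.8]}
label = classify([0.85, 0.75], refs)
```

In practice the features would be MFCCs plus statistical features extracted from the audio (e.g. with a library such as librosa), not hand-written vectors.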
  • Publication number: 20140052448
    Abstract: A computerized method, software, and system for recognizing emotions from a speech signal, wherein statistical and MFCC features are extracted from the speech signal, the MFCC features are sorted to provide a basis for comparison between the speech signal and reference samples, the statistical and MFCC features are compared between the speech signal and reference samples, a scoring system is used to compare relative correlation to different emotions, a probable emotional state is assigned to the speech signal based on the scoring system, and the probable emotional state is communicated to a user.
    Type: Application
    Filed: October 24, 2013
    Publication date: February 20, 2014
    Applicant: Simple Emotion, Inc.
    Inventors: Akash Krishnan, Matthew Fernandez
  • Patent number: 8595005
    Abstract: A computerized method, software, and system for recognizing emotions from a speech signal, wherein statistical and MFCC features are extracted from the speech signal, the MFCC features are sorted to provide a basis for comparison between the speech signal and reference samples, the statistical and MFCC features are compared between the speech signal and reference samples, a scoring system is used to compare relative correlation to different emotions, a probable emotional state is assigned to the speech signal based on the scoring system, and the probable emotional state is communicated to a user.
    Type: Grant
    Filed: April 22, 2011
    Date of Patent: November 26, 2013
    Assignee: Simple Emotion, Inc.
    Inventors: Akash Krishnan, Matthew Fernandez
  • Publication number: 20110295607
    Abstract: A computerized method, software, and system for recognizing emotions from a speech signal, wherein statistical and MFCC features are extracted from the speech signal, the MFCC features are sorted to provide a basis for comparison between the speech signal and reference samples, the statistical and MFCC features are compared between the speech signal and reference samples, a scoring system is used to compare relative correlation to different emotions, a probable emotional state is assigned to the speech signal based on the scoring system, and the probable emotional state is communicated to a user.
    Type: Application
    Filed: April 22, 2011
    Publication date: December 1, 2011
    Inventors: Akash Krishnan, Matthew Fernandez