Patents by Inventor Harshawardhan Madhukar Wabgaonkar

Harshawardhan Madhukar Wabgaonkar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11397888
    Abstract: A virtual agent with a dialogue management system, and a method of training the dialogue management system, are disclosed. The dialogue management system is trained using a deep reinforcement learning process. Training involves obtaining or simulating training dialogue data. During training, actions for the dialogue management system are selected by a Deep Q Network that processes observations. The Deep Q Network is updated using a target function that includes a reward. The reward may be generated by considering one or more of the following metrics: task completion percentage, dialogue length, sentiment analysis of the user's response, emotional analysis of the user's state, explicit user feedback, and assessed quality of the action. The set of actions the dialogue management system can take at any time may be limited by an action screener that predicts the subset of actions the agent should consider for a given state of the system. (See the screened action-selection sketch after this listing.)
    Type: Grant
    Filed: June 14, 2018
    Date of Patent: July 26, 2022
    Assignee: Accenture Global Solutions Limited
    Inventors: Harshawardhan Madhukar Wabgaonkar, Shubhashis Sengupta, Tulika Saha
  • Patent number: 11093307
    Abstract: A device may receive first information that identifies an input associated with a virtual agent application executing on a user device. The virtual agent application may provide an interface for a project involving a plurality of user devices. Based on the first information that identifies the input, the device may determine a first response using second information. Based on at least one of the first information or the first response, and without user input, the device may determine a second response. The device may provide, to the virtual agent application of the user device, fourth information that identifies at least one of the first response or the second response. (See the two-stage response sketch after this listing.)
    Type: Grant
    Filed: April 13, 2017
    Date of Patent: August 17, 2021
    Assignee: Accenture Global Solutions Limited
    Inventors: Roshni Ramesh Ramnani, Harshawardhan Madhukar Wabgaonkar, Shubhashis Sengupta, Sanjay Podder, Neville Dubash, Tirupal Rao Ravilla, Sumitraj Ganapat Patil, Rakesh Thimmaiah, Priyavanshi Pathania, Reeja Jose, Chaitra Hareesh
  • Patent number: 10950223
    Abstract: The system and method generally include identifying whether an utterance spoken by a user (e.g., a customer) is a complete or an incomplete sentence. For example, the system may include a partial utterance detection module that determines whether an utterance spoken by a user is a partial utterance. The detection process may include providing a detection advice code that gives a recommendation for handling the utterance of interest. If the utterance is determined to be an incomplete sentence, the system and method can identify the type of utterance. For example, the system may include a partial utterance classification module that predicts the class of a partial utterance. The classification process may include providing a classification advice code that gives a recommendation for handling the utterance of interest. (See the partial-utterance sketch after this listing.)
    Type: Grant
    Filed: August 20, 2018
    Date of Patent: March 16, 2021
    Assignee: Accenture Global Solutions Limited
    Inventors: Poulami Debnath, Shubhashis Sengupta, Harshawardhan Madhukar Wabgaonkar
  • Patent number: 10679613
    Abstract: A system and method for spoken language understanding using recurrent neural networks ("RNNs") is disclosed. The system and method jointly perform the following three functions when processing the word sequence of a user utterance: (1) classify a user's speech act into a dialogue act category, (2) identify the user's intent, and (3) extract semantic constituents from the word sequence. The system and method include using a bidirectional RNN to convert the word sequence into a hidden state representation. By providing two different orderings of the word sequence, the bidirectional nature of the RNN improves the accuracy of all three functions. The system and method use attention, which improves efficiency and accuracy by focusing the spoken language understanding system on certain parts of the word sequence. The three functions can be jointly trained, which increases efficiency. (See the joint-model sketch after this listing.)
    Type: Grant
    Filed: June 14, 2018
    Date of Patent: June 9, 2020
    Assignee: Accenture Global Solutions Limited
    Inventors: Harshawardhan Madhukar Wabgaonkar, Shubhashis Sengupta, Tirupal Rao Ravilla, Sumitraj Ganapat Patil, Poulami Debnath, Sushravya G M, Roshni Ramesh Ramnani, Gurudatta Mishra, Moushumi Mahato, Mauajama Firdaus
  • Publication number: 20200058295
    Abstract: The system and method generally include identifying whether an utterance spoken by a user (e.g., a customer) is a complete or an incomplete sentence. For example, the system may include a partial utterance detection module that determines whether an utterance spoken by a user is a partial utterance. The detection process may include providing a detection advice code that gives a recommendation for handling the utterance of interest. If the utterance is determined to be an incomplete sentence, the system and method can identify the type of utterance. For example, the system may include a partial utterance classification module that predicts the class of a partial utterance. The classification process may include providing a classification advice code that gives a recommendation for handling the utterance of interest.
    Type: Application
    Filed: August 20, 2018
    Publication date: February 20, 2020
    Inventors: Poulami Debnath, Shubhashis Sengupta, Harshawardhan Madhukar Wabgaonkar
  • Publication number: 20190385051
    Abstract: A virtual agent with a dialogue management system, and a method of training the dialogue management system, are disclosed. The dialogue management system is trained using a deep reinforcement learning process. Training involves obtaining or simulating training dialogue data. During training, actions for the dialogue management system are selected by a Deep Q Network that processes observations. The Deep Q Network is updated using a target function that includes a reward. The reward may be generated by considering one or more of the following metrics: task completion percentage, dialogue length, sentiment analysis of the user's response, emotional analysis of the user's state, explicit user feedback, and assessed quality of the action. The set of actions the dialogue management system can take at any time may be limited by an action screener that predicts the subset of actions the agent should consider for a given state of the system.
    Type: Application
    Filed: June 14, 2018
    Publication date: December 19, 2019
    Inventors: Harshawardhan Madhukar Wabgaonkar, Shubhashis Sengupta, Tulika Saha
  • Publication number: 20190385595
    Abstract: A system and method for spoken language understanding using recurrent neural networks ("RNNs") is disclosed. The system and method jointly perform the following three functions when processing the word sequence of a user utterance: (1) classify a user's speech act into a dialogue act category, (2) identify the user's intent, and (3) extract semantic constituents from the word sequence. The system and method include using a bidirectional RNN to convert the word sequence into a hidden state representation. By providing two different orderings of the word sequence, the bidirectional nature of the RNN improves the accuracy of all three functions. The system and method use attention, which improves efficiency and accuracy by focusing the spoken language understanding system on certain parts of the word sequence. The three functions can be jointly trained, which increases efficiency.
    Type: Application
    Filed: June 14, 2018
    Publication date: December 19, 2019
    Inventors: Harshawardhan Madhukar Wabgaonkar, Shubhashis Sengupta, Tirupal Rao Ravilla, Sumitraj Ganapat Patil, Poulami Debnath, Sushravya G M, Roshni Ramesh Ramnani, Gurudatta Mishra, Moushumi Mahato, Mauajama Firdaus
  • Publication number: 20180165379
    Abstract: A device may receive first information that identifies an input associated with a virtual agent application executing on a user device. The virtual agent application may provide an interface for a project involving a plurality of user devices. Based on the first information that identifies the input, the device may determine a first response using second information. Based on at least one of the first information or the first response, and without user input, the device may determine a second response. The device may provide, to the virtual agent application of the user device, fourth information that identifies at least one of the first response or the second response.
    Type: Application
    Filed: April 13, 2017
    Publication date: June 14, 2018
    Inventors: Roshni Ramesh Ramnani, Harshawardhan Madhukar Wabgaonkar, Shubhashis Sengupta, Sanjay Podder, Neville Dubash, Tirupal Rao Ravilla, Sumitraj Ganapat Patil, Rakesh Thimmaiah, Priyavanshi Pathania, Reeja Jose, Chaitra Hareesh
  • Patent number: 9747498
    Abstract: A fused image of a person's hand is accessed, the fused image having been generated using a segmented graylevel image and a segmented color image. The hand in the fused image is identified. One or more finger tips and one or more finger valleys in the fused image are identified. One or more fingers of the hand are segmented based on the identified finger tips and finger valleys. The one or more fingers of the hand are labeled. One or more features for each finger of the hand are determined. (See the mask-fusion sketch after this listing.)
    Type: Grant
    Filed: February 12, 2016
    Date of Patent: August 29, 2017
    Assignee: Accenture Global Services Limited
    Inventors: Harshawardhan Madhukar Wabgaonkar, Sanjoy Paul
  • Publication number: 20160196469
    Abstract: A fused image of a person's hand is accessed, the fused image having been generated using a segmented graylevel image and a segmented color image. The hand in the fused image is identified. One or more finger tips and one or more finger valleys in the fused image are identified. One or more fingers of the hand are segmented based on the identified finger tips and finger valleys. The one or more fingers of the hand are labeled. One or more features for each finger of the hand are determined.
    Type: Application
    Filed: February 12, 2016
    Publication date: July 7, 2016
    Inventors: Harshawardhan Madhukar Wabgaonkar, Sanjoy Paul
  • Patent number: 9262675
    Abstract: A fused image of a person's hand is accessed, the fused image having been generated using a segmented graylevel image and a segmented color image. The hand in the fused image is identified. One or more finger tips and one or more finger valleys in the fused image are identified. One or more fingers of the hand are segmented based on the identified finger tips and finger valleys. The one or more fingers of the hand are labeled. One or more features for each finger of the hand are determined.
    Type: Grant
    Filed: April 24, 2014
    Date of Patent: February 16, 2016
    Assignee: Accenture Global Services Limited
    Inventors: Harshawardhan Madhukar Wabgaonkar, Sanjoy Paul
  • Publication number: 20140321718
    Abstract: A fused image of a person's hand is accessed, the fused image having been generated using a segmented graylevel image and a segmented color image. The hand in the fused image is identified. One or more finger tips and one or more finger valleys in the fused image are identified. One or more fingers of the hand are segmented based on the identified finger tips and finger valleys. The one or more fingers of the hand are labeled. One or more features for each finger of the hand are determined.
    Type: Application
    Filed: April 24, 2014
    Publication date: October 30, 2014
    Inventors: Harshawardhan Madhukar Wabgaonkar, Sanjoy Paul
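
Illustrative code sketches

The sketch below gestures at the screened action selection described in patent 11397888 (and publication 20190385051): a Deep Q Network scores candidate actions, an action screener masks out actions the agent should not consider in the current state, and a composite reward combines the metrics named in the abstract. The network shape, reward weights, and all names here are illustrative assumptions, not the patented design.

```python
# Hedged sketch: DQN action selection constrained by an action screener.
# Names, sizes, and reward weights are assumptions, not the patented design.
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)  # one Q-value per dialogue action

def composite_reward(task_completion, dialogue_len, sentiment, feedback):
    # Illustrative weighted sum over metrics named in the abstract.
    return 1.0 * task_completion - 0.05 * dialogue_len + 0.5 * sentiment + feedback

def select_action(q_net, state, allowed):
    # The action screener supplies `allowed`, a boolean mask over actions;
    # disallowed actions are set to -inf so the argmax ignores them.
    q_values = q_net(state).masked_fill(~allowed, float("-inf"))
    return int(q_values.argmax())

q_net = QNetwork(state_dim=16, n_actions=8)
state = torch.randn(16)
allowed = torch.zeros(8, dtype=torch.bool)
allowed[[0, 2, 5]] = True  # screener's predicted subset for this state
print("action:", select_action(q_net, state, allowed))
print("reward:", composite_reward(0.8, 12, 0.3, 1.0))
```

Masking disallowed actions to -inf before the argmax is one simple way to consume the screener's predicted subset; the screener itself would be a separate predictive model.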
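
The two-stage response flow of patent 11093307 (publication 20180165379) can be pictured as below: a first response is looked up from stored information, and a second, follow-up response is derived without further user input. The knowledge base, follow-up rules, and all identifiers are invented placeholders, assuming a simple intent-keyed lookup.

```python
# Hypothetical two-stage response flow; the knowledge base and follow-up
# rules are invented placeholders, not the patented implementation.
from dataclasses import dataclass

KNOWLEDGE_BASE = {  # the "second information" a first response draws on
    "status": "Build 42 passed all tests.",
}
FOLLOW_UPS = {  # rules for a second response derived without user input
    "status": "Would you like the full test report?",
}

@dataclass
class AgentReply:
    first_response: str
    second_response: str

def handle_input(user_input: str) -> AgentReply:
    key = user_input.strip().lower()
    first = KNOWLEDGE_BASE.get(key, "Sorry, I don't know that yet.")
    # The second response is derived from the input and the first
    # response alone, with no further user interaction.
    second = FOLLOW_UPS.get(key, "Is there anything else I can help with?")
    return AgentReply(first, second)

print(handle_input("status"))
```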
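
For patent 10950223 (publication 20200058295), here is a minimal sketch of the detection and classification modules with their advice codes, assuming toy heuristics in place of the trained models the abstract implies; the advice code labels are likewise invented.

```python
# Toy heuristics standing in for the trained detection and classification
# modules; the advice codes are invented labels, not the patent's codes.
COMPLETENESS_CUES = {"is", "are", "want", "need", "can", "please"}

def detect_partial(utterance: str):
    """Return (is_partial, detection_advice_code)."""
    tokens = utterance.lower().split()
    is_partial = len(tokens) < 3 or not (COMPLETENESS_CUES & set(tokens))
    return is_partial, ("ASK_CLARIFICATION" if is_partial else "PROCEED")

def classify_partial(utterance: str):
    """Return (predicted_class, classification_advice_code)."""
    tokens = utterance.lower().split()
    if tokens and tokens[0] in {"what", "when", "where", "who", "how"}:
        return "ELLIPTICAL_QUESTION", "RESOLVE_FROM_CONTEXT"
    return "FRAGMENT", "PROMPT_FOR_DETAIL"

utt = "flight to Boston"
is_partial, advice = detect_partial(utt)
print(is_partial, advice)
if is_partial:
    print(classify_partial(utt))  # ('FRAGMENT', 'PROMPT_FOR_DETAIL')
```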
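
A hedged PyTorch sketch in the spirit of patent 10679613 (publication 20190385595): a bidirectional RNN encodes the word sequence, attention pools the hidden states, and three heads jointly produce the dialogue act, the intent, and per-token slot labels. Dimensions, the LSTM cell choice, and the attention form are assumptions, not the patented architecture.

```python
# Illustrative joint SLU model: BiLSTM encoder + attention + three heads.
import torch
import torch.nn as nn

class JointSLU(nn.Module):
    def __init__(self, vocab, n_acts, n_intents, n_slots, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        # Bidirectional LSTM reads the word sequence in both orders.
        self.rnn = nn.LSTM(dim, dim, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * dim, 1)  # simple learned attention scorer
        self.act_head = nn.Linear(2 * dim, n_acts)
        self.intent_head = nn.Linear(2 * dim, n_intents)
        self.slot_head = nn.Linear(2 * dim, n_slots)

    def forward(self, tokens):
        h, _ = self.rnn(self.emb(tokens))             # (B, T, 2*dim)
        weights = torch.softmax(self.attn(h), dim=1)  # attention over time
        context = (weights * h).sum(dim=1)            # pooled utterance vector
        # Joint outputs: utterance-level act and intent, token-level slots.
        return self.act_head(context), self.intent_head(context), self.slot_head(h)

model = JointSLU(vocab=1000, n_acts=5, n_intents=10, n_slots=20)
tokens = torch.randint(0, 1000, (1, 7))  # one 7-word utterance
acts, intent, slots = model(tokens)
print(acts.shape, intent.shape, slots.shape)  # (1,5) (1,10) (1,7,20)
```

Because the three heads share one encoder, a single backward pass can train all three functions jointly, which is the efficiency the abstract points to.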
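
Finally, the fusion step behind patents 9747498 and 9262675 (publications 20160196469 and 20140321718) might look like the following: two segmentations of the same frame, one grayscale-based and one color-based, are fused, and convexity analysis then yields candidate finger tips and valleys. The synthetic masks and the AND-style fusion rule are illustrative assumptions.

```python
# Synthetic demonstration of mask fusion; real inputs would come from
# grayscale- and skin-color-based segmenters. Requires OpenCV 4.x.
import numpy as np
import cv2

gray_mask = np.zeros((200, 200), np.uint8)
cv2.rectangle(gray_mask, (60, 80), (140, 190), 255, -1)   # e.g. palm only
color_mask = np.zeros((200, 200), np.uint8)
cv2.rectangle(color_mask, (60, 40), (140, 190), 255, -1)  # palm + fingers

# Fusion: keep only pixels both segmenters agree belong to the hand.
fused = cv2.bitwise_and(gray_mask, color_mask)

# Convexity defects of the hand contour are candidate finger valleys;
# hull points between defects are candidate finger tips.
contours, _ = cv2.findContours(fused, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
hand = max(contours, key=cv2.contourArea)
hull_idx = cv2.convexHull(hand, returnPoints=False)
defects = cv2.convexityDefects(hand, hull_idx)  # None when contour is convex
print("contour points:", len(hand),
      "valley candidates:", 0 if defects is None else len(defects))
```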