Patents by Inventor Harshawardhan Madhukar Wabgaonkar
Harshawardhan Madhukar Wabgaonkar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11397888
Abstract: A virtual agent with a dialogue management system, and a method of training the dialogue management system, are disclosed. The dialogue management system is trained using a deep reinforcement learning process. Training involves obtaining or simulating training dialogue data. During the training process, actions for the dialogue management system are selected using a Deep Q Network to process observations. The Deep Q Network is updated using a target function that includes a reward. The reward may be generated by considering one or more of the following metrics: task completion percentage, dialogue length, sentiment analysis of the user's response, emotional analysis of the user's state, explicit user feedback, and assessed quality of the action. The set of actions that the dialogue management system can take at any time may be limited by an action screener that predicts the subset of actions that the agent should consider for a given state of the system.
Type: Grant
Filed: June 14, 2018
Date of Patent: July 26, 2022
Assignee: Accenture Global Solutions Limited
Inventors: Harshawardhan Madhukar Wabgaonkar, Shubhashis Sengupta, Tulika Saha
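To make the abstract's moving parts concrete, here is a minimal, illustrative sketch of a Deep Q Network dialogue policy with a composite reward and an action screener. The network sizes, reward weights, and names such as composite_reward and select_action are assumptions for exposition, not the patented implementation.

```python
# Illustrative sketch only: a DQN-style dialogue policy with a composite
# reward and an action screener. Sizes, weights, and names are assumptions.
import random
import torch
import torch.nn as nn

class DQN(nn.Module):
    """Maps a dialogue-state observation to one Q-value per candidate action."""
    def __init__(self, obs_dim, n_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )
    def forward(self, obs):
        return self.net(obs)

def composite_reward(task_completion, dialogue_length, sentiment,
                     emotion, explicit_feedback, action_quality):
    """Blend the metrics named in the abstract into one scalar reward.
    The weights below are arbitrary placeholders."""
    return (1.0 * task_completion - 0.05 * dialogue_length
            + 0.3 * sentiment + 0.2 * emotion
            + 0.5 * explicit_feedback + 0.4 * action_quality)

def select_action(q_net, screener, obs, epsilon=0.1):
    """Epsilon-greedy choice restricted to the screener's candidate subset."""
    candidates = screener(obs)          # assumed to return a list of action indices
    if random.random() < epsilon:
        return random.choice(candidates)
    q_values = q_net(obs.unsqueeze(0)).squeeze(0)
    return max(candidates, key=lambda a: q_values[a].item())

def td_target(target_net, reward, next_obs, done, gamma=0.99):
    """Target value y = r + gamma * max_a' Q_target(s', a') used in the DQN update."""
    if done:
        return torch.tensor(reward)
    with torch.no_grad():
        return reward + gamma * target_net(next_obs.unsqueeze(0)).max()
```

In this reading, the screener only narrows the candidate set; the Q-values still decide among the remaining actions, and the composite reward shapes the target used to update the network.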
-
Patent number: 11093307
Abstract: A device may receive first information that identifies an input associated with a virtual agent application executing on a user device. The virtual agent application may provide an interface for a project involving a plurality of user devices. The device may determine, based on the first information that identifies the input, a first response based on second information. The device may determine, based on at least one of the first information that identifies the input or the first response and without user input, a second response. The device may provide, to the virtual agent application of the user device, fourth information that identifies at least one of the first response or the second response.
Type: Grant
Filed: April 13, 2017
Date of Patent: August 17, 2021
Assignee: Accenture Global Solutions Limited
Inventors: Roshni Ramesh Ramnani, Harshawardhan Madhukar Wabgaonkar, Shubhashis Sengupta, Sanjay Podder, Neville Dubash, Tirupal Rao Ravilla, Sumitraj Ganapat Patil, Rakesh Thimmaiah, Priyavanshi Pathania, Reeja Jose, Chaitra Hareesh
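Stripped of claim language, the flow is: receive an input, produce a direct answer from stored information, derive a proactive follow-up without further user input, and return one or both. The sketch below is only a toy illustration of that two-response flow; the dictionaries, names, and lookup logic are assumptions, not the patented system.

```python
# Illustrative flow only: pair a direct answer with a proactive follow-up.
from dataclasses import dataclass

@dataclass
class AgentReply:
    first_response: str    # answer derived from the user's input
    second_response: str   # follow-up derived without further user input

KNOWLEDGE_BASE = {         # stands in for the "second information" consulted
    "build status": "The nightly build passed all 212 tests.",
}

FOLLOW_UPS = {             # proactive suggestions keyed by topic
    "build status": "Would you like the failing-test history for this week?",
}

def handle_input(user_input: str) -> AgentReply:
    topic = user_input.strip().lower()
    first = KNOWLEDGE_BASE.get(topic, "I could not find an answer for that.")
    second = FOLLOW_UPS.get(topic, "Is there anything else about the project I can check?")
    return AgentReply(first, second)

print(handle_input("Build status"))
```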
-
Patent number: 10950223
Abstract: The system and method generally include identifying whether an utterance spoken by a user (e.g., a customer) is a complete or incomplete sentence. For example, the system may include a partial utterance detection module that determines whether an utterance spoken by a user is a partial utterance. The detection process may include providing a detection advice code that gives a recommendation for handling the utterance of interest. If it is determined that the utterance is an incomplete sentence, then the system and method can identify the type of utterance. For example, the system may include a partial utterance classification module that predicts the class of a partial utterance. The classification process may include providing a classification advice code that gives a recommendation for handling the utterance of interest.
Type: Grant
Filed: August 20, 2018
Date of Patent: March 16, 2021
Assignee: Accenture Global Solutions Limited
Inventors: Poulami Debnath, Shubhashis Sengupta, Harshawardhan Madhukar Wabgaonkar
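The two-module structure (detect a partial utterance, then classify it, each emitting an advice code) can be sketched in a few lines. The heuristics, class labels, and advice codes below are invented placeholders to show the shape of the interface, not the patented detection or classification logic.

```python
# Illustrative sketch only: partial-utterance detection and classification,
# each returning an advice code. All rules, labels, and codes are assumptions.
from typing import Tuple

def detect_partial_utterance(utterance: str) -> Tuple[bool, str]:
    """Return (is_partial, detection_advice_code)."""
    tokens = utterance.strip().rstrip(".?!").split()
    has_verb_like = any(t.lower() in {"is", "are", "want", "need", "have", "can"}
                        for t in tokens)
    if len(tokens) < 3 or not has_verb_like:
        return True, "ASK_CLARIFICATION"      # recommend prompting the user
    return False, "PROCESS_NORMALLY"

def classify_partial_utterance(utterance: str) -> Tuple[str, str]:
    """Return (partial_utterance_class, classification_advice_code)."""
    text = utterance.strip().lower()
    if text in {"yes", "no", "ok", "sure"}:
        return "CONFIRMATION_FRAGMENT", "MERGE_WITH_PREVIOUS_QUESTION"
    if text.endswith(("to", "for", "about", "with")):
        return "TRAILING_FRAGMENT", "PROMPT_FOR_COMPLETION"
    return "NOUN_PHRASE_FRAGMENT", "FILL_SLOT_FROM_CONTEXT"

is_partial, detect_advice = detect_partial_utterance("the flight to")
if is_partial:
    print(classify_partial_utterance("the flight to"), detect_advice)
```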
-
Patent number: 10679613
Abstract: A system and method for spoken language understanding using recurrent neural networks ("RNNs") are disclosed. The system and method jointly perform the following three functions when processing the word sequence of a user utterance: (1) classify a user's speech act into a dialogue act category, (2) identify a user's intent, and (3) extract semantic constituents from the word sequence. The system and method include using a bidirectional RNN to convert a word sequence into a hidden state representation. By providing two different orderings of the word sequence, the bidirectional nature of the RNN improves the accuracy of performing the above-mentioned three functions. The system and method perform the three functions jointly. The system and method use attention, which improves the efficiency and accuracy of the spoken language understanding system by focusing on certain parts of a word sequence. The three functions can be jointly trained, which increases efficiency.
Type: Grant
Filed: June 14, 2018
Date of Patent: June 9, 2020
Assignee: Accenture Global Solutions Limited
Inventors: Harshawardhan Madhukar Wabgaonkar, Shubhashis Sengupta, Tirupal Rao Ravilla, Sumitraj Ganapat Patil, Poulami Debnath, Sushravya G M, Roshni Ramesh Ramnani, Gurudatta Mishra, Moushumi Mahato, Mauajama Firdaus
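Read as an architecture, the abstract maps to a joint model with a bidirectional encoder, an attention pooling step, and three output heads. Here is a minimal PyTorch sketch of that shape; the layer sizes, attention form, and head names are assumptions, not the patented design.

```python
# Illustrative sketch only: joint SLU with a bidirectional LSTM encoder,
# simple attention pooling, and three heads (dialogue act, intent, slots).
import torch
import torch.nn as nn

class JointSLU(nn.Module):
    def __init__(self, vocab_size, n_acts, n_intents, n_slots,
                 emb_dim=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True,
                               bidirectional=True)    # reads both orderings
        self.attn = nn.Linear(2 * hidden, 1)           # scores each time step
        self.act_head = nn.Linear(2 * hidden, n_acts)
        self.intent_head = nn.Linear(2 * hidden, n_intents)
        self.slot_head = nn.Linear(2 * hidden, n_slots)

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))     # (B, T, 2H)
        weights = torch.softmax(self.attn(states), dim=1)    # (B, T, 1)
        pooled = (weights * states).sum(dim=1)                # attention summary
        return (self.act_head(pooled),       # dialogue act logits
                self.intent_head(pooled),    # intent logits
                self.slot_head(states))      # per-token slot logits

# Joint training would sum the three losses and take one optimizer step.
model = JointSLU(vocab_size=5000, n_acts=5, n_intents=12, n_slots=20)
tokens = torch.randint(0, 5000, (2, 7))      # batch of 2 utterances, 7 tokens each
act_logits, intent_logits, slot_logits = model(tokens)
```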
-
Publication number: 20200058295
Abstract: The system and method generally include identifying whether an utterance spoken by a user (e.g., a customer) is a complete or incomplete sentence. For example, the system may include a partial utterance detection module that determines whether an utterance spoken by a user is a partial utterance. The detection process may include providing a detection advice code that gives a recommendation for handling the utterance of interest. If it is determined that the utterance is an incomplete sentence, then the system and method can identify the type of utterance. For example, the system may include a partial utterance classification module that predicts the class of a partial utterance. The classification process may include providing a classification advice code that gives a recommendation for handling the utterance of interest.
Type: Application
Filed: August 20, 2018
Publication date: February 20, 2020
Inventors: Poulami Debnath, Shubhashis Sengupta, Harshawardhan Madhukar Wabgaonkar
-
Publication number: 20190385051
Abstract: A virtual agent with a dialogue management system, and a method of training the dialogue management system, are disclosed. The dialogue management system is trained using a deep reinforcement learning process. Training involves obtaining or simulating training dialogue data. During the training process, actions for the dialogue management system are selected using a Deep Q Network to process observations. The Deep Q Network is updated using a target function that includes a reward. The reward may be generated by considering one or more of the following metrics: task completion percentage, dialogue length, sentiment analysis of the user's response, emotional analysis of the user's state, explicit user feedback, and assessed quality of the action. The set of actions that the dialogue management system can take at any time may be limited by an action screener that predicts the subset of actions that the agent should consider for a given state of the system.
Type: Application
Filed: June 14, 2018
Publication date: December 19, 2019
Inventors: Harshawardhan Madhukar Wabgaonkar, Shubhashis Sengupta, Tulika Saha
-
Publication number: 20190385595
Abstract: A system and method for spoken language understanding using recurrent neural networks ("RNNs") are disclosed. The system and method jointly perform the following three functions when processing the word sequence of a user utterance: (1) classify a user's speech act into a dialogue act category, (2) identify a user's intent, and (3) extract semantic constituents from the word sequence. The system and method include using a bidirectional RNN to convert a word sequence into a hidden state representation. By providing two different orderings of the word sequence, the bidirectional nature of the RNN improves the accuracy of performing the above-mentioned three functions. The system and method perform the three functions jointly. The system and method use attention, which improves the efficiency and accuracy of the spoken language understanding system by focusing on certain parts of a word sequence. The three functions can be jointly trained, which increases efficiency.
Type: Application
Filed: June 14, 2018
Publication date: December 19, 2019
Inventors: Harshawardhan Madhukar Wabgaonkar, Shubhashis Sengupta, Tirupal Rao Ravilla, Sumitraj Ganapat Patil, Poulami Debnath, Sushravya G M, Roshni Ramesh Ramnani, Gurudatta Mishra, Moushumi Mahato, Mauajama Firdaus
-
Publication number: 20180165379
Abstract: A device may receive first information that identifies an input associated with a virtual agent application executing on a user device. The virtual agent application may provide an interface for a project involving a plurality of user devices. The device may determine, based on the first information that identifies the input, a first response based on second information. The device may determine, based on at least one of the first information that identifies the input or the first response and without user input, a second response. The device may provide, to the virtual agent application of the user device, fourth information that identifies at least one of the first response or the second response.
Type: Application
Filed: April 13, 2017
Publication date: June 14, 2018
Inventors: Roshni Ramesh Ramnani, Harshawardhan Madhukar Wabgaonkar, Shubhashis Sengupta, Sanjay Podder, Neville Dubash, Tirupal Rao Ravilla, Sumitraj Ganapat Patil, Rakesh Thimmaiah, Priyavanshi Pathania, Reeja Jose, Chaitra Hareesh
-
Patent number: 9747498
Abstract: A fused image of a person's hand is accessed, the fused image having been generated using a segmented graylevel image and a segmented color image. The hand in the fused image is identified. One or more finger tips and one or more finger valleys in the fused image are identified. One or more fingers of the hand are segmented, based on the identified finger tips and finger valleys. The one or more fingers of the hand are labeled. One or more features for each finger of the hand are determined.
Type: Grant
Filed: February 12, 2016
Date of Patent: August 29, 2017
Assignee: Accenture Global Services Limited
Inventors: Harshawardhan Madhukar Wabgaonkar, Sanjoy Paul
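The finger tip and valley step can be illustrated with a generic computer-vision recipe: take a binary hand mask, extract its largest contour, and treat deep convexity defects as valleys between fingers with their endpoints as tip candidates. This is a rough OpenCV sketch under those assumptions; it does not show the patent's fused graylevel/color segmentation or the labeling and feature steps.

```python
# Rough sketch only: candidate finger tips and valleys from a binary hand mask
# via contour convexity defects. Generic recipe for illustration, not the
# patented fusion-based pipeline. Thresholds are arbitrary placeholders.
import cv2
import numpy as np

def finger_tips_and_valleys(hand_mask: np.ndarray):
    """hand_mask: uint8 binary image in which hand pixels are non-zero."""
    contours, _ = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)       # largest blob = hand
    hull_idx = cv2.convexHull(contour, returnPoints=False)
    defects = cv2.convexityDefects(contour, hull_idx)

    tips, valleys = [], []
    if defects is not None:
        for start, end, far, depth in defects[:, 0]:
            if depth / 256.0 > 20:          # deep defects sit between fingers
                valleys.append(tuple(contour[far][0]))
                tips.append(tuple(contour[start][0]))
                tips.append(tuple(contour[end][0]))
    return tips, valleys

mask = np.zeros((200, 200), np.uint8)
cv2.rectangle(mask, (60, 80), (140, 199), 255, -1)     # toy "hand" region
print(finger_tips_and_valleys(mask))
```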
-
Publication number: 20160196469
Abstract: A fused image of a person's hand is accessed, the fused image having been generated using a segmented graylevel image and a segmented color image. The hand in the fused image is identified. One or more finger tips and one or more finger valleys in the fused image are identified. One or more fingers of the hand are segmented, based on the identified finger tips and finger valleys. The one or more fingers of the hand are labeled. One or more features for each finger of the hand are determined.
Type: Application
Filed: February 12, 2016
Publication date: July 7, 2016
Inventors: Harshawardhan Madhukar Wabgaonkar, Sanjoy Paul
-
Patent number: 9262675
Abstract: A fused image of a person's hand is accessed, the fused image having been generated using a segmented graylevel image and a segmented color image. The hand in the fused image is identified. One or more finger tips and one or more finger valleys in the fused image are identified. One or more fingers of the hand are segmented, based on the identified finger tips and finger valleys. The one or more fingers of the hand are labeled. One or more features for each finger of the hand are determined.
Type: Grant
Filed: April 24, 2014
Date of Patent: February 16, 2016
Assignee: Accenture Global Services Limited
Inventors: Harshawardhan Madhukar Wabgaonkar, Sanjoy Paul
-
Publication number: 20140321718
Abstract: A fused image of a person's hand is accessed, the fused image having been generated using a segmented graylevel image and a segmented color image. The hand in the fused image is identified. One or more finger tips and one or more finger valleys in the fused image are identified. One or more fingers of the hand are segmented, based on the identified finger tips and finger valleys. The one or more fingers of the hand are labeled. One or more features for each finger of the hand are determined.
Type: Application
Filed: April 24, 2014
Publication date: October 30, 2014
Inventors: Harshawardhan Madhukar Wabgaonkar, Sanjoy Paul