Patents by Inventor Mayur PATIDAR
Mayur PATIDAR has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240127309
Abstract: The e-commerce industry is currently expanding rapidly worldwide. Generating product copy for grocery items is particularly challenging because, unlike fashion products, food items do not have features in common. Data associated with one or more grocery products is received as input. The data is processed to obtain one or more sorted similar grocery products. One or more relevant attributes and allergen information associated with the one or more sorted similar grocery products are extracted. A vocabulary model is created based on the one or more relevant attributes and the allergen information associated with the one or more sorted similar grocery products. The vocabulary model is validated based on one or more assigned weights on training data. One or more descriptive copies associated with the grocery products are generated by mapping the validated vocabulary model onto the training data.
Type: Application
Filed: January 30, 2023
Publication date: April 18, 2024
Applicant: Tata Consultancy Services Limited
Inventors: Bagya Lakshmi VASUDEVAN, Sudesna BARUAH, Mayur PATIDAR, Meghna Kishor MAHAJAN
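As a rough illustration of the pipeline this abstract describes (extract attributes and allergen information from similar products, build a weighted vocabulary, map it onto a descriptive copy), here is a minimal sketch. All names (`extract_attributes`, `build_vocabulary`, `generate_copy`) and the frequency-based weighting/threshold are assumptions; the patent does not disclose an implementation.

```python
# Hypothetical sketch only: the real vocabulary model and weighting
# scheme are not disclosed in the abstract.
from collections import Counter

def extract_attributes(products):
    """Collect attribute terms and allergen info from sorted similar products."""
    attrs, allergens = Counter(), set()
    for p in products:
        attrs.update(p.get("attributes", []))
        allergens.update(p.get("allergens", []))
    return attrs, allergens

def build_vocabulary(attrs, allergens, weights, threshold=2):
    """Keep attributes whose assigned weight (default: frequency) clears a threshold."""
    vocab = {a for a, c in attrs.items() if weights.get(a, c) >= threshold}
    return vocab, allergens

def generate_copy(name, vocab, allergens):
    """Map the validated vocabulary onto a simple descriptive-copy template."""
    desc = f"{name}: " + ", ".join(sorted(vocab)) + "."
    if allergens:
        desc += " Contains: " + ", ".join(sorted(allergens)) + "."
    return desc

products = [
    {"attributes": ["organic", "whole grain"], "allergens": ["gluten"]},
    {"attributes": ["organic", "low sugar"], "allergens": []},
]
attrs, allergens = extract_attributes(products)
vocab, allergens = build_vocabulary(attrs, allergens, weights={})
print(generate_copy("Oat Cereal", vocab, allergens))
```

Note how the allergen set is carried through unconditionally: unlike ordinary attributes, allergen information must survive filtering for a grocery copy to be safe.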
-
Publication number: 20240013006
Abstract: Existing semi-supervised and unsupervised approaches for intent discovery require an estimate of the number of new intents present in the user logs. The present disclosure receives labeled utterances from known intents and updates parameters of a pre-trained language model (PLM). Representation learning and clustering are performed iteratively, using labeled and unlabeled utterances from known intents and unlabeled utterances from unknown intents, to fine-tune the PLM, and a plurality of clusters is generated. A cluster-merging algorithm is executed iteratively on the generated plurality of clusters. A query cluster is obtained by randomly selecting one cluster from the plurality of clusters and obtaining its plurality of nearest neighbors based on cosine similarity. A response for merging the query cluster and the corresponding plurality of nearest neighbors is obtained, and a new cluster is created.
Type: Application
Filed: June 29, 2023
Publication date: January 11, 2024
Applicant: Tata Consultancy Services Limited
Inventors: Rajat KUMAR, Gautam SHROFF, Mayur PATIDAR, Lovekesh VIG, Vaibhav VARSHNEY
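The cluster-merging step above can be sketched as follows. This is a hedged illustration, not the patented algorithm: the function names, the centroid representation, and the similarity threshold are all assumptions, and the "response for merging" is reduced to a simple threshold test on cosine similarity.

```python
# Illustrative sketch of one cluster-merging iteration over utterance
# embeddings; threshold and names are assumptions.
import math
import random

def centroid(cluster):
    n = len(cluster)
    return [sum(v[i] for v in cluster) / n for i in range(len(cluster[0]))]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def merge_step(clusters, threshold=0.9, qi=None, rng=random):
    """One iteration: merge the query cluster with sufficiently close
    neighbour clusters, judged by cosine similarity of centroids."""
    if qi is None:  # the abstract selects the query cluster at random
        qi = rng.randrange(len(clusters))
    q = centroid(clusters[qi])
    merged, remaining = list(clusters[qi]), []
    for j, c in enumerate(clusters):
        if j == qi:
            continue
        if cosine(q, centroid(c)) >= threshold:
            merged.extend(c)      # positive merge response: absorb the neighbour
        else:
            remaining.append(c)   # keep as a separate cluster
    return [merged] + remaining

clusters = [[[1.0, 0.0]], [[0.9, 0.1]], [[0.0, 1.0]]]
print(merge_step(clusters, qi=0))
```

Run iteratively until no merge fires, this kind of step removes the need to guess the number of new intents up front, which is the problem the abstract calls out in existing approaches.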
-
PROMPT AUGMENTED GENERATIVE REPLAY VIA SUPERVISED CONTRASTIVE TRAINING FOR LIFELONG INTENT DETECTION
Publication number: 20240013094
Abstract: Embodiments disclosed herein model lifelong intent detection as class-incremental learning, where a new set of intents/classes is added at each incremental step. To address the issue of catastrophic forgetting during lifelong intent detection (LID), an incremental learner is provided with Prompt Augmented Generative Replay: unlike existing approaches that store real samples in replay memory, only concept words obtained from old intents are stored, which reduces memory consumption and speeds up incremental training while still preventing forgetting of the old intents. Joint training of an incremental learner is carried out for LID and pseudo-labeled utterance generation, with the objective of classifying a user utterance into one of multiple pre-defined intents by minimizing a total loss function comprising an LID loss, a Labeled Utterance Generation loss, a Supervised Contrastive Training loss, and a Knowledge Distillation loss.
Type: Application
Filed: June 29, 2023
Publication date: January 11, 2024
Applicant: Tata Consultancy Services Limited
Inventors: Vaibhav VARSHNEY, Mayur PATIDAR, Rajat KUMAR, Gautam SHROFF, Lovekesh VIG
-
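A hedged sketch of the four-term total loss named in the lifelong intent detection abstract above. The equal weighting, the distillation temperature, and the use of plain cross-entropy for the LID and utterance-generation terms are assumptions; the supervised contrastive term is passed in as a precomputed scalar rather than implemented.

```python
# Assumed loss composition; the abstract names the four terms but not
# their exact forms or weights.
import math

def softmax(logits, T=1.0):
    m = max(l / T for l in logits)
    exps = [math.exp(l / T - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def ce_loss(logits, label):
    """Cross-entropy, standing in for the LID and utterance-generation terms."""
    return -math.log(softmax(logits)[label])

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Knowledge distillation as temperature-scaled KL(teacher || student),
    penalizing drift from the pre-increment (teacher) model."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def total_loss(l_lid, l_gen, l_scl, l_kd, w=(1.0, 1.0, 1.0, 1.0)):
    """Weighted sum of the four objectives listed in the abstract."""
    return w[0] * l_lid + w[1] * l_gen + w[2] * l_scl + w[3] * l_kd
```

The distillation term is what directly counters catastrophic forgetting: it goes to zero only when the incremental learner's predictive distribution matches the old model's on replayed (here, concept-word-generated) utterances.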
Patent number: 11551142
Abstract: Users have to assign labels to a ticket to route it to the right domain expert for resolving the issue(s). In practice, the set of labels is large and organized in the form of a tree. Lack of clarity in the problem description has resulted in inconsistent and incorrect labeling of data, making it hard to learn from or interpret. Embodiments of the present disclosure provide systems and methods that identify relevant queries to obtain user responses, for identification of the right category and logging of the ticket there. This is achieved by implementing an attention-based sequence-to-sequence (seq2seq) hierarchical classification model to assign the hierarchical categories to tickets, followed by a slot-filling model that identifies/decides the right set of queries if the top-k model predictions are not consistent. Further, training data for the slot-filling model is automatically generated based on attention weights in the hierarchical classification model.
Type: Grant
Filed: October 15, 2019
Date of Patent: January 10, 2023
Assignee: TATA CONSULTANCY SERVICES LIMITED
Inventors: Puneet Agarwal, Mayur Patidar, Lovekesh Vig, Gautam Shroff
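The consistency check on top-k predictions described above can be illustrated with a small sketch. The real system uses an attention-based seq2seq model; here the helper names (`common_prefix`, `route_or_query`) are hypothetical, and "inconsistent" is modeled as the top-k hierarchical label paths diverging before the leaf level.

```python
# Simplified consistency check over top-k hierarchical label paths;
# the seq2seq and slot-filling models themselves are not shown.

def common_prefix(paths):
    """Longest label-path prefix shared by all top-k predictions."""
    prefix = []
    for level in zip(*paths):
        if len(set(level)) != 1:
            break
        prefix.append(level[0])
    return tuple(prefix)

def route_or_query(topk_paths):
    """Route the ticket if the top-k predictions agree on the full path;
    otherwise ask a clarifying (slot-filling) question at the first
    tree level where they diverge."""
    prefix = common_prefix(topk_paths)
    if len(prefix) == len(topk_paths[0]):
        return ("route", prefix)
    return ("ask", len(prefix))

print(route_or_query([("IT", "Email", "Outlook"), ("IT", "Email", "Gmail")]))
```

In this example the two candidates agree on ("IT", "Email") but split at the leaf, so the system would ask the user one question at level 2 instead of guessing between Outlook and Gmail.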
-
Patent number: 11373090
Abstract: In automated assistant systems, a deep-learning model in the form of a long short-term memory (LSTM) classifier is used for mapping questions to classes, with each class having a manually curated answer. A team of experts manually creates the training data used to train this classifier. Relying on human curation often results in linguistic training biases creeping into the training data, since every individual has a specific style of writing natural language and uses some words only in specific contexts. Deep models end up learning these biases instead of the core concept words of the target classes. In order to correct these biases, meaningful sentences are automatically generated using a generative model and then used for training a classification model. For example, a variational autoencoder (VAE) is used as the generative model for generating novel sentences, and a language model (LM) is utilized for selecting sentences based on likelihood.
Type: Grant
Filed: September 18, 2018
Date of Patent: June 28, 2022
Assignee: Tata Consultancy Services Limited
Inventors: Puneet Agarwal, Mayur Patidar, Lovekesh Vig, Gautam Shroff
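The LM-based selection step can be sketched as below: score generated candidate sentences with a language model and keep only the most likely ones. A unigram model with add-one smoothing stands in for the real LM (the patent pairs a VAE generator with a proper LM), and all names here are illustrative.

```python
# Sketch of likelihood-based filtering of generated training sentences;
# a unigram LM is a stand-in, not the patented model.
import math
from collections import Counter

def train_unigram_lm(corpus):
    counts = Counter(w for s in corpus for w in s.split())
    total, vocab = sum(counts.values()), len(counts)
    def avg_logprob(sentence):
        # add-one smoothing so unseen words don't zero out the score;
        # averaging per word avoids penalizing longer sentences
        words = sentence.split()
        return sum(math.log((counts[w] + 1) / (total + vocab))
                   for w in words) / len(words)
    return avg_logprob

def select_by_likelihood(candidates, score, k=2):
    """Keep the k generated sentences the LM finds most plausible."""
    return sorted(candidates, key=score, reverse=True)[:k]

corpus = ["how do i reset my password", "i forgot my password"]
lm = train_unigram_lm(corpus)
generated = ["reset my password", "password zz qq xx", "i forgot password"]
print(select_by_likelihood(generated, lm, k=2))
```

The filter discards fluent-looking junk from the generator ("password zz qq xx") while keeping paraphrases that broaden the training data beyond any one curator's writing style, which is the de-biasing effect the abstract describes.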
-
Patent number: 10891438
Abstract: Systems and methods for Deep Learning based multi-purpose conversational agents for processing natural language queries. Traditional systems and methods provide conversational systems for processing natural language queries but do not employ Deep Learning techniques, and thus are unable to process a large number of intents. Embodiments of the present disclosure provide Deep Learning based multi-purpose conversational agents for processing natural language queries by defining and logically integrating a plurality of components comprising multi-purpose conversational agents, identifying an appropriate agent to process one or more natural language queries via a High Level Intent Identification technique, predicting a probable user intent, classifying the query, and generating a set of responses by querying or updating one or more knowledge graphs.
Type: Grant
Filed: April 15, 2019
Date of Patent: January 12, 2021
Assignee: Tata Consultancy Services Limited
Inventors: Mahesh Prasad Singh, Puneet Agarwal, Ashish Chaudhary, Gautam Shroff, Prerna Khurana, Mayur Patidar, Vivek Bisht, Rachit Bansal, Prateek Sachan, Rohit Kumar
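To make the routing flow concrete, here is a toy High Level Intent Identification router. The deployed system uses deep models, so this keyword-overlap scorer is only a stand-in for the flow: score each agent against the query, route to the best-scoring one, and fall back when nothing matches. Agent names and keyword sets are invented for illustration.

```python
# Toy HLII router: keyword overlap stands in for the deep intent model.
AGENT_KEYWORDS = {
    "it_support": {"password", "laptop", "vpn", "email"},
    "hr": {"leave", "payroll", "holiday"},
}

def identify_agent(query):
    """Pick the conversational agent whose keyword set best overlaps the query."""
    words = set(query.lower().split())
    scores = {a: len(words & kw) for a, kw in AGENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback"

print(identify_agent("I forgot my VPN password"))
```

Once an agent is selected, that agent would carry out the later stages the abstract lists: intent prediction, query classification, and response generation against its knowledge graph.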
-
Publication number: 20200125992
Abstract: Users have to assign labels to a ticket to route it to the right domain expert for resolving the issue(s). In practice, the set of labels is large and organized in the form of a tree. Lack of clarity in the problem description has resulted in inconsistent and incorrect labeling of data, making it hard to learn from or interpret. Embodiments of the present disclosure provide systems and methods that identify relevant queries to obtain user responses, for identification of the right category and logging of the ticket there. This is achieved by implementing an attention-based sequence-to-sequence (seq2seq) hierarchical classification model to assign the hierarchical categories to tickets, followed by a slot-filling model that identifies/decides the right set of queries if the top-k model predictions are not consistent. Further, training data for the slot-filling model is automatically generated based on attention weights in the hierarchical classification model.
Type: Application
Filed: October 15, 2019
Publication date: April 23, 2020
Applicant: Tata Consultancy Services Limited
Inventors: Puneet AGARWAL, Mayur PATIDAR, Lovekesh VIG, Gautam SHROFF
-
Publication number: 20190317994
Abstract: Systems and methods for Deep Learning based multi-purpose conversational agents for processing natural language queries. Traditional systems and methods provide conversational systems for processing natural language queries but do not employ Deep Learning techniques, and thus are unable to process a large number of intents. Embodiments of the present disclosure provide Deep Learning based multi-purpose conversational agents for processing natural language queries by defining and logically integrating a plurality of components comprising multi-purpose conversational agents, identifying an appropriate agent to process one or more natural language queries via a High Level Intent Identification technique, predicting a probable user intent, classifying the query, and generating a set of responses by querying or updating one or more knowledge graphs.
Type: Application
Filed: April 15, 2019
Publication date: October 17, 2019
Applicant: Tata Consultancy Services Limited
Inventors: Mahesh Prasad SINGH, Puneet AGARWAL, Ashish CHAUDHARY, Gautam SHROFF, Prerna KHURANA, Mayur PATIDAR, Vivek BISHT, Rachit BANSAL, Prateek SACHAN, Rohit KUMAR
-
Publication number: 20190087728
Abstract: In automated assistant systems, a deep-learning model in the form of a long short-term memory (LSTM) classifier is used for mapping questions to classes, with each class having a manually curated answer. A team of experts manually creates the training data used to train this classifier. Relying on human curation often results in linguistic training biases creeping into the training data, since every individual has a specific style of writing natural language and uses some words only in specific contexts. Deep models end up learning these biases instead of the core concept words of the target classes. In order to correct these biases, meaningful sentences are automatically generated using a generative model and then used for training a classification model. For example, a variational autoencoder (VAE) is used as the generative model for generating novel sentences, and a language model (LM) is utilized for selecting sentences based on likelihood.
Type: Application
Filed: September 18, 2018
Publication date: March 21, 2019
Applicant: Tata Consultancy Services Limited
Inventors: Puneet AGARWAL, Mayur PATIDAR, Lovekesh VIG, Gautam SHROFF