Patents by Inventor Sunil Kumar KOPPARAPU
Sunil Kumar KOPPARAPU has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240071373
Abstract: State-of-the-art Acoustic Models (AM), which are trained using data from one environment, may fail to adapt to another environment, and as a result their application is restricted. The disclosure herein generally relates to speech signal processing, and, more particularly, to a method and system for Automatic Speech Recognition (ASR) using Multi-task Learned (MTL) embeddings. In this approach, MTL embeddings are extracted from an MTL neural network that has been trained using feature vectors from a plurality of speech files. The MTL embeddings are then used for generating an acoustic model, which may then be used for Automatic Speech Recognition, along with the feature vectors and the MTL embeddings.
Type: Application
Filed: August 11, 2023
Publication date: February 29, 2024
Applicant: Tata Consultancy Services Limited
Inventors: ASHISH PANDA, SUNIL KUMAR KOPPARAPU, ADITYA RAIKAR, MEETKUMAR HEMAKSHU SONI
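The abstract describes augmenting acoustic-model input features with embeddings taken from the shared layer of a multi-task network. A minimal sketch of that idea, with random weights standing in for a trained network and all dimensions chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 40-dim filterbank features, 32-dim shared layer,
# two task heads (phone and environment classification).
FEAT_DIM, EMB_DIM, N_PHONES, N_ENVS = 40, 32, 10, 4

# Random weights stand in for a multi-task network trained on speech files.
W_shared = rng.standard_normal((FEAT_DIM, EMB_DIM)) * 0.1
W_phone = rng.standard_normal((EMB_DIM, N_PHONES)) * 0.1
W_env = rng.standard_normal((EMB_DIM, N_ENVS)) * 0.1

def mtl_embedding(feats):
    """Shared-layer activation used as the MTL embedding."""
    return np.tanh(feats @ W_shared)

def acoustic_model_input(feats):
    """Feature vector augmented with its MTL embedding, as fed to the AM."""
    return np.concatenate([feats, mtl_embedding(feats)], axis=-1)

frame = rng.standard_normal(FEAT_DIM)
augmented = acoustic_model_input(frame)
print(augmented.shape)  # (72,) = 40 feature dims + 32 embedding dims
```

In a real system the shared weights would come from training the phone and environment heads jointly, so the embedding carries environment information the plain features lack.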
-
Publication number: 20230300128
Abstract: This disclosure relates to systems and methods for performing single-input-based multifactor authentication. Multifactor authentication refers to an authentication system with enhanced security which utilizes more than one form of authentication to validate the identity of a user. Conventionally, multifactor authentication is a serial process which involves inputting authentication information multiple times. As a result, conventional approaches introduce delay in the execution of the multifactor authentication process. The method of the present disclosure addresses unresolved problems of multifactor authentication by enabling two or more factors to be assessed simultaneously, making the authentication process faster without sacrificing its robustness. Embodiments of the present disclosure analyze the spoken response of the user to a dynamically generated question for multifactor authentication.
Type: Application
Filed: November 29, 2022
Publication date: September 21, 2023
Applicant: Tata Consultancy Services Limited
Inventors: SUNIL KUMAR KOPPARAPU, BIMAL PRAVIN SHAH
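A toy illustration of assessing two factors from a single spoken response: the transcript supplies the knowledge factor (the answer to the dynamic question) while a voice embedding supplies the biometric factor. The enrolled answer, embedding values, and threshold are all invented for this example:

```python
# Toy enrolled data for one user; both factors are checked from one utterance.
ENROLLED_ANSWER = "blue"                  # knowledge factor
ENROLLED_VOICEPRINT = [0.2, 0.7, 0.1]     # biometric factor (toy embedding)

def cosine(a, b):
    """Cosine similarity between two voice embeddings."""
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return num / den

def authenticate(transcript, voiceprint, threshold=0.9):
    """Single spoken response, two factors assessed simultaneously:
    the answer content and the speaker's voice characteristics."""
    knowledge_ok = ENROLLED_ANSWER in transcript.lower().split()
    biometric_ok = cosine(voiceprint, ENROLLED_VOICEPRINT) >= threshold
    return knowledge_ok and biometric_ok

print(authenticate("my favourite colour is blue", [0.21, 0.69, 0.11]))  # True
print(authenticate("my favourite colour is red", [0.21, 0.69, 0.11]))   # False
```

The speedup claimed in the abstract comes from this structure: one utterance is processed once, and both verdicts are derived from it instead of prompting the user twice.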
-
Publication number: 20230109692
Abstract: This disclosure relates generally to a method and system for providing assistance to interviewers. Technical interviewing is immensely important for enterprises but requires significant domain expertise and investment of time. The present disclosure assists interviewers with a framework via an interview assistant (IA) bot. The method initiates an interview session for a job description by selecting a set of qualified candidates' resumes to be interviewed. Further, the IA bot recommends to each interviewer a set of question and reference answer pairs prior to initiating the interview. At each interview step, the IA bot records the interview history and recommends a revised set of questions to the interviewer. Further, an assessment score is determined for the candidate using the reference answer extracted from a resource corpus. Additionally, statistics about the interview process are generated, such as the number and nature of questions asked and their variation across interviews, to identify outliers for corrective action.
Type: Application
Filed: August 26, 2022
Publication date: April 13, 2023
Applicant: Tata Consultancy Services Limited
Inventors: ANUMITA DASGUPTA, INDRAJIT BHATTACHARYA, GIRISH KESHAV PALSHIKAR, PRATIK SAINI, SANGAMESHWAR SURYAKANT PATIL, SOHAM DATTA, PRABIR MALLICK, SAMIRAN PAL, SUNIL KUMAR KOPPARAPU, AISHWARYA CHHABRA, AVINASH KUMAR SINGH, KAUSTUV MUKHERJI, MEGHNA ABHISHEK PANDHARIPANDE, ANIKET PRAMANICK, ARPITA KUNDU, SUBHASISH GHOSH, CHANDRASEKHAR ANANTARAM, ANAND SIVASUBRAMANIAM, GAUTAM SHROFF
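The assessment-score step can be illustrated with a deliberately simple stand-in: token overlap between the candidate's answer and the reference answer extracted from the resource corpus (a real system would use semantic similarity rather than bag-of-words overlap):

```python
def assessment_score(candidate_answer, reference_answer):
    """Toy assessment: fraction of reference-answer tokens the candidate
    covered. A stand-in for the patent's corpus-based scoring."""
    cand = set(candidate_answer.lower().split())
    ref = set(reference_answer.lower().split())
    return len(cand & ref) / len(ref)

ref = "a deadlock occurs when two processes wait on each other"
score = assessment_score("deadlock is when two processes wait on each other", ref)
print(score)  # 0.8: 8 of the 10 reference tokens are covered
```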
-
Patent number: 11593641
Abstract: Statistical pattern recognition relies on a substantial amount of annotated samples for better learning, and learning is insufficient in low resource scenarios. Creating annotated databases is itself a challenging task that requires significant effort and cost, which may not always be feasible. Such challenges are addressed by the present disclosure by generating synthetic samples through automatic transformation using Deep Autoencoders (DAE). An autoencoder is trained using all possible combinations of pairs between a plurality of classes that can be formed from the handful of samples in a low resource database, and then the DAE is used to generate new samples when samples of one class are given as input to the autoencoder. The system of the present disclosure can be configured to generate as many training samples as required. Also, the deep autoencoder can be dynamically configured to meet requirements.
Type: Grant
Filed: September 19, 2019
Date of Patent: February 28, 2023
Assignee: Tata Consultancy Services Limited
Inventors: Rupayan Chakraborty, Sunil Kumar Kopparapu
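A compact sketch of the cross-class pairing idea: a small autoencoder is trained on every (class-A sample, class-B sample) pair, so that afterwards feeding class-A samples produces synthetic class-B-like samples. Feature sizes, learning rate, and iteration count are illustrative, not from the patent:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# Toy low-resource database: 3 samples each for two classes, 8 features.
class_a = rng.standard_normal((3, 8))
class_b = rng.standard_normal((3, 8))

# All cross-class (input, target) pairs, as the abstract describes:
# the autoencoder learns to map class-A samples toward class-B samples.
pairs = [(x, y) for x, y in product(class_a, class_b)]
X = np.stack([p[0] for p in pairs])
Y = np.stack([p[1] for p in pairs])

# One-hidden-layer autoencoder trained by plain gradient descent.
W1 = rng.standard_normal((8, 6)) * 0.1
W2 = rng.standard_normal((6, 8)) * 0.1
for _ in range(200):
    H = np.tanh(X @ W1)
    out = H @ W2
    err = out - Y                        # reconstruction error vs. target class
    W2 -= 0.01 * H.T @ err / len(X)
    dH = (err @ W2.T) * (1 - H ** 2)
    W1 -= 0.01 * X.T @ dH / len(X)

# Feeding class-A samples now yields synthetic class-B-like samples.
synthetic = np.tanh(class_a @ W1) @ W2
print(len(pairs), synthetic.shape)  # 9 (3, 8): one synthetic sample per input
```

Generating more samples, as the abstract mentions, amounts to feeding more (or perturbed) inputs through the trained mapping.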
-
Patent number: 11443179
Abstract: The disclosure presents herein a method to train a classifier in machine learning using more than one simultaneous sample, to address the class imbalance problem in any discriminative classifier. A modified representation of the training dataset is obtained by simultaneously considering feature-based representations of more than one sample. A modification to the architecture of the classifier is needed to handle this modified data representation of the multiple samples. The modification gives the input layer the number of units needed to accept the plurality of simultaneous samples in the training dataset. The output layer consists of units equal to twice the considered number of classes in the classification task; therefore, the output layer has four units for a two-class classification task. The disclosure herein can be implemented to resolve the problem of learning from low-resourced data.
Type: Grant
Filed: May 18, 2018
Date of Patent: September 13, 2022
Assignee: TATA CONSULTANCY SERVICES LIMITED
Inventors: Sri Harsha Dumpala, Rupayan Chakraborty, Sunil Kumar Kopparapu
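The data construction the abstract describes can be sketched directly: every sample is paired with every other, inputs become concatenated feature vectors, and targets become the two one-hot labels side by side (four output units for two classes). The toy data and sizes are invented for the example:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)

# Toy imbalanced two-class data: 4 majority samples, 2 minority samples,
# 5 features each.
major = rng.standard_normal((4, 5))
minor = rng.standard_normal((2, 5)) + 2.0
samples = [(x, 0) for x in major] + [(x, 1) for x in minor]

def one_hot(label, n=2):
    v = np.zeros(n)
    v[label] = 1.0
    return v

# Pair every sample with every sample: the input layer sees both samples
# at once, and the output layer predicts both labels at once.
X, Y = [], []
for (x1, y1), (x2, y2) in product(samples, samples):
    X.append(np.concatenate([x1, x2]))                     # 2 * 5 input units
    Y.append(np.concatenate([one_hot(y1), one_hot(y2)]))   # 2 * 2 output units
X, Y = np.stack(X), np.stack(Y)

print(X.shape, Y.shape)  # (36, 10) (36, 4): 6*6 pairs from only 6 samples
```

Note how the pairing itself rebalances the data: minority samples appear in many more training rows than they have raw samples.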
-
Patent number: 11340863
Abstract: Audio based transactions are becoming more popular and are envisaged to become common in years to come. With the rise in data protection regulations, muting portions of audio files is necessary to hide sensitive information from an eavesdropper or from accidental hearing by an entity who gains unauthorized access to these audio files. However, deleted transaction information in a muted audio file makes auditing the transaction challenging, if not impossible. Embodiments of the present disclosure provide systems and methods for muting audio information in multimedia files, and for retrieving the masked information, which allow reconstruction of the original audio conversation, or restoration of Private to an Entity (P2aE) information without original audio reconstruction, when an audit is being exercised.
Type: Grant
Filed: February 26, 2020
Date of Patent: May 24, 2022
Assignee: Tata Consultancy Services Limited
Inventors: Sunil Kumar Kopparapu, Ashish Panda
-
Patent number: 11316977
Abstract: A system and method for monitoring the behavior of voice agents in a simulated environment of a voice-based call center to route a call. It includes a set of models and wearable devices to estimate and analyze the cognitive load and emotional state of a voice agent, which are obtained using the wearable devices in real time. It collects physiological signals from the voice agents and analyzes them along with the skill-set profiles of the voice agents to identify the best-suited voice agent, based on an agent-customer matching score obtained using skill-set profile analysis, the cognitive load, and a predicted emotive state of the voice agent. It may assist the voice agent in a call, using brain-computer interfacing, if the cognitive load of the voice agent rises beyond a predefined threshold.
Type: Grant
Filed: July 3, 2018
Date of Patent: April 26, 2022
Assignee: Tata Consultancy Services Limited
Inventors: Sri Harsha Dumpala, Sunil Kumar Kopparapu
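The agent-customer matching score can be illustrated as a weighted combination of skill fit, (inverse) cognitive load, and predicted emotional state. The weights and agent values below are illustrative, not taken from the patent:

```python
def matching_score(skill_match, cognitive_load, emotion_positive):
    """Toy agent-customer matching score: favour skill fit, penalize
    cognitive load, reward a positive predicted emotive state.
    All inputs are in [0, 1]; the weights are illustrative."""
    return (0.6 * skill_match
            + 0.25 * (1 - cognitive_load)
            + 0.15 * emotion_positive)

# (skill_match, cognitive_load, emotion_positive) per agent, from
# skill-set profiles and real-time wearable-device signals.
agents = {"A": (0.9, 0.8, 0.6), "B": (0.7, 0.2, 0.9)}

best = max(agents, key=lambda a: matching_score(*agents[a]))
print(best)  # B: a slightly weaker skill match, but far lower cognitive load
```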
-
Patent number: 10930286
Abstract: This disclosure relates generally to a method and system for muting classified information in an audio signal using a fuzzy approach. The method comprises converting the received audio signal into text using a speech recognition engine to identify a plurality of classified words from the text, to obtain a first set of parameters. Further, a plurality of subwords associated with each classified word is identified to obtain a second set of parameters associated with each subword of the corresponding classified word. A relative score is computed for each subword associated with the classified word based on a plurality of similar pairs for the corresponding classified word. A fuzzy muting function is generated using the first set of parameters, the second set of parameters, and the relative score associated with each subword. The plurality of subwords associated with each classified word is muted in accordance with the generated fuzzy muting function.
Type: Grant
Filed: January 22, 2019
Date of Patent: February 23, 2021
Assignee: Tata Consultancy Services Limited
Inventors: Imran Ahamad Sheikh, Sunil Kumar Kopparapu, Bhavikkumar Bhagvanbhai Vachhani, Bala Mallikarjunarao Garlapati, Srinivasa Rao Chalamala
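The key contrast with hard muting is that each subword is attenuated in proportion to its score rather than zeroed wholesale. A minimal sketch, with a toy sample rate and invented subword timestamps and scores:

```python
import numpy as np

SR = 100  # hypothetical sample rate (samples/second), kept tiny for the demo

def fuzzy_mute(audio, subwords):
    """Attenuate each subword region by its relative score.

    `subwords` is a list of (start_sec, end_sec, relative_score), where the
    score in [0, 1] measures how strongly the subword identifies the
    classified word; higher scores are muted harder.
    """
    out = audio.copy()
    for start, end, score in subwords:
        s, e = int(start * SR), int(end * SR)
        out[s:e] *= (1.0 - score)  # fuzzy attenuation instead of hard zeroing
    return out

audio = np.ones(300)  # 3 seconds of dummy audio
# Hypothetical classified word spanning 1.0-2.0 s with three subwords.
subwords = [(1.0, 1.3, 1.0), (1.3, 1.7, 0.5), (1.7, 2.0, 0.8)]
muted = fuzzy_mute(audio, subwords)
print(muted[110], muted[150], muted[250])  # 0.0 0.5 1.0
```

The most identifying subword is fully silenced while weaker ones are only damped, which is the graded behavior a fuzzy muting function provides.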
-
Patent number: 10813571
Abstract: Devices and methods are provided for non-invasive, goal-oriented, and personalized monitoring of substance consumption, directed towards aiding reduction of substance intake by a user. Based on the substance consumption characteristics and the user's profile, the user's substance consumption profile is identified and the average amount of the substance in the body at a given time is computed. A threshold corresponding to the amount of substance the body can sustain is then computed based on goals set by the user, the substance consumption characteristics, and the user's profile. Alerts can be generated and transmitted to the user based on pre-determined conditions to help the user achieve the set goals.
Type: Grant
Filed: December 12, 2016
Date of Patent: October 27, 2020
Assignee: Tata Consultancy Services Limited
Inventors: Sanjay Madhukar Kimbahune, Sunil Kumar Kopparapu, Syed Mohammad Ghouse, Pankaj Harish Doke
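Computing the amount of substance in the body at a given time can be sketched with a standard first-order elimination (half-life) model. The half-life model and the caffeine numbers are illustrative assumptions, not taken from the patent:

```python
import math

def amount_in_body(intakes, half_life_hours, at_hour):
    """Amount of substance remaining at `at_hour`, assuming first-order
    elimination. `intakes` is a list of (hour, dose)."""
    k = math.log(2) / half_life_hours
    return sum(dose * math.exp(-k * (at_hour - t))
               for t, dose in intakes if t <= at_hour)

# Two 100 mg caffeine doses, 5 h half-life: level at hour 10.
level = amount_in_body([(0, 100), (5, 100)], half_life_hours=5, at_hour=10)
print(round(level, 1))  # 75.0 mg: 25 left from dose 1, 50 from dose 2
```

An alert rule then reduces to comparing this value against the user's goal-derived threshold.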
-
Publication number: 20200310746
Abstract: Audio based transactions are becoming more popular and are envisaged to become common in years to come. With the rise in data protection regulations, muting portions of audio files is necessary to hide sensitive information from an eavesdropper or from accidental hearing by an entity who gains unauthorized access to these audio files. However, deleted transaction information in a muted audio file makes auditing the transaction challenging, if not impossible. Embodiments of the present disclosure provide systems and methods for muting audio information in multimedia files, and for retrieving the masked information, which allow reconstruction of the original audio conversation, or restoration of Private to an Entity (P2aE) information without original audio reconstruction, when an audit is being exercised.
Type: Application
Filed: February 26, 2020
Publication date: October 1, 2020
Applicant: Tata Consultancy Services Limited
Inventors: Sunil Kumar Kopparapu, Ashish Panda
-
Publication number: 20200090041
Abstract: Statistical pattern recognition relies on a substantial amount of annotated samples for better learning, and learning is insufficient in low resource scenarios. Creating annotated databases is itself a challenging task that requires significant effort and cost, which may not always be feasible. Such challenges are addressed by the present disclosure by generating synthetic samples through automatic transformation using Deep Autoencoders (DAE). An autoencoder is trained using all possible combinations of pairs between a plurality of classes that can be formed from the handful of samples in a low resource database, and then the DAE is used to generate new samples when samples of one class are given as input to the autoencoder. The system of the present disclosure can be configured to generate as many training samples as required. Also, the deep autoencoder can be dynamically configured to meet requirements.
Type: Application
Filed: September 19, 2019
Publication date: March 19, 2020
Applicant: Tata Consultancy Services Limited
Inventors: Rupayan CHAKRABORTY, Sunil Kumar KOPPARAPU
-
Publication number: 20200020340
Abstract: This disclosure relates generally to a method and system for muting classified information in an audio signal using a fuzzy approach. The method comprises converting the received audio signal into text using a speech recognition engine to identify a plurality of classified words from the text, to obtain a first set of parameters. Further, a plurality of subwords associated with each classified word is identified to obtain a second set of parameters associated with each subword of the corresponding classified word. A relative score is computed for each subword associated with the classified word based on a plurality of similar pairs for the corresponding classified word. A fuzzy muting function is generated using the first set of parameters, the second set of parameters, and the relative score associated with each subword. The plurality of subwords associated with each classified word is muted in accordance with the generated fuzzy muting function.
Type: Application
Filed: January 22, 2019
Publication date: January 16, 2020
Applicant: Tata Consultancy Services Limited
Inventors: Imran Ahamad SHEIKH, Sunil Kumar KOPPARAPU, Bhavikkumar Bhagvanbhai VACHHANI, Bala Mallikarjunarao GARLAPATI, Srinivasa Rao CHALAMALA
-
Patent number: 10460732
Abstract: A system and method to insert visual subtitles in videos is described. The method comprises segmenting an input video signal to extract the speech segments and music segments. Next, a speaker representation is associated with each speech segment corresponding to a speaker visible in the frame. Further, speech segments are analyzed to compute the phones and the duration of each phone. The phones are mapped to corresponding visemes, and a viseme-based language model is created with a corresponding score. The most relevant viseme is selected for each speech segment by computing a total viseme score. Further, a speaker representation sequence is created such that phones and emotions in the speech segments are represented as reconstructed lip movements and eyebrow movements. The speaker representation sequence is then integrated with the music segments and superimposed on the input video signal to create subtitles.
Type: Grant
Filed: March 29, 2017
Date of Patent: October 29, 2019
Assignee: Tata Consultancy Services Limited
Inventors: Chitralekha Bhat, Sunil Kumar Kopparapu, Ashish Panda
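The phone-to-viseme mapping and score-based viseme selection can be sketched with a small lookup table and duration-weighted scoring. The map below is a tiny invented subset (real maps group roughly 40 phones into about a dozen viseme classes):

```python
# Hypothetical many-to-one phone-to-viseme map.
PHONE_TO_VISEME = {"p": "bilabial", "b": "bilabial", "m": "bilabial",
                   "f": "labiodental", "v": "labiodental",
                   "aa": "open", "ae": "open"}

def best_viseme(phones):
    """Pick the viseme whose mapped phones cover the most speech time.

    `phones` is a list of (phone, duration_sec) from a phone recognizer;
    the duration-weighted sum stands in for the patent's total viseme score.
    """
    scores = {}
    for phone, dur in phones:
        viseme = PHONE_TO_VISEME.get(phone)
        if viseme is not None:
            scores[viseme] = scores.get(viseme, 0.0) + dur
    return max(scores, key=scores.get)

segment = [("p", 0.05), ("aa", 0.20), ("m", 0.08), ("ae", 0.15)]
print(best_viseme(segment))  # open: 0.35 s beats bilabial's 0.13 s
```

The selected viseme per segment then drives the reconstructed lip movements in the speaker representation sequence.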
-
Patent number: 10410622
Abstract: Text output of speech recognition engines tends to be erroneous when spoken data has domain-specific terms. The present disclosure facilitates automatic correction of errors in speech-to-text conversion using abstractions of evolutionary development and artificial development. The words in a speech recognition engine's text output are treated as a set of injured genes in a biological cell that need repair; these form genotypes that are repaired to phenotypes through a series of repair steps based on matching, mapping, and linguistic repair via a fitness criterion. A basic genetic-level repair involves a phonetic MATCHING function together with a FITNESS function to select the best among the matching genes. A second genetic-level repair involves a contextual MAPPING function for repairing the remaining 'injured' genes of the speech recognition engine output. Finally, a genotype-to-phenotype repair involves using linguistic rules and semantic rules of the domain.
Type: Grant
Filed: July 13, 2017
Date of Patent: September 10, 2019
Assignee: Tata Consultancy Services Limited
Inventors: Chandrasekhar Anantaram, Sunil Kumar Kopparapu, Chiragkumar Rameshbhai Patel, Aditya Mittal
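The MATCHING-plus-FITNESS step can be illustrated with string similarity standing in for phonetic similarity (a real system would compare phoneme sequences). The domain lexicon and fitness floor are invented for the example:

```python
from difflib import SequenceMatcher

# Hypothetical domain lexicon standing in for the pool of candidate genes.
DOMAIN_TERMS = ["hypertension", "hemoglobin", "hypoglycemia"]

def phonetic_match(word, pool):
    """MATCHING step: score candidates close to the 'injured' word.
    String similarity is a stand-in for phonetic similarity here."""
    return [(t, SequenceMatcher(None, word, t).ratio()) for t in pool]

def repair(word, pool, fitness_floor=0.6):
    """FITNESS step: keep the best candidate if it clears a fitness floor,
    otherwise leave the word for the later MAPPING/linguistic repairs."""
    best, score = max(phonetic_match(word, pool), key=lambda p: p[1])
    return best if score >= fitness_floor else word

# The ASR split the domain term "hypertension" into "hyper tension".
print(repair("hyper tension", DOMAIN_TERMS))  # hypertension
```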
-
Patent number: 10388283
Abstract: This disclosure relates generally to audio-to-text conversion for an audio conversation, and particularly to a system and method for improving call-center audio transcription. In one embodiment, a method includes deriving temporal information and contextual information from an audio segment of an audio conversation corresponding to an interaction of speakers; input parameters are extracted from the temporal and contextual information associated with the audio segment. A language model (LM) and an acoustic model (AM) of an automatic speech recognition (ASR) engine are dynamically tuned based on the input parameters. A subsequent audio segment is processed using the tuned AM and LM for the audio-to-text conversion.
Type: Grant
Filed: March 13, 2018
Date of Patent: August 20, 2019
Assignee: Tata Consultancy Services Limited
Inventors: Bhavikkumar Vachhani, Sunil Kumar Kopparapu
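One concrete way contextual tuning can work is LM interpolation: the transcript of the previous segment shifts interpolation weight toward a domain LM for the next segment. The unigram probabilities and weighting rule below are invented for illustration:

```python
# Two hypothetical unigram LMs: generic and billing-domain.
GENERIC_LM = {"payment": 0.01, "refund": 0.01, "hello": 0.05}
BILLING_LM = {"payment": 0.08, "refund": 0.06, "hello": 0.01}

def tune_lm_weight(previous_text):
    """Contextual tuning: shift weight toward the domain LM when the
    previous segment mentions domain keywords."""
    hits = sum(kw in previous_text.split() for kw in BILLING_LM)
    return min(0.9, 0.2 + 0.3 * hits)

def interpolated_prob(word, domain_weight):
    """Word probability under the dynamically interpolated LM."""
    return (domain_weight * BILLING_LM.get(word, 1e-4)
            + (1 - domain_weight) * GENERIC_LM.get(word, 1e-4))

w = tune_lm_weight("i want a refund for my payment")
print(round(w, 2))  # 0.8: two billing keywords seen in the prior segment
print(interpolated_prob("refund", w) > interpolated_prob("refund", 0.2))  # True
```

The next audio segment is then decoded with this re-weighted LM, making in-domain words easier to recognize.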
-
Patent number: 10319377
Abstract: A method and system are provided for estimating clean speech parameters from noisy speech parameters. The method is performed by acquiring speech signals, estimating noise from the acquired speech signals, computing speech features from the acquired speech signals, estimating model parameters from the computed speech features, and estimating clean parameters from the estimated noise and the estimated model parameters.
Type: Grant
Filed: February 28, 2017
Date of Patent: June 11, 2019
Assignee: Tata Consultancy Services Limited
Inventors: Ashish Panda, Sunil Kumar Kopparapu
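The noise-estimation-then-clean-parameter pipeline can be illustrated with a spectral-subtraction-style stand-in (the patent's actual estimator is not specified in the abstract): noise power is estimated from leading frames assumed to be speech-free, then subtracted from each noisy frame's power:

```python
import numpy as np

rng = np.random.default_rng(4)

def estimate_noise(noisy_frames, n_lead=10):
    """Estimate noise power from leading (assumed speech-free) frames."""
    return np.mean(noisy_frames[:n_lead] ** 2)

def estimate_clean_power(noisy_frame, noise_power):
    """Clean power estimate: noisy power minus the noise estimate,
    floored at zero. A stand-in for the patented estimator."""
    return max(np.mean(noisy_frame ** 2) - noise_power, 0.0)

noise = 0.1 * rng.standard_normal((50, 160))
speech = np.sin(np.linspace(0, 100, 160))
frames = noise.copy()
frames[20] += speech  # speech appears only in frame 20

npow = estimate_noise(frames)
# The speech frame's estimated clean power dominates a noise-only frame's.
print(estimate_clean_power(frames[20], npow) > estimate_clean_power(frames[0], npow))  # True
```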
-
Publication number: 20190132450
Abstract: A system and method for monitoring the behavior of voice agents in a simulated environment of a voice-based call center to route a call. It includes a set of models and wearable devices to estimate and analyze the cognitive load and emotional state of a voice agent, which are obtained using the wearable devices in real time. It collects physiological signals from the voice agents and analyzes them along with the skill-set profiles of the voice agents to identify the best-suited voice agent, based on an agent-customer matching score obtained using skill-set profile analysis, the cognitive load, and a predicted emotive state of the voice agent. It may assist the voice agent in a call, using brain-computer interfacing, if the cognitive load of the voice agent rises beyond a predefined threshold.
Type: Application
Filed: July 3, 2018
Publication date: May 2, 2019
Applicant: Tata Consultancy Services Limited
Inventors: Sri Harsha DUMPALA, Sunil Kumar KOPPARAPU
-
Publication number: 20190088260
Abstract: This disclosure relates generally to audio-to-text conversion for an audio conversation, and particularly to a system and method for improving call-center audio transcription. In one embodiment, a method includes deriving temporal information and contextual information from an audio segment of an audio conversation corresponding to an interaction of speakers; input parameters are extracted from the temporal and contextual information associated with the audio segment. A language model (LM) and an acoustic model (AM) of an automatic speech recognition (ASR) engine are dynamically tuned based on the input parameters. A subsequent audio segment is processed using the tuned AM and LM for the audio-to-text conversion.
Type: Application
Filed: March 13, 2018
Publication date: March 21, 2019
Applicant: Tata Consultancy Services Limited
Inventors: BHAVIKKUMAR VACHHANI, Sunil Kumar KOPPARAPU
-
Publication number: 20190042938
Abstract: The disclosure presents herein a method to train a classifier in machine learning using more than one simultaneous sample, to address the class imbalance problem in any discriminative classifier. A modified representation of the training dataset is obtained by simultaneously considering feature-based representations of more than one sample. A modification to the architecture of the classifier is needed to handle this modified data representation of the multiple samples. The modification gives the input layer the number of units needed to accept the plurality of simultaneous samples in the training dataset. The output layer consists of units equal to twice the considered number of classes in the classification task; therefore, the output layer has four units for a two-class classification task. The disclosure herein can be implemented to resolve the problem of learning from low-resourced data.
Type: Application
Filed: May 18, 2018
Publication date: February 7, 2019
Applicant: Tata Consultancy Services Limited
Inventors: Sri Harsha Dumpala, Rupayan Chakraborty, Sunil Kumar Kopparapu
-
Patent number: 10163313
Abstract: A system and method to detect an event by analyzing sound signals received from a plurality of configured sensors. The sensors can be fixed or mobile, and sensor activity is tracked in a sensor map. The frame analyzer of the system compares sound signals received from the sensors and applies knowledge data to determine whether any observed deviation constitutes an uncharacteristic event. A rule data set comprising priority data, type of event, and location is applied to the output of the frame analyzer to determine if the uncharacteristic sound observed is an event. On detection of an event, alerts are issued to the appropriate authority. Further, the sound frame and contextual data associated with the event are stored to serve as continuous learning for the system.
Type: Grant
Filed: March 13, 2017
Date of Patent: December 25, 2018
Assignee: Tata Consultancy Services Limited
Inventors: Sivakumar Subramanian, Sunil Kumar Kopparapu
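The two-stage pipeline the abstract describes, a frame analyzer that flags deviations followed by a rule data set that decides whether to alert, can be sketched with frame-energy deviation and a location-keyed rule table. The threshold, rule entries, and locations are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(3)

def frame_analyzer(frames, baseline, threshold=3.0):
    """Flag frames whose energy deviates from the learned baseline
    (knowledge data) by more than `threshold` standard deviations."""
    energies = np.mean(frames ** 2, axis=1)
    return np.abs(energies - baseline.mean()) > threshold * baseline.std()

# Toy rule data set keyed by sensor location.
RULES = {"entrance": {"priority": "high"}, "lobby": {"priority": "low"}}

def detect_event(frames, baseline, location):
    """Apply the rules: only high-priority locations raise alerts."""
    deviant = frame_analyzer(frames, baseline)
    if deviant.any() and RULES.get(location, {}).get("priority") == "high":
        return "ALERT"
    return "OK"

# Baseline energies learned from 100 quiet frames of 160 samples each.
baseline = np.mean(rng.standard_normal((100, 160)) ** 2, axis=1)
quiet = rng.standard_normal((5, 160))
loud = np.concatenate([quiet, 10 * rng.standard_normal((1, 160))])

print(detect_event(loud, baseline, "entrance"))  # ALERT
print(detect_event(loud, baseline, "lobby"))     # OK
```

The same deviant frames trigger an alert only where the rule set assigns high priority, which is the filtering role the abstract gives the rule data set.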