Patents by Inventor L. Venkata Subramaniam

L. Venkata Subramaniam has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20090132442
    Abstract: A method for determining a decision point in real-time for a data stream from a conversation includes receiving streaming conversational data; and determining when to classify the streaming conversational data, using a measure of certainty, by performing certainty calculations at a plurality of time instances during the conversation and by selecting a decision point in response to the certainty calculations, the decision point not being based on a fixed window of conversational data but being based on accumulated conversational data available at different ones of the plurality of time instances. Systems and computer program products are also provided.
    Type: Application
    Filed: November 15, 2007
    Publication date: May 21, 2009
    Inventors: L. Venkata Subramaniam, Ganesh Ramakrishnan, Tanveer A. Faruquie
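
A minimal Python sketch of the idea in publication 20090132442: class scores are recomputed over all conversational data accumulated so far, and the decision point is taken as soon as a certainty measure (here a normalized margin between the top two class scores) crosses a threshold. The keyword lexicons, the margin-based certainty, and the 0.6 threshold are illustrative assumptions, not details from the filing.

```python
from collections import Counter

# Hypothetical call-type lexicons; a real system would learn a classifier.
CLASS_KEYWORDS = {
    "billing": {"invoice", "charge", "refund", "payment"},
    "tech_support": {"error", "crash", "install", "reboot"},
}

def class_scores(tokens):
    """Score each class by how densely its keywords occur in the tokens seen so far."""
    counts = Counter(tokens)
    return {label: sum(counts[w] for w in kws) / max(1, len(tokens))
            for label, kws in CLASS_KEYWORDS.items()}

def classify_stream(utterances, certainty_threshold=0.6):
    """Accumulate utterances and return (label, decision_index) as soon as the
    normalized margin between the top two class scores crosses the threshold."""
    seen, scores = [], {}
    for i, utterance in enumerate(utterances):
        seen.extend(utterance.lower().split())   # all data so far, not a fixed window
        scores = class_scores(seen)
        best, second = sorted(scores.values(), reverse=True)[:2]
        certainty = (best - second) / best if best > 0 else 0.0
        if certainty >= certainty_threshold:     # decision point reached
            return max(scores, key=scores.get), i
    return max(scores, key=scores.get), len(utterances) - 1  # fall back to call end

if __name__ == "__main__":
    call = ["hello thanks for calling",
            "my last invoice shows a wrong charge",
            "i would like a refund on that payment"]
    print(classify_stream(call))   # ('billing', 1) with these toy lexicons
```
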
  • Publication number: 20090112588
    Abstract: A method is provided for forming discrete segment clusters of one or more sequential sentences from a corpus of communication transcripts of transactional communications that comprises dividing the communication transcripts of the corpus into a first set of sentences spoken by a caller and a second set of sentences spoken by a responder; generating a specified number of sentence clusters by grouping the first and second sets of sentences according to a measure of lexical similarity using an unsupervised partitional clustering method; generating a collection of sequences of sentence types by assigning a distinct sentence type to each sentence cluster and representing each sentence of each communication transcript of the corpus with the sentence type assigned to the sentence cluster into which the sentence is grouped; and generating a specified number of discrete segment clusters by successively merging sentence clusters according to a proximity-based measure between the sentence types assigned to the sentence…
    Type: Application
    Filed: October 31, 2007
    Publication date: April 30, 2009
    Applicant: International Business Machines Corporation
    Inventors: Krishna Kummamuru, Deepak S. Padmanabhan, Shourya Roy, L. Venkata Subramaniam
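
A rough sketch of the pipeline described in publications 20090112588 and 20090112571, under stated simplifications: TF-IDF plus k-means (via scikit-learn) stands in for the unsupervised partitional clustering, adjacency counts over the type sequences stand in for the proximity-based merging measure, and the caller/responder split is omitted for brevity.

```python
from collections import Counter

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def sentence_clusters(sentences, n_clusters):
    """Group sentences by lexical similarity; returns one cluster (sentence type) per sentence."""
    tfidf = TfidfVectorizer().fit_transform(sentences)
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(tfidf)

def segment_clusters(transcripts, n_sentence_clusters, n_segments):
    """transcripts: list of transcripts, each a list of sentence strings.
    Merges sentence clusters that tend to occur next to each other until only
    n_segments groups remain; returns each transcript as a sequence of segment ids."""
    sentences = [s for t in transcripts for s in t]
    types = sentence_clusters(sentences, n_sentence_clusters)

    # Rebuild each transcript as a sequence of sentence types.
    sequences, i = [], 0
    for t in transcripts:
        sequences.append([int(types[i + j]) for j in range(len(t))])
        i += len(t)

    # Union-find over cluster ids; repeatedly merge the most adjacent pair.
    parent = list(range(n_sentence_clusters))

    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x

    groups = n_sentence_clusters
    while groups > n_segments:
        adjacency = Counter()
        for seq in sequences:
            for a, b in zip(seq, seq[1:]):
                ra, rb = find(a), find(b)
                if ra != rb:
                    adjacency[tuple(sorted((ra, rb)))] += 1
        if not adjacency:          # nothing left to merge
            break
        (ra, rb), _ = adjacency.most_common(1)[0]
        parent[rb] = ra            # merge the two most adjacent clusters
        groups -= 1

    return [[find(t) for t in seq] for seq in sequences]
```

Calling segment_clusters with a list of transcripts (each a list of sentence strings) returns each transcript re-expressed as a sequence of merged segment-cluster ids.
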
  • Publication number: 20090112571
    Abstract: A method is provided for forming discrete segment clusters of one or more sequential sentences from a corpus of communication transcripts of transactional communications that comprises dividing the communication transcripts of the corpus into a first set of sentences spoken by a caller and a second set of sentences spoken by a responder; generating a set of sentence clusters by grouping the first and second sets of sentences according to a measure of lexical similarity using an unsupervised partitional clustering method; generating a collection of sequences of sentence types by assigning a distinct sentence type to each sentence cluster and representing each sentence of each communication transcript of the corpus with the sentence type assigned to the sentence cluster into which the sentence is grouped; and generating a specified number of discrete segment clusters by successively merging sentence clusters according to a proximity-based measure between the sentence types assigned to the sentence clusters with…
    Type: Application
    Filed: April 1, 2008
    Publication date: April 30, 2009
    Applicant: International Business Machines Corporation
    Inventors: Krishna Kummamuru, Deepak S. Padmanabhan, Shourya Roy, L. Venkata Subramaniam
  • Publication number: 20090063150
    Abstract: Sentence boundaries in noisy conversational transcription data are automatically identified. Noise and transcription symbols are removed, and a training set is formed with sentence boundaries marked based on long silences or on manual markings in the transcribed data. Frequencies of head and tail n-grams that occur at the beginning and ending of sentences are determined from the training set. N-grams that occur a significant number of times in the middle of sentences in relation to their occurrences at the beginning or ending of sentences are filtered out. A boundary is marked before every head n-gram and after every tail n-gram occurring in the conversational data and remaining after filtering. Turns are identified. A boundary is marked after each turn, unless the turn ends with an impermissible tail word or is an incomplete turn. The marked boundaries in the conversational data identify sentence boundaries.
    Type: Application
    Filed: August 27, 2007
    Publication date: March 5, 2009
    Applicant: International Business Machines Corporation
    Inventors: Tetsuya Nasukawa, Diwakar Punjani, Shourya Roy, L. Venkata Subramaniam, Hironori Takeuchi
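
An illustrative, unigram-only reduction of the head/tail n-gram approach in publication 20090063150. The 2.0 mid-sentence filtering ratio and the '|' boundary marker are assumptions; turn handling and transcription-symbol cleanup are omitted.

```python
from collections import Counter

def learn_boundary_words(training_sentences, ratio=2.0):
    """Learn words that reliably start (head) or end (tail) sentences."""
    heads, tails, middles = Counter(), Counter(), Counter()
    for sent in training_sentences:
        words = sent.lower().split()
        if not words:
            continue
        heads[words[0]] += 1
        tails[words[-1]] += 1
        middles.update(words[1:-1])
    # Filter out words that also occur often in the middle of sentences.
    head_set = {w for w, c in heads.items() if c >= ratio * middles[w]}
    tail_set = {w for w, c in tails.items() if c >= ratio * middles[w]}
    return head_set, tail_set

def mark_boundaries(words, head_set, tail_set):
    """Insert a '|' boundary before head words and after tail words."""
    out = []
    for w in words:
        if w in head_set and out and out[-1] != "|":
            out.append("|")
        out.append(w)
        if w in tail_set:
            out.append("|")
    return out

if __name__ == "__main__":
    train = ["okay thank you for calling", "how can i help you today", "thank you bye"]
    heads, tails = learn_boundary_words(train)
    stream = "okay i need help with my bill thank you bye".split()
    print(" ".join(mark_boundaries(stream, heads, tails)))
```
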
  • Publication number: 20080256063
    Abstract: A keyword search system including a text input unit for inputting subtexts obtained by dividing each text into parts, while associating the subtexts with an event through a process recorded in the text; a prediction device adjuster for adjusting a corresponding event prediction device to maximize the percentage of text in which the inputted event is identical to a prediction result in a first text group selected from the subtexts; a prediction processor for generating a prediction result for each section, by inputting each text in a second text group selected from the corresponding subtexts into the adjusted event prediction device; and a search unit for calculating the prediction precision for the second text group of the event prediction device using a comparison between the inputted event and the prediction result for each subtext, and searching for keywords in sections with a certain degree of prediction precision.
    Type: Application
    Filed: March 7, 2008
    Publication date: October 16, 2008
    Applicant: International Business Machines Corporation
    Inventors: Tetsuya Nasukawa, Shourya Roy, L. Venkata Subramaniam, Hironori Takeuchi
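
A toy rendition of the idea in publication 20080256063: a simple word-vote predictor is fit per section on a first text group, its per-section prediction precision is measured on a second group, and keywords are reported only for sections that predict the event reliably. The predictor, the 0.7 cutoff, and the equal-segmentation assumption are all illustrative.

```python
from collections import Counter, defaultdict

def fit_section_predictors(train):
    """train: list of (sections, event) pairs, where sections is the list of
    per-section texts of one document. Returns a per-section word-vote predictor."""
    word_event = defaultdict(Counter)          # (section index, word) -> event counts
    for sections, event in train:
        for idx, text in enumerate(sections):
            for w in set(text.lower().split()):
                word_event[(idx, w)][event] += 1

    def predict(idx, text, default):
        votes = Counter()
        for w in set(text.lower().split()):
            counts = word_event.get((idx, w))
            if counts:
                votes[counts.most_common(1)[0][0]] += 1
        return votes.most_common(1)[0][0] if votes else default
    return predict

def high_precision_keywords(train, test, cutoff=0.7, top_k=5):
    """Keep only sections whose predictions match the labelled event at least
    `cutoff` of the time, and return the most frequent words of those sections."""
    predict = fit_section_predictors(train)
    default = Counter(event for _, event in train).most_common(1)[0][0]
    n_sections = len(test[0][0])               # assumes equally segmented texts
    keywords = {}
    for idx in range(n_sections):
        hits = sum(predict(idx, sections[idx], default) == event
                   for sections, event in test)
        if hits / len(test) >= cutoff:         # this section predicts the event well
            words = Counter(w for sections, _ in test
                            for w in sections[idx].lower().split())
            keywords[idx] = [w for w, _ in words.most_common(top_k)]
    return keywords
```

high_precision_keywords returns a mapping from section index to that section's most frequent words, restricted to sections whose event predictions were accurate.
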
  • Patent number: 7295979
    Abstract: Bootstrapping of a system from one language to another often works well when the two languages share a similar acoustic space. However, when the new language has sounds that do not occur in the language from which the bootstrapping is to be done, bootstrapping does not produce good initial models and the new language data is not properly aligned to these models. The present invention provides techniques to generate context dependent labeling of the new language data using the recognition system of another language. Then, this labeled data is used to generate models for the new language phones.
    Type: Grant
    Filed: February 22, 2001
    Date of Patent: November 13, 2007
    Assignee: International Business Machines Corporation
    Inventors: Chalapathy Venkata Neti, Nitendra Rajput, L. Venkata Subramaniam, Ashish Verma
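
A schematic sketch of the bootstrapping step described in patent 7295979 (and the corresponding application 20020152068 below): new-language phones unknown to the base-language recognizer are mapped to their closest base-language phones so that recognizer can label the data, and the labels are expanded into context-dependent (triphone) form. The phone mapping shown is purely hypothetical.

```python
# Hypothetical closest-phone mapping for sounds the base (English) recognizer
# does not model; the pairs below are illustrative only.
NEW_TO_BASE_PHONE = {
    "ʈ": "t", "ɖ": "d", "ɳ": "n", "ə": "ah", "aː": "aa",
}

def base_language_phones(new_language_phones):
    """Replace phones unknown to the base recognizer with their mapped
    counterparts so that recognizer can time-align (label) the new data."""
    return [NEW_TO_BASE_PHONE.get(p, p) for p in new_language_phones]

def context_dependent_labels(phones):
    """Expand a phone sequence into left-centre-right triphone labels
    (HTK-style l-c+r notation), i.e. the context-dependent labels that
    seed the new-language models."""
    padded = ["sil"] + list(phones) + ["sil"]
    return [f"{padded[i-1]}-{padded[i]}+{padded[i+1]}"
            for i in range(1, len(padded) - 1)]

if __name__ == "__main__":
    utterance = ["ʈ", "aː", "ɳ"]                 # toy new-language phone string
    mapped = base_language_phones(utterance)     # ['t', 'aa', 'n']
    print(context_dependent_labels(mapped))      # ['sil-t+aa', 't-aa+n', 'aa-n+sil']
```
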
  • Patent number: 6813607
    Abstract: A computer-implemented method in a language-independent system generates audio-driven facial animation given the speech recognition system for just one language. The method is based on the recognition that once alignment is generated, the mapping and the animation hardly have any language dependency in them. Translingual visual speech synthesis can be achieved if the first step of alignment generation can be made language independent. Given a speech recognition system for a base language, the method synthesizes video with speech of any novel language as the input.
    Type: Grant
    Filed: January 31, 2000
    Date of Patent: November 2, 2004
    Assignee: International Business Machines Corporation
    Inventors: Tanveer Afzal Faruquie, Chalapathy Neti, Nitendra Rajput, L. Venkata Subramaniam, Ashish Verma
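
A minimal sketch of the language-independent back end suggested by patent 6813607: once a phone-level alignment of the novel-language speech exists, phones are mapped to visemes and expanded into a per-frame schedule for the facial animation. The phone-to-viseme table and the 25 fps rate are assumptions.

```python
# Hypothetical phone-to-viseme groups; a production table would be far larger.
PHONE_TO_VISEME = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    "f": "labiodental", "v": "labiodental",
    "aa": "open_vowel", "ah": "open_vowel",
    "iy": "spread_vowel", "s": "fricative", "z": "fricative",
}

def viseme_schedule(alignment, fps=25):
    """alignment: list of (phone, start_sec, end_sec) tuples from the aligner.
    Returns one viseme label per video frame, ready to drive the face model."""
    frames = []
    for phone, start, end in alignment:
        viseme = PHONE_TO_VISEME.get(phone, "neutral")
        n_frames = max(1, round((end - start) * fps))
        frames.extend([viseme] * n_frames)
    return frames

if __name__ == "__main__":
    aligned = [("m", 0.00, 0.12), ("aa", 0.12, 0.30), ("p", 0.30, 0.42)]
    print(viseme_schedule(aligned))   # one viseme label per 40 ms frame
```
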
  • Publication number: 20020152068
    Abstract: Bootstrapping of a system from one language to another often works well when the two languages share a similar acoustic space. However, when the new language has sounds that do not occur in the language from which the bootstrapping is to be done, bootstrapping does not produce good initial models and the new language data is not properly aligned to these models. The present invention provides techniques to generate context dependent labeling of the new language data using the recognition system of another language. Then, this labeled data is used to generate models for the new language phones.
    Type: Application
    Filed: February 22, 2001
    Publication date: October 17, 2002
    Applicant: International Business Machines Corporation
    Inventors: Chalapathy Venkata Neti, Nitendra Rajput, L. Venkata Subramaniam, Ashish Verma
  • Patent number: 6366885
    Abstract: A method of speech-driven lip synthesis which applies viseme-based training models to units of visual speech. The audio data is grouped into a smaller number of visually distinct visemes rather than the larger number of phonemes. These visemes then form the basis for a Hidden Markov Model (HMM) state sequence or the output nodes of a neural network. During the training phase, audio and visual features are extracted from input speech, which is then aligned according to the apparent viseme sequence, with the corresponding audio features being used to calculate the HMM state output probabilities or the output of the neural network. During the synthesis phase, the acoustic input is aligned with the most likely viseme HMM sequence (in the case of an HMM-based model) or with the nodes of the network (in the case of a neural-network-based system), which is then used for animation.
    Type: Grant
    Filed: August 27, 1999
    Date of Patent: April 2, 2002
    Assignee: International Business Machines Corporation
    Inventors: Sankar Basu, Tanveer Afzal Faruquie, Chalapathy V. Neti, Nitendra Rajput, Andrew William Senior, L. Venkata Subramaniam, Ashish Verma
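
A toy sketch of the viseme-level modelling in patent 6366885: phoneme-aligned training data is collapsed into viseme state sequences, from which HMM-style transition statistics are estimated. The phoneme grouping and the example data are illustrative; a real system would also model the acoustic features emitted by each state.

```python
from collections import Counter, defaultdict

# Hypothetical grouping of phonemes into visually distinct viseme classes.
VISEME_OF = {
    "p": "V_bilabial", "b": "V_bilabial", "m": "V_bilabial",
    "f": "V_labiodental", "v": "V_labiodental",
    "k": "V_velar", "g": "V_velar",
    "aa": "V_open", "ah": "V_open", "iy": "V_spread",
}

def to_viseme_states(phoneme_sequence):
    """Collapse a phoneme sequence into viseme states, merging consecutive repeats."""
    states = []
    for ph in phoneme_sequence:
        v = VISEME_OF.get(ph, "V_neutral")
        if not states or states[-1] != v:
            states.append(v)
    return states

def transition_probabilities(training_sequences):
    """Estimate P(next_viseme | viseme) from viseme state sequences."""
    counts = defaultdict(Counter)
    for phonemes in training_sequences:
        states = to_viseme_states(phonemes)
        for a, b in zip(states, states[1:]):
            counts[a][b] += 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

if __name__ == "__main__":
    data = [["m", "aa", "p", "ah"], ["b", "aa", "m", "iy"]]
    print(transition_probabilities(data))
```
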