Patents by Inventor L. Venkata Subramaniam

L. Venkata Subramaniam has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20080256063
    Abstract: A keyword search system including a text input unit for inputting subtexts obtained by dividing each text into parts, while associating the subtexts with an event through a process recorded in the text; a prediction device adjuster for adjusting a corresponding event prediction device to maximize the percentage of text in which the inputted event is identical to a prediction result in a first text group selected from the subtexts; a prediction processor for generating a prediction result for each section, by inputting each text in a second text group selected from the corresponding subtexts in the adjusted event prediction device; and a search unit for calculating the prediction precision for the second text group of the event prediction device using a comparison between the inputted event and the prediction result for each subtext, and searching for keywords in sections with a certain degree of prediction precision.
    Type: Application
    Filed: March 7, 2008
    Publication date: October 16, 2008
    Applicant: International Business Machines Corporation
    Inventors: Tetsuya Nasukawa, Shourya Roy, L. Venkata Subramaniam, Hironori Takeuchi
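A minimal sketch of the idea this abstract describes: train an event predictor on a first group of subtexts, measure its per-section prediction precision on a second group, and mine keywords only from sections where the predictor is reliable. The data, the naive unigram predictor, and the 0.5 threshold below are illustrative assumptions, not the patented method.

```python
# Sketch: event prediction precision gates which sections yield keywords.
from collections import Counter, defaultdict

def train_predictor(labeled_subtexts):
    """Count which event each word co-occurs with (naive unigram model)."""
    word_event = defaultdict(Counter)
    for text, event in labeled_subtexts:
        for word in text.split():
            word_event[word][event] += 1
    return word_event

def predict(word_event, text):
    """Predict the event whose associated words dominate the subtext."""
    votes = Counter()
    for word in text.split():
        if word in word_event:
            votes[word_event[word].most_common(1)[0][0]] += 1
    return votes.most_common(1)[0][0] if votes else None

def section_precision(word_event, section):
    """Fraction of subtexts whose prediction matches the recorded event."""
    hits = sum(predict(word_event, t) == e for t, e in section)
    return hits / len(section)

# First text group: adjust (train) the event prediction device.
group1 = [("error disk failure", "hardware"),
          ("null pointer crash", "software"),
          ("disk smart warning", "hardware"),
          ("segfault in parser", "software")]
model = train_predictor(group1)

# Second text group, split into sections: search for keywords only in
# sections whose prediction precision clears a threshold.
sections = {"s1": [("disk failure again", "hardware"),
                   ("parser crash", "software")],
            "s2": [("random chatter", "hardware")]}
keywords = set()
for name, section in sections.items():
    if section_precision(model, section) >= 0.5:
        for text, _ in section:
            keywords.update(text.split())
```

Here section "s1" is predicted accurately and contributes keywords; "s2" falls below the precision threshold and is skipped.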
  • Patent number: 7295979
    Abstract: Bootstrapping of a system from one language to another often works well when the two languages share a similar acoustic space. However, when the new language has sounds that do not occur in the language from which the bootstrapping is to be done, bootstrapping does not produce good initial models and the new language data is not properly aligned to these models. The present invention provides techniques to generate context dependent labeling of the new language data using the recognition system of another language. Then, this labeled data is used to generate models for the new language phones.
    Type: Grant
    Filed: February 22, 2001
    Date of Patent: November 13, 2007
    Assignee: International Business Machines Corporation
    Inventors: Chalapathy Venkata Neti, Nitendra Rajput, L. Venkata Subramaniam, Ashish Verma
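A minimal sketch of the bootstrapping idea in this abstract: map acoustic data of the new language onto the nearest phones of a base-language recognizer, then seed models for the new language's phones from the labeled data. The toy feature vectors, phone inventories, and single-mean "models" below are illustrative assumptions only.

```python
# Sketch: label new-language frames with a base-language recognizer,
# then seed per-phone models for the new language from those labels.
import math

base_models = {       # base-language phones -> mean acoustic feature
    "AA": [1.0, 0.0],
    "IY": [0.0, 1.0],
}

def nearest_base_phone(feature):
    """Map an acoustic frame to the closest base-language phone."""
    return min(base_models,
               key=lambda p: math.dist(base_models[p], feature))

# New-language frames, grouped here by their (so far unmodeled) phone.
new_frames = {"ph1": [[0.9, 0.1], [1.1, -0.1]],
              "ph2": [[0.1, 0.9]]}

# Seed each new phone's model: initialize from the base phone the
# recognizer assigns, with a mean computed from the new data itself.
seed_models = {}
for phone, frames in new_frames.items():
    labels = [nearest_base_phone(f) for f in frames]
    mean = [sum(c) / len(frames) for c in zip(*frames)]
    seed_models[phone] = {"init_from": labels[0], "mean": mean}
```

The failure mode the abstract mentions corresponds to a new phone whose frames sit far from every base-language mean: the nearest-phone labels then become arbitrary, and the seeded models start poorly aligned.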
  • Patent number: 6813607
    Abstract: A computer implemented method in a language independent system generates audio-driven facial animation given the speech recognition system for just one language. The method is based on the recognition that once alignment is generated, the mapping and the animation hardly have any language dependency in them. Translingual visual speech synthesis can be achieved if the first step of alignment generation can be made language independent. Given a speech recognition system for a base language, the method synthesizes video with speech of any novel language as the input.
    Type: Grant
    Filed: January 31, 2000
    Date of Patent: November 2, 2004
    Assignee: International Business Machines Corporation
    Inventors: Tanveer Afzal Faruquie, Chalapathy Neti, Nitendra Rajput, L. Venkata Subramaniam, Ashish Verma
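The pipeline shape this abstract describes can be sketched as follows: only the alignment step touches the base language's recognizer; the phoneme-to-viseme mapping and the animation stage reuse its output unchanged for any input language. The phoneme labels, viseme names, and stubbed aligner are toy assumptions for illustration.

```python
# Sketch: language dependence is confined to the alignment step;
# mapping and animation operate on its output regardless of language.

PHONEME_TO_VISEME = {"p": "lips_closed", "b": "lips_closed",
                     "aa": "mouth_open", "iy": "spread"}

def align(audio_frames):
    """Language-dependent step: a base-language recognizer would align
    audio to timed phonemes. Stubbed here with pre-labeled frames."""
    return audio_frames  # e.g. [("p", 0.0, 0.1), ("aa", 0.1, 0.3)]

def to_visemes(alignment):
    """Language-independent step: map aligned phonemes to visemes."""
    return [(PHONEME_TO_VISEME.get(ph, "neutral"), start, end)
            for ph, start, end in alignment]

frames = [("p", 0.0, 0.1), ("aa", 0.1, 0.3), ("iy", 0.3, 0.4)]
visemes = to_visemes(align(frames))  # timed visemes drive the animation
```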
  • Publication number: 20020152068
    Abstract: Bootstrapping of a system from one language to another often works well when the two languages share a similar acoustic space. However, when the new language has sounds that do not occur in the language from which the bootstrapping is to be done, bootstrapping does not produce good initial models and the new language data is not properly aligned to these models. The present invention provides techniques to generate context dependent labeling of the new language data using the recognition system of another language. Then, this labeled data is used to generate models for the new language phones.
    Type: Application
    Filed: February 22, 2001
    Publication date: October 17, 2002
    Applicant: International Business Machines Corporation
    Inventors: Chalapathy Venkata Neti, Nitendra Rajput, L. Venkata Subramaniam, Ashish Verma
  • Patent number: 6366885
    Abstract: A method of speech driven lip synthesis which applies viseme based training models to units of visual speech. The audio data is grouped into a smaller number of visually distinct visemes rather than the larger number of phonemes. These visemes then form the basis for a Hidden Markov Model (HMM) state sequence or the output nodes of a neural network. During the training phase, audio and visual features are extracted from input speech, which is then aligned according to the apparent viseme sequence with the corresponding audio features being used to calculate the HMM state output probabilities or the output of the neural network. During the synthesis phase, the acoustic input is aligned with the most likely viseme HMM sequence (in the case of an HMM based model) or with the nodes of the network (in the case of a neural network based system), which is then used for animation.
    Type: Grant
    Filed: August 27, 1999
    Date of Patent: April 2, 2002
    Assignee: International Business Machines Corporation
    Inventors: Sankar Basu, Tanveer Afzal Faruquie, Chalapathy V. Neti, Nitendra Rajput, Andrew William Senior, L. Venkata Subramaniam, Ashish Verma
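A minimal sketch of the viseme-grouping idea in this abstract: many phonemes collapse into fewer visually distinct visemes, and per-viseme statistics (standing in here for HMM state output probabilities) are estimated from audio features grouped by viseme. The phoneme table, 1-D features, and single-Gaussian scoring are illustrative assumptions, not the patented model.

```python
# Sketch: group audio features by viseme, fit a 1-D Gaussian per
# viseme, then pick the most likely viseme for a new acoustic frame.
import math
from collections import defaultdict

PHONEME_TO_VISEME = {"p": "bilabial", "b": "bilabial", "m": "bilabial",
                     "f": "labiodental", "v": "labiodental"}

def train(frames):
    """Estimate (mean, variance) per viseme from (phoneme, feature) pairs."""
    grouped = defaultdict(list)
    for phoneme, feature in frames:
        grouped[PHONEME_TO_VISEME[phoneme]].append(feature)
    models = {}
    for viseme, feats in grouped.items():
        mean = sum(feats) / len(feats)
        var = sum((f - mean) ** 2 for f in feats) / len(feats) or 1e-6
        models[viseme] = (mean, var)
    return models

def most_likely_viseme(models, feature):
    """Score each viseme's Gaussian and return the best (argmax log-likelihood)."""
    def log_lik(mean, var):
        return -0.5 * (math.log(2 * math.pi * var)
                       + (feature - mean) ** 2 / var)
    return max(models, key=lambda vis: log_lik(*models[vis]))

train_frames = [("p", 1.0), ("b", 1.2), ("m", 0.8),
                ("f", 3.0), ("v", 3.2)]
models = train(train_frames)
```

In a full system each viseme would carry an HMM rather than a single Gaussian, and synthesis would align the whole acoustic input to a viseme sequence rather than scoring frames independently.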