Patents by Inventor David M. Lubensky

David M. Lubensky has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8346556
    Abstract: Systems and methods are provided to automatically determine culture-based behavioral tendencies and preferences of individuals in the context of customer service interactions. For example, systems and methods are provided to process natural language dialog input of an individual to detect linguistic features indicative of individualistic and collectivistic behavioral tendencies and predict whether such individual will be cooperative or uncooperative with automated customer service.
    Type: Grant
    Filed: August 22, 2008
    Date of Patent: January 1, 2013
    Assignee: International Business Machines Corporation
    Inventors: Osamuyimen T. Stewart, David M. Lubensky, Joyram Chakraborty
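
The abstract above does not disclose which linguistic features are used, so the following sketch is purely illustrative: it treats first-person singular versus plural pronoun counts as a hypothetical proxy for individualistic versus collectivistic cues and maps the result to a cooperative/uncooperative prediction.

```python
# Illustrative sketch only: the patent abstract does not disclose the exact
# linguistic features, so pronoun counts stand in as a hypothetical proxy for
# individualistic vs. collectivistic behavioral cues.
import re

INDIVIDUALIST_CUES = {"i", "me", "my", "mine", "myself"}
COLLECTIVIST_CUES = {"we", "us", "our", "ours", "together", "family"}

def predict_cooperation(utterance: str) -> str:
    """Classify a caller utterance as likely cooperative or uncooperative."""
    tokens = re.findall(r"[a-z']+", utterance.lower())
    individualist = sum(t in INDIVIDUALIST_CUES for t in tokens)
    collectivist = sum(t in COLLECTIVIST_CUES for t in tokens)
    # Hypothetical rule: collectivistic phrasing is taken as a signal that the
    # caller will cooperate with the automated flow; ties default to cooperative.
    return "cooperative" if collectivist >= individualist else "uncooperative"

print(predict_cooperation("We just want our account fixed"))   # cooperative
print(predict_cooperation("I demand to speak to a manager"))   # uncooperative
```
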
  • Publication number: 20120310629
    Abstract: Systems and methods are provided to automatically determine culture-based behavioral tendencies and preferences of individuals in the context of customer service interactions. For example, systems and methods are provided to process natural language dialog input of an individual to detect linguistic features indicative of individualistic and collectivistic behavioral tendencies and predict whether such individual will be cooperative or uncooperative with automated customer service.
    Type: Application
    Filed: August 10, 2012
    Publication date: December 6, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Osamuyimen T. Stewart, David M. Lubensky, Joyram Chakraborty
  • Patent number: 8326622
    Abstract: The invention discloses a system and method for filling out a form from a dialog between a caller and a call center agent. The caller and the call center agent can have the dialog in the form of a telephone conversation, instant messaging chat, or email exchange. The system and method provide a list of named entities specific to the call center operation and use a translation and transcription miner to filter relevant elements from the dialog between the caller and the call center agent. The relevant elements filtered from the dialog are subsequently displayed on the call center agent's computer screen to fill out application forms automatically or through drag-and-drop operations by the call center agent.
    Type: Grant
    Filed: September 23, 2008
    Date of Patent: December 4, 2012
    Assignee: International Business Machines Corporation
    Inventors: Carl Joseph Kraenzel, David M. Lubensky, Baiju Dhirajlal Mandalia
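
As a rough illustration of the named-entity filtering step described above, the sketch below applies a small, assumed set of call-center-specific patterns to a caller/agent dialog and collects the matches as candidate form-field values; the field names and regular expressions are invented for the example.

```python
# Hypothetical sketch of the named-entity filtering step: a minimal, assumed
# pattern set is applied to the dialog and matches become candidate form values.
import re

ENTITY_PATTERNS = {                      # invented field names and patterns
    "phone": r"\b\d{3}-\d{3}-\d{4}\b",
    "account_number": r"\bACCT-\d{6}\b",
    "zip_code": r"\b\d{5}\b",
}

def extract_form_fields(dialog: list[str]) -> dict[str, str]:
    """Return candidate form values mined from a caller/agent dialog."""
    fields: dict[str, str] = {}
    for turn in dialog:
        for field, pattern in ENTITY_PATTERNS.items():
            match = re.search(pattern, turn)
            if match and field not in fields:
                fields[field] = match.group(0)
    return fields

dialog = ["Caller: my account is ACCT-123456",
          "Agent: and a callback number?",
          "Caller: 914-555-0199, zip 10598"]
print(extract_form_fields(dialog))
```
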
  • Publication number: 20120136646
    Abstract: A method, computer system, and computer program product for translating information. The computer system receives the information for a translation. The computer system identifies portions of the information based on a set of rules for security for the information in response to receiving the information. The computer system sends the portions of the information to a plurality of translation systems. In response to receiving translation results from the plurality of translation systems for respective portions of the information, the computer system combines the translation results for the respective portions to form a consolidated translation of the information.
    Type: Application
    Filed: November 30, 2010
    Publication date: May 31, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Carl J. Kraenzel, David M. Lubensky, Baiju Dhirajlal Mandalia, Cheng Wu
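
A minimal sketch of the partition-translate-recombine flow described above, assuming a toy sentence-level security rule and two stand-in translator callables; a real deployment would route the portions to distinct translation services.

```python
# Minimal sketch of the partition-translate-recombine flow. The security rule,
# the two translator callables, and their behavior are all assumed for
# illustration only.
from typing import Callable

def split_by_rule(text: str, is_sensitive: Callable[[str], bool]) -> list[tuple[str, bool]]:
    """Split text into sentences and tag each as sensitive or not."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return [(s, is_sensitive(s)) for s in sentences]

def translate_document(text: str,
                       trusted_translate: Callable[[str], str],
                       external_translate: Callable[[str], str]) -> str:
    """Route each portion to a translator allowed by the rule, then recombine
    the translation results in the original order."""
    parts = []
    for sentence, sensitive in split_by_rule(text, lambda s: "account" in s.lower()):
        translator = trusted_translate if sensitive else external_translate
        parts.append(translator(sentence))
    return ". ".join(parts) + "."

# Stand-in translators so the sketch runs end to end.
doc = "The branch opens at nine. Your account balance is 500 dollars."
print(translate_document(doc, lambda s: f"[internal] {s}", lambda s: f"[external] {s}"))
```
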
  • Publication number: 20110069822
    Abstract: A call routing system is created by receiving a set of initial target classes and a corresponding set of topic descriptions. Non-overlapping semantic tokens in the set of topic descriptions are identified. A set of clear target classes is identified from the non-overlapping semantic tokens and the initial target classes. Overlapping semantic tokens from the set of topic descriptions are identified. A set of vague target classes is identified from the overlapping semantic tokens and the initial target classes. A set of disambiguation dialogues and a set of grammar prompts are generated according to the overlapping and non-overlapping semantic tokens. The call routing system is then created based on the set of clear target classes, the set of vague target classes, and the set of disambiguation dialogues.
    Type: Application
    Filed: September 24, 2009
    Publication date: March 24, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ea-Ee Jan, Hong-Kwang Jeff Kuo, David M. Lubensky
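
The overlap analysis described above can be illustrated with a short sketch: tokens shared by more than one topic description are treated as overlapping, the classes they touch become vague, and a disambiguation prompt is generated for each overlapping token. The topic names, descriptions, and stopword list are invented for the example.

```python
# Sketch of the token-overlap analysis: overlapping tokens make their classes
# vague and trigger a generated disambiguation prompt.
from collections import defaultdict

topics = {                                   # invented topic descriptions
    "billing": "questions about bill payment and charges",
    "tech_support": "internet connection and modem problems",
    "cancellation": "cancel service or stop payment",
}
STOPWORDS = {"and", "or", "about", "the", "a"}

token_to_classes = defaultdict(set)
for cls, description in topics.items():
    for token in set(description.split()) - STOPWORDS:
        token_to_classes[token].add(cls)

overlapping = {t: c for t, c in token_to_classes.items() if len(c) > 1}
vague_classes = set().union(*overlapping.values()) if overlapping else set()
clear_classes = set(topics) - vague_classes

disambiguation_prompts = {
    token: f"Did you mean {' or '.join(sorted(classes))}?"
    for token, classes in overlapping.items()
}
print(clear_classes, vague_classes, disambiguation_prompts)
```
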
  • Publication number: 20100076760
    Abstract: The invention discloses a system and method for filling out a form from a dialog between a caller and a call center agent. The caller and the call center agent can have the dialog in the form of a telephone conversation, instant messaging chat, or email exchange. The system and method provide a list of named entities specific to the call center operation and use a translation and transcription miner to filter relevant elements from the dialog between the caller and the call center agent. The relevant elements filtered from the dialog are subsequently displayed on the call center agent's computer screen to fill out application forms automatically or through drag-and-drop operations by the call center agent.
    Type: Application
    Filed: September 23, 2008
    Publication date: March 25, 2010
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Carl Joseph Kraenzel, Baiju Dhirajlal Mandalia, David M. Lubensky
  • Patent number: 7680661
    Abstract: A method for speech recognition includes: prompting a user with a first query to input speech into a speech recognition engine; determining if the inputted speech is correctly recognized; wherein in the event the inputted speech is correctly recognized proceeding to a new task; wherein in the event the inputted speech is not correctly recognized, prompting the user repeatedly with the first query to input speech into the speech recognition engine, and determining if the inputted speech is correctly recognized until a predefined limit on repetitions has been met; wherein in the event the predefined limit has been met without correctly recognizing the inputted user speech, prompting speech input from the user with a secondary query for redundant information; and cross-referencing the user's n-best result from the first query with the n-best result from the second query to obtain a top hypothesis.
    Type: Grant
    Filed: May 14, 2008
    Date of Patent: March 16, 2010
    Assignee: Nuance Communications, Inc.
    Inventors: Raymond L. Co, Ea-Ee Jan, David M. Lubensky
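
The final cross-referencing step lends itself to a small sketch: after the retry limit is reached, the n-best list from the first query is rescored against the n-best list from the redundant second query, and the best mutually consistent pair determines the top hypothesis. The scores and the city-to-ZIP consistency table below are invented for illustration.

```python
# Sketch of the n-best cross-referencing step with invented scores and an
# invented consistency table linking first-query and second-query hypotheses.
def cross_reference(nbest_first: list[tuple[str, float]],
                    nbest_second: list[tuple[str, float]],
                    consistent: dict[str, set[str]]) -> str | None:
    """Return the first-query hypothesis best supported by the second query."""
    best, best_score = None, float("-inf")
    for hyp1, score1 in nbest_first:
        for hyp2, score2 in nbest_second:
            if hyp2 in consistent.get(hyp1, set()) and score1 + score2 > best_score:
                best, best_score = hyp1, score1 + score2
    return best

# First query: city name; second (redundant) query: ZIP code.
nbest_city = [("Austin", -1.2), ("Boston", -1.5)]
nbest_zip = [("02134", -0.9), ("78701", -1.1)]
zip_for_city = {"Austin": {"78701"}, "Boston": {"02134"}}
print(cross_reference(nbest_city, nbest_zip, zip_for_city))  # Austin
```
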
  • Publication number: 20100049520
    Abstract: Systems and methods are provided to automatically determine culture-based behavioral tendencies and preferences of individuals in the context of customer service interactions. For example, systems and methods are provided to process natural language dialog input of an individual to detect linguistic features indicative of individualistic and collectivistic behavioral tendencies and predict whether such individual will be cooperative or uncooperative with automated customer service.
    Type: Application
    Filed: August 22, 2008
    Publication date: February 25, 2010
    Inventors: Osamuyimen T. Stewart, David M. Lubensky, Joyram Chakraborty
  • Patent number: 7624014
    Abstract: A method, system and computer readable device for recognizing a partial utterance in an automatic speech recognition (ASR) system, where said method comprises the steps of: receiving, by an ASR recognition unit, an input signal representing a speech utterance or word and transcribing the input signal into text; interpreting, by an ASR interpreter unit, whether the text is either a positive or a negative match to a list of automated options by matching the text with a grammar or semantic database representing the list of automated options, wherein if the ASR interpreter unit results in said positive match, proceeding to a next input signal, and if the ASR interpreter unit results in said negative match, rejecting the text as representing said partial utterance; and processing, by a linguistic filtering unit, the rejected text to derive a correct match between the rejected text and the grammar or semantic database.
    Type: Grant
    Filed: September 8, 2008
    Date of Patent: November 24, 2009
    Assignee: Nuance Communications, Inc.
    Inventors: Osamuyimen T. Stewart, David M. Lubensky
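
A hedged sketch of the reject-then-filter flow: a transcription that fails an exact grammar match is treated as a partial utterance, and a simple word-overlap heuristic stands in for the linguistic filtering unit; the grammar options are invented.

```python
# Hedged sketch: exact grammar match first; on rejection, a word-overlap
# heuristic (a stand-in for the linguistic filter) maps the partial utterance
# to the closest automated option.
GRAMMAR = {"checking account balance", "savings account balance", "transfer funds"}

def interpret(transcript: str) -> str | None:
    """Return the grammar option the (possibly partial) transcript maps to."""
    if transcript in GRAMMAR:                      # positive match
        return transcript
    # Negative match: linguistic filtering on the rejected, partial text.
    words = set(transcript.split())
    scored = [(len(words & set(option.split())), option) for option in GRAMMAR]
    overlap, best = max(scored)
    return best if overlap > 0 else None

print(interpret("savings balance"))   # savings account balance
```
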
  • Publication number: 20090287483
    Abstract: A method for speech recognition includes: prompting a user with a first query to input speech into a speech recognition engine; determining if the inputted speech is correctly recognized; wherein in the event the inputted speech is correctly recognized proceeding to a new task; wherein in the event the inputted speech is not correctly recognized, prompting the user repeatedly with the first query to input speech into the speech recognition engine, and determining if the inputted speech is correctly recognized until a predefined limit on repetitions has been met; wherein in the event the predefined limit has been met without correctly recognizing the inputted user speech, prompting speech input from the user with a secondary query for redundant information; and cross-referencing the user's n-best result from the first query with the n-best result from the second query to obtain a top hypothesis.
    Type: Application
    Filed: May 14, 2008
    Publication date: November 19, 2009
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Raymond L. Co, Ea-Ee Jan, David M. Lubensky
  • Patent number: 7558734
    Abstract: In one example, this invention presents a method of providing the same self-service content that is available on the web interface to users who make contact by telephone, recognizing that the web and the telephone are fundamentally different user interfaces. In one embodiment, this patent seeks to protect the general idea of how to play back web data in real time to the user over the speech interface. For this purpose, a method is presented comprising the general steps through which the web data is initially sent to an automatic transformation module. That transformation module then refines or restructures the web data to make it suitable for the speech interface. The algorithm in the module is predicated on the user interface principles of cognitive complexity and limitations on short-term memory, based on which FAQ types are classified into one of the following four classes: simple, medium, complex, and complex-complex.
    Type: Grant
    Filed: November 16, 2008
    Date of Patent: July 7, 2009
    Assignee: International Business Machines Corporation
    Inventors: Osamuyimen T. Stewart, David M. Lubensky, Ea-Ee Jan, Xiang Li
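
The abstract names the four FAQ classes but not the classification criteria, so the sketch below uses assumed word-count and list-structure thresholds as stand-ins for the cognitive-complexity and short-term-memory limits, and then chunks longer answers into short prompts for audio playback.

```python
# Sketch of the FAQ classification and restructuring steps; the thresholds and
# list-detection rule are assumptions, not the patent's actual criteria.
def classify_faq(answer: str) -> str:
    words = len(answer.split())
    has_list = "\n-" in answer or ";" in answer
    if words <= 30 and not has_list:
        return "simple"
    if words <= 60 and not has_list:
        return "medium"
    if words <= 120:
        return "complex"
    return "complex-complex"

def to_speech_chunks(answer: str, cls: str) -> list[str]:
    """Restructure web text into short prompts suitable for audio playback."""
    if cls == "simple":
        return [answer]
    # Longer answers are chunked so each prompt fits within short-term memory.
    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    return [s + "." for s in sentences]

answer = ("Reset your password from the login page. "
          "A link arrives by email; follow it within 24 hours.")
print(classify_faq(answer))
print(to_speech_chunks(answer, classify_faq(answer)))
```
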
  • Publication number: 20090157405
    Abstract: A method, system and computer readable device for recognizing a partial utterance in an automatic speech recognition (ASR) system, where said method comprises the steps of: receiving, by an ASR recognition unit, an input signal representing a speech utterance or word and transcribing the input signal into text; interpreting, by an ASR interpreter unit, whether the text is either a positive or a negative match to a list of automated options by matching the text with a grammar or semantic database representing the list of automated options, wherein if the ASR interpreter unit results in said positive match, proceeding to a next input signal, and if the ASR interpreter unit results in said negative match, rejecting the text as representing said partial utterance; and processing, by a linguistic filtering unit, the rejected text to derive a correct match between the rejected text and the grammar or semantic database.
    Type: Application
    Filed: September 8, 2008
    Publication date: June 18, 2009
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Osamuyimen T. Stewart, David M. Lubensky
  • Patent number: 7487084
    Abstract: A testing arrangement is provided for speech recognition systems in vehicles. Preferably included are a “mobile client” secured in the vehicle and driven around at a desired speed; an audio system and speaker that play back a set of prerecorded utterances stored digitally in a computer arrangement such that the speech of a human being is simulated; transmission of the speech signal to a server; and subsequent speech recognition and signal-to-noise ratio (SNR) computation. Here, the acceptability of the vehicular speech recognition system is preferably determined via comparison with pre-specified standards of recognition accuracy and SNR values.
    Type: Grant
    Filed: July 31, 2002
    Date of Patent: February 3, 2009
    Assignee: International Business Machines Corporation
    Inventors: Andrew Aaron, Subrata K. Das, David M. Lubensky
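
The final acceptability check reduces to comparing measured recognition accuracy and SNR against pre-specified standards, as in the sketch below; the threshold values are illustrative rather than taken from the patent.

```python
# Sketch of the pass/fail check at the end of a test drive: accuracy and SNR
# are compared against pre-specified standards (thresholds are assumptions).
import math

def snr_db(signal_power: float, noise_power: float) -> float:
    return 10 * math.log10(signal_power / noise_power)

def evaluate_run(correct: int, total: int, signal_power: float, noise_power: float,
                 min_accuracy: float = 0.90, min_snr_db: float = 15.0) -> bool:
    """Return True if the in-vehicle recognizer meets both standards."""
    accuracy = correct / total
    return accuracy >= min_accuracy and snr_db(signal_power, noise_power) >= min_snr_db

print(evaluate_run(correct=47, total=50, signal_power=2.0, noise_power=0.05))  # True
```
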
  • Patent number: 7467090
    Abstract: In one example, this invention presents a method of providing the same self-service content that is available on the web interface to users who make contact by telephone, recognizing that the web and the telephone are fundamentally different user interfaces. In one embodiment, this patent seeks to protect the general idea of how to play back web data in real time to the user over the speech interface. For this purpose, a method is presented comprising the general steps through which the web data is initially sent to an automatic transformation module. That transformation module then refines or restructures the web data to make it suitable for the speech interface. The algorithm in the module is predicated on the user interface principles of cognitive complexity and limitations on short-term memory, based on which FAQ types are classified into one of the following four classes: simple, medium, complex, and complex-complex.
    Type: Grant
    Filed: February 27, 2008
    Date of Patent: December 16, 2008
    Assignee: International Business Machines Corporation
    Inventors: Osamuyimen T. Stewart, David M. Lubensky, Ea-Ee Jan, Xiang Li
  • Patent number: 7437707
    Abstract: Systems, methods and tools are provided for generating applications that are automatically optimized for efficient deployment in a computing environment based on parameterized criteria. In particular, systems, methods and tools for generating network applications are provided, which automatically partition a functional description of a network application into a set of application modules (e.g., pages) according to parameterized criteria that optimize the network application for efficient network performance by minimizing application latency.
    Type: Grant
    Filed: December 12, 2003
    Date of Patent: October 14, 2008
    Assignee: International Business Machines Corporation
    Inventors: Juan M. Huerta, David M. Lubensky, Chaitanya J. K. Ekanadham
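
As a toy illustration of parameterized partitioning, the sketch below greedily packs functional steps into pages under a cost budget, a crude stand-in for the latency-minimizing criteria the abstract describes; the step names and costs are invented.

```python
# Toy sketch of parameterized partitioning: group functional steps into pages
# so no page exceeds a size budget (a crude proxy for latency minimization).
def partition_into_pages(steps: list[tuple[str, int]], max_page_cost: int) -> list[list[str]]:
    """Greedily pack steps (name, cost) into pages under a cost budget."""
    pages: list[list[str]] = [[]]
    current_cost = 0
    for name, cost in steps:
        if current_cost + cost > max_page_cost and pages[-1]:
            pages.append([])
            current_cost = 0
        pages[-1].append(name)
        current_cost += cost
    return pages

steps = [("greeting", 2), ("collect_id", 5), ("verify", 4), ("menu", 3), ("payment", 6)]
print(partition_into_pages(steps, max_page_cost=8))
# [['greeting', 'collect_id'], ['verify', 'menu'], ['payment']]
```
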
  • Patent number: 7437291
    Abstract: A method, system and computer readable device for recognizing a partial utterance in an automatic speech recognition (ASR) system, where said method comprises the steps of: receiving, by an ASR recognition unit, an input signal representing a speech utterance or word and transcribing the input signal into text; interpreting, by an ASR interpreter unit, whether the text is either a positive or a negative match to a list of automated options by matching the text with a grammar or semantic database representing the list of automated options, wherein if the ASR interpreter unit results in said positive match, proceeding to a next input signal, and if the ASR interpreter unit results in said negative match, rejecting the text as representing said partial utterance; and processing, by a linguistic filtering unit, the rejected text to derive a correct match between the rejected text and the grammar or semantic database.
    Type: Grant
    Filed: December 13, 2007
    Date of Patent: October 14, 2008
    Assignee: International Business Machines Corporation
    Inventors: Osamuyimen T. Stewart, David M. Lubensky
  • Patent number: 7269555
    Abstract: In a speech recognition system, a method of transforming speech feature vectors associated with speech data provided to the speech recognition system includes the steps of receiving likelihood of utterance information corresponding to a previous feature vector transformation, estimating one or more transformation parameters based, at least in part, on the likelihood of utterance information corresponding to a previous feature vector transformation, and transforming a current feature vector based on maximum likelihood criteria and/or the estimated transformation parameters, the transformation being performed in a linear spectral domain. The step of estimating the one or more transformation parameters includes the step of estimating convolutional noise and additive noise for each ith component of a speech vector corresponding to the speech data provided to the speech recognition system.
    Type: Grant
    Filed: August 30, 2005
    Date of Patent: September 11, 2007
    Assignee: International Business Machines Corporation
    Inventors: Dongsuk Yuk, David M. Lubensky
  • Patent number: 6999926
    Abstract: A maximum likelihood spectral transformation (MLST) technique is proposed for rapid speech recognition under mismatched training and testing conditions. Speech feature vectors of real-time utterances are transformed in a linear spectral domain such that a likelihood of the utterances is increased after the transformation. Cepstral vectors are computed from the transformed spectra. The MLST function used for the spectral transformation is configured to handle both convolutional and additive noise. Since the function has a small number of parameters to be estimated, only a few utterances are required for accurate adaptation, thus essentially eliminating the need for training speech data. Furthermore, the computation for parameter estimation and spectral transformation can be done efficiently in linear time. Therefore, the techniques of the present invention are well-suited for rapid online adaptation.
    Type: Grant
    Filed: July 23, 2001
    Date of Patent: February 14, 2006
    Assignee: International Business Machines Corporation
    Inventors: Dongsuk Yuk, David M. Lubensky
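
A hedged sketch of the spectral-domain transform described in the two MLST entries above: each linear-spectral component is scaled by an estimated convolutional-noise term and shifted by an estimated additive-noise term, and cepstra are then computed from the transformed spectrum. The maximum-likelihood parameter estimation itself is not reproduced; fixed example values stand in.

```python
# Hedged sketch: per-component linear-spectral transform followed by cepstrum
# computation. The noise parameters here are fixed example values, not the
# maximum-likelihood estimates the patents describe.
import numpy as np

def transform_spectrum(spectrum: np.ndarray,
                       conv_noise: np.ndarray,
                       add_noise: np.ndarray) -> np.ndarray:
    """Apply the assumed per-component transform in the linear spectral domain."""
    return conv_noise * spectrum + add_noise

def cepstrum_from_spectrum(spectrum: np.ndarray, n_coeffs: int = 13) -> np.ndarray:
    """Log-compress and DCT the spectrum to obtain cepstral coefficients."""
    log_spec = np.log(np.maximum(spectrum, 1e-10))
    n = log_spec.size
    basis = np.cos(np.pi * np.outer(np.arange(n_coeffs), np.arange(n) + 0.5) / n)
    return basis @ log_spec

spec = np.abs(np.random.default_rng(0).normal(size=24)) + 1.0
transformed = transform_spectrum(spec, conv_noise=np.full(24, 0.8), add_noise=np.full(24, 0.1))
print(cepstrum_from_spectrum(transformed)[:3])
```
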
  • Patent number: 6801604
    Abstract: Systems and methods are provided for conversational computing and, in particular, for building distributed conversational applications using a Web services-based model, wherein speech engines (e.g., speech recognition) and audio I/O systems are programmable services that can be asynchronously programmed by an application using a standard, extensible SERCP (speech engine remote control protocol), thereby providing scalable and flexible IP-based architectures that enable deployment of the same application or application development environment across a wide range of voice processing platforms and networks/gateways (e.g., PSTN (public switched telephone network), wireless, Internet, and VoIP (voice over IP)). Systems and methods are further provided for dynamically allocating, assigning, configuring and controlling speech resources such as speech engines, speech pre/post processing systems, audio subsystems, and exchanges between speech engines using SERCP in a web service-based framework.
    Type: Grant
    Filed: June 25, 2002
    Date of Patent: October 5, 2004
    Assignee: International Business Machines Corporation
    Inventors: Stephane H. Maes, David M. Lubensky, Andrzej Sakrajda
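
SERCP is described only at the protocol-concept level above, so the following sketch merely illustrates the idea of asynchronously programming remote speech-engine services from an application; the message fields, action verbs, and endpoints are all invented.

```python
# Illustrative sketch only: no SERCP message schema is quoted in the abstract,
# so this invented dataclass and dispatcher just show an application
# asynchronously controlling remote speech-engine services.
import asyncio
from dataclasses import dataclass

@dataclass
class EngineControlMessage:
    engine_url: str       # e.g. a recognizer or TTS service endpoint (assumed)
    action: str           # "configure", "start", "stop" (assumed verbs)
    parameters: dict

async def send_control(message: EngineControlMessage) -> str:
    # A real implementation would send this to the engine service over the
    # network; here the asynchronous round trip is only simulated.
    await asyncio.sleep(0.01)
    return f"{message.engine_url}: {message.action} acknowledged"

async def main() -> None:
    messages = [
        EngineControlMessage("http://asr.example/engine", "configure", {"grammar": "banking"}),
        EngineControlMessage("http://tts.example/engine", "start", {"voice": "en-US"}),
    ]
    for ack in await asyncio.gather(*(send_control(m) for m in messages)):
        print(ack)

asyncio.run(main())
```
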
  • Publication number: 20030236672
    Abstract: A testing arrangement is provided for speech recognition systems in vehicles. Preferably included are a “mobile client” secured in the vehicle and driven around at a desired speed; an audio system and speaker that play back a set of prerecorded utterances stored digitally in a computer arrangement such that the speech of a human being is simulated; transmission of the speech signal to a server; and subsequent speech recognition and signal-to-noise ratio (SNR) computation. Here, the acceptability of the vehicular speech recognition system is preferably determined via comparison with pre-specified standards of recognition accuracy and SNR values.
    Type: Application
    Filed: July 31, 2002
    Publication date: December 25, 2003
    Applicant: IBM Corporation
    Inventors: Andrew Aaron, Subrata K. Das, David M. Lubensky