Patents by Inventor Michael A. Zaitzeff

Michael A. Zaitzeff is named as an inventor on the patent filings listed below. The listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8965772
    Abstract: Methods, systems, and products are disclosed for displaying speech command input state information in a multimodal browser including displaying an icon representing a speech command type and displaying an icon representing the input state of the speech command. In typical embodiments, the icon representing a speech command type and the icon representing the input state of the speech command also includes attributes of a single icon. Typical embodiments include accepting from a user a speech command of the speech command type, changing the input state of the speech command, and displaying another icon representing the changed input state of the speech command. Typical embodiments also include displaying the text of the speech command in association with the icon representing the speech command type.
    Type: Grant
    Filed: March 20, 2014
    Date of Patent: February 24, 2015
    Assignee: Nuance Communications, Inc.
    Inventors: Charles W. Cross, Jr., Michael C. Hollinger, Igor R. Jablokov, Benjamin D. Lewis, Hilary A. Pike, Daniel M. Smith, David W. Wintermute, Michael A. Zaitzeff
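The icon-based input-state display described in the abstract above can be sketched in Python. The class name, state names, and icon filenames below are illustrative assumptions, not taken from the patent itself.

```python
# Minimal sketch (assumed names) of the technique in the abstract: each
# speech command type has an icon, and a second icon reflects the
# command's current input state; accepting speech changes the state and
# the displayed icon, and the recognized text is shown alongside.

class SpeechCommandDisplay:
    """Tracks a speech command's input state and the icons shown for it."""

    # Hypothetical mapping of input states to state icons.
    STATE_ICONS = {"listening": "mic-open.png",
                   "filled": "mic-check.png",
                   "error": "mic-error.png"}

    def __init__(self, command_type, type_icon):
        self.command_type = command_type  # e.g. "city-of-departure"
        self.type_icon = type_icon        # icon representing the command type
        self.state = "listening"
        self.text = None

    def accept_speech(self, text):
        """Accept a speech command of this type and change its input state."""
        self.text = text
        self.state = "filled"

    def render(self):
        """Return the type icon, state icon, and text the browser would show."""
        return (self.type_icon, self.STATE_ICONS[self.state], self.text)
```

In this sketch the two icons are returned side by side; the abstract's variant where both are attributes of a single icon would merge them into one composite.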
  • Publication number: 20140208210
    Abstract: Methods, systems, and products are disclosed for displaying speech command input state information in a multimodal browser including displaying an icon representing a speech command type and displaying an icon representing the input state of the speech command. In typical embodiments, the icon representing a speech command type and the icon representing the input state of the speech command also includes attributes of a single icon. Typical embodiments include accepting from a user a speech command of the speech command type, changing the input state of the speech command, and displaying another icon representing the changed input state of the speech command. Typical embodiments also include displaying the text of the speech command in association with the icon representing the speech command type.
    Type: Application
    Filed: March 20, 2014
    Publication date: July 24, 2014
    Applicant: Nuance Communications, Inc.
    Inventors: Charles W. Cross, Jr., Michael C. Hollinger, Igor R. Jablokov, Benjamin D. Lewis, Hilary A. Pike, Daniel M. Smith, David W. Wintermute, Michael A. Zaitzeff
  • Patent number: 8719034
    Abstract: Methods, systems, and products are disclosed for displaying speech command input state information in a multimodal browser including displaying an icon representing a speech command type and displaying an icon representing the input state of the speech command. In typical embodiments, the icon representing a speech command type and the icon representing the input state of the speech command also includes attributes of a single icon. Typical embodiments include accepting from a user a speech command of the speech command type, changing the input state of the speech command, and displaying another icon representing the changed input state of the speech command. Typical embodiments also include displaying the text of the speech command in association with the icon representing the speech command type.
    Type: Grant
    Filed: September 13, 2005
    Date of Patent: May 6, 2014
    Assignee: Nuance Communications, Inc.
    Inventors: Charles W. Cross, Jr., Michael Charles Hollinger, Igor R. Jablokov, Benjamin D. Lewis, Hilary A. Pike, Daniel M. Smith, David W. Wintermute, Michael A. Zaitzeff
  • Patent number: 8571872
    Abstract: Exemplary methods, systems, and products are disclosed for synchronizing visual and speech events in a multimodal application, including receiving from a user speech; determining a semantic interpretation of the speech; calling a global application update handler; identifying, by the global application update handler, an additional processing function in dependence upon the semantic interpretation; and executing the additional function. Typical embodiments may include updating a visual element after executing the additional function. Typical embodiments may include updating a voice form after executing the additional function. Typical embodiments also may include updating a state table after updating the voice form. Typical embodiments also may include restarting the voice form after executing the additional function.
    Type: Grant
    Filed: September 30, 2011
    Date of Patent: October 29, 2013
    Assignee: Nuance Communications, Inc.
    Inventors: Charles W. Cross, Jr., Michael C. Hollinger, Igor R. Jablokov, Benjamin D. Lewis, Hilary A. Pike, Daniel M. Smith, David W. Wintermute, Michael A. Zaitzeff
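The global-update-handler flow in the abstract above can be sketched as follows; the interpretation logic, handler registry, and update steps are assumed illustrations of the described sequence, not the patent's implementation.

```python
# Hedged sketch: a semantic interpretation of recognized speech selects
# an additional processing function via a global application update
# handler; afterwards a visual element is updated and the voice form is
# restarted, keeping the visual and speech sides in sync.

def interpret(speech):
    # Stand-in for semantic interpretation (rule is illustrative).
    return "order-drink" if "coffee" in speech else "unknown"

class GlobalUpdateHandler:
    def __init__(self):
        self.handlers = {}  # semantic interpretation -> additional function
        self.visual = {}    # visual elements updated after handling
        self.log = []

    def register(self, interpretation, fn):
        self.handlers[interpretation] = fn

    def handle(self, speech):
        interpretation = interpret(speech)       # determine interpretation
        fn = self.handlers.get(interpretation)   # identify additional function
        if fn:
            fn(self)                             # execute the function
        self.visual["last"] = interpretation     # update a visual element
        self.log.append("voice form restarted")  # restart the voice form
        return interpretation
```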
  • Publication number: 20120022875
    Abstract: Exemplary methods, systems, and products are disclosed for synchronizing visual and speech events in a multimodal application, including receiving from a user speech; determining a semantic interpretation of the speech; calling a global application update handler; identifying, by the global application update handler, an additional processing function in dependence upon the semantic interpretation; and executing the additional function. Typical embodiments may include updating a visual element after executing the additional function. Typical embodiments may include updating a voice form after executing the additional function. Typical embodiments also may include updating a state table after updating the voice form. Typical embodiments also may include restarting the voice form after executing the additional function.
    Type: Application
    Filed: September 30, 2011
    Publication date: January 26, 2012
    Applicant: Nuance Communications, Inc.
    Inventors: Charles W. Cross, Jr., Michael C. Hollinger, Igor R. Jablokov, Benjamin D. Lewis, Hilary A. Pike, Daniel M. Smith, David W. Wintermute, Michael A. Zaitzeff
  • Patent number: 8090584
    Abstract: Methods, systems, and computer program products are provided for modifying a grammar of a hierarchical multimodal menu that include monitoring a user invoking a speech command in a first tier grammar, and adding the speech command to a second tier grammar in dependence upon the frequency of the user invoking the speech command. Adding the speech command to a second tier grammar may be carried out by adding the speech command to a higher tier grammar or by adding the speech command to a lower tier grammar. Adding the speech command to a second tier grammar may include storing the speech command in a grammar cache in the second tier grammar.
    Type: Grant
    Filed: June 16, 2005
    Date of Patent: January 3, 2012
    Assignee: Nuance Communications, Inc.
    Inventors: Charles W. Cross, Jr., Michael C. Hollinger, Igor R. Jablokov, Benjamin D. Lewis, Hilary A. Pike, Daniel M. Smith, David W. Wintermute, Michael A. Zaitzeff
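The frequency-based grammar modification described above can be sketched as follows; the promotion threshold, the concrete commands, and the set-based grammar cache are assumptions for illustration.

```python
# Sketch of the abstract's technique: monitor how often a user invokes a
# speech command in one grammar tier, and once it is invoked frequently
# enough, store it in a grammar cache in a second tier.

from collections import Counter

class HierarchicalGrammar:
    PROMOTE_AFTER = 3  # assumed threshold: invocations before promotion

    def __init__(self):
        self.first_tier = {"open", "close", "save"}
        self.second_tier_cache = set()  # grammar cache in the second tier
        self.counts = Counter()

    def invoke(self, command):
        """Record an invocation; add the command to the second tier
        grammar cache once it has been invoked frequently enough."""
        self.counts[command] += 1
        if self.counts[command] >= self.PROMOTE_AFTER:
            self.second_tier_cache.add(command)
```

Whether the second tier sits above or below the first is a policy choice; the abstract covers both directions.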
  • Patent number: 8055504
    Abstract: Exemplary methods, systems, and products are disclosed for synchronizing visual and speech events in a multimodal application, including receiving from a user speech; determining a semantic interpretation of the speech; calling a global application update handler; identifying, by the global application update handler, an additional processing function in dependence upon the semantic interpretation; and executing the additional function. Typical embodiments may include updating a visual element after executing the additional function. Typical embodiments may include updating a voice form after executing the additional function. Typical embodiments also may include updating a state table after updating the voice form. Typical embodiments also may include restarting the voice form after executing the additional function.
    Type: Grant
    Filed: April 3, 2008
    Date of Patent: November 8, 2011
    Assignee: Nuance Communications, Inc.
    Inventors: Charles W. Cross, Michael C. Hollinger, Igor R. Jablokov, David B. Lewis, Hilary A. Pike, Daniel M. Smith, David W. Wintermute, Michael A. Zaitzeff
  • Patent number: 8032825
    Abstract: Methods, systems, and products for dynamically creating a multimodal markup document are provided that include selecting a multimodal markup template, identifying in dependence upon the multimodal markup template a dynamic content module, instantiating the dynamic content module, executing a dynamic content creation function in the instantiated dynamic content module, receiving dynamic content from the dynamic content creation function, and including the dynamic content in the multimodal markup template. Selecting a multimodal markup template may be carried out by identifying a multimodal markup template from URI encoded data embedded in a request for a multimodal markup document from a multimodal browser. The multimodal markup template may include static content and the dynamic content may include XHTML+Voice content.
    Type: Grant
    Filed: June 16, 2005
    Date of Patent: October 4, 2011
    Assignee: International Business Machines Corporation
    Inventors: Charles W. Cross, Jr., Michael C. Hollinger, Igor R. Jablokov, Benjamin D. Lewis, Hilary A. Pike, Daniel M. Smith, David W. Wintermute, Michael A. Zaitzeff
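The template-plus-dynamic-content assembly in the abstract above can be sketched as follows. The module name, template text, and query-parameter convention are illustrative assumptions; only the overall flow (template selection from URI-encoded data, module instantiation, content creation, inclusion) comes from the abstract.

```python
# Sketch: select a multimodal markup template from URI-encoded data in a
# browser request, instantiate the dynamic content module the template
# identifies, execute its creation function, and include the resulting
# XHTML+Voice content in the template's static content.

from urllib.parse import urlparse, parse_qs

class WeatherModule:
    """Hypothetical dynamic content module producing XHTML+Voice content."""
    def create_content(self):
        return "<vxml:block>Sunny, 72F</vxml:block>"

MODULES = {"weather": WeatherModule}

def build_document(request_uri):
    # Identify the template from URI-encoded data in the request.
    params = parse_qs(urlparse(request_uri).query)
    template_name = params["template"][0]
    template = "<html><!-- static content -->{dynamic}</html>"
    module = MODULES[template_name]()        # instantiate the module
    dynamic = module.create_content()        # execute the creation function
    return template.format(dynamic=dynamic)  # include the dynamic content
```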
  • Patent number: 7917365
    Abstract: Exemplary methods, systems, and products are disclosed for synchronizing visual and speech events in a multimodal application, including receiving from a user speech; determining a semantic interpretation of the speech; calling a global application update handler; identifying, by the global application update handler, an additional processing function in dependence upon the semantic interpretation; and executing the additional function. Typical embodiments may include updating a visual element after executing the additional function. Typical embodiments may include updating a voice form after executing the additional function. Typical embodiments also may include updating a state table after updating the voice form. Typical embodiments also may include restarting the voice form after executing the additional function.
    Type: Grant
    Filed: June 16, 2005
    Date of Patent: March 29, 2011
    Assignee: Nuance Communications, Inc.
    Inventors: Charles W. Cross, Jr., Michael C. Hollinger, Igor R. Jablokov, Benjamin D. Lewis, Hilary A. Pike, Daniel M. Smith, David W. Wintermute, Michael A. Zaitzeff
  • Publication number: 20080177530
    Abstract: Exemplary methods, systems, and products are disclosed for synchronizing visual and speech events in a multimodal application, including receiving from a user speech; determining a semantic interpretation of the speech; calling a global application update handler; identifying, by the global application update handler, an additional processing function in dependence upon the semantic interpretation; and executing the additional function. Typical embodiments may include updating a visual element after executing the additional function. Typical embodiments may include updating a voice form after executing the additional function. Typical embodiments also may include updating a state table after updating the voice form. Typical embodiments also may include restarting the voice form after executing the additional function.
    Type: Application
    Filed: April 3, 2008
    Publication date: July 24, 2008
    Applicant: International Business Machines Corporation
    Inventors: Charles W. Cross, Michael C. Hollinger, Igor R. Jablokov, Benjamin D. Lewis, Hilary A. Pike, Daniel M. Smith, David W. Wintermute, Michael A. Zaitzeff
  • Publication number: 20070061148
    Abstract: Methods, systems, and products are disclosed for displaying speech command input state information in a multimodal browser including displaying an icon representing a speech command type and displaying an icon representing the input state of the speech command. In typical embodiments, the icon representing a speech command type and the icon representing the input state of the speech command also includes attributes of a single icon. Typical embodiments include accepting from a user a speech command of the speech command type, changing the input state of the speech command, and displaying another icon representing the changed input state of the speech command. Typical embodiments also include displaying the text of the speech command in association with the icon representing the speech command type.
    Type: Application
    Filed: September 13, 2005
    Publication date: March 15, 2007
    Inventors: Charles Cross, Michael Hollinger, Igor Jablokov, Benjamin Lewis, Hilary Pike, Daniel Smith, David Wintermute, Michael Zaitzeff
  • Publication number: 20060287858
    Abstract: Services, systems, and computer program products are provided for modifying a grammar of a hierarchical multimodal menu that include selling to a customer a keyword, selling to a customer a location in a grammar in a hierarchical multimodal menu, and storing the keyword in the location. Storing the keyword in the location may be carried out by storing the keyword in a grammar cache in the grammar.
    Type: Application
    Filed: June 16, 2005
    Publication date: December 21, 2006
    Inventors: Charles Cross, Michael Hollinger, Igor Jablokov, Benjamin Lewis, Hilary Pike, Daniel Smith, David Wintermute, Michael Zaitzeff
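The storage step described above (a purchased keyword placed at a purchased grammar location) can be sketched minimally; the location names and set-based cache are assumptions.

```python
# Minimal sketch of the abstract's storage step: a keyword sold to a
# customer is stored in a grammar cache at the grammar location the
# customer purchased within the hierarchical multimodal menu.

class GrammarMenu:
    def __init__(self):
        # grammar location name -> grammar cache (a set of keywords)
        self.grammar_caches = {"top": set(), "travel": set()}

    def sell_keyword(self, keyword, location):
        """Store the customer's keyword in the cache at the sold location."""
        self.grammar_caches[location].add(keyword)
```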
  • Publication number: 20060287866
    Abstract: Methods, systems, and computer program products are provided for modifying a grammar of a hierarchical multimodal menu that include monitoring a user invoking a speech command in a first tier grammar, and adding the speech command to a second tier grammar in dependence upon the frequency of the user invoking the speech command. Adding the speech command to a second tier grammar may be carried out by adding the speech command to a higher tier grammar or by adding the speech command to a lower tier grammar. Adding the speech command to a second tier grammar may include storing the speech command in a grammar cache in the second tier grammar.
    Type: Application
    Filed: June 16, 2005
    Publication date: December 21, 2006
    Inventors: Charles Cross, Michael Hollinger, Igor Jablokov, Benjamin Lewis, Hilary Pike, Daniel Smith, David Wintermute, Michael Zaitzeff
  • Publication number: 20060288328
    Abstract: Methods, systems, and products for dynamically creating a multimodal markup document are provided that include selecting a multimodal markup template, identifying in dependence upon the multimodal markup template a dynamic content module, instantiating the dynamic content module, executing a dynamic content creation function in the instantiated dynamic content module, receiving dynamic content from the dynamic content creation function, and including the dynamic content in the multimodal markup template. Selecting a multimodal markup template may be carried out by identifying a multimodal markup template from URI encoded data embedded in a request for a multimodal markup document from a multimodal browser. The multimodal markup template may include static content and the dynamic content may include XHTML+Voice content.
    Type: Application
    Filed: June 16, 2005
    Publication date: December 21, 2006
    Inventors: Charles Cross, Michael Hollinger, Igor Jablokov, Benjamin Lewis, Hilary Pike, Daniel Smith, David Wintermute, Michael Zaitzeff
  • Publication number: 20060287845
    Abstract: Exemplary methods, systems, and products are disclosed for synchronizing visual and speech events in a multimodal application, including receiving from a user speech; determining a semantic interpretation of the speech; calling a global application update handler; identifying, by the global application update handler, an additional processing function in dependence upon the semantic interpretation; and executing the additional function. Typical embodiments may include updating a visual element after executing the additional function. Typical embodiments may include updating a voice form after executing the additional function. Typical embodiments also may include updating a state table after updating the voice form. Typical embodiments also may include restarting the voice form after executing the additional function.
    Type: Application
    Filed: June 16, 2005
    Publication date: December 21, 2006
    Inventors: Charles Cross, Michael Hollinger, Igor Jablokov, Benjamin Lewis, Hilary Pike, Daniel Smith, David Wintermute, Michael Zaitzeff
  • Publication number: 20060288309
    Abstract: Methods, systems, and products are disclosed for displaying available menu choices in a multimodal browser including presenting a user a plurality of GUI menu fields; selecting one of the plurality of GUI menu fields; and displaying, in a GUI display box for the plurality of GUI menu fields, menu choices for the selected GUI menu field.
    Type: Application
    Filed: June 16, 2005
    Publication date: December 21, 2006
    Inventors: Charles Cross, Michael Hollinger, Igor Jablokov, Benjamin Lewis, Hilary Pike, Daniel Smith, David Wintermute, Michael Zaitzeff
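The shared-display-box behavior in the abstract above can be sketched as follows; the field names and choices are illustrative.

```python
# Sketch: several GUI menu fields share a single display box; selecting
# one of the fields shows that field's menu choices in the shared box.

class MenuBrowser:
    def __init__(self, fields):
        self.fields = fields  # field name -> list of menu choices
        self.selected = None
        self.display_box = []

    def select(self, field):
        """Select a GUI menu field and display its choices in the box."""
        self.selected = field
        self.display_box = self.fields[field]
```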
  • Publication number: 20060287865
    Abstract: Establishing a multimodal application voice including selecting a voice personality for the multimodal application and creating in dependence upon the voice personality a VoiceXML dialog. Selecting a voice personality for the multimodal application may also include retrieving a user profile and selecting a voice personality for the multimodal application in dependence upon the user profile. Selecting a voice personality for the multimodal application may also include retrieving a sponsor profile and selecting a voice personality for the multimodal application in dependence upon the sponsor profile. Selecting a voice personality for the multimodal application may also include retrieving a system profile and selecting a voice personality for the multimodal application in dependence upon the system profile.
    Type: Application
    Filed: June 16, 2005
    Publication date: December 21, 2006
    Inventors: Charles Cross, Michael Hollinger, Igor Jablokov, Benjamin Lewis, Hilary Pike, Daniel Smith, David Wintermute, Michael Zaitzeff
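The profile-driven personality selection described above can be sketched as follows. The profile keys, the fallback order across user, sponsor, and system profiles, and the VoiceXML fragment are assumptions for illustration; the abstract presents the three profile sources as alternatives rather than an ordered chain.

```python
# Sketch: select a voice personality for a multimodal application in
# dependence upon a user, sponsor, or system profile, then create a
# VoiceXML dialog in dependence upon that personality.

def select_voice_personality(user_profile=None, sponsor_profile=None,
                             system_profile=None):
    """Pick a personality from the first profile that specifies one."""
    for profile in (user_profile, sponsor_profile, system_profile):
        if profile and "personality" in profile:
            return profile["personality"]
    return "default"

def create_vxml_dialog(personality):
    # Create a VoiceXML dialog fragment using the chosen personality.
    return f'<vxml:prompt voice="{personality}">Welcome</vxml:prompt>'
```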