Patents by Inventor Michael Zaitzeff
Michael Zaitzeff has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 8965772
Abstract: Methods, systems, and products are disclosed for displaying speech command input state information in a multimodal browser including displaying an icon representing a speech command type and displaying an icon representing the input state of the speech command. In typical embodiments, the icon representing a speech command type and the icon representing the input state of the speech command also includes attributes of a single icon. Typical embodiments include accepting from a user a speech command of the speech command type, changing the input state of the speech command, and displaying another icon representing the changed input state of the speech command. Typical embodiments also include displaying the text of the speech command in association with the icon representing the speech command type.
Type: Grant
Filed: March 20, 2014
Date of Patent: February 24, 2015
Assignee: Nuance Communications, Inc.
Inventors: Charles W. Cross, Jr., Michael C. Hollinger, Igor R. Jablokov, Benjamin D. Lewis, Hilary A. Pike, Daniel M. Smith, David W. Wintermute, Michael A. Zaitzeff
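The abstract above describes tracking a speech command's input state and showing a single icon that combines the command type with that state. A minimal sketch of that idea follows; all names (icon files, state strings, the `SpeechCommand` class) are hypothetical illustrations, not taken from the patent.

```python
# Hypothetical sketch: a speech command whose icon reflects both its
# command type and its current input state, as the abstract describes.
from dataclasses import dataclass

# Invented icon names keyed by (command type, input state).
ICONS = {
    ("search", "listening"): "mic-search-listening.png",
    ("search", "filled"): "mic-search-filled.png",
}

@dataclass
class SpeechCommand:
    command_type: str
    state: str = "listening"
    text: str = ""

    def icon(self) -> str:
        # A single icon carries attributes of both the type and the state.
        return ICONS[(self.command_type, self.state)]

    def accept(self, text: str) -> None:
        # Accepting user speech changes the input state; the browser would
        # then display the new icon alongside the command's text.
        self.text = text
        self.state = "filled"

cmd = SpeechCommand("search")
cmd.accept("weather in Austin")
```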
-
Publication number: 20140208210
Abstract: Methods, systems, and products are disclosed for displaying speech command input state information in a multimodal browser including displaying an icon representing a speech command type and displaying an icon representing the input state of the speech command. In typical embodiments, the icon representing a speech command type and the icon representing the input state of the speech command also includes attributes of a single icon. Typical embodiments include accepting from a user a speech command of the speech command type, changing the input state of the speech command, and displaying another icon representing the changed input state of the speech command. Typical embodiments also include displaying the text of the speech command in association with the icon representing the speech command type.
Type: Application
Filed: March 20, 2014
Publication date: July 24, 2014
Applicant: Nuance Communications, Inc.
Inventors: Charles W. Cross, Jr., Michael C. Hollinger, Igor R. Jablokov, Benjamin D. Lewis, Hilary A. Pike, Daniel M. Smith, David W. Wintermute, Michael A. Zaitzeff
-
Patent number: 8719034
Abstract: Methods, systems, and products are disclosed for displaying speech command input state information in a multimodal browser including displaying an icon representing a speech command type and displaying an icon representing the input state of the speech command. In typical embodiments, the icon representing a speech command type and the icon representing the input state of the speech command also includes attributes of a single icon. Typical embodiments include accepting from a user a speech command of the speech command type, changing the input state of the speech command, and displaying another icon representing the changed input state of the speech command. Typical embodiments also include displaying the text of the speech command in association with the icon representing the speech command type.
Type: Grant
Filed: September 13, 2005
Date of Patent: May 6, 2014
Assignee: Nuance Communications, Inc.
Inventors: Charles W. Cross, Jr., Michael Charles Hollinger, Igor R. Jablokov, Benjamin D. Lewis, Hilary A. Pike, Daniel M. Smith, David W. Wintermute, Michael A. Zaitzeff
-
Patent number: 8571872
Abstract: Exemplary methods, systems, and products are disclosed for synchronizing visual and speech events in a multimodal application, including receiving from a user speech; determining a semantic interpretation of the speech; calling a global application update handler; identifying, by the global application update handler, an additional processing function in dependence upon the semantic interpretation; and executing the additional function. Typical embodiments may include updating a visual element after executing the additional function. Typical embodiments may include updating a voice form after executing the additional function. Typical embodiments also may include updating a state table after updating the voice form. Typical embodiments also may include restarting the voice form after executing the additional function.
Type: Grant
Filed: September 30, 2011
Date of Patent: October 29, 2013
Assignee: Nuance Communications, Inc.
Inventors: Charles W. Cross, Jr., Michael C. Hollinger, Igor R. Jablokov, Benjamin D. Lewis, Hilary A. Pike, Daniel M. Smith, David W. Wintermute, Michael A. Zaitzeff
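The abstract above describes a global application update handler that maps the semantic interpretation of recognized speech to an additional processing function and executes it. A minimal sketch follows; the intent names, interpretation fields, and handler table are all invented for illustration and are not from the patent.

```python
# Hypothetical sketch: a global update handler identifies an extra
# processing function from the semantic interpretation of user speech,
# then executes it. The caller would afterwards update visual elements
# and the voice form with the result.

def update_stock_view(interp):
    # Invented "additional processing function" for a stock-quote intent.
    return f"chart:{interp['symbol']}"

# Registry mapping semantic intents to additional functions (assumed).
HANDLERS = {"show_stock": update_stock_view}

def global_update_handler(interpretation):
    extra_fn = HANDLERS[interpretation["intent"]]  # identify the function
    return extra_fn(interpretation)                # execute it

result = global_update_handler({"intent": "show_stock", "symbol": "IBM"})
```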
-
Publication number: 20120022875
Abstract: Exemplary methods, systems, and products are disclosed for synchronizing visual and speech events in a multimodal application, including receiving from a user speech; determining a semantic interpretation of the speech; calling a global application update handler; identifying, by the global application update handler, an additional processing function in dependence upon the semantic interpretation; and executing the additional function. Typical embodiments may include updating a visual element after executing the additional function. Typical embodiments may include updating a voice form after executing the additional function. Typical embodiments also may include updating a state table after updating the voice form. Typical embodiments also may include restarting the voice form after executing the additional function.
Type: Application
Filed: September 30, 2011
Publication date: January 26, 2012
Applicant: Nuance Communications, Inc.
Inventors: Charles W. Cross, Jr., Michael C. Hollinger, Igor R. Jablokov, Benjamin D. Lewis, Hilary A. Pike, Daniel M. Smith, David W. Wintermute, Michael A. Zaitzeff
-
Patent number: 8090584
Abstract: Methods, systems, and computer program products are provided for modifying a grammar of a hierarchical multimodal menu that include monitoring a user invoking a speech command in a first tier grammar, and adding the speech command to a second tier grammar in dependence upon the frequency of the user invoking the speech command. Adding the speech command to a second tier grammar may be carried out by adding the speech command to a higher tier grammar or by adding the speech command to a lower tier grammar. Adding the speech command to a second tier grammar may include storing the speech command in a grammar cache in the second tier grammar.
Type: Grant
Filed: June 16, 2005
Date of Patent: January 3, 2012
Assignee: Nuance Communications, Inc.
Inventors: Charles W. Cross, Jr., Michael C. Hollinger, Igor R. Jablokov, Benjamin D. Lewis, Hilary A. Pike, Daniel M. Smith, David W. Wintermute, Michael A. Zaitzeff
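The abstract above describes promoting a frequently used speech command from a first-tier grammar into a second-tier grammar cache. A minimal sketch follows; the promotion threshold, class, and command strings are assumptions for illustration and do not appear in the patent.

```python
# Hypothetical sketch: count invocations of a command in a first-tier
# grammar and, once it is used often enough, store it in a grammar cache
# in a second-tier grammar so it becomes reachable from that tier.
from collections import Counter

PROMOTE_AFTER = 3  # assumed frequency threshold, not from the patent

class TieredGrammar:
    def __init__(self):
        self.counts = Counter()          # per-command invocation frequency
        self.second_tier_cache = set()   # "grammar cache" in the second tier

    def invoke(self, command: str) -> None:
        self.counts[command] += 1
        if self.counts[command] >= PROMOTE_AFTER:
            self.second_tier_cache.add(command)

g = TieredGrammar()
for _ in range(3):
    g.invoke("play voicemail")
```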
-
Patent number: 8055504
Abstract: Exemplary methods, systems, and products are disclosed for synchronizing visual and speech events in a multimodal application, including receiving from a user speech; determining a semantic interpretation of the speech; calling a global application update handler; identifying, by the global application update handler, an additional processing function in dependence upon the semantic interpretation; and executing the additional function. Typical embodiments may include updating a visual element after executing the additional function. Typical embodiments may include updating a voice form after executing the additional function. Typical embodiments also may include updating a state table after updating the voice form. Typical embodiments also may include restarting the voice form after executing the additional function.
Type: Grant
Filed: April 3, 2008
Date of Patent: November 8, 2011
Assignee: Nuance Communications, Inc.
Inventors: Charles W. Cross, Michael C. Hollinger, Igor R. Jablokov, David B. Lewis, Hilary A. Pike, Daniel M. Smith, David W. Wintermute, Michael A. Zaitzeff
-
Patent number: 8032825
Abstract: Methods, systems, and products for dynamically creating a multimodal markup document are provided that include selecting a multimodal markup template, identifying in dependence upon the multimodal markup template a dynamic content module, instantiating the dynamic content module, executing a dynamic content creation function in the instantiated dynamic content module, receiving dynamic content from the dynamic content creation function, and including the dynamic content in the multimodal markup template. Selecting a multimodal markup template may be carried out by identifying a multimodal markup template from URI encoded data embedded in a request for a multimodal markup document from a multimodal browser. The multimodal markup template may include static content and the dynamic content may include XHTML+Voice content.
Type: Grant
Filed: June 16, 2005
Date of Patent: October 4, 2011
Assignee: International Business Machines Corporation
Inventors: Charles W. Cross, Jr., Michael C. Hollinger, Igor R. Jablokov, Benjamin D. Lewis, Hilary A. Pike, Daniel M. Smith, David W. Wintermute, Michael A. Zaitzeff
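The abstract above describes filling a static multimodal markup template with dynamic content produced by an instantiated content module. A minimal sketch follows; the template string, module registry, and the voice-prompt markup are invented placeholders, not the patent's actual format.

```python
# Hypothetical sketch: select a template, instantiate the dynamic content
# module it identifies, run the module's content creation function, and
# include the returned dynamic content in the static template.

# Static template with a slot for dynamic content (placeholder markup).
TEMPLATE = "<html><body>{dynamic}</body></html>"

class WeatherModule:
    def create_content(self) -> str:
        # Stand-in for generated XHTML+Voice content.
        return "<prompt>Say a city name</prompt>"

# Registry mapping module names (assumed) to module classes.
MODULES = {"weather": WeatherModule}

def render(template: str, module_name: str) -> str:
    module = MODULES[module_name]()          # instantiate the module
    dynamic = module.create_content()        # dynamic content creation fn
    return template.format(dynamic=dynamic)  # include it in the template

doc = render(TEMPLATE, "weather")
```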
-
Patent number: 7917365
Abstract: Exemplary methods, systems, and products are disclosed for synchronizing visual and speech events in a multimodal application, including receiving from a user speech; determining a semantic interpretation of the speech; calling a global application update handler; identifying, by the global application update handler, an additional processing function in dependence upon the semantic interpretation; and executing the additional function. Typical embodiments may include updating a visual element after executing the additional function. Typical embodiments may include updating a voice form after executing the additional function. Typical embodiments also may include updating a state table after updating the voice form. Typical embodiments also may include restarting the voice form after executing the additional function.
Type: Grant
Filed: June 16, 2005
Date of Patent: March 29, 2011
Assignee: Nuance Communications, Inc.
Inventors: Charles W. Cross, Jr., Michael C. Hollinger, Igor R. Jablokov, Benjamin D. Lewis, Hilary A. Pike, Daniel M. Smith, David W. Wintermute, Michael A. Zaitzeff
-
Publication number: 20080177530
Abstract: Exemplary methods, systems, and products are disclosed for synchronizing visual and speech events in a multimodal application, including receiving from a user speech; determining a semantic interpretation of the speech; calling a global application update handler; identifying, by the global application update handler, an additional processing function in dependence upon the semantic interpretation; and executing the additional function. Typical embodiments may include updating a visual element after executing the additional function. Typical embodiments may include updating a voice form after executing the additional function. Typical embodiments also may include updating a state table after updating the voice form. Typical embodiments also may include restarting the voice form after executing the additional function.
Type: Application
Filed: April 3, 2008
Publication date: July 24, 2008
Applicant: International Business Machines Corporation
Inventors: Charles W. Cross, Michael C. Hollinger, Igor R. Jablokov, Benjamin D. Lewis, Hilary A. Pike, Daniel M. Smith, David W. Wintermute, Michael A. Zaitzeff
-
Publication number: 20070061148
Abstract: Methods, systems, and products are disclosed for displaying speech command input state information in a multimodal browser including displaying an icon representing a speech command type and displaying an icon representing the input state of the speech command. In typical embodiments, the icon representing a speech command type and the icon representing the input state of the speech command also includes attributes of a single icon. Typical embodiments include accepting from a user a speech command of the speech command type, changing the input state of the speech command, and displaying another icon representing the changed input state of the speech command. Typical embodiments also include displaying the text of the speech command in association with the icon representing the speech command type.
Type: Application
Filed: September 13, 2005
Publication date: March 15, 2007
Inventors: Charles Cross, Michael Hollinger, Igor Jablokov, Benjamin Lewis, Hilary Pike, Daniel Smith, David Wintermute, Michael Zaitzeff
-
Publication number: 20060287858
Abstract: Services, systems, and computer program products are provided for modifying a grammar of a hierarchical multimodal menu that include selling to a customer a keyword, selling to a customer a location in a grammar in a hierarchical multimodal menu, and storing the keyword in the location. Storing the keyword in the location may be carried out by storing the keyword in a grammar cache in the grammar.
Type: Application
Filed: June 16, 2005
Publication date: December 21, 2006
Inventors: Charles Cross, Michael Hollinger, Igor Jablokov, Benjamin Lewis, Hilary Pike, Daniel Smith, David Wintermute, Michael Zaitzeff
-
Publication number: 20060287866
Abstract: Methods, systems, and computer program products are provided for modifying a grammar of a hierarchical multimodal menu that include monitoring a user invoking a speech command in a first tier grammar, and adding the speech command to a second tier grammar in dependence upon the frequency of the user invoking the speech command. Adding the speech command to a second tier grammar may be carried out by adding the speech command to a higher tier grammar or by adding the speech command to a lower tier grammar. Adding the speech command to a second tier grammar may include storing the speech command in a grammar cache in the second tier grammar.
Type: Application
Filed: June 16, 2005
Publication date: December 21, 2006
Inventors: Charles Cross, Michael Hollinger, Igor Jablokov, Benjamin Lewis, Hilary Pike, Daniel Smith, David Wintermute, Michael Zaitzeff
-
Publication number: 20060288328
Abstract: Methods, systems, and products for dynamically creating a multimodal markup document are provided that include selecting a multimodal markup template, identifying in dependence upon the multimodal markup template a dynamic content module, instantiating the dynamic content module, executing a dynamic content creation function in the instantiated dynamic content module, receiving dynamic content from the dynamic content creation function, and including the dynamic content in the multimodal markup template. Selecting a multimodal markup template may be carried out by identifying a multimodal markup template from URI encoded data embedded in a request for a multimodal markup document from a multimodal browser. The multimodal markup template may include static content and the dynamic content may include XHTML+Voice content.
Type: Application
Filed: June 16, 2005
Publication date: December 21, 2006
Inventors: Charles Cross, Michael Hollinger, Igor Jablokov, Benjamin Lewis, Hilary Pike, Daniel Smith, David Wintermute, Michael Zaitzeff
-
Publication number: 20060287845
Abstract: Exemplary methods, systems, and products are disclosed for synchronizing visual and speech events in a multimodal application, including receiving from a user speech; determining a semantic interpretation of the speech; calling a global application update handler; identifying, by the global application update handler, an additional processing function in dependence upon the semantic interpretation; and executing the additional function. Typical embodiments may include updating a visual element after executing the additional function. Typical embodiments may include updating a voice form after executing the additional function. Typical embodiments also may include updating a state table after updating the voice form. Typical embodiments also may include restarting the voice form after executing the additional function.
Type: Application
Filed: June 16, 2005
Publication date: December 21, 2006
Inventors: Charles Cross, Michael Hollinger, Igor Jablokov, Benjamin Lewis, Hilary Pike, Daniel Smith, David Wintermute, Michael Zaitzeff
-
Publication number: 20060288309
Abstract: Methods, systems, and products are disclosed for displaying available menu choices in a multimodal browser including presenting a user a plurality of GUI menu fields; selecting one of the plurality of GUI menu fields; and displaying, in a GUI display box for the plurality of GUI menu fields, menu choices for the selected GUI menu field.
Type: Application
Filed: June 16, 2005
Publication date: December 21, 2006
Inventors: Charles Cross, Michael Hollinger, Igor Jablokov, Benjamin Lewis, Hilary Pike, Daniel Smith, David Wintermute, Michael Zaitzeff
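The abstract above describes a shared display box that shows only the menu choices of the currently selected GUI menu field. A minimal sketch of that lookup follows; the field names and choices are invented examples, not from the application.

```python
# Hypothetical sketch: a single display box is refilled with the choices
# of whichever menu field the user has selected.

MENU_CHOICES = {
    "size": ["small", "medium", "large"],
    "drink": ["coffee", "tea"],
}

def choices_for(selected_field: str) -> list:
    # The display box shows only the selected field's choices.
    return MENU_CHOICES[selected_field]

display_box = choices_for("size")
```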
-
Publication number: 20060287865
Abstract: Establishing a multimodal application voice including selecting a voice personality for the multimodal application and creating in dependence upon the voice personality a VoiceXML dialog. Selecting a voice personality for the multimodal application may also include retrieving a user profile and selecting a voice personality for the multimodal application in dependence upon the user profile. Selecting a voice personality for the multimodal application may also include retrieving a sponsor profile and selecting a voice personality for the multimodal application in dependence upon the sponsor profile. Selecting a voice personality for the multimodal application may also include retrieving a system profile and selecting a voice personality for the multimodal application in dependence upon the system profile.
Type: Application
Filed: June 16, 2005
Publication date: December 21, 2006
Inventors: Charles Cross, Michael Hollinger, Igor Jablokov, Benjamin Lewis, Hilary Pike, Daniel Smith, David Wintermute, Michael Zaitzeff
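The abstract above describes selecting a voice personality from a user, sponsor, or system profile and then creating a VoiceXML dialog in dependence upon it. A minimal sketch follows; the profile fields, the priority order, and the stub markup are assumptions for illustration, not from the application.

```python
# Hypothetical sketch: pick a voice personality from whichever profile
# provides one (assumed priority: user, then sponsor, then system), and
# use it to parameterize a stub VoiceXML dialog.

def select_personality(user=None, sponsor=None, system=None) -> str:
    # "voice" is an invented profile field; real profiles are unspecified.
    for profile in (user, sponsor, system):
        if profile and "voice" in profile:
            return profile["voice"]
    return "default"

def vxml_dialog(personality: str) -> str:
    # Simplified placeholder for a generated VoiceXML dialog.
    return f'<vxml version="2.0"><prompt voice="{personality}">Welcome</prompt></vxml>'

dialog = vxml_dialog(select_personality(user={"voice": "friendly-female"}))
```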