Patents by Inventor Lucas R. A. Ives
Lucas R. A. Ives has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11586936
Abstract: Systems and methods for both technical and non-technical users to create content for interactive synthetic characters are provided. In some embodiments, a conversation editor may be configured to create a traversable script for an interactive synthetic character by receiving a set of conversation rules from a user. These rules can be used to match certain words or phrases that a user speaks or types, or to monitor for a physical movement of the user or synthetic character. Each conversation rule can include responses to be performed by the interactive synthetic character. The responses can include, for example, producing audible or textual speech for the synthetic character, performing one or more animations, playing one or more sound effects, retrieving data from one or more data sources, and the like. A traversable script can be generated from the set of conversation rules that, when executed by the synthetic character, allows for dynamic interactions.
Type: Grant
Filed: January 18, 2019
Date of Patent: February 21, 2023
Assignee: Chatterbox Capital LLC
Inventors: Martin Reddy, Oren M. Jacob, Robert G. Podesta, Lucas R. Ives, Kathleen Hale
-
Publication number: 20190385065
Abstract: Systems and methods for both technical and non-technical users to create content for interactive synthetic characters are provided. In some embodiments, a conversation editor may be configured to create a traversable script for an interactive synthetic character by receiving a set of conversation rules from a user. These rules can be used to match certain words or phrases that a user speaks or types, or to monitor for a physical movement of the user or synthetic character. Each conversation rule can include responses to be performed by the interactive synthetic character. The responses can include, for example, producing audible or textual speech for the synthetic character, performing one or more animations, playing one or more sound effects, retrieving data from one or more data sources, and the like. A traversable script can be generated from the set of conversation rules that, when executed by the synthetic character, allows for dynamic interactions.
Type: Application
Filed: January 18, 2019
Publication date: December 19, 2019
Inventors: Martin Reddy, Oren M. Jacob, Robert G. Podesta, Lucas R.A. Ives, Kathleen Hale
-
Patent number: 10223636
Abstract: Systems and methods to create content for interactive synthetic characters are provided. In some embodiments, a conversation editor may be configured to create a traversable script for an interactive synthetic character by receiving conversation rules from a user. These rules can be used to match words or phrases that a user speaks or types, or to monitor for a physical movement of the user or synthetic character. Each rule can include responses to be performed by the interactive synthetic character. Examples of responses include producing audible or textual speech for the synthetic character, performing animations, playing sound effects, retrieving data, and the like. A traversable script can be generated from the conversation rules that, when executed by the synthetic character, allows for dynamic interactions. In some embodiments, the traversable script can be navigated by a state engine using navigational directives associated with the conversation rules.
Type: Grant
Filed: July 25, 2012
Date of Patent: March 5, 2019
Assignee: PULLSTRING, INC.
Inventors: Martin Reddy, Oren M. Jacob, Robert G. Podesta, Lucas R. A. Ives, Kathleen Hale
-
Patent number: 9454838
Abstract: Various of the disclosed embodiments relate to systems and methods for providing animated multimedia, e.g., animated shows, to an audience over a network. Particularly, some embodiments provide systems and methods for generating and providing audio, animation, and other experience-related information so that users may readily experience the content in a seamless manner (e.g., as an audience member watching a show, playing a video game, etc.). Various embodiments animate “to the audience” based, e.g., on what content the audience is consuming. The animations may be generated in real-time from constituent components and assets in response to user behavior.
Type: Grant
Filed: May 28, 2014
Date of Patent: September 27, 2016
Assignee: PULLSTRING, INC.
Inventors: Michael Chann, Jon Collins, Benjamin Morse, Lucas R. A. Ives, Martin Reddy, Oren M. Jacob
-
Patent number: 9443337
Abstract: Various of the disclosed embodiments relate to systems and methods for providing animated multimedia, e.g., animated shows, to an audience over a network. Particularly, some embodiments provide systems and methods for generating and providing audio, animation, and other experience-related information so that users may readily experience the content in a seamless manner (e.g., as an audience member watching a show, playing a video game, etc.). Various embodiments animate “to the audience” based, e.g., on what content the audience is consuming. The animations may be generated in real-time from constituent components and assets in response to user behavior.
Type: Grant
Filed: May 28, 2014
Date of Patent: September 13, 2016
Assignee: PULLSTRING, INC.
Inventors: Michael Chann, Jon Collins, Benjamin Morse, Lucas R. A. Ives, Martin Reddy, Oren M. Jacob
-
Publication number: 20150062131
Abstract: Various of the disclosed embodiments relate to systems and methods for providing animated multimedia, e.g., animated shows, to an audience over a network. Particularly, some embodiments provide systems and methods for generating and providing audio, animation, and other experience-related information so that users may readily experience the content in a seamless manner (e.g., as an audience member watching a show, playing a video game, etc.). Various embodiments animate “to the audience” based, e.g., on what content the audience is consuming. The animations may be generated in real-time from constituent components and assets in response to user behavior.
Type: Application
Filed: May 28, 2014
Publication date: March 5, 2015
Inventors: Michael Chann, Jon Collins, Benjamin Morse, Lucas R.A. Ives, Martin Reddy, Oren M. Jacob
-
Publication number: 20150062132
Abstract: Various of the disclosed embodiments relate to systems and methods for providing animated multimedia, e.g., animated shows, to an audience over a network. Particularly, some embodiments provide systems and methods for generating and providing audio, animation, and other experience-related information so that users may readily experience the content in a seamless manner (e.g., as an audience member watching a show, playing a video game, etc.). Various embodiments animate “to the audience” based, e.g., on what content the audience is consuming. The animations may be generated in real-time from constituent components and assets in response to user behavior.
Type: Application
Filed: May 28, 2014
Publication date: March 5, 2015
Inventors: Michael Chann, Jon Collins, Benjamin Morse, Lucas R.A. Ives, Martin Reddy, Oren M. Jacob
-
Publication number: 20140278403
Abstract: Various of the disclosed embodiments concern systems and methods for conversation-based human-computer interactions. In some embodiments, the system includes a plurality of interactive scenes. A user may access each scene and engage in conversation with a synthetic character regarding an activity associated with that active scene. In certain embodiments, a central server may house a plurality of waveforms associated with the synthetic character's speech, and may dynamically deliver the waveforms to a user device in conjunction with the operation of an artificial intelligence. In other embodiments, the character's speech is generated using a text-to-speech system.
Type: Application
Filed: March 14, 2013
Publication date: September 18, 2014
Applicant: TOYTALK, INC.
Inventors: Oren M. Jacob, Martin Reddy, Lucas R.A. Ives, Robert G. Podesta
-
Publication number: 20140272827
Abstract: Various of the disclosed embodiments relate to systems and methods for managing a vocal performance. In some embodiments, a central hosting server may maintain a repository of speech text, waveforms, and metadata supplied by a plurality of development team members. The central hosting server may facilitate modification of the metadata and collaborative commentary procedures so that the development team members may generate higher quality voice assets more efficiently.
Type: Application
Filed: March 14, 2013
Publication date: September 18, 2014
Applicant: TOYTALK, INC.
Inventors: Oren M. Jacobs, Martin Reddy, Lucas R.A. Ives
-
Publication number: 20140032471
Abstract: Systems and methods to create content for interactive synthetic characters are provided. In some embodiments, a conversation editor may be configured to create a traversable script for an interactive synthetic character by receiving conversation rules from a user. These rules can be used to match words or phrases that a user speaks or types, or to monitor for a physical movement of the user or synthetic character. Each rule can include responses to be performed by the interactive synthetic character. Examples of responses include producing audible or textual speech for the synthetic character, performing animations, playing sound effects, retrieving data, and the like. A traversable script can be generated from the conversation rules that, when executed by the synthetic character, allows for dynamic interactions. In some embodiments, the traversable script can be navigated by a state engine using navigational directives associated with the conversation rules.
Type: Application
Filed: July 25, 2012
Publication date: January 30, 2014
Applicant: TOYTALK, INC.
Inventors: Martin Reddy, Oren M. Jacob, Robert G. Podesta, Lucas R. A. Ives, Kathleen Hale