Patents by Inventor Martin Reddy
Martin Reddy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230386460
Abstract: Systems and processes for operating an intelligent automated assistant are provided. For example, an electronic device can provide an output in an audio, visual, or mixed mode. In one example process, an output mode is selected, and, in response to determining that an output is to be provided to a user, an output data structure is obtained. The output data structure includes pattern components sorted into output groups. At least one pattern component from at least one output group is selected based on the output mode and provided in the output to the user.
Type: Application
Filed: September 14, 2022
Publication date: November 30, 2023
Inventors: Rebecca P. FISH, Neal S. ELLIS, Aswini RAMESH, Martin REDDY, Patchaya B. SEILAUDOM, Andrew P. TENNANT
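The selection mechanism this abstract describes, pattern components sorted into output groups and filtered by the active output mode, can be sketched roughly as below. This is a hypothetical Python illustration; every class, function, and data value here is invented for the example and is not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class PatternComponent:
    content: str
    modes: set  # output modes this component supports, e.g. {"audio", "visual"}

@dataclass
class OutputGroup:
    name: str
    components: list  # PatternComponent instances, in priority order

def select_components(groups, output_mode):
    """From each output group, pick the first component compatible with the mode."""
    selected = []
    for group in groups:
        for comp in group.components:
            if output_mode in comp.modes:
                selected.append(comp.content)
                break
    return selected

groups = [
    OutputGroup("greeting", [
        PatternComponent("Hello!", {"audio", "visual"}),
        PatternComponent("<wave icon>", {"visual"}),
    ]),
    OutputGroup("answer", [
        PatternComponent("It is 72 degrees.", {"audio"}),
        PatternComponent("72°F", {"visual"}),
    ]),
]

print(select_components(groups, "visual"))  # ['Hello!', '72°F']
```

The same data structure serves audio-only, visual-only, or mixed modes simply by changing the mode passed at selection time.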
-
Patent number: 11586936
Abstract: Systems and methods for both technical and non-technical users to create content for interactive synthetic characters are provided. In some embodiments, a conversation editor may be configured to create a traversable script for an interactive synthetic character by receiving a set of conversation rules from a user. These rules can be used to match certain words or phrases that a user speaks or types, or to monitor for a physical movement of the user or synthetic character. Each conversation rule can include responses to be performed by the interactive synthetic character. The responses can include, for example, producing audible or textual speech for the synthetic character, performing one or more animations, playing one or more sound effects, retrieving data from one or more data sources, and the like. A traversable script can be generated from the set of conversation rules that, when executed by the synthetic character, allows for dynamic interactions.
Type: Grant
Filed: January 18, 2019
Date of Patent: February 21, 2023
Assignee: Chatterbox Capital LLC
Inventors: Martin Reddy, Oren M. Jacob, Robert G. Podesta, Lucas R. Ives, Kathleen Hale
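The core idea above, conversation rules that match user input and carry responses for the character to perform, might look something like the following sketch. The names and the rule format are assumptions made for illustration, not the patented implementation.

```python
import re

class ConversationRule:
    """A rule pairing an input pattern with responses for the character."""
    def __init__(self, pattern, responses):
        self.pattern = re.compile(pattern, re.IGNORECASE)
        self.responses = responses  # e.g. speech, animation, or sound actions

    def matches(self, user_input):
        return bool(self.pattern.search(user_input))

def run_script(rules, user_input):
    """Return the responses of the first rule whose pattern matches."""
    for rule in rules:
        if rule.matches(user_input):
            return rule.responses
    return [("speech", "I didn't catch that.")]  # fallback response

rules = [
    ConversationRule(r"\b(hi|hello)\b",
                     [("speech", "Hi there!"), ("animation", "wave")]),
    ConversationRule(r"\bweather\b",
                     [("speech", "It looks sunny today.")]),
]

print(run_script(rules, "Hello, friend"))
# [('speech', 'Hi there!'), ('animation', 'wave')]
```

A full system would add the other response types the abstract mentions (sound effects, data retrieval) as further action tuples handled by the character's runtime.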
-
Publication number: 20190385065
Abstract: Systems and methods for both technical and non-technical users to create content for interactive synthetic characters are provided. In some embodiments, a conversation editor may be configured to create a traversable script for an interactive synthetic character by receiving a set of conversation rules from a user. These rules can be used to match certain words or phrases that a user speaks or types, or to monitor for a physical movement of the user or synthetic character. Each conversation rule can include responses to be performed by the interactive synthetic character. The responses can include, for example, producing audible or textual speech for the synthetic character, performing one or more animations, playing one or more sound effects, retrieving data from one or more data sources, and the like. A traversable script can be generated from the set of conversation rules that, when executed by the synthetic character, allows for dynamic interactions.
Type: Application
Filed: January 18, 2019
Publication date: December 19, 2019
Inventors: Martin Reddy, Oren M. Jacob, Robert G. Podesta, Lucas R.A. Ives, Kathleen Hale
-
Patent number: 10223636
Abstract: Systems and methods to create content for interactive synthetic characters are provided. In some embodiments, a conversation editor may be configured to create a traversable script for an interactive synthetic character by receiving conversation rules from a user. These rules can be used to match words or phrases that a user speaks or types, or to monitor for a physical movement of the user or synthetic character. Each rule can include responses to be performed by the interactive synthetic character. Examples of responses include producing audible or textual speech for the synthetic character, performing animations, playing sound effects, retrieving data, and the like. A traversable script can be generated from the conversation rules that, when executed by the synthetic character, allows for dynamic interactions. In some embodiments, the traversable script can be navigated by a state engine using navigational directives associated with the conversation rules.
Type: Grant
Filed: July 25, 2012
Date of Patent: March 5, 2019
Assignee: PULLSTRING, INC.
Inventors: Martin Reddy, Oren M. Jacob, Robert G. Podesta, Lucas R. A. Ives, Kathleen Hale
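The last sentence of this abstract, a state engine navigating the script via directives attached to the rules, can be sketched as a small state machine. The node layout and keyword-based directives are assumptions for the example only.

```python
class StateEngine:
    """Walk a traversable script: each node holds a response plus
    navigational directives mapping matched input to the next node."""
    def __init__(self, script, start):
        self.script = script
        self.state = start

    def step(self, user_input):
        node = self.script[self.state]
        for keyword, next_state in node["directives"].items():
            if keyword in user_input.lower():
                self.state = next_state  # follow the matching directive
                break
        return self.script[self.state]["response"]

script = {
    "intro":   {"response": "Want to hear a story?",
                "directives": {"yes": "story", "no": "goodbye"}},
    "story":   {"response": "Once upon a time...", "directives": {}},
    "goodbye": {"response": "Maybe later, then!", "directives": {}},
}

engine = StateEngine(script, "intro")
print(engine.step("yes please"))  # Once upon a time...
```

If no directive matches, the engine stays in its current state and repeats that state's response, one plausible way to handle unrecognized input.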
-
Patent number: 9454838
Abstract: Various of the disclosed embodiments relate to systems and methods for providing animated multimedia, e.g. animated shows, to an audience over a network. In particular, some embodiments provide systems and methods for generating and providing audio, animation, and other experience-related information so that users may readily experience the content in a seamless manner (e.g., as an audience member watching a show, playing a video game, etc.). Various embodiments animate “to the audience” based, e.g., on what content the audience is consuming. The animations may be generated in real time from constituent components and assets in response to user behavior.
Type: Grant
Filed: May 28, 2014
Date of Patent: September 27, 2016
Assignee: PULLSTRING, INC.
Inventors: Michael Chann, Jon Collins, Benjamin Morse, Lucas R. A. Ives, Martin Reddy, Oren M. Jacob
-
Patent number: 9443337
Abstract: Various of the disclosed embodiments relate to systems and methods for providing animated multimedia, e.g. animated shows, to an audience over a network. In particular, some embodiments provide systems and methods for generating and providing audio, animation, and other experience-related information so that users may readily experience the content in a seamless manner (e.g., as an audience member watching a show, playing a video game, etc.). Various embodiments animate “to the audience” based, e.g., on what content the audience is consuming. The animations may be generated in real time from constituent components and assets in response to user behavior.
Type: Grant
Filed: May 28, 2014
Date of Patent: September 13, 2016
Assignee: PULLSTRING, INC.
Inventors: Michael Chann, Jon Collins, Benjamin Morse, Lucas R. A. Ives, Martin Reddy, Oren M. Jacob
-
Patent number: 9437207
Abstract: Various of the disclosed embodiments relate to systems and methods for extracting audio information, e.g. a textual description of speech, from a speech recording while retaining the anonymity of the speaker. In certain embodiments, a third party may perform various aspects of the anonymization and speech processing. Certain embodiments facilitate anonymization in compliance with various legislative requirements even when third parties are involved.
Type: Grant
Filed: April 3, 2013
Date of Patent: September 6, 2016
Assignee: PULLSTRING, INC.
Inventors: Oren M. Jacob, Martin Reddy, Brian Langner
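One plausible shape for the workflow this abstract describes is to strip speaker identity before audio leaves for a third-party transcriber, then re-link the results locally. This sketch is an assumption about the general technique; the function names and token scheme are invented for illustration.

```python
import uuid

def anonymize_and_transcribe(recordings, transcribe):
    """Replace speaker identities with opaque tokens before sending audio
    to an external transcription service; re-associate results locally."""
    token_to_speaker = {}
    results = {}
    for speaker, audio in recordings:
        token = uuid.uuid4().hex           # opaque ID; no speaker info leaves
        token_to_speaker[token] = speaker
        results[token] = transcribe(audio)  # third party sees only token+audio
    # Re-link transcripts to speakers using the locally held mapping.
    return {token_to_speaker[t]: text for t, text in results.items()}

# A stand-in for an external transcription service.
fake_transcribe = lambda audio: f"transcript of {len(audio)} bytes"

out = anonymize_and_transcribe([("alice", b"\x00" * 10)], fake_transcribe)
print(out)  # {'alice': 'transcript of 10 bytes'}
```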
-
Publication number: 20150243279
Abstract: Various of the disclosed embodiments concern systems and methods for identifying and recommending interesting user responses that are obtained by an interactive device (e.g., audio responses to a virtual character as part of a virtual interaction). In some embodiments, a user may interact with one or more virtual characters via a mobile device, tablet, desktop computer, or the like. During the interaction, the user may respond to one or more questions posed by the virtual characters or to contexts presented by the interactive device. The system may record these user responses, analyze the audio data to extract one or more features, and prepare a ranking of the user responses. The extracted features can be augmented with human-generated metadata or ground truth values. A reviewer can review, share, etc., the user response.
Type: Application
Filed: February 26, 2015
Publication date: August 27, 2015
Inventors: Benjamin Morse, Martin Reddy, Aurelio Tinio, James Chalfant
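The extract-features-then-rank pipeline described above can be sketched with toy features standing in for real audio analysis. The features and scoring weights here are invented purely for the example.

```python
def extract_features(response):
    """Toy features: word count and a laughter cue stand in for
    real acoustic/linguistic analysis of the recorded audio."""
    return {
        "length": len(response["text"].split()),
        "laughter": response["text"].lower().count("haha"),
    }

def rank_responses(responses):
    """Score each response from its features and return them best-first."""
    def score(r):
        f = extract_features(r)
        return f["length"] + 5 * f["laughter"]  # arbitrary example weights
    return sorted(responses, key=score, reverse=True)

responses = [
    {"user": "a", "text": "yes"},
    {"user": "b", "text": "haha that dragon was so funny"},
    {"user": "c", "text": "I like turtles"},
]
print([r["user"] for r in rank_responses(responses)])  # ['b', 'c', 'a']
```

Per the abstract, these automatically extracted features could then be augmented with human-generated metadata or ground-truth labels before the final ranking is shown to a reviewer.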
-
Publication number: 20150062132
Abstract: Various of the disclosed embodiments relate to systems and methods for providing animated multimedia, e.g. animated shows, to an audience over a network. In particular, some embodiments provide systems and methods for generating and providing audio, animation, and other experience-related information so that users may readily experience the content in a seamless manner (e.g., as an audience member watching a show, playing a video game, etc.). Various embodiments animate “to the audience” based, e.g., on what content the audience is consuming. The animations may be generated in real time from constituent components and assets in response to user behavior.
Type: Application
Filed: May 28, 2014
Publication date: March 5, 2015
Inventors: Michael Chann, Jon Collins, Benjamin Morse, Lucas R.A. Ives, Martin Reddy, Oren M. Jacob
-
Publication number: 20150062131
Abstract: Various of the disclosed embodiments relate to systems and methods for providing animated multimedia, e.g. animated shows, to an audience over a network. In particular, some embodiments provide systems and methods for generating and providing audio, animation, and other experience-related information so that users may readily experience the content in a seamless manner (e.g., as an audience member watching a show, playing a video game, etc.). Various embodiments animate “to the audience” based, e.g., on what content the audience is consuming. The animations may be generated in real time from constituent components and assets in response to user behavior.
Type: Application
Filed: May 28, 2014
Publication date: March 5, 2015
Inventors: Michael Chann, Jon Collins, Benjamin Morse, Lucas R.A. Ives, Martin Reddy, Oren M. Jacob
-
Patent number: 8972324
Abstract: Systems and methods for modifying content for interactive synthetic characters are provided. In some embodiments, a traversable script for an interactive synthetic character may be modified based on analytics relating to the use of the interactive synthetic character (e.g. words spoken by the user, language spoken, user demographics, length of interactions, visually detected objects, current environmental conditions, or location). The traversable script may include conversation rules that include actions to be performed by the interactive synthetic character. The actions can include, for example, producing audible or textual speech, performing animations, playing sound effects, retrieving data from data sources, and the like. The traversable script can be modified, customized, or improved based on the feedback from the use of the character by a plurality of users. The modifications may happen automatically or be reviewed by a content creator using a graphical user interface.
Type: Grant
Filed: July 25, 2012
Date of Patent: March 3, 2015
Assignee: Toytalk, Inc.
Inventors: Martin Reddy, Oren M. Jacob, Robert G. Podesta
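One way analytics could feed back into script modification, as this abstract outlines, is to flag rules whose interactions users rarely finish so a content creator can review them. The metric, threshold, and log format below are invented for the example; the patent does not specify them.

```python
from collections import Counter

def flag_underperforming_rules(rules, interaction_logs, threshold=0.5):
    """Flag rules whose interactions are completed less often than
    `threshold`, as candidates for review or automatic adjustment."""
    uses = Counter()
    completions = Counter()
    for log in interaction_logs:
        uses[log["rule"]] += 1
        if log["completed"]:
            completions[log["rule"]] += 1
    flagged = []
    for rule in rules:
        if uses[rule] and completions[rule] / uses[rule] < threshold:
            flagged.append(rule)
    return flagged

logs = [
    {"rule": "joke_1", "completed": True},
    {"rule": "joke_1", "completed": True},
    {"rule": "story_2", "completed": False},
    {"rule": "story_2", "completed": False},
    {"rule": "story_2", "completed": True},
]
print(flag_underperforming_rules(["joke_1", "story_2"], logs))  # ['story_2']
```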
-
Publication number: 20140278403
Abstract: Various of the disclosed embodiments concern systems and methods for conversation-based human-computer interactions. In some embodiments, the system includes a plurality of interactive scenes. A user may access each scene and engage in conversation with a synthetic character regarding an activity associated with that active scene. In certain embodiments, a central server may house a plurality of waveforms associated with the synthetic character's speech, and may dynamically deliver the waveforms to a user device in conjunction with the operation of an artificial intelligence. In other embodiments, the character's speech is generated using a text-to-speech system.
Type: Application
Filed: March 14, 2013
Publication date: September 18, 2014
Applicant: TOYTALK, INC.
Inventors: Oren M. Jacob, Martin Reddy, Lucas R.A. Ives, Robert G. Podesta
-
Publication number: 20140272827
Abstract: Various of the disclosed embodiments relate to systems and methods for managing a vocal performance. In some embodiments, a central hosting server may maintain a repository of speech text, waveforms, and metadata supplied by a plurality of development team members. The central hosting server may facilitate modification of the metadata and collaborative commentary procedures so that the development team members may generate higher quality voice assets more efficiently.
Type: Application
Filed: March 14, 2013
Publication date: September 18, 2014
Applicant: TOYTALK, INC.
Inventors: Oren M. Jacob, Martin Reddy, Lucas R.A. Ives
-
Publication number: 20140278366
Abstract: Various of the disclosed embodiments relate to systems and methods for extracting audio information, e.g. a textual description of speech, from a speech recording while retaining the anonymity of the speaker. In certain embodiments, a third party may perform various aspects of the anonymization and speech processing. Certain embodiments facilitate anonymization in compliance with various legislative requirements even when third parties are involved.
Type: Application
Filed: April 3, 2013
Publication date: September 18, 2014
Applicant: ToyTalk, Inc.
Inventors: Oren M. Jacob, Martin Reddy, Brian Langner
-
Patent number: 8737677
Abstract: A device/system and method for creating customized audio segments related to an object of interest are disclosed. The device and/or system can create an additional level of interaction with the object of interest by creating customized audio segments based on the identity of the object of interest and/or the user's interaction with the object of interest. Thus, the mobile device can create an interactive environment for a user interacting with an otherwise inanimate object.
Type: Grant
Filed: July 19, 2011
Date of Patent: May 27, 2014
Assignee: Toytalk, Inc.
Inventors: Oren M. Jacob, Martin Reddy
-
Publication number: 20140032471
Abstract: Systems and methods to create content for interactive synthetic characters are provided. In some embodiments, a conversation editor may be configured to create a traversable script for an interactive synthetic character by receiving conversation rules from a user. These rules can be used to match words or phrases that a user speaks or types, or to monitor for a physical movement of the user or synthetic character. Each rule can include responses to be performed by the interactive synthetic character. Examples of responses include producing audible or textual speech for the synthetic character, performing animations, playing sound effects, retrieving data, and the like. A traversable script can be generated from the conversation rules that, when executed by the synthetic character, allows for dynamic interactions. In some embodiments, the traversable script can be navigated by a state engine using navigational directives associated with the conversation rules.
Type: Application
Filed: July 25, 2012
Publication date: January 30, 2014
Applicant: TOYTALK, INC.
Inventors: Martin Reddy, Oren M. Jacob, Robert G. Podesta, Lucas R. A. Ives, Kathleen Hale
-
Publication number: 20140032467
Abstract: Systems and methods for modifying content for interactive synthetic characters are provided. In some embodiments, a traversable script for an interactive synthetic character may be modified based on a set of analytics relating to the use of the interactive synthetic character. These uses may include words spoken by the user, language spoken, user demographics, length of interactions, visually detected objects, current environmental conditions, or location. The traversable script may include a set of conversation rules that include actions to be performed by the interactive synthetic character. The actions can include, for example, producing audible or textual speech, performing one or more animations, playing one or more sound effects, retrieving data from one or more data sources, and the like. The traversable script can be modified, customized, or improved based on the feedback from the use of the character by a plurality of users.
Type: Application
Filed: July 25, 2012
Publication date: January 30, 2014
Applicant: TOYTALK, INC.
Inventors: Martin Reddy, Oren M. Jacob, Robert G. Podesta
-
Publication number: 20130022232
Abstract: A device/system and method for creating customized audio segments related to an object of interest are disclosed. The device and/or system can create an additional level of interaction with the object of interest by creating customized audio segments based on the identity of the object of interest and/or the user's interaction with the object of interest. Thus, the mobile device can create an interactive environment for a user interacting with an otherwise inanimate object.
Type: Application
Filed: July 19, 2011
Publication date: January 24, 2013
Inventors: Oren M. Jacob, Martin Reddy
-
Patent number: 8319778
Abstract: Variable motion blur is created by varying the evaluation time used to determine the poses of objects according to motion blur parameters when evaluating a blur frame. A blur parameter can be associated with one or more objects, portions of objects, or animation variables. The animation system modifies the time of the blur frame by a function including the blur parameter to determine poses of objects or portions thereof associated with the blur parameter in a blur frame. The animation system determines the values of animation variables at their modified times, rather than at the time of the blur frame, and poses objects or portions thereof accordingly. Multiple blur parameters can be used to evaluate the poses of different portions of a scene at different times for a blur frame. Portions of an object can be associated with different blur parameters, enabling motion blur to be varied within an object.
Type: Grant
Filed: January 31, 2008
Date of Patent: November 27, 2012
Assignee: Pixar
Inventors: Rick Sayre, Martin Reddy, Peter Bernard Demoreuille
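The time-warping idea above, evaluating an animation variable at a time modified by a per-object blur parameter rather than at the blur frame's own time, can be sketched minimally as follows. The specific warp function (a linear offset scaled by the parameter) is an assumed example; the patent only requires that the modification be a function of the blur parameter.

```python
def pose_at(animation, time):
    """Linearly interpolate an animation variable's value at `time`.
    `animation` is a pair of (time, value) keyframes."""
    (t0, v0), (t1, v1) = animation
    if t1 == t0:
        return v0
    a = (time - t0) / (t1 - t0)
    return v0 + a * (v1 - v0)

def blurred_pose(animation, frame_time, shutter_offset, blur_param):
    """Evaluate the pose at a warped time:
        t' = frame_time + blur_param * shutter_offset
    blur_param = 0 freezes the object (no blur contribution);
    blur_param = 1 samples the full shutter offset."""
    return pose_at(animation, frame_time + blur_param * shutter_offset)

anim = [(0.0, 0.0), (1.0, 10.0)]  # value moves 0 -> 10 over one second

full = blurred_pose(anim, 0.5, 0.1, 1.0)    # evaluated at warped time 0.6
frozen = blurred_pose(anim, 0.5, 0.1, 0.0)  # evaluated at unwarped time 0.5
print(full, frozen)
```

Assigning different `blur_param` values to different objects, or to different animation variables within one object, is what lets motion blur vary across and within objects.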
-
Patent number: 8281281
Abstract: Visual representations of different versions of an object are automatically generated and presented to users. Users can compare these visual representations to determine appropriate level of detail transition points based on the visual appearances of different versions of an object. The specified level of detail transition points are converted into level of detail parameter values to be used to select one or more versions of an object for rendering, simulation, or other tasks. The visual representation of each version of an object can include an image sequence of the version at a range of distances from the camera. Each image corresponds to a view of the version at a specific level of detail parameter value. A user interface allows users to view the image sequences associated with versions of an object as still images or animation. Users can select images in image sequences as level of detail transition points.
Type: Grant
Filed: September 14, 2006
Date of Patent: October 2, 2012
Assignee: Pixar
Inventors: Eliot Smyrl, Martin Reddy
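Once users have picked transition points from the image sequences, selecting a version at render time reduces to a threshold lookup over the resulting parameter values. This sketch assumes camera distance as the level-of-detail parameter; the names and thresholds are invented for illustration.

```python
import bisect

def select_lod(transitions, versions, distance):
    """Given user-chosen transition distances (ascending) between
    versions, pick the version to use at `distance` from the camera."""
    i = bisect.bisect_right(transitions, distance)
    return versions[i]

# Transition points chosen by inspecting the rendered image sequences:
# full model up to distance 10, medium up to 50, then low thereafter.
transitions = [10.0, 50.0]
versions = ["high", "medium", "low"]

print(select_lod(transitions, versions, 5.0))    # high
print(select_lod(transitions, versions, 30.0))   # medium
print(select_lod(transitions, versions, 120.0))  # low
```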