Patents by Inventor Mubbasir Kapadia
Mubbasir Kapadia has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11721081
Abstract: Systems and methods are provided for a workflow framework that scriptwriters can utilize when developing (live-action/animation/cinematic) virtual reality (VR) experiences or content. A script can be parsed to identify one or more elements in the script, and a VR representation of the one or more elements can be automatically generated. A user may develop or edit the script, which can be presented in a visual and temporal manner along with the VR representation. The user may edit the VR representation, and the visual and temporal presentation of the script can be commensurately represented. The script may be analyzed for consistency and/or cohesiveness in the context of the VR representation or experience. A preview of the VR experience or content can be generated from the script and/or the VR representation.
Type: Grant
Filed: February 20, 2020
Date of Patent: August 8, 2023
Assignee: Disney Enterprises, Inc.
Inventors: Sasha Anna Schriber, Isa Simo, Merada Richter, Mubbasir Kapadia, Markus Gross
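The abstract describes parsing a script into elements from which a VR representation is generated. A minimal sketch of such element extraction is below; the element categories (scene, character, dialogue, action) and the regular expressions are illustrative assumptions based on screenplay conventions, not the patented parser.

```python
import re

# Hypothetical element patterns; the patent does not enumerate element types.
SCENE_RE = re.compile(r"^(INT\.|EXT\.)\s+(.*)$")
CHARACTER_RE = re.compile(r"^[A-Z][A-Z ]+$")

def parse_script(text):
    """Parse screenplay-style text into (kind, content) elements."""
    elements = []
    current_character = None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            # A blank line ends the current dialogue block.
            current_character = None
            continue
        m = SCENE_RE.match(line)
        if m:
            elements.append(("scene", m.group(2)))
        elif CHARACTER_RE.match(line):
            current_character = line
            elements.append(("character", line))
        elif current_character:
            elements.append(("dialogue", (current_character, line)))
        else:
            elements.append(("action", line))
    return elements
```

Each extracted element could then be mapped to a VR asset (a set piece for a scene, an avatar for a character) to build the automatically generated representation.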
-
Publication number: 20220366210
Abstract: A system for simulating human-like affect-driven behavior includes a hardware processor and a system memory storing software code providing a virtual agent. The hardware processor executes the software code to identify a character assumed by the virtual agent and having a personality, a target state of motivational fulfillment, a baseline mood, and emotions. The software code identifies current physical and motivational fulfillment states, and currently active emotions of the character, and determines a current mood of the character based on the baseline mood and the currently active emotions. The software code further detects an experience by the character and plans multiple behaviors, including a first behavior based on the experience and the current physical state, a second behavior based on the experience, the personality, the current mood, and the currently active emotions, and a third behavior based on the target and current states of motivational fulfillment.
Type: Application
Filed: July 7, 2022
Publication date: November 17, 2022
Inventors: Jakob Buhmann, Douglas A. Fidaleo, Maayan Shvo, Mubbasir Kapadia, Jill Fain Lehman, Sarah K. Wulfeck, Rylan Gibbens, Steven Poulakos, John Wiseman
-
Patent number: 11416732
Abstract: A system for simulating human-like affect-driven behavior includes a hardware processor and a system memory storing software code providing a virtual agent. The hardware processor executes the software code to identify a character assumed by the virtual agent and having a personality, a target state of motivational fulfillment, a baseline mood, and emotions. The software code identifies current physical and motivational fulfillment states, and currently active emotions of the character, and determines a current mood of the character based on the baseline mood and the currently active emotions. The software code further detects an experience by the character and plans multiple behaviors, including a first behavior based on the experience and the current physical state, a second behavior based on the experience, the personality, the current mood, and the currently active emotions, and a third behavior based on the target and current states of motivational fulfillment.
Type: Grant
Filed: December 5, 2018
Date of Patent: August 16, 2022
Assignee: Disney Enterprises, Inc.
Inventors: Jakob Buhmann, Douglas A. Fidaleo, Maayan Shvo, Mubbasir Kapadia, Jill Fain Lehman, Sarah K. Wulfeck, Rylan Gibbens, Steven Poulakos, John Wiseman
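One step the abstract names is determining a current mood from the baseline mood and currently active emotions. A minimal sketch of one way to do that blending is below; the pleasure–arousal representation and the intensity-weighted average are assumptions for illustration, not the patented method.

```python
from dataclasses import dataclass

# Assumed representation: moods and emotions as points in
# pleasure-arousal space, with a decaying intensity on emotions.
@dataclass
class Emotion:
    pleasure: float
    arousal: float
    intensity: float

def current_mood(baseline, active_emotions):
    """Blend the baseline mood with intensity-weighted active emotions.

    baseline is a (pleasure, arousal) pair; with no active emotions the
    mood simply reverts to the baseline.
    """
    if not active_emotions:
        return baseline
    total = sum(e.intensity for e in active_emotions)
    p = sum(e.pleasure * e.intensity for e in active_emotions) / total
    a = sum(e.arousal * e.intensity for e in active_emotions) / total
    # Weight emotions against the baseline by combined intensity, capped at 1.
    w = min(total, 1.0)
    return ((1 - w) * baseline[0] + w * p,
            (1 - w) * baseline[1] + w * a)
```

The planned behaviors could then consult this mood value alongside personality traits when selecting among candidate actions.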
-
Patent number: 11269941
Abstract: Systems and methods are provided for a workflow framework that scriptwriters can utilize when developing scripts. A script can be parsed to identify one or more elements in the script, and various visual representations of the one or more elements and/or a scene characterized in the script can be automatically generated. A user may develop or edit the script, which can be presented in a visual and temporal manner. Information parsed from the script can be stored in basic information elements and used to create a knowledge base.
Type: Grant
Filed: October 9, 2018
Date of Patent: March 8, 2022
Assignee: Disney Enterprises, Inc.
Inventors: Sasha Anna Schriber, Rushit Sanghrajka, Wojciech Witon, Isabel Simo, Mubbasir Kapadia, Markus Gross, Daniel Inversini, Max Grosse, Eleftheria Tsipidi
-
Patent number: 10818312
Abstract: According to one implementation, an affect-driven dialog generation system includes a computing platform having a hardware processor and a system memory storing a software code including a sequence-to-sequence (seq2seq) architecture trained using a loss function having an affective regularizer term based on a difference in emotional content between a target dialog response and a dialog sequence determined by the seq2seq architecture during training. The hardware processor executes the software code to receive an input dialog sequence, and to use the seq2seq architecture to generate emotionally diverse dialog responses based on the input dialog sequence and a predetermined target emotion. The hardware processor further executes the software code to determine, using the seq2seq architecture, a final dialog sequence responsive to the input dialog sequence based on an emotional relevance of each of the emotionally diverse dialog responses, and to provide the final dialog sequence as an output.
Type: Grant
Filed: December 19, 2018
Date of Patent: October 27, 2020
Assignee: Disney Enterprises, Inc.
Inventors: Ashutosh Modi, Mubbasir Kapadia, Douglas A. Fidaleo, James R. Kennedy, Wojciech Witon, Pierre Colombo
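The abstract describes a seq2seq loss with an affective regularizer term penalizing the difference in emotional content between the target response and the generated sequence. A minimal numeric sketch of that loss shape is below; the Euclidean emotion distance, the function names, and the weighting term `lam` are illustrative assumptions, not the patent's actual formulation.

```python
import math

def cross_entropy(probs, target_ids):
    """Standard token-level negative log-likelihood for a seq2seq decoder.

    probs is a list of per-step probability vectors (lists indexed by
    token id); target_ids is the gold token at each step.
    """
    return -sum(math.log(p[t]) for p, t in zip(probs, target_ids)) / len(target_ids)

def affect_distance(emb_a, emb_b):
    """Euclidean distance between two emotion embeddings, standing in
    for the 'difference in emotional content' named in the abstract."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(emb_a, emb_b)))

def affective_loss(probs, target_ids, gen_emotion, target_emotion, lam=0.1):
    """Total loss = likelihood term + weighted affective regularizer."""
    return cross_entropy(probs, target_ids) + lam * affect_distance(gen_emotion, target_emotion)
```

With `lam = 0` this reduces to the ordinary seq2seq objective; increasing it trades fluency against matching the predetermined target emotion.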
-
Patent number: 10783689
Abstract: A user interface may be presented to a user that may provide an intuitive interface configured for generating animations. The user interface may be used by users of varying levels of expertise, while retaining the freedom of authoring complex narratives in an animation. A given narrative may be constrained by user input related to one or more of events to occur in an animation, individual animation components that may be included in the events, and/or other input. The system may be configured to “fill in” missing gaps in a narrative to generate a consistent animation while still meeting one or more narrative constraints specified by user input. By way of non-limiting example, gaps may be “filled in” by effectuating non-user selections of one or more of events, animation components, and/or other information used to generate an animation.
Type: Grant
Filed: November 19, 2015
Date of Patent: September 22, 2020
Assignee: Disney Enterprises, Inc.
Inventors: Alexander Shoulson, Mubbasir Kapadia, Robert Sumner
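The abstract describes filling gaps in a narrative with system-selected events while respecting user-specified constraints. A minimal sketch of that idea is below; the event-sequence encoding, the `compatible` predicate, and the greedy first-fit selection are assumptions for illustration, not the patented planner.

```python
def fill_gaps(events, library, compatible):
    """Replace unspecified events (None) with library events that are
    compatible with their neighbors, leaving user-chosen events intact.

    compatible(a, b) says event b may follow event a; it encodes the
    narrative constraints the filled sequence must still satisfy.
    """
    filled = list(events)
    for i, e in enumerate(filled):
        if e is None:
            prev_e = filled[i - 1] if i > 0 else None
            next_e = filled[i + 1] if i + 1 < len(filled) else None
            for candidate in library:
                # Greedy first-fit: take the first candidate that keeps
                # both neighboring transitions valid.
                if compatible(prev_e, candidate) and compatible(candidate, next_e):
                    filled[i] = candidate
                    break
    return filled
```

A real system would likely search over whole sequences rather than fill slots greedily, but the sketch shows the contract: user choices are constraints, and the system supplies consistent non-user selections for the rest.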
-
Publication number: 20200202887
Abstract: According to one implementation, an affect-driven dialog generation system includes a computing platform having a hardware processor and a system memory storing a software code including a sequence-to-sequence (seq2seq) architecture trained using a loss function having an affective regularizer term based on a difference in emotional content between a target dialog response and a dialog sequence determined by the seq2seq architecture during training. The hardware processor executes the software code to receive an input dialog sequence, and to use the seq2seq architecture to generate emotionally diverse dialog responses based on the input dialog sequence and a predetermined target emotion. The hardware processor further executes the software code to determine, using the seq2seq architecture, a final dialog sequence responsive to the input dialog sequence based on an emotional relevance of each of the emotionally diverse dialog responses, and to provide the final dialog sequence as an output.
Type: Application
Filed: December 19, 2018
Publication date: June 25, 2020
Inventors: Ashutosh Modi, Mubbasir Kapadia, Douglas A. Fidaleo, James R. Kennedy, Wojciech Witon, Pierre Colombo
-
Publication number: 20200193718
Abstract: Systems and methods are provided for a workflow framework that scriptwriters can utilize when developing (live-action/animation/cinematic) virtual reality (VR) experiences or content. A script can be parsed to identify one or more elements in the script, and a VR representation of the one or more elements can be automatically generated. A user may develop or edit the script, which can be presented in a visual and temporal manner along with the VR representation. The user may edit the VR representation, and the visual and temporal presentation of the script can be commensurately represented. The script may be analyzed for consistency and/or cohesiveness in the context of the VR representation or experience. A preview of the VR experience or content can be generated from the script and/or the VR representation.
Type: Application
Filed: February 20, 2020
Publication date: June 18, 2020
Inventors: Sasha Anna Schriber, Isa Simo, Merada Richter, Mubbasir Kapadia, Markus Gross
-
Publication number: 20200184306
Abstract: A system for simulating human-like affect-driven behavior includes a hardware processor and a system memory storing software code providing a virtual agent. The hardware processor executes the software code to identify a character assumed by the virtual agent and having a personality, a target state of motivational fulfillment, a baseline mood, and emotions. The software code identifies current physical and motivational fulfillment states, and currently active emotions of the character, and determines a current mood of the character based on the baseline mood and the currently active emotions. The software code further detects an experience by the character and plans multiple behaviors, including a first behavior based on the experience and the current physical state, a second behavior based on the experience, the personality, the current mood, and the currently active emotions, and a third behavior based on the target and current states of motivational fulfillment.
Type: Application
Filed: December 5, 2018
Publication date: June 11, 2020
Inventors: Jakob Buhmann, Douglas A. Fidaleo, Maayan Shvo, Mubbasir Kapadia, Jill Fain Lehman, Sarah K. Wulfeck, Rylan Gibbens, Steven Poulakos, John Wiseman
-
Patent number: 10586399
Abstract: Systems and methods are provided for a workflow framework that scriptwriters can utilize when developing (live-action/animation/cinematic) virtual reality (VR) experiences or content. A script can be parsed to identify one or more elements in the script, and a VR representation of the one or more elements can be automatically generated. A user may develop or edit the script, which can be presented in a visual and temporal manner along with the VR representation. The user may edit the VR representation, and the visual and temporal presentation of the script can be commensurately represented. The script may be analyzed for consistency and/or cohesiveness in the context of the VR representation or experience. A preview of the VR experience or content can be generated from the script and/or the VR representation.
Type: Grant
Filed: June 19, 2017
Date of Patent: March 10, 2020
Assignee: Disney Enterprises, Inc.
Inventors: Sasha Anna Schriber, Isa Simo, Merada Richter, Mubbasir Kapadia, Markus Gross
-
Patent number: 10575113
Abstract: Methods and systems for sound propagation and perception for autonomous agents in dynamic environments are described. Adaptive discretization of continuous sound signals allows one to obtain a minimal, yet sufficient sound packet representation (SPR) for human-like perception, and a hierarchical clustering scheme to facilitate approximate perception. Planar sound propagation of discretized sound signals exhibits acoustic properties such as attenuation, reflection, refraction, and diffraction, as well as multiple convoluted sound signals. Agent-based sound perception using hierarchical clustering analysis, which accommodates natural sound degradation due to audio distortion, facilitates approximate human-like perception.
Type: Grant
Filed: March 1, 2018
Date of Patent: February 25, 2020
Assignee: The Trustees of the University of Pennsylvania
Inventors: Norman I. Badler, Pengfei Huang, Mubbasir Kapadia
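Of the acoustic properties the abstract lists, attenuation is the simplest to sketch. The code below applies textbook inverse-square attenuation in decibels to discretized sound packets and keeps only those still audible to an agent; the packet encoding, function names, and hearing threshold are illustrative assumptions, not the patented propagation model.

```python
import math

def attenuate(intensity_db, distance_m, reference_m=1.0):
    """Point-source attenuation in decibels: -20 * log10(d / d0).

    A standard free-field model standing in for the patent's planar
    propagation; levels within the reference distance are unchanged.
    """
    return intensity_db - 20.0 * math.log10(max(distance_m, reference_m) / reference_m)

def perceived_packets(packets, agent_pos, hearing_threshold_db=0.0):
    """Keep only sound packets still audible at the agent's position.

    packets is a list of ((x, y), level_db) pairs; the returned pairs
    carry the attenuated level at the agent.
    """
    audible = []
    for (src_x, src_y), db in packets:
        d = math.hypot(agent_pos[0] - src_x, agent_pos[1] - src_y)
        level = attenuate(db, d)
        if level >= hearing_threshold_db:
            audible.append(((src_x, src_y), level))
    return audible
```

Reflection, refraction, and diffraction would modify the path length and level per packet before this audibility test, and the hierarchical clustering step would then group the surviving packets for approximate perception.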
-
Publication number: 20190107927
Abstract: Systems and methods are provided for a workflow framework that scriptwriters can utilize when developing scripts. A script can be parsed to identify one or more elements in the script, and various visual representations of the one or more elements and/or a scene characterized in the script can be automatically generated. A user may develop or edit the script, which can be presented in a visual and temporal manner. Information parsed from the script can be stored in basic information elements and used to create a knowledge base.
Type: Application
Filed: October 9, 2018
Publication date: April 11, 2019
Applicant: Disney Enterprises, Inc.
Inventors: Sasha Anna Schriber, Rushit Sanghrajka, Wojciech Witon, Isabel Simo, Mubbasir Kapadia, Markus Gross, Daniel Inversini, Max Grosse, Eleftheria Tsipidi
-
Publication number: 20180300958
Abstract: Systems and methods are provided for a workflow framework that scriptwriters can utilize when developing (live-action/animation/cinematic) virtual reality (VR) experiences or content. A script can be parsed to identify one or more elements in the script, and a VR representation of the one or more elements can be automatically generated. A user may develop or edit the script, which can be presented in a visual and temporal manner along with the VR representation. The user may edit the VR representation, and the visual and temporal presentation of the script can be commensurately represented. The script may be analyzed for consistency and/or cohesiveness in the context of the VR representation or experience. A preview of the VR experience or content can be generated from the script and/or the VR representation.
Type: Application
Filed: June 19, 2017
Publication date: October 18, 2018
Applicant: Disney Enterprises, Inc.
Inventors: Sasha Anna Schriber, Isa Simo, Merada Richter, Mubbasir Kapadia, Markus Gross
-
Patent number: 10067775
Abstract: There are provided systems and methods for guided authoring of interactive content. A content generation system enabling such guided authoring includes a system processor, a system memory, and an interactive content authoring engine stored in the system memory. The system processor is configured to execute the interactive content authoring engine to receive data corresponding to an interactive content through an authoring interface of the interactive content authoring engine, and to detect at least one of an inconsistency in the interactive content and a possible conflict arising from a user interaction with the interactive content. The system processor is further configured to execute the interactive content authoring engine to identify at least one solution for each inconsistency and/or possible conflict, and to resolve the inconsistency or inconsistencies and the possible conflict(s) to enable generation of a substantially conflict and inconsistency free interactive content.
Type: Grant
Filed: February 19, 2015
Date of Patent: September 4, 2018
Assignees: Disney Enterprises, Inc., ETH Zurich
Inventors: Mubbasir Kapadia, Fabio Zund, Robert Sumner
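Detecting an inconsistency in authored interactive content can be framed as checking each authored step's precondition against the world state produced by the steps before it. A minimal sketch of that check is below; the (name, precondition, effect) step encoding and the state-dict representation are assumptions for illustration, not the patented authoring engine.

```python
def find_inconsistencies(initial_state, steps):
    """Simulate authored steps in order and report those whose
    precondition fails against the accumulated state.

    Each step is (name, precondition, effect), where precondition maps
    a state dict to bool and effect maps a state dict to the next one.
    Failing steps are reported and skipped, so later problems surface too.
    """
    state = dict(initial_state)
    problems = []
    for name, precondition, effect in steps:
        if not precondition(state):
            problems.append(name)
        else:
            state = effect(state)
    return problems
```

The guided-authoring loop described in the abstract would go one step further: for each reported problem, propose a fix (for example, inserting a step that establishes the missing precondition) so the author ends up with substantially conflict- and inconsistency-free content.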
-
Publication number: 20180227693
Abstract: Methods and systems for sound propagation and perception for autonomous agents in dynamic environments are described. Adaptive discretization of continuous sound signals allows one to obtain a minimal, yet sufficient sound packet representation (SPR) for human-like perception, and a hierarchical clustering scheme to facilitate approximate perception. Planar sound propagation of discretized sound signals exhibits acoustic properties such as attenuation, reflection, refraction, and diffraction, as well as multiple convoluted sound signals. Agent-based sound perception using hierarchical clustering analysis, which accommodates natural sound degradation due to audio distortion, facilitates approximate human-like perception.
Type: Application
Filed: March 1, 2018
Publication date: August 9, 2018
Inventors: Norman I. Badler, Pengfei Huang, Mubbasir Kapadia
-
Patent number: 10042506
Abstract: There is provided a method for use by an interactive story development system. The method includes receiving a story state including an attribute state of each of a plurality of items present in a storyline, wherein the plurality of items include a plurality of objects of the storyline and a plurality of characters of the storyline. The method also includes creating a story web that includes a node for every possible interaction between the plurality of objects and the plurality of characters in the storyline, calculating a narrative value for each of the nodes of the story web, receiving a first input from a user selecting user criteria including at least one of a storytelling option of the storyline and a sentiment selection, and determining, based on the narrative value and the user criteria, a plurality of candidate nodes of the story web.
Type: Grant
Filed: March 19, 2015
Date of Patent: August 7, 2018
Assignee: Disney Enterprises, Inc.
Inventors: Mubbasir Kapadia, Robert Sumner, Alexander Shoulson
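The story web the abstract describes enumerates a node per possible character–object interaction, scores each with a narrative value, and then filters candidates against user criteria. A minimal sketch of that structure is below; representing nodes as (character, object) pairs, the `value_fn` scoring hook, and the threshold filter are illustrative assumptions, not the patented method.

```python
from itertools import product

def build_story_web(characters, objects, value_fn):
    """One node per possible character-object interaction, each scored
    by value_fn, a stand-in for the narrative-value calculation."""
    return {(c, o): value_fn(c, o) for c, o in product(characters, objects)}

def candidate_nodes(web, min_value):
    """Nodes meeting the user's criteria threshold, best first."""
    return sorted((n for n, v in web.items() if v >= min_value),
                  key=lambda n: -web[n])
```

In the patented system the user criteria also include storytelling options and a sentiment selection, which would enter here as additional filters or as inputs to the scoring function.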
-
Patent number: 9942683
Abstract: Methods and systems for sound propagation and perception for autonomous agents in dynamic environments are described. Adaptive discretization of continuous sound signals allows one to obtain a minimal, yet sufficient sound packet representation (SPR) for human-like perception, and a hierarchical clustering scheme to facilitate approximate perception. Planar sound propagation of discretized sound signals exhibits acoustic properties such as attenuation, reflection, refraction, and diffraction, as well as multiple convoluted sound signals. Agent-based sound perception using hierarchical clustering analysis, which accommodates natural sound degradation due to audio distortion, facilitates approximate human-like perception.
Type: Grant
Filed: July 16, 2014
Date of Patent: April 10, 2018
Assignee: The Trustees of the University of Pennsylvania
Inventors: Norman I. Badler, Pengfei Huang, Mubbasir Kapadia
-
Publication number: 20170148200
Abstract: A user interface may be presented to a user that may provide an intuitive interface configured for generating animations. The user interface may be used by users of varying levels of expertise, while retaining the freedom of authoring complex narratives in an animation. A given narrative may be constrained by user input related to one or more of events to occur in an animation, individual animation components that may be included in the events, and/or other input. The system may be configured to “fill in” missing gaps in a narrative to generate a consistent animation while still meeting one or more narrative constraints specified by user input. By way of non-limiting example, gaps may be “filled in” by effectuating non-user selections of one or more of events, animation components, and/or other information used to generate an animation.
Type: Application
Filed: November 19, 2015
Publication date: May 25, 2017
Inventors: Alexander Shoulson, Mubbasir Kapadia, Robert Sumner
-
Publication number: 20160274705
Abstract: There is provided a method for use by an interactive story development system. The method includes receiving a story state including an attribute state of each of a plurality of items present in a storyline, wherein the plurality of items include a plurality of objects of the storyline and a plurality of characters of the storyline. The method also includes creating a story web that includes a node for every possible interaction between the plurality of objects and the plurality of characters in the storyline, calculating a narrative value for each of the nodes of the story web, receiving a first input from a user selecting user criteria including at least one of a storytelling option of the storyline and a sentiment selection, and determining, based on the narrative value and the user criteria, a plurality of candidate nodes of the story web.
Type: Application
Filed: March 19, 2015
Publication date: September 22, 2016
Inventors: Mubbasir Kapadia, Robert Sumner, Alexander Shoulson
-
Publication number: 20160246613
Abstract: There are provided systems and methods for guided authoring of interactive content. A content generation system enabling such guided authoring includes a system processor, a system memory, and an interactive content authoring engine stored in the system memory. The system processor is configured to execute the interactive content authoring engine to receive data corresponding to an interactive content through an authoring interface of the interactive content authoring engine, and to detect at least one of an inconsistency in the interactive content and a possible conflict arising from a user interaction with the interactive content. The system processor is further configured to execute the interactive content authoring engine to identify at least one solution for each inconsistency and/or possible conflict, and to resolve the inconsistency or inconsistencies and the possible conflict(s) to enable generation of a substantially conflict and inconsistency free interactive content.
Type: Application
Filed: February 19, 2015
Publication date: August 25, 2016
Inventors: Mubbasir Kapadia, Fabio Zund, Robert Sumner