Patents by Inventor Mubbasir Kapadia

Mubbasir Kapadia has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11721081
    Abstract: Systems and methods are provided for a workflow framework that scriptwriters can utilize when developing (live-action/animation/cinematic) virtual reality (VR) experiences or content. A script can be parsed to identify one or more of its elements, and a VR representation of those elements can be automatically generated. A user may develop or edit the script, which can be presented in a visual and temporal manner along with the VR representation. The user may edit the VR representation, and the visual and temporal presentation of the script can be updated commensurately. The script may be analyzed for consistency and/or cohesiveness in the context of the VR representation or experience. A preview of the VR experience or content can be generated from the script and/or the VR representation.
    Type: Grant
    Filed: February 20, 2020
    Date of Patent: August 8, 2023
    Assignee: Disney Enterprises, Inc.
    Inventors: Sasha Anna Schriber, Isa Simo, Merada Richter, Mubbasir Kapadia, Markus Gross
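A minimal sketch of the first step this abstract describes: parsing a script into elements that a VR scene generator could consume. The element categories, regexes, and screenplay conventions below are illustrative assumptions, not the patented implementation.

```python
import re

# Two assumed screenplay conventions: "INT./EXT." scene headings and
# ALL-CAPS character cues.  Neither is taken from the patent.
SCENE_RE = re.compile(r"^(INT|EXT)\.\s+(.+)$")   # scene heading
CUE_RE = re.compile(r"^[A-Z][A-Z ]+$")           # character cue line

def parse_script(text):
    """Extract scene and character elements from raw script text."""
    scenes, characters = [], set()
    for line in text.splitlines():
        line = line.strip()
        heading = SCENE_RE.match(line)
        if heading:
            scenes.append({"setting": heading.group(1),
                           "location": heading.group(2)})
        elif CUE_RE.match(line):
            characters.add(line)
    return scenes, sorted(characters)
```

On a two-scene snippet, `parse_script` yields two scene elements plus the character names, which a downstream step could map to VR assets.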
  • Publication number: 20220366210
    Abstract: A system for simulating human-like affect-driven behavior includes a hardware processor and a system memory storing software code providing a virtual agent. The hardware processor executes the software code to identify a character assumed by the virtual agent and having a personality, a target state of motivational fulfillment, a baseline mood, and emotions. The software code identifies current physical and motivational fulfillment states, and currently active emotions of the character, and determines a current mood of the character based on the baseline mood and the currently active emotions. The software code further detects an experience by the character and plans multiple behaviors including a first behavior based on the experience and the current physical state, a second behavior based on the experience, the personality, the current mood, and the currently active emotions, and a third behavior based on the target and current states of motivational fulfillment.
    Type: Application
    Filed: July 7, 2022
    Publication date: November 17, 2022
    Inventors: Jakob Buhmann, Douglas A. Fidaleo, Maayan Shvo, Mubbasir Kapadia, Jill Fain Lehman, Sarah K. Wulfeck, Rylan Gibbens, Steven Poulakos, John Wiseman
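An illustrative sketch of the three behavior tiers the abstract describes (physical state, mood/emotion, motivational fulfillment). The class name, numeric scales, and the mood blend are assumptions for illustration, not the patented model.

```python
from dataclasses import dataclass, field

@dataclass
class Character:
    personality: str                       # e.g. "extravert"
    baseline_mood: float                   # -1 (negative) .. +1 (positive)
    active_emotions: dict = field(default_factory=dict)  # emotion -> intensity
    hunger: float = 0.0                    # current physical state, 0..1
    social_need: float = 0.0               # motivational fulfillment gap, 0..1

    def current_mood(self):
        """Baseline mood shifted by the mean intensity of active emotions."""
        if not self.active_emotions:
            return self.baseline_mood
        shift = sum(self.active_emotions.values()) / len(self.active_emotions)
        return max(-1.0, min(1.0, self.baseline_mood + shift))

def plan_behaviors(char, experience):
    """Plan the three behaviors the abstract lists, as simple labels."""
    first = "eat" if char.hunger > 0.5 else "continue"            # physical state
    second = ("celebrate " + experience) if char.current_mood() > 0 \
             else ("brood over " + experience)                    # mood/emotions
    third = "seek company" if char.social_need > 0.5 else "stay"  # motivation
    return first, second, third
```

A hungry, joyful, socially unfulfilled character would thus plan to eat, to celebrate the experience, and to seek company.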
  • Patent number: 11416732
    Abstract: A system for simulating human-like affect-driven behavior includes a hardware processor and a system memory storing software code providing a virtual agent. The hardware processor executes the software code to identify a character assumed by the virtual agent and having a personality, a target state of motivational fulfillment, a baseline mood, and emotions. The software code identifies current physical and motivational fulfillment states, and currently active emotions of the character, and determines a current mood of the character based on the baseline mood and the currently active emotions. The software code further detects an experience by the character and plans multiple behaviors including a first behavior based on the experience and the current physical state, a second behavior based on the experience, the personality, the current mood, and the currently active emotions, and a third behavior based on the target and current states of motivational fulfillment.
    Type: Grant
    Filed: December 5, 2018
    Date of Patent: August 16, 2022
    Assignee: Disney Enterprises, Inc.
    Inventors: Jakob Buhmann, Douglas A. Fidaleo, Maayan Shvo, Mubbasir Kapadia, Jill Fain Lehman, Sarah K. Wulfeck, Rylan Gibbens, Steven Poulakos, John Wiseman
  • Patent number: 11269941
    Abstract: Systems and methods are provided for a workflow framework that scriptwriters can utilize when developing scripts. A script can be parsed to identify one or more of its elements, and various visual representations of those elements and/or a scene characterized in the script can be automatically generated. A user may develop or edit the script, which can be presented in a visual and temporal manner. Information parsed from the script can be stored in basic information elements and used to create a knowledge base.
    Type: Grant
    Filed: October 9, 2018
    Date of Patent: March 8, 2022
    Assignee: Disney Enterprises, Inc.
    Inventors: Sasha Anna Schriber, Rushit Sanghrajka, Wojciech Witon, Isabel Simo, Mubbasir Kapadia, Markus Gross, Daniel Inversini, Max Grosse, Eleftheria Tsipidi
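A sketch of the "basic information elements" idea from the abstract: parsed script facts stored as (subject, verb, object) triples that accumulate into a knowledge base. The naive whitespace splitting is an illustrative assumption, not the patented representation.

```python
def extract_triples(sentence):
    """Very naive triple extraction: first word, second word, the rest."""
    words = sentence.rstrip(".").split()
    return [(words[0], words[1], " ".join(words[2:]))] if len(words) >= 3 else []

class KnowledgeBase:
    def __init__(self):
        self.facts = set()

    def ingest(self, script_lines):
        """Parse script lines into triples and store them as facts."""
        for line in script_lines:
            self.facts.update(extract_triples(line))

    def about(self, subject):
        """All stored facts whose subject matches."""
        return sorted(f for f in self.facts if f[0] == subject)
```

Ingesting "Alice enters the hall." stores the fact ("Alice", "enters", "the hall"), which later queries about Alice can retrieve.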
  • Patent number: 10818312
    Abstract: According to one implementation, an affect-driven dialog generation system includes a computing platform having a hardware processor and a system memory storing a software code including a sequence-to-sequence (seq2seq) architecture trained using a loss function having an affective regularizer term based on a difference in emotional content between a target dialog response and a dialog sequence determined by the seq2seq architecture during training. The hardware processor executes the software code to receive an input dialog sequence, and to use the seq2seq architecture to generate emotionally diverse dialog responses based on the input dialog sequence and a predetermined target emotion. The hardware processor further executes the software code to determine, using the seq2seq architecture, a final dialog sequence responsive to the input dialog sequence based on an emotional relevance of each of the emotionally diverse dialog responses, and to provide the final dialog sequence as an output.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: October 27, 2020
    Assignee: Disney Enterprises, Inc.
    Inventors: Ashutosh Modi, Mubbasir Kapadia, Douglas A. Fidaleo, James R. Kennedy, Wojciech Witon, Pierre Colombo
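A sketch of the loss shape the abstract describes: standard sequence cross-entropy plus an affective regularizer penalizing the emotional gap between the generated response and the target emotion. The tiny valence lexicon and the weight lam are illustrative assumptions, not the patented architecture.

```python
import math

# Toy emotion lexicon: word -> valence in [-1, 1] (assumed, not from the patent).
LEXICON = {"love": 0.9, "great": 0.7, "fine": 0.1, "sad": -0.8, "hate": -0.9}

def emotion_score(tokens):
    """Mean valence of lexicon words in the response; 0.0 if none match."""
    hits = [LEXICON[t] for t in tokens if t in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

def affective_loss(token_probs, response_tokens, target_emotion, lam=0.5):
    """Mean negative log-likelihood plus lam * |emotion gap| regularizer."""
    ce = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return ce + lam * abs(emotion_score(response_tokens) - target_emotion)
```

A response whose valence matches the target emotion incurs no extra penalty; a response at the opposite pole adds up to lam times the full valence gap.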
  • Patent number: 10783689
    Abstract: A user interface may be presented to a user, providing an intuitive interface configured for generating animations. The user interface may be usable by users of varying levels of expertise, while retaining the freedom of authoring complex narratives in an animation. A given narrative may be constrained by user input related to one or more of events to occur in an animation, individual animation components that may be included in the events, and/or other input. The system may be configured to “fill in” missing gaps in a narrative to generate a consistent animation while still meeting one or more narrative constraints specified by user input. By way of non-limiting example, gaps may be “filled in” by effectuating non-user selections of one or more of events, animation components, and/or other information used to generate an animation.
    Type: Grant
    Filed: November 19, 2015
    Date of Patent: September 22, 2020
    Assignee: Disney Enterprises, Inc.
    Inventors: Alexander Shoulson, Mubbasir Kapadia, Robert Sumner
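A sketch of "filling in" narrative gaps: the user pins some events, and a breadth-first search inserts connecting events so every adjacent pair is a legal transition. The tiny event graph is an illustrative assumption, not the patented planner.

```python
from collections import deque

# Assumed authored domain: which narrative events may follow which.
TRANSITIONS = {
    "meet": ["argue", "travel"],
    "argue": ["fight", "reconcile"],
    "travel": ["arrive"],
    "fight": ["reconcile"],
    "reconcile": ["travel"],
    "arrive": [],
}

def fill_gaps(pinned):
    """Connect user-pinned events with shortest legal event sequences."""
    story = [pinned[0]]
    for target in pinned[1:]:
        frontier = deque([[story[-1]]])
        while frontier:
            path = frontier.popleft()
            if path[-1] == target:
                story.extend(path[1:])  # splice the connecting events in
                break
            for succ in TRANSITIONS.get(path[-1], []):
                if succ not in path:
                    frontier.append(path + [succ])
        else:
            raise ValueError(f"no legal events connect {story[-1]!r} to {target!r}")
    return story
```

Pinning only "meet" and "arrive" yields the full consistent sequence meet, travel, arrive, with the intermediate event chosen without user input.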
  • Publication number: 20200202887
    Abstract: According to one implementation, an affect-driven dialog generation system includes a computing platform having a hardware processor and a system memory storing a software code including a sequence-to-sequence (seq2seq) architecture trained using a loss function having an affective regularizer term based on a difference in emotional content between a target dialog response and a dialog sequence determined by the seq2seq architecture during training. The hardware processor executes the software code to receive an input dialog sequence, and to use the seq2seq architecture to generate emotionally diverse dialog responses based on the input dialog sequence and a predetermined target emotion. The hardware processor further executes the software code to determine, using the seq2seq architecture, a final dialog sequence responsive to the input dialog sequence based on an emotional relevance of each of the emotionally diverse dialog responses, and to provide the final dialog sequence as an output.
    Type: Application
    Filed: December 19, 2018
    Publication date: June 25, 2020
    Inventors: Ashutosh Modi, Mubbasir Kapadia, Douglas A. Fidaleo, James R. Kennedy, Wojciech Witon, Pierre Colombo
  • Publication number: 20200193718
    Abstract: Systems and methods are provided for a workflow framework that scriptwriters can utilize when developing (live-action/animation/cinematic) virtual reality (VR) experiences or content. A script can be parsed to identify one or more of its elements, and a VR representation of those elements can be automatically generated. A user may develop or edit the script, which can be presented in a visual and temporal manner along with the VR representation. The user may edit the VR representation, and the visual and temporal presentation of the script can be updated commensurately. The script may be analyzed for consistency and/or cohesiveness in the context of the VR representation or experience. A preview of the VR experience or content can be generated from the script and/or the VR representation.
    Type: Application
    Filed: February 20, 2020
    Publication date: June 18, 2020
    Inventors: Sasha Anna Schriber, Isa Simo, Merada Richter, Mubbasir Kapadia, Markus Gross
  • Publication number: 20200184306
    Abstract: A system for simulating human-like affect-driven behavior includes a hardware processor and a system memory storing software code providing a virtual agent. The hardware processor executes the software code to identify a character assumed by the virtual agent and having a personality, a target state of motivational fulfillment, a baseline mood, and emotions. The software code identifies current physical and motivational fulfillment states, and currently active emotions of the character, and determines a current mood of the character based on the baseline mood and the currently active emotions. The software code further detects an experience by the character and plans multiple behaviors including a first behavior based on the experience and the current physical state, a second behavior based on the experience, the personality, the current mood, and the currently active emotions, and a third behavior based on the target and current states of motivational fulfillment.
    Type: Application
    Filed: December 5, 2018
    Publication date: June 11, 2020
    Inventors: Jakob Buhmann, Douglas A. Fidaleo, Maayan Shvo, Mubbasir Kapadia, Jill Fain Lehman, Sarah K. Wulfeck, Rylan Gibbens, Steven Poulakos, John Wiseman
  • Patent number: 10586399
    Abstract: Systems and methods are provided for a workflow framework that scriptwriters can utilize when developing (live-action/animation/cinematic) virtual reality (VR) experiences or content. A script can be parsed to identify one or more of its elements, and a VR representation of those elements can be automatically generated. A user may develop or edit the script, which can be presented in a visual and temporal manner along with the VR representation. The user may edit the VR representation, and the visual and temporal presentation of the script can be updated commensurately. The script may be analyzed for consistency and/or cohesiveness in the context of the VR representation or experience. A preview of the VR experience or content can be generated from the script and/or the VR representation.
    Type: Grant
    Filed: June 19, 2017
    Date of Patent: March 10, 2020
    Assignee: Disney Enterprises, Inc.
    Inventors: Sasha Anna Schriber, Isa Simo, Merada Richter, Mubbasir Kapadia, Markus Gross
  • Patent number: 10575113
    Abstract: Methods and systems for sound propagation and perception for autonomous agents in dynamic environments are described. Adaptive discretization of continuous sound signals yields a minimal yet sufficient sound packet representation (SPR) for human-like perception, and a hierarchical clustering scheme facilitates approximate perception. Planar sound propagation of discretized sound signals exhibits acoustic properties such as attenuation, reflection, refraction, and diffraction, as well as multiple convoluted sound signals. Agent-based sound perception using hierarchical clustering analysis that accommodates natural sound degradation due to audio distortion facilitates approximate human-like perception.
    Type: Grant
    Filed: March 1, 2018
    Date of Patent: February 25, 2020
    Assignee: The Trustees of the University of Pennsylvania
    Inventors: Norman I. Badler, Pengfei Huang, Mubbasir Kapadia
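A sketch of distance attenuation for discretized "sound packets": the standard -6 dB-per-doubling free-field spreading model plus a hearing threshold. This is a textbook acoustics simplification, not the patented SPR scheme.

```python
import math

def attenuate(level_db, distance_m):
    """Sound level after free-field spreading from a 1 m reference."""
    return level_db - 20 * math.log10(max(distance_m, 1.0))

def perceived_packets(packets, agent_pos, threshold_db=20.0):
    """Packets an agent can still hear; packet = (source_pos, level_db, label)."""
    heard = []
    for source_pos, level_db, label in packets:
        level = attenuate(level_db, math.dist(source_pos, agent_pos))
        if level >= threshold_db:
            heard.append((label, round(level, 1)))
    return heard
```

An 80 dB alarm 10 m away attenuates to 60 dB and is perceived; a 40 dB whisper 90 m away falls below the threshold and is dropped.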
  • Publication number: 20190107927
    Abstract: Systems and methods are provided for a workflow framework that scriptwriters can utilize when developing scripts. A script can be parsed to identify one or more of its elements, and various visual representations of those elements and/or a scene characterized in the script can be automatically generated. A user may develop or edit the script, which can be presented in a visual and temporal manner. Information parsed from the script can be stored in basic information elements and used to create a knowledge base.
    Type: Application
    Filed: October 9, 2018
    Publication date: April 11, 2019
    Applicant: Disney Enterprises, Inc.
    Inventors: Sasha Anna Schriber, Rushit Sanghrajka, Wojciech Witon, Isabel Simo, Mubbasir Kapadia, Markus Gross, Daniel Inversini, Max Grosse, Eleftheria Tsipidi
  • Publication number: 20180300958
    Abstract: Systems and methods are provided for a workflow framework that scriptwriters can utilize when developing (live-action/animation/cinematic) virtual reality (VR) experiences or content. A script can be parsed to identify one or more of its elements, and a VR representation of those elements can be automatically generated. A user may develop or edit the script, which can be presented in a visual and temporal manner along with the VR representation. The user may edit the VR representation, and the visual and temporal presentation of the script can be updated commensurately. The script may be analyzed for consistency and/or cohesiveness in the context of the VR representation or experience. A preview of the VR experience or content can be generated from the script and/or the VR representation.
    Type: Application
    Filed: June 19, 2017
    Publication date: October 18, 2018
    Applicant: Disney Enterprises, Inc.
    Inventors: Sasha Anna Schriber, Isa Simo, Merada Richter, Mubbasir Kapadia, Markus Gross
  • Patent number: 10067775
    Abstract: There are provided systems and methods for guided authoring of interactive content. A content generation system enabling such guided authoring includes a system processor, a system memory, and an interactive content authoring engine stored in the system memory. The system processor is configured to execute the interactive content authoring engine to receive data corresponding to an interactive content through an authoring interface of the interactive content authoring engine, and to detect at least one of an inconsistency in the interactive content and a possible conflict arising from a user interaction with the interactive content. The system processor is further configured to execute the interactive content authoring engine to identify at least one solution for each inconsistency and/or possible conflict, and to resolve the inconsistency or inconsistencies and the possible conflict(s) to enable generation of a substantially conflict-free and inconsistency-free interactive content.
    Type: Grant
    Filed: February 19, 2015
    Date of Patent: September 4, 2018
    Assignees: Disney Enterprises, Inc., ETH Zurich
    Inventors: Mubbasir Kapadia, Fabio Zund, Robert Sumner
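A sketch of guided-authoring consistency checking as the abstract outlines: scan draft content for inconsistencies (here, events whose actor was never declared) and propose a resolution for each. The content model is an illustrative assumption, not the patented engine.

```python
def check_content(characters, events):
    """Return (issues, fixes) for a draft scene; event = (actor, action)."""
    declared = set(characters)
    issues, fixes = [], []
    for i, (actor, action) in enumerate(events):
        if actor not in declared:
            issues.append(f"event {i}: undefined character {actor!r} ({action})")
            fixes.append(f"declare character {actor!r}")
    return issues, fixes
```

Applying each suggested fix (declaring the flagged characters) would leave the draft free of this class of inconsistency.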
  • Publication number: 20180227693
    Abstract: Methods and systems for sound propagation and perception for autonomous agents in dynamic environments are described. Adaptive discretization of continuous sound signals yields a minimal yet sufficient sound packet representation (SPR) for human-like perception, and a hierarchical clustering scheme facilitates approximate perception. Planar sound propagation of discretized sound signals exhibits acoustic properties such as attenuation, reflection, refraction, and diffraction, as well as multiple convoluted sound signals. Agent-based sound perception using hierarchical clustering analysis that accommodates natural sound degradation due to audio distortion facilitates approximate human-like perception.
    Type: Application
    Filed: March 1, 2018
    Publication date: August 9, 2018
    Inventors: Norman I. Badler, Pengfei Huang, Mubbasir Kapadia
  • Patent number: 10042506
    Abstract: There is provided a method for use by an interactive story development system. The method includes receiving a story state including an attribute state of each of a plurality of items present in a storyline, wherein the plurality of items include a plurality of objects of the storyline and a plurality of characters of the storyline. The method also includes creating a story web, wherein the story web includes a node for every possible interaction between the plurality of objects and the plurality of characters in the storyline, calculating a narrative value for each of the nodes of the story web, receiving a first input from a user selecting user criteria including at least one of a storytelling option of the storyline and a sentiment selection, and determining, based on the narrative value and the user criteria, a plurality of candidate nodes of the story web.
    Type: Grant
    Filed: March 19, 2015
    Date of Patent: August 7, 2018
    Assignee: Disney Enterprises, Inc.
    Inventors: Mubbasir Kapadia, Robert Sumner, Alexander Shoulson
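A sketch of the "story web" idea: one node per possible character-object interaction, each scored with a narrative value, then filtered by user criteria. The scoring function and the threshold criterion are illustrative assumptions, not the patented method.

```python
import itertools

def build_story_web(characters, objects, value_fn):
    """Map every (character, object) interaction node to a narrative value."""
    return {(c, o): value_fn(c, o) for c, o in itertools.product(characters, objects)}

def candidate_nodes(web, min_value):
    """User-criteria sketch: nodes whose narrative value clears a threshold."""
    return sorted(node for node, value in web.items() if value >= min_value)
```

With a toy scoring function, raising the threshold narrows the candidate nodes offered back to the author.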
  • Patent number: 9942683
    Abstract: Methods and systems for sound propagation and perception for autonomous agents in dynamic environments are described. Adaptive discretization of continuous sound signals yields a minimal yet sufficient sound packet representation (SPR) for human-like perception, and a hierarchical clustering scheme facilitates approximate perception. Planar sound propagation of discretized sound signals exhibits acoustic properties such as attenuation, reflection, refraction, and diffraction, as well as multiple convoluted sound signals. Agent-based sound perception using hierarchical clustering analysis that accommodates natural sound degradation due to audio distortion facilitates approximate human-like perception.
    Type: Grant
    Filed: July 16, 2014
    Date of Patent: April 10, 2018
    Assignee: The Trustees of the University of Pennsylvania
    Inventors: Norman I. Badler, Pengfei Huang, Mubbasir Kapadia
  • Publication number: 20170148200
    Abstract: A user interface may be presented to a user, providing an intuitive interface configured for generating animations. The user interface may be usable by users of varying levels of expertise, while retaining the freedom of authoring complex narratives in an animation. A given narrative may be constrained by user input related to one or more of events to occur in an animation, individual animation components that may be included in the events, and/or other input. The system may be configured to “fill in” missing gaps in a narrative to generate a consistent animation while still meeting one or more narrative constraints specified by user input. By way of non-limiting example, gaps may be “filled in” by effectuating non-user selections of one or more of events, animation components, and/or other information used to generate an animation.
    Type: Application
    Filed: November 19, 2015
    Publication date: May 25, 2017
    Inventors: Alexander Shoulson, Mubbasir Kapadia, Robert Sumner
  • Publication number: 20160274705
    Abstract: There is provided a method for use by an interactive story development system. The method includes receiving a story state including an attribute state of each of a plurality of items present in a storyline, wherein the plurality of items include a plurality of objects of the storyline and a plurality of characters of the storyline. The method also includes creating a story web, wherein the story web includes a node for every possible interaction between the plurality of objects and the plurality of characters in the storyline, calculating a narrative value for each of the nodes of the story web, receiving a first input from a user selecting user criteria including at least one of a storytelling option of the storyline and a sentiment selection, and determining, based on the narrative value and the user criteria, a plurality of candidate nodes of the story web.
    Type: Application
    Filed: March 19, 2015
    Publication date: September 22, 2016
    Inventors: Mubbasir Kapadia, Robert Sumner, Alexander Shoulson
  • Publication number: 20160246613
    Abstract: There are provided systems and methods for guided authoring of interactive content. A content generation system enabling such guided authoring includes a system processor, a system memory, and an interactive content authoring engine stored in the system memory. The system processor is configured to execute the interactive content authoring engine to receive data corresponding to an interactive content through an authoring interface of the interactive content authoring engine, and to detect at least one of an inconsistency in the interactive content and a possible conflict arising from a user interaction with the interactive content. The system processor is further configured to execute the interactive content authoring engine to identify at least one solution for each inconsistency and/or possible conflict, and to resolve the inconsistency or inconsistencies and the possible conflict(s) to enable generation of a substantially conflict-free and inconsistency-free interactive content.
    Type: Application
    Filed: February 19, 2015
    Publication date: August 25, 2016
    Inventors: Mubbasir Kapadia, Fabio Zund, Robert Sumner