Patents by Inventor Douglas Fidaleo

Douglas Fidaleo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11099641
    Abstract: The present disclosure relates to calibration, customization, and improved user experiences for smart or bionic lenses that are worn by a user. The calibration techniques include detecting and correcting distortion of a display of the bionic lenses, as well as distortion due to characteristics of the lens or eyes of the user. The customization techniques include utilizing the bionic lenses to detect eye characteristics that can be used to improve insertion of the bionic lenses, track health over time, and provide user alerts. The user experiences include interactive environments and animation techniques that are improved via the bionic lenses.
    Type: Grant
    Filed: June 27, 2019
    Date of Patent: August 24, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Quinn Y. J. Smithwick, Jon H. Snoddy, Douglas A. Fidaleo
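    The calibration step described in the abstract above (detecting and correcting display distortion) can be illustrated with a minimal sketch. This is a hypothetical example, not taken from the patent: it assumes a simple first-order radial distortion model with an illustrative coefficient `k1`.

    ```python
    # Hypothetical sketch of display-distortion correction for a lens display:
    # map each distorted coordinate back toward its ideal position using a
    # first-order radial model. The model and coefficient are assumptions.

    def undistort(x, y, k1=-0.05):
        """Approximate inverse of radial distortion: r_ideal ~ r * (1 + k1 * r^2)."""
        r2 = x * x + y * y          # squared distance from display center
        scale = 1.0 + k1 * r2       # first-order radial correction factor
        return x * scale, y * scale

    # The display center is unaffected; with k1 < 0, edge points are pulled inward.
    ```

    In practice the correction would be fit per-user during calibration rather than using a fixed coefficient.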
  • Patent number: 11074736
    Abstract: According to one implementation, a system for performing cosmetic transformation through image synthesis includes a computing platform having a hardware processor and a system memory storing an image synthesis software code. The hardware processor executes the image synthesis software code to receive a user image depicting a user of the system, to receive a reference image for transforming to resemble the user image, the reference image projecting an identifiable persona, and to generate a user image metadata describing the user image, based on the user image. The hardware processor further executes the image synthesis software code to identify a reference image metadata describing the reference image, and to transform the reference image to resemble the user image, based on the user image metadata and the reference image metadata, while preserving the identifiable persona projected by the reference image.
    Type: Grant
    Filed: October 22, 2019
    Date of Patent: July 27, 2021
    Assignee: Disney Enterprises, Inc.
    Inventor: Douglas A. Fidaleo
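    The metadata-driven transform described above can be sketched as follows. This is a hedged illustration, not the patented implementation: the attribute names and the set of persona-defining fields are assumptions chosen to show the idea of adapting reference metadata to the user while preserving the identifiable persona.

    ```python
    # Minimal sketch: adopt user-image attributes into the reference-image
    # metadata, except for attributes that define the identifiable persona,
    # which are preserved. All field names here are illustrative assumptions.

    PERSONA_KEYS = {"character_identity", "signature_makeup"}  # assumed persona fields

    def transform_metadata(reference_meta, user_meta):
        """Return reference metadata adapted to the user, keeping persona fields."""
        result = dict(reference_meta)
        for key, value in user_meta.items():
            if key not in PERSONA_KEYS:
                result[key] = value  # adopt user characteristic (e.g. face shape)
        return result
    ```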
  • Patent number: 11062692
    Abstract: An audio processing system for generating audio including emotionally expressive synthesized content includes a computing platform having a hardware processor and a memory storing a software code including a trained neural network. The hardware processor is configured to execute the software code to receive an audio sequence template including one or more audio segment(s) and an audio gap, and to receive data describing one or more words for insertion into the audio gap. The hardware processor is configured to further execute the software code to use the trained neural network to generate an integrated audio sequence using the audio sequence template and the data, the integrated audio sequence including the one or more audio segment(s) and at least one synthesized word corresponding to the one or more words described by the data.
    Type: Grant
    Filed: September 23, 2019
    Date of Patent: July 13, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Salvator D. Lombardo, Komath Naveen Kumar, Douglas A. Fidaleo
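    The template-and-gap flow described in the abstract above can be sketched in a few lines. This is an illustrative assumption-laden example: `synthesize()` is a stand-in for the trained neural network, which is not reproduced here, and the `"GAP"` marker is an invented convention.

    ```python
    # Hedged sketch: an audio sequence template holds segments plus a gap
    # marker; words are synthesized into the gap to form the integrated
    # audio sequence. synthesize() stands in for the neural synthesizer.

    def synthesize(words):
        # Stand-in for the trained network: returns labeled synthetic audio.
        return [f"<synth:{w}>" for w in words]

    def fill_template(template, gap_words):
        """Replace the GAP marker with synthesized words."""
        out = []
        for segment in template:
            if segment == "GAP":
                out.extend(synthesize(gap_words))
            else:
                out.append(segment)
        return out

    # fill_template(["seg1", "GAP", "seg2"], ["hello"])
    # → ["seg1", "<synth:hello>", "seg2"]
    ```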
  • Patent number: 11018997
    Abstract: Systems and methods for an interactive communications system capable of generating a response to conversational input are provided. The interactive communications system analyzes the conversational input to determine relevant topics of discussion. The interactive communications system further determines which of the relevant topics of discussion can potentially lead to an unwanted end to a conversation. The interactive communications system redirects the conversation by providing responses to the conversational input that are intended simply to avoid the unwanted end to the conversation.
    Type: Grant
    Filed: April 12, 2018
    Date of Patent: May 25, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Raymond Scanlon, Douglas Fidaleo
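    The redirection behavior described above can be sketched as a simple lookup. The topic labels and canned responses below are hypothetical examples, not from the patent; real topic detection would come from a dialog-understanding component.

    ```python
    # Illustrative sketch: if any detected topic is on a (hypothetical) list
    # of topics that can lead to an unwanted end of conversation, respond
    # with a redirecting prompt instead of engaging the topic.

    ENDING_TOPICS = {"goodbye", "boredom"}  # assumed unwanted-end topics
    REDIRECTS = {
        "goodbye": "Before you go, want to hear one more story?",
        "boredom": "How about we try a different game?",
    }

    def respond(detected_topics):
        """Return a redirecting response if an unwanted-end topic is present."""
        for topic in detected_topics:
            if topic in ENDING_TOPICS:
                return REDIRECTS[topic]
        return None  # no redirection needed; normal dialog generation proceeds
    ```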
  • Publication number: 20210090549
    Abstract: An audio processing system for generating audio including emotionally expressive synthesized content includes a computing platform having a hardware processor and a memory storing a software code including a trained neural network. The hardware processor is configured to execute the software code to receive an audio sequence template including one or more audio segment(s) and an audio gap, and to receive data describing one or more words for insertion into the audio gap. The hardware processor is configured to further execute the software code to use the trained neural network to generate an integrated audio sequence using the audio sequence template and the data, the integrated audio sequence including the one or more audio segment(s) and at least one synthesized word corresponding to the one or more words described by the data.
    Type: Application
    Filed: September 23, 2019
    Publication date: March 25, 2021
    Inventors: Salvator D. Lombardo, Komath Naveen Kumar, Douglas A. Fidaleo
  • Publication number: 20210019509
    Abstract: Systems and methods are described for dynamically adjusting an amount of retrieved recognition data based on the needs of a show, experience, or other event where participants are recognized. The retrieved recognition data may be deleted once it is no longer needed for the event. Recognition data retrieval is limited to just what is needed for the particular task, minimizing the uniqueness of any retrieved recognition data to respect participant privacy while providing an enhanced participant experience through recognition.
    Type: Application
    Filed: September 30, 2020
    Publication date: January 21, 2021
    Applicant: Disney Enterprises, Inc.
    Inventor: Douglas Fidaleo
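    The data-minimization idea described above can be sketched with a small session object. The task names and profile fields are illustrative assumptions; the point is that only task-required fields are retrieved, and retrieved data is deleted when the event ends.

    ```python
    # Hypothetical sketch: each task declares the recognition fields it needs;
    # only those fields are retrieved, minimizing the uniqueness of retrieved
    # data, and everything retrieved is deleted once the event is over.

    TASK_NEEDS = {
        "greet_by_name": {"name"},
        "seat_assignment": {"party_size"},
    }

    class RecognitionSession:
        def __init__(self, full_profile):
            self._source = full_profile  # would remain server-side in practice
            self.retrieved = {}

        def retrieve_for(self, task):
            """Pull only the fields this task needs."""
            for field in TASK_NEEDS[task]:
                self.retrieved[field] = self._source[field]

        def end_event(self):
            """Delete retrieved recognition data once no longer needed."""
            self.retrieved.clear()
    ```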
  • Publication number: 20200409458
    Abstract: The present disclosure relates to calibration, customization, and improved user experiences for smart or bionic lenses that are worn by a user. The calibration techniques include detecting and correcting distortion of a display of the bionic lenses, as well as distortion due to characteristics of the lens or eyes of the user. The customization techniques include utilizing the bionic lenses to detect eye characteristics that can be used to improve insertion of the bionic lenses, track health over time, and provide user alerts. The user experiences include interactive environments and animation techniques that are improved via the bionic lenses.
    Type: Application
    Filed: June 27, 2019
    Publication date: December 31, 2020
    Inventors: Quinn Y. J. Smithwick, Jon H. Snoddy, Douglas A. Fidaleo
  • Patent number: 10832043
    Abstract: Systems and methods are described for dynamically adjusting an amount of retrieved recognition data based on the needs of a show, experience, or other event where participants are recognized. The retrieved recognition data may be deleted once it is no longer needed for the event. Recognition data retrieval is limited to just what is needed for the particular task, minimizing the uniqueness of any retrieved recognition data to respect participant privacy while providing an enhanced participant experience through recognition.
    Type: Grant
    Filed: February 6, 2018
    Date of Patent: November 10, 2020
    Assignee: Disney Enterprises, Inc.
    Inventor: Douglas Fidaleo
  • Patent number: 10818312
    Abstract: According to one implementation, an affect-driven dialog generation system includes a computing platform having a hardware processor and a system memory storing a software code including a sequence-to-sequence (seq2seq) architecture trained using a loss function having an affective regularizer term based on a difference in emotional content between a target dialog response and a dialog sequence determined by the seq2seq architecture during training. The hardware processor executes the software code to receive an input dialog sequence, and to use the seq2seq architecture to generate emotionally diverse dialog responses based on the input dialog sequence and a predetermined target emotion. The hardware processor further executes the software code to determine, using the seq2seq architecture, a final dialog sequence responsive to the input dialog sequence based on an emotional relevance of each of the emotionally diverse dialog responses, and to provide the final dialog sequence as an output.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: October 27, 2020
    Assignee: Disney Enterprises, Inc.
    Inventors: Ashutosh Modi, Mubbasir Kapadia, Douglas A. Fidaleo, James R. Kennedy, Wojciech Witon, Pierre Colombo
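    The training objective described above (a sequence loss plus an affective regularizer) can be sketched numerically. This is a hedged illustration: `emotion()` is a toy stand-in for an affect scorer, and the regularizer weight is an assumption; the patent's seq2seq architecture is not reproduced.

    ```python
    # Sketch of the loss: total = sequence loss + weight * |affect gap|,
    # where the affect gap is the difference in emotional content between
    # the target response and the generated sequence.

    def emotion(tokens):
        # Toy affect scorer: fraction of tokens from a tiny emotion lexicon.
        lexicon = {"love", "hate", "happy", "sad"}
        return sum(t in lexicon for t in tokens) / max(len(tokens), 1)

    def training_loss(seq_loss, target_tokens, generated_tokens, weight=0.5):
        """Total loss = sequence loss + weight * affective regularizer term."""
        affect_gap = abs(emotion(target_tokens) - emotion(generated_tokens))
        return seq_loss + weight * affect_gap
    ```

    When the generated sequence matches the target's emotional content, the regularizer term vanishes and only the sequence loss remains.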
  • Publication number: 20200320427
    Abstract: A system providing an interactive social agent can include a computing platform having a hardware processor and a memory storing a training content standardization software code configured to receive content depicting human expressions and including annotation data describing the human expressions from multiple content annotation sources, generate a corresponding content descriptor for each content annotation source to translate the annotation data into a standardized data format, and transform the annotation data into the standardized data format using the corresponding content descriptor. The content and the annotation data in the standardized data format are stored as training data for use in training expressions for the interactive social agent.
    Type: Application
    Filed: April 3, 2019
    Publication date: October 8, 2020
    Inventors: James R. Kennedy, Jill Fain Lehman, Kevin El Haddad, Douglas A. Fidaleo
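    The per-source content descriptor described above can be sketched as a field-mapping table. The source names and annotation fields below are illustrative assumptions; the idea shown is translating each source's annotation schema into one standardized format.

    ```python
    # Minimal sketch: each annotation source gets a descriptor mapping its
    # field names into a single standardized schema. Names are hypothetical.

    DESCRIPTORS = {
        "source_a": {"emotion_label": "expression", "ts": "timestamp"},
        "source_b": {"affect": "expression", "time_ms": "timestamp"},
    }

    def standardize(source, annotation):
        """Translate one source's annotation into the standardized data format."""
        descriptor = DESCRIPTORS[source]
        return {descriptor[k]: v for k, v in annotation.items() if k in descriptor}
    ```

    With descriptors in place, annotations from heterogeneous sources can be stored and queried uniformly as training data.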
  • Publication number: 20200202887
    Abstract: According to one implementation, an affect-driven dialog generation system includes a computing platform having a hardware processor and a system memory storing a software code including a sequence-to-sequence (seq2seq) architecture trained using a loss function having an affective regularizer term based on a difference in emotional content between a target dialog response and a dialog sequence determined by the seq2seq architecture during training. The hardware processor executes the software code to receive an input dialog sequence, and to use the seq2seq architecture to generate emotionally diverse dialog responses based on the input dialog sequence and a predetermined target emotion. The hardware processor further executes the software code to determine, using the seq2seq architecture, a final dialog sequence responsive to the input dialog sequence based on an emotional relevance of each of the emotionally diverse dialog responses, and to provide the final dialog sequence as an output.
    Type: Application
    Filed: December 19, 2018
    Publication date: June 25, 2020
    Inventors: Ashutosh Modi, Mubbasir Kapadia, Douglas A. Fidaleo, James R. Kennedy, Wojciech Witon, Pierre Colombo
  • Publication number: 20200184306
    Abstract: A system for simulating human-like affect-driven behavior includes a hardware processor and a system memory storing software code providing a virtual agent. The hardware processor executes the software code to identify a character assumed by the virtual agent and having a personality, a target state of motivational fulfillment, a baseline mood, and emotions. The software code identifies current physical and motivational fulfillment states, and currently active emotions of the character, and determines a current mood of the character based on the baseline mood and the currently active emotions. The software code further detects an experience by the character and plans multiple behaviors including a first behavior based on the experience and the current physical state, a second behavior based on the experience, the personality, the current mood, and the currently active emotions, and a third behavior based on the target and current states of motivational fulfillment.
    Type: Application
    Filed: December 5, 2018
    Publication date: June 11, 2020
    Inventors: Jakob Buhmann, Douglas A. Fidaleo, Maayan Shvo, Mubbasir Kapadia, Jill Fain Lehman, Sarah K. Wulfeck, Rylan Gibbens, Steven Poulakos, John Wiseman
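    The mood computation described above (current mood derived from baseline mood plus currently active emotions) can be sketched as a weighted blend. Representing moods and emotions as single valence numbers, and the blend weight, are illustrative assumptions.

    ```python
    # Hedged sketch: the character's current mood blends its baseline mood
    # with the mean valence of currently active emotions. Scalar valences
    # and the fixed weight are simplifying assumptions.

    def current_mood(baseline, active_emotions, emotion_weight=0.3):
        """Blend baseline mood valence with the mean of active emotion valences."""
        if not active_emotions:
            return baseline           # no active emotions: mood is the baseline
        mean_emotion = sum(active_emotions) / len(active_emotions)
        return (1 - emotion_weight) * baseline + emotion_weight * mean_emotion
    ```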
  • Patent number: 10671461
    Abstract: A system for mediating interactions among system agents and system clients includes a computing platform having a hardware processor and a system memory storing an interaction cueing software code including decision trees corresponding to storylines. The hardware processor executes the interaction cueing software code to receive interaction data corresponding to an interaction of a system client with a first system agent, identify a storyline for use in guiding subsequent interactions with the system client based on the interaction data, and store the interaction data and data identifying the storyline in a client profile assigned to the system client. The interaction cueing software code further determines an interaction cue or cues for coaching the same or another system agent in a second interaction with the system client based on the interaction data and a decision tree corresponding to the storyline, and transmits the interaction cue(s) to the system agent.
    Type: Grant
    Filed: July 17, 2019
    Date of Patent: June 2, 2020
    Assignee: Disney Enterprises, Inc.
    Inventors: Raymond J. Scanlon, Douglas A. Fidaleo, Robert P. Michel, Daniel C. Pike, Jordan K. Weisman
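    The cueing flow described above can be sketched as a storyline-keyed decision tree lookup. The storyline name, tree contents, and profile fields below are hypothetical; the sketch only shows how prior interaction data stored in a client profile selects a cue for the next agent.

    ```python
    # Illustrative sketch: the client profile records the assigned storyline
    # and prior interaction data; that storyline's decision tree yields the
    # cue for coaching the next agent interaction. All names are hypothetical.

    STORYLINE_TREES = {
        "treasure_hunt": {
            "met_pirate": {True: "Hint at the hidden map",
                           False: "Introduce the pirate"},
        },
    }

    def next_cue(profile):
        """Return an interaction cue from the client's storyline decision tree."""
        tree = STORYLINE_TREES[profile["storyline"]]
        branch = tree["met_pirate"]
        return branch[profile.get("met_pirate", False)]
    ```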
  • Publication number: 20200051301
    Abstract: According to one implementation, a system for performing cosmetic transformation through image synthesis includes a computing platform having a hardware processor and a system memory storing an image synthesis software code. The hardware processor executes the image synthesis software code to receive a user image depicting a user of the system, to receive a reference image for transforming to resemble the user image, the reference image projecting an identifiable persona, and to generate a user image metadata describing the user image, based on the user image. The hardware processor further executes the image synthesis software code to identify a reference image metadata describing the reference image, and to transform the reference image to resemble the user image, based on the user image metadata and the reference image metadata, while preserving the identifiable persona projected by the reference image.
    Type: Application
    Filed: October 22, 2019
    Publication date: February 13, 2020
    Inventor: Douglas A. Fidaleo
  • Patent number: 10489952
    Abstract: According to one implementation, a system for performing cosmetic transformation through image synthesis includes a computing platform having a hardware processor and a system memory storing an image synthesis software code. The hardware processor executes the image synthesis software code to receive a user image depicting a user of the system, to receive a reference image for transforming to resemble the user image, the reference image projecting an identifiable persona, and to generate a user image metadata describing the user image, based on the user image. The hardware processor further executes the image synthesis software code to identify a reference image metadata describing the reference image, and to transform the reference image to resemble the user image, based on the user image metadata and the reference image metadata, while preserving the identifiable persona projected by the reference image.
    Type: Grant
    Filed: November 1, 2017
    Date of Patent: November 26, 2019
    Assignee: Disney Enterprises, Inc.
    Inventor: Douglas A. Fidaleo
  • Publication number: 20190340044
    Abstract: A system for mediating interactions among system agents and system clients includes a computing platform having a hardware processor and a system memory storing an interaction cueing software code including decision trees corresponding to storylines. The hardware processor executes the interaction cueing software code to receive interaction data corresponding to an interaction of a system client with a first system agent, identify a storyline for use in guiding subsequent interactions with the system client based on the interaction data, and store the interaction data and data identifying the storyline in a client profile assigned to the system client. The interaction cueing software code further determines an interaction cue or cues for coaching the same or another system agent in a second interaction with the system client based on the interaction data and a decision tree corresponding to the storyline, and transmits the interaction cue(s) to the system agent.
    Type: Application
    Filed: July 17, 2019
    Publication date: November 7, 2019
    Inventors: Raymond J. Scanlon, Douglas A. Fidaleo, Robert P. Michel, Daniel C. Pike, Jordan K. Weisman
  • Publication number: 20190319898
    Abstract: Systems and methods for an interactive communications system capable of generating a response to conversational input are provided. The interactive communications system analyzes the conversational input to determine relevant topics of discussion. The interactive communications system further determines which of the relevant topics of discussion can potentially lead to an unwanted end to a conversation. The interactive communications system redirects the conversation by providing responses to the conversational input that are intended simply to avoid the unwanted end to the conversation.
    Type: Application
    Filed: April 12, 2018
    Publication date: October 17, 2019
    Applicant: Disney Enterprises, Inc.
    Inventors: Raymond Scanlon, Douglas Fidaleo
  • Patent number: 10402240
    Abstract: A system for mediating interactions among system agents and system clients includes a computing platform having a hardware processor and a system memory storing an interaction cueing software code including decision trees corresponding to storylines. The hardware processor executes the interaction cueing software code to receive interaction data corresponding to an interaction of a system client with a first system agent, identify a storyline for use in guiding subsequent interactions with the system client based on the interaction data, and store the interaction data and data identifying the storyline in a client profile assigned to the system client. The interaction cueing software code further determines an interaction cue or cues for coaching the same or another system agent in a second interaction with the system client based on the interaction data and a decision tree corresponding to the storyline, and transmits the interaction cue(s) to the system agent.
    Type: Grant
    Filed: December 14, 2017
    Date of Patent: September 3, 2019
    Assignee: Disney Enterprises, Inc.
    Inventors: Raymond J. Scanlon, Douglas A. Fidaleo, Robert P. Michel, Daniel C. Pike, Jordan K. Weisman
  • Publication number: 20190244018
    Abstract: Systems and methods are described for dynamically adjusting an amount of retrieved recognition data based on the needs of a show, experience, or other event where participants are recognized. The retrieved recognition data may be deleted once it is no longer needed for the event. Recognition data retrieval is limited to just what is needed for the particular task, minimizing the uniqueness of any retrieved recognition data to respect participant privacy while providing an enhanced participant experience through recognition.
    Type: Application
    Filed: February 6, 2018
    Publication date: August 8, 2019
    Applicant: Disney Enterprises, Inc.
    Inventor: Douglas Fidaleo
  • Publication number: 20190188061
    Abstract: A system for mediating interactions among system agents and system clients includes a computing platform having a hardware processor and a system memory storing an interaction cueing software code including decision trees corresponding to storylines. The hardware processor executes the interaction cueing software code to receive interaction data corresponding to an interaction of a system client with a first system agent, identify a storyline for use in guiding subsequent interactions with the system client based on the interaction data, and store the interaction data and data identifying the storyline in a client profile assigned to the system client. The interaction cueing software code further determines an interaction cue or cues for coaching the same or another system agent in a second interaction with the system client based on the interaction data and a decision tree corresponding to the storyline, and transmits the interaction cue(s) to the system agent.
    Type: Application
    Filed: December 14, 2017
    Publication date: June 20, 2019
    Inventors: Raymond J. Scanlon, Douglas A. Fidaleo, Robert P. Michel, Daniel C. Pike, Jordan K. Weisman