Patents by Inventor Douglas Fidaleo
Douglas Fidaleo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11099641
Abstract: The present disclosure relates to calibration, customization, and improved user experiences for smart or bionic lenses that are worn by a user. The calibration techniques include detecting and correcting distortion of a display of the bionic lenses, as well as distortion due to characteristics of the lens or eyes of the user. The customization techniques include utilizing the bionic lenses to detect eye characteristics that can be used to improve insertion of the bionic lenses, track health over time, and provide user alerts. The user experiences include interactive environments and animation techniques that are improved via the bionic lenses.
Type: Grant
Filed: June 27, 2019
Date of Patent: August 24, 2021
Assignee: Disney Enterprises, Inc.
Inventors: Quinn Y. J. Smithwick, Jon H. Snoddy, Douglas A. Fidaleo
-
Patent number: 11074736
Abstract: According to one implementation, a system for performing cosmetic transformation through image synthesis includes a computing platform having a hardware processor and a system memory storing an image synthesis software code. The hardware processor executes the image synthesis software code to receive a user image depicting a user of the system, to receive a reference image for transforming to resemble the user image, the reference image projecting an identifiable persona, and to generate a user image metadata describing the user image, based on the user image. The hardware processor further executes the image synthesis software code to identify a reference image metadata describing the reference image, and to transform the reference image to resemble the user image, based on the user image metadata and the reference image metadata, while preserving the identifiable persona projected by the reference image.
Type: Grant
Filed: October 22, 2019
Date of Patent: July 27, 2021
Assignee: Disney Enterprises, Inc.
Inventor: Douglas A. Fidaleo
-
Patent number: 11062692
Abstract: An audio processing system for generating audio including emotionally expressive synthesized content includes a computing platform having a hardware processor and a memory storing a software code including a trained neural network. The hardware processor is configured to execute the software code to receive an audio sequence template including one or more audio segment(s) and an audio gap, and to receive data describing one or more words for insertion into the audio gap. The hardware processor is configured to further execute the software code to use the trained neural network to generate an integrated audio sequence using the audio sequence template and the data, the integrated audio sequence including the one or more audio segment(s) and at least one synthesized word corresponding to the one or more words described by the data.
Type: Grant
Filed: September 23, 2019
Date of Patent: July 13, 2021
Assignee: Disney Enterprises, Inc.
Inventors: Salvator D. Lombardo, Komath Naveen Kumar, Douglas A. Fidaleo
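The abstract above describes a template-based pipeline: existing audio segments surround a gap, and a trained network synthesizes the missing word audio for insertion. The sketch below is only an illustration of the assembly step, not the patented method; `synthesize_word` is a hypothetical stand-in for the trained neural network.

```python
import numpy as np

SAMPLE_RATE = 16000  # assumed sample rate for this sketch

def synthesize_word(word: str, duration_s: float) -> np.ndarray:
    """Stand-in for the trained neural network: returns a placeholder
    waveform of the requested duration for the given word. A real model
    would generate speech matching the surrounding segments' voice and
    emotional expression."""
    n = int(SAMPLE_RATE * duration_s)
    t = np.linspace(0.0, duration_s, n, endpoint=False)
    freq = 200 + 10 * (sum(map(ord, word)) % 40)  # pitch varies by word
    return 0.1 * np.sin(2 * np.pi * freq * t)

def fill_template(segments: list, gap_index: int,
                  gap_duration_s: float, words: list) -> np.ndarray:
    """Insert synthesized word audio into the gap of an audio sequence
    template, producing one integrated audio sequence."""
    per_word = gap_duration_s / len(words)
    gap_audio = np.concatenate([synthesize_word(w, per_word) for w in words])
    out = segments[:gap_index] + [gap_audio] + segments[gap_index:]
    return np.concatenate(out)

# Template: two 0.5 s segments with a 0.3 s gap between them
seg = np.zeros(SAMPLE_RATE // 2)
audio = fill_template([seg, seg], gap_index=1,
                      gap_duration_s=0.3, words=["hello"])
print(len(audio))  # 8000 + 4800 + 8000 = 20800 samples
```

The key design point the abstract implies is that synthesis is conditioned on the template, so the inserted word blends with its surroundings; this sketch only models the slot-filling structure.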
-
Patent number: 11018997
Abstract: Systems and methods for an interactive communications system capable of generating a response to conversational input are provided. The interactive communications system analyzes the conversational input to determine relevant topics of discussion. The interactive communications system further determines which of the relevant topics of discussion can potentially lead to an unwanted end to a conversation. The interactive communications system redirects the conversation by providing responses to the conversational input that are intended simply to avoid the unwanted end to the conversation.
Type: Grant
Filed: April 12, 2018
Date of Patent: May 25, 2021
Assignee: Disney Enterprises, Inc.
Inventors: Raymond Scanlon, Douglas Fidaleo
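The flow this abstract describes (detect topics, flag conversation-ending ones, redirect) can be sketched in a few lines. Everything below is hypothetical illustration: the topic labels, keyword matching, and canned redirects are assumptions, not taken from the patent, which does not specify how topics are detected.

```python
# Hypothetical topic categories that risk ending the conversation.
CONVERSATION_ENDING_TOPICS = {"goodbye", "boredom", "complaint"}

REDIRECT_RESPONSES = {
    "goodbye": "Before you go, did you hear about the parade later today?",
    "boredom": "Let's try something different. What's your favorite ride?",
    "complaint": "I'm sorry to hear that. Can I tell you about something fun instead?",
}

def detect_topics(utterance: str) -> set:
    """Toy topic detection via keyword lists; a real system would use an
    NLU model to determine relevant topics of discussion."""
    keywords = {
        "goodbye": ["bye", "leaving", "gotta go"],
        "boredom": ["bored", "boring"],
        "complaint": ["terrible", "awful", "hate"],
    }
    return {topic for topic, kws in keywords.items()
            if any(kw in utterance.lower() for kw in kws)}

def respond(utterance: str) -> str:
    """If any detected topic could lead to an unwanted end to the
    conversation, respond with a redirect instead."""
    risky = detect_topics(utterance) & CONVERSATION_ENDING_TOPICS
    if risky:
        return REDIRECT_RESPONSES[sorted(risky)[0]]
    return "Tell me more about that!"

print(respond("This line is so boring"))  # redirects away from boredom
```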
-
Publication number: 20210090549
Abstract: An audio processing system for generating audio including emotionally expressive synthesized content includes a computing platform having a hardware processor and a memory storing a software code including a trained neural network. The hardware processor is configured to execute the software code to receive an audio sequence template including one or more audio segment(s) and an audio gap, and to receive data describing one or more words for insertion into the audio gap. The hardware processor is configured to further execute the software code to use the trained neural network to generate an integrated audio sequence using the audio sequence template and the data, the integrated audio sequence including the one or more audio segment(s) and at least one synthesized word corresponding to the one or more words described by the data.
Type: Application
Filed: September 23, 2019
Publication date: March 25, 2021
Inventors: Salvator D. Lombardo, Komath Naveen Kumar, Douglas A. Fidaleo
-
Publication number: 20210019509
Abstract: Systems and methods are described for dynamically adjusting an amount of retrieved recognition data based on the needs of a show, experience, or other event where participants are recognized. The retrieved recognition data may be deleted once it is no longer needed for the event. Recognition data retrieval is limited to just what is needed for the particular task, minimizing the uniqueness of any retrieved recognition data to respect participant privacy while providing an enhanced participant experience through recognition.
Type: Application
Filed: September 30, 2020
Publication date: January 21, 2021
Applicant: Disney Enterprises, Inc.
Inventor: Douglas Fidaleo
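The data-minimization pattern this abstract describes, retrieving only task-relevant fields and deleting them after the event, can be sketched as a small store. The class, field names, and sample profile below are illustrative assumptions, not from the publication.

```python
class RecognitionDataStore:
    """Sketch of need-based recognition data retrieval with deletion
    once the event no longer requires it."""

    def __init__(self, full_profiles: dict):
        self._full = full_profiles   # authoritative store (e.g. remote)
        self._retrieved = {}         # minimal local working set

    def retrieve_for_task(self, participant_id: str, needed_fields: set) -> dict:
        """Pull only the fields the current task needs, minimizing the
        uniqueness of locally held recognition data."""
        profile = self._full[participant_id]
        subset = {k: v for k, v in profile.items() if k in needed_fields}
        self._retrieved[participant_id] = subset
        return subset

    def end_event(self):
        """Delete retrieved recognition data after the event."""
        self._retrieved.clear()

store = RecognitionDataStore({
    "guest42": {"name": "Alex", "height_cm": 120, "party_size": 4},
})
# A greeting show may only need the participant's name:
data = store.retrieve_for_task("guest42", {"name"})
print(data)              # {'name': 'Alex'}
store.end_event()
print(store._retrieved)  # {}
```

The privacy benefit is that the working set never holds more identifying data than the task requires, and even that is discarded when the show ends.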
-
Publication number: 20200409458
Abstract: The present disclosure relates to calibration, customization, and improved user experiences for smart or bionic lenses that are worn by a user. The calibration techniques include detecting and correcting distortion of a display of the bionic lenses, as well as distortion due to characteristics of the lens or eyes of the user. The customization techniques include utilizing the bionic lenses to detect eye characteristics that can be used to improve insertion of the bionic lenses, track health over time, and provide user alerts. The user experiences include interactive environments and animation techniques that are improved via the bionic lenses.
Type: Application
Filed: June 27, 2019
Publication date: December 31, 2020
Inventors: Quinn Y. J. Smithwick, Jon H. Snoddy, Douglas A. Fidaleo
-
Patent number: 10832043
Abstract: Systems and methods are described for dynamically adjusting an amount of retrieved recognition data based on the needs of a show, experience, or other event where participants are recognized. The retrieved recognition data may be deleted once it is no longer needed for the event. Recognition data retrieval is limited to just what is needed for the particular task, minimizing the uniqueness of any retrieved recognition data to respect participant privacy while providing an enhanced participant experience through recognition.
Type: Grant
Filed: February 6, 2018
Date of Patent: November 10, 2020
Assignee: Disney Enterprises, Inc.
Inventor: Douglas Fidaleo
-
Patent number: 10818312
Abstract: According to one implementation, an affect-driven dialog generation system includes a computing platform having a hardware processor and a system memory storing a software code including a sequence-to-sequence (seq2seq) architecture trained using a loss function having an affective regularizer term based on a difference in emotional content between a target dialog response and a dialog sequence determined by the seq2seq architecture during training. The hardware processor executes the software code to receive an input dialog sequence, and to use the seq2seq architecture to generate emotionally diverse dialog responses based on the input dialog sequence and a predetermined target emotion. The hardware processor further executes the software code to determine, using the seq2seq architecture, a final dialog sequence responsive to the input dialog sequence based on an emotional relevance of each of the emotionally diverse dialog responses, and to provide the final dialog sequence as an output.
Type: Grant
Filed: December 19, 2018
Date of Patent: October 27, 2020
Assignee: Disney Enterprises, Inc.
Inventors: Ashutosh Modi, Mubbasir Kapadia, Douglas A. Fidaleo, James R. Kennedy, Wojciech Witon, Pierre Colombo
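The loss function this abstract names combines a standard sequence loss with an affective regularizer penalizing the difference in emotional content between generated and target responses. A minimal numeric sketch follows; the affect lexicon, weight `lam`, and all token ids are invented for illustration, and a real system would use a learned affect representation rather than a lookup table.

```python
import numpy as np

def cross_entropy(pred_probs, target_ids):
    """Standard token-level cross-entropy over a response sequence."""
    return -float(np.mean(np.log([p[t] for p, t in zip(pred_probs, target_ids)])))

def affect_vector(token_ids, lexicon):
    """Toy emotional-content embedding: mean of per-token affect scores."""
    return np.mean([lexicon.get(t, np.zeros(3)) for t in token_ids], axis=0)

def affective_loss(pred_probs, pred_ids, target_ids, lexicon, lam=0.5):
    """Cross-entropy plus an affective regularizer term based on the
    difference in emotional content between the generated and target
    dialog responses."""
    reg = np.linalg.norm(affect_vector(pred_ids, lexicon)
                         - affect_vector(target_ids, lexicon))
    return cross_entropy(pred_probs, target_ids) + lam * float(reg)

# Hypothetical 3-D affect lexicon (e.g. joy/anger/sadness per token id)
lexicon = {1: np.array([0.9, 0.0, 0.0]), 2: np.array([0.0, 0.0, 0.8])}
probs = [{1: 0.8, 2: 0.2}, {1: 0.3, 2: 0.7}]

# When the prediction matches the target emotionally, the regularizer
# term vanishes and the loss reduces to plain cross-entropy:
same = affective_loss(probs, [1, 2], [1, 2], lexicon)
print(round(same - cross_entropy(probs, [1, 2]), 6))  # 0.0
```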
-
Publication number: 20200320427
Abstract: A system providing an interactive social agent can include a computing platform having a hardware processor and a memory storing a training content standardization software code configured to receive content depicting human expressions and including annotation data describing the human expressions from multiple content annotation sources, generate a corresponding content descriptor for each content annotation source to translate the annotation data into a standardized data format, and transform the annotation data into the standardized data format using the corresponding content descriptor. The content and the annotation data in the standardized data format are stored as training data for use in training expressions for the interactive social agent.
Type: Application
Filed: April 3, 2019
Publication date: October 8, 2020
Inventors: James R. Kennedy, Jill Fain Lehman, Kevin El Haddad, Douglas A. Fidaleo
-
Publication number: 20200202887
Abstract: According to one implementation, an affect-driven dialog generation system includes a computing platform having a hardware processor and a system memory storing a software code including a sequence-to-sequence (seq2seq) architecture trained using a loss function having an affective regularizer term based on a difference in emotional content between a target dialog response and a dialog sequence determined by the seq2seq architecture during training. The hardware processor executes the software code to receive an input dialog sequence, and to use the seq2seq architecture to generate emotionally diverse dialog responses based on the input dialog sequence and a predetermined target emotion. The hardware processor further executes the software code to determine, using the seq2seq architecture, a final dialog sequence responsive to the input dialog sequence based on an emotional relevance of each of the emotionally diverse dialog responses, and to provide the final dialog sequence as an output.
Type: Application
Filed: December 19, 2018
Publication date: June 25, 2020
Inventors: Ashutosh Modi, Mubbasir Kapadia, Douglas A. Fidaleo, James R. Kennedy, Wojciech Witon, Pierre Colombo
-
Publication number: 20200184306
Abstract: A system for simulating human-like affect-driven behavior includes a hardware processor and a system memory storing software code providing a virtual agent. The hardware processor executes the software code to identify a character assumed by the virtual agent and having a personality, a target state of motivational fulfillment, a baseline mood, and emotions. The software code identifies current physical and motivational fulfillment states, and currently active emotions of the character, and determines a current mood of the character based on the baseline mood and the currently active emotions. The software code further detects an experience by the character and plans multiple behaviors including a first behavior based on the experience and the current physical state, a second behavior based on the experience, the personality, the current mood, and the currently active emotions, and a third behavior based on the target and current states of motivational fulfillment.
Type: Application
Filed: December 5, 2018
Publication date: June 11, 2020
Inventors: Jakob Buhmann, Douglas A. Fidaleo, Maayan Shvo, Mubbasir Kapadia, Jill Fain Lehman, Sarah K. Wulfeck, Rylan Gibbens, Steven Poulakos, John Wiseman
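One concrete step in the abstract is determining a current mood from the baseline mood and the currently active emotions. A minimal sketch of such a blend appears below; the 3-D affect space, the decay weight, and the example vectors are all assumptions for illustration, not details from the publication.

```python
import numpy as np

def current_mood(baseline_mood, active_emotions, decay=0.7):
    """Blend a character's baseline mood with its currently active
    emotions. Vectors live in an assumed 3-D affect space (e.g. a
    pleasure/arousal/dominance-style model); `decay` controls how
    strongly the baseline anchors the result."""
    baseline = np.asarray(baseline_mood, dtype=float)
    if not active_emotions:
        return baseline
    emotion_pull = np.mean(active_emotions, axis=0)
    return decay * baseline + (1 - decay) * emotion_pull

baseline = [0.4, 0.0, 0.2]                 # mildly pleasant, calm
emotions = [np.array([-0.8, 0.6, -0.3])]   # e.g. a fear-like spike
mood = current_mood(baseline, emotions)
print(mood.round(2))  # mood shifts toward the active emotion
```

With no active emotions the mood simply stays at the baseline, matching the abstract's framing of the baseline as the character's default state.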
-
Patent number: 10671461
Abstract: A system for mediating interactions among system agents and system clients includes a computing platform having a hardware processor and a system memory storing an interaction cueing software code including decision trees corresponding to storylines. The hardware processor executes the interaction cueing software code to receive interaction data corresponding to an interaction of a system client with a first system agent, identify a storyline for use in guiding subsequent interactions with the system client based on the interaction data, and store the interaction data and data identifying the storyline in a client profile assigned to the system client. The interaction cueing software code further determines an interaction cue or cues for coaching the same or another system agent in a second interaction with the system client based on the interaction data and a decision tree corresponding to the storyline, and transmits the interaction cue(s) to the system agent.
Type: Grant
Filed: July 17, 2019
Date of Patent: June 2, 2020
Assignee: Disney Enterprises, Inc.
Inventors: Raymond J. Scanlon, Douglas A. Fidaleo, Robert P. Michel, Daniel C. Pike, Jordan K. Weisman
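The core mechanism here is walking a storyline's decision tree against a client's stored interaction data to produce a cue for the next agent. A toy traversal is sketched below; the node structure, storyline content, and profile fields are invented for illustration and do not come from the patent.

```python
# Hypothetical storyline decision tree: internal nodes ask a yes/no
# question about the client profile; leaves hold the interaction cue.
STORYLINE_TREE = {
    "question": "met_wizard",
    "yes": {"cue": "Ask the guest about the wizard's riddle."},
    "no": {"cue": "Hint that a wizard was spotted near the castle."},
}

def next_cue(tree: dict, interaction_data: dict) -> str:
    """Walk the storyline's decision tree using stored interaction data
    and return a cue for coaching the agent's next interaction."""
    node = tree
    while "cue" not in node:
        answer = "yes" if interaction_data.get(node["question"]) else "no"
        node = node[answer]
    return node["cue"]

# Client profile built from a prior interaction with another agent:
profile = {"met_wizard": True}
print(next_cue(STORYLINE_TREE, profile))
```

Because the profile persists across agents, a second agent can pick up the storyline where the first left off, which is the coordination the abstract describes.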
-
Publication number: 20200051301
Abstract: According to one implementation, a system for performing cosmetic transformation through image synthesis includes a computing platform having a hardware processor and a system memory storing an image synthesis software code. The hardware processor executes the image synthesis software code to receive a user image depicting a user of the system, to receive a reference image for transforming to resemble the user image, the reference image projecting an identifiable persona, and to generate a user image metadata describing the user image, based on the user image. The hardware processor further executes the image synthesis software code to identify a reference image metadata describing the reference image, and to transform the reference image to resemble the user image, based on the user image metadata and the reference image metadata, while preserving the identifiable persona projected by the reference image.
Type: Application
Filed: October 22, 2019
Publication date: February 13, 2020
Inventor: Douglas A. Fidaleo
-
Patent number: 10489952
Abstract: According to one implementation, a system for performing cosmetic transformation through image synthesis includes a computing platform having a hardware processor and a system memory storing an image synthesis software code. The hardware processor executes the image synthesis software code to receive a user image depicting a user of the system, to receive a reference image for transforming to resemble the user image, the reference image projecting an identifiable persona, and to generate a user image metadata describing the user image, based on the user image. The hardware processor further executes the image synthesis software code to identify a reference image metadata describing the reference image, and to transform the reference image to resemble the user image, based on the user image metadata and the reference image metadata, while preserving the identifiable persona projected by the reference image.
Type: Grant
Filed: November 1, 2017
Date of Patent: November 26, 2019
Assignee: Disney Enterprises, Inc.
Inventor: Douglas A. Fidaleo
-
Publication number: 20190340044
Abstract: A system for mediating interactions among system agents and system clients includes a computing platform having a hardware processor and a system memory storing an interaction cueing software code including decision trees corresponding to storylines. The hardware processor executes the interaction cueing software code to receive interaction data corresponding to an interaction of a system client with a first system agent, identify a storyline for use in guiding subsequent interactions with the system client based on the interaction data, and store the interaction data and data identifying the storyline in a client profile assigned to the system client. The interaction cueing software code further determines an interaction cue or cues for coaching the same or another system agent in a second interaction with the system client based on the interaction data and a decision tree corresponding to the storyline, and transmits the interaction cue(s) to the system agent.
Type: Application
Filed: July 17, 2019
Publication date: November 7, 2019
Inventors: Raymond J. Scanlon, Douglas A. Fidaleo, Robert P. Michel, Daniel C. Pike, Jordan K. Weisman
-
Publication number: 20190319898
Abstract: Systems and methods for an interactive communications system capable of generating a response to conversational input are provided. The interactive communications system analyzes the conversational input to determine relevant topics of discussion. The interactive communications system further determines which of the relevant topics of discussion can potentially lead to an unwanted end to a conversation. The interactive communications system redirects the conversation by providing responses to the conversational input that are intended simply to avoid the unwanted end to the conversation.
Type: Application
Filed: April 12, 2018
Publication date: October 17, 2019
Applicant: Disney Enterprises, Inc.
Inventors: Raymond Scanlon, Douglas Fidaleo
-
Patent number: 10402240
Abstract: A system for mediating interactions among system agents and system clients includes a computing platform having a hardware processor and a system memory storing an interaction cueing software code including decision trees corresponding to storylines. The hardware processor executes the interaction cueing software code to receive interaction data corresponding to an interaction of a system client with a first system agent, identify a storyline for use in guiding subsequent interactions with the system client based on the interaction data, and store the interaction data and data identifying the storyline in a client profile assigned to the system client. The interaction cueing software code further determines an interaction cue or cues for coaching the same or another system agent in a second interaction with the system client based on the interaction data and a decision tree corresponding to the storyline, and transmits the interaction cue(s) to the system agent.
Type: Grant
Filed: December 14, 2017
Date of Patent: September 3, 2019
Assignee: Disney Enterprises, Inc.
Inventors: Raymond J. Scanlon, Douglas A. Fidaleo, Robert P. Michel, Daniel C. Pike, Jordan K. Weisman
-
Publication number: 20190244018
Abstract: Systems and methods are described for dynamically adjusting an amount of retrieved recognition data based on the needs of a show, experience, or other event where participants are recognized. The retrieved recognition data may be deleted once it is no longer needed for the event. Recognition data retrieval is limited to just what is needed for the particular task, minimizing the uniqueness of any retrieved recognition data to respect participant privacy while providing an enhanced participant experience through recognition.
Type: Application
Filed: February 6, 2018
Publication date: August 8, 2019
Applicant: Disney Enterprises, Inc.
Inventor: Douglas Fidaleo
-
Publication number: 20190188061
Abstract: A system for mediating interactions among system agents and system clients includes a computing platform having a hardware processor and a system memory storing an interaction cueing software code including decision trees corresponding to storylines. The hardware processor executes the interaction cueing software code to receive interaction data corresponding to an interaction of a system client with a first system agent, identify a storyline for use in guiding subsequent interactions with the system client based on the interaction data, and store the interaction data and data identifying the storyline in a client profile assigned to the system client. The interaction cueing software code further determines an interaction cue or cues for coaching the same or another system agent in a second interaction with the system client based on the interaction data and a decision tree corresponding to the storyline, and transmits the interaction cue(s) to the system agent.
Type: Application
Filed: December 14, 2017
Publication date: June 20, 2019
Inventors: Raymond J. Scanlon, Douglas A. Fidaleo, Robert P. Michel, Daniel C. Pike, Jordan K. Weisman