Patents by Inventor Erika Doggett

Erika Doggett has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240062022
    Abstract: Techniques for generating translated audio output based on media content are disclosed. Text is accessed corresponding to media content. One or more untranslated mouth shape indicia are determined based on the text. The text is parsed into one or more text chunks when one or more dubbing parameters are met. The parsed text is translated from a first spoken language to a second spoken language. One or more translated mouth shape indicia are determined. The one or more translated mouth shape indicia and the one or more untranslated mouth shape indicia are compared based on a predetermined tolerance threshold. A translated audio output is generated based on the translated text.
    Type: Application
    Filed: October 31, 2023
    Publication date: February 22, 2024
    Inventor: Erika Doggett
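The dubbing pipeline this abstract describes can be illustrated with a minimal, hypothetical sketch. Everything here is illustrative, not the patented implementation: vowel counting stands in for real mouth-shape (viseme) analysis, and the names and tolerance value are invented for the example.

```python
# Toy stand-in for mouth-shape indicia: count vowel occurrences, since
# vowels dominate visible mouth openings. A real system would use
# phoneme-to-viseme mapping.
VOWELS = set("aeiou")

def mouth_shape_indicia(text: str) -> int:
    """Derive a toy mouth-shape count for a chunk of text."""
    return sum(ch in VOWELS for ch in text.lower())

def within_tolerance(source: str, translated: str, tolerance: float = 0.5) -> bool:
    """Compare untranslated vs. translated indicia against a
    predetermined tolerance threshold, as the abstract describes."""
    a = mouth_shape_indicia(source)
    b = mouth_shape_indicia(translated)
    if max(a, b) == 0:
        return True
    return abs(a - b) / max(a, b) <= tolerance

# An English line against a toy "translation": close enough to dub.
print(within_tolerance("hello there", "hola amigo"))
```

A translation that fails the comparison would be re-chunked or re-translated before audio generation.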
  • Patent number: 11887600
    Abstract: In various embodiments, a communication fusion application enables other software application(s) to interpret spoken user input. In operation, a communication fusion application determines that a prediction is relevant to a text input derived from a spoken input received from a user. Subsequently, the communication fusion application generates a predicted context based on the prediction. The communication fusion application then transmits the predicted context and the text input to the other software application(s). The other software application(s) perform additional action(s) based on the text input and the predicted context. Advantageously, by providing additional, relevant information to the software application(s), the communication fusion application increases the level of understanding during interactions with the user and the overall user experience is improved.
    Type: Grant
    Filed: October 4, 2019
    Date of Patent: January 30, 2024
    Assignee: Disney Enterprises, Inc.
    Inventors: Erika Doggett, Nathan Nocon, Ashutosh Modi, Joseph Charles Sengir, Maxwell McCoy
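The communication-fusion flow in this abstract reduces to a small, hypothetical sketch: check whether a prediction is relevant to the transcribed input, and if so attach it as predicted context for downstream applications. The dictionary shapes, the `confidence` field, and the relevance threshold are all assumptions made for illustration.

```python
def fuse(text_input: str, prediction: dict, relevance_threshold: float = 0.5) -> dict:
    """If the prediction is relevant to the text input, attach it as a
    predicted context; downstream applications receive both together."""
    if prediction.get("confidence", 0.0) >= relevance_threshold:
        context = {"intent": prediction["intent"],
                   "confidence": prediction["confidence"]}
    else:
        context = None  # low-relevance predictions are not forwarded
    return {"text": text_input, "context": context}

# A downstream application now sees both the words and the likely intent.
msg = fuse("turn the lights on", {"intent": "device_control", "confidence": 0.9})
print(msg["context"]["intent"])
```

The point of the fusion step is that the receiving application acts on the text plus the context, rather than the raw transcript alone.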
  • Patent number: 11847425
    Abstract: A process receives, with a processor, audio corresponding to media content. Further, the process converts, with the processor, the audio to text. In addition, the process concatenates, with the processor, the text with one or more time codes. The process also parses, with the processor, the concatenated text into one or more text chunks according to one or more subtitle parameters. Further, the process automatically translates, with the processor, the parsed text from a first spoken language to a second spoken language. Moreover, the process determines, with the processor, if the language translation complies with the one or more subtitle parameters. Additionally, the process outputs, with the processor, the language translation to a display device for display of the one or more text chunks as one or more subtitles at one or more times corresponding to the one or more time codes.
    Type: Grant
    Filed: August 1, 2018
    Date of Patent: December 19, 2023
    Assignee: Disney Enterprises, Inc.
    Inventor: Erika Doggett
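The subtitle pipeline in this abstract (time-coded text, parsed into chunks under subtitle parameters, with a post-translation compliance check) can be sketched as follows. The 42-character line limit, data shapes, and function names are assumptions for the example; the real parameters and translation step are not specified here.

```python
MAX_CHARS = 42  # an assumed subtitle line-length parameter

def parse_chunks(words_with_times, max_chars=MAX_CHARS):
    """Group (word, timecode) pairs into subtitle chunks, each tagged
    with the timecode of its first word."""
    chunks, current, start = [], [], None
    for word, t in words_with_times:
        if start is None:
            start = t
        candidate = " ".join(current + [word])
        if len(candidate) > max_chars and current:
            chunks.append((" ".join(current), start))
            current, start = [word], t
        else:
            current.append(word)
    if current:
        chunks.append((" ".join(current), start))
    return chunks

def complies(text, max_chars=MAX_CHARS):
    """Re-check the subtitle parameter after machine translation,
    which may lengthen a chunk past the limit."""
    return len(text) <= max_chars

words = [("the", 0.0), ("quick", 0.3), ("brown", 0.6), ("fox", 0.9)]
print(parse_chunks(words, max_chars=11))
```

A chunk whose translation fails `complies` would be re-parsed before display at its timecodes.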
  • Publication number: 20230077379
    Abstract: Systems and methods are disclosed for compressing a target video. A computer-implemented method may use a computer system that includes one or more physical computer processors and non-transient electronic storage. The computer-implemented method may include: obtaining the target video, extracting one or more frames from the target video, and generating an estimated optical flow based on a displacement of pixels between the one or more frames. The one or more frames may include one or more of a key frame and a target frame.
    Type: Application
    Filed: October 24, 2022
    Publication date: March 16, 2023
    Inventors: Christopher Schroers, Simone Schaub, Erika Doggett, Jared McPhillen, Scott Labrozzi, Abdelaziz Djelouah
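The key idea in this abstract, estimating pixel displacement between a key frame and a target frame, can be shown with a deliberately tiny 1-D stand-in for dense optical flow. The single-integer shift, exhaustive search, and zero-padded warp are all toy assumptions; real optical flow is a dense per-pixel field.

```python
def estimate_flow(key, target, max_shift=2):
    """Estimate one integer displacement between two 1-D 'frames' by
    minimizing mean absolute difference over the overlapping region."""
    best_shift, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        overlap = [(target[i], key[i - s])
                   for i in range(len(key)) if 0 <= i - s < len(key)]
        err = sum(abs(t - k) for t, k in overlap) / len(overlap)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

def warp(key, shift):
    """Reconstruct the target frame by displacing the key frame's pixels,
    so only the flow (plus any residual) needs to be stored."""
    return [key[i - shift] if 0 <= i - shift < len(key) else 0
            for i in range(len(key))]

key = [0, 0, 5, 0, 0]
target = [0, 0, 0, 5, 0]
shift = estimate_flow(key, target)
print(shift, warp(key, shift))
```

Compression comes from transmitting the key frame once and describing later frames by their estimated flow rather than their raw pixels.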
  • Publication number: 20220215595
    Abstract: Systems and methods for predicting a target set of pixels are disclosed. In one embodiment, a method may include obtaining target content. The target content may include a target set of pixels to be predicted. The method may also include convolving the target set of pixels to generate an estimated set of pixels. The method may include matching a second set of pixels in the target content to the target set of pixels. The second set of pixels may be within a distance from the target set of pixels. The method may include refining the estimated set of pixels to generate a refined set of pixels using a second set of pixels in the target content.
    Type: Application
    Filed: March 25, 2022
    Publication date: July 7, 2022
    Inventors: Christopher Schroers, Erika Doggett, Stephan Mandt, Jared McPhillen, Scott Labrozzi, Romann Weber, Mauro Bamert
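The two-stage prediction in this abstract (convolve the target pixels for an estimate, then refine it using a matched second set of pixels nearby) can be sketched in 1-D. The averaging kernel, the search window, and the blend are invented toy choices, not the patented method.

```python
def convolve_estimate(pixels, idx):
    """Estimate the target pixel from its immediate neighbors
    (a minimal 1-D convolution with an averaging kernel)."""
    return (pixels[idx - 1] + pixels[idx + 1]) / 2

def match_pixel(pixels, idx, window=4):
    """Find a second pixel within `window` of the target whose
    neighborhood best matches the target's neighborhood."""
    ctx = (pixels[idx - 1], pixels[idx + 1])
    best, best_err = None, float("inf")
    lo = max(1, idx - window)
    hi = min(len(pixels) - 1, idx + window + 1)
    for j in range(lo, hi):
        if j == idx:
            continue
        err = abs(pixels[j - 1] - ctx[0]) + abs(pixels[j + 1] - ctx[1])
        if err < best_err:
            best, best_err = pixels[j], err
    return best

def refine(pixels, idx):
    """Refine the convolved estimate by blending in the matched pixel."""
    return (convolve_estimate(pixels, idx) + match_pixel(pixels, idx)) / 2

# Predict the pixel at index 3 (currently a hole, stored as 0).
pixels = [10, 20, 30, 0, 10, 20, 30, 20]
print(refine(pixels, 3))
```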
  • Patent number: 11335034
    Abstract: Systems and methods for predicting a target set of pixels are disclosed. In one embodiment, a method may include obtaining target content. The target content may include a target set of pixels to be predicted. The method may also include convolving the target set of pixels to generate an estimated set of pixels. The method may include matching a second set of pixels in the target content to the target set of pixels. The second set of pixels may be within a distance from the target set of pixels. The method may include refining the estimated set of pixels to generate a refined set of pixels using a second set of pixels in the target content.
    Type: Grant
    Filed: January 16, 2019
    Date of Patent: May 17, 2022
    Assignee: Disney Enterprises, Inc.
    Inventors: Christopher Schroers, Erika Doggett, Stephan Marcel Mandt, Jared McPhillen, Scott Labrozzi, Romann Weber, Mauro Bamert
  • Patent number: 11151186
    Abstract: Systems, devices, and methods are disclosed for presenting an interactive narrative. An apparatus includes a user interface. The apparatus also includes one or more processors operatively coupled to the user interface and a non-transitory computer-readable medium. The non-transitory computer-readable medium stores instructions that, when executed, cause the one or more processors to present a first piece of content corresponding to a given narrative via the user interface. The given narrative includes one or more characteristics. The one or more processors are caused to receive user input via the user interface. The one or more processors are caused to classify the user input into one of a plurality of response models. The one or more processors are caused to dynamically respond to the user input by presenting a second piece of content. The second piece of content is based on a selected response model corresponding to the user input.
    Type: Grant
    Filed: June 18, 2018
    Date of Patent: October 19, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Erika Doggett, Alethia Shih
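The narrative loop in this abstract (present content, classify user input into one of several response models, respond with a second piece of content from the selected model) can be sketched with a toy rule-based classifier. The model names, rules, and canned responses are all hypothetical.

```python
# Hypothetical response models; a real system would hold richer content.
RESPONSE_MODELS = {
    "question": "The narrator pauses to answer before the tale resumes.",
    "command": "The story bends to follow your command.",
    "other": "The narrator gently steers the tale onward.",
}

def classify(user_input: str) -> str:
    """Toy classifier mapping user input into one of the response models."""
    text = user_input.strip().lower()
    if text.endswith("?"):
        return "question"
    if text.split()[0] in {"go", "open", "take", "look"}:
        return "command"
    return "other"

def respond(user_input: str) -> str:
    """Select the second piece of content from the chosen response model."""
    return RESPONSE_MODELS[classify(user_input)]

print(respond("open the door"))
```

Because the second piece of content depends on the selected model, the narrative adapts dynamically instead of playing back a fixed branch.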
  • Patent number: 11080835
    Abstract: A process receives, with a processor, video content. Further, the process splices, with the processor, the video content into a plurality of video frames. In addition, the process splices, with the processor, at least one of the plurality of video frames into a plurality of image patches. Moreover, the process performs, with a neural network, an image reconstruction of at least one of the plurality of image patches to generate a reconstructed image patch. The process also compares, with the processor, the reconstructed image patch with the at least one of the plurality of image patches. Finally, the process determines, with the processor, a pixel error within the at least one of the plurality of image patches based on a discrepancy between the reconstructed image patch and the at least one of the plurality of image patches.
    Type: Grant
    Filed: January 9, 2019
    Date of Patent: August 3, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Erika Doggett, Anna Wolak, Penelope Daphne Tsatsoulis, Nicholas McCarthy, Stephan Mandt
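The error-detection process this abstract describes (splice frames into patches, reconstruct each patch, and flag pixels where reconstruction diverges) can be sketched without a neural network. Here the patch mean stands in for a learned reconstruction, on the assumption that a clean patch reconstructs well while a glitched pixel does not; patch size and threshold are illustrative.

```python
def split_into_patches(frame, size=2):
    """Splice a frame (a 2-D list of pixel values) into size x size patches."""
    patches = []
    for r in range(0, len(frame), size):
        for c in range(0, len(frame[0]), size):
            patches.append([row[c:c + size] for row in frame[r:r + size]])
    return patches

def reconstruct(patch):
    """Toy stand-in for neural reconstruction: every pixel becomes the
    patch mean, so uniform patches reconstruct exactly and outliers do not."""
    flat = [p for row in patch for p in row]
    mean = sum(flat) / len(flat)
    return [[mean] * len(row) for row in patch]

def pixel_error(patch):
    """Largest discrepancy between the patch and its reconstruction."""
    recon = reconstruct(patch)
    return max(abs(a - b)
               for row, rrow in zip(patch, recon)
               for a, b in zip(row, rrow))

clean = [[10, 10], [10, 10]]
glitch = [[10, 10], [10, 255]]
print(pixel_error(clean), pixel_error(glitch))
```

A large reconstruction discrepancy localizes the defective pixel within the patch, which is the signal the process uses to detect pixel errors.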
  • Publication number: 20210104241
    Abstract: In various embodiments, a communication fusion application enables other software application(s) to interpret spoken user input. In operation, a communication fusion application determines that a prediction is relevant to a text input derived from a spoken input received from a user. Subsequently, the communication fusion application generates a predicted context based on the prediction. The communication fusion application then transmits the predicted context and the text input to the other software application(s). The other software application(s) perform additional action(s) based on the text input and the predicted context. Advantageously, by providing additional, relevant information to the software application(s), the communication fusion application increases the level of understanding during interactions with the user and the overall user experience is improved.
    Type: Application
    Filed: October 4, 2019
    Publication date: April 8, 2021
    Inventors: Erika DOGGETT, Nathan NOCON, Ashutosh MODI, Joseph Charles SENGIR, Maxwell MCCOY
  • Patent number: 10832383
    Abstract: Systems and methods for distortion removal at multiple quality levels are disclosed. In one embodiment, a method may include receiving training content. The training content may include original content, reconstructed content, and training distortion quality levels corresponding to the reconstructed content. The reconstructed content may be derived from distorted original content. The method may further include receiving an initial distortion removal model. The method may include generating a conditioned distortion removal model by training the initial distortion removal model using the training content. The method may further include storing the conditioned distortion removal model.
    Type: Grant
    Filed: October 22, 2018
    Date of Patent: November 10, 2020
    Assignee: Disney Enterprises, Inc.
    Inventors: Christopher Schroers, Mauro Bamert, Erika Doggett, Jared McPhillen, Scott Labrozzi, Romann Weber
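The training loop in this abstract (pair original and reconstructed content, condition a removal model on the distortion quality level) can be shown with a deliberately tiny stand-in model: one learned correction offset per quality level. Real distortion removal would train a neural model; the data shapes and level labels here are assumptions.

```python
def train_conditioned_model(training_content):
    """Fit one correction offset per distortion quality level, a toy
    stand-in for conditioning a removal model on quality."""
    diffs_by_level = {}
    for original, reconstructed, level in training_content:
        diffs = [o - r for o, r in zip(original, reconstructed)]
        diffs_by_level.setdefault(level, []).extend(diffs)
    return {level: sum(d) / len(d) for level, d in diffs_by_level.items()}

def remove_distortion(model, reconstructed, level):
    """Apply the offset learned for this content's quality level."""
    offset = model[level]
    return [r + offset for r in reconstructed]

# (original, reconstructed, quality level) triples, as in the abstract.
data = [([10, 20], [8, 18], "low"), ([30, 40], [25, 35], "high")]
model = train_conditioned_model(data)
print(remove_distortion(model, [8, 18], "low"))
```

Conditioning on the quality level lets one stored model correct content distorted to different degrees, rather than training a separate model per level.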
  • Patent number: 10832380
    Abstract: Systems and methods for correcting color of uncalibrated material are disclosed. Example embodiments include a system to correct color of uncalibrated material. The system may include a non-transitory computer-readable medium operatively coupled to processors. The non-transitory computer-readable medium may store instructions that, when executed, cause the processors to perform a number of operations. One operation is to obtain a target image of a degraded target material with one or more objects. The degraded target material comprises degraded colors and light information corresponding to light sources in the degraded target material. Another operation is to obtain color reference data. Another operation is to identify an object in the target image that corresponds to the color reference data. Yet another operation is to correct the identified object in the target image. Another operation is to correct the target image.
    Type: Grant
    Filed: June 4, 2018
    Date of Patent: November 10, 2020
    Assignee: Disney Enterprises, Inc.
    Inventors: Steven Chapman, Mehul Patel, Joseph Popp, Ty Popko, Erika Doggett
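The correction chain in this abstract (find an object with known reference colors, derive a correction from it, then correct the whole image) can be sketched with simple per-channel gains. The gain model and the example RGB values are assumptions; real color correction would fit a richer transform and account for the light information the abstract mentions.

```python
def correction_from_reference(observed_rgb, reference_rgb):
    """Derive per-channel gains from a known reference object found in the
    degraded image (e.g. an object whose true colors are on file)."""
    return tuple(ref / obs for ref, obs in zip(reference_rgb, observed_rgb))

def apply_correction(image, gains):
    """Correct every pixel in the target image with the derived gains,
    clamping to the 8-bit range."""
    return [tuple(min(255, round(ch * g)) for ch, g in zip(px, gains))
            for px in image]

observed = (100, 50, 25)    # the reference object as seen in the faded frame
reference = (200, 100, 50)  # its true, calibrated color
gains = correction_from_reference(observed, reference)
print(apply_correction([(100, 50, 25), (50, 25, 10)], gains))
```

Because the gains come from an object inside the image itself, no external calibration chart is needed, which is the point of correcting uncalibrated material.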
  • Publication number: 20200226797
    Abstract: Systems and methods for predicting a target set of pixels are disclosed. In one embodiment, a method may include obtaining target content. The target content may include a target set of pixels to be predicted. The method may also include convolving the target set of pixels to generate an estimated set of pixels. The method may include matching a second set of pixels in the target content to the target set of pixels. The second set of pixels may be within a distance from the target set of pixels. The method may include refining the estimated set of pixels to generate a refined set of pixels using a second set of pixels in the target content.
    Type: Application
    Filed: January 16, 2019
    Publication date: July 16, 2020
    Applicant: Disney Enterprises, Inc.
    Inventors: Christopher Schroers, Erika Doggett, Stephan Marcel Mandt, Jared McPhillen, Scott Labrozzi, Romann Weber, Mauro Bamert
  • Publication number: 20200219245
    Abstract: A process receives, with a processor, video content. Further, the process splices, with the processor, the video content into a plurality of video frames. In addition, the process splices, with the processor, at least one of the plurality of video frames into a plurality of image patches. Moreover, the process performs, with a neural network, an image reconstruction of at least one of the plurality of image patches to generate a reconstructed image patch. The process also compares, with the processor, the reconstructed image patch with the at least one of the plurality of image patches. Finally, the process determines, with the processor, a pixel error within the at least one of the plurality of image patches based on a discrepancy between the reconstructed image patch and the at least one of the plurality of image patches.
    Type: Application
    Filed: January 9, 2019
    Publication date: July 9, 2020
    Inventors: Erika Doggett, Anna Wolak, Penelope Daphne Tsatsoulis, Nicholas McCarthy, Stephan Mandt
  • Patent number: 10691894
    Abstract: A process receives a user input in a human-to-machine interaction. The process generates, with a natural language generation engine, one or more response candidates. Further, the process measures, with the natural language generation engine, the semantic similarity of the one or more response candidates. In addition, the process selects, with the natural language generation engine, a response candidate from the one or more response candidates. The process measures, with the natural language generation engine, an offensiveness measurement and a politeness measurement of the selected response. The process determines, with the natural language generation engine, that the offensiveness measurement or the politeness measurement lacks compliance with one or more predefined criteria.
    Type: Grant
    Filed: May 1, 2018
    Date of Patent: June 23, 2020
    Assignee: Disney Enterprises, Inc.
    Inventor: Erika Doggett
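The final filtering step this abstract describes (measure the offensiveness and politeness of a selected response and test them against predefined criteria) can be sketched with toy word-list scores. The word lists, the score definitions, and the thresholds are invented for illustration; a real natural language generation engine would use learned classifiers.

```python
# Hypothetical word lists standing in for learned classifiers.
OFFENSIVE = {"stupid", "dumb"}
POLITE = {"please", "thanks", "sorry"}

def offensiveness(response: str) -> float:
    """Fraction of words drawn from the offensive list."""
    words = response.lower().split()
    return sum(w in OFFENSIVE for w in words) / max(len(words), 1)

def politeness(response: str) -> float:
    """Fraction of words drawn from the polite list."""
    words = response.lower().split()
    return sum(w in POLITE for w in words) / max(len(words), 1)

def complies(response: str, max_offense=0.0, min_polite=0.1) -> bool:
    """Reject a candidate whose offensiveness or politeness
    lacks compliance with the predefined criteria."""
    return (offensiveness(response) <= max_offense
            and politeness(response) >= min_polite)

print(complies("thanks for asking, here is the answer"))
```

A candidate that fails the criteria would be discarded and another response candidate selected, closing the loop the abstract describes.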
  • Publication number: 20200053388
    Abstract: Systems and methods are disclosed for compressing a target video. A computer-implemented method may use a computer system that includes one or more physical computer processors and non-transient electronic storage. The computer-implemented method may include: obtaining the target video, extracting one or more frames from the target video, and generating an estimated optical flow based on a displacement of pixels between the one or more frames. The one or more frames may include one or more of a key frame and a target frame.
    Type: Application
    Filed: January 29, 2019
    Publication date: February 13, 2020
    Applicant: Disney Enterprises, Inc.
    Inventors: Christopher Schroers, Simone Schaub, Erika Doggett, Jared McPhillen, Scott Labrozzi, Abdelaziz Djelouah
  • Publication number: 20200042601
    Abstract: A process receives, with a processor, audio corresponding to media content. Further, the process converts, with the processor, the audio to text. In addition, the process concatenates, with the processor, the text with one or more time codes. The process also parses, with the processor, the concatenated text into one or more text chunks according to one or more subtitle parameters. Further, the process automatically translates, with the processor, the parsed text from a first spoken language to a second spoken language. Moreover, the process determines, with the processor, if the language translation complies with the one or more subtitle parameters. Additionally, the process outputs, with the processor, the language translation to a display device for display of the one or more text chunks as one or more subtitles at one or more times corresponding to the one or more time codes.
    Type: Application
    Filed: August 1, 2018
    Publication date: February 6, 2020
    Inventor: Erika Doggett
  • Publication number: 20190384826
    Abstract: Systems, devices, and methods are disclosed for presenting an interactive narrative. An apparatus includes a user interface. The apparatus also includes one or more processors operatively coupled to the user interface and a non-transitory computer-readable medium. The non-transitory computer-readable medium stores instructions that, when executed, cause the one or more processors to present a first piece of content corresponding to a given narrative via the user interface. The given narrative includes one or more characteristics. The one or more processors are caused to receive user input via the user interface. The one or more processors are caused to classify the user input into one of a plurality of response models. The one or more processors are caused to dynamically respond to the user input by presenting a second piece of content. The second piece of content is based on a selected response model corresponding to the user input.
    Type: Application
    Filed: June 18, 2018
    Publication date: December 19, 2019
    Applicant: Disney Enterprises, Inc.
    Inventors: Erika Doggett, Alethia Shih
  • Publication number: 20190370939
    Abstract: Systems and methods for correcting color of uncalibrated material are disclosed. Example embodiments include a system to correct color of uncalibrated material. The system may include a non-transitory computer-readable medium operatively coupled to processors. The non-transitory computer-readable medium may store instructions that, when executed, cause the processors to perform a number of operations. One operation is to obtain a target image of a degraded target material with one or more objects. The degraded target material comprises degraded colors and light information corresponding to light sources in the degraded target material. Another operation is to obtain color reference data. Another operation is to identify an object in the target image that corresponds to the color reference data. Yet another operation is to correct the identified object in the target image. Another operation is to correct the target image.
    Type: Application
    Filed: June 4, 2018
    Publication date: December 5, 2019
    Applicant: Disney Enterprises, Inc.
    Inventors: Steven Chapman, Mehul Patel, Joseph Popp, Ty Popko, Erika Doggett
  • Publication number: 20190340238
    Abstract: A process receives a user input in a human-to-machine interaction. The process generates, with a natural language generation engine, one or more response candidates. Further, the process measures, with the natural language generation engine, the semantic similarity of the one or more response candidates. In addition, the process selects, with the natural language generation engine, a response candidate from the one or more response candidates. The process measures, with the natural language generation engine, an offensiveness measurement and a politeness measurement of the selected response. The process determines, with the natural language generation engine, that the offensiveness measurement or the politeness measurement lacks compliance with one or more predefined criteria.
    Type: Application
    Filed: May 1, 2018
    Publication date: November 7, 2019
    Inventor: Erika Doggett
  • Publication number: 20190333190
    Abstract: Systems and methods for distortion removal at multiple quality levels are disclosed. In one embodiment, a method may include receiving training content. The training content may include original content, reconstructed content, and training distortion quality levels corresponding to the reconstructed content. The reconstructed content may be derived from distorted original content. The method may further include receiving an initial distortion removal model. The method may include generating a conditioned distortion removal model by training the initial distortion removal model using the training content. The method may further include storing the conditioned distortion removal model.
    Type: Application
    Filed: October 22, 2018
    Publication date: October 31, 2019
    Applicants: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
    Inventors: Christopher Schroers, Mauro Bamert, Erika Doggett, Jared McPhillen, Scott Labrozzi, Romann Weber