Patents by Inventor Erika Doggett
Erika Doggett has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240062022
Abstract: Techniques for generating translated audio output based on media content are disclosed. Text is accessed corresponding to media content. One or more untranslated mouth shape indicia are determined based on the text. The text is parsed into one or more text chunks when one or more dubbing parameters are met. The parsed text is translated from a first spoken language to a second spoken language. One or more translated mouth shape indicia are determined. The one or more translated mouth shape indicia and the one or more untranslated mouth shape indicia are compared based on a predetermined tolerance threshold. A translated audio output is generated based on the translated text.
Type: Application
Filed: October 31, 2023
Publication date: February 22, 2024
Inventor: Erika Doggett
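The mouth-shape comparison step can be illustrated with a toy sketch. Everything here is an assumption for illustration: the viseme classes, the vowel mapping, and the tolerance value are invented stand-ins for the patent's actual indicia and threshold.

```python
# Toy sketch of the mouth-shape comparison: map each vowel to a rough viseme
# class, then check whether the translated line's viseme counts stay within a
# tolerance of the original's. Classes and tolerance are illustrative only.
from collections import Counter

VISEME_CLASS = {"a": "open", "e": "mid", "i": "mid", "o": "round", "u": "round"}

def mouth_shape_indicia(text):
    """Count rough viseme classes for the vowels in a line of dialogue."""
    return Counter(VISEME_CLASS[c] for c in text.lower() if c in VISEME_CLASS)

def within_tolerance(src, dst, tolerance=3):
    """True if every viseme class count differs by at most `tolerance`."""
    classes = set(src) | set(dst)
    return all(abs(src[c] - dst[c]) <= tolerance for c in classes)

src = mouth_shape_indicia("How are you today")
dst = mouth_shape_indicia("Comment vas-tu")
ok = within_tolerance(src, dst)
```

A translation whose viseme profile drifts past the tolerance would, in the claimed process, trigger re-parsing or re-translation before audio is generated.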
-
Patent number: 11887600
Abstract: In various embodiments, a communication fusion application enables other software application(s) to interpret spoken user input. In operation, a communication fusion application determines that a prediction is relevant to a text input derived from a spoken input received from a user. Subsequently, the communication fusion application generates a predicted context based on the prediction. The communication fusion application then transmits the predicted context and the text input to the other software application(s). The other software application(s) perform additional action(s) based on the text input and the predicted context. Advantageously, by providing additional, relevant information to the software application(s), the communication fusion application increases the level of understanding during interactions with the user, improving the overall user experience.
Type: Grant
Filed: October 4, 2019
Date of Patent: January 30, 2024
Assignee: DISNEY ENTERPRISES, INC.
Inventors: Erika Doggett, Nathan Nocon, Ashutosh Modi, Joseph Charles Sengir, Maxwell McCoy
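The fusion step can be sketched minimally: attach a predicted context to the transcribed text only when the prediction appears relevant, then hand both to the downstream application. The relevance score, threshold, and message shape are assumptions, not the patent's actual interfaces.

```python
# Minimal sketch of communication fusion: the transcribed text travels
# downstream either alone or enriched with a predicted context, depending
# on a relevance check. All field names here are invented placeholders.
def fuse(text_input, prediction, relevance_threshold=0.5):
    """Return the message a downstream application would receive."""
    message = {"text": text_input}
    if prediction["relevance"] >= relevance_threshold:
        # Prediction deemed relevant: derive and attach a predicted context.
        message["context"] = {"topic": prediction["topic"]}
    return message

msg = fuse("play the next one", {"topic": "music_queue", "relevance": 0.9})
```

A downstream application receiving the enriched message can disambiguate "the next one" using the attached topic rather than the bare text alone.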
-
Patent number: 11847425
Abstract: A process receives, with a processor, audio corresponding to media content. Further, the process converts, with the processor, the audio to text. In addition, the process concatenates, with the processor, the text with one or more time codes. The process also parses, with the processor, the concatenated text into one or more text chunks according to one or more subtitle parameters. Further, the process automatically translates, with the processor, the parsed text from a first spoken language to a second spoken language. Moreover, the process determines, with the processor, whether the language translation complies with the one or more subtitle parameters. Additionally, the process outputs, with the processor, the language translation to a display device for display of the one or more text chunks as one or more subtitles at one or more times corresponding to the one or more time codes.
Type: Grant
Filed: August 1, 2018
Date of Patent: December 19, 2023
Assignee: Disney Enterprises, Inc.
Inventor: Erika Doggett
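The parsing step can be sketched as greedy chunking under a subtitle parameter. The character limit used here is a common subtitling convention assumed for illustration; the patent's actual parameters are not specified in the abstract.

```python
# Hypothetical sketch of the subtitle-parsing step: pack transcribed words
# into chunks that respect a maximum-characters-per-subtitle parameter.
def parse_into_chunks(words, max_chars=42):
    """Greedily pack words into subtitle chunks no longer than max_chars."""
    chunks, current = [], ""
    for word in words:
        candidate = (current + " " + word).strip()
        if len(candidate) <= max_chars:
            current = candidate
        else:
            chunks.append(current)
            current = word
    if current:
        chunks.append(current)
    return chunks

chunks = parse_into_chunks("the quick brown fox jumps over the lazy dog".split(),
                           max_chars=20)
```

After translation, the same compliance check would run again, since a translated chunk can exceed the limit even when the source chunk did not.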
-
Publication number: 20230077379
Abstract: Systems and methods are disclosed for compressing a target video. A computer-implemented method may use a computer system that includes one or more physical computer processors and non-transient electronic storage. The computer-implemented method may include: obtaining the target video, extracting one or more frames from the target video, and generating an estimated optical flow based on a displacement of pixels between the one or more frames. The one or more frames may include one or more of a key frame and a target frame.
Type: Application
Filed: October 24, 2022
Publication date: March 16, 2023
Inventors: Christopher SCHROERS, Simone SCHAUB, Erika DOGGETT, Jared MCPHILLEN, Scott LABROZZI, Abdelaziz DJELOUAH
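The displacement estimate between a key frame and a target frame can be illustrated with phase correlation, which recovers a single global pixel shift. This is a deliberate simplification: real optical flow is dense and per-pixel, and the patent's estimator is not specified in the abstract.

```python
# Illustrative sketch of displacement estimation: phase correlation between a
# key frame and a target frame recovers a global (dy, dx) translation.
import numpy as np

def estimate_shift(key, target):
    """Estimate the (dy, dx) translation of `target` relative to `key`."""
    f1, f2 = np.fft.fft2(key), np.fft.fft2(target)
    cross = np.conj(f1) * f2
    # Normalized cross-power spectrum peaks at the displacement.
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the frame into negative displacements.
    h, w = key.shape
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

rng = np.random.default_rng(0)
key = rng.random((64, 64))
target = np.roll(key, shift=(3, -5), axis=(0, 1))
shift = estimate_shift(key, target)
```

A codec built on this idea stores the flow plus a residual for the target frame instead of the full frame.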
-
Publication number: 20220215595
Abstract: Systems and methods for predicting a target set of pixels are disclosed. In one embodiment, a method may include obtaining target content. The target content may include a target set of pixels to be predicted. The method may also include convolving the target set of pixels to generate an estimated set of pixels. The method may include matching a second set of pixels in the target content to the target set of pixels. The second set of pixels may be within a distance from the target set of pixels. The method may include refining the estimated set of pixels to generate a refined set of pixels using the second set of pixels in the target content.
Type: Application
Filed: March 25, 2022
Publication date: July 7, 2022
Inventors: Christopher SCHROERS, Erika DOGGETT, Stephan MANDT, Jared MCPHILLEN, Scott LABROZZI, Romann WEBER, Mauro BAMERT
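The two-stage prediction can be sketched loosely: first estimate the target pixels by neighborhood averaging (a convolution-style step), then refine the estimate toward the best-matching block found within a search distance. The block size, search radius, and blend weight below are all invented for illustration.

```python
# Loose sketch of two-stage pixel prediction: a neighborhood-average estimate
# refined toward the nearest matching block. All parameters are assumptions.
import numpy as np

def predict_block(image, top, left, size=2, search=6, blend=0.5):
    """Predict image[top:top+size, left:left+size] from its surroundings."""
    img = image.astype(float)
    # Stage 1: convolution-style estimate from the border ring around the block.
    ring = np.concatenate([
        img[top - 1, left - 1:left + size + 1],
        img[top + size, left - 1:left + size + 1],
        img[top:top + size, left - 1],
        img[top:top + size, left + size],
    ])
    estimate = np.full((size, size), ring.mean())
    # Stage 2: refine toward the best-matching block within the search window.
    best, best_err = None, np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if (dy, dx) == (0, 0) or y < 0 or x < 0:
                continue
            if y + size > img.shape[0] or x + size > img.shape[1]:
                continue
            cand = img[y:y + size, x:x + size]
            err = np.abs(cand - estimate).sum()
            if err < best_err:
                best, best_err = cand, err
    return blend * estimate + (1 - blend) * best

flat = np.full((12, 12), 5.0)
pred = predict_block(flat, top=5, left=5)
```

On real content the refined block inherits texture from the matched neighbor that the smooth convolution estimate alone would miss.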
-
Patent number: 11335034
Abstract: Systems and methods for predicting a target set of pixels are disclosed. In one embodiment, a method may include obtaining target content. The target content may include a target set of pixels to be predicted. The method may also include convolving the target set of pixels to generate an estimated set of pixels. The method may include matching a second set of pixels in the target content to the target set of pixels. The second set of pixels may be within a distance from the target set of pixels. The method may include refining the estimated set of pixels to generate a refined set of pixels using the second set of pixels in the target content.
Type: Grant
Filed: January 16, 2019
Date of Patent: May 17, 2022
Assignee: Disney Enterprises, Inc.
Inventors: Christopher Schroers, Erika Doggett, Stephan Marcel Mandt, Jared McPhillen, Scott Labrozzi, Romann Weber, Mauro Bamert
-
Patent number: 11151186
Abstract: Systems, devices, and methods are disclosed for presenting an interactive narrative. An apparatus includes a user interface. The apparatus also includes one or more processors operatively coupled to the user interface and a non-transitory computer-readable medium. The non-transitory computer-readable medium stores instructions that, when executed, cause the one or more processors to present a first piece of content corresponding to a given narrative via the user interface. The given narrative includes one or more characteristics. The one or more processors are caused to receive user input via the user interface. The one or more processors are caused to classify the user input into one of a plurality of response models. The one or more processors are caused to dynamically respond to the user input by presenting a second piece of content. The second piece of content is based on a selected response model corresponding to the user input.
Type: Grant
Filed: June 18, 2018
Date of Patent: October 19, 2021
Assignee: Disney Enterprises, Inc.
Inventors: Erika Doggett, Alethia Shih
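The classify-then-respond loop can be sketched with a hand-wavy example. The response models and the keyword rules below are invented for illustration; the patent does not specify how classification is performed.

```python
# Hand-wavy sketch of the interactive-narrative loop: route user input to one
# of several response models, then pick the next piece of content from the
# winning model. Models and rules here are invented placeholders.
RESPONSE_MODELS = {
    "question": lambda text: "A narrator answers: it happened long ago.",
    "command": lambda text: "The story follows your order: you open the door.",
    "smalltalk": lambda text: "A character nods and continues the tale.",
}

def classify(user_input):
    """Classify the input into one of the response models."""
    text = user_input.lower()
    if text.endswith("?") or text.startswith(("who", "what", "why", "where")):
        return "question"
    if text.startswith(("go", "open", "take", "look")):
        return "command"
    return "smalltalk"

def respond(user_input):
    model = RESPONSE_MODELS[classify(user_input)]
    return model(user_input)
```

The second piece of content thus depends on which response model the input selects, which is what lets the narrative react dynamically rather than follow a fixed script.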
-
Patent number: 11080835
Abstract: A process receives, with a processor, video content. Further, the process splices, with the processor, the video content into a plurality of video frames. In addition, the process splices, with the processor, at least one of the plurality of video frames into a plurality of image patches. Moreover, the process performs, with a neural network, an image reconstruction of at least one of the plurality of image patches to generate a reconstructed image patch. The process also compares, with the processor, the reconstructed image patch with the at least one of the plurality of image patches. Finally, the process determines, with the processor, a pixel error within the at least one of the plurality of image patches based on a discrepancy between the reconstructed image patch and the at least one of the plurality of image patches.Type: Grant
Filed: January 9, 2019
Date of Patent: August 3, 2021
Assignee: Disney Enterprises, Inc.
Inventors: Erika Doggett, Anna Wolak, Penelope Daphne Tsatsoulis, Nicholas McCarthy, Stephan Mandt
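The patch-wise error detection can be shown in simplified form. Here a patch-mean "reconstruction" stands in for the neural network, since the abstract does not describe the network itself; the point is only the split-reconstruct-compare structure.

```python
# Simplified sketch of pixel-error detection: split a frame into patches,
# "reconstruct" each one (a patch-mean stand-in for the neural network), and
# flag the patch whose reconstruction discrepancy is largest.
import numpy as np

def split_into_patches(frame, size):
    h, w = frame.shape
    return [frame[y:y + size, x:x + size]
            for y in range(0, h, size) for x in range(0, w, size)]

def reconstruction_error(patch):
    """Mean absolute discrepancy between a patch and its reconstruction."""
    reconstructed = np.full_like(patch, patch.mean(), dtype=float)
    return np.abs(patch - reconstructed).mean()

frame = np.zeros((8, 8))
frame[5, 5] = 255.0          # a single corrupted pixel
errors = [reconstruction_error(p) for p in split_into_patches(frame, 4)]
flagged = int(np.argmax(errors))
```

A trained autoencoder would reconstruct clean patches faithfully and corrupted patches poorly, so high discrepancy localizes pixel errors to specific patches.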
-
Publication number: 20210104241
Abstract: In various embodiments, a communication fusion application enables other software application(s) to interpret spoken user input. In operation, a communication fusion application determines that a prediction is relevant to a text input derived from a spoken input received from a user. Subsequently, the communication fusion application generates a predicted context based on the prediction. The communication fusion application then transmits the predicted context and the text input to the other software application(s). The other software application(s) perform additional action(s) based on the text input and the predicted context. Advantageously, by providing additional, relevant information to the software application(s), the communication fusion application increases the level of understanding during interactions with the user, improving the overall user experience.
Type: Application
Filed: October 4, 2019
Publication date: April 8, 2021
Inventors: Erika DOGGETT, Nathan NOCON, Ashutosh MODI, Joseph Charles SENGIR, Maxwell MCCOY
-
Patent number: 10832383
Abstract: Systems and methods for distortion removal at multiple quality levels are disclosed. In one embodiment, a method may include receiving training content. The training content may include original content, reconstructed content, and training distortion quality levels corresponding to the reconstructed content. The reconstructed content may be derived from distorted original content. The method may further include receiving an initial distortion removal model. The method may include generating a conditioned distortion removal model by training the initial distortion removal model using the training content. The method may further include storing the conditioned distortion removal model.
Type: Grant
Filed: October 22, 2018
Date of Patent: November 10, 2020
Assignee: DISNEY ENTERPRISES, INC.
Inventors: Christopher Schroers, Mauro Bamert, Erika Doggett, Jared McPhillen, Scott Labrozzi, Romann Weber
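The conditioning idea can be illustrated very roughly: for each training distortion quality level, fit the model parameter that best maps reconstructed content back toward the original. A real distortion removal model would be a neural network; the single blend weight below is a deliberately tiny stand-in, and the quality-level labels are invented.

```python
# Rough sketch of conditioning on quality level: per level, grid-search the
# single parameter (a blend weight toward the patch mean) that minimizes the
# error against the original. A scalar stands in for a trained network.
import numpy as np

def condition_model(original, reconstructed_by_level):
    """Return the best-fitting blend weight per training quality level."""
    model = {}
    for level, recon in reconstructed_by_level.items():
        best_w, best_err = 0.0, np.inf
        for w in np.linspace(0, 1, 21):
            restored = (1 - w) * recon + w * original.mean()
            err = ((restored - original) ** 2).mean()
            if err < best_err:
                best_w, best_err = float(w), err
        model[level] = best_w
    return model

original = np.array([0.0, 10.0])
model = condition_model(original, {
    "high": np.array([0.0, 10.0]),   # barely distorted reconstruction
    "low": np.array([10.0, 0.0]),    # heavily distorted reconstruction
})
```

The stored model then applies a different learned correction depending on the quality level of the content it receives, which is the point of conditioning.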
-
Patent number: 10832380
Abstract: Systems and methods for correcting color of uncalibrated material are disclosed. Example embodiments include a system to correct color of uncalibrated material. The system may include a non-transitory computer-readable medium operatively coupled to processors. The non-transitory computer-readable medium may store instructions that, when executed, cause the processors to perform a number of operations. One operation is to obtain a target image of a degraded target material with one or more objects. The degraded target material comprises degraded colors and light information corresponding to light sources in the degraded target material. Another operation is to obtain color reference data. Another operation is to identify an object in the target image that corresponds to the color reference data. Yet another operation is to correct the identified object in the target image. Another operation is to correct the target image.
Type: Grant
Filed: June 4, 2018
Date of Patent: November 10, 2020
Assignee: DISNEY ENTERPRISES, INC.
Inventors: Steven Chapman, Mehul Patel, Joseph Popp, Ty Popko, Erika Doggett
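The reference-based correction can be sketched minimally: measure a known object's color in the degraded image, compute a per-channel gain against its reference color, and apply that gain to the whole image. The RGB values and the gain-only correction model are assumptions for illustration.

```python
# Minimal sketch of reference-based color correction: a per-channel gain
# computed from one known object is applied to the full image. Values and
# the gain-only model are invented for illustration.
import numpy as np

def correct_with_reference(image, observed_color, reference_color):
    """Scale each channel so the observed object matches its reference color."""
    gain = np.asarray(reference_color, float) / np.asarray(observed_color, float)
    return np.clip(image * gain, 0, 255)

# A faded frame where a prop known to be (200, 40, 40) red reads (100, 40, 40).
faded = np.array([[[100.0, 40.0, 40.0], [50.0, 20.0, 10.0]]])
corrected = correct_with_reference(faded, observed_color=(100, 40, 40),
                                   reference_color=(200, 40, 40))
```

Correcting the identified object first, then propagating the correction to the rest of the image, mirrors the two correction operations in the claim.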
-
Publication number: 20200226797
Abstract: Systems and methods for predicting a target set of pixels are disclosed. In one embodiment, a method may include obtaining target content. The target content may include a target set of pixels to be predicted. The method may also include convolving the target set of pixels to generate an estimated set of pixels. The method may include matching a second set of pixels in the target content to the target set of pixels. The second set of pixels may be within a distance from the target set of pixels. The method may include refining the estimated set of pixels to generate a refined set of pixels using the second set of pixels in the target content.
Type: Application
Filed: January 16, 2019
Publication date: July 16, 2020
Applicant: Disney Enterprises, Inc.
Inventors: Christopher Schroers, Erika Doggett, Stephan Marcel Mandt, Jared McPhillen, Scott Labrozzi, Romann Weber, Mauro Bamert
-
Publication number: 20200219245
Abstract: A process receives, with a processor, video content. Further, the process splices, with the processor, the video content into a plurality of video frames. In addition, the process splices, with the processor, at least one of the plurality of video frames into a plurality of image patches. Moreover, the process performs, with a neural network, an image reconstruction of at least one of the plurality of image patches to generate a reconstructed image patch. The process also compares, with the processor, the reconstructed image patch with the at least one of the plurality of image patches. Finally, the process determines, with the processor, a pixel error within the at least one of the plurality of image patches based on a discrepancy between the reconstructed image patch and the at least one of the plurality of image patches.
Type: Application
Filed: January 9, 2019
Publication date: July 9, 2020
Inventors: Erika Doggett, Anna Wolak, Penelope Daphne Tsatsoulis, Nicholas McCarthy, Stephan Mandt
-
Patent number: 10691894
Abstract: A process receives a user input in a human-to-machine interaction. The process generates, with a natural language generation engine, one or more response candidates. Further, the process measures, with the natural language generation engine, the semantic similarity of the one or more response candidates. In addition, the process selects, with the natural language generation engine, a response candidate from the one or more response candidates. The process measures, with the natural language generation engine, an offensiveness measurement and a politeness measurement of the selected response. The process determines, with the natural language generation engine, that the offensiveness measurement or the politeness measurement lacks compliance with one or more predefined criteria.
Type: Grant
Filed: May 1, 2018
Date of Patent: June 23, 2020
Assignee: Disney Enterprises, Inc.
Inventor: Erika Doggett
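The compliance check can be shown with a toy sketch: score each response candidate for offensiveness and politeness, and reject candidates that fall outside predefined criteria. The keyword-based scorers and thresholds below are placeholders; real measurements would come from trained classifiers.

```python
# Toy sketch of the offensiveness/politeness gate on generated responses.
# The scoring functions and thresholds are invented placeholders.
def offensiveness(text):
    return 1.0 if "stupid" in text.lower() else 0.0

def politeness(text):
    return 1.0 if "please" in text.lower() or "thanks" in text.lower() else 0.3

def select_compliant(candidates, max_offense=0.5, min_politeness=0.2):
    """Return the first candidate meeting both predefined criteria, else None."""
    for cand in candidates:
        if offensiveness(cand) <= max_offense and politeness(cand) >= min_politeness:
            return cand
    return None

choice = select_compliant(["That was a stupid question.",
                           "Thanks for asking! Here is the answer."])
```

When no candidate complies, the engine would regenerate rather than emit a non-compliant response.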
-
Publication number: 20200053388
Abstract: Systems and methods are disclosed for compressing a target video. A computer-implemented method may use a computer system that includes one or more physical computer processors and non-transient electronic storage. The computer-implemented method may include: obtaining the target video, extracting one or more frames from the target video, and generating an estimated optical flow based on a displacement of pixels between the one or more frames. The one or more frames may include one or more of a key frame and a target frame.
Type: Application
Filed: January 29, 2019
Publication date: February 13, 2020
Applicant: Disney Enterprises, Inc.
Inventors: Christopher Schroers, Simone Schaub, Erika Doggett, Jared McPhillen, Scott Labrozzi, Abdelaziz Djelouah
-
Publication number: 20200042601
Abstract: A process receives, with a processor, audio corresponding to media content. Further, the process converts, with the processor, the audio to text. In addition, the process concatenates, with the processor, the text with one or more time codes. The process also parses, with the processor, the concatenated text into one or more text chunks according to one or more subtitle parameters. Further, the process automatically translates, with the processor, the parsed text from a first spoken language to a second spoken language. Moreover, the process determines, with the processor, whether the language translation complies with the one or more subtitle parameters. Additionally, the process outputs, with the processor, the language translation to a display device for display of the one or more text chunks as one or more subtitles at one or more times corresponding to the one or more time codes.
Type: Application
Filed: August 1, 2018
Publication date: February 6, 2020
Inventor: Erika Doggett
-
Publication number: 20190384826
Abstract: Systems, devices, and methods are disclosed for presenting an interactive narrative. An apparatus includes a user interface. The apparatus also includes one or more processors operatively coupled to the user interface and a non-transitory computer-readable medium. The non-transitory computer-readable medium stores instructions that, when executed, cause the one or more processors to present a first piece of content corresponding to a given narrative via the user interface. The given narrative includes one or more characteristics. The one or more processors are caused to receive user input via the user interface. The one or more processors are caused to classify the user input into one of a plurality of response models. The one or more processors are caused to dynamically respond to the user input by presenting a second piece of content. The second piece of content is based on a selected response model corresponding to the user input.
Type: Application
Filed: June 18, 2018
Publication date: December 19, 2019
Applicant: Disney Enterprises, Inc.
Inventors: Erika DOGGETT, Alethia SHIH
-
Publication number: 20190370939
Abstract: Systems and methods for correcting color of uncalibrated material are disclosed. Example embodiments include a system to correct color of uncalibrated material. The system may include a non-transitory computer-readable medium operatively coupled to processors. The non-transitory computer-readable medium may store instructions that, when executed, cause the processors to perform a number of operations. One operation is to obtain a target image of a degraded target material with one or more objects. The degraded target material comprises degraded colors and light information corresponding to light sources in the degraded target material. Another operation is to obtain color reference data. Another operation is to identify an object in the target image that corresponds to the color reference data. Yet another operation is to correct the identified object in the target image. Another operation is to correct the target image.
Type: Application
Filed: June 4, 2018
Publication date: December 5, 2019
Applicant: Disney Enterprises, Inc.
Inventors: Steven CHAPMAN, Mehul PATEL, Joseph POPP, Ty POPKO, Erika DOGGETT
-
Publication number: 20190340238
Abstract: A process receives a user input in a human-to-machine interaction. The process generates, with a natural language generation engine, one or more response candidates. Further, the process measures, with the natural language generation engine, the semantic similarity of the one or more response candidates. In addition, the process selects, with the natural language generation engine, a response candidate from the one or more response candidates. The process measures, with the natural language generation engine, an offensiveness measurement and a politeness measurement of the selected response. The process determines, with the natural language generation engine, that the offensiveness measurement or the politeness measurement lacks compliance with one or more predefined criteria.
Type: Application
Filed: May 1, 2018
Publication date: November 7, 2019
Inventor: Erika Doggett
-
Publication number: 20190333190
Abstract: Systems and methods for distortion removal at multiple quality levels are disclosed. In one embodiment, a method may include receiving training content. The training content may include original content, reconstructed content, and training distortion quality levels corresponding to the reconstructed content. The reconstructed content may be derived from distorted original content. The method may further include receiving an initial distortion removal model. The method may include generating a conditioned distortion removal model by training the initial distortion removal model using the training content. The method may further include storing the conditioned distortion removal model.
Type: Application
Filed: October 22, 2018
Publication date: October 31, 2019
Applicants: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Christopher Schroers, Mauro Bamert, Erika Doggett, Jared McPhillen, Scott Labrozzi, Romann Weber