Patents by Inventor Kyle Tacke

Kyle Tacke is a named inventor on the patents and patent applications listed below. The listing includes pending applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11924481
    Abstract: The disclosed computer-implemented method may include (1) accessing a first media data object and a different, second media data object that, when played back, each render temporally sequenced content, (2) comparing first temporally sequenced content represented by the first media data object with second temporally sequenced content represented by the second media data object to identify a set of common temporal subsequences between the first media data object and the second media data object, (3) identifying a set of edits relative to the set of common temporal subsequences that describe a difference between the temporally sequenced content of the first media data object and the temporally sequenced content of the second media data object, and (4) executing a workflow relating to the first media data object and/or the second media data object based on the set of edits. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: March 20, 2023
    Date of Patent: March 5, 2024
    Assignee: Netflix, Inc.
    Inventors: Yadong Wang, Chih-Wei Wu, Kyle Tacke, Shilpa Jois Rao, Boney Sekh, Andrew Swan, Raja Ranjan Senapati
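The abstract above describes a diff over temporally sequenced media: find the common temporal subsequences between two media objects, then derive the set of edits separating them. As a rough illustration only (not the patented method), the same shape of computation can be sketched with Python's standard-library `SequenceMatcher`, assuming each media object has already been reduced to a sequence of hashable per-frame fingerprints (`fp_a`, `fp_b` are hypothetical):

```python
from difflib import SequenceMatcher

def media_diff(fp_a, fp_b):
    """Return (common subsequences, edits) between two fingerprint sequences."""
    sm = SequenceMatcher(a=fp_a, b=fp_b, autojunk=False)
    # Matching blocks play the role of "common temporal subsequences".
    common = [m for m in sm.get_matching_blocks() if m.size > 0]
    # Non-equal opcodes describe the edits relative to those subsequences.
    edits = [op for op in sm.get_opcodes() if op[0] != "equal"]
    return common, edits

# Two cuts of the same content, differing in one frame:
fp_a = ["f1", "f2", "f3", "f4", "f5"]
fp_b = ["f1", "f2", "fX", "f4", "f5"]
common, edits = media_diff(fp_a, fp_b)
# edits → [('replace', 2, 3, 2, 3)]
```

A downstream workflow (step 4 of the abstract) could then branch on the kind and extent of the edits, e.g. reusing subtitles or dubs for the matching spans.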
  • Publication number: 20230232055
    Abstract: The disclosed computer-implemented method may include (1) accessing a first media data object and a different, second media data object that, when played back, each render temporally sequenced content, (2) comparing first temporally sequenced content represented by the first media data object with second temporally sequenced content represented by the second media data object to identify a set of common temporal subsequences between the first media data object and the second media data object, (3) identifying a set of edits relative to the set of common temporal subsequences that describe a difference between the temporally sequenced content of the first media data object and the temporally sequenced content of the second media data object, and (4) executing a workflow relating to the first media data object and/or the second media data object based on the set of edits. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Application
    Filed: March 20, 2023
    Publication date: July 20, 2023
    Inventors: Yadong Wang, Chih-Wei Wu, Kyle Tacke, Shilpa Jois Rao, Boney Sekh, Andrew Swan, Raja Ranjan Senapati
  • Patent number: 11659214
    Abstract: The disclosed computer-implemented method may include (1) accessing a first media data object and a different, second media data object that, when played back, each render temporally sequenced content, (2) comparing first temporally sequenced content represented by the first media data object with second temporally sequenced content represented by the second media data object to identify a set of common temporal subsequences between the first media data object and the second media data object, (3) identifying a set of edits relative to the set of common temporal subsequences that describe a difference between the temporally sequenced content of the first media data object and the temporally sequenced content of the second media data object, and (4) executing a workflow relating to the first media data object and/or the second media data object based on the set of edits. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: April 30, 2021
    Date of Patent: May 23, 2023
    Assignee: Netflix, Inc.
    Inventors: Yadong Wang, Chih-Wei Wu, Kyle Tacke, Shilpa Jois Rao, Boney Sekh, Andrew Swan, Raja Ranjan Senapati
  • Patent number: 11430485
    Abstract: The disclosed computer-implemented method may include accessing an audio track that is associated with a video recording, identifying a section of the accessed audio track having a specific audio characteristic, reducing a volume level of the audio track in the identified section, accessing an audio segment that includes a synthesized voice and inserting the accessed audio segment into the identified section of the audio track, where the inserted segment has a higher volume level than the reduced volume level of the audio track in the identified section. The synthesized voice description can be used to provide additional information to a visually impaired viewer without interrupting the audio track that is associated with the video recording, typically by inserting the synthesized voice description into a segment of the audio track in which there is no dialog. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: January 20, 2020
    Date of Patent: August 30, 2022
    Assignee: Netflix, Inc.
    Inventors: Yadong Wang, Murthy Parthasarathi, Andrew Swan, Raja Ranjan Senapati, Shilpa Jois Rao, Anjali Chablani, Kyle Tacke
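The core audio operation in this abstract is "ducking": lower the original track's volume over an identified section, then overlay a louder synthesized-voice segment there. A minimal NumPy sketch of that mixing step, assuming both signals are already mono float arrays at the same sample rate (the function name and gain value are illustrative, not from the patent):

```python
import numpy as np

def insert_description(track, voice, start, duck_gain=0.3):
    """Duck `track` under a synthesized-voice `voice` segment starting at `start`."""
    out = track.astype(np.float32).copy()
    end = start + len(voice)
    out[start:end] *= duck_gain   # reduce the original track's volume in the section
    out[start:end] += voice       # insert the voice segment at a higher relative level
    return out

bed = np.ones(10, dtype=np.float32)            # stand-in for the original audio track
desc = np.full(4, 0.5, dtype=np.float32)       # stand-in for the synthesized voice
mixed = insert_description(bed, desc, start=3)
```

In practice the "identified section" would come from dialog detection, so the description lands where there is no speech to interrupt.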
  • Publication number: 20220115030
    Abstract: The disclosed computer-implemented method may include obtaining an audio sample from a content source, inputting the obtained audio sample into a trained machine learning model, obtaining the output of the trained machine learning model, wherein the output is a profile of an environment in which the input audio sample was recorded, obtaining an acoustic impulse response corresponding to the profile of the environment in which the input audio sample was recorded, obtaining a second audio sample, processing the obtained acoustic impulse response with the second audio sample, and inserting a result of processing the obtained acoustic impulse response and the second audio sample into an audio track. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Application
    Filed: December 17, 2021
    Publication date: April 14, 2022
    Inventors: Yadong Wang, Shilpa Jois Rao, Murthy Parthasarathi, Kyle Tacke
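The pipeline in this abstract has two stages: a trained model profiles the recording environment of an audio sample, and a matching acoustic impulse response is then applied to a second sample (convolution is the standard way to apply an impulse response) before insertion into the track. A schematic sketch under those assumptions, with the classifier stubbed out as a plain callable and `ir_library` as a hypothetical profile-to-IR lookup:

```python
import numpy as np

def match_room_tone(sample, classify, ir_library, voice):
    """Profile `sample`'s environment, then apply the matching impulse response to `voice`."""
    profile = classify(sample)          # stand-in for the trained ML model
    ir = ir_library[profile]            # impulse response for that environment
    # Convolving with the IR imprints the room's acoustics onto the new audio.
    wet = np.convolve(voice, ir)[: len(voice)]
    return wet

classify = lambda s: "studio"                      # illustrative stub
ir_library = {"studio": np.array([1.0, 0.25], dtype=np.float32)}
voice = np.array([0.5, 0.0, 0.0], dtype=np.float32)
processed = match_room_tone(None, classify, ir_library, voice)
```

The point of the design is that dubbed or synthesized dialog processed this way blends with the scene's original recording environment instead of sounding dry.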
  • Patent number: 11238888
    Abstract: The disclosed computer-implemented method may include obtaining an audio sample from a content source, inputting the obtained audio sample into a trained machine learning model, obtaining the output of the trained machine learning model, wherein the output is a profile of an environment in which the input audio sample was recorded, obtaining an acoustic impulse response corresponding to the profile of the environment in which the input audio sample was recorded, obtaining a second audio sample, processing the obtained acoustic impulse response with the second audio sample, and inserting a result of processing the obtained acoustic impulse response and the second audio sample into an audio track. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: February 1, 2022
    Assignee: Netflix, Inc.
    Inventors: Yadong Wang, Shilpa Jois Rao, Murthy Parthasarathi, Kyle Tacke
  • Publication number: 20220021911
    Abstract: The disclosed computer-implemented method may include (1) accessing a first media data object and a different, second media data object that, when played back, each render temporally sequenced content, (2) comparing first temporally sequenced content represented by the first media data object with second temporally sequenced content represented by the second media data object to identify a set of common temporal subsequences between the first media data object and the second media data object, (3) identifying a set of edits relative to the set of common temporal subsequences that describe a difference between the temporally sequenced content of the first media data object and the temporally sequenced content of the second media data object, and (4) executing a workflow relating to the first media data object and/or the second media data object based on the set of edits. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Application
    Filed: April 30, 2021
    Publication date: January 20, 2022
    Inventors: Yadong Wang, Chih-Wei Wu, Kyle Tacke, Shilpa Jois Rao, Boney Sekh, Andrew Swan, Raja Ranjan Senapati
  • Publication number: 20210201931
    Abstract: The disclosed computer-implemented method may include obtaining an audio sample from a content source, inputting the obtained audio sample into a trained machine learning model, obtaining the output of the trained machine learning model, wherein the output is a profile of an environment in which the input audio sample was recorded, obtaining an acoustic impulse response corresponding to the profile of the environment in which the input audio sample was recorded, obtaining a second audio sample, processing the obtained acoustic impulse response with the second audio sample, and inserting a result of processing the obtained acoustic impulse response and the second audio sample into an audio track. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Application
    Filed: December 31, 2019
    Publication date: July 1, 2021
    Inventors: Yadong Wang, Shilpa Jois Rao, Murthy Parthasarathi, Kyle Tacke
  • Publication number: 20210151082
    Abstract: The disclosed computer-implemented method may include accessing an audio track that is associated with a video recording, identifying a section of the accessed audio track having a specific audio characteristic, reducing a volume level of the audio track in the identified section, accessing an audio segment that includes a synthesized voice and inserting the accessed audio segment into the identified section of the audio track, where the inserted segment has a higher volume level than the reduced volume level of the audio track in the identified section. The synthesized voice description can be used to provide additional information to a visually impaired viewer without interrupting the audio track that is associated with the video recording, typically by inserting the synthesized voice description into a segment of the audio track in which there is no dialog. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Application
    Filed: January 20, 2020
    Publication date: May 20, 2021
    Inventors: Yadong Wang, Murthy Parthasarathi, Andrew Swan, Raja Ranjan Senapati, Shilpa Jois Rao, Anjali Chablani, Kyle Tacke