Patents by Inventor Charles Effinger

Charles Effinger has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have been granted by the United States Patent and Trademark Office (USPTO). Illustrative code sketches of selected techniques from these abstracts follow the listing.

  • Patent number: 11871068
    Abstract: Techniques for identifying synchronization errors between audio and video are described herein. Audio portions in audio for media content may be identified based at least in part on a sound level associated with first respective segments of the audio portions. A subset of the audio portions may be selected based at least in part on a duration associated with the audio portions. For a segment of the subset, a first number of frames in the audio and a second number of frames in the video for the segment may be determined. A determination may be made that the segment includes a conversation segment based at least in part on the first number of frames, the second number of frames, and a first threshold. A synchronization error may be identified in the conversation segment based on a difference between the audio and the video of the conversation segment.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: January 9, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Christian Garcia Siagian, Ryan Barlow Dall, Charles Effinger, Ramakanth Mudumba
  • Patent number: 11070891
    Abstract: A subtitle management system is provided that analyzes and adjusts subtitles for video content to improve the experience of viewers. Subtitles may be optimized or otherwise adjusted to display in particular regions of the video content, to display in synchronization with audio presentation of the spoken dialogue represented by the subtitles, to display in particular colors, and the like. Subtitles that are permanently integrated into the video content may be identified and addressed. These and other adjustments may be applied to address any of a variety of subtitle issues and shortcomings with conventional methods of generating subtitles.
    Type: Grant
    Filed: December 10, 2019
    Date of Patent: July 20, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Charles Effinger, Ryan Barlow Dall, Christian Garcia Siagian, Ramakanth Mudumba, Lawrence Kyuil Chang
  • Patent number: 10904476
    Abstract: Techniques for automated up-sampling of media files are provided. In some examples, a title associated with a media file, a metadata file associated with the title, and the media file may be received. The media file may be partitioned into one or more scene files, each scene file including a plurality of frame images in a sequence. One or more up-sampled scene files may be generated, each corresponding to a scene file of the one or more scene files. An up-sampled media file may be generated by combining at least a subset of the one or more up-sampled scene files. Generating one or more up-sampled scene files may include identifying one or more characters in a frame image of the plurality of frame images, based at least in part on implementation of a facial recognition algorithm including deep learning features in a neural network.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: January 26, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Christian Garcia Siagian, Charles Effinger, David Niu, Yang Yu, Narayan Sundaram, Arjun Cholkar, Ramakanth Mudumba
  • Patent number: 10841666
    Abstract: Technologies are provided for generation of points of insertion of directed content into a video asset. In some embodiments, multiple time offsets within an interval spanned by the video asset can be determined using audio data corresponding to the video asset. A time offset defines a boundary between first and second segments of the video asset. Using image data corresponding to the video asset, respective pairs of video clips for the multiple time offsets can be generated. Visual features, aural features, and language features pertaining to the respective pairs of video clips can then be generated. Scores for the multiple time offsets can be generated using the visual features, the aural features, and the language features. A score represents an assessment of suitability to insert directed content into the video asset at a time offset. A file that contains specific time offsets can be generated.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: November 17, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Charles Effinger, Ryan Barlow Dall, Christian Garcia Siagian, Jonathan Y Ito, Brady Court Tsurutani, Vadim Volovik
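
Illustrative code sketches

The abstract for patent 11871068 walks through a frame-count comparison for spotting audio/video desynchronization in conversation segments. The sketch below only illustrates that general idea and is not the patented method: the per-frame loudness values, mouth-activity flags, and every threshold (LEVEL_THRESHOLD, MIN_DURATION_S, CONVERSATION_RATIO, max_offset_frames) are invented for the example.

```python
# Illustrative sketch only -- not the implementation of US 11,871,068.
# Loudness values, mouth-activity flags, and all thresholds are invented.
from dataclasses import dataclass

FRAME_RATE = 30            # assumed frames per second for both streams
LEVEL_THRESHOLD = 0.5      # assumed loudness cutoff for an "audio portion"
MIN_DURATION_S = 1.0       # assumed minimum portion duration to keep
CONVERSATION_RATIO = 0.8   # assumed ratio of speech-like video frames

@dataclass
class Portion:
    start_frame: int
    end_frame: int  # exclusive

    @property
    def duration_s(self) -> float:
        return (self.end_frame - self.start_frame) / FRAME_RATE

def find_audio_portions(levels):
    """Group consecutive frames whose loudness exceeds the cutoff."""
    portions, start = [], None
    for i, level in enumerate(levels):
        if level >= LEVEL_THRESHOLD and start is None:
            start = i
        elif level < LEVEL_THRESHOLD and start is not None:
            portions.append(Portion(start, i))
            start = None
    if start is not None:
        portions.append(Portion(start, len(levels)))
    return portions

def detect_sync_errors(levels, mouth_open_flags, max_offset_frames=3):
    """Return (portion, offset) pairs where audio and video look misaligned."""
    errors = []
    candidates = [p for p in find_audio_portions(levels)
                  if p.duration_s >= MIN_DURATION_S]
    for p in candidates:
        audio_frames = p.end_frame - p.start_frame
        video_frames = sum(mouth_open_flags[p.start_frame:p.end_frame])
        # Treat the portion as a conversation segment only if enough of its
        # video frames show speech-like activity relative to the audio frames.
        if video_frames < CONVERSATION_RATIO * audio_frames:
            continue
        # Crude offset estimate: gap between the audio onset and the first
        # speech-like video frame inside the portion.
        speech_frames = [i for i in range(p.start_frame, p.end_frame)
                         if mouth_open_flags[i]]
        offset = speech_frames[0] - p.start_frame
        if abs(offset) > max_offset_frames:
            errors.append((p, offset))
    return errors

if __name__ == "__main__":
    # Toy data: per-frame loudness and whether the speaker's mouth is open.
    levels = [0.1] * 10 + [0.9] * 60 + [0.1] * 10
    mouth = [0] * 16 + [1] * 60 + [0] * 4   # video lags audio by ~6 frames
    for portion, offset in detect_sync_errors(levels, mouth):
        print(f"possible sync error in frames {portion.start_frame}-"
              f"{portion.end_frame}: offset ~ {offset} frames")
```

Mouth-activity flags are used here purely as a stand-in signal for "the video shows someone speaking"; the abstract itself speaks only of frame counts in the audio and video, so any concrete visual cue is an assumption of this sketch.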
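Patent 11070891 describes adjusting subtitles so they track spoken dialogue, display in particular regions, and avoid conflicts such as burned-in text. The sketch below shows two such adjustments in the simplest possible form; the SubtitleCue type, the measured offset, and the blocked band are hypothetical stand-ins, not the patented system.

```python
# Illustrative sketch only -- not the subtitle system of US 11,070,891.
# SubtitleCue, the measured offset, and the blocked band are hypothetical.
from dataclasses import dataclass, replace
from typing import List, Tuple

@dataclass(frozen=True)
class SubtitleCue:
    start_s: float
    end_s: float
    text: str
    y: float   # vertical position: 0.0 = top of frame, 1.0 = bottom

def resync(cues: List[SubtitleCue], measured_offset_s: float) -> List[SubtitleCue]:
    """Shift every cue by an offset measured between audio and subtitles."""
    return [replace(c, start_s=c.start_s + measured_offset_s,
                    end_s=c.end_s + measured_offset_s) for c in cues]

def avoid_region(cues: List[SubtitleCue],
                 blocked: Tuple[float, float] = (0.85, 1.0),
                 fallback_y: float = 0.10) -> List[SubtitleCue]:
    """Move cues out of a blocked vertical band (e.g. burned-in text there)."""
    lo, hi = blocked
    return [replace(c, y=fallback_y) if lo <= c.y <= hi else c for c in cues]

if __name__ == "__main__":
    cues = [SubtitleCue(10.0, 12.5, "Hello there.", y=0.90),
            SubtitleCue(13.0, 15.0, "General greeting.", y=0.50)]
    for c in avoid_region(resync(cues, measured_offset_s=-0.4)):
        print(f"{c.start_s:6.2f}-{c.end_s:6.2f}  y={c.y:.2f}  {c.text}")
```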
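Patent 10904476 describes partitioning a media file into scene files, generating an up-sampled version of each scene, and recombining the results. A minimal pipeline of that shape is sketched below; scene detection from per-frame difference scores and nearest-neighbor scaling are placeholders for whatever detection and up-sampling models the patent actually contemplates, and the facial-recognition step is omitted.

```python
# Illustrative sketch only -- not the pipeline of US 10,904,476.
# Frames, difference scores, and the scaling method are all placeholders.
from typing import List

Frame = List[List[int]]   # grayscale frame as rows of pixel values

def split_into_scenes(frames: List[Frame], diffs: List[float],
                      cut_threshold: float = 0.5) -> List[List[Frame]]:
    """Start a new scene wherever the frame-to-frame difference spikes."""
    scenes, current = [], [frames[0]]
    for frame, diff in zip(frames[1:], diffs):
        if diff > cut_threshold:
            scenes.append(current)
            current = []
        current.append(frame)
    scenes.append(current)
    return scenes

def upsample_frame(frame: Frame, factor: int = 2) -> Frame:
    """Nearest-neighbor scaling as a stand-in for a learned up-sampler."""
    return [[px for px in row for _ in range(factor)]
            for row in frame for _ in range(factor)]

def upsample_media(frames: List[Frame], diffs: List[float]) -> List[Frame]:
    """Partition into scenes, up-sample each scene, and recombine."""
    upsampled: List[Frame] = []
    for scene in split_into_scenes(frames, diffs):
        upsampled.extend(upsample_frame(f) for f in scene)
    return upsampled

if __name__ == "__main__":
    frames = [[[0, 1], [2, 3]], [[0, 1], [2, 3]], [[9, 9], [9, 9]]]
    diffs = [0.0, 0.9]   # big jump before the last frame -> scene cut
    out = upsample_media(frames, diffs)
    print(len(out), "frames,", len(out[0]), "x", len(out[0][0]), "pixels each")
```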
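Patent 10841666 describes generating candidate time offsets from audio data and scoring them with visual, aural, and language features to decide where directed content can be inserted. The sketch below shows one way such a scoring step could be wired together; the quiet-point heuristic, feature names, weights, and JSON output format are all assumptions of this example.

```python
# Illustrative sketch only -- not the scoring system of US 10,841,666.
# The quiet-point heuristic, feature names, weights, and output are invented.
import json
from typing import Dict, List

WEIGHTS = {"visual": 0.4, "aural": 0.4, "language": 0.2}  # assumed weights

def candidate_offsets(loudness: List[float], quiet: float = 0.2) -> List[int]:
    """Offsets (in seconds) where the audio is quiet enough to cut away."""
    return [t for t, level in enumerate(loudness) if level < quiet]

def score_offset(features: Dict[str, float]) -> float:
    """Weighted combination of per-offset feature scores in [0, 1]."""
    return sum(WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS)

def build_cue_file(loudness: List[float],
                   features_by_offset: Dict[int, Dict[str, float]],
                   min_score: float = 0.6) -> str:
    """Return a JSON document listing offsets judged suitable for insertion."""
    cues = []
    for t in candidate_offsets(loudness):
        score = score_offset(features_by_offset.get(t, {}))
        if score >= min_score:
            cues.append({"offset_s": t, "score": round(score, 3)})
    return json.dumps({"insertion_points": cues}, indent=2)

if __name__ == "__main__":
    loudness = [0.9, 0.8, 0.1, 0.7, 0.05, 0.9]          # per-second loudness
    features = {2: {"visual": 0.9, "aural": 0.8, "language": 0.7},
                4: {"visual": 0.3, "aural": 0.4, "language": 0.2}}
    print(build_cue_file(loudness, features))
```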