Patents by Inventor Christian Garcia Siagian

Christian Garcia Siagian has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11871068
    Abstract: Techniques for identifying synchronization errors between audio and video are described herein. Audio portions in audio for media content may be identified based at least in part on a sound level associated with first respective segments of the audio portions. A subset of the audio portions may be selected based at least in part on a duration associated with the audio portions. For a segment of the subset, a first number of frames in the audio and a second number of frames in the video for the segment may be determined. A determination may be made that the segment includes a conversation segment based at least in part on the first number of frames, the second number of frames, and a first threshold. A synchronization error may be identified in the conversation segment based on a difference between the audio and the video of the conversation segment.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: January 9, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Christian Garcia Siagian, Ryan Barlow Dall, Charles Effinger, Ramakanth Mudumba
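    A minimal Python sketch of the kind of check this abstract describes, offered as an illustration only and not the patented implementation: the per-frame speech flags and lip-motion flags are assumed to come from upstream audio and video detectors, and the threshold values are placeholders.
      import numpy as np

      def find_sync_error(speech, lip_motion, conv_threshold=0.5, max_offset=25):
          # speech, lip_motion: boolean arrays, one entry per frame of the segment
          n_audio = int(speech.sum())        # first number of frames (audio)
          n_video = int(lip_motion.sum())    # second number of frames (video)
          n = len(speech)
          # Treat the segment as conversation only if both modalities are active
          # for a large enough share of its frames (assumed criterion).
          if min(n_audio, n_video) < conv_threshold * n:
              return None
          # The offset that best aligns speech with lip motion; a nonzero value
          # indicates an audio/video synchronization error.
          a = speech.astype(float) - speech.mean()
          v = lip_motion.astype(float) - lip_motion.mean()
          offsets = range(-max_offset, max_offset + 1)
          scores = [float(np.dot(a[max(0, -k):n - max(0, k)],
                                 v[max(0, k):n - max(0, -k)])) for k in offsets]
          return offsets[int(np.argmax(scores))]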
  • Patent number: 11342003
    Abstract: Disclosed are various embodiments for segmenting and classifying video content using sounds. In one embodiment, a plurality of segments of a video content item are generated by analyzing audio accompanying the video content item. A subset of the plurality of segments that correspond to music segments is selected based at least in part on an audio characteristic of the subset of the plurality of segments. Individual segments of the subset of the plurality of segments are processed to determine whether a classification applies to the individual segments. A list of segments of the video content item to which the classification applies is generated.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: May 24, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Christian Garcia Siagian, Christian Ciabattoni, David Niu, Lawrence Kyuil Chang, Gordon Zheng, Ritesh Pase, Shiva Krishnamurthy, Ramakanth Mudumba
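    As an illustration only (not the patented method), a short Python sketch of the selection and classification steps above: the segment boundaries, the per-segment music score, and the classifier are all assumed to be produced elsewhere.
      from dataclasses import dataclass
      from typing import Callable, List

      @dataclass
      class Segment:
          start: float         # seconds
          end: float           # seconds
          music_score: float   # assumed audio characteristic in [0, 1]

      def classify_music_segments(segments: List[Segment],
                                  classifier: Callable[[Segment], bool],
                                  music_threshold: float = 0.7) -> List[Segment]:
          # Keep the music-like segments, then keep those the classifier accepts
          # (e.g. a model that recognizes opening or closing credits).
          music_like = [s for s in segments if s.music_score >= music_threshold]
          return [s for s in music_like if classifier(s)]
    A toy classifier such as lambda s: s.start < 120 and s.end - s.start > 20 would, for example, tag long music segments near the start of a title as likely opening credits.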
  • Patent number: 11120839
    Abstract: Disclosed are various embodiments for segmenting and classifying video content using conversation. In one embodiment, a plurality of segments of a video content item are generated by analyzing audio accompanying the video content item. A subset of the plurality of segments that correspond to conversation segments is selected. Individual segments of the subset of the plurality of segments are processed to determine whether a classification applies to the individual segments. A list of segments of the video content item to which the classification applies is generated.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: September 14, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Christian Garcia Siagian, Christian Ciabattoni, David Niu, Lawrence Kyuil Chang, Gordon Zheng, Ritesh Pase, Shiva Krishnamurthy, Ramakanth Mudumba
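    A sketch, under stated assumptions, of the first two steps in this abstract: deriving segments from the accompanying audio and keeping those that look like conversation. The per-window energy values and speech probabilities are assumed inputs from an upstream audio analyzer.
      from typing import List, Tuple

      def segments_from_audio(energy: List[float], window_s: float = 0.5,
                              silence_threshold: float = 0.05) -> List[Tuple[float, float]]:
          # Split the timeline wherever the (assumed) per-window energy drops to silence.
          segments, start = [], None
          for i, e in enumerate(energy):
              if e > silence_threshold and start is None:
                  start = i
              elif e <= silence_threshold and start is not None:
                  segments.append((start * window_s, i * window_s))
                  start = None
          if start is not None:
              segments.append((start * window_s, len(energy) * window_s))
          return segments

      def conversation_segments(segments, speech_prob, window_s=0.5, min_speech_ratio=0.6):
          # Keep segments whose windows are mostly classified as speech.
          kept = []
          for start, end in segments:
              windows = speech_prob[int(start / window_s):int(end / window_s)]
              if windows and sum(p > 0.5 for p in windows) / len(windows) >= min_speech_ratio:
                  kept.append((start, end))
          return kept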
  • Patent number: 11070891
    Abstract: A subtitle management system is provided that analyzes and adjusts subtitles for video content to improve the experience of viewers. Subtitles may be optimized or otherwise adjusted to display in particular regions of the video content, to display in synchronization with audio presentation of the spoken dialogue represented by the subtitles, to display in particular colors, and the like. Subtitles that are permanently integrated into the video content may be identified and addressed. These and other adjustments may be applied to address any of a variety of subtitle issues and shortcomings with conventional methods of generating subtitles.
    Type: Grant
    Filed: December 10, 2019
    Date of Patent: July 20, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Charles Effinger, Ryan Barlow Dall, Christian Garcia Siagian, Ramakanth Mudumba, Lawrence Kyuil Chang
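    Two of the adjustments mentioned above, sketched in Python as an illustration only: shifting a cue so it starts with the detected speech, and moving it off a region already occupied by burned-in text. The speech-onset time and the burned-in region are assumed to come from upstream detectors.
      from dataclasses import dataclass, replace
      from typing import Optional

      @dataclass
      class Cue:
          start: float             # seconds
          end: float               # seconds
          text: str
          region: str = "bottom"   # assumed values: "bottom" or "top"

      def resync_cue(cue: Cue, speech_start: float, max_shift: float = 2.0) -> Cue:
          # Shift the cue toward the detected speech onset, capped at max_shift.
          shift = max(-max_shift, min(max_shift, speech_start - cue.start))
          return replace(cue, start=cue.start + shift, end=cue.end + shift)

      def avoid_burned_in_text(cue: Cue, burned_in_region: Optional[str]) -> Cue:
          # If burned-in subtitles occupy the cue's region, move the cue elsewhere.
          if burned_in_region == cue.region:
              return replace(cue, region="top" if cue.region == "bottom" else "bottom")
          return cue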
  • Patent number: 10924629
    Abstract: Techniques for automated content validation are provided. In some examples, a media file and a metadata file associated with a title of the media file may be received. One or more scene paragraphs may be identified based at least in part on information in the metadata file. Scenes may be identified at least in part by using a transformer model implemented in a neural network. One or more scene files may be generated from the media file. One or more characters in a scene file of the one or more scene files may be identified. A match score may be determined based at least in part on an association of the scene file with a scene paragraph of the one or more scene paragraphs. A validity criterion may be determined for the title associated with the media file based at least in part on the match score.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: February 16, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Christian Garcia Siagian, Christian Ciabattoni, Yang Yu, Yik Pui Suen, Ryan Barlow Dall, Ritesh Pase, Ramakanth Mudumba
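    The matching step above, sketched as an illustration under assumptions: characters detected in a scene file are compared with characters named in a scene paragraph, and an average score is tested against a validity threshold. The Jaccard overlap and the threshold are placeholders, not necessarily the patented match score or criterion.
      from typing import Dict, Set

      def match_score(scene_chars: Set[str], paragraph_chars: Set[str]) -> float:
          # Jaccard overlap between detected and expected character sets.
          union = scene_chars | paragraph_chars
          return len(scene_chars & paragraph_chars) / len(union) if union else 1.0

      def title_is_valid(scene_to_paragraph: Dict[str, str],
                         detected: Dict[str, Set[str]],
                         expected: Dict[str, Set[str]],
                         threshold: float = 0.5) -> bool:
          # The title passes when the average scene/paragraph match score meets
          # the (assumed) validity criterion.
          scores = [match_score(detected[s], expected[p])
                    for s, p in scene_to_paragraph.items()]
          return bool(scores) and sum(scores) / len(scores) >= threshold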
  • Patent number: 10904476
    Abstract: Techniques for automated up-sampling of media files are provided. In some examples, a title associated with a media file, a metadata file associated with the title, and the media file may be received. The media file may be partitioned into one or more scene files, each scene file including a plurality of frame images in a sequence. One or more up-sampled scene files may be generated, each corresponding to a scene file of the one or more scene files. An up-sampled media file may be generated by combining at least a subset of the one or more up-sampled scene files. Generating one or more up-sampled scene files may include identifying one or more characters in a frame image of the plurality of frame images, based at least in part on implementation of a facial recognition algorithm including deep learning features in a neural network.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: January 26, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Christian Garcia Siagian, Charles Effinger, David Niu, Yang Yu, Narayan Sundaram, Arjun Cholkar, Ramakanth Mudumba
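    A pipeline skeleton for the per-scene up-sampling described above, as an illustration only: the scene boundaries are assumed to be known, and the learned super-resolution and face-recognition models are replaced by a naive nearest-neighbor upscale so the sketch stays self-contained.
      from typing import List, Sequence
      import numpy as np

      def upscale_frame(frame: np.ndarray, factor: int = 2) -> np.ndarray:
          # Placeholder up-sampler: nearest-neighbor pixel repetition.
          return frame.repeat(factor, axis=0).repeat(factor, axis=1)

      def upsample_media(frames: Sequence[np.ndarray], scene_boundaries: List[int],
                         factor: int = 2) -> List[np.ndarray]:
          # Partition the frames into scene files, up-sample each scene,
          # and recombine the results in order.
          out: List[np.ndarray] = []
          starts = [0] + scene_boundaries
          ends = scene_boundaries + [len(frames)]
          for start, end in zip(starts, ends):
              out.extend(upscale_frame(f, factor) for f in frames[start:end])
          return out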
  • Patent number: 10841666
    Abstract: Technologies are provided for generation of points of insertion of directed content into a video asset. In some embodiments, multiple time offsets within an interval spanned by the video asset can be determined using audio data corresponding to the video asset. A time offset defines a boundary between first and second segments of the video asset. Using image data corresponding to the video asset, respective pairs of video clips for the multiple time offsets can be generated. Visual features, aural features, and language features pertaining to the respective pairs of video clips can then be generated. Scores for the multiple time offsets can be generated using the visual features, the aural features, and the language features. A score represents an assessment of suitability to insert directed content into the video asset at a time offset. A file that contains specific time offsets can be generated.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: November 17, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Charles Effinger, Ryan Barlow Dall, Christian Garcia Siagian, Jonathan Y Ito, Brady Court Tsurutani, Vadim Volovik
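    The scoring and output steps above, sketched under assumptions: the visual, aural, and language feature values per candidate offset are assumed to come from upstream models over the clip pairs, and the weights and output format are illustrative choices.
      import json
      from typing import Dict, List, Optional

      def score_offsets(candidates: List[Dict],
                        weights: Optional[Dict[str, float]] = None) -> List[Dict]:
          # Each candidate: {"offset": seconds, "visual": x, "aural": y, "language": z}.
          weights = weights or {"visual": 0.4, "aural": 0.3, "language": 0.3}
          for c in candidates:
              c["score"] = sum(w * c[k] for k, w in weights.items())
          return sorted(candidates, key=lambda c: c["score"], reverse=True)

      def write_insertion_points(candidates: List[Dict], path: str, top_k: int = 3) -> None:
          # Persist the highest-scoring time offsets as the cue-point file.
          best = score_offsets(candidates)[:top_k]
          with open(path, "w") as f:
              json.dump([{"offset": c["offset"], "score": round(c["score"], 3)}
                         for c in best], f, indent=2)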