Patents by Inventor Aljosa Aleksej Andrej Smolic

Aljosa Aleksej Andrej Smolic has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10694263
    Abstract: Embodiments provide techniques for distributing supplemental content based on content entities within video content. Embodiments include analyzing video data to identify a known content entity within two or more frames of the video data. For each of the two or more frames, a region of pixels within the respective frame is determined that corresponds to the known content entity. Embodiments further include determining supplemental content corresponding to the known content entity. A watermark is embedded at a first position within the video data, such that the watermark corresponds to an identifier associated with the determined supplemental content. Upon receiving a message specifying the identifier, embodiments include transmitting the supplemental content to a client device for output together with the video data.
    Type: Grant
    Filed: November 1, 2018
    Date of Patent: June 23, 2020
    Assignee: Disney Enterprises, Inc.
    Inventors: Anthony M. Accardo, Skarphedinn S. Hedinsson, Aljosa Aleksej Andrej Smolic, Miquel A. Farré Guiu, Thabo D. Beeler
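    The abstract above describes embedding an identifier-carrying watermark into video data so that a client can later request the matching supplemental content. A minimal sketch of the identifier round trip is shown below; the least-significant-bit encoding and the names `embed_identifier`/`extract_identifier` are illustrative assumptions, not the patented embedding scheme.

```python
def embed_identifier(frame, identifier, num_bits=16):
    """Embed an integer identifier into the least-significant bits of the
    first `num_bits` pixel values of a frame (a list of ints in 0..255)."""
    out = list(frame)
    for i in range(num_bits):
        bit = (identifier >> i) & 1
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract_identifier(frame, num_bits=16):
    """Recover the embedded identifier from the least-significant bits."""
    identifier = 0
    for i in range(num_bits):
        identifier |= (frame[i] & 1) << i
    return identifier
```

    Because only the lowest bit of each pixel changes, the watermark perturbs each carrier pixel by at most one intensity level.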
  • Patent number: 10547871
    Abstract: The disclosure provides an approach for edge-aware spatio-temporal filtering. In one embodiment, a filtering application receives as input a guiding video sequence and video sequence(s) from additional channel(s). The filtering application estimates a sparse optical flow from the guiding video sequence using a novel binary feature descriptor integrated into the Coarse-to-fine PatchMatch method to compute a quasi-dense nearest neighbor field. The filtering application then performs spatial edge-aware filtering of the sparse optical flow (to obtain a dense flow) and the additional channel(s), using an efficient evaluation of the permeability filter with only two scan-line passes per iteration. Further, the filtering application performs temporal filtering of the optical flow using an infinite impulse response filter that only requires one filter state updated based on new guiding video sequence video frames.
    Type: Grant
    Filed: May 5, 2017
    Date of Patent: January 28, 2020
    Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
    Inventors: Tunc Ozan Aydin, Florian Michael Scheidegger, Michael Stefano Fritz Schaffner, Lukas Cavigelli, Luca Benini, Aljosa Aleksej Andrej Smolic
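    The permeability filter named in the abstract above smooths a signal while blocking diffusion across strong edges in the guiding sequence, using only two scan-line passes. A heavily simplified 1D sketch follows; the particular permeability weighting and `sigma` value are illustrative assumptions rather than the published formulation.

```python
def permeability(a, b, sigma=0.5):
    """Permeability between adjacent guide samples: near 1 when values are
    similar, near 0 across a strong edge."""
    return 1.0 / (1.0 + (abs(a - b) / sigma) ** 2)

def permeability_filter_1d(guide, signal, sigma=0.5):
    """Edge-aware 1D smoothing of `signal` guided by `guide`, using one
    left-to-right and one right-to-left accumulation pass."""
    n = len(signal)
    p = [permeability(guide[i], guide[i + 1], sigma) for i in range(n - 1)]
    # left-to-right accumulated values and weights
    lv, lw = [0.0] * n, [0.0] * n
    for i in range(1, n):
        lv[i] = p[i - 1] * (lv[i - 1] + signal[i - 1])
        lw[i] = p[i - 1] * (lw[i - 1] + 1.0)
    # right-to-left accumulated values and weights
    rv, rw = [0.0] * n, [0.0] * n
    for i in range(n - 2, -1, -1):
        rv[i] = p[i] * (rv[i + 1] + signal[i + 1])
        rw[i] = p[i] * (rw[i + 1] + 1.0)
    return [(lv[i] + signal[i] + rv[i]) / (lw[i] + 1.0 + rw[i])
            for i in range(n)]
```

    On a signal with an edge in the guide, values are averaged within each side of the edge but almost nothing leaks across it, which is the behavior that makes the filter "edge-aware" at linear cost per pass.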
  • Patent number: 10299012
    Abstract: Embodiments provide techniques for distributing supplemental content based on content entities within video content. Embodiments include analyzing video data to identify a known content entity within two or more frames of the video data. For each of the two or more frames, a region of pixels within the respective frame is determined that corresponds to the known content entity. Embodiments further include determining supplemental content corresponding to the known content entity. A watermark is embedded at a first position within the video data, such that the watermark corresponds to an identifier associated with the determined supplemental content. Upon receiving a message specifying the identifier, embodiments include transmitting the supplemental content to a client device for output together with the video data.
    Type: Grant
    Filed: October 28, 2014
    Date of Patent: May 21, 2019
    Assignee: Disney Enterprises, Inc.
    Inventors: Anthony M. Accardo, Skarphedinn S. Hedinsson, Aljosa Aleksej Andrej Smolic, Miquel À. Farré Guiu, Thabo D. Beeler
  • Patent number: 10264329
    Abstract: Embodiments provide techniques for distributing supplemental content based on content entities within video content. Embodiments include receiving video data containing an embedded watermark at a first position within the video data. The embedded watermark is detected at the first position within the video data. Embodiments also include transmitting, to a remote content server, a message specifying a time stamp corresponding to the first position within the video data. In response to transmitting the message, supplemental content corresponding to a content entity depicted within the video content at the first position within the video content is received from the remote content server. Embodiments also include outputting the video data for display together with at least an indication of the supplemental content.
    Type: Grant
    Filed: October 28, 2014
    Date of Patent: April 16, 2019
    Assignee: Disney Enterprises, Inc.
    Inventors: Anthony M. Accardo, Skarphedinn S. Hedinsson, Aljosa Aleksej Andrej Smolic, Miquel À. Farré Guiu, Thabo D. Beeler
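    This abstract covers the client side of the watermark scheme: detect the watermark, report its position (a time stamp) to a remote server, and receive the supplemental content back. A toy sketch of that flow is below; the dictionary-based frames and the `server_lookup` callable are illustrative stand-ins, not the patented protocol.

```python
def detect_watermark(frame):
    """Toy detector: a frame 'carries' a watermark when it stores a
    non-None identifier under its 'wm' key."""
    return frame.get("wm")

def fetch_supplemental(frames, server_lookup):
    """Client-side flow: scan decoded frames and, at the first detected
    watermark, send that position (time stamp) to the content server and
    return the supplemental content for display alongside the video."""
    for timestamp, frame in enumerate(frames):
        if detect_watermark(frame) is not None:
            return server_lookup(timestamp)
    return None
```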
  • Publication number: 20190082235
    Abstract: Embodiments provide techniques for distributing supplemental content based on content entities within video content. Embodiments include analyzing video data to identify a known content entity within two or more frames of the video data. For each of the two or more frames, a region of pixels within the respective frame is determined that corresponds to the known content entity. Embodiments further include determining supplemental content corresponding to the known content entity. A watermark is embedded at a first position within the video data, such that the watermark corresponds to an identifier associated with the determined supplemental content. Upon receiving a message specifying the identifier, embodiments include transmitting the supplemental content to a client device for output together with the video data.
    Type: Application
    Filed: November 1, 2018
    Publication date: March 14, 2019
    Inventors: Anthony M. ACCARDO, Skarphedinn S. HEDINSSON, Aljosa Aleksej Andrej SMOLIC, Miquel A. FARRÉ GUIU, Thabo D. BEELER
  • Publication number: 20180324465
    Abstract: The disclosure provides an approach for edge-aware spatio-temporal filtering. In one embodiment, a filtering application receives as input a guiding video sequence and video sequence(s) from additional channel(s). The filtering application estimates a sparse optical flow from the guiding video sequence using a novel binary feature descriptor integrated into the Coarse-to-fine PatchMatch method to compute a quasi-dense nearest neighbor field. The filtering application then performs spatial edge-aware filtering of the sparse optical flow (to obtain a dense flow) and the additional channel(s), using an efficient evaluation of the permeability filter with only two scan-line passes per iteration. Further, the filtering application performs temporal filtering of the optical flow using an infinite impulse response filter that only requires one filter state updated based on new guiding video sequence video frames.
    Type: Application
    Filed: May 5, 2017
    Publication date: November 8, 2018
    Inventors: Tunc Ozan AYDIN, Florian Michael SCHEIDEGGER, Michael Stefano Fritz SCHAFFNER, Lukas CAVIGELLI, Luca BENINI, Aljosa Aleksej Andrej SMOLIC
  • Patent number: 10095953
    Abstract: Techniques of depth modification for display applications are disclosed. A multiscopic video frame, including at least first and second images depicting a scene, is received. Depth information pertaining to the multiscopic video frame is received. A depth mapping function is determined based on one or more image characteristics of the multiscopic video frame, other than depth. The depth information is modified based on the depth mapping function. At least a third image is generated based on the modified depth information and based further on at least one of the first and second images, where the third image is output for display.
    Type: Grant
    Filed: May 28, 2014
    Date of Patent: October 9, 2018
    Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
    Inventors: Aljosa Aleksej Andrej Smolic, Tunc Ozan Aydin, Steven Charles Poulakos, Simon Heinzle, Alexandre Chapiro
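    The depth-modification idea above has two steps: remap the depth values with a function derived from image characteristics, then synthesize an additional view from the remapped depth. The sketch below illustrates both steps on a single scanline; the gamma-style remapping curve and the nearest-pixel forward warp are illustrative assumptions, not the patented mapping function.

```python
def remap_depth(depth, gamma):
    """Nonlinearly remap normalized depth values in [0, 1]; gamma < 1
    expands far depths, gamma > 1 compresses them."""
    return [d ** gamma for d in depth]

def synthesize_view(image, depth, baseline):
    """Generate a new view by shifting each pixel of a 1D scanline
    proportionally to its (remapped) depth; later writes win."""
    n = len(image)
    out = [None] * n
    for x in range(n):
        nx = x + int(round(baseline * depth[x]))
        if 0 <= nx < n:
            out[nx] = image[x]
    # simple hole filling from the left neighbor
    for x in range(n):
        if out[x] is None:
            out[x] = out[x - 1] if x > 0 and out[x - 1] is not None else image[x]
    return out
```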
  • Patent number: 9979895
    Abstract: Embodiments herein disclose a tone mapping system that generates a base layer by filtering a frame in an HDR video. The tone mapping system then generates a tone curve using a tone mapping parameter derived from the base layer—e.g., a mean or maximum luminance value. Once the tone curve parameter is identified, the system performs temporal filtering to smooth out inconsistencies between the current value of the tone curve parameter and the values of tone curve parameter of at least one previous frame in the HDR video. The tone mapping system applies the temporally filtered tone curve to the base layer to generate a temporally coherent layer. The temporally coherent base layer can be combined with a detail layer derived from the HDR video to generate a frame of a tone mapped video.
    Type: Grant
    Filed: November 25, 2015
    Date of Patent: May 22, 2018
    Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
    Inventors: Aljosa Aleksej Andrej Smolic, Tunc Ozan Aydin, Simone Maurizio Croci, Nikolce Stefanoski
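    The key move in the abstract above is temporally filtering the tone-curve parameter so the tone curve does not flicker between frames. A minimal sketch with a one-state IIR smoother and a simple log-style global curve follows; `alpha` and the specific curve shape are illustrative assumptions.

```python
import math

def smooth_parameter(values, alpha=0.9):
    """Temporally filter a per-frame tone-curve parameter (e.g. the max
    luminance of the base layer) with a single-state IIR filter."""
    state = values[0]
    out = []
    for v in values:
        state = alpha * state + (1.0 - alpha) * v
        out.append(state)
    return out

def apply_tone_curve(base, max_lum):
    """Simple global tone curve: normalize by the (smoothed) maximum
    luminance and apply a log-like compression to [0, 1]."""
    return [math.log1p(x / max_lum * 4.0) / math.log1p(4.0) for x in base]
```

    With `alpha = 0.9`, a sudden tenfold jump in the measured parameter only moves the smoothed value a tenth of the way per frame, which suppresses frame-to-frame brightness pumping.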
  • Patent number: 9912974
    Abstract: Systems, methods, and computer program products to perform an operation comprising receiving a plurality of superclusters that includes at least one of a plurality of shot clusters, wherein each of the shot clusters includes at least one of a plurality of video shots, and wherein each video shot includes one or more video frames, and computing an expected value for a metric based on the plurality of superclusters.
    Type: Grant
    Filed: March 1, 2016
    Date of Patent: March 6, 2018
    Assignee: Disney Enterprises, Inc.
    Inventors: Miquel Á. Farré Guiu, Seth Frey, Aljosa Aleksej Andrej Smolic, Michael R. Clements, Pablo Beltran Sanchidrian
  • Publication number: 20170257653
    Abstract: Systems, methods, and computer program products to perform an operation comprising receiving a plurality of superclusters that includes at least one of a plurality of shot clusters, wherein each of the shot clusters includes at least one of a plurality of video shots, and wherein each video shot includes one or more video frames, and computing an expected value for a metric based on the plurality of superclusters.
    Type: Application
    Filed: March 1, 2016
    Publication date: September 7, 2017
    Inventors: Miquel Á. FARRÉ GUIU, Seth FREY, Aljosa Aleksej Andrej SMOLIC, Michael R. CLEMENTS, Pablo Beltran SANCHIDRIAN
  • Publication number: 20170070719
    Abstract: Embodiments herein disclose a tone mapping system that generates a base layer by filtering a frame in an HDR video. The tone mapping system then generates a tone curve using a tone mapping parameter derived from the base layer—e.g., a mean or maximum luminance value. Once the tone curve parameter is identified, the system performs temporal filtering to smooth out inconsistencies between the current value of the tone curve parameter and the values of tone curve parameter of at least one previous frame in the HDR video. The tone mapping system applies the temporally filtered tone curve to the base layer to generate a temporally coherent layer. The temporally coherent base layer can be combined with a detail layer derived from the HDR video to generate a frame of a tone mapped video.
    Type: Application
    Filed: November 25, 2015
    Publication date: March 9, 2017
    Inventors: Aljosa Aleksej Andrej SMOLIC, Tunc Ozan AYDIN, Simone Maurizio CROCI, Nikolce STEFANOSKI
  • Patent number: 9565414
    Abstract: Approaches are described for generating a multiview autostereoscopic image from a stereo three-dimensional input image pair. A stereo to multiview rendering system receives a stereo three-dimensional image including a left image and a right image at a first view position and a second view position, respectively. The system generates a first input warp and a second input warp that maps the left image and the right image to a third and fourth positions, respectively, where the third and fourth positions lie between the first and second view positions. The system generates a plurality of output warps based on the first warp and the second warp. The system resamples each output warp in the plurality of output warps to create a plurality of partial output images. The system interleaves the plurality of partial output images to generate a composite output image.
    Type: Grant
    Filed: December 18, 2013
    Date of Patent: February 7, 2017
    Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
    Inventors: Aljosa Aleksej Andrej Smolic, Pierre Greisen, Simon Heinzle, Michael Schaffner, Frank Kağan Gürkaynak
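    Two pieces of the pipeline above lend themselves to a small sketch: interpolating the input warps to obtain warps for intermediate view positions, and interleaving the resulting partial images into one composite for an autostereoscopic panel. Both functions below operate on 1D scanlines and are illustrative simplifications, not the patented resampling.

```python
def blend_warp(warp_left, warp_right, t):
    """Linearly interpolate two image warps (per-pixel target positions)
    to obtain the warp for an in-between view position t in [0, 1]."""
    return [(1.0 - t) * a + t * b for a, b in zip(warp_left, warp_right)]

def interleave(views):
    """Interleave the columns of N partial output images (equal-length
    scanlines) into one composite scanline, so that column x shows
    view x mod N, as a multiview panel expects."""
    n = len(views)
    width = len(views[0])
    return [views[x % n][x] for x in range(width)]
```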
  • Patent number: 9378543
    Abstract: Approaches are described for tone-mapping an image frame in a sequence of image frames. A tone-mapping pipeline applies a spatiotemporal filter to each pixel of the image frame, based on a forward optical flow and a backward optical flow, to produce a base layer image. The tone-mapping pipeline applies a temporal filter to each pixel of the image frame, based on the forward and backward optical flows, to produce a temporally filtered frame. The tone-mapping pipeline produces a detail layer image based on the base layer image and the temporally filtered frame. The tone-mapping pipeline applies a tone curve to the base and detail layer images to produce a tone-mapped base and detail layer image, respectively, and combines the tone-mapped base and detail layer images to produce a tone-mapped image frame.
    Type: Grant
    Filed: July 28, 2014
    Date of Patent: June 28, 2016
    Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
    Inventors: Tunc Ozan Aydin, Simone Croci, Aljosa Aleksej Andrej Smolic, Nikolce Stefanoski, Markus Gross
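    The base/detail decomposition that both of these tone-mapping patents rely on can be shown in miniature: filter the frame to get a base layer, treat the residual as detail, compress only the base, and add the detail back to preserve local contrast. The box filter below is a deliberately crude stand-in for the edge-aware spatiotemporal filter the abstracts describe.

```python
def box_filter(signal, radius=1):
    """Stand-in for the edge-aware spatiotemporal filter: a simple box
    filter producing the base layer of a 1D scanline."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def tone_map(frame, curve, radius=1):
    """Base/detail tone mapping: compress the base layer with `curve`
    and add the untouched detail layer back."""
    base = box_filter(frame, radius)
    detail = [f - b for f, b in zip(frame, base)]
    return [curve(b) + d for b, d in zip(base, detail)]
```

    With an identity curve the decomposition is lossless, which is a useful sanity check before substituting a real compressive tone curve.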
  • Patent number: 9361679
    Abstract: Approaches are described for filtering a first image frame in a sequence of image frames. A tone-mapping pipeline applies a spatiotemporal filter to each pixel of the first image frame, based on a forward optical flow and a backward optical flow, to produce a base layer image. The tone-mapping pipeline applies a temporal filter to each pixel of the first image frame, based on the forward optical flow and the backward optical flow, to produce a temporally filtered frame. The tone-mapping pipeline produces a detail layer image based on the base layer image and the temporally filtered frame.
    Type: Grant
    Filed: July 28, 2014
    Date of Patent: June 7, 2016
    Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
    Inventors: Tunc Ozan Aydin, Simone Croci, Aljosa Aleksej Andrej Smolic, Nikolce Stefanoski, Markus Gross
  • Publication number: 20160119655
    Abstract: Embodiments provide techniques for distributing supplemental content based on content entities within video content. Embodiments include analyzing video data to identify a known content entity within two or more frames of the video data. For each of the two or more frames, a region of pixels within the respective frame is determined that corresponds to the known content entity. Embodiments further include determining supplemental content corresponding to the known content entity. A watermark is embedded at a first position within the video data, such that the watermark corresponds to an identifier associated with the determined supplemental content. Upon receiving a message specifying the identifier, embodiments include transmitting the supplemental content to a client device for output together with the video data.
    Type: Application
    Filed: October 28, 2014
    Publication date: April 28, 2016
    Applicant: DISNEY ENTERPRISES, INC.
    Inventors: Anthony M. ACCARDO, Skarphedinn S. HEDINSSON, Aljosa Aleksej Andrej SMOLIC, Miquel À. FARRÉ GUIU, Thabo D. BEELER
  • Publication number: 20160119691
    Abstract: Embodiments provide techniques for distributing supplemental content based on content entities within video content. Embodiments include receiving video data containing an embedded watermark at a first position within the video data. The embedded watermark is detected at the first position within the video data. Embodiments also include transmitting, to a remote content server, a message specifying a time stamp corresponding to the first position within the video data. In response to transmitting the message, supplemental content corresponding to a content entity depicted within the video content at the first position within the video content is received from the remote content server. Embodiments also include outputting the video data for display together with at least an indication of the supplemental content.
    Type: Application
    Filed: October 28, 2014
    Publication date: April 28, 2016
    Inventors: Anthony M. ACCARDO, Skarphedinn S. HEDINSSON, Aljosa Aleksej Andrej SMOLIC, Miquel À. FARRÉ GUIU, Thabo D. BEELER
  • Publication number: 20160027160
    Abstract: Approaches are described for tone-mapping an image frame in a sequence of image frames. A tone-mapping pipeline applies a spatiotemporal filter to each pixel of the image frame, based on a forward optical flow and a backward optical flow, to produce a base layer image. The tone-mapping pipeline applies a temporal filter to each pixel of the image frame, based on the forward and backward optical flows, to produce a temporally filtered frame. The tone-mapping pipeline produces a detail layer image based on the base layer image and the temporally filtered frame. The tone-mapping pipeline applies a tone curve to the base and detail layer images to produce a tone-mapped base and detail layer image, respectively, and combines the tone-mapped base and detail layer images to produce a tone-mapped image frame.
    Type: Application
    Filed: July 28, 2014
    Publication date: January 28, 2016
    Inventors: Tunc Ozan Aydin, Simone Croci, Aljosa Aleksej Andrej Smolic, Nikolce Stefanoski, Markus Gross
  • Patent number: 9215440
    Abstract: Techniques are disclosed for rendering images. The techniques include receiving an input image associated with a source space, the input image comprising a plurality of source pixels, and applying an adaptive transformation to a source pixel, where the adaptive transformation maps the source pixel to a target space associated with an output image comprising a plurality of target pixels. The techniques further include determining a target pixel affected by the source pixel based on the adaptive transformation. The techniques further include writing the transformed source pixel into a location in the output image associated with the target pixel.
    Type: Grant
    Filed: October 17, 2012
    Date of Patent: December 15, 2015
    Assignee: Disney Enterprises, Inc.
    Inventors: Pierre Greisen, Simon Heinzle, Michael Schaffner, Aljosa Aleksej Andrej Smolic
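    The abstract above describes forward mapping: each source pixel is pushed through a transformation and scatter-written into the target image, rather than the more common inverse lookup per target pixel. A minimal 1D sketch of that scatter write follows; the `transform` callable and last-write-wins conflict handling are illustrative assumptions.

```python
def forward_map(src, transform, out_size):
    """Forward-map each source pixel through `transform` (source index ->
    target index) and scatter-write it into the output image; writes
    falling outside the target are discarded."""
    out = [0] * out_size
    for x, v in enumerate(src):
        tx = transform(x)
        if 0 <= tx < out_size:
            out[tx] = v  # last write wins on collisions
    return out
```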
  • Publication number: 20150348273
    Abstract: Techniques of depth modification for display applications are disclosed. A multiscopic video frame, including at least first and second images depicting a scene, is received. Depth information pertaining to the multiscopic video frame is received. A depth mapping function is determined based on one or more image characteristics of the multiscopic video frame, other than depth. The depth information is modified based on the depth mapping function. At least a third image is generated based on the modified depth information and based further on at least one of the first and second images, where the third image is output for display.
    Type: Application
    Filed: May 28, 2014
    Publication date: December 3, 2015
    Applicant: Disney Enterprises, Inc.
    Inventors: Alexandre CHAPIRO, Tunc Ozan AYDIN, Steven Charles POULAKOS, Simon HEINZLE, Aljosa Aleksej Andrej SMOLIC
  • Patent number: 9202258
    Abstract: Techniques are disclosed for retargeting images. The techniques include receiving one or more input images, computing a two-dimensional saliency map based on the input images in order to determine one or more visually important features associated with the input images, projecting the saliency map horizontally and vertically to create at least one of a horizontal and vertical saliency profile, and scaling at least one of the horizontal and vertical saliency profiles. The techniques further include creating an output image based on the scaled saliency profiles. Low saliency areas are scaled non-uniformly while high saliency areas are scaled uniformly. Temporal stability is achieved by filtering the horizontal resampling pattern and the vertical resampling pattern over time. Image retargeting is achieved with greater efficiency and lower compute power, resulting in a retargeting architecture that may be implemented in a circuit suitable for mobile applications such as mobile phones and tablet computers.
    Type: Grant
    Filed: October 17, 2012
    Date of Patent: December 1, 2015
    Assignee: Disney Enterprises, Inc.
    Inventors: Pierre Greisen, Aljosa Aleksej Andrej Smolic, Simon Heinzle, Manuel Lang
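    The retargeting scheme above reduces a 2D saliency map to 1D horizontal and vertical profiles and scales low-saliency regions non-uniformly while keeping salient regions near-uniform. The sketch below shows one way such a profile could drive column widths; the weighting scheme and the `floor` parameter are illustrative assumptions, not the patented resampling pattern.

```python
def column_widths(saliency, target_width, floor=0.2):
    """Distribute `target_width` across source columns: a column's share
    grows with its saliency value, with a minimum `floor` weight so
    low-saliency regions shrink rather than vanish."""
    weights = [max(s, floor) for s in saliency]
    total = sum(weights)
    return [target_width * w / total for w in weights]
```

    Summing the returned widths always yields the target width, so the pattern can be applied directly as a 1D resampling of the image axis; filtering this pattern over time yields the temporal stability the abstract mentions.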