Patents by Inventor Aljosa Aleksej Andrej Smolic
Aljosa Aleksej Andrej Smolic has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10694263
Abstract: Embodiments provide techniques for distributing supplemental content based on content entities within video content. Embodiments include analyzing video data to identify a known content entity within two or more frames of the video data. For each of the two or more frames, a region of pixels within the respective frame is determined that corresponds to the known content entity. Embodiments further include determining supplemental content corresponding to the known content entity. A watermark is embedded at a first position within the video data, such that the watermark corresponds to an identifier associated with the determined supplemental content. Upon receiving a message specifying the identifier, embodiments include transmitting the supplemental content to a client device for output together with the video data.
Type: Grant
Filed: November 1, 2018
Date of Patent: June 23, 2020
Assignee: Disney Enterprises, Inc.
Inventors: Anthony M. Accardo, Skarphedinn S. Hedinsson, Aljosa Aleksej Andrej Smolic, Miquel A. Farré Guiu, Thabo D. Beeler
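The server-side flow described in this abstract (embed an identifier watermark, then resolve that identifier to supplemental content) can be sketched as follows. This is a minimal illustration, not the patented method: the 16-bit least-significant-bit scheme, the region layout, and all names (`embed_watermark`, `SUPPLEMENTAL`) are assumptions for the sketch.

```python
def embed_watermark(frame, identifier, region):
    """Encode a 16-bit identifier into the least-significant bits of
    pixels inside the (row, col) region of a frame (a 2D list of ints).
    Returns a new frame; the input is left unmodified."""
    bits = [(identifier >> i) & 1 for i in range(16)]
    out = [row[:] for row in frame]
    r0, c0 = region
    for i, bit in enumerate(bits):
        r, c = r0 + i // 4, c0 + i % 4   # lay bits out in a 4x4 patch
        out[r][c] = (out[r][c] & ~1) | bit
    return out

def extract_watermark(frame, region):
    """Recover the identifier from the same 4x4 patch of LSBs."""
    r0, c0 = region
    ident = 0
    for i in range(16):
        r, c = r0 + i // 4, c0 + i % 4
        ident |= (frame[r][c] & 1) << i
    return ident

# Supplemental-content lookup keyed by the watermark identifier
# (hypothetical catalog entries).
SUPPLEMENTAL = {0x2A5C: {"entity": "character_7", "content": "bio_card.json"}}
```

A client that detects the watermark can then send the extracted identifier to a server, which answers with the matching `SUPPLEMENTAL` entry for display alongside the video.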
-
Patent number: 10547871
Abstract: The disclosure provides an approach for edge-aware spatio-temporal filtering. In one embodiment, a filtering application receives as input a guiding video sequence and video sequence(s) from additional channel(s). The filtering application estimates a sparse optical flow from the guiding video sequence using a novel binary feature descriptor integrated into the Coarse-to-fine PatchMatch method to compute a quasi-dense nearest neighbor field. The filtering application then performs spatial edge-aware filtering of the sparse optical flow (to obtain a dense flow) and the additional channel(s), using an efficient evaluation of the permeability filter with only two scan-line passes per iteration. Further, the filtering application performs temporal filtering of the optical flow using an infinite impulse response filter that only requires one filter state updated based on new guiding video sequence video frames.
Type: Grant
Filed: May 5, 2017
Date of Patent: January 28, 2020
Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
Inventors: Tunc Ozan Aydin, Florian Michael Scheidegger, Michael Stefano Fritz Schaffner, Lukas Cavigelli, Luca Benini, Aljosa Aleksej Andrej Smolic
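The two scan-line passes of the permeability filter can be illustrated in one dimension: a left-to-right and a right-to-left accumulation, with per-pixel permeability weights derived from the guide signal so that information does not flow across strong edges. This is a sketch under assumptions; the Gaussian permeability, `sigma`, and iteration count are illustrative, not the patented formulation.

```python
import math

def permeability(a, b, sigma=10.0):
    """Permeability between neighboring guide pixels: near 1 on flat
    regions, near 0 across strong edges (assumed Gaussian falloff)."""
    return math.exp(-((a - b) ** 2) / (2 * sigma ** 2))

def permeability_filter_1d(signal, guide, sigma=10.0, iterations=3):
    """Edge-aware smoothing of `signal` guided by `guide`, using one
    left-to-right and one right-to-left pass per iteration."""
    n = len(signal)
    out = list(signal)
    perm = [permeability(guide[i], guide[i + 1], sigma) for i in range(n - 1)]
    for _ in range(iterations):
        # left-to-right accumulation of values and weights
        l, lw = [0.0] * n, [0.0] * n
        for i in range(1, n):
            l[i] = perm[i - 1] * (l[i - 1] + out[i - 1])
            lw[i] = perm[i - 1] * (lw[i - 1] + 1.0)
        # right-to-left accumulation
        r, rw = [0.0] * n, [0.0] * n
        for i in range(n - 2, -1, -1):
            r[i] = perm[i] * (r[i + 1] + out[i + 1])
            rw[i] = perm[i] * (rw[i + 1] + 1.0)
        out = [(l[i] + out[i] + r[i]) / (lw[i] + 1.0 + rw[i])
               for i in range(n)]
    return out
```

Because each pass is a single linear scan, the cost per iteration is O(n) per scan line, which is what makes the two-pass evaluation efficient.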
-
Patent number: 10299012
Abstract: Embodiments provide techniques for distributing supplemental content based on content entities within video content. Embodiments include analyzing video data to identify a known content entity within two or more frames of the video data. For each of the two or more frames, a region of pixels within the respective frame is determined that corresponds to the known content entity. Embodiments further include determining supplemental content corresponding to the known content entity. A watermark is embedded at a first position within the video data, such that the watermark corresponds to an identifier associated with the determined supplemental content. Upon receiving a message specifying the identifier, embodiments include transmitting the supplemental content to a client device for output together with the video data.
Type: Grant
Filed: October 28, 2014
Date of Patent: May 21, 2019
Assignee: Disney Enterprises, Inc.
Inventors: Anthony M. Accardo, Skarphedinn S. Hedinsson, Aljosa Aleksej Andrej Smolic, Miquel À. Farré Guiu, Thabo D. Beeler
-
Patent number: 10264329
Abstract: Embodiments provide techniques for distributing supplemental content based on content entities within video content. Embodiments include receiving video data containing an embedded watermark at a first position within the video data. The embedded watermark is detected at the first position within the video data. Embodiments also include transmitting, to a remote content server, a message specifying a time stamp corresponding to the first position within the video data. In response to transmitting the message, supplemental content corresponding to a content entity depicted within the video content at the first position within the video content is received from the remote content server. Embodiments also include outputting the video data for display together with at least an indication of the supplemental content.
Type: Grant
Filed: October 28, 2014
Date of Patent: April 16, 2019
Assignee: Disney Enterprises, Inc.
Inventors: Anthony M. Accardo, Skarphedinn S. Hedinsson, Aljosa Aleksej Andrej Smolic, Miquel À. Farré Guiu, Thabo D. Beeler
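The client-side protocol in this abstract (detect watermark, report the timestamp to a content server, display the returned supplemental content with the video) can be sketched as below. The server stand-in, the message shape, and the interval catalog are all invented for illustration.

```python
def lookup_supplemental(message, catalog):
    """Stand-in for the remote content server: resolve the reported
    timestamp to the content entity on screen at that position.
    `catalog` maps (start, end) time intervals to supplemental content."""
    ts = message["timestamp"]
    for (start, end), content in catalog.items():
        if start <= ts < end:
            return content
    return None

def on_watermark_detected(timestamp, catalog):
    """Client reaction to a detected watermark: send the timestamp,
    then pair the answer with the video output."""
    message = {"timestamp": timestamp}           # sent to the remote server
    supplemental = lookup_supplemental(message, catalog)
    return {"video": "play", "overlay": supplemental}
```

In a real deployment the lookup would be a network request; the point of the sketch is only the timestamp-keyed exchange.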
-
Publication number: 20190082235
Abstract: Embodiments provide techniques for distributing supplemental content based on content entities within video content. Embodiments include analyzing video data to identify a known content entity within two or more frames of the video data. For each of the two or more frames, a region of pixels within the respective frame is determined that corresponds to the known content entity. Embodiments further include determining supplemental content corresponding to the known content entity. A watermark is embedded at a first position within the video data, such that the watermark corresponds to an identifier associated with the determined supplemental content. Upon receiving a message specifying the identifier, embodiments include transmitting the supplemental content to a client device for output together with the video data.
Type: Application
Filed: November 1, 2018
Publication date: March 14, 2019
Inventors: Anthony M. ACCARDO, Skarphedinn S. HEDINSSON, Aljosa Aleksej Andrej SMOLIC, Miquel A. FARRÉ GUIU, Thabo D. BEELER
-
Publication number: 20180324465
Abstract: The disclosure provides an approach for edge-aware spatio-temporal filtering. In one embodiment, a filtering application receives as input a guiding video sequence and video sequence(s) from additional channel(s). The filtering application estimates a sparse optical flow from the guiding video sequence using a novel binary feature descriptor integrated into the Coarse-to-fine PatchMatch method to compute a quasi-dense nearest neighbor field. The filtering application then performs spatial edge-aware filtering of the sparse optical flow (to obtain a dense flow) and the additional channel(s), using an efficient evaluation of the permeability filter with only two scan-line passes per iteration. Further, the filtering application performs temporal filtering of the optical flow using an infinite impulse response filter that only requires one filter state updated based on new guiding video sequence video frames.
Type: Application
Filed: May 5, 2017
Publication date: November 8, 2018
Inventors: Tunc Ozan AYDIN, Florian Michael SCHEIDEGGER, Michael Stefano Fritz SCHAFFNER, Lukas CAVIGELLI, Luca BENINI, Aljosa Aleksej Andrej SMOLIC
-
Patent number: 10095953
Abstract: Techniques of depth modification for display applications are disclosed. A multiscopic video frame, including at least first and second images depicting a scene, is received. Depth information pertaining to the multiscopic video frame is received. A depth mapping function is determined based on one or more image characteristics of the multiscopic video frame, other than depth. The depth information is modified based on the depth mapping function. At least a third image is generated based on the modified depth information and based further on at least one of the first and second images, where the third image is output for display.
Type: Grant
Filed: May 28, 2014
Date of Patent: October 9, 2018
Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
Inventors: Aljosa Aleksej Andrej Smolic, Tunc Ozan Aydin, Steven Charles Poulakos, Simon Heinzle, Alexandre Chapiro
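The pipeline in this abstract (remap depth using a non-depth image characteristic, then synthesize a third view from the modified depth) can be sketched in one dimension. The saliency-driven mapping and the point-splat warp below are illustrative assumptions; the patent does not prescribe either.

```python
def remap_depth(depth, saliency):
    """Hypothetical depth mapping function driven by an image
    characteristic other than depth (here: per-pixel saliency).
    Salient pixels keep their disparity; the rest are flattened
    toward the screen plane."""
    return [d * (0.5 + 0.5 * s) for d, s in zip(depth, saliency)]

def synthesize_view(row, disparity, alpha=0.5):
    """Warp one scan line to an intermediate viewpoint by alpha times
    the (modified) disparity, using a simple point-splat forward warp.
    Unfilled target pixels stay None (holes, left unfilled here)."""
    out = [None] * len(row)
    for x, (pix, d) in enumerate(zip(row, disparity)):
        xt = round(x + alpha * d)
        if 0 <= xt < len(out):
            out[xt] = pix
    return out
```

A production renderer would fill holes and resolve occlusion order; both are omitted to keep the sketch short.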
-
Patent number: 9979895
Abstract: Embodiments herein disclose a tone mapping system that generates a base layer by filtering a frame in an HDR video. The tone mapping system then generates a tone curve using a tone mapping parameter derived from the base layer, e.g., a mean or maximum luminance value. Once the tone curve parameter is identified, the system performs temporal filtering to smooth out inconsistencies between the current value of the tone curve parameter and the values of the tone curve parameter from at least one previous frame in the HDR video. The tone mapping system applies the temporally filtered tone curve to the base layer to generate a temporally coherent base layer. The temporally coherent base layer can be combined with a detail layer derived from the HDR video to generate a frame of a tone mapped video.
Type: Grant
Filed: November 25, 2015
Date of Patent: May 22, 2018
Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
Inventors: Aljosa Aleksej Andrej Smolic, Tunc Ozan Aydin, Simone Maurizio Croci, Nikolce Stefanoski
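The temporal filtering of the tone curve parameter can be sketched as a one-pole IIR smoother over per-frame luminance statistics. The geometric-mean parameter, the filter gain `alpha`, and the Reinhard-style curve are assumptions for illustration, not the patented operator.

```python
import math

def log_mean_luminance(frame):
    """One possible tone curve parameter for an HDR frame (a list of
    luminance values): the geometric mean luminance."""
    eps = 1e-6   # guard against log(0)
    return math.exp(sum(math.log(v + eps) for v in frame) / len(frame))

def temporally_filter(params, alpha=0.2):
    """Smooth the per-frame parameter with a one-pole IIR filter so the
    tone curve cannot jump between frames (alpha is an assumed gain)."""
    smoothed = []
    state = params[0]
    for p in params:
        state = alpha * p + (1 - alpha) * state
        smoothed.append(state)
    return smoothed

def tone_curve(value, key):
    """Simple global compression driven by the filtered parameter."""
    scaled = value / key
    return scaled / (1.0 + scaled)
```

A sudden bright frame then only nudges the parameter instead of snapping the tone curve, which is the temporal-coherence property the abstract describes.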
-
Patent number: 9912974
Abstract: Systems, methods, and computer program products to perform an operation comprising receiving a plurality of superclusters that includes at least one of a plurality of shot clusters, wherein each of the shot clusters includes at least one of a plurality of video shots, and wherein each video shot includes one or more video frames, and computing an expected value for a metric based on the plurality of superclusters.
Type: Grant
Filed: March 1, 2016
Date of Patent: March 6, 2018
Assignee: Disney Enterprises, Inc.
Inventors: Miquel Á. Farré Guiu, Seth Frey, Aljosa Aleksej Andrej Smolic, Michael R. Clements, Pablo Beltran Sanchidrian
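The supercluster → shot cluster → shot → frame hierarchy and the expected-value computation can be sketched as nested lists. The shot-count weighting and the example metric are assumptions; the abstract does not specify either.

```python
def expected_metric(superclusters, metric):
    """Expected value of `metric` over superclusters, weighting each
    supercluster by its shot count (an assumed weighting). A supercluster
    is a list of shot clusters; a shot cluster is a list of shots; a shot
    is a list of video frames."""
    weights = [sum(len(cluster) for cluster in sc) for sc in superclusters]
    values = [metric(sc) for sc in superclusters]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

def mean_shot_length(supercluster):
    """Example metric: average number of frames per shot."""
    shots = [shot for cluster in supercluster for shot in cluster]
    return sum(len(shot) for shot in shots) / len(shots)
```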
-
Publication number: 20170257653
Abstract: Systems, methods, and computer program products to perform an operation comprising receiving a plurality of superclusters that includes at least one of a plurality of shot clusters, wherein each of the shot clusters includes at least one of a plurality of video shots, and wherein each video shot includes one or more video frames, and computing an expected value for a metric based on the plurality of superclusters.
Type: Application
Filed: March 1, 2016
Publication date: September 7, 2017
Inventors: Miquel Á. FARRÉ GUIU, Seth FREY, Aljosa Aleksej Andrej SMOLIC, Michael R. CLEMENTS, Pablo Beltran SANCHIDRIAN
-
Publication number: 20170070719
Abstract: Embodiments herein disclose a tone mapping system that generates a base layer by filtering a frame in an HDR video. The tone mapping system then generates a tone curve using a tone mapping parameter derived from the base layer, e.g., a mean or maximum luminance value. Once the tone curve parameter is identified, the system performs temporal filtering to smooth out inconsistencies between the current value of the tone curve parameter and the values of the tone curve parameter from at least one previous frame in the HDR video. The tone mapping system applies the temporally filtered tone curve to the base layer to generate a temporally coherent base layer. The temporally coherent base layer can be combined with a detail layer derived from the HDR video to generate a frame of a tone mapped video.
Type: Application
Filed: November 25, 2015
Publication date: March 9, 2017
Inventors: Aljosa Aleksej Andrej SMOLIC, Tunc Ozan AYDIN, Simone Maurizio CROCI, Nikolce STEFANOSKI
-
Patent number: 9565414
Abstract: Approaches are described for generating a multiview autostereoscopic image from a stereo three-dimensional input image pair. A stereo to multiview rendering system receives a stereo three-dimensional image including a left image and a right image at a first view position and a second view position, respectively. The system generates a first input warp and a second input warp that map the left image and the right image to third and fourth positions, respectively, where the third and fourth positions lie between the first and second view positions. The system generates a plurality of output warps based on the first warp and the second warp. The system resamples each output warp in the plurality of output warps to create a plurality of partial output images. The system interleaves the plurality of partial output images to generate a composite output image.
Type: Grant
Filed: December 18, 2013
Date of Patent: February 7, 2017
Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
Inventors: Aljosa Aleksej Andrej Smolic, Pierre Greisen, Simon Heinzle, Michael Schaffner, Frank Kağan Gürkaynak
-
Patent number: 9378543
Abstract: Approaches are described for tone-mapping an image frame in a sequence of image frames. A tone-mapping pipeline applies a spatiotemporal filter to each pixel of the image frame, based on a forward optical flow and a backward optical flow, to produce a base layer image. The tone-mapping pipeline applies a temporal filter to each pixel of the image frame, based on the forward and backward optical flows, to produce a temporally filtered frame. The tone-mapping pipeline produces a detail layer image based on the base layer image and the temporally filtered frame. The tone-mapping pipeline applies a tone curve to the base and detail layer images to produce a tone-mapped base and detail layer image, respectively, and combines the tone-mapped base and detail layer images to produce a tone-mapped image frame.
Type: Grant
Filed: July 28, 2014
Date of Patent: June 28, 2016
Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
Inventors: Tunc Ozan Aydin, Simone Croci, Aljosa Aleksej Andrej Smolic, Nikolce Stefanoski, Markus Gross
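The base/detail decomposition and recombination at the heart of this abstract can be sketched in a few lines. The box filter stands in for the patented spatiotemporal filtering, and the linear compression stands in for the tone curve; both substitutions are assumptions to keep the sketch self-contained.

```python
def box_filter(frame):
    """Stand-in for the spatiotemporal base-layer filter: a 3-tap
    box average over a 1D frame (list of log-luminance values)."""
    n = len(frame)
    return [sum(frame[max(0, i - 1):i + 2]) / len(frame[max(0, i - 1):i + 2])
            for i in range(n)]

def decompose(frame, base_filter):
    """Split a frame into a filtered base layer and the residual
    detail layer (frame = base + detail)."""
    base = base_filter(frame)
    detail = [f - b for f, b in zip(frame, base)]
    return base, detail

def tone_map(frame, compress=0.5):
    """Compress the base layer, keep the detail layer, and recombine,
    a standard base/detail scheme with an assumed linear tone curve."""
    base, detail = decompose(frame, box_filter)
    return [compress * b + d for b, d in zip(base, detail)]
```

Compressing only the base layer is what preserves local contrast: the detail residual passes through the curve unchanged.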
-
Patent number: 9361679
Abstract: Approaches are described for filtering a first image frame in a sequence of image frames. A tone-mapping pipeline applies a spatiotemporal filter to each pixel of the first image frame, based on a forward optical flow and a backward optical flow, to produce a base layer image. The tone-mapping pipeline applies a temporal filter to each pixel of the first image frame, based on the forward optical flow and the backward optical flow, to produce a temporally filtered frame. The tone-mapping pipeline produces a detail layer image based on the base layer image and the temporally filtered frame.
Type: Grant
Filed: July 28, 2014
Date of Patent: June 7, 2016
Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
Inventors: Tunc Ozan Aydin, Simone Croci, Aljosa Aleksej Andrej Smolic, Nikolce Stefanoski, Markus Gross
-
Publication number: 20160119655
Abstract: Embodiments provide techniques for distributing supplemental content based on content entities within video content. Embodiments include analyzing video data to identify a known content entity within two or more frames of the video data. For each of the two or more frames, a region of pixels within the respective frame is determined that corresponds to the known content entity. Embodiments further include determining supplemental content corresponding to the known content entity. A watermark is embedded at a first position within the video data, such that the watermark corresponds to an identifier associated with the determined supplemental content. Upon receiving a message specifying the identifier, embodiments include transmitting the supplemental content to a client device for output together with the video data.
Type: Application
Filed: October 28, 2014
Publication date: April 28, 2016
Applicant: DISNEY ENTERPRISES, INC.
Inventors: Anthony M. ACCARDO, Skarphedinn S. HEDINSSON, Aljosa Aleksej Andrej SMOLIC, Miquel À. FARRÉ GUIU, Thabo D. BEELER
-
Publication number: 20160119691
Abstract: Embodiments provide techniques for distributing supplemental content based on content entities within video content. Embodiments include receiving video data containing an embedded watermark at a first position within the video data. The embedded watermark is detected at the first position within the video data. Embodiments also include transmitting, to a remote content server, a message specifying a time stamp corresponding to the first position within the video data. In response to transmitting the message, supplemental content corresponding to a content entity depicted within the video content at the first position within the video content is received from the remote content server. Embodiments also include outputting the video data for display together with at least an indication of the supplemental content.
Type: Application
Filed: October 28, 2014
Publication date: April 28, 2016
Inventors: Anthony M. ACCARDO, Skarphedinn S. HEDINSSON, Aljosa Aleksej Andrej SMOLIC, Miquel À. FARRÉ GUIU, Thabo D. BEELER
-
Publication number: 20160027160
Abstract: Approaches are described for tone-mapping an image frame in a sequence of image frames. A tone-mapping pipeline applies a spatiotemporal filter to each pixel of the image frame, based on a forward optical flow and a backward optical flow, to produce a base layer image. The tone-mapping pipeline applies a temporal filter to each pixel of the image frame, based on the forward and backward optical flows, to produce a temporally filtered frame. The tone-mapping pipeline produces a detail layer image based on the base layer image and the temporally filtered frame. The tone-mapping pipeline applies a tone curve to the base and detail layer images to produce a tone-mapped base and detail layer image, respectively, and combines the tone-mapped base and detail layer images to produce a tone-mapped image frame.
Type: Application
Filed: July 28, 2014
Publication date: January 28, 2016
Inventors: Tunc Ozan Aydin, Simone Croci, Aljosa Aleksej Andrej Smolic, Nikolce Stefanoski, Markus Gross
-
Patent number: 9215440
Abstract: Techniques are disclosed for rendering images. The techniques include receiving an input image associated with a source space, the input image comprising a plurality of source pixels, and applying an adaptive transformation to a source pixel, where the adaptive transformation maps the source pixel to a target space associated with an output image comprising a plurality of target pixels. The techniques further include determining a target pixel affected by the source pixel based on the adaptive transformation. The techniques further include writing the transformed source pixel into a location in the output image associated with the target pixel.
Type: Grant
Filed: October 17, 2012
Date of Patent: December 15, 2015
Assignee: Disney Enterprises, Inc.
Inventors: Pierre Greisen, Simon Heinzle, Michael Schaffner, Aljosa Aleksej Andrej Smolic
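The forward-mapping scheme this abstract describes (transform each source pixel into target space, then write it at the affected target location) can be sketched as a scatter-write loop. Nearest-neighbor rounding is used here for brevity; the patent's adaptive transformation and splatting details are not reproduced.

```python
def forward_map(src, transform, width, height):
    """Map each source pixel through `transform(x, y) -> (tx, ty)` and
    scatter-write it into a width x height target grid. Target pixels
    never hit by a source pixel stay None (holes)."""
    out = [[None] * width for _ in range(height)]
    for y, row in enumerate(src):
        for x, pix in enumerate(row):
            tx, ty = transform(x, y)
            tx, ty = round(tx), round(ty)   # nearest-neighbor write
            if 0 <= tx < width and 0 <= ty < height:
                out[ty][tx] = pix
    return out
```

Because the mapping is applied per source pixel, `transform` can be any adaptive (non-linear, spatially varying) function without changing the loop.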
-
Publication number: 20150348273
Abstract: Techniques of depth modification for display applications are disclosed. A multiscopic video frame, including at least first and second images depicting a scene, is received. Depth information pertaining to the multiscopic video frame is received. A depth mapping function is determined based on one or more image characteristics of the multiscopic video frame, other than depth. The depth information is modified based on the depth mapping function. At least a third image is generated based on the modified depth information and based further on at least one of the first and second images, where the third image is output for display.
Type: Application
Filed: May 28, 2014
Publication date: December 3, 2015
Applicant: Disney Enterprises, Inc.
Inventors: Alexandre CHAPIRO, Tunc Ozan AYDIN, Steven Charles POULAKOS, Simon HEINZLE, Aljosa Aleksej Andrej SMOLIC
-
Patent number: 9202258
Abstract: Techniques are disclosed for retargeting images. The techniques include receiving one or more input images, computing a two-dimensional saliency map based on the input images in order to determine one or more visually important features associated with the input images, projecting the saliency map horizontally and vertically to create at least one of a horizontal and vertical saliency profile, and scaling at least one of the horizontal and vertical saliency profiles. The techniques further include creating an output image based on the scaled saliency profiles. Low saliency areas are scaled non-uniformly while high saliency areas are scaled uniformly. Temporal stability is achieved by filtering the horizontal resampling pattern and the vertical resampling pattern over time. Image retargeting is achieved with greater efficiency and lower compute power, resulting in a retargeting architecture that may be implemented in a circuit suitable for mobile applications such as mobile phones and tablet computers.
Type: Grant
Filed: October 17, 2012
Date of Patent: December 1, 2015
Assignee: Disney Enterprises, Inc.
Inventors: Pierre Greisen, Aljosa Aleksej Andrej Smolic, Simon Heinzle, Manuel Lang
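Turning a 1D saliency profile into a non-uniform resampling pattern, where high-saliency columns keep more of their width and low-saliency columns absorb most of the shrink, can be sketched as below. The 50/50 blend between uniform and saliency-proportional spacing is an assumption for illustration, not the patented mapping.

```python
def resampling_pattern(saliency, target_width):
    """Distribute `target_width` across the source columns of a
    horizontal saliency profile. Returns per-source-column target
    widths (summing to target_width): high-saliency columns shrink
    less, low-saliency columns shrink more."""
    n = len(saliency)
    uniform = target_width / n
    total_sal = sum(saliency)
    # blend uniform spacing with saliency-proportional spacing
    widths = [0.5 * uniform + 0.5 * target_width * s / total_sal
              for s in saliency]
    scale = target_width / sum(widths)   # renormalize exactly
    return [w * scale for w in widths]
```

For video, the abstract's temporal stability would correspond to low-pass filtering these width patterns across frames before resampling.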