Patents by Inventor Aljosa Smolic

Aljosa Smolic has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9146455
    Abstract: Embodiments provide techniques for creating a composite video stream. A first video stream and a second video stream are received, along with a first selection of pixels from the first video stream and a second selection of pixels from the second video stream. Both the first selection of pixels and the second selection of pixels indicate pixels that are to be included in the composite video stream. Embodiments identify a plurality of spatiotemporal seams across the first video stream and the second video stream, based at least in part on the first selection of pixels and the second selection of pixels. The first video stream and the second video stream are then composited into the composite video stream by joining frames from the two streams at the identified plurality of spatiotemporal seams.
    Type: Grant
    Filed: September 9, 2013
    Date of Patent: September 29, 2015
    Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
    Inventors: Jan Ruegg, Oliver Wang, Aljosa Smolic, Markus Gross
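The seam-based compositing above can be sketched in miniature. The following is a hypothetical simplification: it finds a minimum-cost vertical seam per frame by dynamic programming over the per-pixel difference between the two streams, whereas the patented technique optimizes seams jointly across space and time.

```python
def find_vertical_seam(cost):
    """Return, for each row, the column of a minimum-cost vertical seam
    (classic seam-carving dynamic program over a per-pixel cost map)."""
    h, w = len(cost), len(cost[0])
    dp = [row[:] for row in cost]
    for y in range(1, h):
        for x in range(w):
            best = dp[y - 1][x]
            if x > 0:
                best = min(best, dp[y - 1][x - 1])
            if x < w - 1:
                best = min(best, dp[y - 1][x + 1])
            dp[y][x] += best
    # Backtrack from the cheapest bottom cell, moving at most one column per row.
    seam = [min(range(w), key=lambda x: dp[h - 1][x])]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        candidates = [c for c in (x - 1, x, x + 1) if 0 <= c < w]
        seam.append(min(candidates, key=lambda c: dp[y][c]))
    seam.reverse()
    return seam

def composite(frame_a, frame_b):
    """Join frame_a (left of the seam) and frame_b (right of the seam),
    placing the seam where the two frames agree most closely."""
    cost = [[abs(a - b) for a, b in zip(ra, rb)]
            for ra, rb in zip(frame_a, frame_b)]
    seam = find_vertical_seam(cost)
    return [ra[:s] + rb[s:] for ra, rb, s in zip(frame_a, frame_b, seam)]
```

Running the seam through the low-difference region keeps the join invisible; the patented spatiotemporal version additionally constrains the seam across frames so it does not flicker over time.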
  • Publication number: 20150193950
    Abstract: As described herein, an electronic device with a display screen may simulate the color diffusion that occurs in a physical painting process. For instance, the user may perform one or more actions that simulate a brushstroke on the display screen such as swiping a touch-sensitive area or dragging a cursor across the screen. The electronic device then calculates a geodesic distance between a pixel inside a region defined by the brushstroke and a pixel located outside this region based on the physical distance between the two pixels and a weighting factor that varies depending on whether an image boundary is between the two pixels. Based on the geodesic distance, the electronic device uses a color diffusion relationship that defines the effect of the color of the brushstroke on the pixel and a time delay controlling when the color of the brushstroke reaches the pixel in order to simulate color diffusion.
    Type: Application
    Filed: January 9, 2014
    Publication date: July 9, 2015
    Applicant: Disney Enterprises, Inc.
    Inventors: Aljosa SMOLIC, Oliver WANG, Nicolas MÄRKI
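A rough sketch of the geodesic-distance machinery described above, assuming a Dijkstra traversal of the pixel grid in which stepping onto an image-boundary pixel incurs an extra cost (the weighting factor), and a diffusion rule in which the brush color reaches a pixel only after a delay proportional to its geodesic distance. The penalty constant and the blend rule are illustrative, not the patent's.

```python
import heapq

def geodesic_distances(w, h, seeds, is_boundary, penalty=100.0):
    """Dijkstra over the pixel grid: each step costs 1, plus a penalty
    when the step crosses onto an image-boundary pixel."""
    dist = {(x, y): float("inf") for x in range(w) for y in range(h)}
    pq = [(0.0, s) for s in seeds]
    for s in seeds:
        dist[s] = 0.0
    while pq:
        d, (x, y) = heapq.heappop(pq)
        if d > dist[(x, y)]:
            continue  # stale queue entry
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h:
                nd = d + 1.0 + (penalty if is_boundary(nx, ny) else 0.0)
                if nd < dist[(nx, ny)]:
                    dist[(nx, ny)] = nd
                    heapq.heappush(pq, (nd, (nx, ny)))
    return dist

def diffused_color(base, brush, d, t, speed=1.0):
    """The brush color reaches a pixel at geodesic distance d only after
    a time delay d / speed, then ramps in gradually."""
    if t * speed < d:
        return base
    reach = min(1.0, (t * speed - d) / max(d, 1.0))
    return tuple(b + (c - b) * reach for b, c in zip(base, brush))
```

Because the boundary penalty inflates the geodesic distance across image edges, color "pools" inside the brushed region before slowly leaking past boundaries, mimicking wet paint.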
  • Patent number: 9013497
    Abstract: An approach is provided for determining transducer functions that map objective image attribute values to estimated subjective attribute values. The approach includes determining objective attribute values for each of one or more aesthetic attributes for each image in a first set of images. The approach further includes determining, for each aesthetic attribute, a mapping from the objective attribute values to respective estimated subjective attribute values based on the objective attribute values and corresponding experimentally-determined attribute values. Using the determined mappings, aesthetic signatures, which include estimates of subjective image aesthetics across multiple dimensions, may be generated.
    Type: Grant
    Filed: July 27, 2012
    Date of Patent: April 21, 2015
    Assignee: Disney Enterprises, Inc.
    Inventors: Tunc Ozan Aydin, Aljosa Smolic
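The transducer idea can be illustrated with a deliberately simple fit. The patent does not prescribe this form; a linear least-squares mapping stands in here for whatever transducer function is actually fitted to the experimentally determined ratings.

```python
def fit_transducer(objective, subjective):
    """Least-squares line mapping objective attribute values to the
    corresponding experimentally determined subjective ratings."""
    n = len(objective)
    mx = sum(objective) / n
    my = sum(subjective) / n
    var = sum((x - mx) ** 2 for x in objective)
    cov = sum((x - mx) * (y - my) for x, y in zip(objective, subjective))
    slope = cov / var
    intercept = my - slope * mx
    return lambda x: slope * x + intercept

def aesthetic_signature(image_attrs, transducers):
    """Map each objective attribute of an image through its fitted
    transducer, yielding a multi-dimensional subjective estimate."""
    return {name: transducers[name](v) for name, v in image_attrs.items()}
```

One transducer is fitted per aesthetic attribute (e.g. a hypothetical "sharpness"), and an image's signature is the vector of per-attribute subjective estimates.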
  • Publication number: 20150071612
    Abstract: Embodiments provide techniques for creating a composite video stream. A first video stream and a second video stream are received, along with a first selection of pixels from the first video stream and a second selection of pixels from the second video stream. Both the first selection of pixels and the second selection of pixels indicate pixels that are to be included in the composite video stream. Embodiments identify a plurality of spatiotemporal seams across the first video stream and the second video stream, based at least in part on the first selection of pixels and the second selection of pixels. The first video stream and the second video stream are then composited into the composite video stream by joining frames from the two streams at the identified plurality of spatiotemporal seams.
    Type: Application
    Filed: September 9, 2013
    Publication date: March 12, 2015
    Applicant: Disney Enterprises, Inc.
    Inventors: Jan RUEGG, Oliver WANG, Aljosa SMOLIC, Markus GROSS
  • Publication number: 20150063709
    Abstract: Methods and systems described herein detect object boundaries in videos. Inconsistencies in image patches over a temporal window are detected, and each pixel of an image frame of a video is assigned an object boundary probability. To determine object boundaries, a window around each pixel is followed in the image frames adjacent to that image frame. The pixel may belong to a texture edge if the window content does not change throughout the adjacent image frames, or to an object boundary if the window content changes. A probability value indicating the likelihood of the pixel belonging to an object boundary is determined based on the window content change and is assigned to the corresponding pixel.
    Type: Application
    Filed: August 29, 2013
    Publication date: March 5, 2015
    Applicant: Disney Enterprises, Inc.
    Inventors: OLIVER WANG, ALJOSA SMOLIC
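A toy version of the patch-consistency test, with one stated simplification: the window here stays fixed at the pixel rather than being followed (motion-compensated) across frames, and the function squashing mean patch difference into a probability is illustrative.

```python
def patch(frame, cx, cy, r):
    """r-radius square window around (cx, cy), clamped at the border."""
    h, w = len(frame), len(frame[0])
    return [frame[min(max(cy + dy, 0), h - 1)][min(max(cx + dx, 0), w - 1)]
            for dy in range(-r, r + 1) for dx in range(-r, r + 1)]

def boundary_probability(frames, cx, cy, r=1):
    """High when the window content changes across adjacent frames
    (object boundary), low when it stays constant (texture edge)."""
    ref = patch(frames[0], cx, cy, r)
    diffs = []
    for f in frames[1:]:
        p = patch(f, cx, cy, r)
        diffs.append(sum(abs(a - b) for a, b in zip(ref, p)) / len(ref))
    mean_diff = sum(diffs) / len(diffs)
    return mean_diff / (mean_diff + 1.0)  # squash into [0, 1)
```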
  • Publication number: 20140307048
    Abstract: Techniques are disclosed for view generation based on a video coding scheme. A bitstream is received that is encoded based on the video coding scheme. The bitstream includes video, quantized warp map offsets, and a message of a message type specified by the video coding scheme. Depth samples decoded from the bitstream are interpreted as quantized warp map offsets, based on a first syntax element contained in the message. Warp maps are generated based on the quantized warp map offsets and a second syntax element contained in the message. Views are generated using image-domain warping, based on the video and the warp maps.
    Type: Application
    Filed: December 26, 2013
    Publication date: October 16, 2014
    Applicant: DISNEY ENTERPRISES, INC.
    Inventors: Aljosa SMOLIC, Nikolce STEFANOSKI
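The offset dequantization and warping steps can be sketched as follows. The grid layout, the use of a single `scale` factor standing in for the message's second syntax element, and the nearest-neighbour sampling are all assumptions for illustration.

```python
def dequantize_offsets(q_offsets, scale, base_grid):
    """Reconstruct a warp map: each grid position is its base position
    plus the dequantized offset (quantized offset times a scale factor)."""
    return [[(bx + qx * scale, by + qy * scale)
             for (bx, by), (qx, qy) in zip(brow, qrow)]
            for brow, qrow in zip(base_grid, q_offsets)]

def warp_row(row, warp_xs):
    """1-D image-domain warp: sample each output pixel at its warped
    source coordinate (nearest-neighbour for brevity)."""
    w = len(row)
    return [row[min(max(int(round(x)), 0), w - 1)] for x in warp_xs]
```

Reusing the depth-sample channel to carry quantized offsets lets a standard decoder recover the data, with the message telling a warping-aware renderer how to reinterpret and rescale it.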
  • Patent number: 8666146
    Abstract: Systems and methods for generating a stereoscopic pair of images from a monoscopic image input are described. At least one brushstroke input corresponding to a location in the monoscopic image is received. A saliency map and edge map of the monoscopic image are computed. A first image warp and a second image warp are computed using the at least one brushstroke, the saliency map, and the edge map. A stereoscopic pair of images are generated from the first image warp and the second image warp.
    Type: Grant
    Filed: January 18, 2012
    Date of Patent: March 4, 2014
    Assignee: Disney Enterprises, Inc.
    Inventors: Aljosa A. Smolic, Manuel Lang, Alexander Hornung, Oliver Wang, Markus Gross
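One way to picture the output stage: the stroke-derived disparity is split between the two views, shifting content half the disparity in each direction. This forward-mapping sketch ignores the saliency and edge maps and leaves unwritten pixels at their original values, holes the patented image warps are designed to avoid.

```python
def stereo_pair_row(row, disparity):
    """Produce a left/right view of one scanline by shifting each pixel
    half its disparity outward in each direction (clamped at borders)."""
    w = len(row)
    left = row[:]   # unwritten pixels keep the original value
    right = row[:]
    for x in range(w):
        d = disparity[x]
        lx = min(max(int(round(x - d / 2)), 0), w - 1)
        rx = min(max(int(round(x + d / 2)), 0), w - 1)
        left[lx] = row[x]
        right[rx] = row[x]
    return left, right
```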
  • Publication number: 20140028695
    Abstract: The disclosure provides an approach for determining transducer functions for mapping objective image attribute values to estimated subjective attribute values. The approach includes determining objective attribute values for each of one or more aesthetic attributes for each image in a first set of images. The approach further includes determining, for each aesthetic attribute, a mapping from the objective attribute values to respective estimated subjective attribute values based on the objective attribute values and corresponding experimentally-determined attribute values. Using the determined mappings, aesthetic signatures, which include estimates of subjective image aesthetics across multiple dimensions, may be generated.
    Type: Application
    Filed: July 27, 2012
    Publication date: January 30, 2014
    Applicant: DISNEY ENTERPRISES, INC.
    Inventors: Tunc Ozan AYDIN, Aljosa SMOLIC
  • Patent number: 8514932
    Abstract: Systems, methods and articles of manufacture are disclosed for performing scalable video coding. In one embodiment, non-linear functions are used to predict source video data using retargeted video data. Differences may be determined between the predicted video data and the source video data. The retargeted video data, the non-linear functions, and the differences may be jointly encoded into a scalable bitstream. The scalable bitstream may be transmitted and selectively decoded to produce output video for one of a plurality of predefined target platforms.
    Type: Grant
    Filed: February 8, 2010
    Date of Patent: August 20, 2013
    Assignee: Disney Enterprises, Inc.
    Inventors: Nikolce Stefanoski, Aljosa Smolic, Yongzhe Wang, Manuel Lang, Alexander Hornung, Markus Gross
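The inter-layer prediction structure reads naturally as three small functions; a placeholder non-linear function `f` stands in for the fitted prediction functions the patent describes.

```python
def predict(retargeted, f):
    """Predict source samples from retargeted (base-layer) samples via a
    non-linear inter-layer prediction function f."""
    return [f(v) for v in retargeted]

def encode(source, retargeted, f):
    """The enhancement layer carries the residual between the source and
    its prediction from the retargeted video."""
    return [s - p for s, p in zip(source, predict(retargeted, f))]

def decode(retargeted, residual, f):
    """Base-layer decoders stop at `retargeted`; full decoders add the
    residual back onto the prediction to recover the source."""
    return [p + r for p, r in zip(predict(retargeted, f), residual)]
```

The better `f` predicts the source from the retargeted version, the smaller the residual, which is what makes jointly encoding both layers cheaper than sending them separately.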
  • Publication number: 20110261050
    Abstract: An intermediate view synthesis apparatus for synthesizing an intermediate view image from a first image corresponding to a first view and a second image corresponding to a second view different from the first view, the first and second images including depth information, wherein the second image is divided into a non-boundary portion and a foreground/background boundary region, and wherein the intermediate view synthesis apparatus is configured to project and merge the first image and the second image into the intermediate view to obtain an intermediate view image, treating the foreground/background boundary region as subordinate to the non-boundary portion. A multi-view data signal extraction apparatus for extracting a multi-view data signal from a multi-view representation including a first image corresponding to a first view and a second image corresponding to a second view different from the first view, the first and second images including depth information, is also described.
    Type: Application
    Filed: April 1, 2011
    Publication date: October 27, 2011
    Inventors: Aljosa Smolic, Karsten Mueller, Kristina Dix
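The projection-and-merge policy can be sketched in one dimension: project each view's pixels by a depth-proportional disparity, then merge so that boundary-region samples only fill holes left by the non-boundary samples. The disparity model and hole handling here are illustrative assumptions.

```python
def project(row, depth, alpha, max_disp):
    """Forward-project one scanline toward the intermediate view:
    disparity is proportional to depth, scaled by the view position
    alpha in [0, 1]. Unfilled pixels stay None (disocclusion holes)."""
    w = len(row)
    out = [None] * w
    for x in range(w):
        nx = int(round(x + alpha * max_disp * depth[x]))
        if 0 <= nx < w:
            out[nx] = row[x]
    return out

def merge(non_boundary, boundary_region):
    """Non-boundary samples take precedence; boundary-region samples
    (which tend to carry projection artefacts) only fill remaining holes."""
    return [nb if nb is not None else b
            for nb, b in zip(non_boundary, boundary_region)]
```

Deferring the foreground/background boundary region to hole-filling keeps its noisier depth edges from overwriting reliable interior samples.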
  • Publication number: 20110194024
    Abstract: Systems, methods and articles of manufacture are disclosed for performing scalable video coding. In one embodiment, non-linear functions are used to predict source video data using retargeted video data. Differences may be determined between the predicted video data and the source video data. The retargeted video data, the non-linear functions, and the differences may be jointly encoded into a scalable bitstream. The scalable bitstream may be transmitted and selectively decoded to produce output video for one of a plurality of predefined target platforms.
    Type: Application
    Filed: February 8, 2010
    Publication date: August 11, 2011
    Inventors: Nikolce STEFANOSKI, Aljosa SMOLIC, Yongzhe WANG, Manuel LANG, Alexander HORNUNG, Markus GROSS