Patents by Inventor Nikolce Stefanoski

Nikolce Stefanoski has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20160373795
    Abstract: There is provided a server for providing an interactive broadcast. The server includes a memory configured to store a story manager including an event controller, a story controller and an action processor, and a hardware processor configured to execute the story manager. The story manager is configured to provide, using the event controller, an event based on a control script, story elements metadata, one or more user performance analyses, and one or more user preferences. The story manager is also configured to generate, using the story controller, an action command based on the event received from the event controller and a story state. The story manager is further configured to determine, using the action processor, an action corresponding to the action command for initiating one or more control processes for distributing the interactive broadcast.
    Type: Application
    Filed: August 18, 2015
    Publication date: December 22, 2016
    Inventors: Nikolce Stefanoski, Aljoscha Smolic
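The abstract above describes an architecture rather than an algorithm. The following sketch, with hypothetical class and key names, only illustrates how the event controller, story controller, and action processor it names could hand data to one another; it is not the patented implementation.

```python
# Minimal sketch with hypothetical class and key names; shows only how the three
# components described above could pass data to one another.
from dataclasses import dataclass, field

@dataclass
class Event:
    name: str
    payload: dict = field(default_factory=dict)

class EventController:
    """Provides an event based on a control script, story metadata, user
    performance analyses, and user preferences."""
    def __init__(self, control_script, story_metadata, user_analyses, user_prefs):
        self.control_script = control_script
        self.story_metadata = story_metadata
        self.user_analyses = user_analyses
        self.user_prefs = user_prefs

    def next_event(self):
        # Pick the first scripted event whose condition matches the user data.
        for entry in self.control_script:
            if entry["condition"](self.user_analyses, self.user_prefs):
                return Event(entry["event"], {"meta": self.story_metadata})
        return None

class StoryController:
    """Turns an event plus the current story state into an action command."""
    def __init__(self, story_state):
        self.story_state = story_state

    def action_command(self, event):
        return {"command": f"{self.story_state}:{event.name}"}

class ActionProcessor:
    """Maps an action command to a broadcast-distribution control process (stubbed)."""
    def action_for(self, command):
        return lambda: print(f"initiating control process for {command['command']}")

script = [{"event": "show_poll",
           "condition": lambda analyses, prefs: prefs.get("polls", True)}]
controller = EventController(script, story_metadata={}, user_analyses={}, user_prefs={"polls": True})
event = controller.next_event()
ActionProcessor().action_for(StoryController("act1").action_command(event))()
```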
  • Publication number: 20160353164
    Abstract: Novel systems and methods are described for creating, compressing, and distributing video or image content graded for a plurality of displays with different dynamic ranges. In implementations, the created content is “continuous dynamic range” (CDR) content—a novel representation of pixel-luminance as a function of display dynamic range. The creation of the CDR content includes grading a source content for a minimum dynamic range and a maximum dynamic range, and defining a luminance of each pixel of an image or video frame of the source content as a continuous function between the minimum and the maximum dynamic ranges. In additional implementations, a novel graphical user interface for creating and editing the CDR content is described.
    Type: Application
    Filed: September 22, 2015
    Publication date: December 1, 2016
    Inventors: ALJOSCHA SMOLIC, ALEXANDRE CHAPIRO, SIMONE CROCI, TUNC OZAN AYDIN, NIKOLCE STEFANOSKI, MARKUS GROSS
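A minimal Python sketch of the continuous-dynamic-range idea in the abstract above: pixel luminance is stored as a function of display dynamic range and evaluated for a target display. The log-domain interpolation between the two grades is an assumption made purely for illustration; the patent does not commit to this parameterization.

```python
import numpy as np

def cdr_luminance(l_min_grade, l_max_grade, dr_min, dr_max, dr_target):
    """Evaluate a continuous-dynamic-range pixel for a target display.

    l_min_grade, l_max_grade: per-pixel luminance graded for the minimum and
    maximum dynamic ranges (in nits). dr_*: display peak luminances. A simple
    log-domain interpolation is assumed here only for illustration.
    """
    t = (np.log(dr_target) - np.log(dr_min)) / (np.log(dr_max) - np.log(dr_min))
    t = np.clip(t, 0.0, 1.0)
    return np.exp((1 - t) * np.log(l_min_grade) + t * np.log(l_max_grade))

# Example: a pixel graded to 60 nits for a 100-nit display and 1500 nits for a
# 4000-nit display, rendered for a 600-nit display.
print(cdr_luminance(60.0, 1500.0, dr_min=100.0, dr_max=4000.0, dr_target=600.0))
```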
  • Publication number: 20160323556
    Abstract: A video coding pipeline is provided that can accommodate high dynamic range (HDR) and wide color gamut (WCG) content at a fixed bitrate. The video coding pipeline relies on separate chromaticity and luminance-specific transforms in order to process image content. Image content may be converted into a nearly perceptually uniform color space for coding in constant luminance. Moreover, chromaticity transforms are utilized which reduce coding errors in the chroma components (at the fixed bitrate) by enlarging the distribution of code words for compression.
    Type: Application
    Filed: September 22, 2015
    Publication date: November 3, 2016
    Inventors: DANIEL LUGINBUHL, TUNC OZAN AYDIN, ALJOSA SMOLIC, NIKOLCE STEFANOSKI
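The sketch below illustrates constant-luminance coding in a nearly perceptually uniform space, which the abstract above relies on: it applies the SMPTE ST 2084 (PQ) transfer function to luminance computed from linear BT.2020 RGB and forms simple chroma differences. The patent's specific chromaticity transforms, which enlarge the code-word distribution to reduce chroma coding error at a fixed bitrate, are not reproduced here.

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(luminance_nits):
    """Map absolute luminance (cd/m^2) to a nearly perceptually uniform code value."""
    y = np.clip(luminance_nits / 10000.0, 0.0, 1.0) ** M1
    return ((C1 + C2 * y) / (1.0 + C3 * y)) ** M2

def constant_luminance_encode(rgb_linear_nits):
    """Split linear BT.2020 RGB into a PQ-coded luminance channel and two
    (un-normalized) chroma differences computed from PQ-coded components."""
    r, g, b = rgb_linear_nits[..., 0], rgb_linear_nits[..., 1], rgb_linear_nits[..., 2]
    y = 0.2627 * r + 0.6780 * g + 0.0593 * b          # luminance in linear light
    yp, bp, rp = pq_encode(y), pq_encode(b), pq_encode(r)
    cb, cr = bp - yp, rp - yp                          # chroma differences
    return np.stack([yp, cb, cr], axis=-1)

pixel = np.array([[120.0, 450.0, 80.0]])               # linear-light RGB in nits
print(constant_luminance_encode(pixel))
```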
  • Patent number: 9445072
    Abstract: Techniques are disclosed for generating autostereoscopic video content. A multiscopic video frame is received that includes a first image and a second image. The first and second images are analyzed to determine a set of image characteristics. A mapping function is determined based on the set of image characteristics. At least a third image is generated based on the mapping function and added to the multiscopic video frame.
    Type: Grant
    Filed: August 31, 2012
    Date of Patent: September 13, 2016
    Assignees: DISNEY ENTERPRISES, INC., ETH ZURICH (EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZURICH)
    Inventors: Nikolce Stefanoski, Aljoscha Smolic, Manuel Lang, Miquel À Farré, Alexander Hornung, Pedro Christian Espinosa Fricke, Oliver Wang
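A toy Python sketch of the view-generation idea in the abstract above: analyze the two input images, derive a mapping, and synthesize a third image from it. A single global horizontal shift estimated from column means stands in for the patent's image-characteristic analysis and mapping function; it is not the claimed method.

```python
import numpy as np

def estimate_global_shift(left, right, max_shift=32):
    """Rough stand-in for the image analysis: find the single horizontal shift
    that best aligns the two views, via column-mean matching."""
    lc, rc = left.mean(axis=0), right.mean(axis=0)
    errors = [np.mean((lc[:lc.size - s] - rc[s:]) ** 2) for s in range(max_shift)]
    return int(np.argmin(errors))

def synthesize_intermediate(left, right, alpha=0.5):
    """Generate a third view between left (alpha=0) and right (alpha=1) by
    shifting each input toward the virtual viewpoint and blending."""
    shift = estimate_global_shift(left, right)
    toward_left = np.roll(left, int(round(alpha * shift)), axis=1)
    toward_right = np.roll(right, -int(round((1 - alpha) * shift)), axis=1)
    return (1 - alpha) * toward_left + alpha * toward_right

left = np.random.rand(120, 160)
right = np.roll(left, 8, axis=1)   # toy stereo pair with an 8-pixel global disparity
print(synthesize_intermediate(left, right).shape)
```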
  • Publication number: 20160261927
    Abstract: A device and method for receiving video content, generating at least two overlays for the video content, generating an information message containing information enabling a receiver of the video content and of the at least two overlays to selectively display or hide the generated overlays, and transmitting, using a multi-stream transmission including a primary stream and auxiliary streams, the information message, the video content in the primary stream and the at least two overlays in the auxiliary streams.
    Type: Application
    Filed: March 31, 2016
    Publication date: September 8, 2016
    Inventors: Aljoscha SMOLIC, Nikolce STEFANOSKI, Oliver WANG
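A small sketch of the multi-stream packaging described in the abstract above: video in the primary stream, each overlay in its own auxiliary stream, and an information message that lets the receiver selectively show or hide overlays. All field names are illustrative assumptions, not taken from any broadcast standard or from the patent claims.

```python
import json

def build_transmission(video_payload, overlays):
    """Assemble a multi-stream transmission: video in the primary stream, each
    overlay in its own auxiliary stream, plus an information message that lets
    the receiver show or hide each overlay. Field names are illustrative."""
    info_message = {
        "type": "overlay_info",
        "overlays": [
            {"auxiliary_stream_id": i + 1, "label": name, "selectable": True}
            for i, name in enumerate(overlays)
        ],
    }
    return {
        "info_message": json.dumps(info_message),
        "primary_stream": video_payload,
        "auxiliary_streams": {i + 1: data for i, (name, data) in enumerate(overlays.items())},
    }

tx = build_transmission(b"<encoded video>",
                        {"score_banner": b"<overlay 1>", "stats_panel": b"<overlay 2>"})
print(tx["info_message"])
```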
  • Publication number: 20160225342
    Abstract: Methods and systems for calibrating devices reproducing high dimensional data, such as calibrating High Dynamic Range (HDR) displays that reproduce chromatic data. Methods include mapping input data into calibrated data using calibration information retrieved from spatial data structures that encode a calibration function. The calibration function may be represented by any multidimensional scattered data interpolation method, such as Thin-Plate Splines. To efficiently represent and access the calibration information at runtime, the calibration function is recursively sampled based on a guidance dataset. In an embodiment, an HDR display may be adaptively calibrated using a dynamic color guidance dataset and dynamic spatial data structures.
    Type: Application
    Filed: February 2, 2015
    Publication date: August 4, 2016
    Inventors: Aljosa SMOLIC, Nikolce STEFANOSKI, Tunc Ozan AYDIN, Jing LIU, Anselm GRUNDHOFER
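A short Python sketch of the calibration idea in the abstract above: represent the calibration function with thin-plate-spline scattered-data interpolation, then sample it into a structure the display can query at runtime. The uniform grid used here is only a stand-in for the recursive, guidance-driven sampling and spatial data structures the patent describes, and the measured pairs are synthetic.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Synthetic calibration pairs: device input colors (RGB in [0, 1]) and the
# corrected values that would actually reproduce the target color on this display.
rng = np.random.default_rng(0)
inputs = rng.random((200, 3))
corrected = np.clip(inputs + 0.05 * np.sin(6.0 * inputs), 0.0, 1.0)

# Represent the calibration function by thin-plate-spline scattered-data
# interpolation (one of the interpolants the abstract mentions).
calibration = RBFInterpolator(inputs, corrected, kernel="thin_plate_spline")

# Sample the function into a regular lookup table for fast runtime access; the
# patent's recursive, guidance-driven sampling into spatial data structures is
# replaced here by a plain uniform grid.
grid_1d = np.linspace(0.0, 1.0, 9)
grid = np.stack(np.meshgrid(grid_1d, grid_1d, grid_1d, indexing="ij"), axis=-1)
lut = calibration(grid.reshape(-1, 3)).reshape(9, 9, 9, 3)

print(lut[4, 4, 4])   # calibrated output for a mid-gray input
```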
  • Patent number: 9378543
    Abstract: Approaches are described for tone-mapping an image frame in a sequence of image frames. A tone-mapping pipeline applies a spatiotemporal filter to each pixel of the image frame, based on a forward optical flow and a backward optical flow, to produce a base layer image. The tone-mapping pipeline applies a temporal filter to each pixel of the image frame, based on the forward and backward optical flows, to produce a temporally filtered frame. The tone-mapping pipeline produces a detail layer image based on the base layer image and the temporally filtered frame. The tone-mapping pipeline applies a tone curve to the base and detail layer images to produce a tone-mapped base and detail layer image, respectively, and combines the tone-mapped base and detail layer images to produce a tone-mapped image frame.
    Type: Grant
    Filed: July 28, 2014
    Date of Patent: June 28, 2016
    Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
    Inventors: Tunc Ozan Aydin, Simone Croci, Aljosa Aleksej Andrej Smolic, Nikolce Stefanoski, Markus Gross
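A toy Python sketch of the base/detail decomposition and tone-curve step described in the abstract above. The patent's filters are guided by forward and backward optical flow; a spatial Gaussian and a plain temporal average over neighboring frames stand in for them here, purely to show how the layers combine.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tone_map_frame(frames, index, base_sigma=8.0, compression=0.5):
    """Toy base/detail tone mapping of one HDR frame in a sequence.

    The patented pipeline uses flow-guided spatiotemporal and temporal filters;
    this sketch substitutes a spatial Gaussian and a temporal average over
    neighboring frames, just to show the layer decomposition."""
    log_frames = np.log(np.stack(frames) + 1e-6)
    base = gaussian_filter(log_frames[index], sigma=base_sigma)        # base layer
    lo, hi = max(0, index - 2), min(len(frames), index + 3)
    temporally_filtered = log_frames[lo:hi].mean(axis=0)               # temporal filter
    detail = temporally_filtered - base                                # detail layer
    # Tone curves: compress the base layer's range, keep detail (nearly) intact.
    return np.exp(compression * base + detail)

hdr_sequence = [np.random.rand(90, 120) * 1000.0 for _ in range(5)]
ldr = tone_map_frame(hdr_sequence, index=2)
print(ldr.min(), ldr.max())
```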
  • Patent number: 9361679
    Abstract: Approaches are described for filtering a first image frame in a sequence of image frames. A tone-mapping pipeline applies a spatiotemporal filter to each pixel of the first image frame, based on a forward optical flow and a backward optical flow, to produce a base layer image. The tone-mapping pipeline applies a temporal filter to each pixel of the first image frame, based on the forward optical flow and the backward optical flow, to produce a temporally filtered frame. The tone-mapping pipeline produces a detail layer image based on the base layer image and the temporally filtered frame.
    Type: Grant
    Filed: July 28, 2014
    Date of Patent: June 7, 2016
    Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
    Inventors: Tunc Ozan Aydin, Simone Croci, Aljosa Aleksej Andrej Smolic, Nikolce Stefanoski, Markus Gross
  • Publication number: 20160027161
    Abstract: Approaches are described for filtering a first image frame in a sequence of image frames. A tone-mapping pipeline applies a spatiotemporal filter to each pixel of the first image frame, based on a forward optical flow and a backward optical flow, to produce a base layer image. The tone-mapping pipeline applies a temporal filter to each pixel of the first image frame, based on the forward optical flow and the backward optical flow, to produce a temporally filtered frame. The tone-mapping pipeline produces a detail layer image based on the base layer image and the temporally filtered frame.
    Type: Application
    Filed: July 28, 2014
    Publication date: January 28, 2016
    Inventors: Tunc Ozan Aydin, Simone Croci, Aljosa Aleksej Andrej Smolic, Nikolce Stefanoski, Markus Gross
  • Publication number: 20160027160
    Abstract: Approaches are described for tone-mapping an image frame in a sequence of image frames. A tone-mapping pipeline applies a spatiotemporal filter to each pixel of the image frame, based on a forward optical flow and a backward optical flow, to produce a base layer image. The tone-mapping pipeline applies a temporal filter to each pixel of the image frame, based on the forward and backward optical flows, to produce a temporally filtered frame. The tone-mapping pipeline produces a detail layer image based on the base layer image and the temporally filtered frame. The tone-mapping pipeline applies a tone curve to the base and detail layer images to produce a tone-mapped base and detail layer image, respectively, and combines the tone-mapped base and detail layer images to produce a tone-mapped image frame.
    Type: Application
    Filed: July 28, 2014
    Publication date: January 28, 2016
    Inventors: Tunc Ozan Aydin, Simone Croci, Aljosa Aleksej Andrej Smolic, Nikolce Stefanoski, Markus Gross
  • Publication number: 20150135212
    Abstract: A method including receiving video of an event; generating an overlay for the video; generating an information message containing information enabling a receiver of the video and the overlay to selectively display or hide the overlay; and transmitting the video, the overlay, and the information message. The video is transmitted in a primary stream of a multi-stream transmission including a primary stream and one or more auxiliary streams. The overlay is transmitted in a first one of the auxiliary streams.
    Type: Application
    Filed: October 9, 2014
    Publication date: May 14, 2015
    Inventors: Aljoscha Smolic, Nikolce Stefanoski, Oliver Wang
  • Publication number: 20140307048
    Abstract: Techniques are disclosed for view generation based on a video coding scheme. A bitstream is received that is encoded based on the video coding scheme. The bitstream includes video, quantized warp map offsets, and a message of a message type specified by the video coding scheme. Depth samples decoded from the first bitstream are interpreted as quantized warp map offsets, based on a first syntax element contained in the message. Warp maps are generated based on the quantized warp map offsets and a second syntax element contained in the message. Views are generated using image-domain warping and based on the video and the warp maps.
    Type: Application
    Filed: December 26, 2013
    Publication date: October 16, 2014
    Applicant: DISNEY ENTERPRISES, INC.
    Inventors: Aljosa SMOLIC, Nikolce STEFANOSKI
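A toy Python sketch of the decoder-side idea in the abstract above: treat decoded depth samples as quantized warp-map offsets, dequantize them with a signaled scale, and synthesize a view by image-domain warping. The variable names and the fixed scale are illustrative assumptions, not the bitstream syntax.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def generate_view(decoded_frame, quantized_offsets, scale=0.25):
    """Toy image-domain warping from quantized warp-map offsets.

    quantized_offsets plays the role of the depth samples that the decoder
    reinterprets as horizontal warp offsets; `scale` stands in for the
    dequantization factor a syntax element would signal."""
    h, w = decoded_frame.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    warp_x = cols + scale * quantized_offsets        # warp map: per-pixel x coordinate
    return map_coordinates(decoded_frame, [rows, warp_x], order=1, mode="nearest")

frame = np.random.rand(72, 128)
offsets = np.round(np.random.uniform(-8, 8, size=frame.shape))   # integer "depth" samples
print(generate_view(frame, offsets).shape)
```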
  • Patent number: 8514932
    Abstract: Systems, methods and articles of manufacture are disclosed for performing scalable video coding. In one embodiment, non-linear functions are used to predict source video data using retargeted video data. Differences may be determined between the predicted video data and the source video data. The retargeted video data, the non-linear functions, and the differences may be jointly encoded into a scalable bitstream. The scalable bitstream may be transmitted and selectively decoded to produce output video for one of a plurality of predefined target platforms.
    Type: Grant
    Filed: February 8, 2010
    Date of Patent: August 20, 2013
    Assignee: Disney Enterprises, Inc.
    Inventors: Nikolce Stefanoski, Aljosa Smolic, Yongzhe Wang, Manuel Lang, Alexander Hornung, Markus Gross
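A minimal Python sketch of the prediction step described in the abstract above: fit a non-linear (here polynomial) function mapping retargeted pixel values to source pixel values, keep the function coefficients and the residual, and invert the process at the decoder. It assumes the retargeted frame has already been resampled to the source resolution and ignores entropy coding of the resulting bitstream.

```python
import numpy as np

def encode_enhancement(source, retargeted_upsampled, degree=3):
    """Fit a polynomial (a simple non-linear function) that predicts source
    pixel values from retargeted pixel values, and keep the residual."""
    coeffs = np.polyfit(retargeted_upsampled.ravel(), source.ravel(), degree)
    residual = source - np.polyval(coeffs, retargeted_upsampled)
    return coeffs, residual

def decode_enhancement(retargeted_upsampled, coeffs, residual):
    """Reconstruct the source frame from the retargeted frame, the function
    coefficients, and the residual."""
    return np.polyval(coeffs, retargeted_upsampled) + residual

source = np.random.rand(64, 64)
retargeted = np.clip(source ** 0.8 + 0.02 * np.random.randn(64, 64), 0, 1)  # toy base layer
coeffs, residual = encode_enhancement(source, retargeted)
print(np.allclose(decode_enhancement(retargeted, coeffs, residual), source))
```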
  • Patent number: 8502815
    Abstract: We present a method for predictive compression of time-consistent 3D mesh sequences supporting and exploiting scalability. The applied method decomposes each frame of a mesh sequence into layers, which provides a time-consistent multi-resolution representation. Following the predictive coding paradigm, local temporal and spatial dependencies between layers and frames are exploited for layer-wise compression. Prediction is performed vertex-wise from coarse to fine layers, exploiting the motion of already encoded neighboring vertices to predict the current vertex location. Consequently, successive layer-wise decoding allows frames to be reconstructed with increasing levels of detail.
    Type: Grant
    Filed: April 18, 2008
    Date of Patent: August 6, 2013
    Assignee: Gottfried Wilhelm Leibniz Universitat Hannover
    Inventors: Nikolce Stefanoski, Jörn Ostermann, Patrick Klie
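A small Python sketch of the vertex-wise prediction described in the abstract above: a vertex's position in the current frame is predicted from its previous-frame position plus the average motion of already-encoded neighbors, so only a small correction vector needs to be coded.

```python
import numpy as np

def predict_vertex(prev_frame, curr_frame, vertex, encoded_neighbors):
    """Predict one vertex of the current frame from its previous-frame position
    plus the average motion of already-encoded neighbors; the small correction
    vector is what a coder would actually transmit."""
    if encoded_neighbors:
        motion = np.mean(curr_frame[encoded_neighbors] - prev_frame[encoded_neighbors], axis=0)
    else:
        motion = np.zeros(3)
    prediction = prev_frame[vertex] + motion
    correction = curr_frame[vertex] - prediction    # residual to be encoded
    return prediction, correction

prev_frame = np.random.rand(100, 3)                              # vertex positions, frame t-1
curr_frame = prev_frame + np.array([0.02, 0.0, 0.01])            # rigid motion in frame t
prediction, correction = predict_vertex(prev_frame, curr_frame, vertex=42,
                                         encoded_neighbors=[40, 41, 43])
print(np.linalg.norm(correction))                                # ~0 for rigid motion
```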
  • Publication number: 20130057644
    Abstract: Techniques are disclosed for generating autostereoscopic video content. A multiscopic video frame is received that includes a first image and a second image. The first and second images are analyzed to determine a set of image characteristics. A mapping function is determined based on the set of image characteristics. At least a third image is generated based on the mapping function and added to the multiscopic video frame.
    Type: Application
    Filed: August 31, 2012
    Publication date: March 7, 2013
    Applicant: DISNEY ENTERPRISES, INC.
    Inventors: Nikolce Stefanoski, Aljoscha Smolic, Manuel Lang, Miquel À. Farré, Alexander Hornung, Pedro Christian Espinosa Fricke, Oliver Wang
  • Publication number: 20120262444
    Abstract: We present a method for predictive compression of time-consistent 3D mesh sequences supporting and exploiting scalability. The applied method decomposes each frame of a mesh sequence into layers, which provides a time-consistent multi-resolution representation. Following the predictive coding paradigm, local temporal and spatial dependencies between layers and frames are exploited for layer-wise compression. Prediction is performed vertex-wise from coarse to fine layers, exploiting the motion of already encoded neighboring vertices to predict the current vertex location. Consequently, successive layer-wise decoding allows frames to be reconstructed with increasing levels of detail.
    Type: Application
    Filed: April 18, 2008
    Publication date: October 18, 2012
    Applicant: GOTTFRIED WILHELM LEIBNIZ UNIVERSITAT HANNOVER
    Inventors: Nikolce Stefanoski, Jörn Ostermann, Patrick Klie
  • Publication number: 20110194024
    Abstract: Systems, methods and articles of manufacture are disclosed for performing scalable video coding. In one embodiment, non-linear functions are used to predict source video data using retargeted video data. Differences may be determined between the predicted video data and the source video data. The retargeted video data, the non-linear functions, and the differences may be jointly encoded into a scalable bitstream. The scalable bitstream may be transmitted and selectively decoded to produce output video for one of a plurality of predefined target platforms.
    Type: Application
    Filed: February 8, 2010
    Publication date: August 11, 2011
    Inventors: Nikolce STEFANOSKI, Aljosa SMOLIC, Yongzhe WANG, Manuel LANG, Alexander HORNUNG, Markus GROSS
  • Publication number: 20100150232
    Abstract: A method of concealing a packet loss during video decoding is provided. An input stream having a plurality of network abstraction layer (NAL) units is received. A loss of a NAL unit in a group of pictures in the input stream is detected. A valid NAL unit order is output from the available NAL units. The NAL unit order is received by a video coding layer (VCL), and data is output.
    Type: Application
    Filed: October 31, 2007
    Publication date: June 17, 2010
    Applicant: GOTTFRIED WILHELM LEIBNIZ UNIVERSITAT HANNOVER
    Inventors: Dieu Thanh Nguyen, Bernd Edler, Jörn Ostermann, Nikolce Stefanoski
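A toy Python sketch of the concealment idea in the abstract above, using a hypothetical in-memory representation of NAL units: detect a gap in the group of pictures and emit a unit order the video coding layer can still decode, dropping units that would reference the lost frame until the next IDR picture.

```python
def conceal_loss(nal_units):
    """Detect a missing frame inside a group of pictures and emit a NAL-unit
    order a video coding layer decoder can still consume: units that would
    reference the lost frame are dropped until the next IDR picture.
    The dict-based unit records are illustrative, not parsed bitstream syntax."""
    output, expected, skipping = [], 0, False
    for unit in nal_units:
        if unit["type"] == "IDR":
            expected, skipping = unit["frame_num"], False
        if unit["frame_num"] > expected:       # gap detected: a unit was lost
            skipping = True
        if not skipping:
            output.append(unit)
            expected = unit["frame_num"] + 1
    return output

gop = [
    {"frame_num": 0, "type": "IDR"},
    {"frame_num": 1, "type": "P"},
    # frame_num 2 was lost in transmission
    {"frame_num": 3, "type": "P"},
    {"frame_num": 4, "type": "P"},
    {"frame_num": 5, "type": "IDR"},   # decoding can safely resume here
]
print([u["frame_num"] for u in conceal_loss(gop)])   # [0, 1, 5]
```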