Patents by Inventor Nikolce Stefanoski
Nikolce Stefanoski has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20160373795
Abstract: There is provided a server for providing an interactive broadcast. The server includes a memory configured to store a story manager including an event controller, a story controller and an action processor, and a hardware processor configured to execute the story manager. The story manager is configured to provide, using the event controller, an event based on a control script, story elements metadata, one or more user performance analyses, and one or more user preferences. The story manager is also configured to generate, using the story controller, an action command based on the event received from the event controller and a story state. The story manager is further configured to determine, using the action processor, an action corresponding to the action command for initiating one or more control processes for distributing the interactive broadcast.
Type: Application
Filed: August 18, 2015
Publication date: December 22, 2016
Inventors: Nikolce Stefanoski, Aljoscha Smolic
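The abstract's three-stage pipeline (event controller, story controller, action processor) can be sketched as follows. This is a minimal illustration of the data flow only; every class, method, and field name here is invented, not from the patent.

```python
from dataclasses import dataclass

# Hypothetical sketch of the story manager's three stages:
# event controller -> story controller -> action processor.

@dataclass
class Event:
    name: str

class EventController:
    def __init__(self, control_script, metadata, analyses, preferences):
        # the patent derives events from all four inputs;
        # this sketch only consumes the control script
        self.script = list(control_script)

    def next_event(self):
        return Event(self.script.pop(0))

class StoryController:
    def __init__(self, story_state):
        self.story_state = story_state

    def action_command(self, event):
        # combine the incoming event with the current story state
        return f"{self.story_state}:{event.name}"

class ActionProcessor:
    def process(self, command):
        # map the command to a control process for distributing
        # the interactive broadcast (represented here as a dict)
        return {"control_process": "distribute", "command": command}
```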
-
Publication number: 20160353164
Abstract: Novel systems and methods are described for creating, compressing, and distributing video or image content graded for a plurality of displays with different dynamic ranges. In implementations, the created content is “continuous dynamic range” (CDR) content—a novel representation of pixel luminance as a function of display dynamic range. The creation of the CDR content includes grading a source content for a minimum dynamic range and a maximum dynamic range, and defining a luminance of each pixel of an image or video frame of the source content as a continuous function between the minimum and the maximum dynamic ranges. In additional implementations, a novel graphical user interface for creating and editing the CDR content is described.
Type: Application
Filed: September 22, 2015
Publication date: December 1, 2016
Inventors: Aljoscha Smolic, Alexandre Chapiro, Simone Croci, Tunc Ozan Aydin, Nikolce Stefanoski, Markus Gross
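The core CDR idea — one pixel's luminance expressed as a continuous function between a minimum-range grade and a maximum-range grade — can be sketched per pixel. The abstract only says the function is continuous; linear interpolation between the two grades is an assumption made for this sketch.

```python
def cdr_luminance(l_min_grade, l_max_grade, r_min, r_max, r_display):
    """Luminance of one pixel as a continuous function of the target
    display's dynamic range (sketch; linear interpolation between the
    two grades is an assumption, not specified by the abstract)."""
    # clamp the target range to the graded interval
    r = max(r_min, min(r_max, r_display))
    t = (r - r_min) / (r_max - r_min)
    # blend the minimum-range and maximum-range graded luminances
    return (1.0 - t) * l_min_grade + t * l_max_grade
```

A display whose dynamic range falls between the two grades then receives a luminance interpolated for exactly that range, rather than one of a few fixed grades.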
-
Publication number: 20160323556
Abstract: A video coding pipeline is provided that can accommodate high dynamic range (HDR) and wide color gamut (WCG) content at a fixed bitrate. The video coding pipeline relies on separate chromaticity- and luminance-specific transforms in order to process image content. Image content may be converted into a nearly perceptually uniform color space for coding in constant luminance. Moreover, chromaticity transforms are utilized which reduce coding errors in the chroma components (at the fixed bitrate) by enlarging the distribution of code words for compression.
Type: Application
Filed: September 22, 2015
Publication date: November 3, 2016
Inventors: Daniel Luginbuhl, Tunc Ozan Aydin, Aljosa Smolic, Nikolce Stefanoski
-
Patent number: 9445072
Abstract: Techniques are disclosed for generating autostereoscopic video content. A multiscopic video frame is received that includes a first image and a second image. The first and second images are analyzed to determine a set of image characteristics. A mapping function is determined based on the set of image characteristics. At least a third image is generated based on the mapping function and added to the multiscopic video frame.
Type: Grant
Filed: August 31, 2012
Date of Patent: September 13, 2016
Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
Inventors: Nikolce Stefanoski, Aljoscha Smolic, Manuel Lang, Miquel À. Farré, Alexander Hornung, Pedro Christian Espinosa Fricke, Oliver Wang
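The "third image from a mapping function of two images" step can be illustrated with a deliberately simplified stand-in: a per-pixel blend of the two input views. The patent's mapping function is derived from analyzed image characteristics and is far more sophisticated; the blend below only shows where such a function would sit in the pipeline.

```python
def synthesize_view(left, right, alpha=0.5):
    """Generate an intermediate third image from two views of a
    multiscopic frame. A per-pixel blend is a toy stand-in for the
    patent's characteristic-derived mapping function."""
    return [[(1.0 - alpha) * l + alpha * r for l, r in zip(row_l, row_r)]
            for row_l, row_r in zip(left, right)]
```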
-
Publication number: 20160261927
Abstract: A device and method for receiving video content, generating at least two overlays for the video content, generating an information message containing information enabling a receiver of the video content and of the at least two overlays to selectively display or hide the generated overlays, and transmitting, using a multi-stream transmission including a primary stream and auxiliary streams, the information message, the video content in the primary stream, and the at least two overlays in the auxiliary streams.
Type: Application
Filed: March 31, 2016
Publication date: September 8, 2016
Inventors: Aljoscha Smolic, Nikolce Stefanoski, Oliver Wang
-
Publication number: 20160225342
Abstract: Methods and systems for calibrating devices reproducing high-dimensional data, such as calibrating High Dynamic Range (HDR) displays that reproduce chromatic data. Methods include mapping input data into calibrated data using calibration information retrieved from spatial data structures that encode a calibration function. The calibration function may be represented by any multidimensional scattered-data interpolation method, such as thin-plate splines. To efficiently represent and access the calibration information at runtime, the calibration function is recursively sampled based on a guidance dataset. In an embodiment, an HDR display may be adaptively calibrated using a dynamic color guidance dataset and dynamic spatial data structures.
Type: Application
Filed: February 2, 2015
Publication date: August 4, 2016
Inventors: Aljosa Smolic, Nikolce Stefanoski, Tunc Ozan Aydin, Jing Liu, Anselm Grundhofer
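The basic pattern — sample an expensive calibration function offline, then map inputs through the stored samples at runtime — can be sketched in one dimension. The patent uses multidimensional spatial data structures and scattered-data interpolants such as thin-plate splines; the piecewise-linear 1-D lookup below is a simplified stand-in.

```python
import bisect

def calibrate(value, samples):
    """Map an input value through (input, output) calibration samples
    by piecewise-linear interpolation. A 1-D stand-in for the patent's
    spatial data structures encoding a scattered-data interpolant
    (e.g. thin-plate splines); 'samples' must be sorted by input."""
    xs = [s[0] for s in samples]
    ys = [s[1] for s in samples]
    # clamp outside the sampled range
    if value <= xs[0]:
        return ys[0]
    if value >= xs[-1]:
        return ys[-1]
    # locate the bracketing samples and interpolate between them
    i = bisect.bisect_right(xs, value)
    t = (value - xs[i - 1]) / (xs[i] - xs[i - 1])
    return (1.0 - t) * ys[i - 1] + t * ys[i]
```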
-
Patent number: 9378543
Abstract: Approaches are described for tone-mapping an image frame in a sequence of image frames. A tone-mapping pipeline applies a spatiotemporal filter to each pixel of the image frame, based on a forward optical flow and a backward optical flow, to produce a base layer image. The tone-mapping pipeline applies a temporal filter to each pixel of the image frame, based on the forward and backward optical flows, to produce a temporally filtered frame. The tone-mapping pipeline produces a detail layer image based on the base layer image and the temporally filtered frame. The tone-mapping pipeline applies a tone curve to the base and detail layer images to produce a tone-mapped base and detail layer image, respectively, and combines the tone-mapped base and detail layer images to produce a tone-mapped image frame.
Type: Grant
Filed: July 28, 2014
Date of Patent: June 28, 2016
Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
Inventors: Tunc Ozan Aydin, Simone Croci, Aljosa Aleksej Andrej Smolic, Nikolce Stefanoski, Markus Gross
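The base/detail decomposition at the heart of this pipeline can be sketched on a 1-D signal: filter the frame to get a base layer, take the residual as the detail layer, tone-map, and recombine. A box filter stands in for the patent's flow-guided spatiotemporal filter, and the tone curve is applied only to the base layer here (the patent applies curves to both layers); both simplifications are assumptions of this sketch.

```python
def box_filter(signal, radius=1):
    """Simple box filter standing in for the flow-guided
    spatiotemporal filter of the patent."""
    out = []
    n = len(signal)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def tone_map(frame, curve=lambda v: v ** 0.5):
    """Decompose a 1-D 'frame' into base + detail layers, compress the
    base layer with a tone curve, and recombine (simplified sketch)."""
    base = box_filter(frame)                       # base layer
    detail = [f - b for f, b in zip(frame, base)]  # detail layer
    # tone-map the base layer and add the preserved detail back
    return [curve(max(b, 0.0)) + d for b, d in zip(base, detail)]
```

On a flat region the detail layer is zero and only the tone curve acts; edges and texture survive in the detail layer, which is the point of the decomposition.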
-
Patent number: 9361679
Abstract: Approaches are described for filtering a first image frame in a sequence of image frames. A tone-mapping pipeline applies a spatiotemporal filter to each pixel of the first image frame, based on a forward optical flow and a backward optical flow, to produce a base layer image. The tone-mapping pipeline applies a temporal filter to each pixel of the first image frame, based on the forward optical flow and the backward optical flow, to produce a temporally filtered frame. The tone-mapping pipeline produces a detail layer image based on the base layer image and the temporally filtered frame.
Type: Grant
Filed: July 28, 2014
Date of Patent: June 7, 2016
Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
Inventors: Tunc Ozan Aydin, Simone Croci, Aljosa Aleksej Andrej Smolic, Nikolce Stefanoski, Markus Gross
-
Publication number: 20160027161
Abstract: Approaches are described for filtering a first image frame in a sequence of image frames. A tone-mapping pipeline applies a spatiotemporal filter to each pixel of the first image frame, based on a forward optical flow and a backward optical flow, to produce a base layer image. The tone-mapping pipeline applies a temporal filter to each pixel of the first image frame, based on the forward optical flow and the backward optical flow, to produce a temporally filtered frame. The tone-mapping pipeline produces a detail layer image based on the base layer image and the temporally filtered frame.
Type: Application
Filed: July 28, 2014
Publication date: January 28, 2016
Inventors: Tunc Ozan Aydin, Simone Croci, Aljosa Aleksej Andrej Smolic, Nikolce Stefanoski, Markus Gross
-
Publication number: 20160027160
Abstract: Approaches are described for tone-mapping an image frame in a sequence of image frames. A tone-mapping pipeline applies a spatiotemporal filter to each pixel of the image frame, based on a forward optical flow and a backward optical flow, to produce a base layer image. The tone-mapping pipeline applies a temporal filter to each pixel of the image frame, based on the forward and backward optical flows, to produce a temporally filtered frame. The tone-mapping pipeline produces a detail layer image based on the base layer image and the temporally filtered frame. The tone-mapping pipeline applies a tone curve to the base and detail layer images to produce a tone-mapped base and detail layer image, respectively, and combines the tone-mapped base and detail layer images to produce a tone-mapped image frame.
Type: Application
Filed: July 28, 2014
Publication date: January 28, 2016
Inventors: Tunc Ozan Aydin, Simone Croci, Aljosa Aleksej Andrej Smolic, Nikolce Stefanoski, Markus Gross
-
Publication number: 20150135212
Abstract: A method including receiving video of an event; generating an overlay for the video; generating an information message containing information enabling a receiver of the video and the overlay to selectively display or hide the overlay; and transmitting the video, the overlay, and the information message. The video is transmitted in the primary stream of a multi-stream transmission including a primary stream and one or more auxiliary streams. The overlay is transmitted in a first one of the auxiliary streams.
Type: Application
Filed: October 9, 2014
Publication date: May 14, 2015
Inventors: Aljoscha Smolic, Nikolce Stefanoski, Oliver Wang
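One possible shape for the information message — telling the receiver which auxiliary stream carries the overlay so it can be shown or hidden — is sketched below. The patent does not specify a wire format; every field name here is invented.

```python
def make_overlay_message(primary_stream_id, overlay_stream_ids):
    """Build a hypothetical information message mapping overlay
    streams to the primary video stream. Field names are invented;
    the patent only specifies what the message must enable."""
    return {
        "primary": primary_stream_id,
        "overlays": [
            {"stream": sid, "selectable": True}  # receiver may show/hide
            for sid in overlay_stream_ids
        ],
    }
```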
-
Publication number: 20140307048
Abstract: Techniques are disclosed for view generation based on a video coding scheme. A bitstream is received that is encoded based on the video coding scheme. The bitstream includes video, quantized warp map offsets, and a message of a message type specified by the video coding scheme. Depth samples decoded from the bitstream are interpreted as quantized warp map offsets, based on a first syntax element contained in the message. Warp maps are generated based on the quantized warp map offsets and a second syntax element contained in the message. Views are generated using image-domain warping, based on the video and the warp maps.
Type: Application
Filed: December 26, 2013
Publication date: October 16, 2014
Applicant: Disney Enterprises, Inc.
Inventors: Aljosa Smolic, Nikolce Stefanoski
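The warp-map reconstruction step can be sketched in one dimension: decoded "depth" samples are reinterpreted as quantized offsets from a regular sample grid, dequantized, and added back to the grid positions. The scale factor standing in for the message's second syntax element is an assumption of this sketch.

```python
def warp_map_from_offsets(q_offsets, scale, width):
    """Reconstruct one row of warp-map x-coordinates from quantized
    offsets (sketch). 'scale' stands in for the dequantization
    parameter that the message's syntax elements would supply."""
    # each offset displaces the corresponding regular grid position
    return [x + q * scale for x, q in zip(range(width), q_offsets)]
```

The resulting per-pixel coordinates then drive image-domain warping of the decoded video to synthesize a new view.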
-
Patent number: 8514932
Abstract: Systems, methods and articles of manufacture are disclosed for performing scalable video coding. In one embodiment, non-linear functions are used to predict source video data using retargeted video data. Differences may be determined between the predicted video data and the source video data. The retargeted video data, the non-linear functions, and the differences may be jointly encoded into a scalable bitstream. The scalable bitstream may be transmitted and selectively decoded to produce output video for one of a plurality of predefined target platforms.
Type: Grant
Filed: February 8, 2010
Date of Patent: August 20, 2013
Assignee: Disney Enterprises, Inc.
Inventors: Nikolce Stefanoski, Aljosa Smolic, Yongzhe Wang, Manuel Lang, Alexander Hornung, Markus Gross
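The predict-and-residual structure of the abstract can be sketched on sample arrays: predict the source from the retargeted data through a non-linear function, keep only the differences, and invert the process at the decoder. The encoding of the function and residual into an actual bitstream is omitted.

```python
def encode_scalable(source, retargeted, predict):
    """Enhancement-layer residual: predict source samples from
    retargeted samples via a non-linear function and keep the
    differences (sketch of the scheme in the abstract)."""
    predicted = [predict(r) for r in retargeted]
    return [s - p for s, p in zip(source, predicted)]

def decode_scalable(retargeted, residual, predict):
    """Reconstruct the source by re-applying the prediction and
    adding back the transmitted differences."""
    return [predict(r) + d for r, d in zip(retargeted, residual)]
```

A decoder for a low-end platform can stop after the retargeted base layer, while a full decoder applies the function and residual to recover the source — which is what makes the bitstream selectively decodable.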
-
Patent number: 8502815
Abstract: We present a method for predictive compression of time-consistent 3D mesh sequences that supports and exploits scalability. The method decomposes each frame of a mesh sequence into layers, which provides a time-consistent multi-resolution representation. Following the predictive coding paradigm, local temporal and spatial dependencies between layers and frames are exploited for layer-wise compression. Prediction is performed vertex-wise from coarse to fine layers, exploiting the motion of already encoded neighboring vertices to predict the current vertex location. Consequently, successive layer-wise decoding allows frames to be reconstructed with increasing levels of detail.
Type: Grant
Filed: April 18, 2008
Date of Patent: August 6, 2013
Assignee: Gottfried Wilhelm Leibniz Universität Hannover
Inventors: Nikolce Stefanoski, Jörn Ostermann, Patrick Klie
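The vertex-wise predictor described in the abstract can be sketched directly: a vertex's current position is predicted from its previous position plus the motion of already-decoded neighboring vertices. Averaging the neighbor motions is one simple choice of predictor, assumed here for illustration.

```python
def predict_vertex(prev_pos, neighbor_prev, neighbor_cur):
    """Predict a vertex position in the current frame from its
    previous-frame position plus the average motion of already
    encoded/decoded neighbor vertices (sketch; averaging is an
    assumed predictor). Positions are (x, y, z) tuples."""
    motions = [(c[0] - p[0], c[1] - p[1], c[2] - p[2])
               for p, c in zip(neighbor_prev, neighbor_cur)]
    n = len(motions)
    avg = tuple(sum(m[i] for m in motions) / n for i in range(3))
    # only the (small) difference between prediction and the true
    # position would need to be encoded
    return tuple(prev_pos[i] + avg[i] for i in range(3))
```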
-
Publication number: 20130057644
Abstract: Techniques are disclosed for generating autostereoscopic video content. A multiscopic video frame is received that includes a first image and a second image. The first and second images are analyzed to determine a set of image characteristics. A mapping function is determined based on the set of image characteristics. At least a third image is generated based on the mapping function and added to the multiscopic video frame.
Type: Application
Filed: August 31, 2012
Publication date: March 7, 2013
Applicant: Disney Enterprises, Inc.
Inventors: Nikolce Stefanoski, Aljoscha Smolic, Manuel Lang, Miquel À. Farré, Alexander Hornung, Pedro Christian Espinosa Fricke, Oliver Wang
-
Publication number: 20120262444
Abstract: We present a method for predictive compression of time-consistent 3D mesh sequences that supports and exploits scalability. The method decomposes each frame of a mesh sequence into layers, which provides a time-consistent multi-resolution representation. Following the predictive coding paradigm, local temporal and spatial dependencies between layers and frames are exploited for layer-wise compression. Prediction is performed vertex-wise from coarse to fine layers, exploiting the motion of already encoded neighboring vertices to predict the current vertex location. Consequently, successive layer-wise decoding allows frames to be reconstructed with increasing levels of detail.
Type: Application
Filed: April 18, 2008
Publication date: October 18, 2012
Applicant: Gottfried Wilhelm Leibniz Universität Hannover
Inventors: Nikolce Stefanoski, Jörn Ostermann, Patrick Klie
-
Publication number: 20110194024
Abstract: Systems, methods and articles of manufacture are disclosed for performing scalable video coding. In one embodiment, non-linear functions are used to predict source video data using retargeted video data. Differences may be determined between the predicted video data and the source video data. The retargeted video data, the non-linear functions, and the differences may be jointly encoded into a scalable bitstream. The scalable bitstream may be transmitted and selectively decoded to produce output video for one of a plurality of predefined target platforms.
Type: Application
Filed: February 8, 2010
Publication date: August 11, 2011
Inventors: Nikolce Stefanoski, Aljosa Smolic, Yongzhe Wang, Manuel Lang, Alexander Hornung, Markus Gross
-
Publication number: 20100150232
Abstract: A method of concealing a packet loss during video decoding is provided. An input stream having a plurality of network abstraction layer (NAL) units is received. A loss of a NAL unit in a group of pictures in the input stream is detected. A valid NAL unit order is output from the available NAL units. The NAL unit order is received by a video coding layer (VCL), and data is output.
Type: Application
Filed: October 31, 2007
Publication date: June 17, 2010
Applicant: Gottfried Wilhelm Leibniz Universität Hannover
Inventors: Dieu Thanh Nguyen, Bernd Edler, Jörn Ostermann, Nikolce Stefanoski
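The "detect a loss, then output a valid order from the available NAL units" step can be sketched over one group of pictures: a gap in the expected picture sequence marks the loss, and repeating the last correctly received picture is one common way to fill it. The abstract only requires that a valid order be produced; the repeat-last concealment choice is an assumption of this sketch.

```python
def conceal_order(received, gop_size):
    """Build a valid decoding order for one GOP from the received
    (frame_number, payload) pairs. A missing picture is concealed by
    repeating the last correctly received one (an assumed concealment
    strategy; the abstract only requires a valid NAL unit order)."""
    have = dict(received)  # frame_number -> NAL unit payload
    out, last = [], None
    for n in range(gop_size):
        if n in have:
            last = have[n]  # correctly received unit
        out.append(last)    # gap -> repeat last good unit (None if
                            # the very first picture was lost)
    return out
```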