Patents by Inventor Ilya Klebanov

Ilya Klebanov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8024767
    Abstract: A method and apparatus for storing a compressed video stream or an uncompressed video stream is disclosed. The uncompressed video stream may be ZOOM VIDEO data. The compressed video stream may be TRANSPORT STREAM data from a High Definition Television (HDTV) broadcast. A video graphics adapter is configured to properly receive either of the two types of video data. The received data and control signals are monitored to provide a second set of control and data signals, which are used by a packer and a window control to provide data of a predetermined width, along with control signals, to an address generator. The data is buffered within a graphics memory such as a frame buffer. The graphics memory can be written to system memory when full, or accessed by the system memory controller during the fill operation if a multi-ported memory is used.
    Type: Grant
    Filed: September 14, 1999
    Date of Patent: September 20, 2011
    Assignee: ATI Technologies ULC
    Inventors: Ilya Klebanov, Edward G. Callway, Chun Wang, Ivan W. Y. Yang
  • Patent number: 8004617
    Abstract: A device for rapidly instituting an active mode of a digital-television enabled system, the system including a first, volatile memory configured to load and store software instructions, includes: an input configured to receive first digital audio and video information; a first output configured to convey second audio and video information, regarding the first audio and video information, toward a display; at least one second output configured to convey commands to, and receive information from, the first memory; and a processor configured to perform functions in accordance with software instructions stored in the first and second memories and to cause the first memory to load software instructions for provision to the processor such that first instructions for processing at least one of the first audio information and the first video information are loaded and stored by the first memory with a higher priority than second instructions for performing other functionality.
    Type: Grant
    Filed: August 30, 2006
    Date of Patent: August 23, 2011
    Assignee: ATI Technologies ULC
    Inventors: Ilya Klebanov, Kwok P. Hui
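The abstract above (patent 8004617) describes loading the instructions needed for audio/video processing into the volatile memory with a higher priority than instructions for other functionality. Below is a minimal, hedged sketch of that priority-ordered loading idea in Python; the module names and the load_module callable are illustrative assumptions, not details from the patent.

```python
# Hedged sketch: modules critical to audio/video processing are loaded into
# volatile memory first, so playback can begin while lower-priority
# functionality is still loading. Names and interfaces here are hypothetical.
AV_CRITICAL = ["video_decode", "audio_decode", "display_output"]
OTHER = ["program_guide", "network_stack", "ui_settings"]

def boot_load_order(load_module):
    """load_module: a callable that copies one named module into volatile memory."""
    for name in AV_CRITICAL + OTHER:  # A/V instructions first, then everything else
        load_module(name)
```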
  • Publication number: 20110149040
    Abstract: A video processing device may generate and/or capture a plurality of view sequences of video frames, decimate at least some of the plurality of view sequences, and may generate a three-dimensional (3D) video stream comprising the plurality of view sequences based on that decimation. The decimation may be achieved by converting one or more of the plurality of view sequences from progressive to interlaced video. The interlacing may be performed by removing top or bottom fields in each frame of those one or more view sequences during the conversion to interlaced video. The removed fields may be selected based on corresponding conversion to interlaced video of one or more corresponding view sequences. The video processing device may determine bandwidth limitations existing during direct and/or indirect transfer or communication of the generated 3D video stream. The decimation may be performed based on this determination of bandwidth limitations.
    Type: Application
    Filed: February 2, 2010
    Publication date: June 23, 2011
    Inventors: Ilya Klebanov, Xuemin Chen, Samir Hulyalkar, Marcus Kellerman
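Publication 20110149040 above describes decimating view sequences by converting progressive frames to interlaced fields and keeping complementary fields across corresponding views. The Python/NumPy sketch below illustrates that idea under simple assumptions (8-bit frames, a fixed choice of which field each view keeps); it is not the publication's exact method.

```python
# Illustrative decimation: each progressive view frame is reduced to a single
# field, and the field kept for one view complements the field kept for the
# corresponding view, roughly halving the data carried in the 3D stream.
import numpy as np

def decimate_to_field(frame: np.ndarray, keep_top: bool) -> np.ndarray:
    """Keep only the top (even rows) or bottom (odd rows) field of a frame."""
    return frame[0::2] if keep_top else frame[1::2]

def decimate_stereo_pair(left: np.ndarray, right: np.ndarray):
    """Decimate a left/right pair so the two retained fields are complementary."""
    return decimate_to_field(left, keep_top=True), decimate_to_field(right, keep_top=False)

# Example: a 1080-line progressive pair becomes two 540-line fields.
left = np.zeros((1080, 1920), dtype=np.uint8)
right = np.zeros((1080, 1920), dtype=np.uint8)
l_field, r_field = decimate_stereo_pair(left, right)
assert l_field.shape == (540, 1920) and r_field.shape == (540, 1920)
```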
  • Publication number: 20110149020
    Abstract: A media player may read three-dimensional (3D) video data comprising a plurality of view sequences of frames or fields from a media storage device, and may decimate one or more of the view sequences to enable transferring the video data to a display device. The media player may determine operational parameter(s) and/or transfer limitation(s) of a connecting subsystem used to transfer the video data to the display device. The decimation may be performed based on this determination of transfer limitation(s). The decimation may be performed temporally and/or spatially. The plurality of view sequences may comprise sequences of stereoscopic left and right view reference frames or fields. The decimation may be performed such that the removed data for each view sequence may be reconstructed, after reception, based on remaining data in the same view sequence and/or video data of other corresponding view sequences.
    Type: Application
    Filed: January 19, 2010
    Publication date: June 23, 2011
    Inventors: Ilya Klebanov, Xuemin Chen, Samir Hulyalkar, Marcus Kellerman
  • Publication number: 20110149021
    Abstract: A video processing device may enhance sharpness of one or more of a plurality of view sequences extracted from a three-dimensional (3D) input video stream. The plurality of extracted view sequences may comprise stereoscopic left and right view sequences of reference fields or frames. The sharpness enhancement processing may be performed based on sharpness related video information, which may be derived from other sequences in the plurality of view sequences, user input, embedded control data, and/or preconfigured parameters. The sharpness related video information may enable classifying images in the 3D input video stream into different regions, and may comprise depth related data and/or point-of-focus related data. Sharpness enhancement processing may be performed variably on background and foreground regions, and/or on in-focus or out-of-focus regions. A 3D output video stream for display may be generated from the plurality of view sequences based on the sharpness processing.
    Type: Application
    Filed: February 2, 2010
    Publication date: June 23, 2011
    Inventors: Samir Hulyalkar, Ilya Klebanov, Xuemin Chen, Marcus Kellerman
  • Publication number: 20110150355
    Abstract: A video processing device may enhance contrast of one or more of a plurality of view sequences extracted from a three-dimensional (3D) input video stream based on contrast information derived from other sequences in the plurality of view sequences. The view sequences that are subjected to contrast enhancement and/or whose contrast information may be utilized during contrast enhancement may be selected based on one or more selection criteria, which may comprise the compression bitrate utilized during communication of the input video stream. The video processing device may also perform noise reduction on one or more of the plurality of extracted view sequences during contrast enhancement operations. Noise reduction may be performed using digital noise reduction (DNR). The noise reduction may be performed separately and/or independently on each view sequence in the plurality of extracted view sequences.
    Type: Application
    Filed: January 19, 2010
    Publication date: June 23, 2011
    Inventors: Marcus Kellerman, Xuemin Chen, Samir Hulyalkar, Ilya Klebanov
  • Publication number: 20110149019
    Abstract: A video processing device may generate a two-dimensional (2D) output video stream from a three-dimensional (3D) input video stream that comprises a plurality of view sequences. The plurality of view sequences may comprise sequences of stereoscopic left and right reference fields or frames. A view sequence may initially be selected as a base sequence for the 2D output video stream, and the 2D output video stream may be enhanced using video content and/or information from unselected view sequences. The video content and/or information utilized in enhancing the 2D output video stream may comprise depth information, and/or foreground and/or background information. The enhancement of the 2D output video stream may comprise improving depth, contrast, sharpness, and/or rate upconversion using frame and/or field based interpolation of images in the 2D output video stream.
    Type: Application
    Filed: January 19, 2010
    Publication date: June 23, 2011
    Inventors: Marcus Kellerman, Xuemin Chen, Samir Hulyalkar, Ilya Klebanov
  • Publication number: 20110149028
    Abstract: 3D glasses may communicate with a video device that is used for playback of 3D video content to determine an operating mode used during the 3D video content playback and to synchronize viewing operations via the 3D glasses during the 3D video content playback based on the determined operating mode. Exemplary operating modes include a polarization mode and a shutter mode. The 3D video content may comprise stereoscopic left and right views. Polarization of the 3D glasses may be synchronized to polarization of the right and left views in polarization mode, whereas shuttering of the 3D glasses may be synchronized to the frequency of alternating rendering of right and left views in shutter mode. Synchronization of the 3D glasses may be performed prior to start of the 3D video content playback and/or dynamically during the 3D video content playback. The 3D glasses may communicate with the video device via wireless interfaces.
    Type: Application
    Filed: February 2, 2010
    Publication date: June 23, 2011
    Inventors: Ilya Klebanov, Xuemin Chen, Samir Hulyalkar, Marcus Kellerman
  • Publication number: 20110149022
    Abstract: A video processing device may extract a plurality of view sequences from a three-dimensional (3D) input video stream and generate a plurality of graphics sequences that correspond to local graphics content. Each of the plurality of graphics sequences may be blended with a corresponding view sequence from the extracted plurality of view sequences to generate a plurality of combined sequences. The local graphics content may comprise on-screen display (OSD) graphics, and may initially be generated as two-dimensional (2D) graphics. The plurality of graphics sequences may be generated from the local graphics content based on, for example, video information for the input 3D video stream, user input, and/or preconfigured conversion data. After blending the view sequences with the graphics sequences, the video processing device may generate a 3D output video stream. The generated 3D output video stream may then be transformed into a 2D video stream if 3D playback is not available.
    Type: Application
    Filed: February 2, 2010
    Publication date: June 23, 2011
    Inventors: Ilya Klebanov, Xuemin Chen, Samir Hulyalkar, Marcus Kellerman
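Publication 20110149022 above describes blending locally generated graphics sequences (such as OSD graphics) with the corresponding extracted view sequences. The sketch below shows a generic per-view alpha blend in Python/NumPy; the RGBA graphics layout and the assumption that separate left and right OSD planes have already been prepared are illustrative, not taken from the publication.

```python
# Hedged sketch: alpha-blend a per-view RGBA OSD plane over each RGB view frame
# to produce the combined left and right sequences.
import numpy as np

def blend_osd(view: np.ndarray, osd_rgba: np.ndarray) -> np.ndarray:
    """Alpha-blend an RGBA OSD plane over an RGB view frame."""
    alpha = osd_rgba[..., 3:4].astype(np.float32) / 255.0
    mixed = osd_rgba[..., :3].astype(np.float32) * alpha + view.astype(np.float32) * (1.0 - alpha)
    return mixed.astype(np.uint8)

def blend_stereo(left, right, osd_left, osd_right):
    """Blend left/right graphics planes with the corresponding view frames."""
    return blend_osd(left, osd_left), blend_osd(right, osd_right)
```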
  • Publication number: 20110149029
    Abstract: A video processing device may perform pulldown when generating an output video stream that corresponds to a received input 3D video stream. The pulldown may be performed based on determined native characteristics of the received input 3D video stream and display parameters corresponding to the display device used for presenting the generated output video stream. The native characteristics of the received input 3D video stream may comprise film mode, which may be used to determine the capture frame rate. The display parameters may comprise scan mode and/or display frame rate. A left view or a right view frame in every group of frames in the input 3D video stream comprising two consecutive left view frames and corresponding two consecutive right view frames may be duplicated when the input 3D video stream comprises a film mode with a 24 fps capture frame rate and the display device uses 60 Hz progressive scanning.
    Type: Application
    Filed: February 18, 2010
    Publication date: June 23, 2011
    Inventors: Marcus Kellerman, Xuemin Chen, Samir Hulyalkar, Ilya Klebanov
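Publication 20110149029 above covers the case of 24 fps film-mode 3D content on a 60 Hz progressive display: every group of two left and two right frames spans 2/24 s, which corresponds to five 60 Hz display slots, so one left or right frame is duplicated. The Python sketch below illustrates that arithmetic; the interleaving order and the choice of which frame to repeat are assumptions for illustration.

```python
# Hedged pulldown sketch: four captured frames (two stereo pairs at 24 fps)
# fill five display slots at 60 Hz, so one frame is shown twice.

def pulldown_24_to_60(groups):
    """groups: iterable of (L1, L2, R1, R2) tuples; yields the displayed sequence."""
    for l1, l2, r1, r2 in groups:
        # Duplicate the first left frame; the display order chosen here is arbitrary.
        yield from (l1, r1, l1, l2, r2)

displayed = list(pulldown_24_to_60([("L1", "L2", "R1", "R2")]))
assert len(displayed) == 5  # 2/24 s of source maps to five 60 Hz frames
```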
  • Publication number: 20110115883
    Abstract: A 2D and/or 3D video processing device comprising a camera and a display captures images of a viewer as the viewer observes displayed 2D and/or 3D video content in a viewport. Face and/or eye tracking of viewer images is utilized to generate a different viewport. Current and different viewports may comprise 2D and/or 3D video content from a single source or from different sources. The sources of 2D and/or 3D content may be scrolled, zoomed and/or navigated through for generating the different viewport. Content for the different viewport may be processed. Images of a viewer's positions, angles and/or movements of face, facial expression, eyes and/or physical gestures are captured by the camera and interpreted by face and/or eye tracking. The different viewport may be generated for navigating through 3D content and/or for rotating a 3D object. The 2D and/or 3D video processing device communicates via wire, wireless and/or optical interfaces.
    Type: Application
    Filed: November 16, 2009
    Publication date: May 19, 2011
    Inventors: Marcus Kellerman, Xuemin Chen, Samir Hulyalkar, Ilya Klebanov
  • Publication number: 20110096146
    Abstract: A sequential pattern comprising contiguous black frames inserted between left and right 3D video and/or graphics frames may be displayed on an LCD display. The pattern may comprise two or three contiguous left frames followed by contiguous black frames, followed by two or three contiguous right frames followed by contiguous black frames. The left and/or right frames may comprise interpolated frames and/or may be displayed in ascending order. The contiguous black frames are displayed for longer than the liquid crystal response time. 3D shutter glasses are synchronized with the black frames. A left lens transmits light when left frames followed by contiguous black frames are displayed, and a right lens transmits light when right frames followed by contiguous black frames are displayed. A 3D pair of 24 Hz frames or two 3D pairs of 60 Hz frames per pattern are displayed on a 240 Hz display.
    Type: Application
    Filed: October 23, 2009
    Publication date: April 28, 2011
    Inventors: Samir Hulyalkar, Xuemin Chen, Marcus Kellerman, Ilya Klebanov, Sunkwang Hong
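Publication 20110096146 above describes a repeating pattern of left frames, black frames, right frames, and black frames, with one 24 Hz stereo pair shown per pattern on a 240 Hz panel (240 / 24 = 10 display slots). The sketch below builds one such pattern in Python; the specific 3-view/2-black split is just one of the "two or three" options the abstract allows.

```python
# Hedged sketch of one black-frame-insertion pattern for a 240 Hz LCD showing
# a single 24 Hz left/right pair: L L L B B R R R B B (10 slots).
BLACK = "B"

def black_frame_pattern(left, right, n_view=3, n_black=2):
    """Build one display pattern for a single left/right source pair."""
    return [left] * n_view + [BLACK] * n_black + [right] * n_view + [BLACK] * n_black

pattern = black_frame_pattern("L0", "R0")
assert len(pattern) == 240 // 24  # 10 display slots per 24 Hz stereo pair
# Shutter glasses open the left lens during the left-plus-black run and the
# right lens during the right-plus-black run, after the liquid crystal settles.
```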
  • Publication number: 20110096151
    Abstract: A video processing system receives left and right 3D video and/or graphics frames and generates noise reduced left 3D video, right 3D video and/or graphics frames based on parallax compensated left and right frames. Displacement of imagery and/or pixel structures is determined relative to opposite side left and/or right frames. Parallax vectors are determined for parallax compensated left 3D video, right 3D video and/or graphics frames. A search area for displacement may be bounded by parallax limitations. Left 3D frames may be blended with the parallax compensated right 3D frames. Right 3D frames may be blended with the parallax compensated left 3D frames. The left 3D video, right 3D video and/or graphics frames comprise images that are captured, are representative of, and/or are displayed at the same time instant or at different time instants. Motion estimation, motion adaptation and/or motion compensation techniques may be utilized with parallax techniques.
    Type: Application
    Filed: October 23, 2009
    Publication date: April 28, 2011
    Inventors: Samir Hulyalkar, Xuemin Chen, Marcus Kellerman, Ilya Klebanov
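Publication 20110096151 above describes blending a left frame with a parallax-compensated right frame (and vice versa) for noise reduction. The Python/NumPy sketch below is a rough illustration; the whole-pixel horizontal shift, the disparity sign convention, and the fixed averaging weight are assumptions rather than the publication's method.

```python
# Hedged sketch: shift the right frame by a per-pixel horizontal disparity to
# align it with the left view, then average the two frames to reduce noise.
import numpy as np

def parallax_compensate(right: np.ndarray, disparity: np.ndarray) -> np.ndarray:
    """Warp the right frame toward the left view using whole-pixel horizontal shifts."""
    h, w = right.shape[:2]
    cols = np.arange(w)[None, :]
    src_x = np.clip(cols + np.rint(disparity).astype(int), 0, w - 1)
    return right[np.arange(h)[:, None], src_x]

def denoise_left(left: np.ndarray, right: np.ndarray, disparity: np.ndarray, weight: float = 0.5):
    """Blend the left frame with the parallax-compensated right frame."""
    compensated = parallax_compensate(right, disparity)
    mixed = weight * left.astype(np.float32) + (1.0 - weight) * compensated.astype(np.float32)
    return mixed.astype(np.uint8)
```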
  • Publication number: 20110085023
    Abstract: A first video processing device, for example, a set-top-box, receives and decodes left and right video streams and generates left and right graphics streams. The left and right video streams and left and right graphics streams are compressed and wirelessly communicated to a second video processing device, for example, a 3D and/or 2D television. The graphics streams are generated by a graphics processor on the first video processing device utilizing stored and/or received graphics information. The second video processing device wirelessly receives and decompresses the video and graphics. Blending of the left video with graphics and/or blending the right video with graphics may be done prior to wireless communication by the first video processing device or after wireless reception by the second video processing device. The second video processing device displays the blended left video and graphics and/or the blended right video and graphics.
    Type: Application
    Filed: October 13, 2009
    Publication date: April 14, 2011
    Inventors: Samir Hulyalkar, Xuemin Chen, Marcus Kellerman, Ilya Klebanov
  • Publication number: 20110081133
    Abstract: A video receiver receives a compressed 3D video comprising a base view video and an enhancement view video. The video receiver determines a random access that occurs at a two-view misaligned base view random access point (RAP) in order to start decoding activities on the received compressed 3D video based on a corresponding two-view aligned RAP. The corresponding two-view aligned RAP is adjacent to the two-view misaligned base view RAP. Pictures in the received compressed 3D video are buffered for the two-view misaligned base view RAP to be decoded starting from the corresponding two-view aligned RAP. One or more pictures in the enhancement view video are interpolated based on the two-view misaligned base view RAP. The video receiver selects a portion of the buffered pictures to be decoded to facilitate a trick mode in personal video recording (PVR) operations for random access at the two-view misaligned RAP.
    Type: Application
    Filed: October 5, 2009
    Publication date: April 7, 2011
    Inventors: Xuemin Chen, Samir Hulyalkar, Marcus Kellerman, Ilya Klebanov
  • Publication number: 20110080948
    Abstract: A video receiver receives a layered and predicted compressed 3D video comprising a base view video and an enhancement view video. A portion of the pictures in the received compressed 3D video is selected to be decoded for display at an intended pace. Pictures in the received compressed 3D video are generated based on a tier system framework with tiers ordered hierarchically according to corresponding decodability. Each picture in the base view and enhancement view videos belongs to one of the plurality of tiers. A picture in a particular tier does not depend directly or indirectly on pictures in a higher tier. Each tier comprises one or more pictures with the same coding order. The video receiver decodes the pictures with the same coding order in parallel, and adaptively decodes the selected pictures according to corresponding coding layer information. The selected pictures are determined based on a particular display rate.
    Type: Application
    Filed: October 5, 2009
    Publication date: April 7, 2011
    Inventors: Xuemin Chen, Samir Hulyalkar, Marcus Kellerman, Ilya Klebanov
  • Publication number: 20110064220
    Abstract: A video receiver receives a compressed 3D video comprising a base view video and an enhancement view video. The base view video and the enhancement view video are encrypted using the same encryption engine and buffered into corresponding coded data buffers (CDBs), respectively. The buffered base view and enhancement view videos are decrypted using the same decryption engine corresponding to the encryption engine. The decrypted base view and enhancement view videos are decoded for viewing. The video receiver is also operable to encrypt video content of the received compressed 3D video according to corresponding view information and/or coding layer information. The resulting encrypted video content and unencrypted video content of the received compressed 3D video are buffered into corresponding CDBs, respectively. The buffered encrypted video content is decrypted and decoded together with the buffered unencrypted video content of the received compressed 3D video for viewing.
    Type: Application
    Filed: September 16, 2009
    Publication date: March 17, 2011
    Inventors: Xuemin Chen, Samir Hulyalkar, Marcus Kellerman, Ilya Klebanov
  • Publication number: 20110063298
    Abstract: A first 3D graphics and/or 3D video processing device generates left and right view 3D graphics frames comprising 3D content which are communicated to a 3D display device for display. The 3D frames are generated based on a display format utilized by the 3D display device. The first 3D device may comprise a set-top-box and/or computer. The left and/or right 3D graphics frames may be generated based on time sequential display and/or polarizing display. Sub-sampling of 3D graphics frames may be based on odd and even row display polarization patterns and/or checkerboard polarization patterns. Left and right 3D graphics pixels may be blended with video pixels. Left and/or right 3D graphics frames may be displayed sequentially in time. Left and/or right 3D graphics frames may be sub-sampled in complementary pixel patterns, interleaved in a single frame and displayed utilizing varying polarization orientations for left and right pixels.
    Type: Application
    Filed: October 23, 2009
    Publication date: March 17, 2011
    Inventors: Samir Hulyalkar, Xuemin Chen, Marcus Kellerman, Ilya Klebanov
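Publication 20110063298 above mentions sub-sampling left and right graphics frames in complementary pixel patterns and interleaving them into a single frame, for example for a checkerboard-polarized display. The Python/NumPy sketch below shows one way such a checkerboard interleave could look; it is an illustration under those assumptions, not the patent's implementation.

```python
# Hedged sketch: take "left" pixels where (row + column) is even and "right"
# pixels where it is odd, producing a single checkerboard-interleaved frame.
import numpy as np

def checkerboard_interleave(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Interleave two equally sized frames in complementary checkerboard patterns."""
    h, w = left.shape[:2]
    yy, xx = np.indices((h, w))
    mask = (yy + xx) % 2 == 0
    if left.ndim == 3:  # broadcast the mask over the color channels
        mask = mask[..., None]
    return np.where(mask, left, right)
```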
  • Publication number: 20110064262
    Abstract: A video transmitter identifies regions in pictures in a compressed three-dimensional (3D) video comprising a base view video and an enhancement view video. The identified regions are not referenced by other pictures in the compressed 3D video. The identified regions are watermarked. Pictures such as a high layer picture in the base view video and the enhancement view video are identified for watermarking. The identified regions in the base view and/or enhancement view videos are watermarked and multiplexed into a transport stream for transmission. An intended video receiver extracts the base view video, the enhancement view video and corresponding watermark data from the received transport stream. The corresponding extracted watermark data are synchronized with the extracted base view video and the extracted enhancement view video, respectively, for watermark insertion. The resulting base view and enhancement view videos are decoded into a left view video and a right view video, respectively.
    Type: Application
    Filed: September 16, 2009
    Publication date: March 17, 2011
    Inventors: Xuemin Chen, Samir Hulyalkar, Marcus Kellerman, Ilya Klebanov
  • Publication number: 20110063414
    Abstract: A video receiver receives a compressed 3D video comprising a base view video and a residual view video from a video transmitter. The video receiver decodes the received base view video and an enhancement view video of the received compressed 3D video into a left view video and a right view video. Base view pictures are generated selectively based on available memory resources. The residual view video is generated by subtracting base view pictures from corresponding enhancement view pictures. The received base view and residual view videos are buffered for video decoding. Pictures in the buffered residual view video are added to corresponding pictures in the buffered base view video for enhancement view decoding. The left view video and/or the right view video are generated from the resulting decoded base view and enhancement view pictures. A motion vector used for a disparity-predicted macroblock is applied to pre-fetching of adjacent macroblocks.
    Type: Application
    Filed: September 16, 2009
    Publication date: March 17, 2011
    Inventors: Xuemin Chen, Marcus Kellerman, Samir Hulyalkar, Ilya Klebanov
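Publication 20110063414 above describes a residual view video formed by subtracting base view pictures from the corresponding enhancement view pictures, with the receiver adding buffered residual pictures back onto base view pictures to recover the enhancement view. The Python/NumPy sketch below illustrates that subtraction and reconstruction; the 8-bit sample format and the clipping range are illustrative assumptions.

```python
# Hedged sketch of residual-view formation and reconstruction.
import numpy as np

def make_residual(enhancement: np.ndarray, base: np.ndarray) -> np.ndarray:
    """Transmitter side: residual picture = enhancement picture - base picture."""
    return enhancement.astype(np.int16) - base.astype(np.int16)

def reconstruct_enhancement(base: np.ndarray, residual: np.ndarray) -> np.ndarray:
    """Receiver side: add the buffered residual picture back onto the base picture."""
    return np.clip(base.astype(np.int16) + residual, 0, 255).astype(np.uint8)
```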