Patents by Inventor Samir Hulyalkar

Samir Hulyalkar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20140036147
    Abstract: Presented herein are system(s), method(s), and apparatus for providing high resolution frames. In one embodiment, there is a method comprising receiving upscaled frames; motion estimating the upscaled frames; and motion compensating the upscaled frames.
    Type: Application
    Filed: October 14, 2013
    Publication date: February 6, 2014
    Applicant: Broadcom Corporation
    Inventors: Yunwei Jia, Samir Hulyalkar, Steven Hanna, Lee Shu Key
  • Publication number: 20140037205
    Abstract: Inter-color image prediction is based on color grading modeling. Prediction is applied to the efficient coding of images and video signals of high dynamic range. Prediction models may include a color transformation matrix that models hue and saturation color changes and a non-linear function modeling color correction changes. Under the assumption that the color grading process uses slope, offset, and power (SOP) operations, an example non-linear prediction model is presented.
    Type: Application
    Filed: April 13, 2012
    Publication date: February 6, 2014
    Applicant: Dolby Laboratories Licensing Corporation
    Inventors: Guan-Ming Su, Sheng Qu, Hubert Koepfer, Yufei Yuan, Samir Hulyalkar, Walter Gish
  • Publication number: 20140029675
    Abstract: Inter-color image prediction is based on multi-channel multiple regression (MMR) models. Image prediction is applied to the efficient coding of images and video signals of high dynamic range. MMR models may include first-order parameters, second-order parameters, and cross-pixel parameters. MMR models using extension parameters incorporating neighbor pixel relations are also presented. Using minimum mean-square error criteria, closed-form solutions for the prediction parameters are presented for a variety of MMR models.
    Type: Application
    Filed: April 13, 2012
    Publication date: January 30, 2014
    Applicant: Dolby Laboratories Licensing Corporation
    Inventors: Guan-Ming Su, Sheng Qu, Hubert Koepfer, Yufei Yuan, Samir Hulyalkar
  • Publication number: 20130272566
    Abstract: A video transmitter identifies regions in pictures in a compressed three-dimensional (3D) video comprising a base view video and an enhancement view video. The identified regions are not referenced by other pictures in the compressed 3D video. The identified regions are watermarked. Pictures such as a high layer picture in the base view video and the enhancement view video are identified for watermarking. The identified regions in the base view and/or enhancement view videos are watermarked and multiplexed into a transport stream for transmission. An intended video receiver extracts the base view video, the enhancement view video and corresponding watermark data from the received transport stream. The corresponding extracted watermark data are synchronized with the extracted base view video and the extracted enhancement view video, respectively, for watermark insertion. The resulting base view and enhancement view videos are decoded into a left view video and a right view video, respectively.
    Type: Application
    Filed: October 4, 2012
    Publication date: October 17, 2013
    Inventors: Xuemin Chen, Samir Hulyalkar, Marcus Kellerman, Ilya Klebanov
  • Patent number: 8558948
    Abstract: Presented herein are system(s), method(s), and apparatus for providing high resolution frames. In one embodiment, there is a method comprising receiving upscaled frames; motion estimating the upscaled frames; and motion compensating the upscaled frames.
    Type: Grant
    Filed: December 17, 2009
    Date of Patent: October 15, 2013
    Assignee: Broadcom Corporation
    Inventors: Yunwei Jia, Samir Hulyalkar, Steven Hanna, Keith Lee
  • Publication number: 20130235157
    Abstract: A video receiver receives a compressed 3D video comprising a base view video and a residual view video from a video transmitter. The video receiver decodes the received base view video and an enhancement view video of the received compressed 3D video into a left view video and a right view video. Base view pictures are generated selectively based on available memory resources. The residual view video is generated by subtracting base view pictures from corresponding enhancement view pictures. The received base view and residual view videos are buffered for video decoding. Pictures in the buffered residual view video are added to corresponding pictures in the buffered base view video for enhancement view decoding. The left view video and/or the right view video are generated from the resulting decoded base view and enhancement view pictures. A motion vector used for a disparity-predicted macroblock is applied to adjacent macroblock pre-fetching.
    Type: Application
    Filed: April 22, 2013
    Publication date: September 12, 2013
    Applicant: Broadcom Corporation
    Inventors: Xuemin Chen, Marcus Kellerman, Samir Hulyalkar, Ilya Klebanov
  • Patent number: 8487981
    Abstract: A video processor decompresses stereoscopic left and right reference frames of compressed 3D video. New left and right frames are interpolated. The frames may be stored and/or communicated for display. The left and right frames are combined into a single frame of a single stream or may be sequenced in separate left and right streams. The left and right frames are interpolated based on the combined single stream and/or based on the separate left and right streams. Motion vectors are determined for one of the separate left or right streams. The frames are interpolated utilizing motion compensation. Areas of occlusion are determined in the separate left and right streams. Pixels are interpolated for occluded areas of left or right frames of separate streams from uncovered areas in corresponding opposite side frames. The left and right interpolated and/or reference frames are displayed as 3D and/or 2D video.
    Type: Grant
    Filed: September 4, 2009
    Date of Patent: July 16, 2013
    Assignee: Broadcom Corporation
    Inventors: Samir Hulyalkar, Xuemin Chen, Marcus Kellerman, Ilya Klebanov
  • Patent number: 8472625
    Abstract: A video receiver receives a compressed 3D video comprising a base view video and an enhancement view video. The base view video and the enhancement view video are encrypted using the same encryption engine and buffered into corresponding coded data buffers (CDBs), respectively. The buffered base view and enhancement view videos are decrypted using the same decryption engine corresponding to the encryption engine. The decrypted base view and enhancement view videos are decoded for viewing. The video receiver is also operable to encrypt video content of the received compressed 3D video according to corresponding view information and/or coding layer information. The resulting encrypted video content and unencrypted video content of the received compressed 3D video are buffered into corresponding CDBs, respectively. The buffered encrypted video content is decrypted and decoded together with the buffered unencrypted video content of the received compressed 3D video for viewing.
    Type: Grant
    Filed: September 16, 2009
    Date of Patent: June 25, 2013
    Assignee: Broadcom Corporation
    Inventors: Xuemin Chen, Samir Hulyalkar, Marcus Kellerman, Ilya Klebanov
  • Patent number: 8428122
    Abstract: A video receiver receives a compressed 3D video comprising a base view video and a residual view video from a video transmitter. The video receiver decodes the received base view video and an enhancement view video of the received compressed 3D video into a left view video and a right view video. Base view pictures are generated selectively based on available memory resources. The residual view video is generated by subtracting base view pictures from corresponding enhancement view pictures. The received base view and residual view videos are buffered for video decoding. Pictures in the buffered residual view video are added to corresponding pictures in the buffered base view video for enhancement view decoding. The left view video and/or the right view video are generated from the resulting decoded base view and enhancement view pictures. A motion vector used for a disparity-predicted macroblock is applied to adjacent macroblock pre-fetching.
    Type: Grant
    Filed: September 16, 2009
    Date of Patent: April 23, 2013
    Assignee: Broadcom Corporation
    Inventors: Xuemin Chen, Marcus Kellerman, Samir Hulyalkar, Ilya Klebanov
  • Publication number: 20130033586
    Abstract: A method of, and system and apparatus for, generating visual information from left and right (L/R) view information and depth information, comprising computing left and right projections of the L/R view information in three-dimensional space, combining the occluded portions of the computed projections in three-dimensional space, and mapping the combined projections to two-dimensional space according to a desired projection point.
    Type: Application
    Filed: April 19, 2011
    Publication date: February 7, 2013
    Inventor: Samir Hulyalkar
  • Publication number: 20120314944
    Abstract: HDR images are coded and distributed. An initial HDR image is received. Processing the received HDR image creates a JPEG-2000 DCI-compliant coded baseline image and an HDR-enhancement image. The coded baseline image has one or more color components, each of which provides enhancement information that allows reconstruction of an instance of the initial HDR image using the baseline image and the HDR-enhancement images. A data packet is computed, which has a first and a second data set. The first data set relates to the baseline image color components, each of which has an application marker that relates to the HDR-enhancement images. The second data set relates to the HDR-enhancement image. The data packets are sent in a DCI-compliant bit stream.
    Type: Application
    Filed: May 10, 2012
    Publication date: December 13, 2012
    Applicant: Dolby Laboratories Licensing Corporation
    Inventors: Ajit Ninan, Samir Hulyalkar
  • Publication number: 20120314773
    Abstract: A visual dynamic range (VDR) signal and a standard dynamic range (SDR) signal are received. A first (e.g., MPEG-2) encoder encodes a base layer (BL) signal. A second encoder encodes an enhancement layer (EL) signal. The EL signal represents information with which the VDR signal may be reconstructed, e.g., using the BL and the EL signals. The first encoder encodes the SDR signal with inverse discrete cosine transform (IDCT) coefficients that have a fixed precision, e.g., which represent fixed-point approximations of transform coefficients that may have arbitrary precisions. The BL signal is encoded in a stream that conforms with an Advanced Television Systems Committee (ATSC) standard. The EL signal is encoded in a stream that conforms with an ATSC enhanced vestigial sideband (E-VSB) standard. The BL and EL signals are combined, e.g., multiplexed, and transmitted together.
    Type: Application
    Filed: May 23, 2012
    Publication date: December 13, 2012
    Applicant: Dolby Laboratories Licensing Corporation
    Inventors: Walter C. Gish, Samir Hulyalkar
  • Patent number: 8300881
    Abstract: A video transmitter identifies regions in pictures in a compressed three-dimensional (3D) video comprising a base view video and an enhancement view video. The identified regions are not referenced by other pictures in the compressed 3D video. The identified regions are watermarked. Pictures such as a high layer picture in the base view video and the enhancement view video are identified for watermarking. The identified regions in the base view and/or enhancement view videos are watermarked and multiplexed into a transport stream for transmission. An intended video receiver extracts the base view video, the enhancement view video and corresponding watermark data from the received transport stream. The corresponding extracted watermark data are synchronized with the extracted base view video and the extracted enhancement view video, respectively, for watermark insertion. The resulting base view and enhancement view videos are decoded into a left view video and a right view video, respectively.
    Type: Grant
    Filed: September 16, 2009
    Date of Patent: October 30, 2012
    Assignee: Broadcom Corporation
    Inventors: Xuemin Chen, Samir Hulyalkar, Marcus Kellerman, Ilya Klebanov
  • Patent number: 8300087
    Abstract: A sequential pattern comprising contiguous black frames inserted between left and right 3D video and/or graphics frames may be displayed on an LCD display. The pattern may comprise two or three contiguous left frames followed by contiguous black frames, followed by two or three contiguous right frames followed by contiguous black frames. The left and/or right frames may comprise interpolated frames and/or may be displayed in ascending order. The contiguous black frames are displayed for longer than the liquid crystal response time. 3D shutter glasses are synchronized with the black frames. A left lens transmits light when left frames followed by contiguous black frames are displayed, and a right lens transmits light when right frames followed by contiguous black frames are displayed. A 3D pair of 24 Hz frames or two 3D pairs of 60 Hz frames per pattern are displayed on a 240 Hz display.
    Type: Grant
    Filed: October 23, 2009
    Date of Patent: October 30, 2012
    Assignee: Broadcom Corporation
    Inventors: Samir Hulyalkar, Xuemin Chen, Marcus Kellerman, Ilya Klebanov, Sunkwang Hong
  • Publication number: 20110149029
    Abstract: A video processing device may perform pulldown when generating an output video stream that corresponds to a received input 3D video stream. The pulldown may be performed based on determined native characteristics of the received input 3D video stream and display parameters corresponding to a display device used for presenting the generated output video stream. The native characteristics of the received input 3D video stream may comprise the film mode, which may be used to determine the capture frame rate. The display parameters may comprise scan mode and/or display frame rate. A left view or a right view frame in every group of frames in the input 3D video stream comprising two consecutive left view frames and corresponding two consecutive right view frames may be duplicated when the input 3D video stream comprises a film mode with a 24 fps capture frame rate and the display device uses 60 Hz progressive scanning.
    Type: Application
    Filed: February 18, 2010
    Publication date: June 23, 2011
    Inventors: Marcus Kellerman, Xuemin Chen, Samir Hulyalkar, Ilya Klebanov
  • Publication number: 20110150355
    Abstract: A video processing device may enhance contrast of one or more of a plurality of view sequences extracted from a three-dimensional (3D) input video stream based on contrast information derived from other sequences in the plurality of view sequences. The view sequences that are subjected to contrast enhancement and/or whose contrast information may be utilized during contrast enhancement may be selected based on one or more selection criteria, which may comprise the compression bitrate utilized during communication of the input video stream. The video processing device may also perform noise reduction on one or more of the plurality of extracted view sequences during contrast enhancement operations. Noise reduction may be performed using digital noise reduction (DNR). The noise reduction may be performed separately and/or independently on each view sequence in the plurality of extracted view sequences.
    Type: Application
    Filed: January 19, 2010
    Publication date: June 23, 2011
    Inventors: Marcus Kellerman, Xuemin Chen, Samir Hulyalkar, Ilya Klebanov
  • Publication number: 20110149028
    Abstract: 3D glasses may communicate with a video device that is used for playback of 3D video content to determine an operating mode used during the 3D video content playback and to synchronize viewing operations via the 3D glasses during the 3D video content playback based on the determined operating mode. Exemplary operating modes include polarization mode and shutter mode. The 3D video content may comprise stereoscopic left and right views. Polarization of the 3D glasses may be synchronized to polarization of the right and left views in polarization mode, whereas shuttering of the 3D glasses may be synchronized to the frequency of alternating rendering of right and left views in shutter mode. Synchronization of the 3D glasses may be performed prior to start of the 3D video content playback and/or dynamically during the 3D video content playback. The 3D glasses may communicate with the video device via wireless interfaces.
    Type: Application
    Filed: February 2, 2010
    Publication date: June 23, 2011
    Inventors: Ilya Klebanov, Xuemin Chen, Samir Hulyalkar, Marcus Kellerman
  • Publication number: 20110149019
    Abstract: A video processing device may generate a two-dimensional (2D) output video stream from a three-dimensional (3D) input video stream that comprises a plurality of view sequences. The plurality of view sequences may comprise sequences of stereoscopic left and right reference fields or frames. A view sequence may initially be selected as a base sequence for the 2D output video stream, and the 2D output video stream may be enhanced using video content and/or information from unselected view sequences. The video content and/or information utilized in enhancing the 2D output video stream may comprise depth information, and/or foreground and/or background information. The enhancement of the 2D output video stream may comprise improving depth, contrast, sharpness, and/or rate upconversion using frame- and/or field-based interpolation of images in the 2D output video stream.
    Type: Application
    Filed: January 19, 2010
    Publication date: June 23, 2011
    Inventors: Marcus Kellerman, Xuemin Chen, Samir Hulyalkar, Ilya Klebanov
  • Publication number: 20110149150
    Abstract: Presented herein are system(s), method(s), and apparatus for providing high resolution frames. In one embodiment, there is a method comprising receiving upscaled frames; motion estimating the upscaled frames; and motion compensating the upscaled frames.
    Type: Application
    Filed: December 17, 2009
    Publication date: June 23, 2011
    Inventors: Yunwei Jia, Samir Hulyalkar, Steven Hanna, Keith Lee
  • Publication number: 20110149022
    Abstract: A video processing device may extract a plurality of view sequences from a three-dimensional (3D) input video stream and generate a plurality of graphics sequences that correspond to local graphics content. Each of the plurality of graphics sequences may be blended with a corresponding view sequence from the extracted plurality of view sequences to generate a plurality of combined sequences. The local graphics content may comprise on-screen display (OSD) graphics, and may initially be generated as two-dimensional (2D) graphics. The plurality of graphics sequences may be generated from the local graphics content based on, for example, video information for the input 3D video stream, user input, and/or preconfigured conversion data. After blending the view sequences with the graphics sequences, the video processing device may generate a 3D output video stream. The generated 3D output video stream may then be transformed to a 2D video stream if 3D playback is not available.
    Type: Application
    Filed: February 2, 2010
    Publication date: June 23, 2011
    Inventors: Ilya Klebanov, Xuemin Chen, Samir Hulyalkar, Marcus Kellerman
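
The abstracts above summarize the filed claims rather than any particular implementation. For readers skimming the more formula-like entries, a few short, purely illustrative sketches follow; none of them is taken from the applications themselves.

Publication 20140029675 describes inter-color prediction with multi-channel multiple regression (MMR) models containing first-order, second-order, and cross-pixel terms, fitted with closed-form minimum mean-square-error solutions. Below is a minimal sketch of that style of predictor, assuming a simple second-order design matrix and a NumPy least-squares fit; both choices are my own assumptions, not the claimed parameterization.

# Minimal sketch, in the spirit of publication 20140029675: predict each target
# (e.g., HDR) pixel channel from a polynomial expansion of the source (e.g., SDR)
# pixel's channels, with parameters from a closed-form least-squares fit.
import numpy as np

def mmr_design_matrix(sdr: np.ndarray) -> np.ndarray:
    """Second-order MMR-style design matrix from SDR pixels (N x 3, RGB in [0, 1])."""
    r, g, b = sdr[:, 0], sdr[:, 1], sdr[:, 2]
    ones = np.ones_like(r)
    return np.column_stack([
        ones,                 # offset
        r, g, b,              # first-order terms
        r * g, r * b, g * b,  # cross-channel (cross-pixel) terms
        r * r, g * g, b * b,  # second-order terms
    ])

def fit_mmr(sdr: np.ndarray, hdr: np.ndarray) -> np.ndarray:
    """Closed-form minimum mean-square-error fit of the prediction parameters."""
    A = mmr_design_matrix(sdr)
    M, *_ = np.linalg.lstsq(A, hdr, rcond=None)
    return M  # shape: (num_terms, 3)

def predict_mmr(sdr: np.ndarray, M: np.ndarray) -> np.ndarray:
    return mmr_design_matrix(sdr) @ M

# Toy usage: random pixels stand in for co-graded SDR/HDR content.
rng = np.random.default_rng(0)
sdr = rng.random((1000, 3))
hdr = sdr ** 2.2  # stand-in for a color-graded HDR version of the same pixels
M = fit_mmr(sdr, hdr)
print("mean abs prediction error:", np.abs(predict_mmr(sdr, M) - hdr).mean())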
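
Publication 20140037205 models color grading with a color transformation matrix for hue and saturation changes plus a non-linear function for color-correction changes, under the assumption of slope, offset, and power (SOP) operations. A generic sketch of a predictor with that shape appears below; the matrix, the SOP parameter values, and the clipping are illustrative assumptions rather than the claimed model.

# Minimal sketch of an SOP-style predictor: a 3x3 matrix for hue/saturation
# changes followed by a per-channel slope/offset/power curve.
import numpy as np

def sop_predict(sdr: np.ndarray,
                matrix: np.ndarray,
                slope: np.ndarray,
                offset: np.ndarray,
                power: np.ndarray) -> np.ndarray:
    """Predict target pixels from source pixels (N x 3) under an SOP model."""
    x = sdr @ matrix.T                                      # hue/saturation transform
    return np.clip(slope * x + offset, 0.0, None) ** power  # per-channel SOP curve

# Purely illustrative parameter values.
matrix = np.eye(3)
slope = np.array([1.1, 1.0, 0.95])
offset = np.array([0.01, 0.0, 0.02])
power = np.array([2.0, 2.2, 2.4])

pixels = np.random.default_rng(1).random((4, 3))
print(sop_predict(pixels, matrix, slope, offset, power))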
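
Publication 20110149029 describes duplicating one left or right view frame per group of two consecutive left frames and two corresponding right frames when 24 fps film-mode 3D content is shown on a 60 Hz progressive display. The arithmetic works out to expanding 48 view frames per second into 60; the grouping order and the choice of which frame to duplicate in the sketch below are assumptions based only on the abstract.

# Minimal sketch: 24 fps stereo captures -> 60 frame-sequential output frames.
# Each pair of capture times is regrouped as two consecutive left frames plus
# the two corresponding right frames, and one frame per group is duplicated
# (12 groups per second x 5 frames = 60 frames per second).
from typing import List, Tuple

def pulldown_24fps_3d_to_60hz(pairs: List[Tuple[str, str]]) -> List[str]:
    """pairs: 24 fps stereo captures as (left, right) tuples."""
    out: List[str] = []
    for i in range(0, len(pairs) - 1, 2):
        (l0, r0), (l1, r1) = pairs[i], pairs[i + 1]
        group = [l0, l1, r0, r1]       # two consecutive lefts, two consecutive rights
        out.extend(group)
        out.append(group[-1])          # duplicate one view frame per group
    return out

# One second of 24 fps capture yields 60 output frames for a 60 Hz display.
pairs = [(f"L{n}", f"R{n}") for n in range(24)]
assert len(pulldown_24fps_3d_to_60hz(pairs)) == 60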