Patents by Inventor Samir Hulyalkar

Samir Hulyalkar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20110149021
    Abstract: A video processing device may enhance sharpness of one or more of a plurality of view sequences extracted from a three dimensional (3D) input video stream. The plurality of extracted view sequences may comprise stereoscopic left and right view sequences of reference fields or frames. The sharpness enhancement processing may be performed based on sharpness related video information, which may be derived from other sequences in the plurality of view sequences, user input, embedded control data, and/or preconfigured parameters. The sharpness related video information may enable classifying images in the 3D input video streams into different regions, and may comprise depth related data and/or point-of-focus related data. Sharpness enhancement processing may be performed variably on background and foreground regions, and/or on in-focus or out-of-focus regions. A 3D output video stream for display may be generated from the plurality of view sequences based on the sharpness processing.
    Type: Application
    Filed: February 2, 2010
    Publication date: June 23, 2011
    Inventors: Samir Hulyalkar, Ilya Klebanov, Xuemin Chen, Marcus Kellerman
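The region-dependent sharpening described in this entry could, for instance, scale an unsharp-mask gain by a per-pixel depth or focus map. A minimal sketch under that assumption, using SciPy's Gaussian filter; the function name, gains, and threshold are illustrative, not taken from the application:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def region_dependent_sharpen(view, depth, fg_gain=1.2, bg_gain=0.3, fg_thresh=0.5):
    """Sharpen one extracted view with a gain that depends on a per-pixel depth map.

    view  : 2D float array (luma plane of one view)
    depth : 2D float array in [0, 1]; larger = closer to the camera (assumed)
    """
    blurred = gaussian_filter(view, sigma=1.5)              # low-pass estimate
    detail = view - blurred                                 # high-frequency detail
    gain = np.where(depth >= fg_thresh, fg_gain, bg_gain)   # stronger boost in foreground
    return np.clip(view + gain * detail, 0.0, 1.0)

# Example: sharpen the left view more aggressively where the depth map marks foreground.
left = np.random.rand(480, 640)
depth_map = np.random.rand(480, 640)
enhanced_left = region_dependent_sharpen(left, depth_map)
```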
  • Publication number: 20110149040
    Abstract: A video processing device may generate and/or capture a plurality of view sequences of video frames, decimate at least some of the plurality of view sequences, and may generate a three-dimensional (3D) video stream comprising the plurality of view sequences based on that decimation. The decimation may be achieved by converting one or more of the plurality of view sequences from progressive to interlaced video. The interlacing may be performed by removing top or bottom fields in each frame of those one or more view sequences during the conversion to interlaced video. The removed fields may be selected based on corresponding conversion to interlaced video of one or more corresponding view sequences. The video processing device may determine bandwidth limitations existing during direct and/or indirect transfer or communication of the generated 3D video stream. The decimation may be performed based on this determination of bandwidth limitations.
    Type: Application
    Filed: February 2, 2010
    Publication date: June 23, 2011
    Inventors: Ilya Klebanov, Xuemin Chen, Samir Hulyalkar, Marcus Kellerman
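A minimal sketch of the field-dropping decimation idea in this entry, assuming the two views keep complementary fields so the removed lines of one view can later be approximated from the other; the array layout and helper names are illustrative:

```python
import numpy as np

def progressive_to_field(frame, keep_top):
    """Keep either the top (even rows) or bottom (odd rows) field of a progressive frame."""
    return frame[0::2] if keep_top else frame[1::2]

def decimate_stereo_pair(left, right):
    """Halve the data of a stereo pair by keeping complementary fields per view."""
    left_field = progressive_to_field(left, keep_top=True)     # left keeps its top field
    right_field = progressive_to_field(right, keep_top=False)  # right keeps its bottom field
    return left_field, right_field

# 1080-line stereo pair -> two 540-line fields; a receiver can fill the missing
# lines from the same view (interpolation) and/or from the complementary view.
left = np.random.rand(1080, 1920)
right = np.random.rand(1080, 1920)
lf, rf = decimate_stereo_pair(left, right)
print(lf.shape, rf.shape)   # (540, 1920) (540, 1920)
```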
  • Publication number: 20110149020
    Abstract: A media player may read three-dimensional (3D) video data comprising a plurality of view sequences of frames or fields from a media storage device, and may decimate one or more of the view sequences to enable transferring the video data to a display device. The media player may determine operational parameter(s) and/or transfer limitation(s) of a connecting subsystem used to transfer the video data to the display device. The decimation may be performed based on this determination of transfer limitation(s). The decimation may be performed temporally and/or spatially. The plurality of view sequences may comprise sequences of stereoscopic left and right view reference frames or fields. The decimation may be performed such that the removed data for each view sequence may be reconstructed, after reception, based on remaining data in the same view sequence and/or video data of other corresponding view sequences.
    Type: Application
    Filed: January 19, 2010
    Publication date: June 23, 2011
    Inventors: Ilya Klebanov, Xuemin Chen, Samir Hulyalkar, Marcus Kellerman
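One way the transfer-limitation check described in this entry might drive the decimation choice is a simple bandwidth-ratio policy; the thresholds and mode names below are assumptions for illustration only:

```python
def choose_decimation(stream_bitrate_mbps, link_capacity_mbps):
    """Pick a decimation mode from the ratio of required to available bandwidth.

    The thresholds and modes are illustrative; the application leaves the exact
    policy to the implementation.
    """
    if link_capacity_mbps <= 0:
        raise ValueError("link capacity must be positive")
    ratio = stream_bitrate_mbps / link_capacity_mbps
    if ratio <= 1.0:
        return "none"       # the full 3D stream fits on the connecting subsystem
    if ratio <= 2.0:
        return "temporal"   # e.g. halve the frame rate of one or both views
    return "spatial"        # e.g. also drop fields/columns within each view

print(choose_decimation(stream_bitrate_mbps=36, link_capacity_mbps=24))  # "temporal"
```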
  • Publication number: 20110115883
    Abstract: A 2D and/or 3D video processing device comprising a camera and a display captures images of a viewer as the viewer observes displayed 2D and/or 3D video content in a viewport. Face and/or eye tracking of viewer images is utilized to generate a different viewport. Current and different viewports may comprise 2D and/or 3D video content from a single source or from different sources. The sources of 2D and/or 3D content may be scrolled, zoomed and/or navigated through for generating the different viewport. Content for the different viewport may be processed. Images of a viewer's positions, angles and/or movements of face, facial expression, eyes and/or physical gestures are captured by the camera and interpreted by face and/or eye tracking. The different viewport may be generated for navigating through 3D content and/or for rotating a 3D object. The 2D and/or 3D video processing device communicates via wire, wireless and/or optical interfaces.
    Type: Application
    Filed: November 16, 2009
    Publication date: May 19, 2011
    Inventors: Marcus Kellerman, Xuemin Chen, Samir Hulyalkar, Ilya Klebanov
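A rough sketch of how a tracked face position and size could be mapped to a new viewport (pan and zoom) for this entry; the gains and the linear mapping are assumptions, not details from the application:

```python
def update_viewport(face_x, face_y, face_size, frame_w, frame_h,
                    pan_gain=0.4, zoom_ref=0.25):
    """Map a tracked face position/size to a new viewport (pan and zoom).

    face_x, face_y : face centre in camera-image pixels
    face_size      : face width as a fraction of the camera-image width
    Returns (pan_x, pan_y, zoom); all quantities are illustrative.
    """
    # Offset of the face from the camera-image centre, normalised to [-1, 1].
    dx = (face_x - frame_w / 2) / (frame_w / 2)
    dy = (face_y - frame_h / 2) / (frame_h / 2)
    pan_x = pan_gain * dx          # pan the viewport as the viewer's head moves
    pan_y = pan_gain * dy
    zoom = face_size / zoom_ref    # moving closer to the camera zooms in
    return pan_x, pan_y, zoom

print(update_viewport(face_x=400, face_y=260, face_size=0.30, frame_w=640, frame_h=480))
```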
  • Publication number: 20110096151
    Abstract: A video processing system receives left and right 3D video and/or graphics frames and generates noise reduced left 3D video, right 3D video and/or graphics frames based on parallax compensated left and right frames. Displacement of imagery and/or pixel structures is determined relative to opposite side left and/or right frames. Parallax vectors are determined for parallax compensated left 3D video, right 3D video and/or graphics frames. A search area for displacement may be bounded by parallax limitations. Left 3D frames may be blended with the parallax compensated right 3D frames. Right 3D frames may be blended with the parallax compensated left 3D frames. The left 3D video, right 3D video and/or graphics frames comprise images that are captured, representative of and/or are displayed at a same time instant or at different time instants. Motion estimation, motion adaptation and/or motion compensation techniques may be utilized with parallax techniques.
    Type: Application
    Filed: October 23, 2009
    Publication date: April 28, 2011
    Inventors: Samir Hulyalkar, Xuemin Chen, Marcus Kellerman, Ilya Klebanov
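The parallax compensation in this entry can be illustrated with naive block matching bounded by a maximum disparity, followed by blending the left frame with the compensated right frame. The block size, search range, and SAD criterion below are illustrative choices, not the application's method:

```python
import numpy as np

def parallax_compensate(left, right, block=16, max_disp=32):
    """Build a parallax-compensated version of `right` aligned to `left`.

    For each block of the left frame, search the right frame along the same rows
    within +/- max_disp columns (a parallax-bounded search area) and copy the
    best-matching block (minimum sum of absolute differences).
    """
    h, w = left.shape
    comp = np.zeros_like(left)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            ref = left[y:y + block, x:x + block]
            best, best_sad = None, np.inf
            for d in range(-max_disp, max_disp + 1):
                xs = x + d
                if xs < 0 or xs + block > w:
                    continue
                cand = right[y:y + block, xs:xs + block]
                sad = np.abs(ref - cand).sum()
                if sad < best_sad:
                    best_sad, best = sad, cand
            comp[y:y + block, x:x + block] = best
    return comp

def denoise_left(left, right, alpha=0.5):
    """Blend the left frame with the parallax-compensated right frame."""
    return alpha * left + (1 - alpha) * parallax_compensate(left, right)

left = np.random.rand(240, 320)
right = np.random.rand(240, 320)
clean_left = denoise_left(left, right)
```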
  • Publication number: 20110096146
    Abstract: A sequential pattern comprising contiguous black frames inserted between left and right 3D video and/or graphics frames may be displayed on an LCD display. The pattern may comprise two or three contiguous left frames followed by contiguous black frames followed by two or three contiguous right frames followed by contiguous black frames. The left and/or right frames may comprise interpolated frames and/or may be displayed in ascending order. The contiguous black frames are displayed longer than the liquid crystal response time. 3D shutter glasses are synchronized with the black frames. A left lens transmits light when left frames followed by contiguous black frames are displayed and a right lens transmits light when right frames followed by contiguous black frames are displayed. A 3D pair of 24 Hz frames or two 3D pairs of 60 Hz frames per pattern are displayed on a 240 Hz display.
    Type: Application
    Filed: October 23, 2009
    Publication date: April 28, 2011
    Inventors: Samir Hulyalkar, Xuemin Chen, Marcus Kellerman, Ilya Klebanov, Sunkwang Hong
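The timing in this entry works out as follows: with three left frames, two black frames, three right frames, and two black frames, one pattern occupies ten 240 Hz slots, i.e. 24 patterns (one 3D pair each) per second. A small sketch of building such a pattern and a matching shutter schedule; the data representation is an assumption:

```python
def build_240hz_pattern(left_repeats=3, right_repeats=3, black_repeats=2):
    """Build one display pattern for a 240 Hz LCD with shutter glasses.

    3 left + 2 black + 3 right + 2 black = 10 slots, so 240 / 10 = 24 patterns
    per second. The black intervals give the liquid crystal time to settle.
    """
    pattern = (["L"] * left_repeats + ["K"] * black_repeats +      # K = black frame
               ["R"] * right_repeats + ["K"] * black_repeats)
    # Per the abstract, the left lens transmits while the left frames followed by
    # black frames are displayed, and likewise for the right lens.
    n_left = left_repeats + black_repeats
    shutter = ["left_open"] * n_left + ["right_open"] * (right_repeats + black_repeats)
    return pattern, shutter

pattern, shutter = build_240hz_pattern()
print(pattern)                                     # ['L','L','L','K','K','R','R','R','K','K']
print(240 // len(pattern), "3D pairs per second")  # 24
```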
  • Publication number: 20110085023
    Abstract: A first video processing device, for example, a set-top-box, receives and decodes left and right video streams and generates left and right graphics streams. The left and right video streams and left and right graphics streams are compressed and wirelessly communicated to a second video processing device, for example, a 3D and/or 2D television. The graphics streams are generated by a graphics processor on the first video processing device utilizing stored and/or received graphics information. The second video processing device wirelessly receives and decompresses the video and graphics. Blending of the left video with graphics and/or blending the right video with graphics may be done prior to wireless communication by the first video processing device or after wireless reception by the second video processing device. The second video processing device displays the blended left video and graphics and/or the blended right video and graphics.
    Type: Application
    Filed: October 13, 2009
    Publication date: April 14, 2011
    Inventors: Samir Hulyalkar, Xuemin Chen, Marcus Kellerman, Ilya Klebanov
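The blending step mentioned in this entry, whether performed before wireless transmission or after reception, amounts to compositing a graphics plane over each decoded view. A minimal per-pixel alpha-blend sketch; the array shapes and the alpha plane are assumptions:

```python
import numpy as np

def blend_graphics(video, graphics, alpha):
    """Alpha-blend a graphics plane over a decoded video frame.

    video, graphics : H x W x 3 float arrays in [0, 1]
    alpha           : H x W per-pixel graphics opacity in [0, 1]
    The same operation applies to the left and to the right view.
    """
    a = alpha[..., None]                     # broadcast the opacity over RGB
    return a * graphics + (1.0 - a) * video

left_with_graphics = blend_graphics(np.random.rand(720, 1280, 3),
                                    np.random.rand(720, 1280, 3),
                                    np.random.rand(720, 1280))
```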
  • Publication number: 20110080948
    Abstract: A video receiver receives a layered and predicted compressed 3D video comprising a base view video and an enhancement view video. A portion of the pictures in the received compressed 3D video is selected to be decoded for display at an intended pace. Pictures in the received compressed 3D video are generated based on a tier system framework with tiers ordered hierarchically according to corresponding decodability. Each picture in the base view and enhancement view videos belongs to one of the plurality of tiers. A picture in a particular tier does not depend directly or indirectly on pictures in a higher tier. Each tier comprises one or more pictures with the same coding order. The video receiver decodes the pictures with the same coding order in parallel, and adaptively decodes the selected pictures according to corresponding coding layer information. The selected pictures are determined based on a particular display rate.
    Type: Application
    Filed: October 5, 2009
    Publication date: April 7, 2011
    Inventors: Xuemin Chen, Samir Hulyalkar, Marcus Kellerman, Ilya Klebanov
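A small sketch of the tier-based selection idea in this entry: because no picture depends on a higher tier, dropping every tier above a chosen limit leaves a decodable subset, and pictures that share a coding order can be decoded in parallel. The picture records and field names are illustrative:

```python
from collections import defaultdict

def select_and_group(pictures, max_tier):
    """Select pictures up to a decodable tier and group them by coding order.

    `pictures` is an illustrative list of dicts with 'id', 'tier' and
    'coding_order' fields. Each returned batch can be decoded in parallel.
    """
    selected = [p for p in pictures if p["tier"] <= max_tier]
    groups = defaultdict(list)
    for p in selected:
        groups[p["coding_order"]].append(p["id"])
    return [groups[k] for k in sorted(groups)]

pics = [{"id": "I0", "tier": 0, "coding_order": 0},
        {"id": "P4", "tier": 1, "coding_order": 1},
        {"id": "B2", "tier": 2, "coding_order": 2},
        {"id": "b1", "tier": 3, "coding_order": 3}]
print(select_and_group(pics, max_tier=1))   # [['I0'], ['P4']] for a reduced display rate
```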
  • Publication number: 20110081133
    Abstract: A video receiver receives a compressed 3D video comprising a base view video and an enhancement view video. The video receiver determines a random access that occurs at a two-view misaligned base view RAP to start decoding activities on the received compressed 3D video based on a corresponding two-view aligned random access point (RAP). The corresponding two-view aligned RAP is adjacent to the two-view misaligned base view RAP. Pictures in the received compressed 3D video are buffered for the two-view misaligned base view RAP to be decoded starting from the corresponding two-view aligned RAP. One or more pictures in the enhancement view video are interpolated based on the two-view misaligned base view RAP. The video receiver selects a portion of the buffered pictures to be decoded to facilitate a trick mode in personal video recording (PVR) operations for random access at the two-view misaligned RAP.
    Type: Application
    Filed: October 5, 2009
    Publication date: April 7, 2011
    Inventors: Xuemin Chen, Samir Hulyalkar, Marcus Kellerman, Ilya Klebanov
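A minimal sketch of the random-access logic in this entry, assuming RAP positions are known as picture indices and the two-view aligned RAP precedes the misaligned base-view RAP; the names and example indices are illustrative:

```python
def start_point_for_random_access(target_index, base_raps, aligned_raps):
    """Find where to start buffering/decoding for a random-access request.

    target_index : picture index the user jumped to
    base_raps    : indices that are RAPs in the base view (possibly misaligned)
    aligned_raps : indices that are RAPs in both views (two-view aligned)
    Returns the chosen base-view RAP and the two-view aligned RAP from which
    decoding actually starts.
    """
    base_rap = max(i for i in base_raps if i <= target_index)   # nearest base-view RAP
    if base_rap in aligned_raps:
        return base_rap, base_rap          # already aligned: start right there
    # Otherwise buffer pictures from the adjacent (here: preceding) two-view
    # aligned RAP; enhancement-view pictures around the misaligned RAP may be
    # interpolated, as the abstract describes.
    aligned = max(i for i in aligned_raps if i <= base_rap)
    return base_rap, aligned

print(start_point_for_random_access(130, base_raps=[0, 60, 120], aligned_raps=[0, 96]))
# (120, 96): decode from picture 96 so the jump to the misaligned RAP at 120 works
```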
  • Publication number: 20110064262
    Abstract: A video transmitter identifies regions in pictures in a compressed three-dimensional (3D) video comprising a base view video and an enhancement view video. The identified regions are not referenced by other pictures in the compressed 3D video. The identified regions are watermarked. Pictures such as a high layer picture in the base view video and the enhancement view video are identified for watermarking. The identified regions in the base view and/or enhancement view videos are watermarked and multiplexed into a transport stream for transmission. An intended video receiver extracts the base view video, the enhancement view video and corresponding watermark data from the received transport stream. The corresponding extracted watermark data are synchronized with the extracted base view video and the extracted enhancement view video, respectively, for watermark insertion. The resulting base view and enhancement view videos are decoded into a left view video and a right view video, respectively.
    Type: Application
    Filed: September 16, 2009
    Publication date: March 17, 2011
    Inventors: Xuemin Chen, Samir Hulyalkar, Marcus Kellerman, Ilya Klebanov
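Because the identified regions are not referenced by any other picture, a watermark embedded there cannot propagate through prediction. A toy sketch of that selection step for this entry; the macroblock representation and LSB-style embedding are assumptions, not the application's embedding method:

```python
def watermark_regions(picture, referenced_regions, payload_bit):
    """Embed a watermark only in macroblocks that no other picture references.

    picture            : dict of macroblock index -> list of pixel values (illustrative)
    referenced_regions : set of macroblock indices used as references elsewhere
    payload_bit        : 0 or 1, embedded by a toy least-significant-bit tweak
    """
    marked = dict(picture)
    for mb, block in picture.items():
        if mb in referenced_regions:
            continue                          # never touch regions others predict from
        marked[mb] = [(v & ~1) | payload_bit for v in block]   # toy embedding
    return marked

pic = {0: [10, 11, 12], 1: [20, 21, 22], 2: [30, 31, 32]}
print(watermark_regions(pic, referenced_regions={0, 1}, payload_bit=1))  # only macroblock 2 changes
```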
  • Publication number: 20110063414
    Abstract: A video receiver receives a compressed 3D video comprising a base view video and a residual view video from a video transmitter. The video receiver decodes the received base view video and an enhancement view video of the received compressed 3D video into a left view video and a right view video. Base view pictures are generated selectively based on available memory resources. The residual view video is generated by subtracting base view pictures from corresponding enhancement view pictures. The received base view and residual view videos are buffered for video decoding. Pictures in the buffered residual view video are added to corresponding pictures in the buffered base view video for enhancement view decoding. The left view video and/or the right view video are generated from the resulting decoded base view and enhancement view pictures. A motion vector used for a disparity predicted macroblock is applied to adjacent macroblock pre-fetching.
    Type: Application
    Filed: September 16, 2009
    Publication date: March 17, 2011
    Inventors: Xuemin Chen, Marcus Kellerman, Samir Hulyalkar, Ilya Klebanov
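The residual-view arithmetic in this entry reduces to a per-picture subtraction at the transmitter and an addition at the receiver. A minimal sketch, assuming 8-bit pictures stored as arrays:

```python
import numpy as np

# Transmitter side (illustrative): the residual view is the enhancement-view
# picture minus the corresponding base-view picture.
def make_residual(enhancement_pic, base_pic):
    return enhancement_pic.astype(np.int16) - base_pic.astype(np.int16)

# Receiver side: add the buffered residual back to the buffered base-view
# picture to recover the enhancement-view picture.
def reconstruct_enhancement(base_pic, residual_pic):
    return np.clip(base_pic.astype(np.int16) + residual_pic, 0, 255).astype(np.uint8)

base = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
enh = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
assert np.array_equal(reconstruct_enhancement(base, make_residual(enh, base)), enh)
```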
  • Publication number: 20110064220
    Abstract: A video receiver receives a compressed 3D video comprising a base view video and an enhancement view video. The base view video and the enhancement view video are encrypted using the same encryption engine and buffered into corresponding coded data buffers (CDBs), respectively. The buffered base view and enhancement view videos are decrypted using the same decryption engine corresponding to the encryption engine. The decrypted base view and enhancement view videos are decoded for viewing. The video receiver is also operable to encrypt video content of the received compressed 3D video according to corresponding view information and/or coding layer information. The resulting encrypted video content and unencrypted video content of the received compressed 3D video are buffered into corresponding CDBs, respectively. The buffered encrypted video content is decrypted and decoded together with the buffered unencrypted video content of the received compressed 3D video for viewing.
    Type: Application
    Filed: September 16, 2009
    Publication date: March 17, 2011
    Inventors: Xuemin Chen, Samir Hulyalkar, Marcus Kellerman, Ilya Klebanov
  • Publication number: 20110063298
    Abstract: A first 3D graphics and/or 3D video processing device generates left and right view 3D graphics frames comprising 3D content which are communicated to a 3D display device for display. The 3D frames are generated based on a display format utilized by the 3D display device. The first 3D device may comprise a set-top-box and/or computer. The left and/or right 3D graphics frames may be generated based on time sequential display and/or polarizing display. Sub-sampling 3D graphics frames may be based on odd and even row display polarization patterns and/or checkerboard polarization patterns. Left and right 3D graphics pixels may be blended with video pixels. Left and/or right 3D graphics frames may be displayed sequentially in time. Left and/or right 3D graphics frames may be sub-sampled in complementary pixel patterns, interleaved in a single frame and displayed utilizing varying polarization orientations for left and right pixels.
    Type: Application
    Filed: October 23, 2009
    Publication date: March 17, 2011
    Inventors: Samir Hulyalkar, Xuemin Chen, Marcus Kellerman, Ilya Klebanov
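A short sketch of the two sub-sampling layouts mentioned in this entry: complementary odd/even rows and a complementary checkerboard, each interleaving the left and right graphics frames into a single polarized frame. The helper names are illustrative:

```python
import numpy as np

def row_interleave(left, right):
    """Keep even rows of the left frame and odd rows of the right frame,
    interleaved into a single frame (odd/even row polarization pattern)."""
    out = np.empty_like(left)
    out[0::2] = left[0::2]
    out[1::2] = right[1::2]
    return out

def checkerboard_interleave(left, right):
    """Complementary checkerboard sub-sampling of left and right frames."""
    h, w = left.shape[:2]
    mask = (np.add.outer(np.arange(h), np.arange(w)) % 2 == 0)   # True on 'left' squares
    if left.ndim == 3:
        mask = mask[..., None]
    return np.where(mask, left, right)

left = np.random.rand(1080, 1920, 3)
right = np.random.rand(1080, 1920, 3)
frame_rows = row_interleave(left, right)
frame_cb = checkerboard_interleave(left, right)
```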
  • Publication number: 20110058016
    Abstract: A video processor decompresses stereoscopic left and right reference frames of compressed 3D video. New left and right frames are interpolated. The frames may be stored and/or communicated for display. The left and right frames are combined into a single frame of a single stream or may be sequenced in separate left and right streams. The left and right frames are interpolated based on the combined single stream and/or based on the separate left and right streams. Motion vectors are determined for one of the separate left or right streams. The frames are interpolated utilizing motion compensation. Areas of occlusion are determined in the separate left and right streams. Pixels are interpolated for occluded areas of left or right frames of separate streams from uncovered areas in corresponding opposite side frames. The left and right interpolated and/or reference frames are displayed as 3D and/or 2D video.
    Type: Application
    Filed: September 4, 2009
    Publication date: March 10, 2011
    Inventors: Samir Hulyalkar, Xuemin Chen, Marcus Kellerman, Ilya Klebanov
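A deliberately simplified sketch of the interpolation-plus-occlusion idea in this entry: a crude temporal midpoint (a real interpolator would use motion compensation, as the abstract notes), with occluded pixels filled from a disparity-shifted opposite-view frame. The constant disparity and the occlusion mask are assumptions:

```python
import numpy as np

def interpolate_midpoint(prev_frame, next_frame):
    """Crude temporal interpolation: the average of two reference frames."""
    return 0.5 * (prev_frame + next_frame)

def fill_occlusions(interp, occlusion_mask, opposite_view, disparity=8):
    """Replace occluded pixels with pixels from the opposite-view frame,
    shifted horizontally by an (assumed constant) disparity.

    occlusion_mask : boolean H x W array marking pixels with no good match
    """
    shifted = np.roll(opposite_view, disparity, axis=1)   # roughly align the other view
    return np.where(occlusion_mask, shifted, interp)

prev_left = np.random.rand(480, 640)
next_left = np.random.rand(480, 640)
right_ref = np.random.rand(480, 640)
occluded = np.zeros((480, 640), dtype=bool)
occluded[:, :32] = True                                   # toy occluded strip
new_left = fill_occlusions(interpolate_midpoint(prev_left, next_left), occluded, right_ref)
```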
  • Publication number: 20110043608
    Abstract: A video transmitter compresses an uncompressed 3D video into a base view video and an enhancement view video using the MPEG-4 MVC standard. The video transmitter allocates bits to compressed pictures of the uncompressed 3D video based on corresponding picture type. More bits are allocated to I-pictures than P-pictures, and more bits are allocated to P-pictures than B-pictures in a given coding view. More bits are allocated to a compressed picture of the base view video than a same type compressed picture of the enhancement view video. The correlation level between the base view video and the enhancement view video is utilized for bit-allocation in video compression. More bits are allocated to a picture in a lower coding layer than to the same type picture in a higher coding layer in a given coding view. Pictures with the same coding order are identified from different view videos for a joint bit-allocation.
    Type: Application
    Filed: August 21, 2009
    Publication date: February 24, 2011
    Inventors: Xuemin Chen, Samir Hulyalkar, Marcus Kellerman, Ilya Klebanov
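A small sketch of proportional bit allocation consistent with the ordering described in this entry (I > P > B, base view > enhancement view, lower coding layer > higher layer); the weight values themselves are assumptions:

```python
def allocate_bits(gop_budget_bits, pictures):
    """Split a bit budget over pictures of both views in proportion to
    illustrative weights reflecting picture type, view, and coding layer."""
    type_w = {"I": 6.0, "P": 3.0, "B": 1.0}
    view_w = {"base": 1.0, "enh": 0.7}        # enhancement views are well predicted from base
    def weight(p):
        layer_w = 1.0 / (1 + p["layer"])      # higher coding layer -> fewer bits
        return type_w[p["type"]] * view_w[p["view"]] * layer_w
    total = sum(weight(p) for p in pictures)
    return {p["id"]: gop_budget_bits * weight(p) / total for p in pictures}

pics = [{"id": "base-I", "type": "I", "view": "base", "layer": 0},
        {"id": "enh-I",  "type": "I", "view": "enh",  "layer": 0},
        {"id": "base-B", "type": "B", "view": "base", "layer": 2},
        {"id": "enh-B",  "type": "B", "view": "enh",  "layer": 2}]
print(allocate_bits(1_000_000, pics))
```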
  • Publication number: 20110043524
    Abstract: A video receiver receives a compound transport stream (TS) comprising 3D program video streams and spliced advertising streams. The received one or more 3D program video streams are extracted and decoded. Targeted advertising streams are extracted from the received advertising streams according to user criteria. Targeted advertising graphic objects of the extracted or replaced targeted advertising streams are spliced into the decoded 3D program video streams. The decoded 3D program video with the spliced targeted advertising graphic objects is presented in a 2D video. The extracted or replaced targeted advertising streams are processed to generate the targeted advertising graphic objects to be spliced based on focal point of view. The generated targeted advertising graphic objects are located according to associated scene graph information. The decoded 3D program video streams and the spliced targeted advertising graphic objects are converted into a 2D video for display.
    Type: Application
    Filed: August 24, 2009
    Publication date: February 24, 2011
    Inventors: Xuemin Chen, Samir Hulyalkar, Marcus Kellerman, Ilya Klebanov
  • Publication number: 20090273710
    Abstract: An image processing method for frame rate conversion, comprising: receiving a stream of input pictures at an input frame rate, at least some of the input pictures being new pictures, the new pictures appearing within the stream of input pictures at an underlying new picture rate; generating interpolated pictures from certain ones of the input pictures; outputting a stream of output pictures at an output frame rate, the stream of output pictures including a blend of the new pictures and the interpolated pictures, the interpolated pictures appearing in the stream of output pictures at an average interpolated picture rate; and causing a variation in the average interpolated picture rate in response to detection of a variation in the underlying new picture rate.
    Type: Application
    Filed: April 30, 2009
    Publication date: November 5, 2009
    Inventors: Larry Pearlstein, Samir Hulyalkar
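A minimal sketch of the relationship described in this entry, assuming the simple policy that each new picture fills one output slot and interpolated pictures fill the rest, so the average interpolated picture rate moves whenever the underlying new picture rate moves; the repeat detector and the toy frames are illustrative:

```python
def interpolated_picture_rate(output_rate_hz, new_picture_rate_hz):
    """Illustrative policy: new pictures each take one output slot and
    interpolated pictures fill the remaining output slots."""
    return max(output_rate_hz - new_picture_rate_hz, 0)

def estimate_new_picture_rate(frames, input_rate_hz):
    """Estimate the underlying new picture rate by counting frames that differ
    from their predecessor (a crude pulldown/repeat detector; real detectors
    compare frames with a noise-tolerant metric)."""
    new = sum(1 for a, b in zip(frames, frames[1:]) if a != b) + 1
    return new * input_rate_hz / len(frames)

# 60 Hz input carrying 24 Hz film with 3:2 pulldown: AABBBCCDDD... (toy frames)
frames = list("AABBBCCDDDEEFFF" * 4)
rate = estimate_new_picture_rate(frames, 60)
print(rate)                                     # 24.0
print(interpolated_picture_rate(120, rate))     # 96.0 interpolated pictures per second
```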
  • Publication number: 20070237263
    Abstract: A robust data extension added to a standard 8VSB digital television signal is used to improve the performance of a digital television receiver. The robust data extension is added to a standard 8VSB digital television transmission system by encoding high priority data packets in a rate 1/2 trellis encoder. The rate 1/2 trellis encoded high priority data packets are then multiplexed with normal data packets and input into the normal data service of an 8VSB system, which further contains a rate 2/3 trellis encoder. The combined trellis encoding results in a rate 1/3 trellis encoding for robust data packets and a rate 2/3 trellis encoding for normal packets. Backward compatibility with existing receivers is maintained for 1) 8VSB signaling, 2) trellis encoding and decoding, 3) Reed Solomon encoding and decoding, and 4) MPEG compatibility. In addition to delivery of robust data for mobile applications, the redundant robust data packets also improve the performance of the receiver in the normal tier of service.
    Type: Application
    Filed: March 19, 2007
    Publication date: October 11, 2007
    Inventors: Christopher Strolle, Samir Hulyalkar, Jeffrey Hamilton, Haosong Fu, Troy Schaffer
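The combined code rate for the robust tier in this entry follows directly from concatenating the two stages: a rate 1/2 outer stage in front of the standard rate 2/3 trellis gives 1/2 × 2/3 = 1/3, so a robust packet occupies twice the channel symbols of a normal packet of the same size. A tiny bookkeeping sketch:

```python
from fractions import Fraction

# Combined code rate seen by robust (high priority) packets: the extra rate 1/2
# trellis stage in front of the standard rate 2/3 stage.
robust_rate = Fraction(1, 2) * Fraction(2, 3)    # = 1/3
normal_rate = Fraction(2, 3)

print(robust_rate)                 # 1/3
print(normal_rate / robust_rate)   # 2: a robust packet costs twice the channel capacity
```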
  • Publication number: 20070183452
    Abstract: An apparatus is configured to receive, from a receiver, a multiplexed data stream of video data packets, the multiplexed data stream being produced from multiple input video streams received by the receiver. The apparatus includes a point-of-deployment (POD) module controller configured to use a POD module to decrypt the multiplexed data stream, and a demultiplexor connected to the decryption module and configured to demultiplex the multiplexed data stream such that the video data packets are grouped in respective output video data streams, the demultiplexor being further configured to use timing information associated with the multiplexed data stream such that packets in the output video data streams have time spacings in accordance with the timing information.
    Type: Application
    Filed: February 3, 2006
    Publication date: August 9, 2007
    Inventors: Mark Hryszko, Raul Casas, Samir Hulyalkar
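A minimal sketch of the demultiplexing step described in this entry, assuming decrypted packets carry a stream identifier and a timestamp derived from the timing information, so each output stream retains its original packet spacing; the packet representation is an assumption:

```python
from collections import defaultdict

def demultiplex(packets):
    """Group decrypted packets into per-stream outputs, keeping the timestamps
    so output packets can be emitted with their original time spacing.

    `packets` is an illustrative list of (stream_id, timestamp, payload) tuples.
    """
    outputs = defaultdict(list)
    for stream_id, timestamp, payload in packets:
        outputs[stream_id].append((timestamp, payload))   # spacing preserved via timestamps
    return dict(outputs)

mux = [("video1", 0.000, b"p0"), ("video2", 0.001, b"q0"),
       ("video1", 0.033, b"p1"), ("video2", 0.034, b"q1")]
for sid, pkts in demultiplex(mux).items():
    print(sid, [t for t, _ in pkts])   # each output stream keeps its own time spacing
```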
  • Publication number: 20070002763
    Abstract: Centrally controlled wireless networks require reliable communications between the central controller and each of the stations within the wireless networks. The structure of a wireless network is often dynamic, or ad-hoc, as stations enter and exit the network, or are physically relocated. The selection of the central controller for the network may also be dynamic, either because the current central controller desires to exit the network, or because the communication between the current central controller and one or more of the stations is poor. This invention discloses a method and apparatus for assessing the quality of the communication paths among all stations in the network. This assessment is useful as a continual monitor of the quality of the network, and can be utilized to select an alternative central control station based upon the quality of communication paths to and from this station. Additionally, the quality assessment can be utilized to establish relay communication paths, as required.
    Type: Application
    Filed: June 27, 2006
    Publication date: January 4, 2007
    Inventors: Samir Hulyalkar, Yonggang Du, Christoph Herrmann, Chiu Ngo, Peter May
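One plausible use of the pairwise path-quality assessment described in this entry is to pick as central controller the station whose weakest link is strongest, and to flag stations that should be reached through a relay; the scoring matrix, threshold, and selection rule below are illustrative assumptions:

```python
def pick_central_controller(quality):
    """Pick the station whose worst link to any other station is best.

    `quality` is an illustrative symmetric dict-of-dicts of path-quality scores
    (higher is better) gathered by the assessment procedure.
    """
    def worst_link(station):
        return min(q for other, q in quality[station].items() if other != station)
    return max(quality, key=worst_link)

def needs_relay(quality, controller, threshold=0.5):
    """Stations whose direct path to the controller is below an assumed
    threshold and should be reached via a relay station."""
    return [s for s, q in quality[controller].items()
            if s != controller and q < threshold]

q = {"A": {"A": 1.0, "B": 0.9, "C": 0.4},
     "B": {"A": 0.9, "B": 1.0, "C": 0.7},
     "C": {"A": 0.4, "B": 0.7, "C": 1.0}}
cc = pick_central_controller(q)
print(cc, needs_relay(q, cc))   # 'B' is chosen; no station falls below the relay threshold
```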