Patents by Inventor Hamid R. Sheikh

Hamid R. Sheikh has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11048325
    Abstract: An embodiment of this disclosure provides a wearable device. The wearable device includes a memory configured to store a plurality of content for display, a transceiver configured to receive the plurality of content from a connected device, a display configured to display the plurality of content, and a processor coupled to the memory, the display, and the transceiver. The processor is configured to control the display to display at least some of the plurality of content in a spatially arranged format. The displayed content is on the display at a display position. The plurality of content, when shown on the connected device, is not in the spatially arranged format. The processor is also configured to receive movement information based on a movement of the wearable device. The processor is also configured to adjust the display position of the displayed content according to the movement information of the wearable device.
    Type: Grant
    Filed: July 10, 2017
    Date of Patent: June 29, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sourabh Ravindran, Hamid R. Sheikh, Michael Polley, Youngjun Yoo
  • Patent number: 11025942
    Abstract: Methods and systems for compressed domain progressive application of computer vision techniques. A method for decoding video data includes receiving a video stream that is encoded for multi-stage decoding. The method includes partially decoding the video stream by performing one or more stages of the multi-stage decoding. The method includes determining whether a decision for a computer vision system can be identified based on the partially decoded video stream. Additionally, the method includes generating the decision for the computer vision system based on decoding of the video stream. A system for encoding video data includes a processor configured to receive the video data from a camera, encode the video data received from the camera into a video stream for consumption by a computer vision system, and include metadata with the encoded video stream to indicate whether a decision for the computer vision system can be identified from the metadata.
    Type: Grant
    Filed: February 8, 2018
    Date of Patent: June 1, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hamid R. Sheikh, Youngjun Yoo, Michael Polley, Chenchi Luo, David Liu
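The control flow the abstract describes — partially decode, check whether the vision system can already decide, and only decode further if it cannot — can be sketched as a staged pipeline. The stage names and the decision callback below are illustrative assumptions, not details from the patent.

```python
def progressive_decision(stream, stages, try_decide):
    """Run decode stages in order over the stream; after each partial decode,
    ask the computer vision system whether a decision is already possible and
    stop early if so. `stages` are decode passes that each enrich the partially
    decoded data; `try_decide` returns a decision or None."""
    data = stream
    for stage in stages:
        data = stage(data)              # one more stage of the multi-stage decode
        decision = try_decide(data)
        if decision is not None:        # decision reachable from partial decode
            return decision, data       # later stages are skipped entirely
    return None, data                   # fully decoded, still no decision
```

The early return is the point of the scheme: when metadata or coarse syntax elements already support the decision, the cost of full pixel reconstruction is avoided.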
  • Patent number: 10944914
    Abstract: A method includes obtaining, using at least one image sensor of an electronic device, a first image frame of a scene. The method also includes using a convolutional neural network to generate, from the first image frame, multiple second image frames simulated to have different exposures. One or more objects in the scene in each second image frame are aligned with one or more corresponding objects in the scene in at least one other second image frame and are aligned with one or more corresponding objects in the scene in the first image frame. The method further includes blending the multiple second image frames to generate a final image of the scene.
    Type: Grant
    Filed: December 2, 2019
    Date of Patent: March 9, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Long N. Le, Hamid R. Sheikh, Zeeshan Nadir, John W. Glotzbach
  • Publication number: 20210042941
    Abstract: A method includes receiving a reference image and a non-reference image; dividing the reference image into a plurality of tiles; determining, using an electronic device, a motion vector map using coarse-to-fine based motion vector estimation; and generating an output frame using the motion vector map with the reference image and the non-reference image.
    Type: Application
    Filed: December 26, 2019
    Publication date: February 11, 2021
    Inventors: Ruiwen Zhen, John W. Glotzbach, Hamid R. Sheikh
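A toy version of tile-based coarse-to-fine motion vector estimation: search at half resolution first, then refine the doubled coarse vector with a small full-resolution search. The tile size, search radii, and SAD matching criterion are illustrative choices, not values from the patent.

```python
import numpy as np

def estimate_tile_motion(ref_tile, search_area, max_shift):
    """Exhaustive search for the integer shift minimizing the sum of absolute
    differences (SAD) between ref_tile and a window of search_area."""
    h, w = ref_tile.shape
    best, best_sad = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            cand = search_area[max_shift + dy:max_shift + dy + h,
                               max_shift + dx:max_shift + dx + w]
            sad = float(np.abs(ref_tile - cand).sum())
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best

def coarse_to_fine_motion(ref, non_ref, tile=16, coarse_r=4, fine_r=2):
    """Per-tile motion vector map for non_ref relative to ref."""
    H, W = ref.shape
    down = lambda im: im.reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))
    ref_c, non_c = down(ref), down(non_ref)
    ct = tile // 2                       # tile size at the coarse level
    pad_c = np.pad(non_c, coarse_r, mode='edge')
    P = 2 * coarse_r + fine_r            # padding needed at full resolution
    pad_f = np.pad(non_ref, P, mode='edge')
    mv = np.zeros((H // tile, W // tile, 2), dtype=np.int64)
    for ty in range(H // tile):
        for tx in range(W // tile):
            # Coarse estimate on the downsampled pair.
            rt = ref_c[ty * ct:(ty + 1) * ct, tx * ct:(tx + 1) * ct]
            sa = pad_c[ty * ct:ty * ct + ct + 2 * coarse_r,
                       tx * ct:tx * ct + ct + 2 * coarse_r]
            cdy, cdx = estimate_tile_motion(rt, sa, coarse_r)
            # Refine around the doubled coarse vector at full resolution.
            rt = ref[ty * tile:(ty + 1) * tile, tx * tile:(tx + 1) * tile]
            y0, x0 = ty * tile + 2 * cdy, tx * tile + 2 * cdx
            sa = pad_f[y0 + P - fine_r:y0 + P + fine_r + tile,
                       x0 + P - fine_r:x0 + P + fine_r + tile]
            fdy, fdx = estimate_tile_motion(rt, sa, fine_r)
            mv[ty, tx] = (2 * cdy + fdy, 2 * cdx + fdx)
    return mv
```

The coarse pass keeps the full-resolution search radius small, which is what makes the hierarchy cheaper than a single wide exhaustive search.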
  • Publication number: 20210042897
    Abstract: A method includes obtaining, using at least one sensor of an electronic device, multiple image frames of a scene. The multiple image frames include a first image frame and a second image frame captured using different exposures. The method also includes excluding, using at least one processor of the electronic device, pixels in the first and second image frames based on a coarse motion map. The method further includes generating, using the at least one processor, multiple local histogram match maps based on different portions of the first and second image frames. In addition, the method includes generating, using the at least one processor, an image of the scene using the local histogram match maps.
    Type: Application
    Filed: December 9, 2019
    Publication date: February 11, 2021
    Inventors: Zhen Tong, John W. Glotzbach, Ruiwen Zhen, Hamid R. Sheikh
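A minimal sketch of what a local histogram match map can look like: one CDF-matching lookup table per tile, with motion-flagged pixels excluded so moving objects do not skew the brightness alignment between the two exposures. Tile count, bin count, and the exclusion rule are illustrative assumptions.

```python
import numpy as np

def histogram_match_lut(src, ref, exclude=None, bins=256):
    """LUT mapping src's intensity distribution onto ref's via matched CDFs.
    Pixels flagged True in `exclude` (a coarse motion map) are left out of
    both histograms."""
    if exclude is not None:
        src, ref = src[~exclude], ref[~exclude]
    src_hist, _ = np.histogram(src, bins=bins, range=(0, bins))
    ref_hist, _ = np.histogram(ref, bins=bins, range=(0, bins))
    src_cdf = np.cumsum(src_hist) / max(src_hist.sum(), 1)
    ref_cdf = np.cumsum(ref_hist) / max(ref_hist.sum(), 1)
    # For each source level, the ref level with the nearest-not-smaller CDF.
    return np.searchsorted(ref_cdf, src_cdf).clip(0, bins - 1)

def local_match_maps(short_f, long_f, motion, tiles=4):
    """One histogram-match LUT per tile, so the brightness alignment between
    the short and long exposures can vary across the image."""
    H, W = short_f.shape
    th, tw = H // tiles, W // tiles
    luts = np.empty((tiles, tiles, 256), dtype=np.int64)
    for i in range(tiles):
        for j in range(tiles):
            sl = np.s_[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
            luts[i, j] = histogram_match_lut(short_f[sl], long_f[sl], motion[sl])
    return luts
```

Local (per-tile) matching handles spatially varying illumination that a single global histogram match would miss.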
  • Patent number: 10911691
    Abstract: A method includes obtaining, using at least one image sensor of an electronic device, multiple image frames of a scene. The multiple image frames include a plurality of short image frames at a first exposure level and a plurality of long image frames at a second exposure level longer than the first exposure level. The method also includes generating a short reference image frame and a long reference image frame using the multiple image frames. The method further includes selecting, using a processor of the electronic device, the short reference image frame or the long reference image frame as a reference frame, where the selection is based on an amount of saturated motion in the long image frame and an amount of a shadow region in the short image frame. In addition, the method includes generating a final image of the scene using the reference frame.
    Type: Grant
    Filed: December 18, 2019
    Date of Patent: February 2, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Long N. Le, Ruiwen Zhen, John W. Glotzbach, Hamid R. Sheikh
  • Publication number: 20200396370
    Abstract: A method includes obtaining multiple image frames of a scene using at least one sensor of an electronic device. The multiple image frames include a first image frame and a second image frame having a longer exposure than the first image frame. The method also includes generating a label map that identifies pixels in the multiple image frames that are to be used in an image. The method further includes generating the image of the scene using the pixels extracted from the image frames based on the label map. The label map may include multiple labels, and each label may be associated with at least one corresponding pixel and may include a discrete value that identifies one of the multiple image frames from which the at least one corresponding pixel is extracted.
    Type: Application
    Filed: August 19, 2019
    Publication date: December 17, 2020
    Inventors: Ruiwen Zhen, John W. Glotzbach, Hamid R. Sheikh, Ibrahim Pekkucuksen, Zhen Tong, Long N. Le
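The label-map idea above — a discrete value per pixel naming the source frame — reduces to an index-gather at composition time. The saturation-based labeling rule below is a toy stand-in for whatever rule actually produces the map; the threshold is illustrative.

```python
import numpy as np

def make_label_map(short_f, long_f, sat=250):
    """Toy labeling rule: take the long exposure everywhere except where it is
    saturated, where the short exposure still holds detail. Label 0 selects
    the short frame, label 1 the long frame."""
    return np.where(long_f >= sat, 0, 1)

def compose_from_label_map(frames, label_map):
    """Assemble the output image by reading each pixel from the frame whose
    index the label map stores at that position."""
    stack = np.stack(frames)                                  # (N, H, W)
    return np.take_along_axis(stack, label_map[None], axis=0)[0]
```

Because the labels are discrete rather than fractional blend weights, each output pixel comes whole from exactly one input frame.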
  • Publication number: 20200357102
    Abstract: A method for multi-frame blending includes obtaining at least two image frames of a scene. One of the image frames is associated with a shorter exposure time and a higher sensitivity and represents a reference image frame. At least one other of the image frames is associated with a longer exposure time and a lower sensitivity and represents at least one non-reference image frame. The method also includes blending the reference and non-reference image frames into a blended image such that (i) one or more motion regions of the blended image are based more on the reference image frame and (ii) one or more stationary regions of the blended image are based more on the at least one non-reference image frame.
    Type: Application
    Filed: September 16, 2019
    Publication date: November 12, 2020
    Inventors: Ibrahim Pekkucuksen, Hamid R. Sheikh, John W. Glotzbach
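A minimal per-pixel sketch of the blending rule described above: a soft weight derived from frame difference pushes motion regions toward the sharp short-exposure reference and stationary regions toward the clean long-exposure frame. The sigmoid weighting, threshold, and slope are illustrative assumptions, not the patent's actual criterion.

```python
import numpy as np

def blend_motion_aware(ref, non_ref, motion_thresh=20.0, sigma=5.0):
    """Blend so motion regions lean on the (sharp, short-exposure, high-ISO)
    reference and stationary regions lean on the (clean, long-exposure)
    non-reference. A large per-pixel difference suggests motion, so the
    reference weight rises there via a sigmoid."""
    diff = np.abs(ref.astype(np.float64) - non_ref.astype(np.float64))
    w_ref = 1.0 / (1.0 + np.exp(-(diff - motion_thresh) / sigma))
    return w_ref * ref + (1.0 - w_ref) * non_ref
```

In stationary regions the two frames agree, so the blend inherits the non-reference frame's lower noise; in motion regions the reference dominates and ghosting is avoided.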
  • Patent number: 10805649
    Abstract: A method and device for blending multiple related frames into a single frame to reduce noise are disclosed. A method includes comparing an input frame to a corresponding reference frame in order to determine if at least one object that is in both frames moves in the input frame, and also to determine edge strengths of the at least one object. The method further includes, based on the comparison, determining which regions of the input frame to blend with corresponding regions of the reference frame, which regions of the input frame not to blend with corresponding regions of the reference frame, and which regions of the input frame to partially blend with corresponding regions of the reference frame.
    Type: Grant
    Filed: January 4, 2018
    Date of Patent: October 13, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Ibrahim E. Pekkucuksen, John Glotzbach, Hamid R. Sheikh, Rahul Rithe
  • Publication number: 20200267300
    Abstract: An electronic device, method, and computer readable medium for compositing high dynamic range frames are provided. The electronic device includes a camera, and a processor coupled to the camera. The processor registers a plurality of multi-exposure frames with a hybrid of matched features to align non-reference frames with a reference frame; generates blending maps of the plurality of multi-exposure frames to reduce moving ghost artifacts and identify local areas that are well-exposed in the plurality of multi-exposure frames; and blends the plurality of multi-exposure frames weighted by the blending maps using a two-step weight-constrained exposure fusion technique into a high dynamic range (HDR) frame.
    Type: Application
    Filed: February 15, 2019
    Publication date: August 20, 2020
    Inventors: Ruiwen Zhen, John Glotzbach, Ibrahim Pekkucuksen, Hamid R. Sheikh
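A single-step toy version of the blending maps described above: a Gaussian well-exposedness term peaked at mid-gray, attenuated where a frame disagrees with the reference (a crude ghost test — a real pipeline, including the two-step weight-constrained fusion named in the abstract, would exposure-match frames before comparing). All constants are illustrative assumptions.

```python
import numpy as np

def blending_maps(frames, ref_idx=0, sigma=0.2, ghost_thresh=0.2, ghost_atten=0.05):
    """Per-frame, per-pixel blending weights, normalized to sum to 1 at each
    pixel: well-exposed pixels get high weight, pixels that disagree with the
    reference (candidate ghosts) are attenuated."""
    stack = np.stack([np.asarray(f, dtype=np.float64) / 255.0 for f in frames])
    well = np.exp(-0.5 * ((stack - 0.5) / sigma) ** 2)   # well-exposedness
    ghost = np.abs(stack - stack[ref_idx])               # motion/ghost cue
    well *= np.where(ghost > ghost_thresh, ghost_atten, 1.0)
    return well / well.sum(axis=0, keepdims=True)

def fuse(frames, maps):
    """Weighted per-pixel blend of the frames by their blending maps."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    return (stack * maps).sum(axis=0)
```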
  • Publication number: 20200265555
    Abstract: A method includes obtaining multiple image frames of a scene using at least one camera of an electronic device and processing the multiple image frames to generate a higher-resolution image of the scene. Processing the multiple image frames includes generating an initial estimate of the scene based on the multiple image frames. Processing the multiple image frames also includes, in each of multiple iterations, (i) generating a current estimate of the scene based on the image frames and a prior estimate of the scene and (ii) regularizing the generated current estimate of the scene. The regularized current estimate of the scene from one iteration represents the prior estimate of the scene in a subsequent iteration. The iterations continue until the estimates of the scene converge on the higher-resolution image of the scene.
    Type: Application
    Filed: February 18, 2019
    Publication date: August 20, 2020
    Inventors: Omar A. Elgendy, John W. Glotzbach, Hamid R. Sheikh
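The iteration structure described above — form a current estimate from the frames and the prior estimate, regularize it, and feed the regularized estimate forward as the next prior — can be sketched with a simple gradient step. The 2x2-average observation model, pixel-replication upsampling, and smoothing regularizer are illustrative assumptions (a real system would model per-frame sub-pixel shifts).

```python
import numpy as np

def downsample(x):      # assumed observation model: 2x2 average pooling
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

def upsample(y):        # push low-res values back up: replicate into 2x2 blocks
    return np.repeat(np.repeat(y, 2, axis=0), 2, axis=1)

def smooth(x):          # regularizer: mild 4-neighbour averaging
    p = np.pad(x, 1, mode='edge')
    return 0.5 * x + 0.125 * (p[:-2, 1:-1] + p[2:, 1:-1]
                              + p[1:-1, :-2] + p[1:-1, 2:])

def super_resolve(frames, iters=100, lr=1.0, reg=0.05):
    """Each iteration forms a current estimate from the frames and the prior
    estimate (a data-fidelity gradient step), then regularizes it; the
    regularized estimate becomes the prior for the next iteration."""
    y = np.mean(np.stack(frames), axis=0)   # fused low-res observation
    x = upsample(y)                         # initial estimate of the scene
    for _ in range(iters):
        x = x - lr * upsample(downsample(x) - y)   # data-fidelity step
        x = (1 - reg) * x + reg * smooth(x)        # regularization step
    return x
```

With `reg=0` the data term alone pins the estimate to the observations; the regularizer trades a little fidelity for smoothness in the higher-resolution unknowns.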
  • Publication number: 20200267299
    Abstract: A method includes capturing multiple ambient images of a scene using at least one camera of an electronic device and without using a flash of the electronic device. The method also includes capturing multiple flash images of the scene using the at least one camera of the electronic device and during firing of a pilot flash sequence using the flash. The method further includes analyzing multiple pairs of images to estimate exposure differences obtained using the flash, where each pair of images includes one of the ambient images and one of the flash images that are both captured using a common camera exposure and where different pairs of images are captured using different camera exposures. In addition, the method includes determining a flash strength for the scene based on the estimate of the exposure differences and firing the flash based on the determined flash strength.
    Type: Application
    Filed: February 18, 2019
    Publication date: August 20, 2020
    Inventors: Long N. Le, Hamid R. Sheikh, John W. Glotzbach
  • Publication number: 20200265567
    Abstract: A method includes obtaining multiple image frames of a scene using at least one camera of an electronic device. The method also includes using a convolutional neural network to generate blending maps associated with the image frames. The blending maps contain or are based on both a measure of motion in the image frames and a measure of how well exposed different portions of the image frames are. The method further includes generating a final image of the scene using at least some of the image frames and at least some of the blending maps. The final image of the scene may be generated by blending the at least some of the image frames using the at least some of the blending maps, and the final image of the scene may include image details that are lost in at least one of the image frames due to over-exposure or under-exposure.
    Type: Application
    Filed: February 18, 2019
    Publication date: August 20, 2020
    Inventors: Yuting Hu, Ruiwen Zhen, John W. Glotzbach, Ibrahim Pekkucuksen, Hamid R. Sheikh
  • Patent number: 10742892
    Abstract: A method includes capturing multiple ambient images of a scene using at least one camera of an electronic device and without using a flash of the electronic device. The method also includes capturing multiple flash images of the scene using the at least one camera of the electronic device and during firing of a pilot flash sequence using the flash. The method further includes analyzing multiple pairs of images to estimate exposure differences obtained using the flash, where each pair of images includes one of the ambient images and one of the flash images that are both captured using a common camera exposure and where different pairs of images are captured using different camera exposures. In addition, the method includes determining a flash strength for the scene based on the estimate of the exposure differences and firing the flash based on the determined flash strength.
    Type: Grant
    Filed: February 18, 2019
    Date of Patent: August 11, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Long N. Le, Hamid R. Sheikh, John W. Glotzbach
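The analysis step above — estimate the pilot flash's brightness contribution from each ambient/flash pair, then set the main flash strength from it — can be sketched as below. The linear-lift model, the target mean of 118, and the [0, 1] strength range (1.0 = pilot power) are all illustrative assumptions, not values from the patent.

```python
import numpy as np

def flash_strength(pairs, target_mean=118.0):
    """`pairs` holds (ambient, pilot_flash) images captured at the same camera
    exposure within each pair, with different exposures across pairs. The mean
    brightness lift per pair estimates the pilot flash's contribution; the
    strength is scaled so ambient plus flash reaches the target mean."""
    lifts = [float(np.mean(f)) - float(np.mean(a)) for a, f in pairs]
    ambs = [float(np.mean(a)) for a, f in pairs]
    lift, amb = float(np.mean(lifts)), float(np.mean(ambs))
    if lift <= 0.0:
        return 0.0          # flash contributes nothing (e.g. a bright scene)
    return float(np.clip((target_mean - amb) / lift, 0.0, 1.0))
```

Averaging the lift across pairs at different exposures is what makes the estimate robust to any single exposure being clipped or underexposed.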
  • Patent number: 10719927
    Abstract: An electronic device, method, and computer readable medium for multi-frame image processing using semantic saliency are provided. The electronic device includes a camera, a display, and a processor. The processor is coupled to the camera and the display. The processor receives a plurality of frames captured by the camera during a capture event; identifies a salient region in each of the plurality of frames; determines a reference frame from the plurality of frames based on the identified salient regions; and fuses non-reference frames with the determined reference frame into a completed image output.
    Type: Grant
    Filed: January 4, 2018
    Date of Patent: July 21, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Raja Bala, Hamid R. Sheikh, John Glotzbach
  • Publication number: 20200186710
    Abstract: A method includes, in a first mode, positioning first and second tiltable image sensor modules of an image sensor array of an electronic device so that a first optical axis of the first tiltable image sensor module and a second optical axis of the second tiltable image sensor module are substantially perpendicular to a surface of the electronic device, and the first and second tiltable image sensor modules are within a thickness profile of the electronic device. The method also includes, in a second mode, tilting the first and second tiltable image sensor modules so that the first optical axis of the first tiltable image sensor module and the second optical axis of the second tiltable image sensor module are not perpendicular to the surface of the electronic device, and at least part of the first and second tiltable image sensor modules are no longer within the thickness profile of the electronic device.
    Type: Application
    Filed: December 5, 2019
    Publication date: June 11, 2020
    Inventors: Hamid R. Sheikh, Youngjun Yoo, Seok-Jun Lee, Michael O. Polley
  • Patent number: 10554890
    Abstract: A method includes capturing multiple pairs of images of a scene at different exposures using at least one camera of an electronic device. Each pair of images includes (i) an ambient image of the scene captured without using a flash of the electronic device and (ii) a flash image of the scene captured using the flash of the electronic device. The method also includes rendering a final image of the scene with a bokeh that is determined using the multiple pairs of images. One of the ambient images or the flash images are captured in order of increasing exposure time, and the other of the ambient images or the flash images are captured in order of decreasing exposure time. The method may also include estimating a depth map associated with the scene using the pairs of images, where the bokeh is based on the depth map.
    Type: Grant
    Filed: February 18, 2019
    Date of Patent: February 4, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Long N. Le, John W. Glotzbach, Hamid R. Sheikh, Michael O. Polley
  • Publication number: 20190246130
    Abstract: Methods and systems for compressed domain progressive application of computer vision techniques. A method for decoding video data includes receiving a video stream that is encoded for multi-stage decoding. The method includes partially decoding the video stream by performing one or more stages of the multi-stage decoding. The method includes determining whether a decision for a computer vision system can be identified based on the partially decoded video stream. Additionally, the method includes generating the decision for the computer vision system based on decoding of the video stream. A system for encoding video data includes a processor configured to receive the video data from a camera, encode the video data received from the camera into a video stream for consumption by a computer vision system, and include metadata with the encoded video stream to indicate whether a decision for the computer vision system can be identified from the metadata.
    Type: Application
    Filed: February 8, 2018
    Publication date: August 8, 2019
    Inventors: Hamid R. Sheikh, Youngjun Yoo, Michael Polley, Chenchi Luo, David Liu
  • Publication number: 20190011978
    Abstract: An embodiment of this disclosure provides a wearable device. The wearable device includes a memory configured to store a plurality of content for display, a transceiver configured to receive the plurality of content from a connected device, a display configured to display the plurality of content, and a processor coupled to the memory, the display, and the transceiver. The processor is configured to control the display to display at least some of the plurality of content in a spatially arranged format. The displayed content is on the display at a display position. The plurality of content, when shown on the connected device, is not in the spatially arranged format. The processor is also configured to receive movement information based on a movement of the wearable device. The processor is also configured to adjust the display position of the displayed content according to the movement information of the wearable device.
    Type: Application
    Filed: July 10, 2017
    Publication date: January 10, 2019
    Inventors: Sourabh Ravindran, Hamid R. Sheikh, Michael Polley, Youngjun Yoo
  • Publication number: 20180192098
    Abstract: A method and device for blending multiple related frames into a single frame to reduce noise are disclosed. A method includes comparing an input frame to a corresponding reference frame in order to determine if at least one object that is in both frames moves in the input frame, and also to determine edge strengths of the at least one object. The method further includes, based on the comparison, determining which regions of the input frame to blend with corresponding regions of the reference frame, which regions of the input frame not to blend with corresponding regions of the reference frame, and which regions of the input frame to partially blend with corresponding regions of the reference frame.
    Type: Application
    Filed: January 4, 2018
    Publication date: July 5, 2018
    Inventors: Ibrahim E. Pekkucuksen, John Glotzbach, Hamid R. Sheikh, Rahul Rithe