Patents by Inventor Richard D. Seely

Richard D. Seely is a named inventor on the following patent filings. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO). Brief, illustrative code sketches of several of the techniques described in the abstracts appear after the listing.

  • Patent number: 11776141
    Abstract: Devices, methods, and non-transitory program storage devices (NPSDs) for an improved, so-called “hybrid” image registration process are disclosed herein, comprising: obtaining a first set of captured images, wherein the first set of captured images comprises a reference image and one or more bracketed images; and for each of the one or more bracketed images: performing a first (e.g., global) registration operation and a second (e.g., dense, or other localized) registration operation on the bracketed image with respect to the reference image, wherein each of the first and second registration operations produces an output; generating a blend map for the bracketed image, wherein each value in the blend map indicates whether to use the first or second registration operation output for a corresponding one or more pixels when registering the bracketed image with the reference image; and registering the bracketed image with the reference image, according to the generated blend map.
    Type: Grant
    Filed: June 24, 2021
    Date of Patent: October 3, 2023
    Assignee: Apple Inc.
    Inventors: Gijesh Varghese, Farhan A. Baqai, Giancarlo Todone, Hao Sun, Richard D. Seely
  • Patent number: 11682108
    Abstract: This disclosure relates to various implementations that dynamically adjust one or more shallow depth of field (SDOF) parameters based on a designated, artificial aperture value. The implementations obtain a designated, artificial aperture value that modifies an initial aperture value for an image frame. The designated, artificial aperture value generates a determined amount of synthetically-produced blur within the image frame. The implementations determine an aperture adjustment factor based on the designated, artificial aperture value in relation to a default so-called “tuning aperture value” (for which the camera's operations may have been optimized). The implementations may then modify, based on the aperture adjustment factor, one or more SDOF parameters for an SDOF operation, which may, e.g., be configured to render a determined amount of synthetic bokeh within the image frame.
    Type: Grant
    Filed: September 13, 2021
    Date of Patent: June 20, 2023
    Assignee: Apple Inc.
    Inventors: Richard D. Seely, Alexandre Naaman, Patrick Shehane, Andre Souza Dos Santos, Behkish J. Manzari
  • Patent number: 11570374
    Abstract: Devices, methods, and computer-readable media are disclosed, describing an adaptive, subject-aware approach for image bracket selection and fusion, e.g., to generate high quality images in a wide variety of capturing conditions, including low light conditions. An incoming image stream may be obtained from an image capture device, comprising images captured using differing default exposure values, e.g., according to a predetermined pattern. When a capture request is received, it may be detected whether one or more human or animal subjects are present in the incoming image stream. If a subject is detected, an exposure time of one or more images selected from the incoming image stream may be reduced relative to its default exposure time. Prior to the fusion operation, one of the selected images may be designated a reference image for the fusion operation based, at least in part, on a sharpness score and/or a blink score of the image.
    Type: Grant
    Filed: June 24, 2021
    Date of Patent: January 31, 2023
    Assignee: Apple Inc.
    Inventors: Hao Sun, Farhan A. Baqai, Giancarlo Todone, Gijesh Varghese, Morten Poulsen, Richard D. Seely, Richard J. Shields, Srivani Pinneli, Wu Cheng
  • Publication number: 20210407050
    Abstract: This disclosure relates to various implementations that dynamically adjust one or more shallow depth of field (SDOF) parameters based on a designated, artificial aperture value. The implementations obtain a designated, artificial aperture value that modifies an initial aperture value for an image frame. The designated, artificial aperture value generates a determined amount of synthetically-produced blur within the image frame. The implementations determine an aperture adjustment factor based on the designated, artificial aperture value in relation to a default so-called “tuning aperture value” (for which the camera's operations may have been optimized). The implementations may then modify, based on the aperture adjustment factor, one or more SDOF parameters for an SDOF operation, which may, e.g., be configured to render a determined amount of synthetic bokeh within the image frame.
    Type: Application
    Filed: September 13, 2021
    Publication date: December 30, 2021
    Inventors: Richard D. Seely, Alexandre Naaman, Patrick Shehane, Andre Souza Dos Santos, Behkish J. Manzari
  • Patent number: 11120528
    Abstract: This disclosure relates to various implementations that dynamically adjust one or more shallow depth of field (SDOF) parameters based on a designated, artificial aperture value. The implementations obtain a designated, artificial aperture value that modifies an initial aperture value for an image frame. The designated, artificial aperture value generates a determined amount of synthetically-produced blur within the image frame. The implementations determine an aperture adjustment factor based on the designated, artificial aperture value in relation to a default so-called “tuning aperture value” (for which the camera's operations may have been optimized). The implementations may then modify, based on the aperture adjustment factor, one or more SDOF parameters for an SDOF operation, which may, e.g., be configured to render a determined amount of synthetic bokeh within the image frame.
    Type: Grant
    Filed: September 10, 2019
    Date of Patent: September 14, 2021
    Assignee: Apple Inc.
    Inventors: Richard D. Seely, Alexandre Naaman, Patrick Shehane, Andre Souza Dos Santos, Behkish J. Manzari
  • Patent number: 11102421
    Abstract: An incoming image stream may be obtained from an image capture device operating in low-light conditions and/or a simulated long exposure image capture mode. As images are obtained, a weighting operation may be performed on the pixels of the captured images to generate and/or update an accumulative weight map, wherein the weighting is based, e.g., on the proximity of the captured pixels' values to the respective image capture device's maximum observable pixel value. As batches of images are obtained, they may be fused, e.g., according to the accumulative weight map, in a memory-efficient manner that places an upper limit on the overall memory footprint of the fusion operations, to simulate an actual long exposure image capture. In some embodiments, the weight map may be stored at a lower resolution than the obtained images and then upscaled, e.g., via the use of guided filters, before being applied in the fusion operations.
    Type: Grant
    Filed: August 18, 2020
    Date of Patent: August 24, 2021
    Assignee: Apple Inc.
    Inventors: Richard D. Seely, Giancarlo Todone, Hao Sun, Farhan A. Baqai
  • Patent number: 10992845
    Abstract: This disclosure relates to techniques for synthesizing out of focus blurring effects in digital images. Cameras having wide aperture lenses typically capture images with a shallow depth of field (SDOF). SDOF cameras are often used in portrait photography, since they emphasize subjects, while deemphasizing the background via blurring. Simulating this kind of blurring using a large depth of field (LDOF) camera may require a large amount of computational resources, i.e., to simulate the physical effects of using a wide aperture lens, while constructing a synthetic SDOF image. Moreover, cameras having smaller lens apertures, such as those in mobile phones, may not have the ability to accurately estimate or recreate the true color of clipped background light sources. Described herein are techniques to synthesize out of focus background blurring effects that attempt to reproduce accurate light intensity and color values for clipped background light sources in images captured by LDOF cameras.
    Type: Grant
    Filed: September 10, 2019
    Date of Patent: April 27, 2021
    Assignee: Apple Inc.
    Inventors: Richard D. Seely, Shuang Gao, Alexandre Naaman, Patrick Shehane
  • Patent number: 10410327
    Abstract: This disclosure relates to techniques for synthesizing out of focus effects in digital images. Digital single-lens reflex (DSLR) cameras and other cameras having wide aperture lenses typically capture images with a shallow depth of field (SDOF). SDOF photography is often used in portrait photography, since it emphasizes the subject, while deemphasizing the background via blurring. Simulating this kind of blurring using a large depth of field (LDOF) camera may require a large amount of computational resources, i.e., in order to simulate the physical effects of using a wide aperture lens while constructing a synthetic SDOF image. However, cameras having smaller lens apertures, such as those in mobile phones, may not have the processing power to simulate the spreading of all background light sources in a reasonable amount of time. Thus, described herein are techniques to synthesize out-of-focus background blurring effects in a computationally-efficient manner for images captured by LDOF cameras.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: September 10, 2019
    Assignee: Apple Inc.
    Inventors: Richard D. Seely, Michael W. Tao, Alexander Lindskog, Geoffrey T. Anneheim
  • Publication number: 20180350043
    Abstract: This disclosure relates to techniques for synthesizing out of focus effects in digital images. Digital single-lens reflex (DSLR) cameras and other cameras having wide aperture lenses typically capture images with a shallow depth of field (SDOF). SDOF photography is often used in portrait photography, since it emphasizes the subject, while deemphasizing the background via blurring. Simulating this kind of blurring using a large depth of field (LDOF) camera may require a large amount of computational resources, i.e., in order to simulate the physical effects of using a wide aperture lens while constructing a synthetic SDOF image. However, cameras having smaller lens apertures, such as those in mobile phones, may not have the processing power to simulate the spreading of all background light sources in a reasonable amount of time. Thus, described herein are techniques to synthesize out-of-focus background blurring effects in a computationally-efficient manner for images captured by LDOF cameras.
    Type: Application
    Filed: May 25, 2018
    Publication date: December 6, 2018
    Inventors: Richard D. Seely, Michael W. Tao, Alexander Lindskog, Geoffrey T. Anneheim
  • Patent number: 9773192
    Abstract: Techniques to identify and track a pre-identified region-of-interest (ROI) through a temporal sequence of frames/images are described. In general, a down-sampled color gradient (edge map) of an arbitrarily sized ROI from a prior frame may be used to generate a small template. This initial template may be used to identify a region of a new or current frame that may be overscanned and used to create a current frame's edge map. By comparing the prior frame's template to the current frame's edge map, a cost value or image may be found and used to identify the current frame's ROI center. The size of the current frame's ROI may be found by varying the size of putative new ROIs and testing for their congruence with the prior frame's template. Subsequent ROIs for subsequent frames may be identified to effectively track an arbitrarily sized ROI through a sequence of video frames.
    Type: Grant
    Filed: June 7, 2015
    Date of Patent: September 26, 2017
    Assignee: Apple Inc.
    Inventors: Xiaoxing Li, Geoffrey T. Anneheim, Jianping Zhou, Richard D. Seely, Marco Zuliani
  • Publication number: 20160358341
    Abstract: Techniques to identify and track a pre-identified region-of-interest (ROI) through a temporal sequence of frames/images are described. In general, a down-sampled color gradient (edge map) of an arbitrarily sized ROI from a prior frame may be used to generate a small template. This initial template may be used to identify a region of a new or current frame that may be overscanned and used to create a current frame's edge map. By comparing the prior frame's template to the current frame's edge map, a cost value or image may be found and used to identify the current frame's ROI center. The size of the current frame's ROI may be found by varying the size of putative new ROIs and testing for their congruence with the prior frame's template. Subsequent ROIs for subsequent frames may be identified to effectively track an arbitrarily sized ROI through a sequence of video frames.
    Type: Application
    Filed: June 7, 2015
    Publication date: December 8, 2016
    Inventors: Xiaoxing Li, Geoffrey T. Anneheim, Jianping Zhou, Richard D. Seely, Marco Zuliani
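
The sketches below illustrate, in greatly simplified form, some of the techniques summarized in the abstracts above. They are minimal, hedged approximations written for this listing; none of the function names, parameters, or heuristics are taken from the patents themselves, and they should not be read as the claimed implementations.

First, a sketch of the blend-map idea behind patent 11776141 (hybrid image registration): given a globally registered and a densely (locally) registered version of a bracketed image, a per-pixel map selects whichever output agrees better with the reference frame. The per-pixel error metric used here is an assumption.

```python
import numpy as np

def blend_registrations(global_reg, dense_reg, reference):
    """Per pixel, keep whichever registration output is closer to the
    reference frame; return the binary blend map and the blended result."""
    err_global = np.abs(global_reg - reference).mean(axis=2)   # H x W error maps
    err_dense = np.abs(dense_reg - reference).mean(axis=2)
    blend_map = (err_dense < err_global).astype(np.float32)    # 1 -> use dense output
    registered = np.where(blend_map[..., None] > 0.5, dense_reg, global_reg)
    return blend_map, registered

# Toy usage with synthetic float images in [0, 1].
reference = np.random.rand(64, 64, 3).astype(np.float32)
globally_aligned = reference + 0.05 * np.random.randn(64, 64, 3).astype(np.float32)
densely_aligned = reference + 0.01 * np.random.randn(64, 64, 3).astype(np.float32)
blend_map, registered = blend_registrations(globally_aligned, densely_aligned, reference)
```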
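
A sketch of the aperture-adjustment idea behind patents 11682108 and 11120528 (and publication 20210407050): a user-selected artificial aperture value is compared against the default tuning aperture to derive an adjustment factor, which then rescales aperture-dependent SDOF parameters. The specific parameter names, the 1/f-number scaling, and the square-root gain rule are illustrative assumptions only.

```python
def aperture_adjustment_factor(artificial_f_number, tuning_f_number=4.5):
    """Blur-circle size scales roughly with 1 / f-number, so the factor is the
    ratio of the tuning f-number to the user-selected (artificial) f-number."""
    return tuning_f_number / artificial_f_number

def adjust_sdof_params(params, artificial_f_number, tuning_f_number=4.5):
    """Rescale aperture-dependent SDOF parameters by the adjustment factor.
    The parameter names and the square-root gain rule are hypothetical."""
    k = aperture_adjustment_factor(artificial_f_number, tuning_f_number)
    return {
        "max_blur_radius_px": params["max_blur_radius_px"] * k,
        "highlight_gain": params["highlight_gain"] * k ** 0.5,
    }

default_params = {"max_blur_radius_px": 24.0, "highlight_gain": 1.0}
print(adjust_sdof_params(default_params, artificial_f_number=2.0))  # wider aperture -> stronger bokeh
print(adjust_sdof_params(default_params, artificial_f_number=8.0))  # narrower aperture -> weaker bokeh
```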
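
A sketch of two pieces of the subject-aware bracket selection described for patent 11570374: shortening exposure times when a human or animal subject is detected, and choosing the fusion reference frame from sharpness and blink scores. The score combination and the fixed reduction factor are hypothetical.

```python
import numpy as np

def adjust_exposures(default_exposures_ms, subject_detected, reduction=0.5):
    """If a human or animal subject is detected, shorten the selected frames'
    exposure times to limit subject motion blur."""
    if not subject_detected:
        return list(default_exposures_ms)
    return [t * reduction for t in default_exposures_ms]

def select_reference(sharpness_scores, blink_scores, blink_weight=0.5):
    """Pick the fusion reference: sharp frames score higher, frames where the
    subject appears to be blinking score lower."""
    combined = np.asarray(sharpness_scores) - blink_weight * np.asarray(blink_scores)
    return int(np.argmax(combined))

print(adjust_exposures([125, 250, 500], subject_detected=True))   # [62.5, 125.0, 250.0]
print(select_reference([0.6, 0.8, 0.7], [0.0, 0.4, 0.1]))         # index 2
```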
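
A sketch of the accumulative-weight-map idea behind patent 11102421: as frames stream in, each pixel's contribution is weighted by how far it sits below the maximum observable value, and only a running sum and weight map are retained, so memory use stays constant regardless of how many frames are fused. The weight curve and its floor are assumptions; the low-resolution weight map and guided-filter upscaling mentioned in the abstract are omitted.

```python
import numpy as np

MAX_VAL = 1.0  # assumed maximum observable pixel value for float images in [0, 1]

def fuse_long_exposure(image_stream):
    """Accumulate a weighted sum of incoming frames: pixels near the clipping
    point get lower weight so they do not dominate the simulated long
    exposure. Only the running sum and the accumulative weight map are kept."""
    acc_sum, acc_weight = None, None
    for frame in image_stream:
        luma = frame.mean(axis=2)
        weight = np.clip(1.0 - luma / MAX_VAL, 0.05, 1.0)   # assumed weight curve
        if acc_sum is None:
            acc_sum = np.zeros_like(frame)
            acc_weight = np.zeros_like(luma)
        acc_sum += frame * weight[..., None]
        acc_weight += weight
    return acc_sum / acc_weight[..., None]

frames = (np.random.rand(32, 32, 3).astype(np.float32) for _ in range(8))
long_exposure = fuse_long_exposure(frames)
```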
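
A sketch of the clipped-highlight handling behind patent 10992845: background pixels that sit at or near the clipping point are boosted before the background is blurred, so bright light sources remain bright in the synthetic bokeh rather than being averaged down. The fixed boost factor and the box blur stand in for the intensity/color recovery and aperture-shaped spreading described in the abstract, and the background segmentation mask is assumed to be given.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def bokeh_with_bright_highlights(image, background_mask, clip_level=0.98,
                                 highlight_boost=4.0, blur_size=15):
    """Boost background pixels that appear clipped before blurring, so that
    saturated light sources render as bright bokeh instead of washing out."""
    img = image.astype(np.float32)
    clipped = (img.max(axis=2) >= clip_level) & background_mask
    boosted = img.copy()
    boosted[clipped] *= highlight_boost            # crude stand-in for intensity recovery
    blurred = np.stack([uniform_filter(boosted[..., c], size=blur_size)
                        for c in range(3)], axis=2)
    out = np.where(background_mask[..., None], blurred, img)
    return np.clip(out, 0.0, 1.0)

image = np.random.rand(64, 64, 3).astype(np.float32)
mask = np.zeros((64, 64), dtype=bool)
mask[:, 32:] = True                                # assume the right half is background
result = bokeh_with_bright_highlights(image, mask)
```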
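
A sketch of one common way to make synthetic shallow depth of field cheap, in the spirit of patent 10410327 and publication 20180350043: rather than spreading every background pixel individually, the image is pre-blurred at a handful of strengths and each pixel interpolates between the two nearest levels according to a circle-of-confusion estimate from the depth map. The linear depth-to-blur mapping and the Gaussian blur levels are assumptions, not the patented method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def efficient_sdof(image, depth, focus_depth, max_sigma=8.0, levels=4):
    """Blend a small stack of pre-blurred copies of the image, choosing the
    blend per pixel from a circle-of-confusion estimate."""
    img = image.astype(np.float32)
    coc = np.clip(np.abs(depth - focus_depth), 0.0, 1.0) * max_sigma
    sigmas = np.linspace(0.0, max_sigma, levels)
    blurred = [img if s == 0 else
               np.stack([gaussian_filter(img[..., c], s) for c in range(3)], axis=2)
               for s in sigmas]
    idx = coc / max_sigma * (levels - 1)           # fractional blur-level index per pixel
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, levels - 1)
    frac = idx - lo
    out = np.zeros_like(img)
    for level in range(levels):                    # interpolate between the two nearest levels
        w = (lo == level) * (1.0 - frac) + (hi == level) * frac
        out += w[..., None] * blurred[level]
    return out

image = np.random.rand(48, 48, 3).astype(np.float32)
depth = np.tile(np.linspace(0.0, 1.0, 48), (48, 1))  # synthetic depth ramp, left to right
sdof = efficient_sdof(image, depth, focus_depth=0.2)
```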
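
Finally, a sketch of the edge-map template matching behind patent 9773192 and publication 20160358341: the previous frame's ROI is reduced to a down-sampled gradient-magnitude template, which is slid over an overscanned search window in the current frame; the lowest-cost position gives the new ROI center. The sum-of-absolute-differences cost, the fixed search margin, and the grayscale input are assumptions, and the scale search over candidate ROI sizes described in the abstract is omitted here.

```python
import numpy as np

def edge_map(patch, scale=4):
    """Down-sampled gradient-magnitude ("edge") map of an image patch."""
    small = patch[::scale, ::scale].astype(np.float32)
    gy, gx = np.gradient(small)
    return np.hypot(gx, gy)

def track_roi(prev_frame, prev_roi, cur_frame, search_margin=16):
    """Locate the previous ROI in the current frame by sliding the previous
    ROI's edge-map template over an overscanned search window and taking the
    lowest sum-of-absolute-differences cost."""
    x, y, w, h = prev_roi
    template = edge_map(prev_frame[y:y + h, x:x + w])
    # Overscanned search region around the previous ROI position.
    x0, y0 = max(0, x - search_margin), max(0, y - search_margin)
    x1 = min(cur_frame.shape[1], x + w + search_margin)
    y1 = min(cur_frame.shape[0], y + h + search_margin)
    best_cost, best_xy = np.inf, (x, y)
    for yy in range(y0, y1 - h + 1):
        for xx in range(x0, x1 - w + 1):
            cost = np.abs(edge_map(cur_frame[yy:yy + h, xx:xx + w]) - template).sum()
            if cost < best_cost:
                best_cost, best_xy = cost, (xx, yy)
    return (*best_xy, w, h)

prev = np.random.rand(120, 160).astype(np.float32)
cur = np.roll(prev, shift=(3, 5), axis=(0, 1))     # simulate small frame-to-frame motion
print(track_roi(prev, (40, 30, 32, 32), cur))      # new (x, y, w, h) near (45, 33, 32, 32)
```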