Patents by Inventor Frank Doepke

Frank Doepke has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20160358634
    Abstract: The invention relates to systems, methods, and computer readable media for responding to a user snapshot request by capturing anticipatory pre-snapshot image data as well as post-snapshot image data. The captured information may be used, depending upon the embodiment, to create archival image information and image presentation information that is both useful and pleasing to a user. The captured information may automatically be trimmed or edited to facilitate creating an enhanced image, such as a moving still image. Varying embodiments of the invention offer techniques for trimming and editing based upon the following: exposure, brightness, focus, white balance, detected motion of the camera, substantive image analysis, detected sound, image metadata, and/or any combination of the foregoing.
    Type: Application
    Filed: September 25, 2015
    Publication date: December 8, 2016
    Inventors: Claus Molgaard, Brett M. Keating, George E. Williams, Marco Zuliani, Vincent Y. Wong, Frank Doepke, Ethan J. Tira-Thompson
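    The entry above describes buffering frames before and after a shutter press and then trimming the clip. As a rough illustration only (the class name, buffer sizes, and the brightness-only trimming rule are assumptions, not Apple's method), a sketch of that flow might look like this:

    ```python
    # Hypothetical sketch: keep a rolling buffer of pre-snapshot frames,
    # append post-snapshot frames on request, then trim frames whose
    # brightness deviates strongly from the user's snapshot. A real
    # implementation would also weigh focus, white balance, camera motion,
    # sound, and metadata, as the abstract notes.
    from collections import deque

    import numpy as np


    class AnticipatoryCapture:
        def __init__(self, pre_frames=15, post_frames=15):
            self.pre_buffer = deque(maxlen=pre_frames)  # newest pre-snapshot frames only
            self.post_frames = post_frames

        def on_preview_frame(self, frame):
            """Called continuously with preview frames."""
            self.pre_buffer.append(frame)

        def on_snapshot(self, snapshot_frame, frame_source):
            """Assemble pre- and post-snapshot frames around the user's shot."""
            clip = list(self.pre_buffer) + [snapshot_frame]
            for _ in range(self.post_frames):
                clip.append(next(frame_source))
            return self._trim(clip, reference=snapshot_frame)

        @staticmethod
        def _trim(clip, reference, max_delta=0.15):
            """Drop frames whose mean brightness differs from the snapshot by
            more than max_delta of the full 8-bit range."""
            ref = reference.mean()
            return [f for f in clip if abs(f.mean() - ref) <= max_delta * 255]


    # Synthetic usage: 3 buffered preview frames, the snapshot, 2 post frames.
    cap = AnticipatoryCapture(pre_frames=3, post_frames=2)
    for _ in range(5):
        cap.on_preview_frame(np.full((4, 4), 120, dtype=np.uint8))
    post = iter([np.full((4, 4), 118, dtype=np.uint8)] * 2)
    print(len(cap.on_snapshot(np.full((4, 4), 121, dtype=np.uint8), post)))  # 6
    ```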
  • Patent number: 9426409
    Abstract: Traditionally, time-lapse videos are constructed from images captured at time intervals called “temporal points of interest” or “temporal POIs.” Disclosed herein are systems and methods of constructing improved, motion-stabilized time-lapse videos using temporal points of interest and image similarity comparisons. According to some embodiments, a “burst” of images may be captured, centered around the aforementioned temporal points of interest. Then, each burst sequence of images may be analyzed, e.g., by performing an image similarity comparison between each image in the burst sequence and the image selected at the previous temporal point of interest.
    Type: Grant
    Filed: February 3, 2015
    Date of Patent: August 23, 2016
    Assignee: Apple Inc.
    Inventors: Sebastien X. Beysserie, Jason Klivington, Rolf Toft, Frank Doepke
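    To illustrate the burst-and-compare idea from the abstract above, here is a minimal sketch. The similarity metric (negative mean absolute pixel difference) and the seeding choice are placeholders, not the patented method:

    ```python
    # At each temporal point of interest (POI) a burst of frames is captured;
    # the frame most similar to the previously selected frame is kept.
    import numpy as np


    def similarity(a, b):
        """Stand-in metric: negative mean absolute difference (higher = more similar)."""
        return -np.mean(np.abs(a.astype(np.float32) - b.astype(np.float32)))


    def select_time_lapse_frames(bursts):
        """bursts: one list of same-sized image arrays per temporal POI.
        Returns one selected frame per POI."""
        selected = [bursts[0][len(bursts[0]) // 2]]  # seed with the first burst's center frame
        for burst in bursts[1:]:
            best = max(burst, key=lambda frame: similarity(frame, selected[-1]))
            selected.append(best)
        return selected
    ```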
  • Patent number: 9417763
    Abstract: The techniques disclosed herein use a compass, MEMS accelerometer, GPS module, and MEMS gyrometer to infer a frame of reference for a hand-held device. This can provide a true Frenet frame, i.e., X- and Y-vectors for the display, and also a Z-vector that points perpendicularly to the display. In fact, with various inertial clues from accelerometer, gyrometer, and other instruments that report their states in real time, it is possible to track the Frenet frame of the device in real time to provide a continuous 3D frame-of-reference. Once this continuous frame of reference is known, the position of a user's eyes may either be inferred or calculated directly by using a device's front-facing camera. With the position of the user's eyes and a continuous 3D frame-of-reference for the display, more realistic virtual 3D depictions of the objects on the device's display may be created and interacted with by the user.
    Type: Grant
    Filed: December 15, 2014
    Date of Patent: August 16, 2016
    Assignee: Apple Inc.
    Inventors: Mark Zimmer, Geoff Stahl, David Hayward, Frank Doepke
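    As a loose illustration of how a device frame of reference plus an eye position can drive a 3D effect (the pinhole parallax model and all numbers below are assumptions, not taken from the patent):

    ```python
    # The display's X and Y axes and its outward Z normal are taken as the
    # rows of a device-to-world rotation matrix (supplied here by the caller,
    # in practice fused from compass/accelerometer/gyro data). A simple
    # pinhole model then shifts an on-screen object according to where the
    # front-facing camera says the viewer's eye is.
    import numpy as np


    def device_frame(rotation_matrix):
        """Rows of the 3x3 rotation matrix: display X, display Y, and the
        Z vector perpendicular to the display."""
        x_axis, y_axis, z_axis = np.asarray(rotation_matrix, dtype=float)
        return x_axis, y_axis, z_axis


    def parallax_offset(eye_pos_device, object_depth):
        """Lateral shift (metres, in the display plane) of a virtual object
        rendered object_depth behind the screen, for an eye at eye_pos_device
        (device coordinates, z pointing toward the viewer)."""
        ex, ey, ez = eye_pos_device
        return (-ex * object_depth / ez, -ey * object_depth / ez)


    # Eye 30 cm from the screen and 5 cm to the right; object 2 cm "deep".
    print(parallax_offset((0.05, 0.0, 0.30), 0.02))  # ~(-0.0033, 0.0)
    ```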
  • Patent number: 9411413
    Abstract: The techniques disclosed herein may use various sensors to infer a frame of reference for a hand-held device. In fact, with various inertial clues from accelerometer, gyrometer, and other instruments that report their states in real time, it is possible to track a Frenet frame of the device in real time to provide an instantaneous (or continuous) 3D frame-of-reference. In addition to—or in place of—calculating this instantaneous (or continuous) frame of reference, the position of a user's head may either be inferred or calculated directly by using one or more of a device's optical sensors, e.g., an optical camera, infrared camera, laser, etc. With knowledge of the 3D frame-of-reference for the display and/or knowledge of the position of the user's head, more realistic virtual 3D depictions of the graphical objects on the device's display may be created—and interacted with—by the user.
    Type: Grant
    Filed: July 11, 2014
    Date of Patent: August 9, 2016
    Assignee: Apple Inc.
    Inventors: Ricardo Motta, Mark Zimmer, Geoff Stahl, David Hayward, Frank Doepke
  • Publication number: 20160227137
    Abstract: Lens flare mitigation techniques determine which pixels in images of a sequence of images are likely to be pixels affected by lens flare. Once the lens flare areas of the images are determined, unwanted lens flare effects may be mitigated by various approaches, including reducing border artifacts along a seam between successive images, discarding entire images of the sequence that contain lens flare areas, and using tone-mapping to reduce the visibility of lens flare.
    Type: Application
    Filed: February 1, 2016
    Publication date: August 4, 2016
    Inventors: Marius Tico, Paul M. Hubel, Frank Doepke, Todd S. Sachs
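    A toy version of the flare-detection idea (the thresholds and the bright-but-desaturated heuristic are assumptions for illustration, not the claimed technique) could look like this:

    ```python
    # Flag pixels that are bright and washed out in one frame but not in a
    # neighboring frame of the sequence, then attenuate them; tone-mapping the
    # flagged region is one of the mitigation options the abstract mentions.
    import numpy as np


    def likely_flare_mask(frame, neighbor, brightness_thresh=230, delta_thresh=60):
        """frame, neighbor: HxWx3 uint8 images of roughly the same scene.
        Returns a boolean HxW mask of suspected flare pixels in `frame`."""
        luma = frame.mean(axis=2)
        neighbor_luma = neighbor.mean(axis=2)
        saturation = frame.max(axis=2) - frame.min(axis=2)
        bright_and_flat = (luma > brightness_thresh) & (saturation < 30)
        absent_in_neighbor = (luma - neighbor_luma) > delta_thresh
        return bright_and_flat & absent_in_neighbor


    def tone_map_flare(frame, mask, gain=0.7):
        """Reduce flare visibility by darkening the flagged pixels."""
        out = frame.astype(np.float32)
        out[mask] *= gain
        return out.clip(0, 255).astype(np.uint8)
    ```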
  • Patent number: 9324376
    Abstract: Traditionally, time-lapse videos are constructed from images captured at given time intervals called “temporal points of interest” or “temporal POIs.” Disclosed herein are intelligent systems and methods of capturing and selecting better images around temporal points of interest for the construction of improved time-lapse videos. According to some embodiments, a small “burst” of images may be captured, centered around the aforementioned temporal points of interest. Then, each burst sequence of images may be analyzed, e.g., by performing a similarity comparison between each image in the burst sequence and the image selected at the previous temporal point of interest. Selecting the image from a given burst that is most similar to the previously selected image allows the intelligent systems and methods described herein to improve the quality of the resultant time-lapse video by discarding “outlier” or other undesirable images captured in the burst sequence around a particular temporal point of interest.
    Type: Grant
    Filed: September 30, 2014
    Date of Patent: April 26, 2016
    Assignee: Apple Inc.
    Inventor: Frank Doepke
  • Publication number: 20160093335
    Abstract: Traditionally, time-lapse videos are constructed from images captured at given time intervals called “temporal points of interest” or “temporal POIs.” Disclosed herein are intelligent systems and methods of capturing and selecting better images around temporal points of interest for the construction of improved time-lapse videos. According to some embodiments, a small “burst” of images may be captured, centered around the aforementioned temporal points of interest. Then, each burst sequence of images may be analyzed, e.g., by performing a similarity comparison between each image in the burst sequence and the image selected at the previous temporal point of interest. Selecting the image from a given burst that is most similar to the previously selected image allows the intelligent systems and methods described herein to improve the quality of the resultant time-lapse video by discarding “outlier” or other undesirable images captured in the burst sequence around a particular temporal point of interest.
    Type: Application
    Filed: September 30, 2014
    Publication date: March 31, 2016
    Inventor: Frank Doepke
  • Publication number: 20160094801
    Abstract: Traditionally, time-lapse videos are constructed from images captured at time intervals called “temporal points of interest” or “temporal POIs.” Disclosed herein are systems and methods of constructing improved, motion-stabilized time-lapse videos using temporal points of interest and image similarity comparisons. According to some embodiments, a “burst” of images may be captured, centered around the aforementioned temporal points of interest. Then, each burst sequence of images may be analyzed, e.g., by performing an image similarity comparison between each image in the burst sequence and the image selected at the previous temporal point of interest.
    Type: Application
    Filed: February 3, 2015
    Publication date: March 31, 2016
    Inventors: Sebastien X. Beysserie, Jason Klivington, Rolf Toft, Frank Doepke
  • Patent number: 9251431
    Abstract: Differing embodiments of this disclosure may employ one or all of the several techniques described herein to utilize a “split” image processing pipeline, wherein one part of the “split” image processing pipeline runs an object-of-interest recognition algorithm on scaled down (also referred to herein as “low-resolution”) frames received from a camera of a computing device, while the second part of the “split” image processing pipeline concurrently runs an object-of-interest detector in the background on full resolution (also referred to herein as “high-resolution”) image frames received from the camera.
    Type: Grant
    Filed: May 30, 2014
    Date of Patent: February 2, 2016
    Assignee: Apple Inc.
    Inventors: Frank Doepke, Kevin Hunter
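    A bare-bones sketch of the “split” pipeline structure (the thread layout, queue size, and placeholder recognizer/detector are assumptions, not the patented design):

    ```python
    # Fast path: a lightweight recognizer runs on every downscaled frame.
    # Slow path: a heavier detector runs in a background thread on occasional
    # full-resolution frames, taking a new one only when it is idle.
    import queue
    import threading


    def fast_recognizer(low_res_pixels):
        """Placeholder for the lightweight per-frame recognizer."""
        return "candidate" if sum(low_res_pixels) > 0 else None


    def slow_detector(frame_id, full_res_pixels):
        """Placeholder for the full-resolution background detector."""
        return {"frame_id": frame_id, "objects": ["object-of-interest"]}


    def run_split_pipeline(frames):
        """frames: iterable of (frame_id, full_res_pixels) tuples."""
        pending = queue.Queue(maxsize=1)   # hold at most one full-res frame
        detections = []

        def background_worker():
            while True:
                item = pending.get()
                if item is None:
                    return
                detections.append(slow_detector(*item))

        worker = threading.Thread(target=background_worker)
        worker.start()
        for frame_id, pixels in frames:
            fast_recognizer(pixels[::4])   # crude stand-in for downscaling
            if pending.empty():            # feed the detector only when it is free
                pending.put((frame_id, pixels))
        pending.put(None)                  # shut the worker down
        worker.join()
        return detections


    print(run_split_pipeline([(i, list(range(16))) for i in range(5)]))
    ```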
  • Patent number: 9253373
    Abstract: Lens flare mitigation techniques determine which pixels in images of a sequence of images are likely to be pixels affected by lens flare. Once the lens flare areas of the images are determined, unwanted lens flare effects may be mitigated by various approaches, including reducing border artifacts along a seam between successive images, discarding entire images of the sequence that contain lens flare areas, and using tone-mapping to reduce the visibility of lens flare.
    Type: Grant
    Filed: June 11, 2012
    Date of Patent: February 2, 2016
    Assignee: Apple Inc.
    Inventors: Marius Tico, Paul M. Hubel, Frank Doepke, Todd S. Sachs
  • Patent number: 9247133
    Abstract: This disclosure pertains to devices, methods, and computer readable media for performing image registration. A few generalized steps may be used to carry out the image registration techniques described herein: 1) acquiring image data from an image sensor; 2) selecting a pair of overlapping image portions from the acquired image data for registration; 3) determining an area of “maximum energy” in one of the image portions being registered; 4) placing an image registration window over both image portions at the determined location of maximum energy; 5) registering the overlapping image portions using only the image data falling within the image registration windows; and 6) determining, according to one or more metrics, whether the image registration window should be shifted from a current location before registering subsequently acquired image portions.
    Type: Grant
    Filed: June 1, 2011
    Date of Patent: January 26, 2016
    Assignee: Apple Inc.
    Inventors: Frank Doepke, Todd S. Sachs, Kevin L. Hunter, Alexis Gatt
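    The window-placement idea can be illustrated with a small sketch; the gradient-energy measure, window size, and exhaustive shift search below are placeholders chosen for brevity, not the patented method:

    ```python
    # Place the registration window where local gradient energy is highest,
    # then estimate the translation using only the pixels inside that window.
    import numpy as np


    def max_energy_window(image, win=64):
        """Return (row, col) of the win x win window with the largest gradient energy."""
        gy, gx = np.gradient(image.astype(np.float32))
        energy = gx ** 2 + gy ** 2
        best, best_rc = -1.0, (0, 0)
        for r in range(0, image.shape[0] - win + 1, win // 2):
            for c in range(0, image.shape[1] - win + 1, win // 2):
                e = energy[r:r + win, c:c + win].sum()
                if e > best:
                    best, best_rc = e, (r, c)
        return best_rc


    def register(ref, moving, win=64, search=8):
        """Exhaustive small-shift search minimizing SSD inside the window."""
        r, c = max_energy_window(ref, win)
        patch = ref[r:r + win, c:c + win].astype(np.float32)
        best, best_shift = np.inf, (0, 0)
        for dr in range(-search, search + 1):
            for dc in range(-search, search + 1):
                if r + dr < 0 or c + dc < 0:
                    continue
                cand = moving[r + dr:r + dr + win, c + dc:c + dc + win].astype(np.float32)
                if cand.shape != patch.shape:
                    continue
                ssd = ((cand - patch) ** 2).sum()
                if ssd < best:
                    best, best_shift = ssd, (dr, dc)
        return best_shift
    ```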
  • Publication number: 20150347861
    Abstract: Differing embodiments of this disclosure may employ one or all of the several techniques described herein to utilize a “split” image processing pipeline, wherein one part of the “split” image processing pipeline runs an object-of-interest recognition algorithm on scaled down (also referred to herein as “low-resolution”) frames received from a camera of a computing device, while the second part of the “split” image processing pipeline concurrently runs an object-of-interest detector in the background on full resolution (also referred to herein as “high-resolution”) image frames received from the camera.
    Type: Application
    Filed: May 30, 2014
    Publication date: December 3, 2015
    Inventors: Frank Doepke, Kevin Hunter
  • Publication number: 20150350547
    Abstract: Techniques to detect subject and camera motion in a set of consecutively captured image frames are disclosed. More particularly, techniques disclosed herein temporally track two sets of downscaled images to detect motion. One set may contain higher-resolution versions and the other set lower-resolution versions of the same images. For each set, a coefficient of variation may be computed across the set of images for each sample in the downscaled image to detect motion and generate a change mask. The information in the change mask can be used for various applications, including determining how to capture a next image in the sequence.
    Type: Application
    Filed: September 30, 2014
    Publication date: December 3, 2015
    Inventors: Anita Nariani-Schulze, Benjamin M. Olson, Ralph Brunner, Suk Hwan Lim, Frank Doepke
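    The coefficient-of-variation step is compact enough to sketch directly; the threshold and the single-resolution simplification (the abstract tracks two downscaled sets) are assumptions:

    ```python
    # Per-sample coefficient of variation across a stack of aligned,
    # downscaled frames; samples that vary strongly over time go into the
    # change mask.
    import numpy as np


    def change_mask(frames, cov_thresh=0.1):
        """frames: list of same-sized HxW downscaled luminance images.
        Returns a boolean HxW mask where temporal variation is high."""
        stack = np.stack([f.astype(np.float32) for f in frames], axis=0)
        mean = stack.mean(axis=0)
        std = stack.std(axis=0)
        cov = std / np.maximum(mean, 1e-6)   # coefficient of variation per sample
        return cov > cov_thresh
    ```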
  • Patent number: 9088714
    Abstract: This disclosure pertains to devices, methods, and computer readable media for performing positional sensor-assisted panoramic photography techniques in handheld personal electronic devices. Generalized steps that may be used to carry out the panoramic photography techniques described herein include, but are not necessarily limited to: 1.) acquiring image data from the electronic device's image sensor; 2.) performing “motion filtering” on the acquired image data, e.g., using information returned from positional sensors of the electronic device to inform the processing of the image data; 3.) performing image registration between adjacent captured images; 4.) performing geometric corrections on captured image data, e.g., due to perspective changes and/or camera rotation about a non-center of perspective (COP) camera point; and 5.) “stitching” the captured images together to create the panoramic scene, e.g., blending the image data in the overlap area between adjacent captured images.
    Type: Grant
    Filed: May 17, 2011
    Date of Patent: July 21, 2015
    Assignee: Apple Inc.
    Inventors: Frank Doepke, Ralph Brunner
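    Of the five generalized steps, the “motion filtering” step lends itself to a short sketch; the yaw-only threshold and the data layout are assumptions, not values from the patent:

    ```python
    # Positional-sensor data decides which captured frames are worth
    # registering and stitching: frames captured while the device barely
    # rotated since the previously kept frame are dropped.
    def motion_filter(frames, yaw_deg, min_rotation_deg=2.5):
        """frames: list of images; yaw_deg: per-frame device yaw in degrees,
        as reported by the gyro/compass. Returns the frames to keep."""
        kept = [frames[0]]
        last_yaw = yaw_deg[0]
        for frame, yaw in zip(frames[1:], yaw_deg[1:]):
            if abs(yaw - last_yaw) >= min_rotation_deg:
                kept.append(frame)
                last_yaw = yaw
        return kept


    # With one frame per degree of rotation and a 3-degree threshold,
    # every third frame survives.
    print(motion_filter([f"frame{i}" for i in range(10)], list(range(10)), 3))
    ```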
  • Patent number: 9058655
    Abstract: Techniques for registering images based on an identified region of interest (ROI) are described. In general, the disclosed techniques identify an ROI within an image and assign areas within the image corresponding to that region more importance during the registration process. More particularly, the disclosed techniques may employ user input or image content information to identify the ROI. Once identified, features within the ROI may be given more weight or significance during registration operations than other areas of the image having high feature content but which are not as important to the individual capturing the image.
    Type: Grant
    Filed: November 6, 2012
    Date of Patent: June 16, 2015
    Assignee: Apple Inc.
    Inventors: Frank Doepke, Marius Tico
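    A rough sketch of ROI-weighted registration follows; the quadratic cost, the fixed weight, and the wrap-around shift search are simplifications chosen for brevity, not the disclosed technique:

    ```python
    # Pixels inside the region of interest contribute more to the alignment
    # cost than pixels outside it, so the search favors keeping the ROI aligned.
    import numpy as np


    def weighted_alignment_cost(ref, moving, roi_mask, roi_weight=4.0):
        """All arrays are HxW; roi_mask is boolean. Lower cost = better alignment."""
        weights = np.where(roi_mask, roi_weight, 1.0)
        diff = ref.astype(np.float32) - moving.astype(np.float32)
        return float((weights * diff ** 2).sum() / weights.sum())


    def register_with_roi(ref, moving, roi_mask, search=5):
        """Exhaustive small translation search minimizing the weighted cost
        (np.roll wraps at the borders; fine for a toy example)."""
        best, best_shift = np.inf, (0, 0)
        for dr in range(-search, search + 1):
            for dc in range(-search, search + 1):
                shifted = np.roll(np.roll(moving, dr, axis=0), dc, axis=1)
                cost = weighted_alignment_cost(ref, shifted, roi_mask)
                if cost < best:
                    best, best_shift = cost, (dr, dc)
        return best_shift
    ```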
  • Patent number: 9042679
    Abstract: Systems, methods, and computer readable media are described that register images in real time and can produce reliable registrations even when the number of high-frequency image features is small. The disclosed techniques may also provide a quantitative measure of a registration's quality. The latter may be used to inform the user and/or to automatically determine when visual registration techniques may be less accurate than motion sensor-based approaches. When such a case is detected, an image capture device may be automatically switched from visual-based to sensor-based registration. Disclosed techniques quickly determine indicators of an image's overall composition (row and column projections), which may be used to determine the translation of a first image relative to a second image. The translation so determined may be used to align/register the two images.
    Type: Grant
    Filed: June 6, 2012
    Date of Patent: May 26, 2015
    Assignee: Apple Inc.
    Inventors: Marco Zuliani, Kevin L. Hunter, Jianping Zhou, Todd Sachs, Frank Doepke
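    The row/column-projection idea can be sketched in a few lines; the correlation-based quality score and the wrap-around shift are assumptions standing in for the disclosed metrics:

    ```python
    # Row and column sums summarize each image; the 1-D shifts that best
    # align those projections estimate the 2-D translation, and a low
    # correlation suggests falling back to sensor-based registration.
    import numpy as np


    def projections(image):
        img = image.astype(np.float32)
        return img.sum(axis=1), img.sum(axis=0)   # row projection, column projection


    def best_1d_shift(a, b, max_shift=20):
        """Shift of b (with wrap-around) that maximizes correlation with a."""
        best_score, best_shift = -np.inf, 0
        for s in range(-max_shift, max_shift + 1):
            score = np.corrcoef(a, np.roll(b, s))[0, 1]
            if score > best_score:
                best_score, best_shift = score, s
        return best_shift, best_score


    def register_by_projection(img_a, img_b, max_shift=20):
        rows_a, cols_a = projections(img_a)
        rows_b, cols_b = projections(img_b)
        dy, qy = best_1d_shift(rows_a, rows_b, max_shift)
        dx, qx = best_1d_shift(cols_a, cols_b, max_shift)
        quality = min(qy, qx)   # low value -> prefer motion-sensor registration
        return (dy, dx), quality
    ```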
  • Publication number: 20150106768
    Abstract: The techniques disclosed herein use a compass, MEMS accelerometer, GPS module, and MEMS gyrometer to infer a frame of reference for a hand-held device. This can provide a true Frenet frame, i.e., X- and Y-vectors for the display, and also a Z-vector that points perpendicularly to the display. In fact, with various inertial clues from accelerometer, gyrometer, and other instruments that report their states in real time, it is possible to track the Frenet frame of the device in real time to provide a continuous 3D frame-of-reference. Once this continuous frame of reference is known, the position of a user's eyes may either be inferred or calculated directly by using a device's front-facing camera. With the position of the user's eyes and a continuous 3D frame-of-reference for the display, more realistic virtual 3D depictions of the objects on the device's display may be created and interacted with by the user.
    Type: Application
    Filed: December 15, 2014
    Publication date: April 16, 2015
    Inventors: Mark Zimmer, Geoff Stahl, David Hayward, Frank Doepke
  • Patent number: 8957944
    Abstract: This disclosure pertains to devices, methods, and computer readable media for performing positional sensor-assisted panoramic photography techniques in handheld personal electronic devices. Generalized steps that may be used to carry out the panoramic photography techniques described herein include, but are not necessarily limited to: 1.) acquiring image data from the electronic device's image sensor; 2.) performing “motion filtering” on the acquired image data, e.g., using information returned from positional sensors of the electronic device to inform the processing of the image data; 3.) performing image registration between adjacent captured images; 4.) performing geometric corrections on captured image data, e.g., due to perspective changes and/or camera rotation about a non-center of perspective (COP) camera point; and 5.) “stitching” the captured images together to create the panoramic scene, e.g., blending the image data in the overlap area between adjacent captured images.
    Type: Grant
    Filed: May 17, 2011
    Date of Patent: February 17, 2015
    Assignee: Apple Inc.
    Inventors: Frank Doepke, Jianping Zhou
  • Publication number: 20150009130
    Abstract: The techniques disclosed herein may use various sensors to infer a frame of reference for a hand-held device. In fact, with various inertial clues from accelerometer, gyrometer, and other instruments that report their states in real time, it is possible to track a Frenet frame of the device in real time to provide an instantaneous (or continuous) 3D frame-of-reference. In addition to—or in place of—calculating this instantaneous (or continuous) frame of reference, the position of a user's head may either be inferred or calculated directly by using one or more of a device's optical sensors, e.g., an optical camera, infrared camera, laser, etc. With knowledge of the 3D frame-of-reference for the display and/or knowledge of the position of the user's head, more realistic virtual 3D depictions of the graphical objects on the device's display may be created—and interacted with—by the user.
    Type: Application
    Filed: July 11, 2014
    Publication date: January 8, 2015
    Inventors: Ricardo Motta, Mark Zimmer, Geoff Stahl, David Hayward, Frank Doepke
  • Patent number: 8913056
    Abstract: The techniques disclosed herein use a compass, MEMS accelerometer, GPS module, and MEMS gyrometer to infer a frame of reference for a hand-held device. This can provide a true Frenet frame, i.e., X- and Y-vectors for the display, and also a Z-vector that points perpendicularly to the display. In fact, with various inertial clues from accelerometer, gyrometer, and other instruments that report their states in real time, it is possible to track the Frenet frame of the device in real time to provide a continuous 3D frame-of-reference. Once this continuous frame of reference is known, the position of a user's eyes may either be inferred or calculated directly by using a device's front-facing camera. With the position of the user's eyes and a continuous 3D frame-of-reference for the display, more realistic virtual 3D depictions of the objects on the device's display may be created and interacted with by the user.
    Type: Grant
    Filed: August 4, 2010
    Date of Patent: December 16, 2014
    Assignee: Apple Inc.
    Inventors: Mark Zimmer, Geoff Stahl, David Hayward, Frank Doepke