Patents by Inventor Alexander LINDSKOG

Alexander LINDSKOG has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11526995
    Abstract: This disclosure relates to techniques for generating robust depth estimations for captured images using semantic segmentation. Semantic segmentation may be defined as a process of creating a mask over an image, wherein pixels are segmented into a predefined set of semantic classes. Such segmentations may be binary (e.g., a ‘person pixel’ or a ‘non-person pixel’) or multi-class (e.g., a pixel may be labelled as: ‘person,’ ‘dog,’ ‘cat,’ etc.). As semantic segmentation techniques grow in accuracy and adoption, it is becoming increasingly important to develop methods of utilizing such segmentations and developing flexible techniques for integrating segmentation information into existing computer vision applications, such as depth and/or disparity estimation, to yield improved results in a wide range of image capture scenarios.
    Type: Grant
    Filed: September 10, 2019
    Date of Patent: December 13, 2022
    Assignee: Apple Inc.
    Inventors: Mark N. Jouppi, Alexander Lindskog, Michael W. Tao
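
The abstract above describes, at a high level, feeding semantic segmentation masks into depth and disparity estimation. Purely as an illustration of that general idea, and not of the claimed method, the following Python sketch smooths a noisy disparity map while averaging only over pixels that share a semantic label; the function name, the box-shaped window, and the simple averaging rule are all assumptions made for this sketch.

```python
import numpy as np

def segmentation_aware_smooth(disparity, labels, radius=2):
    """Smooth a disparity map, averaging only over pixels that share a
    semantic label (illustrative sketch, not the patented technique)."""
    h, w = disparity.shape
    out = np.copy(disparity)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            win_d = disparity[y0:y1, x0:x1]
            win_l = labels[y0:y1, x0:x1]
            same = win_l == labels[y, x]      # restrict to the same semantic class
            out[y, x] = win_d[same].mean()
    return out

# Toy example: a 'person' region (label 1) against background (label 0).
rng = np.random.default_rng(0)
labels = np.zeros((8, 8), dtype=int)
labels[2:6, 2:6] = 1
disparity = np.where(labels == 1, 4.0, 1.0) + rng.normal(0, 0.2, (8, 8))
print(segmentation_aware_smooth(disparity, labels).round(2))
```
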
  • Patent number: 11250571
    Abstract: This disclosure relates to techniques for the robust usage of semantic segmentation information in image processing techniques, e.g., shallow depth of field (SDOF) renderings. Semantic segmentation may be defined as a process of creating a mask over an image, wherein pixels are segmented into a predefined set of semantic classes. Segmentations may be binary (e.g., a ‘person pixel’ or a ‘non-person pixel’) or multi-class (e.g., a pixel may be labelled as: ‘person,’ ‘dog,’ ‘cat,’ etc.). As semantic segmentation techniques grow in accuracy and adoption, it is becoming increasingly important to develop methods of utilizing such segmentations and developing flexible techniques for integrating segmentation information into existing computer vision applications, such as synthetic SDOF renderings, to yield improved results in a wide range of image capture scenarios.
    Type: Grant
    Filed: September 10, 2019
    Date of Patent: February 15, 2022
    Assignee: Apple Inc.
    Inventors: Alexander Lindskog, Michael W. Tao, Alexandre Naaman
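
One common way segmentation information can feed a synthetic shallow-depth-of-field render is to keep pixels labelled as the subject sharp while blurring everything else. The sketch below illustrates only that general idea with assumed inputs (a binary person mask and a floating-point RGB image); it is not the rendering pipeline claimed in the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def naive_sdof(image, person_mask, sigma=5.0):
    """Blend a blurred copy of the image with the original, using a binary
    'person' mask to keep the subject sharp (illustrative only)."""
    blurred = gaussian_filter(image, sigma=(sigma, sigma, 0))
    alpha = person_mask[..., None].astype(float)   # 1 = keep sharp
    return alpha * image + (1.0 - alpha) * blurred

# Toy usage with a synthetic image and a centered subject mask.
img = np.random.default_rng(1).random((64, 64, 3))
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True
out = naive_sdof(img, mask)
print(out.shape, float(out.min()), float(out.max()))
```
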
  • Patent number: 10762655
    Abstract: The disclosure pertains to techniques for image processing. One such technique comprises a method for image processing comprising obtaining first light information from a set of light-sensitive pixels for a scene, the pixels including phase detection (PD) pixels and non-PD pixels, generating a first PD pixel image from the first light information, the first PD pixel image having a first resolution, generating a higher resolution image from the plurality of non-PD pixels, wherein the higher resolution image has a resolution greater than the resolution of the first PD pixel image, matching a first pixel of the first PD pixel image to the higher resolution image, wherein the matching is based on a set of correlations between the first pixel and non-PD pixels within a predetermined distance of the first pixel, and determining a disparity map for an image associated with the first light information, based on the match.
    Type: Grant
    Filed: September 11, 2019
    Date of Patent: September 1, 2020
    Assignee: Apple Inc.
    Inventors: Alexander Lindskog, Michael W. Tao, Mark N. Jouppi
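
The abstract above outlines a concrete matching procedure: a low-resolution image formed from phase-detection (PD) pixels is matched against a higher-resolution image formed from non-PD pixels, using correlations within a predetermined distance, to produce a disparity map. The one-dimensional sketch below illustrates that kind of local correlation search; the window size, the normalized-correlation score, and the toy signal are assumptions, not the claimed method.

```python
import numpy as np

def best_shift(patch, signal, center, max_shift=4):
    """Search shifts within +/- max_shift for the one whose window of the
    higher-resolution signal correlates best with the PD-pixel patch
    (1-D illustrative sketch)."""
    half = len(patch) // 2
    a = patch - patch.mean()
    best, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        lo = center + s - half
        hi = lo + len(patch)
        if lo < 0 or hi > len(signal):
            continue
        b = signal[lo:hi] - signal[lo:hi].mean()
        score = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
        if score > best_score:
            best, best_score = s, score
    return best

# Toy usage: the patch was taken around index 32, so asking for the best
# match around index 34 should report a shift of -2.
rng = np.random.default_rng(2)
signal = rng.random(64)
patch = signal[29:36]           # 7 samples centered on index 32
print(best_shift(patch, signal, center=34))
```
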
  • Publication number: 20200082535
    Abstract: This disclosure relates to techniques for the robust usage of semantic segmentation information in image processing techniques, e.g., shallow depth of field (SDOF) renderings. Semantic segmentation may be defined as a process of creating a mask over an image, wherein pixels are segmented into a predefined set of semantic classes. Segmentations may be binary (e.g., a ‘person pixel’ or a ‘non-person pixel’) or multi-class (e.g., a pixel may be labelled as: ‘person,’ ‘dog,’ ‘cat,’ etc.). As semantic segmentation techniques grow in accuracy and adoption, it is becoming increasingly important to develop methods of utilizing such segmentations and developing flexible techniques for integrating segmentation information into existing computer vision applications, such as synthetic SDOF renderings, to yield improved results in a wide range of image capture scenarios.
    Type: Application
    Filed: September 10, 2019
    Publication date: March 12, 2020
    Inventors: Alexander Lindskog, Michael W. Tao, Alexandre Naaman
  • Publication number: 20200082541
    Abstract: This disclosure relates to techniques for generating robust depth estimations for captured images using semantic segmentation. Semantic segmentation may be defined as a process of creating a mask over an image, wherein pixels are segmented into a predefined set of semantic classes. Such segmentations may be binary (e.g., a ‘person pixel’ or a ‘non-person pixel’) or multi-class (e.g., a pixel may be labelled as: ‘person,’ ‘dog,’ ‘cat,’ etc.). As semantic segmentation techniques grow in accuracy and adoption, it is becoming increasingly important to develop methods of utilizing such segmentations and developing flexible techniques for integrating segmentation information into existing computer vision applications, such as depth and/or disparity estimation, to yield improved results in a wide range of image capture scenarios.
    Type: Application
    Filed: September 10, 2019
    Publication date: March 12, 2020
    Inventors: Mark N. Jouppi, Alexander Lindskog, Michael W. Tao
  • Patent number: 10410327
    Abstract: This disclosure relates to techniques for synthesizing out of focus effects in digital images. Digital single-lens reflex (DSLR) cameras and other cameras having wide aperture lenses typically capture images with a shallow depth of field (SDOF). SDOF photography is often used in portrait photography, since it emphasizes the subject, while deemphasizing the background via blurring. Simulating this kind of blurring using a large depth of field (LDOF) camera may require a large amount of computational resources, i.e., in order to simulate the physical effects of using a wide aperture lens while constructing a synthetic SDOF image. However, cameras having smaller lens apertures, such as mobile phones, may not have the processing power to simulate the spreading of all background light sources in a reasonable amount of time. Thus, described herein are techniques to synthesize out-of-focus background blurring effects in a computationally-efficient manner for images captured by LDOF cameras.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: September 10, 2019
    Assignee: Apple Inc.
    Inventors: Richard D. Seely, Michael W. Tao, Alexander Lindskog, Geoffrey T. Anneheim
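
The abstract above is about making synthetic wide-aperture blur computationally tractable on devices that cannot afford to spread every background light source. One generic way to cheapen a large blur, shown below purely as an illustration and not as the technique described in the patent, is to blur a downsampled copy of the image and composite the sharp subject back over it.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def cheap_background_blur(image, foreground_mask, sigma=8.0, scale=0.25):
    """Approximate an expensive wide-aperture blur by blurring a downsampled
    copy of the image and upsampling it back (illustrative approximation)."""
    small = zoom(image, (scale, scale, 1), order=1)              # downsample
    small_blur = gaussian_filter(small, sigma=(sigma * scale, sigma * scale, 0))
    big_blur = zoom(small_blur, (1 / scale, 1 / scale, 1), order=1)
    big_blur = big_blur[: image.shape[0], : image.shape[1]]      # guard against rounding
    alpha = foreground_mask[..., None].astype(float)
    return alpha * image + (1.0 - alpha) * big_blur

# Toy usage with a synthetic image and a centered subject mask.
img = np.random.default_rng(3).random((64, 64, 3))
mask = np.zeros((64, 64), dtype=bool)
mask[20:44, 20:44] = True
print(cheap_background_blur(img, mask).shape)
```
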
  • Patent number: 10284835
    Abstract: Generating an image with a selected level of background blur includes capturing, by a first image capture device, a plurality of frames of a scene, wherein each of the plurality of frames has a different focus depth, obtaining a depth map of the scene, determining a target object and a background in the scene based on the depth map, determining a goal blur for the background, and selecting, for each pixel in an output image, a corresponding pixel from the focus stack.
    Type: Grant
    Filed: September 24, 2015
    Date of Patent: May 7, 2019
    Assignee: Apple Inc.
    Inventors: Thomas E. Bishop, Alexander Lindskog, Claus Molgaard, Frank Doepke
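
The abstract above lists the steps of the focus-stack approach: capture frames at different focus depths, obtain a depth map, identify the subject and background, choose a goal blur, and pick each output pixel from the stack. The sketch below shows one assumed way such a per-pixel selection could be expressed; the subject test and the way the goal blur shifts the desired focus are illustrative choices, not the claimed method.

```python
import numpy as np

def select_from_focus_stack(stack, focus_depths, depth_map, subject_depth, goal_blur):
    """For each pixel, pick the frame whose focus depth gives roughly the
    desired amount of defocus: subject pixels get an in-focus frame, background
    pixels get a frame misfocused by about 'goal_blur' (illustrative sketch)."""
    desired_focus = np.where(
        np.abs(depth_map - subject_depth) < 0.5,   # treat as subject
        depth_map,                                  # focus on the pixel itself
        depth_map + goal_blur,                      # deliberately misfocus background
    )
    # Index of the frame whose focus depth is closest to the desired focus.
    idx = np.abs(focus_depths[:, None, None] - desired_focus[None]).argmin(axis=0)
    rows, cols = np.indices(depth_map.shape)
    return stack[idx, rows, cols]

# Toy usage: 5 frames of an 8x8 scene, subject at depth 1, background at depth 4.
focus_depths = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
stack = np.stack([np.full((8, 8), d) for d in focus_depths])
depth_map = np.full((8, 8), 4.0)
depth_map[2:6, 2:6] = 1.0
print(select_from_focus_stack(stack, focus_depths, depth_map, subject_depth=1.0, goal_blur=1.5))
```
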
  • Publication number: 20180350043
    Abstract: This disclosure relates to techniques for synthesizing out of focus effects in digital images. Digital single-lens reflex (DSLR) cameras and other cameras having wide aperture lenses typically capture images with a shallow depth of field (SDOF). SDOF photography is often used in portrait photography, since it emphasizes the subject, while deemphasizing the background via blurring. Simulating this kind of blurring using a large depth of field (LDOF) camera may require a large amount of computational resources, i.e., in order to simulate the physical effects of using a wide aperture lens while constructing a synthetic SDOF image. However, cameras having smaller lens apertures, such as mobile phones, may not have the processing power to simulate the spreading of all background light sources in a reasonable amount of time. Thus, described herein are techniques to synthesize out-of-focus background blurring effects in a computationally-efficient manner for images captured by LDOF cameras.
    Type: Application
    Filed: May 25, 2018
    Publication date: December 6, 2018
    Inventors: Richard D. Seely, Michael W. Tao, Alexander Lindskog, Geoffrey T. Anneheim
  • Patent number: 9928628
    Abstract: In an example embodiment a method, apparatus and computer program product are provided. The method includes facilitating access of a plurality of images associated with a scene comprising at least one moving object, and segmenting the plurality of images into foreground regions and background regions based on changes in corresponding image regions between the images. The foreground regions comprise the at least one moving object. The method includes determining at least one object parameter associated with the at least one moving object in the foreground regions and generating a background image based on the background regions, and modifying at least one of the foreground regions and the background image to represent a motion of the at least one moving object based on the at least one object parameter. The method includes generating a composite image based on the modified at least one of the foreground regions and the background image.
    Type: Grant
    Filed: May 13, 2013
    Date of Patent: March 27, 2018
    Assignee: Nokia Technologies Oy
    Inventors: Johan Windmark, Alexander Lindskog, Tobias Karlsson
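
The abstract above describes segmenting frames into moving-object foreground and background regions, building a background image, and compositing a representation of the motion. The sketch below illustrates the general idea with a per-pixel median background and simple thresholded differencing; the specific background model and threshold are assumptions, not the claimed method.

```python
import numpy as np

def motion_composite(frames, threshold=0.1):
    """Build a background from the per-pixel median across frames, mark pixels
    that differ from it as moving-object foreground, and paste each frame's
    foreground onto the background to suggest the object's motion
    (illustrative sketch)."""
    frames = np.asarray(frames, dtype=float)
    background = np.median(frames, axis=0)
    composite = background.copy()
    for frame in frames:
        moving = np.abs(frame - background) > threshold
        composite[moving] = frame[moving]
    return composite

# Toy usage: a bright square moving left to right across a dark scene.
frames = []
for step in range(5):
    f = np.zeros((8, 16))
    f[3:5, 2 * step:2 * step + 2] = 1.0
    frames.append(f)
print(motion_composite(frames))
```
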
  • Patent number: 9691127
    Abstract: In an example embodiment a method, apparatus and computer program product are provided. The method includes calculating directionality values for pixels of a first image and pixels of a second image, where a directionality value for a pixel is calculated based on gradient differences between the pixel and a plurality of neighboring pixels. The method includes determining a plurality of similarity values between the first image and the second image for a plurality of alignment positions of the first image and the second image based on the directionality values for the pixels of the first image and the directionality values for corresponding pixels of the second image. The method further includes selecting an alignment position from among the plurality of alignment positions for aligning the first image and the second image based on comparison of the plurality of similarity values.
    Type: Grant
    Filed: June 14, 2013
    Date of Patent: June 27, 2017
    Assignee: Nokia Technologies Oy
    Inventor: Alexander Lindskog
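
The abstract above describes computing per-pixel directionality values from gradient differences with neighboring pixels, scoring candidate alignment positions by comparing those values between the two images, and picking the best-scoring position. The sketch below follows that outline with an assumed directionality formula (the gradient angle) and a mean-absolute-difference similarity score; neither choice comes from the patent.

```python
import numpy as np

def directionality(img):
    """Per-pixel value built from gradient differences against the right and
    lower neighbors (the exact formula here is an illustrative assumption)."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return np.arctan2(gy, gx)

def best_alignment(img_a, img_b, offsets):
    """Score candidate integer offsets of img_b against img_a by how well
    their directionality maps agree on the overlapping region."""
    da, db = directionality(img_a), directionality(img_b)
    best, best_score = None, -np.inf
    for dy, dx in offsets:
        a = da[max(dy, 0):, max(dx, 0):]
        b = db[max(-dy, 0):, max(-dx, 0):]
        h = min(a.shape[0], b.shape[0])
        w = min(a.shape[1], b.shape[1])
        score = -np.mean(np.abs(a[:h, :w] - b[:h, :w]))   # similarity value
        if score > best_score:
            best, best_score = (dy, dx), score
    return best

# Toy usage: img_b is img_a shifted by (1, 2); the search should recover (1, 2).
rng = np.random.default_rng(4)
img_a = rng.random((32, 32))
img_b = np.roll(img_a, shift=(-1, -2), axis=(0, 1))
candidates = [(dy, dx) for dy in range(-2, 3) for dx in range(-3, 4)]
print(best_alignment(img_a, img_b, candidates))
```
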
  • Publication number: 20170070731
    Abstract: Camera calibration includes capturing a first image of an object by a first camera, determining spatial parameters between the first camera and the object using the first image, obtaining a first estimate for an optical center, iteratively calculating a best set of optical characteristics and test setup parameters based on the first estimate for the optical center until the difference between a most recently calculated set of optical characteristics and a previously calculated set of optical characteristics satisfies a predetermined threshold, and calibrating the first camera based on the best set of optical characteristics. Multi-camera system calibration may include calibrating, based on a detected misalignment of features in multiple images, the multi-camera system using a context of the multi-camera system and one or more prior stored contexts.
    Type: Application
    Filed: September 3, 2016
    Publication date: March 9, 2017
    Inventors: Benjamin A. Darling, Thomas E. Bishop, Kevin A. Gross, Paul M. Hubel, Todd S. Sachs, Guangzhi Cao, Alexander Lindskog, Stefan Weber, Jianping Zhou
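
The calibration abstract above describes an iterate-until-stable structure: re-estimate the optical characteristics from the current optical-center guess until successive estimates differ by less than a threshold. The sketch below shows only that loop structure with an assumed caller-supplied refinement step; it does not reproduce the calibration math.

```python
import numpy as np

def iterative_calibration(observations, initial_center, refine_step, tol=1e-6, max_iter=50):
    """Generic iterate-until-stable loop: re-estimate parameters from the
    current guess until successive estimates stop changing. 'refine_step' is
    an assumed caller-supplied function; this illustrates only the loop."""
    params = np.asarray(initial_center, dtype=float)
    for _ in range(max_iter):
        new_params = refine_step(params, observations)
        if np.linalg.norm(new_params - params) < tol:   # threshold satisfied
            return new_params
        params = new_params
    return params

# Toy usage: the 'refinement' just averages the guess with a known target,
# so the loop converges toward (320.0, 240.0).
target = np.array([320.0, 240.0])
step = lambda p, obs: 0.5 * (p + target)
print(iterative_calibration(None, [300.0, 250.0], step).round(3))
```
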
  • Publication number: 20170070720
    Abstract: Generating an image with a selected level of background blur includes capturing, by a first image capture device, a plurality of frames of a scene, wherein each of the plurality of frames has a different focus depth, obtaining a depth map of the scene, determining a target object and a background in the scene based on the depth map, determining a goal blur for the background, and selecting, for each pixel in an output image, a corresponding pixel from the focus stack.
    Type: Application
    Filed: September 24, 2015
    Publication date: March 9, 2017
    Inventors: Thomas E. Bishop, Alexander Lindskog, Claus Molgaard, Frank Doepke
  • Patent number: 9565356
    Abstract: Generating a focus stack, including receiving initial focus data that identifies a plurality of target depths, positioning a lens at a first position to capture a first image at a first target depth of the plurality of target depths, determining, in response to capturing the first image and prior to capturing additional images, a sharpness metric for the first image, capturing, in response to determining that the sharpness metric for the first image is an unacceptable value, a second image at a second position based on the sharpness metric, wherein the second position is not included in the plurality of target depths, determining that a sharpness metric for the second image is an acceptable value, and generating a focus stack using the second image.
    Type: Grant
    Filed: September 24, 2015
    Date of Patent: February 7, 2017
    Assignee: Apple Inc.
    Inventors: Alexander Lindskog, Frank Doepke, Ralf Brunner, Thomas E. Bishop
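
The abstract above describes capturing at a list of target depths, checking a sharpness metric immediately after each capture, and recapturing at an off-grid lens position when the metric is unacceptable. The sketch below mirrors that control flow; the gradient-variance sharpness metric, the fixed nudge, and the simulated capture function are assumptions made for the illustration.

```python
import numpy as np

def sharpness(image):
    """Simple sharpness metric: variance of the gradient magnitude
    (an illustrative stand-in for the metric used in the patent)."""
    gy, gx = np.gradient(image.astype(float))
    return float(np.var(np.hypot(gx, gy)))

def build_focus_stack(capture_at, target_depths, min_sharpness=0.001, nudge=0.5):
    """Capture an image at each target depth; if its sharpness is unacceptable,
    recapture at a nudged lens position (which need not be one of the target
    depths) and keep the sharper result. Assumes a 'capture_at(depth)' callable."""
    stack = []
    for depth in target_depths:
        img = capture_at(depth)
        if sharpness(img) < min_sharpness:
            retry = capture_at(depth + nudge)        # off-grid position
            if sharpness(retry) >= min_sharpness:
                img = retry
        stack.append(img)
    return stack

# Toy usage: depth 2.0 yields a flat (blurry) image unless nudged to 2.5.
rng = np.random.default_rng(5)
def capture_at(depth):
    return rng.random((16, 16)) if abs(depth - 2.0) > 0.1 else np.zeros((16, 16))
print([round(sharpness(im), 3) for im in build_focus_stack(capture_at, [1.0, 2.0, 3.0])])
```
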
  • Publication number: 20160360091
    Abstract: Generating a focus stack, including receiving initial focus data that identifies a plurality of target depths, positioning a lens at a first position to capture a first image at a first target depth of the plurality of target depths, determining, in response to capturing the first image and prior to capturing additional images, a sharpness metric for the first image, capturing, in response to determining that the sharpness metric for the first image is an unacceptable value, a second image at a second position based on the sharpness metric, wherein the second position is not included in the plurality of target depths, determining that a sharpness metric for the second image is an acceptable value, and generating a focus stack using the second image.
    Type: Application
    Filed: September 24, 2015
    Publication date: December 8, 2016
    Inventors: Alexander Lindskog, Frank Doepke, Ralf Brunner, Thomas E. Bishop
  • Patent number: 9396569
    Abstract: There is disclosed a method for seamlessly replacing areas in a digital image with corresponding data from temporally close digital images depicting substantially the same scene. The method uses localized image registration error minimization over a fixed preliminary boundary. A least cost closed path which constitutes a boundary for the area to be replaced is calculated using dynamic programming. The replacement area is blended such that image data information from one image is seamlessly replaced with image data information from another image.
    Type: Grant
    Filed: September 26, 2013
    Date of Patent: July 19, 2016
    Assignee: Mobile Imaging in Sweden AB
    Inventors: Alexander Lindskog, Gustaf Pettersson, Ulf Holmstedt, Johan Windmark, Sami Niemi
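
The abstract above describes finding a least-cost closed path over localized registration error with dynamic programming and replacing the enclosed area. The sketch below shows the dynamic-programming part in a simplified form, an open top-to-bottom seam rather than a closed boundary, followed by a hard replacement instead of the blending the patent describes; both simplifications are assumptions of this illustration.

```python
import numpy as np

def min_cost_seam(cost):
    """Dynamic-programming minimum-cost path from the top row to the bottom
    row of a cost map (simplified to an open vertical seam)."""
    h, w = cost.shape
    acc = cost.copy().astype(float)
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            acc[y, x] += acc[y - 1, lo:hi].min()
    # Backtrack from the cheapest bottom-row cell.
    seam = [int(acc[-1].argmin())]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam.append(lo + int(acc[y, lo:hi].argmin()))
    return seam[::-1]   # seam[y] = column of the path at row y

# Toy usage: replace everything left of the seam in image A with image B,
# where the seam follows a low-error valley in the registration error map.
rng = np.random.default_rng(6)
error = rng.random((6, 8)) + 1.0
error[:, 3] = 0.0                       # a cheap column for the seam to follow
seam = min_cost_seam(error)
img_a = np.zeros((6, 8))
img_b = np.ones((6, 8))
merged = img_a.copy()
for y, x in enumerate(seam):
    merged[y, :x] = img_b[y, :x]        # hard replacement up to the seam
print(seam)
print(merged)
```
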
  • Publication number: 20160180499
    Abstract: In an example embodiment a method, apparatus and computer program product are provided. The method includes calculating directionality values for pixels of a first image and pixels of a second image, where a directionality value for a pixel is calculated based on gradient differences between the pixel and a plurality of neighboring pixels. The method includes determining a plurality of similarity values between the first image and the second image for a plurality of alignment positions of the first image and the second image based on the directionality values for the pixels of the first image and the directionality values for corresponding pixels of the second image. The method further includes selecting an alignment position from among the plurality of alignment positions for aligning the first image and the second image based on comparison of the plurality of similarity values.
    Type: Application
    Filed: June 14, 2013
    Publication date: June 23, 2016
    Inventor: Alexander Lindskog
  • Publication number: 20160125633
    Abstract: In an example embodiment a method, apparatus and computer program product are provided. The method includes facilitating access of a plurality of images associated with a scene comprising at least one moving object, and segmenting the plurality of images into foreground regions and background regions based on changes in corresponding image regions between the images. The foreground regions comprise the at least one moving object. The method includes determining at least one object parameter associated with the at least one moving object in the foreground regions and generating a background image based on the background regions, and modifying at least one of the foreground regions and the background image to represent a motion of the at least one moving object based on the at least one object parameter. The method includes generating a composite image based on the modified at least one of the foreground regions and the background image.
    Type: Application
    Filed: May 13, 2013
    Publication date: May 5, 2016
    Applicant: Nokia Technologies Oy
    Inventors: Johan Windmark, Alexander Lindskog, Tobias Karlsson
  • Patent number: 9282235
    Abstract: A method to correct an autofocus operation of a digital image capture device based on an empirical evaluation of image capture metadata is disclosed. The method includes capturing an image of a scene (the image including one or more autofocus windows), obtaining an initial focus score for at least one of the image's one or more autofocus windows, obtaining image capture metadata for at least one of the one or more autofocus windows, determining a focus adjustment score for the one autofocus window based on a combination of the autofocus window's image capture metadata (wherein the focus adjustment score is indicative of the autofocus window's noise), and determining a corrected focus score for the one autofocus window based on the initial focus score and the focus adjustment score.
    Type: Grant
    Filed: May 30, 2014
    Date of Patent: March 8, 2016
    Assignee: Apple Inc.
    Inventors: Alexander Lindskog, Ralph Brunner
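
The abstract above describes turning an autofocus window's capture metadata into a focus adjustment score that reflects the window's noise, then combining it with the initial focus score. The sketch below shows one assumed way to express that combination; the linear gain-based adjustment and its constant are placeholders, not the weighting used in the patent.

```python
def corrected_focus_score(initial_score, analog_gain, digital_gain=1.0, k_noise=3.0):
    """Derive a focus adjustment score from capture metadata (here, simply the
    total gain applied to the window, as a rough stand-in for its expected
    noise) and subtract it from the initial focus score (illustrative only)."""
    adjustment = k_noise * analog_gain * digital_gain
    return initial_score - adjustment, adjustment

# Toy usage: a dim, high-gain capture gets a larger downward correction than
# a bright, low-gain capture with the same raw focus score.
print(corrected_focus_score(100.0, analog_gain=8.0))   # dim scene, high gain
print(corrected_focus_score(100.0, analog_gain=1.0))   # bright scene, low gain
```
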
  • Publication number: 20150350522
    Abstract: A method to correct an autofocus operation of a digital image capture device based on an empirical evaluation of image capture metadata is disclosed. The method includes capturing an image of a scene (the image including one or more autofocus windows), obtaining an initial focus score for at least one of the image's one or more autofocus windows, obtaining image capture metadata for at least one of the one or more autofocus windows, determining a focus adjustment score for the one autofocus window based on a combination of the autofocus window's image capture metadata (wherein the focus adjustment score is indicative of the autofocus window's noise), and determining a corrected focus score for the one autofocus window based on the initial focus score and the focus adjustment score.
    Type: Application
    Filed: May 30, 2014
    Publication date: December 3, 2015
    Applicant: Apple Inc.
    Inventors: Alexander Lindskog, Ralph Brunner
  • Patent number: 9204052
    Abstract: A method comprising operating in a single-frame capture mode, receiving indication of a first input associated with invocation of a first zoom out operation, and transitioning from the single-frame capture mode to a first multiple-frame capture mode based, at least in part, on the first zoom out operation is disclosed.
    Type: Grant
    Filed: February 12, 2013
    Date of Patent: December 1, 2015
    Assignee: Nokia Technologies Oy
    Inventors: Gustaf Pettersson, Johan Windmark, Alexander Lindskog, Adam Fejne
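
The abstract above describes a capture-mode transition driven by a zoom-out input. The sketch below models that as a tiny state holder; the class and method names, the zoom-factor convention, and the reverse transition on zoom-in are all assumptions made for this illustration, not the claimed behavior.

```python
from dataclasses import dataclass

@dataclass
class CaptureModeController:
    """Minimal illustration of the transition described in the abstract: a
    zoom-out input while in single-frame mode switches the controller to a
    multiple-frame capture mode."""
    mode: str = "single-frame"

    def on_zoom(self, factor: float) -> str:
        if factor < 1.0 and self.mode == "single-frame":
            self.mode = "multiple-frame"          # zoom out -> wider, multi-frame capture
        elif factor > 1.0 and self.mode == "multiple-frame":
            self.mode = "single-frame"            # zoom back in (assumed reverse transition)
        return self.mode

controller = CaptureModeController()
print(controller.on_zoom(0.8))   # zoom out -> 'multiple-frame'
print(controller.on_zoom(1.2))   # zoom in  -> 'single-frame'
```
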