Patents by Inventor Erik Fonseka

Erik Fonseka has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11902677
    Abstract: An image sensor suitable for use in an augmented reality system to provide low latency image analysis with low power consumption. The augmented reality system can be compact, and may be small enough to be packaged within a wearable device such as a set of goggles or mounted on a frame resembling ordinary eyeglasses. The image sensor may receive information about a region of an imaging array associated with a movable object and selectively output imaging information for that region. The region may be updated dynamically as the image sensor and/or the object moves. Such an image sensor provides a small amount of data from which object information used in rendering an augmented reality scene can be developed. The amount of data may be further reduced by configuring the image sensor to output indications of pixels for which the measured intensity of incident light changes.
    Type: Grant
    Filed: October 30, 2019
    Date of Patent: February 13, 2024
    Assignee: Magic Leap, Inc.
    Inventors: Martin Georg Zahnert, Alexander Ilic, Erik Fonseka
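The change-driven, region-of-interest readout described in the abstract above can be illustrated with a minimal sketch (plain Python; all names and values are hypothetical, not the patented circuit): only pixels inside the tracked region whose intensity changed beyond a threshold produce output, which is why so little data leaves the sensor.

```python
def region_events(prev_frame, curr_frame, region, threshold=10):
    """Emit (row, col, delta) events only for pixels inside `region`
    whose intensity changed by more than `threshold` between frames."""
    r0, r1, c0, c1 = region  # bounding box tracking the movable object
    events = []
    for r in range(r0, r1):
        for c in range(c0, c1):
            delta = curr_frame[r][c] - prev_frame[r][c]
            if abs(delta) > threshold:
                events.append((r, c, delta))
    return events

# A 480x640 sensor where a small object moves into the tracked region.
prev = [[0] * 640 for _ in range(480)]
curr = [row[:] for row in prev]
for r in range(100, 110):
    for c in range(200, 210):
        curr[r][c] = 255

events = region_events(prev, curr, region=(90, 130, 190, 230))
# Only the 100 changed pixels are reported, not the full 307,200-pixel frame.
```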
  • Patent number: 11631213
    Abstract: Forming a 3D representation of an object using a portable electronic device with a camera. A sequence of image frames captured with the camera may be processed to generate 3D information about the object. This 3D information may be used to present a visual representation of the object as real-time feedback to a user, indicating the confidence of the 3D information for regions of the object. The feedback may be a composite image based on surfaces in a 3D model derived from the 3D information, with visual characteristics derived from the image frames. The 3D information may be used in a number of ways, including as an input to a 3D printer or as an input representing, in whole or in part, an avatar for a game or other purposes. To enable processing to be performed on a portable device, a portion of the image frames may be selected for processing at higher resolution.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: April 18, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Erik Fonseka, Alexander Ilic
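The per-region confidence feedback described above might be sketched roughly as follows (region names, the gain, and the color thresholds are all illustrative assumptions, not the patented method): each time a region of the object is observed in a new frame its confidence grows, and the running confidence drives a simple visual cue shown to the user.

```python
def update_confidence(confidence, observed_regions, gain=0.25):
    """Raise per-region confidence each time a region of the object
    is seen in a new frame; values are clamped to [0, 1]."""
    for region in observed_regions:
        confidence[region] = min(1.0, confidence.get(region, 0.0) + gain)
    return confidence

def feedback_color(conf):
    # Map confidence to a rough traffic-light cue for the user.
    return "green" if conf >= 0.75 else "yellow" if conf >= 0.4 else "red"

# Three frames: the nose is seen in all of them, cheek and chin once each.
conf = {}
for frame_regions in [["nose", "cheek"], ["nose"], ["nose", "chin"]]:
    update_confidence(conf, frame_regions)
```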
  • Publication number: 20230037046
    Abstract: A display system includes a head-mounted display configured to project light, having different amounts of wavefront divergence, to an eye of a user to display virtual image content appearing to be disposed at different depth planes. The wavefront divergence may be changed in discrete steps, with the change in steps being triggered based upon whether the user is fixating on a particular depth plane. The display system may be calibrated for switching depth planes for a main user. Upon determining that a guest user is utilizing the system, rather than undergoing a full calibration, the display system may be configured to switch depth planes based on a rough determination of the virtual content that the user is looking at. The virtual content has an associated depth plane and the display system may be configured to switch to the depth plane of that virtual content.
    Type: Application
    Filed: October 7, 2022
    Publication date: February 2, 2023
    Inventors: Samuel A. Miller, Lomesh Agarwal, Lionel Ernest Edwin, Ivan Li Chuen Yeoh, Daniel Farmer, Sergey Fyodorovich Prokushkin, Yonatan Munk, Edwin Joseph Selker, Erik Fonseka, Paul M. Greco, Jeffrey Scott Sommers, Bradley Vincent Stuart, Shiuli Das, Suraj Manjunath Shanbhag
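The guest-mode behavior described above, switching depth planes from a rough estimate of what the user is looking at rather than from a full per-user calibration, might be sketched like this (the content names, gaze vectors, and diopter values are illustrative assumptions):

```python
def select_depth_plane(gaze_dir, virtual_contents, planes):
    """Pick the virtual content best aligned with the gaze direction,
    then snap to the nearest supported discrete depth plane."""
    def misalignment(content):
        # Negated dot product: smaller means better aligned with gaze.
        cx, cy, cz = content["direction"]
        gx, gy, gz = gaze_dir
        return -(cx * gx + cy * gy + cz * gz)

    looked_at = min(virtual_contents, key=misalignment)
    # Each content item carries an associated depth (here in diopters);
    # switch to the discrete plane closest to it.
    return min(planes, key=lambda p: abs(p - looked_at["depth"]))

contents = [
    {"name": "menu",  "direction": (0.0, 0.0, 1.0), "depth": 0.5},
    {"name": "robot", "direction": (0.7, 0.0, 0.7), "depth": 2.0},
]
# Gaze straight ahead lands on the menu, whose depth maps to plane 0.33.
plane = select_depth_plane((0.0, 0.0, 1.0), contents, planes=[0.33, 1.0, 3.0])
```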
  • Patent number: 11563926
    Abstract: A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured at different orientations and distances from the object and combined into a composite image representing an image of the object. The image frames may be formed into the composite image by representing features of each image frame as a set of points in a three-dimensional point cloud. Inconsistencies between the image frames may be adjusted when projecting the respective points in the point cloud into the composite image. The quality of the image frames may be improved by processing the image frames to correct errors. Distracting features, such as a finger of a user holding the object being scanned, can be replaced with background content. As the scan progresses, a direction for capturing subsequent image frames is provided to the user as real-time feedback.
    Type: Grant
    Filed: November 17, 2020
    Date of Patent: January 24, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Alexander Ilic, Peter Weigand, Erik Fonseka
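The projection of point-cloud features into a composite image, with inconsistencies between contributing frames smoothed out, can be sketched as follows (a toy orthographic projection with per-pixel averaging; the actual projection and adjustment are not specified at this level of detail):

```python
def project_to_composite(points, width, height):
    """Project (x, y, z, intensity) cloud points into a 2D composite.
    This toy version uses x and y directly (z ignored); where several
    frames contribute points to the same pixel, the intensities are
    averaged to smooth inconsistencies between frames."""
    sums, counts = {}, {}
    for x, y, z, intensity in points:
        px, py = int(round(x)), int(round(y))
        if 0 <= px < width and 0 <= py < height:
            key = (px, py)
            sums[key] = sums.get(key, 0) + intensity
            counts[key] = counts.get(key, 0) + 1
    return {key: sums[key] / counts[key] for key in sums}

# Two frames observed nearly the same surface point with slightly
# different positions and intensities; they land on one pixel and average.
cloud = [(1.2, 2.0, 0.5, 100), (0.8, 2.0, 0.6, 120), (3.0, 3.0, 0.4, 90)]
composite = project_to_composite(cloud, width=4, height=4)
```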
  • Patent number: 11468640
    Abstract: A display system includes a head-mounted display configured to project light, having different amounts of wavefront divergence, to an eye of a user to display virtual image content appearing to be disposed at different depth planes. The wavefront divergence may be changed in discrete steps, with the change in steps being triggered based upon whether the user is fixating on a particular depth plane. The display system may be calibrated for switching depth planes for a main user. Upon determining that a guest user is utilizing the system, rather than undergoing a full calibration, the display system may be configured to switch depth planes based on a rough determination of the virtual content that the user is looking at. The virtual content has an associated depth plane and the display system may be configured to switch to the depth plane of that virtual content.
    Type: Grant
    Filed: August 2, 2019
    Date of Patent: October 11, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Samuel A. Miller, Lomesh Agarwal, Lionel Ernest Edwin, Ivan Li Chuen Yeoh, Daniel Farmer, Sergey Fyodorovich Prokushkin, Yonatan Munk, Edwin Joseph Selker, Erik Fonseka, Paul M. Greco, Jeffrey Scott Sommers, Bradley Vincent Stuart, Shiuli Das, Suraj Manjunath Shanbhag
  • Publication number: 20220301217
    Abstract: Systems and methods for eye tracking latency enhancements. An example head-mounted system obtains a first image of an eye of a user. The first image is provided as input to a machine learning model which has been trained to generate iris and pupil segmentation data given an image of an eye. A second image of the eye is obtained. A set of locations in the second image at which one or more glints are shown is detected based on iris segmentation data generated for the first image. A region of the second image at which the pupil of the eye of the user is shown is identified based on pupil segmentation data generated for the first image. A pose of the eye of the user is determined based on the detected set of glint locations in the second image and the identified region of the second image.
    Type: Application
    Filed: July 1, 2020
    Publication date: September 22, 2022
    Inventors: Bradley Vincent Stuart, Daniel Farmer, Tiejian Zhang, Shiuli Das, Suraj Manjunath Shanbhag, Erik Fonseka
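The latency idea in the abstract above, reusing segmentation from a previous frame so the expensive model need not run on every frame, can be sketched in a few lines (the mask, image, and threshold are illustrative; the real system operates on eye-camera images and an ML segmentation model):

```python
def find_glints(image, iris_mask, brightness_threshold=240):
    """Search for bright glints in the current frame, but only inside
    the iris region segmented from the *previous* frame."""
    return [
        (r, c)
        for r, row in enumerate(image)
        for c, value in enumerate(row)
        if iris_mask[r][c] and value >= brightness_threshold
    ]

image = [[0,   0,   0,   0],
         [0, 255,  10,   0],
         [0,  20, 250,   0],
         [0,   0,   0, 255]]   # bright corner outside the iris is ignored
mask  = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]         # iris segmentation from the prior frame

glints = find_glints(image, mask)
```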
  • Publication number: 20220014689
    Abstract: A high-resolution image sensor suitable for use in an augmented reality (AR) system. The AR system may be small enough to be packaged within a wearable device such as a set of goggles, or mounted on a frame resembling ordinary eyeglasses. The image sensor may have pixels configured to output events indicating changes in sensed IR light. Those pixels may be sensitive to IR light of the same frequency as an active IR light source, and may be part of an eye-tracking camera oriented toward a user's eye. Changes in IR light may be used to determine the location of the user's pupil, which in turn may be used in rendering virtual objects. The events may be generated and processed at a high rate, enabling the system to render a virtual object based on the user's gaze so that the virtual object appears more realistic to the user.
    Type: Application
    Filed: November 11, 2019
    Publication date: January 13, 2022
    Applicant: Magic Leap, Inc.
    Inventors: Martin Georg Zahnert, Alexander Ilic, Erik Fonseka
  • Publication number: 20210409632
    Abstract: An image sensor suitable for use in an augmented reality system to provide low-latency image analysis with low power consumption. The image sensor may have multiple pixel cells, each comprising a photodetector that generates an electric signal based on the intensity of light incident upon it. The pixel cells may include multiple subsets, each subset including at least one angle-of-arrival-to-intensity converter that modulates incident light reaching one or more of the pixel cells in the subset based on the angle of arrival of the incident light. Each pixel cell may include differential readout circuitry configured to output a readout signal only when the amplitude of the current electric signal from the photodetector differs from the amplitude of the previous electric signal from the photodetector.
    Type: Application
    Filed: October 30, 2019
    Publication date: December 30, 2021
    Applicant: Magic Leap, Inc.
    Inventors: Martin Georg Zahnert, Alexander Ilic, Erik Fonseka
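The differential readout described in the abstract above can be modeled with a small sketch (amplitudes and the noise margin are illustrative; the real mechanism is analog circuitry, not software): a pixel produces a readout only when the new photodetector amplitude differs from the last latched one by more than a margin.

```python
class DifferentialPixel:
    """Toy model of a pixel with differential readout: it stays silent
    unless the sampled amplitude differs from the previously latched
    amplitude by more than a noise margin."""
    def __init__(self, margin=2):
        self.margin = margin
        self.latched = None

    def sample(self, amplitude):
        if self.latched is None or abs(amplitude - self.latched) > self.margin:
            self.latched = amplitude
            return amplitude      # change detected: readout fires
        return None               # no meaningful change: pixel is silent

pixel = DifferentialPixel()
readouts = [pixel.sample(a) for a in [100, 101, 100, 140, 141, 90]]
# Only three of the six samples produce a readout.
```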
  • Publication number: 20210409626
    Abstract: An image sensor suitable for use in an augmented reality system to provide low latency image analysis with low power consumption. The augmented reality system can be compact, and may be small enough to be packaged within a wearable device such as a set of goggles or mounted on a frame resembling ordinary eyeglasses. The image sensor may receive information about a region of an imaging array associated with a movable object and selectively output imaging information for that region. The region may be updated dynamically as the image sensor and/or the object moves. Such an image sensor provides a small amount of data from which object information used in rendering an augmented reality scene can be developed. The amount of data may be further reduced by configuring the image sensor to output indications of pixels for which the measured intensity of incident light changes.
    Type: Application
    Filed: October 30, 2019
    Publication date: December 30, 2021
    Applicant: Magic Leap, Inc.
    Inventors: Martin Georg Zahnert, Alexander Ilic, Erik Fonseka
  • Publication number: 20210209835
    Abstract: Forming a 3D representation of an object using a portable electronic device with a camera. A sequence of image frames captured with the camera may be processed to generate 3D information about the object. This 3D information may be used to present a visual representation of the object as real-time feedback to a user, indicating the confidence of the 3D information for regions of the object. The feedback may be a composite image based on surfaces in a 3D model derived from the 3D information, with visual characteristics derived from the image frames. The 3D information may be used in a number of ways, including as an input to a 3D printer or as an input representing, in whole or in part, an avatar for a game or other purposes. To enable processing to be performed on a portable device, a portion of the image frames may be selected for processing at higher resolution.
    Type: Application
    Filed: December 30, 2016
    Publication date: July 8, 2021
    Applicant: ML Netherlands C.V.
    Inventors: Erik Fonseka, Alexander Ilic
  • Publication number: 20210144353
    Abstract: A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured at different orientations and distances from the object and combined into a composite image representing an image of the object. The image frames may be formed into the composite image by representing features of each image frame as a set of points in a three-dimensional point cloud. Inconsistencies between the image frames may be adjusted when projecting the respective points in the point cloud into the composite image. The quality of the image frames may be improved by processing the image frames to correct errors. Distracting features, such as a finger of a user holding the object being scanned, can be replaced with background content. As the scan progresses, a direction for capturing subsequent image frames is provided to the user as real-time feedback.
    Type: Application
    Filed: November 17, 2020
    Publication date: May 13, 2021
    Applicant: ML Netherlands C.V.
    Inventors: Alexander Ilic, Peter Weigand, Erik Fonseka
  • Patent number: 10841551
    Abstract: A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured at different orientations and distances from the object and combined into a composite image representing an image of the object. The image frames may be formed into the composite image by representing features of each image frame as a set of points in a three-dimensional point cloud. Inconsistencies between the image frames may be adjusted when projecting the respective points in the point cloud into the composite image. The quality of the image frames may be improved by processing the image frames to correct errors. Distracting features, such as a finger of a user holding the object being scanned, can be replaced with background content. As the scan progresses, a direction for capturing subsequent image frames is provided to the user as real-time feedback.
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: November 17, 2020
    Assignee: ML Netherlands C.V.
    Inventors: Alexander Ilic, Peter Weigand, Erik Fonseka
  • Publication number: 20200043236
    Abstract: A display system includes a head-mounted display configured to project light, having different amounts of wavefront divergence, to an eye of a user to display virtual image content appearing to be disposed at different depth planes. The wavefront divergence may be changed in discrete steps, with the change in steps being triggered based upon whether the user is fixating on a particular depth plane. The display system may be calibrated for switching depth planes for a main user. Upon determining that a guest user is utilizing the system, rather than undergoing a full calibration, the display system may be configured to switch depth planes based on a rough determination of the virtual content that the user is looking at. The virtual content has an associated depth plane and the display system may be configured to switch to the depth plane of that virtual content.
    Type: Application
    Filed: August 2, 2019
    Publication date: February 6, 2020
    Inventors: Samuel A. Miller, Lomesh Agarwal, Lionel Ernest Edwin, Ivan Li Chuen Yeoh, Daniel Farmer, Sergey Fyodorovich Prokushkin, Yonatan Munk, Edwin Joseph Selker, Erik Fonseka, Paul M. Greco, Jeffrey Scott Sommers, Bradley Vincent Stuart, Shiuli Das, Suraj Manjunath Shanbhag
  • Publication number: 20190342533
    Abstract: A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured at different orientations and distances from the object and combined into a composite image representing an image of the object. The image frames may be formed into the composite image by representing features of each image frame as a set of points in a three-dimensional point cloud. Inconsistencies between the image frames may be adjusted when projecting the respective points in the point cloud into the composite image. The quality of the image frames may be improved by processing the image frames to correct errors. Distracting features, such as a finger of a user holding the object being scanned, can be replaced with background content. As the scan progresses, a direction for capturing subsequent image frames is provided to the user as real-time feedback.
    Type: Application
    Filed: May 20, 2019
    Publication date: November 7, 2019
    Applicant: ML Netherlands C.V.
    Inventors: Alexander Ilic, Peter Weigand, Erik Fonseka
  • Patent number: 10298898
    Abstract: A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured at different orientations and distances from the object and combined into a composite image representing an image of the object. The image frames may be formed into the composite image by representing features of each image frame as a set of points in a three-dimensional point cloud. Inconsistencies between the image frames may be adjusted when projecting the respective points in the point cloud into the composite image. The quality of the image frames may be improved by processing the image frames to correct errors. Distracting features, such as a finger of a user holding the object being scanned, can be replaced with background content. As the scan progresses, a direction for capturing subsequent image frames is provided to the user as real-time feedback.
    Type: Grant
    Filed: August 29, 2014
    Date of Patent: May 21, 2019
    Assignee: ML Netherlands C.V.
    Inventors: Alexander Ilic, Peter Weigand, Erik Fonseka
  • Patent number: 10225428
    Abstract: A computer peripheral that may operate as a scanner. The scanner captures image frames as it is moved across an object. The image frames are formed into a composite image based on computations in two processes. In a first process, fast-track processing determines a coarse position for each of the image frames based on the relative position between each successive image frame and a respective preceding image frame, determined by matching overlapping portions of the image frames. In a second process, fine position adjustments are computed to reduce the inconsistencies that arise from determining the positions of image frames based on relative positions to multiple prior image frames. The peripheral may also act as a mouse and may be configured with one or more navigation sensors that can be used to reduce the processing time required to match a successive image frame to a preceding image frame.
    Type: Grant
    Filed: February 19, 2016
    Date of Patent: March 5, 2019
    Assignee: ML Netherlands C.V.
    Inventors: Martin Georg Zahnert, Erik Fonseka, Alexander Ilic, Simon Meier, Andreas Breitenmoser
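The two-process scheme in the abstract above, a fast coarse pass that chains frame-to-frame offsets and a slower fine pass that reduces accumulated inconsistency, can be sketched as follows (the loop-closure error-spreading step is an illustrative stand-in for the unspecified fine adjustment):

```python
def coarse_positions(pairwise_offsets):
    """Fast track: chain each frame's offset relative to its
    predecessor to get a coarse absolute position per frame."""
    positions = [(0.0, 0.0)]
    for dx, dy in pairwise_offsets:
        px, py = positions[-1]
        positions.append((px + dx, py + dy))
    return positions

def fine_adjust(positions, loop_error, n_adjustable):
    """Fine track sketch: spread an accumulated loop-closure error
    evenly across the adjustable frames to reduce inconsistency."""
    ex, ey = loop_error
    adjusted = [positions[0]]
    for i, (px, py) in enumerate(positions[1:], start=1):
        share = i / n_adjustable
        adjusted.append((px - ex * share, py - ey * share))
    return adjusted

# Three frame-to-frame matches, with a little vertical drift on the last.
coarse = coarse_positions([(1.0, 0.0), (1.0, 0.0), (1.0, 0.2)])
# Suppose matching the last frame back to the first reveals 0.2 px of
# vertical drift; spread that error over the three movable frames.
fine = fine_adjust(coarse, loop_error=(0.0, 0.2), n_adjustable=3)
```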
  • Publication number: 20160227181
    Abstract: A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured at different orientations and distances from the object and combined into a composite image representing an image of the object. The image frames may be formed into the composite image by representing features of each image frame as a set of points in a three-dimensional point cloud. Inconsistencies between the image frames may be adjusted when projecting the respective points in the point cloud into the composite image. The quality of the image frames may be improved by processing the image frames to correct errors. Distracting features, such as a finger of a user holding the object being scanned, can be replaced with background content. As the scan progresses, a direction for capturing subsequent image frames is provided to the user as real-time feedback.
    Type: Application
    Filed: August 29, 2014
    Publication date: August 4, 2016
    Applicant: Dacuda AG
    Inventors: Alexander Ilic, Peter Weigand, Erik Fonseka
  • Publication number: 20160173716
    Abstract: A computer peripheral that may operate as a scanner. The scanner captures image frames as it is moved across an object. The image frames are formed into a composite image based on computations in two processes. In a first process, fast-track processing determines a coarse position for each of the image frames based on the relative position between each successive image frame and a respective preceding image frame, determined by matching overlapping portions of the image frames. In a second process, fine position adjustments are computed to reduce the inconsistencies that arise from determining the positions of image frames based on relative positions to multiple prior image frames. The peripheral may also act as a mouse and may be configured with one or more navigation sensors that can be used to reduce the processing time required to match a successive image frame to a preceding image frame.
    Type: Application
    Filed: February 19, 2016
    Publication date: June 16, 2016
    Applicant: Dacuda AG
    Inventors: Martin Georg Zahnert, Erik Fonseka, Alexander Ilic, Simon Meier, Andreas Breitenmoser
  • Patent number: 9300834
    Abstract: A computer peripheral that may operate as a scanner. The scanner captures image frames as it is moved across an object. The image frames are formed into a composite image based on computations in two processes. In a first process, fast-track processing determines a coarse position for each of the image frames based on the relative position between each successive image frame and a respective preceding image frame, determined by matching overlapping portions of the image frames. In a second process, fine position adjustments are computed to reduce the inconsistencies that arise from determining the positions of image frames based on relative positions to multiple prior image frames. The peripheral may also act as a mouse and may be configured with one or more navigation sensors that can be used to reduce the processing time required to match a successive image frame to a preceding image frame.
    Type: Grant
    Filed: May 17, 2010
    Date of Patent: March 29, 2016
    Assignee: Dacuda AG
    Inventors: Martin Georg Zahnert, Erik Fonseka, Alexander Ilic, Simon Meier, Andreas Breitenmoser
  • Patent number: 8723885
    Abstract: A computer peripheral that may operate as a scanner. The scanner captures image frames as it is moved across an object. The image frames are formed into a composite image based on computations in two processes. In a first process, fast-track processing determines a coarse position for each of the image frames based on the relative position between each successive image frame and a respective preceding image frame, determined by matching overlapping portions of the image frames. In a second process, fine position adjustments are computed to reduce the inconsistencies that arise from determining the positions of image frames based on relative positions to multiple prior image frames. As a result, a composite image of the object being scanned may be presented to the user in real time, providing feedback on which portions of the object have been scanned and which have not.
    Type: Grant
    Filed: May 20, 2010
    Date of Patent: May 13, 2014
    Assignee: Dacuda AG
    Inventors: Martin Georg Zahnert, Erik Fonseka, Alexander Ilic, Simon Meier, Andreas Breitenmoser