Patents by Inventor Erik Fonseka
Erik Fonseka has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11902677
Abstract: An image sensor suitable for use in an augmented reality system to provide low-latency image analysis with low power consumption. The augmented reality system can be compact, and may be small enough to be packaged within a wearable device such as a set of goggles or mounted on a frame resembling ordinary eyeglasses. The image sensor may receive information about a region of an imaging array associated with a movable object and selectively output imaging information for that region. The region may be updated dynamically as the image sensor and/or the object moves. Such an image sensor provides a small amount of data from which object information used in rendering an augmented reality scene can be developed. The amount of data may be further reduced by configuring the image sensor to output indications of pixels for which the measured intensity of incident light changes.
Type: Grant
Filed: October 30, 2019
Date of Patent: February 13, 2024
Assignee: Magic Leap, Inc.
Inventors: Martin Georg Zahnert, Alexander Ilic, Erik Fonseka
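The selective, change-driven readout this abstract describes can be illustrated with a short sketch. The function below is hypothetical (the patent publishes no code, and `roi_events`, the threshold value, and the region format are all illustrative): it emits `(row, col, delta)` events only for pixels inside a tracked region whose intensity changed by more than a threshold, which is how such a sensor keeps its output data volume small.

```python
import numpy as np

CHANGE_THRESHOLD = 8  # illustrative intensity delta, not a value from the patent

def roi_events(prev_frame, curr_frame, roi):
    """Return (row, col, delta) events for pixels inside a tracked region
    whose measured intensity changed by more than a threshold."""
    r0, c0, r1, c1 = roi  # region of the imaging array tied to a moving object
    prev = prev_frame[r0:r1, c0:c1].astype(np.int16)
    curr = curr_frame[r0:r1, c0:c1].astype(np.int16)
    delta = curr - prev
    rows, cols = np.nonzero(np.abs(delta) > CHANGE_THRESHOLD)
    # Only changed pixels inside the region are emitted, keeping data volume small.
    return [(r0 + r, c0 + c, int(delta[r, c])) for r, c in zip(rows, cols)]
```

A host tracking a moving object would update `roi` each frame and read only these events, rather than the full array.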
-
Patent number: 11631213
Abstract: Forming a 3D representation of an object using a portable electronic device with a camera. A sequence of image frames captured with the camera may be processed to generate 3D information about the object. This 3D information may be used to present a visual representation of the object as real-time feedback to a user, indicating confidence in the 3D information for regions of the object. Feedback may be a composite image based on surfaces in a 3D model derived from the 3D information, given visual characteristics derived from the image frames. The 3D information may be used in a number of ways, including as an input to a 3D printer or an input representing, in whole or in part, an avatar for a game or other purposes. To enable processing to be performed on a portable device, a portion of the image frames may be selected for processing at higher resolution.
Type: Grant
Filed: December 30, 2016
Date of Patent: April 18, 2023
Assignee: Magic Leap, Inc.
Inventors: Erik Fonseka, Alexander Ilic
-
Publication number: 20230037046
Abstract: A display system includes a head-mounted display configured to project light, having different amounts of wavefront divergence, to an eye of a user to display virtual image content appearing to be disposed at different depth planes. The wavefront divergence may be changed in discrete steps, with the change in steps being triggered based upon whether the user is fixating on a particular depth plane. The display system may be calibrated for switching depth planes for a main user. Upon determining that a guest user is utilizing the system, rather than undergoing a full calibration, the display system may be configured to switch depth planes based on a rough determination of the virtual content that the user is looking at. The virtual content has an associated depth plane, and the display system may be configured to switch to the depth plane of that virtual content.
Type: Application
Filed: October 7, 2022
Publication date: February 2, 2023
Inventors: Samuel A. Miller, Lomesh Agarwal, Lionel Ernest Edwin, Ivan Li Chuen Yeoh, Daniel Farmer, Sergey Fyodorovich Prokushkin, Yonatan Munk, Edwin Joseph Selker, Erik Fonseka, Paul M. Greco, Jeffrey Scott Sommers, Bradley Vincent Stuart, Shiuli Das, Suraj Manjunath Shanbhag
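The guest-user behavior in this abstract (switch planes using a rough estimate of what is being looked at, rather than a per-user calibration) can be sketched in a few lines. Everything here is hypothetical scaffolding, not the patented implementation: `virtual_contents` maps an object's direction vector to its depth in meters, and `planes` is the display's discrete set of switchable depth planes.

```python
import math

def nearest_content_depth_plane(gaze_dir, virtual_contents, planes):
    """Guest-mode sketch: pick the depth plane of whichever virtual object
    the user is roughly looking at (smallest angle to the gaze direction)."""
    def angle(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return math.acos(max(-1.0, min(1.0, dot / (na * nb))))
    # Rough determination: the content whose direction best matches the gaze.
    direction, depth = min(virtual_contents.items(),
                           key=lambda kv: angle(gaze_dir, kv[0]))
    # Switch to the discrete plane closest to that content's associated depth.
    return min(planes, key=lambda p: abs(p - depth))
```

A calibrated main user would instead get a fixation-depth estimate from eye tracking; this path trades accuracy for skipping calibration.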
-
Patent number: 11563926
Abstract: A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured at different orientations and distances from the object and combined into a composite image representing the object. The image frames may be formed into the composite image based on representing features of each image frame as a set of points in a three-dimensional point cloud. Inconsistencies between the image frames may be adjusted when projecting respective points in the point cloud into the composite image. Quality of the image frames may be improved by processing the image frames to correct errors. Distracting features, such as the finger of a user holding the object being scanned, can be replaced with background content. As the scan progresses, a direction for capturing subsequent image frames is provided to the user as real-time feedback.
Type: Grant
Filed: November 17, 2020
Date of Patent: January 24, 2023
Assignee: Magic Leap, Inc.
Inventors: Alexander Ilic, Peter Weigand, Erik Fonseka
-
Patent number: 11468640
Abstract: A display system includes a head-mounted display configured to project light, having different amounts of wavefront divergence, to an eye of a user to display virtual image content appearing to be disposed at different depth planes. The wavefront divergence may be changed in discrete steps, with the change in steps being triggered based upon whether the user is fixating on a particular depth plane. The display system may be calibrated for switching depth planes for a main user. Upon determining that a guest user is utilizing the system, rather than undergoing a full calibration, the display system may be configured to switch depth planes based on a rough determination of the virtual content that the user is looking at. The virtual content has an associated depth plane, and the display system may be configured to switch to the depth plane of that virtual content.
Type: Grant
Filed: August 2, 2019
Date of Patent: October 11, 2022
Assignee: Magic Leap, Inc.
Inventors: Samuel A. Miller, Lomesh Agarwal, Lionel Ernest Edwin, Ivan Li Chuen Yeoh, Daniel Farmer, Sergey Fyodorovich Prokushkin, Yonatan Munk, Edwin Joseph Selker, Erik Fonseka, Paul M. Greco, Jeffrey Scott Sommers, Bradley Vincent Stuart, Shiuli Das, Suraj Manjunath Shanbhag
-
Publication number: 20220301217
Abstract: Systems and methods for eye tracking latency enhancements. An example head-mounted system obtains a first image of an eye of a user. The first image is provided as input to a machine learning model which has been trained to generate iris and pupil segmentation data given an image of an eye. A second image of the eye is obtained. A set of locations in the second image at which one or more glints are shown is detected based on iris segmentation data generated for the first image. A region of the second image at which the pupil of the eye of the user is shown is identified based on pupil segmentation data generated for the first image. A pose of the eye of the user is determined based on the detected set of glint locations in the second image and the identified region of the second image.
Type: Application
Filed: July 1, 2020
Publication date: September 22, 2022
Inventors: Bradley Vincent Stuart, Daniel Farmer, Tiejian Zhang, Shiuli Das, Suraj Manjunath Shanbhag, Erik Fonseka
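The latency trick in this abstract is reusing segmentation computed on an earlier frame to localize glints and the pupil in a newer frame. The sketch below is a loose illustration under stated assumptions, not the patented method: `eye_pose`, the masks, and the glint threshold are all hypothetical, and the "pose" returned is just a 2D pupil-to-glint offset, where a real system would fit a 3D eye model.

```python
import numpy as np

def eye_pose(second_image, iris_mask, pupil_mask, glint_threshold=240):
    """Latency sketch: masks segmented on a *first* frame constrain the
    search in a *second* frame; pose is the pupil-to-glint offset."""
    # Glints: bright specular spots, searched only inside the iris region
    # segmented on the first image.
    glint_rows, glint_cols = np.nonzero((second_image >= glint_threshold) & iris_mask)
    glint_center = np.array([glint_rows.mean(), glint_cols.mean()])
    # Pupil: centroid of the region the first image's pupil segmentation flags.
    pupil_rows, pupil_cols = np.nonzero(pupil_mask)
    pupil_center = np.array([pupil_rows.mean(), pupil_cols.mean()])
    return pupil_center - glint_center
```

Because the expensive machine-learning segmentation runs on the older frame, the newer frame needs only cheap thresholding and centroids, which is where the latency saving comes from.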
-
Publication number: 20220014689
Abstract: A high-resolution image sensor suitable for use in an augmented reality (AR) system. The AR system may be small enough to be packaged within a wearable device such as a set of goggles or mounted on a frame resembling ordinary eyeglasses. The image sensor may have pixels configured to output events indicating changes in sensed IR light. Those pixels may be sensitive to IR light of the same frequency as an active IR light source, and may be part of an eye tracking camera oriented toward a user's eye. Changes in IR light may be used to determine the location of the user's pupil, which may be used in rendering virtual objects. The events may be generated and processed at a high rate, enabling the system to render the virtual object based on the user's gaze so that the virtual object will appear more realistic to the user.
Type: Application
Filed: November 11, 2019
Publication date: January 13, 2022
Applicant: Magic Leap, Inc.
Inventors: Martin Georg Zahnert, Alexander Ilic, Erik Fonseka
-
Publication number: 20210409632
Abstract: An image sensor suitable for use in an augmented reality system to provide low-latency image analysis with low power consumption. The image sensor may have multiple pixel cells, each comprising a photodetector to generate an electric signal based on an intensity of light incident upon the photodetector. The pixel cells may include multiple subsets, each subset including at least one angle-of-arrival-to-intensity converter to modulate incident light reaching one or more of the pixel cells in the subset based on an angle of arrival of the incident light. Each pixel cell may include differential readout circuitry configured to output a readout signal only when the amplitude of the current electric signal from the photodetector differs from the amplitude of the previous electric signal from the photodetector.
Type: Application
Filed: October 30, 2019
Publication date: December 30, 2021
Applicant: Magic Leap, Inc.
Inventors: Martin Georg Zahnert, Alexander Ilic, Erik Fonseka
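The differential readout described in this abstract reduces to a simple state machine per pixel: emit a value only when it differs from the last value emitted. A minimal behavioral model (hypothetical class name; the patent describes circuitry, not software, and uses an amplitude threshold rather than strict inequality):

```python
class DifferentialPixel:
    """Behavioral sketch of differential readout: a pixel produces a
    readout only when the photodetector signal differs from the last
    value it reported; otherwise it stays silent."""
    def __init__(self, initial=0):
        self.last = initial

    def read(self, signal):
        if signal != self.last:
            self.last = signal
            return signal      # changed: produce a readout
        return None            # unchanged: stay silent, saving bandwidth and power
```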
-
Publication number: 20210409626
Abstract: An image sensor suitable for use in an augmented reality system to provide low-latency image analysis with low power consumption. The augmented reality system can be compact, and may be small enough to be packaged within a wearable device such as a set of goggles or mounted on a frame resembling ordinary eyeglasses. The image sensor may receive information about a region of an imaging array associated with a movable object and selectively output imaging information for that region. The region may be updated dynamically as the image sensor and/or the object moves. Such an image sensor provides a small amount of data from which object information used in rendering an augmented reality scene can be developed. The amount of data may be further reduced by configuring the image sensor to output indications of pixels for which the measured intensity of incident light changes.
Type: Application
Filed: October 30, 2019
Publication date: December 30, 2021
Applicant: Magic Leap, Inc.
Inventors: Martin Georg Zahnert, Alexander Ilic, Erik Fonseka
-
Publication number: 20210209835
Abstract: Forming a 3D representation of an object using a portable electronic device with a camera. A sequence of image frames captured with the camera may be processed to generate 3D information about the object. This 3D information may be used to present a visual representation of the object as real-time feedback to a user, indicating confidence in the 3D information for regions of the object. Feedback may be a composite image based on surfaces in a 3D model derived from the 3D information, given visual characteristics derived from the image frames. The 3D information may be used in a number of ways, including as an input to a 3D printer or an input representing, in whole or in part, an avatar for a game or other purposes. To enable processing to be performed on a portable device, a portion of the image frames may be selected for processing at higher resolution.
Type: Application
Filed: December 30, 2016
Publication date: July 8, 2021
Applicant: ML Netherlands C.V.
Inventors: Erik Fonseka, Alexander Ilic
-
Publication number: 20210144353
Abstract: A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured at different orientations and distances from the object and combined into a composite image representing the object. The image frames may be formed into the composite image based on representing features of each image frame as a set of points in a three-dimensional point cloud. Inconsistencies between the image frames may be adjusted when projecting respective points in the point cloud into the composite image. Quality of the image frames may be improved by processing the image frames to correct errors. Distracting features, such as the finger of a user holding the object being scanned, can be replaced with background content. As the scan progresses, a direction for capturing subsequent image frames is provided to the user as real-time feedback.
Type: Application
Filed: November 17, 2020
Publication date: May 13, 2021
Applicant: ML Netherlands C.V.
Inventors: Alexander Ilic, Peter Weigand, Erik Fonseka
-
Patent number: 10841551
Abstract: A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured at different orientations and distances from the object and combined into a composite image representing the object. The image frames may be formed into the composite image based on representing features of each image frame as a set of points in a three-dimensional point cloud. Inconsistencies between the image frames may be adjusted when projecting respective points in the point cloud into the composite image. Quality of the image frames may be improved by processing the image frames to correct errors. Distracting features, such as the finger of a user holding the object being scanned, can be replaced with background content. As the scan progresses, a direction for capturing subsequent image frames is provided to the user as real-time feedback.
Type: Grant
Filed: May 20, 2019
Date of Patent: November 17, 2020
Assignee: ML Netherlands C.V.
Inventors: Alexander Ilic, Peter Weigand, Erik Fonseka
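The "projecting respective points in the point cloud into the composite image" step in this family of abstracts is, geometrically, an ordinary camera projection. A minimal pinhole-projection sketch, with illustrative intrinsics that are not from the patent (`project_points`, the focal length, and the principal point are all assumptions):

```python
import numpy as np

def project_points(points_3d, pose_R, pose_t, focal=500.0, center=(320.0, 240.0)):
    """Pinhole sketch of placing 3D point-cloud features into composite-image
    coordinates, given an estimated camera rotation R and translation t."""
    cam = points_3d @ pose_R.T + pose_t          # world -> camera coordinates
    u = focal * cam[:, 0] / cam[:, 2] + center[0]
    v = focal * cam[:, 1] / cam[:, 2] + center[1]
    return np.stack([u, v], axis=1)
```

Adjusting inconsistencies between frames then amounts to refining each frame's `(R, t)` so that the same cloud point projects to consistent composite coordinates across frames.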
-
Publication number: 20200043236
Abstract: A display system includes a head-mounted display configured to project light, having different amounts of wavefront divergence, to an eye of a user to display virtual image content appearing to be disposed at different depth planes. The wavefront divergence may be changed in discrete steps, with the change in steps being triggered based upon whether the user is fixating on a particular depth plane. The display system may be calibrated for switching depth planes for a main user. Upon determining that a guest user is utilizing the system, rather than undergoing a full calibration, the display system may be configured to switch depth planes based on a rough determination of the virtual content that the user is looking at. The virtual content has an associated depth plane, and the display system may be configured to switch to the depth plane of that virtual content.
Type: Application
Filed: August 2, 2019
Publication date: February 6, 2020
Inventors: Samuel A. Miller, Lomesh Agarwal, Lionel Ernest Edwin, Ivan Li Chuen Yeoh, Daniel Farmer, Sergey Fyodorovich Prokushkin, Yonatan Munk, Edwin Joseph Selker, Erik Fonseka, Paul M. Greco, Jeffrey Scott Sommers, Bradley Vincent Stuart, Shiuli Das, Suraj Manjunath Shanbhag
-
Publication number: 20190342533
Abstract: A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured at different orientations and distances from the object and combined into a composite image representing the object. The image frames may be formed into the composite image based on representing features of each image frame as a set of points in a three-dimensional point cloud. Inconsistencies between the image frames may be adjusted when projecting respective points in the point cloud into the composite image. Quality of the image frames may be improved by processing the image frames to correct errors. Distracting features, such as the finger of a user holding the object being scanned, can be replaced with background content. As the scan progresses, a direction for capturing subsequent image frames is provided to the user as real-time feedback.
Type: Application
Filed: May 20, 2019
Publication date: November 7, 2019
Applicant: ML Netherlands C.V.
Inventors: Alexander Ilic, Peter Weigand, Erik Fonseka
-
Patent number: 10298898
Abstract: A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured at different orientations and distances from the object and combined into a composite image representing the object. The image frames may be formed into the composite image based on representing features of each image frame as a set of points in a three-dimensional point cloud. Inconsistencies between the image frames may be adjusted when projecting respective points in the point cloud into the composite image. Quality of the image frames may be improved by processing the image frames to correct errors. Distracting features, such as the finger of a user holding the object being scanned, can be replaced with background content. As the scan progresses, a direction for capturing subsequent image frames is provided to the user as real-time feedback.
Type: Grant
Filed: August 29, 2014
Date of Patent: May 21, 2019
Assignee: ML Netherlands C.V.
Inventors: Alexander Ilic, Peter Weigand, Erik Fonseka
-
Patent number: 10225428
Abstract: A computer peripheral that may operate as a scanner. The scanner captures image frames as it is moved across an object. The image frames are formed into a composite image based on computations in two processes. In a first process, fast-track processing determines a coarse position of each of the image frames based on a relative position between each successive image frame and a respective preceding image frame, determined by matching overlapping portions of the image frames. In a second process, fine position adjustments are computed to reduce inconsistencies from determining positions of image frames based on relative positions to multiple prior image frames. The peripheral may also act as a mouse and may be configured with one or more navigation sensors that can be used to reduce the processing time required to match a successive image frame to a preceding image frame.
Type: Grant
Filed: February 19, 2016
Date of Patent: March 5, 2019
Assignee: ML Netherlands C.V.
Inventors: Martin Georg Zahnert, Erik Fonseka, Alexander Ilic, Simon Meier, Andreas Breitenmoser
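The two-process structure recurring in these scanner abstracts (a fast coarse pass chaining frame-to-frame offsets, then a fine pass reconciling offsets to multiple prior frames) can be sketched in 1D-per-axis form. Both function names, the iterative relaxation scheme, and its parameters are illustrative assumptions, not the patented algorithm:

```python
def coarse_positions(relative_offsets):
    """Fast-track sketch: each frame's coarse position is the running sum of
    pairwise offsets found by matching overlap with the preceding frame."""
    pos, out = (0.0, 0.0), [(0.0, 0.0)]
    for dx, dy in relative_offsets:
        pos = (pos[0] + dx, pos[1] + dy)
        out.append(pos)
    return out

def fine_adjust(positions, extra_constraints, iterations=50, step=0.5):
    """Fine sketch: nudge positions toward offsets measured against multiple
    prior frames (not just the predecessor), reducing accumulated drift."""
    pos = [list(p) for p in positions]
    for _ in range(iterations):
        for i, j, (dx, dy) in extra_constraints:  # frame j should sit at frame i + offset
            ex = pos[i][0] + dx - pos[j][0]
            ey = pos[i][1] + dy - pos[j][1]
            pos[j][0] += step * ex
            pos[j][1] += step * ey
    return [tuple(p) for p in pos]
```

The coarse pass gives immediate positions for real-time display; the fine pass cleans up the chain's accumulated error when loop-closing matches to earlier frames become available.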
-
Publication number: 20160227181
Abstract: A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured at different orientations and distances from the object and combined into a composite image representing the object. The image frames may be formed into the composite image based on representing features of each image frame as a set of points in a three-dimensional point cloud. Inconsistencies between the image frames may be adjusted when projecting respective points in the point cloud into the composite image. Quality of the image frames may be improved by processing the image frames to correct errors. Distracting features, such as the finger of a user holding the object being scanned, can be replaced with background content. As the scan progresses, a direction for capturing subsequent image frames is provided to the user as real-time feedback.
Type: Application
Filed: August 29, 2014
Publication date: August 4, 2016
Applicant: Dacuda AG
Inventors: Alexander Ilic, Peter Weigand, Erik Fonseka
-
Publication number: 20160173716
Abstract: A computer peripheral that may operate as a scanner. The scanner captures image frames as it is moved across an object. The image frames are formed into a composite image based on computations in two processes. In a first process, fast-track processing determines a coarse position of each of the image frames based on a relative position between each successive image frame and a respective preceding image frame, determined by matching overlapping portions of the image frames. In a second process, fine position adjustments are computed to reduce inconsistencies from determining positions of image frames based on relative positions to multiple prior image frames. The peripheral may also act as a mouse and may be configured with one or more navigation sensors that can be used to reduce the processing time required to match a successive image frame to a preceding image frame.
Type: Application
Filed: February 19, 2016
Publication date: June 16, 2016
Applicant: Dacuda AG
Inventors: Martin Georg Zahnert, Erik Fonseka, Alexander Ilic, Simon Meier, Andreas Breitenmoser
-
Patent number: 9300834
Abstract: A computer peripheral that may operate as a scanner. The scanner captures image frames as it is moved across an object. The image frames are formed into a composite image based on computations in two processes. In a first process, fast-track processing determines a coarse position of each of the image frames based on a relative position between each successive image frame and a respective preceding image frame, determined by matching overlapping portions of the image frames. In a second process, fine position adjustments are computed to reduce inconsistencies from determining positions of image frames based on relative positions to multiple prior image frames. The peripheral may also act as a mouse and may be configured with one or more navigation sensors that can be used to reduce the processing time required to match a successive image frame to a preceding image frame.
Type: Grant
Filed: May 17, 2010
Date of Patent: March 29, 2016
Assignee: Dacuda AG
Inventors: Martin Georg Zahnert, Erik Fonseka, Alexander Ilic, Simon Meier, Andreas Breitenmoser
-
Patent number: 8723885
Abstract: A computer peripheral that may operate as a scanner. The scanner captures image frames as it is moved across an object. The image frames are formed into a composite image based on computations in two processes. In a first process, fast-track processing determines a coarse position of each of the image frames based on a relative position between each successive image frame and a respective preceding image frame, determined by matching overlapping portions of the image frames. In a second process, fine position adjustments are computed to reduce inconsistencies from determining positions of image frames based on relative positions to multiple prior image frames. As a result, a composite image of an object being scanned may be presented in real time to a user, providing the user feedback on which portions of the object have been scanned and which have not.
Type: Grant
Filed: May 20, 2010
Date of Patent: May 13, 2014
Assignee: Dacuda AG
Inventors: Martin Georg Zahnert, Erik Fonseka, Alexander Ilic, Simon Meier, Andreas Breitenmoser