Patents by Inventor Alexander Ilic

Alexander Ilic has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240137664
    Abstract: An image sensor suitable for use in an augmented reality system to provide low latency image analysis with low power consumption is disclosed. The augmented reality system can be compact, and may be small enough to be packaged within a wearable device such as a set of goggles or mounted on a frame resembling ordinary eyeglasses. The image sensor may receive information about a region of an imaging array associated with a movable object and selectively output imaging information for that region. The region may be updated dynamically as the image sensor and/or the object moves. Such an image sensor provides a small amount of data from which object information used in rendering an augmented reality scene can be developed. The amount of data may be further reduced by configuring the image sensor to output indications of pixels for which the measured intensity of incident light changes.
    Type: Application
    Filed: January 3, 2024
    Publication date: April 25, 2024
    Applicant: Magic Leap, Inc.
    Inventors: Martin Georg Zahnert, Alexander Ilic, Erik Fonseka
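
    The abstract above describes two data-reduction ideas: reading out only a region of interest that follows a moving object, and emitting indications only for pixels whose measured intensity changed. The short Python sketch below illustrates both ideas on synthetic frames; the frame size, region size, and change threshold are editorial assumptions, not the patented sensor design.

      import numpy as np

      def roi_readout(frame, center, size):
          """Return only the pixels inside a square region of interest around `center`."""
          h, w = frame.shape
          r0 = max(0, center[0] - size // 2)
          c0 = max(0, center[1] - size // 2)
          return frame[r0:min(h, r0 + size), c0:min(w, c0 + size)]

      def change_events(prev_frame, frame, threshold=10):
          """Return (row, col, delta) for pixels whose measured intensity changed enough."""
          delta = frame.astype(np.int16) - prev_frame.astype(np.int16)
          rows, cols = np.nonzero(np.abs(delta) > threshold)
          return list(zip(rows.tolist(), cols.tolist(), delta[rows, cols].tolist()))

      rng = np.random.default_rng(0)
      prev = rng.integers(0, 200, size=(480, 640), dtype=np.uint8)   # previous frame
      curr = prev.copy()
      curr[100:120, 200:220] += 40          # a small moving object brightens one patch

      roi = roi_readout(curr, center=(110, 210), size=64)   # tracked-object region only
      events = change_events(prev, curr)                    # sparse change indications
      print(roi.shape, len(events), "changed pixels")
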
  • Publication number: 20240137665
    Abstract: A wearable display system including multiple cameras and a processor is disclosed. A greyscale camera and a color camera can be arranged to provide a central view field associated with both cameras and a peripheral view field associated with one of the two cameras. One or more of the two cameras may be a plenoptic camera. The wearable display system may acquire light field information using the at least one plenoptic camera and create a world model using the first light field information and first depth information stereoscopically determined from images acquired by the greyscale camera and the color camera. The wearable display system can track head pose using the at least one plenoptic camera and the world model. The wearable display system can track objects in the central view field and the peripheral view fields using the one or two plenoptic cameras, when the objects satisfy a depth criterion.
    Type: Application
    Filed: December 15, 2023
    Publication date: April 25, 2024
    Applicant: Magic Leap, Inc.
    Inventors: Martin Georg Zahnert, Alexander Ilic, Miguel Andres Granados Velasquez, Javier Victorio Gomez Gonzalez
  • Patent number: 11902677
    Abstract: An image sensor suitable for use in an augmented reality system to provide low latency image analysis with low power consumption is disclosed. The augmented reality system can be compact, and may be small enough to be packaged within a wearable device such as a set of goggles or mounted on a frame resembling ordinary eyeglasses. The image sensor may receive information about a region of an imaging array associated with a movable object and selectively output imaging information for that region. The region may be updated dynamically as the image sensor and/or the object moves. Such an image sensor provides a small amount of data from which object information used in rendering an augmented reality scene can be developed. The amount of data may be further reduced by configuring the image sensor to output indications of pixels for which the measured intensity of incident light changes.
    Type: Grant
    Filed: October 30, 2019
    Date of Patent: February 13, 2024
    Assignee: Magic Leap, Inc.
    Inventors: Martin Georg Zahnert, Alexander Ilic, Erik Fonseka
  • Patent number: 11885871
    Abstract: An augmented reality device has a radar system that generates radar maps of locations of real world objects. An inertial measurement unit detects measurement values such as acceleration, gravitational force and inclination ranges. The values from the measurement unit drift over time. The radar maps are processed to determine fingerprints and the fingerprints are combined with the values from the measurement unit to store a pose estimate. Pose estimates at different times are compared to determine drift of the measurement unit. A measurement unit filter is adjusted to correct for the drift.
    Type: Grant
    Filed: May 24, 2019
    Date of Patent: January 30, 2024
    Assignee: Magic Leap, Inc.
    Inventors: Alexander Ilic, Martin Georg Zahnert, Koon Keong Shee
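
    The entry above anchors drifting inertial measurements with pose estimates derived from radar-map fingerprints. As a toy illustration, the one-dimensional sketch below compares the position obtained by integrating biased accelerometer samples against radar-anchored positions at two times and turns the discrepancy into a bias correction; the constant-bias model and the closed-form correction are assumptions for illustration, not the patented filter.

      def integrate_imu(accel_samples, dt, bias_correction=0.0):
          """Integrate accelerometer samples (with a bias correction) into position."""
          velocity, position = 0.0, 0.0
          for a in accel_samples:
              velocity += (a - bias_correction) * dt
              position += velocity * dt
          return position

      # Simulated data: the device is actually stationary (true acceleration = 0),
      # but the IMU reports a constant bias of 0.05 m/s^2.
      dt, n = 0.01, 1000                       # 10 seconds of samples
      accel = [0.05] * n                       # biased accelerometer readings

      # Radar-map fingerprints matched at t = 0 and t = 10 s say the device did not move.
      radar_displacement = 0.0
      imu_displacement = integrate_imu(accel, dt)

      # Drift observed over the interval, converted back into a bias estimate
      # (for a constant bias b, the integrated position error is roughly 0.5 * b * T^2).
      T = n * dt
      bias_estimate = 2.0 * (imu_displacement - radar_displacement) / (T * T)

      corrected = integrate_imu(accel, dt, bias_correction=bias_estimate)
      print(f"uncorrected drift: {imu_displacement:.3f} m, corrected: {corrected:.3f} m")
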
  • Patent number: 11889209
    Abstract: A wearable display system including multiple cameras and a processor is disclosed. A greyscale camera and a color camera can be arranged to provide a central view field associated with both cameras and a peripheral view field associated with one of the two cameras. One or more of the two cameras may be a plenoptic camera. The wearable display system may acquire light field information using the at least one plenoptic camera and create a world model using the first light field information and first depth information stereoscopically determined from images acquired by the greyscale camera and the color camera. The wearable display system can track head pose using the at least one plenoptic camera and the world model. The wearable display system can track objects in the central view field and the peripheral view fields using the one or two plenoptic cameras, when the objects satisfy a depth criterion.
    Type: Grant
    Filed: February 7, 2020
    Date of Patent: January 30, 2024
    Assignee: Magic Leap, Inc.
    Inventors: Martin Georg Zahnert, Alexander Ilic, Miguel Andres Granados Velasquez, Javier Victorio Gomez Gonzalez
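
    The entry above routes object tracking to different cameras depending on the view field and a depth criterion. The hypothetical sketch below shows one way such a routing decision could look; the field-of-view overlap, the depth threshold, and the fallback to a stored world model are invented for illustration and are not taken from the patent.

      from dataclasses import dataclass

      @dataclass
      class TrackedObject:
          azimuth_deg: float      # horizontal angle from the optical axis
          depth_m: float          # estimated distance to the object

      CENTRAL_HALF_ANGLE_DEG = 30.0     # assumed overlap of the greyscale and color cameras
      PLENOPTIC_MAX_DEPTH_M = 1.5       # assumed range where light-field depth is reliable

      def choose_tracker(obj: TrackedObject) -> str:
          in_central = abs(obj.azimuth_deg) <= CENTRAL_HALF_ANGLE_DEG
          if obj.depth_m <= PLENOPTIC_MAX_DEPTH_M:
              return "plenoptic"            # depth criterion satisfied, works peripherally too
          if in_central:
              return "stereo"               # both cameras see it: triangulate
          return "world-model"              # fall back to the stored world model

      for obj in [TrackedObject(5, 0.6), TrackedObject(45, 0.9), TrackedObject(10, 3.0)]:
          print(obj, "->", choose_tracker(obj))
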
  • Patent number: 11809613
    Abstract: A high-resolution image sensor suitable for use in an augmented reality (AR) system to provide low latency image analysis with low power consumption is disclosed. The AR system can be compact, and may be small enough to be packaged within a wearable device such as a set of goggles or mounted on a frame resembling ordinary eyeglasses. The image sensor may receive information about a region of an imaging array associated with a movable object, selectively output imaging information for that region, and synchronously output high-resolution image frames. The region may be updated dynamically as the image sensor and/or the object moves. The image sensor may output the high-resolution image frames less frequently than the region is updated when the image sensor and/or the object moves. Such an image sensor provides a small amount of data from which object information used in rendering an AR scene can be developed.
    Type: Grant
    Filed: October 30, 2019
    Date of Patent: November 7, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Martin Georg Zahnert, Alexander Ilic
  • Patent number: 11798130
    Abstract: A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured in different orientations and distances from the object and combined into a composite image representing an image of the object. The image frames may be formed into the composite image based on representing features of each image frame as a set of points in a three dimensional point cloud. Inconsistencies between the image frames may be adjusted when projecting respective points in the point cloud into the composite image. Quality of the image frames may be improved by processing the image frames to correct errors. Reflections and shadows may be detected and removed. Further, optical character recognition may be applied. As the scan progresses, a direction for capturing subsequent image frames is provided to a user as real-time feedback.
    Type: Grant
    Filed: August 30, 2021
    Date of Patent: October 24, 2023
    Assignee: Magic Leap, Inc.
    Inventor: Alexander Ilic
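
    Several scanning entries in this listing, including the one above, build a composite image by projecting point-cloud features into a common image and adjusting inconsistencies between frames. The sketch below shows a minimal version of that projection step with a pinhole camera model and a mean-offset adjustment; the intrinsic matrix and the averaging rule are simplifying assumptions, not the patented pipeline.

      import numpy as np

      K = np.array([[800.0, 0.0, 320.0],     # assumed pinhole intrinsics (fx, fy, cx, cy)
                    [0.0, 800.0, 240.0],
                    [0.0, 0.0, 1.0]])

      def project(points_3d):
          """Project Nx3 camera-frame points to Nx2 pixel coordinates."""
          uvw = points_3d @ K.T
          return uvw[:, :2] / uvw[:, 2:3]

      # Points already placed in the composite, and the same physical points as
      # reconstructed from a new frame, slightly offset by reconstruction drift.
      reference = np.array([[0.1, 0.2, 1.0], [0.3, -0.1, 1.2], [-0.2, 0.05, 0.9]])
      new_frame = reference + np.array([0.02, -0.01, 0.0])

      ref_px = project(reference)
      new_px = project(new_frame)
      adjustment = (ref_px - new_px).mean(axis=0)   # average pixel inconsistency
      print("pixel adjustment applied to the new frame:", adjustment)
      print("adjusted pixels:\n", new_px + adjustment)
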
  • Patent number: 11631213
    Abstract: Forming a 3D representation of an object using a portable electronic device with a camera. A sequence of image frames, captured with the camera may be processed to generate 3D information about the object. This 3D information may be used to present a visual representation of the object as real-time feedback to a user, indicating confidence of 3D information for regions of the object. Feedback may be a composite image based on surfaces in a 3D model derived from the 3D information given visual characteristics derived from the image frames. The 3D information may be used in a number of ways, including as an input to a 3D printer or an input, representing, in whole or in part, an avatar for a game or other purposes. To enable processing to be performed on a portable device, a portion of the image frames may be selected for processing with higher resolution.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: April 18, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Erik Fonseka, Alexander Ilic
  • Patent number: 11598957
    Abstract: Systems and methods for synthesizing an image of the face by a head-mounted display (HMD), such as an augmented reality device, are disclosed. The HMD may be able to observe only a portion of the face with an inward-facing imaging system, e.g., the periocular region. The systems and methods described herein can generate a mapping of a conformation of the portion of the face that is not imaged based at least partly on a conformation of the portion of the face that is imaged. The HMD can receive an image of a portion of the face and use the mapping to determine a conformation of the portion of the face that is not observed. The HMD can combine the observed and unobserved portions to synthesize a full face image.
    Type: Grant
    Filed: April 20, 2022
    Date of Patent: March 7, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Alexander Ilic, Martin Georg Zahnert
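
    The face-synthesis entries in this listing, including the one above, map the conformation of the imaged periocular region to the conformation of the unobserved portion of the face. The toy sketch below stands in for that mapping with a linear least-squares model over small parameter vectors; representing conformations as parameter vectors and using a linear map are editorial assumptions rather than the method claimed in the patent.

      import numpy as np

      rng = np.random.default_rng(1)

      # Training pairs: (periocular parameters, lower-face parameters), e.g. captured
      # while the whole face was still observable.
      periocular = rng.normal(size=(200, 4))                 # e.g., brow/eyelid features
      true_map = rng.normal(size=(4, 6))
      lower_face = periocular @ true_map + 0.01 * rng.normal(size=(200, 6))  # e.g., mouth/jaw

      # Fit the mapping W such that periocular @ W approximates lower_face.
      W, *_ = np.linalg.lstsq(periocular, lower_face, rcond=None)

      # Runtime: only the periocular region is imaged; infer the rest and combine.
      observed = rng.normal(size=(1, 4))
      inferred_lower = observed @ W
      full_face_params = np.concatenate([observed, inferred_lower], axis=1)
      print("synthesized full-face parameter vector:", np.round(full_face_params, 3))
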
  • Patent number: 11563926
    Abstract: A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured in different orientations and distances from the object and combined into a composite image representing an image of the object. The image frames may be formed into the composite image based on representing features of each image frame as a set of points in a three dimensional point cloud. Inconsistencies between the image frames may be adjusted when projecting respective points in the point cloud into the composite image. Quality of the image frames may be improved by processing the image frames to correct errors. Distracting features, such as the finger of a user holding the object being scanned, can be replaced with background content. As the scan progresses, a direction for capturing subsequent image frames is provided to a user as real-time feedback.
    Type: Grant
    Filed: November 17, 2020
    Date of Patent: January 24, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Alexander Ilic, Peter Weigand, Erik Fonseka
  • Patent number: 11516383
    Abstract: A method of forming a visual representation of an object from a plurality of image frames acquired using a portable electronic device, the method comprising the steps of determining at least one parameter of motion of the portable electronic device; determining at least one capture condition for at least one first image frame of the plurality of image frames; computing, based on the at least one parameter of motion and the at least one capture condition, an indication of blur in the image frame; based on the indication of blur, conditionally taking a corrective action.
    Type: Grant
    Filed: June 11, 2020
    Date of Patent: November 29, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Alexander Ilic, Manuel Werlberger
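
    The entry above computes an indication of blur from a motion parameter and a capture condition and then conditionally takes corrective action. The sketch below illustrates one plausible reading: predicted blur in pixels is the angular velocity multiplied by the exposure time and the focal length in pixels, and frames whose predicted blur exceeds a threshold trigger a corrective action. The specific model, focal length, and threshold are assumptions for illustration, not the claimed method.

      def blur_indication(angular_velocity_rad_s, exposure_s, focal_length_px):
          """Predicted blur, in pixels, from device rotation during the exposure."""
          return angular_velocity_rad_s * exposure_s * focal_length_px

      def check_frame(angular_velocity_rad_s, exposure_s,
                      focal_length_px=800.0, max_blur_px=1.5):
          blur_px = blur_indication(angular_velocity_rad_s, exposure_s, focal_length_px)
          if blur_px > max_blur_px:
              # Possible corrective actions: discard the frame, ask the user to slow
              # down, or request a shorter exposure from the camera.
              return f"blur {blur_px:.2f} px > {max_blur_px} px: discard frame / warn user"
          return f"blur {blur_px:.2f} px: keep frame"

      print(check_frame(angular_velocity_rad_s=0.05, exposure_s=0.02))   # slow motion
      print(check_frame(angular_velocity_rad_s=0.80, exposure_s=0.02))   # fast motion
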
  • Publication number: 20220244532
    Abstract: Systems and methods for synthesizing an image of the face by a head-mounted display (HMD), such as an augmented reality device, are disclosed. The HMD may be able to observe only a portion of the face with an inward-facing imaging system, e.g., the periocular region. The systems and methods described herein can generate a mapping of a conformation of the portion of the face that is not imaged based at least partly on a conformation of the portion of the face that is imaged. The HMD can receive an image of a portion of the face and use the mapping to determine a conformation of the portion of the face that is not observed. The HMD can combine the observed and unobserved portions to synthesize a full face image.
    Type: Application
    Filed: April 20, 2022
    Publication date: August 4, 2022
    Inventors: Alexander Ilic, Martin Georg Zahnert
  • Patent number: 11347051
    Abstract: Systems and methods for synthesizing an image of the face by a head-mounted display (HMD), such as an augmented reality device, are disclosed. The HMD may be able to observe only a portion of the face with an inward-facing imaging system, e.g., the periocular region. The systems and methods described herein can generate a mapping of a conformation of the portion of the face that is not imaged based at least partly on a conformation of the portion of the face that is imaged. The HMD can receive an image of a portion of the face and use the mapping to determine a conformation of the portion of the face that is not observed. The HMD can combine the observed and unobserved portions to synthesize a full face image.
    Type: Grant
    Filed: July 30, 2020
    Date of Patent: May 31, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Alexander Ilic, Martin Georg Zahnert
  • Publication number: 20220132056
    Abstract: A wearable display system including multiple cameras and a processor is disclosed. A greyscale camera and a color camera can be arranged to provide a central view field associated with both cameras and a peripheral view field associated with one of the two cameras. One or more of the two cameras may be a plenoptic camera. The wearable display system may acquire light field information using the at least one plenoptic camera and create a world model using the first light field information and first depth information stereoscopically determined from images acquired by the greyscale camera and the color camera. The wearable display system can track head pose using the at least one plenoptic camera and the world model. The wearable display system can track objects in the central view field and the peripheral view fields using the one or two plenoptic cameras, when the objects satisfy a depth criterion.
    Type: Application
    Filed: February 7, 2020
    Publication date: April 28, 2022
    Applicant: Magic Leap, Inc.
    Inventors: Martin Georg Zahnert, Alexander Ilic, Miguel Andres Granados Velasquez, Javier Victorio Gomez Gonzalez
  • Patent number: 11315217
    Abstract: A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured in different orientations and distances from the object and combined into a composite image representing an image of the object. The image frames may be formed into the composite image based on representing features of each image frame as a set of points in a three dimensional point cloud. Inconsistencies between the image frames may be adjusted when projecting respective points in the point cloud into the composite image. Quality of the image frames may be improved by processing the image frames to correct errors. Further, operating conditions may be selected, automatically or based on instructions provided to a user, to reduce motion blur. Techniques, including relocalization, allow user-selected regions of the composite image to be changed.
    Type: Grant
    Filed: August 12, 2019
    Date of Patent: April 26, 2022
    Assignee: ML Netherlands C.V.
    Inventors: Alexander Ilic, Manuel Werlberger, Benedikt Koeppel
  • Publication number: 20220103709
    Abstract: A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured in different orientations and distances from the object and combined into a composite image representing a three-dimensional image of the object. The image frames may be formed into the composite image based on representing features of each image frame as a set of points in a three dimensional depth map. Coordinates of the points in the depth map may be estimated with a level of certainty. The level of certainty may be used to determine which points are included in the composite image. The selected points may be smoothed and a mesh model may be formed by creating a convex hull of the selected points. The mesh model and associated texture information may be used to render a three-dimensional representation of the object on a two-dimensional display.
    Type: Application
    Filed: December 10, 2021
    Publication date: March 31, 2022
    Applicant: ML Netherlands C.V.
    Inventors: Alexander Ilic, Benedikt Koeppel
  • Patent number: 11245806
    Abstract: A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured in different orientations and distances from the object and combined into a composite image representing a three-dimensional image of the object. The image frames may be formed into the composite image based on representing features of each image frame as a set of points in a three dimensional depth map. Coordinates of the points in the depth map may be estimated with a level of certainty. The level of certainty may be used to determine which points are included in the composite image. The selected points may be smoothed and a mesh model may be formed by creating a convex hull of the selected points. The mesh model and associated texture information may be used to render a three-dimensional representation of the object on a two-dimensional display.
    Type: Grant
    Filed: November 15, 2019
    Date of Patent: February 8, 2022
    Assignee: ML Netherlands C.V.
    Inventors: Alexander Ilic, Benedikt Koeppel
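
    The two entries above select depth-map points by their certainty, smooth them, and mesh them with a convex hull. The sketch below runs those three steps on synthetic points; the certainty values, the nearest-neighbour smoothing, and the 0.6 threshold are illustrative assumptions, and scipy's ConvexHull stands in for whatever meshing the described system actually uses.

      import numpy as np
      from scipy.spatial import ConvexHull

      rng = np.random.default_rng(2)

      # Estimated 3D points (roughly on a unit sphere), each with a certainty in [0, 1].
      points = rng.normal(size=(500, 3))
      points /= np.linalg.norm(points, axis=1, keepdims=True)
      certainty = rng.uniform(0.0, 1.0, size=500)

      selected = points[certainty > 0.6]          # drop low-confidence points

      # Very light smoothing: average each point with its nearest neighbour.
      dists = np.linalg.norm(selected[:, None, :] - selected[None, :, :], axis=-1)
      np.fill_diagonal(dists, np.inf)
      nearest = dists.argmin(axis=1)
      smoothed = 0.5 * (selected + selected[nearest])

      hull = ConvexHull(smoothed)                 # mesh model = convex hull triangles
      print(f"{len(selected)} points kept, {len(hull.simplices)} triangles in the mesh")
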
  • Publication number: 20210392241
    Abstract: A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured in different orientations and distances from the object and combined into a composite image representing an image of the object. The image frames may be formed into the composite image based on representing features of each image frame as a set of points in a three dimensional point cloud. Inconsistencies between the image frames may be adjusted when projecting respective points in the point cloud into the composite image. Quality of the image frames may be improved by processing the image frames to correct errors. Reflections and shadows may be detected and removed. Further, optical character recognition may be applied. As the scan progresses, a direction for capturing subsequent image frames is provided to a user as real-time feedback.
    Type: Application
    Filed: August 30, 2021
    Publication date: December 16, 2021
    Applicant: ML Netherlands C.V.
    Inventor: Alexander Ilic
  • Patent number: 11115565
    Abstract: A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured in different orientations and distances from the object and combined into a composite image representing an image of the object. The image frames may be formed into the composite image based on representing features of each image frame as a set of points in a three dimensional point cloud. Inconsistencies between the image frames may be adjusted when projecting respective points in the point cloud into the composite image. Quality of the image frames may be improved by processing the image frames to correct errors. Reflections and shadows may be detected and removed. Further, optical character recognition may be applied. As the scan progresses, a direction for capturing subsequent image frames is provided to a user as real-time feedback.
    Type: Grant
    Filed: September 12, 2019
    Date of Patent: September 7, 2021
    Assignee: ML Netherlands C.V.
    Inventor: Alexander Ilic
  • Publication number: 20210177370
    Abstract: A method of viewing a patient, including inserting a catheter, is described for health procedure navigation. A CT scan is carried out on a body part of the patient. Raw data from the CT scan is processed to create three-dimensional image data, which is stored in a data store. Projectors generate light in a pattern representative of the image data, and waveguides guide the light to a retina of an eye of a viewer while light from an external surface of the body is transmitted to the retina, so that the viewer sees the external surface of the body augmented with a rendering of the body part derived from the processed data.
    Type: Application
    Filed: August 22, 2019
    Publication date: June 17, 2021
    Applicant: Magic Leap, Inc.
    Inventors: Nastasja U Robaina, Praveen Babu J D, David Charles Lundmark, Alexander Ilic
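
    The entry above renders processed CT data over the viewer's direct view of the patient. Leaving the projector and waveguide hardware aside, the small sketch below turns a synthetic CT-like volume into a 2D rendering via a maximum-intensity projection, the kind of image that could be composited with the see-through view of the body; the synthetic volume and the choice of projection are editorial stand-ins, not the patented system.

      import numpy as np

      # Synthetic "CT" volume: empty background with a bright tubular structure
      # (e.g., a vessel or catheter path) winding through it.
      vol = np.zeros((64, 64, 64), dtype=np.float32)
      zs = np.arange(64)
      vol[zs, 32 + (8 * np.sin(zs / 8.0)).astype(int), 32] = 1.0

      # Maximum-intensity projection along the viewing axis gives a 2D rendering
      # that an AR display could register over the patient's body.
      mip = vol.max(axis=2)
      print("rendered image shape:", mip.shape, "max value:", mip.max())
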