Patents by Inventor Adrian Kaehler

Adrian Kaehler has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230018498
    Abstract: A software compensated robotic system makes use of recurrent neural networks and image processing to control operation and/or movement of an end effector. Images are used to compensate for variations in the response of the robotic system to command signals. This compensation allows for the use of components having lower reproducibility, precision, and/or accuracy than would otherwise be practical.
    Type: Application
    Filed: April 25, 2022
    Publication date: January 19, 2023
    Applicant: Sanctuary Cognitive Systems Corporation
    Inventor: Adrian Kaehler
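    As a rough illustration of the compensation scheme this abstract describes, the sketch below feeds raw command signals and image-derived observations of the end effector through a small recurrent network that outputs a corrected command. The single-layer Elman cell, the vector sizes, and the random weights (standing in for parameters learned during calibration) are all assumptions; the publication does not specify the network architecture.

      # Minimal sketch of image-based command compensation with a recurrent network.
      # Shapes, names, and the single-layer cell are illustrative assumptions.
      import numpy as np

      rng = np.random.default_rng(0)
      CMD_DIM, OBS_DIM, HID_DIM = 6, 6, 32          # command, image-derived observation, hidden state

      # Random weights stand in for parameters that would be learned during calibration.
      W_in = rng.normal(0, 0.1, (HID_DIM, CMD_DIM + OBS_DIM))
      W_h = rng.normal(0, 0.1, (HID_DIM, HID_DIM))
      W_out = rng.normal(0, 0.1, (CMD_DIM, HID_DIM))

      def compensate(commands, observations):
          """Produce corrected command signals, one time step at a time."""
          h = np.zeros(HID_DIM)
          corrected = []
          for cmd, obs in zip(commands, observations):
              h = np.tanh(W_in @ np.concatenate([cmd, obs]) + W_h @ h)  # recurrent state tracks response history
              corrected.append(cmd + W_out @ h)                         # residual correction on the raw command
          return np.array(corrected)

      cmds = rng.normal(size=(10, CMD_DIM))
      obs = cmds + rng.normal(0, 0.05, size=(10, OBS_DIM))              # noisy observed end-effector poses
      print(compensate(cmds, obs).shape)                                # (10, 6)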
  • Publication number: 20230018982
    Abstract: Apparatus and methods for displaying an image by a rotating structure are provided. The rotating structure can comprise blades of a fan. The fan can be a cooling fan for an electronics device such as an augmented reality display. In some embodiments, the rotating structure comprises light sources that emit light to generate the image. The light sources can comprise light-field emitters. In other embodiments, the rotating structure is illuminated by an external (e.g., non-rotating) light source.
    Type: Application
    Filed: September 22, 2022
    Publication date: January 19, 2023
    Inventors: Guillermo Padin Rohena, Ralph Remsburg, Adrian Kaehler, Evan Francis Rynk
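    The image-on-a-fan idea lends itself to a short persistence-of-vision sketch: at each rotation angle, the light sources along a blade display the pixels of the target image that lie along that radius. The LED count, image size, and polar sampling below are assumptions for illustration only.

      # Sample the target image along one blade's radius for a given rotation angle.
      import numpy as np

      def blade_column(image, angle_rad, num_leds):
          """Intensities the LEDs along one blade should emit at this angle."""
          h, w = image.shape
          cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
          radii = np.linspace(0, min(cx, cy), num_leds)                 # one sample per LED, hub to tip
          xs = np.clip(np.round(cx + radii * np.cos(angle_rad)).astype(int), 0, w - 1)
          ys = np.clip(np.round(cy - radii * np.sin(angle_rad)).astype(int), 0, h - 1)
          return image[ys, xs]

      img = np.tile(np.linspace(0, 1, 64), (64, 1))                     # synthetic 64x64 gradient image
      for step in range(0, 360, 45):                                    # one revolution in 45-degree steps
          print(step, blade_column(img, np.deg2rad(step), 16).round(2))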
  • Patent number: 11553165
    Abstract: Disclosed are improved methods, systems and devices for color night vision that reduce the number of intensifiers and/or decrease noise. In some embodiments, color night vision is provided in a system in which multiple spectral bands are maintained, filtered separately, and then recombined in a unique three-lens-filtering setup. An illustrative four-camera night vision system is unique in that its first three cameras separately filter different bands using a subtractive Cyan, Magenta and Yellow (CMY) color filtering process, while its fourth camera is used to sense either additional IR illuminators or a luminance channel to increase brightness. In some embodiments, the color night vision is implemented to distinguish details of an image in low light. The unique application of the three-lens subtractive CMY filtering allows for better photon scavenging and preservation of important color information.
    Type: Grant
    Filed: October 5, 2020
    Date of Patent: January 10, 2023
    Assignee: Applied Minds, LLC
    Inventors: Michael Keesling, Bran Ferren, Adrian Kaehler, Dan Ruderman, David Beal, Pablo Maurin, Eric Powers
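    The "photon scavenging" advantage comes from each subtractive filter passing roughly two thirds of the visible spectrum instead of one third, while the three filtered channels can still be recombined into color. The sketch below inverts an idealised linear model (cyan passes green+blue, magenta passes red+blue, yellow passes red+green); the patent's actual recombination and noise handling are not described at this level of detail.

      # Recover RGB from idealised subtractive CMY measurements: c=G+B, m=R+B, y=R+G.
      import numpy as np

      def cmy_to_rgb(c, m, y):
          r = (m + y - c) / 2.0
          g = (c + y - m) / 2.0
          b = (c + m - y) / 2.0
          return np.clip(np.stack([r, g, b], axis=-1), 0.0, None)

      # A scene pixel with R=0.2, G=0.5, B=0.1 as seen through the three filters.
      c, m, y = 0.5 + 0.1, 0.2 + 0.1, 0.2 + 0.5
      print(cmy_to_rgb(np.array(c), np.array(m), np.array(y)))          # ~[0.2, 0.5, 0.1]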
  • Patent number: 11538280
    Abstract: Systems and methods for eyelid shape estimation are disclosed. In one aspect, after receiving an eye image of an eye (e.g., from an image capture device), an eye pose of the eye in the eye image is determined. From the eye pose, an eyelid shape (of an upper eyelid or a lower eyelid) can be estimated using an eyelid shape mapping model. The eyelid shape mapping model relates the eye pose and the eyelid shape. In another aspect, the eyelid shape mapping model is learned (e.g., using a neural network).
    Type: Grant
    Filed: June 1, 2020
    Date of Patent: December 27, 2022
    Assignee: Magic Leap, Inc.
    Inventor: Adrian Kaehler
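    The core of this patent is the mapping model from eye pose to eyelid shape. As one hedged illustration, the sketch below fits a polynomial regression from pose angles to the coefficients of a parabolic eyelid curve; the parabolic parameterisation and the least-squares fit are stand-ins, since the abstract only states that the mapping can be learned (e.g., with a neural network).

      # Learn a mapping from eye pose (pitch, yaw) to eyelid-shape parameters (a, b, c).
      import numpy as np

      rng = np.random.default_rng(1)

      def pose_features(pitch, yaw):
          """Quadratic polynomial features of the eye pose."""
          return np.stack([np.ones_like(pitch), pitch, yaw, pitch * yaw, pitch**2, yaw**2], axis=-1)

      # Synthetic training data: eye poses and the eyelid parabola y = a*x^2 + b*x + c seen for each.
      poses = rng.uniform(-0.5, 0.5, size=(200, 2))
      shapes = pose_features(poses[:, 0], poses[:, 1]) @ rng.normal(size=(6, 3))

      # "Learning" the eyelid shape mapping model by least squares.
      W, *_ = np.linalg.lstsq(pose_features(poses[:, 0], poses[:, 1]), shapes, rcond=None)

      # Estimating the eyelid shape for a new eye pose.
      print(pose_features(np.array([0.1]), np.array([-0.2])) @ W)       # predicted (a, b, c)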
  • Publication number: 20220404626
    Abstract: In some embodiments, a system comprises a head-mounted frame removably coupleable to the user's head; one or more light sources coupled to the head-mounted frame and configured to emit light with at least two different wavelengths toward a target object in an irradiation field of view of the light sources; one or more electromagnetic radiation detectors coupled to the head-mounted member and configured to receive light reflected after encountering the target object; and a controller operatively coupled to the one or more light sources and detectors and configured to determine and display an output indicating the identity or property of the target object as determined by the light properties measured by the detectors in relation to the light properties emitted by the light sources.
    Type: Application
    Filed: August 26, 2022
    Publication date: December 22, 2022
    Inventors: Nicole Elizabeth Samec, Nastasja U. Robaina, Adrian Kaehler, Mark Baerenrodt, Eric Baerenrodt, Christopher M. Harrises, Tammy Sherri Powers
  • Publication number: 20220404178
    Abstract: An apparatus is disclosed for capturing image information. The apparatus includes a waveguide having opposed planar input and output faces. A diffractive optical element (DOE) is formed across the waveguide. The DOE is configured to couple a portion of the light passing through the waveguide into the waveguide. The light coupled into the waveguide is directed via total internal reflection to an exit location on the waveguide. The apparatus further includes a light sensor having an input positioned adjacent the exit location of the waveguide to capture light exiting therefrom and generate output signals corresponding thereto. A processor determines the angle and position of the coupled light with respect to the input face of the waveguide based on the output signals.
    Type: Application
    Filed: August 19, 2022
    Publication date: December 22, 2022
    Inventor: Adrian Kaehler
  • Publication number: 20220357582
    Abstract: Systems and methods for generating a face model for a user of a head-mounted device are disclosed. The head-mounted device can include one or more eye cameras configured to image the face of the user while the user is putting the device on or taking the device off. The images obtained by the eye cameras may be analyzed using a stereoscopic vision technique, a monocular vision technique, or a combination, to generate a face model for the user.
    Type: Application
    Filed: July 25, 2022
    Publication date: November 10, 2022
    Inventors: Gholamreza Amayeh, Adrian Kaehler, Douglas Lee
  • Patent number: 11495154
    Abstract: Apparatus and methods for displaying an image by a rotating structure are provided. The rotating structure can comprise blades of a fan. The fan can be a cooling fan for an electronics device such as an augmented reality display. In some embodiments, the rotating structure comprises light sources that emit light to generate the image. The light sources can comprise light-field emitters. In other embodiments, the rotating structure is illuminated by an external (e.g., non-rotating) light source.
    Type: Grant
    Filed: August 30, 2021
    Date of Patent: November 8, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Guillermo Padin Rohena, Ralph Remsburg, Adrian Kaehler, Evan Francis Rynk
  • Patent number: 11480467
    Abstract: Wearable spectroscopy systems and methods for identifying one or more characteristics of a target object are described. Spectroscopy systems may include a light source configured to emit light in an irradiated field of view and an electromagnetic radiation detector configured to receive reflected light from a target object irradiated by the light source. One or more processors of the systems may identify a characteristic of the target object based on a determined level of light absorption by the target object. Some systems and methods may include one or more corrections for scattered and/or ambient light such as applying an ambient light correction, passing the reflected light through an anti-scatter grid, or using a time-dependent variation in the emitted light.
    Type: Grant
    Filed: March 21, 2019
    Date of Patent: October 25, 2022
    Assignee: MAGIC LEAP, INC.
    Inventors: Adrian Kaehler, Christopher M. Harrises, Eric Baerenrodt, Mark Baerenrodt, Nastasja U. Robaina, Nicole Elizabeth Samec, Tammy Sherri Powers, Ivan Li Chuen Yeoh, Adam Carl Wright
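    A compact way to see the detection-versus-emission comparison is as an absorbance calculation at two wavelengths followed by a lookup against reference values. The Beer-Lambert-style formula is standard, but the two-wavelength ratio and the reference table below are purely illustrative assumptions, not values from the patent.

      # Identify a target characteristic from how strongly it absorbs emitted light.
      import math

      def absorbance(emitted, reflected):
          """Absorbance A = -log10(I_reflected / I_emitted)."""
          return -math.log10(reflected / emitted)

      REFERENCE = {"tissue A": 1.8, "tissue B": 0.9, "plastic": 0.3}    # hypothetical absorbance ratios

      def identify(emitted, reflected_wl1, reflected_wl2):
          ratio = absorbance(emitted, reflected_wl1) / absorbance(emitted, reflected_wl2)
          return min(REFERENCE, key=lambda name: abs(REFERENCE[name] - ratio))

      print(identify(emitted=1.0, reflected_wl1=0.40, reflected_wl2=0.60))   # -> "tissue A"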
  • Patent number: 11478927
    Abstract: Provided is a robot that includes: a first sensor having a first output and configured to sense state of a robot or an environment of the robot; a first hardware machine-learning accelerator coupled to the first output of the first sensor and configured to transform information sensed by the first sensor into a first latent-space representation; a second sensor having a second output and configured to sense state of the robot or the environment of the robot; a second hardware machine-learning accelerator configured to transform information sensed by the second sensor into a second latent-space representation; and a processor configured to control the robot based on both the first latent-space representation and the second latent-space representation.
    Type: Grant
    Filed: April 1, 2022
    Date of Patent: October 25, 2022
    Assignee: Giant.AI, Inc.
    Inventors: Jeff Kranski, Chris Cianci, Carolyn Wales, Adrian Kaehler
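    The architecture here is easy to picture as two independent encoders feeding one controller. In the sketch below, two matrix projections stand in for the hardware machine-learning accelerators, and a linear policy on the processor consumes the concatenated latent vectors; the dimensions and the linear policy are assumptions.

      # Two sensors -> two latent-space representations -> one control policy.
      import numpy as np

      rng = np.random.default_rng(2)
      CAM_DIM, JOINT_DIM, LATENT, ACT_DIM = 1024, 12, 16, 12

      ENC_CAM = rng.normal(0, 0.03, (LATENT, CAM_DIM))                  # stands in for accelerator #1
      ENC_JNT = rng.normal(0, 0.3, (LATENT, JOINT_DIM))                 # stands in for accelerator #2
      POLICY = rng.normal(0, 0.2, (ACT_DIM, 2 * LATENT))                # runs on the control processor

      def control_step(camera_frame, joint_state):
          z_cam = np.tanh(ENC_CAM @ camera_frame)                       # first latent-space representation
          z_jnt = np.tanh(ENC_JNT @ joint_state)                        # second latent-space representation
          return POLICY @ np.concatenate([z_cam, z_jnt])                # actuator commands

      print(control_step(rng.normal(size=CAM_DIM), rng.normal(size=JOINT_DIM)).shape)   # (12,)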
  • Publication number: 20220327281
    Abstract: An augmented reality (AR) device can be configured to monitor ambient audio data. The AR device can detect speech in the ambient audio data, convert the detected speech into text, or detect keywords such as rare words in the speech. When a rare word is detected, the AR device can retrieve auxiliary information (e.g., a definition) related to the rare word from a public or private source. The AR device can display the auxiliary information for a user to help the user better understand the speech. The AR device may perform translation of foreign speech, may display text (or the translation) of a speaker's speech to the user, or display statistical or other information associated with the speech.
    Type: Application
    Filed: June 27, 2022
    Publication date: October 13, 2022
    Inventors: Jeffrey Scott Sommers, Jennifer M.R. Devine, Joseph Wayne Seuck, Adrian Kaehler
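    The rare-word path reduces to: transcribe, flag words whose estimated frequency falls below a threshold, and fetch auxiliary information for each flagged word. The word-frequency table, threshold, and glossary below are small stand-ins for the corpus statistics and public or private sources the device would actually query.

      # Flag rare words in a transcript and attach auxiliary information.
      WORD_FREQ = {"the": 0.05, "robot": 1e-4, "waveguide": 2e-7, "saccade": 1e-7}
      GLOSSARY = {"waveguide": "a structure that guides light along a path",
                  "saccade": "a rapid eye movement between fixation points"}
      RARE_THRESHOLD = 1e-6

      def annotate(transcript):
          """Return (word, auxiliary info) pairs for rare words in the transcript."""
          hits = []
          for word in transcript.lower().split():
              word = word.strip(".,!?")
              if WORD_FREQ.get(word, 0.0) < RARE_THRESHOLD and word in GLOSSARY:
                  hits.append((word, GLOSSARY[word]))
          return hits

      print(annotate("The robot uses a waveguide to track each saccade."))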
  • Publication number: 20220317853
    Abstract: Provided is a system that includes: a plurality of touch sensors sharing a signal medium, each touch sensor in the plurality being configured to output a set of frequencies on the signal medium responsive to being touched, each touch sensor in the plurality being configured to output a different set of frequencies; an analog to digital converter electrically coupled to the signal medium and configured to receive the sets of frequencies from the touch sensors and convert the sets of frequencies to digital representations of the sets of frequencies in the time domain; a processor communicatively coupled to the analog to digital converter and configured to execute a fast Fourier transform of the digital representations from the time domain into digital representations in the frequency domain; and an address decoder operative to transform the digital representations in the frequency domain into identifiers of touch sensors among the plurality of touch sensors.
    Type: Application
    Filed: April 1, 2022
    Publication date: October 6, 2022
    Inventors: Jeff Kranski, Adrian Kaehler
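    Since every sensor announces itself with its own set of tones on the shared line, decoding amounts to sampling, taking an FFT, and checking which sensors' frequency sets are present among the peaks. The sample rate, frequency assignments, and peak threshold below are illustrative assumptions.

      # Decode which touch sensor was pressed from the shared analog signal medium.
      import numpy as np

      FS = 8000                                                         # ADC sample rate (Hz), assumed
      SENSOR_FREQS = {"sensor_1": [500, 900],                           # each sensor's distinct tone set
                      "sensor_2": [700, 1300],
                      "sensor_3": [1100, 1700]}

      def decode(samples, tol_hz=10.0):
          spectrum = np.abs(np.fft.rfft(samples))                       # time domain -> frequency domain
          freqs = np.fft.rfftfreq(len(samples), d=1.0 / FS)
          peak_freqs = freqs[spectrum > 0.5 * spectrum.max()]           # prominent tones on the line

          def present(f):
              return np.any(np.abs(peak_freqs - f) < tol_hz)

          # The "address decoder": a sensor is reported only if its whole tone set is present.
          return [name for name, tones in SENSOR_FREQS.items() if all(present(f) for f in tones)]

      rng = np.random.default_rng(4)
      t = np.arange(2048) / FS
      line = np.sin(2 * np.pi * 700 * t) + np.sin(2 * np.pi * 1300 * t) + 0.05 * rng.normal(size=t.size)
      print(decode(line))                                               # ['sensor_2']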
  • Publication number: 20220314448
    Abstract: Provided is a distributed robot management system, including: a first fleet of robots at a first facility; and a robot management server system remote from the first facility and communicatively coupled with the first fleet of robots via a network, wherein the robot management server system is configured to: provide configuration information to the first fleet of robots, maintain a remote representation of state of robots in the first fleet of robots, receive and store data from the first fleet of robots, and provide computing resources by which robots in the first fleet of robots are trained.
    Type: Application
    Filed: April 1, 2022
    Publication date: October 6, 2022
    Inventors: Carolyn Wales, Chris Cianci, Christopher Bradski, Jeff Kranski, Adrian Kaehler
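    One hedged way to picture the server side is a per-robot record that holds the pushed configuration, the most recently reported state, and stored telemetry available for training. The field names and the in-memory dictionary below are assumptions about bookkeeping the abstract describes only in general terms.

      # Minimal remote representation of fleet state on a management server.
      from dataclasses import dataclass, field

      @dataclass
      class RobotRecord:
          config: dict                                                  # configuration provided to the robot
          last_state: dict = field(default_factory=dict)                # remote representation of robot state
          history: list = field(default_factory=list)                   # received data stored for training

      class FleetServer:
          def __init__(self):
              self.fleet = {}

          def register(self, robot_id, config):
              self.fleet[robot_id] = RobotRecord(config=config)
              return config                                             # configuration sent down to the robot

          def report(self, robot_id, state):
              record = self.fleet[robot_id]
              record.last_state = state
              record.history.append(state)

      server = FleetServer()
      server.register("arm-01", {"firmware": "1.4", "policy": "pick-v2"})
      server.report("arm-01", {"battery": 0.87, "task": "idle"})
      print(server.fleet["arm-01"].last_state)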
  • Publication number: 20220317661
    Abstract: Provided is a process, including: obtaining, with a computer system, a set of tasks to be performed by a fleet of robots; obtaining, with the computer system, for each task in the set of tasks, a respective plurality of duty cycles, each corresponding to an amount of usage of a respective actuator of a robot among the fleet of robots upon performing the respective task; accessing, with the computer system, for each robot in the fleet of robots, a current wear-state vector having dimensions corresponding to cumulative wear on actuators of the respective robots; and based on the current wear-state vectors and the duty cycles of the tasks, with the computer system, assigning the tasks to the robots in the fleet of robots.
    Type: Application
    Filed: April 1, 2022
    Publication date: October 6, 2022
    Inventor: Adrian Kaehler
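    A concrete reading of the claim: each pending task carries a duty-cycle vector of actuator usage, each robot carries a cumulative wear-state vector, and tasks are routed so that wear stays balanced. The greedy rule and max-wear objective in the sketch below are assumptions; the publication does not specify the assignment policy.

      # Greedy wear-aware task assignment across a small fleet.
      import numpy as np

      wear = {"robot_A": np.array([0.30, 0.10, 0.50]),                  # cumulative wear per actuator
              "robot_B": np.array([0.20, 0.40, 0.20])}
      tasks = {"pick": np.array([0.05, 0.01, 0.10]),                    # duty cycles each task would add
               "place": np.array([0.02, 0.08, 0.02]),
               "sand": np.array([0.10, 0.10, 0.01])}

      assignment = {}
      for task, duty in tasks.items():
          best = min(wear, key=lambda r: np.max(wear[r] + duty))        # lowest resulting peak wear
          wear[best] = wear[best] + duty                                # update that robot's wear state
          assignment[task] = best

      print(assignment)                                                 # e.g. {'pick': 'robot_B', ...}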
  • Publication number: 20220318678
    Abstract: Provided is a process that includes obtaining data indicative of state of a dynamic mechanical system and an environment of the dynamic mechanical system, the data comprising a plurality of channels of data from a plurality of different sensors including a plurality of cameras and other sensors indicative of state of actuators of the dynamic mechanical system; forming a training set from the obtained data by segmenting the data by time and grouping segments from the different channels by time to form units of training data that span different channels among the plurality of channels; training a metric learning model to encode inputs corresponding to the plurality of channels as vectors in an embedding space with self-supervised learning based on the training set; and using the trained metric learning model to control the dynamic mechanical system or another dynamic mechanical system.
    Type: Application
    Filed: April 1, 2022
    Publication date: October 6, 2022
    Inventors: Jeff Kranski, Chris Cianci, Carolyn Wales, Adrian Kaehler
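    The training-set construction is the concrete part of this abstract: cut each channel into time segments, then group segments that share a time window into one training unit, so time alignment itself provides the supervision. The channel names, segment length, linear encoders, and InfoNCE-style objective below are illustrative assumptions.

      # Form time-aligned multi-channel training units and score them with a contrastive loss.
      import numpy as np

      rng = np.random.default_rng(3)
      T, SEG = 1000, 50                                                 # total samples, samples per segment

      channels = {"camera": rng.normal(size=(T, 64)),                   # e.g. flattened image features
                  "joints": rng.normal(size=(T, 12))}                   # actuator state

      def segment(x, seg=SEG):
          n = x.shape[0] // seg
          return x[:n * seg].reshape(n, seg, -1).mean(axis=1)           # one summary vector per segment

      # Training units: (camera segment, joints segment) pairs covering the same time window.
      units = list(zip(segment(channels["camera"]), segment(channels["joints"])))

      # Linear encoders into a shared embedding space; time-aligned pairs are the positives.
      W_cam, W_jnt = 0.1 * rng.normal(size=(16, 64)), 0.1 * rng.normal(size=(16, 12))
      Z_cam = np.stack([W_cam @ c for c, _ in units])
      Z_jnt = np.stack([W_jnt @ j for _, j in units])
      sim = Z_cam @ Z_jnt.T                                             # pairwise similarities
      loss = -np.mean(np.diag(sim) - np.log(np.exp(sim).sum(axis=1)))   # InfoNCE-style objective
      print(len(units), "units, contrastive loss", round(float(loss), 3))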
  • Publication number: 20220314435
    Abstract: A robot system includes a first computing device and a second computing device. The first computing device is configured to control operation of the robot based on data flows received from a plurality of sensors of the robot, and the second computing device is configured to receive and process at least some of the data flows concurrently while the first computing device controls operation of the robot.
    Type: Application
    Filed: April 1, 2022
    Publication date: October 6, 2022
    Inventors: Carolyn Wales, Chris Cianci, Jeff Kranski, Adrian Kaehler
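    The division of labor reads naturally as a control loop that never blocks on analysis: each sensor reading is consumed by the control path and simultaneously mirrored to a second computing device for concurrent processing. The queue-and-thread arrangement below is an assumption standing in for whatever interconnect the actual system uses.

      # First device runs the control loop; a mirrored queue feeds the second device.
      import queue
      import threading
      import time

      mirror = queue.Queue()

      def second_device():
          """Stands in for the second computing device processing mirrored data flows."""
          while True:
              reading = mirror.get()
              if reading is None:                                       # shutdown sentinel
                  break
              print("second device processed", reading)                 # e.g. logging, training, monitoring

      observer = threading.Thread(target=second_device)
      observer.start()

      # The first computing device: a control loop that never waits on the second device.
      for step in range(3):
          reading = {"step": step, "joint_pos": [0.1 * step] * 3}       # stand-in sensor data flow
          mirror.put(reading)                                           # non-blocking mirror copy
          command = [p + 0.01 for p in reading["joint_pos"]]            # trivial control law
          print("control command", command)
          time.sleep(0.01)

      mirror.put(None)
      observer.join()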
  • Publication number: 20220314434
    Abstract: Provided is a robot that includes: a first sensor having a first output and configured to sense state of a robot or an environment of the robot; a first hardware machine-learning accelerator coupled to the first output of the first sensor and configured to transform information sensed by the first sensor into a first latent-space representation; a second sensor having a second output and configured to sense state of the robot or the environment of the robot; a second hardware machine-learning accelerator configured to transform information sensed by the second sensor into a second latent-space representation; and a processor configured to control the robot based on both the first latent-space representation and the second latent-space representation.
    Type: Application
    Filed: April 1, 2022
    Publication date: October 6, 2022
    Inventors: Jeff Kranski, Chris Cianci, Carolyn Wales, Adrian Kaehler
  • Patent number: 11460705
    Abstract: In some embodiments, a system comprises a head-mounted frame removably coupleable to the user's head; one or more light sources coupled to the head-mounted frame and configured to emit light with at least two different wavelengths toward a target object in an irradiation field of view of the light sources; one or more electromagnetic radiation detectors coupled to the head-mounted member and configured to receive light reflected after encountering the target object; and a controller operatively coupled to the one or more light sources and detectors and configured to determine and display an output indicating the identity or property of the target object as determined by the light properties measured by the detectors in relation to the light properties emitted by the light sources.
    Type: Grant
    Filed: August 2, 2021
    Date of Patent: October 4, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Nicole Elizabeth Samec, Nastasja U. Robaina, Adrian Kaehler, Mark Baerenrodt, Eric Baerenrodt, Christopher M. Harrises, Tammy Sherri Powers
  • Patent number: 11454523
    Abstract: An apparatus is disclosed for capturing image information. The apparatus includes a waveguide having opposed planar input and output faces. A diffractive optical element (DOE) is formed across the waveguide. The DOE is configured to couple a portion of the light passing through the waveguide into the waveguide. The light coupled into the waveguide is directed via total internal reflection to an exit location on the waveguide. The apparatus further includes a light sensor having an input positioned adjacent the exit location of the waveguide to capture light exiting therefrom and generate output signals corresponding thereto. A processor determines the angle and position of the coupled light with respect to the input face of the waveguide based on the output signals.
    Type: Grant
    Filed: December 3, 2020
    Date of Patent: September 27, 2022
    Assignee: Magic Leap, Inc.
    Inventor: Adrian Kaehler
  • Patent number: 11436625
    Abstract: Head mounted display systems configured to facilitate the exchange of biometric information between the head mounted display system and another computing device are disclosed. The head mounted display system can comprise a virtual or augmented reality device. After displaying a consent request regarding biometric information with the head mounted display system, a response to the consent request that includes a consent indication regarding an aspect of the biometric information can be determined. After obtaining biometric information from a wearer utilizing, e.g., a camera of the head mounted display, and processing the biometric information, a biometric information processing result can be generated. The result can be communicated from the head mounted display system to another computing device.
    Type: Grant
    Filed: November 16, 2020
    Date of Patent: September 6, 2022
    Assignee: Magic Leap, Inc.
    Inventor: Adrian Kaehler