Patents by Inventor Adrian Kaehler

Adrian Kaehler has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11740474
    Abstract: Systems and methods for generating a face model for a user of a head-mounted device are disclosed. The head-mounted device can include one or more eye cameras configured to image the face of the user while the user is putting the device on or taking the device off. The images obtained by the eye cameras may be analyzed using a stereoscopic vision technique, a monocular vision technique, or a combination, to generate a face model for the user.
    Type: Grant
    Filed: July 25, 2022
    Date of Patent: August 29, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Gholamreza Amayeh, Adrian Kaehler, Douglas Lee
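The abstract above (patent 11740474) mentions a stereoscopic technique applied to the two eye-camera views. As a rough illustration of only the stereo-triangulation step, here is a minimal sketch assuming pre-calibrated cameras and already-matched 2D facial keypoints; all matrices and pixel coordinates are hypothetical placeholders, not values from the patent.

```python
# Minimal stereo-triangulation sketch for facial keypoints seen by two eye cameras.
# Assumes calibrated projection matrices and matched correspondences already exist.
import numpy as np
import cv2

K = np.array([[450.0, 0.0, 320.0],
              [0.0, 450.0, 240.0],
              [0.0, 0.0, 1.0]])                       # shared intrinsics (hypothetical)
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-0.06], [0.0], [0.0]])])  # ~6 cm baseline

# Matched 2D keypoints (2 x N, one column per point) in each camera's image.
pts_left = np.array([[320.0, 310.0], [240.0, 250.0]])
pts_right = np.array([[300.0, 292.0], [240.0, 251.0]])

# Triangulate to homogeneous 3D points, then dehomogenize.
pts_h = cv2.triangulatePoints(P_left, P_right, pts_left, pts_right)
pts_3d = (pts_h[:3] / pts_h[3]).T   # N x 3 facial landmarks

print(pts_3d)  # 3D landmarks that could seed a face-model fit
```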
  • Patent number: 11720223
    Abstract: A wearable display system can automatically recognize a physical remote or a device that the remote serves using computer vision techniques. The wearable system can generate a virtual remote with a virtual control panel viewable and interactable by a user of the wearable system. The virtual remote can emulate the functionality of the physical remote. The user can select a virtual remote for interaction, for example, by looking or pointing at the parent device or its remote control, or by selecting from a menu of known devices. The virtual remote may include a virtual button, which is associated with a volume in the physical space. The wearable system can detect that a virtual button is actuated by determining whether a portion of the user's body (e.g., the user's finger) has penetrated the volume associated with the virtual button.
    Type: Grant
    Filed: September 22, 2021
    Date of Patent: August 8, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Adrian Kaehler, John Adam Croston
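The "virtual button associated with a volume" idea in patent 11720223 above reduces, in its simplest form, to testing whether a tracked fingertip has entered a region of physical space. The sketch below uses a hypothetical axis-aligned box and fingertip coordinates; the patent does not specify this particular geometry.

```python
# Sketch: a button press is registered when a tracked fingertip position enters
# an axis-aligned box in world space. Box extents and coordinates are hypothetical.
import numpy as np

button_min = np.array([0.10, 0.20, -0.35])   # meters, one corner of the volume
button_max = np.array([0.14, 0.24, -0.31])   # opposite corner

def button_pressed(fingertip_xyz):
    """Return True if the fingertip has penetrated the button's volume."""
    p = np.asarray(fingertip_xyz)
    return bool(np.all(p >= button_min) and np.all(p <= button_max))

print(button_pressed([0.12, 0.22, -0.33]))  # True: inside the volume
print(button_pressed([0.30, 0.22, -0.33]))  # False: outside
```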
  • Publication number: 20230244342
    Abstract: A system includes a robot having contact surfaces and a sensor array of spatially distributed touch sensors disposed on the contact surfaces. Each touch sensor has an identifier and outputs an analog signal at a set of frequencies associated with the identifier. The sensor array outputs a combined analog signal representative of a combination of the analog signals output by the touch sensors. The system includes an analog-to-digital converter that generates a digital signal in a time domain based on the combined analog signal. The system also includes one or more processing units that transform the digital signal from the time domain to a frequency domain and detect locations of touch within the sensor array based on the frequencies observed in the frequency domain and the identifiers of the touch sensors.
    Type: Application
    Filed: February 27, 2023
    Publication date: August 3, 2023
    Inventors: Jeff Kranski, Adrian Kaehler
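Publication 20230244342 above (and the related granted patent 11625122 further down) describes frequency-multiplexed touch sensing: each sensor contributes a distinct tone to one shared signal, and an FFT of the digitized signal reveals which sensors are active. A minimal sketch of that decoding step, with hypothetical sample rate and tone assignments:

```python
# Identify touched sensors from one combined signal: each sensor is assumed to
# emit a distinct tone when touched, so an FFT shows which tones (sensors) are present.
import numpy as np

fs = 10_000                      # ADC sample rate (Hz), hypothetical
t = np.arange(0, 0.1, 1 / fs)    # 100 ms analysis window
sensor_freqs = {1: 500.0, 2: 750.0, 3: 1200.0}   # sensor id -> tone frequency

# Simulate sensors 1 and 3 being touched: their tones appear in the combined signal.
signal = np.sin(2 * np.pi * 500.0 * t) + np.sin(2 * np.pi * 1200.0 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

# A tone counts as present if its FFT bin exceeds a fraction of the full-scale peak.
touched = [sid for sid, f in sensor_freqs.items()
           if spectrum[np.argmin(np.abs(freqs - f))] > 0.25 * len(signal) / 2]
print(touched)  # -> [1, 3]
```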
  • Publication number: 20230237378
    Abstract: Provided is a process that includes obtaining data indicative of state of a dynamic mechanical system and an environment of the dynamic mechanical system, the data comprising a plurality of channels of data from a plurality of different sensors including a plurality of cameras and other sensors indicative of state of actuators of the dynamic mechanical system; forming a training set from the obtained data by segmenting the data by time and grouping segments from the different channels by time to form units of training data that span different channels among the plurality of channels; training a metric learning model to encode inputs corresponding to the plurality of channels as vectors in an embedding space with self-supervised learning based on the training set; and using the trained metric learning model to control the dynamic mechanical system or another dynamic mechanical system.
    Type: Application
    Filed: March 27, 2023
    Publication date: July 27, 2023
    Inventors: Jeff Kranski, Chris Cianci, Carolyn Wales, Adrian Kaehler
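Publication 20230237378 above (and the granted patent 11636398 later in this list) centers on forming training units by cutting multi-channel recordings into time segments and grouping the segments that share a window. A minimal sketch of that training-set formation step, with hypothetical channel names, rates, and window length; the metric-learning model itself is omitted:

```python
# Cut multi-channel recordings into fixed-length time windows and group the
# segments from all channels in each window into one training unit. For
# self-supervised metric learning, co-occurring segments can later act as positives.
import numpy as np

duration_s, window_s = 10.0, 1.0
channels = {
    "camera_embedding": (30, np.random.rand(300, 8)),    # 30 Hz, 8-dim per frame
    "joint_positions":  (100, np.random.rand(1000, 6)),  # 100 Hz, 6 actuators
}

training_units = []
for start in np.arange(0.0, duration_s, window_s):
    unit = {}
    for name, (rate, data) in channels.items():
        i0, i1 = int(start * rate), int((start + window_s) * rate)
        unit[name] = data[i0:i1]          # this channel's segment for the window
    training_units.append(unit)           # one unit spans all channels for the window

print(len(training_units), training_units[0]["camera_embedding"].shape)  # 10 (30, 8)
```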
  • Patent number: 11691278
    Abstract: Provided is a robot that includes: a first sensor having a first output and configured to sense state of a robot or an environment of the robot; a first hardware machine-learning accelerator coupled to the first output of the first sensor and configured to transform information sensed by the first sensor into a first latent-space representation; a second sensor having a second output and configured to sense state of the robot or the environment of the robot; a second hardware machine-learning accelerator configured to transform information sensed by the second sensor into a second latent-space representation; and a processor configured to control the robot based on both the first latent-space representation and the second latent-space representation.
    Type: Grant
    Filed: October 20, 2022
    Date of Patent: July 4, 2023
    Assignee: Sanctuary Cognitive Systems Corporation
    Inventors: Jeff Kranski, Chris Cianci, Carolyn Wales, Adrian Kaehler
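Patent 11691278 above (and the related publication 20230126906 below) describes two sensor streams each reduced to a latent-space representation, with a processor acting on both. The sketch below stands in placeholder functions for the hardware ML accelerators and uses a hypothetical linear policy; it only illustrates the claimed data flow.

```python
# Two sensor streams -> two latent vectors -> one controller acting on the
# concatenated latents. Dimensions and the linear "policy" are hypothetical.
import numpy as np

def accelerator_a(camera_frame):          # stands in for the first accelerator
    return camera_frame.reshape(-1)[:16]  # 16-dim latent

def accelerator_b(joint_readings):        # stands in for the second accelerator
    return np.tanh(joint_readings)        # 6-dim latent

policy_weights = np.random.rand(4, 22)    # maps 16 + 6 latent dims -> 4 motor commands

def control_step(camera_frame, joint_readings):
    z = np.concatenate([accelerator_a(camera_frame), accelerator_b(joint_readings)])
    return policy_weights @ z             # motor command vector

print(control_step(np.random.rand(8, 8), np.random.rand(6)))
```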
  • Patent number: 11691274
    Abstract: A software compensated robotic system makes use of recurrent neural networks and image processing to control operation and/or movement of an end effector. Images are used to compensate for variations in the response of the robotic system to command signals. This compensation allows for the use of components having lower reproducibility, precision, and/or accuracy than would otherwise be practical.
    Type: Grant
    Filed: April 25, 2022
    Date of Patent: July 4, 2023
    Assignee: Sanctuary Cognitive Systems Corporation
    Inventor: Adrian Kaehler
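Patent 11691274 above (and the related publication 20230018498 at the end of this list) uses image-derived feedback and a recurrent network to correct command signals. The sketch below replaces the trained RNN with a simple exponential-memory update so the compensation loop is visible; the gains and update rule are hypothetical.

```python
# Image-based command compensation with a persistent (recurrent) correction state:
# an error estimated from images nudges a correction added to each command.
import numpy as np

class Compensator:
    def __init__(self, dim, memory=0.9, gain=0.5):
        self.h = np.zeros(dim)            # recurrent correction state
        self.memory, self.gain = memory, gain

    def step(self, command, image_error):
        # image_error: end-effector position error estimated from camera images
        self.h = self.memory * self.h + self.gain * image_error
        return command + self.h           # compensated command actually sent

comp = Compensator(dim=3)
for _ in range(5):
    cmd = np.array([0.1, 0.0, 0.2])
    err = np.array([0.02, -0.01, 0.0])    # hypothetical vision-derived error
    print(comp.step(cmd, err))
```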
  • Publication number: 20230148120
    Abstract: Methods and systems for using a teleoperation system to train a robot to perform tasks using machine learning are described herein. A teleoperation system may be used to record actions of a robot as used by a human teleoperator. The teleoperation system may provide a teleoperator insight into the state of the robot and may provide feedback to the teleoperator allowing the teleoperator to feel what the robot is feeling. For example, sensor information from the robot may be sent to the teleoperation system and output to the teleoperator in various forms including vibrations, video, visual cues, or sound. The teleoperation system may output visual guides to the teleoperator so that the teleoperator may know how to control the robot to complete a task in a desired manner.
    Type: Application
    Filed: December 31, 2022
    Publication date: May 11, 2023
    Inventors: Jeff Kranski, Carolyn Wales, Chris Cianci, Adrian Kaehler
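Publication 20230148120 above (and the related publication 20230083349 below) is about recording teleoperated sessions as training data while feeding sensory feedback to the operator. A minimal logging sketch under hypothetical field names and feedback rules:

```python
# Record one teleoperation session: per tick, store the robot's sensor state,
# the operator's command, and the feedback returned to the operator, for later
# imitation learning. Field names and the feedback rule are hypothetical.
import json

log = []
for tick in range(3):
    robot_state = {"gripper_force": 1.2 + tick, "joint_pos": [0.1, 0.4, -0.2]}
    operator_cmd = {"target_joint_pos": [0.12, 0.42, -0.18]}
    feedback = {"haptic_vibration": min(robot_state["gripper_force"] / 5.0, 1.0)}
    log.append({"t": tick, "state": robot_state, "action": operator_cmd,
                "feedback": feedback})

with open("teleop_session.json", "w") as f:
    json.dump(log, f, indent=2)           # dataset for training the robot later
```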
  • Patent number: 11644674
    Abstract: One embodiment is directed to a system comprising a head-mounted member removably coupleable to the user's head; one or more electromagnetic radiation emitters coupled to the head-mounted member and configured to emit light with at least two different wavelengths toward at least one of the eyes of the user; one or more electromagnetic radiation detectors coupled to the head-mounted member and configured to receive light reflected after encountering at least one blood vessel of the eye; and a controller operatively coupled to the one or more electromagnetic radiation emitters and detectors and configured to cause the one or more electromagnetic radiation emitters to emit pulses of light while also causing the one or more electromagnetic radiation detectors to detect levels of light absorption related to the emitted pulses of light, and to produce an output that is proportional to an oxygen saturation level in the blood vessel.
    Type: Grant
    Filed: April 12, 2021
    Date of Patent: May 9, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Nicole Elizabeth Samec, Adrian Kaehler
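Patent 11644674 above describes estimating oxygen saturation from light of at least two wavelengths reflected from a blood vessel. The sketch below shows the textbook "ratio of ratios" pulse-oximetry calculation, which is the standard approach for this kind of measurement; the linear calibration constants are the common empirical approximation, not values from this patent.

```python
# Classic two-wavelength pulse-oximetry estimate: pulsatile (AC) and steady (DC)
# components at red and infrared wavelengths give a ratio that maps to SpO2.
import numpy as np

def spo2_estimate(red, infrared):
    """red / infrared: 1-D arrays of detected light levels spanning a few pulses."""
    r_ac, r_dc = np.ptp(red), np.mean(red)
    ir_ac, ir_dc = np.ptp(infrared), np.mean(infrared)
    R = (r_ac / r_dc) / (ir_ac / ir_dc)   # "ratio of ratios"
    return 110.0 - 25.0 * R               # common empirical calibration curve

t = np.linspace(0, 5, 500)
red = 1.00 + 0.012 * np.sin(2 * np.pi * 1.2 * t)       # hypothetical signals
infrared = 1.00 + 0.020 * np.sin(2 * np.pi * 1.2 * t)
print(round(spo2_estimate(red, infrared), 1))
```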
  • Publication number: 20230126906
    Abstract: Provided is a robot that includes: a first sensor having a first output and configured to sense state of a robot or an environment of the robot; a first hardware machine-learning accelerator coupled to the first output of the first sensor and configured to transform information sensed by the first sensor into a first latent-space representation; a second sensor having a second output and configured to sense state of the robot or the environment of the robot; a second hardware machine-learning accelerator configured to transform information sensed by the second sensor into a second latent-space representation; and a processor configured to control the robot based on both the first latent-space representation and the second latent-space representation.
    Type: Application
    Filed: October 20, 2022
    Publication date: April 27, 2023
    Inventors: Jeff Kranski, Chris Cianci, Carolyn Wales, Adrian Kaehler
  • Patent number: 11636398
    Abstract: Provided is a process that includes obtaining data indicative of state of a dynamic mechanical system and an environment of the dynamic mechanical system, the data comprising a plurality of channels of data from a plurality of different sensors including a plurality of cameras and other sensors indicative of state of actuators of the dynamic mechanical system; forming a training set from the obtained data by segmenting the data by time and grouping segments from the different channels by time to form units of training data that span different channels among the plurality of channels; training a metric learning model to encode inputs corresponding to the plurality of channels as vectors in an embedding space with self-supervised learning based on the training set; and using the trained metric learning model to control the dynamic mechanical system or another dynamic mechanical system.
    Type: Grant
    Filed: April 1, 2022
    Date of Patent: April 25, 2023
    Assignee: Sanctuary Cognitive Systems Corporation
    Inventors: Jeff Kranski, Chris Cianci, Carolyn Wales, Adrian Kaehler
  • Patent number: 11636652
    Abstract: Systems and methods for synthesizing an image of the face by a head-mounted device (HMD) are disclosed. The HMD may not be able to observe a portion of the face. The systems and methods described herein can generate a mapping from a conformation of the portion of the face that is not imaged to a conformation of the portion of the face observed. The HMD can receive an image of a portion of the face and use the mapping to determine a conformation of the portion of the face that is not observed. The HMD can combine the observed and unobserved portions to synthesize a full face image.
    Type: Grant
    Filed: November 5, 2021
    Date of Patent: April 25, 2023
    Assignee: Magic Leap, Inc.
    Inventor: Adrian Kaehler
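Patent 11636652 above hinges on a learned mapping from the conformation of the observed part of the face to the conformation of the unobserved part. The sketch below uses a plain least-squares linear map as a stand-in for whatever model is actually used, with hypothetical parameter dimensions and synthetic training data.

```python
# Learn a regression from observed-region face parameters (e.g., periocular) to
# unobserved-region parameters (e.g., lower face), then predict at run time.
import numpy as np

rng = np.random.default_rng(0)
observed = rng.normal(size=(200, 10))            # training: observed-region params
unobserved = observed @ rng.normal(size=(10, 6)) # training: matching lower-face params

# Fit a linear mapping W such that observed @ W ~= unobserved.
W, *_ = np.linalg.lstsq(observed, unobserved, rcond=None)

new_observed = rng.normal(size=(1, 10))          # what the HMD can see right now
predicted_unobserved = new_observed @ W          # inferred lower-face conformation
full_face_params = np.hstack([new_observed, predicted_unobserved])
print(full_face_params.shape)                    # (1, 16): combined face description
```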
  • Patent number: 11630314
    Abstract: An example wearable display system can be capable of determining a user interface (UI) event with respect to a virtual UI device (e.g., a button) and a pointer (e.g., a finger or a stylus) using a neural network. The wearable display system can render a representation of the UI device onto an image of the pointer captured when the virtual UI device is shown to the user and the user uses the pointer to interact with the virtual UI device. The representation of the UI device can include concentric shapes (or shapes with similar or the same centers of gravity) of high contrast. The neural network can be trained using training images with representations of virtual UI devices and pointers.
    Type: Grant
    Filed: April 13, 2022
    Date of Patent: April 18, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Adrian Kaehler, Gary R. Bradski, Vijay Badrinarayanan
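Patent 11630314 above trains the network on pointer images onto which a high-contrast concentric representation of the virtual UI device has been rendered. A minimal sketch of composing one such training image, with hypothetical image size, button location, and radii:

```python
# Draw concentric high-contrast circles at the virtual button's projected
# location on top of a captured pointer image, producing a training example.
import numpy as np
import cv2

pointer_image = np.full((240, 320, 3), 40, dtype=np.uint8)   # stand-in camera frame
button_center = (160, 120)                                   # projected UI location

for radius, color in [(30, (255, 255, 255)), (20, (0, 0, 0)), (10, (255, 255, 255))]:
    cv2.circle(pointer_image, button_center, radius, color, thickness=-1)

cv2.imwrite("training_example.png", pointer_image)  # paired later with a UI-event label
```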
  • Patent number: 11625122
    Abstract: Provided is a system that includes: a plurality of touch sensors sharing a signal medium, each touch sensor in the plurality being configured to output a set of frequencies on the signal medium responsive to being touched, each touch sensor in the plurality being configured to output a different set of frequencies; an analog-to-digital converter electrically coupled to the signal medium and configured to receive the sets of frequencies from the touch sensors and convert the sets of frequencies to digital representations of the sets of frequencies in the time domain; a processor communicatively coupled to the analog-to-digital converter and configured to execute a fast Fourier transform of the digital representations from the time domain into digital representations in the frequency domain; and an address decoder operative to transform the digital representations in the frequency domain into identifiers of touch sensors among the plurality of touch sensors.
    Type: Grant
    Filed: April 1, 2022
    Date of Patent: April 11, 2023
    Assignee: Sanctuary Cognitive Systems Corporation
    Inventors: Jeff Kranski, Adrian Kaehler
  • Publication number: 20230109398
    Abstract: Disclosed techniques for decreasing teach times of robot systems may obtain a first set of parameters of a first trained robot-control model of a first robot trained to perform a task and determine, based on the first set of parameters, a second set of parameters of a second robot-control model of a second robot before the second robot is trained to perform the task. In some cases, a plurality of sets of parameters from trained robot-control models of respective robots trained to perform a task may be obtained. Thus, for example, a convergence of those parameters on a value, or on a range of potential values, may be determined. Embodiments may determine values for parameters of the control model of the robot to be trained (e.g., the second robot) within a range, or within a threshold, based on values of corresponding parameters of the trained robot(s).
    Type: Application
    Filed: October 6, 2021
    Publication date: April 6, 2023
    Inventors: Jeff Kranski, Chris Cianci, Adrian Kaehler
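Publication 20230109398 above proposes initializing an untrained robot's control-model parameters from the values that already-trained robots converged to. A minimal sketch of that warm-start, with hypothetical parameter arrays and a uniform draw inside the observed convergence range:

```python
# Warm-start a new robot's parameters from robots already trained on the task:
# initialize each parameter inside the range the trained models converged to.
import numpy as np

trained_params = [np.array([0.52, -1.10, 0.33]),   # robot A, trained
                  np.array([0.48, -1.05, 0.35]),   # robot B, trained
                  np.array([0.50, -1.12, 0.31])]   # robot C, trained

stacked = np.stack(trained_params)
low, high = stacked.min(axis=0), stacked.max(axis=0)

rng = np.random.default_rng(7)
new_robot_params = rng.uniform(low, high)   # start inside the observed range
print(new_robot_params)
```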
  • Publication number: 20230087868
    Abstract: Wearable spectroscopy systems and methods for identifying one or more characteristics of a target object are described. Spectroscopy systems may include a light source configured to emit light in an irradiated field of view and an electromagnetic radiation detector configured to receive reflected light from a target object irradiated by the light source. One or more processors of the systems may identify a characteristic of the target object based on a determined level of light absorption by the target object. Some systems and methods may include one or more corrections for scattered and/or ambient light such as applying an ambient light correction, passing the reflected light through an anti-scatter grid, or using a time-dependent variation in the emitted light.
    Type: Application
    Filed: September 30, 2022
    Publication date: March 23, 2023
    Inventors: Adrian Kaehler, Christopher M. Harrises, Eric Baerenrodt, Mark Baerenrodt, Natasja U. Robaina, Nicole Elizabeth Samec, Tammy Sherri Powers, Ivan Li Chuen Yeoh, Adam Carl Wright
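One correction mentioned in publication 20230087868 above is compensating for ambient light before judging absorption. A minimal sketch under hypothetical per-wavelength readings: take a reading with the source off to capture ambient light, subtract it, and compute absorbance against a white-reference measurement.

```python
# Ambient-light-corrected spectroscopy reading and per-wavelength absorbance.
import numpy as np

wavelengths = np.array([450, 550, 650, 850])           # nm
reading_on = np.array([0.80, 0.62, 0.55, 0.70])        # source on (target + ambient)
reading_off = np.array([0.10, 0.09, 0.08, 0.07])       # source off (ambient only)
white_reference = np.array([0.95, 0.92, 0.90, 0.88])   # source on, reference surface

reflected = reading_on - reading_off                   # ambient-corrected reflection
absorbance = -np.log10(reflected / white_reference)    # per-wavelength absorption
print(dict(zip(wavelengths.tolist(), absorbance.round(3))))
```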
  • Publication number: 20230083349
    Abstract: Methods and systems for using a teleoperation system to train a robot to perform tasks using machine learning are described herein. A teleoperation system may be used to record actions of a robot as used by a human teleoperator. The teleoperation system may provide a teleoperator insight into the state of the robot and may provide feedback to the teleoperator allowing the teleoperator to feel what the robot is feeling. For example, sensor information from the robot may be sent to the teleoperation system and output to the teleoperator in various forms including vibrations, video, visual cues, or sound. The teleoperation system may output visual guides to the teleoperator so that the teleoperator may know how to control the robot to complete a task in a desired manner.
    Type: Application
    Filed: September 14, 2021
    Publication date: March 16, 2023
    Inventors: Jeff Kranski, Carolyn Wales, Chris Cianci, Adrian Kaehler
  • Publication number: 20230078625
    Abstract: Provided is a process, including: obtaining, with a computer system, access to a specification indicating which regions of an embedding space are designated as anomalous relative to vectors in the embedding space characterizing past behavior of a first instance of a dynamical system; receiving, with the computer system, multi-channel input indicative of a state of a second instance of the dynamical system; and classifying, with the computer system, whether the state of the second instance of the dynamical system is anomalous by: encoding the multi-channel input into a vector in the embedding space; causing the specification to be applied to the vector; obtaining a result of applying the specification to the vector; and classifying whether the state of the second instance of the dynamical system is anomalous based on the result; and storing the classification in memory.
    Type: Application
    Filed: September 14, 2021
    Publication date: March 16, 2023
    Inventors: Adrian Kaehler, Jeff Kranski, Chris Cianci
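Publication 20230078625 above classifies a state as anomalous by encoding multi-channel input into an embedding vector and checking it against regions designated as anomalous. The sketch below uses a toy encoder and spheres (center plus radius) as a hypothetical region specification; the patent does not prescribe either.

```python
# Encode the multi-channel state to a vector and flag it anomalous if the
# vector falls inside any region designated as anomalous.
import numpy as np

anomalous_regions = [                       # hypothetical "specification"
    {"center": np.array([2.0, 2.0]), "radius": 0.5},
    {"center": np.array([-1.5, 0.0]), "radius": 0.8},
]

def encode(multi_channel_input):
    # Toy encoder: summarize the input by its mean and standard deviation.
    return np.array([np.mean(multi_channel_input), np.std(multi_channel_input)])

def is_anomalous(multi_channel_input):
    v = encode(multi_channel_input)
    return any(np.linalg.norm(v - r["center"]) <= r["radius"]
               for r in anomalous_regions)

print(is_anomalous(np.array([0.0, 4.0, 0.0, 4.0])))    # encodes near (2, 2) -> True
print(is_anomalous(np.array([0.0, 0.1, -0.1, 0.05])))  # far from both regions -> False
```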
  • Patent number: 11579694
    Abstract: Systems and methods for eye image set selection, eye image collection, and eye image combination are described. Embodiments of the systems and methods for eye image set selection can include comparing a determined image quality metric with an image quality threshold to identify an eye image passing an image quality threshold, and selecting, from a plurality of eye images, a set of eye images that passes the image quality threshold.
    Type: Grant
    Filed: November 8, 2021
    Date of Patent: February 14, 2023
    Assignee: Magic Leap, Inc.
    Inventor: Adrian Kaehler
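Patent 11579694 above selects eye images by comparing an image quality metric against a threshold. The sketch below uses variance of the Laplacian as a stand-in sharpness metric (a common choice, not necessarily the one used in the patent) and a hypothetical cutoff.

```python
# Quality-gated selection: score each eye image and keep only those whose
# quality metric passes the threshold.
import cv2
import numpy as np

def quality_metric(gray_image):
    return cv2.Laplacian(gray_image, cv2.CV_64F).var()   # higher = sharper

quality_threshold = 50.0                                  # hypothetical cutoff

eye_images = [np.random.randint(0, 256, (120, 160), dtype=np.uint8) for _ in range(4)]
selected = [img for img in eye_images if quality_metric(img) > quality_threshold]
print(f"kept {len(selected)} of {len(eye_images)} eye images")
```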
  • Patent number: 11568035
    Abstract: Systems and methods for iris authentication are disclosed. In one aspect, a deep neural network (DNN) with a triplet network architecture can be trained to learn an embedding (e.g., another DNN) that maps from the higher dimensional eye image space to a lower dimensional embedding space. The DNN can be trained with segmented iris images or images of the periocular region of the eye (including the eye and portions around the eye such as eyelids, eyebrows, eyelashes, and skin surrounding the eye). With the triplet network architecture, an embedding space representation (ESR) of a person's eye image can be closer to the ESRs of the person's other eye images than it is to the ESR of another person's eye image. In another aspect, to authenticate a user as an authorized user, an ESR of the user's eye image can be sufficiently close to an ESR of the authorized user's eye image.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: January 31, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Alexey Spizhevoy, Adrian Kaehler, Gary Bradski
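Patent 11568035 above rests on two ideas: a triplet objective that pulls embeddings of the same person's eye images together while pushing other people's away, and authentication by thresholding the distance between a probe embedding and the enrolled user's embedding. A minimal numerical sketch with hypothetical embedding vectors, margin, and threshold:

```python
# Triplet loss over embeddings, plus distance-threshold authentication.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)   # zero once same-person pairs are closer

def authenticate(probe_embedding, enrolled_embedding, threshold=0.6):
    return np.linalg.norm(probe_embedding - enrolled_embedding) < threshold

a = np.array([0.10, 0.90, 0.20])   # anchor: enrolled user's eye-image embedding
p = np.array([0.15, 0.85, 0.22])   # positive: same user, another image
n = np.array([0.80, 0.10, 0.50])   # negative: a different person's image

print(triplet_loss(a, p, n))       # 0.0: embedding already separates identities
print(authenticate(p, a))          # True: probe is close to the enrolled user
```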
  • Publication number: 20230018498
    Abstract: A software compensated robotic system makes use of recurrent neural networks and image processing to control operation and/or movement of an end effector. Images are used to compensate for variations in the response of the robotic system to command signals. This compensation allows for the use of components having lower reproducibility, precision, and/or accuracy than would otherwise be practical.
    Type: Application
    Filed: April 25, 2022
    Publication date: January 19, 2023
    Applicant: Sanctuary Cognitive Systems Corporation
    Inventor: Adrian Kaehler