Patents by Inventor Erik Linden
Erik Linden has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240256030
Abstract: A motion-capture head module for tracking a head pose of a user comprising: a headband for attaching to a head of the user; a post extending from the headband; and at least three markers, wherein each marker is coupled to the post by a respective one of at least three branching members extending from the post, such that the at least three markers have a first fixed geometrical relationship with respect to each other.
Type: Application
Filed: January 29, 2024
Publication date: August 1, 2024
Inventors: Anders Dahl, Erik Lindén, Anna Larsen Redz, Per Olof Jonatan Walck, Pontus Christian Walck
-
Publication number: 20240187738
Abstract: A method of controlling exposure time is disclosed comprising receiving an image of an eye from an image sensor, the image resulting from the image sensor detecting light during a first exposure time. A pupil intensity is determined as an intensity of a representation of a pupil of the eye in the image and an iris intensity is determined as an intensity of a representation of an iris of the eye in the image. Furthermore, a pupil-iris contrast is determined as a contrast between the representation of the pupil in the image and the representation of the iris in the image. On a condition that the pupil intensity is determined to meet an intensity condition, an intensity compensated exposure time is determined which is different from the first exposure time, and on a condition that the pupil-iris contrast is determined to meet a contrast condition, a contrast compensated exposure time is determined which is different from the first exposure time.
Type: Application
Filed: December 28, 2017
Publication date: June 6, 2024
Applicant: Tobii AB
Inventor: Erik Lindén
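The abstract describes a two-branch control flow rather than a specific formula. As a rough illustration only, the Python sketch below applies an intensity condition and a contrast condition to derive a compensated exposure time; the pixel masks, thresholds and proportional update rules are assumptions made for the example, not details from the filing.

```python
import numpy as np

def compensated_exposure(image, pupil_mask, iris_mask, exposure_time,
                         target_pupil=30.0, min_contrast=40.0):
    """Illustrative exposure-time update following the abstract's flow.

    pupil_mask / iris_mask are boolean arrays marking the pupil and iris
    pixels in the image (assumed to be provided by an eye-feature detector).
    """
    pupil_intensity = float(image[pupil_mask].mean())   # intensity of the pupil representation
    iris_intensity = float(image[iris_mask].mean())     # intensity of the iris representation
    contrast = iris_intensity - pupil_intensity         # pupil-iris contrast

    new_exposure = exposure_time
    if pupil_intensity > target_pupil:
        # Intensity condition met: scale exposure towards the target pupil level.
        new_exposure = exposure_time * target_pupil / max(pupil_intensity, 1e-6)
    if contrast < min_contrast:
        # Contrast condition met: lengthen exposure to raise the pupil-iris contrast.
        new_exposure = exposure_time * min_contrast / max(contrast, 1e-6)
    return new_exposure
```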
-
Patent number: 11650425
Abstract: Computer-generated image data is presented on first and second displays of a binocular headset presuming that a user's left and right eyes are located at first and second positions relative to the first and second displays respectively. At least one updated version of the image data is presented, which is rendered presuming that at least one of the user's left and right eyes is located at a position different from the first and second positions respectively in at least one spatial dimension. In response thereto, a user-generated feedback signal is received expressing either: a quality measure of the updated version of the computer-generated image data relative to computer-generated image data presented previously; or a confirmation command. The steps of presenting the updated version of the computer-generated image data and receiving the user-generated feedback signal are repeated until the confirmation command is received.
Type: Grant
Filed: December 21, 2020
Date of Patent: May 16, 2023
Assignee: Tobii AB
Inventors: Geoffrey Cooper, Rickard Lundahl, Erik Lindén, Maria Gordon
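The claimed procedure is a present-perturb-feedback cycle. The sketch below shows one possible shape of that cycle, assuming hypothetical render, present and get_feedback callbacks and a feedback channel that returns either a numeric quality score or the string "confirm"; none of these interfaces come from the patent.

```python
import numpy as np

def calibrate_eye_positions(render, present, get_feedback,
                            left_eye, right_eye, step=0.001, max_iters=50):
    """Iteratively perturb an assumed eye position and keep the change when
    the user reports improved image quality, until a confirmation arrives."""
    left = np.asarray(left_eye, dtype=float)
    right = np.asarray(right_eye, dtype=float)
    best_quality = float("-inf")
    present(render(left, right))                       # baseline at the presumed positions
    for _ in range(max_iters):
        candidate = left + np.array([step, 0.0, 0.0])  # shift one eye in one spatial dimension
        present(render(candidate, right))              # updated version of the image data
        feedback = get_feedback()                      # quality measure or confirmation command
        if feedback == "confirm":
            return candidate, right
        if float(feedback) > best_quality:             # quality improved: adopt the new position
            best_quality = float(feedback)
            left = candidate
    return left, right
```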
-
Publication number: 20210255462
Abstract: Computer-generated image data is presented on first and second displays of a binocular headset presuming that a user's left and right eyes are located at first and second positions relative to the first and second displays respectively. At least one updated version of the image data is presented, which is rendered presuming that at least one of the user's left and right eyes is located at a position different from the first and second positions respectively in at least one spatial dimension. In response thereto, a user-generated feedback signal is received expressing either: a quality measure of the updated version of the computer-generated image data relative to computer-generated image data presented previously; or a confirmation command. The steps of presenting the updated version of the computer-generated image data and receiving the user-generated feedback signal are repeated until the confirmation command is received.
Type: Application
Filed: December 21, 2020
Publication date: August 19, 2021
Applicant: Tobii AB
Inventors: Geoffrey Cooper, Rickard Lundahl, Erik Lindén, Maria Gordon
-
Patent number: 11061471
Abstract: The present invention relates to a method for establishing the position of an object in relation to a camera in order to enable gaze tracking with a user watching the object, where the user is in view of the camera. The method comprises the steps of showing a known pattern, consisting of a set of stimulus points (s1, s2, ..., sN), on the object, detecting gaze rays (g1, g2, ..., gN) from an eye of the user as the user looks at the stimulus points (s1, s2, ..., sN), and finding, by means of an optimizer, a position and orientation of the object in relation to the camera such that the gaze rays (g1, g2, ..., gN) approach the stimulus points (s1, s2, ..., sN).
Type: Grant
Filed: December 11, 2019
Date of Patent: July 13, 2021
Assignee: Tobii AB
Inventor: Erik Lindén
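A minimal sketch of the optimization step, assuming gaze rays given as origins and directions in the camera frame, stimulus points given in the object's own coordinates, and a pose parameterized as a rotation vector plus translation; the Nelder-Mead optimizer and the squared ray-to-point distance cost are illustrative choices, not taken from the patent.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def ray_point_distance(origin, direction, point):
    """Perpendicular distance from a point to the ray (origin, direction)."""
    d = direction / np.linalg.norm(direction)
    v = point - origin
    return np.linalg.norm(v - np.dot(v, d) * d)

def estimate_object_pose(gaze_origins, gaze_dirs, stimulus_points_obj):
    """Find the object pose (rotation vector, translation) in the camera frame
    that brings the stimulus points close to the detected gaze rays."""
    def cost(pose):
        R = Rotation.from_rotvec(pose[:3]).as_matrix()
        t = pose[3:]
        pts_cam = stimulus_points_obj @ R.T + t        # stimulus points in the camera frame
        return sum(ray_point_distance(o, d, p) ** 2
                   for o, d, p in zip(gaze_origins, gaze_dirs, pts_cam))
    result = minimize(cost, x0=np.zeros(6), method="Nelder-Mead")
    return result.x[:3], result.x[3:]
```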
-
Patent number: 11061473
Abstract: A method of updating a cornea model for a cornea of an eye is disclosed, as well as a corresponding system and storage medium. The method comprises controlling a display to display a stimulus at a first depth, wherein the display is capable of displaying objects at different depths, receiving first sensor data obtained by an eye tracking sensor while the stimulus is displayed at the first depth by the display, controlling the display to display a stimulus at a second depth, wherein the second depth is different than the first depth, receiving second sensor data obtained by the eye tracking sensor while the stimulus is displayed at the second depth by the display, and updating the cornea model based on the first sensor data and the second sensor data.
Type: Grant
Filed: March 30, 2020
Date of Patent: July 13, 2021
Assignee: Tobii AB
Inventors: Mark Ryan, Jonas Sjöstrand, Erik Lindén, Pravin Rana
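The abstract leaves both the cornea model and the update rule open. Purely as a placeholder, the sketch below reduces the model to a single curvature radius and blends two depth-specific estimates into it; estimate_radius is a hypothetical routine that derives a radius from eye-tracking sensor data captured at one stimulus depth.

```python
def update_cornea_model(cornea_radius, first_sensor_data, second_sensor_data,
                        estimate_radius, learning_rate=0.5):
    """Toy update: one radius estimate per stimulus depth, blended into the
    current cornea model with a damping factor."""
    r1 = estimate_radius(first_sensor_data)    # estimate from the first stimulus depth
    r2 = estimate_radius(second_sensor_data)   # estimate from the second stimulus depth
    target = 0.5 * (r1 + r2)                   # combine the two depth-specific estimates
    return cornea_radius + learning_rate * (target - cornea_radius)
```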
-
Patent number: 10996751
Abstract: A gaze tracking model is adapted to predict a gaze ray using an image of the eye. The model is trained using training data which comprises a first image of an eye, reference gaze data indicating a gaze point towards which the eye was gazing when the first image was captured, and images of an eye captured by first and second cameras at a point in time. The training comprises forming a distance between the gaze point and a gaze ray predicted by the model using the first image, forming a consistency measure based on a gaze ray predicted by the model using the image captured by the first camera and a gaze ray predicted by the model using the image captured by the second camera, forming an objective function based on at least the formed distance and the consistency measure, and training the model using the objective function.
Type: Grant
Filed: December 16, 2019
Date of Patent: May 4, 2021
Assignee: Tobii AB
Inventors: David Mohlin, Erik Lindén
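A sketch of how such an objective could be assembled, assuming model(image) returns a gaze ray as an (origin, direction) pair; using one minus the cosine of the angle between the two camera-specific rays as the consistency measure is an assumption made for the example.

```python
import numpy as np

def point_to_ray_distance(point, origin, direction):
    """Distance between the reference gaze point and a predicted gaze ray."""
    d = direction / np.linalg.norm(direction)
    v = point - origin
    return np.linalg.norm(v - np.dot(v, d) * d)

def training_objective(model, first_image, gaze_point, image_cam1, image_cam2,
                       consistency_weight=1.0):
    """Supervised ray-to-point distance plus a consistency term between the
    rays predicted from two simultaneously captured camera images."""
    origin, direction = model(first_image)
    distance = point_to_ray_distance(gaze_point, origin, direction)
    o1, d1 = model(image_cam1)
    o2, d2 = model(image_cam2)
    consistency = 1.0 - np.dot(d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2))
    return distance + consistency_weight * consistency
```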
-
Patent number: 10955915
Abstract: A preliminary path for light travelling towards a camera via corneal reflection is estimated based on a preliminary position and orientation of an eye. A position where the reflection would appear in images captured by the camera is estimated. A distance is formed between a detected position of a corneal reflection of an illuminator and the estimated position. A second preliminary path for light travelling through the cornea or from the sclera towards a camera is estimated based on the preliminary position and orientation, and a position where the second preliminary path would appear to originate in images captured by this camera is estimated. A distance is formed between a detected edge of a pupil or iris and the estimated position where the second preliminary path would appear to originate. An updated position and/or orientation of the eye is determined using an objective function formed based on the formed distances.
Type: Grant
Filed: December 16, 2019
Date of Patent: March 23, 2021
Assignee: Tobii AB
Inventor: Erik Lindén
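A compact sketch of the refinement step described here. predict_glint and predict_edge_point stand in for ray-traced predictions of where a corneal reflection, or a refracted pupil/iris edge point, would appear in the image for a given eye position and orientation; they are hypothetical callbacks, and the squared-distance weighting and Nelder-Mead optimizer are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def refine_eye_pose(eye_pose0, detected_glints, detected_edge_points,
                    predict_glint, predict_edge_point, w_glint=1.0, w_edge=1.0):
    """Minimize an objective built from image-space distances between detected
    and predicted glint positions and pupil/iris edge points."""
    def objective(pose):
        glint_err = sum(np.linalg.norm(g - predict_glint(pose, i)) ** 2
                        for i, g in enumerate(detected_glints))
        edge_err = sum(np.linalg.norm(e - predict_edge_point(pose, i)) ** 2
                       for i, e in enumerate(detected_edge_points))
        return w_glint * glint_err + w_edge * edge_err
    result = minimize(objective, x0=np.asarray(eye_pose0, dtype=float),
                      method="Nelder-Mead")
    return result.x    # updated eye position and/or orientation
```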
-
Publication number: 20210012161
Abstract: Techniques for generating 3D gaze predictions based on a deep learning system are described. In an example, the deep learning system includes a neural network. The neural network is trained with training images generated by cameras and showing eyes of a user while gazing at stimulus points. Some of the stimulus points are in the planes of the cameras. Remaining stimulus points are not in the planes of the cameras. The training includes inputting a first training image associated with a stimulus point in a camera plane and inputting a second training image associated with a stimulus point outside the camera plane. The training minimizes a loss function of the neural network based on a distance between at least one of the stimulus points and a gaze line.
Type: Application
Filed: June 2, 2020
Publication date: January 14, 2021
Applicant: Tobii AB
Inventor: Erik Linden
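The loss described here is a distance between a stimulus point and a predicted gaze line. A minimal differentiable version is sketched below in PyTorch, assuming the network returns an (origin, direction) pair of 3-vectors per image; pairing one in-plane and one off-plane sample mirrors the training step described, but the interface is an assumption.

```python
import torch

def point_to_line_distance(point, origin, direction):
    """Differentiable distance from a 3D stimulus point to a predicted gaze line."""
    d = direction / direction.norm()
    v = point - origin
    return (v - torch.dot(v, d) * d).norm()

def gaze_line_loss(network, in_plane_image, in_plane_point,
                   off_plane_image, off_plane_point):
    """One training image whose stimulus point lies in a camera plane and one
    whose point lies outside it, each penalised by the point-to-line distance."""
    o1, d1 = network(in_plane_image)
    o2, d2 = network(off_plane_image)
    return (point_to_line_distance(in_plane_point, o1, d1)
            + point_to_line_distance(off_plane_point, o2, d2))
```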
-
Publication number: 20210011549
Abstract: A method of updating a cornea model for a cornea of an eye is disclosed, as well as a corresponding system and storage medium. The method comprises controlling a display to display a stimulus at a first depth, wherein the display is capable of displaying objects at different depths, receiving first sensor data obtained by an eye tracking sensor while the stimulus is displayed at the first depth by the display, controlling the display to display a stimulus at a second depth, wherein the second depth is different than the first depth, receiving second sensor data obtained by the eye tracking sensor while the stimulus is displayed at the second depth by the display, and updating the cornea model based on the first sensor data and the second sensor data.
Type: Application
Filed: March 30, 2020
Publication date: January 14, 2021
Applicant: Tobii AB
Inventors: Mark Ryan, Jonas Sjöstrand, Erik Lindén, Pravin Rana
-
Patent number: 10867252
Abstract: A method for forming an offset model is described. The offset model represents an estimated offset between a limbus center of a user eye and a pupil center of the user eye as a function of pupil size. The approach includes sampling a set of limbus center values, sampling a set of pupil center values, and sampling a set of radius values. The offset model is formed by comparing a difference between the set of limbus center values and the set of pupil center values at each of the radius values. A system and a computer-readable storage device configured to perform such a method are also disclosed.
Type: Grant
Filed: December 21, 2018
Date of Patent: December 15, 2020
Assignee: Tobii AB
Inventor: Erik Lindén
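A short sketch of one way to form such an offset model from the sampled values: fit the limbus-to-pupil-centre offset as a polynomial in pupil radius, one fit per image coordinate. The linear default is an assumption; the patent only requires some function of pupil size.

```python
import numpy as np

def fit_offset_model(limbus_centers, pupil_centers, pupil_radii, degree=1):
    """Return a function mapping pupil radius to the estimated (x, y) offset
    between limbus centre and pupil centre."""
    offsets = np.asarray(limbus_centers, dtype=float) - np.asarray(pupil_centers, dtype=float)
    radii = np.asarray(pupil_radii, dtype=float)
    coeffs_x = np.polyfit(radii, offsets[:, 0], degree)   # x-offset vs. radius
    coeffs_y = np.polyfit(radii, offsets[:, 1], degree)   # y-offset vs. radius
    def offset_model(radius):
        return np.array([np.polyval(coeffs_x, radius), np.polyval(coeffs_y, radius)])
    return offset_model
```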
-
Publication number: 20200387757
Abstract: Techniques for generating 3D gaze predictions based on a deep learning system are described. In an example, the deep learning system includes a neural network. The neural network is trained with training images. During the training, calibration parameters are initialized and input to the neural network, and are updated through the training. Accordingly, the network parameters of the neural network are updated based in part on the calibration parameters. Upon completion of the training, the neural network is calibrated for a user. This calibration includes initializing and inputting the calibration parameters along with calibration images showing an eye of the user to the neural network. The calibration includes updating the calibration parameters without changing the network parameters by minimizing the loss function of the neural network based on the calibration images. Upon completion of the calibration, the neural network is used to generate 3D gaze information for the user.
Type: Application
Filed: January 14, 2020
Publication date: December 10, 2020
Applicant: Tobii AB
Inventor: Erik Linden
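The key idea is that the per-user calibration stage updates only the calibration parameters while the trained network weights stay frozen. The PyTorch sketch below shows that stage under the assumption that the network takes the calibration parameters as an extra input; the parameter shape, loss function and optimizer settings are placeholders.

```python
import torch

def calibrate_user(network, calib_params, calib_images, calib_targets,
                   loss_fn, steps=100, lr=1e-2):
    """Optimise the calibration parameters on a user's calibration images
    without changing the network parameters."""
    for p in network.parameters():
        p.requires_grad_(False)                       # network weights stay fixed
    calib_params = calib_params.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([calib_params], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        preds = network(calib_images, calib_params)   # network conditioned on calibration params
        loss = loss_fn(preds, calib_targets)
        loss.backward()                               # gradients reach only calib_params
        optimizer.step()
    return calib_params.detach()
```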
-
Patent number: 10820796
Abstract: A method is disclosed, comprising obtaining a first angular offset between a first eye direction and a first gaze direction of an eye having a first pupil size, obtaining a second angular offset between a second eye direction and a second gaze direction of the eye having a second pupil size, and forming, based on the first angular offset and the second angular offset, a compensation model describing an estimated angular offset as a function of pupil size. A system and a device comprising a circuitry configured to perform such a method are also disclosed.
Type: Grant
Filed: September 7, 2018
Date of Patent: November 3, 2020
Assignee: Tobii AB
Inventors: Mark Ryan, Simon Johansson, Erik Lindén
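With two (pupil size, angular offset) samples, the simplest compensation model is the line through them. The sketch below builds exactly that; linear interpolation and extrapolation are assumptions, since the patent only requires a model formed from the two offsets.

```python
def build_compensation_model(offset1, pupil_size1, offset2, pupil_size2):
    """Return a function estimating the angular offset between eye direction
    and gaze direction for an arbitrary pupil size."""
    slope = (offset2 - offset1) / (pupil_size2 - pupil_size1)
    def estimated_offset(pupil_size):
        return offset1 + slope * (pupil_size - pupil_size1)
    return estimated_offset

# Example with hypothetical numbers: 5.2 deg offset at a 2 mm pupil, 4.1 deg at 6 mm.
model = build_compensation_model(5.2, 2.0, 4.1, 6.0)
print(model(4.0))   # estimated offset at a 4 mm pupil
```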
-
Publication number: 20200250488
Abstract: Techniques for generating 3D gaze predictions based on a deep learning system are described. In an example, the deep learning system includes a neural network. A scaled image is generated from a 2D image showing a user face based on a rough distance between the user eyes and a camera that generated the 2D image. Image crops at different resolutions are generated from the scaled image and include a crop around each of the user eyes and a crop around the user face. These crops are input to the neural network. In response, the neural network outputs a distance correction and a 2D gaze vector per user eye. A corrected eye-to-camera distance is generated by correcting the rough distance based on the distance correction. A 3D gaze vector for each of the user eyes is generated based on the corresponding 2D gaze vector and the corrected distance.
Type: Application
Filed: February 11, 2020
Publication date: August 6, 2020
Applicant: Tobii AB
Inventor: Erik Linden
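The sketch below covers only the output stage: turning the network's per-eye distance correction and 2D gaze vector into a 3D gaze ray. Treating the correction as multiplicative and lifting the 2D vector by using the corrected distance as the depth component are assumptions made for the example; the crop generation and the network itself are omitted.

```python
import numpy as np

def assemble_3d_gaze(rough_distance, distance_correction, gaze_2d, eye_origin_3d):
    """Combine a rough eye-to-camera distance, a predicted correction and a
    per-eye 2D gaze vector into a 3D gaze ray (origin, unit direction)."""
    corrected_distance = rough_distance * distance_correction   # corrected eye-to-camera distance
    direction = np.array([gaze_2d[0], gaze_2d[1], corrected_distance])
    direction /= np.linalg.norm(direction)
    return np.asarray(eye_origin_3d, dtype=float), direction
```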
-
Publication number: 20200225743
Abstract: The present invention relates to a method for establishing the position of an object in relation to a camera in order to enable gaze tracking with a user watching the object, where the user is in view of the camera. The method comprises the steps of showing a known pattern, consisting of a set of stimulus points (s1, s2, ..., sN), on the object, detecting gaze rays (g1, g2, ..., gN) from an eye of the user as the user looks at the stimulus points (s1, s2, ..., sN), and finding, by means of an optimizer, a position and orientation of the object in relation to the camera such that the gaze rays (g1, g2, ..., gN) approach the stimulus points (s1, s2, ..., sN).
Type: Application
Filed: December 11, 2019
Publication date: July 16, 2020
Applicant: Tobii AB
Inventor: Erik Lindén
-
Publication number: 20200225744
Abstract: A preliminary path for light travelling towards a camera via corneal reflection is estimated based on a preliminary position and orientation of an eye. A position where the reflection would appear in images captured by the camera is estimated. A distance is formed between a detected position of a corneal reflection of an illuminator and the estimated position. A second preliminary path for light travelling through the cornea or from the sclera towards a camera is estimated based on the preliminary position and orientation, and a position where the second preliminary path would appear to originate in images captured by this camera is estimated. A distance is formed between a detected edge of a pupil or iris and the estimated position where the second preliminary path would appear to originate. An updated position and/or orientation of the eye is determined using an objective function formed based on the formed distances.
Type: Application
Filed: December 16, 2019
Publication date: July 16, 2020
Applicant: Tobii AB
Inventor: Erik Lindén
-
Publication number: 20200225745
Abstract: A gaze tracking model is adapted to predict a gaze ray using an image of the eye. The model is trained using training data which comprises a first image of an eye, reference gaze data indicating a gaze point towards which the eye was gazing when the first image was captured, and images of an eye captured by first and second cameras at a point in time. The training comprises forming a distance between the gaze point and a gaze ray predicted by the model using the first image, forming a consistency measure based on a gaze ray predicted by the model using the image captured by the first camera and a gaze ray predicted by the model using the image captured by the second camera, forming an objective function based on at least the formed distance and the consistency measure, and training the model using the objective function.
Type: Application
Filed: December 16, 2019
Publication date: July 16, 2020
Applicant: Tobii AB
Inventors: David Molin, Erik Lindén
-
Patent number: 10671890
Abstract: Techniques for generating 3D gaze predictions based on a deep learning system are described. In an example, the deep learning system includes a neural network. The neural network is trained with training images generated by cameras and showing eyes of a user while gazing at stimulus points. Some of the stimulus points are in the planes of the cameras. Remaining stimulus points are not in the planes of the cameras. The training includes inputting a first training image associated with a stimulus point in a camera plane and inputting a second training image associated with a stimulus point outside the camera plane. The training minimizes a loss function of the neural network based on a distance between at least one of the stimulus points and a gaze line.
Type: Grant
Filed: March 30, 2018
Date of Patent: June 2, 2020
Assignee: Tobii AB
Inventor: Erik Linden
-
Patent number: 10558895
Abstract: Techniques for generating 3D gaze predictions based on a deep learning system are described. In an example, the deep learning system includes a neural network. A scaled image is generated from a 2D image showing a user face based on a rough distance between the user eyes and a camera that generated the 2D image. Image crops at different resolutions are generated from the scaled image and include a crop around each of the user eyes and a crop around the user face. These crops are input to the neural network. In response, the neural network outputs a distance correction and a 2D gaze vector per user eye. A corrected eye-to-camera distance is generated by correcting the rough distance based on the distance correction. A 3D gaze vector for each of the user eyes is generated based on the corresponding 2D gaze vector and the corrected distance.
Type: Grant
Filed: March 30, 2018
Date of Patent: February 11, 2020
Assignee: Tobii AB
Inventor: Erik Linden
-
Patent number: 10534982
Abstract: Techniques for generating 3D gaze predictions based on a deep learning system are described. In an example, the deep learning system includes a neural network. The neural network is trained with training images. During the training, calibration parameters are initialized and input to the neural network, and are updated through the training. Accordingly, the network parameters of the neural network are updated based in part on the calibration parameters. Upon completion of the training, the neural network is calibrated for a user. This calibration includes initializing and inputting the calibration parameters along with calibration images showing an eye of the user to the neural network. The calibration includes updating the calibration parameters without changing the network parameters by minimizing the loss function of the neural network based on the calibration images. Upon completion of the calibration, the neural network is used to generate 3D gaze information for the user.
Type: Grant
Filed: March 30, 2018
Date of Patent: January 14, 2020
Assignee: Tobii AB
Inventor: Erik Linden