Patents Assigned to TOBII AB
-
Publication number: 20210349607
Abstract: Visualizable data are obtained that represent a scene with at least one object. The visualizable data describe the scene as seen from a position. First and second measures are determined, which represent extensions of one of the objects in a smallest and a largest dimension respectively. An object aspect ratio is calculated that represents a relationship between the first and second measures. Based on the object aspect ratio, a selection margin is assigned to the object. The selection margin designates a zone outside of the object within which the object is validly selectable for manipulation, in addition to the area of the object that is visible as seen from the position. Thus, it is made easier to manipulate the visualizable data in response to user input, for instance in the form of gaze-based selection commands.
Type: Application
Filed: March 31, 2021
Publication date: November 11, 2021
Applicant: Tobii AB
Inventors: Robin Thunström, Staffan Widegarn Åhlvik
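A minimal Python sketch of the margin-assignment idea. The abstract does not specify the mapping from aspect ratio to margin, so the inverse-proportional rule, the class, and all names here are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class ProjectedObject:
    min_extent: float  # first measure: extension in the smallest dimension
    max_extent: float  # second measure: extension in the largest dimension

def selection_margin(obj: ProjectedObject, base_margin: float = 10.0) -> float:
    """Assign a wider selectable zone to thin, elongated objects.

    The aspect ratio min_extent / max_extent is near 0 for a thin line
    (hard to hit with gaze, so it gets almost the full base margin) and
    near 1 for a square-ish object (which gets almost none).
    """
    aspect_ratio = obj.min_extent / obj.max_extent
    return base_margin * (1.0 - aspect_ratio)

# A thin connector line gets a generous margin; a near-square object gets little.
print(selection_margin(ProjectedObject(min_extent=2.0, max_extent=200.0)))    # 9.9
print(selection_margin(ProjectedObject(min_extent=100.0, max_extent=120.0)))  # ~1.67
```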
-
Publication number: 20210350554
Abstract: An eye-tracking system configured to: receive a reference-image of an eye of a user, the reference-image being associated with reference-eye-data; receive one or more sample-images of the eye of the user; and, for each of the one or more sample-images: determine a difference between the reference-image and the sample-image to define a corresponding differential-image; and determine eye-data for the sample-image based on the differential-image and the reference-eye-data associated with the reference-image.
Type: Application
Filed: March 31, 2021
Publication date: November 11, 2021
Applicant: Tobii AB
Inventors: David Masko, Mark Ryan, Mattias Kuldkepp
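A toy Python/NumPy illustration of the differential-image step. The real eye-data estimation is certainly more sophisticated; the centroid-shift update below is a made-up stand-in, and every name is hypothetical:

```python
import numpy as np

def eye_data_from_differential(reference_image: np.ndarray,
                               sample_image: np.ndarray,
                               reference_eye_data) -> np.ndarray:
    """Derive eye data for a sample frame from its difference to a reference."""
    diff = np.abs(sample_image.astype(np.int32) - reference_image.astype(np.int32))
    total = diff.sum()
    if total == 0:
        return np.asarray(reference_eye_data, dtype=float)  # nothing changed
    ys, xs = np.indices(diff.shape)
    change_centroid = np.array([(xs * diff).sum(), (ys * diff).sum()]) / total
    image_center = np.array([diff.shape[1] / 2.0, diff.shape[0] / 2.0])
    # Stand-in update: offset the reference eye data (e.g. a pupil position)
    # by where the change between the two images is concentrated.
    return np.asarray(reference_eye_data, dtype=float) + (change_centroid - image_center)
```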
-
Publication number: 20210347364
Abstract: The invention relates to a method for driver alertness detection. The method comprises the steps of determining a vanishing point of a vehicle in motion; determining over time, by an eye tracking device, a set of gaze points of the driver of the vehicle; determining a gaze movement from the set of gaze points; and identifying an alertness of the driver based on a direction of the gaze movement relative to the vanishing point being outward. Further, the invention relates to an eye tracking device for driver alertness detection, an alertness detection system, a computer program, and a computer-readable medium.
Type: Application
Filed: April 9, 2021
Publication date: November 11, 2021
Applicant: Tobii AB
Inventor: Andrew Ratcliff
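A Python sketch of the "outward relative to the vanishing point" test, assuming 2D gaze points in the same image plane as the vanishing point; the windowing and sign convention are assumptions:

```python
import numpy as np

def gaze_moving_outward(gaze_points, vanishing_point) -> bool:
    """True if the net gaze movement points away from the vanishing point."""
    pts = np.asarray(gaze_points, dtype=float)
    vp = np.asarray(vanishing_point, dtype=float)
    movement = pts[-1] - pts[0]      # net gaze movement over the window
    outward = pts[0] - vp            # radial direction away from the vanishing point
    return float(np.dot(movement, outward)) > 0.0

# Per the abstract, outward gaze movement relative to the vanishing point is
# the cue used to identify alertness; here the movement is outward.
print(gaze_moving_outward([(0.1, 0.0), (0.3, 0.0)], vanishing_point=(0.0, 0.0)))  # True
```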
-
Patent number: 11169604
Abstract: A method for determining gaze calibration parameters for gaze estimation of a viewer using an eye-tracking system. The method comprises obtaining a set of data points including gaze tracking data of the viewer and position information of at least one target visual; selecting a first subset of the data points and determining gaze calibration parameters using said first subset. A score for the gaze calibration parameters is determined by using the gaze calibration parameters with a second subset of data points, wherein at least one data point of the second subset is not included in the first subset. The score is indicative of the capability of the gaze calibration parameters to reflect position information of the at least one target visual based on the gaze tracking data. The score is compared to a candidate score and, if it is higher, the candidate calibration parameters are set to the gaze calibration parameters and the candidate score to the score.
Type: Grant
Filed: November 16, 2020
Date of Patent: November 9, 2021
Assignee: Tobii AB
Inventors: Patrik Barkman, David Molin
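The scoring loop reads like a RANSAC-style search. A Python sketch under that assumption, with `fit` and `score` as caller-supplied placeholders; the sampling strategy and iteration count are not from the patent:

```python
import random

def calibrate(data_points, fit, score, n_iter=100, sample_size=5):
    """Keep the gaze calibration whose held-out score beats the candidate."""
    candidate_params, candidate_score = None, float("-inf")
    for _ in range(n_iter):
        first = random.sample(data_points, sample_size)      # fit on this subset
        second = [p for p in data_points if p not in first]  # score on held-out points
        params = fit(first)
        s = score(params, second)
        if s > candidate_score:   # better than the candidate so far: adopt it
            candidate_params, candidate_score = params, s
    return candidate_params
```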
-
Publication number: 20210342000
Abstract: Techniques for interacting with a first computing device based on gaze information are described. In an example, the first computing device captures a gaze direction of a first user of the first computing device by using an eye tracking device. The first computing device displays a representation of a second user on a display of the first computing device. Further, the first computing device receives, from the first user, communication data generated by an input device. The first computing device determines whether the gaze direction of the first user is directed to the representation of the second user. If the gaze direction of the first user is directed to the representation of the second user, the first computing device transmits the communication data to a second computing device of the second user.
Type: Application
Filed: June 1, 2021
Publication date: November 4, 2021
Applicant: Tobii AB
Inventors: Daniel Ricknäs, Erland George-Svahn, Rebecka Lannsjö, Andrew Ratcliff, Regimantas Vegele, Geoffrey Cooper, Niklas Blomqvist
-
Patent number: 11156831
Abstract: An eye-tracking system for performing a pupil-detection process, the eye-tracking system configured to: receive image-data comprising a plurality of pixel-arrays, each pixel-array having a plurality of pixel locations and an intensity-value at each of the pixel locations; for each pixel location of a region of pixel locations: define an intensity-value-set comprising the intensity-values at the pixel location for two or more of the plurality of pixel-arrays; and determine the pixel location to be an excluded pixel location if the intensity-value-set does not satisfy an intensity condition; and exclude the excluded pixel locations from the pupil-detection process.
Type: Grant
Filed: December 31, 2019
Date of Patent: October 26, 2021
Assignee: Tobii AB
Inventors: Mikael Rosell, Simon Johansson, Johannes Kron
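A compact NumPy rendering of the exclusion step. The specific intensity condition used here (sufficient inter-frame variation, no saturation) is an illustrative choice; the patent only requires that the per-location intensity-value-set satisfy an intensity condition:

```python
import numpy as np

def excluded_pixel_mask(pixel_arrays, min_variation=5, saturation=250):
    """Return a boolean mask of pixel locations to exclude from pupil detection.

    pixel_arrays: stack of shape (n_frames, height, width); each slice is one
    pixel-array, so stack[:, y, x] is the intensity-value-set at location (y, x).
    """
    stack = np.asarray(pixel_arrays)
    variation = stack.max(axis=0) - stack.min(axis=0)  # spread of each value set
    saturated = (stack >= saturation).any(axis=0)      # clipped in any frame
    return (variation < min_variation) | saturated     # True = excluded location
```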
-
Patent number: 11144755
Abstract: Methods and corresponding systems for controlling illuminators in an eye tracking system are disclosed. The system includes a first image sensor, a second image sensor, a first close illuminator arranged to enable capture of bright pupil images by the first image sensor, a second close illuminator arranged to enable capture of bright pupil images by the second image sensor, and one or more far illuminators arranged to enable capture of dark pupil images by the first image sensor and the second image sensor. In the methods, main and support illuminators are controlled during exposure of a first and a second image sensor to produce enhanced contrast and glint position for eye/gaze tracking.
Type: Grant
Filed: March 28, 2019
Date of Patent: October 12, 2021
Assignee: Tobii AB
Inventors: Jonas Sjöstrand, Anders Dahl, Mattias I Karlsson
-
Patent number: 11138429
Abstract: An eye-tracking system (e.g., a virtual reality or augmented reality headset) can be used for eye tracking and for iris recognition. Illuminators used to illuminate the eyes of a user during eye tracking can be selectively powered on and off in connection with capturing image information in order to obtain image information that suitably depicts an iris region of an eye of the user. This image information can be used to recognize the iris region and thereby authenticate and/or identify the user.
Type: Grant
Filed: March 11, 2019
Date of Patent: October 5, 2021
Assignee: Tobii AB
Inventors: Henrik Eskilsson, Mårten Skogö
-
Patent number: 11138428
Abstract: According to the invention, an image sensor is disclosed. The image sensor may include a plurality of pixels. Each pixel of a first portion of the plurality of pixels may include a near-infrared filter configured to block red, green, and blue light; and pass near-infrared light. Each pixel of a second portion of the plurality of pixels may be configured to receive at least one of red, green, or blue light; and receive near-infrared light.
Type: Grant
Filed: September 23, 2019
Date of Patent: October 5, 2021
Assignee: Tobii AB
Inventors: Mårten Skogö, Peter Blixt, Henrik Jönsson
-
Publication number: 20210303062
Abstract: A system for determining a gaze point of a user, the system comprising at least one sensor configured to determine at least one signal representative of a variation in a volume of the interior of a user's ear, and a processor configured to determine a direction of eye movement of the user based on the determined signal, and determine a gaze point of the user based on the direction of eye movement. Further, the disclosure relates to a corresponding method.
Type: Application
Filed: March 30, 2020
Publication date: September 30, 2021
Applicant: Tobii AB
Inventor: Andrew Muehlhausen
-
Patent number: 11129530
Abstract: An eye tracking system having circuitry configured to perform a method is disclosed. An estimated radius (r) from an eyeball center to a pupil center in an eye is obtained, an estimated eyeball center position (e) in the eye in relation to an image sensor for capturing images of the eye is determined, an image of the eye is captured by means of the image sensor, and a position of a representation of the pupil center in the eye in the obtained image is identified. An estimated pupil center position (p′) is then determined based on the estimated eyeball center position (e), the estimated radius (r), and the identified position of the representation of the pupil center in the obtained image.
Type: Grant
Filed: September 7, 2018
Date of Patent: September 28, 2021
Assignee: Tobii AB
Inventors: Simon Johansson, Mark Ryan
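One natural geometric reading: the pupil center lies on a sphere of radius r around the eyeball center e, along the camera ray through the imaged pupil. A NumPy sketch of that ray-sphere intersection (the exact claimed computation may differ):

```python
import numpy as np

def estimate_pupil_center(e, r, ray_dir, camera_pos=(0.0, 0.0, 0.0)):
    """Intersect the camera ray through the imaged pupil with the sphere
    of radius r centered on the estimated eyeball center e."""
    e = np.asarray(e, dtype=float)
    o = np.asarray(camera_pos, dtype=float)
    d = np.asarray(ray_dir, dtype=float)
    d /= np.linalg.norm(d)
    oc = o - e
    b = 2.0 * np.dot(d, oc)          # quadratic in t with a = 1 since |d| = 1
    c = np.dot(oc, oc) - r * r
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                  # ray misses the eyeball sphere
    t = (-b - np.sqrt(disc)) / 2.0   # nearer hit: the side facing the camera
    return o + t * d                 # estimated pupil center position p'
```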
-
Publication number: 20210286427
Abstract: A system, a head-mounted device, a computer program, a carrier and a method for adding a virtual object to an extended reality view based on gaze-tracking data for a user are disclosed. In the method, one or more volumes of interest in world space are defined. Furthermore, a position of the user in world space is obtained, and a gaze direction and a gaze convergence distance of the user are determined. A gaze point in world space of the user is then determined based on the determined gaze direction and gaze convergence distance of the user. On condition that the determined gaze point in world space is consistent with a volume of interest of the defined one or more volumes of interest in world space, a virtual object is added to the extended reality view.
Type: Application
Filed: June 29, 2020
Publication date: September 16, 2021
Applicant: Tobii AB
Inventor: Sourabh PATERIYA
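A small Python sketch of the gating logic, assuming the gaze point is the gaze direction scaled to the convergence distance and that a volume of interest is an axis-aligned box; both are simplifying assumptions:

```python
import numpy as np

def gaze_point_in_world(user_pos, gaze_dir, convergence_distance):
    """Gaze point = user position + gaze direction scaled to convergence distance."""
    d = np.asarray(gaze_dir, dtype=float)
    return np.asarray(user_pos, dtype=float) + convergence_distance * d / np.linalg.norm(d)

def in_volume(point, box_min, box_max):
    """Stand-in volume-of-interest test using an axis-aligned box."""
    return bool(np.all(point >= np.asarray(box_min)) and np.all(point <= np.asarray(box_max)))

gaze_point = gaze_point_in_world(user_pos=(0.0, 1.7, 0.0),
                                 gaze_dir=(0.0, 0.0, 1.0),
                                 convergence_distance=2.0)
if in_volume(gaze_point, box_min=(-1.0, 0.0, 1.0), box_max=(1.0, 3.0, 3.0)):
    print("gaze point consistent with the volume: add the virtual object here")
```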
-
Publication number: 20210287443
Abstract: A system, a head-mounted device, a computer program, a carrier and a method for positioning a virtual object in an extended reality view of at least one user are disclosed. In the method, gaze points in world space and respective gaze durations for the gaze points are determined for the at least one user by means of gaze-tracking over a duration of time. Furthermore, gaze heatmap data are determined based on the determined gaze points and respective gaze durations, and the virtual object is positioned in the extended reality view in world space based on the determined gaze heatmap data.
Type: Application
Filed: June 29, 2020
Publication date: September 16, 2021
Applicant: Tobii AB
Inventor: Sourabh PATERIYA
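A NumPy sketch of duration-weighted heatmap accumulation over a coarse world-space grid, returning the hottest cell as a placement hint; the grid discretization and argmax placement rule are assumptions:

```python
import numpy as np

def hottest_cell(gaze_points, durations, grid_shape, bounds_min, bounds_max):
    """Accumulate duration-weighted gaze points and return the hottest grid cell."""
    heat = np.zeros(grid_shape)
    lo = np.asarray(bounds_min, dtype=float)
    hi = np.asarray(bounds_max, dtype=float)
    shape = np.asarray(grid_shape)
    for point, duration in zip(gaze_points, durations):
        idx = ((np.asarray(point, dtype=float) - lo) / (hi - lo) * shape).astype(int)
        idx = np.clip(idx, 0, shape - 1)     # keep edge points inside the grid
        heat[tuple(idx)] += duration         # longer fixations weigh more
    return np.unravel_index(np.argmax(heat), heat.shape)
```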
-
Publication number: 20210278678
Abstract: Techniques for distributed foveated rendering based on user gaze are described. In an example, an end user device is communicatively coupled with a remote computer and presents images on a display based on gaze data. The user device receives a low-resolution background image and a high-resolution foreground image from the remote computer based on the gaze data. The foreground image is constrained to a foveated region according to the gaze data. The end user device generates a composite image by scaling up the background image and overlaying the foreground image. The composite image is then presented on the display.
Type: Application
Filed: July 20, 2019
Publication date: September 9, 2021
Applicant: Tobii AB
Inventor: Ritchie Brannan
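The compositing step in NumPy, using nearest-neighbour upscaling for brevity (a real renderer would presumably filter the background and blend the patch edges); all names are illustrative:

```python
import numpy as np

def composite_foveated(background_lo, foreground_hi, scale, fovea_top_left):
    """Scale up the low-res background and overlay the high-res foveal patch."""
    bg = np.repeat(np.repeat(background_lo, scale, axis=0), scale, axis=1)
    top, left = fovea_top_left               # derived from the gaze data upstream
    h, w = foreground_hi.shape[:2]
    bg[top:top + h, left:left + w] = foreground_hi
    return bg

frame = composite_foveated(background_lo=np.zeros((270, 480), dtype=np.uint8),
                           foreground_hi=np.full((200, 200), 255, dtype=np.uint8),
                           scale=4, fovea_top_left=(400, 800))
print(frame.shape)  # (1080, 1920): upscaled background with the foveal patch overlaid
```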
-
Publication number: 20210256980
Abstract: A method for voice-based interactive communication using a digital assistant. The method comprises an attention detection step, in which the digital assistant detects a user attention and as a result is set into a listening mode; a speaker detection step, in which the digital assistant detects the user as a current speaker; a speech sound detection step, in which the digital assistant detects and records speech uttered by the current speaker, which speech sound detection step further comprises a lip movement detection step, in which the digital assistant detects a lip movement of the current speaker; a speech analysis step, in which the digital assistant parses said recorded speech and extracts speech-based verbal informational content from it; and a subsequent response step, in which the digital assistant provides feedback to the user based on said recorded speech.
Type: Application
Filed: December 21, 2020
Publication date: August 19, 2021
Applicant: Tobii AB
Inventors: Erland George-Svahn, Sourabh PATERIYA, Onur Kurt, Deepak Akkil
-
Publication number: 20210255462
Abstract: Computer-generated image data is presented on first and second displays of a binocular headset, presuming that a user's left and right eyes are located at first and second positions relative to the first and second displays respectively. At least one updated version of the image data is presented, which is rendered presuming that at least one of the user's left and right eyes is located at a position that differs from the first or second position in at least one spatial dimension. In response thereto, a user-generated feedback signal is received expressing either a quality measure of the updated version of the computer-generated image data relative to computer-generated image data presented previously, or a confirmation command. The steps of presenting the updated version of the computer-generated image data and receiving the user-generated feedback signal are repeated until the confirmation command is received.
Type: Application
Filed: December 21, 2020
Publication date: August 19, 2021
Applicant: Tobii AB
Inventors: Geoffrey Cooper, Rickard Lundahl, Erik Lindén, Maria Gordon
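The repeat-until-confirm loop, sketched in Python with the rendering, feedback capture, and position update left as caller-supplied callables; how the assumed eye positions are perturbed between iterations is not specified in the abstract:

```python
def calibrate_eye_positions(initial_positions, render, get_feedback, update):
    """Present re-rendered image data until the user confirms the eye positions.

    render(positions):           draw the stereo image data for these eye positions
    get_feedback():              returns 'confirm' or a quality measure vs. the
                                 previously presented image data
    update(positions, feedback): propose new assumed eye positions from feedback
    """
    positions = initial_positions
    while True:
        render(positions)
        feedback = get_feedback()
        if feedback == "confirm":
            return positions        # user accepts the current rendering
        positions = update(positions, feedback)
```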
-
Publication number: 20210258464
Abstract: There is provided a method, system, and non-transitory computer-readable storage medium for controlling the exposure settings of a rolling-shutter image sensor device with global reset. This is achieved by obtaining a first image captured by the image sensor device at a current exposure setting that comprises a partial readout parameter representing a number of image parts for partial readout by the image sensor device; determining an intensity value of the first image; and comparing the intensity value of the first image to a desired intensity value. If the intensity values differ by more than an allowed deviation, an updated number of image parts for partial readout is determined based on the current number of image parts and the intensity value of the first image. Thereafter, the current exposure setting is updated by setting the value of the partial readout parameter to the updated number of image parts.
Type: Application
Filed: December 21, 2020
Publication date: August 19, 2021
Applicant: Tobii AB
Inventors: Viktor Åberg, Niklas Ollesson, Anna Redz, Magnus Ivarsson
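A sketch of the control loop in Python. Whether more readout parts lengthen or shorten the effective exposure depends on the sensor, so the proportional update rule and its direction below are explicit assumptions:

```python
def update_partial_readout(current_parts, measured_intensity, target_intensity,
                           allowed_deviation=4.0, min_parts=1, max_parts=16):
    """Update the partial-readout parameter from the measured image intensity."""
    if abs(measured_intensity - target_intensity) <= allowed_deviation:
        return current_parts                     # within the allowed deviation
    # Assumed proportional rule: scale the part count by how far off we are.
    updated = round(current_parts * target_intensity / measured_intensity)
    return max(min_parts, min(max_parts, updated))

# Image came out too bright at 8 parts: the count is scaled down.
print(update_partial_readout(current_parts=8, measured_intensity=180.0,
                             target_intensity=120.0))  # 5
```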
-
Publication number: 20210255699
Abstract: An eyetracker obtains input signal components (SCR, SP) describing a respective position of each of at least one glint in a subject's eye and a position of a pupil of said eye. Based on the input signal components (SCR, SP), the eyetracker determines if a saccade is in progress, i.e. if the gaze point of the subject's eye moves rapidly from a first point (GP1) to a second point (GP2) where the gaze point is fixed. During the saccade, the eyetracker generates a tracking signal describing the gaze point of the eye based on a subset (SCR) of the input signal components, which subset (SCR) describes a cornea reference point for the subject's eye (E). After the saccade, however, the tracking signal is preferably again based on all the input signal components (SCR, SP).
Type: Application
Filed: September 30, 2020
Publication date: August 19, 2021
Applicant: Tobii AB
Inventor: Richard Andersson
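A Python sketch of the switching behaviour. The velocity threshold used to flag the saccade is a common heuristic, not taken from the patent, and `combine` stands in for the tracker's normal fusion of both components:

```python
import math

SACCADE_SPEED = 100.0   # deg/s; illustrative threshold, not from the patent

def saccade_in_progress(prev_gaze, curr_gaze, dt):
    """Flag a saccade when the angular gaze speed exceeds a threshold."""
    speed = math.hypot(curr_gaze[0] - prev_gaze[0],
                       curr_gaze[1] - prev_gaze[1]) / dt
    return speed > SACCADE_SPEED

def tracking_signal(s_cr, s_p, in_saccade, combine):
    """During a saccade use only the cornea-reference component (SCR);
    otherwise base the gaze point on all components (SCR, SP)."""
    return s_cr if in_saccade else combine(s_cr, s_p)
```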
-
Publication number: 20210256353
Abstract: Techniques for using a deep generative model to generate synthetic data sets that can be used to boost the performance of a discriminative model are described. In an example, an autoencoding generative adversarial network (AEGAN) is trained to generate the synthetic data sets. The AEGAN includes an autoencoding network and a generative adversarial network (GAN) that share a generator. The generator learns how to generate synthetic data sets based on a data distribution from a latent space. Upon training the AEGAN, the generator generates the synthetic data sets. In turn, the synthetic data sets are used to train a predictive model, such as a convolutional neural network for gaze prediction.
Type: Application
Filed: May 13, 2019
Publication date: August 19, 2021
Applicant: Tobii AB
Inventor: Mårten Nilsson
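A structural sketch in Python/PyTorch of the shared-generator idea: the same module serves as the autoencoder's decoder and the GAN's generator. Layer sizes, losses, and the single-linear-layer networks are placeholders, not the patented architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT, DATA = 32, 128

encoder = nn.Sequential(nn.Linear(DATA, LATENT))              # x -> z
generator = nn.Sequential(nn.Linear(LATENT, DATA))            # z -> x (shared module)
discriminator = nn.Sequential(nn.Linear(DATA, 1), nn.Sigmoid())

x = torch.randn(8, DATA)          # a batch of real samples
z = torch.randn(8, LATENT)        # latent codes drawn from the prior

# Autoencoding path: encode, then decode through the shared generator.
reconstruction_loss = F.mse_loss(generator(encoder(x)), x)

# GAN path: the same generator maps prior samples to synthetic data,
# which after training would feed a downstream gaze-prediction model.
synthetic = generator(z)
generator_gan_loss = F.binary_cross_entropy(discriminator(synthetic),
                                            torch.ones(8, 1))
```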
-
Publication number: 20210255700
Abstract: The present invention provides improved methods and systems for assisting a user when interacting with a graphical user interface by combining gaze-based input with gesture-based user commands. The present invention provides systems, devices and methods that enable a user of a computer system without a traditional touch-screen to interact with graphical user interfaces in a touch-screen-like manner using a combination of gaze-based input and gesture-based user commands. Furthermore, the present invention offers a solution for touch-screen-like interaction using gaze input and gesture-based input as a complement or an alternative to touch-screen interactions with a computer device having a touch-screen, for instance in situations where interaction with the regular touch-screen is cumbersome or ergonomically challenging.
Type: Application
Filed: October 23, 2020
Publication date: August 19, 2021
Applicant: Tobii AB
Inventors: Markus Cederlund, Robert Gavelin, Anders Vennström, Anders Kaplan, Anders Olsson, Mårten Skogö