Patents Assigned to Varjo Technologies Oy
  • Publication number: 20250221072
    Abstract: An imaging system includes an image sensor chip with a photo-sensitive surface having photo-sensitive cells; and a colour filter array with physical smallest repeating units (PSRUs), a given PSRU having at least four sub-units, a given sub-unit comprising colour filters of at least three different colours, and at least four other colour filters whose size is equal to a size of the given sub-unit; and processor(s) configured to: obtain first image data corresponding to the at least four sub-units in individual ones of the PSRUs, while obtaining second image data corresponding to the at least four other colour filters in the individual ones of the PSRUs; and generate at least one of: a first-resolution image, a second-resolution image, a third-resolution image, a high-dynamic-range (HDR) image.
    Type: Application
    Filed: December 20, 2024
    Publication date: July 3, 2025
    Applicant: Varjo Technologies Oy
    Inventor: Mikko Ollila
  • Publication number: 20250220307
    Abstract: Disclosed is a method that includes detecting a beginning of a movement of a user's gaze by processing gaze-tracking data, collected by a gaze-tracking means; predicting a motion blur in an image which is to be captured by at least one camera during the movement of the user's gaze, using a portion of the gaze-tracking data that corresponds to the beginning of the movement of the user's gaze; and compensating for the predicted motion blur while capturing the image by controlling the at least one camera.
    Type: Application
    Filed: December 29, 2023
    Publication date: July 3, 2025
    Applicant: Varjo Technologies Oy
    Inventors: Ville Timonen, Kalle Karhu
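The abstract above describes predicting motion blur from the start of a gaze movement and compensating while capturing. A minimal sketch of one plausible compensation strategy (all function names and the pixel-blur budget are illustrative assumptions, not taken from the patent): estimate angular gaze velocity from the earliest gaze samples, then shorten the camera exposure so the predicted blur stays within a pixel budget.

```python
def estimate_gaze_velocity(samples):
    """Estimate angular gaze velocity (deg/s) from (time_s, angle_deg)
    samples collected at the beginning of a detected gaze movement."""
    (t0, a0), (t1, a1) = samples[0], samples[-1]
    return (a1 - a0) / (t1 - t0)

def compensated_exposure(base_exposure_s, velocity_deg_s,
                         pixels_per_degree, max_blur_px=1.0):
    """Shorten exposure so the predicted blur streak
    (velocity * exposure, converted to pixels) stays within budget."""
    blur_px = abs(velocity_deg_s) * base_exposure_s * pixels_per_degree
    if blur_px <= max_blur_px:
        return base_exposure_s  # no compensation needed
    return base_exposure_s * (max_blur_px / blur_px)
```

For example, a 200 deg/s gaze sweep at 20 pixels/degree would smear 40 pixels over a 10 ms exposure, so the exposure is cut to 0.25 ms to keep the streak under one pixel.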
  • Patent number: 12348876
    Abstract: A system includes a tracking device and a light source. The tracking device has a camera and a first controller. The camera captures a first image during a first period of time (t0-t1) and a second image during a second period of time (t2-t3). The first controller is coupled to the camera and configured to obtain first timing information and second timing information, form timing instructions, and communicate the timing instructions to the light source over a communication interface. The light source is configured to use the timing instructions to emit a first amount of light and a second amount of light. The first controller is further configured to calculate an image intensity difference between the first image and the second image to identify, from the first image, pixel(s) illuminated by the light source.
    Type: Grant
    Filed: May 11, 2023
    Date of Patent: July 1, 2025
    Assignee: Varjo Technologies Oy
    Inventors: Mikko Strandborg, Mikko Ollila
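The intensity-difference step in the abstract above can be sketched in a few lines (a toy version with nested lists standing in for image buffers; the function name and threshold are assumptions): subtract the non-illuminated capture from the illuminated one pixel-wise and keep the pixels whose difference exceeds a threshold.

```python
def find_illuminated_pixels(first, second, threshold):
    """Identify pixels in the first image that were lit by the pulsed
    light source: pixel-wise intensity difference against the second
    (non-illuminated) image, thresholded. Images are rows of ints."""
    hits = []
    for y, (row_a, row_b) in enumerate(zip(first, second)):
        for x, (a, b) in enumerate(zip(row_a, row_b)):
            if a - b > threshold:
                hits.append((x, y))
    return hits
```

Because ambient light contributes roughly equally to both captures, it cancels in the difference, leaving only the source's own contribution above the threshold.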
  • Publication number: 20250209653
    Abstract: Disclosed is a method for depth estimation including: receiving first and second colour images and a sparse depth map corresponding to the first and second colour images; generating first and second depth prediction maps by providing the first and second colour images and the sparse depth map as input to a neural network, wherein the neural network has a first branch that receives the first colour image and the sparse depth map as input to generate the first depth prediction map, and a second branch that receives the second colour image and the sparse depth map as input to generate the second depth prediction map, and wherein the first and second branches share weights through an inter-branch weight-sharing mechanism; and fusing the first and second depth prediction maps to generate a dense depth map for depth estimation.
    Type: Application
    Filed: December 21, 2023
    Publication date: June 26, 2025
    Applicant: Varjo Technologies Oy
    Inventor: Mikko Ollila
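The key structural idea in the abstract above, inter-branch weight sharing followed by fusion, can be shown with a deliberately tiny stand-in for the network (a linear predictor over scalar inputs; all names and the averaging fusion are illustrative assumptions, not the patent's architecture):

```python
def branch(weights, colour_value, sparse_depth):
    """One branch: a toy linear predictor mapping a colour value plus a
    sparse-depth hint to a depth prediction. Both branches below call
    this with the SAME weights, i.e. inter-branch weight sharing."""
    w_colour, w_depth, bias = weights
    return w_colour * colour_value + w_depth * sparse_depth + bias

def predict_and_fuse(weights, colour_left, colour_right, sparse_depth):
    # Two branches, one shared weight set.
    depth_left = branch(weights, colour_left, sparse_depth)
    depth_right = branch(weights, colour_right, sparse_depth)
    # Fusion: average the per-branch predictions into one estimate.
    return 0.5 * (depth_left + depth_right)
```

Sharing one weight set halves the parameter count and forces both views through the same learned mapping, which is the usual motivation for this pattern in stereo networks.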
  • Publication number: 20250199327
    Abstract: Disclosed is a method including detecting a movement of a user's gaze by processing gaze-tracking data, collected by a gaze-tracking means of a display apparatus; when the movement of the user's gaze is detected, determining a compensatory movement which when implemented by an image stabilization means of the display apparatus, shifts visual content displayed on at least one display of the display apparatus, according to the movement of the user's gaze; and generating a first drive signal for controlling the image stabilization means to implement the compensatory movement, during a frame display time.
    Type: Application
    Filed: December 18, 2023
    Publication date: June 19, 2025
    Applicant: Varjo Technologies Oy
    Inventors: Mikko Strandborg, Mikko Ollila
  • Publication number: 20250200776
    Abstract: Disclosed is an apparatus and a method for depth sensing that includes a scanning light source configured to sequentially illuminate an environment utilizing a scan pattern. The apparatus also includes a birefringent layer adapted to receive and split the reflected scan pattern from the environment into an ordinary-ray (O-ray) image and an extraordinary-ray (E-ray) image. The apparatus further includes an imaging sensor positioned along an optical path of the birefringent layer to capture the O-ray image and the E-ray image as transmitted therefrom. The apparatus further includes a processor configured to compute differences between the positions of the O-ray image and the E-ray image and derive a depth map of the environment based on the computed differences.
    Type: Application
    Filed: December 15, 2023
    Publication date: June 19, 2025
    Applicant: Varjo Technologies Oy
    Inventors: Mikko Ollila, Mikko Strandborg
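A rough sketch of the final step in the abstract above, positions-to-depth. Treating the O-ray/E-ray displacement like a stereo disparity that shrinks with distance is an assumption for illustration, and the single calibration constant is a placeholder for the layer thickness, refractive indices, and sensor geometry that a real system would calibrate:

```python
def depth_from_ray_split(o_ray_x, e_ray_x, calibration_k):
    """Toy depth recovery: assume the O-ray/E-ray image displacement
    falls off inversely with scene distance, so depth ~ k / displacement.
    calibration_k is a hypothetical lumped calibration constant."""
    displacement = abs(o_ray_x - e_ray_x)
    if displacement == 0:
        return float("inf")  # beyond the measurable range
    return calibration_k / displacement

def depth_map(o_positions, e_positions, calibration_k):
    """Per-point depth map from matched O-ray / E-ray x positions."""
    return [depth_from_ray_split(o, e, calibration_k)
            for o, e in zip(o_positions, e_positions)]
```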
  • Publication number: 20250202965
    Abstract: Disclosed is a data processing system having at least one server communicably couplable to at least one display apparatus, wherein the at least one server has at least one processor configured to: obtain quality-of-service information from a network traffic regulator of a communication network between the at least one server and the at least one display apparatus; obtain media processing latency information from at least one rendering application executing at the at least one server; obtain client device operational information from the at least one display apparatus; process the quality-of-service information, the media processing latency information, and the client device operational information together to generate streaming configuration parameters; and control the at least one rendering application to generate images for displaying at the at least one display apparatus and send the images via the communication network to the at least one display apparatus, according to the generated streaming configuration parameters.
    Type: Application
    Filed: December 18, 2023
    Publication date: June 19, 2025
    Applicant: Varjo Technologies Oy
    Inventors: Mikko Strandborg, Tarmo Räntilä, Antti Peuhkurinen, Evgeny Zuev, Juho Saarinen
  • Publication number: 20250193554
    Abstract: An image sensor has photo-sensitive cells arranged on a photo-sensitive surface; and a colour filter array with first type(s) of colour filters, second type(s) of colour filters, third type(s) of colour filters, and fourth type(s) of colour filters. Information indicative of a gaze direction is obtained. A gaze region and a peripheral region in the photo-sensitive surface are determined. Image data from the image sensor is read out, wherein when reading out, processor(s) is/are configured to: selectively skip reading out the image data from those photo-sensitive cells in the gaze region that have the fourth type(s) of colour filters; and selectively read out the image data from those photo-sensitive cells in the peripheral region that have the fourth type(s) of colour filters. The image data is processed to generate an image.
    Type: Application
    Filed: December 8, 2023
    Publication date: June 12, 2025
    Applicant: Varjo Technologies Oy
    Inventor: Mikko Ollila
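The selective-readout rule in the abstract above, skip the fourth-type colour filters inside the gaze region but read them in the periphery, reduces to a per-cell mask. A minimal sketch (cell tuples, the rectangular gaze region, and the filter-type labels are all illustrative assumptions):

```python
def readout_mask(cells, gaze_region, skip_filter_types):
    """Decide per photo-sensitive cell whether to read it out.
    cells: list of (x, y, filter_type) tuples.
    gaze_region: (x0, y0, x1, y1) inclusive rectangle.
    Cells carrying a skip-listed (fourth-type) filter are skipped
    inside the gaze region but read out in the periphery."""
    x0, y0, x1, y1 = gaze_region
    mask = []
    for x, y, filter_type in cells:
        in_gaze = x0 <= x <= x1 and y0 <= y <= y1
        skip = in_gaze and filter_type in skip_filter_types
        mask.append(not skip)
    return mask
```

The payoff is bandwidth: the fourth-type cells (e.g. a low-resolution or special-purpose filter) contribute nothing where the user is actually looking, so they are only read where they are useful.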
  • Publication number: 20250173884
    Abstract: Disclosed is a method for depth image enhancement implemented in at least one apparatus, the method including: reading and processing Phase Detection Autofocus (PDAF) pixels of a region-of-interest (ROI) area of a gaze region in an image obtained from a color camera sensor; utilizing one or more depth camera sensors to provide one or more depth maps of the ROI area of the gaze region; and combining the processed PDAF pixels of the ROI area of the gaze region and the one or more depth maps of the ROI area of the gaze region to obtain an updated ROI area with complementary depth information of the ROI area of the gaze region.
    Type: Application
    Filed: November 23, 2023
    Publication date: May 29, 2025
    Applicant: Varjo Technologies Oy
    Inventors: Mikko Strandborg, Mikko Ollila
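The combining step in the abstract above can be sketched as a simple per-pixel fusion over the ROI. The fusion rule here (average where both sources are valid, fall back to PDAF where the depth camera has a hole, with `None` marking holes) is one reasonable reading, not the patent's specified method:

```python
def fuse_roi_depth(pdaf_depth, sensor_depth):
    """Combine PDAF-derived depth with a depth-camera map over an ROI:
    where the depth camera has a valid sample, average the two sources;
    where it has a hole (None), fall back to the PDAF estimate."""
    fused = []
    for pdaf, sensed in zip(pdaf_depth, sensor_depth):
        fused.append(pdaf if sensed is None else 0.5 * (pdaf + sensed))
    return fused
```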
  • Publication number: 20250175711
    Abstract: Disclosed is an imaging system with an image sensor; pose-tracking means; and processor(s) configured to capture a sequence of images and determine the corresponding poses of the image sensor from which those images are captured, wherein, when capturing a given image, the processor(s) is configured to: perform demosaicing on image data; identify static region(s) in the image(s); obtain new image data and determine a new pose of the image sensor; reproject the image(s) from the corresponding pose of the image sensor to the new pose, wherein the static region(s) in the image(s) are reprojected to generate reprojected static region(s); and generate a new image corresponding to the new image data by performing temporal reconstruction of static region(s) in the new image that correspond to the static region(s) of the image(s), based on the reprojected static region(s) of the reprojected image(s).
    Type: Application
    Filed: November 23, 2023
    Publication date: May 29, 2025
    Applicant: Varjo Technologies Oy
    Inventors: Mikko Strandborg, Mikko Ollila
  • Publication number: 20250165064
    Abstract: Disclosed is a system with server(s) that is communicably coupled to display apparatus(es). The server(s) is/are configured to: receive, from display apparatus(es), information indicative of gaze directions of a user's eye; process the information to detect a beginning of a saccade; predict a target gaze location of the saccade, based on the information; and foveate a video stream according to the target gaze location after the beginning of the saccade and before an end of the saccade.
    Type: Application
    Filed: November 19, 2023
    Publication date: May 22, 2025
    Applicant: Varjo Technologies Oy
    Inventors: Mikko Strandborg, Tarmo Räntilä
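The abstract above hinges on two steps: detecting saccade onset from gaze samples and predicting the landing point before the saccade ends. A toy sketch (the velocity threshold and the fixed extrapolation gain are illustrative placeholders; real predictors fit the saccade's velocity profile):

```python
def detect_saccade_onset(samples, velocity_threshold=100.0):
    """Return the index where gaze angular velocity (deg/s) first
    exceeds the threshold, or None. samples: (time_s, angle_deg)."""
    for i in range(1, len(samples)):
        (t0, a0), (t1, a1) = samples[i - 1], samples[i]
        if abs((a1 - a0) / (t1 - t0)) > velocity_threshold:
            return i
    return None

def predict_landing(samples, onset, gain=2.0):
    """Crude ballistic prediction: extrapolate the displacement seen
    since onset by a fixed gain (a hypothetical placeholder)."""
    _, angle_onset = samples[onset - 1]
    _, angle_now = samples[-1]
    return angle_onset + gain * (angle_now - angle_onset)
```

Predicting the landing point early is what makes saccade-aware foveation useful: the server can re-centre the high-resolution region while the eye is still in flight, hiding the transition behind saccadic suppression.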
  • Publication number: 20250166258
    Abstract: Disclosed is an apparatus including an image sensor to capture an input image with one or more textual features, the one or more textual features being present in an unreadable or distorted form. The apparatus further includes a processor configured to detect the one or more textual features of the input image and execute a neural network to concurrently deduce a plurality of glyphs that form one or more relevant words or abbreviations based on the detected textual features. The processor is further configured to generate an output image in which the one or more textual features are rendered in a legible form, based on the deduced plurality of glyphs.
    Type: Application
    Filed: November 21, 2023
    Publication date: May 22, 2025
    Applicant: Varjo Technologies Oy
    Inventors: Kai Inha, Mikko Ollila, Mikko Strandborg
  • Publication number: 20250157437
    Abstract: Disclosed is a system with server(s) configured to: receive, from a display apparatus, hardware parameters of a display of the display apparatus and optical path information pertaining to an optical path between the display and a user's eye; determine an effective resolution for each point on the display, based at least on the hardware parameters of the display and the optical path information; determine a foveation setting to be employed for image processing; determine image processing setting(s) to be employed for image processing, based on the effective resolution for each point on the display and the foveation setting; process an image according to the image processing setting(s); and send the image to the display apparatus.
    Type: Application
    Filed: November 14, 2023
    Publication date: May 15, 2025
    Applicant: Varjo Technologies Oy
    Inventors: Antti Hirvonen, Mikko Strandborg
  • Patent number: 12294789
    Abstract: An imaging system is disclosed. In a cycle, two or three consecutive pairs of first sub-images and second sub-images are captured using a first image sensor and a second image sensor respectively, while wobulators are controlled to perform, during the cycle, one or two first sub-pixel shifts and one or two second sub-pixel shifts, respectively. A given first sub-pixel shift is performed in a first direction, while a given second sub-pixel shift is performed in a second direction different from the first direction. The first sub-images and the second sub-images of the cycle are processed to generate a first image and a second image, respectively.
    Type: Grant
    Filed: July 24, 2023
    Date of Patent: May 6, 2025
    Assignee: Varjo Technologies Oy
    Inventor: Mikko Ollila
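The wobulation idea in the abstract above, combining sub-images captured half a pixel apart into one higher-resolution image, can be sketched for the one-dimensional case (the interleaving below assumes an exact half-pixel horizontal shift between the two sub-images; real pipelines register and filter rather than naively interleave):

```python
def wobulated_row(sub_a, sub_b):
    """Interleave two sub-image rows captured half a pixel apart (the
    wobulator shift) into one row with doubled horizontal sampling."""
    out = []
    for a, b in zip(sub_a, sub_b):
        out.extend([a, b])
    return out

def wobulated_image(img_a, img_b):
    """Apply the interleave row by row (rows as lists of samples)."""
    return [wobulated_row(row_a, row_b) for row_a, row_b in zip(img_a, img_b)]
```

Each sub-image contributes the samples the other one missed, so two half-resolution captures yield one image at double the horizontal sampling rate.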
  • Patent number: 12293454
    Abstract: A system and method for: receiving colour images, depth images and viewpoint information; dividing the 3D space occupied by a real-world environment into 3D grid(s) of voxels; creating 3D data structure(s) comprising nodes, each node representing a corresponding voxel; dividing a colour image and a depth image into colour tiles and depth tiles, respectively; mapping a colour tile to the voxel(s) whose colour information is captured in that colour tile; storing, in the node representing the voxel(s), viewpoint information indicative of the viewpoint from which the colour and depth images are captured, along with any of: the colour tile that captures the colour information of the voxel(s) and the corresponding depth tile that captures its depth information, or reference information uniquely identifying the colour tile and the corresponding depth tile; and utilising the 3D data structure(s) for training neural network(s), wherein the input of the neural network(s) comprises a 3D position of a point and the output of the neural network(s) comprises the colour and opacity of that point.
    Type: Grant
    Filed: February 17, 2023
    Date of Patent: May 6, 2025
    Assignee: Varjo Technologies Oy
    Inventors: Kimmo Roimela, Mikko Strandborg
  • Publication number: 20250142037
    Abstract: Disclosed is a computer-implemented method including: marching a first ray and a second ray, along a first gaze direction and a second gaze direction estimated using gaze-tracking means, from a given viewpoint into a depth map, to determine a first optical depth and a second optical depth corresponding to a first eye and a second eye, respectively; calculating a gaze convergence distance based on the first and second gaze directions; detecting whether the first optical depth lies within a predefined threshold percent of the second optical depth; and when it is detected that the first optical depth lies within the predefined threshold percent of the second optical depth, selecting a given focus distance as an average of at least two of: the first optical depth, the second optical depth, and the gaze convergence distance; and employing the given focus distance for capturing a given image using at least one variable-focus camera.
    Type: Application
    Filed: October 31, 2023
    Publication date: May 1, 2025
    Applicant: Varjo Technologies Oy
    Inventor: Ville Timonen
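The selection rule in the abstract above is concrete enough to sketch directly. This version averages all three candidate distances when the per-eye depths agree; the fallback to the convergence distance on disagreement is an assumption, since the abstract leaves the mismatch case open:

```python
def select_focus_distance(depth_left, depth_right, convergence,
                          threshold_pct=10.0):
    """If the two per-eye optical depths agree to within threshold_pct,
    focus at the average of both depths and the gaze convergence
    distance; otherwise fall back to the convergence distance alone
    (an assumed fallback, not specified by the abstract)."""
    reference = max(depth_left, depth_right)
    if abs(depth_left - depth_right) <= reference * threshold_pct / 100.0:
        return (depth_left + depth_right + convergence) / 3.0
    return convergence
```

Agreement between the two ray-marched depths suggests both eyes landed on the same surface, so averaging in the convergence distance smooths gaze-tracking noise; disagreement suggests one ray slipped past an edge, making its depth untrustworthy.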
  • Patent number: 12289538
    Abstract: Disclosed is an imaging system (200) including a controllable light source; an image sensor; a metalens to focus light onto the image sensor; and processor(s). The processor(s) is configured to control the light source to illuminate a given part of the field of view of the image sensor at a first instant, while controlling the image sensor to capture a first image whose image segment(s) represent the given part as illuminated and whose remaining image segment(s) represent the remaining part of the field of view as non-illuminated. The processor(s) controls the light source to illuminate the remaining part at a second instant, while controlling the image sensor to capture a second image whose image segment(s) represent the given part as non-illuminated and whose remaining image segment(s) represent the remaining part as illuminated. An output image is generated based on: (i) the image segment(s) of the first image and the remaining image segment(s) of the second image, and/or (ii) the remaining image segment(s) of the first image and the image segment(s) of the second image.
    Type: Grant
    Filed: April 25, 2023
    Date of Patent: April 29, 2025
    Assignee: Varjo Technologies Oy
    Inventor: Mikko Ollila
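The composition step in the abstract above, stitching the output from the illuminated segments of the two alternating captures, reduces to a masked select. A minimal sketch with flat pixel lists (the boolean mask representation is an illustrative assumption):

```python
def compose_output(first_image, second_image, lit_in_first):
    """Build the output from the illuminated half of each capture: take
    a pixel from the first image where the light source lit it there,
    otherwise from the second image (where that pixel was lit instead).
    lit_in_first: per-pixel booleans marking the first capture's
    illuminated segment."""
    return [a if lit else b
            for a, b, lit in zip(first_image, second_image, lit_in_first)]
```

Alternating which half of the field of view is lit lets a lower-power source cover the full scene across two exposures, at the cost of the two halves being sampled one frame apart.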
  • Publication number: 20250124667
    Abstract: Disclosed is a system comprising data repository(ies) and server(s) configured to: access, from data repository(ies), a first three-dimensional (3D) model of a virtual environment and a second 3D model of a real-world environment; combine the first 3D model and the second 3D model to generate a combined 3D model of an extended-reality (XR) environment; perform occlusion culling using the combined 3D model, to identify a set of virtual objects that occlude real object(s) in the XR environment from a perspective of a viewpoint; and send, to display apparatus(es), a virtual-reality (VR) image representing the set of virtual objects and information indicative of portion(s) of a field of view that corresponds to the real object(s) that is being occluded by the set of virtual objects from the perspective of the viewpoint.
    Type: Application
    Filed: October 17, 2023
    Publication date: April 17, 2025
    Applicant: Varjo Technologies Oy
    Inventors: Antti Hirvonen, Mikko Strandborg
  • Publication number: 20250119524
    Abstract: Image data is read out from a left part (LL) of a left field of view (FOV) of a left image sensor (L) and a right part (RR) of a right FOV of a right image sensor (R). The left part of the left FOV extends horizontally towards the right side of a gaze point (X) only up to a first predefined angle (N1) from the gaze point. The right part of the right FOV extends horizontally towards the left side of the gaze point (X) only up to a second predefined angle (N2) from the gaze point. The image data is processed to construct a left part of a left image and a right part of a right image. The right part of the left image and the left part of the right image are reconstructed using image reprojection. The left image and the right image are generated by combining the respective parts.
    Type: Application
    Filed: October 5, 2023
    Publication date: April 10, 2025
    Applicant: Varjo Technologies Oy
    Inventors: Mikko Strandborg, Mikko Ollila
  • Publication number: 20250117076
    Abstract: Disclosed is a system with an image sensor comprising a plurality of photo-sensitive cells; and processor(s) configured to: receive, from the image sensor, a plurality of image signals captured by corresponding photo-sensitive cells of the image sensor; obtain information indicative of a current pupil size of a user of a display apparatus; determine a given luminosity range corresponding to the current pupil size; and selectively perform a sequence of image signal processes on the plurality of image signals and control a plurality of parameters employed for performing the sequence of image signal processes, based on the given luminosity range, to generate an image.
    Type: Application
    Filed: October 6, 2023
    Publication date: April 10, 2025
    Applicant: Varjo Technologies Oy
    Inventors: Mikko Strandborg, Mikko Ollila
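The abstract above maps the user's current pupil size to a luminosity range that then steers the image-signal pipeline. A toy lookup illustrating the idea, where the pupil-diameter thresholds and the scotopic/mesopic/photopic luminance bands are assumptions drawn from general vision science, not values from the patent:

```python
def luminosity_range_for_pupil(pupil_mm):
    """Illustrative mapping: a dilated pupil implies a dim viewing
    environment, so the pipeline should tune its image-signal
    processing for a low luminance band; a constricted pupil implies
    bright viewing. Returns (label, low, high) in approximate cd/m^2.
    All thresholds are hypothetical."""
    if pupil_mm >= 6.0:
        return ("scotopic", 0.001, 3.0)
    if pupil_mm >= 3.5:
        return ("mesopic", 3.0, 100.0)
    return ("photopic", 100.0, 10000.0)
```

Downstream, the selected band would gate which denoising, tone-mapping, and gain parameters the sequence of image signal processes uses.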