Patents Assigned to Varjo Technologies Oy
-
Patent number: 12294789
Abstract: An imaging system is disclosed. In a cycle, two or three consecutive pairs of first sub-images and second sub-images are captured using a first image sensor and a second image sensor, respectively, while wobulators are controlled to perform, during the cycle, one or two first sub-pixel shifts and one or two second sub-pixel shifts, respectively. A given first sub-pixel shift is performed in a first direction, while a given second sub-pixel shift is performed in a second direction different from the first direction. The first sub-images and the second sub-images of the cycle are processed to generate a first image and a second image, respectively.
Type: Grant
Filed: July 24, 2023
Date of Patent: May 6, 2025
Assignee: Varjo Technologies Oy
Inventor: Mikko Ollila
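To illustrate the wobulation idea, here is a minimal Python sketch that fuses sub-images captured at half-pixel offsets onto a denser grid; the fusion rule, half-pixel shift magnitude, and function names are illustrative assumptions, not the patented method:

```python
import numpy as np

def fuse_wobulated_cycle(sub_images, shifts):
    """Fuse sub-images captured at half-pixel offsets onto a 2x-denser grid.

    sub_images: list of H x W arrays from one wobulation cycle.
    shifts: matching list of (dy, dx) offsets in half-pixel units,
            e.g. [(0, 0), (0, 1)] for a single horizontal shift.
    """
    h, w = sub_images[0].shape
    accum = np.zeros((2 * h, 2 * w))
    count = np.zeros((2 * h, 2 * w))
    for img, (dy, dx) in zip(sub_images, shifts):
        accum[dy::2, dx::2] += img   # place each capture at its phase
        count[dy::2, dx::2] += 1
    count[count == 0] = 1            # leave never-sampled phases at zero
    return accum / count
```

Per the abstract, the first sensor's shifts would lie along one direction and the second sensor's along a different one, e.g. shifts of [(0, 0), (0, 1)] for one sensor versus [(0, 0), (1, 0)] for the other.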
-
Patent number: 12293454
Abstract: A system and method for receiving colour images, depth images and viewpoint information; dividing 3D space occupied by a real-world environment into 3D grid(s) of voxels; creating 3D data structure(s) comprising nodes, each node representing a corresponding voxel; dividing a colour image and a depth image into colour tiles and depth tiles, respectively; mapping a colour tile to voxel(s) whose colour information is captured in the colour tile; storing, in the node representing the voxel(s), viewpoint information indicative of the viewpoint from which the colour and depth images are captured, along with any of: the colour tile that captures colour information of the voxel(s) and the corresponding depth tile that captures depth information, or reference information indicative of unique identification of the colour tile and the corresponding depth tile; and utilising the 3D data structure(s) for training neural network(s), wherein an input of the neural network(s) comprises a 3D position of a point and an output of the neural network(s) comprises a colour and opacity of the point.
Type: Grant
Filed: February 17, 2023
Date of Patent: May 6, 2025
Assignee: Varjo Technologies Oy
Inventors: Kimmo Roimela, Mikko Strandborg
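A minimal sketch of the kind of sparse voxel data structure the abstract describes; the node layout, helper names, and sparse-dict representation are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class VoxelNode:
    """One node of the 3D data structure: everything observed for its voxel."""
    observations: list = field(default_factory=list)

def voxel_of(point, origin, voxel_size):
    """Map a 3D point to the (i, j, k) index of the voxel containing it."""
    return tuple(int((p - o) // voxel_size) for p, o in zip(point, origin))

grid = {}  # sparse 3D grid: (i, j, k) -> VoxelNode

def register_tile(voxel_indices, viewpoint, colour_tile_ref, depth_tile_ref):
    """Store viewpoint info plus tile references in every voxel the tile sees."""
    for idx in voxel_indices:
        node = grid.setdefault(idx, VoxelNode())
        node.observations.append((viewpoint, colour_tile_ref, depth_tile_ref))
```

Training then draws samples from the grid: a 3D position indexes into its node, whose stored tiles supply the colour/depth supervision for the network's colour-and-opacity output.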
-
Publication number: 20250142037
Abstract: Disclosed is a computer-implemented method including marching a first ray and a second ray, along a first gaze direction and a second gaze direction that are estimated using gaze-tracking means, from a given viewpoint into a depth map, to determine a first optical depth and a second optical depth corresponding to a first eye and a second eye, respectively; calculating a gaze convergence distance, based on the first gaze direction and the second gaze direction; detecting whether the first optical depth lies within a predefined threshold percent from the second optical depth; and when it is detected that the first optical depth lies within the predefined threshold percent from the second optical depth, selecting a given focus distance as an average of at least two of: the first optical depth, the second optical depth, the gaze convergence distance; and employing the given focus distance for capturing a given image using at least one variable-focus camera.
Type: Application
Filed: October 31, 2023
Publication date: May 1, 2025
Applicant: Varjo Technologies Oy
Inventor: Ville Timonen
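The focus-selection logic lends itself to a short sketch; the 10% threshold and the fallback behaviour when the depths disagree are assumptions (the abstract specifies only the agreeing case):

```python
def select_focus_distance(d_left, d_right, convergence, threshold_pct=10.0):
    """Choose a focus distance for the variable-focus camera.

    d_left / d_right: optical depths hit by rays marched along each eye's
    gaze direction; convergence: gaze convergence distance.
    """
    if abs(d_left - d_right) <= (threshold_pct / 100.0) * max(d_left, d_right):
        # Depths agree: average the three estimates (one allowed variant;
        # the abstract permits averaging any two or more of them).
        return (d_left + d_right + convergence) / 3.0
    # Depths disagree (e.g. one ray grazes a silhouette edge); falling
    # back to the convergence distance is an assumption.
    return convergence
```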
-
Patent number: 12289538
Abstract: Disclosed is an imaging system (200) including a controllable light source (CLS); an image sensor (IS); a metalens to focus light onto the IS; and processor(s). The processor(s) is configured to control the CLS to illuminate a given part of a field of view (FOV) of the IS at a first instant, while controlling the IS to capture a first image (FImg), whose image segment(s) represent the given part as illuminated and remaining image segment(s) represent a remaining part of the FOV as non-illuminated. The processor controls the CLS to illuminate the remaining part at a second instant, while controlling the IS to capture a second image (SImg), whose image segment(s) represent the given part as non-illuminated and remaining image segment(s) represent the remaining part as illuminated. An output image is generated based on: (i) the image segment(s) of FImg and the remaining image segment(s) of SImg, and/or (ii) the remaining image segment(s) of FImg and the image segment(s) of SImg.
Type: Grant
Filed: April 25, 2023
Date of Patent: April 29, 2025
Assignee: Varjo Technologies Oy
Inventor: Mikko Ollila
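The composition step of variant (i) can be sketched in a couple of lines, assuming single-channel images and a boolean illumination mask:

```python
import numpy as np

def compose_output(f_img, s_img, lit_in_first):
    """Variant (i): take each part of the scene from the capture in which
    it was illuminated.

    lit_in_first: boolean mask (same H x W shape as the single-channel
    images), True where the FOV was lit at the first instant -- and
    therefore unlit at the second.
    """
    return np.where(lit_in_first, f_img, s_img)
```

Variant (ii) is the complement: pass `~lit_in_first` to select the non-illuminated segments instead.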
-
Publication number: 20250124667
Abstract: Disclosed is a system comprising data repository(ies) and server(s) configured to: access, from data repository(ies), a first three-dimensional (3D) model of a virtual environment and a second 3D model of a real-world environment; combine the first 3D model and the second 3D model to generate a combined 3D model of an extended-reality (XR) environment; perform occlusion culling using the combined 3D model, to identify a set of virtual objects that occlude real object(s) in the XR environment from a perspective of a viewpoint; and send, to display apparatus(es), a virtual-reality (VR) image representing the set of virtual objects and information indicative of portion(s) of a field of view that corresponds to the real object(s) that is being occluded by the set of virtual objects from the perspective of the viewpoint.
Type: Application
Filed: October 17, 2023
Publication date: April 17, 2025
Applicant: Varjo Technologies Oy
Inventors: Antti Hirvonen, Mikko Strandborg
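A hypothetical sketch of the occlusion test at the heart of the culling step, assuming helper callables (not from the abstract) for per-object depth and screen-space overlap:

```python
def find_occluding_virtual_objects(virtual_objs, real_objs, depth_of, overlaps):
    """Collect virtual objects that hide some real object from the viewpoint.

    depth_of(obj): optical depth of obj from the viewpoint (assumed helper).
    overlaps(a, b): True if the two objects overlap on screen (assumed helper).
    """
    occluders = []
    for v in virtual_objs:
        # A virtual object occludes a real one when they overlap on screen
        # and the virtual object is closer to the viewpoint.
        if any(overlaps(v, r) and depth_of(v) < depth_of(r) for r in real_objs):
            occluders.append(v)
    return occluders
```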
-
Publication number: 20250119523
Abstract: A left part (LL) of a left image and a right part (RR) of a right image are rendered, wherein a horizontal FOV of the left part of the left image extends towards a right side of a gaze point (X) of the left image till only a first predefined angle (N1) from the gaze point of the left image, while a horizontal FOV of the right part of the right image extends towards a left side of a gaze point (X) of the right image till only a second predefined angle (N2) from the gaze point of the right image. A right part (RL) of the left image is reconstructed by reprojecting a corresponding part of the right image, while a left part (LR) of the right image is reconstructed by reprojecting a corresponding part of the left image. The left image and the right image are generated by combining respective rendered parts and respective reconstructed parts.
Type: Application
Filed: October 5, 2023
Publication date: April 10, 2025
Applicant: Varjo Technologies Oy
Inventors: Mikko Strandborg, Ville Miettinen
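A rough sketch of the split geometry, assuming a pixels-per-degree conversion; the actual rendering and reprojection steps are elided:

```python
import numpy as np

def split_column(gaze_x_px, angle_deg, px_per_deg):
    """Column where rendering stops: the gaze point plus the predefined
    angle (N1 for the left image, N2 for the right) in pixels."""
    return int(round(gaze_x_px + angle_deg * px_per_deg))

def assemble_left_image(rendered_LL, reprojected_RL):
    """Final left image = rendered left part + part reprojected from the
    right image, concatenated along the horizontal axis (equal heights)."""
    return np.concatenate([rendered_LL, reprojected_RL], axis=1)
```

The right image mirrors this: its left part (LR) is reprojected from the left image and prepended to the rendered right part (RR).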
-
Publication number: 20250119524
Abstract: Image data is read out from a left part (LL) of a left field of view (FOV) of a left image sensor (L) and a right part (RR) of a right FOV of a right image sensor (R). The left part of the left FOV extends horizontally towards a right side of a gaze point (X) till only a first predefined angle (N1) from the gaze point. The right part of the right FOV extends horizontally towards a left side of a gaze point (X) till only a second predefined angle (N2) from the gaze point. The image data is processed to construct a left part of a left image and a right part of a right image. A right part of the left image and a left part of the right image are reconstructed using image reprojection. The left image and the right image are generated by combining respective parts.
Type: Application
Filed: October 5, 2023
Publication date: April 10, 2025
Applicant: Varjo Technologies Oy
Inventors: Mikko Strandborg, Mikko Ollila
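The sensor readout window can be sketched similarly, again assuming a pixels-per-degree conversion:

```python
def readout_columns(sensor_width, gaze_x, angle_deg, px_per_deg, sensor="left"):
    """Pixel-column window [start, end) to read out from a sensor.

    The left sensor is read from column 0 up to the gaze point plus N1;
    the right sensor from the gaze point minus N2 up to the last column.
    """
    offset = int(round(angle_deg * px_per_deg))
    if sensor == "left":
        return (0, min(gaze_x + offset, sensor_width))
    return (max(gaze_x - offset, 0), sensor_width)
```

Reading out only these windows cuts sensor bandwidth roughly in half per eye; the discarded halves are reconstructed by reprojecting from the other sensor's image.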
-
Publication number: 20250117076
Abstract: Disclosed is a system with an image sensor comprising a plurality of photo-sensitive cells; and processor(s) configured to: receive, from the image sensor, a plurality of image signals captured by corresponding photo-sensitive cells of the image sensor; obtain information indicative of a current pupil size of a user of a display apparatus; determine a given luminosity range corresponding to the current pupil size; and selectively perform a sequence of image signal processes on the plurality of image signals and control a plurality of parameters employed for performing the sequence of image signal processes, based on the given luminosity range, to generate an image.
Type: Application
Filed: October 6, 2023
Publication date: April 10, 2025
Applicant: Varjo Technologies Oy
Inventors: Mikko Strandborg, Mikko Ollila
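As an illustration only, a pupil-to-luminosity-range mapping might look like the following; every numeric bound here is an invented placeholder, since the abstract does not specify the mapping:

```python
def luminosity_range(pupil_mm, min_mm=2.0, max_mm=8.0,
                     bright=(50.0, 10_000.0), dark=(0.01, 100.0)):
    """Interpolate a luminosity range (nits) from the current pupil size.

    A small pupil implies a bright-adapted eye, so the range shifts up;
    a dilated pupil shifts it down. All bounds are placeholders.
    """
    t = (pupil_mm - min_mm) / (max_mm - min_mm)   # 0 = constricted, 1 = dilated
    t = min(max(t, 0.0), 1.0)
    lo = bright[0] + t * (dark[0] - bright[0])
    hi = bright[1] + t * (dark[1] - bright[1])
    return lo, hi
```

The returned range would then steer which image signal processes run and with what parameters (e.g. gain and denoising strength at the dark end).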
-
Patent number: 12273504
Abstract: A left part (LL) of a left image and a right part (RR) of a right image are rendered, wherein a horizontal FOV of the left part of the left image extends towards a right side of a gaze point (X) of the left image till only a first predefined angle (N1) from the gaze point of the left image, while a horizontal FOV of the right part of the right image extends towards a left side of a gaze point (X) of the right image till only a second predefined angle (N2) from the gaze point of the right image. A right part (RL) of the left image is reconstructed by reprojecting a corresponding part of the right image, while a left part (LR) of the right image is reconstructed by reprojecting a corresponding part of the left image. The left image and the right image are generated by combining respective rendered parts and respective reconstructed parts.
Type: Grant
Filed: October 5, 2023
Date of Patent: April 8, 2025
Assignee: Varjo Technologies Oy
Inventors: Mikko Strandborg, Ville Miettinen
-
Publication number: 20250104619
Abstract: Disclosed is a display apparatus comprising eye-tracking means, display(s) per eye, and processor(s). The processor(s) is/are configured to: process eye-tracking data, collected by the eye-tracking means, to detect a current pupil size; determine a given luminosity range corresponding to the current pupil size; employ at least one of: a tone-mapping technique, an exposure-adjustment technique, to map luminosity values of pixels in a high dynamic range (HDR) image to luminosity values of corresponding pixels in an output image, wherein the luminosity values of the pixels in the output image lie in the given luminosity range; and display the output image via the display(s).
Type: Application
Filed: September 21, 2023
Publication date: March 27, 2025
Applicant: Varjo Technologies Oy
Inventor: Mikko Strandborg
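A minimal sketch of mapping HDR luminosities into such a range; Reinhard-style global compression stands in for the unspecified tone-mapping technique:

```python
import numpy as np

def tone_map_to_range(hdr_luma, lo, hi):
    """Map HDR luminosity values into the pupil-derived range [lo, hi].

    hdr_luma: array of non-negative scene luminosities. The Reinhard
    global operator used here is just one example choice.
    """
    compressed = hdr_luma / (1.0 + hdr_luma)   # [0, inf) -> [0, 1)
    return lo + compressed * (hi - lo)         # rescale into target range
```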
-
Patent number: 12260062
Abstract: Disclosed is a display apparatus and a method for digital manipulation of a virtual user interface in virtual environments. The display apparatus includes a light source, tracking means, and a processor configured to control the light source to project the virtual user interface, and process tracking data from the tracking means to detect the proximity of an interaction element, such as a user's hand, to an invisible segment of a virtual widget within the virtual user interface. Upon proximity detection, the light source is activated to display the virtual widget. The processor then determines if the segment has been activated and subsequently processes any positional changes of the interaction element to adjust the virtual user interface accordingly. This adjustment is performed in accordance with predefined visual effects that correspond to the activated segment, enabling intuitive and dynamic user interaction.
Type: Grant
Filed: November 22, 2023
Date of Patent: March 25, 2025
Assignee: Varjo Technologies Oy
Inventors: Ábel Csendes, Harri Wikberg
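A toy sketch of the proximity-reveal-adjust loop; the widget representation, radius value, and drag-style adjustment are all assumptions:

```python
import math

def update_widget(widget, hand_pos, activation_radius=0.10):
    """Reveal the widget when the hand nears its invisible segment, then
    track positional changes once the segment is activated.

    widget: dict with 'segment_pos', 'visible', 'active', 'offset' keys
    (an assumed representation); positions are 3D tuples in metres.
    """
    dist = math.dist(hand_pos, widget["segment_pos"])
    widget["visible"] = dist < activation_radius
    if widget["visible"] and widget.get("active"):
        # Apply the hand's positional change to the widget (drag-style
        # adjustment; the mapping to visual effects is widget-specific).
        widget["offset"] = tuple(h - s for h, s in
                                 zip(hand_pos, widget["segment_pos"]))
    return widget
```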
-
Publication number: 20250095113
Abstract: When a first camera has a first value of an illumination parameter in first region(s) of a first field of view (FOV) of the first camera, while a second camera has, in corresponding region(s) of a second FOV of the second camera, a second value of the illumination parameter that is greater than the first value, a denoising technique is applied on first image segment(s) of a first image, captured by the first camera, that represent the first region(s), based on corresponding image segment(s) of a second image, captured by the second camera, that represent the corresponding region(s). The illumination parameter is any one of: (i) a ratio of a per-pixel area to pixels per degree (PPD), (ii) a ratio of a multiplication product of the per-pixel area and a relative illumination to the PPD.
Type: Application
Filed: September 15, 2023
Publication date: March 20, 2025
Applicant: Varjo Technologies Oy
Inventor: Mikko Ollila
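The two variants of the illumination parameter translate directly:

```python
def illumination_parameter(per_pixel_area, ppd, relative_illumination=None):
    """The abstract's illumination parameter, in its two variants:
    (i)  per-pixel area / PPD, or
    (ii) per-pixel area * relative illumination / PPD.
    """
    if relative_illumination is None:
        return per_pixel_area / ppd
    return per_pixel_area * relative_illumination / ppd
```

Regions where the first camera's parameter value falls below the second camera's are the ones whose image segments receive denoising guided by the better-illuminated second image.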
-
Publication number: 20250097395
Abstract: Disclosed is a system with server(s) configured to: obtain a plurality of input images captured by a plurality of input cameras comprising at least a first input camera per eye and a second input camera per eye, wherein a first field of view of the first input camera is narrower than a second field of view of the second input camera and is overlapping with an overlapping portion of the second field of view, the plurality of input images comprising at least a first input image and a second input image captured by the first input camera and the second input camera, respectively; and process the plurality of input images, by using neural style transfer network(s), to generate a single output image per eye.
Type: Application
Filed: September 15, 2023
Publication date: March 20, 2025
Applicant: Varjo Technologies Oy
Inventors: Mikko Ollila, Mikko Strandborg
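A sketch of the per-eye fusion, assuming a callable style-transfer network and a known overlap rectangle (both assumptions; the abstract does not specify how the network is applied):

```python
import numpy as np

def fuse_per_eye(wide_img, narrow_img, overlap, style_net):
    """Produce one output image per eye from the two input cameras.

    overlap: (y0, y1, x0, x1) of the narrow (first) camera's FOV inside
    the wide (second) image; style_net is an assumed callable that
    re-styles the narrow capture to match the wide capture's appearance,
    so the seam between the two cameras is not visible.
    """
    y0, y1, x0, x1 = overlap
    out = wide_img.copy()
    out[y0:y1, x0:x1] = style_net(narrow_img, wide_img[y0:y1, x0:x1])
    return out
```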
-
Patent number: 12254131
Abstract: Disclosed is an imaging system of a display apparatus with gaze-tracking means and processor(s). The processor(s) is/are configured to: process gaze-tracking data, collected by the gaze-tracking means, to detect gaze directions of a user's eyes; determine a gaze convergence distance, based on a convergence of the gaze directions of the user's eyes; identify a region of interest in a given image frame, based on a gaze direction of a given eye of the user from a perspective of which the given image frame is rendered; and generate a reprojected image frame, by reprojecting the region of interest using six degrees-of-freedom reprojection, whilst considering the gaze convergence distance as an optical depth of pixels of the region of interest.
Type: Grant
Filed: August 29, 2023
Date of Patent: March 18, 2025
Assignee: Varjo Technologies Oy
Inventors: Mikko Strandborg, Urho Konttori
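A sketch of the reprojection, treating every region-of-interest pixel as lying at the gaze convergence distance; this is a standard pinhole unproject-transform-reproject, with numpy conventions assumed:

```python
import numpy as np

def reproject_roi(uv, K, pose_delta, convergence_depth):
    """Six-DoF reprojection of region-of-interest pixels, all treated as
    lying at the gaze convergence distance.

    uv: (N, 2) pixel coordinates; K: 3x3 camera intrinsics;
    pose_delta: 4x4 source-to-target camera transform.
    """
    n = len(uv)
    rays = (np.linalg.inv(K) @ np.column_stack([uv, np.ones(n)]).T).T
    pts = np.column_stack([rays * convergence_depth, np.ones(n)])
    pts = pts @ pose_delta.T                 # move into the target pose
    proj = (K @ pts[:, :3].T).T
    return proj[:, :2] / proj[:, 2:3]        # perspective divide
```

Using the convergence distance as a flat proxy depth avoids needing a per-pixel depth map while keeping the gazed-at region geometrically stable.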
-
Patent number: 12256161
Abstract: First image data and second image data are captured by a first image sensor and second image sensor(s), using at least two different settings. The first image data includes first subsampled image data of at least a first part of a first field of view of the first image sensor, the first part having at least a part of an overlapping field of view with the second image sensor(s). The first image data and the second image data are processed together, using an HDR imaging technique, to generate a first HDR image and a second HDR image. During processing, interpolation and demosaicking are performed on the first subsampled image data, by employing the second image data and using the HDR imaging technique. Demosaicking is performed on the second image data, by employing the first image data and using the HDR imaging technique.
Type: Grant
Filed: July 24, 2023
Date of Patent: March 18, 2025
Assignee: Varjo Technologies Oy
Inventor: Mikko Ollila
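The merge of the two differently-exposed captures can be sketched as a basic two-exposure HDR combination; the saturation threshold and scaling are assumptions, and the joint interpolation/demosaicking of the subsampled data is omitted:

```python
import numpy as np

def merge_two_settings(short_exp, long_exp, exposure_ratio, sat=0.95):
    """Merge two captures taken with different settings into one HDR frame.

    exposure_ratio: long exposure time / short exposure time. Where the
    long capture saturates, fall back to the (already shorter) capture;
    elsewhere, prefer the long capture scaled to the common scale.
    """
    long_scaled = long_exp / exposure_ratio    # bring to a common scale
    return np.where(long_exp >= sat, short_exp, long_scaled)
```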
-
Patent number: 12248624
Abstract: A display apparatus includes eye-tracking means; display device(s); and processor(s). The processor(s) is configured to process eye-tracking data collected by the eye-tracking means, to detect a beginning time (T1) of a saccade (S) of the user's eyes; control the display device(s) to display a convoluted image for a first time period (P1) during the saccade (S); predict a saccade end time (T2), wherein the first time period (P1) ends before the predicted saccade end time (T2); control the display device(s) to display the convoluted image for a second time period (P2) starting before the predicted saccade end time (T2) to extend the saccade until an extended saccade end time (T3); and control the display device(s) to display an output image (402) after the second time period (P2), wherein the output image is an unconvoluted image.
Type: Grant
Filed: November 9, 2023
Date of Patent: March 11, 2025
Assignee: Varjo Technologies Oy
Inventors: Mikko Strandborg, Tarmo Räntilä
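The timing logic can be sketched as a small scheduler; exactly how far before T2 the second period starts is an assumption, since the abstract only requires that P2 begin before T2:

```python
def saccade_schedule(t1, t2_pred, p1, p2):
    """Timeline for showing the convoluted image around a saccade (S).

    t1: detected saccade start (T1); t2_pred: predicted saccade end (T2);
    p1, p2: durations of the two display periods (P1, P2), in seconds.
    """
    assert t1 + p1 < t2_pred, "P1 must end before the predicted saccade end"
    p2_start = t2_pred - 0.5 * p2       # assumption: start P2 half a period early
    t3 = p2_start + p2                  # extended saccade end (T3)
    return {"P1": (t1, t1 + p1), "P2": (p2_start, t3), "show_output_at": t3}
```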
-
Patent number: 12249050
Abstract: A computer-implemented method includes obtaining a 3D model of a real-world environment; receiving an image of the real-world environment captured using a camera, along with pose information indicative of a camera pose from which the image is captured; utilising the 3D model of the real-world environment to generate a reconstructed depth map from a perspective of the camera pose; and applying extended depth-of-field correction to image segment(s) of the image that is/are out of focus, by using a point spread function determined for the camera, based on optical depths in segment(s) of the reconstructed depth map corresponding to the image segment(s) of the image.
Type: Grant
Filed: November 21, 2022
Date of Patent: March 11, 2025
Assignee: Varjo Technologies Oy
Inventors: Mikko Strandborg, Mikko Ollila
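A sketch of the correction step using Wiener deconvolution, one standard deblurring choice; the abstract fixes only that a depth-dependent point spread function is used, not the deconvolution method:

```python
import numpy as np

def edof_correct(segment, psf, snr_inv=0.01):
    """Extended depth-of-field correction of one out-of-focus segment by
    Wiener deconvolution with the camera's depth-dependent PSF.

    segment: 2D image segment; psf: PSF kernel selected for the optical
    depth of the matching depth-map segment; snr_inv: regularisation.
    """
    H = np.fft.fft2(psf, s=segment.shape)    # PSF in the frequency domain
    G = np.fft.fft2(segment)
    W = np.conj(H) / (np.abs(H) ** 2 + snr_inv)
    return np.real(np.fft.ifft2(W * G))
```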
-
Publication number: 20250076974
Abstract: Disclosed is an imaging system of a display apparatus with gaze-tracking means and processor(s). The processor(s) is/are configured to: process gaze-tracking data, collected by the gaze-tracking means, to detect gaze directions of a user's eyes; determine a gaze convergence distance, based on a convergence of the gaze directions of the user's eyes; identify a region of interest in a given image frame, based on a gaze direction of a given eye of the user from a perspective of which the given image frame is rendered; and generate a reprojected image frame, by reprojecting the region of interest using six degrees-of-freedom reprojection, whilst considering the gaze convergence distance as an optical depth of pixels of the region of interest.
Type: Application
Filed: August 29, 2023
Publication date: March 6, 2025
Applicant: Varjo Technologies Oy
Inventors: Mikko Strandborg, Urho Konttori
-
Publication number: 20250061593
Abstract: A display apparatus has a display area of an output image divided into a plurality of regions. For a given region of the output image, a corresponding first region in a video-see-through (VST) image and a corresponding second region in at least one virtual-reality (VR) image are determined. From a depth map corresponding to the VST image, a minimum optical depth and a maximum optical depth in the first region of the VST image are determined. From a depth map corresponding to the at least one VR image, a minimum optical depth and a maximum optical depth in the second region of the at least one VR image are determined. When the minimum optical depth in the first region of the VST image is greater than the maximum optical depth in the second region of the at least one VR image, pixel data for pixels in the given region of the output image is fetched from the second region of the at least one VR image.
Type: Application
Filed: August 14, 2023
Publication date: February 20, 2025
Applicant: Varjo Technologies Oy
Inventors: Mikko Strandborg, Antti Hirvonen
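The per-region test reduces to a single comparison (function name and array representation assumed):

```python
import numpy as np

def region_source_is_vr(vst_depths, vr_depths):
    """Per-region fast path: if the nearest real-world pixel is farther
    away than the farthest virtual pixel, virtual content occludes the
    whole region, so its pixel data can be fetched from the VR image
    directly -- no per-pixel depth test needed.

    vst_depths / vr_depths: depth-map values for the matching regions.
    """
    return vst_depths.min() > vr_depths.max()
```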
-
Publication number: 20250063152
Abstract: A real-world depth map is obtained corresponding to a viewpoint from a perspective of which a virtual-reality (VR) depth map has been generated. For a given pixel in the VR depth map, an optical depth (D) of a corresponding pixel in the real-world depth map is found. A lower bound (D − D1) for the given pixel is determined by subtracting a first predefined value (D1) from the optical depth (D) of the corresponding pixel in the real-world depth map. An upper bound (D + D2) for the given pixel is determined by adding a second predefined value (D2) to the optical depth (D) of the corresponding pixel in the real-world depth map. An optical depth of the given pixel fetched from the VR depth map is re-mapped, from a scale of the lower bound to the upper bound determined for the given pixel, to another scale of A to B, wherein A and B are scalars. Re-mapped optical depths of pixels of the VR depth map are then encoded into an encoded depth map. This encoded depth map is sent to at least one display apparatus.
Type: Application
Filed: August 14, 2023
Publication date: February 20, 2025
Applicant: Varjo Technologies Oy
Inventors: Mikko Strandborg, Antti Hirvonen
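The re-mapping is a linear rescale per pixel; clamping out-of-range depths into the bounds is an assumption:

```python
def remap_vr_depth(vr_depth, rw_depth, d1, d2, a=0.0, b=1.0):
    """Re-map one VR pixel depth to the scale [a, b], using per-pixel
    bounds D - d1 and D + d2 around the real-world depth D.

    Concentrating encoding precision near the real-world surface is the
    point: only depths close to D matter for compositing.
    """
    lo, hi = rw_depth - d1, rw_depth + d2
    t = (vr_depth - lo) / (hi - lo)
    return a + min(max(t, 0.0), 1.0) * (b - a)   # clamp is an assumption
```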