Patents Assigned to Varjo Technologies Oy
-
Patent number: 12197638
Abstract: Disclosed is an extended-reality (XR) device with pose-tracking means; light source(s) for displaying XR images to a user; and a processor configured to: send, to a server, a network address of the XR device; send, to the server, a request to provide a network address of a computing device; receive, from the server, the network address of the computing device; establish a direct communication link between the XR device and the computing device, using the network address of the computing device; send, to the computing device, information indicative of a pose of the XR device or of a head of the user, via the direct communication link; receive, from the computing device, XR image(s) generated according to the pose, via the direct communication link; and display the XR image(s) via the light source(s).
Type: Grant
Filed: August 29, 2023
Date of Patent: January 14, 2025
Assignee: Varjo Technologies Oy
Inventor: Ari Antti Erik Peuhkurinen
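As an illustrative aside, the rendezvous flow in this abstract could be sketched in Python roughly as below. The JSON message names and framing are hypothetical (the patent does not specify a wire format); only the ordering of steps follows the abstract.

    import json
    import socket

    def rendezvous_and_fetch_frame(server_addr, own_addr, pose_source):
        # Register the XR device's network address with the server and
        # request the network address of the computing device.
        with socket.create_connection(server_addr) as srv:
            srv.sendall(json.dumps({"register": own_addr}).encode())
            srv.sendall(json.dumps({"request": "computing_device_address"}).encode())
            peer_addr = tuple(json.loads(srv.recv(4096))["address"])

        # Establish a direct communication link to the computing device.
        with socket.create_connection(peer_addr) as peer:
            # Send pose information; receive an XR image rendered for that pose.
            peer.sendall(json.dumps({"pose": pose_source()}).encode())
            return peer.recv(1 << 20)   # encoded XR frame, shown via the light source(s)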
-
Patent number: 12190444
Abstract: Disclosed is a method and system for: obtaining 3D data structure comprising nodes, each node representing voxel of 3D grid of voxels, wherein node stores viewpoint information, with any of: (i) colour tile that captures colour information of voxel and depth tile, (ii) reference information indicative of unique identification of colour and depth tiles; utilising 3D data structure for training neural network(s), wherein input of neural network(s) comprises 3D position of point in real-world environment and output of neural network(s) comprises colour and opacity of point; and for new viewpoint, determining visible nodes whose voxels are visible from new viewpoint; for visible node, selecting depth tile(s) whose viewpoint(s) matches new viewpoint most closely; reconstructing 2D geometry of objects from depth tiles; and utilising neural network(s) to render colours for pixels of output colour image.
Type: Grant
Filed: February 17, 2023
Date of Patent: January 7, 2025
Assignee: Varjo Technologies Oy
Inventors: Mikko Strandborg, Kimmo Roimela
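One step of the rendering path above, selecting for a visible node the stored depth tile whose capture viewpoint best matches the new viewpoint, could look like this minimal numpy sketch; the (view_dir, tile) node layout is an assumption, not the patent's data format.

    import numpy as np

    def best_depth_tile(node_tiles, new_view_dir):
        # node_tiles: list of (capture view direction, depth tile) pairs in a node.
        # Score each tile by the cosine between its capture direction and the
        # new viewpoint's direction; return the closest match.
        new_view_dir = new_view_dir / np.linalg.norm(new_view_dir)
        scores = [np.dot(v / np.linalg.norm(v), new_view_dir) for v, _ in node_tiles]
        return node_tiles[int(np.argmax(scores))][1]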
-
Patent number: 12183021
Abstract: An imaging system including processor(s) and a data repository. The processor(s) are configured to: receive images of a region of a real-world environment that are captured by cameras using at least one of: different exposure times, different sensitivities, different apertures; receive depth maps of the region that are generated by depth-mapping means; identify different portions of each image that represent objects located at different optical depths; create a set of depth planes corresponding to each image; warp the depth planes of each set to match a perspective of a new viewpoint from which an output image is to be generated; fuse the sets of warped depth planes corresponding to two or more images to form an output set of warped depth planes; and generate the output image from the output set of warped depth planes.
Type: Grant
Filed: August 12, 2022
Date of Patent: December 31, 2024
Assignee: Varjo Technologies Oy
Inventor: Mikko Ollila
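A toy version of the warp-and-fuse idea, restricted to a single H x W x 3 image, purely horizontal parallax and wrap-around shifts, might read as follows (plane edges and baseline are illustrative parameters, not the patent's method):

    import numpy as np

    def warp_and_fuse(image, depth, plane_edges, baseline_px):
        # Slice the image into depth planes, shift each plane by a parallax
        # proportional to inverse depth, and composite far planes first so
        # that nearer planes overwrite them.
        out = np.zeros_like(image)
        planes = list(zip(plane_edges[:-1], plane_edges[1:]))
        for near, far in sorted(planes, key=lambda e: -e[0]):    # farthest first
            mask = (depth >= near) & (depth < far)
            shift = int(round(baseline_px / near))   # parallax grows as depth shrinks
            plane = np.where(mask[..., None], image, 0)
            shifted_mask = np.roll(mask, shift, axis=1)
            out[shifted_mask] = np.roll(plane, shift, axis=1)[shifted_mask]
        return out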
-
Patent number: 12159349
Abstract: A method including: receiving colour images, depth images, and viewpoint information; dividing 3D space occupied by real-world environment into 3D grid(s) of voxels; creating 3D data structure(s) comprising nodes, each node representing corresponding voxel; dividing colour image and depth image into colour tiles and depth tiles, respectively; mapping colour tile to voxel(s) whose colour information is captured in colour tile, based on depth information captured in corresponding depth tile and viewpoint from which colour image and depth image are captured; and storing, in node representing voxel(s), reference information indicative of unique identification of colour tile that captures colour information of voxel(s) and corresponding depth tile that captures depth information, along with viewpoint information indicative of viewpoint from which colour image and depth image are captured.
Type: Grant
Filed: October 24, 2022
Date of Patent: December 3, 2024
Assignee: Varjo Technologies Oy
Inventors: Mikko Strandborg, Kimmo Roimela, Pekka Väänänen
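The tile-to-voxel mapping step could be sketched as below; the intrinsics and pose layout and the node dictionary are assumptions for illustration, and a real implementation would vectorise the loops:

    import numpy as np

    def map_tile_to_voxels(depth_tile, tile_origin, intrinsics, cam_pose,
                           voxel_size, nodes, tile_id):
        # Unproject each depth sample to a world-space point, find the voxel
        # it falls in, and store the tile's id (reference information) there.
        fx, fy, cx, cy = intrinsics
        R, t = cam_pose                      # world-from-camera rotation, translation
        v0, u0 = tile_origin                 # tile's top-left pixel in the full image
        for dv in range(depth_tile.shape[0]):
            for du in range(depth_tile.shape[1]):
                z = depth_tile[dv, du]
                if z <= 0:
                    continue                 # no valid depth sample here
                p_cam = np.array([(u0 + du - cx) * z / fx,
                                  (v0 + dv - cy) * z / fy, z])
                key = tuple(((R @ p_cam + t) // voxel_size).astype(int))
                nodes.setdefault(key, set()).add(tile_id)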
-
Publication number: 20240386655
Abstract: Disclosed is a system with server(s) communicably coupled to client device(s). The server(s) is configured to: obtain a 3D model of a real-world environment; receive, from the client device(s), viewpoint information indicative of a viewpoint from a perspective of which a mixed-reality (MR) image is to be generated; for virtual object(s) to be embedded in the MR image, determine portion(s) of the virtual object(s) being occluded by real object(s) present in the real-world environment, based on optical depths determined from the 3D model corresponding to the viewpoint, a position at which the virtual object(s) is to be embedded with respect to the viewpoint, and at least one of: size of the virtual object(s), shape of the virtual object(s), orientation of the virtual object(s) with respect to the viewpoint; and send a remaining portion of the virtual object(s) that is not being occluded to the client device(s).
Type: Application
Filed: May 18, 2023
Publication date: November 21, 2024
Applicant: Varjo Technologies Oy
Inventors: Ari Antti Peuhkurinen, Mikko Strandborg
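The per-pixel occlusion test at the heart of this can be as simple as the sketch below, where both depth buffers are rendered from the client's viewpoint; the inf-for-no-coverage convention is an assumption:

    import numpy as np

    def unoccluded_mask(virtual_depth, env_depth):
        # A virtual-object pixel is kept only where it lies nearer to the
        # viewpoint than the real geometry from the 3D model; the server
        # would then stream only this remaining, unoccluded portion.
        # virtual_depth uses np.inf where the virtual object has no coverage.
        return virtual_depth < env_depth

    # e.g. visible_rgba = np.where(unoccluded_mask(vd, ed)[..., None], obj_rgba, 0)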
-
Publication number: 20240378848
Abstract: An imaging system includes a controllable light source; an image sensor; a metalens for focusing incoming light onto the image sensor; and processor(s). The processor(s) is configured to: control the light source using a first illumination intensity and/or a first illumination wavelength, while controlling the image sensor to capture a first image; control the light source using a second illumination intensity and/or a second illumination wavelength, while controlling the image sensor to capture a second image; calculate measured differences between pixel values of pixels in the first image and pixel values of corresponding pixels in the second image; estimate expected pixel value differences based on a difference between the first and second illumination intensities and/or a difference between the first and second illumination wavelengths; and correct pixel values of pixels in the first image and/or the second image based on deviation in the measured differences from the expected differences.
Type: Application
Filed: May 11, 2023
Publication date: November 14, 2024
Applicant: Varjo Technologies Oy
Inventors: Mikko Ollila, Mikko Strandborg
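Assuming an approximately linear sensor response, the expected difference between the two captures scales with the intensity ratio; a minimal sketch of the deviation-based correction follows (the 50/50 fold-back policy is an illustrative choice, not the patent's):

    import numpy as np

    def correct_by_expected_difference(img1, img2, intensity1, intensity2):
        img1 = img1.astype(np.float64)
        img2 = img2.astype(np.float64)
        expected = img1 * (intensity2 / intensity1 - 1.0)   # linear-response model
        deviation = (img2 - img1) - expected                # measured vs expected
        # Fold half of the deviation back into each image.
        return img1 + 0.5 * deviation, img2 - 0.5 * deviation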
-
Publication number: 20240380984
Abstract: A system includes a tracking device and a light source. The tracking device has a camera and a first controller. The camera captures a first image and a second image; the first image is captured during a first period of time (t0-t1) and the second image during a second period of time (t2-t3). The first controller is coupled to the camera and configured to: obtain first timing information and second timing information; form timing instructions; and communicate the timing instructions to the light source over a communication interface. The light source is configured to use the timing instructions to illuminate with a first amount of light and a second amount of light. The first controller is further configured to calculate an image intensity difference between the first and second images to identify, from the first image, pixel(s) illuminated by the light source.
Type: Application
Filed: May 11, 2023
Publication date: November 14, 2024
Applicant: Varjo Technologies Oy
Inventors: Mikko Strandborg, Mikko Ollila
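The identification step reduces to a thresholded frame difference, since pixels lit by the source are bright in one exposure window and dark (or dimmer) in the other; a sketch for grayscale frames, with an illustrative threshold:

    import numpy as np

    def lit_pixels(first_img, second_img, threshold=30):
        diff = np.abs(first_img.astype(np.int32) - second_img.astype(np.int32))
        return np.argwhere(diff > threshold)   # (row, col) of illuminated pixel(s)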
-
Publication number: 20240362853
Abstract: Disclosed is a method including: obtaining neural network(s) trained for rendering images, wherein input of neural network(s) comprises 3D position of point in real-world environment and output of neural network(s) comprises colour and opacity of point; obtaining 3D model(s) of real-world environment; receiving viewpoint from perspective of which image is to be generated; receiving gaze direction; determining region of real-world environment that is to be represented in image, based on viewpoint and field of view of image; determining gaze portion and peripheral portion of region of real-world environment, based on gaze direction, wherein gaze portion corresponds to gaze direction, while peripheral portion surrounds gaze portion; utilising neural network(s) to ray march for gaze portion, to generate gaze segment of image; and utilising 3D model(s) to generate peripheral segment of image.
Type: Application
Filed: April 25, 2023
Publication date: October 31, 2024
Applicant: Varjo Technologies Oy
Inventors: Mikko Strandborg, Kimmo Roimela
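A minimal sketch of the hybrid split, assuming a circular gaze region and two caller-supplied renderers (ray_march_nerf returning one RGB row per queried pixel, rasterise_model returning a writable H x W x 3 frame; both signatures are assumptions):

    import numpy as np

    def render_foveated(width, height, gaze_px, gaze_radius,
                        ray_march_nerf, rasterise_model):
        ys, xs = np.mgrid[0:height, 0:width]
        gaze_mask = (xs - gaze_px[0])**2 + (ys - gaze_px[1])**2 <= gaze_radius**2
        frame = rasterise_model(width, height)           # cheap peripheral segment
        frame[gaze_mask] = ray_march_nerf(np.argwhere(gaze_mask))   # neural gaze segment
        return frame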
-
Publication number: 20240365011
Abstract: Disclosed is an imaging system including a controllable light source; an image sensor; a metalens to focus light onto the image sensor; and processor(s). The processor(s) is configured to control the light source to illuminate a given part of the field of view of the image sensor at a first instant, while controlling the image sensor to capture a first image, whose image segment(s) represent the given part as illuminated and whose remaining image segment(s) represent a remaining part of the field of view as non-illuminated. The processor(s) controls the light source to illuminate the remaining part at a second instant, while controlling the image sensor to capture a second image, whose image segment(s) represent the given part as non-illuminated and whose remaining image segment(s) represent the remaining part as illuminated. An output image is generated based on: (i) the image segment(s) of the first image and the remaining image segment(s) of the second image, and/or (ii) the remaining image segment(s) of the first image and the image segment(s) of the second image.
Type: Application
Filed: April 25, 2023
Publication date: October 31, 2024
Applicant: Varjo Technologies Oy
Inventor: Mikko Ollila
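Because each part of the field of view is illuminated in exactly one of the two captures, compositing reduces to a masked select; a sketch where part_mask is True wherever the first capture was the illuminated one:

    import numpy as np

    def composite_alternating(first_img, second_img, part_mask):
        # Keep each pixel from whichever capture had it illuminated.
        return np.where(part_mask[..., None], first_img, second_img)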
-
Publication number: 20240362862
Abstract: A hierarchical data structure has sets of nodes representing a 3D space of an environment at different granularity levels. Sets of neural networks at different granularity levels are trained. For a portion of an output image, a granularity level at which the portion is to be reconstructed is determined. A corresponding node is identified, the node having sets of child nodes. A set of child nodes is selected at the granularity level at which the portion is to be reconstructed. For a child node, a cascade of neural networks is utilised to reconstruct the portion. The granularity level of the (N+1)th neural network is higher than that of the Nth neural network. The input of a neural network includes the outputs of at least a predefined number of previous neural networks.
Type: Application
Filed: April 25, 2023
Publication date: October 31, 2024
Applicant: Varjo Technologies Oy
Inventors: Mikko Strandborg, Kimmo Roimela
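The cascade could be wired as below, with each network receiving the outputs of the previous (coarser) networks as extra input; the callable signature net(portion, context) is an assumption for illustration:

    def reconstruct_with_cascade(networks, portion, context_window=2):
        outputs = []
        for net in networks:                       # ordered coarse -> fine
            context = outputs[-context_window:]    # previous networks' outputs
            outputs.append(net(portion, context))
        return outputs[-1]                         # finest-granularity result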
-
Patent number: 12112457
Abstract: A system including server(s) and a data repository, wherein the server(s) is/are configured to: receive images of a real-world environment captured using camera(s), corresponding depth maps, and at least one of: pose information, relative pose information; generate a three-dimensional (3D) model of the real-world environment; store the 3D model; utilise the 3D model to generate an output image from a perspective of a new pose; determine whether extended depth-of-field (EDOF) correction is required to be applied to any one of: at least one of the images captured by the camera(s) representing given object(s), the 3D model, the output image, based on whether optical focus of the camera(s) was adjusted according to optical depth of the given object(s) from a given pose of the camera(s); and when it is determined that EDOF correction is required to be applied, apply EDOF correction to at least a portion of any one of: the at least one of the images captured by the camera(s), the 3D model, the output image.
Type: Grant
Filed: November 21, 2022
Date of Patent: October 8, 2024
Assignee: Varjo Technologies Oy
Inventors: Mikko Ollila, Mikko Strandborg
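The gating decision might be approximated by comparing the focus distance used at capture with the object's optical depth; the half-depth-of-field tolerance below is an illustrative criterion, not the patent's:

    def needs_edof_correction(focus_depth_m, object_depth_m, depth_of_field_m):
        # Correction is needed when optical focus was not adjusted to the
        # object's optical depth at capture time.
        return abs(focus_depth_m - object_depth_m) > depth_of_field_m / 2.0

    # e.g. needs_edof_correction(1.2, 3.5, 0.8) -> True: deblur that region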
-
Patent number: 12106735
Abstract: Disclosed is a system with at least one server that is communicably coupled to at least one display apparatus, wherein the at least one server is configured to: detect a start of a smooth pursuit movement based on eye movement information received from the at least one display apparatus; detect an object or image region in motion in a field of view displayed by the at least one display apparatus; control remote rendering of an extended reality (XR) video stream by dynamically adjusting video compression and foveation parameters of the XR video stream during the smooth pursuit movement to prioritize visual clarity of the detected object or image region in motion over other elements in the captured field of view; detect an end of the smooth pursuit movement; and revert the video compression and foveation parameters to pre-set default settings after the end of the smooth pursuit movement.
Type: Grant
Filed: November 9, 2023
Date of Patent: October 1, 2024
Assignee: Varjo Technologies Oy
Inventors: Mikko Strandborg, Mikko Ollila
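Smooth pursuit is conventionally separated from fixations and saccades by gaze velocity; a sketch of such a detector plus a parameter switch (thresholds and parameter values are illustrative, not from the patent):

    import numpy as np

    def is_smooth_pursuit(gaze_deg, dt, lo=10.0, hi=100.0):
        # gaze_deg: (n, 2) gaze angles in degrees; pursuit velocity typically
        # sits between fixation (< lo deg/s) and saccades (> hi deg/s).
        v = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1) / dt
        return bool(np.all((v > lo) & (v < hi)))

    def stream_params(pursuing):
        # Prioritise clarity of the pursued region while pursuit lasts.
        return ({"quantiser": 20, "fovea_radius_deg": 15} if pursuing
                else {"quantiser": 28, "fovea_radius_deg": 10})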
-
Patent number: 12106734
Abstract: Disclosed is a display apparatus with gaze-tracking means, light source(s), an eyepiece lens, and processor(s) configured to: process gaze-tracking data to determine a gaze direction; determine a relative position of a pupil with respect to the eyepiece lens and a relative orientation (A) of the pupil with respect to an optical axis (OO′) of the eyepiece lens; estimate distortions per pixel(s) in an image, based on the relative position, the relative orientation and distortion information; identify an image segment whose pixels' estimated distortions are higher than a predefined threshold distortion; modify the image by replacing pixels of the identified image segment with black pixels or near-black pixels; and utilise the modified image for display to a given eye via the light source(s).
Type: Grant
Filed: November 20, 2023
Date of Patent: October 1, 2024
Assignee: Varjo Technologies Oy
Inventors: Mikko Strandborg, Antti Hirvonen
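The modification step itself is a threshold-and-mask over a per-pixel distortion estimate; assuming such an estimate is already available as a map, a sketch:

    import numpy as np

    def mask_overly_distorted(image, distortion_map, threshold):
        # Replace pixels whose estimated distortion exceeds the predefined
        # threshold with black pixels.
        out = image.copy()
        out[distortion_map > threshold] = 0
        return out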
-
Publication number: 20240323633
Abstract: An acoustic apparatus includes microphones to sense sounds in real-world environment and generate acoustic signals; and processor(s) configured to: obtain 3D model of real-world environment; receive acoustic signals collected by microphones; process acoustic signals based on positions and orientations of microphones, to estimate sound direction from which sound(s) corresponding to acoustic signals is incident upon microphones; determine position of sound source(s) from which sound(s) emanated, based on correlation between 3D model and sound direction; receive position of new user(s) in reconstructed environment; determine relative position of new user(s) with respect to sound source(s), based on position of new user(s) and position of sound source(s); and re-create sound(s) from perspective of new user(s), based on relative position of new user(s) with respect to sound source(s).
Type: Application
Filed: March 21, 2023
Publication date: September 26, 2024
Applicant: Varjo Technologies Oy
Inventor: Jelle van Mourik
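A heavily simplified stand-in for the re-creation step, using inverse-distance attenuation and a crude left/right pan from the listener-relative source position (a real spatial renderer would use HRTFs; everything here is illustrative):

    import numpy as np

    def recreate_for_listener(mono_audio, source_pos, listener_pos):
        rel = np.asarray(source_pos, float) - np.asarray(listener_pos, float)
        dist = max(np.linalg.norm(rel), 1e-6)
        gain = 1.0 / dist                    # inverse-distance attenuation
        pan = 0.5 + 0.5 * rel[0] / dist      # 0 = hard left, 1 = hard right
        return np.stack([mono_audio * gain * (1 - pan),
                         mono_audio * gain * pan], axis=-1)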
-
Publication number: 20240310903
Abstract: Disclosed is a display apparatus with at least one display or projector; a gaze-tracking means; and at least one processor configured to: process gaze-tracking data, collected by the gaze-tracking means, to determine a gaze direction of a user; identify a gaze region and a peripheral region within an image that is to be displayed by the at least one display or projector, based on the gaze direction; apply at least one image restoration technique on the image in an iterative manner such that M iterations of the at least one image restoration technique are applied on the gaze region, and N iterations of the at least one image restoration technique are applied on the peripheral region, M being different from N; and control the at least one display or projector to display the image having the at least one image restoration technique applied thereon.
Type: Application
Filed: March 13, 2023
Publication date: September 19, 2024
Applicant: Varjo Technologies Oy
Inventor: Mikko Ollila
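With a single-step restoration routine supplied by the caller, the M-versus-N policy is a pair of loops and a masked merge; restore_step and the boolean gaze mask are assumptions for illustration:

    import numpy as np

    def restore_foveated(image, gaze_mask, restore_step, m_iters=4, n_iters=1):
        gaze, periph = image.copy(), image.copy()
        for _ in range(m_iters):            # M iterations on the gaze region
            gaze = restore_step(gaze)
        for _ in range(n_iters):            # N iterations on the periphery
            periph = restore_step(periph)
        return np.where(gaze_mask[..., None], gaze, periph)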
-
Publication number: 20240314452
Abstract: Disclosed is an imaging system with an image sensor; and at least one processor configured to: obtain image data read out by the image sensor; obtain information indicative of a gaze direction of a given user; and utilise at least one neural network to: perform demosaicking on the entirety of the image data; identify a gaze region and a peripheral region of the image data, based on the gaze direction of the given user; and apply at least one image restoration technique to one of the gaze region and the peripheral region of the image data.
Type: Application
Filed: March 13, 2023
Publication date: September 19, 2024
Applicant: Varjo Technologies Oy
Inventors: Mikko Ollila, Mikko Strandborg
-
Patent number: 12094143
Abstract: A computer-implemented method including: capturing visible-light images via visible-light camera(s) from viewpoints in real-world environment, wherein 3D positions of viewpoints are represented in coordinate system; dividing 3D space of real-world environment into 3D grid of convex-polyhedral regions; creating 3D data structure including nodes representing convex-polyhedral regions of 3D space; determining 3D positions of pixels of visible-light images based on 3D positions of viewpoints; dividing each visible-light image into portions, wherein 3D positions of pixels of given portion of said visible-light image fall inside corresponding convex-polyhedral region; and storing, in each node, portions of visible-light images whose pixels' 3D positions fall inside corresponding convex-polyhedral region, wherein each portion of visible-light image is stored in corresponding node.
Type: Grant
Filed: December 10, 2021
Date of Patent: September 17, 2024
Assignee: Varjo Technologies Oy
Inventors: Mikko Strandborg, Petteri Timonen
-
Publication number: 20240282051
Abstract: A system and method for: receiving colour images, depth images and viewpoint information; dividing 3D space occupied by real-world environment into 3D grid(s) of voxels; creating 3D data structure(s) comprising nodes, each node representing corresponding voxel; dividing colour image and depth image into colour tiles and depth tiles, respectively; mapping colour tile to voxel(s) whose colour information is captured in colour tile; storing, in node representing voxel(s), viewpoint information indicative of viewpoint from which colour and depth images are captured, along with any of: colour tile that captures colour information of voxel(s) and corresponding depth tile that captures depth information, or reference information indicative of unique identification of colour tile and corresponding depth tile; and utilising 3D data structure(s) for training neural network(s), wherein input of neural network(s) comprises 3D position of point and output of neural network(s) comprises colour and opacity of point.
Type: Application
Filed: February 17, 2023
Publication date: August 22, 2024
Applicant: Varjo Technologies Oy
Inventors: Kimmo Roimela, Mikko Strandborg
-
Publication number: 20240282050
Abstract: Disclosed is a method and system for: obtaining 3D data structure comprising nodes, each node representing voxel of 3D grid of voxels, wherein node stores viewpoint information, with any of: (i) colour tile that captures colour information of voxel and depth tile, (ii) reference information indicative of unique identification of colour and depth tiles; utilising 3D data structure for training neural network(s), wherein input of neural network(s) comprises 3D position of point in real-world environment and output of neural network(s) comprises colour and opacity of point; and for new viewpoint, determining visible nodes whose voxels are visible from new viewpoint; for visible node, selecting depth tile(s) whose viewpoint(s) matches new viewpoint most closely; reconstructing 2D geometry of objects from depth tiles; and utilising neural network(s) to render colours for pixels of output colour image.
Type: Application
Filed: February 17, 2023
Publication date: August 22, 2024
Applicant: Varjo Technologies Oy
Inventors: Mikko Strandborg, Kimmo Roimela
-
Publication number: 20240275939
Abstract: An imaging system including a first camera and a second camera corresponding to a first eye and a second eye of a user, respectively; and at least one processor. The at least one processor is configured to: control the first camera and the second camera to capture a sequence of first images and a sequence of second images of a real-world environment, respectively; and apply a first extended depth-of-field correction to one of a given first image and a given second image, whilst applying at least one of: defocus blur correction, image sharpening, contrast enhancement, edge enhancement to another of the given first image and the given second image.
Type: Application
Filed: February 20, 2024
Publication date: August 15, 2024
Applicant: Varjo Technologies Oy
Inventor: Mikko Ollila
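The asymmetric per-eye policy amounts to routing the two streams through different correction paths; edof_correct and sharpen are caller-supplied stand-ins here, and which eye receives the heavier correction could alternate per frame:

    def process_stereo_pair(first_img, second_img, edof_correct, sharpen):
        # Full EDOF correction on one eye's image, a cheaper enhancement on
        # the other; binocular fusion masks the quality difference.
        return edof_correct(first_img), sharpen(second_img)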