Patents by Inventor Mikko Strandborg
Mikko Strandborg has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240135644
Abstract: A method including: receiving colour images, depth images, and viewpoint information; dividing 3D space occupied by real-world environment into 3D grid(s) of voxels (204); creating 3D data structure(s) comprising nodes, each node representing corresponding voxel; dividing colour image and depth image into colour tiles and depth tiles, respectively; mapping colour tile to voxel(s) whose colour information is captured in colour tile, based on depth information captured in corresponding depth tile and viewpoint from which colour image and depth image are captured; and storing, in node representing voxel(s), reference information indicative of unique identification of colour tile that captures colour information of voxel(s) and corresponding depth tile that captures depth information, along with viewpoint information indicative of viewpoint from which colour image and depth image are captured.
Type: Application
Filed: October 23, 2022
Publication date: April 25, 2024
Applicant: Varjo Technologies Oy
Inventors: Mikko Strandborg, Kimmo Roimela, Pekka Väänänen
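The spatial index described in this abstract can be sketched as a mapping from voxel indices to tile references. This is a minimal illustration, not the patented method: the voxel size, tile identifier, and viewpoint values are illustrative assumptions.

```python
import math
from collections import defaultdict

VOXEL_SIZE = 0.25  # metres per voxel edge (illustrative assumption)

def voxel_of(point):
    """Map a 3D point to the integer index of the voxel containing it."""
    return tuple(math.floor(c / VOXEL_SIZE) for c in point)

# Each node of the 3D data structure holds reference information:
# (tile_id, viewpoint) pairs naming the colour/depth tile pair that
# observed the voxel and the viewpoint it was captured from.
nodes = defaultdict(list)

def register_tile(tile_id, viewpoint, tile_points):
    """Store, for every voxel touched by a tile's 3D points, a reference
    to that tile together with its capture viewpoint."""
    for p in tile_points:
        nodes[voxel_of(p)].append((tile_id, viewpoint))

# Hypothetical tile whose depth pixels reconstruct to two 3D points:
register_tile("colour_tile_7", viewpoint=(0.0, 1.6, 0.0),
              tile_points=[(0.1, 0.1, 1.0), (0.3, 0.1, 1.05)])
```

Looking up a voxel's node then yields every tile (and viewpoint) whose reference information was stored for it.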
-
Patent number: 11947736
Abstract: A controller-tracking system includes: camera(s) arranged on head-mounted display (HMD); one or more light sources arranged on controller(s) to be tracked, and controller(s) being associated with the HMD. The light sources provide light having wavelength(s). The processor(s) are configured to receive image(s) representing controller(s); identify image segment(s) in image(s) that represents light source(s); determine level of focussing of light source(s) in image(s), based on characteristics associated with pixels of image segment(s); determine distance between camera(s) and light source(s), based on level of focussing, intrinsic parameters of camera(s), and reference focussing distance corresponding to wavelength of light provided by light source(s); and determine pose of controller(s) in global coordinate space of real-world environment, using distance between camera(s) and light source(s).
Type: Grant
Filed: December 21, 2022
Date of Patent: April 2, 2024
Assignee: Varjo Technologies Oy
Inventors: Mikko Strandborg, Mikko Ollila
-
Publication number: 20240046556
Abstract: A computer-implemented method including: receiving visible-light images captured from viewpoints using visible-light camera(s); creating 3D model of real-world environment, wherein 3D model stores colour information pertaining to 3D points on surfaces of real objects (204); dividing 3D points into groups of 3D points, based on at least one of: whether surface normals of 3D points in group lie within predefined threshold angle from each other, differences in materials of real objects, differences in textures of surfaces of real objects; for group of 3D points, determining at least two of visible-light images in which group of 3D points is captured from different viewpoints, wherein said images are representative of different surface irradiances of group of 3D points; and storing, in 3D model, information indicative of different surface irradiances.
Type: Application
Filed: August 4, 2022
Publication date: February 8, 2024
Applicant: Varjo Technologies Oy
Inventors: Mikko Strandborg, Kimmo Roimela
-
Patent number: 11863786
Abstract: A method of transmitting image data in an image display system, includes dividing the image data into framebuffers, and for each framebuffer: dividing the framebuffer into a number of vertical stripes, each stripe including one or more scanlines, dividing each vertical stripe into at least a first and a second block, each of the first and the second block comprising pixel data to be displayed in an area of the image, and storing first pixel data in the first block with a first resolution and second pixel data in the second block having a second resolution which is lower than the first resolution, transmitting the framebuffer over the digital display interface to a decoder unit, and unpacking the framebuffer, including upscaling the pixel data in the second block to compensate for the lower second resolution and optionally upscaling the pixel data in the first block.
Type: Grant
Filed: May 21, 2021
Date of Patent: January 2, 2024
Assignee: Varjo Technologies Oy
Inventors: Mikko Strandborg, Oiva Arvo Oskari Sahlsten, Ville Miettinen
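The pack/unpack scheme this abstract describes can be sketched on a single stripe: the first block keeps full resolution, the second block is stored at half horizontal resolution and upscaled by pixel doubling on unpacking. The split position, the factor-of-two downsampling, and nearest-neighbour upscaling are illustrative assumptions, not details from the patent.

```python
def pack_stripe(scanlines, split):
    """Split a stripe into a full-resolution first block and a
    half-horizontal-resolution second block (every other pixel kept)."""
    first = [row[:split] for row in scanlines]
    second = [row[split::2] for row in scanlines]  # lower second resolution
    return first, second

def unpack_stripe(first, second):
    """Rebuild the stripe, upscaling the second block by pixel doubling
    to compensate for its lower resolution."""
    out = []
    for f_row, s_row in zip(first, second):
        upscaled = [p for px in s_row for p in (px, px)]
        out.append(f_row + upscaled)
    return out

stripe = [[0, 1, 2, 3, 4, 5, 6, 7]]        # one scanline of pixel values
first, second = pack_stripe(stripe, split=4)
restored = unpack_stripe(first, second)
```

The restored stripe has the original width, but pixels in the second block carry only the reduced-resolution information that survived packing.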
-
Patent number: 11863788
Abstract: An encoder for encoding images includes at least one processor configured to transform a given input pixel of an input image having (x, y) coordinates in a Cartesian coordinate system into a given transformed pixel of a transformed image having (ρ, θ) coordinates in a log-polar coordinate system, using a log-polar transformation in which a radial distance (ρ) of the given transformed pixel is a logarithm of a distance of the given input pixel from an origin in the Cartesian coordinate system, and an angular distance (θ) of the given transformed pixel is a sum of an arctangent of a slope of a line connecting the given input pixel to the origin and a function of the radial distance, encode the transformed image, by employing a compression algorithm, into an encoded image, and send the encoded image to a display apparatus for subsequent decoding thereat.
Type: Grant
Filed: April 8, 2022
Date of Patent: January 2, 2024
Assignee: Varjo Technologies Oy
Inventors: Mikko Strandborg, Ville Miettinen
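The log-polar transformation in this abstract can be sketched per pixel. The abstract leaves "a function of the radial distance" unspecified; the linear term `a * rho` below is an illustrative assumption, as is the use of `atan2` for the arctangent of the slope of the line to the origin.

```python
import math

def log_polar(x, y, a=0.0):
    """Transform Cartesian (x, y) into log-polar (rho, theta):
    rho is the logarithm of the distance from the origin; theta is the
    arctangent of the angle of the line to the origin plus a function
    of the radial distance (here assumed linear, a * rho)."""
    r = math.hypot(x, y)
    rho = math.log(r)                   # radial distance (rho)
    theta = math.atan2(y, x) + a * rho  # angular distance (theta)
    return rho, theta

# A pixel at distance e from the origin on the positive x-axis:
rho, theta = log_polar(math.e, 0.0)
```

With `a = 0` this reduces to the classic log-polar mapping; a nonzero `a` shears the angular coordinate with radius, which is one way to read the "sum with a function of the radial distance" wording.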
-
Publication number: 20230328283
Abstract: An encoder for encoding images includes at least one processor configured to transform a given input pixel of an input image having (x, y) coordinates in a Cartesian coordinate system into a given transformed pixel of a transformed image having (ρ, θ) coordinates in a log-polar coordinate system, using a log-polar transformation in which a radial distance (ρ) of the given transformed pixel is a logarithm of a distance of the given input pixel from an origin in the Cartesian coordinate system, and an angular distance (θ) of the given transformed pixel is a sum of an arctangent of a slope of a line connecting the given input pixel to the origin and a function of the radial distance, encode the transformed image, by employing a compression algorithm, into an encoded image, and send the encoded image to a display apparatus for subsequent decoding thereat.
Type: Application
Filed: April 8, 2022
Publication date: October 12, 2023
Applicant: Varjo Technologies Oy
Inventors: Mikko Strandborg, Ville Miettinen
-
Publication number: 20230326074
Abstract: Disclosed is a system (100, 200) comprising server (102, 202, 308) and data repository (104, 204, 310) storing three-dimensional (3D) environment model, wherein server is configured to: receive, from client device (106, 206, 300), first image(s) of real-world environment captured by camera(s) (108, 208, 302) of client device, along with information indicative of first measured pose of client device measured by pose-tracking means (110, 210, 304) of client device; utilise 3D environment model to generate first reconstructed image(s) from perspective of first measured pose; determine first spatial transformation indicative of difference in first measured pose and first actual pose of client device; calculate first actual pose, based on first measured pose and first spatial transformation; and send information indicative of at least one of: first actual pose, first spatial transformation, to client device for enabling client device to calculate subsequent actual poses.
Type: Application
Filed: April 8, 2022
Publication date: October 12, 2023
Applicant: Varjo Technologies Oy
Inventors: Mikko Strandborg, Pekka Väänänen, Petteri Timonen
-
Patent number: 11727658
Abstract: A system including server(s) configured to: receive, from host device, visible-light images of real-world environment captured by visible-light camera(s); process visible-light images to generate three-dimensional (3D) environment model; receive, from client device, information indicative of pose of client device; utilise 3D environment model to generate reconstructed image(s) and reconstructed depth map(s); determine position of each pixel of reconstructed image(s); receive, from host device, current visible-light image(s); receive, from host device, information indicative of current pose of host device, or determine said current pose; determine, for pixel of reconstructed image(s), whether or not corresponding pixel exists in current visible-light image(s); replace initial pixel values of pixel in reconstructed image(s) with pixel values of corresponding pixel in current visible-light image(s), when corresponding pixel exists in current visible-light image(s); and send reconstructed image(s) to client device.
Type: Grant
Filed: October 1, 2021
Date of Patent: August 15, 2023
Assignee: Varjo Technologies Oy
Inventors: Mikko Strandborg, Petteri Timonen
-
Publication number: 20230245408
Abstract: A system includes server(s) configured to: receive a plurality of images of a real-world environment captured by camera(s); process the images to detect a plurality of objects present in the real-world environment and generate a three-dimensional environment model of the real-world environment; classify each of the objects as either a static or a dynamic object; receive current image(s) of the real-world environment; process the current image(s) to detect object(s); determine whether or not the object(s) is/are from amongst the plurality of objects; determine whether the object(s) is a static object or a dynamic object when it is determined that the object(s) is/are from amongst the plurality of objects; and for each dynamic object that is represented in the three-dimensional environment model but not in a current image(s), apply a first visual effect to a representation of the dynamic object in the three-dimensional environment model for indicating staleness of the representation.
Type: Application
Filed: February 2, 2022
Publication date: August 3, 2023
Applicant: Varjo Technologies Oy
Inventors: Mikko Strandborg, Petteri Timonen
-
Publication number: 20230186500
Abstract: A computer-implemented method including: capturing visible-light images via visible-light camera(s) from view points in real-world environment, wherein 3D positions of view points are represented in coordinate system; dividing 3D space of real-world environment into 3D grid of convex-polyhedral regions; creating 3D data structure including nodes representing convex-polyhedral regions of 3D space; determining 3D positions of pixels of visible-light images based on 3D positions of view points; dividing each visible-light image into portions, wherein 3D positions of pixels of given portion of said visible-light image fall inside corresponding convex-polyhedral region; and storing, in each node, portions of visible-light images whose pixels' 3D positions fall inside corresponding convex-polyhedral region, wherein each portion of visible-light image is stored in corresponding node.
Type: Application
Filed: December 10, 2021
Publication date: June 15, 2023
Applicant: Varjo Technologies Oy
Inventors: Mikko Strandborg, Petteri Timonen
-
Publication number: 20230108922
Abstract: A system including server(s) configured to: receive, from host device, visible-light images of real-world environment captured by visible-light camera(s); process visible-light images to generate three-dimensional (3D) environment model; receive, from client device, information indicative of pose of client device; utilise 3D environment model to generate reconstructed image(s) and reconstructed depth map(s); determine position of each pixel of reconstructed image(s); receive, from host device, current visible-light image(s); receive, from host device, information indicative of current pose of host device, or determine said current pose; determine, for pixel of reconstructed image(s), whether or not corresponding pixel exists in current visible-light image(s); replace initial pixel values of pixel in reconstructed image(s) with pixel values of corresponding pixel in current visible-light image(s), when corresponding pixel exists in current visible-light image(s); and send reconstructed image(s) to client device.
Type: Application
Filed: October 1, 2021
Publication date: April 6, 2023
Applicant: Varjo Technologies Oy
Inventors: Mikko Strandborg, Petteri Timonen
-
Publication number: 20230057755
Abstract: An encoding method and a decoding method. The encoding method includes generating curved image by creating projection of visual scene onto inner surface of imaginary 3D geometric shape that is curved in at least one dimension; dividing curved image into input portion and plurality of input rings; encoding input portion and input rings into first planar image and second planar image, respectively, such that input portion is stored into first planar image, and input rings are packed into corresponding rows of second planar image; and communicating, to display apparatus, first and second planar images and information indicative of sizes of input portion and input rings.
Type: Application
Filed: August 18, 2021
Publication date: February 23, 2023
Applicant: Varjo Technologies Oy
Inventors: Mikko Strandborg, Ville Miettinen, Ari Antti Erik Peuhkurinen
-
Publication number: 20230057988
Abstract: A display apparatus including display, display driver, and processor configured to send input signal to display driver, wherein first part and second part of input signal comprise first pixel data pertaining to portion of first image frame and second pixel data pertaining to second image frame, respectively, and first part of input signal further comprises extra pixel data pertaining to second image frame. Display driver is configured to: re-scale pixels of first pixel data based on display resolution; update pixels of second pixel data based on extra pixel data; generate control signal based on re-scaled pixels and updated pixels; drive display using control signal to present visual scene, wherein re-scaled pixels surround updated pixels when displayed on display area.
Type: Application
Filed: August 17, 2021
Publication date: February 23, 2023
Applicant: Varjo Technologies Oy
Inventors: Oiva Arvo Oskari Sahlsten, Mikko Strandborg
-
Patent number: 11568574
Abstract: An encoding method and a decoding method. The encoding method includes generating curved image by creating projection of visual scene onto inner surface of imaginary 3D geometric shape that is curved in at least one dimension; dividing curved image into input portion and plurality of input rings; encoding input portion and input rings into first planar image and second planar image, respectively, such that input portion is stored into first planar image, and input rings are packed into corresponding rows of second planar image; and communicating, to display apparatus, first and second planar images and information indicative of sizes of input portion and input rings.
Type: Grant
Filed: August 18, 2021
Date of Patent: January 31, 2023
Assignee: Varjo Technologies Oy
Inventors: Mikko Strandborg, Ville Miettinen, Ari Antti Erik Peuhkurinen
-
Patent number: 11568783
Abstract: A display apparatus including display, display driver, and processor configured to send input signal to display driver, wherein first part and second part of input signal comprise first pixel data pertaining to portion of first image frame and second pixel data pertaining to second image frame, respectively, and first part of input signal further comprises extra pixel data pertaining to second image frame. Display driver is configured to: re-scale pixels of first pixel data based on display resolution; update pixels of second pixel data based on extra pixel data; generate control signal based on re-scaled pixels and updated pixels; drive display using control signal to present visual scene, wherein re-scaled pixels surround updated pixels when displayed on display area.
Type: Grant
Filed: August 17, 2021
Date of Patent: January 31, 2023
Assignee: Varjo Technologies Oy
Inventors: Oiva Arvo Oskari Sahlsten, Mikko Strandborg
-
Publication number: 20220383512
Abstract: The information transmitted from a gaze-tracker camera to a control unit of a VR/AR system can be controlled by an image signal processor (ISP) for use with a camera arranged to provide a stream of images of a moving part of an object in a VR or AR system to a gaze tracking function of the VR or AR system, the image signal processor being arranged to receive a signal from the gaze tracking function indicating at least one desired property of the images and to provide the stream of images to the gaze tracking function according to the signal. The ISP may be arranged to provide the image as either a full view of the image with reduced resolution or a limited part of the image with a second resolution which is high enough to enable detailed tracking of the object.
Type: Application
Filed: May 27, 2021
Publication date: December 1, 2022
Applicant: Varjo Technologies Oy
Inventors: Ville Miettinen, Mikko Ollila, Mikko Strandborg
-
Publication number: 20220377372
Abstract: A method of transmitting image data in an image display system, includes dividing the image data into framebuffers, and for each framebuffer: dividing the framebuffer into a number of vertical stripes, each stripe including one or more scanlines, dividing each vertical stripe into at least a first and a second block, each of the first and the second block comprising pixel data to be displayed in an area of the image, and storing first pixel data in the first block with a first resolution and second pixel data in the second block having a second resolution which is lower than the first resolution, transmitting the framebuffer over the digital display interface to a decoder unit, and unpacking the framebuffer, including upscaling the pixel data in the second block to compensate for the lower second resolution and optionally upscaling the pixel data in the first block.
Type: Application
Filed: May 21, 2021
Publication date: November 24, 2022
Applicant: Varjo Technologies Oy
Inventors: Mikko Strandborg, Oiva Arvo Oskari Sahlsten, Ville Miettinen
-
Publication number: 20220358670
Abstract: A tracking method for tracking a target in a VR/AR system having a tracker function for determining the position of the target, includes obtaining a stream of images of the target, placing two or more markers in determined positions on the target in the image, said markers being arranged to follow the movement of the determined positions, detecting the movement of the markers between two images in the stream of images, if the detected movement is within a set of consistency criteria, determining the position of the target based on the detected movement, and if the detected movement is outside the set of consistency criteria, activating the tracker function. This reduces the computation power required for tracking.
Type: Application
Filed: May 4, 2021
Publication date: November 10, 2022
Applicant: Varjo Technologies Oy
Inventors: Ville Miettinen, Mikko Strandborg
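The cheap-path/expensive-path decision in this abstract can be sketched as follows. The concrete consistency criteria (a displacement cap plus an agreement cap between markers) and the threshold values are illustrative assumptions; the patent only requires *some* set of consistency criteria.

```python
def track_step(prev_markers, curr_markers, max_disp=5.0, max_spread=1.0):
    """Cheap per-frame update: if all marker displacements are small and
    mutually consistent, derive the target's motion from them; otherwise
    signal that the (expensive) full tracker function must be activated.
    Markers are (x, y) positions; thresholds are in pixels (illustrative)."""
    disps = [(cx - px, cy - py)
             for (px, py), (cx, cy) in zip(prev_markers, curr_markers)]
    mean_dx = sum(d[0] for d in disps) / len(disps)
    mean_dy = sum(d[1] for d in disps) / len(disps)
    consistent = all(
        abs(dx) <= max_disp and abs(dy) <= max_disp and      # small movement
        abs(dx - mean_dx) <= max_spread and                  # markers agree
        abs(dy - mean_dy) <= max_spread
        for dx, dy in disps)
    if consistent:
        return ("moved", (mean_dx, mean_dy))
    return ("run_full_tracker", None)
```

When the markers move coherently, the target position is updated from their mean displacement alone, which is what saves computation relative to running the tracker function every frame.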
-
Publication number: 20220351411
Abstract: An AR system is arranged to display an image stream of an environment with one or more virtual objects, each virtual object being associated with a marker in the image stream. The AR system includes a tracking subsystem arranged to track a first pose of the marker in the image, and inform a frame rendering subsystem, which generates a rendering of the VR object and provides the rendering to the reprojecting subsystem together with information about the first pose of the marker and information identifying a set of pixels included in the VR image. The tracking subsystem further determines a second pose of the marker based on detected movement and informs the reprojecting subsystem about the second pose. The reprojecting subsystem renders an image frame including the image stream of the environment with the rendering of the VR object reprojected in dependence of the second pose.
Type: Application
Filed: April 30, 2021
Publication date: November 3, 2022
Applicant: Varjo Technologies Oy
Inventors: Mikko Strandborg, Klaus Melakari, Ville Miettinen
-
Patent number: 11487358
Abstract: A display apparatus including: light source(s); camera(s); and processor(s) configured to: display extended-reality image for presentation to user, whilst capturing eye image(s) of user's eyes; analyse eye image(s) to detect eye features; employ existing calibration model to determine gaze directions of user's eyes; determine gaze location of user; identify three-dimensional bounding box at gaze location within extended-reality environment, based on position and optical depth of gaze location; identify inlying pixels of extended-reality image lying within three-dimensional bounding box, based on optical depths of pixels in extended-reality image; compute probability of user focussing on given inlying pixel and generate probability distribution of probabilities computed for inlying pixels; identify at least one inlying pixel as calibration target, based on probability distribution; and map position of calibration target to eye features, to update existing calibration model to generate new calibration model.
Type: Grant
Filed: April 19, 2021
Date of Patent: November 1, 2022
Assignee: Varjo Technologies Oy
Inventors: Ville Miettinen, Mikko Strandborg
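The inlier-selection and probability steps of this abstract can be sketched as below. The Gaussian depth weighting, the `sigma` value, and the axis-aligned bounding box are illustrative assumptions; the patent does not specify how the focus probability is computed.

```python
import math

def calibration_target(pixels, gaze_depth, box, sigma=0.1):
    """From pixels given as (x, y, depth), keep those inside the 3D bounding
    box around the gaze location, weight each by how close its depth is to
    the gaze depth (assumed Gaussian weighting), normalise the weights into
    a probability distribution, and return the most probable pixel as the
    calibration target."""
    (x0, x1), (y0, y1), (z0, z1) = box
    inliers = [p for p in pixels
               if x0 <= p[0] <= x1 and y0 <= p[1] <= y1 and z0 <= p[2] <= z1]
    weights = [math.exp(-((p[2] - gaze_depth) / sigma) ** 2) for p in inliers]
    total = sum(weights)
    probs = [w / total for w in weights]          # probability distribution
    return max(zip(inliers, probs), key=lambda ip: ip[1])[0]

# Hypothetical pixels around a gaze location at depth 1.0:
pixels = [(0.0, 0.0, 1.00), (0.01, 0.0, 1.50),
          (0.02, 0.0, 1.02), (5.0, 5.0, 1.00)]
target = calibration_target(pixels, gaze_depth=1.0,
                            box=((-1, 1), (-1, 1), (0.5, 2.0)))
```

The selected target's position would then be mapped to the detected eye features to update the calibration model, per the abstract.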