Patents by Inventor Petteri Timonen
Petteri Timonen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12094143
Abstract: A computer-implemented method including: capturing visible-light images via visible-light camera(s) from view points in real-world environment, wherein 3D positions of view points are represented in coordinate system; dividing 3D space of real-world environment into 3D grid of convex-polyhedral regions; creating 3D data structure including nodes representing convex-polyhedral regions of 3D space; determining 3D positions of pixels of visible-light images based on 3D positions of view points; dividing each visible-light image into portions, wherein 3D positions of pixels of given portion of said visible-light image fall inside corresponding convex-polyhedral region; and storing, in each node, portions of visible-light images whose pixels' 3D positions fall inside corresponding convex-polyhedral region, wherein each portion of visible-light image is stored in corresponding node.
Type: Grant
Filed: December 10, 2021
Date of Patent: September 17, 2024
Assignee: Varjo Technologies Oy
Inventors: Mikko Strandborg, Petteri Timonen
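The grid-partitioning method above can be illustrated with a minimal Python sketch. This is not the patented implementation: the uniform cubic cells stand in for generic convex-polyhedral regions, and the cell size, class, and function names are all illustrative assumptions.

```python
from collections import defaultdict

CELL_SIZE = 1.0  # metres per cubic grid cell; illustrative value

def cell_index(p, cell_size=CELL_SIZE):
    """Map a 3D position to the index of the grid cell containing it."""
    return tuple(int(c // cell_size) for c in p)

class GridImageStore:
    """3D data structure whose nodes represent grid cells and store
    the portions of visible-light images falling inside each cell."""
    def __init__(self, cell_size=CELL_SIZE):
        self.cell_size = cell_size
        self.nodes = defaultdict(list)  # cell index -> list of image portions

    def add_image(self, pixels):
        """pixels: list of (3D position, value) pairs for one image.
        Split the image into per-cell portions and store each portion
        in its corresponding node."""
        portions = defaultdict(list)
        for pos, value in pixels:
            portions[cell_index(pos, self.cell_size)].append((pos, value))
        for idx, portion in portions.items():
            self.nodes[idx].append(portion)

# usage: two pixels share cell (0, 0, 0); the third falls in cell (1, 0, 0)
store = GridImageStore()
store.add_image([((0.2, 0.3, 0.1), 10), ((0.4, 0.2, 0.3), 20), ((1.5, 0.1, 0.2), 30)])
```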
-
Patent number: 12058452
Abstract: An imaging system includes first camera; second camera, second field of view of second camera being wider than first field of view of first camera, wherein first field of view overlaps with portion of second field of view; and processor(s) configured to: capture first images and second images, wherein overlapping image segment and non-overlapping image segment of second image correspond to said portion and remaining portion of second field of view; determine blurred region(s) (B1, B2) of first image; and generate output image in manner that: inner image segment of output image is generated from: region(s) of overlapping image segment that corresponds to blurred region(s) of first image, and remaining region of first image that is not blurred, and peripheral image segment of output image is generated from non-overlapping image segment.
Type: Grant
Filed: May 20, 2022
Date of Patent: August 6, 2024
Assignee: Varjo Technologies Oy
Inventors: Mikko Ollila, Petteri Timonen
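The compositing rule in this claim (use the narrow camera where it is sharp, fall back to the wide camera's overlapping segment where the narrow camera is blurred, and take the periphery from the wide camera's non-overlapping segment) can be hedged into a small sketch. Flat pixel lists and the boolean blur mask are simplifying assumptions, not the disclosed implementation.

```python
def compose_output(first_img, overlap_seg, nonoverlap_seg, blurred_mask):
    """Generate an output image from two cameras.

    first_img:      pixels of the narrow-FOV first camera (inner view)
    overlap_seg:    second-camera pixels covering the same inner region
    nonoverlap_seg: second-camera pixels outside the first FOV (periphery)
    blurred_mask:   True where the corresponding first-image pixel is blurred
    """
    # inner segment: wide-camera pixel where blurred, narrow-camera pixel otherwise
    inner = [w if blurred else n
             for n, w, blurred in zip(first_img, overlap_seg, blurred_mask)]
    # peripheral segment comes entirely from the non-overlapping segment
    return inner + nonoverlap_seg
```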
-
Publication number: 20240233254
Abstract: A method including: receiving visible-light images captured using camera(s) and depth data corresponding to said images; identifying image segments of visible-light image that represent objects or their parts belonging to different material categories; detecting whether at least two adjacent image segments in visible-light image pertain to at least two different material categories related to same object category; and when it is detected that at least two adjacent image segments pertain to at least two different material categories related to same object category, identifying at least two adjacent depth segments of depth data corresponding to at least two adjacent image segments; and correcting errors in optical depths represented in at least one of at least two adjacent depth segments, based on optical depths represented in remaining of at least two adjacent depth segments.
Type: Application
Filed: October 24, 2022
Publication date: July 11, 2024
Applicant: Varjo Technologies Oy
Inventors: Roman Golovanov, Tarek Mohsen, Petteri Timonen, Oleksandr Dovzhenko, Ville Timonen, Tuomas Tölli, Joni-Matti Määttä
-
Publication number: 20240135637
Abstract: A method including: receiving visible-light images captured using camera(s) and depth data corresponding to said images; identifying image segments of visible-light image that represent objects or their parts belonging to different material categories; detecting whether at least two adjacent image segments in visible-light image pertain to at least two different material categories related to same object category; and when it is detected that at least two adjacent image segments pertain to at least two different material categories related to same object category, identifying at least two adjacent depth segments of depth data corresponding to at least two adjacent image segments; and correcting errors in optical depths represented in at least one of at least two adjacent depth segments, based on optical depths represented in remaining of at least two adjacent depth segments.
Type: Application
Filed: October 23, 2022
Publication date: April 25, 2024
Applicant: Varjo Technologies Oy
Inventors: Roman Golovanov, Tarek Mohsen, Petteri Timonen, Oleksandr Dovzhenko, Ville Timonen, Tuomas Tölli, Joni-Matti Määttä
-
Patent number: 11967019
Abstract: A method including: receiving visible-light images captured using camera(s) and depth data corresponding to said images; identifying image segments of visible-light image that represent objects or their parts belonging to different material categories; detecting whether at least two adjacent image segments in visible-light image pertain to at least two different material categories related to same object category; and when it is detected that at least two adjacent image segments pertain to at least two different material categories related to same object category, identifying at least two adjacent depth segments of depth data corresponding to at least two adjacent image segments; and correcting errors in optical depths represented in at least one of at least two adjacent depth segments, based on optical depths represented in remaining of at least two adjacent depth segments.
Type: Grant
Filed: October 24, 2022
Date of Patent: April 23, 2024
Assignee: Varjo Technologies Oy
Inventors: Roman Golovanov, Tarek Mohsen, Petteri Timonen, Oleksandr Dovzhenko, Ville Timonen, Tuomas Tölli, Joni-Matti Määttä
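The depth-correction step shared by this claim family (fix erroneous optical depths in one segment using the depths of an adjacent segment of the same object) can be illustrated with a deliberately simple outlier rule. The median-based threshold and flat depth lists are assumptions for illustration only, not the method actually disclosed.

```python
def correct_depth_segment(unreliable, reliable, max_dev=0.5):
    """Correct optical depths in `unreliable` (e.g. a glossy-material
    segment of an object) that deviate by more than `max_dev` metres
    from the median depth of the adjacent `reliable` segment belonging
    to the same object."""
    srt = sorted(reliable)
    median = srt[len(srt) // 2]
    # keep plausible depths; snap outliers to the neighbouring segment's median
    return [d if abs(d - median) <= max_dev else median for d in unreliable]
```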
-
Publication number: 20230379594
Abstract: An imaging system includes first camera; second camera, second field of view of second camera being wider than first field of view of first camera, wherein first field of view overlaps with portion of second field of view; and processor(s) configured to: capture first images and second images, wherein overlapping image segment and non-overlapping image segment of second image correspond to said portion and remaining portion of second field of view; determine blurred region(s) (B1, B2) of first image; and generate output image in manner that: inner image segment of output image is generated from: region(s) of overlapping image segment that corresponds to blurred region(s) of first image, and remaining region of first image that is not blurred, and peripheral image segment of output image is generated from non-overlapping image segment.
Type: Application
Filed: May 20, 2022
Publication date: November 23, 2023
Applicant: Varjo Technologies Oy
Inventors: Mikko Ollila, Petteri Timonen
-
Publication number: 20230326074
Abstract: Disclosed is a system (100, 200) comprising server (102, 202, 308) and data repository (104, 204, 310) storing three-dimensional (3D) environment model, wherein server is configured to: receive, from client device (106, 206, 300), first image(s) of real-world environment captured by camera(s) (108, 208, 302) of client device, along with information indicative of first measured pose of client device measured by pose-tracking means (110, 210, 304) of client device; utilise 3D environment model to generate first reconstructed image(s) from perspective of first measured pose; determine first spatial transformation indicative of difference in first measured pose and first actual pose of client device; calculate first actual pose, based on first measured pose and first spatial transformation; and send information indicative of at least one of: first actual pose, first spatial transformation, to client device for enabling client device to calculate subsequent actual poses.
Type: Application
Filed: April 8, 2022
Publication date: October 12, 2023
Applicant: Varjo Technologies Oy
Inventors: Mikko Strandborg, Pekka Väänänen, Petteri Timonen
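The pose-correction idea in this claim (the server derives a spatial transformation from one measured/actual pose pair, then the client applies it to subsequent measured poses) can be sketched as below. Representing poses as 3D translations only, and both function names, are simplifying assumptions; real poses would also carry orientation.

```python
def derive_transform(measured_pose, actual_pose):
    """Server side: spatial transformation expressing the difference
    between a measured pose and the corresponding actual pose
    (translation-only for simplicity)."""
    return tuple(a - m for m, a in zip(measured_pose, actual_pose))

def apply_correction(measured_pose, spatial_transform):
    """Client side: compute an actual pose for any subsequent measured
    pose by applying the transformation received from the server."""
    return tuple(m + t for m, t in zip(measured_pose, spatial_transform))
```

With a rigid tracking offset, one round trip to the server is enough for the client to correct every later pose locally.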
-
Patent number: 11727658
Abstract: A system including server(s) configured to: receive, from host device, visible-light images of real-world environment captured by visible-light camera(s); process visible-light images to generate three-dimensional (3D) environment model; receive, from client device, information indicative of pose of client device; utilise 3D environment model to generate reconstructed image(s) and reconstructed depth map(s); determine position of each pixel of reconstructed image(s); receive, from host device, current visible-light image(s); receive, from host device, information indicative of current pose of host device, or determine said current pose; determine, for pixel of reconstructed image(s), whether or not corresponding pixel exists in current visible-light image(s); replace initial pixel values of pixel in reconstructed image(s) with pixel values of corresponding pixel in current visible-light image(s), when corresponding pixel exists in current visible-light image(s); and send reconstructed image(s) to client device.
Type: Grant
Filed: October 1, 2021
Date of Patent: August 15, 2023
Assignee: Varjo Technologies Oy
Inventors: Mikko Strandborg, Petteri Timonen
-
Publication number: 20230245408
Abstract: A system includes server(s) configured to: receive plurality of images of real-world environment captured by camera(s); process a number of images to detect plurality of objects present in a real-world environment and generate a three-dimensional environment model of the real-world environment; classify each of the objects as either a static or dynamic object; receive current image(s) of the real-world environment; process the current image(s) to detect object(s); determine whether or not the object(s) is/are from amongst the plurality of objects; determine whether the object(s) is a static object or dynamic object when it is determined that the object(s) is/are from amongst the plurality of objects; and for each dynamic object that is represented in the three-dimensional environment model but not in a current image(s), apply a first visual effect to a representation of the dynamic object in the three-dimensional environment model for indicating staleness of the representation.
Type: Application
Filed: February 2, 2022
Publication date: August 3, 2023
Applicant: Varjo Technologies Oy
Inventors: Mikko Strandborg, Petteri Timonen
-
Publication number: 20230186500
Abstract: A computer-implemented method including: capturing visible-light images via visible-light camera(s) from view points in real-world environment, wherein 3D positions of view points are represented in coordinate system; dividing 3D space of real-world environment into 3D grid of convex-polyhedral regions; creating 3D data structure including nodes representing convex-polyhedral regions of 3D space; determining 3D positions of pixels of visible-light images based on 3D positions of view points; dividing each visible-light image into portions, wherein 3D positions of pixels of given portion of said visible-light image fall inside corresponding convex-polyhedral region; and storing, in each node, portions of visible-light images whose pixels' 3D positions fall inside corresponding convex-polyhedral region, wherein each portion of visible-light image is stored in corresponding node.
Type: Application
Filed: December 10, 2021
Publication date: June 15, 2023
Applicant: Varjo Technologies Oy
Inventors: Mikko Strandborg, Petteri Timonen
-
Publication number: 20230108922
Abstract: A system including server(s) configured to: receive, from host device, visible-light images of real-world environment captured by visible-light camera(s); process visible-light images to generate three-dimensional (3D) environment model; receive, from client device, information indicative of pose of client device; utilise 3D environment model to generate reconstructed image(s) and reconstructed depth map(s); determine position of each pixel of reconstructed image(s); receive, from host device, current visible-light image(s); receive, from host device, information indicative of current pose of host device, or determine said current pose; determine, for pixel of reconstructed image(s), whether or not corresponding pixel exists in current visible-light image(s); replace initial pixel values of pixel in reconstructed image(s) with pixel values of corresponding pixel in current visible-light image(s), when corresponding pixel exists in current visible-light image(s); and send reconstructed image(s) to client device.
Type: Application
Filed: October 1, 2021
Publication date: April 6, 2023
Applicant: Varjo Technologies Oy
Inventors: Mikko Strandborg, Petteri Timonen
-
Patent number: 11503270
Abstract: An imaging system including visible-light camera(s), depth sensor(s), pose-tracking means, and server(s) configured to: control visible-light camera(s) and depth sensor(s) to capture visible-light images and depth images of real-world environment, respectively, whilst processing pose-tracking data to determine poses of visible-light camera(s) and depth sensor(s); reconstruct three-dimensional lighting model of real-world environment representative of lighting in different regions of real-world environment; receive, from client application, request message comprising information indicative of location in real-world environment where virtual object(s) is to be placed; utilise three-dimensional lighting model to create sample lighting data for said location, wherein sample lighting data is representative of lighting at given location in real-world environment; and provide client application with sample lighting data.
Type: Grant
Filed: August 10, 2021
Date of Patent: November 15, 2022
Assignee: Varjo Technologies Oy
Inventors: Petteri Timonen, Ville Timonen, Joni-Matti Määttä, Ari Antti Erik Peuhkurinen
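The request/response step of this claim (a client asks for lighting at a location, and the server samples its per-region lighting model) can be sketched with a nearest-region lookup. The dictionary interface keyed by region centres and the function name are wholly hypothetical; the patented model's structure is not specified here.

```python
def sample_lighting(lighting_model, location):
    """Return sample lighting data for the region of the 3D lighting
    model nearest to the requested location.

    lighting_model: {region_centre (x, y, z): lighting data} (assumed shape)
    """
    def dist2(a, b):
        # squared Euclidean distance; avoids an unneeded sqrt
        return sum((x - y) ** 2 for x, y in zip(a, b))
    centre = min(lighting_model, key=lambda c: dist2(c, location))
    return lighting_model[centre]
```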
-
Publication number: 20220327784
Abstract: Disocclusion in a VR/AR system may be handled by obtaining depth and color data for the disoccluded area from a 3D model of the imaged environment. The data may be obtained by raytracing and included in the image stream by the reprojecting subsystem.
Type: Application
Filed: April 9, 2021
Publication date: October 13, 2022
Applicant: Varjo Technologies Oy
Inventors: Mikko Strandborg, Ville Miettinen, Petteri Timonen
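The hole-filling idea above (reprojection leaves disoccluded pixels empty, and the missing colour/depth is fetched from a 3D environment model) reduces to a small sketch. `model_lookup` stands in for the raytracer and is a hypothetical interface, as is the use of `None` to mark disoccluded pixels.

```python
def fill_disocclusion(reprojected, model_lookup):
    """Fill holes left by reprojection with data obtained from a 3D
    model of the imaged environment.

    reprojected:  pixel list where None marks a disoccluded pixel
    model_lookup: callable(index) -> pixel data raytraced from the model
    """
    return [model_lookup(i) if px is None else px
            for i, px in enumerate(reprojected)]
```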
-
Patent number: 11030817
Abstract: A display system including display or projector, camera, means for tracking position and orientation of user's head, and processor. The processor is configured to control camera to capture images of real-world environment using default exposure setting, whilst processing head-tracking data to determine corresponding positions and orientations of user's head with respect to which images are captured; process images to create environment map of real-world environment; generate extended-reality image from images using environment map; render extended-reality image; adjust exposure of camera to capture underexposed image of real-world environment; process images to generate derived image; generate next extended-reality image from derived image using environment map; render next extended-reality image; and identify and modify intensities of oversaturated pixels in environment map, based on underexposed image and position and orientation with respect to which underexposed image is captured.
Type: Grant
Filed: November 5, 2019
Date of Patent: June 8, 2021
Assignee: Varjo Technologies Oy
Inventors: Petteri Timonen, Ville Timonen
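The final step of this claim (recovering true intensities of oversaturated environment-map pixels from an underexposed capture) can be hedged into a one-function sketch. Scalar intensity lists, the exposure-ratio scaling, and the 8-bit saturation threshold are illustrative assumptions, not the disclosed procedure.

```python
def fix_oversaturated(env_map, underexposed, exposure_ratio, max_val=255):
    """Modify intensities of oversaturated environment-map pixels using
    the corresponding pixels of an underexposed image.

    exposure_ratio: how much darker the underexposed capture is
    (e.g. 4 for two stops), so scaling back up estimates true intensity.
    """
    return [u * exposure_ratio if e >= max_val else e
            for e, u in zip(env_map, underexposed)]
```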
-
Publication number: 20210134061
Abstract: A display system including display or projector, camera, means for tracking position and orientation of user's head, and processor. The processor is configured to control camera to capture images of real-world environment using default exposure setting, whilst processing head-tracking data to determine corresponding positions and orientations of user's head with respect to which images are captured; process images to create environment map of real-world environment; generate extended-reality image from images using environment map; render extended-reality image; adjust exposure of camera to capture underexposed image of real-world environment; process images to generate derived image; generate next extended-reality image from derived image using environment map; render next extended-reality image; and identify and modify intensities of oversaturated pixels in environment map, based on underexposed image and position and orientation with respect to which underexposed image is captured.
Type: Application
Filed: November 5, 2019
Publication date: May 6, 2021
Inventors: Petteri Timonen, Ville Timonen