Patents by Inventor Ville Timonen
Ville Timonen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240135637
Abstract: A method including: receiving visible-light images captured using camera(s) and depth data corresponding to said images; identifying image segments of visible-light image that represent objects or their parts belonging to different material categories; detecting whether at least two adjacent image segments in visible-light image pertain to at least two different material categories related to same object category; and when it is detected that at least two adjacent image segments pertain to at least two different material categories related to same object category, identifying at least two adjacent depth segments of depth data corresponding to at least two adjacent image segments; and correcting errors in optical depths represented in at least one of at least two adjacent depth segments, based on optical depths represented in remaining of at least two adjacent depth segments.
Type: Application
Filed: October 23, 2022
Publication date: April 25, 2024
Applicant: Varjo Technologies Oy
Inventors: Roman Golovanov, Tarek Mohsen, Petteri Timonen, Oleksandr Dovzhenko, Ville Timonen, Tuomas Tölli, Joni-Matti Määttä
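The correction step described in this abstract could look roughly like the sketch below: where two adjacent depth segments belong to different materials of the same object (e.g. a glass pane next to its metal frame), depths in the unreliable segment that deviate too far from the reliable segment are clamped. This is an illustrative guess at the idea, not the patented implementation; the function name, the median heuristic, and the 0.5 m threshold are all assumptions.

```python
import statistics

def correct_depth_segment(unreliable, reliable, max_jump=0.5):
    """Correct optical depths in one segment using an adjacent depth
    segment of the same object. Depths deviating from the reliable
    segment's median by more than max_jump metres are clamped to it."""
    ref = statistics.median(reliable)
    return [d if abs(d - ref) <= max_jump else ref for d in unreliable]
```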
-
Patent number: 11967019
Abstract: A method including: receiving visible-light images captured using camera(s) and depth data corresponding to said images; identifying image segments of visible-light image that represent objects or their parts belonging to different material categories; detecting whether at least two adjacent image segments in visible-light image pertain to at least two different material categories related to same object category; and when it is detected that at least two adjacent image segments pertain to at least two different material categories related to same object category, identifying at least two adjacent depth segments of depth data corresponding to at least two adjacent image segments; and correcting errors in optical depths represented in at least one of at least two adjacent depth segments, based on optical depths represented in remaining of at least two adjacent depth segments.
Type: Grant
Filed: October 24, 2022
Date of Patent: April 23, 2024
Assignee: Varjo Technologies Oy
Inventors: Roman Golovanov, Tarek Mohsen, Petteri Timonen, Oleksandr Dovzhenko, Ville Timonen, Tuomas Tölli, Joni-Matti Määttä
-
Publication number: 20230336944
Abstract: Disclosed is a computer-implemented method comprising: tracking positions and orientations of devices (104, 106, 204a-204f, A-F, 402, 404) within real-world environment (300), each device comprising active sensor(s) (108, 110, 206a-206f); classifying devices into groups, based on positions and orientations of devices within real-world environment, wherein a group has devices whose active sensors are likely to interfere with each other; and controlling active sensors of devices in the group to operate by employing multiplexing.
Type: Application
Filed: April 14, 2022
Publication date: October 19, 2023
Applicant: Varjo Technologies Oy
Inventors: Ville Timonen, Mika-Petteri Lundgren
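A minimal sketch of the grouping-and-multiplexing idea: devices whose active sensors are likely to interfere are greedily clustered, and each device in a group receives a time-division slot so its sensor fires in its own slice. The distance-only interference test, the greedy clustering, and the slot scheme are assumptions for illustration; the patent classifies by both position and orientation.

```python
import math

def likely_interfere(a, b, max_dist=5.0):
    """Assume two devices' active sensors (e.g. IR depth sensors)
    interfere when the devices are within max_dist metres; a real
    system would also test orientation overlap."""
    return math.dist(a, b) <= max_dist

def group_and_slot(positions):
    """Greedily cluster device positions into interference groups and
    assign each device a (group, time-slot) pair for multiplexing."""
    groups = []
    for i, p in enumerate(positions):
        for g in groups:
            if any(likely_interfere(p, positions[j]) for j in g):
                g.append(i)
                break
        else:
            groups.append([i])
    return {i: (gi, slot) for gi, g in enumerate(groups)
            for slot, i in enumerate(g)}
```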
-
Patent number: 11503270
Abstract: An imaging system including visible-light camera(s), depth sensor(s), pose-tracking means, and server(s) configured to: control visible-light camera(s) and depth sensor(s) to capture visible-light images and depth images of real-world environment, respectively, whilst processing pose-tracking data to determine poses of visible-light camera(s) and depth sensor(s); reconstruct three-dimensional lighting model of real-world environment representative of lighting in different regions of real-world environment; receive, from client application, request message comprising information indicative of location in real-world environment where virtual object(s) is to be placed; utilise three-dimensional lighting model to create sample lighting data for said location, wherein sample lighting data is representative of lighting at given location in real-world environment; and provide client application with sample lighting data.
Type: Grant
Filed: August 10, 2021
Date of Patent: November 15, 2022
Assignee: Varjo Technologies Oy
Inventors: Petteri Timonen, Ville Timonen, Joni-Matti Määttä, Ari Antti Erik Peuhkurinen
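The server's request-handling step can be sketched as a lookup into a region-indexed lighting model: given the location where a virtual object will be placed, return the lighting data of the nearest modelled region. This is a hypothetical stand-in; a real model would store spherical harmonics or environment probes per region rather than a single RGB ambient colour.

```python
import math

def sample_lighting(lighting_model, location):
    """Answer a client request: return the lighting sample for the
    model region whose centre is nearest the requested location.
    lighting_model maps region-centre coordinates to lighting data
    (here just an RGB ambient colour, as a simplification)."""
    centre = min(lighting_model, key=lambda c: math.dist(c, location))
    return lighting_model[centre]
```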
-
Patent number: 11315334
Abstract: A display apparatus including light source(s), camera(s), head-tracking means, and processor configured to: obtain three-dimensional model of real-world environment; control camera(s) to capture given image of real-world environment, whilst processing head-tracking data obtained from head-tracking means to determine pose of user's head with respect to which given image is captured; determine region of three-dimensional model that corresponds to said pose of user's head; compare plurality of features extracted from region of three-dimensional model with plurality of features extracted from given image, to detect object(s) present in real-world environment; employ environment map of extended-reality environment to generate intermediate extended-reality image based on pose of user's head; embed object(s) in intermediate extended-reality image to generate extended-reality image; and display extended-reality image via light source(s).
Type: Grant
Filed: February 9, 2021
Date of Patent: April 26, 2022
Assignee: Varjo Technologies Oy
Inventors: Ari Antti Erik Peuhkurinen, Ville Timonen, Niki Dobrev
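The feature-comparison step can be illustrated as follows: features extracted from the captured image that have no counterpart in the stored 3D model's corresponding region indicate objects newly present in the environment, which are then embedded into the extended-reality image. Treating features as plain 2D points with a distance tolerance is a deliberate simplification, not the patent's descriptor matching.

```python
def detect_new_objects(model_features, image_features, tol=0.1):
    """Return image features absent from the 3D model's region;
    these are treated as newly appeared objects to embed in the
    extended-reality image. Features are 2D points here; tol is
    the per-axis match tolerance."""
    def matched(f):
        return any(abs(f[0] - m[0]) <= tol and abs(f[1] - m[1]) <= tol
                   for m in model_features)
    return [f for f in image_features if not matched(f)]
```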
-
Patent number: 11218683
Abstract: The invention relates to a method and technical equipment for implementing the method. The method comprises generating a three-dimensional segment of a scene of a content; generating more than one two-dimensional view of the three-dimensional segment, each two-dimensional view representing a virtual camera view; generating multi-view streams by encoding each of the two-dimensional views; encoding parameters of a virtual camera to the respective stream of the multi-view stream; receiving a selection of one or more streams of the multi-view stream; and streaming only the selected one or more streams.
Type: Grant
Filed: March 20, 2018
Date of Patent: January 4, 2022
Assignee: Nokia Technologies Oy
Inventors: Mika Pesonen, Kimmo Roimela, Johannes Pystynen, Ville Timonen, Johannes Rajala, Emre Aksu
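The pipeline in this abstract, one encoded stream per virtual-camera view with only selected streams transmitted, can be sketched as below. The dict layout, function names, and the string placeholder standing in for an encoded payload are all assumptions; a real implementation would use a video codec and a streaming protocol.

```python
def make_multiview_streams(segment_id, cameras):
    """One stream per virtual camera: the camera's parameters are
    embedded alongside the encoded 2D view (a placeholder string
    here, standing in for real encoded video)."""
    return [{"id": i, "camera": cam,
             "payload": f"encoded(segment {segment_id}, cam {i})"}
            for i, cam in enumerate(cameras)]

def stream_selected(streams, selected_ids):
    """Stream only the views the client selected."""
    chosen = set(selected_ids)
    return [s for s in streams if s["id"] in chosen]
```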
-
Patent number: 11159713
Abstract: An imaging system for producing images for a display apparatus, the imaging system including: at least one camera; means for tracking an orientation of the at least one camera; and at least one processor communicably coupled to said camera and said means. The at least one processor is configured to: create and store an N-dimensional data structure representative of an environment; for a plurality of orientations of the at least one camera, determine values of camera attributes to be employed to capture a given image from a given orientation and update the N-dimensional data structure with the determined values; access the N-dimensional data structure to find values of camera attributes for a current orientation; and control the at least one camera to employ the found values for capturing an image of real-world scene from the current orientation.
Type: Grant
Filed: October 11, 2019
Date of Patent: October 26, 2021
Assignee: Varjo Technologies Oy
Inventors: Anna Nilsson, Ville Timonen
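One plausible reading of the N-dimensional data structure is a grid keyed by quantised camera orientation, storing previously determined attribute values (exposure, gain) for reuse when the camera returns to a similar orientation. The 30-degree bin size, the two-angle key, and the class shape are assumptions made for this sketch.

```python
def bin_orientation(yaw, pitch, step=30):
    """Quantise a camera orientation (degrees) into the grid cell
    used to index the attribute store."""
    return (round(yaw / step), round(pitch / step))

class AttributeStore:
    """Minimal stand-in for the patent's N-dimensional structure:
    camera attribute values stored per orientation bin."""
    def __init__(self):
        self.cells = {}

    def update(self, yaw, pitch, attrs):
        self.cells[bin_orientation(yaw, pitch)] = attrs

    def lookup(self, yaw, pitch):
        return self.cells.get(bin_orientation(yaw, pitch))
```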
-
Patent number: 11138760
Abstract: A display system and method for correcting drifts in camera poses. Images are captured via camera, and camera poses are determined in global coordinate system. First features are extracted from first image. Relative pose of first feature with respect to camera is determined. Pose of first feature in global coordinate system is determined, based on its relative pose and first camera pose. Second features are extracted from second image. Relative pose of second feature with respect to camera is determined. Pose of second feature in global coordinate system is determined, based on its relative pose and second camera pose. Matching features are identified between first features and second features. Difference is determined between pose of feature based on first camera pose and pose of feature based on second camera pose. Matching features that satisfy first predefined criterion based on difference are selected.
Type: Grant
Filed: November 6, 2019
Date of Patent: October 5, 2021
Assignee: Varjo Technologies Oy
Inventors: Thomas Carlsson, Ville Timonen
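The selection criterion at the end of this abstract can be sketched as a distance threshold: each match pairs a feature's global position derived from the first camera pose with the same feature's position derived from the second. Small differences are consistent with camera drift and are kept; large ones suggest mismatches or moving objects. The 5 cm threshold and positions-only (rather than full pose) comparison are assumptions.

```python
import math

def select_consistent_matches(matches, max_diff=0.05):
    """Keep matches whose global-position difference between the
    two pose derivations stays under max_diff metres."""
    kept = []
    for pos_from_pose1, pos_from_pose2 in matches:
        if math.dist(pos_from_pose1, pos_from_pose2) <= max_diff:
            kept.append((pos_from_pose1, pos_from_pose2))
    return kept
```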
-
Patent number: 11030817
Abstract: A display system including display or projector, camera, means for tracking position and orientation of user's head, and processor. The processor is configured to control camera to capture images of real-world environment using default exposure setting, whilst processing head-tracking data to determine corresponding positions and orientations of user's head with respect to which images are captured; process images to create environment map of real-world environment; generate extended-reality image from images using environment map; render extended-reality image; adjust exposure of camera to capture underexposed image of real-world environment; process images to generate derived image; generate next extended-reality image from derived image using environment map; render next extended-reality image; and identify and modify intensities of oversaturated pixels in environment map, based on underexposed image and position and orientation with respect to which underexposed image is captured.
Type: Grant
Filed: November 5, 2019
Date of Patent: June 8, 2021
Assignee: Varjo Technologies Oy
Inventors: Petteri Timonen, Ville Timonen
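The oversaturation fix at the end of the abstract can be illustrated with a standard highlight-recovery trick: pixels clipped in the environment map are replaced by the corresponding underexposed-image values scaled back up by the exposure ratio. Flat intensity lists and the saturation cutoff of 255 are simplifying assumptions; the patent also accounts for the head pose at which the underexposed image was captured.

```python
def fix_oversaturated(env_map, underexposed, exposure_ratio, sat=255):
    """Replace clipped environment-map intensities with the matching
    underexposed-image intensities scaled by the exposure ratio,
    recovering detail lost to saturation. Inputs are flat lists of
    intensities for the same pixels."""
    return [u * exposure_ratio if e >= sat else e
            for e, u in zip(env_map, underexposed)]
```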
-
Publication number: 20210134061
Abstract: A display system including display or projector, camera, means for tracking position and orientation of user's head, and processor. The processor is configured to control camera to capture images of real-world environment using default exposure setting, whilst processing head-tracking data to determine corresponding positions and orientations of user's head with respect to which images are captured; process images to create environment map of real-world environment; generate extended-reality image from images using environment map; render extended-reality image; adjust exposure of camera to capture underexposed image of real-world environment; process images to generate derived image; generate next extended-reality image from derived image using environment map; render next extended-reality image; and identify and modify intensities of oversaturated pixels in environment map, based on underexposed image and position and orientation with respect to which underexposed image is captured.
Type: Application
Filed: November 5, 2019
Publication date: May 6, 2021
Inventors: Petteri Timonen, Ville Timonen
-
Publication number: 20210134013
Abstract: A display system and method for correcting drifts in camera poses. Images are captured via camera, and camera poses are determined in global coordinate system. First features are extracted from first image. Relative pose of first feature with respect to camera is determined. Pose of first feature in global coordinate system is determined, based on its relative pose and first camera pose. Second features are extracted from second image. Relative pose of second feature with respect to camera is determined. Pose of second feature in global coordinate system is determined, based on its relative pose and second camera pose. Matching features are identified between first features and second features. Difference is determined between pose of feature based on first camera pose and pose of feature based on second camera pose. Matching features that satisfy first predefined criterion based on difference are selected.
Type: Application
Filed: November 6, 2019
Publication date: May 6, 2021
Inventors: Thomas Carlsson, Ville Timonen
-
Publication number: 20210112192
Abstract: An imaging system for producing images for a display apparatus, the imaging system including: at least one camera; means for tracking an orientation of the at least one camera; and at least one processor communicably coupled to said camera and said means. The at least one processor is configured to: create and store an N-dimensional data structure representative of an environment; for a plurality of orientations of the at least one camera, determine values of camera attributes to be employed to capture a given image from a given orientation and update the N-dimensional data structure with the determined values; access the N-dimensional data structure to find values of camera attributes for a current orientation; and control the at least one camera to employ the found values for capturing an image of real-world scene from the current orientation.
Type: Application
Filed: October 11, 2019
Publication date: April 15, 2021
Inventors: Anna Nilsson, Ville Timonen
-
Patent number: 10939034
Abstract: An imaging system for producing images for a display apparatus. The imaging system includes at least one camera, and processor communicably coupled to the at least one camera. The processor is configured to: obtain, from display apparatus, information indicative of current gaze direction of a user; determine, based on current gaze direction of the user, an object of interest within at least one display image, wherein the at least one display image is representative of a current view presented to user via display apparatus; adjust, based on a plurality of object attributes of the object of interest, a plurality of camera attributes of the at least one camera for capturing a given image of a given real-world scene; and generate from the given image a view to be presented to user via display apparatus.
Type: Grant
Filed: July 8, 2019
Date of Patent: March 2, 2021
Assignee: Varjo Technologies Oy
Inventors: Ville Timonen, Mikko Ollila
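The gaze-to-camera mapping could be sketched as below: find the display-image object whose bounding box contains the gaze point, then derive camera attributes from its attributes, e.g. longer exposure for a dim object, shorter for a moving one, focus at its depth. The object dict fields, the exposure heuristics, and the 16 ms baseline are illustrative assumptions, not the patent's attribute mapping.

```python
def adjust_camera_for_gaze(objects, gaze_xy):
    """Find the object under the user's gaze (normalised image
    coordinates) and derive camera attributes from its attributes."""
    for obj in objects:
        x0, y0, x1, y1 = obj["bbox"]
        if x0 <= gaze_xy[0] <= x1 and y0 <= gaze_xy[1] <= y1:
            exposure_ms = 16.0
            if obj.get("dim"):
                exposure_ms *= 2     # dim object: gather more light
            if obj.get("moving"):
                exposure_ms /= 4     # moving object: avoid motion blur
            return {"focus": obj["depth"], "exposure_ms": exposure_ms}
    return {"focus": None, "exposure_ms": 16.0}  # nothing under gaze
```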
-
Publication number: 20210014408
Abstract: An imaging system for producing images for a display apparatus. The imaging system includes at least one camera, and processor communicably coupled to the at least one camera. The processor is configured to: obtain, from display apparatus, information indicative of current gaze direction of a user; determine, based on current gaze direction of the user, an object of interest within at least one display image, wherein the at least one display image is representative of a current view presented to user via display apparatus; adjust, based on a plurality of object attributes of the object of interest, a plurality of camera attributes of the at least one camera for capturing a given image of a given real-world scene; and generate from the given image a view to be presented to user via display apparatus.
Type: Application
Filed: July 8, 2019
Publication date: January 14, 2021
Inventors: Ville Timonen, Mikko Ollila
-
Patent number: 10665034
Abstract: An imaging system for producing mixed-reality images for display apparatus. The imaging system includes a camera and a processor communicably coupled to the camera. The processor is configured to control the camera to capture image of real-world environment; analyze the image to identify surface that displays visual content; compare the visual content displayed in the image with reference image of the visual content to determine size, position and orientation of the surface with respect to the camera; process the reference image of the visual content to generate processed image of the visual content; and replace the visual content displayed in the image with the processed image to generate mixed-reality image, wherein resolution of the processed image is higher than resolution of the visual content displayed in the image.
Type: Grant
Filed: October 7, 2019
Date of Patent: May 26, 2020
Assignee: Varjo Technologies Oy
Inventors: Roope Rainisto, Ville Timonen
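The replacement step can be illustrated in its simplest form: overwrite the camera frame's low-resolution rendering of known visual content with the high-resolution reference. The axis-aligned region is a deliberate simplification; the patent determines the surface's size, position, and orientation, which would require a homography warp of the reference rather than a straight copy.

```python
def replace_screen_region(frame, region, hi_res_reference):
    """Overwrite the frame pixels inside region (x0, y0, x1, y1)
    with the corresponding pixels of the high-resolution reference.
    frame and hi_res_reference are row-major lists of pixel rows."""
    x0, y0, x1, y1 = region
    for y in range(y0, y1):
        for x in range(x0, x1):
            frame[y][x] = hi_res_reference[y - y0][x - x0]
    return frame
```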
-
Publication number: 20200036955
Abstract: The invention relates to a method and technical equipment for implementing the method. The method comprises generating a three-dimensional segment of a scene of a content; generating more than one two-dimensional view of the three-dimensional segment, each two-dimensional view representing a virtual camera view; generating multi-view streams by encoding each of the two-dimensional views; encoding parameters of a virtual camera to the respective stream of the multi-view stream; receiving a selection of one or more streams of the multi-view stream; and streaming only the selected one or more streams.
Type: Application
Filed: March 20, 2018
Publication date: January 30, 2020
Inventors: Mika Pesonen, Kimmo Roimela, Johannes Pystynen, Ville Timonen, Johannes Rajala, Emre Aksu
-
Publication number: 20200035035
Abstract: An imaging system for producing mixed-reality images for display apparatus. The imaging system includes a camera and a processor communicably coupled to the camera. The processor is configured to control the camera to capture image of real-world environment; analyze the image to identify surface that displays visual content; compare the visual content displayed in the image with reference image of the visual content to determine size, position and orientation of the surface with respect to the camera; process the reference image of the visual content to generate processed image of the visual content; and replace the visual content displayed in the image with the processed image to generate mixed-reality image, wherein resolution of the processed image is higher than resolution of the visual content displayed in the image.
Type: Application
Filed: October 7, 2019
Publication date: January 30, 2020
Inventors: Roope Rainisto, Ville Timonen