Patents by Inventor Ville Timonen

Ville Timonen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250218129
    Abstract: Disclosed is a method including determining a gaze point and a gaze depth; controlling camera(s) for capturing a real-world image, by adjusting camera settings according to the gaze point and the gaze depth; determining a pose of the camera(s) at a time of capturing the real-world image; identifying region(s) of the real-world environment represented in the real-world image; determining whether a representation of the region(s) satisfies quality criteria; when the representation fails to satisfy the quality criteria, capturing a reference real-world image such that the representation fulfills the quality criteria; generating training data comprising reference data and input data, wherein the reference data comprises the reference real-world image and the input data comprises the real-world image and/or a previously-captured real-world image; and sending the training data to a processor to train a first neural network.
    Type: Application
    Filed: December 27, 2023
    Publication date: July 3, 2025
    Applicant: Varjo Technologies Oy
    Inventors: Ville Timonen, Mikko Ollila, Mikko Strandborg
  • Publication number: 20250220307
    Abstract: Disclosed is a method that includes detecting a beginning of a movement of a user's gaze by processing gaze-tracking data, collected by a gaze-tracking means; predicting a motion blur in an image which is to be captured by at least one camera during the movement of the user's gaze, using a portion of the gaze-tracking data that corresponds to the beginning of the movement of the user's gaze; and compensating for the predicted motion blur while capturing the image by controlling the at least one camera.
    Type: Application
    Filed: December 29, 2023
    Publication date: July 3, 2025
    Applicant: Varjo Technologies Oy
    Inventors: Ville Timonen, Kalle Karhu
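The compensation step in publication 20250220307 can be illustrated with a minimal sketch: detect the onset of a rapid gaze movement from recent gaze samples and shorten the camera exposure accordingly. All names, the velocity threshold, and the inverse-velocity scaling are hypothetical simplifications, not the patented method.

```python
def plan_exposure(gaze_samples, dt, base_exposure_ms=16.0,
                  velocity_threshold=30.0):
    """Shorten camera exposure when a rapid gaze movement begins.

    gaze_samples: recent gaze angles in degrees (one per sample).
    dt: sampling interval in seconds.
    Returns an exposure time in milliseconds.
    """
    if len(gaze_samples) < 2:
        return base_exposure_ms
    # Angular gaze velocity (deg/s) from the two most recent samples.
    velocity = abs(gaze_samples[-1] - gaze_samples[-2]) / dt
    if velocity < velocity_threshold:
        return base_exposure_ms  # gaze is stable: no blur expected
    # Scale exposure down as predicted motion blur grows with velocity.
    return base_exposure_ms * velocity_threshold / velocity
```

A real implementation would predict blur from the full gaze-tracking trajectory rather than a single velocity sample.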
  • Publication number: 20250142037
    Abstract: Disclosed is a computer-implemented method including marching first ray and second ray, along first gaze direction and second gaze direction that are estimated using gaze-tracking means, from given viewpoint into depth map, to determine first optical depth and second optical depth corresponding to first eye and second eye, respectively; calculating gaze convergence distance, based on first gaze direction and second gaze direction; detecting whether first optical depth lies within predefined threshold percent from second optical depth; and when it is detected that first optical depth lies within predefined threshold percent from second optical depth, selecting given focus distance as an average of at least two of: first optical depth, second optical depth, gaze convergence distance; and employing given focus distance for capturing given image using at least one variable-focus camera.
    Type: Application
    Filed: October 31, 2023
    Publication date: May 1, 2025
    Applicant: Varjo Technologies Oy
    Inventor: Ville Timonen
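The focus-selection logic described in publication 20250142037 can be sketched directly from the abstract. The function name, default threshold, and the choice to average all three estimates (the abstract only requires "at least two") are assumptions for illustration:

```python
def select_focus_distance(depth_left, depth_right, convergence_dist,
                          threshold_percent=10.0):
    """Pick a focus distance for a variable-focus camera.

    If the two eyes' optical depths (obtained by marching gaze rays
    into a depth map) agree to within a threshold percentage, average
    the available estimates; otherwise fall back to the gaze
    convergence distance alone.
    """
    # Relative difference between the two per-eye optical depths.
    rel_diff = abs(depth_left - depth_right) / depth_left * 100.0
    if rel_diff <= threshold_percent:
        # Depths agree: average the per-eye depths and convergence distance.
        return (depth_left + depth_right + convergence_dist) / 3.0
    # Depths disagree (e.g. one ray grazed an occluder): trust convergence.
    return convergence_dist
```

For example, depths of 2.0 m and 2.1 m differ by 5%, so with the default 10% threshold the three estimates are averaged.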
  • Publication number: 20240233254
    Abstract: A method including: receiving visible-light images captured using camera(s) and depth data corresponding to said images; identifying image segments of visible-light image that represent objects or their parts belonging to different material categories; detecting whether at least two adjacent image segments in visible-light image pertain to at least two different material categories related to same object category; and when it is detected that at least two adjacent image segments pertain to at least two different material categories related to same object category, identifying at least two adjacent depth segments of depth data corresponding to at least two adjacent image segments; and correcting errors in optical depths represented in at least one of at least two adjacent depth segments, based on optical depths represented in remaining of at least two adjacent depth segments.
    Type: Application
    Filed: October 24, 2022
    Publication date: July 11, 2024
    Applicant: Varjo Technologies Oy
    Inventors: Roman Golovanov, Tarek Mohsen, Petteri Timonen, Oleksandr Dovzhenko, Ville Timonen, Tuomas Tölli, Joni-Matti Määttä
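The depth-correction step shared by the three related filings above can be sketched as follows: two adjacent depth segments belong to the same object but different material categories, so depths in the less reliable segment are corrected against the other. The outlier-replacement rule and the deviation threshold are hypothetical simplifications of the patented correction:

```python
from statistics import median

def correct_depth_segment(unreliable_seg, reliable_seg, max_dev=0.5):
    """Correct optical depths in one depth segment using an adjacent one.

    Both segments represent parts of the same object in different
    material categories (e.g. a glossy tabletop beside a matte table
    leg), so their true depths should be mutually consistent.  Depths
    in the unreliable segment deviating from the reliable segment's
    median by more than max_dev metres are replaced by that median.
    """
    ref = median(reliable_seg)
    return [d if abs(d - ref) <= max_dev else ref for d in unreliable_seg]
```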
  • Publication number: 20240135637
    Abstract: A method including: receiving visible-light images captured using camera(s) and depth data corresponding to said images; identifying image segments of visible-light image that represent objects or their parts belonging to different material categories; detecting whether at least two adjacent image segments in visible-light image pertain to at least two different material categories related to same object category; and when it is detected that at least two adjacent image segments pertain to at least two different material categories related to same object category, identifying at least two adjacent depth segments of depth data corresponding to at least two adjacent image segments; and correcting errors in optical depths represented in at least one of at least two adjacent depth segments, based on optical depths represented in remaining of at least two adjacent depth segments.
    Type: Application
    Filed: October 23, 2022
    Publication date: April 25, 2024
    Applicant: Varjo Technologies Oy
    Inventors: Roman Golovanov, Tarek Mohsen, Petteri Timonen, Oleksandr Dovzhenko, Ville Timonen, Tuomas Tölli, Joni-Matti Määttä
  • Patent number: 11967019
    Abstract: A method including: receiving visible-light images captured using camera(s) and depth data corresponding to said images; identifying image segments of visible-light image that represent objects or their parts belonging to different material categories; detecting whether at least two adjacent image segments in visible-light image pertain to at least two different material categories related to same object category; and when it is detected that at least two adjacent image segments pertain to at least two different material categories related to same object category, identifying at least two adjacent depth segments of depth data corresponding to at least two adjacent image segments; and correcting errors in optical depths represented in at least one of at least two adjacent depth segments, based on optical depths represented in remaining of at least two adjacent depth segments.
    Type: Grant
    Filed: October 24, 2022
    Date of Patent: April 23, 2024
    Assignee: Varjo Technologies Oy
    Inventors: Roman Golovanov, Tarek Mohsen, Petteri Timonen, Oleksandr Dovzhenko, Ville Timonen, Tuomas Tölli, Joni-Matti Määttä
  • Publication number: 20230336944
    Abstract: Disclosed is a computer-implemented method comprising: tracking positions and orientations of devices (104, 106, 204a-204f, A-F, 402, 404) within real-world environment (300), each device comprising active sensor(s) (108, 110, 206a-206f); classifying devices into groups, based on positions and orientations of devices within real-world environment, wherein a group has devices whose active sensors are likely to interfere with each other; and controlling active sensors of devices in the group to operate by employing multiplexing.
    Type: Application
    Filed: April 14, 2022
    Publication date: October 19, 2023
    Applicant: Varjo Technologies Oy
    Inventors: Ville Timonen, Mika-Petteri Lundgren
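The classify-then-multiplex idea of publication 20230336944 can be sketched with a toy scheduler: cluster devices whose active sensors may interfere, then give each member of a cluster its own time slot. Using only pairwise distance (rather than full poses and sensor fields of view) is a simplifying assumption, and all names are hypothetical:

```python
import math

def group_and_schedule(devices, interference_range=5.0):
    """Cluster potentially interfering devices and time-multiplex them.

    devices: dict of name -> (x, y) position.  Two devices are assumed
    to interfere when closer than interference_range; a real system
    would also consider orientations and sensor fields of view.
    Returns name -> time-slot index; devices in the same cluster get
    distinct slots, while independent clusters may reuse slot 0.
    """
    names = list(devices)
    # Union-find style grouping by pairwise distance.
    parent = {n: n for n in names}

    def find(n):
        while parent[n] != n:
            n = parent[n]
        return n

    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if math.dist(devices[a], devices[b]) < interference_range:
                parent[find(a)] = find(b)

    # Assign consecutive slots within each interference group.
    slots, counters = {}, {}
    for n in names:
        root = find(n)
        slots[n] = counters.get(root, 0)
        counters[root] = slots[n] + 1
    return slots
```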
  • Patent number: 11503270
    Abstract: An imaging system including visible-light camera(s), depth sensor(s), pose-tracking means, and server(s) configured to: control visible-light camera(s) and depth sensor(s) to capture visible-light images and depth images of real-world environment, respectively, whilst processing pose-tracking data to determine poses of visible-light camera(s) and depth sensor(s); reconstruct three-dimensional lighting model of real-world environment representative of lighting in different regions of real-world environment; receive, from client application, request message comprising information indicative of location in real-world environment where virtual object(s) is to be placed; utilise three-dimensional lighting model to create sample lighting data for said location, wherein sample lighting data is representative of lighting at given location in real-world environment; and provide client application with sample lighting data.
    Type: Grant
    Filed: August 10, 2021
    Date of Patent: November 15, 2022
    Assignee: Varjo Technologies Oy
    Inventors: Petteri Timonen, Ville Timonen, Joni-Matti Määttä, Ari Antti Erik Peuhkurinen
  • Patent number: 11315334
    Abstract: A display apparatus including light source(s), camera(s), head-tracking means, and processor configured to: obtain three-dimensional model of real-world environment; control camera(s) to capture given image of real-world environment, whilst processing head-tracking data obtained from head-tracking means to determine pose of user's head with respect to which given image is captured; determine region of three-dimensional model that corresponds to said pose of user's head; compare plurality of features extracted from region of three-dimensional model with plurality of features extracted from given image, to detect object(s) present in real-world environment; employ environment map of extended-reality environment to generate intermediate extended-reality image based on pose of user's head; embed object(s) in intermediate extended-reality image to generate extended-reality image; and display extended-reality image via light source(s).
    Type: Grant
    Filed: February 9, 2021
    Date of Patent: April 26, 2022
    Assignee: Varjo Technologies Oy
    Inventors: Ari Antti Erik Peuhkurinen, Ville Timonen, Niki Dobrev
  • Patent number: 11218683
    Abstract: The invention relates to a method and technical equipment for implementing the method. The method comprises generating a three-dimensional segment of a scene of a content; generating more than one two-dimensional view of the three-dimensional segment, each two-dimensional view representing a virtual camera view; generating multi-view streams by encoding each of the two-dimensional views; encoding parameters of a virtual camera to the respective stream of the multi-view stream; receiving a selection of one or more streams of the multi-view stream; and streaming only the selected one or more streams.
    Type: Grant
    Filed: March 20, 2018
    Date of Patent: January 4, 2022
    Assignee: Nokia Technologies Oy
    Inventors: Mika Pesonen, Kimmo Roimela, Johannes Pystynen, Ville Timonen, Johannes Rajala, Emre Aksu
  • Patent number: 11159713
    Abstract: An imaging system for producing images for a display apparatus, the imaging system including: at least one camera; means for tracking an orientation of the at least one camera; and at least one processor communicably coupled to said camera and said means. The at least one processor is configured to: create and store an N-dimensional data structure representative of an environment; for a plurality of orientations of the at least one camera, determine values of camera attributes to be employed to capture a given image from a given orientation and update the N-dimensional data structure with the determined values; access the N-dimensional data structure to find values of camera attributes for a current orientation; and control the at least one camera to employ the found values for capturing an image of real-world scene from the current orientation.
    Type: Grant
    Filed: October 11, 2019
    Date of Patent: October 26, 2021
    Assignee: Varjo Technologies Oy
    Inventors: Anna Nilsson, Ville Timonen
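The orientation-keyed lookup described in patent 11159713 can be sketched as a quantised map from camera orientation to previously determined attribute values. Reducing the N-dimensional structure to two orientation axes (yaw, pitch) and a fixed bin size is an assumption for brevity:

```python
def orientation_key(yaw, pitch, bin_deg=10):
    """Quantise an orientation (degrees) into a bin key."""
    return (round(yaw / bin_deg), round(pitch / bin_deg))

class AttributeMap:
    """Simplified two-dimensional stand-in for the N-dimensional data
    structure: maps camera-orientation bins to camera attributes."""

    def __init__(self):
        self._bins = {}

    def update(self, yaw, pitch, attributes):
        # Store determined attribute values for this orientation bin.
        self._bins[orientation_key(yaw, pitch)] = attributes

    def lookup(self, yaw, pitch, default=None):
        # Find stored attributes for the current orientation, if any.
        return self._bins.get(orientation_key(yaw, pitch), default)
```

Nearby orientations fall into the same bin, so attributes determined at yaw 42° can be reused when the camera later points at yaw 44°.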
  • Patent number: 11138760
    Abstract: A display system and method for correcting drifts in camera poses. Images are captured via camera, and camera poses are determined in global coordinate system. First features are extracted from first image. Relative pose of first feature with respect to camera is determined. Pose of first feature in global coordinate system is determined, based on its relative pose and first camera pose. Second features are extracted from second image. Relative pose of second feature with respect to camera is determined. Pose of second feature in global coordinate system is determined, based on its relative pose and second camera pose. Matching features are identified between first features and second features. Difference is determined between pose of feature based on first camera pose and pose of feature based on second camera pose. Matching features that satisfy first predefined criterion based on difference are selected.
    Type: Grant
    Filed: November 6, 2019
    Date of Patent: October 5, 2021
    Assignee: Varjo Technologies Oy
    Inventors: Thomas Carlsson, Ville Timonen
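The final selection step of patent 11138760 can be sketched as a pose-consistency filter over matched features. Representing a feature pose as a bare (x, y, z) position and the specific error threshold are illustrative assumptions:

```python
def select_consistent_matches(matches, max_position_error=0.05):
    """Filter matched features by pose consistency.

    matches: list of (pose_from_first_image, pose_from_second_image),
    where each pose is an (x, y, z) position in the global coordinate
    system.  A match is kept when the two independently derived global
    positions of the same feature agree within max_position_error
    metres; a large difference indicates camera-pose drift or a
    mismatched feature.
    """
    kept = []
    for p1, p2 in matches:
        # Euclidean distance between the two global-position estimates.
        error = sum((a - b) ** 2 for a, b in zip(p1, p2)) ** 0.5
        if error <= max_position_error:
            kept.append((p1, p2))
    return kept
```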
  • Patent number: 11030817
    Abstract: A display system including display or projector, camera, means for tracking position and orientation of user's head, and processor. The processor is configured to control camera to capture images of real-world environment using default exposure setting, whilst processing head-tracking data to determine corresponding positions and orientations of user's head with respect to which images are captured; process images to create environment map of real-world environment; generate extended-reality image from images using environment map; render extended-reality image; adjust exposure of camera to capture underexposed image of real-world environment; process images to generate derived image; generate next extended-reality image from derived image using environment map; render next extended-reality image; and identify and modify intensities of oversaturated pixels in environment map, based on underexposed image and position and orientation with respect to which underexposed image is captured.
    Type: Grant
    Filed: November 5, 2019
    Date of Patent: June 8, 2021
    Assignee: Varjo Technologies Oy
    Inventors: Petteri Timonen, Ville Timonen
  • Publication number: 20210134061
    Abstract: A display system including display or projector, camera, means for tracking position and orientation of user's head, and processor. The processor is configured to control camera to capture images of real-world environment using default exposure setting, whilst processing head-tracking data to determine corresponding positions and orientations of user's head with respect to which images are captured; process images to create environment map of real-world environment; generate extended-reality image from images using environment map; render extended-reality image; adjust exposure of camera to capture underexposed image of real-world environment; process images to generate derived image; generate next extended-reality image from derived image using environment map; render next extended-reality image; and identify and modify intensities of oversaturated pixels in environment map, based on underexposed image and position and orientation with respect to which underexposed image is captured.
    Type: Application
    Filed: November 5, 2019
    Publication date: May 6, 2021
    Inventors: Petteri Timonen, Ville Timonen
  • Publication number: 20210134013
    Abstract: A display system and method for correcting drifts in camera poses. Images are captured via camera, and camera poses are determined in global coordinate system. First features are extracted from first image. Relative pose of first feature with respect to camera is determined. Pose of first feature in global coordinate system is determined, based on its relative pose and first camera pose. Second features are extracted from second image. Relative pose of second feature with respect to camera is determined. Pose of second feature in global coordinate system is determined, based on its relative pose and second camera pose. Matching features are identified between first features and second features. Difference is determined between pose of feature based on first camera pose and pose of feature based on second camera pose. Matching features that satisfy first predefined criterion based on difference are selected.
    Type: Application
    Filed: November 6, 2019
    Publication date: May 6, 2021
    Inventors: Thomas Carlsson, Ville Timonen
  • Publication number: 20210112192
    Abstract: An imaging system for producing images for a display apparatus, the imaging system including: at least one camera; means for tracking an orientation of the at least one camera; and at least one processor communicably coupled to said camera and said means. The at least one processor is configured to: create and store an N-dimensional data structure representative of an environment; for a plurality of orientations of the at least one camera, determine values of camera attributes to be employed to capture a given image from a given orientation and update the N-dimensional data structure with the determined values; access the N-dimensional data structure to find values of camera attributes for a current orientation; and control the at least one camera to employ the found values for capturing an image of real-world scene from the current orientation.
    Type: Application
    Filed: October 11, 2019
    Publication date: April 15, 2021
    Inventors: Anna Nilsson, Ville Timonen
  • Patent number: 10939034
    Abstract: An imaging system for producing images for a display apparatus. The imaging system includes at least one camera, and processor communicably coupled to the at least one camera. The processor is configured to: obtain, from display apparatus, information indicative of current gaze direction of a user; determine, based on current gaze direction of the user, an object of interest within at least one display image, wherein the at least one display image is representative of a current view presented to user via display apparatus; adjust, based on a plurality of object attributes of the object of interest, a plurality of camera attributes of the at least one camera for capturing a given image of a given real-world scene; and generate from the given image a view to be presented to user via display apparatus.
    Type: Grant
    Filed: July 8, 2019
    Date of Patent: March 2, 2021
    Assignee: Varjo Technologies Oy
    Inventors: Ville Timonen, Mikko Ollila
  • Publication number: 20210014408
    Abstract: An imaging system for producing images for a display apparatus. The imaging system includes at least one camera, and processor communicably coupled to the at least one camera. The processor is configured to: obtain, from display apparatus, information indicative of current gaze direction of a user; determine, based on current gaze direction of the user, an object of interest within at least one display image, wherein the at least one display image is representative of a current view presented to user via display apparatus; adjust, based on a plurality of object attributes of the object of interest, a plurality of camera attributes of the at least one camera for capturing a given image of a given real-world scene; and generate from the given image a view to be presented to user via display apparatus.
    Type: Application
    Filed: July 8, 2019
    Publication date: January 14, 2021
    Inventors: Ville Timonen, Mikko Ollila
  • Patent number: 10665034
    Abstract: An imaging system for producing mixed-reality images for display apparatus. The imaging system includes a camera and a processor communicably coupled to the camera. The processor is configured to control the camera to capture image of real-world environment; analyze the image to identify surface that displays visual content; compare the visual content displayed in the image with reference image of the visual content to determine size, position and orientation of the surface with respect to the camera; process the reference image of the visual content to generate processed image of the visual content; and replace the visual content displayed in the image with the processed image to generate mixed-reality image, wherein resolution of the processed image is higher than resolution of the visual content displayed in the image.
    Type: Grant
    Filed: October 7, 2019
    Date of Patent: May 26, 2020
    Assignee: Varjo Technologies Oy
    Inventors: Roope Rainisto, Ville Timonen
  • Publication number: 20200035035
    Abstract: An imaging system for producing mixed-reality images for display apparatus. The imaging system includes a camera and a processor communicably coupled to the camera. The processor is configured to control the camera to capture image of real-world environment; analyze the image to identify surface that displays visual content; compare the visual content displayed in the image with reference image of the visual content to determine size, position and orientation of the surface with respect to the camera; process the reference image of the visual content to generate processed image of the visual content; and replace the visual content displayed in the image with the processed image to generate mixed-reality image, wherein resolution of the processed image is higher than resolution of the visual content displayed in the image.
    Type: Application
    Filed: October 7, 2019
    Publication date: January 30, 2020
    Inventors: Roope Rainisto, Ville Timonen