Patents Assigned to Varjo Technologies Oy
  • Publication number: 20240135637
    Abstract: A method including: receiving visible-light images captured using camera(s) and depth data corresponding to said images; identifying image segments of visible-light image that represent objects or their parts belonging to different material categories; detecting whether at least two adjacent image segments in visible-light image pertain to at least two different material categories related to same object category; and when it is detected that at least two adjacent image segments pertain to at least two different material categories related to same object category, identifying at least two adjacent depth segments of depth data corresponding to at least two adjacent image segments; and correcting errors in optical depths represented in at least one of at least two adjacent depth segments, based on optical depths represented in remaining of at least two adjacent depth segments.
    Type: Application
    Filed: October 23, 2022
    Publication date: April 25, 2024
    Applicant: Varjo Technologies Oy
    Inventors: Roman Golovanov, Tarek Mohsen, Petteri Timonen, Oleksandr Dovzhenko, Ville Timonen, Tuomas Tölli, Joni-Matti Määttä
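The correction step described in publication 20240135637 can be illustrated with a small sketch. The segment labels, the median-based fill strategy, and the function name below are assumptions chosen for illustration, not the method specified in the patent application.

```python
# Illustrative sketch only: corrects depths of one segment of an object
# using the median depth of its sibling segments (an assumed strategy,
# not the application's specified method).
import numpy as np

def correct_segment_depths(depth: np.ndarray,
                           segment_ids: np.ndarray,
                           erroneous_id: int,
                           sibling_ids: list) -> np.ndarray:
    """Replace depths in the erroneous segment with a value derived
    from the remaining adjacent segments of the same object."""
    corrected = depth.copy()
    sibling_mask = np.isin(segment_ids, sibling_ids)
    if sibling_mask.any():
        reference_depth = np.median(depth[sibling_mask])
        corrected[segment_ids == erroneous_id] = reference_depth
    return corrected

# Toy example: a 4x4 depth map with two adjacent segments of one object
# (1 = glass pane giving spurious depths, 2 = window frame with reliable depths).
depth = np.array([[9.0, 9.0, 2.1, 2.0],
                  [9.0, 9.0, 2.1, 2.0],
                  [9.0, 9.0, 2.0, 2.0],
                  [9.0, 9.0, 2.0, 2.1]])
segments = np.array([[1, 1, 2, 2]] * 4)
print(correct_segment_depths(depth, segments, erroneous_id=1, sibling_ids=[2]))
```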
  • Publication number: 20240135644
    Abstract: A method including: receiving colour images, depth images, and viewpoint information; dividing 3D space occupied by real-world environment into 3D grid(s) of voxels (204); creating 3D data structure(s) comprising nodes, each node representing corresponding voxel; dividing colour image and depth image into colour tiles and depth tiles, respectively; mapping colour tile to voxel(s) whose colour information is captured in colour tile, based on depth information captured in corresponding depth tile and viewpoint from which colour image and depth image are captured; and storing, in node representing voxel(s), reference information indicative of unique identification of colour tile that captures colour information of voxel(s) and corresponding depth tile that captures depth information, along with viewpoint information indicative of viewpoint from which colour image and depth image are captured.
    Type: Application
    Filed: October 23, 2022
    Publication date: April 25, 2024
    Applicant: Varjo Technologies Oy
    Inventors: Mikko Strandborg, Kimmo Roimela, Pekka Väänänen
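A minimal sketch of the node-per-voxel bookkeeping that publication 20240135644 describes, assuming a sparse dictionary-keyed grid and simple integer tile and viewpoint identifiers; the class and field names are illustrative, not taken from the application.

```python
# Hedged sketch of storing tile/viewpoint reference information per voxel node.
from dataclasses import dataclass, field

@dataclass
class TileReference:
    colour_tile_id: int      # unique id of the colour tile capturing this voxel
    depth_tile_id: int       # id of the corresponding depth tile
    viewpoint_id: int        # viewpoint from which both tiles were captured

@dataclass
class VoxelNode:
    references: list = field(default_factory=list)

class VoxelGrid:
    """Sparse 3D grid: one node per occupied voxel, keyed by voxel index."""
    def __init__(self, voxel_size: float):
        self.voxel_size = voxel_size
        self.nodes = {}

    def voxel_index(self, x: float, y: float, z: float):
        s = self.voxel_size
        return (int(x // s), int(y // s), int(z // s))

    def map_tile_to_voxel(self, point_xyz, ref: TileReference):
        """Store reference info in the node of the voxel containing point_xyz."""
        key = self.voxel_index(*point_xyz)
        self.nodes.setdefault(key, VoxelNode()).references.append(ref)

grid = VoxelGrid(voxel_size=0.25)
grid.map_tile_to_voxel((1.3, 0.4, 2.7),
                       TileReference(colour_tile_id=42, depth_tile_id=42,
                                     viewpoint_id=7))
print(grid.nodes)
```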
  • Patent number: 11967117
    Abstract: A method implemented by a server communicably coupled to at least two devices, each device including camera(s), the devices being present within same real-world environment. The method includes: receiving, from the devices, images captured by respective cameras of the devices; identifying one of the devices whose camera has camera parameter(s) better than camera parameter(s) of camera of another of the devices; training neural network using images captured by camera of one of the devices as ground truth material and using images captured by camera of another of the devices as training material; generating correction information to correct images captured by camera of another of the devices using trained neural network; and correcting the images captured by the camera of the another of the devices by utilising the correction information at the server, or sending the correction information to the another of the devices for correcting the images.
    Type: Grant
    Filed: March 22, 2022
    Date of Patent: April 23, 2024
    Assignee: Varjo Technologies Oy
    Inventor: Mikko Ollila
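Patent 11967117 trains a neural network with the better camera's images as ground truth. A hedged sketch of such a training loop follows, assuming PyTorch, a toy convolutional network, and random tensors standing in for co-registered frames; none of these choices come from the patent.

```python
# Hedged sketch: train a small image-to-image network so that frames from
# the lower-quality camera approximate frames from the higher-quality camera.
import torch
import torch.nn as nn

correction_net = nn.Sequential(          # toy stand-in for "correction information"
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 3, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(correction_net.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

# Stand-ins for co-registered frames: camera B (training material) and
# camera A, whose better camera parameters make its frames the ground truth.
frames_b = torch.rand(8, 3, 64, 64)
frames_a = torch.rand(8, 3, 64, 64)

for step in range(50):
    optimizer.zero_grad()
    corrected = correction_net(frames_b)   # correction applied to camera B frames
    loss = loss_fn(corrected, frames_a)    # supervised by camera A frames
    loss.backward()
    optimizer.step()
```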
  • Patent number: 11967019
    Abstract: A method including: receiving visible-light images captured using camera(s) and depth data corresponding to said images; identifying image segments of visible-light image that represent objects or their parts belonging to different material categories; detecting whether at least two adjacent image segments in visible-light image pertain to at least two different material categories related to same object category; and when it is detected that at least two adjacent image segments pertain to at least two different material categories related to same object category, identifying at least two adjacent depth segments of depth data corresponding to at least two adjacent image segments; and correcting errors in optical depths represented in at least one of at least two adjacent depth segments, based on optical depths represented in remaining of at least two adjacent depth segments.
    Type: Grant
    Filed: October 24, 2022
    Date of Patent: April 23, 2024
    Assignee: Varjo Technologies Oy
    Inventors: Roman Golovanov, Tarek Mohsen, Petteri Timonen, Oleksandr Dovzhenko, Ville Timonen, Tuomas Tölli, Joni-Matti Määttä
  • Patent number: 11966045
    Abstract: An imaging system including: first camera and second camera; depth-mapping means; gaze-tracking means; and processor configured to: generate depth map of real-world scene; determine gaze directions of first eye and second eye; identify line of sight and conical region of interest; determine optical depths of first object and second object present in conical region; determine one of first camera and second camera having lesser occlusion in real-world scene; adjust optical focus of one of first camera and second camera to focus on one of first object and second object having greater optical depth, and adjust optical focus of another of first camera and second camera to focus on another of first object and second object; and capture first image(s) and second image(s) using adjusted optical focuses of cameras.
    Type: Grant
    Filed: January 3, 2022
    Date of Patent: April 23, 2024
    Assignee: Varjo Technologies Oy
    Inventor: Mikko Ollila
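One plausible reading of the focus-assignment step in patent 11966045 is sketched below: the less-occluded camera is focused on the farther of the two objects in the conical region of interest, and the other camera on the nearer one. The data class, the occlusion score, and this particular assignment rule are assumptions.

```python
# Assumed focus-assignment logic for the stereo camera pair described above.
from dataclasses import dataclass

@dataclass
class CameraState:
    name: str
    occlusion_score: float   # lower = less of the scene is occluded
    focus_depth_m: float = 0.0

def assign_focus(cam_a: CameraState, cam_b: CameraState,
                 depth_obj1_m: float, depth_obj2_m: float):
    """Focus the less-occluded camera on the farther object in the conical
    region of interest, and the other camera on the nearer object."""
    less_occluded, other = sorted([cam_a, cam_b],
                                  key=lambda c: c.occlusion_score)
    far_depth = max(depth_obj1_m, depth_obj2_m)
    near_depth = min(depth_obj1_m, depth_obj2_m)
    less_occluded.focus_depth_m = far_depth
    other.focus_depth_m = near_depth
    return less_occluded, other

first = CameraState("first camera", occlusion_score=0.1)
second = CameraState("second camera", occlusion_score=0.4)
print(assign_focus(first, second, depth_obj1_m=0.8, depth_obj2_m=3.5))
```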
  • Patent number: 11956555
    Abstract: An imaging system includes first camera having negative distortion; second camera, second field of view of second camera being wider than first field of view of first camera, wherein first field of view fully overlaps with portion of second field of view, second camera having negative distortion at said portion and positive distortion at remaining portion; and processor(s) configured to: capture first image and second image; determine overlapping image segment and non-overlapping image segment of second image; and generate output image from first image and second image, wherein: inner image segment of output image is generated from at least one of: first image, overlapping image segment, and peripheral image segment of output image is generated from non-overlapping image segment.
    Type: Grant
    Filed: May 20, 2022
    Date of Patent: April 9, 2024
    Assignee: Varjo Technologies Oy
    Inventor: Mikko Ollila
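The output-image composition described in patent 11956555 can be sketched as a simple mask-based blend, assuming the first (narrow) image has already been aligned to the second (wide) image's frame and that the boundary between the inner and peripheral segments is circular; both assumptions are made only for the sketch.

```python
# Illustrative composition: inner segment from the first camera, periphery
# from the non-overlapping segment of the second camera.
import numpy as np

def compose_output(first_img: np.ndarray,
                   second_img: np.ndarray,
                   inner_radius_frac: float = 0.5) -> np.ndarray:
    """Fill a central region from the narrow first image and keep the wide
    second image elsewhere."""
    h, w = second_img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    inner_mask = r < inner_radius_frac
    out = second_img.copy()
    out[inner_mask] = first_img[inner_mask]
    return out

first = np.full((240, 240, 3), 200, dtype=np.uint8)    # stand-in narrow view
second = np.full((240, 240, 3), 80, dtype=np.uint8)    # stand-in wide view
print(compose_output(first, second).shape)
```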
  • Patent number: 11947736
    Abstract: A controller-tracking system includes: camera(s) arranged on head-mounted display (HMD); one or more light sources arranged on controller(s) to be tracked, and controller(s) being associated with the HMD. The light sources provide light having wavelength(s). The processor(s) are configured to receive image(s) representing controller(s); identify image segment(s) in image(s) that represents light source(s); determine level of focussing of light source(s) in image(s), based on characteristics associated with pixels of image segment(s); determine distance between camera(s) and light source(s), based on level of focussing, intrinsic parameters of camera(s), and reference focussing distance corresponding to wavelength of light provided by light source(s); and determine pose of controller(s) in global coordinate space of real-world environment, using distance between camera(s) and light source(s).
    Type: Grant
    Filed: December 21, 2022
    Date of Patent: April 2, 2024
    Assignee: Varjo Technologies Oy
    Inventors: Mikko Strandborg, Mikko Ollila
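Patent 11947736 derives distance from the level of focussing, the camera intrinsics, and a wavelength-dependent reference focussing distance. The sketch below assumes a thin-lens circle-of-confusion model and that the light source is known to lie beyond the focus plane; the model and parameter names are mine, not taken from the patent.

```python
# Hedged sketch: recover camera-to-LED distance from the measured defocus of
# the LED's image spot under an assumed thin-lens circle-of-confusion model.
def distance_from_defocus(blur_diameter_m: float,
                          aperture_diameter_m: float,
                          focal_length_m: float,
                          reference_focus_distance_m: float,
                          beyond_focus_plane: bool = True) -> float:
    """Invert c = A*f*|S - S0| / (S*(S0 - f)) for the object distance S, where
    S0 is the wavelength-dependent distance at which the light source would be
    perfectly in focus."""
    A, f, S0, c = (aperture_diameter_m, focal_length_m,
                   reference_focus_distance_m, blur_diameter_m)
    if beyond_focus_plane:
        return A * f * S0 / (A * f - c * (S0 - f))
    return A * f * S0 / (A * f + c * (S0 - f))

# Example: 4 mm aperture, 6 mm focal length, reference focus at 1.2 m for the
# LED's wavelength, measured blur spot of 15 micrometres on the sensor.
print(distance_from_defocus(15e-6, 4e-3, 6e-3, 1.2))
```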
  • Patent number: 11947122
    Abstract: A tracking system for use in head-mounted device (HMD) includes light sources arranged spatially around user-interaction controller(s); controller-pose-tracking means arranged in user-interaction controller(s); HMD-pose-tracking means; camera(s) arranged on portion of HMD that faces real-world environment in which HMD is in use; and processor(s) configured to estimate relative pose, based on controller-pose-tracking data and HMD-pose-tracking data; determine sub-set of light sources, based on estimated relative pose and arrangement of light sources; selectively control light sources such that light sources of sub-set are activated, whereas remaining light sources are deactivated; process at least one image, captured by camera(s), to identify operational state of light source(s) of sub-set that is visible in image(s), wherein image(s) is indicative of actual relative pose; and correct estimated relative pose to determine actual relative pose, based on operational state.
    Type: Grant
    Filed: December 8, 2022
    Date of Patent: April 2, 2024
    Assignee: Varjo Technologies Oy
    Inventors: Roman Golovanov, Oleksandr Dovzhenko, Juha Ala-Luhtala
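The light-source sub-set selection in patent 11947122 can be illustrated by activating only those controller LEDs whose outward normals face the HMD camera under the estimated relative pose. The visibility threshold and the frame conventions below are assumptions.

```python
# Sketch of selecting which controller LEDs to activate under an estimated
# relative pose; the rest are deactivated.
import numpy as np

def select_visible_leds(led_positions: np.ndarray,   # (N, 3) in controller frame
                        led_normals: np.ndarray,      # (N, 3) unit normals
                        R_cam_from_ctrl: np.ndarray,  # (3, 3) estimated rotation
                        t_cam_from_ctrl: np.ndarray,  # (3,)  estimated translation
                        max_angle_deg: float = 75.0) -> np.ndarray:
    """Return indices of LEDs whose normals point toward the camera."""
    p_cam = led_positions @ R_cam_from_ctrl.T + t_cam_from_ctrl
    n_cam = led_normals @ R_cam_from_ctrl.T
    to_camera = -p_cam / np.linalg.norm(p_cam, axis=1, keepdims=True)
    cos_angle = np.sum(n_cam * to_camera, axis=1)
    return np.where(cos_angle > np.cos(np.radians(max_angle_deg)))[0]

leds = np.array([[0.0, 0.0, 0.05], [0.0, 0.0, -0.05]])
normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 0.5])   # controller 0.5 m in front (+z forward)
print(select_visible_leds(leds, normals, R, t))   # only the camera-facing LED
```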
  • Publication number: 20240080606
    Abstract: Disclosed is a headphone adapter (100, 204) having: first element (102) having first part (104) and second part (106) attached to first part, wherein recess (108) is defined between first part and second part; second element (110) having first end (112), second end (114) opposite to first end, and at least one third part (116) extending between first end and second end, wherein second element is rotatably attached to first element at first end; and third element (118) having fourth part (120) and attachment parts (122, 124) extending from fourth part, wherein fourth part is rotatably attached to second element at second end, and wherein headphone adapter, in use, enables attachment of a headphone device (216, 302) to a head-mounted device (208, 306) such that headband (206, 304) of head-mounted device passes through the recess defined in first element, and headband (214) of headphone device is attached to third element.
    Type: Application
    Filed: September 1, 2022
    Publication date: March 7, 2024
    Applicant: Varjo Technologies Oy
    Inventor: Jukka Manni
  • Publication number: 20240054619
    Abstract: An imaging system including: a first camera and a second camera corresponding to a first eye and a second eye of a user, respectively; and processor(s) configured to: control the first camera and the second camera to capture a sequence of first images and a sequence of second images, respectively; and apply motion blur correction to one of a given first image and a given second image, whilst applying at least one of: defocus blur correction, image sharpening, contrast enhancement, edge enhancement to another of the given first image and the given second image.
    Type: Application
    Filed: August 12, 2022
    Publication date: February 15, 2024
    Applicant: Varjo Technologies Oy
    Inventor: Mikko Ollila
  • Publication number: 20240054667
    Abstract: An imaging system including processor(s) and data repository. Processor(s) are configured to: receive images of region of real-world environment that are captured by cameras using at least one of: different exposure times, different sensitivities, different apertures; receive depth maps of region that are generated by depth-mapping means; identify different portions of each image that represent objects located at different optical depths; create set of depth planes corresponding to each image; warp depth planes of each set to match perspective of new viewpoint corresponding to which output image is to be generated; fuse sets of warped depth planes corresponding to two or more images to form output set of warped depth planes; and generate output image from output set of warped depth planes.
    Type: Application
    Filed: August 12, 2022
    Publication date: February 15, 2024
    Applicant: Varjo Technologies Oy
    Inventor: Mikko Ollila
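A toy sketch of the depth-plane pipeline in publication 20240054667: split an image into per-depth-bin planes, approximate the viewpoint warp by a parallax shift, and fuse the warped planes near-to-far. The shift-based warp and the compositing rule are simplifying assumptions, not the method claimed.

```python
# Hedged sketch of depth planes: split, warp (approximated), fuse.
import numpy as np

def split_into_depth_planes(image, depth, bin_edges_m):
    """One plane per depth bin: masked colour plus a validity mask."""
    planes = []
    for near, far in zip(bin_edges_m[:-1], bin_edges_m[1:]):
        mask = (depth >= near) & (depth < far)
        planes.append((image * mask[..., None], mask, 0.5 * (near + far)))
    return planes

def warp_plane(plane, baseline_px):
    """Approximate a new viewpoint by a horizontal shift proportional to 1/depth."""
    colour, mask, depth_m = plane
    shift = int(round(baseline_px / depth_m))
    return (np.roll(colour, shift, axis=1), np.roll(mask, shift, axis=1), depth_m)

def fuse_planes(planes, shape):
    """Composite warped planes near-to-far: nearer planes win where valid."""
    out = np.zeros(shape)
    filled = np.zeros(shape[:2], dtype=bool)
    for colour, mask, _ in sorted(planes, key=lambda p: p[2]):
        take = mask & ~filled
        out[take] = colour[take]
        filled |= take
    return out

rng = np.random.default_rng(0)
img = rng.random((60, 80, 3))
depth = rng.uniform(0.5, 5.0, (60, 80))
planes = [warp_plane(p, baseline_px=8) for p in
          split_into_depth_planes(img, depth, bin_edges_m=[0.5, 1.5, 3.0, 5.1])]
print(fuse_planes(planes, img.shape).shape)
```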
  • Publication number: 20240056564
    Abstract: An imaging system including a first camera and a second camera corresponding to a first eye and a second eye of a user, respectively; and at least one processor. The at least one processor is configured to control the first camera and the second camera to capture a sequence of first images and a sequence of second images of a real-world environment, respectively; and apply a first extended depth-of-field correction to one of a given first image and a given second image, whilst applying at least one of: defocus blur correction, image sharpening, contrast enhancement, edge enhancement to another of the given first image and the given second image.
    Type: Application
    Filed: August 12, 2022
    Publication date: February 15, 2024
    Applicant: Varjo Technologies Oy
    Inventor: Mikko Ollila
  • Publication number: 20240046556
    Abstract: A computer-implemented method including: receiving visible-light images captured from viewpoints using visible-light camera(s); creating 3D model of real-world environment, wherein 3D model stores colour information pertaining to 3D points on surfaces of real objects (204); dividing 3D points into groups of 3D points, based on at least one of: whether surface normals of 3D points in group lie within predefined threshold angle from each other, differences in materials of real objects, differences in textures of surfaces of real objects; for group of 3D points, determining at least two of visible-light images in which group of 3D points is captured from different viewpoints, wherein said images are representative of different surface irradiances of group of 3D points; and storing, in 3D model, information indicative of different surface irradiances.
    Type: Application
    Filed: August 4, 2022
    Publication date: February 8, 2024
    Applicant: Varjo Technologies Oy
    Inventors: Mikko Strandborg, Kimmo Roimela
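One of the grouping criteria in publication 20240046556, surface normals lying within a predefined threshold angle of each other, could be implemented with a greedy clustering pass like the sketch below; the greedy strategy itself is an assumption.

```python
# Hedged sketch: greedily group 3D points whose surface normals lie within a
# threshold angle of a group's seed normal.
import numpy as np

def group_by_normal(normals: np.ndarray, threshold_deg: float = 15.0):
    """Assign each point to the first group whose seed normal is within
    threshold_deg; otherwise start a new group. Returns group labels."""
    cos_thresh = np.cos(np.radians(threshold_deg))
    seeds, labels = [], np.empty(len(normals), dtype=int)
    for i, n in enumerate(normals):
        for g, seed in enumerate(seeds):
            if float(np.dot(n, seed)) >= cos_thresh:
                labels[i] = g
                break
        else:
            seeds.append(n)
            labels[i] = len(seeds) - 1
    return labels

normals = np.array([[0, 0, 1], [0.05, 0, 0.999], [1, 0, 0]], dtype=float)
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
print(group_by_normal(normals))   # first two share a group, third is separate
```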
  • Patent number: 11871133
    Abstract: An imaging system including: image sensor including pixels arranged on photo-sensitive surface; and processor configured to: obtain information indicative of gaze direction of user's eye; identify gaze position on photo-sensitive surface; determine first region and second region on photo-sensitive surface, wherein first region includes and surrounds gaze position, while second region surrounds first region; read out first pixel data from each pixel of first region; select set of pixels to be read out from second region based on predetermined sub-sampling pattern; read out second pixel data from pixels of selected set; generate, from second pixel data, pixel data of remaining pixels of second region; and process first pixel data, second pixel data, and generated pixel data to generate image frame(s).
    Type: Grant
    Filed: September 30, 2021
    Date of Patent: January 9, 2024
    Assignee: Varjo Technologies Oy
    Inventor: Mikko Ollila
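The gaze-driven readout in patent 11871133 is sketched below with an assumed square foveal window, a checkerboard sub-sampling pattern for the second region, and nearest-neighbour fill-in for the skipped pixels; the patent does not prescribe these particular choices.

```python
# Hedged sketch of foveated sensor readout with sub-sampling outside the fovea.
import numpy as np

def foveated_readout(sensor: np.ndarray, gaze_rc: tuple, fovea_half: int = 16):
    h, w = sensor.shape
    gr, gc = gaze_rc
    first = np.zeros((h, w), dtype=bool)           # first region around gaze
    first[max(gr - fovea_half, 0):gr + fovea_half,
          max(gc - fovea_half, 0):gc + fovea_half] = True

    # Predetermined sub-sampling pattern for the second region: a checkerboard.
    rows, cols = np.indices((h, w))
    subsample = ((rows + cols) % 2 == 0) & ~first

    frame = np.zeros_like(sensor, dtype=float)
    frame[first] = sensor[first]            # first pixel data: every pixel
    frame[subsample] = sensor[subsample]    # second pixel data: selected set

    # Generate the remaining second-region pixels from read-out neighbours
    # (simple horizontal nearest-neighbour here).
    missing = ~first & ~subsample
    frame[missing] = np.roll(frame, 1, axis=1)[missing]
    return frame

sensor = np.arange(64 * 64, dtype=float).reshape(64, 64)
print(foveated_readout(sensor, gaze_rc=(20, 40)).shape)
```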
  • Patent number: 11863788
    Abstract: An encoder for encoding images includes at least one processor configured to transform a given input pixel of an input image having (x, y) coordinates in a Cartesian coordinate system into a given transformed pixel of a transformed image having (ρ, θ) coordinates in a log-polar coordinate system, using a log-polar transformation in which a radial distance (ρ) of the given transformed pixel is a logarithm of a distance of the given input pixel from an origin in the Cartesian coordinate system, and an angular distance (θ) of the given transformed pixel is a sum of an arctangent of a slope of a line connecting the given input pixel to the origin and a function of the radial distance, encode the transformed image, by employing a compression algorithm, into an encoded image, and send the encoded image to a display apparatus for subsequent decoding thereat.
    Type: Grant
    Filed: April 8, 2022
    Date of Patent: January 2, 2024
    Assignee: Varjo Technologies Oy
    Inventors: Mikko Strandborg, Ville Miettinen
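The mapping in patent 11863788, written out for a single pixel: ρ is the logarithm of the distance from the origin, and θ is the arctangent term plus a function of ρ. The abstract leaves that function unspecified, so a linear f(ρ) = k·ρ is assumed here purely to make the sketch concrete.

```python
# Per-pixel log-polar transform as described in the abstract; f(rho) = k * rho
# is an assumption, since the abstract does not define the function.
import math

def log_polar_pixel(x: float, y: float,
                    origin=(0.0, 0.0), k: float = 0.0) -> tuple:
    """Return (rho, theta) for an input pixel at Cartesian (x, y):
       rho   = log of the distance from the origin,
       theta = arctangent of the slope of the line to the origin,
               plus the assumed function f(rho) = k * rho."""
    dx, dy = x - origin[0], y - origin[1]
    rho = math.log(math.hypot(dx, dy))
    theta = math.atan2(dy, dx) + k * rho
    return rho, theta

print(log_polar_pixel(120.0, 80.0, origin=(64.0, 64.0)))
```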
  • Patent number: 11863786
    Abstract: A method of transmitting image data in an image display system, includes dividing the image data into framebuffers, and for each framebuffer: dividing the framebuffer into a number of vertical stripes, each stripe including one or more scanlines, dividing each vertical stripe into at least a first and a second block, each of the first and the second block comprising pixel data to be displayed in an area of the image, and storing first pixel data in the first block with a first resolution and second pixel data in the second block having a second resolution which is lower than the first resolution, transmitting the framebuffer over the digital display interface to a decoder unit, and unpacking the framebuffer, including upscaling the pixel data in the second block to compensate for the lower second resolution and optionally upscaling the pixel data in the first block.
    Type: Grant
    Filed: May 21, 2021
    Date of Patent: January 2, 2024
    Assignee: Varjo Technologies Oy
    Inventors: Mikko Strandborg, Oiva Arvo Oskari Sahlsten, Ville Miettinen
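Patent 11863786 packs each vertical stripe into a full-resolution block and a lower-resolution block, then upscales on unpacking. The sketch below assumes a 2x horizontal downscale and a fixed column split; both choices are illustrative.

```python
# Hedged sketch of packing one vertical stripe into two blocks of different
# resolution and unpacking it with upscaling.
import numpy as np

def pack_stripe(stripe: np.ndarray, split_col: int, factor: int = 2):
    """First block keeps full resolution; second block is downscaled by
    averaging groups of `factor` adjacent pixels along the row direction."""
    first = stripe[:, :split_col]
    second_full = stripe[:, split_col:]
    h, w = second_full.shape[:2]
    second = second_full[:, :w - w % factor].reshape(h, -1, factor, 3).mean(axis=2)
    return first, second

def unpack_stripe(first: np.ndarray, second: np.ndarray, factor: int = 2):
    """Upscale the second block to compensate for its lower resolution."""
    second_up = np.repeat(second, factor, axis=1)
    return np.concatenate([first, second_up], axis=1)

stripe = np.random.default_rng(1).random((8, 32, 3))       # 8 scanlines
first, second = pack_stripe(stripe, split_col=16)
restored = unpack_stripe(first, second)
print(first.shape, second.shape, restored.shape)            # packed vs unpacked
```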
  • Publication number: 20230412932
    Abstract: An imaging system includes first camera having negative distortion; second camera, second field of view of second camera being wider than first field of view of first camera, wherein first field of view fully overlaps with portion of second field of view, second camera having negative distortion at said portion and positive distortion at remaining portion; and processor(s) configured to: capture first image and second image; determine overlapping image segment and non-overlapping image segment of second image; and generate output image from first image and second image, wherein: inner image segment of output image is generated from at least one of: first image, overlapping image segment, and peripheral image segment of output image is generated from non-overlapping image segment.
    Type: Application
    Filed: May 20, 2022
    Publication date: December 21, 2023
    Applicant: Varjo Technologies Oy
    Inventor: Mikko Ollila
  • Publication number: 20230379594
    Abstract: An imaging system includes first camera; second camera, second field of view of second camera being wider than first field of view of first camera, wherein first field of view overlaps with portion of second field of view; and processor(s) configured to: capture first images and second images, wherein overlapping image segment and non-overlapping image segment of second image correspond to said portion and remaining portion of second field of view; determine blurred region(s) (B1, B2) of first image; and generate output image in manner that: inner image segment of output image is generated from: region(s) of overlapping image segment that corresponds to blurred region(s) of first image, and remaining region of first image that is not blurred, and peripheral image segment of output image is generated from non-overlapping image segment.
    Type: Application
    Filed: May 20, 2022
    Publication date: November 23, 2023
    Applicant: Varjo Technologies Oy
    Inventors: Mikko Ollila, Petteri Timonen
  • Publication number: 20230336944
    Abstract: Disclosed is a computer-implemented method comprising: tracking positions and orientations of devices (104, 106, 204a-204f, A-F, 402, 404) within real-world environment (300), each device comprising active sensor(s) (108, 110, 206a-206f); classifying devices into groups, based on positions and orientations of devices within real-world environment, wherein a group has devices whose active sensors are likely to interfere with each other; and controlling active sensors of devices in the group to operate by employing multiplexing.
    Type: Application
    Filed: April 14, 2022
    Publication date: October 19, 2023
    Applicant: Varjo Technologies Oy
    Inventors: Ville Timonen, Mika-Petteri Lundgren
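A hedged sketch of the classify-and-multiplex idea in publication 20230336944: devices that are close together and roughly facing each other are grouped, and devices within a group are assigned distinct time slots. The 5 m range, the facing test, and the choice of time-division multiplexing are all assumptions made for illustration.

```python
# Sketch: group devices whose active sensors are likely to interfere, then
# give each device in a group its own time slot.
import numpy as np

def likely_to_interfere(pos_a, dir_a, pos_b, dir_b,
                        max_range_m=5.0, cos_facing=-0.2):
    """Assume two active sensors may interfere if the devices are near each
    other and at least roughly pointed toward one another."""
    if np.linalg.norm(pos_a - pos_b) > max_range_m:
        return False
    to_b = (pos_b - pos_a) / np.linalg.norm(pos_b - pos_a)
    return np.dot(dir_a, to_b) > 0 and np.dot(dir_b, to_b) < cos_facing

def group_and_schedule(positions, directions):
    """Greedy pairwise grouping (a full union-find would be more robust),
    then one time slot per device within each group."""
    n = len(positions)
    labels = list(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if likely_to_interfere(positions[i], directions[i],
                                   positions[j], directions[j]):
                labels[j] = labels[i]
    slots, counters = {}, {}
    for dev, g in enumerate(labels):
        slots[dev] = counters.get(g, 0)      # time slot within the group
        counters[g] = counters.get(g, 0) + 1
    return labels, slots

pos = np.array([[0.0, 0, 0], [2.0, 0, 0], [40.0, 0, 0]])
dirs = np.array([[1.0, 0, 0], [-1.0, 0, 0], [1.0, 0, 0]])
print(group_and_schedule(pos, dirs))   # first two devices share a group
```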
  • Publication number: 20230334691
    Abstract: Disclosed is a system (100) comprising devices (102a, 102b, 200, 300, 304), each device comprising active illuminator, active sensor and processor, wherein processor (108a) of device (102a, 300) is configured to: project pattern of light onto surroundings (302), whilst detecting reflections of pattern of light off the surroundings; determine shapes of surfaces present in surroundings and distances of surfaces from pose of device; obtain pattern information indicative of other pattern(s) of light projected by other active illuminator(s) (104b) of other device(s) (102b, 304); detect reflections of other pattern(s) from pose of device; determine relative pose of other device(s) with respect to pose of device; and send, to server (110, 208), surface information indicative of shapes and distances of surfaces from pose of device, along with pose information indicative of relative pose of other device(s) with respect to pose of device.
    Type: Application
    Filed: April 14, 2022
    Publication date: October 19, 2023
    Applicant: Varjo Technologies Oy
    Inventors: Ari Antti Erik Peuhkurinen, Urho Konttori