Patents by Inventor Tatu V. J. Harviainen
Tatu V. J. Harviainen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11961264
Abstract: Systems and methods are described for compressing color information in point cloud data. In some embodiments, point cloud data includes point position information and point color information for each of a plurality of points. The point position information is provided to a neural network, and the neural network generates predicted color information (e.g., predicted luma and chroma values) for respective points in the point cloud. A prediction residual is generated to represent the difference between the predicted color information and the input point color information. The point position information (which may be in compressed form) and the prediction residual are encoded in a bitstream. In some embodiments, color hint data is encoded to improve color prediction.
Type: Grant
Filed: December 11, 2019
Date of Patent: April 16, 2024
Assignee: InterDigital VC Holdings, Inc.
Inventors: Tatu V. J. Harviainen, Louis Kerofsky, Ralph Neff
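The residual-coding idea in this abstract can be sketched in a few lines. This is an illustrative toy only: the linear `predict_color` function stands in for the patent's neural network, and all names and weights are invented for the example.

```python
def predict_color(position):
    """Toy stand-in for the neural color predictor (position -> luma)."""
    x, y, z = position
    return 0.5 * x + 0.3 * y + 0.2 * z  # arbitrary fixed weights

def encode_residuals(points):
    """Encode each point as (position, residual) instead of (position, color)."""
    return [(pos, color - predict_color(pos)) for pos, color in points]

def decode_colors(encoded):
    """Reconstruct colors by re-running the predictor and adding residuals."""
    return [(pos, predict_color(pos) + res) for pos, res in encoded]

# Round trip: only positions and (typically small) residuals need transmitting.
points = [((1.0, 0.0, 0.0), 0.6), ((0.0, 1.0, 0.0), 0.25)]
decoded = decode_colors(encode_residuals(points))
```

Because encoder and decoder run the same predictor on the same positions, the residuals carry only what the predictor gets wrong, which is what makes them cheaper to entropy-code than raw colors.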
-
Patent number: 11954789
Abstract: Systems and methods are described for rendering a 3D synthetic scene. A display client receives point cloud samples of the scene, where the point cloud samples include a point location, one or more viewing directions, and color information for each of the viewing directions. The point cloud samples may be generated by a server using ray tracing. The display client combines information from the point cloud samples with a locally generated rendering of at least a portion of the scene to generate a combined rendering of the scene, and the client displays the combined rendering. The number of point cloud samples may be adjusted adaptively based on performance metrics at the client.
Type: Grant
Filed: July 2, 2020
Date of Patent: April 9, 2024
Assignee: InterDigital VC Holdings, Inc.
Inventors: Tatu V. J. Harviainen, Marko Palviainen
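The last sentence of the abstract, adaptive adjustment of the sample count from client performance metrics, could look something like the following sketch. The control rule, thresholds, and names here are illustrative assumptions, not taken from the patent.

```python
def adjust_sample_count(current_count, frame_time_ms, target_ms=16.7,
                        min_count=1_000, max_count=200_000):
    """Scale the requested number of point cloud samples by how far the
    measured frame time is from the target, clamped to a sane range."""
    scaled = round(current_count * target_ms / frame_time_ms)
    return max(min_count, min(max_count, scaled))
```

For example, a client rendering at twice the target frame time would ask the server for roughly half as many samples on the next update, trading visual density for responsiveness.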
-
Publication number: 20240103795
Abstract: Systems and methods are described for capturing, using a forward-facing camera associated with an augmented reality (AR) head-mounted display (HMD), images of portions of first and second display devices in an environment, the first and second display devices displaying first and second portions of content related to an AR presentation, and displaying a third portion of content related to the AR presentation on the AR HMD, the third portion determined based upon the images of portions of the first and second display devices captured using the forward-facing camera. Moreover, the first and second display devices may be active stereo displays, and the AR HMD may simultaneously function as shutter glasses.
Type: Application
Filed: December 4, 2023
Publication date: March 28, 2024
Inventor: Tatu V. J. Harviainen
-
Publication number: 20240087217
Abstract: Systems and methods are described for providing spatial content using a hybrid format. In some embodiments, a client device receives, from a server, surface light field representations of a plurality of scene elements in a 3D scene, including a first scene element. The client device provides to the server an indication of a dynamic behavior of a second scene element different from the first scene element. Further, in response to the indication, the client device receives from the server information defining the first scene element in a 3D asset format. The client device then renders at least the first scene element in the 3D asset format.
Type: Application
Filed: November 13, 2023
Publication date: March 14, 2024
Inventor: Tatu V. J. Harviainen
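The hybrid-format handshake the abstract describes, pre-rendered surface light fields by default, full 3D assets when an element turns dynamic, can be sketched as a tiny state machine. The class, method names, and format labels below are hypothetical illustrations, not from the patent.

```python
class HybridSceneClient:
    """Toy client: elements start as surface light field (SLF) representations;
    an element reported as dynamic is swapped to a full 3D asset format."""

    def __init__(self, elements):
        # Every element initially uses the pre-rendered SLF representation.
        self.formats = {name: "surface_light_field" for name in elements}

    def report_dynamic(self, name):
        """Notify the server of dynamic behavior; it responds with a 3D asset."""
        self.formats[name] = "3d_asset"

    def render_format(self, name):
        """Return the representation the client would render this element in."""
        return self.formats[name]

client = HybridSceneClient(["chair", "ball"])
client.report_dynamic("ball")
```

The design point is that only elements that actually move pay the cost of full geometry streaming and local rendering; everything static stays in the cheaper pre-rendered form.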
-
Patent number: 11900532
Abstract: Systems and methods are described for providing spatial content using a hybrid format. In some embodiments, a client device receives, from a server, surface light field representations of a plurality of scene elements in a 3D scene, including a first scene element. The client device provides to the server an indication of a dynamic behavior of a second scene element different from the first scene element. Further, in response to the indication, the client device receives from the server information defining the first scene element in a 3D asset format. The client device then renders at least the first scene element in the 3D asset format.
Type: Grant
Filed: June 29, 2020
Date of Patent: February 13, 2024
Assignee: InterDigital VC Holdings, Inc.
Inventor: Tatu V. J. Harviainen
-
Patent number: 11893148
Abstract: Described herein are various embodiments for enabling realistic haptic feedback for virtual three-dimensional (3D) objects. A real proxy surface is provided for a virtual object, where the proxy surface may be a relief surface, such as a depth-compressed model of the virtual object. In an exemplary method, when a user touches the proxy surface, the position of the touch point on the surface is detected. The touch point is mapped to a corresponding point on the virtual object. A position of the virtual object is determined such that the touch point is in alignment with the corresponding point on the virtual object, and the virtual object is displayed at the determined position on an augmented reality display. As a result, a user touching the proxy surface may have the sensation of touching a solid version of the virtual 3D object instead of the proxy surface.
Type: Grant
Filed: June 2, 2020
Date of Patent: February 6, 2024
Assignee: InterDigital VC Holdings, Inc.
Inventors: Tatu V. J. Harviainen, Raimo J. Launonen
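The alignment step, positioning the virtual object so the touched proxy point coincides with its mapped model point, reduces to simple vector arithmetic. This is a minimal sketch under that reading; the function name and the assumption that the mapped point is expressed in the object's local coordinates are illustrative.

```python
def align_virtual_object(touch_point, corresponding_point_local):
    """Place the virtual object so that the touched proxy-surface point
    (world coordinates) coincides with the mapped point on the virtual
    model (object-local coordinates): the object's origin becomes the
    touch point minus the mapped point's local offset."""
    return tuple(t - c for t, c in zip(touch_point, corresponding_point_local))

# Touching world point (1, 2, 3), mapped to local point (0.5, 0, 1) on the model:
origin = align_virtual_object((1.0, 2.0, 3.0), (0.5, 0.0, 1.0))
```

Re-solving this each frame keeps the displayed object "glued" to the finger's contact point as the user slides across the relief surface.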
-
Patent number: 11868675
Abstract: Systems and methods are described for capturing, using a forward-facing camera associated with an augmented reality (AR) head-mounted display (HMD), images of portions of first and second display devices in an environment, the first and second display devices displaying first and second portions of content related to an AR presentation, and displaying a third portion of content related to the AR presentation on the AR HMD, the third portion determined based upon the images of portions of the first and second display devices captured using the forward-facing camera. Moreover, the first and second display devices may be active stereo displays, and the AR HMD may simultaneously function as shutter glasses.
Type: Grant
Filed: December 15, 2022
Date of Patent: January 9, 2024
Assignee: InterDigital VC Holdings, Inc.
Inventor: Tatu V. J. Harviainen
-
Publication number: 20230412865
Abstract: Some embodiments of an example method may include: receiving a manifest file for streaming content, the manifest file identifying one or more degrees of freedom representations of content; tracking available bandwidth; selecting a selected representation from the one or more degrees of freedom representations based on the available bandwidth; retrieving the selected representation; and rendering the selected representation. Some embodiments of the example method may include determining estimated download latency of the one or more degrees of freedom representations. Some embodiments of the example method may include tracking client capabilities. For some embodiments of the example method, selecting the selected representation may be based on the estimated download latency and/or the client capabilities.
Type: Application
Filed: September 5, 2023
Publication date: December 21, 2023
Inventor: Tatu V. J. Harviainen
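The selection step the abstract lists, picking a degrees-of-freedom representation from the manifest given measured bandwidth and an optional latency budget, can be sketched as below. The dict fields and units (bitrate and size in megabits, latency in seconds) are illustrative assumptions, not the patent's manifest schema.

```python
def select_representation(representations, available_bandwidth, max_latency=None):
    """Pick the highest-bitrate representation that fits the measured
    bandwidth and, optionally, an estimated-download-latency budget."""
    feasible = [r for r in representations if r["bitrate"] <= available_bandwidth]
    if max_latency is not None:
        # Estimated download latency = representation size / bandwidth.
        feasible = [r for r in feasible
                    if r["size"] / available_bandwidth <= max_latency]
    if not feasible:
        # Fall back to the cheapest representation rather than stalling.
        return min(representations, key=lambda r: r["bitrate"])
    return max(feasible, key=lambda r: r["bitrate"])

reps = [{"dof": "3DoF", "bitrate": 5.0, "size": 10.0},
        {"dof": "6DoF", "bitrate": 20.0, "size": 40.0}]
best = select_representation(reps, available_bandwidth=25.0)
```

This mirrors the adaptive-bitrate pattern familiar from DASH-style streaming, extended so the axis of adaptation is the content's degrees of freedom rather than only its resolution.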
-
Publication number: 20230377273
Abstract: Systems and methods are described for mirroring content from a 2D display device to a 3D display device. Techniques are provided that include receiving information identifying a virtual 3D object and first scale information, where the virtual 3D object is displayed on a 2D display device to a user using the first scale information. A second scale is then determined based on the first scale information, and a 3D representation of the virtual 3D object is displayed on a 3D display device using the second scale. Provided techniques further include receiving first orientation information, where the virtual 3D object is further displayed using the first orientation information. A second orientation is determined based on the first orientation information and on a received location of the user relative to the 3D display device, and the 3D representation is further displayed using the second orientation.
Type: Application
Filed: July 25, 2023
Publication date: November 23, 2023
Inventors: Tatu V. J. Harviainen, Marko Palviainen
-
Patent number: 11816786
Abstract: Some embodiments of an example method disclosed herein may include receiving point cloud data representing one or more three-dimensional objects; receiving a viewpoint of the point cloud data; selecting a selected object from the one or more three-dimensional objects using the viewpoint; retrieving a neural network model for the selected object; generating level of detail data for the selected object using the neural network model; and replacing, within the point cloud data, points corresponding to the selected object with the level of detail data.
Type: Grant
Filed: April 14, 2022
Date of Patent: November 14, 2023
Assignee: InterDigital Madison Patent Holdings, SAS
Inventor: Tatu V. J. Harviainen
-
Patent number: 11741673
Abstract: Systems and methods are described for mirroring 3D content from a first display (which may be a handheld 2D display) to a second display (which may be a 3D display, such as a light field display). The 3D content is initially displayed (e.g., as a 2D projection) on the first display. The relative positions and/or orientations of the first and second displays are determined. The position of a user viewing the content may also be determined, or may be inferred from the position and orientation of the first display. The second display is provided with information used to display the 3D content with a size and/or orientation that preserves the original apparent size and/or apparent orientation of that content from the perspective of the user.
Type: Grant
Filed: November 26, 2019
Date of Patent: August 29, 2023
Assignee: InterDigital Madison Patent Holdings, SAS
Inventors: Tatu V. J. Harviainen, Marko Palviainen
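Preserving apparent size across displays follows from the small-angle relation that apparent (angular) size is roughly physical size divided by viewing distance. The sketch below makes that one-line rule explicit; the function name and unit conventions (meters) are illustrative, not from the patent.

```python
def mirrored_size(source_size, user_to_source_dist, user_to_target_dist):
    """Scale content on the target display so its apparent (angular) size
    for the user matches what it had on the source display:
    apparent size ~ physical size / viewing distance, so
    size2 = size1 * d2 / d1 keeps the ratio constant."""
    return source_size * user_to_target_dist / user_to_source_dist

# A 10 cm object on a phone held 0.5 m away, mirrored to a wall
# display 2 m away, should be drawn 40 cm tall to look the same size.
size_on_wall = mirrored_size(0.1, 0.5, 2.0)
```

The same distance ratio, applied to the display-to-display transform, handles the apparent-orientation half of the problem.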
-
Patent number: 11722718
Abstract: Some embodiments of an example method may include: receiving a manifest file for streaming content, the manifest file identifying one or more degrees of freedom representations of content; tracking available bandwidth; selecting a selected representation from the one or more degrees of freedom representations based on the available bandwidth; retrieving the selected representation; and rendering the selected representation. Some embodiments of the example method may include determining estimated download latency of the one or more degrees of freedom representations. Some embodiments of the example method may include tracking client capabilities. For some embodiments of the example method, selecting the selected representation may be based on the estimated download latency and/or the client capabilities.
Type: Grant
Filed: January 17, 2020
Date of Patent: August 8, 2023
Assignee: InterDigital VC Holdings, Inc.
Inventor: Tatu V. J. Harviainen
-
Patent number: 11711504
Abstract: Systems and methods are described for simulating motion parallax in 360-degree video. In an exemplary embodiment for producing video content, a method includes: obtaining a source video; determining, based on information received from a client device, a selected number of depth layers; producing, from the source video, a plurality of depth layer videos corresponding to the selected number of depth layers, wherein each depth layer video is associated with at least one respective depth value and includes regions of the source video having depth values corresponding to that associated depth value; and sending the plurality of depth layer videos to the client device.
Type: Grant
Filed: November 22, 2021
Date of Patent: July 25, 2023
Assignee: InterDigital VC Holdings, Inc.
Inventors: Tatu V. J. Harviainen, Patrick Thomas Igoe
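The core production step, partitioning source-video regions into per-depth layers, can be sketched as a nearest-layer assignment over depth values. The data layout (tuples of `(x, y, depth, value)`) and function name are illustrative assumptions, not the patent's pipeline.

```python
def split_into_depth_layers(pixels, layer_depths):
    """Assign each (x, y, depth, value) sample to the layer whose
    representative depth is nearest, yielding one sparse layer per depth.
    The client can then shift layers by different amounts to fake parallax."""
    layers = {d: [] for d in layer_depths}
    for x, y, depth, value in pixels:
        nearest = min(layer_depths, key=lambda d: abs(d - depth))
        layers[nearest].append((x, y, value))
    return layers

# Two samples, one near (1.2 m) and one far (4.9 m), two layers at 1 m and 5 m:
layers = split_into_depth_layers([(0, 0, 1.2, "near"), (1, 0, 4.9, "far")],
                                 [1.0, 5.0])
```

At playback, translating the near layer more than the far one as the viewer's head moves approximates the motion parallax a true 3D scene would show.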
-
Publication number: 20230119930
Abstract: Systems and methods are described for capturing, using a forward-facing camera associated with an augmented reality (AR) head-mounted display (HMD), images of portions of first and second display devices in an environment, the first and second display devices displaying first and second portions of content related to an AR presentation, and displaying a third portion of content related to the AR presentation on the AR HMD, the third portion determined based upon the images of portions of the first and second display devices captured using the forward-facing camera. Moreover, the first and second display devices may be active stereo displays, and the AR HMD may simultaneously function as shutter glasses.
Type: Application
Filed: December 15, 2022
Publication date: April 20, 2023
Inventor: Tatu V. J. Harviainen
-
Publication number: 20230032824
Abstract: Some embodiments of a method may include tracking a distance between a local VR device and a remote VR device; selecting an experience-sharing mode from a plurality of experience-sharing modes based on the tracked distance; and providing a different degree of immersive user experience to a user at the local VR device depending on the selected experience-sharing mode, wherein as the distance between the local VR device and the remote VR device decreases, the experience-sharing mode changes and the degree of immersive user experience increases. Some embodiments of another method may include detecting a gesture made by a user; selecting an immersiveness mode from a plurality of immersiveness modes based on the detected gesture; and providing a degree of immersive user experience to a local VR device user depending on the selected immersiveness mode.
Type: Application
Filed: October 14, 2022
Publication date: February 2, 2023
Inventors: Tatu V. J. Harviainen, Marko Palviainen
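The distance-to-mode mapping the abstract describes is essentially a thresholded classifier over the tracked distance. A minimal sketch, with mode names and distance thresholds that are purely illustrative:

```python
def select_sharing_mode(distance_m):
    """Map the tracked distance between local and remote VR devices to an
    experience-sharing mode; immersion increases as the devices get closer.
    Thresholds and mode names are invented for this example."""
    if distance_m < 2.0:
        return "full_shared_vr"          # most immersive: shared scene
    if distance_m < 10.0:
        return "shared_audio_and_avatars"  # partial sharing
    return "notifications_only"          # least immersive

mode_near = select_sharing_mode(1.5)
mode_far = select_sharing_mode(50.0)
```

Hysteresis around the thresholds (not shown) would keep the mode from flickering when the distance hovers near a boundary.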
-
Patent number: 11544031
Abstract: Systems and methods are described for capturing, using a forward-facing camera associated with an augmented reality (AR) head-mounted display (HMD), images of portions of first and second display devices in an environment, the first and second display devices displaying first and second portions of content related to an AR presentation, and displaying a third portion of content related to the AR presentation on the AR HMD, the third portion determined based upon the images of portions of the first and second display devices captured using the forward-facing camera. Moreover, the first and second display devices may be active stereo displays, and the AR HMD may simultaneously function as shutter glasses.
Type: Grant
Filed: November 25, 2019
Date of Patent: January 3, 2023
Assignee: PCMS Holdings, Inc.
Inventor: Tatu V. J. Harviainen
-
Publication number: 20220394355
Abstract: Systems and methods are described for displaying 360-degree video using a viewing direction and/or a viewpoint position that is determined based on a user-selected level of automation. If full automation is selected by the user, the viewing direction and/or viewpoint position is determined by a defined viewing path. If partial automation is selected, the viewing direction and/or viewpoint position is determined by user input in combination with the defined viewing path. For example, a high-pass-filtered version of the user input may be added to the defined viewing path to obtain a partially automated viewing direction and/or viewpoint position.
Type: Application
Filed: August 12, 2022
Publication date: December 8, 2022
Inventors: Tatu V. J. Harviainen, Marko Palviainen
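The partial-automation blend in the abstract's example, a high-pass-filtered version of the user input added to the authored path, can be sketched with a one-pole high-pass filter on a 1-D viewing angle. The filter constant and function name are illustrative; the patent does not specify a particular filter.

```python
def blended_direction(path_direction, user_inputs, alpha=0.8):
    """Partial automation: a one-pole high-pass filter keeps only the quick,
    deliberate part of the user's head movement (sustained offsets decay away)
    and adds it on top of the pre-authored viewing direction. 1-D angles,
    degrees, for clarity."""
    prev_in, prev_hp = user_inputs[0], 0.0
    for u in user_inputs[1:]:
        prev_hp = alpha * (prev_hp + u - prev_in)  # standard HP recurrence
        prev_in = u
    return path_direction + prev_hp

# A user holding their head still contributes nothing; the tour proceeds
# along the authored path. A quick glance adds a temporary offset.
steady = blended_direction(90.0, [10.0, 10.0, 10.0])
glance = blended_direction(90.0, [0.0, 5.0])
```

Because the filter rejects the DC component, the view always drifts back toward the authored path after the user stops actively steering, which is what keeps the experience "guided" rather than fully manual.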
-
Patent number: 11493999
Abstract: Some embodiments of a method may include tracking a distance between a local VR device and a remote VR device; selecting an experience-sharing mode from a plurality of experience-sharing modes based on the tracked distance; and providing a different degree of immersive user experience to a user at the local VR device depending on the selected experience-sharing mode, wherein as the distance between the local VR device and the remote VR device decreases, the experience-sharing mode changes and the degree of immersive user experience increases. Some embodiments of another method may include detecting a gesture made by a user; selecting an immersiveness mode from a plurality of immersiveness modes based on the detected gesture; and providing a degree of immersive user experience to a local VR device user depending on the selected immersiveness mode.
Type: Grant
Filed: April 26, 2019
Date of Patent: November 8, 2022
Assignee: PCMS Holdings, Inc.
Inventors: Tatu V. J. Harviainen, Marko Palviainen
-
Publication number: 20220351456
Abstract: Systems and methods are described for rendering a 3D synthetic scene. A display client receives point cloud samples of the scene, where the point cloud samples include a point location, one or more viewing directions, and color information for each of the viewing directions. The point cloud samples may be generated by a server using ray tracing. The display client combines information from the point cloud samples with a locally generated rendering of at least a portion of the scene to generate a combined rendering of the scene, and the client displays the combined rendering. The number of point cloud samples may be adjusted adaptively based on performance metrics at the client.
Type: Application
Filed: July 2, 2020
Publication date: November 3, 2022
Inventors: Tatu V. J. Harviainen, Marko Palviainen
-
Publication number: 20220309743
Abstract: Systems and methods are described for providing spatial content using a hybrid format. In some embodiments, a client device receives, from a server, surface light field representations of a plurality of scene elements in a 3D scene, including a first scene element. The client device provides to the server an indication of a dynamic behavior of a second scene element different from the first scene element. Further, in response to the indication, the client device receives from the server information defining the first scene element in a 3D asset format. The client device then renders at least the first scene element in the 3D asset format.
Type: Application
Filed: June 29, 2020
Publication date: September 29, 2022
Inventor: Tatu V. J. Harviainen