Abstract: A display control device for a vehicle to control a display of a virtual image to be superimposed on a superimposition target in a foreground of an occupant includes: a condition determination unit that determines whether a predetermined interruption condition is satisfied; and a display control unit that interrupts the display of the virtual image when the condition determination unit determines that the interruption condition is satisfied.
Abstract: An information processing device and an information processing method are provided. The information processing device includes a display controller that controls display for a first user on the basis of a background image arranged in a virtual space with reference to a position of the first user in the virtual space, and an object related to a second user arranged in the virtual space so as to maintain a relative positional relationship between the first user and the second user in the virtual space.
Abstract: The present disclosure relates to a method for playing a multidimensional reaction-type image. The method includes at least: receiving, by a computer, an input manipulation to an object from a user; and extracting, by the computer, an image frame matched to a detailed cell corresponding to the location information and depth information of the input manipulation received at each playback time point. The depth information is information about the pressure strength of the input manipulation applied to the reaction-type image, or the length of time for which the input manipulation is applied. The location information is information about the location in a two-dimensional space at which the input manipulation is applied to the reaction-type image.
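As a rough illustration of the frame lookup described in the abstract above, a minimal sketch in Python, assuming a fixed grid of detailed cells and quantized depth levels (grid size, image size, depth range, and all function names are illustrative, not from the patent):

```python
# Map an input manipulation (x, y, depth) to a detailed-cell index, then use
# that index to pick the matching image frame. All sizes are assumptions.

def cell_index(x, y, depth, grid_w=10, grid_h=10, depth_levels=5,
               img_w=1920, img_h=1080, max_depth=1.0):
    """Quantize location and depth into a single detailed-cell index."""
    cx = min(int(x / img_w * grid_w), grid_w - 1)
    cy = min(int(y / img_h * grid_h), grid_h - 1)
    cz = min(int(depth / max_depth * depth_levels), depth_levels - 1)
    return (cy * grid_w + cx) * depth_levels + cz

def frame_for_input(frame_table, x, y, depth):
    """Return the image frame matched to the cell for this input."""
    return frame_table[cell_index(x, y, depth)]
```

At each playback time point, the received manipulation would be fed through `frame_for_input` to select the frame to display.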
Abstract: There are disclosed techniques, systems, methods and instructions for a virtual reality, VR, augmented reality, AR, mixed reality, MR, or 360-degree video environment. In one example, the system includes at least one media video decoder configured to decode video signals from video streams for the representation of VR, AR, MR or 360-degree video environment scenes to a user. The system includes at least one audio decoder configured to decode audio signals from at least one audio stream. The system is configured to request, from a server, at least one audio stream and/or one audio element of an audio stream and/or one adaptation set on the basis of at least the user's current viewport and/or head orientation and/or movement data and/or interaction metadata and/or virtual positional data.
Type:
Grant
Filed:
April 10, 2020
Date of Patent:
June 7, 2022
Assignee:
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.
Inventors:
Adrian Murtaza, Harald Fuchs, Bernd Czelhan, Jan Plogsties, Matteo Agnelli, Ingo Hofmann
Abstract: The connected avatar is a virtual participant or assistant in an interactive game or entertainment attraction that recognizes players or items carried by players, and visually and/or audibly engages with the player throughout the interactive game or entertainment attraction to provide assistance, aid, guidance, or direction to them through the environment, provide items, information or clues, and/or pose challenges or games. The player may summon or enlist the connected avatar to perform tasks or retrieve information or items. In one embodiment, the connected avatar lives in a virtual parallel world that is connected to the real world through portals (such as mirrors, windows, or holographic displays) that can be accessed throughout the interactive game or entertainment attraction. The connected avatar combines the two worlds in an interactive manner and contributes to the connectivity of the player to the interactive game or entertainment attraction.
Abstract: A method comprising: determining a portion of a visual scene, wherein the portion is dependent upon a position of a sound source within the visual scene; and enabling adaptation of the visual scene to provide, via a display, spatially-limited visual highlighting of the portion of the visual scene.
Type:
Grant
Filed:
December 28, 2017
Date of Patent:
May 31, 2022
Assignee:
Nokia Technologies Oy
Inventors:
Antti Eronen, Arto Lehtiniemi, Jussi Leppänen, Juha Arrasvuori
Abstract: Embodiments of the present invention provide a system for generating and displaying tailored advertisements in a mixed reality environment. The system is configured for continuously identifying one or more objects in a mixed reality environment, identifying that the one or more objects match advertising targets, generating a tailored advertisement, transmitting the tailored advertisement to the user device, causing the user device to display the tailored advertisement, determining that the user is interacting with the tailored advertisement, and capturing one or more metrics associated with the interaction of the user with the tailored advertisement.
Abstract: Disclosed are an extended reality (XR) device and a control method thereof, which are applicable to the 5G communication, robotics, autonomous driving, and AI technology fields.
Abstract: A method and system for enabling a self-localizing mobile device to localize other self-localizing mobile devices having different reference frames is disclosed. Multiple self-localizing mobile devices are configured to survey an environment to generate a three-dimensional map of the environment using simultaneous localization and mapping (SLAM) techniques. The mobile devices are equipped with wireless transceivers, such as Ultra-wideband radios, for measuring distances between the mobile devices using wireless ranging techniques. Based on the measured distances and self-localized positions in the environment corresponding to each measured distance, at least one of the mobile devices is configured to determine relative rotational and translational transformations between the different reference frames of the mobile devices.
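Once the abstract's pairing of wireless range measurements with self-localized SLAM positions yields corresponding positions in the two devices' map frames, the relative rotation and translation can be recovered with a standard least-squares (Kabsch) alignment. A minimal sketch, assuming the corresponding positions have already been matched up (the alignment method is a common technique, not a detail confirmed by the patent):

```python
import numpy as np

def relative_transform(pts_a, pts_b):
    """Least-squares rigid transform (R, t) such that pts_b ~= R @ pts_a + t.

    pts_a, pts_b: (N, 3) arrays of corresponding positions in the two
    reference frames (Kabsch algorithm via SVD of the cross-covariance).
    """
    pts_a, pts_b = np.asarray(pts_a, float), np.asarray(pts_b, float)
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)
    H = (pts_a - ca).T @ (pts_b - cb)        # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```

With the transform in hand, one device can map the other device's self-localized positions into its own reference frame.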
Abstract: Described herein are apparatuses, systems and methods for generating an interactive three-dimensional (“3D”) environment using virtual depth. A method comprises receiving a pre-rendered media file comprising a plurality of frames, receiving depth data related to the media file, wherein the depth data corresponds to each of the plurality of frames, creating an invisible three-dimensional (“3D”) framework of a first frame of the media file based on the corresponding depth data, and rendering a new first frame in real time to include the pre-rendered first frame, one or more virtual visible 3D objects and the invisible 3D framework.
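The role of the invisible 3D framework in the abstract above can be illustrated with a per-pixel depth test: a virtual object is drawn only where it is nearer to the camera than the framework built from the pre-rendered frame's depth data. A minimal sketch under that assumption (the names and the per-pixel formulation are illustrative):

```python
# Composite one pixel of a virtual object over the pre-rendered frame,
# using the invisible framework's depth to decide occlusion.

def composite_pixel(video_rgb, framework_depth, virtual_rgb, virtual_depth):
    """Show the virtual object's pixel only if it is nearer than the scenery."""
    if virtual_rgb is not None and virtual_depth < framework_depth:
        return virtual_rgb   # virtual object lies in front of the scene
    return video_rgb         # pre-rendered frame wins (object occluded or absent)
```

Run per pixel, this lets visible virtual 3D objects pass behind and in front of pre-rendered scenery without ever drawing the framework itself.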
Abstract: An augmented reality customer interaction system includes a transparent panel having a first side and a second side that is opposite to the first side, and a camera device configured to capture visual data from an area adjacent to the second side of the transparent panel. The visual data includes identifying features of a customer located in the area with respect to the second side of the transparent panel. The system further includes a projection system configured to project information on the first side of the transparent panel. The information projected on the first side of the transparent panel may include customer interaction data retrieved from a data store based on the identifying features of the customer.
Type:
Grant
Filed:
June 29, 2020
Date of Patent:
April 5, 2022
Assignee:
Truist Bank
Inventors:
Michael Anthony Dascola, Jacob Atticus Grady, Kaitlyn Stahl
Abstract: An information presentation device according to an embodiment includes a contour extraction unit, an abstraction processing unit, and a contour correction unit. The contour extraction unit extracts a contour of each structural object included in data showing a layout of a plurality of structural objects. The abstraction processing unit abstracts a contour of each structural object extracted by the contour extraction unit and draws the abstracted contour on a plane grid surface in which grid lines in two directions orthogonal to each other are drawn. The contour correction unit corrects, among contour lines to constitute a contour abstracted by the abstraction processing unit, a contour line deviating from both of the grid lines in the two directions so as to match a grid line in at least one direction of the grid lines in the two directions.
Type:
Grant
Filed:
September 23, 2019
Date of Patent:
April 5, 2022
Assignees:
Kabushiki Kaisha Toshiba, Toshiba Digital Solutions Corporation
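The correction step in the abstract above, snapping a contour line that matches neither grid direction onto a grid line in at least one direction, can be sketched as follows; the grid spacing, the nearest-line rule, and the function names are assumptions, not details from the patent:

```python
# Snap a nearly-axis-aligned contour segment onto the closest grid line
# in one of the two orthogonal grid directions.

def snap_to_grid(value, spacing=10.0):
    """Snap a coordinate to the nearest grid line."""
    return round(value / spacing) * spacing

def correct_segment(p1, p2, spacing=10.0):
    """Align a contour segment with a grid line in at least one direction."""
    (x1, y1), (x2, y2) = p1, p2
    if abs(x2 - x1) < abs(y2 - y1):               # closer to vertical: fix x
        x = snap_to_grid((x1 + x2) / 2, spacing)
        return (x, y1), (x, y2)
    y = snap_to_grid((y1 + y2) / 2, spacing)      # closer to horizontal: fix y
    return (x1, y), (x2, y)
```

A slightly tilted, off-grid segment thus becomes exactly vertical (or horizontal) on a grid line, which is the abstracted, grid-matched contour the abstract describes.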
Abstract: Described are various embodiments of a light field device, pixel rendering method therefor, and vision perception system and method using same. One embodiment describes a method to adjust user perception of an image portion to be rendered via a set of pixels and a corresponding array of light field shaping elements (LFSE), the method comprising: projecting an adjusted image ray trace between a given pixel and a user pupil location to intersect an adjusted image location for a given perceived image depth given a direction of a light field emanated by the given pixel based on a given LFSE intersected thereby; upon the adjusted image ray trace intersecting a given image portion associated with the given perceived image depth, associating with the given pixel an adjusted image portion value designated for the adjusted image location based on the intersection; and rendering for each given pixel the adjusted image portion value associated therewith.
Type:
Grant
Filed:
April 30, 2021
Date of Patent:
March 29, 2022
Assignee:
Evolution Optiks Limited
Inventors:
Guillaume Lussier, Raul Mihali, Yaiza Garcia, Matej Goc, Daniel Gotsch
Abstract: Systems, devices, media, and methods are presented for object detection and inserting graphical elements into an image stream in response to detecting the object. The systems and methods detect an object of interest in received frames of a video stream. The systems and methods identify a bounding box for the object of interest and estimate a three-dimensional position of the object of interest based on a scale of the object of interest. The systems and methods generate one or more graphical elements having a size based on the scale of the object of interest and a position based on the three-dimensional position estimated for the object of interest. The one or more graphical elements are generated within the video stream to form a modified video stream. The systems and methods cause presentation of the modified video stream including the object of interest and the one or more graphical elements.
Type:
Grant
Filed:
April 29, 2020
Date of Patent:
March 29, 2022
Assignee:
Snap Inc.
Inventors:
Travis Chen, Samuel Edward Hare, Yuncheng Li, Tony Mathew, Jonathan Solichin, Jianchao Yang, Ning Zhang
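The abstract above estimates a 3D position from the scale of the detected object. One common way to do that, assuming a pinhole camera model and a known real-world object size (both assumptions; the patent only states that depth is inferred from scale), is a sketch like this:

```python
# Estimate a 3-D position (metres) from a 2-D bounding box, assuming a
# pinhole camera with focal length focal_px and principal point (cx, cy).

def estimate_position(bbox, real_height_m, focal_px, cx, cy):
    """bbox = (left, top, width, height) in pixels; returns (X, Y, Z)."""
    left, top, w, h = bbox
    Z = focal_px * real_height_m / h      # larger on screen -> closer to camera
    u = left + w / 2.0                    # bounding-box centre in pixels
    v = top + h / 2.0
    X = (u - cx) * Z / focal_px           # back-project the centre at depth Z
    Y = (v - cy) * Z / focal_px
    return X, Y, Z
```

The estimated position and scale then drive the size and placement of the inserted graphical elements.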
Abstract: A technique for performing rasterization and pixel shading with decoupled resolution is provided herein. The technique involves performing rasterization as normal to generate quads. The quads are accumulated into a tile buffer. A shading rate is determined for the contents of the tile buffer. If the shading rate is a sub-sampling shading rate, then the quads in the tile buffer are down-sampled, which reduces the amount of work to be performed by a pixel shader. The shaded down-sampled quads are then restored to the resolution of the render target. If the shading rate is a super-sampling shading rate, then the quads in the tile buffer are up-sampled. The results of the shaded down-sampled or up-sampled quads are written to the render target.
Type:
Grant
Filed:
December 20, 2018
Date of Patent:
March 15, 2022
Assignee:
Advanced Micro Devices, Inc.
Inventors:
Skyler Jonathon Saleh, Andrew S. Pomianowski
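The decoupled-resolution idea in the abstract above can be sketched at toy scale: samples accumulated in a tile buffer are down-sampled before shading when the tile's shading rate is coarse, then the shaded colours are replicated back to render-target resolution. Treating the tile as a simple 2D grid of samples (the real hardware operates on 2x2 quads; all names here are illustrative):

```python
# Down-sample a tile before pixel shading, then restore full resolution.

def downsample_tile(tile, factor):
    """Keep one sample per factor x factor block (fewer pixel-shader runs)."""
    return [row[::factor] for row in tile[::factor]]

def upsample_tile(shaded, factor):
    """Replicate each shaded sample back to render-target resolution."""
    out = []
    for row in shaded:
        expanded = [c for c in row for _ in range(factor)]
        out.extend([list(expanded) for _ in range(factor)])
    return out
```

A 2x sub-sampling rate shades a quarter of the samples; super-sampling would go the other way, shading more samples per render-target pixel.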
Abstract: A mobile device is fitted with an extended reality (XR) software application program executing on a processor within an XR system, and optionally a camera. Via the XR software application program, various techniques are performed for interacting with a physical object via the XR environment. In particular, the XR software application program generates and displays visual representations of real-time metric data received from a data intake and query system along with auxiliary data received from an asset management system. In addition, the XR software application program detects user interactions with the XR environment. In response, the XR software application generates messages directed to the asset management system. The messages include commands to update the auxiliary data associated with the physical object.
Type:
Grant
Filed:
October 18, 2019
Date of Patent:
March 15, 2022
Assignee:
SPLUNK INC.
Inventors:
Devin Bhushan, Jesse Chor, Sammy Lee, Glen Wong
Abstract: Disclosed are a binocular see-through AR head-mounted device and an information displaying method thereof. A sight mapping relationship is preset in the head-mounted device, and the user's spatial gaze information is tracked and calculated by a sight tracking system. Virtual information to be displayed is rendered on the left and right lines of sight of the eyes, on the basis of the binocular see-through AR head-mounted device's virtual-image imaging principle and the principle of binocular vision, thus accurately overlapping the virtual information near the position of the eyes' fixation point, allowing a high degree of integration of the virtual information with the environment, and implementing augmented reality in the true sense. The present invention provides a simple solution, requires only the sight tracking system to complete the process, obviates the need for excessive hardware facilities, and is inexpensive.
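The binocular placement described above can be illustrated with the standard stereo relation disparity = focal x IPD / depth: virtual content is offset horizontally in the left and right displays so that it fuses at the fixation depth. The interpupillary distance and focal length below are assumptions; the patent's actual sight-mapping relationship is device-specific.

```python
# Horizontal pixel offsets for the left/right virtual images so the
# content fuses at the given fixation depth (pinhole-style assumption).

def eye_offsets(fixation_depth_m, ipd_m=0.064, focal_px=1200.0):
    """Return (left_offset_px, right_offset_px) for a fused virtual image."""
    disparity = focal_px * ipd_m / fixation_depth_m
    return -disparity / 2.0, disparity / 2.0
```

Nearer fixation points get larger disparities, pulling the fused virtual image closer to the viewer.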
Abstract: In some embodiments, a method comprises obtaining a pipeline of operations, the pipeline of operations including a plurality of functions providing any of one or more modification operations or visualization operations for a plurality of datasets. A first dynamic visualization of the pipeline of operations at a first level of granularity is generated. A second dynamic visualization of the pipeline of operations at a second level of granularity is generated in response to user input.
Type:
Grant
Filed:
October 10, 2019
Date of Patent:
February 22, 2022
Assignee:
Palantir Technologies Inc.
Inventors:
Salar Al Khafaji, James Thompson, Joseph Hashim, Joseph Rafidi, Parvathy Menon, Patrick Szmucer, Robert Kruszewski, Slawomir Mucha, Tyler Uhlenkamp, Vilmos Ioo
Abstract: An apparatus such as a head-mounted display (HMD) may have a camera for capturing a visual scene for presentation via the HMD. A user of the apparatus may be operating a second, physical camera for capturing video or still images within the visual scene. The HMD may generate an augmented reality (AR) experience by presenting an AR view frustum representative of the actual view frustum of the physical camera. The field of view of the user viewing the captured visual scene via the AR experience is generally larger than the AR view frustum, allowing the user to avoid unnatural tracking movements and/or hunting for subjects.
Abstract: Techniques are disclosed for generating a three-dimensional (3D) visualization of data in an extended reality (XR) environment. One embodiment provides a computer-implemented method that includes receiving, via an input device, a repositioning of a first panel displayed within an XR environment and determining that, subsequent to the repositioning, at least one portion of the first panel overlaps with a second panel displayed within the XR environment. The method further includes, subsequent to the determination, generating a first 3D visualization of first data associated with the first panel and second data associated with the second panel. In addition, the method includes causing the first 3D visualization to be displayed within the XR environment.
Type:
Grant
Filed:
October 18, 2019
Date of Patent:
January 4, 2022
Assignee:
SPLUNK INC.
Inventors:
Samuel John Angelo Alberico, Jesse Chor, Kelly Kong, Ian Slattery, Glen Wong
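The overlap check that triggers the merged 3D visualization in the abstract above can be sketched as an axis-aligned rectangle test, treating each panel as (left, top, width, height); panel geometry in the actual XR environment is of course richer, and these names are illustrative:

```python
# Determine whether a repositioned panel overlaps another panel.

def panels_overlap(a, b):
    """True if rectangles a and b, given as (left, top, w, h), share any area."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah
```

When the test fires after a repositioning, the data behind both panels would be combined into the single 3D visualization.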