Abstract: Systems and methods for determining the physical location of a device of a user of an augmented reality environment corresponding to a physical space. The systems and methods involve: requesting and receiving a list of participating users having a GPS location within a predetermined radius of a first device; sending advertising and scanning beacons, via a first wireless network, to generate a second list of devices present in the physical space; performing simultaneous localization and mapping (SLAM) using the participating devices of the second list; generating a third list based at least partly on a Bluetooth connection between one or more participating devices of the second list; and identifying the participating devices of the third list.
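The successive narrowing the abstract describes (GPS radius, beacon response, Bluetooth reachability) can be sketched as a series of list filters. All names, the dictionary layout, and the flat-distance helper below are illustrative assumptions, not taken from the patent:

```python
import math

def distance_m(a, b):
    # Flat-plane approximation of distance for short ranges (hypothetical helper).
    return math.hypot(a[0] - b[0], a[1] - b[1])

def narrow_candidates(all_users, first_device_pos, radius_m, beacon_responders, bluetooth_peers):
    """Mirror the three lists in the abstract as successive filters."""
    # First list: users whose GPS location lies within the radius of the first device.
    first = [u for u in all_users if distance_m(u["pos"], first_device_pos) <= radius_m]
    # Second list: of those, devices that answered advertising/scanning beacons
    # on the local wireless network, i.e. are actually present in the space.
    second = [u for u in first if u["id"] in beacon_responders]
    # Third list: of those, devices reachable over a Bluetooth connection.
    third = [u for u in second if u["id"] in bluetooth_peers]
    return first, second, third
```

In a real system the second step would feed SLAM; here the filters only illustrate how each list is a subset of the previous one.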
February 24, 2020
Date of Patent: November 24, 2020
Michael Conn McIntyre, Jr., Edward Young Zhang, Vadim Dagman
Abstract: Techniques are described for virtual representation creation and display. Processing circuitry may identify substantially transparent pixels in a virtual hairstyle, identify a first set of pixels that are away from the identified substantially transparent pixels, increase an opacity level for a second set of pixels that excludes the first set of pixels by a first amount, and generate the virtual hairstyle based on the first set of pixels and the second set of pixels having the increased opacity level. With the generated virtual hairstyle, processing circuitry of a personal computing device may blend, in a first pass, one or more pixels of the version of the generated virtual hairstyle having an opacity level greater than or equal to a threshold opacity level, and blend, in a second pass, one or more pixels of the version of the generated virtual hairstyle having an opacity level less than the threshold opacity level.
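The two-pass blend the abstract describes, compositing pixels at or above a threshold opacity before the translucent remainder, can be sketched over single-channel values. The threshold value and data layout are assumptions for illustration:

```python
def blend_over(dst, src, alpha):
    # Standard "over" compositing for one channel value.
    return src * alpha + dst * (1.0 - alpha)

def two_pass_blend(background, hair_pixels, threshold=0.9):
    """Illustrative two-pass blend of a virtual hairstyle over a background.
    `hair_pixels` maps (x, y) -> (value, alpha); threshold is hypothetical."""
    out = dict(background)
    # First pass: pixels whose opacity is greater than or equal to the threshold.
    for (x, y), (value, alpha) in hair_pixels.items():
        if alpha >= threshold:
            out[(x, y)] = blend_over(out.get((x, y), 0.0), value, alpha)
    # Second pass: remaining translucent pixels blend over the first-pass result.
    for (x, y), (value, alpha) in hair_pixels.items():
        if alpha < threshold:
            out[(x, y)] = blend_over(out.get((x, y), 0.0), value, alpha)
    return out
```

Splitting opaque from translucent pixels this way is a common strategy to keep blending order-independent for the opaque majority.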
Abstract: There is provided a display control system including a plurality of display units, an imaging unit configured to capture a subject, a predictor configured to predict an action of the subject according to a captured image captured by the imaging unit, a guide image generator configured to generate a guide image that guides the subject according to a prediction result from the predictor, and a display controller configured to, on the basis of the prediction result from the predictor, select a display unit capable of displaying an image at a position corresponding to the subject from the plurality of display units, and to control the selected display unit to display the guide image at the position corresponding to the subject.
Abstract: A virtual reality experience apparatus includes: a displaying device configured to provide an experiencing user with a virtual reality image; and a riding device configured to provide the experiencing user with a motion, wherein the riding device includes: a riding part providing the experiencing user with a ridable space, and a gyro mechanism generating a pitching motion and a rolling motion of the riding part, wherein the gyro mechanism includes: a base structure having a pair of support columns disposed apart from each other, a pitching mechanism rotatably mounted on the pair of support columns to be rotated with respect to a pitching axis extending between the pair of support columns, and a rolling mechanism rotatably mounted on the pitching mechanism to be rotated with respect to a rolling axis perpendicular to the pitching axis.
Abstract: The disclosed computer-implemented method may include tracking (1) a position of a primary real-world object within a real-world environment via a primary tracking method, and (2) a position of a secondary real-world object within the real-world environment via a secondary tracking method. The method may further include presenting (1) a primary virtual object at a position within an artificial environment corresponding to the tracked position of the primary real-world object, and (2) a secondary virtual object at a position within the artificial environment corresponding to the tracked position of the secondary real-world object. The method may further include (1) detecting an interaction of the primary real-world object with the secondary real-world object, and (2) transitioning to tracking the position of the primary real-world object via the secondary tracking method. Various other methods, systems, and computer-readable media are also disclosed.
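The handoff in this abstract, where the primary object adopts the secondary object's tracking method after an interaction, can be modeled as a tiny state transition. The class names, the proximity-based interaction test, and the grab radius are all assumptions:

```python
class TrackedObject:
    """Toy model: e.g. a hand tracked by camera, a controller tracked by IMU."""

    def __init__(self, name, method):
        self.name = name
        self.method = method

def detect_interaction(primary_pos, secondary_pos, grab_radius=0.1):
    # An "interaction" here is simply proximity within a grab radius (assumption).
    return abs(primary_pos - secondary_pos) <= grab_radius

def update_tracking(primary, secondary, primary_pos, secondary_pos):
    """On interaction, transition the primary object to the secondary method."""
    if detect_interaction(primary_pos, secondary_pos):
        primary.method = secondary.method
    return primary.method
```

The transition is sticky: once the hand grabs the controller, subsequent frames keep using the secondary tracking method.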
Abstract: Given a parcel of land that is regulated by city zoning rules, the described system and method use a combination of rule-based code and machine learning to parametrically create geometry for the maximum or optimal buildable envelope with the least amount of regulation.
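A minimal rule-based sketch of the envelope computation, shrinking the footprint by setbacks and capping floors by height and floor-area ratio, is shown below. This is not the patented method, and every rule key is hypothetical:

```python
def buildable_envelope(parcel_w, parcel_d, rules):
    """Shrink the parcel footprint by setbacks and cap the height,
    yielding a box-shaped buildable envelope (illustrative only)."""
    w = parcel_w - rules["side_setback"] * 2
    d = parcel_d - rules["front_setback"] - rules["rear_setback"]
    # Floor-area ratio (FAR) may cap total buildable area; derive allowed floors
    # as the stricter of the height limit and the FAR limit.
    floors = min(rules["max_height"] // rules["floor_height"],
                 rules["far"] * parcel_w * parcel_d // (w * d))
    return {"width": w, "depth": d, "floors": int(floors)}
```

A parametric version would expose the rule values as sliders and regenerate the geometry whenever they change.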
February 28, 2019
Date of Patent: October 27, 2020
Matthew Esposito, Geraldine Esposito, Hector Tarrido-Picart
Abstract: A path tracing system in which the traversal task is distributed between one global acceleration structure, which is central in the system, and multiple local acceleration structures, distributed among cells, of high locality and of autonomous processing. Accordingly, the centrality of the critical resource of accelerating structure is reduced, lessening bottlenecks, while improving parallelism.
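The split the abstract describes, one central global acceleration structure routing rays to cells whose local structures are traversed autonomously, can be echoed by a toy two-level lookup. The 1-D interval index below stands in for a real spatial hierarchy; all names are illustrative:

```python
def trace(ray_x, global_index, local_structures):
    """Two-level lookup: the global structure maps a ray to a cell,
    then that cell's local structure is searched independently."""
    for (lo, hi), cell in global_index:
        if lo <= ray_x < hi:
            # Local traversal: each cell owns its own small structure,
            # so cells can be processed in parallel with high locality.
            return next((obj for x, obj in local_structures[cell] if x == ray_x), None)
    return None
```

Because each cell's structure is self-contained, updates and traversal inside one cell never touch the central structure, which is the locality benefit the abstract claims.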
Abstract: A projection display device includes an image data control unit that controls image data to be input to a light modulation unit, and a situation determination unit that determines, in a state where an automated driving mode is set, whether or not a situation has occurred where an operation device mounted in a vehicle and used for driving needs to be operated. When it is determined that the situation has occurred where the operation device needs to be operated, the image data control unit inputs, to a driving unit, first image data for displaying images corresponding to a plurality of operation devices in a positional relationship corresponding to an arrangement of the plurality of operation devices and displaying, in an emphasized manner, the image corresponding to the operation device that needs to be operated, and displays an operation assisting image that is based on the first image data.
Abstract: This application provides a method for controlling a virtual object, performed at an electronic device. The method includes: obtaining a current location of a virtual object in a virtual scene; determining whether the current location is outside an associated area of the virtual object; in accordance with a determination that the current location is outside the associated area: determining a current state of the virtual object at the current location; in accordance with a determination that the virtual object is performing an action of a first state, controlling the virtual object to return to a pre-specified location within the associated area after the virtual object completes the action; and in accordance with a determination that the virtual object is in a second state, controlling the virtual object to return to the pre-specified location within the associated area after waiting a predetermined time period.
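The branching logic of this method can be sketched as a single decision function. The dictionary fields, the rectangular associated area, and the returned action strings are all illustrative assumptions:

```python
def control_virtual_object(obj, associated_area, wait_time=2.0):
    """Decide what the virtual object should do, per the abstract's branches.
    `obj` holds 'pos', 'state', and 'action_done' (hypothetical structure)."""
    x, y = obj["pos"]
    (x0, y0), (x1, y1) = associated_area
    if x0 <= x <= x1 and y0 <= y <= y1:
        return "stay"                      # still inside the associated area
    if obj["state"] == "acting":
        # First state: finish the current action, then return home.
        return "return_after_action" if obj["action_done"] else "finish_action"
    # Second state (e.g. idle): wait a predetermined period, then return.
    return f"return_after_{wait_time}s"
```

A game loop would call this every tick and translate the returned action into movement toward the pre-specified location.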
March 7, 2019
Date of Patent: October 6, 2020
TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Abstract: Techniques are presented for constructing a digital representation of a physical environment. In some embodiments, a method includes obtaining image data indicative of the physical environment; receiving gesture input data from a user corresponding to at least one location in the physical environment, based on the obtained image data; detecting at least one discontinuity in the physical environment near the at least one location corresponding to the received gesture input data; and generating a digital surface corresponding to a surface in the physical environment, based on the received gesture input data and the at least one discontinuity.
Abstract: Embodiments of the disclosure provide a system, method and apparatus for displaying information. A specific embodiment of the method comprises: acquiring currently displayed information, the currently displayed information including an image; associating an augmented reality (AR) display identifier with the currently displayed information, in response to a preset object in a preset object set existing in the image; displaying the image associated with an AR play identifier, in response to receiving a request from a user for browsing the currently displayed information associated with the AR display identifier; acquiring AR image data of the preset object existing in the image from a server, in response to receiving a request from the user for browsing the image associated with the AR play identifier; and displaying an AR image of the preset object existing in the image, based on the AR image data of the preset object.
Abstract: A data visualization system creates a visual representation of data. The visual representation of data is provided in a form that enables an end user to adjust variable data upon which one or more determined data elements are based using an input device. The adjustment of the variable data is detected, and the visual representation of the data is refreshed based on the detected adjustment of the variable data.
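The refresh loop this abstract describes, where derived elements recompute whenever a user adjusts a variable, can be sketched as a tiny reactive pattern. Function and field names below are illustrative:

```python
def make_visualization(variables, derive):
    """Derived data elements recompute and the 'view' refreshes whenever a
    variable is adjusted (a sketch of the reactive cycle, not the patent)."""
    state = dict(variables)

    def render():
        # Recompute every derived data element from the current variables.
        return {name: fn(state) for name, fn in derive.items()}

    def adjust(name, value):
        state[name] = value        # input-device adjustment detected here
        return render()            # visual representation refreshed

    return render, adjust
```

A spreadsheet-style chart is the obvious instance: dragging a bar edits the underlying variable and every dependent total redraws.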
Abstract: A generation apparatus is configured to generate a virtual viewpoint image based on a plurality of images captured by a plurality of cameras for imaging a field from a plurality of different directions, the generation apparatus including an acquisition unit configured to acquire, based on a three-dimensional model of at least a portion of the field, correspondence information indicating correspondence between a coordinate of an image captured by at least one of the plurality of cameras and a coordinate related to a simple three-dimensional model less accurate than the three-dimensional model, and a generation unit configured to generate a virtual viewpoint image according to designation about a position and an orientation of a virtual viewpoint, by using an image captured by one or more of the plurality of cameras and the correspondence information acquired by the acquisition unit.
Abstract: An image signal processing device of the present disclosure includes a luminance correction section that performs, on a basis of information on a maximum output luminance value in a display section, luminance correction on an image signal to be supplied to the display section, the maximum output luminance value being variable.
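A hedged sketch of luminance correction against a variable maximum output luminance follows. The linear scaling and the nominal content peak are assumptions; real displays apply tone curves rather than a flat scale:

```python
def correct_luminance(signal, max_output_nits, content_peak_nits=1000.0):
    """Scale a luminance signal (values in nits) so it never exceeds the
    display's current maximum output luminance (illustrative mapping)."""
    scale = min(1.0, max_output_nits / content_peak_nits)
    return [v * scale for v in signal]
```

When the display can already reach the content peak, the signal passes through unchanged; otherwise everything is compressed proportionally.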
Abstract: Video noise reduction for a video augmented reality system is provided. A head-mounted display includes a display unit and a camera for generating frames of display data. A frame store is provided for storing previous frames of displayed information that were sent to the display unit, and a motion processor is provided in communication with the camera, the display unit, and the frame store. The motion processor is operable to: identify an area of interest in a current frame of display data; match the area of interest to similar areas in previous frames stored in the frame store; rotate and translate the matched areas of interest from the one or more previous frames to match the area of interest in the current frame; and average the prior matched areas of interest with the current area of interest to generate a displayed area of interest.
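The final averaging step reduces to a per-pixel mean of the current patch with the previously matched patches. The sketch below assumes the rotate/translate alignment has already happened; patch layout and names are illustrative:

```python
def denoise_patch(current, previous_aligned):
    """Average the current area of interest with previously matched patches
    (already rotated/translated into alignment), as the abstract describes."""
    patches = [current] + previous_aligned
    n = len(patches)
    # Per-pixel mean across the current and prior patches; because noise is
    # roughly uncorrelated frame to frame, averaging suppresses it.
    return [sum(p[i] for p in patches) / n for i in range(len(current))]
```

With N aligned patches the expected noise standard deviation drops by roughly a factor of sqrt(N), which is why the frame store is worth its memory cost.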
Abstract: A method and system for generating an augmented reality experience without a physical marker. At least two frames from a video stream are collected and one of the frames is designated as a first frame. The graphical processor of a device prepares the two collected frames for analysis, and features from the two collected frames are selected for comparison. The central processor of the device isolates points on the same plane as a tracked point in the first frame and calculates a 2D position of a virtual object in a second frame. The next frame from the video stream is then collected and the process is repeated until the user navigates away from the URL or webpage, or until the camera is turned off. The central processor renders the virtual object on a display of the device.
Abstract: A program for causing a computer to execute: receiving an instruction to change a position and a direction of a viewpoint disposed in a virtual space from a user; controlling a viewpoint to change the position and the direction of the viewpoint in response to the instruction; rendering a spatial image that depicts an aspect of an interior of the virtual space on the basis of the position and the direction of the viewpoint; and switching over between a first mode of changing the direction of the viewpoint about the position of the viewpoint and a second mode of changing the position and the direction of the viewpoint about an object of interest to which the user pays attention in the virtual space in a case of receiving an instruction to change the direction of the viewpoint from the user at a time of controlling the viewpoint.
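The two modes the program switches between, turning the view direction in place versus orbiting about an object of interest, can be sketched in 2D. The mode names and the 2D simplification are assumptions:

```python
import math

def rotate_view(pos, direction, pivot, angle, mode):
    """'first' mode: change the direction about the viewpoint's own position.
    'second' mode: change position and direction about the object of interest."""
    c, s = math.cos(angle), math.sin(angle)

    def rot(v):
        return (v[0] * c - v[1] * s, v[0] * s + v[1] * c)

    if mode == "first":
        return pos, rot(direction)       # position fixed, gaze turns
    # Second mode: orbit the viewpoint position about the pivot as well,
    # so the object of interest stays centred while the camera circles it.
    rel = (pos[0] - pivot[0], pos[1] - pivot[1])
    rx, ry = rot(rel)
    return (pivot[0] + rx, pivot[1] + ry), rot(direction)
```

A 3D implementation would use quaternions about the camera's up axis, but the distinction between the two pivots is the same.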
Abstract: Techniques are provided for optimizing display processing of layers below a dim layer by a display system. Because the dim layer may partially obstruct, conceal, or otherwise impact a user's view of layers below it, resource-saving techniques may be used in processing those layers. While these techniques may impact visual quality, a user is unlikely to notice visual artifacts or other reductions in quality in the modified layers below the dim layer. For example, when a dim layer is to be displayed, a GPU can render layers below the dim layer at a lower resolution. Furthermore, the GPU can increase a compression ratio for layers below the dim layer. The low-resolution layers can be scaled up to the original resolution, and the compressed layers can be decompressed in the display pipeline for display underneath the dim layer.
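The render-low, scale-up, then-dim pipeline can be illustrated on a 1-D scanline. The box downscale, nearest-neighbour upscale, and parameter names are stand-ins for the GPU work the abstract describes:

```python
def downscale(layer, factor):
    # Naive box filter: average each group of `factor` samples.
    return [sum(layer[i:i + factor]) / factor for i in range(0, len(layer), factor)]

def upscale(layer, factor):
    # Nearest-neighbour upscale back to the original resolution.
    return [v for v in layer for _ in range(factor)]

def compose_under_dim(layer, dim_alpha, factor=2):
    """When a dim layer sits on top, render the layer below at reduced
    resolution, upscale it in the display pipeline, then apply the dim."""
    cheap = upscale(downscale(layer, factor), factor)
    return [v * (1.0 - dim_alpha) for v in cheap]
```

The dimming itself is what hides the scaling artifacts: at half brightness, the error introduced by the cheap upscale falls below typical visibility thresholds.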
Abstract: Methods and devices for generating reference data for adjusting a digital representation of a head region, and methods and devices for adjusting the digital representation of a head region are disclosed. In some arrangements, training data are received. A first machine learning algorithm generates first reference data using the training data. A second machine learning algorithm generates second reference data using the same training data and the first reference data generated by the first machine learning algorithm.
Abstract: An image processing system configured to process perceived images of an environment includes a central processing unit (CPU) including a memory storage device having stored thereon a computer model of the environment, at least one sensor configured and disposed to capture a perceived environment including at least one of visual images of the environment and range data to objects in the environment, and a rendering unit (RU) configured and disposed to render the computer model of the environment forming a rendered model of the environment. The image processing system compares the rendered model of the environment to the perceived environment to update the computer model of the environment.