Abstract: This application discloses an object loading method performed at an electronic device. The electronic device determines a visible space located within an acquisition range of an image acquisition device located at a first position in a virtual scene and determines a target subspace located within a visible distance threshold indicated by a target type of a plurality of types in the visible space based on the first position, each type of the plurality of types having a visible distance threshold of an object in a subspace of the virtual scene. The electronic device then acquires an object whose visible distance is not greater than the visible distance threshold indicated by the target type in the target subspace as a to-be-rendered object and loads the to-be-rendered object in a storage resource of the user terminal to render an image of the virtual scene.
Type:
Grant
Filed:
October 22, 2020
Date of Patent:
March 8, 2022
Assignee:
TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
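The distance-based culling described in the abstract above can be illustrated with a minimal sketch: objects in the target subspace are kept only when their visible distance does not exceed the threshold indicated by the subspace's type. All names (`SceneObject`, `select_objects_to_render`) are hypothetical and not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    visible_distance: float  # distance within which the object should be visible

def select_objects_to_render(objects, visible_distance_threshold):
    """Keep only objects whose visible distance is not greater than the
    threshold indicated by the target subspace's type."""
    return [o for o in objects if o.visible_distance <= visible_distance_threshold]

subspace_objects = [
    SceneObject("tree", 50.0),
    SceneObject("mountain", 500.0),
    SceneObject("pebble", 10.0),
]
# Suppose the target subspace's type indicates a 100-unit threshold:
to_render = select_objects_to_render(subspace_objects, 100.0)
print([o.name for o in to_render])  # ['tree', 'pebble']
```

Only the selected objects would then be loaded into the storage resource for rendering; the distant mountain is skipped.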
Abstract: The disclosed embodiments relate to image processing methods and apparatuses. In one embodiment, a method includes: mapping an inputted three-dimensional (3D) model map into an asymmetric cubemap, the asymmetric cubemap being located at a different place than the mapping center of the inputted 3D model map; and stretching the asymmetric cubemap mapped for the inputted 3D model map into a two-dimensional (2D) stretched plane map.
Abstract: An image signal processing device of the present disclosure includes a luminance correction section that performs, on a basis of information on a maximum output luminance value in a display section, luminance correction on an image signal to be supplied to the display section, the maximum output luminance value being variable.
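A simple way to picture the luminance correction described above: scale the signal so a reference peak maps to the display's current (variable) maximum output luminance. The linear scaling and the `reference_peak` value are illustrative assumptions, not the patented tone-mapping method.

```python
def correct_luminance(signal, max_output_luminance, reference_peak=1000.0):
    """Scale luminance values so the reference peak maps to the display's
    maximum output luminance, clipping anything the panel cannot show."""
    scale = max_output_luminance / reference_peak
    return [min(v * scale, max_output_luminance) for v in signal]

# A display currently capable of 500 nits:
corrected = correct_luminance([100.0, 400.0, 1000.0], max_output_luminance=500.0)
print(corrected)  # [50.0, 200.0, 500.0]
```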
Abstract: The disclosed computer-implemented method may include tracking (1) a position of a primary real-world object within a real-world environment via a primary tracking method, and (2) a position of a secondary real-world object within the real-world environment via a secondary tracking method. The method may further include presenting (1) a primary virtual object at a position within an artificial environment corresponding to the tracked position of the primary real-world object, and (2) a secondary virtual object at a position within the artificial environment corresponding to the tracked position of the secondary real-world object. The method may further include (1) detecting an interaction of the primary real-world object with the secondary real-world object, and (2) transitioning to tracking the position of the primary real-world object via the secondary tracking method. Various other methods, systems, and computer-readable media are also disclosed.
Abstract: The present disclosure relates to methods and apparatus for graphics processing. In some aspects, the apparatus selects a first mip-map layer with a first texture size and a second mip-map layer with a second texture size based on a third texture size of an image. The apparatus also determines a relative distance associated with the texture sizes. Additionally, the apparatus determines a first quantity of samples to select from the first mip-map layer, and determines a second quantity of samples to select from the second mip-map layer, the second quantity of samples being less than the first quantity of samples, and a second quantity of filter taps being less than a first quantity of filter taps. Also, the apparatus generates the image at the third texture size through filtering based on the first quantity of samples and the second quantity of samples.
Type:
Grant
Filed:
June 25, 2020
Date of Patent:
February 22, 2022
Assignee:
QUALCOMM Incorporated
Inventors:
Liang Li, Andrew Evan Gruber, Yunshan Kong
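The mip-map selection in the Qualcomm abstract can be sketched as follows: pick the two levels that bracket the requested texture size, compute the relative distance between them, and blend samples from both. Function names and the `base_size` parameter are hypothetical; a real filter would also take fewer taps at the coarser level, as the abstract states.

```python
import math

def select_mip_layers(target_size, base_size=1024):
    """Choose the two mip levels bracketing the requested texture size,
    plus the relative distance of the target between them (0 = finer level)."""
    level = math.log2(base_size / target_size)
    lower, upper = int(math.floor(level)), int(math.ceil(level))
    return lower, upper, level - lower

def blend_levels(sample_lower, sample_upper, rel_distance):
    """Combine samples from the finer level with samples from the coarser
    one, weighted by the relative distance between the two levels."""
    return sample_lower * (1.0 - rel_distance) + sample_upper * rel_distance

lower, upper, rel = select_mip_layers(target_size=300)
print(lower, upper, round(rel, 3))
```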
Abstract: An input image of an object is prepared for presentation by removing extraneous portions such as text, logos, advertising, watermarks, and so forth. The input image is processed to determine contours of features depicted in the input image. A bounding box corresponding to each contour may be determined. Based at least in part on the areas of these bounding boxes, an image mask is created. A candidate image is determined by applying the image mask to the input image to set pixels within portions of the input image to a predetermined value, such as white. Many candidate images may be generated using different parameters, such as different thresholds for relative sizes of the areas of the bounding boxes. These candidate images may be assessed, and a candidate image is selected for later use. Instead of manual editing of the input images, the candidate images are automatically generated.
Abstract: Provided is an electronic device. The electronic device includes: a communicator comprising communication circuitry configured to establish communication with an external device; a display configured to display a first image and a second image; a processor; and a memory, wherein the memory stores instructions which, when executed, cause the processor to control the electronic device to: acquire a feature of the first image and a feature of the second image; and identify a learning model to be applied to the first image and the second image from among a first learning model included in the electronic device and a second learning model included in a server in communication with the electronic device through the communicator, based on at least one of the feature of the first image or the feature of the second image, wherein the first learning model and the second learning model are configured to convert the first image into a style of the second image to acquire a third image.
Type:
Grant
Filed:
March 5, 2020
Date of Patent:
February 1, 2022
Assignee:
Samsung Electronics Co., Ltd.
Inventors:
Yoo-jin Seo, Jeong-rok Jang, Kwan-sik Yang, Jaehwang Lee
Abstract: Systems and methods for determining the physical location of a device of a user of an augmented reality environment corresponding to a physical space. The systems and methods involve requesting and receiving a list of participating users having a GPS location within a predetermined radius of a first device; sending advertising and scanning beacons, via a first wireless network, to generate a second list of devices present in the physical space; performing simultaneous localization and mapping (SLAM) using the participating devices of the second list; generating a third list based at least partly on a Bluetooth connection between the one or more participating devices of the second list; and identifying the participating devices of the third list.
Type:
Grant
Filed:
November 16, 2020
Date of Patent:
February 1, 2022
Assignee:
SpotMap, Inc.
Inventors:
Michael Conn McIntyre, Jr., Edward Young Zhang, Vadim Dagman
Abstract: Augmented reality systems and methods for creating, saving and rendering designs comprising multiple items of virtual content in a three-dimensional (3D) environment of a user. The designs may be saved as a scene, which is built by a user from pre-built sub-components, built components, and/or previously saved scenes. Location information, expressed as a saved scene anchor and position relative to the saved scene anchor for each item of virtual content, may also be saved. Upon opening the scene, the saved scene anchor node may be correlated to a location within the mixed reality environment of the user for whom the scene is opened. The virtual items of the scene may be positioned with the same relationship to that location as they have to the saved scene anchor node. That location may be selected automatically and/or by user input.
Type:
Grant
Filed:
October 4, 2019
Date of Patent:
January 25, 2022
Assignee:
Magic Leap, Inc.
Inventors:
Jonathan Brodsky, Javier Antonio Busto, Martin Wilkins Smith
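The anchor-relative placement described in the Magic Leap abstract reduces to a simple transform when a scene is reopened: each saved item keeps the same offset from the newly correlated anchor location that it had from the saved scene anchor. Names are hypothetical.

```python
def place_scene(anchor_world_position, saved_items):
    """Re-open a saved scene: each item is positioned with the same offset
    from the correlated anchor location that it had from the saved anchor."""
    ax, ay, az = anchor_world_position
    placed = {}
    for name, (dx, dy, dz) in saved_items.items():
        placed[name] = (ax + dx, ay + dy, az + dz)
    return placed

# Offsets of each item relative to the saved scene anchor:
saved = {"lamp": (1.0, 0.0, 2.0), "chair": (-0.5, 0.0, 1.0)}
# Anchor correlated to a location in the user's current environment:
print(place_scene((10.0, 0.0, 5.0), saved))
```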
Abstract: Methods and devices for generating reference data for adjusting a digital representation of a head region, and methods and devices for adjusting the digital representation of a head region are disclosed. In some arrangements, training data are received. A first machine learning algorithm generates first reference data using the training data. A second machine learning algorithm generates second reference data using the same training data and the first reference data generated by the first machine learning algorithm.
Abstract: One embodiment provides a method, including: receiving, at an information handling device, an indication to display an element; identifying, using a processor, a universal size designation for the element; and displaying, on a display associated with the information handling device, the element at a size associated with the universal size designation and irrespective of a screen scaling factor associated with the display. Other aspects are described and claimed.
Type:
Grant
Filed:
September 30, 2019
Date of Patent:
January 18, 2022
Assignee:
Lenovo (Singapore) Pte. Ltd.
Inventors:
Robert James Kapinos, Scott Wentao Li, Robert James Norton, Jr., Russell Speight VanBlon
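The Lenovo abstract's idea can be sketched in a few lines: a universal (physical) size designation is converted to pixels from the panel's DPI alone, deliberately ignoring the OS screen scaling factor. The function name and parameters are illustrative assumptions.

```python
def display_size_px(universal_size_inches, dpi, screen_scaling_factor):
    """Convert a universal size designation to pixels using the panel's DPI,
    ignoring the scaling factor so the element keeps its physical size."""
    _ = screen_scaling_factor  # intentionally unused
    return round(universal_size_inches * dpi)

# A 0.5-inch element on a 96-DPI panel stays 48 px even at 150% scaling:
print(display_size_px(0.5, dpi=96, screen_scaling_factor=1.5))  # 48
```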
Abstract: A computer implemented method is disclosed including producing, with at least one of a computing device, an augmented reality computing device, a virtual reality computing device and a mixed reality computing device, multiple sources of data files provided in individual formats to overlay within a real-world environment, combining the multiple sources of data files into a unified data format that provides for each individual data format of the multiple sources of data files to run independently and with at least one of a spatial anchor and a temporal anchor to provide for a three-dimensional (“3D”) arrangement of the plurality of data, storing in at least one memory device the multiple sources and the at least one spatial anchor and temporal anchor, receiving, through a user interface of a viewing device, a query relating to a real-world environment and displaying the 3D arrangement of the plurality of data in the viewing area of the viewing display in spatial relationship with the real-world environment as viewed.
Type:
Grant
Filed:
December 8, 2020
Date of Patent:
January 11, 2022
Assignee:
DESIGN INTERACTIVE, INC.
Inventors:
Eric Martin, Sam Haddad, Matt Johnston, Matt Archer
Abstract: A processing device receives a two-dimensional (2D) video recording of a subject user performing a physical activity and provides a three-dimensional (3D) visualization comprising a virtual avatar performing the physical activity. The processing device causes display of the 3D visualization comprising the virtual avatar at a first key point in performing the physical activity, receives first user input to advance the 2D video recording to a first position corresponding to the first key point, and receives second user input comprising a first synchronization command. In response, the processing device generates a first synchronization marker to indicate the first position in the 2D video recording corresponding to the first key point.
Abstract: An electronic apparatus, comprising: a memory and at least one processor and/or at least one circuit to perform the operations of the following units: a control unit configured to 1) display, in a display area, at least a part including a reference point out of a VR image expressed by a projection format using a predetermined point as the reference point, and 2) change the reference point when an instruction is received from a user; and a determination unit configured to determine the reference point as a zenith or a nadir of the VR image.
Abstract: An electronic device according to various embodiments may comprise a display, a camera module, a microphone, and at least one processor, wherein the at least one processor is configured to: display, on the display, an image obtained using the camera module; activate the microphone; receive music through the activated microphone; select an augmented reality (AR) object on the basis of the genre of the received music; and display the selected AR object overlappingly on the displayed image.
Abstract: An imaging unit captures an image of a visual field. A detector detects a position of an eyeball and a sight line of an occupant. A visual point identifier identifies a position of a visual point of the occupant in the visual field based on the eye position and the sight line direction. A measuring unit measures a position and a distance of an object included in the image of the visual field. An image generator generates display images based on the eye position and the position and the distance of the object. The display images are displayed on a virtual plane, fused on a fusion plane and displayed as a three-dimensional display on the visual field. The display images are generated to display the three-dimensional image at a given magnification ratio calculated by reducing a geometric display magnification ratio as the distance from the occupant increases.
Abstract: A support system for management of a machine for treating food products includes an augmented reality visor including: a camera for capturing a first image; a display for displaying a second image; and a first module for transmitting the first image and receiving the second image. Included is a machine for treating food products, including a treatment chamber for receiving a food product, an actuator for applying a treatment process on the food product inside the treatment chamber, and a second module connectable to the first module. A processing and control unit is connected to the second module for receiving the first image from the visor through the second module, identifying a plurality of real elements within the first image, generating the second image, incorporating a graphic element into the second image, and transmitting the second image to the visor through the second module.
Abstract: In one embodiment, a method for rendering objects within an operating system includes receiving multiple data structures from applications executing on the operating system. Each data structure includes a declarative definition of one or more objects within a volumetric space to be displayed to a user of the operating system. The operating system can generate a render graph that includes the declarative definition of each data structure and can cause images of the objects associated with each data structure to be rendered based on the render graph and a pose of the user relative to the volumetric space.
Type:
Grant
Filed:
September 23, 2019
Date of Patent:
October 19, 2021
Assignee:
Facebook Technologies, LLC
Inventors:
Benjamin Charles Constable, David Teitlebaum
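The render-graph flow in the Facebook Technologies abstract can be sketched as: applications submit declarative scene definitions, the operating system aggregates them into one graph, and rendering is driven by that graph plus the user's pose. All class names are hypothetical, and the back-to-front distance sort stands in for whatever pose-dependent ordering a real compositor would use.

```python
from dataclasses import dataclass, field

@dataclass
class DeclarativeScene:
    """What an application hands the OS: a declarative definition of
    named objects placed in a shared volumetric space."""
    app: str
    objects: list  # list of (name, (x, y, z)) tuples

@dataclass
class RenderGraph:
    scenes: list = field(default_factory=list)

    def add(self, scene):
        self.scenes.append(scene)

    def draw_order(self, user_pose):
        """Flatten all declared objects and sort them back-to-front
        relative to the user's pose (squared distance, farthest first)."""
        items = [(name, pos) for s in self.scenes for name, pos in s.objects]
        return sorted(
            items,
            key=lambda it: sum((a - b) ** 2 for a, b in zip(it[1], user_pose)),
            reverse=True,
        )

graph = RenderGraph()
graph.add(DeclarativeScene("chess", [("board", (0, 0, 1))]))
graph.add(DeclarativeScene("notes", [("panel", (0, 0, 3))]))
print(graph.draw_order((0, 0, 0)))
```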
Abstract: Augmented reality eyewear devices allow users to experience a version of our “real” physical world augmented with virtual objects. Augmented reality eyewear may present a user with a graphical user interface that appears to be in the airspace directly in front of the user thereby encouraging the user to interact with virtual objects in socially undesirable ways, such as by making sweeping hand gestures in the airspace in front of the user. Anchoring various input mechanisms or the graphical user interface of an augmented reality eyewear application to a wristwatch may allow a user to interact with an augmented reality eyewear device in a more socially acceptable manner. Combining the displays of a smartwatch and an augmented reality eyewear device into a single graphical user interface may provide enhanced display function and more responsive gestural input.