Abstract: Provided is an electronic device. The electronic device includes: a communicator comprising communication circuitry configured to establish communication with an external device; a display configured to display a first image and a second image; a processor; and a memory, wherein the memory stores instructions which, when executed, cause the processor to control the electronic device to: acquire a feature of the first image and a feature of the second image; and identify a learning model to be applied to the first image and the second image from among a first learning model included in the electronic device and a second learning model included in a server in communication with the electronic device through the communicator, based on at least one of the feature of the first image or the feature of the second image, wherein the first learning model and the second learning model are configured to convert the first image into a style of the second image to acquire a third image.
Type:
Grant
Filed:
March 5, 2020
Date of Patent:
February 1, 2022
Assignee:
Samsung Electronics Co., Ltd.
Inventors:
Yoo-jin Seo, Jeong-rok Jang, Kwan-sik Yang, Jaehwang Lee
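The abstract above describes routing a style-transfer task to either an on-device model or a server-side model based on image features. A minimal Python sketch of that selection step; the pixel-count feature and the budget threshold are illustrative assumptions, not details from the patent:

```python
def image_features(width, height):
    """Illustrative feature extraction: just the pixel count
    (the patent does not specify which features are used)."""
    return {"pixels": width * height}

def select_model(first_feat, second_feat, on_device_budget=1_000_000):
    """Route to the on-device ("first") model for small inputs, else to
    the server-side ("second") model. Threshold is an assumed stand-in."""
    total = first_feat["pixels"] + second_feat["pixels"]
    return "on_device" if total <= on_device_budget else "server"
```

Either model would then convert the first image into the style of the second to produce the third image; only the routing decision is sketched here.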
Abstract: Systems and methods for determining the physical location of a device of a user of an augmented reality environment corresponding to a physical space. The systems and methods involve requesting and receiving a list of participating users having a GPS location within a predetermined radius of a first device; sending advertising and scanning beacons, via a first wireless network, to generate a second list of devices present in the physical space; performing simultaneous localization and mapping (SLAM) using the participating devices of the second list; generating a third list based at least partly on a Bluetooth connection between the one or more participating devices of the second list; and identifying the participating devices of the third list.
Type:
Grant
Filed:
November 16, 2020
Date of Patent:
February 1, 2022
Assignee:
SpotMap, Inc.
Inventors:
Michael Conn McIntyre, Jr., Edward Young Zhang, Vadim Dagman
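The successive list-narrowing in the abstract above (GPS radius, then beacon visibility, then Bluetooth connection) can be sketched as set intersections. This is a simplified illustration; the distance map and set representations are assumptions for brevity:

```python
def gps_list(distances_m, radius_m):
    """First list: users whose reported GPS position falls within the
    radius. `distances_m` maps user id -> distance from the first device,
    assumed precomputed here."""
    return {u for u, d in distances_m.items() if d <= radius_m}

def narrow_to_room(first, beacon_seen, bt_connected):
    """Second list: users also seen via advertising/scanning beacons over
    the first wireless network; third list: those further confirmed by a
    Bluetooth connection."""
    second = first & beacon_seen
    third = second & bt_connected
    return second, third
```

The SLAM step in the abstract would operate on the second list's devices; it is omitted here since only the list logic is being illustrated.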
Abstract: Augmented reality systems and methods for creating, saving and rendering designs comprising multiple items of virtual content in a three-dimensional (3D) environment of a user. The designs may be saved as a scene, which is built by a user from pre-built sub-components, built components, and/or previously saved scenes. Location information, expressed as a saved scene anchor and position relative to the saved scene anchor for each item of virtual content, may also be saved. Upon opening the scene, the saved scene anchor node may be correlated to a location within the mixed reality environment of the user for whom the scene is opened. The virtual items of the scene may be positioned with the same relationship to that location as they have to the saved scene anchor node. That location may be selected automatically and/or by user input.
Type:
Grant
Filed:
October 4, 2019
Date of Patent:
January 25, 2022
Assignee:
Magic Leap, Inc.
Inventors:
Jonathan Brodsky, Javier Antonio Busto, Martin Wilkins Smith
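The saved-scene mechanism above stores each virtual item's position relative to a scene anchor, then re-resolves those offsets against a new anchor location when the scene is opened. A minimal sketch of that data structure and placement step, with assumed field names:

```python
from dataclasses import dataclass

@dataclass
class SavedItem:
    name: str
    offset: tuple  # (x, y, z) position relative to the saved scene anchor

def open_scene(items, anchor_world_pos):
    """Place each saved item at the correlated anchor location plus its
    saved relative offset, preserving the items' spatial relationships."""
    ax, ay, az = anchor_world_pos
    return {it.name: (ax + it.offset[0], ay + it.offset[1], az + it.offset[2])
            for it in items}
```

Selecting `anchor_world_pos` automatically or from user input, as the abstract describes, is independent of this placement arithmetic.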
Abstract: Methods and devices for generating reference data for adjusting a digital representation of a head region, and methods and devices for adjusting the digital representation of a head region are disclosed. In some arrangements, training data are received. A first machine learning algorithm generates first reference data using the training data. A second machine learning algorithm generates second reference data using the same training data and the first reference data generated by the first machine learning algorithm.
Abstract: One embodiment provides a method, including: receiving, at an information handling device, an indication to display an element; identifying, using a processor, a universal size designation for the element; and displaying, on a display associated with the information handling device, the element at a size associated with the universal size designation and irrespective of a screen scaling factor associated with the display. Other aspects are described and claimed.
Type:
Grant
Filed:
September 30, 2019
Date of Patent:
January 18, 2022
Assignee:
Lenovo (Singapore) Pte. Ltd.
Inventors:
Robert James Kapinos, Scott Wentao Li, Robert James Norton, Jr., Russell Speight VanBlon
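The abstract above contrasts elements sized in scale-dependent logical units with elements given a universal size rendered irrespective of the screen scaling factor. A toy comparison of the two sizing paths; the formulas are an assumed illustration, not the patent's method:

```python
def scaled_px(logical_px, scale_factor):
    """Ordinary element: its on-screen pixel size grows with the
    display's scaling factor."""
    return logical_px * scale_factor

def universal_px(size_inches, dpi):
    """Universal-size element: pixel size depends only on the panel's
    physical DPI, so it renders at the same physical size regardless of
    any scaling factor the OS applies."""
    return size_inches * dpi
```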
Abstract: A computer-implemented method is disclosed including producing, with at least one of a computing device, an augmented reality computing device, a virtual reality computing device and a mixed reality computing device, multiple sources of data files provided in individual formats to overlay within a real-world environment, combining the multiple sources of data files into a unified data format that provides for each individual data format of the multiple sources of data files to run independently and with at least one of a spatial anchor and a temporal anchor to provide for a three-dimensional (“3D”) arrangement of the plurality of data, storing in at least one memory device the multiple sources and the at least one spatial anchor and temporal anchor, receiving, through a user interface of a viewing device, a query relating to a real-world environment and displaying the 3D arrangement of the plurality of data in the viewing area of the viewing display in spatial relationship with the real-world environment as viewed.
Type:
Grant
Filed:
December 8, 2020
Date of Patent:
January 11, 2022
Assignee:
DESIGN INTERACTIVE, INC.
Inventors:
Eric Martin, Sam Haddad, Matt Johnston, Matt Archer
Abstract: A processing device receives a two-dimensional (2D) video recording of a subject user performing a physical activity and provides a three-dimensional (3D) visualization comprising a virtual avatar performing the physical activity. The processing device causes display of the 3D visualization comprising the virtual avatar at a first key point in performing the physical activity, receives first user input to advance the 2D video recording to a first position corresponding to the first key point, and receives second user input comprising a first synchronization command. In response, the processing device generates a first synchronization marker to indicate the first position in the 2D video recording corresponding to the first key point.
Abstract: An electronic apparatus, comprising: a memory and at least one processor and/or at least one circuit to perform the operations of the following units: a control unit configured to 1) display, in a display area, at least a part including a reference point out of a VR image expressed by a projection format using a predetermined point as the reference point, and 2) change the reference point when an instruction is received from a user; and a determination unit configured to determine the reference point as a zenith or a nadir of the VR image.
Abstract: An electronic device according to various embodiments may comprise a display, a camera module, a microphone, and at least one processor, wherein the at least one processor is configured to: display, on the display, an image obtained using the camera module; activate the microphone; receive music through the activated microphone; select an augmented reality (AR) object on the basis of the genre of the received music; and display the selected AR object overlaid on the displayed image.
Abstract: An imaging unit captures an image of a visual field. A detector detects a position of an eyeball and a sight line of an occupant. A visual point identifier identifies a position of a visual point of the occupant in the visual field based on the eyeball position and the sight line direction. A measuring unit measures a position and a distance of an object included in the image of the visual field. An image generator generates display images based on the eye position and the position and the distance of the object. The display images are displayed on a virtual plane, fused on a fusion plane and displayed as a three-dimensional display on the visual field. The display images are generated to display the three-dimensional image at a given magnification ratio calculated by reducing a geometric display magnification ratio as the distance from the occupant increases.
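The final sentence of the abstract above describes a magnification ratio that shrinks as the object's distance from the occupant grows. One plausible falloff model, sketched purely as an illustration (the patent does not give this formula):

```python
def display_magnification(base_ratio, distance_m, falloff=0.1):
    """Reduce the geometric display magnification as the object's
    distance from the occupant increases. The reciprocal-linear falloff
    and its coefficient are assumptions for illustration."""
    return base_ratio / (1.0 + falloff * distance_m)
```

At zero distance the geometric ratio is used unchanged; at 10 m with the default falloff it is halved.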
Abstract: A support system for management of a machine for treating food products includes an augmented reality visor including: a camera for capturing a first image; a display for displaying a second image; and a first module for transmitting the first image and receiving the second image. Included is a machine for treating food products, including a treatment chamber for receiving a food product, an actuator for applying a treatment process on the food product inside the treatment chamber, and a second module connectable to the first module. A processing and control unit is connected to the second module for receiving the first image from the visor through the second module, identifying a plurality of real elements within the first image, generating the second image, incorporating a graphic element into the second image, and transmitting the second image to the visor through the second module.
Abstract: In one embodiment, a method for rendering objects within an operating system includes receiving multiple data structures from applications executing on the operating system. Each data structure includes a declarative definition of one or more objects within a volumetric space to be displayed to a user of the operating system. The operating system can generate a render graph that includes the declarative definition of each data structure and can cause images of the objects associated with each data structure to be rendered based on the render graph and a pose of the user relative to the volumetric space.
Type:
Grant
Filed:
September 23, 2019
Date of Patent:
October 19, 2021
Assignee:
Facebook Technologies, LLC
Inventors:
Benjamin Charles Constable, David Teitlebaum
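The abstract above describes an operating system merging declarative object definitions from multiple applications into a single render graph, then rendering against the user's pose. A toy sketch of those two steps; the dict shapes and the flat-list "graph" are assumptions made for brevity:

```python
def build_render_graph(data_structures):
    """Merge the declarative object definitions submitted by each
    application into one render graph (a flat list here for simplicity)."""
    graph = []
    for app_id, decl in data_structures.items():
        for obj in decl["objects"]:
            graph.append({"app": app_id, **obj})
    return graph

def render(graph, user_pose):
    """Toy render pass: report each object's position relative to the
    user's pose in the volumetric space."""
    ux, uy, uz = user_pose
    return [(o["app"], o["id"],
             (o["pos"][0] - ux, o["pos"][1] - uy, o["pos"][2] - uz))
            for o in graph]
```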
Abstract: Augmented reality eyewear devices allow users to experience a version of our “real” physical world augmented with virtual objects. Augmented reality eyewear may present a user with a graphical user interface that appears to be in the airspace directly in front of the user thereby encouraging the user to interact with virtual objects in socially undesirable ways, such as by making sweeping hand gestures in the airspace in front of the user. Anchoring various input mechanisms or the graphical user interface of an augmented reality eyewear application to a wristwatch may allow a user to interact with an augmented reality eyewear device in a more socially acceptable manner. Combining the displays of a smartwatch and an augmented reality eyewear device into a single graphical user interface may provide enhanced display function and more responsive gestural input.
Abstract: A method, apparatus, and system provide the ability to crop a three-dimensional (3D) scene. The 3D scene is acquired and includes multiple 3D images (with each image from a view angle of an image capture device) and a depth map for each image. The depth values in each depth map are sorted. Multiple initial cutoff depths are determined for the scene based on the view angles of the images (in the scene). A cutoff relaxation depth is determined based on a jump between depth values. A confidence map is generated for each depth map and indicates whether each depth value is above or below the cutoff relaxation depth. The confidence maps are aggregated into an aggregated model. A bounding volume is generated out of the aggregated model. Points are cropped from the scene based on the bounding volume.
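Two steps of the cropping pipeline above lend themselves to a short sketch: finding a cutoff relaxation depth at a jump between sorted depth values, and building a per-depth-map confidence map against that cutoff. The jump heuristic and midpoint choice are illustrative assumptions, not the patent's exact procedure:

```python
def cutoff_relaxation(depths, min_jump):
    """Scan the sorted depth values and return a relaxed cutoff at the
    midpoint of the first jump of at least `min_jump` (assumed heuristic)."""
    s = sorted(depths)
    for a, b in zip(s, s[1:]):
        if b - a >= min_jump:
            return (a + b) / 2
    return s[-1]  # no large jump: keep everything

def confidence_map(depth_map, cutoff):
    """True where a depth value lies at or in front of the cutoff,
    i.e. the point is kept rather than cropped."""
    return [[d <= cutoff for d in row] for row in depth_map]
```

Aggregating the confidence maps into a model and fitting a bounding volume, as the abstract continues, would build on these per-map results.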
Abstract: Systems and methods permit generation of a digital scan of a user's face, such as for obtaining a patient respiratory mask, or component(s) thereof, based on the digital scan. The method may include: receiving video data comprising a plurality of video frames of the user's face taken from a plurality of angles relative to the user's face, generating a three-dimensional representation of a surface of the user's face based on the plurality of video frames, receiving scale estimation data associated with the received video data, the scale estimation data indicative of a relative size of the user's face, and scaling the digital three-dimensional representation of the user's face based on the scale estimation data. In some aspects, the scale estimation data may be derived from motion information collected by the same device that collects the scan of the user's face.
Type:
Grant
Filed:
October 3, 2018
Date of Patent:
August 31, 2021
Inventors:
Simon Michael Lucey, Benjamin Peter Johnston, Priyanshu Gupta, Tzu-Chin Yu
Abstract: A system and method for receiving an ordered set of images and analyzing the images to determine at least one position in space and at least one motion vector in space and time for at least one object represented in the images is disclosed. Using these vectors, a four dimensional model of at least a portion of the information represented in the images is formulated. This model generally obeys the laws of physics, though aberrations may be imposed. The model is then exercised with an input parameter, which, for example, may represent a different perspective than the original set of images. The four dimensional model is then used to produce a modified set of ordered images in dependence on the input parameter and optionally the set of images, e.g., if only a portion of the data represented in the images is modeled. The set of images may then be rendered on a display device.
Abstract: Techniques for generating and using digital markups on digital images are presented. In an embodiment, a method comprises receiving, at an electronic device, a digital layout image that represents a form of a product for manufacturing a reference product; generating a digital markup layout by overlaying the digital markup image over the digital layout image; based on the digital markup layout, generating one or more manufacturing files comprising digital data for manufacturing the reference product; receiving a digital reference image of the reference product manufactured based on the one or more manufacturing files; identifying one or more found markup regions in the digital reference image; based on the found markup regions, generating a geometry map and an interactive asset image; based on, at least in part, the geometry map, generating a customized product image by applying a user pattern to the interactive asset image.
Abstract: A mechanism is described for facilitating consolidated compression/de-compression of graphics data streams of varying types at computing devices. A method of embodiments, as described herein, includes generating a common sector cache relating to a graphics processor. The method may further include performing a consolidated compression of multiple types of graphics data streams associated with the graphics processor using the common sector cache.
Type:
Grant
Filed:
September 30, 2019
Date of Patent:
July 27, 2021
Assignee:
INTEL CORPORATION
Inventors:
Abhishek R. Appu, Joydeep Ray, Prasoonkumar Surti, Altug Koker, Kiran C. Veernapu, Erik G. Liskay
Abstract: [Object] To facilitate a user's operations. [Solution] An editing apparatus is provided including: a component output unit for outputting a display screen on which a plurality of components are displayed; a node output unit for outputting a plurality of nodes respectively corresponding to the plurality of components on the display screen so that the nodes are displayed along with the plurality of components so as to overlap the display of the plurality of components; and a setting unit for setting, in response to a user's instruction of association between two or more of the nodes, an association between two or more of the components corresponding to the two or more of the nodes.
Type:
Grant
Filed:
November 21, 2019
Date of Patent:
July 27, 2021
Assignee:
INTERNATIONAL BUSINESS MACHINES CORPORATION