Abstract: Graphics processing systems and methods introduce soft shadowing effects into rendered images. This is achieved in a simple manner that can be implemented in real time without incurring high processing costs, making it suitable for low-cost devices. Rays are cast from positions on visible surfaces corresponding to pixel positions towards the center of a light, and occlusions of the rays are determined. The results of these determinations are used to apply soft shadows to the rendered pixel values.
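The ray-cast occlusion test this abstract describes can be sketched as follows. The spherical occluders, the jittered sampling of targets around the light center, and all function names are illustrative assumptions, not the patented method itself:

```python
import random

def ray_hits_sphere(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2; any forward
    # intersection (t > 0) counts as an occlusion in this sketch.
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return False
    t = (-b - disc ** 0.5) / (2.0 * a)
    return t > 1e-6

def soft_shadow_factor(point, light_center, light_radius, occluders, samples=64):
    # Cast jittered rays from the visible-surface point toward the light;
    # the unoccluded fraction becomes the soft-shadow attenuation factor.
    rng = random.Random(0)
    unoccluded = 0
    for _ in range(samples):
        target = [c + light_radius * (2.0 * rng.random() - 1.0)
                  for c in light_center]
        direction = [t - p for t, p in zip(target, point)]
        if not any(ray_hits_sphere(point, direction, c, r)
                   for c, r in occluders):
            unoccluded += 1
    return unoccluded / samples
```

Multiplying a pixel's lit color by this factor produces a penumbra that darkens gradually as more of the rays toward the light are blocked.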
Abstract: A method implemented by a computer to output digital ink includes: repeatedly generating a stroke object each time a pointer is removed from a panel, where the stroke object includes control points used to reproduce a stroke made by the pointer on the panel, the stroke starting when the pointer contacts the panel and ending when the pointer is removed from the panel; each time the stroke object is generated, determining a generation time of the stroke object and generating metadata including the generation time of the stroke object; and serializing one piece of ink data, in which the stroke object is associated with the corresponding metadata, into a binary sequence, and outputting the binary sequence.
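The stroke-plus-metadata serialization could look like the sketch below. The byte layout (little-endian counts, one float64 generation timestamp per stroke, float32 coordinates) is an illustrative assumption, not the format defined by the patent:

```python
import struct

def serialize_ink(strokes):
    # strokes: list of (control_points, generation_time) pairs, where
    # control_points is a list of (x, y) tuples captured from the moment
    # the pointer contacts the panel until it is removed.
    # Layout (little-endian): stroke count, then per stroke a float64
    # generation timestamp, a point count, and float32 x/y pairs.
    out = bytearray(struct.pack("<I", len(strokes)))
    for points, generated_at in strokes:
        out += struct.pack("<dI", generated_at, len(points))
        for x, y in points:
            out += struct.pack("<ff", x, y)
    return bytes(out)

def deserialize_ink(data):
    (count,) = struct.unpack_from("<I", data)
    offset, strokes = 4, []
    for _ in range(count):
        generated_at, n = struct.unpack_from("<dI", data, offset)
        offset += 12
        points = [struct.unpack_from("<ff", data, offset + 8 * i)
                  for i in range(n)]
        offset += 8 * n
        strokes.append((points, generated_at))
    return strokes
```

The round trip keeps each stroke associated with its generation-time metadata, which is the pairing the claim emphasizes.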
Abstract: In one embodiment, a system may capture one or more images of a user using one or more cameras, the one or more images depicting at least an eye and a face of the user. The system may determine a direction of a gaze of the user based on the eye depicted in the one or more images. The system may generate a facial mesh based on depth measurements of one or more features of the face depicted in the one or more images. The system may generate an eyeball texture for an eyeball mesh by processing the direction of the gaze and the facial mesh using a machine-learning model. The system may render an avatar of the user based on the eyeball mesh, the eyeball texture, the facial mesh, and a facial texture.
Type: Grant
Filed: January 9, 2020
Date of Patent: May 18, 2021
Assignee: Facebook Technologies, LLC
Inventors: Gabriel Bailowitz Schwartz, Jason Saragih, Tomas Simon Kreuz, Shih-En Wei, Stephen Anthony Lombardi
Abstract: Systems and methods for presenting an augmented reality view are disclosed. Embodiments include a system with a database for personalizing an augmented reality view of a physical environment using at least one of a location of a physical environment or a location of a user. The system may further include a hardware device in communication with the database, the hardware device including a renderer configured to render the augmented reality view for display and a controller configured to determine a scope of the augmented reality view and authenticate the augmented reality view. The hardware device may include a processor configured to receive the augmented reality view of the physical environment, and present, via a display, augmented reality content to the user while the user is present in the physical environment, based on the determined scope of the augmented reality view.
Type: Grant
Filed: November 1, 2019
Date of Patent: May 11, 2021
Assignee: CAPITAL ONE SERVICES, LLC
Inventors: Jason Richard Hoover, Micah Price, Sunil Subrahmanyam Vasisht, Qiaochu Tang, Geoffrey Dagley, Stephen Michael Wylie
Abstract: Embodiments described herein provide a graphics, media, and compute device having a tiled architecture composed of a number of tiles of smaller graphics devices. The work distribution infrastructure for such device enables the distribution of workloads across multiple tiles of the device. Work items can be submitted to any one or more of the multiple tiles, with workloads able to span multiple tiles. Additionally, upon completion of a work item, graphics, media, and/or compute engines within the device can readily acquire new work items for execution with minimal latency.
Type: Grant
Filed: January 9, 2019
Date of Patent: May 4, 2021
Assignee: Intel Corporation
Inventors: Balaji Vembu, Brandon Fliflet, James Valerio, Michael Apodaca, Ben Ashbaugh, Hema Nalluri, Ankur Shah, Murali Ramadoss, David Puffer, Altug Koker, Aditya Navale, Abhishek R. Appu, Joydeep Ray, Travis Schluessler
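The pull-based work acquisition the Intel abstract describes (any engine grabs the next pending item as soon as it finishes one, so a workload naturally spans tiles) can be modeled with a single shared pool. This single-threaded round-robin stand-in for concurrently pulling tiles is purely illustrative:

```python
from collections import deque

def distribute(work_items, num_tiles):
    # Shared pool: work items are submitted once, and any tile may
    # acquire any pending item, so idle engines refill with minimal
    # latency and one workload can span multiple tiles.
    pool = deque(work_items)
    completed = {t: [] for t in range(num_tiles)}
    tile = 0
    while pool:
        completed[tile].append(pool.popleft())  # tile pulls the next item
        tile = (tile + 1) % num_tiles           # next ready engine's turn
    return completed
```

In hardware the "next ready engine" would be whichever tile finishes first rather than a fixed rotation; the rotation here just makes the sketch deterministic.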
Abstract: Example techniques are described for generating graphics content by obtaining a rendering command for a first frame of the graphics content, rendering a full frame based on the rendering command for the first frame, storing the full frame in a buffer, obtaining a rendering command for a second frame of the graphics content, obtaining an eye position of a user, rendering a partial frame based on the rendering command for the second frame and the eye position of the user, obtaining the full frame from the buffer, and outputting the second frame, wherein the second frame is based on the full frame and the partial frame.
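The compositing step in the abstract above (a buffered full frame from frame one, overlaid with a freshly rendered partial frame around the eye position for frame two) might look like this sketch; frames are 2-D lists, and the region size and centering convention are assumptions:

```python
def composite_frame(full_frame, partial_frame, eye_pos):
    # Overlay the partial frame, centered on the user's eye position,
    # onto a copy of the buffered full frame; pixels outside the partial
    # region reuse the stored full frame.
    out = [row[:] for row in full_frame]
    ey, ex = eye_pos
    h, w = len(partial_frame), len(partial_frame[0])
    for dy in range(h):
        for dx in range(w):
            y, x = ey + dy - h // 2, ex + dx - w // 2
            if 0 <= y < len(out) and 0 <= x < len(out[0]):
                out[y][x] = partial_frame[dy][dx]
    return out
```

Only the gaze region is re-rendered each frame, which is the cost saving the technique is after.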
Abstract: An apparatus to facilitate an update of shader data constants is disclosed. The apparatus includes one or more processors to detect a change to one or more data constants in a shader program, generate a micro-code block including updated constant data during execution of the shader program, and transmit the micro-code block to the shader program.
Type: Grant
Filed: December 23, 2019
Date of Patent: May 4, 2021
Assignee: Intel Corporation
Inventors: Michael Apodaca, John Feit, David Cimini, Thomas Raoux, Konstantin Levit-Gurevich
Abstract: A printing device (106) includes a laser source and a LCOS-SLM (Liquid Crystal on Silicon Spatial Light Modulator). The printing device generates a laser control signal and a LCOS-SLM control signal. The laser source (110) generates a plurality of incident laser beams based on the laser control signal. The LCOS-SLM (112) receives the plurality of incident laser beams, modulates the plurality of incident laser beams based on the LCOS-SLM control signal to generate a plurality of holographic wavefronts (214,216) from the modulated plurality of incident laser beams. Each holographic wavefront forms at least one corresponding focal point. The printing device cures a surface layer or sub-surface layer (406) of a target material (206) at interference points of focal points of the plurality of holographic wavefronts. The cured surface layer of the target material forms a three-dimensional printed content.
Abstract: Embodiments disclosed herein relate to a graphics processing chip for rendering computer graphics. The graphics processing chip may include a controller configured to manage operations of the graphics processing chip in accordance with a graphics-rendering pipeline. The operations may include geometry-processing operations, rasterization operations, and shading operations. The chip may further include programmable memory components configured to store a machine-learning model configured to perform at least a portion of the shading operations. The chip may also include a plurality of processing units configured to be selectively used to perform the shading operations in accordance with the machine-learning model. The chip may also include at least one output memory configured to store image data generated using the shading operations.
Type: Grant
Filed: February 21, 2019
Date of Patent: April 6, 2021
Assignee: Facebook Technologies, LLC
Inventors: Christoph Herman Schied, Anton S. Kaplanyan
Abstract: A computer-implemented method for warping virtual content from two sources includes a first source generating first virtual content based on a first pose. The method also includes a second source generating second virtual content based on a second pose. The method further includes a compositor processing the first and second virtual content in a single pass. Processing the first and second virtual content includes generating warped first virtual content by warping the first virtual content based on a third pose, generating warped second virtual content by warping the second virtual content based on the third pose, and generating output content by compositing the warped first and second virtual content.
Abstract: In certain embodiments, enhancement of a field of view of a user may be facilitated via one or more dynamic display portions. In some embodiments, one or more changes related to one or more eyes of a user may be monitored. Based on the monitoring, one or more positions of one or more transparent display portions of a wearable device may be adjusted, where the transparent display portions enable the user to see through the wearable device. A live video stream representing an environment of the user may be obtained via the wearable device. A modified video stream derived from the live video stream may be displayed on one or more other display portions of the wearable device.
Abstract: A system for adjusting content display orientation on a screen is disclosed. The system may include a processor that may detect both eyes and a body part of a user that is proximal to one or more of the user's eyes. The system may then determine an eye gaze plane based on the positions of the first and second eyes of the user. The eye gaze plane may be determined by identifying a first line of sight extending from the first eye and a second line of sight extending from the second eye. Additionally, the eye gaze plane may bisect a center of the first eye and a center of the second eye of the user. Once the eye gaze plane is determined, the system may adjust the orientation of content displayed on a display device based on the eye gaze plane and on the position of the body part.
Type: Grant
Filed: June 5, 2019
Date of Patent: March 23, 2021
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Marc A. Sullivan, James H. Pratt, Garrett L. Stettler
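One simple reading of the orientation-adjustment step in the AT&T abstract: take the line through the two eye centers, measure its roll angle, and snap the displayed content to the nearest screen orientation. The snapping rule and the screen-space coordinate convention below are assumptions, not the claimed method:

```python
import math

def display_rotation(left_eye, right_eye):
    # Roll angle of the interocular axis (the line bisecting the centers
    # of the two eyes), snapped to the nearest 90-degree screen
    # orientation: 0, 90, 180, or 270 degrees.
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    roll = math.degrees(math.atan2(dy, dx))
    return round(roll / 90) % 4 * 90
```

The full patent also folds in the position of a body part near the eyes (e.g. a hand); that extra signal is omitted here.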
Abstract: An information processing apparatus includes a bio-information obtaining unit configured to obtain bio-information of a subject; a kinetic-information obtaining unit configured to obtain kinetic information of the subject; and a control unit configured to determine an expression or movement of an avatar on the basis of the bio-information obtained by the bio-information obtaining unit and the kinetic information obtained by the kinetic-information obtaining unit and to perform a control operation so that the avatar with the determined expression or movement is displayed.
Abstract: Disclosed are various examples for selective screen sharing. In one example, a computing device determines that a state of a destination device does not satisfy a compliance rule of a management service. The computing device also determines an area to obscure within a video stream using screen-sharing data. The video stream is generated by applying a transformation to a screen capture. The transformation obscures the area within the video stream. The video stream is transmitted to the destination device. In some cases, a user-specified modification to the area is also obtained. The video stream is updated by applying an updated transformation to the screen capture that obscures the updated area within the video stream.
Abstract: An electronic device includes a control unit configured to perform control so that cyclical scroll display of the same content is performed on the second screen, the cyclical scroll display involving, in accordance with a first operation, scrolling the VR content being displayed by flat display in a first direction without scrolling the indicator and sequentially displaying, in the first direction from an end in a second direction in the rectangular region, an image region corresponding to a scroll amount in the first direction among the VR content, and so that the cyclical scroll display is not performed on the first screen even when the first operation is performed; and a generating unit configured to generate an edited VR content including a second video range that is narrower than the first video range among the VR content on the basis of a region indicated by the indicator.
Abstract: An apparatus is provided that includes: an image obtaining unit that obtains an image picked up by an image pickup apparatus; a first detection unit that detects a first feature from the image; a second detection unit that detects a second feature, different from the first feature, from the image by using a method different from that of the first detection unit; a first position orientation derivation unit that derives a position orientation of the image pickup apparatus as a first position orientation on the basis of the first feature detected from the image; a second position orientation derivation unit that derives a position orientation of the image pickup apparatus as a second position orientation on the basis of the second feature detected from the image; and a decision unit that decides the position orientation of the image pickup apparatus on the basis of the first position orientation and the second position orientation.
Abstract: A method and system for sharing memory between a central processing unit (CPU) and a graphics processing unit (GPU) of a computing device are disclosed herein. The method includes allocating a surface within a physical memory and mapping the surface to a plurality of virtual memory addresses within a CPU page table. The method also includes mapping the surface to a plurality of graphics virtual memory addresses within an I/O device page table.
Abstract: A device for accessing memory configured to store an image data cube, wherein the memory has memory banks, and each memory bank has memory rows and memory columns. The device includes an input configured to receive a memory access request having a logical start address, which specifies a logical bank, a logical row, and a logical column, and a burst size; and a memory address generator configured to generate physical memory addresses based on the logical start address and the burst size, wherein any consecutive logical start addresses mapped to different memory rows are mapped to different memory banks.
Type: Grant
Filed: March 28, 2019
Date of Patent: February 23, 2021
Assignee: Infineon Technologies AG
Inventors: Muhammad Hassan, Pedro Costa, Andre Roger, Romain Ygnace
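The guarantee in the Infineon abstract (any consecutive logical start addresses that fall in different memory rows map to different memory banks) can be met by interleaving rows across banks. The sketch below assumes a simple flat logical address and a modulo interleave, which is just one of several mappings with this property:

```python
def map_address(logical_addr, num_banks, row_size):
    # Map a flat logical address to (bank, physical_row, column).
    # Interleaving the row index across banks guarantees that two
    # consecutive addresses in different rows land in different banks,
    # so a burst crossing a row boundary never serializes on one bank.
    row, col = divmod(logical_addr, row_size)
    bank = row % num_banks          # rows interleaved across banks
    return bank, row // num_banks, col
```

With `row_size=4` and `num_banks=2`, address 3 ends row 0 in bank 0 and address 4 starts row 1 in bank 1, so the two accesses can proceed in parallel.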
Abstract: A method of measuring a distortion parameter of a virtual reality device includes: obtaining an anti-distortion grid image according to a first distortion coefficient; obtaining a grid image of the anti-distortion grid image at a preset viewpoint after the anti-distortion grid image passes through a to-be-measured optical component of the virtual reality device; determining a distortion type of the grid image after passing through the to-be-measured optical component; adjusting the first distortion coefficient according to the distortion type of the grid image, thereby obtaining an adjusted first distortion coefficient and reducing distortion of the grid image; and repeating the above steps until the distortion of the grid image is less than or equal to a distortion threshold. The adjusted first distortion coefficient obtained when the distortion of the grid image is less than or equal to the distortion threshold is taken as the distortion coefficient of the to-be-measured optical component.
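The calibration described above reduces to iterating "measure, compare, adjust" on the anti-distortion coefficient. In this sketch, `measure_distortion` is a hypothetical stand-in for rendering the anti-distortion grid with coefficient `k`, imaging it through the optic under test, and scoring the residual distortion (with the sign encoding the distortion type, e.g. pincushion positive, barrel negative); the fixed-step update rule is an assumption:

```python
def calibrate_distortion(measure_distortion, k0, threshold,
                         step=0.1, max_iters=1000):
    # Nudge the anti-distortion coefficient until the measured residual
    # grid distortion falls below the threshold; the direction of the
    # nudge follows the distortion type reported by the measurement.
    k = k0
    for _ in range(max_iters):
        residual = measure_distortion(k)
        if abs(residual) <= threshold:
            return k           # coefficient of the optical component
        k += step if residual > 0 else -step
    return k
```

A real implementation would likely shrink the step as the residual shrinks, but the stopping condition matches the abstract: stop once distortion is at or below the threshold and report the coefficient reached.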
Abstract: The technology disclosed relates to user interfaces for controlling augmented reality environments. Real and virtual objects can be seamlessly integrated to form an augmented reality by tracking motion of one or more real objects within view of a wearable sensor system using a combination of RGB (red, green, and blue) and IR (infrared) pixels of one or more cameras. It also relates to enabling multi-user collaboration and interaction in an immersive virtual environment. In particular, it relates to capturing different sceneries of a shared real world space from the perspective of multiple users. The technology disclosed further relates to sharing content between wearable sensor systems. In particular, it relates to capturing images and video streams from the perspective of a first user of a wearable sensor system and sending an augmented version of the captured images and video stream to a second user of the wearable sensor system.
Type: Grant
Filed: July 12, 2019
Date of Patent: February 16, 2021
Assignee: Ultrahaptics IP Two Limited
Inventors: David S. Holz, Barrett Fox, Kyle A. Hay, Gabriel A. Hare, Wilbur Yung Sheng Yu, Dave Edelhart, Jody Medich, Daniel Plemmons