Abstract: Systems and methods for transitional effects in real-time rendering applications are described. Some implementations may include rendering a computer-generated reality environment in a first state using an application that includes multiple processes associated with respective objects of the computer-generated reality environment; generating a message that indicates a change in the computer-generated reality environment; sending the message to two or more of the multiple processes associated with respective objects of the computer-generated reality environment; responsive to the message, updating configurations of objects of the computer-generated reality environment to change the computer-generated reality environment from the first state to a second state; and rendering the computer-generated reality environment in the second state using the application.
Type:
Grant
Filed:
August 23, 2019
Date of Patent:
December 28, 2021
Assignee:
Apple Inc.
Inventors:
Xiaobo An, Peter Dollar, Eric J. Mueller, Brendan K. Duncan
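The fan-out step this abstract describes, a single change message delivered to several per-object processes, can be sketched as follows. This is a minimal in-process illustration using queues as stand-in mailboxes; the class names and message fields are hypothetical and not taken from the patent.

```python
import queue

class ObjectProcess:
    """Stands in for one per-object process; holds a mailbox and a state."""
    def __init__(self, name):
        self.name = name
        self.mailbox = queue.Queue()
        self.state = "first"

    def drain(self):
        # Apply every pending message to this object's configuration.
        while not self.mailbox.empty():
            msg = self.mailbox.get()
            if msg.get("kind") == "state_change":
                self.state = msg["target"]

def broadcast(processes, message):
    """Send the same change message to two or more object processes."""
    for proc in processes:
        proc.mailbox.put(message)

objects = [ObjectProcess("tree"), ObjectProcess("lamp"), ObjectProcess("avatar")]
broadcast(objects, {"kind": "state_change", "target": "second"})
for obj in objects:
    obj.drain()
```

After the broadcast is drained, every object has moved from the first state to the second, which is the transition the abstract renders before and after.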
Abstract: An electronic display and method for memorizing a theme are disclosed. A theme is exhibited via a picture and a plurality of phrases associated with the theme. The phrases are intermittently highlighted, one at a time, over the picture. Each word of each phrase is assigned a fixed location relative to the picture; the phrases are scattered over the picture, and each phrase is interspersed among the others.

Abstract: A circuit device includes an image processing circuit and a comparison circuit. The image processing circuit performs a first mapping process and a first rotation process on an input image to generate a display image for a head-up display. The image processing circuit then performs, on the display image, a second mapping process that reverses the first mapping process and a second rotation process that reverses the first rotation process, to generate a restored image. The comparison circuit compares the input image with the restored image and outputs the result of the comparison as information for detecting an error in the display image.
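The round-trip check this abstract describes, forward transform, reverse transform, then comparison, can be sketched with a simple 90-degree rotation standing in for the mapping and rotation processes. All function names are hypothetical illustrations, not the circuit's actual operations.

```python
def rotate_cw(img):
    # 90-degree clockwise rotation: stands in for the first rotation process.
    return [list(row) for row in zip(*img[::-1])]

def rotate_ccw(img):
    # Reverse rotation: stands in for the second (reverse) rotation process.
    return [list(row) for row in zip(*img)][::-1]

def error_detected(input_image, displayed_image):
    # Undo the forward transform on the displayed image and compare it
    # with the original input; any mismatch signals a display-path error.
    return rotate_ccw(displayed_image) != input_image

src = [[1, 2], [3, 4]]
hud = rotate_cw(src)                 # image sent to the head-up display
corrupted = [row[:] for row in hud]
corrupted[0][0] = 99                 # simulate a bit flip in the display path
ok = error_detected(src, hud)        # round trip matches: no error
bad = error_detected(src, corrupted) # mismatch flags an error
```

Because the reverse process exactly undoes the forward one, any difference between the restored image and the input isolates corruption introduced after the forward transform.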
Abstract: An HUD system includes at least one first HUD device, namely an AR-HUD device, with at least one first image source for displaying a first image produced by at least one hologram in a first display section of a display region, and at least one second HUD device with at least one second image source for displaying a second image produced using geometric projection optics in a second display section of the display region.
Abstract: An information processing device according to one embodiment includes a detection unit that detects an attention region corresponding to a user's sense of sight, an identification unit that identifies a first object overlapping the attention region from one or more objects existing in a space where the user is located, a request unit that makes a request for first object information related to the first object to another computer during a first time period where the first object overlaps the attention region, and a receiving unit that receives the first object information transmitted from the other computer in response to the request during the first time period.
Abstract: In a face swap method carried out by an electronic device, a first head image is segmented from a destination image. First facial landmarks and a first hair mask are obtained according to the first head image. A second head image is segmented from a source image. Second facial landmarks and a second hair mask are obtained according to the second head image. If at least one eye landmark in the second facial landmarks is covered by hair, the second head image and the second hair mask are processed and repaired so as to obtain a swapped-face image with eyes not covered by hair.
Type:
Grant
Filed:
December 27, 2019
Date of Patent:
September 14, 2021
Assignee:
Ping An Technology (Shenzhen) Co., Ltd.
Inventors:
Jinghong Miao, Yuchuan Gou, Minghao Li, Jui-Hsin Lai, Bo Gong, Mei Han
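The hair-occlusion test at the heart of this face swap abstract, checking whether an eye landmark falls inside the hair mask, can be sketched as below. The coordinate convention and mask representation are hypothetical simplifications, not the patent's actual data structures.

```python
def eye_covered_by_hair(eye_landmark, hair_mask):
    """True when the eye landmark falls on a hair pixel of the binary mask.

    eye_landmark is an (x, y) pixel coordinate; hair_mask is a row-major
    grid where 1 marks hair pixels.
    """
    x, y = eye_landmark
    return hair_mask[y][x] == 1

hair_mask = [
    [1, 1, 0],   # hair across the top of the head image
    [0, 0, 0],
]
left_eye, right_eye = (0, 0), (2, 1)
needs_repair = (eye_covered_by_hair(left_eye, hair_mask)
                or eye_covered_by_hair(right_eye, hair_mask))
```

When the check fires, the abstract's repair step would process the second head image and hair mask so the swapped-face result shows eyes unobstructed by hair.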
Abstract: Aspects of the subject disclosure may include, for example, a method performed by a processing system including a processor, including receiving, from an augmented reality device, image data associated with a visual apparatus, determining whether the image data indicates a marker, and, responsive to determining that the image data indicates the marker, determining a first characteristic associated with a user of the augmented reality device, and sending a notification to an advertising server responsive to determining the image data includes the marker, where the advertising server sends content data to the augmented reality device responsive to the notification, and where the content data is selected by the advertising server according to the first characteristic associated with the user of the augmented reality device. Other embodiments are disclosed.
Abstract: A system and method for teaching users reading comprehension. In one aspect, the educational system and method uses visual representations, on a user's device display, of a focusing point called a Bindu and a viewpoint called a Mind's Eye to assist in the learning of words. The combination of the Bindu with the Mind's Eye forces a user to align the two with one another in order to present an unobstructed view of a 3-D representation of 2-D letters, numbers, punctuation marks, and words.
Type:
Grant
Filed:
July 20, 2020
Date of Patent:
August 17, 2021
Assignee:
Gifted Bill Enterprises, LLC
Inventors:
William H. Allen, Sreekanth Sunil Thankamushy, Marcia Pierson Hart
Abstract: An HMD device identifies a pose of the device and identifies a subset of a plurality of camera viewpoints of a light-field based on the pose. The HMD device interpolates image data of the light-field based on the pose and the subset of the plurality of camera viewpoints to generate an interpolated view, and displays, at the HMD device, an image based on the interpolated view. By interpolating based on the subset of camera viewpoints, the HMD device can reduce processing overhead and improve the user experience.
Type:
Grant
Filed:
August 27, 2018
Date of Patent:
August 10, 2021
Assignee:
Google LLC
Inventors:
Manfred Ernst, Daniel Erickson, Harrison McKenzie Chapter
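The two steps this abstract names, selecting a pose-dependent subset of camera viewpoints and interpolating image data from that subset, can be sketched as follows. The nearest-neighbor selection and inverse-distance blend are hypothetical stand-ins; the patent does not specify these particular formulas.

```python
import math

def nearest_viewpoints(pose, viewpoints, k=2):
    """Select the k camera viewpoints closest to the headset pose."""
    return sorted(viewpoints, key=lambda v: math.dist(pose, v["pos"]))[:k]

def interpolate(pose, subset):
    """Inverse-distance-weighted blend of per-viewpoint pixel values."""
    weights = [1.0 / (math.dist(pose, v["pos"]) + 1e-9) for v in subset]
    total = sum(weights)
    return sum(w * v["pixel"] for w, v in zip(weights, subset)) / total

viewpoints = [
    {"pos": (0.0, 0.0), "pixel": 10.0},
    {"pos": (1.0, 0.0), "pixel": 20.0},
    {"pos": (5.0, 5.0), "pixel": 90.0},   # far camera, excluded from the subset
]
pose = (0.5, 0.0)
subset = nearest_viewpoints(pose, viewpoints)
value = interpolate(pose, subset)         # blend of the two nearby cameras
```

Restricting interpolation to the nearby subset is what yields the reduced processing overhead the abstract claims: the far viewpoint contributes nothing and is never sampled.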
Abstract: A method of building an anatomical branch model comprises receiving anatomical image data comprising a plurality of graphical units associated with an anatomical structure and determining a plurality of parent segments and child segments. The method also comprises determining a set of relationships between the parent segments and the child segments by determining a first set of connection costs of connecting at least one of the parent segments to a first subset of the child segments, wherein the child segments of the first subset are separated from the at least one of the parent segments by one or more gaps, identifying a first child segment from the first subset of the child segments based on a first connection cost, and connecting the first child segment to the at least one parent segment. The method further comprises generating an image of the anatomical branch model based on the determined set of relationships.
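The connection-cost step this abstract describes, scoring each gap between a parent segment and candidate child segments and connecting the lowest-cost child, can be sketched as below. The Euclidean gap metric and segment names are hypothetical; the patent does not specify its cost function.

```python
def gap_cost(parent_end, child_start):
    # Euclidean gap between the parent's end point and a child's start point.
    return sum((a - b) ** 2 for a, b in zip(parent_end, child_start)) ** 0.5

def connect_best_child(parent_end, children):
    """Pick the child segment whose connection cost across the gap is lowest."""
    costs = {name: gap_cost(parent_end, start) for name, start in children.items()}
    best = min(costs, key=costs.get)
    return best, costs

parent_end = (0.0, 0.0, 0.0)
children = {
    "child_a": (0.0, 0.0, 1.0),   # 1.0 away across the gap
    "child_b": (3.0, 4.0, 0.0),   # 5.0 away across the gap
}
best, costs = connect_best_child(parent_end, children)
```

Repeating this selection for every parent segment yields the set of parent-child relationships from which the branch-model image is generated.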
Abstract: A method for implementing an augmented reality picture is performed at a computing device having a camera, including: obtaining a picture frame and a pose direction of the camera during shooting of the picture frame; determining a framing space in a 3D rectangular coordinate system according to the shooting mode of the camera; determining 2D graphics in the framing space, and obtaining 3D coordinates of a position reference point of each 2D graphic; for each 2D graphic, adjusting a pose based on a rotation datum point of the 2D graphic, and obtaining 3D coordinates of a position reference point of the pose-adjusted 2D graphic; and projecting, through perspective projection transformation, the 3D coordinates of the position reference point of the 2D graphic to 2D coordinates in a corresponding planar projection area, and rendering the 2D graphic on the picture frame according to the 2D coordinates.
Type:
Grant
Filed:
October 1, 2019
Date of Patent:
July 27, 2021
Assignee:
TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
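The final step of this abstract, perspective projection of a 3D position reference point to 2D coordinates, can be sketched with a standard pinhole model. The camera convention and focal length are illustrative assumptions, not details from the patent.

```python
def project(point3d, focal_length=1.0):
    """Pinhole perspective projection: camera at the origin looking down +z.

    Maps a 3D point (x, y, z) to 2D image-plane coordinates (f*x/z, f*y/z).
    """
    x, y, z = point3d
    if z <= 0:
        raise ValueError("point is behind the camera")
    return (focal_length * x / z, focal_length * y / z)

# A position reference point 1 unit right and 2 units ahead projects to
# x = 0.5 on the image plane; doubling depth halves the projected offset.
uv = project((1.0, 0.0, 2.0))
```

The resulting 2D coordinates locate where the pose-adjusted 2D graphic is rendered on the picture frame.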
Abstract: The present technology relates to an image processing apparatus and method that facilitate editing. The image processing apparatus includes an operation detection unit that detects an operation input via an operation unit, and a display control unit that outputs, to an immersive presentation device, part or all of a spherical image on which an image of the operation unit is superimposed, as a presentation image, causing the immersive presentation device to display the presentation image. When an operation input has been detected, the display control unit controls the display of the presentation image such that the operation input is reflected, and the operation detection unit detects the coordinates of the position pointed to by a pointer of the operation unit in the spherical image. The present technology is applicable to an editing system that edits spherical images.
Abstract: Enhanced methods and systems for the automatic generation and rendering of anamorphic (e.g., curved, distorted, deformed, and/or warped) images are described. When viewed via a reflection from a non-planar (e.g., curved) surface, the automatically generated and rendered anamorphic images are perceived as relatively free of distortion, deformation, and warping. The anamorphic images may be utilized for catoptric anamorphosis, e.g., projective, mirrored, and/or reflective anamorphic displays of images. Various artworks may employ the automatically generated anamorphic image and the curved reflective surface to produce a relatively undistorted reflected image of the anamorphic image.
Abstract: An information processing apparatus includes a determination unit configured to determine an output explicitness level of notification information to a user, and an output control unit configured to perform control such that the notification information is output in an output mode according to the determined output explicitness level. The output control unit performs control such that the notification information is output in a display mode of assimilating the notification information into a display environment in accordance with the output explicitness level.
Abstract: A device and algorithm for allowing a customer to choose a photo book cover template that is compatible with a photo containing faces. The photo is compared with a set of templates arranged in a first order to determine how compatible the photo is with each of the templates, and a score indicative of compatibility is assigned. A re-sorted set of compatible templates, combined with the photo, is presented to the customer for consideration.
Type:
Grant
Filed:
June 23, 2020
Date of Patent:
June 15, 2021
Assignee:
PlanetArt LLC
Inventors:
Adam Black, Erik Malkemus, Roger Bloxberg, Zhang Ming Jun, Fan Xiang, Wang Ping
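The score-then-re-sort flow this abstract describes can be sketched as below. The face-occlusion scoring metric is a deliberately simple hypothetical stand-in; the patent does not disclose its actual compatibility measure.

```python
def compatibility_score(photo_faces, template):
    """Score a template against a photo: penalize each face that one of the
    template's graphic zones would cover (a toy stand-in metric)."""
    covered = sum(1 for face in photo_faces if face in template["occluded_zones"])
    return 1.0 - covered / max(len(photo_faces), 1)

def rank_templates(photo_faces, templates):
    """Score every template in the first order, then re-sort best-first."""
    scored = [(compatibility_score(photo_faces, t), t["name"]) for t in templates]
    return sorted(scored, reverse=True)

faces = ["top_left", "center"]
templates = [
    {"name": "banner",  "occluded_zones": ["top_left"]},   # covers one face
    {"name": "minimal", "occluded_zones": []},             # covers no faces
]
ranking = rank_templates(faces, templates)
```

The re-sorted list, most compatible template first, is what would be combined with the photo and presented to the customer.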
Abstract: A method, non-transitory computer readable medium, and computing apparatus that use automated image analysis to identify two or more different types of content in image data for an electronic image associated with one or more of a plurality of types of claims. The image data associated with each of the identified two or more different types of content is converted by a different one of a plurality of automated content conversion techniques based on the association with the one or more types of claims and on the identified one of the plurality of types of content. Modified image data for the electronic image is generated based on the converted image data associated with each of the identified two or more different types of content. The modified image data for the electronic image, with the converted image data for each of the identified two or more different types of content, is provided.
Abstract: A computerized process useful for sharing persistent augmented reality (AR) objects between a set of users in a persistent AR system, comprising: implementing a persistent AR system, wherein the persistent AR system enables the set of users to place a set of persistent AR objects that are persistently viewable in an associated real-world context via a mobile device, wherein the persistent AR objects are shareable between users of the persistent AR system, and wherein the persistent AR objects are geolocated with the associated real-world context location; providing a list of each geolocated persistent AR object created by a first user as a user channel in the persistent AR system; enabling another user to subscribe to the channel; and enabling the other user to view each geolocated persistent AR object of the channel.
Abstract: The present invention relates to systems and methods suitable for creating and delivering augmented reality (AR) content. In particular, the present invention relates to systems and methods to create portable AR content to be downloaded, rendered, and displayed on a display device in real time.
Abstract: Analysis and graphical rendering of subscriber data is provided. A data analysis component is provided that obtains a set of subscriber data, correlates various subsets of the subscriber data to determine a plurality of data relationships, and graphically renders the subscriber data as a heat map, a fractal map, a tree map, a three-dimensional plot, a three-dimensional map, a graph, a chart, etc., based on a scale associated with the data relationships. In addition, the data analysis component can obtain a set of line number portability data that can be correlated with the various subsets of the subscriber data to determine the data relationships.
Abstract: Computerized systems, methods, kits, and computer-readable media storing code for implementing the methods are provided for interacting with a physical object in an augmented reality (AR) environment generated by an AR system. One such system includes: a plurality of neuromuscular sensors able to sense a plurality of neuromuscular signals from a user, and at least one computer processor. The neuromuscular sensors are arranged on one or more wearable devices worn by the user to sense the neuromuscular signals. The at least one computer processor is programmed to: determine, based at least in part on the neuromuscular signals sensed by the neuromuscular sensors, information about an interaction of the user with the physical object in the AR environment generated by the AR system; and instruct the AR system to provide feedback based, at least in part, on the information about the interaction of the user with the physical object.
Type:
Grant
Filed:
October 4, 2019
Date of Patent:
April 6, 2021
Assignee:
Facebook Technologies, LLC
Inventors:
Christopher Osborn, Mason Remaley, Lana Awad, Adam Berenzweig, Arielle Susu-Mago, Michael Astolfi, Daniel Wetmore
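The signals-to-interaction-to-feedback pipeline this abstract outlines can be sketched as below. The threshold rule is a toy stand-in for whatever trained inference the system actually uses, and every name and number here is a hypothetical illustration.

```python
def interaction_from_signals(samples, grip_threshold=0.6):
    """Infer a grip interaction from mean neuromuscular signal amplitude
    (a toy stand-in for the inference the abstract implies)."""
    level = sum(samples) / len(samples)
    kind = "grip" if level >= grip_threshold else "none"
    return {"interaction": kind, "strength": level}

def feedback_for(interaction):
    # Instruct the AR system to provide haptic feedback scaled to the grip.
    if interaction["interaction"] == "grip":
        return {"haptic_intensity": interaction["strength"]}
    return {"haptic_intensity": 0.0}

sensed = [0.5, 0.75, 1.0]            # amplitudes from the wearable sensors
interaction = interaction_from_signals(sensed)
feedback = feedback_for(interaction)
```

The two stages mirror the abstract's two programmed steps: first derive interaction information from the sensed signals, then instruct the AR system to provide feedback based on that information.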