Patents Examined by Robert J Craddock
-
Patent number: 11961191
Abstract: In some implementations, a method includes obtaining a semantic construction of a physical environment. In some implementations, the semantic construction of the physical environment includes a representation of a physical element and a semantic label for the physical element. In some implementations, the method includes obtaining a graphical representation of the physical element. In some implementations, the method includes synthesizing a perceptual property vector (PPV) for the graphical representation of the physical element based on the semantic label for the physical element. In some implementations, the PPV includes one or more perceptual characteristic values characterizing the graphical representation of the physical element. In some implementations, the method includes compositing an affordance in association with the graphical representation of the physical element.
Type: Grant
Filed: September 2, 2021
Date of Patent: April 16, 2024
Assignee: APPLE INC.
Inventors: Mark Drummond, Bo Morgan, Siva Chandra Mouli Sivapurapu
-
Patent number: 11954759
Abstract: A tile-based graphics system has a rendering space sub-divided into a plurality of tiles which are to be processed. Graphics data items, such as parameters or texels, are fetched into a cache for use in processing one of the tiles. Indicators are determined for the graphics data items, whereby the indicator for a graphics data item indicates the number of tiles with which that graphics data item is associated. The graphics data items are evicted from the cache in accordance with the indicators of the graphics data items. For example, the indicator for a graphics data item may be a count of the number of tiles with which that graphics data item is associated, whereby the graphics data item(s) with the lowest count(s) is (are) evicted from the cache.
Type: Grant
Filed: December 7, 2022
Date of Patent: April 9, 2024
Assignee: Imagination Technologies Limited
Inventors: Steven Fishwick, John Howson
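The eviction policy this abstract describes can be sketched as follows. The class name, fixed-capacity structure, and method names are illustrative assumptions, not taken from the patent; only the tile-count-based eviction rule comes from the abstract.

```python
class TileCountCache:
    """Cache whose eviction victim is the graphics data item
    associated with the fewest remaining tiles."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}        # key -> graphics data (parameters, texels, ...)
        self.tile_counts = {}  # key -> number of tiles referencing the item

    def insert(self, key, data, tile_count):
        # Make room first if a new item would exceed capacity.
        if key not in self.items and len(self.items) >= self.capacity:
            self._evict()
        self.items[key] = data
        self.tile_counts[key] = tile_count

    def _evict(self):
        # Evict the item with the lowest tile count, per the abstract.
        victim = min(self.tile_counts, key=self.tile_counts.get)
        del self.items[victim]
        del self.tile_counts[victim]

    def finish_tile(self, key):
        # A tile using this item has been processed; decrement its count.
        if key in self.tile_counts:
            self.tile_counts[key] = max(0, self.tile_counts[key] - 1)
```

Inserting a third item into a two-slot cache evicts whichever resident item has the lowest remaining tile count.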
-
Patent number: 11948239
Abstract: A method for managing a multi-user animation platform is disclosed. A three-dimensional space within a computer memory is modeled. An avatar of a client is located within the three-dimensional space, the avatar being graphically represented by a three-dimensional figure within the three-dimensional space. The avatar is responsive to client input commands, and the three-dimensional figure includes a graphical representation of client activity. The client input commands are monitored to determine client activity. The graphical representation of client activity is then altered according to an inactivity scheme when client input commands are not detected. Following a predetermined period of client inactivity, the inactivity scheme varies non-repetitively with time.
Type: Grant
Filed: May 8, 2023
Date of Patent: April 2, 2024
Assignee: PFAQUTRUMA RESEARCH LLC
Inventor: Brian Mark Shuster
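One way to read "varies non-repetitively with time" is an idle animation driven by incommensurate frequencies, so the motion never exactly cycles. The following is a minimal sketch under that assumption; the timeout value, function names, and the sinusoid construction are all illustrative, not from the patent.

```python
import math

IDLE_TIMEOUT = 30.0  # assumed predetermined inactivity period, in seconds

def idle_pose(t_idle):
    """Non-repetitive idle motion: the sum of two sinusoids whose
    frequency ratio (1 : sqrt(2)) is irrational never exactly repeats."""
    return math.sin(t_idle) + math.sin(math.sqrt(2.0) * t_idle)

def avatar_update(last_input_time, now):
    """Return None while the client is active; otherwise return the
    idle-scheme pose value for the time spent inactive."""
    inactive_for = now - last_input_time
    if inactive_for < IDLE_TIMEOUT:
        return None  # input commands detected recently: normal rendering
    return idle_pose(inactive_for - IDLE_TIMEOUT)
```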
-
Systems, methods, and graphical user interfaces for adding effects in augmented reality environments
Patent number: 11941764
Abstract: A computer system displays a representation of a field of view of the one or more cameras, including a representation of a portion of a three-dimensional physical environment that is in the field of view of the one or more cameras. The computer system receives a request to add a first virtual effect to the displayed representation of the field of view of the one or more cameras. In response to receiving the request to add the first virtual effect to the displayed representation of the field of view of the one or more cameras and in accordance with a determination that the first virtual effect requires a scan of the physical environment, the computer system initiates a scan of the physical environment to detect one or more features of the physical environment and displays a user interface that indicates a progress of the scan of the physical environment.
Type: Grant
Filed: April 13, 2022
Date of Patent: March 26, 2024
Assignee: APPLE INC.
Inventors: Andrew L. Harding, James A. Queen, Joseph-Alexander P. Weil, Joanna M. Newman, Ron A. Buencamino, Richard H. Salvador, Fernando Garcia, Austin T. Tamaddon, Omid Khalili, Scott W. Wilson, Thomas H. Smith, III
-
Patent number: 11935174
Abstract: Processes, systems, and devices generate a training set comprising a first presentation having a first visual aid and a first audio description. The first visual aid and the first audio description are based on initial data retrieved from a first data source using a first indexing technique. The machine-learning system is trained using the first presentation and the initial data retrieved from the first data source using the first indexing technique. The machine-learning system generates a second presentation having a second visual aid and a second audio description. The second visual aid and the second audio description are based on refreshed data retrieved from the first data source using the first indexing technique. The machine-learning system presents the second presentation via an avatar in a virtual meeting room. The avatar is generated by the machine-learning system to present the second visual aid and the second audio description.
Type: Grant
Filed: June 27, 2022
Date of Patent: March 19, 2024
Assignee: DISH Network L.L.C.
Inventors: Eric Pleiman, Jesus Flores Guerra
-
Patent number: 11935203
Abstract: In order to guide the user to a target object that is located outside of the field of view of the wearer of the AR computing device, a rotational navigation system displays on a display device an arrow or a pointer, referred to as a direction indicator. The direction indicator is generated based on the angle between the direction of the user's head and the direction of the target object, and a correction coefficient. The correction coefficient is defined such that the greater the angle between the direction of the user's head and the direction of the target object, the greater the horizontal component of the direction indicator.
Type: Grant
Filed: June 30, 2022
Date of Patent: March 19, 2024
Assignee: Snap Inc.
Inventor: Pawel Wawruch
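The angle-plus-correction-coefficient construction can be sketched as below. The linear form of the coefficient and the constant `k` are assumptions for illustration; the patent only requires that the coefficient grow with the head-to-target angle.

```python
def direction_indicator(head_yaw_deg, target_yaw_deg, k=1.0 / 180.0):
    """Horizontal component of the on-screen direction indicator.
    The correction coefficient increases with the head-to-target angle,
    so larger angles produce a disproportionately larger horizontal
    deflection (constants are illustrative, not from the patent)."""
    # Signed smallest angle from the head direction to the target,
    # normalized into (-180, 180].
    angle = (target_yaw_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
    coefficient = 1.0 + k * abs(angle)  # monotonically increasing in |angle|
    return angle * coefficient
```

With these constants, a target 90 degrees off-axis deflects the indicator more than twice as far as one 45 degrees off-axis, rather than exactly twice as far.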
-
Patent number: 11935186
Abstract: A modeling method and apparatus based on point cloud data, an electronic device, and a storage medium are provided, and relate to the field of three-dimensional panoramic technologies. The modeling method based on point cloud data includes: obtaining three-dimensional point cloud information collected by a point cloud collection device from a scanned target, first time information, first rotation position information of an electric motor, and second time information; determining second rotation position information based on the first time information and the second time information; and obtaining position information of the scanned target based on the three-dimensional point cloud information and the second rotation position information, to construct a three-dimensional panoramic model corresponding to the scanned target.
Type: Grant
Filed: November 13, 2020
Date of Patent: March 19, 2024
Assignee: REALSEE (BEIJING) TECHNOLOGY CO., LTD.
Inventor: Xianyu Cheng
-
Patent number: 11922586
Abstract: Augmented reality eyewear is provided that removes the need to aim down sights while using a firearm. The eyewear has accelerometer sensors outputting a "view angle" that is compared with the line of sight or "head angle" and the true horizon. A camera system outputs the target distance. A firearm equipped with one or more accelerometers outputs a trajectory. With the firearm and goggles within an arm's length of one another, the data points can be combined to create a full 3D spatial image of the bullet's path. The target distance determines the length of the trajectory and the "termination coordinates" describing the projectile's target in space relative to the user's personal coordinate system (i.e., view angle). The relative termination coordinates can then be delivered to the goggle drivers. The termination coordinates are converted to a pixel command generating an illuminated aiming reticle that corresponds with the direction the firearm is pointing and is overlaid on the target.
Type: Grant
Filed: May 5, 2022
Date of Patent: March 5, 2024
Inventor: Matthew Pohl
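A highly simplified ballistic sketch of "termination coordinates" in the user's coordinate frame might look like the following. The flat-fire drop approximation, the coordinate convention, and every parameter name are assumptions for illustration; the patent does not disclose its ballistic model.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def termination_coordinates(target_distance_m, muzzle_velocity_ms,
                            firearm_yaw_deg, view_yaw_deg):
    """Illustrative sketch (not the patented method): approximate the
    bullet's vertical drop at the target distance with a flat-fire
    model, and express the horizontal offset of the impact point
    relative to the user's view angle. Returns (x, y, z) in meters:
    x right, y up, z forward in the user's personal coordinate frame."""
    time_of_flight = target_distance_m / muzzle_velocity_ms
    drop_m = 0.5 * G * time_of_flight ** 2
    relative_yaw = firearm_yaw_deg - view_yaw_deg
    x = target_distance_m * math.tan(math.radians(relative_yaw))
    return x, -drop_m, target_distance_m
```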
-
Patent number: 11922539
Abstract: An exercise instruction apparatus for guiding a user to perform a fitness exercise includes a control unit and a display device. The control unit is configured to control a video imagery displayed by the display device. The video imagery shows an instructor image and a user image simultaneously. The instructor image demonstrates movements of the fitness exercise, and the user image, presented as a mirror image, is a real-time image of the user standing in front of the display device. The control unit is configured to adjust at least one image parameter, such as brightness, transparency, or contrast, of at least one of the instructor image and the user image. The image parameter that is adjusted for the instructor image may be a different parameter than the image parameter that is adjusted for the user image. The instructor image may be overlapped on the user image, and the user can adjust the transparency of at least one of the instructor image and the user image.
Type: Grant
Filed: December 26, 2021
Date of Patent: March 5, 2024
Assignee: Johnson Health Tech Co., Ltd.
Inventor: Nathan Pyles
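The overlap-with-adjustable-transparency step amounts to an alpha blend of the instructor image over the mirrored user image. A minimal sketch, assuming NumPy arrays of identical shape and an illustrative `instructor_alpha` parameter:

```python
import numpy as np

def overlay(instructor_img, user_img, instructor_alpha=0.5):
    """Blend the instructor image over the mirrored user image.
    instructor_alpha sets the instructor image's opacity (0 = fully
    transparent, 1 = fully opaque); illustrative sketch only."""
    mirrored = user_img[:, ::-1]  # present the user image as a mirror image
    blended = (instructor_alpha * instructor_img.astype(np.float32)
               + (1.0 - instructor_alpha) * mirrored.astype(np.float32))
    return blended.astype(np.uint8)
```

Raising `instructor_alpha` makes the demonstration more prominent; lowering it lets the user see their own form through the instructor.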
-
Patent number: 11915370
Abstract: Provided is a method for 3D modeling based on an irregular-shaped sketch, in which the method is executed by one or more processors, and includes receiving 2D sketch data of a target object, inputting the 2D sketch data into a 3D model generation model to generate a 3D model of the target object, and displaying the generated 3D model on a display.
Type: Grant
Filed: January 25, 2022
Date of Patent: February 27, 2024
Assignee: RECON LABS INC.
Inventors: Kyungwon Yun, Roger Blanco, Kyung Hoon Hyun, Seonghoon Ban
-
Patent number: 11908083
Abstract: Methods and systems are disclosed for performing operations comprising: receiving a video that includes a depiction of a real-world object; generating a three-dimensional (3D) body mesh associated with the real-world object that tracks movement of the real-world object across frames of the video; obtaining an external mesh associated with an augmented reality element; automatically establishing a correspondence between the 3D body mesh associated with the real-world object and the external mesh; deforming the external mesh based on movement of the real-world object and the established correspondence with the 3D body mesh; and modifying the video to include a display of the augmented reality element based on the deformed external mesh.
Type: Grant
Filed: August 31, 2021
Date of Patent: February 20, 2024
Assignee: Snap Inc.
Inventors: Yanli Zhao, Matan Zohar, Brian Fulkerson, Georgios Papandreou, Haoyang Wang
-
Patent number: 11908040
Abstract: An image processing method and a computer system. The method may be applied to a cloud-side server in a cloud mobile phone. The server may be a virtualization server; a host operating system and a guest operating system are deployed on the server, a user mode graphics driver is deployed in the guest operating system, and a kernel mode graphics driver is deployed in the host operating system. The user mode graphics driver and the kernel mode graphics driver collaborate with each other to implement image rendering of the server. Then, the server may send a rendered image to the cloud mobile phone. Accordingly, an instruction translation process is reduced, to reduce overheads of a processor and improve image processing efficiency.
Type: Grant
Filed: September 28, 2021
Date of Patent: February 20, 2024
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Lingfei Liu, Lixin Chen, Yang Xiong
-
Patent number: 11900551
Abstract: A technology that streams graphical components and rendering instructions to a client device, for the client device to perform the final rendering and overlaying of that content onto the client's video stream based on the client's most recent tracking of the device's position and orientation. A client device sends a request for augmented reality drawing data to a network device. In response, the network device generates augmented reality drawing data, which can be augmented reality change data based on the augmented reality information and previous client render state information, and sends the augmented reality drawing data to the client device. The client device receives the augmented reality drawing data and renders a visible representation of an augmented reality scene comprising overlaying augmented reality graphics over a current video scene obtained from a camera of the client device.
Type: Grant
Filed: March 28, 2022
Date of Patent: February 13, 2024
Assignee: HOME BOX OFFICE, INC.
Inventor: Richard Parr
-
Patent number: 11900548
Abstract: A system includes an augmented virtual reality (AVR) object creation engine, an AVR object enhancement engine, an AVR object positioning engine, and an AVR media authoring engine. The AVR object creation engine is configured to convert real world data into one or more AVR objects. The AVR object enhancement engine is configured to enhance the one or more AVR objects to include at least one of processed data visualization and multiuser controls. The AVR object positioning engine is configured to position the enhanced one or more AVR objects in a virtual space-time. The AVR media authoring engine is configured to make available, as AVR media, a scene tree including the virtual space-time in which the enhanced one or more AVR objects are positioned.
Type: Grant
Filed: September 13, 2021
Date of Patent: February 13, 2024
Assignee: Flow Immersive, Inc.
Inventors: Jason Marsh, Aleksei Karpov, Timofey Biryukov
-
Patent number: 11893674
Abstract: Aspects of the present disclosure are directed to creating interactive avatars that can be pinned as world-locked artificial reality content. Once pinned, an avatar can interact with the environment according to contextual cues and rules, without active control by the avatar owner. An interactive avatar system can configure the avatar with action rules, visual elements, and settings based on user selections. Once an avatar is configured and pinned to a location by an avatar owner, when other XR devices are at that location, a central system can provide the avatar (with its configurations) to the other XR device. This allows a user of that other XR device to discover and interact with the avatar according to the configurations established by the avatar owner.
Type: Grant
Filed: February 22, 2022
Date of Patent: February 6, 2024
Assignee: Meta Platforms Technologies, LLC
Inventors: Campbell Orme, Matthew Adam Hanson, Daphne Liang, Matthew Roberts, Fangwei Lee, Bryant Jun-Yao Tang
-
Patent number: 11893702
Abstract: Provided is a virtual object processing method. The virtual object processing method includes: detecting a spatial plane in a scene where a first device is located; detecting real objects in the scene to determine a plurality of real object position boxes; determining, based on a matching relation between the plurality of real object position boxes and the spatial plane in the scene, a candidate position box set from the plurality of real object position boxes; determining, in response to a virtual object configuration operation for a target position box in the candidate position box set, position information of the virtual object in the target position box; and transmitting information on the virtual object and the position information of the virtual object in the target position box to a second device for displaying the virtual object on the second device.
Type: Grant
Filed: January 27, 2022
Date of Patent: February 6, 2024
Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
Inventor: Jian Kang
-
Patent number: 11880958
Abstract: A method for adjusting an over-rendered area of a display in an AR device is described. The method includes identifying an angular velocity of a display device, a most recent pose of the display device, previous warp poses, and previous over-rendered areas, and adjusting a size of a dynamic over-rendered area based on a combination of the angular velocity, the most recent pose, the previous warp poses, and the previous over-rendered areas.
Type: Grant
Filed: March 2, 2023
Date of Patent: January 23, 2024
Assignee: Snap Inc.
Inventors: Bernhard Jung, Edward Lee Kim-Koon
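The idea of sizing the over-rendered guard band from angular velocity plus history can be sketched as below. The linear velocity term, the 50/50 blend with recent history, and all constants are illustrative assumptions; the patent only specifies that these inputs are combined.

```python
def over_render_margin(angular_velocity_dps, prev_margins,
                       base_px=16, k_px_per_dps=0.5, max_px=128):
    """Size the dynamic over-rendered area: faster head rotation needs
    a wider band of extra pixels around the visible area so that late
    warping has content to sample. Recent margins are blended in to
    avoid the band size popping frame to frame. Constants are
    illustrative, not from the patent."""
    target = base_px + k_px_per_dps * abs(angular_velocity_dps)
    history = sum(prev_margins) / len(prev_margins) if prev_margins else target
    margin = 0.5 * target + 0.5 * history  # smooth toward the new target
    return min(max_px, margin)             # cap to bound rendering cost
```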
-
Patent number: 11875524
Abstract: The present disclosure provides an unmanned aerial vehicle platform based vision measurement method for a static rigid object. To address the problem that existing vision measurement methods are highly specialized but poorly versatile, the present disclosure uses a method combining object detection and three-dimensional reconstruction to mark an object to be measured, and uses a three-dimensional point cloud processing method to further mark a size to be measured and calculate its length. This takes full advantage of the convenience of data collection by an unmanned aerial vehicle (UAV) platform, with its global navigation satellite system (GNSS), inertial measurement unit (IMU), and the like assisting measurement. There is no need for common auxiliary devices such as a light pen or a marker, which improves the versatility of vision measurement.
Type: Grant
Filed: November 16, 2021
Date of Patent: January 16, 2024
Assignee: BEIHANG UNIVERSITY
Inventors: Xiaoyan Luo, Bo Liu, Xiaofeng Shi, Chengxi Wu, Lu Li
-
Patent number: 11869158
Abstract: A cross reality system enables any of multiple devices to efficiently render shared location-based content. The cross reality system may include a cloud-based service that responds to requests from devices to localize with respect to a stored map. The service may return to the device information that localizes the device with respect to the stored map. In conjunction with localization information, the service may provide information about locations in the physical world proximate the device for which virtual content has been provided. Based on information received from the service, the device may render, or stop rendering, virtual content to each of multiple users based on the user's location and specified locations for the virtual content.
Type: Grant
Filed: May 25, 2022
Date of Patent: January 9, 2024
Assignee: Magic Leap, Inc.
Inventors: Timothy Dean Caswell, Konrad Piascik, Leonid Zolotarev, Mark Ashley Rushton
-
Patent number: 11861783
Abstract: Various methods are provided for the generation of motion vectors in the context of 3D computer-generated images. In one example, a method includes: generating, for each pixel of one or more objects to be rendered in a current frame, a 1-phase motion vector (MV1) and a 0-phase motion vector (MV0), each MV1 and MV0 having an associated depth value, to thereby form an MV1 texture and an MV0 texture, each MV0 determined based on a camera MV0 and an object MV0; converting the MV1 texture to a set of MV1 pixel blocks and converting the MV0 texture to a set of MV0 pixel blocks; and outputting the set of MV1 pixel blocks and the set of MV0 pixel blocks for image processing.
Type: Grant
Filed: March 23, 2022
Date of Patent: January 2, 2024
Inventors: Hongmin Zhang, Miao Sima, Gongxian Liu, Zongming Han, Junhua Chen, Guohua Cheng, Baochen Liu, Neil Woodall, Yue Ma, Huili Han
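The texture-to-pixel-block conversion step can be sketched as a block-wise reduction of the per-pixel motion-vector texture. Picking the motion vector of the nearest-depth (foreground) pixel in each block is one plausible use of the associated depth values; the patent does not specify the exact reduction rule, and the block size and names here are illustrative.

```python
def mv_texture_to_blocks(mv, depth, block=8):
    """Collapse a per-pixel motion-vector texture into block motion
    vectors. For each block, keep the MV of the pixel nearest the
    camera (smallest depth): a foreground-wins selection, assumed
    here for illustration. `mv` and `depth` are row-major 2D arrays
    of equal size."""
    h, w = len(depth), len(depth[0])
    blocks = []
    for by in range(0, h, block):
        row = []
        for bx in range(0, w, block):
            best = None  # (y, x) of the nearest-depth pixel seen so far
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    if best is None or depth[y][x] < depth[best[0]][best[1]]:
                        best = (y, x)
            row.append(mv[best[0]][best[1]])
        blocks.append(row)
    return blocks
```

The same routine would be applied once to the MV1 texture and once to the MV0 texture to produce the two sets of pixel blocks the abstract outputs.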