Patents Examined by Robert J Craddock
  • Patent number: 11961191
    Abstract: In some implementations, a method includes obtaining a semantic construction of a physical environment. In some implementations, the semantic construction of the physical environment includes a representation of a physical element and a semantic label for the physical element. In some implementations, the method includes obtaining a graphical representation of the physical element. In some implementations, the method includes synthesizing a perceptual property vector (PPV) for the graphical representation of the physical element based on the semantic label for the physical element. In some implementations, the PPV includes one or more perceptual characteristic values characterizing the graphical representation of the physical element. In some implementations, the method includes compositing an affordance in association with the graphical representation of the physical element.
    Type: Grant
    Filed: September 2, 2021
    Date of Patent: April 16, 2024
    Assignee: APPLE INC.
    Inventors: Mark Drummond, Bo Morgan, Siva Chandra Mouli Sivapurapu
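    A minimal sketch of the idea above, assuming a simple table that maps semantic labels to perceptual characteristic values; the PPV fields, the label set, and the function names are illustrative assumptions, not taken from the patent.

      # Hypothetical sketch: synthesize a perceptual property vector (PPV) for a
      # graphical representation of a physical element from its semantic label.
      from dataclasses import dataclass

      @dataclass
      class PPV:
          roughness: float    # 0 = smooth, 1 = rough
          hardness: float     # 0 = soft,   1 = hard
          reflectance: float  # 0 = matte,  1 = mirror-like

      # Assumed label-to-characteristics table (illustrative values only).
      LABEL_TO_PPV = {
          "table":  PPV(roughness=0.4, hardness=0.9, reflectance=0.3),
          "pillow": PPV(roughness=0.7, hardness=0.1, reflectance=0.05),
          "mirror": PPV(roughness=0.0, hardness=0.8, reflectance=0.95),
      }

      def synthesize_ppv(semantic_label: str) -> PPV:
          """Return perceptual characteristic values for a labeled element."""
          return LABEL_TO_PPV.get(semantic_label, PPV(0.5, 0.5, 0.5))

      def composite_affordance(element_id: str, ppv: PPV) -> dict:
          """Pair the element with an affordance chosen from its PPV."""
          affordance = "place object" if ppv.hardness > 0.5 else "rest on"
          return {"element": element_id, "affordance": affordance, "ppv": ppv}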
  • Patent number: 11954759
    Abstract: A tile-based graphics system has a rendering space sub-divided into a plurality of tiles which are to be processed. Graphics data items, such as parameters or texels, are fetched into a cache for use in processing one of the tiles. Indicators are determined for the graphics data items, whereby the indicator for a graphics data item indicates the number of tiles with which that graphics data item is associated. The graphics data items are evicted from the cache in accordance with the indicators of the graphics data items. For example, the indicator for a graphics data item may be a count of the number of tiles with which that graphics data item is associated, whereby the graphics data item(s) with the lowest count(s) is (are) evicted from the cache.
    Type: Grant
    Filed: December 7, 2022
    Date of Patent: April 9, 2024
    Assignee: Imagination Technologies Limited
    Inventors: Steven Fishwick, John Howson
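    As a rough illustration of the eviction policy described above, evicting the cached graphics data items associated with the fewest remaining tiles, here is a hedged Python sketch; the class and method names are assumptions, not the patent's API.

      # Hypothetical sketch: count-based eviction for a tile-based renderer's cache.
      class TileAwareCache:
          def __init__(self, capacity: int):
              self.capacity = capacity
              self.items = {}       # key -> graphics data item (parameters, texels, ...)
              self.tile_count = {}  # key -> number of tiles still referencing the item

          def insert(self, key, item, tiles_referencing: int):
              if self.items and len(self.items) >= self.capacity:
                  self._evict()
              self.items[key] = item
              self.tile_count[key] = tiles_referencing

          def on_tile_processed(self, keys_used_by_tile):
              # A tile has been processed; it no longer needs these items.
              for key in keys_used_by_tile:
                  if key in self.tile_count:
                      self.tile_count[key] -= 1

          def _evict(self):
              # Evict the item associated with the fewest remaining tiles.
              victim = min(self.tile_count, key=self.tile_count.get)
              del self.items[victim]
              del self.tile_count[victim]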
  • Patent number: 11948239
    Abstract: A method for managing a multi-user animation platform is disclosed. A three-dimensional space within a computer memory is modeled. An avatar of a client is located within the three-dimensional space, the avatar being graphically represented by a three-dimensional figure within the three-dimensional space. The avatar is responsive to client input commands, and the three-dimensional figure includes a graphical representation of client activity. The client input commands are monitored to determine client activity. The graphical representation of client activity is then altered according to an inactivity scheme when client input commands are not detected. Following a predetermined period of client inactivity, the inactivity scheme varies non-repetitively with time.
    Type: Grant
    Filed: May 8, 2023
    Date of Patent: April 2, 2024
    Assignee: PFAQUTRUMA RESEARCH LLC
    Inventor: Brian Mark Shuster
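    A minimal sketch of an inactivity scheme that varies non-repetitively with time, as described above; the timeout and the idle-pose formula are illustrative assumptions.

      # Hypothetical sketch: after a period without client input, vary the avatar's
      # idle appearance as a non-repeating function of elapsed idle time.
      import math

      IDLE_TIMEOUT_S = 30.0  # assumed threshold, not from the patent

      def idle_pose(idle_seconds: float) -> dict:
          # Two incommensurate frequencies give a sway that never exactly repeats.
          sway = math.sin(0.37 * idle_seconds) + 0.5 * math.sin(math.pi * 0.11 * idle_seconds)
          droop = min(1.0, idle_seconds / 300.0)  # gradually slump over five minutes
          return {"sway": sway, "head_droop": droop}

      def update_avatar(last_input_time: float, now: float) -> dict:
          idle = now - last_input_time
          if idle < IDLE_TIMEOUT_S:
              return {"state": "active"}
          return {"state": "idle", **idle_pose(idle - IDLE_TIMEOUT_S)}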
  • Patent number: 11941764
    Abstract: A computer system displays a representation of a field of view of the one or more cameras, including a representation of a portion of a three-dimensional physical environment that is in the field of view of the one or more cameras. The computer system receives a request to add a first virtual effect to the displayed representation of the field of view of the one or more cameras. In response to receiving the request to add the first virtual effect to the displayed representation of the field of view of the one or more cameras and in accordance with a determination that the first virtual effect requires a scan of the physical environment, the computer system initiates a scan of the physical environment to detect one or more features of the physical environment and displays a user interface that indicates a progress of the scan of the physical environment.
    Type: Grant
    Filed: April 13, 2022
    Date of Patent: March 26, 2024
    Assignee: APPLE INC.
    Inventors: Andrew L. Harding, James A. Queen, Joseph-Alexander P. Weil, Joanna M. Newman, Ron A. Buencamino, Richard H. Salvador, Fernando Garcia, Austin T. Tamaddon, Omid Khalili, Scott W. Wilson, Thomas H. Smith, III
  • Patent number: 11935174
    Abstract: Processes, systems, and devices generate a training set comprising a first presentation having a first visual aid and a first audio description. The first visual aid and the first audio description are based on initial data retrieved from a first data source using a first indexing technique. The machine-learning system is trained using the first presentation and the initial data retrieved from the first data source using the first indexing technique. The machine-learning system generates a second presentation having a second visual aid and a second audio description. The second visual aid and the second audio description are based on refreshed data retrieved from the first data source using the first indexing technique. The machine-learning system presents the second presentation via an avatar in a virtual meeting room. The avatar is generated by the machine-learning system to present the second visual aid and the second audio description.
    Type: Grant
    Filed: June 27, 2022
    Date of Patent: March 19, 2024
    Assignee: DISH Network L.L.C.
    Inventors: Eric Pleiman, Jesus Flores Guerra
  • Patent number: 11935203
    Abstract: In order to guide the user to a target object that is located outside of the field of view of the wearer of the AR computing device, a rotational navigation system displays on a display device an arrow or a pointer, referred to as a direction indicator. The direction indicator is generated based on the angle between the direction of the user's head and the direction of the target object, and a correction coefficient. The correction coefficient is defined such that the greater the angle between the direction of the user's head and the direction of the target object, the greater is the horizontal component of the direction indicator.
    Type: Grant
    Filed: June 30, 2022
    Date of Patent: March 19, 2024
    Assignee: Snap Inc.
    Inventor: Pawel Wawruch
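    The abstract above derives the indicator from the head-to-target angle and a correction coefficient that grows with that angle; below is a hedged sketch of one way such an indicator could be computed. The coefficient formula and pixel limits are assumptions, not the patent's.

      # Hypothetical sketch: direction indicator for an off-screen target in AR.
      def direction_indicator(head_yaw_deg: float, target_yaw_deg: float,
                              max_horizontal_px: float = 400.0) -> float:
          """Return the signed horizontal offset (pixels) of the arrow/pointer."""
          # Signed angle between head direction and target direction, in [-180, 180).
          angle = (target_yaw_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
          # Assumed correction coefficient: grows with |angle| so that larger angles
          # yield a larger horizontal component of the indicator.
          correction = 0.5 + 0.5 * (abs(angle) / 180.0)
          horizontal = correction * (angle / 180.0) * max_horizontal_px
          return max(-max_horizontal_px, min(max_horizontal_px, horizontal))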
  • Patent number: 11935186
    Abstract: A modeling method and apparatus based on point cloud data, an electronic device, and a storage medium are provided, and relate to the field of three-dimensional panoramic technologies. The modeling method based on point cloud data includes: obtaining three-dimensional point cloud information collected from a scanned target by a point cloud collection device, along with first time information, first rotation position information of an electric motor, and second time information; determining second rotation position information based on the first time information and the second time information; and obtaining position information of the scanned target based on the three-dimensional point cloud information and the second rotation position information, to construct a three-dimensional panoramic model corresponding to the scanned target.
    Type: Grant
    Filed: November 13, 2020
    Date of Patent: March 19, 2024
    Assignee: REALSEE (BEIJING) TECHNOLOGY CO., LTD.
    Inventor: Xianyu Cheng
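    A rough sketch of the timing step described above: the motor's rotation position at the point cloud's capture time is derived from the two pieces of time information, and the points are then rotated into a common panoramic frame. The constant-angular-speed model is an assumption, not the patent's.

      # Hypothetical sketch: derive the second rotation position from two timestamps
      # and use it to place the scanned points in a common (panoramic) frame.
      import numpy as np

      def second_rotation_position(angle1_rad: float, t1: float, t2: float,
                                   motor_speed_rad_s: float) -> float:
          # Assumed model: the electric motor rotates at a known constant speed.
          return angle1_rad + motor_speed_rad_s * (t2 - t1)

      def points_to_world(points_xyz: np.ndarray, angle_rad: float) -> np.ndarray:
          """Rotate an (N, 3) point cloud about the vertical axis by the motor angle."""
          c, s = np.cos(angle_rad), np.sin(angle_rad)
          rot = np.array([[c, -s, 0.0],
                          [s,  c, 0.0],
                          [0.0, 0.0, 1.0]])
          return points_xyz @ rot.T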
  • Patent number: 11922586
    Abstract: Augmented reality eyewear is provided that removes the need to aim down sights while using a firearm. The eyewear has accelerometer sensors outputting a "view angle" that is compared with the line of sight, or "head angle," and the true horizon. A camera system outputs the target distance. A firearm equipped with one or more accelerometers outputs a trajectory. With the firearm and goggles within arm's length of one another, these data points can be combined to create a full 3D spatial image of the bullet's path. The target distance determines the length of the trajectory and the "termination coordinates" describing where the projectile terminates in space relative to the user's personal coordinate system (i.e., view angle). The relative termination coordinates can then be delivered to the goggle drivers, which convert them to a pixel command generating an illuminated aiming reticle that corresponds with the direction the firearm is pointing and is overlaid on the target.
    Type: Grant
    Filed: May 5, 2022
    Date of Patent: March 5, 2024
    Inventor: Matthew Pohl
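    The abstract above combines the firearm's orientation with a measured target distance to place a reticle where the round would terminate. The sketch below shows one hypothetical way to turn such termination coordinates into a pixel command under a simple straight-line, pinhole-display model; the geometry and intrinsics are assumptions, and the real trajectory would also account for bullet drop.

      # Hypothetical sketch: convert termination coordinates (in the wearer's view
      # frame) into a pixel position for an illuminated aiming reticle.
      import math

      def termination_point(muzzle_pitch_rad: float, muzzle_yaw_rad: float,
                            target_distance_m: float):
          """Point in the viewer frame where the projectile's path meets the target range."""
          x = target_distance_m * math.cos(muzzle_pitch_rad) * math.sin(muzzle_yaw_rad)
          y = target_distance_m * math.sin(muzzle_pitch_rad)
          z = target_distance_m * math.cos(muzzle_pitch_rad) * math.cos(muzzle_yaw_rad)
          return x, y, z

      def to_pixel(x: float, y: float, z: float,
                   focal_px: float = 800.0, cx: float = 640.0, cy: float = 360.0):
          """Simple pinhole projection onto the goggle display (assumed intrinsics)."""
          if z <= 0:
              return None  # behind the viewer; no reticle drawn
          return (cx + focal_px * x / z, cy - focal_px * y / z)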
  • Patent number: 11922539
    Abstract: An exercise instruction apparatus for guiding a user to perform a fitness exercise includes a control unit and a display device. The control unit is configured to control video imagery displayed by the display device. The video imagery shows an instructor image and a user image simultaneously: the instructor image demonstrates movements of the fitness exercise, and the user image, presented as a mirror image, is a real-time image of the user standing in front of the display device. The control unit is configured to adjust at least one image parameter, such as brightness, transparency, or contrast, of at least one of the instructor image and the user image. The parameter adjusted for the instructor image may differ from the parameter adjusted for the user image. The instructor image may be overlapped on the user image, and the user can adjust the transparency of at least one of the two images.
    Type: Grant
    Filed: December 26, 2021
    Date of Patent: March 5, 2024
    Assignee: Johnson Health Tech Co., Ltd.
    Inventor: Nathan Pyles
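    A compact sketch of the overlay described above: the user's camera image is mirrored and the instructor image is blended over it with an adjustable transparency. The blending model and default weight are illustrative assumptions.

      # Hypothetical sketch: mirror the user's real-time image and overlay the
      # instructor's demonstration with an adjustable transparency.
      import numpy as np

      def compose_frames(instructor_rgb: np.ndarray, user_rgb: np.ndarray,
                         instructor_alpha: float = 0.6) -> np.ndarray:
          """Both inputs are HxWx3 uint8 frames of the same size."""
          mirrored_user = user_rgb[:, ::-1, :]            # present the user as a mirror image
          a = float(np.clip(instructor_alpha, 0.0, 1.0))  # user-adjustable transparency
          blended = (a * instructor_rgb.astype(np.float32)
                     + (1.0 - a) * mirrored_user.astype(np.float32))
          return blended.astype(np.uint8)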
  • Patent number: 11915370
    Abstract: Provided is a method for 3D modeling based on an irregular-shaped sketch, in which the method is executed by one or more processors, and includes receiving 2D sketch data of a target object, inputting the 2D sketch data into a 3D model generation model to generate a 3D model of the target object, and displaying the generated 3D model on a display.
    Type: Grant
    Filed: January 25, 2022
    Date of Patent: February 27, 2024
    Assignee: RECON LABS INC.
    Inventors: Kyungwon Yun, Roger Blanco, Kyung Hoon Hyun, Seonghoon Ban
  • Patent number: 11908083
    Abstract: Methods and systems are disclosed for performing operations comprising: receiving a video that includes a depiction of a real-world object; generating a three-dimensional (3D) body mesh associated with the real-world object that tracks movement of the real-world object across frames of the video; obtaining an external mesh associated with an augmented reality element; automatically establishing a correspondence between the 3D body mesh associated with the real-world object and the external mesh; deforming the external mesh based on movement of the real-world object and the established correspondence with the 3D body mesh; and modifying the video to include a display of the augmented reality element based on the deformed external mesh.
    Type: Grant
    Filed: August 31, 2021
    Date of Patent: February 20, 2024
    Assignee: Snap Inc.
    Inventors: Yanli Zhao, Matan Zohar, Brian Fulkerson, Georgios Papandreou, Haoyang Wang
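    A hedged sketch of the correspondence-and-deformation step described above, using a simple nearest-vertex binding; the patent abstract does not specify this particular scheme, so treat it as an assumption.

      # Hypothetical sketch: bind an external (AR effect) mesh to a tracked 3D body
      # mesh by nearest vertex, then move it as the body mesh moves across frames.
      import numpy as np

      def bind_external_mesh(body_verts: np.ndarray, external_verts: np.ndarray) -> np.ndarray:
          """Return, for each external vertex, the index of its nearest body vertex."""
          d = np.linalg.norm(external_verts[:, None, :] - body_verts[None, :, :], axis=-1)
          return d.argmin(axis=1)

      def deform_external_mesh(binding: np.ndarray,
                               body_rest: np.ndarray, body_current: np.ndarray,
                               external_rest: np.ndarray) -> np.ndarray:
          """Translate each external vertex by its bound body vertex's displacement."""
          displacement = body_current[binding] - body_rest[binding]
          return external_rest + displacement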
  • Patent number: 11908040
    Abstract: An image processing method and a computer system. The method may be applied to a cloud-side server in a cloud mobile phone scenario. The server may be a virtualization server on which a host operating system and a guest operating system are deployed; a user mode graphics driver is deployed in the guest operating system, and a kernel mode graphics driver is deployed in the host operating system. The user mode graphics driver and the kernel mode graphics driver collaborate with each other to implement image rendering on the server. The server may then send a rendered image to the cloud mobile phone. Accordingly, the instruction translation process is reduced, which lowers processor overhead and improves image processing efficiency.
    Type: Grant
    Filed: September 28, 2021
    Date of Patent: February 20, 2024
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Lingfei Liu, Lixin Chen, Yang Xiong
  • Patent number: 11900551
    Abstract: A technology that streams graphical components and rendering instructions to a client device, for the client device to perform the final rendering and overlaying of that content onto the client's video stream based on the client's most recent tracking of the device's position and orientation. A client device sends a request for augmented reality drawing data to a network device. In response, the network device generates augmented reality drawing data, which can be augmented reality change data based on the augmented reality information and previous client render state information, and sends the augmented reality drawing data to the client device. The client device receives the augmented reality drawing data and renders a visible representation of an augmented reality scene comprising overlaying augmented reality graphics over a current video scene obtained from a camera of the client device.
    Type: Grant
    Filed: March 28, 2022
    Date of Patent: February 13, 2024
    Assignee: HOME BOX OFFICE, INC.
    Inventor: Richard Parr
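    The abstract above mentions augmented reality change data derived from the new AR information and the client's previous render state; below is a hedged sketch of such a delta computation, with the scene state represented as a plain dictionary purely for illustration.

      # Hypothetical sketch: compute change data between the server's current AR
      # scene state and the client's previously rendered state, then apply it.
      def ar_change_data(current_state: dict, previous_client_state: dict) -> dict:
          added = {k: v for k, v in current_state.items() if k not in previous_client_state}
          removed = [k for k in previous_client_state if k not in current_state]
          updated = {k: v for k, v in current_state.items()
                     if k in previous_client_state and previous_client_state[k] != v}
          return {"add": added, "remove": removed, "update": updated}

      def apply_change_data(client_state: dict, delta: dict) -> dict:
          # The client applies the delta and re-renders the overlay with its most
          # recent tracked device pose; only changed graphics need to be streamed.
          new_state = {k: v for k, v in client_state.items() if k not in delta["remove"]}
          new_state.update(delta["add"])
          new_state.update(delta["update"])
          return new_state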
  • Patent number: 11900548
    Abstract: A system includes an augmented virtual reality (AVR) object creation engine, an AVR object enhancement engine, an AVR object positioning engine, and an AVR media authoring engine. The AVR object creation engine is configured to convert real world data into one or more AVR objects. The AVR object enhancement engine is configured to enhance the one or more AVR objects to include at least one of processed data visualization and multiuser controls. The AVR object positioning engine is configured to position the enhanced one or more AVR objects in a virtual space-time. The AVR media authoring engine is configured to make available, as AVR media, a scene tree including the virtual space-time in which the enhanced one or more AVR objects are positioned.
    Type: Grant
    Filed: September 13, 2021
    Date of Patent: February 13, 2024
    Assignee: Flow Immersive, Inc.
    Inventors: Jason Marsh, Aleksei Karpov, Timofey Biryukov
  • Patent number: 11893674
    Abstract: Aspects of the present disclosure are directed to creating interactive avatars that can be pinned as world-locked artificial reality content. Once pinned, an avatar can interact with the environment according to contextual cues and rules, without active control by the avatar owner. An interactive avatar system can configure the avatar with action rules, visual elements, and settings based on user selections. Once an avatar is configured and pinned to a location by an avatar owner, a central system can provide the avatar (with its configurations) to other XR devices at that location. This allows a user of such an XR device to discover and interact with the avatar according to the configurations established by the avatar owner.
    Type: Grant
    Filed: February 22, 2022
    Date of Patent: February 6, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Campbell Orme, Matthew Adam Hanson, Daphne Liang, Matthew Roberts, Fangwei Lee, Bryant Jun-Yao Tang
  • Patent number: 11893702
    Abstract: Provided is a virtual object processing method. The virtual object processing method includes: detecting a spatial plane in a scene where a first device is located; detecting real objects in the scene to determine a plurality of real object position boxes; determining, based on a matching relation between the plurality of real object position boxes and the spatial plane in the scene, a candidate position box set from the plurality of real object position boxes; determining, in response to a virtual object configuration operation for a target position box in the candidate position box set, position information of the virtual object in the target position box; and transmitting information on the virtual object and the position information of the virtual object in the target position box to a second device, for displaying the virtual object on the second device.
    Type: Grant
    Filed: January 27, 2022
    Date of Patent: February 6, 2024
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventor: Jian Kang
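    A minimal sketch of the matching step described above: keep only the detected real-object position boxes that rest on the detected spatial plane, which then form the candidate set for placing the virtual object. The plane representation and tolerance are assumptions.

      # Hypothetical sketch: select candidate position boxes whose bottom center lies
      # on the detected spatial plane, given as (unit normal n, offset d) with n·x + d = 0.
      import numpy as np

      def candidate_boxes(boxes, plane_normal, plane_d, tol: float = 0.05):
          """boxes: dicts with a 'bottom_center' (3,) array; tol in meters (assumed)."""
          n = np.asarray(plane_normal, dtype=float)
          n = n / np.linalg.norm(n)
          selected = []
          for box in boxes:
              dist = abs(float(np.dot(n, box["bottom_center"]) + plane_d))
              if dist <= tol:
                  selected.append(box)
          return selected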
  • Patent number: 11880958
    Abstract: A method for adjusting an over-rendered area of a display in an AR device is described. The method includes identifying an angular velocity of a display device, a most recent pose of the display device, previous warp poses, and previous over-rendered areas, and adjusting a size of a dynamic over-rendered area based on a combination of the angular velocity, the most recent pose, the previous warp poses, and the previous over-rendered areas.
    Type: Grant
    Filed: March 2, 2023
    Date of Patent: January 23, 2024
    Assignee: Snap Inc.
    Inventors: Bernhard Jung, Edward Lee Kim-Koon
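    A rough sketch of the adjustment described above: the margin of the over-rendered area grows with the display's recent angular velocity and with how far previous warp poses drifted from the rendered poses; the weights and limits are assumptions, not values from the patent.

      # Hypothetical sketch: size a dynamic over-rendered area for late-warp display.
      def over_render_margin_px(angular_velocity_dps: float,
                                recent_pose_errors_deg: list,
                                px_per_degree: float = 20.0,
                                frame_time_s: float = 1.0 / 90.0,
                                min_px: int = 8, max_px: int = 256) -> int:
          # Motion term: how far the view can rotate before the warp is applied.
          motion_deg = angular_velocity_dps * frame_time_s
          # History term: how far previous warp poses ended up from the render poses.
          history_deg = max(recent_pose_errors_deg) if recent_pose_errors_deg else 0.0
          margin = (motion_deg + history_deg) * px_per_degree
          return int(max(min_px, min(max_px, margin)))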
  • Patent number: 11875524
    Abstract: The present disclosure provides an unmanned aerial vehicle (UAV) platform based vision measurement method for a static rigid object. To address the problem that existing vision measurement methods demand high expertise yet offer poor versatility, the present disclosure uses a method combining object detection and three-dimensional reconstruction to mark an object to be measured, and uses a three-dimensional point cloud processing method to further mark the size to be measured and calculate its length. The method takes full advantage of the convenience of data collection by a UAV platform, whose global navigation satellite system (GNSS), inertial measurement unit (IMU), and the like assist the measurement. No common auxiliary devices such as a light pen or markers are needed, which improves the versatility of vision measurement.
    Type: Grant
    Filed: November 16, 2021
    Date of Patent: January 16, 2024
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaoyan Luo, Bo Liu, Xiaofeng Shi, Chengxi Wu, Lu Li
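    The abstract above marks the size to be measured in the reconstructed point cloud and calculates its length; below is a hedged sketch of that final measurement step, taking the length as the extent of the marked region along its principal axis. The georeferencing scale factor is an assumption.

      # Hypothetical sketch: estimate the length of a marked region in a
      # georeferenced point cloud as its extent along the principal axis.
      import numpy as np

      def marked_length(points: np.ndarray, gnss_scale: float = 1.0) -> float:
          """points: (N, 3) cloud of the marked region; gnss_scale converts
          reconstruction units to meters (assumed GNSS/IMU georeferencing)."""
          centered = points - points.mean(axis=0)
          _, _, vt = np.linalg.svd(centered, full_matrices=False)  # PCA via SVD
          projections = centered @ vt[0]                           # project onto principal axis
          return float(projections.max() - projections.min()) * gnss_scale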
  • Patent number: 11869158
    Abstract: A cross reality system enables any of multiple devices to efficiently render shared location-based content. The cross reality system may include a cloud-based service that responds to requests from devices to localize with respect to a stored map. The service may return to the device information that localizes the device with respect to the stored map. In conjunction with localization information, the service may provide information about locations in the physical world proximate the device for which virtual content has been provided. Based on information received from the service, the device may render, or stop rendering, virtual content to each of multiple users based on the user's location and specified locations for the virtual content.
    Type: Grant
    Filed: May 25, 2022
    Date of Patent: January 9, 2024
    Assignee: Magic Leap, Inc.
    Inventors: Timothy Dean Caswell, Konrad Piascik, Leonid Zolotarev, Mark Ashley Rushton
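    A minimal sketch of the last step described above: once the service has localized the device with respect to the stored map, the device renders only the virtual content whose specified location is near it. The radius and data shapes are assumptions.

      # Hypothetical sketch: after localization, keep only virtual content items
      # whose specified location lies near the device in the stored map's frame.
      import numpy as np

      def content_to_render(device_pos_map_frame: np.ndarray, content_items: list,
                            radius_m: float = 10.0) -> list:
          """content_items: dicts with a 'position' (3,) in the stored map's frame."""
          visible = []
          for item in content_items:
              offset = np.asarray(item["position"], dtype=float) - device_pos_map_frame
              if np.linalg.norm(offset) <= radius_m:
                  visible.append(item)
          return visible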
  • Patent number: 11861783
    Abstract: Various methods are provided for the generation of motion vectors in the context of 3D computer-generated images. In one example, a method includes: generating, for each pixel of one or more objects to be rendered in a current frame, a 1-phase motion vector (MV1) and a 0-phase motion vector (MV0), each MV1 and MV0 having an associated depth value, to thereby form an MV1 texture and an MV0 texture, with each MV0 determined based on a camera MV0 and an object MV0; converting the MV1 texture to a set of MV1 pixel blocks and the MV0 texture to a set of MV0 pixel blocks; and outputting the set of MV1 pixel blocks and the set of MV0 pixel blocks for image processing.
    Type: Grant
    Filed: March 23, 2022
    Date of Patent: January 2, 2024
    Inventors: Hongmin Zhang, Miao Sima, Gongxian Liu, Zongming Han, Junhua Chen, Guohua Cheng, Baochen Liu, Neil Woodall, Yue Ma, Huili Han
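    A hedged sketch of the final conversion step described above: a per-pixel motion-vector texture with an associated depth per pixel is reduced to per-block motion vectors, here by taking the vector of the nearest (smallest-depth) pixel in each block. The block size and selection rule are assumptions, not the patent's.

      # Hypothetical sketch: convert a per-pixel motion-vector texture into pixel
      # blocks for downstream image processing (e.g., frame interpolation).
      import numpy as np

      def mv_texture_to_blocks(mv: np.ndarray, depth: np.ndarray, block: int = 8) -> np.ndarray:
          """mv: (H, W, 2) motion vectors; depth: (H, W); returns (H//block, W//block, 2)."""
          h, w, _ = mv.shape
          bh, bw = h // block, w // block
          out = np.zeros((bh, bw, 2), dtype=mv.dtype)
          for by in range(bh):
              for bx in range(bw):
                  ys, xs = by * block, bx * block
                  d = depth[ys:ys + block, xs:xs + block]
                  # Assumed rule: use the MV of the nearest (smallest-depth) pixel,
                  # so foreground motion dominates the block.
                  iy, ix = np.unravel_index(np.argmin(d), d.shape)
                  out[by, bx] = mv[ys + iy, xs + ix]
          return out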