Three-dimension Patents (Class 345/419)
  • Patent number: 12292507
    Abstract: An embodiment method of detecting a curb includes selecting road points from a point cloud acquired by a LiDAR sensor and detecting multiple consecutive points, having a constant first slope in a first plane viewed from above and a second slope having a constant sign in a second plane viewed from a side, as curb candidate points from among the road points, the curb candidate points being candidates for curb points.
    Type: Grant
    Filed: May 17, 2022
    Date of Patent: May 6, 2025
    Assignees: Hyundai Motor Company, Kia Corporation
    Inventors: Jin Won Park, Yoon Ho Jang
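As a rough illustration only, not the patented implementation: the abstract's two geometric tests, a roughly constant slope in the top view and a slope of constant sign in the side view, can be sketched over a list of (x, y, z) road points. The function name, tolerance, and synthetic points are hypothetical, and vertical segments are not handled.

```python
def find_curb_candidates(points, slope_tol=0.05):
    """Flag runs of consecutive road points whose top-view slope (dy/dx)
    is roughly constant and whose side-view slope (dz/dx) keeps one sign.
    Each point is an (x, y, z) tuple; returns indices of candidate points."""
    candidates = []
    for i in range(1, len(points) - 1):
        (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = points[i - 1], points[i], points[i + 1]
        # Top view (x-y plane): slopes of both adjacent segments must agree.
        s_top_a = (y1 - y0) / (x1 - x0)
        s_top_b = (y2 - y1) / (x2 - x1)
        # Side view (x-z plane): slope sign must stay constant (a rising curb face).
        s_side_a = (z1 - z0) / (x1 - x0)
        s_side_b = (z2 - z1) / (x2 - x1)
        if abs(s_top_a - s_top_b) < slope_tol and s_side_a * s_side_b > 0:
            candidates.append(i)
    return candidates

# Synthetic scan line: flat road, then a steadily rising curb face.
pts = [(0.0, 0.0, 0.0), (1.0, 0.5, 0.0), (2.0, 1.0, 0.05),
       (3.0, 1.5, 0.10), (4.0, 2.0, 0.15)]
print(find_curb_candidates(pts))   # -> [2, 3]
```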
  • Patent number: 12293467
    Abstract: A method, device, and computer-readable storage medium for generating a proxy mesh are disclosed. The method includes: receiving a reference mesh, wherein the reference mesh comprises a polygonal mesh that is a computer representation of a three-dimensional (3D) object; computing quadrics corresponding to the reference mesh; receiving a second polygonal mesh, wherein the second polygonal mesh comprises a polygonal mesh generated based on the reference mesh; transferring the quadrics corresponding to the reference mesh to the second polygonal mesh; and generating a proxy mesh based on the quadrics corresponding to the reference mesh transferred to the second polygonal mesh.
    Type: Grant
    Filed: March 27, 2023
    Date of Patent: May 6, 2025
    Assignee: Electronic Arts Inc.
    Inventor: Ashton Mason
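The quadrics in this abstract resemble the classic quadric error metric, where each plane contributes a 4x4 matrix K = p pᵀ and the error of a vertex v is vᵀQv. A minimal sketch follows; the nearest-vertex transfer is an assumed stand-in for the patent's transfer step, and all names are hypothetical.

```python
import math

def plane_quadric(a, b, c, d):
    """Fundamental error quadric K = p * p-transpose for the plane
    ax + by + cz + d = 0 (with unit normal)."""
    p = (a, b, c, d)
    return [[p[i] * p[j] for j in range(4)] for i in range(4)]

def vertex_error(Q, v):
    """Squared distance-to-plane error vᵀQv for homogeneous vertex (x, y, z, 1)."""
    vh = (v[0], v[1], v[2], 1.0)
    return sum(vh[i] * Q[i][j] * vh[j] for i in range(4) for j in range(4))

def transfer_quadrics(ref_vertices, ref_quadrics, new_vertices):
    """Give each vertex of the second mesh the quadric of its nearest reference
    vertex, a deliberately simple stand-in for the patent's transfer step."""
    out = []
    for v in new_vertices:
        nearest = min(range(len(ref_vertices)),
                      key=lambda i: math.dist(v, ref_vertices[i]))
        out.append(ref_quadrics[nearest])
    return out

Q = plane_quadric(0, 0, 1, 0)        # the plane z = 0
print(vertex_error(Q, (5, 5, 2)))    # point 2 units off the plane -> 4.0
```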
  • Patent number: 12293464
    Abstract: A method, device, and computer-readable storage medium for generating a proxy mesh are disclosed. The method includes: receiving an input polygonal mesh that includes multiple sub-meshes, each of which is a polygonal mesh, where the input polygonal mesh is a computer representation of a three-dimensional (3D) object; generating a voxel volume representing the input polygonal mesh, wherein the voxel volume comprises voxels that approximate the shape of the 3D object, wherein a first set of voxels of the voxel volume includes voxels that are identified as boundary voxels that correspond to positions of polygons of the multiple sub-meshes of the input polygonal mesh; determining a grouping of two or more sub-meshes that together enclose one or more voxels of the voxel volume other than the voxels in the first set of voxels; and generating a proxy mesh corresponding to the input polygonal mesh based on the grouping of two or more sub-meshes.
    Type: Grant
    Filed: May 3, 2022
    Date of Patent: May 6, 2025
    Assignee: Electronic Arts Inc.
    Inventor: Ashton Mason
  • Patent number: 12293477
    Abstract: According to one embodiment, a method, computer system, and computer program product for adjusting an audible area of an avatar's voice is provided. The present invention may include receiving, at a microphone, a source audio; creating a received audio; calculating, by a generative model, a voice propagation distance of a user based on the source audio, the received audio, and a templated text sentence describing a category of a mixed reality environment experienced by the user; drawing a virtual circle within the mixed reality environment centered on a user avatar representing the user and with a radius equal to the voice propagation distance; and transmitting the source audio to one or more participants within the mixed-reality environment represented by one or more participant avatars located within the virtual circle.
    Type: Grant
    Filed: March 21, 2023
    Date of Patent: May 6, 2025
    Assignee: International Business Machines Corporation
    Inventors: Meng Chai, Dan Zhang, Yuan Jie Song, Yu Li, Wen Ting Su, Xiao Feng Ji
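The final transmission step of this abstract reduces to a point-in-circle test: only participant avatars within the voice propagation distance of the speaker receive the source audio. A toy 2D sketch, with a fixed radius standing in for the model-predicted distance and all names hypothetical:

```python
import math

def audible_participants(speaker_pos, radius, participants):
    """Return the avatars inside the virtual circle centered on the speaker.
    `participants` maps avatar name -> (x, y) position in the environment;
    `radius` stands in for the generative model's voice propagation distance."""
    return sorted(
        name for name, (x, y) in participants.items()
        if math.hypot(x - speaker_pos[0], y - speaker_pos[1]) <= radius
    )

others = {"ann": (3.0, 4.0), "bob": (20.0, 0.0), "cal": (0.0, 5.0)}
print(audible_participants((0.0, 0.0), 5.0, others))   # -> ['ann', 'cal']
```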
  • Patent number: 12293457
    Abstract: A method includes: performing semantic segmentation on an RGBD image to obtain a semantic label of each pixel of the image; performing reconstruction of a point cloud based on the image and mapping the semantic label of each pixel of the image into the point cloud to respectively obtain a semantic point cloud of a current frame with the semantic labels and a three-dimensional scene semantic map with the semantic labels; generating two-dimensional discrete semantic feature points for each of three-dimensional semantic point clouds in the current frame and the semantic map to obtain a corresponding two-dimensional semantic feature point image, and performing a three-dimensional semantic feature description on each feature point in the two-dimensional semantic feature point image; and performing feature matching on all feature points in the current frame and all feature points in the semantic map to obtain positioning information based on the three-dimensional semantic feature description.
    Type: Grant
    Filed: July 12, 2023
    Date of Patent: May 6, 2025
    Assignee: UBTECH ROBOTICS CORP LTD
    Inventors: Tiecheng Sun, Jichao Jiao
  • Patent number: 12293448
    Abstract: Methods and graphics processing systems render items of geometry using a rendering space which is subdivided into a plurality of first regions. Each of the first regions is sub-divided into a plurality of second regions. Each of a plurality of items of geometry is processed by identifying which of the first regions the item of geometry is present within, and for each identified first region determining an indication of the spatial coverage, within the identified first region, of the item of geometry, and using the determined indication of the spatial coverage within the identified first region to determine whether to add the item of geometry to a first control list for the identified first region or to add the item of geometry to one or more second control lists for a respective one or more of the second regions within the identified first region.
    Type: Grant
    Filed: August 29, 2023
    Date of Patent: May 6, 2025
    Assignee: Imagination Technologies Limited
    Inventors: Xile Yang, Robert Brigg
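The control-list decision described here can be sketched as coverage-based binning: per first region, a primitive with large spatial coverage goes on that region's list, and one with small coverage goes on the lists of the second regions it touches. Region sizes (64 and 32 cells) and the coverage threshold are assumptions, not values from the patent.

```python
def bin_primitive(bbox, tile=64, subtile=32, coverage_threshold=0.5):
    """Per 64x64 first region, put a primitive's bounding box on the region's
    own control list (large coverage) or on per-32x32 second-region lists
    (small coverage). bbox is inclusive pixel coordinates (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = bbox
    first_lists, second_lists = [], []
    for ty in range(y0 // tile, y1 // tile + 1):
        for tx in range(x0 // tile, x1 // tile + 1):
            # Intersection of the bbox with this first region.
            ix0, iy0 = max(x0, tx * tile), max(y0, ty * tile)
            ix1, iy1 = min(x1, tx * tile + tile - 1), min(y1, ty * tile + tile - 1)
            coverage = (ix1 - ix0 + 1) * (iy1 - iy0 + 1) / (tile * tile)
            if coverage >= coverage_threshold:
                first_lists.append((tx, ty))
            else:
                for sy in range(iy0 // subtile, iy1 // subtile + 1):
                    for sx in range(ix0 // subtile, ix1 // subtile + 1):
                        second_lists.append((sx, sy))
    return first_lists, second_lists

# A large primitive covers a whole tile; a small one lands in one subtile list.
print(bin_primitive((0, 0, 63, 63)))   # -> ([(0, 0)], [])
print(bin_primitive((0, 0, 10, 10)))   # -> ([], [(0, 0)])
```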
  • Patent number: 12291846
    Abstract: A robot generates a cell grid (e.g., occupancy grid) representation of a geographic area. The occupancy grid may include a plurality of evenly sized cells, and each cell may be assigned an occupancy status, which can be used to indicate the location of obstacles present in the geographic area. Footprints for the robot corresponding to a plurality of robot orientations and joint states may be generated and stored in a look-up table. The robot may generate a planned path for the robot to navigate within the geographic area by generating a plurality of candidate paths, each candidate path comprising a plurality of candidate robot poses. For each candidate robot pose, the robot may query the look-up table for a corresponding robot footprint to determine if a collision will occur.
    Type: Grant
    Filed: June 11, 2024
    Date of Patent: May 6, 2025
    Assignee: Built Robotics Inc.
    Inventors: Noah Austen Ready-Campbell, Martin Karlsson, Brian Lerner
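The footprint look-up idea above can be sketched with a dictionary keyed by orientation: collision checking a candidate pose then becomes a table query plus an occupancy test, with no per-pose geometry recomputed. The 2x1-cell footprints are toy values; a real table would also key on joint states.

```python
def make_footprint_table(orientations):
    """Precompute, per discretized orientation, the set of grid-cell offsets a
    2x1-cell robot occupies (hypothetical footprints for illustration)."""
    table = {}
    for theta in orientations:
        if theta in (0, 180):           # long side along x
            table[theta] = {(0, 0), (1, 0)}
        else:                            # 90 / 270: long side along y
            table[theta] = {(0, 0), (0, 1)}
    return table

def pose_collides(occupancy, pose, table):
    """Check a candidate pose (cell_x, cell_y, theta) against the occupancy
    grid by looking the footprint up instead of recomputing it."""
    x, y, theta = pose
    return any(occupancy.get((x + dx, y + dy), False) for dx, dy in table[theta])

grid = {(3, 0): True}                   # one occupied cell (an obstacle)
lut = make_footprint_table([0, 90, 180, 270])
print(pose_collides(grid, (2, 0, 0), lut))    # footprint covers (2,0),(3,0) -> True
print(pose_collides(grid, (2, 0, 90), lut))   # footprint covers (2,0),(2,1) -> False
```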
  • Patent number: 12290917
    Abstract: An object pose estimation system, an execution method thereof and a graphic user interface are provided. The execution method of the object pose estimation system includes the following steps. A feature extraction strategy of a pose estimation unit is determined by a feature extraction strategy neural network model according to a scene point cloud. According to the feature extraction strategy, a model feature is extracted from a 3D model of an object and a scene feature is extracted from the scene point cloud by the pose estimation unit. The model feature is compared with the scene feature by the pose estimation unit to obtain an estimated pose of the object.
    Type: Grant
    Filed: October 19, 2021
    Date of Patent: May 6, 2025
    Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventors: Dong-Chen Tsai, Ping-Chang Shih, Yu-Ru Huang, Hung-Chun Chou
  • Patent number: 12293132
    Abstract: A display control system includes a first display processing unit that displays a first user icon of a user at a predetermined position in a virtual space on the basis of a current position of the user in a real space, a reception processing unit that receives, from the user corresponding to the first user icon, moving operation of the first user icon displayed at the predetermined position in the virtual space, and a second display processing unit that displays a second user icon corresponding to the moving operation by the user in the virtual space, in a case where the reception processing unit receives the moving operation of the first user icon from the user.
    Type: Grant
    Filed: July 26, 2023
    Date of Patent: May 6, 2025
    Assignee: SHARP KABUSHIKI KAISHA
    Inventor: Keiko Hirukawa
  • Patent number: 12287406
    Abstract: A method includes receiving a trajectory dataset including a plurality of geospatial points forming a point cloud and acquired along a trajectory, wherein for each of the plurality of geospatial points there is defined an x-coordinate, a y-coordinate, a z-coordinate, and at least one mapping device orientation attribute; segmenting the trajectory dataset into a plurality of segments; determining at least one relative constraint for each of the plurality of segments; and utilizing, for each of the plurality of segments, at least one of the determined relative constraints to determine a relative position of at least two of the plurality of segments.
    Type: Grant
    Filed: October 5, 2023
    Date of Patent: April 29, 2025
    Assignee: Carnegie Mellon University
    Inventors: Ji Zhang, Calvin Wade Sheen, Kevin Joseph Dowling
  • Patent number: 12285684
    Abstract: In a communication system including a server and a plurality of communication terminals capable of communication with the server, based on a variety of parameters indicating the status of a space formed within a game playable by the user of each communication terminal over the communication system, the server transmits advisory information, which suggests the next action for the space, to the communication terminal. The communication terminal displays a screen including the received advisory information.
    Type: Grant
    Filed: December 28, 2023
    Date of Patent: April 29, 2025
    Assignee: GREE, INC.
    Inventor: Takayuki Sano
  • Patent number: 12288295
    Abstract: Systems and methods of the present disclosure are directed to a method that can include obtaining a 3D mesh comprising polygons and texture/shading data. The method can include rasterizing the 3D mesh to obtain a 2D raster comprising pixels and coordinates respectively associated with a subset of pixels. The method can include determining an initial color value for the subset of pixels based on the coordinates of the pixel and the associated shading/texture data. The method can include constructing a splat at the coordinates of a respective pixel. The method can include determining an updated color value for a respective pixel based on a weighting of the subset of splats to generate a 2D rendering of the 3D mesh based on the coordinates of a pixel and a splat.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: April 29, 2025
    Assignee: GOOGLE LLC
    Inventors: Kyle Adam Genova, Daniel Vlasic, Forrester H. Cole
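The final step of this abstract, updating a pixel color from a weighting of nearby splats, can be illustrated as a distance-weighted average. The Gaussian falloff and grayscale colors are assumptions for the sketch, not the patent's kernel.

```python
import math

def splat_weight(dx, dy, sigma=1.0):
    """Gaussian falloff for a splat sample at offset (dx, dy) from its center."""
    return math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))

def updated_color(pixel, splats, sigma=1.0):
    """Blend the initial colors of nearby splats into one pixel, weighting each
    splat by its distance to the pixel (a sketch of the weighted-splat step).
    `splats` is a list of ((x, y), grayscale_color) pairs."""
    px, py = pixel
    wsum, csum = 0.0, 0.0
    for (sx, sy), color in splats:
        w = splat_weight(px - sx, py - sy, sigma)
        wsum += w
        csum += w * color
    return csum / wsum if wsum else 0.0

# A pixel midway between a dark and a bright splat blends to the midpoint.
splats = [((0.0, 0.0), 0.0), ((2.0, 0.0), 1.0)]
print(updated_color((1.0, 0.0), splats))   # -> 0.5
```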
  • Patent number: 12288349
    Abstract: Provided is an accelerator provided in an electronic device and configured to perform simultaneous localization and mapping (SLAM), the accelerator including a factor graph database, a memory, and a back-end processor, wherein the back-end processor is configured to receive a first piece of data corresponding to map points and camera positions from the factor graph database, convert the received first piece of data into a matrix for the map points and a matrix for the camera positions, store, in the memory, results obtained by performing an optimization calculation on the matrix for the map points and a matrix for at least one camera position, among the camera positions, corresponding to the map points, and obtain a second piece of data optimized with respect to the first piece of data based on the results stored in the memory.
    Type: Grant
    Filed: November 2, 2021
    Date of Patent: April 29, 2025
    Assignees: SAMSUNG ELECTRONICS CO., LTD., University of Seoul Industry Cooperation Foundation
    Inventors: Myungjae Jeon, Kichul Kim, Yongkyu Kim, Hongseok Lee, San Kim, Hyekwan Yun, Donggeun Lee
  • Patent number: 12288300
    Abstract: Described herein is a method for generating a two-dimensional (2D) image of one or more products within a physical scene. The method comprises: obtaining, via a communication network from another computing device, an image of the physical scene; obtaining, via the communication network from the other computing device, position information indicative of a target position of a first product in the physical scene; rendering a 2D image of a second product in the physical scene using the image of the physical scene, the position information, and a 3D model of the second product; and providing, via the communication network to the other computing device, the rendered 2D image of the second product in the physical scene for display by the other computing device.
    Type: Grant
    Filed: October 23, 2023
    Date of Patent: April 29, 2025
    Assignee: Wayfair LLC
    Inventors: Shrenik Sadalgi, Christian Vázquez
  • Patent number: 12287277
    Abstract: The present technology is to provide technology for appropriately visualizing a population of particles in particle analysis technology. There is provided an information processing apparatus including an information processing unit that receives optical data obtained from particles, and calculates a parameter that specifies a display method of the optical data in a display range having at least one axis including a linear axis and a logarithmic axis on the basis of the received optical data, in which the parameter includes a first parameter that specifies a range of the linear axis and a second parameter that specifies a lower limit value of the display range, and the first parameter and the second parameter are calculated on the basis of different reference values.
    Type: Grant
    Filed: November 18, 2020
    Date of Patent: April 29, 2025
    Assignee: Sony Group Corporation
    Inventor: Fumitaka Otsuka
  • Patent number: 12288395
    Abstract: Systems and methods for video presentation and analytics for live sporting events are disclosed. At least two cameras are used for tracking objects during a live sporting event and generate video feeds to a server processor. The server processor is operable to match the video feeds and create a 3D model of the world based on the video feeds from the at least two cameras. 2D graphics are created from different perspectives based on the 3D model. Statistical data and analytical data related to object movement are produced and displayed on the 2D graphics. The present invention also provides a standard file format for object movement in space over a timeline across multiple sports.
    Type: Grant
    Filed: March 27, 2024
    Date of Patent: April 29, 2025
    Assignee: SportsMEDIA Technology Corporation
    Inventor: Gerard J. Hall
  • Patent number: 12283001
    Abstract: Systems, methods, apparatuses, and computer program products for computing three-dimensional (3D) Euler angles through a distinctive matrices pattern. A method for calculating relative orientations of rigid bodies in space may include determining a three-dimensional (3D) coordinate system of a rigid body D at a time T. The method may also include determining a 3D coordinate system of a rigid body E at time T. The method may further include determining a relative orientation at time T of the rigid body E in the 3D coordinate system of the rigid body D. In addition, the method may include calculating a final relative orientation of the rigid body E in the 3D coordinate system of the rigid body D by implementing a single set of Euler angle equations irrespective of a rotation sequence or a convention chosen.
    Type: Grant
    Filed: October 24, 2022
    Date of Patent: April 22, 2025
    Assignee: QATAR UNIVERSITY
    Inventor: Rony Ibrahim
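The relative-orientation step above is standard: with rotation matrices R_D and R_E for the two bodies, the orientation of E in D's frame is R_rel = R_Dᵀ R_E, from which Euler angles can be extracted. The sketch below uses the common Z-Y-X convention only; the patent's single set of equations valid for any rotation sequence is not reproduced here.

```python
import math

def relative_orientation(R_d, R_e):
    """R_rel = R_d-transpose times R_e: orientation of body E in body D's frame."""
    return [[sum(R_d[k][i] * R_e[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler_zyx(R):
    """Extract (roll, pitch, yaw) from a rotation matrix, Z-Y-X convention."""
    pitch = math.asin(-R[2][0])
    roll = math.atan2(R[2][1], R[2][2])
    yaw = math.atan2(R[1][0], R[0][0])
    return roll, pitch, yaw

def rot_z(a):
    """Rotation about the z axis by angle a (radians)."""
    return [[math.cos(a), -math.sin(a), 0.0],
            [math.sin(a), math.cos(a), 0.0],
            [0.0, 0.0, 1.0]]

# Body E is yawed 90 degrees relative to body D (D is the identity frame here).
R_rel = relative_orientation(rot_z(0.0), rot_z(math.pi / 2))
print([round(v, 6) for v in euler_zyx(R_rel)])   # -> [0.0, 0.0, 1.570796]
```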
  • Patent number: 12283167
    Abstract: The disclosure relates to the field of intelligent home, and particularly to an indoor monitoring method, device and system, a storage medium and a camera device. The method includes: controlling a camera device to acquire monitoring images at consecutive times in a room; identifying a first monitoring image at a previous time and a second monitoring image at a current time, and identifying items corresponding to the first monitoring image and the second monitoring image and positions of the items, respectively; determining whether a position of an item changes according to the position of the item in the first monitoring image and the position of the item in the second monitoring image; and generating item record information of the item when it is determined that the position of the item changes, wherein the item record information represents movement of the item in the room.
    Type: Grant
    Filed: December 28, 2020
    Date of Patent: April 22, 2025
    Assignees: Gree Electric Appliances, Inc. of Zhuhai, Leayun Technology Co., Ltd. of Zhuhai
    Inventors: Shaobin Li, Miao Yang, Haoxin Huang, Jie Tang, Daoyuan Chen
  • Patent number: 12283030
    Abstract: Disclosed are systems and methods to track an object in a scene even when the object is partially or wholly obscured by an artifact, such as another object. As the tracked object moves through the scene, frames of video are processed and used to refine and tune a diffusion model that predicts an appearance of the tracked object in future frames of the video as well as the appearance of artifacts in the frames of the video. Frames of the video may then be enhanced to illustrate the tracked object as if the tracked object were visible through the artifact.
    Type: Grant
    Filed: March 19, 2024
    Date of Patent: April 22, 2025
    Assignee: Armada Systems, Inc.
    Inventor: Pragyana K. Mishra
  • Patent number: 12282605
    Abstract: Systems and methods for generating a haptic output response are disclosed. Video content is displayed on a display. A location of a user touch on the display is detected while the video content is being displayed. A region of interest in the video content is determined based on the location of the user touch, and a haptic output response is generated for the user. A characteristic of the haptic output response is determined using one or more characteristics of the region of interest.
    Type: Grant
    Filed: February 10, 2023
    Date of Patent: April 22, 2025
    Assignee: Disney Enterprises, Inc.
    Inventors: Evan M. Goldberg, Daniel L. Baker, Jackson A. Rogow
  • Patent number: 12284058
    Abstract: A method and device for self-tuning scales of variables for processing in fixed-point hardware. The device includes a sequence of fixed-point arithmetic circuits configured to receive at least one input signal and output at least one output signal. The circuits are preconfigured with control scales associated with each of the input and output signals. A first circuit in the sequence is configured to receive a first input signal having a dynamic true scale that is different from the control scale associated with the first input signal. Each of the circuits is further configured to determine, for each of the output signals, an adaptive scale from the control scale associated with the output signal based on the true scale of the first input signal and the control scale associated with the first input signal, and generate, from the input signal, the output signal having the associated adaptive scale.
    Type: Grant
    Filed: May 1, 2023
    Date of Patent: April 22, 2025
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Junmo Sung, Tiexing Wang, Yeqing Hu, Yang Li, Jianzhong Zhang
  • Patent number: 12282995
    Abstract: An Unreal Engine-based automatic light arrangement method includes: acquiring a house type model and basic data imported into the Unreal Engine, the basic data being data obtained after a house type image file of the house type model is parsed (S101); on the basis of first object data and second object data in the basic data, respectively creating an indoor light source and a light-supplementing light source in a room region of the house type model (S102); creating an outdoor light source for the house type model, and creating a later volume on an outer side of the house type model by using a preset light arrangement rule, such that the house type model is wrapped in the later volume (S103); and adjusting parameters of the later volume, and setting light tracking parameters in the later volume, so as to render light arrangement of the house type model (S104).
    Type: Grant
    Filed: October 18, 2022
    Date of Patent: April 22, 2025
    Assignee: SHENZHEN XUMI YUNTU SPACE TECHNOLOGY CO., LTD.
    Inventors: Zhiyuan Jiang, Zhenghui Li, Chunqing Li
  • Patent number: 12284349
    Abstract: A method and apparatus comprising computer code configured to cause a processor or processors to obtain an input mesh comprising volumetric data of at least one three-dimensional (3D) visual content, derive a plurality of submeshes of the input mesh from a frame of the volumetric data, set bitdepths to a first submesh and a second submesh from the submeshes, a first bitdepth being different than a second bitdepth, quantize the first submesh and the second submesh based on respective ones of the first bitdepth and the second bitdepth, and signal a result of quantizing the first submesh and the second submesh.
    Type: Grant
    Filed: May 5, 2023
    Date of Patent: April 22, 2025
    Assignee: TENCENT AMERICA LLC
    Inventors: Thuong Nguyen Canh, Xiaozhong Xu, Xiang Zhang, Shan Liu
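The per-submesh bitdepth idea above can be illustrated with uniform scalar quantization: a submesh given more bits reconstructs with lower error. The bit counts (10 and 4) and coordinate range are assumptions for the sketch.

```python
def quantize(values, bitdepth, lo, hi):
    """Uniformly quantize coordinates in [lo, hi] to integers on `bitdepth` bits."""
    levels = (1 << bitdepth) - 1
    return [round((v - lo) / (hi - lo) * levels) for v in values]

def dequantize(codes, bitdepth, lo, hi):
    """Map integer codes back to coordinates in [lo, hi]."""
    levels = (1 << bitdepth) - 1
    return [lo + c / levels * (hi - lo) for c in codes]

# Two submeshes of one frame, quantized at different bitdepths as in the
# abstract: the first submesh gets 10 bits, the second only 4.
sub1, sub2 = [0.1, 0.5, 0.9], [0.1, 0.5, 0.9]
q1 = quantize(sub1, 10, 0.0, 1.0)
q2 = quantize(sub2, 4, 0.0, 1.0)
err1 = max(abs(a - b) for a, b in zip(sub1, dequantize(q1, 10, 0.0, 1.0)))
err2 = max(abs(a - b) for a, b in zip(sub2, dequantize(q2, 4, 0.0, 1.0)))
print(err1 < err2)   # higher bitdepth -> lower reconstruction error
```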
  • Patent number: 12284324
    Abstract: Augmented reality (AR) systems, devices, media, and methods are described for generating AR experiences including interactions with virtual or physical prop objects. The AR experiences are generated by capturing images of a scene with a camera system, identifying an object receiving surface and corresponding surface coordinates within the scene, identifying an AR primary object and a prop object (physical or virtual), establishing a logical connection between the AR primary object and the prop object, generating AR overlays including actions associated with the AR primary object responsive to commands received via a user input system that position the AR primary object adjacent the object receiving surface responsive to the primary object coordinates and the surface coordinates within the scene and that position the AR primary object and the prop object with respect to one another in accordance with the logical connection, and presenting the generated AR overlays with a display system.
    Type: Grant
    Filed: August 16, 2022
    Date of Patent: April 22, 2025
    Assignee: Snap Inc.
    Inventors: Tianying Chen, Timothy Chong, Sven Kratz, Fannie Liu, Andrés Monroy-Hernández, Olivia Seow, Yu Jiang Tham, Rajan Vaish, Lei Zhang
  • Patent number: 12277621
    Abstract: In some implementations, a method includes obtaining, by a virtual intelligent agent (VIA), a perceptual property vector (PPV) for a graphical representation of a physical element. In some implementations, the PPV includes one or more perceptual characteristic values characterizing the graphical representation of the physical element. In some implementations, the method includes instantiating a graphical representation of the VIA in a graphical environment that includes the graphical representation of the physical element and an affordance that is associated with the graphical representation of the physical element. In some implementations, the method includes generating, by the VIA, an action for the graphical representation of the VIA based on the PPV. In some implementations, the method includes displaying a manipulation of the affordance by the graphical representation of the VIA in order to effectuate the action generated by the VIA.
    Type: Grant
    Filed: September 2, 2021
    Date of Patent: April 15, 2025
    Assignee: APPLE INC.
    Inventors: Mark Drummond, Bo Morgan, Siva Chandra Mouli Sivapurapu
  • Patent number: 12277739
    Abstract: A point-cloud decoding device 200 according to the present invention includes: a geometry information decoding unit 2010 configured to decode a flag that controls whether to apply “Implicit QtBt” or not; and an attribute-information decoding unit 2060 configured to decode a flag that controls whether to apply “scalable lifting” or not; wherein, a restriction is set not to apply the “scalable lifting” when the “Implicit QtBt” is applied.
    Type: Grant
    Filed: September 30, 2022
    Date of Patent: April 15, 2025
    Assignee: KDDI CORPORATION
    Inventors: Kyohei Unno, Kei Kawamura, Sei Naito
  • Patent number: 12277662
    Abstract: Techniques for aligning object representations for animation include analyzing a source object representation and a target object representation to identify a category of the source object and a category of the target object. Based on the category or categories of the objects, a feature extraction machine learning model is selected. The source object representation and the target object representation are provided as input to the selected feature extraction machine learning model to generate respective semantic descriptors and shape vectors for the source and target objects. Based on the semantic descriptors and the shape vectors for the source and target objects, an alignment machine learning model generates an aligned target object representation that is aligned with the source object representation and usable for animating the target object.
    Type: Grant
    Filed: October 31, 2022
    Date of Patent: April 15, 2025
    Assignees: PIXAR, DISNEY ENTERPRISES, INC.
    Inventors: Hayko Riemenschneider, Clara Fernandez, Joan Massich, Christopher Richard Schroers, Daniel Teece, Rasmus Tamstorf, Charles Tappan, Mark Meyer, Antoine Milliez
  • Patent number: 12277129
    Abstract: Systems and methods provide a system that gathers information about data as it progresses through data processing pipelines of data analysis projects. The data analytics system derives value indicators and implicit metadata from the data processing pipelines. For example, the data analytics system may derive value indicators and implicit metadata from data-related products themselves, semantic analysis of the code/processing steps used to process the data-related products, the structure of data processing pipelines, and human behavior related to production and usage of data-related products. Once a new data analysis project is initiated, the data analytics system gathers parameters and characteristics about the new data analysis project and references the value indicators and implicit metadata to recommend useful processing steps, datasets, and/or other data-related products for the new data analysis project.
    Type: Grant
    Filed: July 12, 2023
    Date of Patent: April 15, 2025
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Ted Dunning, Suparna Bhattacharya, Glyn Bowden, Lin A. Nease, Janice M. Zdankus, Sonu Sudhakaran
  • Patent number: 12277651
    Abstract: A system is provided for generating and displaying a sequence of three-dimensional (3D) graphics that illustrate motion of the heart over a cardiac cycle. The system generates a start heart wall mesh and an end heart wall mesh that represent a start geometry and an end geometry of a heart wall derived from a start 3D image and an end 3D image. The system then generates one or more intermediate heart wall 3D meshes based on an intermediate geometry of the heart wall. An intermediate geometry is interpolated based on the start geometry and the end geometry factoring a start time of the start heart wall mesh, an end time of the end heart wall mesh, and an intermediate time for the intermediate heart wall mesh. The system then displays in sequence representations of the heart wall 3D meshes to illustrate the motion.
    Type: Grant
    Filed: June 8, 2024
    Date of Patent: April 15, 2025
    Assignee: THE VEKTOR GROUP, INC.
    Inventors: Christopher J. T. Villongco, Robert Joseph Krummen, Christian David Marton
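The interpolation step described above, an intermediate geometry computed from the start and end geometries and their times, can be sketched as per-vertex linear interpolation over meshes that share connectivity. All names and the single-vertex example are illustrative.

```python
def interpolate_mesh(start_verts, end_verts, t_start, t_end, t):
    """Linearly interpolate vertex positions between the start and end heart
    wall geometries at intermediate time t (connectivity is shared, so only
    positions change)."""
    a = (t - t_start) / (t_end - t_start)
    return [tuple(s + a * (e - s) for s, e in zip(sv, ev))
            for sv, ev in zip(start_verts, end_verts)]

# One vertex of the wall moves inward over the cycle; sample the midpoint time.
start = [(10.0, 0.0, 0.0)]
end = [(8.0, 0.0, 0.0)]
print(interpolate_mesh(start, end, 0.0, 1.0, 0.5))   # -> [(9.0, 0.0, 0.0)]
```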
  • Patent number: 12275158
    Abstract: A system for generating a three-dimensional (3D) representation of a surface of an object. The system includes a point cloud processor and an object surface representation processor. The point cloud processor is to generate a structured point cloud of the object based on sensor data received from a sensor. The object surface representation processor is to: identify surface nodes in the structured point cloud; and link each surface node with any of its active neighbors to generate a surface net, wherein the linking comprises simultaneously establishing a forward-connectivity-link for a respective surface node to an active neighbor and a reverse-connectivity-link for the active neighbor to the respective surface node.
    Type: Grant
    Filed: March 26, 2021
    Date of Patent: April 15, 2025
    Assignee: Intel Corporation
    Inventor: David Gonzalez Aguirre
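The simultaneous forward/reverse linking above can be sketched over voxel-grid surface nodes: when a node finds an active neighbor, both directions of the link are recorded in the same step, so the surface net is symmetric by construction. The node coordinates and neighbor offsets are assumptions.

```python
def link_surface_nodes(nodes, neighbor_offsets=((1, 0, 0), (0, 1, 0), (0, 0, 1))):
    """Link each surface node with its active neighbors, establishing the
    forward and the reverse connectivity link in one step."""
    links = {n: set() for n in nodes}
    for n in nodes:
        for dx, dy, dz in neighbor_offsets:
            m = (n[0] + dx, n[1] + dy, n[2] + dz)
            if m in links:               # neighbor is an active surface node
                links[n].add(m)          # forward link n -> m
                links[m].add(n)          # reverse link m -> n, simultaneously
    return links

net = link_surface_nodes({(0, 0, 0), (1, 0, 0), (5, 5, 5)})
print(net[(0, 0, 0)], net[(1, 0, 0)], net[(5, 5, 5)])
```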
  • Patent number: 12277741
    Abstract: A point cloud data processing apparatus includes: an image data acquisition unit that acquires image data of an object; a point cloud data acquisition unit that acquires point cloud data; a recognition unit that recognizes the object on the basis of the image data, and acquires a region of the object and attribute information for identifying the object; and an attribute assigning unit that selects, from the point cloud data, point cloud data that belongs to the region of the object, and assigns the identified attribute information to the selected point cloud data.
    Type: Grant
    Filed: October 7, 2021
    Date of Patent: April 15, 2025
    Assignee: FUJIFILM Corporation
    Inventor: Kazuchika Iwami
  • Patent number: 12277652
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating three-dimensional meshes representing two-dimensional images for editing the two-dimensional images. The disclosed system utilizes a first neural network to determine density values of pixels of a two-dimensional image based on estimated disparity. The disclosed system samples points in the two-dimensional image according to the density values and generates a tessellation based on the sampled points. The disclosed system utilizes a second neural network to estimate camera parameters and modify the three-dimensional mesh based on the estimated camera parameters of the pixels of the two-dimensional image. In one or more additional embodiments, the disclosed system generates a three-dimensional mesh to modify a two-dimensional image according to a displacement input.
    Type: Grant
    Filed: November 15, 2022
    Date of Patent: April 15, 2025
    Assignee: Adobe Inc.
    Inventors: Radomir Mech, Nathan Carr, Matheus Gadelha
  • Patent number: 12277691
    Abstract: Provided is a method for providing annotation information for a 3D image, which may include outputting a representative image for the 3D image including a plurality of slices, selecting at least one pixel associated with a target item from among a plurality of pixels included in the representative image, outputting, among the plurality of slices, a slice associated with the selected at least one pixel, and receiving an annotation for a partial region of the output slice.
    Type: Grant
    Filed: June 30, 2021
    Date of Patent: April 15, 2025
    Assignee: LUNIT INC.
    Inventor: Hyunjae Lee
  • Patent number: 12277653
    Abstract: An augmented reality system generates computer-mediated reality on a client device. The client device has sensors including a camera configured to capture image data of an environment and a location sensor to capture location data describing a geolocation of the client device. The client device creates a three-dimensional (3-D) map with the image data and the location data for use in generating virtual objects to augment reality. The client device transmits the created 3-D map to an external server that may utilize the 3-D map to update a world map stored on the external server. The external server sends a local portion of the world map to the client device. The client device determines a distance between the client device and a mapping point to generate a computer-mediated reality image at the mapping point to be displayed on the client device.
    Type: Grant
    Filed: July 19, 2022
    Date of Patent: April 15, 2025
    Assignee: Niantic, Inc.
    Inventors: Ross Edward Finman, Si ying Diana Hu
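The final distance check can be illustrated with a standard haversine computation between the client's geolocation and a mapping point (a generic sketch, not Niantic's implementation; the 50 m render threshold is invented):

```python
import math

# Sketch: decide whether a client is close enough to a mapping point to
# generate a computer-mediated reality image there. Uses the haversine
# great-circle distance between (lat, lon) pairs; threshold illustrative.

def haversine_m(lat1, lon1, lat2, lon2):
    """Distance in meters between two WGS-84 coordinates."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_render(client, point, max_range_m=50.0):
    """True if the mapping point is within rendering range of the client."""
    return haversine_m(*client, *point) <= max_range_m

near = should_render((37.7749, -122.4194), (37.7750, -122.4194))
```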
  • Patent number: 12274016
    Abstract: A display device for presenting 3D images is provided. The display device includes: a display assembly; a hollow housing assembly; a motor connected to the display assembly; a driving assembly connected to the display assembly; and a hollow base assembly; wherein the driving assembly comprises a power transmission structure and a power reception structure, the power transmission structure is independent from the display assembly and wiredly connected to a first driving power supply, and the power reception structure is wiredly connected to the display assembly and capable of receiving power from the power transmission structure by electromagnetic mutual inductance; and the base assembly comprises a first cylinder and a first plate fixedly connected to one end of the first cylinder; the first plate comprises a plate body and a cushion, and the cushion covers a side, proximal to the first cylinder, of the plate body.
    Type: Grant
    Filed: June 22, 2023
    Date of Patent: April 8, 2025
    Assignees: BEIJING BOE OPTOELECTRONICS TECHNOLOGY CO., LTD., BOE TECHNOLOGY GROUP CO., LTD.
    Inventors: Yuhong Liu, Zhanshan Ma, Zheng Ge, Jiyang Shao
  • Patent number: 12273504
    Abstract: A left part (LL) of a left image and a right part (RR) of a right image are rendered, wherein a horizontal FOV of the left part of the left image extends towards a right side of a gaze point (X) of the left image till only a first predefined angle (N1) from the gaze point of the left image, while a horizontal FOV of the right part of the right image extends towards a left side of a gaze point (X) of the right image till only a second predefined angle (N2) from the gaze point of the right image. A right part (RL) of the left image is reconstructed by reprojecting a corresponding part of the right image, while a left part (LR) of the right image is reconstructed by reprojecting a corresponding part of the left image. The left image and the right image are generated by combining respective rendered parts and respective reconstructed parts.
    Type: Grant
    Filed: October 5, 2023
    Date of Patent: April 8, 2025
    Assignee: Varjo Technologies Oy
    Inventors: Mikko Strandborg, Ville Miettinen
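The final combination step — stitching each eye's rendered part to its reprojected part — can be sketched very simply (the reprojection itself, which depends on depth and camera geometry, is elided; row contents and the split column are invented):

```python
# Sketch: assemble a full left image from its rendered left part and a
# part reconstructed by reprojecting the right image, as described above.
# Images are lists of pixel rows; the reprojection step is assumed done.

def combine(rendered_part, reprojected_part):
    """Concatenate rendered columns with reprojected columns, row by row."""
    return [r + p for r, p in zip(rendered_part, reprojected_part)]

left_rendered = [["L0", "L1"], ["L2", "L3"]]  # rendered left part (LL)
left_reprojected = [["R0'"], ["R2'"]]         # reprojected from right image (RL)
left_image = combine(left_rendered, left_reprojected)
```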
  • Patent number: 12271351
    Abstract: In one embodiment, techniques are provided for aligning source infrastructure data to be compatible with a conceptual schema (e.g., BIS) implemented through an underlying database schema (e.g., DgnDb). Data aligned according to the conceptual schema may serve as a “digital twin” of real-world infrastructure usable throughout various phases of an infrastructure project, with physical information serving as a “backbone”, and non-physical information maintained relative thereto, forming a cohesive whole, while avoiding unwanted data redundancies. Source-format-specific bridge software processes may be provided that know how to read and interpret source data of a respective source format, and express it in terms of the conceptual schema.
    Type: Grant
    Filed: July 14, 2022
    Date of Patent: April 8, 2025
    Assignee: Bentley Systems, Incorporated
    Inventors: Keith A. Bentley, Casey Mullen, Samuel W. Wilson
  • Patent number: 12271525
    Abstract: Systems and methods are presented herein for requesting a version of media content from a server that includes haptic feedback rendering criteria compatible with the haptics capabilities of a client device. At a server, a request is received for a media asset for interaction on a haptic enabled client device, wherein the media asset comprises haptic feedback rendering criteria. Based on the request, haptic feedback capabilities of the haptic enabled client device associated with the request are determined. The haptic feedback capabilities of the haptic enabled client device are compared to the haptic feedback rendering criteria of one or more versions of the media asset available via the server. In response to the comparing, a version of the media asset comprising haptic feedback rendering criteria compatible with the haptic feedback capabilities of the haptic enabled client device is transmitted from the server to the haptic enabled client device.
    Type: Grant
    Filed: June 26, 2023
    Date of Patent: April 8, 2025
    Assignee: Adeia Guides Inc.
    Inventor: Tatu V. J. Harviainen
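The compare-and-select step can be sketched as simple set containment between a version's required haptic features and the client's reported capabilities (capability names, the catalog, and the "richest compatible version" tie-break are all invented for illustration):

```python
# Sketch: pick the media-asset version whose haptic feedback rendering
# criteria are satisfied by the client's capabilities. Feature names and
# the selection heuristic are illustrative, not from the patent.

def select_version(versions, capabilities):
    """versions: list of dicts with 'name' and 'criteria' (set of required
    haptic features). Returns the richest compatible version, or None."""
    compatible = [v for v in versions if v["criteria"] <= capabilities]
    if not compatible:
        return None
    # Prefer the version exercising the most haptic features.
    return max(compatible, key=lambda v: len(v["criteria"]))

catalog = [
    {"name": "basic", "criteria": {"vibration"}},
    {"name": "rich", "criteria": {"vibration", "thermal", "force"}},
]
choice = select_version(catalog, capabilities={"vibration", "thermal"})
```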
  • Patent number: 12266217
    Abstract: Generally discussed herein are examples of gesture-based extended reality (XR) with object recognition and tracking. A method, implemented by an extended reality (XR) device, can include recognizing and tracking one or more objects in image data, recognizing a gesture in the image data, analyzing the image data to determine whether a condition is satisfied, the condition indicating a recognized and tracked object of the one or more objects proximate which the recognized gesture is to be made, and, in response to determining that the condition is satisfied, performing an augmentation of the image data based on satisfaction of the condition.
    Type: Grant
    Filed: May 11, 2022
    Date of Patent: April 1, 2025
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Erik A. Hill
  • Patent number: 12266057
    Abstract: Systems, methods, and computer readable media for input modalities for an augmented reality (AR) wearable device are disclosed. The AR wearable device captures images using an image capturing device and processes the images to identify objects. The objects may be people, places, things, and so forth. The AR wearable device associates the objects with tags such as the name of the object or a function that can be provided by the selection of the object. The AR wearable device then matches the tags of the objects with tags associated with AR applications. The AR wearable device presents on a display of the AR wearable device indications of the AR applications with matching tags, which provides a user with the opportunity to invoke one of the AR applications. The AR wearable device recognizes a selection of an AR application in a number of different ways including gesture recognition and voice commands.
    Type: Grant
    Filed: June 2, 2022
    Date of Patent: April 1, 2025
    Assignee: Snap Inc.
    Inventors: Piotr Gurgul, Sharon Moll
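The tag-matching step reduces to set intersection between object tags and each AR application's declared tags; a toy sketch (app names, tags, and the overlap-count ordering are invented, not Snap's implementation):

```python
# Sketch: match tags of recognized objects against tags declared by AR
# applications, then surface the apps with any overlap, best match first.

def matching_apps(object_tags, apps):
    """apps: dict mapping app name -> set of tags it handles.
    Returns names of apps sharing at least one tag with a recognized
    object, ordered by overlap size (largest first)."""
    scored = [(len(object_tags & tags), name) for name, tags in apps.items()]
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

recognized = {"coffee", "cup", "table"}
apps = {
    "barista_ar": {"coffee", "espresso"},
    "furniture_ar": {"table", "chair", "cup"},
    "star_map": {"sky", "stars"},
}
suggestions = matching_apps(recognized, apps)
```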
  • Patent number: 12266044
    Abstract: Data structures, methods and tiling engines for storing tiling data in memory wherein the tiles are grouped into tile groups and the primitives are grouped into primitive blocks. The methods include, for each tile group: determining, for each tile in the tile group, which primitives of each primitive block intersect that tile; storing in memory a variable length control data block for each primitive block that comprises at least one primitive that intersects at least one tile of the tile group; and storing in memory a control stream comprising a fixed sized primitive block entry for each primitive block that comprises at least one primitive that intersects at least one tile of the tile group, each primitive block entry identifying a location in memory of the control data block for the corresponding primitive block. Each primitive block entry may comprise valid tile information identifying which tiles of the tile group are valid for the corresponding primitive block.
    Type: Grant
    Filed: December 31, 2023
    Date of Patent: April 1, 2025
    Assignee: Imagination Technologies Limited
    Inventors: Xile Yang, Robert Brigg, Michael John Livesley
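The layout described — fixed-size primitive block entries in a control stream, each pointing at a variable-length control data block and carrying valid-tile information — can be sketched with explicit byte packing. Field widths and the 32-bit valid-tile bitmask are illustrative, not Imagination's actual format:

```python
import struct

# Sketch: per tile group, emit a control stream of fixed-size entries,
# each recording (offset of its variable-length control data block,
# valid-tile bitmask). Field sizes are invented for illustration.

ENTRY = struct.Struct("<II")  # (control-data offset, valid-tile mask)

def build_control_stream(blocks):
    """blocks: list of (valid_tile_mask, control_data_bytes).
    Returns (control_stream, control_data) byte strings; each fixed-size
    stream entry locates its block's variable-length control data."""
    stream, data = bytearray(), bytearray()
    for mask, payload in blocks:
        stream += ENTRY.pack(len(data), mask)  # fixed-size entry
        data += payload                        # variable-length block
    return bytes(stream), bytes(data)

stream, data = build_control_stream([
    (0b0011, b"\x01\x02"),      # primitives intersect tiles 0 and 1
    (0b1000, b"\x03\x04\x05"),  # primitives intersect tile 3 only
])
```

Because every entry has the same size, the entry for primitive block *i* can be found by direct indexing, while the control data blocks themselves stay variable-length.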
  • Patent number: 12266042
    Abstract: Provided are a device and method that enable highly accurate and efficient three-dimensional model generation processing. The device includes: a facial feature information detection unit that analyzes a facial image of a subject shot by an image capturing unit and detects facial feature information; an input data selection unit that selects, from a plurality of facial images shot by the image capturing unit and a plurality of pieces of facial feature information corresponding to the plurality of facial images, a set of a facial image and feature information optimal for generating a 3D model; and a facial expression 3D model generation unit that generates a 3D model using the facial image and the feature information selected by the input data selection unit.
    Type: Grant
    Filed: July 20, 2020
    Date of Patent: April 1, 2025
    Assignee: SONY GROUP CORPORATION
    Inventor: Seiji Kimura
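The input-selection step can be sketched as scoring candidate (image, feature) pairs and taking the best one; the quality cues and weights below are an invented heuristic, not Sony's disclosed criteria:

```python
# Sketch: pick the (facial image, feature info) pair best suited for 3D
# model generation, scoring each candidate frame by invented quality cues.

def select_input(candidates):
    """candidates: list of dicts with 'image', 'features', and per-frame
    quality cues. Returns the highest-scoring candidate."""
    def score(c):
        # Favor sharp, well-lit, frontal frames (weights illustrative).
        return 2.0 * c["sharpness"] + c["brightness"] - 3.0 * abs(c["yaw"])
    return max(candidates, key=score)

frames = [
    {"image": "f0", "features": "kp0", "sharpness": 0.4, "brightness": 0.5, "yaw": 0.3},
    {"image": "f1", "features": "kp1", "sharpness": 0.9, "brightness": 0.6, "yaw": 0.0},
]
best = select_input(frames)
```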
  • Patent number: 12266053
    Abstract: Embodiments of the invention provide a computer system that includes a processor electronically coupled to a memory. The processor is operable to perform processor operations that include determining that a user is in a virtual reality (VR) environment and accessing a multisensory state associated with the VR environment and the user. A multisensory declutter analysis is applied to the multisensory state to generate decluttered multisensory streams. The decluttered multisensory streams are used to generate a decluttered multisensory view associated with the user.
    Type: Grant
    Filed: February 14, 2023
    Date of Patent: April 1, 2025
    Assignee: International Business Machines Corporation
    Inventors: Paul R. Bastide, Matthew E. Broomhall, Robert E. Loredo
  • Patent number: 12265705
    Abstract: A control apparatus controls a display device equipped with a display unit that allows a user, when viewing the display unit, to see what lies behind it. The control apparatus includes a processor configured to: change display of some of the images displayed on the display unit, in response to a change in a position of a terminal device with respect to the display device, the terminal device being located behind the display unit and viewed by the user through the display unit, and maintain some other images displayed on the display unit in a state of being displayed at predetermined places, regardless of the change in the position of the terminal device.
    Type: Grant
    Filed: July 20, 2021
    Date of Patent: April 1, 2025
    Assignee: FUJIFILM Business Innovation Corp.
    Inventor: Momoko Fujiwara
  • Patent number: 12264934
    Abstract: The invention relates to a method for depicting a virtual element in a display area of at least one display apparatus of a vehicle. Virtual elements that lie outside the display area of a display apparatus are made perceptible to the driver, with an estimation of their distance and direction, in that the driver of the vehicle is signaled when the determined three-dimensional coordinates of the at least one virtual element lie outside the display area of the display apparatus.
    Type: Grant
    Filed: September 8, 2021
    Date of Patent: April 1, 2025
    Assignee: VOLKSWAGEN AKTIENGESELLSCHAFT
    Inventors: Alexander Kunze, Yannis Tebaibi, Johanna Sandbrink, Vitalij Sadovitch
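The core check — detecting that a virtual element's coordinates fall outside the display area and deriving a direction hint for the driver — can be sketched in two dimensions (display bounds and the edge-hint encoding are invented; the patent works with three-dimensional coordinates):

```python
# Sketch: decide whether a virtual element lies outside the display area
# and, if so, which edge to signal so the driver can estimate direction.
# The display bounds here are an invented 2D example.

def offscreen_direction(x, y, bounds):
    """bounds: (xmin, ymin, xmax, ymax) of the display area.
    Returns None if (x, y) is inside, else a hint such as 'left' or
    'left-up' naming the edge(s) the element lies beyond."""
    xmin, ymin, xmax, ymax = bounds
    hints = []
    if x < xmin:
        hints.append("left")
    elif x > xmax:
        hints.append("right")
    if y < ymin:
        hints.append("down")
    elif y > ymax:
        hints.append("up")
    return "-".join(hints) or None

hint = offscreen_direction(15.0, 2.0, bounds=(-10, -5, 10, 5))
```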
  • Patent number: 12266069
    Abstract: Various implementations disclosed herein include devices, systems, and methods that provide extended reality (XR) environments that include virtual content anchored to objects within physical environments. Such objects may be movable objects. In some implementations, the virtual content is adaptive in that the virtual content is presented based on a characteristic of the movable object. For example, a virtual game piece may be scaled, shaped, etc. to match a physical game piece to which it is anchored. As another example, a virtual gameboard may be scaled and positioned on a real table such that the edge of the gameboard aligns with the edge of the table and such that a virtual waterfall appears to flow over the edge of the real table.
    Type: Grant
    Filed: August 15, 2022
    Date of Patent: April 1, 2025
    Assignee: Apple Inc.
    Inventors: Andrew S. Robertson, David Hobbins, James R. Cooper, Joon Hyung Ahn
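The scale-to-match idea from the game piece example can be sketched as fitting one bounding box inside another (dimensions and the uniform-scale rule are invented for illustration):

```python
# Sketch: scale a virtual game piece so it matches the measured size of
# the physical object it is anchored to. Sizes in meters, illustrative.

def fit_scale(virtual_size, physical_size):
    """Uniform scale factor so the virtual bounding box fits the physical
    one without exceeding it on any axis."""
    return min(p / v for v, p in zip(virtual_size, physical_size))

virtual_piece = (2.0, 2.0, 4.0)  # authored dimensions (w, d, h)
real_piece = (0.05, 0.04, 0.08)  # measured physical game piece
scale = fit_scale(virtual_piece, real_piece)
```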
  • Patent number: 12265750
    Abstract: A platform for synchronizing augmented reality (AR) views and information between two or more network connected devices is disclosed. A first device captures a video stream and associated essential meta-data, embeds the essential meta-data into the video stream and transmits it to a second device. The second device receives the video stream, extracts the essential meta-data, inserts one or more AR objects into the video stream with reference to the essential meta-data, and transmits to the first device the AR objects and reference to the essential meta-data. The first device renders the one or more AR objects into the video stream, using the essential meta-data references to locate the AR objects in each video stream frame. The second device may also determine and transmit a modified video stream to the first device.
    Type: Grant
    Filed: April 20, 2021
    Date of Patent: April 1, 2025
    Assignee: STREEM, LLC
    Inventor: Pavan K. Kamaraju
  • Patent number: 12263407
    Abstract: Methods and systems for a virtual experience where a user holds or wears a computer device that displays a virtual world, and the user's physical movement is used to control a virtual avatar through that virtual world in a heightened way. For example, by simply walking around, the user may control a fast moving virtual airplane. A simulation system reads user location information from a sensor in the computer device, and feeds that into a matrix of movement rules. That changes the location data of the user avatar and viewpoint in the virtual world, as shown on the computer device.
    Type: Grant
    Filed: September 17, 2024
    Date of Patent: April 1, 2025
    Assignee: Monsarrat, Inc.
    Inventor: Jonathan Monsarrat
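The "heightened" movement mapping — small physical displacements driving large avatar motion — can be sketched as per-axis gains (the gain values and the flat tuple representation are invented; the patent's matrix of movement rules is richer):

```python
# Sketch: feed a small physical displacement through per-axis movement
# gains to drive a fast avatar, e.g. walking controls a virtual airplane.
# Gain values are invented for illustration.

def amplify(delta, rules):
    """delta: (dx, dy) physical displacement in meters.
    rules: per-axis gain, e.g. walking 1 m moves the airplane 200 m."""
    return tuple(d * g for d, g in zip(delta, rules))

step = (0.5, 0.0)  # the user walks half a meter forward
avatar_move = amplify(step, rules=(200.0, 200.0))
```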
  • Patent number: 12260498
    Abstract: Provided is a method of creating digital twin content for a space based on object recognition in the space. A method of creating digital twin content includes identifying, by a computer system, an object in a space using a camera, determining a value of a parameter associated with the camera to define (or display) the identified object with a predetermined first size at a predetermined location on an image captured by the camera or a screen on which the image is displayed, and using a control result value based on the determined value of the parameter and the location information of the identified object to create the digital twin content for the space.
    Type: Grant
    Filed: January 30, 2024
    Date of Patent: March 25, 2025
    Assignee: CORNERS CO., LTD.
    Inventors: Jang Won Choi, Min Woo Park, Dae Gyun Lee, Dong Oh Kim
  • Patent number: 12251884
    Abstract: Systems and methods of support structures in powder-bed fusion (PBF) are provided. Support structures can be formed of bound powder, which can be, for example, compacted powder, compacted and sintered powder, powder with a binding agent applied, etc. Support structures can be formed of non-powder support material, such as a foam. Support structures can be formed to include resonant structures that can be removed by applying a resonance frequency. Support structures can be formed to include structures configured to melt when electrical current is applied for easy removal.
    Type: Grant
    Filed: April 28, 2017
    Date of Patent: March 18, 2025
    Assignee: DIVERGENT TECHNOLOGIES, INC.
    Inventors: Eahab Nagi El Naga, John Russell Bucknell, Chor Yen Yap, Broc William TenHouten, Antonio Bernerd Martinez