Three-dimension Patents (Class 345/419)
  • Patent number: 11978227
    Abstract: A system for determining a length of an object includes a rod, an imaging device couplable to the rod, a laser rangefinder, one or more sensors, and a processor. The processor can receive image data, the orientation of the rod relative to the surface, and the distance to the surface from the imaging device and can determine the length of the rod. A system for surveying trees within a plot includes an imaging device, one or more sensors for measuring at least one of a pitch, a roll, or a compass bearing of the imaging device, and one or more processors configured to receive images of the plot including trees within the plot, associate one or more sensor measurements with the images, store the images with their associated measurements, and generate a survey of trees in the plot using at least the images and the one or more sensor measurements.
    Type: Grant
    Filed: August 19, 2021
    Date of Patent: May 7, 2024
    Assignee: FOREST CARBON WORKS, PBC
    Inventor: Kyle Andrew Holland
  • Patent number: 11979545
    Abstract: An information processing apparatus according to the present technology includes an image obtaining unit and a display control unit. The image obtaining unit obtains a plurality of first divided images obtained by dividing a first image showing a first location along a second direction substantially perpendicular to a first direction, and a plurality of second divided images obtained by dividing a second image showing a second location along the second direction. The display control unit arranges and simultaneously displays the plurality of first divided images and the plurality of second divided images along the first direction on a display device of a user at a third location.
    Type: Grant
    Filed: December 11, 2019
    Date of Patent: May 7, 2024
    Assignee: SONY GROUP CORPORATION
    Inventors: Mari Saito, Kenji Sugihara
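    A minimal sketch (not from the patent record) of the divide-and-interleave idea in 11979545: two images from two locations are split into strips along one direction and the strips are arranged alternately along the perpendicular direction for simultaneous display. NumPy arrays, vertical strips, and the strip count are illustrative assumptions.

      import numpy as np

      def interleave_views(first_image, second_image, strips=4):
          """Split two equally sized images into vertical strips and arrange the strips
          alternately along the horizontal direction for simultaneous display."""
          assert first_image.shape == second_image.shape
          first_parts = np.array_split(first_image, strips, axis=1)    # divide along the width
          second_parts = np.array_split(second_image, strips, axis=1)
          interleaved = []
          for a, b in zip(first_parts, second_parts):
              interleaved.extend([a, b])                               # first-location strip, then second
          return np.concatenate(interleaved, axis=1)

      # Example: two synthetic 120x160 RGB frames captured at two locations.
      loc_a = np.zeros((120, 160, 3), dtype=np.uint8)
      loc_b = np.full((120, 160, 3), 255, dtype=np.uint8)
      print(interleave_views(loc_a, loc_b).shape)  # (120, 320, 3)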
  • Patent number: 11974881
    Abstract: A system and method for providing an anatomic orientation indicator with a patient-specific model of an anatomical structure of interest extracted from a three-dimensional (3D) ultrasound volume is provided. The method includes extracting the anatomical structure of interest from the 3D volume and generating a patient-specific model of the anatomical structure of interest. The method includes generating an anatomic orientation indicator including at least one mocked patient anatomy model of an anatomical structure adjacent the anatomical structure of interest at a position and orientation relative to the patient-specific model. The method includes displaying the anatomic orientation indicator with the patient-specific model at a same first point of view. The method includes receiving an instruction to change a point of view of the patient-specific model to a second point of view and updating the displaying of the anatomic orientation indicator with the patient-specific model to the second point of view.
    Type: Grant
    Filed: August 26, 2020
    Date of Patent: May 7, 2024
    Assignee: GE Precision Healthcare LLC
    Inventor: Federico Veronesi
  • Patent number: 11978259
    Abstract: Systems and methods for operating a mobile platform. The methods comprise, by a computing device: obtaining a LiDAR point cloud; using the LiDAR point cloud to generate a track for a given object in accordance with a particle filter algorithm by generating states of the given object over time (each state has a score indicating a likelihood that a cuboid would be created given an acceleration value and an angular velocity value); using the track to train a machine learning algorithm to detect and classify objects based on sensor data; and/or causing the machine learning algorithm to be used for controlling movement of the mobile platform.
    Type: Grant
    Filed: July 9, 2021
    Date of Patent: May 7, 2024
    Assignee: Ford Global Technologies, LLC
    Inventor: Kevin James Player
  • Patent number: 11978153
    Abstract: A method for determining a visible angle of a target object, an electronic device, and a storage medium are provided. The method includes: acquiring first point of interest POI data of the target object, second POI data of an occluder, and a position of an observation point; determining a first tangent line and a second tangent line of the target object, passing through the position of the observation point, according to the first POI data and the position of the observation point; determining a third tangent line and a fourth tangent line of the occluder, passing through the position of the observation point, according to the second POI data and the position of the observation point; and determining a target visible angle of the target object relative to the occluder according to the first tangent line, the second tangent line, the third tangent line, and the fourth tangent line.
    Type: Grant
    Filed: June 24, 2022
    Date of Patent: May 7, 2024
    Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventor: Lingguang Wang
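    A rough sketch of the tangent-line construction in 11978153, under the simplifying assumption that the target object and the occluder are reduced to circles derived from their POI footprints. The two tangents from the observation point to a circle of radius r at distance d span a half-angle of asin(r/d); the visible angle is the target's angular interval minus its overlap with the occluder's interval (wrap-around ignored).

      import math

      def angular_interval(cx, cy, r, ox, oy):
          """Return (center_bearing, half_width) of the tangent cone from observer (ox, oy)
          to a circle of radius r centred at (cx, cy)."""
          dx, dy = cx - ox, cy - oy
          d = math.hypot(dx, dy)
          if d <= r:
              raise ValueError("observation point lies inside the object footprint")
          return math.atan2(dy, dx), math.asin(r / d)

      def visible_angle(target, occluder, observer):
          """Angular extent of the target not hidden behind the occluder (both given as (cx, cy, r))."""
          t_bearing, t_half = angular_interval(*target, *observer)
          o_bearing, o_half = angular_interval(*occluder, *observer)
          t_lo, t_hi = t_bearing - t_half, t_bearing + t_half
          o_lo, o_hi = o_bearing - o_half, o_bearing + o_half
          overlap = max(0.0, min(t_hi, o_hi) - max(t_lo, o_lo))   # ignores 2*pi wrap-around
          return (t_hi - t_lo) - overlap

      # Target of radius 10 at (0, 50), occluder of radius 3 at (2, 25), observer at the origin.
      print(math.degrees(visible_angle((0, 50, 10), (2, 25, 3), (0, 0))))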
  • Patent number: 11978254
    Abstract: Systems and methods for video presentation and analytics for live sporting events are disclosed. At least two cameras are used for tracking objects during a live sporting event and generate video feeds to a server processor. The server processor is operable to match the video feeds and create a 3D model of the world based on the video feeds from the at least two cameras. 2D graphics are created from different perspectives based on the 3D model. Statistical data and analytical data related to object movement are produced and displayed on the 2D graphics. The present invention also provides a standard file format for object movement in space over a timeline across multiple sports.
    Type: Grant
    Filed: March 22, 2023
    Date of Patent: May 7, 2024
    Assignee: SPORTSMEDIA TECHNOLOGY CORPORATION
    Inventor: Gerard J. Hall
  • Patent number: 11978151
    Abstract: Aspects presented herein relate to methods and devices for graphics processing including an apparatus, e.g., a GPU. The apparatus may obtain an indication of a BVH structure including a plurality of nodes, wherein the BVH structure is associated with geometry data for a plurality of primitives in a scene, wherein each of the plurality of nodes is associated with one or more primitives, where a first level BVH includes a set of first nodes and a second level BVH includes a set of second nodes. The apparatus may also allocate information for a plurality of second nodes in the set of second nodes to at least one first node in the set of first nodes. Further, the apparatus may store the allocated information for the plurality of second nodes in the set of second nodes in the at least one first node in the set of first nodes.
    Type: Grant
    Filed: August 31, 2022
    Date of Patent: May 7, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Adimulam Ramesh Babu, Srihari Babu Alla, Avinash Seetharamaiah, Jonnala Gadda Nagendra Kumar
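    One way to picture the allocation described in 11978151: a first-level BVH node stores compact records for second-level nodes inline, so a traversal can test those children without fetching a separate node array. The field layout below is an invented illustration, not the patent's data format.

      from dataclasses import dataclass, field
      from typing import List, Tuple

      Bounds = Tuple[float, float, float, float, float, float]  # min xyz, max xyz

      @dataclass
      class SecondLevelRecord:
          """Compact information about a second-level BVH node (child bounds + primitive range)."""
          bounds: Bounds
          first_primitive: int
          primitive_count: int

      @dataclass
      class FirstLevelNode:
          """First-level BVH node that stores its children's records inline,
          so testing the children needs no pointer chase to a separate node array."""
          bounds: Bounds
          children: List[SecondLevelRecord] = field(default_factory=list)

          def allocate(self, record: SecondLevelRecord) -> None:
              self.children.append(record)

      root = FirstLevelNode(bounds=(0, 0, 0, 10, 10, 10))
      root.allocate(SecondLevelRecord((0, 0, 0, 5, 5, 5), first_primitive=0, primitive_count=128))
      root.allocate(SecondLevelRecord((5, 0, 0, 10, 5, 5), first_primitive=128, primitive_count=96))
      print(len(root.children))  # 2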
  • Patent number: 11978177
    Abstract: A method and system of image processing of omnidirectional images with a viewpoint shift.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: May 7, 2024
    Assignee: Intel Corporation
    Inventors: Radka Tezaur, Niloufar Pourian
  • Patent number: 11973991
    Abstract: A processor may initiate a recording. The processor may segment the recording into one or more segments. The processor may determine, based on the identification of a primary object in a first segment of the recording, a first bit rate for the first segment of the first recording. The processor may preload one or more subsequent segments that include the primary object at the first bit rate. The processor may preload each of the one or more subsequent segments with a secondary object at a second bit rate. The second bit rate may be lower than the first bit rate. The processor may display the recording to the user.
    Type: Grant
    Filed: October 11, 2019
    Date of Patent: April 30, 2024
    Assignee: International Business Machines Corporation
    Inventors: Balamurugaramanathan Sivaramalingam, Sathya Santhar, Samuel Mathew Jawaharlal, Sarbajit K. Rakshit
  • Patent number: 11972622
    Abstract: A method for updating a coordinate of an annotated point in a digital image due to camera movement is performed by an image processing device, which obtains a current digital image of a scene. The current digital image has been captured by a camera subsequent to movement of the camera relative to the scene. The current digital image is associated with at least one annotated point. Each at least one annotated point has a respective coordinate in the current digital image. The method comprises identifying an amount of the movement by comparing position indicative information in the current digital image to position indicative information in a previous digital image of the scene. The previous digital image has been captured prior to movement of the camera. The method comprises updating the coordinate of each at least one annotated point in accordance with the identified amount of movement and a camera homography.
    Type: Grant
    Filed: December 16, 2021
    Date of Patent: April 30, 2024
    Assignee: Axis AB
    Inventors: Jiandan Chen, Haiyan Xie
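    A minimal sketch of the coordinate update in 11972622, assuming OpenCV is available and that the "position indicative information" amounts to matched feature points between the previous and current images; a homography estimated from the matches is applied to each annotated point. Function and variable names are illustrative.

      import cv2
      import numpy as np

      def update_annotations(prev_pts, curr_pts, annotated):
          """Estimate the camera homography from matched feature points (Nx2 arrays)
          and map annotated point coordinates from the previous image to the current one."""
          H, _mask = cv2.findHomography(prev_pts, curr_pts, cv2.RANSAC, 3.0)
          pts = annotated.reshape(-1, 1, 2).astype(np.float32)
          return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

      # Example: a pure 10-pixel horizontal shift of the scene.
      prev = np.float32([[0, 0], [100, 0], [100, 100], [0, 100], [50, 50]])
      curr = prev + np.float32([10, 0])
      print(update_annotations(prev, curr, np.float32([[20, 30]])))  # ~[[30, 30]]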
  • Patent number: 11969651
    Abstract: An augmented reality system generates computer-mediated reality on a client device. The client device has sensors including a camera configured to capture image data of an environment. The augmented reality system generates a first 3D map of the environment around the client device based on captured image data. The server receives image data captured from a second client device in the environment and generates a second 3D map of the environment. The server links the first and second 3D maps together into a singular 3D map. The singular 3D map may be a graphical representation of the real world using nodes that represent 3D maps generated by image data captured at client devices and edges that represent transformations between the nodes.
    Type: Grant
    Filed: December 18, 2020
    Date of Patent: April 30, 2024
    Assignee: NIANTIC, INC.
    Inventors: Anvith Ekkati, Purna Sowmya Munukutla, Dharini Krishna, Peter James Turner, Gandeevan Raghuraman, Si ying Diana Hu
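    A tiny sketch of the singular 3D map described in 11969651: nodes stand for per-device 3D maps and edges hold the transformation aligning one map's frame to another's. Representing the transform as a 4x4 pose matrix is an assumption.

      import numpy as np

      class MapGraph:
          """Graph of per-device 3D maps; an edge stores the transform from one map frame to another."""
          def __init__(self):
              self.nodes = {}           # map_id -> arbitrary map payload (e.g. a point cloud)
              self.edges = {}           # (src_id, dst_id) -> 4x4 transform

          def add_map(self, map_id, payload):
              self.nodes[map_id] = payload

          def link(self, src_id, dst_id, transform):
              self.edges[(src_id, dst_id)] = transform
              self.edges[(dst_id, src_id)] = np.linalg.inv(transform)

      graph = MapGraph()
      graph.add_map("client_a", payload=None)
      graph.add_map("client_b", payload=None)
      T = np.eye(4)
      T[:3, 3] = [1.0, 0.0, 0.0]        # map B sits one metre along x from map A
      graph.link("client_a", "client_b", T)
      print(graph.edges[("client_b", "client_a")][:3, 3])  # ~[-1. 0. 0.]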
  • Patent number: 11972536
    Abstract: The present invention relates to a method for tracking progress of the construction of objects, in particular walls comprised in a building, based on a 3D digital representation. Building Information Modeling (BIM) may provide a digital representation of the physical and functional characteristics of a place, such as a building comprising walls and other objects.
    Type: Grant
    Filed: November 3, 2023
    Date of Patent: April 30, 2024
    Assignee: DALUX APS
    Inventor: Anders Rong
  • Patent number: 11972061
    Abstract: An input apparatus includes an acquisition circuit that acquires a captured image capturing a user, a detection circuit that detects a first hand of the user from the captured image acquired by the acquisition circuit, and a display circuit that displays, when a second hand different from the first hand is detected during tracking of the first hand detected by the detection circuit, notification information corresponding to a distance between the first hand and the second hand on a display screen.
    Type: Grant
    Filed: April 21, 2023
    Date of Patent: April 30, 2024
    Assignee: SHARP KABUSHIKI KAISHA
    Inventors: Masahiko Takashima, Teruhiko Matsuoka, Tomoya Ishikura
  • Patent number: 11964400
    Abstract: A method for controlling a robot to pick up an object in various positions. The method includes: defining a plurality of reference points on the object; mapping a first camera image of the object in a known position onto a first descriptor image; identifying the descriptors of the reference points from the first descriptor image; mapping a second camera image of the object in an unknown position onto a second descriptor image; searching the identified descriptors of the reference points in the second descriptor image; ascertaining the positions of the reference points in the three-dimensional space in the unknown position from the found positions; and ascertaining a pickup pose of the object for the unknown position from the ascertained positions of the reference points.
    Type: Grant
    Filed: November 8, 2021
    Date of Patent: April 23, 2024
    Assignee: ROBERT BOSCH GMBH
    Inventors: Andras Gabor Kupcsik, Marco Todescato, Markus Spies, Nicolai Waniek, Philipp Christian Schillinger, Mathias Buerger
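    A compact sketch of the descriptor look-up step in 11964400: descriptors identified for the reference points in the first (known-pose) descriptor image are located in the second descriptor image by nearest-neighbour search over all pixels. Dense H x W x D NumPy descriptor maps and brute-force Euclidean matching are assumptions for illustration.

      import numpy as np

      def find_reference_points(ref_descriptors, descriptor_image):
          """For each reference descriptor (K x D), return the (row, col) of the pixel in the
          H x W x D descriptor image whose descriptor is closest in Euclidean distance."""
          h, w, d = descriptor_image.shape
          flat = descriptor_image.reshape(-1, d)                       # (H*W, D)
          # Pairwise distances between the K reference descriptors and all pixels.
          dists = np.linalg.norm(flat[None, :, :] - ref_descriptors[:, None, :], axis=2)
          best = dists.argmin(axis=1)                                  # closest pixel per reference point
          return np.stack(np.unravel_index(best, (h, w)), axis=1)      # (K, 2) pixel coordinates

      # Toy example: 3 reference descriptors searched in an 8x8 map of 16-D descriptors.
      rng = np.random.default_rng(0)
      desc_img = rng.normal(size=(8, 8, 16))
      refs = desc_img[[1, 4, 6], [2, 5, 7]]           # descriptors taken at known pixels
      print(find_reference_points(refs, desc_img))    # [[1 2] [4 5] [6 7]]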
  • Patent number: 11966051
    Abstract: The device (3) includes a display panel (6), a shutter panel (7), and a controller (8). The display panel (6) includes subpixels for displaying a parallax image including a first image and a second image having parallax between the images. The shutter panel (7) is configured to define a traveling direction of image light representing the parallax image from the display panel (6). The controller (8) is configured to change, in a certain time cycle, areas on the shutter panel in a light transmissive state to transmit the image light with at least a certain transmittance and areas in a light attenuating state to transmit the image light with a transmittance lower than the transmittance in the light transmissive state, and is configured to change the subpixels to display the first image and the second image based on positions of the areas in the light transmissive state.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: April 23, 2024
    Assignee: KYOCERA Corporation
    Inventor: Kaoru Kusafuka
  • Patent number: 11967022
    Abstract: In various examples, to support training a deep neural network (DNN) to predict a dense representation of a 3D surface structure of interest, a training dataset is generated using parametric mathematical modeling. A variety of synthetic 3D road surfaces may be generated by modeling a 3D road surface using varied parameters to simulate changes in road direction and lateral surface slope. In an example embodiment, a synthetic 3D road surface may be created by modeling a longitudinal 3D curve and expanding the longitudinal 3D curve to a 3D surface, and the resulting synthetic 3D surface may be sampled to form a synthetic ground truth projection image (e.g., a 2D height map). To generate corresponding input training data, a known pattern that represents which pixels may remain unobserved during 3D structure estimation may be generated and applied to a ground truth projection image to simulate a corresponding sparse projection image.
    Type: Grant
    Filed: October 28, 2021
    Date of Patent: April 23, 2024
    Assignee: NVIDIA Corporation
    Inventors: Kang Wang, Yue Wu, Minwoo Park, Gang Pan
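    A small sketch in the spirit of 11967022: model a longitudinal curve whose height varies along the driving direction, sweep it laterally with a varying slope to obtain a 3D surface, sample it into a ground-truth 2D height map, and mask pixels with a sparsity pattern to simulate unobserved points. All parameter ranges are invented.

      import numpy as np

      rng = np.random.default_rng(42)

      def synthetic_road_height_map(length=64, width=32):
          """Return (ground_truth, sparse_input) height maps of shape (length, width)."""
          s = np.linspace(0.0, 1.0, length)
          # Longitudinal 3D curve: height varies smoothly along the driving direction.
          curve_height = 0.5 * np.sin(2 * np.pi * rng.uniform(0.5, 2.0) * s)
          # Lateral surface slope (e.g. banking) also varies along the curve.
          lateral_slope = 0.1 * np.sin(2 * np.pi * rng.uniform(0.2, 1.0) * s)
          lateral = np.linspace(-1.0, 1.0, width)
          # Expand the curve to a surface: height(s, t) = curve(s) + slope(s) * t.
          ground_truth = curve_height[:, None] + lateral_slope[:, None] * lateral[None, :]
          # Apply a sparsity pattern: keep only pixels that would be observed.
          observed = rng.random(ground_truth.shape) < 0.15
          sparse_input = np.where(observed, ground_truth, 0.0)
          return ground_truth, sparse_input

      gt, sparse = synthetic_road_height_map()
      print(gt.shape, sparse.shape)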
  • Patent number: 11966999
    Abstract: An electronic apparatus performs a method of real time simulation of physical visual effect on one or more Graphics Processing Units (GPUs). The method includes a plurality of time steps. Each of the time steps includes: building up a mapping between particles and background grid blocks; sorting the particles to a level of granularity; transferring momenta and masses of the particles to grid nodes on the background grid blocks to compute forces on the grid nodes; updating velocities and resolving collisions from the computed forces on the grid nodes; and applying the updated velocities back to the particles from the grid nodes and advecting the particles. In some embodiments, the frequency of building up and sorting is reduced compared with the frequency of transferring, updating, and applying in the plurality of time steps.
    Type: Grant
    Filed: March 4, 2022
    Date of Patent: April 23, 2024
    Assignee: TENCENT AMERICA LLC
    Inventors: Yun Fei, Ming Gao
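    A schematic sketch of the scheduling idea in 11966999: the particle-to-block mapping and particle sort are rebuilt only every few time steps, while transfer, grid update, and advection run every step. The stage bodies are stubs and the rebuild interval is an assumed parameter; only the loop structure is the point.

      REBUILD_INTERVAL = 4  # rebuild mapping/sort every N time steps (assumed value)

      def build_particle_block_mapping(particles): pass   # map particles to background grid blocks
      def sort_particles(particles): pass                 # sort particles to a level of granularity
      def particle_to_grid(particles, grid): pass         # transfer momenta/masses, compute grid forces
      def update_grid(grid, dt): pass                     # update velocities, resolve collisions
      def grid_to_particle(particles, grid, dt): pass     # apply velocities back and advect particles

      def simulate(particles, grid, steps, dt):
          for step in range(steps):
              if step % REBUILD_INTERVAL == 0:
                  # Done at reduced frequency compared with the per-step stages below.
                  build_particle_block_mapping(particles)
                  sort_particles(particles)
              particle_to_grid(particles, grid)
              update_grid(grid, dt)
              grid_to_particle(particles, grid, dt)

      simulate(particles=[], grid={}, steps=8, dt=1e-3)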
  • Patent number: 11967020
    Abstract: A distributed, cross reality system efficiently and accurately compares location information that includes image frames. Each of the frames may be represented as a numeric descriptor that enables identification of frames with similar content. The resolution of the descriptors may vary for different computing devices in the distributed system based on degree of ambiguity in image comparisons and/or computing resources for the device. A cloud-based component, which operates on maps of large areas where comparisons can result in ambiguous identification of multiple image frames, may use high resolution descriptors. High resolution descriptors reduce computationally intensive disambiguation processing. A portable device, which is more likely to operate on smaller maps and less likely to have the computational resources to compute a high resolution descriptor, may use a lower resolution descriptor.
    Type: Grant
    Filed: December 20, 2022
    Date of Patent: April 23, 2024
    Assignee: Magic Leap, Inc.
    Inventors: Elad Joseph, Gal Braun, Ali Shahrokni
  • Patent number: 11967014
    Abstract: A 3D conversation system can facilitate 3D conversations in an augmented reality environment, allowing conversation participants to appear as if they are face-to-face. The 3D conversation system can accomplish this with a pipeline of data processing stages, which can include calibrate, capture, tag and filter, compress, decompress, reconstruct, render, and display stages. Generally, the pipeline can capture images of the sending user, create intermediate representations, transform the representations to convert from the orientation the images were taken from to a viewpoint of the receiving user, and output images of the sending user, from the viewpoint of the receiving user, in synchronization with audio captured from the sending user. Such a 3D conversation can take place between two or more sender/receiver systems and, in some implementations, can be mediated by one or more server systems. In various configurations, stages of the pipeline can be customized based on a conversation context.
    Type: Grant
    Filed: May 3, 2023
    Date of Patent: April 23, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Brian Keith Cabral, Albert Parra Pozo
  • Patent number: 11960653
    Abstract: Systems and methods herein describe a multi-modal interaction system. The multi-modal interaction system receives a selection of an augmented reality (AR) experience within an application on a computer device, displays a set of AR objects associated with the AR experience on a graphical user interface (GUI) of the computer device, displays textual cues associated with the set of augmented reality objects on the GUI, receives a hand gesture and a voice command, modifies a subset of augmented reality objects of the set of augmented reality objects based on the hand gesture and the voice command, and displays the modified subset of augmented reality objects on the GUI.
    Type: Grant
    Filed: May 10, 2022
    Date of Patent: April 16, 2024
    Assignee: Snap Inc.
    Inventors: Jonathan Solichin, Xinyao Wang
  • Patent number: 11961250
    Abstract: A light-field image generation system including a shape information acquisition server that acquires shape information indicating a three-dimensional shape of an object, and an image generation server that is provided with a shape reconstruction unit that reconstructs the three-dimensional shape of the object as a virtual three-dimensional shape in a virtual space based on the shape information and a light-field image generation unit that generates a light-field image of the virtual three-dimensional shape at a predetermined viewing point in the virtual space.
    Type: Grant
    Filed: October 13, 2020
    Date of Patent: April 16, 2024
    Assignee: TOPPAN PRINTING CO., LTD.
    Inventor: Tetsuro Morimoto
  • Patent number: 11957978
    Abstract: The present disclosure describes approaches to camera re-localization that improve the speed and accuracy with which pose estimates are generated by fusing output of a computer vision algorithm with data from a prior model of a geographic area in which a user is located. For each candidate pose estimate output by the algorithm, a game server maps the estimate to a position on the prior model (e.g., a specific cell on a heatmap-style histogram) and retrieves a probability corresponding to the mapped position. A data fusion module fuses, for each candidate pose estimate, a confidence score generated by the computer vision algorithm with the location probability from the prior model to generate an updated confidence score. If an updated confidence score meets or exceeds a score threshold, a re-localization module initiates a location-based application (e.g., a parallel reality game) based on the associated candidate pose estimate.
    Type: Grant
    Filed: March 3, 2022
    Date of Patent: April 16, 2024
    Assignee: NIANTIC, INC.
    Inventors: Ben Benfold, Victor Adrian Prisacariu
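    A minimal sketch of the fusion step in 11957978, assuming the computer-vision confidence and the prior-heatmap probability are combined multiplicatively (the abstract does not state the exact fusion rule); a candidate pose whose fused score clears the threshold would trigger the location-based application.

      from typing import List, Optional, Tuple

      Pose = Tuple[float, float, float]  # x, y, heading - illustrative only

      def fuse_and_select(candidates: List[Tuple[Pose, float]],
                          prior_prob_at, score_threshold: float = 0.5) -> Optional[Pose]:
          """candidates: (pose, computer-vision confidence); prior_prob_at(pose) looks up the
          heatmap cell probability. Return the best pose whose fused score clears the threshold."""
          best_pose, best_score = None, 0.0
          for pose, cv_confidence in candidates:
              fused = cv_confidence * prior_prob_at(pose)   # assumed multiplicative fusion
              if fused >= score_threshold and fused > best_score:
                  best_pose, best_score = pose, fused
          return best_pose

      # Toy prior: users are far more likely to be near the origin cell of the heatmap.
      prior = lambda pose: 0.9 if abs(pose[0]) < 5 and abs(pose[1]) < 5 else 0.1
      poses = [((2.0, 1.0, 0.3), 0.7), ((40.0, -3.0, 1.2), 0.8)]
      print(fuse_and_select(poses, prior))  # (2.0, 1.0, 0.3): 0.63 beats 0.08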
  • Patent number: 11961428
    Abstract: Provided is a method including: obtaining a plurality of images corresponding to a plurality of views; identifying at least one view region overlapping with a sub-pixel from among a plurality of view regions corresponding to the plurality of views; identifying a data value corresponding to the sub-pixel for each of at least one image corresponding to the at least one view region; determining an application degree of the data value for each of the at least one image, based on a level of overlap between the sub-pixel and the at least one view region, and determining an output value of the sub-pixel based on a data value adjusted according to the determined application degree; and outputting an image based on output values respectively determined for a plurality of sub-pixels including the sub-pixel.
    Type: Grant
    Filed: May 12, 2022
    Date of Patent: April 16, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Kangwon Jeon
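    A small numeric sketch of the weighting in 11961428: a sub-pixel overlapped by several view regions takes an output value that blends the per-view data values in proportion to each region's overlap with the sub-pixel. The values and overlap fractions below are invented.

      def subpixel_output(view_values, overlap_fractions):
          """Weighted blend of per-view data values by how much each view region overlaps the sub-pixel."""
          total = sum(overlap_fractions)
          if total == 0:
              return 0
          return sum(v * w for v, w in zip(view_values, overlap_fractions)) / total

      # A sub-pixel covered 70% by view 3 (value 200) and 30% by view 4 (value 120).
      print(subpixel_output([200, 120], [0.7, 0.3]))  # 176.0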
  • Patent number: 11960009
    Abstract: Techniques for determining an object contour are discussed. Depth data associated with an object may be received. The depth data, such as lidar data, can be projected onto a two-dimensional plane. A first convex hull may be determined based on the projected lidar data. The first convex hull may include a plurality of boundary edges. A longest boundary edge, having a first endpoint and a second endpoint, can be determined. An angle can be determined based on the first endpoint, the second endpoint, and an interior point in the interior of the first convex hull. The longest boundary edge may be replaced with a first segment based on the first endpoint and the interior point, and a second segment based on the interior point and the second endpoint. An updated convex hull can be determined based on the first segment and the second segment.
    Type: Grant
    Filed: December 30, 2020
    Date of Patent: April 16, 2024
    Assignee: ZOOX, INC.
    Inventors: Yuanyuan Chen, Zeng Wang
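    A sketch of one contour-refinement step from 11960009, using SciPy's convex hull on the 2D-projected lidar points. The abstract does not fully specify how the interior point is selected from the angle it forms with the longest edge's endpoints; choosing the interior point that subtends the largest such angle is an assumption made here.

      import numpy as np
      from scipy.spatial import ConvexHull

      def refine_longest_edge(points_2d):
          """One refinement step: split the hull's longest boundary edge at an interior point."""
          hull = ConvexHull(points_2d)
          verts = hull.vertices                                   # indices of hull points, in order
          edges = list(zip(verts, np.roll(verts, -1)))
          i, j = max(edges, key=lambda e: np.linalg.norm(points_2d[e[0]] - points_2d[e[1]]))
          a, b = points_2d[i], points_2d[j]

          hull_set = set(verts.tolist())
          interior = [k for k in range(len(points_2d)) if k not in hull_set]
          if not interior:
              return [(i, j)]                                      # nothing to refine
          def angle_at(k):                                         # angle a-k-b subtended at candidate k
              u, v = a - points_2d[k], b - points_2d[k]
              return np.arccos(np.clip(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)), -1, 1))
          p = max(interior, key=angle_at)
          # Replace the longest edge (i, j) with two segments (i, p) and (p, j).
          return [(i, p), (p, j)]

      pts = np.array([[0, 0], [4, 0], [4, 3], [0, 3], [2, 0.5]])   # last point lies inside the hull
      print(refine_longest_edge(pts))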
  • Patent number: 11961176
    Abstract: Disclosed approaches provide for interactions of secondary rays of light transport paths in a virtual environment to share lighting contributions when determining lighting conditions for a light transport path. Interactions may be shared based on similarities in characteristics (e.g., hit locations), which may define a region in which interactions may share lighting condition data. The region may correspond to a texel of a texture map and lighting contribution data for interactions may be accumulated to the texel spatially and/or temporally, then used to compute composite lighting contribution data that estimates radiance at an interaction. Approaches are also provided for reprojecting lighting contributions of interactions to pixels to share lighting contribution data from secondary bounces of light transport paths while avoiding potential over-blurring.
    Type: Grant
    Filed: February 7, 2022
    Date of Patent: April 16, 2024
    Assignee: NVIDIA Corporation
    Inventor: Jacopo Pantaleoni
  • Patent number: 11954809
    Abstract: The present disclosure relates to display systems and, more particularly, to augmented reality display systems. In one aspect, a method of fabricating an optical element includes providing a substrate having a first refractive index and transparent in the visible spectrum. The method additionally includes forming on the substrate periodically repeating polymer structures. The method further includes exposing the substrate to a metal precursor followed by an oxidizing precursor. Exposing the substrate is performed under a pressure and at a temperature such that an inorganic material comprising the metal of the metal precursor is incorporated into the periodically repeating polymer structures, thereby forming a pattern of periodically repeating optical structures configured to diffract visible light. The optical structures have a second refractive index greater than the first refractive index.
    Type: Grant
    Filed: April 29, 2022
    Date of Patent: April 9, 2024
    Assignee: Magic Leap, Inc.
    Inventors: Melanie Maputol West, Christophe Peroz, Mauro Melli
  • Patent number: 11954425
    Abstract: A computer-implemented method, computer program product, and computing system for rendering an annotatable image within an image viewer. An indication of an intent to annotate the annotatable image is received from a user. A meme generation interface is rendered with respect to the annotatable image. Meme annotation criteria is received from the user via the meme generation interface. The meme annotation criteria includes one or more of: a meme message, a meme position indicator, and a font type identifier. The annotatable image is modified based, at least in part, upon the meme annotation criteria, thus generating an annotated image. The annotated image is published to a meme publication website.
    Type: Grant
    Filed: August 22, 2022
    Date of Patent: April 9, 2024
    Assignee: Google LLC
    Inventors: Stanislaw Pasko, Michal Brzozowski, Wiktor Gworek, Zachary Yeskel
  • Patent number: 11954890
    Abstract: The present disclosure relates to an apparatus and method for fast refining segmentation for a V-PCC encoder. The apparatus may include a grid segmentation unit segmenting a coordinate space of a point cloud into grid units, and an edge cube search unit searching, among the cubes segmented into grid units, for an edge cube that contains one or more points and a segment boundary. The apparatus may also include a surrounding cube search unit searching for an edge surrounding cube containing one or more points within a predetermined range from the edge cube, and a smooth score calculation unit calculating smooth scores for all the edge surrounding cubes and all the edge cubes. The apparatus may further include a projection plane index update unit obtaining a normal score based on the calculated smooth scores and updating a projection plane index of each point in the edge cube using the normal score.
    Type: Grant
    Filed: April 16, 2021
    Date of Patent: April 9, 2024
    Assignee: Korea Electronics Technology Institute
    Inventors: Yong Hwan Kim, Jieon Kim, JinGang Huh, Jong-geun Park
  • Patent number: 11951395
    Abstract: This application relates to a method for displaying a marker element in a virtual scene performed at a terminal. The method includes: receiving a marking request from a user of the terminal, wherein the user controls a current virtual object rendered in a display interface of the virtual scene; in response to the marking request, determining a target virtual item in the display interface of the virtual scene; obtaining graphic data of a marker element when a distance between the target virtual item and the current virtual object in the virtual scene is within a predefined distance, the marker element being a graphic element used for indicating a location of the target virtual item in the virtual scene; and rendering the marker element according to the graphic data at a designated location adjacent the target virtual item in the display interface of the virtual scene.
    Type: Grant
    Filed: June 2, 2022
    Date of Patent: April 9, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Yourui Fan
  • Patent number: 11956620
    Abstract: A method of presenting audio comprises: identifying a first ear listener position and a second ear listener position in a mixed reality environment; identifying a first virtual sound source in the mixed reality environment; identifying a first object in the mixed reality environment; determining a first audio signal in the mixed reality environment, wherein the first audio signal originates at the first virtual sound source and intersects the first ear listener position; determining a second audio signal in the mixed reality environment, wherein the second audio signal originates at the first virtual sound source, intersects the first object, and intersects the second ear listener position; determining a third audio signal based on the second audio signal and the first object; presenting, to a first ear of a user, the first audio signal; and presenting, to a second ear of the user, the third audio signal.
    Type: Grant
    Filed: June 23, 2023
    Date of Patent: April 9, 2024
    Assignee: Magic Leap, Inc.
    Inventor: Anastasia Andreyevna Tajik
  • Patent number: 11954243
    Abstract: An augmented reality (AR) device can be configured to generate a virtual representation of a user's physical environment. The AR device can capture images of the user's physical environment to generate or identify a user's location. The AR device can project graphics at designated locations within the user's environment to guide the user to capture images of the user's physical environment. The AR device can provide visual, audible, or haptic guidance to direct the user of the AR device to explore the user's environment.
    Type: Grant
    Filed: July 11, 2022
    Date of Patent: April 9, 2024
    Assignee: MAGIC LEAP, INC.
    Inventors: Amy Dedonato, James Cameron Petty, Griffith Buckley Hazen, Jordan Alexander Cazamias, Karen Stolzenberg
  • Patent number: 11954787
    Abstract: The disclosure provides image rendering methods and apparatuses. One example method includes rendering a foreground image first and then rendering a panoramic image used as a background. A pixel corresponding to the foreground image has a corresponding depth value. When the panoramic image is rendered, content corresponding to the panoramic image may be rendered at a pixel corresponding to a depth reference value based on a depth value of a pixel on a canvas. The depth reference value is a depth value of a pixel other than the pixel corresponding to the foreground image.
    Type: Grant
    Filed: June 27, 2022
    Date of Patent: April 9, 2024
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Yixin Deng, Qichao Zhu, Dong Wei, Lei Yang
  • Patent number: 11954813
    Abstract: A three-dimensional scene constructing method, apparatus and system, and a storage medium. The three-dimensional scene constructing method includes: acquiring point cloud data of a key object and a background object in a target scene, wherein the point cloud data of the key object comprises three-dimensional information and corresponding feature information, and the point cloud data of the background object at least comprises three-dimensional information; establishing a feature database of the target scene, wherein the feature database at least comprises a key object feature library for recording three-dimensional information and feature information of the key object; performing registration and fusion on the point cloud data of the key object and the point cloud data of the background object, so as to obtain a three-dimensional model of the target scene; and when updating the three-dimensional model, reconstructing the three-dimensional model in a regional manner according to the feature database.
    Type: Grant
    Filed: September 2, 2021
    Date of Patent: April 9, 2024
    Assignee: BOE Technology Group Co., Ltd.
    Inventors: Youxue Wang, Xiaohui Ma, Kai Geng, Mengjun Hou, Qian Ha
  • Patent number: 11954892
    Abstract: Disclosed is a system and associated methods for compressing motion within an animated point cloud. The resulting compressed file encodes different transforms that recreate the motion of different sets of points across different point clouds or frames of the animation in place of the data for the different sets of points from the different point clouds. The compression involves detecting a motion that changes positioning of a set of points between a first point cloud and subsequent point clouds of an uncompressed encoding of two or more frames of an animation. The compression further involves defining a transform that models the motion, and generating a compressed animated point cloud by encoding the data of the first point cloud in the compressed animated point cloud, and by replacing the data for the set of points in the one or more subsequent point clouds with the transform.
    Type: Grant
    Filed: June 22, 2023
    Date of Patent: April 9, 2024
    Assignee: Illuscio, Inc.
    Inventors: Dwayne Elahie, Nolan Taeksang Yoo
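    A sketch of the "define a transform that models the motion" step in 11954892, assuming the moving set of points keeps point-to-point correspondence between frames and that the motion is rigid; a Kabsch/Procrustes fit recovers the rotation and translation that could be stored in place of the later frame's points.

      import numpy as np

      def fit_rigid_transform(src, dst):
          """Least-squares rotation R and translation t with dst ~= src @ R.T + t (Kabsch)."""
          src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
          H = (src - src_c).T @ (dst - dst_c)
          U, _S, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:            # avoid reflections
              Vt[-1] *= -1
              R = Vt.T @ U.T
          t = dst_c - src_c @ R.T
          return R, t

      # The same 4 points rotated 30 degrees about z and shifted; the transform replaces them.
      theta = np.radians(30)
      R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                         [np.sin(theta),  np.cos(theta), 0],
                         [0, 0, 1]])
      frame0 = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 1]])
      frame1 = frame0 @ R_true.T + np.array([0.5, -0.2, 0.1])
      R, t = fit_rigid_transform(frame0, frame1)
      print(np.allclose(frame0 @ R.T + t, frame1))  # True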
  • Patent number: 11956504
    Abstract: Provided is a content distribution server which is able to establish restrictions on the public disclosure of an object displayed in virtual space at the convenience of the distributor. The content distribution server comprises: a distribution unit that distributes live content for synthesizing video in virtual space using information from the distributor as virtual character information; and a first setting receiving unit that receives, from the distributor terminal used by the distributor, public disclosure restriction settings for establishing restrictions on which objects present in the virtual space displayed on the distributor terminal can be viewed on a viewer terminal used by a viewer to view the live content.
    Type: Grant
    Filed: July 8, 2022
    Date of Patent: April 9, 2024
    Assignee: DWANGO CO., LTD.
    Inventors: Nobuo Kawakami, Kentarou Matsui, Shinnosuke Iwaki, Takashi Kojima, Naoki Yamaguchi
  • Patent number: 11954773
    Abstract: Embodiments described herein provide a process and method running on a computer for creating an augmented image. According to an embodiment, a graphical user interface gathers data that is programmatically analyzed to obtain photographic properties from a first image. Photographic properties are provided to a user for obtaining a second image containing a fiducial mark. The second image is programmatically analyzed to obtain photographic properties. The first image and the second image are programmatically analyzed and processed to produce an augmented image.
    Type: Grant
    Filed: May 12, 2021
    Date of Patent: April 9, 2024
    Inventors: William S. Baron, Sandra F. Baron
  • Patent number: 11954784
    Abstract: A method and system for performing safety-critical rendering of a frame in a tile based graphics processing system. Geometry data for the frame is received, including data defining a plurality of primitives representing a plurality of objects in the frame. A definition of a region in the frame is received, the region being associated with one or more primitives among the plurality of primitives. Verification data is received that associates one or more primitives with the region in the frame. The frame is rendered using the geometry data and the rendering of the frame is controlled using the verification data, so that the rendering excludes, from the frame outside the region, the primitives identified by the verification data.
    Type: Grant
    Filed: January 26, 2022
    Date of Patent: April 9, 2024
    Assignee: Imagination Technologies Limited
    Inventors: Jamie Broome, Ian King
  • Patent number: 11954262
    Abstract: A scanning system comprises an intraoral scanner to capture scan data of a dental site and a computing device to generate a 3D rendering of the dental site. The scanner comprises one or more input devices configured to provide manual interaction with the computing device, where: a first activation of the input device(s) causes the scanning system to enter the scan mode, wherein the 3D rendering has a first visualization during the scan mode; and a second activation of the input device(s) causes the scanning system to enter an overlay mode, wherein the 3D rendering has a second visualization during the overlay mode, wherein the computing device is to present a menu comprising menu options on the display while the scanning system is in the overlay mode, and wherein the scanner is usable to select among the presented menu options.
    Type: Grant
    Filed: May 26, 2022
    Date of Patent: April 9, 2024
    Assignee: Align Technology, Inc.
    Inventors: Michael Sabina, Leon Rasovsky
  • Patent number: 11948235
    Abstract: Disclosed is a system for encoding and/or rendering animations without temporal or spatial restrictions. The system may encode an animation as a point cloud with first data points having a first time value and different positional and non-positional values, and second data points having a second time value and different positional and non-positional values. Rendering the animation may include generating and presenting a first image for the first time value of the animation based on the positional and non-positional values of the first data points, and generating and presenting a second image for the second time value of the animation by changing a visualization at a first position in the first image based on the positional values of a data point from the second data points corresponding to the first position and the data point non-positional values differing from the visualization.
    Type: Grant
    Filed: October 2, 2023
    Date of Patent: April 2, 2024
    Assignee: Illuscio, Inc.
    Inventors: William Peake, III, Joseph Bogacz
  • Patent number: 11948301
    Abstract: Systems and methods of facilitating determination of risk of coronary artery disease (CAD) based at least in part on one or more measurements derived from non-invasive medical image analysis. The methods can include accessing a non-invasively generated medical image, identifying one or more arteries, identifying regions of plaque within an artery, analyzing the regions of plaque to identify low density non-calcified plaque, non-calcified plaque, or calcified plaque based at least in part on density, determining a distance from identified regions of low density non-calcified plaque to one or more of a lumen wall or vessel wall, determining embeddedness of the regions of low density non-calcified plaque by one or more of non-calcified plaque or calcified plaque, determining a shape of the one or more regions of low density non-calcified plaque, and generating a display of the analysis to facilitate determination of a risk of CAD of the subject.
    Type: Grant
    Filed: August 23, 2023
    Date of Patent: April 2, 2024
    Assignee: CLEERLY, INC.
    Inventors: James K. Min, James P. Earls, Shant Malkasian, Hugo Miguel Rodrigues Marques, Chung Chan, Shai Ronen
  • Patent number: 11948249
    Abstract: Disclosed are techniques for estimating a 3D bounding box (3DBB) from a 2D bounding box (2DBB). Conventional techniques to estimate 3DBB from 2DBB rely upon classifying target vehicles within the 2DBB. When the target vehicle is misclassified, the projected bounding box from the estimated 3DBB is inaccurate. To address such issues, it is proposed to estimate the 3DBB without relying upon classifying the target vehicle.
    Type: Grant
    Filed: October 7, 2022
    Date of Patent: April 2, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Young-Ki Baik, ChaeSeong Lim, Duck Hoon Kim
  • Patent number: 11948381
    Abstract: A system can use semantic images, lidar images, and/or 3D bounding boxes to determine mobility parameters for objects in the semantic image. In some cases, the system can generate virtual points for an object in a semantic image and associate the virtual points with lidar points to form denser point clouds for the object. The denser point clouds can be used to estimate the mobility parameters for the object. In certain cases, the system can use semantic images, lidar images, and/or 3D bounding boxes to determine an object sequence for an object. The object sequence can indicate a location of the particular object at different times. The system can use the object sequence to estimate the mobility parameters for the object.
    Type: Grant
    Filed: August 20, 2021
    Date of Patent: April 2, 2024
    Assignee: Motional AD LLC
    Inventors: Varun Bankiti, Oscar Beijbom, Tianwei Yin
  • Patent number: 11947862
    Abstract: Aspects of the present disclosure are directed to streaming interactive content from a native application executing at an artificial reality (XR) device into an artificial reality environment and/or to nearby XR device(s). A shell environment at an XR system can manage the software components of the system. The shell environment can include a shell application and a three-dimensional shell XR environment displayed to a user. An additional application, natively executing at the XR system, can provide a host version of content and a remote version of content. A two-dimensional virtual object displayed in the shell XR environment can display the host version of the content, and the remote version of the content can be streamed to a remote XR system. The remote XR system can display the remote content within another two-dimensional virtual object, for example in another shell XR environment displayed by the remote XR system.
    Type: Grant
    Filed: December 30, 2022
    Date of Patent: April 2, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Jonathan Lindo, Agustin Fonts, Michael James Armstrong, Nandit Tiku, Biju Mathew, Rukmani Ravisundaram, Bryce Masatsune Matsumori
  • Patent number: 11948337
    Abstract: The present disclosure relates to an image processing apparatus and method that can prevent a reduction in image quality. Geometry data is generated as a frame image on which a projected image, obtained by projecting 3D data representing a three-dimensional structure onto a two-dimensional plane, is arranged, and which includes a special value indicating occupancy map information within a range. The generated geometry data is encoded. Further, the encoded geometry data is decoded, and a depth value indicating a position of the 3D data and the occupancy map information are extracted from the decoded geometry data. The present disclosure is applicable to, for example, an information processing apparatus, an image processing apparatus, electronic equipment, an information processing method, or a program.
    Type: Grant
    Filed: September 17, 2019
    Date of Patent: April 2, 2024
    Assignee: SONY CORPORATION
    Inventors: Satoru Kuma, Ohji Nakagami, Hiroyuki Yasuda, Koji Yano, Tsuyoshi Kato
  • Patent number: 11949443
    Abstract: An eyewear device includes a lens; a support structure adapted to be worn on the head of a user, the support structure including a rim configured to support the lens in a viewing area visible to the user when wearing the support structure; an antenna embedded into or forming part of the support structure, the antenna at least partially extending into the rim; a transceiver adapted to send and receive signals; and a tuner coupled between the transceiver and the antenna, the tuner adapted to match impedance between the antenna and the transceiver to improve power transfer.
    Type: Grant
    Filed: October 19, 2022
    Date of Patent: April 2, 2024
    Assignee: Snap Inc.
    Inventor: Ugur Olgun
  • Patent number: 11948237
    Abstract: A method includes obtaining input information defining a user input associated with a user of a first electronic device at a second electronic device. The method also includes presenting, on a display screen of the second electronic device, an avatar. The method further includes causing, using at least one processor of the second electronic device, the avatar on the display screen of the second electronic device to draw the user input on the display screen of the second electronic device. The avatar has associated dimensions within an avatar space, and a first draw path used by the avatar to draw the user input is normalized based on the dimensions of the avatar within the avatar space.
    Type: Grant
    Filed: May 24, 2022
    Date of Patent: April 2, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Duncan D. Knarr, Siva Penke, Svetlana P. Gurenkova
  • Patent number: 11937573
    Abstract: In order to achieve a music providing system capable of controlling the behavioral state of a non-human animal using music, this music providing system for a non-human animal is provided with: a state information acquisition unit for acquiring state information relating to the motion state of an animal of interest; a state estimation processing unit for estimating the current behavioral state of the animal of interest from the state information; a target state storage unit for storing information relating to a target behavioral state for the animal of interest; a sound source storage unit for storing multiple music information pieces; a music information selection unit for detecting the degree of divergence of the current behavioral state from the target behavioral state and selecting one specific music information piece on the basis of the multiple music information pieces stored in the sound source storage unit; and a music information output unit for outputting the specific music information by wireless communication.
    Type: Grant
    Filed: June 27, 2019
    Date of Patent: March 26, 2024
    Assignee: MEC COMPANY LTD.
    Inventors: Kiyoto Tai, Yuji Adachi, Hideshi Hamaguchi
  • Patent number: 11941774
    Abstract: The present disclosure is directed to automatically generating a 360 Virtual Photographic Representation (“spin”) of an object using multiple images of the object. The system uses machine learning to automatically differentiate between images of the object taken from different angles. A user supplies multiple images and/or videos of an object and the system automatically analyzes and classifies the images into the proper order before incorporating the images into an interactive spin. The system automatically classifies the images using features identified in the images. The classifications are based on predetermined classifications associated with the object to facilitate proper ordering of the images in the resulting spin.
    Type: Grant
    Filed: September 27, 2022
    Date of Patent: March 26, 2024
    Assignee: Freddy Technologies LLC
    Inventor: Sudheer Kumar Pamuru
  • Patent number: 11941831
    Abstract: An image processing system to estimate depth for a scene. The image processing system includes a fusion engine to receive a first depth estimate from a geometric reconstruction engine and a second depth estimate from a neural network architecture. The fusion engine is configured to probabilistically fuse the first depth estimate and the second depth estimate to output a fused depth estimate for the scene. The fusion engine is configured to receive a measurement of uncertainty for the first depth estimate from the geometric reconstruction engine and a measurement of uncertainty for the second depth estimate from the neural network architecture, and use the measurements of uncertainty to probabilistically fuse the first depth estimate and the second depth estimate.
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: March 26, 2024
    Assignee: Imperial College Innovations Limited
    Inventors: Tristan William Laidlow, Jan Czarnowski, Stefan Leutenegger
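    A one-function sketch of uncertainty-weighted fusion in the spirit of 11941831, treating the geometric and network depth estimates as independent Gaussians so the fused per-pixel depth is their inverse-variance weighted mean; the patented fusion engine is more elaborate, and this only illustrates the weighting idea.

      import numpy as np

      def fuse_depths(d_geo, var_geo, d_net, var_net):
          """Inverse-variance (precision-weighted) fusion of two per-pixel depth maps."""
          w_geo, w_net = 1.0 / var_geo, 1.0 / var_net
          fused_depth = (w_geo * d_geo + w_net * d_net) / (w_geo + w_net)
          fused_var = 1.0 / (w_geo + w_net)
          return fused_depth, fused_var

      # One pixel: geometric estimate 2.0 m (uncertain), network estimate 2.4 m (confident).
      depth, var = fuse_depths(np.array(2.0), np.array(0.25), np.array(2.4), np.array(0.04))
      print(float(depth), float(var))  # ~2.345 m, variance ~0.034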
  • Patent number: 11941854
    Abstract: Provided are a face image processing method and apparatus, an image device, and a storage medium. The face image processing method includes: acquiring first-key-point information of a first face image; performing position transformation on the first-key-point information to obtain second-key-point information conforming to a second facial geometric attribute, the second facial geometric attribute being different from a first facial geometric attribute corresponding to the first-key-point information; and performing facial texture coding processing by utilizing a neural network and the second-key-point information to obtain a second face image.
    Type: Grant
    Filed: March 16, 2021
    Date of Patent: March 26, 2024
    Assignee: Beijing SenseTime Technology Development Co., Ltd.
    Inventors: Wenyan Wu, Chen Qian, Keqiang Sun, Qianyi Wu, Yuanyuan Xu