Three-dimension Patents (Class 345/419)
  • Patent number: 11967022
    Abstract: In various examples, to support training a deep neural network (DNN) to predict a dense representation of a 3D surface structure of interest, a training dataset is generated using parametric mathematical modeling. A variety of synthetic 3D road surfaces may be generated by modeling a 3D road surface using varied parameters to simulate changes in road direction and lateral surface slope. In an example embodiment, a synthetic 3D road surface may be created by modeling a longitudinal 3D curve and expanding the longitudinal 3D curve to a 3D surface, and the resulting synthetic 3D surface may be sampled to form a synthetic ground truth projection image (e.g., a 2D height map). To generate corresponding input training data, a known pattern that represents which pixels may remain unobserved during 3D structure estimation may be generated and applied to a ground truth projection image to simulate a corresponding sparse projection image.
    Type: Grant
    Filed: October 28, 2021
    Date of Patent: April 23, 2024
    Assignee: NVIDIA Corporation
    Inventors: Kang Wang, Yue Wu, Minwoo Park, Gang Pan
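
A minimal sketch of the data-generation idea in the abstract above (patent 11967022), assuming a simple quadratic longitudinal elevation profile, a constant lateral slope, and a random sparsity mask; the function and parameter names (synthetic_height_map, sparsify, curvature, lateral_slope, mask_prob) are illustrative and not from the patent:

```python
# Build one synthetic ground-truth height map from a parametric road model,
# then mask it to simulate the sparse projection image used as network input.
import numpy as np

def synthetic_height_map(size=128, extent=50.0, curvature=0.002, lateral_slope=0.03):
    """Model a longitudinal elevation profile and expand it laterally into a height surface."""
    x = np.linspace(0.0, extent, size)                # longitudinal distance (m)
    y = np.linspace(-extent / 4, extent / 4, size)    # lateral offset (m)
    X, Y = np.meshgrid(x, y, indexing="ij")
    center_height = curvature * X ** 2                # longitudinal elevation profile
    return center_height + lateral_slope * Y          # add lateral surface slope

def sparsify(height_map, mask_prob=0.85, seed=0):
    """Apply a known pattern of unobserved pixels to mimic sparse 3D structure estimation."""
    rng = np.random.default_rng(seed)
    mask = rng.random(height_map.shape) < mask_prob   # True = unobserved pixel
    sparse = height_map.copy()
    sparse[mask] = np.nan                             # mark unobserved pixels
    return sparse, mask

dense = synthetic_height_map()
sparse, mask = sparsify(dense)
print(dense.shape, np.isnan(sparse).mean())           # e.g. (128, 128) and ~0.85
```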
  • Patent number: 11967020
    Abstract: A distributed, cross reality system efficiently and accurately compares location information that includes image frames. Each of the frames may be represented as a numeric descriptor that enables identification of frames with similar content. The resolution of the descriptors may vary for different computing devices in the distributed system based on the degree of ambiguity in image comparisons and/or the computing resources of the device. A cloud-based component operating on maps of large areas, where comparisons can result in ambiguous identification of multiple image frames, may use high resolution descriptors. High resolution descriptors reduce computationally intensive disambiguation processing. A portable device, which is more likely to operate on smaller maps and less likely to have the computational resources to compute a high resolution descriptor, may use a lower resolution descriptor.
    Type: Grant
    Filed: December 20, 2022
    Date of Patent: April 23, 2024
    Assignee: Magic Leap, Inc.
    Inventors: Elad Joseph, Gal Braun, Ali Shahrokni
  • Patent number: 11967014
    Abstract: A 3D conversation system can facilitate 3D conversations in an augmented reality environment, allowing conversation participants to appear as if they are face-to-face. The 3D conversation system can accomplish this with a pipeline of data processing stages, which can include calibrate, capture, tag and filter, compress, decompress, reconstruct, render, and display stages. Generally, the pipeline can capture images of the sending user, create intermediate representations, transform the representations to convert from the orientation the images were taken from to a viewpoint of the receiving user, and output images of the sending user, from the viewpoint of the receiving user, in synchronization with audio captured from the sending user. Such a 3D conversation can take place between two or more sender/receiving systems and, in some implementations, can be mediated by one or more server systems. In various configurations, stages of the pipeline can be customized based on a conversation context.
    Type: Grant
    Filed: May 3, 2023
    Date of Patent: April 23, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Brian Keith Cabral, Albert Parra Pozo
  • Patent number: 11966999
    Abstract: An electronic apparatus performs a method of real time simulation of physical visual effect on one or more Graphics Processing Units (GPUs). The method includes a plurality of time steps. Each of the time steps includes: building up a mapping between particles and background grid blocks; sorting the particles to a level of granularity; transferring momenta and masses of the particles to grid nodes on the background grid blocks to compute forces on the grid nodes; updating velocities and resolving collisions from the computed forces on the grid nodes; and applying the updated velocities back to the particles from the grid nodes and advecting the particles. In some embodiments, the frequency of building up and sorting is reduced compared with the frequency of transferring, updating, and applying in the plurality of time steps.
    Type: Grant
    Filed: March 4, 2022
    Date of Patent: April 23, 2024
    Assignee: TENCENT AMERICA LLC
    Inventors: Yun Fei, Ming Gao
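
A runnable toy sketch of the scheduling idea in the abstract above (patent 11966999), not TENCENT's GPU implementation: the particle-to-block mapping and particle sort are rebuilt only every few time steps, while transfer, grid update, and advection run every step. The nearest-node 2D particle-in-cell numerics, grid size, and gravity-only forces are simplifying assumptions.

```python
import numpy as np

DX, GRID = 0.1, (32, 32)                       # cell size and grid resolution (assumed)

def build_mapping(pos):
    """Map each particle to a flat grid-cell index."""
    ij = np.clip((pos / DX).astype(int), 0, np.array(GRID) - 1)
    return ij[:, 0] * GRID[1] + ij[:, 1]

def step(pos, vel, mass, cells, dt=1e-3, gravity=np.array([0.0, -9.8])):
    # transfer momenta and masses of the particles to grid nodes
    grid_m = np.zeros(GRID[0] * GRID[1])
    grid_p = np.zeros((GRID[0] * GRID[1], 2))
    np.add.at(grid_m, cells, mass)
    np.add.at(grid_p, cells, mass[:, None] * vel)
    # update grid velocities from the computed forces (only gravity here)
    nz = grid_m > 0
    grid_v = np.zeros_like(grid_p)
    grid_v[nz] = grid_p[nz] / grid_m[nz, None] + dt * gravity
    # apply grid velocities back to the particles and advect them
    vel = grid_v[cells]
    pos = np.clip(pos + dt * vel, 0.0, DX * (GRID[0] - 1))
    return pos, vel

def simulate(pos, vel, mass, n_steps=100, rebuild_every=4):
    cells = None
    for s in range(n_steps):
        if s % rebuild_every == 0:             # infrequent: rebuild mapping + sort by cell
            cells = build_mapping(pos)
            order = np.argsort(cells)
            pos, vel, mass, cells = pos[order], vel[order], mass[order], cells[order]
        pos, vel = step(pos, vel, mass, cells) # every step: transfer / update / advect
    return pos, vel

rng = np.random.default_rng(0)
p = rng.uniform(0.5, 2.5, (1000, 2))
print(simulate(p, np.zeros_like(p), np.ones(1000))[0].mean(axis=0))
```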
  • Patent number: 11964400
    Abstract: A method for controlling a robot to pick up an object in various positions. The method includes: defining a plurality of reference points on the object; mapping a first camera image of the object in a known position onto a first descriptor image; identifying the descriptors of the reference points from the first descriptor image; mapping a second camera image of the object in an unknown position onto a second descriptor image; searching the identified descriptors of the reference points in the second descriptor image; ascertaining the positions of the reference points in the three-dimensional space in the unknown position from the found positions; and ascertaining a pickup pose of the object for the unknown position from the ascertained positions of the reference points.
    Type: Grant
    Filed: November 8, 2021
    Date of Patent: April 23, 2024
    Assignee: ROBERT BOSCH GMBH
    Inventors: Andras Gabor Kupcsik, Marco Todescato, Markus Spies, Nicolai Waniek, Philipp Christian Schillinger, Mathias Buerger
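
A hedged sketch of the descriptor-lookup step described in the abstract above (patent 11964400): descriptors identified at reference pixels of the first descriptor image are searched for in the second descriptor image by nearest-neighbour matching in descriptor space. The dense descriptor images here are random stand-ins; in practice they would come from a trained network, and the matched pixels would then be lifted to 3D to ascertain the pickup pose.

```python
import numpy as np

def find_reference_pixels(desc1, desc2, ref_pixels):
    """For each (row, col) reference pixel in desc1, return the best-matching pixel in desc2."""
    h, w, d = desc2.shape
    flat2 = desc2.reshape(-1, d)                       # (H*W, D) descriptors of image 2
    matches = []
    for r, c in ref_pixels:
        target = desc1[r, c]                           # descriptor of the reference point
        dist = np.linalg.norm(flat2 - target, axis=1)  # L2 distance in descriptor space
        best = int(np.argmin(dist))
        matches.append((best // w, best % w))
    return matches

rng = np.random.default_rng(1)
d1, d2 = rng.normal(size=(64, 64, 16)), rng.normal(size=(64, 64, 16))
print(find_reference_pixels(d1, d2, [(10, 20), (40, 5)]))
```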
  • Patent number: 11966051
    Abstract: The device (3) includes a display panel (6), a shutter panel (7), and a controller (8). The display panel (6) includes subpixels for displaying a parallax image including a first image and a second image having parallax between the images. The shutter panel (7) is configured to define a traveling direction of image light representing the parallax image from the display panel (6). The controller (8) is configured to change, in a certain time cycle, areas on the shutter panel in a light transmissive state to transmit the image light with at least a certain transmittance and areas in a light attenuating state to transmit the image light with a transmittance lower than the transmittance in the light transmissive state, and is configured to change the subpixels to display the first image and the second image based on positions of the areas in the light transmissive state.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: April 23, 2024
    Assignee: KYOCERA Corporation
    Inventor: Kaoru Kusafuka
  • Patent number: 11957978
    Abstract: The present disclosure describes approaches to camera re-localization that improve the speed and accuracy with which pose estimates are generated by fusing output of a computer vision algorithm with data from a prior model of a geographic area in which a user is located. For each candidate pose estimate output by the algorithm, a game server maps the estimate to a position on the prior model (e.g., a specific cell on a heatmap-style histogram) and retrieves a probability corresponding to the mapped position. A data fusion module fuses, for each candidate pose estimate, a confidence score generated by the computer vision algorithm with the location probability from the prior model to generate an updated confidence score. If an updated confidence score meets or exceeds a score threshold, a re-localization module initiates a location-based application (e.g., a parallel reality game) based on the associated candidate pose estimate.
    Type: Grant
    Filed: March 3, 2022
    Date of Patent: April 16, 2024
    Assignee: NIANTIC, INC.
    Inventors: Ben Benfold, Victor Adrian Prisacariu
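
An illustrative sketch of the fusion step in the abstract above (patent 11957978), under assumed data shapes: each candidate pose is mapped to a cell of a heatmap-style prior, its location probability is fused with the computer-vision confidence score (a simple product is used here; the patent's exact rule may differ), and the best candidate is accepted only if the fused score meets a threshold.

```python
import numpy as np

def fuse_pose_candidates(candidates, prior, cell_size, threshold=0.5):
    """candidates: list of (x, y, confidence); prior: 2D array of location probabilities."""
    best = None
    for x, y, conf in candidates:
        # map the candidate position to a cell of the heatmap-style prior
        i = int(np.clip(x // cell_size, 0, prior.shape[0] - 1))
        j = int(np.clip(y // cell_size, 0, prior.shape[1] - 1))
        fused = conf * prior[i, j]                 # fuse confidence with location probability
        if best is None or fused > best[1]:
            best = ((x, y), fused)
    return best if best and best[1] >= threshold else None

prior = np.full((10, 10), 0.2)
prior[3, 4] = 0.9                                  # the user is likely near this cell
cands = [(35.0, 42.0, 0.8), (70.0, 10.0, 0.85)]
print(fuse_pose_candidates(cands, prior, cell_size=10.0))  # ((35.0, 42.0), 0.72)
```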
  • Patent number: 11960653
    Abstract: Systems and methods herein describe a multi-modal interaction system. The multi-modal interaction system receives a selection of an augmented reality (AR) experience within an application on a computer device, displays a set of AR objects associated with the AR experience on a graphical user interface (GUI) of the computer device, displays textual cues associated with the set of augmented reality objects on the GUI, receives a hand gesture and a voice command, modifies a subset of augmented reality objects of the set of augmented reality objects based on the hand gesture and the voice command, and displays the modified subset of augmented reality objects on the GUI.
    Type: Grant
    Filed: May 10, 2022
    Date of Patent: April 16, 2024
    Assignee: Snap Inc.
    Inventors: Jonathan Solichin, Xinyao Wang
  • Patent number: 11960009
    Abstract: Techniques for determining an object contour are discussed. Depth data associated with an object may be received. The depth data, such as lidar data, can be projected onto a two-dimensional plane. A first convex hull may be determined based on the projected lidar data. The first convex hull may include a plurality of boundary edges. A longest boundary edge, having a first endpoint and a second endpoint, can be determined. An angle can be determined based on the first endpoint, the second endpoint, and an interior point in the interior of the first convex hull. The longest boundary edge may be replaced with a first segment based on the first endpoint and the interior point, and a second segment based on the interior point and the second endpoint. An updated convex hull can be determined based on the first segment and the second segment.
    Type: Grant
    Filed: December 30, 2020
    Date of Patent: April 16, 2024
    Assignee: ZOOX, INC.
    Inventors: Yuanyuan Chen, Zeng Wang
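
A rough sketch of one refinement iteration described in the abstract above (patent 11960009), assuming SciPy's ConvexHull and a largest-subtended-angle heuristic for choosing the interior point; neither assumption is taken from the patent:

```python
# Project lidar points onto the ground plane, compute a convex hull, find the
# longest boundary edge, and replace it with two segments through an interior point.
import numpy as np
from scipy.spatial import ConvexHull

def refine_contour_once(points_3d):
    pts = np.asarray(points_3d)[:, :2]                 # project onto the x-y plane
    hull_idx = ConvexHull(pts).vertices                # hull vertex indices, CCW order
    contour = pts[hull_idx]

    # find the longest boundary edge (pair of consecutive hull vertices)
    nxt = np.roll(contour, -1, axis=0)
    lengths = np.linalg.norm(nxt - contour, axis=1)
    k = int(np.argmax(lengths))
    a, b = contour[k], nxt[k]

    # pick the interior (non-hull) point subtending the widest angle a-p-b
    interior = np.delete(pts, hull_idx, axis=0)
    if len(interior) == 0:
        return contour
    va, vb = a - interior, b - interior
    cos = np.sum(va * vb, axis=1) / (
        np.linalg.norm(va, axis=1) * np.linalg.norm(vb, axis=1) + 1e-9)
    p = interior[int(np.argmin(cos))]                  # smallest cosine = largest angle

    # replace edge (a, b) with segments (a, p) and (p, b)
    return np.insert(contour, k + 1, p, axis=0)

rng = np.random.default_rng(0)
cloud = np.c_[rng.uniform(0, 4, (200, 2)), rng.uniform(0, 2, 200)]
print(refine_contour_once(cloud).shape)
```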
  • Patent number: 11961176
    Abstract: Disclosed approaches provide for interactions of secondary rays of light transport paths in a virtual environment to share lighting contributions when determining lighting conditions for a light transport path. Interactions may be shared based on similarities in characteristics (e.g., hit locations), which may define a region in which interactions may share lighting condition data. The region may correspond to a texel of a texture map and lighting contribution data for interactions may be accumulated to the texel spatially and/or temporally, then used to compute composite lighting contribution data that estimates radiance at an interaction. Approaches are also provided for reprojecting lighting contributions of interactions to pixels to share lighting contribution data from secondary bounces of light transport paths while avoiding potential over blurring.
    Type: Grant
    Filed: February 7, 2022
    Date of Patent: April 16, 2024
    Assignee: NVIDIA Corporation
    Inventor: Jacopo Pantaleoni
  • Patent number: 11961250
    Abstract: A light-field image generation system including a shape information acquisition server that acquires shape information indicating a three-dimensional shape of an object, and an image generation server that is provided with a shape reconstruction unit that reconstructs the three-dimensional shape of the object as a virtual three-dimensional shape in a virtual space based on the shape information and a light-field image generation unit that generates a light-field image of the virtual three-dimensional shape at a predetermined viewing point in the virtual space.
    Type: Grant
    Filed: October 13, 2020
    Date of Patent: April 16, 2024
    Assignee: TOPPAN PRINTING CO., LTD.
    Inventor: Tetsuro Morimoto
  • Patent number: 11961428
    Abstract: Provided is a method including: obtaining a plurality of images corresponding to a plurality of views; identifying at least one view region overlapping with a sub-pixel from among a plurality of view regions corresponding to the plurality of views; identifying a data value corresponding to the sub-pixel for each of at least one image corresponding to the at least one view region; determining an application degree of the data value for each of the at least one image, based on a level of overlap between the sub-pixel and the at least one view region, and determining an output value of the sub-pixel based on a data value adjusted according to the determined application degree; and outputting an image based on output values respectively determined for a plurality of sub-pixels including the sub-pixel.
    Type: Grant
    Filed: May 12, 2022
    Date of Patent: April 16, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Kangwon Jeon
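
A small sketch of the blending rule suggested by the abstract above (patent 11961428): the data value of each overlapping view is applied in proportion to its overlap with the sub-pixel. The normalisation by total overlap is an assumption for illustration.

```python
def subpixel_output(view_values, overlap_areas):
    """Blend per-view data values by their level of overlap with the sub-pixel."""
    total = sum(overlap_areas)
    if total == 0:
        return 0.0
    weights = [a / total for a in overlap_areas]       # application degree per view
    return sum(w * v for w, v in zip(weights, view_values))

# the sub-pixel overlaps one view region by 70 % and a neighbouring one by 30 %
print(subpixel_output(view_values=[200, 120], overlap_areas=[0.7, 0.3]))  # 176.0
```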
  • Patent number: 11954773
    Abstract: Embodiments described herein provide a process and method running on a computer for creating an augmented image. According to an embodiment, a graphical user interface gathers data that is programmatically analyzed to obtain photographic properties from a first image. Photographic properties are provided to a user for obtaining a second image containing a fiducial mark. The second image is programmatically analyzed to obtain photographic properties. The first image and the second image are programmatically analyzed and processed to produce an augmented image.
    Type: Grant
    Filed: May 12, 2021
    Date of Patent: April 9, 2024
    Inventors: William S. Baron, Sandra F. Baron
  • Patent number: 11954890
    Abstract: The present disclosure relates to an apparatus and method for fast refining segmentation for a V-PCC encoder. The apparatus may include a grid segmentation unit segmenting a coordinate space of a point cloud into grid units, and an edge cube search unit searching a cube containing one or more points among the cubes segmented into grid units and containing a segment boundary. The apparatus may also include a surrounding cube search unit searching an edge surrounding cube containing one or more points within a predetermined range from the edge cube, and a smooth score calculation unit calculating smooth scores for all the edge surrounding cubes and all the edge cubes. The apparatus may further include a projection plane index update unit obtaining a normal score based on the calculated smooth scores and updating a projection plane index of each point in the edge cube using the normal score.
    Type: Grant
    Filed: April 16, 2021
    Date of Patent: April 9, 2024
    Assignee: Korea Electronics Technology Institute
    Inventors: Yong Hwan Kim, Jieon Kim, JinGang Huh, Jong-geun Park
  • Patent number: 11954262
    Abstract: A scanning system comprises an intraoral scanner to capture scan data of a dental site and a computing device to generate a 3D rendering of the dental site. The scanner comprises one or more input devices configured to provide manual interaction with the computing device, where: a first activation of the input device(s) causes the scanning system to enter the scan mode, wherein the 3D rendering has a first visualization during the scan mode; and a second activation of the input device(s) causes the scanning system to enter an overlay mode, wherein the 3D rendering has a second visualization during the overlay mode, wherein the computing device is to present a menu comprising menu options on the display while the scanning system is in the overlay mode, and wherein the scanner is usable to select among the presented menu options.
    Type: Grant
    Filed: May 26, 2022
    Date of Patent: April 9, 2024
    Assignee: Align Technology, Inc.
    Inventors: Michael Sabina, Leon Rasovsky
  • Patent number: 11954892
    Abstract: Disclosed is a system and associated methods for compressing motion within an animated point cloud. The resulting compressed file encodes different transforms that recreate the motion of different sets of points across different point clouds or frames of the animation in place of the data for the different sets of points from the different point clouds. The compression involves detecting a motion that changes positioning of a set of points between a first point cloud and subsequent point clouds of an uncompressed encoding of two or more frames of an animation. The compression further involves defining a transform that models the motion, and generating a compressed animated point cloud by encoding the data of the first point cloud in the compressed animated point cloud, and by replacing the data for the set of points in the one or more subsequent point clouds with the transform.
    Type: Grant
    Filed: June 22, 2023
    Date of Patent: April 9, 2024
    Assignee: Illuscio, Inc.
    Inventors: Dwayne Elahie, Nolan Taeksang Yoo
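
A hedged illustration of the compression idea in the abstract above (patent 11954892): for a set of points that moves between two frames, a single least-squares affine transform is fitted and stored in place of the moved points. The affine motion model and the lstsq fit are illustrative choices, not the patent's encoder.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform A (3x4) such that dst ~= [src, 1] @ A.T."""
    src_h = np.hstack([src, np.ones((len(src), 1))])    # homogeneous coordinates
    A_t, *_ = np.linalg.lstsq(src_h, dst, rcond=None)   # solution has shape (4, 3)
    return A_t.T                                         # (3, 4)

def apply_affine(A, src):
    return np.hstack([src, np.ones((len(src), 1))]) @ A.T

rng = np.random.default_rng(2)
frame0 = rng.uniform(-1, 1, (500, 3))                    # moving set of points in frame 0
frame1 = frame0 + np.array([0.1, 0.0, 0.05])             # same set after a simple motion

A = fit_affine(frame0, frame1)                           # transform stored instead of frame1
residual = np.abs(apply_affine(A, frame0) - frame1).max()
print(A.round(3), residual < 1e-6)
```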
  • Patent number: 11954813
    Abstract: A three-dimensional scene constructing method, apparatus and system, and a storage medium. The three-dimensional scene constructing method includes: acquiring point cloud data of a key object and a background object in a target scene, wherein the point cloud data of the key object comprises three-dimensional information and corresponding feature information, and the point cloud data of the background object at least comprises three-dimensional information; establishing a feature database of the target scene, wherein the feature database at least comprises a key object feature library for recording three-dimensional information and feature information of the key object; performing registration and fusion on the point cloud data of the key object and the point cloud data of the background object, so as to obtain a three-dimensional model of the target scene; and when updating the three-dimensional model, reconstructing the three-dimensional model in a regional manner according to the feature database.
    Type: Grant
    Filed: September 2, 2021
    Date of Patent: April 9, 2024
    Assignee: BOE Technology Group Co., Ltd.
    Inventors: Youxue Wang, Xiaohui Ma, Kai Geng, Mengjun Hou, Qian Ha
  • Patent number: 11954787
    Abstract: The disclosure provides image rendering methods and apparatuses. One example method includes that a foreground image is first rendered, and then a panoramic image used as a background is rendered. A pixel corresponding to the foreground image has a corresponding depth value. When the panoramic image is rendered, content corresponding to the panoramic image may be rendered at a pixel corresponding to a depth reference value based on a depth value of a pixel on a canvas. The depth reference value is a depth value of a pixel other than the pixel corresponding to the foreground image.
    Type: Grant
    Filed: June 27, 2022
    Date of Patent: April 9, 2024
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Yixin Deng, Qichao Zhu, Dong Wei, Lei Yang
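
A minimal sketch of the two-pass rendering order described in the abstract above (patent 11954787): the foreground is drawn first and writes depth, and the panoramic background is then written only at pixels whose depth still holds the reference value. The tiny NumPy canvas and the use of infinity as the reference value are assumptions for illustration.

```python
import numpy as np

H, W, DEPTH_REF = 4, 6, np.inf              # tiny canvas; inf marks "no foreground here"

color = np.zeros((H, W), dtype=int)
depth = np.full((H, W), DEPTH_REF)

# pass 1: render the foreground (a 2x2 quad at depth 1.0, colour value 7)
depth[1:3, 2:4] = 1.0
color[1:3, 2:4] = 7

# pass 2: render the panorama only where the depth buffer still holds the reference value
panorama = np.arange(H * W).reshape(H, W)   # stand-in background content
background_mask = depth == DEPTH_REF
color[background_mask] = panorama[background_mask]
print(color)
```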
  • Patent number: 11951395
    Abstract: This application relates to a method for displaying a marker element in a virtual scene performed at a terminal. The method includes: receiving a marking request from a user of the terminal, wherein the user controls a current virtual object rendered in a display interface of the virtual scene; in response to the marking request, determining a target virtual item in the display interface of the virtual scene; obtaining graphic data of a marker element when a distance between the target virtual item and the current virtual object in the virtual scene is within a predefined distance, the marker element being a graphic element used for indicating a location of the target virtual item in the virtual scene; and rendering the marker element according to the graphic data at a designated location adjacent the target virtual item in the display interface of the virtual scene.
    Type: Grant
    Filed: June 2, 2022
    Date of Patent: April 9, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Yourui Fan
  • Patent number: 11954784
    Abstract: A method and system for performing safety-critical rendering of a frame in a tile based graphics processing system. Geometry data for the frame is received, including data defining a plurality of primitives representing a plurality of objects in the frame. A definition of a region in the frame is received, the region being associated with one or more primitives among the plurality of primitives. Verification data is received that associates one or more primitives with the region in the frame. The frame is rendered using the geometry data and the rendering of the frame is controlled using the verification data, so that the rendering excludes, from the frame outside the region, the primitives identified by the verification data.
    Type: Grant
    Filed: January 26, 2022
    Date of Patent: April 9, 2024
    Assignee: Imagination Technologies Limited
    Inventors: Jamie Broome, Ian King
  • Patent number: 11954425
    Abstract: A computer-implemented method, computer program product, and computing system for rendering an annotatable image within an image viewer. An indication of an intent to annotate the annotatable image is received from a user. A meme generation interface is rendered with respect to the annotatable image. Meme annotation criteria is received from the user via the meme generation interface. The meme annotation criteria includes one or more of: a meme message, a meme position indicator, and a font type identifier. The annotatable image is modified based, at least in part, upon the meme annotation criteria, thus generating an annotated image. The annotated image is published to a meme publication website.
    Type: Grant
    Filed: August 22, 2022
    Date of Patent: April 9, 2024
    Assignee: Google LLC
    Inventors: Stanislaw Pasko, Michal Brzozowski, Wiktor Gworek, Zachary Yeskel
  • Patent number: 11954809
    Abstract: The present disclosure relates to display systems and, more particularly, to augmented reality display systems. In one aspect, a method of fabricating an optical element includes providing a substrate having a first refractive index and transparent in the visible spectrum. The method additionally includes forming on the substrate periodically repeating polymer structures. The method further includes exposing the substrate to a metal precursor followed by an oxidizing precursor. Exposing the substrate is performed under a pressure and at a temperature such that an inorganic material comprising the metal of the metal precursor is incorporated into the periodically repeating polymer structures, thereby forming a pattern of periodically repeating optical structures configured to diffract visible light. The optical structures have a second refractive index greater than the first refractive index.
    Type: Grant
    Filed: April 29, 2022
    Date of Patent: April 9, 2024
    Assignee: Magic Leap, Inc.
    Inventors: Melanie Maputol West, Christophe Peroz, Mauro Melli
  • Patent number: 11954243
    Abstract: An augmented reality (AR) device can be configured to generate a virtual representation of a user's physical environment. The AR device can capture images of the user's physical environment to generate or identify a user's location. The AR device can project graphics at designated locations within the user's environment to guide the user to capture images of the user's physical environment. The AR device can provide visual, audible, or haptic guidance to direct the user of the AR device to explore the user's environment.
    Type: Grant
    Filed: July 11, 2022
    Date of Patent: April 9, 2024
    Assignee: MAGIC LEAP, INC.
    Inventors: Amy Dedonato, James Cameron Petty, Griffith Buckley Hazen, Jordan Alexander Cazamias, Karen Stolzenberg
  • Patent number: 11956620
    Abstract: A method of presenting audio comprises: identifying a first ear listener position and a second ear listener position in a mixed reality environment; identifying a first virtual sound source in the mixed reality environment; identifying a first object in the mixed reality environment; determining a first audio signal in the mixed reality environment, wherein the first audio signal originates at the first virtual sound source and intersects the first ear listener position; determining a second audio signal in the mixed reality environment, wherein the second audio signal originates at the first virtual sound source, intersects the first object, and intersects the second ear listener position; determining a third audio signal based on the second audio signal and the first object; presenting, to a first ear of a user, the first audio signal; and presenting, to a second ear of the user, the third audio signal.
    Type: Grant
    Filed: June 23, 2023
    Date of Patent: April 9, 2024
    Assignee: Magic Leap, Inc.
    Inventor: Anastasia Andreyevna Tajik
  • Patent number: 11956504
    Abstract: Provided is a content distribution server which is able to establish restrictions on the public disclosure of an object displayed in virtual space at the convenience of the distributor. The content distribution server comprises: a distribution unit that distributes live content for synthesizing video in virtual space using information from the distributor as virtual character information; and a first setting receiving unit that receives, from the distributor terminal used by the distributor, public disclosure restriction settings for establishing restrictions on objects present in virtual space displayed on the distributor terminal that can be viewed on a viewer terminal used by a viewer to view live content.
    Type: Grant
    Filed: July 8, 2022
    Date of Patent: April 9, 2024
    Assignee: DWANGO CO., LTD.
    Inventors: Nobuo Kawakami, Kentarou Matsui, Shinnosuke Iwaki, Takashi Kojima, Naoki Yamaguchi
  • Patent number: 11948381
    Abstract: A system can use semantic images, lidar images, and/or 3D bounding boxes to determine mobility parameters for objects in the semantic image. In some cases, the system can generate virtual points for an object in a semantic image and associate the virtual points with lidar points to form denser point clouds for the object. The denser point clouds can be used to estimate the mobility parameters for the object. In certain cases, the system can use semantic images, lidar images, and/or 3D bounding boxes to determine an object sequence for an object. The object sequence can indicate a location of the particular object at different times. The system can use the object sequence to estimate the mobility parameters for the object.
    Type: Grant
    Filed: August 20, 2021
    Date of Patent: April 2, 2024
    Assignee: Motional AD LLC
    Inventors: Varun Bankiti, Oscar Beijbom, Tianwei Yin
  • Patent number: 11948301
    Abstract: Systems and methods of facilitating determination of risk of coronary artery disease (CAD) based at least in part on one or more measurements derived from non-invasive medical image analysis. The methods can include accessing a non-invasively generated medical image, identifying one or more arteries, identifying regions of plaque within an artery, analyzing the regions of plaque to identify low density non-calcified plaque, non-calcified plaque, or calcified plaque based at least in part on density, determining a distance from identified regions of low density non-calcified plaque to one or more of a lumen wall or vessel wall, determining embeddedness of the regions of low density non-calcified plaque by one or more of non-calcified plaque or calcified plaque, determining a shape of the one or more regions of low density non-calcified plaque, and generating a display of the analysis to facilitate determination of one or more of a risk of CAD of the subject.
    Type: Grant
    Filed: August 23, 2023
    Date of Patent: April 2, 2024
    Assignee: CLEERLY, INC.
    Inventors: James K. Min, James P. Earls, Shant Malkasian, Hugo Miguel Rodrigues Marques, Chung Chan, Shai Ronen
  • Patent number: 11948337
    Abstract: The present disclosure relates to image processing apparatus and method that can prevent a reduction in image quality. Geometry data that is a frame image having arranged thereon a projected image obtained by projecting 3D data representing a three-dimensional structure on a two-dimensional plane and includes a special value indicating occupancy map information in a range is generated. The generated geometry data is encoded. Further, the encoded data on the geometry data is decoded, and a depth value indicating a position of the 3D data and the occupancy map information are extracted from the decoded geometry data. The present disclosure is applicable to, for example, an information processing apparatus, an image processing apparatus, electronic equipment, an information processing method, or a program.
    Type: Grant
    Filed: September 17, 2019
    Date of Patent: April 2, 2024
    Assignee: SONY CORPORATION
    Inventors: Satoru Kuma, Ohji Nakagami, Hiroyuki Yasuda, Koji Yano, Tsuyoshi Kato
  • Patent number: 11949443
    Abstract: An eyewear device that includes a lens; a support structure adapted to be worn on the head of a user, the support structure including a rim configured to support the lens in a viewing area visible to the user when wearing the support structure; an antenna embedded into or forming part of the support structure, the antenna at least partially extending into the rim; a transceiver adapted to send and receive signals; and a tuner coupled between the transceiver and the antenna, the tuner adapted to match impedance between the antenna and the transceiver to improve power transfer.
    Type: Grant
    Filed: October 19, 2022
    Date of Patent: April 2, 2024
    Assignee: Snap Inc.
    Inventor: Ugur Olgun
  • Patent number: 11948237
    Abstract: A method includes obtaining input information defining a user input associated with a user of a first electronic device at a second electronic device. The method also includes presenting, on a display screen of the second electronic device, an avatar. The method further includes causing, using at least one processor of the second electronic device, the avatar on the display screen of the second electronic device to draw the user input on the display screen of the second electronic device. The avatar has associated dimensions within an avatar space, and a first draw path used by the avatar to draw the user input is normalized based on the dimensions of the avatar within the avatar space.
    Type: Grant
    Filed: May 24, 2022
    Date of Patent: April 2, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Duncan D. Knarr, Siva Penke, Svetlana P. Gurenkova
  • Patent number: 11948235
    Abstract: Disclosed is a system for encoding and/or rendering animations without temporal or spatial restrictions. The system may encode an animation as a point cloud with first data points having a first time value and different positional and non-positional values, and second data points having a second time value and different positional and non-positional values. Rendering the animation may include generating and presenting a first image for the first time value of the animation based on the positional and non-positional values of the first data points, and generating and presenting a second image for the second time value of the animation by changing a visualization at a first position in the first image based on the positional values of a data point from the second data points corresponding to the first position and the data point non-positional values differing from the visualization.
    Type: Grant
    Filed: October 2, 2023
    Date of Patent: April 2, 2024
    Assignee: Illuscio, Inc.
    Inventors: William Peake, III, Joseph Bogacz
  • Patent number: 11947862
    Abstract: Aspects of the present disclosure are directed to streaming interactive content from a native application executing at an artificial reality (XR) device into an artificial reality environment and/or to nearby XR device(s). A shell environment at an XR system can manage the software components of the system. The shell environment can include a shell application and a three-dimensional shell XR environment displayed to a user. An additional application, natively executing at the XR system, can provide a host version of content and a remote version of content. A two-dimensional virtual object displayed in the shell XR environment can display the host version of the content, and the remote version of the content can be streamed to a remote XR system. The remote XR system can display the remote content within another two-dimensional virtual object, for example in another shell XR environment displayed by the remote XR system.
    Type: Grant
    Filed: December 30, 2022
    Date of Patent: April 2, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Jonathan Lindo, Agustin Fonts, Michael James Armstrong, Nandit Tiku, Biju Mathew, Rukmani Ravisundaram, Bryce Masatsune Matsumori
  • Patent number: 11948249
    Abstract: Disclosed are techniques for estimating a 3D bounding box (3DBB) from a 2D bounding box (2DBB). Conventional techniques to estimate 3DBB from 2DBB rely upon classifying target vehicles within the 2DBB. When the target vehicle is misclassified, the projected bounding box from the estimated 3DBB is inaccurate. To address such issues, it is proposed to estimate the 3DBB without relying upon classifying the target vehicle.
    Type: Grant
    Filed: October 7, 2022
    Date of Patent: April 2, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Young-Ki Baik, ChaeSeong Lim, Duck Hoon Kim
  • Patent number: 11941854
    Abstract: Provided are a face image processing method and apparatus, an image device, and a storage medium. The face image processing method includes: acquiring first-key-point information of a first face image; performing position transformation on the first-key-point information to obtain second-key-point information conforming to a second facial geometric attribute, the second facial geometric attribute being different from a first facial geometric attribute corresponding to the first-key-point information; and performing facial texture coding processing by utilizing a neural network and the second-key-point information to obtain a second face image.
    Type: Grant
    Filed: March 16, 2021
    Date of Patent: March 26, 2024
    Assignee: Beijing SenseTime Technology Development Co., Ltd.
    Inventors: Wenyan Wu, Chen Qian, Keqiang Sun, Qianyi Wu, Yuanyuan Xu
  • Patent number: 11941774
    Abstract: The present disclosure is directed to automatically generating a 360 Virtual Photographic Representation (“spin”) of an object using multiple images of the object. The system uses machine learning to automatically differentiate between images of the object taken from different angles. A user supplies multiple images and/or videos of an object and the system automatically analyzes and classifies the images into the proper order before incorporating the images into an interactive spin. The system automatically classifies the images using features identified in the images. The classifications are based on predetermined classifications associated with the object to facilitate proper ordering of the images in the resulting spin.
    Type: Grant
    Filed: September 27, 2022
    Date of Patent: March 26, 2024
    Assignee: Freddy Technologies LLC
    Inventor: Sudheer Kumar Pamuru
  • Patent number: 11941808
    Abstract: A medical image processing device for visualizing an organ includes a processor. The processor is configured: to acquire volume data including the organ; to extract tubular tissues included in the organ; to designate an excision region that is a region to be excised in the organ; to determine whether or not to excise tubular tissues included in the excision region; and not to display tubular tissues to be excised in the excision region and to display tubular tissues not to be excised in the excision region on a display unit, when displaying a remaining region that is a range excluding the excision region in the organ.
    Type: Grant
    Filed: February 21, 2022
    Date of Patent: March 26, 2024
    Assignee: ZIOSOFT, INC.
    Inventors: Shusuke Chino, Yuichiro Hourai
  • Patent number: 11943425
    Abstract: A display device according to an embodiment of the present disclosure includes: a transparent screen; one or more imaging units; and a video projection unit that acquires positional information regarding a predetermined subject included in each of captured images obtained by the one or more imaging units and then irradiates the transparent screen with video light on the basis of the positional information to cause predetermined video to appear on the transparent screen for the subject.
    Type: Grant
    Filed: November 7, 2019
    Date of Patent: March 26, 2024
    Assignee: SONY GROUP CORPORATION
    Inventors: Tomoya Yano, Yuji Nakahata, Akira Tanaka
  • Patent number: 11941831
    Abstract: An image processing system to estimate depth for a scene. The image processing system includes a fusion engine to receive a first depth estimate from a geometric reconstruction engine and a second depth estimate from a neural network architecture. The fusion engine is configured to probabilistically fuse the first depth estimate and the second depth estimate to output a fused depth estimate for the scene. The fusion engine is configured to receive a measurement of uncertainty for the first depth estimate from the geometric reconstruction engine and a measurement of uncertainty for the second depth estimate from the neural network architecture, and use the measurements of uncertainty to probabilistically fuse the first depth estimate and the second depth estimate.
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: March 26, 2024
    Assignee: Imperial College Innovations Limited
    Inventors: Tristan William Laidlow, Jan Czarnowski, Stefan Leutenegger
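
A hedged sketch of one common probabilistic fusion rule, inverse-variance weighting, applied to the two depth estimates and their uncertainty measurements described in the abstract above (patent 11941831); the patent's actual fusion may differ.

```python
import numpy as np

def fuse_depths(d_geo, var_geo, d_net, var_net):
    """Fuse two per-pixel depth maps using their uncertainties (variances)."""
    w_geo, w_net = 1.0 / var_geo, 1.0 / var_net
    fused = (w_geo * d_geo + w_net * d_net) / (w_geo + w_net)
    fused_var = 1.0 / (w_geo + w_net)             # uncertainty of the fused estimate
    return fused, fused_var

d_geo = np.array([[2.0, 2.1], [1.9, 2.2]])        # geometric reconstruction (metres)
v_geo = np.array([[0.01, 0.50], [0.02, 0.40]])    # confident except where texture is poor
d_net = np.array([[2.3, 2.0], [2.1, 2.0]])        # neural network prediction
v_net = np.array([[0.20, 0.05], [0.20, 0.05]])
print(fuse_depths(d_geo, v_geo, d_net, v_net)[0].round(3))
```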
  • Patent number: 11937573
    Abstract: In order to achieve a music providing system capable of controlling the behavioral state of a non-human animal using music, this music providing system for a non-human animal is provided with: a state information acquisition unit for acquiring state information relating to the motion state of an animal of interest; a state estimation processing unit for estimating the current behavioral state of the animal of interest from the state information; a target state storage unit for storing information relating to a target behavioral state for the animal of interest; a sound source storage unit for storing multiple music information pieces; a music information selection unit for detecting the degree of divergence of the current behavioral state from the target behavioral state and selecting one specific music information piece on the basis of the multiple music information pieces stored in the sound source storage unit; and a music information output unit for outputting the specific music information by wireless communication.
    Type: Grant
    Filed: June 27, 2019
    Date of Patent: March 26, 2024
    Assignee: MEC COMPANY LTD.
    Inventors: Kiyoto Tai, Yuji Adachi, Hideshi Hamaguchi
  • Patent number: 11935196
    Abstract: Techniques are described for using computing devices to perform automated operations related to providing visual information of multiple types in an integrated manner about a building or other defined area. The techniques may include generating and presenting a GUI (graphical user interface) on a client device that includes a computer model of the building's interior with one or more first types of information (e.g., in a first pane of the GUI), and simultaneously presenting other types of related information about the building interior (e.g., in additional separate GUI pane(s)) that is coordinated with the first type(s) of information being currently displayed. The computer model may be a 3D (three-dimensional) or 2.5D representation generated after the house is built and showing the actual house's interior (e.g., walls, furniture, etc.), and may be displayed to a user of a client computing device in a displayed GUI with various user-selectable controls.
    Type: Grant
    Filed: June 10, 2023
    Date of Patent: March 19, 2024
    Assignee: MFTB Holdco, Inc.
    Inventors: Yuguang Li, Ivaylo Boyadzhiev, Romualdo Impas
  • Patent number: 11935288
    Abstract: The systems and methods herein provide improved methodologies for visualization on a user's display of sensor data (e.g., 2D and 3D information obtained from or derived from sensors) for objects, components, or features of interest in a scene. The previously acquired sensor data is processable for concurrent display of objects/features/scene or location visualizations to a user during their real-time navigation of a scene camera during a variety of user visualization activities. Sensor data can be acquired via the operation of vehicles configured with one or more sensors, such as unmanned aerial vehicles, or from other methodologies, or from any other suitable sensor data acquisition activities. Objects etc. for which acquired sensor data can be visualized by a user on a display include buildings, parts of buildings, and infrastructure elements, among other things.
    Type: Grant
    Filed: January 3, 2022
    Date of Patent: March 19, 2024
    Assignee: Pointivo Inc.
    Inventors: Iven Connary, Guy Ettinger, Habib Fathi, Jacob Garland, Daniel Ciprari
  • Patent number: 11935255
    Abstract: A display apparatus includes a generation unit that generates generation images when a certain object is viewed at a plurality of angles such that an angle of the object with respect to a virtual light source is changed on the basis of an image of the object, a selection unit that selects a first image in which the object is viewed at a first angle from among the generation images, as a selection image, a conversion unit that converts the first image into a conversion image in which the object is viewed at a second angle that is different from the first angle in a state in which a positional relationship between the virtual light source and the object in the first image is maintained, and a display unit that displays the conversion image.
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: March 19, 2024
    Assignee: FUJIFILM Business Innovation Corp.
    Inventors: Kazuhiko Horikawa, Masaru Okutsu
  • Patent number: 11935189
    Abstract: A method for generating a photogrammetric corridor map from a set of input images by recovering a respective pose of each image, wherein a pose includes position and orientation information of the underlying camera, including steps of: a) receiving a set of input images, b) defining a working set, c) initializing an image cluster, d) further growing the image cluster: d1) selecting one image from the working set that features overlap with at least one image already in the cluster, e) continuing with step b) if there remain images in the working set; if not, f) generating and providing as output the corridor map using the recovered camera poses.
    Type: Grant
    Filed: September 30, 2020
    Date of Patent: March 19, 2024
    Assignee: Siemens Energy Global GmbH & Co. KG
    Inventors: Philipp Glira, Jürgen Hatzl, Michael Hornacek, Stefan Wakolbinger, Josef Alois Birchbauer, Claudia Windisch
  • Patent number: 11935208
    Abstract: A virtual object system can orchestrate virtual objects defined as a collection of components and with inheritance in an object hierarchy. Virtual object components can include a container, data, a template, and a controller. A container can define the volume the virtual object is authorized to write into. A virtual object's data can specify features such as visual elements, parameters, links to external data, meta-data, etc. The template can define view states of the virtual object and contextual breakpoints for transitioning between them. Each view state can control when and how the virtual object presents data elements. The controller can define logic for the virtual object to respond to input, context, etc. The definition of each object can specify which other object in an object hierarchy that object extends, where extending an object includes inheriting that object's components, which can be modified or overwritten as part of the extension.
    Type: Grant
    Filed: January 25, 2023
    Date of Patent: March 19, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Yeliz Karadayi, Wai Leong Chak, Michal Hlavac, Pol Pla I Conesa
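
A conceptual Python sketch of the component model described in the abstract above (patent 11935208): a virtual object bundles a container, data, a template of view states with contextual breakpoints, and a controller, and an extending object inherits and overrides those components. All class and field names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Template:
    view_states: dict = field(default_factory=dict)        # state name -> elements to show
    breakpoints: dict = field(default_factory=dict)         # context key -> view state

class VirtualObject:
    container = {"width": 1.0, "height": 1.0, "depth": 1.0}  # volume it may write into
    data = {"label": "base object"}
    template = Template({"glance": ["icon"], "full": ["icon", "details"]},
                        {"far": "glance", "near": "full"})

    def controller(self, context):
        """Pick a view state from the template based on the current context."""
        state = self.template.breakpoints.get(context, "glance")
        return self.template.view_states[state]

class PersonObject(VirtualObject):                           # extends the base object
    data = {**VirtualObject.data, "label": "person", "status": "online"}

    def controller(self, context):                           # override: add behaviour
        elements = super().controller(context)
        return elements + ["status_dot"]

print(PersonObject().controller("near"))   # ['icon', 'details', 'status_dot']
```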
  • Patent number: 11937450
    Abstract: A display apparatus includes a display module including a display surface. The display module includes a display panel including a plurality of display devices which displays an image on the display surface, a plurality of light concentration lenses arranged on the display panel, a buffer layer disposed on the light concentration lenses, and a plurality of diffraction patterns arranged at regular intervals on the buffer layer, where the diffraction patterns diffract a portion of lights incident thereto.
    Type: Grant
    Filed: July 17, 2019
    Date of Patent: March 19, 2024
    Assignee: SAMSUNG DISPLAY CO., LTD.
    Inventors: Koichi Sugitani, Jin-su Byun, Gwangmin Cha, Saehee Han, Hoon Kang, Jin-lak Kim
  • Patent number: 11931907
    Abstract: In some aspects, a system comprises a computer hardware processor and a non-transitory computer-readable storage medium storing processor-executable instructions for receiving, from one or more sensors, sensor data relating to a robot; generating, using a statistical model, based on the sensor data, first control information for the robot to accomplish a task; transmitting, to the robot, the first control information for execution of the task; and receiving, from the robot, a result of execution of the task.
    Type: Grant
    Filed: February 25, 2022
    Date of Patent: March 19, 2024
    Assignee: Massachusetts Institute of Technology
    Inventors: Daniela Rus, Jeffrey Lipton, Aidan Fay, Changhyun Choi
  • Patent number: 11935203
    Abstract: In order to guide the user to a target object that is located outside of the field of view of the wearer of the AR computing device, a rotational navigation system displays on a display device an arrow or a pointer, referred to as a direction indicator. The direction indicator is generated based on the angle between the direction of the user's head and the direction of the target object, and a correction coefficient. The correction coefficient is defined such that the greater the angle between the direction of the user's head and the direction of the target object, the greater is the horizontal component of the direction indicator.
    Type: Grant
    Filed: June 30, 2022
    Date of Patent: March 19, 2024
    Assignee: Snap Inc.
    Inventor: Pawel Wawruch
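
A hedged 2D sketch of the relationship stated in the abstract above (patent 11935203): the indicator's horizontal component grows with the angle between the user's head direction and the target direction, scaled by a correction coefficient. The linear scaling and clamping are assumptions for illustration.

```python
def indicator_horizontal(head_yaw_deg, target_bearing_deg, correction=1.5):
    """Signed horizontal component in [-1, 1] for the on-screen direction indicator."""
    # signed angle from head direction to target, wrapped to (-180, 180]
    angle = (target_bearing_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
    horizontal = correction * angle / 180.0   # larger angle -> larger horizontal component
    return max(-1.0, min(1.0, horizontal))

print(indicator_horizontal(0.0, 30.0))     # target slightly to the right -> 0.25
print(indicator_horizontal(0.0, -120.0))   # target far to the left -> -1.0 (clamped)
```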
  • Patent number: 11928772
    Abstract: In a ray tracer, to prevent any long-running query from hanging the graphics processing unit, a traversal coprocessor provides a preemption mechanism that will allow rays to stop processing or time out early. The example non-limiting implementations described herein provide such a preemption mechanism, including a forward progress guarantee, and additional programmable timeout options that can be time or cycle based. Those programmable options provide a means for quality of service timing guarantees for applications such as virtual reality (VR) that have strict timing requirements.
    Type: Grant
    Filed: August 17, 2022
    Date of Patent: March 12, 2024
    Assignee: NVIDIA Corporation
    Inventors: Greg Muthler, Ronald Charles Babich, Jr., William Parsons Newhall, Jr., Peter Nelson, James Robertson, John Burgess
  • Patent number: 11928831
    Abstract: An information processing apparatus generates shape data on an object included in a first partial space based on one or more captured images obtained from one or more of a plurality of imaging apparatuses and a first parameter corresponding to the first partial space, the first partial space being included in a plurality of partial spaces in an imaging space which is an image capturing target for the plurality of imaging apparatuses, and generates shape data on an object included in a second partial space based on one or more captured images obtained from one or more of the plurality of imaging apparatuses and a second parameter corresponding to the second partial space, the second partial space being included in the plurality of partial spaces, the second parameter being different from the first parameter.
    Type: Grant
    Filed: September 17, 2020
    Date of Patent: March 12, 2024
    Assignee: Canon Kabushiki Kaisha
    Inventor: Yasufumi Takama
  • Patent number: 11928995
    Abstract: A display test apparatus includes a measuring apparatus and a calculating device connected to the measuring apparatus. The measuring apparatus provides a sample pixel value to a first display device including a first display panel, measures a first color coordinate value of an image displayed by the first display panel in response to the sample pixel value. The calculating device generates a first parameter of the first display panel in response to the first color coordinate value, generates a target color coordinate value in response to the sample pixel value, and generates a first mapping table in response to the first color coordinate value, the first parameter and the target color coordinate value.
    Type: Grant
    Filed: January 21, 2022
    Date of Patent: March 12, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Joo Hyuk Yum, Deok Soo Park, Byoung-Ju Song