Patent Applications Published on September 12, 2024
  • Publication number: 20240303908
    Abstract: A method including generating a first vector based on a first grid and a three-dimensional (3D) position associated with a first implicit representation (IR) of a 3D object, generating at least one second vector based on at least one second grid and an upsampled first grid, decoding the first vector to generate a second IR of the 3D object, decoding the at least one second vector to generate at least one third IR of the 3D object, generating a composite IR of the 3D object based on the second IR of the 3D object and the at least one third IR of the 3D object, and generating a reconstructed volume representing the 3D object based on the composite IR of the 3D object.
    Type: Application
    Filed: April 30, 2021
    Publication date: September 12, 2024
    Inventors: Yinda Zhang, Danhang Tang, Ruofei Du, Zhang Chen, Kyle Genova, Sofien Bouaziz, Thomas Allen Funkhouser, Sean Ryan Francesco Fanello, Christian Haene
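A minimal Python sketch of the composite-implicit-representation idea in 20240303908, under loose assumptions: a query vector is sampled from a coarse feature grid at a 3D position by trilinear interpolation, further vectors come from finer grids, each vector is decoded by its own (here, toy linear) decoder, and the decoded IRs are summed into a composite. Grid sizes, decoders, and the summation rule are illustrative, not the patent's actual pipeline.

```python
import numpy as np

def trilinear_sample(grid, p):
    """Sample a (D, H, W, C) feature grid at a normalized point p in [0, 1]^3."""
    dims = np.array(grid.shape[:3])
    x = p * (dims - 1)                          # continuous grid coordinates
    lo = np.floor(x).astype(int)
    hi = np.minimum(lo + 1, dims - 1)
    f = x - lo                                  # fractional offsets per axis
    out = np.zeros(grid.shape[3])
    for corner in np.ndindex(2, 2, 2):          # blend the 8 surrounding cells
        idx = tuple(np.where(corner, hi, lo))
        out += np.prod(np.where(corner, f, 1 - f)) * grid[idx]
    return out

rng = np.random.default_rng(0)
coarse = rng.normal(size=(8, 8, 8, 16))         # first grid
fine = rng.normal(size=(16, 16, 16, 16))        # stands in for an upsampled grid
W1, W2 = rng.normal(size=(16,)), rng.normal(size=(16,))  # toy linear "decoders"

p = np.array([0.3, 0.7, 0.5])                   # 3D query position
ir_second = trilinear_sample(coarse, p) @ W1    # second IR of the object
ir_third = trilinear_sample(fine, p) @ W2       # third IR of the object
print("composite IR at p:", ir_second + ir_third)
```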
  • Publication number: 20240303909
    Abstract: An image processing method and an image processing apparatus are provided. The image processing method includes obtaining a panoramic image corresponding to a room; performing door detection on the panoramic image to determine first information related to at least one door in the room; and displaying, based on the first information, a panoramic identifier of the at least one door in the panoramic image, the panoramic identifier indicating at least a door outline, a door type and an opening type of the door.
    Type: Application
    Filed: March 1, 2024
    Publication date: September 12, 2024
    Applicant: Ricoh Company, Ltd.
    Inventors: Hong YI, Haijing JIA, Hengzhi ZHANG, Liyan LIU, Weitao GONG
  • Publication number: 20240303910
    Abstract: A method for generating tiled multiplane images from a source image is disclosed. The method includes obtaining color and depth images. The method also includes extracting a feature map using a first neural network. The method also includes generating masks for a tile using a second neural network based on a corresponding tile of the feature map and corresponding sections of the color and depth images. The method also includes computing depths of planes corresponding to the tile based on the masks. The method also includes generating a per-tile multiplane image for the tile based on the masks. The method also includes rendering an image using per-tile multiplane images and depths. A system for generating tiled multiplane images from a source image is also disclosed.
    Type: Application
    Filed: February 8, 2024
    Publication date: September 12, 2024
    Inventors: Lei XIAO, Douglas Robert LANMAN, Numair Khalil Ullah KHAN
  • Publication number: 20240303911
    Abstract: An image processing apparatus: obtains information on a brightness value of a captured image captured by an image capture apparatus for generating a foreground image or of a background image generated from the captured image; corrects color information of three-dimensional shape data of a background based on the brightness value; and generates a virtual viewpoint image by using the three-dimensional shape data colored with the corrected color information.
    Type: Application
    Filed: February 16, 2024
    Publication date: September 12, 2024
    Inventor: Hiroyasu ITO
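A minimal sketch of the brightness-driven correction in 20240303911, assuming a simple linear gain: the colors attached to the background's three-dimensional shape data are scaled by the ratio of the measured brightness value to a reference brightness. The reference value and the linear model are illustrative assumptions, not the patent's actual correction.

```python
import numpy as np

def correct_background_colors(colors, brightness_value, reference=128.0):
    """colors: (N, 3) RGB in [0, 255] attached to background 3D shape data."""
    gain = brightness_value / reference         # darker capture -> darker colors
    return np.clip(colors * gain, 0.0, 255.0)

colors = np.array([[200.0, 180.0, 160.0], [40.0, 50.0, 60.0]])
print(correct_background_colors(colors, brightness_value=96.0))
```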
  • Publication number: 20240303912
    Abstract: Implementations of post-tessellation blender hardware perform both domain shading and blending. Whilst some vertices may not require blending, all vertices require domain shading. The blender hardware includes a cache and/or a content addressable memory, and these data structures are used to reduce duplicate domain shading operations.
    Type: Application
    Filed: May 20, 2024
    Publication date: September 12, 2024
    Inventors: Peter Malcolm Lacey, Simon Fenney, Tobias Hector, Ian King
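A minimal sketch of the deduplication idea in 20240303912: tessellation can emit the same domain-space vertex many times, so a small cache keyed by the vertex's domain coordinates returns a previously shaded result instead of re-running the domain shader. The dict cache and toy shader below stand in for the hardware cache / content addressable memory.

```python
def make_cached_domain_shader(domain_shader):
    cache = {}
    def shade(patch_id, uv):
        key = (patch_id, uv)                    # content address of the vertex
        if key not in cache:
            cache[key] = domain_shader(patch_id, uv)   # shade only on a miss
        return cache[key]
    return shade

def toy_domain_shader(patch_id, uv):
    u, v = uv
    return (patch_id + u, patch_id + v, u * v)  # stand-in vertex position

shade = make_cached_domain_shader(toy_domain_shader)
# Vertices shared between adjacent triangles hit the cache on re-emission:
for uv in [(0.0, 0.0), (0.5, 0.0), (0.0, 0.0), (0.5, 0.0)]:
    shade(patch_id=7, uv=uv)
```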
  • Publication number: 20240303913
    Abstract: Systems and techniques are provided for physical-based light estimation for inverse rendering of indoor scenes. For example, a computing device can obtain an estimated scene geometry based on a multi-view observation of a scene. The computing device can further obtain a light emission mask based on the multi-view observation of the scene. The computing device can also obtain an emitted radiance field based on the multi-view observation of the scene. The computing device can then determine, based on the light emission mask and the emitted radiance field, a geometry of at least one light source of the estimated scene geometry.
    Type: Application
    Filed: March 8, 2023
    Publication date: September 12, 2024
    Inventors: Yinhao ZHU, Rui ZHU, Hong CAI, Fatih Murat PORIKLI
  • Publication number: 20240303914
    Abstract: In a method for generating a multi-dimensional scene graph with a complex light field, entity features of entities are obtained by inputting respective 2-Dimensional (2D) images captured in multiple view directions into an object detection model, and features of a respective single-view-direction scene graph are obtained by predicting a semantic relation among the entities contained in a corresponding 2D image captured in each view direction. An entity correlation for an entity among the multiple view directions is determined as an entity re-identification result. A multi-dimensional bounding box for the entity is established based on the entity re-identification result and a geometric constraint of camera parameters. A feature fusion result is obtained by fusing features of respective single-view-direction scene graphs in the multiple view directions. The multi-dimensional scene graph with the complex light field is established.
    Type: Application
    Filed: September 7, 2023
    Publication date: September 12, 2024
    Inventors: Lu FANG, Zequn CHEN, Haozhe LIN, Jinzhi ZHANG
  • Publication number: 20240303915
    Abstract: An apparatus to facilitate inferred object shading is disclosed. The apparatus comprises one or more processors to receive rasterized pixel data and hierarchical data associated with one or more objects and perform an inferred shading operation on the rasterized pixel data, including using one or more trained neural networks to perform texture and lighting on the rasterized pixel data to generate a pixel output, wherein the one or more trained neural networks uses the hierarchical data to learn a three-dimensional (3D) geometry, latent space and representation of the one or more objects.
    Type: Application
    Filed: March 20, 2024
    Publication date: September 12, 2024
    Applicant: Intel Corporation
    Inventors: Selvakumar Panneer, Mrutunjayya Mrutunjayya, Carl S. Marshall, Ravishankar Iyer, Zack Waters
  • Publication number: 20240303916
    Abstract: The present disclosure discloses an indoor structure segmentation method based on a laser measurement point cloud. The method includes the following steps: inputting an indoor three-dimensional point cloud, and performing supervoxel segmentation of the point cloud using a toward better boundary preserved supervoxel (TBBPS) segmentation method based on a given initial resolution r to extract a plane supervoxel unit; extracting surface point units in surface supervoxels by means of curvatures and normal vector changes; fitting an indoor plane model based on plane supervoxels; fitting an indoor surface model using a cylindrical simplex based on surface points; and based on an α-expansion optimization algorithm, allocating the extracted supervoxel units and surface point units to an optimal model, so as to realize unit classification and segmentation.
    Type: Application
    Filed: March 7, 2023
    Publication date: September 12, 2024
    Inventors: Fei Su, Yaohui Liu, Yingkun Du, Jingxue Bi, Guoqiang Zheng, Mingyang Yu
  • Publication number: 20240303917
    Abstract: A method for generating a three-dimensional (3D) global pose includes: receiving an image and performing a detection operation to detect a human body in the image; obtaining a two-dimensional (2D) heatmap that is related to a skeleton structure of the human body and that includes a plurality of human keypoints, and obtaining a plurality of 2D coordinate sets each indicating a position of a corresponding one of the human keypoints; performing a 3D human pose estimation operation on the plurality of 2D coordinate sets to obtain a 3D human pose that is related to the skeleton structure in a local coordinate system, and that includes a plurality of 3D keypoints corresponding to the plurality of human keypoints, respectively; and based on the 3D human pose, using a numerical optimization solver to generate a 3D global pose in a world coordinate system.
    Type: Application
    Filed: March 7, 2023
    Publication date: September 12, 2024
    Inventors: Dobromir TODOROV, Ting-Chieh LIN, Tsung-Yuan HSU, Chien-Hung SHIH
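A minimal sketch of the final step of 20240303917: given a 3D human pose in a local coordinate system and the corresponding 2D keypoints, a numerical optimization solver recovers a global placement of the skeleton. The pinhole intrinsics, the translation-only global model, and the synthetic data are assumptions standing in for the patent's solver setup.

```python
import numpy as np
from scipy.optimize import least_squares

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])                 # assumed pinhole intrinsics

def project(points_3d):
    uvw = points_3d @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

local_pose = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.1],
                       [-0.2, 0.0, 0.1], [0.0, -0.5, 0.0]])  # toy 3D keypoints
true_t = np.array([0.3, -0.1, 3.0])
keypoints_2d = project(local_pose + true_t)     # "observed" 2D keypoints

def residuals(t):
    return (project(local_pose + t) - keypoints_2d).ravel()

fit = least_squares(residuals, x0=np.array([0.0, 0.0, 2.0]))
print("recovered global translation:", fit.x)   # ~ [0.3, -0.1, 3.0]
```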
  • Publication number: 20240303918
    Abstract: A method can include receiving, via a camera, a first video stream of a face of a user; determining a location of the face of the user based on the first video stream and a facial landmark detection model; receiving, via the camera, a second video stream of the face of the user; generating a depth map based on the second video stream, the location of the face of the user, and a depth prediction model; and generating a representation of the user based on the depth map and the second video stream.
    Type: Application
    Filed: October 11, 2023
    Publication date: September 12, 2024
    Inventors: Ruofei Du, Xun Qian, Yinda Zhang, Alex Olwal
  • Publication number: 20240303919
    Abstract: The invention relates to a method (100) for generating a representation (70) of the surroundings, comprising the following steps: providing (101) at least one image (30) that results from a recording by an image detection device (5) and that represents objects (6) and/or surfaces (6) in the surroundings (7) of the image detection device (5), wherein the provided image (30) is subdivided into multiple image columns (31), generating (102) the representation (70) of the surroundings, wherein for this purpose multiple three-dimensional stixels (80) for each image column (31) of the provided image (30) are parameterized for representing the objects (6) and/or surfaces (6) in three-dimensional space, wherein the generation (102) of the representation (70) of the surroundings takes place using a model (50) which uses the provided image (30) as input.
    Type: Application
    Filed: March 7, 2024
    Publication date: September 12, 2024
    Inventor: Denis Tananaev
  • Publication number: 20240303920
    Abstract: A method includes segmenting a virtual scene into virtual plots and allocating each virtual plot respectively to one of a plurality of scene devices in response to start-up messages transmitted by the scene devices. The method further includes transmitting a first loading message to two or more scene devices of the plurality of scene devices. The first loading message instructs the two or more scene devices to load in parallel, into respective memories of the two or more scene devices, plot data files of virtual plots respectively allocated to each of the two or more scene devices. The method further includes transmitting loading configuration information to a service device configured to respond to operation requests from a terminal device displaying the virtual scene, the loading configuration information indicating the scene device to which each virtual plot is allocated.
    Type: Application
    Filed: May 16, 2024
    Publication date: September 12, 2024
    Applicant: Tencent Technology (Shenzhen) Company Limited
    Inventor: Yachang WANG
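A minimal sketch of the allocation and loading steps in 20240303920, assuming a round-robin policy and simple message dicts (neither is specified by the abstract): plots are allocated to devices as they start up, one loading message per device tells it which plot data files to load in parallel, and the allocation table doubles as the loading configuration sent to the service device.

```python
from collections import defaultdict

def allocate_plots(plot_ids, device_ids):
    """Round-robin each virtual plot to one scene device."""
    return {plot: device_ids[i % len(device_ids)]
            for i, plot in enumerate(plot_ids)}

def build_loading_messages(allocation):
    per_device = defaultdict(list)
    for plot, device in allocation.items():
        per_device[device].append(f"plots/{plot}.dat")   # one data file per plot
    # One message per device; each device loads its file list in parallel.
    return [{"device": d, "load": files} for d, files in per_device.items()]

allocation = allocate_plots(plot_ids=range(6), device_ids=["dev-a", "dev-b"])
print(build_loading_messages(allocation))
print("loading configuration:", allocation)     # sent to the service device
```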
  • Publication number: 20240303921
    Abstract: An image acquisition method includes importing a three-dimensional model of an object into a three-dimensional virtual photographing scene having a virtual world coordinate system, determining model position information and model attitude information of the three-dimensional model in the virtual world coordinate system, determining, according to a layout pattern of a plurality of virtual cameras in the three-dimensional virtual photographing scene, camera position information and camera attitude information of each of the plurality of virtual cameras in the virtual world coordinate system, and acquiring, for each of the plurality of virtual cameras, an image of the object from a viewing angle of the virtual camera according to the model position information, the model attitude information, the camera position information, and the camera attitude information.
    Type: Application
    Filed: May 17, 2024
    Publication date: September 12, 2024
    Inventors: Yiting XU, Yi ZHOU, Xiaoming YU, Yang YI, Chengwei PENG, Feng LI, Xiaoxiang ZUO
  • Publication number: 20240303922
    Abstract: A computer-implemented method to generate a three-dimensional model, wherein the computer comprises one or more processors and memory accessible by the one or more processors, and the memory stores instructions that when executed by the one or more processors cause the computer to perform the computer-implemented method, includes: receiving first image data of a first portion of the patient's body in a first image modality, receiving second image data of a second portion of the patient's body in a second image modality, modifying the second image data from the second image modality to the first image modality, and generating, based on the first image data in the first image modality and the modified second image data in the second image modality, a three-dimensional model of the first portion and the second portion of the patient's body.
    Type: Application
    Filed: May 20, 2024
    Publication date: September 12, 2024
    Applicant: Novocure GmbH
    Inventors: Reuven Ruby SHAMIR, Noa URMAN, Yana GLOZMAN
  • Publication number: 20240303923
    Abstract: Systems, methods, and other embodiments described herein relate to using octrees and trilinear interpolation to generate field-specific representations. In one embodiment, a method includes acquiring a latent vector describing an object. The method includes generating an octree from the latent vector according to a recursive network, the octree representing the object at a desired level-of-detail (LoD). The method includes extracting features from the octree at separate resolutions. The method includes providing a field as a representation of the object according to the features.
    Type: Application
    Filed: August 31, 2023
    Publication date: September 12, 2024
    Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
    Inventors: Sergey Zakharov, Katherine Y Liu, Adrien David Gaidon, Rares A Ambrus
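A minimal sketch of the multi-resolution feature extraction in 20240303923, under stated assumptions: the octree is a dict keyed by (level, cell index) with random features in place of network-predicted ones, and the per-level features for a query point are combined by a toy mean read-out rather than the patent's learned field decoder.

```python
import numpy as np

rng = np.random.default_rng(1)
LEVELS = 3                                      # desired level-of-detail
octree = {}                                     # (level, ix, iy, iz) -> feature
for level in range(LEVELS):
    n = 2 ** level                              # n^3 cells at this level
    for idx in np.ndindex(n, n, n):
        octree[(level, *idx)] = rng.normal(size=8)

def features_at(p):
    """Extract the enclosing-cell feature at each resolution for p in [0, 1)^3."""
    feats = []
    for level in range(LEVELS):
        n = 2 ** level
        cell = tuple(min(int(c * n), n - 1) for c in p)
        feats.append(octree[(level, *cell)])
    return np.stack(feats)                      # (LEVELS, feature_dim)

field_value = features_at((0.3, 0.7, 0.5)).mean()   # toy "field" read-out
print(field_value)
```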
  • Publication number: 20240303924
    Abstract: The described positional awareness techniques employ visual-inertial sensory data gathering and analysis hardware, with reference to specific example implementations, to implement improvements in the use of sensors, techniques and hardware design that can enable specific embodiments to provide positional awareness to machines with improved speed and accuracy.
    Type: Application
    Filed: February 8, 2024
    Publication date: September 12, 2024
    Applicant: TRIFO, INC.
    Inventors: Zhe ZHANG, Grace TSAI, Shaoshan LIU
  • Publication number: 20240303925
    Abstract: Neural network systems and related machine learning methods for geological modeling are provided that employ an improved generative adversarial network including a generator neural network and a discriminator neural network. The generator neural network is trained to map a combination of a noise vector and a category code vector as input to a simulated image of geological facies. The discriminator neural network is trained to map at least one image of geological facies provided as input to corresponding probability that the at least one image of geological facies provided as input is a training image of geological facies or a simulated image of geological facies produced by the generator neural network.
    Type: Application
    Filed: May 14, 2024
    Publication date: September 12, 2024
    Inventors: Lingchen Zhu, Tuanfeng Zhang
  • Publication number: 20240303926
    Abstract: A system for augmenting images using hand surface normal estimation is provided. In a model training phase, 3D models of hands are generated using 3D data of hands in a variety of positions. Target normal training data is generated that includes normals of surfaces of the 3D models and synthetic 2D image training data corresponding to the 3D models and the normals. The target normal training data and the synthetic image training data are used to train a normal estimation model. The normal estimation model is used by an interactive application to generate augmentations that are applied to hand image data.
    Type: Application
    Filed: March 7, 2023
    Publication date: September 12, 2024
    Inventors: Riza Alp Guler, Dominik Kulon, Himmy Tam, Haoyang Wang
  • Publication number: 20240303927
    Abstract: Systems and methods of image processing include a processor in communication with a display and a computer-readable recording medium having instructions executed by the processor to read a three-dimensional (3D) image data set from the computer-readable recording medium and automatically generate a tree structure of blood vessels based on patient images of the image data set using a neural network. Manually- and/or semi-automatically-generated 3D models of blood vessels are used to train the neural network. The systems and methods involve segmenting and classifying blood vessels in the 3D image data set using the trained neural network, closing holes, finding roots and endpoints in the segmentation, finding shortest paths between the roots and endpoints, selecting most probable paths, combining most probable paths into directed graphs, solving overlaps between directed graphs, and creating 3D models of blood vessels based on the directed graphs.
    Type: Application
    Filed: February 25, 2022
    Publication date: September 12, 2024
    Applicant: Covidien LP
    Inventors: Ariel Birenbaum, Ofer Barasofsky, Guy Alexandroni, Irina Shevlev
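A minimal sketch of the path-finding step in 20240303927: once segmentation yields a graph over vessel points, shortest paths are found from a root to each endpoint, and the chosen paths can then be combined into a directed vessel graph. The toy adjacency dict and uniform edge weights are illustrative assumptions; this is plain Dijkstra, not the patent's full pipeline.

```python
import heapq

def shortest_path(graph, root, endpoint):
    """Dijkstra over an adjacency dict {node: [(neighbor, weight), ...]}."""
    dist, prev, heap = {root: 0.0}, {}, [(0.0, root)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == endpoint:
            break
        if d > dist.get(node, float("inf")):    # skip stale heap entries
            continue
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    path = [endpoint]                           # walk back from the endpoint
    while path[-1] != root:
        path.append(prev[path[-1]])
    return path[::-1]

vessel_graph = {"root": [("a", 1)], "a": [("b", 1), ("c", 2)],
                "b": [("tip1", 1)], "c": [("tip2", 1)]}
for tip in ("tip1", "tip2"):
    print(shortest_path(vessel_graph, "root", tip))
```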
  • Publication number: 20240303928
    Abstract: According to examples, a processor may identify locations on a digital 3D model of an object corresponding to positions at which features are to be added to the object. For each of a plurality of locations of the identified locations, the processor may identify displacements of each of a plurality of patch points around the location from the surface of the digital 3D model. The processor may also create a map of the digital 3D model that encodes the identified displacements of the patch points around the locations, in which a 3D fabrication system is to fabricate the object to include the plurality of features based on the digital 3D model and the created map.
    Type: Application
    Filed: March 31, 2021
    Publication date: September 12, 2024
    Inventors: Juan Carlos CATANA SALAZAR, Jun ZENG, Sergio GONZALEZ MARTIN
  • Publication number: 20240303929
    Abstract: A method for constructing a 3D model of a target object is provided, performed by a computer device, the method including: obtaining at least two initial images of a target object from a plurality of shooting angles, the at least two initial images respectively including depth information of the target object, and the depth information indicating distances between a plurality of points of the target object and a reference position; removing, from the at least two initial images, images having a similarity greater than a preset value; obtaining first point cloud information corresponding to the at least two initial images respectively according to the depth information in the at least two initial images; fusing the first point cloud information respectively corresponding to the at least two initial images into second point cloud information; and constructing a 3D model of the target object according to the second point cloud information.
    Type: Application
    Filed: May 15, 2024
    Publication date: September 12, 2024
    Inventor: Xiangkai LIN
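A minimal sketch of the reconstruction path in 20240303929: each depth image is back-projected to a point cloud with the pinhole model (the "first point cloud information"), and the per-view clouds are fused into one (the "second point cloud information"), here by voxel-grid deduplication. Intrinsics, a shared camera frame, and the voxel size are illustrative assumptions.

```python
import numpy as np

fx = fy = 300.0; cx = cy = 64.0                 # assumed intrinsics (128x128 depth)

def backproject(depth):
    """depth: (H, W) metres -> (N, 3) camera-frame points."""
    v, u = np.nonzero(depth > 0)
    z = depth[v, u]
    return np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=1)

def fuse(clouds, voxel=0.01):
    """Per-view clouds -> one fused cloud, keeping one point per voxel."""
    pts = np.concatenate(clouds, axis=0)
    keys = np.round(pts / voxel).astype(int)
    _, keep = np.unique(keys, axis=0, return_index=True)
    return pts[keep]

depth_a = np.full((128, 128), 1.0); depth_b = np.full((128, 128), 1.001)
fused = fuse([backproject(depth_a), backproject(depth_b)])
print(fused.shape)                              # near-duplicate points collapsed
```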
  • Publication number: 20240303930
    Abstract: Methods, systems, and non-transitory computer readable media are disclosed for transferring modifications or deformations from a three-dimensional model of one type to a three-dimensional model of another type. In some embodiments, the disclosed systems receive an indication of a user interaction defining a modification to a three-dimensional model. In some cases, the disclosed systems modify an implicit function corresponding to the three-dimensional model in response to the user interaction. In certain embodiments, the disclosed systems generate a modified three-dimensional model by transferring the modification from the implicit function to the three-dimensional model. Further, in some embodiments, the disclosed systems provide the modified three-dimensional model for display on a client device.
    Type: Application
    Filed: March 9, 2023
    Publication date: September 12, 2024
    Inventors: Uday Kusupati, Jean Thiery, Adrien Kaiser
  • Publication number: 20240303931
    Abstract: According to one embodiment, a method, computer system, and computer program product for mixed reality is provided. The present invention may include receiving 3D hand keypoints (keypoints) of a user's visible hand joints from the user's capturable hand, and visible hand joints, if any, from the user's uncapturable hand; using random noise sampled with a unit normal distribution as initial keypoints for the uncapturable hand joints from the user's uncapturable hand; inputting the received and the initial keypoints, in a preset order, into a trained 3D hand joint generative model; performing an iterative refinement of the uncapturable hand joints from the user's uncapturable hand using the trained 3D hand joint generative model; identifying whether generated keypoints of the user's uncapturable hand are synchronized with the keypoints of the user's capturable hand; and rendering the generated 3D keypoints for the user's uncapturable hand joints using a 3D virtual hand modeler.
    Type: Application
    Filed: March 8, 2023
    Publication date: September 12, 2024
    Inventors: Wei Jun Zheng, Xiao Xia Mao, QING LU, Yuan Jin, Xiao Feng Ji
  • Publication number: 20240303932
    Abstract: An artificial reality environment (XRE) schema is defined that supports controlling interactions between various artificial reality actors. The XRE schema includes a set of definitions for an XRE, independent of type of artificial reality device. The definitions in the XRE schema can include standards for both interfaces and data objects. The XRE schema can define XR elements in terms of entities and components of a space, organized according to a hierarchy. Each entity can represent a real or virtual object or space, within the XRE, defined by a name and a collection of one or more components. Each component (as part of an entity) can define aspects and expose information about the entity. The XRE schema can specify structures that allow actors (e.g., producers, instantiators, and consumers) to define and perform actions in relation to XRE elements.
    Type: Application
    Filed: May 1, 2024
    Publication date: September 12, 2024
    Inventors: Gioacchino NORIS, Michal HLAVAC, Paul Timothy FURGALE, Johannes Joachim SCHMID, Anush MOHAN, Christopher Richard TANNER
  • Publication number: 20240303933
    Abstract: A computer-implemented method for providing a user with a digital companion in a three-dimensional (3D) environment is provided. The computer-implemented method includes recognizing content of the 3D environment, obtaining a point-of-view (POV) of the user in the 3D environment, displaying the digital companion to the user in the 3D environment based on the content and the POV, moving the digital companion through the 3D environment with the user while continuing the displaying, and executing communicative interaction between the digital companion and the user based on the content and one or more communications of the user.
    Type: Application
    Filed: March 9, 2023
    Publication date: September 12, 2024
    Inventors: Yuan Yuan Ding, Ke Yong Zhang, Ya Juan Dang, Tian Tian Chai, Yu Pan, Ze Wang
  • Publication number: 20240303934
    Abstract: Examples describe adaptive image processing for an augmented reality (AR) device. An input image is captured by a camera of the AR device, and a region of interest of the input image is determined. The region of interest is associated with an object that is being tracked using an object tracking system. A crop-and-scale order of an image processing operation directed at the region of interest is determined for the input image. One or more object tracking parameters may be used to determine the crop-and-scale order. The crop-and-scale order is dynamically adjustable between a first order and a second order. An output image is generated from the input image by performing the image processing operation according to the determined crop-and-scale order for the particular input image. The output image can be accessed by the object tracking system to track the object.
    Type: Application
    Filed: March 8, 2023
    Publication date: September 12, 2024
    Inventors: Thomas Muttenthaler, Kai Zhou
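A minimal sketch of the adjustable crop-and-scale order in 20240303934: the same region-of-interest operation can crop first and then scale (cheap, resamples only the ROI) or scale first and then crop (costlier, resamples the whole frame before cutting). The ROI, target size, and the toy rule that picks the order from a tracking parameter are illustrative assumptions.

```python
import numpy as np
import cv2

def process(image, roi, out_size, order):
    x, y, w, h = roi
    if order == "crop_then_scale":
        return cv2.resize(image[y:y + h, x:x + w], out_size)
    # scale_then_crop: resize the whole frame, then cut the (scaled) ROI out.
    sx, sy = out_size[0] / w, out_size[1] / h
    scaled = cv2.resize(image, None, fx=sx, fy=sy)
    return scaled[int(y * sy):int(y * sy) + out_size[1],
                  int(x * sx):int(x * sx) + out_size[0]]

frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
roi = (100, 80, 200, 150)
fast_moving_object = True                       # toy object tracking parameter
order = "crop_then_scale" if fast_moving_object else "scale_then_crop"
out = process(frame, roi, out_size=(128, 96), order=order)
print(out.shape)                                # (96, 128, 3)
```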
  • Publication number: 20240303935
    Abstract: Disclosed is a method for checking a geometric form of a physical dental element using motion tracking and augmented reality. The method comprises a receiving of a three-dimensional digital model comprising a three-dimensional digital dental element, as well as repeatedly: a receiving of optical imaging data from an optical sensor device, a detecting of structural elements within the optical imaging data, a determining of a target position for the three-dimensional digital dental element using the reference points defined by the structural elements, and a controlling of an electronic display device for displaying an augmented reality view on the physical dental element augmented with the three-dimensional digital dental element.
    Type: Application
    Filed: March 8, 2023
    Publication date: September 12, 2024
    Inventors: Maik GERTH, Paul SCHNITZSPAN
  • Publication number: 20240303936
    Abstract: An operation assistance system can collect on-site data and perform fault diagnosis analysis to provide operational guidance for helping users to operate a subject device. In the operation assistance system, a user device is configured to observe the subject device, capture live video, and simultaneously display visual aids. The monitor device is coupled to the subject device to monitor various sensor states of the subject device to determine a fault status. The server is coupled to the user device and the monitor device, providing visual aids to the user device based on the fault status. The user device is also configured to allow the wearer to perceive the visual aids as displayed in a specific relative position in the space where the subject device is located.
    Type: Application
    Filed: April 21, 2023
    Publication date: September 12, 2024
    Inventors: Wen-Yuh Jywe, Tung-Hsing Hsieh, Shang-Kai Liao, Yung-Chuan Huang, Ruo-Heng Wang
  • Publication number: 20240303937
    Abstract: Methods and systems are disclosed for generating a 3D body mesh. The system receives an image that includes a depiction of a real-world object in a real-world environment. The system applies a first machine learning model to a portion of the image that depicts the real-world object to predict a tensor of heatmaps representing vertex positions of a plurality of triangles of a 3D mesh corresponding to the real-world object. First and second heatmaps of the tensor represent respectively first and second groups of possible coordinates for a first vertex of a first triangle of the plurality of triangles. The system generates the 3D mesh based on a selected subset of the tensor of heatmaps.
    Type: Application
    Filed: June 23, 2023
    Publication date: September 12, 2024
    Inventors: Riza Alp Guler, Antonios Kakolyris, Iason Kokkinos, Petros Koutras, Eric-Tuan Le, Georgios Papandreou, Efstratios Skordos, Himmy Tam
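A minimal sketch of reading vertex positions out of the heatmap tensor in 20240303937, using a soft-argmax as the read-out (a common choice, assumed here rather than taken from the patent): each heatmap encodes a distribution over possible coordinates for one mesh vertex, and its expectation gives a single (x, y) estimate.

```python
import numpy as np

def soft_argmax(heatmap):
    """heatmap: (H, W) scores -> expected (x, y) coordinate."""
    p = np.exp(heatmap - heatmap.max())
    p /= p.sum()
    ys, xs = np.mgrid[0:heatmap.shape[0], 0:heatmap.shape[1]]
    return (p * xs).sum(), (p * ys).sum()

# Tensor of heatmaps: one (64, 64) map per vertex of the predicted mesh.
H = W = 64
ys, xs = np.mgrid[0:H, 0:W]
def gaussian(cx, cy, s=2.0):
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * s * s))

tensor = np.stack([gaussian(20, 12), gaussian(40, 50), gaussian(31, 33)])
vertices = np.array([soft_argmax(hm) for hm in tensor])   # (3, 2): one triangle
print(vertices.round(1))
```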
  • Publication number: 20240303938
    Abstract: An augmented reality content distribution method and system provide for distributing secondary content. The method and system, via an image capture application executing on a mobile computing device, detect a first image disposed on a physical item, the physical item relating to a primary event and/or primary content. Therein, the method and system access a secondary content database across a network communication path and retrieve the secondary content based on the first image as detected by the image capture application. The method and system therein display, on an output display of the mobile computing device, the secondary content in an augmented reality display in relation to the first image and the physical item, wherein the secondary content is associated with the primary event and/or primary content.
    Type: Application
    Filed: February 9, 2024
    Publication date: September 12, 2024
    Inventors: Spencer S Combs, Trey Cushman, Jeremy Cowart
  • Publication number: 20240303939
    Abstract: An exercise system is provided comprising an augmented reality (AR) device worn by or fitted to a user, the AR device being adapted to display virtual digital content which the user can view and interact with; and a haptic feedback arrangement adapted to be worn or held by at least one hand of the user. The haptic feedback arrangement includes a tracking unit to track the movement of at least one of the user's hands so as to determine an action by the user in response to the digital content viewed by the user via the AR device, the haptic feedback arrangement further including a haptic actuator to generate a haptic or tactile response that can be felt by the user's hand(s) in response to the user's action. The haptic feedback arrangement is integrated into or secured to a hand device, which comprises either a glove arranged to be fitted to the user's hands or a handheld body that can be held by the user.
    Type: Application
    Filed: May 24, 2022
    Publication date: September 12, 2024
    Inventors: Louwrens Jakobus BRIEL, Liesl Celeste BRIEL, Josef LEBERER
  • Publication number: 20240303940
    Abstract: A method for communication between a first mixed-reality terminal and a second mixed-reality terminal via a communication network, the method including the following: transmitting by the first terminal of a first user to the second terminal of a second user: a first virtual character generated on the basis of data relating to the first user and captured by the first terminal, a first virtual object generated on the basis of data relating to a first real object and captured by the first terminal, and a second virtual object consisting of a duplicate of the first virtual object generated at a given time, the second virtual object being modifiable by interaction of the second user; and transmitting by the second terminal to the first terminal a second virtual character generated by the second terminal of the second user on the basis of captured data.
    Type: Application
    Filed: May 30, 2022
    Publication date: September 12, 2024
    Inventor: Guillaume BATAILLE
  • Publication number: 20240303941
    Abstract: An information processing apparatus includes one or more processors and one or more memories. The one or more processors and the one or more memories are configured to acquire play area information about a first play area indicating a movable range of a first user in a real space, set a second play area indicating a movable range of a second user in the real space, and set the second play area according to whether the first play area overlaps the set second play area.
    Type: Application
    Filed: February 23, 2024
    Publication date: September 12, 2024
    Inventor: MASATOSHI ISHII
  • Publication number: 20240303942
    Abstract: The present disclosure describes a location-based application in which users can attach media content to geographic locations. Other users can later experience the media content when in proximity to the geographic location or via a map interface displaying indications of media content in the vicinity of the viewing user. When users experience media content attached to a geographic location they may initiate communication with the creator of the media content. Additionally, the creator or viewer of a video or photograph of a geographic location may select an object depicted in the video or photograph to create a 2D or 3D virtual object representing the depicted object. This virtual object may be held by the user (e.g., in a virtual bag) or dropped at the same or a different geographic location where it may be viewed and interacted with by other users in an augmented reality environment.
    Type: Application
    Filed: March 6, 2024
    Publication date: September 12, 2024
    Inventors: Ryuta Hiroi, Masashi Kawashima, Michael Joseph Garcia Bader, Alejandro Aguilar, Leif Wilden
  • Publication number: 20240303943
    Abstract: Systems and methods synchronize content of a virtual environment with a state of a physical environment. In aspects, a method includes obtaining sensor data from a network of remote sensors measuring a physical state of a location at a time; generating context specific parameter data based on the sensor data; obtaining context data from a remote virtual reality (VR) system, wherein the context data reflects a current state of virtual content in a virtual environment displayed by the remote VR system; selecting virtual content to be displayed in the virtual environment by the remote VR system based on the context specific parameter data, the context data, and rules; and sending the virtual content to the remote VR system to be displayed to a user, wherein the virtual content reflects a state of the physical location at the time.
    Type: Application
    Filed: May 14, 2024
    Publication date: September 12, 2024
    Inventors: Todd Russell Whitman, Zachary A. Silverstein, Jeremy R. Fox, Sarbajit K. Rakshit
  • Publication number: 20240303944
    Abstract: The subject technology receives a selection of a selectable graphical item to initiate generating augmented reality content including facial synthesis, the selection being received by a third party application, the third party application being executed by a computing device separate from a first party application and a messaging server system. The subject technology captures image data by the client device. The subject technology generates, by the one or more hardware processors and based at least in part on frames of a source media content, sets of source pose parameters. The subject technology generates, based at least in part on sets of the source pose parameters, an output media content using an interface communicating with the messaging server system. The subject technology provides augmented reality content based at least in part on the output media content for display on the computing device.
    Type: Application
    Filed: May 14, 2024
    Publication date: September 12, 2024
    Inventors: Grigoriy Tkachenko, Inna Zaitseva
  • Publication number: 20240303945
    Abstract: Methods and systems are disclosed for generating AR experiences. The methods and systems access a first component of a plurality of components implemented by the messaging application, the plurality of components comprising an AR experience, each of the plurality of components being configured to be separately launched by the messaging application. The methods and systems store a first state of the first component in a data structure that is shared across the plurality of components. The methods and systems launch, by the messaging application, a second component of the plurality of components in response to determining that an interaction has been performed using the first component, and configure a second state of the second component based on the interaction that has been performed using the first component.
    Type: Application
    Filed: May 15, 2024
    Publication date: September 12, 2024
    Inventors: Rastan Boroujerdi, Michael John Evans, Panayoti Haritatos
  • Publication number: 20240303946
    Abstract: Systems and methods are provided herein for providing privacy while allowing interactions with virtual representations of sensitive objects. This may be accomplished by an extended reality (XR) system receiving information associated with a real-world environment of a user. The XR system may transmit the received information to one or more devices (e.g., server, XR device, etc.) for processing. Before transmitting the received information, the XR system can use the received information to identify and categorize objects in the real-world environment to identify sensitive objects. To protect the privacy of the sensitive objects, the XR system transforms features of the sensitive objects using one or more transform keys prior to transmitting the information to the one or more devices. The XR system may also update the transform keys after a first time period to further protect the privacy of the sensitive objects.
    Type: Application
    Filed: March 10, 2023
    Publication date: September 12, 2024
    Inventor: Christian Gehrmann
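A minimal sketch of the privacy transform in 20240303946, under stated assumptions: features of objects categorized as sensitive are passed through a keyed one-way transform before leaving the XR device, and the transform key is rotated after a time period so old and new transformed features cannot be linked. HMAC-SHA256 as the transform and the rotation interval are assumptions; the patent does not specify the key mechanism.

```python
import hmac, hashlib, os, time

class SensitiveFeatureTransformer:
    def __init__(self, rotate_after_s=60.0):
        self.rotate_after_s = rotate_after_s
        self._new_key()

    def _new_key(self):
        self.key, self.key_born = os.urandom(32), time.monotonic()

    def transform(self, feature_bytes):
        if time.monotonic() - self.key_born > self.rotate_after_s:
            self._new_key()                     # periodic key rotation
        return hmac.new(self.key, feature_bytes, hashlib.sha256).hexdigest()

t = SensitiveFeatureTransformer()
# Only features of sensitive objects are transformed before transmission:
for obj in [{"label": "whiteboard", "sensitive": True, "feat": b"\x01\x02"},
            {"label": "chair", "sensitive": False, "feat": b"\x03\x04"}]:
    payload = t.transform(obj["feat"]) if obj["sensitive"] else obj["feat"]
    print(obj["label"], payload)
```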
  • Publication number: 20240303947
    Abstract: An information processing device includes a control unit that performs control to output information regarding an image of a user viewpoint in a virtual space to a user terminal, in which the control unit acquires a characteristic of at least one user related to an event performed in the virtual space and controls movement of a position of an indicator corresponding to a performer in the virtual space based on a position of an indicator corresponding to at least one user whose characteristic satisfies a condition.
    Type: Application
    Filed: February 28, 2022
    Publication date: September 12, 2024
    Applicants: SONY GROUP CORPORATION, Sony Interactive Entertainment Inc.
    Inventors: Takuma DOMAE, Junichi TANAKA, Kazuharu TANAKA, Masaaki MATSUBARA, Yasunori MATSUI
  • Publication number: 20240303948
    Abstract: A wearable terminal apparatus to be worn by a user for use includes a display unit and at least one processor. The at least one processor causes the display unit to display a virtual image located in a space. The at least one processor changes a display position of a partial image when a predetermined condition is satisfied, the partial image being an image of a partial region included in the virtual image.
    Type: Application
    Filed: June 30, 2021
    Publication date: September 12, 2024
    Applicant: KYOCERA Corporation
    Inventors: Kai SHIMIZU, Tomokazu ADACHI, Shingo ITO
  • Publication number: 20240303949
    Abstract: According to an embodiment, a wearable electronic device may include a display, memory, and a processor. The memory may store instructions that, when executed by the processor, cause the wearable electronic device to display a first execution screen of a first application in a 3D virtual space. A first guide representing a first virtual plane for guiding a location at which an object can be placed and a second guide representing a second virtual plane for guiding a location at which an object can be placed may be provided in the 3D virtual space, wherein the second virtual plane is facing in a direction different from the first virtual plane. The memory may store instructions that, when executed by the processor, cause the wearable electronic device to, based on a user input for moving a first object included in the first execution screen, display at least one guide of the first guide or the second guide to enable relocation of the first object to the first virtual plane or the second virtual plane.
    Type: Application
    Filed: February 21, 2024
    Publication date: September 12, 2024
    Inventors: Jinha CHOI, Sungman KIM, Boosun SHIN, Chaekyung LEE, Minkyung CHO, Sanggeon KIM, Minseoung WOO
  • Publication number: 20240303950
    Abstract: Editing histories of a plurality of pieces of material data to be used for generating a virtual viewpoint image are specified. If the specified editing histories satisfy a predetermined condition of each of the plurality of pieces of material data, authentication information is assigned to the virtual viewpoint image generated on the basis of the plurality of pieces of material data.
    Type: Application
    Filed: March 7, 2024
    Publication date: September 12, 2024
    Inventor: WAKI MIDORIKAWA
  • Publication number: 20240303951
    Abstract: A method for training a real-time model for animating an avatar for a subject is provided. The method includes collecting multiple images of a subject. The method also includes selecting a plurality of vertex positions in a guide mesh, indicative of a volumetric primitive enveloping the subject, determining a geometric attribute for the volumetric primitive including a position, a rotation, and a scale factor of the volumetric primitive, determining a payload attribute for each of the volumetric primitive, the payload attribute including a color value and an opacity value for each voxel in a voxel grid defining the volumetric primitive, determining a loss factor for each point in the volumetric primitive based on the geometric attribute, the payload attribute and a ground truth value, and updating a three-dimensional model for the subject. A system and a non-transitory, computer-readable medium storing instructions to perform the above method are also provided.
    Type: Application
    Filed: April 16, 2024
    Publication date: September 12, 2024
    Inventors: Stephen Anthony Lombardi, Tomas Simon Kreuz, Jason Saragih, Gabriel Bailowitz Schwartz, Michael Zollhoefer, Yaser Sheikh
  • Publication number: 20240303952
    Abstract: The present invention provides a system and method for capturing images during review of a microscope slide. In certain embodiments, such system and method allow for the capture of images and construction of a composited microscope mosaic image within the workflow of the slide reviewer, such as a pathologist reviewing a tissue sample. In certain embodiments, said mosaic images are whole slide images constructed by and capable of being viewed at variable resolutions and magnifications corresponding to the review of the original microscope slide by the slide reviewer.
    Type: Application
    Filed: March 4, 2022
    Publication date: September 12, 2024
    Inventors: Kimberly Lorain ASHMAN, Jonathon Quincy BROWN, Cody Maurice LICORISH, Brian Mark SUMMA, Carola WENK, Huimin ZHUGE, Max Sebastian COOPER
  • Publication number: 20240303953
    Abstract: A technique identifies regions of an image characterized by constant pixel intensity in a resource-efficient, latency-efficient, and scalable manner. The technique involves: obtaining a candidate image; determining whether the candidate image contains a contiguous region of pixels having intensity values within a specified range of intensity values; assessing whether the contiguous region satisfies a prescribed test; and selecting or excluding the candidate image for further processing based on a result of the assessing. The operation of determining involves two phases. First, the technique determines a distribution of intensity values within the candidate image. Second, the technique leverages the distribution to search the candidate image for neighboring pixels having intensity values within the specified range of intensity values, beginning from a selected starting pixel in a qualifying subset of pixels.
    Type: Application
    Filed: March 10, 2023
    Publication date: September 12, 2024
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Vidush VISHWANATH, Changbo HU, Rajesh KODURU
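A minimal sketch of the two-phase test in 20240303953: phase one builds an intensity distribution to cheaply rule out images and pick a qualifying starting pixel; phase two searches outward from that pixel for a contiguous region whose intensities stay within the specified range, and the image is selected or excluded based on the region found. The intensity range, the area threshold, and 4-connectivity are illustrative assumptions.

```python
import numpy as np
from collections import deque

def has_constant_region(img, lo=250, hi=255, min_area=100):
    hist = np.bincount(img.ravel(), minlength=256)
    if hist[lo:hi + 1].sum() < min_area:        # phase 1: distribution check
        return False
    qualifying = (img >= lo) & (img <= hi)
    ys, xs = np.nonzero(qualifying)
    seen = np.zeros_like(qualifying)
    q = deque([(ys[0], xs[0])])                 # phase 2: search from a start pixel
    seen[ys[0], xs[0]] = True
    area = 0
    while q:
        y, x = q.popleft()
        area += 1
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and qualifying[ny, nx] and not seen[ny, nx]):
                seen[ny, nx] = True
                q.append((ny, nx))
    return area >= min_area

img = np.zeros((64, 64), dtype=np.uint8)
img[10:30, 10:30] = 255                         # a 400-pixel constant patch
print(has_constant_region(img))                 # True -> flag the image
```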
  • Publication number: 20240303954
    Abstract: A device, system, and method to provide front-facing camera images identified using a scene image assembled from rear-facing camera images are provided. A device retrieves dual-sensor camera images having respective metadata indicating image acquisitions substantially matching a time and a place associated with an incident, the dual-sensor camera images including a front and rear-facing camera image acquired via a same respective dual-sensor camera. The device assembles the rear-facing camera images into a scene image and renders the scene image at a display screen.
    Type: Application
    Filed: March 6, 2023
    Publication date: September 12, 2024
    Inventors: John KIM, Duane GROVES, Sruti KAMARAJUGADDA, Craig A. IBBOTSON
  • Publication number: 20240303955
    Abstract: An object detection system for a vehicle includes one or more range sensors and one or more controllers in communication with the one or more range sensors. The one or more controllers execute instructions to instruct the one or more range sensors to emit a signal. The one or more range sensors receive a reflected signal from the environment surrounding the vehicle. The one or more controllers determine a position of the detected object based on the reflected signal and compare the position of the detected object with a dynamic region of interest (ROI). The one or more controllers determine the position of the detected object is outside of the dynamic ROI. In response to determining the position of the detected object is outside the dynamic ROI, the one or more controllers disregard the detected object for purposes of scene building.
    Type: Application
    Filed: March 8, 2023
    Publication date: September 12, 2024
    Inventor: Kamran Ali
  • Publication number: 20240303956
    Abstract: A method may include receiving video data that includes frames representative of infrared radiation within a scene. Each of the frames may include pixels. The method may also include identifying pixels within the frames that correspond to a gas plume released by a gas source within the scene based on the infrared radiation. In addition, the method may include determining a size of the gas plume within each frame based on the identified pixels.
    Type: Application
    Filed: May 16, 2024
    Publication date: September 12, 2024
    Inventors: Daniel Zimmerle, Marcus Martinez
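A minimal sketch of the per-frame size step in 20240303956: once the pixels corresponding to the gas plume are identified in each infrared frame, the plume's size follows from the pixel count and the area each pixel covers. The threshold-based "identification" and the ground sample distance are illustrative assumptions standing in for the patent's actual detection.

```python
import numpy as np

GSD_M = 0.05                                    # assumed metres per pixel side

def plume_size_per_frame(frames, ir_threshold=180):
    sizes = []
    for frame in frames:                        # frame: (H, W) IR intensity
        plume_pixels = frame > ir_threshold     # identified plume pixels
        sizes.append(plume_pixels.sum() * GSD_M ** 2)   # plume area in m^2
    return sizes

video = np.random.randint(0, 255, (3, 120, 160))
print(plume_size_per_frame(video))
```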
  • Publication number: 20240303957
    Abstract: The present application provides an end-edge-cloud coordination system and method based on a digital retina, and a device. The coordination system may include, but is not limited to, a front-end device, an edge device, and a cloud device. The front-end device is used for extracting, from collected video data, features having universality, and for generating analysis and recognition tasks on the basis of the features. The front-end device is also used for processing the analysis and recognition tasks, so as to obtain a first intermediate result to be sent to the edge device. The edge device is used for processing the analysis and recognition tasks on the basis of the first intermediate result, so as to obtain a second intermediate result to be sent to the cloud device. The cloud device is used for processing the analysis and recognition tasks on the basis of the second intermediate result, so as to obtain an analysis and recognition result for the video data.
    Type: Application
    Filed: April 12, 2021
    Publication date: September 12, 2024
    Applicant: Peking University
    Inventors: Yonghong TIAN, Peiyin XING, Feng GAO, Xiaofei LIU, Peixi PENG, Wen GAO