Patents Issued on July 25, 2023
-
Patent number: 11710239
Abstract: An approach is provided for using a machine learning model for identifying planar region(s) in an image. The approach involves, for example, determining the model for performing image segmentation. The model comprises at least: a trainable filter that convolves the image to generate an input volume comprising a projection of the image at different resolution scales; and feature(s) to identify image region(s) having a texture within a similarity threshold. The approach also involves processing the image using the model by generating the input volume from the image using the trainable filter and extracting the feature(s) from the input volume to determine the region(s) having the texture. The approach further involves determining the planar region(s) by clustering the image regions. The approach further involves generating a planar mask based on the planar region(s). The approach further involves providing the planar mask as an output of the image segmentation.
Type: Grant
Filed: November 10, 2020
Date of Patent: July 25, 2023
Assignee: HERE Global B.V.
Inventors: Souham Biswas, Sanjay Kumar Boddhu
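The multi-resolution input volume described in the abstract above can be illustrated with a plain image pyramid. This is a minimal sketch that substitutes fixed 2x average pooling for the patent's trainable filter; the function names are illustrative, not from the patent.

```python
import numpy as np

def downsample2x(img):
    """Average-pool a 2D image by a factor of 2 in each dimension."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def build_pyramid(img, levels=3):
    """Collect the image at successively halved resolution scales."""
    pyramid = [img]
    for _ in range(levels - 1):
        pyramid.append(downsample2x(pyramid[-1]))
    return pyramid

image = np.arange(64.0).reshape(8, 8)
scales = build_pyramid(image, levels=3)
print([p.shape for p in scales])   # [(8, 8), (4, 4), (2, 2)]
```

A learned filter would replace the fixed averaging kernel, but the stacked-scales structure of the input volume is the same.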
-
Patent number: 11710240
Abstract: Techniques for identifying pixel groups representing objects in an image include using images having multiple groups of pixels, grouped such that each pixel group represents a zone of interest, and determining a pixel value for pixels within each pixel group based on a comparison of pixel values for each individual pixel within the group. A probability heat map is derived from the pixel group values by a first neural network that takes the pixel group values as input and produces a heat map having a set of graded values indicative of the probability that the respective pixel group includes an object of interest. A zone of interest is identified based on whether the groups of graded values meet a determined probability threshold, and objects of interest are identified within the at least one zone of interest by way of a second neural network.
Type: Grant
Filed: July 27, 2022
Date of Patent: July 25, 2023
Assignee: Xailient
Inventors: Shivanthan Yohanandan, Lars Oleson
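The group-then-threshold pipeline described above can be sketched without the neural networks: reduce each pixel group to one value by comparing its pixels (here, taking the max), treat the resulting grid as a crude heat map, and keep the groups that meet the threshold. All names and the max-pooling choice are illustrative; the patent derives the heat map with a first neural network.

```python
import numpy as np

def pool_groups(img, g):
    """Reduce each g x g pixel group to one value by comparing its pixels (max)."""
    h, w = img.shape[0] // g * g, img.shape[1] // g * g
    blocks = img[:h, :w].reshape(h // g, g, w // g, g)
    return blocks.max(axis=(1, 3))

def zones_of_interest(heat_map, threshold):
    """Return (row, col) indices of groups whose graded value meets the threshold."""
    return np.argwhere(heat_map >= threshold)

frame = np.zeros((8, 8))
frame[5, 6] = 0.9                       # a strong response inside one group
groups = pool_groups(frame, g=4)        # a 2 x 2 grid of pixel groups
print(zones_of_interest(groups, 0.5))   # [[1 1]]
```

Object identification inside the selected zone (the second network in the patent) would then run only on the flagged groups, which is the efficiency point of grouping first.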
-
Patent number: 11710241
Abstract: Techniques for enhancing image segmentation with the integration of deep learning are disclosed herein. An example method for atlas-based segmentation using deep learning includes: applying a deep learning model to a subject image to identify an anatomical feature, registering an atlas image to the subject image, using the deep learning segmentation data to improve a registration result, generating a mapped atlas, and identifying the feature in the subject image using the mapped atlas. Another example method for training and use of a trained machine learning classifier, in an atlas-based segmentation process using deep learning, includes: applying a deep learning model to an atlas image, training a machine learning model classifier using data from applying the deep learning model, estimating structure labels of areas of the subject image, and defining structure labels by combining the estimated structure labels with labels produced from atlas-based segmentation on the subject image.
Type: Grant
Filed: November 19, 2020
Date of Patent: July 25, 2023
Assignee: Elekta, Inc.
Inventors: Xiao Han, Nicolette Patricia Magro
-
Patent number: 11710242
Abstract: The application discloses a method and system for segmenting a lung image. The method may include obtaining a target image relating to a lung region. The target image may include a plurality of image slices. The method may also include segmenting the lung region from the target image, identifying an airway structure relating to the lung region, and identifying one or more fissures in the lung region. The method may further include determining one or more pulmonary lobes in the lung region.
Type: Grant
Filed: February 7, 2021
Date of Patent: July 25, 2023
Assignee: SHANGHAI UNITED IMAGING HEALTHCARE CO., LTD.
Inventors: Xiaodong Wang, Yufei Mao, Renchao Jin, Yujie Tian, Naiwen Hu, Lijun Xu, Fengli He, Hong Liu, Kai He, Enmin Song, Xiangyang Xu
-
Patent number: 11710243
Abstract: A method for predicting a direction of movement of a target object, a method for training a neural network, a smart vehicle control method, a device, an electronic apparatus, a computer readable storage medium, and a computer program. The method for predicting a direction of movement of a target object comprises: acquiring an apparent orientation of a target object in an image captured by a camera device, and acquiring a relative position relationship of the target object in the image and the camera device in three-dimensional space (S100); and determining, according to the apparent orientation of the target object and the relative position relationship, a direction of movement of the target object relative to a traveling direction of the camera device (S110).
Type: Grant
Filed: September 18, 2020
Date of Patent: July 25, 2023
Assignees: SENSETIME GROUP LIMITED, HONDA MOTOR CO. LTD.
Inventors: Shu Zhang, Zhaohui Yang, Jiaman Li, Xingyu Zeng
-
Patent number: 11710244
Abstract: A system for physiological motion measurement is provided. The system may acquire a reference image corresponding to a reference motion phase of an ROI and a target image of the ROI corresponding to a target motion phase, wherein the reference motion phase may be different from the target motion phase. The system may identify one or more feature points relating to the ROI from the reference image, and determine a motion field of the feature points from the reference motion phase to the target motion phase using a motion prediction model. An input of the motion prediction model may include at least the reference image and the target image. The system may further determine a physiological condition of the ROI based on the motion field.
Type: Grant
Filed: November 4, 2019
Date of Patent: July 25, 2023
Assignee: SHANGHAI UNITED IMAGING INTELLIGENCE CO., LTD.
Inventors: Shanhui Sun, Zhang Chen, Terrence Chen, Ziyan Wu
-
Patent number: 11710245
Abstract: Optical flow refers to the pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer and a scene. Optical flow algorithms can be used to detect and delineate independently moving objects, even in the presence of camera motion. The present invention uses optical-flow algorithms to detect and remove marine snow particles from live video. Portions of an image scene which are identified as marine snow are reconstructed in a manner intended to reveal underwater scenery which had been occluded by the marine snow. Pixel locations within the regions of marine snow are replaced with new pixel values that are determined based on either historical data for each pixel or a mathematical operation, such as one which uses data from neighboring pixels.
Type: Grant
Filed: February 24, 2021
Date of Patent: July 25, 2023
Inventor: Jack Wade
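The pixel-replacement step described above (historical data per pixel, or an operation using neighboring pixels) can be sketched with a per-pixel temporal median fill. This is a minimal illustration of the fill, not the patented optical-flow detector; the snow mask is assumed to be given.

```python
import numpy as np

def remove_snow(frames, snow_mask):
    """Replace pixels flagged as marine snow in the latest frame.

    frames: (t, h, w) stack of grayscale frames; frames[-1] is the current frame.
    snow_mask: (h, w) boolean mask of detected snow in the current frame.
    Fill values come from the per-pixel temporal median of the earlier frames,
    i.e. the pixel's own history, which usually shows the unoccluded scenery.
    """
    current = frames[-1].copy()
    history = np.median(frames[:-1], axis=0)
    current[snow_mask] = history[snow_mask]
    return current

frames = np.full((4, 3, 3), 10.0)
frames[-1, 1, 1] = 255.0                # a bright snow particle in the current frame
mask = frames[-1] > 200
print(remove_snow(frames, mask)[1, 1])  # 10.0
```

A spatial variant would replace `history[snow_mask]` with a median over each masked pixel's neighborhood, matching the abstract's "data from neighboring pixels" alternative.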
-
Patent number: 11710246
Abstract: The present disclosure provides a method for a medical procedure that uses augmented reality to superimpose a patient's medical images (e.g., CT or MRI) over a real-time camera view of the patient. Prior to the medical procedure, the patient's medical images are processed to generate a 3D model that represents a skin contour of the patient's body. The 3D model is further processed to generate a skin marker that comprises only selected portions of the 3D model. At the time of the medical procedure, 3D images of the patient's body are captured using a camera, which are then registered with the skin marker. Then, the patient's medical images can be superimposed over the real-time camera view that is presented to the person performing the medical procedure.
Type: Grant
Filed: May 19, 2022
Date of Patent: July 25, 2023
Assignee: SKIA
Inventors: Seungwon Na, Wonki Eun, Jun Woo Lee, Hyuk Kwon, Jong Myoung Lee
-
Patent number: 11710247
Abstract: Embodiments allow live action images from an image capture device to be composited with computer generated images in real-time or near real-time. The two types of images (live action and computer generated) are composited accurately by using a depth map. In an embodiment, the depth map includes a "depth value" for each pixel in the live action image. In an embodiment, steps of one or more of feature extraction, matching, filtering or refinement can be implemented, at least in part, with an artificial intelligence (AI) computing approach using a deep neural network with training. A combination of computer-generated ("synthetic") and live-action ("recorded") training data is created and used to train the network so that it can improve the accuracy or usefulness of a depth map so that compositing can be improved.
Type: Grant
Filed: October 27, 2020
Date of Patent: July 25, 2023
Assignee: UNITY TECHNOLOGIES SF
Inventors: Tobias B. Schmidt, Erik B. Edlund, Dejan Momcilovic, Josh Hardgrave
-
Patent number: 11710248
Abstract: Aspects of the present disclosure involve a system and a method for performing operations comprising: accessing a source image depicting a target structure; accessing one or more target images depicting at least a portion of the target structure; computing correspondence between a first set of pixels in the source image of a first portion of the target structure and a second set of pixels in the one or more target images of the first portion of the target structure, the correspondence being computed as a function of camera parameters that vary between the source image and the one or more target images; and generating a three-dimensional (3D) model of the target structure based on the correspondence between the first set of pixels in the source image and the second set of pixels in the one or more target images based on a joint optimization of target structure and camera parameters.
Type: Grant
Filed: July 20, 2022
Date of Patent: July 25, 2023
Assignee: Snap Inc.
Inventor: Oliver Woodford
-
Patent number: 11710249
Abstract: A system for executing a three-dimensional (3D) intraoperative scan of a patient is disclosed. A 3D scanner controller projects the object points included in a first 2D intraoperative image onto a first image plane and the object points included in a second 2D intraoperative image onto a second image plane. The 3D scanner controller determines first epipolar lines associated with the first image plane and second epipolar lines associated with the second image plane based on an epipolar plane that triangulates the object points included in the first 2D intraoperative image to the object points included in the second 2D intraoperative image. Each epipolar line provides a depth of each object point as projected onto the first image plane and the second image plane. The 3D scanner controller converts the first 2D intraoperative image and the second 2D intraoperative image to the 3D intraoperative scan of the patient based on the depth of each object point provided by each corresponding epipolar line.
Type: Grant
Filed: December 21, 2020
Date of Patent: July 25, 2023
Assignee: Unify Medical, Inc.
Inventors: Yang Liu, Maziyar Askari Karchegani
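The depth recovery that this two-view epipolar construction enables can be illustrated with standard linear (DLT) triangulation of one object point from two image planes. This is a generic textbook sketch, not the patent's specific epipolar-line procedure; the projection matrices and the point are invented for the example.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its projections in two views (linear DLT).

    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) image observations.
    Builds the homogeneous system A X = 0 and takes its SVD null vector.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                     # dehomogenize

# Two identical cameras, the second shifted one unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])          # ground-truth object point
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(np.allclose(triangulate(P1, P2, x1, x2), X_true))  # True
```

The depth along each epipolar line falls out of the recovered point's z coordinate; with real intraoperative images the correspondences would come from matching along the epipolar lines rather than from a known ground-truth point.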
-
Patent number: 11710250
Abstract: An electronic device for setting a processing procedure for controlling an apparatus, the electronic device comprising: at least one processor configured to cause the electronic device to perform operations of: obtaining a captured image; determining a type of an object included in the captured image; notifying an item indicating target processing corresponding to the object included in the captured image, among a plurality of analysis processing; notifying a result obtained by applying the target processing to the captured image; and receiving a user instruction for adding the processing corresponding to the item to a processing procedure for controlling the apparatus.
Type: Grant
Filed: August 10, 2020
Date of Patent: July 25, 2023
Assignee: Canon Kabushiki Kaisha
Inventor: Genki Cho
-
Patent number: 11710251
Abstract: In one embodiment, a method includes receiving an image associated with an object in an environment, the image being captured by sensors associated with a vehicle, generating a feature representation of the image, determining a potential ground control point associated with the object based on the feature representation of the image, determining a predetermined location reading based on the potential ground control point, calculating a differential relative to the predetermined location reading based on the potential ground control point, and determining a location of the vehicle based on the differential and the predetermined location reading based on the potential ground control point.
Type: Grant
Filed: November 2, 2020
Date of Patent: July 25, 2023
Assignee: Lyft, Inc.
Inventors: Ramesh Rangarajan Sarukkai, Shaohui Sun
-
Patent number: 11710252
Abstract: A material data collection system allows capturing of material data. For example, the material data collection system may include digital image data for materials. The material data collection system may ensure that captured digital image data is properly aligned, so that material data may be easily recalled for later use, while maintaining the proper alignment for the captured digital image. The material data collection system may include using a capture guide to provide cues on how to orient a mobile device used with the material data collection system.
Type: Grant
Filed: December 22, 2020
Date of Patent: July 25, 2023
Assignee: Centric Software, Inc.
Inventors: Humberto Roa, Rammohan Akula, Fabrice Canonge, Nicholas Fjellberg Swerdlowe, Rohit Ghatol, Grif Von Holst
-
Position and attitude estimation device, position and attitude estimation method, and storage medium
Patent number: 11710253
Abstract: According to one embodiment, a position and attitude estimation device includes a processor. The processor is configured to acquire time-series images continuously captured by a capture device installed on a mobile object, estimate first position and attitude of the mobile object based on the acquired time-series images, estimate a distance to a subject included in the acquired time-series images and correct the estimated first position and attitude to a second position and attitude based on an actual scale, based on the estimated distance.
Type: Grant
Filed: March 5, 2021
Date of Patent: July 25, 2023
Assignee: Kabushiki Kaisha Toshiba
Inventors: Yusuke Tazoe, Tomoya Tsuruyama, Akihito Seki
-
Patent number: 11710254
Abstract: A first six degree-of-freedom (DoF) pose of an object from a perspective of a first image sensor is determined with a neural network. A second six DoF pose of the object from a perspective of a second image sensor is determined with the neural network. A pose offset between the first and second six DoF poses is determined. A first projection offset is determined for a first two-dimensional (2D) bounding box generated from the first six DoF pose. A second projection offset is determined for a second 2D bounding box generated from the second six DoF pose. A total offset is determined by combining the pose offset, the first projection offset, and the second projection offset. Parameters of a loss function are updated based on the total offset. The updated parameters are provided to the neural network to obtain an updated total offset.
Type: Grant
Filed: April 7, 2021
Date of Patent: July 25, 2023
Assignee: Ford Global Technologies, LLC
Inventors: Shubham Shrivastava, Punarjay Chakravarty, Gaurav Pandey
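The "2D bounding box generated from the six DoF pose" step can be sketched as projecting the eight corners of the object's posed 3D box through the camera intrinsics and taking the 2D extent. This is a generic geometry sketch with invented numbers, not the patent's trained pipeline or its loss formulation.

```python
import numpy as np

def project_bbox(corners_3d, K):
    """Project an object's 3D box corners and return its 2D bounding box.

    corners_3d: (8, 3) corner points in camera coordinates (z > 0).
    K: 3x3 camera intrinsic matrix.
    Returns (u_min, v_min, u_max, v_max) in pixels.
    """
    uvw = (K @ corners_3d.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]           # perspective divide
    return (*uv.min(axis=0), *uv.max(axis=0))

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
# Unit cube centred 5 m straight ahead of the camera.
offsets = np.array([[dx, dy, dz]
                    for dx in (-0.5, 0.5)
                    for dy in (-0.5, 0.5)
                    for dz in (-0.5, 0.5)])
corners = np.array([0.0, 0.0, 5.0]) + offsets
box = project_bbox(corners, K)
print(box)
```

Comparing such a projected box against a detected 2D box gives one "projection offset" term of the kind the abstract combines into its total offset.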
-
Patent number: 11710255
Abstract: An object identification and collection method is disclosed. The method includes receiving a pick-up path that identifies a route in which to guide an object-collection system over a target geographical area to pick up objects, determining a current location of the object-collection system relative to the pick-up path, and guiding the object-collection system along the pick-up path over the target geographical area based on the current location. The method further includes capturing images in a direction of movement of the object-collection system along the pick-up path, identifying a target object in the images; tracking movement of the target object through the images, determining that the target object is within range of an object picker assembly on the object-collection system based on the tracked movement of the target object, and instructing the object picker assembly to pick up the target object.
Type: Grant
Filed: July 21, 2021
Date of Patent: July 25, 2023
Assignee: TerraClear Inc.
Inventors: Brent Ronald Frei, Dwight Galen McMaster, Michael Racine, Jacobus du Preez, William David Dimmit, Isabelle Butterfield, Clifford Holmgren, Dafydd Daniel Rhys-Jones, Thayne Kollmorgen, Vivek Ullal Nayak
-
Patent number: 11710256
Abstract: A method of generating a 3D reconstruction of a scene, the scene comprising a plurality of cameras positioned around the scene, comprises: obtaining the extrinsics and intrinsics of a virtual camera within a scene; accessing a data structure so as to determine a camera pair that is to be used in reconstructing the scene from the viewpoint of the virtual camera; wherein the data structure defines a voxel representation of the scene, the voxel representation comprising a plurality of voxels, at least some of the voxel surfaces being associated with respective camera pair identifiers; wherein each camera pair identifier associated with a respective voxel surface corresponds to a camera pair that has been identified as being suitable for obtaining depth data for the part of the scene within that voxel and for which the averaged pose of the camera pair is oriented towards the voxel surface; identifying, based on the obtained extrinsics and intrinsics of the virtual camera, at least one voxel that is within the fie…
Type: Grant
Filed: August 31, 2020
Date of Patent: July 25, 2023
Assignee: Sony Interactive Entertainment Inc.
Inventors: Nigel John Williams, Andrew William Walker
-
Patent number: 11710257
Abstract: An image processing apparatus includes a first evaluator configured to evaluate under a first evaluation condition a focus state of each of a plurality of image data acquired by consecutive capturing, a second evaluator configured to evaluate the focus state of each of the plurality of image data under a second evaluation condition different from the first evaluation condition, and a recorder configured to record first evaluation information indicating an evaluation result under the first evaluation condition and second evaluation information indicating an evaluation result under the second evaluation condition.
Type: Grant
Filed: June 14, 2021
Date of Patent: July 25, 2023
Assignee: CANON KABUSHIKI KAISHA
Inventor: Masahiro Kawarada
-
Patent number: 11710258
Abstract: Disclosed is a compression system for compressing image data. The compression system receives an uncompressed image file with data points that are defined with absolute values for elements representing the data point position in a space. The compression system stores the absolute values defined for a first data point in a compressed image file, determines a difference between the absolute values of the first data point and the absolute values of a second data point, derives a relative value for the absolute values of the second data point from the difference, and stores the relative value in place of the absolute values of the second data point in the compressed image file.
Type: Grant
Filed: January 25, 2023
Date of Patent: July 25, 2023
Assignee: Illuscio, Inc.
Inventor: Alexandre Leuckert Klein
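The absolute-plus-relative encoding described above can be sketched with a delta-to-predecessor variant: keep the first point's absolute coordinates and store each later point as a difference from the point before it. The exact referencing scheme (predecessor vs. first point) is an assumption for the illustration; the patent's claim is the general derivation of relative values from differences.

```python
def compress(points):
    """Store the first point absolutely; every later point as a delta to its predecessor."""
    out = [points[0]]
    for prev, cur in zip(points, points[1:]):
        out.append(tuple(c - p for c, p in zip(cur, prev)))
    return out

def decompress(deltas):
    """Rebuild absolute positions by accumulating the stored differences."""
    pts = [deltas[0]]
    for d in deltas[1:]:
        pts.append(tuple(p + dd for p, dd in zip(pts[-1], d)))
    return pts

cloud = [(100, 200, 300), (101, 202, 299), (103, 203, 297)]
packed = compress(cloud)
print(packed)                       # [(100, 200, 300), (1, 2, -1), (2, 1, -2)]
print(decompress(packed) == cloud)  # True
```

The payoff is that small deltas fit in fewer bits than full absolute coordinates when the points are spatially ordered, which is what makes the relative representation compress well.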
-
Patent number: 11710259
Abstract: A method and device for decoding a point cloud using octree partitioning and a predictive tree include obtaining the point cloud. A bounding box of the point cloud is determined. Octree nodes are generated by partitioning the bounding box using octree partitioning. The predictive tree is generated for points in at least one octree node of the octree nodes. A transform is applied to the predictive tree. The points in the at least one octree node are decoded using the predictive tree.
Type: Grant
Filed: November 13, 2020
Date of Patent: July 25, 2023
Assignee: TENCENT AMERICA LLC
Inventors: Xiang Zhang, Wen Gao, Shan Liu
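The octree-partitioning half of the scheme can be sketched as splitting an axis-aligned bounding box into eight octants and routing each point by comparing it with the box midpoint. This shows only the partitioning step; the predictive tree, transform, and entropy decoding from the abstract are not modeled, and all names are illustrative.

```python
def octree_children(lo, hi):
    """Split the axis-aligned box [lo, hi] into its 8 child octants."""
    mid = tuple((l + h) / 2 for l, h in zip(lo, hi))
    kids = []
    for i in range(8):
        bits = (i & 1, (i >> 1) & 1, (i >> 2) & 1)   # one bit per axis
        c_lo = tuple(m if b else l for l, m, b in zip(lo, mid, bits))
        c_hi = tuple(h if b else m for m, h, b in zip(mid, hi, bits))
        kids.append((c_lo, c_hi))
    return kids

def octant_index(lo, hi, p):
    """Index (0-7) of the child octant of [lo, hi] that contains point p."""
    mid = tuple((l + h) / 2 for l, h in zip(lo, hi))
    return sum(1 << axis for axis in range(3) if p[axis] >= mid[axis])

lo, hi = (0, 0, 0), (8, 8, 8)
print(octant_index(lo, hi, (1, 1, 1)))  # 0
print(octant_index(lo, hi, (5, 1, 7)))  # 5
```

Applied recursively, this yields the octree nodes; a decoder like the one claimed would then build a predictive tree over the points inside a chosen node.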
-
Patent number: 11710260
Abstract: A method for coding information of a point cloud comprises obtaining the point cloud including a set of points in a three-dimensional space; partitioning the point cloud into a plurality of objects and generating occupancy information for each of the plurality of objects; and encoding the occupancy information by taking into account the distance between the plurality of objects.
Type: Grant
Filed: July 7, 2022
Date of Patent: July 25, 2023
Assignee: TENCENT AMERICA LLC
Inventors: Xiang Zhang, Wen Gao, Shan Liu
-
Patent number: 11710261
Abstract: Methods, systems, devices and apparatuses are provided for generating a high-quality MRI image from under-sampled or corrupted data. The image reconstruction system includes a memory. The memory is configured to store multiple samples of biological, physiological, neurological or anatomical data that has missing or corrupted k-space data and a deep learning model or neural network. The image reconstruction system includes a processor coupled to the memory. The processor is configured to obtain the multiple samples. The processor is configured to determine the missing or corrupted k-space data using the multiple samples and the deep learning model or neural network. The processor is configured to reconstruct an MRI image using the determined missing or corrupted k-space data and the multiple samples.
Type: Grant
Filed: July 27, 2020
Date of Patent: July 25, 2023
Assignee: University of Southern California
Inventors: Tae Hyung Kim, Justin Haldar
-
Patent number: 11710262
Abstract: Automatic font synthesis for modifying a local font to have an appearance that is visually similar to a source font is described. A font modification system receives an electronic document including the source font together with an indication of a font descriptor for the source font. The font descriptor includes information describing various font attributes for the source font, which define a visual appearance of the source font. Using the source font descriptor, the font modification system identifies a local font that is visually similar in appearance to the source font by comparing local font descriptors to the source font descriptor. A visually similar font is then synthesized by modifying glyph outlines of the local font to achieve the visual appearance defined by the source font descriptor. The synthesized font is then used to replace the source font and output in the electronic document at the computing device.
Type: Grant
Filed: February 18, 2022
Date of Patent: July 25, 2023
Assignee: Adobe Inc.
Inventors: Nirmal Kumawat, Zhaowen Wang
-
Patent number: 11710263
Abstract: A method of rasterising a line in computer graphics determines whether the line's start and/or end is inside a diamond test area within the pixel. If the end is not inside and the start is inside, the pixel is drawn as part of the line. If neither the start nor the end of the line are inside, it is determined whether the line crosses more than one extended diamond edge and if so, it is further determined (i) whether an extended line passing through the start and end is substantially vertical and touches the right point of the diamond area, (ii) whether the extended line touches the bottom point of the diamond area, and (iii) whether the extended line is on a same side of each point of the diamond area. If any of (i), (ii) and (iii) is positive, the pixel is drawn as part of the line.
Type: Grant
Filed: May 14, 2022
Date of Patent: July 25, 2023
Assignee: Imagination Technologies Limited
Inventor: Casper Van Benthem
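The "diamond test area" at the heart of this rule is conventionally the L1 ball of radius 0.5 around the pixel centre. A minimal membership test is sketched below; the extended-edge and touch cases in the abstract are refinements layered on top of this basic test, and the strict/non-strict boundary choice here is an assumption.

```python
def inside_diamond(px, py, x, y):
    """True if point (x, y) lies strictly inside the diamond test area of pixel (px, py).

    The diamond is centred on the pixel centre (px + 0.5, py + 0.5) and is the
    set of points whose L1 (Manhattan) distance from the centre is below 0.5.
    """
    cx, cy = px + 0.5, py + 0.5
    return abs(x - cx) + abs(y - cy) < 0.5

print(inside_diamond(0, 0, 0.5, 0.5))  # True  (the pixel centre itself)
print(inside_diamond(0, 0, 1.0, 0.5))  # False (the right point of the diamond)
print(inside_diamond(0, 0, 0.9, 0.9))  # False (near the pixel corner)
```

Under the usual diamond-exit convention, a pixel is drawn when the line enters this region and then leaves it, which is why the start-inside/end-outside case in the abstract draws the pixel.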
-
Patent number: 11710264
Abstract: A multi-graphic display method and a computer-readable storage medium are disclosed. In the multi-graphic display method, a processor executes instructions to perform the steps of, within a display window: determining the position of each financial graphic; determining a plurality of rectangular sub-regions in the display window so that the financial varieties of the financial graphics contained in a single rectangular sub-region are the same; setting the financial characteristics of each financial graphic; and setting a financial variety of financial graphics within each of the rectangular sub-regions. In the case where two or more rectangular sub-regions within the display window contain a plurality of periodic financial graphics, the financial characteristics of at least two of the periodic financial graphics between at least two of the rectangular sub-regions are identical.
Type: Grant
Filed: June 17, 2021
Date of Patent: July 25, 2023
Inventor: Jian Sun
-
Patent number: 11710265
Abstract: A method for imaging expected results of a medical cosmetic treatment includes converting an input image of an anatomical feature into an input image vector. A direction vector corresponding to the medical cosmetic treatment is determined. An amplitude of the direction vector is determined. The direction vector is multiplied by the determined amplitude to obtain a product vector. The product vector is vector-added to the input image vector. An output image corresponding to the expected visual appearance of the human anatomical feature is generated from the vector-added product vector and input image vector. A computer program stored in a non-transitory computer readable medium causes a computer to perform the imaging method.
Type: Grant
Filed: June 8, 2021
Date of Patent: July 25, 2023
Assignee: RealFaceValue B.V.
Inventors: Jacques van der Meulen, Ekin Gedik, Berno Bucker
-
Patent number: 11710266
Abstract: Embodiments of this application provide a rendering method and apparatus, and the like. The method includes: A processor (which is usually a CPU) modifies a rendering instruction based on a relationship between a first frame buffer and a second frame buffer, so that a GPU renders a rendering job corresponding to the first frame buffer to the second frame buffer based on a new rendering instruction. In this application, render passes of one or more frame buffers are redirected to another frame buffer. In this way, memory occupation in a rendering process of an application program is effectively reduced, bandwidth of the GPU is reduced, and power consumption can be reduced.
Type: Grant
Filed: July 29, 2021
Date of Patent: July 25, 2023
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Fan Zhang, Feng Wang, Jun Li, Kelan Song, Qichao Zhu
-
Patent number: 11710267
Abstract: Systems, apparatuses, and methods may provide for technology to process graphics data in a virtual gaming environment. The technology may identify, from graphics data in a graphics application, redundant graphics calculations relating to common frame characteristics of one or more graphical scenes to be shared between client game devices of a plurality of users and calculate, in response to the identified redundant graphics calculations, frame characteristics relating to the one or more graphical scenes. Additionally, the technology may send, over a computer network, the calculation of the frame characteristics to the client game devices.
Type: Grant
Filed: September 17, 2021
Date of Patent: July 25, 2023
Assignee: Intel Corporation
Inventors: Jonathan Kennedy, Gabor Liktor, Jeffery S. Boles, Slawomir Grajewski, Balaji Vembu, Travis T. Schluessler, Abhishek R. Appu, Ankur N. Shah, Joydeep Ray, Altug Koker, Jacek Kwiatkowski
-
Patent number: 11710268
Abstract: A graphics processing unit (GPU) processes graphics data using a rendering space which is sub-divided into a plurality of tiles. The GPU comprises cost indication logic configured to obtain a cost indication for each of a plurality of sets of one or more tiles of the rendering space. The cost indication for a set of tile(s) is suggestive of a cost of processing the set of one or more tiles. The GPU controls a rendering complexity with which primitives are rendered in tiles based on the cost indication for those tiles. This allows tiles to be rendered in a manner that is suitable based on the complexity of the graphics data within the tiles. In turn, this allows the rendering to satisfy constraints such as timing constraints even when the complexity of different tiles may vary significantly within an image.
Type: Grant
Filed: May 3, 2022
Date of Patent: July 25, 2023
Assignee: Imagination Technologies Limited
Inventors: John W. Howson, Richard Broadhurst, Steven Fishwick
-
Patent number: 11710269
Abstract: Position-based rendering apparatus and method for multi-die/GPU graphics processing. For example, one embodiment of a method comprises: distributing a plurality of graphics draws to a plurality of graphics processors; performing position-only shading using vertex data associated with tiles of a first draw on a first graphics processor, the first graphics processor responsively generating visibility data for each of the tiles; distributing subsets of the visibility data associated with different subsets of the tiles to different graphics processors; limiting geometry work to be performed on each tile by each graphics processor using the visibility data, each graphics processor to responsively generate rendered tiles; and wherein the rendered tiles are combined to generate a complete image frame.
Type: Grant
Filed: July 28, 2022
Date of Patent: July 25, 2023
Assignee: Intel Corporation
Inventors: Travis Schluessler, Zack Waters, Michael Apodaca, Daniel Johnston, Jason Surprise, Prasoonkumar Surti, Subramaniam Maiyuran, Peter Doyle, Saurabh Sharma, Ankur Shah, Murali Ramadoss
-
Patent number: 11710270
Abstract: A programmatic arbitrary distribution of items in a modeling system may be provided. To perform the distribution, a surface may be received, and a point count of application points associated with locations on the surface may be determined. A density map may be applied over the surface to assign a density to portions of the surface for the point count. Application points are then assigned to locations on the surface according to the density map and a scattering function of the point count, where the scattering function is based on one or more repulsion forces between neighboring points. The one or more repulsion forces are treated as pushing each of the neighboring points apart. Thereafter, the surface may be provided having the application points scattered across the surface based on the one or more repulsion forces.
Type: Grant
Filed: June 7, 2022
Date of Patent: July 25, 2023
Assignee: Unity Technologies SF
Inventor: Philip Hunter
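The repulsion-based scattering described above can be sketched as a few iterations of pairwise inverse-distance forces on a flat unit square (standing in for the surface). The force law, the per-step displacement cap, and all constants are assumptions chosen to keep this toy simulation stable; the patent additionally weights placement by a density map, which is omitted here.

```python
import numpy as np

def scatter(points, steps=50, strength=0.001, max_step=0.05):
    """Push neighbouring points apart with pairwise inverse-distance repulsion."""
    pts = np.asarray(points, dtype=float).copy()
    for _ in range(steps):
        diff = pts[:, None, :] - pts[None, :, :]          # pairwise offsets
        dist2 = (diff ** 2).sum(-1) + 1e-12               # avoid dividing by zero
        force = (diff / dist2[..., None]).sum(axis=1)     # net push away from neighbours
        step = np.clip(strength * force, -max_step, max_step)
        pts = np.clip(pts + step, 0.0, 1.0)               # keep points on the unit square
    return pts

def min_gap(p):
    """Smallest pairwise distance in a point set."""
    d = np.linalg.norm(p[:, None] - p[None, :], axis=-1)
    return d[~np.eye(len(p), dtype=bool)].min()

rng = np.random.default_rng(0)
start = 0.45 + rng.random((20, 2)) * 0.1   # tight cluster mid-surface
end = scatter(start)
print(min_gap(end) > min_gap(start))       # the points have spread apart
```

Weighting the force or the allowed region by a per-location density would recover the density-map behaviour the abstract describes.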
-
Patent number: 11710271
Abstract: A three-dimensional data creation method includes: creating first three-dimensional data from information detected by a sensor; receiving encoded three-dimensional data that is obtained by encoding second three-dimensional data; decoding the received encoded three-dimensional data to obtain the second three-dimensional data; and merging the first three-dimensional data with the second three-dimensional data to create third three-dimensional data.
Type: Grant
Filed: September 15, 2020
Date of Patent: July 25, 2023
Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
Inventors: Toru Matsunobu, Takahiro Nishi, Tadamasa Toma, Toshiyasu Sugio, Satoshi Yoshikawa, Tatsuya Koyama
-
Patent number: 11710272
Abstract: An image processing system includes a computing platform having processing hardware, a display, and a system memory storing a software code. The processing hardware executes the software code to receive a digital object, surround the digital object with virtual cameras oriented toward the digital object, render, using each one of the virtual cameras, a depth map identifying a distance of that one of the virtual cameras from the digital object, and generate, using the depth map, a volumetric perspective of the digital object from a perspective of that one of the virtual cameras, resulting in multiple volumetric perspectives of the digital object. The processing hardware further executes the software code to merge the multiple volumetric perspectives of the digital object to form a volumetric representation of the digital object, and to convert the volumetric representation of the digital object to a renderable form.
Type: Grant
Filed: March 24, 2021
Date of Patent: July 25, 2023
Assignee: Disney Enterprises, Inc.
Inventors: Dane M. Coffey, Siroberto Scerbo, Daniel L. Baker, Mark R. Mine, Evan M. Goldberg
-
Patent number: 11710273
Abstract: An apparatus comprises a camera configured to capture images of a user in a scene; a depth detector configured to capture depth representations of the scene, the depth detector comprising an emitter configured to emit a non-visible signal; a mirror arranged to reflect at least some of the non-visible signal emitted by the emitter to one or more features within the scene that would otherwise be occluded by the user, and to reflect light from the one or more features to the camera; a pose detector configured to detect a position and orientation of the mirror relative to at least one of the camera and the depth detector; and a scene generator configured to generate a three-dimensional representation of the scene in dependence on the images captured by the camera, the depth representations captured by the depth detector, and the pose of the mirror detected by the pose detector.
Type: Grant
Filed: May 19, 2020
Date of Patent: July 25, 2023
Assignee: Sony Interactive Entertainment Inc.
Inventor: Andrew William Walker
-
Patent number: 11710274
Abstract: The present technology relates to an image processing apparatus and a file generation apparatus that make it possible to appropriately reproduce a BV content. An image processing apparatus includes: a file acquisition unit that acquires a file having a management region where information for management of a 3D object content is stored and a data region storing a track in which streams included in the 3D object content are stored, with group information for selection, from a plurality of the streams included in the 3D object content, of the streams appropriate for reproduction of the 3D object content being stored in the management region; and a file processor that selects a plurality of the streams to be used for reproduction of the 3D object content on the basis of the group information. The present technology is applicable to a client apparatus.
Type: Grant
Filed: August 31, 2018
Date of Patent: July 25, 2023
Assignee: SONY CORPORATION
Inventors: Ryohei Takahashi, Mitsuhiro Hirabayashi, Mitsuru Katsumata, Toshiya Hamada
-
Patent number: 11710275
Abstract: A shape generation system can generate a three-dimensional (3D) model of an object from a two-dimensional (2D) image of the object by projecting vectors onto light cones created from the 2D image. The projected vectors can be used to more accurately create the 3D model of the object based on image element (e.g., pixel) values of the image.
Type: Grant
Filed: October 12, 2021
Date of Patent: July 25, 2023
Assignee: Snap Inc.
Inventors: Soumyadip Sengupta, Linjie Luo, Chen Cao, Menglei Chai
-
Patent number: 11710276
Abstract: In one implementation, a method for improved motion planning is disclosed. The method includes: obtaining a macro task for a virtual agent within a virtual environment; generating a search-tree based on at least one of the macro task, a state of the virtual environment, and a state of the virtual agent, wherein the search-tree includes a plurality of task nodes corresponding to potential tasks for performance by the virtual agent in furtherance of the macro task; and determining physical motion plans (PMPs) for at least some of the plurality of task nodes within the search-tree in order to generate a lookahead planning gradient for a first time, wherein a granularity of a PMP for a respective task node in the search-tree is a function of the temporal distance of the respective task node from the first time.
Type: Grant
Filed: June 25, 2021
Date of Patent: July 25, 2023
Assignee: Apple Inc.
Inventors: Daniel Laszlo Kovacs, Siva Chandra Mouli Sivapurapu, Payal Jotwani, Noah Jonathan Gamboa
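The "lookahead planning gradient" idea, plan resolution falling off with temporal distance, can be sketched as a simple mapping. The linear falloff, resolution bounds, and horizon here are hypothetical illustrations, not the patent's actual granularity function.

```python
def plan_granularity(temporal_distance, full_res=100, min_res=5, horizon=10.0):
    """Waypoints per physical motion plan (PMP) for a task node: near-term
    nodes get fine plans, far-future nodes get coarse ones."""
    frac = max(0.0, 1.0 - temporal_distance / horizon)
    return max(min_res, int(full_res * frac))

now = plan_granularity(0.0)    # imminent task: full resolution
later = plan_granularity(8.0)  # distant task: coarse plan
```

The benefit is that expensive fine-grained motion planning is only done for nodes the agent will reach soon; far nodes are likely to be re-planned anyway as the tree is re-expanded.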
-
Patent number: 11710277
Abstract: A map database creation method is provided. The method includes: obtaining a factor set including factors; dividing a map database into levels based on the factors, and taking each interval of the last level as one sub-database; creating an initial map based on a factor value of each factor corresponding to each sub-database, and creating the sub-database as an initial map database by storing the corresponding initial map in the sub-database; finding the initial map matching a current lighting condition from the initial map database based on the current lighting condition, and taking the found initial map as a positioning map; and performing a visual positioning based on the positioning map, creating an expanded map corresponding to the current lighting condition based on the visual positioning, and creating the sub-database corresponding to the current lighting condition as an expanded map database by storing the corresponding expanded map in the sub-database.
Type: Grant
Filed: September 23, 2021
Date of Patent: July 25, 2023
Assignee: UBTECH ROBOTICS CORP LTD
Inventors: Mingqiang Huang, Zhichao Liu, Youfang Lai, Yun Zhao
-
Patent number: 11710278
Abstract: Embodiments of the present invention describe predictively reconstructing a physical event using augmented reality. Embodiments describe identifying relative states of objects located in a physical event area by using video analysis to analyze collected video feeds from the physical event area before and after a physical event involving at least one of the objects, creating a knowledge corpus including the video analysis and the collected video feeds associated with the physical event and historical information, and capturing data, by a computing device, of the physical event area.
Type: Grant
Filed: December 2, 2019
Date of Patent: July 25, 2023
Assignee: International Business Machines Corporation
Inventors: James R. Kozloski, Sarbajit K. Rakshit, Michael S. Gordon, Komminist Weldemariam
-
Patent number: 11710279
Abstract: A contextual local image recognition module of a device retrieves a primary content dataset from a server and then generates and updates a contextual content dataset based on an image captured with the device. The device stores the primary content dataset and the contextual content dataset. The primary content dataset comprises a first set of images and corresponding virtual object models. The contextual content dataset comprises a second set of images and corresponding virtual object models retrieved from the server.
Type: Grant
Filed: April 26, 2021
Date of Patent: July 25, 2023
Assignee: RPX Corporation
Inventor: Brian Mullins
-
Patent number: 11710280
Abstract: Disclosed herein is an environmental scanning tool that generates a digital model representing the surroundings of a user of an extended reality head-mounted display device. The environment is imaged both in a depth map and in visible light for some select objects of interest. The selected objects exist within the digital model at higher fidelity and resolution than the remaining portions of the model in order to manage the storage size of the digital model. In some cases, the objects of interest are selected, or their higher-fidelity scans are directed, by a remote user. The digital model further includes time-stamped updates of the environment such that users can view a state of the environment according to various timestamps.
Type: Grant
Filed: August 13, 2021
Date of Patent: July 25, 2023
Assignee: United Services Automobile Association (USAA)
Inventors: Ravi Durairaj, Marta Argumedo, Sean C. Mitchem, Ruthie Lyle, Nolan Serrao, Bharat Prasad, Nathan L. Post
-
Patent number: 11710281
Abstract: In one embodiment, a computer-implemented method for rendering virtual environments is disclosed. The method includes associating, by a computing system, an object with a container effect, by receiving information regarding an object category for the object and matching the object category to a category associated with the container effect, where the container effect defines virtual effects for objects associated therewith. The method also includes generating, by the computing system, a virtual environment including the object by retrieving a model of the object and utilizing the model and the container effect to render a virtual object.
Type: Grant
Filed: September 2, 2021
Date of Patent: July 25, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Srilatha P. Raghavan, Nikhil Vijay Chandhok
-
Patent number: 11710282
Abstract: Methods for rendering augmented reality (AR) content are presented. An a priori defined 3D albedo model of an object is leveraged to adjust AR content so that it appears as a natural part of a scene. Disclosed devices recognize a known object having a corresponding albedo model. The devices compare the observed object to the known albedo model to determine a content transformation referred to as an estimated shading (environmental shading) model. The transformation is then applied to the AR content to generate adjusted content, which is then rendered and presented for consumption by a user.
Type: Grant
Filed: October 19, 2021
Date of Patent: July 25, 2023
Assignee: Nant Holdings IP, LLC
Inventors: Matheen Siddiqui, Kamil Wnuk
-
Patent number: 11710283
Abstract: Various implementations disclosed herein include devices, systems, and methods that enable faster and more efficient real-time physical object recognition, information retrieval, and updating of a CGR environment. In some implementations, the CGR environment is provided at a first device based on a classification of the physical object, image or video data including the physical object is transmitted by the first device to a second device, and the CGR environment is updated by the first device based on a response associated with the physical object received from the second device.
Type: Grant
Filed: October 22, 2021
Date of Patent: July 25, 2023
Assignee: Apple Inc.
Inventors: Eshan Verma, Daniel Ulbricht, Angela Blechschmidt, Mohammad Haris Baig, Chen-Yu Lee, Tanmay Batra
-
Patent number: 11710284
Abstract: A system comprising a user device, the user device comprising: sensors configured to sense data related to a physical environment of the user device; displays; hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the hardware processors to: place a virtual object in a 3D scene displayed by the user device, determine a pose of the user device with respect to a physical location in the physical environment of the user device, and generate an image of virtual content based on the pose of the user device with respect to the placed virtual object, wherein the image of the virtual content is projected by the displays of the user device in a predetermined location relative to the physical location in the physical environment of the user device.
Type: Grant
Filed: December 14, 2021
Date of Patent: July 25, 2023
Assignee: Campfire 3D, Inc.
Inventors: Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
-
Patent number: 11710285
Abstract: Disclosed is a location tracking system and associated methods for precisely locating a target device with a recipient device via different forms of location tracking and augmented reality. The recipient device receives a first position of the target device over a data network. The recipient device is moved according to the first position until the target device is in Ultra-WideBand (“UWB”) signaling range of the recipient device. The recipient device then measures a distance and direction of the target device relative to the recipient device based on Time-of-Flight (“ToF”) measurements generated from the UWB signaling. The recipient device determines a second position of the target device based on the distance and direction of the target device, and generates an augmented reality view with a visual reference at a particular position in images of a captured scene that corresponds to the second position of the target device.
Type: Grant
Filed: October 19, 2022
Date of Patent: July 25, 2023
Assignee: ON LLC
Inventor: Luis Contreras
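The ToF-to-position step above can be sketched numerically. This is an illustrative sketch, not the patented method: it assumes standard single-sided two-way ranging (distance from half the round trip minus the responder's reply delay) and a hypothetical 2D bearing measurement for the direction.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def uwb_distance(round_trip_s, reply_delay_s):
    """Single-sided two-way ranging: time of flight is half the round-trip
    time minus the responder's known reply delay."""
    tof = (round_trip_s - reply_delay_s) / 2.0
    return C * tof

def target_position(recipient_xy, distance, bearing_rad):
    """Second (refined) target position: offset the recipient's position
    by the measured distance along the measured direction."""
    x, y = recipient_xy
    return (x + distance * math.cos(bearing_rad),
            y + distance * math.sin(bearing_rad))

d = uwb_distance(round_trip_s=1.1e-7, reply_delay_s=1.0e-7)  # ~1.5 m
pos = target_position((0.0, 0.0), d, math.pi / 2)            # ~1.5 m due 'north'
```

In practice the clock drift between the two devices adds a ranging error term, which is why double-sided ranging or drift compensation is typically layered on top.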
-
Patent number: 11710286
Abstract: In some implementations, a method includes obtaining a virtual object kit that includes a set of virtual object templates of a particular virtual object type. In some implementations, the virtual object kit includes a plurality of groups of components. In some implementations, each of the plurality of groups of components is associated with a particular portion of a virtual object. In some implementations, the method includes receiving a request to assemble a virtual object. In some implementations, the request includes a selection of components from at least some of the plurality of groups of components. In some implementations, the method includes synthesizing the virtual object in accordance with the request.
Type: Grant
Filed: April 14, 2021
Date of Patent: July 25, 2023
Assignee: APPLE INC.
Inventor: Jack R. Greasley
-
Patent number: 11710287
Abstract: Systems and methods are described for generating a plurality of three-dimensional (3D) proxy geometries of an object, generating, based on the plurality of 3D proxy geometries, a plurality of neural textures of the object, the neural textures defining a plurality of different shapes and appearances representing the object, providing the plurality of neural textures to a neural renderer, receiving, from the neural renderer and based on the plurality of neural textures, a color image and an alpha mask representing an opacity of at least a portion of the object, and generating a composite image based on the pose, the color image, and the alpha mask.
Type: Grant
Filed: August 4, 2020
Date of Patent: July 25, 2023
Assignee: GOOGLE LLC
Inventors: Ricardo Martin Brualla, Daniel Goldman, Sofien Bouaziz, Rohit Kumar Pandey, Matthew Brown
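The final compositing step, combining the renderer's color image and alpha mask, is standard "over" compositing. The sketch below shows just that last step; the neural renderer that would produce `color` and `alpha` is assumed, not implemented.

```python
import numpy as np

def composite(color, alpha, background):
    """'Over' compositing: blend the rendered color image onto the
    background using the alpha (opacity) mask."""
    a = alpha[..., np.newaxis]  # broadcast the mask over the RGB channels
    return a * color + (1.0 - a) * background

h, w = 2, 2
color = np.ones((h, w, 3))        # white rendered object (from the renderer)
background = np.zeros((h, w, 3))  # black background plate
alpha = np.array([[1.0, 0.5],     # opaque, half-transparent,
                  [0.0, 1.0]])    # fully transparent, opaque
out = composite(color, alpha, background)
```

A fractional alpha is what lets thin or fuzzy object boundaries (hair, fur) blend smoothly instead of showing hard silhouette edges.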
-
Patent number: 11710288
Abstract: An editing terminal includes a simple display data acquisition unit that acquires simple display data from an item management server, an item selection processing unit that receives selection of an item from a plurality of items displayed using the simple display data, a three-dimensional data acquisition unit that acquires three-dimensional data of a selected item from the item management server, and an editing processing unit that displays an editing space on an editing screen on the basis of editing space information, receives an input of operation information regarding editing of the editing space using the three-dimensional data of the selected item, transmits the operation information to an editing server, and displays the editing space after editing on the editing screen.
Type: Grant
Filed: October 19, 2022
Date of Patent: July 25, 2023
Assignee: CLUSTER, INC.
Inventors: Daiki Handa, Shoma Sato, Hiroyuki Tomine