Patents Examined by Ming Wu
-
Patent number: 12141927
Abstract: Methods and systems for rendering augmented reality display data to locations of a physical presentation environment based on a presentation configuration are provided. A physical presentation environment configuration may be accessed that includes locations of a physical presentation environment for mapping augmented reality display data. The augmented reality display data may include a plurality of augmented reality objects that are rendered for display. Presentation attributes of the augmented reality display data may be used in conjunction with the presentation configuration for mapping and rendering the augmented reality display data. The rendered augmented reality display data may be dynamically interactive, and may be generated based on previous presentation configurations, mapping preferences, mapping limitations, and/or other factors.
Type: Grant
Filed: June 30, 2017
Date of Patent: November 12, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Michel Pahud, Nathalie Riche, Eyal Ofek, Christophe Hurter, Steven Mark Drucker
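A minimal sketch of the mapping idea described in the abstract; the classes, preference structure, and assignment rule below are illustrative assumptions, not the patented method.

```python
# Illustrative sketch: map AR objects to presentation-environment locations
# using presentation attributes plus a configuration of mapping preferences.
from dataclasses import dataclass

@dataclass
class ARObject:
    name: str
    kind: str              # presentation attribute, e.g. "chart", "text", "3d-model"

@dataclass
class EnvironmentLocation:
    name: str
    supported_kinds: set   # kinds of AR content this surface can host
    occupied: bool = False

def map_objects(objects, locations, preferences=None):
    """Assign each AR object to the first compatible, unoccupied location,
    honoring an optional per-kind preference order over location names."""
    preferences = preferences or {}
    mapping = {}
    for obj in objects:
        preferred = preferences.get(obj.kind, [])
        rank = lambda loc: preferred.index(loc.name) if loc.name in preferred else len(preferred)
        for loc in sorted(locations, key=rank):
            if not loc.occupied and obj.kind in loc.supported_kinds:
                mapping[obj.name] = loc.name
                loc.occupied = True
                break
    return mapping

if __name__ == "__main__":
    objs = [ARObject("sales-chart", "chart"), ARObject("agenda", "text")]
    locs = [EnvironmentLocation("wall", {"chart", "text"}),
            EnvironmentLocation("tabletop", {"3d-model", "chart"})]
    print(map_objects(objs, locs, preferences={"chart": ["tabletop", "wall"]}))
    # {'sales-chart': 'tabletop', 'agenda': 'wall'}
```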
-
Patent number: 12133690
Abstract: A system or method according to at least one embodiment of the present disclosure includes receiving a preoperative image of a patient in a first posture; receiving an intraoperative image of the patient in a second posture; comparing the preoperative image of the patient in the first posture with the intraoperative image of the patient in the second posture; and determining, based on the comparison, a difference between the first posture and the second posture, such that adequate surgical changes can be imparted to the second posture to achieve a satisfactory surgical outcome.
Type: Grant
Filed: October 8, 2021
Date of Patent: November 5, 2024
Assignee: Medtronic Navigation, Inc.
Inventor: Saba Pasha
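The abstract does not specify how the posture difference is quantified; as one plausible illustration, the sketch below measures the change in segment angles between spine landmarks located in the preoperative and intraoperative images. The landmark format and angle convention are assumptions.

```python
# Illustrative only: quantify a posture difference as the per-segment change in
# angle (w.r.t. vertical) between successive landmarks in the two images.
import numpy as np

def segment_angles(landmarks):
    """Angles (degrees) of each landmark-to-landmark segment relative to vertical."""
    diffs = np.diff(np.asarray(landmarks, dtype=float), axis=0)
    return np.degrees(np.arctan2(diffs[:, 0], diffs[:, 1]))

def posture_difference(preop_landmarks, intraop_landmarks):
    """Per-segment angular change between the first and second posture."""
    return segment_angles(intraop_landmarks) - segment_angles(preop_landmarks)

if __name__ == "__main__":
    preop = [(0, 0), (5, 50), (2, 100), (0, 150)]     # (x, y) landmark positions in mm
    intraop = [(0, 0), (2, 50), (1, 100), (0, 150)]
    print(posture_difference(preop, intraop))          # e.g. [-3.4, ...] degrees
```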
-
Patent number: 12131416
Abstract: A method of forming a pixel-aligned volumetric avatar includes receiving multiple two-dimensional images having at least two or more fields of view of a subject. The method also includes extracting multiple image features from the two-dimensional images using a set of learnable weights, projecting the image features along a direction between a three-dimensional model of the subject and a selected observation point for a viewer, and providing, to the viewer, an image of the three-dimensional model of the subject. A system and a non-transitory, computer-readable medium storing instructions to perform the above method are also provided.
Type: Grant
Filed: December 20, 2021
Date of Patent: October 29, 2024
Assignee: Meta Platforms Technologies, LLC
Inventors: Stephen Anthony Lombardi, Jason Saragih, Tomas Simon Kreuz, Shunsuke Saito, Michael Zollhoefer, Amit Raj, James Henry Hays
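A schematic sketch of the "pixel-aligned" step only, not the actual model: 3D points are projected into each camera view, the corresponding image features are sampled, and the per-view samples are averaged. Feature extraction is stubbed out and a simple pinhole camera is assumed.

```python
# Illustrative pixel-aligned feature fusion across multiple camera views.
import numpy as np

def project(points_3d, K, R, t):
    """Pinhole projection of Nx3 world points to Nx2 pixel coordinates."""
    cam = points_3d @ R.T + t            # world -> camera coordinates
    uv = cam[:, :2] / cam[:, 2:3]        # perspective divide
    return uv @ K[:2, :2].T + K[:2, 2]   # apply focal lengths / principal point

def sample_features(feature_map, uv):
    """Nearest-neighbour sampling of an HxWxC feature map at pixel coordinates."""
    h, w, _ = feature_map.shape
    ij = np.clip(np.round(uv[:, ::-1]).astype(int), 0, [h - 1, w - 1])
    return feature_map[ij[:, 0], ij[:, 1]]

def fuse_pixel_aligned(points_3d, views):
    """Average per-view features for each 3D point across all camera views.
    Each view is a dict with 'features', 'K', 'R', 't' (assumed layout)."""
    feats = [sample_features(v["features"], project(points_3d, v["K"], v["R"], v["t"]))
             for v in views]
    return np.mean(feats, axis=0)
```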
-
Patent number: 12125131
Abstract: A method of generating a 3D video, a method of training a neural network model, an electronic device, and a storage medium, which relate to the field of image processing, and in particular to the technical fields of computer vision, augmented/virtual reality, and deep learning. The method includes: determining, based on an input speech feature, a principal component analysis (PCA) coefficient by using a first network, wherein the PCA coefficient is used to generate the 3D video; correcting the PCA coefficient by using a second network; generating lip movement information based on the corrected PCA coefficient and a PCA parameter for a neural network model, wherein the neural network model includes the first network and the second network; and applying the lip movement information to a pre-constructed 3D basic avatar model to obtain a 3D video with a lip movement effect.
Type: Grant
Filed: December 5, 2022
Date of Patent: October 22, 2024
Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
Inventors: Zhe Peng, Yuqiang Liu, Fanyu Geng
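A hedged sketch of the reconstruction step only: given corrected PCA coefficients and a PCA basis (the "PCA parameter"), recover per-frame lip vertex positions to drive the base avatar. The two networks are omitted, and the array shapes are assumptions.

```python
# Illustrative PCA-based lip reconstruction from corrected coefficients.
import numpy as np

def lip_vertices(pca_coeffs, pca_mean, pca_basis):
    """
    pca_coeffs: (T, K) corrected coefficients for T frames
    pca_mean:   (V*3,) mean lip-vertex vector
    pca_basis:  (K, V*3) principal components
    returns     (T, V, 3) lip vertex positions per frame
    """
    flat = pca_mean + pca_coeffs @ pca_basis
    return flat.reshape(pca_coeffs.shape[0], -1, 3)

if __name__ == "__main__":
    T, K, V = 25, 8, 100
    rng = np.random.default_rng(0)
    verts = lip_vertices(rng.normal(size=(T, K)),
                         rng.normal(size=(V * 3,)),
                         rng.normal(size=(K, V * 3)))
    print(verts.shape)   # (25, 100, 3)
```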
-
Patent number: 12125153
Abstract: The present invention provides an optical device for augmented reality having a ghost image blocking function. The optical device includes: an optical means configured to transmit at least part of real object image light, which is image light output from a real object, therethrough toward the pupil of an eye of a user; a first reflective unit disposed inside the optical means and configured to transfer augmented reality image light, which is image light corresponding to an image for augmented reality output from an image output unit, to a second reflective unit; and the second reflective unit disposed inside the optical means and configured to transfer the augmented reality image light to the pupil of the eye of the user by reflecting it toward the pupil, thereby providing the image for augmented reality to the user.
Type: Grant
Filed: October 22, 2020
Date of Patent: October 22, 2024
Assignee: LETINAR CO., LTD
Inventor: Jeong Hun Ha
-
Patent number: 12118671
Abstract: A computer-implemented method of modelling a common structure component, the method comprising, in a modelling computer system: receiving a plurality of captured frames, each frame comprising a set of 3D structure points, in which at least a portion of a common structure component is captured; computing a first reference position within at least one first frame of the plurality of frames; selectively extracting first 3D structure points of the first frame based on the first reference position computed for the first frame; computing a second reference position within a second frame of the plurality of frames; selectively extracting second 3D structure points of the second frame based on the second reference position computed for the second frame; and aggregating the first 3D structure points and the second 3D structure points, thereby generating an aggregate 3D model of the common structure component based on the first and second reference positions.
Type: Grant
Filed: July 20, 2020
Date of Patent: October 15, 2024
Assignee: Five AI Limited
Inventors: Robert Chandler, Simon Walker, Benjamin Fuller, Thomas Westmacott
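A minimal sketch under stated assumptions (not the claimed implementation): per-frame reference positions are used to select nearby 3D points, each frame's selection is expressed relative to its reference, and the results are stacked into one aggregate model.

```python
# Illustrative extraction-and-aggregation of 3D structure points across frames.
import numpy as np

def extract_near(points, reference, radius):
    """Select the frame's 3D points within `radius` of the reference position."""
    mask = np.linalg.norm(points - reference, axis=1) <= radius
    return points[mask]

def aggregate_frames(frames, references, radius=2.0):
    """frames: list of (N_i, 3) arrays; references: list of (3,) reference positions.
    Returns an aggregate (M, 3) model expressed relative to the references."""
    aligned = [extract_near(pts, ref, radius) - ref
               for pts, ref in zip(frames, references)]
    return np.vstack(aligned)
```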
-
Patent number: 12120287
Abstract: An apparatus for viewing in a structure with objects having a first participant and at least a second participant. The apparatus includes a first VR headset to be worn by the first participant and a second VR headset to be worn by the second participant. Each participant sees every other participant in the structure as every other participant physically appears in real time in a simulated world displayed about them by the respective VR headset each participant is wearing. Each participant sees the simulated world from their own correct perspective in the structure. Each participant is able to interact with the simulated world and simultaneously with desired physical objects in the structure, and sees the desired physical objects as they physically appear in real time in the structure. A method for a first participant and at least a second participant viewing in a structure with objects is also provided.
Type: Grant
Filed: February 7, 2022
Date of Patent: October 15, 2024
Inventor: Kenneth Perlin
-
Patent number: 12112428
Abstract: In various examples, shader bindings may be recorded in a shader binding table that includes shader records. Geometry of a 3D scene may be instantiated using object instances, and each may be associated with a respective set of the shader records using a location identifier of the set of shader records in memory. The set of shader records may represent shader bindings for an object instance under various predefined conditions. One or more of these predefined conditions may be implicit in the way the shader records are arranged in memory (e.g., indexed by ray type, by sub-geometry, etc.). For example, a section selector value (e.g., a section index) may be computed to locate and select a shader record based at least in part on a result of a ray tracing query (e.g., what sub-geometry was hit, what ray type was traced, etc.).
Type: Grant
Filed: July 17, 2023
Date of Patent: October 8, 2024
Assignee: NVIDIA Corporation
Inventors: Martin Stich, Ignacio Llamas, Steven Parker
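An illustrative sketch of shader-binding-table style indexing. The index formula is an assumption modeled on common ray-tracing APIs (instance base + sub-geometry index × ray-type count + ray type), not quoted from the patent.

```python
# Illustrative section-index computation to select a shader record from a table.
from dataclasses import dataclass

@dataclass
class ShaderRecord:
    shader_id: str
    resources: dict

def select_record(table, instance_base, geometry_index, ray_type, num_ray_types):
    """Locate an instance's shader record from the result of a ray-tracing query
    (which sub-geometry was hit and which ray type was traced)."""
    section = instance_base + geometry_index * num_ray_types + ray_type
    return table[section]

# Usage: two sub-geometries x two ray types (0 = radiance, 1 = shadow) per instance.
table = [ShaderRecord(f"geom{g}_ray{r}", {}) for g in range(2) for r in range(2)]
print(select_record(table, instance_base=0, geometry_index=1, ray_type=1,
                    num_ray_types=2).shader_id)   # geom1_ray1
```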
-
Patent number: 12100064
Abstract: The present application discloses a warp execution method used for SPs of an SM of a GPU, and an associated GPU. The SPs share a scratchpad memory, and the warp execution method includes: when the predetermined time point for warp-loading is reached, checking a first indicator to obtain the size of the space with the status of blank in the scratchpad memory, to determine whether to load the warp, wherein the first indicator is used to indicate a starting position of the space with the status of data-in-use and an ending position of the space with the status of blank; and when the predetermined time point for computing is reached, checking a second indicator and a third indicator to obtain the size of the space with the status of data-not-in-use in the scratchpad memory, to determine whether to compute the warp.
Type: Grant
Filed: October 12, 2022
Date of Patent: September 24, 2024
Assignee: ALIBABA (CHINA) CO., LTD.
Inventors: Yuan Gao, Fei Sun, Haoran Li, Guyue Huang, Chen Zhang, Ruiguang Zhong
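A simplified sketch of the bookkeeping idea only, not the patented hardware: the shared scratchpad is accounted for as "blank", "data-in-use", and "data-not-in-use" regions, a warp is loaded only if the blank region can hold it, and computation starts only if its data is resident. The class and its counters are illustrative.

```python
# Illustrative scratchpad-space accounting for warp loading and computing.
class ScratchpadTracker:
    def __init__(self, capacity):
        self.capacity = capacity
        self.in_use = 0        # space whose data is currently being computed on
        self.not_in_use = 0    # loaded data waiting to be computed on

    def blank(self):
        return self.capacity - self.in_use - self.not_in_use

    def try_load_warp(self, warp_bytes):
        """At the warp-loading time point: admit the load only if it fits."""
        if self.blank() >= warp_bytes:
            self.not_in_use += warp_bytes
            return True
        return False

    def try_compute_warp(self, warp_bytes):
        """At the computing time point: start only if the warp's data is resident."""
        if self.not_in_use >= warp_bytes:
            self.not_in_use -= warp_bytes
            self.in_use += warp_bytes
            return True
        return False

    def finish_warp(self, warp_bytes):
        """Release the warp's space back to blank when computation completes."""
        self.in_use -= warp_bytes
```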
-
Patent number: 12094052
Abstract: An improved living area estimation system is described herein that can control and/or use data obtained by a residential robotic device that operates within a structure to estimate the living area. In particular, a residential robotic device may navigate along a predetermined or dynamically-determined route on one floor of the structure. As the residential robotic device travels along the route, the residential robotic device can use one or more sensors to track the area covered. The living area estimation system can obtain a traversed area map directly or indirectly from the residential robotic device, use image processing techniques to enhance the traversed area map, and estimate a living area of a floor on which the residential robotic device operated using the enhanced traversed area map. The living area estimation system can then use artificial intelligence and the estimated floor living area to estimate the living area of a structure.
Type: Grant
Filed: January 17, 2023
Date of Patent: September 17, 2024
Assignee: CoreLogic Solutions, LLC
Inventors: Frankie Famighetti, Anand Singh, Savita Bhayal
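A back-of-the-envelope sketch in which the image-processing and AI steps are reduced to placeholders: fill small gaps in the robot's traversed-area grid, count covered cells to estimate one floor's area, and extrapolate to the structure. Cell size, the morphological step, and the extrapolation factor are assumptions.

```python
# Illustrative floor-area estimate from a traversed-area occupancy grid.
import numpy as np
from scipy.ndimage import binary_closing

def floor_area_sq_m(traversed_map, cell_size_m):
    """traversed_map: 2D boolean occupancy grid of cells the robot covered."""
    enhanced = binary_closing(traversed_map, structure=np.ones((3, 3)))  # fill gaps
    return enhanced.sum() * cell_size_m ** 2

def structure_area_sq_m(floor_area, num_floors, per_floor_factor=1.0):
    """Stand-in for the AI model: extrapolate from one floor to the structure."""
    return floor_area * num_floors * per_floor_factor
```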
-
Patent number: 12086959
Abstract: An image processing apparatus that is capable of ensuring a wide dynamic range while reducing unnatural artifacts caused by a moving body. The image processing apparatus acquires a plurality of images amplified with different gains, respectively, for each of exposure operations with different exposure amounts, determines whether or not there is a moving body having moved between images acquired for each of the exposure operations with different exposure amounts, and selects images to be used for image combination out of the plurality of acquired images, based on a result of the determination.
Type: Grant
Filed: October 19, 2022
Date of Patent: September 10, 2024
Assignee: Canon Kabushiki Kaisha
Inventor: Kazuki Sato
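A conceptual sketch only; the gain normalization, motion threshold, and blend rule below are invented for illustration, not Canon's selection logic: exposures are compared after gain normalization, and frames that disagree in a region (a moving body) are excluded from the combination there.

```python
# Illustrative exposure combination that excludes moving-body regions.
import numpy as np

def combine_exposures(images, gains, motion_threshold=0.1):
    """images: list of HxW float arrays taken with different exposure amounts,
    each already amplified by the corresponding entry of `gains`."""
    normalized = [img / g for img, g in zip(images, gains)]   # comparable scale
    reference = normalized[0]
    output = reference.copy()
    for norm in normalized[1:]:
        moving = np.abs(norm - reference) > motion_threshold  # moving-body mask
        blended = 0.5 * (reference + norm)                    # combine where static
        output = np.where(moving, reference, blended)         # keep reference where moving
        reference = output
    return output
```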
-
Patent number: 12079940
Abstract: A method of providing a geographically distributed live mixed-reality meeting is described. The method comprises receiving, from a camera at a first endpoint, a live video stream; generating a mixed reality view incorporating the received video stream; rendering the mixed reality view at a display at the first endpoint and transmitting the mixed reality view to at least one other geographically distant endpoint; receiving data defining a bounding area; calculating a real world anchor for the bounding area using the data defining the bounding area; rendering the bounding area in the mixed reality view at a real world position determined using the real world anchor; and applying different rule sets to content objects placed into the mixed reality view by users dependent upon the position of the content objects relative to the bounding area in real world space.
Type: Grant
Filed: June 25, 2022
Date of Patent: September 3, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Anthony Arnold Wieser, Martin Grayson, Kenton Paul Anthony O'Hara, Edward Sean Lloyd Rintel, Camilla Alice Longden, Philipp Steinacher, Dominic Roedel, Advait Sarkar, Shu Sam Chen, Jens Emil Krarup Gronbaek, Ding Wang
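A minimal sketch of the last step only; the rule names, rule contents, and rectangular containment test are invented for illustration: content objects placed into the shared view receive one rule set if they fall inside the real-world-anchored bounding area and another if they fall outside.

```python
# Illustrative position-dependent rule selection for mixed-reality content objects.
from dataclasses import dataclass

@dataclass
class BoundingArea:
    anchor: tuple        # real-world anchor (x, y, z) of the area's corner
    size: tuple          # (width, depth) of the area on the anchored plane

    def contains(self, position):
        dx, dy = position[0] - self.anchor[0], position[1] - self.anchor[1]
        return 0 <= dx <= self.size[0] and 0 <= dy <= self.size[1]

INSIDE_RULES = {"visible_to": "all_endpoints", "editable_by": "owner_only"}
OUTSIDE_RULES = {"visible_to": "local_endpoint", "editable_by": "anyone"}

def rules_for(content_position, bounding_area):
    """Pick the rule set based on the object's position relative to the area."""
    return INSIDE_RULES if bounding_area.contains(content_position) else OUTSIDE_RULES
```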
-
Patent number: 12062174
Abstract: A method, machine-readable medium, and system for semantic segmentation of 3D point cloud data include determining ground data points of the 3D point cloud data, categorizing non-ground data points relative to a ground surface determined from the ground data points to determine legitimate non-ground data points, segmenting the determined legitimate non-ground and ground data points based on a set of common features, applying logical rules to a data structure of the features built on the segmented determined non-ground and ground data points based on their spatial relationships and incorporated within a machine learning system, and constructing a 3D semantics model from the application of the logical rules to the data structure.
Type: Grant
Filed: September 15, 2021
Date of Patent: August 13, 2024
Assignee: SRI International
Inventors: Anil Usumezbas, Bogdan Calin Mihai Matei, Rakesh Kumar, Supun Samarasekera
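A heavily simplified sketch; the ground estimate, thresholds, features, and rules are placeholders rather than SRI's pipeline: points are split into ground and non-ground, near-surface non-ground points are kept as "legitimate", and toy logical rules label a segmented cluster from simple features.

```python
# Illustrative ground/non-ground split and rule-based labeling of a cluster.
import numpy as np

def split_ground(points, ground_tol=0.2, max_height=30.0):
    """points: (N, 3) array. Returns (ground points, legitimate non-ground points)."""
    ground_z = np.median(points[:, 2])                      # crude ground estimate
    is_ground = np.abs(points[:, 2] - ground_z) < ground_tol
    non_ground = points[~is_ground]
    legit = non_ground[non_ground[:, 2] - ground_z < max_height]  # drop outliers
    return points[is_ground], legit

def label_cluster(cluster, ground_z):
    """Toy logical rules over simple features of an already-segmented cluster."""
    height = cluster[:, 2].max() - ground_z
    footprint = np.ptp(cluster[:, 0]) * np.ptp(cluster[:, 1])
    if height > 4.0 and footprint > 20.0:
        return "building"
    if height > 2.0:
        return "tree_or_pole"
    return "low_object"
```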
-
Patent number: 12062127
Abstract: According to certain embodiments, an electronic device comprises: at least one display module; at least one communication module; at least one microphone; at least one camera; and a processor, wherein the processor is configured to: capture at least one image through the at least one camera, identify at least one object located around the electronic device, based at least in part on the at least one image, identify sound information attributable to the identified at least one object from first sound data input through the at least one microphone, control the at least one display module to display at least one virtual object corresponding to the identified at least one object at a position corresponding to the at least one object, wherein the position is determined based on the sound information, obtain a first user input associated with a first virtual object among the at least one virtual object, and determine a noise cancellation (NC) level of a first object corresponding to the first virtual object, based
Type: Grant
Filed: January 6, 2022
Date of Patent: August 13, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Jeongwon Park, Chulkwi Kim, Seungnyun Kim
-
Patent number: 12056904
Abstract: This disclosure relates to methods and systems for encoding or decoding a 3D mesh with constrained geometry dynamic range. The example decoding method includes receiving a coded bitstream comprising a geometry patch for a three-dimension mesh; extracting, from the coded bitstream, a first syntax element indicating whether the geometry patch is partitioned, wherein the geometry patch comprises one or more partitions; and for a respective partition in the geometry patch, obtaining a dynamic range of pixel values for points in the respective partition that correspond to vertices in the three-dimension mesh, wherein the dynamic range enables the geometry patch to be coded within a predetermined bit depth.
Type: Grant
Filed: October 26, 2022
Date of Patent: August 6, 2024
Assignee: TENCENT AMERICA LLC
Inventors: Xiang Zhang, Xiaozhong Xu, Jun Tian, Chao Huang, Shan Liu
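An illustrative decoder-side sketch; the syntax element names and dictionary layout are invented, not the actual bitstream syntax: a flag indicates whether the patch is partitioned, and each partition's pixel values (kept within the bit depth) are shifted by that partition's dynamic-range offset to recover vertex coordinates.

```python
# Illustrative per-partition dynamic-range handling for a geometry patch.
def decode_geometry_patch(syntax, bit_depth=10):
    partitions = syntax["partitions"] if syntax["partition_flag"] else [syntax]
    decoded = []
    for part in partitions:
        lo = part["range_min"]                          # per-partition dynamic range
        assert part["range_max"] - lo < (1 << bit_depth)  # fits the coded bit depth
        decoded.append([lo + pixel for pixel in part["pixels"]])
    return decoded

# Usage with a toy two-partition patch:
patch = {"partition_flag": True,
         "partitions": [{"range_min": 0,    "range_max": 900,  "pixels": [3, 700]},
                        {"range_min": 4096, "range_max": 4600, "pixels": [10, 480]}]}
print(decode_geometry_patch(patch))    # [[3, 700], [4106, 4576]]
```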
-
Patent number: 12057091
Abstract: A head-up display device (HUD) 1 includes a sound analysis portion 42 that determines whether a specific alarm is present in sound signals from microphones 40a to 40d, which collect ambient sound around a vehicle 2. When a specific alarm has been detected by the sound analysis portion 42, an image display unit 30 displays icons 91, 92 representing the specific alarm. The specific alarm includes a siren sound emitted by an emergency vehicle and a horn sound emitted by a general vehicle, and the sound analysis portion 42 analyzes the frequency of input sound signals to determine whether the sound is the specific alarm.
Type: Grant
Filed: May 24, 2019
Date of Patent: August 6, 2024
Assignee: MAXELL, LTD.
Inventors: Hidenori Takata, Nozomu Shimoda
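A rough sketch of frequency-based alarm classification; the frequency bands and dominance ratio are illustrative guesses, not values from the patent.

```python
# Illustrative siren/horn classification from the spectrum of a microphone signal.
import numpy as np

SIREN_BAND = (600.0, 1500.0)   # rough range of two-tone emergency sirens
HORN_BAND = (300.0, 500.0)     # typical car-horn fundamentals

def band_energy(signal, sample_rate, band):
    """Spectral energy of the signal inside a frequency band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return spectrum[(freqs >= band[0]) & (freqs <= band[1])].sum()

def classify_alarm(signal, sample_rate, ratio=5.0):
    """Return 'siren', 'horn', or None depending on which band dominates."""
    siren = band_energy(signal, sample_rate, SIREN_BAND)
    horn = band_energy(signal, sample_rate, HORN_BAND)
    if siren > ratio * horn:
        return "siren"
    if horn > ratio * siren:
        return "horn"
    return None
```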
-
Patent number: 12056820
Abstract: A 3D scanning toolkit to perform operations that include: accessing a first data stream at a client device, wherein the first data stream comprises at least image data; applying a bit mask to the first data stream, the bit mask identifying a portion of the image data; accessing a second data stream at the client device, the second data stream comprising depth data associated with the portion of the image data; generating a point cloud based on the depth data, the point cloud comprising a set of data points that define surface features of an object depicted in the first data stream; and causing display of a visualization of the point cloud upon a presentation of the first data stream at the client device.
Type: Grant
Filed: February 18, 2021
Date of Patent: August 6, 2024
Assignee: SDC U.S. SmilePay SPV
Inventors: Jeffrey Huber, Aaron Thompson, Ricky Reusser, Dustin Dorroh, Eric Arnebäck, Garrett Spiegel
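A sketch of the point-cloud generation step under an assumed pinhole camera (fx, fy, cx, cy); the toolkit's actual data formats are not given in the abstract. A bit mask selects the object's pixels in the image stream, and the aligned depth stream is back-projected to 3D surface points.

```python
# Illustrative back-projection of masked depth pixels into a point cloud.
import numpy as np

def depth_to_point_cloud(depth, mask, fx, fy, cx, cy):
    """depth: HxW metres; mask: HxW bool selecting the object of interest."""
    v, u = np.nonzero(mask & (depth > 0))
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)        # (N, 3) surface points

if __name__ == "__main__":
    depth = np.full((480, 640), 0.8)           # flat surface 0.8 m from the camera
    mask = np.zeros((480, 640), dtype=bool)
    mask[200:280, 280:360] = True              # bit mask over the object
    cloud = depth_to_point_cloud(depth, mask, 525.0, 525.0, 319.5, 239.5)
    print(cloud.shape)                         # (6400, 3)
```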
-
Patent number: 12045924
Abstract: Graphics processing unit (GPU) performance and power efficiency is improved using machine learning to tune operating parameters based on performance monitor values and application information. Performance monitor values are processed using machine learning techniques to generate model parameters, which are used by a control unit within the GPU to provide real-time updates to the operating parameters. In one embodiment, a neural network processes the performance monitor values to generate operating parameters in real time.
Type: Grant
Filed: September 15, 2022
Date of Patent: July 23, 2024
Assignee: NVIDIA Corporation
Inventors: Rouslan L. Dimitrov, Dale L. Kirkland, Emmett M. Kilgariff, Sachin Satish Idgunji, Siddharth Sharma
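A toy sketch of the control loop, not NVIDIA's design: a tiny neural network maps normalized performance-monitor values to bounded operating-parameter nudges. The weights would come from offline training; here they are random placeholders, and the parameter names and step sizes are invented.

```python
# Illustrative ML-driven tuning of GPU operating parameters from perf counters.
import numpy as np

STEP = {"core_clock_mhz": 15, "mem_clock_mhz": 25}   # max per-update nudge (assumed)

class TuningModel:
    """Tiny two-layer network mapping performance counters to parameter nudges."""
    def __init__(self, n_counters, n_params, hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(scale=0.1, size=(n_counters, hidden))
        self.w2 = rng.normal(scale=0.1, size=(hidden, n_params))

    def predict(self, counters):
        """counters: performance-monitor values normalized to [0, 1]."""
        hidden = np.tanh(counters @ self.w1)
        return np.tanh(hidden @ self.w2)              # nudges in [-1, 1]

def apply_update(current_params, nudges):
    """Translate model output into bounded real-time operating-parameter updates."""
    return {name: current_params[name] + STEP[name] * float(n)
            for name, n in zip(STEP, nudges)}
```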
-
Patent number: 12033274
Abstract: The present disclosure provides new and innovative systems and methods for generating eye models with realistic color. In an example, a computer-implemented method includes obtaining refraction data, obtaining mesh data, generating aligned model data by aligning the refraction data and the mesh data, calculating refraction points in the aligned model data, and calculating an approximated iris color based on the refraction points and the aligned model data by calculating melanin information for the aligned model data based on the refraction points for iris pixels in the aligned model data.
Type: Grant
Filed: August 25, 2022
Date of Patent: July 9, 2024
Assignee: TRANSFOLIO, LLC
Inventor: Jeroen Snepvangers
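A loose sketch of the final coloring step only; the refraction-point and melanin calculations are reduced to a single invented mixing rule: the per-pixel melanin fraction estimated for the aligned model blends a low-melanin and a high-melanin reference iris color.

```python
# Illustrative melanin-to-color mapping for iris pixels of an aligned eye model.
import numpy as np

LOW_MELANIN_RGB = np.array([0.55, 0.70, 0.80])    # blue-gray iris (assumed reference)
HIGH_MELANIN_RGB = np.array([0.35, 0.22, 0.10])   # dark brown iris (assumed reference)

def approximate_iris_color(melanin_fraction):
    """melanin_fraction: array in [0, 1] for the iris pixels of the aligned model."""
    m = np.clip(np.asarray(melanin_fraction, dtype=float), 0.0, 1.0)[..., None]
    return (1.0 - m) * LOW_MELANIN_RGB + m * HIGH_MELANIN_RGB

print(approximate_iris_color([0.1, 0.8]))   # light vs dark iris RGB values
```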
-
Patent number: 12033261
Abstract: One example method involves a processing device that performs operations that include receiving a request to retarget a source motion into a target object. Operations further include providing the target object to a contact-aware motion retargeting neural network trained to retarget the source motion into the target object. The contact-aware motion retargeting neural network is trained by accessing training data that includes a source object performing the source motion. The contact-aware motion retargeting neural network generates retargeted motion for the target object, based on a self-contact having a pair of input vertices. The retargeted motion is subject to motion constraints that: (i) preserve a relative location of the self-contact and (ii) prevent self-penetration of the target object.
Type: Grant
Filed: July 26, 2021
Date of Patent: July 9, 2024
Assignee: ADOBE INC.
Inventors: Ruben Villegas, Jun Saito, Jimei Yang, Duygu Ceylan Aksit, Aaron Hertzmann
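A sketch of the two motion constraints written as simple loss terms; the network itself and the exact formulation are omitted, and the collision radius and penalty form are assumptions: contacting vertex pairs are kept together, and any vertex pair closer than a radius is penalized to discourage self-penetration.

```python
# Illustrative contact-preservation and self-penetration penalties over mesh vertices.
import numpy as np

def contact_preservation_loss(vertices, contact_pairs):
    """contact_pairs: list of (i, j) vertex indices that should stay in contact."""
    return sum(np.linalg.norm(vertices[i] - vertices[j]) ** 2
               for i, j in contact_pairs)

def self_penetration_loss(vertices, collision_radius=0.02):
    """Penalize vertex pairs that come closer than the collision radius."""
    d = np.linalg.norm(vertices[:, None, :] - vertices[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                      # ignore a vertex with itself
    violation = np.maximum(0.0, collision_radius - d)
    return 0.5 * np.sum(violation ** 2)              # each pair counted twice, halved
```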