Three-Dimensional Patents (Class 345/419)
  • Patent number: 11397322
    Abstract: An image providing system for vehicles includes an imaging unit; a display unit which generates a virtual image of a person; a communication unit which connects for communication to an apparatus outside a vehicle; and a seat occupancy detection unit which detects a seat occupancy state in the vehicle, wherein the display unit controls a display mode of the virtual image on the basis of the seat occupancy state in the vehicle detected by the seat occupancy detection unit in an operating state of the communication unit.
    Type: Grant
    Filed: June 12, 2018
    Date of Patent: July 26, 2022
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Yuji Yasui, Hisao Asaumi, Shion Tokunaga, Masashi Yuki, Yo Ito, Hirotaka Uchitomi
  • Patent number: 11398073
    Abstract: An internet or cloud-based system, method, or platform (“platform”) used to facilitate the conversion of electronic two-dimensional drawings to three-dimensional models. A group of people (“crowd”) found qualified to make such conversions is selected for the conversion. The two-dimensional drawings are transmitted to the crowd for conversion to three-dimensional models. In some embodiments, multiple instances of the same two-dimensional drawings (or image data) are sent to multiple, independent crowd members so that multiple versions of the same three-dimensional model can be created. Once the models are complete and returned, they are compared to each other on multiple features or characteristics. If two or more three-dimensional models are found to match within the prescribed tolerances, they are determined to be an accurate representation of the product or device shown in the two-dimensional drawings.
    Type: Grant
    Filed: June 23, 2020
    Date of Patent: July 26, 2022
    Assignee: Draawn, LLC
    Inventors: James Cotteleer, Mark Cotteleer
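The "match within prescribed tolerances" comparison described in the abstract above can be sketched as a per-feature relative-tolerance check. This is a minimal illustration, not the patented method: the feature names, the 2% tolerance, and the `models_match` helper are all assumptions.

```python
import numpy as np

def models_match(features_a, features_b, rel_tol=0.02):
    """Compare two independently built 3D models on per-feature measurements
    (e.g. bounding-box dimensions, volume, face count) within a relative
    tolerance. rel_tol=0.02 (2%) is an assumed, illustrative tolerance."""
    a = np.asarray(features_a, dtype=float)
    b = np.asarray(features_b, dtype=float)
    # Each feature must agree to within rel_tol of the larger magnitude
    return bool(np.all(np.abs(a - b) <= rel_tol * np.maximum(np.abs(a), np.abs(b))))

# Hypothetical feature vectors: [width, height, volume] from two crowd members
a = [10.0, 2.5, 1200.0]
b = [10.1, 2.52, 1210.0]   # small deviations -> should match
c = [12.0, 2.5, 1200.0]    # 20% width difference -> should not match
match_ab = models_match(a, b)
match_ac = models_match(a, c)
```

With two or more models agreeing under such a check, the platform would accept the geometry as an accurate representation.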
  • Patent number: 11397508
    Abstract: Described herein are techniques for providing a virtual experience including, but not limited to, the use of a virtual experience “pillar,” or virtual rotation of a virtual area and/or a participant in the virtual area. The entry of a participant into a physical environment via a physical entrance area is detected. The participant uses a head-mounted display (HMD) to view a virtual environment associated with the physical environment, the virtual environment including a virtual entrance coinciding with the physical entrance area. An outer virtual environment, and a virtual pillar upon which a virtual avatar representing the participant stands, are caused to be displayed in the virtual environment viewed by the participant.
    Type: Grant
    Filed: June 10, 2020
    Date of Patent: July 26, 2022
    Assignee: Hyper Reality Partners, LLC
    Inventor: Curtis Hickman
  • Patent number: 11393198
    Abstract: Techniques for generating an insurance claim include receiving pupil data from an electronic device. The pupil data indicates a gaze direction of a user. Environment information is received from the electronic device, including point cloud data representing an environment in which the electronic device is currently disposed and a plurality of objects located within the environment. The techniques include determining an identity of an object of the plurality of objects based at least in part on the gaze direction and the environment information. The techniques include receiving, from the electronic device and via the network, information indicative of an input provided by the user, the input corresponding to the object, and comprising at least one of a first user utterance or a hand gesture. The techniques include generating an insurance claim based at least in part on the information and on the identity of the object.
    Type: Grant
    Filed: June 2, 2020
    Date of Patent: July 19, 2022
    Assignee: State Farm Mutual Automobile Insurance Company
    Inventors: Rebecca A. Little, Christopher Robert Galante
  • Patent number: 11393199
    Abstract: The present invention relates to an information display method and system, and a terminal. The method includes: collecting a visual signal in a target area of a display device by using an augmented reality device; judging a user reading scene according to the visual signal; generating a first signal according to the user reading scene; generating a second signal after processing the first signal; collecting a target parameter of the display device; generating a display signal after fusing the second signal with the target parameter; and performing information display on a display interface of the augmented reality device according to the display signal. By adopting the information display method, system, and terminal of the present invention, the user's reading efficiency may be improved.
    Type: Grant
    Filed: July 16, 2020
    Date of Patent: July 19, 2022
    Assignee: YUTOU TECHNOLOGY (HANGZHOU) CO., LTD.
    Inventors: Fuyao Zhang, Yiming Chen
  • Patent number: 11392199
    Abstract: Eyewear providing an interactive augmented reality experience between two users of eyewear devices to perform a shared group task. During a shared group task session, each eyewear displays the same image. An eye tracker in each eyewear detects a portion of the image the respective user is gazing at. Each eyewear generates an indication of the portion of the respective image each eyewear user is gazing at. The indication is shared with the other eyewear, and the eyewear display indicates the portion of the image the other eyewear user is gazing at. This allows each eyewear user to see what the other user is gazing at when collaborating and visually observing the same image.
    Type: Grant
    Filed: June 28, 2021
    Date of Patent: July 19, 2022
    Assignee: Snap Inc.
    Inventor: Ilteris Canberk
  • Patent number: 11393126
    Abstract: A method for calibrating one or more extrinsic parameters of an image sensor includes selecting a first set of parallel feature edges appearing in an image frame captured by the image sensor and determining reference vanishing points for the first set of parallel feature edges. The method then selects a second set of parallel feature edges and projects a plurality of points from the second set of parallel feature edges onto the projection reference frame of the image sensor. The method determines, for the second set of parallel feature edges, second vanishing points located on the projection reference frame and reduces any deviation in the location of the second vanishing points from the reference vanishing points until the deviation is within acceptable predefined limits by recursively: modifying the pre-existing projection matrix, projecting a plurality of points from the second set of parallel feature edges onto the projection reference frame, and determining the second vanishing points after projecting.
    Type: Grant
    Filed: December 17, 2019
    Date of Patent: July 19, 2022
    Assignee: CONTINENTAL AUTOMOTIVE GMBH
    Inventor: Sreejith Markkassery
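The vanishing-point step underlying the calibration above can be sketched with homogeneous coordinates: edges that are parallel in 3D converge in the image, and their intersection is the vanishing point. This is a generic illustration of that geometry, assuming a simple two-edge intersection via cross products, not the patent's recursive optimization:

```python
import numpy as np

def line_through(p, q):
    # Homogeneous line through two image points (cross product of the points)
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(edge_a, edge_b):
    """Intersect two image-plane edges (each given as a pair of pixel points)
    to get the vanishing point of the 3D direction they project from."""
    v = np.cross(line_through(*edge_a), line_through(*edge_b))
    return v[:2] / v[2]  # back to inhomogeneous pixel coordinates

# Two edges that are parallel in 3D and converge at (400, 300) in the image
vp = vanishing_point(((0.0, 100.0), (200.0, 200.0)),
                     ((0.0, 500.0), (200.0, 400.0)))
```

A calibration loop like the one described would then adjust the projection matrix until vanishing points computed this way from the second edge set agree with the reference ones.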
  • Patent number: 11392876
    Abstract: Methods, systems, and computer-readable media for deploying and implementing enterprise policies that control augmented reality computing functions are presented. A computing device may receive policy information defining policies that, when implemented, control capture of augmented renderings. After receiving the policy information, the computing device may intercept a request to capture at least one view having at least one augmented reality element. In response to intercepting the request, the computing device may determine whether the policies allow capture of views comprising augmented reality elements. Based on determining that the policies allow capture, the computing device may store view information associated with the at least one view having the at least one augmented reality element. Based on determining that the policies do not allow capture, the computing device may prevent the at least one view having the at least one augmented reality element from being captured.
    Type: Grant
    Filed: January 4, 2019
    Date of Patent: July 19, 2022
    Assignee: Citrix Systems, Inc.
    Inventor: Thierry Duchastel
  • Patent number: 11393153
    Abstract: A method for performing object occlusion is disclosed. The method includes capturing an image of a physical item; determining a location and an orientation of a first virtual item with an augmented reality registration function; generating an augmented reality image, wherein the augmented reality image comprises a rendering of the first virtual item in the image using a first rendering function to depict the location and orientation of the first virtual item in the image and a rendering of a second virtual item in the image with a second rendering function; and displaying the augmented reality image, wherein occlusion of the first virtual item by the physical item is shown in the augmented reality image based on occlusion of the first virtual item by the second virtual item, and wherein the first virtual item depicts a next step in step-by-step instructions for an assembly.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: July 19, 2022
    Assignee: The Texas A&M University System
    Inventor: Wei Yan
  • Patent number: 11393167
    Abstract: Image processing is carried out by accepting an array of voxels that include data representing a physical property of a 3-dimensional object, segmenting the array of voxels into a plurality of regional subarrays of voxels that respectively satisfy predetermined criteria, transforming the subarrays into respective triangular meshes, the meshes having triangles that surround the subarrays and intercept the outer voxels of the subarrays, and rendering the triangular meshes on a display.
    Type: Grant
    Filed: December 31, 2018
    Date of Patent: July 19, 2022
    Assignee: Biosense Webster (Israel) Ltd.
    Inventors: Benjamin Cohen, Lior Zar, Aharon Turgeman, Natan Sharon Katz
  • Patent number: 11389118
    Abstract: A system and method for extracting breathing patterns from PPG signals are provided. The method includes designing a filter for extracting breathing patterns from PPG signals. Designing the filter includes defining filter specifications for extraction of the breathing pattern from the PPG signals. Herein, the filter specifications include a type, an order, and a cut-off frequency of the filter. Designing the filter further includes generating a transfer function associated with the filter specifications, and computing a plurality of filter coefficients using a filtfilt function for allowing filtering of the PPG signals. Using the filter comprising the plurality of filter coefficients, a filtered PPG signal is generated by removing a DC component from PPG signals obtained from a wearable device being worn by a subject. The filtered PPG signal is indicative of the breathing pattern of the subject.
    Type: Grant
    Filed: November 2, 2018
    Date of Patent: July 19, 2022
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Avik Ghose, Shalini Mukhopadhyay, Dibyanshu Jaiswal, Dhaval Satish Jani
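The filter-design-plus-filtfilt pipeline in the abstract above can be sketched with SciPy. The abstract names zero-phase (filtfilt) filtering; the Butterworth type, order 2, and the 0.1–0.5 Hz respiratory band used here are illustrative assumptions, not the patent's specifications:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def extract_breathing(ppg, fs, order=2, band=(0.1, 0.5)):
    """Band-pass a PPG signal to an assumed respiratory band, removing the DC
    component and the cardiac pulse. fs is the sampling rate in Hz."""
    b, a = butter(order, list(band), btype="band", fs=fs)
    # filtfilt runs the filter forward and backward for zero phase distortion
    return filtfilt(b, a, ppg)

# Synthetic PPG: DC offset + 0.25 Hz breathing drift + 1.2 Hz cardiac pulse
fs = 50.0
t = np.arange(0, 60, 1 / fs)
ppg = 2.0 + 0.3 * np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.sin(2 * np.pi * 1.2 * t)
breath = extract_breathing(ppg, fs)

# Dominant frequency of the filtered signal should sit in the respiratory band
freqs = np.fft.rfftfreq(len(breath), 1 / fs)
peak = freqs[np.argmax(np.abs(np.fft.rfft(breath)))]
```

The surviving peak near 0.25 Hz is the breathing pattern; the DC offset and cardiac component are attenuated by the band-pass.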
  • Patent number: 11393132
    Abstract: An encoding device, a decoding device, and a method for mesh decoding are disclosed. The method for mesh decoding includes receiving a compressed bitstream. The method also includes separating, from the compressed bitstream, a first bitstream and a second bitstream. The method further includes decoding, from the second bitstream, connectivity information of a three dimensional (3D) mesh. The method additionally includes decoding, from the first bitstream, a first frame and a second frame that include patches. The patches included in the first frame represent vertex coordinates of the 3D mesh and the patches included in the second frame represent a vertex attribute of the 3D mesh. The method also includes reconstructing a point cloud based on the first and second frames. Additionally, the method also includes applying the connectivity information to the point cloud to reconstruct the 3D mesh.
    Type: Grant
    Filed: March 5, 2020
    Date of Patent: July 19, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Esmaeil Faramarzi, Madhukar Budagavi, Rajan Laxman Joshi, Hossein Najaf-Zadeh, Indranil Sinharoy
  • Patent number: 11393097
    Abstract: Disclosed are techniques for annotating image frames using information from a light detection and ranging (LiDAR) sensor. An exemplary method includes receiving, from the LiDAR sensor, at least one LiDAR frame, receiving, from a camera sensor, at least one image frame, removing LiDAR points that represent a ground surface of the environment, identifying LiDAR points of interest in the at least one LiDAR frame, segmenting the LiDAR points of interest to identify at least one object of interest in the at least one LiDAR frame, and annotating the at least one image frame with a three-dimensional oriented bounding box of the at least one object of interest detected in the at least one image frame by projecting the three-dimensional oriented bounding boxes from the at least one LiDAR frame to the at least one image frame using cross-calibration transforms between the LiDAR sensor and the camera.
    Type: Grant
    Filed: January 6, 2020
    Date of Patent: July 19, 2022
    Assignee: QUALCOMM Incorporated
    Inventors: Christopher Brunner, Radhika Dilip Gowaikar, Fu-Chun Yeh, Michael Joshua Shomin, John Anthony Dougherty, Jayakrishnan Unnikrishnan
  • Patent number: 11392105
    Abstract: A data conversion system includes an interface to receive path data; a memory to store a computer-executable program including a lattice full algorithm and a dynamic programming algorithm; and a processor, in connection with the memory, configured to execute the computer-executable program.
    Type: Grant
    Filed: March 28, 2019
    Date of Patent: July 19, 2022
    Inventor: Matthew Brand
  • Patent number: 11394946
    Abstract: A video transmitting method according to embodiments may comprise the steps of: removing inter-view redundancy of pictures with respect to a plurality of viewing positions; packing the pictures in which the inter-view redundancy is removed; and encoding the packed pictures and signaling information. A video receiving method according to embodiments may comprise the steps of: decoding a bitstream of a video, on the basis of a viewing position and viewport information; unpacking pictures and signaling information in the decoded bitstream; view regenerating the unpacked pictures; and view synthesizing the view-regenerated pictures.
    Type: Grant
    Filed: October 30, 2019
    Date of Patent: July 19, 2022
    Assignee: LG ELECTRONICS INC.
    Inventors: Hyunmook Oh, Sejin Oh
  • Patent number: 11386529
    Abstract: A method for displaying a three dimensional (“3D”) image includes rendering a frame of 3D image data. The method also includes analyzing the frame of 3D image data to generate best known depth data. The method further includes using the best known depth data to segment the 3D image data into near and far frames of two dimensional (“2D”) image data corresponding to near and far depths respectively. Moreover, the method includes displaying near and far 2D image frames corresponding to the near and far frames of 2D image data at near and far depths to a user respectively.
    Type: Grant
    Filed: December 3, 2020
    Date of Patent: July 12, 2022
    Assignee: Magic Leap, Inc.
    Inventor: Robert Blake Taylor
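The near/far segmentation described above can be illustrated as a per-pixel depth threshold that splits one rendered frame into two 2D frames, each shown at its own depth plane. The single fixed threshold and the `split_by_depth` helper are illustrative assumptions (the patent derives "best known depth data" rather than using a constant):

```python
import numpy as np

def split_by_depth(rgb, depth, threshold):
    """Split one rendered frame into near and far 2D frames by a depth threshold.

    Pixels at or nearer than `threshold` go to the near frame; the rest to the
    far frame. Empty pixels are left black, so displaying the two frames at
    their respective depth planes composites back to the original image."""
    near_mask = depth <= threshold
    near = np.where(near_mask[..., None], rgb, 0)
    far = np.where(near_mask[..., None], 0, rgb)
    return near, far

rgb = np.random.randint(0, 255, (4, 4, 3), dtype=np.uint8)
depth = np.linspace(0.5, 3.0, 16).reshape(4, 4)   # metres, synthetic
near, far = split_by_depth(rgb, depth, threshold=1.5)
```

Because the two masks are disjoint, the near and far frames sum back to the source frame exactly.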
  • Patent number: 11385720
    Abstract: A picture selection method of projection touch for a projection touch system is provided. The projection touch system includes an image projection module, a sensing module, an image recognition module including at least one camera module and a processing unit. The picture selection method includes: the sensing module sensing and transferring a first projection coordinate on the target picture at a first time point of a sensing action; the sensing module sensing and transferring a second projection coordinate on the target picture at a second time point of the sensing action; the processing unit selecting at least one to-be-selected picture in the target picture based on the first and second projection coordinates and generating a set of selected image data; and the processing unit controlling the selected image data projected by the image projection module to move to a designated position according to a movement instruction of the user.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: July 12, 2022
    Assignee: Compal Electronics, Inc.
    Inventors: Yu-Hao Tseng, Kun-Hsuan Chang, Wei-Jun Wang, Ting-Wei Wu, Hsin-Chieh Cheng, Jui-Tsen Huang
  • Patent number: 11386872
    Abstract: Described herein is a system and method for experiencing a virtual object at a plurality of sizes. During an AR session, the virtual object can be created at a first size based upon a first scale (e.g., miniature, tabletop size). Once created, information regarding the virtual object can be stored. Thereafter, the virtual object can be displayed in an AR session at a second size based upon a second scale (e.g., full size or life size). In some embodiments, functionality of at least portion(s) of the virtual object is different when experienced in an AR session at the second size than when experienced in an AR session at the first size.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: July 12, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jason Matthew Cahill, Torfi Frans Olafsson, Jesse Dylan Merriam, Michael Meincke Persson, Bradley Reid Shuber
  • Patent number: 11386701
    Abstract: Face recognition of a face, to determine whether the face correlates with an enrolled face, may include generating a personalized three-dimensional (3D) face model based on a two-dimensional (2D) input image of the face, acquiring 3D shape information and a normalized 2D input image of the face based on the personalized 3D face model, generating feature information based on the 3D shape information and pixel color values of the normalized 2D input image, and comparing the feature information with feature information associated with the enrolled face. The feature information may include first and second feature information generated based on applying first and second deep neural network models to the pixel color values of the normalized 2D input image and the 3D shape information, respectively. The personalized 3D face model may be generated based on transforming a generic 3D face model based on landmarks detected in the 2D input image.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: July 12, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Seon Min Rhee, Jungbae Kim, Byungin Yoo, Jaejoon Han, Seungju Han
  • Patent number: 11386608
    Abstract: An apparatus, method and computer program is disclosed, comprising rendering a virtual scene of a virtual space that corresponds to a virtual position of a user in the virtual space as determined at least in part by the position of the user in a physical space. Embodiments also involve identifying one or more objects in the virtual scene which are in conflict with attributes of the physical space. Embodiments also involve detecting one or more blinking periods of the user when consuming the virtual scene. Embodiments also involve modifying the position of the one or more conflicting objects in the virtual scene based on a detected context. The modifying may be performed within the one or more detected blinking periods.
    Type: Grant
    Filed: February 20, 2019
    Date of Patent: July 12, 2022
    Assignee: NOKIA TECHNOLOGIES OY
    Inventors: Ari-Pekka Liljeroos, Arto Lehtiniemi, Jussi Leppänen
  • Patent number: 11386654
    Abstract: Aspects of the subject disclosure may include, for example, obtaining a first request for a first virtual object, obtaining first data regarding the first virtual object responsive to the obtaining of the first request, analyzing the first data to identify a first plurality of characteristics for the first virtual object, wherein the first plurality of characteristics include a first visual aspect of the first virtual object, a first auditory aspect of the first virtual object, a first scent aspect of the first virtual object, and a first haptic aspect of the first virtual object, and responsive to the analyzing of the first data, enabling at least a first sensory unit of a plurality of sensory units to render the first virtual object in accordance with the first plurality of characteristics. Other embodiments are disclosed.
    Type: Grant
    Filed: February 12, 2021
    Date of Patent: July 12, 2022
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Joseph Soryal, Naila Jaoude, Samuel N. Zellner
  • Patent number: 11386601
    Abstract: A technique for combining first and second images respectively depicting first and second subject matter to facilitate virtual presentation. The first image is processed to identify portions or regions of the first subject matter and determine an estimated depth location of each portion or region. A composite image is generated that depicts the second subject matter overlayed, inserted or otherwise combined with the first subject matter. One or more of the portions or regions of the first subject matter are added, removed, enhanced or modified in the composite image in order to generate a realistic appearance of the first subject matter combined with the second subject matter. The composite image is caused to be displayed as a virtual presentation.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: July 12, 2022
    Assignee: ZEEKIT ONLINE SHOPPING LTD.
    Inventors: Alon Kristal, Nir Appleboim, Yael Wiesel, Israel Harry Zimmerman
  • Patent number: 11380051
    Abstract: Systems and methods for image based location estimation are described. In one example embodiment, a first positioning system is used to generate a first position estimate. Point cloud data describing an environment is then accessed. A two-dimensional surface of an image of an environment is captured, and a portion of the image is matched to a portion of key points in the point cloud data. An augmented reality object is then aligned within one or more images of the environment based on the match of the point cloud with the image. In some embodiments, building façade data may additionally be used to determine a device location and place the augmented reality object within an image.
    Type: Grant
    Filed: February 10, 2021
    Date of Patent: July 5, 2022
    Assignee: Snap Inc.
    Inventors: Nathan Jurgenson, Linjie Luo, Jonathan M Rodriguez, II, Rahul Sheth, Jia Li, Xutao Lv
  • Patent number: 11381840
    Abstract: A method of point cloud geometry decoding in a point cloud decoder can include receiving a bitstream including a slice of a coded point cloud frame, and reconstructing an octree representing a geometry of points in a bounding box of the slice where a current node of the octree is partitioned with a quadtree (QT) partition or a binary tree (BT) partition.
    Type: Grant
    Filed: June 23, 2020
    Date of Patent: July 5, 2022
    Assignee: TENCENT AMERICA LLC
    Inventors: Xiang Zhang, Wen Gao, Sehoon Yea, Shan Liu
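The per-node octree/quadtree/binary-tree choice in the abstract above can be sketched as one subdivision step: each point gets an occupancy code with one bit per split axis. Splitting all three axes gives an octree (OT) node, two a quadtree (QT) node, one a binary-tree (BT) node. This is a generic geometry-partition illustration, not the codec's actual bitstream syntax:

```python
import numpy as np

def subdivide(points, lo, hi, axes=(0, 1, 2)):
    """One partition step of an octree-style geometry coder.

    `axes` selects which axes are split at the node's midpoint: three axes
    -> octree child codes (up to 8 children), two -> quadtree (up to 4),
    one -> binary tree (up to 2)."""
    mid = (np.asarray(lo, dtype=float) + np.asarray(hi, dtype=float)) / 2.0
    children = {}
    for p in points:
        # Occupancy code: one bit per split axis (above/below the midpoint)
        code = tuple(int(p[a] >= mid[a]) for a in axes)
        children.setdefault(code, []).append(p)
    return children

pts = np.random.rand(100, 3)
kids = subdivide(pts, lo=(0, 0, 0), hi=(1, 1, 1))             # OT: up to 8 children
qt = subdivide(pts, lo=(0, 0, 0), hi=(1, 1, 1), axes=(0, 1))  # QT: up to 4 children
```

A decoder reconstructing the tree would apply the signaled OT/QT/BT choice at each node until the leaf geometry of the bounding box is recovered.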
  • Patent number: 11380039
    Abstract: Examples of systems and methods for rendering an avatar in a mixed reality environment are disclosed. The systems and methods may be configured to automatically scale an avatar or to render an avatar based on a determined intention of a user, an interesting impulse, environmental stimuli, or user saccade points. The disclosed systems and methods may apply discomfort curves when rendering an avatar. The disclosed systems and methods may provide a more realistic interaction between a human user and an avatar.
    Type: Grant
    Filed: December 3, 2018
    Date of Patent: July 5, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Thomas Marshall Miller, IV, Victor Ng-Thow-Hing, Josh Anon, Frank Alexander Hamilton, IV, Cole Parker Heiner, Rodrigo Cano, Karen Stolzenberg, Lorena Pazmino, Gregory Minh Tran, Stephane Antoine Joseph Imbert, Anthony Marinello
  • Patent number: 11381753
    Abstract: An example method includes setting an exposure time of a camera of a distance sensor to a first value, instructing the camera to acquire a first image of an object in a field of view of the camera, where the first image is acquired while the exposure time is set to the first value, instructing a pattern projector of the distance sensor to project a pattern of light onto the object, setting the exposure time of the camera to a second value that is different than the first value, and instructing the camera to acquire a second image of the object, where the second image includes the pattern of light, and where the second image is acquired while the exposure time is set to the second value.
    Type: Grant
    Filed: January 26, 2021
    Date of Patent: July 5, 2022
    Assignee: Magik Eye Inc.
    Inventor: Akiteru Kimura
  • Patent number: 11381778
    Abstract: A method for generating a texture map used during a video conference, the method may include obtaining multiple texture maps of multiple areas of at least a portion of a three-dimensional (3D) object; wherein the multiple texture maps comprise a first texture map of a first area and of a first resolution, and a second texture map of a second area and of a second resolution, wherein the first area differs from the second area and the first resolution differs from the second resolution; generating a texture map of the at least portion of the 3D object, the generating being based on the multiple texture maps; and utilizing the visual representation of the at least portion of the 3D object, based on the texture map of the at least portion of the 3D object, during the video conference.
    Type: Grant
    Filed: March 2, 2021
    Date of Patent: July 5, 2022
    Assignee: TRUE MEETING INC.
    Inventors: Ran Oz, Yuval Gronau, Michael Rabinovich, Osnat Goren-Peyser, Tal Perl
  • Patent number: 11380046
    Abstract: A system on a chip (SoC) includes a digital signal processor (DSP) and a graphics processing unit (GPU) coupled to the DSP. The DSP is configured to receive a stream of received depth measurements and generate a virtual bowl surface based on the stream of received depth measurements. The DSP is also configured to generate a bowl to physical camera mapping based on the virtual bowl surface. The GPU is configured to receive a first texture and receive a second texture. The GPU is also configured to perform physical camera to virtual camera transformation on the first texture and on the second texture, based on the bowl to physical camera mapping, to generate an output image.
    Type: Grant
    Filed: July 23, 2019
    Date of Patent: July 5, 2022
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Shashank Dabral, Vikram Appia, Hemant Hariyani, Lucas Weaver
  • Patent number: 11380076
    Abstract: Systems and methods configured to facilitate animation are disclosed. Exemplary implementations may: obtain a first scene definition; receive second entity information; integrate the second entity information into the first scene definition such that a second scene definition is generated; for each of the entities of the entity information, execute a simulation of the virtual reality scene from the second scene definition for at least a portion of the scene duration; for each of the entities of the entity information, analyze the second scene definition for deviancy between the given entity and the second motion capture information; for each of the entities of the entity information, indicate, based on the analysis for deviancy, the given entity as deviant; and for each of the entities of the entity information, re-integrate the given entity into the second scene definition.
    Type: Grant
    Filed: May 24, 2021
    Date of Patent: July 5, 2022
    Assignee: Mindshow Inc.
    Inventors: Jeffrey Scott Dixon, William Stuart Farquhar
  • Patent number: 11380067
    Abstract: A system configured to present virtual content in an interactive space may comprise one or more of a light source, an optical element, one or more physical processors, non-transitory electronic storage, and/or other components.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: July 5, 2022
    Assignee: Campfire 3D, Inc.
    Inventors: Kharis O'Connell, Yazan Kawar, Michael Stein, Nicholas Cottrell, Amber Choo, Cory Evens, Antonio M. Vasquez
  • Patent number: 11380024
    Abstract: An Instant Situational Awareness Visualization Module enables new types and classes of human-computer interface enhancements in which users can easily and simultaneously see where objects are relative to, within, or outside AR shapes, viewed as a picture-in-a-picture inset that provides an overhead view of the terrain/map.
    Type: Grant
    Filed: October 2, 2020
    Date of Patent: July 5, 2022
    Assignee: VR REHAB, INC.
    Inventors: Elizabeth T. Guckenberger, Ronald J. Guckenberger
  • Patent number: 11380045
    Abstract: In various embodiments, a training application generates a trained encoder that automatically generates shape embeddings having a first size and representing three-dimensional (3D) geometry shapes. First, the training application generates a different view activation for each of multiple views associated with a first 3D geometry based on a first convolutional neural network (CNN) block. The training application then aggregates the view activations to generate a tiled activation. Subsequently, the training application generates a first shape embedding having the first size based on the tiled activation and a second CNN block. The training application then generates multiple re-constructed views based on the first shape embedding. The training application performs training operation(s) on at least one of the first CNN block and the second CNN block based on the views and the re-constructed views to generate the trained encoder.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: July 5, 2022
    Assignee: AUTODESK, INC.
    Inventors: Thomas Davies, Michael Haley, Ara Danielyan, Morgan Fabian
  • Patent number: 11375179
    Abstract: A system, method, or computer program product for displaying a first image based on first image data in a display area of a first display device, receiving at least one camera-captured second image of an environment with the second image capturing at least a portion of the first image displayed in the display area, determining a location and orientation of the first display device relative to the camera, determining a portion of the second image that corresponds to the portion of the first image displayed in the display area, generating, from the first image data, a third image that corresponds to the portion of the first image displayed on the first display device as viewed from a point of view of the camera, generating a composite image of the environment by replacing at least a portion of the second image with the third image, and displaying the composite image.
    Type: Grant
    Filed: November 6, 2020
    Date of Patent: June 28, 2022
    Assignee: Tanzle, Inc.
    Inventors: Nancy L. Clemens, Michael A. Vesely
  • Patent number: 11375178
    Abstract: A device and method for video rendering. The device includes a memory and an electronic processor. The electronic processor is configured to receive, from a source device, video data including multiple reference viewpoints, determine a target image plane corresponding to a target viewpoint, determine, within the target image plane, one or more target image regions, and determine, for each target image region, a proxy image region larger than the corresponding target image region. The electronic processor is configured to determine, for each target image region, a plurality of reference pixels that fit within the corresponding proxy image region, project, for each target image region, the plurality of reference pixels that fit within the corresponding proxy image region to the target image region, producing a rendered target region from each target image region, and composite one or more of the rendered target regions to create a video rendering.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: June 28, 2022
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Haricharan Lakshman, Wenhui Jia, Jasper Chao, Shwetha Ram, Domagoj Baricevic, Ajit Ninan
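    The proxy-image-region idea above — gathering reference pixels from a region slightly larger than the target region, so reprojection has enough source data at the edges — can be sketched as a clamped rectangle expansion. The rectangle representation and the fixed margin are illustrative assumptions, not the patent's specific policy:

    ```python
    def proxy_region(target_region, margin, width, height):
        """Expand a target image region (x0, y0, x1, y1) by a margin to get
        the proxy region from which reference pixels are gathered, clamped
        to the image bounds."""
        x0, y0, x1, y1 = target_region
        return (max(0, x0 - margin), max(0, y0 - margin),
                min(width, x1 + margin), min(height, y1 + margin))

    print(proxy_region((10, 10, 20, 20), margin=4, width=64, height=64))  # (6, 6, 24, 24)
    print(proxy_region((0, 0, 8, 8), margin=4, width=64, height=64))      # (0, 0, 12, 12)
    ```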
  • Patent number: 11373550
    Abstract: Disclosed is an augmented reality training system, which includes a manipulation platform, an augmented reality stereo microscopic assembly, an instrument tracking module and a simulation generation module. The augmented reality stereo microscopic assembly is configured for camera-based capture of real stereo videos and for optical transmission of augmented reality images into the user's eyes. The instrument tracking module uses top and bottom digital cameras to track a marker on an upper portion of an instrument manipulated on a surgical phantom and to track a lower portion of the instrument. The simulation generation module can generate and display augmented reality images that merge the real stereo videos and virtual images for simulation of actions of the instrument in interaction with a training program executed in a processor of the simulation generation module.
    Type: Grant
    Filed: April 22, 2019
    Date of Patent: June 28, 2022
    Inventor: Yu-Hsuan Huang
  • Patent number: 11373353
    Abstract: Methods, apparatus, and computer readable storage medium for simulating and rendering a material with a modified material point method are described. The method includes, for each of a plurality of time-steps of simulating a material: transferring states of particles representing the material at a N-th time-step to a grid, determining a plurality of grid-node velocities at the N-th time-step using a particle-to-grid computation based on the states of the particles at the N-th time-step, updating the plurality of grid-node velocities at a (N+1)-th time-step based on grid forces, and updating the states of the particles at the (N+1)-th time-step using a grid-to-particle computation based on the states of the particles at the N-th time-step and the plurality of grid-node velocities at the N-th and (N+1)-th time-steps. The method further includes rendering one or more images depicting the material based on the states of the particles at the plurality of time-steps.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: June 28, 2022
    Assignee: TENCENT AMERICA LLC
    Inventors: Yun Fei, Ming Gao, Qi Guo, Rundong Wu
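    The particle-to-grid transfer at the heart of the material point method can be illustrated in 1D with linear (hat) weights: particle mass and momentum are scattered to neighboring grid nodes, and node velocity is recovered as momentum over mass. This is a generic MPM sketch, not the patent's specific modification; all names are illustrative:

    ```python
    import numpy as np

    def particle_to_grid(xp, vp, mp, n_nodes, dx=1.0):
        """Scatter particle momentum and mass to grid nodes with linear
        weights, then divide to obtain grid-node velocities (the
        particle-to-grid step of an MPM time-step)."""
        mass = np.zeros(n_nodes)
        mom = np.zeros(n_nodes)
        for x, v, m in zip(xp, vp, mp):
            i = int(x // dx)      # index of the node to the particle's left
            w = (x / dx) - i      # fractional position between nodes i and i+1
            for node, weight in ((i, 1 - w), (i + 1, w)):
                mass[node] += weight * m
                mom[node] += weight * m * v
        vel = np.divide(mom, mass, out=np.zeros_like(mom), where=mass > 0)
        return mass, vel

    # One particle at x=0.25 with velocity 2.0 contributes 75%/25% of its
    # mass to nodes 0 and 1; both nodes recover the particle's velocity.
    mass, vel = particle_to_grid([0.25], [2.0], [1.0], n_nodes=3)
    print(mass[:2], vel[:2])  # [0.75 0.25] [2. 2.]
    ```

    A full step would then apply grid forces to these node velocities and transfer the updated velocities back to the particles (grid-to-particle), as the abstract describes.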
  • Patent number: 11372132
    Abstract: A system and method for outputting analysis regarding weather and/or environmental conditions at a venue for an event by determining correlations between the results of past events and historical weather and/or environmental conditions, determining current and/or forecasted weather and/or environmental conditions (for example, using a dense mesonet of sensors in and around an event/venue), and generating analysis based on the current and/or forecasted weather and/or environmental conditions and the correlations between the results of past events and the historical weather and/or environmental conditions.
    Type: Grant
    Filed: September 23, 2016
    Date of Patent: June 28, 2022
    Assignee: Locator IP, L.P.
    Inventors: Joel N. Myers, Michael R. Root
  • Patent number: 11373360
    Abstract: Disclosed techniques relate to grouping rays during traversal of a spatially-organized acceleration data structure (e.g., a bounding volume hierarchy) for ray intersection processing. The grouping may provide temporal locality for accesses to bounding region data. In some embodiments, ray intersect circuitry is configured to group rays based on the node of the data structure that they target next. The ray intersect circuitry may select one or more groups of rays for issuance each clock cycle, e.g., to bounding region test circuitry.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: June 28, 2022
    Assignee: Apple Inc.
    Inventors: Ali Rabbani Rankouhi, Christopher A. Burns, Justin A. Hensley, Luca Iuliano, Jonathan M. Redshaw
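    The grouping strategy in the abstract — batching rays by the BVH node they will test next, so a group issued together touches the same bounding-region data — can be sketched with a simple dictionary keyed by node ID. Both the ray representation and the issue-largest-group-first heuristic here are illustrative assumptions, not the circuitry's actual policy:

    ```python
    from collections import defaultdict

    def group_rays_by_next_node(rays):
        """Group in-flight rays by the acceleration-structure node they
        target next, to gain temporal locality on bounding-region accesses.
        `rays` is a list of (ray_id, next_node_id) pairs."""
        groups = defaultdict(list)
        for ray_id, next_node in rays:
            groups[next_node].append(ray_id)
        # Issue the largest group first (one plausible selection heuristic).
        return sorted(groups.items(), key=lambda kv: -len(kv[1]))

    rays = [(0, 5), (1, 7), (2, 5), (3, 5), (4, 7)]
    print(group_rays_by_next_node(rays))  # [(5, [0, 2, 3]), (7, [1, 4])]
    ```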
  • Patent number: 11364432
    Abstract: A timing game device includes a virtual-space image display unit 13, a movement-line display unit 14, and an object display unit 15. The virtual-space image display unit 13 generates a virtual space image, based on VR image data in which a hit area is set in a virtual space having a range wider than a view field of the HMD 200, and displays the virtual space image on an HMD 200. The virtual space image has a view field that changes in response to a movement of the HMD 200. The movement-line display unit 14 generates and displays a movement line of an object that moves to a hit area. The object display unit 15 displays the object to move along the movement line. A game is caused to progress by performing an operation at a timing at which the object reaches the hit area set in the virtual space of the changed view field while appropriately changing the view field of the virtual space image in response to the movement of the HMD 200.
    Type: Grant
    Filed: May 16, 2019
    Date of Patent: June 21, 2022
    Assignee: Alpha Code Inc.
    Inventor: Takuhiro Mizuno
  • Patent number: 11367247
    Abstract: Encoding/decoding data representative of a 3D representation of a scene according to a range of points of view can involve generating a depth map associated with a part of the 3D representation according to a parameter representative of a two-dimensional parameterization associated with the part and data associated with a point included in the part, wherein the two-dimensional parameterization can be responsive to geometric information associated with the point and to pose information associated with the range of points of view. A texture map associated with the part can be generated according to the parameter and data associated with the point. First information representative of point density of points in a part of the part can be obtained. The depth map, texture map, parameter, and first information can be included in respective syntax elements of a bitstream.
    Type: Grant
    Filed: November 6, 2018
    Date of Patent: June 21, 2022
    Assignee: InterDigital VC Holdings, Inc.
    Inventors: Franck Thudor, Bertrand Chupeau, Renaud Dore, Thierry Tapie, Julien Fleureau
  • Patent number: 11367226
    Abstract: A method for aligning a real-world object with a virtual object includes capturing images, video, or both of the real-world object from a first viewpoint and from a second viewpoint. The first and second viewpoints are different. The method also includes simultaneously superimposing the virtual object at least partially over the real-world object from the first viewpoint in a first augmented reality (AR) display and from the second viewpoint in a second AR display based at least in part on the images, video, or both. The method also includes adjusting a position of the real-world object to at least partially align the real-world object with the virtual object from the first viewpoint in the first AR display and from the second viewpoint in the second AR display.
    Type: Grant
    Filed: February 23, 2021
    Date of Patent: June 21, 2022
    Assignee: THE JOHNS HOPKINS UNIVERSITY
    Inventors: Nassir Navab, Javad Fotouhi
  • Patent number: 11367243
    Abstract: An apparatus and method for performing BVH compression and decompression concurrently with stores and loads, respectively.
    Type: Grant
    Filed: December 1, 2020
    Date of Patent: June 21, 2022
    Assignee: Intel Corporation
    Inventors: Carsten Benthin, Ingo Wald, Gabor Liktor, Johannes Guenther, Elmoustapha Ould-Ahmed-Vall
  • Patent number: 11367235
    Abstract: In accordance with an aspect of the present disclosure, there is provided a method for simplifying three-dimensional mesh data using a three-dimensional mesh data simplification device. The method comprises: determining a vertex or an edge of three-dimensional mesh data to be deleted based on animation information including skin weight values and geometric information of the three-dimensional mesh data; and simplifying the three-dimensional mesh data by deleting the vertex or the edge.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: June 21, 2022
    Assignee: SK TELECOM CO., LTD.
    Inventors: Seungho Shin, Gukchan Lim, Jinsoo Jeon, Ikhwan Cho
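    Selecting which edge to delete based on both geometric information and skin weights, as the abstract describes, can be sketched as a combined cost function: edges whose endpoints have very different skin-weight vectors are penalized, so simplification avoids distorting skinned animation. The specific combination below is an illustrative guess, not the patented formula:

    ```python
    import math

    def edge_collapse_cost(v1, v2, w1, w2, alpha=1.0):
        """Rank an edge for deletion by combining its geometric length with
        the L1 difference of the endpoints' skin-weight vectors."""
        geom = math.dist(v1, v2)
        skin = sum(abs(a - b) for a, b in zip(w1, w2))
        return geom + alpha * skin

    def pick_edge_to_delete(edges):
        """Return the edge with the lowest combined cost.
        Each edge is (vertex1, vertex2, weights1, weights2)."""
        return min(edges, key=lambda e: edge_collapse_cost(*e))

    # Two edges of equal length; the one whose endpoints share skin weights
    # is cheaper to delete.
    e_same = ((0, 0, 0), (1, 0, 0), [1.0, 0.0], [1.0, 0.0])
    e_diff = ((0, 0, 0), (1, 0, 0), [1.0, 0.0], [0.0, 1.0])
    print(pick_edge_to_delete([e_same, e_diff]) is e_same)  # True
    ```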
  • Patent number: 11366566
    Abstract: In one implementation, a method comprises: obtaining an input from a client device indicating an entry transaction between an avatar and a portal of a modeled space; in response to obtaining the input from the client device indicating the entry transaction between the avatar and the portal of the modeled space, determining an identifier associated with the avatar; determining whether a custom destination has been defined for the portal based on the identifier for the avatar; causing the avatar to enter a default destination environment for the portal according to a determination that a custom destination has not been defined for the portal; and causing the avatar to enter a custom destination environment that is distinct from the default destination environment according to a determination that a custom destination has been defined for the portal.
    Type: Grant
    Filed: January 15, 2021
    Date of Patent: June 21, 2022
    Assignee: PFAQUTRUMA RESEARCH LLC
    Inventor: Brian Mark Shuster
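    The portal-resolution logic in the abstract — send the avatar to an avatar-specific custom destination if one has been defined for the portal, otherwise to the portal's default — reduces to a keyed lookup with a fallback. The data shapes below are illustrative assumptions:

    ```python
    def resolve_destination(avatar_id, portal_id, custom_destinations, default):
        """Return the destination environment for an avatar entering a
        portal: the custom destination defined for (portal, avatar) if any,
        otherwise the portal's default destination environment."""
        return custom_destinations.get((portal_id, avatar_id), default)

    custom = {("portal_A", "avatar_42"): "secret_garden"}
    print(resolve_destination("avatar_42", "portal_A", custom, "lobby"))  # secret_garden
    print(resolve_destination("avatar_7", "portal_A", custom, "lobby"))   # lobby
    ```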
  • Patent number: 11368557
    Abstract: A computer-implemented method of providing a server-based feature cloud model of a realm includes receiving by a server a series of digital contributions that collectively originate from a plurality of remote computing devices, characterizing portions of the realm. The method also includes processing by the server the received digital contributions to associate them with a global coordinate system and storing the processed contributions in a realm model database as components of the feature cloud model of the realm. Finally, the method includes, in response to a query message over the Internet from a computing device of an end-user, serving, over the Internet by the server to the computing device, digital data defining a selected portion of the feature cloud model for integration and display by the computing device.
    Type: Grant
    Filed: June 8, 2020
    Date of Patent: June 21, 2022
    Inventors: Alexander Hertel, Philipp Hertel
  • Patent number: 11368652
    Abstract: Audio content and played frames may be received. The audio content may correspond to first video content. The played frames may be included in the first video content. The first video content may further include a replaced frame. The played frames and the replaced frame may include a face of a person. Location data may also be received that indicates locations of facial features of the face of the person within the replaced frame. A replacement frame may be generated, such as by rendering the facial features in the replacement frame based at least in part on the locations indicated by the location data and positions indicated by a portion of the audio content that is associated with the replaced frame. Second video content may be played including the played frames and the replacement frame. The replacement frame may replace the replaced frame in the second video content.
    Type: Grant
    Filed: October 29, 2020
    Date of Patent: June 21, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Gregory Johnson, Pragyana K. Mishra, Mohammed Khalilia, Wenbin Ouyang, Naveen Sudhakaran Nair
  • Patent number: 11368631
    Abstract: Systems, imaging devices and methods for creating background blur in camera panning or motion. Using an imaging device with an image sensor, a method may comprise selecting an object to be tracked in a scene, recording an image or an image stream, and aligning the selected object optically and/or digitally to a same position on the image sensor while the selected object moves relative to the imaging device or relative to the scene, thereby creating a blurred image background and/or foreground relative to the selected object and a sense of panning or motion.
    Type: Grant
    Filed: June 9, 2020
    Date of Patent: June 21, 2022
    Assignee: Corephotonics Ltd.
    Inventors: Anat Leshem Gat, Ruthy Katz, Omri Levi, Oded Gigushinski, Yiftah Kowal, Ephraim Goldenberg, Gal Shabtay, Noy Cohen, Michael Scherer
  • Patent number: 11364103
    Abstract: A method and a system for determining a bite position between arch forms of a subject. The method comprises: receiving a 3D model including a first portion and a second portion respectively representative of lower and upper arch forms of the subject; determining a respective distance value from each point of the first portion to the second portion; determining, for each point of the first portion, a respective weight value, thereby determining a respective weighted distance value; aggregating respective weighted distance values associated with each point of the first portion to determine an aggregate distance value being a remoteness measure between the first portion and the second portion; and determining the bite position based on the aggregate distance value.
    Type: Grant
    Filed: May 13, 2021
    Date of Patent: June 21, 2022
    Assignee: Oxilio Ltd
    Inventor: Islam Khasanovich Raslambekov
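    The aggregation step above — turning per-point weighted distances between the arch forms into a single remoteness measure, then choosing the bite position from it — can be sketched as a weighted sum minimized over candidate poses. The weighted-sum aggregation and the data shapes are one plausible reading of the abstract, not the patented method:

    ```python
    def aggregate_weighted_distance(distances, weights):
        """Aggregate per-point distances from the lower arch to the upper
        arch into a single remoteness measure (weighted sum)."""
        return sum(d * w for d, w in zip(distances, weights))

    def best_bite_position(candidate_poses):
        """Pick the pose minimizing the aggregate distance.
        candidate_poses: list of (pose_name, distances, weights)."""
        return min(candidate_poses,
                   key=lambda p: aggregate_weighted_distance(p[1], p[2]))[0]

    poses = [
        ("open",   [2.0, 2.0, 2.0], [1.0, 1.0, 1.0]),
        ("closed", [0.5, 0.4, 0.6], [1.0, 1.0, 1.0]),
    ]
    print(best_bite_position(poses))  # closed
    ```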
  • Patent number: 11367523
    Abstract: A method is for image data processing. In an embodiment, the method includes providing a 3D medical image data record, which relates to an elongated anatomical structure, a center line of the elongated anatomical structure being defined in the 3D medical image data record; defining at least one curved slice in the 3D medical image data record, the at least one curved slice winding around the center line; scanning at least one part of the 3D medical image data record into the at least one curved slice; and unrolling the at least one curved slice, into which the at least one part of the 3D medical image data record was scanned, at least one unrolled flat slice being determined. An image data processing unit is also for image data processing and a medical imaging apparatus includes the image data processing unit.
    Type: Grant
    Filed: May 16, 2018
    Date of Patent: June 21, 2022
    Assignee: SIEMENS HEALTHCARE GMBH
    Inventors: Grzegorz Soza, Stefan Grosskopf, Hannes Martinke, Christian Petry, Helmut Ringl, Michael Suehling
  • Patent number: RE49150
    Abstract: A computer-implemented method for ordering vertices in an image frame within a data stream, wherein the image frame corresponds to Earth-viewing data. A point of intersection of a primary pair of lines is determined and loaded into computer memory, and interrogated as to a sign of a signed remainder with respect to each of two secondary lines defined by the pairwise ordered sets of vertices. In the case of opposite remainder sign with respect to the two secondary lines, two provisional indices are swapped to obtain a rectified index for each of the four vertices. The process is repeated with respect to the signed remainder of the intersection point of the secondary lines relative to the primary lines. The four vertices are then fit, in accordance with index ordering, into a tiling of the surface of the Earth based on the rectified index of each of the four vertices.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: July 26, 2022
    Assignee: Intergraph Corporation
    Inventor: Gene Arthur Grindstaff
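    The "signed remainder" test in the abstract is, in 2D, the sign of a cross product: it tells which side of a line a point lies on, and opposite signs for the intersection point with respect to the two secondary lines signal that two vertex indices must be swapped. The sketch below shows only this generic side-of-line primitive, not the patent's full index-rectification procedure:

    ```python
    def side_of_line(p, a, b):
        """Sign of the signed remainder of point p with respect to the line
        through a and b (2D cross product of (b - a) and (p - a)):
        +1, -1, or 0 if p lies exactly on the line."""
        cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
        return (cross > 0) - (cross < 0)

    # Two points on opposite sides of the x-axis get opposite signs; a
    # mis-ordered vertex pair would be detected this way and swapped.
    a, b = (0.0, 0.0), (2.0, 0.0)
    print(side_of_line((1.0, 1.0), a, b), side_of_line((1.0, -1.0), a, b))  # 1 -1
    ```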