Three-dimension Patents (Class 345/419)
-
Patent number: 11397322
Abstract: An image providing system for vehicles includes an imaging unit; a display unit which generates a virtual image of a person; a communication unit which connects for communication to an apparatus outside a vehicle; and a seat occupancy detection unit which detects a seat occupancy state in the vehicle, wherein the display unit controls a display mode of the virtual image on the basis of the seat occupancy state in the vehicle detected by the seat occupancy detection unit in an operating state of the communication unit.
Type: Grant
Filed: June 12, 2018
Date of Patent: July 26, 2022
Assignee: HONDA MOTOR CO., LTD.
Inventors: Yuji Yasui, Hisao Asaumi, Shion Tokunaga, Masashi Yuki, Yo Ito, Hirotaka Uchitomi
-
Patent number: 11398073
Abstract: An internet or cloud-based system, method, or platform (“platform”) used to facilitate the conversion of electronic two-dimensional drawings to three-dimensional models. A group of people (“crowd”) that has been found qualified to make such conversions is selected for the conversion. The two-dimensional drawings are transmitted to the crowd for conversion to three-dimensional models. In some embodiments, multiple instances of the same two-dimensional drawings (or image data) are sent to multiple, independent crowd members so that multiple versions of the same three-dimensional model can be created. Once the models are complete and returned, they are compared to each other on multiple features or characteristics. If two or more three-dimensional models are found to match within the prescribed tolerances, they are determined to be an accurate representation of the product or device shown in the two-dimensional drawings.
Type: Grant
Filed: June 23, 2020
Date of Patent: July 26, 2022
Assignee: Draawn, LLC
Inventors: James Cotteleer, Mark Cotteleer
-
Patent number: 11397508
Abstract: Described herein are techniques for providing a virtual experience including, but not limited to, the use of a virtual experience “pillar,” or virtual rotation of a virtual area and/or a participant in the virtual area. The entry of a participant into a physical environment via a physical entrance area is detected. The participant uses a head-mounted display (HMD) to view a virtual environment associated with the physical environment, the virtual environment including a virtual entrance coinciding with the physical entrance area. An outer virtual environment, and a virtual pillar upon which a virtual avatar representing the participant stands, are caused to be displayed in the virtual environment viewed by the participant.
Type: Grant
Filed: June 10, 2020
Date of Patent: July 26, 2022
Assignee: Hyper Reality Partners, LLC
Inventor: Curtis Hickman
-
Patent number: 11393198
Abstract: Techniques for generating an insurance claim include receiving pupil data from an electronic device. The pupil data indicates a gaze direction of a user. Environment information is received from the electronic device, including point cloud data representing an environment in which the electronic device is currently disposed and a plurality of objects located within the environment. The techniques include determining an identity of an object of the plurality of objects based at least in part on the gaze direction and the environment information. The techniques include receiving, from the electronic device and via the network, information indicative of an input provided by the user, the input corresponding to the object, and comprising at least one of a first user utterance or a hand gesture. The techniques include generating an insurance claim based at least in part on the information and on the identity of the object.
Type: Grant
Filed: June 2, 2020
Date of Patent: July 19, 2022
Assignee: State Farm Mutual Automobile Insurance Company
Inventors: Rebecca A. Little, Christopher Robert Galante
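The gaze-to-object step this abstract describes can be sketched simply: given a gaze direction and per-object centroids derived from the point cloud, pick the object whose centroid lies closest to the gaze ray. This is an illustrative assumption about the matching heuristic, not the patent's claimed method; all names here are hypothetical.

```python
# Hypothetical sketch: identify the gazed-at object as the one whose
# centroid has the smallest perpendicular distance to the gaze ray.
import numpy as np

def identify_gazed_object(origin, gaze_dir, centroids):
    d = np.asarray(gaze_dir, dtype=float)
    d /= np.linalg.norm(d)
    best, best_dist = None, np.inf
    for name, c in centroids.items():
        v = np.asarray(c, dtype=float) - origin
        t = v @ d
        if t <= 0:                         # centroid is behind the viewer
            continue
        dist = np.linalg.norm(v - t * d)   # perpendicular distance to ray
        if dist < best_dist:
            best, best_dist = name, dist
    return best

centroids = {"vase": (0.1, 0.0, 2.0), "table": (1.5, -0.5, 2.0)}
obj = identify_gazed_object(np.zeros(3), (0.0, 0.0, 1.0), centroids)
# gazing straight ahead, the slightly off-axis "vase" is selected
```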
-
Patent number: 11393199
Abstract: The present invention relates to an information display method and system, and a terminal. The method includes: collecting a visual signal in a target area of a display device by using an augmented reality device; judging a user reading scene according to the visual signal; generating a first signal according to the user reading scene; generating a second signal after processing the first signal; collecting a target parameter of the display device; generating a display signal after fusing the second signal with the target parameter; and performing information display on a display interface of the augmented reality device according to the display signal. By adopting the information display method and system, and the terminal of the present invention, the reading efficiency of the user may be improved.
Type: Grant
Filed: July 16, 2020
Date of Patent: July 19, 2022
Assignee: YUTOU TECHNOLOGY (HANGZHOU) CO., LTD.
Inventors: Fuyao Zhang, Yiming Chen
-
Patent number: 11392199
Abstract: Eyewear providing an interactive augmented reality experience between two users of eyewear devices to perform a shared group task. During a shared group task session, each eyewear displays the same image. An eye tracker in each eyewear detects a portion of the image the respective user is gazing at. Each eyewear generates an indication of the portion of the respective image each eyewear user is gazing at. The indication is shared with the other eyewear, and the eyewear display indicates the portion of the image the other eyewear user is gazing at. This allows each eyewear user to see what the other user is gazing at when collaborating and visually observing the same image.
Type: Grant
Filed: June 28, 2021
Date of Patent: July 19, 2022
Assignee: Snap Inc.
Inventor: Ilteris Canberk
-
Patent number: 11393126
Abstract: A method for calibrating one or more extrinsic parameters of an image sensor includes selecting a first set of parallel feature edges appearing in an image frame captured by the image sensor and determining reference vanishing points for the first set of parallel feature edges. The method then selects a second set of parallel feature edges and projects a plurality of points from the second set of parallel feature edges onto the projection reference frame of the image sensor. The method determines, for the second set of parallel feature edges, second vanishing points located on the projection reference frame, and reduces any deviation in location of the second vanishing points from the reference vanishing points until the deviation is within acceptable predefined limits by recursively: modifying the pre-existing projection matrix, projecting a plurality of points from the second set of parallel feature edges onto the projection reference frame, and determining the second vanishing points after projecting.
Type: Grant
Filed: December 17, 2019
Date of Patent: July 19, 2022
Assignee: CONTINENTAL AUTOMOTIVE GMBH
Inventor: Sreejith Markkassery
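The vanishing-point computation underlying this kind of calibration can be sketched with homogeneous coordinates: two image lines that are projections of parallel 3D edges intersect at a vanishing point, obtained as the cross product of the two homogeneous line vectors. This is a minimal sketch of the standard geometry, not the patent's specific procedure.

```python
# Minimal sketch: vanishing point of two projected parallel edges via
# homogeneous line intersection (cross products).
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (x, y)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(line1, line2):
    """Intersection of two homogeneous lines, dehomogenized to (x, y)."""
    v = np.cross(line1, line2)
    return v[:2] / v[2]   # assumes the lines are not parallel in the image

# Two projected edges of parallel 3D lines converging toward (400, 300)
l1 = line_through((0.0, 100.0), (400.0, 300.0))
l2 = line_through((0.0, 500.0), (400.0, 300.0))
vp = vanishing_point(l1, l2)
```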
-
Patent number: 11392876
Abstract: Methods, systems, and computer-readable media for deploying and implementing enterprise policies that control augmented reality computing functions are presented. A computing device may receive policy information defining policies that, when implemented, control capture of augmented renderings. After receiving the policy information, the computing device may intercept a request to capture at least one view having at least one augmented reality element. In response to intercepting the request, the computing device may determine whether the policies allow capture of views comprising augmented reality elements. Based on determining that the policies allow capture, the computing device may store view information associated with the at least one view having the at least one augmented reality element. Based on determining that the policies do not allow capture, the computing device may prevent the at least one view having the at least one augmented reality element from being captured.
Type: Grant
Filed: January 4, 2019
Date of Patent: July 19, 2022
Assignee: Citrix Systems, Inc.
Inventor: Thierry Duchastel
-
Patent number: 11393153
Abstract: A method for performing object occlusion is disclosed. The method includes capturing an image of a physical item; determining a location and an orientation of a first virtual item with an augmented reality registration function; generating an augmented reality image, wherein the augmented reality image comprises a rendering of the first virtual item in the image using a first rendering function to depict the location and orientation of the first virtual item in the image and a rendering of a second virtual item in the image with a second rendering function; and displaying the augmented reality image, wherein occlusion of the first virtual item by the physical item is shown in the augmented reality image based on occlusion of the first virtual item by the second virtual item, and wherein the first virtual item depicts a next step in the step-by-step instructions for the assembly.
Type: Grant
Filed: May 29, 2020
Date of Patent: July 19, 2022
Assignee: The Texas A&M University System
Inventor: Wei Yan
-
Patent number: 11393167
Abstract: Image processing is carried out by accepting an array of voxels that include data representing a physical property of a 3-dimensional object, segmenting the array of voxels into a plurality of regional subarrays of voxels that respectively satisfy predetermined criteria, transforming the subarrays into respective triangular meshes, the meshes having triangles that surround the subarrays and intercept the outer voxels of the subarrays, and rendering the triangular meshes on a display.
Type: Grant
Filed: December 31, 2018
Date of Patent: July 19, 2022
Assignee: Biosense Webster (Israel) Ltd.
Inventors: Benjamin Cohen, Lior Zar, Aharon Turgeman, Natan Sharon Katz
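The segmentation stage of such a pipeline can be sketched as connected-component labeling of the voxel array under a predicate (here a simple threshold, an illustrative assumption). The subsequent triangulation step, e.g. a marching-cubes-style surface extraction, is only indicated in comments rather than implemented.

```python
# Sketch, under stated assumptions: segment a voxel array into regional
# subarrays whose values satisfy a criterion, as a prelude to meshing.
import numpy as np
from scipy import ndimage

voxels = np.zeros((20, 20, 20))
voxels[2:6, 2:6, 2:6] = 1.0          # first region
voxels[12:18, 12:18, 12:18] = 1.0    # second, disconnected region

mask = voxels > 0.5                        # "predetermined criteria"
labels, n_regions = ndimage.label(mask)    # regional subarrays of voxels
subarrays = ndimage.find_objects(labels)   # bounding slices per region
# Each subarray would next be transformed into a triangular mesh (e.g. by
# marching cubes over its outer voxels) and the meshes rendered on a display.
```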
-
Patent number: 11389118
Abstract: A system and method for extracting breathing patterns from PPG signals are provided. The method includes designing a filter for extracting breathing patterns from PPG signals. Designing the filter includes defining filter specifications for extraction of breathing patterns from the PPG signals. Herein, the filter specifications include a type, an order and a cut-off frequency of the filter. Designing the filter further includes generating a transfer function associated with the filter specifications, and computing a plurality of filter coefficients using a filtfilt function for allowing filtering of the PPG signals. Using the filter comprising the plurality of filter coefficients, a filtered PPG signal is generated by removing the DC component from PPG signals obtained from a wearable device being worn by a subject. The filtered PPG signal is indicative of the breathing pattern of the subject.
Type: Grant
Filed: November 2, 2018
Date of Patent: July 19, 2022
Assignee: TATA CONSULTANCY SERVICES LIMITED
Inventors: Avik Ghose, Shalini Mukhopadhyay, Dibyanshu Jaiswal, Dhaval Satish Jani
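The approach described here — specify filter type, order, and cut-offs, then apply zero-phase `filtfilt` filtering to remove the DC component and isolate the respiratory band — can be sketched as follows. The Butterworth type, order 2, and the 0.1–0.5 Hz band are illustrative assumptions, not values from the patent.

```python
# Hedged sketch: band-pass a PPG signal with zero-phase filtfilt to
# extract a breathing pattern. Filter specs here are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def extract_breathing(ppg, fs=25.0, low=0.1, high=0.5, order=2):
    """Zero-phase band-pass: removes DC (below `low`) and the cardiac
    pulse (above `high`), leaving respiratory modulation."""
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, ppg)   # forward-backward: no phase lag

# Synthetic PPG: DC offset + 0.25 Hz breathing + 1.2 Hz cardiac pulse
fs = 25.0
t = np.arange(0, 60, 1 / fs)
ppg = 2.0 + 0.3 * np.sin(2 * np.pi * 0.25 * t) + 1.0 * np.sin(2 * np.pi * 1.2 * t)
breathing = extract_breathing(ppg, fs)
```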
-
Patent number: 11393132
Abstract: An encoding device, a decoding device, and a method for mesh decoding are disclosed. The method for mesh decoding includes receiving a compressed bitstream. The method also includes separating, from the compressed bitstream, a first bitstream and a second bitstream. The method further includes decoding, from the second bitstream, connectivity information of a three dimensional (3D) mesh. The method additionally includes decoding, from the first bitstream, a first frame and a second frame that include patches. The patches included in the first frame represent vertex coordinates of the 3D mesh and the patches included in the second frame represent a vertex attribute of the 3D mesh. The method also includes reconstructing a point cloud based on the first and second frames. Additionally, the method also includes applying the connectivity information to the point cloud to reconstruct the 3D mesh.
Type: Grant
Filed: March 5, 2020
Date of Patent: July 19, 2022
Assignee: Samsung Electronics Co., Ltd.
Inventors: Esmaeil Faramarzi, Madhukar Budagavi, Rajan Laxman Joshi, Hossein Najaf-Zadeh, Indranil Sinharoy
-
Patent number: 11393097
Abstract: Disclosed are techniques for annotating image frames using information from a light detection and ranging (LiDAR) sensor. An exemplary method includes receiving, from the LiDAR sensor, at least one LiDAR frame, receiving, from a camera sensor, at least one image frame, removing LiDAR points that represent a ground surface of the environment, identifying LiDAR points of interest in the at least one LiDAR frame, segmenting the LiDAR points of interest to identify at least one object of interest in the at least one LiDAR frame, and annotating the at least one image frame with a three-dimensional oriented bounding box of the at least one object of interest detected in the at least one image frame by projecting the three-dimensional oriented bounding boxes from the at least one LiDAR frame to the at least one image frame using cross-calibration transforms between the LiDAR sensor and the camera.
Type: Grant
Filed: January 6, 2020
Date of Patent: July 19, 2022
Assignee: QUALCOMM Incorporated
Inventors: Christopher Brunner, Radhika Dilip Gowaikar, Fu-Chun Yeh, Michael Joshua Shomin, John Anthony Dougherty, Jayakrishnan Unnikrishnan
-
Patent number: 11392105
Abstract: A data conversion system includes an interface to receive path data, a memory to store a computer-executable program including a lattice full algorithm and a dynamic programming algorithm, and a processor, in connection with the memory, configured to execute the computer-executable program.
Type: Grant
Filed: March 28, 2019
Date of Patent: July 19, 2022
Inventor: Matthew Brand
-
Patent number: 11394946
Abstract: A video transmitting method according to embodiments may comprise the steps of: removing inter-view redundancy of pictures with respect to a plurality of viewing positions; packing the pictures in which the inter-view redundancy is removed; and encoding the packed pictures and signaling information. A video receiving method according to embodiments may comprise the steps of: decoding a bitstream of a video, on the basis of a viewing position and viewport information; unpacking pictures and signaling information in the decoded bitstream; view regenerating the unpacked pictures; and view synthesizing the view-regenerated pictures.
Type: Grant
Filed: October 30, 2019
Date of Patent: July 19, 2022
Assignee: LG ELECTRONICS INC.
Inventors: Hyunmook Oh, Sejin Oh
-
Patent number: 11386529
Abstract: A method for displaying a three dimensional (“3D”) image includes rendering a frame of 3D image data. The method also includes analyzing the frame of 3D image data to generate best known depth data. The method further includes using the best known depth data to segment the 3D image data into near and far frames of two dimensional (“2D”) image data corresponding to near and far depths respectively. Moreover, the method includes displaying near and far 2D image frames corresponding to the near and far frames of 2D image data at near and far depths to a user respectively.
Type: Grant
Filed: December 3, 2020
Date of Patent: July 12, 2022
Assignee: Magic Leap, Inc.
Inventor: Robert Blake Taylor
-
Patent number: 11385720
Abstract: A picture selection method of projection touch for a projection touch system is provided. The projection touch system includes an image projection module, a sensing module, an image recognition module including at least one camera module, and a processing unit. The picture selection method includes: the sensing module sensing and transferring a first projection coordinate on the target picture at a first time point of a sensing action; the sensing module sensing and transferring a second projection coordinate on the target picture at a second time point of the sensing action; the processing unit selecting at least one to-be-selected picture in the target picture based on the first and second projection coordinates and generating a set of selected image data; and the processing unit controlling the selected image data projected by the image projection module to move to a designated position according to a movement instruction of the user.
Type: Grant
Filed: May 31, 2019
Date of Patent: July 12, 2022
Assignee: Compal Electronics, Inc.
Inventors: Yu-Hao Tseng, Kun-Hsuan Chang, Wei-Jun Wang, Ting-Wei Wu, Hsin-Chieh Cheng, Jui-Tsen Huang
-
Patent number: 11386872
Abstract: Described herein is a system and method for experiencing a virtual object at a plurality of sizes. During an AR session, the virtual object can be created at a first size based upon a first scale (e.g., miniature, tabletop size). Once created, information regarding the virtual object can be stored. Thereafter, the virtual object can be displayed in an AR session at a second size based upon a second scale (e.g., full size or life size). In some embodiments, functionality of at least a portion of the virtual object is different when experienced in an AR session at the second size than when experienced in an AR session at the first size.
Type: Grant
Filed: February 15, 2019
Date of Patent: July 12, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jason Matthew Cahill, Torfi Frans Olafsson, Jesse Dylan Merriam, Michael Meincke Persson, Bradley Reid Shuber
-
Patent number: 11386701
Abstract: Face recognition of a face, to determine whether the face correlates with an enrolled face, may include generating a personalized three-dimensional (3D) face model based on a two-dimensional (2D) input image of the face, acquiring 3D shape information and a normalized 2D input image of the face based on the personalized 3D face model, generating feature information based on the 3D shape information and pixel color values of the normalized 2D input image, and comparing the feature information with feature information associated with the enrolled face. The feature information may include first and second feature information generated based on applying first and second deep neural network models to the pixel color values of the normalized 2D input image and the 3D shape information, respectively. The personalized 3D face model may be generated based on transforming a generic 3D face model based on landmarks detected in the 2D input image.
Type: Grant
Filed: May 22, 2020
Date of Patent: July 12, 2022
Assignee: Samsung Electronics Co., Ltd.
Inventors: Seon Min Rhee, Jungbae Kim, Byungin Yoo, Jaejoon Han, Seungju Han
-
Patent number: 11386608
Abstract: An apparatus, method and computer program are disclosed, comprising rendering a virtual scene of a virtual space that corresponds to a virtual position of a user in the virtual space as determined at least in part by the position of the user in a physical space. Embodiments also involve identifying one or more objects in the virtual scene which are in conflict with attributes of the physical space. Embodiments also involve detecting one or more blinking periods of the user when consuming the virtual scene. Embodiments also involve modifying the position of the one or more conflicting objects in the virtual scene based on a detected context. The modifying may be performed within the one or more detected blinking periods.
Type: Grant
Filed: February 20, 2019
Date of Patent: July 12, 2022
Assignee: NOKIA TECHNOLOGIES OY
Inventors: Ari-Pekka Liljeroos, Arto Lehtiniemi, Jussi Leppänen
-
Patent number: 11386654
Abstract: Aspects of the subject disclosure may include, for example, obtaining a first request for a first virtual object, obtaining first data regarding the first virtual object responsive to the obtaining of the first request, analyzing the first data to identify a first plurality of characteristics for the first virtual object, wherein the first plurality of characteristics include a first visual aspect of the first virtual object, a first auditory aspect of the first virtual object, a first scent aspect of the first virtual object, and a first haptic aspect of the first virtual object, and responsive to the analyzing of the first data, enabling at least a first sensory unit of a plurality of sensory units to render the first virtual object in accordance with the first plurality of characteristics. Other embodiments are disclosed.
Type: Grant
Filed: February 12, 2021
Date of Patent: July 12, 2022
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Joseph Soryal, Naila Jaoude, Samuel N. Zellner
-
Patent number: 11386601
Abstract: A technique for combining first and second images respectively depicting first and second subject matter to facilitate virtual presentation. The first image is processed to identify portions or regions of the first subject matter and determine an estimated depth location of each portion or region. A composite image is generated that depicts the second subject matter overlaid, inserted or otherwise combined with the first subject matter. One or more of the portions or regions of the first subject matter are added, removed, enhanced or modified in the composite image in order to generate a realistic appearance of the first subject matter combined with the second subject matter. The composite image is caused to be displayed as a virtual presentation.
Type: Grant
Filed: June 29, 2020
Date of Patent: July 12, 2022
Assignee: ZEEKIT ONLINE SHOPPING LTD.
Inventors: Alon Kristal, Nir Appleboim, Yael Wiesel, Israel Harry Zimmerman
-
Patent number: 11380051
Abstract: Systems and methods for image based location estimation are described. In one example embodiment, a first positioning system is used to generate a first position estimate. Point cloud data describing an environment is then accessed. A two-dimensional surface of an image of an environment is captured, and a portion of the image is matched to a portion of key points in the point cloud data. An augmented reality object is then aligned within one or more images of the environment based on the match of the point cloud with the image. In some embodiments, building façade data may additionally be used to determine a device location and place the augmented reality object within an image.
Type: Grant
Filed: February 10, 2021
Date of Patent: July 5, 2022
Assignee: Snap Inc.
Inventors: Nathan Jurgenson, Linjie Luo, Jonathan M Rodriguez, II, Rahul Sheth, Jia Li, Xutao Lv
-
Patent number: 11381840
Abstract: A method of point cloud geometry decoding in a point cloud decoder can include receiving a bitstream including a slice of a coded point cloud frame, and reconstructing an octree representing a geometry of points in a bounding box of the slice where a current node of the octree is partitioned with a quadtree (QT) partition or a binary tree (BT) partition.
Type: Grant
Filed: June 23, 2020
Date of Patent: July 5, 2022
Assignee: TENCENT AMERICA LLC
Inventors: Xiang Zhang, Wen Gao, Sehoon Yea, Shan Liu
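The idea of mixing partition types — splitting a node along three axes (octree), two (quadtree), or one (binary tree) — can be sketched with a simple axis-selection rule. The rule below (split only the dimensions longer than half the longest side) is an illustrative assumption, not the codec's actual decision logic.

```python
# Sketch, under an assumed rule: choose OT/QT/BT partition for a node
# based on which sides of its bounding box are "long enough" to split.
def partition_type(dx, dy, dz):
    """Return partition kind and per-axis split flags for node size (dx, dy, dz)."""
    m = max(dx, dy, dz)
    axes = [d > m / 2 for d in (dx, dy, dz)]   # split only long dimensions
    kind = {3: "OT", 2: "QT", 1: "BT"}[sum(axes)]
    return kind, axes

kind, axes = partition_type(8, 8, 2)     # a flat node: split x and y only
cubic_kind, _ = partition_type(8, 8, 8)  # a cubic node: full octree split
```

Splitting flat or elongated nodes with QT/BT instead of a full octree avoids creating many empty child nodes, which is the usual motivation for such mixed partitions.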
-
Patent number: 11380039
Abstract: Examples of systems and methods for rendering an avatar in a mixed reality environment are disclosed. The systems and methods may be configured to automatically scale an avatar or to render an avatar based on a determined intention of a user, an interesting impulse, environmental stimuli, or user saccade points. The disclosed systems and methods may apply discomfort curves when rendering an avatar. The disclosed systems and methods may provide a more realistic interaction between a human user and an avatar.
Type: Grant
Filed: December 3, 2018
Date of Patent: July 5, 2022
Assignee: Magic Leap, Inc.
Inventors: Thomas Marshall Miller, IV, Victor Ng-Thow-Hing, Josh Anon, Frank Alexander Hamilton, IV, Cole Parker Heiner, Rodrigo Cano, Karen Stolzenberg, Lorena Pazmino, Gregory Minh Tran, Stephane Antoine Joseph Imbert, Anthony Marinello
-
Patent number: 11381753
Abstract: An example method includes setting an exposure time of a camera of a distance sensor to a first value, instructing the camera to acquire a first image of an object in a field of view of the camera, where the first image is acquired while the exposure time is set to the first value, instructing a pattern projector of the distance sensor to project a pattern of light onto the object, setting the exposure time of the camera to a second value that is different than the first value, and instructing the camera to acquire a second image of the object, where the second image includes the pattern of light, and where the second image is acquired while the exposure time is set to the second value.
Type: Grant
Filed: January 26, 2021
Date of Patent: July 5, 2022
Assignee: Magik Eye Inc.
Inventor: Akiteru Kimura
-
Patent number: 11381778
Abstract: A method for generating a texture map used during a video conference, the method may include obtaining multiple texture maps of multiple areas of at least a portion of a three-dimensional (3D) object; wherein the multiple texture maps comprise a first texture map of a first area and of a first resolution, and a second texture map of a second area and of a second resolution, wherein the first area differs from the second area and the first resolution differs from the second resolution; generating a texture map of the at least portion of the 3D object, the generating being based on the multiple texture maps; and utilizing the visual representation of the at least portion of the 3D object based on the texture map of the at least portion of the 3D object during the video conference.
Type: Grant
Filed: March 2, 2021
Date of Patent: July 5, 2022
Assignee: TRUE MEETING INC.
Inventors: Ran Oz, Yuval Gronau, Michael Rabinovich, Osnat Goren-Peyser, Tal Perl
-
Patent number: 11380046
Abstract: A system on a chip (SoC) includes a digital signal processor (DSP) and a graphics processing unit (GPU) coupled to the DSP. The DSP is configured to receive a stream of received depth measurements and generate a virtual bowl surface based on the stream of received depth measurements. The DSP is also configured to generate a bowl to physical camera mapping based on the virtual bowl surface. The GPU is configured to receive a first texture and receive a second texture. The GPU is also configured to perform physical camera to virtual camera transformation on the first texture and on the second texture, based on the bowl to physical camera mapping, to generate an output image.
Type: Grant
Filed: July 23, 2019
Date of Patent: July 5, 2022
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Shashank Dabral, Vikram Appia, Hemant Hariyani, Lucas Weaver
-
Patent number: 11380076
Abstract: Systems and methods configured to facilitate animation are disclosed. Exemplary implementations may: obtain a first scene definition; receive second entity information; integrate the second entity information into the first scene definition such that a second scene definition is generated; for each of the entities of the entity information, execute a simulation of the virtual reality scene from the second scene definition for at least a portion of the scene duration; for each of the entities of the entity information, analyze the second scene definition for deviancy between the given entity and the second motion capture information; for each of the entities of the entity information, indicate, based on the analysis for deviancy, the given entity as deviant; and for each of the entities of the entity information, re-integrate the given entity into the second scene definition.
Type: Grant
Filed: May 24, 2021
Date of Patent: July 5, 2022
Assignee: Mindshow Inc.
Inventors: Jeffrey Scott Dixon, William Stuart Farquhar
-
Patent number: 11380067
Abstract: A system configured to present virtual content in an interactive space may comprise one or more of a light source, an optical element, one or more physical processors, non-transitory electronic storage, and/or other components.
Type: Grant
Filed: April 30, 2019
Date of Patent: July 5, 2022
Assignee: Campfire 3D, Inc.
Inventors: Kharis O'Connell, Yazan Kawar, Michael Stein, Nicholas Cottrell, Amber Choo, Cory Evens, Antonio M. Vasquez
-
Patent number: 11380024
Abstract: An Instant Situational Awareness Visualization Module enables new types and classes of human-computer interface enhancements in which users can easily and simultaneously see where objects are relative to, within, or outside AR shapes, viewed as a picture-in-a-picture inset that provides an overhead view of the terrain/map.
Type: Grant
Filed: October 2, 2020
Date of Patent: July 5, 2022
Assignee: VR REHAB, INC.
Inventors: Elizabeth T. Guckenberger, Ronald J. Guckenberger
-
Patent number: 11380045
Abstract: In various embodiments, a training application generates a trained encoder that automatically generates shape embeddings having a first size and representing three-dimensional (3D) geometry shapes. First, the training application generates a different view activation for each of multiple views associated with a first 3D geometry based on a first convolutional neural network (CNN) block. The training application then aggregates the view activations to generate a tiled activation. Subsequently, the training application generates a first shape embedding having the first size based on the tiled activation and a second CNN block. The training application then generates multiple re-constructed views based on the first shape embedding. The training application performs training operation(s) on at least one of the first CNN block and the second CNN block based on the views and the re-constructed views to generate the trained encoder.
Type: Grant
Filed: October 29, 2018
Date of Patent: July 5, 2022
Assignee: AUTODESK, INC.
Inventors: Thomas Davies, Michael Haley, Ara Danielyan, Morgan Fabian
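The aggregation step in this pipeline — combining per-view CNN activations into one "tiled activation" — can be sketched as spatial tiling. The shapes and the 2×2 grid layout below are assumptions for illustration, not the patent's architecture.

```python
# Sketch with assumed shapes: tile four per-view activation maps into a
# single 2x2 spatial grid, which a second CNN block would then reduce
# to a fixed-size shape embedding.
import numpy as np

n_views, h, w, c = 4, 8, 8, 16
view_acts = [np.random.rand(h, w, c) for _ in range(n_views)]

# Concatenate pairs of views horizontally, then stack the rows vertically
rows = [np.concatenate(view_acts[i:i + 2], axis=1) for i in (0, 2)]
tiled = np.concatenate(rows, axis=0)   # one tiled activation map
```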
-
Patent number: 11375179
Abstract: A system, method or computer program product for displaying a first image based on first image data in a display area of a first display device, receiving at least one camera-captured second image of an environment with the second image capturing at least a portion of the first image displayed in the display area, determining a location and orientation of the first display device relative to the camera, determining a portion of the second image that corresponds to the portion of the first image displayed in the display area, generating a third image that corresponds to the portion of the first image displayed on the first display device as viewed from a point of view of the camera from the first image data, generating a composite image of the environment by replacing at least a portion of the second image with the third image, and displaying the composite image.
Type: Grant
Filed: November 6, 2020
Date of Patent: June 28, 2022
Assignee: Tanzle, Inc.
Inventors: Nancy L. Clemens, Michael A. Vesely
-
Patent number: 11375178
Abstract: A device and method for video rendering. The device includes a memory and an electronic processor. The electronic processor is configured to receive, from a source device, video data including multiple reference viewpoints, determine a target image plane corresponding to a target viewpoint, determine, within the target image plane, one or more target image regions, and determine, for each target image region, a proxy image region larger than the corresponding target image region. The electronic processor is configured to determine, for each target image region, a plurality of reference pixels that fit within the corresponding proxy image region, project, for each target image region, the plurality of reference pixels that fit within the corresponding proxy image region to the target image region, producing a rendered target region from each target image region, and composite one or more of the rendered target regions to create the video rendering.
Type: Grant
Filed: March 4, 2020
Date of Patent: June 28, 2022
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Haricharan Lakshman, Wenhui Jia, Jasper Chao, Shwetha Ram, Domagoj Baricevic, Ajit Ninan
-
Patent number: 11373550
Abstract: Disclosed is an augmented reality training system, which includes a manipulation platform, an augmented reality stereo microscopic assembly, an instrument tracking module and a simulation generation module. The augmented reality stereo microscopic assembly is configured for camera-based capture of real stereo videos and for optical transmission of augmented reality images into the user's eyes. The instrument tracking module uses top and bottom digital cameras to track a marker on an upper portion of an instrument manipulated on a surgical phantom and to track a lower portion of the instrument. The simulation generation module can generate and display augmented reality images that merge the real stereo videos and virtual images for simulation of actions of the instrument in interaction with a training program executed in a processor of the simulation generation module.
Type: Grant
Filed: April 22, 2019
Date of Patent: June 28, 2022
Inventor: Yu-Hsuan Huang
-
Patent number: 11373353
Abstract: Methods, apparatus, and computer readable storage medium for simulating and rendering a material with a modified material point method are described. The method includes, for each of a plurality of time-steps of simulating a material: transferring states of particles representing the material at a N-th time-step to a grid, determining a plurality of grid-node velocities at the N-th time-step using a particle-to-grid computation based on the states of the particles at the N-th time-step, updating the plurality of grid-node velocities at a (N+1)-th time-step based on grid forces, and updating the states of the particles at the (N+1)-th time-step using a grid-to-particle computation based on the states of the particles at the N-th time-step and the plurality of grid-node velocities at the N-th and (N+1)-th time-steps. The method further includes rendering one or more images depicting the material based on the states of the particles at the plurality of time-steps.
Type: Grant
Filed: February 26, 2021
Date of Patent: June 28, 2022
Assignee: TENCENT AMERICA LLC
Inventors: Yun Fei, Ming Gao, Qi Guo, Rundong Wu
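The particle-to-grid / grid-to-particle cycle the abstract describes can be illustrated with a minimal 1-D particle-in-cell step. This is a generic sketch of the standard technique, not the patent's modified method: the linear-hat interpolation weights, the gravity-only force model, and the PIC-style velocity gather are all assumptions made for the example.

```python
import numpy as np

def mpm_step(x, v, m, n_nodes, dx, dt, gravity=-9.8):
    """One time-step of a minimal 1-D particle-in-cell update:
    particle-to-grid transfer of mass and momentum, grid velocity
    update by forces, then grid-to-particle transfer and advection."""
    grid_m = np.zeros(n_nodes)
    grid_mv = np.zeros(n_nodes)
    base = np.floor(x / dx).astype(int)
    frac = x / dx - base
    # Particle-to-grid: linear-hat weights over the two nearest nodes.
    for p in range(len(x)):
        i = base[p]
        for off, w in ((0, 1.0 - frac[p]), (1, frac[p])):
            grid_m[i + off] += w * m[p]
            grid_mv[i + off] += w * m[p] * v[p]
    # Grid-node velocities at step N, then update with forces
    # (only gravity here) to get velocities at step N+1.
    vel = np.divide(grid_mv, grid_m, out=np.zeros_like(grid_mv), where=grid_m > 0)
    vel_new = vel + dt * gravity * (grid_m > 0)
    # Grid-to-particle: gather updated velocities, advect particles.
    v_out = np.zeros_like(v)
    for p in range(len(x)):
        i = base[p]
        v_out[p] = (1.0 - frac[p]) * vel_new[i] + frac[p] * vel_new[i + 1]
    x_out = x + dt * v_out
    return x_out, v_out
```

A single particle at rest picks up exactly one gravity impulse per step and starts falling, which is the expected behavior of the transfer cycle.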
-
Patent number: 11372132
Abstract: A system and method for outputting analysis regarding weather and/or environmental conditions at a venue for an event by determining correlations between the results of past events and historical weather and/or environmental conditions, determining current and/or forecasted weather and/or environmental conditions (for example, using a dense mesonet of sensors in and around an event/venue), and generating analysis based on the current and/or forecasted weather and/or environmental conditions and the correlations between the results of past events and the historical weather and/or environmental conditions.
Type: Grant
Filed: September 23, 2016
Date of Patent: June 28, 2022
Assignee: Locator IP, L.P.
Inventors: Joel N. Myers, Michael R. Root
-
Patent number: 11373360
Abstract: Disclosed techniques relate to grouping rays during traversal of a spatially-organized acceleration data structure (e.g., a bounding volume hierarchy) for ray intersection processing. The grouping may provide temporal locality for accesses to bounding region data. In some embodiments, ray intersect circuitry is configured to group rays based on the node of the data structure that they target next. The ray intersect circuitry may select one or more groups of rays for issuance each clock cycle, e.g., to bounding region test circuitry.
Type: Grant
Filed: November 24, 2020
Date of Patent: June 28, 2022
Assignee: Apple Inc.
Inventors: Ali Rabbani Rankouhi, Christopher A. Burns, Justin A. Hensley, Luca Iuliano, Jonathan M. Redshaw
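The grouping idea in the abstract above — batch rays by the node they will visit next so the node's bounding-region data is fetched once per group — can be sketched in software. The data layout (a list of `(ray_id, next_node_id)` pairs) and the largest-group-first issue order are illustrative assumptions, not details from the patent, which describes dedicated circuitry.

```python
from collections import defaultdict

def group_rays_by_next_node(rays):
    """Group rays by the BVH node each will visit next, so one node's
    bounding-region data can be fetched once and tested against every
    ray in the group. `rays` is a list of (ray_id, next_node_id) pairs.
    Returns (node_id, [ray_ids]) groups, largest group first, since the
    largest group amortizes the node fetch best."""
    groups = defaultdict(list)
    for ray_id, node_id in rays:
        groups[node_id].append(ray_id)
    return sorted(groups.items(), key=lambda kv: len(kv[1]), reverse=True)
```

For example, rays 0, 1, and 3 all headed for node 5 would be issued together against node 5's bounds, while ray 2 forms its own group for node 7.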
-
Patent number: 11364432
Abstract: A timing game device includes a virtual-space image display unit 13, a movement-line display unit 14, and an object display unit 15. The virtual-space image display unit 13 generates a virtual space image, based on VR image data in which a hit area is set in a virtual space having a range wider than a view field of the HMD 200, and displays the virtual space image on an HMD 200. The virtual space image has a view field that changes in response to a movement of the HMD 200. The movement-line display unit 14 generates and displays a movement line of an object that moves to a hit area. The object display unit 15 displays the object to move along the movement line. A game is caused to progress by performing an operation at a timing at which the object reaches the hit area set in the virtual space of the changed view field while appropriately changing the view field of the virtual space image in response to the movement of the HMD 200.
Type: Grant
Filed: May 16, 2019
Date of Patent: June 21, 2022
Assignee: Alpha Code Inc.
Inventor: Takuhiro Mizuno
-
Patent number: 11367247
Abstract: Encoding/decoding data representative of a 3D representation of a scene according to a range of points of view can involve generating a depth map associated with a part of the 3D representation according to a parameter representative of a two-dimensional parameterization associated with the part and data associated with a point included in the part, wherein the two-dimensional parameterization can be responsive to geometric information associated with the point and to pose information associated with the range of points of view. A texture map associated with the part can be generated according to the parameter and data associated with the point. First information representative of point density of points in a part of the part can be obtained. The depth map, texture map, parameter, and first information can be included in respective syntax elements of a bitstream.
Type: Grant
Filed: November 6, 2018
Date of Patent: June 21, 2022
Assignee: InterDigital VC Holdings, Inc.
Inventors: Franck Thudor, Bertrand Chupeau, Renaud Dore, Thierry Tapie, Julien Fleureau
-
Patent number: 11367226
Abstract: A method for aligning a real-world object with a virtual object includes capturing images, video, or both of the real-world object from a first viewpoint and from a second viewpoint. The first and second viewpoints are different. The method also includes simultaneously superimposing the virtual object at least partially over the real-world object from the first viewpoint in a first augmented reality (AR) display and from the second viewpoint in a second AR display based at least in part on the images, video, or both. The method also includes adjusting a position of the real-world object to at least partially align the real-world object with the virtual object from the first viewpoint in the first AR display and from the second viewpoint in the second AR display.
Type: Grant
Filed: February 23, 2021
Date of Patent: June 21, 2022
Assignee: THE JOHNS HOPKINS UNIVERSITY
Inventors: Nassir Navab, Javad Fotouhi
-
Patent number: 11367243
Abstract: An apparatus and method for performing BVH compression and decompression concurrently with stores and loads, respectively.
Type: Grant
Filed: December 1, 2020
Date of Patent: June 21, 2022
Assignee: Intel Corporation
Inventors: Carsten Benthin, Ingo Wald, Gabor Liktor, Johannes Guenther, Elmoustapha Ould-Ahmed-Vall
-
Patent number: 11367235
Abstract: In accordance with an aspect of the present disclosure, there is provided a method for simplifying three-dimensional mesh data using a three-dimensional mesh data simplification device. The method comprises: determining a vertex or an edge of three-dimensional mesh data to be deleted based on animation information including skin weight values and geometric information of the three-dimensional mesh data; and simplifying the three-dimensional mesh data by deleting the vertex or the edge.
Type: Grant
Filed: October 31, 2018
Date of Patent: June 21, 2022
Assignee: SK TELECOM CO., LTD.
Inventors: Seungho Shin, Gukchan Lim, Jinsoo Jeon, Ikhwan Cho
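Selecting which edge to delete based on both geometry and skin weights, as the abstract describes, might look like the sketch below. The specific cost function — edge length plus the L1 difference of the endpoints' skin-weight vectors, blended by `alpha` — is an assumption made for illustration; the patent does not disclose this formula.

```python
import numpy as np

def pick_edge_to_collapse(verts, edges, skin_weights, alpha=1.0):
    """Score each edge by geometric length plus the difference of the
    endpoints' per-bone skin-weight vectors, and return the cheapest
    edge. Collapsing a short edge whose endpoints deform alike changes
    both the static shape and the animation the least."""
    best_edge, best_cost = None, float("inf")
    for (a, b) in edges:
        geom = np.linalg.norm(verts[a] - verts[b])        # edge length
        anim = np.abs(skin_weights[a] - skin_weights[b]).sum()
        cost = geom + alpha * anim
        if cost < best_cost:
            best_edge, best_cost = (a, b), cost
    return best_edge
```

With two candidate edges — one long edge crossing a skin-weight boundary and one short edge whose endpoints share identical weights — the short, animation-neutral edge is chosen for deletion.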
-
Patent number: 11366566
Abstract: In one implementation, a method comprises: obtaining an input from a client device indicating an entry transaction between an avatar and a portal of a modeled space; in response to obtaining the input from the client device indicating the entry transaction between the avatar and the portal of the modeled space, determining an identifier associated with the avatar; determining whether a custom destination has been defined for the portal based on the identifier for the avatar; causing the avatar to enter a default destination environment for the portal according to a determination that a custom destination has not been defined for the portal; and causing the avatar to enter a custom destination environment that is distinct from the default destination environment according to a determination that a custom destination has been defined for the portal.
Type: Grant
Filed: January 15, 2021
Date of Patent: June 21, 2022
Assignee: PFAQUTRUMA RESEARCH LLC
Inventor: Brian Mark Shuster
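The branching the abstract describes — custom destination if one is defined for this avatar and portal, default otherwise — reduces to a per-avatar lookup. The portal data layout below (a `default` environment plus a per-avatar `custom` mapping) is hypothetical, invented only to make the control flow concrete.

```python
def resolve_portal_destination(portal, avatar_id):
    """Return the environment an avatar enters through a portal: the
    avatar's custom destination if one has been defined for this portal,
    otherwise the portal's default destination environment."""
    custom = portal.get("custom", {})
    return custom.get(avatar_id, portal["default"])
```

An avatar with a custom destination registered on the portal is routed there; any other avatar falls through to the default environment.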
-
Patent number: 11368557
Abstract: A computer-implemented method of providing a server-based feature cloud model of a realm includes receiving by a server a series of digital contributions that collectively originate from a plurality of remote computing devices, characterizing portions of the realm. The method also includes processing by the server the received digital contributions to associate them with a global coordinate system and storing the processed contributions in a realm model database as components of the feature cloud model of the realm. Finally, the method includes, in response to a query message over the Internet from a computing device of an end-user, serving, over the Internet by the server to the computing device, digital data defining a selected portion of the feature cloud model for integration and display by the computing device.
Type: Grant
Filed: June 8, 2020
Date of Patent: June 21, 2022
Inventors: Alexander Hertel, Philipp Hertel
-
Patent number: 11368652
Abstract: Audio content and played frames may be received. The audio content may correspond to first video content. The played frames may be included in the first video content. The first video content may further include a replaced frame. The played frames and the replaced frame may include a face of a person. Location data may also be received that indicates locations of facial features of the face of the person within the replaced frame. A replacement frame may be generated, such as by rendering the facial features in the replacement frame based at least in part on the locations indicated by the location data and positions indicated by a portion of the audio content that is associated with the replaced frame. Second video content may be played including the played frames and the replacement frame. The replacement frame may replace the replaced frame in the second video content.
Type: Grant
Filed: October 29, 2020
Date of Patent: June 21, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Gregory Johnson, Pragyana K. Mishra, Mohammed Khalilia, Wenbin Ouyang, Naveen Sudhakaran Nair
-
Patent number: 11368631
Abstract: Systems, imaging devices and methods for creating background blur in camera panning or motion. Using an imaging device with an image sensor, a method may comprise selecting an object to be tracked in a scene, recording an image or an image stream, and aligning the selected object optically and/or digitally to a same position on the image sensor while the selected object moves relative to the imaging device or relative to the scene, thereby creating a blurred image background and/or foreground relative to the selected object and a sense of panning or motion.
Type: Grant
Filed: June 9, 2020
Date of Patent: June 21, 2022
Assignee: Corephotonics Ltd.
Inventors: Anat Leshem Gat, Ruthy Katz, Omri Levi, Oded Gigushinski, Yiftah Kowal, Ephraim Goldenberg, Gal Shabtay, Noy Cohen, Michael Scherer
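The digital side of the alignment in the abstract above can be sketched as a shift-and-average: translate each frame so the tracked object stays at the same sensor position, then accumulate. The object stays sharp while everything that moved relative to it smears. Grayscale 2-D frames, integer `(row, col)` object positions, and `np.roll` for the shift are simplifying assumptions for the example (a real pipeline would use sub-pixel warps and handle frame borders).

```python
import numpy as np

def panning_blur(frames, object_positions):
    """Simulate panning blur: shift each frame so the tracked object
    stays at the position it had in the first frame, then average the
    aligned frames. The tracked object stays sharp; the background
    smears along the relative motion."""
    ref = object_positions[0]
    acc = np.zeros_like(frames[0], dtype=float)
    for frame, pos in zip(frames, object_positions):
        dy, dx = ref[0] - pos[0], ref[1] - pos[1]
        acc += np.roll(frame, shift=(dy, dx), axis=(0, 1))
    return acc / len(frames)
```

With two frames in which the bright object pixel moves one column while a background pixel stays put, the averaged result keeps the object at full intensity and halves the background pixel, spreading it across two positions.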
-
Patent number: 11364103
Abstract: A method and a system for determining a bite position between arch forms of a subject. The method comprises: receiving a 3D model including a first portion and a second portion respectively representative of lower and upper arch forms of the subject; determining a respective distance value from each point of the first portion to the second portion; determining, for each point of the first portion, a respective weight value, thereby determining a respective weighted distance value; aggregating respective weighted distance values associated with each point of the first portion to determine an aggregate distance value being a remoteness measure between the first portion and the second portion; and determining the bite position based on the aggregate distance value.
Type: Grant
Filed: May 13, 2021
Date of Patent: June 21, 2022
Assignee: Oxilio Ltd
Inventor: Islam Khasanovich Raslambekov
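The aggregate remoteness measure described in the abstract above can be sketched with point clouds. Two choices here are assumptions, not details from the patent: "distance to the second portion" is taken as distance to the nearest upper-arch point, and aggregation is a plain weighted sum.

```python
import numpy as np

def aggregate_bite_distance(lower_pts, upper_pts, weights):
    """Remoteness measure between lower and upper arch forms: for each
    lower-arch point, take its distance to the nearest upper-arch point,
    scale it by that point's weight, and sum the weighted distances.
    lower_pts: (n, 3), upper_pts: (m, 3), weights: (n,)."""
    diffs = lower_pts[:, None, :] - upper_pts[None, :, :]   # (n, m, 3)
    dists = np.linalg.norm(diffs, axis=2).min(axis=1)       # nearest-point distance
    return float((weights * dists).sum())
```

A bite-search procedure would then evaluate this measure over candidate relative poses of the two arch forms and pick the pose minimizing it.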
-
Patent number: 11367523
Abstract: A method for image data processing is provided. In an embodiment, the method includes providing a 3D medical image data record, which relates to an elongated anatomical structure, a center line of the elongated anatomical structure being defined in the 3D medical image data record; defining at least one curved slice in the 3D medical image data record, the at least one curved slice winding around the center line; scanning at least one part of the 3D medical image data record into the at least one curved slice; and unrolling the at least one curved slice, into which the at least one part of the 3D medical image data record was scanned, at least one unrolled flat slice being determined. An image data processing unit for image data processing and a medical imaging apparatus including the image data processing unit are also provided.
Type: Grant
Filed: May 16, 2018
Date of Patent: June 21, 2022
Assignee: SIEMENS HEALTHCARE GMBH
Inventors: Grzegorz Soza, Stefan Grosskopf, Hannes Martinke, Christian Petry, Helmut Ringl, Michael Suehling
-
Patent number: RE49150
Abstract: A computer-implemented method for ordering vertices in an image frame within a data stream, wherein the image frame corresponds to Earth-viewing data. A point of intersection of a primary pair of lines is determined and loaded into computer memory, and interrogated as to a sign of a signed remainder with respect to each of two secondary lines defined by the pairwise ordered sets of vertices. In the case of opposite remainder sign with respect to the two secondary lines, two provisional indices are swapped to obtain a rectified index for each of the four vertices. The process is repeated with respect to the signed remainder of the intersection point of the secondary lines relative to the primary lines. The four vertices are then fit, in accordance with index ordering, into a tiling of the surface of the Earth based on the rectified index of each of the four vertices.
Type: Grant
Filed: November 24, 2020
Date of Patent: July 26, 2022
Assignee: Intergraph Corporation
Inventor: Gene Arthur Grindstaff
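The sign-of-signed-remainder test at the heart of the abstract above is, in 2-D, a cross-product side test. The sketch below shows that test and the swap decision it drives; the coordinate representation and function boundaries are illustrative assumptions, not the patent's actual procedure.

```python
def side_sign(p, a, b):
    """Signed remainder of point p with respect to the line through a
    and b: positive on one side, negative on the other, zero on the
    line (the 2-D cross product of (b - a) and (p - a))."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def needs_swap(intersection, line1, line2):
    """Per the abstract's test: if the intersection point of the primary
    pair of lines has opposite remainder sign with respect to the two
    secondary lines, the two provisional vertex indices are swapped.
    Each line is a pair of 2-D endpoints."""
    s1 = side_sign(intersection, *line1)
    s2 = side_sign(intersection, *line2)
    return s1 * s2 < 0
```

An intersection point lying between two vertical secondary lines gets opposite signs from them, triggering the swap; a point on the same side of both does not.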