Patents Examined by Phuc N Doan
-
Patent number: 11657572
Abstract: Systems and methods for generating a map. The methods comprise: performing, by a computing device, ray-casting operations to generate a 3D point cloud with a reduced number of data points associated with moving objects; generating, by the computing device, a 2D binary mask for at least one semantic label class of the 3D point cloud; determining, by the computing device, x-coordinates and y-coordinates for a 2D volume defining an object of the at least one semantic label class; identifying, by the computing device, data points in the 3D point cloud based on the 2D volume; comparing, by the computing device, z-coordinates of the identified data points to at least one threshold value selected for the at least one semantic label class; and generating, by the computing device, the map by removing data points from the 3D point cloud based on results of the comparing.
Type: Grant | Filed: October 21, 2020 | Date of Patent: May 23, 2023
Assignee: Argo AI, LLC
Inventor: Yong-Dian Jian
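The final pruning step this abstract describes, comparing z-coordinates inside a labeled 2D footprint against a class-specific threshold, can be sketched as below. The function name, tuple layout, and the single threshold value are illustrative assumptions, not the patent's actual implementation:

```python
# Minimal sketch of per-class z-threshold pruning of a point cloud.
# All names and the threshold are hypothetical; the patent describes the
# idea (drop points above a height limit within a semantic object's
# 2D footprint), not this exact code.

def prune_points(points, bbox, z_threshold):
    """Remove points inside the 2D footprint `bbox` whose z exceeds z_threshold.

    points:      list of (x, y, z) tuples from the 3D point cloud
    bbox:        (xmin, ymin, xmax, ymax) footprint of the labeled 2D volume
    z_threshold: height limit selected for this semantic label class
    """
    xmin, ymin, xmax, ymax = bbox
    kept = []
    for x, y, z in points:
        inside = xmin <= x <= xmax and ymin <= y <= ymax
        if inside and z > z_threshold:
            continue  # drop: inside the footprint and above the class threshold
        kept.append((x, y, z))
    return kept
```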
-
Patent number: 11651550
Abstract: An application server system and a method for generating a three-dimensional (3D) model from 2D humanoid sketches is provided. The method includes receiving, by a parameterized humanoid engine, the 2D humanoid sketches from a user device associated with a user, predicting 3D vertices that correspond to the 2D humanoid sketches based on a pose, shape, and camera orientation of a subject in the 2D humanoid sketches, plotting, using a 3D vertex plotting engine, the 3D vertices to obtain a rough 3D model, rendering the rough 3D model onto a user interface of an application server system or the user device, enabling the user to realign the rough 3D model, thereby obtaining a realigned 3D model that accurately fits the 2D humanoid sketches, and adding textures to the realigned 3D model to generate the 3D model of the subject in the 2D humanoid sketches.
Type: Grant | Filed: September 1, 2021 | Date of Patent: May 16, 2023
Assignee: TREADSTONE MEDIA LABS PRIVATE LIMITED
Inventors: Karthik Rajagopalan, Mohamed Faheem Thanveer
-
Patent number: 11650421
Abstract: A method may include identifying, by one or more processors, an object in a field of view of a wearable display, where the object is identified for a presbyopic compensation. The presbyopic compensation is performed by the one or more processors on image data of the object to generate compensated image data of the object. The one or more processors render an image in response to the compensated image data of the object on a display of the wearable display.
Type: Grant | Filed: May 23, 2019 | Date of Patent: May 16, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Ian Erkelens, Larry Richard Moore, Jr., Kevin James MacKenzie
-
Patent number: 11631226
Abstract: Disclosed herein is a method of graphically presenting an indicating marker over a 3-D model of a tissue surface during a catheterization procedure, comprising: determining a region over the 3-D model; deforming the indicating marker to congruently match a shape defined by the 3-D model across the region at a plurality of positions; and rendering the 3-D model into an image including the deformed indicating marker by generating an image of the 3-D model covered by said deformed indicating marker.
Type: Grant | Filed: April 26, 2021 | Date of Patent: April 18, 2023
Assignee: Navix International Limited
Inventors: Yizhaq Shmayahu, Yitzhack Schwartz
-
Patent number: 11625888
Abstract: A system for performing a raytracing process, the system comprising: a bounding volume hierarchy (BVH) identification unit operable to identify a BVH structure for use in generating images of a virtual environment, the BVH structure comprising information about one or more surfaces within the virtual environment; a BVH selection unit operable to discard one or more elements of the BVH structure in dependence upon a direction of an incident ray; and a raytracing unit operable to perform a raytracing process using the remaining BVH elements.
Type: Grant | Filed: November 19, 2020 | Date of Patent: April 11, 2023
Assignee: Sony Interactive Entertainment Inc.
Inventor: Rosario Leonardi
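The direction-dependent discard step can be illustrated with a backface-style test: an element whose stored facing normal points along the incident ray cannot yield a front-facing hit, so it is skipped before traversal. The dict layout, the per-element normal, and the dot-product criterion are assumptions for illustration; the patent does not specify this exact selection rule:

```python
# Sketch of culling BVH elements by incident-ray direction.
# Assumes each element carries a representative facing normal; this is an
# illustrative simplification, not the patented selection logic.

def select_elements(elements, ray_dir):
    """Keep only BVH elements whose facing normal opposes the ray direction.

    elements: list of dicts with a 'normal' key (3-vector)
    ray_dir:  incident ray direction (3-vector)
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    # normal . ray_dir < 0 means the surface faces the incoming ray,
    # so the element may produce a visible intersection and is retained.
    return [e for e in elements if dot(e["normal"], ray_dir) < 0.0]
```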
-
Patent number: 11620793
Abstract: Methods, systems, and apparatus, including medium-encoded computer program products, for computer-aided design of structures include, in one aspect, a method for producing a smooth surface, the method including: obtaining a polygonal control mesh for a smooth surface representing an object; subdividing the polygonal control mesh in one or more subdivisions to produce a refined control mesh, wherein the subdividing comprises: using data defining an eigen polyhedron around an extraordinary point in the polygonal control mesh to generate adjustment rules to determine positions of the extraordinary point, and face points and edge points for faces adjacent to the extraordinary point, and determining, according to the adjustment rules, the positions for the extraordinary point, the face points, and the edge points for the faces adjacent to the extraordinary point; and generating, by the computer graphics application, the smooth surface for output from the refined control mesh.
Type: Grant | Filed: March 25, 2021 | Date of Patent: April 4, 2023
Assignee: Autodesk, Inc.
Inventors: Kevin James Marshall, Nicholas Stewart North, Adam Michael Helps
-
Patent number: 11615569
Abstract: An example of an image display system includes a goggle apparatus having a display section. A virtual camera and a user interface are placed in a virtual space. The orientation of the virtual camera in the virtual space is controlled in accordance with the orientation of the goggle apparatus. When the goggle apparatus rotates by an angle greater than or equal to a predetermined angle in a pitch direction, the user interface is moved to the front of the virtual camera in a yaw direction.
Type: Grant | Filed: June 3, 2021 | Date of Patent: March 28, 2023
Assignee: NINTENDO CO., LTD.
Inventors: Yuki Onozawa, Daigo Shimizu, Kentaro Kawai
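The pitch-threshold behavior described here can be sketched as a small update rule: once the headset pitches past a threshold, the UI's yaw is snapped to the camera's yaw so the interface reappears in front of the viewer. The 30-degree threshold and the function name are illustrative assumptions, not values from the patent:

```python
import math

def maybe_recenter_ui(ui_yaw, camera_yaw, camera_pitch,
                      pitch_threshold=math.radians(30)):
    """Return the UI's new yaw angle (radians).

    If the goggle apparatus has pitched past `pitch_threshold`, move the UI
    to the front of the virtual camera in the yaw direction; otherwise leave
    it in place. The 30-degree default is a hypothetical choice.
    """
    if abs(camera_pitch) >= pitch_threshold:
        return camera_yaw  # snap UI in front of the camera
    return ui_yaw          # below threshold: UI stays put
```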
-
Patent number: 11609675
Abstract: A system and method may include receiving data defining an augmented reality (AR) environment including a representation of a physical environment, identifying relationships between a plurality of scene elements in the AR environment, and obtaining a set of UI layout patterns for arranging the plurality of scene elements in the AR environment according to one or more relationships between the plurality of scene elements. The system and method may identify, for the at least one scene element, at least one relationship that corresponds to at least one UI layout pattern, generate a modified UI layout pattern for the at least one scene element using different relationships than the identified at least one relationship, and trigger display of the AR content associated with the information and the at least one scene element using the modified UI layout pattern.
Type: Grant | Filed: December 2, 2019 | Date of Patent: March 21, 2023
Assignee: Google LLC
Inventors: David Joseph Murphy, Ariel Sachter-Zeltzer, Caroline Hermans
-
Patent number: 11610380
Abstract: A method for interacting with an autostereoscopic display is disclosed. The method includes initiating displaying by the autostereoscopic display a left eye view and a right eye view that contain a virtual manipulated object, determining a real-world coordinate of the virtual manipulated object perceived by a user located at a predetermined viewing position of the autostereoscopic display, receiving an interactive action of the user's manipulating body acquired by a motion tracker, where the interactive action includes a real-world coordinate of the manipulating body, determining whether an interaction condition is triggered based at least in part on the real-world coordinate of the virtual manipulated object and the real-world coordinate of the manipulating body, and refreshing the left eye view and the right eye view based on the interactive action of the manipulating body acquired by the motion tracker, in response to determining that the interaction condition is triggered.
Type: Grant | Filed: December 2, 2019 | Date of Patent: March 21, 2023
Assignees: BEIJING BOE OPTOELECTRONICS TECHNOLOGY CO., LTD., BOE TECHNOLOGY GROUP CO., LTD.
Inventors: Guixin Yan, Hao Zhang, Lili Chen, Minglei Chu, Wenhong Tian, Zhanshan Ma, Ziqiang Guo
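One plausible form of the interaction condition is a simple proximity test between the object's perceived real-world coordinate and the tracked manipulating body. The distance criterion and the 5 cm radius are assumptions for illustration; the patent only requires that some condition be evaluated over the two coordinates:

```python
import math

def interaction_triggered(obj_pos, hand_pos, radius=0.05):
    """Hypothetical interaction condition: trigger when the tracked
    manipulating body comes within `radius` metres of the virtual object's
    perceived real-world position. Both the test and the 5 cm default are
    illustrative assumptions, not the patented condition.
    """
    return math.dist(obj_pos, hand_pos) <= radius
```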
-
Patent number: 11605200
Abstract: A system for transferring a mesh into a 3D mesh includes: a mesh receiving unit receiving a first mesh and a second mesh; a mapping unit mapping the mesh onto the 3D mesh, thereby generating a combined mesh comprising combined vertices; an intersection calculating unit calculating intersection vertices formed between the mesh and the 3D mesh in the combined mesh and further configured to add the intersection vertices to the combined mesh; and an edge calculating unit calculating combined edges between the intersection vertices and the combined vertices. The edge calculating unit is further configured to add the combined edges to the combined mesh.
Type: Grant | Filed: December 12, 2019 | Date of Patent: March 14, 2023
Assignee: TWIKIT NV
Inventors: Martijn Joris, Olivier De Deken, Sam Van Den Berghe
-
Patent number: 11605199
Abstract: Systems and methods are provided for generating a mesh for a computer model of a three-dimensional object. In an embodiment: a Cartesian mesh of hexahedral elements is generated for a representation of the three-dimensional object; a sweep direction is determined for the Cartesian mesh; a swept index level is assigned for each planar face of the Cartesian mesh perpendicular to the sweep direction; the hexahedral elements are grouped into a plurality of mesh groups, with each mesh group including contiguous hexahedral elements that span the same swept index levels along the sweep direction; a block volume is generated for each respective mesh group, with the block volume being defined by boundary loop edges of first and second faces of the respective mesh group; and a hexahedral mesh is generated for each of the block volumes generated for the mesh groups.
Type: Grant | Filed: October 30, 2020 | Date of Patent: March 14, 2023
Assignee: Ansys, Inc.
Inventor: Ganesan Hariharaputhiran
-
Patent number: 11600036
Abstract: In examples, a filter used to denoise shadows for a pixel(s) may be adapted based at least on variance in temporally accumulated ray-traced samples. A range of filter values for a spatiotemporal filter may be defined based on the variance and used to exclude temporal ray-traced samples that are outside of the range. Data used to compute a first moment of a distribution used to compute variance may be used to compute a second moment of the distribution. For binary signals, such as visibility, the first moment (e.g., accumulated mean) may be equivalent to a second moment (e.g., the mean squared). In further respects, spatial filtering of a pixel(s) may be skipped based on comparing the mean of variance of the pixel(s) to one or more thresholds and based on the accumulated number of values for the pixel.
Type: Grant | Filed: September 4, 2020 | Date of Patent: March 7, 2023
Assignee: NVIDIA Corporation
Inventors: Pawel Kozlowski, Alexey Panteleev
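The binary-signal observation in this abstract follows from v·v = v when v ∈ {0, 1}: the accumulated mean (first moment) is identical to the mean of squares (second moment), so variance comes for free from a single accumulator. A minimal sketch, with the function name and list-based accumulation as illustrative assumptions:

```python
def accumulated_variance(samples):
    """Variance of a binary signal (e.g., shadow visibility samples).

    Because v * v == v for v in {0, 1}, the accumulated mean serves as both
    the first moment E[v] and the second moment E[v^2], so no separate
    sum-of-squares accumulator is needed.
    """
    n = len(samples)
    m1 = sum(samples) / n   # first moment (accumulated mean)
    m2 = m1                 # second moment: E[v^2] == E[v] for binary v
    return m2 - m1 * m1     # variance = E[v^2] - E[v]^2
```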
-
Patent number: 11544903
Abstract: Managing volumetric data, including: defining a view volume in a volume of space, wherein the volumetric data has multiple points in the volume of space and at least one point is in the view volume and at least one point is not in the view volume; defining a grid in the volume of space, the grid having multiple cells and dividing the volume of space into respective cells, wherein each point has a corresponding cell in the grid, and each cell in the grid has zero or more corresponding points; and reducing the number of points for a cell in the grid where that cell is outside the view volume.
Type: Grant | Filed: May 15, 2020 | Date of Patent: January 3, 2023
Assignees: Sony Group Corporation, Sony Pictures Entertainment Inc.
Inventors: Brad Hunt, Tobias Anderberg
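The reduction step can be sketched as simple decimation: points in cells inside the view volume are kept in full, while points outside are thinned. The keep-every-k strategy, the predicate interface, and all names are illustrative assumptions; the patent claims only that the point count is reduced outside the view volume:

```python
def reduce_points(points, in_view, keep_every=4):
    """Keep all points inside the view volume; decimate points outside it.

    points:     list of (x, y, z) tuples
    in_view:    predicate returning True if a point lies in the view volume
    keep_every: keep one of every `keep_every` outside points (hypothetical
                decimation strategy, not the patented method)
    """
    kept = []
    outside_count = 0
    for p in points:
        if in_view(p):
            kept.append(p)          # in-view points survive untouched
        else:
            if outside_count % keep_every == 0:
                kept.append(p)      # retain a sparse sample outside the view
            outside_count += 1
    return kept
```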
-
Patent number: 11501499
Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and a method for rendering virtual modifications to real-world environments depicted in image content. A reference surface is detected in a three-dimensional (3D) space captured within a camera feed produced by a camera of a computing device. An image mask is applied to the reference surface within the 3D space captured within the camera feed. A visual effect is applied to the image mask corresponding to the reference surface in the 3D space. The application of the visual effect to the image mask causes a modified surface to be rendered in presenting the camera feed on a display of the computing device.
Type: Grant | Filed: December 20, 2019 | Date of Patent: November 15, 2022
Assignee: Snap Inc.
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Wentao Shang
-
Patent number: 11494982
Abstract: Methods for CAD operations and corresponding systems (2800) and computer-readable mediums (2826) are disclosed herein. A method includes receiving (502) a model (600) of a part to be manufactured, wherein the model includes a plurality of original faces (102, 104, 106, 112, 114). The method includes classifying (510) each face in the model according to a relative face curvature according to classifications that include at least a high-curvature classification (702). The method includes classifying (514) any sliver faces (102, 104, 106, 112, 114) and narrow blend faces (402, 404, 406, 408) of the plurality of faces. The method includes merging (516) contiguous faces (702) in each classification. The method includes detecting (518) special faces (1002, 1012) of the plurality of faces. The method includes restoring (520) original faces in the high-curvature classification except for the special faces (1002, 1012).
Type: Grant | Filed: September 21, 2018 | Date of Patent: November 8, 2022
Assignee: Siemens Industry Software Inc.
Inventors: Jonathan Makem, Nilanjan Mukherjee, Debashis Basu, Abinesh Thota, Harold Fogg
-
Patent number: 11496723
Abstract: Generating a representation of a scene includes detecting an indication to capture sensor data to generate a virtual representation of a scene in a physical environment at a first time, in response to the indication obtaining first sensor data from a first capture device at the first time, obtaining second sensor data from a second capture device at the first time, and combining the first sensor data and the second sensor data to generate the virtual representation of the scene.
Type: Grant | Filed: September 24, 2019 | Date of Patent: November 8, 2022
Assignee: Apple Inc.
Inventors: Bertrand Nepveu, Yan Cote
-
Patent number: 11450057
Abstract: Enhanced techniques applicable to a ray tracing hardware accelerator for traversing a hierarchical acceleration structure and its underlying primitives are disclosed. For example, traversal speed is improved by grouping processing of primitives sharing at least one feature (e.g., a vertex or an edge) during ray-primitive intersection testing. Grouping the primitives for ray intersection testing can reduce processing (e.g., projections and transformations of primitive vertices and/or determining edge function values) because at least a portion of the processing results related to the shared feature in one primitive can be used to determine whether the ray intersects another primitive(s). Processing triangles sharing an edge can double the culling rate of the triangles in the ray/triangle intersection test without replicating the hardware.
Type: Grant | Filed: June 15, 2020 | Date of Patent: September 20, 2022
Assignee: NVIDIA Corporation
Inventors: Gregory Muthler, John Burgess, Ian Chi Yan Kwong
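The shared-edge reuse can be seen in a 2D edge-function formulation of the ray/triangle test: when two triangles share edge a-c, the edge function for a→c is computed once and reused (negated) by the second triangle. This is a software sketch of the general idea, with assumed winding conventions, not the accelerator's hardware implementation:

```python
def edge_fn(p, a, b):
    """Signed edge function of 2D point p against directed edge a->b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def hit_pair(p, a, b, c, d):
    """Test projected point p against CCW triangles (a, b, c) and (a, c, d),
    which share edge a-c. The shared edge function is evaluated once; the
    first triangle uses its negation (its edge runs c->a), the second uses
    it directly (its edge runs a->c).
    """
    shared = edge_fn(p, a, c)  # computed once, reused by both triangles
    hit1 = edge_fn(p, a, b) >= 0 and edge_fn(p, b, c) >= 0 and -shared >= 0
    hit2 = shared >= 0 and edge_fn(p, c, d) >= 0 and edge_fn(p, d, a) >= 0
    return hit1, hit2
```

A point strictly on one side of the shared diagonal can hit at most one of the two triangles, which is why the sign of `shared` alone culls one of them.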
-
Patent number: 11450065
Abstract: Examples of the disclosure describe systems and methods for decomposing and sharing 3D models. In an example method, a first version of a virtual three-dimensional model is displayed via a display of a wearable head device. A request is made to a host device for data associated with a second version of the virtual three-dimensional model, wherein the second version of the virtual three-dimensional model comprises a constituent part. It is determined whether the first version of the virtual three-dimensional model comprises the constituent part. In accordance with a determination that the first version of the virtual three-dimensional model does not comprise the constituent part, a request is made to the host device for data associated with the constituent part. The second version of the virtual three-dimensional model is displayed, via the display of the wearable head device.
Type: Grant | Filed: September 24, 2019 | Date of Patent: September 20, 2022
Assignee: Magic Leap, Inc.
Inventor: Marc Alan McCall
-
Patent number: 11436782
Abstract: Example systems, methods, and instructions to be executed by a processor for the animation of realistic facial performances of avatars are provided. Such an example system includes a memory to store a facial gesture model of a subject head derived from a photogrammetric scan of the subject head, and a video of a face of the subject head delivering a facial performance. The system further includes a processor to generate a dynamic texture map that combines the video of the face of the subject head delivering the facial performance with a static portion of the facial gesture model of the subject head, apply the dynamic texture map to the facial gesture model, and animate the facial gesture model of the subject head to emulate the facial performance.
Type: Grant | Filed: January 22, 2020 | Date of Patent: September 6, 2022
Assignee: CREAM DIGITAL INC.
Inventors: Andrew MacDonald, Tristan Cezair, Stephan Kozak
-
Patent number: 11430179
Abstract: Techniques for improving remote rendering and reprojection are disclosed herein. A color image is generated, where this color image includes overlapping content regions. A depth buffer is generated for the color image and includes depth values for the pixels in the color image. The depth buffer includes both essential and non-essential depth discontinuities. While preserving the essential depth discontinuities, the non-essential depth discontinuities are eliminated from the depth buffer. New non-essential discontinuities are prevented from being included in the final version of the depth buffer. The color image is encoded into a color image video stream, and the modified depth buffer is encoded into a depth buffer stream. The color image video stream and the depth buffer stream are then transmitted to a remotely located HMD. The HMD then reprojects the color image based on the depth values in the depth buffer.
Type: Grant | Filed: February 24, 2020 | Date of Patent: August 30, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventor: Christian Voss-Wolff
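The distinction between essential and non-essential depth discontinuities can be sketched as a threshold test over a scanline of the depth buffer: small jumps are treated as noise and damped (they compress poorly in the depth stream and add no reprojection value), while large silhouette jumps are preserved. The per-row scan, the threshold, and the damping factor are illustrative assumptions, not the patented classification:

```python
def smooth_depth_row(depths, max_jump=0.1):
    """Damp non-essential discontinuities in one row of a depth buffer.

    Jumps smaller than `max_jump` are treated as noise and pulled halfway
    toward the previous value; larger (essential, silhouette-like) jumps
    are kept intact. Threshold and damping factor are hypothetical.
    """
    out = [depths[0]]
    for d in depths[1:]:
        jump = d - out[-1]
        if abs(jump) < max_jump:
            out.append(out[-1] + jump * 0.5)  # damp the small discontinuity
        else:
            out.append(d)                     # preserve the essential one
    return out
```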