Patents Examined by Phu K. Nguyen
  • Patent number: 11116606
    Abstract: A method for determining a jaw curve for orthodontic treatment planning for a patient, the method being executable by a processor. The method includes obtaining a tooth and gingiva mesh from image data associated with teeth and surrounding gingiva of the patient, the mesh being representative of a surface of the teeth and the surrounding gingiva; obtaining a tooth contour of each tooth, the tooth contour being defined by a border between a visible portion of each tooth and the surrounding gingiva; determining a tooth contour center of each tooth, the tooth contour center of a given tooth being an average point of the tooth contour of the given tooth; projecting the tooth contour center of each tooth onto a jaw plane; and fitting the tooth contour center of each tooth to a curve to determine the jaw curve.
    Type: Grant
    Filed: January 6, 2021
    Date of Patent: September 14, 2021
    Assignee: Arkimos Ltd.
    Inventor: Islam Khasanovich Raslambekov
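The jaw-curve pipeline in the abstract above (contour centers as average points, projection onto a jaw plane, curve fitting) can be sketched as follows. This is a minimal illustration, not the patented implementation: the function name is invented, the jaw plane is assumed to be z = 0, and a parabola is assumed as the fitted curve family.

```python
import numpy as np

def jaw_curve_from_contours(tooth_contours, degree=2):
    """Fit a jaw curve from per-tooth gingival contours.

    tooth_contours: list of (N_i, 3) arrays, each holding the 3D contour
    points of one tooth (the border between the visible tooth and gingiva).
    Returns polynomial coefficients of y = f(x) fitted to the projected
    contour centers.
    """
    # Tooth contour center: the average point of each tooth's contour.
    centers = np.array([c.mean(axis=0) for c in tooth_contours])
    # Project each center onto the jaw plane (assumed here to be z = 0,
    # so projection keeps the x and y coordinates).
    xy = centers[:, :2]
    # Fit the projected centers to a curve (a parabola, as an assumption).
    return np.polyfit(xy[:, 0], xy[:, 1], degree)
```

With synthetic contours whose centers lie on y = 0.1x², the fit recovers those coefficients.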
  • Patent number: 11113869
    Abstract: Examples described herein generally relate to generating a visualization of an image. A proprietary structure that specifies ray tracing instructions for generating the image using ray tracing is intercepted from a graphics processing unit (GPU) or a graphics driver. The proprietary structure can be converted, based on assistance information, to a visualization structure for generating the visualization of the image. The visualization of the image can be generated from the visualization structure.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: September 7, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Austin Neil Kinross, Shawn Lee Hargreaves, Amar Patel, Thomas Lee Davidson
  • Patent number: 11113865
    Abstract: One embodiment of the present invention provides a technique for generating a three-dimensional model from a two-dimensional sketch. The technique includes receiving input indicating a set of points defining a first sketch element and a second set of points defining a second sketch element included in a sketch. The technique further includes identifying one or more design relationships between the first sketch element and the second sketch element. The technique further includes generating a computer model of the sketch that represents a structure linking the first sketch element and the second sketch element according to the one or more design relationships. The technique further includes outputting the first sketch element, the second sketch element, and the structure for display.
    Type: Grant
    Filed: July 23, 2019
    Date of Patent: September 7, 2021
    Assignee: AUTODESK, INC.
    Inventors: Hyunmin Cheong, George Fitzmaurice, Tovi Grossman, Rubaiat Habib Kazi, Ali Baradaran Hashemi
  • Patent number: 11107261
    Abstract: The present disclosure generally relates to displaying visual effects such as virtual avatars. An electronic device having a camera and a display apparatus displays a virtual avatar that changes appearance in response to changes in a face in a field of view of the camera. In response to detecting changes in one or more physical features of the face in the field of view of the camera, the electronic device modifies one or more features of the virtual avatar.
    Type: Grant
    Filed: October 24, 2019
    Date of Patent: August 31, 2021
    Assignee: Apple Inc.
    Inventors: Nicolas Scapel, Guillaume Pierre André Barlier, Aurelio Guzman, Jason Rickwald
  • Patent number: 11100722
    Abstract: Provided are a method, an apparatus, an electronic device, and a storage medium for displaying an expansion of a 3D shape, including: determining a 3D shape to be expanded, and acquiring a target expanded state of the 3D shape; searching a preset multi-level information relationship table for an articulation relationship set corresponding to the target expanded state; determining, according to the articulation relationship set and a preset expansion rule library, a target expansion rule for each target plane surface on the 3D shape; and controlling to expand each target plane surface at a predetermined rate based on each target expansion rule, and displaying the expansion process in real time. The method dynamically displays the expansion process of a 3D shape to a student, so that the student can better understand the transformation from a 3D shape to a selected expanded state, thereby improving the user experience of a teaching demonstration function on an electronic device.
    Type: Grant
    Filed: December 17, 2017
    Date of Patent: August 24, 2021
    Assignees: GUANGZHOU SHIYUAN ELECTRONICS CO., LTD., GUANGZHOU SHIRUI ELECTRONICS CO. LTD.
    Inventor: Hong Ye
  • Patent number: 11094112
    Abstract: The present invention generally relates to generating a three-dimensional representation of a physical environment, which includes dynamic scenarios.
    Type: Grant
    Filed: August 15, 2019
    Date of Patent: August 17, 2021
    Assignee: Foresight AI Inc.
    Inventors: Shili Xu, Matheen Siddiqui, Chang Yuan
  • Patent number: 11095857
    Abstract: Disclosed herein is a web-based videoconference system that allows for video avatars to navigate within the virtual environment. The system has a presented mode that allows for a presentation stream to be texture mapped to a presenter screen situated within the virtual environment. The relative left-right sound is adjusted to provide a sense of an avatar's position in a virtual space. The sound is further adjusted based on the area where the avatar is located and where the virtual camera is located. Video stream quality is adjusted based on relative position in a virtual space. Three-dimensional modeling is available inside the virtual video conferencing environment.
    Type: Grant
    Filed: October 20, 2020
    Date of Patent: August 17, 2021
    Assignee: Katmai Tech Holdings LLC
    Inventors: Gerard Cornelis Krol, Erik Stuart Braund
  • Patent number: 11087548
    Abstract: Various methods and systems are provided for authoring and presenting 3D presentations. Generally, an augmented or virtual reality device for each author, presenter and audience member includes 3D presentation software. During authoring mode, one or more authors can use 3D and/or 2D interfaces to generate a 3D presentation that choreographs behaviors of 3D assets into scenes and beats. During presentation mode, the 3D presentation is loaded in each user device, and 3D images of the 3D assets and corresponding asset behaviors are rendered among the user devices in a coordinated manner. As such, one or more presenters can navigate the scenes and beats of the 3D presentation to deliver the 3D presentation to one or more audience members wearing augmented reality headsets.
    Type: Grant
    Filed: September 26, 2019
    Date of Patent: August 10, 2021
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Darren Alexander Bennett, David J. W. Seymour, Charla M. Pereira, Enrico William Guld, Kin Hang Chu, Julia Faye Taylor-Hell, Jonathon Burnham Cobb, Helen Joan Hem Lam, You-Da Yang, Dean Alan Wadsworth, Andrew Jackson Klein
  • Patent number: 11087514
    Abstract: Techniques for automatically synchronizing poses of objects in an image or between multiple images. An automatic pose synchronization functionality is provided by an image editor. The image editor identifies or enables a user to select objects (e.g., people) whose poses are to be synchronized and the image editor then performs processing to automatically synchronize the poses of the identified objects. For two objects whose poses are to be synchronized, a reference object is identified as one whose associated pose is to be used as a reference pose. A target object is identified as one whose associated pose is to be modified to match the reference pose of the reference object. An output image is generated by the image editor in which the position of a part of the target object is modified such that the pose associated with the target object matches the reference pose of the reference object.
    Type: Grant
    Filed: June 11, 2019
    Date of Patent: August 10, 2021
    Assignee: Adobe Inc.
    Inventors: Sankalp Shukla, Sourabh Gupta, Angad Kumar Gupta
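The core pose-synchronization step in the abstract above, modifying the position of a part of the target object so its pose matches the reference pose, can be illustrated with a single limb segment. This is a simplified sketch with an invented function name, not Adobe's method: the target child joint is repositioned along the reference limb's direction while keeping the target limb's own length.

```python
import numpy as np

def sync_limb_pose(ref_parent, ref_child, tgt_parent, tgt_child):
    """Reposition a target limb so its direction matches a reference limb.

    Each argument is a 2D joint position. The target child joint is moved
    along the reference limb's unit direction, preserving the target
    limb's length, so the target pose matches the reference pose.
    """
    ref_dir = ref_child - ref_parent
    ref_dir = ref_dir / np.linalg.norm(ref_dir)   # reference limb direction
    length = np.linalg.norm(tgt_child - tgt_parent)  # keep target length
    return tgt_parent + ref_dir * length
```

For example, if the reference arm points straight up and the target arm points right, the target's child joint is rotated to point up at its original distance from the parent joint.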
  • Patent number: 11068698
    Abstract: A three-dimensional model (e.g., motion capture model) of a user is generated from captured images or captured video of the user. A machine learning network may track poses and expressions of the user to generate and refine the three-dimensional model. Refinement of the three-dimensional model may provide more accurate tracking of the user's face. Refining of the three-dimensional model may include refining the determinations of poses and expressions at defined locations (e.g., eye corners and/or nose) in the three-dimensional model. The refining may occur in an iterative process. Tracking of the three-dimensional model over time (e.g., during video capture) may be used to generate an animated three-dimensional model (e.g., an animated puppet) of the user that simulates the user's poses and expressions.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: July 20, 2021
    Assignee: Apple Inc.
    Inventors: Sofien Bouaziz, Brian Amberg, Thibaut Weise, Patrick Snape, Stefan Brugger, Alex Mansfield, Reinhard Knothe, Thomas Kiser
  • Patent number: 11030804
    Abstract: A technique for generating virtual models of plants in a field is described. Generally, this includes recording images of plants in-situ; generating point clouds from the images; generating skeleton segments from the point cloud; classifying a subset of skeleton segments as unique plant features using the images; and growing plant skeletons from skeleton segments classified as unique plant features. The technique may be used to generate a virtual model of a single, real plant, a portion of a real plant field, and/or the entirety of the real plant field. The virtual model can be analyzed to determine or estimate a variety of individual plant or plant population parameters, which in turn can be used to identify potential treatments or thinning practices, or predict future values for yield, plant uniformity, or any other parameter that can be determined from the projected results based on the virtual model.
    Type: Grant
    Filed: August 3, 2020
    Date of Patent: June 8, 2021
    Assignee: BLUE RIVER TECHNOLOGY INC.
    Inventors: Lee Kamp Redden, Nicholas Apostoloff
  • Patent number: 11019503
    Abstract: In one embodiment, a method includes accessing a point cloud comprising a plurality of point-cloud points, each point-cloud point corresponding to a location on a surface of an object located in a region in a three-dimensional space, identifying, from the point cloud, a plurality of point clusters, each point cluster comprising a plurality of point-cloud points located within a grid segment on a two-dimensional grid derived from the three-dimensional space, selecting, for each point cluster, a set of point-cloud points from the plurality of point-cloud points in the point cluster, the set of point-cloud points being selected based on a predetermined threshold number of point-cloud points associated with an acceptable reduction in an error detection rate, and determining, for each point cluster, a structure classification based on the selected set of point-cloud points from the point cluster.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: May 25, 2021
    Assignee: Facebook, Inc.
    Inventors: Guan Pang, Jing Huang, Balmanohar Paluri, Brian Christopher Karrer, Ismail Onur Filiz, Birce Tezel, Nicolas Emilio Stier Moses, Vishakan Ponnampalam, Timothy Eric Danford
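The clustering step in the abstract above, grouping point-cloud points by the grid segment they fall into on a 2D grid derived from the 3D space, then keeping a bounded subset per cluster, can be sketched as follows. The function name is illustrative, and a fixed per-cluster budget stands in for the patent's error-rate-derived threshold.

```python
from collections import defaultdict
import numpy as np

def cluster_point_cloud(points, cell_size, max_points_per_cluster):
    """Group 3D points into clusters by their 2D grid segment.

    points: (N, 3) array; the 2D grid is derived from the 3D space by
    dropping the height axis. Returns a dict mapping grid cell (i, j)
    to an (M, 3) array with at most max_points_per_cluster points.
    """
    clusters = defaultdict(list)
    for p in points:
        # Grid segment index from the horizontal coordinates.
        key = (int(p[0] // cell_size), int(p[1] // cell_size))
        clusters[key].append(p)
    # Select a bounded set of points from each cluster.
    return {key: np.array(pts)[:max_points_per_cluster]
            for key, pts in clusters.items()}
```

Downstream, a structure classifier would consume only the selected subset from each cluster, trading points processed against detection error.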
  • Patent number: 11010969
    Abstract: Data in physical space may be converted to layer space before performing modeling to generate one or more subsurface representations. Computational stratigraphy model representations that define subsurface configurations as a function of depth in the physical space may be converted to the layer space so that the subsurface configurations are defined as a function of layers. Conditioning information that defines conditioning characteristics as the function of depth in the physical space may be converted to the layer space so that the conditioning characteristics are defined as the function of layers. Modeling may be performed in the layer space to generate subsurface representations within layer space, and the subsurface representations may be converted into the physical space.
    Type: Grant
    Filed: December 6, 2019
    Date of Patent: May 18, 2021
    Assignee: Chevron U.S.A. Inc.
    Inventors: Lewis Li, Tao Sun, Sebastien B. Strebelle
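The physical-space-to-layer-space conversion described in the abstract above, re-expressing a property defined as a function of depth as a function of layers, can be sketched with a simple resampling. This is an illustrative assumption about the mapping (layers delimited by top depths, per-layer averaging), not Chevron's actual procedure, and all names are invented.

```python
import numpy as np

def depth_to_layer_space(prop_vs_depth, depths, layer_tops):
    """Resample a property defined as a function of depth into layer space.

    prop_vs_depth: property values sampled at `depths`.
    layer_tops: depth of the top of each layer, ordered shallow to deep.
    Returns one value per layer (mean of the samples falling in it).
    """
    # Assign each depth sample to the layer whose top it lies below.
    layer_idx = np.searchsorted(layer_tops, depths, side="right") - 1
    out = np.full(len(layer_tops), np.nan)
    for k in range(len(layer_tops)):
        mask = layer_idx == k
        if mask.any():
            out[k] = prop_vs_depth[mask].mean()
    return out
```

Modeling would then run on the layer-indexed values, and an inverse mapping would convert the resulting subsurface representation back to depth.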
  • Patent number: 11004265
    Abstract: Systems, apparatuses and methods may provide a way to subdivide a patch generated in a graphics processing pipeline into sub-patches, and generate sub-patch tessellations for the sub-patches. More particularly, systems, apparatuses and methods may provide a way to diverge tessellation sizes to a configurable size within an interior region of a patch or sub-patches based on a position of each of the tessellations. The systems, apparatuses and methods may determine a number of tessellation factors to use based on one or more of a level of granularity of one or more domains of a scene to be digitally rendered, available computing capacity, or power consumption to compute the number of tessellation factors.
    Type: Grant
    Filed: February 6, 2019
    Date of Patent: May 11, 2021
    Assignee: Intel Corporation
    Inventors: Peter L. Doyle, Devan Burke, Subramaniam Maiyuran, Abhishek R. Appu, Joydeep Ray, Elmoustapha Ould-Ahmed-Vall, Philip R. Laws, Altug Koker
  • Patent number: 10991075
    Abstract: Systems, apparatuses and methods may provide a way to blend two or more of the scene surfaces based on the focus area and an offload threshold. More particularly, systems, apparatuses and methods may provide a way to blend, by a display engine, two or more of the focus area scene surfaces and blended non-focus area scene surfaces. The systems, apparatuses and methods may include a graphics engine to render the focus area surfaces at a higher sample rate than the non-focus area scene surfaces.
    Type: Grant
    Filed: September 7, 2018
    Date of Patent: April 27, 2021
    Assignee: Intel Corporation
    Inventors: Joydeep Ray, Travis T. Schluessler, John H. Feit, Nikos Kaburlasos, Jacek Kwiatkowski, Abhishek R. Appu, Balaji Vembu, Prasoonkumar Surti
  • Patent number: 10991152
    Abstract: One embodiment of the present invention includes a parallel processing unit (PPU) that performs pixel shading at variable granularities. For effects that vary at a low frequency across a pixel block, a coarse shading unit performs the associated shading operations on a subset of the pixels in the pixel block. By contrast, for effects that vary at a high frequency across the pixel block, fine shading units perform the associated shading operations on each pixel in the pixel block. Because the PPU implements coarse shading units and fine shading units, the PPU may tune the shading rate per-effect based on the frequency of variation across each pixel group. By contrast, conventional PPUs typically compute all effects per-pixel, performing redundant shading operations for low frequency effects. Consequently, to produce similar image quality, the PPU consumes less power and increases the rendering frame rate compared to a conventional PPU.
    Type: Grant
    Filed: January 20, 2017
    Date of Patent: April 27, 2021
    Assignee: NVIDIA Corporation
    Inventors: Yong He, Eric B. Lum, Eric Enderton, Henry Packard Moreton, Kayvon Fatahalian
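The coarse/fine split in the abstract above, evaluating low-frequency effects once per pixel block and high-frequency effects per pixel, can be sketched on the CPU. This is an illustrative model of variable-granularity shading, not NVIDIA's hardware units; the function and effect callbacks are invented.

```python
import numpy as np

def shade_variable_rate(width, height, block, coarse_fx, fine_fx):
    """Shade an image with per-effect rates.

    coarse_fx(bx, by) is evaluated once per pixel block (low-frequency
    effect); fine_fx(x, y) is evaluated per pixel (high-frequency
    effect). The two contributions are combined per pixel.
    """
    img = np.zeros((height, width))
    for by in range(0, height, block):
        for bx in range(0, width, block):
            c = coarse_fx(bx, by)  # coarse shading: once per block
            for y in range(by, min(by + block, height)):
                for x in range(bx, min(bx + block, width)):
                    img[y, x] = c + fine_fx(x, y)  # fine shading: per pixel
    return img
```

The saving is that `coarse_fx` runs once per block instead of once per pixel, which is where a conventional per-pixel pipeline does redundant work for low-frequency effects.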
  • Patent number: 10984573
    Abstract: A non-transitory computer readable storage medium storing computer program code that, when executed by a processing device, causes the processing device to perform operations comprising: determining a first representative point, wherein the first representative point represents a first geometric primitive; determining a second representative point, wherein the second representative point represents a second geometric primitive; determining an initial distance between the first representative point and the second representative point; calculating a first displacement based on a velocity of the first representative point; calculating a second displacement based on a velocity of the second representative point; determining a separating direction between the first representative point and the second representative point; projecting the first displacement along the separating direction; projecting the second displacement along the separating direction; calculating a predicted minimum distance between the first representative point and the second representative point.
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: April 20, 2021
    Assignee: ELECTRONIC ARTS INC.
    Inventor: Christopher Charles Lewin
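The sequence of operations in the abstract above (displacements from velocities, a separating direction, projections along it, a predicted minimum distance) can be sketched for two point primitives. This is a minimal illustration under an assumed time step, with invented names, not Electronic Arts' implementation.

```python
import numpy as np

def predicted_min_distance(p1, v1, p2, v2, dt):
    """Predict the separation between two moving representative points.

    p1, p2: representative points of the two primitives.
    v1, v2: their velocities; dt: assumed time step.
    """
    d1 = v1 * dt                      # first displacement
    d2 = v2 * dt                      # second displacement
    sep = p2 - p1
    dist = np.linalg.norm(sep)        # initial distance
    n = sep / dist                    # separating direction (unit vector)
    # Project both displacements along the separating direction; their
    # difference is the relative approach over the time step.
    approach = np.dot(d1, n) - np.dot(d2, n)
    return dist - approach
```

A collision system would compare the predicted distance against a contact threshold to decide whether the primitive pair needs a full narrow-phase test.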
  • Patent number: 10983506
    Abstract: An automated manufacturing system for manufacturing a discrete object is configured to manufacture a reference feature on a precursor to the discrete object. The reference feature is used to place the precursor at a subtractive manufacturing machine; the reference feature may be based on a locating feature at the subtractive manufacturing machine. Manufacturing the reference feature is accomplished by automatedly detecting one or more critical-to-quality features and manufacturing the reference feature based on the one or more detected critical-to-quality features.
    Type: Grant
    Filed: June 27, 2019
    Date of Patent: April 20, 2021
    Assignee: Proto Labs, Inc.
    Inventors: James L Jacobs, Arthur Richard Baker, III
  • Patent number: 10984590
    Abstract: Data in physical space may be converted to layer space before performing modeling to generate one or more subsurface representations. Computational stratigraphy model representations that define subsurface configurations as a function of depth in the physical space may be converted to the layer space so that the subsurface configurations are defined as a function of layers. Conditioning information that defines conditioning characteristics as the function of depth in the physical space may be converted to the layer space so that the conditioning characteristics are defined as the function of layers. Modeling may be performed in the layer space to generate subsurface representations within layer space, and the subsurface representations may be converted into the physical space.
    Type: Grant
    Filed: December 6, 2019
    Date of Patent: April 20, 2021
    Assignee: Chevron U.S.A. Inc.
    Inventors: Lewis Li, Tao Sun, Sebastien B. Strebelle
  • Patent number: 10970935
    Abstract: A person who is not using a hybrid reality (HR) system communicates with the HR system using a body pose, without using a network communications link. Data is received from a sensor and an individual is detected in the sensor data. A first situation of at least one body part of the individual in 3D space is ascertained at a first time and a body pose is determined based on the first situation of the at least one body part. An action is decided on based on the body pose and the action is performed on an HR system worn by a user.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: April 6, 2021
    Inventors: Anthony Mark Jones, Jessica A. F. Jones, Bruce A. Young