Patents Examined by Phu K. Nguyen
  • Patent number: 11147627
    Abstract: A computer process including receiving first patient bone data of a patient leg and foot in a first pose, the first pose comprising a position and orientation of the patient leg relative to the patient foot as defined in the first patient bone data. The computer process may further include receiving second patient bone data of the patient leg and foot in a second pose, the second pose comprising a position and orientation of the patient leg relative to the patient foot as defined in the second patient bone data. The computer process may further include generating a 3D bone model of the patient leg and foot. Finally, the computer process may include modifying the 3D bone model of the patient leg and foot such that the 3D bone model is reoriented into a third pose that matches the second pose.
    Type: Grant
    Filed: April 23, 2020
    Date of Patent: October 19, 2021
    Assignee: STRYKER EUROPEAN OPERATIONS HOLDINGS LLC
    Inventors: Ashish Gangwar, Kanishk Sethi, Anup Kumar, Ryan Sellman, Peter Sterrantino, Manoj Kumar Singh
  • Patent number: 11145276
    Abstract: Complementary near-field and far-field light field displays (LFDs) are provided. A distributed LFD system is disclosed in which the light field is cooperatively displayed by a direct view LFD and a near-eye LFD. The two display components of the system work together synchronously to display a high-fidelity 3D experience to one or multiple viewers.
    Type: Grant
    Filed: September 24, 2019
    Date of Patent: October 12, 2021
    Assignee: Ostendo Technologies, Inc.
    Inventor: Hussein S. El-Ghoroury
  • Patent number: 11135036
    Abstract: In one aspect, the present application provides a computer-based method of measuring positional relationship between two teeth, comprising: obtaining a second 3D digital model representing a plurality of teeth under a second tooth arrangement, wherein the plurality of teeth comprise a first tooth and a second tooth; and measuring a value of a collision or a size of a gap between the first tooth and second tooth based on the second 3D digital model and in a predetermined first direction.
    Type: Grant
    Filed: February 14, 2019
    Date of Patent: October 5, 2021
    Assignee: Wuxi EA Medical Instruments Technologies Limited
    Inventors: Yang Feng, Xiaolin Liu, Jing Wang
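    The collision/gap measurement described in this abstract can be illustrated with a toy sketch. The geometry and the `signed_gap` helper below are assumptions for illustration, not the patented method: each tooth is reduced to a set of surface points, the points are projected onto the predetermined direction, and the facing extents are compared.

    ```python
    # Toy sketch (assumed geometry): measure gap size or collision depth
    # between two teeth along a fixed direction by projecting each tooth's
    # points onto that direction and comparing the facing extents.

    def signed_gap(tooth_a, tooth_b, direction=(1.0, 0.0, 0.0)):
        """Positive result = gap size, negative result = collision depth,
        measured along `direction`, with tooth_a assumed on the lower side."""
        dot = lambda p: sum(pi * di for pi, di in zip(p, direction))
        return min(dot(p) for p in tooth_b) - max(dot(p) for p in tooth_a)

    a = [(0.0, 0.0, 0.0), (1.0, 0.5, 0.0)]   # tooth A extends to x = 1.0
    b = [(1.2, 0.0, 0.0), (2.0, 0.4, 0.0)]   # tooth B starts at x = 1.2
    print(round(signed_gap(a, b), 3))  # 0.2 -> a 0.2-unit gap
    ```

    Overlapping extents would yield a negative value, i.e. a collision depth rather than a gap.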
  • Patent number: 11138808
    Abstract: A method of processor-aided design of a workholding frame for a manufacturing process includes using a processor to perform operations including receiving, by a processor, first data representing a first three-dimensional (3D) model of an object to be manufactured. The operations further include obtaining, by the processor, second data describing a second 3D model. The second 3D model represents a bounding box having one or more sides adjoining a surface of the object within the first 3D model. The operations further include automatically generating, by the processor and based on the first data and the bounding box, third data indicating one or more parameters of the workholding frame. The workholding frame and the object are to be formed based on the one or more parameters and as a single workpiece from a blank material during a machining process associated with the object.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: October 5, 2021
    Assignee: THE BOEING COMPANY
    Inventor: Justin Lee Peters
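    The bounding-box step in this abstract can be sketched minimally. This is not Boeing's implementation; the `wall` thickness and the `frame_parameters` dictionary are hypothetical names chosen for illustration of deriving a box that adjoins the object's surface and offsetting it into frame parameters.

    ```python
    # Minimal sketch (assumed details): compute an axis-aligned bounding box
    # from a 3D model's vertices, then derive hypothetical workholding-frame
    # parameters by offsetting the box outward by a wall thickness.

    def bounding_box(vertices):
        """Axis-aligned bounding box of an iterable of (x, y, z) vertices."""
        xs, ys, zs = zip(*vertices)
        return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

    def frame_parameters(box, wall=5.0):
        """Hypothetical frame parameters: outer envelope plus part cavity,
        so frame and part could be machined from one blank as a single workpiece."""
        (x0, y0, z0), (x1, y1, z1) = box
        return {
            "outer": ((x0 - wall, y0 - wall, z0 - wall),
                      (x1 + wall, y1 + wall, z1 + wall)),
            "cavity": box,
        }

    verts = [(0, 0, 0), (10, 2, 3), (4, 8, 1)]
    box = bounding_box(verts)
    print(box)  # ((0, 0, 0), (10, 8, 3))
    ```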
  • Patent number: 11132833
    Abstract: The method for remote clothing selection includes determination of anthropometric dimensional parameters of a user, automatic assessment of correspondence of a garment to the shape and body measurements of a user, determination and provision of recommendations to a user on the selection of a particular garment and, optionally, visualization of a garment on a digital avatar of this user in the virtual fitting room, including an optional change of the avatar's pose. The invention provides an increase in the efficiency of remote clothing selection by a user, an improvement in the user's experience of remote purchase, an increase in user satisfaction and, ultimately, an increase in online sales of clothing and a decrease in the proportion of clothing returned after a purchase due to unsatisfactory matching to the shape and measurements of the user's body.
    Type: Grant
    Filed: April 6, 2020
    Date of Patent: September 28, 2021
    Assignee: Texel, Inc.
    Inventors: Maxim Alexandrovich Fedyukov, Andrey Vladimirovich Poskonin, Sergey Mikhailovich Klimentyev, Vladimir Vladimirovich Guzov, Ilia Alexeevich Petrov, Nikolay Patakin, Anton Vladimirovich Fedotov, Oleg Vladimirovich Korneev
  • Patent number: 11129694
    Abstract: Two or more adjacent teeth in a virtual model are determined. The two or more adjacent teeth include a first tooth adjacent to a second tooth in the virtual model. The virtual filler is inserted in the virtual model between the first tooth and the second tooth. Points in the virtual model are selected. The points and the virtual filler are transformed into a voxel volume. A geometry of an updated virtual filler is determined by transforming a surface of the voxel volume into a polygonal mesh.
    Type: Grant
    Filed: July 22, 2020
    Date of Patent: September 28, 2021
    Assignee: Align Technology, Inc.
    Inventors: Israel Velazquez, Andrey Cherkas, Stephan Albert Alexandre Dumothier, Anatoliy Parpara, Alexey Geraskin, Yury Slynko, Danila Chesnokov
  • Patent number: 11127191
    Abstract: Rendering systems that can use combinations of rasterization rendering processes and ray tracing rendering processes are disclosed. In some implementations, these systems perform a rasterization pass to identify visible surfaces of pixels in an image. Some implementations may begin shading processes for visible surfaces, before the geometry is entirely processed, in which rays are emitted. Rays can be culled at various points during processing, based on determining whether the surface from which the ray was emitted is still visible. Rendering systems may implement rendering effects as disclosed.
    Type: Grant
    Filed: June 18, 2020
    Date of Patent: September 21, 2021
    Assignee: Imagination Technologies Limited
    Inventor: Luke T. Peterson
  • Patent number: 11116606
    Abstract: A method for determining a jaw curve for orthodontic treatment planning for a patient, the method being executable by a processor. The method includes obtaining a tooth and gingiva mesh from image data associated with teeth and surrounding gingiva of the patient, the mesh being representative of a surface of the teeth and the surrounding gingiva; obtaining a tooth contour of each tooth, the tooth contour being defined by a border between a visible portion of each tooth and the surrounding gingiva; determining a tooth contour center of each tooth, the tooth contour center of a given tooth being an average point of the tooth contour of the given tooth; projecting the tooth contour center of each tooth onto a jaw plane; and fitting the tooth contour center of each tooth to a curve to determine the jaw curve.
    Type: Grant
    Filed: January 6, 2021
    Date of Patent: September 14, 2021
    Assignee: Arkimos Ltd.
    Inventor: Islam Khasanovich Raslambekov
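    The jaw-curve steps in this abstract lend themselves to a short sketch: average each tooth's contour points into a contour center, project the centers onto a jaw plane, and fit a curve through them. The projection onto z = 0 and the polynomial fit below are assumptions for illustration; the patent does not specify the plane or curve family used.

    ```python
    import numpy as np

    # Sketch of the jaw-curve steps (assumed details): average each tooth's
    # contour points, project onto a jaw plane (here z = 0), and least-squares
    # fit a polynomial curve y = f(x) through the projected centers.

    def tooth_contour_center(contour):
        """Average point of a tooth's contour (iterable of (x, y, z) points)."""
        return np.asarray(contour, dtype=float).mean(axis=0)

    def fit_jaw_curve(contours, degree=2):
        """Fit a polynomial jaw curve through projected contour centers."""
        centers = np.array([tooth_contour_center(c) for c in contours])
        x, y = centers[:, 0], centers[:, 1]   # dropping z projects onto z = 0
        return np.poly1d(np.polyfit(x, y, degree))

    # Synthetic arch: contour centers lying on y = x**2
    contours = [[(x - 0.1, x**2, 1.0), (x + 0.1, x**2, 2.0)]
                for x in (-2.0, -1.0, 0.0, 1.0, 2.0)]
    curve = fit_jaw_curve(contours)
    print(round(curve(1.5), 3))  # ≈ 2.25 on this synthetic arch
    ```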
  • Patent number: 11113869
    Abstract: Examples described herein generally relate to generating a visualization of an image. A proprietary structure that specifies ray tracing instructions for generating the image using ray tracing is intercepted from a graphics processing unit (GPU) or a graphics driver. The proprietary structure can be converted, based on assistance information, to a visualization structure for generating the visualization of the image. The visualization of the image can be generated from the visualization structure.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: September 7, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Austin Neil Kinross, Shawn Lee Hargreaves, Amar Patel, Thomas Lee Davidson
  • Patent number: 11113865
    Abstract: One embodiment of the present invention provides a technique for generating a three-dimensional model from a two-dimensional sketch. The technique includes receiving input indicating a set of points defining a first sketch element and a second set of points defining a second sketch element included in a sketch. The technique further includes identifying one or more design relationships between the first sketch element and the second sketch element. The technique further includes generating a computer model of the sketch that represents a structure linking the first sketch element and the second sketch element according to the one or more design relationships. The technique further includes outputting the first sketch element, the second sketch element, and the structure for display.
    Type: Grant
    Filed: July 23, 2019
    Date of Patent: September 7, 2021
    Assignee: AUTODESK, INC.
    Inventors: Hyunmin Cheong, George Fitzmaurice, Tovi Grossman, Rubaiat Habib Kazi, Ali Baradaran Hashemi
  • Patent number: 11107261
    Abstract: The present disclosure generally relates to displaying visual effects such as virtual avatars. An electronic device having a camera and a display apparatus displays a virtual avatar that changes appearance in response to changes in a face in a field of view of the camera. In response to detecting changes in one or more physical features of the face in the field of view of the camera, the electronic device modifies one or more features of the virtual avatar.
    Type: Grant
    Filed: October 24, 2019
    Date of Patent: August 31, 2021
    Assignee: Apple Inc.
    Inventors: Nicolas Scapel, Guillaume Pierre André Barlier, Aurelio Guzman, Jason Rickwald
  • Patent number: 11100722
    Abstract: Provided are a method, an apparatus, an electronic device, and a storage medium for displaying an expansion of a 3D shape, including: determining a 3D shape to be expanded, and acquiring a target expanded state of the 3D shape; searching a preset multi-level information relationship table for an articulation relationship set corresponding to the target expanded state; determining, according to the articulation relationship set and a preset expansion rule library, a target expansion rule for each target plane surface on the 3D shape; and expanding each target plane surface at a predetermined rate based on its target expansion rule, displaying the expansion process in real time. The method dynamically displays the expansion process of a 3D shape to a student, such that the student can better understand the transformation from a 3D shape to a selected expanded state, thereby improving the user experience of a teaching demonstration function on an electronic device.
    Type: Grant
    Filed: December 17, 2017
    Date of Patent: August 24, 2021
    Assignees: GUANGZHOU SHIYUAN ELECTRONICS CO., LTD., GUANGZHOU SHIRUI ELECTRONICS CO. LTD.
    Inventor: Hong Ye
  • Patent number: 11095857
    Abstract: Disclosed herein is a web-based videoconference system that allows for video avatars to navigate within the virtual environment. The system has a presented mode that allows for a presentation stream to be texture mapped to a presenter screen situated within the virtual environment. The relative left-right sound is adjusted to provide a sense of an avatar's position in a virtual space. The sound is further adjusted based on the area where the avatar is located and where the virtual camera is located. Video stream quality is adjusted based on relative position in a virtual space. Three-dimensional modeling is available inside the virtual video conferencing environment.
    Type: Grant
    Filed: October 20, 2020
    Date of Patent: August 17, 2021
    Assignee: Katmai Tech Holdings LLC
    Inventors: Gerard Cornelis Krol, Erik Stuart Braund
  • Patent number: 11094112
    Abstract: The present invention generally relates to generating a three-dimensional representation of a physical environment, which includes dynamic scenarios.
    Type: Grant
    Filed: August 15, 2019
    Date of Patent: August 17, 2021
    Assignee: Foresight AI Inc.
    Inventors: Shili Xu, Matheen Siddiqui, Chang Yuan
  • Patent number: 11087548
    Abstract: Various methods and systems are provided for authoring and presenting 3D presentations. Generally, an augmented or virtual reality device for each author, presenter and audience member includes 3D presentation software. During authoring mode, one or more authors can use 3D and/or 2D interfaces to generate a 3D presentation that choreographs behaviors of 3D assets into scenes and beats. During presentation mode, the 3D presentation is loaded in each user device, and 3D images of the 3D assets and corresponding asset behaviors are rendered among the user devices in a coordinated manner. As such, one or more presenters can navigate the scenes and beats of the 3D presentation to deliver the 3D presentation to one or more audience members wearing augmented reality headsets.
    Type: Grant
    Filed: September 26, 2019
    Date of Patent: August 10, 2021
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Darren Alexander Bennett, David J. W. Seymour, Charla M. Pereira, Enrico William Guld, Kin Hang Chu, Julia Faye Taylor-Hell, Jonathon Burnham Cobb, Helen Joan Hem Lam, You-Da Yang, Dean Alan Wadsworth, Andrew Jackson Klein
  • Patent number: 11087514
    Abstract: Techniques for automatically synchronizing poses of objects in an image or between multiple images. An automatic pose synchronization functionality is provided by an image editor. The image editor identifies or enables a user to select objects (e.g., people) whose poses are to be synchronized and the image editor then performs processing to automatically synchronize the poses of the identified objects. For two objects whose poses are to be synchronized, a reference object is identified as one whose associated pose is to be used as a reference pose. A target object is identified as one whose associated pose is to be modified to match the reference pose of the reference object. An output image is generated by the image editor in which the position of a part of the target object is modified such that the pose associated with the target object matches the reference pose of the reference object.
    Type: Grant
    Filed: June 11, 2019
    Date of Patent: August 10, 2021
    Assignee: Adobe Inc.
    Inventors: Sankalp Shukla, Sourabh Gupta, Angad Kumar Gupta
  • Patent number: 11068698
    Abstract: A three-dimensional model (e.g., motion capture model) of a user is generated from captured images or captured video of the user. A machine learning network may track poses and expressions of the user to generate and refine the three-dimensional model. Refinement of the three-dimensional model may provide more accurate tracking of the user's face. Refining of the three-dimensional model may include refining the determinations of poses and expressions at defined locations (e.g., eye corners and/or nose) in the three-dimensional model. The refining may occur in an iterative process. Tracking of the three-dimensional model over time (e.g., during video capture) may be used to generate an animated three-dimensional model (e.g., an animated puppet) of the user that simulates the user's poses and expressions.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: July 20, 2021
    Assignee: Apple Inc.
    Inventors: Sofien Bouaziz, Brian Amberg, Thibaut Weise, Patrick Snape, Stefan Brugger, Alex Mansfield, Reinhard Knothe, Thomas Kiser
  • Patent number: 11030804
    Abstract: A technique for generating virtual models of plants in a field is described. Generally, this includes recording images of plants in-situ; generating point clouds from the images; generating skeleton segments from the point cloud; classifying a subset of skeleton segments as unique plant features using the images; and growing plant skeletons from skeleton segments classified as unique plant features. The technique may be used to generate a virtual model of a single, real plant, a portion of a real plant field, and/or the entirety of the real plant field. The virtual model can be analyzed to determine or estimate a variety of individual plant or plant population parameters, which in turn can be used to identify potential treatments or thinning practices, or to predict future values for yield, plant uniformity, or any other parameter that can be determined from the projected results based on the virtual model.
    Type: Grant
    Filed: August 3, 2020
    Date of Patent: June 8, 2021
    Assignee: BLUE RIVER TECHNOLOGY INC.
    Inventors: Lee Kamp Redden, Nicholas Apostoloff
  • Patent number: 11019503
    Abstract: In one embodiment, a method includes accessing a point cloud comprising a plurality of point-cloud points, each point-cloud point corresponding to a location on a surface of an object located in a region in a three-dimensional space, identifying, from the point cloud, a plurality of point clusters, each point cluster comprising a plurality of point-cloud points located within a grid segment on a two-dimensional grid derived from the three-dimensional space, selecting, for each point cluster, a set of point-cloud points from the plurality of point-cloud points in the point cluster, the set of point-cloud points being selected based on a predetermined threshold number of point-cloud points associated with an acceptable reduction in an error detection rate, and determining, for each point cluster, a structure classification based on the selected set of point-cloud points from the point cluster.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: May 25, 2021
    Assignee: Facebook, Inc.
    Inventors: Guan Pang, Jing Huang, Balmanohar Paluri, Brian Christopher Karrer, Ismail Onur Filiz, Birce Tezel, Nicolas Emilio Stier Moses, Vishakan Ponnampalam, Timothy Eric Danford
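    The clustering step in this abstract can be sketched as projecting 3D points onto a 2D grid and grouping by grid segment, then subsampling each cluster to a threshold before classification. The cell size and the random selection rule in `subsample` are assumptions; the patent does not state how the point subset is chosen.

    ```python
    import random
    from collections import defaultdict

    # Rough sketch (assumed details): group point-cloud points by the 2D grid
    # segment their (x, y) coordinates fall in, then cap each cluster at a
    # threshold number of points before structure classification.

    def grid_clusters(points, cell=1.0):
        """Group (x, y, z) points by the (x, y) grid cell they fall in."""
        clusters = defaultdict(list)
        for x, y, z in points:
            clusters[(int(x // cell), int(y // cell))].append((x, y, z))
        return clusters

    def subsample(cluster, threshold=100, seed=0):
        """Keep at most `threshold` points per cluster (assumed selection rule)."""
        if len(cluster) <= threshold:
            return list(cluster)
        return random.Random(seed).sample(cluster, threshold)

    pts = [(0.2, 0.3, 5.0), (0.8, 0.1, 4.0), (2.5, 0.4, 7.0)]
    clusters = grid_clusters(pts)
    print(sorted(clusters))  # [(0, 0), (2, 0)]
    ```

    With a real point cloud, each capped cluster would then feed a classifier that assigns the grid segment a structure label.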
  • Patent number: 11010969
    Abstract: Data in physical space may be converted to layer space before performing modeling to generate one or more subsurface representations. Computational stratigraphy model representations that define subsurface configurations as a function of depth in the physical space may be converted to the layer space so that the subsurface configurations are defined as a function of layers. Conditioning information that defines conditioning characteristics as the function of depth in the physical space may be converted to the layer space so that the conditioning characteristics are defined as the function of layers. Modeling may be performed in the layer space to generate subsurface representations within layer space, and the subsurface representations may be converted into the physical space.
    Type: Grant
    Filed: December 6, 2019
    Date of Patent: May 18, 2021
    Assignee: Chevron U.S.A. Inc.
    Inventors: Lewis Li, Tao Sun, Sebastien B. Strebelle