Patents Examined by Phu K. Nguyen
  • Patent number: 11393163
    Abstract: A method for remote clothing selection includes determining anthropometric dimensional parameters of a user, automatically assessing how well a garment corresponds to the user's body shape and measurements, determining and providing recommendations to the user on the selection of a particular garment and, optionally, visualizing the garment on a digital avatar of the user in a virtual fitting room, including an optional change of the avatar's pose. The invention increases the efficiency of remote clothing selection, improves the user's experience of remote purchase, and increases user satisfaction and, ultimately, online clothing sales, while decreasing the proportion of clothing returned after purchase because it did not match the shape and measurements of the user's body.
    Type: Grant
    Filed: October 29, 2020
    Date of Patent: July 19, 2022
    Assignee: Texel LLC
    Inventors: Maxim Alexandrovich Fedyukov, Andrey Vladimirovich Poskonin, Sergey Mikhailovich Klimentyev, Vladimir Vladimirovich Guzov, Ilia Alexeevich Petrov, Nikolay Patakin, Anton Vladimirovich Fedotov, Oleg Vladimirovich Korneev
  • Patent number: 11389273
    Abstract: Methods and systems for manufacturing an orthodontic appliance with an object incorporated in a surface thereof, comprising: acquiring a preliminary appliance 3D digital model; acquiring an object 3D digital model; obtaining a desired coupling location of the object on the orthodontic appliance; positioning the object 3D digital model onto a surface of the preliminary appliance 3D digital model based on the obtained coupling location; causing an initial predetermined degree of penetration; merging the object 3D digital model with the preliminary appliance 3D digital model to generate an appliance 3D digital model of the orthodontic appliance with the object incorporated in the surface; and storing the appliance 3D digital model in an internal memory of an electronic device.
    Type: Grant
    Filed: August 3, 2021
    Date of Patent: July 19, 2022
    Assignee: Oxilio Ltd
    Inventors: Islam Khasanovich Raslambekov, Oleksandr Khmil, Dmitrii Bubelnik, Zelimkhan Gerikhanov
  • Patent number: 11394943
    Abstract: There is provided an image processing apparatus and method, a file generation apparatus and method, and a program that enable a suitable occlusion image to be obtained. The image processing apparatus includes an MPD file processing unit configured to select an occlusion image to be acquired, on the basis of information regarding a viewpoint position of the occlusion image included in an MPD file, from among a plurality of the occlusion images indicated by the MPD file. The present technology can be applied to a client device.
    Type: Grant
    Filed: May 16, 2018
    Date of Patent: July 19, 2022
    Assignee: SONY CORPORATION
    Inventors: Mitsuru Katsumata, Mitsuhiro Hirabayashi, Kazuhiko Takabayashi, Toshiya Hamada, Ryohei Takahashi
  • Patent number: 11380049
    Abstract: In various embodiments, a finite aperture omni-directional camera is modeled by aligning a finite aperture lens and focal point with the omni-directional part of the projection. For example, each point on an image plane maps to a direction in camera space. For a spherical projection, the lens can be orientated along this direction and the focal point is picked along this direction at focal distance from the lens. For a cylindrical projection, the lens can be oriented along the projected direction on the two dimensional (2D) xz-plane, as the projection is not omni-directional in the y direction. The focal point is picked along the (unprojected) direction so its projection on the xz-plane is at focal distance from the lens. The final outgoing ray can be constructed by sampling of a point on this oriented lens and shooting a ray from there through the focal point.
    Type: Grant
    Filed: July 8, 2020
    Date of Patent: July 5, 2022
    Assignee: NVIDIA Corporation
    Inventor: Dietger van Antwerpen
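The ray construction described in the abstract above can be sketched for the spherical-projection case: orient the lens along the view direction, place the focal point on that direction at focal distance, sample a point on the lens disk, and shoot a ray through the focal point. This is an illustrative reconstruction, not NVIDIA's implementation; the basis construction, sampling scheme, and function name are assumptions.

```python
import numpy as np

def thin_lens_ray_spherical(direction, focal_distance, lens_radius, rng):
    """Construct a depth-of-field ray for a spherical (omni) projection.

    `direction` is the unit view direction an image-plane point maps to.
    The lens is oriented along this direction; the focal point lies on it
    at `focal_distance` from the camera origin.
    """
    d = direction / np.linalg.norm(direction)
    focal_point = d * focal_distance
    # Build an orthonormal basis (u, v) spanning the lens plane, normal to d.
    helper = np.array([0.0, 1.0, 0.0]) if abs(d[1]) < 0.9 else np.array([1.0, 0.0, 0.0])
    u = np.cross(d, helper); u /= np.linalg.norm(u)
    v = np.cross(d, u)
    # Sample a point uniformly on the lens disk.
    r = lens_radius * np.sqrt(rng.uniform())
    phi = 2.0 * np.pi * rng.uniform()
    origin = r * np.cos(phi) * u + r * np.sin(phi) * v
    # Final outgoing ray: from the sampled lens point through the focal point.
    out_dir = focal_point - origin
    return origin, out_dir / np.linalg.norm(out_dir)
```

With a zero-radius lens this degenerates to a pinhole ray along `direction`; for any sampled lens point, the ray passes through the shared focal point, which is what produces depth-of-field blur away from the focal distance.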
  • Patent number: 11367245
    Abstract: In an example a method includes identifying, by a processor, in a data model of at least a portion of a three-dimensional object, an object property associated with a location in the three-dimensional object. A data model of a virtual build volume comprising at least a portion of the three-dimensional object may be generated in which an association with an object property is dispersed beyond the location.
    Type: Grant
    Filed: July 10, 2017
    Date of Patent: June 21, 2022
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Matthew A Shepherd, Jake Wright, Hector Jose Lebron, Vanessa Verzwyvelt, Morgan T Schramm
  • Patent number: 11364103
    Abstract: A method and a system for determining a bite position between arch forms of a subject. The method comprises: receiving a 3D model including a first portion and a second portion respectively representative of lower and upper arch forms of the subject; determining, a respective distance value from each point of the first portion to the second portion; determining, for each point of the first portion, a respective weight value, thereby determining a respective weighted distance value; aggregating respective weighted distance values associated with each point of the first portion to determine an aggregate distance value being a remoteness measure between the first portion and the second portion; and determining the bite position based on the aggregate distance value.
    Type: Grant
    Filed: May 13, 2021
    Date of Patent: June 21, 2022
    Assignee: Oxilio Ltd
    Inventor: Islam Khasanovich Raslambekov
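The weighted-distance aggregation in the abstract above can be sketched as follows. Using the nearest upper-arch point for each distance value, and an exponential weight that emphasizes near-contact points, are assumptions; the patent abstract does not specify the weight function.

```python
import numpy as np

def aggregate_bite_distance(lower_pts, upper_pts, sigma=1.0):
    """Remoteness measure between lower- and upper-arch point sets.

    For each point of the lower arch: distance to the nearest upper-arch
    point, a per-point weight, then a weighted sum as the aggregate value.
    """
    # Pairwise distances (N_lower x N_upper), then nearest-point distance.
    diffs = lower_pts[:, None, :] - upper_pts[None, :, :]
    dists = np.linalg.norm(diffs, axis=2).min(axis=1)
    # Assumed weighting: near-contact points dominate the measure.
    weights = np.exp(-dists / sigma)
    return float(np.sum(weights * dists))
```

A bite-position search would then evaluate this aggregate value over candidate relative poses of the two arch portions and pick the pose minimizing it.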
  • Patent number: 11354846
    Abstract: There is a region of interest of a synthetic image depicting an object from a class of objects. A trained neural image generator, having been trained to map embeddings from a latent space to photorealistic images of objects in the class, is accessed. A first embedding is computed from the latent space, the first embedding corresponding to an image which is similar to the region of interest while maintaining photorealistic appearance. A second embedding is computed from the latent space, the second embedding corresponding to an image which matches the synthetic image. The first embedding and the second embedding are blended to form a blended embedding. At least one output image is generated from the blended embedding, the output image being more photorealistic than the synthetic image.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: June 7, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Stephan Joachim Garbin, Marek Adam Kowalski, Matthew Alastair Johnson, Tadas Baltrusaitis, Martin De La Gorce, Virginia Estellers Casas, Sebastian Karol Dziadzio, Jamie Daniel Joseph Shotton
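The blending step in the abstract above can be illustrated with a minimal sketch. Linear interpolation in latent space is an assumption; the abstract only states that the two embeddings are blended.

```python
import numpy as np

def blend_embeddings(z_photoreal, z_match, alpha):
    """Blend the embedding that stays photorealistic with the embedding
    that matches the synthetic image (assumed: linear interpolation).

    alpha = 0 returns the photorealistic embedding; alpha = 1 returns the
    matching embedding; values in between trade realism against fidelity
    to the synthetic region of interest.
    """
    z_photoreal = np.asarray(z_photoreal, dtype=float)
    z_match = np.asarray(z_match, dtype=float)
    return (1.0 - alpha) * z_photoreal + alpha * z_match
```

The blended embedding would then be passed through the trained generator to produce the output image.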
  • Patent number: 11348267
    Abstract: The method comprising providing a plurality of images of a scene captured by a plurality of image capturing devices (101); providing silhouette information of at least one object in the scene (102); generating a point cloud for the scene in 3D space using the plurality of images (103); extracting an object point cloud from the generated point cloud, the object point cloud being a point cloud associated with the at least one object in the scene (104); estimating a 3D shape volume of the at least one object from the silhouette information (105); and combining the object point cloud and the shape volume of the at least one object to generate a three-dimensional model (106). An apparatus for generating a 3D model and a computer-readable medium for generating the 3D model are also provided.
    Type: Grant
    Filed: December 20, 2018
    Date of Patent: May 31, 2022
    Assignee: The Provost, Fellows, Foundation Scholars, and the Other Members of Board, of the College of the Holy and Undivided Trinity of Queen Elizabeth, Near Dublin Trinity College Dublin
    Inventors: Aljosa Smolic, Rafael Pages, Jan Ondrej, Konstantinos Amplianitis, David Monaghan
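The step of combining a point cloud with silhouette information can be illustrated by silhouette-based carving: keeping only points whose projections fall inside every silhouette mask. This is a simplified sketch, not the patented pipeline; the 3x4 projection-matrix camera model and the function name are assumptions.

```python
import numpy as np

def carve_by_silhouettes(points, cameras, silhouettes):
    """Filter a 3D point cloud against per-view silhouette masks.

    `cameras` are assumed 3x4 projection matrices; `silhouettes` are
    boolean masks of shape (height, width), True inside the object.
    A point survives only if it projects inside every silhouette.
    """
    keep = np.ones(len(points), dtype=bool)
    homo = np.hstack([points, np.ones((len(points), 1))])
    for P, mask in zip(cameras, silhouettes):
        proj = homo @ P.T                               # homogeneous pixels
        px = (proj[:, :2] / proj[:, 2:3]).astype(int)   # integer pixel coords
        h, w = mask.shape
        inside = (px[:, 0] >= 0) & (px[:, 0] < w) & (px[:, 1] >= 0) & (px[:, 1] < h)
        in_mask = np.zeros(len(points), dtype=bool)
        in_mask[inside] = mask[px[inside, 1], px[inside, 0]]
        keep &= in_mask
    return points[keep]
```

Intersecting the surviving region over all views approximates the object's visual hull, which the method's shape volume can then refine with the multi-view point cloud.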
  • Patent number: 11341710
    Abstract: Approaches in accordance with various embodiments provide for fluid simulation with substantially reduced time and memory requirements with respect to conventional approaches. In particular, various embodiments can perform time and energy efficient, large scale fluid simulation on processing hardware using a method that does not solve for the Navier-Stokes equations to enforce incompressibility. Instead, various embodiments generate a density tensor and rigid body map tensor for a large number of particles contained in a sub-domain. Collectively, the density tensor and rigid body map may represent input channels of a network with three spatial-dimensions. The network may apply a series of operations to the input channels to predict an updated position and updated velocity for each particle at the end of a frame. Such approaches can handle tens of millions of particles within a virtually unbounded simulation domain, as compared to classical approaches that solve for the Navier-Stokes equations.
    Type: Grant
    Filed: November 21, 2019
    Date of Patent: May 24, 2022
    Assignee: NVIDIA Corporation
    Inventors: Evgenii Tumanov, Dmitry Korobchenko, Alexey Solovey
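The density tensor described in the abstract above can be sketched as a simple rasterization of particle positions into a cubic sub-domain grid. Using raw per-cell particle counts as the density channel is an assumption; the patent abstract does not specify the discretization.

```python
import numpy as np

def density_tensor(positions, grid_res, domain_min, domain_max):
    """Rasterize particle positions into a per-cell density tensor for a
    cubic sub-domain, suitable as one input channel of a 3D network."""
    domain_min = np.asarray(domain_min, dtype=float)
    extent = np.asarray(domain_max, dtype=float) - domain_min
    # Map each position to an integer cell index in [0, grid_res - 1].
    cells = ((positions - domain_min) / extent * grid_res).astype(int)
    cells = np.clip(cells, 0, grid_res - 1)
    tensor = np.zeros((grid_res,) * 3, dtype=np.float32)
    # Accumulate counts; np.add.at handles repeated indices correctly.
    np.add.at(tensor, (cells[:, 0], cells[:, 1], cells[:, 2]), 1.0)
    return tensor
```

In the described approach, this density channel together with a rigid-body map tensor would feed the 3D network that predicts updated particle positions and velocities per frame, in place of solving the Navier-Stokes equations.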
  • Patent number: 11341711
    Abstract: System and method for rendering dynamic three-dimensional appearing imagery on a two-dimensional user interface screen of a portable computing device in dependence on a user's view-point of the screen. The method includes processing, on a portable computing device, data defining a plurality of user view-points of a user interface screen of the portable computing device. The method next includes rendering a first image of a constructed scene on the user interface screen based on a first determined user's view-point of the user interface screen of the portable computing device. The method then includes rendering a different image of the constructed scene on the user interface screen based on a subsequently determined user's view-point of the user interface screen and thereby presenting the illusion of a three-dimensional image of the constructed scene on the user interface screen.
    Type: Grant
    Filed: March 27, 2020
    Date of Patent: May 24, 2022
    Assignee: Apple Inc.
    Inventor: Brendan A. McCarthy
  • Patent number: 11341722
    Abstract: A computer vision method for processing an omnidirectional image to extract understanding of a scene, the method comprising: receiving an omnidirectional image of a scene; mapping the omnidirectional image to a mesh on a three-dimensional polyhedron; converting the three-dimensional polyhedron into a representation of a neighbourhood structure, wherein the representation of the neighbourhood structure represents vertices of said mesh and their neighbouring vertices; and processing the representation of the neighbourhood structure with a neural network processing stage to produce an output providing understanding of the scene, wherein the neural network processing stage comprises at least one module configured to perform convolution with a filter aligned with a reference axis of the three-dimensional polyhedron.
    Type: Grant
    Filed: July 7, 2020
    Date of Patent: May 24, 2022
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Chao Zhang, Stephan Liwicki
  • Patent number: 11335017
    Abstract: A registration facility and a registration method are provided where a pre-interventionally generated simulation model of an examination object is registered with an intra-interventional live image. The simulation model is adapted to the live image using at least one simulated course line of an anatomical feature and/or an instrument by minimizing a line distance metric, specified as a cost function, for a distance between the simulated course line and an actual intra-interventional course of the instrument that is visible in the live image.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: May 17, 2022
    Assignee: Siemens Healthcare GmbH
    Inventors: Katharina Breininger, Marcus Pfister
  • Patent number: 11327465
    Abstract: The exemplified methods and systems facilitate manufacturing of a new class of mechanical, load-bearing components having optimized stress/strain three-dimensional meta-structures (also referred to herein as “Meshagons”) as finite-element-based 3D volumetric mesh structures. The resulting three-dimensional meta-structures provide high strength and ultra-light connectivity with programmable interlinkage properties (e.g., density/porosity of linkages).
    Type: Grant
    Filed: March 2, 2020
    Date of Patent: May 10, 2022
    Assignee: Abemis LLC
    Inventors: Todd Curtis Doehring, William Joseph Nelson
  • Patent number: 11315329
    Abstract: In one embodiment, a method includes accessing a plurality of points, wherein each point (1) corresponds to a spatial location associated with an observed feature of a physical environment and (2) is associated with a patch representing the observed feature, determining a density associated with each of the plurality of points based on the spatial locations of the plurality of points, scaling the patch associated with each of the plurality of points based on the density associated with the point, and reconstructing a scene of the physical environment based on at least the scaled patches.
    Type: Grant
    Filed: February 25, 2020
    Date of Patent: April 26, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Alexander Sorkine Hornung, Alessia Marra, Fabian Langguth, Matthew James Alderman
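The density-based patch scaling in the abstract above can be sketched as follows. Estimating local density from the distance to the k-th nearest neighbor, and scaling patches linearly by it, are assumptions; the abstract does not specify either choice.

```python
import numpy as np

def patch_scales(points, k=3, base_scale=1.0):
    """Per-point patch scale from local point density.

    Sparse regions (large k-th neighbor distance, i.e. low density) get
    larger patches so the reconstructed surface stays hole-free; dense
    regions get smaller patches to preserve detail.
    """
    # Full pairwise distance matrix; fine for a small sketch.
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.sort(np.linalg.norm(diffs, axis=2), axis=1)
    # Column 0 is each point's zero distance to itself; take the
    # k-th nearest neighbor as an inverse-density estimate.
    kth = dists[:, min(k, len(points) - 1)]
    return base_scale * kth
```

Rendering each observed feature's patch at its computed scale then reconstructs the scene without gaps between sparsely sampled points.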
  • Patent number: 11308362
    Abstract: Methods, systems, and a computer-readable medium for generating a centerline for an object in an image are provided. The method includes receiving an image containing the object. The method also includes generating the centerline of the object by tracing a sequence of patches with a virtual agent. For each patch other than the initial patch, the method determines a current patch based on the position and action of the virtual agent at a previous patch. The method further determines a policy function and a value function based on the current patch using a trained learning network, which includes an encoder followed by a first learning network and a second learning network. The learning network is trained by maximizing a cumulative reward. The method also determines the action of the virtual agent at the current patch. Additionally, the method displays the centerline of the object.
    Type: Grant
    Filed: March 23, 2020
    Date of Patent: April 19, 2022
    Assignee: SHENZHEN KEYA MEDICAL TECHNOLOGY CORPORATION
    Inventors: Xin Wang, Youbing Yin, Qi Song, Junjie Bai, Yi Lu, Yi Wu, Feng Gao, Kunlin Cao
  • Patent number: 11295513
    Abstract: A computer-based method for generating a custom hand brace for a patient includes compiling optical data captured during a three-dimensional scan of a target hand of the patient into a three-dimensional hand model of the target hand; and receiving a diagnosis for an injury to the target hand of the patient. Based on the diagnosis, the method includes generating a custom hand brace model by extracting a first set of points from the three-dimensional hand model to generate an initial hand brace model; forming an interior surface of the initial hand brace model based on the first set of points; deforming the interior surface of the initial hand brace model into alignment with an exterior surface of the three-dimensional hand model to generate the custom hand brace model; and queuing the custom hand brace model for fabrication at an advanced manufacturing system.
    Type: Grant
    Filed: May 28, 2021
    Date of Patent: April 5, 2022
    Assignee: Able Hands Rehab PC
    Inventor: Shirish Godbole
  • Patent number: 11288857
    Abstract: According to an aspect, a method for neural rerendering includes obtaining a three-dimensional (3D) model representing a scene of a physical space, where the 3D model is constructed from a collection of input images, rendering an image data buffer from the 3D model according to a viewpoint, where the image data buffer represents a reconstructed image from the 3D model, receiving, by a neural rerendering network, the image data buffer, receiving, by the neural rerendering network, an appearance code representing an appearance condition, and transforming, by the neural rerendering network, the image data buffer into a rerendered image with the viewpoint of the image data buffer and the appearance condition specified by the appearance code.
    Type: Grant
    Filed: April 1, 2020
    Date of Patent: March 29, 2022
    Assignee: Google LLC
    Inventors: Moustafa Meshry, Ricardo Martin Brualla, Sameh Khamis, Daniel Goldman, Hugues Hoppe, Noah Snavely, Rohit Pandey
  • Patent number: 11284967
    Abstract: A bone foundation guide system for a dental implant surgical site has a bone foundation guide and a dental implant surgical guide with an open surgical space therebetween. The top of the foundation guide body is contoured and has an upper surface that attaches to a bottom surface of the dental implant surgical guide, defines a removal and augmentation level, and guides the removal and/or augmentation of bone segments from the dental surgical site via a separation plane. The separation plane has a central area predetermined to define an implant depth of bores for anchoring implants, and an inclined area extending from all side ends of the central area towards the first bridge end and the second bridge end, rising above the removal and augmentation level of the jaw bone.
    Type: Grant
    Filed: November 8, 2018
    Date of Patent: March 29, 2022
    Inventor: Stefan Schmälzle
  • Patent number: 11288859
    Abstract: Embodiments provide techniques for rendering augmented reality effects on an image of a user's face in real time. The method generally includes receiving an image of a face of a user. A global facial depth map and a luminance map are generated based on the captured image. The captured image is segmented into a plurality of segments. For each segment in the plurality of segments, a displacement energy of the respective segment is minimized using a least square minimization of a linear system for the respective segment. The displacement energy is generally defined by a relationship between a detailed depth map, the global facial depth map and the luminance map. The detailed depth map is generated based on the minimized displacement energy for each segment in the plurality of segments. One or more visual effects are rendered over the captured image using the generated detailed depth map.
    Type: Grant
    Filed: June 1, 2020
    Date of Patent: March 29, 2022
    Assignee: Disney Enterprises, Inc.
    Inventors: Kenneth J. Mitchell, Llogari Casas Cambra, Yue Li
  • Patent number: 11278375
    Abstract: Embodiments relate to an aligner breakage solution. A method includes obtaining a digital design of a polymeric aligner for a dental arch of a patient. The polymeric aligner is shaped to apply forces to teeth of the dental arch. The method also includes performing an analysis on the digital design of the polymeric aligner using at least one of a) a trained machine learning model, b) a numerical simulation, c) a geometry evaluator or d) a rules engine. The method may also include determining, based on the analysis, whether the digital design of the polymeric aligner includes probable points of damage, wherein for a probable point of damage there is a threshold probability that breakage, deformation, or warpage will occur. The method may also include, responsive to determining that the digital design of the polymeric aligner comprises probable points of damage, performing corrective actions based on the probable points of damage.
    Type: Grant
    Filed: September 26, 2019
    Date of Patent: March 22, 2022
    Assignee: Align Technology, Inc.
    Inventors: Yuxiang Wang, Rohit Tanugula, Reza Shirazi Aghjari, Andrew Jang, Chunhua Li, Jun Sato, Luyao Cai, Viktoria Medvinskaya, Arno Kukk, Andrey Cherkas, Anna Akopova, Kangning Su