Solid Modelling Patents (Class 345/420)
  • Patent number: 11036973
    Abstract: Methods, devices and systems for training a pattern recognition system are described. In one example, a method for training a sign language translation system includes generating a three-dimensional (3D) scene that includes a 3D model simulating a gesture that represents a letter, a word, or a phrase in a sign language. The method includes obtaining a value indicative of a total number of training images to be generated, using the value indicative of the total number of training images to determine a plurality of variations of the 3D scene for generating the training images, applying each of the plurality of variations to the 3D scene to produce a plurality of modified 3D scenes, and capturing an image of each of the plurality of modified 3D scenes to form the training images for a neural network of the sign language translation system.
    Type: Grant
    Filed: May 13, 2019
    Date of Patent: June 15, 2021
    Assignee: AVODAH, INC.
    Inventors: Trevor Chandler, Dallas Nash, Michael Menefee
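The variation-count-driven scene perturbation described in this abstract can be sketched as below. This is a minimal illustration, not the patented method: the specific parameters (camera yaw, light intensity, background id) and the seeded generator are assumptions for the example, and rendering/image capture are stubbed out.

```python
import random

def scene_variations(total_images, seed=0):
    """Derive one scene-perturbation parameter set per requested training
    image, so the requested training-set size drives how many variations
    of the 3D scene are produced (parameter choices are illustrative)."""
    rng = random.Random(seed)  # seeded for reproducible training sets
    return [
        {
            "yaw_deg": rng.uniform(-30.0, 30.0),  # rotate model/camera
            "light": rng.uniform(0.5, 1.5),       # scale light intensity
            "bg_id": rng.randrange(10),           # swap background
        }
        for _ in range(total_images)
    ]

# Each variation would then be applied to the 3D scene and an image
# captured from the modified scene to form one training example.
variations = scene_variations(500)
```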
  • Patent number: 11037071
    Abstract: A machine learning engine may be used to identify items in a second item category that have a visual appearance similar to the visual appearance of a first item selected from a first item category. Image data and text data associated with a large number of items from different item categories may be processed and used by an association model created by a machine learning engine. The association model may extract item attributes from the image data and text data of the first item. The machine learning engine may determine weights for parameter types, and the weights may calibrate the influence of the respective parameter types on the search results. The association model may be deployed to identify items from different item categories that have a visual appearance similar to the first item. The association model may be updated over time by the machine learning engine as data correlations evolve.
    Type: Grant
    Filed: March 6, 2017
    Date of Patent: June 15, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Karolina Tekiela, Gabriel Blanco Saldana, Rui Luo
  • Patent number: 11028681
    Abstract: Methods and systems for improving mineral resource exploration and resource classification efficiency are provided herein. The generation and iterative, dynamic improvement of drill plans for sampling a target volume using drill holes is described. Methods and systems for the development and optimization of drill plans are able to accommodate a wide variety of constraints and targets, providing drill plans which aim to minimize the amount of explorative drilling while substantially converting unclassified sub-volumes, and in particular high-desirability sub-volumes, of the target volume to a specified or desired level while attempting to maximize targeted resource conversion efficiency. Resulting drill plans may provide a proposed collection of drill holes, defined in 3D space, penetrating the target volume which sufficiently sample a target volume while remaining within one or more specified constraints.
    Type: Grant
    Filed: December 16, 2016
    Date of Patent: June 8, 2021
    Assignee: 1789703 Ontario Ltd.
    Inventors: Andrew Dasys, Nehme Bilal
  • Patent number: 11030779
    Abstract: An image processing system (IPS) and related method and imaging arrangement (IAR). The system (IPS) comprises an input interface (IN) for receiving i) a 3D input image volume (V) previously reconstructed from projection images of an imaged object (BR) acquired along different projection directions and ii) a specification of an image structure in the input volume (V). A model former (MF) of the system (IPS) is configured to form, based on said specification, a 3D model (m) for said structure in the input 3D image volume. A volume adaptor (VA) of the system (IPS) is configured to adapt, based on said 3D model (m), the input image volume so as to form a 3D output image volume (V′).
    Type: Grant
    Filed: May 19, 2017
    Date of Patent: June 8, 2021
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventor: Klaus Erhard
  • Patent number: 11030458
    Abstract: The disclosure herein describes training a machine learning model to recognize a real-world object based on generated virtual scene variations associated with a model of the real-world object. A digitized three-dimensional (3D) model representing the real-world object is obtained and a virtual scene is built around the 3D model. A plurality of virtual scene variations is generated by varying one or more characteristics. Each virtual scene variation is generated to include a label identifying the 3D model in the virtual scene variation. A machine learning model may be trained based on the plurality of virtual scene variations. The use of generated digital assets to train the machine learning model greatly decreases the time and cost requirements of creating training assets and provides training quality benefits based on the quantity and quality of variations that may be generated, as well as the completeness of information included in each generated digital asset.
    Type: Grant
    Filed: September 14, 2018
    Date of Patent: June 8, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Muhammad Zeeshan Zia, Emanuel Shalev, Jonathan C. Hanzelka, Harpreet S. Sawhney, Pedro U. Escos, Michael J. Ebstyne
  • Patent number: 11030786
    Abstract: Systems and methods are provided for rendering hair. The systems and methods include receiving hair spline data comprising coordinates of a plurality of hair strands; selecting a first hair strand of the plurality of hair strands; retrieving coordinates of the first hair strand; identifying, based on the respective coordinates of the plurality of hair strands, a second hair strand that is adjacent to the first hair strand; storing a reference to the second hair strand in association with the coordinates of the first hair strand; and generating one or more additional hair strands between the first hair strand and the second hair strand based on the coordinates of the first hair strand and the reference to the second hair strand.
    Type: Grant
    Filed: August 5, 2019
    Date of Patent: June 8, 2021
    Assignee: Snap Inc.
    Inventors: Artem Bondich, Oleksandr Pyshchenko
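The strand-generation step can be sketched as a linear blend between two adjacent guide strands. This assumes both strands have the same vertex count and is only an illustration of interpolating between neighbours, not the patent's actual rendering pipeline.

```python
import numpy as np

def interpolate_strands(strand_a, strand_b, n_new):
    """Generate n_new strands between two adjacent guide strands by
    linearly blending their vertex coordinates (equal vertex counts
    assumed). Each strand is an (n_vertices, 3) array of coordinates."""
    a = np.asarray(strand_a, dtype=float)
    b = np.asarray(strand_b, dtype=float)
    out = []
    for i in range(1, n_new + 1):
        t = i / (n_new + 1)        # evenly spaced blend weights in (0, 1)
        out.append((1.0 - t) * a + t * b)
    return out
```

With `n_new=1` this yields the single midpoint strand between the two guides.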
  • Patent number: 11030795
    Abstract: Graphics processing systems and methods introduce soft shadowing effects into rendered images. This is achieved in a simple manner which can be implemented in real-time without incurring high processing costs, so it is suitable for implementation in low-cost devices. Rays are cast from positions on visible surfaces corresponding to pixel positions towards the center of a light, and occlusions of the rays are determined. The results of these determinations are used to apply soft shadows to the rendered pixel values.
    Type: Grant
    Filed: December 6, 2019
    Date of Patent: June 8, 2021
    Assignee: Imagination Technologies Limited
    Inventors: Justin P. DeCell, Luke T. Peterson
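The core occlusion test, a ray cast from a visible surface point toward the light centre, can be sketched with analytic sphere occluders. The real system operates on scene geometry and filters these binary results into soft penumbrae, so this is only the hard-shadow building block, with sphere occluders as an assumption for the example.

```python
import math

def sphere_hit_t(origin, direction, center, radius):
    """Smallest positive ray parameter t at which the unit-direction ray
    hits the sphere, or None if it misses (quadratic intersection)."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

def shadow_factor(point, light_pos, occluders):
    """1.0 if the ray from the shaded point to the light centre is clear,
    0.0 if any occluder sphere blocks it before reaching the light."""
    d = [l - p for l, p in zip(light_pos, point)]
    dist = math.sqrt(sum(x * x for x in d))
    d = [x / dist for x in d]  # normalize direction toward the light
    for center, radius in occluders:
        t = sphere_hit_t(point, d, center, radius)
        if t is not None and t < dist:  # hit strictly before the light
            return 0.0
    return 1.0
```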
  • Patent number: 11024095
    Abstract: A method for culling parts of a 3D reconstruction volume is provided. The method makes available to a wide variety of mobile XR applications fresh, accurate and comprehensive 3D reconstruction data with low usage of computational resources and storage space. The method includes culling parts of the 3D reconstruction volume against a depth image. The depth image has a plurality of pixels, each of which represents a distance to a surface in a scene. In some embodiments, the method includes culling parts of the 3D reconstruction volume against a frustum. The frustum is derived from a field of view of an image sensor, from which image data to create the 3D reconstruction is obtained.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: June 1, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Frank Thomas Steinbrücker, David Geoffrey Molyneaux, Zhongle Wu, Xiaolin Wei, Jianyuan Min, Yifu Zhang
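The depth-image culling test can be sketched as below: project each block centre through pinhole intrinsics and discard blocks behind the camera, outside the frustum, or behind the observed surface. The intrinsics and list-of-lists depth image are assumptions for the example; the actual method culls volume bricks, not single points.

```python
def cull_blocks(blocks, depth_image, fx, fy, cx, cy, margin=0.0):
    """Keep only reconstruction blocks (given by their centre points in
    camera coordinates) that could contain visible surface: in front of
    the camera, inside the image, and not behind the measured depth."""
    kept = []
    h = len(depth_image)
    w = len(depth_image[0])
    for bx, by, bz in blocks:
        if bz <= 0:
            continue                       # behind the camera: cull
        u = int(fx * bx / bz + cx)         # pinhole projection
        v = int(fy * by / bz + cy)
        if not (0 <= u < w and 0 <= v < h):
            continue                       # outside the frustum: cull
        if bz <= depth_image[v][u] + margin:
            kept.append((bx, by, bz))      # at/in front of surface: keep
    return kept
```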
  • Patent number: 11025959
    Abstract: A method includes receiving head-tracking data that describe one or more positions of people while the people are viewing a three-dimensional video. The method further includes generating a probabilistic model of the one or more positions of the people based on the head-tracking data, wherein the probabilistic model identifies a probability of a viewer looking in a particular direction as a function of time. The method further includes generating video segments from the three-dimensional video. The method further includes, for each of the video segments: determining a directional encoding format that projects latitudes and longitudes of locations of a surface of a sphere onto locations on a plane, determining a cost function that identifies a region of interest on the plane based on the probabilistic model, and generating optimal segment parameters that minimize a sum-over position for the region of interest.
    Type: Grant
    Filed: September 3, 2019
    Date of Patent: June 1, 2021
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Andrew Walkingshaw, Arthur van Hoff, Daniel Kopeinigg
  • Patent number: 11017611
    Abstract: Systems and methods related to generating and modifying a room or space within a virtual reality environment may comprise addition, removal, placement, modification, and/or resizing of a plurality of environment surfaces, such as a floor, walls, and ceiling, and/or a plurality of fixtures, such as doors, windows, or openings, associated with the environment surfaces, in which each environment surface and fixture includes associated dimensions. The environment surfaces and fixtures may be added, removed, placed, moved, and/or resized by a user. During such interactions, only a subset of dimensions relevant to the current functions or operations by the user may be presented to facilitate such functions or operations. Further, various aspects associated with environment surfaces and fixtures may be modified, such as paints, colors, materials, textures, or others.
    Type: Grant
    Filed: January 27, 2020
    Date of Patent: May 25, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Brian James Mount, Lee David Thompson, Dillon Taylor Baker, Joonhao Chuah, Hai Quang Kim, Michael Thomas, Kristian Kane, Jesse Alan DuPree
  • Patent number: 11014001
    Abstract: A method for building a gaming environment. The method includes accessing a base VR model of a real-world environment from a third party mapping data store, wherein the real-world environment includes a plurality of real-world objects. The method includes augmenting a first object in the base VR model that corresponds to a first real-world object in the real-world environment. The method includes stitching in the first object that is augmented into the base VR model to generate an augmented base VR model. The method includes storing the augmented base VR model to a library data store comprising a plurality of augmented base VR models for use as virtual environments of corresponding gaming applications. The method includes using one or more of the plurality of augmented base VR models to define a virtual environment of a gaming application.
    Type: Grant
    Filed: March 5, 2018
    Date of Patent: May 25, 2021
    Assignee: Sony Interactive Entertainment LLC
    Inventors: Miao Li, Shawn He
  • Patent number: 11010955
    Abstract: Methods for mapping 3D point cloud data into 2D surfaces are described herein. The methods utilize 3D surface patches to represent point clouds and perform flexible mapping of 3D patch surface data into 2D canvas images. Patches representing geometry and patches representing attributes such as textures are placed in different canvases, where the placement of each patch is done independently for geometry and texture, that is, geometry and texture patches do not need to be co-located, as in conventional point cloud mapping. Furthermore, methods include transformations of the 3D patch when placing it into the 2D canvas, for more efficient packing.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: May 18, 2021
    Assignee: Sony Group Corporation
    Inventor: Danillo Graziosi
  • Patent number: 11012719
    Abstract: Systems and methods are operable to present a sporting event on a display based on a determined level of viewer engagement and a determined team preference of the viewer. An exemplary embodiment presents a neutral viewpoint video content segment on the display during the first period of game play when the viewer has a neutral team preference, alternatively presents a first team alternative video content segment on the display during the first period of game play when the viewer has a preference for the first team, or alternatively presents a second team alternative video content segment on the display during the first period of game play when the viewer has a preference for the second team.
    Type: Grant
    Filed: March 8, 2016
    Date of Patent: May 18, 2021
    Assignee: DISH Technologies L.L.C.
    Inventor: Jeremy Mickelsen
  • Patent number: 11004248
    Abstract: A method is described comprising: applying a random pattern to specified regions of an object; tracking the movement of the random pattern during a motion capture session; and generating motion data representing the movement of the object using the tracked movement of the random pattern.
    Type: Grant
    Filed: November 2, 2020
    Date of Patent: May 11, 2021
    Assignee: Rearden Mova, LLC
    Inventors: Timothy Cotter, Stephen G. Perlman, John Speck, Roger van der Laan, Kenneth A. Pearce, Greg LaSalle
  • Patent number: 11006097
    Abstract: A method includes sending, to an interaction device including a projector and a camera, instructions causing the projector to project a light pattern to each of one or more specified directions and receiving, from the interaction device, one or more images each including illumination patterns associated with one or more surfaces of a specified direction. The method also includes constructing, based on the projected light patterns and the received illumination patterns, a model describing an environment of the interaction device, where the model includes one or more characteristics of each of one or more objects in the environment and one or more characteristics of each of one or more surfaces in the environment.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: May 11, 2021
    Assignee: Facebook, Inc.
    Inventors: Baback Elmieh, Joyce Hsu, Scott Snibbe, Amir Mesguich Havilio, Angela Chang, Alexandre Jais, Rex Crossen
  • Patent number: 11004202
    Abstract: Systems and methods for obtaining 3D point-level segmentation of 3D point clouds in accordance with various embodiments of the invention are disclosed. One embodiment includes: at least one processor, and a memory containing a segmentation pipeline application. In addition, the segmentation pipeline application configures the at least one processor to: pre-process a 3D point cloud to group 3D points; provide the groups of 3D points to a 3D neural network to generate initial label predictions for the groups of 3D points; interpolate label predictions for individual 3D points based upon initial label predictions for at least two neighboring groups of 3D points including the group of 3D points to which a given individual 3D point belongs; refine the label predictions using a graph neural network; and output a segmented 3D point cloud.
    Type: Grant
    Filed: October 9, 2018
    Date of Patent: May 11, 2021
    Assignee: The Board of Trustees of the Leland Stanford Junior University
    Inventors: Lyne P. Tchapmi, Christopher B. Choy, Iro Armeni, JunYoung Gwak, Silvio Savarese
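The interpolation step, coarse group predictions blended into per-point label predictions, can be sketched with inverse-distance weighting over the nearest group centres. The pipeline's actual interpolation scheme differs (and is followed by graph-network refinement), so treat this as an illustration of the coarse-to-fine idea only.

```python
import numpy as np

def interpolate_point_labels(point, group_centers, group_probs, k=2):
    """Blend the class-probability vectors of the k nearest groups with
    inverse-distance weights to obtain a per-point label prediction."""
    centers = np.asarray(group_centers, dtype=float)
    probs = np.asarray(group_probs, dtype=float)
    d = np.linalg.norm(centers - np.asarray(point, dtype=float), axis=1)
    nearest = np.argsort(d)[:k]            # indices of k closest groups
    w = 1.0 / (d[nearest] + 1e-9)          # inverse-distance weights
    w /= w.sum()
    return (w[:, None] * probs[nearest]).sum(axis=0)
```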
  • Patent number: 10994201
    Abstract: In a method for providing an augmented reality interface for use by a first real-world human user and a second real-world human user, an augmented reality and virtual reality engine (AR-VR engine) produces a visual transformation of the first real-world human user (transformed human user 1), and a visual transformation of a real-world environment around the first real-world human user (transformed environment). The AR-VR engine also produces a virtualized reality world that includes images of the transformed first real-world human user moving about, and interacting with, the transformed environment. The AR-VR engine further provides an electronic interface through which the second real-world human user can interact, in real-time, with at least one of the transformed first real-world human user and the transformed environment.
    Type: Grant
    Filed: March 21, 2019
    Date of Patent: May 4, 2021
    Assignee: Wormhole Labs, Inc.
    Inventors: Robert D. Fish, Curtis Hutten
  • Patent number: 10997795
    Abstract: An apparatus and method are provided for compressing a three-dimensional (3D) object image represented by point cloud data. The method includes positioning the 3D object image into a plurality of equi-sized cubes for compression; determining 3D local coordinates in each of the plurality of equi-sized cubes and a cube index for each point of the 3D object image positioned in the plurality of equi-sized cubes; generating two-dimensional (2D) image data based on the 3D local coordinates and the cube indexes; and storing the 2D image data in a memory. The 2D image data includes at least one of 2D geometry data, 2D meta data, or 2D color data.
    Type: Grant
    Filed: May 10, 2019
    Date of Patent: May 4, 2021
    Inventors: Raghavan Velappan, Suresh Kumar KrishnanKutty Vettukuzhyparambhil, Pavan Kumar Dusi, Raghavendra Holla, Amit Yadav, Nachiketa Das, Divyanshu Chuchra
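The cube decomposition described above, each point split into an integer cube index plus local coordinates within that cube, can be sketched directly. The subsequent packing of indexes and local coordinates into 2D geometry/meta/color images is omitted here.

```python
import numpy as np

def cube_coordinates(points, cube_size):
    """Split each 3D point into an integer cube index and its local
    coordinates inside that equi-sized cube, the decomposition that the
    2D image packing builds on."""
    pts = np.asarray(points, dtype=float)
    idx = np.floor(pts / cube_size).astype(int)   # which cube
    local = pts - idx * cube_size                 # position within cube
    return idx, local
```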
  • Patent number: 10990505
    Abstract: A method for composing a scene using a data module includes: receiving, from a user, an instruction to instantiate the data module to produce at least a first instance of the data module in a second data module; receiving, from the user, a first override for modifying the first instance of the data module; receiving, from the user, a second override for modifying the data module; identifying a conflict introduced by the first override or the second override; configuring a display interface to display an indication informing the user of the identified conflict; configuring the display interface to display one or more options for resolving the identified conflict; receiving, from the user, a selection of an option of the one or more options; and in response to the selection of the option, resolving the identified conflict by deleting the first override or the second override.
    Type: Grant
    Filed: April 19, 2019
    Date of Patent: April 27, 2021
    Assignee: DREAMWORKS ANIMATION LLC
    Inventors: Esteban Papp, Chi-Wei Tseng, Stuart Bryson, Matthew Christopher Gong, Yu-Hsin Chang
  • Patent number: 10984142
    Abstract: A system and method are disclosed for building a modular electrical system for a jack up rig, the method including but not limited to identifying rig equipment on the jack up rig that will be connected to the modular electrical system; selecting electrical equipment to control the rig equipment; placing the electrical equipment in an electrical module; electrically connecting the electrical equipment to power cables and control cables inside of the electrical module; and testing the electrical equipment inside of the electrical module.
    Type: Grant
    Filed: July 30, 2014
    Date of Patent: April 20, 2021
    Assignee: Electronic Power Design, Inc.
    Inventor: John Norwood, IV
  • Patent number: 10984222
    Abstract: The present disclosure provides method, apparatus and system for 3-dimension (3D) face tracking. The method for 3D face tracking may comprise: obtaining a 2-dimension (2D) face image; performing a local feature regression on the 2D face image to determine 3D face representation parameters corresponding to the 2D face image; and generating a 3D facial mesh and corresponding 2D facial landmarks based on the determined 3D face representation parameters. The present disclosure may improve tracking accuracy and reduce memory cost, and accordingly may be effectively applied in broader application scenarios.
    Type: Grant
    Filed: July 12, 2016
    Date of Patent: April 20, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Hsiang-Tao Wu, Xin Tong, Yangang Wang, Fang Wen
  • Patent number: 10984609
    Abstract: Disclosed herein are an apparatus and method for generating a 3D avatar. The method, performed by the apparatus, includes performing a 3D scan of the body of a user using an image sensor and generating a 3D scan model using the result of the 3D scan of the body of the user, matching the 3D scan model and a previously stored template avatar, and generating a 3D avatar based on the result of matching the 3D scan model and the template avatar.
    Type: Grant
    Filed: September 6, 2019
    Date of Patent: April 20, 2021
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Byung-Ok Han, Ho-Won Kim, Ki-Nam Kim, Jae-Hwan Kim, Ji-Hyung Lee, Yu-Gu Jung, Chang-Joon Park, Gil-Haeng Lee
  • Patent number: 10984182
    Abstract: Systems and methods are disclosed herein relating to the annotation of microscan data/images and the generation of context-rich electronic reports. Microscan images are imported and displayed in a context-rich environment to provide contextual information for an operator to annotate microscan images. Markers are used to identify the relative location of a microscan image on a full-subject image. Reports are generated that include a full-subject image with one or more markers identifying the relative locations of annotated image data in one or more locations on the full-subject image. Hyperlinked data elements allow for quick navigation to detailed report information in location selection sections of the report for each marked location on the full-subject image in the report.
    Type: Grant
    Filed: September 22, 2017
    Date of Patent: April 20, 2021
    Assignee: Loveland Innovations, LLC
    Inventors: Jim Loveland, Leif Larson, Dan Christiansen, Tad Christiansen, Cam Christiansen
  • Patent number: 10977827
    Abstract: This disclosure describes a method and system to perform object detection and 6D pose estimation. The system comprises a database of 3D models, a CNN-based object detector, multiview pose verification, and a hard example generator for CNN training. The accuracy of that detection and estimation can be iteratively improved by retraining the CNN with increasingly hard ground truth examples. The additional images are detected and annotated by an automatic process of pose estimation and verification.
    Type: Grant
    Filed: March 27, 2018
    Date of Patent: April 13, 2021
    Inventors: J. William Mauchly, Joseph T. Friel
  • Patent number: 10977549
    Abstract: In implementations of object animation using generative neural networks, one or more computing devices of a system implement an animation system for reproducing animation of an object in a digital video. A mesh of the object is obtained from a first frame of the digital video and a second frame of the digital video having the object is selected. Features of the object from the second frame are mapped to vertices of the mesh, and the mesh is warped based on the mapping. The warped mesh is rendered as an image by a neural renderer and compared to the object from the second frame to train a neural network. The rendered image is then refined by a generator of a generative adversarial network which includes a discriminator. The discriminator trains the generator to reproduce the object from the second frame as the refined image.
    Type: Grant
    Filed: February 14, 2019
    Date of Patent: April 13, 2021
    Assignee: Adobe Inc.
    Inventors: Vladimir Kim, Omid Poursaeed, Jun Saito, Elya Shechtman
  • Patent number: 10973440
    Abstract: Methods for controlling a mobile or wearable device user's representation in real time are described, where the user is performing a gait activity with a gait velocity, and the gait velocity is used for control. Additional user mobility characteristics leveraged for control may include cadence and stride length, and the sensors utilized to obtain any contextual information may be accelerometers.
    Type: Grant
    Filed: February 14, 2019
    Date of Patent: April 13, 2021
    Inventor: David Martin
  • Patent number: 10972349
    Abstract: In some embodiments, a message and a digital signature related to the message may be obtained, where the message may include a source identifier of a data source and values associated with parameters for an executable. The message may be transformed into a network-specific data structure having a specific format associated with a network. A verification of the network-specific data structure may be performed based on the digital signature. The values may be provided to the executable based on the verification indicating a match between the network-specific data structure and the digital signature.
    Type: Grant
    Filed: August 13, 2020
    Date of Patent: April 6, 2021
    Inventor: Matthew Branton
  • Patent number: 10970889
    Abstract: Embodiments provide systems, methods, and computer storage media for generating stroke predictions based on prior strokes and a reference image. An interactive drawing interface can allow a user to sketch over, or with respect to, a reference image. A UI tool such as an autocomplete or workflow clone tool can access or identify a set of prior strokes and a target region, and stroke predictions can be generated using an iterative algorithm that minimizes an energy function considering stroke-to-stroke and image-patch-to-image-patch comparisons. For any particular future stroke, one or more stroke predictions may be initialized based on the set of prior strokes. Each initialized prediction can be improved by iteratively executing search and assignment steps to incrementally improve the prediction, and the best prediction can be selected and presented as a stroke prediction for the future stroke. The process can be repeated to predict any number of future strokes.
    Type: Grant
    Filed: March 28, 2019
    Date of Patent: April 6, 2021
    Assignee: Adobe Inc.
    Inventors: Yilan Chen, Li-Yi Wei
  • Patent number: 10970849
    Abstract: According to one implementation, a pose estimation and body tracking system includes a computing platform having a hardware processor and a system memory storing a software code including a tracking module trained to track motions. The software code receives a series of images of motion by a subject, and for each image, uses the tracking module to determine locations corresponding respectively to two-dimensional (2D) skeletal landmarks of the subject based on constraints imposed by features of a hierarchical skeleton model intersecting at each 2D skeletal landmark. The software code further uses the tracking module to infer joint angles of the subject based on the locations and determine a three-dimensional (3D) pose of the subject based on the locations and the joint angles, resulting in a series of 3D poses. The software code outputs a tracking image corresponding to the motion by the subject based on the series of 3D poses.
    Type: Grant
    Filed: April 16, 2019
    Date of Patent: April 6, 2021
    Assignees: Disney Enterprises, Inc., ETH Zürich (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Ahmet Cengiz Öztireli, Prashanth Chandran, Markus Gross
  • Patent number: 10970912
    Abstract: Aspects relate to tracing rays in 3-D scenes that comprise objects that are defined by or with implicit geometry. In an example, a trapping element defines a portion of 3-D space in which implicit geometry exist. When a ray is found to intersect a trapping element, a trapping element procedure is executed. The trapping element procedure may comprise marching a ray through a 3-D volume and evaluating a function that defines the implicit geometry for each current 3-D position of the ray. An intersection detected with the implicit geometry may be found concurrently with intersections for the same ray with explicitly-defined geometry, and data describing these intersections may be stored with the ray and resolved.
    Type: Grant
    Filed: March 10, 2014
    Date of Patent: April 6, 2021
    Assignee: Imagination Technologies Limited
    Inventors: Cuneyt Ozdas, Luke Tilman Peterson, Steven Blackmon, Steven John Clohset
  • Patent number: 10970907
    Abstract: Disclosed herein includes a system, a method, and a non-transitory computer readable medium for applying an expression to an avatar. In one aspect, a class of an expression of a face can be determined according to a set of attributes indicating states of portions of the face. In one aspect, a set of blendshapes with respective weights corresponding to the expression of the face can be determined according to the class of the expression of the face. In one aspect, the set of blendshapes with respective weights can be provided as an input to train a machine learning model. In one aspect, the machine learning model can be configured, via training, to generate an output set of blendshapes with respective weights, according to an input image. An image of an avatar may be rendered according to the output set of blendshapes with respective weights.
    Type: Grant
    Filed: July 2, 2019
    Date of Patent: April 6, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Elif Albuz, Melinda Ozel, Tong Xiao, Sidi Fu
  • Patent number: 10970919
    Abstract: A method of determining an illumination effect value of a volumetric dataset includes determining, based on the volumetric dataset, one or more parameter values relating to one or more properties of the volumetric dataset at a sample point; and providing the one or more parameter values as inputs to an anisotropic illumination model and thereby determining an illumination effect value relating to an illumination effect at the sample point, the illumination effect value defining a relationship between an amount of incoming light and an amount of outgoing light at the sample point.
    Type: Grant
    Filed: December 4, 2019
    Date of Patent: April 6, 2021
    Assignee: SIEMENS HEALTHCARE GMBH
    Inventor: Felix Dingeldey
  • Patent number: 10963812
    Abstract: Some aspects of the present disclosure relate to computer processes for generating and training a generative machine learning model to estimate the true sizes of items and users of an electronic catalog, which is subsequently applied to determine fit recommendations, as well as confidence values for the fit recommendations, for how a particular item may fit a particular user. During training, the disclosed generative model can implement Bayesian statistical inference to calculate estimated true sizes of both items and users of an electronic catalog using both (1) a prior distribution of sizes for items and users and (2) a distribution based on obtained evidence regarding how items actually fit users. The resulting posterior distribution can be approximated using a proposal distribution used to generate the fit recommendations and associated confidence values.
    Type: Grant
    Filed: March 17, 2017
    Date of Patent: March 30, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Vivek Sembium Varadarajan, Rajeev Ramnarain Rastogi, Atul Saroop
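The Bayesian update at the heart of the approach, a prior over a true size combined with noisy fit evidence, can be illustrated with a conjugate normal model. The patent's generative model and proposal-distribution approximation are considerably more involved, so this shows only the flavour of the inference; the numbers are illustrative.

```python
def posterior_size(prior_mean, prior_var, observations, obs_var):
    """Conjugate normal update: combine a Gaussian prior over an item's
    (or user's) true size with noisy size observations. Returns the
    posterior mean and variance; variance shrinks as evidence accrues,
    which could back a confidence value for a fit recommendation."""
    precision = 1.0 / prior_var            # work in precision space
    mean_times_prec = prior_mean / prior_var
    for y in observations:
        precision += 1.0 / obs_var
        mean_times_prec += y / obs_var
    var = 1.0 / precision
    return mean_times_prec * var, var
```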
  • Patent number: 10964124
    Abstract: A 3D image processing system includes voxel adjustments based on radiodensity, filtering and segmentation, each of which may be selected, configured, and applied in response to controller-entered commands. In this disclosure, a method and apparatus for improved voxel processing and improved filtering are established. With regard to the improved voxel processing, a first group of voxels is changed in shape, size or orientation independently from a second group of voxels. For example, the volume is divided into groups and the dynamic filtering is performed. This improves visualization of 3D images by providing a greater extent of filtering while maintaining context of portions of the 3D image.
    Type: Grant
    Filed: April 9, 2020
    Date of Patent: March 30, 2021
    Inventor: Robert Edwin Douglas
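A minimal sketch of the group-wise voxel adjustment idea from the abstract above: voxels are split into groups by radiodensity, and one group is transformed independently while the other is left untouched to preserve context. This is an invented illustration, not the disclosed apparatus.

```python
def filter_groups(volume, threshold, scale):
    """Split voxels into two groups by radiodensity and adjust only the
    high-density group, leaving the low-density group's context intact.

    `volume` maps a voxel coordinate to a radiodensity value."""
    out = {}
    for coord, density in volume.items():
        if density >= threshold:
            out[coord] = density * scale   # adjust one group independently
        else:
            out[coord] = density           # keep the other group as context
    return out
```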
  • Patent number: 10953602
    Abstract: Embodiments of this application relate to systems and methods which allow for 3-D printed objects, such as eyeglasses and wristwatches, for example, to be customized by users according to modification specifications that are defined and constrained by manufacturers based on factors relating to the printability of a modified design.
    Type: Grant
    Filed: May 22, 2019
    Date of Patent: March 23, 2021
    Assignee: Materialise N.V.
    Inventors: Tom Cluckers, Jan Maes
  • Patent number: 10956625
    Abstract: A system and method are provided that facilitate generating meshes for object models of structures for use with finite element analysis simulations carried out on the structure. The system may include at least one processor configured to classify a type of an input face of a three dimensional (3D) object model of a structure based at least in part on a number of loops included by the input face. The processor may also select, based on the classified type of the input face, a multi-block decomposition algorithm from among a plurality of multi-block decomposition algorithms that the processor is configured to use. Further, the processor may use the selected multi-block decomposition algorithm to determine locations of a plurality of blocks across the input face. In addition, the processor may mesh each block to produce mesh data defining a mesh that divides the input face into a plurality of quadrilateral elements.
    Type: Grant
    Filed: September 1, 2015
    Date of Patent: March 23, 2021
    Assignee: Siemens Industry Software Inc.
    Inventors: Jonathan Makem, Nilanjan Mukherjee
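The two steps the abstract above names, classifying a face by its loop count and then meshing each block into quadrilaterals, can be sketched as follows. This is a toy assuming a single axis-aligned rectangular block; the names are invented and real multi-block decomposition algorithms are far more involved.

```python
def classify_face(num_loops):
    """Pick a decomposition strategy from the number of boundary loops:
    a single outer loop can be handled as one block, while faces with
    holes (extra loops) need a multi-block decomposition."""
    return "single-block" if num_loops == 1 else "multi-block"

def quad_mesh_rect(width, height, nx, ny):
    """Mesh one rectangular block into nx*ny quadrilateral elements,
    each given as four corner points in counter-clockwise order."""
    quads = []
    dx, dy = width / nx, height / ny
    for i in range(nx):
        for j in range(ny):
            x0, y0 = i * dx, j * dy
            quads.append(((x0, y0), (x0 + dx, y0),
                          (x0 + dx, y0 + dy), (x0, y0 + dy)))
    return quads
```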
  • Patent number: 10958891
    Abstract: Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation (MVIDMR) of an object can be generated from live images of an object captured from a camera. After the MVIDMR of the object is generated, a tag can be placed at a location on the object in the MVIDMR. The locations of the tag in the frames of the MVIDMR can vary from frame to frame as the view of the object changes. When the tag is selected, media content can be output which shows details of the object at the location where the tag is placed. In one embodiment, the object can be a car and tags can be used to link to media content showing details of the car at the locations where the tags are placed.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: March 23, 2021
    Assignee: Fyusion, Inc.
    Inventors: Radu Bogdan Rusu, Dave Morrison, Keith Martin, Stephen David Miller, Pantelis Kalogiros, Mike Penz, Martin Markus Hubert Wawro, Bojana Dumeljic, Jai Chaudhry, Luke Parham, Julius Santiago, Stefan Johannes Josef Holzer
  • Patent number: 10957082
    Abstract: When performing conservative rasterisation in a graphics processing pipeline, modified edge information that accounts for an error in the dimensions of a primitive is determined by a primitive set-up stage. That modified edge information is then used by a rasterisation stage to determine whether the primitive covers one or more sampling points associated with pixels to be displayed. The same modified edge information can also be used to determine if the pixels are fully covered by the primitive irrespective of any rounding effects (errors) in the position of the (vertices of the) primitive.
    Type: Grant
    Filed: March 23, 2018
    Date of Patent: March 23, 2021
    Assignee: Arm Limited
    Inventor: Frode Heggelund
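Conservative rasterisation via modified edge information can be sketched as follows: each edge function is offset outward by half the pixel extent, so that any pixel the primitive touches at all passes the edge test. This is a generic textbook formulation of outer-conservative coverage, offered only as illustration and not necessarily the patented method.

```python
def conservative_edge(a, b, c, pixel_size=1.0):
    """Offset the edge function e(x, y) = a*x + b*y + c outward by half
    the pixel extent along each axis, producing modified edge
    information for an outer-conservative coverage test."""
    return a, b, c + (abs(a) + abs(b)) * pixel_size * 0.5

def covered(edges, x, y):
    """A pixel centre (x, y) is inside the primitive if it passes the
    test e(x, y) >= 0 for every edge."""
    return all(a * x + b * y + c >= 0 for a, b, c in edges)
```

For an edge at x = 0.4 (i.e. e = x - 0.4), the pixel centred at (0, 0) fails the ordinary test, but passes after the conservative offset because the primitive overlaps part of that pixel.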
  • Patent number: 10950045
    Abstract: A virtual reality image display system includes a processor that selects an attribute, senses motion of a user, generates a virtual reality image from body image information of the selected attribute, controls motion of the generated virtual reality image based on information on the sensed motion of the user, and displays the virtual reality image in accordance with the controlled motion.
    Type: Grant
    Filed: January 16, 2017
    Date of Patent: March 16, 2021
    Assignee: NEC Solution Innovators, Ltd.
    Inventors: Masakazu Moriguchi, Yusuke Nakao, Yoshie Sakurazawa
  • Patent number: 10950043
    Abstract: Images of various views of objects can be captured. An object mesh structure can be created based at least in part on the object images. The object mesh structure represents the three-dimensional shape of the object. Alpha masks indicating which pixels are associated with the object can be used to refine the object mesh structure. A request can be made to view the object from an arbitrary viewpoint which differs from the viewpoints associated with the captured images. A subset of the captured images can be used to create a synthetic image. Different weights can be assigned to the captured images to render a synthetic image that represents the view from the arbitrary viewpoint selected. The input images for the synthetic image can be prefetched, or loaded into memory before the arbitrary view is requested. The images can also be cached for future use or to avoid reloading them for another synthetic image.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: March 16, 2021
    Assignee: A9.com, Inc.
    Inventors: Karl Hillesland, Xi Zhang, Himanshu Arora, Yu Lou, Radek Grzeszczuk, Arnab Sanat Kumar Dhua
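The view-dependent weighting of captured images can be sketched as follows: the captured views nearest (in angle) to the requested arbitrary viewpoint receive weights inversely proportional to angular distance, normalised to sum to one. This is an invented illustration using one-dimensional angles; the disclosed system works with full camera viewpoints.

```python
def view_weights(capture_angles, target_angle, k=3):
    """Select the k captured views nearest in angle to the requested
    viewpoint and weight them inversely to angular distance; the
    returned weights (indexed by capture) sum to 1."""
    nearest = sorted((abs(a - target_angle), i)
                     for i, a in enumerate(capture_angles))[:k]
    inv = [(1.0 / (d + 1e-6), i) for d, i in nearest]
    total = sum(w for w, _ in inv)
    return {i: w / total for w, i in inv}
```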
  • Patent number: 10949715
    Abstract: Systems and methods are disclosed that are configured to train an autoencoder using images that include faces, wherein the autoencoder comprises an input layer, an encoder configured to output a latent image from a corresponding input image, and a decoder configured to attempt to reconstruct the input image from the latent image. An image sequence of a face exhibiting a plurality of facial expressions and transitions between facial expressions is generated and accessed. Images of the plurality of facial expressions and transitions between facial expressions are captured from a plurality of different angles and using different lighting. An autoencoder is trained using source images that include the face with different facial expressions captured at different angles with different lighting, and using destination images that include a destination face.
    Type: Grant
    Filed: May 27, 2020
    Date of Patent: March 16, 2021
    Assignee: Neon Evolution Inc.
    Inventors: Cody Gustave Berlin, Carl Davis Bogan, III, Kenneth Michael Lande, Jacob Myles Laser, Brian Sung Lee, Anders Øland
  • Patent number: 10943375
    Abstract: Generation of a multi-state symbol from an input graphic object is described. A multi-state graphic symbol system generates an outline and a base mesh for a graphic object. The multi-state graphic symbol system then defines graphic manipulation handles relative to the base mesh and deforms the base mesh by altering a state of the handles. Vectors describing initial positions and final positions of the handles are generated and stored with the outline and base mesh to define the graphic object's multi-state symbol. Additional poses can be generated by adding and/or modifying other handles, and each additional pose is stored as a vector in the multi-state symbol. Additional poses of the graphic object can be generated by interpolating between different vectors of the multi-state symbol. The multi-state graphic symbol system additionally enables an interpolated pose to be generated based on separate user-defined paths for different handles of the multi-state symbol.
    Type: Grant
    Filed: April 17, 2019
    Date of Patent: March 9, 2021
    Assignee: Adobe Inc.
    Inventors: Ankit Phogat, Vineet Batra, Mansi Nagpal
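Interpolating between stored handle vectors to synthesise an intermediate pose, as the abstract above describes, reduces in the simplest case to per-handle linear interpolation. A minimal sketch with invented names:

```python
def interpolate_pose(initial, final, t):
    """Linearly interpolate each handle between its initial-position
    vector and final-position vector to synthesise an intermediate
    pose of the multi-state symbol (0 <= t <= 1)."""
    return [tuple(a + t * (b - a) for a, b in zip(p0, p1))
            for p0, p1 in zip(initial, final)]
```

At t = 0 this reproduces the initial pose and at t = 1 the final pose; per-handle user-defined paths would replace the straight-line segment with an arbitrary curve per handle.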
  • Patent number: 10936920
    Abstract: A system trains and applies a machine learning model to label maps of a region. Various data modalities are combined as inputs for multiple data tiles used to characterize a region for a geographical map. Each data modality reflects sensor data captured in different ways. Some data modalities include aerial imagery, point cloud data, and location trace data. The different data modalities are captured independently and then aggregated using machine learning models to determine map labeling information about tiles in the region. Data is ingested by the system and corresponding tiles are identified. A tile is represented by a feature vector of different data types related to the various data modalities, and values from the ingested data are added to the feature vector for the tile. Models can be trained to predict characteristics of a region using these various types of input.
    Type: Grant
    Filed: June 17, 2019
    Date of Patent: March 2, 2021
    Assignee: Uber Technologies, Inc.
    Inventors: Timo Pekka Pylvaenaeinen, Aditya Sarawgi, Vijay Mahadevan, Vasudev Parameswaran, Mohammed Waleed Kadous
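One way to picture the per-tile feature vector the abstract above describes is as a simple aggregation of independently captured modalities into one vector per tile. This sketch invents both the feature choices and the function name; it is not the disclosed system.

```python
def tile_feature_vector(aerial_pixels, point_cloud, location_traces):
    """Aggregate three data modalities for one map tile into a single
    feature vector: mean aerial-image intensity, point count and mean
    height from the point cloud, and the number of location traces."""
    mean_intensity = (sum(aerial_pixels) / len(aerial_pixels)
                      if aerial_pixels else 0.0)
    mean_height = (sum(z for _, _, z in point_cloud) / len(point_cloud)
                   if point_cloud else 0.0)
    return [mean_intensity, float(len(point_cloud)),
            mean_height, float(len(location_traces))]
```

A model trained on such vectors could then predict map labels (e.g. road vs. building) for each tile from the combined modalities.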
  • Patent number: 10937244
    Abstract: The construction of virtual reality environments can be made more efficient with enhancements directed to the sizing of objects to be utilized in the construction of virtual reality environments, enhancements directed to the simultaneous display of multiple thumbnails, or other like indicators, of virtual reality environments being constructed, enhancements directed to controlling the positioning of a view of a virtual reality environment, enhancements directed to conceptualizing the virtual reality environment as perceived through different types of three-dimensional presentational hardware, and enhancements directed to the exchange of objects between multiple virtual reality environments being constructed.
    Type: Grant
    Filed: October 23, 2018
    Date of Patent: March 2, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dong Back Kim, Ricardo Acosta Moreno, Jia Wang, Joshua Benjamin Eiten, Stefan Landvogt
  • Patent number: 10937246
    Abstract: A method of operating a computing system to generate a model of an environment represented by a mesh is provided. The method allows 3D meshes to be updated for client applications in real time with low latency to support on-the-fly environment changes. The method provides 3D meshes adaptive to different levels of simplification requested by various client applications. The method provides local updates, for example, updating the mesh parts that have changed since the last update. The method also provides 3D meshes with planarized surfaces to support robust physics simulations. The method includes segmenting a 3D mesh into mesh blocks. The method also includes performing a multi-stage simplification on selected mesh blocks. The multi-stage simplification includes a pre-simplification operation, a planarization operation, and a post-simplification operation.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: March 2, 2021
    Assignee: Magic Leap, Inc.
    Inventors: David Geoffrey Molyneaux, Frank Thomas Steinbrücker, Zhongle Wu, Xiaolin Wei, Jianyuan Min, Yifu Zhang
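Of the three simplification stages named in the abstract above, the planarization stage is the easiest to illustrate: vertices lying within a tolerance of a detected plane are snapped onto it, flattening nearly-planar surfaces so physics simulations behave robustly. A toy sketch assuming a single horizontal plane; the name and representation are invented.

```python
def planarize(vertices, plane_z, tol):
    """Planarization stage: snap each vertex whose height lies within
    `tol` of a detected horizontal plane onto that plane, leaving all
    other vertices unchanged."""
    return [(x, y, plane_z if abs(z - plane_z) <= tol else z)
            for x, y, z in vertices]
```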
  • Patent number: 10937188
    Abstract: Systems and methods for cuboid detection and keypoint localization in images are disclosed. In one aspect, a deep cuboid detector can be used for simultaneous cuboid detection and keypoint localization in monocular images. The deep cuboid detector can include a plurality of convolutional layers and non-convolutional layers of a trained convolution neural network for determining a convolutional feature map from an input image. A region proposal network of the deep cuboid detector can determine a bounding box surrounding a cuboid in the image using the convolutional feature map. The pooling layer and regressor layers of the deep cuboid detector can implement iterative feature pooling for determining a refined bounding box and a parameterized representation of the cuboid.
    Type: Grant
    Filed: March 5, 2020
    Date of Patent: March 2, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Tomasz Jan Malisiewicz, Andrew Rabinovich, Vijay Badrinarayanan, Debidatta Dwibedi
  • Patent number: 10930072
    Abstract: In an example embodiment, techniques are provided for displaying contour lines on a multi-resolution mesh substantially in real-time. Contour lines may be computed on a per-tile basis, scaling for various resolutions. The mesh and computed contour lines from lower resolution tiles may be displayed as temporary (referred to hereinafter as “overview”) data while the mesh and contour lines for higher resolution tiles are obtained or computed, to enable substantially real-time update. The techniques may handle very large meshes and large numbers of contour lines, without unduly taxing hardware resources. The techniques may also be applicable to multiple types of meshes (e.g., 2-D, 2.5-D, 3-D, 4-D, etc.).
    Type: Grant
    Filed: November 11, 2019
    Date of Patent: February 23, 2021
    Assignee: Bentley Systems, Incorporated
    Inventors: Mathieu St-Pierre, Elenie Godzaridis
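Per-tile contour computation can be sketched as locating where a contour level crosses the edges of a tile's height grid by linear interpolation, here only along horizontal edges for brevity. This is an invented toy, not the patented multi-resolution scheme.

```python
def contour_crossings(heights, level):
    """Find where a contour level crosses horizontal grid edges of one
    tile, by linear interpolation between adjacent height samples.
    Returns (x, y) crossing points in grid coordinates."""
    points = []
    for r, row in enumerate(heights):
        for c in range(len(row) - 1):
            h0, h1 = row[c], row[c + 1]
            if (h0 - level) * (h1 - level) < 0:   # edge straddles the level
                t = (level - h0) / (h1 - h0)
                points.append((c + t, r))
    return points
```

Running this independently per tile is what allows lower-resolution tiles to display approximate contours immediately while higher-resolution tiles are still being fetched.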
  • Patent number: 10929494
    Abstract: There is provided a method of creating an augmented reality image, comprising: capturing by an imaging sensor of a mobile device, a two dimensional (2D) image of a three dimensional (3D) scene comprising objects and pixel neighborhoods, selecting with a graphical user interface (GUI) presented on a display of the mobile device, pixel(s) of the 2D image corresponding to a certain object, computing a 3D geo-location of the certain object corresponding to the selected pixel(s) of the 2D image, wherein the 3D geo-location includes an altitude relative to sea level, and wherein the 3D geo-location is geographically distinct and spaced apart from a location of the imaging sensor outputted by a location sensor, and creating a tag for the selected pixel(s) of the certain object of the 2D image according to the computed 3D geo-location within a virtual grid, wherein the tag maps to media-object(s) corresponding with real world coordinates.
    Type: Grant
    Filed: April 15, 2019
    Date of Patent: February 23, 2021
    Assignee: Stops.com Ltd.
    Inventors: Eitan Richard Chamberlin, Ehud Spiegel, Nathan Akimov, Gregory Zaoui
  • Patent number: 10930073
    Abstract: In various embodiments, techniques are provided for clipping and displaying a multi-resolution textured mesh using asynchronous incremental on-demand marking of spatial index nodes to allow for substantially real-time display refresh after a change is made to clip geometry. Timestamps may be added to spatial index nodes and an upper bound placed on the number of operations performed such that an index in an intermediate (unfinished) state may be produced. Further, operations may be focused on tiles required for display and not simply all tiles affected by the change to the clip geometry. A display process may use the spatial index in the intermediate (unfinished) state to produce a substantially real-time display, without waiting for all operations to complete.
    Type: Grant
    Filed: November 11, 2019
    Date of Patent: February 23, 2021
    Assignee: Bentley Systems, Incorporated
    Inventors: Elenie Godzaridis, Mathieu St-Pierre
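The asynchronous incremental marking with an upper bound on operations, as described in the abstract above, can be sketched as a budgeted pass that stamps stale spatial-index nodes with the current clip-geometry version and stops early, leaving the index in a usable intermediate state. The node representation and names here are invented for illustration.

```python
def mark_nodes(nodes, clip_version, budget):
    """Stamp at most `budget` stale spatial-index nodes with the
    current clip-geometry version, then stop; remaining nodes keep
    their old timestamp, so the index stays usable mid-update.
    Returns the number of nodes actually marked."""
    done = 0
    for node in nodes:
        if node["version"] < clip_version:
            node["version"] = clip_version
            done += 1
            if done == budget:
                break
    return done
```

A display pass can then prefer nodes already stamped with the latest version and fall back to older geometry elsewhere, which is what yields a substantially real-time refresh.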
  • Patent number: 10924691
    Abstract: [Object] To enable a user to image a free viewpoint video picture easily even in a place where it is difficult to install an imaging device at all times. [Solution] A control device of a movable type imaging device according to the present disclosure includes an imaging information acquiring section that acquires imaging information with regard to imaging from a plurality of movable type imaging devices having an imaging function; and an arrangement information calculating section that calculates arrangement information for arranging a plurality of the movable type imaging devices in order to generate a free viewpoint video picture by synthesizing images captured by a plurality of the movable type imaging devices on the basis of the imaging information.
    Type: Grant
    Filed: September 19, 2017
    Date of Patent: February 16, 2021
    Assignee: SONY CORPORATION
    Inventors: Hideyuki Suzuki, Junji Kato, Kei Takahashi, Hisayuki Tateno