Patents by Inventor Aditya SANGHI

Aditya SANGHI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11954820
    Abstract: One embodiment of the present invention sets forth a technique for adding dimensions to a target drawing. The technique includes generating a first set of node embeddings for a first set of nodes included in a target graph that represents the target drawing. The technique also includes receiving a second set of node embeddings for a second set of nodes included in a source graph that represents a source drawing, where one or more nodes included in the second set of nodes are associated with one or more source dimensions included in the source drawing. The technique further includes generating a set of mappings between the first and second sets of nodes based on similarities between the first set of node embeddings and the second set of node embeddings, and automatically placing the one or more source dimensions within the target drawing based on the set of mappings.
    Type: Grant
    Filed: July 13, 2021
    Date of Patent: April 9, 2024
    Assignee: AUTODESK, INC.
    Inventors: Thomas Ryan Davies, Alexander Ray Carlson, Aditya Sanghi, Tarkeshwar Kumar Shah, Divya Sivasankaran, Anup Bhalchandra Walvekar, Ran Zhang
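    Illustrative sketch: the abstract above matches target-graph nodes to source-graph nodes by embedding similarity and then carries the source dimensions across. The snippet below is a minimal sketch of that matching step only, not the patented implementation; it assumes the node embeddings already exist as NumPy arrays, and the function name, the cosine similarity, and the greedy argmax matching are all illustrative choices.
      import numpy as np

      def transfer_dimensions(target_emb, source_emb, source_dims):
          """target_emb: (T, d) embeddings of the target-graph nodes.
          source_emb: (S, d) embeddings of the source-graph nodes.
          source_dims: dict mapping source node index -> dimension annotation.
          Returns a dict mapping target node index -> dimension annotation."""
          # Normalize embeddings and compute pairwise cosine similarities.
          t = target_emb / np.linalg.norm(target_emb, axis=1, keepdims=True)
          s = source_emb / np.linalg.norm(source_emb, axis=1, keepdims=True)
          sim = s @ t.T                     # (S, T) similarity matrix
          best_target = sim.argmax(axis=1)  # most similar target node per source node
          # Place each source dimension on its matched target node.
          return {int(best_target[i]): dim for i, dim in source_dims.items()}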
  • Publication number: 20230326158
    Abstract: One embodiment of the present invention sets forth a technique for training a machine learning model to perform style transfer. The technique includes applying one or more augmentations to a first input three-dimensional (3D) shape to generate a second input 3D shape. The technique also includes generating, via a first set of neural network layers, a style code based on a first latent representation of the first input 3D shape and a second latent representation of the second input 3D shape. The technique further includes generating, via a second set of neural network layers, a first output 3D shape based on the style code and the second latent representation, and performing one or more operations on the first and second sets of neural network layers based on a first loss associated with the first output 3D shape to generate a trained machine learning model.
    Type: Application
    Filed: January 3, 2023
    Publication date: October 12, 2023
    Inventors: Hooman SHAYANI, Marco FUMERO, Aditya SANGHI
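    Illustrative sketch: a minimal, hypothetical version of the training loop described above, assuming shapes are fixed-size point clouds and using toy layer sizes. The encoder, the augmentation, and the reconstruction loss are stand-ins; only the overall pattern (a style code from a shape and its augmented copy, an output shape from the style code plus the augmented latent, parameter updates from the loss) follows the abstract.
      import torch
      import torch.nn as nn

      N_POINTS, LATENT, STYLE = 1024, 128, 16
      encoder   = nn.Sequential(nn.Linear(3 * N_POINTS, 256), nn.ReLU(), nn.Linear(256, LATENT))
      style_net = nn.Sequential(nn.Linear(2 * LATENT, 64), nn.ReLU(), nn.Linear(64, STYLE))              # "first set of layers"
      decoder   = nn.Sequential(nn.Linear(LATENT + STYLE, 256), nn.ReLU(), nn.Linear(256, 3 * N_POINTS)) # "second set of layers"
      opt = torch.optim.Adam([*style_net.parameters(), *decoder.parameters()], lr=1e-3)

      def augment(points):
          # Placeholder augmentation: random anisotropic scaling of the point cloud.
          return points * (1.0 + 0.1 * torch.randn(1, 3))

      for step in range(1000):
          shape_a = torch.randn(N_POINTS, 3)        # stand-in for a training shape
          shape_b = augment(shape_a)                # augmented copy of the shape
          z_a = encoder(shape_a.reshape(1, -1))     # first latent representation
          z_b = encoder(shape_b.reshape(1, -1))     # second latent representation
          style = style_net(torch.cat([z_a, z_b], dim=-1))
          output = decoder(torch.cat([z_b, style], dim=-1)).reshape(N_POINTS, 3)
          loss = ((output - shape_a) ** 2).mean()   # loss on the output 3D shape
          opt.zero_grad()
          loss.backward()
          opt.step()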
  • Publication number: 20230326159
    Abstract: One embodiment of the present invention sets forth a technique for performing style transfer. The technique includes determining a distribution associated with a plurality of style codes for a plurality of three-dimensional (3D) shapes, where each style code included in the plurality of style codes represents a difference between a first 3D shape and a second 3D shape, and where the second 3D shape is generated by applying one or more augmentations to the first 3D shape. The technique also includes sampling from the distribution to generate an additional style code and executing a trained machine learning model based on the additional style code to generate an output 3D shape having style-based attributes associated with the additional style code and content-based attributes associated with an object. The technique further includes generating a 3D model of the object based on the output 3D shape.
    Type: Application
    Filed: January 3, 2023
    Publication date: October 12, 2023
    Inventors: Hooman SHAYANI, Marco FUMERO, Aditya SANGHI
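    Illustrative sketch: the abstract above samples new style codes from a distribution fitted over codes collected from shape/augmented-shape pairs. Below, that distribution is assumed to be a multivariate Gaussian, which is one simple choice and not necessarily the one used; the trained generator that consumes the sampled code is only referenced in a comment.
      import numpy as np

      def fit_style_distribution(style_codes):
          """style_codes: (K, S) array with one style code per shape pair."""
          return style_codes.mean(axis=0), np.cov(style_codes, rowvar=False)

      def sample_style_code(mean, cov, rng=None):
          # Draw a new style code that need not match any observed pair.
          rng = rng or np.random.default_rng()
          return rng.multivariate_normal(mean, cov)

      # Usage (hypothetical names): pass the sampled code and a content latent to the
      # trained model, e.g. output_shape = generator(content_latent, sample_style_code(mean, cov)).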
  • Publication number: 20230326157
    Abstract: One embodiment of the present invention sets forth a technique for performing style transfer. The technique includes generating an input shape representation that includes a plurality of points near a surface of an input three-dimensional (3D) shape, where the input 3D shape includes content-based attributes associated with an object. The technique also includes determining a style code based on a difference between a first latent representation of a first 3D shape and a second latent representation of a second 3D shape, where the second 3D shape is generated by applying one or more augmentations to the first 3D shape. The technique further includes generating, based on the input shape representation and style code, an output 3D shape having the content-based attributes of the input 3D shape and style-based attributes associated with the style code, and generating a 3D model of the object based on the output 3D shape.
    Type: Application
    Filed: January 3, 2023
    Publication date: October 12, 2023
    Inventors: Hooman SHAYANI, Marco FUMERO, Aditya SANGHI
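    Illustrative sketch: a minimal version of the inference path described above. The point-jittering scheme for sampling near-surface points, the subtraction used to form the style code, and the generator call signature are assumptions; only the data flow mirrors the abstract.
      import numpy as np

      def points_near_surface(surface_points, sigma=0.01, rng=None):
          # Jitter surface samples to obtain points near (not on) the surface.
          rng = rng or np.random.default_rng()
          return surface_points + sigma * rng.standard_normal(surface_points.shape)

      def stylize(content_points, z_style_shape, z_style_shape_aug, generator):
          """content_points: (N, 3) samples of the input 3D shape.
          z_style_shape / z_style_shape_aug: latents of a style shape and its augmented copy.
          generator: trained network mapping (shape representation, style code) -> 3D shape."""
          style_code = z_style_shape - z_style_shape_aug    # difference of the two latents
          shape_repr = points_near_surface(content_points)  # input shape representation
          return generator(shape_repr, style_code)          # output 3D shape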
  • Publication number: 20220318947
    Abstract: One embodiment of the present invention sets forth a technique for adding dimensions to a target drawing. The technique includes generating a first set of node embeddings for a first set of nodes included in a target graph that represents the target drawing. The technique also includes receiving a second set of node embeddings for a second set of nodes included in a source graph that represents a source drawing, where one or more nodes included in the second set of nodes are associated with one or more source dimensions included in the source drawing. The technique further includes generating a set of mappings between the first and second sets of nodes based on similarities between the first set of node embeddings and the second set of node embeddings, and automatically placing the one or more source dimensions within the target drawing based on the set of mappings.
    Type: Application
    Filed: July 13, 2021
    Publication date: October 6, 2022
    Inventors: Thomas Ryan DAVIES, Alexander Ray CARLSON, Aditya SANGHI, Tarkeshwar Kumar SHAH, Divya SIVASANKARAN, Anup Bhalchandra WALVEKAR, Ran ZHANG
  • Publication number: 20220318636
    Abstract: In various embodiments, a training application trains machine learning models to perform tasks associated with 3D CAD objects that are represented using B-reps. In operation, the training application computes a preliminary result via a machine learning model based on a representation of a 3D CAD object that includes a graph and multiple 2D UV-grids. Based on the preliminary result, the training application performs one or more operations to determine that the machine learning model has not been trained to perform a first task. The training application updates at least one parameter of a graph neural network included in the machine learning model based on the preliminary result to generate a modified machine learning model. The training application performs one or more operations to determine that the modified machine learning model has been trained to perform the first task.
    Type: Application
    Filed: June 15, 2021
    Publication date: October 6, 2022
    Inventors: Pradeep Kumar JAYARAMAN, Thomas Ryan DAVIES, Joseph George LAMBOURNE, Nigel Jed Wesley MORRIS, Aditya SANGHI, Hooman SHAYANI
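    Illustrative sketch: a toy version of the training pattern above for a UV-net style input (per-face 2D UV-grids plus a face-adjacency graph). The layer sizes, the single linear message-passing step, and the loss-threshold test for "has been trained" are placeholders, not Autodesk's implementation.
      import torch
      import torch.nn as nn

      class TinyUVNet(nn.Module):
          def __init__(self, grid_feats=7 * 10 * 10, hidden=64, classes=5):
              super().__init__()
              self.face_encoder = nn.Linear(grid_feats, hidden)  # encodes each 2D UV-grid
              self.gnn = nn.Linear(hidden, hidden)               # one graph message-passing step
              self.head = nn.Linear(hidden, classes)

          def forward(self, uv_grids, adjacency):
              h = torch.relu(self.face_encoder(uv_grids.flatten(1)))  # (faces, hidden)
              h = torch.relu(self.gnn(adjacency @ h))                 # aggregate over the graph
              return self.head(h.mean(dim=0, keepdim=True))           # per-object result

      model = TinyUVNet()
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)
      uv_grids = torch.randn(12, 7, 10, 10)   # 12 faces, 7 channels, 10x10 UV samples
      adjacency = torch.eye(12)               # stand-in face-adjacency matrix
      label = torch.tensor([2])               # stand-in task label

      for step in range(500):
          preliminary = model(uv_grids, adjacency)                # preliminary result
          loss = nn.functional.cross_entropy(preliminary, label)
          if loss.item() < 0.05:                                  # "has been trained" check
              break
          opt.zero_grad()
          loss.backward()                                         # update model parameters,
          opt.step()                                              # including the graph layer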
  • Publication number: 20220318637
    Abstract: In various embodiments, an inference application performs tasks associated with 3D CAD objects that are represented using B-reps. A UV-net representation of a 3D CAD object that is represented using a B-rep includes a set of 2D UV-grids and a graph. In operation, the inference application maps the set of 2D UV-grids to a set of node feature vectors via a trained neural network. Based on the node feature vectors and the graph, the inference application computes a final result via a trained graph neural network. Advantageously, the UV-net representation of the 3D CAD object enables the trained neural network and the trained graph neural network to efficiently process the 3D CAD object.
    Type: Application
    Filed: June 15, 2021
    Publication date: October 6, 2022
    Inventors: Pradeep Kumar JAYARAMAN, Thomas Ryan DAVIES, Joseph George LAMBOURNE, Nigel Jed Wesley MORRIS, Aditya SANGHI, Hooman SHAYANI
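    Illustrative sketch: the inference path described above, with untrained stand-in networks and toy sizes. A per-face network maps each 2D UV-grid to a node feature vector, and a graph network combines those vectors using the face-adjacency graph to produce the final result.
      import torch
      import torch.nn as nn

      face_net  = nn.Linear(7 * 10 * 10, 64)   # stand-in for the trained per-face network
      graph_net = nn.Linear(64, 64)            # stand-in for the trained graph neural network
      head      = nn.Linear(64, 5)

      def infer(uv_grids, adjacency):
          """uv_grids: (faces, 7, 10, 10); adjacency: (faces, faces) graph matrix."""
          node_features = torch.relu(face_net(uv_grids.flatten(1)))       # node feature vectors
          node_features = torch.relu(graph_net(adjacency @ node_features))
          return head(node_features.mean(dim=0))                          # final result

      with torch.no_grad():
          result = infer(torch.randn(12, 7, 10, 10), torch.eye(12))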
  • Publication number: 20220318466
    Abstract: In various embodiments, a parameter domain graph application generates UV-net representations of 3D CAD objects for machine learning models. In operation, the parameter domain graph application generates a graph based on a B-rep of a 3D CAD object. The parameter domain graph application discretizes a parameter domain of a parametric surface associated with the B-rep into a 2D grid. The parameter domain graph application computes at least one feature at a grid point included in the 2D grid based on the parametric surface to generate a 2D UV-grid. Based on the graph and the 2D UV-grid, the parameter domain graph application generates a UV-net representation of the 3D CAD object. Advantageously, generating UV-net representations of 3D CAD objects that are represented using B-reps enables the 3D CAD objects to be processed efficiently using neural networks.
    Type: Application
    Filed: June 15, 2021
    Publication date: October 6, 2022
    Inventors: Pradeep Kumar JAYARAMAN, Thomas Ryan DAVIES, Joseph George LAMBOURNE, Nigel Jed Wesley MORRIS, Aditya SANGHI, Hooman SHAYANI
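    Illustrative sketch: the UV-grid construction described above, applied to a stand-in parametric surface. The grid resolution, the choice of position plus finite-difference normal as the per-grid-point features, and the cylinder example are illustrative; a real B-rep face would supply the surface function and its trimmed parameter domain.
      import numpy as np

      def uv_grid(surface, u_range=(0.0, 1.0), v_range=(0.0, 1.0), n=10, eps=1e-4):
          """surface: callable (u, v) -> 3D point. Returns an (n, n, 6) UV-grid."""
          us, vs = np.linspace(*u_range, n), np.linspace(*v_range, n)
          grid = np.zeros((n, n, 6))
          for i, u in enumerate(us):
              for j, v in enumerate(vs):
                  p = np.asarray(surface(u, v))
                  du = np.asarray(surface(u + eps, v)) - p   # finite-difference tangents
                  dv = np.asarray(surface(u, v + eps)) - p
                  normal = np.cross(du, dv)
                  normal /= np.linalg.norm(normal) + 1e-12
                  grid[i, j, :3] = p                         # position feature
                  grid[i, j, 3:] = normal                    # surface-normal feature
          return grid

      # Example: a cylindrical patch as the parametric surface.
      cylinder = lambda u, v: (np.cos(2 * np.pi * u), np.sin(2 * np.pi * u), v)
      print(uv_grid(cylinder).shape)   # (10, 10, 6)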
  • Publication number: 20220156415
    Abstract: In various embodiments, a style comparison metric application generates a style comparison metric for pairs of different three-dimensional (3D) computer-aided design (CAD) objects. In operation, the style comparison metric application executes a trained neural network any number of times to map 3D CAD objects to feature maps. Based on the feature maps, the style comparison metric application computes style signals. The style comparison metric application determines values for weights based on the style signals. The style comparison metric application generates the style comparison metric based on the weights and a parameterized style comparison metric.
    Type: Application
    Filed: November 10, 2021
    Publication date: May 19, 2022
    Inventors: Peter MELTZER, Amir Hosein KHAS AHMADI, Pradeep Kumar JAYARAMAN, Joseph George LAMBOURNE, Aditya SANGHI, Hooman SHAYANI
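    Illustrative sketch: the abstract above builds a parameterized style comparison metric from style signals and weights. Below, the style signal of a feature map is assumed to be its Gram matrix (as in image style transfer), the metric is a weighted sum of per-layer Gram distances, and the variance-based weight rule is only one possible scheme, not the patented one.
      import numpy as np

      def style_signal(feature_map):
          """feature_map: (channels, n_points) activations from one layer of a trained network."""
          f = feature_map.reshape(feature_map.shape[0], -1)
          return f @ f.T / f.shape[1]                     # Gram-matrix style signal

      def style_metric(feature_maps_a, feature_maps_b, weights):
          # Parameterized metric: weighted sum of per-layer style-signal distances.
          dists = [np.linalg.norm(style_signal(a) - style_signal(b))
                   for a, b in zip(feature_maps_a, feature_maps_b)]
          return float(np.dot(weights, dists))

      def choose_weights(per_layer_signal_variances):
          # Example weight rule: emphasize layers whose style signals vary the most.
          w = np.asarray(per_layer_signal_variances, dtype=float)
          return w / w.sum()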
  • Publication number: 20220156420
    Abstract: In various embodiments, a style comparison application generates visualization(s) of geometric style gradient(s). The style comparison application generates a first set of style signals based on a first 3D CAD object and generates a second set of style signals based on a second 3D CAD object. Based on the first and second sets of style signals, the style comparison application computes a different partial derivative of a style comparison metric for each position included in a set of positions associated with the first 3D CAD object to generate a geometric style gradient. The style comparison application generates a graphical element based on at least one of the direction or the magnitude of a vector in the geometric style gradient and positions the graphical element relative to a graphical representation of the first 3D CAD object within a graphical user interface to generate a visualization of the geometric style gradient.
    Type: Application
    Filed: November 10, 2021
    Publication date: May 19, 2022
    Inventors: Peter MELTZER, Amir Hosein KHAS AHMADI, Pradeep Kumar JAYARAMAN, Joseph George LAMBOURNE, Aditya SANGHI, Hooman SHAYANI
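    Illustrative sketch: the per-position gradient described above, computed here with automatic differentiation. The feature network and the Gram-matrix style signal are stand-ins; the point is that differentiating a style comparison metric with respect to the first object's point positions yields one vector per position, whose direction and magnitude can drive an arrow glyph in a viewer.
      import torch
      import torch.nn as nn

      feature_net = nn.Linear(3, 32)                       # stand-in for the trained network

      def style_signal(points):
          f = torch.relu(feature_net(points))              # (n_points, channels)
          return f.T @ f / f.shape[0]                      # Gram-matrix style signal

      points_a = torch.randn(256, 3, requires_grad=True)   # positions on the first 3D CAD object
      points_b = torch.randn(256, 3)                       # positions on the second 3D CAD object

      metric = torch.norm(style_signal(points_a) - style_signal(points_b))
      metric.backward()
      style_gradient = points_a.grad                       # (256, 3): one vector per position

      # Each row gives the direction and magnitude of a graphical element (e.g., an
      # arrow) to place at the corresponding position of the first object.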
  • Publication number: 20220156416
    Abstract: In various embodiments, a style comparison application compares geometric styles of different three-dimensional (3D) computer-aided design (CAD) objects. In operation, the style comparison application executes a trained neural network one or more times to map 3D CAD objects to feature map sets. The style comparison application computes a first set of style signals based on a first feature set included in the feature map sets. The style comparison application computes a second set of style signals based on a second feature set included in the feature map sets. Based on the first set of style signals and the second set of style signals, the style comparison application determines a value for a style comparison metric. The value for the style comparison metric quantifies a similarity or a dissimilarity in geometric style between a first 3D CAD object and a second 3D CAD object.
    Type: Application
    Filed: November 10, 2021
    Publication date: May 19, 2022
    Inventors: Peter MELTZER, Amir Hosein KHAS AHMADI, Pradeep Kumar JAYARAMAN, Joseph George LAMBOURNE, Aditya SANGHI, Hooman SHAYANI
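    Illustrative sketch: a compact end-to-end comparison in the spirit of the abstract above. Per-layer Gram matrices again stand in for the style signals, and the metric here reduces them to a single cosine-similarity score in [-1, 1]; the actual metric is not specified by the abstract.
      import numpy as np

      def gram(features):
          """features: (channels, n_points) activations from one layer of a trained network."""
          return features @ features.T / features.shape[1]

      def style_similarity(feature_maps_a, feature_maps_b):
          sims = []
          for fa, fb in zip(feature_maps_a, feature_maps_b):
              ga, gb = gram(fa).ravel(), gram(fb).ravel()
              sims.append(ga @ gb / (np.linalg.norm(ga) * np.linalg.norm(gb) + 1e-12))
          return float(np.mean(sims))   # higher value = more similar geometric style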