Patents by Inventor Radomir Mech
Radomir Mech has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12277652
Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating three-dimensional meshes representing two-dimensional images for editing the two-dimensional images. The disclosed system utilizes a first neural network to determine density values of pixels of a two-dimensional image based on estimated disparity. The disclosed system samples points in the two-dimensional image according to the density values and generates a tessellation based on the sampled points. The disclosed system utilizes a second neural network to estimate camera parameters and modify the three-dimensional mesh based on the estimated camera parameters of the pixels of the two-dimensional image. In one or more additional embodiments, the disclosed system generates a three-dimensional mesh to modify a two-dimensional image according to a displacement input.
Type: Grant
Filed: November 15, 2022
Date of Patent: April 15, 2025
Assignee: Adobe Inc.
Inventors: Radomir Mech, Nathan Carr, Matheus Gadelha
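The abstract above outlines a density-guided sampling and tessellation step. The sketch below illustrates that idea in a minimal form: points are sampled with probability proportional to a per-pixel density map (here a stand-in derived from a random disparity image, not the patented network's output) and then triangulated with a Delaunay tessellation.

```python
# Minimal sketch (not the patented implementation): sample 2D points with a
# probability proportional to a per-pixel density map, then tessellate them.
# The density map here stands in for the first network's disparity-based output.
import numpy as np
from scipy.spatial import Delaunay

def sample_points_by_density(density, n_points=2000, rng=None):
    """Draw pixel locations with probability proportional to density."""
    rng = rng or np.random.default_rng(0)
    h, w = density.shape
    probs = density.ravel() / density.sum()
    idx = rng.choice(h * w, size=n_points, replace=False, p=probs)
    ys, xs = np.unravel_index(idx, (h, w))
    return np.stack([xs, ys], axis=1).astype(np.float32)

def tessellate(points):
    """Build a triangle mesh (2D tessellation) over the sampled points."""
    return Delaunay(points)  # .simplices gives the triangle faces

# Example: denser sampling where a stand-in disparity map changes quickly.
disparity = np.random.rand(240, 320)             # placeholder for network output
density = np.hypot(*np.gradient(disparity)) + 1e-6
pts = sample_points_by_density(density)
mesh = tessellate(pts)
print(pts.shape, mesh.simplices.shape)
```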
-
Publication number: 20250061650
Abstract: An image processing system is configured to receive a three-dimensional (3D) model and a text prompt that describes a scene corresponding to the 3D model. The system may then generate a depth map of the 3D model and generate an output image based on the depth map and the text prompt. The output image may depict a view of the scene that includes textures described by the text prompt. The output image may be generated using an image generation model.
Type: Application
Filed: August 17, 2023
Publication date: February 20, 2025
Inventors: Matheus Gadelha, Tomasz Opasinski, Kevin James Blackburn-Matzen, Mathieu Kevin Pascal Gaillard, Giorgio Gori, Radomir Mech
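As a rough illustration of the described pipeline, the hedged sketch below splats model vertices into a crude orthographic depth map and then hands it, together with a text prompt, to an image generator; `render_depth` is a toy renderer and `depth_conditioned_generator` is a hypothetical stand-in for the model the application refers to, not an actual API.

```python
# Minimal sketch of the described pipeline, not the filed implementation.
# A crude depth map is splatted from mesh vertices under a simple camera, then
# handed to an image generator conditioned on depth and a text prompt.
import numpy as np

def render_depth(vertices, width=512, height=512):
    """Orthographic point-splat depth map: nearer vertices overwrite farther ones."""
    depth = np.full((height, width), np.inf)
    xy = vertices[:, :2]
    # Normalize x/y into pixel coordinates.
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    px = ((xy - lo) / (hi - lo + 1e-9) * [width - 1, height - 1]).astype(int)
    for (x, y), z in zip(px, vertices[:, 2]):
        depth[y, x] = min(depth[y, x], z)
    depth[np.isinf(depth)] = depth[~np.isinf(depth)].max()
    return depth

vertices = np.random.rand(5000, 3)                # stand-in for a loaded 3D model
depth_map = render_depth(vertices)
prompt = "a weathered bronze statue in warm evening light"
# output = depth_conditioned_generator(depth_map, prompt)  # hypothetical model call
```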
-
Publication number: 20250061660
Abstract: Systems and methods for extracting 3D shapes from unstructured and unannotated datasets are described. Embodiments are configured to obtain a first image and a second image, where the first image depicts an object and the second image includes a corresponding object of a same object category as the object. Embodiments are further configured to generate, using an image encoder, image features for portions of the first image and for portions of the second image; identify a keypoint correspondence between a first keypoint in the first image and a second keypoint in the second image by clustering the image features corresponding to the portions of the first image and the portions of the second image; and generate, using an occupancy network, a 3D model of the object based on the keypoint correspondence.
Type: Application
Filed: August 18, 2023
Publication date: February 20, 2025
Inventors: Ta-Ying Cheng, Matheus Gadelha, Soren Pirk, Radomir Mech, Thibault Groueix
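A minimal sketch of the clustering-based correspondence step follows; it assumes patch features and keypoint positions have already been extracted for both images (the image encoder and occupancy network are not reproduced), and it pairs, within each joint cluster, the patch from each image closest to the cluster centre.

```python
# Minimal sketch (assumes pre-computed patch features per image) of pairing
# keypoints across two images of the same category by joint feature clustering.
import numpy as np
from sklearn.cluster import KMeans

def correspondences_by_clustering(feat_a, feat_b, pos_a, pos_b, k=32):
    """Cluster features from both images jointly, then match one patch per
    image within each shared cluster (the one nearest to the cluster centre)."""
    feats = np.concatenate([feat_a, feat_b], axis=0)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)
    la, lb = labels[: len(feat_a)], labels[len(feat_a):]
    pairs = []
    for c in range(k):
        ia, ib = np.where(la == c)[0], np.where(lb == c)[0]
        if len(ia) and len(ib):
            centre = feats[labels == c].mean(axis=0)
            best_a = ia[np.argmin(np.linalg.norm(feat_a[ia] - centre, axis=1))]
            best_b = ib[np.argmin(np.linalg.norm(feat_b[ib] - centre, axis=1))]
            pairs.append((pos_a[best_a], pos_b[best_b]))
    return pairs  # list of (keypoint in image A, keypoint in image B)
```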
-
Patent number: 12223661
Abstract: Systems and methods provide editing operations in a smart editing system that may generate a focal point within a mask of an object for each frame of a video segment and perform editing effects on the frames of the video segment to quickly provide users with natural video editing effects. An eye-gaze network may produce a hotspot map of predicted focal points in a video frame. These predicted focal points may then be used by a gaze-to-mask network to determine objects in the image and generate an object mask for each of the detected objects. This process may then be repeated to effectively track the trajectory of objects and object focal points in videos. Based on the determined trajectory of an object in a video clip and editing parameters, the editing engine may produce editing effects relative to an object for the video clip.
Type: Grant
Filed: May 3, 2022
Date of Patent: February 11, 2025
Assignee: ADOBE INC.
Inventors: Lu Zhang, Jianming Zhang, Zhe Lin, Radomir Mech
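The sketch below illustrates, in simplified form, the per-frame focal-point and trajectory idea: it takes a per-frame hotspot map and object mask (stand-ins here), picks the hotspot peak inside the mask, and smooths the resulting trajectory. The eye-gaze and gaze-to-mask networks themselves are not reproduced.

```python
# Minimal sketch (not the patented networks): pick the peak of a per-frame
# "hotspot" map inside the object mask, then smooth the per-frame focal points
# so editing effects can follow the object.
import numpy as np

def focal_point(hotspot, mask):
    """Peak of the hotspot map restricted to the object mask, returned as (y, x)."""
    masked = np.where(mask > 0, hotspot, -np.inf)
    return np.unravel_index(np.argmax(masked), masked.shape)

def smooth_trajectory(points, window=5):
    """Moving-average smoothing of per-frame focal points."""
    pts = np.asarray(points, dtype=float)
    kernel = np.ones(window) / window
    return np.stack([np.convolve(pts[:, i], kernel, mode="same") for i in range(2)], axis=1)

# hotspots, masks: per-frame (H, W) arrays; random stand-ins here.
hotspots = [np.random.rand(90, 160) for _ in range(30)]
masks = [np.ones((90, 160)) for _ in range(30)]
trajectory = smooth_trajectory([focal_point(h, m) for h, m in zip(hotspots, masks)])
```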
-
Patent number: 12198231
Abstract: The present disclosure relates to using an end-to-end differentiable pipeline for optimizing parameters of a base procedural material to generate a procedural material corresponding to a target physical material. For example, the disclosed systems can receive a digital image of a target physical material. In response, the disclosed systems can retrieve a differentiable procedural material for use as a base procedural material. The disclosed systems can compare a digital image of the base procedural material with the digital image of the target physical material using a loss function, such as a style loss function that compares visual appearance. Based on the determined loss, the disclosed systems can modify the parameters of the base procedural material to determine procedural material parameters for the target physical material. The disclosed systems can generate a procedural material corresponding to the base procedural material using the determined procedural material parameters.
Type: Grant
Filed: June 26, 2023
Date of Patent: January 14, 2025
Assignee: Adobe Inc.
Inventors: Milos Hasan, Liang Shi, Tamy Boubekeur, Kalyan Sunkavalli, Radomir Mech
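The optimization loop described above can be sketched as follows; `procedural_material` is a hypothetical differentiable toy generator and the style loss is a plain Gram-matrix comparison of raw pixels, so this is an illustration of the approach rather than the disclosed pipeline.

```python
# Minimal sketch of the optimization loop: gradient descent on the parameters of
# a differentiable procedural generator against a style (Gram-matrix) loss.
import torch

def gram(x):
    c, h, w = x.shape
    f = x.reshape(c, h * w)
    return f @ f.t() / (c * h * w)

def style_loss(img_a, img_b):
    return torch.nn.functional.mse_loss(gram(img_a), gram(img_b))

def procedural_material(params):        # hypothetical differentiable toy generator
    base, noise_scale = params
    grid = torch.linspace(0, 6.28, 256)
    pattern = torch.sin(grid[None, :] * noise_scale) * torch.cos(grid[:, None] * noise_scale)
    return (base + pattern).clamp(0, 1).expand(3, 256, 256)

target = torch.rand(3, 256, 256)        # stand-in for a photo of the real material
params = torch.tensor([0.5, 3.0], requires_grad=True)
opt = torch.optim.Adam([params], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = style_loss(procedural_material(params), target)
    loss.backward()
    opt.step()
```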
-
Publication number: 20240161366
Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating three-dimensional meshes representing two-dimensional images for editing the two-dimensional images. The disclosed system utilizes a first neural network to determine density values of pixels of a two-dimensional image based on estimated disparity. The disclosed system samples points in the two-dimensional image according to the density values and generates a tessellation based on the sampled points. The disclosed system utilizes a second neural network to estimate camera parameters and modify the three-dimensional mesh based on the estimated camera parameters of the pixels of the two-dimensional image. In one or more additional embodiments, the disclosed system generates a three-dimensional mesh to modify a two-dimensional image according to a displacement input.
Type: Application
Filed: November 15, 2022
Publication date: May 16, 2024
Inventors: Radomir Mech, Nathan Carr, Matheus Gadelha
-
Publication number: 20240161406
Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating three-dimensional meshes representing two-dimensional images for editing the two-dimensional images. The disclosed system utilizes a first neural network to determine density values of pixels of a two-dimensional image based on estimated disparity. The disclosed system samples points in the two-dimensional image according to the density values and generates a tessellation based on the sampled points. The disclosed system utilizes a second neural network to estimate camera parameters and modify the three-dimensional mesh based on the estimated camera parameters of the pixels of the two-dimensional image. In one or more additional embodiments, the disclosed system generates a three-dimensional mesh to modify a two-dimensional image according to a displacement input.
Type: Application
Filed: November 15, 2022
Publication date: May 16, 2024
Inventors: Radomir Mech, Nathan Carr, Matheus Gadelha
-
Publication number: 20240161320
Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating three-dimensional meshes representing two-dimensional images for editing the two-dimensional images. The disclosed system utilizes a first neural network to determine density values of pixels of a two-dimensional image based on estimated disparity. The disclosed system samples points in the two-dimensional image according to the density values and generates a tessellation based on the sampled points. The disclosed system utilizes a second neural network to estimate camera parameters and modify the three-dimensional mesh based on the estimated camera parameters of the pixels of the two-dimensional image. In one or more additional embodiments, the disclosed system generates a three-dimensional mesh to modify a two-dimensional image according to a displacement input.
Type: Application
Filed: November 15, 2022
Publication date: May 16, 2024
Inventors: Matheus Gadelha, Radomir Mech
-
Publication number: 20240161405
Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating three-dimensional meshes representing two-dimensional images for editing the two-dimensional images. The disclosed system utilizes a first neural network to determine density values of pixels of a two-dimensional image based on estimated disparity. The disclosed system samples points in the two-dimensional image according to the density values and generates a tessellation based on the sampled points. The disclosed system utilizes a second neural network to estimate camera parameters and modify the three-dimensional mesh based on the estimated camera parameters of the pixels of the two-dimensional image. In one or more additional embodiments, the disclosed system generates a three-dimensional mesh to modify a two-dimensional image according to a displacement input.
Type: Application
Filed: November 15, 2022
Publication date: May 16, 2024
Inventors: Radomir Mech, Nathan Carr, Matheus Gadelha
-
Publication number: 20240144586
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify two-dimensional images via scene-based editing using three-dimensional representations of the two-dimensional images. For instance, in one or more embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and modify shadows in the two-dimensional images according to various shadow maps. Additionally, the disclosed systems utilize three-dimensional representations of two-dimensional images to modify humans in the two-dimensional images. The disclosed systems also utilize three-dimensional representations of two-dimensional images to provide scene scale estimation via scale fields of the two-dimensional images. In some embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and visualize 3D planar surfaces for modifying objects in two-dimensional images.
Type: Application
Filed: April 20, 2023
Publication date: May 2, 2024
Inventors: Yannick Hold-Geoffroy, Vojtech Krs, Radomir Mech, Nathan Carr, Matheus Gadelha
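As one concrete illustration of the shadow-map aspect mentioned above, the sketch below builds an orthographic depth map of a scene point cloud as seen from the light and uses it to test whether a query point is shadowed; it is a generic shadow-mapping toy, not the disclosed system.

```python
# Minimal shadow-map sketch over a point-cloud stand-in for the 3D representation.
import numpy as np

def light_space_depth(points, light_dir, res=128):
    """Depth map of the scene as seen along `light_dir` (orthographic 'shadow map')."""
    light_dir = light_dir / np.linalg.norm(light_dir)
    up = np.array([0.0, 1.0, 0.0])
    right = np.cross(up, light_dir); right /= np.linalg.norm(right)
    up = np.cross(light_dir, right)
    uv = points @ np.stack([right, up], axis=1)          # (N, 2) light-plane coords
    depth_axis = points @ light_dir                      # distance along the light
    lo, hi = uv.min(axis=0), uv.max(axis=0)
    px = ((uv - lo) / (hi - lo + 1e-9) * (res - 1)).astype(int)
    smap = np.full((res, res), np.inf)
    for (u, v), d in zip(px, depth_axis):
        smap[v, u] = min(smap[v, u], d)
    return smap, (right, up, light_dir, lo, hi)

def in_shadow(point, smap, basis, res=128, bias=1e-3):
    """A point is shadowed if something sits closer to the light at its texel."""
    right, up, light_dir, lo, hi = basis
    uv = np.array([point @ right, point @ up])
    u, v = np.clip((((uv - lo) / (hi - lo + 1e-9)) * (res - 1)).astype(int), 0, res - 1)
    return point @ light_dir > smap[v, u] + bias

scene = np.random.rand(2000, 3)                    # stand-in scene geometry
smap, basis = light_space_depth(scene, np.array([0.3, -1.0, 0.2]))
print(in_shadow(np.array([0.5, 0.1, 0.5]), smap, basis))
```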
-
Publication number: 20240135612
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify two-dimensional images via scene-based editing using three-dimensional representations of the two-dimensional images. For instance, in one or more embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and modify shadows in the two-dimensional images according to various shadow maps. Additionally, the disclosed systems utilize three-dimensional representations of two-dimensional images to modify humans in the two-dimensional images. The disclosed systems also utilize three-dimensional representations of two-dimensional images to provide scene scale estimation via scale fields of the two-dimensional images. In some embodiments, the disclosed systems utilize three-dimensional representations of two-dimensional images to generate and visualize 3D planar surfaces for modifying objects in two-dimensional images.
Type: Application
Filed: April 20, 2023
Publication date: April 25, 2024
Inventors: Yannick Hold-Geoffroy, Vojtech Krs, Radomir Mech, Nathan Carr, Matheus Gadelha
-
Patent number: 11900514
Abstract: Procedural model digital content editing techniques are described that overcome the limitations of conventional techniques to make procedural models available for interaction by a wide range of users without requiring specialized knowledge and do so without "breaking" the underlying model. In the techniques described herein, an inverse procedural model system receives a user input that specifies an edit to digital content generated by a procedural model. The system generates candidate input parameters and selects, from these candidates, input parameters that cause the digital content generated by the procedural model to incorporate the edit.
Type: Grant
Filed: July 18, 2022
Date of Patent: February 13, 2024
Assignee: Adobe Inc.
Inventors: Vojtech Krs, Radomir Mech, Mathieu Gaillard, Giorgio Gori
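A minimal sketch of the parameter-selection idea follows: candidate parameter sets are scored by how closely the regenerated content matches the edited target, and the best candidate is kept. The toy 1D `procedural_model` and the candidate grid are illustrative stand-ins, not the system described in the patent.

```python
# Minimal sketch: pick the candidate procedural-model inputs that best realize a user edit.
import numpy as np

def select_parameters(procedural_model, candidates, edited_target):
    """Return the candidate parameters whose regenerated output best matches the edit."""
    def score(params):
        return np.mean((procedural_model(params) - edited_target) ** 2)
    return min(candidates, key=score)

# Example with a toy 1D "procedural model".
toy_model = lambda p: np.sin(np.linspace(0, 10, 200) * p["freq"]) * p["amp"]
candidates = [{"freq": f, "amp": a} for f in np.linspace(0.5, 3, 10) for a in (0.5, 1.0, 2.0)]
edited_target = toy_model({"freq": 2.1, "amp": 1.0})   # pretend this is the user's edit
best = select_parameters(toy_model, candidates, edited_target)
print(best)
```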
-
Patent number: 11900558
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that tune a 3D-object-reconstruction-machine-learning model to reconstruct 3D models of objects from real images using real images as training data. For instance, the disclosed systems can determine a depth map for a real two-dimensional (2D) image and then reconstruct a 3D model of a digital object in the real 2D image based on the depth map. By using a depth map for a real 2D image, the disclosed systems can generate reconstructed 3D models that better conform to the shape of digital objects in real images than existing systems and use such reconstructed 3D models to generate more realistic looking visual effects (e.g., shadows, relighting).
Type: Grant
Filed: November 5, 2021
Date of Patent: February 13, 2024
Assignee: Adobe Inc.
Inventors: Marissa Ramirez de Chanlatte, Radomir Mech, Matheus Gadelha, Thibault Groueix
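The first stage described above, turning a predicted depth map into 3D geometry, can be illustrated with a standard pinhole back-projection; the sketch below lifts each pixel to a camera-space point (the mesh reconstruction and model tuning stages are not shown, and the intrinsics are made up).

```python
# Minimal sketch: back-project a (predicted) depth map into camera-space 3D points.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Lift a depth map (depth per pixel) into 3D points via pinhole intrinsics."""
    h, w = depth.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (xs - cx) * z / fx
    y = (ys - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

depth = np.random.rand(480, 640) * 5.0            # stand-in for a predicted depth map
points = depth_to_points(depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(points.shape)                               # (307200, 3)
```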
-
Patent number: 11875446
Abstract: Aspects of a system and method for procedural media generation include generating a sequence of operator types using a node generation network; generating a sequence of operator parameters for each operator type of the sequence of operator types using a parameter generation network; generating a sequence of directed edges based on the sequence of operator types using an edge generation network; combining the sequence of operator types, the sequence of operator parameters, and the sequence of directed edges to obtain a procedural media generator, wherein each node of the procedural media generator comprises an operator that includes an operator type from the sequence of operator types, a corresponding sequence of operator parameters, and an input connection or an output connection from the sequence of directed edges that connects the node to another node of the procedural media generator; and generating a media asset using the procedural media generator.
Type: Grant
Filed: May 6, 2022
Date of Patent: January 16, 2024
Assignee: ADOBE, INC.
Inventors: Paul Augusto Guerrero, Milos Hasan, Kalyan K. Sunkavalli, Radomir Mech, Tamy Boubekeur, Niloy Jyoti Mitra
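The sketch below shows one plausible way the three generated sequences could be combined into a node graph, using simple Python data classes; the node, parameter, and edge generation networks themselves, and the operator evaluation, are not reproduced.

```python
# Minimal sketch of assembling operator types, parameters, and directed edges
# into a procedural-generator graph; the generating networks are not shown.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class OperatorNode:
    op_type: str                       # from the node generation network
    params: List[float]                # from the parameter generation network
    inputs: List[int] = field(default_factory=list)   # filled from directed edges

def build_generator(op_types: List[str],
                    op_params: List[List[float]],
                    edges: List[Tuple[int, int]]) -> List[OperatorNode]:
    """Combine the three sequences into a graph of connected operator nodes."""
    nodes = [OperatorNode(t, p) for t, p in zip(op_types, op_params)]
    for src, dst in edges:             # edge (src -> dst): node src feeds node dst
        nodes[dst].inputs.append(src)
    return nodes

graph = build_generator(
    op_types=["noise", "blur", "blend"],
    op_params=[[0.8], [2.0], [0.5]],
    edges=[(0, 1), (0, 2), (1, 2)],
)
# Evaluating `graph` in topological order would yield the media asset.
```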
-
Publication number: 20230360285
Abstract: The present disclosure relates to using an end-to-end differentiable pipeline for optimizing parameters of a base procedural material to generate a procedural material corresponding to a target physical material. For example, the disclosed systems can receive a digital image of a target physical material. In response, the disclosed systems can retrieve a differentiable procedural material for use as a base procedural material. The disclosed systems can compare a digital image of the base procedural material with the digital image of the target physical material using a loss function, such as a style loss function that compares visual appearance. Based on the determined loss, the disclosed systems can modify the parameters of the base procedural material to determine procedural material parameters for the target physical material. The disclosed systems can generate a procedural material corresponding to the base procedural material using the determined procedural material parameters.
Type: Application
Filed: June 26, 2023
Publication date: November 9, 2023
Inventors: Milos Hasan, Liang Shi, Tamy Boubekeur, Kalyan Sunkavalli, Radomir Mech
-
Publication number: 20230360310
Abstract: Aspects of a system and method for procedural media generation include generating a sequence of operator types using a node generation network; generating a sequence of operator parameters for each operator type of the sequence of operator types using a parameter generation network; generating a sequence of directed edges based on the sequence of operator types using an edge generation network; combining the sequence of operator types, the sequence of operator parameters, and the sequence of directed edges to obtain a procedural media generator, wherein each node of the procedural media generator comprises an operator that includes an operator type from the sequence of operator types, a corresponding sequence of operator parameters, and an input connection or an output connection from the sequence of directed edges that connects the node to another node of the procedural media generator; and generating a media asset using the procedural media generator.
Type: Application
Filed: May 6, 2022
Publication date: November 9, 2023
Inventors: Paul Augusto Guerrero, Milos Hasan, Kalyan K. Sunkavalli, Radomir Mech, Tamy Boubekeur, Niloy Jyoti Mitra
-
Patent number: 11769279
Abstract: Generative shape creation and editing is leveraged in a digital medium environment. An object editor system represents a set of training shapes as sets of visual elements known as "handles," and converts sets of handles into signed distance field (SDF) representations. A handle processor model is then trained using the SDF representations to enable the handle processor model to generate new shapes that reflect salient visual features of the training shapes. The trained handle processor model, for instance, generates new sets of handles based on salient visual features learned from the training handle set. Thus, utilizing the described techniques, accurate characterizations of a set of shapes can be learned and used to generate new shapes. Further, generated shapes can be edited and transformed in different ways.
Type: Grant
Filed: May 11, 2021
Date of Patent: September 26, 2023
Assignee: Adobe Inc.
Inventors: Giorgio Gori, Tamy Boubekeur, Radomir Mech, Nathan Aaron Carr, Matheus Abrantes Gadelha, Duygu Ceylan Aksit
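To make the handle-to-SDF conversion concrete, the sketch below unions per-handle box signed distance fields with a pointwise minimum; the cuboid parameterization of handles and the sample grid are illustrative assumptions, and the learned handle processor model is not reproduced.

```python
# Minimal sketch: convert a set of cuboid "handles" into a signed distance field
# by taking the union (pointwise minimum) of per-handle box SDFs.
import numpy as np

def box_sdf(points, center, half_extents):
    """Signed distance from `points` to an axis-aligned box."""
    q = np.abs(points - center) - half_extents
    outside = np.linalg.norm(np.maximum(q, 0.0), axis=-1)
    inside = np.minimum(q.max(axis=-1), 0.0)
    return outside + inside

def handles_to_sdf(points, handles):
    """Union of handle SDFs: the SDF of the shape the handles describe."""
    dists = [box_sdf(points, c, h) for c, h in handles]
    return np.min(np.stack(dists, axis=0), axis=0)

# Two handles roughly describing an "L" shape, sampled on a coarse grid.
handles = [(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.2, 0.2])),
           (np.array([0.8, 0.5, 0.0]), np.array([0.2, 0.7, 0.2]))]
grid = np.stack(np.meshgrid(*[np.linspace(-1.5, 1.5, 32)] * 3), axis=-1).reshape(-1, 3)
sdf = handles_to_sdf(grid, handles)
```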
-
Patent number: 11694416
Abstract: Embodiments of the present invention are directed towards intuitive editing of three-dimensional models. In embodiments, salient geometric features associated with a three-dimensional model defining an object are identified. Thereafter, feature attributes associated with the salient geometric features are identified. A feature set including a plurality of salient geometric features related to one another is generated based on the determined feature attributes (e.g., properties, relationships, distances). An editing handle can then be generated and displayed for the feature set enabling each of the salient geometric features within the feature set to be edited in accordance with a manipulation of the editing handle. The editing handle can be displayed in association with one of the salient geometric features of the feature set.
Type: Grant
Filed: March 22, 2021
Date of Patent: July 4, 2023
Assignee: Adobe, Inc.
Inventors: Duygu Ceylan Aksit, Vladimir Kim, Siddhartha Chaudhuri, Radomir Mech, Noam Aigerman, Kevin Wampler, Jonathan Eisenmann, Giorgio Gori, Emiliano Gambaretto
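The sketch below illustrates the feature-set idea in miniature: features with near-equal attribute values are grouped into a set, and a single handle edit (here a translation) is applied to every feature in that set. The salient-feature detection and the actual handle geometry are not reproduced.

```python
# Minimal sketch: group salient features by a shared attribute, then let one
# handle manipulation move every feature in the group together.
import numpy as np

def group_features(features, attr_key="radius", tol=0.05):
    """Cluster features into sets by near-equal attribute values."""
    sets, used = [], set()
    for i, f in enumerate(features):
        if i in used:
            continue
        members = [j for j, g in enumerate(features)
                   if j not in used and abs(g[attr_key] - f[attr_key]) <= tol]
        used.update(members)
        sets.append(members)
    return sets

def apply_handle_translation(features, feature_set, delta):
    """One handle edit moves every salient feature in the set together."""
    for j in feature_set:
        features[j]["center"] = features[j]["center"] + delta

features = [{"center": np.array([0.0, 0.0, 0.0]), "radius": 0.10},
            {"center": np.array([1.0, 0.0, 0.0]), "radius": 0.11},
            {"center": np.array([0.0, 1.0, 0.0]), "radius": 0.30}]
sets = group_features(features)                    # the two similar features form one set
apply_handle_translation(features, sets[0], np.array([0.0, 0.0, 0.2]))
```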
-
Patent number: 11688109
Abstract: The present disclosure relates to using an end-to-end differentiable pipeline for optimizing parameters of a base procedural material to generate a procedural material corresponding to a target physical material. For example, the disclosed systems can receive a digital image of a target physical material. In response, the disclosed systems can retrieve a differentiable procedural material for use as a base procedural material. The disclosed systems can compare a digital image of the base procedural material with the digital image of the target physical material using a loss function, such as a style loss function that compares visual appearance. Based on the determined loss, the disclosed systems can modify the parameters of the base procedural material to determine procedural material parameters for the target physical material. The disclosed systems can generate a procedural material corresponding to the base procedural material using the determined procedural material parameters.
Type: Grant
Filed: October 28, 2021
Date of Patent: June 27, 2023
Assignee: Adobe Inc.
Inventors: Milos Hasan, Liang Shi, Tamy Boubekeur, Kalyan Sunkavalli, Radomir Mech
-
Patent number: 11663762
Abstract: Embodiments of the present invention are directed to facilitating region of interest preservation. In accordance with some embodiments of the present invention, a region of interest preservation score using adaptive margins is determined. The region of interest preservation score indicates an extent to which at least one region of interest is preserved in a candidate image crop associated with an image. A region of interest positioning score is determined that indicates an extent to which a position of the at least one region of interest is preserved in the candidate image crop associated with the image. The region of interest preservation score and/or the region of interest positioning score are used to select a set of one or more candidate image crops as image crop suggestions.
Type: Grant
Filed: October 29, 2020
Date of Patent: May 30, 2023
Assignee: Adobe Inc.
Inventors: Jianming Zhang, Zhe Lin, Radomir Mech, Xiaohui Shen
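The two scores described above can be illustrated with simple box arithmetic: the sketch below scores a candidate crop by the fraction of the ROI it preserves and by how closely the ROI's relative position is kept, then ranks crops by a weighted sum. The 0.7/0.3 weights and the absence of adaptive margins are simplifying assumptions, not the patented scoring.

```python
# Minimal sketch of ROI preservation and positioning scores for ranking crops.
import numpy as np

def preservation_score(roi, crop):
    """Fraction of the ROI box (x0, y0, x1, y1) that survives inside the crop."""
    ix0, iy0 = max(roi[0], crop[0]), max(roi[1], crop[1])
    ix1, iy1 = min(roi[2], crop[2]), min(roi[3], crop[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = (roi[2] - roi[0]) * (roi[3] - roi[1])
    return inter / area if area else 0.0

def positioning_score(roi, image_size, crop):
    """Closeness of the ROI centre's relative position in the image vs. the crop."""
    cx, cy = (roi[0] + roi[2]) / 2, (roi[1] + roi[3]) / 2
    orig = np.array([cx / image_size[0], cy / image_size[1]])
    new = np.array([(cx - crop[0]) / (crop[2] - crop[0]),
                    (cy - crop[1]) / (crop[3] - crop[1])])
    return float(1.0 - np.linalg.norm(orig - new))

def rank_crops(roi, image_size, candidates, top_k=3):
    """Rank candidate crops by a weighted sum of the two scores (weights assumed)."""
    scored = [(0.7 * preservation_score(roi, c) + 0.3 * positioning_score(roi, image_size, c), c)
              for c in candidates]
    return [c for _, c in sorted(scored, key=lambda s: s[0], reverse=True)[:top_k]]
```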