Patents by Inventor Radomir Mech
Radomir Mech has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220262011
Abstract: Systems and methods provide editing operations in a smart editing system that may generate a focal point within a mask of an object for each frame of a video segment and perform editing effects on the frames of the video segment to quickly provide users with natural video editing effects. An eye-gaze network may produce a hotspot map of predicted focal points in a video frame. These predicted focal points may then be used by a gaze-to-mask network to determine objects in the image and generate an object mask for each of the detected objects. This process may then be repeated to effectively track the trajectory of objects and object focal points in videos. Based on the determined trajectory of an object in a video clip and editing parameters, the editing engine may produce editing effects relative to an object for the video clip.
Type: Application
Filed: May 3, 2022
Publication date: August 18, 2022
Inventors: Lu Zhang, Jianming Zhang, Zhe Lin, Radomir Mech
-
Patent number: 11410038
Abstract: Various embodiments describe frame selection based on training and using a neural network. In an example, the neural network is a convolutional neural network trained with training pairs. Each training pair includes two training frames from a frame collection. The loss function relies on the estimated quality difference between the two training frames. Further, the definition of the loss function varies based on the actual quality difference between these two frames. In a further example, the neural network is trained by incorporating facial heatmaps generated from the training frames and facial quality scores of faces detected in the training frames. In addition, the training involves using a feature mean that represents an average of the features of the training frames belonging to the same frame collection. Once the neural network is trained, a frame collection is input thereto and a frame is selected based on generated quality scores.
Type: Grant
Filed: March 17, 2021
Date of Patent: August 9, 2022
Assignee: ADOBE INC.
Inventors: Zhe Lin, Xiaohui Shen, Radomir Mech, Jian Ren
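The pairwise training idea in this abstract — a loss over two frames of the same collection whose definition depends on their actual quality difference — can be illustrated with a toy margin loss. This is a hypothetical sketch, not the patented formulation; the function name and the specific margin rule are illustrative.

```python
def pairwise_quality_loss(score_a, score_b, label_gap):
    """Toy pairwise ranking loss for frame selection (illustrative only).

    score_a, score_b: predicted quality scores for two frames of the same
    collection. label_gap: the actual quality difference (positive when
    frame A is truly better). The required margin grows with the actual
    gap, loosely mirroring a loss whose definition varies with the true
    quality difference between the two frames.
    """
    margin = abs(label_gap)  # larger true gap -> larger required margin
    # hinge on the estimated score difference, oriented by the true ordering
    estimated_gap = score_a - score_b if label_gap >= 0 else score_b - score_a
    return max(0.0, margin - estimated_gap)
```

For example, a pair scored in the correct order with a wide enough gap incurs zero loss, while a pair scored in the wrong order is penalized by the margin plus the size of the mistake.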
-
Patent number: 11410361
Abstract: Procedural model digital content editing techniques are described that overcome the limitations of conventional techniques to make procedural models available for interaction by a wide range of users without requiring specialized knowledge, and that do so without “breaking” the underlying model. In the techniques described herein, an inverse procedural model system receives a user input that specifies an edit to digital content generated by a procedural model. The system then selects, from a set of candidate input parameters, those input parameters that cause the digital content generated by the procedural model to incorporate the edit.
Type: Grant
Filed: October 26, 2020
Date of Patent: August 9, 2022
Assignee: Adobe Inc.
Inventors: Vojtech Krs, Radomir Mech, Mathieu Gaillard, Giorgio Gori
-
Patent number: 11367199
Abstract: Systems and methods provide editing operations in a smart editing system that may generate a focal point within a mask of an object for each frame of a video segment and perform editing effects on the frames of the video segment to quickly provide users with natural video editing effects. An eye-gaze network may produce a hotspot map of predicted focal points in a video frame. These predicted focal points may then be used by a gaze-to-mask network to determine objects in the image and generate an object mask for each of the detected objects. This process may then be repeated to effectively track the trajectory of objects and object focal points in videos. Based on the determined trajectory of an object in a video clip and editing parameters, the editing engine may produce editing effects relative to an object for the video clip.
Type: Grant
Filed: June 12, 2020
Date of Patent: June 21, 2022
Assignee: Adobe Inc.
Inventors: Lu Zhang, Jianming Zhang, Zhe Lin, Radomir Mech
-
Publication number: 20220130086
Abstract: Procedural model digital content editing techniques are described that overcome the limitations of conventional techniques to make procedural models available for interaction by a wide range of users without requiring specialized knowledge, and that do so without “breaking” the underlying model. In the techniques described herein, an inverse procedural model system receives a user input that specifies an edit to digital content generated by a procedural model. The system then selects, from a set of candidate input parameters, those input parameters that cause the digital content generated by the procedural model to incorporate the edit.
Type: Application
Filed: October 26, 2020
Publication date: April 28, 2022
Applicant: Adobe Inc.
Inventors: Vojtech Krs, Radomir Mech, Mathieu Gaillard, Giorgio Gori
-
Publication number: 20220078358
Abstract: Systems and methods provide reframing operations in a smart editing system that may generate a focal point within a mask of an object for each frame of a video segment and perform editing effects on the frames of the video segment to quickly provide users with natural video editing effects. A reframing engine may process video clips using a segmentation and hotspot module to determine a salient region of an object, generate a mask of the object, and track the trajectory of an object in the video clips. The reframing engine may then receive reframing parameters from a crop suggestion module and a user interface. Based on the determined trajectory of an object in a video clip and reframing parameters, the reframing engine may use reframing logic to produce temporally consistent reframing effects relative to an object for the video clip.
Type: Application
Filed: November 15, 2021
Publication date: March 10, 2022
Inventors: Lu Zhang, Jianming Zhang, Zhe Lin, Radomir Mech
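One common way to obtain the "temporally consistent" reframing this abstract describes is to smooth the tracked object trajectory before deriving crop windows from it, so the crop does not jitter frame to frame. The sketch below is an assumption for illustration (simple exponential smoothing), not the patented reframing logic.

```python
def smooth_crop_centers(trajectory, alpha=0.2):
    """Exponentially smooth a tracked object's per-frame centers.

    trajectory: list of (x, y) object centers, one per frame.
    alpha: smoothing factor in (0, 1]; smaller values follow the
    raw track more sluggishly but yield steadier crop windows.
    Returns a new list of smoothed (x, y) centers.
    """
    if not trajectory:
        return []
    smoothed = [trajectory[0]]  # first frame anchors the filter
    for x, y in trajectory[1:]:
        px, py = smoothed[-1]
        # move a fraction alpha of the way toward the new observation
        smoothed.append((px + alpha * (x - px), py + alpha * (y - py)))
    return smoothed
```

A sudden jump in the raw track (e.g., a tracking glitch) is absorbed gradually instead of yanking the crop window in a single frame.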
-
Publication number: 20220051453
Abstract: The present disclosure relates to using an end-to-end differentiable pipeline for optimizing parameters of a base procedural material to generate a procedural material corresponding to a target physical material. For example, the disclosed systems can receive a digital image of a target physical material. In response, the disclosed systems can retrieve a differentiable procedural material for use as a base procedural material. The disclosed systems can compare a digital image of the base procedural material with the digital image of the target physical material using a loss function, such as a style loss function that compares visual appearance. Based on the determined loss, the disclosed systems can modify the parameters of the base procedural material to determine procedural material parameters for the target physical material. The disclosed systems can generate a procedural material corresponding to the base procedural material using the determined procedural material parameters.
Type: Application
Filed: October 28, 2021
Publication date: February 17, 2022
Inventors: Milos Hasan, Liang Shi, Tamy Boubekeur, Kalyan Sunkavalli, Radomir Mech
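The loop this abstract describes — render the base procedural material, compare it to the target image with a loss, and update the parameters from the resulting gradient — can be sketched in miniature. The sketch below uses finite differences as a stand-in for a truly differentiable pipeline, and all names (`optimize_material`, the toy `render` and `loss`) are illustrative assumptions, not the disclosed system.

```python
def optimize_material(params, render, target_image, loss,
                      lr=0.05, steps=200, eps=1e-4):
    """Fit procedural-material parameters to a target image (toy sketch).

    render: maps a parameter list to an "image" (any comparable object).
    loss:   compares a rendered image against target_image.
    Gradients are approximated coordinate-by-coordinate with finite
    differences; a production pipeline would backpropagate instead.
    """
    params = list(params)
    for _ in range(steps):
        for i in range(len(params)):
            base = loss(render(params), target_image)
            bumped = params[:]
            bumped[i] += eps
            # finite-difference estimate of d(loss)/d(params[i])
            grad = (loss(render(bumped), target_image) - base) / eps
            params[i] -= lr * grad
    return params
```

With a differentiable renderer and an autograd framework, the inner finite-difference estimate would be replaced by a single backward pass, which is what makes the end-to-end approach practical for materials with many parameters.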
-
Patent number: 11244502
Abstract: Techniques are disclosed for generation of 3D structures. A methodology implementing the techniques according to an embodiment includes initializing systems configured to provide rules that specify edge connections between vertices and parametric properties of the vertices. The rules are applied to an initial set of vertices to generate 3D graphs for each of these vertex-rule-graph (VRG) systems. The initial set of vertices is associated with provided interaction surfaces of a 3D model. Skeleton geometries are generated for the 3D graphs, and an associated objective function is calculated. The objective function is configured to evaluate the fitness of the skeleton geometries based on given geometric and functional constraints. A 3D structure is generated through an iterative application of genetic programming techniques applied to the VRG systems to minimize the objective function. Updated constraints and interaction surfaces may be received for incorporation in the iterative process.
Type: Grant
Filed: November 29, 2017
Date of Patent: February 8, 2022
Assignee: Adobe Inc.
Inventors: Vojtěch Krs, Radomir Mech, Nathan A. Carr
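The iterative minimization step — evolve a population of candidates, score each with an objective function, and keep the fittest — can be shown with a generic evolutionary loop. This is a minimal sketch over plain real-valued genomes, assuming a simple keep-the-best-half strategy; it does not implement the patented VRG encoding or genetic-programming operators.

```python
import random

def evolve(objective, dim, pop_size=30, generations=60, sigma=0.3, seed=0):
    """Minimize `objective` over dim-dimensional real vectors (toy sketch).

    Elitist evolutionary loop: sort by fitness (lower objective is
    fitter), keep the best half unchanged, and refill the population
    with Gaussian-mutated copies of survivors.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)
        survivors = pop[: pop_size // 2]
        children = [
            [g + rng.gauss(0, sigma) for g in rng.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]
        pop = survivors + children
    return min(pop, key=objective)
```

Because survivors are carried over unmutated, the best objective value never worsens between generations, which is the property that lets updated constraints be folded into the objective mid-run without restarting from scratch.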
-
Patent number: 11222399
Abstract: Image cropping suggestion using multiple saliency maps is described. In one or more implementations, component scores, indicative of visual characteristics established for visually-pleasing croppings, are computed for candidate image croppings using multiple different saliency maps. The visual characteristics on which a candidate image cropping is scored may be indicative of its composition quality, an extent to which it preserves content appearing in the scene, and a simplicity of its boundary. Based on the component scores, the croppings may be ranked with regard to each of the visual characteristics. The rankings may be used to cluster the candidate croppings into groups of similar croppings, such that croppings in a group are different by less than a threshold amount and croppings in different groups are different by at least the threshold amount. Based on the clustering, croppings may then be chosen, e.g., to present them to a user for selection.
Type: Grant
Filed: April 15, 2019
Date of Patent: January 11, 2022
Assignee: Adobe Inc.
Inventors: Zhe Lin, Radomir Mech, Xiaohui Shen, Brian L. Price, Jianming Zhang, Anant Gilra, Jen-Chan Jeff Chien
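The threshold-based grouping step in this abstract — croppings within a group differ by less than a threshold, croppings across groups by at least the threshold — can be sketched as a greedy single-pass clustering. The function name and the greedy strategy are illustrative assumptions; the patent does not specify this particular algorithm.

```python
def cluster_croppings(croppings, difference, threshold):
    """Group candidate croppings by pairwise similarity (toy sketch).

    difference: symmetric distance between two croppings (e.g., over
    crop rectangles or their ranking vectors).
    Each cropping joins the first existing group whose representative
    (the group's first member) is closer than `threshold`; otherwise
    it starts a new group.
    """
    groups = []
    for crop in croppings:
        for group in groups:
            if difference(crop, group[0]) < threshold:
                group.append(crop)
                break
        else:  # no nearby group found
            groups.append([crop])
    return groups
```

Presenting one representative per group then yields a short, diverse list of crop suggestions rather than many near-duplicates.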
-
Publication number: 20210390710
Abstract: Systems and methods provide editing operations in a smart editing system that may generate a focal point within a mask of an object for each frame of a video segment and perform editing effects on the frames of the video segment to quickly provide users with natural video editing effects. An eye-gaze network may produce a hotspot map of predicted focal points in a video frame. These predicted focal points may then be used by a gaze-to-mask network to determine objects in the image and generate an object mask for each of the detected objects. This process may then be repeated to effectively track the trajectory of objects and object focal points in videos. Based on the determined trajectory of an object in a video clip and editing parameters, the editing engine may produce editing effects relative to an object for the video clip.
Type: Application
Filed: June 12, 2020
Publication date: December 16, 2021
Inventors: Lu Zhang, Jianming Zhang, Zhe Lin, Radomir Mech
-
Publication number: 20210392278
Abstract: Systems and methods provide reframing operations in a smart editing system that may generate a focal point within a mask of an object for each frame of a video segment and perform editing effects on the frames of the video segment to quickly provide users with natural video editing effects. A reframing engine may process video clips using a segmentation and hotspot module to determine a salient region of an object, generate a mask of the object, and track the trajectory of an object in the video clips. The reframing engine may then receive reframing parameters from a crop suggestion module and a user interface. Based on the determined trajectory of an object in a video clip and reframing parameters, the reframing engine may use reframing logic to produce temporally consistent reframing effects relative to an object for the video clip.
Type: Application
Filed: June 12, 2020
Publication date: December 16, 2021
Inventors: Lu Zhang, Jianming Zhang, Zhe Lin, Radomir Mech
-
Patent number: 11189060
Abstract: The present disclosure relates to using an end-to-end differentiable pipeline for optimizing parameters of a base procedural material to generate a procedural material corresponding to a target physical material. For example, the disclosed systems can receive a digital image of a target physical material. In response, the disclosed systems can retrieve a differentiable procedural material for use as a base procedural material. The disclosed systems can compare a digital image of the base procedural material with the digital image of the target physical material using a loss function, such as a style loss function that compares visual appearance. Based on the determined loss, the disclosed systems can modify the parameters of the base procedural material to determine procedural material parameters for the target physical material. The disclosed systems can generate a procedural material corresponding to the base procedural material using the determined procedural material parameters.
Type: Grant
Filed: April 30, 2020
Date of Patent: November 30, 2021
Assignee: ADOBE INC.
Inventors: Milos Hasan, Liang Shi, Tamy Boubekeur, Kalyan Sunkavalli, Radomir Mech
-
Patent number: 11184558
Abstract: Systems and methods provide reframing operations in a smart editing system that may generate a focal point within a mask of an object for each frame of a video segment and perform editing effects on the frames of the video segment to quickly provide users with natural video editing effects. A reframing engine may process video clips using a segmentation and hotspot module to determine a salient region of an object, generate a mask of the object, and track the trajectory of an object in the video clips. The reframing engine may then receive reframing parameters from a crop suggestion module and a user interface. Based on the determined trajectory of an object in a video clip and reframing parameters, the reframing engine may use reframing logic to produce temporally consistent reframing effects relative to an object for the video clip.
Type: Grant
Filed: June 12, 2020
Date of Patent: November 23, 2021
Assignee: Adobe Inc.
Inventors: Lu Zhang, Jianming Zhang, Zhe Lin, Radomir Mech
-
Publication number: 20210343051
Abstract: The present disclosure relates to using an end-to-end differentiable pipeline for optimizing parameters of a base procedural material to generate a procedural material corresponding to a target physical material. For example, the disclosed systems can receive a digital image of a target physical material. In response, the disclosed systems can retrieve a differentiable procedural material for use as a base procedural material. The disclosed systems can compare a digital image of the base procedural material with the digital image of the target physical material using a loss function, such as a style loss function that compares visual appearance. Based on the determined loss, the disclosed systems can modify the parameters of the base procedural material to determine procedural material parameters for the target physical material. The disclosed systems can generate a procedural material corresponding to the base procedural material using the determined procedural material parameters.
Type: Application
Filed: April 30, 2020
Publication date: November 4, 2021
Inventors: Milos Hasan, Liang Shi, Tamy Boubekeur, Kalyan Sunkavalli, Radomir Mech
-
Patent number: 11138776
Abstract: Various methods and systems are provided for image-management operations that include generating adaptive image armatures based on an alignment between composition lines of a reference armature and a position of an object in an image. In operation, a reference armature for an image is accessed. The reference armature includes a plurality of composition lines that define a frame of reference for image composition. An alignment map is determined using the reference armature. The alignment map includes alignment information that indicates alignment between the composition lines of the reference armature and the position of the object in the image. Based on the alignment map, an adaptive image armature is determined. The adaptive image armature includes a subset of the composition lines of the reference armature. The adaptive image armature is displayed.
Type: Grant
Filed: May 17, 2019
Date of Patent: October 5, 2021
Assignee: ADOBE INC.
Inventors: Radomir Mech, Jose Ignacio Echevarria Vallespi, Jingwan Lu, Jianming Zhang, Jane Little E
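The selection of a composition-line subset from the alignment information can be illustrated with a heavily simplified model: keep only the reference-armature lines that pass near the object's position. All details below (the `('v', x)`/`('h', y)` line encoding, normalized coordinates, a plain distance threshold) are assumptions for illustration, not the patented alignment-map computation.

```python
def adaptive_armature(lines, obj_x, obj_y, max_dist):
    """Pick the composition lines aligned with an object (toy sketch).

    lines: reference armature as ('v', x) vertical or ('h', y)
    horizontal lines in normalized [0, 1] image coordinates.
    (obj_x, obj_y): object position in the same coordinates.
    Keeps lines whose 1-D distance to the object is within max_dist.
    """
    selected = []
    for orient, coord in lines:
        pos = obj_x if orient == 'v' else obj_y
        if abs(coord - pos) <= max_dist:  # crude stand-in for the alignment map
            selected.append((orient, coord))
    return selected
```

For a rule-of-thirds reference armature and an object near one intersection, this keeps just the two lines crossing at that intersection, i.e., the subset actually relevant to the composition.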
-
Patent number: 11113578
Abstract: A non-photorealistic image rendering system and related techniques are described herein that train and implement machine learning models to reproduce digital images in accordance with various painting styles and constraints. The image rendering system can include a machine learning system that utilizes actor-critic based reinforcement learning techniques to train painting agents (e.g., models that include one or more neural networks) how to transform images into various artistic styles with minimal loss between the original images and the transformed images. The image rendering system can generate constrained painting agents, which correspond to painting agents that are further trained to reproduce images in accordance with one or more constraints. The constraints may include limitations of the color, width, size, and/or position of brushstrokes within reproduced images. These constrained painting agents may provide users with robust, flexible, and customizable non-photorealistic painting systems.
Type: Grant
Filed: April 13, 2020
Date of Patent: September 7, 2021
Assignee: Adobe, Inc.
Inventors: Jonathan Brandt, Radomir Mech, Ning Xu, Byungmoon Kim, Biao Jia
-
Publication number: 20210264649
Abstract: Generative shape creation and editing is leveraged in a digital medium environment. An object editor system represents a set of training shapes as sets of visual elements known as “handles,” and converts sets of handles into signed distance field (SDF) representations. A handle processor model is then trained using the SDF representations to enable the handle processor model to generate new shapes that reflect salient visual features of the training shapes. The trained handle processor model, for instance, generates new sets of handles based on salient visual features learned from the training handle set. Thus, utilizing the described techniques, accurate characterizations of a set of shapes can be learned and used to generate new shapes. Further, generated shapes can be edited and transformed in different ways.
Type: Application
Filed: May 11, 2021
Publication date: August 26, 2021
Applicant: Adobe Inc.
Inventors: Giorgio Gori, Tamy Boubekeur, Radomir Mech, Nathan Aaron Carr, Matheus Abrantes Gadelha, Duygu Ceylan Aksit
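The signed distance field idea behind the handle representation can be shown for a single, simple handle. The sketch below computes the standard exact SDF of a 2-D axis-aligned box: negative inside, zero on the boundary, positive outside. Reducing a handle to a 2-D box is a simplification for illustration; the publication works with richer handle sets.

```python
def box_sdf(px, py, cx, cy, hw, hh):
    """Signed distance from point (px, py) to an axis-aligned box.

    The box is centered at (cx, cy) with half-extents (hw, hh).
    Returns a negative value inside the box, zero on its boundary,
    and the Euclidean distance to the boundary outside.
    """
    # offsets from the box surface along each axis
    dx = abs(px - cx) - hw
    dy = abs(py - cy) - hh
    # distance when the point is outside (clamp negative components to 0)
    outside = (max(dx, 0.0) ** 2 + max(dy, 0.0) ** 2) ** 0.5
    # negative depth when the point is inside the box
    inside = min(max(dx, dy), 0.0)
    return outside + inside
```

Sampling such a function on a grid turns a discrete handle set into a dense field that a neural model can consume uniformly, regardless of how many handles a shape has.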
-
Publication number: 20210256775
Abstract: Embodiments of the present invention are directed towards intuitive editing of three-dimensional models. In embodiments, salient geometric features associated with a three-dimensional model defining an object are identified. Thereafter, feature attributes associated with the salient geometric features are identified. A feature set including a plurality of salient geometric features related to one another is generated based on the determined feature attributes (e.g., properties, relationships, distances). An editing handle can then be generated and displayed for the feature set enabling each of the salient geometric features within the feature set to be edited in accordance with a manipulation of the editing handle. The editing handle can be displayed in association with one of the salient geometric features of the feature set.
Type: Application
Filed: March 22, 2021
Publication date: August 19, 2021
Inventors: Duygu Ceylan Aksit, Vladimir Kim, Siddhartha Chaudhuri, Radomir Mech, Noam Aigerman, Kevin Wampler, Jonathan Eisenmann, Giorgio Gori, Emiliano Gambaretto
-
Patent number: 11069099
Abstract: Various embodiments enable curves to be drawn around 3-D objects by intelligently determining or inferring how the curve flows in the space around the outside of the 3-D object. The various embodiments enable such curves to be drawn without having to constantly rotate the 3-D object. In at least some embodiments, curve flow is inferred by employing a vertex position discovery process, a path discovery process, and a final curve construction process.
Type: Grant
Filed: April 22, 2020
Date of Patent: July 20, 2021
Assignee: Adobe Inc.
Inventors: Vojtech Krs, Radomir Mech, Nathan Aaron Carr, Mehmet Ersin Yumer
-
Publication number: 20210201150
Abstract: Various embodiments describe frame selection based on training and using a neural network. In an example, the neural network is a convolutional neural network trained with training pairs. Each training pair includes two training frames from a frame collection. The loss function relies on the estimated quality difference between the two training frames. Further, the definition of the loss function varies based on the actual quality difference between these two frames. In a further example, the neural network is trained by incorporating facial heatmaps generated from the training frames and facial quality scores of faces detected in the training frames. In addition, the training involves using a feature mean that represents an average of the features of the training frames belonging to the same frame collection. Once the neural network is trained, a frame collection is input thereto and a frame is selected based on generated quality scores.
Type: Application
Filed: March 17, 2021
Publication date: July 1, 2021
Inventors: Zhe Lin, Xiaohui Shen, Radomir Mech, Jian Ren