Patents by Inventor Radomir Mech

Radomir Mech has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10559061
    Abstract: Systems and methods for computerized drawing of ornamental designs consisting of placed instances of simple shapes. The shapes, called elements, are selected from a small library of templates. The elements are deformed to flow along a direction field interpolated from user-supplied strokes, giving a sense of visual flow to the final composition, and constrained to lie within a container region. In an implementation, a vector field is computed based on user strokes. Streamlines that conform to the vector field are constructed, and an element is placed over each streamline. The shape of the elements may be modified, such as by bending, stretching, or enlarging, to reduce spacing between elements and to minimize variations in spacing, improving the aesthetic appearance.
    Type: Grant
    Filed: May 8, 2017
    Date of Patent: February 11, 2020
    Assignee: Adobe Inc.
    Inventors: Paul Asente, Craig Kaplan, Radomir Mech, Reza Adhitya Saputra
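The pipeline this abstract describes — interpolate a direction field from user strokes, then construct streamlines that conform to it — can be sketched minimally. The inverse-distance weighting and Euler integration below are illustrative choices, not the patented method, and the function names are hypothetical:

```python
import math

def interpolate_direction(strokes, p, eps=1e-6):
    """Inverse-distance-weighted blend of user-stroke directions at point p.
    `strokes` is a list of (point, unit_direction) samples -- a simplification
    of the direction field interpolated from user-supplied strokes."""
    wx = wy = 0.0
    for (sx, sy), (dx, dy) in strokes:
        w = 1.0 / (((p[0] - sx) ** 2 + (p[1] - sy) ** 2) + eps)
        wx += w * dx
        wy += w * dy
    n = math.hypot(wx, wy) or 1.0
    return (wx / n, wy / n)

def trace_streamline(strokes, seed, step=0.1, n_steps=50):
    """Euler-integrate a streamline that follows the interpolated field;
    an element would then be placed and deformed along each such streamline."""
    pts = [seed]
    x, y = seed
    for _ in range(n_steps):
        dx, dy = interpolate_direction(strokes, (x, y))
        x, y = x + step * dx, y + step * dy
        pts.append((x, y))
    return pts
```

Clipping the streamline against the container region and bending elements along it are separate steps not shown here.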
  • Patent number: 10552730
    Abstract: An intuitive object-generation experience is provided by employing an autoencoder neural network to reduce the dimensionality of a procedural model. A set of sample objects are generated using the procedural model. In embodiments, the sample objects may be selected according to visual features such that the sample objects are uniformly distributed in visual appearance. Both procedural model parameters and visual features from the sample objects are used to train an autoencoder neural network, which maps a small number of new parameters to the larger number of procedural model parameters of the original procedural model. A user interface may be provided that allows users to generate new objects by adjusting the new parameters of the trained autoencoder neural network, which outputs procedural model parameters. The output procedural model parameters may be provided to the procedural model to generate the new objects.
    Type: Grant
    Filed: June 30, 2015
    Date of Patent: February 4, 2020
    Assignee: Adobe Inc.
    Inventors: Mehmet Ersin Yumer, Radomir Mech, Paul John Asente, Gavin Stuart Peter Miller
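As a toy illustration of the dimensionality-reduction idea, here is a tiny linear autoencoder — two "procedural parameters" compressed to one latent control — trained by plain gradient descent. This is a sketch under strong simplifying assumptions, not the patent's network, which is nonlinear and also trained on visual features:

```python
def train_linear_autoencoder(samples, lr=0.05, iters=5000):
    """Tiny 2 -> 1 -> 2 linear autoencoder trained by full-batch gradient
    descent; stands in for the network that maps a small number of new
    parameters to the larger set of procedural-model parameters."""
    enc = [0.3, 0.1]  # encoder weights: z = enc . x
    dec = [0.2, 0.4]  # decoder weights: x_hat = dec * z
    n = len(samples)
    for _ in range(iters):
        g_enc = [0.0, 0.0]
        g_dec = [0.0, 0.0]
        for x in samples:
            z = enc[0] * x[0] + enc[1] * x[1]
            err = [dec[i] * z - x[i] for i in range(2)]
            g_z = err[0] * dec[0] + err[1] * dec[1]
            for i in range(2):
                g_dec[i] += err[i] * z / n
                g_enc[i] += g_z * x[i] / n
        for i in range(2):
            enc[i] -= lr * g_enc[i]
            dec[i] -= lr * g_dec[i]
    return enc, dec

def decode(dec, z):
    """New objects come from feeding decoder output (procedural parameters)
    into the procedural model; here decoding is just a linear map."""
    return [dec[0] * z, dec[1] * z]
```

In the patented setting, a UI slider would drive `z`, and `decode` would emit the full parameter vector for the original procedural model.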
  • Patent number: 10515443
    Abstract: Systems and methods are disclosed for estimating the aesthetic quality of digital images using deep learning. In particular, the disclosed systems and methods describe training a neural network to generate an aesthetic quality score for digital images. The neural network includes a training structure that compares relative rankings of pairs of training images to accurately predict a relative ranking of a digital image. Additionally, in training the neural network, an image rating system can utilize content-aware and user-aware sampling techniques to identify pairs of training images that have similar content and/or that have been rated by the same or different users. Using content-aware and user-aware sampling techniques, the neural network can be trained to accurately predict aesthetic quality ratings that reflect the subjective opinions of most users as well as provide aesthetic scores for digital images that represent the wide spectrum of aesthetic preferences of various users.
    Type: Grant
    Filed: May 16, 2018
    Date of Patent: December 24, 2019
    Assignee: Adobe Inc.
    Inventors: Xiaohui Shen, Zhe Lin, Shu Kong, Radomir Mech
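The pairwise comparison at the heart of the training structure can be written as a hinge-style ranking loss. The margin form below is a common choice and an assumption here, since the abstract does not fix the exact loss:

```python
def pairwise_ranking_loss(s_high, s_low, margin=1.0):
    """Zero loss once the higher-rated image's predicted score beats the
    lower-rated image's score by at least `margin`; otherwise penalize
    the shortfall linearly."""
    return max(0.0, margin - (s_high - s_low))

def batch_ranking_loss(pairs, margin=1.0):
    """Average the pair loss over (higher_rated_score, lower_rated_score)
    pairs, which would be sampled with the content-aware and user-aware
    schemes the abstract mentions."""
    return sum(pairwise_ranking_loss(h, l, margin) for h, l in pairs) / len(pairs)
```

Minimizing this loss pushes the network to preserve the relative ranking of each training pair rather than match absolute scores.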
  • Patent number: 10516830
    Abstract: Various embodiments describe facilitating real-time crops on an image. In an example, an image processing application executed on a device receives image data corresponding to a field of view of a camera of the device. The image processing application renders a major view on a display of the device in a preview mode. The major view presents a previewed image based on the image data. The image processing application receives a composition score of a cropped image from a deep-learning system. The image processing application renders a sub-view presenting the cropped image based on the composition score in a preview mode. Based on a user interaction, the image processing application renders the cropped image in the major view with the sub-view in the preview mode.
    Type: Grant
    Filed: October 11, 2017
    Date of Patent: December 24, 2019
    Assignee: Adobe Inc.
    Inventors: Jianming Zhang, Zijun Wei, Zhe Lin, Xiaohui Shen, Radomir Mech
  • Patent number: 10497122
    Abstract: Various embodiments describe using a neural network to evaluate image crops in substantially real-time. In an example, a computer system performs unsupervised training of a first neural network based on unannotated image crops, followed by a supervised training of the first neural network based on annotated image crops. Once this first neural network is trained, the computer system inputs image crops generated from images to this trained network and receives composition scores therefrom. The computer system performs supervised training of a second neural network based on the images and the composition scores.
    Type: Grant
    Filed: October 11, 2017
    Date of Patent: December 3, 2019
    Assignee: Adobe Inc.
    Inventors: Jianming Zhang, Zijun Wei, Zhe Lin, Xiaohui Shen, Radomir Mech
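The hand-off between the two networks — the trained crop scorer labels candidate crops, and those labels then supervise the second network — can be sketched as a data-generation step. The crop-enumeration parameters and the scorer passed in below are illustrative stand-ins, not values from the patent:

```python
def generate_crops(scales=(0.6, 0.8), stride=0.2):
    """Enumerate candidate crop rectangles (x, y, w, h) in normalized
    [0, 1] image coordinates."""
    crops = []
    for s in scales:
        x = 0.0
        while x + s <= 1.0 + 1e-9:
            y = 0.0
            while y + s <= 1.0 + 1e-9:
                crops.append((x, y, s, s))
                y += stride
            x += stride
    return crops

def distillation_targets(images, crop_scorer, crops):
    """Label each image with the first network's per-crop composition scores;
    the resulting (image, scores) pairs supervise the second network, which
    learns to predict all crop scores in a single pass."""
    return [(img, [crop_scorer(img, c) for c in crops]) for img in images]
```

The second network never sees individual crop pixels at inference time, which is what makes the evaluation substantially real-time.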
  • Publication number: 20190362199
    Abstract: Techniques are disclosed for blur classification. The techniques utilize an image content feature map, a blur map, and an attention map, thereby combining low-level blur estimation with a high-level understanding of important image content in order to perform blur classification. The techniques allow for programmatically determining whether blur exists in an image and what type of blur it is (e.g., high blur, low blur, middle or neutral blur, or no blur). According to one example embodiment, if blur is detected, spatially-varying blur amounts are estimated and blur desirability is categorized in terms of image quality.
    Type: Application
    Filed: May 25, 2018
    Publication date: November 28, 2019
    Applicant: Adobe Inc.
    Inventors: Zhe Lin, Xiaohui Shen, Shanghang Zhang, Radomir Mech
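The core idea — weighting low-level blur estimates by attention over important content — can be illustrated with a fixed fusion rule. In the patent this combination is learned by a neural network; the weighted mean and the bucket thresholds below are hypothetical:

```python
def classify_blur(blur_map, attention_map, thresholds=(0.25, 0.5, 0.75)):
    """Attention-weighted mean of per-pixel blur estimates, bucketed into
    no/low/middle/high blur. Blur on unattended background (e.g., bokeh)
    barely moves the score; blur on the attended subject dominates it."""
    total_w = sum(sum(row) for row in attention_map) or 1.0
    weighted = sum(
        b * w
        for brow, wrow in zip(blur_map, attention_map)
        for b, w in zip(brow, wrow)
    )
    score = weighted / total_w
    lo, mid, hi = thresholds
    if score < lo:
        return "no blur"
    if score < mid:
        return "low blur"
    if score < hi:
        return "middle blur"
    return "high blur"
```

The same blur map yields opposite classifications depending on where the attention mass sits, which is the point of combining the two maps.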
  • Patent number: 10489688
    Abstract: Techniques and systems are described to determine personalized digital image aesthetics in a digital medium environment. In one example, a personalized offset is generated to adapt a generic model for digital image aesthetics. A generic model, once trained, is used to generate training aesthetics scores from a personal training data set that corresponds to an entity, e.g., a particular user, group of users, and so on. The image aesthetics system then generates residual scores (e.g., offsets) as the differences between the training aesthetics scores and the personal aesthetics scores for the personal training digital images. The image aesthetics system then employs machine learning to train a personalized model to predict the residual scores as a personalized offset using the residual scores and personal training digital images.
    Type: Grant
    Filed: July 24, 2017
    Date of Patent: November 26, 2019
    Assignee: Adobe Inc.
    Inventors: Xiaohui Shen, Zhe Lin, Radomir Mech, Jian Ren
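The residual ("offset") construction is simple arithmetic and can be shown directly. The mean-offset predictor at the end is the crudest possible personalized model, a stand-in for the learned one described in the abstract:

```python
def residual_offsets(generic_scores, personal_scores):
    """Residuals between the user's own ratings and the generic model's
    predictions on the same training images -- the targets the personalized
    model is trained to predict."""
    return [p - g for g, p in zip(generic_scores, personal_scores)]

def mean_offset(offsets):
    """Degenerate 'personalized model': always predict the user's average
    residual. (The patent trains a machine-learned predictor instead.)"""
    return sum(offsets) / len(offsets)

def personalized_score(generic_score, predicted_offset):
    """At inference time, the generic aesthetic score is corrected by the
    predicted per-user offset."""
    return generic_score + predicted_offset
```

Learning only the offset keeps the generic model intact, so personalization needs far less per-user data than retraining the full aesthetics model.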
  • Patent number: 10467529
    Abstract: In embodiments of convolutional neural network joint training, a computing system memory maintains different data batches of multiple digital image items, where the digital image items of the different data batches have some common features. A convolutional neural network (CNN) receives input of the digital image items of the different data batches, and classifier layers of the CNN are trained to recognize the common features in the digital image items of the different data batches. The recognized common features are input to fully-connected layers of the CNN that distinguish between the recognized common features of the digital image items of the different data batches. A scoring difference is determined between item pairs of the digital image items in a particular one of the different data batches. A piecewise ranking loss algorithm maintains the scoring difference between the item pairs, and the scoring difference is used to train CNN regression functions.
    Type: Grant
    Filed: June 8, 2016
    Date of Patent: November 5, 2019
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Yufei Wang, Radomir Mech, Xiaohui Shen, Gavin Stuart Peter Miller
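One plausible reading of a "piecewise ranking loss that maintains the scoring difference between item pairs" is shown below: pairs whose ground-truth scores are close are pushed toward equal predictions, while clearly ordered pairs must be separated by at least a margin. The exact piecewise form and the margin values are assumptions; the abstract does not spell them out:

```python
def piecewise_ranking_loss(pred_diff, true_diff, m_similar=0.1, m_different=1.0):
    """Piecewise loss on a pair's predicted score difference `pred_diff`
    given the ground-truth difference `true_diff`. Near-ties are allowed a
    small slack `m_similar`; ordered pairs must keep a gap of `m_different`
    in the correct direction."""
    if abs(true_diff) <= m_similar:
        return max(0.0, abs(pred_diff) - m_similar) ** 2
    sign = 1.0 if true_diff > 0 else -1.0
    return max(0.0, m_different - sign * pred_diff) ** 2
```

Because the penalty switches form with the actual quality gap, the regression functions learn to preserve relative differences, not just the ordering.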
  • Patent number: 10389804
    Abstract: Content creation and sharing integration techniques and systems are described. In one or more implementations, techniques are described in which modifiable versions of content (e.g., images) are created and shared via a content sharing service such that image creation functionality used to create the images is preserved to permit continued creation using this functionality. In one or more additional implementations, image creation functionality employed by a creative professional to create content is leveraged to locate similar images from a content sharing service.
    Type: Grant
    Filed: November 11, 2015
    Date of Patent: August 20, 2019
    Assignee: Adobe Inc.
    Inventors: Zeke Koch, Gavin Stuart Peter Miller, Jonathan W. Brandt, Nathan A. Carr, Radomir Mech, Walter Wei-Tuh Chang, Scott D. Cohen, Hailin Jin
  • Publication number: 20190253614
    Abstract: The present disclosure includes systems, methods, and non-transitory computer readable media that can guide a user to align a camera feed captured by a user client device with a target digital image. In particular, the systems described herein can analyze a camera feed to determine image attributes for the camera feed. The systems can compare the image attributes of the camera feed with corresponding target image attributes of a target digital image. Additionally, the systems can generate and provide instructions to guide a user to align the image attributes of the camera feed with the target image attributes of the target digital image.
    Type: Application
    Filed: February 15, 2018
    Publication date: August 15, 2019
    Inventors: Alannah Oleson, Radomir Mech, Jose Echevarria, Jingwan Lu
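The compare-and-guide loop the abstract describes can be sketched as a per-attribute comparison that emits instructions. The attribute names, the tolerance, and the instruction wording are all illustrative, not from the patent:

```python
def alignment_instructions(feed_attrs, target_attrs, tolerance=0.05):
    """Compare per-attribute values of the live camera feed against the
    target digital image and emit a guidance hint for every attribute that
    is off by more than `tolerance`."""
    hints = []
    for name, target in target_attrs.items():
        diff = feed_attrs.get(name, 0.0) - target
        if abs(diff) > tolerance:
            direction = "decrease" if diff > 0 else "increase"
            hints.append(f"{direction} {name}")
    return hints
```

Run inside the capture loop, an empty hint list signals that the feed is aligned with the target image.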
  • Publication number: 20190244327
    Abstract: Image cropping suggestion using multiple saliency maps is described. In one or more implementations, component scores, indicative of visual characteristics established for visually-pleasing croppings, are computed for candidate image croppings using multiple different saliency maps. The visual characteristics on which a candidate image cropping is scored may be indicative of its composition quality, an extent to which it preserves content appearing in the scene, and a simplicity of its boundary. Based on the component scores, the croppings may be ranked with regard to each of the visual characteristics. The rankings may be used to cluster the candidate croppings into groups of similar croppings, such that croppings in a group are different by less than a threshold amount and croppings in different groups are different by at least the threshold amount. Based on the clustering, croppings may then be chosen, e.g., to present them to a user for selection.
    Type: Application
    Filed: April 15, 2019
    Publication date: August 8, 2019
    Applicant: Adobe Inc.
    Inventors: Zhe Lin, Radomir Mech, Xiaohui Shen, Brian L. Price, Jianming Zhang, Anant Gilra, Jen-Chan Jeff Chien
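The threshold-based grouping of similar croppings can be sketched with a concrete crop dissimilarity and a greedy pass. The abstract leaves the difference measure abstract; 1 − IoU between crop rectangles is one illustrative choice:

```python
def crop_distance(a, b):
    """Dissimilarity between two crops (x, y, w, h): 1 - intersection over
    union. An illustrative stand-in for the patent's 'different by less
    than a threshold amount'."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return 1.0 - inter / union

def cluster_crops(crops, threshold=0.5):
    """Greedy clustering: each crop joins the first group whose
    representative is within `threshold`, else it starts a new group --
    so within-group differences stay below the threshold and groups
    differ by at least it."""
    groups = []
    for c in crops:
        for g in groups:
            if crop_distance(g[0], c) < threshold:
                g.append(c)
                break
        else:
            groups.append([c])
    return groups
```

Presenting one cropping per group then gives the user a diverse set of suggestions rather than many near-duplicates.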
  • Publication number: 20190213474
    Abstract: Various embodiments describe frame selection based on training and using a neural network. In an example, the neural network is a convolutional neural network trained with training pairs. Each training pair includes two training frames from a frame collection. The loss function relies on the estimated quality difference between the two training frames. Further, the definition of the loss function varies based on the actual quality difference between these two frames. In a further example, the neural network is trained by incorporating facial heatmaps generated from the training frames and facial quality scores of faces detected in the training frames. In addition, the training involves using a feature mean that represents an average of the features of the training frames belonging to the same frame collection. Once the neural network is trained, a frame collection is input thereto and a frame is selected based on generated quality scores.
    Type: Application
    Filed: January 9, 2018
    Publication date: July 11, 2019
    Inventors: Zhe Lin, Xiaohui Shen, Radomir Mech, Jian Ren
  • Patent number: 10346951
    Abstract: Image cropping suggestion using multiple saliency maps is described. In one or more implementations, component scores, indicative of visual characteristics established for visually-pleasing croppings, are computed for candidate image croppings using multiple different saliency maps. The visual characteristics on which a candidate image cropping is scored may be indicative of its composition quality, an extent to which it preserves content appearing in the scene, and a simplicity of its boundary. Based on the component scores, the croppings may be ranked with regard to each of the visual characteristics. The rankings may be used to cluster the candidate croppings into groups of similar croppings, such that croppings in a group are different by less than a threshold amount and croppings in different groups are different by at least the threshold amount. Based on the clustering, croppings may then be chosen, e.g., to present them to a user for selection.
    Type: Grant
    Filed: March 2, 2017
    Date of Patent: July 9, 2019
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Radomir Mech, Xiaohui Shen, Brian L. Price, Jianming Zhang, Anant Gilra, Jen-Chan Jeff Chien
  • Patent number: 10318102
    Abstract: Techniques and systems are described to generate a three-dimensional model from two-dimensional images. A plurality of inputs is received, formed through user interaction with a user interface. Each of the plurality of inputs define a respective user-specified point on the object in a respective one of the plurality of images. A plurality of estimated points on the object are generated automatically and without user intervention. Each of the plurality of estimated points corresponds to a respective user-specified point for other ones of the plurality of images. The plurality of estimated points is displayed for the other ones of the plurality of images in the user interface by a computing device. A mesh of the three-dimensional model of the object is generated by the computing device by mapping respective ones of the user-specified points to respective ones of the estimated points in the plurality of images.
    Type: Grant
    Filed: January 25, 2016
    Date of Patent: June 11, 2019
    Assignee: Adobe Inc.
    Inventors: Daichi Ito, Radomir Mech, Nathan A. Carr, Tsukasa Fukusato
  • Patent number: 10307969
    Abstract: This document describes techniques and apparatuses for 3D printing with small geometric offsets to affect surface characteristics. These techniques are capable of enabling fused-deposition printers to create 3D objects having desired surface characteristics, such as particular colors, images and image resolutions, textures, and luminosities. In some cases, the techniques do so using a single filament head with a single filament material. In other cases, the techniques use multiple heads, each with a different filament, though the techniques can forgo many switches between these heads. Each printing layer may use a single filament from one head, achieving the desired surface characteristics while reducing starts and stops of the filament heads, which results in fewer artifacts and faster printing.
    Type: Grant
    Filed: October 26, 2016
    Date of Patent: June 4, 2019
    Assignee: Adobe Inc.
    Inventors: Nathan A. Carr, Tim Christopher Reiner, Gavin Stuart Peter Miller, Radomir Mech
  • Publication number: 20190164342
    Abstract: Techniques are disclosed for the generation of 3D structures. A methodology implementing the techniques according to an embodiment includes initializing systems configured to provide rules that specify edge connections between vertices and parametric properties of the vertices. The rules are applied to an initial set of vertices to generate 3D graphs for each of these vertex-rule-graph (VRG) systems. The initial set of vertices is associated with provided interaction surfaces of a 3D model. Skeleton geometries are generated for the 3D graphs, and an associated objective function is calculated. The objective function is configured to evaluate the fitness of the skeleton geometries based on given geometric and functional constraints. A 3D structure is generated through iterative application of genetic programming techniques to the VRG systems to minimize the objective function. Updated constraints and interaction surfaces may be received for incorporation into the iterative process.
    Type: Application
    Filed: November 29, 2017
    Publication date: May 30, 2019
    Applicant: Adobe Inc.
    Inventors: Vojtech Krs, Radomir Mech, Nathan A. Carr
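The evolutionary optimization loop at the core of the abstract can be sketched generically. The real system evolves vertex-rule-graph systems against geometric and functional constraints; here the individuals are plain parameter vectors and the selection/mutation scheme is a deliberately simple stand-in:

```python
import random

def evolve(objective, init_pop, generations=100, mut_sigma=0.1, seed=1):
    """Minimal evolutionary loop: each generation keeps the fitter half of
    the population (lower objective is better) and refills the rest with
    Gaussian-mutated copies of survivors, then returns the best individual."""
    rng = random.Random(seed)
    pop = list(init_pop)
    for _ in range(generations):
        pop.sort(key=objective)
        half = pop[: len(pop) // 2]
        pop = half + [
            [x + rng.gauss(0.0, mut_sigma) for x in rng.choice(half)]
            for _ in range(len(pop) - len(half))
        ]
    return min(pop, key=objective)
```

In the patented setting, the objective would score skeleton geometries against the given constraints, and receiving updated constraints mid-run would simply change `objective` between iterations.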
  • Publication number: 20190110002
    Abstract: Various embodiments describe view switching of video on a computing device. In an example, a video processing application executed on the computing device receives a stream of video data. The video processing application renders a major view on a display of the computing device. The major view presents a video from the stream of video data. The video processing application inputs the stream of video data to a deep learning system and receives back information that identifies a cropped video from the video based on a composition score of the cropped video, while the video is presented in the major view. The composition score is generated by the deep learning system. The video processing application renders a sub-view on a display of the device, the sub-view presenting the cropped video. The video processing application renders the cropped video in the major view based on a user interaction with the sub-view.
    Type: Application
    Filed: October 11, 2017
    Publication date: April 11, 2019
    Inventors: Jianming Zhang, Zijun Wei, Zhe Lin, Xiaohui Shen, Radomir Mech
  • Publication number: 20190109981
    Abstract: Various embodiments describe facilitating real-time crops on an image. In an example, an image processing application executed on a device receives image data corresponding to a field of view of a camera of the device. The image processing application renders a major view on a display of the device in a preview mode. The major view presents a previewed image based on the image data. The image processing application receives a composition score of a cropped image from a deep-learning system. The image processing application renders a sub-view presenting the cropped image based on the composition score in a preview mode. Based on a user interaction, the image processing application renders the cropped image in the major view with the sub-view in the preview mode.
    Type: Application
    Filed: October 11, 2017
    Publication date: April 11, 2019
    Inventors: Jianming Zhang, Zijun Wei, Zhe Lin, Xiaohui Shen, Radomir Mech
  • Publication number: 20190108640
    Abstract: Various embodiments describe using a neural network to evaluate image crops in substantially real-time. In an example, a computer system performs unsupervised training of a first neural network based on unannotated image crops, followed by a supervised training of the first neural network based on annotated image crops. Once this first neural network is trained, the computer system inputs image crops generated from images to this trained network and receives composition scores therefrom. The computer system performs supervised training of a second neural network based on the images and the composition scores.
    Type: Application
    Filed: October 11, 2017
    Publication date: April 11, 2019
    Inventors: Jianming Zhang, Zijun Wei, Zhe Lin, Xiaohui Shen, Radomir Mech
  • Patent number: 10257436
    Abstract: Various embodiments describe view switching of video on a computing device. In an example, a video processing application receives a stream of video data. The video processing application renders a major view on a display of the computing device. The major view presents a video from the stream of video data. The video processing application inputs the stream of video data to a deep learning system and receives back information that identifies a cropped video from the video based on a composition score of the cropped video, while the video is presented in the major view. The composition score is generated by the deep learning system. The video processing application renders a sub-view on a display of the device, the sub-view presenting the cropped video. The video processing application renders the cropped video in the major view based on a user interaction with the sub-view.
    Type: Grant
    Filed: October 11, 2017
    Date of Patent: April 9, 2019
    Assignee: Adobe Systems Incorporated
    Inventors: Jianming Zhang, Zijun Wei, Zhe Lin, Xiaohui Shen, Radomir Mech