Patents by Inventor Aaron Hertzmann

Aaron Hertzmann has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10748324
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that integrate (or embed) a non-photorealistic rendering (“NPR”) generator with a style-transfer-neural network to generate stylized images that both correspond to a source image and resemble a stroke style. By integrating an NPR generator with a style-transfer-neural network, the disclosed methods, non-transitory computer readable media, and systems can accurately capture a stroke style resembling one or both of stylized edges or stylized shadings. When training such a style-transfer-neural network, the integrated NPR generator can enable the disclosed methods, non-transitory computer readable media, and systems to use real-stroke drawings (instead of conventional paired-ground-truth drawings) for training the network to accurately portray a stroke style.
    Type: Grant
    Filed: November 8, 2018
    Date of Patent: August 18, 2020
    Assignee: Adobe Inc.
    Inventors: Elya Shechtman, Yijun Li, Chen Fang, Aaron Hertzmann
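The core idea in the abstract above can be illustrated with a toy sketch: an NPR generator first extracts a baseline stroke drawing from the source image, and a style-transfer network then restyles those strokes. Everything below (the gradient-threshold "NPR generator", the stroke-weight "stylizer", the list-of-lists image format) is an invented stand-in for illustration, not the patented method:

```python
def npr_generator(image, threshold=0.5):
    """Crude NPR generator: extract an edge map as the baseline 'stroke' drawing."""
    h, w = len(image), len(image[0])
    edges = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Forward differences approximate the local luminance gradient.
            gx = image[y][min(x + 1, w - 1)] - image[y][x]
            gy = image[min(y + 1, h - 1)][x] - image[y][x]
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                edges[y][x] = 1.0
    return edges

def stylize(edges, stroke_weight=0.7):
    """Stand-in for the style-transfer network: rescale edge intensities
    toward a target stroke weight learned from real stroke drawings."""
    return [[v * stroke_weight for v in row] for row in edges]

# A 4x4 source image with a vertical luminance step; the pipeline turns
# the step into a softened vertical stroke.
src = [[0.0, 0.0, 1.0, 1.0] for _ in range(4)]
drawing = stylize(npr_generator(src))
```

The point of the integration described in the abstract is that the NPR stage supplies geometry-aligned strokes, so the style network only has to learn stroke appearance and can train on unpaired real drawings.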
  • Patent number: 10706554
    Abstract: The present disclosure includes methods and systems for identifying and manipulating a segment of a three-dimensional digital model based on soft classification of the three-dimensional digital model. In particular, one or more embodiments of the disclosed systems and methods identify a soft classification of a digital model and utilize the soft classification to tune segmentation algorithms. For example, the disclosed systems and methods can utilize a soft classification to select a segmentation algorithm from a plurality of segmentation algorithms, to combine segmentation parameters from a plurality of segmentation algorithms, and/or to identify input parameters for a segmentation algorithm. The disclosed systems and methods can utilize the tuned segmentation algorithms to accurately and efficiently identify a segment of a three-dimensional digital model.
    Type: Grant
    Filed: April 14, 2017
    Date of Patent: July 7, 2020
    Assignee: Adobe Inc.
    Inventors: Vladimir Kim, Aaron Hertzmann, Mehmet Yumer
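One of the tuning strategies the abstract lists, combining segmentation parameters from several algorithms, can be sketched as a probability-weighted blend. The class names and parameter values below are hypothetical:

```python
def blend_parameters(soft_classification, per_class_params):
    """Weight each class's segmentation parameters by its class probability
    to produce tuned input parameters for the segmentation step."""
    blended = {}
    for cls, prob in soft_classification.items():
        for name, value in per_class_params[cls].items():
            blended[name] = blended.get(name, 0.0) + prob * value
    return blended

# Soft classification of a 3D model: 70% chair-like, 30% table-like.
soft = {"chair": 0.7, "table": 0.3}
params = {
    "chair": {"num_segments": 5, "smoothness": 0.2},
    "table": {"num_segments": 3, "smoothness": 0.6},
}
tuned = blend_parameters(soft, params)
# num_segments blends to 0.7*5 + 0.3*3 = 4.4
```

The same soft scores could instead select a single algorithm (argmax) or gate its inputs; the blend above is just one of the options the abstract enumerates.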
  • Publication number: 20200151938
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that integrate (or embed) a non-photorealistic rendering (“NPR”) generator with a style-transfer-neural network to generate stylized images that both correspond to a source image and resemble a stroke style. By integrating an NPR generator with a style-transfer-neural network, the disclosed methods, non-transitory computer readable media, and systems can accurately capture a stroke style resembling one or both of stylized edges or stylized shadings. When training such a style-transfer-neural network, the integrated NPR generator can enable the disclosed methods, non-transitory computer readable media, and systems to use real-stroke drawings (instead of conventional paired-ground-truth drawings) for training the network to accurately portray a stroke style.
    Type: Application
    Filed: November 8, 2018
    Publication date: May 14, 2020
    Inventors: Elya Shechtman, Yijun Li, Chen Fang, Aaron Hertzmann
  • Patent number: 10489489
    Abstract: Systems and methods are disclosed for classifying digital fonts. In particular, in one or more embodiments, the disclosed systems and methods detect a new digital font, automatically classify the digital font into one or more font classifications, and make the digital font available via a user interface. More particularly, the disclosed systems and methods can conduct searches for the new digital font, identify digital fonts similar to the new digital font, and apply the new digital font to digital text in an electronic document.
    Type: Grant
    Filed: March 9, 2016
    Date of Patent: November 26, 2019
    Assignee: Adobe Inc.
    Inventors: Yuyan Song, Seth Shaw, Aaron Hertzmann
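The "identify digital fonts similar to the new digital font" step in the abstract amounts to a nearest-neighbor search over font feature vectors. The features and font names below are invented for illustration:

```python
def similar_fonts(new_font_features, catalog, k=2):
    """Return the k catalog fonts whose feature vectors are closest
    (Euclidean distance) to the new font's features."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    ranked = sorted(catalog, key=lambda item: dist(new_font_features, item[1]))
    return [name for name, _ in ranked[:k]]

# Hypothetical feature vectors: [serif-ness, weight, slant].
catalog = [
    ("SerifA", [0.9, 0.1, 0.3]),
    ("SansB",  [0.1, 0.4, 0.0]),
    ("SansC",  [0.2, 0.5, 0.1]),
]
matches = similar_fonts([0.15, 0.45, 0.05], catalog)
```

In practice such feature vectors would come from the automatic font classifier the abstract describes, so newly detected fonts can be surfaced next to visually similar ones in the user interface.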
  • Patent number: 10467760
    Abstract: This disclosure involves generating and outputting a segmentation model using 3D models having user-provided labels and scene graphs. For example, a system uses a neural network learned from the user-provided labels to transform feature vectors, which represent component shapes of the 3D models, into transformed feature vectors identifying points in a feature space. The system identifies component-shape groups from clusters of the points in the feature space. The system determines, from the scene graphs, parent-child relationships for the component-shape groups. The system generates a segmentation hierarchy with nodes corresponding to the component-shape groups and links corresponding to the parent-child relationships. The system trains a point classifier to assign feature points, which are sampled from an input 3D shape, to nodes of the segmentation hierarchy, and thereby segment the input 3D shape into component shapes.
    Type: Grant
    Filed: February 23, 2017
    Date of Patent: November 5, 2019
    Assignee: Adobe Inc.
    Inventors: Vladimir Kim, Aaron Hertzmann, Mehmet Yumer, Li Yi
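One step from the abstract, deriving parent-child relationships for component-shape *groups* from the per-component edges of the scene graphs, can be sketched as a vote among lifted edges. The group labels and scene-graph edges below are hypothetical:

```python
from collections import Counter

def group_hierarchy(scene_graph_edges, group_of):
    """Lift per-component (parent, child) scene-graph edges to group level,
    keeping the most frequently voted parent group for each child group."""
    votes = Counter()
    for parent, child in scene_graph_edges:
        if group_of[parent] != group_of[child]:
            votes[(group_of[parent], group_of[child])] += 1
    parent_of = {}
    # most_common() yields edges in descending vote order, so setdefault
    # keeps the strongest parent link for each child group.
    for (parent_group, child_group), _count in votes.most_common():
        parent_of.setdefault(child_group, parent_group)
    return parent_of

edges = [("body1", "leg1"), ("body1", "leg2"), ("body2", "armrest")]
groups = {"body1": "body", "body2": "body",
          "leg1": "leg", "leg2": "leg", "armrest": "arm"}
links = group_hierarchy(edges, groups)
```

The resulting links become the edges of the segmentation hierarchy; the groups themselves would come from clustering the transformed feature vectors, as the abstract describes.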
  • Publication number: 20190295280
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating modified video content to reduce depth conflicts between user interface elements and video objects. For example, the disclosed systems can analyze an input video to identify feature points that designate objects within the input video and to determine the depths of the identified feature points. In addition, the disclosed systems can compare the depths of the feature points with a depth of a user interface element to determine whether there are any depth conflicts. In response to detecting a depth conflict, the disclosed systems can modify the depth of the user interface element to reduce or avoid the depth conflict. Furthermore, the disclosed systems can apply a blurring effect to an area around a user interface element to reduce the effect of depth conflicts.
    Type: Application
    Filed: March 26, 2018
    Publication date: September 26, 2019
    Inventors: Stephen DiVerdi, Cuong Nguyen, Aaron Hertzmann, Feng Liu
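The depth-conflict check in the abstract above can be sketched as a comparison between the depths of feature points under a UI element and the element's own depth, nudging the element forward on conflict. The threshold, units, and data layout are assumptions:

```python
def resolve_ui_depth(ui_depth, feature_depths, margin=0.1):
    """Return a UI depth that sits at least `margin` in front of the
    nearest overlapping video feature point (smaller depth = closer)."""
    nearest = min(feature_depths)
    if ui_depth >= nearest:          # UI element at or behind video content
        return nearest - margin      # move it in front to avoid the conflict
    return ui_depth

# Feature points under the element lie at 2.0-3.5 m; the element was
# placed at 2.5 m, behind the nearest point, so it gets pulled forward.
new_depth = resolve_ui_depth(2.5, [2.0, 3.0, 3.5])
```

The abstract's blurring fallback would apply when moving the element is undesirable, softening the region around it instead of changing its depth.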
  • Publication number: 20180300882
    Abstract: The present disclosure includes methods and systems for identifying and manipulating a segment of a three-dimensional digital model based on soft classification of the three-dimensional digital model. In particular, one or more embodiments of the disclosed systems and methods identify a soft classification of a digital model and utilize the soft classification to tune segmentation algorithms. For example, the disclosed systems and methods can utilize a soft classification to select a segmentation algorithm from a plurality of segmentation algorithms, to combine segmentation parameters from a plurality of segmentation algorithms, and/or to identify input parameters for a segmentation algorithm. The disclosed systems and methods can utilize the tuned segmentation algorithms to accurately and efficiently identify a segment of a three-dimensional digital model.
    Type: Application
    Filed: April 14, 2017
    Publication date: October 18, 2018
    Inventors: Vladimir Kim, Aaron Hertzmann, Mehmet Yumer
  • Publication number: 20180240243
    Abstract: This disclosure involves generating and outputting a segmentation model using 3D models having user-provided labels and scene graphs. For example, a system uses a neural network learned from the user-provided labels to transform feature vectors, which represent component shapes of the 3D models, into transformed feature vectors identifying points in a feature space. The system identifies component-shape groups from clusters of the points in the feature space. The system determines, from the scene graphs, parent-child relationships for the component-shape groups. The system generates a segmentation hierarchy with nodes corresponding to the component-shape groups and links corresponding to the parent-child relationships. The system trains a point classifier to assign feature points, which are sampled from an input 3D shape, to nodes of the segmentation hierarchy, and thereby segment the input 3D shape into component shapes.
    Type: Application
    Filed: February 23, 2017
    Publication date: August 23, 2018
    Inventors: Vladimir Kim, Aaron Hertzmann, Mehmet Yumer, Li Yi
  • Patent number: 10021001
    Abstract: The present disclosure is directed toward systems and methods for analyzing event sequence data. Additionally, the present disclosure is directed toward systems and methods for providing visualizations of event sequence data analyses. For example, systems and methods described herein can analyze event sequence data related to websites and provide matrix-based visualizations of the event sequence data. The matrix-based visualization can be interactive and can allow a user to trace changes in traffic volume across webpages and hyperlinks of a website.
    Type: Grant
    Filed: January 5, 2017
    Date of Patent: July 10, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Lubomira Dontcheva, Jian Zhao, Aaron Hertzmann, Alan Wilson, Zhicheng Liu
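The matrix that backs the visualization described above is essentially a table of page-to-page transition counts extracted from session event sequences. A minimal sketch, with made-up page names:

```python
def transition_matrix(sessions):
    """Map (from_page, to_page) -> number of observed transitions,
    the raw counts a matrix-based traffic visualization would render."""
    counts = {}
    for session in sessions:
        for src, dst in zip(session, session[1:]):
            counts[(src, dst)] = counts.get((src, dst), 0) + 1
    return counts

sessions = [
    ["home", "pricing", "signup"],
    ["home", "pricing", "home"],
    ["home", "docs"],
]
matrix = transition_matrix(sessions)
```

Comparing two such matrices (for example, before and after a site redesign) is what lets a user trace changes in traffic volume across webpages and hyperlinks, as the abstract describes.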
  • Patent number: 9978003
    Abstract: Systems and methods are disclosed for segregating target individuals represented in a probe digital image from background pixels in the probe digital image. In particular, in one or more embodiments, the disclosed systems and methods train a neural network based on two or more of training position channels, training shape input channels, training color channels, or training object data. Moreover, in one or more embodiments, the disclosed systems and methods utilize the trained neural network to select a target individual in a probe digital image. Specifically, in one or more embodiments, the disclosed systems and methods generate position channels, training shape input channels, and color channels corresponding to the probe digital image, and utilize the generated channels in conjunction with the trained neural network to select the target individual.
    Type: Grant
    Filed: August 17, 2017
    Date of Patent: May 22, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Ian Sachs, Xiaoyong Shen, Sylvain Paris, Aaron Hertzmann, Elya Shechtman, Brian Price
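The input construction the abstract outlines, stacking position, shape, and color channels into one multi-channel tensor for the selection network, can be sketched as follows. The channel ordering, normalization, and list-of-lists layout are illustrative assumptions:

```python
def build_input_channels(rgb, shape_prior):
    """Return per-pixel channels for the network: 3 color + 2 normalized
    position (x, y) + 1 shape-prior channel."""
    h, w = len(rgb), len(rgb[0])
    xs = [[x / max(w - 1, 1) for x in range(w)] for _ in range(h)]
    ys = [[y / max(h - 1, 1)] * w for y in range(h)]
    r = [[px[0] for px in row] for row in rgb]
    g = [[px[1] for px in row] for row in rgb]
    b = [[px[2] for px in row] for row in rgb]
    return [r, g, b, xs, ys, shape_prior]

# A 2x2 image plus a rough shape prior marking the target's location.
rgb = [[(0.1, 0.2, 0.3)] * 2 for _ in range(2)]
prior = [[1.0, 0.0], [0.0, 0.0]]
channels = build_input_channels(rgb, prior)
```

The position and shape channels give the network an explicit spatial prior about where the target individual sits, which is what distinguishes this setup from feeding color alone.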
  • Publication number: 20180121069
    Abstract: The present disclosure is directed toward systems and methods that enable simultaneous viewing and editing of audio-visual content within a virtual-reality environment (i.e., while wearing a virtual-reality device). For example, the virtual-reality editing system allows for editing of audio-visual content while viewing the audio-visual content via a virtual-reality device. In particular, the virtual-reality editing system provides an editing interface over a display of audio-visual content provided via a virtual-reality device (e.g., a virtual-reality headset) that allows for editing of the audio-visual content.
    Type: Application
    Filed: October 28, 2016
    Publication date: May 3, 2018
    Inventors: Stephen DiVerdi, Aaron Hertzmann, Cuong Nguyen
  • Publication number: 20170344860
    Abstract: Systems and methods are disclosed for segregating target individuals represented in a probe digital image from background pixels in the probe digital image. In particular, in one or more embodiments, the disclosed systems and methods train a neural network based on two or more of training position channels, training shape input channels, training color channels, or training object data. Moreover, in one or more embodiments, the disclosed systems and methods utilize the trained neural network to select a target individual in a probe digital image. Specifically, in one or more embodiments, the disclosed systems and methods generate position channels, training shape input channels, and color channels corresponding to the probe digital image, and utilize the generated channels in conjunction with the trained neural network to select the target individual.
    Type: Application
    Filed: August 17, 2017
    Publication date: November 30, 2017
    Inventors: Ian Sachs, Xiaoyong Shen, Sylvain Paris, Aaron Hertzmann, Elya Shechtman, Brian Price
  • Patent number: 9773196
    Abstract: Systems and methods are disclosed for segregating target individuals represented in a probe digital image from background pixels in the probe digital image. In particular, in one or more embodiments, the disclosed systems and methods train a neural network based on two or more of training position channels, training shape input channels, training color channels, or training object data. Moreover, in one or more embodiments, the disclosed systems and methods utilize the trained neural network to select a target individual in a probe digital image. Specifically, in one or more embodiments, the disclosed systems and methods generate position channels, training shape input channels, and color channels corresponding to the probe digital image, and utilize the generated channels in conjunction with the trained neural network to select the target individual.
    Type: Grant
    Filed: January 25, 2016
    Date of Patent: September 26, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Ian Sachs, Xiaoyong Shen, Sylvain Paris, Aaron Hertzmann, Elya Shechtman, Brian Price
  • Publication number: 20170262413
    Abstract: Systems and methods are disclosed for classifying digital fonts. In particular, in one or more embodiments, the disclosed systems and methods detect a new digital font, automatically classify the digital font into one or more font classifications, and make the digital font available via a user interface. More particularly, the disclosed systems and methods can conduct searches for the new digital font, identify digital fonts similar to the new digital font, and apply the new digital font to digital text in an electronic document.
    Type: Application
    Filed: March 9, 2016
    Publication date: September 14, 2017
    Inventors: Yuyan Song, Seth Shaw, Aaron Hertzmann
  • Publication number: 20170213112
    Abstract: Systems and methods are disclosed for segregating target individuals represented in a probe digital image from background pixels in the probe digital image. In particular, in one or more embodiments, the disclosed systems and methods train a neural network based on two or more of training position channels, training shape input channels, training color channels, or training object data. Moreover, in one or more embodiments, the disclosed systems and methods utilize the trained neural network to select a target individual in a probe digital image. Specifically, in one or more embodiments, the disclosed systems and methods generate position channels, training shape input channels, and color channels corresponding to the probe digital image, and utilize the generated channels in conjunction with the trained neural network to select the target individual.
    Type: Application
    Filed: January 25, 2016
    Publication date: July 27, 2017
    Inventors: Ian Sachs, Xiaoyong Shen, Sylvain Paris, Aaron Hertzmann, Elya Shechtman, Brian Price
  • Publication number: 20170118093
    Abstract: The present disclosure is directed toward systems and methods for analyzing event sequence data. Additionally, the present disclosure is directed toward systems and methods for providing visualizations of event sequence data analyses. For example, systems and methods described herein can analyze event sequence data related to websites and provide matrix-based visualizations of the event sequence data. The matrix-based visualization can be interactive and can allow a user to trace changes in traffic volume across webpages and hyperlinks of a website.
    Type: Application
    Filed: January 5, 2017
    Publication date: April 27, 2017
    Inventors: Lubomira Dontcheva, Jian Zhao, Aaron Hertzmann, Alan Wilson, Zhicheng Liu
  • Patent number: 9577897
    Abstract: The present disclosure is directed toward systems and methods for analyzing event sequence data. Additionally, the present disclosure is directed toward systems and methods for providing visualizations of event sequence data analyses. For example, systems and methods described herein can analyze event sequence data related to websites and provide matrix-based visualizations of the event sequence data. The matrix-based visualization can be interactive and can allow a user to trace changes in traffic volume across webpages and hyperlinks of a website.
    Type: Grant
    Filed: February 20, 2015
    Date of Patent: February 21, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Lubomira Dontcheva, Jian Zhao, Aaron Hertzmann, Alan Wilson, Zhicheng Liu
  • Publication number: 20160248644
    Abstract: The present disclosure is directed toward systems and methods for analyzing event sequence data. Additionally, the present disclosure is directed toward systems and methods for providing visualizations of event sequence data analyses. For example, systems and methods described herein can analyze event sequence data related to websites and provide matrix-based visualizations of the event sequence data. The matrix-based visualization can be interactive and can allow a user to trace changes in traffic volume across webpages and hyperlinks of a website.
    Type: Application
    Filed: February 20, 2015
    Publication date: August 25, 2016
    Inventors: Lubomira Dontcheva, Jian Zhao, Aaron Hertzmann, Alan Wilson, Zhicheng Liu
  • Patent number: 6628282
    Abstract: A system for viewing a scene from a remote location. The system includes a client machine. The system includes a network connected to the client machine. The system includes a server machine having a 3D environment stored in it. The server machine is connected to the network and remote from the client machine, wherein the client machine predicts a next view of the 3D environment based on a previous view of the 3D environment, and the server machine predicts the next view also based on the previous view and sends to the client machine by way of the network only the difference between the predicted view and the previous view. Methods for viewing a scene from a remote location are also disclosed.
    Type: Grant
    Filed: October 22, 1999
    Date of Patent: September 30, 2003
    Assignee: New York University
    Inventors: Aaron Hertzmann, Henning Biermann, Jon Meyer, Kenneth Perlin
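The bandwidth-saving idea in this last abstract, client and server running the same deterministic view predictor so that only a small residual needs to be transmitted, can be sketched in one dimension per coordinate. The linear-extrapolation predictor and the flat list "views" are assumptions made for illustration:

```python
def predict_next(prev_view, velocity):
    """Deterministic predictor shared by client and server: extrapolate
    the previous view by the current view velocity."""
    return [p + v for p, v in zip(prev_view, velocity)]

def server_encode(true_view, prev_view, velocity):
    """Server side: send only the residual between the true next view
    and the prediction both machines can compute."""
    predicted = predict_next(prev_view, velocity)
    return [t - p for t, p in zip(true_view, predicted)]

def client_decode(residual, prev_view, velocity):
    """Client side: rebuild the true view from its own prediction plus
    the transmitted residual."""
    predicted = predict_next(prev_view, velocity)
    return [p + r for p, r in zip(predicted, residual)]

prev, vel = [0.0, 0.0], [1.0, 0.5]
true_next = [1.1, 0.4]
residual = server_encode(true_next, prev, vel)   # only this crosses the network
decoded = client_decode(residual, prev, vel)
```

Because both machines compute the same prediction from the same previous view, the residual is small whenever motion is smooth, which is what makes transmitting only the difference worthwhile.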