Patents by Inventor Aaron Hertzmann
Aaron Hertzmann has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10748324
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that integrate (or embed) a non-photorealistic rendering (“NPR”) generator with a style-transfer-neural network to generate stylized images that both correspond to a source image and resemble a stroke style. By integrating an NPR generator with a style-transfer-neural network, the disclosed methods, non-transitory computer readable media, and systems can accurately capture a stroke style resembling one or both of stylized edges or stylized shadings. When training such a style-transfer-neural network, the integrated NPR generator can enable the disclosed methods, non-transitory computer readable media, and systems to use real-stroke drawings (instead of conventional paired-ground-truth drawings) for training the network to accurately portray a stroke style.
Type: Grant
Filed: November 8, 2018
Date of Patent: August 18, 2020
Assignee: ADOBE INC.
Inventors: Elya Shechtman, Yijun Li, Chen Fang, Aaron Hertzmann
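The abstract pairs an NPR generator with a style-transfer network. As one illustrative stand-in for such an NPR generator (hypothetical, not taken from the patent), a difference-of-Gaussians operator extracts a binary stroke map of stylized edges that a downstream stylization network could consume:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with reflect padding."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    padded = np.pad(img, radius, mode="reflect")
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, rows)

def npr_stroke_map(img, sigma=1.0, k=1.6, eps=0.01):
    """Difference-of-Gaussians: a classic NPR operator that responds
    near intensity edges, yielding a stylized edge-stroke map."""
    d = gaussian_blur(img, sigma) - gaussian_blur(img, k * sigma)
    return (d > eps).astype(float)
```

On a simple step image, the operator fires along the brightness boundary and stays zero in flat regions; a style-transfer network would then be trained to render these strokes in a target drawing style.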
-
Patent number: 10706554
Abstract: The present disclosure includes methods and systems for identifying and manipulating a segment of a three-dimensional digital model based on soft classification of the three-dimensional digital model. In particular, one or more embodiments of the disclosed systems and methods identify a soft classification of a digital model and utilize the soft classification to tune segmentation algorithms. For example, the disclosed systems and methods can utilize a soft classification to select a segmentation algorithm from a plurality of segmentation algorithms, to combine segmentation parameters from a plurality of segmentation algorithms, and/or to identify input parameters for a segmentation algorithm. The disclosed systems and methods can utilize the tuned segmentation algorithms to accurately and efficiently identify a segment of a three-dimensional digital model.
Type: Grant
Filed: April 14, 2017
Date of Patent: July 7, 2020
Assignee: ADOBE INC.
Inventors: Vladimir Kim, Aaron Hertzmann, Mehmet Yumer
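One of the listed uses of a soft classification is combining segmentation parameters from several algorithms. A minimal sketch of that idea (the class names and parameter names are hypothetical) blends each class's preferred parameters, weighted by the soft classification score:

```python
def tune_segmentation_params(soft_scores, class_params):
    """Blend per-class segmentation parameters by soft classification scores.

    soft_scores:  dict mapping class name -> probability (summing to 1)
    class_params: dict mapping class name -> dict of numeric parameters
    Returns a single blended parameter dict.
    """
    blended = {}
    for cls, weight in soft_scores.items():
        for name, value in class_params[cls].items():
            blended[name] = blended.get(name, 0.0) + weight * value
    return blended
```

A model that is 75% "chair" and 25% "table" thus receives parameters three-quarters of the way toward the chair-specific settings, rather than a hard either/or choice.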
-
Publication number: 20200151938
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that integrate (or embed) a non-photorealistic rendering (“NPR”) generator with a style-transfer-neural network to generate stylized images that both correspond to a source image and resemble a stroke style. By integrating an NPR generator with a style-transfer-neural network, the disclosed methods, non-transitory computer readable media, and systems can accurately capture a stroke style resembling one or both of stylized edges or stylized shadings. When training such a style-transfer-neural network, the integrated NPR generator can enable the disclosed methods, non-transitory computer readable media, and systems to use real-stroke drawings (instead of conventional paired-ground-truth drawings) for training the network to accurately portray a stroke style.
Type: Application
Filed: November 8, 2018
Publication date: May 14, 2020
Inventors: Elya Shechtman, Yijun Li, Chen Fang, Aaron Hertzmann
-
Patent number: 10489489
Abstract: Systems and methods are disclosed for classifying digital fonts. In particular, in one or more embodiments, the disclosed systems and methods detect a new digital font, automatically classify the digital font into one or more font classifications, and make the digital font available via a user interface. More particularly, the disclosed systems and methods can conduct searches for the new digital font, identify digital fonts similar to the new digital font, and apply the new digital font to digital text in an electronic document.
Type: Grant
Filed: March 9, 2016
Date of Patent: November 26, 2019
Assignee: Adobe Inc.
Inventors: Yuyan Song, Seth Shaw, Aaron Hertzmann
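Identifying "digital fonts similar to the new digital font" is, at its simplest, a nearest-neighbor search over font feature vectors. A toy sketch of that step (the feature vectors and font names are made up for illustration) ranks a library by cosine similarity:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def most_similar_fonts(query_vec, font_library, top_k=3):
    """Rank known fonts by feature-vector similarity to a new font."""
    ranked = sorted(font_library.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]
```

In practice the feature vectors would come from a learned classifier rather than being hand-assigned, but the retrieval step has this shape.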
-
Patent number: 10467760
Abstract: This disclosure involves generating and outputting a segmentation model using 3D models having user-provided labels and scene graphs. For example, a system uses a neural network learned from the user-provided labels to transform feature vectors, which represent component shapes of the 3D models, into transformed feature vectors identifying points in a feature space. The system identifies component-shape groups from clusters of the points in the feature space. The system determines, from the scene graphs, parent-child relationships for the component-shape groups. The system generates a segmentation hierarchy with nodes corresponding to the component-shape groups and links corresponding to the parent-child relationships. The system trains a point classifier to assign feature points, which are sampled from an input 3D shape, to nodes of the segmentation hierarchy, and thereby segment the input 3D shape into component shapes.
Type: Grant
Filed: February 23, 2017
Date of Patent: November 5, 2019
Assignee: Adobe Inc.
Inventors: Vladimir Kim, Aaron Hertzmann, Mehmet Yumer, Li Yi
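The hierarchy-building step described above (nodes = component-shape groups, links = scene-graph parent-child relations) can be sketched as a small tree constructor; the group names here are hypothetical examples, not from the patent:

```python
def build_segmentation_hierarchy(group_labels, parent_child_pairs):
    """Build a tree of component-shape groups from scene-graph relations.

    group_labels:       iterable of group names (the hierarchy's nodes)
    parent_child_pairs: iterable of (parent, child) tuples (the links)
    Returns {group: [children]} with a synthetic "root" over any group
    that has no parent.
    """
    children = {g: [] for g in group_labels}
    has_parent = set()
    for parent, child in parent_child_pairs:
        children[parent].append(child)
        has_parent.add(child)
    children["root"] = [g for g in group_labels if g not in has_parent]
    return children
```

A point classifier would then be trained to map sampled surface points to nodes of this tree, segmenting a new shape top-down.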
-
Publication number: 20190295280
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating modified video content to reduce depth conflicts between user interface elements and video objects. For example, the disclosed systems can analyze an input video to identify feature points that designate objects within the input video and to determine the depths of the identified feature points. In addition, the disclosed systems can compare the depths of the feature points with a depth of a user interface element to determine whether there are any depth conflicts. In response to detecting a depth conflict, the disclosed systems can modify the depth of the user interface element to reduce or avoid the depth conflict. Furthermore, the disclosed systems can apply a blurring effect to an area around a user interface element to reduce the effect of depth conflicts.
Type: Application
Filed: March 26, 2018
Publication date: September 26, 2019
Inventors: Stephen DiVerdi, Cuong Nguyen, Aaron Hertzmann, Feng Liu
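The depth-comparison step has a simple core: if the UI element's depth is at or beyond the nearest scene feature point, it would appear to intersect or sit behind scene content. A minimal sketch of one possible adjustment rule (the margin value is an assumption, not from the patent):

```python
def adjust_ui_depth(ui_depth, feature_depths, margin=0.1):
    """Move a UI element in front of the nearest conflicting scene point.

    Depths are distances from the viewer, so smaller = closer.
    feature_depths: depths of feature points detected in the video frame.
    """
    nearest = min(feature_depths)
    if ui_depth >= nearest:       # element would collide with scene content
        return nearest - margin   # re-place it slightly in front
    return ui_depth               # no conflict: leave it where it is
```

The patent also describes blurring the region around the element as a complementary way to soften any remaining conflict.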
-
Publication number: 20180300882
Abstract: The present disclosure includes methods and systems for identifying and manipulating a segment of a three-dimensional digital model based on soft classification of the three-dimensional digital model. In particular, one or more embodiments of the disclosed systems and methods identify a soft classification of a digital model and utilize the soft classification to tune segmentation algorithms. For example, the disclosed systems and methods can utilize a soft classification to select a segmentation algorithm from a plurality of segmentation algorithms, to combine segmentation parameters from a plurality of segmentation algorithms, and/or to identify input parameters for a segmentation algorithm. The disclosed systems and methods can utilize the tuned segmentation algorithms to accurately and efficiently identify a segment of a three-dimensional digital model.
Type: Application
Filed: April 14, 2017
Publication date: October 18, 2018
Inventors: Vladimir Kim, Aaron Hertzmann, Mehmet Yumer
-
Publication number: 20180240243
Abstract: This disclosure involves generating and outputting a segmentation model using 3D models having user-provided labels and scene graphs. For example, a system uses a neural network learned from the user-provided labels to transform feature vectors, which represent component shapes of the 3D models, into transformed feature vectors identifying points in a feature space. The system identifies component-shape groups from clusters of the points in the feature space. The system determines, from the scene graphs, parent-child relationships for the component-shape groups. The system generates a segmentation hierarchy with nodes corresponding to the component-shape groups and links corresponding to the parent-child relationships. The system trains a point classifier to assign feature points, which are sampled from an input 3D shape, to nodes of the segmentation hierarchy, and thereby segment the input 3D shape into component shapes.
Type: Application
Filed: February 23, 2017
Publication date: August 23, 2018
Inventors: Vladimir Kim, Aaron Hertzmann, Mehmet Yumer, Li Yi
-
Patent number: 10021001
Abstract: The present disclosure is directed toward systems and methods for analyzing event sequence data. Additionally, the present disclosure is directed toward systems and methods for providing visualizations of event sequence data analyses. For example, systems and methods described herein can analyze event sequence data related to websites and provide matrix-based visualizations of the event sequence data. The matrix-based visualization can be interactive and can allow a user to trace changes in traffic volume across webpages and hyperlinks of a website.
Type: Grant
Filed: January 5, 2017
Date of Patent: July 10, 2018
Assignee: ADOBE SYSTEMS INCORPORATED
Inventors: Lubomira Dontcheva, Jian Zhao, Aaron Hertzmann, Alan Wilson, Zhicheng Liu
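The raw data behind a matrix-based traffic visualization is a page-to-page transition count: rows are source pages, columns are destination pages, and cell values are how often visitors moved between them. A small sketch of that aggregation (the page names are invented for illustration):

```python
from collections import defaultdict

def transition_matrix(sequences):
    """Count page-to-page transitions across visit sequences.

    sequences: iterable of page-visit lists, one per session.
    Returns {(src_page, dst_page): count}, the sparse matrix a
    visualization would render as a grid of cells.
    """
    counts = defaultdict(int)
    for seq in sequences:
        for src, dst in zip(seq, seq[1:]):
            counts[(src, dst)] += 1
    return dict(counts)
```

Tracing "changes in traffic volume" then amounts to comparing such matrices computed over different time windows.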
-
Patent number: 9978003
Abstract: Systems and methods are disclosed for segregating target individuals represented in a probe digital image from background pixels in the probe digital image. In particular, in one or more embodiments, the disclosed systems and methods train a neural network based on two or more of training position channels, training shape input channels, training color channels, or training object data. Moreover, in one or more embodiments, the disclosed systems and methods utilize the trained neural network to select a target individual in a probe digital image. Specifically, in one or more embodiments, the disclosed systems and methods generate position channels, shape input channels, and color channels corresponding to the probe digital image, and utilize the generated channels in conjunction with the trained neural network to select the target individual.
Type: Grant
Filed: August 17, 2017
Date of Patent: May 22, 2018
Assignee: ADOBE SYSTEMS INCORPORATED
Inventors: Ian Sachs, Xiaoyong Shen, Sylvain Paris, Aaron Hertzmann, Elya Shechtman, Brian Price
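The channel-generation step can be pictured as stacking extra planes onto the RGB image: normalized x/y coordinate maps (position channels) and a shape prior (shape input channel), giving the network spatial context alongside color. A minimal sketch, with the channel layout being an assumption for illustration rather than the patent's exact encoding:

```python
import numpy as np

def build_input_channels(rgb, shape_prior):
    """Stack color, normalized x/y position, and shape-prior channels.

    rgb:         (H, W, 3) float image.
    shape_prior: (H, W) map, e.g. a mean person-silhouette mask.
    Returns an (H, W, 6) tensor suitable as network input.
    """
    h, w, _ = rgb.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x_chan = xs / max(w - 1, 1)   # 0..1, left to right
    y_chan = ys / max(h - 1, 1)   # 0..1, top to bottom
    return np.dstack([rgb, x_chan, y_chan, shape_prior])
```

The same channel construction is applied both when training the network and when running it on a new probe image, so the two phases see identically structured input.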
-
Publication number: 20180121069
Abstract: The present disclosure is directed toward systems and methods that enable simultaneous viewing and editing of audio-visual content within a virtual-reality environment (i.e., while wearing a virtual-reality device). For example, the virtual-reality editing system allows for editing of audio-visual content while viewing the audio-visual content via a virtual-reality device. In particular, the virtual-reality editing system provides an editing interface over a display of audio-visual content provided via a virtual-reality device (e.g., a virtual-reality headset) that allows for editing of the audio-visual content.
Type: Application
Filed: October 28, 2016
Publication date: May 3, 2018
Inventors: Stephen DiVerdi, Aaron Hertzmann, Cuong Nguyen
-
Publication number: 20170344860
Abstract: Systems and methods are disclosed for segregating target individuals represented in a probe digital image from background pixels in the probe digital image. In particular, in one or more embodiments, the disclosed systems and methods train a neural network based on two or more of training position channels, training shape input channels, training color channels, or training object data. Moreover, in one or more embodiments, the disclosed systems and methods utilize the trained neural network to select a target individual in a probe digital image. Specifically, in one or more embodiments, the disclosed systems and methods generate position channels, shape input channels, and color channels corresponding to the probe digital image, and utilize the generated channels in conjunction with the trained neural network to select the target individual.
Type: Application
Filed: August 17, 2017
Publication date: November 30, 2017
Inventors: Ian Sachs, Xiaoyong Shen, Sylvain Paris, Aaron Hertzmann, Elya Shechtman, Brian Price
-
Patent number: 9773196
Abstract: Systems and methods are disclosed for segregating target individuals represented in a probe digital image from background pixels in the probe digital image. In particular, in one or more embodiments, the disclosed systems and methods train a neural network based on two or more of training position channels, training shape input channels, training color channels, or training object data. Moreover, in one or more embodiments, the disclosed systems and methods utilize the trained neural network to select a target individual in a probe digital image. Specifically, in one or more embodiments, the disclosed systems and methods generate position channels, shape input channels, and color channels corresponding to the probe digital image, and utilize the generated channels in conjunction with the trained neural network to select the target individual.
Type: Grant
Filed: January 25, 2016
Date of Patent: September 26, 2017
Assignee: ADOBE SYSTEMS INCORPORATED
Inventors: Ian Sachs, Xiaoyong Shen, Sylvain Paris, Aaron Hertzmann, Elya Shechtman, Brian Price
-
Publication number: 20170262413
Abstract: Systems and methods are disclosed for classifying digital fonts. In particular, in one or more embodiments, the disclosed systems and methods detect a new digital font, automatically classify the digital font into one or more font classifications, and make the digital font available via a user interface. More particularly, the disclosed systems and methods can conduct searches for the new digital font, identify digital fonts similar to the new digital font, and apply the new digital font to digital text in an electronic document.
Type: Application
Filed: March 9, 2016
Publication date: September 14, 2017
Inventors: Yuyan Song, Seth Shaw, Aaron Hertzmann
-
Publication number: 20170213112
Abstract: Systems and methods are disclosed for segregating target individuals represented in a probe digital image from background pixels in the probe digital image. In particular, in one or more embodiments, the disclosed systems and methods train a neural network based on two or more of training position channels, training shape input channels, training color channels, or training object data. Moreover, in one or more embodiments, the disclosed systems and methods utilize the trained neural network to select a target individual in a probe digital image. Specifically, in one or more embodiments, the disclosed systems and methods generate position channels, shape input channels, and color channels corresponding to the probe digital image, and utilize the generated channels in conjunction with the trained neural network to select the target individual.
Type: Application
Filed: January 25, 2016
Publication date: July 27, 2017
Inventors: Ian Sachs, Xiaoyong Shen, Sylvain Paris, Aaron Hertzmann, Elya Shechtman, Brian Price
-
Publication number: 20170118093
Abstract: The present disclosure is directed toward systems and methods for analyzing event sequence data. Additionally, the present disclosure is directed toward systems and methods for providing visualizations of event sequence data analyses. For example, systems and methods described herein can analyze event sequence data related to websites and provide matrix-based visualizations of the event sequence data. The matrix-based visualization can be interactive and can allow a user to trace changes in traffic volume across webpages and hyperlinks of a website.
Type: Application
Filed: January 5, 2017
Publication date: April 27, 2017
Inventors: Lubomira Dontcheva, Jian Zhao, Aaron Hertzmann, Alan Wilson, Zhicheng Liu
-
Patent number: 9577897
Abstract: The present disclosure is directed toward systems and methods for analyzing event sequence data. Additionally, the present disclosure is directed toward systems and methods for providing visualizations of event sequence data analyses. For example, systems and methods described herein can analyze event sequence data related to websites and provide matrix-based visualizations of the event sequence data. The matrix-based visualization can be interactive and can allow a user to trace changes in traffic volume across webpages and hyperlinks of a website.
Type: Grant
Filed: February 20, 2015
Date of Patent: February 21, 2017
Assignee: ADOBE SYSTEMS INCORPORATED
Inventors: Lubomira Dontcheva, Jian Zhao, Aaron Hertzmann, Alan Wilson, Zhicheng Liu
-
Publication number: 20160248644
Abstract: The present disclosure is directed toward systems and methods for analyzing event sequence data. Additionally, the present disclosure is directed toward systems and methods for providing visualizations of event sequence data analyses. For example, systems and methods described herein can analyze event sequence data related to websites and provide matrix-based visualizations of the event sequence data. The matrix-based visualization can be interactive and can allow a user to trace changes in traffic volume across webpages and hyperlinks of a website.
Type: Application
Filed: February 20, 2015
Publication date: August 25, 2016
Inventors: Lubomira Dontcheva, Jian Zhao, Aaron Hertzmann, Alan Wilson, Zhicheng Liu
-
Patent number: 6628282
Abstract: A system for viewing a scene from a remote location. The system includes a client machine. The system includes a network connected to the client machine. The system includes a server machine having a 3D environment stored in it. The server machine is connected to the network and remote from the client machine, wherein the client machine predicts a next view of the 3D environment based on a previous view of the 3D environment by the client machine, and the server machine predicts the next view also based on the previous view and sends to the client machine by way of the network only the difference between the predicted view and the previous view. Methods for viewing a scene from a remote location.
Type: Grant
Filed: October 22, 1999
Date of Patent: September 30, 2003
Assignee: New York University
Inventors: Aaron Hertzmann, Henning Biermann, Jon Meyer, Kenneth Perlin
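The bandwidth saving here comes from both endpoints running the same predictor: the server only transmits the residual between the true next view and the shared prediction, and the client adds that residual to its own prediction. A toy sketch with views as flat parameter vectors and linear extrapolation as the shared predictor (both choices are illustrative assumptions, not the patent's method):

```python
def predict_next(prev_views):
    """Linear extrapolation from the last two views.
    Both client and server run this identically."""
    a, b = prev_views[-2], prev_views[-1]
    return [2 * y - x for x, y in zip(a, b)]

def server_residual(actual_next, prev_views):
    """Server sends only (actual - predicted) over the network."""
    pred = predict_next(prev_views)
    return [a - p for a, p in zip(actual_next, pred)]

def client_reconstruct(residual, prev_views):
    """Client rebuilds the true next view from its own prediction."""
    pred = predict_next(prev_views)
    return [p + r for p, r in zip(pred, residual)]
```

When motion is smooth the residual is near zero and compresses well, which is the point of predicting on both sides of the link.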