Patents by Inventor Xuaner Zhang
Xuaner Zhang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
Patent number: 12271804
Abstract: Embodiments are disclosed for performing wire segmentation of images using machine learning. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving an input image, generating, by a first trained neural network model, a global probability map representation of the input image indicating a probability value of each pixel including a representation of wires, and identifying regions of the input image indicated as including the representation of wires. The disclosed systems and methods further comprise, for each region from the identified regions, concatenating the region and information from the global probability map to create a concatenated input, and generating, by a second trained neural network model, a local probability map representation of the region based on the concatenated input, indicating pixels of the region including representations of wires. The disclosed systems and methods further comprise aggregating local probability maps for each region.
Type: Grant
Filed: July 21, 2022
Date of Patent: April 8, 2025
Assignee: Adobe Inc.
Inventors: Mang Tik Chiu, Connelly Barnes, Zijun Wei, Zhe Lin, Yuqian Zhou, Xuaner Zhang, Sohrab Amirghodsi, Florian Kainz, Elya Shechtman

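The two-stage pipeline in the abstract (coarse global probability map, per-region refinement, aggregation) can be sketched as follows. This is an illustrative toy, not Adobe's implementation: `global_model` and `local_model` are hand-made stand-ins for the two trained neural networks, and the tiling-based region selection is an assumption for clarity.

```python
def global_model(image):
    # Stand-in for the first trained network: a coarse per-pixel
    # probability that the pixel depicts a wire.
    return [[0.9 if v > 128 else 0.1 for v in row] for row in image]

def local_model(concatenated):
    # Stand-in for the second trained network: refines a region given the
    # (region pixel, global probability) concatenation, emitting a binary mask.
    return [[1 if (pix + prob) / 2 > 0.5 else 0 for pix, prob in row]
            for row in concatenated]

def segment_wires(image, threshold=0.5, region_size=2):
    global_map = global_model(image)
    h, w = len(image), len(image[0])
    final = [[0] * w for _ in range(h)]
    # Walk regions of the image; refine only those the global map flags as
    # containing wires, then aggregate the local maps into one output.
    for y in range(0, h, region_size):
        for x in range(0, w, region_size):
            block = [global_map[j][x:x + region_size]
                     for j in range(y, min(y + region_size, h))]
            if max(max(row) for row in block) < threshold:
                continue  # no wires indicated in this region
            # Concatenate the region's pixels with the global-map information.
            concatenated = [
                [(image[j][i] / 255.0, global_map[j][i])
                 for i in range(x, min(x + region_size, w))]
                for j in range(y, min(y + region_size, h))]
            local = local_model(concatenated)
            for dj, row in enumerate(local):
                for di, v in enumerate(row):
                    final[y + dj][x + di] = v
    return final
```

The split between a downsampled global pass and full-resolution local passes is what lets thin structures like wires be segmented without running an expensive network over the entire high-resolution image.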
Publication number: 20240404188
Abstract: In accordance with the described techniques, a portrait relighting system receives user input defining one or more markings drawn on a portrait image. Using one or more machine learning models, the portrait relighting system generates an albedo representation of the portrait image by removing lighting effects from the portrait image. Further, the portrait relighting system generates a shading map of the portrait image using the one or more machine learning models by designating the one or more markings as a lighting condition, and applying the lighting condition to a geometric representation of the portrait image. The one or more machine learning models are further employed to generate a relit portrait image based on the albedo representation and the shading map.
Type: Application
Filed: June 2, 2023
Publication date: December 5, 2024
Applicant: Adobe Inc.
Inventors: He Zhang, Zijun Wei, Zhixin Shu, Yiqun Mei, Yilin Wang, Xuaner Zhang, Shi Yan, Jianming Zhang

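The final composition step described in the abstract follows the classic intrinsic-image decomposition: a relit pixel is the lighting-free albedo modulated by the shading map. A minimal sketch, assuming simple per-pixel multiplication (the albedo and shading values below are hand-made stand-ins for the outputs of the machine learning models):

```python
def compose_relit(albedo, shading):
    # Multiply the lighting-free albedo by the marking-driven shading map,
    # clamping to the displayable [0, 1] range.
    return [[min(1.0, a * s) for a, s in zip(arow, srow)]
            for arow, srow in zip(albedo, shading)]

# Toy 2x2 example: shading > 1 brightens, shading < 1 darkens.
albedo = [[0.5, 0.8], [0.2, 1.0]]
shading = [[1.0, 0.5], [2.0, 1.5]]
relit = compose_relit(albedo, shading)
```

Factoring the image into albedo and shading is what makes the user's drawn markings usable: they only need to influence the shading map, leaving the subject's intrinsic colors untouched.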
Publication number: 20240320838
Abstract: Systems and methods perform image matte generation using image bursts. In accordance with some aspects, an image burst comprising a set of images is received. Features of a reference image from the set of images are aligned with features of other images from the set of images. A matte for the reference image is generated using the aligned features.
Type: Application
Filed: March 20, 2023
Publication date: September 26, 2024
Inventors: Xuaner Zhang, Xinyi Wu, Markus Jamal Woodson, Joon-Young Lee, Brian Price, Jiawen Chen

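The align-then-fuse idea in the abstract can be sketched in one dimension. This is not Adobe's implementation; the alignment here is a hypothetical integer shift chosen by minimizing the mean absolute difference, and the "features" are raw pixel values rather than learned features.

```python
def best_shift(reference, frame, max_shift=2):
    # Score each candidate shift by the mean absolute difference over the
    # pixels that remain in bounds, and keep the best one.
    def mad(shift):
        pairs = [(reference[i], frame[i + shift])
                 for i in range(len(reference))
                 if 0 <= i + shift < len(frame)]
        return sum(abs(r - f) for r, f in pairs) / len(pairs)
    return min(range(-max_shift, max_shift + 1), key=mad)

def fuse_burst(reference, frames):
    aligned = []
    for frame in frames:
        s = best_shift(reference, frame)
        # Warp the frame onto the reference; fall back to the reference
        # pixel where the shift runs out of bounds.
        aligned.append([frame[i + s] if 0 <= i + s < len(frame)
                        else reference[i]
                        for i in range(len(reference))])
    # Average the reference with the aligned frames; a matting model would
    # consume these fused features instead of a single noisy frame.
    stacks = [reference] + aligned
    return [sum(col) / len(stacks) for col in zip(*stacks)]
```

Fusing several aligned exposures gives the matting step more evidence than any single frame, which is the motivation for using a burst at all.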
Publication number: 20240202989
Abstract: Digital content stylization techniques are described that leverage a neural photofinisher to generate stylized digital images. In one example, the neural photofinisher is implemented as part of a stylization system to train a neural network to perform digital image style transfer operations using reference digital content as training data. The training includes calculating a style loss term that identifies a particular visual style of the reference digital content. Once trained, the stylization system receives a digital image and generates a feature map of a scene depicted by the digital image. Based on the feature map as well as the style loss, the stylization system determines visual parameter values to apply to the digital image to incorporate a visual appearance of the particular visual style. The stylization system generates the stylized digital image by applying the visual parameter values to the digital image automatically and without user intervention.
Type: Application
Filed: December 19, 2022
Publication date: June 20, 2024
Applicant: Adobe Inc.
Inventors: Ethan Tseng, Zhihao Xia, Yifei Fan, Xuaner Zhang, Peter Merrill, Lars Jebe, Jiawen Chen

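A hedged sketch of the training signal described in the abstract: a style loss that compares statistics of a candidate image against reference style content, used to select visual parameter values. Real systems typically compare deep-feature statistics; here the "features" are raw pixel values and the only visual parameter is a hypothetical brightness gain, both assumptions made for brevity.

```python
def style_stats(pixels):
    # Mean and variance act as a crude stand-in for learned style features.
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    return mean, var

def style_loss(image, reference):
    # Penalize mismatch between the two images' feature statistics.
    im, iv = style_stats(image)
    rm, rv = style_stats(reference)
    return (im - rm) ** 2 + (iv - rv) ** 2

def fit_brightness(image, reference, candidates):
    # Pick the visual-parameter value (here a brightness gain, standing in
    # for the photofinisher's full parameter set) minimizing the style loss.
    return min(candidates,
               key=lambda g: style_loss([p * g for p in image], reference))
```

The key design point is that the network outputs interpretable photofinishing parameters rather than pixels directly, so the stylization stays editable and artifact-free.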
Publication number: 20240028871
Abstract: Embodiments are disclosed for performing wire segmentation of images using machine learning. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving an input image, generating, by a first trained neural network model, a global probability map representation of the input image indicating a probability value of each pixel including a representation of wires, and identifying regions of the input image indicated as including the representation of wires. The disclosed systems and methods further comprise, for each region from the identified regions, concatenating the region and information from the global probability map to create a concatenated input, and generating, by a second trained neural network model, a local probability map representation of the region based on the concatenated input, indicating pixels of the region including representations of wires. The disclosed systems and methods further comprise aggregating local probability maps for each region.
Type: Application
Filed: July 21, 2022
Publication date: January 25, 2024
Applicant: Adobe Inc.
Inventors: Mang Tik Chiu, Connelly Barnes, Zijun Wei, Zhe Lin, Yuqian Zhou, Xuaner Zhang, Sohrab Amirghodsi, Florian Kainz, Elya Shechtman

Publication number: 20230351560
Abstract: Systems and methods described herein may relate to potential methods of training a machine learning model to be implemented on a mobile computing device configured to capture, adjust, and/or store image frames. An example method includes supplying a first image frame of a subject in a setting lit within a first lighting environment and supplying a second image frame of the subject lit within a second lighting environment. The method further includes determining a mask. Additionally, the method includes combining the first image frame and the second image frame according to the mask to generate a synthetic image and assigning a score to the synthetic image. The method also includes training a machine learning model based on the assigned score to adjust a captured image based on the synthetic image.
Type: Application
Filed: December 23, 2019
Publication date: November 2, 2023
Inventors: David Jacobs, Yun-Ta Tsai, Jonathan T. Barron, Xuaner Zhang

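The compositing step from this abstract, blending two differently lit frames of the same subject through a mask to synthesize training data, reduces to a per-pixel convex combination. A minimal sketch with toy values standing in for real image data:

```python
def composite(frame_a, frame_b, mask):
    # Per-pixel blend: where mask is 1 take frame_a (first lighting
    # environment), where mask is 0 take frame_b (second environment).
    # Fractional mask values mix the two lighting conditions.
    return [[m * a + (1 - m) * b for a, b, m in zip(ra, rb, rm)]
            for ra, rb, rm in zip(frame_a, frame_b, mask)]
```

Because the two frames share a subject and differ only in lighting, each masked composite is a plausible mixed-lighting photo, giving the relighting model far more training variety than the raw captures alone.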
Patent number: 10116897
Abstract: Photometric stabilization for time-compressed video is described. Initially, video content captured by a video capturing device is time-compressed by selecting a subset of frames from the video content according to a frame sampling technique. Photometric characteristics are then stabilized across the frames of the time-compressed video. This involves determining correspondences of pixels in adjacent frames of the time-compressed video. Photometric transformations are then determined that describe how photometric characteristics (e.g., one or both of luminance and chrominance) change between the adjacent frames, given movement of objects through the captured scene. Based on the determined photometric transformations, filters are computed for smoothing photometric characteristic changes across the time-compressed video. Photometrically stabilized time-compressed video is generated from the time-compressed video by using the filters to smooth the photometric characteristic changes.
Type: Grant
Filed: March 1, 2017
Date of Patent: October 30, 2018
Assignee: Adobe Systems Incorporated
Inventors: Joon-Young Lee, Zhaowen Wang, Xuaner Zhang, Kalyan Krishna Sunkavalli

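The estimate-smooth-apply pipeline in this abstract can be sketched as follows. This is an illustrative simplification: the photometric transformation is reduced to a single luminance gain per frame pair and the filter to a moving average, whereas the patent covers richer luminance and chrominance transformations driven by pixel correspondences.

```python
def frame_gain(prev, curr):
    # Ratio of mean luminance over corresponding pixels approximates the
    # photometric change between two adjacent sampled frames.
    return (sum(curr) / len(curr)) / (sum(prev) / len(prev))

def smooth(values, radius=1):
    # Moving-average filter over the per-frame gains; a wider radius
    # suppresses flicker more aggressively.
    out = []
    for i in range(len(values)):
        window = values[max(0, i - radius):i + radius + 1]
        out.append(sum(window) / len(window))
    return out

def stabilize(frames):
    gains = [1.0] + [frame_gain(frames[i - 1], frames[i])
                     for i in range(1, len(frames))]
    target = smooth(gains)
    # Rescale each frame so its photometric change follows the smoothed
    # trajectory instead of the raw, flickery one.
    return [[p * (t / g) for p in f]
            for f, g, t in zip(frames, gains, target)]
```

Flicker is much worse in time-compressed video because frame sampling turns gradual exposure drift into abrupt jumps, which is why the smoothing is applied after sampling rather than to the original footage.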
Publication number: 20180255273
Abstract: Photometric stabilization for time-compressed video is described. Initially, video content captured by a video capturing device is time-compressed by selecting a subset of frames from the video content according to a frame sampling technique. Photometric characteristics are then stabilized across the frames of the time-compressed video. This involves determining correspondences of pixels in adjacent frames of the time-compressed video. Photometric transformations are then determined that describe how photometric characteristics (e.g., one or both of luminance and chrominance) change between the adjacent frames, given movement of objects through the captured scene. Based on the determined photometric transformations, filters are computed for smoothing photometric characteristic changes across the time-compressed video. Photometrically stabilized time-compressed video is generated from the time-compressed video by using the filters to smooth the photometric characteristic changes.
Type: Application
Filed: March 1, 2017
Publication date: September 6, 2018
Applicant: Adobe Systems Incorporated
Inventors: Joon-Young Lee, Zhaowen Wang, Xuaner Zhang, Kalyan Krishna Sunkavalli