Patents by Inventor Yuqian ZHOU
Yuqian ZHOU has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12271804
Abstract: Embodiments are disclosed for performing wire segmentation of images using machine learning. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving an input image, generating, by a first trained neural network model, a global probability map representation of the input image indicating a probability value of each pixel including a representation of wires, and identifying regions of the input image indicated as including the representation of wires. The disclosed systems and methods further comprise, for each region from the identified regions, concatenating the region and information from the global probability map to create a concatenated input, and generating, by a second trained neural network model, a local probability map representation of the region based on the concatenated input, indicating pixels of the region including representations of wires. The disclosed systems and methods further comprise aggregating local probability maps for each region.
Type: Grant
Filed: July 21, 2022
Date of Patent: April 8, 2025
Assignee: Adobe Inc.
Inventors: Mang Tik Chiu, Connelly Barnes, Zijun Wei, Zhe Lin, Yuqian Zhou, Xuaner Zhang, Sohrab Amirghodsi, Florian Kainz, Elya Shechtman
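A minimal sketch of the two-stage flow this abstract describes, assuming hypothetical `global_model` and `local_model` callables and a simple tile-and-threshold rule for picking regions (the tiling and aggregation details are illustrative assumptions, not the patented method):

```python
import numpy as np

def segment_wires(image, global_model, local_model, tile=256, thresh=0.5):
    # Stage 1: image-wide probability that each pixel depicts a wire.
    global_prob = global_model(image)                      # (H, W) in [0, 1]

    h, w = global_prob.shape
    local_maps = np.zeros_like(global_prob)
    weights = np.zeros_like(global_prob)

    # Stage 2: revisit only the regions the global map flags as containing wires.
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            gp = global_prob[y:y + tile, x:x + tile]
            if gp.max() < thresh:
                continue                                   # no wires predicted here
            region = image[y:y + tile, x:x + tile]
            # Concatenate the region with its slice of the global probability map.
            stacked = np.concatenate([region, gp[..., None]], axis=-1)
            local_maps[y:y + tile, x:x + tile] += local_model(stacked)
            weights[y:y + tile, x:x + tile] += 1.0

    # Aggregate the local maps; keep the global prediction elsewhere.
    return np.where(weights > 0, local_maps / np.maximum(weights, 1.0), global_prob)
```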
-
Publication number: 20250069203
Abstract: A method, non-transitory computer readable medium, apparatus, and system for image generation are described. An embodiment of the present disclosure includes obtaining an input image, an inpainting mask, and a plurality of content preservation values corresponding to different regions of the inpainting mask, and identifying a plurality of mask bands of the inpainting mask based on the plurality of content preservation values. An image generation model generates an output image based on the input image and the inpainting mask. The output image is generated in a plurality of phases. Each of the plurality of phases uses a corresponding mask band of the plurality of mask bands as an input.
Type: Application
Filed: August 24, 2023
Publication date: February 27, 2025
Inventors: Yuqian Zhou, Krishna Kumar Singh, Benjamin Delarre, Zhe Lin, Jingwan Lu, Taesung Park, Sohrab Amirghodsi, Elya Shechtman
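A rough sketch of banded, multi-phase generation under the assumption that each mask band corresponds to one content-preservation level; `generate_phase` is a hypothetical stand-in for the image generation model:

```python
import numpy as np

def banded_inpaint(image, mask, preservation, generate_phase, levels=(1.0, 0.5, 0.0)):
    """mask, preservation: (H, W) arrays; preservation holds the content-
    preservation value assigned to each masked pixel's region."""
    output = image.copy()
    for level in levels:                    # one generation phase per mask band
        band = (mask > 0) & np.isclose(preservation, level)
        if not band.any():
            continue
        # Each phase sees the current result and only its own band of the mask.
        output = generate_phase(output, band.astype(np.float32), level)
    return output
```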
-
Publication number: 20250061626
Abstract: Techniques for performing a digital operation on a digital image are described along with methods and systems employing such techniques. According to the techniques, an input (e.g., an input stroke) is received by, for example, a processing system. Based upon the input, an area of the digital image upon which a digital operation (e.g., for removal of a distractor within the area) is to be performed is determined. In an implementation, one or more metrics of an input stroke are analyzed, typically in real time, to at least partially determine the area upon which the digital operation is to be performed. In an additional or alternative implementation, the input includes a first point, a second point and a connector, and the area is at least partially determined by a location of the first point relative to a location of the second point and/or by locations of the first point and/or second point relative to one or more edges of the digital image.
Type: Application
Filed: May 24, 2024
Publication date: February 20, 2025
Applicant: Adobe Inc.
Inventors: Xiaoyang Liu, Zhe Lin, Yuqian Zhou, Sohrab Amirghodsi, Sarah Jane Stuckey, Sakshi Gupta, Guotong Feng, Elya Schechtman, Connelly Stuart Barnes, Betty Leong
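A loose sketch of the two-point variant, assuming the area is the rectangle spanned by the two points, optionally snapped to a nearby image edge; the snapping margin and the rectangle interpretation are assumptions, and the stroke-metric analysis is not shown:

```python
def area_from_points(p1, p2, image_width, image_height, snap_margin=10):
    """p1, p2: (x, y) endpoints of the user's input.
    Returns the (left, top, right, bottom) area to operate on."""
    left, right = sorted((p1[0], p2[0]))
    top, bottom = sorted((p1[1], p2[1]))

    # If an endpoint lies near an image edge, extend the area to that edge.
    left = 0 if left <= snap_margin else left
    top = 0 if top <= snap_margin else top
    right = image_width if right >= image_width - snap_margin else right
    bottom = image_height if bottom >= image_height - snap_margin else bottom
    return left, top, right, bottom
```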
-
Publication number: 20250054115
Abstract: Various disclosed embodiments are directed to resizing, via down-sampling and up-sampling, a high-resolution input image in order to meet machine learning model low-resolution processing requirements, while also producing a high-resolution output image for image inpainting via a machine learning model. Some embodiments use a refinement model to refine the low-resolution inpainting result from the machine learning model such that there will be clear content with high resolution both inside and outside of the mask region in the output. Some embodiments employ a new model architecture for the machine learning model that produces the inpainting result: an advanced Cascaded Modulated Generative Adversarial Network (CM-GAN) that includes Fast Fourier Convolution (FFC) layers at the skip connections between the encoder and decoder.
Type: Application
Filed: August 9, 2023
Publication date: February 13, 2025
Inventors: Zhe Lin, Yuqian Zhou, Sohrab Amirghodsi, Qing Liu, Elya Shechtman, Connelly Barnes, Haitian Zheng
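A rough sketch of the resize, inpaint, and refine flow using Pillow for resampling; `lowres_inpaint` and `refine` are hypothetical stand-ins for the CM-GAN inpainting model and the refinement model:

```python
from PIL import Image

def highres_inpaint(image: Image.Image, mask: Image.Image,
                    lowres_inpaint, refine, model_size=(512, 512)) -> Image.Image:
    full_size = image.size
    # Down-sample to the resolution the inpainting model expects.
    small_img = image.resize(model_size, Image.LANCZOS)
    small_mask = mask.resize(model_size, Image.NEAREST)

    small_result = lowres_inpaint(small_img, small_mask)

    # Up-sample the low-resolution result back to the original resolution,
    # then refine so content is sharp both inside and outside the mask.
    upsampled = small_result.resize(full_size, Image.LANCZOS)
    return refine(upsampled, image, mask)
```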
-
Publication number: 20240428384
Abstract: Inpainting dispatch techniques for digital images are described. In one or more examples, an inpainting system includes a plurality of inpainting modules. The inpainting modules are configured to employ a variety of different techniques, respectively, as part of performing an inpainting operation. An inpainting dispatch module is also included as part of the inpainting system that is configured to select which of the plurality of inpainting modules are to be used to perform an inpainting operation for one or more regions in a digital image, automatically and without user intervention.
Type: Application
Filed: June 22, 2023
Publication date: December 26, 2024
Applicant: Adobe Inc.
Inventors: Yuqian Zhou, Zhe Lin, Xiaoyang Liu, Sohrab Amirghodsi, Qing Liu, Lingzhi Zhang, Elya Schechtman, Connelly Stuart Barnes
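A minimal sketch of a dispatch step that routes each hole to one of several inpainting modules automatically; the routing heuristic (hole area plus a crude texture score) and the module names are assumptions for illustration:

```python
import numpy as np

def dispatch_inpaint(image, regions, patch_fill, deep_fill, generative_fill):
    """regions: list of boolean (H, W) masks, one per hole; image: (H, W, C) uint8."""
    result = image.copy()
    for hole in regions:
        area_frac = hole.mean()
        # Crude texture score from pixels vertically adjacent to the hole.
        border = np.logical_xor(hole, np.roll(hole, 1, axis=0)) & ~hole
        texture = image[border].std() if border.any() else 0.0

        if area_frac < 0.01 and texture < 10:      # small, smooth hole
            module = patch_fill
        elif area_frac < 0.10:                     # medium hole
            module = deep_fill
        else:                                      # large or complex hole
            module = generative_fill
        result = module(result, hole)
    return result
```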
-
Patent number: 12165063
Abstract: Examples are disclosed that relate to the restoration of degraded images acquired via a behind-display camera. One example provides a method of training a machine learning model, the method comprising inputting training image pairs into the machine learning model, each training image pair comprising an undegraded image and a degraded image that represents an appearance of the undegraded image to a behind-display camera, and training the machine learning model using the training image pairs to generate frequency information that is missing from the degraded images.
Type: Grant
Filed: January 12, 2024
Date of Patent: December 10, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Yuqian Zhou, Timothy Andrew Large, Se Hoon Lim, Neil Emerton, Yonghuan David Ren
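A sketch of the paired training step in PyTorch; the restoration network, the frequency-domain loss term, and the loss weight are assumptions, and only the undegraded/degraded pairing comes from the abstract:

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, degraded, undegraded):
    """degraded, undegraded: (N, C, H, W) tensors forming training pairs."""
    optimizer.zero_grad()
    restored = model(degraded)

    # Pixel loss plus an (assumed) frequency-domain term, encouraging the model
    # to recover frequency content missing from the degraded images.
    pixel_loss = F.l1_loss(restored, undegraded)
    freq_loss = F.l1_loss(torch.fft.rfft2(restored).abs(),
                          torch.fft.rfft2(undegraded).abs())
    loss = pixel_loss + 0.1 * freq_loss

    loss.backward()
    optimizer.step()
    return loss.item()
```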
-
Publication number: 20240404013
Abstract: Embodiments include systems and methods for generative image filling based on text and a reference image. In one aspect, the system obtains an input image, a reference image, and a text prompt. Then, the system encodes the reference image to obtain an image embedding and encodes the text prompt to obtain a text embedding. Subsequently, a composite image is generated based on the input image, the image embedding, and the text embedding.
Type: Application
Filed: November 21, 2023
Publication date: December 5, 2024
Inventors: Yuqian Zhou, Krishna Kumar Singh, Zhe Lin, Qing Liu, Zhifei Zhang, Sohrab Amirghodsi, Elya Shechtman, Jingwan Lu
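A minimal sketch of the described interface; the encoder and generator callables are hypothetical stand-ins:

```python
def generative_fill(input_image, reference_image, text_prompt,
                    image_encoder, text_encoder, generator):
    image_embedding = image_encoder(reference_image)   # reference appearance
    text_embedding = text_encoder(text_prompt)         # textual instruction
    # The generator composites new content into the input image,
    # conditioned on both embeddings.
    return generator(input_image, image_embedding, text_embedding)
```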
-
Publication number: 20240362791
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing machine learning to generate a mask for an object portrayed in a digital image. For example, in some embodiments, the disclosed systems utilize a neural network to generate an image feature representation from the digital image. The disclosed systems can receive a selection input identifying one or more pixels corresponding to the object. In addition, in some implementations, the disclosed systems generate a modified feature representation by integrating the selection input into the image feature representation. Moreover, in one or more embodiments, the disclosed systems utilize an additional neural network to generate a plurality of masking proposals for the object from the modified feature representation. Furthermore, in some embodiments, the disclosed systems utilize a further neural network to generate the mask for the object from the modified feature representation and/or the masking proposals.
Type: Application
Filed: April 26, 2023
Publication date: October 31, 2024
Inventors: Yuqian Zhou, Chuong Huynh, Connelly Barnes, Elya Shechtman, Sohrab Amirghodsi, Zhe Lin
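A sketch of the interactive masking flow, with hypothetical stand-ins for the three neural networks and a simple one-channel encoding of the selection input:

```python
import numpy as np

def interactive_mask(image, clicks, encoder, proposal_net, mask_net):
    """clicks: list of (row, col) pixels the user selected on the object."""
    features = encoder(image)                       # (H, W, D) image features

    # Integrate the selection input as an extra feature channel.
    click_map = np.zeros(features.shape[:2], dtype=np.float32)
    for r, c in clicks:
        click_map[r, c] = 1.0
    modified = np.concatenate([features, click_map[..., None]], axis=-1)

    proposals = proposal_net(modified)              # several candidate masks
    return mask_net(modified, proposals)            # final object mask
```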
-
Publication number: 20240362757
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for inpainting digital images utilizing mask-robust machine-learning models. In particular, in one or more embodiments, the disclosed systems obtain an initial mask for an object depicted in a digital image. Additionally, in some embodiments, the disclosed systems generate, utilizing a mask-robust inpainting machine-learning model, an inpainted image from the digital image and the initial mask. Moreover, in some implementations, the disclosed systems generate a relaxed mask that expands the initial mask. Furthermore, in some embodiments, the disclosed systems generate a modified image by compositing the inpainted image and the digital image utilizing the relaxed mask.
Type: Application
Filed: April 26, 2023
Publication date: October 31, 2024
Inventors: Sohrab Amirghodsi, Lingzhi Zhang, Connelly Barnes, Elya Shechtman, Yuqian Zhou, Zhe Lin
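A minimal sketch of the relaxed-mask compositing step using SciPy's binary dilation; the inpainting model is a stand-in callable and the dilation amount is an assumed parameter:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def mask_robust_inpaint(image, initial_mask, inpaint_model, relax_px=15):
    inpainted = inpaint_model(image, initial_mask)

    # Relax (expand) the initial mask so compositing hides boundary artifacts.
    relaxed = binary_dilation(initial_mask > 0, iterations=relax_px)

    # Composite: inpainted content inside the relaxed mask, original outside.
    relaxed3 = relaxed[..., None] if image.ndim == 3 else relaxed
    return np.where(relaxed3, inpainted, image)
```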
-
Patent number: 12125179
Abstract: An electronic device comprises a display, an illumination source, a camera, and a logic system. The illumination source is configured to project structured illumination onto a subject. The camera is configured to image the subject through the display, which includes collecting the structured illumination as reflected by the subject. The logic system is configured to receive, from the camera, a digital image of the subject imaged through the display. The logic system is further configured to sharpen the digital image based on the spatially resolved intensity of the structured illumination as reflected by the subject.
Type: Grant
Filed: November 10, 2020
Date of Patent: October 22, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Yonghuan David Ren, Timothy Andrew Large, Neil Emerton, Yuqian Zhou
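A loose illustration only: weighting an unsharp-mask correction by the spatially resolved intensity of the reflected pattern. This is an assumed stand-in for the sharpening the claims describe, not the patented algorithm:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpen_with_pattern(image, pattern_intensity, base_amount=1.5):
    """image: (H, W) or (H, W, C) float array in [0, 1];
    pattern_intensity: (H, W) reflected-pattern intensity in [0, 1]."""
    sigma = (2, 2, 0) if image.ndim == 3 else 2      # do not blur across channels
    blurred = gaussian_filter(image, sigma=sigma)

    weight = base_amount * pattern_intensity
    if image.ndim == 3:
        weight = weight[..., None]
    sharpened = image + weight * (image - blurred)
    return np.clip(sharpened, 0.0, 1.0)
```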
-
Publication number: 20240338869
Abstract: An image processing system obtains an input image (e.g., a user provided image, etc.) and a mask indicating an edit region of the image. A user selects an image editing mode for an image generation network from a plurality of image editing modes. The image generation network generates an output image using the input image, the mask, and the image editing mode.
Type: Application
Filed: September 26, 2023
Publication date: October 10, 2024
Inventors: Yuqian Zhou, Krishna Kumar Singh, Zhifei Zhang, Difan Liu, Zhe Lin, Jianming Zhang, Qing Liu, Jingwan Lu, Elya Shechtman, Sohrab Amirghodsi, Connelly Stuart Barnes
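A tiny sketch of the mode-driven call; the mode names and the generator callable are assumptions used only to show the interface:

```python
EDIT_MODES = ("inpaint", "remove", "replace")   # hypothetical mode names

def edit_image(input_image, mask, mode, generation_network):
    if mode not in EDIT_MODES:
        raise ValueError(f"unknown editing mode: {mode}")
    # The generation network conditions its output on the chosen mode.
    return generation_network(input_image, mask, mode)
```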
-
Publication number: 20240331214
Abstract: Systems and methods for image processing (e.g., image extension or image uncropping) using neural networks are described. One or more aspects include obtaining an image (e.g., a source image, a user provided image, etc.) having an initial aspect ratio, and identifying a target aspect ratio (e.g., via user input) that is different from the initial aspect ratio. The image may be positioned in an image frame having the target aspect ratio, where the image frame includes an image region containing the image and one or more extended regions outside the boundaries of the image. An extended image may be generated (e.g., using a generative neural network), where the extended image includes the image in the image region as well as generated image portions in the extended regions and the one or more generated image portions comprise an extension of a scene element depicted in the image.
Type: Application
Filed: March 20, 2024
Publication date: October 3, 2024
Inventors: Yuqian Zhou, Elya Shechtman, Zhe Lin, Krishna Kumar Singh, Jingwan Lu, Connelly Stuart Barnes, Sohrab Amirghodsi
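A sketch of placing the image in a frame with the target aspect ratio and marking the regions to generate; centering the source image and the `generator` callable are assumptions:

```python
import numpy as np

def extend_to_aspect(image, target_aspect, generator):
    """target_aspect: desired width / height of the output frame."""
    h, w = image.shape[:2]
    # Widen or heighten the frame until it matches the target aspect ratio.
    new_w = max(w, int(round(h * target_aspect)))
    new_h = max(h, int(round(w / target_aspect)))

    frame = np.zeros((new_h, new_w, image.shape[2]), dtype=image.dtype)
    mask = np.ones((new_h, new_w), dtype=bool)      # True = region to generate
    top, left = (new_h - h) // 2, (new_w - w) // 2
    frame[top:top + h, left:left + w] = image
    mask[top:top + h, left:left + w] = False

    # The generative model fills the extended regions so they continue the scene.
    return generator(frame, mask)
```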
-
Publication number: 20240331114
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately generating inpainted digital images utilizing a guided inpainting model guided by both plane panoptic segmentation and plane grouping. For example, the disclosed systems utilize a guided inpainting model to fill holes of missing pixels of a digital image as informed or guided by an appearance guide and a geometric guide. Specifically, the disclosed systems generate an appearance guide utilizing plane panoptic segmentation and generate a geometric guide by grouping plane panoptic segments. In some embodiments, the disclosed systems generate a modified digital image by implementing an inpainting model guided by both the appearance guide (e.g., a plane panoptic segmentation map) and the geometric guide (e.g., a plane grouping map).
Type: Application
Filed: June 14, 2024
Publication date: October 3, 2024
Inventors: Yuqian Zhou, Connelly Barnes, Sohrab Amirghodsi, Elya Shechtman
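A sketch of assembling the two guides and passing them to a guided inpainting model; the segmenter, grouping, and inpainting callables are hypothetical stand-ins:

```python
def guided_inpaint(image, hole_mask, panoptic_segmenter, group_planes,
                   inpaint_model):
    # Appearance guide: a plane panoptic segmentation map of the image.
    appearance_guide = panoptic_segmenter(image)
    # Geometric guide: plane panoptic segments grouped into larger planes.
    geometric_guide = group_planes(appearance_guide)
    # The inpainting model fills the hole, guided by both maps.
    return inpaint_model(image, hole_mask, appearance_guide, geometric_guide)
```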
-
Publication number: 20240303787
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for inpainting a digital image using a hybrid wire removal pipeline. For example, the disclosed systems use a hybrid wire removal pipeline that integrates multiple machine learning models, such as a wire segmentation model, a hole separation model, a mask dilation model, a patch-based inpainting model, and a deep inpainting model. Using the hybrid wire removal pipeline, in some embodiments, the disclosed systems generate a wire segmentation from a digital image depicting one or more wires. The disclosed systems also utilize the hybrid wire removal pipeline to extract or identify portions of the wire segmentation that indicate specific wires or portions of wires. In certain embodiments, the disclosed systems further inpaint pixels of the digital image corresponding to the wires indicated by the wire segmentation mask using the patch-based inpainting model and/or the deep inpainting model.
Type: Application
Filed: March 7, 2023
Publication date: September 12, 2024
Inventors: Yuqian Zhou, Connelly Barnes, Zijun Wei, Zhe Lin, Elya Shechtman, Sohrab Amirghodsi, Xiaoyang Liu
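An illustrative chaining of the pipeline stages named in the abstract; every stage is a hypothetical stand-in callable, and the rule routing thin wires to the patch-based model and wider holes to the deep model is an assumption:

```python
import numpy as np

def remove_wires(image, segment_wires, separate_holes, dilate_mask,
                 patch_inpaint, deep_inpaint, width_threshold=8):
    wire_mask = segment_wires(image)                 # (H, W) wire segmentation
    result = image.copy()

    for hole in separate_holes(wire_mask):           # one mask per wire / wire part
        hole = dilate_mask(hole)                     # cover halos around the wire
        # Estimate wire thickness to pick an inpainting model.
        rows = hole.any(axis=1)
        thickness = hole.sum(axis=1)[rows].mean() if rows.any() else 0
        model = patch_inpaint if thickness < width_threshold else deep_inpaint
        result = model(result, hole)
    return result
```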
-
Patent number: 12086965
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media for accurately restoring missing pixels within a hole region of a target image utilizing multi-image inpainting techniques based on incorporating geometric depth information. For example, in various implementations, the disclosed systems utilize a depth prediction of a source image as well as camera relative pose parameters. Additionally, in some implementations, the disclosed systems jointly optimize the depth rescaling and camera pose parameters before generating the reprojected image to further increase the accuracy of the reprojected image. Further, in various implementations, the disclosed systems utilize the reprojected image in connection with a content-aware fill model to generate a refined composite image that includes the target image having a hole, where the hole is filled in based on the reprojected image of the source image.
Type: Grant
Filed: November 5, 2021
Date of Patent: September 10, 2024
Assignee: Adobe Inc.
Inventors: Yunhan Zhao, Connelly Barnes, Yuqian Zhou, Sohrab Amirghodsi, Elya Shechtman
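A sketch of reprojecting the source image into the target view from a predicted depth map, intrinsics `K`, and a relative pose `(R, t)`, then handing the result to a content-aware fill step; nearest-neighbour splatting is a simplification, and the joint optimization of depth rescaling and pose is not shown:

```python
import numpy as np

def reproject_and_fill(source, depth, K, R, t, target, hole_mask, content_fill):
    """source, target: (H, W, C); depth: (H, W); K, R: (3, 3); t: (3,)."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T  # 3 x N

    # Back-project source pixels to 3D, move them into the target camera
    # frame, and project them back to pixel coordinates.
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    pts = R @ pts + t.reshape(3, 1)
    proj = K @ pts
    z = np.where(proj[2] > 0, proj[2], np.inf)
    u = np.rint(proj[0] / z).astype(int)
    v = np.rint(proj[1] / z).astype(int)

    reprojected = np.zeros_like(target)
    valid = (proj[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    reprojected[v[valid], u[valid]] = source.reshape(-1, source.shape[-1])[valid]

    # Content-aware fill refines the target hole using the reprojected guidance.
    return content_fill(target, hole_mask, reprojected)
```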
-
Patent number: 12056857
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately generating inpainted digital images utilizing a guided inpainting model guided by both plane panoptic segmentation and plane grouping. For example, the disclosed systems utilize a guided inpainting model to fill holes of missing pixels of a digital image as informed or guided by an appearance guide and a geometric guide. Specifically, the disclosed systems generate an appearance guide utilizing plane panoptic segmentation and generate a geometric guide by grouping plane panoptic segments. In some embodiments, the disclosed systems generate a modified digital image by implementing an inpainting model guided by both the appearance guide (e.g., a plane panoptic segmentation map) and the geometric guide (e.g., a plane grouping map).
Type: Grant
Filed: November 5, 2021
Date of Patent: August 6, 2024
Assignee: Adobe Inc.
Inventors: Yuqian Zhou, Connelly Barnes, Sohrab Amirghodsi, Elya Shechtman
-
Publication number: 20240169500
Abstract: Systems and methods for image processing are described. Embodiments of the present disclosure receive an image comprising a first region that includes content and a second region to be inpainted. Noise is then added to the image to obtain a noisy image, and a plurality of intermediate output images are generated based on the noisy image using a diffusion model trained using a perceptual loss. The intermediate output images predict a final output image based on a corresponding intermediate noise level of the diffusion model. The diffusion model then generates the final output image based on the intermediate output image. The final output image includes inpainted content in the second region that is consistent with the content in the first region.
Type: Application
Filed: November 22, 2022
Publication date: May 23, 2024
Inventors: Haitian Zheng, Zhe Lin, Jianming Zhang, Connelly Stuart Barnes, Elya Shechtman, Jingwan Lu, Qing Liu, Sohrab Amirghodsi, Yuqian Zhou, Scott Cohen
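A highly simplified sketch of the sampling loop the abstract implies: each denoising step predicts a provisional final image before continuing, with the known region pinned to the original content. The `denoiser` callable and the linear noise schedule are assumptions, and the perceptual-loss training is not shown:

```python
import numpy as np

def diffusion_inpaint(image, mask, denoiser, steps=50, rng=None):
    """image: (H, W, C) float array; mask: (H, W), >0 where content is missing."""
    rng = np.random.default_rng(0) if rng is None else rng
    noisy = np.where(mask[..., None] > 0, rng.standard_normal(image.shape), image)

    intermediates = []
    for i in range(steps, 0, -1):
        noise_level = i / steps
        # The model predicts the final (clean) image from the current state.
        x0_pred = denoiser(noisy, noise_level)
        intermediates.append(x0_pred)
        # Re-noise the prediction to the next, lower noise level, keeping the
        # known region pinned to the original content.
        next_level = (i - 1) / steps
        noisy = x0_pred + next_level * rng.standard_normal(image.shape)
        noisy = np.where(mask[..., None] > 0, noisy, image)

    return intermediates[-1], intermediates
```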
-
Publication number: 20240160922
Abstract: Examples are disclosed that relate to the restoration of degraded images acquired via a behind-display camera. One example provides a method of training a machine learning model, the method comprising inputting training image pairs into the machine learning model, each training image pair comprising an undegraded image and a degraded image that represents an appearance of the undegraded image to a behind-display camera, and training the machine learning model using the training image pairs to generate frequency information that is missing from the degraded images.
Type: Application
Filed: January 12, 2024
Publication date: May 16, 2024
Applicant: Microsoft Technology Licensing, LLC
Inventors: Yuqian Zhou, Timothy Andrew Large, Se Hoon Lim, Neil Emerton, Yonghuan David Ren
-
Patent number: 11958149
Abstract: A fastening tool. In the fastening tool, a top part guiding matching portion engages a top part guiding portion in a sliding fit; a pressing part guiding matching portion engages a pressing part guiding portion in a sliding fit; a lever is hinged to a base part; and a transmission part engages a transmission part guiding slot in a sliding fit. The transmission part is driven to slide by the pressing part, the lever is driven to rotate by the transmission part, and the top part is then driven to slide out. Due to the reaction force acting on the pressing part by the transmission part and the friction between the pressing part and the base part, the pressing part remains self-locked by friction, which constrains the pressing part so that the top part is locked.
Type: Grant
Filed: July 9, 2021
Date of Patent: April 16, 2024
Assignees: AECC SHANGHAI COMMERCIAL AIRCRAFT ENGINE MANUFACTURING CO., LTD., AECC COMMERCIAL AIRCRAFT ENGINE CO., LTD.
Inventors: Yiting Hu, Wenxing Mu, Fei Pan, Yuqian Zhou
-
Publication number: 20240046429
Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating neural network based perceptual artifact segmentations in synthetic digital image content. The disclosed system utilizes neural networks to detect perceptual artifacts in digital images in connection with generating or modifying digital images. The disclosed system determines a digital image including one or more synthetically modified portions. The disclosed system utilizes an artifact segmentation machine-learning model to detect perceptual artifacts in the synthetically modified portion(s). The artifact segmentation machine-learning model is trained to detect perceptual artifacts based on labeled artifact regions of synthetic training digital images. Additionally, the disclosed system utilizes the artifact segmentation machine-learning model in an iterative inpainting process. The disclosed system utilizes one or more digital image inpainting models to inpaint a digital image.
Type: Application
Filed: July 27, 2022
Publication date: February 8, 2024
Inventors: Sohrab Amirghodsi, Lingzhi Zhang, Zhe Lin, Elya Shechtman, Yuqian Zhou, Connelly Barnes
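A sketch of the iterative loop the abstract implies: inpaint, segment perceptual artifacts in the synthesized region, and re-inpaint only where artifacts remain; the models are hypothetical stand-ins and the stopping rule is an assumption:

```python
import numpy as np

def iterative_inpaint(image, mask, inpaint_model, artifact_segmenter,
                      max_iters=3, min_artifact_frac=0.001):
    """artifact_segmenter is assumed to return a boolean (H, W) artifact map."""
    result = inpaint_model(image, mask)
    region = mask.astype(bool)

    for _ in range(max_iters):
        # Detect perceptual artifacts inside the synthesized region only.
        artifacts = artifact_segmenter(result) & region
        if artifacts.mean() < min_artifact_frac:
            break
        # Re-inpaint just the artifact regions and repeat.
        result = inpaint_model(result, artifacts)
        region = artifacts
    return result
```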