Patents by Inventor Zhe Lin
Zhe Lin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250043136
Abstract: A novel rheology modifier comprising a quaternary-ammonium-containing polyamide for use in aqueous paint, which provides excellent pigment suspension and rheological properties to the aqueous-based coating without being affected by pH fluctuation.
Type: Application
Filed: July 24, 2023
Publication date: February 6, 2025
Applicant: ELEMENTIS SPECIALTIES, INC.
Inventors: Chun-Hung Yen, Wei-Jen Huang, Ming-Jhe Li, Yu-Lun Hung, Hou-Jen Yen, Yu-Yen Lu, Yu-Zhe Su, Hung-Yi Lin
-
Publication number: 20250046055
Abstract: This disclosure describes one or more implementations of systems, non-transitory computer-readable media, and methods that train (and utilize) an image color editing diffusion neural network to generate color-edited digital images. In particular, in one or more implementations, the disclosed systems identify a digital image depicting content in a first color style. Moreover, the disclosed systems generate, from the digital image utilizing an image color editing diffusion neural network, a color-edited digital image depicting the content in a second color style different from the first color style. Further, the disclosed systems provide, for display within a graphical user interface, the color-edited digital image.
Type: Application
Filed: August 2, 2023
Publication date: February 6, 2025
Inventors: Zhifei Zhang, Zhe Lin, Yixuan Ren, Yifei Fan, Jing Shi
-
Patent number: 12217395
Abstract: Systems and methods for image processing are described. Embodiments of the present disclosure encode a content image and a style image using a machine learning model to obtain content features and style features, wherein the content image includes a first object having a first appearance attribute and the style image includes a second object having a second appearance attribute; align the content features and the style features to obtain a sparse correspondence map that indicates a correspondence between a sparse set of pixels of the content image and corresponding pixels of the style image; and generate a hybrid image based on the sparse correspondence map, wherein the hybrid image depicts the first object having the second appearance attribute.
Type: Grant
Filed: April 27, 2022
Date of Patent: February 4, 2025
Assignee: ADOBE INC.
Inventors: Sangryul Jeon, Zhifei Zhang, Zhe Lin, Scott Cohen, Zhihong Ding
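The alignment step above can be illustrated with a minimal sketch: match each content pixel's feature vector to its most similar style feature and keep only high-confidence matches, yielding a sparse map. The cosine-similarity matching and the threshold are assumptions for illustration, not the patented method.

```python
import numpy as np

def sparse_correspondence(content_feats, style_feats, threshold=0.9):
    """Map each content pixel to its best-matching style pixel, keeping
    only high-confidence (hence sparse) matches.

    content_feats, style_feats: (N, D) and (M, D) arrays of
    L2-normalized per-pixel feature vectors (hypothetical shapes).
    Returns {content_index: style_index} for matches whose cosine
    similarity exceeds `threshold`.
    """
    sim = content_feats @ style_feats.T   # (N, M) cosine similarities
    best = sim.argmax(axis=1)             # best style match per content pixel
    conf = sim.max(axis=1)                # similarity of that match
    return {i: int(best[i]) for i in range(len(best)) if conf[i] > threshold}
```

Pixels whose best match falls below the threshold simply get no entry, which is what makes the resulting correspondence map sparse.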
-
Publication number: 20250039883
Abstract: This application relates to a data transmission method and apparatus, a computer device, and a storage medium. A first device (01) sends indication information to a second device (02), so that the first device (01) informs the second device (02) of the usage status of a target resource in a timely manner. After receiving the indication information, the second device (02) may determine, based on the indication information, which resources to use, which resources to receive or decode, which resources not to receive or decode, which resources to terminate, and the like.
Type: Application
Filed: October 16, 2024
Publication date: January 30, 2025
Inventors: Zhe FU, Yanan LIN, Cong SHI
-
Publication number: 20250030152
Abstract: The application discloses an NFC antenna for handheld devices and a handheld device. The NFC antenna includes an antenna base and a coupling coil arranged along the edge of the antenna base. The antenna base integrally includes a first antenna base and two second antenna bases. The first antenna base is arranged on the backside of the handheld device, with two sides bending and extending towards the frontside of the handheld device to form the second antenna bases. The second antenna bases are respectively located on the sides of the handheld device, outside the screen assembly. The NFC antenna covers the sides of the handheld device, so that NFC antenna signals at the sides can be sensed at the frontside; NFC cards can therefore be swiped at both the frontside and the backside of the handheld device, without occupying space at the frontside.
Type: Application
Filed: December 9, 2022
Publication date: January 23, 2025
Applicant: Shanghai Sunmi Technology Co., Ltd.
Inventors: Laibing Gu, Zhe Lin, Hui Shu, Xusheng Lu, Baike Li
-
Publication number: 20250029226
Abstract: Models for classifying exposure defects in images are provided by training a binary model on a dataset of images labeled to indicate exposure within the images. When trained, the binary model classifies an image based on whether the image includes an exposure defect. A classification model is also trained. The classification model is trained on a dataset of images having exposure defects labeled to indicate exposure scores or exposure defect classifications. When trained, the classification model classifies the image based on a level of exposure. The binary model and the classification model can be stored for identifying and classifying exposure defects within images.
Type: Application
Filed: October 7, 2024
Publication date: January 23, 2025
Inventors: Akhilesh KUMAR, Zhe Lin, William Lawrence Marino
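The two-stage structure above (a binary defect detector followed by a defect classifier) can be sketched with simple brightness-threshold stand-ins for the trained models; the thresholds and the mean-brightness heuristic are assumptions for illustration only.

```python
import numpy as np

def classify_exposure(image, lo=0.25, hi=0.75):
    """Two-stage exposure check (illustrative stand-in for the trained
    models): stage 1 decides *whether* a defect exists, stage 2 decides
    *which kind*. `lo`/`hi` are hypothetical thresholds.

    image: array of pixel intensities in [0, 1].
    """
    mean = float(np.mean(image))
    # Stage 1: binary model -- is there an exposure defect at all?
    has_defect = mean < lo or mean > hi
    if not has_defect:
        return "well_exposed"
    # Stage 2: classification model -- what kind/level of defect?
    return "underexposed" if mean < lo else "overexposed"
```

The point of the split is efficiency: well-exposed images exit after the cheap binary stage, and only flagged images reach the finer-grained classifier.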
-
Patent number: 12204610
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for training a generative inpainting neural network to accurately generate inpainted digital images via object-aware training and/or masked regularization. For example, the disclosed systems utilize an object-aware training technique to learn parameters for a generative inpainting neural network based on masking individual object instances depicted within sample digital images of a training dataset. In some embodiments, the disclosed systems also (or alternatively) utilize a masked regularization technique as part of training to prevent overfitting by penalizing a discriminator neural network utilizing a regularization term that is based on an object mask. In certain cases, the disclosed systems further generate an inpainted digital image utilizing a trained generative inpainting model with parameters learned via the object-aware training and/or the masked regularization.
Type: Grant
Filed: February 14, 2022
Date of Patent: January 21, 2025
Assignee: Adobe Inc.
Inventors: Zhe Lin, Haitian Zheng, Jingwan Lu, Scott Cohen, Jianming Zhang, Ning Xu, Elya Shechtman, Connelly Barnes, Sohrab Amirghodsi
-
Publication number: 20250018687
Abstract: In one aspect, a laminated glass includes first and second glass plates and at least two connectors, with the thickness of the second glass plate being not larger than 1.6 mm and a middle layer being arranged between the first glass plate and the second glass plate. Each connector comprises an embedded part. The embedded part is arranged between the middle layer and the second glass plate, so as to be in contact with the middle layer and the second glass plate at the same time. The laminated glass has a clinging value A = T1 × T2 × L / (T3)³, and the clinging value A is at least 20, wherein the thickness of the second glass plate is T1, the thickness of the middle layer is T2, the thickness of the embedded part is T3, and the distance between the embedded parts of two adjacent connectors is L.
Type: Application
Filed: April 4, 2023
Publication date: January 16, 2025
Inventors: Zhe WANG, Jun LIN, Li WANG, Bizhu CHEN
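The clinging value is a plain arithmetic condition, so it can be checked directly. A minimal sketch (the example dimensions in the comment are hypothetical, not taken from the patent; consistent length units are assumed):

```python
def clinging_value(t1, t2, t3, l):
    """Clinging value A = T1 * T2 * L / (T3)**3 from the abstract:
    T1 = second glass plate thickness, T2 = middle layer thickness,
    T3 = embedded part thickness, L = distance between the embedded
    parts of two adjacent connectors."""
    return t1 * t2 * l / t3 ** 3

# Hypothetical example: T1 = 1.6 mm, T2 = 0.76 mm, T3 = 1.8 mm, L = 100 mm
# gives A = 1.6 * 0.76 * 100 / 5.832, which is roughly 20.85 -- above the
# required minimum of 20.
```

Because T3 enters cubed in the denominator, A is far more sensitive to the embedded-part thickness than to any of the other three dimensions.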
-
Publication number: 20250022252
Abstract: This disclosure describes one or more implementations of systems, non-transitory computer-readable media, and methods that extract multiple attributes from an object portrayed in a digital image utilizing a multi-attribute contrastive classification neural network. For example, the disclosed systems utilize a multi-attribute contrastive classification neural network that includes an embedding neural network, a localizer neural network, a multi-attention neural network, and a classifier neural network. In some cases, the disclosed systems train the multi-attribute contrastive classification neural network utilizing a multi-attribute, supervised-contrastive loss. In some embodiments, the disclosed systems generate negative attribute training labels for labeled digital images utilizing positive attribute labels that correspond to the labeled digital images.
Type: Application
Filed: September 27, 2024
Publication date: January 16, 2025
Inventors: Khoi Pham, Kushal Kafle, Zhe Lin, Zhihong Ding, Scott Cohen, Quan Tran
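One plausible way to derive negative labels from positive ones, as the last sentence describes, is to treat attributes of the same type as mutually exclusive: if an image is positively labeled "red", the other colors become negatives. The type groupings and this exclusivity rule are assumptions for illustration.

```python
def negative_labels(positives, attribute_types):
    """Derive negative attribute labels from positive ones, assuming
    attributes within a type are mutually exclusive (hypothetical rule).

    positives: set of attributes labeled positive for one image.
    attribute_types: dict mapping type name -> set of attributes of that type.
    """
    negatives = set()
    for attrs in attribute_types.values():
        if positives & attrs:             # this attribute type is annotated
            negatives |= attrs - positives
    return negatives
```

Note that types with no positive label contribute nothing: absent annotation is treated as "unknown" rather than "negative", which avoids penalizing attributes the annotator never considered.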
-
Publication number: 20250022099
Abstract: Systems and methods for image compositing are provided. An aspect of the systems and methods includes obtaining a first image and a second image, wherein the first image includes a target location and the second image includes a target element; encoding the second image using an image encoder to obtain an image embedding; generating a descriptive embedding based on the image embedding using an adapter network; and generating a composite image based on the descriptive embedding and the first image using an image generation model, wherein the composite image depicts the target element from the second image at the target location of the first image.
Type: Application
Filed: July 13, 2023
Publication date: January 16, 2025
Inventors: Yizhi Song, Zhifei Zhang, Zhe Lin, Scott Cohen, Brian Lynn Price, Jianming Zhang, Soo Ye Kim
-
Patent number: 12198224
Abstract: Systems and methods for image generation are described. Embodiments of the present disclosure receive a text phrase that describes a target image to be generated; generate text features based on the text phrase; retrieve a search image based on the text phrase; and generate the target image using an image generation network based on the text features and the search image.
Type: Grant
Filed: February 15, 2022
Date of Patent: January 14, 2025
Assignee: ADOBE INC.
Inventors: Xin Yuan, Zhe Lin, Jason Wen Yong Kuen, Jianming Zhang, John Philip Collomosse
-
Publication number: 20250013866
Abstract: Systems and methods for reducing inference time of vision-language models, as well as for multimodal search, are described herein. Embodiments are configured to obtain an embedding neural network. The embedding neural network is pretrained to embed inputs from a plurality of modalities into a multimodal embedding space. Embodiments are further configured to perform a first progressive pruning stage, where the first progressive pruning stage includes a first pruning of the embedding neural network and a first fine-tuning of the embedding neural network. Embodiments then perform a second progressive pruning stage based on an output of the first progressive pruning stage, where the second progressive pruning stage includes a second pruning of the embedding neural network and a second fine-tuning of the embedding neural network.
Type: Application
Filed: July 6, 2023
Publication date: January 9, 2025
Inventors: Handong Zhao, Yue Bai, Zhe Lin, Ajinkya Gorakhnath Kale, Jiuxiang Gu, Tong Yu, Sungchul Kim
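The prune-then-fine-tune loop above can be sketched with simple magnitude pruning on a weight array; the magnitude criterion, the stage sparsities, and the stand-in `fine_tune` callable are all assumptions, since the abstract does not specify the pruning rule.

```python
import numpy as np

def prune(weights, frac):
    """Zero out the `frac` smallest-magnitude weights (magnitude pruning)."""
    cutoff = np.quantile(np.abs(weights), frac)
    return np.where(np.abs(weights) >= cutoff, weights, 0.0)

def progressive_prune(weights, stages=(0.3, 0.6), fine_tune=None):
    """Progressive pruning: each stage prunes to a higher sparsity and
    then (optionally) fine-tunes the surviving weights. `fine_tune` is a
    stand-in for the recovery training the abstract describes."""
    for frac in stages:
        weights = prune(weights, frac)
        if fine_tune is not None:
            weights = fine_tune(weights)
    return weights
```

Pruning in stages with fine-tuning in between, rather than in one shot, gives the network a chance to recover accuracy before more capacity is removed.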
-
Patent number: 12190484
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly generating modified digital images utilizing a guided inpainting approach that implements a patch match model informed by a deep visual guide. In particular, the disclosed systems can utilize a visual guide algorithm to automatically generate guidance maps to help identify replacement pixels for inpainting regions of digital images utilizing a patch match model. For example, the disclosed systems can generate guidance maps in the form of structure maps, depth maps, or segmentation maps that respectively indicate the structure, depth, or segmentation of different portions of digital images. Additionally, the disclosed systems can implement a patch match model to identify replacement pixels for filling regions of digital images according to the structure, depth, and/or segmentation of the digital images.
Type: Grant
Filed: March 15, 2021
Date of Patent: January 7, 2025
Assignee: Adobe Inc.
Inventors: Sohrab Amirghodsi, Lingzhi Zhang, Zhe Lin, Connelly Barnes, Elya Shechtman
-
Publication number: 20240428384
Abstract: Inpainting dispatch techniques for digital images are described. In one or more examples, an inpainting system includes a plurality of inpainting modules. The inpainting modules are configured to employ a variety of different techniques, respectively, as part of performing an inpainting operation. An inpainting dispatch module is also included as part of the inpainting system that is configured to select which of the plurality of inpainting modules are to be used to perform an inpainting operation for one or more regions in a digital image, automatically and without user intervention.
Type: Application
Filed: June 22, 2023
Publication date: December 26, 2024
Applicant: Adobe Inc.
Inventors: Yuqian Zhou, Zhe Lin, Xiaoyang Liu, Sohrab Amirghodsi, Qing Liu, Lingzhi Zhang, Elya Schechtman, Connelly Stuart Barnes
-
Publication number: 20240418626
Abstract: The present invention discloses a method for cell identification. The method involves acquiring cell information, which includes cellular traction force data obtained from a point within a cell using a cellular mechanical sensor, including the magnitude of the traction force at that point. Subsequently, the acquired cell information undergoes preprocessing to generate structured cell data, comprising counts of cells, the number of cell features, and relevant information about each cell feature. This structured cell data is then utilized as input to establish a cell feature model through supervised, unsupervised, or semi-supervised machine learning techniques. The established model is subsequently applied to classify or cluster cells of unknown types or states. Additionally, the invention encompasses a cell identification device and system embodying this technical solution.
Type: Application
Filed: March 22, 2024
Publication date: December 19, 2024
Inventor: Zhe LIN
-
Patent number: 12169895
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that generate a height map for a digital object portrayed in a digital image and further utilize the height map to generate a shadow for the digital object. Indeed, in one or more embodiments, the disclosed systems generate (e.g., utilizing a neural network) a height map that indicates the pixel heights for pixels of a digital object portrayed in a digital image. The disclosed systems utilize the pixel heights, along with lighting information for the digital image, to determine how the pixels of the digital image project to create a shadow for the digital object. Further, in some implementations, the disclosed systems utilize the determined shadow projections to generate (e.g., utilizing another neural network) a soft shadow for the digital object. Accordingly, in some cases, the disclosed systems modify the digital image to include the shadow.
Type: Grant
Filed: October 15, 2021
Date of Patent: December 17, 2024
Assignee: Adobe Inc.
Inventors: Yifan Liu, Jianming Zhang, He Zhang, Elya Shechtman, Zhe Lin
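The geometric core of projecting a pixel's height into a shadow position can be sketched as follows; the azimuth/elevation parameterization is an assumption for illustration, not the patented projection.

```python
import math

def shadow_point(x, y, height, light_azimuth_deg, light_elevation_deg):
    """Where a point `height` units above ground position (x, y) casts
    its shadow, for a distant light at the given azimuth and elevation.

    Simple geometric sketch of the idea in the abstract: the shadow lands
    at horizontal distance height / tan(elevation) along the azimuth.
    """
    reach = height / math.tan(math.radians(light_elevation_deg))
    az = math.radians(light_azimuth_deg)
    return (x + reach * math.cos(az), y + reach * math.sin(az))
```

For example, with the light 45 degrees above the horizon, a point one unit high shadows one unit away horizontally; lower the light toward the horizon and the shadow stretches out, which matches everyday intuition.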
-
Patent number: 12165295
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that generate inpainted digital images utilizing a cascaded modulation inpainting neural network. For example, the disclosed systems utilize a cascaded modulation inpainting neural network that includes cascaded modulation decoder layers. For example, in one or more decoder layers, the disclosed systems start with global code modulation that captures the global-range image structures, followed by an additional modulation that refines the global predictions. Accordingly, in one or more implementations, the image inpainting system provides a mechanism to correct distorted local details. Furthermore, in one or more implementations, the image inpainting system leverages fast Fourier convolution blocks within different resolution layers of the encoder architecture to expand the receptive field of the encoder and to allow the network encoder to better capture global structure.
Type: Grant
Filed: May 4, 2022
Date of Patent: December 10, 2024
Assignee: Adobe Inc.
Inventors: Haitian Zheng, Zhe Lin, Jingwan Lu, Scott Cohen, Elya Shechtman, Connelly Barnes, Jianming Zhang, Ning Xu, Sohrab Amirghodsi
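The receptive-field claim about fast Fourier convolutions can be illustrated with the spectral half of such a block: convolving in the frequency domain multiplies the whole spectrum at once, so every output position depends on every input position. This is a simplified sketch; real FFC blocks also carry a local convolutional branch and learned channel mixing.

```python
import numpy as np

def fourier_conv(x, spectral_weight):
    """Global convolution in the frequency domain (simplified spectral
    branch of a fast Fourier convolution block).

    x: (H, W) feature map; spectral_weight: (H, W//2 + 1) complex filter
    matching the shape of the real 2-D FFT of x.
    """
    spec = np.fft.rfft2(x)                       # full-image spectrum
    return np.fft.irfft2(spec * spectral_weight, s=x.shape)
```

With an all-ones filter the operation is the identity; any other filter reweights global frequency components, which is how a single layer attains an image-wide receptive field without stacking many spatial convolutions.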
-
Patent number: 12165284
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that implement a dual-branched neural network architecture to harmonize composite images. For example, in one or more implementations, the transformer-based harmonization system uses a convolutional branch and a transformer branch to generate a harmonized composite image based on an input composite image and a corresponding segmentation mask. More particularly, the convolutional branch comprises a series of convolutional neural network layers followed by a style normalization layer to extract localized information from the input composite image. Further, the transformer branch comprises a series of transformer neural network layers to extract global information based on different resolutions of the input composite image.
Type: Grant
Filed: March 21, 2022
Date of Patent: December 10, 2024
Assignee: Adobe Inc.
Inventors: He Zhang, Jianming Zhang, Jose Ignacio Echevarria Vallespi, Kalyan Sunkavalli, Meredith Payne Stotzner, Yinglan Ma, Zhe Lin, Elya Shechtman, Frederick Mandia
-
Publication number: 20240404013
Abstract: Embodiments include systems and methods for generative image filling based on text and a reference image. In one aspect, the system obtains an input image, a reference image, and a text prompt. Then, the system encodes the reference image to obtain an image embedding and encodes the text prompt to obtain a text embedding. Subsequently, a composite image is generated based on the input image, the image embedding, and the text embedding.
Type: Application
Filed: November 21, 2023
Publication date: December 5, 2024
Inventors: Yuqian Zhou, Krishna Kumar Singh, Zhe Lin, Qing Liu, Zhifei Zhang, Sohrab Amirghodsi, Elya Shechtman, Jingwan Lu
-
Publication number: 20240404243
Abstract: Systems and methods for multimodal machine learning are provided. According to one aspect, a method for multimodal machine learning includes obtaining a prompt; encoding the prompt using a multimodal encoder to obtain a prompt embedding, wherein the encoding comprises generating a plurality of multi-head attention (MHA) outputs corresponding to a plurality of different scales, respectively, and combining the plurality of MHA outputs using a multi-scale aggregator; and generating a response to the prompt based on the prompt embedding.
Type: Application
Filed: June 5, 2023
Publication date: December 5, 2024
Inventors: Handong Zhao, Yue Bai, Zhe Lin, Ajinkya Gorakhnath Kale, Jiuxiang Gu, Tong Yu, Sungchul Kim
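The combining step above can be sketched as a softmax-weighted sum over the per-scale MHA outputs; this particular combination rule is an assumption for illustration, since the abstract names a "multi-scale aggregator" without specifying its form.

```python
import numpy as np

def multi_scale_aggregate(mha_outputs, scale_logits):
    """Combine per-scale multi-head-attention outputs with a
    softmax-weighted sum (one plausible aggregator, not necessarily
    the patented one).

    mha_outputs: list of S arrays, each (seq_len, dim), one per scale.
    scale_logits: length-S array of learnable mixing logits.
    """
    w = np.exp(scale_logits - np.max(scale_logits))
    w = w / w.sum()                          # softmax over scales
    stacked = np.stack(mha_outputs)          # (S, seq_len, dim)
    return np.tensordot(w, stacked, axes=1)  # weighted sum -> (seq_len, dim)
```

Because the logits are learnable, training can shift the mixture toward whichever attention scales carry the most useful signal for a given modality.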