Patents by Inventor Jianming Zhang

Jianming Zhang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12236640
    Abstract: Systems and methods for image dense field based view calibration are provided. In one embodiment, an input image is applied to a dense field machine learning model that generates a vertical vector dense field (VVF) and a latitude dense field (LDF) from the input image. The VVF comprises a vertical vector of the projected vanishing point direction for each pixel of the input image, and the LDF comprises a projected latitude value for each pixel. A dense field map for the input image comprising the VVF and the LDF can be used, directly or indirectly, for a variety of image processing manipulations. The VVF and LDF can optionally be used to derive traditional camera calibration parameters from uncontrolled images that have undergone undocumented or unknown manipulations.
    Type: Grant
    Filed: March 28, 2022
    Date of Patent: February 25, 2025
    Assignee: Adobe Inc.
    Inventors: Jianming Zhang, Linyi Jin, Kevin Matzen, Oliver Wang, Yannick Hold-Geoffroy
  • Publication number: 20250054116
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that generate inpainted digital images utilizing a cascaded modulation inpainting neural network. For example, the disclosed systems utilize a cascaded modulation inpainting neural network that includes cascaded modulation decoder layers. In one or more decoder layers, the disclosed systems start with a global code modulation that captures global-range image structures, followed by an additional modulation that refines the global predictions. Accordingly, in one or more implementations, the image inpainting system provides a mechanism to correct distorted local details. Furthermore, in one or more implementations, the image inpainting system leverages fast Fourier convolution blocks within different resolution layers of the encoder architecture to expand the receptive field of the encoder and to allow the network encoder to better capture global structure.
    Type: Application
    Filed: October 28, 2024
    Publication date: February 13, 2025
    Inventors: Haitian Zheng, Zhe Lin, Jingwan Lu, Scott Cohen, Elya Shechtman, Connelly Barnes, Jianming Zhang, Ning Xu, Sohrab Amirghodsi
  • Patent number: 12223439
    Abstract: Systems and methods for multi-modal representation learning are described. One or more embodiments provide a visual representation learning system trained using machine learning techniques. For example, some embodiments of the visual representation learning system are trained using cross-modal training tasks including a combination of intra-modal and inter-modal similarity preservation objectives. In some examples, the training tasks are based on contrastive learning techniques.
    Type: Grant
    Filed: March 3, 2021
    Date of Patent: February 11, 2025
    Assignee: Adobe Inc.
    Inventors: Xin Yuan, Zhe Lin, Jason Wen Yong Kuen, Jianming Zhang, Yilin Wang, Ajinkya Kale, Baldo Faieta
  • Patent number: 12223661
    Abstract: Systems and methods provide editing operations in a smart editing system that may generate a focal point within a mask of an object for each frame of a video segment and perform editing effects on the frames of the video segment to quickly provide users with natural video editing effects. An eye-gaze network may produce a hotspot map of predicted focal points in a video frame. These predicted focal points may then be used by a gaze-to-mask network to determine objects in the image and generate an object mask for each of the detected objects. This process may then be repeated to effectively track the trajectory of objects and object focal points in videos. Based on the determined trajectory of an object in a video clip and editing parameters, the editing engine may produce editing effects relative to an object for the video clip.
    Type: Grant
    Filed: May 3, 2022
    Date of Patent: February 11, 2025
    Assignee: Adobe Inc.
    Inventors: Lu Zhang, Jianming Zhang, Zhe Lin, Radomir Mech
  • Publication number: 20250038704
    Abstract: A unidirectional conductive joint assembly includes a unidirectional conductive device, wherein a conductive sheet is electrically connected to a conductive end of the unidirectional conductive device, another conductive sheet is electrically connected to another conductive end of the unidirectional conductive device, insulating films are laminated on two sides of each conductive sheet, and each conductive sheet is sandwiched between two insulating films; and the two conductive ends of the unidirectional conductive device are insulated and sealed in an insulating package, portions of the conductive sheets on the sides of the insulating films close to the unidirectional conductive device are first conductive portions, the first conductive portions and film sides of the insulating films close to the first conductive portions are all insulated and sealed in the insulating package, and the conductive sheets and the insulating films are all partially located outside the insulating package.
    Type: Application
    Filed: September 21, 2022
    Publication date: January 30, 2025
    Inventors: Jianming ZHANG, Jinliang GUAN, Min CHEN, Yong GUO, Chunxin LIU
  • Patent number: 12204610
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for training a generative inpainting neural network to accurately generate inpainted digital images via object-aware training and/or masked regularization. For example, the disclosed systems utilize an object-aware training technique to learn parameters for a generative inpainting neural network based on masking individual object instances depicted within sample digital images of a training dataset. In some embodiments, the disclosed systems also (or alternatively) utilize a masked regularization technique as part of training to prevent overfitting by penalizing a discriminator neural network utilizing a regularization term that is based on an object mask. In certain cases, the disclosed systems further generate an inpainted digital image utilizing a trained generative inpainting model with parameters learned via the object-aware training and/or the masked regularization.
    Type: Grant
    Filed: February 14, 2022
    Date of Patent: January 21, 2025
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Haitian Zheng, Jingwan Lu, Scott Cohen, Jianming Zhang, Ning Xu, Elya Shechtman, Connelly Barnes, Sohrab Amirghodsi
  • Publication number: 20250022099
    Abstract: Systems and methods for image compositing are provided. An aspect of the systems and methods includes obtaining a first image and a second image, wherein the first image includes a target location and the second image includes a target element; encoding the second image using an image encoder to obtain an image embedding; generating a descriptive embedding based on the image embedding using an adapter network; and generating a composite image based on the descriptive embedding and the first image using an image generation model, wherein the composite image depicts the target element from the second image at the target location of the first image.
    Type: Application
    Filed: July 13, 2023
    Publication date: January 16, 2025
    Inventors: Yizhi Song, Zhifei Zhang, Zhe Lin, Scott Cohen, Brian Lynn Price, Jianming Zhang, Soo Ye Kim
  • Patent number: 12198224
    Abstract: Systems and methods for image generation are described. Embodiments of the present disclosure receive a text phrase that describes a target image to be generated; generate text features based on the text phrase; retrieve a search image based on the text phrase; and generate the target image using an image generation network based on the text features and the search image.
    Type: Grant
    Filed: February 15, 2022
    Date of Patent: January 14, 2025
    Assignee: Adobe Inc.
    Inventors: Xin Yuan, Zhe Lin, Jason Wen Yong Kuen, Jianming Zhang, John Philip Collomosse
  • Publication number: 20250014201
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for training and/or implementing machine learning models utilizing compressed log scene measurement maps. For example, the disclosed system generates compressed log scene measurement maps by applying a logarithmic function to scene measurement maps. In particular, the disclosed system uses scene measurement distribution metrics from a digital image to determine a base for the logarithmic function. In this way, the compressed log scene measurement maps normalize ranges within a digital image and accurately differentiate between scene elements at a variety of depths. Moreover, for training, the disclosed system generates a predicted scene measurement map via a machine learning model and compares the predicted scene measurement map with a compressed log ground truth map.
    Type: Application
    Filed: September 17, 2024
    Publication date: January 9, 2025
    Inventor: Jianming Zhang
  • Publication number: 20250005807
    Abstract: A method, apparatus, non-transitory computer readable medium, and system for image processing include obtaining a text prompt. A text encoder encodes the text prompt to obtain a preliminary text embedding. An adaptor network generates an adapted text embedding based on the preliminary text embedding. In some cases, the adaptor network is trained to adapt the preliminary text embedding for generating an input to an image generation model. The image generation model generates a synthetic image based on the adapted text embedding. In some cases, the synthetic image includes content described by the text prompt.
    Type: Application
    Filed: November 13, 2023
    Publication date: January 2, 2025
    Inventor: Jianming Zhang
  • Publication number: 20250008279
    Abstract: The present disclosure relates to the technical field of wearable devices, and provides a hearing aid control method and apparatus, a hearing aid device, and a storage medium. The hearing aid control method includes: displaying a hearing detection image in a viewing window area of a pair of augmented reality (AR) glasses, playing hearing test audio to acquire a feedback signal from a user wearing the hearing aid device based on the hearing test audio, and determining a hearing assessment result for the user based on the feedback signal; in response to determining, based on the hearing assessment result, that a display operation is required to be performed by the hearing aid device, collecting speech information; and converting the collected speech information into text information and displaying the text information in the viewing window area of the AR glasses.
    Type: Application
    Filed: May 18, 2022
    Publication date: January 2, 2025
    Inventors: Xianglong LIANG, Fei WU, Lin MA, Zhiyuan CHENG, Jianming ZHANG, Shenqiang LOU, Wei ZHAO
  • Patent number: 12169895
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that generate a height map for a digital object portrayed in a digital image and further utilize the height map to generate a shadow for the digital object. Indeed, in one or more embodiments, the disclosed systems generate (e.g., utilizing a neural network) a height map that indicates the pixel heights for pixels of a digital object portrayed in a digital image. The disclosed systems utilize the pixel heights, along with lighting information for the digital image, to determine how the pixels of the digital image project to create a shadow for the digital object. Further, in some implementations, the disclosed systems utilize the determined shadow projections to generate (e.g., utilizing another neural network) a soft shadow for the digital object. Accordingly, in some cases, the disclosed systems modify the digital image to include the shadow.
    Type: Grant
    Filed: October 15, 2021
    Date of Patent: December 17, 2024
    Assignee: Adobe Inc.
    Inventors: Yifan Liu, Jianming Zhang, He Zhang, Elya Shechtman, Zhe Lin
  • Patent number: 12165295
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that generate inpainted digital images utilizing a cascaded modulation inpainting neural network. For example, the disclosed systems utilize a cascaded modulation inpainting neural network that includes cascaded modulation decoder layers. In one or more decoder layers, the disclosed systems start with a global code modulation that captures global-range image structures, followed by an additional modulation that refines the global predictions. Accordingly, in one or more implementations, the image inpainting system provides a mechanism to correct distorted local details. Furthermore, in one or more implementations, the image inpainting system leverages fast Fourier convolution blocks within different resolution layers of the encoder architecture to expand the receptive field of the encoder and to allow the network encoder to better capture global structure.
    Type: Grant
    Filed: May 4, 2022
    Date of Patent: December 10, 2024
    Assignee: Adobe Inc.
    Inventors: Haitian Zheng, Zhe Lin, Jingwan Lu, Scott Cohen, Elya Shechtman, Connelly Barnes, Jianming Zhang, Ning Xu, Sohrab Amirghodsi
  • Patent number: 12165292
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing a plurality of neural networks in a multi-branch pipeline to generate image masks for digital images. Specifically, the disclosed system can classify a digital image as a portrait or a non-portrait image. Based on classifying a portrait image, the disclosed system can utilize separate neural networks to generate a first mask portion for a portion of the digital image including a defined boundary region and a second mask portion for a portion of the digital image including a blended boundary region. The disclosed system can generate the mask portion for the blended boundary region by utilizing a trimap generation neural network to automatically generate a trimap segmentation including the blended boundary region. The disclosed system can then merge the first mask portion and the second mask portion to generate an image mask for the digital image.
    Type: Grant
    Filed: May 15, 2023
    Date of Patent: December 10, 2024
    Assignee: Adobe Inc.
    Inventors: He Zhang, Seyed Morteza Safdarnejad, Yilin Wang, Zijun Wei, Jianming Zhang, Salil Tambe, Brian Price
  • Patent number: 12165284
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that implement a dual-branched neural network architecture to harmonize composite images. For example, in one or more implementations, the transformer-based harmonization system uses a convolutional branch and a transformer branch to generate a harmonized composite image based on an input composite image and a corresponding segmentation mask. More particularly, the convolutional branch comprises a series of convolutional neural network layers followed by a style normalization layer to extract localized information from the input composite image. Further, the transformer branch comprises a series of transformer neural network layers to extract global information based on different resolutions of the input composite image.
    Type: Grant
    Filed: March 21, 2022
    Date of Patent: December 10, 2024
    Assignee: Adobe Inc.
    Inventors: He Zhang, Jianming Zhang, Jose Ignacio Echevarria Vallespi, Kalyan Sunkavalli, Meredith Payne Stotzner, Yinglan Ma, Zhe Lin, Elya Shechtman, Frederick Mandia
  • Publication number: 20240404138
    Abstract: In accordance with the described techniques, an image delighting system receives an input image depicting a human subject that includes lighting effects. The image delighting system further generates a segmentation mask and a skin tone mask. The segmentation mask includes multiple segments each representing a different portion of the human subject, and the skin tone mask identifies one or more color values for a skin region of the human subject. Using a machine learning lighting removal network, the image delighting system generates an unlit image by removing the lighting effects from the input image based on the segmentation mask and the skin tone mask.
    Type: Application
    Filed: June 2, 2023
    Publication date: December 5, 2024
    Applicant: Adobe Inc.
    Inventors: He Zhang, Shi Yan, Jianming Zhang
  • Publication number: 20240404188
    Abstract: In accordance with the described techniques, a portrait relighting system receives user input defining one or more markings drawn on a portrait image. Using one or more machine learning models, the portrait relighting system generates an albedo representation of the portrait image by removing lighting effects from the portrait image. Further, the portrait relighting system generates a shading map of the portrait image using the one or more machine learning models by designating the one or more markings as a lighting condition, and applying the lighting condition to a geometric representation of the portrait image. The one or more machine learning models are further employed to generate a relit portrait image based on the albedo representation and the shading map.
    Type: Application
    Filed: June 2, 2023
    Publication date: December 5, 2024
    Applicant: Adobe Inc.
    Inventors: He Zhang, Zijun Wei, Zhixin Shu, Yiqun Mei, Yilin Wang, Xuaner Zhang, Shi Yan, Jianming Zhang
  • Publication number: 20240404090
    Abstract: In various examples, a set of camera parameters associated with an input image are determined based on a disparity map and a signed defocus map. For example, a disparity model generates the disparity map indicating disparity values associated with pixels of the input image and a defocus model generates a signed defocus map indicating blur values associated with the pixels of the input image.
    Type: Application
    Filed: June 2, 2023
    Publication date: December 5, 2024
    Inventors: Yannick HOLD-GEOFFROY, Jianming ZHANG, Dominique PICHE-MEUNIER, Jean-François LALONDE
  • Patent number: 12152150
    Abstract: The disclosure provides a high-viscosity, high-elasticity, anti-aging composite modified asphalt and a preparation method thereof. The disclosure belongs to the technical field of road engineering materials and addresses the technical problem that the comprehensive performance of existing asphalt ultrathin wearing layers needs further improvement in order to prolong the service life of the pavement surface layer and reduce pavement maintenance costs. The composite modified asphalt is prepared from the following components in parts by mass: 100 parts of a matrix asphalt, 10 to 15 parts of a thermoplastic styrene-butadiene rubber, 5 to 8 parts of a tackifier, 0.5 to 1.5 parts of a plasticizer, 2 to 5 parts of a compatibilizer, 0.1 to 0.4 parts of a stabilizer, and 0.01 to 0.05 parts of an anti-aging agent. The composite modified asphalt prepared by the disclosure has the advantages of high elasticity, high viscosity, and excellent aging resistance.
    Type: Grant
    Filed: February 2, 2024
    Date of Patent: November 26, 2024
    Assignee: Sichuan Road and Bridge Construction Group Co., Ltd.
    Inventors: Shuangquan Jiang, Jian Yang, Peilong Li, Wei Lu, Liuda Cheng, Yi Pei, Zhan Ding, Jianglin Du, Haiqing Li, Jixiang Pu, Qingyun Li, Maoqin Niu, Jianming Zhang, Wanchun Liu
  • Patent number: 12148074
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately and flexibly generating harmonized digital images utilizing an object-to-object harmonization neural network. For example, the disclosed systems implement, and learn parameters for, an object-to-object harmonization neural network to combine a style code from a reference object with features extracted from a target object. Indeed, the disclosed systems extract a style code from a reference object utilizing a style encoder neural network. In addition, the disclosed systems generate a harmonized target object by applying the style code of the reference object to a target object utilizing an object-to-object harmonization neural network.
    Type: Grant
    Filed: October 18, 2021
    Date of Patent: November 19, 2024
    Assignee: Adobe Inc.
    Inventors: He Zhang, Jeya Maria Jose Valanarasu, Jianming Zhang, Jose Ignacio Echevarria Vallespi, Kalyan Sunkavalli, Yilin Wang, Yinglan Ma, Zhe Lin, Zijun Wei
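Several of the imaging entries above reduce to compact numeric transforms. As one illustration, the compressed log scene measurement maps described in publication number 20250014201 can be sketched as follows. This is a minimal sketch, not the patented method: the percentile-ratio heuristic for choosing the log base, the clipping to [0, 1], and the function name are all illustrative assumptions.

```python
import numpy as np

def compress_log_map(scene_map, eps=1e-6):
    """Compress a scene measurement map (e.g., a depth map) with a logarithm.

    The log base is derived from the map's own value distribution (here,
    the ratio of a high percentile to a low percentile), so that the
    compressed values land roughly in [0, 1]. This base-selection
    heuristic is an assumption made for illustration.
    """
    scene_map = np.asarray(scene_map, dtype=np.float64)
    lo = np.percentile(scene_map, 1) + eps   # robust lower bound
    hi = np.percentile(scene_map, 99) + eps  # robust upper bound
    base = max(hi / lo, 1.0 + eps)           # distribution-derived log base
    # log_base(x / lo) via change of base, then clamp to [0, 1]
    compressed = np.log(scene_map / lo + eps) / np.log(base)
    return np.clip(compressed, 0.0, 1.0)

# Near scene elements (0.5 m, 1 m) stay separable from far ones (10 m, 100 m).
depth = np.array([[0.5, 1.0], [10.0, 100.0]])
out = compress_log_map(depth)
```

Because the transform is monotonic, ordering by distance is preserved while the dynamic range between near and far elements is compressed, which is the normalization property the abstract describes.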