Patents by Inventor Zijun Wei

Zijun Wei has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240028871
    Abstract: Embodiments are disclosed for performing wire segmentation of images using machine learning. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving an input image, generating, by a first trained neural network model, a global probability map representation of the input image indicating a probability value of each pixel including a representation of wires, and identifying regions of the input image indicated as including the representation of wires. The disclosed systems and methods further comprise, for each region from the identified regions, concatenating the region and information from the global probability map to create a concatenated input, and generating, by a second trained neural network model, a local probability map representation of the region based on the concatenated input, indicating pixels of the region including representations of wires. The disclosed systems and methods further comprise aggregating local probability maps for each region.
    Type: Application
    Filed: July 21, 2022
    Publication date: January 25, 2024
    Applicant: Adobe Inc.
    Inventors: Mang Tik Chiu, Connelly Barnes, Zijun Wei, Zhe Lin, Yuqian Zhou, Xuaner Zhang, Sohrab Amirghodsi, Florian Kainz, Elya Shechtman
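The abstract above describes a coarse-to-fine, two-model pipeline. A minimal sketch of that control flow follows, assuming a fixed tile size and stand-in callables (`global_model`, `local_model`) in place of the trained networks described in the filing.

```python
"""Minimal sketch of a coarse-to-fine wire segmentation pipeline.

`global_model` and `local_model` stand in for the two trained networks in the
abstract; here they are toy callables so the script runs end to end."""
import numpy as np

TILE = 64  # hypothetical tile size for local refinement


def segment_wires(image, global_model, local_model, thresh=0.5):
    h, w = image.shape[:2]
    # Stage 1: coarse, image-wide probability of "wire" for every pixel.
    global_prob = global_model(image)                       # (h, w) in [0, 1]

    out = global_prob.copy()
    # Stage 2: refine only the tiles the coarse map flags as containing wires.
    for y in range(0, h, TILE):
        for x in range(0, w, TILE):
            g = global_prob[y:y + TILE, x:x + TILE]
            if g.max() < thresh:                            # skip wire-free tiles
                continue
            crop = image[y:y + TILE, x:x + TILE]
            # Concatenate the crop with the matching slice of the global map so
            # the local model sees both raw pixels and the coarse evidence.
            local_in = np.dstack([crop, g[..., None]])
            out[y:y + TILE, x:x + TILE] = local_model(local_in)

    # Aggregation: refined tiles overwrite the coarse estimate; the rest keep it.
    return out


if __name__ == "__main__":
    img = np.random.rand(256, 256, 3).astype(np.float32)
    toy_global = lambda im: im.mean(axis=-1)                # stand-in network
    toy_local = lambda x: x[..., -1]                        # echoes the coarse map
    print(segment_wires(img, toy_global, toy_local).shape)
```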
  • Publication number: 20230401717
    Abstract: Systems and methods for image segmentation are described. Embodiments of the present disclosure receive an image depicting an object; generate image features for the image by performing an atrous self-attention operation based on a plurality of dilation rates for a convolutional kernel applied at a position of a sliding window on the image; and generate label data that identifies the object based on the image features.
    Type: Application
    Filed: June 10, 2022
    Publication date: December 14, 2023
    Inventors: Yilin Wang, Chenglin Yang, Jianming Zhang, He Zhang, Zijun Wei, Zhe Lin
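As a rough illustration of attention computed over several dilation rates, here is a single-position sketch; the 3x3 footprint, the averaging across rates, and the use of the window centre as the query are assumptions, not details from the publication.

```python
"""Rough sketch of an atrous (dilated) self-attention step at one window
position, combining several dilation rates."""
import numpy as np


def atrous_self_attention(feat, y, x, rates=(1, 2, 4)):
    """feat: (H, W, C) feature map; returns the attended vector at (y, x)."""
    H, W, C = feat.shape
    q = feat[y, x]                                   # query at the window centre
    outputs = []
    for d in rates:
        # Gather the 3x3 neighbourhood sampled with dilation d (clamped at edges).
        ys = np.clip([y - d, y, y + d], 0, H - 1)
        xs = np.clip([x - d, x, x + d], 0, W - 1)
        neigh = feat[np.ix_(ys, xs)].reshape(-1, C)  # (9, C) keys == values
        logits = neigh @ q / np.sqrt(C)              # scaled dot-product scores
        w = np.exp(logits - logits.max())
        w /= w.sum()                                 # softmax attention weights
        outputs.append(w @ neigh)                    # attention-weighted value
    return np.mean(outputs, axis=0)                  # fuse the dilation rates


if __name__ == "__main__":
    fmap = np.random.rand(32, 32, 16).astype(np.float32)
    print(atrous_self_attention(fmap, 10, 10).shape)   # -> (16,)
```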
  • Publication number: 20230401716
    Abstract: Systems and methods for image segmentation are described. Embodiments of the present disclosure receive an image depicting an object; generate image features for the image by performing a convolutional self-attention operation that outputs a plurality of attention-weighted values for a convolutional kernel applied at a position of a sliding window on the image; and generate label data that identifies the object based on the image features.
    Type: Application
    Filed: June 10, 2022
    Publication date: December 14, 2023
    Inventors: Yilin Wang, Chenglin Yang, Jianming Zhang, He Zhang, Zijun Wei, Zhe Lin
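For contrast with the previous entry, a full sliding-window pass in which softmax attention weights take the place of fixed convolution-kernel weights can be sketched as below; the kernel size and the choice of the window centre as the query are again illustrative assumptions.

```python
"""Rough sketch of a convolutional self-attention pass: at every sliding-window
position a k x k patch is re-weighted by attention scores instead of fixed
kernel weights."""
import numpy as np


def conv_self_attention(feat, k=3):
    H, W, C = feat.shape
    pad = k // 2
    padded = np.pad(feat, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.empty_like(feat)
    for y in range(H):
        for x in range(W):
            patch = padded[y:y + k, x:x + k].reshape(-1, C)   # (k*k, C)
            q = feat[y, x]                                    # centre query
            logits = patch @ q / np.sqrt(C)
            w = np.exp(logits - logits.max())
            w /= w.sum()                                      # attention weights
            out[y, x] = w @ patch                             # weighted values
    return out


if __name__ == "__main__":
    fmap = np.random.rand(16, 16, 8).astype(np.float32)
    print(conv_self_attention(fmap).shape)                    # -> (16, 16, 8)
```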
  • Publication number: 20230401718
    Abstract: An image processing system generates an image mask from an image. The image is processed by an object detector to identify a region having an object, and the region is classified based on an object type of the object. A masking pipeline is selected from a number of masking pipelines based on the classification of the region. The region is processed using the masking pipeline to generate a region mask. An image mask for the image is generated using the region mask.
    Type: Application
    Filed: June 13, 2022
    Publication date: December 14, 2023
    Inventors: Zijun Wei, Yilin Wang, Jianming Zhang, He Zhang
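The routing step in this abstract (classify the detected region, then pick a masking pipeline by object type) reduces to a dictionary lookup in the sketch below; the class names and the toy per-class routines are hypothetical, not pipelines named in the filing.

```python
"""Sketch of routing a detected region to a class-specific masking pipeline."""
import numpy as np

# Hypothetical registry: object type -> masking routine for that region.
def _person_mask(region):   return (region.mean(axis=-1) > 0.5).astype(np.uint8)
def _sky_mask(region):      return (region[..., 2] > 0.6).astype(np.uint8)
def _generic_mask(region):  return (region.mean(axis=-1) > 0.4).astype(np.uint8)

PIPELINES = {"person": _person_mask, "sky": _sky_mask}


def mask_image(image, detections):
    """detections: list of (x0, y0, x1, y1, object_type) from an object detector."""
    image_mask = np.zeros(image.shape[:2], dtype=np.uint8)
    for x0, y0, x1, y1, obj_type in detections:
        region = image[y0:y1, x0:x1]
        pipeline = PIPELINES.get(obj_type, _generic_mask)   # pick by class
        image_mask[y0:y1, x0:x1] |= pipeline(region)        # fold region mask in
    return image_mask


if __name__ == "__main__":
    img = np.random.rand(128, 128, 3)
    dets = [(10, 10, 60, 60, "person"), (70, 70, 120, 120, "sky")]
    print(mask_image(img, dets).sum())
```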
  • Publication number: 20230328330
    Abstract: Embodiments of the present disclosure provide a live streaming interface display method and apparatus, an electronic device, and a storage medium. The live streaming interface display method is applied to a terminal device and the terminal device accesses a live streaming room. The method includes: determining at least one piece of popular comment content in the live streaming room in a current counting period; and distinguishingly displaying, on a live streaming interface of the live streaming room, the popular comment content and real-time comment content in the live streaming room.
    Type: Application
    Filed: June 6, 2023
    Publication date: October 12, 2023
    Inventors: Jingting He, Xuyuan Xiang, Wenjing Liu, Zijun Wei
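The core of the abstract above is selecting the most-engaged comments within the current counting period. A toy sketch, assuming a like-count ranking and a 30-second period (neither is specified in the publication):

```python
"""Toy sketch of the counting-period logic: within the current period, pick the
most-engaged comments so the client can render them apart from the real-time
comment stream. Field names and the scoring rule are assumptions."""
import time

COUNTING_PERIOD_S = 30   # hypothetical length of one counting period


def popular_comments(comments, now=None, top_n=3):
    """comments: list of dicts with 'text', 'likes', 'timestamp' keys."""
    now = time.time() if now is None else now
    in_period = [c for c in comments if c["timestamp"] >= now - COUNTING_PERIOD_S]
    # Rank comments in the current period by engagement (here: like count).
    in_period.sort(key=lambda c: c["likes"], reverse=True)
    return in_period[:top_n]


if __name__ == "__main__":
    now = time.time()
    feed = [
        {"text": "hello", "likes": 2, "timestamp": now - 5},
        {"text": "great stream!", "likes": 40, "timestamp": now - 10},
        {"text": "old comment", "likes": 99, "timestamp": now - 300},
    ]
    print([c["text"] for c in popular_comments(feed, now)])  # excludes the stale one
```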
  • Publication number: 20230281763
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing a plurality of neural networks in a multi-branch pipeline to generate image masks for digital images. Specifically, the disclosed system can classify a digital image as a portrait or a non-portrait image. Based on classifying a portrait image, the disclosed system can utilize separate neural networks to generate a first mask portion for a portion of the digital image including a defined boundary region and a second mask portion for a portion of the digital image including a blended boundary region. The disclosed system can generate the mask portion for the blended boundary region by utilizing a trimap generation neural network to automatically generate a trimap segmentation including the blended boundary region. The disclosed system can then merge the first mask portion and the second mask portion to generate an image mask for the digital image.
    Type: Application
    Filed: May 15, 2023
    Publication date: September 7, 2023
    Inventors: He Zhang, Seyed Morteza Safdarnejad, Yilin Wang, Zijun Wei, Jianming Zhang, Salil Tambe, Brian Price
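A compact sketch of the multi-branch flow in this abstract appears below (the same abstract recurs in the related granted and continuation filings further down). The classifier, hard-boundary network, trimap generator, and matting network are all stand-in callables, and the merge rule (matted alpha inside the trimap's unknown band, hard mask elsewhere) is one plausible reading, not the claimed implementation.

```python
"""Minimal sketch of the multi-branch portrait masking flow: classify the image,
then merge a hard-boundary mask with a matted mask for the blended (e.g. hair)
region selected by a trimap."""
import numpy as np

UNKNOWN = 1   # trimap label for the blended/unknown boundary band


def portrait_mask(image, classifier, hard_net, trimap_net, matting_net):
    if classifier(image) != "portrait":
        return hard_net(image)                     # non-portrait: single branch

    hard = hard_net(image)                         # defined-boundary mask in [0, 1]
    trimap = trimap_net(image)                     # 0=bg, 1=unknown band, 2=fg
    soft = matting_net(image, trimap)              # blended-boundary alpha in [0, 1]

    # Merge: keep the matted alpha inside the unknown band, the hard mask elsewhere.
    return np.where(trimap == UNKNOWN, soft, hard)


if __name__ == "__main__":
    img = np.random.rand(64, 64, 3)
    mask = portrait_mask(
        img,
        classifier=lambda im: "portrait",
        hard_net=lambda im: (im.mean(-1) > 0.5).astype(np.float32),
        trimap_net=lambda im: np.random.randint(0, 3, im.shape[:2]),
        matting_net=lambda im, t: im.mean(-1),
    )
    print(mask.shape)
```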
  • Patent number: 11651477
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing a plurality of neural networks in a multi-branch pipeline to generate image masks for digital images. Specifically, the disclosed system can classify a digital image as a portrait or a non-portrait image. Based on classifying a portrait image, the disclosed system can utilize separate neural networks to generate a first mask portion for a portion of the digital image including a defined boundary region and a second mask portion for a portion of the digital image including a blended boundary region. The disclosed system can generate the mask portion for the blended boundary region by utilizing a trimap generation neural network to automatically generate a trimap segmentation including the blended boundary region. The disclosed system can then merge the first mask portion and the second mask portion to generate an image mask for the digital image.
    Type: Grant
    Filed: August 7, 2020
    Date of Patent: May 16, 2023
    Assignee: Adobe Inc.
    Inventors: He Zhang, Seyed Morteza Safdarnejad, Yilin Wang, Zijun Wei, Jianming Zhang, Salil Tambe, Brian Price
  • Publication number: 20230128792
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that generates object masks for digital objects portrayed in digital images utilizing a detection-masking neural network pipeline. In particular, in one or more embodiments, the disclosed systems utilize detection heads of a neural network to detect digital objects portrayed within a digital image. In some cases, each detection head is associated with one or more digital object classes that are not associated with the other detection heads. Further, in some cases, the detection heads implement multi-scale synchronized batch normalization to normalize feature maps across various feature levels. The disclosed systems further utilize a masking head of the neural network to generate one or more object masks for the detected digital objects. In some cases, the disclosed systems utilize post-processing techniques to filter out low-quality masks.
    Type: Application
    Filed: January 31, 2022
    Publication date: April 27, 2023
    Inventors: Jason Wen Yong Kuen, Su Chen, Scott Cohen, Zhe Lin, Zijun Wei, Jianming Zhang
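Below is a schematic of the detect-then-mask flow with per-class-group detection heads and a quality filter; the head assignments, quality proxy, and threshold are illustrative, and the multi-scale synchronized batch normalization mentioned in the abstract is not modelled.

```python
"""Sketch of a detection-then-masking pipeline: several detection heads each own
a disjoint set of classes, a shared masking head turns detections into masks,
and low-scoring masks are filtered out."""
import numpy as np


def detect_and_mask(image, detection_heads, masking_head, quality_thresh=0.5):
    """detection_heads: callables returning lists of (box, class_name, score)."""
    detections = []
    for head in detection_heads:          # each head covers its own class group
        detections.extend(head(image))

    masks = []
    for box, cls, score in detections:
        mask = masking_head(image, box)               # per-detection object mask
        quality = score * float(mask.mean() > 0.01)   # crude quality proxy
        if quality >= quality_thresh:                 # drop low-quality masks
            masks.append((cls, mask))
    return masks


if __name__ == "__main__":
    img = np.random.rand(96, 96, 3)
    head_a = lambda im: [((10, 10, 50, 50), "person", 0.9)]
    head_b = lambda im: [((60, 60, 90, 90), "lamp", 0.3)]
    mask_head = lambda im, box: np.ones(im.shape[:2], dtype=np.float32)
    print([cls for cls, _ in detect_and_mask(img, [head_a, head_b], mask_head)])
```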
  • Publication number: 20230129341
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that generate preliminary object masks for objects in an image, surface the preliminary object masks as object mask previews, and on-demand converts preliminary object masks into refined object masks. Indeed, in one or more implementations, an object mask preview and on-demand generation system automatically detects objects in an image. For the detected objects, the object mask preview and on-demand generation system generates preliminary object masks for the detected objects of a first lower resolution. The object mask preview and on-demand generation system surfaces a given preliminary object mask in response to detecting a first input. The object mask preview and on-demand generation system also generates a refined object mask of a second higher resolution in response to detecting a second input.
    Type: Application
    Filed: January 25, 2022
    Publication date: April 27, 2023
    Inventors: Betty Leong, Hyunghwan Byun, Alan L Erickson, Chih-Yao Hsieh, Sarah Kong, Seyed Morteza Safdarnejad, Salil Tambe, Yilin Wang, Zijun Wei, Zhengyun Zhang
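The preview/on-demand split above can be sketched as a small class that precomputes cheap low-resolution masks and defers the expensive refinement until a second input; the 4x preview downscale and the model callables are assumptions.

```python
"""Sketch of preview masks plus on-demand refinement: cheap previews are
precomputed for every detected object, and a refined full-resolution mask is
only produced when the user commits."""
import numpy as np

PREVIEW_SCALE = 4   # hypothetical downscale factor for preview masks


class MaskPreviewer:
    def __init__(self, image, detector, coarse_net, refine_net):
        self.image, self.refine_net = image, refine_net
        self.objects = detector(image)                       # detected boxes
        small = image[::PREVIEW_SCALE, ::PREVIEW_SCALE]
        # Precompute low-resolution preview masks for every detected object.
        self.previews = [coarse_net(small, box) for box in self.objects]

    def on_hover(self, i):          # first input: surface the cheap preview
        return self.previews[i]

    def on_select(self, i):         # second input: compute the refined mask now
        return self.refine_net(self.image, self.objects[i])


if __name__ == "__main__":
    img = np.random.rand(256, 256, 3)
    ui = MaskPreviewer(
        img,
        detector=lambda im: [(20, 20, 120, 120)],
        coarse_net=lambda im, box: np.zeros(im.shape[:2], dtype=np.uint8),
        refine_net=lambda im, box: np.ones(im.shape[:2], dtype=np.uint8),
    )
    print(ui.on_hover(0).shape, ui.on_select(0).shape)   # (64, 64) vs (256, 256)
```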
  • Publication number: 20230122623
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately and flexibly generating harmonized digital images utilizing an object-to-object harmonization neural network. For example, the disclosed systems implement, and learn parameters for, an object-to-object harmonization neural network to combine a style code from a reference object with features extracted from a target object. Indeed, the disclosed systems extract a style code from a reference object utilizing a style encoder neural network. In addition, the disclosed systems generate a harmonized target object by applying the style code of the reference object to a target object utilizing an object-to-object harmonization neural network.
    Type: Application
    Filed: October 18, 2021
    Publication date: April 20, 2023
    Inventors: He Zhang, Jeya Maria Jose Valanarasu, Jianming Zhang, Jose Ignacio Echevarria Vallespi, Kalyan Sunkavalli, Yilin Wang, Yinglan Ma, Zhe Lin, Zijun Wei
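As a simplified stand-in for the learned style encoder and harmonization network, the sketch below uses per-channel statistics of the reference object as the "style code" and re-normalizes the target object to match; this illustrates the idea, not the patented architecture.

```python
"""Sketch of object-to-object harmonization: a style code is pulled from the
reference object and applied to the target object."""
import numpy as np


def style_code(ref_pixels):
    """Stand-in style encoder: per-channel statistics of the reference object."""
    return ref_pixels.mean(axis=0), ref_pixels.std(axis=0) + 1e-6


def harmonize(target_pixels, code):
    """Stand-in harmonization: re-normalize the target to the reference style."""
    mu_r, sigma_r = code
    mu_t, sigma_t = target_pixels.mean(axis=0), target_pixels.std(axis=0) + 1e-6
    return (target_pixels - mu_t) / sigma_t * sigma_r + mu_r


if __name__ == "__main__":
    reference = np.random.rand(500, 3) * 0.3 + 0.6    # bright reference object
    target = np.random.rand(400, 3) * 0.3             # dark composited object
    out = harmonize(target, style_code(reference))
    print(target.mean(0).round(2), "->", out.mean(0).round(2))
```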
  • Patent number: 11393100
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing a plurality of neural networks in a multi-branch pipeline to generate image masks for digital images. Specifically, the disclosed system can classify a digital image as a portrait or a non-portrait image. Based on classifying a portrait image, the disclosed system can utilize separate neural networks to generate a first mask portion for a portion of the digital image including a defined boundary region and a second mask portion for a portion of the digital image including a blended boundary region. The disclosed system can generate the mask portion for the blended boundary region by utilizing a trimap generation neural network to automatically generate a trimap segmentation including the blended boundary region. The disclosed system can then merge the first mask portion and the second mask portion to generate an image mask for the digital image.
    Type: Grant
    Filed: August 7, 2020
    Date of Patent: July 19, 2022
    Assignee: Adobe Inc.
    Inventors: He Zhang, Seyed Morteza Safdarnejad, Yilin Wang, Zijun Wei, Jianming Zhang, Salil Tambe, Brian Price
  • Publication number: 20220044366
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing a plurality of neural networks in a multi-branch pipeline to generate image masks for digital images. Specifically, the disclosed system can classify a digital image as a portrait or a non-portrait image. Based on classifying a portrait image, the disclosed system can utilize separate neural networks to generate a first mask portion for a portion of the digital image including a defined boundary region and a second mask portion for a portion of the digital image including a blended boundary region. The disclosed system can generate the mask portion for the blended boundary region by utilizing a trimap generation neural network to automatically generate a trimap segmentation including the blended boundary region. The disclosed system can then merge the first mask portion and the second mask portion to generate an image mask for the digital image.
    Type: Application
    Filed: August 7, 2020
    Publication date: February 10, 2022
    Inventors: He Zhang, Seyed Morteza Safdarnejad, Yilin Wang, Zijun Wei, Jianming Zhang, Salil Tambe, Brian Price
  • Publication number: 20220044365
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing a plurality of neural networks in a multi-branch pipeline to generate image masks for digital images. Specifically, the disclosed system can classify a digital image as a portrait or a non-portrait image. Based on classifying a portrait image, the disclosed system can utilize separate neural networks to generate a first mask portion for a portion of the digital image including a defined boundary region and a second mask portion for a portion of the digital image including a blended boundary region. The disclosed system can generate the mask portion for the blended boundary region by utilizing a trimap generation neural network to automatically generate a trimap segmentation including the blended boundary region. The disclosed system can then merge the first mask portion and the second mask portion to generate an image mask for the digital image.
    Type: Application
    Filed: August 7, 2020
    Publication date: February 10, 2022
    Inventors: He Zhang, Seyed Morteza Safdarnejad, Yilin Wang, Zijun Wei, Jianming Zhang, Salil Tambe, Brian Price
  • Patent number: 11096654
    Abstract: Devices, systems, and methods of the present disclosure are directed to accurate and non-invasive assessments of anatomic vessels (e.g., the internal jugular vein (IJV)) of vertebrates. For example, a piezoelectric crystal may generate a signal and receive a pulse echo of the signal along an axis extending through the piezoelectric crystal and an anatomic vessel. A force sensor disposed relative to the piezoelectric crystal may measure a force exerted (e.g., along skin of the vertebrate) on the anatomic vessel along the axis. The pulse echo received by the piezoelectric crystal and the force measured by the force sensor may, in combination, non-invasively and accurately determine a force response of the anatomic vessel. In turn, the force response may be probative of any one or more of a variety of different characteristics of the anatomic vessel including, for example, location of the anatomic vessel and pressure of the anatomic vessel.
    Type: Grant
    Filed: April 13, 2018
    Date of Patent: August 24, 2021
    Assignees: Massachusetts Institute of Technology, The General Hospital Corporation
    Inventors: Galit Hocsman Frydman, Alexander Tyler Jaffe, Maulik D. Majmudar, Mohamad Ali Toufic Najia, Robin Singh, Zijun Wei, Jason Yang, Brian W. Anthony, Athena Yeh Huang, Aaron Michael Zakrzewski
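To show how the two measurements combine, the sketch below converts pulse-echo round-trip times to wall depths and pairs the resulting vessel diameter with the force-sensor readings to form a force-response curve; the assumed speed of sound and all numbers are illustrative, not values from the patent.

```python
"""Illustrative pairing of the two signals: vessel-wall depth recovered from the
pulse-echo round-trip time, against the force applied at the skin."""
import numpy as np

SPEED_OF_SOUND_TISSUE = 1540.0   # m/s, commonly assumed for soft tissue


def echo_to_depth(round_trip_time_s):
    """Depth of a reflector from the pulse-echo round-trip time."""
    return SPEED_OF_SOUND_TISSUE * round_trip_time_s / 2.0


def force_response(echo_times, forces):
    """echo_times: (N, 2) round-trip times to the near/far vessel walls (s);
    forces: (N,) force-sensor readings (N). Returns (force, vessel diameter)."""
    near = echo_to_depth(echo_times[:, 0])
    far = echo_to_depth(echo_times[:, 1])
    diameter = far - near                       # vessel collapses as force grows
    return np.column_stack([forces, diameter])


if __name__ == "__main__":
    forces = np.linspace(0.0, 2.0, 5)                        # newtons
    walls = np.array([[13e-6, 13e-6 + d]                     # seconds (round trip)
                      for d in np.linspace(10e-6, 2e-6, 5)])
    print(force_response(walls, forces))
```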
  • Patent number: 10516830
    Abstract: Various embodiments describe facilitating real-time crops on an image. In an example, an image processing application executed on a device receives image data corresponding to a field of view of a camera of the device. The image processing application renders a major view on a display of the device in a preview mode. The major view presents a previewed image based on the image data. The image processing application receives a composition score of a cropped image from a deep-learning system. The image processing application renders a sub-view presenting the cropped image based on the composition score in a preview mode. Based on a user interaction, the image processing application renders the cropped image in the major view with the sub-view in the preview mode.
    Type: Grant
    Filed: October 11, 2017
    Date of Patent: December 24, 2019
    Assignee: Adobe Inc.
    Inventors: Jianming Zhang, Zijun Wei, Zhe Lin, Xiaohui Shen, Radomir Mech
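A minimal sketch of the preview loop described above (the same abstract recurs in the corresponding publication further down): candidate crops of the current frame are scored, the best crop is surfaced in a sub-view, and a user interaction would promote it to the major view. The fixed candidate grid and the toy scorer stand in for the deep-learning system named in the abstract.

```python
"""Sketch of real-time crop preview: score candidate crops of the live frame and
surface the best one in a sub-view."""
import numpy as np


def candidate_crops(h, w, scale=0.8):
    ch, cw = int(h * scale), int(w * scale)
    for y in (0, h - ch):
        for x in (0, w - cw):
            yield (x, y, x + cw, y + ch)


def best_crop(frame, scorer):
    h, w = frame.shape[:2]
    scored = [(scorer(frame[y0:y1, x0:x1]), (x0, y0, x1, y1))
              for x0, y0, x1, y1 in candidate_crops(h, w)]
    return max(scored)                      # (composition score, crop box)


if __name__ == "__main__":
    frame = np.random.rand(480, 640, 3)
    toy_scorer = lambda crop: float(crop.mean())      # stand-in composition model
    score, box = best_crop(frame, toy_scorer)
    sub_view = frame[box[1]:box[3], box[0]:box[2]]
    print(round(score, 3), sub_view.shape)            # a user tap would swap views
```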
  • Patent number: 10497122
    Abstract: Various embodiments describe using a neural network to evaluate image crops in substantially real-time. In an example, a computer system performs unsupervised training of a first neural network based on unannotated image crops, followed by a supervised training of the first neural network based on annotated image crops. Once this first neural network is trained, the computer system inputs image crops generated from images to this trained network and receives composition scores therefrom. The computer system performs supervised training of a second neural network based on the images and the composition scores.
    Type: Grant
    Filed: October 11, 2017
    Date of Patent: December 3, 2019
    Assignee: Adobe Inc.
    Inventors: Jianming Zhang, Zijun Wei, Zhe Lin, Xiaohui Shen, Radomir Mech
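The three training phases in this abstract (repeated verbatim in the corresponding publication below) can be shown schematically with least-squares models standing in for both networks: unlabeled pretraining, supervised fine-tuning on annotated crops, then training a second model on the composition scores the first one produces.

```python
"""Schematic of the two-stage training recipe: unsupervised pretraining on
unannotated crops, supervised fine-tuning on annotated crops, then training a
second model on the first model's scores. Plain least-squares models stand in
for both networks."""
import numpy as np

rng = np.random.default_rng(0)


def fit_linear(X, y):
    """Least-squares stand-in for a supervised training step."""
    return np.linalg.lstsq(X, y, rcond=None)[0]


# Phase 1: unsupervised pretraining on unannotated crops (self-reconstruction).
unlabeled_crops = rng.normal(size=(200, 16))
W_pre = fit_linear(unlabeled_crops, unlabeled_crops)        # learns crop structure

# Phase 2: supervised fine-tuning on human-annotated crop scores.
annotated_crops = rng.normal(size=(50, 16))
human_scores = rng.uniform(size=50)
w_scorer = fit_linear(annotated_crops @ W_pre, human_scores)

# Phase 3: the trained scorer labels new images, and a second model is trained
# directly on the (image features, composition score) pairs it produces.
image_feats = rng.normal(size=(100, 16))
teacher_scores = image_feats @ W_pre @ w_scorer
w_student = fit_linear(image_feats, teacher_scores)

print(float(np.abs(image_feats @ w_student - teacher_scores).mean()))  # ~0
```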
  • Publication number: 20190108640
    Abstract: Various embodiments describe using a neural network to evaluate image crops in substantially real-time. In an example, a computer system performs unsupervised training of a first neural network based on unannotated image crops, followed by a supervised training of the first neural network based on annotated image crops. Once this first neural network is trained, the computer system inputs image crops generated from images to this trained network and receives composition scores therefrom. The computer system performs supervised training of a second neural network based on the images and the composition scores.
    Type: Application
    Filed: October 11, 2017
    Publication date: April 11, 2019
    Inventors: Jianming Zhang, Zijun Wei, Zhe Lin, Xiaohui Shen, Radomir Mech
  • Publication number: 20190109981
    Abstract: Various embodiments describe facilitating real-time crops on an image. In an example, an image processing application executed on a device receives image data corresponding to a field of view of a camera of the device. The image processing application renders a major view on a display of the device in a preview mode. The major view presents a previewed image based on the image data. The image processing application receives a composition score of a cropped image from a deep-learning system. The image processing application renders a sub-view presenting the cropped image based on the composition score in a preview mode. Based on a user interaction, the image processing application renders the cropped image in the major view with the sub-view in the preview mode.
    Type: Application
    Filed: October 11, 2017
    Publication date: April 11, 2019
    Inventors: Jianming Zhang, Zijun Wei, Zhe Lin, Xiaohui Shen, Radomir Mech
  • Publication number: 20190110002
    Abstract: Various embodiments describe view switching of video on a computing device. In an example, a video processing application executed on the computing device receives a stream of video data. The video processing application renders a major view on a display of the computing device. The major view presents a video from the stream of video data. The video processing application inputs the stream of video data to a deep learning system and receives back information that identifies a cropped video from the video based on a composition score of the cropped video, while the video is presented in the major view. The composition score is generated by the deep learning system. The video processing application renders a sub-view on a display of the device, the sub-view presenting the cropped video. The video processing application renders the cropped video in the major view based on a user interaction with the sub-view.
    Type: Application
    Filed: October 11, 2017
    Publication date: April 11, 2019
    Inventors: Jianming Zhang, Zijun Wei, Zhe Lin, Xiaohui Shen, Radomir Mech
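A sketch of the per-frame loop implied by this abstract (and by the granted patent in the next entry): a scoring model proposes a cropped rendition of each frame, the crop plays in a sub-view, and a user interaction swaps it into the major view. The crop proposer, scorer, and interaction flag are all stand-ins.

```python
"""Sketch of view switching for video: each incoming frame yields a scored crop;
a user interaction promotes the cropped video to the major view."""
import numpy as np


def play(stream, propose_crop, user_clicked_subview):
    major_source = "full"                       # what the major view is showing
    for i, frame in enumerate(stream):
        box, score = propose_crop(frame)        # deep-learning crop + its score
        x0, y0, x1, y1 = box
        cropped = frame[y0:y1, x0:x1]
        if user_clicked_subview(i):             # interaction swaps the views
            major_source = "cropped"
        major = cropped if major_source == "cropped" else frame
        yield major.shape, cropped.shape, round(score, 2)


if __name__ == "__main__":
    frames = (np.random.rand(360, 640, 3) for _ in range(3))
    proposer = lambda f: ((160, 90, 480, 270), float(f.mean()))   # toy crop/score
    for state in play(frames, proposer, user_clicked_subview=lambda i: i == 1):
        print(state)
```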
  • Patent number: 10257436
    Abstract: Various embodiments describe view switching of video on a computing device. In an example, a video processing application receives a stream of video data. The video processing application renders a major view on a display of the computing device. The major view presents a video from the stream of video data. The video processing application inputs the stream of video data to a deep learning system and receives back information that identifies a cropped video from the video based on a composition score of the cropped video, while the video is presented in the major view. The composition score is generated by the deep learning system. The video processing application renders a sub-view on a display of the device, the sub-view presenting the cropped video. The video processing application renders the cropped video in the major view based on a user interaction with the sub-view.
    Type: Grant
    Filed: October 11, 2017
    Date of Patent: April 9, 2019
    Assignee: Adobe Systems Incorporated
    Inventors: Jianming Zhang, Zijun Wei, Zhe Lin, Xiaohui Shen, Radomir Mech