Patents by Inventor Seyed Morteza Safdarnejad

Seyed Morteza Safdarnejad has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230281763
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing a plurality of neural networks in a multi-branch pipeline to generate image masks for digital images. Specifically, the disclosed system can classify a digital image as a portrait or a non-portrait image. Based on classifying a portrait image, the disclosed system can utilize separate neural networks to generate a first mask portion for a portion of the digital image including a defined boundary region and a second mask portion for a portion of the digital image including a blended boundary region. The disclosed system can generate the mask portion for the blended boundary region by utilizing a trimap generation neural network to automatically generate a trimap segmentation including the blended boundary region. The disclosed system can then merge the first mask portion and the second mask portion to generate an image mask for the digital image.
    Type: Application
    Filed: May 15, 2023
    Publication date: September 7, 2023
    Inventors: He Zhang, Seyed Morteza Safdarnejad, Yilin Wang, Zijun Wei, Jianming Zhang, Salil Tambe, Brian Price
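The entry above describes a two-branch masking pipeline: a portrait/non-portrait classifier, a hard ("defined") boundary branch, a trimap-driven soft ("blended") boundary branch, and a final merge. The following is a minimal Python sketch of that control flow only; the classifier, trimap, and matting steps are simple threshold-based stand-ins rather than the patented neural networks, and all function names are hypothetical.

```python
# Hypothetical sketch of the multi-branch masking flow; stand-ins, not Adobe's models.
import numpy as np

def classify_portrait(image: np.ndarray) -> bool:
    # Stand-in classifier: a real system would use a trained network.
    return bool(image.mean() > 0.5)  # placeholder decision

def defined_boundary_mask(image: np.ndarray) -> np.ndarray:
    # Branch 1: hard-edged regions (e.g., body, clothing) -> binary mask.
    return (image > 0.5).astype(np.float32)

def trimap(image: np.ndarray) -> np.ndarray:
    # Branch 2a: trimap with 1 = foreground, 0 = background, 0.5 = unknown band.
    fg = (image > 0.7).astype(np.float32)
    unknown = ((image > 0.3) & (image <= 0.7)).astype(np.float32)
    return fg + 0.5 * unknown

def blended_boundary_mask(image: np.ndarray, tri: np.ndarray) -> np.ndarray:
    # Branch 2b: soft alpha values inside the unknown band (e.g., hair).
    alpha = tri.copy()
    band = tri == 0.5
    alpha[band] = image[band]  # placeholder matting
    return alpha

def generate_image_mask(image: np.ndarray) -> np.ndarray:
    if not classify_portrait(image):
        return defined_boundary_mask(image)
    hard = defined_boundary_mask(image)
    tri = trimap(image)
    soft = blended_boundary_mask(image, tri)
    # Merge the two mask portions: soft values in the unknown band, hard elsewhere.
    return np.where(tri == 0.5, soft, hard)

mask = generate_image_mask(np.random.rand(256, 256).astype(np.float32))
```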
  • Patent number: 11651477
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing a plurality of neural networks in a multi-branch pipeline to generate image masks for digital images. Specifically, the disclosed system can classify a digital image as a portrait or a non-portrait image. Based on classifying a portrait image, the disclosed system can utilize separate neural networks to generate a first mask portion for a portion of the digital image including a defined boundary region and a second mask portion for a portion of the digital image including a blended boundary region. The disclosed system can generate the mask portion for the blended boundary region by utilizing a trimap generation neural network to automatically generate a trimap segmentation including the blended boundary region. The disclosed system can then merge the first mask portion and the second mask portion to generate an image mask for the digital image.
    Type: Grant
    Filed: August 7, 2020
    Date of Patent: May 16, 2023
    Assignee: Adobe Inc.
    Inventors: He Zhang, Seyed Morteza Safdarnejad, Yilin Wang, Zijun Wei, Jianming Zhang, Salil Tambe, Brian Price
  • Publication number: 20230129341
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that generate preliminary object masks for objects in an image, surface the preliminary object masks as object mask previews, and convert preliminary object masks into refined object masks on demand. Indeed, in one or more implementations, an object mask preview and on-demand generation system automatically detects objects in an image. For the detected objects, the object mask preview and on-demand generation system generates preliminary object masks at a first, lower resolution. The object mask preview and on-demand generation system surfaces a given preliminary object mask in response to detecting a first input. The object mask preview and on-demand generation system also generates a refined object mask at a second, higher resolution in response to detecting a second input.
    Type: Application
    Filed: January 25, 2022
    Publication date: April 27, 2023
    Inventors: Betty Leong, Hyunghwan Byun, Alan L Erickson, Chih-Yao Hsieh, Sarah Kong, Seyed Morteza Safdarnejad, Salil Tambe, Yilin Wang, Zijun Wei, Zhengyun Zhang
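The entry above pairs eager, low-resolution preview masks with on-demand, high-resolution refinement. Below is a minimal sketch of that two-stage flow, assuming a hypothetical cache and stand-in detection and masking routines in place of the disclosed system's models.

```python
# Hypothetical preview-then-refine masking; all names are illustrative.
from dataclasses import dataclass, field
from typing import Dict, Tuple
import numpy as np

Box = Tuple[int, int, int, int]

@dataclass
class MaskCache:
    previews: Dict[int, np.ndarray] = field(default_factory=dict)  # low-res, eager
    refined: Dict[int, np.ndarray] = field(default_factory=dict)   # high-res, lazy

def detect_objects(image: np.ndarray) -> Dict[int, Box]:
    # Stand-in detector: returns object ids with coarse bounding boxes.
    h, w = image.shape
    return {0: (0, 0, h // 2, w // 2), 1: (h // 2, w // 2, h, w)}

def preview_mask(image: np.ndarray, box: Box, scale: int = 4) -> np.ndarray:
    # First, lower resolution: cheap mask on a downsampled crop.
    y0, x0, y1, x1 = box
    crop = image[y0:y1:scale, x0:x1:scale]
    return (crop > crop.mean()).astype(np.float32)

def refined_mask(image: np.ndarray, box: Box) -> np.ndarray:
    # Second, higher resolution: full-resolution mask, computed only on demand.
    y0, x0, y1, x1 = box
    crop = image[y0:y1, x0:x1]
    return (crop > crop.mean()).astype(np.float32)

def on_first_input(cache: MaskCache, image, boxes, obj_id: int) -> np.ndarray:
    # First input (e.g., hover): surface the preview mask.
    if obj_id not in cache.previews:
        cache.previews[obj_id] = preview_mask(image, boxes[obj_id])
    return cache.previews[obj_id]

def on_second_input(cache: MaskCache, image, boxes, obj_id: int) -> np.ndarray:
    # Second input (e.g., click): convert to a refined mask.
    if obj_id not in cache.refined:
        cache.refined[obj_id] = refined_mask(image, boxes[obj_id])
    return cache.refined[obj_id]

image = np.random.rand(512, 512)
boxes = detect_objects(image)
cache = MaskCache()
_ = on_first_input(cache, image, boxes, 0)
_ = on_second_input(cache, image, boxes, 0)
```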
  • Patent number: 11551338
    Abstract: The present disclosure is directed toward intelligently mixing and matching faces and/or people to generate an enhanced image that reduces or minimizes artifacts and other defects. For example, the disclosed systems can selectively apply different alignment models to determine a relative alignment between a reference image and a target image having an improved instance of a person. Upon aligning the digital images, the disclosed systems can intelligently identify a replacement region based on a boundary that includes the target instance and the reference instance of the person without intersecting other objects or people in the image. Using the size and shape of the replacement region around the target instance and the reference instance, the disclosed systems replace the instance of the person in the reference image with the target instance. The alignment of the images and the intelligent selection of the replacement region minimize inconsistencies and/or artifacts in the final image.
    Type: Grant
    Filed: November 23, 2020
    Date of Patent: January 10, 2023
    Assignee: Adobe Inc.
    Inventors: Seyed Morteza Safdarnejad, Chih-Yao Hsieh
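The entry above replaces a person instance after aligning a reference and a target image, using a replacement region that avoids other instances in the scene. The sketch below mirrors that flow under strong simplifications: alignment, instance segmentation, and the conflict check are illustrative placeholders, not the patented models.

```python
# Hypothetical sketch of replacing a person instance after alignment; stand-ins only.
import numpy as np

def align(reference: np.ndarray, target: np.ndarray) -> np.ndarray:
    # Stand-in alignment: a real system would select among alignment models
    # before warping. Here the frames are assumed to be registered already.
    return target

def person_mask(image: np.ndarray, seed) -> np.ndarray:
    # Stand-in instance segmentation around a seed point.
    mask = np.zeros(image.shape, dtype=bool)
    y, x = seed
    mask[max(0, y - 40):y + 40, max(0, x - 40):x + 40] = True
    return mask

def replacement_region(ref_mask, tgt_mask, other_masks):
    # Boundary around both instances, rejected if it touches any other
    # person or object instance in the image.
    region = ref_mask | tgt_mask
    for other in other_masks:
        if (region & other).any():
            raise ValueError("replacement region intersects another instance")
    return region

def mix_and_match(reference, target, seed, other_masks=()):
    aligned = align(reference, target)
    region = replacement_region(person_mask(reference, seed),
                                person_mask(aligned, seed), other_masks)
    out = reference.copy()
    out[region] = aligned[region]  # copy the improved instance into the reference
    return out

ref = np.random.rand(480, 640)
tgt = np.random.rand(480, 640)
result = mix_and_match(ref, tgt, seed=(240, 320))
```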
  • Patent number: 11393100
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing a plurality of neural networks in a multi-branch pipeline to generate image masks for digital images. Specifically, the disclosed system can classify a digital image as a portrait or a non-portrait image. Based on classifying a portrait image, the disclosed system can utilize separate neural networks to generate a first mask portion for a portion of the digital image including a defined boundary region and a second mask portion for a portion of the digital image including a blended boundary region. The disclosed system can generate the mask portion for the blended boundary region by utilizing a trimap generation neural network to automatically generate a trimap segmentation including the blended boundary region. The disclosed system can then merge the first mask portion and the second mask portion to generate an image mask for the digital image.
    Type: Grant
    Filed: August 7, 2020
    Date of Patent: July 19, 2022
    Assignee: Adobe Inc.
    Inventors: He Zhang, Seyed Morteza Safdarnejad, Yilin Wang, Zijun Wei, Jianming Zhang, Salil Tambe, Brian Price
  • Publication number: 20220044365
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing a plurality of neural networks in a multi-branch pipeline to generate image masks for digital images. Specifically, the disclosed system can classify a digital image as a portrait or a non-portrait image. Based on classifying a portrait image, the disclosed system can utilize separate neural networks to generate a first mask portion for a portion of the digital image including a defined boundary region and a second mask portion for a portion of the digital image including a blended boundary region. The disclosed system can generate the mask portion for the blended boundary region by utilizing a trimap generation neural network to automatically generate a trimap segmentation including the blended boundary region. The disclosed system can then merge the first mask portion and the second mask portion to generate an image mask for the digital image.
    Type: Application
    Filed: August 7, 2020
    Publication date: February 10, 2022
    Inventors: He Zhang, Seyed Morteza Safdarnejad, Yilin Wang, Zijun Wei, Jianming Zhang, Salil Tambe, Brian Price
  • Publication number: 20220044366
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing a plurality of neural networks in a multi-branch pipeline to generate image masks for digital images. Specifically, the disclosed system can classify a digital image as a portrait or a non-portrait image. Based on classifying a portrait image, the disclosed system can utilize separate neural networks to generate a first mask portion for a portion of the digital image including a defined boundary region and a second mask portion for a portion of the digital image including a blended boundary region. The disclosed system can generate the mask portion for the blended boundary region by utilizing a trimap generation neural network to automatically generate a trimap segmentation including the blended boundary region. The disclosed system can then merge the first mask portion and the second mask portion to generate an image mask for the digital image.
    Type: Application
    Filed: August 7, 2020
    Publication date: February 10, 2022
    Inventors: He Zhang, Seyed Morteza Safdarnejad, Yilin Wang, Zijun Wei, Jianming Zhang, Salil Tambe, Brian Price
  • Patent number: 11216961
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that analyze feature points of digital images to selectively apply a pixel-adjusted-gyroscope-alignment model and a feature-based-alignment model to align the digital images. For instance, the disclosed systems can select an appropriate alignment model based on feature-point-deficiency metrics specific to an input image and a reference-input image. Moreover, in certain implementations, the pixel-adjusted-gyroscope-alignment model utilizes parameters from pixel-based alignment and gyroscope-based alignment to align such digital images. Together with the feature-based-alignment model, the disclosed methods, non-transitory computer readable media, and systems can use a selective image-alignment algorithm that improves computational efficiency, accuracy, and flexibility in generating enhanced digital images from a set of input images.
    Type: Grant
    Filed: August 27, 2020
    Date of Patent: January 4, 2022
    Assignee: Adobe Inc.
    Inventors: Seyed Morteza Safdarnejad, Chih-Yao Hsieh
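The selective-alignment entry above chooses between a feature-based model and a pixel-adjusted-gyroscope model depending on feature-point deficiency. The sketch below approximates that selection using OpenCV ORB features and a homography; the deficiency threshold, the assumed gyroscope-derived homography, and the simplified fallback (which omits the pixel-based refinement step) are all illustrative assumptions.

```python
# Hypothetical alignment-model selection; gyro_H is assumed to come from device metadata.
import cv2
import numpy as np

MIN_FEATURE_POINTS = 50  # assumed feature-point-deficiency threshold

def detect_features(gray):
    orb = cv2.ORB_create()
    return orb.detectAndCompute(gray, None)  # keypoints, descriptors

def feature_based_homography(ref_gray, in_gray):
    ref_kp, ref_des = detect_features(ref_gray)
    in_kp, in_des = detect_features(in_gray)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(ref_des, in_des)
    ref_pts = np.float32([ref_kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    in_pts = np.float32([in_kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(in_pts, ref_pts, cv2.RANSAC)  # maps input -> reference
    return H

def pixel_adjusted_gyro_homography(gyro_H):
    # Stand-in: start from the gyroscope-based homography; a full implementation
    # would adjust it with a pixel-based alignment step (omitted here).
    return gyro_H

def align_to_reference(ref_gray, in_gray, gyro_H):
    ref_kp, _ = detect_features(ref_gray)
    in_kp, _ = detect_features(in_gray)
    deficient = min(len(ref_kp), len(in_kp)) < MIN_FEATURE_POINTS
    H = (pixel_adjusted_gyro_homography(gyro_H) if deficient
         else feature_based_homography(ref_gray, in_gray))
    h, w = ref_gray.shape
    return cv2.warpPerspective(in_gray, H, (w, h))
```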
  • Patent number: 11196939
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that generate a virtual long exposure image from a sequence of short exposure images portraying a moving object. In various embodiments, the image transformation system aligns a first digital image and a second digital image in the sequence of short exposure images. The image transformation system can determine a motion vector path for the moving object between the first digital image and the second digital image. The image transformation system can also blend pixels along the motion vector path to generate a blended image representative of the motion of the moving object between the first digital image and the second digital image. The image transformation system can generate additional blended images based on consecutive pairs of images in the sequence of digital images and generate a virtual long exposure image by combining the first blended image with the additional blended images.
    Type: Grant
    Filed: May 18, 2020
    Date of Patent: December 7, 2021
    Assignee: Adobe Inc.
    Inventors: Chih-Yao Hsieh, Sylvain Paris, Seyed Morteza Safdarnejad, Gregg Wilensky
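The virtual long-exposure entry above blends pixels along motion paths between consecutive short exposures and then combines the pairwise blends. A minimal sketch follows; the cross-fade used as the "motion path" blend and the simple averaging combine are stand-ins for the patented approach, and alignment is assumed to be done.

```python
# Hypothetical virtual long exposure from a burst of short exposures.
import numpy as np

def align_pair(prev_frame, next_frame):
    # Stand-in alignment: assume the frames are already registered.
    return next_frame

def blend_along_motion(prev_frame, next_frame, steps=8):
    # Approximate the motion trail between two consecutive frames by averaging
    # intermediate cross-fades (a stand-in for blending along motion vectors).
    weights = np.linspace(0.0, 1.0, steps)
    trail = [(1 - w) * prev_frame + w * next_frame for w in weights]
    return np.mean(trail, axis=0)

def virtual_long_exposure(frames):
    frames = [f.astype(np.float32) for f in frames]
    blended = []
    for prev_frame, next_frame in zip(frames, frames[1:]):
        aligned = align_pair(prev_frame, next_frame)
        blended.append(blend_along_motion(prev_frame, aligned))
    return np.mean(blended, axis=0)  # combine all pairwise blends

frames = [np.random.rand(240, 320) for _ in range(6)]
long_exposure = virtual_long_exposure(frames)
```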
  • Publication number: 20210073961
    Abstract: The present disclosure is directed toward intelligently mixing and matching faces and/or people to generate an enhanced image that reduces or minimizes artifacts and other defects. For example, the disclosed systems can selectively apply different alignment models to determine a relative alignment between a reference image and a target image having an improved instance of a person. Upon aligning the digital images, the disclosed systems can intelligently identify a replacement region based on a boundary that includes the target instance and the reference instance of the person without intersecting other objects or people in the image. Using the size and shape of the replacement region around the target instance and the reference instance, the disclosed systems replace the instance of the person in the reference image with the target instance. The alignment of the images and the intelligent selection of the replacement region minimize inconsistencies and/or artifacts in the final image.
    Type: Application
    Filed: November 23, 2020
    Publication date: March 11, 2021
    Inventors: Seyed Morteza Safdarnejad, Chih-Yao Hsieh
  • Patent number: 10896493
    Abstract: The present disclosure is directed toward intelligently mixing and matching faces and/or people to generate an enhanced image that reduces or minimizes artifacts and other defects. For example, the disclosed systems can selectively apply different alignment models to determine a relative alignment between a reference image and a target image having an improved instance of a person. Upon aligning the digital images, the disclosed systems can intelligently identify a replacement region based on a boundary that includes the target instance and the reference instance of the person without intersecting other objects or people in the image. Using the size and shape of the replacement region around the target instance and the reference instance, the disclosed systems replace the instance of the person in the reference image with the target instance. The alignment of the images and the intelligent selection of the replacement region minimize inconsistencies and/or artifacts in the final image.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: January 19, 2021
    Assignee: Adobe Inc.
    Inventors: Seyed Morteza Safdarnejad, Chih-Yao Hsieh
  • Publication number: 20200394808
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that analyze feature points of digital images to selectively apply a pixel-adjusted-gyroscope-alignment model and a feature-based-alignment model to align the digital images. For instance, the disclosed systems can select an appropriate alignment model based on feature-point-deficiency metrics specific to an input image and a reference-input image. Moreover, in certain implementations, the pixel-adjusted-gyroscope-alignment model utilizes parameters from pixel-based alignment and gyroscope-based alignment to align such digital images. Together with the feature-based-alignment model, the disclosed methods, non-transitory computer readable media, and systems can use a selective image-alignment algorithm that improves computational efficiency, accuracy, and flexibility in generating enhanced digital images from a set of input images.
    Type: Application
    Filed: August 27, 2020
    Publication date: December 17, 2020
    Inventors: Seyed Morteza Safdarnejad, Chih-Yao Hsieh
  • Patent number: 10796421
    Abstract: Embodiments of the present invention are directed to facilitating the creation of images with selective application of the long-exposure effect. In accordance with some embodiments of the present invention, a virtual long-exposure image comprising a plurality of aligned frames is provided and a selection of a region of pixels in the virtual long-exposure image is received. The virtual long-exposure image is combined with one of the frames forming the virtual long-exposure image to create a selective virtual long-exposure image. The selective virtual long-exposure image comprises a visible portion of the original virtual long-exposure image and a visible portion of the individual frame that corresponds to the selected region of pixels. Additional frames may be combined with the virtual long-exposure image to create a plurality of selective virtual long-exposure image options, and the user may select one for continued use or for saving.
    Type: Grant
    Filed: February 13, 2018
    Date of Patent: October 6, 2020
    Inventors: Seyed Morteza Safdarnejad, Sarah Aye Kong, Gregg Darryl Wilensky, Chih-Yao Hsieh
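The selective long-exposure entry above composites a single sharp frame into a user-selected region of the virtual long-exposure result. A minimal sketch, assuming pre-aligned frames and a boolean selection mask (both hypothetical inputs):

```python
# Hypothetical selective virtual long exposure: revert a selected region to one frame.
import numpy as np

def selective_long_exposure(long_exposure, frame, selection_mask):
    # Inside the selected region, show the individual frame (sharp subject);
    # everywhere else, keep the long-exposure result (motion trails).
    out = long_exposure.astype(np.float32).copy()
    out[selection_mask] = frame[selection_mask]
    return out

le = np.random.rand(240, 320)
frame = np.random.rand(240, 320)
mask = np.zeros((240, 320), dtype=bool)
mask[80:160, 100:220] = True  # user-selected region of pixels
result = selective_long_exposure(le, frame, mask)
```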
  • Patent number: 10783649
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that analyze feature points of digital images to selectively apply a pixel-adjusted-gyroscope-alignment model and a feature-based-alignment model to align the digital images. For instance, the disclosed systems can select an appropriate alignment model based on feature-point-deficiency metrics specific to an input image and a reference-input image. Moreover, in certain implementations, the pixel-adjusted-gyroscope-alignment model utilizes parameters from pixel-based alignment and gyroscope-based alignment to align such digital images. Together with the feature-based-alignment model, the disclosed methods, non-transitory computer readable media, and systems can use a selective image-alignment algorithm that improves computational efficiency, accuracy, and flexibility in generating enhanced digital images from a set of input images.
    Type: Grant
    Filed: September 17, 2018
    Date of Patent: September 22, 2020
    Assignee: Adobe Inc.
    Inventors: Seyed Morteza Safdarnejad, Chih-Yao Hsieh
  • Publication number: 20200280670
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that generate a virtual long exposure image from a sequence of short exposure images portraying a moving object. In various embodiments, the image transformation system aligns a first digital image and a second digital image in the sequence of short exposure images. The image transformation system can determine a motion vector path for the moving object between the first digital image and the second digital image. The image transformation system can also blend pixels along the motion vector path to generate a blended image representative of the motion of the moving object between the first digital image and the second digital image. The image transformation system can generate additional blended images based on consecutive pairs of images in the sequence of digital images and generate a virtual long exposure image by combining the first blended image with the additional blended images.
    Type: Application
    Filed: May 18, 2020
    Publication date: September 3, 2020
    Inventors: Chih-Yao Hsieh, Sylvain Paris, Seyed Morteza Safdarnejad, Gregg Wilensky
  • Patent number: 10701279
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that generate a virtual long exposure image from a sequence of short exposure images portraying a moving object. In various embodiments, the image transformation system aligns a first digital image and a second digital image in the sequence of short exposure images. The image transformation system can determine a motion vector path for the moving object between the first digital image and the second digital image. The image transformation system can also blend pixels along the motion vector path to generate a blended image representative of the motion of the moving object between the first digital image and the second digital image. The image transformation system can generate additional blended images based on consecutive pairs of images in the sequence of digital images and generate a virtual long exposure image by combining the first blended image with the additional blended images.
    Type: Grant
    Filed: October 2, 2018
    Date of Patent: June 30, 2020
    Assignee: Adobe Inc.
    Inventors: Chih-Yao Hsieh, Sylvain Paris, Seyed Morteza Safdarnejad, Gregg Wilensky
  • Publication number: 20200151860
    Abstract: The present disclosure is directed toward intelligently mixing and matching faces and/or people to generate an enhanced image that reduces or minimizes artifacts and other defects. For example, the disclosed systems can selectively apply different alignment models to determine a relative alignment between a reference image and a target image having an improved instance of a person. Upon aligning the digital images, the disclosed systems can intelligently identify a replacement region based on a boundary that includes the target instance and the reference instance of the person without intersecting other objects or people in the image. Using the size and shape of the replacement region around the target instance and the reference instance, the disclosed systems replace the instance of the person in the reference image with the target instance. The alignment of the images and the intelligent selection of the replacement region minimize inconsistencies and/or artifacts in the final image.
    Type: Application
    Filed: November 13, 2018
    Publication date: May 14, 2020
    Inventors: Seyed Morteza Safdarnejad, Chih-Yao Hsieh
  • Publication number: 20200106945
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that generate a virtual long exposure image from a sequence of short exposure images portraying a moving object. In various embodiments, the image transformation system aligns a first digital image and a second digital image in the sequence of short exposure images. The image transformation system can determine a motion vector path for the moving object between the first digital image and the second digital image. The image transformation system can also blend pixels along the motion vector path to generate a blended image representative of the motion of the moving object between the first digital image and the second digital image. The image transformation system can generate additional blended images based on consecutive pairs of images in the sequence of digital images and generate a virtual long exposure image by combining the first blended image with the additional blended images.
    Type: Application
    Filed: October 2, 2018
    Publication date: April 2, 2020
    Inventors: Chih-Yao Hsieh, Sylvain Paris, Seyed Morteza Safdarnejad, Gregg Wilensky
  • Publication number: 20200090351
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that analyze feature points of digital images to selectively apply a pixel-adjusted-gyroscope-alignment model and a feature-based-alignment model to align the digital images. For instance, the disclosed systems can select an appropriate alignment model based on feature-point-deficiency metrics specific to an input image and a reference-input image. Moreover, in certain implementations, the pixel-adjusted-gyroscope-alignment model utilizes parameters from pixel-based alignment and gyroscope-based alignment to align such digital images. Together with the feature-based-alignment model, the disclosed methods, non-transitory computer readable media, and systems can use a selective image-alignment algorithm that improves computational efficiency, accuracy, and flexibility in generating enhanced digital images from a set of input images.
    Type: Application
    Filed: September 17, 2018
    Publication date: March 19, 2020
    Inventors: Seyed Morteza Safdarnejad, Chih-Yao Hsieh
  • Patent number: 10573052
    Abstract: Embodiments of the present invention are directed to facilitating the creation of cinemagraphs from virtual long-exposure images. In accordance with some embodiments of the present invention, a virtual long-exposure image comprising a plurality of aligned frames is provided and a selection of a region of pixels in the virtual long-exposure image is received. Based on the selected region of pixels, a set of frames for animation is identified from the plurality of frames. The set of frames may be identified by automatically detecting a sequence of frames or by receiving a user selection of frames. The virtual long-exposure image is combined with the set of frames to create a cinemagraph having a visible non-animated portion formed by the virtual long-exposure image and a visible animated portion formed by the set of frames.
    Type: Grant
    Filed: February 13, 2018
    Date of Patent: February 25, 2020
    Assignee: Adobe Inc.
    Inventors: Seyed Morteza Safdarnejad, Sarah Aye Kong, Chih-Yao Hsieh
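The cinemagraph entry above keeps the virtual long-exposure image static outside a selected region and cycles a set of frames inside it. A minimal sketch under the same assumptions as the previous examples (pre-aligned frames, a boolean selection mask, illustrative function names):

```python
# Hypothetical cinemagraph assembly from a virtual long-exposure image and a frame set.
import numpy as np

def cinemagraph_frames(long_exposure, frame_set, selection_mask):
    out = []
    for frame in frame_set:
        composite = long_exposure.astype(np.float32).copy()
        composite[selection_mask] = frame[selection_mask]  # animated portion
        out.append(composite)
    return out  # played in a loop, this yields the cinemagraph effect

le = np.random.rand(240, 320)
frames = [np.random.rand(240, 320) for _ in range(10)]
mask = np.zeros((240, 320), dtype=bool)
mask[60:180, 40:280] = True  # user-selected region to animate
loop = cinemagraph_frames(le, frames, mask)
```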