Patents by Inventor Sarah Aye Kong

Sarah Aye Kong has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11468614
    Abstract: Methods and systems are provided for presenting and using multiple masks, based on a segmented image, when editing the image. In particular, multiple masks can be presented to a user in a graphical user interface for easy selection and use during editing. The graphical user interface can include a display configured to display an image, a mask zone configured to display segmentations of the image using masks, and an edit zone configured to display edits to the image. Upon receiving a segmentation of the image, the masks can be displayed in the mask zone, where the masks are based on a selected segmentation detail level.
    Type: Grant
    Filed: March 28, 2021
    Date of Patent: October 11, 2022
    Assignee: Adobe Inc.
    Inventors: Sarah Aye Kong, I-Ming Pao, Hyunghwan Byun
  • Publication number: 20210350504
    Abstract: Methods and systems are provided for generating enhanced images. A neural network system is trained, where the training includes training a first neural network that generates enhanced images conditioned on the content of an image undergoing enhancement, and training a second neural network that assesses the realism of the enhanced images generated by the first neural network. The neural network system is trained by determining loss and accordingly adjusting the appropriate neural network(s). The trained neural network system is used to generate an enhanced aesthetic image from a selected image, where the output image has improved aesthetics compared to the selected image.
    Type: Application
    Filed: July 19, 2021
    Publication date: November 11, 2021
    Inventors: Xiaohui Shen, Zhe Lin, Xin Lu, Sarah Aye Kong, I-Ming Pao, Yingcong Chen
  • Patent number: 11069030
    Abstract: Methods and systems are provided for generating enhanced images. A neural network system is trained, where the training includes training a first neural network that generates enhanced images conditioned on the content of an image undergoing enhancement, and training a second neural network that assesses the realism of the enhanced images generated by the first neural network. The neural network system is trained by determining loss and accordingly adjusting the appropriate neural network(s). The trained neural network system is used to generate an enhanced aesthetic image from a selected image, where the output image has improved aesthetics compared to the selected image.
    Type: Grant
    Filed: March 22, 2018
    Date of Patent: July 20, 2021
    Assignee: Adobe Inc.
    Inventors: Xiaohui Shen, Zhe Lin, Xin Lu, Sarah Aye Kong, I-Ming Pao, Yingcong Chen
  • Publication number: 20210217216
    Abstract: Methods and systems are provided for presenting and using multiple masks, based on a segmented image, when editing the image. In particular, multiple masks can be presented to a user in a graphical user interface for easy selection and use during editing. The graphical user interface can include a display configured to display an image, a mask zone configured to display segmentations of the image using masks, and an edit zone configured to display edits to the image. Upon receiving a segmentation of the image, the masks can be displayed in the mask zone, where the masks are based on a selected segmentation detail level.
    Type: Application
    Filed: March 28, 2021
    Publication date: July 15, 2021
    Applicant: Adobe Inc.
    Inventors: Sarah Aye Kong, I-Ming Pao, Hyunghwan Byun
  • Patent number: 11042990
    Abstract: Systems and techniques for automatic object replacement in an image include receiving an original image and a preferred image. The original image is automatically segmented into an original image foreground region and an original image object region. The preferred image is automatically segmented into a preferred image foreground region and a preferred image object region. A composite image is automatically composed by replacing the original image object region with the preferred image object region such that the composite image includes the original image foreground region and the preferred image object region. An attribute of the composite image is automatically adjusted.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: June 22, 2021
    Assignee: Adobe Inc.
    Inventors: I-Ming Pao, Sarah Aye Kong, Alan Lee Erickson, Kalyan Sunkavalli, Hyunghwan Byun
  • Patent number: 10977844
    Abstract: Methods and systems are provided for presenting and using multiple masks, based on a segmented image, when editing the image. In particular, multiple masks can be presented to a user in a graphical user interface for easy selection and use during editing. The graphical user interface can include a display configured to display an image, a mask zone configured to display segmentations of the image using masks, and an edit zone configured to display edits to the image. Upon receiving a segmentation of the image, the masks can be displayed in the mask zone, where the masks are based on a selected segmentation detail level.
    Type: Grant
    Filed: November 29, 2017
    Date of Patent: April 13, 2021
    Assignee: Adobe Inc.
    Inventors: Sarah Aye Kong, I-Ming Pao, Hyunghwan Byun
  • Patent number: 10796421
    Abstract: Embodiments of the present invention are directed to facilitating creation of images with selective application of a long-exposure effect. In accordance with some embodiments of the present invention, a virtual long-exposure image comprising a plurality of aligned frames is provided, and a selection of a region of pixels in the virtual long-exposure image is received. The virtual long-exposure image is combined with one of the frames forming the virtual long-exposure image to create a selective virtual long-exposure image. The selective virtual long-exposure image comprises a visible portion of the original virtual long-exposure image and a visible portion of the individual frame that corresponds to the selected region of pixels. Additional frames may be combined with the virtual long-exposure image to create a plurality of selective virtual long-exposure image options, and the user may select one for continued use or for saving.
    Type: Grant
    Filed: February 13, 2018
    Date of Patent: October 6, 2020
    Inventors: Seyed Morteza Safdarnejad, Sarah Aye Kong, Gregg Darryl Wilensky, Chih-Yao Hsieh
  • Patent number: 10783408
    Abstract: Systems and techniques for identification of fonts include receiving a selection of an area of an image including text, where the selection is received from within an application. The selected area of the image is input to a font matching module within the application. The font matching module identifies one or more fonts similar to the text in the selected area using a convolutional neural network. The one or more fonts similar to the text are displayed within the application and the selection and use of the one or more fonts is enabled within the application.
    Type: Grant
    Filed: June 19, 2017
    Date of Patent: September 22, 2020
    Assignee: Adobe Inc.
    Inventors: Zhaowen Wang, Sarah Aye Kong, I-Ming Pao, Hailin Jin, Alan Lee Erickson
  • Publication number: 20200134834
    Abstract: Systems and techniques for automatic object replacement in an image include receiving an original image and a preferred image. The original image is automatically segmented into an original image foreground region and an original image object region. The preferred image is automatically segmented into a preferred image foreground region and a preferred image object region. A composite image is automatically composed by replacing the original image object region with the preferred image object region such that the composite image includes the original image foreground region and the preferred image object region. An attribute of the composite image is automatically adjusted.
    Type: Application
    Filed: October 31, 2018
    Publication date: April 30, 2020
    Inventors: I-Ming Pao, Sarah Aye Kong, Alan Lee Erickson, Kalyan Sunkavalli, Hyunghwan Byun
  • Patent number: 10573052
    Abstract: Embodiments of the present invention are directed to facilitating the creation of cinemagraphs from virtual long-exposure (LE) images. In accordance with some embodiments of the present invention, a virtual LE image comprising a plurality of aligned frames is provided and a selection of a region of pixels in the virtual LE image is received. Based on the selected region of pixels, a set of frames for animation is identified from the plurality of aligned frames. The set of frames may be identified by automatically detecting a sequence of frames or by receiving a user selection of frames. The virtual LE image is combined with the set of frames to create a cinemagraph having a visible non-animated portion formed by the virtual LE image and a visible animated portion formed by the set of frames.
    Type: Grant
    Filed: February 13, 2018
    Date of Patent: February 25, 2020
    Assignee: Adobe Inc.
    Inventors: Seyed Morteza Safdarnejad, Sarah Aye Kong, Chih-Yao Hsieh
  • Publication number: 20190295223
    Abstract: Methods and systems are provided for generating enhanced images. A neural network system is trained, where the training includes training a first neural network that generates enhanced images conditioned on the content of an image undergoing enhancement, and training a second neural network that assesses the realism of the enhanced images generated by the first neural network. The neural network system is trained by determining loss and accordingly adjusting the appropriate neural network(s). The trained neural network system is used to generate an enhanced aesthetic image from a selected image, where the output image has improved aesthetics compared to the selected image.
    Type: Application
    Filed: March 22, 2018
    Publication date: September 26, 2019
    Inventors: Xiaohui Shen, Zhe Lin, Xin Lu, Sarah Aye Kong, I-Ming Pao, Yingcong Chen
  • Publication number: 20190251729
    Abstract: Embodiments of the present invention are directed to facilitating the creation of cinemagraphs from virtual long-exposure (LE) images. In accordance with some embodiments of the present invention, a virtual LE image comprising a plurality of aligned frames is provided and a selection of a region of pixels in the virtual LE image is received. Based on the selected region of pixels, a set of frames for animation is identified from the plurality of aligned frames. The set of frames may be identified by automatically detecting a sequence of frames or by receiving a user selection of frames. The virtual LE image is combined with the set of frames to create a cinemagraph having a visible non-animated portion formed by the virtual LE image and a visible animated portion formed by the set of frames.
    Type: Application
    Filed: February 13, 2018
    Publication date: August 15, 2019
    Inventors: Seyed Morteza Safdarnejad, Sarah Aye Kong, Chih-Yao Hsieh
  • Publication number: 20190251683
    Abstract: Embodiments of the present invention are directed to facilitating creation of images with selective application of a long-exposure effect. In accordance with some embodiments of the present invention, a virtual long-exposure image comprising a plurality of aligned frames is provided, and a selection of a region of pixels in the virtual long-exposure image is received. The virtual long-exposure image is combined with one of the frames forming the virtual long-exposure image to create a selective virtual long-exposure image. The selective virtual long-exposure image comprises a visible portion of the original virtual long-exposure image and a visible portion of the individual frame that corresponds to the selected region of pixels. Additional frames may be combined with the virtual long-exposure image to create a plurality of selective virtual long-exposure image options, and the user may select one for continued use or for saving.
    Type: Application
    Filed: February 13, 2018
    Publication date: August 15, 2019
    Inventors: Seyed Morteza Safdarnejad, Sarah Aye Kong, Gregg Darryl Wilensky, Chih-Yao Hsieh
  • Patent number: 10380723
    Abstract: In some embodiments, an image editing application stores, based on a first selection input, a selection state that identifies a first image portion of a target image as included in a preview image displayed in a mask-based editing interface of the image editing application. An edit to the preview image generated from the selected first image portion is applied in the mask-based editing interface. The image editing application also updates an edit state that tracks the edit applied to the preview image. The image editing application modifies, based on a second selection input received via the mask-based editing interface, the selection state to include a second image portion in the preview image. The edit state is maintained with the applied edit concurrently with modifying the selection state. The image editing application applies the edit to the modified preview image in the mask-based editing interface.
    Type: Grant
    Filed: June 19, 2017
    Date of Patent: August 13, 2019
    Assignee: Adobe Inc.
    Inventors: Betty M. Leong, Alan L. Erickson, Sarah Stuckey, Sarah Aye Kong, Bradee R. Evans
  • Publication number: 20190164322
    Abstract: Methods and systems are provided for presenting and using multiple masks, based on a segmented image, when editing the image. In particular, multiple masks can be presented to a user in a graphical user interface for easy selection and use during editing. The graphical user interface can include a display configured to display an image, a mask zone configured to display segmentations of the image using masks, and an edit zone configured to display edits to the image. Upon receiving a segmentation of the image, the masks can be displayed in the mask zone, where the masks are based on a selected segmentation detail level.
    Type: Application
    Filed: November 29, 2017
    Publication date: May 30, 2019
    Inventors: Sarah Aye Kong, I-Ming Pao, Hyunghwan Byun
  • Publication number: 20180365813
    Abstract: In some embodiments, an image editing application stores, based on a first selection input, a selection state that identifies a first image portion of a target image as included in a preview image displayed in a mask-based editing interface of the image editing application. An edit to the preview image generated from the selected first image portion is applied in the mask-based editing interface. The image editing application also updates an edit state that tracks the edit applied to the preview image. The image editing application modifies, based on a second selection input received via the mask-based editing interface, the selection state to include a second image portion in the preview image. The edit state is maintained with the applied edit concurrently with modifying the selection state. The image editing application applies the edit to the modified preview image in the mask-based editing interface.
    Type: Application
    Filed: June 19, 2017
    Publication date: December 20, 2018
    Inventors: Betty M. Leong, Alan L. Erickson, Sarah Stuckey, Sarah Aye Kong, Bradee R. Evans
  • Publication number: 20180365536
    Abstract: Systems and techniques for identification of fonts include receiving a selection of an area of an image including text, where the selection is received from within an application. The selected area of the image is input to a font matching module within the application. The font matching module identifies one or more fonts similar to the text in the selected area using a convolutional neural network. The one or more fonts similar to the text are displayed within the application and the selection and use of the one or more fonts is enabled within the application.
    Type: Application
    Filed: June 19, 2017
    Publication date: December 20, 2018
    Inventors: Zhaowen Wang, Sarah Aye Kong, I-Ming Pao, Hailin Jin, Alan Lee Erickson
  • Patent number: 9070230
    Abstract: Systems and methods are provided for simulating strobe effects with digital image content. In one embodiment, an image manipulation application can receive image content. The image manipulation application can generate blurred image content by applying a blurring operation to a portion of the received image content along a blur trajectory. The image manipulation application can sample pixels from multiple positions in the received image content along the blur trajectory. The image manipulation application can generate a simulated strobe image based on the sampled pixels and at least some of the blurred image content.
    Type: Grant
    Filed: July 23, 2013
    Date of Patent: June 30, 2015
    Assignee: Adobe Systems Incorporated
    Inventors: Gregg D. Wilensky, Sarah Aye Kong
  • Publication number: 20150030246
    Abstract: Systems and methods are provided for simulating strobe effects with digital image content. In one embodiment, an image manipulation application can receive image content. The image manipulation application can generate blurred image content by applying a blurring operation to a portion of the received image content along a blur trajectory. The image manipulation application can sample pixels from multiple positions in the received image content along the blur trajectory. The image manipulation application can generate a simulated strobe image based on the sampled pixels and at least some of the blurred image content.
    Type: Application
    Filed: July 23, 2013
    Publication date: January 29, 2015
    Applicant: Adobe Systems Incorporated
    Inventors: Gregg D. Wilensky, Sarah Aye Kong