Patents by Inventor Sarah Aye Kong
Sarah Aye Kong has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11468614
Abstract: Methods and systems are provided for presenting and using multiple masks based on a segmented image in editing the image. In particular, multiple masks can be presented to a user using a graphical user interface for easy selection and utilization in the editing of an image. The graphical user interface can include a display configured to display an image, a mask zone configured to display segmentations of the image using masks, and an edit zone configured to display edits to the image. Upon receiving segmentation for the image, the masks can be displayed in the mask zone, where the masks are based on a selected segmentation detail level.
Type: Grant
Filed: March 28, 2021
Date of Patent: October 11, 2022
Assignee: Adobe Inc.
Inventors: Sarah Aye Kong, I-Ming Pao, Hyunghwan Byun
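The "segmentation detail level" idea above can be illustrated with a minimal sketch: given an integer label map from some segmentation, coarser detail levels merge fine labels into larger groups, yielding fewer, broader masks. The grouping-by-integer-division scheme and function name here are hypothetical illustrations, not the patented method.

```python
import numpy as np

def masks_for_detail_level(label_map, group_size):
    """Return one binary mask per label group; a larger group_size
    merges more fine labels into each coarser mask (toy scheme)."""
    groups = np.unique(label_map // group_size)
    return {int(g): (label_map // group_size == g) for g in groups}

# 4x4 toy segmentation with four labels
label_map = np.array([[0, 0, 1, 1],
                      [0, 0, 1, 1],
                      [2, 2, 3, 3],
                      [2, 2, 3, 3]])

fine = masks_for_detail_level(label_map, 1)    # one mask per label
coarse = masks_for_detail_level(label_map, 2)  # labels {0,1} and {2,3} merged
```

A mask zone in a UI could then display whichever set of masks corresponds to the user's chosen detail level.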
-
Publication number: 20210350504
Abstract: Methods and systems are provided for generating enhanced images. A neural network system is trained, where the training includes training a first neural network that generates enhanced images conditioned on the content of an image undergoing enhancement, and training a second neural network that evaluates the realism of the enhanced images generated by the first neural network. The neural network system is trained by determining loss and adjusting the appropriate neural network(s) accordingly. The trained neural network system is used to generate an enhanced aesthetic image from a selected image, where the output image has improved aesthetics compared to the selected image.
Type: Application
Filed: July 19, 2021
Publication date: November 11, 2021
Inventors: Xiaohui Shen, Zhe Lin, Xin Lu, Sarah Aye Kong, I-Ming Pao, Yingcong Chen
-
Patent number: 11069030
Abstract: Methods and systems are provided for generating enhanced images. A neural network system is trained, where the training includes training a first neural network that generates enhanced images conditioned on the content of an image undergoing enhancement, and training a second neural network that evaluates the realism of the enhanced images generated by the first neural network. The neural network system is trained by determining loss and adjusting the appropriate neural network(s) accordingly. The trained neural network system is used to generate an enhanced aesthetic image from a selected image, where the output image has improved aesthetics compared to the selected image.
Type: Grant
Filed: March 22, 2018
Date of Patent: July 20, 2021
Assignee: Adobe, Inc.
Inventors: Xiaohui Shen, Zhe Lin, Xin Lu, Sarah Aye Kong, I-Ming Pao, Yingcong Chen
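The generator/realism-discriminator pairing described in this abstract follows the general adversarial-training pattern. A toy sketch of the two loss terms (non-saturating GAN losses from raw discriminator scores, not the specific losses in the patent) shows how "determining loss" steers each network:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adversarial_losses(d_score_real, d_score_fake):
    """Discriminator loss rewards scoring real images high and generated
    (enhanced) images low; generator loss rewards fooling the discriminator."""
    eps = 1e-12
    d_loss = -(np.log(sigmoid(d_score_real) + eps)
               + np.log(1.0 - sigmoid(d_score_fake) + eps))
    g_loss = -np.log(sigmoid(d_score_fake) + eps)
    return d_loss, g_loss

# Confident discriminator (real high, fake low): small d_loss, large g_loss
d_loss_good, g_loss_bad = adversarial_losses(5.0, -5.0)
# Fooled discriminator (fake scored high): large d_loss, small g_loss
d_loss_bad, g_loss_good = adversarial_losses(0.0, 5.0)
```

Each network would then be adjusted (via gradient descent) to reduce its own loss, which is the "accordingly adjusting the appropriate neural network(s)" step.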
-
Publication number: 20210217216
Abstract: Methods and systems are provided for presenting and using multiple masks based on a segmented image in editing the image. In particular, multiple masks can be presented to a user using a graphical user interface for easy selection and utilization in the editing of an image. The graphical user interface can include a display configured to display an image, a mask zone configured to display segmentations of the image using masks, and an edit zone configured to display edits to the image. Upon receiving segmentation for the image, the masks can be displayed in the mask zone, where the masks are based on a selected segmentation detail level.
Type: Application
Filed: March 28, 2021
Publication date: July 15, 2021
Applicant: Adobe Inc.
Inventors: Sarah Aye Kong, I-Ming Pao, Hyunghwan Byun
-
Patent number: 11042990
Abstract: Systems and techniques for automatic object replacement in an image include receiving an original image and a preferred image. The original image is automatically segmented into an original image foreground region and an original image object region. The preferred image is automatically segmented into a preferred image foreground region and a preferred image object region. A composite image is automatically composed by replacing the original image object region with the preferred image object region such that the composite image includes the original image foreground region and the preferred image object region. An attribute of the composite image is automatically adjusted.
Type: Grant
Filed: October 31, 2018
Date of Patent: June 22, 2021
Assignee: Adobe Inc.
Inventors: I-Ming Pao, Sarah Aye Kong, Alan Lee Erickson, Kalyan Sunkavalli, Hyunghwan Byun
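The compositing step this abstract describes (keep the original image outside the object region, take the preferred image's pixels inside it) can be sketched with a binary mask. The segmentation itself and the automatic attribute adjustment are assumed done elsewhere; the function name and toy images are hypothetical.

```python
import numpy as np

def replace_object(original, preferred, object_mask):
    """Keep original pixels where the mask is False; take the preferred
    image's pixels inside the object region (mask True)."""
    return np.where(object_mask[..., None], preferred, original)

original = np.full((4, 4, 3), 100, dtype=np.uint8)   # toy original image
preferred = np.full((4, 4, 3), 200, dtype=np.uint8)  # toy preferred image
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                                # object region to replace

composite = replace_object(original, preferred, mask)
```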
-
Patent number: 10977844
Abstract: Methods and systems are provided for presenting and using multiple masks based on a segmented image in editing the image. In particular, multiple masks can be presented to a user using a graphical user interface for easy selection and utilization in the editing of an image. The graphical user interface can include a display configured to display an image, a mask zone configured to display segmentations of the image using masks, and an edit zone configured to display edits to the image. Upon receiving segmentation for the image, the masks can be displayed in the mask zone, where the masks are based on a selected segmentation detail level.
Type: Grant
Filed: November 29, 2017
Date of Patent: April 13, 2021
Assignee: Adobe Inc.
Inventors: Sarah Aye Kong, I-Ming Pao, Hyunghwan Byun
-
Patent number: 10796421
Abstract: Embodiments of the present invention are directed to facilitating creation of images with selective application of the long-exposure effect. In accordance with some embodiments of the present invention, a virtual long-exposure image comprising a plurality of aligned frames is provided, and a selection of a region of pixels in the virtual long-exposure image is received. The virtual long-exposure image is combined with one of the frames forming the virtual long-exposure image to create a selective virtual long-exposure image. The selective virtual long-exposure image comprises a visible portion of the original virtual long-exposure image and a visible portion of the individual frame that corresponds to the selected region of pixels. Additional frames may be combined with the virtual long-exposure image to create a plurality of selective virtual long-exposure image options, and the user may select one for continued use or for saving.
Type: Grant
Filed: February 13, 2018
Date of Patent: October 6, 2020
Inventors: Seyed Morteza Safdarnejad, Sarah Aye Kong, Gregg Darryl Wilensky, Chih-Yao Hsieh
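A common way to build a virtual long-exposure image is to average a stack of aligned frames; the selective step above then swaps in pixels from a single frame inside the user's chosen region so that region stays sharp. This is an illustrative sketch of that idea, not the patented algorithm:

```python
import numpy as np

def selective_long_exposure(frames, region_mask, frame_index):
    """Average aligned frames into a virtual long-exposure image, then
    replace the selected region with pixels from one individual frame."""
    le = np.mean(frames, axis=0)
    out = le.copy()
    out[region_mask] = frames[frame_index][region_mask]
    return out

# Two toy aligned 4x4 frames
frames = np.stack([np.full((4, 4), v, dtype=float) for v in (0.0, 100.0)])
mask = np.zeros((4, 4), dtype=bool)
mask[0, 0] = True   # region the user wants sharp

result = selective_long_exposure(frames, mask, frame_index=1)
```

Running the function once per candidate frame index would produce the "plurality of selective virtual long-exposure image options" the abstract mentions.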
-
Patent number: 10783408
Abstract: Systems and techniques for identification of fonts include receiving a selection of an area of an image including text, where the selection is received from within an application. The selected area of the image is input to a font matching module within the application. The font matching module identifies one or more fonts similar to the text in the selected area using a convolutional neural network. The one or more fonts similar to the text are displayed within the application and the selection and use of the one or more fonts is enabled within the application.
Type: Grant
Filed: June 19, 2017
Date of Patent: September 22, 2020
Assignee: Adobe Inc.
Inventors: Zhaowen Wang, Sarah Aye Kong, I-Ming Pao, Hailin Jin, Alan Lee Erickson
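Font matching of this kind typically reduces to nearest-neighbor search in an embedding space: the convolutional network maps the selected text crop to a vector, which is compared against precomputed per-font vectors. The sketch below assumes the embeddings already exist (the CNN is not shown); the font names and function are hypothetical.

```python
import numpy as np

def nearest_fonts(query_embedding, font_embeddings, k=2):
    """Rank fonts by cosine similarity between the query crop's
    embedding and precomputed per-font embeddings."""
    names = list(font_embeddings)
    mat = np.stack([font_embeddings[n] for n in names])
    q = query_embedding / np.linalg.norm(query_embedding)
    sims = (mat @ q) / np.linalg.norm(mat, axis=1)
    order = np.argsort(-sims)
    return [names[i] for i in order[:k]]

fonts = {
    "Serif-A":  np.array([1.0, 0.0, 0.0]),
    "Sans-B":   np.array([0.0, 1.0, 0.0]),
    "Script-C": np.array([0.0, 0.0, 1.0]),
}
query = np.array([0.9, 0.1, 0.0])  # embedding of the selected text area
matches = nearest_fonts(query, fonts)
```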
-
Publication number: 20200134834
Abstract: Systems and techniques for automatic object replacement in an image include receiving an original image and a preferred image. The original image is automatically segmented into an original image foreground region and an original image object region. The preferred image is automatically segmented into a preferred image foreground region and a preferred image object region. A composite image is automatically composed by replacing the original image object region with the preferred image object region such that the composite image includes the original image foreground region and the preferred image object region. An attribute of the composite image is automatically adjusted.
Type: Application
Filed: October 31, 2018
Publication date: April 30, 2020
Inventors: I-Ming Pao, Sarah Aye Kong, Alan Lee Erickson, Kalyan Sunkavalli, Hyunghwan Byun
-
Patent number: 10573052
Abstract: Embodiments of the present invention are directed to facilitating creation of cinemagraphs from virtual long-exposure images. In accordance with some embodiments of the present invention, a virtual long-exposure image comprising a plurality of aligned frames is provided, and a selection of a region of pixels in the virtual long-exposure image is received. Based on the selected region of pixels, a set of frames for animation is identified from the plurality of frames. The set of frames may be identified by automatically detecting a sequence of frames or by receiving a user selection of frames. The virtual long-exposure image is combined with the set of frames to create a cinemagraph having a visible non-animated portion formed by the virtual long-exposure image and a visible animated portion formed by the set of frames.
Type: Grant
Filed: February 13, 2018
Date of Patent: February 25, 2020
Assignee: Adobe Inc.
Inventors: Seyed Morteza Safdarnejad, Sarah Aye Kong, Chih-Yao Hsieh
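The cinemagraph composition described above (static long-exposure image everywhere except a selected region that cycles through individual frames) can be sketched as per-frame masked compositing. A minimal illustration, assuming aligned frames and a precomputed region mask; names are hypothetical:

```python
import numpy as np

def cinemagraph_frames(le_image, frames, region_mask):
    """Build output frames: each shows the static long-exposure image,
    except the selected region, which cycles through individual frames."""
    out = []
    for f in frames:
        frame = le_image.copy()
        frame[region_mask] = f[region_mask]   # animated portion
        out.append(frame)
    return out

frames = [np.full((2, 2), v, dtype=float) for v in (10.0, 30.0)]
le = np.mean(frames, axis=0)       # static long-exposure base
mask = np.zeros((2, 2), dtype=bool)
mask[0, 0] = True                  # region selected for animation

clip = cinemagraph_frames(le, frames, mask)
```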
-
Publication number: 20190295223
Abstract: Methods and systems are provided for generating enhanced images. A neural network system is trained, where the training includes training a first neural network that generates enhanced images conditioned on the content of an image undergoing enhancement, and training a second neural network that evaluates the realism of the enhanced images generated by the first neural network. The neural network system is trained by determining loss and adjusting the appropriate neural network(s) accordingly. The trained neural network system is used to generate an enhanced aesthetic image from a selected image, where the output image has improved aesthetics compared to the selected image.
Type: Application
Filed: March 22, 2018
Publication date: September 26, 2019
Inventors: Xiaohui Shen, Zhe Lin, Xin Lu, Sarah Aye Kong, I-Ming Pao, Yingcong Chen
-
Publication number: 20190251729
Abstract: Embodiments of the present invention are directed to facilitating creation of cinemagraphs from virtual long-exposure images. In accordance with some embodiments of the present invention, a virtual long-exposure image comprising a plurality of aligned frames is provided, and a selection of a region of pixels in the virtual long-exposure image is received. Based on the selected region of pixels, a set of frames for animation is identified from the plurality of frames. The set of frames may be identified by automatically detecting a sequence of frames or by receiving a user selection of frames. The virtual long-exposure image is combined with the set of frames to create a cinemagraph having a visible non-animated portion formed by the virtual long-exposure image and a visible animated portion formed by the set of frames.
Type: Application
Filed: February 13, 2018
Publication date: August 15, 2019
Inventors: Seyed Morteza Safdarnejad, Sarah Aye Kong, Chih-Yao Hsieh
-
Publication number: 20190251683
Abstract: Embodiments of the present invention are directed to facilitating creation of images with selective application of the long-exposure effect. In accordance with some embodiments of the present invention, a virtual long-exposure image comprising a plurality of aligned frames is provided, and a selection of a region of pixels in the virtual long-exposure image is received. The virtual long-exposure image is combined with one of the frames forming the virtual long-exposure image to create a selective virtual long-exposure image. The selective virtual long-exposure image comprises a visible portion of the original virtual long-exposure image and a visible portion of the individual frame that corresponds to the selected region of pixels. Additional frames may be combined with the virtual long-exposure image to create a plurality of selective virtual long-exposure image options, and the user may select one for continued use or for saving.
Type: Application
Filed: February 13, 2018
Publication date: August 15, 2019
Inventors: Seyed Morteza Safdarnejad, Sarah Aye Kong, Gregg Darryl Wilensky, Chih-Yao Hsieh
-
Patent number: 10380723
Abstract: In some embodiments, an image editing application stores, based on a first selection input, a selection state that identifies a first image portion of a target image as included in a preview image displayed in a mask-based editing interface of the image editing application. An edit to the preview image generated from the selected first image portion is applied in the mask-based editing interface. The image editing application also updates an edit state that tracks the edit applied to the preview image. The image editing application modifies, based on a second selection input received via the mask-based editing interface, the selection state to include a second image portion in the preview image. The edit state is maintained with the applied edit concurrently with modifying the selection state. The image editing application applies the edit to the modified preview image in the mask-based editing interface.
Type: Grant
Filed: June 19, 2017
Date of Patent: August 13, 2019
Assignee: Adobe Inc.
Inventors: Betty M. Leong, Alan L. Erickson, Sarah Stuckey, Sarah Aye Kong, Bradee R. Evans
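The key mechanism in this abstract is that selection state and edit state are tracked independently, so changing what is selected does not discard edits already applied to the preview. A minimal sketch of that decoupling; the class and method names are hypothetical, not Adobe's API:

```python
class MaskEditSession:
    """Toy model of independent selection state and edit state."""

    def __init__(self):
        self.selected = set()   # image portions included in the preview
        self.edits = []         # edits applied to the preview, in order

    def select(self, portion):
        self.selected.add(portion)   # modify selection state only

    def apply_edit(self, edit):
        self.edits.append(edit)      # edit state tracked separately

session = MaskEditSession()
session.select("sky")
session.apply_edit("brightness+10")
session.select("water")   # edit state survives the selection change
```

On each selection change, the application would re-render the preview from the current selection and then re-apply the tracked edits to it.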
-
Publication number: 20190164322
Abstract: Methods and systems are provided for presenting and using multiple masks based on a segmented image in editing the image. In particular, multiple masks can be presented to a user using a graphical user interface for easy selection and utilization in the editing of an image. The graphical user interface can include a display configured to display an image, a mask zone configured to display segmentations of the image using masks, and an edit zone configured to display edits to the image. Upon receiving segmentation for the image, the masks can be displayed in the mask zone, where the masks are based on a selected segmentation detail level.
Type: Application
Filed: November 29, 2017
Publication date: May 30, 2019
Inventors: Sarah Aye Kong, I-Ming Pao, Hyunghwan Byun
-
Publication number: 20180365813
Abstract: In some embodiments, an image editing application stores, based on a first selection input, a selection state that identifies a first image portion of a target image as included in a preview image displayed in a mask-based editing interface of the image editing application. An edit to the preview image generated from the selected first image portion is applied in the mask-based editing interface. The image editing application also updates an edit state that tracks the edit applied to the preview image. The image editing application modifies, based on a second selection input received via the mask-based editing interface, the selection state to include a second image portion in the preview image. The edit state is maintained with the applied edit concurrently with modifying the selection state. The image editing application applies the edit to the modified preview image in the mask-based editing interface.
Type: Application
Filed: June 19, 2017
Publication date: December 20, 2018
Inventors: Betty M. Leong, Alan L. Erickson, Sarah Stuckey, Sarah Aye Kong, Bradee R. Evans
-
Publication number: 20180365536
Abstract: Systems and techniques for identification of fonts include receiving a selection of an area of an image including text, where the selection is received from within an application. The selected area of the image is input to a font matching module within the application. The font matching module identifies one or more fonts similar to the text in the selected area using a convolutional neural network. The one or more fonts similar to the text are displayed within the application and the selection and use of the one or more fonts is enabled within the application.
Type: Application
Filed: June 19, 2017
Publication date: December 20, 2018
Inventors: Zhaowen Wang, Sarah Aye Kong, I-Ming Pao, Hailin Jin, Alan Lee Erickson
-
Patent number: 9070230
Abstract: Systems and methods are provided for simulating strobe effects with digital image content. In one embodiment, an image manipulation application can receive image content. The image manipulation application can generate blurred image content by applying a blurring operation to a portion of the received image content along a blur trajectory. The image manipulation application can sample pixels from multiple positions in the received image content along the blur trajectory. The image manipulation application can generate simulated strobe images based on the sampled pixels and at least some of the blurred image content.
Type: Grant
Filed: July 23, 2013
Date of Patent: June 30, 2015
Assignee: Adobe Systems Incorporated
Inventors: Gregg D. Wilensky, Sarah Aye Kong
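The strobe effect above amounts to stamping copies of the subject at several positions sampled along the motion (blur) trajectory over a background. A minimal sketch of that compositing step, assuming the blurred background and the trajectory samples are already available; the names and the max-compositing rule are illustrative choices, not the patented method:

```python
import numpy as np

def simulate_strobe(background, sprite, positions):
    """Stamp copies of a small sprite at positions sampled along a
    motion trajectory, approximating repeated strobe exposures."""
    out = background.astype(float).copy()
    h, w = sprite.shape
    for (y, x) in positions:
        # Lighten-only compositing, as repeated exposures accumulate light
        out[y:y + h, x:x + w] = np.maximum(out[y:y + h, x:x + w], sprite)
    return out

background = np.zeros((4, 8))            # stand-in for blurred content
sprite = np.full((2, 2), 255.0)          # pixels sampled from the subject
trajectory = [(1, 0), (1, 3), (1, 6)]    # left-to-right motion samples

strobe = simulate_strobe(background, sprite, trajectory)
```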
-
Publication number: 20150030246
Abstract: Systems and methods are provided for simulating strobe effects with digital image content. In one embodiment, an image manipulation application can receive image content. The image manipulation application can generate blurred image content by applying a blurring operation to a portion of the received image content along a blur trajectory. The image manipulation application can sample pixels from multiple positions in the received image content along the blur trajectory. The image manipulation application can generate simulated strobe images based on the sampled pixels and at least some of the blurred image content.
Type: Application
Filed: July 23, 2013
Publication date: January 29, 2015
Applicant: Adobe Systems Incorporated
Inventors: Gregg D. Wilensky, Sarah Aye Kong