Patents by Inventor Gahye Park

Gahye Park has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230260324
    Abstract: The present disclosure describes systems, non-transitory computer-readable media, and methods for utilizing a machine learning model trained to determine subtle pose differentiations to analyze a repository of captured digital images of a particular user to automatically capture digital images portraying the user. For example, the disclosed systems can utilize a convolutional neural network to determine a pose/facial expression similarity metric between a sample digital image from a camera viewfinder stream of a client device and one or more previously captured digital images portraying the user. The disclosed systems can determine that the similarity metric satisfies a similarity threshold, and automatically capture a digital image utilizing a camera device of the client device. Thus, the disclosed systems can automatically and efficiently capture digital images, such as selfies, that accurately match previous digital images portraying a variety of unique facial expressions specific to individual users.
    Type: Application
    Filed: April 25, 2023
    Publication date: August 17, 2023
    Inventors: Jinoh Oh, Xin Lu, Gahye Park, Jen-Chan Jeff Chien, Yumin Jia
  • Patent number: 11670114
    Abstract: The present disclosure describes systems, non-transitory computer-readable media, and methods for utilizing a machine learning model trained to determine subtle pose differentiations to analyze a repository of captured digital images of a particular user to automatically capture digital images portraying the user. For example, the disclosed systems can utilize a convolutional neural network to determine a pose/facial expression similarity metric between a sample digital image from a camera viewfinder stream of a client device and one or more previously captured digital images portraying the user. The disclosed systems can determine that the similarity metric satisfies a similarity threshold, and automatically capture a digital image utilizing a camera device of the client device. Thus, the disclosed systems can automatically and efficiently capture digital images, such as selfies, that accurately match previous digital images portraying a variety of unique facial expressions specific to individual users.
    Type: Grant
    Filed: October 20, 2020
    Date of Patent: June 6, 2023
    Assignee: Adobe Inc.
    Inventors: Jinoh Oh, Xin Lu, Gahye Park, Jen-Chan Jeff Chien, Yumin Jia
  • Publication number: 20220121841
    Abstract: The present disclosure describes systems, non-transitory computer-readable media, and methods for utilizing a machine learning model trained to determine subtle pose differentiations to analyze a repository of captured digital images of a particular user to automatically capture digital images portraying the user. For example, the disclosed systems can utilize a convolutional neural network to determine a pose/facial expression similarity metric between a sample digital image from a camera viewfinder stream of a client device and one or more previously captured digital images portraying the user. The disclosed systems can determine that the similarity metric satisfies a similarity threshold, and automatically capture a digital image utilizing a camera device of the client device. Thus, the disclosed systems can automatically and efficiently capture digital images, such as selfies, that accurately match previous digital images portraying a variety of unique facial expressions specific to individual users.
    Type: Application
    Filed: October 20, 2020
    Publication date: April 21, 2022
    Inventors: Jinoh Oh, Xin Lu, Gahye Park, Jen-Chan Jeff Chien, Yumin Jia
  • Patent number: 10755459
    Abstract: Techniques and systems are described herein that support improved object painting in digital images through use of perspectives and transfers in a digital medium environment. In one example, a user interacts with a two-dimensional digital image in a user interface output by a computing device to apply digital paint. The computing device fits a three-dimensional model to an object within the image, e.g., the face. The object, as fit to the three-dimensional model, is used to support output of a plurality of perspectives of a view of the object with which a user may interact to digitally paint the object. As part of this, digital paint as specified through the user inputs is applied directly by the computing device to a two-dimensional texture map of the object. This may support transfer of digital paint by a computing device between objects by transferring the digital paint using respective two-dimensional texture maps.
    Type: Grant
    Filed: October 19, 2016
    Date of Patent: August 25, 2020
    Assignee: Adobe Inc.
    Inventors: Zhili Chen, Srinivasa Madhava Phaneendra Angara, Duygu Ceylan Aksit, Byungmoon Kim, Gahye Park
  • Patent number: 10318128
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for facilitating manipulation of images in response to gestures. A user can provide a gesture to effectuate a desired rotation or scaling of an image region. In some implementations, a user might provide a rotation gesture (i.e., a circular pattern) to cause a rotation of an image region or a stroke gesture (i.e., a straight line pattern) to cause a scaling of an image region. Using intuitive gestures, such as touch gestures, the user can control the direction and magnitude of manipulation to accomplish a desired manipulation (e.g., rotation or scaling) of an image region.
    Type: Grant
    Filed: September 30, 2015
    Date of Patent: June 11, 2019
    Assignee: Adobe Inc.
    Inventors: Byungmoon Kim, Gahye Park
  • Patent number: 10269142
    Abstract: The present disclosure is directed towards methods and systems for providing a digital mixed output color of two reference colors defined in an RGB model where the digital mixed output color at least generally reflects a color produced by mixing physical pigments of the two reference colors or a custom user-defined color. The systems and methods receive a selection of two reference colors to mix. Additionally, the systems and methods can determine a mixing ratio of the two reference colors. Moreover, the systems and methods query at least one predefined mixing table and identify from the at least one predefined mixing table a mixed output color correlating to a mixture of the two reference colors.
    Type: Grant
    Filed: October 27, 2016
    Date of Patent: April 23, 2019
    Assignee: Adobe Inc.
    Inventors: Zhili Chen, Daichi Ito, Byungmoon Kim, Gahye Park
  • Patent number: 10223767
    Abstract: In embodiments of facial feature liquifying using face mesh, an image processing application is implemented to modify facial features of a face in an image from a combination of deformation fields. The image processing application can generate a face mesh that includes landmark points, and then construct the deformation fields on the face mesh, where the deformation fields are defined by warpable elements formed from the landmark points. The image processing application can also combine the deformation fields. The image processing application can also receive an input to initiate modifying one or more of the facial features of the face in the image using the combined deformation fields.
    Type: Grant
    Filed: May 1, 2017
    Date of Patent: March 5, 2019
    Assignee: Adobe Inc.
    Inventors: Byungmoon Kim, Daichi Ito, Gahye Park
  • Publication number: 20180122103
    Abstract: The present disclosure is directed towards methods and systems for providing a digital mixed output color of two reference colors defined in an RGB model where the digital mixed output color at least generally reflects a color produced by mixing physical pigments of the two reference colors or a custom user-defined color. The systems and methods receive a selection of two reference colors to mix. Additionally, the systems and methods can determine a mixing ratio of the two reference colors. Moreover, the systems and methods query at least one predefined mixing table and identify from the at least one predefined mixing table a mixed output color correlating to a mixture of the two reference colors.
    Type: Application
    Filed: October 27, 2016
    Publication date: May 3, 2018
    Inventors: Zhili Chen, Daichi Ito, Byungmoon Kim, Gahye Park
  • Publication number: 20180108160
    Abstract: Techniques and systems are described herein that support improved object painting in digital images through use of perspectives and transfers in a digital medium environment. In one example, a user interacts with a two-dimensional digital image in a user interface output by a computing device to apply digital paint. The computing device fits a three-dimensional model to an object within the image, e.g., the face. The object, as fit to the three-dimensional model, is used to support output of a plurality of perspectives of a view of the object with which a user may interact to digitally paint the object. As part of this, digital paint as specified through the user inputs is applied directly by the computing device to a two-dimensional texture map of the object. This may support transfer of digital paint by a computing device between objects by transferring the digital paint using respective two-dimensional texture maps.
    Type: Application
    Filed: October 19, 2016
    Publication date: April 19, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Zhili Chen, Srinivasa Madhava Phaneendra Angara, Duygu Ceylan Aksit, Byungmoon Kim, Gahye Park
  • Publication number: 20170236250
    Abstract: In embodiments of facial feature liquifying using face mesh, an image processing application is implemented to modify facial features of a face in an image from a combination of deformation fields. The image processing application can generate a face mesh that includes landmark points, and then construct the deformation fields on the face mesh, where the deformation fields are defined by warpable elements formed from the landmark points. The image processing application can also combine the deformation fields. The image processing application can also receive an input to initiate modifying one or more of the facial features of the face in the image using the combined deformation fields.
    Type: Application
    Filed: May 1, 2017
    Publication date: August 17, 2017
    Applicant: Adobe Systems Incorporated
    Inventors: Byungmoon Kim, Daichi Ito, Gahye Park
  • Publication number: 20170132452
    Abstract: In embodiments of facial feature liquifying using face mesh, an image processing application is implemented to modify facial features of a face in an image using an updated face mesh generated from a combination of deformation fields. The image processing application can generate a face mesh that includes landmark points, and then construct the deformation fields on the face mesh, where the deformation fields are defined by warpable elements formed from the landmark points. The image processing application can also combine the deformation fields and generate the updated face mesh that includes the combined deformation fields. The image processing application can also display the updated face mesh and receive an input to initiate modifying one or more of the facial features of the face in the image using the combined deformation fields included in the updated face mesh.
    Type: Application
    Filed: November 11, 2015
    Publication date: May 11, 2017
    Inventors: Byungmoon Kim, Daichi Ito, Gahye Park
  • Patent number: 9646195
    Abstract: In embodiments of facial feature liquifying using face mesh, an image processing application is implemented to modify facial features of a face in an image using an updated face mesh generated from a combination of deformation fields. The image processing application can generate a face mesh that includes landmark points, and then construct the deformation fields on the face mesh, where the deformation fields are defined by warpable elements formed from the landmark points. The image processing application can also combine the deformation fields and generate the updated face mesh that includes the combined deformation fields. The image processing application can also display the updated face mesh and receive an input to initiate modifying one or more of the facial features of the face in the image using the combined deformation fields included in the updated face mesh.
    Type: Grant
    Filed: November 11, 2015
    Date of Patent: May 9, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Byungmoon Kim, Daichi Ito, Gahye Park
  • Publication number: 20170090728
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for facilitating manipulation of images in response to gestures. A user can provide a gesture to effectuate a desired rotation or scaling of an image region. In some implementations, a user might provide a rotation gesture (i.e., a circular pattern) to cause a rotation of an image region or a stroke gesture (i.e., a straight line pattern) to cause a scaling of an image region. Using intuitive gestures, such as touch gestures, the user can control the direction and magnitude of manipulation to accomplish a desired manipulation (e.g., rotation or scaling) of an image region.
    Type: Application
    Filed: September 30, 2015
    Publication date: March 30, 2017
    Inventors: Byungmoon Kim, Gahye Park
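
The auto-capture invention (publication 20230260324 and patent 11670114) compares a viewfinder frame against a repository of the user's previous photos and triggers capture when a similarity threshold is satisfied. The following is a minimal illustrative sketch of that decision logic only, not the patented implementation: it assumes pose/expression embeddings have already been extracted (the patents describe a convolutional neural network for this) and uses cosine similarity as a stand-in metric.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two pose/expression embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def should_auto_capture(frame_embedding, repository_embeddings, threshold=0.9):
    # Compare the current viewfinder frame against each previously
    # captured image of the user; trigger automatic capture when any
    # match satisfies the similarity threshold.
    for ref in repository_embeddings:
        if cosine_similarity(frame_embedding, ref) >= threshold:
            return True
    return False
```

The threshold value and the embedding dimensionality are illustrative assumptions; the claims describe the threshold test generically.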
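
The object-painting invention (patent 10755459 and publication 20180108160) applies digital paint directly to a 2D texture map of a 3D-fitted object, which is what makes transfer between objects possible. A minimal sketch of that idea, assuming a texture map is represented as a mapping from UV coordinates to colors (a simplification of the actual representation):

```python
def apply_paint(texture, uv, color):
    # Paint is stored in the object's 2D texture map (UV space) rather
    # than in screen space, so it stays attached to the 3D surface
    # regardless of the viewing perspective.
    texture[uv] = color

def transfer_paint(src_texture, dst_texture):
    # Because both objects carry a UV parameterization, paint can be
    # transferred by copying texel colors between their texture maps.
    for uv, color in src_texture.items():
        dst_texture[uv] = color
    return dst_texture
```

Fitting the 3D model to the object (e.g., the face) and rendering the multiple perspectives are omitted here; the sketch shows only why painting in UV space enables transfer.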
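
The gesture invention (patent 10318128 and publication 20170090728) distinguishes a rotation gesture (circular pattern) from a stroke gesture (straight-line pattern). One simple way to sketch that classification, under the assumption that the gesture arrives as a list of touch points, is to compare net displacement to total path length; the 0.8 cutoff is an illustrative choice, not from the patent:

```python
import math

def classify_gesture(points):
    # A straight stroke travels nearly as far as its total path length,
    # while a circular gesture ends near where it began, so the ratio of
    # net displacement to path length separates the two patterns.
    path_len = sum(math.dist(points[i], points[i + 1])
                   for i in range(len(points) - 1))
    net = math.dist(points[0], points[-1])
    return "stroke" if net / path_len > 0.8 else "rotation"
```

In the described system, a "stroke" result would drive scaling of the image region and a "rotation" result would drive rotation, with direction and magnitude taken from the gesture itself.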
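
The color-mixing invention (patent 10269142 and publication 20180122103) queries a predefined mixing table to find a pigment-like mixed output color for two RGB reference colors, taking a mixing ratio into account. A hypothetical sketch of such a lookup follows; the table contents, the canonical key ordering, and the linear blend toward the dominant reference are all assumptions for illustration:

```python
def mix_colors(table, color_a, color_b, ratio=0.5):
    # Keys are stored in a canonical (sorted) order so that (a, b) and
    # (b, a) resolve to the same predefined table entry.
    key = tuple(sorted([color_a, color_b]))
    mixed = table[key]
    # ratio is the share of color_a in the mixture. At an even mix the
    # table entry is returned as-is; as the ratio approaches a pure
    # reference color, blend linearly toward that reference.
    base = color_a if ratio >= 0.5 else color_b
    w = abs(ratio - 0.5) * 2  # 0 at an even mix, 1 at a pure reference
    return tuple(round(m * (1 - w) + c * w) for m, c in zip(mixed, base))
```

The point of the table-driven design, per the abstract, is that a plain RGB average does not match how physical pigments mix (e.g., blue and yellow pigments yield green); the table encodes the pigment behavior directly.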
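
The face-mesh liquify invention (patent 10223767, patent 9646195, and publications 20170236250 and 20170132452) constructs deformation fields on a landmark-based face mesh and combines them before modifying facial features. A minimal sketch of the combination step, assuming each field is a function from a 2D point to a displacement vector (the patents define fields via warpable elements formed from the landmark points):

```python
def combine_deformation_fields(fields):
    # Combine several deformation fields by summing their displacements
    # at each point; each field maps an (x, y) point to a (dx, dy) offset.
    def combined(point):
        dx = sum(f(point)[0] for f in fields)
        dy = sum(f(point)[1] for f in fields)
        return (dx, dy)
    return combined

def apply_to_mesh(landmarks, field):
    # Warp each landmark point of the face mesh by the combined field,
    # producing the updated face mesh used to modify the image.
    return [(x + field((x, y))[0], y + field((x, y))[1])
            for x, y in landmarks]
```

Summation is one plausible way to combine fields; the claims describe the combination generically, so this is an assumption made for the sketch.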