Patents by Inventor Gahye Park
Gahye Park has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Publication number: 20230260324
  Abstract: The present disclosure describes systems, non-transitory computer-readable media, and methods for utilizing a machine learning model trained to determine subtle pose differentiations to analyze a repository of captured digital images of a particular user to automatically capture digital images portraying the user. For example, the disclosed systems can utilize a convolutional neural network to determine a pose/facial expression similarity metric between a sample digital image from a camera viewfinder stream of a client device and one or more previously captured digital images portraying the user. The disclosed systems can determine that the similarity metric satisfies a similarity threshold, and automatically capture a digital image utilizing a camera device of the client device. Thus, the disclosed systems can automatically and efficiently capture digital images, such as selfies, that accurately match previous digital images portraying a variety of unique facial expressions specific to individual users.
  Type: Application
  Filed: April 25, 2023
  Publication date: August 17, 2023
  Inventors: Jinoh Oh, Xin Lu, Gahye Park, Jen-Chan Jeff Chien, Yumin Jia
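The auto-capture decision described in the abstract reduces to comparing an embedding of the current viewfinder frame against embeddings of the user's previously captured images and triggering a capture when some similarity threshold is met. The sketch below assumes precomputed embedding vectors, a cosine similarity metric, and a 0.9 threshold; these specifics are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two pose/expression embedding vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def should_capture(frame_embedding, reference_embeddings, threshold=0.9):
    """Return True if the viewfinder frame matches any previously
    captured image of the user closely enough to auto-capture."""
    return any(cosine_similarity(frame_embedding, ref) >= threshold
               for ref in reference_embeddings)
```

In a real system the embeddings would come from the convolutional neural network the abstract mentions; here they are opaque vectors so the threshold test itself is the focus.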
- Patent number: 11670114
  Abstract: The present disclosure describes systems, non-transitory computer-readable media, and methods for utilizing a machine learning model trained to determine subtle pose differentiations to analyze a repository of captured digital images of a particular user to automatically capture digital images portraying the user. For example, the disclosed systems can utilize a convolutional neural network to determine a pose/facial expression similarity metric between a sample digital image from a camera viewfinder stream of a client device and one or more previously captured digital images portraying the user. The disclosed systems can determine that the similarity metric satisfies a similarity threshold, and automatically capture a digital image utilizing a camera device of the client device. Thus, the disclosed systems can automatically and efficiently capture digital images, such as selfies, that accurately match previous digital images portraying a variety of unique facial expressions specific to individual users.
  Type: Grant
  Filed: October 20, 2020
  Date of Patent: June 6, 2023
  Assignee: Adobe Inc.
  Inventors: Jinoh Oh, Xin Lu, Gahye Park, Jen-Chan Jeff Chien, Yumin Jia
- Publication number: 20220121841
  Abstract: The present disclosure describes systems, non-transitory computer-readable media, and methods for utilizing a machine learning model trained to determine subtle pose differentiations to analyze a repository of captured digital images of a particular user to automatically capture digital images portraying the user. For example, the disclosed systems can utilize a convolutional neural network to determine a pose/facial expression similarity metric between a sample digital image from a camera viewfinder stream of a client device and one or more previously captured digital images portraying the user. The disclosed systems can determine that the similarity metric satisfies a similarity threshold, and automatically capture a digital image utilizing a camera device of the client device. Thus, the disclosed systems can automatically and efficiently capture digital images, such as selfies, that accurately match previous digital images portraying a variety of unique facial expressions specific to individual users.
  Type: Application
  Filed: October 20, 2020
  Publication date: April 21, 2022
  Inventors: Jinoh Oh, Xin Lu, Gahye Park, Jen-Chan Jeff Chien, Yumin Jia
- Patent number: 10755459
  Abstract: Techniques and systems are described herein that support improved object painting in digital images through use of perspectives and transfers in a digital medium environment. In one example, a user interacts with a two-dimensional digital image in a user interface output by a computing device to apply digital paint. The computing device fits a three-dimensional model to an object within the image, e.g., the face. The object, as fit to the three-dimensional model, is used to support output of a plurality of perspectives of a view of the object with which a user may interact to digitally paint the object. As part of this, digital paint as specified through the user inputs is applied directly by the computing device to a two-dimensional texture map of the object. This may support transfer of digital paint by a computing device between objects by transferring the digital paint using respective two-dimensional texture maps.
  Type: Grant
  Filed: October 19, 2016
  Date of Patent: August 25, 2020
  Assignee: Adobe Inc.
  Inventors: Zhili Chen, Srinivasa Madhava Phaneendra Angara, Duygu Ceylan Aksit, Byungmoon Kim, Gahye Park
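A minimal sketch of the texture-map mechanism the abstract describes: paint is written into a 2D texture indexed by UV coordinates, and transferring paint between objects amounts to copying painted texels between texture maps that share a UV layout. The array shapes, the nearest-texel UV mapping, and the mask-based transfer below are illustrative assumptions rather than the patented implementation.

```python
import numpy as np

def paint_texel(texture, uv, color):
    """Write a paint color at a UV coordinate on a 2D texture map.
    uv lies in [0, 1)^2; the texture is an H x W x 3 float array."""
    h, w, _ = texture.shape
    row = min(int(uv[1] * h), h - 1)  # nearest-texel mapping
    col = min(int(uv[0] * w), w - 1)
    texture[row, col] = color
    return texture

def transfer_paint(src_texture, dst_texture, painted_mask):
    """Copy painted texels (where painted_mask is True) from one
    object's texture map to another with the same UV layout."""
    out = dst_texture.copy()
    out[painted_mask] = src_texture[painted_mask]
    return out
```

Because both objects index their textures by the same UV parameterization, the transfer step needs no geometric reasoning at all, which is the appeal of routing the paint through the texture map.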
- Patent number: 10318128
  Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for facilitating manipulation of images in response to gestures. A user can provide a gesture to effectuate a desired rotation or scaling of an image region. In some implementations, a user might provide a rotation gesture (i.e., a circular pattern) to cause a rotation of an image region or a stroke gesture (i.e., a straight line pattern) to cause a scaling of an image region. Using intuitive gestures, such as touch gestures, the user can control the direction and magnitude of manipulation to accomplish a desired manipulation (e.g., rotation or scaling) of an image region.
  Type: Grant
  Filed: September 30, 2015
  Date of Patent: June 11, 2019
  Assignee: Adobe Inc.
  Inventors: Byungmoon Kim, Gahye Park
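One simple way to distinguish the two gesture types the abstract names is the ratio of net displacement to total path length: near 1 for a straight stroke, near 0 for a closed circular motion. The threshold value and the use of path length as the manipulation magnitude are assumptions for illustration, not the patented classification method.

```python
import math

def classify_gesture(points, straightness_threshold=0.8):
    """Classify a touch path as a 'scale' (straight stroke) or
    'rotate' (circular) gesture, returning (kind, magnitude).
    Heuristic: net displacement / path length is ~1 for a straight
    line and small for a loop."""
    path_len = sum(math.dist(points[i], points[i + 1])
                   for i in range(len(points) - 1))
    if path_len == 0:
        return ("none", 0.0)
    net = math.dist(points[0], points[-1])
    if net / path_len >= straightness_threshold:
        return ("scale", path_len)   # magnitude: stroke length
    return ("rotate", path_len)      # magnitude: arc length swept
```

A production recognizer would classify incrementally as touch samples arrive so the manipulation can track the finger in real time; this batch version only shows the discriminating statistic.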
- Patent number: 10269142
  Abstract: The present disclosure is directed towards methods and systems for providing a digital mixed output color of two reference colors defined in an RGB model where the digital mixed output color at least generally reflects a color produced by mixing physical pigments of the two reference colors or a custom user-defined color. The systems and methods receive a selection of two reference colors to mix. Additionally, the systems and methods can determine a mixing ratio of the two reference colors. Moreover, the systems and methods query at least one predefined mixing table and identify from the at least one predefined mixing table a mixed output color correlating to a mixture of the two reference colors.
  Type: Grant
  Filed: October 27, 2016
  Date of Patent: April 23, 2019
  Assignee: Adobe Inc.
  Inventors: Zhili Chen, Daichi Ito, Byungmoon Kim, Gahye Park
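The query flow in the abstract, looking up a (color pair, mixing ratio) entry in a predefined table, lets the system return pigment-like mixtures (blue + yellow yielding green) that plain RGB interpolation cannot produce. The table entries and the RGB-average fallback below are made-up examples for illustration, not values or behavior from the disclosure.

```python
BLUE, YELLOW = (0, 0, 255), (255, 255, 0)

# Hypothetical mixing table: (color_a, color_b, ratio) -> mixed RGB.
# The green entry mimics physical pigment mixing; real tables in the
# disclosure are predefined, and these values are invented examples.
MIXING_TABLE = {
    (BLUE, YELLOW, 0.5): (58, 128, 67),  # pigment-style green
}

def mix_colors(a, b, ratio=0.5, table=None):
    """Return the mixed output color for two reference RGB colors at
    a given mixing ratio, querying the predefined table first and
    falling back to naive RGB interpolation when no entry exists."""
    if table is None:
        table = MIXING_TABLE
    for key in ((a, b, ratio), (b, a, 1 - ratio)):  # order-insensitive
        if key in table:
            return table[key]
    return tuple(round(ca * (1 - ratio) + cb * ratio)
                 for ca, cb in zip(a, b))
```

Note the contrast the table buys: the fallback average of blue and yellow would be a muddy gray (128, 128, 128), while the table entry returns a green closer to what physical pigments produce.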
- Patent number: 10223767
  Abstract: In embodiments of facial feature liquifying using face mesh, an image processing application is implemented to modify facial features of a face in an image from a combination of deformation fields. The image processing application can generate a face mesh that includes landmark points, and then construct the deformation fields on the face mesh, where the deformation fields are defined by warpable elements formed from the landmark points. The image processing application can also combine the deformation fields. The image processing application can also receive an input to initiate modifying one or more of the facial features of the face in the image using the combined deformation fields.
  Type: Grant
  Filed: May 1, 2017
  Date of Patent: March 5, 2019
  Assignee: Adobe Inc.
  Inventors: Byungmoon Kim, Daichi Ito, Gahye Park
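The pipeline in the abstract, constructing deformation fields on a face mesh, combining them, and then applying the combination to modify features, can be sketched as per-landmark displacement vectors merged by weighted summation. Representing each field as an N x 2 displacement array and combining by summation are plausible assumptions for illustration, not the patented formulation of warpable elements.

```python
import numpy as np

def combine_deformation_fields(fields, weights=None):
    """Combine several deformation fields (each an N x 2 array of
    per-landmark displacements on the face mesh) into one field by
    weighted summation -- one plausible combination rule."""
    fields = np.asarray(fields, dtype=float)
    if weights is None:
        weights = np.ones(len(fields))
    # Weighted sum over the field axis: (K,) x (K, N, 2) -> (N, 2)
    return np.tensordot(weights, fields, axes=1)

def apply_deformation(landmarks, field):
    """Displace mesh landmark points by the combined field."""
    return np.asarray(landmarks, dtype=float) + field
```

Combining the fields once up front means the image is warped a single time, rather than re-warped per adjustment, which is the practical point of the "combined deformation fields" step.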
- Publication number: 20180122103
  Abstract: The present disclosure is directed towards methods and systems for providing a digital mixed output color of two reference colors defined in an RGB model where the digital mixed output color at least generally reflects a color produced by mixing physical pigments of the two reference colors or a custom user-defined color. The systems and methods receive a selection of two reference colors to mix. Additionally, the systems and methods can determine a mixing ratio of the two reference colors. Moreover, the systems and methods query at least one predefined mixing table and identify from the at least one predefined mixing table a mixed output color correlating to a mixture of the two reference colors.
  Type: Application
  Filed: October 27, 2016
  Publication date: May 3, 2018
  Inventors: Zhili Chen, Daichi Ito, Byungmoon Kim, Gahye Park
- Publication number: 20180108160
  Abstract: Techniques and systems are described herein that support improved object painting in digital images through use of perspectives and transfers in a digital medium environment. In one example, a user interacts with a two-dimensional digital image in a user interface output by a computing device to apply digital paint. The computing device fits a three-dimensional model to an object within the image, e.g., the face. The object, as fit to the three-dimensional model, is used to support output of a plurality of perspectives of a view of the object with which a user may interact to digitally paint the object. As part of this, digital paint as specified through the user inputs is applied directly by the computing device to a two-dimensional texture map of the object. This may support transfer of digital paint by a computing device between objects by transferring the digital paint using respective two-dimensional texture maps.
  Type: Application
  Filed: October 19, 2016
  Publication date: April 19, 2018
  Applicant: Adobe Systems Incorporated
  Inventors: Zhili Chen, Srinivasa Madhava Phaneendra Angara, Duygu Ceylan Aksit, Byungmoon Kim, Gahye Park
- Publication number: 20170236250
  Abstract: In embodiments of facial feature liquifying using face mesh, an image processing application is implemented to modify facial features of a face in an image from a combination of deformation fields. The image processing application can generate a face mesh that includes landmark points, and then construct the deformation fields on the face mesh, where the deformation fields are defined by warpable elements formed from the landmark points. The image processing application can also combine the deformation fields. The image processing application can also receive an input to initiate modifying one or more of the facial features of the face in the image using the combined deformation fields.
  Type: Application
  Filed: May 1, 2017
  Publication date: August 17, 2017
  Applicant: Adobe Systems Incorporated
  Inventors: Byungmoon Kim, Daichi Ito, Gahye Park
- Publication number: 20170132452
  Abstract: In embodiments of facial feature liquifying using face mesh, an image processing application is implemented to modify facial features of a face in an image using an updated face mesh generated from a combination of deformation fields. The image processing application can generate a face mesh that includes landmark points, and then construct the deformation fields on the face mesh, where the deformation fields are defined by warpable elements formed from the landmark points. The image processing application can also combine the deformation fields and generate the updated face mesh that includes the combined deformation fields. The image processing application can also display the updated face mesh and receive an input to initiate modifying one or more of the facial features of the face in the image using the combined deformation fields included in the updated face mesh.
  Type: Application
  Filed: November 11, 2015
  Publication date: May 11, 2017
  Inventors: Byungmoon Kim, Daichi Ito, Gahye Park
- Patent number: 9646195
  Abstract: In embodiments of facial feature liquifying using face mesh, an image processing application is implemented to modify facial features of a face in an image using an updated face mesh generated from a combination of deformation fields. The image processing application can generate a face mesh that includes landmark points, and then construct the deformation fields on the face mesh, where the deformation fields are defined by warpable elements formed from the landmark points. The image processing application can also combine the deformation fields and generate the updated face mesh that includes the combined deformation fields. The image processing application can also display the updated face mesh and receive an input to initiate modifying one or more of the facial features of the face in the image using the combined deformation fields included in the updated face mesh.
  Type: Grant
  Filed: November 11, 2015
  Date of Patent: May 9, 2017
  Assignee: Adobe Systems Incorporated
  Inventors: Byungmoon Kim, Daichi Ito, Gahye Park
- Publication number: 20170090728
  Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for facilitating manipulation of images in response to gestures. A user can provide a gesture to effectuate a desired rotation or scaling of an image region. In some implementations, a user might provide a rotation gesture (i.e., a circular pattern) to cause a rotation of an image region or a stroke gesture (i.e., a straight line pattern) to cause a scaling of an image region. Using intuitive gestures, such as touch gestures, the user can control the direction and magnitude of manipulation to accomplish a desired manipulation (e.g., rotation or scaling) of an image region.
  Type: Application
  Filed: September 30, 2015
  Publication date: March 30, 2017
  Inventors: Byungmoon Kim, Gahye Park