Patents by Inventor Hung-Yu Tseng
Hung-Yu Tseng has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240212246
Abstract: A method for generating an output image from an input image and an input text instruction that specifies a location and a modification of an edit applied to the input image using a neural network is described. The neural network includes an image encoder, an image decoder, and an instruction attention network. The method includes receiving the input image and the input text instruction; extracting, from the input image, an input image feature that represents features of the input image using the image encoder; generating a spatial feature and a modification feature from the input text instruction using the instruction attention network; generating an edited image feature from the input image feature, the spatial feature and the modification feature; and generating the output image from the edited image feature using the image decoder.
Type: Application
Filed: December 29, 2023
Publication date: June 27, 2024
Inventors: Tianhao Zhang, Weilong Yang, Honglak Lee, Hung-Yu Tseng, Irfan Aziz Essa, Lu Jiang
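The pipeline this abstract claims (encode the image, split the text instruction into a spatial feature and a modification feature, fuse them with the image feature, decode) can be sketched with toy linear "networks". All dimensions, the random weights, and the sigmoid-gated fusion rule below are illustrative assumptions, not the patented implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the abstract does not specify any sizes.
IMG_DIM, TXT_DIM, FEAT_DIM = 64, 32, 16

# Random linear projections standing in for the learned sub-networks.
W_enc = rng.normal(size=(IMG_DIM, FEAT_DIM))      # image encoder
W_spatial = rng.normal(size=(TXT_DIM, FEAT_DIM))  # instruction attention: where to edit
W_modif = rng.normal(size=(TXT_DIM, FEAT_DIM))    # instruction attention: what to change
W_dec = rng.normal(size=(FEAT_DIM, IMG_DIM))      # image decoder

def edit_image(image, instruction):
    """Follow the claimed steps: encode, derive spatial and modification
    features from the instruction, fuse, then decode the edited feature."""
    img_feat = image @ W_enc
    spatial = 1 / (1 + np.exp(-(instruction @ W_spatial)))  # gate in [0, 1]
    modification = instruction @ W_modif
    # Assumed fusion: blend toward the modification where the gate is high.
    edited_feat = img_feat * (1 - spatial) + modification * spatial
    return edited_feat @ W_dec

out = edit_image(rng.normal(size=IMG_DIM), rng.normal(size=TXT_DIM))
print(out.shape)  # (64,)
```

The gated blend is only one plausible way to combine a "where" feature with a "what" feature; the patent leaves the fusion operator to the trained network.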
-
Patent number: 11900517
Abstract: A method for generating an output image from an input image and an input text instruction that specifies a location and a modification of an edit applied to the input image using a neural network is described. The neural network includes an image encoder, an image decoder, and an instruction attention network. The method includes receiving the input image and the input text instruction; extracting, from the input image, an input image feature that represents features of the input image using the image encoder; generating a spatial feature and a modification feature from the input text instruction using the instruction attention network; generating an edited image feature from the input image feature, the spatial feature and the modification feature; and generating the output image from the edited image feature using the image decoder.
Type: Grant
Filed: December 20, 2022
Date of Patent: February 13, 2024
Assignee: Google LLC
Inventors: Tianhao Zhang, Weilong Yang, Honglak Lee, Hung-Yu Tseng, Irfan Aziz Essa, Lu Jiang
-
Publication number: 20230260239
Abstract: Aspects of the present disclosure are directed to creating a skybox for an artificial reality ("XR") world from a two-dimensional ("2D") image. The 2D image is scanned and split into at least two portions. The portions are mapped onto the interior of a virtual enclosed 3D shape, for example, a virtual cube. A generative adversarial network (GAN) interpolates from the information in the areas mapped from the portions to fill in at least some unmapped areas of the interior of the 3D shape. The 3D shape can be placed in a user's XR world to become the skybox surrounding that world.
Type: Application
Filed: February 13, 2023
Publication date: August 17, 2023
Inventors: Vincent Charles Cheung, Jiemin Zhang, Salvatore Candido, Hung-Yu Tseng
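The abstract's flow (split a 2D image, map the portions onto cube faces, synthesize the unmapped faces) can be sketched without the GAN by substituting a trivial blend for the learned interpolation. The two-way split, the face assignment, and the averaging fill are all assumptions for illustration:

```python
import numpy as np

def make_skybox(image):
    """Split a 2D image into two halves, map them to the front and back
    faces of a virtual cube, and fill the remaining faces by blending the
    mapped ones (a crude stand-in for the GAN interpolation the patent
    describes)."""
    h, w = image.shape
    front = image[:, : w // 2]
    back = image[:, w // 2 :]
    fill = (front + back) / 2  # placeholder for GAN-synthesized content
    return {"front": front, "back": back,
            "left": fill, "right": fill,
            "top": fill, "bottom": fill}

faces = make_skybox(np.arange(16.0).reshape(4, 4))
```

In the patented system the four filled faces would come from a generator conditioned on the mapped portions, so adjacent faces stay visually continuous; the average here only marks which faces are synthesized.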
-
Publication number: 20230177754
Abstract: A method for generating an output image from an input image and an input text instruction that specifies a location and a modification of an edit applied to the input image using a neural network is described. The neural network includes an image encoder, an image decoder, and an instruction attention network. The method includes receiving the input image and the input text instruction; extracting, from the input image, an input image feature that represents features of the input image using the image encoder; generating a spatial feature and a modification feature from the input text instruction using the instruction attention network; generating an edited image feature from the input image feature, the spatial feature and the modification feature; and generating the output image from the edited image feature using the image decoder.
Type: Application
Filed: December 20, 2022
Publication date: June 8, 2023
Inventors: Tianhao Zhang, Weilong Yang, Honglak Lee, Hung-Yu Tseng, Irfan Aziz Essa, Lu Jiang
-
Patent number: 11562518
Abstract: A method for generating an output image from an input image and an input text instruction that specifies a location and a modification of an edit applied to the input image using a neural network is described. The neural network includes an image encoder, an image decoder, and an instruction attention network. The method includes receiving the input image and the input text instruction; extracting, from the input image, an input image feature that represents features of the input image using the image encoder; generating a spatial feature and a modification feature from the input text instruction using the instruction attention network; generating an edited image feature from the input image feature, the spatial feature and the modification feature; and generating the output image from the edited image feature using the image decoder.
Type: Grant
Filed: June 7, 2021
Date of Patent: January 24, 2023
Assignee: Google LLC
Inventors: Tianhao Zhang, Weilong Yang, Honglak Lee, Hung-Yu Tseng, Irfan Aziz Essa, Lu Jiang
-
Patent number: 11375176
Abstract: When an image is projected from 3D, the viewpoint of objects in the image, relative to the camera, must be determined. Since the image itself will not have sufficient information to determine the viewpoint of the various objects in the image, techniques to estimate the viewpoint must be employed. To date, neural networks have been used to infer such viewpoint estimates on an object category basis, but must first be trained with numerous examples that have been manually created. The present disclosure provides a neural network that is trained to learn, from just a few example images, a unique viewpoint estimation network capable of inferring viewpoint estimations for a new object category.
Type: Grant
Filed: February 3, 2020
Date of Patent: June 28, 2022
Assignee: NVIDIA Corporation
Inventors: Hung-Yu Tseng, Shalini De Mello, Jonathan Tremblay, Sifei Liu, Jan Kautz, Stanley Thomas Birchfield
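The core idea in this abstract, adapting shared initial weights to a new object category from only a few labelled examples, can be sketched as few-shot fine-tuning of a toy linear estimator. The linear model, the learning rate, and the made-up "viewpoint" targets are all stand-ins; the patent meta-learns a full viewpoint-estimation network:

```python
import numpy as np

def adapt(w0, examples, lr=0.1, steps=50):
    """Few-shot adaptation: starting from category-agnostic initial weights
    w0, take gradient steps on a handful of labelled example images to get a
    category-specific estimator (toy squared-error regression stand-in)."""
    w = w0.copy()
    for _ in range(steps):
        for x, y in examples:
            w -= lr * (x @ w - y) * x  # gradient step on (x.w - y)^2 / 2
    return w

# Five-shot toy "category": basis-vector features with known viewpoint targets.
targets = [0.5, -1.0, 2.0]
examples = [(np.eye(3)[i], t) for i, t in enumerate(targets)]
w = adapt(np.zeros(3), examples)
err = sum(abs(x @ w - y) for x, y in examples)
```

The piece this sketch omits is the meta-learning itself: in the patent, `w0` is chosen during training so that this short adaptation loop works well across many categories, rather than starting from zeros.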
-
Patent number: 11354792
Abstract: Technologies for image processing based on a creation workflow for creating a type of images are provided. Both multi-stage image generation as well as multi-stage image editing of an existing image are supported. To accomplish this, one system models the sequential creation stages of the creation workflow. In the backward direction, inference networks can backward transform an image into various intermediate stages. In the forward direction, generation networks can forward transform an earlier-stage image into a later-stage image based on stage-specific operations. Advantageously, this technical solution overcomes the limitations of the single-stage generation strategy with a multi-stage framework to model different types of variation at various creation stages. Resultantly, both novices and seasoned artists can use these technologies to efficiently perform complex artwork creation or editing tasks.
Type: Grant
Filed: February 7, 2020
Date of Patent: June 7, 2022
Assignee: Adobe Inc.
Inventors: Matthew David Fisher, Hung-Yu Tseng, Yijun Li, Jingwan Lu
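The stage chain this abstract describes, generation networks walking forward through creation stages and inference networks walking backward, can be sketched with toy invertible operations in place of the networks. The three stage names and the arithmetic stand-ins are assumptions for illustration only:

```python
# Hypothetical three-stage artwork workflow; each "network" is a toy
# numeric operation so the forward/backward walk is easy to follow.
STAGES = ["sketch", "flat", "shaded"]

forward_ops = {  # stand-ins for the stage-specific generation networks
    ("sketch", "flat"): lambda x: x + 10,
    ("flat", "shaded"): lambda x: x * 2,
}
backward_ops = {  # stand-ins for the inference networks
    ("flat", "sketch"): lambda x: x - 10,
    ("shaded", "flat"): lambda x: x / 2,
}

def transform(value, src, dst):
    """Walk the stage chain one hop at a time from src to dst, applying the
    forward or backward operation for each adjacent pair of stages."""
    i, j = STAGES.index(src), STAGES.index(dst)
    step = 1 if j > i else -1
    while i != j:
        ops = forward_ops if step == 1 else backward_ops
        value = ops[(STAGES[i], STAGES[i + step])](value)
        i += step
    return value
```

Editing an existing image then corresponds to walking backward to an intermediate stage, changing it there, and walking forward again; with real networks the backward/forward pair is only approximately inverse, unlike the exact inverses here.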
-
Publication number: 20210383584
Abstract: A method for generating an output image from an input image and an input text instruction that specifies a location and a modification of an edit applied to the input image using a neural network is described. The neural network includes an image encoder, an image decoder, and an instruction attention network. The method includes receiving the input image and the input text instruction; extracting, from the input image, an input image feature that represents features of the input image using the image encoder; generating a spatial feature and a modification feature from the input text instruction using the instruction attention network; generating an edited image feature from the input image feature, the spatial feature and the modification feature; and generating the output image from the edited image feature using the image decoder.
Type: Application
Filed: June 7, 2021
Publication date: December 9, 2021
Inventors: Tianhao Zhang, Weilong Yang, Honglak Lee, Hung-Yu Tseng, Irfan Aziz Essa, Lu Jiang
-
Publication number: 20210248727
Abstract: This disclosure includes technologies for image processing based on a creation workflow for creating a type of images. The disclosed technologies can support both multi-stage image generation as well as multi-stage image editing of an existing image. To accomplish this, the disclosed system models the sequential creation stages of the creation workflow. In the backward direction, inference networks can backward transform an image into various intermediate stages. In the forward direction, generation networks can forward transform an earlier-stage image into a later-stage image based on stage-specific operations. Advantageously, the disclosed technical solution overcomes the limitations of the single-stage generation strategy with a multi-stage framework to model different types of variation at various creation stages. Resultantly, both novices and seasoned artists can use the disclosed technologies to efficiently perform complex artwork creation or editing tasks.
Type: Application
Filed: February 7, 2020
Publication date: August 12, 2021
Inventors: Matthew David Fisher, Hung-Yu Tseng, Yijun Li, Jingwan Lu
-
Publication number: 20210224947
Abstract: Computer vision systems and methods for image to image translation are provided. The system receives a first input image and a second input image and applies a content adversarial loss function to the first input image and the second input image to determine a disentanglement representation of the first input image and a disentanglement representation of the second input image. The system trains a network to generate at least one output image by applying a cross cycle consistency loss function to the first disentanglement representation and the second disentanglement representation to perform multimodal mapping between the first input image and the second input image.
Type: Application
Filed: January 19, 2021
Publication date: July 22, 2021
Applicant: Insurance Services Office, Inc.
Inventors: Hsin-Ying Lee, Hung-Yu Tseng, Jia-Bin Huang, Maneesh Kumar Singh, Ming-Hsuan Yang
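The cross-cycle consistency idea behind this abstract, disentangle each image into parts, swap parts across the two images, then swap back and check that the originals are recovered, can be sketched with toy encoders. Treating "content" as the mean-removed signal and "attribute" as the mean is purely an illustrative assumption standing in for the learned disentangled representation:

```python
import numpy as np

def encode(img):
    """Toy disentanglement: 'attribute' = mean, 'content' = the rest."""
    attr = img.mean()
    return img - attr, attr

def generate(content, attr):
    """Toy generator: recombine a content code with an attribute code."""
    return content + attr

def cross_cycle_loss(img_a, img_b):
    """Swap attributes across the two images, re-encode, swap back, and
    measure how well the originals are reconstructed."""
    ca, aa = encode(img_a)
    cb, ab = encode(img_b)
    u = generate(ca, ab)  # a's content with b's attribute
    v = generate(cb, aa)  # b's content with a's attribute
    cu, au = encode(u)
    cv, av = encode(v)
    rec_a = generate(cu, av)  # swap the attributes back
    rec_b = generate(cv, au)
    return float(np.abs(rec_a - img_a).sum() + np.abs(rec_b - img_b).sum())

loss = cross_cycle_loss(np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 9.0]))
```

With these exactly-invertible toy encoders the loss is zero by construction; in the patented system the loss is nonzero during training and is minimized to force the networks to disentangle content from attribute.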
-
Publication number: 20200252600
Abstract: When an image is projected from 3D, the viewpoint of objects in the image, relative to the camera, must be determined. Since the image itself will not have sufficient information to determine the viewpoint of the various objects in the image, techniques to estimate the viewpoint must be employed. To date, neural networks have been used to infer such viewpoint estimates on an object category basis, but must first be trained with numerous examples that have been manually created. The present disclosure provides a neural network that is trained to learn, from just a few example images, a unique viewpoint estimation network capable of inferring viewpoint estimations for a new object category.
Type: Application
Filed: February 3, 2020
Publication date: August 6, 2020
Inventors: Hung-Yu Tseng, Shalini De Mello, Jonathan Tremblay, Sifei Liu, Jan Kautz, Stanley Thomas Birchfield
-
Patent number: 9607433
Abstract: A geometric structure analyzing method, a geometric structure analyzing system, and a computer program product are provided, to analyze a two-dimensional geometric structure of a model composed of at least one magnetic building block. A magnetic field intensity image of the model is obtained, and a shape of the magnetic field intensity image is used as a contour of the model. The contour of the model is skeletonized to obtain the two-dimensional geometric structure of the model, and the two-dimensional geometric structure is displayed on a display panel. Therefore, a user is allowed to control the two-dimensional geometric structure on the display panel by manipulating the model, to achieve interactive effects including visual and tactile feedbacks.
Type: Grant
Filed: April 23, 2014
Date of Patent: March 28, 2017
Assignee: National Taiwan University
Inventors: Bing-Yu Chen, Rong-Hao Liang, Liwei Chan, Hung-Yu Tseng, Han-Chih Kuo, Da-Yuan Huang
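The image-processing half of this abstract, threshold a magnetic-field-intensity image into a contour, then skeletonize it to recover the blocks' 2D structure, can be sketched as follows. The threshold value and the crude row-midpoint "skeleton" are assumptions; the patent would use a proper morphological thinning step:

```python
import numpy as np

def contour_from_field(field, threshold=0.5):
    """Threshold the magnetic-field-intensity image so that its shape
    becomes the contour of the model, per the abstract's first step."""
    return field > threshold

def skeletonize_rows(mask):
    """Very naive skeletonization: keep only the midpoint of each row's
    filled run (a stand-in for a real thinning algorithm)."""
    skel = np.zeros_like(mask)
    for r, row in enumerate(mask):
        cols = np.flatnonzero(row)
        if cols.size:
            skel[r, (cols[0] + cols[-1]) // 2] = True
    return skel

field = np.zeros((3, 5))
field[1, 1:4] = 1.0  # one magnetic block's field response
skel = skeletonize_rows(contour_from_field(field))
```

In the described system this skeleton would be redrawn on the display panel each frame, so moving the physical magnetic blocks updates the on-screen structure interactively.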
-
Publication number: 20150279096
Abstract: A geometric structure analyzing method, a geometric structure analyzing system, and a computer program product are provided, to analyze a two-dimensional geometric structure of a model composed of at least one magnetic building block. A magnetic field intensity image of the model is obtained, and a shape of the magnetic field intensity image is used as a contour of the model. The contour of the model is skeletonized to obtain the two-dimensional geometric structure of the model, and the two-dimensional geometric structure is displayed on a display panel. Therefore, a user is allowed to control the two-dimensional geometric structure on the display panel by manipulating the model, to achieve interactive effects including visual and tactile feedbacks.
Type: Application
Filed: April 23, 2014
Publication date: October 1, 2015
Applicant: National Taiwan University
Inventors: Bing-Yu Chen, Rong-Hao Liang, Liwei Chan, Hung-Yu Tseng, Han-Chih Kuo, Da-Yuan Huang
-
Patent number: 8753559
Abstract: A fabrication method of nanoparticles is provided. A substrate having a plurality of pillar structures is provided and then a plurality of ring structures is formed to surround the plurality of the pillar structures. The inner wall of each ring structure surrounds the sidewall of each pillar structure. A portion of each pillar structure is removed to reduce the height of each pillar structure and to expose the inner wall of each ring structure. The ring structures are separated from the pillar structures to form a plurality of nanoparticles. Surface modifications are applied to the ring structures before the ring structures are separated from the pillar structures on the substrate.
Type: Grant
Filed: June 21, 2012
Date of Patent: June 17, 2014
Assignee: National Taiwan University
Inventors: Chih-Chung Yang, Hung-Yu Tseng, Wei-Fang Chen, Che-Hao Liao, Yu-Feng Yao
-
Publication number: 20130285267
Abstract: A fabrication method of nanoparticles is provided. A substrate having a plurality of pillar structures is provided and then a plurality of ring structures is formed to surround the plurality of the pillar structures. The inner wall of each ring structure surrounds the sidewall of each pillar structure. A portion of each pillar structure is removed to reduce the height of each pillar structure and to expose the inner wall of each ring structure. The ring structures are separated from the pillar structures to form a plurality of nanoparticles. Surface modifications are applied to the ring structures before the ring structures are separated from the pillar structures on the substrate.
Type: Application
Filed: June 21, 2012
Publication date: October 31, 2013
Applicant: National Taiwan University
Inventors: Chih-Chung Yang, Hung-Yu Tseng, Wei-Fang Chen, Che-Hao Liao, Yu-Feng Yao
-
Publication number: 20110013192
Abstract: A method for forming a localized surface plasmon resonance (LSPR) sensor is disclosed, including providing a substrate, forming a metal thin film on the substrate and irradiating the metal thin film with a laser to form a plurality of metal nanoparticles, wherein the metal nanoparticles have a fixed orientation.
Type: Application
Filed: February 1, 2010
Publication date: January 20, 2011
Applicant: National Taiwan University
Inventors: Chih-Chung Yang, Cheng-Yen Chen, Jyh-Yang Wang, Yen-Cheng Lu, Hung-Yu Tseng, Fu-Ji Tsai
-
Patent number: 7688171
Abstract: A transformer includes a primary winding coil, a winding frame member, multiple first three-dimensional conductive pieces, a second three-dimensional conductive piece, a magnetic core assembly and a fixing plate. The winding frame member includes a first winding frame and a second winding frame for winding the primary winding coil thereon. The first three-dimensional conductive pieces are respectively sheathed around the first winding frame and the second winding frame of the winding frame member. The second three-dimensional conductive piece is arranged between the first three-dimensional conductive pieces. The magnetic core assembly is partially embedded into the first three-dimensional conductive pieces, the first winding frame, the second winding frame and the second three-dimensional conductive piece.
Type: Grant
Filed: August 28, 2008
Date of Patent: March 30, 2010
Assignee: Delta Electronics, Inc.
Inventors: Sheng-Nan Tsai, Shun-Tai Wang, Hung-Yu Tseng, Ching-Hsien Teng
-
Publication number: 20090309684
Abstract: A transformer includes a primary winding coil, a winding frame member, multiple first three-dimensional conductive pieces, a second three-dimensional conductive piece, a magnetic core assembly and a fixing plate. The winding frame member includes a first winding frame and a second winding frame for winding the primary winding coil thereon. The first three-dimensional conductive pieces are respectively sheathed around the first winding frame and the second winding frame of the winding frame member. The second three-dimensional conductive piece is arranged between the first three-dimensional conductive pieces. The magnetic core assembly is partially embedded into the first three-dimensional conductive pieces, the first winding frame, the second winding frame and the second three-dimensional conductive piece.
Type: Application
Filed: August 28, 2008
Publication date: December 17, 2009
Applicant: Delta Electronics, Inc.
Inventors: Sheng-Nan Tsai, Shun-Tai Wang, Hung-Yu Tseng, Ching-Hsien Teng