Patents by Inventor Hung-Yu Tseng

Hung-Yu Tseng has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11900517
    Abstract: A method for generating an output image from an input image and an input text instruction that specifies a location and a modification of an edit applied to the input image using a neural network is described. The neural network includes an image encoder, an image decoder, and an instruction attention network. The method includes receiving the input image and the input text instruction; extracting, from the input image, an input image feature that represents features of the input image using the image encoder; generating a spatial feature and a modification feature from the input text instruction using the instruction attention network; generating an edited image feature from the input image feature, the spatial feature and the modification feature; and generating the output image from the edited image feature using the image decoder.
    Type: Grant
    Filed: December 20, 2022
    Date of Patent: February 13, 2024
    Assignee: Google LLC
    Inventors: Tianhao Zhang, Weilong Yang, Honglak Lee, Hung-Yu Tseng, Irfan Aziz Essa, Lu Jiang
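The abstract above describes a three-part network: an image encoder, an instruction attention network that splits the text instruction into a spatial feature (where to edit) and a modification feature (what to change), and an image decoder. A minimal toy sketch of that data flow, with NumPy stand-ins for the learned networks (all shapes, names, and the fixed "top-left" mask are illustrative assumptions, not from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)

def image_encoder(image):
    # Toy "encoder": block-average the image down to an 8x8xC feature map.
    h, w, c = image.shape
    return image.reshape(8, h // 8, 8, w // 8, c).mean(axis=(1, 3))

def instruction_attention(tokens, feat_channels):
    # Toy stand-in: a fixed "top-left" spatial mask and a random
    # modification vector. A real model would attend over the tokens.
    spatial = np.zeros((8, 8))
    spatial[:4, :4] = 1.0
    modification = rng.standard_normal(feat_channels) * 0.1
    return spatial, modification

def image_decoder(feat, out_shape):
    # Toy "decoder": nearest-neighbor upsample back to image resolution.
    h, w, _ = out_shape
    return np.repeat(np.repeat(feat, h // 8, axis=0), w // 8, axis=1)

def edit_image(image, instruction):
    feat = image_encoder(image)
    spatial, modification = instruction_attention(instruction.split(),
                                                  feat.shape[-1])
    # Apply the modification feature only where the spatial feature attends.
    edited = feat + spatial[..., None] * modification
    return image_decoder(edited, image.shape)

image = rng.random((64, 64, 3))
out = edit_image(image, "make the top-left corner brighter")
print(out.shape)  # (64, 64, 3)
```

Regions outside the attended mask decode unchanged, which is the point of factoring the instruction into separate spatial and modification features.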
  • Publication number: 20230260239
    Abstract: Aspects of the present disclosure are directed to creating a skybox for an artificial reality (“XR”) world from a two-dimensional (“2D”) image. The 2D image is scanned and split into at least two portions. The portions are mapped onto the interior of a virtual enclosed 3D shape, for example, a virtual cube. A generative adversarial network (GAN) interpolates from the information in the areas mapped from the portions to fill in at least some unmapped areas of the interior of the 3D shape. The 3D shape can be placed in a user's XR world to become the skybox surrounding that world.
    Type: Application
    Filed: February 13, 2023
    Publication date: August 17, 2023
    Inventors: Vincent Charles CHEUNG, Jiemin ZHANG, Salvatore CANDIDO, Hung-Yu TSENG
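The skybox pipeline above splits a 2D image into portions, maps them onto the interior faces of a virtual cube, and uses a GAN to fill the unmapped faces. A minimal sketch of the mapping step, with a simple row-interpolation placeholder standing in for the GAN (face layout and sizes are illustrative assumptions):

```python
import numpy as np

def build_skybox(pano, face=64):
    # Split the 2D image into four side portions of the cube interior.
    assert pano.shape[0] == face and pano.shape[1] == 4 * face
    faces = {name: pano[:, i * face:(i + 1) * face]
             for i, name in enumerate(["front", "right", "back", "left"])}
    # Placeholder for the GAN: fill the unmapped top/bottom faces by
    # interpolating from the rows adjacent to them on the mapped faces.
    top_seed = np.mean([faces[n][0] for n in faces], axis=0)   # (face, 3)
    bot_seed = np.mean([faces[n][-1] for n in faces], axis=0)
    faces["top"] = np.tile(top_seed, (face, 1, 1))
    faces["bottom"] = np.tile(bot_seed, (face, 1, 1))
    return faces

pano = np.random.default_rng(1).random((64, 256, 3))
skybox = build_skybox(pano)
print(sorted(skybox.keys()))
```

The resulting six faces can then texture the interior of a cube surrounding the user's XR world.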
  • Publication number: 20230177754
    Abstract: A method for generating an output image from an input image and an input text instruction that specifies a location and a modification of an edit applied to the input image using a neural network is described. The neural network includes an image encoder, an image decoder, and an instruction attention network. The method includes receiving the input image and the input text instruction; extracting, from the input image, an input image feature that represents features of the input image using the image encoder; generating a spatial feature and a modification feature from the input text instruction using the instruction attention network; generating an edited image feature from the input image feature, the spatial feature and the modification feature; and generating the output image from the edited image feature using the image decoder.
    Type: Application
    Filed: December 20, 2022
    Publication date: June 8, 2023
    Inventors: Tianhao Zhang, Weilong Yang, Honglak Lee, Hung-Yu Tseng, Irfan Aziz Essa, Lu Jiang
  • Patent number: 11562518
    Abstract: A method for generating an output image from an input image and an input text instruction that specifies a location and a modification of an edit applied to the input image using a neural network is described. The neural network includes an image encoder, an image decoder, and an instruction attention network. The method includes receiving the input image and the input text instruction; extracting, from the input image, an input image feature that represents features of the input image using the image encoder; generating a spatial feature and a modification feature from the input text instruction using the instruction attention network; generating an edited image feature from the input image feature, the spatial feature and the modification feature; and generating the output image from the edited image feature using the image decoder.
    Type: Grant
    Filed: June 7, 2021
    Date of Patent: January 24, 2023
    Assignee: Google LLC
    Inventors: Tianhao Zhang, Weilong Yang, Honglak Lee, Hung-Yu Tseng, Irfan Aziz Essa, Lu Jiang
  • Patent number: 11375176
    Abstract: When an image is projected from 3D, the viewpoint of objects in the image, relative to the camera, must be determined. Since the image itself will not have sufficient information to determine the viewpoint of the various objects in the image, techniques to estimate the viewpoint must be employed. To date, neural networks have been used to infer such viewpoint estimates on an object category basis, but must first be trained with numerous examples that have been manually created. The present disclosure provides a neural network that is trained to learn, from just a few example images, a unique viewpoint estimation network capable of inferring viewpoint estimations for a new object category.
    Type: Grant
    Filed: February 3, 2020
    Date of Patent: June 28, 2022
    Assignee: NVIDIA Corporation
    Inventors: Hung-Yu Tseng, Shalini De Mello, Jonathan Tremblay, Sifei Liu, Jan Kautz, Stanley Thomas Birchfield
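The abstract describes meta-learning a viewpoint estimator that adapts to a new object category from just a few examples. A toy Reptile-style sketch of that idea on linear regressors (the linear model, dimensions, and learning rates are illustrative stand-ins for the patented network, not its actual design):

```python
import numpy as np

rng = np.random.default_rng(0)

def inner_adapt(w, xs, ys, lr=0.1, steps=5):
    # A few gradient steps on the small support set for one category.
    for _ in range(steps):
        grad = xs.T @ (xs @ w - ys) / len(xs)
        w = w - lr * grad
    return w

def make_category_task():
    # Each "category" is a toy linear viewpoint regressor.
    true_w = rng.standard_normal(4)
    xs = rng.standard_normal((8, 4))   # 8 example images (as features)
    ys = xs @ true_w
    return xs, ys

# Reptile-style meta-training: nudge the shared initialization toward
# each task's adapted weights so few examples suffice at test time.
w_meta = np.zeros(4)
for _ in range(200):
    xs, ys = make_category_task()
    w_task = inner_adapt(w_meta, xs, ys)
    w_meta += 0.1 * (w_task - w_meta)

# New, unseen category: adapt from just its few examples.
xs, ys = make_category_task()
w_new = inner_adapt(w_meta, xs, ys)
err = np.mean((xs @ w_new - ys) ** 2)
print(f"support error after few-shot adaptation: {err:.4f}")
```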
  • Patent number: 11354792
    Abstract: Technologies for image processing based on a creation workflow for creating a type of images are provided. Both multi-stage image generation as well as multi-stage image editing of an existing image are supported. To accomplish this, one system models the sequential creation stages of the creation workflow. In the backward direction, inference networks can backward transform an image into various intermediate stages. In the forward direction, generation networks can forward transform an earlier-stage image into a later-stage image based on stage-specific operations. Advantageously, this technical solution overcomes the limitations of the single-stage generation strategy with a multi-stage framework to model different types of variation at various creation stages. Resultantly, both novices and seasoned artists can use these technologies to efficiently perform complex artwork creation or editing tasks.
    Type: Grant
    Filed: February 7, 2020
    Date of Patent: June 7, 2022
    Assignee: Adobe Inc.
    Inventors: Matthew David Fisher, Hung-Yu Tseng, Yijun Li, Jingwan Lu
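The workflow framework above pairs backward inference networks (final image to earlier stages) with forward generation networks (earlier stage to later stage). A toy sketch of editing at an intermediate stage, using trivial quantize/threshold functions as stand-ins for the inference and generation networks (the three stages and all operations are illustrative assumptions):

```python
import numpy as np

# Toy three-stage workflow: sketch -> flat color -> final image.

def infer_flat(final):
    # Backward: quantize away shading detail to recover flat tones.
    return np.round(final * 4) / 4

def infer_sketch(flat):
    # Backward: keep only a binary sketch-like map.
    return (flat > 0.5).astype(float)

def generate_flat(sketch, palette=0.3):
    # Forward: fill the sketch with a stage-specific flat tone.
    return sketch * (0.5 + palette) + (1 - sketch) * palette

def generate_final(flat, shading):
    # Forward: add shading variation on top of the flat tones.
    return np.clip(flat + shading, 0, 1)

rng = np.random.default_rng(2)
final = rng.random((16, 16))
# Edit at an intermediate stage: back-infer to the sketch, change the
# palette, then re-generate forward through the remaining stages.
flat = infer_flat(final)
sketch = infer_sketch(flat)
edited = generate_final(generate_flat(sketch, palette=0.1),
                        shading=rng.standard_normal((16, 16)) * 0.05)
print(edited.shape)
```

Modeling each stage separately is what lets an edit target one kind of variation (here, the palette) while leaving the others intact.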
  • Publication number: 20210383584
    Abstract: A method for generating an output image from an input image and an input text instruction that specifies a location and a modification of an edit applied to the input image using a neural network is described. The neural network includes an image encoder, an image decoder, and an instruction attention network. The method includes receiving the input image and the input text instruction; extracting, from the input image, an input image feature that represents features of the input image using the image encoder; generating a spatial feature and a modification feature from the input text instruction using the instruction attention network; generating an edited image feature from the input image feature, the spatial feature and the modification feature; and generating the output image from the edited image feature using the image decoder.
    Type: Application
    Filed: June 7, 2021
    Publication date: December 9, 2021
    Inventors: Tianhao Zhang, Weilong Yang, Honglak Lee, Hung-Yu Tseng, Irfan Aziz Essa, Lu Jiang
  • Publication number: 20210248727
    Abstract: This disclosure includes technologies for image processing based on a creation workflow for creating a type of images. The disclosed technologies can support both multi-stage image generation as well as multi-stage image editing of an existing image. To accomplish this, the disclosed system models the sequential creation stages of the creation workflow. In the backward direction, inference networks can backward transform an image into various intermediate stages. In the forward direction, generation networks can forward transform an earlier-stage image into a later-stage image based on stage-specific operations. Advantageously, the disclosed technical solution overcomes the limitations of the single-stage generation strategy with a multi-stage framework to model different types of variation at various creation stages. Resultantly, both novices and seasoned artists can use the disclosed technologies to efficiently perform complex artwork creation or editing tasks.
    Type: Application
    Filed: February 7, 2020
    Publication date: August 12, 2021
    Inventors: Matthew David Fisher, Hung-Yu Tseng, Yijun Li, Jingwan Lu
  • Publication number: 20210224947
    Abstract: Computer vision systems and methods for image to image translation are provided. The system receives a first input image and a second input image and applies a content adversarial loss function to the first input image and the second input image to determine a disentanglement representation of the first input image and a disentanglement representation of the second input image. The system trains a network to generate at least one output image by applying a cross cycle consistency loss function to the first disentanglement representation and the second disentanglement representation to perform multimodal mapping between the first input image and the second input image.
    Type: Application
    Filed: January 19, 2021
    Publication date: July 22, 2021
    Applicant: Insurance Services Office, Inc.
    Inventors: Hsin-Ying Lee, Hung-Yu Tseng, Jia-Bin Huang, Maneesh Kumar Singh, Ming-Hsuan Yang
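The cross-cycle consistency idea above can be sketched with a toy disentanglement in which an "image" is just a concatenated content code and style code; swapping styles twice must recover the originals. The split-by-slicing encoders are illustrative stand-ins for the learned content and attribute encoders:

```python
import numpy as np

def encode(image, style_dim=2):
    style = image[:style_dim]       # stand-in for the attribute encoder
    content = image[style_dim:]     # stand-in for the content encoder
    return content, style

def generate(content, style):
    return np.concatenate([style, content])

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([9.0, 8.0, 7.0, 6.0, 5.0])

cx, sx = encode(x)
cy, sy = encode(y)
# First translation: swap styles across the two domains.
u = generate(cx, sy)
v = generate(cy, sx)
# Second translation: swap back. Cross-cycle consistency demands that
# this recovers the original images.
cu, su = encode(u)
cv, sv = encode(v)
x_rec = generate(cu, sv)
y_rec = generate(cv, su)
print(np.allclose(x_rec, x), np.allclose(y_rec, y))  # True True
```

In the patented system this round-trip is enforced as a training loss on learned representations rather than holding exactly by construction.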
  • Publication number: 20200252600
    Abstract: When an image is projected from 3D, the viewpoint of objects in the image, relative to the camera, must be determined. Since the image itself will not have sufficient information to determine the viewpoint of the various objects in the image, techniques to estimate the viewpoint must be employed. To date, neural networks have been used to infer such viewpoint estimates on an object category basis, but must first be trained with numerous examples that have been manually created. The present disclosure provides a neural network that is trained to learn, from just a few example images, a unique viewpoint estimation network capable of inferring viewpoint estimations for a new object category.
    Type: Application
    Filed: February 3, 2020
    Publication date: August 6, 2020
    Inventors: Hung-Yu Tseng, Shalini De Mello, Jonathan Tremblay, Sifei Liu, Jan Kautz, Stanley Thomas Birchfield
  • Patent number: 9607433
    Abstract: A geometric structure analyzing method, a geometric structure analyzing system, and a computer program product are provided, to analyze a two-dimensional geometric structure of a model composed of at least one magnetic building block. A magnetic field intensity image of the model is obtained, and a shape of the magnetic field intensity image is used as a contour of the model. The contour of the model is skeletonized to obtain the two-dimensional geometric structure of the model, and the two-dimensional geometric structure is displayed on a display panel. Therefore, a user is allowed to control the two-dimensional geometric structure on the display panel by manipulating the model, to achieve interactive effects including visual and tactile feedbacks.
    Type: Grant
    Filed: April 23, 2014
    Date of Patent: March 28, 2017
    Assignee: National Taiwan University
    Inventors: Bing-Yu Chen, Rong-Hao Liang, Liwei Chan, Hung-Yu Tseng, Han-Chih Kuo, Da-Yuan Huang
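The analyzing method above thresholds the magnetic field intensity image to get the model's contour, then skeletonizes that contour into a 2D structure. A crude sketch using a BFS distance transform with ridge detection as the skeletonization step (the patent does not specify this particular algorithm; it is one simple stand-in):

```python
import numpy as np
from collections import deque

def skeletonize(field, thresh=0.5):
    # 1) The field intensity image's shape serves as the model contour.
    mask = field > thresh
    h, w = mask.shape
    # 2) Distance-to-boundary via multi-source BFS from the background.
    dist = np.where(mask, -1, 0)
    q = deque((r, c) for r in range(h) for c in range(w) if not mask[r, c])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and dist[nr, nc] == -1:
                dist[nr, nc] = dist[r, c] + 1
                q.append((nr, nc))
    # 3) Keep ridge points of the distance map as a crude skeleton.
    pad = np.pad(dist, 1, constant_values=0)
    neighbors = np.stack([pad[1:-1, :-2], pad[1:-1, 2:],
                          pad[:-2, 1:-1], pad[2:, 1:-1]])
    return mask & (dist >= neighbors.max(axis=0))

field = np.zeros((11, 11))
field[3:8, 2:9] = 1.0   # a toy block-shaped model
skel = skeletonize(field)
print(int(skel.sum()))
```

The resulting skeleton approximates the arrangement of the magnetic building blocks and can be rendered on the display panel for interactive feedback.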
  • Publication number: 20150279096
    Abstract: A geometric structure analyzing method, a geometric structure analyzing system, and a computer program product are provided, to analyze a two-dimensional geometric structure of a model composed of at least one magnetic building block. A magnetic field intensity image of the model is obtained, and a shape of the magnetic field intensity image is used as a contour of the model. The contour of the model is skeletonized to obtain the two-dimensional geometric structure of the model, and the two-dimensional geometric structure is displayed on a display panel. Therefore, a user is allowed to control the two-dimensional geometric structure on the display panel by manipulating the model, to achieve interactive effects including visual and tactile feedbacks.
    Type: Application
    Filed: April 23, 2014
    Publication date: October 1, 2015
    Applicant: NATIONAL TAIWAN UNIVERSITY
    Inventors: Bing-Yu Chen, Rong-Hao Liang, Liwei Chan, Hung-Yu Tseng, Han-Chih Kuo, Da-Yuan Huang
  • Patent number: 8753559
    Abstract: A fabrication method of nanoparticles is provided. A substrate having a plurality of pillar structures is provided and then a plurality of ring structures is formed to surround the plurality of the pillar structures. The inner wall of each ring structure surrounds the sidewall of each pillar structure. A portion of each pillar structure is removed to reduce the height of each pillar structure and to expose the inner wall of each ring structure. The ring structures are separated from the pillar structures to form a plurality of nanoparticles. Surface modifications are applied to the ring structures before the ring structures are separated from the pillar structures on the substrate.
    Type: Grant
    Filed: June 21, 2012
    Date of Patent: June 17, 2014
    Assignee: National Taiwan University
    Inventors: Chih-Chung Yang, Hung-Yu Tseng, Wei-Fang Chen, Che-Hao Liao, Yu-Feng Yao
  • Publication number: 20130285267
    Abstract: A fabrication method of nanoparticles is provided. A substrate having a plurality of pillar structures is provided and then a plurality of ring structures is formed to surround the plurality of the pillar structures. The inner wall of each ring structure surrounds the sidewall of each pillar structure. A portion of each pillar structure is removed to reduce the height of each pillar structure and to expose the inner wall of each ring structure. The ring structures are separated from the pillar structures to form a plurality of nanoparticles. Surface modifications are applied to the ring structures before the ring structures are separated from the pillar structures on the substrate.
    Type: Application
    Filed: June 21, 2012
    Publication date: October 31, 2013
    Applicant: NATIONAL TAIWAN UNIVERSITY
    Inventors: Chih-Chung Yang, Hung-Yu Tseng, Wei-Fang Chen, Che-Hao Liao, Yu-Feng Yao
  • Publication number: 20110013192
    Abstract: A method for forming a localized surface plasmon resonance (LSPR) sensor is disclosed, including providing a substrate, forming a metal thin film on the substrate and irradiating the metal thin film with a laser to form a plurality of metal nanoparticles, wherein the metal nanoparticles have a fixed orientation.
    Type: Application
    Filed: February 1, 2010
    Publication date: January 20, 2011
    Applicant: NATIONAL TAIWAN UNIVERSITY
    Inventors: Chih-Chung Yang, Cheng-Yen Chen, Jyh-Yang Wang, Yen-Cheng Lu, Hung-Yu Tseng, Fu-Ji Tsai
  • Patent number: 7688171
    Abstract: A transformer includes a primary winding coil, a winding frame member, multiple first three-dimensional conductive pieces, a second three-dimensional conductive piece, a magnetic core assembly and a fixing plate. The winding frame member includes a first winding frame and a second winding frame for winding the primary winding coil thereon. The first three-dimensional conductive pieces are respectively sheathed around the first winding frame and the second winding frame of the winding frame member. The second three-dimensional conductive piece is arranged between the first three-dimensional conductive pieces. The magnetic core assembly is partially embedded into the first three-dimensional conductive pieces, the first winding frame, the second winding frame and the second three-dimensional conductive piece.
    Type: Grant
    Filed: August 28, 2008
    Date of Patent: March 30, 2010
    Assignee: Delta Electronics, Inc.
    Inventors: Sheng-Nan Tsai, Shun-Tai Wang, Hung-Yu Tseng, Ching-Hsien Teng
  • Publication number: 20090309684
    Abstract: A transformer includes a primary winding coil, a winding frame member, multiple first three-dimensional conductive pieces, a second three-dimensional conductive piece, a magnetic core assembly and a fixing plate. The winding frame member includes a first winding frame and a second winding frame for winding the primary winding coil thereon. The first three-dimensional conductive pieces are respectively sheathed around the first winding frame and the second winding frame of the winding frame member. The second three-dimensional conductive piece is arranged between the first three-dimensional conductive pieces. The magnetic core assembly is partially embedded into the first three-dimensional conductive pieces, the first winding frame, the second winding frame and the second three-dimensional conductive piece.
    Type: Application
    Filed: August 28, 2008
    Publication date: December 17, 2009
    Applicant: DELTA ELECTRONICS, INC.
    Inventors: Sheng-Nan Tsai, Shun-Tai Wang, Hung-Yu Tseng, Ching-Hsien Teng