Patents by Inventor Keng-Hao Chang

Keng-Hao Chang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11960573
    Abstract: Neural network-based categorization can be improved by incorporating graph neural networks that operate on a graph representing the taxonomy of the categories into which a given input is to be categorized by the neural network-based categorization. The output of a graph neural network, operating on a graph representing the taxonomy of categories, can be combined with the output of a neural network operating upon the input to be categorized, such as through an interaction of multidimensional output data, for example a dot product of output vectors. In such a manner, information conveying the explicit relationships between categories, as defined by the taxonomy, can be incorporated into the categorization. To recapture information, incorporate new information, or reemphasize information, a second neural network can also operate upon the input to be categorized, with the output of such a second neural network being merged with the output of the interaction.
    Type: Grant
    Filed: November 7, 2022
    Date of Patent: April 16, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Tianchuan Du, Keng-Hao Chang, Ruofei Zhang, Paul Liu
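Illustrative sketch (not part of the patent record): a minimal example of the idea in the abstract above, assuming PyTorch. The module names, layer sizes, and the simple neighbor-averaging graph layer are placeholders rather than the claimed architecture. A graph layer over the taxonomy produces category vectors, an input encoder produces an input vector, their dot products form interaction scores, and a second network's output over the same input is merged with those scores.

```python
# A minimal sketch of taxonomy-aware categorization, assuming PyTorch.
import torch
import torch.nn as nn

class SimpleGraphLayer(nn.Module):
    """One round of neighbor averaging over the taxonomy graph."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, node_feats, adj):
        # adj: (C, C) normalized adjacency of the category taxonomy
        return torch.relu(self.linear(adj @ node_feats))

class TaxonomyAwareClassifier(nn.Module):
    def __init__(self, input_dim, num_categories, dim=64):
        super().__init__()
        self.input_encoder = nn.Sequential(nn.Linear(input_dim, dim), nn.ReLU())
        self.second_encoder = nn.Linear(input_dim, num_categories)
        self.category_embed = nn.Embedding(num_categories, dim)
        self.gnn = SimpleGraphLayer(dim)

    def forward(self, x, adj):
        cat_feats = self.gnn(self.category_embed.weight, adj)  # (C, dim)
        x_feats = self.input_encoder(x)                         # (B, dim)
        interaction = x_feats @ cat_feats.t()                   # dot products, (B, C)
        # Merge the interaction with a second network over the same input.
        return interaction + self.second_encoder(x)             # (B, C) logits

# Usage: category scores for a small batch with a placeholder taxonomy graph.
model = TaxonomyAwareClassifier(input_dim=32, num_categories=5)
logits = model(torch.randn(4, 32), torch.eye(5))
```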
  • Publication number: 20240113143
    Abstract: Various embodiments of the present disclosure are directed towards an imaging device including a first image sensor element and a second image sensor element respectively comprising a pixel unit disposed within a semiconductor substrate. The first image sensor element is adjacent to the second image sensor element. A first micro-lens overlies the first image sensor element and is laterally shifted from a center of the pixel unit of the first image sensor element by a first lens shift amount. A second micro-lens overlies the second image sensor element and is laterally shifted from a center of the pixel unit of the second image sensor element by a second lens shift amount different from the first lens shift amount.
    Type: Application
    Filed: January 6, 2023
    Publication date: April 4, 2024
    Inventors: Cheng Yu Huang, Wen-Hau Wu, Chun-Hao Chuang, Keng-Yu Chou, Wei-Chieh Chiang, Chih-Kung Chang
  • Publication number: 20240088182
    Abstract: In some embodiments, an image sensor is provided. The image sensor includes a photodetector disposed in a semiconductor substrate. A wave guide filter having a substantially planar upper surface is disposed over the photodetector. The wave guide filter includes a light filter disposed in a light filter grid structure. The light filter includes a first material that is translucent and has a first refractive index. The light filter grid structure includes a second material that is translucent and has a second refractive index less than the first refractive index.
    Type: Application
    Filed: November 21, 2023
    Publication date: March 14, 2024
    Inventors: Cheng Yu Huang, Chun-Hao Chuang, Chien-Hsien Tseng, Kazuaki Hashimoto, Keng-Yu Chou, Wei-Chieh Chiang, Wen-Chien Yu, Ting-Cheng Chang, Wen-Hau Wu, Chih-Kung Chang
  • Patent number: 11923386
    Abstract: Various embodiments of the present disclosure are directed towards an integrated chip. The integrated chip includes a first photodetector disposed in a first pixel region of a semiconductor substrate and a second photodetector disposed in a second pixel region of the semiconductor substrate. The second photodetector is laterally separated from the first photodetector. A first diffuser is disposed along a back-side of the semiconductor substrate and over the first photodetector. A second diffuser is disposed along the back-side of the semiconductor substrate and over the second photodetector. A first midline of the first pixel region and a second midline of the second pixel region are both disposed laterally between the first diffuser and the second diffuser.
    Type: Grant
    Filed: April 24, 2023
    Date of Patent: March 5, 2024
    Assignee: Taiwan Semiconductor Manufacturing Company, Ltd.
    Inventors: Keng-Yu Chou, Chun-Hao Chuang, Kazuaki Hashimoto, Wei-Chieh Chiang, Cheng Yu Huang, Wen-Hau Wu, Chih-Kung Chang
  • Patent number: 11921766
    Abstract: Described herein are technologies related to constructing supplemental content items that summarize electronic landing pages. A sequence to sequence model that is configured to construct supplemental content items is trained based upon a corpus of electronic landing pages and supplemental content items that have been constructed by domain experts, wherein each landing page has a respective supplemental content item assigned thereto. The sequence to sequence model is additionally trained using self critical sequence training, where estimated click through rates of supplemental content items generated by the sequence to sequence model are employed to train the sequence to sequence model.
    Type: Grant
    Filed: September 2, 2022
    Date of Patent: March 5, 2024
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Keng-hao Chang, Ruofei Zhang, John Weston Hughes
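Illustrative sketch (not part of the patent record): a minimal self-critical sequence training (SCST) loss, assuming PyTorch. The click-through-rate values and sequence log-probabilities are hypothetical inputs that would come from a CTR estimator and a sequence-to-sequence decoder; the greedily decoded sequence's estimated CTR serves as the reward baseline.

```python
# A minimal sketch of the SCST-style update, assuming PyTorch.
import torch

def scst_loss(sample_logprob: torch.Tensor,
              sampled_ctr: torch.Tensor,
              greedy_ctr: torch.Tensor) -> torch.Tensor:
    """REINFORCE with the greedy decode's estimated CTR as the baseline."""
    # Reward is how much the sampled ad copy beats the greedy ad copy.
    reward = sampled_ctr - greedy_ctr                 # (batch,)
    # Higher-reward samples get their log-probability pushed up.
    return -(reward.detach() * sample_logprob).mean()

# Example with made-up numbers: the first sampled headline beat the greedy
# decode (reward 0.02); the second did worse (reward -0.01).
loss = scst_loss(sample_logprob=torch.tensor([-3.2, -2.7], requires_grad=True),
                 sampled_ctr=torch.tensor([0.05, 0.02]),
                 greedy_ctr=torch.tensor([0.03, 0.03]))
loss.backward()
```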
  • Patent number: 11603017
    Abstract: The present application describes a system and method for converting a natural language query to a standard query using a sequence-to-sequence neural network. As described herein, when a natural language query is received, the natural language query is converted to a standard query using a sequence-to-sequence model. In some cases, the sequence-to-sequence model is associated with an attention layer. A search using the standard query is performed and various documents may be returned. The documents that result from the search are scored based, at least in part, on a determined conditional entropy of the document. The conditional entropy is determined using the natural language query and the document.
    Type: Grant
    Filed: May 18, 2020
    Date of Patent: March 14, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Keng-hao Chang, Ruofei Zhang, Zi Yin
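Illustrative sketch (not part of the patent record): one way to score a returned document by the conditional entropy of its tokens given the natural-language query. The toy conditional token model below is an assumption for illustration; the patent does not specify this estimator.

```python
# A toy conditional-entropy score for ranking returned documents.
import math

def conditional_entropy(doc_tokens, query_tokens, p_token_given_query):
    """Average -log p(token | query) over the document's tokens."""
    probs = [p_token_given_query(t, query_tokens) for t in doc_tokens]
    return -sum(math.log(p) for p in probs) / len(probs)

# Toy conditional model: tokens that also appear in the query are likelier.
def toy_model(token, query_tokens):
    return 0.2 if token in query_tokens else 0.05

query = ["cheap", "red", "running", "shoes"]
doc = ["red", "running", "shoes", "on", "sale"]
score = conditional_entropy(doc, query, toy_model)  # lower = better match
print(round(score, 3))
```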
  • Patent number: 11551039
    Abstract: Neural network-based categorization can be improved by incorporating graph neural networks that operate on a graph representing the taxonomy of the categories into which a given input is to be categorized by the neural network-based categorization. The output of a graph neural network, operating on a graph representing the taxonomy of categories, can be combined with the output of a neural network operating upon the input to be categorized, such as through an interaction of multidimensional output data, for example a dot product of output vectors. In such a manner, information conveying the explicit relationships between categories, as defined by the taxonomy, can be incorporated into the categorization. To recapture information, incorporate new information, or reemphasize information, a second neural network can also operate upon the input to be categorized, with the output of such a second neural network being merged with the output of the interaction.
    Type: Grant
    Filed: April 28, 2020
    Date of Patent: January 10, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Tianchuan Du, Keng-hao Chang, Ruofei Zhang, Paul Liu
  • Publication number: 20220414134
    Abstract: Described herein are technologies related to constructing supplemental content items that summarize electronic landing pages. A sequence to sequence model that is configured to construct supplemental content items is trained based upon a corpus of electronic landing pages and supplemental content items that have been constructed by domain experts, wherein each landing page has a respective supplemental content item assigned thereto. The sequence to sequence model is additionally trained using self critical sequence training, where estimated click through rates of supplemental content items generated by the sequence to sequence model are employed to train the sequence to sequence model.
    Type: Application
    Filed: September 2, 2022
    Publication date: December 29, 2022
    Inventors: Keng-hao CHANG, Ruofei ZHANG, John Weston HUGHES
  • Patent number: 11449536
    Abstract: Described herein are technologies related to constructing supplemental content items that summarize electronic landing pages. A sequence to sequence model that is configured to construct supplemental content items is trained based upon a corpus of electronic landing pages and supplemental content items that have been constructed by domain experts, wherein each landing page has a respective supplemental content item assigned thereto. The sequence to sequence model is additionally trained using self critical sequence training, where estimated click through rates of supplemental content items generated by the sequence to sequence model are employed to train the sequence to sequence model.
    Type: Grant
    Filed: May 16, 2019
    Date of Patent: September 20, 2022
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Keng-hao Chang, Ruofei Zhang, John Weston Hughes
  • Patent number: 11388134
    Abstract: A wireless network-based voice communication security protection method, which enables VoWiFi (Voice over Wi-Fi) to verify and prevent potential risks in communication, and secures the environment of network communications that can be verified by a user device. A real-time user interface indicates security and quality of the current network call and provides advice on when to cancel a call. A telecommunications provider side interface checks if the user's network communication environment is safe, and provides real-time recommendations to the user regarding the security status of the call. The user device side self-check interface and the telecommunications provider side detection interface simultaneously detect whether or not the user's network communication environment is secure.
    Type: Grant
    Filed: May 28, 2020
    Date of Patent: July 12, 2022
    Assignee: NATIONAL CHENG KUNG UNIVERSITY
    Inventors: Jung-Shian Li, I-Hsien Liu, Keng-Hao Chang, Kuan-Chu Lu
  • Patent number: 11315231
    Abstract: An industrial image inspection method includes: generating a test latent vector of a test image; measuring a distance between a training latent vector of a normal image and the test latent vector of the test image; and judging whether the test image is normal or defective according to the distance between the training latent vector of the normal image and the test latent vector of the test image.
    Type: Grant
    Filed: December 12, 2018
    Date of Patent: April 26, 2022
    Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventors: Yu-Ting Lai, Jwu-Sheng Hu, Ya-Hui Tsai, Keng-Hao Chang
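Illustrative sketch (not part of the patent record): the decision step described in the abstract above, using a stored latent vector from a normal training image, a Euclidean distance, and a placeholder threshold. The encoder that would produce these latent vectors is omitted.

```python
# A minimal latent-distance inspection decision.
import numpy as np

def judge_image(test_latent: np.ndarray,
                normal_latent: np.ndarray,
                threshold: float = 1.0) -> str:
    """Return 'normal' or 'defective' from the latent-space distance."""
    distance = np.linalg.norm(test_latent - normal_latent)
    return "defective" if distance > threshold else "normal"

# Usage with made-up 4-dimensional latent vectors.
normal_latent = np.array([0.1, 0.4, -0.2, 0.7])
print(judge_image(np.array([0.1, 0.5, -0.2, 0.6]), normal_latent))  # normal
print(judge_image(np.array([2.0, -1.5, 1.1, 0.0]), normal_latent))  # defective
```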
  • Patent number: 11301732
    Abstract: A computer-implemented technique uses one or more neural networks to identify at least one item name associated with an input image using a multi-modal fusion approach. The technique is said to be multi-modal because it collects and processes different kinds of evidence regarding each detected item name. The technique is said to adopt a fusion approach because it fuses the multi-modal evidence into an output conclusion that identifies at least one item name associated with the input image. In one example, a first mode collects evidence by identifying and analyzing regions in the input image that are likely to include item name-related information. A second mode collects and analyzes any text that appears as part of the input image itself. A third mode collects and analyzes text that is not included in the input image itself, but is nonetheless associated with the input image.
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: April 12, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Changbo Hu, Qun Li, Ruofei Zhang, Keng-hao Chang
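Illustrative sketch (not part of the patent record): one simple way to fuse the three evidence modes named in the abstract above (region detections, text within the image, and text associated with the image) into a single item-name conclusion. The per-mode scores and the weighted vote are hypothetical; the actual fusion is left to the patent's claims.

```python
# A toy weighted fusion of per-mode item-name candidates.
from collections import defaultdict

def fuse_item_names(region_scores, ocr_scores, context_scores,
                    weights=(0.5, 0.3, 0.2)):
    """Weighted sum of per-mode candidate scores; the highest total wins."""
    fused = defaultdict(float)
    for w, mode in zip(weights, (region_scores, ocr_scores, context_scores)):
        for name, score in mode.items():
            fused[name] += w * score
    return max(fused, key=fused.get)

best = fuse_item_names(
    region_scores={"handbag": 0.8, "backpack": 0.2},   # image-region mode
    ocr_scores={"handbag": 0.4, "wallet": 0.6},        # in-image text mode
    context_scores={"handbag": 0.9},                   # associated-text mode
)
print(best)  # handbag
```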
  • Patent number: 11263487
    Abstract: A computer-implemented technique uses a generative adversarial network (GAN) to jointly train a generator neural network (“generator”) and a discriminator neural network (“discriminator”). Unlike traditional GAN designs, the discriminator performs the dual role of: (a) determining one or more attribute values associated with an object depicted in an input image fed to the discriminator; and (b) determining whether the input image fed to the discriminator is real or synthesized by the generator. Also unlike traditional GAN designs, an image classifier can make use of a model produced by the GAN's discriminator. The generator receives generator input information that includes a conditional input image and one or more conditional values that express desired characteristics of the generator output image. The discriminator receives the conditional input image in conjunction with a discriminator input image, which corresponds to either the generator output image or a real image.
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: March 1, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Qun Li, Changbo Hu, Keng-hao Chang, Ruofei Zhang
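Illustrative sketch (not part of the patent record): a discriminator head structure that plays the dual role described above, assuming PyTorch. Layer sizes and names are placeholders, and a real implementation would use convolutional layers rather than the flattened linear trunk shown here.

```python
# A minimal dual-role discriminator: attribute prediction plus real/fake.
import torch
import torch.nn as nn

class DualRoleDiscriminator(nn.Module):
    def __init__(self, image_dim=64 * 64 * 3, num_attribute_values=10):
        super().__init__()
        # Shared trunk over the conditional image concatenated with the
        # discriminator input image (real image or generator output).
        self.trunk = nn.Sequential(nn.Linear(2 * image_dim, 256), nn.ReLU())
        self.attribute_head = nn.Linear(256, num_attribute_values)
        self.real_fake_head = nn.Linear(256, 1)

    def forward(self, conditional_image, discriminator_image):
        h = self.trunk(torch.cat([conditional_image.flatten(1),
                                  discriminator_image.flatten(1)], dim=1))
        return self.attribute_head(h), self.real_fake_head(h)

disc = DualRoleDiscriminator()
cond = torch.randn(2, 3, 64, 64)
real_or_fake = torch.randn(2, 3, 64, 64)
attr_logits, rf_logit = disc(cond, real_or_fake)  # shapes (2, 10) and (2, 1)
```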
  • Patent number: 11250042
    Abstract: A taxonomy of categories, attributes, and values can be conflated with new data triplets by identifying one or more conflation candidates among the attribute-value pairs within a category of the taxonomy that matches the category of the data triplet, and determining a suitable merge action for conflating the data triplet with each conflation candidate. The task of determining merge actions may be cast as a classification problem, and may be solved by an ensemble classifier.
    Type: Grant
    Filed: June 6, 2018
    Date of Patent: February 15, 2022
    Assignee: Microsoft Technology Licensing LLC
    Inventors: Keng-hao Chang, Srinivasa Reddy Neerudu, Sujith Vishwajith, Ruofei Zhang
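Illustrative sketch (not part of the patent record): the conflation flow described above cast as voting among ensemble members. The three rule-based voters stand in for trained classifiers, and the tiny taxonomy and merge-action labels are made up for illustration.

```python
# A toy ensemble vote over merge actions for taxonomy conflation candidates.
from collections import Counter

taxonomy = {"shoes": [("color", "red"), ("color", "crimson"), ("size", "9")]}

def merge_action(triplet, candidate):
    """Each ensemble member votes: 'merge-value', 'add-new', or 'ignore'."""
    _, attr, value = triplet
    cand_attr, cand_value = candidate
    votes = [
        # Voter 1: word overlap between the values, same attribute required.
        ("merge-value" if set(value.split()) & set(cand_value.split())
         else "add-new") if attr == cand_attr else "ignore",
        # Voter 2: candidate value contained in the new value.
        "merge-value" if attr == cand_attr and cand_value in value else "add-new",
        # Voter 3: attribute mismatch means the pair is irrelevant.
        "add-new" if attr == cand_attr else "ignore",
    ]
    return Counter(votes).most_common(1)[0][0]

triplet = ("shoes", "color", "deep red")
for candidate in taxonomy[triplet[0]]:   # conflation candidates in the category
    print(candidate, "->", merge_action(triplet, candidate))
```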
  • Patent number: 11163940
    Abstract: Technologies are described herein that relate to identifying supplemental content items that are related to objects captured in images of webpages. A computing system receives an indication that a client computing device has a webpage displayed thereon that includes an image. The image is provided to a first DNN that is configured to identify a portion of the image that includes an object of a type from amongst a plurality of predefined types. Once the portion of the image is identified, the portion of the image is provided to a plurality of DNNs, with each of the DNNs configured to output a word or phrase that represents a value of a respective attribute of the object. A sequence of words or phrases output by the plurality of DNNs is provided to a search computing system, which identifies a supplemental content item based upon the sequence of words or phrases.
    Type: Grant
    Filed: May 25, 2019
    Date of Patent: November 2, 2021
    Assignee: Microsoft Technology Licensing LLC
    Inventors: Qun Li, Changbo Hu, Keng-hao Chang, Ruofei Zhang
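Illustrative sketch (not part of the patent record): the pipeline shape described in the abstract above, with stub functions standing in for the first (object-region) DNN, the per-attribute DNNs, and the downstream search call.

```python
# A stubbed-out version of the detect-then-describe-then-search pipeline.
def detect_object_region(image):
    """Stand-in for the first DNN; returns a cropped region of interest."""
    return image  # pretend the whole image is the region

def build_query(image, attribute_models):
    region = detect_object_region(image)
    # Each attribute DNN outputs one word or phrase describing the object.
    words = [model(region) for model in attribute_models]
    return " ".join(words)

attribute_models = [
    lambda region: "women's",   # audience model
    lambda region: "red",       # color model
    lambda region: "handbag",   # product-type model
]
query = build_query(image="raw-pixels-placeholder",
                    attribute_models=attribute_models)
print(query)  # "women's red handbag", passed to supplemental-content search
```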
  • Publication number: 20210334606
    Abstract: Neural network-based categorization can be improved by incorporating graph neural networks that operate on a graph representing the taxonomy of the categories into which a given input is to be categorized by the neural network-based categorization. The output of a graph neural network, operating on a graph representing the taxonomy of categories, can be combined with the output of a neural network operating upon the input to be categorized, such as through an interaction of multidimensional output data, for example a dot product of output vectors. In such a manner, information conveying the explicit relationships between categories, as defined by the taxonomy, can be incorporated into the categorization. To recapture information, incorporate new information, or reemphasize information, a second neural network can also operate upon the input to be categorized, with the output of such a second neural network being merged with the output of the interaction.
    Type: Application
    Filed: April 28, 2020
    Publication date: October 28, 2021
    Inventors: Tianchuan DU, Keng-hao CHANG, Ruofei ZHANG, Paul LIU
  • Publication number: 20210303927
    Abstract: A computer-implemented technique uses a generative adversarial network (GAN) to jointly train a generator neural network (“generator”) and a discriminator neural network (“discriminator”). Unlike traditional GAN designs, the discriminator performs the dual role of: (a) determining one or more attribute values associated with an object depicted in an input image fed to the discriminator; and (b) determining whether the input image fed to the discriminator is real or synthesized by the generator. Also unlike traditional GAN designs, an image classifier can make use of a model produced by the GAN's discriminator. The generator receives generator input information that includes a conditional input image and one or more conditional values that express desired characteristics of the generator output image. The discriminator receives the conditional input image in conjunction with a discriminator input image, which corresponds to either the generator output image or a real image.
    Type: Application
    Filed: March 25, 2020
    Publication date: September 30, 2021
    Inventors: Qun LI, Changbo HU, Keng-hao CHANG, Ruofei ZHANG
  • Publication number: 20210303939
    Abstract: A computer-implemented technique uses one or more neural networks to identify at least one item name associated with an input image using a multi-modal fusion approach. The technique is said to be multi-modal because it collects and processes different kinds of evidence regarding each detected item name. The technique is said to adopt a fusion approach because it fuses the multi-modal evidence into an output conclusion that identifies at least one item name associated with the input image. In one example, a first mode collects evidence by identifying and analyzing regions in the input image that are likely to include item name-related information. A second mode collects and analyzes any text that appears as part of the input image itself. A third mode collects and analyzes text that is not included in the input image itself, but is nonetheless associated with the input image.
    Type: Application
    Filed: March 25, 2020
    Publication date: September 30, 2021
    Inventors: Changbo HU, Qun LI, Ruofei ZHANG, Keng-hao CHANG
  • Publication number: 20210067478
    Abstract: A wireless network-based voice communication security protection method, which enables VoWiFi (Voice over Wi-Fi) to verify and prevent potential risks in communication, and secures the environment of network communications that can be verified by a user device. A real-time user interface indicates security and quality of the current network call and provides advice on when to cancel a call. A telecommunications provider side interface checks if the user's network communication environment is safe, and provides real-time recommendations to the user regarding the security status of the call. The user device side self-check interface and the telecommunications provider side detection interface simultaneously detect whether or not the user's network communication environment is secure.
    Type: Application
    Filed: May 28, 2020
    Publication date: March 4, 2021
    Inventors: Jung-Shian LI, I-Hsien LIU, Keng-Hao CHANG, Kuan-Chu LU
  • Publication number: 20200372103
    Abstract: Technologies are described herein that relate to identifying supplemental content items that are related to objects captured in images of webpages. A computing system receives an indication that a client computing device has a webpage displayed thereon that includes an image. The image is provided to a first DNN that is configured to identify a portion of the image that includes an object of a type from amongst a plurality of predefined types. Once the portion of the image is identified, the portion of the image is provided to a plurality of DNNs, with each of the DNNs configured to output a word or phrase that represents a value of a respective attribute of the object. A sequence of words or phrases output by the plurality of DNNs is provided to a search computing system, which identifies a supplemental content item based upon the sequence of words or phrases.
    Type: Application
    Filed: May 25, 2019
    Publication date: November 26, 2020
    Inventors: Qun LI, Changbo HU, Keng-hao CHANG, Ruofei ZHANG