Patents by Inventor Ching-Ling Huang

Ching-Ling Huang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250151366
    Abstract: A method for fabricating a semiconductor device includes the steps of first providing a substrate having a core region and an input/output (I/O) region and then forming a first metal gate on the core region and a second metal gate on the I/O region. Preferably, the first metal gate includes a first gate dielectric layer, the second metal gate includes a second gate dielectric layer, and the first gate dielectric layer and the second gate dielectric layer have different shapes such that the first gate dielectric layer includes an I-shape and the second gate dielectric layer includes a U-shape.
    Type: Application
    Filed: December 6, 2023
    Publication date: May 8, 2025
    Applicant: UNITED MICROELECTRONICS CORP.
    Inventors: Zi-Ting Huang, Ching-Ling Lin, Wen-An Liang
  • Publication number: 20250110307
    Abstract: An optical system affixed to an electronic apparatus is provided, including a first optical module, a second optical module, and a third optical module. The first optical module is configured to adjust the moving direction of a first light from a first moving direction to a second moving direction, wherein the first moving direction is not parallel to the second moving direction. The second optical module is configured to receive the first light moving in the second moving direction. The first light reaches the third optical module via the first optical module and the second optical module in sequence. The third optical module includes a first photoelectric converter configured to transform the first light into a first image signal.
    Type: Application
    Filed: December 12, 2024
    Publication date: April 3, 2025
    Inventors: Chao-Chang HU, Chih-Wei WENG, Chia-Che WU, Chien-Yu KAO, Hsiao-Hsin HU, He-Ling CHANG, Chao-Hsi WANG, Chen-Hsien FAN, Che-Wei CHANG, Mao-Gen JIAN, Sung-Mao TSAI, Wei-Jhe SHEN, Yung-Ping YANG, Sin-Hong LIN, Tzu-Yu CHANG, Sin-Jhong SONG, Shang-Yu HSU, Meng-Ting LIN, Shih-Wei HUNG, Yu-Huai LIAO, Mao-Kuo HSU, Hsueh-Ju LU, Ching-Chieh HUANG, Chih-Wen CHIANG, Yu-Chiao LO, Ying-Jen WANG, Shu-Shan CHEN, Che-Hsiang CHIU
  • Publication number: 20250080756
    Abstract: A method and apparatus for inter prediction in a video coding system are disclosed. According to the method, one or more model parameters of one or more cross-color models for the second-color block are determined. Then, cross-color predictors for the second-color block are determined, wherein one cross-color predictor value for the second-color block is generated for each second-color pixel of the second-color block by applying said one or more cross-color models to corresponding reconstructed or predicted first-color pixels. The input data associated with the second-color block is encoded using prediction data comprising the cross-color predictors for the second-color block at the encoder side, or the input data associated with the second-color block is decoded using the prediction data comprising the cross-color predictors for the second-color block at the decoder side.
    Type: Application
    Filed: December 20, 2022
    Publication date: March 6, 2025
    Inventors: Man-Shu CHIANG, Olena CHUBACH, Yu-Ling HSIAO, Chia-Ming TSAI, Chun-Chia CHEN, Chih-Wei HSU, Tzu-Der CHUANG, Ching-Yeh CHEN, Yu-Wen HUANG
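
To make the cross-color prediction step above concrete, here is a minimal Python sketch of one plausible reading of it: a linear model is fitted to reference sample pairs and then applied to the reconstructed first-color pixels of the block. The least-squares fit, the 8-bit sample range, and all function names are illustrative assumptions, not the patented derivation.

```python
import numpy as np

def fit_cross_color_model(ref_first, ref_second):
    """Least-squares fit of a linear model: second ~ a * first + b,
    from reference (e.g. neighbouring reconstructed) sample pairs."""
    a, b = np.polyfit(ref_first.astype(float), ref_second.astype(float), 1)
    return a, b

def cross_color_predict(first_block, model):
    """Apply the cross-color model to every reconstructed or predicted
    first-color pixel to obtain the second-color predictor."""
    a, b = model
    pred = a * first_block.astype(float) + b
    return np.clip(np.rint(pred), 0, 255).astype(np.uint8)  # assume 8-bit samples

# Toy usage: neighbouring samples drive the model, then the block is predicted.
ref_y = np.array([60, 80, 100, 120])        # first-color (e.g. luma) references
ref_c = np.array([70, 82, 95, 108])         # co-located second-color references
model = fit_cross_color_model(ref_y, ref_c)
block_y = np.array([[64, 96], [112, 128]])  # reconstructed first-color block
print(cross_color_predict(block_y, model))
```
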
  • Publication number: 20250063155
    Abstract: A method and apparatus for inter prediction in a video coding system are disclosed. According to the method, input data associated with a current block comprising at least one colour block are received. A blending predictor is determined according to a weighted sum of at least two candidate predictions generated based on one or more first hypotheses of prediction, one or more second hypotheses of prediction, or both. The first hypotheses of prediction are generated based on one or more intra prediction modes comprising a DC mode, a planar mode or at least one angular mode. The second hypotheses of prediction are generated based on one or more cross-component modes and a collocated block of said at least one colour block. The input data associated with the colour block is encoded or decoded using the blending predictor.
    Type: Application
    Filed: December 20, 2022
    Publication date: February 20, 2025
    Inventors: Man-Shu CHIANG, Olena CHUBACH, Chia-Ming TSAI, Yu-Ling HSIAO, Chun-Chia CHEN, Chih-Wei HSU, Tzu-Der CHUANG, Ching-Yeh CHEN, Yu-Wen HUANG
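
The blending predictor described above can be illustrated with a short, hedged sketch: two candidate predictions (an intra hypothesis and a cross-component hypothesis) are combined as a normalized weighted sum. The specific weights and the clipping to 8-bit samples are assumptions for illustration only.

```python
import numpy as np

def blend_predictions(hypotheses, weights):
    """Blending predictor: weighted sum of candidate predictions
    (e.g. an intra-mode hypothesis and a cross-component hypothesis)."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()                 # normalise so weights sum to 1
    stacked = np.stack([h.astype(float) for h in hypotheses])
    blended = np.tensordot(weights, stacked, axes=1)  # per-pixel weighted sum
    return np.clip(np.rint(blended), 0, 255).astype(np.uint8)

# Toy usage: blend a planar-like intra hypothesis with a cross-component one.
intra_hyp = np.array([[100, 104], [108, 112]])  # from a DC/planar/angular mode
cross_hyp = np.array([[ 96, 102], [110, 118]])  # from a cross-component mode
print(blend_predictions([intra_hyp, cross_hyp], weights=[3, 1]))
```
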
  • Publication number: 20250056008
    Abstract: A video coding system that uses multiple models to predict chroma samples is provided. The video coding system receives data for a block of pixels to be encoded or decoded as a current block of a current picture of a video. The system constructs two or more chroma prediction models based on luma and chroma samples neighboring the current block. The system applies the two or more chroma prediction models to incoming or reconstructed luma samples of the current block to produce two or more model predictions. The system computes predicted chroma samples by combining the two or more model predictions. The system uses the predicted chroma samples to reconstruct chroma samples of the current block or to encode the current block.
    Type: Application
    Filed: December 20, 2022
    Publication date: February 13, 2025
    Inventors: Yu-Ling HSIAO, Olena CHUBACH, Chun-Chia CHEN, Chia-Ming TSAI, Man-Shu CHIANG, Chih-Wei HSU, Tzu-Der CHUANG, Ching-Yeh CHEN, Yu-Wen HUANG
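
A minimal sketch of the multi-model idea in this abstract, under the assumption that the two chroma prediction models are linear, are fitted on neighbouring samples split by a luma threshold, and are combined by averaging their predictions; the split rule and the averaging are illustrative choices, not the claimed method.

```python
import numpy as np

def fit_linear(x, y):
    a, b = np.polyfit(x.astype(float), y.astype(float), 1)
    return a, b

def predict_chroma_multi_model(luma_block, nbr_luma, nbr_chroma):
    """Fit two chroma-from-luma models on the neighbouring samples
    (split here by a luma threshold) and combine their predictions."""
    thr = nbr_luma.mean()                              # simple split criterion
    lo, hi = nbr_luma <= thr, nbr_luma > thr
    models = [fit_linear(nbr_luma[m], nbr_chroma[m]) for m in (lo, hi)]
    preds = [a * luma_block.astype(float) + b for a, b in models]
    combined = np.mean(preds, axis=0)                  # combine the model predictions
    return np.clip(np.rint(combined), 0, 255).astype(np.uint8)

nbr_luma = np.array([40, 60, 80, 150, 170, 190])
nbr_chroma = np.array([90, 95, 100, 120, 124, 128])
luma_block = np.array([[50, 70], [160, 180]])
print(predict_chroma_multi_model(luma_block, nbr_luma, nbr_chroma))
```
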
  • Publication number: 20250039356
    Abstract: A video coding system that uses multiple models to predict chroma samples is provided. The video coding system receives data for a block of pixels to be encoded or decoded as a current block of a current picture of a video. The video coding system derives multiple prediction linear models based on luma and chroma samples neighboring the current block. The video coding system constructs a composite linear model based on the multiple prediction linear models. The video coding system applies the composite linear model to incoming or reconstructed luma samples of the current block to generate a chroma predictor of the current block. The video coding system uses the chroma predictor to reconstruct chroma samples of the current block or to encode the current block.
    Type: Application
    Filed: December 29, 2022
    Publication date: January 30, 2025
    Inventors: Chia-Ming TSAI, Chun-Chia CHEN, Yu-Ling HSIAO, Man-Shu CHIANG, Chih-Wei HSU, Olena CHUBACH, Tzu-Der CHUANG, Ching-Yeh CHEN, Yu-Wen HUANG
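
For contrast with the previous sketch, here the multiple linear models are merged into a single composite model by combining their parameters before the model is applied to the luma samples. The parameter averaging is only one plausible way to "construct a composite linear model" and is an assumption of this sketch.

```python
import numpy as np

def composite_model(models, weights=None):
    """Build one composite linear model (a, b) by combining the
    parameters of several prediction linear models."""
    models = np.asarray(models, dtype=float)  # rows of (slope, offset)
    if weights is None:
        weights = np.ones(len(models)) / len(models)
    a, b = weights @ models                   # weighted parameter average
    return a, b

# Two linear models derived from neighbouring luma/chroma samples (toy values).
m1, m2 = (0.45, 60.0), (0.55, 52.0)
a, b = composite_model([m1, m2])
luma_block = np.array([[64, 96], [128, 160]], dtype=float)
chroma_pred = np.clip(np.rint(a * luma_block + b), 0, 255).astype(np.uint8)
print(chroma_pred)
```
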
  • Patent number: 12204163
    Abstract: An optical system affixed to an electronic apparatus is provided, including a first optical module, a second optical module, and a third optical module. The first optical module is configured to adjust the moving direction of a first light from a first moving direction to a second moving direction, wherein the first moving direction is not parallel to the second moving direction. The second optical module is configured to receive the first light moving in the second moving direction. The first light reaches the third optical module via the first optical module and the second optical module in sequence. The third optical module includes a first photoelectric converter configured to transform the first light into a first image signal.
    Type: Grant
    Filed: February 5, 2024
    Date of Patent: January 21, 2025
    Assignee: TDK TAIWAN CORP.
    Inventors: Chao-Chang Hu, Chih-Wei Weng, Chia-Che Wu, Chien-Yu Kao, Hsiao-Hsin Hu, He-Ling Chang, Chao-Hsi Wang, Chen-Hsien Fan, Che-Wei Chang, Mao-Gen Jian, Sung-Mao Tsai, Wei-Jhe Shen, Yung-Ping Yang, Sin-Hong Lin, Tzu-Yu Chang, Sin-Jhong Song, Shang-Yu Hsu, Meng-Ting Lin, Shih-Wei Hung, Yu-Huai Liao, Mao-Kuo Hsu, Hsueh-Ju Lu, Ching-Chieh Huang, Chih-Wen Chiang, Yu-Chiao Lo, Ying-Jen Wang, Shu-Shan Chen, Che-Hsiang Chiu
  • Publication number: 20250024072
    Abstract: A method and apparatus for a video coding system that uses intra prediction based on a cross-colour linear model are disclosed. According to the method, model parameters for a first-colour predictor model are determined and the first-colour predictor model provides a predicted first-colour pixel value according to a combination of at least two corresponding reconstructed second-colour pixel values. According to another method, the first-colour predictor model provides a predicted first-colour pixel value based on a second-degree model or higher of one or more corresponding reconstructed second-colour pixel values. First-colour predictors for the current first-colour block are determined according to the first-colour predictor model. The input data are then encoded at the encoder side or decoded at the decoder side using the first-colour predictors.
    Type: Application
    Filed: October 26, 2022
    Publication date: January 16, 2025
    Inventors: Olena CHUBACH, Ching-Yeh CHEN, Tzu-Der CHUANG, Chun-Chia CHEN, Man-Shu CHIANG, Chia-Ming TSAI, Yu-Ling HSIAO, Chih-Wei HSU, Yu-Wen HUANG
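
A hedged sketch of the "second-degree model or higher" variant mentioned above: a quadratic cross-colour model is fitted to reference pairs and evaluated on the block's reconstructed second-colour (here, luma) samples. The use of NumPy's polynomial fit and the toy sample values are assumptions.

```python
import numpy as np

def fit_quadratic_cross_color(ref_second, ref_first):
    """Fit a second-degree model: first ~ a*second^2 + b*second + c,
    from reference sample pairs (a degree-two-or-higher predictor model)."""
    return np.polyfit(ref_second.astype(float), ref_first.astype(float), 2)

ref_luma = np.array([40, 80, 120, 160, 200])    # reconstructed second-colour refs
ref_chroma = np.array([70, 88, 101, 110, 115])  # co-located first-colour refs
coeffs = fit_quadratic_cross_color(ref_luma, ref_chroma)

block_luma = np.array([[60, 100], [140, 180]], dtype=float)
chroma_pred = np.clip(np.rint(np.polyval(coeffs, block_luma)), 0, 255).astype(np.uint8)
print(chroma_pred)
```
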
  • Patent number: 11399048
    Abstract: A system, computer-readable medium, and a method including receiving, by a processor, sensor data related to a physical asset; obtaining, by the processor, at least a stored model of the physical asset from a data storage device; generating, by the processor, a visualization representation of the physical asset based on an integration of the sensor data related to the physical asset and the stored model of the physical asset; and presenting, by the processor in a shared virtual workspace accessible by a first user entity and at least one second user entity located remotely from the first user entity, the visualization representation of the physical asset.
    Type: Grant
    Filed: May 31, 2017
    Date of Patent: July 26, 2022
    Assignee: GENERAL ELECTRIC COMPANY
    Inventors: Ching-Ling Huang, Yoshifumi Nishida
  • Publication number: 20210126956
    Abstract: A system, computer-readable medium, and a method including receiving, by a processor, sensor data related to a physical asset; obtaining, by the processor, at least a stored model of the physical asset from a data storage device; generating, by the processor, a visualization representation of the physical asset based on an integration of the sensor data related to the physical asset and the stored model of the physical asset; and presenting, by the processor in a shared virtual workspace accessible by a first user entity and at least one second user entity located remotely from the first user entity, the visualization representation of the physical asset.
    Type: Application
    Filed: May 31, 2017
    Publication date: April 29, 2021
    Inventors: Ching-Ling HUANG, Yoshifumi NISHIDA
  • Patent number: 10682677
    Abstract: A three-dimensional model data store may contain a three-dimensional model of an industrial asset, including points of interest associated with the industrial asset. An inspection plan data store may contain an inspection plan for the industrial asset, including a path of movement for an autonomous inspection robot. An industrial asset inspection platform may receive sensor data from an autonomous inspection robot indicating characteristics of the industrial asset and determine a current location of the autonomous inspection robot along the path of movement in the inspection plan along with current context information. A forward simulation of movement for the autonomous inspection robot may be executed from the current location, through a pre-determined time window, to determine a difference between the path of movement in the inspection plan and the forward simulation of movement along with future context information.
    Type: Grant
    Filed: May 10, 2017
    Date of Patent: June 16, 2020
    Assignee: GENERAL ELECTRIC COMPANY
    Inventors: Shiraj Sen, Steven Gray, Nicholas Abate, Roberto Silva Filho, Ching-Ling Huang, Mauricio Castillo-Effen, Ghulam Ali Baloch, Raju Venkataramana, Douglas Forman
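
The forward-simulation step can be pictured with a small sketch: the robot's motion is propagated from its current location through a time window and compared against the planned inspection path. The constant-velocity motion model and the maximum-distance deviation metric are simplifying assumptions, not the patented technique.

```python
import numpy as np

def forward_simulate(position, velocity, dt, horizon):
    """Constant-velocity forward simulation of the robot from its
    current location over a pre-determined time window."""
    steps = int(horizon / dt)
    return np.array([position + velocity * dt * (k + 1) for k in range(steps)])

def path_deviation(simulated, planned):
    """Largest distance between the forward-simulated motion and the
    corresponding portion of the planned inspection path."""
    n = min(len(simulated), len(planned))
    return float(np.max(np.linalg.norm(simulated[:n] - planned[:n], axis=1)))

# Toy usage: planned straight path vs. a forward simulation that drifts in y.
planned = np.array([[k, 0.0, 0.0] for k in range(1, 6)])  # x, y, z waypoints
simulated = forward_simulate(np.zeros(3), np.array([1.0, 0.2, 0.0]), dt=1.0, horizon=5.0)
print(round(path_deviation(simulated, planned), 2))        # deviation grows with drift
```
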
  • Patent number: 10633093
    Abstract: Provided are systems and methods for monitoring an asset via an autonomous model-driven inspection. In an example, the method may include storing an inspection plan including a virtually created three-dimensional (3D) model of a travel path with respect to a virtual asset that is created in virtual space, converting the virtually created 3D model of the travel path about the virtual asset into a physical travel path about a physical asset corresponding to the virtual asset, autonomously controlling vertical and lateral movement of an unmanned robot in three dimensions with respect to the physical asset based on the physical travel path, capturing data at one or more regions of interest, and storing information concerning the captured data about the asset.
    Type: Grant
    Filed: May 5, 2017
    Date of Patent: April 28, 2020
    Assignee: GENERAL ELECTRIC COMPANY
    Inventors: Mauricio Castillo-Effen, Ching-Ling Huang, Raju Venkataramana, Roberto Silva Filho, Alex Tepper, Steven Gray, Yakov Polishchuk, Viktor Holovashchenko, Charles Theurer, Yang Zhao, Ghulam Ali Baloch, Douglas Forman, Shiraj Sen, Huan Tan, Arpit Jain
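
A minimal sketch of converting a virtually created travel path into a physical travel path, assuming the conversion is a rigid transform (scale, yaw rotation, translation) between the virtual asset's frame and the physical asset's frame; the actual registration method is not specified in the abstract.

```python
import numpy as np

def virtual_to_physical(waypoints, scale, yaw_deg, origin):
    """Map a travel path defined around the virtual asset into the
    physical asset's frame with a scale, yaw rotation, and translation."""
    yaw = np.radians(yaw_deg)
    rot = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                    [np.sin(yaw),  np.cos(yaw), 0.0],
                    [0.0,          0.0,         1.0]])
    return scale * (np.asarray(waypoints, float) @ rot.T) + np.asarray(origin, float)

# Toy usage: a square path drawn around the virtual asset, re-anchored on site.
virtual_path = [[0, 0, 5], [10, 0, 5], [10, 10, 5], [0, 10, 5]]
physical_path = virtual_to_physical(virtual_path, scale=1.0, yaw_deg=90.0,
                                    origin=[100.0, 200.0, 0.0])
print(np.round(physical_path, 1))
```
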
  • Patent number: 10565994
    Abstract: A method, computer-readable medium, and system including a speech-to-text module to receive an input of speech including one or more words generated by a human and to output data including text, sentiment information, and other parameters corresponding to the speech input; a processing module, such as an artificial intelligence module, to generate a reply to the speech input, the reply including a textual component, sentimental information associated with the textual component, and contextual information associated with the textual component; and a text-to-speech module to receive the textual component, sentimental information, and contextual information and to generate, based on the received textual component and its associated sentimental information and contextual information, a speech output including one or more spoken words, the spoken words to be presented with at least one of a pace, a tone, a volume, and an emphasis representative of the sentimental information and contextual information associated with the textual component.
    Type: Grant
    Filed: November 30, 2017
    Date of Patent: February 18, 2020
    Assignee: General Electric Company
    Inventors: Ching-Ling Huang, Raju Venkataramana, Yoshifumi Nishida
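
The pipeline's last stage (mapping sentiment and context to speech delivery) can be sketched as a simple rule table that picks pace, tone, volume, and emphasis for the reply text. The specific rules, labels, and data structures here are illustrative assumptions, not the patented modules.

```python
from dataclasses import dataclass

@dataclass
class Reply:
    text: str       # textual component of the generated reply
    sentiment: str  # sentimental information, e.g. "positive" / "negative"
    context: str    # contextual information, e.g. "urgent" / "casual"

def prosody_for(reply: Reply) -> dict:
    """Choose pace, tone, volume, and emphasis from the sentiment and
    context attached to the reply's textual component."""
    prosody = {"pace": 1.0, "tone": "neutral", "volume": 1.0, "emphasis": []}
    if reply.sentiment == "positive":
        prosody["tone"] = "warm"
    elif reply.sentiment == "negative":
        prosody["tone"] = "calm"
        prosody["pace"] = 0.9
    if reply.context == "urgent":
        prosody["pace"] = 1.2
        prosody["volume"] = 1.3
        prosody["emphasis"] = reply.text.split()[:2]  # stress the opening words
    return prosody

print(prosody_for(Reply("Please shut down turbine three now", "negative", "urgent")))
```
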
  • Publication number: 20190226456
    Abstract: A system and method for evaluating loads of a potential wind farm site for multiple wind scenarios includes (a) receiving, via a computer server, site data of the potential wind farm site representing at least one wind scenario for at least one wind turbine at the potential wind farm site. Further, the method includes (b) selecting, via a user interface, a wind farm configuration based on the at least one wind scenario. The method also includes (c) selecting, via the user interface, a time period for the at least one wind scenario. Thus, the method includes (d) automatically generating, via the computer server, a mechanical loads analysis for the selected wind farm configuration and the time period.
    Type: Application
    Filed: January 19, 2018
    Publication date: July 25, 2019
    Inventors: Mark Mitchell Korfein, Daniel Leathem, Ching-Ling Huang
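
Steps (a) through (d) of this workflow can be sketched as one function that receives the site data, defers configuration and time-period selection to user-supplied callables, and then triggers the loads analysis; every name and the stand-in callables below are hypothetical.

```python
def evaluate_site_loads(site_data, choose_configuration, choose_period, run_loads_analysis):
    """Steps (a)-(d): receive site data for the wind scenarios, let the user
    pick a configuration and a time period, then generate the loads analysis."""
    scenarios = site_data["wind_scenarios"]           # (a) received site data
    configuration = choose_configuration(scenarios)   # (b) user-selected configuration
    period = choose_period(scenarios)                 # (c) user-selected time period
    return run_loads_analysis(configuration, period)  # (d) automated loads analysis

# Toy usage with stand-in callables for the user interface and the solver.
site = {"wind_scenarios": [{"mean_speed": 8.5, "turbulence": 0.12}]}
report = evaluate_site_loads(
    site,
    choose_configuration=lambda s: {"turbine": "2.5-120", "layout": "grid"},
    choose_period=lambda s: ("2018-01-01", "2018-12-31"),
    run_loads_analysis=lambda cfg, period: {"config": cfg, "period": period, "max_load": "ok"},
)
print(report)
```
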
  • Publication number: 20190164554
    Abstract: A method, computer-readable medium, and system including a speech-to-text module to receive an input of speech including one or more words generated by a human and to output data including text, sentiment information, and other parameters corresponding to the speech input; a processing module, such as an artificial intelligence module, to generate a reply to the speech input, the reply including a textual component, sentimental information associated with the textual component, and contextual information associated with the textual component; and a text-to-speech module to receive the textual component, sentimental information, and contextual information and to generate, based on the received textual component and its associated sentimental information and contextual information, a speech output including one or more spoken words, the spoken words to be presented with at least one of a pace, a tone, a volume, and an emphasis representative of the sentimental information and contextual information associated with the textual component.
    Type: Application
    Filed: November 30, 2017
    Publication date: May 30, 2019
    Inventors: Ching-Ling HUANG, Raju VENKATARAMANA, Yoshifumi NISHIDA
  • Publication number: 20180330027
    Abstract: A three-dimensional model data store may contain a three-dimensional model of an industrial asset, including points of interest associated with the industrial asset. An inspection plan data store may contain an inspection plan for the industrial asset, including a path of movement for an autonomous inspection robot. An industrial asset inspection platform may receive sensor data from an autonomous inspection robot indicating characteristics of the industrial asset and determine a current location of the autonomous inspection robot along the path of movement in the inspection plan along with current context information. A forward simulation of movement for the autonomous inspection robot may be executed from the current location, through a pre-determined time window, to determine a difference between the path of movement in the inspection plan and the forward simulation of movement along with future context information.
    Type: Application
    Filed: May 10, 2017
    Publication date: November 15, 2018
    Inventors: Shiraj SEN, Steven GRAY, Nicholas ABATE, Roberto SILVA FILHO, Ching-Ling HUANG, Mauricio CASTILLO-EFFEN, Ghulam Ali BALOCH, Raju VENKATARAMANA, Douglas FORMAN
  • Publication number: 20180321692
    Abstract: Provided are systems and methods for monitoring an asset via an autonomous model-driven inspection. In an example, the method may include storing an inspection plan including a virtually created three-dimensional (3D) model of a travel path with respect to a virtual asset that is created in virtual space, converting the virtually created 3D model of the travel path about the virtual asset into a physical travel path about a physical asset corresponding to the virtual asset, autonomously controlling vertical and lateral movement of an unmanned robot in three dimensions with respect to the physical asset based on the physical travel path, capturing data at one or more regions of interest, and storing information concerning the captured data about the asset.
    Type: Application
    Filed: May 5, 2017
    Publication date: November 8, 2018
    Inventors: Mauricio CASTILLO-EFFEN, Ching-Ling HUANG, Raju VENKATARAMANA, Roberto SILVA FILHO, Alex TEPPER, Steven GRAY, Yakov POLISHCHUK, Viktor HOLOVASHCHENKO, Charles THEURER, Yang ZHAO, Ghulam Ali BALOCH, Douglas FORMAN, Shiraj SEN, Huan TAN, Arpit JAIN
  • Publication number: 20180219935
    Abstract: A method for contextual content delivery of workflow management system information to a mobile computing device includes registering in a data record each mobile computing device accessible to a respective user, applying device selection rules in combination with mobile computing device-specific information, constructing a real-time context model, receiving task information content to be sent to the respective user, retrieving device association rules, fitting criteria of the device association rules to the real-time context model, selecting a context model for delivery of the task information content, delivering the task information content to at least one mobile computing device accessible to the respective user, and updating the task information content delivery based on the latest mobile computing devices detected, their specific information, and the next workflow task.
    Type: Application
    Filed: January 27, 2017
    Publication date: August 2, 2018
    Inventors: Ching-Ling HUANG, Bo YU, Raju VENKATARAMANA
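
A hedged sketch of the device-selection step: association rules are evaluated against a real-time context model to decide which of the user's registered devices should receive the task content. The example rules and device records are invented for illustration.

```python
def select_devices(registered_devices, context, association_rules):
    """Fit the device association rules to the real-time context model and
    pick the devices that should receive the next task's content."""
    return [d for d in registered_devices
            if any(rule(d, context) for rule in association_rules)]

# Toy rules: prefer a wearable when the user's hands are busy, else a tablet.
rules = [
    lambda d, ctx: ctx["hands_busy"] and d["type"] == "smartwatch",
    lambda d, ctx: not ctx["hands_busy"] and d["type"] == "tablet",
]
devices = [{"id": "w1", "type": "smartwatch"}, {"id": "t1", "type": "tablet"}]
context_model = {"hands_busy": True, "location": "turbine deck"}
task = {"task": "inspect valve 7", "content": "torque spec and checklist"}
for device in select_devices(devices, context_model, rules):
    print(f"deliver {task['task']!r} to {device['id']}")
```
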
  • Patent number: 9864890
    Abstract: A method for contextualizing barcode content data includes a mobile device obtaining images containing one or more barcodes, decoding at least one of the barcodes to determine its content, determining a metric magnitude quantifying a measurable relationship between at least one decoded barcode and the imaging device, comparing the metric magnitude to a predetermined condition, and, if the predetermined condition is satisfied, accessing a respective barcode-specific information data record based on the respective barcode content and providing at least one of contextual information and contextual instruction to the mobile computing device via one or more forms of multi-modal communication (e.g., visual display, audio notification, tactile stimulus, etc.). The method may also include determining a distance between at least two barcodes in the image and displaying a message based on the distance.
    Type: Grant
    Filed: January 25, 2017
    Date of Patent: January 9, 2018
    Assignee: GENERAL ELECTRIC COMPANY
    Inventors: Yoshifumi Nishida, Ching-Ling Huang
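
One way to picture the "metric magnitude" is a pinhole-camera distance estimate from the barcode's apparent size, compared against a threshold before the barcode-specific record is fetched; the camera model, threshold, and record lookup below are assumptions of this sketch.

```python
def estimate_distance(apparent_width_px, real_width_mm, focal_px):
    """Metric magnitude relating a decoded barcode to the imaging device:
    a pinhole-camera estimate of the barcode's distance from the camera."""
    return real_width_mm * focal_px / apparent_width_px

def contextual_message(barcode, records, max_distance_mm=1500.0):
    """If the distance condition is satisfied, look up the barcode-specific
    record and return contextual information for the mobile device."""
    d = estimate_distance(barcode["width_px"], barcode["real_width_mm"], barcode["focal_px"])
    if d <= max_distance_mm:
        return records.get(barcode["content"], "no record"), d
    return "move closer to the asset", d

# Toy usage: one decoded barcode, one barcode-specific information record.
records = {"VALVE-7": "Torque to 45 Nm; inspect seal for wear."}
barcode = {"content": "VALVE-7", "width_px": 120, "real_width_mm": 50, "focal_px": 1800}
print(contextual_message(barcode, records))
```
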
  • Publication number: 20170061214
    Abstract: A method of managing bandwidth associated with video transmissions over a computer network is disclosed. A plurality of video transmissions is received from a plurality of video cameras connected to a computer surveillance system via the computer network. Quality levels of the plurality of video transmissions are set to a first level. A first analysis is performed on a video transmission to identify whether a region of interest exists. A quality level of the video transmission is increased to a second level with respect to the region of interest. A second analysis is performed on the region of interest to identify whether an actionable event has occurred in an area monitored by one of the plurality of video cameras. The quality level may subsequently be restored to the first level to keep usage of the bandwidth efficient and scalable for a large number of camera nodes.
    Type: Application
    Filed: August 31, 2015
    Publication date: March 2, 2017
    Inventors: Ching-Ling Huang, Yoshifumi Nishida
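
A minimal sketch of the quality-control loop for a single camera, assuming integer quality levels and caller-supplied detectors: the stream stays at the base level, a detected region of interest raises the level for the second analysis, and the level is restored when no actionable event is confirmed. Detector logic and data shapes are illustrative.

```python
def manage_stream_quality(frame, detect_roi, detect_event, base_level=1, roi_level=2):
    """ROI-driven quality control for one camera: start at the base level,
    raise the quality over a detected region of interest for the second
    analysis, and drop back if no actionable event is confirmed."""
    level = base_level
    roi = detect_roi(frame)            # first analysis on the full frame
    if roi is None:
        return level, None
    level = roi_level                  # raise quality with respect to the ROI
    event = detect_event(frame, roi)   # second analysis on the ROI
    if event is None:
        level = base_level             # restore to keep bandwidth usage efficient
    return level, event

# Toy detectors: motion near the loading dock is an ROI; a person there is an event.
frame = {"motion_area": "loading_dock", "person_in_motion_area": True}
level, event = manage_stream_quality(
    frame,
    detect_roi=lambda f: f["motion_area"],
    detect_event=lambda f, roi: "person detected" if f["person_in_motion_area"] else None,
)
print(level, event)
```
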