Patents by Inventor Yujie Hu

Yujie Hu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11971620
    Abstract: The present disclosure is related to a display panel and an electronic device. The display panel may include an array substrate and an opposing substrate. The array substrate includes scan lines, data lines, a first blocking wall and a second blocking wall. The first blocking wall and the second blocking wall are respectively arranged on opposite sides of at least one of the scan lines, and each of the first blocking wall and the second blocking wall includes a first blocking layer arranged in a same layer as the scan lines and a second blocking layer arranged in a same layer as the data lines. The distance between the first blocking layer and the scan line in a first direction is smaller than the distance between the second blocking layer and the scan line in the first direction.
    Type: Grant
    Filed: November 22, 2021
    Date of Patent: April 30, 2024
    Assignees: WUHAN BOE OPTOELECTRONICS TECHNOLOGY CO., LTD., BOE TECHNOLOGY GROUP CO., LTD.
    Inventors: Yang Hu, Yuanhui Guo, Xia Shi, Yujie Gao
  • Publication number: 20240129251
    Abstract: Embodiments of this application provide a data processing method and apparatus, a computer device, and a readable storage medium. The method may be applied to an in-vehicle central control device and includes polling data packets in a database to obtain a to-be-added data packet set and adding that set to a cache queue; polling the cache queue to obtain a to-be-uploaded data packet set and uploading that set through a wireless communication connection of the in-vehicle central control device; storing a first data packet in the database in response to the first data packet in the to-be-uploaded data packet set meeting a continuous uploading failure condition; and deleting a second data packet from the database in response to the second data packet in the to-be-uploaded data packet set being successfully uploaded to a server.
    Type: Application
    Filed: December 20, 2023
    Publication date: April 18, 2024
    Inventors: Can HU, Yujie ZHANG, Lei WANG
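    A minimal Python sketch of the flow described above (poll the database into a cache queue, poll the queue to upload over the wireless link, keep a packet that repeatedly fails, and delete one that uploads successfully) is given below; the class names, retry threshold, and server stub are illustrative assumptions, not details from the application.
```python
from collections import deque

MAX_CONSECUTIVE_FAILURES = 3  # assumed threshold for the "continuous uploading failure" condition

class Uploader:
    def __init__(self, database, server):
        self.database = database      # packet_id -> payload
        self.server = server          # object exposing upload(payload) -> bool
        self.cache_queue = deque()
        self.failures = {}            # packet_id -> consecutive upload failures

    def poll_database(self):
        """Poll the database and add not-yet-queued packets to the cache queue."""
        for packet_id in list(self.database):
            if packet_id not in self.cache_queue:
                self.cache_queue.append(packet_id)

    def poll_cache_queue(self):
        """Poll the cache queue and upload packets over the wireless connection."""
        for _ in range(len(self.cache_queue)):
            packet_id = self.cache_queue.popleft()
            if self.server.upload(self.database[packet_id]):
                # Successfully uploaded to the server: delete it from the database.
                del self.database[packet_id]
                self.failures.pop(packet_id, None)
            else:
                self.failures[packet_id] = self.failures.get(packet_id, 0) + 1
                if self.failures[packet_id] < MAX_CONSECUTIVE_FAILURES:
                    self.cache_queue.append(packet_id)   # retry on a later poll
                # Otherwise the packet stays stored in the database until a
                # later polling round picks it up again.

class FlakyServer:
    """Illustrative stub: every packet fails once, then uploads successfully."""
    def __init__(self):
        self.seen = set()
    def upload(self, payload):
        ok = payload in self.seen
        self.seen.add(payload)
        return ok

db = {"p1": b"telemetry-1", "p2": b"telemetry-2"}
up = Uploader(db, FlakyServer())
up.poll_database()
up.poll_cache_queue()   # first attempt fails for both packets; they are re-queued
up.poll_cache_queue()   # retries succeed; packets are deleted from the database
print(db)               # {}
```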
  • Patent number: 11928273
    Abstract: The present disclosure discloses an array substrate and a display device. The array substrate includes a touch substrate, the touch substrate includes a plurality of touch electrodes and a plurality of touch electrode wires disposed in an array, each touch electrode includes a plurality of touch sensors disposed in an array, common electrodes of all pixel cells of the array substrate are reused as the touch sensors, and the array substrate includes data wires for providing display data to the pixel cells; the array substrate includes a base substrate and a thin film transistor which are stacked, the touch electrode wires and the data wires are provided on the same layer where a source and a drain of the thin film transistor are located, and the touch electrode wires are provided abreast on the two sides of each data wire.
    Type: Grant
    Filed: January 5, 2021
    Date of Patent: March 12, 2024
    Assignees: Wuhan BOE Optoelectronics Technology Co., Ltd., BOE Technology Group Co., Ltd.
    Inventors: Fang Hu, Ning Zhu, Yujie Gao, Peng Jiang
  • Patent number: 11910948
    Abstract: A sous vide cooker having a housing, a bulkhead attached to the housing to form a first cavity, a heat sink attached to the bulkhead to form a second cavity, and a skirt attached to the bulkhead to form a third cavity below the heat sink. A TRIAC is physically attached in conductive thermal communication to the heat sink in the second cavity. The cooker also has a motor, an impeller located in the third cavity, a drive shaft extending through at least the heat sink and into the third cavity and operatively connecting the motor to the impeller, and a heating element located within the third cavity.
    Type: Grant
    Filed: March 20, 2023
    Date of Patent: February 27, 2024
    Assignee: Anova Applied Electronics, Inc.
    Inventors: Carl Håkan Messler, Xia Yujie, Vivian Lee Hu
  • Patent number: 11106431
    Abstract: A computing device implementing a fast floating-point adder tree for neural network applications is disclosed. The fast floating-point adder tree comprises a data preparation module, a fast fixed-point Carry-Save Adder (CSA) tree, and a normalization module. The floating-point input data comprises a sign bit, an exponent part, and a fraction part. The data preparation module aligns the fraction part of the input data and prepares the input data for subsequent processing. The fast adder uses a signed fixed-point CSA tree to quickly add a large number of fixed-point data into two output values and then uses a normal adder to add the two output values into one output value. The fast adder for a large number of operands is based on multiple levels of fast adders for a small number of operands. The output from the signed fixed-point Carry-Save Adder tree is converted to a selected floating-point format.
    Type: Grant
    Filed: February 22, 2020
    Date of Patent: August 31, 2021
    Assignee: DINOPLUSAI HOLDINGS LIMITED
    Inventors: Yutian Feng, Yujie Hu
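    The pipeline in this abstract (align fractions to a shared exponent, reduce the fixed-point operands with a carry-save 3:2-compressor tree down to two values, add those with a normal adder, then normalize) can be modeled roughly in software as below; the bit widths, rounding, and function names are simplifying assumptions rather than the patented hardware design.
```python
import math

def csa_3to2(a, b, c):
    """3:2 compressor: three signed fixed-point operands in, sum and carry words out."""
    s = a ^ b ^ c
    carry = ((a & b) | (a & c) | (b & c)) << 1
    return s, carry

def adder_tree(values, frac_bits=23):
    # Data preparation: align every fraction to one shared exponent.
    exp = max(math.frexp(v)[1] for v in values if v != 0.0)
    fixed = [int(round(v * 2 ** (frac_bits - exp))) for v in values]

    # Carry-save reduction: fold operands three at a time until two remain.
    while len(fixed) > 2:
        s, carry = csa_3to2(fixed[0], fixed[1], fixed[2])
        fixed = fixed[3:] + [s, carry]

    # Normal adder for the final two values, then normalization back to float.
    return sum(fixed) * 2.0 ** (exp - frac_bits)

print(adder_tree([1.5, -0.25, 3.0, 0.125]))   # 4.375
```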
  • Publication number: 20210264257
    Abstract: An AI (Artificial Intelligence) processor for Neural Network (NN) processing shared by multiple users is disclosed. The AI processor comprises a Multiplier Unit (MXU), a Scalar Computing Unit (SCU), a unified buffer coupled to the MXU and SCU to store data, and control circuitry coupled to the CCU and the unified buffer. The MXU comprises a plurality of Processing Elements (PEs) responsible for computing matrix multiplications. The SCU, coupled to the output of the MXU, is responsible for computing the activation function. The control circuitry is configured to perform space-division and time-division NN processing for a plurality of users. At one time instance, at least one of the MXU and SCU is shared by two or more users, and at least one user uses a part of the MXU while another user uses a part of the SCU.
    Type: Application
    Filed: February 28, 2019
    Publication date: August 26, 2021
    Inventors: Yujie HU, Xiaosong WANG, Tong WU, Steven SERTILLANGE
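    A toy Python model can illustrate the space-division and time-division sharing this abstract describes: within one time slot, one user holds a slice of the MXU's processing elements while another holds a slice of the SCU, and different time slots go to different users. The unit sizes, allocation API, and overlap check are assumptions made for the sketch.
```python
from dataclasses import dataclass, field

@dataclass
class Allocation:
    mxu_pes: range      # which MXU processing elements this user holds
    scu_lanes: range    # which SCU lanes this user holds

@dataclass
class SharedAIProcessor:
    num_pes: int = 64
    num_scu_lanes: int = 8
    schedule: dict = field(default_factory=dict)   # time_slot -> {user: Allocation}

    def assign(self, time_slot, user, mxu_pes, scu_lanes):
        slot = self.schedule.setdefault(time_slot, {})
        # Space division: resource slices within one time slot must not overlap.
        for other in slot.values():
            if set(mxu_pes) & set(other.mxu_pes) or set(scu_lanes) & set(other.scu_lanes):
                raise ValueError("resource slice already taken in this time slot")
        slot[user] = Allocation(mxu_pes, scu_lanes)

proc = SharedAIProcessor()
# Slot 0, space division: user_a holds half the MXU while user_b holds the SCU.
proc.assign(0, "user_a", mxu_pes=range(0, 32), scu_lanes=range(0))
proc.assign(0, "user_b", mxu_pes=range(0), scu_lanes=range(0, 8))
# Slot 1, time division: the whole processor goes to user_a.
proc.assign(1, "user_a", mxu_pes=range(0, 64), scu_lanes=range(0, 8))
```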
  • Publication number: 20210141697
    Abstract: Embodiments described herein provide a mission-critical artificial intelligence (AI) processor (MAIP), which includes multiple types of hardware elements (HEs): one or more HEs configured to perform operations associated with multi-layer NN (neural network) processing, at least one spare HE, a data buffer to store correctly computed data from a previous layer of the multi-layer NN processing, and fault tolerance (FT) control logic. The FT control logic is configured to: determine a fault in the current-layer NN processing associated with an HE; cause the correctly computed data from the previous layer to be copied or moved to said at least one spare HE; and cause said at least one spare HE to perform the current-layer NN processing using that correctly computed previous-layer data.
    Type: Application
    Filed: February 25, 2019
    Publication date: May 13, 2021
    Inventors: Chung Kuang CHIN, Yujie HU, Tong WU, Clifford GOLD, Yick Kei WONG, Xiaosong WANG, Steven SERTILLANGE, Zongwei ZHU
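    The fault-tolerance flow in this abstract (buffer the previous layer's correct output, detect a fault in the current layer, hand the buffered data to a spare HE, and recompute there) can be sketched behaviorally as follows; the class names and fault model are illustrative assumptions.
```python
class FaultDetected(Exception):
    """Raised in this model when a hardware element faults mid-layer."""

class HardwareElement:
    def __init__(self, faulty=False):
        self.faulty = faulty
    def compute(self, layer_fn, layer_input):
        if self.faulty:
            raise FaultDetected()
        return layer_fn(layer_input)

def run_layer_with_ft(layer_fn, prev_layer_output, primary_he, spare_he, data_buffer):
    # The data buffer keeps the correctly computed output of the previous layer.
    data_buffer["prev"] = prev_layer_output
    try:
        return primary_he.compute(layer_fn, prev_layer_output)
    except FaultDetected:
        # Copy the buffered, known-good previous-layer data to the spare HE
        # and recompute the current layer there.
        return spare_he.compute(layer_fn, data_buffer["prev"])

buf = {}
relu = lambda xs: [max(x, 0.0) for x in xs]
out = run_layer_with_ft(relu, [-1.0, 2.0], HardwareElement(faulty=True), HardwareElement(), buf)
print(out)   # [0.0, 2.0], recovered on the spare HE
```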
  • Publication number: 20200272417
    Abstract: A computing device implementing a fast floating-point adder tree for neural network applications is disclosed. The fast floating-point adder tree comprises a data preparation module, a fast fixed-point Carry-Save Adder (CSA) tree, and a normalization module. The floating-point input data comprises a sign bit, an exponent part, and a fraction part. The data preparation module aligns the fraction part of the input data and prepares the input data for subsequent processing. The fast adder uses a signed fixed-point CSA tree to quickly add a large number of fixed-point data into two output values and then uses a normal adder to add the two output values into one output value. The fast adder for a large number of operands is based on multiple levels of fast adders for a small number of operands. The output from the signed fixed-point Carry-Save Adder tree is converted to a selected floating-point format.
    Type: Application
    Filed: February 22, 2020
    Publication date: August 27, 2020
    Inventors: Yutian Feng, Yujie Hu
  • Patent number: 10747631
    Abstract: Embodiments described herein provide a mission-critical artificial intelligence (AI) processor (MAIP), which includes an instruction buffer, processing circuitry, a data buffer, command circuitry, and communication circuitry. During operation, the instruction buffer stores a first hardware instruction and a second hardware instruction. The processing circuitry executes the first hardware instruction, which computes an intermediate stage of an AI model. The data buffer stores data generated from executing the first hardware instruction. The command circuitry determines that the second hardware instruction is a hardware-initiated store instruction for transferring the data from the data buffer. Based on the hardware-initiated store instruction, the communication circuitry transfers the data from the data buffer to a memory device of a computing system, which includes the mission-critical processor, via a communication interface.
    Type: Grant
    Filed: June 5, 2018
    Date of Patent: August 18, 2020
    Assignee: DINOPLUSAI HOLDINGS LIMITED
    Inventors: Yujie Hu, Tong Wu, Xiaosong Wang, Zongwei Zhu, Chung Kuang Chin, Clifford Gold, Steven Sertillange, Yick Kei Wong
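    Behaviorally, the sequence in this abstract (execute a compute instruction for an intermediate stage, recognize the next instruction as a hardware-initiated store, and transfer the data buffer to the host's memory device) might be modeled as in the following sketch; the instruction encoding and names are assumptions for illustration only.
```python
from dataclasses import dataclass

@dataclass
class Instruction:
    opcode: str             # "COMPUTE" or "HW_STORE" in this model
    payload: object = None  # for COMPUTE, a callable producing the stage output

class MAIPModel:
    def __init__(self, host_memory):
        self.instruction_buffer = []
        self.data_buffer = None
        self.host_memory = host_memory   # stands in for the computing system's memory device

    def step(self):
        inst = self.instruction_buffer.pop(0)
        if inst.opcode == "COMPUTE":
            # Processing circuitry: compute an intermediate stage of the AI model.
            self.data_buffer = inst.payload()
        elif inst.opcode == "HW_STORE":
            # Command circuitry identifies a hardware-initiated store; the
            # communication circuitry transfers the data buffer to host memory.
            self.host_memory.append(self.data_buffer)

host = []
maip = MAIPModel(host)
maip.instruction_buffer = [Instruction("COMPUTE", lambda: [0.1, 0.2]), Instruction("HW_STORE")]
maip.step()
maip.step()
print(host)   # [[0.1, 0.2]]
```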
  • Publication number: 20190279083
    Abstract: A computing device for fast weighted sum calculation in neural networks is disclosed. The computing device comprises an array of processing elements configured to accept an input array. Each processing element comprises a plurality of multipliers and multiple levels of accumulators. A set of weights associated with the inputs and a target output are provided to a target processing element to compute the weighted sum for the target output. The device according to the present invention reduces the computation time from M clock cycles to log2(M) clock cycles, where M is the size of the input array.
    Type: Application
    Filed: April 19, 2018
    Publication date: September 12, 2019
    Inventors: Cliff Gold, Tong Wu, Yujie Hu, Chung Kuang Chin, Xiaosong Wang, Yick Kei Wong
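    The log2(M) claim follows from combining the M products pairwise, halving the operand count at each accumulator level instead of accumulating one product per clock cycle; the short sketch below assumes one tree level per clock cycle.
```python
def weighted_sum_tree(inputs, weights):
    partial = [x * w for x, w in zip(inputs, weights)]   # the multipliers, in parallel
    levels = 0
    while len(partial) > 1:                              # the accumulator tree
        partial = [partial[i] + partial[i + 1] if i + 1 < len(partial) else partial[i]
                   for i in range(0, len(partial), 2)]
        levels += 1
    return partial[0], levels

total, levels = weighted_sum_tree([1, 2, 3, 4, 5, 6, 7, 8], [1] * 8)
print(total, levels)   # 36 3  (3 = log2(8) tree levels instead of 8 serial accumulations)
```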
  • Publication number: 20190227887
    Abstract: Embodiments described herein provide a mission-critical artificial intelligence (AI) processor (MAIP), which includes an instruction buffer, processing circuitry, a data buffer, command circuitry, and communication circuitry. During operation, the instruction buffer stores a first hardware instruction and a second hardware instruction. The processing circuitry executes the first hardware instruction, which computes an intermediate stage of an AI model. The data buffer stores data generated from executing the first hardware instruction. The command circuitry determines that the second hardware instruction is a hardware-initiated store instruction for transferring the data from the data buffer. Based on the hardware-initiated store instruction, the communication circuitry transfers the data from the data buffer to a memory device of a computing system, which includes the mission-critical processor, via a communication interface.
    Type: Application
    Filed: June 5, 2018
    Publication date: July 25, 2019
    Applicant: DinoplusAI Holdings Limited
    Inventors: Yujie Hu, Tong Wu, Xiaosong Wang, Zongwei Zhu, Chung Kuang Chin, Clifford Gold, Steven Sertillange, Yick Kei Wong
  • Publication number: 20080127162
    Abstract: A cross-platform configuration system manages configuration information for application software. In one embodiment, a process includes, but is not limited to, storing configuration information in a configuration file using a cross-platform markup language, the configuration information including configuration data associated with the operating environment and user data associated with the application, and configuring the application by accessing the configuration file without using a registry of an operating environment in which the application is running.
    Type: Application
    Filed: November 29, 2006
    Publication date: May 29, 2008
    Inventors: Kui Xu, Yujie Hu, Ting Wang
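    The registry-free approach described here (keep both environment and user settings in a cross-platform markup-language file and configure the application by reading that file directly) can be sketched as follows; the file layout and element names are illustrative assumptions, not a format defined in the application.
```python
import xml.etree.ElementTree as ET

CONFIG_XML = """
<configuration>
  <environment os="any">
    <setting name="cache_dir" value="./cache"/>
  </environment>
  <user name="alice">
    <setting name="theme" value="dark"/>
  </user>
</configuration>
"""

def load_settings(xml_text):
    """Read configuration directly from the markup file, not from an OS registry."""
    root = ET.fromstring(xml_text)
    settings = {}
    for section in root:
        for setting in section.findall("setting"):
            settings[(section.tag, setting.get("name"))] = setting.get("value")
    return settings

print(load_settings(CONFIG_XML))
# {('environment', 'cache_dir'): './cache', ('user', 'theme'): 'dark'}
```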