Patents by Inventor Yao Zhang

Yao Zhang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240103196
    Abstract: A radiographic inspection device and a method of inspecting an object are provided. The radiographic inspection device includes a support frame, where an inspection space for inspecting an object is formed within the support frame, and the inspection space has a first opening connecting to the outside; a transfer mechanism configured to carry the object and move through the inspection space; a shielding curtain mounted at the first opening; and a driving mechanism. The driving mechanism includes: a driver mounted on the support frame; and a joint portion, where an upper end of the shielding curtain is connected to the joint portion. The driver is configured to synchronously drive two ends of the joint portion, so that the shielding curtain moves up and down with the joint portion to open or close the first opening.
    Type: Application
    Filed: January 18, 2022
    Publication date: March 28, 2024
    Inventors: Zhiqiang CHEN, Li ZHANG, Yi CHENG, Qingping HUANG, Mingzhi HONG, Minghua QIU, Yao ZHANG, Jianxue YANG, Lei ZHENG
  • Patent number: 11934940
    Abstract: The present disclosure discloses a data processing method and related products, in which the data processing method includes: generating, by a general-purpose processor, a binary instruction according to device information of an AI processor, and generating an AI learning task according to the binary instruction; transmitting, by the general-purpose processor, the AI learning task to the cloud AI processor for running; receiving, by the general-purpose processor, a running result corresponding to the AI learning task; and determining, by the general-purpose processor, an offline running file according to the running result, where the offline running file is generated according to the device information of the AI processor and the binary instruction when the running result satisfies a preset requirement. By implementing the present disclosure, the debugging between the AI algorithm model and the AI processor can be achieved in advance.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: March 19, 2024
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Yao Zhang, Xiaofu Meng, Shaoli Liu
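
A minimal Python sketch of the debugging workflow this abstract describes: compile a binary instruction against the target device information, run the resulting learning task on a cloud AI processor stand-in, and persist an offline running file only when the result meets a preset requirement. Every name here (compile_binary, CloudAIProcessor, the latency check) is an illustrative assumption, not an API from the patent or from Cambricon's tooling.

```python
# Hypothetical sketch of the debug-in-advance workflow described in the abstract above.
# All functions and fields are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class DeviceInfo:
    arch: str          # target AI-processor architecture
    memory_mb: int

def compile_binary(model: dict, device: DeviceInfo) -> bytes:
    """General-purpose processor generates a binary instruction for the target device."""
    return f"{model['name']}::{device.arch}".encode()

class CloudAIProcessor:
    """Stand-in for the cloud AI processor that runs the learning task."""
    def run(self, task: bytes) -> dict:
        return {"ok": True, "latency_ms": 12.5, "task": task}

def build_offline_file(device: DeviceInfo, binary: bytes) -> dict:
    return {"device": device.arch, "binary": binary}

def debug_in_advance(model: dict, device: DeviceInfo, max_latency_ms: float = 20.0):
    binary = compile_binary(model, device)           # 1. generate binary instruction
    result = CloudAIProcessor().run(binary)          # 2-3. run the AI learning task on the cloud, get result
    if result["ok"] and result["latency_ms"] <= max_latency_ms:   # 4. preset requirement
        return build_offline_file(device, binary)    # determine the offline running file
    return None

offline = debug_in_advance({"name": "resnet"}, DeviceInfo("mlu-v1", 8192))
print(offline)
```
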
  • Patent number: 11934337
    Abstract: An electronic device includes a CPU, an acceleration module, and a memory. The acceleration module is communicatively connected with the CPU, and includes chips. The chip according to an embodiment includes a data bus, and a memory, a data receiver, a computing and processing unit, and a data transmitter connected to the data bus. The data receiver receives first data and header information from the outside, writes the first data to a corresponding area of the memory through the data bus, and configures a corresponding computing and processing unit and/or data transmitter according to the header information. The computing and processing unit receives first task information, and performs operation processing according to the first task information and a configuration operation on the data transmitter. The data transmitter obtains second task information and second data, and outputs third data to the outside based on at least part of the second data.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: March 19, 2024
    Assignee: ANHUI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Yao Zhang, Shaoli Liu, Dong Han
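
A toy Python model of the header-driven dataflow sketched in the abstract above: the receiver writes incoming data to memory over the bus and configures the computing unit and transmitter from the header, the computing unit processes the data, and the transmitter emits the result. Class names, header fields, and the scaling operation are assumptions made only for illustration.

```python
# Toy model of the header-driven chip dataflow; names and fields are assumptions.

class Chip:
    def __init__(self):
        self.memory = {}          # on-chip memory reached over the data bus
        self.compute_cfg = None
        self.tx_cfg = None

    def receive(self, header: dict, first_data: list):
        self.memory[header["addr"]] = first_data    # data receiver writes first data via the bus
        self.compute_cfg = header.get("compute")    # configure computing and processing unit
        self.tx_cfg = header.get("transmit")        # and/or the data transmitter

    def compute(self):
        data = self.memory[self.compute_cfg["addr"]]
        result = [x * self.compute_cfg.get("scale", 1) for x in data]
        self.memory[self.compute_cfg["out"]] = result

    def transmit(self) -> list:
        second_data = self.memory[self.tx_cfg["addr"]]
        return second_data[: self.tx_cfg.get("length", len(second_data))]   # third data

chip = Chip()
chip.receive({"addr": 0,
              "compute": {"addr": 0, "out": 1, "scale": 2},
              "transmit": {"addr": 1, "length": 3}},
             [1, 2, 3, 4])
chip.compute()
print(chip.transmit())   # -> [2, 4, 6]
```
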
  • Publication number: 20240070183
    Abstract: A computer-implemented method according to one embodiment includes generating a first matrix based on words extracted from documents, and generating a second matrix based on deduplication chunks. The deduplication chunks include words of the documents. Word clustering is performed based on an analysis performed on the second matrix. Each cluster of the words represents a feature of at least one of the documents. The method further includes generating a third matrix based on the first matrix and the clusters, and performing text mining using the third matrix. A computer program product according to another embodiment includes a computer readable storage medium having program instructions embodied therewith. The program instructions are readable and/or executable by a computer to cause the computer to perform the foregoing method.
    Type: Application
    Filed: August 25, 2022
    Publication date: February 29, 2024
    Inventors: Jia Li Yun, Yin Xiang Xiong, Shan Gu, Yan Bin Hu, Yao Zhang
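
A rough Python sketch of the pipeline described above, using common libraries in place of whatever the patent actually employs: build a document-by-word matrix (first matrix) and a chunk-by-word matrix from deduplication chunks (second matrix), cluster words by their chunk co-occurrence profiles, and fold the word columns into a document-by-cluster matrix (third matrix) for downstream text mining. The chunking, the use of KMeans, and the toy corpus are assumptions.

```python
# Illustrative pipeline: first matrix (docs x words), second matrix (chunks x words),
# word clusters from the second matrix, third matrix (docs x clusters) for text mining.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans

docs = ["backup storage dedup chunk", "chunk storage index", "cat sat on the mat"]
chunks = ["backup storage", "dedup chunk", "chunk storage", "index", "cat sat", "on the mat"]

vec = CountVectorizer()
first = vec.fit_transform(docs).toarray()     # first matrix: documents x words
second = vec.transform(chunks).toarray()      # second matrix: deduplication chunks x words

# Cluster words by their co-occurrence profile across chunks (columns of `second`).
n_clusters = 3
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(second.T)

# Third matrix: documents x clusters, each cluster acting as one document feature.
third = np.zeros((first.shape[0], n_clusters))
for word_idx, cluster in enumerate(labels):
    third[:, cluster] += first[:, word_idx]

print(third)   # input to downstream text mining (e.g., similarity or topic analysis)
```
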
  • Patent number: 11908352
    Abstract: The present disclosure provides a spliced display screen, which includes a plurality of closely arranged display modules and one or more positioning mechanisms installed on each display module. Each positioning mechanism includes: a positioning pin and a groove, where the one or more positioning pins are clamped in the one or more grooves, and the one or more positioning pins and the one or more grooves cooperate to restrict relative movement in the horizontal direction of two adjacent display modules; and an elastic structure arranged corresponding to the positioning pin, one end being connected to the display module and the other end being connected to the positioning pin. In response to being subjected to an external force, the one or more positioning pins rotate along a predetermined direction relative to the one or more grooves and leave the one or more grooves; in response to being disengaged from the external force, the one or more positioning pins return to an initial position along a direction opposite to the predetermined direction.
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: February 20, 2024
    Assignee: BOE Technology Group Co., Ltd.
    Inventors: Yao Zhang, Jinghua Yang, Ran Tao, Yunpeng Wu
  • Publication number: 20240054012
    Abstract: The present disclosure provides a circuit, method and system for inter-chip communication. The method is implemented in a computation apparatus, where the computation apparatus is included in a combined processing apparatus, and the combined processing apparatus includes a general interconnection interface and other processing apparatus. The computation apparatus interacts with other processing apparatus to jointly complete a computation operation specified by a user. The combined processing apparatus also includes a storage apparatus. The storage apparatus is respectively connected to the computation apparatus and other processing apparatus and is used for storing data of the computation apparatus and other processing apparatus.
    Type: Application
    Filed: December 30, 2021
    Publication date: February 15, 2024
    Inventors: Yingnan ZHANG, Qinglong CHAI, Lu CHAO, Yao ZHANG, Shaoli LIU, Jun LIANG
  • Patent number: 11900242
    Abstract: Provided are an integrated circuit chip apparatus and a related product, the integrated circuit chip apparatus being used for executing a multiplication operation, a convolution operation or a training operation of a neural network. The present technical solution has the advantages of a small amount of calculation and low power consumption.
    Type: Grant
    Filed: March 7, 2022
    Date of Patent: February 13, 2024
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli Liu, Xinkai Song, Bingrui Wang, Yao Zhang, Shuai Hu
  • Patent number: 11900241
    Abstract: Provided are an integrated circuit chip apparatus and a related product, the integrated circuit chip apparatus being used for executing a multiplication operation, a convolution operation or a training operation of a neural network. The present technical solution has the advantages of a small amount of calculation and low power consumption.
    Type: Grant
    Filed: March 7, 2022
    Date of Patent: February 13, 2024
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Shaoli Liu, Xinkai Song, Bingrui Wang, Yao Zhang, Shuai Hu
  • Patent number: 11880330
    Abstract: The present application relates to a network-on-chip data processing method. The method is applied to a network-on-chip processing system, the network-on-chip processing system is used for executing machine learning calculation, and the network-on-chip processing system comprises a storage device and a calculation device. The method comprises: accessing the storage device in the network-on-chip processing system by means of a first calculation device in the network-on-chip processing system and obtaining first operation data; performing an operation on the first operation data by means of the first calculation device to obtain a first operation result; and sending the first operation result to a second calculation device in the network-on-chip processing system. According to the method, operation overhead can be reduced and data read/write efficiency can be improved.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: January 23, 2024
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Shaoli Liu, Zhen Li, Yao Zhang
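
This abstract, and the several similar network-on-chip entries later in the listing, describe the same basic flow. A minimal Python sketch of that flow, with the on-chip interconnect stood in by a plain queue and the dot-product operation chosen arbitrarily for illustration:

```python
# Minimal sketch: a first calculation device reads operation data from the shared
# storage device, performs an operation, and sends the result to a second device.
# The queue stands in for the network-on-chip link; all names are illustrative.

from queue import Queue

storage = {"weights": [0.5, 1.5], "inputs": [2.0, 4.0]}   # storage device
link_to_second_device = Queue()                           # on-chip link

def first_calculation_device():
    w, x = storage["weights"], storage["inputs"]          # access storage, obtain first operation data
    first_result = sum(wi * xi for wi, xi in zip(w, x))   # perform the operation
    link_to_second_device.put(first_result)               # send the first operation result onward

def second_calculation_device():
    return link_to_second_device.get()

first_calculation_device()
print(second_calculation_device())   # -> 7.0
```
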
  • Patent number: 11880328
    Abstract: The present application relates to a network-on-chip data processing method. The method is applied to a network-on-chip processing system, the network-on-chip processing system is used for executing machine learning calculation, and the network-on-chip processing system comprises a storage device and a calculation device. The method comprises: accessing the storage device in the network-on-chip processing system by means of a first calculation device in the network-on-chip processing system, and obtaining first operation data; performing an operation on the first operation data by means of the first calculation device to obtain a first operation result; and sending the first operation result to a second calculation device in the network-on-chip processing system. According to the method, operation overhead can be reduced and data read/write efficiency can be improved.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: January 23, 2024
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Yao Zhang, Shaoli Liu, Jun Liang, Yu Chen
  • Patent number: 11880329
    Abstract: The present application relates to a network-on-chip data processing method. The method is applied to a network-on-chip processing system, the network-on-chip processing system is used for executing machine learning calculation, and the network-on-chip processing system comprises a storage device and a calculation device. The method comprises: accessing the storage device in the network-on-chip processing system by means of a first calculation device in the network-on-chip processing system and obtaining first operation data; performing an operation on the first operation data by means of the first calculation device to obtain a first operation result; and sending the first operation result to a second calculation device in the network-on-chip processing system. According to the method, operation overhead can be reduced and data read/write efficiency can be improved.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: January 23, 2024
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Shaoli Liu, Zhen Li, Yao Zhang
  • Patent number: 11868299
    Abstract: The present application relates to a network-on-chip data processing method. The method is applied to a network-on-chip processing system, the network-on-chip processing system is used for executing machine learning calculation, and the network-on-chip processing system comprises a storage device and a calculation device. The method comprises: accessing the storage device in the network-on-chip processing system by means of a first calculation device in the network-on-chip processing system and obtaining first operation data; performing an operation on the first operation data by means of the first calculation device to obtain a first operation result; and sending the first operation result to a second calculation device in the network-on-chip processing system. According to the method, operation overhead can be reduced and data read/write efficiency can be improved.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: January 9, 2024
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Shaoli Liu, Zhen Li, Yao Zhang
  • Publication number: 20240005446
    Abstract: In response to a graphics memory allocation request generated during the running of a target task and for graphics memory needed during running of the target task, target data generated during running of each sub-task of multiple sub-tasks is classified, where a type of the target data comprises at least first data, and where the first data is not used by a subsequent sub-task. Multiple target graphics memory pools are allocated to the multiple sub-tasks. Each target graphics memory pool of the multiple target graphics memory pools is divided into at least one graphics memory block based on a type of the target data, where the at least one graphics memory block includes at least a first graphics memory block corresponding to the first data, and where multiple first graphics memory blocks corresponding to the multiple sub-tasks are mapped to a same target physical memory address.
    Type: Application
    Filed: June 29, 2023
    Publication date: January 4, 2024
    Applicant: Alipay (Hangzhou) Information Technology Co., Ltd.
    Inventors: Xiaofeng Mei, Yao Zhang, Junping Zhao
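
An illustrative Python sketch of the allocation idea in the abstract above: each sub-task gets its own graphics memory pool, and the blocks holding "first data" (data no subsequent sub-task reads) are all aliased onto one shared physical region, while longer-lived data gets distinct addresses. The sizes and the bump-pointer allocator are assumptions for the example.

```python
# Sketch: per-sub-task graphics memory pools with the first-data blocks of all
# sub-tasks mapped to the same physical address. Sizes and allocator are assumed.

next_phys = 0
def reserve(size: int) -> int:          # toy bump-pointer physical allocator
    global next_phys
    addr, next_phys = next_phys, next_phys + size
    return addr

subtasks = {
    "conv1": {"first": 256, "persistent": 64},   # bytes of each data class per sub-task
    "conv2": {"first": 256, "persistent": 64},
    "fc":    {"first": 128, "persistent": 32},
}

# One physical region, large enough for the biggest first-data block, shared by all sub-tasks.
shared_first_block = reserve(max(t["first"] for t in subtasks.values()))

pools = {}
for name, sizes in subtasks.items():
    pools[name] = {
        "first_block": shared_first_block,                  # same target physical memory address
        "persistent_block": reserve(sizes["persistent"]),   # data later sub-tasks still need
    }

print(pools)
```
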
  • Publication number: 20230410006
    Abstract: Disclosed are various embodiments for virtual desktop infrastructure (VDI) optimization. A computing device can create a plurality of predictions for future demand for the VDI, each of the plurality of predictions using a respective one of a plurality of resource models, each representing a separate approach to predicting future demand for the VDI. Then, the computing device can calculate a plurality of anticipated resource costs, each of the plurality of anticipated resource costs being based at least in part on a respective one of the plurality of predictions for future demand for the VDI. Moreover, the computing device can include, within a user interface, the plurality of predictions for future demand and the plurality of anticipated resource costs. Then, the computing device can implement a resource model from the plurality of resource models to manage an allocation of resources for the VDI in response to a selection of the resource model through the user interface.
    Type: Application
    Filed: July 29, 2022
    Publication date: December 21, 2023
    Inventors: Yao Zhang, Wenping Fan, Qichen Hao, Frank Stephen Taylor, Wei Tian, Puhui Meng
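
A hedged Python sketch of the sizing loop described above: several resource models each forecast future VDI demand, an anticipated cost is attached to each forecast, and the selected model (hard-coded here in place of the user-interface selection) drives the allocation. The two toy models and the cost constant are illustrative assumptions.

```python
# Sketch of multi-model VDI demand forecasting with per-model anticipated costs.
# Models, data, and the cost constant are assumptions for illustration.

history = [40, 42, 45, 44, 48, 50, 53]       # past concurrent-desktop counts

def moving_average_model(h, window=3):
    return sum(h[-window:]) / window

def linear_trend_model(h):
    return h[-1] + (h[-1] - h[0]) / (len(h) - 1)

COST_PER_DESKTOP = 0.75                       # assumed hourly cost per provisioned desktop

models = {"moving_average": moving_average_model, "linear_trend": linear_trend_model}
predictions = {name: m(history) for name, m in models.items()}      # one prediction per resource model
costs = {name: p * COST_PER_DESKTOP for name, p in predictions.items()}

selected = "linear_trend"                     # in the patent this selection comes through the UI
allocation = round(predictions[selected])     # implement the chosen model to manage allocation
print(predictions, costs, allocation)
```
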
  • Patent number: 11847554
    Abstract: The present disclosure discloses a data processing method and related products, in which the data processing method includes: generating, by a general-purpose processor, a binary instruction according to device information of an AI processor, and generating an AI learning task according to the binary instruction; transmitting, by the general-purpose processor, the AI learning task to the cloud AI processor for running; receiving, by the general-purpose processor, a running result corresponding to the AI learning task; and determining, by the general-purpose processor, an offline running file according to the running result, where the offline running file is generated according to the device information of the AI processor and the binary instruction when the running result satisfies a preset requirement. By implementing the present disclosure, the debugging between the AI algorithm model and the AI processor can be achieved in advance.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: December 19, 2023
    Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
    Inventors: Yao Zhang, Xiaofu Meng, Shaoli Liu
  • Patent number: 11841816
    Abstract: The present application relates to a network-on-chip data processing method. The method is applied to a network-on-chip processing system, the network-on-chip processing system is used for executing machine learning calculation, and the network-on-chip processing system comprises a storage device and a calculation device. The method comprises: accessing the storage device in the network-on-chip processing system by means of a first calculation device in the network-on-chip processing system and obtaining first operation data; performing an operation on the first operation data by means of the first calculation device to obtain a first operation result; and sending the first operation result to a second calculation device in the network-on-chip processing system. According to the method, operation overhead can be reduced and data read/write efficiency can be improved.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: December 12, 2023
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Yao Zhang, Shaoli Liu, Jun Liang, Yu Chen
  • Patent number: 11836497
    Abstract: Provided is an operation module, which includes a memory, a register unit, a dependency relationship processing unit, an operation unit, and a control unit. The memory is configured to store a vector, the register unit is configured to store an extension instruction, and the control unit is configured to acquire and parse the extension instruction, so as to obtain a first operation instruction and a second operation instruction. An execution sequence of the first operation instruction and the second operation instruction can be determined, and an input vector of the first operation instruction can be read from the memory. The operation unit is configured to convert an expression mode of the input data index of the first operation instruction and to screen data, and to execute the first and second operation instructions according to the execution sequence, so as to obtain a result of the extension instruction.
    Type: Grant
    Filed: July 23, 2018
    Date of Patent: December 5, 2023
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Bingrui Wang, Shengyuan Zhou, Yao Zhang
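
A loose Python sketch of the extension-instruction handling described above: the instruction is parsed into a first and a second operation instruction, their execution order is determined from the data dependency between them, the index expression of the first operation is converted (here, bitmask to positions) and used to screen the input vector, and both operations then execute in order. The instruction layout and the two operations are assumptions for illustration only.

```python
# Sketch: parse an extension instruction into two operation instructions, order them
# by dependency, convert the index expression mode, screen data, and execute.
# The layout and operations are illustrative assumptions.

memory = {"vec": [3.0, 1.0, 4.0, 1.5, 9.0]}

extension_instruction = {
    "first":  {"op": "gather", "src": "vec", "index_bitmask": [1, 0, 1, 0, 1], "dst": "tmp"},
    "second": {"op": "scale",  "src": "tmp", "factor": 2.0, "dst": "out"},
}

def parse(instr):
    first, second = instr["first"], instr["second"]
    # The second instruction reads what the first writes, so the first must run first.
    return [first, second] if second["src"] == first["dst"] else [second, first]

for op in parse(extension_instruction):
    if op["op"] == "gather":
        idx = [i for i, bit in enumerate(op["index_bitmask"]) if bit]   # convert index expression mode
        memory[op["dst"]] = [memory[op["src"]][i] for i in idx]         # screen data
    elif op["op"] == "scale":
        memory[op["dst"]] = [x * op["factor"] for x in memory[op["src"]]]

print(memory["out"])   # -> [6.0, 8.0, 18.0]
```
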
  • Patent number: 11823618
    Abstract: The present application provides a displaying device and a controlling method thereof, which relates to the technical field of displaying. The displaying device can solve the problem of residual images generated at power-off and greatly improve the product quality and the user experience.
    Type: Grant
    Filed: April 25, 2021
    Date of Patent: November 21, 2023
    Assignees: BOE Intelligent IoT Technology Co., LTD., BOE Technology Group Co., Ltd.
    Inventors: Xingchen Liu, Jinghua Yang, Wei Lin, Yushun Jie, Yao Zhang, Guoshuai Zhu
  • Patent number: 11817378
    Abstract: A pin map covers a surface area of a layer of a printed circuit board (PCB). The pin map includes a plurality of electrical designations for each pin in the pin map and a plurality of empty spaces within the pin map. Each electrical designation may be assigned to a pin on the pin map. Each electrical designation includes a positive polarity (P+) pin, a negative polarity (P−) pin, or an electrical ground (G) pin. If a space in the pin map does not have an electrical designation, then it may include an empty space/plain portion of the printed circuit board (PCB). The pin map may include a plurality of rows and a first repeating pin polarity pattern. The first repeating pin polarity pattern may include a lane unit tile. The pin map may help couple two circuit elements together that are attached to one layer of a PCB.
    Type: Grant
    Filed: June 30, 2021
    Date of Patent: November 14, 2023
    Assignee: QUALCOMM INCORPORATED
    Inventors: Nelly Chen, Gary Yao Zhang, Michael Randy May, Shrinivas Gopalan Uppili, Varin Sriboonlue
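
A small Python sketch of a repeating pin-polarity pattern like the one described above: a "lane unit tile" of P+/P−/G designations and empty spaces is tiled across the rows of one PCB layer. The particular tile contents and dimensions are an assumption, not the layout claimed by the patent.

```python
# Tile a lane unit of pin polarity designations across a pin map grid.
# The tile contents are an illustrative assumption.

LANE_UNIT_TILE = [
    ["P+", "P-", "G",  "."],    # "." marks an empty space / plain board area
    ["G",  "P+", "P-", "."],
]

def build_pin_map(rows: int, cols: int) -> list:
    tile_rows, tile_cols = len(LANE_UNIT_TILE), len(LANE_UNIT_TILE[0])
    return [
        [LANE_UNIT_TILE[r % tile_rows][c % tile_cols] for c in range(cols)]
        for r in range(rows)
    ]

for row in build_pin_map(4, 8):
    print(" ".join(f"{p:>2}" for p in row))
```
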
  • Patent number: 11809360
    Abstract: The present application relates to a network-on-chip data processing method. The method is applied to a network-on-chip processing system, the network-on-chip processing system is used for executing machine learning calculation, and the network-on-chip processing system comprises a storage device and a calculation device. The method comprises: accessing the storage device in the network-on-chip processing system by means of a first calculation device in the network-on-chip processing system, and obtaining first operation data; performing an operation on the first operation data by means of the first calculation device to obtain a first operation result; and sending the first operation result to a second calculation device in the network-on-chip processing system. According to the method, operation overhead can be reduced and data read/write efficiency can be improved.
    Type: Grant
    Filed: December 29, 2021
    Date of Patent: November 7, 2023
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Shaoli Liu, Zhen Li, Yao Zhang