Patents by Inventor Hongbo Zeng
Hongbo Zeng has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12001955
Abstract: The present disclosure provides a data processing method, a board card device, computer equipment, and a storage medium. The board card provided in the present disclosure includes a storage device, an interface apparatus, a control device, and an artificial intelligence chip of a data processing device, where the artificial intelligence chip is connected to the storage device, the control device, and the interface apparatus, respectively. The control device is configured to monitor a state of the artificial intelligence chip. According to the embodiments of the present disclosure, the data to be quantized is quantized according to the corresponding quantization parameter, which may reduce the storage space of the data while ensuring precision, ensure the precision and reliability of the operation result, and improve operation efficiency.
Type: Grant
Filed: August 20, 2020
Date of Patent: June 4, 2024
Inventors: Shaoli Liu, Shiyi Zhou, Xishan Zhang, Hongbo Zeng
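The patent text itself contains no code; purely as a hedged illustration of the general idea the abstract describes — quantizing data according to a quantization parameter to shrink storage while bounding precision loss — the sketch below uses a simple symmetric scale-to-int8 scheme. The function names and the specific scheme are assumptions for illustration, not taken from the patent.

```python
def quantize(data, num_bits=8):
    """Quantize float data to signed integers using a scale parameter.

    The scale (the quantization parameter) is derived from the maximum
    absolute value of the data, so the full integer range is used while
    the largest values stay representable.
    """
    qmax = 2 ** (num_bits - 1) - 1                    # e.g. 127 for int8
    scale = max(abs(x) for x in data) / qmax          # quantization parameter
    q = [max(-qmax - 1, min(qmax, round(x / scale))) for x in data]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the quantized integers."""
    return [v * scale for v in q]

data = [0.5, -1.2, 3.3, -0.07]
q, scale = quantize(data)
restored = dequantize(q, scale)
# 8-bit storage is 4x smaller than float32, and the round-trip error
# is bounded by half a quantization step (scale / 2).
assert max(abs(r - x) for r, x in zip(restored, data)) <= scale / 2 + 1e-9
```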
-
Patent number: 12003844
Abstract: A camera assembly or an electronic device includes a camera and a flexible circuit board. The camera can swing around at least one of a first axis, a second axis, and a third axis, where the third axis is used as an optical axis of the camera. In an extension direction of the flexible circuit board, one end of the flexible circuit board is connected to a circuit board in the camera, and the other end of the flexible circuit board is fastened to and electrically connected to a mainboard. A bent redundant structure for releasing stress is disposed between the two ends of the flexible circuit board. The redundant structure can extend in a direction of rotation around at least one of the first axis, the second axis, or the third axis.
Type: Grant
Filed: June 24, 2022
Date of Patent: June 4, 2024
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Xin Wang, Hongbo Luo, Kuni Lee, Linghui Zeng
-
Publication number: 20240111536
Abstract: The present disclosure provides a data processing apparatus and related products. The products include a control module including an instruction caching unit, an instruction processing unit, and a storage queue unit. The instruction caching unit is configured to store computation instructions associated with an artificial neural network operation; the instruction processing unit is configured to parse the computation instructions to obtain a plurality of operation instructions; and the storage queue unit is configured to store an instruction queue, where the instruction queue includes a plurality of operation instructions or computation instructions to be executed in the sequence of the queue. By adopting the above-mentioned method, the present disclosure can improve the operation efficiency of related products when performing operations of a neural network model.
Type: Application
Filed: December 7, 2023
Publication date: April 4, 2024
Applicant: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Shaoli Liu, Bingrui Wang, Xiaoyong Zhou, Yimin Zhuang, Huiying Lan, Jun Liang, Hongbo Zeng
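As a hedged toy model only of the control-module flow this abstract describes — cache computation instructions, parse each into operation instructions, and issue them in queue order — the sketch below uses invented class and method names; nothing here is taken from the patent's actual hardware design.

```python
from collections import deque

class ControlModule:
    """Toy model of the described control module: an instruction caching
    unit, an instruction processing (parsing) step, and a FIFO storage
    queue that preserves execution order."""

    def __init__(self):
        self.cache = []        # instruction caching unit
        self.queue = deque()   # storage queue unit (FIFO)

    def store(self, computation_instruction):
        """Cache a computation instruction for later parsing."""
        self.cache.append(computation_instruction)

    def parse_all(self):
        """Instruction processing unit: expand each computation
        instruction into its constituent operation instructions."""
        for instr in self.cache:
            for op in instr.split(";"):
                self.queue.append(op.strip())

    def next_op(self):
        """Operation instructions are executed in queue (arrival) order."""
        return self.queue.popleft()

cm = ControlModule()
cm.store("load A; load B; matmul A B")
cm.parse_all()
assert cm.next_op() == "load A"
assert cm.next_op() == "load B"
assert cm.next_op() == "matmul A B"
```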
-
Publication number: 20240072973
Abstract: The present application relates to devices and components, including apparatus, systems, and methods, to provide adaptive physical downlink control channel (PDCCH) monitoring. In particular, a unified signaling technique for adaptive PDCCH search space monitoring is described. This signaling can indicate either one of, or a combination of, switching and skipping. In an example, the signaling includes multiple parts. In a first part, radio resource control (RRC) signaling is sent from the network to the device to configure the device for switching only, skipping only, or switching and skipping. In a second part, a first DCI is sent from the network to the device to indicate the particular PDCCH monitoring configuration to use for the PDCCH search space monitoring. Thereafter, the network sends a second DCI in the relevant PDCCH search space, where this second DCI schedules traffic. The device performs blind decoding to determine the second DCI in order to exchange the traffic.
Type: Application
Filed: January 7, 2022
Publication date: February 29, 2024
Applicant: Apple Inc.
Inventors: Huaning Niu, Hongbo Yan, Dawei Zhang, Seyed Ali Akbar Fakoorian, Oghenekome Oteri, Sigen Ye, Weidong Yang, Wei Zeng, Hong He, Haitong Sun
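The two-part signaling in this abstract (RRC configures the allowed behaviors; a first DCI then selects the concrete one) can be caricatured as a tiny state model. This is a hedged sketch only: the enum values, command strings, and validation logic are invented for illustration and do not reflect actual 3GPP signaling formats.

```python
from enum import Enum

class RrcMode(Enum):
    """First part: RRC signaling configures what the device supports."""
    SWITCHING_ONLY = 1
    SKIPPING_ONLY = 2
    SWITCHING_AND_SKIPPING = 3

def apply_dci(mode, dci_command):
    """Second part (toy): the first DCI selects a concrete PDCCH
    monitoring behavior, constrained by the RRC-configured mode.
    Returns the behavior the device applies before blind-decoding
    the scheduling (second) DCI."""
    allowed = {
        RrcMode.SWITCHING_ONLY: {"switch"},
        RrcMode.SKIPPING_ONLY: {"skip"},
        RrcMode.SWITCHING_AND_SKIPPING: {"switch", "skip"},
    }
    if dci_command not in allowed[mode]:
        raise ValueError(f"{dci_command!r} not permitted under {mode.name}")
    return dci_command

assert apply_dci(RrcMode.SWITCHING_AND_SKIPPING, "skip") == "skip"
```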
-
Patent number: 11886880
Abstract: The present disclosure provides a data processing apparatus and related products. The products include a control module including an instruction caching unit, an instruction processing unit, and a storage queue unit. The instruction caching unit is configured to store computation instructions associated with an artificial neural network operation; the instruction processing unit is configured to parse the computation instructions to obtain a plurality of operation instructions; and the storage queue unit is configured to store an instruction queue, where the instruction queue includes a plurality of operation instructions or computation instructions to be executed in the sequence of the queue. By adopting the above-mentioned method, the present disclosure can improve the operation efficiency of related products when performing operations of a neural network model.
Type: Grant
Filed: June 24, 2022
Date of Patent: January 30, 2024
Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Shaoli Liu, Bingrui Wang, Xiaoyong Zhou, Yimin Zhuang, Huiying Lan, Jun Liang, Hongbo Zeng
-
Publication number: 20230039892
Abstract: An embodiment of the present disclosure provides an operation apparatus which includes a storage unit, a control unit, and a compute unit. The technical solution provided in this disclosure can reduce the resource consumption of a convolution operation, improve the speed of the convolution operation, and reduce operation time.
Type: Application
Filed: September 3, 2020
Publication date: February 9, 2023
Inventors: Yingnan Zhang, Hongbo Zeng, Yao Zhang, Shaoli Liu, Di Huang, Shiyi Zhou, Xishan Zhang, Chang Liu, Jiaming Guo, Yufeng Gao
-
Publication number: 20220414183
Abstract: The present disclosure provides a Winograd convolution operation method, a Winograd convolution operation apparatus, a device, and a storage medium. The apparatus includes processors and a memory, where the memory is configured to store program code, and the processors are configured to call the program code stored in the memory and execute the operation method. Through the operation method, apparatus, device, and storage medium of the present disclosure, performance loss of a computer system may be reduced, operation speed may be improved, and processing efficiency may be improved.
Type: Application
Filed: September 3, 2020
Publication date: December 29, 2022
Inventors: Yingnan Zhang, Hongbo Zeng, Yao Zhang, Shaoli Liu, Di Huang, Shiyi Zhou, Xishan Zhang, Chang Liu, Jiaming Guo, Yufeng Gao
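For readers unfamiliar with why Winograd convolution improves operation speed: the textbook F(2,3) transform computes two outputs of a 3-tap filter with 4 multiplications instead of the 6 a direct computation needs. The sketch below shows that standard transform only as general background — it is not taken from, and may differ from, the method claimed in this publication.

```python
def winograd_f23(d, g):
    """Winograd F(2,3): two correlation outputs of a 3-tap filter g over
    a 4-element input tile d, using 4 multiplications instead of 6."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return [m1 + m2 + m3, m2 - m3 - m4]

def direct(d, g):
    """Direct sliding-window correlation, for comparison (6 multiplies)."""
    return [sum(d[i + j] * g[j] for j in range(3)) for i in range(2)]

d, g = [1.0, 2.0, 3.0, 4.0], [0.5, -1.0, 2.0]
# Both paths produce the same two outputs.
assert winograd_f23(d, g) == direct(d, g)
```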
-
Publication number: 20220405349
Abstract: This disclosure relates to a data processing method, a data processing apparatus, and related products. The products include a control unit. The control unit includes an instruction caching unit, an instruction processing unit, and a storage queue unit. The instruction caching unit is used for storing a calculation instruction associated with an artificial neural network computation; the instruction processing unit is used for parsing the calculation instruction to obtain a plurality of computation instructions; and the storage queue unit is used for storing an instruction queue, where the instruction queue includes the plurality of computation instructions or calculation instructions to be executed according to the front-to-back sequence of the queue. Through the above method of this disclosure, the computation efficiency of the related products during a neural network model computation may be improved.
Type: Application
Filed: October 27, 2020
Publication date: December 22, 2022
Inventors: Yingnan Zhang, Hongbo Zeng, Yao Zhang, Shaoli Liu, Di Huang, Shiyi Zhou, Xishan Zhang, Chang Liu, Jiaming Guo, Yufeng Gao
-
Publication number: 20220366238
Abstract: A method for adjusting quantization parameters of a recurrent neural network according to an embodiment of the present disclosure may determine a target iteration interval according to the variation range of the data to be quantized, so that quantization parameters are adjusted during the recurrent neural network computation according to the target iteration interval. The quantization parameter adjustment method, apparatus, and related products of the present disclosure may improve the quantization precision, quantization efficiency, and computation efficiency of the recurrent neural network.
Type: Application
Filed: August 20, 2020
Publication date: November 17, 2022
Inventors: Shaoli Liu, Shiyi Zhou, Xishan Zhang, Hongbo Zeng
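The core idea — pick a target iteration interval from how much the data to be quantized is varying, updating parameters more often when the data is volatile — can be caricatured with a toy heuristic. This sketch is an assumption-laden illustration only; the constants, the variation measure, and the formula are invented and are not the patent's method.

```python
def target_interval(history, base=100, min_interval=10):
    """Toy heuristic: the larger the recent variation range of the data
    to be quantized, the shorter the interval (in iterations) between
    quantization-parameter updates."""
    variation = max(history) - min(history)          # data variation range
    mean_mag = sum(abs(x) for x in history) / len(history)
    ratio = variation / (mean_mag + 1e-12)           # normalized variation
    return max(min_interval, int(base / (1 + ratio)))

stable = [1.0, 1.01, 0.99, 1.0]     # barely changing -> long interval
volatile = [0.1, 2.0, -1.5, 0.8]    # swinging widely -> short interval
assert target_interval(volatile) < target_interval(stable)
```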
-
Patent number: 11494818
Abstract: A negotiation device conducting automated negotiation by transmitting and receiving proposals to and from a counterpart device includes an estimating unit configured to estimate a negotiation characteristic of the counterpart device by using information corresponding to a proposal transmitted by the negotiation device.
Type: Grant
Filed: January 25, 2018
Date of Patent: November 8, 2022
Assignee: NEC CORPORATION
Inventors: Hongbo Zeng, Shinji Nakadai
-
Publication number: 20220334840
Abstract: The present disclosure provides a data processing apparatus and related products. The products include a control module including an instruction caching unit, an instruction processing unit, and a storage queue unit. The instruction caching unit is configured to store computation instructions associated with an artificial neural network operation; the instruction processing unit is configured to parse the computation instructions to obtain a plurality of operation instructions; and the storage queue unit is configured to store an instruction queue, where the instruction queue includes a plurality of operation instructions or computation instructions to be executed in the sequence of the queue. By adopting the above-mentioned method, the present disclosure can improve the operation efficiency of related products when performing operations of a neural network model.
Type: Application
Filed: June 24, 2022
Publication date: October 20, 2022
Applicant: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Shaoli Liu, Bingrui Wang, Xiaoyong Zhou, Yimin Zhuang, Huiying Lan, Jun Liang, Hongbo Zeng
-
Patent number: 11385895
Abstract: The present disclosure provides a data processing apparatus and related products. The products include a control module including an instruction caching unit, an instruction processing unit, and a storage queue unit. The instruction caching unit is configured to store computation instructions associated with an artificial neural network operation; the instruction processing unit is configured to parse the computation instructions to obtain a plurality of operation instructions; and the storage queue unit is configured to store an instruction queue, where the instruction queue includes a plurality of operation instructions or computation instructions to be executed in the sequence of the queue. By adopting the above-mentioned method, the present disclosure can improve the operation efficiency of related products when performing operations of a neural network model.
Type: Grant
Filed: September 29, 2021
Date of Patent: July 12, 2022
Assignee: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Shaoli Liu, Bingrui Wang, Xiaoyong Zhou, Yimin Zhuang, Huiying Lan, Jun Liang, Hongbo Zeng
-
Publication number: 20220019439
Abstract: The present disclosure provides a data processing apparatus and related products. The products include a control module including an instruction caching unit, an instruction processing unit, and a storage queue unit. The instruction caching unit is configured to store computation instructions associated with an artificial neural network operation; the instruction processing unit is configured to parse the computation instructions to obtain a plurality of operation instructions; and the storage queue unit is configured to store an instruction queue, where the instruction queue includes a plurality of operation instructions or computation instructions to be executed in the sequence of the queue. By adopting the above-mentioned method, the present disclosure can improve the operation efficiency of related products when performing operations of a neural network model.
Type: Application
Filed: September 29, 2021
Publication date: January 20, 2022
Applicant: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Shaoli Liu, Bingrui Wang, Xiaoyong Zhou, Yimin Zhuang, Huiying Lan, Jun Liang, Hongbo Zeng
-
Publication number: 20210264270
Abstract: The present disclosure provides a data processing method, a board card device, computer equipment, and a storage medium. The board card provided in the present disclosure includes a storage device, an interface apparatus, a control device, and an artificial intelligence chip of a data processing device, where the artificial intelligence chip is connected to the storage device, the control device, and the interface apparatus, respectively. The control device is configured to monitor a state of the artificial intelligence chip. According to the embodiments of the present disclosure, the data to be quantized is quantized according to the corresponding quantization parameter, which may reduce the storage space of the data while ensuring precision, ensure the precision and reliability of the operation result, and improve operation efficiency.
Type: Application
Filed: August 20, 2020
Publication date: August 26, 2021
Inventors: Shaoli Liu, Shiyi Zhou, Xishan Zhang, Hongbo Zeng
-
Publication number: 20210117768
Abstract: The present disclosure provides a data processing method, a data processing device, computer equipment, and a storage medium. The data processing device includes a board card, and the board card provided in the present disclosure includes a storage component, an interface device, a control component, and an artificial intelligence chip of a data processing device. According to the data processing method, the data processing device, the computer equipment, and the storage medium provided in the embodiments of the present disclosure, data to be quantized is quantized according to a corresponding quantization parameter, which may reduce the storage space of the data while ensuring precision, as well as ensure the accuracy and reliability of the operation result and improve operation efficiency.
Type: Application
Filed: December 30, 2020
Publication date: April 22, 2021
Inventors: Shaoli Liu, Shiyi Zhou, Xishan Zhang, Hongbo Zeng
-
Publication number: 20210049660
Abstract: A negotiation device conducting automated negotiation by transmitting and receiving proposals to and from a counterpart device includes an estimating unit configured to estimate a negotiation characteristic of the counterpart device by using information corresponding to a proposal transmitted by the negotiation device.
Type: Application
Filed: January 25, 2018
Publication date: February 18, 2021
Applicant: NEC Corporation
Inventors: Hongbo Zeng, Shinji Nakadai
-
Patent number: 10766883
Abstract: The instant invention provides compounds which are JAK inhibitors and, as such, are useful for the treatment of JAK-mediated diseases such as rheumatoid arthritis, asthma, COPD, and cancer.
Type: Grant
Filed: December 7, 2017
Date of Patent: September 8, 2020
Assignee: Merck Sharp & Dohme Corp.
Inventors: Peter Fuller, Jason Brubaker, Hongbo Zeng, Joshua Close, Jonathan Young
-
Patent number: D916352
Type: Grant
Filed: September 24, 2020
Date of Patent: April 13, 2021
Inventor: Hongbo Zeng
-
Patent number: D1007435
Type: Grant
Filed: February 22, 2022
Date of Patent: December 12, 2023
Assignee: BITMAIN TECHNOLOGIES INC.
Inventors: Shaoming Huang, Hongbo Zeng, Mingliang Hao, Hangkong Hu, Yurong Peng, Zhihao Zhu
-
Patent number: D1007436
Type: Grant
Filed: February 22, 2022
Date of Patent: December 12, 2023
Assignee: BITMAIN TECHNOLOGIES INC.
Inventors: Shaoming Huang, Hongbo Zeng, Mingliang Hao, Hangkong Hu, Yurong Peng, Zhihao Zhu