Patents by Inventor Zheng Qi

Zheng Qi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240124451
    Abstract: Novel compounds of the structural formula (I), and the pharmaceutically acceptable salts thereof, are inhibitors of NLRP3 and may be useful in the treatment, prevention, management, amelioration, control and suppression of diseases mediated by NLRP3. The compounds of the present invention may be useful in the treatment, prevention or management of diseases, disorders and conditions mediated by NLRP3 such as, but not limited to, gout, pseudogout, CAPS, NASH fibrosis, heart failure, idiopathic pericarditis, atopic dermatitis, inflammatory bowel disease, Alzheimer's Disease, Parkinson's Disease and traumatic brain injury.
    Type: Application
    Filed: September 21, 2023
    Publication date: April 18, 2024
    Applicant: Merck Sharp & Dohme LLC
    Inventors: Donna A.A.W. Hayes, Prabha Karnachi, Madeleine Eileen Kieffer, Kyle S. McClymont, Rohan Rajiv Merchant, Essam Metwally, Anilkumar G. Nair, Ning Qi, Jillian Rose Sanzone, Nunzio Sciammetta, Emma H. Southgate, Zheng Tan, Brandon M. Taoka
  • Patent number: 11947835
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for controlling, by an on-chip memory controller, a plurality of hardware components that are configured to perform computations to access a shared memory. One of the on-chip memory controllers includes at least one backside arbitration controller communicatively coupled with a memory bank group and a first hardware component, wherein the at least one backside arbitration controller is configured to perform bus arbitrations to determine whether the first hardware component can access the memory bank group using a first memory access protocol; and a frontside arbitration controller communicatively coupled with the memory bank group and a second hardware component, wherein the frontside arbitration controller is configured to perform bus arbitrations to determine whether the second hardware component can access the memory bank group using a second memory access protocol different from the first memory access protocol. (See the sketch following this entry.)
    Type: Grant
    Filed: September 21, 2021
    Date of Patent: April 2, 2024
    Assignee: Black Sesame Technologies Inc.
    Inventors: Zheng Qi, Yi Wang, Yanfeng Wang
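    Illustrative sketch: the split backside/frontside arbitration described in the entry above can be pictured as two independent arbiters granting access to the same memory bank group, each on behalf of components that speak a different access protocol. The Python model below is a hedged behavioral sketch only; the class names, the round-robin policy, and the protocol labels are assumptions, not details taken from the patent.

      # Hypothetical behavioral model of a memory bank group reached through two
      # independent arbiters, each serving requesters on a different protocol.
      class MemoryBankGroup:
          def __init__(self, num_banks=4):
              self.busy = [False] * num_banks          # per-bank busy flags

          def try_claim(self, bank):
              if self.busy[bank]:
                  return False
              self.busy[bank] = True
              return True

          def release(self, bank):
              self.busy[bank] = False


      class Arbiter:
          """Round-robin bus arbitration for one side (backside or frontside)."""

          def __init__(self, name, protocol, banks):
              self.name, self.protocol, self.banks = name, protocol, banks
              self.requests = []                       # pending (component, bank) pairs
              self.rr_index = 0

          def request(self, component, bank):
              self.requests.append((component, bank))

          def arbitrate(self):
              """Grant at most one pending request per cycle, round-robin."""
              for i in range(len(self.requests)):
                  idx = (self.rr_index + i) % len(self.requests)
                  component, bank = self.requests[idx]
                  if self.banks.try_claim(bank):
                      self.rr_index = (idx + 1) % len(self.requests)
                      self.requests.pop(idx)
                      return f"{self.name}: grant {component} -> bank {bank} via {self.protocol}"
              return f"{self.name}: no grant this cycle"


      banks = MemoryBankGroup()
      backside = Arbiter("backside", "streaming-burst", banks)    # first protocol (assumed)
      frontside = Arbiter("frontside", "word-addressed", banks)   # second protocol (assumed)
      backside.request("dma_engine", bank=0)
      frontside.request("cpu_core", bank=0)    # contends for the same bank
      frontside.request("cpu_core", bank=1)
      print(backside.arbitrate())              # claims bank 0
      print(frontside.arbitrate())             # bank 0 busy, falls through to bank 1
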
  • Patent number: 11687336
    Abstract: An extensible multi-precision data pipeline system, comprising, a local buffer that stores an input local data set in a local storage format, an input tensor shaper coupled to the local buffer that reads the input local data set and converts the input local data set into an input tensor data set having a tensor format of vector width N by tensor length L, a cascaded pipeline coupled to the input tensor shaper that routes the input tensor data set through at least one function stage resulting in an output tensor data set, an output tensor shaper coupled to the cascaded pipeline that converts the output tensor data set into an output local data set having the local storage format and wherein the output tensor shaper writes the output local data set to the local buffer.
    Type: Grant
    Filed: May 8, 2020
    Date of Patent: June 27, 2023
    Assignee: Black Sesame Technologies Inc.
    Inventors: Yi Wang, Zheng Qi, Hui Wang, Zheng Li
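    Illustrative sketch: the pipeline in the entry above reads a locally stored buffer, reshapes it into an N-by-L tensor, routes it through cascaded function stages, and writes the result back in the local format. The NumPy sketch below is a hypothetical software analogue of that flow; the flat local-buffer format and the two example stages are assumptions for illustration.

      import numpy as np

      # Software analogue of shaper -> cascaded stages -> shaper. The "local storage
      # format" is modeled as a flat 1-D buffer; N and L are the vector width and
      # tensor length named in the abstract.
      N, L = 8, 16

      def input_tensor_shaper(local_buffer):
          """Read the flat local buffer and reshape it into an N x L tensor."""
          return np.asarray(local_buffer, dtype=np.float32).reshape(N, L)

      def output_tensor_shaper(tensor):
          """Convert an N x L tensor back into the flat local format."""
          return tensor.reshape(-1)

      # Example cascaded pipeline: each stage is one function applied in order.
      stages = [
          lambda t: t * 2.0,             # scale stage (illustrative)
          lambda t: np.maximum(t, 0.0),  # activation stage (illustrative)
      ]

      def run_pipeline(local_buffer):
          tensor = input_tensor_shaper(local_buffer)
          for stage in stages:           # route the tensor through each function stage
              tensor = stage(tensor)
          return output_tensor_shaper(tensor)

      local_buffer = np.arange(-64, 64, dtype=np.float32)   # 128 = N * L values
      result = run_pipeline(local_buffer)
      print(result.shape)                # (128,) -- back in the local (flat) format
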
  • Publication number: 20230177311
    Abstract: The present invention discloses a graph partitioning system for running neural networks on resource-constrained hardware systems. The graph partitioning system partitions a neural network graph into a series of sub-graphs and allows the multiple sub-graphs to be executed on the available hardware subsystems. The system is driven by a cost function based on the estimated computation time and memory bandwidth of the partitioned sub-graphs. The graph partitioning system uses a cycle-estimation model of the hardware that runs fast and parameterizes memory latency. The graph partitioning system supports heterogeneous partitioning across different types of accelerators such as CPU, GPU and ASIC. The present invention also discloses a method for partitioning a neural network graph into a series of sub-graphs. (See the sketch following this entry.)
    Type: Application
    Filed: December 8, 2021
    Publication date: June 8, 2023
    Inventors: Wei Zuo, Qiang Zhang, Chenhao Fang, Zheng Qi
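    Illustrative sketch: one simple way to picture a cost-driven partitioner is a greedy pass over a topologically ordered layer list that closes a sub-graph whenever the accumulated cost (estimated compute cycles plus a memory-bandwidth term) would exceed a budget. The sketch below is hedged in exactly that sense; the cost weights, budget, and layer figures are invented and do not reproduce the patent's cost function.

      # Hypothetical greedy partitioner over a topologically ordered list of layers.
      def node_cost(node, mem_latency_factor=0.5):
          """Estimated cost of one node: compute cycles plus weighted memory traffic."""
          return node["compute_cycles"] + mem_latency_factor * node["bytes_moved"]

      def partition(nodes, budget):
          """Group consecutive nodes into sub-graphs whose summed cost fits the budget."""
          subgraphs, current, current_cost = [], [], 0.0
          for node in nodes:
              cost = node_cost(node)
              if current and current_cost + cost > budget:
                  subgraphs.append(current)
                  current, current_cost = [], 0.0
              current.append(node)
              current_cost += cost
          if current:
              subgraphs.append(current)
          return subgraphs

      layers = [
          {"name": "conv1", "compute_cycles": 400, "bytes_moved": 200},
          {"name": "conv2", "compute_cycles": 900, "bytes_moved": 600},
          {"name": "pool1", "compute_cycles": 100, "bytes_moved": 300},
          {"name": "fc1",   "compute_cycles": 700, "bytes_moved": 900},
      ]
      for i, sg in enumerate(partition(layers, budget=1500)):
          print(f"sub-graph {i}: {[n['name'] for n in sg]}")
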
  • Publication number: 20230090429
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for controlling, by an on-chip memory controller, a plurality of hardware components that are configured to perform computations to access a shared memory. One of the on-chip memory controllers includes at least one backside arbitration controller communicatively coupled with a memory bank group and a first hardware component, wherein the at least one backside arbitration controller is configured to perform bus arbitrations to determine whether the first hardware component can access the memory bank group using a first memory access protocol; and a frontside arbitration controller communicatively coupled with the memory bank group and a second hardware component, wherein the frontside arbitration controller is configured to perform bus arbitrations to determine whether the second hardware component can access the memory bank group using a second memory access protocol different from the first memory access protocol.
    Type: Application
    Filed: September 21, 2021
    Publication date: March 23, 2023
    Inventors: Zheng Qi, Yi Wang, Yanfeng Wang
  • Publication number: 20230066518
    Abstract: The present invention relates to a method and a system for performing depthwise separable convolution on input data in a convolutional neural network. The invention utilizes a heterogeneous architecture with a number of MAC arrays, including 1D MAC arrays and 2D MAC arrays with Winograd conversion logic, to perform depthwise separable convolution. The depthwise separable convolution uses fewer weight parameters and thus fewer multiplications while obtaining the same computation results as the traditional convolution. (See the sketch following this entry.)
    Type: Application
    Filed: August 30, 2021
    Publication date: March 2, 2023
    Inventors: Yi Wang, Zheng Qi
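    Illustrative sketch: the parameter and multiplication savings mentioned above come from splitting a standard convolution into a per-channel depthwise convolution followed by a 1x1 pointwise convolution. The NumPy sketch below shows that split and the weight-count comparison; it does not model the patented MAC-array or Winograd hardware.

      import numpy as np

      # Minimal depthwise separable convolution: depthwise 3x3 per channel, then
      # pointwise 1x1 to mix channels. Shapes and values are illustrative only.
      def depthwise_separable_conv(x, dw_kernels, pw_kernels):
          """x: (C, H, W); dw_kernels: (C, 3, 3); pw_kernels: (C_out, C)."""
          C, H, W = x.shape
          k = dw_kernels.shape[-1]
          out_h, out_w = H - k + 1, W - k + 1
          depthwise = np.zeros((C, out_h, out_w), dtype=x.dtype)
          for c in range(C):                       # one 3x3 filter per input channel
              for i in range(out_h):
                  for j in range(out_w):
                      depthwise[c, i, j] = np.sum(x[c, i:i+k, j:j+k] * dw_kernels[c])
          # Pointwise 1x1 convolution mixes channels.
          return np.tensordot(pw_kernels, depthwise, axes=([1], [0]))

      C_in, C_out, k = 32, 64, 3
      standard_weights = C_out * C_in * k * k              # standard convolution
      separable_weights = C_in * k * k + C_out * C_in      # depthwise + pointwise
      print(standard_weights, separable_weights)           # 18432 vs 2336

      x = np.random.rand(4, 8, 8).astype(np.float32)
      out = depthwise_separable_conv(
          x, np.random.rand(4, 3, 3).astype(np.float32),
          np.random.rand(6, 4).astype(np.float32))
      print(out.shape)                                     # (6, 6, 6)
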
  • Publication number: 20230013599
    Abstract: The present invention relates to convolutional neural networks (CNN) and methods for improving the computational efficiency of a multiply-accumulate (MAC) array structure. Specifically, the invention relates to cutting activation data into a number of tiles to increase overall computation efficiency. The invention discloses techniques to cut activation data into a plurality of tiles by using a 3-D convolution computation core and to support bigger tensor sizes. Lastly, the invention provides adaptive scheduling of the MAC array to achieve high utilization in multi-precision neural network acceleration. (See the sketch following this entry.)
    Type: Application
    Filed: July 8, 2021
    Publication date: January 19, 2023
    Inventors: Fen Zhou, Xiangdong Jin, Chengyu Xiong, Zheng Qi
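    Illustrative sketch: cutting activation data into tiles can be pictured as slicing a (C, H, W) tensor into fixed-size windows that each fit the MAC array. The sketch below is a hedged illustration; the tile dimensions and the handling of smaller edge tiles are assumptions, not the patented scheduling scheme.

      import numpy as np

      # Hypothetical tiling of an activation tensor into MAC-array-sized pieces.
      def tile_activation(activation, tile_h, tile_w):
          """Yield (row, col, tile) pieces of a (C, H, W) activation tensor."""
          C, H, W = activation.shape
          for top in range(0, H, tile_h):
              for left in range(0, W, tile_w):
                  yield top, left, activation[:, top:top + tile_h, left:left + tile_w]

      activation = np.arange(2 * 7 * 10, dtype=np.float32).reshape(2, 7, 10)
      for top, left, tile in tile_activation(activation, tile_h=4, tile_w=4):
          # Edge tiles may be smaller than 4x4; a real scheduler would pad or mask them.
          print(f"tile at ({top},{left}) shape {tile.shape}")
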
  • Patent number: 11544009
    Abstract: A system on a chip, including a first domain having a first processor, a first local memory coupled to the first processor, wherein the first local memory has a first memory format, and a first sub-network coupled to the first processor, a second domain having a second processor, a second local memory coupled to the second processor and a second sub-network coupled to the second processor, wherein the second local memory has a second memory format which differs from the first memory format, a multi-tier network coupled to the first sub-network and the second sub-network, a global memory coupled to the multi-tier network and a multi-port DDR controller coupled to the global memory to receive, transmit and share the first local memory having the first memory format and the second local memory having the second memory format based on predetermined criteria.
    Type: Grant
    Filed: May 10, 2021
    Date of Patent: January 3, 2023
    Assignee: Black Sesame Technologies Inc.
    Inventors: Zheng Qi, Qun Gu, Chengyu Xiong
  • Publication number: 20220414438
    Abstract: A method of constructing sub-graphs includes receiving a directed acyclic graph (DAG), partitioning the directed acyclic graph into at least one section, determining at least one hardware attribute, determining at least one DAG hardware limitation of the at least one section and determining a largest continuous node list of the at least one section in which the at least one hardware attribute meets the at least one DAG hardware limitation. (See the sketch following this entry.)
    Type: Application
    Filed: June 24, 2021
    Publication date: December 29, 2022
    Inventors: Ting Zhou, Fen Zhou, Yi Wang, Zexi Ye, Wei Zuo, Zheng Qi, Qiang Zhang
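    Illustrative sketch: the "largest continuous node list" step can be pictured as a sliding window over a topologically ordered section that keeps the summed hardware attribute within the hardware limitation. The sketch below models the attribute as per-node buffer bytes; that choice, and the example numbers, are assumptions for illustration.

      # Hypothetical selection of the longest run of consecutive nodes whose summed
      # hardware attribute (modeled as required buffer bytes) stays within a limit.
      def largest_continuous_node_list(section, limit):
          best, start, running = [], 0, 0
          for end, node in enumerate(section):
              running += node["buffer_bytes"]
              while running > limit and start <= end:   # shrink window from the left
                  running -= section[start]["buffer_bytes"]
                  start += 1
              if end - start + 1 > len(best):
                  best = section[start:end + 1]
          return best

      section = [
          {"name": "n0", "buffer_bytes": 40},
          {"name": "n1", "buffer_bytes": 10},
          {"name": "n2", "buffer_bytes": 20},
          {"name": "n3", "buffer_bytes": 50},
          {"name": "n4", "buffer_bytes": 15},
      ]
      picked = largest_continuous_node_list(section, limit=80)
      print([n["name"] for n in picked])   # longest consecutive run fitting the limit
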
  • Publication number: 20220391676
    Abstract: A method of quantization evaluation, including receiving a floating point data set, determining a floating point neural network model output utilizing the floating point data set, quantizing the floating point data set utilizing a quantization model yielding a quantized data set, determining a quantized neural network model output utilizing the quantized data set, determining whether an accuracy error between the floating point neural network model output and the quantized neural network model output exceeds a predetermined error tolerance, determining a floating point neural network tensor output utilizing the floating point data set if the predetermined error tolerance is exceeded, determining a quantized neural network tensor output utilizing the quantized data set if the predetermined error tolerance is exceeded, determining a per-tensor error based on the floating point neural network tensor output and the quantized neural network tensor output, and updating the quantization model based on the per-tensor error. (See the sketch following this entry.)
    Type: Application
    Filed: June 4, 2021
    Publication date: December 8, 2022
    Inventors: Zihao Zhao, Chenghao Zhang, Yi Wang, Zexi Ye, Hui Wang, Zheng Qi, Qiang Zhang
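    Illustrative sketch: the evaluation loop above compares floating point and quantized model outputs, and only when the accuracy error exceeds the tolerance does it drill down to a per-tensor error and update the quantization model. The sketch below runs that loop on a toy one-layer model; representing the quantization model as a single symmetric per-tensor scale, and the scale-update rule, are assumptions for illustration.

      import numpy as np

      # Toy evaluation loop: float model vs. quantized model, with a per-tensor
      # error computed only when the accuracy tolerance is exceeded.
      rng = np.random.default_rng(0)
      weights = rng.normal(size=(16, 8)).astype(np.float32)
      data = rng.normal(size=(4, 16)).astype(np.float32)

      def float_model(x):
          return x @ weights

      def quantize(x, scale):
          return np.clip(np.round(x / scale), -127, 127).astype(np.int8)

      def quantized_model(x, scale):
          xq = quantize(x, scale).astype(np.float32) * scale       # dequantize for the matmul
          wq = quantize(weights, scale).astype(np.float32) * scale
          return xq @ wq

      scale, tolerance = 0.5, 1e-2
      for step in range(5):
          float_out = float_model(data)
          quant_out = quantized_model(data, scale)
          accuracy_error = np.mean(np.abs(float_out - quant_out))
          if accuracy_error <= tolerance:
              break
          # Tolerance exceeded: compute a per-tensor error and refine the scale.
          per_tensor_error = np.mean(
              np.abs(data - quantize(data, scale).astype(np.float32) * scale))
          scale = max(scale * 0.5, np.max(np.abs(data)) / 127)     # simple update rule (assumed)
          print(f"step {step}: accuracy_error={accuracy_error:.4f}, "
                f"per_tensor_error={per_tensor_error:.4f}, new scale={scale:.4f}")
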
  • Patent number: 11516439
    Abstract: The invention discloses a method and a system for achieving a unified flow control system for multiple camera devices. The system allows inline and offline streams to share resources by converting the streams into one another. The resource sharing is performed by using different time intervals to process inline and offline streams. The system also includes a STALL & REDO operation to keep the whole image unbroken and to shut down the write stream from the ISP right away.
    Type: Grant
    Filed: August 30, 2021
    Date of Patent: November 29, 2022
    Assignee: Black Sesame Technologies Inc.
    Inventors: Ying Zhou, Zheng Qi, Chengyu Xiong
  • Patent number: 11508089
    Abstract: A method of wheel encoder to camera calibration, including receiving a LiDAR (Light Detection and Ranging) signal, receiving a camera signal, receiving a wheel encoder signal, calibrating the camera signal to the LiDAR signal, calibrating the wheel encoder signal to the LiDAR signal and calibrating the camera signal to the wheel encoder signal based on the calibration of the camera signal to the LiDAR signal and the wheel encoder signal to the LiDAR signal.
    Type: Grant
    Filed: March 5, 2021
    Date of Patent: November 22, 2022
    Assignee: Black Sesame Technologies Inc.
    Inventors: Yu Huang, Ruihui Di, Zheng Qi, Jizhang Shan
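    Illustrative sketch: because both the camera and the wheel encoder are first calibrated to the LiDAR, the camera-to-wheel-encoder calibration follows by composing one transform with the inverse of the other. The sketch below shows that composition with 4x4 rigid transforms; the example rotation and translation values are invented, not measured calibration results.

      import numpy as np

      # Chain the two LiDAR-referenced calibrations into a camera-to-wheel-encoder
      # extrinsic. Each calibration is modeled as a 4x4 rigid transform.
      def rigid_transform(yaw_rad, tx, ty, tz):
          """Build a 4x4 transform with a rotation about Z and a translation."""
          c, s = np.cos(yaw_rad), np.sin(yaw_rad)
          T = np.eye(4)
          T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
          T[:3, 3] = [tx, ty, tz]
          return T

      # Results of the two direct calibrations (illustrative values only):
      T_lidar_from_camera = rigid_transform(0.02, 1.2, 0.0, 0.8)   # camera -> LiDAR
      T_lidar_from_wheel = rigid_transform(-0.01, 0.0, 0.3, 0.2)   # wheel encoder -> LiDAR

      # Camera -> wheel encoder follows by composing one calibration with the
      # inverse of the other, so no direct camera/wheel-encoder measurement is needed.
      T_wheel_from_camera = np.linalg.inv(T_lidar_from_wheel) @ T_lidar_from_camera

      camera_point = np.array([0.0, 0.0, 0.0, 1.0])                # camera origin
      print(T_wheel_from_camera @ camera_point)                    # in wheel-encoder frame
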
  • Publication number: 20220284626
    Abstract: A method of wheel encoder to camera calibration, including receiving a LiDAR (Light Detection and Ranging) signal, receiving a camera signal, receiving a wheel encoder signal, calibrating the camera signal to the LiDAR signal, calibrating the wheel encoder signal to the LiDAR signal and calibrating the camera signal to the wheel encoder signal based on the calibration of the camera signal to the LiDAR signal and the wheel encoder signal to the LiDAR signal.
    Type: Application
    Filed: March 5, 2021
    Publication date: September 8, 2022
    Inventors: Yu Huang, Ruihui Di, Zheng Qi, Jizhang Shan
  • Patent number: 11315209
    Abstract: An example method of image signal processing, comprising at least one of, receiving a set of high priority signals, receiving a set of low priority signals, reconfiguring a first portion of a pipeline to route the high priority signals through an in-line mode process and reconfiguring a second portion of the pipeline to route the low priority signals through an offline mode process.
    Type: Grant
    Filed: May 8, 2020
    Date of Patent: April 26, 2022
    Assignee: Black Sesame Technologies Inc.
    Inventors: Ying Zhou, Zheng Qi
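    Illustrative sketch: the reconfiguration in the entry above routes high priority signals through an in-line path and low priority signals through an offline, queued path. The dispatcher below is a minimal hedged sketch of that routing idea; the path functions and signal names are placeholders, not the ISP's actual stages.

      from collections import deque

      # Hypothetical dispatcher: high-priority signals take the in-line (streaming)
      # path; low-priority signals are queued for the offline (memory-backed) path.
      offline_queue = deque()

      def inline_process(signal):
          return f"inline: processed {signal} immediately"

      def offline_process(signal):
          return f"offline: processed {signal} from memory"

      def route(signal, high_priority):
          if high_priority:
              return inline_process(signal)    # first pipeline portion: in-line mode
          offline_queue.append(signal)         # second pipeline portion: offline mode
          return f"offline: queued {signal}"

      print(route("preview_frame", high_priority=True))
      print(route("still_capture", high_priority=False))
      while offline_queue:                     # drain the offline path later
          print(offline_process(offline_queue.popleft()))
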
  • Publication number: 20220114413
    Abstract: An example fused convolutional layer, comprising a comparator capable of receiving a first zero point and a multiply-accumulation result, a first multiplexer coupled to the comparator, wherein the first multiplexer receives a plurality of power-of-two exponent values, a shift normalizer coupled to the first multiplexer, wherein the shift normalizer is capable of receiving the multiply-accumulation result and the plurality of power-of-two exponent values, and wherein the shift normalizer limits the quantization of the multiply-accumulation result to a power-of-two scale, and a second multiplexer coupled to an output of the shift normalizer and to the first multiplexer, which receives a second zero point and outputs an activation. (See the sketch following this entry.)
    Type: Application
    Filed: October 12, 2020
    Publication date: April 14, 2022
    Inventors: Zheng Qi, Qun Gu, Zheng Li, Chenghao Zhang, Tian Zhou, Zuoguan Wang
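    Illustrative sketch: constraining the requantization scale to a power of two means the rescale of the multiply-accumulation result reduces to an arithmetic shift applied between the two zero points. The integer sketch below shows that step; the bit widths, exponent, and zero-point values are assumptions for illustration, not the patented circuit.

      # Power-of-two requantization of a MAC accumulator modeled as a shift.
      def shift_normalize(mac_result, exponent, zero_point_in, zero_point_out):
          """Requantize a 32-bit MAC accumulator to 8 bits using a right shift."""
          centered = mac_result - zero_point_in          # remove the input zero point
          shifted = centered >> exponent                 # power-of-two scale: 2**-exponent
          activation = shifted + zero_point_out          # re-apply the output zero point
          return max(0, min(255, activation))            # clamp to an unsigned 8-bit range

      mac_result = 20_000                                # accumulated int32 partial sum
      print(shift_normalize(mac_result, exponent=8, zero_point_in=0, zero_point_out=128))
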
  • Publication number: 20210349718
    Abstract: An extensible multi-precision data pipeline system, comprising, a local buffer that stores an input local data set in a local storage format, an input tensor shaper coupled to the local buffer that reads the input local data set and converts the input local data set into an input tensor data set having a tensor format of vector width N by tensor length L, a cascaded pipeline coupled to the input tensor shaper that routes the input tensor data set through at least one function stage resulting in an output tensor data set, an output tensor shaper coupled to the cascaded pipeline that converts the output tensor data set into an output local data set having the local storage format and wherein the output tensor shaper writes the output local data set to the local buffer.
    Type: Application
    Filed: May 8, 2020
    Publication date: November 11, 2021
    Inventors: Yi Wang, Zheng Qi, Hui Wang, Zheng Li
  • Publication number: 20210350498
    Abstract: An example method of image signal processing, comprising at least one of, receiving a set of high priority signals, receiving a set of low priority signals, reconfiguring a first portion of a pipeline to route the high priority signals through an in-line mode process and reconfiguring a second portion of the pipeline to route the low priority signals through an offline mode process.
    Type: Application
    Filed: May 8, 2020
    Publication date: November 11, 2021
    Inventors: Ying Zhou, Zheng Qi
  • Publication number: 20210303216
    Abstract: A system on a chip, including a first domain having a first processor, a first local memory coupled to the first processor, wherein the first local memory has a first memory format, and a first sub-network coupled to the first processor, a second domain having a second processor, a second local memory coupled to the second processor and a second sub-network coupled to the second processor, wherein the second local memory has a second memory format which differs from the first memory format, a multi-tier network coupled to the first sub-network and the second sub-network, a global memory coupled to the multi-tier network and a multi-port DDR controller coupled to the global memory to receive, transmit and share the first local memory having the first memory format and the second local memory having the second memory format based on predetermined criteria.
    Type: Application
    Filed: May 10, 2021
    Publication date: September 30, 2021
    Inventors: Zheng Qi, Qun Gu, Chengyu Xiong
  • Publication number: 20200234396
    Abstract: A system on a chip, including a multi-port memory controller having a multi-level memory hierarchy, a multi-tier bus coupled to the multi-port memory controller to segregate memory access traffic based on the multi-level memory hierarchy, an interconnected plurality of networks on chip coupled to the multi-tier bus, a plurality of networked domains coupled to the plurality of networks on chip and at least one non-networked domain coupled directly to the multi-port memory controller.
    Type: Application
    Filed: April 11, 2019
    Publication date: July 23, 2020
    Inventors: Zheng Qi, Qun Gu, Chengyu Xiong
  • Patent number: 9954826
    Abstract: A method and system for secure and scalable key management for cryptographic processing of data is described herein. A method of secure key handling and cryptographic processing of data, comprising receiving a request from an entity to cryptographically process a block of data, the request including a key handle, wherein the key handle includes an authentication tag and an index; authenticating the requesting entity using the authentication tag; and referencing a plaintext key from a plurality of plaintext keys using the index if the requesting entity is authenticated successfully.
    Type: Grant
    Filed: January 5, 2015
    Date of Patent: April 24, 2018
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Mark Buer, Zheng Qi
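    Illustrative sketch: the flow in the entry above receives a key handle containing an index and an authentication tag, authenticates the requesting entity, and only then dereferences the plaintext key by index. The Python sketch below is a hedged illustration of that dispatch; the HMAC-based tag construction and the handle layout are assumptions, not the patented scheme.

      import hmac, hashlib, secrets

      # Hypothetical key-handle dispatch: the handle carries an index into a
      # protected key table plus an authentication tag bound to the requester.
      DEVICE_SECRET = secrets.token_bytes(32)                   # known only to the key manager
      KEY_TABLE = [secrets.token_bytes(16) for _ in range(4)]   # plaintext keys, never exported

      def issue_handle(entity_id: bytes, index: int) -> bytes:
          tag = hmac.new(DEVICE_SECRET, entity_id + index.to_bytes(2, "big"),
                         hashlib.sha256).digest()
          return index.to_bytes(2, "big") + tag                 # handle = index || auth tag

      def process_request(entity_id: bytes, handle: bytes, data: bytes) -> bytes:
          index = int.from_bytes(handle[:2], "big")
          expected = hmac.new(DEVICE_SECRET, entity_id + handle[:2],
                              hashlib.sha256).digest()
          if not hmac.compare_digest(handle[2:], expected):
              raise PermissionError("entity failed authentication for this key handle")
          key = KEY_TABLE[index]                                # reference plaintext key by index
          # Stand-in for the actual cryptographic processing of the data block:
          return hmac.new(key, data, hashlib.sha256).digest()

      handle = issue_handle(b"crypto-engine-0", index=2)
      print(process_request(b"crypto-engine-0", handle, b"block of data").hex())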