Patents by Inventor Liangzhen Lai

Liangzhen Lai has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10977002
    Abstract: Disclosed herein includes a system, a method, and a device including shift circuitry and add circuitry for performing multiplication of a first value and a second value for a neural network. The first value has a predetermined format including a first bit, and two or more second bits to represent a value of zero or 2^n where n is an integer greater than or equal to 0. The device shifts, when the two or more second bits represent the value of 2^n, the second value by (n+1) bits via the shift circuitry to provide a first result, selectively outputs zero or the second value, based on a value of the first bit of the first value, to provide a second result, and adds the first result and the second result via the add circuitry to provide a result of the multiplication of the first and second values.
    Type: Grant
    Filed: July 15, 2019
    Date of Patent: April 13, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Ganesh Venkatesh, Liangzhen Lai, Pierce I-Jen Chuang, Meng Li, Vikas Chandra
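    A minimal Python sketch of the shift-and-add arithmetic described in this abstract is given below. The encoding chosen here (a separate +1 flag plus an optional exponent field, with None standing for the all-zero case) is an assumption for illustration, not the patent's actual bit layout.
    ```python
    def shift_add_multiply(first_bit: int, n_field, second_value: int) -> int:
        """Multiply an encoded first value by second_value using only shift and add.

        Assumed encoding of the first value (for illustration only):
          - first_bit: 1 means "include second_value once", 0 means contribute zero.
          - n_field:   None means the power-of-two term is zero; otherwise it is the
                       exponent n >= 0 and the term contributes second_value << (n + 1).
        """
        # Shift path: when the second bits encode 2^n, shift the second value by (n + 1) bits.
        first_result = (second_value << (n_field + 1)) if n_field is not None else 0
        # Select path: the first bit selectively outputs zero or the second value.
        second_result = second_value if first_bit else 0
        # Add circuitry combines the two partial results.
        return first_result + second_result

    # Example: first_bit = 1 and n = 2, so the encoded value acts like 1 + 2^(2+1) = 9.
    assert shift_add_multiply(1, 2, 7) == 7 * 9
    ```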
  • Publication number: 20210019115
    Abstract: Disclosed herein includes a system, a method, and a device including shift circuitry and add circuitry for performing multiplication of a first value and a second value for a neural network. The first value has a predetermined format including a first bit, and two or more second bits to represent a value of zero or 2^n where n is an integer greater than or equal to 0. The device shifts, when the two or more second bits represent the value of 2^n, the second value by (n+1) bits via the shift circuitry to provide a first result, selectively outputs zero or the second value, based on a value of the first bit of the first value, to provide a second result, and adds the first result and the second result via the add circuitry to provide a result of the multiplication of the first and second values.
    Type: Application
    Filed: July 15, 2019
    Publication date: January 21, 2021
    Applicant: Facebook Technologies, LLC
    Inventors: Ganesh Venkatesh, Liangzhen Lai, Pierce I-Jen Chuang, Meng Li, Vikas Chandra
  • Publication number: 20210019591
    Abstract: Disclosed herein includes a system, a method, and a device for receiving input data to generate a plurality of outputs for a layer of a neural network. The plurality of outputs are arranged in a first array. Dimensions of the first array may be compared with dimensions of a processing element (PE) array including a plurality of PEs. According to a result of the comparing, the first array is partitioned into subarrays by the processor. Each of the subarrays has dimensions less than or equal to the dimensions of the PE array. A first group of PEs in the PE array is assigned to a first one of the subarrays. A corresponding output of the plurality of outputs is generated using a portion of the input data by each PE of the first group of PEs assigned to the first one of the subarrays.
    Type: Application
    Filed: July 15, 2019
    Publication date: January 21, 2021
    Applicant: Facebook Technologies, LLC
    Inventors: Ganesh Venkatesh, Liangzhen Lai, Pierce I-Jen Chuang, Meng Li
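    A small Python sketch of the tiling step described above follows; the row/column bookkeeping is a hypothetical illustration of partitioning an output array into subarrays that fit the PE array, not the claimed hardware scheduler.
    ```python
    def partition_outputs(out_rows: int, out_cols: int, pe_rows: int, pe_cols: int):
        """Split an out_rows x out_cols output array into subarrays whose dimensions are
        less than or equal to those of a pe_rows x pe_cols processing element (PE) array."""
        tiles = []
        for r0 in range(0, out_rows, pe_rows):
            for c0 in range(0, out_cols, pe_cols):
                # Each tile records its origin and its (clipped) dimensions.
                tiles.append((r0, c0, min(pe_rows, out_rows - r0), min(pe_cols, out_cols - c0)))
        return tiles

    # Example: a 10x6 output array mapped onto a 4x4 PE array yields 3 x 2 = 6 subarrays,
    # and a group of PEs would be assigned to each one.
    tiles = partition_outputs(10, 6, 4, 4)
    print(len(tiles), tiles[0])  # 6 (0, 0, 4, 4)
    ```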
  • Publication number: 20210012178
    Abstract: Disclosed herein includes a system, a method, and a device for early-exit from convolution. In some embodiments, at least one processing element (PE) circuit is configured to perform, for a node of a neural network corresponding to a dot-product operation with a set of operands, computation using a subset of the set of operands to generate a dot-product value of the subset of the set of operands. The at least one PE circuit can compare the dot-product value of the subset of the set of operands, to a threshold value. The at least one PE circuit can determine whether to activate the node of the neural network, based at least on a result of the comparing.
    Type: Application
    Filed: July 11, 2019
    Publication date: January 14, 2021
    Applicant: Facebook Technologies, LLC
    Inventors: Ganesh Venkatesh, Liangzhen Lai, Pierce I-Jen Chuang
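    The sketch below illustrates the early-exit decision in Python: a partial dot product over a subset of operands is compared to a threshold, and the remaining multiply-accumulates are skipped when the node will not activate. The subset size, threshold, and ReLU-style activation are illustrative assumptions.
    ```python
    def early_exit_node(weights, inputs, subset_size, threshold):
        """Compute a partial dot product over the first subset_size operand pairs and
        use it to decide whether the node activates (ReLU-style) before finishing."""
        partial = sum(w * x for w, x in zip(weights[:subset_size], inputs[:subset_size]))
        if partial <= threshold:
            # Early exit: skip the remaining operands, the node stays inactive.
            return 0.0
        # Otherwise complete the full dot product for the activated node.
        full = partial + sum(w * x for w, x in zip(weights[subset_size:], inputs[subset_size:]))
        return max(full, 0.0)

    print(early_exit_node([0.5, -1.0, 0.2], [1.0, 2.0, 3.0], subset_size=2, threshold=0.0))  # 0.0
    ```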
  • Publication number: 20210011846
    Abstract: Disclosed herein includes a system, a method, and a device for reading and writing sparse data in a neural network accelerator. A plurality of slices can be established to access a memory having an access size of a data word. A first slice can be configured to access a first side of the data word in memory. Circuitry can access a mask identifying byte positions within the data word having non-zero values. The circuitry can modify the data word to have non-zero byte values stored starting at an end of the first side, and any zero byte values stored in a remainder of the data word. A determination can be made whether a number of non-zero byte values is less than or equal to a first access size of the first slice. The circuitry can write the modified data word to the memory via at least the first slice.
    Type: Application
    Filed: July 11, 2019
    Publication date: January 14, 2021
    Applicant: Facebook Technologies, LLC
    Inventors: Ganesh Venkatesh, Liangzhen Lai, Pierce I-Jen Chuang, Meng Li
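    A simplified Python sketch of the mask-and-compact step described above; the word width, slice size, and return convention are assumptions for illustration.
    ```python
    def compact_word(word_bytes: bytes, slice_size: int):
        """Pack the non-zero bytes of a data word toward the first side, keep a mask of
        which original byte positions were non-zero, and report whether one slice of
        slice_size bytes is enough to store the non-zero data."""
        mask = [1 if b != 0 else 0 for b in word_bytes]
        nonzero = [b for b in word_bytes if b != 0]
        packed = nonzero + [0] * (len(word_bytes) - len(nonzero))  # zeros fill the remainder
        fits_in_first_slice = len(nonzero) <= slice_size
        return mask, bytes(packed), fits_in_first_slice

    mask, packed, fits = compact_word(b"\x00\x0a\x00\x00\x1b\x00\x00\x2c", slice_size=4)
    print(mask, packed.hex(), fits)  # [0, 1, 0, 0, 1, 0, 0, 1] 0a1b2c0000000000 True
    ```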
  • Publication number: 20210011288
    Abstract: Disclosed herein is a method for using a neural network across multiple devices. The method can include receiving, by a first device configured with a first one or more layers of a neural network, input data for processing via the neural network implemented across the first device and a second device. The method can include outputting, by the first one or more layers of the neural network implemented on the first device, a data set that is reduced in size relative to the input data while identifying one or more features of the input data for processing by a second one or more layers of the neural network. The method can include communicating, by the first device, the data set to the second device for processing via the second one or more layers of the neural network implemented on the second device.
    Type: Application
    Filed: July 9, 2019
    Publication date: January 14, 2021
    Applicant: Facebook Technologies, LLC
    Inventors: Liangzhen Lai, Pierce I-Jen Chuang, Vikas Chandra, Ganesh Venkatesh
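    The Python sketch below mirrors the split described above: the first device runs its layers and produces a feature set smaller than the raw input, and only that reduced data set is sent to the second device. The pooling step and layer placeholders are illustrative assumptions, not the patent's network.
    ```python
    def run_first_layers(input_data):
        """First device: the first one or more layers reduce the input to compact features."""
        # Illustrative reduction: average the input in blocks of four samples.
        return [sum(input_data[i:i + 4]) / 4 for i in range(0, len(input_data), 4)]

    def run_second_layers(features):
        """Second device: the remaining layers produce the final output from the features."""
        return sum(features)  # placeholder for the rest of the network

    input_data = list(range(64))
    features = run_first_layers(input_data)    # computed on the first device
    assert len(features) < len(input_data)     # the communicated data set is reduced in size
    output = run_second_layers(features)       # computed on the second device after transfer
    print(len(input_data), len(features), output)  # 64 16 504.0
    ```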
  • Publication number: 20210011971
    Abstract: Disclosed herein includes a system, a method, and a device for multiply-accumulate operation. In one aspect, an input operand is received by control circuitry. In one aspect, the control circuitry determines a sparsity of the input operand, where the sparsity may indicate whether a value of the input operand has a predetermined value or not. In one aspect, the control circuitry determines a stationarity of the input operand, where the stationarity may indicate whether the value of the input operand changes over one or more clock cycles. In one aspect, the input operand is provided to multiply-accumulate circuitry as an input, according to the determined sparsity and stationarity of the input operand.
    Type: Application
    Filed: July 11, 2019
    Publication date: January 14, 2021
    Applicant: Facebook Technologies, LLC
    Inventor: Liangzhen Lai
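    A hedged Python sketch of that control decision follows; the concrete gating policy (skip on zero, reuse on an unchanged operand, otherwise load) is an illustration of the sparsity and stationarity checks, not the claimed circuit.
    ```python
    class MacControl:
        """Decide, per cycle, how an input operand is provided to the multiply-accumulate
        (MAC) circuitry, based on its sparsity and stationarity."""

        def __init__(self, sparse_value=0):
            self.sparse_value = sparse_value   # the predetermined value (typically zero)
            self.previous = None               # operand seen on the previous clock cycle

        def feed(self, operand):
            is_sparse = operand == self.sparse_value   # multiplying by zero can be skipped
            is_stationary = operand == self.previous   # unchanged operand can stay latched
            self.previous = operand
            if is_sparse:
                return "skip"    # no MAC work needed for this operand
            if is_stationary:
                return "reuse"   # keep the already-latched operand, saving the load
            return "load"        # drive the new operand into the MAC input

    ctrl = MacControl()
    print([ctrl.feed(x) for x in [3, 3, 0, 5]])  # ['load', 'reuse', 'skip', 'load']
    ```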
  • Publication number: 20210012186
    Abstract: Disclosed herein includes a system, a method, and a device for pipelined parallelism to accelerate a distributed learning network graph. First data for a first layer of a neural network may be stored in memory. First circuitry including a first plurality of processing element (PE) circuits may read the first data from the memory and perform computation for the first layer of the neural network using the first data to generate second data. The first circuitry includes a plurality of buffers for outputting the generated second data as input to second circuitry to perform computation for a second layer of the neural network. The second circuitry includes a second plurality of PE circuits configured to perform computation for the second layer of the neural network using the second data.
    Type: Application
    Filed: July 11, 2019
    Publication date: January 14, 2021
    Applicant: Facebook Technologies, LLC
    Inventors: Ganesh Venkatesh, Liangzhen Lai
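    The Python sketch below models that pipeline with a queue standing in for the buffers between the two banks of PE circuits: while the second stage consumes one chunk, the first stage is already computing the next. The per-layer math is a placeholder.
    ```python
    from collections import deque

    def layer1(chunk):
        return [x * 2 for x in chunk]   # placeholder for the first PE bank's layer-1 work

    def layer2(chunk):
        return sum(chunk)               # placeholder for the second PE bank's layer-2 work

    def pipelined(chunks):
        buffer = deque()                # stands in for the buffers between the PE banks
        results = []
        for step in range(len(chunks) + 1):
            # Stage 2 consumes what stage 1 produced on the previous "cycle"...
            if buffer:
                results.append(layer2(buffer.popleft()))
            # ...while stage 1 is already working on the next chunk.
            if step < len(chunks):
                buffer.append(layer1(chunks[step]))
        return results

    print(pipelined([[1, 2], [3, 4], [5, 6]]))  # [6, 14, 22]
    ```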
  • Publication number: 20210004208
    Abstract: Disclosed herein includes a system, a method, and a device for improving computation efficiency of a neural network. In one aspect, adder circuitry is configured to add input data from processing of the neural network and a first number of bits of accumulated data for the neural network to generate summation data. In one aspect, according to a carry value of the adding from the adder circuitry, a multiplexer is configured to select between i) a second number of bits of the accumulated data and ii) incremented data comprising the second number of bits of the accumulated data incremented by a predetermined value. The summation data appended with the selected one of the second number of bits of the accumulated data or the incremented data may form appended data.
    Type: Application
    Filed: July 2, 2019
    Publication date: January 7, 2021
    Applicant: Facebook Technologies, LLC
    Inventors: Liangzhen Lai, Pierce I-Jen Chuang
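    A Python model of that carry-select style update is shown below; the 8-bit low / 8-bit high split of the accumulator is an assumption for illustration.
    ```python
    LOW_BITS = 8    # assumed width of the first number of bits (low part fed to the adder)
    HIGH_BITS = 8   # assumed width of the second number of bits (high part)

    def split_accumulate(acc: int, data: int) -> int:
        """Add data to a (LOW_BITS + HIGH_BITS)-wide accumulator by adding only the low
        part, then select the stored high part or its pre-incremented copy via the carry."""
        assert 0 <= data < (1 << LOW_BITS)          # data assumed to fit in the adder width
        low = acc & ((1 << LOW_BITS) - 1)
        high = acc >> LOW_BITS
        total = low + data                          # the adder works only on the low bits
        summation = total & ((1 << LOW_BITS) - 1)   # summation data
        carry = total >> LOW_BITS                   # carry out of the low-part addition
        incremented = (high + 1) & ((1 << HIGH_BITS) - 1)
        selected = incremented if carry else high   # multiplexer: high bits vs. incremented
        return (selected << LOW_BITS) | summation   # appended data

    # Matches a plain full-width addition (modulo the accumulator width).
    assert split_accumulate(0x01FE, 0x05) == (0x01FE + 0x05) & 0xFFFF
    ```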
  • Patent number: 10122384
    Abstract: Various implementations described herein are directed to a memory device. The memory device includes a first interleaving circuit that receives data words and generates a first error correction code based on the received data words. The memory device includes a second interleaving circuit that receives the data words and generates a second error correction code based on the received data words as a complement to the first error correction code. The second interleaving circuit interleaves data bits from multiple different data words and stores modified data words based on the multiple different data words.
    Type: Grant
    Filed: May 18, 2016
    Date of Patent: November 6, 2018
    Assignee: ARM Limited
    Inventors: Liangzhen Lai, Vikas Chandra, Gary Dale Carpenter
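    A toy Python sketch of the interleaving step is given below; simple parity stands in for the real error correction codes, and the 4-bit word size is an assumption for illustration.
    ```python
    def interleave(words, width):
        """Bit-interleave the data words: stored word i collects bit i of every input
        word, so adjacent physical bits come from different logical data words."""
        return [sum(((w >> bit) & 1) << pos for pos, w in enumerate(words))
                for bit in range(width)]

    def parity(word):
        return bin(word).count("1") & 1   # toy stand-in for a real error correction code

    words = [0b1011, 0b0110, 0b1110, 0b0001]     # four 4-bit data words
    first_ecc = [parity(w) for w in words]        # first code, over the received data words
    stored = interleave(words, width=4)           # modified (interleaved) words as stored
    second_ecc = [parity(w) for w in stored]      # complementary code, over interleaved words
    print(stored, first_ecc, second_ecc)          # [9, 7, 6, 5] [1, 0, 1, 1] [0, 1, 0, 0]
    ```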
  • Patent number: 9922152
    Abstract: A computer-implemented system and method is provided for reducing failure-in-time (FIT) errors associated with one or more sequential devices of a circuit design for a process technology. The method comprises receiving an input data file that includes register transfer level (RTL) data of the circuit design. The RTL data includes the one or more sequential devices. The method further comprises identifying a preferred logic state for each sequential device of the one or more sequential devices. The method further comprises adjusting the one or more sequential devices based on the preferred logic state.
    Type: Grant
    Filed: March 23, 2016
    Date of Patent: March 20, 2018
    Assignee: ARM Limited
    Inventors: Liangzhen Lai, Vikas Chandra
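    A simplified Python sketch of the analysis step follows; the signal-probability inputs and the hardened-cell names are hypothetical, standing in for whatever adjustment the flow actually applies.
    ```python
    def preferred_logic_state(probability_of_one: float) -> int:
        """Return the logic state (0 or 1) a sequential device holds most of the time."""
        return 1 if probability_of_one >= 0.5 else 0

    def adjust_sequential_devices(devices):
        """For each (name, probability-of-1) pair taken from RTL analysis, pick a flip-flop
        variant hardened for its preferred state (hypothetical cell names)."""
        adjusted = []
        for name, p_one in devices:
            state = preferred_logic_state(p_one)
            cell = "DFF_HARD1" if state == 1 else "DFF_HARD0"   # assumed library variants
            adjusted.append((name, state, cell))
        return adjusted

    print(adjust_sequential_devices([("ctrl_reg", 0.92), ("status_reg", 0.10)]))
    # [('ctrl_reg', 1, 'DFF_HARD1'), ('status_reg', 0, 'DFF_HARD0')]
    ```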
  • Publication number: 20170338836
    Abstract: Various implementations described herein are directed to a memory device. The memory device may include a first interleaving circuit that receives data words and generates a first error correction code based on the received data words. The memory device may include a second interleaving circuit that receives the data words and generates a second error correction code based on the received data words as a complement to the first error correction code. The second interleaving circuit may interleave data bits from multiple different data words and store modified data words based on the multiple different data words.
    Type: Application
    Filed: May 18, 2016
    Publication date: November 23, 2017
    Inventors: Liangzhen Lai, Vikas Chandra, Gary Dale Carpenter
  • Publication number: 20170277817
    Abstract: A computer-implemented system and method is provided for reducing failure-in-time (FIT) errors associated with one or more sequential devices of a circuit design for a process technology. The method comprises receiving an input data file that includes register transfer level (RTL) data of the circuit design. The RTL data includes the one or more sequential devices. The method further comprises identifying a preferred logic state for each of the one or more sequential devices. The method further comprises adjusting the one or more sequential devices based on the preferred logic state.
    Type: Application
    Filed: March 23, 2016
    Publication date: September 28, 2017
    Inventors: Liangzhen Lai, Vikas Chandra