Patents by Inventor Arthur John Redfern

Arthur John Redfern has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11960567
    Abstract: A method for performing a fundamental computational primitive in a device is provided, where the device includes a processor and a matrix multiplication accelerator (MMA). The method includes configuring a streaming engine in the device to stream data for the fundamental computational primitive from memory, configuring the MMA to format the data, and executing the fundamental computational primitive by the device.
    Type: Grant
    Filed: July 4, 2021
    Date of Patent: April 16, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Arthur John Redfern, Timothy David Anderson, Kai Chirca, Chenchi Luo, Zhenhua Yu
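    A software sketch of the three-step flow in the abstract above (configure a streaming engine, configure the MMA's data formatting, execute the primitive), applied to a tiled matrix multiply. This is a minimal illustration: the function names, the 16x16 tile size, and the zero-padding formatting are assumptions, not details taken from the patent.

    ```python
    import numpy as np

    TILE = 16  # assumed accelerator tile size, for illustration only

    def run_primitive_on_mma(a, b):
        """Hypothetical host-side emulation of a matrix-multiply primitive."""
        m, k = a.shape
        k2, n = b.shape
        assert k == k2

        # Step 1: "configure the streaming engine" -- here, just plan full-tile reads
        # by padding the operands out to multiples of the tile size.
        def pad(x):
            return np.pad(x, ((0, (-x.shape[0]) % TILE), (0, (-x.shape[1]) % TILE)))

        a_p, b_p = pad(a), pad(b)

        # Step 2: "configure the MMA to format the data" -- allocate the formatted output.
        c = np.zeros((a_p.shape[0], b_p.shape[1]), dtype=np.result_type(a, b))

        # Step 3: "execute the fundamental computational primitive" -- tile-by-tile
        # multiply-accumulate, standing in for what the accelerator does per block.
        for i in range(0, a_p.shape[0], TILE):
            for j in range(0, b_p.shape[1], TILE):
                for p in range(0, a_p.shape[1], TILE):
                    c[i:i+TILE, j:j+TILE] += a_p[i:i+TILE, p:p+TILE] @ b_p[p:p+TILE, j:j+TILE]
        return c[:m, :n]

    rng = np.random.default_rng(0)
    a, b = rng.standard_normal((20, 30)), rng.standard_normal((30, 10))
    assert np.allclose(run_primitive_on_mma(a, b), a @ b)
    ```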
  • Publication number: 20230237368
    Abstract: Techniques for a machine learning model include the steps of summing values of a set of non-binary input feature values with bias values of a first set of bias values to generate first summed values; binarizing the first summed values; receiving a set of binary weights; performing a convolution operation on the binarized summed values and the set of binary weights to generate convolved output feature values; summing feature values of the convolved output feature values with bias values of a second set of bias values and applying a scale value of a first set of scale values to generate a first set of normalized feature values; summing the first set of normalized feature values with the non-binary input feature values to generate second summed values; and outputting a set of output feature values based on the second summed normalized feature values and non-binary input feature values.
    Type: Application
    Filed: January 26, 2022
    Publication date: July 27, 2023
    Inventors: Arthur John Redfern, Lijun Zhu, Molly Katherine Newquist
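    A minimal numpy sketch of the dataflow in the abstract above: add the first bias, binarize, convolve with binary weights, apply the second bias and scale, then add the result back to the original non-binary inputs. The 2D feature map, the 3x3 "same" convolution, and the scalar bias/scale values are illustrative assumptions rather than details from the publication.

    ```python
    import numpy as np

    def binarized_block(x, bias1, w_bin, bias2, scale):
        """x: (H, W) non-binary input features; w_bin: 3x3 weights in {-1, +1}."""
        # Sum the inputs with the first bias values, then binarize to {-1, +1}.
        xb = np.where(x + bias1 >= 0, 1.0, -1.0)

        # Convolve the binarized features with the binary weights ("same" padding).
        h, w = x.shape
        xp = np.pad(xb, 1)
        conv = np.zeros_like(x, dtype=float)
        for i in range(h):
            for j in range(w):
                conv[i, j] = np.sum(xp[i:i+3, j:j+3] * w_bin)

        # Second bias and scale (a normalization step), then the residual addition
        # with the original non-binary input feature values.
        return (conv + bias2) * scale + x

    rng = np.random.default_rng(0)
    x = rng.standard_normal((8, 8))
    w_bin = np.where(rng.standard_normal((3, 3)) >= 0, 1.0, -1.0)
    y = binarized_block(x, bias1=0.1, w_bin=w_bin, bias2=0.0, scale=0.5)
    ```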
  • Publication number: 20220365700
    Abstract: A matrix transfer accelerator (MTA) system/method that coordinates data transfers between an external data memory (EDM) and a local data memory (LDM) using matrix tiling and/or grouping is disclosed. The system utilizes foreground/background buffering that overlaps compute and data transfer operations and permits EDM-to-LDM data transfers with or without zero pad peripheral matrix filling. The system may incorporate an automated zero-fill direct memory access (DMA) controller (ZDC) that transfers data from the EDM to the LDM based on a set of DMA controller registers including data width register (DWR), transfer count register (TCR), fill count register (FCR), EDM source address register (ESR), and LDM target address register (LTR). The ZDC transfers matrix data from the EDM[ESR] to the LDM[LTR] such that EDM matrix data of DWR row data width is automatically zero-filled around a periphery of a matrix written to the LDM matrix based on the FCR value.
    Type: Application
    Filed: July 29, 2022
    Publication date: November 17, 2022
    Inventors: Arthur John Redfern, Asheesh Bhardwaj
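    The zero-fill transfer in the abstract above can be emulated in a few lines: given the register values it names (DWR, TCR, FCR, ESR, LTR), copy TCR rows of DWR elements from a flat external memory into local memory with an FCR-wide ring of zeros around the tile. The exact register semantics and the row pitch below are my reading of the abstract, not the hardware definition.

    ```python
    import numpy as np

    def zero_fill_dma(edm, ldm, esr, ltr, dwr, tcr, fcr, ldm_pitch):
        """Copy a TCR x DWR tile from edm[esr:] to ldm[ltr:] with an FCR-wide zero border."""
        padded_w = dwr + 2 * fcr
        # Zero the whole padded destination region first.
        for r in range(tcr + 2 * fcr):
            dst = ltr + r * ldm_pitch
            ldm[dst:dst + padded_w] = 0
        # Copy each source row into the interior of the padded tile.
        for r in range(tcr):
            src = esr + r * dwr
            dst = ltr + (r + fcr) * ldm_pitch + fcr
            ldm[dst:dst + dwr] = edm[src:src + dwr]

    # Example: a 4x4 matrix copied with a 1-element zero border into a 16-wide local buffer.
    edm = np.arange(1, 17, dtype=np.int32)         # external memory holding the 4x4 matrix
    ldm = np.full(16 * 8, -1, dtype=np.int32)      # local memory, pre-filled to make writes visible
    zero_fill_dma(edm, ldm, esr=0, ltr=0, dwr=4, tcr=4, fcr=1, ldm_pitch=16)
    print(ldm[:16 * 6].reshape(6, 16))             # zero-padded 6x6 tile in the top-left corner
    ```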
  • Patent number: 11403025
    Abstract: A matrix transfer accelerator (MTA) system/method that coordinates data transfers between an external data memory (EDM) and a local data memory (LDM) using matrix tiling and/or grouping is disclosed. The system utilizes foreground/background buffering that overlaps compute and data transfer operations and permits EDM-to-LDM data transfers with or without zero pad peripheral matrix filling. The system may incorporate an automated zero-fill direct memory access (DMA) controller (ZDC) that transfers data from the EDM to the LDM based on a set of DMA controller registers including data width register (DWR), transfer count register (TCR), fill count register (FCR), EDM source address register (ESR), and LDM target address register (LTR). The ZDC transfers matrix data from the EDM[ESR] to the LDM[LTR] such that EDM matrix data of DWR row data width is automatically zero-filled around a periphery of a matrix written to the LDM matrix based on the FCR value.
    Type: Grant
    Filed: October 16, 2020
    Date of Patent: August 2, 2022
    Assignee: Texas Instruments Incorporated
    Inventors: Arthur John Redfern, Asheesh Bhadwaj
  • Publication number: 20210334337
    Abstract: A method for performing a fundamental computational primitive in a device is provided, where the device includes a processor and a matrix multiplication accelerator (MMA). The method includes configuring a streaming engine in the device to stream data for the fundamental computational primitive from memory, configuring the MMA to format the data, and executing the fundamental computational primitive by the device.
    Type: Application
    Filed: July 4, 2021
    Publication date: October 28, 2021
    Inventors: Arthur John Redfern, Timothy David Anderson, Kai Chirca, Chenchi Luo, Zhenhua Yu
  • Patent number: 11086967
    Abstract: A method for performing a fundamental computational primitive in a device is provided, where the device includes a processor and a matrix multiplication accelerator (MMA). The method includes configuring a streaming engine in the device to stream data for the fundamental computational primitive from memory, configuring the MMA to format the data, and executing the fundamental computational primitive by the device.
    Type: Grant
    Filed: February 28, 2018
    Date of Patent: August 10, 2021
    Assignee: Texas Instruments Incorporated
    Inventors: Arthur John Redfern, Timothy David Anderson, Kai Chirca, Chenchi Luo, Zhenhua Yu
  • Publication number: 20210194498
    Abstract: A matrix compression/decompression accelerator (MCA) system/method that coordinates lossless data compression (LDC) and lossless data decompression (LDD) transfers between an external data memory (EDM) and a local data memory (LDM) is disclosed. The system implements LDC using a 2D-to-1D transformation of 2D uncompressed data blocks (2DU) within LDM to generate 1D uncompressed data blocks (1DU). The 1DU is then compressed to generate a 1D compressed superblock (CSB) in LDM. This LDM CSB may then be written to EDM with a reduced number of EDM bus cycles. The system implements LDD using decompression of CSB data retrieved from EDM to generate a 1D decompressed data block (1DD) in LDM. A 1D-to-2D transformation is then applied to the LDM 1DD to generate a 2D decompressed data block (2DD) in LDM. This 2DD may then be operated on by a matrix compute engine (MCE) using a variety of function operators.
    Type: Application
    Filed: March 9, 2021
    Publication date: June 24, 2021
    Inventors: Arthur John Redfern, Dan Wang
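    The compression path in the abstract above is a 2D-to-1D flatten followed by lossless coding; decompression reverses both steps. The sketch below uses Python's zlib as a stand-in for the hardware compressor, since the publication's coding scheme is not described here.

    ```python
    import zlib
    import numpy as np

    def ldc(block_2d):
        """Lossless data compression: 2D uncompressed block -> 1D compressed superblock."""
        one_d = np.ascontiguousarray(block_2d).ravel()   # 2D-to-1D transformation
        return zlib.compress(one_d.tobytes())            # stand-in lossless compressor

    def ldd(csb, shape, dtype):
        """Lossless data decompression: compressed superblock -> 2D decompressed block."""
        one_d = np.frombuffer(zlib.decompress(csb), dtype=dtype)
        return one_d.reshape(shape)                      # 1D-to-2D transformation

    # Round trip on a mostly-zero tile; the high compression ratio here illustrates the
    # reduced number of external-memory bus cycles the abstract refers to.
    tile = np.zeros((64, 64), dtype=np.int16)
    tile[::8, ::8] = 7
    csb = ldc(tile)
    assert np.array_equal(ldd(csb, tile.shape, tile.dtype), tile)
    print(f"{tile.nbytes} bytes uncompressed -> {len(csb)} bytes compressed")
    ```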
  • Patent number: 10979070
    Abstract: A matrix compression/decompression accelerator (MCA) system/method that coordinates lossless data compression (LDC) and lossless data decompression (LDD) transfers between an external data memory (EDM) and a local data memory (LDM) is disclosed. The system implements LDC using a 2D-to-1D transformation of 2D uncompressed data blocks (2DU) within LDM to generate 1D uncompressed data blocks (1DU). The 1DU is then compressed to generate a 1D compressed superblock (CSB) in LDM. This LDM CSB may then be written to EDM with a reduced number of EDM bus cycles. The system implements LDD using decompression of CSB data retrieved from EDM to generate a 1D decompressed data block (1DD) in LDM. A 1D-to-2D transformation is then applied to the LDM 1DD to generate a 2D decompressed data block (2DD) in LDM. This 2DD may then be operated on by a matrix compute engine (MCE) using a variety of function operators.
    Type: Grant
    Filed: June 12, 2020
    Date of Patent: April 13, 2021
    Assignee: Texas Instruments Incorporated
    Inventors: Arthur John Redfern, Dan Wang
  • Publication number: 20210034277
    Abstract: A matrix transfer accelerator (MTA) system/method that coordinates data transfers between an external data memory (EDM) and a local data memory (LDM) using matrix tiling and/or grouping is disclosed. The system utilizes foreground/background buffering that overlaps compute and data transfer operations and permits EDM-to-LDM data transfers with or without zero pad peripheral matrix filling. The system may incorporate an automated zero-fill direct memory access (DMA) controller (ZDC) that transfers data from the EDM to the LDM based on a set of DMA controller registers including data width register (DWR), transfer count register (TCR), fill count register (FCR), EDM source address register (ESR), and LDM target address register (LTR). The ZDC transfers matrix data from the EDM[ESR] to the LDM[LTR] such that EDM matrix data of DWR row data width is automatically zero-filled around a periphery of a matrix written to the LDM matrix based on the FCR value.
    Type: Application
    Filed: October 16, 2020
    Publication date: February 4, 2021
    Inventors: Arthur John Redfern, Asheesh Bhardwaj
  • Patent number: 10817587
    Abstract: A reconfigurable matrix multiplier (RMM) system/method allowing tight or loose coupling to supervisory control processor application control logic (ACL) in a system-on-a-chip (SOC) environment is disclosed. The RMM provides for C=A*B matrix multiplication operations having A-multiplier-matrix (AMM), B-multiplicand-matrix (BMM), and C-product-matrix (CPM), as well as C=A*B+D operations in which D-summation-matrix (DSM) represents the result of a previous multiplication operation or another previously defined matrix. The RMM provides for additional CPM LOAD/STORE paths allowing overlapping of compute/data transfer operations and provides for CPM data feedback to the AMM or BMM operand inputs from a previously calculated CPM result.
    Type: Grant
    Filed: February 26, 2018
    Date of Patent: October 27, 2020
    Assignee: Texas Instruments Incorporated
    Inventors: Arthur John Redfern, Donald Edward Steiss, Timothy David Anderson, Kai Chirca
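    Numerically, the two operation forms named in the abstract above are C = A*B and C = A*B + D, where D may be the product from a previous pass (the CPM feedback path). A minimal sketch, with arbitrary matrix sizes:

    ```python
    import numpy as np

    def rmm(amm, bmm, dsm=None):
        """C-product-matrix = A-multiplier-matrix @ B-multiplicand-matrix (+ D-summation-matrix)."""
        cpm = amm @ bmm
        return cpm if dsm is None else cpm + dsm

    rng = np.random.default_rng(1)
    a, b = rng.standard_normal((4, 3)), rng.standard_normal((3, 5))
    c0 = rmm(a, b)             # C = A*B
    c1 = rmm(a, b, dsm=c0)     # C = A*B + D, with D fed back from the previous result
    assert np.allclose(c1, 2 * c0)
    ```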
  • Patent number: 10809933
    Abstract: A matrix transfer accelerator (MTA) system/method that coordinates data transfers between an external data memory (EDM) and a local data memory (LDM) using matrix tiling and/or grouping is disclosed. The system utilizes foreground/background buffering that overlaps compute and data transfer operations and permits EDM-to-LDM data transfers with or without zero pad peripheral matrix filling. The system may incorporate an automated zero-fill direct memory access (DMA) controller (ZDC) that transfers data from the EDM to the LDM based on a set of DMA controller registers including data width register (DWR), transfer count register (TCR), fill count register (FCR), EDM source address register (ESR), and LDM target address register (LTR). The ZDC transfers matrix data from the EDM[ESR] to the LDM[LTR] such that EDM matrix data of DWR row data width is automatically zero-filled around a periphery of a matrix written to the LDM matrix based on the FCR value.
    Type: Grant
    Filed: February 27, 2018
    Date of Patent: October 20, 2020
    Assignee: Texas Instruments Incorporated
    Inventors: Arthur John Redfern, Asheesh Bhardwaj
  • Patent number: 10809978
    Abstract: A merge sort accelerator (MSA) includes a pre-processing stage configured to receive an input vector and generate a pre-processing output vector based on a pre-processing instruction and the input vector. The MSA also includes a merge sort network having multiple sorting stages configured to be selectively enabled. The merge sort network is configured to receive the pre-processing output vector and generate a sorted output vector based on a sorting instruction and the pre-processing output vector. The MSA includes an accumulator stage configured to receive the sorted output vector and update an accumulator vector based on the accumulator instruction and the sorted output vector. The MSA also includes a post-processing stage configured to receive the accumulator vector and generate a post-processing output vector based on a post-processing instruction and the accumulator vector.
    Type: Grant
    Filed: June 1, 2018
    Date of Patent: October 20, 2020
    Assignee: Texas Instruments Incorporated
    Inventors: Arthur John Redfern, Asheesh Bhardwaj, Tarek Aziz Lahlou, William Franklin Leven
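    The abstract above names a four-stage pipeline: pre-process, merge-sort network, accumulate, post-process. The sketch below mirrors that structure functionally; the specific per-stage instructions it supports (negation, sort direction, running max, absolute value) are illustrative placeholders, not the accelerator's actual instruction set.

    ```python
    import numpy as np

    def msa_pipeline(vec, acc, pre="identity", sort_dir="ascending",
                     acc_op="max", post="identity"):
        # Pre-processing stage: transform the input vector per the pre-processing instruction.
        x = np.asarray(vec, dtype=float)
        if pre == "negate":
            x = -x

        # Merge sort network: sort the vector, direction selected by the sorting instruction.
        x = np.sort(x)
        if sort_dir == "descending":
            x = x[::-1]

        # Accumulator stage: update the accumulator vector from the sorted output vector.
        acc = np.maximum(acc, x) if acc_op == "max" else acc + x

        # Post-processing stage: transform the accumulator per the post-processing instruction.
        return np.abs(acc) if post == "abs" else acc

    acc = np.zeros(4)
    out = msa_pipeline([3, -1, 7, 2], acc, sort_dir="descending", acc_op="max")
    print(out)   # element-wise max of the zero accumulator and the descending-sorted input
    ```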
  • Patent number: 10810281
    Abstract: An outer product multiplier (OPM) system/method that integrates compute gating and input/output circular column rotation functions to balance time spent in compute and data transfer operations while limiting overall dynamic power dissipation is disclosed. Matrix compute gating (MCG) based on a computation decision matrix (CDM) limits the number of computations required on a per cycle basis to reduce overall matrix compute cycle power dissipation. A circular column rotation vector (CRV) automates input/output data formatting to reduce the number of data transfer operations required to achieve a given matrix computation result. Matrix function operators (MFO) utilizing these features are disclosed and include: matrix-matrix multiplication; matrix-matrix and vector-vector point-wise multiplication, addition, and assignment; matrix-vector multiplication; vector-vector inner product; matrix transpose; matrix row permute; and vector-column permute.
    Type: Grant
    Filed: August 7, 2018
    Date of Patent: October 20, 2020
    Assignee: Texas Instruments Incorporated
    Inventors: Arthur John Redfern, Donald Edward Steiss, Mihir Narendra Mody, Tarek Aziz Lahlou
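    The sketch below illustrates the two features named in the abstract above: a computation decision matrix (CDM) that gates which output elements are updated on each outer-product cycle, and a circular column rotation applied to the output formatting. The gating granularity and the per-row rotation semantics are assumptions made for the sake of the example.

    ```python
    import numpy as np

    def opm_multiply(a, b, cdm, crv):
        """C = A @ B built one outer product per 'cycle', gated by cdm; columns rotated by crv."""
        m, k = a.shape
        _, n = b.shape
        c = np.zeros((m, n))
        for cycle in range(k):
            # One outer product per cycle; compute gating zeroes out masked-off elements.
            c += cdm * np.outer(a[:, cycle], b[cycle, :])
        # Circular column rotation vector: rotate each row's columns for output formatting.
        return np.stack([np.roll(c[i], crv[i]) for i in range(m)])

    rng = np.random.default_rng(2)
    a, b = rng.standard_normal((4, 3)), rng.standard_normal((3, 4))
    cdm = np.ones((4, 4))           # enable every output element (no gating)
    crv = np.zeros(4, dtype=int)    # no rotation
    assert np.allclose(opm_multiply(a, b, cdm, crv), a @ b)
    ```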
  • Publication number: 20200304141
    Abstract: A matrix compression/decompression accelerator (MCA) system/method that coordinates lossless data compression (LDC) and lossless data decompression (LDD) transfers between an external data memory (EDM) and a local data memory (LDM) is disclosed. The system implements LDC using a 2D-to-1D transformation of 2D uncompressed data blocks (2DU) within LDM to generate 1D uncompressed data blocks (1DU). The 1DU is then compressed to generate a 1D compressed superblock (CSB) in LDM. This LDM CSB may then be written to EDM with a reduced number of EDM bus cycles. The system implements LDD using decompression of CSB data retrieved from EDM to generate a 1D decompressed data block (1DD) in LDM. A 1D-to-2D transformation is then applied to the LDM 1DD to generate a 2D decompressed data block (2DD) in LDM. This 2DD may then be operated on by a matrix compute engine (MCE) using a variety of function operators.
    Type: Application
    Filed: June 12, 2020
    Publication date: September 24, 2020
    Inventors: Arthur John Redfern, Dan Wang
  • Patent number: 10735023
    Abstract: A matrix compression/decompression accelerator (MCA) system/method that coordinates lossless data compression (LDC) and lossless data decompression (LDD) transfers between an external data memory (EDM) and a local data memory (LDM) is disclosed. The system implements LDC using a 2D-to-1D transformation of 2D uncompressed data blocks (2DU) within LDM to generate 1D uncompressed data blocks (1DU). The 1DU is then compressed to generate a 1D compressed superblock (CSB) in LDM. This LDM CSB may then be written to EDM with a reduced number of EDM bus cycles. The system implements LDD using decompression of CSB data retrieved from EDM to generate a 1D decompressed data block (1DD) in LDM. A 1D-to-2D transformation is then applied to the LDM 1DD to generate a 2D decompressed data block (2DD) in LDM. This 2DD may then be operated on by a matrix compute engine (MCE) using a variety of function operators.
    Type: Grant
    Filed: February 20, 2018
    Date of Patent: August 4, 2020
    Assignee: Texas Instruments Incorporated
    Inventors: Arthur John Redfern, Dan Wang
  • Publication number: 20180373678
    Abstract: An outer product multiplier (OPM) system/method that integrates compute gating and input/output circular column rotation functions to balance time spent in compute and data transfer operations while limiting overall dynamic power dissipation is disclosed. Matrix compute gating (MCG) based on a computation decision matrix (CDM) limits the number of computations required on a per cycle basis to reduce overall matrix compute cycle power dissipation. A circular column rotation vector (CRV) automates input/output data formatting to reduce the number of data transfer operations required to achieve a given matrix computation result. Matrix function operators (MFO) utilizing these features are disclosed and include: matrix-matrix multiplication; matrix-matrix and vector-vector point-wise multiplication, addition, and assignment; matrix-vector multiplication; vector-vector inner product; matrix transpose; matrix row permute; and vector-column permute.
    Type: Application
    Filed: August 7, 2018
    Publication date: December 27, 2018
    Inventors: Arthur John Redfern, Donald Edward Steiss, Mihir Narendra Mody, Tarek Aziz Lahlou
  • Publication number: 20180349096
    Abstract: A merge sort accelerator (MSA) includes a pre-processing stage configured to receive an input vector and generate a pre-processing output vector based on a pre-processing instruction and the input vector. The MSA also includes a merge sort network having multiple sorting stages configured to be selectively enabled. The merge sort network is configured to receive the pre-processing output vector and generate a sorted output vector based on a sorting instruction and the pre-processing output vector. The MSA includes an accumulator stage configured to receive the sorted output vector and update an accumulator vector based on the accumulator instruction and the sorted output vector. The MSA also includes a post-processing stage configured to receive the accumulator vector and generate a post-processing output vector based on a post-processing instruction and the accumulator vector.
    Type: Application
    Filed: June 1, 2018
    Publication date: December 6, 2018
    Inventors: Arthur John Redfern, Asheesh Bhardwaj, Tarek Aziz Lahlou, William Franklin Leven
  • Publication number: 20180253402
    Abstract: A method for performing a fundamental computational primitive in a device is provided, where the device includes a processor and a matrix multiplication accelerator (MMA). The method includes configuring a streaming engine in the device to stream data for the fundamental computational primitive from memory, configuring the MMA to format the data, and executing the fundamental computational primitive by the device.
    Type: Application
    Filed: February 28, 2018
    Publication date: September 6, 2018
    Inventors: Arthur John Redfern, Timothy David Anderson, Kai Chirca, Chenchi Luo, Zhenhua Yu
  • Publication number: 20180246855
    Abstract: A reconfigurable matrix multiplier (RMM) system/method allowing tight or loose coupling to supervisory control processor application control logic (ACL) in a system-on-a-chip (SOC) environment is disclosed. The RMM provides for C=A*B matrix multiplication operations having A-multiplier-matrix (AMM), B-multiplicand-matrix (BMM), and C-product-matrix (CPM), as well as C=A*B+D operations in which D-summation-matrix (DSM) represents the result of a previous multiplication operation or another previously defined matrix. The RMM provides for additional CPM LOAD/STORE paths allowing overlapping of compute/data transfer operations and provides for CPM data feedback to the AMM or BMM operand inputs from a previously calculated CPM result.
    Type: Application
    Filed: February 26, 2018
    Publication date: August 30, 2018
    Inventors: Arthur John Redfern, Donald Edward Steiss, Timothy David Anderson, Kai Chirca
  • Publication number: 20180248562
    Abstract: A matrix compression/decompression accelerator (MCA) system/method that coordinates lossless data compression (LDC) and lossless data decompression (LDD) transfers between an external data memory (EDM) and a local data memory (LDM) is disclosed. The system implements LDC using a 2D-to-1D transformation of 2D uncompressed data blocks (2DU) within LDM to generate 1D uncompressed data blocks (1DU). The 1DU is then compressed to generate a 1D compressed superblock (CSB) in LDM. This LDM CSB may then be written to EDM with a reduced number of EDM bus cycles. The system implements LDD using decompression of CSB data retrieved from EDM to generate a 1D decompressed data block (1DD) in LDM. A 1D-to-2D transformation is then applied to the LDM 1DD to generate a 2D decompressed data block (2DD) in LDM. This 2DD may then be operated on by a matrix compute engine (MCE) using a variety of function operators.
    Type: Application
    Filed: February 20, 2018
    Publication date: August 30, 2018
    Inventors: Arthur John Redfern, Dan Wang