Patents by Inventor Kai Chirca

Kai Chirca has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250094044
    Abstract: Techniques including receiving configuration information for a trigger control channel of the one or more trigger control channels, the configuration information defining a first one or more triggering events, receiving a first memory management command, storing the first memory management command, detecting a first one or more triggering events, and triggering the stored first memory management command based on the detected first one or more triggering events.
    Type: Application
    Filed: November 27, 2024
    Publication date: March 20, 2025
    Inventors: Kai Chirca, Matthew David Pierson, David E. Smith, Timothy David Anderson
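    A minimal C sketch of the deferred-command behavior described in the abstract above: a memory management command is held in a trigger control channel and only dispatched once a configured triggering event is observed. All type, field, and function names here are assumptions made for illustration, not the patent's terminology.

        /* Store a memory management command and release it on a matching event. */
        #include <stdint.h>
        #include <stdio.h>

        typedef struct {
            uint32_t trigger_mask; /* configured triggering events (bitmask)     */
            uint32_t stored_cmd;   /* pending memory management command          */
            int      cmd_valid;    /* a command is stored and awaiting a trigger */
        } trigger_channel_t;

        static void configure(trigger_channel_t *ch, uint32_t events) {
            ch->trigger_mask = events;
        }

        static void store_command(trigger_channel_t *ch, uint32_t cmd) {
            ch->stored_cmd = cmd;
            ch->cmd_valid  = 1;
        }

        /* Called when events are detected; fires the stored command on a match. */
        static void on_events(trigger_channel_t *ch, uint32_t events) {
            if (ch->cmd_valid && (events & ch->trigger_mask)) {
                printf("triggering stored command 0x%x\n", (unsigned)ch->stored_cmd);
                ch->cmd_valid = 0;
            }
        }

        int main(void) {
            trigger_channel_t ch = {0};
            configure(&ch, 0x4);        /* trigger on event bit 2             */
            store_command(&ch, 0xCAFE); /* e.g. a cache maintenance command   */
            on_events(&ch, 0x1);        /* no match: command stays pending    */
            on_events(&ch, 0x4);        /* match: stored command is triggered */
            return 0;
        }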
  • Publication number: 20250060873
    Abstract: A device includes a memory bank. The memory bank includes data portions of a first way group. The data portions of the first way group include a data portion of a first way of the first way group and a data portion of a second way of the first way group. The memory bank further includes data portions of a second way group. The device further includes a configuration register and a controller configured to individually allocate, based on one or more settings in the configuration register, the first way and the second way to one of an addressable memory space and a data cache.
    Type: Application
    Filed: November 6, 2024
    Publication date: February 20, 2025
    Inventors: Kai CHIRCA, Matthew David PIERSON
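    A hedged sketch of the per-way allocation described in the abstract above: each way of a way group is steered individually to either addressable memory (SRAM) or the data cache by a setting in a configuration register. The one-bit-per-way encoding is an assumption for illustration.

        #include <stdint.h>
        #include <stdio.h>

        #define NUM_WAYS 4

        enum alloc { AS_CACHE = 0, AS_SRAM = 1 };

        /* One configuration-register bit per way: 1 = addressable memory, 0 = cache. */
        static enum alloc way_allocation(uint32_t cfg_reg, unsigned way) {
            return ((cfg_reg >> way) & 1u) ? AS_SRAM : AS_CACHE;
        }

        int main(void) {
            uint32_t cfg = 0x5; /* ways 0 and 2 as SRAM, ways 1 and 3 as cache */
            for (unsigned w = 0; w < NUM_WAYS; w++)
                printf("way %u -> %s\n", w,
                       way_allocation(cfg, w) == AS_SRAM ? "addressable memory"
                                                         : "data cache");
            return 0;
        }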
  • Patent number: 12223165
    Abstract: A system includes a multi-core shared memory controller (MSMC). The MSMC includes a snoop filter bank, a cache tag bank, and a memory bank. The cache tag bank is connected to both the snoop filter bank and the memory bank. The MSMC further includes a first coherent slave interface connected to a data path that is connected to the snoop filter bank. The MSMC further includes a second coherent slave interface connected to the data path that is connected to the snoop filter bank. The MSMC further includes an external memory master interface connected to the cache tag bank and the memory bank. The system further includes a first processor package connected to the first coherent slave interface and a second processor package connected to the second coherent slave interface. The system further includes an external memory device connected to the external memory master interface.
    Type: Grant
    Filed: July 28, 2022
    Date of Patent: February 11, 2025
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Matthew David Pierson, Kai Chirca, Timothy David Anderson
  • Publication number: 20250045230
    Abstract: This invention is a bus communication protocol. A master device stores bus credits. The master device may transmit a bus transaction only if it holds a sufficient number and type of bus credits. Upon transmission, the master device decrements the number of stored bus credits. The bus credits correspond to resources on a slave device for receiving bus transactions. The slave device must receive the bus transaction if accompanied by the proper credits. The slave device services the transaction. The slave device then transmits a credit return. The master device adds the corresponding number and types of credits to the stored amount. The slave device is ready to accept another bus transaction and the master device is re-enabled to initiate the bus transaction. In many types of interactions a bus agent may act as both master and slave depending upon the state of the process.
    Type: Application
    Filed: August 26, 2024
    Publication date: February 6, 2025
    Inventors: David M. Thompson, Timothy D. Anderson, Joseph R.M. Zbiciak, Abhijeet A. Chachad, Kai Chirca, Matthew D. Pierson
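    A single-credit-type C sketch of the credit handshake in the abstract above: the master transmits only while it holds credits, spends a credit on each transaction, and the slave's credit return re-enables it. Structure and function names are illustrative assumptions.

        #include <stdio.h>

        typedef struct { int credits; }    master_t;
        typedef struct { int free_slots; } slave_t;  /* resources backing the credits */

        static int master_send(master_t *m, slave_t *s) {
            if (m->credits <= 0)
                return 0;        /* insufficient credits: may not transmit        */
            m->credits--;        /* spend a credit on transmission                */
            s->free_slots--;     /* slave must accept a properly credited request */
            return 1;
        }

        static void slave_service_and_return_credit(master_t *m, slave_t *s) {
            s->free_slots++;     /* transaction serviced, resource freed */
            m->credits++;        /* credit return re-enables the master  */
        }

        int main(void) {
            master_t m = { .credits = 2 };
            slave_t  s = { .free_slots = 2 };
            printf("send: %d\n", master_send(&m, &s)); /* 1: accepted             */
            printf("send: %d\n", master_send(&m, &s)); /* 1: accepted             */
            printf("send: %d\n", master_send(&m, &s)); /* 0: out of credits       */
            slave_service_and_return_credit(&m, &s);
            printf("send: %d\n", master_send(&m, &s)); /* 1: re-enabled by return */
            return 0;
        }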
  • Patent number: 12197917
    Abstract: A computer-implemented method includes fetching a fetch-packet containing a first hyper-block from a first address of a memory. The fetch-packet contains a bitwise distance from an entry point of the first hyper-block to a predicted exit point. A first branch instruction of the first hyper-block is executed that corresponds to a first exit point. The first branch instruction includes an address corresponding to an entry point of a second hyper-block. Responsive to executing the first branch instruction, a bitwise distance from the entry point of the first hyper-block to the first exit point is stored. A program counter is moved from the first exit point of the first hyper-block to the entry point of the second hyper-block.
    Type: Grant
    Filed: June 27, 2022
    Date of Patent: January 14, 2025
    Assignee: Texas Instruments Incorporated
    Inventors: Kai Chirca, Timothy D. Anderson, David E. Smith, Jr., Paul D. Gauvreau
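    A rough sketch of the hyper-block exit bookkeeping above: the fetch packet carries the bitwise distance from the hyper-block entry to its predicted exit, and when the exit branch actually executes, the distance from the entry to that exit is stored and the program counter moves to the next hyper-block's entry. All names and values are illustrative.

        #include <stdint.h>
        #include <stdio.h>

        typedef struct {
            uint32_t entry_addr;          /* entry point of the first hyper-block */
            uint32_t predicted_exit_dist; /* distance carried in the fetch packet */
        } fetch_packet_t;

        int main(void) {
            fetch_packet_t fp = { .entry_addr = 0x1000, .predicted_exit_dist = 0x40 };
            uint32_t pc = fp.entry_addr;

            uint32_t taken_branch_offset = 0x20;   /* first exit point actually taken */
            uint32_t next_entry          = 0x2000; /* entry of the second hyper-block */

            uint32_t actual_exit_dist = taken_branch_offset; /* stored for future use  */
            pc = next_entry;                                 /* PC moves to next entry */

            printf("predicted dist 0x%x, actual dist 0x%x, pc 0x%x\n",
                   (unsigned)fp.predicted_exit_dist, (unsigned)actual_exit_dist,
                   (unsigned)pc);
            return 0;
        }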
  • Patent number: 12197332
    Abstract: In described examples, a processor system includes a processor core generating memory transactions, a lower level cache memory with a lower memory controller, and a higher level cache memory with a higher memory controller having a memory pipeline. The higher memory controller is connected to the lower memory controller by a bypass path that skips the memory pipeline. The higher memory controller: determines whether a memory transaction is a bypass write, which is a memory write request indicated not to result in a corresponding write being directed to the higher level cache memory; if the memory transaction is determined a bypass write, determines whether a memory transaction that prevents passing is in the memory pipeline; and if no transaction that prevents passing is determined to be in the memory pipeline, sends the memory transaction to the lower memory controller using the bypass path.
    Type: Grant
    Filed: February 22, 2024
    Date of Patent: January 14, 2025
    Assignee: Texas Instruments Incorporated
    Inventors: Abhijeet Ashok Chachad, Timothy Anderson, Kai Chirca, David Matthew Thompson
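    A hedged sketch of the bypass-write decision in the abstract above: a write marked as not allocating into the higher-level cache may skip that cache's memory pipeline and go directly to the lower-level controller, but only if nothing in the pipeline prevents it from passing. Names and the blocking condition are simplified assumptions.

        #include <stdbool.h>
        #include <stdio.h>

        typedef struct {
            bool is_write;
            bool bypass; /* indicated not to result in a write to the higher-level cache */
        } mem_txn_t;

        /* True if an in-flight pipeline transaction prevents passing
         * (e.g. an older access to the same line); reduced to a flag here. */
        static bool pipeline_blocks_passing(bool older_conflict_in_pipeline) {
            return older_conflict_in_pipeline;
        }

        static const char *route(const mem_txn_t *t, bool older_conflict_in_pipeline) {
            if (t->is_write && t->bypass && !pipeline_blocks_passing(older_conflict_in_pipeline))
                return "bypass path to lower-level controller";
            return "normal path through higher-level memory pipeline";
        }

        int main(void) {
            mem_txn_t w = { .is_write = true, .bypass = true };
            printf("%s\n", route(&w, false)); /* bypass path                  */
            printf("%s\n", route(&w, true));  /* must go through the pipeline */
            return 0;
        }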
  • Publication number: 20250013569
    Abstract: A method includes receiving, by a level two (L2) controller, a first request for a cache line in a shared cache coherence state; mapping, by the L2 controller, the first request to a second request for a cache line in an exclusive cache coherence state; and responding, by the L2 controller, to the second request.
    Type: Application
    Filed: September 24, 2024
    Publication date: January 9, 2025
    Inventors: Abhijeet Ashok CHACHAD, David Matthew THOMPSON, Timothy David ANDERSON, Kai CHIRCA
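    A tiny sketch of the request mapping described above: an incoming request for a cache line in the shared coherence state is mapped to a request for the exclusive state, and the L2 controller responds to that second request. Enum and function names are illustrative only.

        #include <stdio.h>

        typedef enum { REQ_SHARED, REQ_EXCLUSIVE } coherence_req_t;

        /* L2 controller: upgrade a shared-state request to an exclusive-state one. */
        static coherence_req_t l2_map_request(coherence_req_t req) {
            return (req == REQ_SHARED) ? REQ_EXCLUSIVE : req;
        }

        int main(void) {
            coherence_req_t first  = REQ_SHARED;
            coherence_req_t second = l2_map_request(first); /* L2 responds to this one */
            printf("first=%d mapped to second=%d\n", (int)first, (int)second);
            return 0;
        }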
  • Publication number: 20250013518
    Abstract: This invention is a streaming engine employed in a digital signal processor. A fixed data stream sequence is specified by a control register. The streaming engine fetches stream data ahead of use by a central processing unit and stores it in a stream buffer. Upon occurrence of a fault reading data from memory, the streaming engine identifies the data element triggering the fault preferably storing this address in a fault address register. The streaming engine defers signaling the fault to the central processing unit until this data element is used as an operand. If the data element is never used by the central processing unit, the streaming engine never signals the fault. The streaming engine preferably stores data identifying the fault in a fault source register. The fault address register and the fault source register are preferably extended control registers accessible only via a debugger.
    Type: Application
    Filed: September 23, 2024
    Publication date: January 9, 2025
    Inventors: Joseph Zbiciak, Timothy D. Anderson, Duc Bui, Kai Chirca
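    A behavioral sketch of the deferred fault handling in the abstract above: a fault seen while prefetching stream data is recorded (address and source), but it is raised to the CPU only if the faulting element is actually used as an operand. Register and field names are assumptions.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        typedef struct {
            bool     fault_pending;
            uint64_t fault_addr;  /* models the fault address register */
            uint32_t fault_src;   /* models the fault source register  */
            uint64_t fault_elem;  /* stream element index that faulted */
        } stream_engine_t;

        static void record_fetch_fault(stream_engine_t *se, uint64_t addr,
                                       uint32_t src, uint64_t elem) {
            se->fault_pending = true;
            se->fault_addr = addr;
            se->fault_src  = src;
            se->fault_elem = elem;
        }

        /* CPU consumes stream element `elem`; the fault is signaled only now,
         * and only if this is the element that triggered it. */
        static void consume_element(stream_engine_t *se, uint64_t elem) {
            if (se->fault_pending && elem == se->fault_elem) {
                printf("fault signaled: addr=0x%llx src=%u\n",
                       (unsigned long long)se->fault_addr, (unsigned)se->fault_src);
                se->fault_pending = false;
            }
        }

        int main(void) {
            stream_engine_t se = {0};
            record_fetch_fault(&se, 0xDEAD000, 2, 7);
            consume_element(&se, 6); /* not the faulting element: no fault signaled */
            consume_element(&se, 7); /* faulting element is used: fault fires       */
            return 0;
        }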
  • Patent number: 12182398
    Abstract: A device includes a data path, a first interface configured to receive a first memory access request from a first peripheral device, and a second interface configured to receive a second memory access request from a second peripheral device. The device further includes an arbiter circuit configured to, in a first clock cycle, select a pre-arbitration winner between a first memory access request and a second memory access request based on a first number of credits allocated to a first destination device and a second number of credits allocated to a second destination device. The arbiter circuit is further configured to, in a second clock cycle, select a final arbitration winner from among the pre-arbitration winner and a subsequent memory access request based on a comparison of a priority of the pre-arbitration winner and a priority of the subsequent memory access request.
    Type: Grant
    Filed: June 12, 2023
    Date of Patent: December 31, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Matthew David Pierson, Daniel Wu, Kai Chirca
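    A two-stage arbitration sketch matching the abstract above: in the first clock cycle a pre-arbitration winner is chosen by comparing the credits available at each request's destination, and in the second clock cycle that winner is compared by priority against a subsequently arrived request. The tie-breaking rules and names are assumptions.

        #include <stdio.h>

        typedef struct {
            int id;
            int dest_credits; /* credits allocated to this request's destination */
            int priority;     /* larger = more urgent                            */
        } req_t;

        static req_t pre_arbitrate(req_t a, req_t b) {            /* clock cycle 1 */
            return (a.dest_credits >= b.dest_credits) ? a : b;
        }

        static req_t final_arbitrate(req_t winner, req_t newer) { /* clock cycle 2 */
            return (newer.priority > winner.priority) ? newer : winner;
        }

        int main(void) {
            req_t r1 = { .id = 1, .dest_credits = 3, .priority = 1 };
            req_t r2 = { .id = 2, .dest_credits = 1, .priority = 2 };
            req_t r3 = { .id = 3, .dest_credits = 2, .priority = 5 }; /* subsequent */

            req_t pre = pre_arbitrate(r1, r2);
            req_t fin = final_arbitrate(pre, r3);
            printf("pre-arbitration winner: %d, final winner: %d\n", pre.id, fin.id);
            return 0;
        }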
  • Patent number: 12175244
    Abstract: A nested loop controller includes a first register having a first value initialized to an initial first value, a second register having a second value initialized to an initial second value, and a third register configured as a predicate FIFO, initialized to have a third value. The second value is advanced in response to a tick instruction during execution of a loop. In response to the second value reaching a second threshold, the second register is reset to the initial second value. The nested loop controller further includes a comparator coupled to the second register and to the predicate FIFO and configured to provide an outer loop indicator value as input to the predicate FIFO when the second value is equal to the second threshold, and provide an inner loop indicator value as input to the predicate FIFO when the second value is not equal to the second threshold.
    Type: Grant
    Filed: November 13, 2023
    Date of Patent: December 24, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Kai Chirca, Timothy D. Anderson, Todd T. Hahn, Alan L. Davis
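    A behavioral sketch of the nested loop controller above: each tick advances the inner (second) count, and a comparator pushes an outer-loop indicator into the predicate FIFO when that count reaches its threshold (resetting it) and an inner-loop indicator otherwise. The FIFO is modeled as a plain array; names are illustrative.

        #include <stdio.h>

        #define OUTER_IND  1
        #define INNER_IND  0
        #define FIFO_DEPTH 16

        typedef struct {
            int inner, inner_init, inner_threshold;
            int fifo[FIFO_DEPTH];
            int head;
        } nlc_t;

        static void tick(nlc_t *c) {
            c->inner++;                            /* advance the inner count         */
            int ind = (c->inner == c->inner_threshold) ? OUTER_IND : INNER_IND;
            if (ind == OUTER_IND)
                c->inner = c->inner_init;          /* reset on reaching the threshold */
            c->fifo[c->head++ % FIFO_DEPTH] = ind; /* comparator output -> FIFO       */
        }

        int main(void) {
            nlc_t c = { .inner = 0, .inner_init = 0, .inner_threshold = 3 };
            for (int i = 0; i < 7; i++)
                tick(&c);
            for (int i = 0; i < c.head; i++)
                printf("%d", c.fifo[i]);           /* prints 0010010 */
            printf("\n");
            return 0;
        }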
  • Publication number: 20240411703
    Abstract: Disclosed embodiments include an electronic device having a processor core, a memory, a register, and a data load unit to receive a plurality of data elements stored in the memory in response to an instruction. All of the data elements have the same data size, which is specified by one or more coding bits. The data load unit includes an address generator to generate addresses corresponding to locations in the memory at which the data elements are located, and a formatting unit to format the data elements. The register is configured to store the formatted data elements, and the processor core is configured to receive the formatted data elements from the register.
    Type: Application
    Filed: August 23, 2024
    Publication date: December 12, 2024
    Inventors: Timothy D. Anderson, Joseph Zbiciak, Duc Quang Bui, Abhijeet Chachad, Kai Chirca, Naveen Bhoria, Matthew D. Pierson, Daniel Wu, Ramakrishnan Venkatasubramanian
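    A sketch of the data-load path described above: an address generator produces the element addresses, elements of the common size encoded by the coding bits are fetched and packed by a formatting unit, and the result lands in a register for the core. The power-of-two size encoding, unit stride, and all names are assumptions for illustration.

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        /* Coding bits -> element size in bytes (assumed encoding: 0->1, 1->2, 2->4). */
        static size_t element_size(unsigned coding_bits) { return (size_t)1 << coding_bits; }

        int main(void) {
            /* Eight little-endian 16-bit elements, values 1..8, stored in memory. */
            uint8_t memory[16] = { 1,0, 2,0, 3,0, 4,0, 5,0, 6,0, 7,0, 8,0 };
            unsigned coding_bits = 1;        /* all elements are 2 bytes */
            size_t esz = element_size(coding_bits);

            uint64_t dest_register[2] = {0}; /* register receiving formatted data */
            uint8_t *dst = (uint8_t *)dest_register;

            for (size_t i = 0; i < 8; i++) {
                size_t addr = i * esz;                     /* address generator: unit stride */
                memcpy(dst + i * esz, memory + addr, esz); /* formatting unit: pack elements */
            }
            printf("register[0] = 0x%016llx\n", (unsigned long long)dest_register[0]);
            return 0;
        }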
  • Patent number: 12159030
    Abstract: Techniques including receiving configuration information for a trigger control channel of the one or more trigger control channels, the configuration information defining a first one or more triggering events, receiving a first memory management command, storing the first memory management command, detecting a first one or more triggering events, and triggering the stored first memory management command based on the detected first one or more triggering events.
    Type: Grant
    Filed: September 8, 2023
    Date of Patent: December 3, 2024
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Kai Chirca, Matthew David Pierson, David E. Smith, Timothy David Anderson
  • Publication number: 20240385840
    Abstract: A stream of data is accessed from a memory system using a stream of addresses generated in a first mode of operating a streaming engine in response to executing a first stream instruction. A block cache management operation is performed on a cache in the memory using a block of addresses generated in a second mode of operating the streaming engine in response to executing a second stream instruction.
    Type: Application
    Filed: July 29, 2024
    Publication date: November 21, 2024
    Inventors: Joseph Raymond Michael Zbiciak, Timothy David Anderson, Jonathan (Son) Hung Tran, Kai Chirca, Daniel Wu, Abhijeet Ashok Chachad, David M. Thompson
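    A small sketch of the two streaming-engine modes above: one address generator either feeds data fetches (first mode, from the first stream instruction) or drives a block cache management operation over a range of lines (second mode, from the second stream instruction). The mode names and the cache operation are illustrative.

        #include <stdint.h>
        #include <stdio.h>

        typedef enum { MODE_DATA_STREAM, MODE_BLOCK_CACHE_OP } se_mode_t;

        static void run_stream(se_mode_t mode, uint64_t base, uint64_t count, uint64_t stride) {
            for (uint64_t i = 0; i < count; i++) {
                uint64_t addr = base + i * stride;
                if (mode == MODE_DATA_STREAM)
                    printf("fetch stream data at 0x%llx\n", (unsigned long long)addr);
                else
                    printf("cache management op on line 0x%llx\n", (unsigned long long)addr);
            }
        }

        int main(void) {
            run_stream(MODE_DATA_STREAM,    0x1000, 3, 8);  /* first stream instruction  */
            run_stream(MODE_BLOCK_CACHE_OP, 0x2000, 3, 64); /* second stream instruction */
            return 0;
        }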
  • Patent number: 12141435
    Abstract: A device includes a memory bank. The memory bank includes data portions of a first way group. The data portions of the first way group include a data portion of a first way of the first way group and a data portion of a second way of the first way group. The memory bank further includes data portions of a second way group. The device further includes a configuration register and a controller configured to individually allocate, based on one or more settings in the configuration register, the first way and the second way to one of an addressable memory space and a data cache.
    Type: Grant
    Filed: August 3, 2023
    Date of Patent: November 12, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Kai Chirca, Matthew David Pierson
  • Patent number: 12135646
    Abstract: A method includes receiving, by a level two (L2) controller, a first request for a cache line in a shared cache coherence state; mapping, by the L2 controller, the first request to a second request for a cache line in an exclusive cache coherence state; and responding, by the L2 controller, to the second request.
    Type: Grant
    Filed: May 30, 2023
    Date of Patent: November 5, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Abhijeet Ashok Chachad, David Matthew Thompson, Timothy David Anderson, Kai Chirca
  • Patent number: 12099400
    Abstract: This invention is a streaming engine employed in a digital signal processor. A fixed data stream sequence is specified by a control register. The streaming engine fetches stream data ahead of use by a central processing unit and stores it in a stream buffer. Upon occurrence of a fault reading data from memory, the streaming engine identifies the data element triggering the fault preferably storing this address in a fault address register. The streaming engine defers signaling the fault to the central processing unit until this data element is used as an operand. If the data element is never used by the central processing unit, the streaming engine never signals the fault. The streaming engine preferably stores data identifying the fault in a fault source register. The fault address register and the fault source register are preferably extended control registers accessible only via a debugger.
    Type: Grant
    Filed: February 6, 2023
    Date of Patent: September 24, 2024
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Joseph Zbiciak, Timothy D. Anderson, Duc Bui, Kai Chirca
  • Patent number: 12072824
    Abstract: This invention is a bus communication protocol. A master device stores bus credits. The master device may transmit a bus transaction only if it holds a sufficient number and type of bus credits. Upon transmission, the master device decrements the number of stored bus credits. The bus credits correspond to resources on a slave device for receiving bus transactions. The slave device must receive the bus transaction if accompanied by the proper credits. The slave device services the transaction. The slave device then transmits a credit return. The master device adds the corresponding number and types of credits to the stored amount. The slave device is ready to accept another bus transaction and the master device is re-enabled to initiate the bus transaction. In many types of interactions a bus agent may act as both master and slave depending upon the state of the process.
    Type: Grant
    Filed: September 24, 2020
    Date of Patent: August 27, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: David M. Thompson, Timothy D. Anderson, Joseph R. M. Zbiciak, Abhijeet A Chachad, Kai Chirca, Matthew D. Pierson
  • Patent number: 12072812
    Abstract: Disclosed embodiments include an electronic device having a processor core, a memory, a register, and a data load unit to receive a plurality of data elements stored in the memory in response to an instruction. All of the data elements have the same data size, which is specified by one or more coding bits. The data load unit includes an address generator to generate addresses corresponding to locations in the memory at which the data elements are located, and a formatting unit to format the data elements. The register is configured to store the formatted data elements, and the processor core is configured to receive the formatted data elements from the register.
    Type: Grant
    Filed: April 22, 2021
    Date of Patent: August 27, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Timothy D. Anderson, Joseph Zbiciak, Duc Quang Bui, Abhijeet Chachad, Kai Chirca, Naveen Bhoria, Matthew D. Pierson, Daniel Wu, Ramakrishnan Venkatasubramanian
  • Publication number: 20240281231
    Abstract: A method for compiling and executing a nested loop includes initializing a nested loop controller with an outer loop count value and an inner loop count value. The nested loop controller includes a predicate FIFO. The method also includes coalescing the nested loop and, during execution of the coalesced nested loop, causing the nested loop controller to populate the predicate FIFO and executing a get predicate instruction having an offset value, where the get predicate returns a value from the predicate FIFO specified by the offset value. The method further includes predicating an outer loop instruction on the returned value from the predicate FIFO.
    Type: Application
    Filed: April 29, 2024
    Publication date: August 22, 2024
    Inventors: Kai CHIRCA, Timothy D. ANDERSON, Todd T. HAHN, Alan L. DAVIS
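    A sketch of the coalesced-loop execution model above: the inner and outer loops are collapsed into a single loop, and the instruction belonging to the outer loop is predicated on a value returned from the predicate FIFO at a given offset. The FIFO contents and offset handling are simplified assumptions.

        #include <stdio.h>

        #define ITERS 6

        int main(void) {
            /* Predicate FIFO populated by the nested loop controller: a 1 marks the
             * iterations at which the outer-loop instruction should execute. */
            int predicate_fifo[ITERS] = { 0, 0, 1, 0, 0, 1 };
            int offset = 0;     /* offset used by the get-predicate instruction */

            int inner_work = 0, outer_work = 0;
            for (int i = 0; i < ITERS; i++) {  /* single coalesced loop */
                inner_work++;                  /* inner-loop body, every iteration   */
                int pred = predicate_fifo[(i + offset) % ITERS]; /* "get predicate"  */
                if (pred)
                    outer_work++;              /* predicated outer-loop instruction  */
            }
            printf("inner=%d outer=%d\n", inner_work, outer_work); /* inner=6 outer=2 */
            return 0;
        }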
  • Publication number: 20240265062
    Abstract: A method for performing a fundamental computational primitive in a device is provided, where the device includes a processor and a matrix multiplication accelerator (MMA). The method includes configuring a streaming engine in the device to stream data for the fundamental computational primitive from memory, configuring the MMA to format the data, and executing the fundamental computational primitive by the device.
    Type: Application
    Filed: April 12, 2024
    Publication date: August 8, 2024
    Inventors: Arthur John Redfern, Timothy David Anderson, Kai Chirca, Chenchi Luo, Zhenhua Yu
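    A loose illustration of the flow in the abstract above: data for a fundamental computational primitive (a small matrix multiply here) is streamed from memory and fed to a routine standing in for the matrix multiplication accelerator. The streaming-engine and MMA configuration steps are collapsed into plain loops; nothing here reflects the actual device interfaces.

        #include <stdio.h>

        #define N 2

        int main(void) {
            /* "Streamed" operands, laid out row-major in memory. */
            int a[N][N] = { {1, 2}, {3, 4} };
            int b[N][N] = { {5, 6}, {7, 8} };
            int c[N][N] = { {0} };

            /* Stand-in for the MMA executing the primitive C = A * B. */
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++)
                    for (int k = 0; k < N; k++)
                        c[i][j] += a[i][k] * b[k][j];

            printf("%d %d\n%d %d\n", c[0][0], c[0][1], c[1][0], c[1][1]); /* 19 22 / 43 50 */
            return 0;
        }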