Patents by Inventor Kai Chirca
Kai Chirca has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12386696
Abstract: Techniques for maintaining cache coherency comprising storing data blocks associated with a main process in a cache line of a main cache memory, storing a first local copy of the data blocks in a first local cache memory of a first processor, storing a second local copy of the set of data blocks in a second local cache memory of a second processor executing a first child process of the main process to generate first output data, writing the first output data to the first data block of the first local copy as a write through, writing the first output data to the first data block of the main cache memory as a part of the write through, transmitting an invalidate request to the second local cache memory, marking the second local copy of the set of data blocks as delayed, and transmitting an acknowledgment to the invalidate request.
Type: Grant
Filed: November 17, 2023
Date of Patent: August 12, 2025
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Kai Chirca, Timothy David Anderson
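As a rough illustration of the write-through-plus-delayed-invalidate flow this abstract describes, the C sketch below models one shared line and two local copies. All structure and function names are hypothetical stand-ins, not the patented hardware.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define LINE_WORDS 4

/* Hypothetical model of one cached copy of a data block. */
typedef struct {
    int  words[LINE_WORDS];
    bool valid;
    bool delayed_invalidate;   /* marked "delayed" rather than dropped at once */
} cache_line_t;

static cache_line_t main_cache, local1, local2;

/* Write-through: update the writer's local copy and the main cache together. */
static void write_through(cache_line_t *local, int idx, int value)
{
    local->words[idx]     = value;
    main_cache.words[idx] = value;
}

/* Invalidate request to the other core: mark the copy delayed, acknowledge at once. */
static bool send_invalidate(cache_line_t *remote)
{
    remote->delayed_invalidate = true;   /* actual invalidation is deferred */
    return true;                         /* acknowledgment back to the requester */
}

int main(void)
{
    memset(&main_cache, 0, sizeof main_cache);
    local1 = local2 = main_cache;
    local1.valid = local2.valid = main_cache.valid = true;

    write_through(&local1, 0, 42);        /* child process writes its output */
    bool ack = send_invalidate(&local2);  /* other copy marked delayed */

    printf("main=%d local1=%d local2 delayed=%d ack=%d\n",
           main_cache.words[0], local1.words[0],
           local2.delayed_invalidate, ack);
    return 0;
}
```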
-
Patent number: 12379929
Abstract: An integrated circuit comprising instruction processing circuitry for processing a plurality of program instructions and instruction prediction circuitry. The instruction prediction circuitry comprises circuitry for detecting successive occurrences of a same program loop sequence of program instructions. The instruction prediction circuitry also comprises circuitry for predicting a number of iterations of the same program loop sequence of program instructions, in response to detecting, by the circuitry for detecting, that a second occurrence of the same program loop sequence of program instructions comprises a same number of iterations as a first occurrence of the same program loop sequence of program instructions.
Type: Grant
Filed: January 13, 2024
Date of Patent: August 5, 2025
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Kai Chirca, Paul Daniel Gauvreau, David Edward Smith, Jr.
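A minimal C sketch of the iteration-count predictor the abstract describes: a prediction is only made once two successive occurrences of the loop retire with the same count. The predictor structure and its fields are illustrative assumptions.

```c
#include <stdio.h>

/* Hypothetical per-loop predictor state. */
typedef struct {
    unsigned last_count;   /* iterations seen on the previous occurrence */
    unsigned seen;         /* how many occurrences have completed */
    int      confident;    /* two matching occurrences -> make a prediction */
} loop_predictor_t;

/* Record one completed occurrence of the loop. */
static void loop_retire(loop_predictor_t *p, unsigned iterations)
{
    if (p->seen > 0 && iterations == p->last_count)
        p->confident = 1;          /* same count twice in a row */
    else
        p->confident = 0;
    p->last_count = iterations;
    p->seen++;
}

/* Predicted iteration count, or -1 if no prediction is made. */
static int loop_predict(const loop_predictor_t *p)
{
    return p->confident ? (int)p->last_count : -1;
}

int main(void)
{
    loop_predictor_t p = {0};
    loop_retire(&p, 8);                             /* first occurrence: 8 iterations */
    printf("prediction: %d\n", loop_predict(&p));   /* -1, not confident yet */
    loop_retire(&p, 8);                             /* second occurrence matches */
    printf("prediction: %d\n", loop_predict(&p));   /* 8 */
    return 0;
}
```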
-
Patent number: 12373515
Abstract: A method for performing a fundamental computational primitive in a device is provided, where the device includes a processor and a matrix multiplication accelerator (MMA). The method includes configuring a streaming engine in the device to stream data for the fundamental computational primitive from memory, configuring the MMA to format the data, and executing the fundamental computational primitive by the device.
Type: Grant
Filed: April 12, 2024
Date of Patent: July 29, 2025
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Arthur John Redfern, Timothy David Anderson, Kai Chirca, Chenchi Luo, Zhenhua Yu
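The three steps in the abstract (configure the streaming engine, configure the MMA's data formatting, execute the primitive) can be mocked in plain C. The descriptors and the software matrix multiply below are stand-ins for the accelerator, not the patented hardware interface.

```c
#include <stdio.h>

/* Hypothetical descriptors for the streaming engine and MMA configuration. */
typedef struct { const void *base; int rows, cols, stride; } stream_cfg_t;
typedef struct { int a_rows, a_cols, b_cols; } mma_cfg_t;

/* Step 1: program the streaming engine with how to walk an operand in memory. */
static void configure_stream(stream_cfg_t *s, const void *base,
                             int rows, int cols, int stride)
{
    s->base = base; s->rows = rows; s->cols = cols; s->stride = stride;
}

/* Steps 2-3: apply the configured formatting and run the primitive
   (a plain matrix multiply standing in for the offloaded operation). */
static void mma_execute(const mma_cfg_t *c, const float *a, const float *b, float *out)
{
    for (int i = 0; i < c->a_rows; i++)
        for (int j = 0; j < c->b_cols; j++) {
            float acc = 0.0f;
            for (int k = 0; k < c->a_cols; k++)
                acc += a[i * c->a_cols + k] * b[k * c->b_cols + j];
            out[i * c->b_cols + j] = acc;
        }
}

int main(void)
{
    float a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8}, out[4];
    stream_cfg_t sa, sb;
    configure_stream(&sa, a, 2, 2, 2);
    configure_stream(&sb, b, 2, 2, 2);
    mma_cfg_t cfg = { .a_rows = 2, .a_cols = 2, .b_cols = 2 };
    mma_execute(&cfg, a, b, out);
    printf("out[0][0]=%g out[1][1]=%g\n", out[0], out[3]);
    return 0;
}
```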
-
Publication number: 20250231684
Abstract: A device includes a data path, a first interface configured to receive a first memory access request from a first peripheral device, and a second interface configured to receive a second memory access request from a second peripheral device. The device further includes an arbiter circuit configured to, in a first clock cycle, select a pre-arbitration winner between the first memory access request and the second memory access request based on a first number of credits allocated to a first destination device and a second number of credits allocated to a second destination device. The arbiter circuit is further configured to, in a second clock cycle, select a final arbitration winner from among the pre-arbitration winner and a subsequent memory access request based on a comparison of a priority of the pre-arbitration winner and a priority of the subsequent memory access request.
Type: Application
Filed: December 18, 2024
Publication date: July 17, 2025
Inventors: Matthew David PIERSON, Daniel WU, Kai CHIRCA
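A toy two-cycle arbiter in C, under the assumption that the pre-arbitration winner is the request whose destination holds more credits and the final winner is whichever request carries higher priority; the request fields and names are hypothetical.

```c
#include <stdio.h>

/* Hypothetical memory access request. */
typedef struct {
    const char *name;
    int dest_credits;   /* credits available for this request's destination */
    int priority;       /* larger value = more urgent */
} req_t;

/* Cycle 1: pre-arbitrate on credits available to each destination. */
static const req_t *pre_arbitrate(const req_t *a, const req_t *b)
{
    return (a->dest_credits >= b->dest_credits) ? a : b;
}

/* Cycle 2: compare the pre-arbitration winner against a newly arrived request. */
static const req_t *final_arbitrate(const req_t *pre, const req_t *next)
{
    return (next->priority > pre->priority) ? next : pre;
}

int main(void)
{
    req_t r1 = {"periph-A", 4, 1};
    req_t r2 = {"periph-B", 2, 1};
    req_t r3 = {"periph-C", 1, 7};   /* arrives one cycle later, higher priority */

    const req_t *pre = pre_arbitrate(&r1, &r2);
    const req_t *win = final_arbitrate(pre, &r3);
    printf("pre-arbitration winner: %s, final winner: %s\n", pre->name, win->name);
    return 0;
}
```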
-
Patent number: 12360843
Abstract: Techniques for accessing memory by a memory controller, comprising receiving, by the memory controller, a memory management command to perform a memory management operation at a virtual memory address, translating the virtual memory address to a physical memory address, wherein the physical memory address comprises an address within a cache memory, and outputting an instruction to the cache memory based on the memory management command and the physical memory address.
Type: Grant
Filed: June 30, 2021
Date of Patent: July 15, 2025
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Kai Chirca, Timothy David Anderson, Joseph Zbiciak, David E. Smith, Matthew David Pierson
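A small C sketch of the flow: translate the virtual address of a memory management command, then direct the resulting operation to the cache when the physical address falls in a cache-backed window. The translation function and address map are illustrative assumptions, not the patented scheme.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12u
#define CACHE_BASE 0x40000000u   /* hypothetical physical window backed by cache */
#define CACHE_SIZE 0x00100000u

/* Toy translation: virtual page number -> physical page number. */
static uint32_t translate(uint32_t vaddr)
{
    uint32_t vpn = vaddr >> PAGE_SHIFT;   /* virtual page number */
    uint32_t ppn = vpn ^ 0x40000u;        /* stand-in for a page-table lookup */
    return (ppn << PAGE_SHIFT) | (vaddr & ((1u << PAGE_SHIFT) - 1));
}

/* Memory management command applied at a virtual address. */
static void mm_command(const char *op, uint32_t vaddr)
{
    uint32_t paddr = translate(vaddr);
    if (paddr >= CACHE_BASE && paddr < CACHE_BASE + CACHE_SIZE)
        printf("issue '%s' to cache at 0x%08x\n", op, (unsigned)paddr);
    else
        printf("issue '%s' to memory at 0x%08x\n", op, (unsigned)paddr);
}

int main(void)
{
    mm_command("writeback-invalidate", 0x00001080u);
    return 0;
}
```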
-
Patent number: 12360844
Abstract: A device includes a data path, a first interface configured to receive a first memory access request from a first peripheral device, and a second interface configured to receive a second memory access request from a second peripheral device. The device further includes an arbiter circuit configured to determine a first destination device connected to the data path and associated with the first memory access request and a first credit threshold corresponding to the first memory access request. The arbiter circuit is further configured to determine a second destination device connected to the data path and associated with the second memory access request and a second credit threshold corresponding to the second memory access request. The arbiter circuit is configured to arbitrate access to the data path by the first memory access request and the second memory access request based on the first credit threshold and the second credit threshold.
Type: Grant
Filed: July 28, 2022
Date of Patent: July 15, 2025
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Matthew David Pierson, Kai Chirca, Daniel Wu
-
Patent number: 12333284
Abstract: A method for compiling and executing a nested loop includes initializing a nested loop controller with an outer loop count value and an inner loop count value. The nested loop controller includes a predicate FIFO. The method also includes coalescing the nested loop and, during execution of the coalesced nested loop, causing the nested loop controller to populate the predicate FIFO and executing a get predicate instruction having an offset value, where the get predicate returns a value from the predicate FIFO specified by the offset value. The method further includes predicating an outer loop instruction on the returned value from the predicate FIFO.
Type: Grant
Filed: April 29, 2024
Date of Patent: June 17, 2025
Assignee: Texas Instruments Incorporated
Inventors: Kai Chirca, Timothy D. Anderson, Todd T. Hahn, Alan L. Davis
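A compact C model of the predicate FIFO and the offset-based get predicate instruction described above; the FIFO depth, push pattern, and offset are illustrative only.

```c
#include <stdio.h>

#define PFIFO_DEPTH 8

/* Hypothetical predicate FIFO populated by the nested loop controller. */
typedef struct {
    int vals[PFIFO_DEPTH];
    int head;    /* index of the oldest entry */
    int count;
} pred_fifo_t;

static void pfifo_push(pred_fifo_t *f, int v)
{
    f->vals[(f->head + f->count) % PFIFO_DEPTH] = v;
    f->count++;
}

/* "Get predicate" with an offset: read a value relative to the FIFO head. */
static int get_predicate(const pred_fifo_t *f, int offset)
{
    return f->vals[(f->head + offset) % PFIFO_DEPTH];
}

int main(void)
{
    pred_fifo_t f = {0};
    /* The controller pushes 1 at an outer-loop boundary (inner loop wrapped),
       0 otherwise, while the coalesced loop executes. */
    int pattern[] = {0, 0, 1, 0, 0, 1};
    for (unsigned i = 0; i < sizeof pattern / sizeof pattern[0]; i++)
        pfifo_push(&f, pattern[i]);

    int p = get_predicate(&f, 2);       /* offset chosen by the compiler */
    if (p)                              /* outer-loop instruction predicated on p */
        printf("execute outer-loop instruction\n");
    return 0;
}
```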
-
Publication number: 20250181238
Abstract: A system includes a multi-core shared memory controller (MSMC). The MSMC includes a snoop filter bank, a cache tag bank, and a memory bank. The cache tag bank is connected to both the snoop filter bank and the memory bank. The MSMC further includes a first coherent slave interface connected to a data path that is connected to the snoop filter bank. The MSMC further includes a second coherent slave interface connected to the data path that is connected to the snoop filter bank. The MSMC further includes an external memory master interface connected to the cache tag bank and the memory bank. The system further includes a first processor package connected to the first coherent slave interface and a second processor package connected to the second coherent slave interface. The system further includes an external memory device connected to the external memory master interface.
Type: Application
Filed: February 6, 2025
Publication date: June 5, 2025
Inventors: Matthew David PIERSON, Kai CHIRCA, Timothy David ANDERSON
-
Patent number: 12321282
Abstract: A prefetch unit includes multiple memories; and a memory controller coupled to the multiple memories. The memory controller includes a prefetch stream filter and a prefetch buffer. The prefetch stream filter includes a first set of address slots and a set of direction prediction fields, each of which is associated with a respective one of the address slots of the first set of address slots. The prefetch buffer includes a set of buffer slots, each slot of the set of buffer slots including an address field, a direction prediction field, a data pending field, a data valid field, and a set of sub-slots configured to store data, wherein each address field of each slot of the set of buffer slots is configured to store at least a portion of an address associated with the corresponding slot.
Type: Grant
Filed: September 7, 2023
Date of Patent: June 3, 2025
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Kai Chirca, Joseph R. M. Zbiciak, Matthew D. Pierson
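The slot fields named in the abstract can be pictured as C structs; the field widths, sub-slot count, and line size below are assumptions for illustration, not the patented layout.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SUB_SLOTS 2

/* Hypothetical layout of one prefetch buffer slot, mirroring the named fields. */
typedef struct {
    uint64_t address;                 /* (part of) the address for this slot */
    bool     direction_increasing;    /* direction prediction field */
    bool     data_pending;            /* prefetch issued, data not yet returned */
    bool     data_valid;              /* data has returned and may be consumed */
    uint8_t  data[SUB_SLOTS][64];     /* sub-slots holding prefetched lines */
} prefetch_slot_t;

/* Hypothetical stream-filter entry: candidate address plus a direction guess. */
typedef struct {
    uint64_t address;
    bool     direction_increasing;
} filter_slot_t;

int main(void)
{
    filter_slot_t f = { .address = 0x8000, .direction_increasing = true };
    prefetch_slot_t s = {
        .address = f.address + 64,    /* prefetch the next line in the stream */
        .direction_increasing = f.direction_increasing,
        .data_pending = true,
        .data_valid   = false,
    };
    printf("prefetching 0x%llx, pending=%d\n",
           (unsigned long long)s.address, s.data_pending);
    return 0;
}
```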
-
Patent number: 12314187
Abstract: A method includes receiving, by a memory management unit (MMU) comprising a translation lookaside buffer (TLB) and a configuration register, a request from a processor core to directly modify an entry in the TLB. The method also includes, responsive to the configuration register having a first value, operating the MMU in a software-managed mode by modifying the entry in the TLB according to the request. The method further includes, responsive to the configuration register having a second value, operating the MMU in a hardware-managed mode by denying the request.
Type: Grant
Filed: December 20, 2023
Date of Patent: May 27, 2025
Assignee: Texas Instruments Incorporated
Inventors: Timothy D. Anderson, Joseph Raymond Michael Zbiciak, Kai Chirca, Daniel Brad Wu
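A few lines of C capture the mode check: a direct TLB write request is honored when the configuration register selects software-managed mode and denied otherwise. The register values and TLB layout are hypothetical.

```c
#include <stdio.h>

#define MODE_SOFTWARE_MANAGED 1   /* hypothetical configuration register values */
#define MODE_HARDWARE_MANAGED 2

typedef struct { unsigned long vpn, ppn; } tlb_entry_t;

typedef struct {
    int         config;       /* selects software- or hardware-managed mode */
    tlb_entry_t tlb[16];
} mmu_t;

/* Core asks to directly modify a TLB entry; allowed only in software mode. */
static int tlb_write_request(mmu_t *mmu, int idx, tlb_entry_t e)
{
    if (mmu->config == MODE_SOFTWARE_MANAGED) {
        mmu->tlb[idx] = e;      /* honor the request */
        return 0;
    }
    return -1;                  /* hardware-managed mode: deny the request */
}

int main(void)
{
    mmu_t mmu = { .config = MODE_HARDWARE_MANAGED };
    tlb_entry_t e = { .vpn = 0x10, .ppn = 0x90 };
    printf("request %s\n",
           tlb_write_request(&mmu, 0, e) == 0 ? "honored" : "denied");
    return 0;
}
```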
-
Publication number: 20250147765
Abstract: A method of branch prediction includes determining that a processor is to execute at least a portion of a first set of instructions. An address associated with a sequentially first instruction of the first set of instructions is determined, and a branch prediction index is determined based on the address and a branch history. A table is queried based on the branch prediction index to determine a predicted exit point of the first set of instructions. The processor fetches a subset of the first set of instructions based on the predicted exit point.
Type: Application
Filed: January 10, 2025
Publication date: May 8, 2025
Inventors: Kai Chirca, Timothy D. Anderson, David E. Smith, JR., Paul D. Gauvreau
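A sketch of the lookup in C, assuming the index is a simple hash of the block's start address and the branch history; the table size and hash function are not from the publication.

```c
#include <stdint.h>
#include <stdio.h>

#define TABLE_SIZE 256

/* Hypothetical exit-prediction table: index -> predicted exit offset. */
static uint8_t exit_table[TABLE_SIZE];

/* Fold the block's start address with the branch history into a table index. */
static unsigned predict_index(uint64_t start_addr, uint32_t branch_history)
{
    return (unsigned)((start_addr >> 2) ^ branch_history) % TABLE_SIZE;
}

int main(void)
{
    uint64_t start = 0x1000;   /* address of the sequentially first instruction */
    uint32_t hist  = 0x2b;     /* recent taken/not-taken history */
    unsigned idx   = predict_index(start, hist);

    exit_table[idx] = 12;      /* trained earlier: exit after 12 instructions */

    unsigned predicted_exit = exit_table[idx];
    printf("fetch %u instructions starting at 0x%llx\n",
           predicted_exit, (unsigned long long)start);
    return 0;
}
```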
-
Publication number: 20250117224
Abstract: A nested loop controller includes a first register having a first value initialized to an initial first value, a second register having a second value initialized to an initial second value, and a third register configured as a predicate FIFO, initialized to have a third value. The second value is advanced in response to a tick instruction during execution of a loop. In response to the second value reaching a second threshold, the second register is reset to the initial second value. The nested loop controller further includes a comparator coupled to the second register and to the predicate FIFO and configured to provide an outer loop indicator value as input to the predicate FIFO when the second value is equal to the second threshold, and provide an inner loop indicator value as input to the predicate FIFO when the second value is not equal to the second threshold.
Type: Application
Filed: December 16, 2024
Publication date: April 10, 2025
Inventors: Kai Chirca, Timothy D. Anderson, Todd T. Hahn, Alan L. Davis
-
Publication number: 20250103502
Abstract: In described examples, a processor system includes a processor core generating memory transactions, a lower level cache memory with a lower memory controller, and a higher level cache memory with a higher memory controller having a memory pipeline. The higher memory controller is connected to the lower memory controller by a bypass path that skips the memory pipeline. The higher memory controller: determines whether a memory transaction is a bypass write, which is a memory write request indicated not to result in a corresponding write being directed to the higher level cache memory; if the memory transaction is determined a bypass write, determines whether a memory transaction that prevents passing is in the memory pipeline; and if no transaction that prevents passing is determined to be in the memory pipeline, sends the memory transaction to the lower memory controller using the bypass path.
Type: Application
Filed: December 11, 2024
Publication date: March 27, 2025
Inventors: Abhijeet Ashok Chachad, Timothy Anderson, Kai Chirca, David Matthew Thompson
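The routing decision can be sketched in C: only a write that will not allocate in the higher-level cache may use the bypass path, and only when no in-flight pipeline transaction prevents passing. The structures and flags below are illustrative.

```c
#include <stdbool.h>
#include <stdio.h>

#define PIPE_DEPTH 4

typedef struct {
    bool valid;
    bool prevents_passing;   /* e.g. an overlapping access that must stay ordered */
} pipe_stage_t;

typedef struct {
    bool is_write;
    bool allocate_in_higher_cache;   /* false -> candidate for the bypass path */
} mem_txn_t;

/* Route a transaction either through the memory pipeline or around it. */
static const char *route(const mem_txn_t *t, const pipe_stage_t pipe[PIPE_DEPTH])
{
    bool bypass_write = t->is_write && !t->allocate_in_higher_cache;
    if (!bypass_write)
        return "memory pipeline";
    for (int i = 0; i < PIPE_DEPTH; i++)
        if (pipe[i].valid && pipe[i].prevents_passing)
            return "memory pipeline";   /* ordering hazard: do not bypass */
    return "bypass path to lower memory controller";
}

int main(void)
{
    pipe_stage_t pipe[PIPE_DEPTH] = {{true, false}, {false, false}};
    mem_txn_t t = { .is_write = true, .allocate_in_higher_cache = false };
    printf("route: %s\n", route(&t, pipe));
    return 0;
}
```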
-
Publication number: 20250094044
Abstract: Techniques including receiving configuration information for a trigger control channel of one or more trigger control channels, the configuration information defining a first one or more triggering events, receiving a first memory management command, storing the first memory management command, detecting the first one or more triggering events, and triggering the stored first memory management command based on the detected first one or more triggering events.
Type: Application
Filed: November 27, 2024
Publication date: March 20, 2025
Inventors: Kai Chirca, Matthew David Pierson, David E. Smith, Timothy David Anderson
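A C sketch of one trigger control channel, assuming an event-ID match is what fires the stored command; the channel layout and the example command string are hypothetical.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define MAX_EVENTS 4

/* Hypothetical trigger control channel: configured events plus a stored command. */
typedef struct {
    int  trigger_events[MAX_EVENTS];   /* event IDs that fire this channel */
    int  num_events;
    char pending_cmd[32];              /* stored memory management command */
    bool has_cmd;
} trigger_channel_t;

static void configure_channel(trigger_channel_t *ch, const int *events, int n)
{
    memcpy(ch->trigger_events, events, n * sizeof events[0]);
    ch->num_events = n;
}

static void store_command(trigger_channel_t *ch, const char *cmd)
{
    snprintf(ch->pending_cmd, sizeof ch->pending_cmd, "%s", cmd);
    ch->has_cmd = true;
}

/* On an incoming event, trigger the stored command if the event was configured. */
static void on_event(trigger_channel_t *ch, int event_id)
{
    for (int i = 0; i < ch->num_events; i++)
        if (ch->has_cmd && ch->trigger_events[i] == event_id) {
            printf("triggering '%s' on event %d\n", ch->pending_cmd, event_id);
            ch->has_cmd = false;
            return;
        }
}

int main(void)
{
    trigger_channel_t ch = {0};
    int events[] = { 7 };               /* e.g. a completion event ID */
    configure_channel(&ch, events, 1);
    store_command(&ch, "cache-writeback");
    on_event(&ch, 7);
    return 0;
}
```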
-
Publication number: 20250060873
Abstract: A device includes a memory bank. The memory bank includes data portions of a first way group. The data portions of the first way group include a data portion of a first way of the first way group and a data portion of a second way of the first way group. The memory bank further includes data portions of a second way group. The device further includes a configuration register and a controller configured to individually allocate, based on one or more settings in the configuration register, the first way and the second way to one of an addressable memory space and a data cache.
Type: Application
Filed: November 6, 2024
Publication date: February 20, 2025
Inventors: Kai CHIRCA, Matthew David PIERSON
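One way to picture per-way allocation in C is a configuration register with one bit per way; the encoding below is an assumption for illustration, not the device's actual register format.

```c
#include <stdio.h>

#define NUM_WAYS 4

typedef enum { WAY_AS_CACHE, WAY_AS_SRAM } way_use_t;

/* Hypothetical configuration register: one bit per way, 1 = addressable memory. */
static way_use_t way_use(unsigned config_reg, int way)
{
    return ((config_reg >> way) & 1u) ? WAY_AS_SRAM : WAY_AS_CACHE;
}

int main(void)
{
    unsigned config_reg = 0x1;   /* way 0 -> addressable memory, ways 1-3 -> cache */
    for (int w = 0; w < NUM_WAYS; w++)
        printf("way %d: %s\n", w,
               way_use(config_reg, w) == WAY_AS_SRAM ? "addressable memory"
                                                     : "data cache");
    return 0;
}
```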
-
Patent number: 12223165
Abstract: A system includes a multi-core shared memory controller (MSMC). The MSMC includes a snoop filter bank, a cache tag bank, and a memory bank. The cache tag bank is connected to both the snoop filter bank and the memory bank. The MSMC further includes a first coherent slave interface connected to a data path that is connected to the snoop filter bank. The MSMC further includes a second coherent slave interface connected to the data path that is connected to the snoop filter bank. The MSMC further includes an external memory master interface connected to the cache tag bank and the memory bank. The system further includes a first processor package connected to the first coherent slave interface and a second processor package connected to the second coherent slave interface. The system further includes an external memory device connected to the external memory master interface.
Type: Grant
Filed: July 28, 2022
Date of Patent: February 11, 2025
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Matthew David Pierson, Kai Chirca, Timothy David Anderson
-
Publication number: 20250045230
Abstract: This invention is a bus communication protocol. A master device stores bus credits. The master device may transmit a bus transaction only if it holds a sufficient number and type of bus credits. Upon transmission, the master device decrements the number of stored bus credits. The bus credits correspond to resources on a slave device for receiving bus transactions. The slave device must receive the bus transaction if accompanied by the proper credits. The slave device services the transaction. The slave device then transmits a credit return. The master device adds the corresponding number and types of credits to the stored amount. The slave device is ready to accept another bus transaction and the master device is re-enabled to initiate the bus transaction. In many types of interactions a bus agent may act as both master and slave depending upon the state of the process.
Type: Application
Filed: August 26, 2024
Publication date: February 6, 2025
Inventors: David M. Thompson, Timothy D. Anderson, Joseph R.M. Zbiciak, Abhijeet A. Chachad, Kai Chirca, Matthew D. Pierson
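A minimal C model of the credit handshake described above: the master spends a credit to transmit, the slave services the transaction and returns the credit, re-enabling the master. The counts and structure names are illustrative.

```c
#include <stdio.h>

/* Hypothetical credit pool the master holds for one slave resource type. */
typedef struct { int credits; } master_t;
typedef struct { int free_slots; } slave_t;

/* The master may transmit only while it holds a credit; transmitting spends one. */
static int master_send(master_t *m, slave_t *s)
{
    if (m->credits <= 0)
        return -1;               /* must wait for a credit return */
    m->credits--;
    s->free_slots--;             /* slave is obliged to accept the transaction */
    return 0;
}

/* The slave finishes servicing a transaction and returns the credit. */
static void slave_credit_return(master_t *m, slave_t *s)
{
    s->free_slots++;
    m->credits++;                /* master may initiate another transaction */
}

int main(void)
{
    master_t m = { .credits = 1 };
    slave_t  s = { .free_slots = 1 };

    printf("send #1: %s\n", master_send(&m, &s) == 0 ? "accepted" : "blocked");
    printf("send #2: %s\n", master_send(&m, &s) == 0 ? "accepted" : "blocked");
    slave_credit_return(&m, &s);
    printf("send #3: %s\n", master_send(&m, &s) == 0 ? "accepted" : "blocked");
    return 0;
}
```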
-
Patent number: 12197917
Abstract: A computer-implemented method includes fetching a fetch-packet containing a first hyper-block from a first address of a memory. The fetch-packet contains a bitwise distance from an entry point of the first hyper-block to a predicted exit point. A first branch instruction of the first hyper-block is executed that corresponds to a first exit point. The first branch instruction includes an address corresponding to an entry point of a second hyper-block. Responsive to executing the first branch instruction, a bitwise distance from the entry point of the first hyper-block to the first exit point is stored. A program counter is moved from the first exit point of the first hyper-block to the entry point of the second hyper-block.
Type: Grant
Filed: June 27, 2022
Date of Patent: January 14, 2025
Assignee: Texas Instruments Incorporated
Inventors: Kai Chirca, Timothy D. Anderson, David E. Smith, Jr., Paul D. Gauvreau
-
Patent number: 12197332
Abstract: In described examples, a processor system includes a processor core generating memory transactions, a lower level cache memory with a lower memory controller, and a higher level cache memory with a higher memory controller having a memory pipeline. The higher memory controller is connected to the lower memory controller by a bypass path that skips the memory pipeline. The higher memory controller: determines whether a memory transaction is a bypass write, which is a memory write request indicated not to result in a corresponding write being directed to the higher level cache memory; if the memory transaction is determined a bypass write, determines whether a memory transaction that prevents passing is in the memory pipeline; and if no transaction that prevents passing is determined to be in the memory pipeline, sends the memory transaction to the lower memory controller using the bypass path.
Type: Grant
Filed: February 22, 2024
Date of Patent: January 14, 2025
Assignee: Texas Instruments Incorporated
Inventors: Abhijeet Ashok Chachad, Timothy Anderson, Kai Chirca, David Matthew Thompson
-
Publication number: 20250013569
Abstract: A method includes receiving, by a level two (L2) controller, a first request for a cache line in a shared cache coherence state; mapping, by the L2 controller, the first request to a second request for a cache line in an exclusive cache coherence state; and responding, by the L2 controller, to the second request.
Type: Application
Filed: September 24, 2024
Publication date: January 9, 2025
Inventors: Abhijeet Ashok CHACHAD, David Matthew THOMPSON, Timothy David ANDERSON, Kai CHIRCA
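The shared-to-exclusive mapping reduces to a small state upgrade, sketched in C below with hypothetical types; real coherence handling naturally involves far more state than this.

```c
#include <stdio.h>

typedef enum { STATE_SHARED, STATE_EXCLUSIVE } coh_state_t;

typedef struct {
    unsigned long line_addr;
    coh_state_t   requested_state;
} coh_req_t;

/* The controller upgrades an incoming shared request to an exclusive one
   before responding, as the abstract describes. */
static coh_req_t map_request(coh_req_t in)
{
    if (in.requested_state == STATE_SHARED)
        in.requested_state = STATE_EXCLUSIVE;
    return in;
}

int main(void)
{
    coh_req_t first  = { .line_addr = 0x8000, .requested_state = STATE_SHARED };
    coh_req_t second = map_request(first);
    printf("line 0x%lx: requested %s, issued %s\n", first.line_addr,
           first.requested_state  == STATE_SHARED ? "shared" : "exclusive",
           second.requested_state == STATE_SHARED ? "shared" : "exclusive");
    return 0;
}
```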