Patents by Inventor Timothy David Anderson

Timothy David Anderson is named as an inventor on the following patent filings. The listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240143516
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed for allocation in a victim cache system. An example apparatus includes a first cache storage, a second cache storage, a cache controller coupled to the first cache storage and the second cache storage and operable to receive a memory operation that specifies an address, determine, based on the address, that the memory operation evicts a first set of data from the first cache storage, determine that the first set of data is unmodified relative to an extended memory, and cause the first set of data to be stored in the second cache storage.
    Type: Application
    Filed: January 8, 2024
    Publication date: May 2, 2024
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
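
For illustration, here is a minimal C sketch of the allocation decision the abstract above describes, using hypothetical names (cache_line, allocate_evicted_line) that do not come from the patent: a line evicted clean from the first cache storage is placed in the second (victim) cache storage, while a modified line would instead be written back.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical cache-line record; the patent's actual structures are not in the abstract. */
typedef struct {
    unsigned address;
    bool     dirty;   /* modified relative to the extended (backing) memory? */
} cache_line;

/* Allocation decision on eviction from the first cache storage: a clean line
 * moves into the second (victim) cache storage, since the extended memory
 * already holds the same data; a dirty line must be written back instead. */
const char *allocate_evicted_line(const cache_line *evicted)
{
    return evicted->dirty ? "write back to extended memory"
                          : "store in second (victim) cache storage";
}

int main(void)
{
    cache_line clean = { 0x1000, false };
    cache_line dirty = { 0x2000, true  };
    printf("clean line at 0x%x: %s\n", clean.address, allocate_evicted_line(&clean));
    printf("dirty line at 0x%x: %s\n", dirty.address, allocate_evicted_line(&dirty));
    return 0;
}
```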
  • Patent number: 11960892
    Abstract: In one embodiment, a system includes a memory and a processor core. The processor core includes functional units and an instruction decode unit configured to determine whether an execute packet of instructions received by the processing core includes a first instruction that is designated for execution by a first functional unit of the functional units and a second instruction that is a condition code extension instruction that includes a plurality of sets of condition code bits, wherein each set of condition code bits corresponds to a different one of the functional units, and wherein the sets of condition code bits include a first set of condition code bits that corresponds to the first functional unit. When the execute packet includes the first and second instructions, the first functional unit is configured to execute the first instruction conditionally based upon the first set of condition code bits in the second instruction.
    Type: Grant
    Filed: July 22, 2022
    Date of Patent: April 16, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Timothy David Anderson, Duc Quang Bui, Joseph Raymond Michael Zbiciak
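
A minimal C sketch of the conditional-execution idea above, assuming a hypothetical encoding (one 4-bit field per functional unit, a condition register number plus a zero/non-zero sense); the actual instruction format is not specified in the abstract.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-unit condition field carried in the condition code
 * extension word. This layout is illustrative only. */
typedef struct {
    uint8_t creg;  /* register supplying the condition (0 = unconditional) */
    bool    z;     /* true: execute if register == 0; false: if != 0 */
} cond_field;

/* Extract the condition field for one functional unit from the extension word. */
cond_field cond_for_unit(uint32_t ccex_word, unsigned unit)
{
    uint32_t nibble = (ccex_word >> (unit * 4)) & 0xFu;
    cond_field f = { (uint8_t)(nibble >> 1), (nibble & 1u) != 0 };
    return f;
}

/* Decide whether the instruction routed to `unit` executes, given the
 * register file contents at that point. */
bool should_execute(uint32_t ccex_word, unsigned unit, const int32_t regs[])
{
    cond_field f = cond_for_unit(ccex_word, unit);
    if (f.creg == 0)
        return true;                        /* no condition attached */
    bool is_zero = (regs[f.creg] == 0);
    return f.z ? is_zero : !is_zero;
}

int main(void)
{
    int32_t regs[16] = {0};
    regs[3] = 7;
    /* condition the instruction on unit 2 on register 3 being non-zero:
     * nibble = (creg 3 << 1) | (z = 0) = 6, placed in unit 2's field */
    uint32_t ccex = 6u << (2 * 4);
    printf("unit 2 executes: %d\n", should_execute(ccex, 2, regs));
    return 0;
}
```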
  • Patent number: 11960567
    Abstract: A method for performing a fundamental computational primitive in a device is provided, where the device includes a processor and a matrix multiplication accelerator (MMA). The method includes configuring a streaming engine in the device to stream data for the fundamental computational primitive from memory, configuring the MMA to format the data, and executing the fundamental computational primitive by the device.
    Type: Grant
    Filed: July 4, 2021
    Date of Patent: April 16, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Arthur John Redfern, Timothy David Anderson, Kai Chirca, Chenchi Luo, Zhenhua Yu
  • Publication number: 20240103863
    Abstract: A digital signal processor having a CPU with a program counter register and, optionally, an event context stack pointer register for saving and restoring the event handler context when a higher priority event preempts a lower priority event handler. The CPU is configured to use a minimized set of addressing modes that includes using the event context stack pointer register and program counter register to compute an address for storing data in memory. The CPU may also eliminate pre-decrement, pre-increment and post-decrement addressing and rely only on post-increment addressing.
    Type: Application
    Filed: December 5, 2023
    Publication date: March 28, 2024
    Inventors: Timothy David Anderson, Duc Quang Bui, Joseph Zbiciak, Kai Chirca
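
The post-increment-only addressing model lends itself to a short illustration. The C sketch below, with hypothetical names (addr_reg, store_post_increment), mimics a register-indirect store that advances its address register after the access, the one update mode the abstract says is retained.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative model of a register-indirect store with post-increment, the
 * one address-update mode the abstract says is kept. Pre-increment,
 * pre-decrement and post-decrement would be the modes eliminated. */
typedef struct {
    uint32_t *base;   /* e.g. the event context stack pointer */
} addr_reg;

/* *base++ = value : store through the register, then advance it. */
void store_post_increment(addr_reg *r, uint32_t value)
{
    *r->base = value;
    r->base += 1;
}

int main(void)
{
    uint32_t save_area[4] = {0};
    addr_reg ecsp = { save_area };   /* hypothetical event-context save pointer */
    for (uint32_t i = 0; i < 4; i++)
        store_post_increment(&ecsp, 0xA0u + i);
    for (int i = 0; i < 4; i++)
        printf("%x ", (unsigned)save_area[i]);
    printf("\n");
    return 0;
}
```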
  • Publication number: 20240104026
    Abstract: A caching system including a first sub-cache, and a second sub-cache, coupled in parallel with the first sub-cache, for storing cache data evicted from the first sub-cache and write-memory commands that are not cached in the first sub-cache, and wherein the second sub-cache includes: color tag bits configured to store an indication that a corresponding cache line of the second sub-cache storing write-miss data is associated with a color tag, and an eviction controller configured to evict cache lines of the second sub-cache storing write-miss data based on the color tag associated with the cache line.
    Type: Application
    Filed: December 11, 2023
    Publication date: March 28, 2024
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Hippleheuser
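
A minimal C sketch of the color-tag-driven eviction described above, with an illustrative line layout (subcache_line) and an eviction callback that are assumptions rather than details from the patent.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical second sub-cache line: valid bit, write-miss flag, and a
 * color tag. Widths and layout are illustrative only. */
typedef struct {
    bool    valid;
    bool    holds_write_miss;
    bool    has_color_tag;
    uint8_t color;
} subcache_line;

/* Evict every valid write-miss line whose color matches `color`.
 * Returns the number of lines evicted. */
int evict_by_color(subcache_line *lines, int nlines, uint8_t color,
                   void (*evict)(subcache_line *))
{
    int count = 0;
    for (int i = 0; i < nlines; i++) {
        subcache_line *l = &lines[i];
        if (l->valid && l->holds_write_miss && l->has_color_tag && l->color == color) {
            evict(l);          /* drain the write-miss data to the next level */
            l->valid = false;
            count++;
        }
    }
    return count;
}
```

A caller would pass the sub-cache's line array and a callback that drains each selected line toward memory.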
  • Patent number: 11940918
    Abstract: In described examples, a processor system includes a processor core generating memory transactions, a lower level cache memory with a lower memory controller, and a higher level cache memory with a higher memory controller having a memory pipeline. The higher memory controller is connected to the lower memory controller by a bypass path that skips the memory pipeline. The higher memory controller: determines whether a memory transaction is a bypass write, which is a memory write request indicated not to result in a corresponding write being directed to the higher level cache memory; if the memory transaction is determined a bypass write, determines whether a memory transaction that prevents passing is in the memory pipeline; and if no transaction that prevents passing is determined to be in the memory pipeline, sends the memory transaction to the lower memory controller using the bypass path.
    Type: Grant
    Filed: February 13, 2023
    Date of Patent: March 26, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Abhijeet Ashok Chachad, Timothy David Anderson, Kai Chirca, David Matthew Thompson
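
The bypass decision above can be sketched compactly. In the C model below, the types (mem_transaction, memory_pipeline, lower_level_controller) and the blocking-transaction check are hypothetical stand-ins for the hardware described in the abstract.

```c
#include <stdbool.h>

/* A "bypass write" is a write that will not result in a corresponding write
 * to the higher level cache, so it may skip the memory pipeline and go
 * straight to the lower-level controller -- but only if nothing already in
 * the pipeline prevents passing. */
typedef struct {
    bool is_write;
    bool allocates_in_higher_cache;  /* false => candidate for bypass */
} mem_transaction;

typedef struct {
    /* returns true if any queued transaction prevents passing */
    bool (*has_blocking_transaction)(void);
    void (*enqueue)(const mem_transaction *);
} memory_pipeline;

typedef struct {
    void (*send)(const mem_transaction *);
} lower_level_controller;

void route_transaction(const mem_transaction *t,
                       const memory_pipeline *pipe,
                       const lower_level_controller *lower)
{
    bool bypass_write = t->is_write && !t->allocates_in_higher_cache;
    if (bypass_write && !pipe->has_blocking_transaction())
        lower->send(t);        /* skip the pipeline via the bypass path */
    else
        pipe->enqueue(t);      /* normal path through the memory pipeline */
}
```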
  • Patent number: 11940930
    Abstract: Methods, apparatus, systems and articles of manufacture to facilitate atomic operation in victim cache are disclosed. An example system includes a first cache storage to store a first set of data; a second cache storage to store a second set of data that has been evicted from the first cache storage; and a storage queue coupled to the first cache storage and the second cache storage, the storage queue including: an arithmetic component to: receive the second set of data from the second cache storage in response to a memory operation; and perform an arithmetic operation on the second set of data to produce a third set of data; and an arbitration manager to store the third set of data in the second cache storage.
    Type: Grant
    Filed: July 28, 2022
    Date of Patent: March 26, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
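
A minimal C sketch of the read-modify-write flow described above, assuming a hypothetical victim_cache interface and an atomic add as the arithmetic operation; the patent does not limit the operation to addition.

```c
#include <stdint.h>

/* Hypothetical interface to the victim (second) cache storage. */
typedef struct {
    int64_t (*read)(unsigned address);
    void    (*write)(unsigned address, int64_t value);
} victim_cache;

/* Atomic add performed inside the storage queue rather than in the CPU:
 * no other access to `address` is allowed between the read and the write. */
int64_t storage_queue_atomic_add(victim_cache *vc, unsigned address, int64_t operand)
{
    int64_t old_value = vc->read(address);        /* second set of data   */
    int64_t new_value = old_value + operand;      /* arithmetic component */
    vc->write(address, new_value);                /* arbitration manager  */
    return old_value;
}
```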
  • Patent number: 11940929
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to reduce read-modify-write cycles for non-aligned writes. An example apparatus includes a memory that includes a plurality of memory banks, an interface configured to be coupled to a central processing unit, the interface to obtain a write operation from the central processing unit, wherein the write operation is to write a subset of the plurality of memory banks, and bank processing logic coupled to the interface and to the memory, the bank processing logic to determine the subset of the plurality of memory banks to write based on the write operation, and determine whether to cause a read operation to be performed in response to the write operation based on whether a number of addresses in the subset of the plurality of memory banks to write satisfies a threshold.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: March 26, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
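
A short C sketch of the idea above under stated assumptions: an 8-byte bank width and a rule that a read cycle is only needed when the write leaves its first or last bank partially covered. Both the bank width and the exact threshold test are illustrative.

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed bank width; the patent does not fix this value in the abstract. */
#define BANK_WIDTH_BYTES 8u

/* Returns true if the write covering [addr, addr + len) must trigger a read,
 * i.e. the first or last bank it touches is only partially written and a
 * read-modify-write cycle is needed to preserve the untouched bytes. */
bool write_requires_read(uint32_t addr, uint32_t len)
{
    bool partial_first = (addr % BANK_WIDTH_BYTES) != 0;
    bool partial_last  = ((addr + len) % BANK_WIDTH_BYTES) != 0;
    return partial_first || partial_last;
}
```

Under these assumptions, a 16-byte write at address 0x100 fully covers the banks it touches and needs no read, while the same write starting at 0x101 straddles partially covered banks and does.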
  • Publication number: 20240095164
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to evict in a dual datapath victim cache system. An example apparatus includes a cache storage, a cache controller operable to receive a first memory operation and a second memory operation concurrently, comparison logic operable to identify if the first and second memory operations missed in the cache storage, and a replacement policy component operable to, when at least one of the first and second memory operations corresponds to a miss in the cache storage, reserve an entry in the cache storage to evict based on the first and second memory operations.
    Type: Application
    Filed: September 15, 2022
    Publication date: March 21, 2024
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
  • Publication number: 20240086065
    Abstract: Techniques for maintaining cache coherency comprising storing data blocks associated with a main process in a cache line of a main cache memory, storing a first local copy of the data blocks in a first local cache memory of a first processor, storing a second local copy of the set of data blocks in a second local cache memory of a second processor executing a first child process of the main process to generate first output data, writing the first output data to the first data block of the first local copy as a write through, writing the first output data to the first data block of the main cache memory as a part of the write through, transmitting an invalidate request to the second local cache memory, marking the second local copy of the set of data blocks as delayed, and transmitting an acknowledgment to the invalidate request.
    Type: Application
    Filed: November 17, 2023
    Publication date: March 14, 2024
    Inventors: Kai Chirca, Timothy David Anderson
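
The delayed-invalidation handshake above can be sketched as follows; the line-state enum and handler are hypothetical, and the point illustrated is only that the remote cache acknowledges the invalidate immediately while marking its copy as delayed rather than discarding it on the spot.

```c
#include <stdbool.h>

/* Hypothetical line states for the second processor's local cache. */
typedef enum { LINE_VALID, LINE_DELAYED_INVALIDATE, LINE_INVALID } line_state;

typedef struct {
    unsigned   block_address;
    line_state state;
} local_cache_line;

/* Handler in the second local cache for an incoming invalidate request
 * generated by the first processor's write-through. */
bool handle_invalidate(local_cache_line *line, unsigned block_address)
{
    if (line->block_address == block_address && line->state == LINE_VALID)
        line->state = LINE_DELAYED_INVALIDATE;  /* mark as delayed, drop later */
    return true;                                /* acknowledgment sent immediately */
}
```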
  • Publication number: 20240078190
    Abstract: A caching system including a first sub-cache, a second sub-cache, coupled in parallel with the first sub-cache, for storing write-memory commands that are not cached in the first sub-cache, the second sub-cache including privilege bits configured to store an indication that a corresponding cache line of the second sub-cache is associated with a level of privilege, and wherein the second sub-cache is further configured to receive a first write memory command for a memory address associated with a first level of privilege, store, in the second sub-cache, first data associated with the first write memory command and the level of privilege associated with the cache line, receive a second write memory command for the cache line, the second write memory command associated with a second level of privilege, merge the first level of privilege with the second level of privilege, and output the merged privilege level with the cache line.
    Type: Application
    Filed: October 30, 2023
    Publication date: March 7, 2024
    Inventors: Naveen Bhoria, Timothy David Anderson, Pete Hippleheuser
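
A minimal C sketch of merging two writes' privilege levels into one sub-cache line. The merge rule shown (keep the lower of the two levels) is an assumption; the abstract says only that the two levels are merged.

```c
#include <stdint.h>

/* Hypothetical second sub-cache line carrying data plus privilege bits. */
typedef struct {
    uint8_t data[64];
    uint8_t privilege;    /* larger value = higher privilege (illustrative) */
} subcache_line;

/* Merge a second write command into an existing write-miss line:
 * overlay the new bytes and combine the privilege levels. */
void merge_write(subcache_line *line, const uint8_t *new_data, unsigned offset,
                 unsigned len, uint8_t new_privilege)
{
    for (unsigned i = 0; i < len && offset + i < sizeof(line->data); i++)
        line->data[offset + i] = new_data[i];
    if (new_privilege < line->privilege)   /* assumed rule: retain the lower level */
        line->privilege = new_privilege;
}
```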
  • Patent number: 11921637
    Abstract: In described examples, a processor system includes a processor core that generates memory write requests, a cache memory, and a memory controller. The memory controller has a memory pipeline. The memory controller is coupled to control the cache memory and communicatively coupled to the processor core. The memory controller is configured to receive the memory write requests from the processor core; schedule the memory write requests on the memory pipeline; and contemporaneously with scheduling respective ones of the memory write requests on the memory pipeline, send to the processor core a write acknowledgment confirming that writing of a data payload of the respective memory write request to the cache memory has completed.
    Type: Grant
    Filed: May 14, 2020
    Date of Patent: March 5, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Abhijeet Ashok Chachad, Timothy David Anderson, David Matthew Thompson
  • Patent number: 11922166
    Abstract: A Very Long Instruction Word (VLIW) digital signal processor particularly adapted for single instruction multiple data (SIMD) operation on various operand widths and data sizes. A vector compare instruction compares first and second operands and stores compare bits. A companion vector conditional instruction performs conditional operations based upon the state of a corresponding predicate data register bit. A predicate unit performs data processing operations on data in at least one predicate data register including unary operations and binary operations. The predicate unit may also transfer data between a general data register file and the predicate data register file.
    Type: Grant
    Filed: January 17, 2023
    Date of Patent: March 5, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Timothy David Anderson, Duc Quang Bui, Mujibur Rahman, Joseph Raymond Michael Zbiciak, Eric Biscondi, Peter Dent, Jelena Milanovic, Ashish Shrivastava
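
A C sketch of the vector-compare / conditional-instruction pairing described above, with an illustrative 8-lane width; the predicate encoding is not the TI one, just enough to show compare bits gating a lane-wise operation.

```c
#include <stdint.h>
#include <stdio.h>

#define LANES 8

/* Vector compare: bit i of the predicate is set when a[i] > b[i]. */
uint8_t vcmp_gt(const int32_t a[LANES], const int32_t b[LANES])
{
    uint8_t pred = 0;
    for (int i = 0; i < LANES; i++)
        if (a[i] > b[i])
            pred |= (uint8_t)(1u << i);
    return pred;
}

/* Companion conditional add: only lanes whose predicate bit is set are updated. */
void vadd_cond(int32_t dst[LANES], const int32_t src[LANES], uint8_t pred)
{
    for (int i = 0; i < LANES; i++)
        if (pred & (1u << i))
            dst[i] += src[i];
}

int main(void)
{
    int32_t a[LANES] = {5, 1, 7, 0, 9, 2, 3, 8};
    int32_t b[LANES] = {4, 4, 4, 4, 4, 4, 4, 4};
    uint8_t p = vcmp_gt(a, b);   /* lanes 0, 2, 4 and 7 are set */
    vadd_cond(a, b, p);          /* add 4 only in those lanes   */
    for (int i = 0; i < LANES; i++)
        printf("%d ", (int)a[i]);
    printf("\n");
    return 0;
}
```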
  • Patent number: 11921643
    Abstract: A processor is provided that includes a first multiplication unit in a first data path of the processor, the first multiplication unit configured to perform single issue multiply instructions, and a second multiplication unit in the first data path, the second multiplication unit configured to perform single issue multiply instructions, wherein the first multiplication unit and the second multiplication unit are configured to execute respective single issue multiply instructions in parallel.
    Type: Grant
    Filed: March 9, 2022
    Date of Patent: March 5, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Mujibur Rahman, Timothy David Anderson, Soujanya Narnur
  • Publication number: 20240063827
    Abstract: A system, method, and device are shown that are operable to transform and align a plurality of fields from an input to an output data stream using a multilayer butterfly or inverse butterfly network by selectably switching bit positions of the input data stream. In some examples, a device includes a first circuit configured to selectably switch bit positions of a first subset of the data stream with a second subset of the data stream and a second circuit configured to: selectably switch bit positions of a first subset of the first subset of the data stream with a second subset of the first subset of the data stream, and selectably switch bit positions of a first subset of the second subset of the data stream with a second subset of the second subset of the data stream.
    Type: Application
    Filed: October 31, 2023
    Publication date: February 22, 2024
    Inventors: Dheera Balasubramanian, Joseph Zbiciak, Duc Quang Bui, Timothy David Anderson
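
One layer of such a butterfly network can be modeled in a few lines of C. The sketch below conditionally swaps bit positions that are `span` apart under a per-pair control mask; cascading layers with spans 16, 8, 4, 2 and 1 would give the multilayer network the abstract describes. The control encoding is illustrative.

```c
#include <stdint.h>
#include <stdio.h>

/* One butterfly layer over a 32-bit word: each bit pairs with the bit
 * `span` positions away, and the corresponding control bit decides whether
 * the pair is swapped. A 32-bit word always has 16 pairs per layer. */
uint32_t butterfly_layer(uint32_t word, uint16_t swap_mask, unsigned span)
{
    uint32_t result = word;
    unsigned pair = 0;
    for (unsigned i = 0; i < 32; i++) {
        unsigned partner = i ^ span;           /* bit this position pairs with */
        if (i < partner) {
            if (swap_mask & (1u << pair)) {
                uint32_t bi = (word >> i) & 1u;
                uint32_t bp = (word >> partner) & 1u;
                result &= ~((1u << i) | (1u << partner));
                result |= (bp << i) | (bi << partner);
            }
            pair++;
        }
    }
    return result;
}

int main(void)
{
    uint32_t x = 0x0000FFFFu;
    /* swap every pair between the two 16-bit halves: prints 0xFFFF0000 */
    printf("0x%08X\n", (unsigned)butterfly_layer(x, 0xFFFF, 16));
    return 0;
}
```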
  • Patent number: 11907721
    Abstract: Software instructions are executed on a processor within a computer system to configure a streaming engine with stream parameters to define a multidimensional array. The stream parameters define a size for each dimension of the multidimensional array and a pad value indicator. Data is fetched from a memory coupled to the streaming engine responsive to the stream parameters. A stream of vectors is formed for the multidimensional array responsive to the stream parameters from the data fetched from memory. A padded stream vector is formed that includes a specified pad value without accessing the pad value from system memory.
    Type: Grant
    Filed: July 19, 2021
    Date of Patent: February 20, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Asheesh Bhardwaj, Timothy David Anderson, Son Hung Tran
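
A minimal C sketch of padded stream-vector formation as described above: lanes beyond the defined dimension receive the pad value from the stream parameters rather than from a memory access. The stream_params layout is hypothetical.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical subset of the stream parameters relevant to padding. */
typedef struct {
    const int32_t *base;      /* start of the innermost dimension in memory */
    size_t         dim_size;  /* number of valid elements in that dimension */
    int32_t        pad_value; /* value supplied for out-of-bounds lanes     */
} stream_params;

/* Produce one vector of `lanes` elements starting at element index `start`.
 * In-bounds lanes come from memory; the rest use the pad value, which is
 * never read from system memory. */
void form_padded_vector(const stream_params *p, size_t start,
                        int32_t *out, size_t lanes)
{
    for (size_t i = 0; i < lanes; i++) {
        size_t idx = start + i;
        out[i] = (idx < p->dim_size) ? p->base[idx]   /* fetched from memory  */
                                     : p->pad_value;  /* supplied, not fetched */
    }
}
```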
  • Patent number: 11907753
    Abstract: An apparatus includes a CPU core, a first cache subsystem coupled to the CPU core, and a second memory coupled to the cache subsystem. The first cache subsystem includes a configuration register, a first memory, and a controller. The controller is configured to: receive a request directed to an address in the second memory and, in response to the configuration register having a first value, operate in a non-caching mode. In the non-caching mode, the controller is configured to provide the request to the second memory without caching data returned by the request in the first memory. In response to the configuration register having a second value, the controller is configured to operate in a caching mode. In the caching mode the controller is configured to provide the request to the second memory and cache data returned by the request in the first memory.
    Type: Grant
    Filed: November 7, 2022
    Date of Patent: February 20, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Abhijeet Ashok Chachad, Timothy David Anderson, David Matthew Thompson
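
The caching/non-caching switch above reduces to a small piece of control logic; the C sketch below uses hypothetical callbacks for the two memories and an illustrative register encoding (0 = non-caching, 1 = caching).

```c
#include <stdint.h>

/* Hypothetical view of the first cache subsystem. */
typedef struct {
    uint32_t config_reg;                              /* 0 = non-caching, 1 = caching */
    uint64_t (*second_memory_read)(uint32_t addr);
    void     (*first_memory_fill)(uint32_t addr, uint64_t data);
} cache_subsystem;

/* Handle a request directed to an address in the second memory. */
uint64_t handle_request(cache_subsystem *cs, uint32_t addr)
{
    uint64_t data = cs->second_memory_read(addr);     /* request always forwarded */
    if (cs->config_reg == 1)                          /* caching mode             */
        cs->first_memory_fill(addr, data);            /* allocate in first memory */
    /* config_reg == 0: non-caching mode, returned data is not cached */
    return data;
}
```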
  • Patent number: 11900117
    Abstract: A streaming engine in a system receives a first set of stream parameters into a queue to define a first stream along with an indication of either a queue mode of operation or a speculative mode of operation for the first stream. Acquisition of the first stream then begins. At some point, a second set of stream parameters is received into the queue to define a second stream. When the queue mode of operation was specified for the first stream, the second set of parameters is queued and acquisition of the second stream is delayed until completion of acquisition of the first stream. When the speculative mode of operation was specified for the first stream, acquisition of the first stream is canceled upon receipt of the second set of stream parameters and acquisition of the second stream begins immediately.
    Type: Grant
    Filed: March 26, 2021
    Date of Patent: February 13, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Timothy David Anderson, Jonathan (Son) Hung Tran, Joseph Raymond Michael Zbiciak
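
A C sketch of the two behaviors described above when a second stream's parameters arrive while a first stream is still being acquired; the streaming_engine structure and its callbacks are hypothetical.

```c
#include <stdbool.h>

typedef enum { MODE_QUEUE, MODE_SPECULATIVE } stream_mode;

/* Hypothetical control interface for the streaming engine. */
typedef struct {
    bool        first_stream_active;
    stream_mode first_stream_mode;
    void        (*cancel_first_stream)(void);
    void        (*begin_acquisition)(int stream_id);
    void        (*enqueue_parameters)(int stream_id);
} streaming_engine;

void submit_second_stream(streaming_engine *se, int second_stream_id)
{
    if (!se->first_stream_active) {
        se->begin_acquisition(second_stream_id);
        return;
    }
    if (se->first_stream_mode == MODE_QUEUE) {
        /* queue mode: hold the parameters until the first stream completes */
        se->enqueue_parameters(second_stream_id);
    } else {
        /* speculative mode: cancel the first stream and start the second now */
        se->cancel_first_stream();
        se->begin_acquisition(second_stream_id);
    }
}
```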
  • Publication number: 20240045810
    Abstract: A processor and a vector sort instruction for the processor to execute are provided, in which the vector sort instruction includes instructions for comparing a first element of a set of vector elements of a vector to a remainder of the set of vector elements; determining, based on the comparing, a control vector that specifies a respective sorted position for each element of the set of vector elements; and reordering the set of vector elements based on the control vector.
    Type: Application
    Filed: October 17, 2023
    Publication date: February 8, 2024
    Inventors: Timothy David Anderson, Mujibur Rahman
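
A C sketch in the spirit of the abstract above: comparisons build a control vector of sorted positions, and a single permutation step then reorders the vector. Extending the first-element comparison to every lane is an assumption made here so the sketch produces a full sort.

```c
#include <stdint.h>
#include <stdio.h>

#define LANES 8

/* Rank-based vector sort: for each element, comparisons against the rest of
 * the vector yield its sorted position (the control vector), and a single
 * reorder step writes each element to that position. */
void vector_sort(const int32_t in[LANES], int32_t out[LANES])
{
    uint8_t control[LANES];   /* sorted position of each input element */

    for (int i = 0; i < LANES; i++) {
        uint8_t pos = 0;
        for (int j = 0; j < LANES; j++) {
            /* count elements that must precede element i; ties broken by index */
            if (in[j] < in[i] || (in[j] == in[i] && j < i))
                pos++;
        }
        control[i] = pos;
    }
    for (int i = 0; i < LANES; i++)
        out[control[i]] = in[i];          /* reorder using the control vector */
}

int main(void)
{
    int32_t v[LANES] = {42, -3, 17, 8, 8, 99, 0, 5};
    int32_t sorted[LANES];
    vector_sort(v, sorted);
    for (int i = 0; i < LANES; i++)
        printf("%d ", (int)sorted[i]);
    printf("\n");
    return 0;
}
```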
  • Publication number: 20240045922
    Abstract: In described examples, an integrated circuit (IC) includes a matrix multiplication accelerator including a first memory, a second memory, and a memory controller. The second memory is configured to store multiple rows of an input feature map on a single line of cells of the memory, and to store a filter kernel. The memory controller reads multiple contiguous memory vectors of the second memory, different ones of the contiguous memory vectors corresponding to different portions of the input feature map. The memory controller also replaces (with padding zeroes) values of respective ones of the contiguous memory vectors. The number and location of replaced values are selected in response to a column index of an element of the filter kernel in response to which the respective contiguous memory vector is read. Zero padded contiguous memory vectors are written to the first memory.
    Type: Application
    Filed: July 30, 2022
    Publication date: February 8, 2024
    Inventors: Timothy David Anderson, Asheesh Bhardwaj, Burton Adrik Copeland