Patents by Inventor Kyle Castille
Kyle Castille has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9075743
Abstract: Management of access to shared resources within a system comprising a plurality of requesters and a plurality of target resources is provided. A separate arbitration point is associated with each target resource. An access priority value is assigned to each requester. An arbitration contest is performed for access to a first target resource by requests from two or more of the requesters using a first arbitration point associated with the first target resource to determine a winning requester. The request from the winning requester is forwarded to a second target resource. A second arbitration contest is performed for access to the second target resource by the forwarded request from the winning requester and requests from one or more of the plurality of requesters using a second arbitration point associated with the second target resource.
Type: Grant
Filed: September 20, 2011
Date of Patent: July 7, 2015
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Raguram Damodaran, Abhijeet Ashok Chachad, Dheera Balasubramanian, Roger Kyle Castille, David Quintin Bell
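A minimal C sketch of the cascaded arbitration described in this abstract; the data structures, priority encoding, and example values are illustrative assumptions, not details from the patent.

    #include <stdio.h>

    /* Illustrative request: a requester id and its assigned access priority
       (lower value = higher priority in this sketch). */
    struct request { int requester; int priority; };

    /* One arbitration point: pick the highest-priority pending request. */
    static int arbitrate(const struct request *reqs, int n)
    {
        int winner = 0;
        for (int i = 1; i < n; i++)
            if (reqs[i].priority < reqs[winner].priority)
                winner = i;
        return winner;
    }

    int main(void)
    {
        /* First arbitration contest at target resource 1. */
        struct request at_target1[] = { {0, 3}, {1, 1}, {2, 2} };
        int w1 = arbitrate(at_target1, 3);

        /* The winning request is forwarded to target resource 2, where it
           competes with a request arriving directly at that resource. */
        struct request at_target2[2];
        at_target2[0] = at_target1[w1];
        at_target2[1] = (struct request){ 3, 0 };
        int w2 = arbitrate(at_target2, 2);

        printf("target 1 winner: requester %d\n", at_target1[w1].requester);
        printf("target 2 winner: requester %d\n", at_target2[w2].requester);
        return 0;
    }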
-
Patent number: 8607000
Abstract: This invention is a data processing system having a multi-level cache system. The multi-level cache system includes at least one first level cache and a second level cache. Upon a cache miss in both the at least one first level cache and the second level cache, the data processing system evicts and allocates a cache line within the second level cache. The data processing system determines from the miss address whether the request falls within the low half or the high half of the allocated cache line. The data processing system first requests from external memory the data for the half cache line containing the miss. Upon receipt, the data is supplied to the at least one first level cache and the CPU. The data processing system then requests from external memory the data for the other half of the second level cache line.
Type: Grant
Filed: September 23, 2011
Date of Patent: December 10, 2013
Assignee: Texas Instruments Incorporated
Inventors: Abhijeet Ashok Chachad, Roger Kyle Castille, Joseph Raymond Michael Zbiciak, Dheera Balasubramanian
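A minimal C sketch of the miss-half-first fetch order described in this abstract; the line size and addresses are assumptions chosen for illustration.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative line size: a 128-byte L2 line split into two 64-byte
       halves.  The real sizes are not taken from the patent text. */
    #define LINE_BYTES 128u
    #define HALF_BYTES (LINE_BYTES / 2u)

    int main(void)
    {
        uint32_t miss_addr = 0x1234u;                /* example miss address */
        uint32_t line_base = miss_addr & ~(LINE_BYTES - 1u);
        int miss_in_high_half = (miss_addr & (LINE_BYTES - 1u)) >= HALF_BYTES;

        /* Fetch the half containing the miss first, so the first level
           cache and the CPU can be supplied as soon as it returns ...      */
        uint32_t first_half  = line_base + (miss_in_high_half ? HALF_BYTES : 0u);
        /* ... then fetch the other half to complete the allocated L2 line. */
        uint32_t second_half = line_base + (miss_in_high_half ? 0u : HALF_BYTES);

        printf("request external memory at 0x%x first, then 0x%x\n",
               (unsigned)first_half, (unsigned)second_half);
        return 0;
    }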
-
Publication number: 20120290756
Abstract: Management of access to shared resources within a system comprising a plurality of requesters and a plurality of target resources is provided. A separate arbitration point is associated with each target resource. An access priority value is assigned to each requester. An arbitration contest is performed for access to a first target resource by requests from two or more of the requesters using a first arbitration point associated with the first target resource to determine a winning requester. The request from the winning requester is forwarded to a second target resource. A second arbitration contest is performed for access to the second target resource by the forwarded request from the winning requester and requests from one or more of the plurality of requesters using a second arbitration point associated with the second target resource.
Type: Application
Filed: September 20, 2011
Publication date: November 15, 2012
Inventors: Raguram Damodaran, Abhijeet Ashok Chachad, Dheera Balasubramanian, Roger Kyle Castille, David Quintin Bell
-
Publication number: 20120198160
Abstract: This invention is a data processing system having a multi-level cache system. The multi-level cache system includes at least one first level cache and a second level cache. Upon a cache miss in both the at least one first level cache and the second level cache, the data processing system evicts and allocates a cache line within the second level cache. The data processing system determines from the miss address whether the request falls within the low half or the high half of the allocated cache line. The data processing system first requests from external memory the data for the half cache line containing the miss. Upon receipt, the data is supplied to the at least one first level cache and the CPU. The data processing system then requests from external memory the data for the other half of the second level cache line.
Type: Application
Filed: September 23, 2011
Publication date: August 2, 2012
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventors: Abhijeet Ashok Chachad, Roger Kyle Castille, Joseph Raymond Michael Zbiciak, Dheera Balasubramanian
-
Publication number: 20120191916
Abstract: A second level memory controller uses shadow tags to implement snoop read and write coherence. These shadow tags are generally used only for snoops intended to keep L2 SRAM coherent with the level one data cache; thus updates for all external cache lines are ignored. The shadow tags are updated on all level one cache allocates and on all dirty and invalidate modifications to data stored in L2 SRAM. These interactions happen on different interfaces, but the traffic on those interfaces includes level one data cache accesses to both external and level two directly addressable lines. These interactions create extra traffic on the interfaces and extra stalls to the CPU. Thus in this invention the shadow tags are updated on only a subset of the level one tag updates.
Type: Application
Filed: September 26, 2011
Publication date: July 26, 2012
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventors: Abhijeet Ashok Chachad, Roger Kyle Castille, Joseph Raymond Michael Zbiciak, Dheera Balasubramanian
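A minimal C sketch of filtering shadow-tag updates to the subset described in this abstract; the event names, address map, and L2 SRAM window are assumptions for illustration only.

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative classification of a level-one cache tag update.  Whether
       an address maps to L2 SRAM (directly addressable) or to external
       memory, and the event types, are assumptions for this sketch. */
    enum l1_event { L1_ALLOCATE, L1_MARK_DIRTY, L1_INVALIDATE, L1_READ_HIT };

    static bool is_l2_sram(unsigned addr)
    {
        return addr < 0x00100000u;     /* assumed L2 SRAM window */
    }

    /* Update the shadow tags only for the subset of L1 tag changes that
       matter for keeping L2 SRAM coherent: allocates plus dirty and
       invalidate modifications.  External lines and plain read hits are
       ignored, which reduces traffic and CPU stalls on the interface. */
    static bool shadow_tags_need_update(enum l1_event ev, unsigned addr)
    {
        if (!is_l2_sram(addr))
            return false;
        return ev == L1_ALLOCATE || ev == L1_MARK_DIRTY || ev == L1_INVALIDATE;
    }

    int main(void)
    {
        printf("%d\n", shadow_tags_need_update(L1_ALLOCATE, 0x00002000u)); /* 1 */
        printf("%d\n", shadow_tags_need_update(L1_READ_HIT, 0x00002000u)); /* 0 */
        printf("%d\n", shadow_tags_need_update(L1_ALLOCATE, 0x80000000u)); /* 0 */
        return 0;
    }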
-
Patent number: 7716388
Abstract: Command reordering in the hub interface unit (HIU) of Enhanced Direct Memory Access (EDMA) functions is described. Without command reordering in the EDMA, commands are issued by the HIU to the peripheral in order of issue. If the higher priority transfers are issued later by the EDMA, the previously issued lower priority transfers would block the higher priority transfers. Command reordering in the HIU causes transfers to be reordered and issued to the peripheral based on their priority. Reordering allows the EDMA and HIU to give due service to high priority transfer requests with decreased weight placed on the order in which the requests were issued.
Type: Grant
Filed: May 13, 2005
Date of Patent: May 11, 2010
Assignee: Texas Instruments Incorporated
Inventors: Shoban Srikrishna Jagathesan, Sanjive Agarwala, Kyle Castille, Quang-Dieu An
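A minimal C sketch of priority-based command reordering as described in this abstract; the queue structure, priority encoding, and tie-breaking rule are illustrative assumptions.

    #include <stdio.h>

    /* Illustrative queued command: arrival order plus transfer priority
       (lower value = more urgent in this sketch). */
    struct command { int arrival; int priority; };

    /* Without reordering the HIU would issue commands in arrival order; with
       reordering it picks the highest-priority pending command each time. */
    static int pick_next(const struct command *q, const int *pending, int n)
    {
        int best = -1;
        for (int i = 0; i < n; i++) {
            if (!pending[i])
                continue;
            if (best < 0 || q[i].priority < q[best].priority ||
                (q[i].priority == q[best].priority && q[i].arrival < q[best].arrival))
                best = i;
        }
        return best;
    }

    int main(void)
    {
        struct command q[] = { {0, 5}, {1, 5}, {2, 1} }; /* urgent one arrives last */
        int pending[] = { 1, 1, 1 };

        for (int issued = 0; issued < 3; issued++) {
            int i = pick_next(q, pending, 3);
            printf("issue command that arrived %d (priority %d)\n",
                   q[i].arrival, q[i].priority);
            pending[i] = 0;
        }
        return 0;
    }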
-
Patent number: 7673076
Abstract: An enhanced direct memory access (EDMA) operation issues a read command to the source port to request data. The port returns the data along with response information, which contains the channel and valid byte count. The EDMA stores the read data into a write buffer and acknowledges to the source port that the EDMA can accept more data. The read response and data can come from more than one port and belong to different channels. Removing channel prioritizing according to this invention allows the EDMA to store read data in the write buffer and the EDMA then can acknowledge the port read response concurrently across all channels. This improves the EDMA inbound and outbound data flow dramatically.
Type: Grant
Filed: May 13, 2005
Date of Patent: March 2, 2010
Assignee: Texas Instruments Incorporated
Inventors: Sanjive Agarwala, Kyle Castille, Quang-Dieu An
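A minimal C sketch of buffering and acknowledging read responses without prioritizing one channel over another, as this abstract describes; the response fields mirror the abstract, while the channel count and buffer bookkeeping are assumptions.

    #include <stdio.h>
    #include <string.h>

    /* Illustrative read response as described above: a channel number and a
       valid byte count returned by the source port along with the data. */
    struct read_response { int channel; int valid_bytes; };

    #define NUM_CHANNELS 4

    int main(void)
    {
        /* Per-channel write-buffer fill levels in bytes (sizes assumed). */
        int write_buffer_fill[NUM_CHANNELS];
        memset(write_buffer_fill, 0, sizeof write_buffer_fill);

        struct read_response responses[] = { {2, 64}, {0, 32}, {2, 64} };

        /* Because channels are not prioritized against each other here, each
           response is buffered and acknowledged as it arrives, so the port
           is told immediately that more data can be accepted on any channel. */
        for (int i = 0; i < 3; i++) {
            struct read_response r = responses[i];
            write_buffer_fill[r.channel] += r.valid_bytes;
            printf("ack channel %d, %d bytes buffered\n",
                   r.channel, write_buffer_fill[r.channel]);
        }
        return 0;
    }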
-
Patent number: 7577774
Abstract: The present invention provides for independent source-read and destination-write functionality for Enhanced Direct Memory Access (EDMA). Allowing source read and destination write pipelines to operate independently makes it possible for the source pipeline to issue multiple read requests and stay ahead of the destination write for fully pipelined operation. The result is that fully pipelined capability may be achieved and utilization of the full DMA bandwidth and maximum throughput performance are provided.
Type: Grant
Filed: May 13, 2005
Date of Patent: August 18, 2009
Assignee: Texas Instruments Incorporated
Inventors: Sanjive Agarwala, Kyle Castille, Quang-Dieu An, Hung Ong
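A minimal C sketch of a source-read pipeline running ahead of the destination-write pipeline, as this abstract describes; the outstanding-read depth and burst count are assumptions chosen for illustration.

    #include <stdio.h>

    /* Illustrative limit on outstanding reads; the real depth is assumed. */
    #define MAX_OUTSTANDING_READS 4

    int main(void)
    {
        int total_bursts = 8;
        int reads_issued = 0, writes_done = 0;

        /* The source-read pipeline stays ahead of the destination-write
           pipeline, keeping several read requests in flight so the write
           side is never stalled waiting on a freshly issued read. */
        while (writes_done < total_bursts) {
            while (reads_issued < total_bursts &&
                   reads_issued - writes_done < MAX_OUTSTANDING_READS) {
                printf("issue read %d\n", reads_issued);
                reads_issued++;
            }
            printf("complete write %d\n", writes_done);
            writes_done++;
        }
        return 0;
    }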
-
Patent number: 7191162
Abstract: The invention describes a modification of FIFO hardware to allow improved use of FIFOs for burst reading from or writing to a processor direct memory access unit via either an expansion bus or an external memory interface using FIFO flag initiated bursts. The hardware and FIFO signal modifications make the FIFO-DMA interface immune to deadlock conditions and generation of spurious interrupt events in the process of initiating burst transfers. The FIFO function is modified to synchronize the frame transfer on the digital signal processor even if the digital signal processor lacks this functionality. By delaying the programmable flag assertions within the FIFO until after the current burst is complete, the DSP-FIFO interface may be made immune to deadlock conditions and generation of spurious events.
Type: Grant
Filed: October 21, 2003
Date of Patent: March 13, 2007
Assignee: Texas Instruments Incorporated
Inventors: Clayton Gibbs, Kyle Castille, Natarajan Kurian Seshan
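A minimal C sketch of holding off the programmable flag assertion until the current burst completes, as this abstract describes; the FIFO fields, threshold meaning, and fill levels are illustrative assumptions.

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative FIFO state: the programmable flag condition may become
       true mid-burst, but its assertion toward the DMA unit is held off
       until the current burst finishes. */
    struct fifo {
        int  fill_level;
        int  threshold;        /* programmable flag threshold (assumed) */
        bool burst_in_progress;
    };

    static bool flag_asserted(const struct fifo *f)
    {
        bool condition = f->fill_level >= f->threshold;
        /* Delaying the assertion until the burst completes is what keeps the
           DSP-FIFO interface from triggering a new burst or a spurious event
           while the previous burst is still draining. */
        return condition && !f->burst_in_progress;
    }

    int main(void)
    {
        struct fifo f = { 20, 16, true };
        printf("during burst: flag=%d\n", flag_asserted(&f));  /* 0: held off */
        f.burst_in_progress = false;
        printf("after burst:  flag=%d\n", flag_asserted(&f));  /* 1: asserted */
        return 0;
    }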
-
Publication number: 20060259568
Abstract: Command reordering in the hub-interface unit (HIU) of Enhanced Direct Memory Access (EDMA) functions is described. Without command reordering in the EDMA, commands are issued by the HIU to the peripheral in order of issue. If the higher priority transfers are issued later by the EDMA, the previously issued lower priority transfers would block the higher priority transfers. Command reordering in the HIU causes transfers to be reordered and issued to the peripheral based on their priority. Reordering allows the EDMA and HIU to give due service to high priority transfer requests with decreased weight placed on the order in which the requests were issued.
Type: Application
Filed: May 13, 2005
Publication date: November 16, 2006
Inventors: Shoban Jagathesan, Sanjive Agarwala, Kyle Castille, Quang-Dieu An
-
Publication number: 20060256796
Abstract: The present invention provides for independent source-read and destination-write functionality for Enhanced Direct Memory Access (EDMA). Allowing source read and destination write pipelines to operate independently makes it possible for the source pipeline to issue multiple read requests and stay ahead of the destination write for fully pipelined operation. The result is that fully pipelined capability may be achieved and utilization of the full DMA bandwidth and maximum throughput performance are provided.
Type: Application
Filed: May 13, 2005
Publication date: November 16, 2006
Inventors: Sanjive Agarwala, Kyle Castille, Quang-Dieu An, Hung Ong
-
Publication number: 20060259648
Abstract: An extended direct memory access (EDMA) operation issues a read command to the source port to request data. The port returns the data along with response information, which contains the channel and valid byte count. The EDMA stores the read data into a write buffer and acknowledges to the source port that the EDMA can accept more data. The read response and data can come from more than one port and belong to different channels. Removing channel prioritizing according to this invention allows the EDMA to store read data in the write buffer and the EDMA then can acknowledge the port read response concurrently across all channels. This improves the EDMA inbound and outbound data flow dramatically.
Type: Application
Filed: May 13, 2005
Publication date: November 16, 2006
Inventors: Sanjive Agarwala, Kyle Castille, Quang-Dieu An
-
Publication number: 20060259665
Abstract: The configurable multiple write-enhanced EDMA of this invention processes multiple priority channels and utilizes as much of the write data bus width as practical. A write queue stores write requests with their corresponding data width and priority. A dispatch circuit dispatches the highest priority maximum data width write request if that is the highest priority stored write request or if the prior dispatch was not a maximum data width write request. The dispatch circuit dispatches two write requests if their total data width is less than or equal to the maximum data width and they both have a priority higher than the highest priority maximum data width write request.
Type: Application
Filed: May 13, 2005
Publication date: November 16, 2006
Inventors: Sanjive Agarwala, Kyle Castille, Quang An, David Bell, Natarajan Seshan
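A minimal C sketch of the pairing rule in this abstract: two stored writes are dispatched together when their combined width fits on the bus and both outrank the highest-priority full-width write. The bus width, priorities, and request values are assumptions for illustration.

    #include <stdio.h>

    /* Illustrative queued write: data width in bytes and priority
       (lower value = higher priority in this sketch). */
    struct write_req { int width; int priority; };

    #define BUS_WIDTH 16   /* assumed maximum data width per dispatch, bytes */

    int main(void)
    {
        /* Two narrow, higher-priority writes and one full-width write. */
        struct write_req a = { 8, 1 }, b = { 8, 2 }, wide = { 16, 3 };

        /* Dispatch the two narrow writes together if their total width fits
           on the bus and both have priority higher than the highest-priority
           maximum-width write request; otherwise dispatch the wide write. */
        if (a.width + b.width <= BUS_WIDTH &&
            a.priority < wide.priority && b.priority < wide.priority)
            printf("dispatch the two %d-byte writes together\n", a.width);
        else
            printf("dispatch the %d-byte write alone\n", wide.width);
        return 0;
    }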
-
Publication number: 20050086400
Abstract: The invention describes a modification of FIFO hardware to allow improved use of FIFOs for burst reading from or writing to a processor direct memory access unit via either an expansion bus or an external memory interface using FIFO flag initiated bursts. The hardware and FIFO signal modifications make the FIFO-DMA interface immune to deadlock conditions and generation of spurious interrupt events in the process of initiating burst transfers. The FIFO function is modified to synchronize the frame transfer on the digital signal processor even if the digital signal processor lacks this functionality. By delaying the programmable flag assertions within the FIFO until after the current burst is complete, the DSP-FIFO interface may be made immune to deadlock conditions and generation of spurious events.
Type: Application
Filed: October 21, 2003
Publication date: April 21, 2005
Inventors: Clayton Gibbs, Kyle Castille, Natarajan Seshan
-
Publication number: 20020136220
Abstract: In a data processing system having a master-state data processing unit and at least one slave-state data processing unit, the data processing units can be provided with an asynchronous transfer mode interface unit for transferring data cells between them. The interface unit provides and receives signals formatted in the UTOPIA protocol. The interface unit includes a processor acting as a state machine and a buffer memory unit for buffering the data groups between the interface unit processor and the direct memory access unit of the data processing unit. The interface unit can act in a receive mode and a transmit mode in a master-state data processing unit, and in a receive mode and a transmit mode in a slave-state data processing unit. An event signal provides for efficient transfer of data between the direct memory access unit and the buffer memory storage unit in the slave mode.
Type: Application
Filed: September 26, 2001
Publication date: September 26, 2002
Inventors: Shakuntala Anjanaiah, Roger Kyle Castille, Natarajan Seshan
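A minimal C sketch of the slave-mode receive flow suggested by this abstract: the interface buffers incoming UTOPIA cells and raises an event so the DMA unit drains the buffer. Apart from the standard 53-byte ATM cell, the function names, buffer depth, and control flow are assumptions for illustration.

    #include <stdio.h>

    #define CELL_BYTES 53      /* standard ATM cell size */
    #define BUFFER_CELLS 4     /* assumed buffer memory depth */

    static int buffered_cells = 0;

    /* A cell arrives over the UTOPIA interface and is placed in the buffer
       memory; an event signal tells the DMA unit that data is waiting. */
    static void utopia_cell_received(void)
    {
        if (buffered_cells < BUFFER_CELLS)
            buffered_cells++;
        printf("event: %d cell(s) buffered\n", buffered_cells);
    }

    /* The DMA unit services the event by draining the buffer memory. */
    static void dma_service_event(void)
    {
        while (buffered_cells > 0) {
            buffered_cells--;
            printf("DMA moved one %d-byte cell to memory\n", CELL_BYTES);
        }
    }

    int main(void)
    {
        utopia_cell_received();
        utopia_cell_received();
        dma_service_event();
        return 0;
    }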