Patents by Inventor Chitra Natarajan
Chitra Natarajan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240168890
Abstract: A processor package comprises a caching agent that is operable to respond to a first sequence of direct-to-cache (DTC) write misses to a partition in a set in a cache by writing data from those write misses to the partition. When the partition comprises W ways, the caching agent is operable to write data from those write misses to all W ways in the partition. After writing data from those write misses to the partition, and before any data from the partition in the set has been read, the caching agent is operable to receive a second sequence of DTC write misses to the partition, and in response, complete those write misses while retaining the data from the first sequence in at least W-1 of the ways in the partition. Other embodiments are described and claimed.
Type: Application
Filed: November 23, 2022
Publication date: May 23, 2024
Inventors: Chitra Natarajan, Aneesh Aggarwal, Ritu Gupta, Niall Declan McDonnell, Kapil Sood, Youngsoo Choi, Asad Khan, Lokpraveen Mosur, Subhiksha Ravisundar, George Leonard Tkachuk
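The retention behavior this abstract describes can be illustrated with a toy model. This is a sketch under stated assumptions, not the claimed mechanism: the class name, the fill-empty-ways-first policy, and the choice of the last way as the sole recycled victim are all illustrative assumptions.

```python
class DTCPartition:
    """Toy model of the retention policy in the abstract: a W-way
    cache-set partition absorbs direct-to-cache (DTC) write misses,
    but until the partition has been read, a later burst of write
    misses recycles only a single way, keeping data from the first
    burst in at least W-1 ways. Victim choice is an assumption."""

    def __init__(self, ways):
        self.ways = [None] * ways   # each entry: (tag, data) or None

    def write_miss(self, tag, data):
        for i, entry in enumerate(self.ways):
            if entry is None:        # fill an empty way first
                self.ways[i] = (tag, data)
                return
        # Partition full and not yet read: recycle only the last way
        # (assumed victim), preserving the other W-1 ways.
        self.ways[-1] = (tag, data)
```

With W = 4, a first burst of four write misses fills all four ways; a second burst of three misses then overwrites only one way, so three of the four original lines survive.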
-
Publication number: 20240160570
Abstract: Mechanisms to identify key sections of input-output (IO) packets and use them for efficient IO caching, and associated apparatus and methods. Data, such as packets, are received from an IO device coupled to an IO port on a processor that includes a cache domain with multiple caches, such as L1/L2 and an L3 or Last Level Cache (LLC). The data are logically partitioned into cache lines, and embedded logic on the processor is used to identify one or more important cache lines using a cache importance pattern. Cache lines that are identified as important are written to a cache or a first cache level, while unimportant cache lines are written to memory or a second cache level that is higher than the first cache level. Software running on one or more processor cores may be used to program cache importance patterns for one or more data types or transaction types.
Type: Application
Filed: November 16, 2022
Publication date: May 16, 2024
Inventors: George Leonard TKACHUK, Aneesh AGGARWAL, Niall D. MCDONNELL, Youngsoo CHOI, Chitra NATARAJAN, Prasad GHATIGAR, Shrikant M. SHAH
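The cache-importance-pattern idea can be sketched as a per-line bitmask over a packet. This is an illustrative assumption, not the patented logic: the function name, the 64-byte line size, and the list-of-bits pattern encoding are invented for the sketch.

```python
CACHE_LINE = 64  # bytes per cache line (typical, assumed)

def split_by_importance(packet, pattern):
    """Partition a packet's cache lines into 'important' lines
    (destined for a near cache level) and 'unimportant' lines
    (destined for memory or a higher cache level), following the
    per-line importance-bitmask idea in the abstract. Lines beyond
    the pattern's length are treated as unimportant (assumption)."""
    lines = [packet[i:i + CACHE_LINE]
             for i in range(0, len(packet), CACHE_LINE)]
    important, unimportant = [], []
    for idx, line in enumerate(lines):
        if idx < len(pattern) and pattern[idx]:
            important.append(line)
        else:
            unimportant.append(line)
    return important, unimportant
```

For example, a pattern of [1, 0, 0, 1] over a 256-byte packet would mark the first and last cache lines (say, packet headers and a trailer) as important.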
-
Publication number: 20240160568
Abstract: Examples include techniques associated with data movement to a cache in a disaggregated die system. Examples include circuitry at a first die receiving and granting requests to move data to a first cache resident on the first die or to a second cache resident on a second die that also includes a core of a processor. The granting of a request is based, at least in part, on a traffic source type associated with the source of the request.
Type: Application
Filed: November 15, 2022
Publication date: May 16, 2024
Inventors: Kapil SOOD, Lokpraveen MOSUR, Aneesh AGGARWAL, Niall D. MCDONNELL, Chitra NATARAJAN, Ritu GUPTA, Edwin VERPLANKE, George Leonard TKACHUK
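The source-type-based routing decision can be sketched as a lookup. The source-type names and the mapping to a die-local cache are illustrative assumptions, not the claimed policy.

```python
# Assumed mapping from traffic source type to target cache.
# Which types land where is an invented example, not the patent's rule.
TARGET_BY_SOURCE = {
    "core":        "core-die cache",
    "accelerator": "io-die cache",
    "io":          "io-die cache",
}

def choose_cache(source_type):
    """Sketch: route a data-movement request to the cache on the
    core die or on the I/O die based on the request's traffic
    source type; unknown types default to the I/O-die cache."""
    return TARGET_BY_SOURCE.get(source_type, "io-die cache")
```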
-
Publication number: 20240122859
Abstract: The present invention relates to a novel extended release multiple unit pellet system composition and a process for its preparation. The present invention specifically relates to a novel extended release multiple unit pellet system composition comprising an active pharmaceutical ingredient and pharmaceutically acceptable excipients. The present invention also relates to a novel dispersible extended release multiple unit pellet system composition comprising an active pharmaceutical ingredient and pharmaceutically acceptable excipients, wherein the active pharmaceutical ingredient is present in the core spheres. The present invention more specifically relates to a novel process for the preparation of an extended release multiple unit pellet system composition using dual seal coating technology.
Type: Application
Filed: February 12, 2022
Publication date: April 18, 2024
Inventors: Chandanmal Pukhraj Bothra, Hemanth Kumar Bothra, Elayaraja Natarajan, Chitra Varma, Bharat Ghanshyam Rughwani, Vaibhav Prakash Thorat
-
Publication number: 20240004662
Abstract: Techniques for performing horizontal reductions are described. In some examples, an instance of a horizontal instruction is to include at least one field for an opcode, one or more fields to reference a first source operand, and one or more fields to reference a destination operand, wherein the opcode is to indicate that execution circuitry is, in response to a decoded instance of the instruction, to at least perform a horizontal reduction using at least one data element of a non-masked data element position of at least the first source operand and store a result of the horizontal reduction in the destination operand.
Type: Application
Filed: July 2, 2022
Publication date: January 4, 2024
Inventors: Menachem ADELMAN, Amit GRADSTEIN, Regev SHEMY, Chitra NATARAJAN, Leonardo BORGES, Chytra SHIVASWAMY, Igor ERMOLAEV, Michael ESPIG, Or BEIT AHARON, Jeff WIEDEMEIER
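A masked horizontal reduction, as described in this abstract, collapses the non-masked elements of a source operand to a single scalar. A minimal sketch, assuming addition as the reduction operation (the abstract does not fix the operation) and a list-of-bits mask:

```python
def masked_horizontal_add(src, mask, init=0):
    """Sketch of a masked horizontal reduction: combine only the
    data elements of src at non-masked positions into one scalar,
    which the instruction would store in the destination operand.
    Addition is an assumed choice of reduction operation."""
    total = init
    for value, m in zip(src, mask):
        if m:
            total += value
    return total
```

For example, reducing [1, 2, 3, 4] under mask [1, 0, 1, 0] sums only the first and third elements.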
-
Publication number: 20230409333
Abstract: Techniques for performing prefix sums in response to a single instruction are described. In some examples, the single instruction includes fields for an opcode, one or more fields to reference a first source operand, one or more fields to reference a second source operand, and one or more fields to reference a destination operand, wherein the opcode is to indicate that execution circuitry is, in response to a decoded instance of the single instruction, to at least: perform a prefix sum by, for each non-masked data element position of the second source operand, adding the data element of that position to the data elements of preceding data element positions and adding at least one data element of a defined data element position of the first source operand; and store each prefix sum for each data element position of the second source operand into the corresponding data element position of the destination operand.
Type: Application
Filed: June 17, 2022
Publication date: December 21, 2023
Inventors: Menachem ADELMAN, Amit GRADSTEIN, Regev SHEMY, Chitra NATARAJAN, Igor ERMOLAEV
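The masked prefix-sum semantics can be sketched in scalar code. This is an interpretation under stated assumptions: the carry-in is taken from the last element of the first source operand, masked positions neither contribute to nor receive the running sum, and masked destination positions keep their source values — the abstract leaves these details open.

```python
def masked_prefix_sum(src1, src2, mask, carry_pos=-1):
    """Sketch of the single-instruction prefix sum described in
    the abstract. For each non-masked position i of src2, the
    destination gets the running sum of the non-masked elements
    of src2[0..i] plus a carry-in element from a defined position
    of src1 (assumed here to be the last element)."""
    carry = src1[carry_pos]      # carry-in from the first source operand
    dest = list(src2)            # masked positions pass through (assumed)
    running = 0
    for i, m in enumerate(mask):
        if m:
            running += src2[i]
            dest[i] = running + carry
    return dest
```

The carry-in lets a chain of such instructions continue a prefix sum across vector-register boundaries, which is presumably why the first source operand exists.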
-
Publication number: 20220201103
Abstract: Examples described herein relate to coalescing one or more messages into a coalesced message and representing one or more fields of the metadata of the one or more messages using one or more codes, wherein at least one of the one or more codes uses fewer bits than that of the original metadata fields to compact the metadata fields. In some examples, the metadata includes at least one or more of: a target processing element (PE) number or identifier, message length, operation to perform, target address where to read or write data, source PE number or identifier, initiator address in which to write result data, or message identifier.
Type: Application
Filed: March 9, 2022
Publication date: June 23, 2022
Inventors: David KEPPEL, Chitra NATARAJAN, Venkata KRISHNAN
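The metadata-compaction idea can be sketched by packing a short code per message into a shared header. Everything concrete here is an assumption: the 2-bit code width, the operation names, and the header layout are invented for illustration; the abstract only says codes use fewer bits than the original fields.

```python
# Assumed 2-bit codes standing in for a wider "operation" metadata field.
OP_CODES = {"read": 0b00, "write": 0b01, "atomic_add": 0b10}

def coalesce(messages):
    """Sketch of message coalescing: pack several small messages
    into one coalesced message, encoding each message's operation
    as a 2-bit code in a shared header instead of carrying a full
    metadata field per message."""
    header = 0
    payload = []
    for i, (op, data) in enumerate(messages):
        header |= OP_CODES[op] << (2 * i)   # 2 bits per message (assumed)
        payload.append(data)
    return header, payload
```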
-
Patent number: 10705962
Abstract: Embodiments of this disclosure provide a mechanism to use a portion of an inactive processing element's private cache as extended last-level cache storage space to adaptively adjust the size of the shared cache. In one embodiment, a processing device is provided. The processing device comprises a cache controller to identify a cache line to evict from a shared cache. An inactive processing core is selected by the cache controller from a plurality of processing cores associated with the shared cache. Then, a private cache of the inactive processing core is notified of an identifier of the cache line associated with the shared cache. Thereupon, the cache line is evicted from the shared cache and installed in the private cache.
Type: Grant
Filed: December 21, 2017
Date of Patent: July 7, 2020
Assignee: Intel Corporation
Inventors: Carl J. Beckmann, Robert G. Blankenship, Chyi-Chang Miao, Chitra Natarajan, Anthony-Trung D. Nguyen
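The eviction path this abstract describes can be sketched at a high level. The data structures (dicts for caches, a list of core records) and the first-inactive-core selection policy are illustrative assumptions.

```python
def evict_from_shared(shared_cache, cores, line_id):
    """Sketch of the extended-LLC mechanism: instead of dropping a
    line evicted from the shared cache, install it in an inactive
    core's private cache, effectively enlarging the shared cache.
    Selecting the first inactive core is an assumed policy."""
    data = shared_cache.pop(line_id)
    for core in cores:
        if not core["active"]:
            core["private_cache"][line_id] = data   # notify + install
            return core["id"]
    return None   # no inactive core: the line is simply dropped
```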
-
Publication number: 20190196968
Abstract: Embodiments of this disclosure provide a mechanism to use a portion of an inactive processing element's private cache as extended last-level cache storage space to adaptively adjust the size of the shared cache. In one embodiment, a processing device is provided. The processing device comprises a cache controller to identify a cache line to evict from a shared cache. An inactive processing core is selected by the cache controller from a plurality of processing cores associated with the shared cache. Then, a private cache of the inactive processing core is notified of an identifier of the cache line associated with the shared cache. Thereupon, the cache line is evicted from the shared cache and installed in the private cache.
Type: Application
Filed: December 21, 2017
Publication date: June 27, 2019
Inventors: Carl J. Beckmann, Robert G. Blankenship, Chyi-Chang Miao, Chitra Natarajan, Anthony-Trung D. Nguyen
-
Patent number: 7461218
Abstract: A memory read request is received at a port from a device, wherein the port is connected to the device by a packet-based link. The memory read request is enqueued into a small request queue or a large request queue based on the amount of data requested in the memory read request. Memory read requests are dequeued by interleaving between the small request queue and the large request queue based on an interleave granularity.
Type: Grant
Filed: June 29, 2005
Date of Patent: December 2, 2008
Assignee: Intel Corporation
Inventors: Sridhar Muthrasanallur, Jeff Wilder, Chitra Natarajan
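The interleaved dequeue can be sketched as alternating between the two queues, draining up to the interleave granularity from each before switching. The round-robin switching rule and the granularity-in-requests unit are assumptions; the patent may count granularity in bytes or otherwise.

```python
from collections import deque

def interleave_dequeue(small_q, large_q, granularity):
    """Sketch of interleaved dequeue between a small-request queue
    and a large-request queue: take up to `granularity` requests
    from each queue in turn, so large reads cannot starve small
    ones. Exact switching rule is an assumption."""
    out = []
    while small_q or large_q:
        for q in (small_q, large_q):
            for _ in range(granularity):
                if q:
                    out.append(q.popleft())
    return out
```

With granularity 1, requests from the two queues simply alternate until one queue empties.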
-
Publication number: 20070005913
Abstract: A memory read request is received at a port from a device, wherein the port is connected to the device by a packet-based link. The memory read request is enqueued into a small request queue or a large request queue based on the amount of data requested in the memory read request. Memory read requests are dequeued by interleaving between the small request queue and the large request queue based on an interleave granularity.
Type: Application
Filed: June 29, 2005
Publication date: January 4, 2007
Inventors: Sridhar Muthrasanallur, Jeff Wilder, Chitra Natarajan
-
Publication number: 20060200597
Abstract: A memory controller that supports fully buffered DIMMs by utilizing a write FIFO is discussed. The controller switches from its default behavior of scheduling read requests out-of-order to scheduling write transactions from the write FIFO buffer when a predetermined set of conditions is met. For example, the conditions may be that a write buffer structure has exceeded a threshold (where the threshold is fixed or specified by a configuration register) and that the memory controller has posted a predetermined number of writes to an AMB write FIFO structure (the predetermined number can be fixed or specified by a configuration register).
Type: Application
Filed: March 3, 2005
Publication date: September 7, 2006
Inventors: Bruce Christenson, Chitra Natarajan
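The switch condition can be sketched as a simple predicate. Treating the two conditions as a conjunction follows the abstract's wording, but the parameter names and the strict/non-strict comparisons are assumptions.

```python
def should_flush_writes(write_buffer_depth, posted_to_amb,
                        threshold, posted_limit):
    """Sketch of the mode-switch condition: leave default
    out-of-order read scheduling and drain writes when the write
    buffer has exceeded its threshold AND enough writes have been
    posted to the AMB write FIFO. Both limits would come from
    fixed values or configuration registers."""
    return write_buffer_depth > threshold and posted_to_amb >= posted_limit
```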
-
Patent number: 7047374
Abstract: Memory bandwidth may be enhanced by reordering read and write requests to memory. A read queue can hold multiple read requests and a write queue can hold multiple write requests. By examining the contents of the queues, the order in which the read and write requests are presented to memory may be changed to avoid or minimize page replace conflicts, DIMM turn around conflicts, and other types of conflicts that could otherwise impair the efficiency of memory operations.
Type: Grant
Filed: February 5, 2003
Date of Patent: May 16, 2006
Assignee: Intel Corporation
Inventors: Suneeta Sah, Stanley S. Kulick, Varin Udompanyanan, Chitra Natarajan, Hrishikesh S. Pai
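One conflict-reducing reordering is to group requests that hit the same DRAM page, so page-replace (row-open/close) penalties are paid once per page rather than per request. This grouping heuristic and the (op, page) request format are illustrative assumptions, not the patented scheduler.

```python
def reorder_requests(requests):
    """Sketch of conflict-avoiding reordering: keep the first-seen
    order of DRAM pages, but cluster all requests to the same page
    together so fewer page-replace conflicts occur. Requests are
    assumed to be (op, page) pairs, e.g. ("R", 1)."""
    by_page = {}
    page_order = []
    for op, page in requests:
        if page not in by_page:
            by_page[page] = []
            page_order.append(page)
        by_page[page].append((op, page))
    return [req for page in page_order for req in by_page[page]]
```

A real scheduler would also weigh read/write turnaround and fairness; this sketch shows only the page-clustering idea.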
-
Publication number: 20060026375
Abstract: A memory method may select a latency mode, such as a read latency mode, based on measured memory channel utilization. Memory channel utilization, for example, may be derived from measurements of a memory controller queue structure. Other embodiments are described and claimed.
Type: Application
Filed: July 30, 2004
Publication date: February 2, 2006
Inventors: Bruce Christenson, Chitra Natarajan
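Selecting a latency mode from queue-based utilization can be sketched as a watermark check. The two mode names, the 0.75 watermark, and the use of queue occupancy as the utilization proxy are illustrative assumptions.

```python
def select_latency_mode(queue_depth, queue_capacity, high_watermark=0.75):
    """Sketch: choose a read-latency mode from measured memory
    channel utilization, approximated here by memory-controller
    queue occupancy. Busy channels favor a throughput-oriented
    mode; idle channels favor a low-latency mode (assumed names)."""
    utilization = queue_depth / queue_capacity
    return "high-bandwidth" if utilization > high_watermark else "low-latency"
```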
-
Publication number: 20040022094
Abstract: In a system supporting concurrent multiple streams that pass through a cache between memory and the requesting devices, various techniques improve the efficient use of the cache. Some embodiments use adaptive pre-fetching of memory data using a dynamic table to determine the maximum number of pre-fetched cache lines permissible per stream. Other embodiments dynamically allocate the cache to the active streams. Still other embodiments use a programmable timer to deallocate inactive streams.
Type: Application
Filed: February 5, 2003
Publication date: February 5, 2004
Inventors: Sivakumar Radhakrishnan, Chitra Natarajan, Kenneth C. Creta, Bradford B. Congdon, Hui Lu
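The programmable-timer deallocation can be sketched as pruning streams whose last access is older than a timeout. The per-stream record layout and the logical-tick clock are illustrative assumptions.

```python
def prune_inactive_streams(streams, now, timeout):
    """Sketch of timer-based stream deallocation: free the cache
    resources of any stream that has been idle for more than
    `timeout` ticks, so active streams can claim its share.
    Stream bookkeeping fields are assumed."""
    return {sid: s for sid, s in streams.items()
            if now - s["last_access"] <= timeout}
```

The `timeout` value stands in for the patent's programmable timer: lengthening it keeps bursty streams allocated; shortening it reclaims cache faster.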
-
Publication number: 20030177320
Abstract: Memory bandwidth may be enhanced by reordering read and write requests to memory. A read queue can hold multiple read requests and a write queue can hold multiple write requests. By examining the contents of the queues, the order in which the read and write requests are presented to memory may be changed to avoid or minimize page replace conflicts, DIMM turn around conflicts, and other types of conflicts that could otherwise impair the efficiency of memory operations.
Type: Application
Filed: February 5, 2003
Publication date: September 18, 2003
Inventors: Suneeta Sah, Stanley S. Kulick, Varin Udompanyanan, Chitra Natarajan, Hrishikesh S. Pai