Patents by Inventor Gowrisankar RADHAKRISHNAN
Gowrisankar RADHAKRISHNAN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Patent number: 10649906
  Abstract: A system and method for efficient cache flushing are provided. The disclosed method includes maintaining a data structure in connection with a plurality of blocks used for data caching, the data structure including a row lock wait list section. The method further includes receiving an Input/Output (I/O) request, performing a hash search for the I/O request against the data structure, and, based on the results of the hash search, locking at least one row in a data cache, thereby preventing read and write operations from being performed on the at least one row until the at least one row is unlocked.
  Type: Grant
  Filed: April 30, 2018
  Date of Patent: May 12, 2020
  Assignee: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED
  Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
  (An illustrative sketch of this row-locking flow appears after the listing.)
- Publication number: 20200057576
  Abstract: A system and method for efficient write-through processing of Input/Output (I/O) requests are provided. One example of the illustrative method includes receiving a first write request to a first row; while processing the first write request, receiving a subsequent write request to the first row; and then caching the subsequent write request for processing until the first write request is completed.
  Type: Application
  Filed: August 16, 2018
  Publication date: February 20, 2020
  Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
  (A sketch of deferring the subsequent write appears after the listing.)
- Patent number: 10528438
  Abstract: A system and method for managing bad blocks in a hardware accelerated caching solution are provided. The disclosed method includes receiving an Input/Output (I/O) request, performing a hash search for the I/O request against a hash slot data structure, and, based on the results of the hash search, either performing the I/O request with a data block identified in the I/O request or diverting the I/O request to a new data block not identified in the I/O request. The diversion may also include diverting the I/O request from hardware to firmware of a memory controller.
  Type: Grant
  Filed: May 25, 2017
  Date of Patent: January 7, 2020
  Assignee: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED
  Inventors: Horia Simionescu, Gowrisankar Radhakrishnan, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit
  (A sketch of the bad-block diversion appears after the listing.)
- Patent number: 10467150
  Abstract: Disclosed are embodiments for supporting dynamic tier remapping of data stored in a hybrid storage system. One embodiment includes a storage controller and firmware, where the firmware maintains a plurality of mapping elements, where each mapping element includes a plurality of group identifiers, and where each group identifier is configured to indicate a mapping of logical block addresses. The storage controller performs: receiving a read command including a logical block address; parsing the logical block address to determine a mapping element and a group identifier; determining, for a particular mapping element of the plurality of mapping elements, whether the particular mapping element is locked, wherein the particular mapping element corresponds to the mapping element of the logical block address; and, dependent upon the particular mapping element, queuing the read command for firmware processing or remapping the logical block address.
  Type: Grant
  Filed: May 16, 2018
  Date of Patent: November 5, 2019
  Assignee: International Business Machines Corporation
  Inventors: Joseph R. Edwards, Robert Galbraith, Adrian C. Gerhard, Daniel F. Moertl, Gowrisankar Radhakrishnan, Rick A. Weckwerth
  (A sketch of the address-to-mapping-element lookup appears after the listing.)
- Publication number: 20190332541
  Abstract: A system and method for efficient cache flushing are provided. The disclosed method includes maintaining a data structure in connection with a plurality of blocks used for data caching, the data structure including a row lock wait list section. The method further includes receiving an Input/Output (I/O) request, performing a hash search for the I/O request against the data structure, and, based on the results of the hash search, locking at least one row in a data cache, thereby preventing read and write operations from being performed on the at least one row until the at least one row is unlocked.
  Type: Application
  Filed: April 30, 2018
  Publication date: October 31, 2019
  Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
- Patent number: 10394673
  Abstract: A system and method for performing a copyback operation are provided. The disclosed method includes initiating a copyback process to move data from an online data storage drive to a spare data storage drive by setting an indicator in hardware to divert all write completions on the online data storage drive. The method further includes, while the indicator in hardware is set to divert the write completions, incrementally copying data from the online data storage drive to the spare data storage drive on a per-strip basis. The method further includes, only after all data from the online data storage drive has been copied to the spare data storage drive, changing the setting of the indicator in hardware so that write requests received for the online data storage drive during the copyback process are re-issued to the spare data storage drive.
  Type: Grant
  Filed: May 22, 2017
  Date of Patent: August 27, 2019
  Assignee: Avago Technologies International Sales Pte. Limited
  Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
  (A sketch of the divert-and-copy ordering appears after the listing.)
- Patent number: 10282301
  Abstract: A system and method for efficient cache buffering are provided. The disclosed method includes determining that a read-ahead operation is to be performed in response to receiving a host Input/Output (I/O) command and, in response to that determination, allocating a new Local Message Identifier (LMID) for the read-ahead operation. The method further includes sending a buffer allocation request to a buffer manager module, the buffer allocation request containing parameters associated with the read-ahead operation, and then causing the buffer manager module to allocate at least one Internal Scatter Gather List (ISGL) and Buffer Section Identifier (BSID) in accordance with the parameters contained in the buffer allocation request. The method further includes enabling a cache manager module to perform a hash search using a row or strip number and identification information available in the new LMID.
  Type: Grant
  Filed: May 18, 2017
  Date of Patent: May 7, 2019
  Assignee: Avago Technologies International Sales Pte. Limited
  Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
  (A sketch of the read-ahead buffer allocation appears after the listing.)
- Patent number: 10282116
  Abstract: A system and method for efficient cache flushing are provided. The disclosed method includes allocating one or more Internal Scatter Gather Lists (ISGLs) for the cache flush, populating the one or more ISGLs with Cache Segment Identifiers (CSIDs) and corresponding Buffer Segment Identifiers (BSIDs) of each strip that is identified as dirty, of a skip-type Internal Scatter Gather Element (ISGE), or of a missing arm-type ISGE. The disclosed method further includes allocating a flush Local Message Identifier (LMID) as a message to be used in connection with processing the cache flush, populating the flush LMID with an identifier of the one or more ISGLs, and transferring the flush LMID to a cache manager module to enable the cache manager module to execute the cache flush based on information contained in the flush LMID.
  Type: Grant
  Filed: July 19, 2017
  Date of Patent: May 7, 2019
  Assignee: Avago Technologies International Sales Pte. Limited
  Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
  (A sketch of building the flush message appears after the listing.)
- Patent number: 10223009
  Abstract: A system and method for efficient cache buffering are provided. The disclosed method includes receiving an Input/Output (I/O) command from a host system at a storage controller, parsing the I/O command at the storage controller with a host I/O manager to extract command instructions therefrom. The host I/O manager is able to generate at least one local message that includes the command instructions extracted from the I/O command and transmit the at least one local message to a cache manager. The cache manager is enabled to work in local memory to execute the command instructions contained in the at least one message. The cache manager is also configured to chain multiple buffer segments together on-demand to support multiple stripe sizes that are specific to the I/O command received from the host system.
  Type: Grant
  Filed: October 26, 2016
  Date of Patent: March 5, 2019
  Assignee: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED
  Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
  (A sketch of chaining buffer segments from a parsed command appears after the listing.)
- Publication number: 20190026033
  Abstract: A system and method for efficient cache flushing are provided. The disclosed method includes allocating one or more Internal Scatter Gather Lists (ISGLs) for the cache flush, populating the one or more ISGLs with Cache Segment Identifiers (CSIDs) and corresponding Buffer Segment Identifiers (BSIDs) of each strip that is identified as dirty, of a skip-type Internal Scatter Gather Element (ISGE), or of a missing arm-type ISGE. The disclosed method further includes allocating a flush Local Message Identifier (LMID) as a message to be used in connection with processing the cache flush, populating the flush LMID with an identifier of the one or more ISGLs, and transferring the flush LMID to a cache manager module to enable the cache manager module to execute the cache flush based on information contained in the flush LMID.
  Type: Application
  Filed: July 19, 2017
  Publication date: January 24, 2019
  Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
- Publication number: 20180341564
  Abstract: A system and method for managing bad blocks in a hardware accelerated caching solution are provided. The disclosed method includes receiving an Input/Output (I/O) request, performing a hash search for the I/O request against a hash slot data structure, and, based on the results of the hash search, either performing the I/O request with a data block identified in the I/O request or diverting the I/O request to a new data block not identified in the I/O request. The diversion may also include diverting the I/O request from hardware to firmware of a memory controller.
  Type: Application
  Filed: May 25, 2017
  Publication date: November 29, 2018
  Inventors: Horia Simionescu, Gowrisankar Radhakrishnan, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit
- Publication number: 20180336138
  Abstract: A system and method for efficient cache buffering are provided. The disclosed method includes determining that a read-ahead operation is to be performed in response to receiving a host Input/Output (I/O) command and, in response to that determination, allocating a new Local Message Identifier (LMID) for the read-ahead operation. The method further includes sending a buffer allocation request to a buffer manager module, the buffer allocation request containing parameters associated with the read-ahead operation, and then causing the buffer manager module to allocate at least one Internal Scatter Gather List (ISGL) and Buffer Section Identifier (BSID) in accordance with the parameters contained in the buffer allocation request. The method further includes enabling a cache manager module to perform a hash search using a row or strip number and identification information available in the new LMID.
  Type: Application
  Filed: May 18, 2017
  Publication date: November 22, 2018
  Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
- Publication number: 20180335963
  Abstract: A system and method for performing a copyback operation are provided. The disclosed method includes initiating a copyback process to move data from an online data storage drive to a spare data storage drive by setting an indicator in hardware to divert all write completions on the online data storage drive. The method further includes, while the indicator in hardware is set to divert the write completions, incrementally copying data from the online data storage drive to the spare data storage drive on a per-strip basis. The method further includes, only after all data from the online data storage drive has been copied to the spare data storage drive, changing the setting of the indicator in hardware so that write requests received for the online data storage drive during the copyback process are re-issued to the spare data storage drive.
  Type: Application
  Filed: May 22, 2017
  Publication date: November 22, 2018
  Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
- Patent number: 10108359
  Abstract: A system and method for efficient cache buffering are provided. The disclosed method includes receiving a host command from a host, extracting command information from the host command, determining an Input/Output (I/O) action to be taken in connection with the host command, determining that the I/O action spans more than one strip, and, based on the I/O action spanning more than one strip, allocating a cache frame anchor for a row on-demand along with a cache frame anchor for a strip to accommodate the I/O action.
  Type: Grant
  Filed: October 26, 2016
  Date of Patent: October 23, 2018
  Assignee: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.
  Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
  (A sketch of the row-anchor decision appears after the listing.)
- Publication number: 20180267902
  Abstract: Disclosed are embodiments for supporting dynamic tier remapping of data stored in a hybrid storage system. One embodiment includes a storage controller and firmware, where the firmware maintains a plurality of mapping elements, where each mapping element includes a plurality of group identifiers, and where each group identifier is configured to indicate a mapping of logical block addresses. The storage controller performs: receiving a read command including a logical block address; parsing the logical block address to determine a mapping element and a group identifier; determining, for a particular mapping element of the plurality of mapping elements, whether the particular mapping element is locked, wherein the particular mapping element corresponds to the mapping element of the logical block address; and, dependent upon the particular mapping element, queuing the read command for firmware processing or remapping the logical block address.
  Type: Application
  Filed: May 16, 2018
  Publication date: September 20, 2018
  Inventors: Joseph R. Edwards, Robert Galbraith, Adrian C. Gerhard, Daniel F. Moertl, Gowrisankar Radhakrishnan, Rick A. Weckwerth
- Patent number: 10078595
  Abstract: A method and controller for implementing storage adapter enhanced write cache management, and a design structure on which the subject controller circuit resides, are provided. The controller includes a hardware write cache engine implementing hardware acceleration for storage write cache management. The hardware write cache engine monitors cache levels used for managing cache destage rates and thresholds for destages from the storage write cache, substantially without using firmware, greatly enhancing performance.
  Type: Grant
  Filed: November 26, 2017
  Date of Patent: September 18, 2018
  Assignee: International Business Machines Corporation
  Inventors: Brian E. Bakke, Joseph R. Edwards, Robert E. Galbraith, Adrian C. Gerhard, Daniel F. Moertl, Gowrisankar Radhakrishnan, Rick A. Weckwerth
  (A sketch of threshold-based destage rates appears after the listing.)
- Patent number: 10078460
  Abstract: A system and method for efficient cache buffering are provided. The disclosed method includes receiving a host command from a host, extracting command information from the host command, determining an Input/Output (I/O) action to be taken in connection with the host command, identifying a particular memory module from among a plurality of memory modules to execute the I/O action, generating an accelerated I/O message for transmission to the particular memory module, the accelerated I/O message comprising at least one Internal Scatter Gather List (ISGL) having a plurality of Scatter Gather Extents (SGEs) that enable the particular memory module to execute the I/O action solely based on the at least one ISGL, and transmitting the accelerated I/O message to the particular memory module.
  Type: Grant
  Filed: October 26, 2016
  Date of Patent: September 18, 2018
  Assignee: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.
  Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
- Patent number: 10067880
  Abstract: Disclosed are embodiments for supporting dynamic tier remapping of data stored in a hybrid storage system. One embodiment includes a storage controller and firmware, where the firmware maintains a plurality of mapping elements, where each mapping element includes a plurality of group identifiers, and where each group identifier is configured to indicate a mapping of logical block addresses. The storage controller performs: receiving a read command including a logical block address; parsing the logical block address to determine a mapping element and a group identifier; determining, for a particular mapping element of the plurality of mapping elements, whether the particular mapping element is locked, wherein the particular mapping element corresponds to the mapping element of the logical block address; and, dependent upon the particular mapping element, queuing the read command for firmware processing or remapping the logical block address.
  Type: Grant
  Filed: February 29, 2016
  Date of Patent: September 4, 2018
  Assignee: International Business Machines Corporation
  Inventors: Joseph R. Edwards, Robert Galbraith, Adrian C. Gerhard, Daniel F. Moertl, Gowrisankar Radhakrishnan, Rick A. Weckwerth
- Publication number: 20180113634
  Abstract: A system and method for efficient cache buffering are provided. The disclosed method includes receiving an Input/Output (I/O) command from a host system at a storage controller, parsing the I/O command at the storage controller with a host I/O manager to extract command instructions therefrom. The host I/O manager is able to generate at least one local message that includes the command instructions extracted from the I/O command and transmit the at least one local message to a cache manager. The cache manager is enabled to work in local memory to execute the command instructions contained in the at least one message. The cache manager is also configured to chain multiple buffer segments together on-demand to support multiple stripe sizes that are specific to the I/O command received from the host system.
  Type: Application
  Filed: October 26, 2016
  Publication date: April 26, 2018
  Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
- Publication number: 20180113639
  Abstract: A system and method for efficient variable-length memory frame allocation are described. The described method includes receiving a frame allocation request from a host system, allocating a super frame from a stack of free super frames for the frame allocation request, the super frame comprising a set of consecutively numbered frames, updating entries in a super frame bitmap to indicate that the super frame has been allocated from the stack of free super frames, determining a super frame identifier for the allocated super frame, and enabling the super frame or the set of consecutively numbered frames to be used to store data in connection with the frame allocation request or subsequent frame allocation requests from the host system.
  Type: Application
  Filed: October 26, 2016
  Publication date: April 26, 2018
  Inventors: Horia Simionescu, Eugene Saghi, Sridhar Rao Veerla, Panthini Pandit, Timothy Hoglund, Gowrisankar Radhakrishnan
  (A sketch of the super frame allocator appears after the listing.)
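
Illustrative code sketches

The sketches below are toy C models of the mechanisms summarized in several of the abstracts above. They are editorial illustrations only: every structure name, field, size, and threshold is invented here and is not taken from the patents or from any shipped controller.

Patent 10649906 (and the matching application 20190332541) describes finding a cached row with a hash search and locking it, with later requests held on a row lock wait list until the row is unlocked. A minimal sketch of that lock-or-park decision, assuming a trivial modulo hash and a small fixed wait list per slot; names such as row_slot and try_lock_row are hypothetical:

    /* Illustrative only: a toy row-lock table with a per-row wait list.
     * All names and sizes are hypothetical. */
    #include <stdio.h>
    #include <stdbool.h>

    #define SLOTS   8
    #define WAITERS 4

    struct row_slot {
        unsigned row;            /* row number cached in this slot   */
        bool     locked;         /* row is locked for a flush/write  */
        unsigned wait[WAITERS];  /* queued I/O ids (the "wait list") */
        int      nwait;
    };

    static struct row_slot table[SLOTS];

    static struct row_slot *hash_lookup(unsigned row)
    {
        return &table[row % SLOTS];          /* trivial hash: row mod slots */
    }

    /* Returns true if the I/O may proceed; otherwise it is parked. */
    static bool try_lock_row(unsigned row, unsigned io_id)
    {
        struct row_slot *s = hash_lookup(row);
        if (!s->locked) {
            s->row = row;
            s->locked = true;
            return true;
        }
        if (s->nwait < WAITERS)
            s->wait[s->nwait++] = io_id;     /* held until the row is unlocked */
        return false;
    }

    static void unlock_row(unsigned row)
    {
        struct row_slot *s = hash_lookup(row);
        s->locked = false;
        if (s->nwait > 0)
            printf("row %u unlocked, waking I/O %u\n", row, s->wait[--s->nwait]);
    }

    int main(void)
    {
        printf("I/O 1 lock row 5: %d\n", try_lock_row(5, 1));   /* proceeds */
        printf("I/O 2 lock row 5: %d\n", try_lock_row(5, 2));   /* parked   */
        unlock_row(5);                                          /* wakes I/O 2 */
        return 0;
    }

A real controller would wake waiters in arrival order and handle hash collisions; the sketch only shows the lock-or-park decision driven by the hash lookup.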
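
Application 20200057576 describes caching a second write to a row while a first write to the same row is still in flight, and processing it only after the first completes. A toy model, assuming one in-flight write per row and a small pending queue; row_state and its handling are hypothetical:

    /* Illustrative only: deferring a second write to a row that already
     * has a write in flight. Names and sizes are hypothetical. */
    #include <stdio.h>
    #include <stdbool.h>

    #define MAX_PENDING 4

    struct row_state {
        bool     write_in_flight;
        unsigned pending[MAX_PENDING];       /* cached subsequent writes */
        int      npending;
    };

    static void submit_write(struct row_state *r, unsigned io_id)
    {
        if (!r->write_in_flight) {
            r->write_in_flight = true;
            printf("I/O %u issued to the row\n", io_id);
        } else if (r->npending < MAX_PENDING) {
            r->pending[r->npending++] = io_id;   /* hold until the first completes */
            printf("I/O %u cached behind the in-flight write\n", io_id);
        }
    }

    static void complete_write(struct row_state *r)
    {
        r->write_in_flight = false;
        if (r->npending > 0) {
            unsigned next = r->pending[0];
            for (int i = 1; i < r->npending; i++)   /* shift the queue forward */
                r->pending[i - 1] = r->pending[i];
            r->npending--;
            submit_write(r, next);                  /* now it can be processed */
        }
    }

    int main(void)
    {
        struct row_state row = {0};
        submit_write(&row, 1);      /* first write starts immediately        */
        submit_write(&row, 2);      /* second write to the same row is cached */
        complete_write(&row);       /* completion releases the cached write   */
        return 0;
    }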
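
Patent 10528438 (application 20180341564) describes a hash search that either lets an I/O proceed on the block it names or diverts it to a different block, possibly handing it from hardware to firmware. A sketch under the assumption that each hash slot simply records a bad block and its replacement:

    /* Illustrative only: diverting an I/O away from a block that a hash-slot
     * table marks as bad. Table layout and names are hypothetical. */
    #include <stdio.h>
    #include <stdbool.h>

    #define SLOTS 16

    struct hash_slot {
        unsigned block;      /* block number this slot describes */
        bool     valid;      /* slot in use                      */
        bool     bad;        /* block is known bad               */
        unsigned remap_to;   /* replacement block, if diverted   */
    };

    static struct hash_slot slots[SLOTS];

    static void mark_bad(unsigned block, unsigned remap_to)
    {
        struct hash_slot *s = &slots[block % SLOTS];
        s->block = block;
        s->valid = true;
        s->bad = true;
        s->remap_to = remap_to;
    }

    /* Returns the block the I/O should actually use. */
    static unsigned route_io(unsigned block)
    {
        struct hash_slot *s = &slots[block % SLOTS];
        if (s->valid && s->block == block && s->bad) {
            printf("block %u is bad, diverting I/O to block %u\n",
                   block, s->remap_to);
            return s->remap_to;   /* e.g. handled by firmware instead of hardware */
        }
        return block;             /* fast path: perform the I/O as issued */
    }

    int main(void)
    {
        mark_bad(42, 1042);
        printf("I/O to block 7 lands on block %u\n", route_io(7));
        printf("I/O to block 42 lands on block %u\n", route_io(42));
        return 0;
    }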
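
Patent 10467150 (also published as 20180267902 and granted earlier as 10067880) describes splitting a logical block address into a mapping element and a group identifier, then either queuing the read for firmware when the element is locked or remapping it in hardware. The bit split below is an assumption made for the sketch, not the patented layout:

    /* Illustrative only: LBA -> mapping element + group id, with a lock check.
     * The address split, table sizes, and names are hypothetical. */
    #include <stdio.h>
    #include <stdbool.h>

    #define GROUPS_PER_ELEMENT 16
    #define ELEMENTS            8

    struct mapping_element {
        bool     locked;                          /* firmware is updating this mapping */
        unsigned group_base[GROUPS_PER_ELEMENT];  /* remap target base per group id    */
    };

    static struct mapping_element map[ELEMENTS];

    static void handle_read(unsigned lba)
    {
        unsigned element = (lba >> 8) % ELEMENTS;            /* assumed split of the LBA */
        unsigned group   = (lba >> 4) & (GROUPS_PER_ELEMENT - 1);
        unsigned offset  =  lba       & 0xF;

        if (map[element].locked) {
            printf("lba %u: element %u is locked, queue the read for firmware\n",
                   lba, element);
            return;
        }
        unsigned physical = map[element].group_base[group] + offset;  /* hardware remap */
        printf("lba %u -> element %u, group %u -> physical %u\n",
               lba, element, group, physical);
    }

    int main(void)
    {
        map[1].group_base[2] = 5000;   /* group 2 of element 1 maps to 5000 */
        map[2].locked = true;          /* element 2 is being migrated       */
        handle_read(0x125);            /* element 1, group 2, offset 5      */
        handle_read(0x210);            /* element 2: locked, queued         */
        return 0;
    }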
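
Patent 10394673 (application 20180335963) describes setting a hardware indicator to divert write completions on the online drive, copying the drive to the spare strip by strip, and only then redirecting writes to the spare. A toy sequential model of that ordering; the divert flag and bookkeeping arrays are invented:

    /* Illustrative only: a toy copyback flow. Writes that complete on the online
     * drive while the divert flag is set are recorded and re-issued to the spare
     * once every strip has been copied. All names are hypothetical. */
    #include <stdio.h>
    #include <stdbool.h>

    #define STRIPS 4

    static int  online_drive[STRIPS] = {10, 11, 12, 13};
    static int  spare_drive[STRIPS];
    static bool divert_completions;              /* "hardware" indicator */
    static int  diverted_strip[STRIPS];
    static int  diverted_value[STRIPS];
    static int  ndiverted;

    static void host_write(int strip, int value)
    {
        online_drive[strip] = value;             /* write still lands on the online drive */
        if (divert_completions) {                /* ...but its completion is diverted     */
            diverted_strip[ndiverted] = strip;
            diverted_value[ndiverted] = value;
            ndiverted++;
        }
    }

    static void copyback(void)
    {
        divert_completions = true;               /* set before any strip is copied        */
        for (int s = 0; s < STRIPS; s++)         /* incremental, per-strip copy           */
            spare_drive[s] = online_drive[s];
        divert_completions = false;              /* flipped only after all strips copied  */
        for (int i = 0; i < ndiverted; i++)      /* re-issue diverted writes to the spare */
            spare_drive[diverted_strip[i]] = diverted_value[i];
        ndiverted = 0;
    }

    int main(void)
    {
        host_write(2, 99);                       /* before copyback: nothing diverted */
        copyback();
        printf("spare strip 2 = %d\n", spare_drive[2]);   /* 99 */
        return 0;
    }

The sketch runs the copy synchronously, so no write actually races it; the point is only the ordering the abstract describes: divert first, copy per strip, and switch over only when the copy is complete.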
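
Patent 10282301 (application 20180336138) describes allocating a new LMID for a read-ahead and asking a buffer manager to fill an ISGL with buffer segment identifiers. The sketch below only mimics the allocation bookkeeping; the real LMID, ISGL, and BSID formats are not reproduced here and every field is invented:

    /* Illustrative only: allocating a request id and a small scatter-gather list
     * of buffer ids for a read-ahead. Layouts are hypothetical. */
    #include <stdio.h>

    #define SG_ENTRIES 4

    struct isgl {                       /* toy "internal scatter gather list" */
        unsigned bsid[SG_ENTRIES];      /* buffer segment ids                 */
        int      count;
    };

    static unsigned next_lmid = 100;    /* toy local message id allocator  */
    static unsigned next_bsid = 1;      /* toy buffer segment id allocator */

    static unsigned alloc_lmid(void) { return next_lmid++; }

    static void alloc_buffers(struct isgl *sgl, int nblocks)
    {
        sgl->count = 0;
        for (int i = 0; i < nblocks && i < SG_ENTRIES; i++)
            sgl->bsid[sgl->count++] = next_bsid++;
    }

    static void start_read_ahead(unsigned strip, int nblocks)
    {
        unsigned lmid = alloc_lmid();   /* new message id for the read-ahead */
        struct isgl sgl;
        alloc_buffers(&sgl, nblocks);   /* "buffer manager" fills the ISGL   */
        printf("read-ahead LMID %u on strip %u uses %d buffers:",
               lmid, strip, sgl.count);
        for (int i = 0; i < sgl.count; i++)
            printf(" %u", sgl.bsid[i]);
        printf("\n");
        /* a cache-manager stage could now hash on the strip number and this LMID */
    }

    int main(void)
    {
        start_read_ahead(7, 3);
        return 0;
    }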
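
Patent 10282116 (application 20190026033) describes collecting the cache segment and buffer segment identifiers of each dirty strip into scatter-gather lists and handing them to a cache manager as a flush message. A reduced sketch with one CSID/BSID pair per strip; flush_msg stands in for the flush LMID and is entirely hypothetical:

    /* Illustrative only: building a flush message by walking strips and recording
     * the ids of each dirty one. CSID/BSID/message layouts are hypothetical. */
    #include <stdio.h>
    #include <stdbool.h>

    #define STRIPS 4

    struct strip {
        bool     dirty;
        unsigned csid;     /* cache segment id  */
        unsigned bsid;     /* buffer segment id */
    };

    struct flush_msg {                 /* toy stand-in for the flush LMID */
        unsigned csid[STRIPS];
        unsigned bsid[STRIPS];
        int      count;
    };

    static void build_flush(const struct strip *row, struct flush_msg *msg)
    {
        msg->count = 0;
        for (int s = 0; s < STRIPS; s++) {
            if (!row[s].dirty)
                continue;              /* clean strips are skipped */
            msg->csid[msg->count] = row[s].csid;
            msg->bsid[msg->count] = row[s].bsid;
            msg->count++;
        }
    }

    int main(void)
    {
        struct strip row[STRIPS] = {
            { true, 11, 201 }, { false, 12, 202 }, { true, 13, 203 }, { false, 14, 204 },
        };
        struct flush_msg msg;
        build_flush(row, &msg);        /* hand this to a cache-manager stage */
        for (int i = 0; i < msg.count; i++)
            printf("flush CSID %u via BSID %u\n", msg.csid[i], msg.bsid[i]);
        return 0;
    }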
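
Patent 10223009 (application 20180113634) describes a host I/O manager that parses a host command into a local message for a cache manager and chains buffer segments on demand to cover the transfer. A sketch assuming fixed 4 KiB segments and an invented local_msg layout:

    /* Illustrative only: parsing a command into a local message and chaining
     * enough fixed-size buffer segments to cover it. All names are hypothetical. */
    #include <stdio.h>

    #define SEGMENT_BYTES 4096
    #define MAX_SEGMENTS  8

    struct local_msg {
        unsigned lba;
        unsigned length_bytes;
        int      segments[MAX_SEGMENTS];   /* chained buffer segment ids */
        int      nsegments;
    };

    static int next_segment = 1;

    static void parse_host_command(unsigned lba, unsigned length_bytes,
                                   struct local_msg *msg)
    {
        msg->lba = lba;
        msg->length_bytes = length_bytes;
        msg->nsegments = 0;
        /* chain segments on demand until the whole transfer is covered */
        for (unsigned covered = 0;
             covered < length_bytes && msg->nsegments < MAX_SEGMENTS;
             covered += SEGMENT_BYTES)
            msg->segments[msg->nsegments++] = next_segment++;
    }

    int main(void)
    {
        struct local_msg msg;
        parse_host_command(2048, 10000, &msg);   /* 10000 bytes -> 3 segments */
        printf("local message for lba %u uses %d chained segments\n",
               msg.lba, msg.nsegments);
        return 0;
    }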
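
Patent 10108359 describes allocating a cache frame anchor for a row on demand, in addition to a per-strip anchor, when an I/O spans more than one strip. A sketch that assumes an invented strip size and reduces the anchors to bare identifiers:

    /* Illustrative only: deciding whether an I/O needs a row-level anchor based
     * on how many strips it touches. Strip size and names are hypothetical. */
    #include <stdio.h>
    #include <stdbool.h>

    #define STRIP_BLOCKS 16   /* blocks per strip in this toy layout */

    struct frame_anchor { unsigned id; };

    static unsigned next_anchor = 1;

    static struct frame_anchor alloc_anchor(void)
    {
        struct frame_anchor a = { next_anchor++ };
        return a;
    }

    static void handle_io(unsigned start_block, unsigned nblocks)
    {
        unsigned first_strip = start_block / STRIP_BLOCKS;
        unsigned last_strip  = (start_block + nblocks - 1) / STRIP_BLOCKS;
        bool spans_row = last_strip > first_strip;

        if (spans_row) {
            struct frame_anchor row_anchor = alloc_anchor();   /* allocated on demand */
            printf("I/O spans strips %u..%u: row anchor %u plus per-strip anchors\n",
                   first_strip, last_strip, row_anchor.id);
        } else {
            struct frame_anchor strip_anchor = alloc_anchor();
            printf("I/O stays in strip %u: strip anchor %u only\n",
                   first_strip, strip_anchor.id);
        }
    }

    int main(void)
    {
        handle_io(3, 8);    /* blocks 3..10, one strip   */
        handle_io(12, 10);  /* blocks 12..21, two strips */
        return 0;
    }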
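
Patent 10078595 describes a hardware write cache engine that watches cache fill levels and adjusts destage rates against thresholds without firmware in the fast path. The sketch below only shows the threshold idea; the levels and rates are made up:

    /* Illustrative only: picking a destage rate from how full the write cache is.
     * Threshold values are hypothetical. */
    #include <stdio.h>

    /* Return how many cache pages to destage this interval for a given fill level. */
    static int destage_rate(int used_pages, int total_pages)
    {
        int percent_full = used_pages * 100 / total_pages;

        if (percent_full >= 90) return 64;   /* high water mark: destage aggressively */
        if (percent_full >= 70) return 16;   /* getting full: moderate destage        */
        if (percent_full >= 40) return 4;    /* background trickle                    */
        return 0;                            /* below the low threshold: no destage   */
    }

    int main(void)
    {
        int total = 1000;
        int levels[] = { 100, 450, 750, 950 };
        for (int i = 0; i < 4; i++)
            printf("%d/%d pages used -> destage %d pages\n",
                   levels[i], total, destage_rate(levels[i], total));
        return 0;
    }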
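
Application 20180113639 describes allocating a super frame, a run of consecutively numbered frames, from a stack of free super frames and recording the allocation in a super frame bitmap. A sketch with invented sizes for the stack and bitmap:

    /* Illustrative only: popping a "super frame" off a free stack and marking it
     * allocated in a bitmap. Frame counts and names are hypothetical. */
    #include <stdio.h>
    #include <stdint.h>

    #define SUPER_FRAMES     8
    #define FRAMES_PER_SUPER 4

    static uint8_t  bitmap;                    /* bit n set = super frame n allocated */
    static unsigned free_stack[SUPER_FRAMES];  /* stack of free super frame ids       */
    static int      free_top;

    static void init_allocator(void)
    {
        for (int i = 0; i < SUPER_FRAMES; i++)
            free_stack[i] = SUPER_FRAMES - 1 - i;   /* push 7..0 so 0 pops first */
        free_top = SUPER_FRAMES;
        bitmap = 0;
    }

    /* Returns the id of the allocated super frame, or -1 if none are free. */
    static int alloc_super_frame(void)
    {
        if (free_top == 0)
            return -1;
        unsigned id = free_stack[--free_top];
        bitmap |= (uint8_t)(1u << id);              /* record the allocation */
        printf("super frame %u -> frames %u..%u, bitmap 0x%02x\n",
               id, id * FRAMES_PER_SUPER,
               id * FRAMES_PER_SUPER + FRAMES_PER_SUPER - 1, (unsigned)bitmap);
        return (int)id;
    }

    int main(void)
    {
        init_allocator();
        alloc_super_frame();   /* super frame 0: frames 0..3 */
        alloc_super_frame();   /* super frame 1: frames 4..7 */
        return 0;
    }

Because each super frame covers a fixed run of consecutive frame numbers, a single identifier plus one bitmap bit is enough bookkeeping to hand out or reclaim the whole run, which is the appeal of the scheme the abstract describes.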