Patents by Inventor Sridhar Rao Veerla

Sridhar Rao Veerla has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11150807
    Abstract: Embodiments herein provide for dynamic storage system configuration. In one embodiment, a storage controller is operable to configure a storage volume from a plurality of storage devices. The storage controller includes an interface operable to receive a first write I/O request from a host system, and to extract a storage configuration attribute from the first write I/O request. The storage controller also includes a processor communicatively coupled to the interface and operable to identify a storage configuration required by the first write I/O request based on the storage configuration attribute, to determine whether the storage volume comprises the required storage configuration of the first write I/O request, and to configure a portion of the storage volume according to the storage configuration required by the first write I/O request in response to a determination that the storage volume does not comprise the required storage configuration.
    Type: Grant
    Filed: February 23, 2015
    Date of Patent: October 19, 2021
    Assignee: Avago Technologies International Sales Pte. Limited
    Inventors: Naveen Krishnamurthy, Sridhar Rao Veerla, Basavaraj G. Hallyal
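    As an illustration of the mechanism in the abstract above, the following is a minimal C sketch of configuring a portion of a volume on demand when a write I/O carries a storage configuration attribute the volume does not yet satisfy. All names (write_io_t, volume_t, handle_write, the region mapping) are hypothetical; the patent does not define a concrete API.

      #include <stdbool.h>
      #include <stdio.h>

      typedef enum { RAID0, RAID1, RAID5 } raid_level_t;   /* example config attribute */

      typedef struct {
          unsigned     lba;           /* target logical block address          */
          raid_level_t required_cfg;  /* attribute extracted from the request  */
      } write_io_t;

      typedef struct {
          raid_level_t region_cfg[16];    /* configuration of each volume region */
          bool         region_valid[16];  /* has this region been configured?    */
      } volume_t;

      /* Configure only the targeted region if it lacks the required config. */
      static void handle_write(volume_t *vol, const write_io_t *io)
      {
          unsigned region = (io->lba / 1024) % 16;          /* toy region mapping */

          if (!vol->region_valid[region] ||
              vol->region_cfg[region] != io->required_cfg) {
              vol->region_cfg[region]   = io->required_cfg;
              vol->region_valid[region] = true;
              printf("region %u reconfigured for this write\n", region);
          }
          /* ... issue the write to the now-correctly-configured region ... */
      }

      int main(void)
      {
          volume_t vol = {0};
          write_io_t io = { .lba = 4096, .required_cfg = RAID5 };
          handle_write(&vol, &io);
          return 0;
      }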
  • Patent number: 10649906
    Abstract: A system and method for efficient cache flushing are provided. The disclosed method includes maintaining a data structure in connection with a plurality of blocks used for data caching, the data structure including a row lock wait list section. The method further includes receiving an Input/Output (I/O) request, performing a hash search for the I/O request against the data structure, and based on the results of the hash search, locking at least one row in a data cache thereby preventing read and write operations from being performed on the at least one row until the at least one row is unlocked.
    Type: Grant
    Filed: April 30, 2018
    Date of Patent: May 12, 2020
    Assignee: Avago Technologies International Sales Pte. Limited
    Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
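    The row-lock wait list described above can be pictured with a small C sketch: each hashed row entry holds a lock flag and a wait list, and an I/O either takes the lock or parks on the list until the row is unlocked. row_entry_t, submit_io, and complete_io are assumed names for illustration only.

      #include <stdbool.h>
      #include <stdio.h>

      #define NUM_ROWS 8

      typedef struct io_req {
          unsigned row;               /* cache row targeted by the request */
          struct io_req *next;        /* link for the row-lock wait list   */
      } io_req_t;

      typedef struct {
          bool      locked;           /* row is being flushed/updated      */
          io_req_t *wait_head;        /* requests parked until unlock      */
      } row_entry_t;

      static row_entry_t rows[NUM_ROWS];

      static unsigned hash_row(unsigned row) { return row % NUM_ROWS; }

      /* Hash-search the row; either take the lock or park on the wait list. */
      static void submit_io(io_req_t *io)
      {
          row_entry_t *e = &rows[hash_row(io->row)];

          if (e->locked) {                 /* row busy: defer the request   */
              io->next     = e->wait_head;
              e->wait_head = io;
              printf("io on row %u queued\n", io->row);
          } else {
              e->locked = true;            /* lock row, start the operation */
              printf("io on row %u started\n", io->row);
          }
      }

      /* Unlock the row and restart one parked request, if any. */
      static void complete_io(unsigned row)
      {
          row_entry_t *e = &rows[hash_row(row)];
          e->locked = false;
          if (e->wait_head) {
              io_req_t *next = e->wait_head;
              e->wait_head   = next->next;
              submit_io(next);
          }
      }

      int main(void)
      {
          io_req_t a = { .row = 3 }, b = { .row = 3 };
          submit_io(&a);      /* locks row 3          */
          submit_io(&b);      /* parked on wait list  */
          complete_io(3);     /* unlocks, restarts b  */
          return 0;
      }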
  • Publication number: 20200057576
    Abstract: A system and method for efficient write-through processing of Input/Output (I/O) requests are provided. One example of the illustrative method includes receiving a first write request to a first row; while processing the first write request, receiving a subsequent write request to the same row; and caching the subsequent write request for processing until the first write request is completed.
    Type: Application
    Filed: August 16, 2018
    Publication date: February 20, 2020
    Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
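    A small sketch, in the same spirit, of the per-row write serialization described above: a second write arriving while the first is in flight is held and only processed once the first completes. The row_state_t fields are assumptions, and only a single deferred write is modelled for brevity.

      #include <stdbool.h>
      #include <stdio.h>

      typedef struct {
          bool in_flight;       /* a write to this row is currently processing */
          bool deferred;        /* a later write arrived and is being held     */
          int  deferred_data;   /* payload of the held write                   */
      } row_state_t;

      static void write_row(row_state_t *row, int data)
      {
          if (row->in_flight) {          /* row busy: cache the new write */
              row->deferred      = true;
              row->deferred_data = data;
              return;
          }
          row->in_flight = true;
          printf("writing %d through to backing store\n", data);
      }

      static void write_done(row_state_t *row)
      {
          row->in_flight = false;
          if (row->deferred) {           /* now process the held write */
              row->deferred = false;
              write_row(row, row->deferred_data);
          }
      }

      int main(void)
      {
          row_state_t r = {0};
          write_row(&r, 1);   /* starts immediately          */
          write_row(&r, 2);   /* held until the first completes */
          write_done(&r);     /* releases and issues write 2 */
          return 0;
      }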
  • Patent number: 10528438
    Abstract: A system and method for managing bad blocks in a hardware accelerated caching solution are provided. The disclosed method includes receiving an Input/Output (I/O) request, performing a hash search for the I/O request against a hash slot data structure, and based on the results of the hash search, either performing the I/O request with a data block identified in the I/O request or diverting the I/O request to a new data block not identified in the I/O request. The diversion may also include diverting the I/O request from hardware to firmware of a memory controller.
    Type: Grant
    Filed: May 25, 2017
    Date of Patent: January 7, 2020
    Assignee: Avago Technologies International Sales Pte. Limited
    Inventors: Horia Simionescu, Gowrisankar Radhakrishnan, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit
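    The diversion decision in the abstract above can be sketched as a hash lookup against a bad-block table: a hit redirects the I/O to a replacement block, a miss lets it proceed unchanged. The table layout, slot_of(), and the block numbers are illustrative assumptions, not the patented data structure.

      #include <stdio.h>

      #define SLOTS 16

      typedef struct {
          unsigned bad_block;    /* block known to be bad (0 = slot empty) */
          unsigned remap_block;  /* replacement block to divert to         */
      } hash_slot_t;

      static hash_slot_t table[SLOTS];

      static unsigned slot_of(unsigned block) { return block % SLOTS; }

      static void mark_bad(unsigned bad, unsigned remap)
      {
          table[slot_of(bad)] = (hash_slot_t){ .bad_block = bad, .remap_block = remap };
      }

      /* Return the block the I/O should actually use. */
      static unsigned resolve_block(unsigned requested)
      {
          hash_slot_t *s = &table[slot_of(requested)];
          if (s->bad_block == requested) {
              /* Hit in the bad-block table: divert to the replacement block
               * (in the patent this path may also hand off to firmware). */
              return s->remap_block;
          }
          return requested;       /* no diversion needed */
      }

      int main(void)
      {
          mark_bad(42, 1042);
          printf("I/O to block 42 lands on block %u\n", resolve_block(42));
          printf("I/O to block 7 lands on block %u\n",  resolve_block(7));
          return 0;
      }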
  • Publication number: 20190332541
    Abstract: A system and method for efficient cache flushing are provided. The disclosed method includes maintaining a data structure in connection with a plurality of blocks used for data caching, the data structure including a row lock wait list section. The method further includes receiving an Input/Output (I/O) request, performing a hash search for the I/O request against the data structure, and based on the results of the hash search, locking at least one row in a data cache thereby preventing read and write operations from being performed on the at least one row until the at least one row is unlocked.
    Type: Application
    Filed: April 30, 2018
    Publication date: October 31, 2019
    Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
  • Patent number: 10394673
    Abstract: A system and method for performing a copyback operation are provided. The disclosed method includes initiating a copyback process to move data from an online data storage drive to a spare data storage drive by setting an indicator in hardware to divert all write completions on the online data storage drive. The method further includes, while the indicator in hardware is set to divert the write completions, incrementally copying data on a per-strip basis from the online data storage drive to the spare data storage drive. The method further includes, only after all data from the online data storage drive has been copied to the spare data storage drive, changing the setting of the indicator in hardware so that write requests received for the online data storage drive during the copyback process are re-issued to the spare data storage drive.
    Type: Grant
    Filed: May 22, 2017
    Date of Patent: August 27, 2019
    Assignee: Avago Technologies International Sales Pte. Limited
    Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
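    A rough C sketch of the copyback sequencing described above, with the hardware diversion indicator modelled as a flag: completions on the online drive are diverted, strips are copied one at a time, and only after the final strip is the indicator changed so that writes land on the spare drive. drive_t, copy_strip(), and the strip count are assumptions for illustration.

      #include <stdbool.h>
      #include <stdio.h>

      #define STRIPS 4

      typedef struct {
          int  strips[STRIPS];
          bool divert_writes;   /* stands in for the hardware indicator */
      } drive_t;

      static drive_t *write_target;   /* drive that new writes are issued to */

      static void copy_strip(const drive_t *src, drive_t *dst, int i)
      {
          dst->strips[i] = src->strips[i];
          printf("copied strip %d\n", i);
      }

      static void copyback(drive_t *online, drive_t *spare)
      {
          online->divert_writes = true;        /* 1: divert write completions  */

          for (int i = 0; i < STRIPS; i++)     /* 2: copy strip by strip       */
              copy_strip(online, spare, i);

          online->divert_writes = false;       /* 3: only after the last strip */
          write_target = spare;                /*    re-issue diverted writes  */
                                               /*    to the spare drive        */
      }

      int main(void)
      {
          drive_t online = { .strips = {1, 2, 3, 4} }, spare = {0};
          write_target = &online;
          copyback(&online, &spare);
          printf("copyback done; new writes target the %s drive\n",
                 write_target == &spare ? "spare" : "online");
          return 0;
      }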
  • Patent number: 10282301
    Abstract: A system and method for efficient cache buffering are provided. The disclosed method includes determining that a read-ahead operation is to be performed in response to receiving a host Input/Output (I/O) command and, in response to that determination, allocating a new Local Message Identifier (LMID) for the read-ahead operation. The method further includes sending a buffer allocation request to a buffer manager module, the buffer allocation request containing parameters associated with the read-ahead operation, and then causing the buffer manager module to allocate at least one Internal Scatter Gather List (ISGL) and Buffer Section Identifier (BSID) in accordance with the parameters contained in the buffer allocation request. The method further includes enabling a cache manager module to perform a hash search using a row or strip number and identification information available in the new LMID.
    Type: Grant
    Filed: May 18, 2017
    Date of Patent: May 7, 2019
    Assignee: Avago Technologies International Sales Pte. Limited
    Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
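    The flow in the abstract above, sketched in C under assumed names (lmid_t, buffer_alloc_req_t, buffer_manager_alloc, cache_manager_hash_search): a new LMID is created for the read-ahead, the buffer manager answers the allocation request with an ISGL and BSID, and the cache manager hashes on the row number carried in the LMID.

      #include <stdio.h>

      typedef struct {
          int id;           /* Local Message Identifier                */
          int row;          /* row / strip number for the hash search  */
      } lmid_t;

      typedef struct {
          int num_buffers;  /* parameters derived from the read-ahead  */
      } buffer_alloc_req_t;

      typedef struct {
          int isgl_id;      /* Internal Scatter Gather List            */
          int bsid;         /* Buffer Section Identifier               */
      } buffer_alloc_resp_t;

      /* Buffer manager: allocate an ISGL and BSID for the request. */
      static buffer_alloc_resp_t buffer_manager_alloc(buffer_alloc_req_t req)
      {
          static int next = 1;
          (void)req.num_buffers;
          int n = next++;
          return (buffer_alloc_resp_t){ .isgl_id = n, .bsid = n };
      }

      /* Cache manager: hash search keyed on the row number in the LMID. */
      static int cache_manager_hash_search(const lmid_t *lmid)
      {
          return lmid->row % 64;          /* toy hash */
      }

      int main(void)
      {
          lmid_t lmid = { .id = 100, .row = 17 };     /* new LMID for read-ahead */
          buffer_alloc_resp_t b =
              buffer_manager_alloc((buffer_alloc_req_t){ .num_buffers = 4 });
          printf("LMID %d -> ISGL %d / BSID %d, hash bucket %d\n",
                 lmid.id, b.isgl_id, b.bsid, cache_manager_hash_search(&lmid));
          return 0;
      }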
  • Patent number: 10282116
    Abstract: A system and method for efficient cache flushing are provided. The disclosed method includes allocating one or more Internal Scatter Gather Lists (ISGLs) for the cache flush, populating the one or more ISGLs with Cache Segment Identifiers (CSIDs) and corresponding Buffer Segment Identifiers (BSIDs) of each strip that is identified as dirty, of a skip-type Internal Scatter Gather Element (ISGE), or of a missing arm-type ISGE. The disclosed method further includes allocating a flush Local Message Identifier (LMID) as a message to be used in connection with processing the cache flush, populating the flush LMID with an identifier of the one or more ISGLs, and transferring the flush LMID to a cache manager module to enable the cache manager module to execute the cache flush based on information contained in the flush LMID.
    Type: Grant
    Filed: July 19, 2017
    Date of Patent: May 7, 2019
    Assignee: Avago Technologies International Sales Pte. Limited
    Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
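    A compact sketch of the flush construction described above: (CSID, BSID) pairs for the dirty strips are collected into an ISGL, and a flush LMID referencing that ISGL is handed to the cache manager. All structure names here are illustrative assumptions, and the skip-type and missing-arm elements are omitted for brevity.

      #include <stdbool.h>
      #include <stdio.h>

      #define MAX_STRIPS 8

      typedef struct { int csid, bsid; } isge_t;      /* one scatter-gather element */

      typedef struct {
          isge_t elems[MAX_STRIPS];
          int    count;
      } isgl_t;

      typedef struct { int isgl_id; } flush_lmid_t;   /* message to cache manager */

      typedef struct { bool dirty; int csid, bsid; } strip_t;

      static isgl_t build_flush_isgl(const strip_t *strips, int n)
      {
          isgl_t isgl = { .count = 0 };
          for (int i = 0; i < n; i++)
              if (strips[i].dirty)                    /* only dirty strips flush */
                  isgl.elems[isgl.count++] =
                      (isge_t){ .csid = strips[i].csid, .bsid = strips[i].bsid };
          return isgl;
      }

      static void cache_manager_flush(const flush_lmid_t *msg, const isgl_t *isgl)
      {
          for (int i = 0; i < isgl->count; i++)
              printf("flush (ISGL %d): CSID %d -> BSID %d\n",
                     msg->isgl_id, isgl->elems[i].csid, isgl->elems[i].bsid);
      }

      int main(void)
      {
          strip_t strips[] = { {true, 10, 100}, {false, 11, 101}, {true, 12, 102} };
          isgl_t isgl = build_flush_isgl(strips, 3);
          flush_lmid_t lmid = { .isgl_id = 1 };
          cache_manager_flush(&lmid, &isgl);
          return 0;
      }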
  • Patent number: 10223009
    Abstract: A system and method for efficient cache buffering are provided. The disclosed method includes receiving an Input/Output (I/O) command from a host system at a storage controller, parsing the I/O command at the storage controller with a host I/O manager to extract command instructions therefrom. The host I/O manager is able to generate at least one local message that includes the command instructions extracted from the I/O command and transmit the at least one local message to a cache manager. The cache manager is enabled to work in local memory to execute the command instructions contained in the at least one message. The cache manager is also configured to chain multiple buffer segments together on-demand to support multiple stripe sizes that are specific to the I/O command received from the host system.
    Type: Grant
    Filed: October 26, 2016
    Date of Patent: March 5, 2019
    Assignee: Avago Technologies International Sales Pte. Limited
    Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
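    A minimal sketch of the split described above: a host I/O manager reduces the host command to a local message, and a cache manager chains just enough buffer segments to cover the request, so different stripe sizes need no fixed-size buffer. SEG_BYTES, local_msg_t, and the other names are assumptions for illustration.

      #include <stdio.h>
      #include <stdlib.h>

      #define SEG_BYTES 4096

      typedef struct buf_seg {
          char data[SEG_BYTES];
          struct buf_seg *next;   /* segments are chained on demand */
      } buf_seg_t;

      typedef struct { unsigned lba, length; } host_cmd_t;
      typedef struct { unsigned lba, length; } local_msg_t;

      /* Host I/O manager: extract command instructions into a local message. */
      static local_msg_t parse_host_cmd(const host_cmd_t *cmd)
      {
          return (local_msg_t){ .lba = cmd->lba, .length = cmd->length };
      }

      /* Cache manager: chain just enough buffer segments for this command. */
      static buf_seg_t *alloc_chain(unsigned bytes)
      {
          buf_seg_t *head = NULL;
          for (unsigned done = 0; done < bytes; done += SEG_BYTES) {
              buf_seg_t *seg = calloc(1, sizeof *seg);
              seg->next = head;
              head = seg;
          }
          return head;
      }

      int main(void)
      {
          host_cmd_t cmd = { .lba = 0, .length = 10000 };
          local_msg_t msg = parse_host_cmd(&cmd);
          buf_seg_t *chain = alloc_chain(msg.length);

          int segs = 0;
          for (buf_seg_t *s = chain; s; s = s->next)
              segs++;
          printf("chained %d buffer segments for %u bytes\n", segs, msg.length);

          while (chain) {                       /* release the chain */
              buf_seg_t *next = chain->next;
              free(chain);
              chain = next;
          }
          return 0;
      }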
  • Publication number: 20190026033
    Abstract: A system and method for efficient cache flushing are provided. The disclosed method includes allocating one or more Internal Scatter Gather Lists (ISGLs) for the cache flush, populating the one or more ISGLs with Cache Segment Identifiers (CSIDs) and corresponding Buffer Segment Identifiers (BSIDs) of each strip that is identified as dirty, of a skip-type Internal Scatter Gather Element (ISGE), or of a missing arm-type ISGE. The disclosed method further includes allocating a flush Local Message Identifier (LMID) as a message to be used in connection with processing the cache flush, populating the flush LMID with an identifier of the one or more ISGLs, and transferring the flush LMID to a cache manager module to enable the cache manager module to execute the cache flush based on information contained in the flush LMID.
    Type: Application
    Filed: July 19, 2017
    Publication date: January 24, 2019
    Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
  • Publication number: 20180341564
    Abstract: A system and method for managing bad blocks in a hardware accelerated caching solution are provided. The disclosed method includes receiving an Input/Output (I/O) request, performing a hash search for the I/O request against a hash slot data structure, and based on the results of the hash search, either performing the I/O request with a data block identified in the I/O request or diverting the I/O request to a new data block not identified in the I/O request. The diversion may also include diverting the I/O request from hardware to firmware of a memory controller.
    Type: Application
    Filed: May 25, 2017
    Publication date: November 29, 2018
    Inventors: Horia Simionescu, Gowrisankar Radhakrishnan, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit
  • Publication number: 20180335963
    Abstract: A system and method for performing a copyback operation are provided. The disclosed method includes initiating a copyback process to move data from an online data storage drive to a spare data storage drive by setting an indicator in hardware to divert all write completions on the online data storage drive. The method further includes, while the indicator in hardware is set to divert the write completions, incrementally copying data on a per-strip basis from the online data storage drive to the spare data storage drive. The method further includes, only after all data from the online data storage drive has been copied to the spare data storage drive, changing the setting of the indicator in hardware so that write requests received for the online data storage drive during the copyback process are re-issued to the spare data storage drive.
    Type: Application
    Filed: May 22, 2017
    Publication date: November 22, 2018
    Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
  • Publication number: 20180336138
    Abstract: A system and method for efficient cache buffering are provided. The disclosed method includes determining that a read-ahead operation is to be performed in response to receiving a host Input/Output (I/O) command and, in response to that determination, allocating a new Local Message Identifier (LMID) for the read-ahead operation. The method further includes sending a buffer allocation request to a buffer manager module, the buffer allocation request containing parameters associated with the read-ahead operation, and then causing the buffer manager module to allocate at least one Internal Scatter Gather List (ISGL) and Buffer Section Identifier (BSID) in accordance with the parameters contained in the buffer allocation request. The method further includes enabling a cache manager module to perform a hash search using a row or strip number and identification information available in the new LMID.
    Type: Application
    Filed: May 18, 2017
    Publication date: November 22, 2018
    Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
  • Patent number: 10108359
    Abstract: A system and method for efficient cache buffering are provided. The disclosed method includes receiving a host command from a host, extracting command information from the host command, determining an Input/Output (I/O) action to be taken in connection with the host command, determining that the I/O action spans more than one strip, and based on the I/O action spanning more than one strip, allocating a cache frame anchor for a row on-demand along with a cache frame anchor for a strip to accommodate the I/O action.
    Type: Grant
    Filed: October 26, 2016
    Date of Patent: October 23, 2018
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
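    The on-demand row anchor in the abstract above, sketched in C: when the I/O crosses a strip boundary, a row-level cache frame anchor is allocated in addition to the per-strip anchors. The strip size, frame_anchor_t, and handle_io are assumptions for illustration.

      #include <stdbool.h>
      #include <stdio.h>

      #define STRIP_BYTES 65536u

      typedef struct { int id; } frame_anchor_t;

      static int next_anchor = 1;
      static frame_anchor_t alloc_anchor(void) { return (frame_anchor_t){ next_anchor++ }; }

      static void handle_io(unsigned offset, unsigned length)
      {
          unsigned first_strip = offset / STRIP_BYTES;
          unsigned last_strip  = (offset + length - 1) / STRIP_BYTES;
          bool spans_row = last_strip > first_strip;

          if (spans_row) {
              frame_anchor_t row = alloc_anchor();     /* row anchor, on demand */
              printf("row anchor %d allocated\n", row.id);
          }
          for (unsigned s = first_strip; s <= last_strip; s++) {
              frame_anchor_t strip = alloc_anchor();   /* one anchor per strip  */
              printf("strip %u -> anchor %d\n", s, strip.id);
          }
      }

      int main(void)
      {
          handle_io(60000, 10000);   /* crosses a strip boundary: row + 2 strips */
          handle_io(0, 512);         /* single strip: strip anchor only          */
          return 0;
      }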
  • Patent number: 10078460
    Abstract: A system and method for efficient cache buffering are provided. The disclosed method includes receiving a host command from a host, extracting command information from the host command, determining an Input/Output (I/O) action to be taken in connection with the host command, identifying a particular memory module from among a plurality of memory modules to execute the I/O action, generating an accelerated I/O message for transmission to the particular memory module, the accelerated I/O message comprising at least one Internal Scatter Gather List (ISGL) having a plurality of Scatter Gather Extents (SGEs) that enable the particular memory module to execute the I/O action solely based on the at least one ISGL, and transmitting the accelerated I/O message to the particular memory module.
    Type: Grant
    Filed: October 26, 2016
    Date of Patent: September 18, 2018
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
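    A short sketch of the accelerated message described above: the I/O is encoded as a list of scatter-gather extents inside the message, so the chosen memory module can execute it from the message alone. sge_t, accel_msg_t, and module_execute are illustrative assumptions.

      #include <stdio.h>

      #define MAX_SGE 4

      typedef struct { unsigned addr, len; } sge_t;    /* one scatter-gather extent */

      typedef struct {
          int   target_module;      /* memory module chosen to execute the I/O */
          int   nsge;
          sge_t sgl[MAX_SGE];       /* the ISGL carried inside the message     */
      } accel_msg_t;

      /* The memory module executes purely from the extents in the message. */
      static void module_execute(const accel_msg_t *msg)
      {
          for (int i = 0; i < msg->nsge; i++)
              printf("module %d: transfer %u bytes at 0x%x\n",
                     msg->target_module, msg->sgl[i].len, msg->sgl[i].addr);
      }

      int main(void)
      {
          accel_msg_t msg = {
              .target_module = 2,
              .nsge = 2,
              .sgl  = { { 0x1000, 4096 }, { 0x8000, 8192 } },
          };
          module_execute(&msg);
          return 0;
      }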
  • Publication number: 20180113633
    Abstract: A system and method for efficient cache buffering are provided. The disclosed method includes receiving a host command from a host, extracting command information from the host command, determining an Input/Output (I/O) action to be taken in connection with the host command, determining that the I/O action spans more than one strip, and based on the I/O action spanning more than one strip, allocating a cache frame anchor for a row on-demand along with a cache frame anchor for a strip to accommodate the I/O action.
    Type: Application
    Filed: October 26, 2016
    Publication date: April 26, 2018
    Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
  • Publication number: 20180113635
    Abstract: A system and method for efficient cache buffering are provided. The disclosed method includes receiving a host command from a host, extracting command information from the host command, determining an Input/Output (I/O) action to be taken in connection with the host command, identifying a particular memory module from among a plurality of memory modules to execute the I/O action, generating an accelerated I/O message for transmission to the particular memory module, the accelerated I/O message comprising at least one Internal Scatter Gather List (ISGL) having a plurality of Scatter Gather Extents (SGEs) that enable the particular memory module to execute the I/O action solely based on the at least one ISGL, and transmitting the accelerated I/O message to the particular memory module.
    Type: Application
    Filed: October 26, 2016
    Publication date: April 26, 2018
    Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
  • Publication number: 20180113634
    Abstract: A system and method for efficient cache buffering are provided. The disclosed method includes receiving an Input/Output (I/O) command from a host system at a storage controller, parsing the I/O command at the storage controller with a host I/O manager to extract command instructions therefrom. The host I/O manager is able to generate at least one local message that includes the command instructions extracted from the I/O command and transmit the at least one local message to a cache manager. The cache manager is enabled to work in local memory to execute the command instructions contained in the at least one message. The cache manager is also configured to chain multiple buffer segments together on-demand to support multiple stripe sizes that are specific to the I/O command received from the host system.
    Type: Application
    Filed: October 26, 2016
    Publication date: April 26, 2018
    Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
  • Publication number: 20180113810
    Abstract: A system and method for data caching are provided. The disclosed method includes organizing a plurality of hash slots into a plurality of hash slot buckets such that each hash slot bucket in the plurality of hash slot buckets contains a plurality of hash slots having Logical Block Addressing (LBA) and Cache Segment ID (CSID) pairs, receiving an Input/Output (I/O) request from a host system, determining that cache memory is needed to fulfill the I/O request, and performing a cache lookup in connection with fulfilling the I/O request, where the cache lookup includes analyzing the plurality of hash slots for unoccupied hash slots by comparing a hash against hash values assigned to the hash slot buckets instead of individual hash values assigned to the hash slots.
    Type: Application
    Filed: October 26, 2016
    Publication date: April 26, 2018
    Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
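    The bucket-level comparison described above, sketched in C: the lookup compares the request's hash against one value per bucket, and only a matching bucket's (LBA, CSID) slots are scanned. The table sizes, hash function, and names are assumptions for illustration.

      #include <stdio.h>

      #define BUCKETS        8
      #define SLOTS_PER_BKT  4
      #define EMPTY_CSID     0

      typedef struct { unsigned lba, csid; } hash_slot_t;

      typedef struct {
          unsigned    bucket_hash;              /* one hash value per bucket */
          hash_slot_t slots[SLOTS_PER_BKT];     /* LBA / CSID pairs          */
      } hash_bucket_t;

      static hash_bucket_t buckets[BUCKETS];

      static unsigned hash_lba(unsigned lba) { return lba % BUCKETS; }

      /* Return the CSID caching this LBA, or EMPTY_CSID if not cached. */
      static unsigned lookup(unsigned lba)
      {
          unsigned h = hash_lba(lba);
          for (int b = 0; b < BUCKETS; b++) {
              if (buckets[b].bucket_hash != h)      /* compare at bucket level */
                  continue;
              for (int s = 0; s < SLOTS_PER_BKT; s++)
                  if (buckets[b].slots[s].csid != EMPTY_CSID &&
                      buckets[b].slots[s].lba == lba)
                      return buckets[b].slots[s].csid;
          }
          return EMPTY_CSID;
      }

      int main(void)
      {
          buckets[3].bucket_hash = hash_lba(2051);
          buckets[3].slots[0] = (hash_slot_t){ .lba = 2051, .csid = 77 };
          printf("LBA 2051 -> CSID %u\n", lookup(2051));
          printf("LBA 4096 -> CSID %u (miss)\n", lookup(4096));
          return 0;
      }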
  • Publication number: 20180113639
    Abstract: A system and method for efficient variable-length memory frame allocation are described. The method includes receiving a frame allocation request from a host system, allocating a super frame from a stack of free super frames for the frame allocation request, the super frame comprising a set of consecutively numbered frames, updating entries in a super frame bitmap to indicate that the super frame has been allocated from the stack of free super frames, determining a super frame identifier for the allocated super frame, and enabling the super frame or the set of consecutively numbered frames to be allocated for storing data in connection with the frame allocation request or subsequent frame allocation requests from the host system.
    Type: Application
    Filed: October 26, 2016
    Publication date: April 26, 2018
    Inventors: Horia Simionescu, Eugene Saghi, Sridhar Rao Veerla, Panthini Pandit, Timothy Hoglund, Gowrisankar Radhakrishnan
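    A minimal sketch of the super-frame allocator described above: a super frame (a run of consecutively numbered frames) is popped from a stack of free super frames, its bit is set in a bitmap, and its identifier determines the first frame number it covers. FRAMES_PER_SUPER, NUM_SUPER, and the function names are assumptions for illustration.

      #include <stdint.h>
      #include <stdio.h>

      #define FRAMES_PER_SUPER 8       /* consecutive frames per super frame */
      #define NUM_SUPER        32

      static int      free_stack[NUM_SUPER];   /* stack of free super frame ids */
      static int      free_top = -1;
      static uint32_t super_bitmap;            /* bit i set => super frame i allocated */

      static void init_allocator(void)
      {
          for (int i = NUM_SUPER - 1; i >= 0; i--)
              free_stack[++free_top] = i;      /* all super frames start free */
      }

      /* Pop a super frame, mark it allocated, and return its identifier
       * (or -1 if none are free).  The first frame number of the super
       * frame is simply id * FRAMES_PER_SUPER. */
      static int alloc_super_frame(void)
      {
          if (free_top < 0)
              return -1;
          int id = free_stack[free_top--];
          super_bitmap |= 1u << id;            /* update the super frame bitmap */
          return id;
      }

      int main(void)
      {
          init_allocator();
          int id = alloc_super_frame();
          printf("super frame %d -> frames %d..%d, bitmap 0x%08x\n",
                 id, id * FRAMES_PER_SUPER,
                 id * FRAMES_PER_SUPER + FRAMES_PER_SUPER - 1, super_bitmap);
          return 0;
      }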