Patents by Inventor Venkateswarlu Tella

Venkateswarlu Tella has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11960448
    Abstract: Techniques are provided for implementing a unified object format. The unified object format is used to format data in a performance tier (e.g., infrequently accessed data, snapshot data, etc.) into objects that are stored in an object store for low-cost, scalable, long-term storage relative to the performance tier. With the unified object format, compression of the data may be retained when the data is stored as objects in the object store. Additional compression may also be provided for the data in the objects. The unified object format includes slot header metadata used to track the location of the data within the object notwithstanding the data being compressed and/or stored at non-fixed boundaries. The slot header metadata may be cached at the performance tier for improved read performance and may be repaired by a slot header repair subsystem.
    Type: Grant
    Filed: April 28, 2022
    Date of Patent: April 16, 2024
    Assignee: NetApp, Inc.
    Inventors: Palak Sharma, Dibyasri Nandi, Sindhushree K N, Cheryl Marie Thompson, Qinghua Zheng, Venkateswarlu Tella, Debanjan Paul, Dinakaran Narayanan
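
The abstract above describes packing compressed blocks into an object at non-fixed boundaries and locating them via slot header metadata. Below is a minimal sketch of that idea in Python, assuming zlib compression and the illustrative names SlotHeader, pack_object, and read_slot; it is not NetApp's actual on-disk object format.

```python
import zlib
from dataclasses import dataclass

@dataclass
class SlotHeader:
    """Per-slot metadata: where a block lives inside the object and how it is stored."""
    offset: int       # byte offset of the slot's payload within the object body
    length: int       # stored (possibly compressed) length in bytes
    compressed: bool  # whether the payload must be decompressed on read

def pack_object(blocks):
    """Pack logical blocks into one object, compressing each block and recording
    a slot header so reads can locate data despite non-fixed slot boundaries."""
    headers, body = [], bytearray()
    for block in blocks:
        payload = zlib.compress(block)
        compressed = len(payload) < len(block)
        if not compressed:          # keep incompressible data as-is
            payload = block
        headers.append(SlotHeader(offset=len(body), length=len(payload), compressed=compressed))
        body.extend(payload)
    return headers, bytes(body)

def read_slot(headers, body, slot):
    """Use the slot header to read one block without scanning the object."""
    h = headers[slot]
    payload = body[h.offset:h.offset + h.length]
    return zlib.decompress(payload) if h.compressed else payload
```

Because each header records its slot's offset and length, caching only the headers at the performance tier would be enough to serve a read of a single block without scanning the whole object, which mirrors the caching point in the abstract.
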
  • Patent number: 11861169
    Abstract: Techniques are provided for a layout format for compressed data. A first set of data blocks is grouped into a first group based upon a first frequency of access to the first set of data blocks. A second set of data blocks is grouped into a second group based upon a second frequency of access to the second set of data blocks. The first set of data blocks is compressed into a first compression group using a first compression algorithm. The second set of data blocks is compressed into a second compression group using a second compression algorithm.
    Type: Grant
    Filed: September 22, 2020
    Date of Patent: January 2, 2024
    Assignee: NetApp, Inc.
    Inventors: Girish Hebbale Venkatasubbaiah, Rahul Thapliyal, Dnyaneshwar Nagorao Pawar, Kartik Rathnakar, Venkateswarlu Tella, Ananthan Subramanian
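
The grouping-by-access-frequency idea above can be illustrated with a short sketch. The hot/cold split, the hot_threshold parameter, and the use of zlib at levels 1 and 9 as stand-ins for the first and second compression algorithms are assumptions for illustration, not the patented layout.

```python
import zlib

def group_and_compress(blocks, access_counts, hot_threshold=10):
    """Group blocks by access frequency and compress each group differently:
    a fast setting for hot data, a stronger setting for cold data.
    (Per-block offsets within each group are omitted for brevity.)"""
    hot = [b for b, n in zip(blocks, access_counts) if n >= hot_threshold]
    cold = [b for b, n in zip(blocks, access_counts) if n < hot_threshold]

    # Level 1 favors decompression speed for frequently read blocks;
    # level 9 favors space savings for rarely read blocks.
    hot_group = zlib.compress(b"".join(hot), level=1) if hot else b""
    cold_group = zlib.compress(b"".join(cold), level=9) if cold else b""
    return hot_group, cold_group
```
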
  • Patent number: 11740820
    Abstract: Methods and systems for a storage environment are provided. One method includes identifying, by a processor, a plurality of block numbers of a fragmented address space for re-allocation, each block number associated with data stored by a file system in a storage device of a storage system; determining, by the processor, compressed data associated with a block number from among the plurality of block numbers; verifying, by the processor, that an indirect block of a hierarchical structure maintained by the file system references the block number associated with the compressed data; and copying, by the processor, the compressed data to a new block, without decompressing the data.
    Type: Grant
    Filed: May 11, 2022
    Date of Patent: August 29, 2023
    Assignee: NETAPP, INC.
    Inventors: Mathankumar Devarajan, Girish Hebbale Venkatasubbaiah, Venkateswarlu Tella, Dnyaneshwar Nagorao Pawar, Harsh Tiwari
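
A minimal sketch of the verify-then-copy step described above, assuming a dict-based block store and a plain list as the indirect block; the real file system's metadata is far richer, so the names reallocate_compressed_block, block_store, and indirect_block are hypothetical.

```python
def reallocate_compressed_block(block_store, indirect_block, old_bno, new_bno):
    """Move compressed data to a new block number during re-allocation of a
    fragmented address space, without ever decompressing it."""
    # Verify the indirect block still references the candidate block number;
    # if not, the block was freed or rewritten and must be skipped.
    if old_bno not in indirect_block:
        return False

    compressed = block_store[old_bno]     # raw compressed bytes
    block_store[new_bno] = compressed     # copy as-is: no decompress/recompress
    idx = indirect_block.index(old_bno)
    indirect_block[idx] = new_bno         # repoint the indirect block
    del block_store[old_bno]              # the old block number is now free
    return True
```
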
  • Publication number: 20230135151
    Abstract: Techniques are provided for implementing a unified object format. The unified object format is used to format data in a performance tier (e.g., infrequently accessed data, snapshot data, etc.) into objects that are stored in an object store for low-cost, scalable, long-term storage relative to the performance tier. With the unified object format, compression of the data may be retained when the data is stored as objects in the object store. Additional compression may also be provided for the data in the objects. The unified object format includes slot header metadata used to track the location of the data within the object notwithstanding the data being compressed and/or stored at non-fixed boundaries. The slot header metadata may be cached at the performance tier for improved read performance and may be repaired by a slot header repair subsystem.
    Type: Application
    Filed: April 28, 2022
    Publication date: May 4, 2023
    Inventors: Palak Sharma, Dibyasri Nandi, K N Sindhushree, Cheryl Marie Thompson, Qinghua Zheng, Venkateswarlu Tella, Debanjan Paul, Dinakaran Narayanan
  • Publication number: 20230133433
    Abstract: Techniques are provided for implementing a unified object format. The unified object format is used to format data in a performance tier (e.g., infrequently accessed data, snapshot data, etc.) into objects that are stored in an object store for low-cost, scalable, long-term storage relative to the performance tier. With the unified object format, compression of the data may be retained when the data is stored as objects in the object store. Additional compression may also be provided for the data in the objects. The unified object format includes slot header metadata used to track the location of the data within the object notwithstanding the data being compressed and/or stored at non-fixed boundaries. The slot header metadata may be cached at the performance tier for improved read performance and may be repaired by a slot header repair subsystem.
    Type: Application
    Filed: April 28, 2022
    Publication date: May 4, 2023
    Inventors: Palak Sharma, Dibyasri Nandi, Sindhushree K N, Cheryl Marie Thompson, Qinghua Zheng, Venkateswarlu Tella, Debanjan Paul, Dinakaran Narayanan
  • Publication number: 20230135954
    Abstract: Techniques are provided for implementing a unified object format. The unified object format is used to format data in a performance tier (e.g., infrequently accessed data, snapshot data, etc.) into objects that are stored in an object store for low-cost, scalable, long-term storage relative to the performance tier. With the unified object format, compression of the data may be retained when the data is stored as objects in the object store. Additional compression may also be provided for the data in the objects. The unified object format includes slot header metadata used to track the location of the data within the object notwithstanding the data being compressed and/or stored at non-fixed boundaries. The slot header metadata may be cached at the performance tier for improved read performance and may be repaired by a slot header repair subsystem.
    Type: Application
    Filed: April 28, 2022
    Publication date: May 4, 2023
    Inventors: Palak Sharma, Dibyasri Nandi, Sindhushree K N, Cheryl Marie Thompson, Qinghua Zheng, Venkateswarlu Tella, Debanjan Paul, Dinakaran Narayanan
  • Patent number: 11620064
    Abstract: Techniques are provided for asynchronous semi-inline deduplication. A multi-tiered storage arrangement comprises a first storage tier, a second storage tier, etc. An in-memory change log of data recently written to the first storage tier is evaluated to identify a fingerprint of a data block recently written to the first storage tier. A donor data store, comprising fingerprints of data blocks already stored within the first storage tier, is queried using the fingerprint. If the fingerprint is found, then deduplication is performed for the data block to create deduplicated data based upon a potential donor data block within the first storage tier. The deduplicated data is moved from the first storage tier to the second storage tier, such as in response to a determination that the deduplicated data has not been recently accessed. The deduplication is performed before cold data is moved from the first storage tier to the second storage tier.
    Type: Grant
    Filed: July 13, 2021
    Date of Patent: April 4, 2023
    Assignee: NetApp, Inc.
    Inventors: Alok Sharma, Girish Hebbale Venkata Subbaiah, Kartik Rathnakar, Venkateswarlu Tella, Mukul Sharma
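
The asynchronous semi-inline deduplication flow above can be sketched roughly as follows, assuming SHA-256 fingerprints, a dict as the donor data store, and a list as the in-memory change log; the helper names and the tuple-based block references are illustrative, not the patented implementation.

```python
import hashlib

def fingerprint(block: bytes) -> str:
    """Content fingerprint used to look up potential donor blocks."""
    return hashlib.sha256(block).hexdigest()

def dedupe_change_log(change_log, donor_store, tier1):
    """Walk the in-memory change log of recently written blocks and replace
    duplicates with references to existing donor blocks in the first tier."""
    for bno in change_log:
        data = tier1[bno]
        if not isinstance(data, bytes):        # already a shared reference; skip
            continue
        fp = fingerprint(data)
        donor_bno = donor_store.get(fp)
        if donor_bno is not None and donor_bno != bno:
            tier1[bno] = ("ref", donor_bno)    # share the donor's data
        else:
            donor_store[fp] = bno              # this block becomes a potential donor
    change_log.clear()

def tier_out_cold(tier1, tier2, cold_blocks, change_log, donor_store):
    """Deduplicate first, then move cold (not recently accessed) data to the
    second tier, so duplicate blocks are never copied twice."""
    dedupe_change_log(change_log, donor_store, tier1)
    for bno in cold_blocks:
        tier2[bno] = tier1.pop(bno)
```

Running dedupe_change_log before tier_out_cold mirrors the ordering in the abstract: duplicates are collapsed while the data still resides in the first tier, so only unique data is moved to the second tier.
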
  • Publication number: 20210405882
    Abstract: Techniques are provided for a layout format for compressed data. A first set of data blocks is grouped into a first group based upon a first frequency of access to the first set of data blocks. A second set of data blocks is grouped into a second group based upon a second frequency of access to the second set of data blocks. The first set of data blocks is compressed into a first compression group using a first compression algorithm. The second set of data blocks is compressed into a second compression group using a second compression algorithm.
    Type: Application
    Filed: September 22, 2020
    Publication date: December 30, 2021
    Inventors: Girish Hebbale Venkatasubbaiah, Rahul Thapliyal, Dnyaneshwar Nagorao Pawar, Kartik Rathnakar, Venkateswarlu Tella, Ananthan Subramanian
  • Publication number: 20210342082
    Abstract: Techniques are provided for asynchronous semi-inline deduplication. A multi-tiered storage arrangement comprises a first storage tier, a second storage tier, etc. An in-memory change log of data recently written to the first storage tier is evaluated to identify a fingerprint of a data block recently written to the first storage tier. A donor data store, comprising fingerprints of data blocks already stored within the first storage tier, is queried using the fingerprint. If the fingerprint is found, then deduplication is performed for the data block to create deduplicated data based upon a potential donor data block within the first storage tier. The deduplicated data is moved from the first storage tier to the second storage tier, such as in response to a determination that the deduplicated data has not been recently accessed. The deduplication is performed before cold data is moved from the first storage tier to the second storage tier.
    Type: Application
    Filed: July 13, 2021
    Publication date: November 4, 2021
    Inventors: Alok Sharma, Girish Hebbale Venkata Subbaiah, Kartik Rathnakar, Venkateswarlu Tella, Mukul Sharma
  • Patent number: 11068182
    Abstract: Techniques are provided for asynchronous semi-inline deduplication. A multi-tiered storage arrangement comprises a first storage tier, a second storage tier, etc. An in-memory change log of data recently written to the first storage tier is evaluated to identify a fingerprint of a data block recently written to the first storage tier. A donor data store, comprising fingerprints of data blocks already stored within the first storage tier, is queried using the fingerprint. If the fingerprint is found, then deduplication is performed for the data block to create deduplicated data based upon a potential donor data block within the first storage tier. The deduplicated data is moved from the first storage tier to the second storage tier, such as in response to a determination that the deduplicated data has not been recently accessed. The deduplication is performed before cold data is moved from the first storage tier to the second storage tier.
    Type: Grant
    Filed: November 14, 2019
    Date of Patent: July 20, 2021
    Assignee: NetApp, Inc.
    Inventors: Alok Sharma, Girish Hebbale Venkata Subbaiah, Kartik Rathnakar, Venkateswarlu Tella, Mukul Sharma
  • Publication number: 20200159432
    Abstract: One or more techniques and/or computing devices are provided for inline deduplication. For example, a checksum hash table and/or a block number hash table may be maintained within memory (e.g., a storage controller may maintain the hash tables in-core). The checksum hash table may be utilized for inline deduplication to identify potential donor blocks that may comprise the same data as an incoming storage operation. Data within an in-core buffer cache is eligible to serve as potential donor blocks, so inline deduplication may be performed using data from the in-core buffer cache, which may mitigate disk access to the underlying storage for which the in-core buffer cache is used. The block number hash table may be used for updating or removing entries from the hash tables, such as for blocks that are no longer eligible as potential donor blocks (e.g., deleted blocks, blocks evicted from the in-core buffer cache, etc.).
    Type: Application
    Filed: January 28, 2020
    Publication date: May 21, 2020
    Inventors: Mukul Sharma, Kartik Rathnakar, Dnyaneshwar Nagorao Pawar, Venkateswarlu Tella, Kiran Nenmeli Srinivasan, Rajesh Khandelwal, Alok Sharma
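
A rough sketch of the two in-core tables described above, assuming plain dicts for the checksum and block number hash tables, an Adler-32 checksum, and a byte comparison against the cached donor to guard against checksum collisions; the class and method names are hypothetical.

```python
import zlib

def checksum(data: bytes) -> int:
    return zlib.adler32(data)

class InlineDedupCache:
    """Two in-core tables: checksum -> donor block number (looked up on write),
    and block number -> checksum (used to remove stale donors on eviction)."""
    def __init__(self):
        self.by_checksum = {}   # checksum hash table
        self.by_block = {}      # block number hash table
        self.buffer_cache = {}  # in-core buffer cache: block number -> data

    def write(self, bno, data):
        """Return the donor block number if the incoming write duplicates cached
        data; otherwise cache the block as a new potential donor."""
        cs = checksum(data)
        donor = self.by_checksum.get(cs)
        if donor is not None and self.buffer_cache.get(donor) == data:
            return donor                  # duplicate: share the donor, skip the disk write
        self.buffer_cache[bno] = data
        self.by_checksum[cs] = bno
        self.by_block[bno] = cs
        return None

    def evict(self, bno):
        """Drop a block from the buffer cache and remove it from both tables so
        it is no longer offered as a potential donor (e.g., deleted or evicted)."""
        self.buffer_cache.pop(bno, None)
        cs = self.by_block.pop(bno, None)
        if cs is not None and self.by_checksum.get(cs) == bno:
            del self.by_checksum[cs]
```
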
  • Publication number: 20200081643
    Abstract: Techniques are provided for asynchronous semi-inline deduplication. A multi-tiered storage arrangement comprises a first storage tier, a second storage tier, etc. An in-memory change log of data recently written to the first storage tier is evaluated to identify a fingerprint of a data block recently written to the first storage tier. A donor data store, comprising fingerprints of data blocks already stored within the first storage tier, is queried using the fingerprint. If the fingerprint is found, then deduplication is performed for the data block to create deduplicated data based upon a potential donor data block within the first storage tier. The deduplicated data is moved from the first storage tier to the second storage tier, such as in response to a determination that the deduplicated data has not been recently accessed. The deduplication is performed before cold data is moved from the first storage tier to the second storage tier.
    Type: Application
    Filed: November 14, 2019
    Publication date: March 12, 2020
    Inventors: Alok Sharma, Girish Hebbale Venkata Subbaiah, Kartik Rathnakar, Venkateswarlu Tella, Mukul Sharma
  • Patent number: 10585611
    Abstract: One or more techniques and/or computing devices are provided for inline deduplication. For example, a checksum hash table and/or a block number hash table may be maintained within memory (e.g., a storage controller may maintain the hash tables in-core). The checksum hash table may be utilized for inline deduplication to identify potential donor blocks that may comprise the same data as an incoming storage operation. Data within an in-core buffer cache is eligible to serve as potential donor blocks, so inline deduplication may be performed using data from the in-core buffer cache, which may mitigate disk access to the underlying storage for which the in-core buffer cache is used. The block number hash table may be used for updating or removing entries from the hash tables, such as for blocks that are no longer eligible as potential donor blocks (e.g., deleted blocks, blocks evicted from the in-core buffer cache, etc.).
    Type: Grant
    Filed: April 26, 2016
    Date of Patent: March 10, 2020
    Assignee: NetApp Inc.
    Inventors: Mukul Sharma, Kartik Rathnakar, Dnyaneshwar Nagorao Pawar, Venkateswarlu Tella, Kiran Nenmeli Srinivasan, Rajesh Khandelwal, Alok Sharma
  • Patent number: 10496314
    Abstract: Techniques are provided for asynchronous semi-inline deduplication. A multi-tiered storage arrangement comprises a first storage tier, a second storage tier, etc. An in-memory change log of data recently written to the first storage tier is evaluated to identify a fingerprint of a data block recently written to the first storage tier. A donor data store, comprising fingerprints of data blocks already stored within the first storage tier, is queried using the fingerprint. If the fingerprint is found, then deduplication is performed for the data block to create deduplicated data based upon a potential donor data block within the first storage tier. The deduplicated data is moved from the first storage tier to the second storage tier, such as in response to a determination that the deduplicated data has not been recently accessed. The deduplication is performed before cold data is moved from the first storage tier to the second storage tier.
    Type: Grant
    Filed: February 23, 2018
    Date of Patent: December 3, 2019
    Assignee: NetApp Inc.
    Inventors: Alok Sharma, Girish Hebbale Venkata Subbaiah, Kartik Rathnakar, Venkateswarlu Tella, Mukul Sharma
  • Publication number: 20180181339
    Abstract: Techniques are provided for asynchronous semi-inline deduplication. A multi-tiered storage arrangement comprises a first storage tier, a second storage tier, etc. An in-memory change log of data recently written to the first storage tier is evaluated to identify a fingerprint of a data block recently written to the first storage tier. A donor data store, comprising fingerprints of data blocks already stored within the first storage tier, is queried using the fingerprint. If the fingerprint is found, then deduplication is performed for the data block to create deduplicated data based upon a potential donor data block within the first storage tier. The deduplicated data is moved from the first storage tier to the second storage tier, such as in response to a determination that the deduplicated data has not been recently accessed. The deduplication is performed before cold data is moved from the first storage tier to the second storage tier.
    Type: Application
    Filed: February 23, 2018
    Publication date: June 28, 2018
    Inventors: Alok Sharma, Girish Hebbale Venkata Subbaiah, Kartik Rathnakar, Venkateswarlu Tella, Mukul Sharma
  • Publication number: 20180173449
    Abstract: Techniques are provided for asynchronous semi-inline deduplication. A multi-tiered storage arrangement comprises a first storage tier, a second storage tier, etc. An in-memory change log of data recently written to the first storage tier is evaluated to identify a fingerprint of a data block recently written to the first storage tier. A donor data store, comprising fingerprints of data blocks already stored within the first storage tier, is queried using the fingerprint. If the fingerprint is found, then deduplication is performed for the data block to create deduplicated data based upon a potential donor data block within the first storage tier. The deduplicated data is moved from the first storage tier to the second storage tier, such as in response to a determination that the deduplicated data has not been recently accessed. The deduplication is performed before cold data is moved from the first storage tier to the second storage tier.
    Type: Application
    Filed: December 21, 2016
    Publication date: June 21, 2018
    Inventors: Alok Sharma, Girish Hebbale Venkata Subbaiah, Kartik Rathnakar, Venkateswarlu Tella, Mukul Sharma
  • Patent number: 10001942
    Abstract: Techniques are provided for asynchronous semi-inline deduplication. A multi-tiered storage arrangement comprises a first storage tier, a second storage tier, etc. An in-memory change log of data recently written to the first storage tier is evaluated to identify a fingerprint of a data block recently written to the first storage tier. A donor data store, comprising fingerprints of data blocks already stored within the first storage tier, is queried using the fingerprint. If the fingerprint is found, then deduplication is performed for the data block to create deduplicated data based upon a potential donor data block within the first storage tier. The deduplicated data is moved from the first storage tier to the second storage tier, such as in response to a determination that the deduplicated data has not been recently accessed. The deduplication is performed before cold data is moved from the first storage tier to the second storage tier.
    Type: Grant
    Filed: December 21, 2016
    Date of Patent: June 19, 2018
    Assignee: NetApp Inc.
    Inventors: Alok Sharma, Girish Hebbale Venkata Subbaiah, Kartik Rathnakar, Venkateswarlu Tella, Mukul Sharma
  • Publication number: 20170308320
    Abstract: One or more techniques and/or computing devices are provided for inline deduplication. For example, a checksum hash table and/or a block number hash table may be maintained within memory (e.g., a storage controller may maintain the hash tables in-core). The checksum hash table may be utilized for inline deduplication to identify potential donor blocks that may comprise the same data as an incoming storage operation. Data within an in-core buffer cache is eligible to serve as potential donor blocks, so inline deduplication may be performed using data from the in-core buffer cache, which may mitigate disk access to the underlying storage for which the in-core buffer cache is used. The block number hash table may be used for updating or removing entries from the hash tables, such as for blocks that are no longer eligible as potential donor blocks (e.g., deleted blocks, blocks evicted from the in-core buffer cache, etc.).
    Type: Application
    Filed: April 26, 2016
    Publication date: October 26, 2017
    Inventors: Mukul Sharma, Kartik Rathnakar, Dnyaneshwar Nagorao Pawar, Venkateswarlu Tella, Kiran Nenmeli Srinivasan, Rajesh Khandelwal, Alok Sharma