Patents by Inventor Anjaneya R. Chagam Reddy
Anjaneya R. Chagam Reddy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11550617
Abstract: A method is described. The method includes performing the following with a storage end transaction agent within a storage sled of a rack mounted computing system: receiving a request to perform storage operations with one or more storage devices of the storage sled, the request specifying an all-or-nothing semantic for the storage operations; recognizing that all of the storage operations have successfully completed; after all of the storage operations have successfully completed, reporting to a CPU side transaction agent that sent the request that all of the storage operations have successfully completed.
Type: Grant
Filed: June 22, 2020
Date of Patent: January 10, 2023
Assignee: Intel Corporation
Inventors: Arun Raghunath, Yi Zou, Tushar Sudhakar Gohad, Anjaneya R. Chagam Reddy, Sujoy Sen
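The all-or-nothing semantic in the abstract above can be illustrated with a short sketch: the storage-end agent applies every operation in the batch and reports success to the requester only after all of them complete; any failure rolls the batch back. All function and variable names here are illustrative assumptions, not taken from the patent.

```python
def execute_all_or_nothing(operations, apply_op, undo_op):
    """Apply every operation; on any failure, undo the completed ones.

    operations: list of opaque operation descriptors
    apply_op(op) -> bool  (True on success)
    undo_op(op)           (reverses a completed operation)
    Returns True only if *all* operations succeeded.
    """
    completed = []
    for op in operations:
        if apply_op(op):
            completed.append(op)
        else:
            # Roll back in reverse order so the batch leaves no trace.
            for done in reversed(completed):
                undo_op(done)
            return False  # nothing is reported as committed
    return True  # success is reported only after every operation completed
```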
-
Patent number: 11194522
Abstract: Apparatuses for computing are disclosed herein. An apparatus may include a set of data reduction modules to perform data reduction operations on sets of (key, value) data pairs to reduce an amount of values associated with a shared key, wherein the (key, value) data pairs are stored in a plurality of queues located in a plurality of solid state drives remote from the apparatus. The apparatus may further include a memory access module, communicably coupled to the set of data reduction modules, to directly transfer individual ones of the sets of queued (key, value) data pairs from the plurality of remote solid state drives through remote random access of the solid state drives, via a network, without using intermediate staging storage. Other embodiments may be disclosed or claimed.
Type: Grant
Filed: August 16, 2017
Date of Patent: December 7, 2021
Assignee: Intel Corporation
Inventors: Xiao Hu, Huan Zhou, Sujoy Sen, Anjaneya R. Chagam Reddy, Mohan J. Kumar, Chong Han
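The reduction step in the abstract above, stripped of the transport (the remote random access of the solid state drives is elided), amounts to combining the values that share a key. A minimal sketch, with names chosen for illustration only:

```python
def reduce_pairs(pairs, combine):
    """Reduce (key, value) pairs so each shared key keeps one combined value.

    pairs:   iterable of (key, value) tuples, e.g. pulled from remote queues
    combine: function merging two values for the same key
    """
    out = {}
    for key, value in pairs:
        # First value for a key is kept as-is; later ones are folded in.
        out[key] = combine(out[key], value) if key in out else value
    return out
```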
-
Patent number: 10990532
Abstract: A method performed by a first hardware element in a hierarchical arrangement of hardware elements in an object storage system is described. The method includes performing a hash on a name of an object of the object storage system. The name is part of a request that is associated with the object. A result of the hash is to identify a second hardware element directly beneath the first hardware element in the hierarchical arrangement. The request is to be sent to the second hardware element to advance the request toward being serviced by the object storage system.
Type: Grant
Filed: March 29, 2018
Date of Patent: April 27, 2021
Assignee: Intel Corporation
Inventors: Mohan J. Kumar, Anjaneya R. Chagam Reddy
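The hierarchical routing described above can be sketched as follows: each element hashes the object name to pick one of the elements directly beneath it, repeating until the request reaches a leaf that can service it. The tree layout, the SHA-256 choice, and all names are assumptions for illustration, not details from the patent.

```python
import hashlib

def route_request(object_name, tree):
    """Walk a hierarchy, choosing each next hop by hashing the object name.

    tree: nested dict mapping element name -> list of children,
          where a leaf element is a plain string.
    Returns the list of elements the request traverses, root to leaf.
    """
    path = []
    node = tree
    level = 0
    while isinstance(node, dict):
        name, children = next(iter(node.items()))
        path.append(name)
        # Mix the level into the hash so each tier chooses independently.
        digest = hashlib.sha256(f"{level}:{object_name}".encode()).hexdigest()
        node = children[int(digest, 16) % len(children)]
        level += 1
    path.append(node)  # leaf element that services the request
    return path
```

Because the choice is a pure function of the object name, every element routes a given object the same way without consulting a central directory.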
-
Publication number: 20200319915
Abstract: A method is described. The method includes performing the following with a storage end transaction agent within a storage sled of a rack mounted computing system: receiving a request to perform storage operations with one or more storage devices of the storage sled, the request specifying an all-or-nothing semantic for the storage operations; recognizing that all of the storage operations have successfully completed; after all of the storage operations have successfully completed, reporting to a CPU side transaction agent that sent the request that all of the storage operations have successfully completed.
Type: Application
Filed: June 22, 2020
Publication date: October 8, 2020
Inventors: Arun RAGHUNATH, Yi ZOU, Tushar Sudhakar GOHAD, Anjaneya R. CHAGAM REDDY, Sujoy SEN
-
Publication number: 20200210114
Abstract: Apparatuses for computing are disclosed herein. An apparatus may include a set of data reduction modules to perform data reduction operations on sets of (key, value) data pairs to reduce an amount of values associated with a shared key, wherein the (key, value) data pairs are stored in a plurality of queues located in a plurality of solid state drives remote from the apparatus. The apparatus may further include a memory access module, communicably coupled to the set of data reduction modules, to directly transfer individual ones of the sets of queued (key, value) data pairs from the plurality of remote solid state drives through remote random access of the solid state drives, via a network, without using intermediate staging storage. Other embodiments may be disclosed or claimed.
Type: Application
Filed: August 16, 2017
Publication date: July 2, 2020
Inventors: Xiao Hu, Huan Zhou, Sujoy Sen, Anjaneya R. Chagam Reddy, Mohan J. Kumar, Chong Han
-
Publication number: 20200117642
Abstract: A storage resource may be coupled via an interface with a processing device that receives a data object associated with a request to store the data object at the storage resource. A type of workload associated with the data object that is associated with the request to store the data object at the storage resource may be identified. A size of a data block of the data object may be determined based on the identified type of workload.
Type: Application
Filed: June 30, 2017
Publication date: April 16, 2020
Inventors: Malini K. BHANDARU, Anjaneya R. CHAGAM REDDY, Ganesh Maharaj MAHALINGAM, Tushar GOHAD, Wei CHEN, Yingxin CHENG, Xiaoyan LI, Qiaowei REN, Chunmei LIU
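The core of the abstract above is a mapping from workload type to block size. A minimal sketch; the workload names and sizes below are illustrative assumptions, not values from the application:

```python
# Hypothetical workload -> block size table: small blocks suit random
# database I/O, large blocks suit sequential streaming.
WORKLOAD_BLOCK_SIZES = {
    "database": 4 * 1024,
    "virtual_machine": 64 * 1024,
    "media_streaming": 1024 * 1024,
}

def block_size_for(workload_type, default=64 * 1024):
    """Return the data block size to use when storing an object,
    based on the identified type of workload."""
    return WORKLOAD_BLOCK_SIZES.get(workload_type, default)
```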
-
Patent number: 10592162
Abstract: Examples include methods for obtaining one or more location hints applicable to a range of logical block addresses of a received input/output (I/O) request for a storage subsystem coupled with a host system over a non-volatile memory express over fabric (NVMe-oF) interconnect. The following steps are performed for each logical block address in the I/O request. A most specific location hint of the one or more location hints that matches that logical block address is applied to identify a destination in the storage subsystem for the I/O request. When the most specific location hint is a consistent hash hint, the consistent hash hint is processed. The I/O request is forwarded to the destination and a completion status for the I/O request is returned. When a location hint log page has changed, the location hint log page is processed. When any location hint refers to NVMe-oF qualified names not included in the immediately preceding query by the discovery service, the immediately preceding query is processed again.
Type: Grant
Filed: August 22, 2018
Date of Patent: March 17, 2020
Assignee: Intel Corporation
Inventors: Scott D. Peterson, Sujoy Sen, Anjaneya R. Chagam Reddy, Murugasamy K. Nachimuthu, Mohan J. Kumar
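The "most specific location hint" selection above can be sketched simply: each hint covers a range of logical block addresses, and for a given LBA the covering hint with the narrowest range wins. The tuple layout and names are assumptions for illustration (the patent's hint format, including consistent hash hints, is richer than this):

```python
def most_specific_hint(lba, hints):
    """Pick the destination from the narrowest hint covering `lba`.

    hints: list of (start_lba, length, destination) tuples.
    Returns the matching destination, or None if no hint covers the LBA.
    """
    covering = [h for h in hints if h[0] <= lba < h[0] + h[1]]
    if not covering:
        return None
    # Narrowest covering range = most specific hint.
    return min(covering, key=lambda h: h[1])[2]
```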
-
Patent number: 10503587
Abstract: Apparatuses, systems and methods are disclosed herein that generally relate to distributed network storage and filesystems, such as Ceph, Hadoop®, or other big data storage environments utilizing resources and/or storage that may be remotely located across a communication link such as a network. More particularly, disclosed are techniques for one or more machines or devices to scrub data on remote resources and/or storage without requiring all or substantially all of the remote data to be read across the communication link in order to scrub it. Some disclosed embodiments discuss having validation be relatively local to storage(s) being scrubbed, and some embodiments discuss only providing to the one or more machines scrubbing data selected results of the relatively local scrubbing over the communication link.
Type: Grant
Filed: June 30, 2017
Date of Patent: December 10, 2019
Assignee: Intel Corporation
Inventors: Anjaneya R. Chagam Reddy, Mohan J. Kumar, Sujoy Sen, Tushar Gohad
-
Patent number: 10468077
Abstract: Examples include techniques for storing an object in a non-volatile memory in a solid-state storage device (SSD), the SSD supporting input/output (I/O) operations of a block size, when a size of the object is greater than or equal to the block size. The object may be stored in a write buffer in a persistent memory in a computing platform when the size of the object is less than the block size. An object metadata component may be updated in the persistent memory to store attributes of stored objects, the attributes comprising at least an object identifier, an object state, and a location where the object is stored, the location being one or more of a cache in volatile memory, the write buffer, and the SSD. A flush operation may be performed to coalesce objects smaller than the block size together in the write buffer and to store the coalesced objects in the SSD when a size of coalesced objects is equal to the block size.
Type: Grant
Filed: February 7, 2018
Date of Patent: November 5, 2019
Assignee: Intel Corporation
Inventor: Anjaneya R. Chagam Reddy
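The write path above can be sketched in a few lines: objects at or above the SSD block size go straight to the SSD, smaller objects accumulate in a persistent-memory write buffer, and a flush coalesces them into one block-sized SSD write while the metadata tracks each object's location. A minimal sketch; class and field names are illustrative assumptions:

```python
BLOCK_SIZE = 4096  # assumed SSD I/O block size

class ObjectStore:
    def __init__(self):
        self.ssd = []           # stands in for block-sized writes to the SSD
        self.write_buffer = []  # stands in for the persistent-memory buffer
        self.metadata = {}      # object id -> where the object currently lives

    def put(self, obj_id, data):
        if len(data) >= BLOCK_SIZE:
            # Large objects are written directly to the SSD.
            self.ssd.append(data)
            self.metadata[obj_id] = "ssd"
        else:
            # Small objects wait in the write buffer until a block fills.
            self.write_buffer.append((obj_id, data))
            self.metadata[obj_id] = "write_buffer"
            self._maybe_flush()

    def _maybe_flush(self):
        # Coalesce buffered objects into one SSD write once they fill a block.
        if sum(len(d) for _, d in self.write_buffer) >= BLOCK_SIZE:
            coalesced = b"".join(d for _, d in self.write_buffer)
            self.ssd.append(coalesced)
            for obj_id, _ in self.write_buffer:
                self.metadata[obj_id] = "ssd"
            self.write_buffer.clear()
```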
-
Patent number: 10394634
Abstract: Apparatuses, systems and methods are disclosed herein that generally relate to distributed storage, such as for big data, distributed databases, large datasets, artificial intelligence, genomics, or any other data processing environment that hosts large data sets, whether on local storage or on storage located remotely over a network. More particularly, since large-scale data requires many storage devices, scrubbing storage for reliability and accuracy consumes communication bandwidth and processor resources. Discussed are various ways to use known storage structures, such as logical block addresses (LBAs), to offload scrubbing overhead to storage by having storage engage in autonomous self-validation. Storage may scrub itself and identify stored data failing data integrity validation, or identify unreadable storage locations, and report errors to a distributed storage system that may reverse-lookup the affected storage location to identify, for example, a data block at that location needing correction.
Type: Grant
Filed: June 30, 2017
Date of Patent: August 27, 2019
Assignee: Intel Corporation
Inventor: Anjaneya R. Chagam Reddy
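The autonomous self-validation described above can be sketched as a device-local scrub: storage checks its own blocks against stored checksums and reports only the failing locations upstream, instead of shipping every block over the network. The CRC32 choice and all names are assumptions for illustration:

```python
import zlib

def self_scrub(blocks, checksums):
    """Validate blocks locally; return only the LBAs that failed.

    blocks:    dict of lba -> block bytes held by the device
    checksums: dict of lba -> expected CRC32 value
    The short list of failing LBAs is what gets reported to the
    distributed storage system for reverse-lookup and correction.
    """
    bad = []
    for lba, data in blocks.items():
        if zlib.crc32(data) != checksums.get(lba):
            bad.append(lba)
    return sorted(bad)
```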
-
Publication number: 20190108095
Abstract: To reduce the cost of ensuring the integrity of data stored in distributed data storage systems, a storage-side system provides data integrity services without the involvement of the host-side data storage system. Processes for storage-side data integrity include maintaining a block ownership map and performing data integrity checking and repair functions in storage target subsystems. The storage target subsystems are configured to efficiently manage data stored remotely using a storage fabric protocol such as NVMe-oF. The storage target subsystems can be implemented in a disaggregated storage computing system on behalf of a host-side distributed data storage system, such as software-defined storage (SDS) system.
Type: Application
Filed: December 7, 2018
Publication date: April 11, 2019
Inventors: Yi ZOU, Arun RAGHUNATH, Anjaneya R. CHAGAM REDDY, Sujoy SEN, Tushar Sudhakar GOHAD
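The block ownership map mentioned above can be pictured as a small index kept on the storage target: it records which host-side owner each block belongs to, so integrity checking and repair can run in the target without asking the host which blocks matter. A minimal sketch; the class, its methods, and the owner labels are all illustrative assumptions:

```python
class OwnershipMap:
    """Storage-target map from block address to its host-side owner."""

    def __init__(self):
        self._owners = {}

    def claim(self, lba, owner):
        # Record that `owner` is responsible for the block at `lba`.
        self._owners[lba] = owner

    def release(self, lba):
        self._owners.pop(lba, None)

    def owner_of(self, lba):
        return self._owners.get(lba)

    def blocks_owned_by(self, owner):
        # Lets the target scope an integrity check to one owner's blocks.
        return sorted(lba for lba, o in self._owners.items() if o == owner)
```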
-
Publication number: 20190042089
Abstract: Examples include techniques for determining a storage policy for storing data in a computing system having one or more storage nodes, each storage node including one or more storage devices. One technique includes getting rating information from a storage device of a storage node; assigning the storage device to a storage pool based at least in part on the rating information; and automatically determining a storage policy for the computing system based at least in part on the assigned storage pool and the rating information.
Type: Application
Filed: March 2, 2018
Publication date: February 7, 2019
Inventors: Anjaneya R. CHAGAM REDDY, Mohan J. KUMAR, Sujoy SEN, Murugasamy K. NACHIMUTHU, Gamil CAIN
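The technique above (rating information → pool assignment → derived policy) can be sketched with a toy rating metric. The IOPS thresholds, pool tiers, and policy key are illustrative assumptions, not values from the application:

```python
def assign_pool(rating_iops):
    """Map a device's rated IOPS to a storage pool tier."""
    if rating_iops >= 100_000:
        return "fast"
    if rating_iops >= 10_000:
        return "standard"
    return "capacity"

def storage_policy(devices):
    """devices: dict of device name -> rated IOPS.
    Returns (pool assignments, a policy placing latency-sensitive
    data on the fastest populated pool)."""
    pools = {name: assign_pool(iops) for name, iops in devices.items()}
    for tier in ("fast", "standard", "capacity"):
        if tier in pools.values():
            return pools, {"latency_sensitive_data": tier}
    return pools, {}
```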
-
Publication number: 20190043540
Abstract: Examples include techniques for storing an object in a non-volatile memory in a solid-state storage device (SSD), the SSD supporting input/output (I/O) operations of a block size, when a size of the object is greater than or equal to the block size. The object may be stored in a write buffer in a persistent memory in a computing platform when the size of the object is less than the block size. An object metadata component may be updated in the persistent memory to store attributes of stored objects, the attributes comprising at least an object identifier, an object state, and a location where the object is stored, the location being one or more of a cache in volatile memory, the write buffer, and the SSD. A flush operation may be performed to coalesce objects smaller than the block size together in the write buffer and to store the coalesced objects in the SSD when a size of coalesced objects is equal to the block size.
Type: Application
Filed: February 7, 2018
Publication date: February 7, 2019
Inventor: Anjaneya R. CHAGAM REDDY
-
Publication number: 20190042144
Abstract: Examples include methods for obtaining one or more location hints applicable to a range of logical block addresses of a received input/output (I/O) request for a storage subsystem coupled with a host system over a non-volatile memory express over fabric (NVMe-oF) interconnect. The following steps are performed for each logical block address in the I/O request. A most specific location hint of the one or more location hints that matches that logical block address is applied to identify a destination in the storage subsystem for the I/O request. When the most specific location hint is a consistent hash hint, the consistent hash hint is processed. The I/O request is forwarded to the destination and a completion status for the I/O request is returned. When a location hint log page has changed, the location hint log page is processed. When any location hint refers to NVMe-oF qualified names not included in the immediately preceding query by the discovery service, the immediately preceding query is processed again.
Type: Application
Filed: August 22, 2018
Publication date: February 7, 2019
Inventors: Scott D. PETERSON, Sujoy SEN, Anjaneya R. CHAGAM REDDY, Murugasamy K. NACHIMUTHU, Mohan J. KUMAR
-
Publication number: 20190042440
Abstract: A method performed by a first hardware element in a hierarchical arrangement of hardware elements in an object storage system is described. The method includes performing a hash on a name of an object of the object storage system. The name is part of a request that is associated with the object. A result of the hash is to identify a second hardware element directly beneath the first hardware element in the hierarchical arrangement. The request is to be sent to the second hardware element to advance the request toward being serviced by the object storage system.
Type: Application
Filed: March 29, 2018
Publication date: February 7, 2019
Inventors: Mohan J. KUMAR, Anjaneya R. CHAGAM REDDY
-
Publication number: 20190004888
Abstract: Apparatuses, systems and methods are disclosed herein that generally relate to distributed storage, such as for big data, distributed databases, large datasets, artificial intelligence, genomics, or any other data processing environment that hosts large data sets, whether on local storage or on storage located remotely over a network. More particularly, since large-scale data requires many storage devices, scrubbing storage for reliability and accuracy consumes communication bandwidth and processor resources. Discussed are various ways to use known storage structures, such as logical block addresses (LBAs), to offload scrubbing overhead to storage by having storage engage in autonomous self-validation. Storage may scrub itself and identify stored data failing data integrity validation, or identify unreadable storage locations, and report errors to a distributed storage system that may reverse-lookup the affected storage location to identify, for example, a data block at that location needing correction.
Type: Application
Filed: June 30, 2017
Publication date: January 3, 2019
Inventor: Anjaneya R. Chagam Reddy
-
Publication number: 20190004894
Abstract: Apparatuses, systems and methods are disclosed herein that generally relate to distributed network storage and filesystems, such as Ceph, Hadoop®, or other big data storage environments utilizing resources and/or storage that may be remotely located across a communication link such as a network. More particularly, disclosed are techniques for one or more machines or devices to scrub data on remote resources and/or storage without requiring all or substantially all of the remote data to be read across the communication link in order to scrub it. Some disclosed embodiments discuss having validation be relatively local to storage(s) being scrubbed, and some embodiments discuss only providing to the one or more machines scrubbing data selected results of the relatively local scrubbing over the communication link.
Type: Application
Filed: June 30, 2017
Publication date: January 3, 2019
Inventors: Anjaneya R. Chagam Reddy, Mohan J. Kumar, Sujoy Sen, Tushar Gohad
-
Publication number: 20180285294
Abstract: Apparatus and method to perform quality of service based handling of input/output (IO) requests are disclosed herein.
Type: Application
Filed: April 1, 2017
Publication date: October 4, 2018
Inventor: ANJANEYA R. CHAGAM REDDY
-
Publication number: 20180288152
Abstract: Apparatus and method for storage accessibility are disclosed herein. In some embodiments, a compute node may include one or more memories; and one or more processors in communication with the one or more memories, wherein the one or more processors include a module that is to select one or more particular storage devices of a plurality of storage devices distributed over the network in response to a data request made by an application that executes on the one or more processors, the one or more particular storage devices selected to fulfill the data request, and the module selects the one or more particular storage devices in accordance with a data object associated with the data request and one or more of current hardware operational state of respective storage devices of the plurality of storage devices and current performance characteristics of the respective storage devices of the plurality of storage devices.
Type: Application
Filed: April 1, 2017
Publication date: October 4, 2018
Inventors: ANJANEYA R. CHAGAM REDDY, Mohan J. Kumar, Tushar Gohad
-
Publication number: 20180004452
Abstract: Technologies for providing dynamically managed quality of service in a distributed storage system include an apparatus having a processor. The processor is to determine capabilities of one or more compute devices of the distributed storage system. The processor is also to obtain an indicator of a target quality of service to be provided by the distributed storage system, determine target performance metrics associated with the target quality of service, determine target configuration settings for the one or more compute devices of the distributed storage system to provide the target quality of service, configure the one or more compute devices with the target configuration settings, determine whether a present performance of the distributed storage system satisfies the target quality of service, and reconfigure the one or more compute devices in response to a determination that the target quality of service is not satisfied. Other embodiments are described and claimed.
Type: Application
Filed: June 30, 2016
Publication date: January 4, 2018
Inventors: Mrittika Ganguli, Ananth S. Narayan, Anjaneya R. Chagam Reddy, Mohan J. Kumar
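The configure/measure/reconfigure loop in the abstract above can be sketched as a simple closed loop: measure present performance, compare against the target quality of service, and reconfigure until the target is satisfied. The IOPS metric, the callback names, and the round limit are illustrative assumptions:

```python
def manage_qos(target_iops, measure, reconfigure, max_rounds=10):
    """Drive the system toward a target quality of service.

    measure()     -> current IOPS of the distributed storage system
    reconfigure() -> adjusts compute-device configuration settings
    Returns True once present performance satisfies the target,
    False if the round limit is exhausted first.
    """
    for _ in range(max_rounds):
        if measure() >= target_iops:
            return True   # present performance satisfies the target QoS
        reconfigure()     # adjust settings, then re-check on the next round
    return False
```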