Patents by Inventor Shyamkumar Iyer

Shyamkumar Iyer has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11829798
    Abstract: An information handling system for compressing data includes multiple compression engines, a source data buffer to provide compression data to the compression engines, at least one destination data buffer to receive compressed data from the compression engines, and a compression engine driver. Each compression engine is configured to provide a different compression function. The compression engine driver directs each compression engine to compress data from the source data buffer, and retrieves, from the at least one destination data buffer, selected compressed data produced by a first one of the compression engines. The selection is based upon a selection criterion.
    Type: Grant
    Filed: October 21, 2020
    Date of Patent: November 28, 2023
    Assignee: Dell Products L.P.
    Inventors: Andrew Butcher, Shyamkumar Iyer, Glen Sescila
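    Sketch: a minimal, hypothetical Python illustration of the multi-engine approach described in the abstract above, with standard-library codecs (zlib, bz2, lzma) standing in for the hardware compression engines and smallest output size as an assumed selection criterion; it is a reader's aid, not the patented implementation.

```python
# Hypothetical illustration only: Python's stdlib codecs stand in for the
# hardware compression engines; the "smallest output" criterion is an
# assumption, not taken from the patent.
import bz2
import lzma
import zlib

# Each "engine" provides a different compression function.
ENGINES = {
    "deflate": zlib.compress,
    "bzip2": bz2.compress,
    "lzma": lzma.compress,
}

def compress_with_best_engine(source_buffer: bytes) -> tuple[str, bytes]:
    """Direct every engine at the same source buffer, then keep one result
    chosen by a selection criterion (here: smallest compressed size)."""
    results = {name: engine(source_buffer) for name, engine in ENGINES.items()}
    chosen = min(results, key=lambda name: len(results[name]))
    return chosen, results[chosen]

if __name__ == "__main__":
    data = b"example payload " * 256
    engine, compressed = compress_with_best_engine(data)
    print(f"selected {engine}: {len(data)} -> {len(compressed)} bytes")
```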
  • Patent number: 11601515
    Abstract: An information handling system includes a publisher device and an offload device. Multiple subscriber devices are associated with the publisher device. The offload device communicates with the publisher device. The offload device receives a packet transmission from the publisher device, and translates a topic address of the packet transmission to multiple destination addresses. The offload device sends the packet transmission to each of the subscriber devices. Each of the subscriber devices is associated with a corresponding destination address of the multiple destination addresses. The offload device receives one or more acknowledgements from the subscriber devices, and combines the one or more acknowledgements into a composite completion message. The offload device sends the composite completion message to the publisher device.
    Type: Grant
    Filed: October 16, 2020
    Date of Patent: March 7, 2023
    Assignee: Dell Products L.P.
    Inventors: Andrew Butcher, Shyamkumar Iyer, Srikrishna Ramaswamy
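    Sketch: a minimal, hypothetical Python illustration of the fan-out and acknowledgement aggregation described above; the in-process OffloadDevice class, the topic table, and the boolean acknowledgements are assumptions for the example, not details from the patent.

```python
# Hypothetical illustration only: an in-process object stands in for the
# offload device, and plain callables stand in for subscriber devices.
from dataclasses import dataclass
from typing import Callable

@dataclass
class CompositeCompletion:
    topic: str
    acknowledged: int
    total: int

class OffloadDevice:
    def __init__(self, topic_table: dict[str, list[Callable[[bytes], bool]]]):
        # Maps a topic "address" to the destinations subscribed to it.
        self.topic_table = topic_table

    def publish(self, topic: str, packet: bytes) -> CompositeCompletion:
        destinations = self.topic_table[topic]
        # Fan the single publisher transmission out to every destination
        # and collect one acknowledgement per subscriber.
        acks = [deliver(packet) for deliver in destinations]
        # Combine the acknowledgements into one composite completion
        # message that goes back to the publisher.
        return CompositeCompletion(topic, sum(acks), len(acks))

if __name__ == "__main__":
    subscribers = [lambda pkt: True, lambda pkt: True, lambda pkt: False]
    device = OffloadDevice({"telemetry/temp": subscribers})
    print(device.publish("telemetry/temp", b"reading=42"))
```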
  • Patent number: 11507274
    Abstract: An information handling system for compressing data includes a data storage device and a processor. The data storage device stores a dictionary and an uncompressed data block. The processor prepends the dictionary to the uncompressed data block, determines, from the uncompressed data block, a literal data string and a match data string where the match data string is a matching entry of the dictionary, and compresses the uncompressed data block into a compressed data block that includes the literal data string and an offset pointer that points to the matching entry.
    Type: Grant
    Filed: October 22, 2020
    Date of Patent: November 22, 2022
    Assignee: Dell Products L.P.
    Inventors: Andrew Butcher, Shyamkumar Iyer, Glen Sescila
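    Sketch: a minimal, hypothetical Python illustration of dictionary-assisted compression in the spirit of the abstract above, using zlib's preset-dictionary (zdict) support so that strings matching dictionary entries become back-references rather than literals; it does not reproduce the patented method itself.

```python
# Hypothetical illustration only: zlib's preset-dictionary support shows the
# general idea of compressing a block against a prepended dictionary.
import zlib

def compress_with_dictionary(block: bytes, dictionary: bytes) -> bytes:
    compressor = zlib.compressobj(zdict=dictionary)
    return compressor.compress(block) + compressor.flush()

def decompress_with_dictionary(payload: bytes, dictionary: bytes) -> bytes:
    decompressor = zlib.decompressobj(zdict=dictionary)
    return decompressor.decompress(payload)

if __name__ == "__main__":
    dictionary = b"information handling system compressed data block"
    block = b"the information handling system writes the compressed data block to storage"
    packed = compress_with_dictionary(block, dictionary)
    assert decompress_with_dictionary(packed, dictionary) == block
    print(f"{len(block)} -> {len(packed)} bytes using the dictionary")
```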
  • Patent number: 11507292
    Abstract: An information handling system includes a processor that detects a cache flush request of a memory device within the processor, and identifies multiple blocks of data within an address space associated with the cache flush request. The processor groups the multiple blocks of data into a single composite block of data, and compresses the composite block of data. The processor stores the compressed composite block of data, and stores metadata for the compressed composite block of data. The metadata includes information for both the composite block of data and each of the multiple blocks of data.
    Type: Grant
    Filed: October 15, 2020
    Date of Patent: November 22, 2022
    Assignee: Dell Products L.P.
    Inventors: Andrew Butcher, Shyamkumar Iyer, Glen Sescila
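    Sketch: a minimal, hypothetical Python illustration of grouping flushed blocks into a single composite block, compressing it once, and keeping metadata for the composite and for each member block; the metadata layout shown is an assumption for the example.

```python
# Hypothetical illustration only: byte strings stand in for the dirty cache
# blocks being flushed, and zlib stands in for the processor's compressor.
import zlib

def flush_and_compress(blocks: list[bytes]) -> tuple[bytes, dict]:
    """Group the flushed blocks into one composite block, compress it once,
    and record metadata for the composite and for each member block."""
    composite = b"".join(blocks)
    compressed = zlib.compress(composite)
    members, cursor = [], 0
    for block in blocks:
        members.append({"offset": cursor, "length": len(block)})
        cursor += len(block)
    metadata = {
        "composite": {"uncompressed": len(composite), "compressed": len(compressed)},
        "blocks": members,
    }
    return compressed, metadata

if __name__ == "__main__":
    payload, meta = flush_and_compress([b"A" * 64, b"B" * 64, b"C" * 64])
    print(len(payload), meta)
```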
  • Patent number: 11422963
    Abstract: An information handling system includes a compression client, a memory, and an SDXI hardware module. The compression client issues a compression request for a block of data that is uncompressed. The memory has multiple storage locations identified by addresses, which include a source address and a destination address. The SDXI hardware module compresses the block of data to create compressed data, and determines whether the amount of compression achieved is less than a threshold amount of compression. In response to the amount of compression being less than the threshold, the SDXI hardware module disregards the compressed data, utilizes the uncompressed block of data at the source address, and updates metadata for the block of data to indicate that the data returned to the compression client is uncompressed.
    Type: Grant
    Filed: October 15, 2020
    Date of Patent: August 23, 2022
    Assignee: Dell Products L.P.
    Inventors: Shyamkumar Iyer, Andrew Butcher, Glen Sescila
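    Sketch: a minimal, hypothetical Python illustration of the compress-or-passthrough decision described above; zlib stands in for the SDXI compression path, and the 10% threshold is an assumed figure, not a detail from the patent.

```python
# Hypothetical illustration only: zlib stands in for the SDXI compression path,
# a dict stands in for the per-block metadata, and the 10% threshold is assumed.
import os
import zlib

COMPRESSION_THRESHOLD = 0.10  # require at least a 10% size reduction

def compress_or_passthrough(block: bytes) -> tuple[bytes, dict]:
    compressed = zlib.compress(block)
    savings = 1.0 - len(compressed) / len(block)
    if savings < COMPRESSION_THRESHOLD:
        # Too little gained: disregard the compressed copy, keep using the
        # uncompressed block, and mark it as uncompressed in the metadata.
        return block, {"compressed": False, "length": len(block)}
    return compressed, {"compressed": True, "length": len(compressed),
                        "original_length": len(block)}

if __name__ == "__main__":
    for name, block in (("repetitive", b"abc" * 1000), ("random", os.urandom(3000))):
        _, metadata = compress_or_passthrough(block)
        print(name, metadata)
```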
  • Publication number: 20220129469
    Abstract: An information handling system includes a hardware device having a query processing engine to provide queries into source data and to provide responses to the queries. A processor stores a query to a query address in the memory device, issues a command to the hardware device, the command including the query address and a response address in the memory device, and retrieves a response to the query from the response address. The hardware device retrieves the query from the query address in response to the command, provides the query to the query processing engine, and stores a response to the query from the query processing engine to the response address.
    Type: Application
    Filed: October 27, 2020
    Publication date: April 28, 2022
    Inventors: Shyamkumar Iyer, Krishna Ramaswamy, Gaurav Chawla
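    Sketch: a minimal, hypothetical Python illustration of the command flow described above, with a bytearray standing in for the memory device and a Python class standing in for the query-processing hardware; the addresses and key/value "source data" are invented for the example.

```python
# Hypothetical illustration only: a bytearray stands in for shared memory and
# a Python object stands in for the hardware query-processing engine.
MEMORY = bytearray(4096)

def mem_write(addr: int, data: bytes) -> None:
    MEMORY[addr:addr + len(data)] = data

def mem_read(addr: int, length: int) -> bytes:
    return bytes(MEMORY[addr:addr + length])

class QueryEngineDevice:
    """On each command, fetch the query from the query address, evaluate it
    against the source data, and store the response at the response address."""
    def __init__(self, source_data: dict[str, str]):
        self.source_data = source_data

    def handle_command(self, query_addr: int, query_len: int, response_addr: int) -> int:
        query = mem_read(query_addr, query_len).decode()
        response = self.source_data.get(query, "not found").encode()
        mem_write(response_addr, response)
        return len(response)

if __name__ == "__main__":
    device = QueryEngineDevice({"service_tag": "ABC123"})
    query = b"service_tag"
    mem_write(0x100, query)                                 # processor stores the query
    size = device.handle_command(0x100, len(query), 0x200)  # command carries both addresses
    print(mem_read(0x200, size).decode())                   # processor retrieves the response
```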
  • Publication number: 20220129161
    Abstract: An information handling system for compressing data includes a data storage device and a processor. The data storage device stores a dictionary and an uncompressed data block. The processor prepends the dictionary to the uncompressed data block, determines, from the uncompressed data block, a literal data string and a match data string where the match data string is a matching entry of the dictionary, and compresses the uncompressed data block into a compressed data block that includes the literal data string and an offset pointer that points to the matching entry.
    Type: Application
    Filed: October 22, 2020
    Publication date: April 28, 2022
    Inventors: Andrew Butcher, Shyamkumar Iyer, Glen Sescila
  • Publication number: 20220121359
    Abstract: An information handling system includes a processor that detects a cache flush request of a memory device within the processor, and identifies multiple blocks of data within an address space associated with the cache flush request. The processor groups the multiple blocks of data into a single composite block of data, and compresses the composite block of data. The processor stores the compressed composite block of data, and stores metadata for the compressed composite block of data. The metadata includes information for both the composite block of data and each of the multiple blocks of data.
    Type: Application
    Filed: October 15, 2020
    Publication date: April 21, 2022
    Inventors: Andrew Butcher, Shyamkumar Iyer, Glen Sescila
  • Publication number: 20220121590
    Abstract: An information handling system includes a compression client, a memory, and an SDXI hardware module. The compression client issues a compression request for a block of data that is uncompressed. The memory has multiple storage locations identified by addresses, which include a source address and a destination address. The SDXI hardware module compresses the block of data to create compressed data, and determines whether the amount of compression achieved is less than a threshold amount of compression. In response to the amount of compression being less than the threshold, the SDXI hardware module disregards the compressed data, utilizes the uncompressed block of data at the source address, and updates metadata for the block of data to indicate that the data returned to the compression client is uncompressed.
    Type: Application
    Filed: October 15, 2020
    Publication date: April 21, 2022
    Inventors: Shyamkumar Iyer, Andrew Butcher, Glen Sescila
  • Publication number: 20220124164
    Abstract: An information handling system includes a publisher device and an offload device. Multiple subscriber devices are associated with the publisher device. The offload device communicates with the publisher device. The offload device receives a packet transmission from the publisher device, and translates a topic address of the packet transmission to multiple destination addresses. The offload device sends the packet transmission to each of the subscriber devices. Each of the subscriber devices is associated with a corresponding destination address of the multiple destination addresses. The offload device receives one or more acknowledgements from the subscriber devices, and combines the one or more acknowledgements into a composite completion message. The offload device sends the composite completion message to the publisher device.
    Type: Application
    Filed: October 16, 2020
    Publication date: April 21, 2022
    Inventors: Andrew Butcher, Shyamkumar Iyer, Srikrishna Ramaswamy
  • Publication number: 20220121499
    Abstract: An information handling system for compressing data includes multiple compression engines, a source data buffer to provide compression data to the compression engines, at least one destination data buffer to receive compressed data from the compression engines, and a compression engine driver. Each compression engine is configured to provide a different compression function. The compression engine driver directs each compression engine to compress data from the source data buffer, and retrieves, from the at least one destination data buffer, selected compressed data produced by a first one of the compression engines. The selection is based upon a selection criterion.
    Type: Application
    Filed: October 21, 2020
    Publication date: April 21, 2022
    Inventors: Andrew Butcher, Shyamkumar Iyer, Glen Sescila
  • Patent number: 11281602
    Abstract: An information handling system includes a processor and a hardware device. The hardware device includes a first engine to provide a first operation on data, and a second engine to provide a second operation on data. The processor provides a command to the hardware device. The command directs the first engine to perform the first operation on first data to create second data based upon the performance of the first operation on the first data, and directs the second engine to perform the second operation on the second data to create third data based upon the performance of the second operation on the second data in response to a completion signal. The hardware device is configured to provide the completion signal to the second engine when the performance of the first operation on the first data is completed.
    Type: Grant
    Filed: October 26, 2020
    Date of Patent: March 22, 2022
    Assignee: Dell Products L.P.
    Inventors: Shyamkumar Iyer, Krishna Ramaswamy, Gaurav Chawla, Glen Sescila, Andrew Butcher
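    Sketch: a minimal, hypothetical Python illustration of chaining two operations on a completion signal, as described above; the threading-based engines and the compress-then-checksum pairing are assumptions for the example, not details from the patent.

```python
# Hypothetical illustration only: Python callables and a threading.Event stand in
# for the two hardware engines and the completion signal that chains them.
import threading
import zlib

def run_chained_command(first_engine, second_engine, first_data: bytes) -> bytes:
    done = threading.Event()
    intermediate = {}

    def first_stage():
        # First engine: first data in, second data out.
        intermediate["second_data"] = first_engine(first_data)
        done.set()  # completion signal for the first operation

    threading.Thread(target=first_stage).start()
    done.wait()  # second engine starts only once the completion signal arrives
    # Second engine: second data in, third data out.
    return second_engine(intermediate["second_data"])

if __name__ == "__main__":
    result = run_chained_command(
        zlib.compress,
        lambda data: zlib.crc32(data).to_bytes(4, "big"),
        b"payload " * 100,
    )
    print(result.hex())
```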
  • Patent number: 10380041
    Abstract: A cluster manager of a computer cluster determines an allocation of resources from the endpoints for running applications on the nodes of the computer cluster and configures the computer cluster to provide resources for the applications in accordance with the allocation. The cluster may include a Peripheral Component Interconnect express (PCIe) fabric. The cluster manager may configure PCIe multi-root input/output (I/O) virtualization topologies of the computer cluster. The allocations may satisfy Quality of Service requirements, including priority class and maximum latency requirements. The allocations may involve splitting I/O traffic.
    Type: Grant
    Filed: July 14, 2015
    Date of Patent: August 13, 2019
    Assignee: Dell Products, LP
    Inventors: Shyamkumar Iyer, Matthew L. Domsch
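    Sketch: a minimal, hypothetical Python illustration of priority- and latency-aware allocation with optional splitting of I/O traffic, in the spirit of the abstract above; the greedy policy and the data structures are assumptions for the example, not the patented allocation method.

```python
# Hypothetical illustration only: plain dataclasses stand in for the cluster
# manager's view of applications and PCIe endpoints, and the greedy matching
# below is an assumed policy.
from dataclasses import dataclass

@dataclass
class Endpoint:
    name: str
    latency_us: float   # worst-case I/O latency offered by the endpoint
    capacity: int       # units of I/O bandwidth still unallocated

@dataclass
class AppRequest:
    name: str
    priority: int           # lower value = higher priority class
    max_latency_us: float   # Quality of Service latency bound
    demand: int             # units of I/O bandwidth requested

def allocate(requests: list[AppRequest], endpoints: list[Endpoint]) -> dict[str, list[tuple[str, int]]]:
    """Serve requests in priority order on endpoints that meet the latency
    bound, splitting a request's I/O across endpoints when necessary."""
    plan: dict[str, list[tuple[str, int]]] = {}
    for request in sorted(requests, key=lambda r: r.priority):
        remaining, shares = request.demand, []
        for endpoint in sorted(endpoints, key=lambda e: e.latency_us):
            if endpoint.latency_us > request.max_latency_us or endpoint.capacity == 0:
                continue
            share = min(remaining, endpoint.capacity)
            endpoint.capacity -= share
            remaining -= share
            shares.append((endpoint.name, share))
            if remaining == 0:
                break
        plan[request.name] = shares if remaining == 0 else []  # empty list = unsatisfiable
    return plan

if __name__ == "__main__":
    endpoints = [Endpoint("nvme0", 10.0, 50), Endpoint("nvme1", 25.0, 100)]
    requests = [AppRequest("db", 0, 30.0, 80), AppRequest("batch", 1, 100.0, 40)]
    print(allocate(requests, endpoints))
```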
  • Publication number: 20180165228
    Abstract: A cluster manager of a computer cluster determines an allocation of resources from the endpoints for running applications on the nodes of the computer cluster and configures the computer cluster to provide resources for the applications in accordance with the allocation. The cluster may include a Peripheral Component Interconnect express (PCIe) fabric. The cluster manager may configure PCIe multi-root input/output (I/O) virtualization topologies of the computer cluster. The allocations may satisfy Quality of Service requirements, including priority class and maximum latency requirements. The allocations may involve splitting I/O traffic.
    Type: Application
    Filed: July 14, 2015
    Publication date: June 14, 2018
    Inventors: Shyamkumar Iyer, Matthew L. Domsch
  • Publication number: 20170017585
    Abstract: A cluster manager of a computer cluster determines an allocation of resources from the endpoints for running applications on the nodes of the computer cluster and configures the computer cluster to provide resources for the applications in accordance with the allocation. The cluster may include a Peripheral Component Interconnect express (PCIe) fabric. The cluster manager may configure PCIe multi-root input/output (I/O) virtualization topologies of the computer cluster. The allocations may satisfy Quality of Service requirements, including priority class and maximum latency requirements. The allocations may involve splitting I/O traffic.
    Type: Application
    Filed: July 14, 2015
    Publication date: January 19, 2017
    Inventors: Shyamkumar Iyer, Matthew L. Domsch
  • Patent number: 9086919
    Abstract: A cluster manager of a computer cluster determines an allocation of resources from the endpoints for running applications on the nodes of the computer cluster and configures the computer cluster to provide resources for the applications in accordance with the allocation. The cluster may include a Peripheral Component Interconnect express (PCIe) fabric. The cluster manager may configure PCIe multi-root input/output (I/O) virtualization topologies of the computer cluster. The allocations may satisfy Quality of Service requirements, including priority class and maximum latency requirements. The allocations may involve splitting I/O traffic.
    Type: Grant
    Filed: August 23, 2012
    Date of Patent: July 21, 2015
    Assignee: Dell Products, LP
    Inventors: Shyamkumar Iyer, Matthew L. Domsch
  • Publication number: 20140059265
    Abstract: A cluster manager of a computer cluster determines an allocation of resources from the endpoints for running applications on the nodes of the computer cluster and configures the computer cluster to provide resources for the applications in accordance with the allocation. The cluster may include a Peripheral Component Interconnect express (PCIe) fabric. The cluster manager may configure PCIe multi-root input/output (I/O) virtualization topologies of the computer cluster. The allocations may satisfy Quality of Service requirements, including priority class and maximum latency requirements. The allocations may involve splitting I/O traffic.
    Type: Application
    Filed: August 23, 2012
    Publication date: February 27, 2014
    Applicant: DELL PRODUCTS, LP
    Inventors: Shyamkumar Iyer, Matthew L. Domsch