Patents by Inventor Srilatha Manne

Srilatha Manne has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250110884
    Abstract: Systems and techniques for selectively transferring one or more portions of a cache block in response to a request are described. Computing system components are informed as to instances where data transfer operations involve moving less than an entirety of data included in a cache block. In one example, executable code for a computational task includes hints that identify when memory requests involve accessing and transmitting less than an entirety of a cache block and cause system components to communicate a subset of the cache block during a memory access. In another example, a data differentiator unit is implemented to analyze a cache block and return a portion of the cache block that is selected based on one or more criteria specified for a computational task. The described techniques thus overcome conventional drawbacks facing systems that transmit an entire cache block when only a portion is needed.
    Type: Application
    Filed: September 29, 2023
    Publication date: April 3, 2025
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Shaizeen Dilawarhusen Aga, Nuwan S. Jayasena, Michael J. Schulte, Srilatha Manne
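A minimal sketch of the selective-transfer idea in the abstract above, assuming a 64-byte block and a software hint naming the byte range a task actually needs; the names here (CacheBlock, AccessHint, transfer) are illustrative and not taken from the application.

```python
# Minimal sketch of selective cache-block transfer (illustrative names only).
from dataclasses import dataclass
from typing import Optional

BLOCK_SIZE = 64  # bytes per cache block (assumed)

@dataclass
class AccessHint:
    offset: int   # first byte the task needs
    length: int   # how many bytes it needs

class CacheBlock:
    def __init__(self, data: bytes):
        assert len(data) == BLOCK_SIZE
        self.data = data

    def transfer(self, hint: Optional[AccessHint] = None) -> bytes:
        """Return either the whole block or only the hinted subset."""
        if hint is None:
            return self.data                       # conventional full-block transfer
        end = min(hint.offset + hint.length, BLOCK_SIZE)
        return self.data[hint.offset:end]          # partial transfer saves bandwidth

block = CacheBlock(bytes(range(BLOCK_SIZE)))
print(len(block.transfer()))                                  # 64 bytes moved
print(len(block.transfer(AccessHint(offset=8, length=4))))    # only 4 bytes moved
```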
  • Patent number: 12210457
    Abstract: A network processor includes a memory subsystem serving a plurality of processor cores. The memory subsystem includes a hierarchy of caches. A mid-level instruction cache provides for caching instructions for multiple processor cores. Likewise, a mid-level data cache provides for caching data for multiple cores, and can optionally serve as a point of serialization of the memory subsystem. A low-level cache is partitionable into partitions that are subsets of both ways and sets, and each partition can serve an independent process and/or processor core.
    Type: Grant
    Filed: July 8, 2021
    Date of Patent: January 28, 2025
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Shubhendu S. Mukherjee, David H. Asher, Richard E. Kessler, Srilatha Manne
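A rough sketch of the partitionable low-level cache described above, in which a partition owns a subset of both sets and ways; the sizes, the Partition class, and the indexing scheme are assumptions made only for illustration.

```python
# Sketch of a low-level cache split into partitions that are subsets of both
# sets and ways, each assignable to an independent core or process.
NUM_SETS, NUM_WAYS = 1024, 16

class Partition:
    def __init__(self, set_range: range, way_range: range):
        self.set_range, self.way_range = set_range, way_range

    def lookup_slots(self, address: int):
        """Map an address onto this partition's share of the cache."""
        set_index = self.set_range.start + (address // 64) % len(self.set_range)
        return [(set_index, way) for way in self.way_range]

# Two partitions: one gets the first half of the sets and ways 0-7,
# the other gets the second half of the sets and ways 8-15.
core0 = Partition(range(0, 512), range(0, 8))
core1 = Partition(range(512, 1024), range(8, 16))

print(core0.lookup_slots(0x1FC0)[:2])
print(core1.lookup_slots(0x1FC0)[:2])
```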
  • Publication number: 20240333519
    Abstract: The disclosed computing device can include super flow control unit (flit) generation circuitry configured to generate a super flit containing two or more flits having two or more requests embedded therein, wherein the two or more requests have the same destination node identifiers and the super flit has a variable size based on a flit size and a number of existing requests in a source node that target a same destination node. The device can additionally include authentication circuitry configured to append a message authentication code to a last flit of the super flit. The device can also include communication circuitry configured to send the super flit to a network switch configured to route the super flit to a destination node corresponding to the same destination node identifiers. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Application
    Filed: March 31, 2023
    Publication date: October 3, 2024
    Applicant: Advanced Micro Devices, Inc.
    Inventors: SeyedMohammad SeyedzadehDelcheh, Sergey Blagodurov, Donald Matthews, Jr., Srilatha Manne
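An illustrative sketch (not the patented circuitry) of the super-flit flow described above: requests sharing a destination node are packed into one variable-size super flit and a single message authentication code is appended to its last flit. HMAC-SHA256, the flit size, and the key are stand-in assumptions.

```python
# Batch requests by destination node into variable-size "super flits" and
# append one MAC to the last flit of each super flit.
import hmac, hashlib
from collections import defaultdict

KEY = b"shared-link-key"          # assumed per-link key
FLIT_BYTES = 16                   # assumed flit payload size

def build_super_flits(requests):
    """requests: list of (dest_node_id, payload bytes)."""
    by_dest = defaultdict(list)
    for dest, payload in requests:
        by_dest[dest].append(payload)

    super_flits = {}
    for dest, payloads in by_dest.items():
        flits = [p.ljust(FLIT_BYTES, b"\0") for p in payloads]   # one flit per request
        mac = hmac.new(KEY, b"".join(flits), hashlib.sha256).digest()
        flits[-1] = flits[-1] + mac        # MAC appended to the last flit only
        super_flits[dest] = flits          # super flit size varies with request count
    return super_flits

reqs = [(7, b"read A"), (7, b"read B"), (3, b"write C")]
for dest, flits in build_super_flits(reqs).items():
    print(dest, len(flits), "flits,", sum(len(f) for f in flits), "bytes")
```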
  • Patent number: 12093784
    Abstract: A quantum computing device comprises a surface code lattice that includes l logical qubits, where l is a positive integer. The surface code lattice is partitioned into two or more regions based on lattice geometry. A compression engine is coupled to each logical qubit of the l logical qubits. Each compression engine is configured to compress syndrome data generated by the surface code lattice using a geometry-based compression scheme. A decompression engine is coupled to each compression engine. Each decompression engine is configured to receive compressed syndrome data, decompress the received compressed syndrome data, and route the decompressed syndrome data to a decoder block.
    Type: Grant
    Filed: July 13, 2023
    Date of Patent: September 17, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Poulami Das, Nicolas Guillaume Delfosse, Christopher Anand Pattison, Srilatha Manne, Douglas Carmean, Krysta Marie Svore, Helmut Gottfried Katzgraber
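A toy sketch of the compress-decompress-decode data flow described above: syndrome bits are split into lattice regions and each region is encoded sparsely, since syndrome bits are rarely set. The region split and the sparse encoding are stand-ins for the patented geometry-based scheme.

```python
# Split a logical qubit's syndrome bits into regions, encode each region
# sparsely, then reconstruct the syndrome for the decoder.
def compress(syndrome, num_regions=4):
    """syndrome: list of 0/1 bits for one logical qubit's lattice."""
    size = len(syndrome) // num_regions
    regions = [syndrome[i * size:(i + 1) * size] for i in range(num_regions)]
    # Per region, keep only the indices of set bits.
    return [(len(r), [i for i, bit in enumerate(r) if bit]) for r in regions]

def decompress(compressed):
    out = []
    for length, set_bits in compressed:
        region = [0] * length
        for i in set_bits:
            region[i] = 1
        out.extend(region)
    return out

syndrome = [0] * 64
syndrome[5] = syndrome[40] = 1           # two detection events
packed = compress(syndrome)
assert decompress(packed) == syndrome    # the decoder sees the original syndrome
print(packed)
```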
  • Patent number: 12073287
    Abstract: A quantum computing device comprises at least one quantum register including l logical qubits, where l is a positive integer. The quantum computing device further includes a set of d decoder blocks coupled to the at least one quantum register, where d<2*l. In this way, the decoder blocks may share decoding requests generated by the logical qubits.
    Type: Grant
    Filed: November 18, 2019
    Date of Patent: August 27, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Poulami Das, Nicolas Guillaume Delfosse, Christopher Anand Pattison, Srilatha Manne, Douglas Carmean, Krysta Marie Svore, Helmut Gottfried Katzgraber
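A minimal scheduling sketch of sharing d decoder blocks among l logical qubits with d < 2*l, as in the abstract above. The queue discipline and timing are assumptions; the point is only that decode requests are multiplexed onto fewer decoders than a one-or-two-per-qubit design would require.

```python
# Multiplex decode requests from L_QUBITS logical qubits onto D_DECODERS
# shared decoder blocks, where D_DECODERS < 2 * L_QUBITS.
from collections import deque

L_QUBITS = 8
D_DECODERS = 5          # d < 2*l = 16

def schedule(requests):
    """requests: list of (qubit_id, round). Returns decoder assignments."""
    free = deque(range(D_DECODERS))
    busy, assignments = {}, []
    for qubit, rnd in requests:
        if not free:                              # all decoders occupied: retire the oldest
            done = min(busy, key=busy.get)
            free.append(done)
            del busy[done]
        decoder = free.popleft()
        busy[decoder] = rnd
        assignments.append((qubit, rnd, decoder))
    return assignments

reqs = [(q, r) for r in range(2) for q in range(L_QUBITS)]
for qubit, rnd, decoder in schedule(reqs):
    print(f"round {rnd}: qubit {qubit} -> decoder {decoder}")
```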
  • Publication number: 20240152434
    Abstract: A device for disabling faulty cores using proxy virtual machines includes a processor, a faulty core, and a physical memory. The processor executes a hypervisor configured to assign a proxy virtual machine, which carries only a minimal workload, to the faulty core. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Application
    Filed: November 6, 2023
    Publication date: May 9, 2024
    Applicant: Advanced Micro Devices, Inc.
    Inventor: Srilatha Manne
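A conceptual sketch of the proxy-VM idea above: the hypervisor pins a placeholder VM running only a minimal workload to the faulty core, so no guest work is ever scheduled there. Class and method names are illustrative, not an actual hypervisor API.

```python
# Park a faulty core behind a proxy VM with a minimal workload so that guest
# VMs are only ever placed on healthy cores.
class VirtualMachine:
    def __init__(self, name, workload):
        self.name, self.workload = name, workload

class Hypervisor:
    def __init__(self, num_cores, faulty_cores):
        self.faulty = set(faulty_cores)
        self.assignment = {}                     # core -> VM
        for core in self.faulty:
            # The proxy VM exists only to occupy the faulty core.
            self.assignment[core] = VirtualMachine(f"proxy-{core}", workload="idle-loop")
        self.healthy = [c for c in range(num_cores) if c not in self.faulty]

    def place(self, vm):
        core = self.healthy.pop(0)               # naive placement on a healthy core
        self.assignment[core] = vm
        return core

hv = Hypervisor(num_cores=4, faulty_cores=[2])
print("guest on core", hv.place(VirtualMachine("guest-0", "web-server")))
print({c: vm.name for c, vm in hv.assignment.items()})
```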
  • Publication number: 20230359912
    Abstract: A quantum computing device comprises a surface code lattice that includes l logical qubits, where l is a positive integer. The surface code lattice is partitioned into two or more regions based on lattice geometry. A compression engine is coupled to each logical qubit of the l logical qubits. Each compression engine is configured to compress syndrome data generated by the surface code lattice using a geometry-based compression scheme. A decompression engine is coupled to each compression engine. Each decompression engine is configured to receive compressed syndrome data, decompress the received compressed syndrome data, and route the decompressed syndrome data to a decoder block.
    Type: Application
    Filed: July 13, 2023
    Publication date: November 9, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Poulami Das, Nicolas Guillaume Delfosse, Christopher Anand Pattison, Srilatha Manne, Douglas Carmean, Krysta Marie Svore, Helmut Gottfried Katzgraber
  • Patent number: 11755941
    Abstract: A quantum computing device comprises a surface code lattice that includes l logical qubits, where l is a positive integer. The surface code lattice is partitioned into two or more regions based on lattice geometry. A compression engine is coupled to each logical qubit of the l logical qubits. Each compression engine is configured to compress syndrome data generated by the surface code lattice using a geometry-based compression scheme. A decompression engine is coupled to each compression engine. Each decompression engine is configured to receive compressed syndrome data, decompress the received compressed syndrome data, and route the decompressed syndrome data to a decoder block.
    Type: Grant
    Filed: August 8, 2022
    Date of Patent: September 12, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Poulami Das, Nicolas Guillaume Delfosse, Christopher Anand Pattison, Srilatha Manne, Douglas Carmean, Krysta Marie Svore, Helmut Gottfried Katzgraber
  • Patent number: 11645209
    Abstract: The size of a cache is modestly increased so that a short pointer to a predicted next memory address in the same cache is added to each cache line in the cache. In response to a cache hit, the predicted next memory address identified by the short pointer in the cache line of the hit along with an associated entry are pushed to a next faster cache when a valid short pointer to the predicted next memory address is present in the cache line of the hit.
    Type: Grant
    Filed: August 3, 2021
    Date of Patent: May 9, 2023
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Shay Gal-On, Srilatha Manne, Edward McLellan, Alexander Rucker
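A small model of the short-pointer mechanism described above: each cache line carries an optional pointer to a predicted next address in the same cache, and on a hit the pointed-to entry is pushed to the next-faster cache. The two-level dictionary model and field names are assumptions.

```python
# Cache lines augmented with a "short pointer" to a predicted next address;
# a hit with a valid pointer also pushes the predicted entry into L1.
class Line:
    def __init__(self, data, next_ptr=None):
        self.data = data
        self.next_ptr = next_ptr          # predicted next address in this cache, or None

l1, l2 = {}, {}                           # next-faster cache and this cache

def l2_access(addr):
    line = l2.get(addr)
    if line is None:
        return None                       # miss: would go to memory in a real system
    if line.next_ptr is not None and line.next_ptr in l2:
        # Valid short pointer: push the predicted next entry up to L1 as well.
        l1[line.next_ptr] = l2[line.next_ptr]
    l1[addr] = line
    return line.data

l2[0x100] = Line(b"A", next_ptr=0x240)    # prediction: 0x100 is usually followed by 0x240
l2[0x240] = Line(b"B")
l2_access(0x100)
print(sorted(hex(a) for a in l1))         # ['0x100', '0x240']
```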
  • Publication number: 20220385306
    Abstract: A quantum computing device comprises a surface code lattice that includes l logical qubits, where l is a positive integer. The surface code lattice is partitioned into two or more regions based on lattice geometry. A compression engine is coupled to each logical qubit of the l logical qubits. Each compression engine is configured to compress syndrome data generated by the surface code lattice using a geometry-based compression scheme. A decompression engine is coupled to each compression engine. Each decompression engine is configured to receive compressed syndrome data, decompress the received compressed syndrome data, and route the decompressed syndrome data to a decoder block.
    Type: Application
    Filed: August 8, 2022
    Publication date: December 1, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Poulami Das, Nicolas Guillaume Delfosse, Christopher Anand Pattison, Srilatha Manne, Douglas Carmean, Krysta Marie Svore, Helmut Gottfried Katzgraber
  • Patent number: 11410070
    Abstract: A quantum computing device comprises at least one quantum register including a plurality of logical qubits. A compression engine is coupled to each logical qubit of the plurality of logical qubits. Each compression engine is configured to compress syndrome data. A decompression engine is coupled to each compression engine. Each decompression engine is configured to receive compressed syndrome data, decompress the received compressed syndrome data, and route the decompressed syndrome data to a decoder block.
    Type: Grant
    Filed: November 18, 2019
    Date of Patent: August 9, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Poulami Das, Nicolas Guillaume Delfosse, Christopher Anand Pattison, Srilatha Manne, Douglas Carmean, Krysta Marie Svore, Helmut Gottfried Katzgraber
  • Publication number: 20210365378
    Abstract: The size of a cache is modestly increased so that a short pointer to a predicted next memory address in the same cache is added to each cache line in the cache. In response to a cache hit, the predicted next memory address identified by the short pointer in the cache line of the hit along with an associated entry are pushed to a next faster cache when a valid short pointer to the predicted next memory address is present in the cache line of the hit.
    Type: Application
    Filed: August 3, 2021
    Publication date: November 25, 2021
    Inventors: Shay Gal-On, Srilatha Manne, Edward McLellan, Alexander Rucker
  • Patent number: 11093405
    Abstract: A network processor includes a memory subsystem serving a plurality of processor cores. The memory subsystem includes a hierarchy of caches. A mid-level instruction cache provides for caching instructions for multiple processor cores. Likewise, a mid-level data cache provides for caching data for multiple cores, and can optionally serve as a point of serialization of the memory subsystem. A low-level cache is partitionable into partitions that are subsets of both ways and sets, and each partition can serve an independent process and/or processor core.
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: August 17, 2021
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Shubhendu S. Mukherjee, David H. Asher, Richard E. Kessler, Srilatha Manne
  • Patent number: 11080195
    Abstract: The size of a cache is modestly increased so that a short pointer to a predicted next memory address in the same cache is added to each cache line in the cache. In response to a cache hit, the predicted next memory address identified by the short pointer in the cache line of the hit along with an associated entry are pushed to a next faster cache when a valid short pointer to the predicted next memory address is present in the cache line of the hit.
    Type: Grant
    Filed: September 10, 2019
    Date of Patent: August 3, 2021
    Assignee: Marvell Asia Pte, Ltd.
    Inventors: Shay Gal-On, Srilatha Manne, Edward McLellan, Alexander Rucker
  • Publication number: 20210081323
    Abstract: The hit rate of an L1 icache when operating with large programs is substantially improved by reserving a section of the L1 icache for regular instructions and a section for non-instruction information. Instructions are prefetched for storage in the instruction section of the L1 icache based on information in the non-instruction section of the L1 icache.
    Type: Application
    Filed: September 12, 2019
    Publication date: March 18, 2021
    Inventors: Edward McLellan, Alexander Rucker, Shay Gal-On, Srilatha Manne
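A toy model of the split icache described above: part of the capacity is reserved for non-instruction information (here, predicted prefetch targets), and that section drives prefetch into the instruction section. The split ratio and metadata format are illustrative assumptions.

```python
# L1 icache with a reserved non-instruction section that steers prefetch.
ICACHE_LINES = 64
INSTR_LINES = 48                          # regular instruction section
META_LINES = ICACHE_LINES - INSTR_LINES   # reserved non-instruction section

instr_section = {}                        # line address -> instruction bytes
meta_section = {}                         # line address -> predicted next line address

def fetch(addr, memory):
    if addr not in instr_section:
        if len(instr_section) >= INSTR_LINES:
            instr_section.pop(next(iter(instr_section)))   # crude eviction
        instr_section[addr] = memory[addr]
    target = meta_section.get(addr)
    if target is not None and target not in instr_section:
        instr_section[target] = memory[target]             # metadata-driven prefetch
    return instr_section[addr]

memory = {0x0: b"block0", 0x40: b"block1", 0x1000: b"far-branch-target"}
meta_section[0x0] = 0x1000                # non-instruction info: 0x0 usually jumps to 0x1000
assert len(meta_section) <= META_LINES
fetch(0x0, memory)
print(sorted(hex(a) for a in instr_section))   # ['0x0', '0x1000']
```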
  • Publication number: 20210073132
    Abstract: The size of a cache is modestly increased so that a short pointer to a predicted next memory address in the same cache is added to each cache line in the cache. In response to a cache hit, the predicted next memory address identified by the short pointer in the cache line of the hit along with an associated entry are pushed to a next faster cache when a valid short pointer to the predicted next memory address is present in the cache line of the hit.
    Type: Application
    Filed: September 10, 2019
    Publication date: March 11, 2021
    Inventors: Shay Gal-On, Srilatha Manne, Edward McLellan, Alexander Rucker
  • Publication number: 20210042651
    Abstract: A quantum computing device comprises at least one quantum register including l logical qubits, where l is a positive integer. The quantum computing device further includes a set of d decoder blocks coupled to the at least one quantum register, where d<2*l. In this way, the decoder blocks may share decoding requests generated by the logical qubits.
    Type: Application
    Filed: November 18, 2019
    Publication date: February 11, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Poulami Das, Nicolas Guillaume Delfosse, Christopher Anand Pattison, Srilatha Manne, Douglas Carmean, Krysta Marie Svore, Helmut Gottfried Katzgraber
  • Publication number: 20210042650
    Abstract: A quantum computing device comprises at least one quantum register including a plurality of qubits, and a hardware decoder. The hardware decoder is configured to: receive syndrome data from one or more of the plurality of qubits; and decode the received syndrome data by implementing a Union-Find decoding algorithm via a hardware microarchitecture including two or more pipeline stages.
    Type: Application
    Filed: November 15, 2019
    Publication date: February 11, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Poulami Das, Nicolas Guillaume Delfosse, Christopher Anand Pattison, Srilatha Manne, Douglas Carmean, Krysta Marie Svore, Helmut Gottfried Katzgraber
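A small software sketch of the Union-Find primitive at the heart of the decoder above; the claimed contribution is a pipelined hardware microarchitecture, which plain Python does not capture, so the pipeline stages appear only as comments.

```python
# Union-Find with path compression over syndrome detection events.
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):                     # find root with path compression
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

# Stage 1 (conceptually): read the syndrome and seed a cluster per detection event.
detection_events = [2, 3, 7]
uf = UnionFind(10)

# Stage 2 (conceptually): grow clusters along lattice edges and merge on contact.
adjacent_pairs = [(2, 3), (7, 8)]
for a, b in adjacent_pairs:
    uf.union(a, b)

# Stage 3 (conceptually): the connected components are the clusters to correct.
print({v: uf.find(v) for v in detection_events + [8]})
```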
  • Publication number: 20210042652
    Abstract: A quantum computing device comprises at least one quantum register including a plurality of logical qubits. A compression engine is coupled to each logical qubit of the plurality of logical qubits. Each compression engine is configured to compress syndrome data. A decompression engine is coupled to each compression engine. Each decompression engine is configured to receive compressed syndrome data, decompress the received compressed syndrome data, and route the decompressed syndrome data to a decoder block.
    Type: Application
    Filed: November 18, 2019
    Publication date: February 11, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Poulami Das, Nicolas Guillaume Delfosse, Christopher Anand Pattison, Srilatha Manne, Douglas Carmean, Krysta Marie Svore, Helmut Gottfried Katzgraber
  • Patent number: 10558577
    Abstract: Managing memory access requests to a cache system including one or more cache levels that are configured to store cache lines that correspond to memory blocks in a main memory includes: storing stream information identifying recognized streams that were recognized based on previously received memory access requests, where one or more of the recognized streams comprise strided streams that each have an associated strided prefetch result corresponding to a stride that is larger than or equal to a size of a single cache line; and determining whether or not a next cache line prefetch request corresponding to a particular memory access request will be made based at least in part on whether or not the particular memory access request matches a strided prefetch result for at least one strided stream, and a history of past next cache line prefetch requests.
    Type: Grant
    Filed: June 18, 2018
    Date of Patent: February 11, 2020
    Assignee: Cavium, LLC
    Inventors: Shubhendu Sekhar Mukherjee, David Albert Carlson, Srilatha Manne
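A rough model of the prefetch-filtering decision described above: recognized strided streams are tracked in a table, and a next-cache-line prefetch is suppressed when the incoming request already matches a strided stream's predicted prefetch target, modulated by a small history of recent decisions. Thresholds and table layout are assumptions for illustration.

```python
# Decide whether to issue a next-cache-line prefetch, suppressing it when a
# recognized strided stream already covers the access.
LINE = 64

class StridedStream:
    def __init__(self, last_addr, stride):
        assert stride >= LINE              # stride is at least one cache line
        self.last_addr, self.stride = last_addr, stride

    def predicted_prefetch(self):
        return self.last_addr + self.stride

class NextLinePrefetcher:
    def __init__(self):
        self.streams = []                  # recognized strided streams
        self.history = []                  # past next-line decisions (True = issued)

    def should_prefetch_next_line(self, addr):
        if any(s.predicted_prefetch() == addr for s in self.streams):
            self.history.append(False)     # a strided stream already covers this access
            return False
        issue = self.history.count(True) >= self.history.count(False) - 2
        self.history.append(issue)
        return issue

pf = NextLinePrefetcher()
pf.streams.append(StridedStream(last_addr=0x1000, stride=4 * LINE))
print(pf.should_prefetch_next_line(0x1100))   # matches the strided prediction -> False
print(pf.should_prefetch_next_line(0x2000))   # unrelated access -> next-line prefetch
```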