Patents by Inventor Karthik SUNDARAM

Karthik SUNDARAM has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240085964
    Abstract: A system and method for updating power supply voltages due to variations from aging are described. A functional unit includes a power supply monitor capable of measuring power supply variations in a region of the functional unit. An age counter measures an age of the functional unit. A control unit notifies the power supply monitor to measure an operating voltage reference. When the control unit receives a measured operating voltage reference, the control unit determines an updated age of the region different from the current age based on the measured operating voltage reference. The control unit updates the age counter with the corresponding age, which is younger than the previous age in some cases due to the region not experiencing predicted stress and aging. The control unit is capable of determining a voltage adjustment for the operating voltage reference based on an age indicated by the age counter.
    Type: Application
    Filed: November 20, 2023
    Publication date: March 14, 2024
    Inventors: Sriram Sambamurthy, Sriram Sundaram, Indrani Paul, Larry David Hewitt, Anil Harwani, Aaron Joseph Grenat, Dana Glenn Lewis, Leonardo Piga, Wonje Choi, Karthik Rao
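
The abstract above describes closed-loop tracking of silicon aging: a power supply monitor measures an operating voltage reference, an age counter is updated from that measurement (and can move down), and a voltage adjustment is derived from the age. The Python sketch below is a minimal behavioral model of that idea, assuming a simple linear drift model; the class name, the drift constant, and the guard-band calculation are illustrative assumptions, not details taken from the publication.

```python
# Hypothetical behavioral model: a monitor measurement updates an age counter,
# which in turn sets a voltage guard band. The linear aging model is assumed.

class AgingVoltageController:
    """Tracks an age estimate for a region and derives a voltage adjustment."""

    def __init__(self, fresh_vref_mv=750.0, drift_mv_per_year=4.0):
        self.fresh_vref_mv = fresh_vref_mv          # reference voltage when new
        self.drift_mv_per_year = drift_mv_per_year  # assumed aging-induced drift
        self.age_counter_years = 0.0                # running age estimate

    def on_measurement(self, measured_vref_mv):
        """Update the age counter from a power-supply-monitor reading.

        The inferred age can be lower than the previous value if the region
        saw less stress than predicted, so the counter may move down.
        """
        drift = max(0.0, measured_vref_mv - self.fresh_vref_mv)
        self.age_counter_years = drift / self.drift_mv_per_year
        return self.age_counter_years

    def voltage_adjustment_mv(self):
        """Guard-band adjustment derived from the current age counter."""
        return self.age_counter_years * self.drift_mv_per_year


ctrl = AgingVoltageController()
ctrl.on_measurement(measured_vref_mv=762.0)   # monitor reports a drifted Vref
print(ctrl.age_counter_years, ctrl.voltage_adjustment_mv())   # 3.0 12.0
```
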
  • Publication number: 20230403226
    Abstract: Described herein are methods and systems for network performance testing. A computing device may receive a network performance request. The computing device may perform a network performance test, and determine comparable devices of one or more devices associated with the network performance request. The computing device may determine a network performance parameter for the comparable devices, and determine that one or more devices associated with the network performance request are impacting the network performance test.
    Type: Application
    Filed: May 11, 2023
    Publication date: December 14, 2023
    Inventors: Imad Amadie, Karthik Sundaram, Peifong Ren, John Raezer, Brandon Groff, Eric Bertrand
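
This application (and the related grants later in the listing) centers on a comparison step: find devices comparable to the one under test, derive a baseline performance parameter from them, and flag devices that appear to be impacting the test. The sketch below illustrates one way such a comparison could work; the data model, the comparability criteria, and the 0.8 tolerance are assumptions for illustration only, not details from the filing.

```python
from statistics import median

def find_comparable(devices, target):
    """Comparable devices here: same model and firmware, excluding the target."""
    return [d for d in devices
            if d["model"] == target["model"]
            and d["firmware"] == target["firmware"]
            and d["id"] != target["id"]]

def impacting_devices(devices, target, tolerance=0.8):
    """Flag devices whose throughput falls well below their peers' median."""
    peers = find_comparable(devices, target)
    if not peers:
        return []
    baseline = median(d["throughput_mbps"] for d in peers)
    return [d["id"] for d in devices
            if d["throughput_mbps"] < tolerance * baseline]

fleet = [
    {"id": "gw-1", "model": "X1", "firmware": "2.3", "throughput_mbps": 940},
    {"id": "gw-2", "model": "X1", "firmware": "2.3", "throughput_mbps": 910},
    {"id": "gw-3", "model": "X1", "firmware": "2.3", "throughput_mbps": 430},
]
print(impacting_devices(fleet, fleet[0]))   # ['gw-3']
```
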
  • Patent number: 11782845
    Abstract: An apparatus comprises memory management circuitry to perform a translation table walk for a target address of a memory access request and to signal a fault in response to the translation table walk identifying a fault condition for the target address, prefetch circuitry to generate a prefetch request to request prefetching of information associated with a prefetch target address to a cache; and faulting address prediction circuitry to predict whether the memory management circuitry would identify the fault condition for the prefetch target address if the translation table walk was performed by the memory management circuitry for the prefetch target address. In response to a prediction that the fault condition would be identified for the prefetch target address, the prefetch circuitry suppresses the prefetch request and the memory management circuitry prevents the translation table walk being performed for the prefetch target address of the prefetch request.
    Type: Grant
    Filed: December 2, 2021
    Date of Patent: October 10, 2023
    Assignee: Arm Limited
    Inventors: Alexander Cole Shulyak, Joseph Michael Pusdesris, Abhishek Raja, Karthik Sundaram, Anoop Ramachandra Iyer, Michael Brian Schinzler, James David Dundas, Yasuo Ishii
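
The key idea in this patent's abstract is that a predictor of faulting addresses can veto a prefetch before any translation table walk is started for it. The Python sketch below models that flow at page granularity under assumed structure sizes; the predictor organization and replacement policy are hypothetical, not taken from the patent.

```python
PAGE_SHIFT = 12  # assume 4 KiB pages

class FaultingAddressPredictor:
    """Remembers page-granule addresses whose table walks recently faulted."""

    def __init__(self, capacity=64):
        self.capacity = capacity
        self.faulting_pages = []   # simple FIFO of page numbers

    def record_fault(self, addr):
        page = addr >> PAGE_SHIFT
        if page not in self.faulting_pages:
            self.faulting_pages.append(page)
            if len(self.faulting_pages) > self.capacity:
                self.faulting_pages.pop(0)

    def predicts_fault(self, addr):
        return (addr >> PAGE_SHIFT) in self.faulting_pages


def issue_prefetch(addr, predictor, walk_and_fill):
    """Suppress the prefetch (and its table walk) if a fault is predicted."""
    if predictor.predicts_fault(addr):
        return False                    # prefetch suppressed, no walk started
    walk_and_fill(addr)                 # normal path: translate and fill cache
    return True

pred = FaultingAddressPredictor()
pred.record_fault(0x7f00_1234)          # a demand access at this page faulted
print(issue_prefetch(0x7f00_1000, pred, walk_and_fill=lambda a: None))  # False
print(issue_prefetch(0x8000_0000, pred, walk_and_fill=lambda a: None))  # True
```
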
  • Patent number: 11775440
    Abstract: Indirect prefetch circuitry initiates a producer prefetch requesting return of producer data having a producer address and at least one consumer prefetch to request prefetching of consumer data having a consumer address derived from the producer data. A producer prefetch filter table stores producer filter entries indicative of previous producer addresses of previous producer prefetches. Initiation of a requested producer prefetch for producer data having a requested producer address is suppressed when a lookup of the producer prefetch filter table determines that the requested producer address hits against a producer filter entry of the table. The lookup of the producer prefetch filter table for the requested producer address depends on a subset of bits of the requested producer address including at least one bit which distinguishes different chunks of data within a same cache line.
    Type: Grant
    Filed: January 20, 2022
    Date of Patent: October 3, 2023
    Assignee: Arm Limited
    Inventors: Alexander Cole Shulyak, Balaji Vijayan, Karthik Sundaram, Yasuo Ishii, Joseph Michael Pusdesris
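
The distinguishing detail in this patent's abstract is that the producer prefetch filter is keyed below cache-line granularity, so two producer loads in the same line but in different chunks are tracked separately. The sketch below illustrates that with an assumed direct-mapped table; the chunk size, table size, and indexing are illustrative choices, not the patented design.

```python
CHUNK_SHIFT = 3        # 8-byte chunks (one pointer-sized producer load), assumed
TABLE_ENTRIES = 32     # assumed filter table size

class ProducerPrefetchFilter:
    def __init__(self):
        self.entries = [None] * TABLE_ENTRIES

    @staticmethod
    def key(addr):
        # Chunk granularity, not line granularity: two producer loads in the
        # same 64-byte line but different 8-byte chunks get different keys.
        return addr >> CHUNK_SHIFT

    def should_suppress(self, addr):
        k = self.key(addr)
        index = k % TABLE_ENTRIES
        if self.entries[index] == k:
            return True               # seen recently: suppress the prefetch
        self.entries[index] = k       # otherwise record it and allow it
        return False

f = ProducerPrefetchFilter()
print(f.should_suppress(0x1000))   # False: first time, prefetch proceeds
print(f.should_suppress(0x1000))   # True: repeat producer address, filtered
print(f.should_suppress(0x1008))   # False: same cache line, different chunk
```
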
  • Patent number: 11729086
    Abstract: Described herein are methods and systems for network performance testing. A computing device may receive a network performance request. The computing device may perform a network performance test, and determine comparable devices of one or more devices associated with the network performance request. The computing device may determine a network performance parameter for the comparable devices, and determine that one or more devices associated with the network performance request are impacting the network performance test.
    Type: Grant
    Filed: April 1, 2021
    Date of Patent: August 15, 2023
    Assignee: Comcast Cable Communications, LLC
    Inventors: Imad Amadie, Karthik Sundaram, Peifong Ren, John Raezer, Brandon Groff, Eric Bertrand
  • Publication number: 20230229596
    Abstract: Indirect prefetch circuitry initiates a producer prefetch requesting return of producer data having a producer address and at least one consumer prefetch to request prefetching of consumer data having a consumer address derived from the producer data. A producer prefetch filter table stores producer filter entries indicative of previous producer addresses of previous producer prefetches. Initiation of a requested producer prefetch for producer data having a requested producer address is suppressed when a lookup of the producer prefetch filter table determines that the requested producer address hits against a producer filter entry of the table. The lookup of the producer prefetch filter table for the requested producer address depends on a subset of bits of the requested producer address including at least one bit which distinguishes different chunks of data within a same cache line.
    Type: Application
    Filed: January 20, 2022
    Publication date: July 20, 2023
    Inventors: Alexander Cole SHULYAK, Balaji VIJAYAN, Karthik SUNDARAM, Yasuo ISHII, Joseph Michael PUSDESRIS
  • Publication number: 20230176979
    Abstract: An apparatus comprises memory management circuitry to perform a translation table walk for a target address of a memory access request and to signal a fault in response to the translation table walk identifying a fault condition for the target address, prefetch circuitry to generate a prefetch request to request prefetching of information associated with a prefetch target address to a cache; and faulting address prediction circuitry to predict whether the memory management circuitry would identify the fault condition for the prefetch target address if the translation table walk was performed by the memory management circuitry for the prefetch target address. In response to a prediction that the fault condition would be identified for the prefetch target address, the prefetch circuitry suppresses the prefetch request and the memory management circuitry prevents the translation table walk being performed for the prefetch target address of the prefetch request.
    Type: Application
    Filed: December 2, 2021
    Publication date: June 8, 2023
    Inventors: Alexander Cole SHULYAK, Joseph Michael PUSDESRIS, Abhishek RAJA, Karthik SUNDARAM, Anoop Ramachandra IYER, Michael Brian SCHINZLER, James David DUNDAS, Yasuo ISHII
  • Publication number: 20230176973
    Abstract: Prefetch generation circuitry generates requests to prefetch data to a cache, where the prefetch generation circuitry is configured to initiate a producer prefetch to request return of producer data having a producer address and to initiate at least one consumer prefetch to request prefetching of consumer data to the cache, the consumer data having an address derived from the producer data returned in response to the producer prefetch. Training circuitry updates, based on executed load operations, a training table indicating candidate producer-consumer relationships being trained for use by the prefetch generation circuitry in generating the producer/consumer prefetches.
    Type: Application
    Filed: December 8, 2021
    Publication date: June 8, 2023
    Inventors: Alexander Cole SHULYAK, Karthik SUNDARAM
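
This publication covers the training side of producer/consumer prefetching: executed load pairs update a training table of candidate relationships before the prefetch generation circuitry uses them. The sketch below shows one plausible confidence-counter scheme; the offset window, the threshold, and the keying by program counter are assumptions for illustration, not details from the publication.

```python
PROMOTE_AT = 4   # assumed confidence needed before a candidate is used

class TrainingTable:
    def __init__(self):
        # keyed by (producer_pc, consumer_pc) -> confidence counter
        self.candidates = {}

    def observe(self, producer_pc, producer_data, consumer_pc, consumer_addr):
        """Train on one executed load pair; returns True once promoted."""
        # Candidate relationship: the consumer's address equals the producer's
        # returned data (a pointer load followed by a dereference), possibly
        # plus a small offset.
        offset = consumer_addr - producer_data
        if not (0 <= offset < 64):          # assumed plausible offset window
            return False
        key = (producer_pc, consumer_pc)
        self.candidates[key] = self.candidates.get(key, 0) + 1
        return self.candidates[key] >= PROMOTE_AT

table = TrainingTable()
for node in (0x9000, 0x9400, 0x9800, 0x9c00):      # walking a linked structure
    trained = table.observe(producer_pc=0x400100, producer_data=node,
                            consumer_pc=0x400108, consumer_addr=node + 8)
print(trained)   # True: candidate promoted after repeated confirmations
```
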
  • Patent number: 11442863
    Abstract: Data processing apparatuses and methods of processing data are disclosed. The operations comprise: storing copies of data items; and storing, in a producer pattern history table, a plurality of producer-consumer relationships, each defining an association between a producer load indicator and a plurality of consumer load entries, each consumer load entry comprising a consumer load indicator and one or more usefulness metrics. Further steps comprise: initiating, in response to a data load from an address corresponding to the producer load indicator in the producer pattern history table and when at least one of the corresponding one or more usefulness metrics meets a criterion, a producer prefetch of data to be prefetched for storing as a local copy; and issuing, when the data is returned, one or more consumer prefetches to return consumer data from a consumer address generated from the data returned by the producer prefetch and a consumer load indicator of a consumer load entry.
    Type: Grant
    Filed: November 10, 2020
    Date of Patent: September 13, 2022
    Assignee: Arm Limited
    Inventors: Alexander Cole Shulyak, Adrian Montero, Joseph Michael Pusdesris, Karthik Sundaram, Yasuo Ishii
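
The abstract of this patent describes the lookup side: a producer pattern history table maps a producer load to consumer entries carrying usefulness metrics, a qualifying demand load triggers a producer prefetch, and consumer prefetches are issued at addresses derived from the returned data. The model below is a rough illustration of that flow; the field names, the usefulness threshold, and the address arithmetic are assumptions, not the patented mechanism.

```python
USEFUL_MIN = 2   # assumed usefulness criterion

class ProducerPatternHistoryTable:
    def __init__(self):
        # producer_pc -> list of {"offset": int, "usefulness": int}
        self.entries = {}

    def add_consumer(self, producer_pc, offset, usefulness):
        self.entries.setdefault(producer_pc, []).append(
            {"offset": offset, "usefulness": usefulness})

    def on_demand_load(self, producer_pc, load_addr, read_memory):
        """If the load matches a producer entry, prefetch and chase the data."""
        consumers = self.entries.get(producer_pc, [])
        useful = [c for c in consumers if c["usefulness"] >= USEFUL_MIN]
        if not useful:
            return []
        producer_data = read_memory(load_addr)               # producer prefetch
        return [producer_data + c["offset"] for c in useful] # consumer prefetches

memory = {0x1000: 0x2000}                  # 0x1000 holds a pointer to 0x2000
pht = ProducerPatternHistoryTable()
pht.add_consumer(producer_pc=0x400200, offset=0x10, usefulness=3)
pht.add_consumer(producer_pc=0x400200, offset=0x18, usefulness=0)  # too weak

prefetches = pht.on_demand_load(0x400200, 0x1000, read_memory=memory.get)
print([hex(a) for a in prefetches])        # ['0x2010'] -- only the useful entry
```
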
  • Publication number: 20220147459
    Abstract: Data processing apparatuses and methods of processing data are disclosed. The operations comprise: storing copies of data items; and storing, in a producer pattern history table, a plurality of producer-consumer relationships, each defining an association between a producer load indicator and a plurality of consumer load entries, each consumer load entry comprising a consumer load indicator and one or more usefulness metrics. Further steps comprise: initiating, in response to a data load from an address corresponding to the producer load indicator in the producer pattern history table and when at least one of the corresponding one or more usefulness metrics meets a criterion, a producer prefetch of data to be prefetched for storing as a local copy; and issuing, when the data is returned, one or more consumer prefetches to return consumer data from a consumer address generated from the data returned by the producer prefetch and a consumer load indicator of a consumer load entry.
    Type: Application
    Filed: November 10, 2020
    Publication date: May 12, 2022
    Inventors: Alexander Cole SHULYAK, Adrian MONTERO, Joseph Michael PUSDESRIS, Karthik SUNDARAM, Yasuo ISHII
  • Publication number: 20220060403
    Abstract: Described herein are methods and systems for network performance testing. A computing device may receive a network performance request. The computing device may perform a network performance test, and determine comparable devices of one or more devices associated with the network performance request. The computing device may determine a network performance parameter for the comparable devices, and determine that one or more devices associated with the network performance request are impacting the network performance test.
    Type: Application
    Filed: April 1, 2021
    Publication date: February 24, 2022
    Inventors: Imad Amadie, Karthik Sundaram, Peifong Ren, John Raezer, Brandon Groff, Eric Bertrand
  • Publication number: 20210391045
    Abstract: Disclosed are systems, methods and devices for providing healthcare coverage matching and verification. In some aspects, a method includes receiving, from a frontend graphical user interface (GUI), a healthcare coverage application comprising client demographics that include at least two of the following: a last name, a first name, a birthdate, an identification number, an address and a request type associated with a client, using a parallel matching and verification architecture to determine, based on a first set of criteria, a matching database record from an eligibility database that corresponds to the healthcare coverage application, and communicate with a healthcare provider using electronic data interchange transactions, identify, based on a second set of criteria, a medical policy that corresponds to the matching database record, and verify the medical policy, and transmitting, to the frontend GUI, the medical policy to allow reception of the medical policy by the client.
    Type: Application
    Filed: April 26, 2021
    Publication date: December 16, 2021
    Inventors: Eric Hallemeier, Deb Grier, Gopi Kolla, Anand Padmanaban, Chris Lolo, Karthik Sundaram, Kevin Nyaribo
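
The abstract above describes a parallel matching-and-verification architecture: demographic matching against an eligibility database under a first set of criteria, and payer verification over electronic data interchange, run side by side. The sketch below is a minimal illustration using a thread pool; the record fields, criteria, and helper functions are hypothetical, and real EDI handling (e.g. 270/271 transactions) is omitted.

```python
from concurrent.futures import ThreadPoolExecutor

ELIGIBILITY_DB = [
    {"last": "Doe", "first": "Jane", "dob": "1980-02-01", "policy": "P-123"},
]

def match_record(application):
    """First set of criteria: match on at least two demographic fields."""
    for record in ELIGIBILITY_DB:
        hits = sum(application.get(k) == record[k] for k in ("last", "first", "dob"))
        if hits >= 2:
            return record
    return None

def verify_with_payer(application):
    """Stand-in for the electronic data interchange round trip to the payer."""
    return {"active": True, "plan": "Basic"}     # hypothetical payer response

def process_application(application):
    with ThreadPoolExecutor(max_workers=2) as pool:
        match_future = pool.submit(match_record, application)
        verify_future = pool.submit(verify_with_payer, application)
        record, payer = match_future.result(), verify_future.result()
    if record and payer["active"]:
        return {"policy": record["policy"], "plan": payer["plan"]}
    return None       # nothing to return to the frontend GUI

print(process_application({"last": "Doe", "first": "Jane", "dob": "1990-01-01"}))
```
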
  • Patent number: 11048637
    Abstract: A high-frequency and low-power L1 cache and associated access technique. The method may include inspecting a virtual address of an L1 data cache load instruction, and indexing into a row and a column of a way predictor table using metadata and a virtual address associated with the load instruction. The method may include matching information stored at the row and the column of the way predictor table to a location of a cache line. The method may include predicting the location of the cache line within the L1 data cache based on the information match. A hierarchy of way predictor tables may be used, with higher level way predictor tables refreshing smaller lower level way predictor tables. The way predictor tables may be trained to make better predictions over time. Only selected circuit macros need to be enabled based on the predictions, thereby saving power.
    Type: Grant
    Filed: August 21, 2019
    Date of Patent: June 29, 2021
    Inventor: Karthik Sundaram
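
This patent's abstract describes indexing a way predictor table with metadata plus virtual-address bits so that only the predicted way's circuit macros need to be enabled. The sketch below models a single-level table with training; the table geometry, the row/column hash, and the training policy are illustrative assumptions (the patent also describes a hierarchy of such tables, which is not modeled here).

```python
LINE_SHIFT = 6                       # assume 64-byte cache lines

class WayPredictorTable:
    def __init__(self, rows=16, cols=8):
        self.rows, self.cols = rows, cols
        self.table = [[None] * cols for _ in range(rows)]

    def _row_col(self, vaddr, metadata):
        line = vaddr >> LINE_SHIFT
        row = (line ^ metadata) % self.rows       # metadata mixed into the row
        col = (line >> 4) % self.cols
        return row, col

    def predict(self, vaddr, metadata=0):
        row, col = self._row_col(vaddr, metadata)
        return self.table[row][col]               # predicted way, or None

    def train(self, vaddr, actual_way, metadata=0):
        row, col = self._row_col(vaddr, metadata)
        self.table[row][col] = actual_way         # learn from the real lookup

wpt = WayPredictorTable()
print(wpt.predict(0x4_2000))        # None: no prediction yet, enable all ways
wpt.train(0x4_2000, actual_way=2)   # tag compare found the line in way 2
print(wpt.predict(0x4_2000))        # 2: next access enables only one macro
```
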
  • Patent number: 10999181
    Abstract: Described herein are methods and systems for network performance testing. A computing device may receive a network performance request. The computing device may perform a network performance test, and determine comparable devices of one or more devices associated with the network performance request. The computing device may determine a network performance parameter for the comparable devices, and determine that one or more devices associated with the network performance request are impacting the network performance test.
    Type: Grant
    Filed: August 29, 2018
    Date of Patent: May 4, 2021
    Assignee: Comcast Cable Communications, LLC
    Inventors: Imad Amadie, Karthik Sundaram, Peifong Ren, John Christopher Raezer, Brandon Groff, Eric Bertrand
  • Patent number: 10991457
    Abstract: Disclosed are systems, methods and devices for providing healthcare coverage matching and verification. In some aspects, a method includes receiving, from a frontend graphical user interface (GUI), a healthcare coverage application comprising client demographics that include at least two of the following: a last name, a first name, a birthdate, an identification number, an address and a request type associated with a client, using a parallel matching and verification architecture to determine, based on a first set of criteria, a matching database record from an eligibility database that corresponds to the healthcare coverage application, and communicate with a healthcare provider using electronic data interchange transactions, identify, based on a second set of criteria, a medical policy that corresponds to the matching database record, and verify the medical policy, and transmitting, to the frontend GUI, the medical policy to allow reception of the medical policy by the client.
    Type: Grant
    Filed: November 6, 2018
    Date of Patent: April 27, 2021
    Assignee: HEALTH MANAGEMENT SYSTEMS, INC.
    Inventors: Eric Hallemeier, Deb Grier, Gopi Kolla, Anand Padmanaban, Chris Lolo, Karthik Sundaram, Kevin Nyaribo
  • Patent number: 10956155
    Abstract: A system and a method to cascade execution of instructions in a load-store unit (LSU) of a central processing unit (CPU) to reduce latency associated with the instructions. First data stored in a cache is read by the LSU in response to a first memory load instruction of two immediately consecutive memory load instructions. Alignment, sign extension and/or endian operations are performed on the first data read from the cache in response to the first memory load instruction, and, in parallel, a memory-load address-forwarded result is selected based on a corrected alignment of the first data read in response to the first memory load instruction to provide a next address for a second of the two immediately consecutive memory load instructions. Second data stored in the cache is read by the LSU in response to the second memory load instruction based on the selected memory-load address-forwarded result.
    Type: Grant
    Filed: May 23, 2019
    Date of Patent: March 23, 2021
    Inventors: Paul E. Kitchin, Rama S. Gopal, Karthik Sundaram
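
The cascade in this patent's abstract lets the first of two back-to-back loads feed its returned value directly forward as the address of the second, in parallel with the normal alignment and sign-extension path. The behavioral sketch below shows the data flow only; the memory contents and function names are illustrative, and the hardware parallelism is represented by comments rather than modeled.

```python
CACHE = {0x1000: 0x2000,      # head pointer: first load reads this
         0x2000: 0xdeadbeef}  # payload at the forwarded address

def aligned_read(addr):
    """Stand-in for the cache read plus alignment correction."""
    return CACHE[addr]

def cascaded_loads(first_addr):
    first_data = aligned_read(first_addr)

    # Normal path: alignment / sign-extension / endian fix-up of first_data
    # completes here before writeback.
    #
    # Cascade path (in parallel in hardware): forward the corrected first_data
    # as the address of the immediately following dependent load, instead of
    # waiting for it to pass through writeback and the register file.
    forwarded_addr = first_data
    second_data = aligned_read(forwarded_addr)
    return first_data, second_data

print([hex(v) for v in cascaded_loads(0x1000)])   # ['0x2000', '0xdeadbeef']
```
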
  • Publication number: 20200401524
    Abstract: A high-frequency and low-power L1 cache and associated access technique. The method may include inspecting a virtual address of an L1 data cache load instruction, and indexing into a row and a column of a way predictor table using metadata and a virtual address associated with the load instruction. The method may include matching information stored at the row and the column of the way predictor table to a location of a cache line. The method may include predicting the location of the cache line within the L1 data cache based on the information match. A hierarchy of way predictor tables may be used, with higher level way predictor tables refreshing smaller lower level way predictor tables. The way predictor tables may be trained to make better predictions over time. Only selected circuit macros need to be enabled based on the predictions, thereby saving power.
    Type: Application
    Filed: August 21, 2019
    Publication date: December 24, 2020
    Inventor: Karthik SUNDARAM
  • Publication number: 20200076721
    Abstract: Described herein are methods and systems for network performance testing. A computing device may receive a network performance request. The computing device may perform a network performance test, and determine comparable devices of one or more devices associated with the network performance request. The computing device may determine a network performance parameter for the comparable devices, and determine that one or more devices associated with the network performance request are impacting the network performance test.
    Type: Application
    Filed: August 29, 2018
    Publication date: March 5, 2020
    Inventors: Imad Amadie, Karthik Sundaram, Peifong Ren, John Christopher Raezer, Brandon Groff, Eric Bertrand
  • Publication number: 20190278603
    Abstract: A system and a method to cascade execution of instructions in a load-store unit (LSU) of a central processing unit (CPU) to reduce latency associated with the instructions. First data stored in a cache is read by the LSU in response to a first memory load instruction of two immediately consecutive memory load instructions. Alignment, sign extension and/or endian operations are performed on the first data read from the cache in response to the first memory load instruction, and, in parallel, a memory-load address-forwarded result is selected based on a corrected alignment of the first data read in response to the first memory load instruction to provide a next address for a second of the two immediately consecutive memory load instructions. Second data stored in the cache is read by the LSU in response to the second memory load instruction based on the selected memory-load address-forwarded result.
    Type: Application
    Filed: May 23, 2019
    Publication date: September 12, 2019
    Inventors: Paul E. KITCHIN, Rama S. GOPAL, Karthik SUNDARAM
  • Patent number: 10372452
    Abstract: A system and a method to cascade execution of instructions in a load-store unit (LSU) of a central processing unit (CPU) to reduce latency associated with the instructions. First data stored in a cache is read by the LSU in response to a first memory load instruction of two immediately consecutive memory load instructions. Alignment, sign extension and/or endian operations are performed on the first data read from the cache in response to the first memory load instruction, and, in parallel, a memory-load address-forwarded result is selected based on a corrected alignment of the first data read in response to the first memory load instruction to provide a next address for a second of the two immediately consecutive memory load instructions. Second data stored in the cache is read by the LSU in response to the second memory load instruction based on the selected memory-load address-forwarded result.
    Type: Grant
    Filed: June 6, 2017
    Date of Patent: August 6, 2019
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Paul E. Kitchin, Rama S. Gopal, Karthik Sundaram