Patents by Inventor Karthik SUNDARAM
Karthik SUNDARAM has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240085964
Abstract: A system and method for updating power supply voltages due to variations from aging are described. A functional unit includes a power supply monitor capable of measuring power supply variations in a region of the functional unit. An age counter measures an age of the functional unit. A control unit notifies the power supply monitor to measure an operating voltage reference. When the control unit receives a measured operating voltage reference, the control unit determines an updated age of the region different from the current age based on the measured operating voltage reference. The control unit updates the age counter with the corresponding age, which is younger than the previous age in some cases due to the region not experiencing predicted stress and aging. The control unit is capable of determining a voltage adjustment for the operating voltage reference based on an age indicated by the age counter.
Type: Application
Filed: November 20, 2023
Publication date: March 14, 2024
Inventors: Sriram Sambamurthy, Sriram Sundaram, Indrani Paul, Larry David Hewitt, Anil Harwani, Aaron Joseph Grenat, Dana Glenn Lewis, Leonardo Piga, Wonje Choi, Karthik Rao
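The aging-compensation loop this abstract describes can be illustrated with a small Python sketch: an effective age is reduced when a measured voltage reference shows less degradation than the aging model predicted, and the operating voltage guard-band is derived from that age. The class name, the 750 mV nominal supply, and the margin-per-year figure are hypothetical assumptions, not values from the patent.

```python
class AgeTracker:
    """Tracks an effective age for a region and derives a voltage margin."""

    def __init__(self, nominal_mv=750, margin_per_year_mv=5):
        self.age_years = 0.0
        self.nominal_mv = nominal_mv
        self.margin_per_year_mv = margin_per_year_mv

    def tick(self, years):
        # Default behavior: age advances with elapsed time.
        self.age_years += years

    def update_from_measurement(self, measured_ref_mv, expected_ref_mv):
        # If the measured reference is better than the model predicted, the
        # region degraded less than assumed: scale the effective age down.
        ratio = measured_ref_mv / expected_ref_mv
        self.age_years = max(0.0, self.age_years * min(1.0, 1.0 / ratio))

    def operating_voltage_mv(self):
        # Older regions get a larger guard-band added to the nominal supply.
        return self.nominal_mv + self.age_years * self.margin_per_year_mv
```

A region that measures better than predicted is thus treated as "younger" and runs at a smaller voltage margin, matching the abstract's description of the age counter being rewound.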
-
Publication number: 20230403226
Abstract: Described herein are methods and systems for network performance testing. A computing device may receive a network performance request. The computing device may perform a network performance test, and determine comparable devices of one or more devices associated with the network performance request. The computing device may determine a network performance parameter for the comparable devices, and determine that one or more devices associated with the network performance request are impacting the network performance test.
Type: Application
Filed: May 11, 2023
Publication date: December 14, 2023
Inventors: Imad Amadie, Karthik Sundaram, Peifong Ren, John Raezer, Brandon Groff, Eric Bertrand
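The comparable-device check this abstract describes can be sketched in Python: compare the tested device's result against a baseline from peer devices, and if it falls short, flag other active devices that may be impacting the test. The function name, the 80% shortfall threshold, and the 50 Mb/s "busy" cutoff are illustrative assumptions.

```python
def find_impacting_devices(test_mbps, peers, active_devices,
                           shortfall=0.8, busy_mbps=50):
    """Return device ids likely impacting the test.

    peers: throughput samples (Mb/s) from comparable devices.
    active_devices: dict of device id -> current usage in Mb/s.
    """
    if not peers:
        return []
    baseline = sorted(peers)[len(peers) // 2]   # median peer throughput
    if test_mbps >= shortfall * baseline:
        return []                               # test result looks normal
    # Test underperformed its peers: report heavy concurrent users.
    return [dev for dev, usage in active_devices.items() if usage >= busy_mbps]
```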
-
Patent number: 11782845
Abstract: An apparatus comprises memory management circuitry to perform a translation table walk for a target address of a memory access request and to signal a fault in response to the translation table walk identifying a fault condition for the target address, prefetch circuitry to generate a prefetch request to request prefetching of information associated with a prefetch target address to a cache; and faulting address prediction circuitry to predict whether the memory management circuitry would identify the fault condition for the prefetch target address if the translation table walk was performed by the memory management circuitry for the prefetch target address. In response to a prediction that the fault condition would be identified for the prefetch target address, the prefetch circuitry suppresses the prefetch request and the memory management circuitry prevents the translation table walk being performed for the prefetch target address of the prefetch request.
Type: Grant
Filed: December 2, 2021
Date of Patent: October 10, 2023
Assignee: Arm Limited
Inventors: Alexander Cole Shulyak, Joseph Michael Pusdesris, Abhishek Raja, Karthik Sundaram, Anoop Ramachandra Iyer, Michael Brian Schinzler, James David Dundas, Yasuo Ishii
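The fault-prediction idea in this abstract can be modeled with a simple Python sketch: a table remembers page addresses that previously faulted during translation, and a hit predicts the walk would fault again, so both the prefetch and the table walk are skipped. The table capacity, FIFO replacement, and 4 KB page indexing are assumptions for illustration.

```python
PAGE_SHIFT = 12                         # assume 4 KB pages

class FaultingAddressPredictor:
    def __init__(self, capacity=64):
        self.capacity = capacity
        self.faulting_pages = []        # FIFO of page numbers seen to fault

    def record_fault(self, addr):
        page = addr >> PAGE_SHIFT
        if page not in self.faulting_pages:
            self.faulting_pages.append(page)
            if len(self.faulting_pages) > self.capacity:
                self.faulting_pages.pop(0)  # evict the oldest entry

    def predicts_fault(self, addr):
        return (addr >> PAGE_SHIFT) in self.faulting_pages

def issue_prefetch(predictor, addr, walk):
    """Suppress the prefetch (and the table walk) on a predicted fault."""
    if predictor.predicts_fault(addr):
        return None                     # prefetch suppressed
    return walk(addr)                   # normal translation table walk
```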
-
Patent number: 11775440
Abstract: Indirect prefetch circuitry initiates a producer prefetch requesting return of producer data having a producer address and at least one consumer prefetch to request prefetching of consumer data having a consumer address derived from the producer data. A producer prefetch filter table stores producer filter entries indicative of previous producer addresses of previous producer prefetches. Initiation of a requested producer prefetch for producer data having a requested producer address is suppressed when a lookup of the producer prefetch filter table determines that the requested producer address hits against a producer filter entry of the table. The lookup of the producer prefetch filter table for the requested producer address depends on a subset of bits of the requested producer address including at least one bit which distinguishes different chunks of data within a same cache line.
Type: Grant
Filed: January 20, 2022
Date of Patent: October 3, 2023
Assignee: Arm Limited
Inventors: Alexander Cole Shulyak, Balaji Vijayan, Karthik Sundaram, Yasuo Ishii, Joseph Michael Pusdesris
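The filter lookup this abstract describes can be sketched in Python. The key point is that the filter key keeps address bits below the cache-line offset, so two producer loads to different 8-byte chunks of the same 64-byte line are tracked separately rather than aliasing to one entry. The chunk size, key width, and set-based storage are illustrative assumptions.

```python
CHUNK_BYTES = 8                 # assumed sub-line chunk size (64-byte lines)

def filter_key(addr, key_bits=12):
    # Drop only the chunk offset, not the whole line offset, so the key
    # still distinguishes chunks within one cache line.
    return (addr // CHUNK_BYTES) & ((1 << key_bits) - 1)

class ProducerPrefetchFilter:
    def __init__(self):
        self.entries = set()

    def should_issue(self, producer_addr):
        """True if no previous producer prefetch matches this address."""
        key = filter_key(producer_addr)
        if key in self.entries:
            return False        # hit: suppress the redundant prefetch
        self.entries.add(key)
        return True
```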
-
Patent number: 11729086
Abstract: Described herein are methods and systems for network performance testing. A computing device may receive a network performance request. The computing device may perform a network performance test, and determine comparable devices of one or more devices associated with the network performance request. The computing device may determine a network performance parameter for the comparable devices, and determine that one or more devices associated with the network performance request are impacting the network performance test.
Type: Grant
Filed: April 1, 2021
Date of Patent: August 15, 2023
Assignee: Comcast Cable Communications, LLC
Inventors: Imad Amadie, Karthik Sundaram, Peifong Ren, John Raezer, Brandon Groff, Eric Bertrand
-
Publication number: 20230229596
Abstract: Indirect prefetch circuitry initiates a producer prefetch requesting return of producer data having a producer address and at least one consumer prefetch to request prefetching of consumer data having a consumer address derived from the producer data. A producer prefetch filter table stores producer filter entries indicative of previous producer addresses of previous producer prefetches. Initiation of a requested producer prefetch for producer data having a requested producer address is suppressed when a lookup of the producer prefetch filter table determines that the requested producer address hits against a producer filter entry of the table. The lookup of the producer prefetch filter table for the requested producer address depends on a subset of bits of the requested producer address including at least one bit which distinguishes different chunks of data within a same cache line.
Type: Application
Filed: January 20, 2022
Publication date: July 20, 2023
Inventors: Alexander Cole SHULYAK, Balaji VIJAYAN, Karthik SUNDARAM, Yasuo ISHII, Joseph Michael PUSDESRIS
-
Publication number: 20230176979
Abstract: An apparatus comprises memory management circuitry to perform a translation table walk for a target address of a memory access request and to signal a fault in response to the translation table walk identifying a fault condition for the target address, prefetch circuitry to generate a prefetch request to request prefetching of information associated with a prefetch target address to a cache; and faulting address prediction circuitry to predict whether the memory management circuitry would identify the fault condition for the prefetch target address if the translation table walk was performed by the memory management circuitry for the prefetch target address. In response to a prediction that the fault condition would be identified for the prefetch target address, the prefetch circuitry suppresses the prefetch request and the memory management circuitry prevents the translation table walk being performed for the prefetch target address of the prefetch request.
Type: Application
Filed: December 2, 2021
Publication date: June 8, 2023
Inventors: Alexander Cole SHULYAK, Joseph Michael PUSDESRIS, ABHISHEK RAJA, Karthik SUNDARAM, Anoop Ramachandra IYER, Michael Brian SCHINZLER, James David DUNDAS, Yasuo ISHII
-
Publication number: 20230176973
Abstract: Prefetch generation circuitry generates requests to prefetch data to a cache, where the prefetch generation circuitry is configured to initiate a producer prefetch to request return of producer data having a producer address and to initiate at least one consumer prefetch to request prefetching of consumer data to the cache, the consumer data having an address derived from the producer data returned in response to the producer prefetch. Training circuitry updates, based on executed load operations, a training table indicating candidate producer-consumer relationships being trained for use by the prefetch generation circuitry in generating the producer/consumer prefetches.
Type: Application
Filed: December 8, 2021
Publication date: June 8, 2023
Inventors: Alexander Cole SHULYAK, Karthik SUNDARAM
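The training step this abstract describes can be illustrated with a hypothetical Python sketch: when a load returns a value that is later used (plus an offset) as the address of another load, a candidate producer-to-consumer relationship is recorded and a confidence counter advances each time the same offset recurs. The table layout and promotion threshold are assumptions for illustration, not details from the application.

```python
class TrainingTable:
    def __init__(self, promote_at=3):
        self.candidates = {}    # producer PC -> [consumer offset, confidence]
        self.promote_at = promote_at

    def observe(self, producer_pc, produced_value, consumer_addr):
        # Candidate relationship: consumer address = produced value + offset.
        offset = consumer_addr - produced_value
        entry = self.candidates.get(producer_pc)
        if entry is not None and entry[0] == offset:
            entry[1] += 1       # same offset seen again: more confident
        else:
            self.candidates[producer_pc] = [offset, 1]

    def trained(self, producer_pc):
        entry = self.candidates.get(producer_pc)
        return entry is not None and entry[1] >= self.promote_at
```

Once a candidate is trained, the prefetch generation circuitry described in the abstract would use it to issue consumer prefetches from producer data.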
-
Patent number: 11442863
Abstract: Data processing apparatuses and methods of processing data are disclosed. The operations comprise: storing copies of data items; and storing, in a producer pattern history table, a plurality of producer-consumer relationships, each defining an association between a producer load indicator and a plurality of consumer load entries, each consumer load entry comprising a consumer load indicator and one or more usefulness metrics. Further steps comprise: initiating, in response to a data load from an address corresponding to the producer load indicator in the producer pattern history table and when at least one of the corresponding one or more usefulness metrics meets a criterion, a producer prefetch of data to be prefetched for storing as a local copy; and issuing, when the data is returned, one or more consumer prefetches to return consumer data from a consumer address generated from the data returned by the producer prefetch and a consumer load indicator of a consumer load entry.
Type: Grant
Filed: November 10, 2020
Date of Patent: September 13, 2022
Assignee: Arm Limited
Inventors: Alexander Cole Shulyak, Adrian Montero, Joseph Michael Pusdesris, Karthik Sundaram, Yasuo Ishii
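The producer pattern history table this abstract describes can be sketched in simplified Python: a producer load looks up its consumer entries, and consumer prefetch addresses are generated from the returned producer data for entries whose usefulness metric meets a threshold. Offsets as the consumer load indicator and the numeric threshold are illustrative assumptions.

```python
class ProducerPatternHistoryTable:
    def __init__(self, useful_at=2):
        self.table = {}         # producer PC -> list of (offset, usefulness)
        self.useful_at = useful_at

    def add_consumer(self, producer_pc, offset, usefulness):
        self.table.setdefault(producer_pc, []).append((offset, usefulness))

    def on_producer_load(self, producer_pc, load_data):
        """Return consumer prefetch addresses derived from the loaded data."""
        consumers = self.table.get(producer_pc, [])
        return [load_data + off for off, use in consumers
                if use >= self.useful_at]
```

In the patent's terms, only consumer entries whose usefulness metric meets the criterion contribute prefetches, so stale or unprofitable relationships stop generating traffic.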
-
Publication number: 20220147459
Abstract: Data processing apparatuses and methods of processing data are disclosed. The operations comprise: storing copies of data items; and storing, in a producer pattern history table, a plurality of producer-consumer relationships, each defining an association between a producer load indicator and a plurality of consumer load entries, each consumer load entry comprising a consumer load indicator and one or more usefulness metrics. Further steps comprise: initiating, in response to a data load from an address corresponding to the producer load indicator in the producer pattern history table and when at least one of the corresponding one or more usefulness metrics meets a criterion, a producer prefetch of data to be prefetched for storing as a local copy; and issuing, when the data is returned, one or more consumer prefetches to return consumer data from a consumer address generated from the data returned by the producer prefetch and a consumer load indicator of a consumer load entry.
Type: Application
Filed: November 10, 2020
Publication date: May 12, 2022
Inventors: Alexander Cole SHULYAK, Adrian MONTERO, Joseph Michael PUSDESRIS, Karthik SUNDARAM, Yasuo ISHII
-
Publication number: 20220060403
Abstract: Described herein are methods and systems for network performance testing. A computing device may receive a network performance request. The computing device may perform a network performance test, and determine comparable devices of one or more devices associated with the network performance request. The computing device may determine a network performance parameter for the comparable devices, and determine that one or more devices associated with the network performance request are impacting the network performance test.
Type: Application
Filed: April 1, 2021
Publication date: February 24, 2022
Inventors: Imad Amadie, Karthik Sundaram, Peifong Ren, John Raezer, Brandon Groff, Eric Bertrand
-
Publication number: 20210391045
Abstract: Disclosed are systems, methods and devices for providing healthcare coverage matching and verification. In some aspects, a method includes receiving, from a frontend graphical user interface (GUI), a healthcare coverage application comprising client demographics that include at least two of the following: a last name, a first name, a birthdate, an identification number, an address and a request type associated with a client, using a parallel matching and verification architecture to determine, based on a first set of criteria, a matching database record from an eligibility database that corresponds to the healthcare coverage application, and communicate with a healthcare provider using electronic data interchange transactions, identify, based on a second set of criteria, a medical policy that corresponds to the matching database record, and verify the medical policy, and transmitting, to the frontend GUI, the medical policy to allow reception of the medical policy by the client.
Type: Application
Filed: April 26, 2021
Publication date: December 16, 2021
Inventors: Eric Hallemeier, Deb Grier, Gopi Kolla, Anand Padmanaban, Chris Lolo, Karthik Sundaram, Kevin Nyaribo
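The two-stage flow in this abstract (match against an eligibility database on a first set of criteria, then identify and verify a policy on a second set) can be sketched in Python. The field names, scoring, and two-field threshold are hypothetical; the real system's criteria and EDI communication are not specified here.

```python
def match_record(application, eligibility_db, min_fields=2):
    """First stage: pick the eligibility record sharing the most demographics."""
    def score(record):
        return sum(1 for field in ("last_name", "first_name", "birthdate")
                   if application.get(field) == record.get(field))
    best = max(eligibility_db, key=score, default=None)
    return best if best is not None and score(best) >= min_fields else None

def verify_policy(record, policies):
    """Second stage: find an active policy tied to the matched member id."""
    for policy in policies:
        if policy["member_id"] == record["member_id"] and policy["active"]:
            return policy
    return None
```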
-
Patent number: 11048637
Abstract: A high-frequency and low-power L1 cache and associated access technique. The method may include inspecting a virtual address of an L1 data cache load instruction, and indexing into a row and a column of a way predictor table using metadata and a virtual address associated with the load instruction. The method may include matching information stored at the row and the column of the way predictor table to a location of a cache line. The method may include predicting the location of the cache line within the L1 data cache based on the information match. A hierarchy of way predictor tables may be used, with higher level way predictor tables refreshing smaller lower level way predictor tables. The way predictor tables may be trained to make better predictions over time. Only selected circuit macros need to be enabled based on the predictions, thereby saving power.
Type: Grant
Filed: August 21, 2019
Date of Patent: June 29, 2021
Inventor: Karthik Sundaram
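The way-prediction lookup this abstract describes can be modeled with a small Python sketch: low virtual-address bits select a row of the predictor table, a partial tag selects the column (way), and only the predicted way's data array would be enabled, with all ways enabled on a miss. The table geometry, 64-byte line size, and partial-tag scheme are assumptions for illustration.

```python
NUM_SETS, NUM_WAYS = 16, 4
LINE_SHIFT = 6                      # assume 64-byte cache lines

class WayPredictor:
    def __init__(self):
        self.table = [[None] * NUM_WAYS for _ in range(NUM_SETS)]

    @staticmethod
    def index(vaddr):
        # Row index from virtual-address bits above the line offset.
        return (vaddr >> LINE_SHIFT) % NUM_SETS

    def train(self, vaddr, way, partial_tag):
        # Learn which way held the line, keyed by a few tag bits.
        self.table[self.index(vaddr)][way] = partial_tag

    def predict(self, vaddr, partial_tag):
        """Return the single way to enable, or None (enable all ways)."""
        row = self.table[self.index(vaddr)]
        for way, tag in enumerate(row):
            if tag == partial_tag:
                return way
        return None
```

A correct prediction lets the cache power up one data macro instead of all four, which is the power saving the abstract claims.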
-
Patent number: 10999181
Abstract: Described herein are methods and systems for network performance testing. A computing device may receive a network performance request. The computing device may perform a network performance test, and determine comparable devices of one or more devices associated with the network performance request. The computing device may determine a network performance parameter for the comparable devices, and determine that one or more devices associated with the network performance request are impacting the network performance test.
Type: Grant
Filed: August 29, 2018
Date of Patent: May 4, 2021
Assignee: Comcast Cable Communications, LLC
Inventors: Imad Amadie, Karthik Sundaram, Peifong Ren, John Christopher Raezer, Brandon Groff, Eric Bertrand
-
Patent number: 10991457
Abstract: Disclosed are systems, methods and devices for providing healthcare coverage matching and verification. In some aspects, a method includes receiving, from a frontend graphical user interface (GUI), a healthcare coverage application comprising client demographics that include at least two of the following: a last name, a first name, a birthdate, an identification number, an address and a request type associated with a client, using a parallel matching and verification architecture to determine, based on a first set of criteria, a matching database record from an eligibility database that corresponds to the healthcare coverage application, and communicate with a healthcare provider using electronic data interchange transactions, identify, based on a second set of criteria, a medical policy that corresponds to the matching database record, and verify the medical policy, and transmitting, to the frontend GUI, the medical policy to allow reception of the medical policy by the client.
Type: Grant
Filed: November 6, 2018
Date of Patent: April 27, 2021
Assignee: HEALTH MANAGEMENT SYSTEMS, INC.
Inventors: Eric Hallemeier, Deb Grier, Gopi Kolla, Anand Padmanaban, Chris Lolo, Karthik Sundaram, Kevin Nyaribo
-
Patent number: 10956155
Abstract: A system and a method to cascade execution of instructions in a load-store unit (LSU) of a central processing unit (CPU) to reduce latency associated with the instructions. First data stored in a cache is read by the LSU in response to a first memory load instruction of two immediately consecutive memory load instructions. Alignment, sign extension and/or endian operations are performed on the first data read from the cache in response to the first memory load instruction, and, in parallel, a memory-load address-forwarded result is selected based on a corrected alignment of the first data read in response to the first memory load instruction to provide a next address for a second of the two immediately consecutive memory load instructions. Second data stored in the cache is read by the LSU in response to the second memory load instruction based on the selected memory-load address-forwarded result.
Type: Grant
Filed: May 23, 2019
Date of Patent: March 23, 2021
Inventors: Paul E. Kitchin, Rama S. Gopal, Karthik Sundaram
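The latency benefit this abstract claims can be illustrated with a back-of-the-envelope Python model: with cascading, the second (dependent) load starts as soon as the first load's raw data is available, so only one load in the chain pays the alignment/formatting latency on the critical path. The cycle counts are illustrative assumptions, not figures from the patent.

```python
def dependent_loads_cycles(cache_latency=4, format_latency=1, cascade=True):
    """Total cycles for load B whose address comes from load A's data."""
    first_formatted = cache_latency + format_latency
    if cascade:
        # B's address is forwarded from A's raw data; formatting of A's
        # result proceeds in parallel with B's cache access.
        return cache_latency + cache_latency + format_latency
    # Without cascading, B waits for A's fully formatted (aligned) result.
    return first_formatted + cache_latency + format_latency
```

Under these assumed latencies, cascading saves one cycle per dependent load, which compounds in pointer-chasing chains.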
-
Publication number: 20200401524
Abstract: A high-frequency and low-power L1 cache and associated access technique. The method may include inspecting a virtual address of an L1 data cache load instruction, and indexing into a row and a column of a way predictor table using metadata and a virtual address associated with the load instruction. The method may include matching information stored at the row and the column of the way predictor table to a location of a cache line. The method may include predicting the location of the cache line within the L1 data cache based on the information match. A hierarchy of way predictor tables may be used, with higher level way predictor tables refreshing smaller lower level way predictor tables. The way predictor tables may be trained to make better predictions over time. Only selected circuit macros need to be enabled based on the predictions, thereby saving power.
Type: Application
Filed: August 21, 2019
Publication date: December 24, 2020
Inventor: Karthik SUNDARAM
-
Publication number: 20200076721
Abstract: Described herein are methods and systems for network performance testing. A computing device may receive a network performance request. The computing device may perform a network performance test, and determine comparable devices of one or more devices associated with the network performance request. The computing device may determine a network performance parameter for the comparable devices, and determine that one or more devices associated with the network performance request are impacting the network performance test.
Type: Application
Filed: August 29, 2018
Publication date: March 5, 2020
Inventors: Imad Amadie, Karthik Sundaram, Peifong Ren, John Christopher Raezer, Brandon Groff, Eric Bertrand
-
Publication number: 20190278603
Abstract: A system and a method to cascade execution of instructions in a load-store unit (LSU) of a central processing unit (CPU) to reduce latency associated with the instructions. First data stored in a cache is read by the LSU in response to a first memory load instruction of two immediately consecutive memory load instructions. Alignment, sign extension and/or endian operations are performed on the first data read from the cache in response to the first memory load instruction, and, in parallel, a memory-load address-forwarded result is selected based on a corrected alignment of the first data read in response to the first memory load instruction to provide a next address for a second of the two immediately consecutive memory load instructions. Second data stored in the cache is read by the LSU in response to the second memory load instruction based on the selected memory-load address-forwarded result.
Type: Application
Filed: May 23, 2019
Publication date: September 12, 2019
Inventors: Paul E. KITCHIN, Rama S. GOPAL, Karthik SUNDARAM
-
Patent number: 10372452
Abstract: A system and a method to cascade execution of instructions in a load-store unit (LSU) of a central processing unit (CPU) to reduce latency associated with the instructions. First data stored in a cache is read by the LSU in response to a first memory load instruction of two immediately consecutive memory load instructions. Alignment, sign extension and/or endian operations are performed on the first data read from the cache in response to the first memory load instruction, and, in parallel, a memory-load address-forwarded result is selected based on a corrected alignment of the first data read in response to the first memory load instruction to provide a next address for a second of the two immediately consecutive memory load instructions. Second data stored in the cache is read by the LSU in response to the second memory load instruction based on the selected memory-load address-forwarded result.
Type: Grant
Filed: June 6, 2017
Date of Patent: August 6, 2019
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Paul E. Kitchin, Rama S. Gopal, Karthik Sundaram