Patents by Inventor Brad Simeral
Brad Simeral has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9823990
Abstract: Embodiments of the claimed subject matter are directed to methods and systems that allow tracking and accounting of wear and other aging effects in integrated circuits and products which include integrated circuits over time, and the dynamic adjustment of operating conditions to increase or decrease wear in response to the accumulated wear relative to the expected wear during the lifetime of the circuit and/or product.
Type: Grant
Filed: September 5, 2012
Date of Patent: November 21, 2017
Assignee: Nvidia Corporation
Inventor: Brad Simeral
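The wear-tracking idea in this abstract can be sketched in a few lines. This is an illustrative example only, not taken from the patent: the linear wear model, the 10% thresholds, and the action names are all invented here for clarity.

```python
# Illustrative sketch (not from the patent): track accumulated wear against
# an expected wear budget for the part's age, and nudge operating conditions
# up or down accordingly. Model, thresholds, and names are assumptions.

EXPECTED_LIFETIME_HOURS = 5 * 365 * 24  # assumed 5-year design lifetime

def expected_wear(age_hours: float) -> float:
    """Fraction of the total wear budget expected to be consumed by this age."""
    return min(age_hours / EXPECTED_LIFETIME_HOURS, 1.0)

def adjust_operating_point(accumulated_wear: float, age_hours: float) -> str:
    """Compare accumulated wear to expected wear and pick an adjustment."""
    budget = expected_wear(age_hours)
    if accumulated_wear > budget * 1.1:   # aging faster than planned
        return "derate"                   # e.g. lower voltage/frequency
    if accumulated_wear < budget * 0.9:   # aging slower than planned
        return "boost"                    # headroom for more performance
    return "hold"

# A part at half its lifetime that has consumed 60% of its wear budget
# is ahead of schedule, so it gets derated.
print(adjust_operating_point(0.6, EXPECTED_LIFETIME_HOURS / 2))  # derate
```

The key point the abstract makes is the feedback loop: the adjustment runs both directions, increasing wear (boosting) when the part is aging more slowly than expected, not only decreasing it.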
-
Patent number: 9760150
Abstract: A method of entering a power conservation state comprises selecting and entering one of a plurality of low power states for the computer system in response to a detected system idle event. The plurality of low power states comprise a first low power state and a second low power state for the computer system. A memory of the computer system is self refreshed during the first low power state. A baseband module of the computer system remains powered, and the memory is accessible to the baseband module during the second low power state. The one low power state is selected depending upon baseband module activity. The method also includes exiting from the one of a plurality of low power states when a wake event is detected.
Type: Grant
Filed: November 27, 2012
Date of Patent: September 12, 2017
Assignee: NVIDIA Corporation
Inventors: Sagheer Ahmad, Pete Cumming, Brad Simeral, Matthew Longnecker, Sudeshna Guha
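The state-selection logic this abstract describes can be sketched as a small state machine. This is a hedged illustration, not the patented implementation: the state names (`LP_SELF_REFRESH`, `LP_BASEBAND`) and the `PowerManager` class are invented here.

```python
# Illustrative sketch (names invented): on a system idle event, choose between
# two low power states based on baseband module activity; exit on a wake event.

def select_low_power_state(baseband_active: bool) -> str:
    """Pick the state the abstract describes: a deeper state with memory in
    self refresh when the baseband is quiet, or a lighter state that keeps
    the baseband powered and memory accessible when it is active."""
    if baseband_active:
        return "LP_BASEBAND"      # baseband stays powered, memory accessible
    return "LP_SELF_REFRESH"      # memory self-refreshed, lowest power

class PowerManager:
    def __init__(self):
        self.state = "ACTIVE"

    def on_idle(self, baseband_active: bool):
        """System idle event: enter one of the plurality of low power states."""
        self.state = select_low_power_state(baseband_active)

    def on_wake(self):
        """Wake event: exit whichever low power state was entered."""
        self.state = "ACTIVE"

pm = PowerManager()
pm.on_idle(baseband_active=False)
print(pm.state)  # LP_SELF_REFRESH
pm.on_wake()
print(pm.state)  # ACTIVE
```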
-
Publication number: 20140149770
Abstract: A method of entering a power conservation state comprises selecting and entering one of a plurality of low power states for the computer system in response to a detected system idle event. The plurality of low power states comprise a first low power state and a second low power state for the computer system. A memory of the computer system is self refreshed during the first low power state. A baseband module of the computer system remains powered, and the memory is accessible to the baseband module during the second low power state. The one low power state is selected depending upon baseband module activity. The method also includes exiting from the one of a plurality of low power states when a wake event is detected.
Type: Application
Filed: November 27, 2012
Publication date: May 29, 2014
Applicant: NVIDIA CORPORATION
Inventors: Sagheer Ahmad, Pete Cumming, Brad Simeral, Matthew Longnecker, Sudeshna Guha
-
Patent number: 8736617
Abstract: A method of displaying graphics data is described. The method involves accessing the graphics data in a memory subsystem associated with one graphics subsystem. The graphics data is transmitted to a second graphics subsystem, where it is displayed on a monitor coupled to the second graphics subsystem.
Type: Grant
Filed: August 4, 2008
Date of Patent: May 27, 2014
Assignee: Nvidia Corporation
Inventors: Stephen Lew, Bruce R. Intihar, Abraham B. de Waal, David G. Reed, Tony Tamasi, David Wyatt, Franck R. Diard, Brad Simeral
-
Publication number: 20140068298
Abstract: Embodiments of the claimed subject matter are directed to methods and systems that allow tracking and accounting of wear and other aging effects in integrated circuits and products which include integrated circuits over time, and the dynamic adjustment of operating conditions to increase or decrease wear in response to the accumulated wear relative to the expected wear during the lifetime of the circuit and/or product.
Type: Application
Filed: September 5, 2012
Publication date: March 6, 2014
Applicant: NVIDIA CORPORATION
Inventor: Brad Simeral
-
Patent number: 8532098
Abstract: A system and method for communicating over a single virtual channel. The method includes reserving a first group of credits of a credit pool for a first traffic class and a second group of credits of the credit pool for a second traffic class. In addition, first and second groups of tags are reserved from a tag pool for the first and second traffic classes, respectively. A packet may then be selected from a first buffer for transmission over the virtual channel. The packet may include a traffic indicator of the first traffic class operable to allow the packet to pass a packet of the second traffic class from a second buffer. The method further includes sending the packet over the virtual channel and adjusting the first group of credits and the first group of tags based on having sent a packet of the first traffic class.
Type: Grant
Filed: November 30, 2009
Date of Patent: September 10, 2013
Assignee: Nvidia Corporation
Inventors: David Reed, Oren Rubinstein, Brad Simeral, Devang Sachdev, Daphne Das, Radha Kanekal, Dennis Ma, Praveen Jain, Manas Mandal
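The credit/tag partitioning the abstract describes can be illustrated with a small sketch. This is an invented data structure for illustration, not the patented design: the class names and counter layout are assumptions.

```python
# Illustrative sketch (invented structure): partition one credit pool and one
# tag pool between two traffic classes that share a single virtual channel,
# and charge each send against the sending class's reserved share.

class VirtualChannel:
    def __init__(self, credits_a: int, credits_b: int, tags_a: int, tags_b: int):
        # Reserve disjoint groups of the credit pool and tag pool per class.
        self.credits = {"A": credits_a, "B": credits_b}
        self.tags = {"A": tags_a, "B": tags_b}

    def send(self, traffic_class: str) -> bool:
        """Send one packet of the given class if its reserved credits and
        tags allow it, adjusting both counters on success."""
        if self.credits[traffic_class] > 0 and self.tags[traffic_class] > 0:
            self.credits[traffic_class] -= 1
            self.tags[traffic_class] -= 1
            return True
        return False  # this class exhausted its reservation; it must wait

vc = VirtualChannel(credits_a=2, credits_b=1, tags_a=2, tags_b=1)
vc.send("A"); vc.send("A")
print(vc.send("A"))  # False: class A is out of credits...
print(vc.send("B"))  # True: ...but class B still makes progress
```

The point of the per-class reservation is exactly what the last two lines show: one traffic class exhausting its credits cannot block the other class on the same virtual channel.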
-
Publication number: 20110128963
Abstract: A system and method for communicating over a single virtual channel. The method includes reserving a first group of credits of a credit pool for a first traffic class and a second group of credits of the credit pool for a second traffic class. In addition, first and second groups of tags are reserved from a tag pool for the first and second traffic classes, respectively. A packet may then be selected from a first buffer for transmission over the virtual channel. The packet may include a traffic indicator of the first traffic class operable to allow the packet to pass a packet of the second traffic class from a second buffer. The method further includes sending the packet over the virtual channel and adjusting the first group of credits and the first group of tags based on having sent a packet of the first traffic class.
Type: Application
Filed: November 30, 2009
Publication date: June 2, 2011
Applicant: NVIDIA CORPORATION
Inventors: David Reed, Oren Rubinstein, Brad Simeral, Devang Sachdev, Daphne Das, Radha Kanekal, Dennis Ma, Praveen Jain, Manas Mandal
-
Publication number: 20100026692
Abstract: A method of displaying graphics data is described. The method involves accessing the graphics data in a memory subsystem associated with one graphics subsystem. The graphics data is transmitted to a second graphics subsystem, where it is displayed on a monitor coupled to the second graphics subsystem.
Type: Application
Filed: August 4, 2008
Publication date: February 4, 2010
Applicant: NVIDIA CORPORATION
Inventors: Stephen Lew, Bruce R. Intihar, Abraham B. de Waal, David G. Reed, Tony Tamasi, David Wyatt, Franck R. Diard, Brad Simeral
-
Patent number: 7287202
Abstract: Methods and apparatuses of debugging and/or testing an interface are disclosed. Briefly, in accordance with one particular embodiment, testing an interface includes detecting a data exchange error in a computing system having an interface. In response to detection of the data exchange error, one or more testing operations are triggered at a time subsequent to the detection of the error.
Type: Grant
Filed: April 5, 2005
Date of Patent: October 23, 2007
Inventors: Brad Simeral, Scott Cartier, Morgan Teachworth
-
Publication number: 20070143640
Abstract: A data path controller, a computer device, an apparatus and a method are disclosed for integrating power management functions into a data path controller to manage power consumed by processors and peripheral devices. By embedding power management within the data path controller, the data path controller can advantageously modify its criteria in-situ so that it can adapt its power management actions in response to changes in processors and peripheral devices. In addition, the data path controller includes a power-managing interface that provides power-monitoring ports for monitoring and/or quantifying power consumption of various components. In one embodiment, the data path controller includes a power-monitoring interface for selectably monitoring power of a component.
Type: Application
Filed: December 16, 2005
Publication date: June 21, 2007
Inventors: Brad Simeral, David Reed, Dmitry Vyshetsky, Roman Surgutchick, Robert Chapman, Joshua Titus, Anand Srinivasan, Hari Krishnan
-
Publication number: 20060095677
Abstract: A system, apparatus, and method are disclosed for managing predictive accesses to memory. In one embodiment, an exemplary apparatus is configured as a prediction inventory that stores predictions in a number of queues. Each queue is configured to maintain predictions until a subset of the predictions is either issued to access a memory or filtered out as redundant. In another embodiment, an exemplary prefetcher predicts accesses to a memory. The prefetcher comprises a speculator for generating a number of predictions and a prediction inventory, which includes queues each configured to maintain a group of items. The group of items typically includes a triggering address that corresponds to the group. Each item of the group is of one type of prediction. Also, the prefetcher includes an inventory filter configured to compare the number of predictions against one of the queues having either the same or a different prediction type as the number of predictions.
Type: Application
Filed: August 17, 2004
Publication date: May 4, 2006
Inventors: Ziyad Hakura, Brian Langendorf, Stefano Pescador, Radoslav Danilak, Brad Simeral
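The prediction inventory and inventory filter can be sketched as per-type queues with a redundancy check on insert. This is an invented illustration of the idea, not the patented structure; the class and method names are assumptions.

```python
# Illustrative sketch (invented structure): a prediction inventory keeps
# per-type queues of predicted addresses. An inventory filter compares a new
# prediction against a queue and drops it as redundant if already present;
# otherwise the prediction waits in its queue until issued to memory.
from collections import deque

class PredictionInventory:
    def __init__(self):
        self.queues = {}  # prediction type -> deque of predicted addresses

    def insert(self, pred_type: str, address: int) -> bool:
        """Filter the prediction against its type's queue; enqueue if new."""
        q = self.queues.setdefault(pred_type, deque())
        if address in q:
            return False  # filtered out as redundant
        q.append(address)
        return True

    def issue(self, pred_type: str):
        """Issue the oldest pending prediction of this type to access memory."""
        q = self.queues.get(pred_type)
        return q.popleft() if q else None

inv = PredictionInventory()
print(inv.insert("sequential", 0x1000))      # True: new prediction queued
print(inv.insert("sequential", 0x1000))      # False: redundant, filtered
print(hex(inv.issue("sequential")))          # 0x1000
```

Note the abstract also allows filtering against a queue of a *different* prediction type; the sketch checks only the same-type queue for brevity.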
-
Publication number: 20060064561
Abstract: Embodiments of methods and apparatuses of operating a memory controller are disclosed.
Type: Application
Filed: September 20, 2004
Publication date: March 23, 2006
Inventors: Brad Simeral, Chin Shih, David Reed
-
Publication number: 20060041722
Abstract: A system, apparatus, and method are disclosed for storing predictions as well as examining and using one or more caches for anticipating accesses to a memory. In one embodiment, an exemplary apparatus is a prefetcher for managing predictive accesses with a memory. The prefetcher can include a speculator to generate a range of predictions, and multiple caches. For example, the prefetcher can include a first cache and a second cache to store predictions. An entry of the first cache is addressable by a first representation of an address from the range of predictions, whereas an entry of the second cache is addressable by a second representation of the address. The first and the second representations are compared in parallel against the stored predictions of the first cache, the second cache, or both.
Type: Application
Filed: August 17, 2004
Publication date: February 23, 2006
Inventors: Ziyad Hakura, Radoslav Danilak, Brad Simeral, Brian Langendorf, Stefano Pescador, Dmitry Vyshetsky
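The two-representation lookup this abstract describes can be illustrated with a small sketch. This is a hedged example: the patent does not specify the representations, so the full address and a cache-line-granular form are assumed here purely for illustration.

```python
# Illustrative sketch (representations assumed, not from the patent): two
# prediction caches, each addressable by a different representation of the
# same address, probed in parallel (conceptually) on lookup.

def first_repr(address: int) -> int:
    return address            # full byte address

def second_repr(address: int) -> int:
    return address >> 6       # assumed: 64-byte cache-line granularity

class DualCachePrefetcher:
    def __init__(self):
        self.first_cache = set()    # entries keyed by the first representation
        self.second_cache = set()   # entries keyed by the second representation

    def store_prediction(self, address: int):
        self.first_cache.add(first_repr(address))
        self.second_cache.add(second_repr(address))

    def lookup(self, address: int):
        """Compare both representations against their caches."""
        return (first_repr(address) in self.first_cache,
                second_repr(address) in self.second_cache)

p = DualCachePrefetcher()
p.store_prediction(0x12345)
print(p.lookup(0x12345))  # (True, True): exact address and its line both hit
print(p.lookup(0x12346))  # (False, True): different byte, same 64-byte line
```

The coarser second representation lets the prefetcher recognize a near-miss (same line, different byte) that an exact-address match alone would not catch.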
-
Publication number: 20060041721
Abstract: A system, apparatus, and method are disclosed for storing and prioritizing predictions to anticipate nonsequential accesses to a memory. In one embodiment, an exemplary apparatus is configured as a prefetcher for predicting accesses to a memory. The prefetcher includes a prediction generator configured to generate a prediction that is unpatternable to an address. Also, the prefetcher can include a target cache coupled to the prediction generator to maintain the prediction in a manner that determines a priority for the prediction. In another embodiment, the prefetcher can also include a priority adjuster. The priority adjuster sets a priority for a prediction relative to other predictions. In some cases, the placement of the prediction is indicative of the priority relative to priorities for the other predictions. In yet another embodiment, the prediction generator uses the priority to determine that the prediction is to be generated before other predictions.
Type: Application
Filed: August 17, 2004
Publication date: February 23, 2006
Inventors: Ziyad Hakura, Brian Langendorf, Stefano Pescador, Radoslav Danilak, Brad Simeral
-
Publication number: 20060041723
Abstract: A system, apparatus, and method are disclosed for predicting accesses to memory. In one embodiment, an exemplary apparatus comprises a processor configured to execute program instructions and process program data, a memory including the program instructions and the program data, and a memory processor. The memory processor can include a speculator configured to receive an address containing the program instructions or the program data. Such a speculator can comprise a sequential predictor for generating a configurable number of sequential addresses. The speculator can also include a nonsequential predictor configured to associate a subset of addresses to the address and to predict a group of addresses based on at least one address of the subset, wherein at least one address of the subset is unpatternable to the address.
Type: Application
Filed: August 17, 2004
Publication date: February 23, 2006
Inventors: Ziyad Hakura, Brian Langendorf, Stefano Pescador, Radoslav Danilak, Brad Simeral
-
Patent number: 6957298
Abstract: A memory controller system is provided including a plurality of memory controller subsystems each coupled between memory and one of a plurality of computer components. Each memory controller subsystem includes at least one queue for managing pages in the memory. In use, each memory controller subsystem is capable of being loaded from the associated computer component independent of the state of the memory. Since high bandwidth and low latency are conflicting requirements in high performance memory systems, the present invention separates references from various computer components into multiple command streams. Each stream thus can hide activate bank preparation commands within its own stream for maximum bandwidth. A page context switch technique may be employed that allows instantaneous switching from one look ahead stream to another to allow low latency and high bandwidth while preserving maximum bank state from the previous stream.
Type: Grant
Filed: September 8, 2003
Date of Patent: October 18, 2005
Assignee: NVIDIA Corporation
Inventors: James M. Van Dyke, Nicholas J. Foskett, Brad Simeral, Sean Treichler
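The per-component command-stream idea can be sketched as one queue per component with a stream switch on arbitration. This is an invented illustration of the concept, not the patented microarchitecture; the class names, component names, and request format are all assumptions.

```python
# Illustrative sketch (names invented): separate memory references from each
# computer component into its own command stream, so each stream can be loaded
# by its component independent of memory state, and the controller can switch
# page context from one look-ahead stream to another.
from collections import deque

class MemoryControllerSubsystem:
    """One queue per computer component (e.g. CPU, GPU, display)."""
    def __init__(self, component: str):
        self.component = component
        self.queue = deque()

    def load(self, request: str):
        # Loaded from its component independent of the memory's state.
        self.queue.append(request)

class MemoryController:
    def __init__(self, components):
        self.subsystems = {c: MemoryControllerSubsystem(c) for c in components}
        self.active_stream = None

    def submit(self, component: str, request: str):
        self.subsystems[component].load(request)

    def next_command(self, component: str):
        """Switch page context to another look-ahead stream and pop from it."""
        self.active_stream = component
        q = self.subsystems[component].queue
        return q.popleft() if q else None

mc = MemoryController(["cpu", "gpu"])
mc.submit("cpu", "read 0x100")
mc.submit("gpu", "read 0x200")
print(mc.next_command("gpu"))  # read 0x200: served without waiting on cpu stream
print(mc.next_command("cpu"))  # read 0x100
```

Keeping the streams separate is what lets each one hide its own bank-preparation commands: preparation for the next reference in a stream can overlap with data transfers from another stream.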
-
Publication number: 20050128203
Abstract: Embodiments of the invention accelerate at least one special purpose processor, such as a GPU, or a driver managing a special purpose processor, by using at least one co-processor. Advantageously, embodiments of the invention are fault-tolerant in that the at least one GPU or other special purpose processor is able to execute all computations, although perhaps at a lower level of performance, if the at least one co-processor is rendered inoperable. The co-processor may also be used selectively, based on performance considerations.
Type: Application
Filed: December 11, 2003
Publication date: June 16, 2005
Inventors: Jen-Hsun Huang, Michael Cox, Ziyad Hakura, John Montrym, Brad Simeral, Brian Langendorf, Blanton Kephart, Franck Diard
-
Patent number: 6647456
Abstract: A memory controller system is provided including a plurality of memory controller subsystems each coupled between memory and one of a plurality of computer components. Each memory controller subsystem includes at least one queue for managing pages in the memory. In use, each memory controller subsystem is capable of being loaded from the associated computer component independent of the state of the memory. Since high bandwidth and low latency are conflicting requirements in high performance memory systems, the present invention separates references from various computer components into multiple command streams. Each stream thus can hide precharge and activate bank preparation commands within its own stream for maximum bandwidth.
Type: Grant
Filed: February 23, 2001
Date of Patent: November 11, 2003
Assignee: NVIDIA Corporation
Inventors: James M. Van Dyke, Nicholas J. Foskett, Brad Simeral, Sean Treichler