Patents by Inventor Brian D. Hutsell
Brian D. Hutsell has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 8700865
Abstract: A shared resource management system and method are described. In one embodiment, a shared resource management system includes a plurality of engines, a shared resource, and a shared resource management unit. In one exemplary implementation, the shared resource is a memory and the shared resource management unit is a memory management unit (MMU). The plurality of engines perform processing. The shared resource supports the processing. For example, a memory stores information and instructions for the engines. The shared resource management unit manages memory operations and handles access requests associated with compressed data.
Type: Grant
Filed: November 2, 2006
Date of Patent: April 15, 2014
Assignee: NVIDIA Corporation
Inventors: James M. Van Dyke, John H. Edmondson, Lingfeng Yuan, Brian D. Hutsell
-
Patent number: 8441495
Abstract: Systems and methods for determining a compression tag state prior to memory client arbitration may reduce the latency of memory accesses. A compression tag is associated with each portion of a surface stored in memory and indicates whether the data stored in that portion is compressed. A client uses the compression tags to construct memory access requests, and the size of each request is based on whether the portion of the surface to be accessed is compressed. When multiple clients access the same surface, the compression tag reads are interlocked with the pending memory access requests to ensure that the compression tags provided to each client are accurate. This mechanism allows for memory bandwidth optimizations, including reordering memory access requests for efficient access.
Type: Grant
Filed: December 29, 2009
Date of Patent: May 14, 2013
Assignee: NVIDIA Corporation
Inventors: James M. Van Dyke, John H. Edmondson, Brian D. Hutsell, Michael F. Harris
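The tag-driven request sizing described in this abstract can be illustrated with a minimal sketch. All names, sizes, and the 4:1 compression ratio below are assumptions for illustration, not details from the patent.

```python
# Hypothetical sketch of compression-tag-aware request sizing: each tile of
# a surface has a tag bit saying whether its data is stored compressed, and
# a client reads the tag before arbitration to size its memory request.

TILE_BYTES = 256        # uncompressed tile size (assumed)
COMPRESSED_BYTES = 64   # compressed tile size (assumed 4:1 ratio)

def build_request(tile_index, compression_tags):
    """Return (offset, size) for a read of one tile, sized by its tag."""
    compressed = compression_tags[tile_index]
    size = COMPRESSED_BYTES if compressed else TILE_BYTES
    return (tile_index * TILE_BYTES, size)

# Tiles 0 and 2 are compressed, tile 1 is not.
tags = [True, False, True]
requests = [build_request(i, tags) for i in range(3)]
```

A compressed tile needs only a quarter of the bandwidth, which is why reading the tag before arbitration pays off.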
-
Patent number: 8271746
Abstract: Efficient memory management can be performed using a computer system that includes a client which requests access to a memory and a memory interface coupled to the client and to the memory. The memory interface comprises an arbiter to arbitrate requests received from the client to access data stored in the memory, a look ahead structure for managing the memory, and a request queue for queuing memory access requests. The look ahead structure is located before the arbiter so that it communicates with the memory through the arbiter.
Type: Grant
Filed: December 18, 2006
Date of Patent: September 18, 2012
Assignee: NVIDIA Corporation
Inventors: Brian D. Hutsell, James M. Van Dyke
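The idea of a look-ahead structure sitting in front of the arbiter can be sketched as a peekable request queue. The class, depth, and request shape below are all invented for illustration; the patent does not specify them.

```python
from collections import deque

# Illustrative sketch: a look-ahead structure peeks at queued requests so
# DRAM pages could be prepared early, while actual memory traffic still
# flows through the arbiter that sits behind it.

class LookAheadQueue:
    def __init__(self, depth=4):
        self.queue = deque()
        self.depth = depth  # how far ahead the structure peeks

    def enqueue(self, request):
        """request is a (bank, row) pair (assumed representation)."""
        self.queue.append(request)

    def lookahead_hints(self):
        """Pages that could be opened early for the next few requests."""
        return list(self.queue)[: self.depth]

    def dequeue(self):
        """Hand the oldest request to the arbiter."""
        return self.queue.popleft() if self.queue else None

q = LookAheadQueue(depth=2)
for req in [(0, 1), (1, 2), (0, 3)]:
    q.enqueue(req)
```

The point of the placement is that hints are advisory: only the arbiter actually issues traffic to the memory.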
-
Patent number: 8139073
Abstract: Systems and methods for determining a compression tag state prior to memory client arbitration may reduce the latency of memory accesses. A compression tag is associated with each portion of a surface stored in memory and indicates whether the data stored in that portion is compressed. A client uses the compression tags to construct memory access requests, and the size of each request is based on whether the portion of the surface to be accessed is compressed. When multiple clients access the same surface, the compression tag reads are interlocked with the pending memory access requests to ensure that the compression tags provided to each client are accurate. This mechanism allows for memory bandwidth optimizations, including reordering memory access requests for efficient access.
Type: Grant
Filed: September 18, 2006
Date of Patent: March 20, 2012
Assignee: NVIDIA Corporation
Inventors: James M. Van Dyke, John H. Edmondson, Brian D. Hutsell, Michael F. Harris
-
Patent number: 8134568
Abstract: A system and method for representing multiple prefetchable memory resources, such as frame buffers coupled to graphics devices, as a unified prefetchable memory space for access by a software application. A graphics surface may be processed by multiple graphics devices, with portions of the surface residing in separate frame buffers, each frame buffer coupled to one of the multiple graphics devices. One or more redirection regions may be specified within the unified prefetchable memory space. Accesses within a redirection region are transmitted to the prefetchable memory of a single graphics device. Accesses within the unified prefetchable memory space, but outside of any redirection region, may be broadcast to all of the prefetchable memories of the multiple graphics devices.
Type: Grant
Filed: December 15, 2004
Date of Patent: March 13, 2012
Assignee: NVIDIA Corporation
Inventors: Rick M. Iwamoto, Franck R. Diard, Brian D. Hutsell
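The redirect-or-broadcast routing rule in this abstract is simple enough to sketch directly. The region layout, addresses, and function name below are illustrative assumptions.

```python
# Hypothetical routing sketch: accesses that fall inside a redirection
# region go to one device's frame buffer; accesses outside every region
# are broadcast to all devices sharing the unified prefetchable space.

def route_access(addr, regions, num_devices):
    """regions: list of (start, end, device). Returns target device list."""
    for start, end, device in regions:
        if start <= addr < end:
            return [device]               # redirected to a single device
    return list(range(num_devices))       # broadcast to every device

# One region that redirects [0x1000, 0x2000) to device 1 (assumed layout).
regions = [(0x1000, 0x2000, 1)]
```

Broadcast outside the regions keeps the replicated portions of the surface consistent across all frame buffers.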
-
Patent number: 7882292
Abstract: An arbiter decides whether to grant access from multiple clients to a shared resource (e.g., memory) using efficiency and/or urgency terms. Urgency for a client may be determined from an “in-band” request identifier transmitted from the client to the resource along with the request and an “out-of-band” request identifier that is buffered by the client. The difference between the out-of-band request identifier and the in-band request identifier indicates the location of the request in the client buffer: a small difference indicates that the request is near the end of the buffer (high urgency), and a large difference indicates that the request is far back in the buffer (low urgency). Efficiency terms include metrics on resource overhead, such as the time needed to switch between reading and writing data over a shared memory bus, or bank management overhead such as the time for switching between DRAM banks.
Type: Grant
Filed: August 31, 2009
Date of Patent: February 1, 2011
Assignee: NVIDIA Corporation
Inventors: James M. Van Dyke, Brian D. Hutsell
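The identifier-difference urgency metric can be sketched in a few lines. The identifier width, wrap handling, and threshold below are assumptions; the patent only states that a small difference means high urgency.

```python
# Sketch of the urgency term: the client sends an in-band ID with each
# request and separately publishes an out-of-band ID for its buffer. A
# small difference between the two means the request sits near the end
# of the client's buffer, so it is urgent.

ID_BITS = 8  # assumed identifier width

def urgency(in_band_id, out_of_band_id, threshold=4):
    """Classify a request as high or low urgency (threshold assumed)."""
    diff = (out_of_band_id - in_band_id) % (1 << ID_BITS)  # wrap-safe
    return "high" if diff <= threshold else "low"
```

Using a modular difference keeps the comparison correct when the identifier counter wraps around.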
-
Patent number: 7808507
Abstract: Systems and methods for determining a compression tag state prior to memory client arbitration may reduce the latency of memory accesses. A compression tag is associated with each portion of a surface stored in memory and indicates whether the data stored in that portion is compressed. A client uses the compression tags to construct memory access requests, and the size of each request is based on whether the portion of the surface to be accessed is compressed. When multiple clients access the same surface, the compression tag reads are interlocked with the pending memory access requests to ensure that the compression tags provided to each client are accurate. This mechanism allows for memory bandwidth optimizations, including reordering memory access requests for efficient access.
Type: Grant
Filed: September 18, 2006
Date of Patent: October 5, 2010
Assignee: NVIDIA Corporation
Inventors: James M. Van Dyke, John H. Edmondson, Brian D. Hutsell, Michael F. Harris
-
Patent number: 7680992
Abstract: A memory interface permits a read-modify-write process to be implemented as an interruptible process. A pending read-modify-write is capable of being temporarily interrupted to service a higher priority memory request.
Type: Grant
Filed: December 19, 2006
Date of Patent: March 16, 2010
Assignee: Nvidia Corporation
Inventors: James M. Van Dyke, Brian D. Hutsell
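An interruptible read-modify-write can be sketched as a two-phase operation with higher-priority requests serviced between the phases. The trace format and scheduling policy below are invented for illustration.

```python
# Minimal sketch: a read-modify-write is split into a read phase and a
# write phase; between them, the interface may service higher-priority
# reads before completing the pending write.

def run_rmw_with_interrupt(rmw_addr, interrupting_reads):
    """Trace the operations issued for one interruptible RMW."""
    trace = [("read", rmw_addr)]        # read phase of the RMW
    for addr in interrupting_reads:     # higher-priority requests slip in
        trace.append(("read", addr))
    trace.append(("write", rmw_addr))   # write phase completes afterward
    return trace
```

The write phase is deferred, not dropped, so the RMW still completes once the urgent traffic has drained.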
-
Patent number: 7664907
Abstract: Techniques and systems for dynamic binning, in which a stream of requests to access a memory is sorted into a reordered stream that enables efficient access of the memory. A page stream sorter can group memory access requests so that the stream of requests it issues to memory exhibits some “locality,” such that as few pages as possible in the same bank are accessed and/or the number of page switches needed is minimized.
Type: Grant
Filed: March 8, 2007
Date of Patent: February 16, 2010
Assignee: NVIDIA Corporation
Inventors: Brian D. Hutsell, James M. VanDyke, John E. Edmondson, Benjamin C. Hertzberg
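The binning idea can be sketched as a stable grouping of pending addresses by DRAM page. The page size and function name below are assumptions for illustration.

```python
from collections import OrderedDict

# Sketch of dynamic binning: group pending requests by DRAM page so that
# requests to the same page issue back to back, minimizing page switches.

PAGE_SIZE = 1024  # assumed DRAM page size in bytes

def sort_by_page(requests):
    """Reorder addresses so same-page requests are adjacent (stable)."""
    bins = OrderedDict()
    for addr in requests:
        bins.setdefault(addr // PAGE_SIZE, []).append(addr)
    return [addr for page in bins.values() for addr in page]
```

An interleaved stream like `[0, 2048, 4, 2052]` would otherwise force a page switch on every request; after binning it needs only one.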
-
Patent number: 7647467
Abstract: On-the-fly tuning of parameters used in an interface between a memory (e.g., high speed memory such as DRAM) and a processor requesting access to the memory. In an operational mode, a memory controller couples the processor to the memory. The memory controller can also inhibit the operational mode to initiate a training mode, in which it tunes one or more parameters (voltage references, timing skews, etc.) used in an upcoming operational mode. The access to the memory may be from an isochronous process running on a graphics processor, so the memory controller determines whether the isochronous process may be inhibited before entering the training mode. If the memory buffers for the isochronous process are filled such that the training mode will not impact the isochronous process, the memory controller can enter the training mode and tune the interface parameters without negatively impacting the process.
Type: Grant
Filed: December 19, 2006
Date of Patent: January 12, 2010
Assignee: NVIDIA Corporation
Inventors: Brian D. Hutsell, Sameer M. Gauria, Philip R. Manela, John A. Robinson
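The gate on entering training mode reduces to a buffer-occupancy check. The buffer model and parameters below are assumptions, not details from the patent.

```python
# Sketch of the training-mode gate: enter training only if the isochronous
# client (e.g., a display scan-out) has buffered enough data to ride out
# the pause in memory service without underflowing.

def can_enter_training(buffered_bytes, drain_rate_bps, training_time_s):
    """True if the buffered data outlasts the training pause (assumed model)."""
    return buffered_bytes >= drain_rate_bps * training_time_s
```

With 1000 bytes buffered and a drain rate of 100 B/s, a 5-second training pause is safe; with only 100 bytes buffered it is not.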
-
Patent number: 7617368
Abstract: A memory interface coupling a plurality of clients to a memory having memory banks provides independent arbitration of activate decisions and read/write decisions. In one implementation, precharge decisions are also independently arbitrated.
Type: Grant
Filed: December 19, 2006
Date of Patent: November 10, 2009
Assignee: NVIDIA Corporation
Inventors: James M. Van Dyke, Brian D. Hutsell
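Independent arbitration of activates and reads/writes can be sketched with a toy bank-state model. The selection policy, request shape, and open-row bookkeeping below are all assumptions made for illustration.

```python
# Sketch: in one arbitration cycle, a read/write can issue to a bank whose
# row is already open while, independently, an activate is chosen for a
# request whose bank needs a different row opened.

def arbitrate(requests, open_rows):
    """requests: list of (bank, row); open_rows: bank -> open row.
    Returns (activate_pick, read_write_pick), either of which may be None."""
    read_write = next(
        ((b, r) for b, r in requests if open_rows.get(b) == r), None)
    activate = next(
        ((b, r) for b, r in requests if open_rows.get(b) != r), None)
    return activate, read_write
```

Arbitrating the two decisions separately lets row activation for a future request overlap with data transfer for a current one.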
-
Patent number: 7603503
Abstract: An arbiter decides whether to grant access from multiple clients to a shared resource (e.g., memory) using efficiency and/or urgency terms. Urgency for a client may be determined from an “in-band” request identifier transmitted from the client to the resource along with the request and an “out-of-band” request identifier that is buffered by the client. The difference between the out-of-band request identifier and the in-band request identifier indicates the location of the request in the client buffer: a small difference indicates that the request is near the end of the buffer (high urgency), and a large difference indicates that the request is far back in the buffer (low urgency). Efficiency terms include metrics on resource overhead, such as the time needed to switch between reading and writing data over a shared memory bus, or bank management overhead such as the time for switching between DRAM banks.
Type: Grant
Filed: December 19, 2006
Date of Patent: October 13, 2009
Assignee: NVIDIA Corporation
Inventors: Brian D. Hutsell, James M. Van Dyke
-
Patent number: 7596647
Abstract: An arbiter decides whether to grant access from multiple clients to a shared resource (e.g., memory) using efficiency and/or urgency terms. Urgency for a client may be determined from an “in-band” request identifier transmitted from the client to the resource along with the request and an “out-of-band” request identifier that is buffered by the client. The difference between the out-of-band request identifier and the in-band request identifier indicates the location of the request in the client buffer: a small difference indicates that the request is near the end of the buffer (high urgency), and a large difference indicates that the request is far back in the buffer (low urgency). Efficiency terms include metrics on resource overhead, such as the time needed to switch between reading and writing data over a shared memory bus, or bank management overhead such as the time for switching between DRAM banks.
Type: Grant
Filed: December 19, 2006
Date of Patent: September 29, 2009
Assignee: NVIDIA Corporation
Inventors: James M. Van Dyke, Brian D. Hutsell
-
Patent number: 7508398
Abstract: A system and method for providing antialiased memory access includes receiving a request to access a memory address. The memory address is examined to determine whether it falls within a virtual frame buffer. If so, the memory address is transformed into one or more physical addresses within a frame buffer that is utilized for antialiasing. The frame buffer may be a single memory space containing subpixel information corresponding to pixels of the virtual frame buffer. Subpixels located at the physical addresses within the frame buffer are then accessed. The disclosed invention provides for direct access by a software application.
Type: Grant
Filed: August 22, 2003
Date of Patent: March 24, 2009
Assignee: NVIDIA Corporation
Inventors: John S. Montrym, Brian D. Hutsell, Steven E. Molnar, Gary M. Tarolli, Christopher T. Cheng, Emmett M. Kilgariff, Abraham B. de Waal
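The one-to-many address transform at the heart of this abstract can be sketched with a simple linear layout. The sample count, sample size, and buffer layout below are assumptions, not the patent's actual mapping.

```python
# Hypothetical address transform: one pixel address in the virtual frame
# buffer expands into the addresses of its N subpixel samples in the
# physical antialiasing buffer.

SAMPLES_PER_PIXEL = 4   # assumed 4x multisampling
BYTES_PER_SAMPLE = 4    # assumed 32-bit samples

def to_physical(virtual_addr, vfb_base, fb_base):
    """Expand one virtual pixel address into its subpixel sample addresses."""
    pixel_index = (virtual_addr - vfb_base) // BYTES_PER_SAMPLE
    sample_base = fb_base + pixel_index * SAMPLES_PER_PIXEL * BYTES_PER_SAMPLE
    return [sample_base + i * BYTES_PER_SAMPLE
            for i in range(SAMPLES_PER_PIXEL)]
```

Because the transform happens in the memory path, software can address pixels in the virtual frame buffer without knowing the antialiasing layout.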
-
Publication number: 20070294470
Abstract: A memory interface coupling a plurality of clients to a memory having memory banks provides independent arbitration of activate decisions and read/write decisions. In one implementation, precharge decisions are also independently arbitrated.
Type: Application
Filed: December 19, 2006
Publication date: December 20, 2007
Applicant: NVIDIA Corporation
Inventors: James M. Van Dyke, Brian D. Hutsell