Patents by Inventor James M. Van Dyke

James M. Van Dyke has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO). Brief, unofficial code sketches illustrating several of the listed techniques appear after the listing.

  • Patent number: 8698814
    Abstract: A mapping engine maps general processing clusters (GPCs) within a parallel processing subsystem to screen tiles on a display screen based on the number of enabled streaming multiprocessors (SMs) within each GPC. A given GPC then generates pixels for the screen tiles to which the GPC is mapped. One advantage of the disclosed technique is that a given GPC performs a fraction of the processing tasks associated with the parallel processing subsystem that is roughly proportional to the fraction of SMs included within that GPC.
    Type: Grant
    Filed: October 13, 2009
    Date of Patent: April 15, 2014
    Assignee: Nvidia Corporation
    Inventor: James M. Van Dyke
  • Patent number: 8700865
    Abstract: A shared resource management system and method are described. In one embodiment a shared resource management system includes a plurality of engines, a shared resource, and a shared resource management unit. In one exemplary implementation the shared resource is a memory and the shared resource management unit is a memory management unit (MMU). The plurality of engines perform processing. The shared resource supports the processing. For example, a memory stores information and instructions for the engines. The shared resource management unit manages memory operations and handles access requests associated with compressed data.
    Type: Grant
    Filed: November 2, 2006
    Date of Patent: April 15, 2014
    Assignee: NVIDIA Corporation
    Inventors: James M. Van Dyke, John H. Edmondson, Lingfeng Yuan, Brian D. Hutsell
  • Patent number: 8441495
    Abstract: Systems and methods for determining a compression tag state prior to memory client arbitration may reduce the latency for memory accesses. A compression tag is associated with each portion of a surface stored in memory and indicates whether the data stored in that portion is compressed. A client uses the compression tags to construct memory access requests, and the size of each request is based on whether the portion of the surface to be accessed is compressed. When multiple clients access the same surface, the compression tag reads are interlocked with the pending memory access requests to ensure that the compression tags provided to each client are accurate. This mechanism allows for memory bandwidth optimizations, including reordering memory access requests for efficient access.
    Type: Grant
    Filed: December 29, 2009
    Date of Patent: May 14, 2013
    Assignee: NVIDIA Corporation
    Inventors: James M. Van Dyke, John H. Edmondson, Brian D. Hutsell, Michael F. Harris
  • Patent number: 8271746
    Abstract: Efficient memory management can be performed using a computer system that includes a client which requests access to a memory and a memory interface coupled to the client and to the memory. The memory interface comprises an arbiter to arbitrate requests received from the client to access data stored in the memory, a look-ahead structure for managing the memory, and a request queue for queuing memory access requests. The look-ahead structure is located before the arbiter, so that it communicates with the memory through the arbiter.
    Type: Grant
    Filed: December 18, 2006
    Date of Patent: September 18, 2012
    Assignee: NVIDIA Corporation
    Inventors: Brian D. Hutsell, James M. Van Dyke
  • Patent number: 8139073
    Abstract: Systems and methods for determining a compression tag state prior to memory client arbitration may reduce the latency for memory accesses. A compression tag is associated with each portion of a surface stored in memory and indicates whether the data stored in that portion is compressed. A client uses the compression tags to construct memory access requests, and the size of each request is based on whether the portion of the surface to be accessed is compressed. When multiple clients access the same surface, the compression tag reads are interlocked with the pending memory access requests to ensure that the compression tags provided to each client are accurate. This mechanism allows for memory bandwidth optimizations, including reordering memory access requests for efficient access.
    Type: Grant
    Filed: September 18, 2006
    Date of Patent: March 20, 2012
    Assignee: NVIDIA Corporation
    Inventors: James M. Van Dyke, John H. Edmondson, Brian D. Hutsell, Michael F. Harris
  • Patent number: 8072463
    Abstract: A graphics system utilizes virtual memory pages and has a partitioned graphics memory that includes memory elements. The system supports having a non-power-of-two number of active memory elements. Additionally, a partition swizzling operation is used to adjust the partition numbers associated with individual units of virtual memory allocation on particular virtual memory pages to achieve a selected partition interleaving pattern.
    Type: Grant
    Filed: October 4, 2006
    Date of Patent: December 6, 2011
    Assignee: NVIDIA Corporation
    Inventors: James M. Van Dyke, John H. Edmondson, John S. Montrym
  • Patent number: 7932912
    Abstract: A graphics system has virtual memory and a partitioned graphics memory that supports having a non-power-of-two number of dynamic random access memories (DRAMs). The graphics system utilizes page table entries to support addressing Tag RAMs used to store tag bits indicative of a compression status.
    Type: Grant
    Filed: November 2, 2006
    Date of Patent: April 26, 2011
    Assignee: Nvidia Corporation
    Inventor: James M. Van Dyke
  • Publication number: 20110078359
    Abstract: One embodiment of the present invention sets forth a technique for computing dynamic random access memory (DRAM) addresses from linear physical addresses for memory subsystems implementing integral power-of-two virtual page sizes and an arbitrary number of available partitions. Each DRAM address comprises a row address, column address, bank address, and partition address. The linear physical address is used to generate the DRAM address in units of a DRAM bank size. Address scrambling may be implemented to overcome transient access contention to specific DRAM pages by multiple client modules.
    Type: Application
    Filed: September 21, 2010
    Publication date: March 31, 2011
    Inventor: James M. Van Dyke
  • Patent number: 7884829
    Abstract: A graphics system has a partitioned graphics memory that includes memory elements. The system supports having a non-power-of-two number of active memory elements. In one implementation, the memory elements are dynamic random access memories (DRAMs) and the system supports having a non-power-of-two number of active DRAMs.
    Type: Grant
    Filed: October 4, 2006
    Date of Patent: February 8, 2011
    Assignee: NVIDIA Corporation
    Inventors: James M. Van Dyke, John S. Montrym
  • Patent number: 7882292
    Abstract: An arbiter grants access by multiple clients to a shared resource (e.g., memory) using efficiency and/or urgency terms. Urgency for a client may be determined based on an “in-band” request identifier transmitted from the client to the resource along with the request, and an “out-of-band” request identifier that is buffered by the client. A difference between the out-of-band request identifier and the in-band request identifier indicates the location of the request in the client buffer. A small difference indicates that the request is near the end of the buffer (high urgency), and a large difference indicates that the request is far back in the buffer (low urgency). Efficiency terms include metrics on resource overhead, such as the time needed to switch between reading and writing data over a shared memory bus, or bank management overhead such as the time needed to switch between DRAM banks.
    Type: Grant
    Filed: August 31, 2009
    Date of Patent: February 1, 2011
    Assignee: NVIDIA Corporation
    Inventors: James M. Van Dyke, Brian D. Hutsell
  • Patent number: 7872657
    Abstract: Systems and methods for addressing memory where data is interleaved across different banks using different interleaving granularities improve graphics memory bandwidth by distributing graphics data for efficient access during rendering. Various partition strides may be selected to modify the number of sequential addresses mapped to each DRAM and change the interleaving granularity. A memory addressing scheme is used to allow different partition strides for each virtual memory page without causing memory aliasing problems in which physical memory locations in one virtual memory page are also mapped to another virtual memory page. When a physical memory address lies within a virtual memory page crossing region, the smallest partition stride is used to access the physical memory.
    Type: Grant
    Filed: June 16, 2006
    Date of Patent: January 18, 2011
    Assignee: NVIDIA Corporation
    Inventors: John H. Edmondson, James M. Van Dyke
  • Patent number: 7808507
    Abstract: Systems and methods for determining a compression tag state prior to memory client arbitration may reduce the latency for memory accesses. A compression tag is associated with each portion of a surface stored in memory and indicates whether the data stored in that portion is compressed. A client uses the compression tags to construct memory access requests, and the size of each request is based on whether the portion of the surface to be accessed is compressed. When multiple clients access the same surface, the compression tag reads are interlocked with the pending memory access requests to ensure that the compression tags provided to each client are accurate. This mechanism allows for memory bandwidth optimizations, including reordering memory access requests for efficient access.
    Type: Grant
    Filed: September 18, 2006
    Date of Patent: October 5, 2010
    Assignee: NVIDIA Corporation
    Inventors: James M. Van Dyke, John H. Edmondson, Brian D. Hutsell, Michael F. Harris
  • Patent number: 7805587
    Abstract: Embodiments of the present invention enable virtual-to-physical memory address translation using optimized bank and partition interleave patterns to improve memory bandwidth by distributing data accesses over multiple banks and multiple partitions. Each virtual page has a corresponding page table entry that specifies the physical address of the virtual page in linear physical address space. The page table entry also includes a data kind field that is used to guide and optimize the mapping process from the linear physical address space to the DRAM physical address space, which is used to directly access one or more DRAMs. The DRAM physical address space includes a row, bank, and column address. The data kind field is also used to optimize the starting partition number and partition interleave pattern that defines the organization of the selected physical page of memory within the DRAM memory system.
    Type: Grant
    Filed: November 1, 2006
    Date of Patent: September 28, 2010
    Assignee: NVIDIA Corporation
    Inventors: James M. Van Dyke, John H. Edmondson
  • Patent number: 7680992
    Abstract: A memory interface permits a read-modify-write process to be implemented as an interruptible process. A pending read-modify-write is capable of being temporarily interrupted to service a higher priority memory request.
    Type: Grant
    Filed: December 19, 2006
    Date of Patent: March 16, 2010
    Assignee: Nvidia Corporation
    Inventors: James M. Van Dyke, Brian D. Hutsell
  • Patent number: 7617368
    Abstract: A memory interface coupling a plurality of clients to a memory having memory banks provides independent arbitration of activate decisions and read/write decisions. In one implementation, precharge decisions are also independently arbitrated.
    Type: Grant
    Filed: December 19, 2006
    Date of Patent: November 10, 2009
    Assignee: NVIDIA Corporation
    Inventors: James M. Van Dyke, Brian D. Hutsell
  • Patent number: 7603503
    Abstract: An arbiter grants access by multiple clients to a shared resource (e.g., memory) using efficiency and/or urgency terms. Urgency for a client may be determined based on an “in-band” request identifier transmitted from the client to the resource along with the request, and an “out-of-band” request identifier that is buffered by the client. A difference between the out-of-band request identifier and the in-band request identifier indicates the location of the request in the client buffer. A small difference indicates that the request is near the end of the buffer (high urgency), and a large difference indicates that the request is far back in the buffer (low urgency). Efficiency terms include metrics on resource overhead, such as the time needed to switch between reading and writing data over a shared memory bus, or bank management overhead such as the time needed to switch between DRAM banks.
    Type: Grant
    Filed: December 19, 2006
    Date of Patent: October 13, 2009
    Assignee: NVIDIA Corporation
    Inventors: Brian D. Hutsell, James M. Van Dyke
  • Patent number: 7596647
    Abstract: An arbiter grants access by multiple clients to a shared resource (e.g., memory) using efficiency and/or urgency terms. Urgency for a client may be determined based on an “in-band” request identifier transmitted from the client to the resource along with the request, and an “out-of-band” request identifier that is buffered by the client. A difference between the out-of-band request identifier and the in-band request identifier indicates the location of the request in the client buffer. A small difference indicates that the request is near the end of the buffer (high urgency), and a large difference indicates that the request is far back in the buffer (low urgency). Efficiency terms include metrics on resource overhead, such as the time needed to switch between reading and writing data over a shared memory bus, or bank management overhead such as the time needed to switch between DRAM banks.
    Type: Grant
    Filed: December 19, 2006
    Date of Patent: September 29, 2009
    Assignee: NVIDIA Corporation
    Inventors: James M. Van Dyke, Brian D. Hutsell
  • Patent number: 7525551
    Abstract: Ripmapping and footprint assembly are used to anisotropically filter texture maps. A subset of the set of ripmaps associated with a base texture is created and stored. The subset includes ripmaps selected to maximize anisotropic texture sampling performance and to minimize the texture memory requirements. For pixel footprints not aligned with the anisotropy of ripmaps or requiring a ripmap outside of the subset, footprint assembly is used to perform anisotropic filtering by taking multiple isotropic probes from a mipmap. For texture samples aligned within a tolerance range of the anisotropy of a ripmap, footprint assembly constructs an anisotropic texture sample from one or more samples of a ripmap. Ripmap statistics are collected during texture mapping to dynamically determine an optimal subset of ripmaps, and additional ripmaps can be added to the subset on demand if warranted. A graphics driver can analyze ripmap statistics to determine the subset of ripmaps.
    Type: Grant
    Filed: November 1, 2004
    Date of Patent: April 28, 2009
    Assignee: Nvidia Corporation
    Inventors: William P. Newhall, Jr., James M. Van Dyke
  • Patent number: 7400327
    Abstract: A memory system having a number of partitions, each operative to independently service memory requests from a plurality of memory clients while maintaining the appearance to the memory clients of a single-partition memory subsystem. The memory request specifies a location in the memory system and a transfer size. A partition receives input from an arbiter circuit which, in turn, receives input from a number of client queues for the partition. The arbiter circuit selects a client queue based on a priority policy such as round robin, least recently used, or a static or dynamic policy. A router receives a memory request, determines the one or more partitions needed to service the request, and stores the request in the client queues for the servicing partitions.
    Type: Grant
    Filed: February 4, 2005
    Date of Patent: July 15, 2008
    Assignee: NVIDIA Corporation
    Inventors: James M. Van Dyke, John S. Montrym, Steven E. Molnar
  • Patent number: 7369133
    Abstract: A memory system having a number of partitions, each operative to independently service memory requests from a plurality of memory clients while maintaining the appearance to the memory clients of a single-partition memory subsystem. The memory request specifies a location in the memory system and a transfer size. A partition receives input from an arbiter circuit which, in turn, receives input from a number of client queues for the partition. The arbiter circuit selects a client queue based on a priority policy such as round robin, least recently used, or a static or dynamic policy. A router receives a memory request, determines the one or more partitions needed to service the request, and stores the request in the client queues for the servicing partitions.
    Type: Grant
    Filed: February 4, 2005
    Date of Patent: May 6, 2008
    Assignee: Nvidia Corporation
    Inventors: James M. Van Dyke, John S. Montrym, Steven E. Molnar
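
The short C programs below are unofficial sketches of some of the techniques summarized in the abstracts above. Every function name, constant, and data structure in them is an assumption made for readability; none of it is taken from the patents or from any NVIDIA implementation.

For patent 8698814, a minimal sketch of assigning screen tiles to GPCs in proportion to each GPC's enabled-SM count. The deficit-based assignment loop is an assumed stand-in for whatever mapping the patent actually claims.

```c
#include <stdio.h>

#define NUM_GPCS  4
#define NUM_TILES 16

int main(void)
{
    /* Hypothetical enabled-SM counts per GPC (e.g., after floor-sweeping). */
    int sm_count[NUM_GPCS] = { 4, 3, 4, 2 };

    int total_sms = 0;
    for (int g = 0; g < NUM_GPCS; ++g)
        total_sms += sm_count[g];

    /* Walk the screen tiles and give each one to the GPC whose share of
     * tiles so far falls furthest below its share of SMs, so the final
     * tile counts end up roughly proportional to the SM counts. */
    int tile_owner[NUM_TILES];
    int assigned[NUM_GPCS] = { 0 };
    for (int t = 0; t < NUM_TILES; ++t) {
        int best = 0;
        double best_deficit = -1.0e9;
        for (int g = 0; g < NUM_GPCS; ++g) {
            double target  = (double)sm_count[g] / total_sms * (t + 1);
            double deficit = target - assigned[g];
            if (deficit > best_deficit) {
                best_deficit = deficit;
                best = g;
            }
        }
        tile_owner[t] = best;
        assigned[best]++;
    }

    for (int t = 0; t < NUM_TILES; ++t)
        printf("tile %2d -> GPC %d\n", t, tile_owner[t]);
    return 0;
}
```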
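
For patents 8441495, 8139073, and 7808507, a sketch of sizing a memory request from a per-tile compression tag read before arbitration. The tile sizes, the tag array, and the 4:1 compression ratio are assumptions, and the interlock between tag reads and pending requests described in the abstracts is omitted.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_TILES        16
#define TILE_BYTES       256   /* uncompressed tile size (assumed) */
#define COMPRESSED_BYTES  64   /* compressed footprint (assumed 4:1) */

typedef struct {
    uint32_t tile;   /* which tile of the surface */
    uint32_t bytes;  /* how many bytes to transfer */
} mem_request_t;

/* One compression tag per tile of the surface. */
static bool comp_tag[NUM_TILES] = {
    true, false, true, true, false, false, true, false,
    true, true, false, true, false, true, false, true
};

/* Build a request whose size depends on the tile's compression state.
 * In the patents the tag read is interlocked with pending requests so a
 * client never sees a stale tag; that interlock is omitted here. */
static mem_request_t make_request(uint32_t tile)
{
    mem_request_t r;
    r.tile  = tile;
    r.bytes = comp_tag[tile] ? COMPRESSED_BYTES : TILE_BYTES;
    return r;
}

int main(void)
{
    for (uint32_t t = 0; t < NUM_TILES; ++t) {
        mem_request_t r = make_request(t);
        printf("tile %2u: request %3u bytes (%s)\n",
               (unsigned)r.tile, (unsigned)r.bytes,
               comp_tag[t] ? "compressed" : "uncompressed");
    }
    return 0;
}
```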
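
For patent 8271746, a rough sketch of a look-ahead structure sitting in front of an arbiter: it peeks at a window of queued requests so that preparatory work (here, just noting which DRAM banks will be needed soon) can begin before the arbiter grants them. The queue layout, window depth, and bank-hint idea are assumptions.

```c
#include <stdint.h>
#include <stdio.h>

#define QUEUE_DEPTH     8
#define LOOKAHEAD_DEPTH 4
#define NUM_BANKS       4

typedef struct {
    uint32_t addr;
} request_t;

typedef struct {
    request_t slots[QUEUE_DEPTH];
    int head, count;
} request_queue_t;

/* Assumed address-to-bank mapping. */
static int bank_of(uint32_t addr) { return (addr >> 8) & (NUM_BANKS - 1); }

/* Look ahead at up to LOOKAHEAD_DEPTH pending requests and report which
 * banks they will touch, without dequeuing anything. */
static void look_ahead(const request_queue_t *q, int bank_needed[NUM_BANKS])
{
    for (int b = 0; b < NUM_BANKS; ++b)
        bank_needed[b] = 0;
    int n = q->count < LOOKAHEAD_DEPTH ? q->count : LOOKAHEAD_DEPTH;
    for (int i = 0; i < n; ++i) {
        const request_t *r = &q->slots[(q->head + i) % QUEUE_DEPTH];
        bank_needed[bank_of(r->addr)] = 1;
    }
}

int main(void)
{
    request_queue_t q = { .head = 0, .count = 5 };
    uint32_t addrs[5] = { 0x0000, 0x0100, 0x0100, 0x0300, 0x0200 };
    for (int i = 0; i < 5; ++i)
        q.slots[i].addr = addrs[i];

    int hint[NUM_BANKS];
    look_ahead(&q, hint);
    for (int b = 0; b < NUM_BANKS; ++b)
        printf("bank %d needed soon: %s\n", b, hint[b] ? "yes" : "no");
    return 0;
}
```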
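
For patent 8072463, a sketch of per-page partition swizzling with a non-power-of-two partition count: the partition backing each block of a virtual page is rotated by a page-dependent offset so consecutive pages start on different partitions. The block size, swizzle function, and modulo mapping are assumptions.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_PARTITIONS  3   /* deliberately non-power-of-two */
#define BLOCKS_PER_PAGE 8   /* units of virtual memory allocation per page */

/* Per-page swizzle: derive a rotation from the virtual page number. */
static unsigned swizzle(uint32_t page)
{
    return (page ^ (page >> 3)) % NUM_PARTITIONS;
}

/* Partition number for a given block of a given page. */
static unsigned partition_of(uint32_t page, uint32_t block)
{
    return (block + swizzle(page)) % NUM_PARTITIONS;
}

int main(void)
{
    for (uint32_t page = 0; page < 3; ++page) {
        printf("page %u:", (unsigned)page);
        for (uint32_t blk = 0; blk < BLOCKS_PER_PAGE; ++blk)
            printf(" P%u", partition_of(page, blk));
        printf("\n");
    }
    return 0;
}
```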
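
For publication 20110078359, a sketch of decomposing a linear physical address into partition, row, bank, and column fields with an arbitrary (here, three) number of partitions. The field order, the geometry constants, and the omission of the address scrambling mentioned in the abstract are all assumptions.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_PARTITIONS    3     /* arbitrary, need not be a power of two */
#define PARTITION_STRIDE  256   /* bytes interleaved per partition (assumed) */
#define NUM_COLUMNS       1024  /* column positions per row (assumed) */
#define NUM_BANKS         8     /* banks per DRAM (assumed) */

typedef struct {
    unsigned partition, row, bank, column;
} dram_addr_t;

static dram_addr_t decompose(uint64_t phys)
{
    dram_addr_t d;

    /* Which partition, and the byte offset within that partition's space. */
    uint64_t stripe = phys / PARTITION_STRIDE;
    uint64_t within = phys % PARTITION_STRIDE;
    d.partition = (unsigned)(stripe % NUM_PARTITIONS);
    uint64_t part_off = (stripe / NUM_PARTITIONS) * PARTITION_STRIDE + within;

    /* Treat the per-partition offset as column (low), bank, row (high). */
    d.column = (unsigned)(part_off % NUM_COLUMNS);
    d.bank   = (unsigned)((part_off / NUM_COLUMNS) % NUM_BANKS);
    d.row    = (unsigned)(part_off / NUM_COLUMNS / NUM_BANKS);
    return d;
}

int main(void)
{
    uint64_t addrs[] = { 0x0000, 0x0100, 0x0200, 0x12345 };
    for (int i = 0; i < 4; ++i) {
        dram_addr_t d = decompose(addrs[i]);
        printf("0x%05llx -> partition %u, row %u, bank %u, column %u\n",
               (unsigned long long)addrs[i],
               d.partition, d.row, d.bank, d.column);
    }
    return 0;
}
```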
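
For patents 7882292, 7603503, and 7596647, a sketch of the urgency term: the modular difference between a request's in-band identifier and the client's out-of-band identifier approximates how close the request is to the end of the client's buffer, and a small difference maps to high urgency. The identifier width, the subtraction direction, and the thresholds are assumptions; the efficiency terms are not modeled.

```c
#include <stdint.h>
#include <stdio.h>

/* Modular difference between the in-band identifier (carried with the
 * request) and the out-of-band identifier (tracked by the client). The
 * result is the request's distance from the end of the client's buffer. */
static uint8_t distance(uint8_t inband, uint8_t outband)
{
    return (uint8_t)(inband - outband);   /* wraps naturally modulo 256 */
}

static const char *urgency(uint8_t dist)
{
    if (dist < 4)  return "high";     /* nearly drained: stall imminent */
    if (dist < 16) return "medium";
    return "low";                     /* far back in the buffer: can wait */
}

int main(void)
{
    struct { uint8_t inband, outband; } reqs[] = {
        { 10,  9 },   /* just ahead of the out-of-band pointer */
        { 30,  9 },   /* far back in the buffer */
        {  2, 250 },  /* identifiers have wrapped */
    };
    for (int i = 0; i < 3; ++i) {
        uint8_t d = distance(reqs[i].inband, reqs[i].outband);
        printf("request %d: distance %u -> %s urgency\n",
               i, (unsigned)d, urgency(d));
    }
    return 0;
}
```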
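
For patent 7872657, a sketch of per-page partition strides: each virtual page can use its own stride, and addresses that fall in a page-crossing region are decoded with the smallest stride so neighboring pages with different strides agree on the mapping. The stride values, page size, and crossing-region width are assumptions.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_PARTITIONS 4
#define PAGE_SIZE      4096
#define MIN_STRIDE     256    /* smallest supported partition stride */
#define CROSS_REGION   1024   /* bytes near a page boundary (assumed) */

/* Map a physical address to a partition using the page's own stride,
 * except near page boundaries, where the smallest stride is used so the
 * same physical bytes decode identically from either neighboring page. */
static unsigned partition_of(uint64_t phys, unsigned page_stride)
{
    uint64_t page_off = phys % PAGE_SIZE;

    unsigned stride = page_stride;
    if (page_off < CROSS_REGION || page_off >= PAGE_SIZE - CROSS_REGION)
        stride = MIN_STRIDE;

    return (unsigned)((phys / stride) % NUM_PARTITIONS);
}

int main(void)
{
    /* A page that uses a 1024-byte stride: interior vs crossing-region offsets. */
    uint64_t interior = 2 * PAGE_SIZE + 2048;   /* middle of the page */
    uint64_t edge     = 2 * PAGE_SIZE + 128;    /* inside the crossing region */
    printf("interior -> partition %u\n", partition_of(interior, 1024));
    printf("edge     -> partition %u\n", partition_of(edge, 1024));
    return 0;
}
```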
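
For patent 7805587, a sketch of a data kind field in a page table entry selecting the starting partition and partition interleave pattern for a page. The kind table and both interleave patterns are invented for illustration.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_PARTITIONS 4

typedef struct {
    uint64_t phys_base;   /* linear physical address of the page */
    uint8_t  kind;        /* e.g., 0 = color surface, 1 = depth (assumed) */
} pte_t;

/* Per-kind interleave parameters (illustrative only). */
typedef struct {
    unsigned start_partition;
    unsigned stride_blocks;   /* blocks per partition before advancing */
} interleave_t;

static const interleave_t kind_table[2] = {
    { .start_partition = 0, .stride_blocks = 1 },  /* fine interleave */
    { .start_partition = 2, .stride_blocks = 2 },  /* coarser interleave */
};

/* Partition for a given block of the page, chosen by the page's kind. */
static unsigned partition_of(const pte_t *pte, unsigned block)
{
    const interleave_t *il = &kind_table[pte->kind];
    return (il->start_partition + block / il->stride_blocks) % NUM_PARTITIONS;
}

int main(void)
{
    pte_t color = { .phys_base = 0x10000, .kind = 0 };
    pte_t depth = { .phys_base = 0x20000, .kind = 1 };
    for (unsigned blk = 0; blk < 8; ++blk)
        printf("block %u: color -> P%u, depth -> P%u\n",
               blk, partition_of(&color, blk), partition_of(&depth, blk));
    return 0;
}
```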
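
For patent 7680992, a sketch of an interruptible read-modify-write: the read-and-merge and the write-back are separate steps, and a higher-priority request can be serviced between them. The state machine and the (omitted) hazard check are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef enum { RMW_IDLE, RMW_READ_DONE, RMW_COMPLETE } rmw_state_t;

typedef struct {
    rmw_state_t state;
    uint8_t merged[8];   /* data staged between the read and the write-back */
} rmw_t;

static uint8_t memory[8] = { 0, 1, 2, 3, 4, 5, 6, 7 };

/* Step 1: read the old data and merge in the partial write. */
static void rmw_read_and_merge(rmw_t *r, const uint8_t *wdata,
                               const bool *byte_enable)
{
    memcpy(r->merged, memory, sizeof memory);
    for (int i = 0; i < 8; ++i)
        if (byte_enable[i])
            r->merged[i] = wdata[i];
    r->state = RMW_READ_DONE;
}

/* Step 2: write the merged data back. Between step 1 and step 2 the memory
 * interface is free to service an urgent request, as long as it does not
 * touch the same location (hazard checking is omitted here). */
static void rmw_writeback(rmw_t *r)
{
    memcpy(memory, r->merged, sizeof memory);
    r->state = RMW_COMPLETE;
}

int main(void)
{
    rmw_t r = { .state = RMW_IDLE };
    uint8_t wdata[8] = { 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA };
    bool be[8] = { true, false, true, false, false, false, false, false };

    rmw_read_and_merge(&r, wdata, be);

    /* A higher-priority read is serviced while the RMW is pending. */
    printf("urgent read of byte 5 while RMW pending: %u\n", memory[5]);

    rmw_writeback(&r);
    for (int i = 0; i < 8; ++i)
        printf("%02X ", memory[i]);
    printf("\n");
    return 0;
}
```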
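
For patent 7617368, a sketch of arbitrating activate decisions independently from read/write decisions: one picker chooses which bank and row to activate next, a second picker chooses which already-open request to issue, and the two run side by side each cycle. The oldest-first policies and the data structures are assumptions.

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_BANKS 4

typedef struct {
    int  bank;
    int  row;
    bool activated;   /* true once its bank has the right row open */
    bool done;
} request_t;

static int open_row[NUM_BANKS] = { -1, -1, -1, -1 };

/* Activate arbiter: pick the oldest request whose bank has the wrong row open. */
static request_t *pick_activate(request_t *reqs, int n)
{
    for (int i = 0; i < n; ++i)
        if (!reqs[i].done && !reqs[i].activated &&
            open_row[reqs[i].bank] != reqs[i].row)
            return &reqs[i];
    return NULL;
}

/* Read/write arbiter: pick the oldest request whose row is already open. */
static request_t *pick_issue(request_t *reqs, int n)
{
    for (int i = 0; i < n; ++i)
        if (!reqs[i].done && open_row[reqs[i].bank] == reqs[i].row)
            return &reqs[i];
    return NULL;
}

int main(void)
{
    request_t reqs[3] = {
        { .bank = 0, .row = 5 },
        { .bank = 1, .row = 7 },
        { .bank = 0, .row = 5 },
    };

    for (int cycle = 0; cycle < 4; ++cycle) {
        request_t *a = pick_activate(reqs, 3);
        if (a) {   /* activate decision, made independently of issue */
            open_row[a->bank] = a->row;
            a->activated = true;
            printf("cycle %d: activate bank %d row %d\n", cycle, a->bank, a->row);
        }
        request_t *r = pick_issue(reqs, 3);
        if (r) {   /* read/write decision */
            r->done = true;
            printf("cycle %d: issue to bank %d row %d\n", cycle, r->bank, r->row);
        }
    }
    return 0;
}
```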
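
For patent 7525551, a rough sketch of choosing between a single ripmap tap and footprint assembly: if the pixel footprint's major axis is aligned with a texture axis within a tolerance and the needed ripmap level is in the stored subset, one anisotropic ripmap sample is used; otherwise several isotropic mipmap probes are averaged along the major axis. The sampling functions are stubs and every threshold is an assumption.

```c
#include <math.h>
#include <stdio.h>

#define ALIGN_TOLERANCE 0.1f   /* radians off-axis still treated as aligned */

typedef struct { float r, g, b; } color_t;

/* Stubs standing in for real texture fetches. */
static color_t sample_ripmap(float u, float v, int level_u, int level_v)
{
    (void)u; (void)v; (void)level_u; (void)level_v;
    return (color_t){ 1.0f, 0.0f, 0.0f };
}

static color_t sample_mipmap(float u, float v, float lod)
{
    (void)lod;
    return (color_t){ 0.0f, u, v };
}

/* Assume only strongly anisotropic ripmap levels were kept in the subset. */
static int ripmap_level_stored(int level_major, int level_minor)
{
    int diff = level_major - level_minor;
    if (diff < 0) diff = -diff;
    return diff >= 2;
}

static color_t filter_aniso(float u, float v,
                            float major_u, float major_v, float minor_len)
{
    float major_len = sqrtf(major_u * major_u + major_v * major_v);

    /* How far the footprint's major axis is from the nearest texture axis. */
    float off_axis = fminf(fabsf(atan2f(major_v, major_u)),
                           fabsf(atan2f(major_u, major_v)));

    int level_major = (int)log2f(fmaxf(major_len, 1.0f));
    int level_minor = (int)log2f(fmaxf(minor_len, 1.0f));

    if (off_axis < ALIGN_TOLERANCE &&
        ripmap_level_stored(level_major, level_minor))
        return sample_ripmap(u, v, level_major, level_minor); /* one ripmap tap */

    /* Footprint assembly: average isotropic probes along the major axis. */
    int probes = (int)fmaxf(1.0f, major_len / fmaxf(minor_len, 1.0f));
    if (probes > 8) probes = 8;
    color_t acc = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < probes; ++i) {
        float t = (i + 0.5f) / probes - 0.5f;
        color_t c = sample_mipmap(u + t * major_u, v + t * major_v,
                                  log2f(fmaxf(minor_len, 1.0f)));
        acc.r += c.r; acc.g += c.g; acc.b += c.b;
    }
    acc.r /= probes; acc.g /= probes; acc.b /= probes;
    return acc;
}

int main(void)
{
    color_t a = filter_aniso(0.5f, 0.5f, 8.0f, 0.5f, 1.0f); /* axis aligned */
    color_t b = filter_aniso(0.5f, 0.5f, 6.0f, 6.0f, 1.0f); /* diagonal */
    printf("aligned:  %.2f %.2f %.2f\n", a.r, a.g, a.b);
    printf("diagonal: %.2f %.2f %.2f\n", b.r, b.g, b.b);
    return 0;
}
```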
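
For patents 7400327 and 7369133, a sketch of the router and per-partition arbitration: the router uses the request's address and transfer size to find every partition it touches and enqueues it per client, and each partition's arbiter then selects among its client queues round-robin. The stride, queue depths, and the simplified pop are assumptions.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_PARTITIONS   2
#define NUM_CLIENTS      2
#define QUEUE_DEPTH      8
#define PARTITION_STRIDE 256   /* bytes per partition before switching */

typedef struct { uint64_t addr; uint32_t size; int client; } request_t;

typedef struct {
    request_t q[NUM_CLIENTS][QUEUE_DEPTH];
    int count[NUM_CLIENTS];
    int rr_next;               /* round-robin pointer */
} partition_t;

static partition_t parts[NUM_PARTITIONS];

/* Router: enqueue the request at every partition its byte range touches. */
static void route(request_t r)
{
    uint64_t first = r.addr / PARTITION_STRIDE;
    uint64_t last  = (r.addr + r.size - 1) / PARTITION_STRIDE;
    for (uint64_t s = first; s <= last; ++s) {
        partition_t *p = &parts[s % NUM_PARTITIONS];
        if (p->count[r.client] < QUEUE_DEPTH)
            p->q[r.client][p->count[r.client]++] = r;
    }
}

/* Arbiter: round-robin over the client queues that have work pending. */
static int arbitrate(partition_t *p)
{
    for (int i = 0; i < NUM_CLIENTS; ++i) {
        int c = (p->rr_next + i) % NUM_CLIENTS;
        if (p->count[c] > 0) {
            p->count[c]--;                     /* pop (ordering ignored here) */
            p->rr_next = (c + 1) % NUM_CLIENTS;
            return c;
        }
    }
    return -1;
}

int main(void)
{
    route((request_t){ .addr = 0,   .size = 512, .client = 0 }); /* spans both */
    route((request_t){ .addr = 256, .size = 128, .client = 1 }); /* partition 1 */

    for (int p = 0; p < NUM_PARTITIONS; ++p) {
        int c;
        while ((c = arbitrate(&parts[p])) >= 0)
            printf("partition %d serviced client %d\n", p, c);
    }
    return 0;
}
```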