Patents by Inventor Colyn S. Case

Colyn S. Case has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9262837
    Abstract: Circuits, methods, and apparatus for modifying the data rate of a data bus. In a circuit having two processors coupled by a data bus, the processors each learn that the other is capable of operating at a modified data rate. The data rate is then changed to the modified rate. Each processor may learn of the other's capability by reading a vendor identification, for example from a vendor-defined message stored on the other processor. Alternatively, each processor may provide an instruction to the other to operate at the modified rate, for example by writing to the other processor's extended capability registers. In another circuit having two processors communicating over a bus, it is determined that both are capable of transmitting and receiving data at a modified data rate. An instruction is provided to one or both of the processors to transmit at the modified rate.
    Type: Grant
    Filed: January 28, 2015
    Date of Patent: February 16, 2016
    Assignee: NVIDIA Corporation
    Inventors: Anthony Michael Tamasi, William Tsu, Colyn S. Case, David G. Reed
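    The abstract above describes a capability exchange followed by a rate switch. The C sketch below illustrates that flow under simplifying assumptions; the struct fields (vendor_msg, ext_caps, link_rate) and the function negotiate_rate are illustrative stand-ins, not the registers or logic of the actual patent.
    ```c
    /* Minimal sketch of the capability exchange described above; names and
     * register layout are hypothetical, not NVIDIA's implementation. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define RATE_STANDARD     1u
    #define RATE_MODIFIED     2u
    #define CAP_MODIFIED_RATE 0x1u

    struct endpoint {
        uint32_t vendor_msg; /* vendor-defined message advertising capabilities */
        uint32_t ext_caps;   /* extended capability register, writable by the peer */
        uint32_t link_rate;  /* current operating rate */
    };

    /* Each side reads the other's vendor-defined message; only if both
     * advertise the modified rate is the link switched to that rate. */
    static void negotiate_rate(struct endpoint *a, struct endpoint *b)
    {
        bool a_ok = a->vendor_msg & CAP_MODIFIED_RATE;
        bool b_ok = b->vendor_msg & CAP_MODIFIED_RATE;

        if (a_ok && b_ok) {
            /* Alternatively, one side could write the peer's extended
             * capability register to instruct it to switch. */
            a->ext_caps |= CAP_MODIFIED_RATE;
            b->ext_caps |= CAP_MODIFIED_RATE;
            a->link_rate = b->link_rate = RATE_MODIFIED;
        } else {
            a->link_rate = b->link_rate = RATE_STANDARD;
        }
    }

    int main(void)
    {
        struct endpoint gpu     = { .vendor_msg = CAP_MODIFIED_RATE, .link_rate = RATE_STANDARD };
        struct endpoint chipset = { .vendor_msg = CAP_MODIFIED_RATE, .link_rate = RATE_STANDARD };
        negotiate_rate(&gpu, &chipset);
        printf("link rate: %u\n", (unsigned)gpu.link_rate); /* prints 2 (modified) */
        return 0;
    }
    ```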
  • Publication number: 20150199822
    Abstract: Circuits, methods, and apparatus for modifying the data rate of a data bus. In a circuit having two processors coupled by a data bus, the processors each learn that the other is capable of operating at a modified data rate. The data rate is then changed to the modified rate. Each processor may learn of the other's capability by reading a vendor identification, for example from a vendor-defined message stored on the other processor. Alternatively, each processor may provide an instruction to the other to operate at the modified rate, for example by writing to the other processor's extended capability registers. In another circuit having two processors communicating over a bus, it is determined that both are capable of transmitting and receiving data at a modified data rate. An instruction is provided to one or both of the processors to transmit at the modified rate.
    Type: Application
    Filed: January 28, 2015
    Publication date: July 16, 2015
    Applicant: NVIDIA CORPORATION
    Inventors: Anthony Michael Tamasi, William Tsu, Colyn S. Case, David G. Reed
  • Patent number: 8161252
    Abstract: Devices and methods provide data from multiple storage locations to a processor. A data block containing data required by a processor is stored in two or more locations, e.g., in a local memory and a system memory, both of which are accessible to the processor's memory interface. The memory interface directs each read request for mirrored data to one or another of the mirror locations. Selection of a mirror location to be read is based on substantially real-time information about which mirror location is best able to handle the request. For instance, the selection of a mirror location to access can be based at least in part on information about the level of activity on various buses that connect the processor to the mirror locations.
    Type: Grant
    Filed: November 8, 2005
    Date of Patent: April 17, 2012
    Assignee: NVIDIA Corporation
    Inventors: Colyn S. Case, Anders M. Kugler, Peter Tong
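    As a rough illustration of the mirror-selection idea above, the sketch below reads from whichever copy's bus currently has fewer outstanding requests; the bus_state fields and select_mirror are hypothetical names, and real hardware would use richer, lower-level activity information.
    ```c
    #include <stdio.h>

    enum mirror { MIRROR_LOCAL, MIRROR_SYSTEM };

    struct bus_state {
        unsigned local_outstanding;  /* pending requests on the local-memory bus  */
        unsigned system_outstanding; /* pending requests on the system-memory bus */
    };

    /* Pick whichever mirror's bus is currently less loaded; this stands in for
     * the "substantially real-time" activity information in the abstract. */
    static enum mirror select_mirror(const struct bus_state *s)
    {
        return (s->local_outstanding <= s->system_outstanding)
                   ? MIRROR_LOCAL
                   : MIRROR_SYSTEM;
    }

    int main(void)
    {
        struct bus_state s = { .local_outstanding = 12, .system_outstanding = 3 };
        printf("read mirror: %s\n",
               select_mirror(&s) == MIRROR_LOCAL ? "local memory" : "system memory");
        return 0;
    }
    ```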
  • Patent number: 8035647
    Abstract: A raster operations (ROP) unit interleaves read and write requests for efficiently communicating with a frame buffer via a PCI Express (PCI-E) link or other system bus that provides separate upstream and downstream data transfer paths. One example of a ROP unit processes pixels in groups, performing read-modify-writeback sequences for each group. The read requests associated with pixels in a second group are advantageously interleaved with the writeback requests for pixels in the first group prior to sending the requests on the system bus.
    Type: Grant
    Filed: August 24, 2006
    Date of Patent: October 11, 2011
    Assignee: NVIDIA Corporation
    Inventors: Donald A. Bittel, Paul MacDougal, Manas Mandal, Colyn S. Case
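    A toy version of the interleaving described above: writebacks for one pixel group are alternated with the reads for the next group before being sent to the bus. issue_read and issue_write are placeholders for putting requests on the downstream and upstream transfer paths.
    ```c
    #include <stdio.h>

    #define GROUP_SIZE 4

    /* Hypothetical stand-ins for placing a request on the system bus. */
    static void issue_read(int group, int pixel)  { printf("READ  g%d p%d\n", group, pixel); }
    static void issue_write(int group, int pixel) { printf("WRITE g%d p%d\n", group, pixel); }

    /* Emit requests for two consecutive pixel groups so that the reads for
     * group n+1 are interleaved with the writebacks for group n. */
    static void interleave_groups(int prev_group, int next_group)
    {
        for (int i = 0; i < GROUP_SIZE; ++i) {
            issue_write(prev_group, i); /* writeback from group n's read-modify-write */
            issue_read(next_group, i);  /* read for group n+1                          */
        }
    }

    int main(void)
    {
        interleave_groups(0, 1);
        return 0;
    }
    ```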
  • Patent number: 7797510
    Abstract: In a virtual memory system, address translation information is provided using a cluster that is associated with some range of virtual addresses and that can be used to translate any virtual address in its range to a physical address, where the sizes of the ranges mapped by different clusters may be different. Clusters are stored in an address translation table that is indexed by virtual address so that, starting from any valid virtual address, the appropriate cluster for translating that address can be retrieved from the translation table. The clusters are dynamically created from a fragmented pool of physical addresses as new virtual address mappings are requested by consumers of the virtual memory space.
    Type: Grant
    Filed: April 30, 2008
    Date of Patent: September 14, 2010
    Assignee: NVIDIA Corporation
    Inventors: Colyn S. Case, Gary D. Lorensen, Sharon Rose Clay
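    The cluster idea above can be sketched as a small structure that covers a variable-sized virtual range and maps it onto physical blocks drawn from a fragmented pool. All names (struct cluster, translate, BLOCK_SIZE) are illustrative assumptions; real clusters encode their mappings far more compactly.
    ```c
    #include <stdint.h>
    #include <stdio.h>

    #define BLOCK_SIZE 4096u
    #define MAX_BLOCKS 8

    /* A cluster translates every virtual address in
     * [va_base, va_base + n_blocks*BLOCK_SIZE) via a small table of physical
     * block addresses that need not be contiguous. */
    struct cluster {
        uint64_t va_base;
        unsigned n_blocks;
        uint64_t pa_block[MAX_BLOCKS];
    };

    static int translate(const struct cluster *c, uint64_t va, uint64_t *pa)
    {
        uint64_t off = va - c->va_base;
        if (va < c->va_base || off >= (uint64_t)c->n_blocks * BLOCK_SIZE)
            return -1; /* address outside this cluster's range */
        *pa = c->pa_block[off / BLOCK_SIZE] + off % BLOCK_SIZE;
        return 0;
    }

    int main(void)
    {
        /* Physical blocks 0x9000 and 0x3000 come from a fragmented pool. */
        struct cluster c = { .va_base = 0x10000, .n_blocks = 2,
                             .pa_block = { 0x9000, 0x3000 } };
        uint64_t pa;
        if (translate(&c, 0x11004, &pa) == 0)
            printf("0x11004 -> 0x%llx\n", (unsigned long long)pa); /* 0x3004 */
        return 0;
    }
    ```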
  • Patent number: 7788439
    Abstract: A bus interface permits an upstream bandwidth and a downstream bandwidth to be separately selected. In one implementation a link control module forms a bidirectional link with another bus interface by separately configuring link widths of an upstream unidirectional sub-link and a downstream unidirectional sub-link.
    Type: Grant
    Filed: October 16, 2008
    Date of Patent: August 31, 2010
    Assignee: NVIDIA Corporation
    Inventors: William P. Tsu, Colyn S. Case
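    A minimal sketch of the asymmetric link configuration described above, assuming a hypothetical per-sub-link lane count; a real interface would negotiate each width with its link partner rather than setting it directly.
    ```c
    #include <stdio.h>

    /* A bidirectional link built from two unidirectional sub-links whose
     * widths can be configured separately. */
    struct sublink { unsigned lanes; };
    struct link    { struct sublink upstream, downstream; };

    static void configure_link(struct link *l, unsigned up_lanes, unsigned down_lanes)
    {
        l->upstream.lanes   = up_lanes;   /* e.g. narrow upstream to save power       */
        l->downstream.lanes = down_lanes; /* keep downstream wide for heavy traffic   */
    }

    int main(void)
    {
        struct link l;
        configure_link(&l, 1, 16);
        printf("upstream x%u, downstream x%u\n", l.upstream.lanes, l.downstream.lanes);
        return 0;
    }
    ```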
  • Patent number: 7664905
    Abstract: In some applications, such as video motion compression processing for example, a request pattern or “stream” of requests for accesses to memory (e.g., DRAM) may have, over a large number of requests, a relatively small number of requests to the same page. Due to the small number of requests to the same page, conventional sorting to aggregate page hits may not be very effective. Reordering the stream can be used to “bury” or “hide” much of the necessary precharge/activate time, which can have a highly positive impact on overall throughput. For example, separating accesses to different rows of the same bank by at least a predetermined number of clocks can effectively hide the overhead involved in precharging/activating the rows.
    Type: Grant
    Filed: November 3, 2006
    Date of Patent: February 16, 2010
    Assignee: NVIDIA Corporation
    Inventors: David A. Jarosh, Sonny S. Yeoh, Colyn S. Case, John H. Edmondson
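    The sketch below shows one way a scheduler might apply the rule in the abstract above: a request to a different row of a bank is held back until that bank has been idle for at least MIN_GAP clocks, so the precharge/activate overhead is hidden behind other traffic. The scheduler, constants, and data structures are assumptions for illustration only.
    ```c
    #include <stdio.h>

    #define NUM_BANKS 4
    #define MIN_GAP   8 /* clocks needed to hide a precharge/activate */

    struct request { unsigned bank, row; };

    static unsigned last_row[NUM_BANKS];   /* row most recently accessed per bank */
    static long     last_clock[NUM_BANKS]; /* clock of that bank's last access    */

    /* Pick the next request to issue: prefer one whose bank either still has the
     * right row open or has been idle long enough that the row switch is hidden. */
    static int pick_next(const struct request *q, int n, long now)
    {
        for (int i = 0; i < n; ++i) {
            unsigned b = q[i].bank;
            if (q[i].row == last_row[b] || now - last_clock[b] >= MIN_GAP)
                return i;
        }
        return 0; /* nothing hides the overhead; fall back to the oldest request */
    }

    int main(void)
    {
        struct request q[] = { {0, 7}, {1, 3}, {0, 7} };
        last_row[0] = 5;  last_clock[0] = 98; /* bank 0 recently active on another row */
        last_row[1] = 3;  last_clock[1] = 10;
        int i = pick_next(q, 3, 100);
        printf("issue request %d (bank %u, row %u)\n", i, q[i].bank, q[i].row); /* request 1 */
        return 0;
    }
    ```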
  • Patent number: 7624221
    Abstract: Optimization logic that optimizes a stream of requests being transmitted onto a link by a link interface unit can be enabled or disabled based on a performance metric that represents a measure of the degree to which a response to a request is likely to be slowed due to congestion, propagation delays, or other bottlenecks in the system. For example, the performance metric can be based on a measured level of link activity due to requests from the transmitting device and/or a prediction as to behavior (e.g., access time) of the target device that receives the stream of requests. The control logic advantageously does not require extra signals to be carried on the bus.
    Type: Grant
    Filed: July 28, 2006
    Date of Patent: November 24, 2009
    Assignee: NVIDIA Corporation
    Inventor: Colyn S. Case
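    As a simplified illustration of enabling and disabling the optimizer from a locally measured metric (here a count of outstanding requests, with hysteresis between made-up thresholds), without any extra bus signals:
    ```c
    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative thresholds only; the patent's metric and policy differ. */
    #define ENABLE_THRESHOLD  16
    #define DISABLE_THRESHOLD  4

    static bool update_optimizer(bool enabled, unsigned outstanding)
    {
        if (!enabled && outstanding >= ENABLE_THRESHOLD)
            return true;  /* congested: start optimizing the request stream        */
        if (enabled && outstanding <= DISABLE_THRESHOLD)
            return false; /* lightly loaded: pass requests straight through        */
        return enabled;   /* hysteresis: otherwise keep the current setting        */
    }

    int main(void)
    {
        bool on = false;
        unsigned samples[] = { 2, 10, 20, 18, 3 };
        for (unsigned i = 0; i < 5; ++i) {
            on = update_optimizer(on, samples[i]);
            printf("outstanding=%2u -> optimizer %s\n", samples[i], on ? "on" : "off");
        }
        return 0;
    }
    ```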
  • Patent number: 7562205
    Abstract: A virtual address translation table and an on-chip address cache are usable for translating virtual addresses to physical addresses. Address translation information is provided using a cluster that is associated with some range of virtual addresses and that can be used to translate any virtual address in its range to a physical address, where the sizes of the ranges mapped by different clusters may be different. Clusters are stored in an address translation table that is indexed by virtual address so that, starting from any valid virtual address, the appropriate cluster for translating that address can be retrieved from the translation table. Recently retrieved clusters are stored in an on-chip cache, and a cached cluster can be used to translate any virtual address in its range without accessing the address translation table again.
    Type: Grant
    Filed: August 23, 2007
    Date of Patent: July 14, 2009
    Assignee: NVIDIA Corporation
    Inventors: Colyn S. Case, Dmitry Vyshetsky, Sean J. Treichler
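    The on-chip cluster cache described above might be pictured as a small fully associative array keyed by each cluster's virtual range, so any address inside a cached range hits without another translation-table access. The structure, round-robin replacement, and names below are illustrative assumptions.
    ```c
    #include <stdint.h>
    #include <stdio.h>

    #define CACHE_WAYS 4

    /* Each entry remembers the variable-sized virtual range its cluster covers
     * (mapping data omitted for brevity). */
    struct cached_cluster { uint64_t va_base, va_limit; int valid; };

    static struct cached_cluster cache[CACHE_WAYS];
    static unsigned next_victim;

    static struct cached_cluster *cache_lookup(uint64_t va)
    {
        for (int i = 0; i < CACHE_WAYS; ++i)
            if (cache[i].valid && va >= cache[i].va_base && va < cache[i].va_limit)
                return &cache[i];
        return NULL; /* miss: the cluster must be fetched from the translation table */
    }

    static void cache_fill(uint64_t va_base, uint64_t va_limit)
    {
        cache[next_victim] = (struct cached_cluster){ va_base, va_limit, 1 };
        next_victim = (next_victim + 1) % CACHE_WAYS; /* simple round-robin replacement */
    }

    int main(void)
    {
        cache_fill(0x10000, 0x18000); /* a 32 KB cluster */
        printf("0x11004: %s\n", cache_lookup(0x11004) ? "hit" : "miss");
        printf("0x20000: %s\n", cache_lookup(0x20000) ? "hit" : "miss");
        return 0;
    }
    ```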
  • Patent number: 7526593
    Abstract: Multiple data transfer requests can be merged and transmitted as a single packet on a packetized bus such as a PCI Express (PCI-E) bus. In one embodiment, requests are combined if they are directed to contiguous address ranges in the same target device. An opportunistic merging procedure is advantageously used that merges a first request with a later request if the first request and the later request are mergeable and are received within a holdoff period that is dynamically determined based on a level of bus activity; otherwise, requests can be transmitted without merging.
    Type: Grant
    Filed: October 3, 2006
    Date of Patent: April 28, 2009
    Assignee: NVIDIA Corporation
    Inventors: Manas Mandal, William P. Tsu, Colyn S. Case, Ashish Kishen Kaul
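    A rough sketch of the opportunistic merge: two requests are combined only if their address ranges are contiguous and the second arrives within a holdoff window that shrinks as the bus gets busier. The holdoff formula and names here are invented for illustration.
    ```c
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct req { uint64_t addr; uint32_t len; long arrival; };

    /* Illustrative holdoff: wait less when the bus is already busy. */
    static long holdoff_clocks(unsigned bus_utilization_pct)
    {
        return bus_utilization_pct > 75 ? 2 : 16;
    }

    /* Merge a pending request with a newcomer if their ranges are contiguous
     * and the newcomer arrived within the holdoff window. */
    static bool try_merge(struct req *pending, const struct req *incoming,
                          unsigned bus_utilization_pct)
    {
        bool contiguous = pending->addr + pending->len == incoming->addr;
        bool in_window  = incoming->arrival - pending->arrival
                          <= holdoff_clocks(bus_utilization_pct);
        if (contiguous && in_window) {
            pending->len += incoming->len; /* send one larger packet instead of two */
            return true;
        }
        return false;                      /* transmit the pending request as-is    */
    }

    int main(void)
    {
        struct req a = { 0x1000, 64, 100 };
        struct req b = { 0x1040, 64, 104 };
        printf("merged: %s, len now %u\n",
               try_merge(&a, &b, 20) ? "yes" : "no", (unsigned)a.len);
        return 0;
    }
    ```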
  • Patent number: 7469311
    Abstract: A bus interface permits an upstream bandwidth and a downstream bandwidth to be separately selected. In one implementation a link control module forms a bidirectional link with another bus interface by separately configuring link widths of an upstream unidirectional sub-link and a downstream unidirectional sub-link.
    Type: Grant
    Filed: December 19, 2006
    Date of Patent: December 23, 2008
    Assignee: NVIDIA Corporation
    Inventors: William P. Tsu, Colyn S. Case
  • Patent number: 7415575
    Abstract: A cache shared by multiple clients implements a client-specific policy for replacing entries in the event of a cache miss. A request from any client can hit any entry in the cache. For purposes of replacing entries, at least one of the clients is restricted, and when a cache miss results from a request by the restricted client, the entry to be replaced is selected from a fixed subset of the cache entries. When a cache miss results from a request by any client other than the restricted client, any cache entry, including a restricted entry, can be selected to be replaced.
    Type: Grant
    Filed: December 8, 2005
    Date of Patent: August 19, 2008
    Assignee: NVIDIA Corporation
    Inventors: Peter C. Tong, Colyn S. Case
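    The client-specific replacement policy above can be sketched as a victim selector that confines the restricted client to a fixed subset of entries while letting other clients evict anything; round-robin stands in for whatever replacement heuristic the real cache uses, and all names are illustrative.
    ```c
    #include <stdio.h>

    #define NUM_ENTRIES      16
    #define RESTRICTED_LIMIT  4 /* the restricted client may only evict entries 0..3 */

    enum client { CLIENT_RESTRICTED, CLIENT_NORMAL };

    /* On a miss, pick a victim: the restricted client is confined to a fixed
     * subset, while any other client may evict any entry, restricted or not. */
    static unsigned pick_victim(enum client who)
    {
        static unsigned rr_restricted, rr_any;
        if (who == CLIENT_RESTRICTED)
            return rr_restricted++ % RESTRICTED_LIMIT;
        return rr_any++ % NUM_ENTRIES;
    }

    int main(void)
    {
        printf("restricted miss -> evict entry %u\n", pick_victim(CLIENT_RESTRICTED));
        printf("normal miss     -> evict entry %u\n", pick_victim(CLIENT_NORMAL));
        return 0;
    }
    ```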
  • Patent number: 7386697
    Abstract: In a virtual memory system, address translation information is provided using a cluster that is associated with some range of virtual addresses and that can be used to translate any virtual address in its range to a physical address, where the sizes of the ranges mapped by different clusters may be different. Clusters are stored in an address translation table that is indexed by virtual address so that, starting from any valid virtual address, the appropriate cluster for translating that address can be retrieved from the translation table. The clusters are dynamically created from a fragmented pool of physical addresses as new virtual address mappings are requested by consumers of the virtual memory space.
    Type: Grant
    Filed: March 10, 2005
    Date of Patent: June 10, 2008
    Assignee: NVIDIA Corporation
    Inventors: Colyn S. Case, Gary D. Lorensen, Sharon Rose Clay
  • Publication number: 20080109613
    Abstract: In some applications, such as video motion compression processing for example, a request pattern or “stream” of requests for accesses to memory (e.g., DRAM) may have, over a large number of requests, a relatively small number of requests to the same page. Due to the small number of requests to the same page, conventional sorting to aggregate page hits may not be very effective. Reordering the stream can be used to “bury” or “hide” much of the necessary precharge/activate time, which can have a highly positive impact on overall throughput. For example, separating accesses to different rows of the same bank by at least a predetermined number of clocks can effectively hide the overhead involved in precharging/activating the rows.
    Type: Application
    Filed: November 3, 2006
    Publication date: May 8, 2008
    Applicant: NVIDIA Corporation
    Inventors: David A. Jarosh, Sonny S. Yeoh, Colyn S. Case, John H. Edmondson
  • Patent number: 7334108
    Abstract: A virtual address translation table and an on-chip address cache are usable for translating virtual addresses to physical addresses. Address translation information is provided using a cluster that is associated with some range of virtual addresses and that can be used to translate any virtual address in its range to a physical address, where the sizes of the ranges mapped by different clusters may be different. Clusters are stored in an address translation table that is indexed by virtual address so that, starting from any valid virtual address, the appropriate cluster for translating that address can be retrieved from the translation table. Recently retrieved clusters are stored in an on-chip cache, and a cached cluster can be used to translate any virtual address in its range without accessing the address translation table again.
    Type: Grant
    Filed: January 30, 2004
    Date of Patent: February 19, 2008
    Assignee: NVIDIA Corporation
    Inventors: Colyn S. Case, Dmitry Vyshetsky, Sean J. Treichler
  • Publication number: 20080028181
    Abstract: Circuits, methods, and apparatus that reduce or eliminate system memory accesses to retrieve address translation information. In one example, these accesses are reduced or eliminated by pre-populating a graphics TLB with entries that are used to translate virtual addresses used by a GPU to physical addresses used by a system memory. Translation information is maintained by locking or restricting entries in the graphics TLB that are needed for display access. This may be done by limiting access to certain locations in the graphics TLB, by storing flags or other identifying information in the graphics TLB, or by other appropriate methods. In another example, memory space is allocated by a system BIOS for a GPU, which stores a base address and address range. Virtual addresses in the address range are translated by adding them to the base address.
    Type: Application
    Filed: March 21, 2007
    Publication date: January 31, 2008
    Applicant: NVIDIA Corporation
    Inventors: Peter C. Tong, Sonny S. Yeoh, Kevin J. Kranzusch, Gary D. Lorensen, Kaymann L. Woo, Ashish Kishen Kaul, Colyn S. Case, Stefan A. Gottschalk, Dennis K. Ma
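    One way to picture the pre-populated, locked graphics TLB entries described above: display mappings are filled in up front and marked so the victim-selection logic never evicts them. The entry layout and function names are illustrative, not the hardware's.
    ```c
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define TLB_ENTRIES 8

    struct tlb_entry { uint64_t va, pa; bool valid, locked; };

    static struct tlb_entry tlb[TLB_ENTRIES];

    /* Fill an entry ahead of use; display mappings are marked locked. */
    static void prepopulate(uint64_t va, uint64_t pa, bool lock)
    {
        for (int i = 0; i < TLB_ENTRIES; ++i) {
            if (!tlb[i].valid) {
                tlb[i] = (struct tlb_entry){ va, pa, true, lock };
                return;
            }
        }
    }

    /* Locked (display) entries are never chosen for replacement. */
    static int pick_victim(void)
    {
        for (int i = 0; i < TLB_ENTRIES; ++i)
            if (!tlb[i].locked)
                return i;
        return -1; /* every entry is locked: no eviction possible */
    }

    int main(void)
    {
        prepopulate(0x0000, 0x80000000, true);  /* display surface: locked up front */
        prepopulate(0x1000, 0x80001000, false); /* ordinary mapping: evictable      */
        printf("victim on miss: entry %d\n", pick_victim()); /* entry 1 */
        return 0;
    }
    ```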
  • Patent number: 7296139
    Abstract: A virtual address translation table and an on-chip address cache are usable for translating virtual addresses to physical addresses. Address translation information is provided using a cluster that is associated with some range of virtual addresses and that can be used to translate any virtual address in its range to a physical address, where the sizes of the ranges mapped by different clusters may be different. Clusters are stored in an address translation table that is indexed by virtual address so that, starting from any valid virtual address, the appropriate cluster for translating that address can be retrieved from the translation table. Recently retrieved clusters are stored in an on-chip cache, and a cached cluster can be used to translate any virtual address in its range without accessing the address translation table again.
    Type: Grant
    Filed: January 30, 2004
    Date of Patent: November 13, 2007
    Assignee: NVIDIA Corporation
    Inventors: Colyn S. Case, Dmitry Vyshetsky
  • Patent number: 7278008
    Abstract: A virtual address translation table and an on-chip address cache are usable for translating virtual addresses to physical addresses. Address translation information is provided using a cluster that is associated with some range of virtual addresses and that can be used to translate any virtual address in its range to a physical address, where the sizes of the ranges mapped by different clusters may be different. Clusters are stored in an address translation table that is indexed by virtual address so that, starting from any valid virtual address, the appropriate cluster for translating that address can be retrieved from the translation table. Recently retrieved clusters are stored in an on-chip cache, and a cached cluster can be used to translate any virtual address in its range without accessing the address translation table again.
    Type: Grant
    Filed: January 30, 2004
    Date of Patent: October 2, 2007
    Assignee: NVIDIA Corporation
    Inventors: Colyn S. Case, Dmitry Vyshetsky, Sean J. Treichler
  • Patent number: 6820173
    Abstract: A system, method and article of manufacture are provided for retrieving information from memory. Initially, processor requests for information from a first memory are monitored. A future processor request for information is then predicted based on the monitored requests. Thereafter, one or more speculative requests are issued for retrieving information from the first memory in accordance with the prediction. The retrieved information is subsequently cached in a second memory, where it can be retrieved in response to processor requests without accessing the first memory. By allowing multiple speculative requests to be issued, throughput of information in memory is maximized.
    Type: Grant
    Filed: February 23, 2001
    Date of Patent: November 16, 2004
    Assignee: NVIDIA Corporation
    Inventors: Donald A. Bittel, Colyn S. Case
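    A toy version of the speculative-fetch idea above: a simple stride predictor watches demand requests and issues a couple of speculative reads ahead of them, caching the results in a second memory. The predictor, prefetch depth, and names are assumptions; the patent covers more general prediction schemes.
    ```c
    #include <stdint.h>
    #include <stdio.h>

    #define CACHE_LINES 4
    #define LINE_BYTES  64

    static uint64_t prefetch_cache[CACHE_LINES]; /* stand-in for the second memory */
    static unsigned cached;

    /* Record a speculative fetch; a real device would issue the read to the
     * first memory and cache the returned data. */
    static void issue_speculative(uint64_t addr)
    {
        prefetch_cache[cached % CACHE_LINES] = addr;
        cached++;
        printf("speculative read of 0x%llx\n", (unsigned long long)addr);
    }

    /* Simple sequential predictor: if two demand requests are one line apart,
     * speculatively fetch the next two lines. */
    static void on_demand_request(uint64_t prev, uint64_t curr)
    {
        int64_t stride = (int64_t)(curr - prev);
        if (stride == LINE_BYTES)
            for (int i = 1; i <= 2; ++i) /* multiple outstanding speculative requests */
                issue_speculative(curr + (uint64_t)i * LINE_BYTES);
    }

    int main(void)
    {
        on_demand_request(0x1000, 0x1040); /* prefetches 0x1080 and 0x10c0 */
        return 0;
    }
    ```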