Patents by Inventor Stephen W. Keckler

Stephen W. Keckler has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20110072239
    Abstract: Methods, procedures, apparatuses, computer programs, computer-accessible mediums, processing arrangements and systems generally related to data multi-casting in a distributed processor architecture are described. Various implementations may include identifying a plurality of target instructions that are configured to receive a first message from a source; providing target routing instructions to the first message for each of the target instructions, including selected information commonly shared by the target instructions; and, when two of the identified target instructions are located in different directions from one another relative to a router, replicating the first message and routing the replicated messages to each of the identified target instructions in the different directions. (An illustrative sketch of the replicate-per-direction step appears after this listing.)
    Type: Application
    Filed: September 18, 2009
    Publication date: March 24, 2011
    Applicant: Board of Regents, University of Texas System
    Inventors: Doug Burger, Stephen W. Keckler, Dong Li
  • Publication number: 20100325395
    Abstract: Techniques related to dependence prediction for a memory system are generally described. Various implementations may include a predictor storage storing a value corresponding to at least one prediction type associated with at least one load operation, and a state-machine having multiple states. For example, the state-machine may determine whether to execute the load operation based upon a prediction type associated with each of the states and a corresponding precedent to the load operation for the associated prediction type. The state-machine may further determine the prediction type for a subsequent load operation based on a result of the load operation. The states of the state machine may correspond to prediction types, which may be a conservative prediction type, an aggressive prediction type, or one or more N-store prediction types, for example. (A toy state-machine sketch appears after this listing.)
    Type: Application
    Filed: June 19, 2009
    Publication date: December 23, 2010
    Inventors: Doug Burger, Stephen W. Keckler, Robert McDonald, Lakshminarasimhan Sethumadhavan, Franziska Roesner
  • Publication number: 20100325308
    Abstract: The present disclosure generally relates to systems for routing data across a multinodal network. Example systems include a multinodal array having a plurality of nodes and a plurality of physical communication channels connecting the nodes. At least one of the physical communication channels may be configured to route data from a first node to two or more other destination nodes of the plurality of nodes. The present disclosure also generally relates to methods for routing data across a multinodal network and computer accessible mediums having stored thereon computer executable instructions for performing techniques for routing data across a multinodal network.
    Type: Application
    Filed: June 19, 2009
    Publication date: December 23, 2010
    Inventors: Stephen W. Keckler, Boris Grot
  • Publication number: 20100211718
    Abstract: The present disclosure relates to an example of a method for a first router to adaptively determine status within a network. The network may include the first router, a second router and a third router. The method for the first router may comprise determining status information regarding the second router located in the network, and transmitting the status information to the third router located in the network. The second router and the third router may be indirectly coupled to one another.
    Type: Application
    Filed: February 17, 2009
    Publication date: August 19, 2010
    Inventors: Paul Gratz, Boris Grot, Stephen W. Keckler
  • Publication number: 20100146209
    Abstract: Methods, apparatus, computer programs and systems related to combining independent data caches are described. Various implementations can dynamically aggregate multiple level-one (L1) data caches from distinct processors together, change the degree of interleaving (e.g., how much consecutive data is mapped to each participating data cache before addresses go on to the next one) among the cache banks, and retain the ability to subsequently adjust the number of data caches participating as one coherent cache, as well as the degree of interleaving, such as when the requirements of an application or process change. (A sketch of the address-to-bank mapping appears after this listing.)
    Type: Application
    Filed: December 5, 2008
    Publication date: June 10, 2010
    Applicant: Intellectual Ventures Management, LLC
    Inventors: Doug Burger, Stephen W. Keckler, Changkyu Kim
  • Publication number: 20100146249
    Abstract: The present disclosure generally describes computing systems with a multi-core processor comprising one or more branch predictor arrangements. The branch predictor arrangements are configured to predict a single and complete flow of program instructions associated therewith, to be performed on at least one processor core of the computing system. Overall processor performance and physical scalability may be improved by the described methods.
    Type: Application
    Filed: December 5, 2008
    Publication date: June 10, 2010
    Applicant: Intellectual Ventures Management, LLC
    Inventors: Doug Burger, Stephen W. Keckler, Nitya Ranganathan
  • Publication number: 20090013160
    Abstract: A method, system and computer program product for dynamically composing processor cores to form logical processors. Processor cores are composable in that the processor cores are dynamically allocated to form a logical processor to handle a change in the operating status. Once a change in the operating status is detected, a mechanism may be triggered to recompose one or more processor cores into a logical processor to handle the change in the operating status. An analysis may be performed as to how one or more processor cores should be recomposed to handle the change in the operating status. After the analysis, the one or more processor cores are recomposed into the logical processor to handle the change in the operating status. By dynamically allocating the processor cores to handle the change in the operating status, performance and power efficiency are improved.
    Type: Application
    Filed: July 2, 2008
    Publication date: January 8, 2009
    Applicant: Board of Regents, The University of Texas System
    Inventors: Douglas C. Burger, Stephen W. Keckler, Robert McDonald, Paul Gratz, Nitya Ranganathan, Lakshminarasimhan Sethumadhavan, Karthikeyan Sankaralingam, Ramadass Nagarajan, Changkyu Kim, Haiming Liu
  • Publication number: 20090013135
    Abstract: A method and processor for providing full load/store queue functionality to an unordered load/store queue for a processor with out-of-order execution. Load and store instructions are inserted in a load/store queue in execution order. Each entry in the load/store queue includes an identification corresponding to a program order. Conflict detection in such an unordered load/store queue may be performed by searching a first CAM (an Address CAM) for all addresses that are the same as or overlap with the address of the load or store instruction to be executed. A further search may be performed in a second CAM (an Age CAM) to identify those entries that are associated with younger or older instructions with respect to the sequence number of the load or store instruction to be executed. The output results of the Address CAM and the Age CAM are logically ANDed. (A simplified sketch of this two-CAM intersection appears after this listing.)
    Type: Application
    Filed: July 2, 2008
    Publication date: January 8, 2009
    Applicant: Board of Regents, The University of Texas System
    Inventors: Douglas C. Burger, Stephen W. Keckler, Robert McDonald, Lakshminarasimhan Sethumadhavan, Franziska Roesner
  • Publication number: 20080244230
    Abstract: A computation node according to various embodiments of the invention includes at least one input port capable of being coupled to at least one first other computation node, a first store coupled to the input port(s) to store input data, a second store to receive and store instructions, an instruction wakeup unit to match the input data to the instructions, at least one execution unit to execute the instructions, using the input data to produce output data, and at least one output port capable of being coupled to at least one second other computation node. The node may also include a router to direct the output data from the output port(s) to the second other node. A system according to various embodiments of the invention includes an external instruction sequencer to fetch a group of instructions, and one or more interconnected, preselected computational nodes.
    Type: Application
    Filed: June 10, 2008
    Publication date: October 2, 2008
    Applicant: Board of Regents, The University of Texas System
    Inventors: Douglas C. Burger, Stephen W. Keckler, Karthikeyan Sankaralingam, Ramadass Nagarajan
  • Patent number: 6965969
    Abstract: An apparatus or system may comprise cache control circuitry coupled to a processor, and a plurality of independently accessible memory banks (228) coupled to the cache control circuitry. Some of the banks may have non-uniform latencies, organized into two or more spread bank sets (246). A method may include accessing data in the banks, wherein selected banks are closer to the cache control circuitry and/or processor than others, and migrating a first datum (445) to a closer bank from a further bank upon determining that the first datum is accessed more frequently than a second datum, which may be migrated to the further bank (451). (A sketch of the migration step appears after this listing.)
    Type: Grant
    Filed: October 8, 2004
    Date of Patent: November 15, 2005
    Assignee: Board of Regents, The University of Texas System
    Inventors: Doug Burger, Stephen W. Keckler, Changkyu Kim
  • Patent number: 6003123
    Abstract: A multiprocessor system having shared memory uses guarded pointers to identify protected segments of memory and permitted access to a location specified by the guarded pointer. Modification of pointers is restricted by the hardware system to limit access to memory segments and to limit operations which can be performed within the memory segments. Global address translation is based on grouping of pages which may be stored across multiple nodes. The page groups are identified in the global translation of each node and, with the virtual address, identify a node in which data is stored. Pages are subdivided into blocks and block status flags are stored for each page. The block status flags indicate whether a memory location may be read or written into at a particular node and indicate to a home node whether a remote node has written new data into a location. (A sketch of the guarded-pointer check appears after this listing.)
    Type: Grant
    Filed: February 10, 1998
    Date of Patent: December 14, 1999
    Assignee: Massachusetts Institute of Technology
    Inventors: Nicholas P. Carter, Stephen W. Keckler, William J. Dally
  • Patent number: 5845331
    Abstract: A multiprocessor system having shared memory uses guarded pointers to identify protected segments of memory and permitted access to a location specified by the guarded pointer. Modification of pointers is restricted by the hardware system to limit access to memory segments and to limit operations which can be performed within the memory segments. Global address translation is based on grouping of pages which may be stored across multiple nodes. The page groups are identified in the global translation of each node and, with the virtual address, identify a node in which data is stored. Pages are subdivided into blocks and block status flags are stored for each page. The block status flags indicate whether a memory location may be read or written into at a particular node and indicate to a home node whether a remote node has written new data into a location.
    Type: Grant
    Filed: September 28, 1994
    Date of Patent: December 1, 1998
    Assignee: Massachusetts Institute of Technology
    Inventors: Nicholas P. Carter, Stephen W. Keckler, William J. Dally
  • Patent number: 5574939
    Abstract: In a parallel data processing system, very long instruction words (VLIW) define operations able to be executed in parallel. The VLIWs corresponding to plural threads of computation are made available to the processing system simultaneously. Each processing unit pipeline includes a synchronizer stage for selecting one of the plural threads of computation for execution in that unit. The synchronizers allow the plural units to select operations from different thread instruction words such that execution of VLIWs is interleaved across the plural units. The processors are grouped in clusters of processors which share register files. Cluster outputs may be stored directly in register files of other clusters through a cluster switch.
    Type: Grant
    Filed: June 29, 1995
    Date of Patent: November 12, 1996
    Assignee: Massachusetts Institute of Technology
    Inventors: Stephen W. Keckler, William J. Dally
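
The multi-casting scheme of publication 20110072239 turns on one decision: when the remaining targets of a message lie in different directions from the current router, the message is replicated once per direction. A minimal sketch of that decision on a 2D mesh follows; the coordinate scheme, dimension-order next-hop rule, and message representation are illustrative assumptions rather than the patented encoding.

```python
# Toy 2D-mesh multicast: replicate a message at a router whenever its
# remaining targets lie in different directions. Coordinates, the message
# type, and the direction rule are illustrative assumptions only.
from collections import defaultdict

def direction(router, target):
    """Pick the next-hop direction using simple X-then-Y (dimension-order) routing."""
    rx, ry = router
    tx, ty = target
    if tx != rx:
        return "EAST" if tx > rx else "WEST"
    if ty != ry:
        return "NORTH" if ty > ry else "SOUTH"
    return "LOCAL"  # target is this router's own node

def route_multicast(router, message, targets):
    """Group targets by outgoing direction; emit one message copy per direction."""
    groups = defaultdict(list)
    for t in targets:
        groups[direction(router, t)].append(t)
    # Replication happens only when targets diverge into different directions.
    return {d: (message, tuple(ts)) for d, ts in groups.items()}

if __name__ == "__main__":
    copies = route_multicast((2, 2), "operand", [(4, 2), (0, 2), (2, 5)])
    for port, (msg, dests) in copies.items():
        print(f"{port}: forward '{msg}' toward {dests}")
```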
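Publication 20100325395 describes the predictor as a state machine whose states correspond to conservative, aggressive, and N-store prediction types, moving between them based on load outcomes. The sketch below assumes a specific three-state ordering and transition policy (step toward conservatism on a violation, back toward aggressiveness on success); those details are illustrative guesses, not taken from the application.

```python
# Minimal sketch of a per-load dependence predictor as a state machine.
# The concrete transition policy is an assumption; the publication only
# says states correspond to conservative, aggressive, and N-store types.
AGGRESSIVE, N_STORE, CONSERVATIVE = "aggressive", "n-store", "conservative"

class DependencePredictor:
    def __init__(self):
        self.state = AGGRESSIVE  # start by issuing loads speculatively

    def may_issue(self, unresolved_older_stores):
        """Decide whether the load may execute now, given its precedent stores."""
        if self.state == AGGRESSIVE:
            return True                              # issue regardless of older stores
        if self.state == N_STORE:
            return unresolved_older_stores <= 1      # tolerate at most one unresolved store
        return unresolved_older_stores == 0          # conservative: wait for all stores

    def update(self, mispredicted):
        """Step toward conservatism on a violation, back toward aggressiveness on success."""
        order = [AGGRESSIVE, N_STORE, CONSERVATIVE]
        i = order.index(self.state)
        self.state = order[min(i + 1, 2)] if mispredicted else order[max(i - 1, 0)]

if __name__ == "__main__":
    p = DependencePredictor()
    p.update(mispredicted=True)                       # a memory-order violation was detected
    print(p.state, p.may_issue(unresolved_older_stores=1))   # n-store True
```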
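For publication 20100146209, the adjustable quantities are the number of participating L1 banks and the interleaving granularity, i.e., how much consecutive data maps to one bank before addresses roll over to the next. A one-function sketch of that address-to-bank mapping follows; the 64-byte default granularity and power-of-two parameters are assumptions.

```python
# Sketch of address-to-bank mapping for dynamically aggregated L1 banks.
# The 64-byte default and the specific parameter values are assumptions;
# the publication only requires that both the number of participating
# caches and the interleaving granularity be adjustable at run time.
def bank_for_address(addr, num_banks, interleave_bytes=64):
    """Map an address to a participating bank, rotating banks every
    `interleave_bytes` of consecutive data."""
    return (addr // interleave_bytes) % num_banks

if __name__ == "__main__":
    # Two banks, 64-byte interleaving: consecutive blocks alternate banks.
    print([bank_for_address(a, num_banks=2) for a in range(0, 512, 64)])
    # Re-aggregating to four banks with coarser 256-byte interleaving
    # changes the mapping without changing the lookup logic.
    print([bank_for_address(a, num_banks=4, interleave_bytes=256) for a in range(0, 2048, 256)])
```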
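Publication 20090013135 detects conflicts in the unordered queue by ANDing an address-match vector with an age-match vector. The sketch below mirrors that intersection for a load searching for older stores; the entry layout and the exact-address match (no partial-overlap handling) are simplifications.

```python
# Sketch of conflict detection in an unordered load/store queue: an
# address match vector is ANDed with an age match vector, as in the
# abstract. Entry fields and the exact-match test are simplified assumptions.
from dataclasses import dataclass

@dataclass
class LSQEntry:
    seq: int        # program-order sequence number
    is_store: bool
    addr: int

def conflicting_older_stores(queue, load_seq, load_addr):
    """Return entries that would hit in both CAMs for an issuing load."""
    addr_matches = [e.addr == load_addr for e in queue]               # "Address CAM"
    age_matches = [e.seq < load_seq and e.is_store for e in queue]    # "Age CAM"
    return [e for e, a, g in zip(queue, addr_matches, age_matches) if a and g]

if __name__ == "__main__":
    q = [LSQEntry(7, True, 0x100), LSQEntry(3, True, 0x200), LSQEntry(5, True, 0x100)]
    # A load with sequence number 6 to address 0x100 conflicts only with the seq-5 store.
    print(conflicting_older_stores(q, load_seq=6, load_addr=0x100))
```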
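Patent 6965969 migrates a more frequently accessed datum from a farther bank toward a closer one, displacing a colder datum in the other direction. The sketch below models banks as nearest-first dictionaries of blocks with access counts and swaps a hot block with a colder victim in the next-closer bank; the counters and the swap policy are illustrative assumptions.

```python
# Sketch of promotion in a non-uniform cache: when a block in a far bank
# is accessed more often than a block in the next-closer bank, the two
# swap places. Access counters and the victim choice are assumptions.
def maybe_migrate(banks, hot_key):
    """banks: list of dicts ordered nearest-first; each maps block key -> access count."""
    for i, bank in enumerate(banks):
        if hot_key in bank:
            if i == 0:
                return  # already in the closest bank
            closer = banks[i - 1]
            victim = min(closer, key=closer.get)   # coldest block in the closer bank
            if bank[hot_key] > closer[victim]:
                closer[hot_key], bank[victim] = bank.pop(hot_key), closer.pop(victim)
            return

if __name__ == "__main__":
    banks = [{"A": 2}, {"B": 9}]   # near bank holds cold A, far bank holds hot B
    maybe_migrate(banks, "B")
    print(banks)                   # B has moved to the near bank; A moved out
```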
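Patents 6003123 and 5845331, which share an abstract, center on guarded pointers: the pointer itself carries segment bounds and permission bits, and the hardware restricts both accesses and pointer modification. The sketch below captures that checking discipline in software; the field layout and permission names are assumptions for illustration, not the patented format.

```python
# Sketch of a guarded-pointer access check: the pointer carries the
# protected segment's bounds and permission bits, so every access (and
# every pointer-arithmetic result) can be validated without a table lookup.
from dataclasses import dataclass

@dataclass(frozen=True)
class GuardedPointer:
    base: int         # start of the protected segment
    length: int       # segment length in bytes
    offset: int       # current position within the segment
    perms: frozenset  # e.g. {"read"} or {"read", "write"}

    def check(self, op, size):
        """Raise if the access leaves the segment or lacks permission."""
        if op not in self.perms:
            raise PermissionError(f"{op} not permitted on this segment")
        if not (0 <= self.offset and self.offset + size <= self.length):
            raise IndexError("access outside guarded segment")
        return self.base + self.offset

    def add(self, delta):
        """Pointer arithmetic may move the offset but never the bounds or permissions."""
        return GuardedPointer(self.base, self.length, self.offset + delta, self.perms)

if __name__ == "__main__":
    p = GuardedPointer(base=0x1000, length=64, offset=0, perms=frozenset({"read"}))
    print(hex(p.add(8).check("read", 4)))      # 0x1008: in bounds, permitted
    try:
        p.add(8).check("write", 4)             # write permission is absent
    except PermissionError as e:
        print("blocked:", e)
```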