Patents by Inventor James E McCormick, Jr.

James E McCormick, Jr. has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11641326
    Abstract: Examples are described herein that relate to a mesh in a switch fabric. The mesh can include one or more buses that permit operations (e.g., read, write, or responses) to continue in the same direction, drop off to a memory, drop off a bus to permit another operation to use the bus, or receive operations that are changing direction. A latency estimate can be determined at least for operations that drop off from a bus to permit another operation to use the bus or receive and channel operations that are changing direction. An operation with a highest latency estimate (e.g., time of traversing a mesh) can be permitted to use the bus, even causing another operation, that is not to change direction, to drop off the bus and re-enter later.
    Type: Grant
    Filed: August 23, 2019
    Date of Patent: May 2, 2023
    Assignee: Intel Corporation
    Inventors: Karl S. Papadantonakis, Robert Southworth, Arvind Srinivasan, Helia A. Naeimi, James E. McCormick, Jr., Jonathan Dama, Ramakrishna Huggahalli, Roberto Penaranda Cebrian
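The arbitration idea in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the patented design: among operations competing for a mesh bus, the one with the highest latency estimate wins, and even a straight-through operation may be forced off the bus to re-enter later. The field names and latency model are assumptions.

```python
# Hypothetical sketch: arbitrate bus access by highest latency estimate.
def arbitrate(ops):
    """Pick the op with the highest latency estimate for bus access.

    Each op is a dict with 'id', 'latency_est' (estimated time spent
    traversing the mesh so far) and 'turning' (True if changing
    direction). Returns (winner, dropped); dropped ops must leave the
    bus and re-enter later.
    """
    winner = max(ops, key=lambda op: op["latency_est"])
    dropped = [op for op in ops if op is not winner]
    return winner, dropped

ops = [
    {"id": "read-A", "latency_est": 12, "turning": False},
    {"id": "write-B", "latency_est": 30, "turning": True},  # waited longest
    {"id": "resp-C", "latency_est": 5, "turning": False},
]
winner, dropped = arbitrate(ops)
# write-B wins even though read-A was continuing in the same direction
```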
  • Publication number: 20200412666
    Abstract: Examples are described herein that relate to a mesh in a switch fabric. The mesh can include one or more buses that permit operations (e.g., read, write, or responses) to continue in the same direction, drop off to a memory, drop off a bus to permit another operation to use the bus, or receive operations that are changing direction. A latency estimate can be determined at least for operations that drop off from a bus to permit another operation to use the bus or receive and channel operations that are changing direction. An operation with a highest latency estimate (e.g., time of traversing a mesh) can be permitted to use the bus, even causing another operation, that is not to change direction, to drop off the bus and re-enter later.
    Type: Application
    Filed: August 23, 2019
    Publication date: December 31, 2020
    Inventors: Karl S. PAPADANTONAKIS, Robert SOUTHWORTH, Arvind SRINIVASAN, Helia A. NAEIMI, James E. McCORMICK, JR., Jonathan DAMA, Ramakrishna HUGGAHALLI, Roberto PENARANDA CEBRIAN
  • Patent number: 10346177
    Abstract: An embodiment of a memory apparatus may include a system memory, and a memory manager communicatively coupled to the system memory to determine a first amount of system memory needed for a boot process, initialize the first amount of system memory, start the boot process, and initialize additional system memory in parallel with the boot process. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: December 14, 2016
    Date of Patent: July 9, 2019
    Assignee: Intel Corporation
    Inventors: Mahesh S. Natu, Wei Chen, Jing Ling, James E. McCormick, Jr.
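The boot-time flow in the abstract can be sketched as follows. This is an illustrative software model, not the patented implementation: initialize only the memory the boot process needs, start booting, and initialize the remaining system memory on a background thread in parallel. Region counts and function names are assumptions.

```python
# Illustrative sketch: minimal memory init, then parallel init during boot.
import threading

initialized = []      # regions initialized so far
TOTAL_REGIONS = 8
BOOT_REGIONS = 2      # assumed minimum needed to start the boot process

def init_region(r):
    initialized.append(r)   # stand-in for real memory training/zeroing

def boot():
    pass                    # stand-in for the boot process itself

# 1. Initialize the first amount of system memory needed for boot.
for r in range(BOOT_REGIONS):
    init_region(r)

# 2. Start boot; initialize the rest of memory in parallel with it.
bg = threading.Thread(
    target=lambda: [init_region(r) for r in range(BOOT_REGIONS, TOTAL_REGIONS)]
)
bg.start()
boot()
bg.join()
```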
  • Patent number: 10261909
    Abstract: In accordance with embodiments disclosed herein, there are provided methods, systems, mechanisms, techniques, and apparatuses for implementing a speculative cache modification design. For example, in one embodiment, such means may include an integrated circuit having a data bus; a cache communicably interfaced with the data bus; a pipeline communicably interfaced with the data bus, in which the pipeline is to receive a store instruction corresponding to a cache line to be written to cache; caching logic to perform a speculative cache write of the cache line into the cache before the store instruction retires from the pipeline; and cache line validation logic to determine if the cache line written into the cache is valid or invalid, in which the cache line validation logic is to invalidate the cache line speculatively written into the cache when determined invalid and further in which the store instruction is allowed to retire from the pipeline when the cache line is determined to be valid.
    Type: Grant
    Filed: July 28, 2015
    Date of Patent: April 16, 2019
    Assignee: Intel Corporation
    Inventor: James E. McCormick, Jr.
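The speculative-write flow described above can be sketched as a small model. All names here are illustrative assumptions: the cache line is written before the store retires; if validation fails the line is invalidated, and the store is allowed to retire only when the line is valid.

```python
# Hedged sketch of speculative cache modification (illustrative names).
class SpecCache:
    def __init__(self):
        self.lines = {}                   # addr -> (data, valid)

    def speculative_write(self, addr, data):
        self.lines[addr] = (data, True)   # written before the store retires

    def validate(self, addr, ok):
        if not ok:                        # mis-speculation: invalidate line
            data, _ = self.lines[addr]
            self.lines[addr] = (data, False)
        return ok                         # store may retire only if valid

cache = SpecCache()
cache.speculative_write(0x40, "new")
retired = cache.validate(0x40, ok=True)   # valid line -> store retires
cache.speculative_write(0x80, "bad")
cache.validate(0x80, ok=False)            # invalid line -> invalidated
```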
  • Publication number: 20180217936
    Abstract: In accordance with embodiments disclosed herein, there are provided methods, systems, mechanisms, techniques, and apparatuses for implementing a speculative cache modification design. For example, in one embodiment, such means may include an integrated circuit having a data bus; a cache communicably interfaced with the data bus; a pipeline communicably interfaced with the data bus, in which the pipeline is to receive a store instruction corresponding to a cache line to be written to cache; caching logic to perform a speculative cache write of the cache line into the cache before the store instruction retires from the pipeline; and cache line validation logic to determine if the cache line written into the cache is valid or invalid, in which the cache line validation logic is to invalidate the cache line speculatively written into the cache when determined invalid and further in which the store instruction is allowed to retire from the pipeline when the cache line is determined to be valid.
    Type: Application
    Filed: July 28, 2015
    Publication date: August 2, 2018
    Applicant: Intel Corporation
    Inventor: James E. McCORMICK, JR.
  • Publication number: 20180165100
    Abstract: An embodiment of a memory apparatus may include a system memory, and a memory manager communicatively coupled to the system memory to determine a first amount of system memory needed for a boot process, initialize the first amount of system memory, start the boot process, and initialize additional system memory in parallel with the boot process. Other embodiments are disclosed and claimed.
    Type: Application
    Filed: December 14, 2016
    Publication date: June 14, 2018
    Inventors: Mahesh S. Natu, Wei Chen, Jing Ling, James E. McCormick, JR.
  • Publication number: 20170031827
    Abstract: In accordance with embodiments disclosed herein, there are provided methods, systems, mechanisms, techniques, and apparatuses for implementing a speculative cache modification design. For example, in one embodiment, such means may include an integrated circuit having a data bus; a cache communicably interfaced with the data bus; a pipeline communicably interfaced with the data bus, in which the pipeline is to receive a store instruction corresponding to a cache line to be written to cache; caching logic to perform a speculative cache write of the cache line into the cache before the store instruction retires from the pipeline; and cache line validation logic to determine if the cache line written into the cache is valid or invalid, in which the cache line validation logic is to invalidate the cache line speculatively written into the cache when determined invalid and further in which the store instruction is allowed to retire from the pipeline when the cache line is determined to be valid.
    Type: Application
    Filed: July 28, 2015
    Publication date: February 2, 2017
    Inventor: James E. McCORMICK, JR.
  • Patent number: 9092346
    Abstract: In accordance with embodiments disclosed herein, there are provided methods, systems, mechanisms, techniques, and apparatuses for implementing a speculative cache modification design. For example, in one embodiment, such means may include an integrated circuit having a data bus; a cache communicably interfaced with the data bus; a pipeline communicably interfaced with the data bus, in which the pipeline is to receive a store instruction corresponding to a cache line to be written to cache; caching logic to perform a speculative cache write of the cache line into the cache before the store instruction retires from the pipeline; and cache line validation logic to determine if the cache line written into the cache is valid or invalid, in which the cache line validation logic is to invalidate the cache line speculatively written into the cache when determined invalid and further in which the store instruction is allowed to retire from the pipeline when the cache line is determined to be valid.
    Type: Grant
    Filed: December 22, 2011
    Date of Patent: July 28, 2015
    Assignee: Intel Corporation
    Inventor: James E. McCormick, Jr.
  • Publication number: 20130254486
    Abstract: In accordance with embodiments disclosed herein, there are provided methods, systems, mechanisms, techniques, and apparatuses for implementing a speculative cache modification design. For example, in one embodiment, such means may include an integrated circuit having a data bus; a cache communicably interfaced with the data bus; a pipeline communicably interfaced with the data bus, in which the pipeline is to receive a store instruction corresponding to a cache line to be written to cache; caching logic to perform a speculative cache write of the cache line into the cache before the store instruction retires from the pipeline; and cache line validation logic to determine if the cache line written into the cache is valid or invalid, in which the cache line validation logic is to invalidate the cache line speculatively written into the cache when determined invalid and further in which the store instruction is allowed to retire from the pipeline when the cache line is determined to be valid.
    Type: Application
    Filed: December 22, 2011
    Publication date: September 26, 2013
    Inventor: James E. McCormick, JR.
  • Publication number: 20130159679
    Abstract: In one embodiment, the present invention includes a method for receiving a data access instruction and obtaining an index into a data access hint register (DAHR) register file of a processor from the data access instruction, reading hint information from a register of the DAHR register file accessed using the index, and performing the data access instruction using the hint information. Other embodiments are described and claimed.
    Type: Application
    Filed: December 20, 2011
    Publication date: June 20, 2013
    Inventors: James E. McCormick, JR., Dale Morris
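The DAHR mechanism above can be sketched as a software model. The hint fields and their layout are invented for illustration; the patent only says the instruction carries an index into the DAHR register file and the access uses the hint bits read from the indexed register.

```python
# Minimal sketch, assuming an invented hint-field layout for the DAHR file.
DAHR = [
    {"prefetch": False, "bypass_l1": False},  # register 0: default hints
    {"prefetch": True,  "bypass_l1": False},  # register 1
    {"prefetch": True,  "bypass_l1": True},   # register 2
]

def execute_load(addr, dahr_index):
    hints = DAHR[dahr_index]          # read hint info via the index
    # ... perform the access using the hints (illustrative return value)
    return ("load", addr, hints["prefetch"], hints["bypass_l1"])

result = execute_load(0x1000, 2)
```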
  • Patent number: 7747844
    Abstract: Systems, methodologies, media, and other embodiments associated with acquiring instruction addresses associated with performance monitoring events are described. One exemplary system embodiment includes logic for recording instruction and state data associated with events countable by performance monitoring logic associated with a pipelined processor. The exemplary system embodiment may also include logic for traversing the instruction and state data on a cycle count basis. The exemplary system may also include logic for traversing the instruction and state data on a retirement count basis.
    Type: Grant
    Filed: March 31, 2005
    Date of Patent: June 29, 2010
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: James E. McCormick, Jr., James R. Callister, Susith R. Fernando
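The recording-and-traversal idea above can be sketched as follows. The record format is an assumption: the point is that the same instruction/state records can be walked either on a cycle-count basis or on a retirement-count basis.

```python
# Illustrative sketch: record event data, traverse by cycle or retirement.
records = []

def record_event(cycle, retire_count, ip, state):
    records.append({"cycle": cycle, "retired": retire_count,
                    "ip": ip, "state": state})

def find_by_cycle(target_cycle):
    # traverse the records on a cycle-count basis
    return [r for r in records if r["cycle"] == target_cycle]

def find_by_retirement(target_retired):
    # traverse the records on a retirement-count basis
    return [r for r in records if r["retired"] == target_retired]

record_event(100, 40, 0x4000, "miss")
record_event(107, 41, 0x4010, "stall")
```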
  • Patent number: 7356674
    Abstract: A method of, and apparatus for, interfacing the hardware of a processor capable of processing instructions from more than one type of instruction set. More particularly, an engine responsible for fetching native instructions from a memory subsystem (such as an EM fetch engine) is interfaced with an engine that processes emulated instructions (such as an x86 engine). This is achieved using a handshake protocol, whereby the x86 engine sends an explicit fetch request signal to the EM fetch engine along with a fetch address. The EM fetch engine then accesses the memory subsystem and retrieves a line of instructions for subsequent decode and execution. The EM fetch engine sends this line of instructions to the x86 engine along with an explicit fetch complete signal. The EM fetch engine also includes a fetch address queue capable of holding the fetch addresses before they are processed by the EM fetch engine. The fetch requests are processed such that more than one fetch request may be pending at the same time.
    Type: Grant
    Filed: November 21, 2003
    Date of Patent: April 8, 2008
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Anuj Dua, James E McCormick, Jr., Stephen R. Undy, Barry J Arnold, Russell C Brockmann, David Carl Kubicek, James Curtis Stout
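The handshake described above can be sketched with a queue. Names, queue depth, and the memory model are assumptions: the x86 engine posts explicit fetch requests with addresses into a fetch address queue, several may be pending at once, and the EM fetch engine answers each with an instruction line plus a fetch-complete signal.

```python
# Sketch of the fetch handshake (illustrative names and payloads).
from collections import deque

fetch_queue = deque()                 # fetch address queue

def x86_request_fetch(addr):
    fetch_queue.append(addr)          # explicit fetch request + fetch address

def em_service_one(memory):
    addr = fetch_queue.popleft()
    line = memory[addr]               # EM engine accesses memory subsystem
    return {"addr": addr, "line": line, "fetch_complete": True}

memory = {0x100: "line-A", 0x120: "line-B"}
x86_request_fetch(0x100)
x86_request_fetch(0x120)              # two fetch requests pending at once
replies = [em_service_one(memory), em_service_one(memory)]
```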
  • Patent number: 6721875
    Abstract: Disclosed is a computer architecture with single-syllable IP-relative branch instructions and long IP-relative branch instructions (IP=instruction pointer). The architecture fetches instructions in multi-syllable, bundle form. Single-syllable IP-relative branch instructions occupy a single syllable in an instruction bundle, and long IP-relative branch instructions occupy two syllables in an instruction bundle. The additional syllable of the long branch carries with it additional IP-relative offset bits, which when merged with offset bits carried in a core branch syllable provide a much greater offset than is carried by a single-syllable branch alone. Thus, the long branch provides for greater reach within an address space. Use of the long branch to patch IA-64 architecture instruction bundles is also disclosed. Such a patch provides the reach of an indirect branch with the overhead of a single-syllable IP-relative branch.
    Type: Grant
    Filed: February 22, 2000
    Date of Patent: April 13, 2004
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: James E McCormick, Jr., Stephen R. Undy, Donald Charles Soltis, Jr.
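The offset-merging idea above can be shown arithmetically. The bit widths here are assumptions for illustration, not the actual IA-64 encoding: the second syllable's extra offset bits become the high-order part of a much wider IP-relative offset.

```python
# Worked sketch: merge core-syllable and extra-syllable offset bits.
CORE_BITS = 21     # assumed offset bits in the core branch syllable
EXTRA_BITS = 39    # assumed additional bits in the second syllable

def merge_offset(core_offset, extra_offset):
    # extra bits form the high-order part of the combined offset
    return (extra_offset << CORE_BITS) | (core_offset & ((1 << CORE_BITS) - 1))

short_reach = 1 << CORE_BITS                  # single-syllable branch reach
long_reach = 1 << (CORE_BITS + EXTRA_BITS)    # long branch: much greater reach
offset = merge_offset(0x1ABCD, 0x2)
```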
  • Patent number: 6678817
    Abstract: A method of, and apparatus for, interfacing the hardware of a processor capable of processing instructions from more than one type of instruction set. More particularly, an engine responsible for fetching native instructions from a memory subsystem (such as an EM fetch engine) is interfaced with an engine that processes emulated instructions (such as an x86 engine). This is achieved using a handshake protocol, whereby the x86 engine sends an explicit fetch request signal to the EM fetch engine along with a fetch address. The EM fetch engine then accesses the memory subsystem and retrieves a line of instructions for subsequent decode and execution. The EM fetch engine sends this line of instructions to the x86 engine along with an explicit fetch complete signal. The EM fetch engine also includes a fetch address queue capable of holding the fetch addresses before they are processed by the EM fetch engine. The fetch requests are processed such that more than one fetch request may be pending at the same time.
    Type: Grant
    Filed: February 22, 2000
    Date of Patent: January 13, 2004
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Anuj Dua, James E McCormick, Jr., Stephen R. Undy, Barry J Arnold, Russell C Brockmann, David Carl Kubicek, James Curtis Stout
  • Patent number: 6647487
    Abstract: An apparatus and methods for optimizing prefetch performance. Logical ones are shifted into the bits of a shift register from the left for each instruction address prefetched. As instruction addresses are fetched by the processor, logical zeros are shifted into the bit positions of the shift register from the right. Once initiated, prefetching continues until a logical one is stored in the n-th bit of the shift register. Detection of this logical one in the n-th bit causes prefetching to cease until a prefetched instruction address is removed from the prefetched instruction address register and a logical zero is shifted back into the n-th bit of the shift register. Thus, autonomous prefetch agents are prevented from prefetching too far ahead of the current instruction pointer, which would result in wasted memory bandwidth and the replacement of useful instructions in the instruction cache.
    Type: Grant
    Filed: February 18, 2000
    Date of Patent: November 11, 2003
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Stephen R. Undy, James E McCormick, Jr.
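The shift-register throttle above can be simulated directly. The register width is an assumption: a 1 shifts in from the left per prefetch, a 0 shifts in from the right per consumed address, and prefetching stops while the n-th bit holds a 1.

```python
# Simulation sketch of the prefetch throttle (width N is an assumption).
N = 4
bits = [0] * N                  # shift register; bits[0] is the leftmost bit

def on_prefetch():
    bits.insert(0, 1)           # shift a logical one in from the left
    bits.pop()

def on_fetch_consumed():
    bits.append(0)              # shift a logical zero in from the right
    bits.pop(0)

def may_prefetch():
    return bits[N - 1] == 0     # a one in the n-th bit halts prefetching

for _ in range(N):
    on_prefetch()
stopped = not may_prefetch()    # N prefetches ahead: throttle engages
on_fetch_consumed()             # one prefetched address consumed
resumed = may_prefetch()        # throttle releases
```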
  • Patent number: 6629167
    Abstract: An apparatus for and a method of decoupling at least two multi-stage pipelines are described. At least two paths of data through which data from the first pipeline is sent to the second pipeline are provided. During a pipelined execution of a task in the at least two pipelines, the second pipeline may not require all of the data produced in the first pipeline to process at least some subset of the task. The first pipeline may not be able to produce all data required by each of the stages of the second pipeline. One of the two data paths provides an early data path for a type of data that becomes available in a stage of the first pipeline and that may be processed in a stage of the second pipeline early in time. The other of the two data paths provides a late data path for a type of data that becomes available in a stage of the first pipeline and that may be processed in a stage of the second pipeline later in time. Each data path may comprise a buffer, e.g., a FIFO.
    Type: Grant
    Filed: February 18, 2000
    Date of Patent: September 30, 2003
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Stephen Undy, James E. McCormick, Jr.
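The early/late decoupling above can be sketched with two FIFOs. Payloads and names are illustrative assumptions: the second pipeline can begin work on early data before the corresponding late data exists.

```python
# Sketch: decouple two pipelines with an early FIFO and a late FIFO.
from collections import deque

early_fifo = deque()   # data available early in pipe 1, needed early in pipe 2
late_fifo = deque()    # data available later in pipe 1, needed later in pipe 2

def pipe1_stage_early(item):
    early_fifo.append(("early", item))

def pipe1_stage_late(item):
    late_fifo.append(("late", item))

def pipe2_consume():
    # pipe 2 starts on early data even if late data is not yet produced
    first = early_fifo.popleft()
    second = late_fifo.popleft() if late_fifo else None
    return first, second

pipe1_stage_early("insn-0")
result = pipe2_consume()   # proceeds although the late path is still empty
```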
  • Patent number: 6622209
    Abstract: In one embodiment of the invention, data values which are provided to a non-tagged, n-way cache are written into the cache in a non-count form. Whereas a counter tends to quickly saturate to one extreme or the other (e.g., all zeros or all ones), or briefly take on a value which approaches an extreme, a non-count data value (e.g., branch prediction history bits) tends to assume a wider variety of values.
    Type: Grant
    Filed: September 30, 2002
    Date of Patent: September 16, 2003
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventor: James E McCormick, Jr.
  • Patent number: 6516388
    Abstract: In a cache which writes new data over less recently used data, methods and apparatus which dispense with the convention of marking new cache data as most recently used. Instead, non-referenced data is marked as less recently used when it is written into a cache, and referenced data is marked as more recently used when it is written into a cache. Referenced data may correspond to fetch data, and non-referenced data may correspond to prefetch data. Upon fetch of a data value from the cache, its use status may be updated to more recently used. The methods and apparatus have the effect of preserving (n−1)/n of a cache's entries for the storage of fetch data, while limiting the storage of prefetch data to 1/n of a cache's entries. Pollution which results from unneeded prefetch data is therefore limited to 1/n of the cache.
    Type: Grant
    Filed: September 15, 2000
    Date of Patent: February 4, 2003
    Assignee: Hewlett-Packard Company
    Inventors: James E. McCormick, Jr., Stephen R. Undy
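The replacement policy above can be sketched with a 2-way set (all structure assumed): prefetched lines enter marked less-recently-used, so they are the first to be evicted; fetched lines enter marked more-recently-used, confining prefetch pollution to one way of the set.

```python
# Sketch: prefetch data enters LRU, fetch data enters MRU (2-way example).
class Set2Way:
    def __init__(self):
        self.ways = [None, None]   # index 0 = LRU way, index 1 = MRU way

    def fill(self, line, referenced):
        self.ways[0] = line        # new data overwrites the LRU way
        if referenced:             # fetch data: mark more recently used
            self.ways[0], self.ways[1] = self.ways[1], self.ways[0]
        # prefetch data is left marked less recently used

s = Set2Way()
s.fill("fetch-A", referenced=True)      # fetch-A becomes MRU
s.fill("prefetch-B", referenced=False)  # prefetch-B stays LRU
s.fill("prefetch-C", referenced=False)  # C evicts B, never fetch-A
```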
  • Patent number: 6470438
    Abstract: In one embodiment of the invention, each data value which is provided to a non-tagged, n-way cache is hashed with a number of bits which correspond to the data value, thereby producing a hashed data value. Preferably, the bits which are hashed with the data value are address bits. The hashed data value is then written into one or more ways of the cache using index hashing. A cache hit signal is produced using index hashing and voting. In a cache where data values assume only a few different values, or in a cache where many data values which are written to the cache tend to assume a small number of data values, data hashing helps to reduce false hits by ensuring that the same data values will produce different hashed data values when the same data values are associated with different addresses. In another embodiment of the invention, data values which are provided to a non-tagged, n-way cache are written into the cache in a non-count form.
    Type: Grant
    Filed: February 22, 2000
    Date of Patent: October 22, 2002
    Assignee: Hewlett-Packard Company
    Inventor: James E McCormick, Jr.
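The data-hashing idea above can be shown with a small example. The hash function and bit widths are assumptions: mixing address bits into the stored value means identical data at different addresses yields different stored values, which reduces false hits in a non-tagged cache.

```python
# Sketch: XOR address bits into the data value before storing it (widths assumed).
MASK = 0xFF  # assumed 8-bit stored values

def hash_data(value, addr):
    return (value ^ (addr & MASK)) & MASK    # mix address bits into the data

def unhash_data(stored, addr):
    return (stored ^ (addr & MASK)) & MASK   # XOR is its own inverse

same_value = 0x3
a = hash_data(same_value, 0x1040)
b = hash_data(same_value, 0x2081)
different = a != b   # same data, different addresses -> different hashed values
```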
  • Patent number: 6351796
    Abstract: Methods and apparatus for storing data in a multi-level memory hierarchy having at least a lower level cache and a higher level cache. Relevancy information is maintained for various data values stored in the lower level cache, the relevancy information indicating whether the various data values stored in the lower level cache, if lost, could only be generated from corresponding data stored in the higher level cache. If one of the various data values stored in the lower level cache is to be updated, a determination as to whether corresponding data should be stored in the higher level cache is based at least in part on 1) the status of the relevancy information corresponding to the one of the various data values stored in the lower level cache which is to be updated, and 2) whether the updated value which is to be written into the lower level cache matches one or more select data value patterns.
    Type: Grant
    Filed: February 22, 2000
    Date of Patent: February 26, 2002
    Assignee: Hewlett-Packard Company
    Inventors: James E McCormick, Jr., Steven Kenneth Saunders
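The write decision described above can be sketched as a predicate. All names and the pattern set are assumptions: on an update to a lower-level line, the value is propagated to the higher level if the relevancy information says the higher-level copy would be the only way to regenerate the data, or if the new value matches a select data pattern.

```python
# Sketch: decide whether an update must also be stored in the higher level.
SELECT_PATTERNS = {0x0}        # assumed "interesting" data value patterns

def should_write_higher(relevant_only_copy, new_value):
    if relevant_only_copy:     # higher-level copy is the sole backup
        return True
    return new_value in SELECT_PATTERNS

a = should_write_higher(True, 0x7)    # relevancy forces the higher-level write
b = should_write_higher(False, 0x0)   # pattern match forces it
c = should_write_higher(False, 0x7)   # neither applies: skip the higher level
```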