Patents by Inventor James E. McCormick
James E. McCormick has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11641326
Abstract: Examples are described herein that relate to a mesh in a switch fabric. The mesh can include one or more buses that permit operations (e.g., reads, writes, or responses) to continue in the same direction, drop off to a memory, drop off a bus to permit another operation to use the bus, or receive operations that are changing direction. A latency estimate can be determined at least for operations that drop off from a bus to permit another operation to use the bus or that receive and channel operations that are changing direction. An operation with the highest latency estimate (e.g., time spent traversing the mesh) can be permitted to use the bus, even causing another operation that is not changing direction to drop off the bus and re-enter later.
Type: Grant
Filed: August 23, 2019
Date of Patent: May 2, 2023
Assignee: Intel Corporation
Inventors: Karl S. Papadantonakis, Robert Southworth, Arvind Srinivasan, Helia A. Naeimi, James E. McCormick, Jr., Jonathan Dama, Ramakrishna Huggahalli, Roberto Penaranda Cebrian
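The arbitration behavior described in this abstract (grant the bus to whichever pending operation has the largest latency estimate, even if that means dropping a lower-estimate operation off the bus) can be illustrated with a small software model. This is only a sketch; the Operation fields and the latency formula are assumptions and are not taken from the patent.

```python
class Operation:
    """Toy model of an in-flight mesh operation (fields are assumptions)."""
    def __init__(self, name, hops_remaining, cycles_waited=0, changes_direction=False):
        self.name = name
        self.hops_remaining = hops_remaining
        self.cycles_waited = cycles_waited
        self.changes_direction = changes_direction

    def latency_estimate(self):
        # Assumed estimate: hops left to traverse the mesh plus cycles already spent waiting.
        return self.hops_remaining + self.cycles_waited


def arbitrate(on_bus, waiting):
    """Grant the bus to the operation with the highest latency estimate.

    If a waiting operation (for example, one that is changing direction) has a higher
    estimate than the operation currently on the bus, the current operation is dropped
    off and must re-enter later, mirroring the behavior in the abstract.
    """
    candidates = [on_bus] + waiting
    winner = max(candidates, key=Operation.latency_estimate)
    dropped = None if winner is on_bus else on_bus
    return winner, dropped


if __name__ == "__main__":
    straight = Operation("write A", hops_remaining=2)
    turning = Operation("read B", hops_remaining=3, cycles_waited=4, changes_direction=True)
    winner, dropped = arbitrate(straight, [turning])
    print(winner.name, "uses the bus;", dropped.name, "re-enters later")
```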
-
Publication number: 20200412666
Abstract: Examples are described herein that relate to a mesh in a switch fabric. The mesh can include one or more buses that permit operations (e.g., reads, writes, or responses) to continue in the same direction, drop off to a memory, drop off a bus to permit another operation to use the bus, or receive operations that are changing direction. A latency estimate can be determined at least for operations that drop off from a bus to permit another operation to use the bus or that receive and channel operations that are changing direction. An operation with the highest latency estimate (e.g., time spent traversing the mesh) can be permitted to use the bus, even causing another operation that is not changing direction to drop off the bus and re-enter later.
Type: Application
Filed: August 23, 2019
Publication date: December 31, 2020
Inventors: Karl S. Papadantonakis, Robert Southworth, Arvind Srinivasan, Helia A. Naeimi, James E. McCormick, Jr., Jonathan Dama, Ramakrishna Huggahalli, Roberto Penaranda Cebrian
-
Patent number: 10346177
Abstract: An embodiment of a memory apparatus may include a system memory, and a memory manager communicatively coupled to the system memory to determine a first amount of system memory needed for a boot process, initialize the first amount of system memory, start the boot process, and initialize additional system memory in parallel with the boot process. Other embodiments are disclosed and claimed.
Type: Grant
Filed: December 14, 2016
Date of Patent: July 9, 2019
Assignee: Intel Corporation
Inventors: Mahesh S. Natu, Wei Chen, Jing Ling, James E. McCormick, Jr.
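The boot flow in this abstract (initialize only the memory the boot process needs, start booting, and bring up the rest of memory concurrently) can be sketched in a few lines of Python. The region granularity and the work done in init_memory are stand-ins, not details from the patent.

```python
import threading
import time

def init_memory(region):
    """Stand-in for initializing one region of system memory."""
    time.sleep(0.01)                      # simulate initialization work
    print(f"initialized {region}")

def boot(regions, first_needed=1):
    # Initialize only the first amount of memory the boot process requires.
    for region in regions[:first_needed]:
        init_memory(region)

    # Initialize the remaining memory in parallel with the boot process.
    background = threading.Thread(
        target=lambda: [init_memory(r) for r in regions[first_needed:]])
    background.start()

    print("boot process running with partially initialized memory")
    background.join()                     # remaining memory is ready before handoff

boot(["DIMM0", "DIMM1", "DIMM2", "DIMM3"])
```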
-
Patent number: 10261909
Abstract: In accordance with embodiments disclosed herein, there are provided methods, systems, mechanisms, techniques, and apparatuses for implementing a speculative cache modification design. For example, in one embodiment, such means may include an integrated circuit having a data bus; a cache communicably interfaced with the data bus; a pipeline communicably interfaced with the data bus, in which the pipeline is to receive a store instruction corresponding to a cache line to be written to cache; caching logic to perform a speculative cache write of the cache line into the cache before the store instruction retires from the pipeline; and cache line validation logic to determine if the cache line written into the cache is valid or invalid, in which the cache line validation logic is to invalidate the cache line speculatively written into the cache when determined invalid and further in which the store instruction is allowed to retire from the pipeline when the cache line is determined to be valid.
Type: Grant
Filed: July 28, 2015
Date of Patent: April 16, 2019
Assignee: Intel Corporation
Inventor: James E. McCormick, Jr.
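A minimal software sketch of the speculative-write-then-validate idea follows. The dictionary-based cache and the validation policy are assumptions chosen for illustration; the patent describes hardware caching and validation logic, not this code.

```python
class SpeculativeCache:
    """Toy model: a store writes its cache line before it retires from the pipeline;
    if the line is later found invalid, the speculative write is undone."""

    def __init__(self):
        self.lines = {}                     # address -> {"data": ..., "speculative": bool}

    def speculative_write(self, addr, data):
        # Write the line into the cache before the store retires.
        self.lines[addr] = {"data": data, "speculative": True}

    def validate(self, addr, line_is_valid):
        """Called when the validation logic decides whether the written line was valid."""
        line = self.lines.get(addr)
        if line is None or not line["speculative"]:
            return False
        if line_is_valid:
            line["speculative"] = False     # store may retire; line is now architectural
            return True
        del self.lines[addr]                # invalidate the speculatively written line
        return False
```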
-
Publication number: 20180217936
Abstract: In accordance with embodiments disclosed herein, there are provided methods, systems, mechanisms, techniques, and apparatuses for implementing a speculative cache modification design. For example, in one embodiment, such means may include an integrated circuit having a data bus; a cache communicably interfaced with the data bus; a pipeline communicably interfaced with the data bus, in which the pipeline is to receive a store instruction corresponding to a cache line to be written to cache; caching logic to perform a speculative cache write of the cache line into the cache before the store instruction retires from the pipeline; and cache line validation logic to determine if the cache line written into the cache is valid or invalid, in which the cache line validation logic is to invalidate the cache line speculatively written into the cache when determined invalid and further in which the store instruction is allowed to retire from the pipeline when the cache line is determined to be valid.
Type: Application
Filed: July 28, 2015
Publication date: August 2, 2018
Applicant: Intel Corporation
Inventor: James E. McCormick, Jr.
-
Publication number: 20180165100
Abstract: An embodiment of a memory apparatus may include a system memory, and a memory manager communicatively coupled to the system memory to determine a first amount of system memory needed for a boot process, initialize the first amount of system memory, start the boot process, and initialize additional system memory in parallel with the boot process. Other embodiments are disclosed and claimed.
Type: Application
Filed: December 14, 2016
Publication date: June 14, 2018
Inventors: Mahesh S. Natu, Wei Chen, Jing Ling, James E. McCormick, Jr.
-
Publication number: 20170031827
Abstract: In accordance with embodiments disclosed herein, there are provided methods, systems, mechanisms, techniques, and apparatuses for implementing a speculative cache modification design. For example, in one embodiment, such means may include an integrated circuit having a data bus; a cache communicably interfaced with the data bus; a pipeline communicably interfaced with the data bus, in which the pipeline is to receive a store instruction corresponding to a cache line to be written to cache; caching logic to perform a speculative cache write of the cache line into the cache before the store instruction retires from the pipeline; and cache line validation logic to determine if the cache line written into the cache is valid or invalid, in which the cache line validation logic is to invalidate the cache line speculatively written into the cache when determined invalid and further in which the store instruction is allowed to retire from the pipeline when the cache line is determined to be valid.
Type: Application
Filed: July 28, 2015
Publication date: February 2, 2017
Inventor: James E. McCormick, Jr.
-
Patent number: 9092346
Abstract: In accordance with embodiments disclosed herein, there are provided methods, systems, mechanisms, techniques, and apparatuses for implementing a speculative cache modification design. For example, in one embodiment, such means may include an integrated circuit having a data bus; a cache communicably interfaced with the data bus; a pipeline communicably interfaced with the data bus, in which the pipeline is to receive a store instruction corresponding to a cache line to be written to cache; caching logic to perform a speculative cache write of the cache line into the cache before the store instruction retires from the pipeline; and cache line validation logic to determine if the cache line written into the cache is valid or invalid, in which the cache line validation logic is to invalidate the cache line speculatively written into the cache when determined invalid and further in which the store instruction is allowed to retire from the pipeline when the cache line is determined to be valid.
Type: Grant
Filed: December 22, 2011
Date of Patent: July 28, 2015
Assignee: Intel Corporation
Inventor: James E. McCormick, Jr.
-
Publication number: 20130254486
Abstract: In accordance with embodiments disclosed herein, there are provided methods, systems, mechanisms, techniques, and apparatuses for implementing a speculative cache modification design. For example, in one embodiment, such means may include an integrated circuit having a data bus; a cache communicably interfaced with the data bus; a pipeline communicably interfaced with the data bus, in which the pipeline is to receive a store instruction corresponding to a cache line to be written to cache; caching logic to perform a speculative cache write of the cache line into the cache before the store instruction retires from the pipeline; and cache line validation logic to determine if the cache line written into the cache is valid or invalid, in which the cache line validation logic is to invalidate the cache line speculatively written into the cache when determined invalid and further in which the store instruction is allowed to retire from the pipeline when the cache line is determined to be valid.
Type: Application
Filed: December 22, 2011
Publication date: September 26, 2013
Inventor: James E. McCormick, Jr.
-
Publication number: 20130159679
Abstract: In one embodiment, the present invention includes a method for receiving a data access instruction and obtaining an index into a data access hint register (DAHR) register file of a processor from the data access instruction, reading hint information from a register of the DAHR register file accessed using the index, and performing the data access instruction using the hint information. Other embodiments are described and claimed.
Type: Application
Filed: December 20, 2011
Publication date: June 20, 2013
Inventors: James E. McCormick, Jr., Dale Morris
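The mechanism in this abstract (an instruction encodes a small index into a hint-register file, and the hints read from that register steer the data access) can be mocked up as follows. The register contents, hint fields, and lookup policy are assumptions made only for illustration.

```python
# Assumed hint-register file: each entry holds hints a data access might use.
DAHR_FILE = [
    {"target_level": "L1", "non_temporal": False},   # dahr0
    {"target_level": "L2", "non_temporal": False},   # dahr1
    {"target_level": None, "non_temporal": True},    # dahr2: streaming access
]

def execute_load(address, dahr_index, memory):
    """Perform a load using hint information read via the index carried by the instruction."""
    hints = DAHR_FILE[dahr_index]          # read hint info from the indexed register
    data = memory[address]
    if hints["non_temporal"]:
        pass                               # e.g., skip cache allocation for streaming data
    return data, hints

memory = {0x1000: 42}
print(execute_load(0x1000, dahr_index=2, memory=memory))
```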
-
Patent number: 8443171
Abstract: The present invention provides a system and method for runtime updating of hints in program instructions. The invention also provides for programs of instructions that include hint performance data. Also, the invention provides an instruction cache that modifies hints and writes them back. Because runtime hint updates are stored in the instructions themselves, the impact of the updates is not constrained by the limited memory capacity local to a processor. Also, there is no conflict between hardware and software hints, as they can share a common encoding in the program instructions.
Type: Grant
Filed: July 30, 2004
Date of Patent: May 14, 2013
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Dale Morris, James E. McCormick
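A small model of the write-back idea follows: the instruction cache updates hint bits from observed behavior and writes the modified instruction back to the program image, so the hints persist with the program rather than in per-processor storage. The update policy (record the last branch outcome) and the data layout are assumptions, not the patented encoding.

```python
class HintUpdatingICache:
    """Toy instruction cache that rewrites hint bits in cached instructions and
    writes the modified instructions back to the program image."""

    def __init__(self, program_image):
        self.program = program_image        # address -> instruction dict
        self.cache = {}

    def fetch(self, addr):
        self.cache[addr] = dict(self.program[addr])
        return self.cache[addr]

    def observe_branch(self, addr, taken):
        # Assumed policy: the hint simply records the most recent outcome.
        self.cache[addr]["hint_taken"] = taken

    def evict(self, addr):
        # Write the updated hint back into the program instructions.
        self.program[addr] = self.cache.pop(addr)

program = {0x100: {"opcode": "br", "hint_taken": False}}
icache = HintUpdatingICache(program)
icache.fetch(0x100)
icache.observe_branch(0x100, taken=True)
icache.evict(0x100)
print(program[0x100])    # the stored hint now reflects runtime behavior
```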
-
Patent number: 7747844
Abstract: Systems, methodologies, media, and other embodiments associated with acquiring instruction addresses associated with performance monitoring events are described. One exemplary system embodiment includes logic for recording instruction and state data associated with events countable by performance monitoring logic associated with a pipelined processor. The exemplary system embodiment may also include logic for traversing the instruction and state data on a cycle count basis. The exemplary system may also include logic for traversing the instruction and state data on a retirement count basis.
Type: Grant
Filed: March 31, 2005
Date of Patent: June 29, 2010
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: James E. McCormick, Jr., James R. Callister, Susith R. Fernando
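The two traversal modes named in the abstract (walking the recorded instruction and state data by cycle count or by retirement count) can be illustrated over a toy record. The record layout and field names are assumptions made for the example, not the hardware format.

```python
# Assumed record format: one entry per recorded sample of instruction and state data.
history = [
    {"cycle": 100, "retired": 40, "ip": 0x4000, "event": "L2_MISS"},
    {"cycle": 103, "retired": 40, "ip": 0x4010, "event": None},
    {"cycle": 110, "retired": 41, "ip": 0x4020, "event": "BR_MISPREDICT"},
]

def traverse_by_cycles(records, now, window):
    """Return samples recorded within the last `window` cycles."""
    return [r for r in records if now - r["cycle"] <= window]

def traverse_by_retirement(records, retired_now, window):
    """Return samples recorded within the last `window` retired instructions."""
    return [r for r in records if retired_now - r["retired"] <= window]

print(traverse_by_cycles(history, now=111, window=5))
print(traverse_by_retirement(history, retired_now=41, window=1))
```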
-
Patent number: 7356674
Abstract: A method of, and apparatus for, interfacing the hardware of a processor capable of processing instructions from more than one type of instruction set. More particularly, an engine responsible for fetching native instructions from a memory subsystem (such as an EM fetch engine) is interfaced with an engine that processes emulated instructions (such as an x86 engine). This is achieved using a handshake protocol, whereby the x86 engine sends an explicit fetch request signal to the EM fetch engine along with a fetch address. The EM fetch engine then accesses the memory subsystem and retrieves a line of instructions for subsequent decode and execution. The EM fetch engine sends this line of instructions to the x86 engine along with an explicit fetch complete signal. The EM fetch engine also includes a fetch address queue capable of holding the fetch addresses before they are processed by the EM fetch engine. The fetch requests are processed such that more than one fetch request may be pending at the same time.
Type: Grant
Filed: November 21, 2003
Date of Patent: April 8, 2008
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Anuj Dua, James E. McCormick, Jr., Stephen R. Undy, Barry J. Arnold, Russell C. Brockmann, David Carl Kubicek, James Curtis Stout
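A software mock-up of the handshake follows: the emulated-instruction engine posts explicit fetch requests with an address, the fetch engine queues them, reads the memory subsystem, and returns each line together with a fetch-complete signal. Class and field names are assumptions; the patent describes hardware engines, not Python objects.

```python
from collections import deque

class FetchEngine:
    """Toy model of the native (EM) fetch engine side of the handshake."""

    def __init__(self, memory):
        self.memory = memory                  # address -> line of instructions
        self.fetch_queue = deque()            # fetch address queue; several requests may be pending

    def fetch_request(self, address):
        """Explicit fetch request signal plus fetch address from the x86 engine."""
        self.fetch_queue.append(address)

    def step(self):
        """Process one queued request and return a fetch-complete response, if any."""
        if not self.fetch_queue:
            return None
        address = self.fetch_queue.popleft()
        line = self.memory[address]           # access the memory subsystem
        return {"fetch_complete": True, "address": address, "line": line}

memory = {0x0: ["mov", "add", "jmp"], 0x10: ["cmp", "jne"]}
engine = FetchEngine(memory)
engine.fetch_request(0x0)
engine.fetch_request(0x10)                    # more than one request pending at once
print(engine.step())
print(engine.step())
```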
-
Publication number: 20040107335
Abstract: A method of, and apparatus for, interfacing the hardware of a processor capable of processing instructions from more than one type of instruction set. More particularly, an engine responsible for fetching native instructions from a memory subsystem (such as an EM fetch engine) is interfaced with an engine that processes emulated instructions (such as an x86 engine). This is achieved using a handshake protocol, whereby the x86 engine sends an explicit fetch request signal to the EM fetch engine along with a fetch address. The EM fetch engine then accesses the memory subsystem and retrieves a line of instructions for subsequent decode and execution. The EM fetch engine sends this line of instructions to the x86 engine along with an explicit fetch complete signal. The EM fetch engine also includes a fetch address queue capable of holding the fetch addresses before they are processed by the EM fetch engine. The fetch requests are processed such that more than one fetch request may be pending at the same time.
Type: Application
Filed: November 21, 2003
Publication date: June 3, 2004
Inventors: Anuj Dua, James E. McCormick, Stephen R. Undy, Barry J. Arnold, Russell C. Brockmann, David Carl Kubicek, James Curtis Stout
-
Publication number: 20040095965
Abstract: Wires that carry bits of an instruction syllable of an instruction bundle are routed to first and second branch execution units. The wires are routed over the first branch execution unit. When the first branch execution unit is configured to calculate a branch target of a long IP-relative branch instruction occupying multiple syllables of an instruction bundle, the wires are coupled to the first branch execution unit. Otherwise, the wires are not coupled to the first branch execution unit.
Type: Application
Filed: October 21, 2003
Publication date: May 20, 2004
Inventors: James E. McCormick, Stephen R. Undy, Donald Charles Soltis
-
Patent number: 6721875
Abstract: Disclosed is a computer architecture with single-syllable IP-relative branch instructions and long IP-relative branch instructions (IP = instruction pointer). The architecture fetches instructions in multi-syllable, bundle form. Single-syllable IP-relative branch instructions occupy a single syllable in an instruction bundle, and long IP-relative branch instructions occupy two syllables in an instruction bundle. The additional syllable of the long branch carries with it additional IP-relative offset bits, which when merged with offset bits carried in a core branch syllable provide a much greater offset than is carried by a single-syllable branch alone. Thus, the long branch provides for greater reach within an address space. Use of the long branch to patch IA-64 architecture instruction bundles is also disclosed. Such a patch provides the reach of an indirect branch with the overhead of a single-syllable IP-relative branch.
Type: Grant
Filed: February 22, 2000
Date of Patent: April 13, 2004
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: James E. McCormick, Jr., Stephen R. Undy, Donald Charles Soltis, Jr.
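The offset-merging step (the extra offset bits carried by the long branch's second syllable are concatenated above the offset bits in the core branch syllable, giving a far larger signed IP-relative displacement) is easy to show numerically. The bit widths below (20 low bits plus 39 high bits) are chosen only for illustration; the abstract does not specify them.

```python
def merge_offset(core_bits, extra_bits, core_width=20, extra_width=39):
    """Concatenate the extra offset bits above the core offset bits and sign-extend."""
    width = core_width + extra_width
    value = (extra_bits << core_width) | (core_bits & ((1 << core_width) - 1))
    if value & (1 << (width - 1)):            # interpret the merged field as signed
        value -= 1 << width
    return value

def branch_target(ip, core_bits, extra_bits):
    # IP-relative: the merged offset is added to the current instruction pointer.
    # (A real implementation would also scale by the bundle size.)
    return ip + merge_offset(core_bits, extra_bits)

# Reach with only the core offset bits vs. reach with the extra syllable's bits merged in:
print(hex(branch_target(0x1000, core_bits=0xFFFFF, extra_bits=0)))
print(hex(branch_target(0x1000, core_bits=0, extra_bits=1 << 10)))
```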
-
Publication number: 20040049667
Abstract: Compiled and linked program code having instructions grouped into bundles, wherein the instructions of each bundle are sequentially ordered, is patched by forming a patch bundle and one or more patch code bundles. This is done by writing a long IP-relative branch instruction into multiple syllables of the patch bundle, with the long IP-relative branch instruction providing a means of branching to patch code. Instructions which are similarly located in a bundle to be patched, and which precede the long IP-relative branch instruction, are copied into syllables of the patch bundle. Other instructions of the bundle to be patched are copied into ones of the one or more patch code bundles. The bundle to be patched is overwritten with the patch bundle, and the one or more patch code bundles are written into the patch code.
Type: Application
Filed: September 4, 2003
Publication date: March 11, 2004
Inventors: James E. McCormick, Stephen R. Undy, Donald Charles Soltis
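The construction described in this abstract can be mocked up with lists standing in for bundles: instructions that precede the branch stay in the patch bundle, the long branch fills the remaining two syllables, and the displaced instructions move into patch-code bundles. The three-syllable bundle shape, tuple encoding, and branch-back step are assumptions made for illustration.

```python
def build_patch(original_bundle, patch_code_addr):
    """original_bundle: three syllables, e.g. [("add",), ("ld",), ("st",)].

    Returns (patch_bundle, patch_code_bundles). The long branch occupies the
    last two syllables of the patch bundle; syllable 0 keeps its original
    instruction, which preceded the branch.
    """
    kept = original_bundle[:1]                         # instructions preceding the long branch
    patch_bundle = kept + [("brl", patch_code_addr), ("brl_extra_offset",)]
    displaced = original_bundle[1:]                    # other instructions of the patched bundle
    # Assumed: the patch code re-executes the displaced instructions, then branches back.
    patch_code_bundles = [displaced + [("branch_back",)]]
    return patch_bundle, patch_code_bundles

bundle = [("add",), ("ld",), ("st",)]
patch_bundle, patch_code = build_patch(bundle, patch_code_addr=0x9000)
print(patch_bundle)
print(patch_code)
```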
-
Patent number: 6678817
Abstract: A method of, and apparatus for, interfacing the hardware of a processor capable of processing instructions from more than one type of instruction set. More particularly, an engine responsible for fetching native instructions from a memory subsystem (such as an EM fetch engine) is interfaced with an engine that processes emulated instructions (such as an x86 engine). This is achieved using a handshake protocol, whereby the x86 engine sends an explicit fetch request signal to the EM fetch engine along with a fetch address. The EM fetch engine then accesses the memory subsystem and retrieves a line of instructions for subsequent decode and execution. The EM fetch engine sends this line of instructions to the x86 engine along with an explicit fetch complete signal. The EM fetch engine also includes a fetch address queue capable of holding the fetch addresses before they are processed by the EM fetch engine. The fetch requests are processed such that more than one fetch request may be pending at the same time.
Type: Grant
Filed: February 22, 2000
Date of Patent: January 13, 2004
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Anuj Dua, James E. McCormick, Jr., Stephen R. Undy, Barry J. Arnold, Russell C. Brockmann, David Carl Kubicek, James Curtis Stout
-
Patent number: 6647487
Abstract: An apparatus and methods for optimizing prefetch performance. Logical ones are shifted into the bits of a shift register from the left for each instruction address prefetched. As instruction addresses are fetched by the processor, logical zeros are shifted into the bit positions of the shift register from the right. Once initiated, prefetching continues until a logical one is stored in the n-th bit of the shift register. Detection of this logical one in the n-th bit causes prefetching to cease until a prefetched instruction address is removed from the prefetched instruction address register and a logical zero is shifted back into the n-th bit of the shift register. Thus, autonomous prefetch agents are prevented from prefetching too far ahead of the current instruction pointer, which would waste memory bandwidth and replace useful instructions in the instruction cache.
Type: Grant
Filed: February 18, 2000
Date of Patent: November 11, 2003
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Stephen R. Undy, James E. McCormick, Jr.
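The shift-register throttle is concrete enough to model directly: a one shifts in from the left for every address prefetched, a zero shifts in from the right for every address the processor actually fetches, and prefetching pauses whenever a one reaches the n-th bit. The list-of-bits representation below is just a convenient stand-in for the hardware register.

```python
class PrefetchThrottle:
    """Toy model of the prefetch-throttling shift register."""

    def __init__(self, n):
        self.n = n
        self.bits = [0] * n                   # bits[0] is the leftmost bit, bits[-1] the n-th bit

    def on_prefetch(self):
        self.bits = [1] + self.bits[:-1]      # shift a logical one in from the left

    def on_fetch(self):
        self.bits = self.bits[1:] + [0]       # shift a logical zero in from the right

    def may_prefetch(self):
        return self.bits[-1] == 0             # pause while a one occupies the n-th bit

throttle = PrefetchThrottle(n=4)
while throttle.may_prefetch():
    throttle.on_prefetch()                    # prefetcher runs ahead until it is n addresses ahead
print("paused:", not throttle.may_prefetch())
throttle.on_fetch()                           # the processor consumes a prefetched address
print("resumed:", throttle.may_prefetch())
```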
-
Patent number: 6629167
Abstract: An apparatus for and a method of decoupling at least two multi-stage pipelines are described. At least two data paths are provided through which data from the first pipeline is sent to the second pipeline. During a pipelined execution of a task in the at least two pipelines, the second pipeline may not require all of the data produced in the first pipeline to process at least some subset of the task, and the first pipeline may not be able to produce all data required by each of the stages of the second pipeline. One of the two data paths provides an early data path for a type of data that becomes available in a stage of the first pipeline and that may be processed in a stage of the second pipeline early in time. The other of the two data paths provides a late data path for a type of data that becomes available in a stage of the first pipeline and that may be processed in a stage of the second pipeline later in time. Each data path may comprise a buffer, e.g., a FIFO.
Type: Grant
Filed: February 18, 2000
Date of Patent: September 30, 2003
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Stephen Undy, James E. McCormick, Jr.
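The early/late data path arrangement can be sketched with two FIFOs between a producing and a consuming pipeline: results needed early go into one queue, and results needed only by later consumer stages go into the other, so neither pipeline has to stall on the other for every value. The queue-based model, method names, and example payloads are assumptions for illustration.

```python
from collections import deque

class DecouplingBuffers:
    """Toy model of the early and late data paths between two pipelines."""

    def __init__(self):
        self.early = deque()       # data an early stage of the second pipeline consumes
        self.late = deque()        # data a later stage of the second pipeline consumes

    # First (producing) pipeline side.
    def produce_early(self, value):
        self.early.append(value)

    def produce_late(self, value):
        self.late.append(value)

    # Second (consuming) pipeline side.
    def consume_early(self):
        return self.early.popleft() if self.early else None   # None models a stall

    def consume_late(self):
        return self.late.popleft() if self.late else None

buffers = DecouplingBuffers()
buffers.produce_early("predecode bits")      # available early in the first pipeline
buffers.produce_late("full instruction")     # available only in a later stage
print(buffers.consume_early(), "/", buffers.consume_late())
```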