Patents by Inventor Phillip M. Jones
Phillip M. Jones has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9804969
Abstract: A method includes receiving an instruction to be executed by a processor. The method further includes performing a lookup in a page crossing buffer that includes one or more entries to determine if the instruction has an entry in the page crossing buffer. Each of the entries includes a physical address. The method further includes, when the page crossing buffer has the entry in the page crossing buffer, retrieving a particular physical address from the entry in the page crossing buffer.
Type: Grant
Filed: December 20, 2012
Date of Patent: October 31, 2017
Assignee: QUALCOMM Incorporated
Inventors: Suresh K. Venkumahanti, Jiajin Tu, Phillip M. Jones
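The lookup flow in this abstract can be modeled as a small address-translation cache. The sketch below is illustrative only; the class and method names (`PageCrossingBuffer`, `lookup`, `insert`) are hypothetical and not taken from the patent:

```python
class PageCrossingBuffer:
    """Illustrative model of a page crossing buffer: each entry maps an
    instruction's virtual address to a cached physical address, so an
    instruction that crosses a page boundary can reuse the stored
    translation instead of performing a second page-table walk."""

    def __init__(self):
        self.entries = {}  # virtual address -> physical address

    def lookup(self, vaddr):
        # Return the cached physical address, or None on a miss.
        return self.entries.get(vaddr)

    def insert(self, vaddr, paddr):
        self.entries[vaddr] = paddr


pcb = PageCrossingBuffer()
pcb.insert(0x1FFC, 0x8000_0FFC)   # instruction straddling a page boundary
hit = pcb.lookup(0x1FFC)          # entry present: retrieve the physical address
miss = pcb.lookup(0x2000)         # no entry: fall back to a full translation
```

On a hit the particular physical address is retrieved directly from the entry, which is the step the claimed method singles out.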
-
Patent number: 9122486
Abstract: Each branch instruction having branch prediction support has branch prediction bits in architecture specified bit positions in the branch instruction. An instruction cache supports modifying the branch instructions with updated branch prediction bits that are dynamically determined when the branch instruction executes.
Type: Grant
Filed: November 8, 2010
Date of Patent: September 1, 2015
Assignee: QUALCOMM Incorporated
Inventors: Suresh K. Venkumahanti, Lucian Codrescu, Stephen R. Shannon, Lin Wang, Phillip M. Jones, Daisy T. Palal, Jiajin Tu
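The mechanism above can be sketched as rewriting a fixed bit of the cached branch encoding after the branch resolves. The bit position used here is an assumption for illustration; the real position is fixed by the architecture, not by this sketch:

```python
# Illustrative only: assume bit 12 of a 32-bit branch encoding holds the
# prediction bit (1 = predicted taken). The actual architecture-specified
# bit position is an assumption in this sketch.
PRED_BIT = 1 << 12

def update_prediction(instr_word, taken):
    """Return the branch instruction word rewritten with the outcome that
    was dynamically determined when the branch executed, as the
    instruction cache would store it."""
    if taken:
        return instr_word | PRED_BIT
    return instr_word & ~PRED_BIT

cached = 0x5C00_0000                      # hypothetical branch encoding, predicted not-taken
cached = update_prediction(cached, taken=True)
taken_now = bool(cached & PRED_BIT)       # instruction cache now predicts taken
```

Because the prediction lives in the instruction word itself, updating it amounts to modifying the cached instruction in place rather than maintaining a separate predictor table.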
-
Publication number: 20140181459
Abstract: A method includes receiving an instruction to be executed by a processor. The method further includes performing a lookup in a page crossing buffer that includes one or more entries to determine if the instruction has an entry in the page crossing buffer. Each of the entries includes a physical address. The method further includes, when the page crossing buffer has the entry in the page crossing buffer, retrieving a particular physical address from the entry in the page crossing buffer.
Type: Application
Filed: December 20, 2012
Publication date: June 26, 2014
Applicant: QUALCOMM Incorporated
Inventors: Suresh K. Venkumahanti, Jiajin Tu, Phillip M. Jones
-
Publication number: 20120117327
Abstract: Each branch instruction having branch prediction support has branch prediction bits in architecture specified bit positions in the branch instruction. An instruction cache supports modifying the branch instructions with updated branch prediction bits that are dynamically determined when the branch instruction executes.
Type: Application
Filed: November 8, 2010
Publication date: May 10, 2012
Applicant: QUALCOMM Incorporated
Inventors: Suresh K. Venkumahanti, Lucian Codrescu, Stephen R. Shannon, Lin Wang, Phillip M. Jones, Daisy T. Palal, Jiajin Tu
-
Patent number: 8078818
Abstract: A system comprises a plurality of nodes coupled together via a switching device. Each node comprises a processor coupled to a memory. Migration logic in the switching device is configured to migrate segments of each memory to the switching device.
Type: Grant
Filed: February 25, 2005
Date of Patent: December 13, 2011
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: William J. Walker, Paras A. Shah, James K. Yu, Kenneth Jansen, Vasileios Balabanos, Andrew D. Olsen, Phillip M. Jones
-
Patent number: 7685411
Abstract: An instruction memory unit comprises a first memory structure operable to store program instructions, and a second memory structure operable to store program instructions fetched from the first memory structure, and to issue stored program instructions for execution. The second memory structure is operable to identify a repeated issuance of a forward program redirect construct, and issue a next program instruction already stored in the second memory structure if a resolution of the forward branching instruction is identical to a last resolution of the same. The second memory structure is further operable to issue a backward program redirect construct, determine whether a target instruction is stored in the second memory structure, issue the target instruction if the target instruction is stored in the second memory structure, and fetch the target instruction from the first memory structure if the target instruction is not stored in the second memory structure.
Type: Grant
Filed: April 11, 2005
Date of Patent: March 23, 2010
Assignee: QUALCOMM Incorporated
Inventors: Muhammad Ahmed, Lucian Codrescu, Erich Plondke, William C. Anderson, Robert Allan Lester, Phillip M. Jones
-
Patent number: 7120758
Abstract: Method and apparatus for improving processor performance. In some embodiments, processing speed may be improved by reusing data stored in a buffer during an initial request by subsequent requests. Assignment of temporary storage buffers in a controller may be made to allow for the potential for reuse of the data. Further, a hot buffer may be designated to allow for reuse of the data stored in the hot buffer. On subsequent requests, data stored in the hot buffer may be sent to a requesting device without re-retrieving the data from memory.
Type: Grant
Filed: February 12, 2003
Date of Patent: October 10, 2006
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Phillip M. Jones, Robert A. Lester, Jens K. Ramsey, William J. Walker, John E. Larson, James Andre, Paul Rawlins
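The hot-buffer idea above can be sketched as a controller that keeps the most recently filled buffer designated "hot" and serves repeat requests from it. This is a minimal model; the names (`HotBufferController`, `read`) are hypothetical:

```python
class HotBufferController:
    """Illustrative sketch: a controller designates the most recently
    filled temporary buffer as 'hot', so a subsequent request for the
    same address is served from the buffer without re-retrieving the
    data from memory."""

    def __init__(self, memory):
        self.memory = memory       # backing store (address -> data)
        self.hot_addr = None
        self.hot_data = None
        self.memory_reads = 0      # counts actual memory accesses

    def read(self, addr):
        if addr == self.hot_addr:
            return self.hot_data             # reuse: no memory access
        self.memory_reads += 1
        self.hot_addr = addr
        self.hot_data = self.memory[addr]    # fill buffer and mark it hot
        return self.hot_data

mem = {0x100: b"cacheline"}
ctrl = HotBufferController(mem)
first = ctrl.read(0x100)    # initial request goes to memory
second = ctrl.read(0x100)   # subsequent request served from the hot buffer
```

A real controller would manage several buffers and an assignment policy; the single-buffer version is just enough to show the reuse saving one memory access.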
-
Patent number: 6961800
Abstract: Methods for improving processor performance. Specifically, by reducing some of the latency cycles within a host controller, request processing speed can be improved. One technique for improving processing speed involves initiating a deferred reply transaction before the data is available from a memory controller. A second technique involves anticipating the need to transition from a block next request (BNR) state to a bus priority request (BPRI) state, thereby eliminating the need to wait for a request check to determine if the BPRI state must be implemented.
Type: Grant
Filed: September 28, 2001
Date of Patent: November 1, 2005
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Robert A. Lester, Kenneth T. Chin, Jim Blocker, John E. Larson, Phillip M. Jones, Paul B. Rawlins
-
Patent number: 6865647
Abstract: A cache-based system is adapted for dynamic cache partitioning. A cache is partitioned into a plurality of cache partitions for a plurality of entities. Each cache partition can be assigned as a private cache for a different entity. If a first cache partition satisfying a first predetermined cache partition condition and a second cache partition satisfying a second predetermined cache partition condition are detected, then the size of the first cache partition is increased by a predetermined segment and the size of the second cache partition is decreased by the predetermined segment. An entity can perform cacheline replacement exclusively in its assigned cache partition, and also be capable of reading any cache partition.
Type: Grant
Filed: December 8, 2003
Date of Patent: March 8, 2005
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Sompong P. Olarig, Phillip M. Jones, John E. Jenne
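The resize step in this abstract, growing one partition by a predetermined segment while shrinking another by the same amount, can be sketched as follows. The selection conditions (e.g. miss-rate thresholds) are policy decisions the abstract leaves open, so they are not modeled here:

```python
def rebalance(partitions, grow, shrink, segment=1):
    """Illustrative sketch of the dynamic-partitioning resize step:
    increase the partition meeting the first condition by a predetermined
    segment and decrease the one meeting the second condition by the
    same segment. Partition sizes are in segments."""
    if partitions[shrink] < segment:
        return dict(partitions)   # nothing left to give up; no change
    out = dict(partitions)
    out[grow] += segment
    out[shrink] -= segment
    return out

parts = {"cpuA": 4, "cpuB": 4}                       # two private partitions
parts = rebalance(parts, grow="cpuA", shrink="cpuB")  # cpuA gains a segment
```

Note that the total cache size is conserved: segments only move between partitions, which is what lets each entity keep a private region for replacement while still reading any partition.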
-
Patent number: 6848015
Abstract: In a computer system including multiple CPUs, each CPU informs other logic in the system of the priority level (e.g., task priority) associated with the CPU or the software executing thereon. The logic makes arbitration decisions regarding CPU transactions based, at least in part, on the task priorities of the various CPUs. The logic that implements this technique may be a host bridge within a computer system having multiple CPUs, or a switch or router that interconnects multiple nodes or computer systems.
Type: Grant
Filed: November 30, 2001
Date of Patent: January 25, 2005
Assignee: Hewlett-Packard Development Company, L.P.
Inventor: Phillip M. Jones
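The arbitration decision described above can be sketched as picking the pending transaction whose CPU reported the highest task priority. The tie-break and the request format here are assumptions for illustration; a real arbiter would also weigh fairness and starvation avoidance:

```python
def arbitrate(requests):
    """Illustrative sketch: choose the pending CPU transaction with the
    highest reported task priority (larger number = higher priority).
    Ties fall to the lower CPU id -- an assumed tie-break, not from the
    patent."""
    return max(requests, key=lambda r: (r["priority"], -r["cpu"]))

reqs = [
    {"cpu": 0, "priority": 3},
    {"cpu": 1, "priority": 7},   # highest task priority wins arbitration
    {"cpu": 2, "priority": 7},   # same priority, but higher CPU id
]
winner = arbitrate(reqs)
```

The point of the scheme is that priority information travels with the CPU's transactions, so a host bridge or switch can rank them without knowing anything about the software itself.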
-
Patent number: 6829665
Abstract: A technique for optimizing cycle time in maintaining cache coherency. Specifically, a method and apparatus are provided to optimize the processing of requests in a multi-processor-bus system which implements a snoop-based coherency scheme. The acts of snooping a bus for a first address and searching a posting queue for the next address to be snooped are performed simultaneously to minimize the request cycle time.
Type: Grant
Filed: September 28, 2001
Date of Patent: December 7, 2004
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Phillip M. Jones, Paul B. Rawlins, Kenneth T. Chin
-
Patent number: 6823409
Abstract: A mechanism for efficiently filtering snoop requests in a multi-processor bus system. Specifically, a snoop filter is provided to filter unnecessary snoops in a multi-bus system.
Type: Grant
Filed: September 28, 2001
Date of Patent: November 23, 2004
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Phillip M. Jones, Paul B. Rawlins
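A common way a snoop filter avoids unnecessary snoops is to track which buses may hold a given cacheline and forward a snoop only to those. The sketch below assumes that presence-tracking approach for illustration; the abstract does not specify the filter's internal organization:

```python
class SnoopFilter:
    """Illustrative sketch of a snoop filter for a multi-bus system:
    track which processor buses may hold a given cacheline, and forward
    a snoop only to those buses, filtering out the rest."""

    def __init__(self, num_buses):
        self.num_buses = num_buses
        self.presence = {}   # address -> set of bus ids that may cache it

    def record_fill(self, bus, addr):
        # A cache on `bus` fetched `addr`; remember it may hold the line.
        self.presence.setdefault(addr, set()).add(bus)

    def buses_to_snoop(self, requesting_bus, addr):
        # Only buses that may hold the line need a snoop, and the
        # requester never snoops itself.
        return sorted(self.presence.get(addr, set()) - {requesting_bus})

sf = SnoopFilter(num_buses=4)
sf.record_fill(bus=2, addr=0x40)
targets = sf.buses_to_snoop(requesting_bus=0, addr=0x40)  # snoop only bus 2
skipped = sf.buses_to_snoop(requesting_bus=0, addr=0x80)  # line unknown: no snoops
```

Filtering matters because every broadcast snoop consumes cycles on all processor buses; suppressing snoops for lines no other bus can hold reclaims that bandwidth.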
-
Publication number: 20040158685
Abstract: Method and apparatus for improving processor performance. In some embodiments, processing speed may be improved by reusing data stored in a buffer during an initial request by subsequent requests. Assignment of temporary storage buffers in a controller may be made to allow for the potential for reuse of the data. Further, a hot buffer may be designated to allow for reuse of the data stored in the hot buffer. On subsequent requests, data stored in the hot buffer may be sent to a requesting device without re-retrieving the data from memory.
Type: Application
Filed: February 12, 2003
Publication date: August 12, 2004
Inventors: Phillip M. Jones, Robert A. Lester, Jens K. Ramsey, William J. Walker, John E. Larson, James Andre, Paul Rawlins
-
Publication number: 20040143707
Abstract: A cache-based system is adapted for dynamic cache partitioning. A cache is partitioned into a plurality of cache partitions for a plurality of entities. Each cache partition can be assigned as a private cache for a different entity. If a first cache partition satisfying a first predetermined cache partition condition and a second cache partition satisfying a second predetermined cache partition condition are detected, then the size of the first cache partition is increased by a predetermined segment and the size of the second cache partition is decreased by the predetermined segment. An entity can perform cacheline replacement exclusively in its assigned cache partition, and also be capable of reading any cache partition.
Type: Application
Filed: December 8, 2003
Publication date: July 22, 2004
Inventors: Sompong P. Olarig, Phillip M. Jones, John E. Jenne
-
Patent number: 6662272
Abstract: A cache-based system is adapted for dynamic cache partitioning. A cache is partitioned into a plurality of cache partitions for a plurality of entities. Each cache partition can be assigned as a private cache for a different entity. If a first cache partition satisfying a first predetermined cache partition condition and a second cache partition satisfying a second predetermined cache partition condition are detected, then the size of the first cache partition is increased by a predetermined segment and the size of the second cache partition is decreased by the predetermined segment. An entity can perform cacheline replacement exclusively in its assigned cache partition, and also be capable of reading any cache partition.
Type: Grant
Filed: September 29, 2001
Date of Patent: December 9, 2003
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Sompong P. Olarig, Phillip M. Jones, John E. Jenne
-
Publication number: 20030105911
Abstract: In a computer system including multiple CPUs, each CPU informs other logic in the system of the priority level (e.g., task priority) associated with the CPU or the software executing thereon. The logic makes arbitration decisions regarding CPU transactions based, at least in part, on the task priorities of the various CPUs. The logic that implements this technique may be a host bridge within a computer system having multiple CPUs, or a switch or router that interconnects multiple nodes or computer systems.
Type: Application
Filed: November 30, 2001
Publication date: June 5, 2003
Inventor: Phillip M. Jones
-
Publication number: 20030070016
Abstract: A mechanism for efficiently filtering snoop requests in a multi-processor bus system. Specifically, a snoop filter is provided to filter unnecessary snoops in a multi-bus system.
Type: Application
Filed: September 28, 2001
Publication date: April 10, 2003
Inventors: Phillip M. Jones, Paul B. Rawlins
-
Publication number: 20030065886
Abstract: A cache-based system is adapted for dynamic cache partitioning. A cache is partitioned into a plurality of cache partitions for a plurality of entities. Each cache partition can be assigned as a private cache for a different entity. If a first cache partition satisfying a first predetermined cache partition condition and a second cache partition satisfying a second predetermined cache partition condition are detected, then the size of the first cache partition is increased by a predetermined segment and the size of the second cache partition is decreased by the predetermined segment. An entity can perform cacheline replacement exclusively in its assigned cache partition, and also be capable of reading any cache partition.
Type: Application
Filed: September 29, 2001
Publication date: April 3, 2003
Inventors: Sompong P. Olarig, Phillip M. Jones, John E. Jenne
-
Publication number: 20030065860
Abstract: An internal bus structure for a multi-processor-bus system. More specifically, an internal bus protocol/structure is described. The internal bus structure includes unidirectional, point-to-point connections between control modules. The individual buses carry unique transactions corresponding to a request. Each transaction includes an identification tag. The present protocol provides for efficient communication between processors, peripheral devices, memory and coherency modules. The present protocol and design scheme is generic in that the techniques are scalable and re-usable.
Type: Application
Filed: September 28, 2001
Publication date: April 3, 2003
Inventors: Robert A. Lester, Kenneth T. Chin, Jim Blocker, John E. Larson, Phillip M. Jones, Paul B. Rawlins
-
Publication number: 20030065844
Abstract: Methods for improving processor performance. Specifically, by reducing some of the latency cycles within a host controller, request processing speed can be improved. One technique for improving processing speed involves initiating a deferred reply transaction before the data is available from a memory controller. A second technique involves anticipating the need to transition from a block next request (BNR) state to a bus priority request (BPRI) state, thereby eliminating the need to wait for a request check to determine if the BPRI state must be implemented.
Type: Application
Filed: September 28, 2001
Publication date: April 3, 2003
Inventors: Robert A. Lester, Kenneth T. Chin, Jim Blocker, John E. Larson, Phillip M. Jones, Paul B. Rawlins