Patents by Inventor Jeffry E. Gonion

Jeffry E. Gonion has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12242396
    Abstract: A memory permissions model for a processor that is based on the memory address accessed by an instruction as well as the program counter of the instruction. These permissions may be stored in permissions tables and indexed using the memory address of the instruction and the addresses of the memory locations that it accesses. Those indexes may be obtained from a page table in some cases. These memory permissions may be used in conjunction with other permissions, such as execute permissions and secondary execution privileges that are based on whether the instruction belongs to a particular instruction group.
    Type: Grant
    Filed: June 28, 2023
    Date of Patent: March 4, 2025
    Assignee: Apple Inc.
    Inventors: Jeffry E. Gonion, Bernard J. Semeria
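
To illustrate the idea in the abstract above, here is a minimal C sketch of a permission check indexed by both the instruction's program counter and the accessed address. The table shape, index widths, and permission bits are invented for the example; the patent describes a hardware mechanism whose indexes may come from page-table entries, not this software model.

    /* Minimal software model of a PC- and address-indexed permission check.
     * All names and table shapes are hypothetical. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    enum { PERM_READ = 1u << 0, PERM_WRITE = 1u << 1 };

    #define N_PC_IDX   4   /* index derived from the instruction's address (PC) */
    #define N_ADDR_IDX 4   /* index derived from the accessed memory address    */

    /* permissions[pc_index][addr_index] -> allowed access bits */
    static const uint8_t permissions[N_PC_IDX][N_ADDR_IDX] = {
        /* e.g. code at pc_index 0 may read/write data at addr_index 0,
           but only read data at addr_index 1 */
        { PERM_READ | PERM_WRITE, PERM_READ,              0,         0 },
        { PERM_READ,              PERM_READ | PERM_WRITE, 0,         0 },
        { 0,                      0,                      PERM_READ, 0 },
        { 0,                      0,                      0,         PERM_READ | PERM_WRITE },
    };

    /* In the abstract, these indexes may be obtained from page-table entries;
     * here they are simply passed in. */
    static bool access_allowed(unsigned pc_index, unsigned addr_index, uint8_t req)
    {
        return (permissions[pc_index][addr_index] & req) == req;
    }

    int main(void)
    {
        printf("load  allowed? %d\n", access_allowed(0, 1, PERM_READ));   /* 1 */
        printf("store allowed? %d\n", access_allowed(0, 1, PERM_WRITE));  /* 0 */
        return 0;
    }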
  • Patent number: 12079142
    Abstract: A permissions model for a processor in which permissions are based on the instruction group of an instruction. These permissions may be stored in permissions tables and indexed using the program counter of the instruction. The permissions may identify which of a plurality of instruction groups of an instruction set architecture (ISA) of a processor are permitted to execute from that program counter value. Accordingly, the instruction group of the instruction can be compared to the permitted instruction groups to determine if the instruction has execution permission. In some cases, the instruction-group-based permissions are secondary execution privileges; additional primary execution permissions that are determined using the program counter may also be used.
    Type: Grant
    Filed: June 28, 2023
    Date of Patent: September 3, 2024
    Assignee: Apple Inc.
    Inventors: Jeffry E. Gonion, Bernard J. Semeria
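
A minimal C sketch of the instruction-group check described above: a table indexed by a program-counter-derived value holds a bitmask of permitted instruction groups, and the executing instruction's group is tested against it. The group names and table layout are assumptions made for illustration, not the ISA's actual grouping.

    /* Illustrative model of instruction-group execution permissions indexed by
     * program counter; groups and table contents are invented for the sketch. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical instruction groups of an ISA. */
    enum {
        GRP_ALU    = 1u << 0,
        GRP_MEMORY = 1u << 1,
        GRP_BRANCH = 1u << 2,
        GRP_SYSTEM = 1u << 3,
    };

    #define N_PC_REGIONS 2

    /* permitted_groups[pc_index] -> groups allowed to execute from that region */
    static const uint8_t permitted_groups[N_PC_REGIONS] = {
        GRP_ALU | GRP_MEMORY | GRP_BRANCH,              /* ordinary code region */
        GRP_ALU | GRP_MEMORY | GRP_BRANCH | GRP_SYSTEM, /* privileged region    */
    };

    static bool may_execute(unsigned pc_index, uint8_t instr_group)
    {
        /* Secondary, group-based check; a primary PC-based execute permission
         * would normally be applied as well. */
        return (permitted_groups[pc_index] & instr_group) != 0;
    }

    int main(void)
    {
        printf("system instr from region 0: %d\n", may_execute(0, GRP_SYSTEM)); /* 0 */
        printf("system instr from region 1: %d\n", may_execute(1, GRP_SYSTEM)); /* 1 */
        return 0;
    }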
  • Patent number: 11995446
    Abstract: Techniques are disclosed relating to protecting branch prediction information. In various embodiments, an integrated circuit includes branch prediction logic having a table that maintains a plurality of entries storing encrypted target address information for branch instructions. The branch prediction logic is configured to receive machine context information for a branch instruction having a target address being predicted by the branch prediction logic, the machine context information including a program counter associated with the branch instruction. The branch prediction logic is configured to use the machine context information to decrypt encrypted target address information stored in one of the plurality of entries identified based on the program counter.
    Type: Grant
    Filed: May 2, 2022
    Date of Patent: May 28, 2024
    Assignee: Apple Inc.
    Inventors: Steven A. Myers, Jeffry E. Gonion, Yannick L. Sierra, Thomas Icart
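
The sketch below models the idea of context-keyed branch-target entries. A simple XOR with a context-derived value stands in for whatever cipher the hardware actually uses, and the table size, index hash, and key derivation are all invented: the point is only that a lookup from a different machine context decrypts to a useless target.

    /* Toy model of context-keyed encryption of branch-target entries. */
    #include <stdint.h>
    #include <stdio.h>

    #define BTB_ENTRIES 256

    static uint64_t btb[BTB_ENTRIES];   /* encrypted target addresses */

    /* Hypothetical key derivation from machine context (here just PC and ASID). */
    static uint64_t context_key(uint64_t pc, uint64_t asid)
    {
        return (pc * 0x9e3779b97f4a7c15ull) ^ (asid << 32);
    }

    static unsigned btb_index(uint64_t pc)
    {
        return (unsigned)(pc >> 2) % BTB_ENTRIES;
    }

    static void btb_update(uint64_t pc, uint64_t asid, uint64_t target)
    {
        btb[btb_index(pc)] = target ^ context_key(pc, asid);   /* store encrypted */
    }

    static uint64_t btb_predict(uint64_t pc, uint64_t asid)
    {
        /* Decrypt with the requester's context; a different context yields a
         * garbled target rather than the victim's real target. */
        return btb[btb_index(pc)] ^ context_key(pc, asid);
    }

    int main(void)
    {
        btb_update(0x1000, /*asid=*/1, 0x4000);
        printf("same context:      %#llx\n", (unsigned long long)btb_predict(0x1000, 1));
        printf("different context: %#llx\n", (unsigned long long)btb_predict(0x1000, 2));
        return 0;
    }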
  • Patent number: 11972140
    Abstract: In an embodiment, a system may support programmable hashing of address bits at a plurality of levels of granularity to map memory addresses to memory controllers and ultimately at least to memory devices. The hashing may be programmed to distribute pages of memory across the memory controllers, and consecutive blocks of the page may be mapped to physically distant memory controllers. In an embodiment, address bits may be dropped from each level of granularity, forming a compacted pipe address to save power within the memory controller. In an embodiment, a memory folding scheme may be employed to reduce the number of active memory devices and/or memory controllers in the system when the full complement of memory is not needed.
    Type: Grant
    Filed: December 20, 2022
    Date of Patent: April 30, 2024
    Assignee: Apple Inc.
    Inventors: Steven Fishwick, Jeffry E. Gonion, Per H. Hammarlund, Eran Tamari, Lior Zimet, Gerard R. Williams, III
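
A small C model of the address-hashing idea: parity over a mask of address bits selects a memory controller, and the consumed bits are dropped to form a compacted address. The masks and bit positions below are made up; in the described system they would be programmable per level of granularity.

    /* Sketch of programmable address-bit hashing and of compacting the address
     * by dropping the consumed bits; all constants are illustrative. */
    #include <stdint.h>
    #include <stdio.h>

    /* Parity (XOR reduction) of the address bits selected by `mask`. */
    static unsigned hash_bit(uint64_t addr, uint64_t mask)
    {
        uint64_t v = addr & mask;
        unsigned p = 0;
        while (v) { p ^= 1u; v &= v - 1; }
        return p;
    }

    /* Drop the bit at position `pos`, compacting the bits above it downward. */
    static uint64_t drop_bit(uint64_t addr, unsigned pos)
    {
        uint64_t low  = addr & ((1ull << pos) - 1);
        uint64_t high = (addr >> (pos + 1)) << pos;
        return high | low;
    }

    int main(void)
    {
        uint64_t addr  = 0x12345678;
        uint64_t mask0 = 0x0000F040;   /* hypothetical mask for select bit 0 */
        uint64_t mask1 = 0x000F0080;   /* hypothetical mask for select bit 1 */

        unsigned controller = (hash_bit(addr, mask1) << 1) | hash_bit(addr, mask0);

        /* Suppose bits 6 and 7 were consumed by the hash; drop them (highest
         * first) to form the compacted "pipe" address sent to that controller. */
        uint64_t compact = drop_bit(drop_bit(addr, 7), 6);

        printf("controller %u, compacted address %#llx\n",
               controller, (unsigned long long)compact);
        return 0;
    }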
  • Publication number: 20230418929
    Abstract: A permissions model for a processor in which permissions are based on the instruction group of an instruction. These permissions may be stored in permissions tables and indexed using the program counter of the instruction. The permissions may identify which of a plurality of instruction groups of an instruction set architecture (ISA) of a processor are permitted to execute from that program counter value. Accordingly, the instruction group of the instruction can be compared to the permitted instruction groups to determine if the instruction has execution permission. In some cases, the instruction-group-based permissions are secondary execution privileges; additional primary execution permissions that are determined using the program counter may also be used.
    Type: Application
    Filed: June 28, 2023
    Publication date: December 28, 2023
    Inventors: Jeffry E. Gonion, Bernard J. Semeria
  • Publication number: 20230418767
    Abstract: A memory permissions model for a processor that is based on the memory address accessed by an instruction as well as the program counter of the instruction. These permissions may be stored in permissions tables and indexed using the memory address of the instruction and the addresses of the memory locations that it accesses. Those indexes may be obtained from a page table in some cases. These memory permissions may be used in conjunction with other permissions, such as execute permissions and secondary execution privileges that are based on whether the instruction belongs to a particular instruction group.
    Type: Application
    Filed: June 28, 2023
    Publication date: December 28, 2023
    Inventors: Jeffry E. Gonion, Bernard J. Semeria
  • Patent number: 11803471
    Abstract: An integrated circuit (IC) including a plurality of processor cores, a plurality of graphics processing units, a plurality of peripheral circuits, and a plurality of memory controllers is configured to support scaling of the system using a unified memory architecture. For example, the IC may include an interconnect fabric configured to provide communication between the one or more memory controller circuits and the processor cores, graphics processing units, and peripheral devices; and an off-chip interconnect coupled to the interconnect fabric and configured to couple the interconnect fabric to a corresponding interconnect fabric on another instance of the integrated circuit, wherein the interconnect fabric and the off-chip interconnect provide an interface that transparently connects the one or more memory controller circuits, the processor cores, graphics processing units, and peripheral devices in either a single instance of the integrated circuit or two or more instances of the integrated circuit.
    Type: Grant
    Filed: August 22, 2022
    Date of Patent: October 31, 2023
    Assignee: Apple Inc.
    Inventors: Per H. Hammarlund, Lior Zimet, Sergio Kolor, Sagi Lahav, James Vash, Gaurav Garg, Tal Kuzi, Jeffry E. Gonion, Charles E. Tucker, Lital Levy-Rubin, Dany Davidov, Steven Fishwick, Nir Leshem, Mark Pilip, Gerard R. Williams, III, Harshavardhan Kaushikkar, Srinivasa Rangan Sridharan
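
As a rough software analogy for the scaling behavior described above, the sketch below routes a physical address to a memory controller with the same interleave function whether one or two die instances (and hence controller counts) are present. The granule size and controller counts are arbitrary assumptions; in the patent the transparency is provided by the interconnect fabric and off-chip interconnect, not by software.

    #include <stdint.h>
    #include <stdio.h>

    /* Route a physical address to a memory controller by interleaving at a
     * hypothetical 256-byte granule; only the controller count changes when a
     * second die instance is attached. */
    static unsigned route(uint64_t addr, unsigned num_controllers)
    {
        return (unsigned)((addr >> 8) % num_controllers);
    }

    int main(void)
    {
        uint64_t addr = 0x0001234500ull;
        printf("one die  (8 MCs):  controller %u\n", route(addr, 8));
        printf("two dies (16 MCs): controller %u\n", route(addr, 16));
        return 0;
    }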
  • Publication number: 20230125798
    Abstract: In an embodiment, a system may support programmable hashing of address bits at a plurality of levels of granularity to map memory addresses to memory controllers and ultimately at least to memory devices. The hashing may be programmed to distribute pages of memory across the memory controllers, and consecutive blocks of the page may be mapped to physically distant memory controllers. In an embodiment, address bits may be dropped from each level of granularity, forming a compacted pipe address to save power within the memory controller. In an embodiment, a memory folding scheme may be employed to reduce the number of active memory devices and/or memory controllers in the system when the full complement of memory is not needed.
    Type: Application
    Filed: December 20, 2022
    Publication date: April 27, 2023
    Inventors: Steven Fishwick, Jeffry E. Gonion, Per H. Hammarlund, Eran Tamari, Lior Zimet, Gerard R. Williams, III
  • Publication number: 20230058989
    Abstract: An integrated circuit (IC) including a plurality of processor cores, a plurality of graphics processing units, a plurality of peripheral circuits, and a plurality of memory controllers is configured to support scaling of the system using a unified memory architecture. For example, the IC may include an interconnect fabric configured to provide communication between the one or more memory controller circuits and the processor cores, graphics processing units, and peripheral devices; and an off-chip interconnect coupled to the interconnect fabric and configured to couple the interconnect fabric to a corresponding interconnect fabric on another instance of the integrated circuit, wherein the interconnect fabric and the off-chip interconnect provide an interface that transparently connects the one or more memory controller circuits, the processor cores, graphics processing units, and peripheral devices in either a single instance of the integrated circuit or two or more instances of the integrated circuit.
    Type: Application
    Filed: August 22, 2022
    Publication date: February 23, 2023
    Inventors: Per H. Hammarlund, Lior Zimet, Sergio Kolor, Sagi Lahav, James Vash, Gaurav Garg, Tal Kuzi, Jeffry E. Gonion, Charles E. Tucker, Lital Levy-Rubin, Dany Davidov, Steven Fishwick, Nir Leshem, Mark Pilip, Gerard R. Williams, III, Harshavardhan Kaushikkar, Srinivasa Rangan Sridharan
  • Patent number: 11567861
    Abstract: In an embodiment, a system may support programmable hashing of address bits at a plurality of levels of granularity to map memory addresses to memory controllers and ultimately at least to memory devices. The hashing may be programmed to distribute pages of memory across the memory controllers, and consecutive blocks of the page may be mapped to physically distant memory controllers. In an embodiment, address bits may be dropped from each level of granularity, forming a compacted pipe address to save power within the memory controller. In an embodiment, a memory folding scheme may be employed to reduce the number of active memory devices and/or memory controllers in the system when the full complement of memory is not needed.
    Type: Grant
    Filed: November 4, 2021
    Date of Patent: January 31, 2023
    Assignee: Apple Inc.
    Inventors: Steven Fishwick, Jeffry E. Gonion, Per H. Hammarlund, Eran Tamari, Lior Zimet, Gerard R. Williams, III
  • Publication number: 20230010948
    Abstract: A system and method for efficiently protecting branch prediction information. In various embodiments, a computing system includes at least one processor with a branch predictor storing branch target addresses and security tags in a table. The security tag includes one or more components of machine context. When the branch predictor receives a portion of a first program counter of a first branch instruction, and hits on a first table entry during an access, the branch predictor reads out a first security tag. The branch predictor compares one or more components of machine context of the first security tag to one or more components of machine context of the first branch instruction. When there is at least one mismatch, the branch prediction information of the first table entry is not used. Additionally, there is no updating of any branch prediction training information of the first table entry.
    Type: Application
    Filed: September 16, 2022
    Publication date: January 12, 2023
    Inventors: Jeffry E. Gonion, Ian D. Kountanis, Conrado Blasco, Steven Andrew Myers, Yannick L. Sierra
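
A minimal C sketch of the security-tag comparison described above: each predictor entry carries components of machine context, and a prediction is returned (and training allowed) only when every component matches the requesting branch. The particular fields used as the tag here are assumptions for illustration.

    /* Sketch of a security-tagged branch target entry; field choices invented. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        bool     valid;
        uint16_t asid;        /* example machine-context components */
        uint8_t  priv_level;
        uint64_t target;      /* predicted branch target */
    } btb_entry_t;

    static btb_entry_t entry = { true, 7, 1, 0x5000 };

    /* Returns true and writes *target only when every tag component matches. */
    static bool predict(uint16_t asid, uint8_t priv, uint64_t *target)
    {
        if (!entry.valid || entry.asid != asid || entry.priv_level != priv)
            return false;     /* mismatch: no prediction, no training update */
        *target = entry.target;
        return true;
    }

    int main(void)
    {
        uint64_t t;
        printf("matching context:   %d\n", predict(7, 1, &t));  /* 1 */
        printf("mismatched context: %d\n", predict(9, 1, &t));  /* 0 */
        return 0;
    }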
  • Publication number: 20220342806
    Abstract: In an embodiment, a system may support programmable hashing of address bits at a plurality of levels of granularity to map memory addresses to memory controllers and ultimately at least to memory devices. The hashing may be programmed to distribute pages of memory across the memory controllers, and consecutive blocks of the page may be mapped to physically distant memory controllers. In an embodiment, address bits may be dropped from each level of granularity, forming a compacted pipe address to save power within the memory controller. In an embodiment, a memory folding scheme may be employed to reduce the number of active memory devices and/or memory controllers in the system when the full complement of memory is not needed.
    Type: Application
    Filed: November 4, 2021
    Publication date: October 27, 2022
    Inventors: Steven Fishwick, Jeffry E. Gonion, Per H. Hammarlund, Eran Tamari, Lior Zimet, Gerard R. Williams, III
  • Publication number: 20220326957
    Abstract: Techniques are disclosed relating to protecting branch prediction information. In various embodiments, an integrated circuit includes branch prediction logic having a table that maintains a plurality of entries storing encrypted target address information for branch instructions. The branch prediction logic is configured to receive machine context information for a branch instruction having a target address being predicted by the branch prediction logic, the machine context information including a program counter associated with the branch instruction. The branch prediction logic is configured to use the machine context information to decrypt encrypted target address information stored in one of the plurality of entries identified based on the program counter.
    Type: Application
    Filed: May 2, 2022
    Publication date: October 13, 2022
    Inventors: Steven A. Myers, Jeffry E. Gonion, Yannick L. Sierra, Thomas Icart
  • Patent number: 11449343
    Abstract: A system and method for efficiently protecting branch prediction information. In various embodiments, a computing system includes at least one processor with a branch predictor storing branch target addresses and security tags in a table. The security tag includes one or more components of machine context. When the branch predictor receives a portion of a first program counter of a first branch instruction, and hits on a first table entry during an access, the branch predictor reads out a first security tag. The branch predictor compares one or more components of machine context of the first security tag to one or more components of machine context of the first branch instruction. When there is at least one mismatch, the branch prediction information of the first table entry is not used. Additionally, there is no updating of any branch prediction training information of the first table entry.
    Type: Grant
    Filed: December 14, 2018
    Date of Patent: September 20, 2022
    Assignee: Apple Inc.
    Inventors: Jeffry E. Gonion, Ian D. Kountanis, Conrado Blasco, Steven Andrew Myers, Yannick L. Sierra
  • Patent number: 11321095
    Abstract: Techniques are disclosed relating to protecting branch prediction information. In various embodiments, an integrated circuit includes branch prediction logic having a table that maintains a plurality of entries storing encrypted target address information for branch instructions. The branch prediction logic is configured to receive machine context information for a branch instruction having a target address being predicted by the branch prediction logic, the machine context information including a program counter associated with the branch instruction. The branch prediction logic is configured to use the machine context information to decrypt encrypted target address information stored in one of the plurality of entries identified based on the program counter.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: May 3, 2022
    Assignee: Apple Inc.
    Inventors: Steven A. Myers, Jeffry E. Gonion, Yannick L. Sierra, Thomas Icart
  • Patent number: 11221962
    Abstract: A system and method for efficiently transferring address mappings and data access permissions corresponding to the address mappings. A computing system includes at least one processor and memory for storing a page table. In response to receiving a memory access operation comprising a first address, the address translation unit is configured to identify a data access permission based on a permission index corresponding to the first address, and access data stored in a memory location of the memory identified by a second address in a manner defined by the retrieved data access permission. The address translation unit is configured to access a table to identify the data access permission, and is configured to determine the permission index and the second address based on the first address. A single permission index may correspond to different permissions for different entities within the system.
    Type: Grant
    Filed: May 15, 2020
    Date of Patent: January 11, 2022
    Assignee: Apple Inc.
    Inventors: Jeffry E. Gonion, Bernard Joseph Semeria, Michael J. Swift, Pradeep Kanapathipillai, David J. Williamson
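
The following C sketch models the permission-index idea: a translation entry carries a small index rather than full permission bits, and a separate permission table resolves that index, potentially to different rights for different requesting entities. Table sizes, entities, and encodings are invented for the example.

    /* Illustrative model of translation entries carrying a permission index. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    enum { PERM_R = 1, PERM_W = 2 };

    typedef struct {
        uint64_t virt_page, phys_page;
        uint8_t  perm_index;           /* index into a permission table */
    } tlb_entry_t;

    #define N_PERM_IDX 4
    #define N_ENTITIES 2

    /* The same permission index can resolve to different rights per entity. */
    static const uint8_t perm_table[N_ENTITIES][N_PERM_IDX] = {
        { PERM_R | PERM_W, PERM_R, 0,      PERM_R },  /* entity 0 (e.g. CPU)        */
        { PERM_R,          0,      PERM_R, 0      },  /* entity 1 (e.g. a DMA agent) */
    };

    static bool check_access(const tlb_entry_t *e, unsigned entity,
                             uint8_t req, uint64_t *phys)
    {
        if ((perm_table[entity][e->perm_index] & req) != req)
            return false;
        *phys = e->phys_page;   /* translation and permission resolved together */
        return true;
    }

    int main(void)
    {
        tlb_entry_t e = { 0x10, 0x80, 1 };
        uint64_t p;
        printf("CPU read: %d\n", check_access(&e, 0, PERM_R, &p)); /* 1 */
        printf("DMA read: %d\n", check_access(&e, 1, PERM_R, &p)); /* 0 */
        return 0;
    }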
  • Patent number: 11042373
    Abstract: In an embodiment, a computation engine is configured to perform vector multiplications, producing either vector results or outer product (matrix) results. The instructions provided to the computation engine specify a matrix mode or a vector mode for the instructions. The computation engine performs the specified operation. The computation engine may perform numerous computations in parallel, in an embodiment. In an embodiment, the instructions may also specify an offset within the input memories, providing additional flexibility in the location of operands. More particularly, the computation engine may be configured to perform numerous multiplication operations in parallel and to accumulate results in a result memory, performing multiply-accumulate operations for each matrix/vector element in the targeted locations of the output memory.
    Type: Grant
    Filed: July 14, 2020
    Date of Patent: June 22, 2021
    Assignee: Apple Inc.
    Inventors: Eric Bainville, Jeffry E. Gonion, Ali Sazegari, Gerard R. Williams, III
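
A short C sketch of the two modes named in the abstract above: vector mode accumulates element-wise products, while matrix mode accumulates an outer product into a result array. Operand sizes and integer element types are arbitrary choices for the illustration.

    #include <stdio.h>

    #define N 4

    /* Vector mode: element-wise multiply-accumulate. */
    static void vec_mac(const int x[N], const int y[N], int acc[N])
    {
        for (int i = 0; i < N; i++)
            acc[i] += x[i] * y[i];
    }

    /* Matrix mode: outer-product multiply-accumulate into a result array. */
    static void outer_mac(const int x[N], const int y[N], int acc[N][N])
    {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                acc[i][j] += x[i] * y[j];
    }

    int main(void)
    {
        int x[N] = {1, 2, 3, 4}, y[N] = {5, 6, 7, 8};
        int v[N] = {0}, m[N][N] = {{0}};
        vec_mac(x, y, v);
        outer_mac(x, y, m);
        printf("vector acc[0]=%d, matrix acc[1][2]=%d\n", v[0], m[1][2]);
        return 0;
    }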
  • Patent number: 10990401
    Abstract: In an embodiment, a computation engine may perform dot product computations on input vectors. The dot product operation may have a first operand and a second operand, and the dot product may be performed on a subset of the vector elements in the first operand and each of the vector elements in the second operand. The subset of vector elements may be separated in the first operand by a stride that skips one or more elements between each element to which the dot product operation is applied. More particularly, in an embodiment, the input operands of the dot product operation may be a first vector having second vectors as elements, and the stride may select a specified element of each second vector.
    Type: Grant
    Filed: April 1, 2020
    Date of Patent: April 27, 2021
    Assignee: Apple Inc.
    Inventors: Tal Uliel, Eric Bainville, Jeffry E. Gonion, Ali Sazegari
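
A minimal C sketch of the strided dot product described above: the first operand is treated as a vector of inner vectors, and an element offset selects one lane of each inner vector to multiply against the second operand. The shapes and the offset parameter are illustrative assumptions.

    #include <stdio.h>

    #define INNER 4    /* elements per inner vector of the first operand          */
    #define COUNT 3    /* number of inner vectors == length of the second operand */

    static int strided_dot(const int a[COUNT * INNER], const int b[COUNT],
                           int offset /* which element of each inner vector */)
    {
        int acc = 0;
        for (int i = 0; i < COUNT; i++)
            acc += a[i * INNER + offset] * b[i];   /* stride of INNER skips the rest */
        return acc;
    }

    int main(void)
    {
        int a[COUNT * INNER] = { 1, 2, 3, 4,   5, 6, 7, 8,   9, 10, 11, 12 };
        int b[COUNT]         = { 1, 1, 1 };
        printf("dot over element 2 of each inner vector: %d\n",
               strided_dot(a, b, 2));   /* 3 + 7 + 11 = 21 */
        return 0;
    }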
  • Patent number: 10970078
    Abstract: In an embodiment, a computation engine may perform computations on input vectors having vector elements of a first precision and data type. The computation engine may convert the vector elements from the first precision to a second precision and may also interleave the vector elements as specified by an instruction issued by the processor to the computation engine. The interleave may be based on a ratio of a result precision and the second precision. An extract instruction may be supported to extract results from the computations and convert and deinterleave the vector elements to provide a compact result in a desired order.
    Type: Grant
    Filed: April 5, 2018
    Date of Patent: April 6, 2021
    Assignee: Apple Inc.
    Inventors: Eric Bainville, Tal Uliel, Jeffry E. Gonion, Ali Sazegari, Erik K. Norden
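
The sketch below models the convert-and-interleave step: 8-bit inputs are widened to 16 bits and interleaved by the ratio of the result precision to the converted precision (here 32/16 = 2), and an extract step deinterleaves back to compact order. The exact lane mapping is an assumption made for the example.

    #include <stdint.h>
    #include <stdio.h>

    #define N 8

    /* Widen 8-bit inputs to 16 bits and interleave by `ratio`. */
    static void convert_interleave(const int8_t in[N], int16_t out[N], int ratio)
    {
        /* Element i goes to lane (i % ratio) * (N / ratio) + i / ratio. */
        for (int i = 0; i < N; i++)
            out[(i % ratio) * (N / ratio) + i / ratio] = (int16_t)in[i];
    }

    /* Extract step: deinterleave and narrow back to the compact original order. */
    static void deinterleave_extract(const int16_t in[N], int8_t out[N], int ratio)
    {
        for (int i = 0; i < N; i++)
            out[i] = (int8_t)in[(i % ratio) * (N / ratio) + i / ratio];
    }

    int main(void)
    {
        int8_t  src[N] = {0, 1, 2, 3, 4, 5, 6, 7}, back[N];
        int16_t wide[N];
        convert_interleave(src, wide, 2);
        deinterleave_extract(wide, back, 2);
        printf("wide[1]=%d (was src[2]); round trip back[5]=%d\n", wide[1], back[5]);
        return 0;
    }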
  • Publication number: 20210064539
    Abstract: A system and method for efficiently transferring address mappings and data access permissions corresponding to the address mappings. A computing system includes at least one processor and memory for storing a page table. In response to receiving a memory access operation comprising a first address, the address translation unit is configured to identify a data access permission based on a permission index corresponding to the first address, and access data stored in a memory location of the memory identified by a second address in a manner defined by the retrieved data access permission. The address translation unit is configured to access a table to identify the data access permission, and is configured to determine the permission index and the second address based on the first address. A single permission index may correspond to different permissions for different entities within the system.
    Type: Application
    Filed: May 15, 2020
    Publication date: March 4, 2021
    Inventors: Jeffry E. Gonion, Bernard Joseph Semeria, Michael J. Swift, Pradeep Kanapathipillai, David J. Williamson