Patents by Inventor James Vash

James Vash has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12277074
    Abstract: Techniques are disclosed pertaining to utilizing a communication fabric via multiple ports. An agent circuit includes a plurality of command-and-data ports that couple the agent circuit to a communication fabric, which in turn is coupled to a plurality of hardware components that include a plurality of memory controller circuits facilitating access to a memory. The agent circuit can execute an instruction that involves issuing a command for data stored at the memory. The agent circuit may perform a hash operation on a memory address associated with the command to determine which one of the plurality of memory controller circuits to issue the command to. The agent circuit issues the command to the determined memory controller circuit on a particular one of the plurality of command-and-data ports that is designated for that memory controller circuit. The agent circuit may issue all commands destined for that memory controller circuit on that port.
    Type: Grant
    Filed: September 25, 2023
    Date of Patent: April 15, 2025
    Assignee: Apple Inc.
    Inventors: Sergio Kolor, Sandeep Gupta, James Vash
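A minimal, hypothetical sketch of the port-selection idea in the abstract above: the agent hashes a command's memory address to pick a memory controller, then issues the command on the command-and-data port designated for that controller. The controller count, hash function, and all names are illustrative assumptions, not the patented implementation.

```cpp
#include <cstdint>
#include <cstdio>

constexpr int kNumMemControllers = 4;  // assumed number of memory controller circuits

// Toy address hash; a real fabric would fold in more address bits.
int hash_to_controller(uint64_t addr) {
    uint64_t h = addr ^ (addr >> 12) ^ (addr >> 24);
    return static_cast<int>(h % kNumMemControllers);
}

struct AgentCircuit {
    // One command-and-data port designated per memory controller,
    // modeled here as a simple port index.
    int port_for_controller[kNumMemControllers] = {0, 1, 2, 3};

    void issue_command(uint64_t addr) {
        int mc   = hash_to_controller(addr);   // pick the target controller
        int port = port_for_controller[mc];    // its designated port
        // All commands destined for controller `mc` leave on `port`.
        std::printf("addr 0x%llx -> controller %d via port %d\n",
                    static_cast<unsigned long long>(addr), mc, port);
    }
};

int main() {
    AgentCircuit agent;
    agent.issue_command(0x1000);
    agent.issue_command(0x2040);
}
```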
  • Publication number: 20250103492
    Abstract: Techniques are disclosed relating to performing remote cache invalidations. In some embodiments, primary processor circuitry is configured to, based on execution of a remote invalidate instruction (e.g., an ISA-defined instruction), send a cache invalidate command to coprocessor circuitry. The coprocessor circuitry includes coprocessor cache circuitry and cache invalidation control circuitry configured to, in response to the cache invalidate command sent by the primary processor, invalidate one or more cache lines in the coprocessor cache circuitry without executing any instructions on the coprocessor circuitry.
    Type: Application
    Filed: February 20, 2024
    Publication date: March 27, 2025
    Inventors: Brett S. Feero, Dennis R. Bradford, Gaurav Garg, Jeff Gonion, Bernard J. Semeria, James Vash, Richard F. Russo
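A rough model, under assumptions, of the remote-invalidate flow described above: the primary processor "executes" a remote invalidate and sends a command to the coprocessor, whose invalidation control logic clears matching cache lines directly, without any instruction executing on the coprocessor. All types and names are illustrative.

```cpp
#include <array>
#include <cstdint>

struct CacheLine { uint64_t tag = 0; bool valid = false; };

struct CoprocessorCache {
    std::array<CacheLine, 64> lines{};

    // Cache-invalidation control circuitry: driven by a command from the
    // primary processor, not by instructions running on the coprocessor.
    void handle_invalidate_command(uint64_t tag) {
        for (auto& line : lines)
            if (line.valid && line.tag == tag) line.valid = false;
    }
};

struct PrimaryProcessor {
    CoprocessorCache* coproc;

    // Stand-in for executing an ISA-defined remote-invalidate instruction.
    void remote_invalidate(uint64_t tag) {
        coproc->handle_invalidate_command(tag);  // send the command to the coprocessor
    }
};
```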
  • Patent number: 12253913
    Abstract: Techniques are disclosed relating to memory error tracking and logging. In some embodiments, memory cache controller circuitry is configured to track, using multiple circuit entries, the numbers of detected correctable errors associated with multiple respective locations and, in response to detecting a threshold number of correctable errors for a particular location, to generate a signal to one or more processors that identifies the particular location. In some embodiments, the memory cache controller circuitry includes multiple circuit entries for tracking uncorrectable errors.
    Type: Grant
    Filed: February 12, 2024
    Date of Patent: March 18, 2025
    Assignee: Apple Inc.
    Inventors: Farid Nemati, Steven R. Hutsell, Derek R. Kumar, Bernard J. Semeria, James Vash, Era K. Nangia, Gregory S. Mathews
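An illustrative sketch of the tracking scheme named in the abstract above: per-location counters for correctable errors, a signal raised with the offending location once a threshold is crossed, and separate entries for uncorrectable errors. The threshold value, entry structure, and notification hook are assumptions for the example only.

```cpp
#include <cstdint>
#include <functional>
#include <unordered_map>
#include <utility>

class ErrorTracker {
public:
    ErrorTracker(unsigned threshold, std::function<void(uint64_t)> notify)
        : threshold_(threshold), notify_(std::move(notify)) {}

    // Called when ECC logic detects a correctable error at `location`.
    void record_correctable(uint64_t location) {
        if (++correctable_[location] == threshold_)
            notify_(location);  // signal the processor, identifying the location
    }

    // Uncorrectable errors are tracked in separate entries.
    void record_uncorrectable(uint64_t location) {
        ++uncorrectable_[location];
    }

private:
    unsigned threshold_;
    std::function<void(uint64_t)> notify_;
    std::unordered_map<uint64_t, unsigned> correctable_;
    std::unordered_map<uint64_t, unsigned> uncorrectable_;
};
```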
  • Publication number: 20240427663
    Abstract: Techniques are disclosed relating to memory error tracking and logging. In some embodiments, memory cache controller circuitry is configured to track, using multiple circuit entries, the numbers of detected correctable errors associated with multiple respective locations and, in response to detecting a threshold number of correctable errors for a particular location, to generate a signal to one or more processors that identifies the particular location. In some embodiments, the memory cache controller circuitry includes multiple circuit entries for tracking uncorrectable errors.
    Type: Application
    Filed: February 12, 2024
    Publication date: December 26, 2024
    Inventors: Farid Nemati, Steven R. Hutsell, Derek R. Kumar, Bernard J. Semeria, James Vash, Era K. Nangia, Gregory S. Mathews
  • Publication number: 20240411695
    Abstract: A system including a plurality of processor cores, a plurality of graphics processing units, a plurality of peripheral circuits, and a plurality of memory controllers is configured to support scaling of the system using a unified memory architecture.
    Type: Application
    Filed: June 10, 2024
    Publication date: December 12, 2024
    Inventors: Per H. Hammarlund, Eran Tamari, Lior Zimet, Sergio Kolor, Sergio Tota, Sagi Lahav, James Vash, Gaurav Garg, Jonathan M. Redshaw, Steven R. Hutsell, Harshavardhan Kaushikkar, Shawn M. Fukami
  • Publication number: 20240370371
    Abstract: A system including a plurality of processor cores, a plurality of graphics processing units, a plurality of peripheral circuits, and a plurality of memory controllers is configured to support scaling of the system using a unified memory architecture.
    Type: Application
    Filed: March 15, 2024
    Publication date: November 7, 2024
    Inventors: Per H. Hammarlund, Lior Zimet, James Vash, Gaurav Garg, Sergio Kolor, Harshavardhan Kaushikkar, Ramesh B. Gunna, Steven R. Hutsell
  • Publication number: 20240273024
    Abstract: A scalable cache coherency protocol for a system including a plurality of coherent agents coupled to one or more memory controllers is described. The memory controller may implement a precise directory for cache blocks from the memory to which the memory controller is coupled. Multiple requests to a cache block may be outstanding, and snoops and completions for requests may include an expected cache state at the receiving agent, as indicated by a directory in the memory controller when the request was processed, to allow the receiving agent to detect race conditions. In an embodiment, the cache states may include a primary shared state and a secondary shared state. The primary shared state may apply to a coherent agent that bears responsibility for transmitting a copy of the cache block to a requesting agent. In an embodiment, at least two types of snoops may be supported: snoop forward and snoop back.
    Type: Application
    Filed: February 20, 2024
    Publication date: August 15, 2024
    Inventors: James Vash, Gaurav Garg, Brian P. Lilly, Ramesh B. Gunna, Steven R. Hutsell, Lital Levy-Rubin, Per H. Hammarlund, Harshavardhan Kaushikkar
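A rough sketch, under assumptions, of the coherency mechanisms named in the abstract above: snoops that carry the expected cache state at the receiving agent (so the agent can detect a race with an in-flight request), primary/secondary shared states, and forward versus back snoop types. The state machine and data structures are simplified illustrations, not the protocol as specified in the patent.

```cpp
#include <cstdint>

enum class CacheState { Invalid, SharedPrimary, SharedSecondary, Exclusive, Modified };
enum class SnoopKind  { Forward, Back };  // forward data to requester vs. return it to memory

struct Snoop {
    uint64_t   block_addr;
    SnoopKind  kind;
    CacheState expected_state;  // directory's view when the request was processed
};

struct SnoopResult { bool race_detected; bool forward_data; };

struct CoherentAgent {
    CacheState local_state = CacheState::Invalid;

    SnoopResult process_snoop(const Snoop& s) {
        // If the local state no longer matches what the directory expected,
        // another transaction raced ahead; handle it as a race rather than
        // acting on stale information.
        if (local_state != s.expected_state)
            return {true, false};
        // A snoop-forward to the primary-shared holder asks it to send the
        // data copy directly to the requesting agent.
        bool forward = (s.kind == SnoopKind::Forward &&
                        local_state == CacheState::SharedPrimary);
        local_state = CacheState::SharedSecondary;  // example downgrade transition
        return {false, forward};
    }
};
```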
  • Patent number: 12007895
    Abstract: A system including a plurality of processor cores, a plurality of graphics processing units, a plurality of peripheral circuits, and a plurality of memory controllers is configured to support scaling of the system using a unified memory architecture.
    Type: Grant
    Filed: August 22, 2022
    Date of Patent: June 11, 2024
    Assignee: Apple Inc.
    Inventors: Per H. Hammarlund, Eran Tamari, Lior Zimet, Sergio Kolor, Sergio V. Tota, Sagi Lahav, James Vash, Gaurav Garg, Jonathan M. Redshaw, Steven R. Hutsell, Harshavardhan Kaushikkar, Shawn M. Fukami
  • Patent number: 11947457
    Abstract: A scalable cache coherency protocol for a system including a plurality of coherent agents coupled to one or more memory controllers is described. The memory controller may implement a precise directory for cache blocks from the memory to which the memory controller is coupled. Multiple requests to a cache block may be outstanding, and snoops and completions for requests may include an expected cache state at the receiving agent, as indicated by a directory in the memory controller when the request was processed, to allow the receiving agent to detect race conditions. In an embodiment, the cache states may include a primary shared state and a secondary shared state. The primary shared state may apply to a coherent agent that bears responsibility for transmitting a copy of the cache block to a requesting agent. In an embodiment, at least two types of snoops may be supported: snoop forward and snoop back.
    Type: Grant
    Filed: November 22, 2022
    Date of Patent: April 2, 2024
    Assignee: Apple Inc.
    Inventors: James Vash, Gaurav Garg, Brian P. Lilly, Ramesh B. Gunna, Steven R. Hutsell, Lital Levy-Rubin, Per H. Hammarlund, Harshavardhan Kaushikkar
  • Patent number: 11934265
    Abstract: Techniques are disclosed relating to memory error tracking and logging. In some embodiments, memory cache controller circuitry is configured to track, using multiple circuit entries, the numbers of detected correctable errors associated with multiple respective locations and, in response to detecting a threshold number of correctable errors for a particular location, to generate a signal to one or more processors that identifies the particular location. In some embodiments, the memory cache controller circuitry includes multiple circuit entries for tracking uncorrectable errors.
    Type: Grant
    Filed: June 1, 2022
    Date of Patent: March 19, 2024
    Assignee: Apple Inc.
    Inventors: Farid Nemati, Steven R. Hutsell, Derek R. Kumar, Bernard J. Semeria, James Vash, Era K. Nangia, Gregory S. Mathews
  • Patent number: 11934313
    Abstract: A system including a plurality of processor cores, a plurality of graphics processing units, a plurality of peripheral circuits, and a plurality of memory controllers is configured to support scaling of the system using a unified memory architecture.
    Type: Grant
    Filed: August 22, 2022
    Date of Patent: March 19, 2024
    Assignee: Apple Inc.
    Inventors: Per H. Hammarlund, Lior Zimet, James Vash, Gaurav Garg, Sergio Kolor, Harshavardhan Kaushikkar, Ramesh B. Gunna, Steven R. Hutsell
  • Patent number: 11868258
    Abstract: A scalable cache coherency protocol for a system including a plurality of coherent agents coupled to one or more memory controllers is described. The memory controller may implement a precise directory for cache blocks from the memory to which the memory controller is coupled. Multiple requests to a cache block may be outstanding, and snoops and completions for requests may include an expected cache state at the receiving agent, as indicated by a directory in the memory controller when the request was processed, to allow the receiving agent to detect race conditions. In an embodiment, the cache states may include a primary shared state and a secondary shared state. The primary shared state may apply to a coherent agent that bears responsibility for transmitting a copy of the cache block to a requesting agent. In an embodiment, at least two types of snoops may be supported: snoop forward and snoop back.
    Type: Grant
    Filed: January 27, 2023
    Date of Patent: January 9, 2024
    Assignee: Apple Inc.
    Inventors: James Vash, Gaurav Garg, Brian P. Lilly, Ramesh B. Gunna, Steven R. Hutsell, Lital Levy-Rubin, Per H. Hammarlund, Harshavardhan Kaushikkar
  • Publication number: 20230350828
    Abstract: In an embodiment, a system on a chip (SOC) comprises a semiconductor die on which circuitry is formed, wherein the circuitry comprises a plurality of agents and a plurality of network switches coupled to the plurality of agents. The plurality of network switches are interconnected to form a plurality of physically and logically independent networks. A first network of the plurality of physically and logically independent networks is constructed according to a first topology and a second network of the plurality of physically and logically independent networks is constructed according to a second topology that is different from the first topology. For example, the first topology may be a ring topology and the second topology may be a mesh topology. In an embodiment, coherency may be enforced on the first network and the second network may be a relaxed-order network.
    Type: Application
    Filed: April 28, 2023
    Publication date: November 2, 2023
    Inventors: Sergio Kolor, Sergio V. Tota, Tzach Zemer, Sagi Lahav, Jonathan M. Redshaw, Per H. Hammarlund, Eran Tamari, James Vash, Gaurav Garg, Lior Zimet, Harshavardhan Kaushikkar, Steven Fishwick, Steven R. Hutsell, Shawn M. Fukami
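A minimal illustration, with assumed names, of the idea in the abstract above: several physically and logically independent on-die networks, each built with its own topology and ordering rules, such as a coherent ring and a relaxed-order mesh. The network names and the Topology/Ordering enums are invented for the example.

```cpp
#include <string>
#include <vector>

enum class Topology { Ring, Mesh, Tree };
enum class Ordering { Coherent, RelaxedOrder };

struct Network {
    std::string name;
    Topology    topology;
    Ordering    ordering;
};

// Independent networks formed by interconnected network switches on one die.
const std::vector<Network> kNetworks = {
    {"cpu_coherent", Topology::Ring, Ordering::Coherent},      // coherency enforced
    {"io_bulk",      Topology::Mesh, Ordering::RelaxedOrder},  // relaxed-order traffic
};
```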
  • Patent number: 11803471
    Abstract: An integrated circuit (IC) including a plurality of processor cores, a plurality of graphics processing units, a plurality of peripheral circuits, and a plurality of memory controllers is configured to support scaling of the system using a unified memory architecture. For example, the IC may include an interconnect fabric configured to provide communication between the one or more memory controller circuits and the processor cores, graphics processing units, and peripheral devices; and an off-chip interconnect coupled to the interconnect fabric and configured to couple the interconnect fabric to a corresponding interconnect fabric on another instance of the integrated circuit, wherein the interconnect fabric and the off-chip interconnect provide an interface that transparently connects the one or more memory controller circuits, the processor cores, graphics processing units, and peripheral devices in either a single instance of the integrated circuit or two or more instances of the integrated circuit.
    Type: Grant
    Filed: August 22, 2022
    Date of Patent: October 31, 2023
    Assignee: Apple Inc.
    Inventors: Per H. Hammarlund, Lior Zimet, Sergio Kolor, Sagi Lahav, James Vash, Gaurav Garg, Tal Kuzi, Jeffry E. Gonion, Charles E. Tucker, Lital Levy-Rubin, Dany Davidov, Steven Fishwick, Nir Leshem, Mark Pilip, Gerard R. Williams, III, Harshavardhan Kaushikkar, Srinivasa Rangan Sridharan
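A loose sketch, with invented names and an assumed address split, of the transparent multi-instance routing the abstract above describes: the interconnect fabric sends a request either to a local memory controller or across the off-chip interconnect to the corresponding fabric on another instance of the integrated circuit, so software sees a single system either way.

```cpp
#include <cstdint>
#include <cstdio>

constexpr uint64_t kPerDieAddressSpace = 1ull << 36;  // assumed per-instance address range

void route_request(uint64_t addr) {
    // Which instance of the integrated circuit owns this address.
    int die = static_cast<int>(addr / kPerDieAddressSpace);
    if (die == 0)
        std::printf("addr 0x%llx: local memory controller\n",
                    static_cast<unsigned long long>(addr));
    else
        std::printf("addr 0x%llx: off-chip interconnect to instance %d\n",
                    static_cast<unsigned long long>(addr), die);
}

int main() {
    route_request(0x0000'1000);          // stays on the local instance
    route_request(kPerDieAddressSpace);  // crosses the off-chip interconnect
}
```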
  • Publication number: 20230333851
    Abstract: Techniques are disclosed relating to data synchronization barrier operations. A system includes a first processor that may receive a data barrier operation request from a second processor included in the system. Based on receiving that data barrier operation request from the second processor, the first processor may ensure that outstanding load/store operations executed by the first processor that are directed to addresses outside of an exclusion region have been completed. The first processor may respond to the second processor that the data barrier operation request is complete at the first processor, even in the case that one or more load/store operations that are directed to addresses within the exclusion region are outstanding and not complete when the first processor responds that the data barrier operation request is complete.
    Type: Application
    Filed: June 16, 2023
    Publication date: October 19, 2023
    Inventors: Jeff Gonion, John H. Kelm, James Vash, Pradeep Kanapathipillai, Mridul Agarwal, Gideon N. Levinsky, Richard F. Russo, Christopher M. Tsay
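A hedged sketch of the barrier handling described above: on a data barrier request, the receiving processor only needs to drain outstanding load/store operations whose addresses fall outside an exclusion region before acknowledging; in-region operations may still be in flight. The address-range check and the operation record are illustrative assumptions.

```cpp
#include <cstdint>
#include <vector>

struct OutstandingOp { uint64_t addr; bool complete; };

struct ExclusionRegion {
    uint64_t base, size;
    bool contains(uint64_t addr) const { return addr >= base && addr < base + size; }
};

// Returns true (barrier may be acknowledged) once every operation outside
// the exclusion region has completed, regardless of in-region operations.
bool barrier_complete(const std::vector<OutstandingOp>& ops,
                      const ExclusionRegion& excl) {
    for (const auto& op : ops)
        if (!op.complete && !excl.contains(op.addr))
            return false;  // an out-of-region op must still drain
    return true;           // safe to respond to the requesting processor
}
```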
  • Publication number: 20230305924
    Abstract: Techniques are disclosed relating to memory error tracking and logging. In some embodiments, a memory cache controller circuitry is configured to track, using multiple circuit entries, numbers of detected correctable errors associated with multiple respective locations, and in response to detecting a threshold number of correctable errors for a particular location, generate a signal to the one or more processors that identifies the particular location. In some embodiments, the memory cache controller circuitry includes multiple circuit entries for tracking uncorrectable errors.
    Type: Application
    Filed: June 1, 2022
    Publication date: September 28, 2023
    Inventors: Farid Nemati, Steven R. Hutsell, Derek R. Kumar, Bernard J. Semeria, James Vash, Era K. Nangia, Gregory S. Mathews
  • Patent number: 11768690
    Abstract: A system may include a plurality of processors and a coprocessor. A plurality of coprocessor context priority registers corresponding to a plurality of contexts supported by the coprocessor may be included. The plurality of processors may use the plurality of contexts, and may program the coprocessor context priority register corresponding to a context with a value specifying a priority of the context relative to other contexts. An arbiter may arbitrate among instructions issued by the plurality of processors based on the priorities in the plurality of coprocessor context priority registers. In one embodiment, real-time threads may be assigned higher priorities than bulk processing tasks, improving bandwidth allocated to the real-time threads as compared to the bulk tasks.
    Type: Grant
    Filed: November 22, 2021
    Date of Patent: September 26, 2023
    Assignee: Apple Inc.
    Inventors: Aditya Kesiraju, Andrew J. Beaumont-Smith, Brian P. Lilly, James Vash, Jason M. Kassoff, Krishna C. Potnuru, Rajdeep L. Bhuyar, Ran A. Chachick, Tyler J. Huberty, Derek R. Kumar
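A simplified sketch of the arbitration idea above: each coprocessor context has a priority register programmed by the processors, and the arbiter selects the pending instruction whose context holds the highest priority (for example, real-time threads programmed above bulk-processing tasks). The register layout, context count, and tie-breaking are assumptions for the example.

```cpp
#include <array>
#include <cstdint>
#include <optional>
#include <vector>

constexpr int kNumContexts = 8;  // assumed number of supported contexts

struct PendingInstr { int context_id; uint32_t opcode; };

struct CoprocessorArbiter {
    // Per-context priority registers, programmed by the processors.
    std::array<uint32_t, kNumContexts> context_priority{};

    // Pick the pending instruction from the highest-priority context;
    // earlier entries win ties in this simplified model.
    std::optional<PendingInstr> arbitrate(const std::vector<PendingInstr>& pending) {
        std::optional<PendingInstr> best;
        for (const auto& instr : pending) {
            if (!best || context_priority[instr.context_id] >
                         context_priority[best->context_id])
                best = instr;
        }
        return best;
    }
};
```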
  • Patent number: 11720360
    Abstract: Techniques are disclosed relating to data synchronization barrier operations. A system includes a first processor that may receive a data barrier operation request from a second processor included in the system. Based on receiving that data barrier operation request from the second processor, the first processor may ensure that outstanding load/store operations executed by the first processor that are directed to addresses outside of an exclusion region have been completed. The first processor may respond to the second processor that the data barrier operation request is complete at the first processor, even in the case that one or more load/store operations that are directed to addresses within the exclusion region are outstanding and not complete when the first processor responds that the data barrier operation request is complete.
    Type: Grant
    Filed: September 8, 2021
    Date of Patent: August 8, 2023
    Assignee: Apple Inc.
    Inventors: Jeff Gonion, John H. Kelm, James Vash, Pradeep Kanapathipillai, Mridul Agarwal, Gideon N. Levinsky, Richard F. Russo, Christopher M. Tsay
  • Patent number: 11675722
    Abstract: In an embodiment, a system on a chip (SOC) comprises a semiconductor die on which circuitry is formed, wherein the circuitry comprises a plurality of agents and a plurality of network switches coupled to the plurality of agents. The plurality of network switches are interconnected to form a plurality of physically and logically independent networks. A first network of the plurality of physically and logically independent networks is constructed according to a first topology and a second network of the plurality of physically and logically independent networks is constructed according to a second topology that is different from the first topology. For example, the first topology may be a ring topology and the second topology may be a mesh topology. In an embodiment, coherency may be enforced on the first network and the second network may be a relaxed-order network.
    Type: Grant
    Filed: June 3, 2021
    Date of Patent: June 13, 2023
    Assignee: Apple Inc.
    Inventors: Sergio Kolor, Sergio V. Tota, Tzach Zemer, Sagi Lahav, Jonathan M. Redshaw, Per H. Hammarlund, Eran Tamari, James Vash, Gaurav Garg, Lior Zimet, Harshavardhan Kaushikkar, Steven Fishwick, Steven R. Hutsell, Shawn M. Fukami
  • Publication number: 20230169003
    Abstract: A scalable cache coherency protocol for a system including a plurality of coherent agents coupled to one or more memory controllers is described. The memory controller may implement a precise directory for cache blocks from the memory to which the memory controller is coupled. Multiple requests to a cache block may be outstanding, and snoops and completions for requests may include an expected cache state at the receiving agent, as indicated by a directory in the memory controller when the request was processed, to allow the receiving agent to detect race conditions. In an embodiment, the cache states may include a primary shared state and a secondary shared state. The primary shared state may apply to a coherent agent that bears responsibility for transmitting a copy of the cache block to a requesting agent. In an embodiment, at least two types of snoops may be supported: snoop forward and snoop back.
    Type: Application
    Filed: January 27, 2023
    Publication date: June 1, 2023
    Inventors: James Vash, Gaurav Garg, Brian P. Lilly, Ramesh B. Gunna, Steven R. Hutsell, Lital Levy-Rubin, Per H. Hammarlund, Harshavardhan Kaushikkar