Patents by Inventor James Vash
James Vash has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12277074
Abstract: Techniques are disclosed pertaining to utilizing a communication fabric via multiple ports. An agent circuit includes a plurality of command-and-data ports that couple the agent circuit to a communication fabric coupled to a plurality of hardware components that includes a plurality of memory controller circuits that facilitate access to a memory. The agent circuit can execute an instruction that involves issuing a command for data stored at the memory. The agent circuit may perform a hash operation on a memory address associated with the command to determine which one of the plurality of memory controller circuits to issue the command to. The agent circuit issues the command to the determined memory controller circuit on a particular one of the plurality of command-and-data ports that is designated to that memory controller circuit. The agent circuit may issue all commands destined for that memory controller circuit on that port.
Type: Grant
Filed: September 25, 2023
Date of Patent: April 15, 2025
Assignee: Apple Inc.
Inventors: Sergio Kolor, Sandeep Gupta, James Vash
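The abstract describes hashing a memory address to pick which memory controller (and therefore which designated port) receives a command. Below is a minimal software sketch of that selection step; the hash function, controller count, and cache-line granularity are illustrative assumptions, not details taken from the patent.

```c
/*
 * Sketch: address-hash selection of a memory controller, per the abstract
 * above. NUM_MEM_CONTROLLERS, the bit folding, and the line size are assumed.
 */
#include <stdint.h>
#include <stdio.h>

#define NUM_MEM_CONTROLLERS 4   /* assumed; one command-and-data port per controller */

/* Fold address bits so interleaved cache lines spread across controllers;
 * a real hash would be implementation-defined. */
static unsigned hash_to_controller(uint64_t addr)
{
    uint64_t line = addr >> 6;                       /* drop cache-line offset bits */
    uint64_t h = line ^ (line >> 8) ^ (line >> 16);
    return (unsigned)(h % NUM_MEM_CONTROLLERS);
}

static void issue_command(uint64_t addr)
{
    unsigned mc = hash_to_controller(addr);
    /* All commands destined for controller `mc` go out on the port
     * designated to that controller. */
    printf("addr 0x%llx -> memory controller %u (port %u)\n",
           (unsigned long long)addr, mc, mc);
}

int main(void)
{
    issue_command(0x1000);
    issue_command(0x1040);
    issue_command(0x2000);
    return 0;
}
```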
-
Publication number: 20250103492
Abstract: Techniques are disclosed relating to performing remote cache invalidations. In some embodiments, primary processor circuitry is configured to, based on execution of a remote invalidate instruction (e.g., an ISA-defined instruction), send a cache invalidate command to coprocessor circuitry. The coprocessor circuitry includes coprocessor cache circuitry and cache invalidation control circuitry configured to, in response to the cache invalidate command sent by the primary processor, invalidate one or more cache lines in the coprocessor cache circuitry without executing any instructions on the coprocessor circuitry.
Type: Application
Filed: February 20, 2024
Publication date: March 27, 2025
Inventors: Brett S. Feero, Dennis R. Bradford, Gaurav Garg, Jeff Gonion, Bernard J. Semeria, James Vash, Richard F. Russo
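As a rough software model of the flow above, the primary side issues an invalidate command and dedicated control logic on the coprocessor drops matching cache lines without the coprocessor running any instructions. The direct-mapped cache model and the function-call "command" are illustrative assumptions; in hardware this would be a message on an interconnect.

```c
/*
 * Sketch: remote cache invalidation handled by coprocessor-side control
 * circuitry. Cache geometry and command format are assumed.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define COPROC_CACHE_LINES 8
#define LINE_SHIFT 6

/* Coprocessor cache circuitry: a direct-mapped tag array with valid bits. */
struct coproc_cache {
    uint64_t tag[COPROC_CACHE_LINES];
    bool     valid[COPROC_CACHE_LINES];
};

/* Cache invalidation control circuitry: reacts to the command from the
 * primary; no instructions execute on the coprocessor itself. */
static void coproc_handle_invalidate(struct coproc_cache *c, uint64_t addr)
{
    unsigned idx = (unsigned)((addr >> LINE_SHIFT) % COPROC_CACHE_LINES);
    if (c->valid[idx] && c->tag[idx] == (addr >> LINE_SHIFT)) {
        c->valid[idx] = false;
        printf("coprocessor: invalidated line %u for addr 0x%llx\n",
               idx, (unsigned long long)addr);
    }
}

/* Primary processor side: executing the remote invalidate instruction
 * results in the command being sent to the coprocessor's control logic. */
static void primary_remote_invalidate(struct coproc_cache *c, uint64_t addr)
{
    coproc_handle_invalidate(c, addr);
}

int main(void)
{
    struct coproc_cache cache = { .tag = { [2] = 0x42 }, .valid = { [2] = true } };
    primary_remote_invalidate(&cache, 0x42ull << LINE_SHIFT);
    return 0;
}
```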
-
Patent number: 12253913
Abstract: Techniques are disclosed relating to memory error tracking and logging. In some embodiments, memory cache controller circuitry is configured to track, using multiple circuit entries, numbers of detected correctable errors associated with multiple respective locations, and in response to detecting a threshold number of correctable errors for a particular location, generate a signal to one or more processors that identifies the particular location. In some embodiments, the memory cache controller circuitry includes multiple circuit entries for tracking uncorrectable errors.
Type: Grant
Filed: February 12, 2024
Date of Patent: March 18, 2025
Assignee: Apple Inc.
Inventors: Farid Nemati, Steven R. Hutsell, Derek R. Kumar, Bernard J. Semeria, James Vash, Era K. Nangia, Gregory S. Mathews
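The tracking structure described above is essentially a small table of per-location counters with a threshold-triggered signal. A minimal sketch follows; the entry count, threshold value, and the "signal" (here just a print) are illustrative assumptions rather than details from the patent.

```c
/*
 * Sketch: threshold-based correctable-error tracking per location.
 * TRACK_ENTRIES, CE_THRESHOLD, and the overflow policy are assumed.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TRACK_ENTRIES 4   /* assumed number of tracking circuit entries */
#define CE_THRESHOLD  3   /* assumed correctable-error threshold        */

struct ce_entry {
    bool     in_use;
    uint64_t location;    /* e.g., a physical address or row identifier */
    unsigned count;
};

static struct ce_entry entries[TRACK_ENTRIES];

/* Stand-in for the signal raised to the processor(s). */
static void signal_processors(uint64_t location)
{
    printf("threshold reached: location 0x%llx needs attention\n",
           (unsigned long long)location);
}

/* Called each time a correctable error is detected at `location`. */
static void record_correctable_error(uint64_t location)
{
    for (int i = 0; i < TRACK_ENTRIES; i++) {
        if (entries[i].in_use && entries[i].location == location) {
            if (++entries[i].count >= CE_THRESHOLD)
                signal_processors(location);
            return;
        }
    }
    for (int i = 0; i < TRACK_ENTRIES; i++) {
        if (!entries[i].in_use) {
            entries[i] = (struct ce_entry){ true, location, 1 };
            return;
        }
    }
    /* All entries occupied: a real design would have an eviction or
     * saturation policy; omitted here. */
}

int main(void)
{
    for (int i = 0; i < 3; i++)
        record_correctable_error(0xdead000);
    return 0;
}
```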
-
Publication number: 20240427663
Abstract: Techniques are disclosed relating to memory error tracking and logging. In some embodiments, memory cache controller circuitry is configured to track, using multiple circuit entries, numbers of detected correctable errors associated with multiple respective locations, and in response to detecting a threshold number of correctable errors for a particular location, generate a signal to one or more processors that identifies the particular location. In some embodiments, the memory cache controller circuitry includes multiple circuit entries for tracking uncorrectable errors.
Type: Application
Filed: February 12, 2024
Publication date: December 26, 2024
Inventors: Farid Nemati, Steven R. Hutsell, Derek R. Kumar, Bernard J. Semeria, James Vash, Era K. Nangia, Gregory S. Mathews
-
Publication number: 20240411695
Abstract: A system including a plurality of processor cores, a plurality of graphics processing units, a plurality of peripheral circuits, and a plurality of memory controllers is configured to support scaling of the system using a unified memory architecture.
Type: Application
Filed: June 10, 2024
Publication date: December 12, 2024
Inventors: Per H. Hammarlund, Eran Tamari, Lior Zimet, Sergio Kolor, Sergio Tota, Sagi Lahav, James Vash, Gaurav Garg, Jonathan M. Redshaw, Steven R. Hutsell, Harshavardhan Kaushikkar, Shawn M. Fukami
-
Publication number: 20240370371
Abstract: A system including a plurality of processor cores, a plurality of graphics processing units, a plurality of peripheral circuits, and a plurality of memory controllers is configured to support scaling of the system using a unified memory architecture.
Type: Application
Filed: March 15, 2024
Publication date: November 7, 2024
Inventors: Per H. Hammarlund, Lior Zimet, James Vash, Gaurav Garg, Sergio Kolor, Harshavardhan Kaushikkar, Ramesh B. Gunna, Steven R. Hutsell
-
Publication number: 20240273024
Abstract: A scalable cache coherency protocol for a system including a plurality of coherent agents coupled to one or more memory controllers is described. The memory controller may implement a precise directory for cache blocks from the memory to which the memory controller is coupled. Multiple requests to a cache block may be outstanding, and snoops and completions for requests may include an expected cache state at the receiving agent, as indicated by a directory in the memory controller when the request was processed, to allow the receiving agent to detect race conditions. In an embodiment, the cache states may include a primary shared and a secondary shared state. The primary shared state may apply to a coherent agent that bears responsibility for transmitting a copy of the cache block to a requesting agent. In an embodiment, at least two types of snoops may be supported: snoop forward and snoop back.
Type: Application
Filed: February 20, 2024
Publication date: August 15, 2024
Inventors: James Vash, Gaurav Garg, Brian P. Lilly, Ramesh B. Gunna, Steven R. Hutsell, Lital Levy-Rubin, Per H. Hammarlund, Harshavardhan Kaushikkar
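To make the race-detection idea concrete: each snoop carries the cache state the directory expected the receiver to be in when the request was processed, so a mismatch at the receiver flags a race. The sketch below models only the message shapes and the comparison; the enums, struct layout, and encodings are illustrative assumptions, not the patented protocol.

```c
/*
 * Sketch: snoops carrying an expected cache state, with primary/secondary
 * shared states and snoop-forward/snoop-back types. All encodings assumed.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum cache_state {
    STATE_INVALID,
    STATE_SHARED_PRIMARY,    /* responsible for forwarding a copy */
    STATE_SHARED_SECONDARY,  /* holds a copy but does not forward */
    STATE_EXCLUSIVE,
    STATE_MODIFIED,
};

enum snoop_type {
    SNOOP_FORWARD,  /* forward the block to the requesting agent    */
    SNOOP_BACK,     /* return the block to the memory controller    */
};

struct snoop {
    enum snoop_type  type;
    uint64_t         block_addr;
    enum cache_state expected_state;  /* directory's view when request was processed */
    unsigned         requester_id;
};

/* Receiving agent: compare the expected state with its actual state to
 * detect a race (e.g., an eviction already in flight). */
static bool snoop_detects_race(const struct snoop *s, enum cache_state actual)
{
    return s->expected_state != actual;
}

int main(void)
{
    struct snoop s = { SNOOP_FORWARD, 0x1000, STATE_SHARED_PRIMARY, 2 };
    enum cache_state actual = STATE_INVALID;  /* block was silently dropped */
    printf("race detected: %s\n", snoop_detects_race(&s, actual) ? "yes" : "no");
    return 0;
}
```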
-
Patent number: 12007895
Abstract: A system including a plurality of processor cores, a plurality of graphics processing units, a plurality of peripheral circuits, and a plurality of memory controllers is configured to support scaling of the system using a unified memory architecture.
Type: Grant
Filed: August 22, 2022
Date of Patent: June 11, 2024
Assignee: Apple Inc.
Inventors: Per H. Hammarlund, Eran Tamari, Lior Zimet, Sergio Kolor, Sergio V. Tota, Sagi Lahav, James Vash, Gaurav Garg, Jonathan M. Redshaw, Steven R. Hutsell, Harshavardhan Kaushikkar, Shawn M. Fukami
-
Patent number: 11947457
Abstract: A scalable cache coherency protocol for a system including a plurality of coherent agents coupled to one or more memory controllers is described. The memory controller may implement a precise directory for cache blocks from the memory to which the memory controller is coupled. Multiple requests to a cache block may be outstanding, and snoops and completions for requests may include an expected cache state at the receiving agent, as indicated by a directory in the memory controller when the request was processed, to allow the receiving agent to detect race conditions. In an embodiment, the cache states may include a primary shared and a secondary shared state. The primary shared state may apply to a coherent agent that bears responsibility for transmitting a copy of the cache block to a requesting agent. In an embodiment, at least two types of snoops may be supported: snoop forward and snoop back.
Type: Grant
Filed: November 22, 2022
Date of Patent: April 2, 2024
Assignee: Apple Inc.
Inventors: James Vash, Gaurav Garg, Brian P. Lilly, Ramesh B. Gunna, Steven R. Hutsell, Lital Levy-Rubin, Per H. Hammarlund, Harshavardhan Kaushikkar
-
Patent number: 11934265
Abstract: Techniques are disclosed relating to memory error tracking and logging. In some embodiments, memory cache controller circuitry is configured to track, using multiple circuit entries, numbers of detected correctable errors associated with multiple respective locations, and in response to detecting a threshold number of correctable errors for a particular location, generate a signal to one or more processors that identifies the particular location. In some embodiments, the memory cache controller circuitry includes multiple circuit entries for tracking uncorrectable errors.
Type: Grant
Filed: June 1, 2022
Date of Patent: March 19, 2024
Assignee: Apple Inc.
Inventors: Farid Nemati, Steven R. Hutsell, Derek R. Kumar, Bernard J. Semeria, James Vash, Era K. Nangia, Gregory S. Mathews
-
Patent number: 11934313
Abstract: A system including a plurality of processor cores, a plurality of graphics processing units, a plurality of peripheral circuits, and a plurality of memory controllers is configured to support scaling of the system using a unified memory architecture.
Type: Grant
Filed: August 22, 2022
Date of Patent: March 19, 2024
Assignee: Apple Inc.
Inventors: Per H. Hammarlund, Lior Zimet, James Vash, Gaurav Garg, Sergio Kolor, Harshavardhan Kaushikkar, Ramesh B. Gunna, Steven R. Hutsell
-
Patent number: 11868258
Abstract: A scalable cache coherency protocol for a system including a plurality of coherent agents coupled to one or more memory controllers is described. The memory controller may implement a precise directory for cache blocks from the memory to which the memory controller is coupled. Multiple requests to a cache block may be outstanding, and snoops and completions for requests may include an expected cache state at the receiving agent, as indicated by a directory in the memory controller when the request was processed, to allow the receiving agent to detect race conditions. In an embodiment, the cache states may include a primary shared and a secondary shared state. The primary shared state may apply to a coherent agent that bears responsibility for transmitting a copy of the cache block to a requesting agent. In an embodiment, at least two types of snoops may be supported: snoop forward and snoop back.
Type: Grant
Filed: January 27, 2023
Date of Patent: January 9, 2024
Assignee: Apple Inc.
Inventors: James Vash, Gaurav Garg, Brian P. Lilly, Ramesh B. Gunna, Steven R. Hutsell, Lital Levy-Rubin, Per H. Hammarlund, Harshavardhan Kaushikkar
-
Publication number: 20230350828
Abstract: In an embodiment, a system on a chip (SOC) comprises a semiconductor die on which circuitry is formed, wherein the circuitry comprises a plurality of agents and a plurality of network switches coupled to the plurality of agents. The plurality of network switches are interconnected to form a plurality of physically and logically independent networks. A first network of the plurality of physically and logically independent networks is constructed according to a first topology and a second network of the plurality of physically and logically independent networks is constructed according to a second topology that is different from the first topology. For example, the first topology may be a ring topology and the second topology may be a mesh topology. In an embodiment, coherency may be enforced on the first network and the second network may be a relaxed order network.
Type: Application
Filed: April 28, 2023
Publication date: November 2, 2023
Inventors: Sergio Kolor, Sergio V. Tota, Tzach Zemer, Sagi Lahav, Jonathan M. Redshaw, Per H. Hammarlund, Eran Tamari, James Vash, Gaurav Garg, Lior Zimet, Harshavardhan Kaushikkar, Steven Fishwick, Steven R. Hutsell, Shawn M. Fukami
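A compact way to picture the arrangement above is a per-network descriptor: each independent on-chip network has its own topology and ordering property. The names and table contents below are illustrative assumptions, not the SOC's actual networks.

```c
/*
 * Sketch: descriptors for physically and logically independent on-chip
 * networks with different topologies and ordering models. Contents assumed.
 */
#include <stdio.h>

enum topology { TOPOLOGY_RING, TOPOLOGY_MESH };
enum ordering { ORDERING_COHERENT, ORDERING_RELAXED };

struct network_desc {
    const char   *name;
    enum topology topology;
    enum ordering ordering;
};

/* Independent networks on the same die. */
static const struct network_desc networks[] = {
    { "cpu-coherent", TOPOLOGY_RING, ORDERING_COHERENT },  /* coherency enforced */
    { "io-relaxed",   TOPOLOGY_MESH, ORDERING_RELAXED },   /* relaxed order      */
};

int main(void)
{
    for (unsigned i = 0; i < sizeof(networks) / sizeof(networks[0]); i++)
        printf("%s: %s topology, %s ordering\n",
               networks[i].name,
               networks[i].topology == TOPOLOGY_RING ? "ring" : "mesh",
               networks[i].ordering == ORDERING_COHERENT ? "coherent" : "relaxed");
    return 0;
}
```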
-
Patent number: 11803471
Abstract: An integrated circuit (IC) including a plurality of processor cores, a plurality of graphics processing units, a plurality of peripheral circuits, and a plurality of memory controllers is configured to support scaling of the system using a unified memory architecture. For example, the IC may include an interconnect fabric configured to provide communication between the one or more memory controller circuits and the processor cores, graphics processing units, and peripheral devices; and an off-chip interconnect coupled to the interconnect fabric and configured to couple the interconnect fabric to a corresponding interconnect fabric on another instance of the integrated circuit, wherein the interconnect fabric and the off-chip interconnect provide an interface that transparently connects the one or more memory controller circuits, the processor cores, graphics processing units, and peripheral devices in either a single instance of the integrated circuit or two or more instances of the integrated circuit.
Type: Grant
Filed: August 22, 2022
Date of Patent: October 31, 2023
Assignee: Apple Inc.
Inventors: Per H. Hammarlund, Lior Zimet, Sergio Kolor, Sagi Lahav, James Vash, Gaurav Garg, Tal Kuzi, Jeffry E. Gonion, Charles E. Tucker, Lital Levy-Rubin, Dany Davidov, Steven Fishwick, Nir Leshem, Mark Pilip, Gerard R. Williams, III, Harshavardhan Kaushikkar, Srinivasa Rangan Sridharan
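The key word in the abstract above is "transparently": a requester addresses a memory controller the same way whether that controller is on the local die or reached over the off-chip interconnect. The sketch below illustrates that idea only; the die-ID encoding, controller counts, and routing math are illustrative assumptions.

```c
/*
 * Sketch: uniform routing across one or two die instances; requesters do
 * not distinguish local from remote targets. All parameters assumed.
 */
#include <stdint.h>
#include <stdio.h>

#define CONTROLLERS_PER_DIE 4
#define NUM_DIES            2

struct fabric_route {
    unsigned die;         /* 0 = local die, 1 = remote die via off-chip link */
    unsigned controller;  /* memory controller index on that die             */
};

/* Same lookup regardless of which die the target lives on. */
static struct fabric_route route_request(uint64_t addr)
{
    unsigned global = (unsigned)((addr >> 6) % (CONTROLLERS_PER_DIE * NUM_DIES));
    struct fabric_route r = { global / CONTROLLERS_PER_DIE,
                              global % CONTROLLERS_PER_DIE };
    return r;
}

int main(void)
{
    uint64_t addrs[] = { 0x0, 0x40, 0x100, 0x1c0 };
    for (unsigned i = 0; i < 4; i++) {
        struct fabric_route r = route_request(addrs[i]);
        printf("addr 0x%llx -> die %u, controller %u%s\n",
               (unsigned long long)addrs[i], r.die, r.controller,
               r.die ? " (over off-chip interconnect)" : "");
    }
    return 0;
}
```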
-
Publication number: 20230333851
Abstract: Techniques are disclosed relating to data synchronization barrier operations. A system includes a first processor that may receive a data barrier operation request from a second processor included in the system. Based on receiving that data barrier operation request from the second processor, the first processor may ensure that outstanding load/store operations executed by the first processor that are directed to addresses outside of an exclusion region have been completed. The first processor may respond to the second processor that the data barrier operation request is complete at the first processor, even in the case that one or more load/store operations that are directed to addresses within the exclusion region are outstanding and not complete when the first processor responds that the data barrier operation request is complete.
Type: Application
Filed: June 16, 2023
Publication date: October 19, 2023
Inventors: Jeff Gonion, John H. Kelm, James Vash, Pradeep Kanapathipillai, Mridul Agarwal, Gideon N. Levinsky, Richard F. Russo, Christopher M. Tsay
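In other words, only outstanding operations whose addresses fall outside the exclusion region hold up the barrier acknowledgment. A minimal sketch of that completion check follows; the region bounds and the pending-operation list are illustrative assumptions.

```c
/*
 * Sketch: barrier acknowledgment that ignores outstanding operations inside
 * an exclusion region. Region bounds and op tracking are assumed.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct exclusion_region {
    uint64_t base;
    uint64_t size;
};

struct outstanding_op {
    uint64_t addr;
    bool     complete;
};

static bool in_region(const struct exclusion_region *r, uint64_t addr)
{
    return addr >= r->base && addr < r->base + r->size;
}

/* Returns true if the barrier can be acknowledged: every incomplete
 * load/store outside the exclusion region must already be done. */
static bool barrier_can_complete(const struct exclusion_region *r,
                                 const struct outstanding_op *ops, unsigned n)
{
    for (unsigned i = 0; i < n; i++)
        if (!ops[i].complete && !in_region(r, ops[i].addr))
            return false;
    return true;
}

int main(void)
{
    struct exclusion_region excl = { .base = 0xF0000000ull, .size = 0x1000 };
    struct outstanding_op ops[] = {
        { 0x1000,        true  },  /* completed, outside region   */
        { 0xF0000040ull, false },  /* still pending, but excluded */
    };
    printf("barrier ack: %s\n",
           barrier_can_complete(&excl, ops, 2) ? "yes" : "no");
    return 0;
}
```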
-
Publication number: 20230305924
Abstract: Techniques are disclosed relating to memory error tracking and logging. In some embodiments, memory cache controller circuitry is configured to track, using multiple circuit entries, numbers of detected correctable errors associated with multiple respective locations, and in response to detecting a threshold number of correctable errors for a particular location, generate a signal to one or more processors that identifies the particular location. In some embodiments, the memory cache controller circuitry includes multiple circuit entries for tracking uncorrectable errors.
Type: Application
Filed: June 1, 2022
Publication date: September 28, 2023
Inventors: Farid Nemati, Steven R. Hutsell, Derek R. Kumar, Bernard J. Semeria, James Vash, Era K. Nangia, Gregory S. Mathews
-
Patent number: 11768690
Abstract: A system may include a plurality of processors and a coprocessor. A plurality of coprocessor context priority registers corresponding to a plurality of contexts supported by the coprocessor may be included. The plurality of processors may use the plurality of contexts, and may program the coprocessor context priority register corresponding to a context with a value specifying a priority of the context relative to other contexts. An arbiter may arbitrate among instructions issued by the plurality of processors based on the priorities in the plurality of coprocessor context priority registers. In one embodiment, real-time threads may be assigned higher priorities than bulk processing tasks, improving bandwidth allocated to the real-time threads as compared to the bulk tasks.
Type: Grant
Filed: November 22, 2021
Date of Patent: September 26, 2023
Assignee: Apple Inc.
Inventors: Aditya Kesiraju, Andrew J. Beaumont-Smith, Brian P. Lilly, James Vash, Jason M. Kassoff, Krishna C. Potnuru, Rajdeep L. Bhuyar, Ran A. Chachick, Tyler J. Huberty, Derek R. Kumar
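The arbitration described above reduces to picking, among contexts with pending work, the one whose programmed priority register holds the highest value. The sketch below shows that selection; the register count, priority encoding, and one-pending-instruction-per-context model are illustrative assumptions.

```c
/*
 * Sketch: priority-based arbitration across coprocessor contexts using
 * per-context priority registers. Counts and encodings assumed.
 */
#include <stdbool.h>
#include <stdio.h>

#define NUM_CONTEXTS 4

/* Per-context priority registers programmed by the processors; a higher
 * value means higher priority (e.g., real-time vs. bulk work). */
static unsigned context_priority[NUM_CONTEXTS] = { 1, 7, 3, 1 };

/* One pending-instruction flag per context, for simplicity. */
static bool pending[NUM_CONTEXTS] = { true, true, false, true };

/* Arbiter: pick the pending context with the highest programmed priority. */
static int arbitrate(void)
{
    int winner = -1;
    for (int c = 0; c < NUM_CONTEXTS; c++)
        if (pending[c] &&
            (winner < 0 || context_priority[c] > context_priority[winner]))
            winner = c;
    return winner;  /* -1 if nothing is pending */
}

int main(void)
{
    int c = arbitrate();
    if (c >= 0)
        printf("issue from context %d (priority %u)\n", c, context_priority[c]);
    return 0;
}
```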
-
Patent number: 11720360
Abstract: Techniques are disclosed relating to data synchronization barrier operations. A system includes a first processor that may receive a data barrier operation request from a second processor included in the system. Based on receiving that data barrier operation request from the second processor, the first processor may ensure that outstanding load/store operations executed by the first processor that are directed to addresses outside of an exclusion region have been completed. The first processor may respond to the second processor that the data barrier operation request is complete at the first processor, even in the case that one or more load/store operations that are directed to addresses within the exclusion region are outstanding and not complete when the first processor responds that the data barrier operation request is complete.
Type: Grant
Filed: September 8, 2021
Date of Patent: August 8, 2023
Assignee: Apple Inc.
Inventors: Jeff Gonion, John H. Kelm, James Vash, Pradeep Kanapathipillai, Mridul Agarwal, Gideon N. Levinsky, Richard F. Russo, Christopher M. Tsay
-
Patent number: 11675722
Abstract: In an embodiment, a system on a chip (SOC) comprises a semiconductor die on which circuitry is formed, wherein the circuitry comprises a plurality of agents and a plurality of network switches coupled to the plurality of agents. The plurality of network switches are interconnected to form a plurality of physically and logically independent networks. A first network of the plurality of physically and logically independent networks is constructed according to a first topology and a second network of the plurality of physically and logically independent networks is constructed according to a second topology that is different from the first topology. For example, the first topology may be a ring topology and the second topology may be a mesh topology. In an embodiment, coherency may be enforced on the first network and the second network may be a relaxed order network.
Type: Grant
Filed: June 3, 2021
Date of Patent: June 13, 2023
Assignee: Apple Inc.
Inventors: Sergio Kolor, Sergio V. Tota, Tzach Zemer, Sagi Lahav, Jonathan M. Redshaw, Per H. Hammarlund, Eran Tamari, James Vash, Gaurav Garg, Lior Zimet, Harshavardhan Kaushikkar, Steven Fishwick, Steven R. Hutsell, Shawn M. Fukami
-
Publication number: 20230169003
Abstract: A scalable cache coherency protocol for a system including a plurality of coherent agents coupled to one or more memory controllers is described. The memory controller may implement a precise directory for cache blocks from the memory to which the memory controller is coupled. Multiple requests to a cache block may be outstanding, and snoops and completions for requests may include an expected cache state at the receiving agent, as indicated by a directory in the memory controller when the request was processed, to allow the receiving agent to detect race conditions. In an embodiment, the cache states may include a primary shared and a secondary shared state. The primary shared state may apply to a coherent agent that bears responsibility for transmitting a copy of the cache block to a requesting agent. In an embodiment, at least two types of snoops may be supported: snoop forward and snoop back.
Type: Application
Filed: January 27, 2023
Publication date: June 1, 2023
Inventors: James Vash, Gaurav Garg, Brian P. Lilly, Ramesh B. Gunna, Steven R. Hutsell, Lital Levy-Rubin, Per H. Hammarlund, Harshavardhan Kaushikkar