Patents by Inventor Eric Francis Robinson

Eric Francis Robinson has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230236889
    Abstract: Systems, methods, and devices are described for coordinating a distributed accelerator. A command that includes instructions for performing a task is received. One or more sub-tasks of the task are determined to generate a set of sub-tasks. For each sub-task of the set of sub-tasks, an accelerator slice of a plurality of accelerator slices of a distributed accelerator is allocated, and sub-task instructions for performing the sub-task are determined. Sub-task instructions are transmitted to the allocated accelerator slice for each sub-task. Each allocated accelerator slice is configured to generate a corresponding response indicative of the allocated accelerator slice having completed a respective sub-task. In a further example aspect, corresponding responses are received from each allocated accelerator slice, and a coordinated response indicative of the corresponding responses is generated.
    Type: Application
    Filed: January 27, 2022
    Publication date: July 27, 2023
    Inventors: Gagan GUPTA, Andrew Joseph RUSHING, Eric Francis ROBINSON
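
The abstract above describes task decomposition across accelerator slices in general terms. The Python sketch below is an illustration only, not code from the application: the names (Coordinator, AcceleratorSlice, SubTaskResponse) are invented for the example, and the round-robin slice allocation is an arbitrary stand-in for whatever allocation policy the claims cover.

```python
# Minimal sketch (not from the patent): a coordinator splits a task into
# sub-tasks, assigns each to an accelerator slice, and aggregates responses.
from dataclasses import dataclass

@dataclass
class SubTaskResponse:
    slice_id: int
    sub_task: str
    done: bool

class AcceleratorSlice:
    def __init__(self, slice_id: int):
        self.slice_id = slice_id

    def run(self, sub_task: str) -> SubTaskResponse:
        # A real slice would execute the sub-task instructions in hardware.
        return SubTaskResponse(self.slice_id, sub_task, done=True)

class Coordinator:
    def __init__(self, slices):
        self.slices = slices

    def handle_command(self, task: str, num_subtasks: int):
        sub_tasks = [f"{task}/part-{i}" for i in range(num_subtasks)]
        # Allocate one slice per sub-task (round-robin here for simplicity).
        responses = [self.slices[i % len(self.slices)].run(st)
                     for i, st in enumerate(sub_tasks)]
        # Coordinated response: reflects the completion of every sub-task.
        return {"task": task, "completed": all(r.done for r in responses),
                "parts": len(responses)}

if __name__ == "__main__":
    coord = Coordinator([AcceleratorSlice(i) for i in range(4)])
    print(coord.handle_command("matmul", num_subtasks=8))
```
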
  • Patent number: 11372757
    Abstract: Tracking repeated reads to guide dynamic selection of cache coherence protocols in processor-based devices is disclosed. In this regard, a processor-based device includes processing elements (PEs) and a central ordering point circuit (COP). The COP dynamically selects, on a store-by-store basis, either a write invalidate protocol or a write update protocol as a cache coherence protocol to use for maintaining cache coherency for a memory store operation. The COP's selection is based on protocol preference indicators generated by the PEs using repeat-read indicators that each PE maintains to track whether a coherence granule was repeatedly read by the PE (e.g., as a result of polling reads, or as a result of re-reading the coherence granule after it was evicted from a cache due to an invalidating snoop). After selecting the cache coherence protocol, the COP sends a response message to the PEs indicating the selected cache coherence protocol.
    Type: Grant
    Filed: September 4, 2020
    Date of Patent: June 28, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kevin Neal Magill, Eric Francis Robinson, Derek Bachand, Jason Panavich, Michael B. Mitchell, Michael P. Wilson
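
As a rough illustration of the repeat-read idea in patent 11372757 above, the sketch below models each processing element tracking whether it has re-read a coherence granule and a central ordering point choosing a protocol per store. The class names, the set-based tracking, and the "any PE preferring update wins" policy are assumptions made for the example, not the patented logic.

```python
# Illustrative sketch only: each PE tracks whether it re-read a coherence
# granule, and the central ordering point picks write-update vs.
# write-invalidate on a per-store basis.
class PE:
    def __init__(self, pe_id):
        self.pe_id = pe_id
        self.read_once = set()     # granules read at least once
        self.repeat_read = set()   # granules read again after the first read

    def read(self, granule):
        if granule in self.read_once:
            self.repeat_read.add(granule)   # e.g., polling reads
        self.read_once.add(granule)

    def preference(self, granule):
        # Prefer write-update if this PE keeps re-reading the granule.
        return "update" if granule in self.repeat_read else "invalidate"

class COP:
    def select_protocol(self, granule, pes):
        prefs = [pe.preference(granule) for pe in pes]
        # Policy sketch: any PE that repeatedly reads the granule tips the
        # choice toward write-update; otherwise default to write-invalidate.
        chosen = "update" if "update" in prefs else "invalidate"
        # A response message announcing the choice would go to every PE here.
        return chosen

pes = [PE(0), PE(1)]
pes[1].read(0x40); pes[1].read(0x40)      # PE1 polls granule 0x40
print(COP().select_protocol(0x40, pes))   # -> "update"
print(COP().select_protocol(0x80, pes))   # -> "invalidate"
```
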
  • Patent number: 11354239
    Abstract: Maintaining domain coherence states including Domain State No-Owned (DSN) in processor-based devices is disclosed. In this regard, a processor-based device provides multiple processing elements (PEs) organized into multiple domains, each containing one or more PEs and a local ordering point circuit (LOP). The processor-based device supports domain coherence states for coherence granules cached by the PEs within a given domain. The domain coherence states include a DSN domain coherence state, which indicates that a coherence granule is not cached within a shared modified state within any domain. In some embodiments, upon receiving a request for a read access to a coherence granule, a system ordering point circuit (SOP) determines that the coherence granule is cached in the DSN domain coherence state within a domain of the plurality of domains, and can safely read the coherence granule from the system memory to satisfy the read access if necessary.
    Type: Grant
    Filed: September 18, 2020
    Date of Patent: June 7, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Eric Francis Robinson, Kevin Neal Magill, Jason Panavich, Derek Bachand, Michael B. Mitchell, Michael P. Wilson
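
The following sketch is a loose behavioral model of the DSN idea in patent 11354239 above: a system ordering point reads from system memory when no domain can hold a modified copy. The state names other than DSN, the dictionaries, and the checks are simplifying assumptions, not the patent's state machine.

```python
# Minimal sketch: a system ordering point keeps a per-domain state for each
# coherence granule and reads from system memory when it is safe to do so.
DOMAIN_STATES = {"DSN",   # Domain State No-Owned: no shared-modified copy here
                 "DSM",   # domain may hold a (shared) modified copy (assumed name)
                 "INV"}   # not cached in this domain

class SystemOrderingPoint:
    def __init__(self):
        # granule -> {domain_id: state}
        self.domain_state = {}

    def record(self, granule, domain, state):
        assert state in DOMAIN_STATES
        self.domain_state.setdefault(granule, {})[domain] = state

    def read(self, granule, memory):
        states = self.domain_state.get(granule, {}).values()
        if all(s in ("DSN", "INV") for s in states):
            # No domain can hold a modified copy, so memory data is current.
            return memory[granule]
        # Otherwise the owning domain would have to supply the data (snoop).
        raise RuntimeError("must snoop owning domain")

sop = SystemOrderingPoint()
memory = {0x100: 42}
sop.record(0x100, domain=0, state="DSN")
sop.record(0x100, domain=1, state="INV")
print(sop.read(0x100, memory))  # -> 42, served directly from memory
```
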
  • Publication number: 20220091979
    Abstract: Maintaining domain coherence states including Domain State No-Owned (DSN) in processor-based devices is disclosed. In this regard, a processor-based device provides multiple processing elements (PEs) organized into multiple domains, each containing one or more PEs and a local ordering point circuit (LOP). The processor-based device supports domain coherence states for coherence granules cached by the PEs within a given domain. The domain coherence states include a DSN domain coherence state, which indicates that a coherence granule is not cached within a shared modified state within any domain. In some embodiments, upon receiving a request for a read access to a coherence granule, a system ordering point circuit (SOP) determines that the coherence granule is cached in the DSN domain coherence state within a domain of the plurality of domains, and can safely read the coherence granule from the system memory to satisfy the read access if necessary.
    Type: Application
    Filed: September 18, 2020
    Publication date: March 24, 2022
    Inventors: Eric Francis ROBINSON, Kevin Neal MAGILL, Jason PANAVICH, Derek BACHAND, Michael B. MITCHELL, Michael P. WILSON
  • Publication number: 20220075726
    Abstract: Tracking repeated reads to guide dynamic selection of cache coherence protocols in processor-based devices is disclosed. In this regard, a processor-based device includes processing elements (PEs) and a central ordering point circuit (COP). The COP dynamically selects, on a store-by-store basis, either a write invalidate protocol or a write update protocol as a cache coherence protocol to use for maintaining cache coherency for a memory store operation. The COP's selection is based on protocol preference indicators generated by the PEs using repeat-read indicators that each PE maintains to track whether a coherence granule was repeatedly read by the PE (e.g., as a result of polling reads, or as a result of re-reading the coherence granule after it was evicted from a cache due to an invalidating snoop). After selecting the cache coherence protocol, the COP sends a response message to the PEs indicating the selected cache coherence protocol.
    Type: Application
    Filed: September 4, 2020
    Publication date: March 10, 2022
    Inventors: Kevin Neal MAGILL, Eric Francis ROBINSON, Derek BACHAND, Jason PANAVICH, Michael B. MITCHELL, Michael P. WILSON
  • Patent number: 11226910
    Abstract: Disclosed are ticketed flow control mechanisms in a processing system with one or more masters and one or more slaves. In an aspect, a targeted slave receives a request from a requesting master. If the targeted slave is unavailable to service the request, a ticket for the request is provided to the requesting master. As resources in the targeted slave become available, messages are broadcast for the requesting master to update the ticket value. When the ticket value has been updated to a final value, the requesting master may re-transmit the request.
    Type: Grant
    Filed: March 3, 2020
    Date of Patent: January 18, 2022
    Assignee: Qualcomm Incorporated
    Inventors: Joseph Gerald McDonald, Garrett Michael Drapala, Eric Francis Robinson, Thomas Philip Speier, Kevin Neal Magill, Richard Gerard Hofmann
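
To make the ticketing mechanism of patent 11226910 above concrete, here is a small Python model. The counter-style ticket, the deque of waiters, and the retry rule are assumptions chosen for brevity; the patent does not prescribe this particular bookkeeping.

```python
# Behavioral sketch only: a busy slave hands out a ticket instead of servicing
# a request; as resources free up it broadcasts updates, and the master may
# re-transmit once its ticket reaches the final value.
from collections import deque

class Slave:
    def __init__(self, capacity):
        self.capacity = capacity
        self.in_service = 0
        self.waiting = deque()            # ticketed masters in issue order

    def request(self, master):
        # A retry from the head-of-line waiter gives up its ticket entry.
        if self.waiting and self.waiting[0] is master:
            self.waiting.popleft()
        if self.in_service < self.capacity:
            self.in_service += 1
            return "ACCEPTED"
        master.ticket = len(self.waiting) + 1   # position behind current waiters
        self.waiting.append(master)
        return "TICKET"

    def complete_one(self):
        # A resource freed up: broadcast so every waiter updates its ticket.
        self.in_service -= 1
        for master in self.waiting:
            master.ticket -= 1

class Master:
    def __init__(self, name):
        self.name, self.ticket = name, None

s = Slave(capacity=1)
m1, m2 = Master("m1"), Master("m2")
print(s.request(m1))     # ACCEPTED
print(s.request(m2))     # TICKET; m2.ticket == 1
s.complete_one()
print(m2.ticket)         # 0: final value reached, m2 may re-transmit
```
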
  • Patent number: 11138114
    Abstract: Providing dynamic selection of cache coherence protocols in processor-based devices is disclosed. In this regard, a processor-based device includes a master PE and at least one snooper PE, as well as a central ordering point (COP). The COP dynamically selects, on a store-by-store basis, either a write invalidate protocol or a write update protocol as a cache coherence protocol to use for maintaining cache coherency for a memory store operation by the master PE. The selection is made by the COP based on one or more protocol preference indicators that may be generated and provided by one or more of the master PE, the at least one snooper PE, and the COP itself. After selecting the cache coherence protocol to use, the COP sends a response message to each of the master PE and the at least one snooper PE indicating the selected cache coherence protocol.
    Type: Grant
    Filed: January 8, 2020
    Date of Patent: October 5, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kevin Neal Magill, Eric Francis Robinson, Derek Bachand, Jason Panavich, Michael P. Wilson, Michael B. Mitchell
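
The sketch below illustrates the kind of per-store protocol selection described in patent 11138114 above: the central ordering point combines preference indicators from the master PE, the snooper PEs, and itself. The majority-vote policy and the function names are invented for the example and are not taken from the patent.

```python
# Sketch under assumptions: the central ordering point combines
# protocol-preference indicators from the master PE, the snooper PEs, and
# itself, then tells every PE which protocol governs this store.
def select_protocol(master_pref, snooper_prefs, cop_pref=None):
    """Each preference is 'update', 'invalidate', or None (no preference)."""
    votes = [p for p in [master_pref, *snooper_prefs, cop_pref] if p]
    if not votes:
        return "invalidate"                      # default protocol
    # Simple majority, with ties going to write-invalidate.
    return ("update" if votes.count("update") > votes.count("invalidate")
            else "invalidate")

def notify(pes, protocol):
    # Stand-in for the COP's response message to the master and snooper PEs.
    return {pe: protocol for pe in pes}

chosen = select_protocol("invalidate", ["update", "update"], cop_pref=None)
print(chosen)                                    # -> 'update'
print(notify(["master", "snooper0", "snooper1"], chosen))
```
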
  • Patent number: 11119770
    Abstract: Performing atomic store-and-invalidate operations in processor-based devices is disclosed. In this regard, a processing element (PE) of one or more PEs of a processor-based device includes a store-and-invalidate logic circuit used by a memory access stage of an execution pipeline of the PE to perform an atomic store-and-invalidate operation. Upon receiving an indication to perform a store-and-invalidate operation (e.g., in response to a store-and-invalidate instruction execution) comprising a store address and store data, the memory access stage uses the store-and-invalidate logic circuit to write the store data to a memory location indicated by the store address, and to invalidate an instruction cache line corresponding to the store address in an instruction cache of the PE.
    Type: Grant
    Filed: July 26, 2019
    Date of Patent: September 14, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Thomas Philip Speier, Eric Francis Robinson
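
As a simplified software analogue of the store-and-invalidate operation in patent 11119770 above, the sketch below performs the store and the instruction-cache invalidation as one combined step. The line size, the dictionaries standing in for memory and the instruction cache, and the method name are assumptions for illustration.

```python
# Minimal model (not hardware-accurate): a store-and-invalidate writes the
# data and drops the matching line from the instruction cache as one combined
# operation, which matters for self-modifying or JIT-generated code.
LINE_SIZE = 64

class ProcessingElement:
    def __init__(self):
        self.memory = {}           # address -> stored value
        self.icache = {}           # line address -> cached line contents

    def store_and_invalidate(self, addr, data):
        line = addr & ~(LINE_SIZE - 1)
        self.memory[addr] = data               # the store part
        self.icache.pop(line, None)            # invalidate the I-cache line
        # In hardware both effects are performed atomically by the memory
        # access stage, so stale instruction bytes cannot be fetched between
        # the write and the invalidate.

pe = ProcessingElement()
pe.icache[0x1000] = b"old instruction bytes"
pe.store_and_invalidate(0x1008, 0x90)
print(0x1000 in pe.icache)   # -> False: the line was invalidated
```
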
  • Patent number: 11093396
    Abstract: Enabling atomic memory accesses across coherence granule boundaries in processor-based devices is disclosed. In this regard, a processor-based device includes multiple processing elements (PEs), and further includes a special-purpose central ordering point (SPCOP) configured to distribute coherence granule (“cogran”) pair atomic access (CPAA) tokens. To perform an atomic memory access on a pair of coherence granules, a PE must hold a CPAA token for an address block containing one of the pair of coherence granules before the PE can obtain each of the pair of coherence granules in an exclusive state. Because a CPAA token must be acquired before obtaining exclusive access to at least one of the pair of coherence granules, and because the SPCOP is configured to allow only one CPAA token to be active for a given address block, deadlocks and livelocks between PEs seeking to access the same coherence granules can be avoided.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: August 17, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Eric Francis Robinson, Derek Bachand, Jason Panavich, Kevin Neal Magill, Michael B. Mitchell, Michael P. Wilson
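
The sketch below gives a rough feel for the CPAA-token idea in patent 11093396 above: only one token may be active per address block, so a PE must acquire it before taking both coherence granules of a pair exclusive. The block size, the token table, and the retry convention are assumptions, not the patented protocol.

```python
# Sketch with assumed structure: a special-purpose central ordering point
# (SPCOP) allows one active CPAA token per address block, so two PEs that both
# want the same pair of coherence granules cannot deadlock acquiring them.
BLOCK = 4096

class SPCOP:
    def __init__(self):
        self.active = {}          # address block -> holder PE id

    def request_token(self, pe_id, addr):
        block = addr // BLOCK
        if self.active.setdefault(block, pe_id) == pe_id:
            return True           # token granted (or already held)
        return False              # another PE holds the token: retry later

    def release_token(self, pe_id, addr):
        block = addr // BLOCK
        if self.active.get(block) == pe_id:
            del self.active[block]

def atomic_pair_access(spcop, pe_id, granule_a, granule_b):
    if not spcop.request_token(pe_id, granule_a):
        return "retry"
    # Only after holding the token may the PE take both granules exclusive.
    result = f"PE{pe_id} updated {hex(granule_a)} and {hex(granule_b)}"
    spcop.release_token(pe_id, granule_a)
    return result

spcop = SPCOP()
print(atomic_pair_access(spcop, 0, 0x0FC0, 0x1000))  # pair spans two granules
```
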
  • Publication number: 20210209026
    Abstract: Providing dynamic selection of cache coherence protocols in processor-based devices is disclosed. In this regard, a processor-based device includes a master PE and at least one snooper PE, as well as a central ordering point (COP). The COP dynamically selects, on a store-by-store basis, either a write invalidate protocol or a write update protocol as a cache coherence protocol to use for maintaining cache coherency for a memory store operation by the master PE. The selection is made by the COP based on one or more protocol preference indicators that may be generated and provided by one or more of the master PE, the at least one snooper PE, and the COP itself. After selecting the cache coherence protocol to use, the COP sends a response message to each of the master PE and the at least one snooper PE indicating the selected cache coherence protocol.
    Type: Application
    Filed: January 8, 2020
    Publication date: July 8, 2021
    Inventors: Kevin Neal MAGILL, Eric Francis ROBINSON, Derek BACHAND, Jason PANAVICH, Michael P. WILSON, Michael B. MITCHELL
  • Patent number: 11016899
    Abstract: Selective honoring of speculative memory-prefetch requests based on bandwidth constraint of a memory access path component(s) in a processor-based system. To reduce memory access latency, a CPU includes a request size in a memory read request of requested data to be read from memory and a request mode of the requested data as required or preferred. A memory access path component includes a memory read honor circuit configured to receive the memory read request and consult the request size and request mode of requested data in the memory read request. If the selective prefetch data honor circuit determines that bandwidth of the memory system is less than a defined bandwidth constraint threshold, then the memory read request is forwarded to be fulfilled; otherwise, the memory read request is downgraded to include only any requested required data.
    Type: Grant
    Filed: May 6, 2019
    Date of Patent: May 25, 2021
    Assignee: Qualcomm Incorporated
    Inventors: Nikhil Narendradev Sharma, Eric Francis Robinson, Garrett Michael Drapala, Perry Willmann Remaklus, Jr., Joseph Gerald McDonald, Thomas Philip Speier
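
A small illustration of the bandwidth-gated prefetch honoring described in patent 11016899 above follows. The request fields and the threshold comparison are assumptions distilled from the abstract; the actual circuit-level behavior is not modeled.

```python
# Illustrative sketch: a component on the memory access path forwards the full
# read (required plus preferred prefetch bytes) only while measured bandwidth
# stays under a configured threshold; otherwise it downgrades the request to
# just the required data.
from dataclasses import dataclass

@dataclass
class MemReadRequest:
    addr: int
    required_bytes: int      # data the CPU actually needs now
    preferred_bytes: int     # speculative prefetch portion

def honor(request: MemReadRequest, bandwidth_used: float, threshold: float):
    if bandwidth_used < threshold:
        size = request.required_bytes + request.preferred_bytes
        return ("forwarded", size)
    # Too busy: drop the speculative portion, keep only what is required.
    return ("downgraded", request.required_bytes)

req = MemReadRequest(addr=0x8000, required_bytes=64, preferred_bytes=192)
print(honor(req, bandwidth_used=0.40, threshold=0.75))  # full 256 bytes
print(honor(req, bandwidth_used=0.90, threshold=0.75))  # only 64 bytes
```
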
  • Publication number: 20210141726
    Abstract: Enabling atomic memory accesses across coherence granule boundaries in processor-based devices is disclosed. In this regard, a processor-based device includes multiple processing elements (PEs), and further includes a special-purpose central ordering point (SPCOP) configured to distribute coherence granule (“cogran”) pair atomic access (CPAA) tokens. To perform an atomic memory access on a pair of coherence granules, a PE must hold a CPAA token for an address block containing one of the pair of coherence granules before the PE can obtain each of the pair of coherence granules in an exclusive state. Because a CPAA token must be acquired before obtaining exclusive access to at least one of the pair of coherence granules, and because the SPCOP is configured to allow only one CPAA token to be active for a given address block, deadlocks and livelocks between PEs seeking to access the same coherence granules can be avoided.
    Type: Application
    Filed: November 8, 2019
    Publication date: May 13, 2021
    Inventors: Eric Francis ROBINSON, Derek BACHAND, Jason PANAVICH, Kevin Neal MAGILL, Michael B. MITCHELL, Michael P. WILSON
  • Publication number: 20210026636
    Abstract: Performing atomic store-and-invalidate operations in processor-based devices is disclosed. In this regard, a processing element (PE) of one or more PEs of a processor-based device includes a store-and-invalidate logic circuit used by a memory access stage of an execution pipeline of the PE to perform an atomic store-and-invalidate operation. Upon receiving an indication to perform a store-and-invalidate operation (e.g., in response to a store-and-invalidate instruction execution) comprising a store address and store data, the memory access stage uses the store-and-invalidate logic circuit to write the store data to a memory location indicated by the store address, and to invalidate an instruction cache line corresponding to the store address in an instruction cache of the PE.
    Type: Application
    Filed: July 26, 2019
    Publication date: January 28, 2021
    Inventors: Thomas Philip SPEIER, Eric Francis ROBINSON
  • Patent number: 10896135
    Abstract: Facilitating page table entry (PTE) maintenance in processor-based devices is disclosed. In this regard, a processor-based device includes processing elements (PEs) configured to support two new coherence states: walker-readable (W) and modified walker accessible (MW). The W coherence state indicates that read access to a corresponding coherence granule by hardware table walkers (HTWs) is permitted, but all write operations and all read operations by non-HTW agents are disallowed. The MW coherence state indicates that cached copies of the coherence granule visible only to HTWs may exist in other caches. In some embodiments, each PE is also configured to support a special page table entry (SP-PTE) field store instruction for modifying SP-PTE fields of a PTE, indicating to the PE's local cache that the corresponding coherence granule should transition to the MW state, and indicating to remote local caches that copies of the coherence granule should update their coherence state.
    Type: Grant
    Filed: September 3, 2019
    Date of Patent: January 19, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Eric Francis Robinson, Jason Panavich, Thomas Philip Speier
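
The sketch below is a toy model of the walker-oriented coherence states in patent 10896135 above: W permits reads only by hardware table walkers, and a special PTE-field store moves the local copy to MW. The permission checks and state encoding are simplifying assumptions, not the patent's cache protocol.

```python
# Simplified sketch: W is readable only by hardware table walkers (HTWs), and
# an SP-PTE field store moves the local cached copy to MW.
class CacheLine:
    def __init__(self, state="I"):
        self.state = state        # e.g. "M", "S", "I", "W", "MW"

    def can_read(self, is_htw: bool) -> bool:
        if self.state in ("W", "MW"):
            return is_htw         # walker-accessible states: HTW reads only
        return self.state != "I"

    def sp_pte_store(self):
        # Special PTE-field store: the local copy becomes modified walker
        # accessible; remote copies would adjust their state in hardware.
        self.state = "MW"

line = CacheLine(state="W")
print(line.can_read(is_htw=True))    # True: a table walker may read it
print(line.can_read(is_htw=False))   # False: non-HTW agents are blocked
line.sp_pte_store()
print(line.state)                    # 'MW'
```
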
  • Publication number: 20200356486
    Abstract: Selective honoring of speculative memory-prefetch requests based on bandwidth constraint of a memory access path component(s) in a processor-based system. To reduce memory access latency, a CPU includes a request size in a memory read request of requested data to be read from memory and a request mode of the requested data as required or preferred. A memory access path component includes a memory read honor circuit configured to receive the memory read request and consult the request size and request mode of requested data in the memory read request. If the selective prefetch data honor circuit determines that bandwidth of the memory system is less than a defined bandwidth constraint threshold, then the memory read request is forwarded to be fulfilled; otherwise, the memory read request is downgraded to include only any requested required data.
    Type: Application
    Filed: May 6, 2019
    Publication date: November 12, 2020
    Inventors: Nikhil Narendradev Sharma, Eric Francis Robinson, Garrett Michael Drapala, Perry Willmann Remaklus, JR., Joseph Gerald McDonald, Thomas Philip Speier
  • Publication number: 20200285597
    Abstract: Disclosed are ticketed flow control mechanisms in a processing system with one or more masters and one or more slaves. In an aspect, a targeted slave receives a request from a requesting master. If the targeted slave is unavailable to service the request, a ticket for the request is provided to the requesting master. As resources in the targeted slave become available, messages are broadcast for the requesting master to update the ticket value. When the ticket value has been updated to a final value, the requesting master may re-transmit the request.
    Type: Application
    Filed: March 3, 2020
    Publication date: September 10, 2020
    Inventors: Joseph Gerald MCDONALD, Garrett Michael DRAPALA, Eric Francis ROBINSON, Thomas Philip SPEIER, Kevin Neal MAGILL, Richard Gerard HOFMANN
  • Publication number: 20190087333
    Abstract: Converting a stale cache memory unique request to a read unique snoop response in a multiple (multi-) central processing unit (CPU) processor is disclosed. The multi-CPU processor includes a plurality of CPUs that each have access to either private or shared cache memories in a cache memory system. Multiple CPUs issuing unique requests to write data to the same coherence granule in a cache memory causes one unique request for a requesting CPU to be serviced or "win" to allow that CPU to obtain the coherence granule in a unique state, while the other unsuccessful unique requests become stale. To avoid retried unique requests being reordered behind other pending, younger requests, which would lead to a lack of forward progress due to starvation or livelock, the snooped stale unique requests are converted to read unique snoop responses so that their request order can be maintained in the cache memory system.
    Type: Application
    Filed: September 12, 2018
    Publication date: March 21, 2019
    Inventors: Eric Francis Robinson, Thomas Philip Speier, Joseph Gerald McDonald, Garrett Michael Drapala, Kevin Neal Magill
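
As a coarse model of the conversion described in publication 20190087333 above, the sketch below arbitrates racing unique requests and converts the losers in place rather than retrying them, so their position in the request order is preserved. The queueing model and tuple encoding are assumptions made for the example.

```python
# Coarse sketch: when several CPUs race unique (write-intent) requests to one
# coherence granule, the loser's stale request is converted to a read unique
# snoop response instead of being retried behind younger requests.
from collections import deque

def arbitrate(requests):
    """requests: iterable of (cpu, granule, kind) in arrival order."""
    granted, order = set(), []
    for cpu, granule, kind in requests:
        if kind == "unique" and granule in granted:
            # Stale unique request: convert rather than push to the back.
            kind = "read_unique_snoop_response"
        elif kind == "unique":
            granted.add(granule)               # this request "wins"
        order.append((cpu, granule, kind))
    return order

reqs = deque([("cpu0", 0x40, "unique"),
              ("cpu1", 0x40, "unique"),        # loses the race, becomes stale
              ("cpu1", 0x80, "unique")])
for entry in arbitrate(reqs):
    print(entry)
```
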
  • Patent number: 9934149
    Abstract: Systems and methods relate to servicing a demand miss for a cache line in a first cache (e.g., an L1 cache) of a processing system, for example, when none of one or more fill buffers for servicing the demand miss are available. In exemplary aspects, the demand miss is converted to a prefetch operation to prefetch the cache line into a second cache (e.g., an L2 cache), wherein the second cache is a backing storage location for the first cache. Thus, servicing the demand miss is not delayed until a fill buffer becomes available, and once a fill buffer becomes available, the prefetched cache line is returned from the second cache to the available fill buffer.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: April 3, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Khary Jason Alexander, Eric Francis Robinson
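
The sketch below illustrates the demand-miss-to-prefetch conversion of patent 9934149 above: when no L1 fill buffer is free, the miss is issued as an L2 prefetch instead of waiting. The buffer bookkeeping and cache classes are assumptions for illustration only.

```python
# Behavioral sketch: if every L1 fill buffer is busy, the demand miss is
# issued as an L2 prefetch instead of stalling; once a fill buffer frees up,
# the line can be brought from the L2 into the L1.
class L1:
    def __init__(self, fill_buffers=2):
        self.free_buffers = fill_buffers
        self.lines = set()

class L2:
    def __init__(self):
        self.lines = set()

    def prefetch(self, addr):
        self.lines.add(addr)          # line is now staged in the L2

def handle_demand_miss(l1, l2, addr):
    if l1.free_buffers > 0:
        l1.free_buffers -= 1
        l1.lines.add(addr)            # normal fill path
        return "filled L1"
    l2.prefetch(addr)                 # convert the miss into an L2 prefetch
    return "prefetched into L2"

l1, l2 = L1(fill_buffers=0), L2()
print(handle_demand_miss(l1, l2, 0x40))   # -> 'prefetched into L2'
l1.free_buffers = 1                        # a buffer later becomes available
print(handle_demand_miss(l1, l2, 0x40))   # -> 'filled L1' (hits the L2 copy)
```
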
  • Publication number: 20170371783
    Abstract: Self-aware, peer-to-peer cache transfers between local, shared cache memories in a multi-processor system are disclosed. A shared cache memory system is provided comprising local shared cache memories accessible by an associated central processing unit (CPU) and other CPUs in a peer-to-peer manner. When a CPU desires to request a cache transfer (e.g., in response to a cache eviction), the CPU acting as a master CPU issues a cache transfer request. In response, target CPUs issue snoop responses indicating their willingness to accept the cache transfer. The target CPUs also use the snoop responses to be self-aware of the willingness of other target CPUs to accept the cache transfer. The target CPUs willing to accept the cache transfer use a predefined target CPU selection scheme to determine their acceptance of the cache transfer. This can avoid a CPU making multiple requests to find a target CPU for a cache transfer.
    Type: Application
    Filed: June 24, 2016
    Publication date: December 28, 2017
    Inventors: Hien Minh Le, Thuong Quang Truong, Eric Francis Robinson, Brad Herold, Robert Bell, JR.
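
To illustrate the self-aware selection in publication 20170371783 above, the sketch below has every target CPU apply the same rule to the same set of snoop responses, so the accepting CPU is determined without further requests from the master. The "lowest willing CPU id" rule is merely an example of a predefined selection scheme, not necessarily the patent's.

```python
# Sketch under assumptions: each target CPU sees all snoop responses, so every
# CPU can independently work out which willing peer accepts the evicted line.
def select_acceptor(snoop_responses):
    # snoop_responses: cpu_id -> willing-to-accept flag, visible to all targets.
    willing = sorted(cpu for cpu, ok in snoop_responses.items() if ok)
    return willing[0] if willing else None   # same rule evaluated by every CPU

responses = {1: False, 2: True, 3: True}     # CPU 0 is the evicting master
print(select_acceptor(responses))            # -> 2: CPU 2 accepts the line,
                                             # with no further master requests
```
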
  • Patent number: 9817760
    Abstract: The disclosure relates to filtering snoops in coherent multiprocessor systems. For example, in response to a request to update a target memory location at a Level-2 (L2) cache shared among multiple local processing units each having a Level-1 (L1) cache, a lookup based on the target memory location may be performed in a snoop filter that tracks entries in the L1 caches. If the lookup misses the snoop filter and the snoop filter lacks space to store a new entry, a victim entry to evict from the snoop filter may be selected and a request to invalidate every cache line that maps to the victim entry may be sent to at least one of the processing units with one or more cache lines that map to the victim entry. The victim entry may then be replaced in the snoop filter with the new entry corresponding to the target memory location.
    Type: Grant
    Filed: March 7, 2016
    Date of Patent: November 14, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Eric Francis Robinson, Khary Jason Alexander, Zeid Hartuon Samoail, Benjamin Charles Michelson
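
Finally, a compact model of the snoop-filter eviction path in patent 9817760 above: when a lookup misses and the filter is full, a victim entry is chosen, back-invalidations are sent for the L1 lines that map to it, and the new entry is installed. The LRU-style replacement and the data structures are assumptions for the example.

```python
# Minimal sketch: a full snoop filter evicts a victim entry and back-invalidates
# the L1 lines that map to it before installing the new target entry.
from collections import OrderedDict

class SnoopFilter:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()       # tag -> set of L1 sharers

    def update(self, tag, sharer):
        if tag in self.entries:            # hit: just note the sharer
            self.entries[tag].add(sharer)
            self.entries.move_to_end(tag)
            return []
        invalidations = []
        if len(self.entries) >= self.capacity:
            victim, sharers = self.entries.popitem(last=False)  # oldest entry
            # Ask each L1 holding a line that maps to the victim to drop it.
            invalidations = [(cpu, victim) for cpu in sharers]
        self.entries[tag] = {sharer}
        return invalidations

sf = SnoopFilter(capacity=1)
print(sf.update(0x100, sharer=0))   # []: fits without eviction
print(sf.update(0x200, sharer=1))   # [(0, 256)]: back-invalidate CPU 0's
                                    # lines for victim tag 0x100
```
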