Patents by Inventor Alejandro Rico Carro
Alejandro Rico Carro has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11954040
Abstract: Various implementations described herein are directed to a device. The device may include a first tier having a processor and a first cache memory that are coupled together via control logic to operate as a computing architecture. The device may include a second tier having a second cache memory that is coupled to the first cache memory. Also, the first tier and the second tier may be integrated together with the computing architecture to operate as a stackable cache memory architecture.
Type: Grant
Filed: June 15, 2020
Date of Patent: April 9, 2024
Assignee: Arm Limited
Inventors: Alejandro Rico Carro, Douglas Joseph, Saurabh Pijuskumar Sinha
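As a rough illustration of the stacked-cache idea, the sketch below models a first-tier cache whose misses fall through to a second, stacked tier before going to memory. The class names, capacities, eviction policy, and promote-on-hit behavior are illustrative assumptions, not details from the patent.

```python
class CacheTier:
    """One tier of cache: a bounded address -> data store."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = {}  # address -> data

    def lookup(self, addr):
        return self.lines.get(addr)

    def fill(self, addr, data):
        if len(self.lines) >= self.capacity:
            # Evict the oldest fill (simplistic FIFO policy for illustration).
            self.lines.pop(next(iter(self.lines)))
        self.lines[addr] = data


class StackedCache:
    """A first-tier cache backed by a second, vertically stacked tier."""
    def __init__(self, tier1_capacity, tier2_capacity, memory):
        self.tier1 = CacheTier(tier1_capacity)
        self.tier2 = CacheTier(tier2_capacity)
        self.memory = memory

    def read(self, addr):
        data = self.tier1.lookup(addr)
        if data is not None:
            return data, "tier1-hit"
        data = self.tier2.lookup(addr)      # miss falls through to tier 2
        if data is not None:
            self.tier1.fill(addr, data)     # promote into the first tier
            return data, "tier2-hit"
        data = self.memory[addr]            # both tiers miss: fetch from memory
        self.tier2.fill(addr, data)
        self.tier1.fill(addr, data)
        return data, "miss"
```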
-
Patent number: 11899583
Abstract: Various implementations described herein are directed to a device with a multi-layered logic structure with multiple layers including a first layer and a second layer arranged vertically in a stacked configuration. The device may have a first cache memory with first interconnect logic disposed in the first layer. The device may have a second cache memory with second interconnect logic disposed in the second layer, wherein the second interconnect logic in the second layer is linked to the first interconnect logic in the first layer.
Type: Grant
Filed: July 29, 2021
Date of Patent: February 13, 2024
Assignee: Arm Limited
Inventors: Joshua Randall, Alejandro Rico Carro, Dam Sunwoo, Saurabh Pijuskumar Sinha, Jamshed Jalal
-
Patent number: 11899607
Abstract: An apparatus comprises an interconnect providing communication paths between agents coupled to the interconnect. A coordination agent is provided which performs an operation requiring sending a request to each of a plurality of target agents, and receiving a response from each of the target agents, the operation being unable to complete until the response has been received from each of the target agents. Storage circuitry is provided which is accessible to the coordination agent and configured to store, for each agent that the coordination agent may communicate with via the interconnect, a latency indication for communication between that agent and the coordination agent. The coordination agent is configured, prior to performing the operation, to determine a sending order in which to send the request to each of the target agents, the sending order being determined in dependence on the latency indication for each of the target agents.
Type: Grant
Filed: June 1, 2021
Date of Patent: February 13, 2024
Assignee: Arm Limited
Inventors: Timothy Hayes, Alejandro Rico Carro, Tushar P. Ringe, Kishore Kumar Jagadeesha
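The ordering idea can be sketched as follows: if requests leave the coordination agent one at a time, contacting the highest-latency targets first minimizes the time until the last response arrives. The function names and the one-cycle-per-send cost model are assumptions for illustration only.

```python
def sending_order(latency_by_agent, targets):
    """Order requests so the highest-latency targets are contacted first,
    letting their long round trips overlap with the remaining sends."""
    return sorted(targets, key=lambda a: latency_by_agent[a], reverse=True)


def completion_time(order, latency_by_agent, send_cost=1):
    """Cycle at which the slowest response has returned, assuming one
    request is issued every send_cost cycles (illustrative cost model)."""
    return max((i + 1) * send_cost + latency_by_agent[a]
               for i, a in enumerate(order))
```

With latencies {a: 5, b: 12, c: 3}, sending in descending-latency order completes in 13 cycles versus 15 for the ascending order.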
-
Publication number: 20230385127
Abstract: Apparatus comprises a plurality of processing elements; and control circuitry to communicate with the plurality of processing elements by a data communication path; the control circuitry being configured, in response to a request issued by a given processing element of the plurality of processing elements, to initiate a hybrid operation by issuing a command defining the hybrid operation to a group of processing elements comprising at least a subset of the plurality of processing elements, the hybrid operation comprising performance of a control function selected from a predetermined set of one or more control functions and initiation of performance of a synchronization event, the synchronization event comprising each of the group of processing elements providing confirmation that any control functions pending at that processing element have reached at least a predetermined stage of execution; in which the given processing element is configured to inhibit the issuance of any further requests to the control circuitry…
Type: Application
Filed: May 25, 2022
Publication date: November 30, 2023
Inventors: Timothy Hayes, Alejandro Rico Carro
-
Patent number: 11797307
Abstract: In response to an instruction decoder decoding a range prefetch instruction specifying first and second address-range-specifying parameters and a stride parameter, prefetch circuitry controls, depending on the first and second address-range-specifying parameters and the stride parameter, prefetching of data from a plurality of specified ranges of addresses into the at least one cache. A start address and size of each specified range is dependent on the first and second address-range-specifying parameters. The stride parameter specifies an offset between start addresses of successive specified ranges. Use of the range prefetch instruction helps to improve programmability and improve the balance between prefetch coverage and circuit area of the prefetch circuitry.
Type: Grant
Filed: June 23, 2021
Date of Patent: October 24, 2023
Assignee: Arm Limited
Inventors: Krishnendra Nathella, David Hennah Mansell, Alejandro Rico Carro, Andrew Mundy
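A sketch of how the parameters might expand into concrete address ranges: a base address and length (derived from the two address-range-specifying parameters) plus a stride between successive range starts. The explicit range count is an assumption added for illustration; the abstract does not name such a parameter.

```python
def prefetch_ranges(base, length, stride, count):
    """Expand a range prefetch request into (start, end) address pairs.

    base, length: start address and size of the first range (assumed to be
        derived from the two address-range-specifying parameters).
    stride: offset between start addresses of successive ranges.
    count: number of ranges to generate (illustrative assumption).
    """
    return [(base + i * stride, base + i * stride + length)
            for i in range(count)]
```

For example, `prefetch_ranges(0x1000, 64, 256, 3)` describes three 64-byte ranges whose start addresses are 256 bytes apart.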
-
Publication number: 20230037714
Abstract: Various implementations described herein refer to a device having a multi-layered logic structure with multiple layers including a first layer and a second layer arranged vertically in a stacked configuration. The device may have a first network that links nodes together in the first layer. The device may have a second network that links the nodes in the first layer together by way of the second layer so as to reduce latency related to data transfer between the nodes.
Type: Application
Filed: August 6, 2021
Publication date: February 9, 2023
Inventors: Alejandro Rico Carro, Saurabh Pijuskumar Sinha, Douglas James Joseph, Tiago Rogerio Muck
-
Publication number: 20230029860
Abstract: Various implementations described herein are directed to a device with a multi-layered logic structure with multiple layers including a first layer and a second layer arranged vertically in a stacked configuration. The device may have a first cache memory with first interconnect logic disposed in the first layer. The device may have a second cache memory with second interconnect logic disposed in the second layer, wherein the second interconnect logic in the second layer is linked to the first interconnect logic in the first layer.
Type: Application
Filed: July 29, 2021
Publication date: February 2, 2023
Inventors: Joshua Randall, Alejandro Rico Carro, Dam Sunwoo, Saurabh Pijuskumar Sinha, Jamshed Jalal
-
Publication number: 20220413866
Abstract: In response to an instruction decoder decoding a range prefetch instruction specifying first and second address-range-specifying parameters and a stride parameter, prefetch circuitry controls, depending on the first and second address-range-specifying parameters and the stride parameter, prefetching of data from a plurality of specified ranges of addresses into the at least one cache. A start address and size of each specified range is dependent on the first and second address-range-specifying parameters. The stride parameter specifies an offset between start addresses of successive specified ranges. Use of the range prefetch instruction helps to improve programmability and improve the balance between prefetch coverage and circuit area of the prefetch circuitry.
Type: Application
Filed: June 23, 2021
Publication date: December 29, 2022
Inventors: Krishnendra Nathella, David Hennah Mansell, Alejandro Rico Carro, Andrew Mundy
-
Publication number: 20220382703
Abstract: An apparatus comprises an interconnect providing communication paths between agents coupled to the interconnect. A coordination agent is provided which performs an operation requiring sending a request to each of a plurality of target agents, and receiving a response from each of the target agents, the operation being unable to complete until the response has been received from each of the target agents. Storage circuitry is provided which is accessible to the coordination agent and configured to store, for each agent that the coordination agent may communicate with via the interconnect, a latency indication for communication between that agent and the coordination agent. The coordination agent is configured, prior to performing the operation, to determine a sending order in which to send the request to each of the target agents, the sending order being determined in dependence on the latency indication for each of the target agents.
Type: Application
Filed: June 1, 2021
Publication date: December 1, 2022
Inventors: Timothy Hayes, Alejandro Rico Carro, Tushar P. Ringe, Kishore Kumar Jagadeesha
-
Patent number: 11263137
Abstract: A method and apparatus is disclosed for transferring data from a first processor core to a second processor core. The first processor core executes a stash instruction having a first operand associated with a data address of the data. A second processor core is determined to be a stash target for a stash message, based on the data address or a second operand. A stash message is sent to the second processor core, notifying the second processor core of the written data. Responsive to receiving the stash message, the second processor core can opt to store the data in its cache. The data may be included in the stash message or retrieved in response to a read request by the second processor core. The second processor core may be determined by prediction based, at least in part, on monitored data transactions.
Type: Grant
Filed: May 27, 2020
Date of Patent: March 1, 2022
Assignee: Arm Limited
Inventors: Jose Alberto Joao, Tiago Rogerio Muck, Joshua Randall, Alejandro Rico Carro, Bruce James Mathewson
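The target-prediction step might look like the following sketch, which assumes the predictor tracks the last core to read each address and stashes the next write there. The heuristic, class, and method names are illustrative assumptions, not the patented mechanism itself.

```python
class StashPredictor:
    """Predict which core to stash freshly written data to, based on
    monitored transactions: the core that last read an address is assumed
    to want the next write to it (illustrative heuristic)."""
    def __init__(self):
        self.last_reader = {}  # address -> core id

    def observe_read(self, core, addr):
        """Record a monitored read transaction."""
        self.last_reader[addr] = core

    def predict_target(self, addr, explicit_target=None):
        """Pick a stash target: an explicit second operand overrides
        prediction; otherwise fall back to the last observed reader."""
        if explicit_target is not None:
            return explicit_target
        return self.last_reader.get(addr)
```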
-
Patent number: 11249657
Abstract: Non-volatile storage circuitry is provided as primary storage accessible to processing circuitry, e.g. as registers, a cache, scratchpad memory, TLB or on-chip RAM. Power control circuitry powers down a given region of the non-volatile storage circuitry when information stored in said given region is not being used. This provides opportunities for more frequent power savings than would be possible if primary storage was implemented using volatile storage.
Type: Grant
Filed: July 10, 2019
Date of Patent: February 15, 2022
Assignee: Arm Limited
Inventors: Christopher Neal Hinds, Jesse Garrett Beu, Alejandro Rico Carro, Jose Alberto Joao
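A minimal model of the power-gating idea: because the cells are non-volatile, an idle region can be powered down and its contents read back intact after wake-up. The wake-on-access policy and all names are assumptions for illustration.

```python
class NvRegionPowerControl:
    """Model of power control over regions of non-volatile primary storage:
    a region can be powered down when unused, and its data survives."""
    def __init__(self, regions):
        self.powered = {r: True for r in regions}
        self.data = {r: None for r in regions}

    def write(self, region, value):
        self.powered[region] = True
        self.data[region] = value

    def power_down(self, region):
        # Data is retained without power because the cells are non-volatile.
        self.powered[region] = False

    def read(self, region):
        if not self.powered[region]:
            self.powered[region] = True  # wake on access (assumed policy)
        return self.data[region]
```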
-
Publication number: 20210390059
Abstract: Various implementations described herein are directed to a device. The device may include a first tier having a processor and a first cache memory that are coupled together via control logic to operate as a computing architecture. The device may include a second tier having a second cache memory that is coupled to the first cache memory. Also, the first tier and the second tier may be integrated together with the computing architecture to operate as a stackable cache memory architecture.
Type: Application
Filed: June 15, 2020
Publication date: December 16, 2021
Inventors: Alejandro Rico Carro, Douglas Joseph, Saurabh Pijuskumar Sinha
-
Publication number: 20210374059
Abstract: A method and apparatus is disclosed for transferring data from a first processor core to a second processor core. The first processor core executes a stash instruction having a first operand associated with a data address of the data. A second processor core is determined to be a stash target for a stash message, based on the data address or a second operand. A stash message is sent to the second processor core, notifying the second processor core of the written data. Responsive to receiving the stash message, the second processor core can opt to store the data in its cache. The data may be included in the stash message or retrieved in response to a read request by the second processor core. The second processor core may be determined by prediction based, at least in part, on monitored data transactions.
Type: Application
Filed: May 27, 2020
Publication date: December 2, 2021
Applicant: Arm Limited
Inventors: Jose Alberto Joao, Tiago Rogerio Muck, Joshua Randall, Alejandro Rico Carro, Bruce James Mathewson
-
Patent number: 11144318
Abstract: A method and apparatus for application thread prioritization to mitigate the effects of operating system noise is disclosed. The method generally includes executing in parallel a plurality of application threads of a parallel application. An interrupt condition of an application thread of the plurality of application threads is detected. A priority of the interrupted application thread is changed relative to priorities of one or more other application threads of the plurality of application threads, and control is returned to the interrupted application thread after the interrupt condition. The interrupted application thread then resumes execution in accordance with the changed priority.
Type: Grant
Filed: August 26, 2019
Date of Patent: October 12, 2021
Assignee: Arm Limited
Inventors: Alejandro Rico Carro, Joshua Randall, Jose Alberto Joao
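The prioritization step can be sketched as a simple priority adjustment: after an interrupt steals time from one thread of a parallel application, that thread's priority is raised relative to its peers so it can catch up. Integer priorities (larger means more favored), the function name, and the boost amount are illustrative assumptions.

```python
def rebalance_after_interrupt(priorities, interrupted, boost=1):
    """Return a new priority map in which the interrupted thread's priority
    is raised relative to the other application threads, so the scheduler
    favors it when control returns after the interrupt condition."""
    out = dict(priorities)  # leave the caller's map untouched
    out[interrupted] += boost
    return out
```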
-
Patent number: 11082493
Abstract: Briefly, example methods, apparatuses, and/or articles of manufacture are disclosed that may be implemented, in whole or in part, using one or more mobile communication devices and/or processing devices to facilitate and/or support one or more operations and/or techniques for executing distributed memory operations. In particular, some embodiments are directed to techniques for the traversal of vertices of a data structure maintained in a distributed memory system.
Type: Grant
Filed: November 16, 2018
Date of Patent: August 3, 2021
Assignee: Arm Limited
Inventors: Pavel Shamis, Alejandro Rico Carro
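One way to picture such a traversal: vertices are partitioned across the nodes of a distributed memory system, and each frontier vertex is expanded at the node that owns it. The modulo partition, the breadth-first framing, and all names are illustrative assumptions, not the patented technique.

```python
from collections import deque

def owner(vertex, num_nodes):
    """Node owning a vertex under a simple modulo partition (assumption)."""
    return vertex % num_nodes

def distributed_bfs(adj, start, num_nodes):
    """Breadth-first traversal where each frontier vertex is expanded 'at'
    its owning node; the (vertex, node) pairs returned model where each
    piece of work would execute in the distributed memory system."""
    visited = {start}
    frontier = deque([start])
    order = []
    while frontier:
        v = frontier.popleft()
        order.append((v, owner(v, num_nodes)))
        for w in adj.get(v, []):
            if w not in visited:
                visited.add(w)
                frontier.append(w)
    return order
```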
-
Publication number: 20210064371
Abstract: A method and apparatus for application thread prioritization to mitigate the effects of operating system noise is disclosed. The method generally includes executing in parallel a plurality of application threads of a parallel application. An interrupt condition of an application thread of the plurality of application threads is detected. A priority of the interrupted application thread is changed relative to priorities of one or more other application threads of the plurality of application threads, and control is returned to the interrupted application thread after the interrupt condition. The interrupted application thread then resumes execution in accordance with the changed priority.
Type: Application
Filed: August 26, 2019
Publication date: March 4, 2021
Applicant: Arm Limited
Inventors: Alejandro Rico Carro, Joshua Randall, Jose Alberto Joao
-
Publication number: 20210011638
Abstract: Non-volatile storage circuitry is provided as primary storage accessible to processing circuitry, e.g. as registers, a cache, scratchpad memory, TLB or on-chip RAM. Power control circuitry powers down a given region of the non-volatile storage circuitry when information stored in said given region is not being used. This provides opportunities for more frequent power savings than would be possible if primary storage was implemented using volatile storage.
Type: Application
Filed: July 10, 2019
Publication date: January 14, 2021
Inventors: Christopher Neal Hinds, Jesse Garrett Beu, Alejandro Rico Carro, Jose Alberto Joao
-
Patent number: 10776266
Abstract: Aspects of the present disclosure relate to an apparatus comprising a requester master processing device having an associated private cache storage to store data for access by the requester master processing device. The requester master processing device is arranged to issue a request to modify data that is associated with a given memory address and stored in a private cache storage associated with a recipient master processing device. The private cache storage associated with the recipient master processing device is arranged to store data for access by the recipient master processing device. The apparatus further comprises the recipient master processing device having its private cache storage. One of the recipient master processing device and its associated private cache storage is arranged to perform the requested modification of the data while the data is stored in the cache storage associated with the recipient master processing device.
Type: Grant
Filed: November 7, 2018
Date of Patent: September 15, 2020
Assignee: Arm Limited
Inventors: Joshua Randall, Alejandro Rico Carro, Jose Alberto Joao, Richard William Earnshaw, Alasdair Grant
-
Patent number: 10733106
Abstract: A method and apparatus are provided for automatic routing of messages in a data processing system. An incoming message at an input/output (I/O) interface of the data processing system includes a message identifier and payload data. Match information, including an indicator of whether the message identifier of the incoming message matches an identifier of a request in a receive queue (RQ), is used to determine a destination for the incoming message. The incoming message is forwarded to the determined destination. Information, such as payload size and RQ position, may be used to determine allocation of the payload within a cache or cache hierarchy.
Type: Grant
Filed: November 2, 2017
Date of Patent: August 4, 2020
Assignee: ARM LTD
Inventors: Pavel Shamis, Alejandro Rico Carro
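A sketch of the matching step: the incoming message's identifier is checked against requests posted in the receive queue, and a match yields both a destination and the RQ position. The dictionary representation of queue entries and the "memory" fallback are assumptions for illustration.

```python
def route_incoming(message_id, receive_queue, default_dest="memory"):
    """Route an incoming message: if its identifier matches a posted
    request in the receive queue, deliver toward that request's
    destination; otherwise fall back to a default destination.

    Returns (destination, rq_position); rq_position is None on no match."""
    for pos, req in enumerate(receive_queue):
        if req["id"] == message_id:
            return req["dest"], pos
    return default_dest, None
```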
-
Patent number: 10664419
Abstract: A method and apparatus are provided for assigning transport priorities to messages in a data processing system. An incoming message at an input/output (I/O) interface of the data processing system includes a message identifier and payload data. Match information, including an indicator of whether the message identifier of the incoming message matches an identifier of a request in a receive queue (RQ), is used to assign a transport priority value to the incoming message. The incoming message is transported to the destination node through an interconnect structure dependent upon the assigned transport priority value.
Type: Grant
Filed: January 29, 2018
Date of Patent: May 26, 2020
Assignee: Arm Limited
Inventors: Alejandro Rico Carro, Pavel Shamis, Stephan Diestelhorst
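The priority assignment can be sketched similarly: a message whose identifier matches a posted receive-queue request (i.e. one a receiver is already waiting for) travels through the interconnect at higher priority. The numeric convention (lower value means higher priority) and the function name are invented for illustration.

```python
def transport_priority(message_id, receive_queue,
                       matched_prio=0, unmatched_prio=1):
    """Assign a transport priority value to an incoming message based on
    whether its identifier matches a request in the receive queue
    (lower value = higher priority, an assumed convention)."""
    matched = any(req == message_id for req in receive_queue)
    return matched_prio if matched else unmatched_prio
```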