Patents by Inventor Niall Power

Niall Power is named as an inventor on the following patents and patent applications. The listing includes applications that are still pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240134786
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed for sparse tensor storage for neural network accelerators. An example apparatus includes sparsity map generating circuitry to generate a sparsity map corresponding to a tensor, the sparsity map to indicate whether a data point of the tensor is zero, static storage controlling circuitry to divide the tensor into one or more storage elements, and a compressor to perform a first compression of the one or more storage elements to generate one or more compressed storage elements, the first compression to remove zero points of the one or more storage elements based on the sparsity map and perform a second compression of the one or more compressed storage elements, the second compression to store the one or more compressed storage elements contiguously in memory.
    Type: Application
    Filed: December 14, 2023
    Publication date: April 25, 2024
    Applicant: Intel Corporation
    Inventors: Martin-Thomas Grymel, David Bernard, Niall Hanrahan, Martin Power, Kevin Brady, Gary Baugh, Cormac Brick
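The two-stage scheme this abstract describes (a sparsity map marking zero points, zero removal per storage element, then contiguous packing) can be sketched in a few lines of Python. This is a minimal illustration, not the patented circuitry; the names `compress_sparse_tensor` and `element_size` are illustrative.

```python
import numpy as np

def compress_sparse_tensor(tensor, element_size=4):
    """Two-stage compression: drop zero points per the sparsity map,
    then pack the surviving values contiguously."""
    flat = tensor.flatten()
    sparsity_map = (flat != 0)  # one flag per data point: zero or not
    # First compression: remove zero points from each storage element.
    elements = [flat[i:i + element_size] for i in range(0, len(flat), element_size)]
    compressed = [e[e != 0] for e in elements]
    # Second compression: store the compressed elements contiguously.
    packed = np.concatenate(compressed) if compressed else np.array([], dtype=flat.dtype)
    return sparsity_map, packed

def decompress(sparsity_map, packed):
    """Scatter the packed values back to their original positions."""
    out = np.zeros(len(sparsity_map), dtype=packed.dtype)
    out[sparsity_map] = packed
    return out

t = np.array([0, 3, 0, 0, 7, 0, 0, 1], dtype=np.int32)
m, p = compress_sparse_tensor(t)   # p holds only the nonzero points
```

The round trip `decompress(m, p)` recovers the original tensor, while `p` stores only the nonzero values back to back in memory.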
  • Publication number: 20240118992
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed to debug a hardware accelerator such as a neural network accelerator for executing Artificial Intelligence computational workloads. An example apparatus includes a core with a core input and a core output to execute executable code based on a machine-learning model to generate a data output based on a data input, and debug circuitry coupled to the core. The debug circuitry is configured to detect a breakpoint associated with the machine-learning model, compile executable code based on at least one of the machine-learning model or the breakpoint. In response to the triggering of the breakpoint, the debug circuitry is to stop the execution of the executable code and output data such as the data input, data output and the breakpoint for debugging the hardware accelerator.
    Type: Application
    Filed: October 16, 2023
    Publication date: April 11, 2024
    Applicant: Intel Corporation
    Inventors: Martin-Thomas Grymel, David Bernard, Martin Power, Niall Hanrahan, Kevin Brady
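A software analogue of the debug circuitry's behavior — halt at a registered breakpoint and emit the data input, data output, and breakpoint for inspection — might look like the sketch below. The `DebugHarness` class and its layer representation are hypothetical, chosen only to mirror the abstract's flow.

```python
class DebugHarness:
    """Runs a model layer by layer and halts when a registered
    breakpoint layer is reached, emitting the debug state."""
    def __init__(self, layers, breakpoint_at):
        self.layers = layers              # list of (name, fn) pairs
        self.breakpoint_at = breakpoint_at

    def run(self, data_in):
        x = data_in
        for name, fn in self.layers:
            if name == self.breakpoint_at:
                # Breakpoint triggered: stop execution, output debug data.
                return {"breakpoint": name, "data_input": data_in, "data_output": x}
            x = fn(x)
        return {"breakpoint": None, "data_input": data_in, "data_output": x}

h = DebugHarness([("scale", lambda v: v * 2), ("shift", lambda v: v + 1)], "shift")
state = h.run(5)   # halts before "shift" with the intermediate output
```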
  • Patent number: 11940907
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed for sparse tensor storage for neural network accelerators. An example apparatus includes sparsity map generating circuitry to generate a sparsity map corresponding to a tensor, the sparsity map to indicate whether a data point of the tensor is zero, static storage controlling circuitry to divide the tensor into one or more storage elements, and a compressor to perform a first compression of the one or more storage elements to generate one or more compressed storage elements, the first compression to remove zero points of the one or more storage elements based on the sparsity map and perform a second compression of the one or more compressed storage elements, the second compression to store the one or more compressed storage elements contiguously in memory.
    Type: Grant
    Filed: June 25, 2021
    Date of Patent: March 26, 2024
    Assignee: Intel Corporation
    Inventors: Martin-Thomas Grymel, David Bernard, Niall Hanrahan, Martin Power, Kevin Brady, Gary Baugh, Cormac Brick
  • Publication number: 20240073796
    Abstract: A circuit arrangement includes a preprocessing circuit configured to obtain context information related to a user location, a learning circuit configured to determine a predicted user movement based on context information related to a user location to obtain a predicted route and to determine predicted radio conditions along the predicted route, and a decision circuit configured to, based on the predicted radio conditions, identify one or more first areas expected to have a first type of radio conditions and one or more second areas expected to have a second type of radio conditions different from the first type of radio conditions and to control radio activity while traveling on the predicted route according to the one or more first areas and the one or more second areas.
    Type: Application
    Filed: September 7, 2023
    Publication date: February 29, 2024
    Inventors: Shahrnaz Azizi, Biljana Badic, John Browne, Dave Cavalcanti, Hyung-Nam Choi, Thorsten Clevorn, Ajay Gupta, Maruti Gupta Hyde, Ralph Hasholzner, Nageen Himayat, Simon Hunt, Ingolf Karls, Thomas Kenney, Yiting Liao, Christopher MacNamara, Marta Martinez Tarradell, Markus Dominik Mueck, Venkatesan Nallampatti Ekambaram, Niall Power, Bernhard Raaf, Reinhold Schneider, Ashish Singh, Sarabjot Singh, Srikathyayani Srikanteswara, Shilpa Talwar, Feng Xue, Zhibin Yu, Robert Zaus, Stefan Franz, Uwe Kliemann, Christian Drewes, Juergen Kreuchauf
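The decision logic in this abstract — classify predicted route areas into two types of radio conditions and control radio activity accordingly — reduces to a simple per-segment policy. The sketch below is illustrative only; the signal-strength threshold and the "transmit"/"defer" actions are assumptions, not drawn from the patent.

```python
def plan_radio_activity(predicted_route, predicted_conditions, threshold_dbm=-100.0):
    """Split the predicted route into first areas (good radio conditions)
    and second areas (poor conditions), scheduling radio activity per area."""
    plan = []
    for point, signal_dbm in zip(predicted_route, predicted_conditions):
        if signal_dbm >= threshold_dbm:
            plan.append((point, "transmit"))   # first type: good conditions
        else:
            plan.append((point, "defer"))      # second type: hold traffic
    return plan

route = ["A", "B", "C"]
conditions = [-80.0, -110.0, -95.0]   # predicted signal along the route
plan = plan_radio_activity(route, conditions)
```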
  • Patent number: 11847008
    Abstract: Technologies for providing efficient detection of idle poll loops include a compute device. The compute device has a compute engine that includes a plurality of cores and a memory. The compute engine is to determine a ratio of unsuccessful operations to successful operations over a predefined time period of a core of the plurality of cores that is assigned to continually poll, within the predefined time period, a memory address for a change in status and determine whether the determined ratio satisfies a reference ratio of unsuccessful operations to successful operations. The reference ratio is indicative of a change in the operation of the assigned core. The compute engine is further to selectively increase or decrease a power usage of the assigned core as a function of whether the determined ratio satisfies the reference ratio. Other embodiments are also described and claimed.
    Type: Grant
    Filed: April 12, 2018
    Date of Patent: December 19, 2023
    Assignee: Intel Corporation
    Inventors: David Hunt, Niall Power, Kevin Devey, Changzheng Wei, Bruce Richardson, Eliezer Tamir, Andrew Cunningham, Chris MacNamara, Nemanja Marjanovic, Rory Sexton, John Browne
  • Patent number: 11800439
    Abstract: A wireless communication device includes a processor configured to select an offload processing task for performance by an edge computing device; cause a baseband modem to establish a direct wireless connection between the wireless communication device and the edge computing device; cause the baseband modem to send first data to the edge computing device via the direct wireless connection; and receive second data from the edge computing device, wherein the second data comprise a result of the offload processing task performed on the first data. The edge computing device includes a processor configured to receive, from a user device, offloaded data to be processed according to an offload processing task; execute the offload processing task on the offloaded data; and cause the radio via the interface to wirelessly send a result of the executed offload processing task via a direct wireless connection with the user device.
    Type: Grant
    Filed: December 16, 2022
    Date of Patent: October 24, 2023
    Assignee: Intel Corporation
    Inventors: Shahrnaz Azizi, Biljana Badic, John Browne, Dave Cavalcanti, Hyung-Nam Choi, Thorsten Clevorn, Ajay Gupta, Maruti Gupta Hyde, Ralph Hasholzner, Nageen Himayat, Simon Hunt, Ingolf Karls, Thomas Kenney, Yiting Liao, Christopher MacNamara, Marta Martinez Tarradell, Markus Dominik Mueck, Venkatesan Nallampatti Ekambaram, Niall Power, Bernhard Raaf, Reinhold Schneider, Ashish Singh, Sarabjot Singh, Srikathyayani Srikanteswara, Shilpa Talwar, Feng Xue, Zhibin Yu, Robert Zaus, Stefan Franz, Uwe Kliemann, Christian Drewes, Juergen Kreuchauf
  • Patent number: 11653292
    Abstract: A circuit arrangement includes a preprocessing circuit configured to obtain context information related to a user location, a learning circuit configured to determine a predicted user movement based on context information related to a user location to obtain a predicted route and to determine predicted radio conditions along the predicted route, and a decision circuit configured to, based on the predicted radio conditions, identify one or more first areas expected to have a first type of radio conditions and one or more second areas expected to have a second type of radio conditions different from the first type of radio conditions and to control radio activity while traveling on the predicted route according to the one or more first areas and the one or more second areas.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: May 16, 2023
    Assignee: Intel Corporation
    Inventors: Shahrnaz Azizi, Biljana Badic, John Browne, Dave Cavalcanti, Hyung-Nam Choi, Thorsten Clevorn, Ajay Gupta, Maruti Gupta Hyde, Ralph Hasholzner, Nageen Himayat, Simon Hunt, Ingolf Karls, Thomas Kenney, Yiting Liao, Christopher Macnamara, Marta Martinez Tarradell, Markus Dominik Mueck, Venkatesan Nallampatti Ekambaram, Niall Power, Bernhard Raaf, Reinhold Schneider, Ashish Singh, Sarabjot Singh, Srikathyayani Srikanteswara, Shilpa Talwar, Feng Xue, Zhibin Yu, Robert Zaus, Stefan Franz, Uwe Kliemann, Christian Drewes, Juergen Kreuchauf
  • Publication number: 20230138578
    Abstract: A circuit arrangement includes a preprocessing circuit configured to obtain context information related to a user location, a learning circuit configured to determine a predicted user movement based on context information related to a user location to obtain a predicted route and to determine predicted radio conditions along the predicted route, and a decision circuit configured to, based on the predicted radio conditions, identify one or more first areas expected to have a first type of radio conditions and one or more second areas expected to have a second type of radio conditions different from the first type of radio conditions and to control radio activity while traveling on the predicted route according to the one or more first areas and the one or more second areas.
    Type: Application
    Filed: December 16, 2022
    Publication date: May 4, 2023
    Inventors: Shahrnaz Azizi, Biljana Badic, John Browne, Dave Cavalcanti, Hyung-Nam Choi, Thorsten Clevorn, Ajay Gupta, Maruti Gupta Hyde, Ralph Hasholzner, Nageen Himayat, Simon Hunt, Ingolf Karls, Thomas Kenney, Yiting Liao, Christopher MacNamara, Marta Martinez Tarradell, Markus Mueck, Venkatesan Nallampatti Ekambaram, Niall Power, Bernhard Raaf, Reinhold Schneider, Ashish Singh, Sarabjot Singh, Srikathyayani Srikanteswara, Shilpa Talwar, Feng Xue, Zhibin Yu, Robert Zaus, Stefan Franz, Uwe Kliemann, Christian Drewes, Juergen Kreuchauf
  • Publication number: 20220158897
    Abstract: Methods, systems, and computer programs for configuring a network functions virtualization orchestrator (NFVO). In one aspect, a method can include actions of generating, by one or more computers, a message that includes data representing a request to upload a virtual network function (VNF) package, encoding, by the one or more computers, the generated message that includes the VNF package for transmission to a network functions virtualization orchestrator (NFVO), and transmitting, by the one or more computers, the encoded message to the NFVO.
    Type: Application
    Filed: June 10, 2020
    Publication date: May 19, 2022
    Inventors: Joey Chou, Niall Power, Jianli Sun
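The three actions in this abstract — generate the upload request, encode it with the VNF package, transmit it to the NFVO — can be sketched as below. The message schema here is purely illustrative; it is not the ETSI NFV wire format, and the field names are assumptions.

```python
import base64
import json

def build_vnf_upload_request(vnf_package: bytes, vnfd_id: str) -> bytes:
    """Generate a message requesting a VNF package upload, embed the
    package, and encode the whole message for transmission to the NFVO."""
    message = {
        "request": "upload_vnf_package",
        "vnfd_id": vnfd_id,
        # Binary package payload carried as base64 text.
        "package": base64.b64encode(vnf_package).decode("ascii"),
    }
    return json.dumps(message).encode("utf-8")   # encoded message to send

encoded = build_vnf_upload_request(b"\x50\x4b\x03\x04", "vnfd-001")
```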
  • Patent number: 11050682
    Abstract: A network interface device, including: an ingress interface; a host platform interface to communicatively couple to a host platform; and a packet preprocessor including logic to: receive via the ingress interface a data sequence including a plurality of discrete data units; identify the data sequence as data for a parallel processing operation; reorder the discrete data units into a reordered data frame, the reordered data frame configured to order the discrete data units for consumption by the parallel operation; and send the reordered data to the host platform via the host platform interface.
    Type: Grant
    Filed: September 28, 2017
    Date of Patent: June 29, 2021
    Assignee: Intel Corporation
    Inventors: Tomasz Kantecki, Niall Power, John J. Browne, Christopher MacNamara, Stephen Doyle
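The preprocessor's reordering step — rebuild a data frame so discrete data units arrive in the order the parallel consumer expects — can be modeled as a sort over sequence metadata. This is a software sketch of the idea only; the actual ordering criterion in the hardware may differ, and the `seq`/`payload` field names are assumptions.

```python
def reorder_for_parallel(units):
    """Reorder received discrete data units into a reordered data frame
    ready for consumption by a parallel processing operation."""
    # Units may arrive out of order; sort by their sequence metadata.
    return [u["payload"] for u in sorted(units, key=lambda u: u["seq"])]

arrived = [
    {"seq": 2, "payload": "C"},
    {"seq": 0, "payload": "A"},
    {"seq": 1, "payload": "B"},
]
frame = reorder_for_parallel(arrived)   # units back in consumption order
```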
  • Publication number: 20210014324
    Abstract: Examples described herein relate to a network interface apparatus that includes an interface; circuitry to determine whether to store content of a received packet into a cache or into a memory, at least during a configuration of the network interface to store content directly into the cache, based at least in part on a fill level of a region of the cache allocated to receive copies of packet content directly from the network interface; and circuitry to store content of the received packet into the cache or the memory based on the determination, wherein the cache is external to the network interface. In some examples, the network interface is to determine to store content of the received packet into the memory based at least in part on a fill level of the region of the cache being identified as full or determine to store content of the received packet into the cache based at least in part on a fill level of the region of the cache being identified as not filled.
    Type: Application
    Filed: September 24, 2020
    Publication date: January 14, 2021
    Inventors: Andrey Chilikin, Tomasz Kantecki, Chris MacNamara, John J. Browne, Declan Doherty, Niall Power
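The steering decision in this abstract is a fill-level check: write packet content directly into the allocated cache region unless that region is full, in which case fall back to memory. A minimal sketch, with the function name and capacity parameter as assumptions:

```python
def steer_packet(cache_fill, cache_capacity, direct_to_cache_enabled=True):
    """Choose where received packet content goes: the cache region
    allocated for direct placement, or main memory if that region is full."""
    if direct_to_cache_enabled and cache_fill < cache_capacity:
        return "cache"     # region not identified as full
    return "memory"        # region full (or direct placement disabled)

steer_packet(cache_fill=3, cache_capacity=8)   # room left -> cache
steer_packet(cache_fill=8, cache_capacity=8)   # region full -> memory
```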
  • Patent number: 10860714
    Abstract: Technologies for cache side channel attack detection and mitigation include an analytics server and one or more monitored computing devices. The analytics server polls each computing device for analytics counter data. The computing device generates the analytics counter data using a resource manager of a processor of the computing device. The analytics counter data may include last-level cache data or memory bandwidth data. The analytics server identifies suspicious core activity based on the analytics counter data and, if identified, deploys a detection process to the computing device. The computing device executes the detection process to identify suspicious application activity. If identified, the computing device may perform one or more corrective actions. Corrective actions include limiting resource usage by a suspicious process using the resource manager of the processor. The resource manager may limit cache occupancy or memory bandwidth used by the suspicious process.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: December 8, 2020
    Assignee: Intel Corporation
    Inventors: John J. Browne, Marcel Cornu, Timothy Verrall, Tomasz Kantecki, Niall Power, Weigang Li, Eoin Walsh, Maryam Tahhan
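The analytics server's first stage — poll per-core counter data and flag cores whose last-level-cache behavior looks like probing — can be sketched as a threshold test over miss rates. The 0.9 threshold and the counter field names are illustrative assumptions, not values from the patent.

```python
def detect_suspicious_cores(counter_samples, llc_miss_threshold=0.9):
    """Flag cores whose last-level-cache miss rate suggests cache probing;
    flagged cores would next receive a deployed detection process."""
    suspicious = []
    for core, sample in counter_samples.items():
        miss_rate = sample["llc_misses"] / max(sample["llc_refs"], 1)
        if miss_rate >= llc_miss_threshold:
            suspicious.append(core)
    return suspicious

samples = {
    0: {"llc_misses": 10, "llc_refs": 1000},    # normal workload
    1: {"llc_misses": 980, "llc_refs": 1000},   # probing-like pattern
}
flagged = detect_suspicious_cores(samples)
```

The corrective action would then use the processor's resource manager to cap the flagged process's cache occupancy or memory bandwidth.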
  • Publication number: 20200205062
    Abstract: A circuit arrangement includes a preprocessing circuit configured to obtain context information related to a user location, a learning circuit configured to determine a predicted user movement based on context information related to a user location to obtain a predicted route and to determine predicted radio conditions along the predicted route, and a decision circuit configured to, based on the predicted radio conditions, identify one or more first areas expected to have a first type of radio conditions and one or more second areas expected to have a second type of radio conditions different from the first type of radio conditions and to control radio activity while traveling on the predicted route according to the one or more first areas and the one or more second areas.
    Type: Application
    Filed: February 28, 2020
    Publication date: June 25, 2020
    Inventors: Shahrnaz Azizi, Biljana Badic, John Browne, Dave Cavalcanti, Hyung-Nam Choi, Thorsten Clevorn, Ajay Gupta, Maruti Gupta Hyde, Ralph Hasholzner, Nageen Himayat, Simon Hunt, Ingolf Karls, Thomas Kenney, Yiting Liao, Chris Macnamara, Marta Martinez Tarradell, Markus Dominik Mueck, Venkatesan Nallampatti Ekambaram, Niall Power, Bernhard Raaf, Reinhold Schneider, Ashish Singh, Sarabjot Singh, Srikathyayani Srikanteswara, Shilpa Talwar, Feng Xue, Zhibin Yu, Robert Zaus, Stefan Franz, Uwe Kliemann, Christian Drewes, Juergen Kreuchauf
  • Patent number: 10657056
    Abstract: Technologies for demoting cache lines to a shared cache include a compute device with at least one processor having multiple cores, a cache memory with a core-local cache and a shared cache, and a cache line demote device. A processor core of a processor of the compute device is configured to retrieve at least a portion of data of a received network packet and move the data into one or more core-local cache lines of the core-local cache. The processor core is further configured to perform a processing operation on the data and transmit a cache line demotion command to the cache line demote device subsequent to having completed the processing operation. The cache line demote device is configured to perform a cache line demotion operation to demote the data from the core-local cache lines to shared cache lines of the shared cache. Other embodiments are described herein.
    Type: Grant
    Filed: June 30, 2018
    Date of Patent: May 19, 2020
    Assignee: Intel Corporation
    Inventors: Eliezer Tamir, Bruce Richardson, Niall Power, Andrew Cunningham, David Hunt, Kevin Devey, Changzheng Wei
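The sequence in this abstract — process packet data in core-local cache lines, then issue an explicit demote so the data moves to the shared cache — can be modeled with a toy two-level cache. The class below is a behavioral sketch only; real demotion (e.g. the CLDEMOTE instruction) operates on cache lines, not Python dicts.

```python
class TwoLevelCache:
    """Toy model of a core-local cache plus a shared cache, with an
    explicit demote step issued once processing completes."""
    def __init__(self):
        self.core_local = {}
        self.shared = {}

    def process_packet(self, addr, data):
        self.core_local[addr] = data.upper()   # work on core-local lines
        self.demote(addr)                      # cache line demotion command
        return self.shared[addr]

    def demote(self, addr):
        # Move the line from the core-local cache to the shared cache so
        # another core can read it without a costly cross-core fetch.
        self.shared[addr] = self.core_local.pop(addr)

cache = TwoLevelCache()
result = cache.process_packet(0x40, "payload")   # now resident in shared cache
```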
  • Publication number: 20190364492
    Abstract: A circuit arrangement includes a preprocessing circuit configured to obtain context information related to a user location, a learning circuit configured to determine a predicted user movement based on context information related to a user location to obtain a predicted route and to determine predicted radio conditions along the predicted route, and a decision circuit configured to, based on the predicted radio conditions, identify one or more first areas expected to have a first type of radio conditions and one or more second areas expected to have a second type of radio conditions different from the first type of radio conditions and to control radio activity while traveling on the predicted route according to the one or more first areas and the one or more second areas.
    Type: Application
    Filed: June 28, 2019
    Publication date: November 28, 2019
    Inventors: Shahrnaz Azizi, Biljana Badic, John Browne, Dave Cavalcanti, Hyung-Nam Choi, Thorsten Clevorn, Ajay Gupta, Maruti Gupta Hyde, Ralph Hasholzner, Nageen Himayat, Simon Hunt, Ingolf Karls, Thomas Kenney, Yiting Liao, Chris Macnamara, Marta Martinez Tarradell, Markus Dominik Mueck, Venkatesan Nallampatti Ekambaram, Niall Power, Bernhard Raaf, Reinhold Schneider, Ashish Singh, Sarabjot Singh, Srikathyayani Srikanteswara, Shilpa Talwar, Feng Xue, Zhibin Yu, Robert Zaus, Stefan Franz, Uwe Kliemann, Christian Drewes, Juergen Kreuchauf
  • Patent number: 10445272
    Abstract: A network system includes a central processing unit and a peripheral device in electrical communication with the central processing unit. The peripheral device has at least one power input and a data input. The network system also includes an out of band controller in electrical communication with the central processing unit, the peripheral device, and an external management interface. Responsive to an identified threat, the out of band controller is configured to disable the at least one power input and the data input to the peripheral device, where the disablement indicates to the central processing unit that a hot plug event has occurred with respect to the peripheral device. The out of band controller is also configured to enable auxiliary power to the peripheral device such that the out of band controller remains in communication with the peripheral device during remediation of the identified threat.
    Type: Grant
    Filed: July 5, 2018
    Date of Patent: October 15, 2019
    Assignee: Intel Corporation
    Inventors: Kevin Devey, John Browne, Chris Macnamara, Eoin Walsh, Bruce Richardson, Andrew Cunningham, Niall Power, David Hunt, Changzheng Wei, Eliezer Tamir
  • Publication number: 20190102223
    Abstract: In one embodiment, a hardware queue manager is to receive tasks from a plurality of producer threads and allocate the tasks to a plurality of consumer threads. The hardware queue manager may include: a plurality of input queues each associated with one of the plurality of producer threads, each of the plurality of input queues having a plurality of entries to store a queue element associated with a task, the queue element including a task portion and timing information associated with the task; and an arbiter to select a consumer thread of the plurality of consumer threads to receive a task and select the task from a plurality of tasks stored in the plurality of input queues, based at least in part on the timing information of the queue element associated with the task. Other embodiments are described and claimed.
    Type: Application
    Filed: September 29, 2017
    Publication date: April 4, 2019
    Inventors: Niall Power, Sean Harte, Niall D. McDonnell, Andrew Cunningham
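This abstract's structure — one input queue per producer thread, queue elements carrying a task plus timing information, and an arbiter that selects among queue heads using that timing information — can be sketched as follows. Selecting the oldest timestamp is one plausible arbitration policy, assumed here for illustration.

```python
from collections import deque

class QueueManager:
    """Per-producer input queues; the arbiter hands out the queued task
    whose timing information (timestamp) is oldest."""
    def __init__(self, n_producers):
        self.queues = [deque() for _ in range(n_producers)]

    def enqueue(self, producer, task, timestamp):
        # Queue element: timing information plus the task portion.
        self.queues[producer].append((timestamp, task))

    def arbitrate(self):
        # Inspect only each queue's head entry; pick the earliest timestamp.
        heads = [(q[0], i) for i, q in enumerate(self.queues) if q]
        if not heads:
            return None
        (_, task), i = min(heads)
        self.queues[i].popleft()
        return task

qm = QueueManager(2)
qm.enqueue(0, "decode", timestamp=5)
qm.enqueue(1, "encrypt", timestamp=2)
first = qm.arbitrate()   # the older task wins arbitration
```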
  • Publication number: 20190097951
    Abstract: A network interface device, including: an ingress interface; a host platform interface to communicatively couple to a host platform; and a packet preprocessor including logic to: receive via the ingress interface a data sequence including a plurality of discrete data units; identify the data sequence as data for a parallel processing operation; reorder the discrete data units into a reordered data frame, the reordered data frame configured to order the discrete data units for consumption by the parallel operation; and send the reordered data to the host platform via the host platform interface.
    Type: Application
    Filed: September 28, 2017
    Publication date: March 28, 2019
    Applicant: Intel Corporation
    Inventors: Tomasz Kantecki, Niall Power, John J. Browne, Christopher MacNamara, Stephen Doyle
  • Publication number: 20190042739
    Abstract: Technologies for cache side channel attack detection and mitigation include an analytics server and one or more monitored computing devices. The analytics server polls each computing device for analytics counter data. The computing device generates the analytics counter data using a resource manager of a processor of the computing device. The analytics counter data may include last-level cache data or memory bandwidth data. The analytics server identifies suspicious core activity based on the analytics counter data and, if identified, deploys a detection process to the computing device. The computing device executes the detection process to identify suspicious application activity. If identified, the computing device may perform one or more corrective actions. Corrective actions include limiting resource usage by a suspicious process using the resource manager of the processor. The resource manager may limit cache occupancy or memory bandwidth used by the suspicious process.
    Type: Application
    Filed: June 29, 2018
    Publication date: February 7, 2019
    Inventors: John J. Browne, Marcel Cornu, Timothy Verrall, Tomasz Kantecki, Niall Power, Weigang Li, Eoin Walsh, Maryam Tahhan
  • Publication number: 20190042419
    Abstract: Technologies for demoting cache lines to a shared cache include a compute device with at least one processor having multiple cores, a cache memory with a core-local cache and a shared cache, and a cache line demote device. A processor core of a processor of the compute device is configured to retrieve at least a portion of data of a received network packet and move the data into one or more core-local cache lines of the core-local cache. The processor core is further configured to perform a processing operation on the data and transmit a cache line demotion command to the cache line demote device subsequent to having completed the processing operation. The cache line demote device is configured to perform a cache line demotion operation to demote the data from the core-local cache lines to shared cache lines of the shared cache. Other embodiments are described herein.
    Type: Application
    Filed: June 30, 2018
    Publication date: February 7, 2019
    Inventors: Eliezer Tamir, Bruce Richardson, Niall Power, Andrew Cunningham, David Hunt, Kevin Devey, Changzheng Wei