Patents by Inventor Nemanja Marjanovic

Nemanja Marjanovic has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11847008
    Abstract: Technologies for providing efficient detection of idle poll loops include a compute device. The compute device has a compute engine that includes a plurality of cores and a memory. The compute engine is to determine a ratio of unsuccessful operations to successful operations over a predefined time period of a core of the plurality of cores that is assigned to continually poll, within the predefined time period, a memory address for a change in status and determine whether the determined ratio satisfies a reference ratio of unsuccessful operations to successful operations. The reference ratio is indicative of a change in the operation of the assigned core. The compute engine is further to selectively increase or decrease a power usage of the assigned core as a function of whether the determined ratio satisfies the reference ratio. Other embodiments are also described and claimed.
    Type: Grant
    Filed: April 12, 2018
    Date of Patent: December 19, 2023
    Assignee: Intel Corporation
    Inventors: David Hunt, Niall Power, Kevin Devey, Changzheng Wei, Bruce Richardson, Eliezer Tamir, Andrew Cunningham, Chris MacNamara, Nemanja Marjanovic, Rory Sexton, John Browne
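
A minimal C sketch of the idle-poll detection idea described in the abstract above: count unsuccessful versus successful polls over a window, compare the ratio against a reference, and adjust the polling core's power accordingly. The helper names (poll_once, set_core_power_hint), window size, and reference ratio are illustrative assumptions, not taken from the patent claims.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define WINDOW_POLLS    100000   /* polls per measurement window             */
#define REFERENCE_RATIO 50.0     /* empty-to-busy ratio that marks "idle"    */

/* Hypothetical stand-in for polling a memory address (e.g. an RX descriptor)
 * for a change in status; returns true when new work was found. */
static bool poll_once(volatile const uint32_t *status_addr, uint32_t *last)
{
    uint32_t now = *status_addr;
    bool changed = (now != *last);
    *last = now;
    return changed;
}

/* Hypothetical power hint: a real system might adjust the core's P-state or
 * switch between busy-poll and interrupt mode here. */
static void set_core_power_hint(bool lower_power)
{
    printf("core power hint: %s\n", lower_power ? "decrease" : "increase");
}

static void poll_loop(volatile const uint32_t *status_addr, int windows)
{
    uint32_t last = *status_addr;
    uint64_t unsuccessful = 0, successful = 0;

    while (windows > 0) {
        if (poll_once(status_addr, &last))
            successful++;
        else
            unsuccessful++;

        if (successful + unsuccessful >= WINDOW_POLLS) {
            /* Ratio of unsuccessful to successful polls over the window
             * (+1 avoids division by zero on an all-empty window). */
            double ratio = (double)unsuccessful / (double)(successful + 1);

            /* If the ratio satisfies (exceeds) the reference ratio, the core
             * is mostly spinning on empty polls, so reduce its power usage;
             * otherwise raise it back up for the incoming work. */
            set_core_power_hint(ratio >= REFERENCE_RATIO);

            unsuccessful = successful = 0;  /* start a new window */
            windows--;
        }
    }
}

int main(void)
{
    volatile uint32_t rx_status = 0;

    /* With nothing updating rx_status, every poll is unsuccessful, so each
     * window reports an "idle" ratio and hints the core power downward. */
    poll_loop(&rx_status, 3);
    return 0;
}
```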
  • Patent number: 11630693
    Abstract: Technologies for power-aware scheduling include a computing device that receives network packets. The computing device classifies the network packets by priority level and then assigns each network packet to a performance group bin. The packets are assigned based on priority level and other performance criteria. The computing device schedules the network packets assigned to each performance group for processing by a processing engine such as a processor core. Network packets assigned to performance groups having a high priority level are scheduled for processing by processing engines with a high performance level. The computing device may select performance levels for processing engines based on processing workload of the network packets. The computing device may control the performance level of the processing engines, for example by controlling the frequency of processor cores. The processing workload may include packet encryption. Other embodiments are described and claimed.
    Type: Grant
    Filed: April 12, 2018
    Date of Patent: April 18, 2023
    Assignee: Intel Corporation
    Inventors: John Browne, Chris MacNamara, Tomasz Kantecki, Peter McCarthy, Liang Ma, Mairtin O'Loingsigh, Rory Sexton, John Griffin, Nemanja Marjanovic, David Hunt
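
A minimal C sketch of the power-aware scheduling idea in the abstract above: classify packets by priority level (plus other performance criteria) into performance-group bins, and map each bin to a processing-engine frequency. The packet fields, group count, thresholds, and frequency values are illustrative assumptions, not taken from the patent claims.

```c
#include <stdint.h>
#include <stdio.h>

enum perf_group { GROUP_LOW, GROUP_MED, GROUP_HIGH, GROUP_COUNT };

struct packet {
    uint8_t  priority;      /* e.g. DSCP/VLAN-PCP derived priority level   */
    uint32_t payload_len;   /* other performance criteria: workload size   */
    int      needs_crypto;  /* encryption adds processing workload         */
};

/* Classify a packet into a performance-group bin using its priority level
 * plus additional criteria such as expected processing workload. */
static enum perf_group classify(const struct packet *p)
{
    if (p->priority >= 6 || p->needs_crypto)
        return GROUP_HIGH;
    if (p->priority >= 3 || p->payload_len > 1200)
        return GROUP_MED;
    return GROUP_LOW;
}

/* Hypothetical mapping of each group to a processing-engine frequency (kHz);
 * a real implementation might program per-core P-states instead. */
static const uint32_t group_freq_khz[GROUP_COUNT] = {
    [GROUP_LOW]  = 1200000,
    [GROUP_MED]  = 2100000,
    [GROUP_HIGH] = 3000000,
};

int main(void)
{
    struct packet pkts[] = {
        { .priority = 7, .payload_len = 64,   .needs_crypto = 1 },
        { .priority = 1, .payload_len = 1500, .needs_crypto = 0 },
        { .priority = 0, .payload_len = 64,   .needs_crypto = 0 },
    };

    for (unsigned i = 0; i < sizeof(pkts) / sizeof(pkts[0]); i++) {
        enum perf_group g = classify(&pkts[i]);
        printf("packet %u -> group %d, core frequency %u kHz\n",
               i, (int)g, group_freq_khz[g]);
    }
    return 0;
}
```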
  • Patent number: 11301020
    Abstract: In an example, there is disclosed a demand scaling engine, including: a processor interface to communicatively couple to a processor; a network controller interface to communicatively couple to a network controller and to receive network demand data; a scaleup criterion; a current processor frequency scale datum; and logic, provided at least partly in hardware, to: receive the network demand data; compare the network demand data to the scaleup criterion; determine that the network demand data exceeds the scaleup criterion; and instruct the processor via the processor interface to scaleup processor frequency.
    Type: Grant
    Filed: May 22, 2017
    Date of Patent: April 12, 2022
    Assignee: Intel Corporation
    Inventors: Christopher MacNamara, John J. Browne, William J. Bowhill, Christopher Nolan, Nemanja Marjanovic, Rory Sexton, Padraic Agnew, Colin Hanily
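
A minimal C sketch of the demand-scaling logic in the abstract above: receive network demand data, compare it to a scaleup criterion, and instruct the processor to raise its frequency when the criterion is exceeded. All names, thresholds, and the printf stand-in for the processor interface are assumptions for illustration.

```c
#include <stdint.h>
#include <stdio.h>

struct demand_scaler {
    uint64_t scaleup_criterion;  /* e.g. packets/sec that triggers scaleup   */
    uint32_t current_freq_khz;   /* current processor frequency scale datum  */
    uint32_t max_freq_khz;
};

/* Hypothetical processor-interface call; a real engine might write an MSR
 * or use an OS cpufreq/P-state interface instead. */
static void processor_set_freq(struct demand_scaler *s, uint32_t freq_khz)
{
    s->current_freq_khz = freq_khz;
    printf("processor frequency set to %u kHz\n", freq_khz);
}

/* Receive one sample of network demand data (e.g. from a network controller)
 * and scale up the processor frequency when the criterion is exceeded. */
static void on_network_demand(struct demand_scaler *s, uint64_t demand_pps)
{
    if (demand_pps > s->scaleup_criterion &&
        s->current_freq_khz < s->max_freq_khz)
        processor_set_freq(s, s->max_freq_khz);
}

int main(void)
{
    struct demand_scaler s = {
        .scaleup_criterion = 500000,    /* 500 kpps */
        .current_freq_khz  = 1200000,
        .max_freq_khz      = 3000000,
    };

    on_network_demand(&s, 100000);   /* below criterion: no change  */
    on_network_demand(&s, 750000);   /* exceeds criterion: scale up */
    printf("final frequency: %u kHz\n", s.current_freq_khz);
    return 0;
}
```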
  • Publication number: 20190042310
    Abstract: Technologies for power-aware scheduling include a computing device that receives network packets. The computing device classifies the network packets by priority level and then assigns each network packet to a performance group bin. The packets are assigned based on priority level and other performance criteria. The computing device schedules the network packets assigned to each performance group for processing by a processing engine such as a processor core. Network packets assigned to performance groups having a high priority level are scheduled for processing by processing engines with a high performance level. The computing device may select performance levels for processing engines based on processing workload of the network packets. The computing device may control the performance level of the processing engines, for example by controlling the frequency of processor cores. The processing workload may include packet encryption. Other embodiments are described and claimed.
    Type: Application
    Filed: April 12, 2018
    Publication date: February 7, 2019
    Inventors: John Browne, Chris MacNamara, Tomasz Kantecki, Peter McCarthy, Ma Liang, Mairtin O'Loingsigh, Rory Sexton, John Griffin, Nemanja Marjanovic, David Hunt
  • Publication number: 20190041957
    Abstract: Technologies for providing efficient detection of idle poll loops include a compute device. The compute device has a compute engine that includes a plurality of cores and a memory. The compute engine is to determine a ratio of unsuccessful operations to successful operations over a predefined time period of a core of the plurality of cores that is assigned to continually poll, within the predefined time period, a memory address for a change in status and determine whether the determined ratio satisfies a reference ratio of unsuccessful operations to successful operations. The reference ratio is indicative of a change in the operation of the assigned core. The compute engine is further to selectively increase or decrease a power usage of the assigned core as a function of whether the determined ratio satisfies the reference ratio. Other embodiments are also described and claimed.
    Type: Application
    Filed: April 12, 2018
    Publication date: February 7, 2019
    Inventors: David Hunt, Niall Power, Kevin Devey, Changzheng Wei, Bruce Richardson, Eliezer Tamir, Andrew Cunningham, Chris MacNamara, Nemanja Marjanovic, Rory Sexton, John Browne
  • Publication number: 20180335824
    Abstract: In an example, there is disclosed a demand scaling engine, including: a processor interface to communicatively couple to a processor; a network controller interface to communicatively couple to a network controller and to receive network demand data; a scaleup criterion; a current processor frequency scale datum; and logic, provided at least partly in hardware, to: receive the network demand data; compare the network demand data to the scaleup criterion; determine that the network demand data exceeds the scaleup criterion; and instruct the processor via the processor interface to scaleup processor frequency.
    Type: Application
    Filed: May 22, 2017
    Publication date: November 22, 2018
    Applicant: Intel Corporation
    Inventors: Christopher MacNamara, John J. Browne, William J. Bowhill, Christopher Nolan, Nemanja Marjanovic, Rory Sexton, Padraic Agnew, Colin Hanily