Patents by Inventor Ronny Ronen
Ronny Ronen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11275637
Abstract: A processor of an aspect includes an instruction pipeline to process a multiple memory address instruction that indicates multiple memory addresses. The processor also includes multiple page fault aggregation logic coupled with the instruction pipeline. The multiple page fault aggregation logic is to aggregate page fault information for multiple page faults that are each associated with one of the multiple memory addresses of the instruction. The multiple page fault aggregation logic is to provide the aggregated page fault information to a page fault communication interface. Other processors, apparatus, methods, and systems are also disclosed.
Type: Grant
Filed: August 14, 2020
Date of Patent: March 15, 2022
Assignee: Intel Corporation
Inventors: Boris Ginzburg, Ronny Ronen, Ilya Osadchiy
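To illustrate the aggregation idea in this abstract, the C++ sketch below collects every faulting page touched by one multi-address instruction into a single record instead of raising a separate fault per address. It is only a software model, not the patented hardware; the names (PageFaultAggregator, AggregatedFault, Execute) and the 4 KiB page size are assumptions made for illustration.

```cpp
#include <cstdint>
#include <iostream>
#include <optional>
#include <unordered_set>
#include <utility>
#include <vector>

// Hypothetical model: one fault record that aggregates every faulting address
// named by a multi-address instruction, rather than one fault per address.
struct AggregatedFault {
    std::vector<uint64_t> faulting_pages;  // page-aligned addresses that missed
};

class PageFaultAggregator {
public:
    explicit PageFaultAggregator(std::unordered_set<uint64_t> resident_pages)
        : resident_(std::move(resident_pages)) {}

    // Walk all addresses named by the instruction; collect every miss into a
    // single record handed to the "page fault communication interface".
    std::optional<AggregatedFault> Execute(const std::vector<uint64_t>& addresses) {
        AggregatedFault fault;
        for (uint64_t addr : addresses) {
            uint64_t page = addr & ~0xFFFULL;          // assume 4 KiB pages
            if (resident_.count(page) == 0) {
                fault.faulting_pages.push_back(page);  // aggregate, do not trap yet
            }
        }
        if (fault.faulting_pages.empty()) return std::nullopt;
        return fault;                                   // one fault for all misses
    }

private:
    std::unordered_set<uint64_t> resident_;
};

int main() {
    PageFaultAggregator aggregator({0x1000, 0x3000});   // only two pages resident
    auto fault = aggregator.Execute({0x1008, 0x2010, 0x4020});
    if (fault) {
        std::cout << fault->faulting_pages.size()
                  << " pages reported in a single aggregated fault\n";
    }
    return 0;
}
```

The point of aggregating is that the fault handler can resolve all missing pages in one round trip rather than restarting the instruction once per miss.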
-
Patent number: 11243768
Abstract: Disclosed embodiments relate to processing logic for performing function operations. In one example, an apparatus includes an execution unit within a processor to execute a code block, power management hardware coupled to the execution unit, wherein the power management hardware is to monitor a first execution of the code block, store a micro-architectural context of the processor in a metadata block associated with the code block, the micro-architectural context including performance data resulting from the first execution of the code block, the performance data comprising power and energy usage data, and power management related parameters, read the associated metadata block upon a second execution of the code block, and tune the second execution based on the performance data stored in the associated metadata block to increase efficiency of executing the code block.
Type: Grant
Filed: January 28, 2019
Date of Patent: February 8, 2022
Assignee: Intel Corporation
Inventors: Efraim Rotem, Eliezer Weissmann, Boris Ginzburg, Alon Naveh, Nadav Shulman, Ronny Ronen
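As a rough illustration of the tuning loop this abstract describes, the C++ sketch below pairs a code block with a metadata record holding power data and a power-management parameter captured on a first run, then consults that record on a later run. All names (PowerManager, BlockMetadata, RecordRun, ChooseFrequency) and the specific fields are assumptions for illustration, not taken from the patent.

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_map>

// Hypothetical metadata record associated with a code block: performance and
// power data captured on one execution and consulted on the next.
struct BlockMetadata {
    double   avg_power_watts = 0.0;
    uint64_t energy_microjoules = 0;
    uint32_t preferred_freq_mhz = 0;   // power-management parameter to reuse
    bool     valid = false;
};

class PowerManager {
public:
    // First execution: record the measured context in the block's metadata.
    void RecordRun(uint64_t block_id, double watts, uint64_t uj, uint32_t freq) {
        metadata_[block_id] = {watts, uj, freq, true};
    }

    // Later execution: if metadata exists, start from the previously tuned
    // operating point instead of rediscovering it.
    uint32_t ChooseFrequency(uint64_t block_id, uint32_t default_mhz) const {
        auto it = metadata_.find(block_id);
        if (it != metadata_.end() && it->second.valid) {
            return it->second.preferred_freq_mhz;
        }
        return default_mhz;
    }

private:
    std::unordered_map<uint64_t, BlockMetadata> metadata_;  // keyed by code block
};

int main() {
    PowerManager pm;
    pm.RecordRun(/*block_id=*/42, /*watts=*/3.1, /*uj=*/950, /*freq=*/1800);
    std::cout << "Second run frequency: "
              << pm.ChooseFrequency(42, /*default_mhz=*/2400) << " MHz\n";
    return 0;
}
```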
-
Publication number: 20200379835
Abstract: A processor of an aspect includes an instruction pipeline to process a multiple memory address instruction that indicates multiple memory addresses. The processor also includes multiple page fault aggregation logic coupled with the instruction pipeline. The multiple page fault aggregation logic is to aggregate page fault information for multiple page faults that are each associated with one of the multiple memory addresses of the instruction. The multiple page fault aggregation logic is to provide the aggregated page fault information to a page fault communication interface. Other processors, apparatus, methods, and systems are also disclosed.
Type: Application
Filed: August 14, 2020
Publication date: December 3, 2020
Inventors: Boris Ginzburg, Ronny Ronen, Ilya Osadchiy
-
Publication number: 20200218568
Abstract: An apparatus is described having multiple cores, each core having: a) a CPU; b) an accelerator; and, c) a controller and a plurality of order buffers coupled between the CPU and the accelerator. Each of the order buffers is dedicated to a different one of the CPU's threads. Each one of the order buffers is to hold one or more requests issued to the accelerator from its corresponding thread. The controller is to control issuance of the order buffers' respective requests to the accelerator.
Type: Application
Filed: December 30, 2019
Publication date: July 9, 2020
Inventors: Ronny Ronen, Boris Ginzburg, Eliezer Weissmann
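A simple way to picture the per-thread order buffers described here is a set of FIFOs, one per CPU thread, drained by a controller that decides which buffer issues next to the shared accelerator. The C++ sketch below models that arrangement with a round-robin policy; the class names, the Request layout, and the round-robin choice itself are assumptions for illustration and not taken from the publication.

```cpp
#include <cstdint>
#include <deque>
#include <iostream>
#include <vector>

// Hypothetical model: one order buffer (FIFO) per CPU thread holding requests
// bound for the shared accelerator; a controller picks which buffer issues next.
struct Request {
    uint32_t thread_id;
    uint64_t payload;
};

class OrderBufferController {
public:
    explicit OrderBufferController(uint32_t num_threads) : buffers_(num_threads) {}

    // A thread enqueues into its dedicated buffer, preserving its own ordering.
    void Enqueue(const Request& r) { buffers_[r.thread_id].push_back(r); }

    // Round-robin issue: drain one request at a time to the accelerator.
    bool IssueNext() {
        for (size_t i = 0; i < buffers_.size(); ++i) {
            auto& buf = buffers_[next_];
            next_ = (next_ + 1) % buffers_.size();
            if (!buf.empty()) {
                Request r = buf.front();
                buf.pop_front();
                std::cout << "issue thread " << r.thread_id
                          << " payload " << r.payload << "\n";
                return true;
            }
        }
        return false;   // all buffers empty
    }

private:
    std::vector<std::deque<Request>> buffers_;
    size_t next_ = 0;
};

int main() {
    OrderBufferController ctrl(/*num_threads=*/2);
    ctrl.Enqueue({0, 100});
    ctrl.Enqueue({1, 200});
    ctrl.Enqueue({0, 101});
    while (ctrl.IssueNext()) {}
    return 0;
}
```

Keeping a dedicated buffer per thread preserves each thread's own request order even though requests from different threads may interleave at the accelerator.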
-
Patent number: 10558490
Abstract: An apparatus is described having multiple cores, each core having: a) a CPU; b) an accelerator; and, c) a controller and a plurality of order buffers coupled between the CPU and the accelerator. Each of the order buffers is dedicated to a different one of the CPU's threads. Each one of the order buffers is to hold one or more requests issued to the accelerator from its corresponding thread. The controller is to control issuance of the order buffers' respective requests to the accelerator.
Type: Grant
Filed: March 30, 2012
Date of Patent: February 11, 2020
Assignee: Intel Corporation
Inventors: Ronny Ronen, Boris Ginzburg, Eliezer Weissmann
-
Patent number: 10467012
Abstract: An apparatus and method are described for coupling a front end core to an accelerator component (e.g., such as a graphics accelerator). For example, an apparatus is described comprising: an accelerator comprising one or more execution units (EUs) to execute a specified set of instructions; and a front end core comprising a translation lookaside buffer (TLB) communicatively coupled to the accelerator and providing memory access services to the accelerator, the memory access services including performing TLB lookup operations to map virtual to physical addresses on behalf of the accelerator and in response to the accelerator requiring access to a system memory.
Type: Grant
Filed: December 29, 2016
Date of Patent: November 5, 2019
Assignee: Intel Corporation
Inventors: Eliezer Weissmann, Karthikeyan Karthik Vaithianathan, Yoav Zach, Boris Ginzburg, Ronny Ronen
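The arrangement in this abstract amounts to the front end core owning address translation and performing it on the accelerator's behalf. The C++ sketch below models that service relationship: an accelerator with no MMU of its own asks a front-end-core TLB to translate each virtual address. The class names, the 4 KiB page assumption, and the fixed-map TLB are illustrative assumptions, not the patented design.

```cpp
#include <cstdint>
#include <iostream>
#include <optional>
#include <unordered_map>

// Hypothetical model: the front end core owns the TLB and translates on behalf
// of an accelerator that issues virtual addresses but has no MMU of its own.
class FrontEndCoreTLB {
public:
    void Install(uint64_t vpage, uint64_t ppage) { tlb_[vpage] = ppage; }

    // Translation service exposed to the accelerator.
    std::optional<uint64_t> Translate(uint64_t vaddr) const {
        auto it = tlb_.find(vaddr >> 12);          // assume 4 KiB pages
        if (it == tlb_.end()) return std::nullopt; // miss: front end core resolves it
        return (it->second << 12) | (vaddr & 0xFFF);
    }

private:
    std::unordered_map<uint64_t, uint64_t> tlb_;   // virtual page -> physical page
};

class Accelerator {
public:
    explicit Accelerator(const FrontEndCoreTLB& tlb) : tlb_(tlb) {}

    void Access(uint64_t vaddr) const {
        if (auto paddr = tlb_.Translate(vaddr)) {
            std::cout << "accelerator reads physical 0x" << std::hex << *paddr
                      << std::dec << "\n";
        } else {
            std::cout << "TLB miss; front end core services the translation\n";
        }
    }

private:
    const FrontEndCoreTLB& tlb_;
};

int main() {
    FrontEndCoreTLB tlb;
    tlb.Install(/*vpage=*/0x40, /*ppage=*/0x9A);
    Accelerator eu(tlb);
    eu.Access(0x40123);   // hit: translated by the front end core's TLB
    eu.Access(0x50000);   // miss: front end core would perform the lookup/walk
    return 0;
}
```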
-
Publication number: 20190205200
Abstract: A processor of an aspect includes an instruction pipeline to process a multiple memory address instruction that indicates multiple memory addresses. The processor also includes multiple page fault aggregation logic coupled with the instruction pipeline. The multiple page fault aggregation logic is to aggregate page fault information for multiple page faults that are each associated with one of the multiple memory addresses of the instruction. The multiple page fault aggregation logic is to provide the aggregated page fault information to a page fault communication interface. Other processors, apparatus, methods, and systems are also disclosed.
Type: Application
Filed: December 27, 2018
Publication date: July 4, 2019
Inventors: Boris Ginzburg, Ronny Ronen, Ilya Osadchiy
-
Publication number: 20190155606
Abstract: Disclosed embodiments relate to processing logic for performing function operations. In one example, an apparatus includes an execution unit within a processor to execute a code block, power management hardware coupled to the execution unit, wherein the power management hardware is to monitor a first execution of the code block, store a micro-architectural context of the processor in a metadata block associated with the code block, the micro-architectural context including performance data resulting from the first execution of the code block, the performance data comprising power and energy usage data, and power management related parameters, read the associated metadata block upon a second execution of the code block, and tune the second execution based on the performance data stored in the associated metadata block to increase efficiency of executing the code block.
Type: Application
Filed: January 28, 2019
Publication date: May 23, 2019
Inventors: Efraim Rotem, Eliezer Weissmann, Boris Ginzburg, Alon Naveh, Nadav Shulman, Ronny Ronen
-
Patent number: 10255126
Abstract: A processor of an aspect includes an instruction pipeline to process a multiple memory address instruction that indicates multiple memory addresses. The processor also includes multiple page fault aggregation logic coupled with the instruction pipeline. The multiple page fault aggregation logic is to aggregate page fault information for multiple page faults that are each associated with one of the multiple memory addresses of the instruction. The multiple page fault aggregation logic is to provide the aggregated page fault information to a page fault communication interface. Other processors, apparatus, methods, and systems are also disclosed.
Type: Grant
Filed: February 12, 2018
Date of Patent: April 9, 2019
Assignee: Intel Corporation
Inventors: Boris Ginzburg, Ronny Ronen, Ilya Osadchiy
-
Patent number: 10191742
Abstract: A processor saves micro-architectural contexts to increase the efficiency of code execution and power management. Power management hardware during runtime monitors execution of a code block. The code block has been compiled to have a reserved space appended to one end of the code block. The reserved space includes a metadata block associated with the code block or an identifier of the metadata block. The hardware stores a micro-architectural context of the processor in the metadata block. The micro-architectural context includes performance data resulting from a first execution of the code block. The hardware reads the metadata block upon a second execution of the code block and tunes the second execution based on the performance data.
Type: Grant
Filed: March 30, 2012
Date of Patent: January 29, 2019
Assignee: Intel Corporation
Inventors: Efraim Rotem, Eliezer Weissmann, Boris Ginzburg, Alon Naveh, Nadav Shulman, Ronny Ronen
-
Patent number: 10185566
Abstract: In one embodiment, the present invention includes a multicore processor having first and second cores to independently execute instructions, the first core visible to an operating system (OS) and the second core transparent to the OS and heterogeneous from the first core. A task controller, which may be included in or coupled to the multicore processor, can cause dynamic migration of a first process scheduled by the OS to the first core to the second core transparently to the OS. Other embodiments are described and claimed.
Type: Grant
Filed: April 27, 2012
Date of Patent: January 22, 2019
Assignee: Intel Corporation
Inventors: Alon Naveh, Yuval Yosef, Eliezer Weissmann, Anil Aggarwal, Efraim Rotem, Avi Mendelson, Ronny Ronen, Boris Ginzburg, Michael Mishaeli, Scott D. Hahn, David A. Koufaty, Ganapati Srinivasa, Guy Therien
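The key idea here is the split between what the OS believes (the process runs on the visible core it scheduled) and where the hardware actually places the work. The C++ sketch below models that bookkeeping only; TaskController and its methods are hypothetical names, and the real mechanism is hardware, not a software table.

```cpp
#include <cstdint>
#include <iostream>
#include <string>
#include <unordered_map>

// Hypothetical model: the OS schedules onto visible core IDs only; a task
// controller may remap a process to a hidden core without the OS noticing.
class TaskController {
public:
    // OS decision: always targets a visible core.
    void ScheduleVisible(uint32_t pid, uint32_t visible_core) {
        placement_[pid] = visible_core;
    }

    // Hardware-side migration: transparent to the OS, which still believes the
    // process runs on the visible core it picked.
    void MigrateTransparently(uint32_t pid, uint32_t hidden_core) {
        actual_[pid] = hidden_core;
    }

    void Report(uint32_t pid) const {
        std::cout << "OS view: core " << placement_.at(pid)
                  << ", actual core: "
                  << (actual_.count(pid) ? std::to_string(actual_.at(pid))
                                         : std::to_string(placement_.at(pid)))
                  << "\n";
    }

private:
    std::unordered_map<uint32_t, uint32_t> placement_;  // what the OS decided
    std::unordered_map<uint32_t, uint32_t> actual_;     // where the work really runs
};

int main() {
    TaskController tc;
    tc.ScheduleVisible(/*pid=*/7, /*visible_core=*/0);
    tc.MigrateTransparently(/*pid=*/7, /*hidden_core=*/4);
    tc.Report(7);
    return 0;
}
```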
-
Patent number: 10120691
Abstract: In one embodiment, a processor includes an accelerator, a decoder to decode a first instruction into a decoded first instruction, and a second instruction into a decoded second instruction, and an execution unit to execute the first decoded instruction to, for a thread executing on the accelerator that is to be placed in an inactive state, cause a save of context information for the thread, and a save of a vector identifying the accelerator corresponding to the context information, and execute the second decoded instruction to read the vector to determine the accelerator to restore saved context information into for the thread, read the saved context information, and restore the saved context information into the accelerator.
Type: Grant
Filed: July 19, 2016
Date of Patent: November 6, 2018
Assignee: Intel Corporation
Inventors: Boris Ginzburg, Ronny Ronen, Eliezer Weissmann, Karthikeyan Vaithianathan, Ehud Cohen
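To make the save/restore pairing concrete, the C++ sketch below models the two instructions as two operations: a save that stores a thread's accelerator context together with a vector identifying the owning accelerator, and a restore that reads that vector to pick the target for the reload. The names (ContextManager, SavedState, SaveContext, RestoreContext) and the register-vector representation are assumptions for illustration only.

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <utility>
#include <vector>

// Hypothetical model of the two instructions: a save that stores a thread's
// accelerator context together with a vector identifying which accelerator it
// belongs to, and a restore that reads that vector to pick the right target.
struct SavedState {
    uint32_t accelerator_id;          // the "vector" naming the accelerator
    std::vector<uint64_t> registers;  // the accelerator context itself
};

class ContextManager {
public:
    // Modeled save instruction: the thread goes inactive, its state is captured.
    void SaveContext(uint32_t thread_id, uint32_t accel_id,
                     std::vector<uint64_t> regs) {
        saved_[thread_id] = {accel_id, std::move(regs)};
    }

    // Modeled restore instruction: the saved vector tells us which accelerator
    // to load the saved registers back into.
    void RestoreContext(uint32_t thread_id) const {
        const SavedState& s = saved_.at(thread_id);
        std::cout << "restoring " << s.registers.size()
                  << " registers into accelerator " << s.accelerator_id << "\n";
    }

private:
    std::unordered_map<uint32_t, SavedState> saved_;
};

int main() {
    ContextManager cm;
    cm.SaveContext(/*thread_id=*/3, /*accel_id=*/1, {0xAB, 0xCD, 0xEF});
    cm.RestoreContext(3);
    return 0;
}
```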
-
Patent number: 10078519
Abstract: An apparatus and method are described for coupling a front end core to an accelerator component (e.g., such as a graphics accelerator). For example, an apparatus is described comprising: an accelerator comprising one or more execution units (EUs) to execute a specified set of instructions; and a front end core comprising a translation lookaside buffer (TLB) communicatively coupled to the accelerator and providing memory access services to the accelerator, the memory access services including performing TLB lookup operations to map virtual to physical addresses on behalf of the accelerator and in response to the accelerator requiring access to a system memory.
Type: Grant
Filed: July 27, 2016
Date of Patent: September 18, 2018
Assignee: Intel Corporation
Inventors: Eliezer Weissmann, Karthikeyan Karthik Vaithianathan, Yoav Zach, Boris Ginzburg, Ronny Ronen
-
Publication number: 20180181458
Abstract: A processor of an aspect includes an instruction pipeline to process a multiple memory address instruction that indicates multiple memory addresses. The processor also includes multiple page fault aggregation logic coupled with the instruction pipeline. The multiple page fault aggregation logic is to aggregate page fault information for multiple page faults that are each associated with one of the multiple memory addresses of the instruction. The multiple page fault aggregation logic is to provide the aggregated page fault information to a page fault communication interface. Other processors, apparatus, methods, and systems are also disclosed.
Type: Application
Filed: February 12, 2018
Publication date: June 28, 2018
Applicant: Intel Corporation
Inventors: Boris Ginzburg, Ronny Ronen, Ilya Osadchiy
-
Patent number: 9971688
Abstract: An apparatus and method are described for coupling a front end core to an accelerator component (e.g., such as a graphics accelerator). For example, an apparatus is described comprising: an accelerator comprising one or more execution units (EUs) to execute a specified set of instructions; and a front end core comprising a translation lookaside buffer (TLB) communicatively coupled to the accelerator and providing memory access services to the accelerator, the memory access services including performing TLB lookup operations to map virtual to physical addresses on behalf of the accelerator and in response to the accelerator requiring access to a system memory.
Type: Grant
Filed: December 29, 2016
Date of Patent: May 15, 2018
Assignee: Intel Corporation
Inventors: Eliezer Weissmann, Karthikeyan Karthik Vaithianathan, Yoav Zach, Boris Ginzburg, Ronny Ronen
-
Patent number: 9891980
Abstract: A processor of an aspect includes an instruction pipeline to process a multiple memory address instruction that indicates multiple memory addresses. The processor also includes multiple page fault aggregation logic coupled with the instruction pipeline. The multiple page fault aggregation logic is to aggregate page fault information for multiple page faults that are each associated with one of the multiple memory addresses of the instruction. The multiple page fault aggregation logic is to provide the aggregated page fault information to a page fault communication interface. Other processors, apparatus, methods, and systems are also disclosed.
Type: Grant
Filed: December 29, 2011
Date of Patent: February 13, 2018
Assignee: Intel Corporation
Inventors: Boris Ginzburg, Ronny Ronen, Ilya Osadchiy
-
Patent number: 9747221
Abstract: A computer system may support one or more techniques to allow dynamic pinning of the memory pages accessed by a non-CPU device, such as a graphics processing unit (GPU). The non-CPU may support virtual to physical address mapping and may thus be aware of the memory pages, which may not be pinned but may be accessed by the non-CPU. The non-CPU may notify or send such information to a run-time component such as a device driver associated with the CPU. The device driver may, dynamically, perform pinning of such memory pages, which may be accessed by the non-CPU. The device driver may even unpin the memory pages, which may be no longer accessed by the non-CPU. Such an approach may allow the memory pages, which may be no longer accessed by the non-CPU, to be available for allocation to the other CPUs and/or non-CPUs.
Type: Grant
Filed: September 23, 2015
Date of Patent: August 29, 2017
Assignee: Intel Corporation
Inventors: Gad Sheaffer, Boris Ginzburg, Ronny Ronen, Eliezer Weissmann
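The driver-side bookkeeping this abstract describes can be pictured with a small model: pin pages the GPU reports it is touching, and unpin those it reports it no longer needs, so that unpinned pages remain available to other CPUs and devices. The C++ sketch below is purely illustrative; DynamicPinningDriver and its notification hooks are invented names, not an actual driver API.

```cpp
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <unordered_set>

// Hypothetical model of the driver-side bookkeeping: pin pages the GPU reports
// it is accessing, unpin pages it reports releasing.
class DynamicPinningDriver {
public:
    // GPU notification: this page is about to be accessed.
    void OnGpuAccess(uint64_t page) {
        if (pinned_.insert(page).second) {
            std::cout << "pinned page 0x" << std::hex << page << std::dec << "\n";
        }
    }

    // GPU notification: this page is no longer in use.
    void OnGpuRelease(uint64_t page) {
        if (pinned_.erase(page)) {
            std::cout << "unpinned page 0x" << std::hex << page << std::dec << "\n";
        }
    }

    size_t PinnedCount() const { return pinned_.size(); }

private:
    std::unordered_set<uint64_t> pinned_;
};

int main() {
    DynamicPinningDriver driver;
    driver.OnGpuAccess(0x2000);
    driver.OnGpuAccess(0x5000);
    driver.OnGpuRelease(0x2000);   // the freed page can be reallocated elsewhere
    std::cout << driver.PinnedCount() << " page(s) still pinned\n";
    return 0;
}
```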
-
Patent number: 9720730
Abstract: In one embodiment, the present invention includes a multicore processor with first and second groups of cores. The second group can be of a different instruction set architecture (ISA) than the first group or of the same ISA set but having different power and performance support level, and is transparent to an operating system (OS). The processor further includes a migration unit that handles migration requests for a number of different scenarios and causes a context switch to dynamically migrate a process from the second core to a first core of the first group. This dynamic hardware-based context switch can be transparent to the OS. Other embodiments are described and claimed.
Type: Grant
Filed: December 30, 2011
Date of Patent: August 1, 2017
Assignee: Intel Corporation
Inventors: Boris Ginzburg, Ilya Osadchiy, Ronny Ronen, Eliezer Weissmann, Michael Mishaeli, Alon Naveh, David A. Koufaty, Scott D. Hahn, Tong Li, Avi Mendleson, Eugene Gorbatov, Hisham Abu-Salah, Dheeraj R. Subbareddy, Paolo Narvaez, Aamer Jaleel, Efraim Rotem, Yuval Yosef, Anil Aggarwal, Kenzo Van Craeynest
-
Publication number: 20170153984
Abstract: An apparatus and method are described for coupling a front end core to an accelerator component (e.g., such as a graphics accelerator). For example, an apparatus is described comprising: an accelerator comprising one or more execution units (EUs) to execute a specified set of instructions; and a front end core comprising a translation lookaside buffer (TLB) communicatively coupled to the accelerator and providing memory access services to the accelerator, the memory access services including performing TLB lookup operations to map virtual to physical addresses on behalf of the accelerator and in response to the accelerator requiring access to a system memory.
Type: Application
Filed: December 29, 2016
Publication date: June 1, 2017
Inventors: Eliezer Weissmann, Karthikeyan Karthik Vaithianathan, Yoav Zach, Boris Ginzburg, Ronny Ronen
-
Publication number: 20170109294
Abstract: An apparatus and method are described for coupling a front end core to an accelerator component (e.g., such as a graphics accelerator). For example, an apparatus is described comprising: an accelerator comprising one or more execution units (EUs) to execute a specified set of instructions; and a front end core comprising a translation lookaside buffer (TLB) communicatively coupled to the accelerator and providing memory access services to the accelerator, the memory access services including performing TLB lookup operations to map virtual to physical addresses on behalf of the accelerator and in response to the accelerator requiring access to a system memory.
Type: Application
Filed: December 29, 2016
Publication date: April 20, 2017
Inventors: Eliezer Weissmann, Karthikeyan Karthik Vaithianathan, Yoav Zach, Boris Ginzburg, Ronny Ronen