Patents by Inventor Avi Mendelson

Avi Mendelson has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11614959
    Abstract: The invention relates to a data processing system and a data processing method. The data processing system is configured to perform a hardware transactional memory (HTM) transaction. The data processing system comprises a byte-addressable nonvolatile memory for persistently storing data and a processor configured to execute an atomic HTM write operation in connection with committing the HTM transaction by writing an indicator to the nonvolatile memory indicating the successful commit of the HTM transaction. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: January 17, 2018
    Date of Patent: March 28, 2023
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Hillel Avni, Eliezer Levy, Avi Mendelson, Zuguang Wu
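
The entry above describes committing a hardware transactional memory (HTM) transaction by atomically writing a commit indicator into byte-addressable nonvolatile memory. Below is a minimal, hedged sketch of that idea: it emulates the HTM region with a lock and the nonvolatile memory with ordinary DRAM, and the `PersistentLog` / `persist_barrier` names are illustrative assumptions, not the patented interface.

```cpp
// Hedged sketch: emulates the commit protocol in ordinary DRAM. On real
// hardware the log and indicator would live in byte-addressable NVM, the
// transaction would use processor HTM instructions, and the flush/fence
// steps (shown only as comments) would make the writes durable.
#include <atomic>
#include <cstdint>
#include <cstdio>
#include <mutex>

struct PersistentLog {                  // stand-in for an NVM-resident redo log
    uint64_t new_value = 0;
    std::atomic<bool> committed{false}; // the commit indicator from the abstract
};

static std::mutex htm_fallback;         // stands in for the HTM region
static uint64_t durable_counter = 0;    // stand-in for NVM-resident data

void transactional_increment(PersistentLog& log) {
    std::lock_guard<std::mutex> tx(htm_fallback);         // "begin transaction"
    log.new_value = durable_counter + 1;                   // stage the update
    // persist_barrier(&log.new_value);                    // e.g. CLWB + SFENCE
    log.committed.store(true, std::memory_order_release);  // atomic commit write
    // persist_barrier(&log.committed);                    // indicator now durable
    durable_counter = log.new_value;                        // apply after commit
}                                                           // "end transaction"

int main() {
    PersistentLog log;
    transactional_increment(log);
    std::printf("counter=%llu committed=%d\n",
                (unsigned long long)durable_counter, (int)log.committed.load());
}
```
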
  • Patent number: 10185566
    Abstract: In one embodiment, the present invention includes a multicore processor having first and second cores to independently execute instructions, the first core visible to an operating system (OS) and the second core transparent to the OS and heterogeneous from the first core. A task controller, which may be included in or coupled to the multicore processor, can cause a first process, scheduled by the OS onto the first core, to be dynamically migrated to the second core transparently to the OS. Other embodiments are described and claimed. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: April 27, 2012
    Date of Patent: January 22, 2019
    Assignee: Intel Corporation
    Inventors: Alon Naveh, Yuval Yosef, Eliezer Weissmann, Anil Aggarwal, Efraim Rotem, Avi Mendelson, Ronny Ronen, Boris Ginzburg, Michael Mishaeli, Scott D. Hahn, David A. Koufaty, Ganapati Srinivasa, Guy Therien
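
The abstract above describes a task controller that migrates work from an OS-visible core to an OS-transparent heterogeneous core. The sketch below is only a user-level simulation of that transparent redirection under a hypothetical load-based policy; `TaskController` and its threshold are illustrative, and real migration would occur below the OS in hardware or firmware.

```cpp
// Hedged simulation: the "task controller" decides, invisibly to the
// submitter (standing in for the OS scheduler), whether work runs on the
// visible core or on a hidden heterogeneous core.
#include <cstdio>
#include <functional>

enum class Core { VisibleBig, HiddenSmall };

class TaskController {
public:
    // The submitter always targets the visible core; the controller may
    // transparently redirect the task based on a (hypothetical) policy.
    void run_on_visible_core(const std::function<void()>& task, double load) {
        Core chosen = (load < 0.3) ? Core::HiddenSmall : Core::VisibleBig;
        dispatch(chosen, task);
    }
private:
    void dispatch(Core c, const std::function<void()>& task) {
        std::printf("running on %s core\n",
                    c == Core::VisibleBig ? "visible big" : "hidden small");
        task();   // in the patented scheme, architectural state would be
    }             // transferred so the OS never observes the migration
};

int main() {
    TaskController tc;
    tc.run_on_visible_core([] { std::puts("light background task"); }, 0.1);
    tc.run_on_visible_core([] { std::puts("heavy foreground task"); }, 0.9);
}
```
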
  • Publication number: 20180143850
    Abstract: The invention relates to a data processing system and a data processing method. The data processing system is configured to perform a hardware transactional memory (HTM) transaction. The data processing system comprises a byte-addressable nonvolatile memory for persistently storing data and a processor configured to execute an atomic HTM write operation in connection with committing the HTM transaction by writing an indicator to the nonvolatile memory indicating the successful commit of the HTM transaction.
    Type: Application
    Filed: January 17, 2018
    Publication date: May 24, 2018
    Applicant: Huawei Technologies Co., Ltd.
    Inventors: Hillel Avni, Eliezer Levy, Avi Mendelson, Zuguang Wu
  • Patent number: 9588826
    Abstract: Embodiments of the invention provide a programming model for CPU-GPU platforms. In particular, embodiments of the invention provide a uniform programming model for both integrated and discrete devices. The model also works uniformly for multiple GPU cards and hybrid GPU systems (discrete and integrated). This allows software vendors to write a single application stack and target it to all the different platforms. Additionally, embodiments of the invention provide a shared memory model between the CPU and GPU. Instead of sharing the entire virtual address space, only a part of the virtual address space needs to be shared. This allows efficient implementation in both discrete and integrated settings. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: December 10, 2014
    Date of Patent: March 7, 2017
    Assignee: Intel Corporation
    Inventors: Hu Chen, Ying Gao, Xiaocheng Zhou, Shoumeng Yan, Peinan Zhang, Mohan Rajagopalan, Jesse Fang, Avi Mendelson, Bratin Saha
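
This patent family describes a CPU-GPU shared memory model in which only part of the virtual address space is shared. The sketch below is a CPU-only approximation: a hypothetical `SharedArena` models the shared window, pointers obtained from it stand in for addresses valid on both sides, and ordinary allocations remain CPU-private. The names are illustrative, not the patented API.

```cpp
// Hedged sketch: models a designated shared region of the virtual address
// space. Only pointers handed out by SharedArena are assumed to be usable on
// both the CPU and the (hypothetical) GPU side.
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

class SharedArena {                      // stands in for the shared VA window
public:
    explicit SharedArena(std::size_t bytes) : storage_(bytes), used_(0) {}
    void* alloc(std::size_t n) {
        if (used_ + n > storage_.size()) return nullptr;
        void* p = storage_.data() + used_;
        used_ += n;
        return p;                        // same address meaningful to both sides
    }
    bool owns(const void* p) const {     // is this address in the shared window?
        const std::byte* b = storage_.data();
        const std::byte* q = static_cast<const std::byte*>(p);
        return q >= b && q < b + storage_.size();
    }
private:
    std::vector<std::byte> storage_;
    std::size_t used_;
};

int main() {
    SharedArena shared(1 << 20);                 // 1 MiB shared window
    int* shared_counter = static_cast<int*>(shared.alloc(sizeof(int)));
    int  private_counter = 0;                    // CPU-private, not shared
    *shared_counter = 42;
    std::printf("shared? %d / %d\n",
                (int)shared.owns(shared_counter), (int)shared.owns(&private_counter));
}
```
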
  • Patent number: 9535838
    Abstract: A method and apparatus for enhancing/extending a serial point-to-point interconnect architecture, such as Peripheral Component Interconnect Express (PCIe), is herein described. Temporal and locality caching hints and prefetching hints are provided to improve system-wide caching and prefetching. Message codes for atomic operations to arbitrate ownership between system devices/resources are included to allow efficient access to and ownership of shared data. Loose transaction ordering is provided for, while maintaining corresponding transaction priority to memory locations, to ensure data integrity and efficient memory access. Active power sub-states, and the setting thereof, are included to allow for more efficient power management. Finally, caching of device-local memory in a host address space, as well as caching of system memory in a device-local memory address space, is provided for to improve bandwidth and latency for memory accesses. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: November 30, 2012
    Date of Patent: January 3, 2017
    Assignee: Intel Corporation
    Inventors: Jasmin Ajanovic, Mahesh Wagh, Prashant Sethi, Debendra Das Sharma, David J. Harriman, Mark B. Rosenbluth, Ajay V. Bhatt, Peter Barry, Scott Dion Rodgers, Anil Vasudevan, Sridhar Muthrasanallur, James Akiyama, Robert G. Blankenship, Ohad Falik, Avi Mendelson, Ilan Pardo, Eran Tamari, Eliezer Weissmann, Doron Shamia
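
The PCIe-extension abstracts in this family mention per-request caching hints, prefetching hints, and loose transaction ordering. The struct below is purely illustrative and does not reproduce the real PCIe TLP encoding; it only shows how such hints might accompany a memory request descriptor, with all field names being assumptions.

```cpp
// Hedged illustration only: a hypothetical request descriptor carrying the
// kinds of hints the abstract describes. This is not the PCIe wire format.
#include <cstdint>
#include <cstdio>

enum class CacheHint : uint8_t { None, Temporal, NonTemporal };
enum class PrefetchHint : uint8_t { None, Sequential, Strided };

struct MemRequest {
    uint64_t addr;
    uint32_t length;
    CacheHint cache_hint;        // temporal/locality caching hint
    PrefetchHint prefetch_hint;  // prefetching hint
    bool relaxed_ordering;       // "loose transaction ordering" flag
};

void issue(const MemRequest& r) {
    std::printf("rd 0x%llx len=%u cache=%u prefetch=%u relaxed=%d\n",
                (unsigned long long)r.addr, r.length,
                (unsigned)r.cache_hint, (unsigned)r.prefetch_hint,
                (int)r.relaxed_ordering);
}

int main() {
    issue({0x1000, 64, CacheHint::Temporal, PrefetchHint::Sequential, true});
}
```
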
  • Patent number: 9442855
    Abstract: A method and apparatus for enhancing/extending a serial point-to-point interconnect architecture, such as Peripheral Component Interconnect Express (PCIe), is herein described. Temporal and locality caching hints and prefetching hints are provided to improve system-wide caching and prefetching. Message codes for atomic operations to arbitrate ownership between system devices/resources are included to allow efficient access to and ownership of shared data. Loose transaction ordering is provided for, while maintaining corresponding transaction priority to memory locations, to ensure data integrity and efficient memory access. Active power sub-states, and the setting thereof, are included to allow for more efficient power management. Finally, caching of device-local memory in a host address space, as well as caching of system memory in a device-local memory address space, is provided for to improve bandwidth and latency for memory accesses.
    Type: Grant
    Filed: December 27, 2014
    Date of Patent: September 13, 2016
    Assignee: Intel Corporation
    Inventors: Jasmin Ajanovic, Mahesh Wagh, Prashant Sethi, Debendra Das Sharma, David J. Harriman, Mark B. Rosenbluth, Ajay V. Bhatt, Peter Barry, Scott Dion Rodgers, Anil Vasudevan, Sridhar Muthrasanallur, James Akiyama, Robert G. Blankenship, Ohad Falik, Avi Mendelson, Ilan Pardo, Eran Tamari, Eliezer Weissmann, Doron Shamia
  • Patent number: 9400702
    Abstract: Embodiments of the invention provide a programming model for CPU-GPU platforms. In particular, embodiments of the invention provide a uniform programming model for both integrated and discrete devices. The model also works uniformly for multiple GPU cards and hybrid GPU systems (discrete and integrated). This allows software vendors to write a single application stack and target it to all the different platforms. Additionally, embodiments of the invention provide a shared memory model between the CPU and GPU. Instead of sharing the entire virtual address space, only a part of the virtual address space needs to be shared. This allows efficient implementation in both discrete and integrated settings.
    Type: Grant
    Filed: July 1, 2014
    Date of Patent: July 26, 2016
    Assignee: Intel Corporation
    Inventors: Hu Chen, Ying Gao, Xiaocheng Zhou, Shoumeng Yan, Peinan Zhang, Mohan Rajagopalan, Jesse Fang, Avi Mendelson, Bratin Saha
  • Patent number: 9098415
    Abstract: A method and apparatus for enhancing/extending a serial point-to-point interconnect architecture, such as Peripheral Component Interconnect Express (PCIe), is herein described. Temporal and locality caching hints and prefetching hints are provided to improve system-wide caching and prefetching. Message codes for atomic operations to arbitrate ownership between system devices/resources are included to allow efficient access to and ownership of shared data. Loose transaction ordering is provided for, while maintaining corresponding transaction priority to memory locations, to ensure data integrity and efficient memory access. Active power sub-states, and the setting thereof, are included to allow for more efficient power management. Finally, caching of device-local memory in a host address space, as well as caching of system memory in a device-local memory address space, is provided for to improve bandwidth and latency for memory accesses.
    Type: Grant
    Filed: November 30, 2012
    Date of Patent: August 4, 2015
    Assignee: Intel Corporation
    Inventors: Jasmin Ajanovic, Mahesh Wagh, Prashant Sethi, Debendra Das Sharma, David J. Harriman, Mark B. Rosenbluth, Ajay V. Bhatt, Peter Barry, Scott Dion Rodgers, Anil Vasudevan, Sridhar Muthrasanallur, James Akiyama, Robert G. Blankenship, Ohad Falik, Avi Mendelson, Ilan Pardo, Eran Tamari, Eliezer Weissmann, Doron Shamia
  • Publication number: 20150161050
    Abstract: A method and apparatus for enhancing/extending a serial point-to-point interconnect architecture, such as Peripheral Component Interconnect Express (PCIe), is herein described. Temporal and locality caching hints and prefetching hints are provided to improve system-wide caching and prefetching. Message codes for atomic operations to arbitrate ownership between system devices/resources are included to allow efficient access to and ownership of shared data. Loose transaction ordering is provided for, while maintaining corresponding transaction priority to memory locations, to ensure data integrity and efficient memory access. Active power sub-states, and the setting thereof, are included to allow for more efficient power management. Finally, caching of device-local memory in a host address space, as well as caching of system memory in a device-local memory address space, is provided for to improve bandwidth and latency for memory accesses.
    Type: Application
    Filed: December 27, 2014
    Publication date: June 11, 2015
    Inventors: Jasmin Ajanovic, Mahesh Wagh, Prashant Sethi, Debendra Das Sharma, David J. Harriman, Mark B. Rosenbluth, Ajay V. Bhatt, Peter Barry, Scott Dion Rodgers, Anil Vasudevan, Sridhar Muthrasanallur, James Akiyama, Robert G. Blankenship, Ohad Falik, Avi Mendelson, Ilan Pardo, Eran Tamari, Eliezer Weissmann, Doron Shamia
  • Publication number: 20150149683
    Abstract: A method and apparatus for enhancing/extending a serial point-to-point interconnect architecture, such as Peripheral Component Interconnect Express (PCIe), is herein described. Temporal and locality caching hints and prefetching hints are provided to improve system-wide caching and prefetching. Message codes for atomic operations to arbitrate ownership between system devices/resources are included to allow efficient access to and ownership of shared data. Loose transaction ordering is provided for, while maintaining corresponding transaction priority to memory locations, to ensure data integrity and efficient memory access. Active power sub-states, and the setting thereof, are included to allow for more efficient power management. Finally, caching of device-local memory in a host address space, as well as caching of system memory in a device-local memory address space, is provided for to improve bandwidth and latency for memory accesses.
    Type: Application
    Filed: November 30, 2012
    Publication date: May 28, 2015
    Inventors: Jasmin Ajanovic, Mahesh Wagh, Prashant Sethi, Debendra Das Sharma, David J. Harriman, Mark B. Rosenbluth, Ajay V. Bhatt, Peter Barry, Scott Dion Rodgers, Anil Vasudevan, Sridhar Muthrasanallur, James Akiyama, Robert G. Blankenship, Ohad Falik, Avi Mendelson, Ilan Pardo, Eran Tamari, Eliezer Weissmann, Doron Shamia
  • Patent number: 9032103
    Abstract: A method and apparatus for enhancing/extending a serial point-to-point interconnect architecture, such as Peripheral Component Interconnect Express (PCIe), is herein described. Temporal and locality caching hints and prefetching hints are provided to improve system-wide caching and prefetching. Message codes for atomic operations to arbitrate ownership between system devices/resources are included to allow efficient access to and ownership of shared data. Loose transaction ordering is provided for, while maintaining corresponding transaction priority to memory locations, to ensure data integrity and efficient memory access. Active power sub-states, and the setting thereof, are included to allow for more efficient power management. Finally, caching of device-local memory in a host address space, as well as caching of system memory in a device-local memory address space, is provided for to improve bandwidth and latency for memory accesses.
    Type: Grant
    Filed: December 6, 2012
    Date of Patent: May 12, 2015
    Assignee: Intel Corporation
    Inventors: Jasmin Ajanovic, Mahesh Wagh, Prashant Sethi, Debendra Das Sharma, David J. Harriman, Mark B. Rosenbluth, Ajay V. Bhatt, Peter Barry, Scott Dion Rodgers, Anil Vasudevan, Sridhar Muthrasanallur, James Akiyama, Robert G. Blankenship, Ohad Falik, Avi Mendelson, Ilan Pardo, Eran Tamari, Eliezer Weissmann, Doron Shamia
  • Publication number: 20150123978
    Abstract: Embodiments of the invention provide a programming model for CPU-GPU platforms. In particular, embodiments of the invention provide a uniform programming model for both integrated and discrete devices. The model also works uniformly for multiple GPU cards and hybrid GPU systems (discrete and integrated). This allows software vendors to write a single application stack and target it to all the different platforms. Additionally, embodiments of the invention provide a shared memory model between the CPU and GPU. Instead of sharing the entire virtual address space, only a part of the virtual address space needs to be shared. This allows efficient implementation in both discrete and integrated settings.
    Type: Application
    Filed: December 10, 2014
    Publication date: May 7, 2015
    Inventors: Hu Chen, Ying Gao, Xiaocheng Zhou, Shoumeng Yan, Peinan Zhang, Mohan Rajagopalan, Jesse Fang, Avi Mendelson, Bratin Saha
  • Patent number: 9026682
    Abstract: A method and apparatus for enhancing/extending a serial point-to-point interconnect architecture, such as Peripheral Component Interconnect Express (PCIe), is herein described. Temporal and locality caching hints and prefetching hints are provided to improve system-wide caching and prefetching. Message codes for atomic operations to arbitrate ownership between system devices/resources are included to allow efficient access to and ownership of shared data. Loose transaction ordering is provided for, while maintaining corresponding transaction priority to memory locations, to ensure data integrity and efficient memory access. Active power sub-states, and the setting thereof, are included to allow for more efficient power management. Finally, caching of device-local memory in a host address space, as well as caching of system memory in a device-local memory address space, is provided for to improve bandwidth and latency for memory accesses.
    Type: Grant
    Filed: December 13, 2012
    Date of Patent: May 5, 2015
    Assignee: Intel Corporation
    Inventors: Jasmin Ajanovic, Mahesh Wagh, Prashant Sethi, Debendra Das Sharma, David J. Harriman, Mark B. Rosenbluth, Ajay V. Bhatt, Peter Barry, Scott Dion Rodgers, Anil Vasudevan, Sridhar Muthrasanallur, James Akiyama, Robert G. Blankenship, Ohad Falik, Avi Mendelson, Ilan Pardo, Eran Tamari, Eliezer Weissmann, Doron Shamia
  • Patent number: 9003421
    Abstract: Disclosed are embodiments of a system, methods, and mechanisms for using idle thread units to perform acceleration threads that are transparent to the operating system. When the operating system scheduler has no work to schedule on the idle thread units, the operating system may issue a halt, monitor/mwait, or other instruction to place the thread unit into an idle state. While the thread unit is idle from the operating system's perspective, it may be utilized to perform speculative acceleration threads in order to accelerate threads running on non-idle thread units. The context of the idle thread unit is saved prior to execution of the acceleration thread and is restored when the operating system requires use of the thread unit. The acceleration threads are transparent to the operating system. Other embodiments are also described and claimed. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: November 28, 2005
    Date of Patent: April 7, 2015
    Assignee: Intel Corporation
    Inventors: Ron Gabor, Gad Sheaffer, Avi Mendelson, Uri C. Weiser, Hong Wang
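
The abstract above describes using thread units that the OS believes are idle to run speculative acceleration threads, saving and restoring their context around that work. The following user-space simulation is a hedged sketch of that behavior; the `Context` type and the helper workload are hypothetical, and real hardware would perform this below OS visibility.

```cpp
// Hedged simulation: models a thread unit the OS believes is halted. While
// idle it runs speculative "acceleration" work, then restores the saved
// context before the OS reclaims it.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

struct Context { int saved_state = 0; };   // stand-in for architectural state

std::atomic<bool> os_needs_unit{false};
std::atomic<long> speculative_iters{0};    // result of the acceleration thread

void idle_thread_unit() {
    Context ctx;                           // save context before acceleration
    while (!os_needs_unit.load(std::memory_order_acquire)) {
        speculative_iters.fetch_add(1, std::memory_order_relaxed); // helper work
    }
    ctx.saved_state = 0;                   // restore context; OS sees no trace
    (void)ctx;
}

int main() {
    std::thread unit(idle_thread_unit);
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
    os_needs_unit.store(true, std::memory_order_release);  // OS reclaims the unit
    unit.join();
    std::printf("speculative iterations while idle: %ld\n", speculative_iters.load());
}
```
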
  • Patent number: 8997114
    Abstract: Embodiments of the invention provide language support for CPU-GPU platforms. In one embodiment, code can be flexibly executed on both the CPU and GPU. CPU code can offload a kernel to the GPU. That kernel may in turn call preexisting libraries on the CPU, or make other calls into CPU functions. This allows an application to be built without requiring the entire call chain to be recompiled. Additionally, in one embodiment data may be shared seamlessly between the CPU and GPU. This includes sharing objects that may have virtual functions. Embodiments thus ensure the right virtual function gets invoked on the CPU or the GPU if a virtual function is called by either the CPU or GPU. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: February 5, 2014
    Date of Patent: March 31, 2015
    Assignee: Intel Corporation
    Inventors: Xiaocheng Zhou, Shoumeng Yan, Ying Gao, Hu Chen, Peinan Zhang, Mohan Rajagopalan, Avi Mendelson, Bratin Saha
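
This family covers language support for sharing objects, including ones with virtual functions, between CPU and GPU so that the right implementation is invoked on whichever side makes the call. The sketch below is a CPU-only analogy: a per-side dispatch table stands in for per-address-space vtables, and all names are illustrative, not the patented mechanism.

```cpp
// Hedged sketch: everything runs on the CPU; the "GPU side" is simply a
// second dispatch table. It illustrates a shared object's call resolving to
// the right implementation on whichever side invokes it.
#include <cstdio>

struct Shape {                // object assumed to live in shared memory
    int kind;                 // index into a per-device dispatch table
    double size;
};

using AreaFn = double (*)(const Shape&);

double cpu_square_area(const Shape& s) { return s.size * s.size; }
double gpu_square_area(const Shape& s) { return s.size * s.size; } // device copy

// One "vtable" per side; a real system would patch pointers per address space.
AreaFn cpu_table[] = { cpu_square_area };
AreaFn gpu_table[] = { gpu_square_area };

double area_on(const Shape& s, bool on_gpu) {
    return (on_gpu ? gpu_table : cpu_table)[s.kind](s);
}

int main() {
    Shape sq{0, 3.0};
    std::printf("cpu=%f gpu=%f\n", area_on(sq, false), area_on(sq, true));
}
```
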
  • Publication number: 20140375662
    Abstract: Embodiments of the invention provide a programming model for CPU-GPU platforms. In particular, embodiments of the invention provide a uniform programming model for both integrated and discrete devices. The model also works uniformly for multiple GPU cards and hybrid GPU systems (discrete and integrated). This allows software vendors to write a single application stack and target it to all the different platforms. Additionally, embodiments of the invention provide a shared memory model between the CPU and GPU. Instead of sharing the entire virtual address space, only a part of the virtual address space needs to be shared. This allows efficient implementation in both discrete and integrated settings.
    Type: Application
    Filed: July 1, 2014
    Publication date: December 25, 2014
    Inventors: Hu Chen, Ying Gao, Xiaocheng Zhou, Shoumeng Yan, Peinan Zhang, Mohan Rajagopalan, Jesse Fang, Avi Mendelson, Bratin Saha
  • Publication number: 20140306972
    Abstract: Embodiments of the invention provide language support for CPU-GPU platforms. In one embodiment, code can be flexibly executed on both the CPU and GPU. CPU code can offload a kernel to the GPU. That kernel may in turn call preexisting libraries on the CPU, or make other calls into CPU functions. This allows an application to be built without requiring the entire call chain to be recompiled. Additionally, in one embodiment data may be shared seamlessly between CPU and GPU. This includes sharing objects that may have virtual functions. Embodiments thus ensure the right virtual function gets invoked on the CPU or the GPU if a virtual function is called by either the CPU or GPU.
    Type: Application
    Filed: February 5, 2014
    Publication date: October 16, 2014
    Inventors: Xiaocheng Zhou, Shoumeng Yan, Ying Gao, Hu Chen, Peinan Zhang, Mohan Rajagopalan, Avi Mendelson, Bratin Saha
  • Patent number: 8806068
    Abstract: A method and apparatus for enhancing/extending a serial point-to-point interconnect architecture, such as Peripheral Component Interconnect Express (PCIe), is herein described. Temporal and locality caching hints and prefetching hints are provided to improve system-wide caching and prefetching. Message codes for atomic operations to arbitrate ownership between system devices/resources are included to allow efficient access to and ownership of shared data. Loose transaction ordering is provided for, while maintaining corresponding transaction priority to memory locations, to ensure data integrity and efficient memory access. Active power sub-states, and the setting thereof, are included to allow for more efficient power management. Finally, caching of device-local memory in a host address space, as well as caching of system memory in a device-local memory address space, is provided for to improve bandwidth and latency for memory accesses.
    Type: Grant
    Filed: December 6, 2012
    Date of Patent: August 12, 2014
    Assignee: Intel Corporation
    Inventors: Jasmin Ajanovic, Mahesh Wagh, Prashant Sethi, Debendra Das Sharma, David J. Harriman, Mark B. Rosenbluth, Ajay V. Bhatt, Peter Barry, Scott Dion Rodgers, Anil Vasudevan, Sridhar Muthrasanallur, James Akiyama, Robert G. Blankenship, Ohad Falik, Avi Mendelson, Ilan Pardo, Eran Tamari, Eliezer Weissmann, Doron Shamia
  • Publication number: 20140129808
    Abstract: In one embodiment, the present invention includes a multicore processor having first and second cores to independently execute instructions, the first core visible to an operating system (OS) and the second core transparent to the OS and heterogeneous from the first core. A task controller, which may be included in or coupled to the multicore processor, can cause dynamic migration of a first process scheduled by the OS to the first core to the second core transparently to the OS. Other embodiments are described and claimed.
    Type: Application
    Filed: April 27, 2012
    Publication date: May 8, 2014
    Inventors: Alon Naveh, Yuval Yosef, Eliezer Weissmann, Anil Aggarwal, Efraim Rotem, Avi Mendelson, Ronny Ronen, Boris Ginzburg, Michael Mishaeli, Scott D. Hahn, David A. Koufaty, Ganapati Srinivasa, Guy Therien
  • Patent number: 8683487
    Abstract: Embodiments of the invention provide language support for CPU-GPU platforms. In one embodiment, code can be flexibly executed on both the CPU and GPU. CPU code can offload a kernel to the GPU. That kernel may in turn call preexisting libraries on the CPU, or make other calls into CPU functions. This allows an application to be built without requiring the entire call chain to be recompiled. Additionally, in one embodiment data may be shared seamlessly between CPU and GPU. This includes sharing objects that may have virtual functions. Embodiments thus ensure the right virtual function gets invoked on the CPU or the GPU if a virtual function is called by either the CPU or GPU.
    Type: Grant
    Filed: March 11, 2013
    Date of Patent: March 25, 2014
    Assignee: Intel Corporation
    Inventors: Zhou Xiaocheng, Shoumeng Yan, Gao Ying, Hu Chen, Peinan Zhang, Mohan Rajagopalan, Avi Mendelson, Bratin Saha