Patents by Inventor Bassam N. Coury

Bassam N. Coury has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11625277
    Abstract: Systems and methods may be used to determine where to run a service based on workload-based conditions or system-level conditions. An example method may include determining whether power available to a resource of a compute device satisfies a target power, for example to satisfy a target performance for a workload. When the power available is insufficient, an additional resource may be provided, for example on a remote device from the compute device. The additional resource may be used as a replacement for the resource of the compute device or to augment the resource of the compute device.
    Type: Grant
    Filed: May 20, 2020
    Date of Patent: April 11, 2023
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Kshitij Arun Doshi, Bassam N. Coury, Suraj Prabhakaran, Timothy Verrall
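The placement decision this abstract describes can be sketched in a few lines. This is an illustrative sketch only, not the patented implementation; the function name, return labels, and the idea of a simple power comparison are all assumptions made for illustration.

```python
# Illustrative only: decide whether a workload's target power can be met by
# the local resource, or whether a resource on a remote device should
# replace or augment it. All names here are hypothetical.
def place_workload(available_power_w, target_power_w, remote_available=True):
    """Return a placement decision for a workload's resource."""
    if available_power_w >= target_power_w:
        return "local"            # local resource alone meets the target
    if not remote_available:
        return "degraded-local"   # no remote resource to fall back on
    # Local power is insufficient: augment with, or fail over to, a
    # resource on a remote compute device.
    if available_power_w > 0:
        return "augment-remote"   # split the workload across devices
    return "replace-remote"       # run entirely on the remote device
```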
  • Publication number: 20230092541
    Abstract: Methods and apparatus to minimize hot/cold page detection overhead on running workloads. A page meta data structure is populated with meta data associated with memory pages in one or more far memory tiers. In conjunction with one or more processes accessing memory pages to perform workloads, the page meta data structure is updated to reflect accesses to the memory pages. The page meta data, which reflects the current state of memory, is used to determine which pages are “hot” pages and which pages are “cold” pages, wherein hot pages are memory pages with relatively higher access frequencies and cold pages are memory pages with relatively lower access frequencies. Variations on the approach include filtering meta data updates to pages in memory regions of interest and applying one or more filters to trigger meta data updates based on one or more conditions. A callback function may also be triggered to execute synchronously with memory page accesses.
    Type: Application
    Filed: September 23, 2021
    Publication date: March 23, 2023
    Inventors: Francois DUGAST, Durgesh SRIVASTAVA, Sujoy SEN, Lidia WARNES, Thomas E. WILLIS, Bassam N. COURY
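The core classification the abstract describes — per-page access metadata, a region-of-interest filter, and a hot/cold split by access frequency — can be sketched as follows. This is a minimal illustration under assumed names; the real mechanism operates on hardware page metadata, not a Python dictionary.

```python
# Illustrative sketch of hot/cold page classification from per-page
# access metadata. The threshold, region filter, and names are hypothetical.
from collections import Counter

meta = Counter()                          # page meta data: page -> access count
region_of_interest = range(0x1000, 0x2000)

def record_access(page, region=region_of_interest):
    """Update page meta data, filtering to a memory region of interest."""
    if page in region:                    # filter: only track this region
        meta[page] += 1

def classify_pages(access_counts, hot_threshold):
    """Split pages into hot and cold sets by access frequency."""
    hot = {p for p, n in access_counts.items() if n >= hot_threshold}
    cold = set(access_counts) - hot
    return hot, cold
```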
  • Publication number: 20220334963
    Abstract: Examples described herein relate to circuitry, when operational, configured to: store records of memory accesses to a memory device by at least one requester based on a configuration, wherein the configuration is to specify a duration of memory access capture. In some examples, the at least one requester comprises one or more workloads running on one or more processors. In some examples, the configuration is to specify collection of one or more of: physical address ranges or read or write access type.
    Type: Application
    Filed: June 24, 2022
    Publication date: October 20, 2022
    Inventors: Ankit PATEL, Lidia WARNES, Donald L. FAW, Bassam N. COURY, Douglas CARRIGAN, Hugh WILKINSON, Ananthan AYYASAMY, Michael F. FALLON
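The configuration the abstract describes — a capture duration plus optional physical address ranges and access types — might be modeled as below. The class and its fields are assumptions for illustration; the patent concerns circuitry, not software.

```python
# Hypothetical sketch of a configurable memory-access capture window.
import time

class AccessRecorder:
    def __init__(self, duration_s, addr_ranges=None, access_types=("read", "write")):
        self.duration_s = duration_s          # configured capture duration
        self.addr_ranges = addr_ranges        # e.g. [(0x100, 0x1FF)]; None = all
        self.access_types = set(access_types)
        self.records = []
        self.start = time.monotonic()

    def observe(self, addr, kind):
        """Record an access if it matches the configuration; return whether stored."""
        if time.monotonic() - self.start > self.duration_s:
            return False                      # capture window has elapsed
        if kind not in self.access_types:
            return False                      # wrong access type (read/write)
        if self.addr_ranges and not any(lo <= addr <= hi for lo, hi in self.addr_ranges):
            return False                      # outside configured address ranges
        self.records.append((addr, kind))
        return True
```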
  • Patent number: 11245604
    Abstract: Embodiments may be generally directed to apparatuses, systems, methods, and techniques to determine a configuration for a plurality of connectors, the configuration to associate a first interconnect protocol with a first subset of the plurality of connectors and a second interconnect protocol with a second subset of the plurality of connectors, the first interconnect protocol and the second interconnect protocol being different interconnect protocols and each comprising one of a serial link protocol, a coherent link protocol, and an accelerator link protocol; cause processing of data for communication via the first subset of the plurality of connectors in accordance with the first interconnect protocol; and cause processing of data for communication via the second subset of the plurality of connectors in accordance with the second interconnect protocol.
    Type: Grant
    Filed: November 18, 2020
    Date of Patent: February 8, 2022
    Assignee: INTEL CORPORATION
    Inventors: Mahesh Wagh, Mark S. Myers, Stephen R. Van Doren, Dimitrios Ziakas, Bassam N. Coury
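Stripped of claim language, the configuration the abstract describes is a partition of connectors between two distinct link protocols. A minimal sketch, with protocol labels and function names assumed for illustration:

```python
# Illustrative only: assign one interconnect protocol to a subset of
# connectors and a different protocol to the rest.
PROTOCOLS = {"serial", "coherent", "accelerator"}

def configure_connectors(n_connectors, first_proto, first_subset, second_proto):
    """Map each connector index to first_proto or second_proto."""
    assert first_proto in PROTOCOLS and second_proto in PROTOCOLS
    assert first_proto != second_proto   # the two protocols must differ
    return {c: (first_proto if c in first_subset else second_proto)
            for c in range(n_connectors)}
```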
  • Publication number: 20210200667
    Abstract: Examples described herein relate to memory thin provisioning in a memory pool of one or more dual in-line memory modules or memory devices. At any instance, any central processing unit (CPU) can request and receive a full virtual allocation of memory in an amount that exceeds the physical memory attached to the CPU (near memory). A remote pool of additional memory can be dynamically utilized to fill the gap between allocated memory and near memory. This remote pool is shared between multiple CPUs, with dynamic assignment and address re-mapping provided for the remote pool. To improve performance, the near memory can be operated as a cache of the pool memory. Inclusive or exclusive content storage configurations can be applied. An inclusive cache configuration can include an entry in a near memory cache also being stored in a memory pool whereas an exclusive cache configuration can provide an entry in either a near memory cache or in a memory pool but not both.
    Type: Application
    Filed: December 26, 2019
    Publication date: July 1, 2021
    Inventors: Debra BERNSTEIN, Hugh WILKINSON, Douglas CARRIGAN, Bassam N. COURY, Matthew J. ADILETTA, Durgesh SRIVASTAVA, Lidia WARNES, William WHEELER, Michael F. FALLON
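The inclusive/exclusive distinction at the end of this abstract can be made concrete with a toy two-tier model: near memory acting as a cache in front of a shared remote pool. Everything below (class name, eviction policy, capacities) is an assumption for illustration, not the patented design.

```python
class TieredMemory:
    """Sketch: near memory operated as a cache of a shared remote pool.

    inclusive=True keeps a pooled copy of every cached entry; exclusive
    mode keeps each entry in exactly one tier but not both.
    """
    def __init__(self, near_capacity, inclusive=True):
        self.near = {}                        # near-memory cache: addr -> value
        self.pool = {}                        # remote memory pool
        self.near_capacity = near_capacity
        self.inclusive = inclusive

    def write(self, addr, value):
        if len(self.near) >= self.near_capacity and addr not in self.near:
            victim, v = self.near.popitem()   # evict to the pool to make room
            self.pool[victim] = v
        self.near[addr] = value
        if self.inclusive:
            self.pool[addr] = value           # inclusive: pool holds a copy
        else:
            self.pool.pop(addr, None)         # exclusive: one tier only

    def read(self, addr):
        return self.near.get(addr, self.pool.get(addr))
```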
  • Publication number: 20210149812
    Abstract: Examples described herein include an apparatus comprising a network interface configured to: receive a request to copy data from a local memory to a remote memory; and, based on a configuration that the network interface is to manage a cache, store the data into the cache and record that the data is stored in the cache. In some examples, storing the data in the cache comprises storing the most recently evicted data from the local memory into the cache. In some examples, the network interface is to store data evicted from the local memory that is not stored into the cache into one or more remote memories.
    Type: Application
    Filed: November 24, 2020
    Publication date: May 20, 2021
    Inventors: Sujoy SEN, Durgesh SRIVASTAVA, Thomas E. WILLIS, Bassam N. COURY, Marcelo CINTRA
  • Publication number: 20210109300
    Abstract: Embodiments may be generally directed to apparatuses, systems, methods, and techniques to determine a configuration for a plurality of connectors, the configuration to associate a first interconnect protocol with a first subset of the plurality of connectors and a second interconnect protocol with a second subset of the plurality of connectors, the first interconnect protocol and the second interconnect protocol being different interconnect protocols and each comprising one of a serial link protocol, a coherent link protocol, and an accelerator link protocol; cause processing of data for communication via the first subset of the plurality of connectors in accordance with the first interconnect protocol; and cause processing of data for communication via the second subset of the plurality of connectors in accordance with the second interconnect protocol.
    Type: Application
    Filed: November 18, 2020
    Publication date: April 15, 2021
    Applicant: INTEL CORPORATION
    Inventors: MAHESH WAGH, MARK S. MYERS, STEPHEN R. VAN DOREN, DIMITRIOS ZIAKAS, BASSAM N. COURY
  • Publication number: 20210105207
    Abstract: Examples described herein include one or more processors; a network interface; and a direct memory access (DMA) engine communicatively coupled to the one or more processors. In some examples, the DMA engine is to receive a DMA data access request and based on an address in the DMA data access request corresponding to a remote memory device, the DMA engine is to cause the network interface to generate at least one packet for transmission to the remote memory device. In some examples, the DMA data access request includes a source address, a destination address, and a length. In some examples, if the source address corresponds to a local memory device and the destination address corresponds to a remote memory device, the DMA engine is to cause the network interface to generate at least one packet for transmission to the remote memory device, wherein the at least one packet includes data stored at the source address.
    Type: Application
    Filed: November 24, 2020
    Publication date: April 8, 2021
    Inventors: Sujoy SEN, Durgesh SRIVASTAVA, Thomas E. WILLIS, Bassam N. COURY, Marcelo CINTRA
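The routing decision this abstract describes — local memory interface versus network packets, chosen per address — reduces to a range check on the source and destination of each request. A sketch under assumed names; the remote-address boundary below is invented for illustration:

```python
# Illustrative only: a DMA engine routes a copy request to the local memory
# interface or, when an address falls in a remote range, to the network
# interface as packets. The address boundary is hypothetical.
REMOTE_BASE = 0x1_0000_0000   # addresses at/above this live on a remote node

def dma_copy(src, dst, length):
    """Return which path(s) the DMA engine would take for this request."""
    actions = []
    if src >= REMOTE_BASE:
        actions.append("net-read")     # fetch data via packets from the remote node
    else:
        actions.append("local-read")
    if dst >= REMOTE_BASE:
        # local source + remote destination: the packets carry the source data
        actions.append("net-write")
    else:
        actions.append("local-write")
    return actions
```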
  • Publication number: 20210081312
    Abstract: Examples described herein includes a network interface controller comprising a memory interface and a network interface, the network interface controller configurable to provide access to local memory and remote memory to a requester, wherein the network interface controller is configured with an amount of memory of different memory access speeds for allocation to one or more requesters. In some examples, the network interface controller is to grant or deny a memory allocation request from a requester based on a configuration of an amount of memory for different memory access speeds for allocation to the requester. In some examples, the network interface controller is to grant or deny a memory access request from a requester based on a configuration of memory allocated to the requester. In some examples, the network interface controller is to regulate quality of service of memory access requests from requesters.
    Type: Application
    Filed: November 24, 2020
    Publication date: March 18, 2021
    Inventors: Bassam N. COURY, Sujoy SEN, Thomas E. WILLIS, Durgesh SRIVASTAVA
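The grant-or-deny behavior this abstract describes is essentially quota accounting per requester over memory tiers of different access speeds. A minimal sketch, with all names and the quota table assumed for illustration:

```python
class PoolAllocator:
    """Sketch: per-requester byte quotas over memory tiers of different speeds."""
    def __init__(self, quotas):
        # quotas: requester -> {tier: bytes allowed}, e.g. {"vm1": {"fast": 100}}
        self.quotas = quotas
        self.used = {r: {t: 0 for t in tiers} for r, tiers in quotas.items()}

    def allocate(self, requester, tier, nbytes):
        """Grant the request only if it stays within the configured quota."""
        allowed = self.quotas.get(requester, {}).get(tier, 0)
        if self.used.get(requester, {}).get(tier, 0) + nbytes > allowed:
            return False               # deny: over quota, or unknown requester
        self.used[requester][tier] += nbytes
        return True
```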
  • Publication number: 20210075633
    Abstract: Examples described herein relate to a network interface. In some examples, the network interface is to access data designated for transmission in at least one packet to multiple memory nodes by inclusion of a multicast identifier of a memory node group and transmit the at least one packet to a destination network device, wherein the multicast identifier of the memory node group in the at least one packet is to cause an intermediate network device to multicast the packet to multiple memory nodes. In some examples, a memory node comprises a memory pool that includes one or more of: volatile memory, non-volatile memory, or persistent memory. In some examples, the intermediate network device comprises a switch configured to determine network addresses of memory nodes associated with the multicast identifier of the memory node group.
    Type: Application
    Filed: November 24, 2020
    Publication date: March 11, 2021
    Inventors: Sujoy SEN, Thomas E. WILLIS, Durgesh SRIVASTAVA, Marcelo CINTRA, Bassam N. COURY
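The switch behavior in the last sentence of this abstract — expanding a memory-node-group identifier into the group's member addresses — can be sketched as a table lookup. The group table, packet shape, and addresses below are all invented for illustration:

```python
# Hypothetical sketch: an intermediate switch expands a memory-node-group
# multicast identifier into one packet copy per member node.
GROUP_TABLE = {7: ["10.0.0.2", "10.0.0.3", "10.0.0.5"]}   # mcast id -> node addrs

def switch_forward(packet):
    """Return the per-node packet copies the switch would emit."""
    group = packet.get("mcast_group")
    if group is None:
        return [packet]                        # ordinary unicast packet
    members = GROUP_TABLE.get(group, [])
    # Rewrite the destination for each member; clear the multicast id.
    return [{**packet, "dst": addr, "mcast_group": None} for addr in members]
```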
  • Publication number: 20210073151
    Abstract: Examples described herein include at least one processor and a direct memory access (DMA) device. In some examples, the DMA device is to: access a command from a memory region allocated to receive commands for execution by the DMA device, wherein the command is to access content from a local memory device or remote memory node. In some examples, the DMA device is to: determine if the content is stored in a local memory device or a remote memory node based on a configuration that indicates whether a source address refers to a memory address associated with the local memory device or the remote memory node and whether a destination address refers to a memory address associated with the local memory device or the remote memory node. In some examples, the DMA device is to: copy the content from a local memory device or copy the content to the local memory device using a memory interface.
    Type: Application
    Filed: November 24, 2020
    Publication date: March 11, 2021
    Inventors: Sujoy SEN, Durgesh SRIVASTAVA, Thomas E. WILLIS, Bassam N. COURY, Marcelo CINTRA
  • Patent number: 10917321
    Abstract: Examples may include sleds for a rack in a data center including physical compute resources and memory for the physical compute resources. The memory can be disaggregated, or organized into near and far memory. A first sled can comprise the physical compute resources and a first set of physical memory resources while a second sled can comprise a second set of physical memory resources. The first set of physical memory resources can be coupled to the physical compute resources via a local interface while the second set of physical memory resources can be coupled to the physical compute resources via a fabric.
    Type: Grant
    Filed: March 31, 2017
    Date of Patent: February 9, 2021
    Assignee: INTEL CORPORATION
    Inventors: Mark A. Schmisseur, Bassam N. Coury
  • Publication number: 20210019069
    Abstract: Examples herein relate to a system capable of coupling to a remote memory pool, the system comprising: a memory controller and an interface to a connection, the interface coupled to the memory controller. In some examples, the interface is to translate a format of a memory access request to a format accepted by the memory controller and the memory controller is to provide the translated memory access request in a format accepted by a media. In some examples, a controller is to measure a number of addressable regions that are least accessed and cause at least one of the least accessed regions to be evicted to a local or remote memory device with relatively higher latency.
    Type: Application
    Filed: September 24, 2020
    Publication date: January 21, 2021
    Inventors: Sujoy SEN, Thomas E. WILLIS, Durgesh SRIVASTAVA, Marcelo CINTRA, Bassam N. COURY, Donald L. FAW, Francois DUGAST
  • Patent number: 10776524
    Abstract: Embodiments are directed to securing system management mode (SMM) in a computer system. A CPU is configurable to execute first code in a normal mode, and second code in a SMM. A SMM control engine is operative to transition the CPU from the normal mode to the SMM in response to a SMM transition call, and to control access by the CPU in the SMM to data from an originator of the SMM transition call. The access is controlled based on an authorization state assigned to the SMM transition call. An authorization engine is operative to perform authentication of the originator of the SMM transition call and to assign the authorization state based on an authentication result. The CPU in the SMM is prevented from accessing the data in response to the authentication result being a failure of authentication.
    Type: Grant
    Filed: January 14, 2016
    Date of Patent: September 15, 2020
    Assignee: Intel Corporation
    Inventors: Jiewen Jacques Yao, Vincent J. Zimmer, Bassam N. Coury
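The gating idea in this abstract — authenticate the originator of an SMM transition call and assign an authorization state that controls access to its data — can be illustrated with a tag check. This is only a sketch of the concept; the key, tag scheme, and state labels are assumptions, and real SMM authentication is a hardware/firmware mechanism, not application code.

```python
# Illustrative sketch only: authenticate a transition call's payload and
# assign an authorization state from the result. Key handling is
# deliberately simplified; SECRET_KEY is hypothetical.
import hashlib
import hmac

SECRET_KEY = b"demo-key"

def authorize_smm_call(payload: bytes, tag: bytes) -> str:
    """Assign an authorization state based on the authentication result."""
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    if hmac.compare_digest(expected, tag):
        return "authorized"   # SMM code may access the originator's data
    return "denied"           # access to the originator's data is blocked
```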
  • Publication number: 20200285523
    Abstract: Systems and methods may be used to determine where to run a service based on workload-based conditions or system-level conditions. An example method may include determining whether power available to a resource of a compute device satisfies a target power, for example to satisfy a target performance for a workload. When the power available is insufficient, an additional resource may be provided, for example on a remote device from the compute device. The additional resource may be used as a replacement for the resource of the compute device or to augment the resource of the compute device.
    Type: Application
    Filed: May 20, 2020
    Publication date: September 10, 2020
    Inventors: Francesc Guim Bernat, Kshitij Arun Doshi, Bassam N. Coury, Suraj Prabhakaran, Timothy Verrall
  • Patent number: 10719443
    Abstract: A system and method are described for integrating a memory and storage hierarchy including a non-volatile memory tier within a computer system. In one embodiment, PCMS memory devices are used as one tier in the hierarchy, sometimes referred to as “far memory.” Higher performance memory devices such as DRAM are placed in front of the far memory and are used to mask some of the performance limitations of the far memory. These higher performance memory devices are referred to as “near memory.”
    Type: Grant
    Filed: March 25, 2019
    Date of Patent: July 21, 2020
    Assignee: Intel Corporation
    Inventors: Raj K. Ramanujan, Rajat Agarwal, Kai Cheng, Taarinya Polepeddi, Camille C. Raad, David J. Zimmerman, Muthukumar P. Swaminathan, Dimitrios Ziakas, Mohan J. Kumar, Bassam N. Coury, Glenn J. Hinton
  • Patent number: 10432627
    Abstract: The present disclosure is directed to secure sensor data transport and processing. End-to-end security may prevent attackers from altering data during the sensor-based security procedure. For example, following sensor data capture execution in a device may be temporarily suspended. During the suspension of execution, sensor interface circuitry in the device may copy the sensor data from a memory location associated with the sensor to a trusted execution environment (TEE) within the device. The TEE may provide a secure location in which the sensor data may be processed and a determination may be made as to whether to grant access to the secure resources. The TEE may comprise, for example, match circuitry to compare the sensor data to previously captured sensor data for users that are allowed to access the secured resources and output circuitry to grant access to the secured resources or to perform activities associated with a security exception.
    Type: Grant
    Filed: August 29, 2018
    Date of Patent: October 1, 2019
    Assignee: Intel Corporation
    Inventors: Hormuzd M. Khosravi, Bassam N. Coury, Vincent J. Zimmer
  • Publication number: 20190220406
    Abstract: A system and method are described for integrating a memory and storage hierarchy including a non-volatile memory tier within a computer system. In one embodiment, PCMS memory devices are used as one tier in the hierarchy, sometimes referred to as “far memory.” Higher performance memory devices such as DRAM are placed in front of the far memory and are used to mask some of the performance limitations of the far memory. These higher performance memory devices are referred to as “near memory.”
    Type: Application
    Filed: March 25, 2019
    Publication date: July 18, 2019
    Inventors: Raj K. RAMANUJAN, Rajat AGARWAL, Kai CHENG, Taarinya POLEPEDDI, Camille C. RAAD, David J. ZIMMERMAN, Muthukumar P. SWAMINATHAN, Dimitrios ZIAKAS, Mohan J. KUMAR, Bassam N. COURY, Glenn J. HINTON
  • Patent number: 10331614
    Abstract: Systems and methods of implementing server architectures that can facilitate the servicing of memory components in computer systems. The systems and methods employ nonvolatile memory/storage modules that include nonvolatile memory (NVM) that can be used for system memory and mass storage, as well as firmware memory. The respective NVM/storage modules can be received in front or rear-loading bays of the computer systems. The systems and methods further employ single, dual, or quad socket processors, in which each processor is communicably coupled to at least some of the NVM/storage modules disposed in the front or rear-loading bays by one or more memory and/or input/output (I/O) channels. By employing NVM/storage modules that can be received in front or rear-loading bays of computer systems, the systems and methods provide memory component serviceability heretofore unachievable in computer systems implementing conventional server architectures.
    Type: Grant
    Filed: November 27, 2013
    Date of Patent: June 25, 2019
    Assignee: Intel Corporation
    Inventors: Dimitrios Ziakas, Bassam N. Coury, Mohan J. Kumar, Murugasamy K. Nachimuthu, Thi Dang, Russell J. Wunderlich
  • Patent number: 10241912
    Abstract: A system and method are described for integrating a memory and storage hierarchy including a non-volatile memory tier within a computer system. In one embodiment, PCMS memory devices are used as one tier in the hierarchy, sometimes referred to as “far memory.” Higher performance memory devices such as DRAM are placed in front of the far memory and are used to mask some of the performance limitations of the far memory. These higher performance memory devices are referred to as “near memory.”
    Type: Grant
    Filed: March 13, 2017
    Date of Patent: March 26, 2019
    Assignee: Intel Corporation
    Inventors: Raj K. Ramanujan, Rajat Agarwal, Kai Cheng, Taarinya Polepeddi, Camille C. Raad, David J. Zimmerman, Muthukumar P. Swaminathan, Dimitrios Ziakas, Mohan J. Kumar, Bassam N. Coury, Glenn J. Hinton