Patents by Inventor Mark Schmisseur

Mark Schmisseur has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190102090
    Abstract: A memory controller method and apparatus include modifying at least one of a first timing scheme or a second timing scheme based on information about one or more data requests to be included in at least one of a first queue scheduler or a second queue scheduler. The first timing scheme indicates when requests in the first queue scheduler are to be issued to the first memory set via a first memory set interface and over a channel; the second timing scheme indicates when requests in the second queue scheduler are to be issued to the second memory set via a second memory set interface and over the same channel. A request may then be issued to the first memory set in accordance with the modified first timing scheme, or to the second memory set in accordance with the modified second timing scheme.
    Type: Application
    Filed: September 29, 2017
    Publication date: April 4, 2019
    Inventors: Francesc GUIM BERNAT, Karthik KUMAR, Thomas WILLHALM, Mark SCHMISSEUR
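    The abstract above describes two queue schedulers whose issue timing is rebalanced based on the requests they hold. A minimal Python sketch of that idea follows; the class names, the tick-based timing model, and the rebalancing heuristic are illustrative assumptions, not the patent's actual design.
    ```python
    # Hypothetical model of two queue schedulers sharing one memory channel.
    from collections import deque

    class QueueScheduler:
        def __init__(self, name, issue_interval):
            self.name = name
            self.queue = deque()                  # pending requests for this memory set
            self.issue_interval = issue_interval  # ticks between issues (the timing scheme)
            self.next_issue = 0

    class MemoryController:
        def __init__(self):
            self.first = QueueScheduler("first-memory-set", issue_interval=2)
            self.second = QueueScheduler("second-memory-set", issue_interval=6)

        def modify_timing(self):
            # Rebalance the timing schemes using information about queued requests:
            # the deeper queue gets a shorter interval on the shared channel.
            busy, idle = sorted((self.first, self.second), key=lambda s: -len(s.queue))
            busy.issue_interval = max(1, busy.issue_interval - 1)
            idle.issue_interval += 1

        def tick(self, now):
            # At most one request crosses the shared channel per tick.
            for sched in (self.first, self.second):
                if sched.queue and now >= sched.next_issue:
                    request = sched.queue.popleft()
                    sched.next_issue = now + sched.issue_interval
                    return f"channel <- {sched.name}: {request}"
            return None
    ```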
  • Publication number: 20190102403
    Abstract: Techniques and apparatus for providing access to data in a plurality of storage formats are described. In one embodiment, for example, an apparatus may include logic, at least a portion of which is implemented in hardware coupled to at least one memory, to determine a first storage format of a database operation on a database having a second storage format, and to perform a format conversion process responsive to the first storage format being different from the second storage format, the format conversion process to translate a virtual address of the database operation to a physical address and to determine a converted physical address comprising a memory address according to the first storage format. Other embodiments are described and claimed.
    Type: Application
    Filed: September 29, 2017
    Publication date: April 4, 2019
    Applicant: INTEL CORPORATION
    Inventors: Mark A. Schmisseur, Thomas Willhalm, Francesc Guim Bernat, Karthik Kumar
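    If the first and second storage formats are taken to be row-major and column-major table layouts, the conversion can be pictured as below. This is a hedged sketch: the layouts, function names, and parameters are assumptions made for illustration.
    ```python
    # Illustrative address conversion between row-major and column-major layouts.
    def element_address(base, row, col, n_rows, n_cols, elem_size, layout):
        """Physical address of element (row, col) under a given storage layout."""
        if layout == "row":                  # row-major: rows are contiguous
            index = row * n_cols + col
        else:                                # column-major: columns are contiguous
            index = col * n_rows + row
        return base + index * elem_size

    def convert(base, addr, n_rows, n_cols, elem_size, from_layout, to_layout):
        """Translate an address issued against one layout into the stored layout."""
        index = (addr - base) // elem_size
        if from_layout == "row":
            row, col = divmod(index, n_cols)
        else:
            col, row = divmod(index, n_rows)
        return element_address(base, row, col, n_rows, n_cols, elem_size, to_layout)

    # Element (0, 1) of a 2x3 table of 4-byte values sits at byte 4 in row-major
    # order but at byte 8 in column-major order.
    assert convert(0, 4, 2, 3, 4, "row", "col") == 8
    ```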
  • Publication number: 20190102315
    Abstract: Various embodiments are generally directed to an apparatus, method, and other techniques to receive a request from a core, the request associated with a memory operation to read or write data and comprising a first address and an offset, the first address identifying a memory location of a memory. Embodiments include performing a first iteration of a memory indirection operation comprising reading the memory at the memory location to determine a second address based on the first address, and determining a memory resource based on the second address and the offset, the memory resource to perform the memory operation for the core or to perform a second iteration of the memory indirection operation.
    Type: Application
    Filed: September 29, 2017
    Publication date: April 4, 2019
    Applicant: INTEL CORPORATION
    Inventors: FRANCESC GUIM BERNAT, KARTHIK KUMAR, MARK SCHMISSEUR, THOMAS WILLHALM
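    Read as a pointer-chasing loop, the indirection operation might look like the toy model below; the dictionary-backed memory and the resource-map callback stand in for fabric-attached hardware and are assumptions.
    ```python
    # Toy model of an iterative memory-indirection walk.
    def resolve(memory, resource_for, address, offset, max_iters=8):
        for _ in range(max_iters):
            second = memory[address]              # read the pointer stored at `address`
            resource = resource_for(second + offset)
            if resource is not None:              # a memory resource can serve the op
                return resource, second + offset
            address = second                      # otherwise perform another iteration
        raise RuntimeError("indirection chain too deep")

    memory = {0x10: 0x40, 0x40: 0x80}
    find = lambda addr: "pooled-memory" if addr >= 0x80 else None
    assert resolve(memory, find, 0x10, offset=0x40) == ("pooled-memory", 0x80)
    ```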
  • Publication number: 20190102147
    Abstract: Examples may include a data center in which memory sleds are provided with logic to filter data stored on the memory sled responsive to filtering requests from a compute sled. Memory sleds may include memory filtering logic arranged to receive filtering requests, filter data stored on the memory sled, and provide filtering results to the requesting entity. Additionally, the fabric interconnect protocols by which sleds in the data center communicate are provided with filtering instructions such that compute sleds can request filtering on memory sleds.
    Type: Application
    Filed: September 29, 2017
    Publication date: April 4, 2019
    Applicant: INTEL CORPORATION
    Inventors: Karthik Kumar, Francesc Guim Bernat, Thomas Willhalm, Mark A. Schmisseur
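    The filtering flow can be sketched as a request/response exchange; the record layout and predicate interface below are invented for illustration and are not the patent's wire format.
    ```python
    # Hedged sketch of near-memory filtering on a memory sled.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class FilterRequest:
        region: str                              # which stored dataset to scan
        predicate: Callable[[bytes], bool]       # filter supplied by the compute sled

    class MemorySled:
        def __init__(self, regions: dict):
            self.regions = regions               # region name -> list of records

        def handle_filter(self, req: FilterRequest) -> List[bytes]:
            # Filtering runs on the memory sled, so only matching records
            # cross the fabric back to the requesting compute sled.
            return [rec for rec in self.regions[req.region] if req.predicate(rec)]

    sled = MemorySled({"logs": [b"ok", b"error: disk", b"ok", b"error: fan"]})
    assert sled.handle_filter(FilterRequest("logs", lambda r: r.startswith(b"error"))) \
        == [b"error: disk", b"error: fan"]
    ```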
  • Patent number: 10235526
    Abstract: Various embodiments are directed to a system for accessing a self-encrypting drive (SED) upon resuming from a sleep power mode (SPM) state. An SED may be authenticated within a system, for example upon resuming from a sleep state, by unwrapping the SED passphrase with an SPM resume passphrase stored in a standby power register that continues to receive power during the SPM state.
    Type: Grant
    Filed: December 18, 2015
    Date of Patent: March 19, 2019
    Assignee: INTEL CORPORATION
    Inventors: Asher Altman, Mark Schmisseur
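    The wrap/unwrap flow can be illustrated with symmetric encryption; the sketch below uses Fernet from the third-party cryptography package purely as a stand-in for the drive's real key-wrapping scheme, which the abstract does not specify.
    ```python
    # Toy illustration of wrapping an SED passphrase with an SPM resume key.
    from cryptography.fernet import Fernet

    resume_key = Fernet.generate_key()   # held in a standby-powered register across sleep
    wrapped = Fernet(resume_key).encrypt(b"sed-passphrase")   # stored outside the register

    # ... system enters, then resumes from, the sleep power mode (SPM) state ...

    passphrase = Fernet(resume_key).decrypt(wrapped)   # unwrap and authenticate the SED
    assert passphrase == b"sed-passphrase"
    ```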
  • Publication number: 20190065112
    Abstract: Technologies for utilizing a split memory pool include a compute sled. The compute sled includes multiple processors communicatively coupled together through a processor communication link. Each processor is to communicate with a different memory sled through a respective memory network dedicated to the corresponding processor and memory sled. The compute sled includes a compute engine to generate a memory access request to access a memory address in far memory. The far memory includes memory located on one of the memory sleds. The compute engine is also to determine, as a function of the memory address and a map of memory address ranges to the memory sleds, the memory sled on which to access the far memory, and send the memory access request to the determined memory sled to access the far memory associated with the memory address.
    Type: Application
    Filed: December 30, 2017
    Publication date: February 28, 2019
    Inventors: Mark A. Schmisseur, Aaron Gorius
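    The mapping step is essentially a range lookup; a minimal sketch follows, with the range bases and sled names made up for the example.
    ```python
    # Picking the memory sled whose far-memory range contains an address.
    import bisect

    RANGE_STARTS = [0x0000_0000, 0x4000_0000, 0x8000_0000]   # sorted range bases
    SLEDS        = ["memory-sled-0", "memory-sled-1", "memory-sled-2"]

    def sled_for(addr: int) -> str:
        i = bisect.bisect_right(RANGE_STARTS, addr) - 1
        if i < 0:
            raise ValueError("address below the pooled range")
        return SLEDS[i]

    # A far-memory access at 0x4000_1000 is routed to the second memory sled.
    assert sled_for(0x4000_1000) == "memory-sled-1"
    ```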
  • Publication number: 20190065231
    Abstract: Technologies for migrating virtual machines (VMs) include a plurality of compute sleds and a memory sled, each communicatively coupled to a resource manager server. The resource manager server is configured to identify a compute sled for a virtual machine instance, allocate a first set of resources of the identified compute sled for the VM instance, associate a region of memory in a memory pool of a memory sled with the compute sled, and create the VM instance on the compute sled. The resource manager server is further configured to migrate the VM instance to another compute sled, associate the region of memory in the memory pool with the other compute sled, and start up the VM instance on the other compute sled. Other embodiments are described herein.
    Type: Application
    Filed: December 30, 2017
    Publication date: February 28, 2019
    Inventors: Mark A. Schmisseur, Mohan J. Kumar, Murugasamy K. Nachimuthu, Slawomir Putyrski, Dimitrios Ziakas
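    Because the VM's memory lives in the pool, migration reduces to re-associating a region rather than copying it; the sketch below makes that point with invented class and method names.
    ```python
    # Pool-backed VM migration: re-point the memory region, don't copy it.
    class ComputeSled:
        def start(self, vm, region): print(f"{vm} running, memory at {region}")
        def stop(self, vm):          print(f"{vm} stopped")

    class ResourceManager:
        def __init__(self):
            self.region_owner = {}               # memory-pool region -> compute sled

        def create_vm(self, vm, sled, region):
            self.region_owner[region] = sled
            sled.start(vm, region)

        def migrate(self, vm, region, src, dst):
            src.stop(vm)
            self.region_owner[region] = dst      # no bulk memory copy over the fabric
            dst.start(vm, region)

    mgr, a, b = ResourceManager(), ComputeSled(), ComputeSled()
    mgr.create_vm("vm-7", a, "pool-region-3")
    mgr.migrate("vm-7", "pool-region-3", src=a, dst=b)
    ```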
  • Publication number: 20190050261
    Abstract: Technology for a memory pool arbitration apparatus is described. The apparatus can include a memory pool controller (MPC) communicatively coupled between a shared memory pool of disaggregated memory devices and a plurality of compute resources. The MPC can receive a plurality of data requests from the plurality of compute resources. The MPC can assign each compute resource to one of a set of compute resource priorities. The MPC can send memory access commands to the shared memory pool to perform each data request prioritized according to the set of compute resource priorities. The apparatus can include a priority arbitration unit (PAU) communicatively coupled to the MPC. The PAU can arbitrate the plurality of data requests as a function of the corresponding compute resource priorities.
    Type: Application
    Filed: March 29, 2018
    Publication date: February 14, 2019
    Inventors: Mark A. Schmisseur, Francesc Guim Bernat, Andrew J. Herdrich, Karthik Kumar
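    One plausible reading of the PAU is a priority queue that keeps FIFO order within each priority class; the sketch below implements that reading with invented names.
    ```python
    # Hypothetical arbitration for the memory pool controller (MPC) / PAU split.
    import heapq, itertools

    class PriorityArbitrationUnit:
        def __init__(self):
            self.heap, self._seq = [], itertools.count()

        def submit(self, priority: int, request):
            # Lower number = higher priority; the sequence counter keeps
            # FIFO order among requests of the same priority class.
            heapq.heappush(self.heap, (priority, next(self._seq), request))

        def next_command(self):
            """The memory access command the MPC should issue next, if any."""
            return heapq.heappop(self.heap)[2] if self.heap else None

    pau = PriorityArbitrationUnit()
    pau.submit(2, "read 0x10 for sled-B")
    pau.submit(0, "write 0x20 for sled-A")   # sled-A was assigned a higher priority
    assert pau.next_command() == "write 0x20 for sled-A"
    ```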
  • Publication number: 20190042515
    Abstract: There is disclosed an example of an artificial intelligence (AI) system, including: a first hardware platform; a fabric interface configured to communicatively couple the first hardware platform to a second hardware platform; a processor hosted on the first hardware platform and programmed to operate on an AI problem; and a first training accelerator, including: an accelerator hardware; a platform inter-chip link (ICL) configured to communicatively couple the first training accelerator to a second training accelerator on the first hardware platform without aid of the processor; a fabric ICL to communicatively couple the first training accelerator to a third training accelerator on a second hardware platform without aid of the processor; and a system decoder configured to operate the fabric ICL and platform ICL to share data of the accelerator hardware between the first training accelerator and second and third training accelerators without aid of the processor.
    Type: Application
    Filed: December 20, 2017
    Publication date: February 7, 2019
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Da-Ming Chiang, Kshitij A. Doshi, Suraj Prabhakaran, Mark A. Schmisseur
  • Publication number: 20190042122
    Abstract: Technologies for efficiently managing the allocation of memory in a shared memory pool include a memory sled. The memory sled includes a memory pool of byte-addressable memory devices. The memory sled also includes a memory pool controller coupled to the memory pool. The memory pool controller receives a request to provision memory to a compute sled. Further, the memory pool controller maps, in response to the request, each of the memory devices of the memory pool to the compute sled. The memory pool controller additionally assigns access rights to the compute sled as a function of one or more memory characteristics of the compute sled. The memory characteristics are indicative of an amount of memory in the memory pool to be used by the compute sled and the access rights are indicative of access permissions to one or more memory address ranges associated with the one or more memory devices.
    Type: Application
    Filed: December 28, 2017
    Publication date: February 7, 2019
    Inventors: Mark Schmisseur, Dimitrios Ziakas, Murugasamy K. Nachimuthu
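    The distinction the abstract draws, mapping every device but granting rights only to a bounded range, can be sketched as follows; the permission model is an assumption.
    ```python
    # Provisioning sketch: map all devices, grant rights only to the needed range.
    class MemoryPoolController:
        def __init__(self, devices):
            self.devices = devices           # byte-addressable memory devices
            self.rights = {}                 # (sled, device) -> permitted byte range

        def provision(self, sled, bytes_needed):
            per_device = -(-bytes_needed // len(self.devices))   # ceiling division
            for dev in self.devices:         # every device is mapped to the sled...
                # ...but access rights cover only the addresses it may use
                self.rights[(sled, dev)] = range(0, per_device)

    mpc = MemoryPoolController(devices=["dimm0", "dimm1", "dimm2", "dimm3"])
    mpc.provision("compute-sled-5", bytes_needed=1 << 30)        # 1 GiB across 4 devices
    assert len(mpc.rights[("compute-sled-5", "dimm0")]) == (1 << 30) // 4
    ```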
  • Publication number: 20190045005
    Abstract: A method for replicating data to one or more distributed network nodes of a network is proposed. A movement of a moving entity having associated data stored on a first node of the network is estimated, the moving entity physically moving between nodes of the network. According to the method, at least a second node of the network is chosen depending on the estimated movement. The method includes replicating the associated data from the first node to the second node or a group of nodes, and managing how data is stored at those nodes based on the moving entity.
    Type: Application
    Filed: April 12, 2018
    Publication date: February 7, 2019
    Inventors: Timothy Verrall, Mark Schmisseur, Thomas Willhalm, Francesc Guim Bernat, Karthik Kumar
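    A simple way to picture the estimation step is linear extrapolation of the entity's last movement; the motion model, node layout, and data shapes below are all illustrative assumptions.
    ```python
    # Toy predictor-driven replication: extrapolate motion, copy data ahead of it.
    def estimate_next_node(track, nodes):
        """Extrapolate the entity's last step and pick the nearest node."""
        (x0, y0), (x1, y1) = track[-2], track[-1]
        nx, ny = 2 * x1 - x0, 2 * y1 - y0        # simple linear extrapolation
        return min(nodes, key=lambda n: (n["x"] - nx) ** 2 + (n["y"] - ny) ** 2)

    def replicate(first_node, entity, nodes):
        target = estimate_next_node(entity["track"], nodes)
        target["store"][entity["id"]] = first_node["store"][entity["id"]]
        return target

    nodes = [{"x": 0, "y": 0, "store": {}}, {"x": 10, "y": 0, "store": {}}]
    car = {"id": "car-1", "track": [(2, 0), (6, 0)]}     # heading toward x = 10
    assert replicate({"store": {"car-1": b"state"}}, car, nodes) is nodes[1]
    ```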
  • Publication number: 20190042423
    Abstract: A method is described. The method includes configuring different software programs that are to execute on a computer with customized hardware caching service levels. The available set of hardware caching levels comprises at least the L1, L2, and L3 caching levels, and at least one of the following hardware caching levels is available for customized support of a software program: L2, L3, and L4.
    Type: Application
    Filed: April 19, 2018
    Publication date: February 7, 2019
    Inventors: Karthik KUMAR, Benjamin GRANIELLO, Mark A. SCHMISSEUR, Thomas WILLHALM, Francesc GUIM BERNAT
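    Such a configuration might be expressed as a simple table from program to caching levels; the program names and level assignments below are invented for the example.
    ```python
    # One way to express per-program cache service levels as configuration.
    CACHE_SERVICE_LEVELS = {
        "db-engine":    ["L2", "L3"],         # keep hot index pages in on-die caches
        "batch-report": ["L4"],               # streaming footprint, off-die cache only
        "default":      ["L1", "L2", "L3"],   # ordinary allocation policy
    }

    def levels_for(program: str):
        return CACHE_SERVICE_LEVELS.get(program, CACHE_SERVICE_LEVELS["default"])

    assert levels_for("batch-report") == ["L4"]
    assert levels_for("unlisted-tool") == ["L1", "L2", "L3"]
    ```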
  • Publication number: 20190042490
    Abstract: Examples provide a memory device, a dual inline memory module, a storage device, an apparatus for storing, a method for storing, a computer program, a machine readable storage, and a machine readable medium. A memory device is configured to store data and comprises one or more interfaces configured to receive and to provide data. The memory device further comprises a memory module configured to store the data, and a memory logic component configured to control the one or more interfaces and the memory module. The memory logic component is further configured to receive information on a specific memory region with one or more model identifications, to receive information on an instruction to perform an acceleration function for one or more certain model identifications, and to perform the acceleration function on data in a specific memory region with the one or more certain model identifications.
    Type: Application
    Filed: April 10, 2018
    Publication date: February 7, 2019
    Inventors: Mark Schmisseur, Thomas Willhalm, Francesc Guim Bernat, Karthik Kumar
  • Publication number: 20190042488
    Abstract: Technology for a memory controller is described. The memory controller can receive a request from a data consumer node in a data center for training data. The training data indicated in the request can correspond to a model identifier (ID) of a model that runs on the data consumer node. The memory controller can identify a data provider node in the data center that stores the training data that is requested by the data consumer node. The data provider node can be identified using a tracking table that is maintained at the memory controller. The memory controller can send an instruction to the data provider node that instructs the data provider node to send the training data to the data consumer node to enable training of the model that runs on the data consumer node.
    Type: Application
    Filed: December 28, 2017
    Publication date: February 7, 2019
    Applicant: Intel Corporation
    Inventors: FRANCESC GUIM BERNAT, MARK A. SCHMISSEUR, KARTHIK KUMAR, THOMAS WILLHALM
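    The tracking table boils down to a model-ID-to-provider map, with the controller brokering a direct transfer; the message shape below is an assumption.
    ```python
    # Sketch of the tracking-table lookup that routes training data.
    class MemoryController:
        def __init__(self):
            self.tracking = {}   # model ID -> data provider node holding its data

        def register(self, model_id, provider):
            self.tracking[model_id] = provider

        def handle_request(self, consumer, model_id):
            provider = self.tracking[model_id]
            # Instruct the provider to ship training data straight to the
            # consumer instead of staging it through the controller.
            provider.send({"to": consumer, "training_data_for": model_id})

    class Node:
        def send(self, msg): print("provider ->", msg)

    mc = MemoryController()
    mc.register("resnet-50", Node())
    mc.handle_request("consumer-node-3", "resnet-50")
    ```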
  • Publication number: 20190042783
    Abstract: An embodiment of a semiconductor apparatus may include technology to receive data with a unique identifier, and bypass encryption logic of a media controller based on the unique identifier. Other embodiments are disclosed and claimed.
    Type: Application
    Filed: September 27, 2018
    Publication date: February 7, 2019
    Inventors: Francesc Guim Bernat, Mark Schmisseur, Kshitij Doshi, Kapil Sood, Tarun Viswanathan
  • Publication number: 20190042511
    Abstract: An apparatus is described. The apparatus includes a non-volatile memory module for insertion into a rack-implemented modular computer. The non-volatile memory module includes a plurality of memory controllers, with respective non-volatile random access memory coupled to each of the memory controllers. The non-volatile memory module includes a switch circuit to circuit-switch incoming requests and outgoing responses between the rack's backplane and the plurality of memory controllers. The incoming requests are sent by one or more CPU modules of the rack-implemented modular computer, and the outgoing responses are sent to the one or more CPU modules.
    Type: Application
    Filed: June 29, 2018
    Publication date: February 7, 2019
    Inventors: Murugasamy K. NACHIMUTHU, Mark A. SCHMISSEUR, Dimitrios ZIAKAS, Debendra DAS SHARMA, Mohan J. KUMAR
  • Publication number: 20190041960
    Abstract: In one embodiment, an apparatus of an edge computing system includes memory that includes instructions and processing circuitry coupled to the memory. The processing circuitry implements the instructions to process a request to execute at least a portion of a workflow on pooled computing resources, the workflow being associated with a particular tenant, determine an amount of power to be allocated to particular resources of the pooled computing resources for execution of the portion of the workflow based on a power budget associated with the tenant and a current power cost, and control allocation of the determined amount of power to the particular resources of the pooled computing resources during execution of the portion of the workflow.
    Type: Application
    Filed: June 19, 2018
    Publication date: February 7, 2019
    Inventors: Francesc Guim Bernat, Suraj Prabhakaran, Timothy Verrall, Karthik Kumar, Mark A. Schmisseur
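    The allocation rule, as this listing reads it, caps spend by the tenant's budget at the current power cost; a worked example with invented numbers follows.
    ```python
    # Worked example: power allocated = min(request, what the budget affords).
    def watts_to_allocate(budget_dollars_per_hour, cost_dollars_per_kwh,
                          requested_watts):
        affordable_watts = budget_dollars_per_hour / cost_dollars_per_kwh * 1000
        return min(requested_watts, affordable_watts)

    # A tenant budgeted at $0.12/hour with power at $0.10/kWh can draw up
    # to 1.2 kWh per hour, i.e. 1200 W, however much the workflow requests.
    assert watts_to_allocate(0.12, 0.10, requested_watts=1500) == 1200
    ```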
  • Publication number: 20190045022
    Abstract: An apparatus is described. The apparatus includes switch circuitry to route packets that are destined for one or more cloud storage services instead to local caching resources. The packets are sent from different tenants of the one or more cloud storage services and have respective payloads that contain read/write commands for the one or more cloud storage services. The apparatus includes storage controller circuitry to be coupled to non-volatile memory, the non-volatile memory to implement the local caching resources, the storage controller to implement customized caching treatment for the different tenants. The apparatus includes network interface circuitry coupled between the switch circuitry and the storage controller circuitry to implement customized network endpoint processing for the different tenants.
    Type: Application
    Filed: March 29, 2018
    Publication date: February 7, 2019
    Inventors: Francesc GUIM BERNAT, Eoin WALSH, Paul MANNION, Timothy VERRALL, Mark A. SCHMISSEUR
  • Publication number: 20190042138
    Abstract: Devices and systems for distributing data across disaggregated memory resources are disclosed and described. An acceleration controller device can include an adaptive data migration engine (ADME) configured to communicatively couple to a fabric interconnect, and further configured to monitor application data performance metrics at a plurality of disaggregated memory pools for a plurality of applications executing on a plurality of compute resources, select a current application having a current application data performance metric, determine an alternate memory pool from the plurality of disaggregated memory pools estimated to increase application data performance relative to the current application data performance metric, and migrate the data from the current memory pool to the alternate memory pool.
    Type: Application
    Filed: March 14, 2018
    Publication date: February 7, 2019
    Inventors: FRANCESC GUIM BERNAT, KARTHIK KUMAR, THOMAS WILLHALM, MARK A. SCHMISSEUR
  • Publication number: 20190042408
    Abstract: Technologies for interleaving memory that is accessible via a shared memory pool include a memory sled. The memory sled includes a memory pool of byte-addressable memory devices. The memory sled also includes a memory pool controller coupled to the memory pool. The memory pool controller receives a request to allocate memory addresses of the memory pool to a compute sled. The memory pool controller determines an interleaving configuration for the compute sled as a function of memory characteristics of the compute sled and configures the memory addresses according to the determined interleaving configuration.
    Type: Application
    Filed: January 11, 2018
    Publication date: February 7, 2019
    Inventors: Mark Schmisseur, Dimitrios Ziakas, Murugasamy K. Nachimuthu
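    An interleaving configuration can be reduced to a way count and a stripe granularity; the heuristic below for choosing the granularity is invented, not the patent's rule.
    ```python
    # Sketch of choosing and applying an interleaving configuration.
    def choose_interleave(n_devices, typical_access_size):
        granularity = max(typical_access_size, 256)   # bytes per stripe per device
        return {"ways": n_devices, "granularity": granularity}

    def device_for(addr, config):
        """Which pooled memory device a byte address lands on once interleaved."""
        return (addr // config["granularity"]) % config["ways"]

    cfg = choose_interleave(n_devices=4, typical_access_size=64)
    assert device_for(0x000, cfg) == 0     # first 256-byte stripe on device 0
    assert device_for(0x100, cfg) == 1     # next stripe rotates to device 1
    ```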