Patents by Inventor Mark A. Schmisseur

Mark A. Schmisseur has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200326861
    Abstract: An embodiment of an electronic apparatus may include one or more substrates, and logic coupled to the one or more substrates, the logic to allocate a first memory portion to a first application as a combination of a local memory and remote memory, wherein the remote memory is shared between multiple compute nodes, and manage a first memory balloon associated with the first memory portion based on two or more memory tiers associated with the local memory and the remote memory. Other embodiments are disclosed and claimed.
    Type: Application
    Filed: June 26, 2020
    Publication date: October 15, 2020
    Applicant: Intel Corporation
    Inventors: Rasika Subramanian, Lidia Warnes, Francesc Guim Bernat, Mark A. Schmisseur, Durgesh Srivastava
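
A rough Python sketch of the two-tier ballooning idea in the abstract above follows. It is illustrative only: the names (Tier, MemoryBalloon, capacity_mb) and the reclaim-remote-first policy are assumptions made here, not taken from the filing, which claims hardware logic rather than software.

```python
# Illustrative sketch of a per-application memory balloon spanning two tiers
# (local and remote/shared). All names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str             # "local" or "remote"
    capacity_mb: int      # capacity available to the balloon in this tier
    inflated_mb: int = 0  # memory currently reclaimed (ballooned) from the app

class MemoryBalloon:
    """Tracks how much memory is reclaimed from one application, per tier."""
    def __init__(self, tiers):
        self.tiers = {t.name: t for t in tiers}

    def inflate(self, tier_name, mb):
        t = self.tiers[tier_name]
        mb = min(mb, t.capacity_mb - t.inflated_mb)  # cannot reclaim more than the tier holds
        t.inflated_mb += mb
        return mb

    def deflate(self, tier_name, mb):
        t = self.tiers[tier_name]
        mb = min(mb, t.inflated_mb)
        t.inflated_mb -= mb
        return mb

# Reclaim remote (shared) memory first, then fall back to local memory.
balloon = MemoryBalloon([Tier("local", 4096), Tier("remote", 8192)])
need = 6000
need -= balloon.inflate("remote", need)
need -= balloon.inflate("local", need)
print({name: t.inflated_mb for name, t in balloon.tiers.items()})
```
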
  • Patent number: 10785295
    Abstract: Fabric encapsulated resilient storage is hardware-assisted resilient storage in which the reliability capabilities of a storage server are abstracted and managed transparently by a host fabric interface (HFI) to a switch. The switch abstracts the reliability capabilities of a storage server into a level of resilience in a hierarchy of levels of resilience. The resilience levels are accessible by clients as a quantifiable characteristic of the storage server. The resilience levels are used by the switch fabric to filter which storage servers store objects responsive to client requests to store objects at a specified level of resilience.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: September 22, 2020
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Daniel Rivas Barragan, Kshitij A. Doshi, Mark A. Schmisseur, Steen Larsen
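
The filtering step described in patent 10785295 above can be pictured with a short sketch. The numeric resilience hierarchy and server names below are hypothetical; the patent places this abstraction in the switch fabric, not in application code.

```python
# Hypothetical model: the switch keeps a table of storage servers and the
# resilience level abstracted from each server's reliability capabilities.
servers = [
    {"name": "store-a", "resilience": 1},  # e.g. single copy
    {"name": "store-b", "resilience": 3},  # e.g. triple replication
    {"name": "store-c", "resilience": 2},  # e.g. RAID-protected
]

def eligible_servers(requested_level):
    """Filter servers able to satisfy a client's requested resilience level."""
    return [s for s in servers if s["resilience"] >= requested_level]

# A client asks to store an object at resilience level 2 or better.
print([s["name"] for s in eligible_servers(2)])  # ['store-b', 'store-c']
```
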
  • Publication number: 20200278804
    Abstract: A memory request manager in a memory system registers a tenant for access to a plurality of memory devices, registers one or more service level agreement (SLA) requirements for the tenant for access to the plurality of memory devices, monitors usage of the plurality of memory devices by tenants, receives a memory request from the tenant to access a selected one of the plurality of memory devices, and allows the access when usage of the plurality of memory devices meets the one or more SLA requirements for the tenant.
    Type: Application
    Filed: April 13, 2020
    Publication date: September 3, 2020
    Inventors: Francesc Guim Bernat, Karthik Kumar, Tushar Sudhakar Gohad, Mark A. Schmisseur, Thomas Willhalm
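
As a minimal sketch of the request flow in the abstract above, the snippet below registers a tenant with an SLA, records monitored usage, and gates an access on whether the SLA is still met. The bandwidth-based SLA and all identifiers are assumptions for illustration.

```python
# Hypothetical sketch of an SLA-gated memory request manager: a tenant is
# registered with a bandwidth budget, usage is monitored, and a request is
# allowed only while usage stays within the tenant's SLA.
class MemoryRequestManager:
    def __init__(self):
        self.sla = {}    # tenant -> allowed GB/s
        self.usage = {}  # tenant -> observed GB/s

    def register_tenant(self, tenant, max_bandwidth_gbps):
        self.sla[tenant] = max_bandwidth_gbps
        self.usage[tenant] = 0.0

    def record_usage(self, tenant, bandwidth_gbps):
        self.usage[tenant] = bandwidth_gbps  # monitoring step (simplified)

    def allow(self, tenant, requested_gbps):
        """Allow the access only if it keeps the tenant within its SLA."""
        return self.usage.get(tenant, 0.0) + requested_gbps <= self.sla.get(tenant, 0.0)

mgr = MemoryRequestManager()
mgr.register_tenant("tenant-42", max_bandwidth_gbps=10.0)
mgr.record_usage("tenant-42", 7.5)
print(mgr.allow("tenant-42", 2.0))  # True: 9.5 <= 10.0
print(mgr.allow("tenant-42", 4.0))  # False: 11.5 > 10.0
```
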
  • Patent number: 10747691
    Abstract: Examples provide a memory device, a dual inline memory module, a storage device, an apparatus for storing, a method for storing, a computer program, a machine readable storage, and a machine readable medium. A memory device is configured to store data and comprises one or more interfaces configured to receive and to provide data. The memory device further comprises a memory module configured to store the data, and a memory logic component configured to control the one or more interfaces and the memory module. The memory logic component is further configured to receive information on a specific memory region with one or more model identifications, to receive information on an instruction to perform an acceleration function for one or more certain model identifications, and to perform the acceleration function on data in a specific memory region with the one or more certain model identifications.
    Type: Grant
    Filed: April 10, 2018
    Date of Patent: August 18, 2020
    Assignee: Intel Corporation
    Inventors: Mark Schmisseur, Thomas Willhalm, Francesc Guim Bernat, Karthik Kumar
  • Publication number: 20200226272
    Abstract: An embodiment of an electronic apparatus may include one or more substrates, and logic coupled to the one or more substrates, the logic to process memory operation requests from a memory controller, and provide a front end interface to remote pooled memory hosted at a near edge device. An embodiment of another electronic apparatus may include local memory and logic communicatively coupled to the local memory, the logic to allocate a range of the local memory as remote pooled memory, and provide a back end interface to the remote pooled memory for memory requests from a far edge device. Other embodiments are disclosed and claimed.
    Type: Application
    Filed: March 26, 2020
    Publication date: July 16, 2020
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Mark Schmisseur, Thomas Willhalm
  • Publication number: 20200228630
    Abstract: A persistence service for edge architected computing systems extends current storage and memory schemes of edge resources to expose interfaces to allow an endpoint, such as an IoT device or client device, to specify criteria for achieving persistence for data stored in an edge resource. The persistence interface extends the storage and memory controllers to store data in accordance with the criteria, including determining whether a local or remote edge resource is best able to store data persistently in a manner that satisfies the criteria. The criteria include a persistence service level agreement, including a required time to persistence, a cost of persistence, and a reliability level of persistence. Only edge resources that contain media, including storage subsystems and/or memory, capable of storing data persistently while satisfying the criteria will be permitted to service the request.
    Type: Application
    Filed: March 27, 2020
    Publication date: July 16, 2020
    Inventors: Francesc Guim Bernat, Karthik Kumar, Dimitrios Ziakas, Mark A. Schmisseur, Ned Smith
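
The eligibility test described in the abstract above, where only resources whose media can satisfy the persistence SLA may service a request, can be sketched as a simple filter. The field names and thresholds below are invented for illustration.

```python
# Hypothetical sketch: an edge resource is eligible to service a persistence
# request only if its media can meet the persistence SLA (time to persistence,
# cost, reliability). Field names and values are illustrative.
resources = [
    {"name": "local-nvdimm",  "time_to_persist_ms": 0.1,  "cost": 5.0, "reliability": 0.9999},
    {"name": "remote-ssd",    "time_to_persist_ms": 2.0,  "cost": 1.0, "reliability": 0.999},
    {"name": "remote-object", "time_to_persist_ms": 50.0, "cost": 0.2, "reliability": 0.99999},
]

def eligible(sla):
    """Return the resources permitted to service a request under this SLA."""
    return [r for r in resources
            if r["time_to_persist_ms"] <= sla["max_time_ms"]
            and r["cost"] <= sla["max_cost"]
            and r["reliability"] >= sla["min_reliability"]]

sla = {"max_time_ms": 5.0, "max_cost": 2.0, "min_reliability": 0.999}
print([r["name"] for r in eligible(sla)])  # ['remote-ssd']
```
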
  • Publication number: 20200218669
    Abstract: An apparatus and/or system is described including a memory device including a memory range and a temporal data management unit (TDMU) coupled to the memory device to receive from an interface, the memory range and a temporal range corresponding to validity of data in the memory range, check the temporal range against a time and/or date value provided by a timer or clock to identify the data in the memory range as expired, and invalidate the data that is expired in the memory device. In some embodiments, the TDMU includes hardware logic that resides on a memory module with the memory device and is coupled to invalidate expired data when the memory module is decoupled from the interface. Other embodiments may be disclosed and claimed.
    Type: Application
    Filed: March 16, 2020
    Publication date: July 9, 2020
    Inventors: Ginger H. Gilsdorf, Karthik Kumar, Mark A. Schmisseur, Thomas Willhalm, Francesc Guim Bernat
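
A small software model of the temporal data management unit described above follows; in the filing this is hardware logic on the memory module, so the class below is only a hypothetical analogue, with invented names and a wall-clock timer standing in for the device clock.

```python
# Hypothetical software model of the temporal data management unit: each
# registered memory range carries a validity window, and ranges whose window
# has passed are invalidated against the current clock.
import time

class TemporalDataManager:
    def __init__(self):
        self.ranges = []  # (start_addr, length, expires_at_epoch_seconds)

    def register(self, start, length, valid_for_seconds):
        self.ranges.append((start, length, time.time() + valid_for_seconds))

    def invalidate_expired(self, now=None):
        """Return expired ranges and drop them, mimicking invalidation in the device."""
        now = time.time() if now is None else now
        expired = [r for r in self.ranges if r[2] <= now]
        self.ranges = [r for r in self.ranges if r[2] > now]
        return expired

tdmu = TemporalDataManager()
tdmu.register(start=0x1000, length=4096, valid_for_seconds=0)     # already stale
tdmu.register(start=0x2000, length=4096, valid_for_seconds=3600)  # valid for an hour
print(len(tdmu.invalidate_expired()), "range(s) invalidated")
```
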
  • Patent number: 10691345
    Abstract: A memory controller method and apparatus modify at least one of a first timing scheme or a second timing scheme based on information about one or more data requests to be included in at least one of a first queue scheduler or a second queue scheduler. The first timing scheme indicates when one or more requests in the first queue scheduler are to be issued to a first memory set via a first memory set interface and over a channel; the second timing scheme indicates when one or more requests in the second queue scheduler are to be issued to a second memory set via a second memory set interface and over the same channel. A request may then be issued to at least one of the first memory set in accordance with the modified first timing scheme or the second memory set in accordance with the modified second timing scheme.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: June 23, 2020
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Mark Schmisseur
  • Patent number: 10649813
    Abstract: Technology for a memory pool arbitration apparatus is described. The apparatus can include a memory pool controller (MPC) communicatively coupled between a shared memory pool of disaggregated memory devices and a plurality of compute resources. The MPC can receive a plurality of data requests from the plurality of compute resources. The MPC can assign each compute resource to one of a set of compute resource priorities. The MPC can send memory access commands to the shared memory pool to perform each data request prioritized according to the set of compute resource priorities. The apparatus can include a priority arbitration unit (PAU) communicatively coupled to the MPC. The PAU can arbitrate the plurality of data requests as a function of the corresponding compute resource priorities.
    Type: Grant
    Filed: March 29, 2018
    Date of Patent: May 12, 2020
    Assignee: Intel Corporation
    Inventors: Mark A. Schmisseur, Francesc Guim Bernat, Andrew J. Herdrich, Karthik Kumar
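
The arbitration described in patent 10649813 above can be approximated with a priority queue: each compute resource gets a priority class and pending requests are drained in that order. The snippet is a sketch under those assumptions, not the claimed hardware design.

```python
# Hypothetical sketch of the priority arbitration step: each compute resource
# is assigned a priority class, and pending data requests are serviced in
# priority order (lower number = higher priority here).
import heapq
import itertools

class PriorityArbitrationUnit:
    def __init__(self):
        self.priority_of = {}         # compute resource -> priority class
        self.queue = []               # heap of (priority, seq, request)
        self.seq = itertools.count()  # tie-breaker keeps FIFO order per class

    def assign_priority(self, resource, priority):
        self.priority_of[resource] = priority

    def submit(self, resource, request):
        prio = self.priority_of.get(resource, 99)
        heapq.heappush(self.queue, (prio, next(self.seq), request))

    def next_request(self):
        return heapq.heappop(self.queue)[2] if self.queue else None

pau = PriorityArbitrationUnit()
pau.assign_priority("cpu-0", 0)   # latency-sensitive
pau.assign_priority("fpga-1", 2)  # best effort
pau.submit("fpga-1", "read 0xA000")
pau.submit("cpu-0", "write 0xB000")
print(pau.next_request())  # 'write 0xB000' is serviced first
```
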
  • Patent number: 10581968
    Abstract: A technology is described for performing a multi-node storage operation. An example networked memory storage group coupled to a plurality of computing nodes through a network fabric can be configured to receive a transaction detail message from a master computing node that includes a transaction identifier and transaction details for a multi-node storage operation. Thereafter, storage operation requests that include the transaction identifier may be received from computing nodes assigned storage operation tasks associated with the multi-node storage operation. The networked memory storage group may be configured to determine that storage operations for the multi-node storage operation have been completed and send a message to the master computing node indicating a completion state of the multi-node storage operation.
    Type: Grant
    Filed: April 1, 2017
    Date of Patent: March 3, 2020
    Assignee: Intel Corporation
    Inventors: Steen Larsen, Francesc Guim Bernat, Kshitij A. Doshi, Mark A. Schmisseur
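
The completion-tracking step in the abstract above reduces to a small bookkeeping structure: the storage group learns the participants from the master's transaction detail message and reports completion once every assigned operation has arrived. The sketch below assumes invented node and transaction identifiers.

```python
# Hypothetical sketch of completion tracking for a multi-node storage
# operation: participants come from the master's transaction detail message,
# and the group reports back once every assigned operation has been seen.
class MultiNodeTransaction:
    def __init__(self, txn_id, participating_nodes):
        self.txn_id = txn_id
        self.pending = set(participating_nodes)

    def record_storage_operation(self, node):
        self.pending.discard(node)

    def is_complete(self):
        return not self.pending

# Master node announces a transaction with three assigned nodes.
txn = MultiNodeTransaction("txn-7", {"node-1", "node-2", "node-3"})
for node in ("node-1", "node-3", "node-2"):
    txn.record_storage_operation(node)
print("notify master: complete" if txn.is_complete() else "still pending")
```
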
  • Patent number: 10581596
    Abstract: Technologies for managing errors in a remotely accessible memory pool include a memory sled. The memory sled includes a memory pool having one or more byte-addressable memory devices and a memory pool controller coupled to the memory pool. The memory sled is to write test data to a byte-addressable memory region in the memory pool. The memory region is to be accessed by a remote compute sled. The memory sled is also to read data from the memory region to which the test data was written, compare the read data to the test data to determine whether a threshold number of errors are present in the read data, and send, in response to a determination that the threshold number of errors are present in the read data, a notification to the remote compute sled that the memory region is faulty.
    Type: Grant
    Filed: December 30, 2017
    Date of Patent: March 3, 2020
    Assignee: Intel Corporation
    Inventors: Mark Schmisseur, Dimitrios Ziakas, Murugasamy K. Nachimuthu
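
A toy version of the test sequence in patent 10581596 above: write a pattern, read it back, count mismatches, and flag the region if the error count reaches the threshold. The bytearray and the injected fault are stand-ins used only so the example runs.

```python
# Hypothetical sketch of the test sequence: write a known pattern to the
# region, read it back, count mismatches, and flag the region as faulty if the
# error count reaches a threshold. The byte array stands in for pooled memory.
def check_region(memory, start, length, threshold, pattern=0xA5):
    # Write test data to the byte-addressable region.
    for offset in range(length):
        memory[start + offset] = pattern
    # Simulate a stuck-at fault, injected only so this demo has an error.
    memory[start + 3] = 0x00
    # Read back and compare against the test data.
    errors = sum(1 for offset in range(length) if memory[start + offset] != pattern)
    return errors >= threshold  # True -> notify the remote compute sled

pool = bytearray(64)
faulty = check_region(pool, start=0, length=16, threshold=1)
print("region faulty, notify compute sled" if faulty else "region healthy")
```
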
  • Publication number: 20200007460
    Abstract: There is disclosed in one example a communication apparatus, including: a telemetry interface; a management interface; and an edge gateway configured to: identify diverted traffic, wherein the diverted traffic includes traffic to be serviced by an edge microcloud configured to provide a plurality of services; receive telemetry via the telemetry interface; use the telemetry to anticipate a future per-service demand within the edge microcloud; compute a scale for a resource to meet the future per-service demand; and operate the management interface to instruct the edge microcloud to perform the scale before the future per-service demand occurs.
    Type: Application
    Filed: June 29, 2018
    Publication date: January 2, 2020
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Mark A. Schmisseur, Timothy Verrall
  • Publication number: 20200004429
    Abstract: Provided are a method, system, computer readable storage medium, and switch for configuring a switch to assign partitions in storage devices to compute nodes. A management controller configures the switch to dynamically allocate partitions of at least one of the storage devices to the compute nodes based on a workload at the compute node.
    Type: Application
    Filed: July 12, 2019
    Publication date: January 2, 2020
    Inventors: Mark A. Schmisseur, Mohan J. Kumar, Balint Fleischer, Debendra Das Sharma, Raj K. Ramanujan
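
One plausible (but assumed) policy for the workload-based allocation described above is to hand out partitions in proportion to each node's reported load, as sketched below; the filing does not specify this particular heuristic.

```python
# Hypothetical sketch of workload-driven partition assignment: the management
# controller gives each compute node a number of partitions proportional to its
# reported workload. Real switch configuration would happen in hardware.
def allocate_partitions(total_partitions, workloads):
    """workloads: {node: relative workload}; returns {node: partition count}."""
    total_load = sum(workloads.values()) or 1
    allocation = {node: (load * total_partitions) // total_load
                  for node, load in workloads.items()}
    # Hand any remainder (from integer division) to the busiest node.
    leftover = total_partitions - sum(allocation.values())
    busiest = max(workloads, key=workloads.get)
    allocation[busiest] += leftover
    return allocation

print(allocate_partitions(8, {"node-a": 50, "node-b": 25, "node-c": 25}))
# {'node-a': 4, 'node-b': 2, 'node-c': 2}
```
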
  • Publication number: 20190391855
    Abstract: Technologies for providing efficient data access in an edge infrastructure include a compute device comprising circuitry configured to identify pools of resources that are usable to access data at an edge location. The circuitry is also configured to receive a request to execute a function at an edge location. The request identifies a data access performance target for the function. The circuitry is also configured to map, based on a data access performance of each pool and the data access performance target of the function, the function to a set of the pools to satisfy the data access performance target.
    Type: Application
    Filed: September 6, 2019
    Publication date: December 26, 2019
    Inventors: Francesc Guim Bernat, Karthik Kumar, Suraj Prabhakaran, Timothy Verrall, Thomas Willhalm, Mark Schmisseur
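
The mapping step in the abstract above can be illustrated by combining pools, in order of measured performance, until the function's data access target is met. The greedy strategy and the GB/s figures below are assumptions for the sketch.

```python
# Hypothetical sketch of the mapping step: pools are sorted by measured data
# access performance and combined until the function's performance target
# (here, aggregate GB/s) is satisfied.
pools = [
    {"name": "pool-dram", "gbps": 40},
    {"name": "pool-pmem", "gbps": 15},
    {"name": "pool-nvme", "gbps": 6},
]

def map_function_to_pools(target_gbps):
    chosen, total = [], 0
    for pool in sorted(pools, key=lambda p: p["gbps"], reverse=True):
        if total >= target_gbps:
            break
        chosen.append(pool["name"])
        total += pool["gbps"]
    return chosen if total >= target_gbps else None  # None -> target cannot be met

print(map_function_to_pools(50))  # ['pool-dram', 'pool-pmem']
```
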
  • Publication number: 20190384837
    Abstract: A group of cache lines in cache may be identified as cache lines not to be flushed to persistent memory until all cache line writes for the group of cache lines have been completed.
    Type: Application
    Filed: June 19, 2018
    Publication date: December 19, 2019
    Inventors: Karthik Kumar, Francesc Guim Bernat, Thomas Willhalm, Mark A. Schmisseur, Benjamin Graniello
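
A compact model of the flush-gating idea above: writes to a group of cache lines are held until the last line in the group is written, and only then does the whole group reach persistent memory. Class and field names are invented; a dict stands in for persistent memory.

```python
# Hypothetical sketch: cache lines in a group are held back from persistent
# memory until every line in the group has been written, so partially updated
# groups never reach persistence.
class CacheLineGroup:
    def __init__(self, line_addresses):
        self.expected = set(line_addresses)
        self.dirty = {}  # address -> value held in cache, not yet flushed

    def write(self, address, value, persistent_memory):
        self.dirty[address] = value
        if set(self.dirty) == self.expected:      # all writes for the group done
            persistent_memory.update(self.dirty)  # flush the whole group at once
            self.dirty.clear()
            return True
        return False                              # keep holding the group in cache

pmem = {}
group = CacheLineGroup({0x100, 0x140, 0x180})
print(group.write(0x100, "v1", pmem))  # False, group incomplete
print(group.write(0x140, "v1", pmem))  # False
print(group.write(0x180, "v1", pmem))  # True, group flushed together
print(sorted(hex(a) for a in pmem))
```
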
  • Publication number: 20190387634
    Abstract: In one embodiment, a circuit board includes: a plurality of layers including interconnects to carry processor-to-processor signaling between a first processor and a second processor; a first connector adapted to a first peripheral portion of the circuit board to couple to a first contact member of the first processor; and a second connector adapted to a second peripheral portion of the circuit board to couple to a first contact member of the second processor. Other embodiments are described and claimed.
    Type: Application
    Filed: August 29, 2019
    Publication date: December 19, 2019
    Inventors: Brian Aspnes, Marc Milobinski, Mark A. Schmisseur
  • Publication number: 20190384516
    Abstract: The present disclosure relates to a dynamically composable computing system comprising a computing fabric with a plurality of different disaggregated computing hardware resources having respective hardware characteristics. A resource manager has access to the respective hardware characteristics of the different disaggregated computing hardware resources and is configured to assemble a composite computing node by selecting one or more disaggregated computing hardware resources with respective hardware characteristics meeting requirements of an application to be executed on the composite computing node. An orchestrator is configured to schedule the application using the assembled composite computing node.
    Type: Application
    Filed: August 1, 2019
    Publication date: December 19, 2019
    Inventors: Francesc Guim Bernat, Karthik Kumar, John Chun Kwok LEUNG, Mark Schmisseur, Thomas Willhalm
  • Patent number: 10509728
    Abstract: Various embodiments are generally directed to an apparatus, method and other techniques to receive a request from a core, the request associated with a memory operation to read or write data, and the request comprising a first address and an offset, the first address to identify a memory location of a memory. Embodiments include performing a first iteration of a memory indirection operation comprising reading the memory at the memory location to determine a second address based on the first address, and determining a memory resource based on the second address and the offset, the memory resource to perform the memory operation for the computing resource or perform a second iteration of the memory indirection operation.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: December 17, 2019
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Mark Schmisseur, Thomas Willhalm
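
One iteration of the memory indirection described in patent 10509728 above can be sketched as a pointer lookup followed by an offset: the first address yields a second address, and second address plus offset selects the memory resource. The dictionaries below are hypothetical stand-ins for memory and the resource map.

```python
# Hypothetical sketch of one indirection iteration: the first address selects a
# slot holding a second address, and (second address + offset) picks the memory
# resource that will actually service the operation.
memory = {0x10: 0x200, 0x14: 0x400}  # location -> second address
resources = {0x200: "near DRAM", 0x210: "pooled memory", 0x404: "remote node"}

def indirect_access(first_address, offset):
    second_address = memory[first_address]  # first iteration: read the pointer
    resource = resources.get(second_address + offset)
    if resource is None:
        raise KeyError("no memory resource at resolved address")
    return resource

print(indirect_access(0x10, 0x10))  # 'pooled memory'
print(indirect_access(0x14, 0x04))  # 'remote node'
```
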
  • Patent number: 10476670
    Abstract: Technologies for providing remote access to a shared memory pool include a memory sled. The memory sled includes a memory pool having one or more byte-addressable memory devices and a memory pool controller coupled to the memory pool. The memory pool controller is to produce, for each of a plurality of compute sleds, address space data indicative of addresses of byte-addressable memory in the memory pool accessible to the compute sled, and corresponding permissions associated with the addresses. The memory pool controller is also to provide the address space data to each corresponding compute sled and receive, from a requesting compute sled of the plurality of compute sleds, a memory access request. The memory access request includes an address from the address space data to be accessed. The memory pool controller is also to perform, in response to receiving the memory access request, a memory access operation on the memory pool. Other embodiments are also described and claimed.
    Type: Grant
    Filed: December 30, 2017
    Date of Patent: November 12, 2019
    Assignee: Intel Corporation
    Inventors: Mark Schmisseur, Dimitrios Ziakas, Murugasamy K. Nachimuthu
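
The permission check implied by the abstract above can be sketched as a range-and-permission lookup against the address space data given to each compute sled. Region layout and permission strings below are assumptions.

```python
# Hypothetical sketch of the permission check: each compute sled holds address
# space data (ranges plus permissions) produced by the memory pool controller,
# and a request is serviced only if the address falls in a range granting the
# requested access.
address_space = {
    "sled-1": [{"base": 0x0000, "limit": 0x0FFF, "perm": "rw"},
               {"base": 0x1000, "limit": 0x1FFF, "perm": "r"}],
}

def access(sled, address, op):
    """op is 'r' or 'w'; returns True if the memory pool services the request."""
    for region in address_space.get(sled, []):
        if region["base"] <= address <= region["limit"] and op in region["perm"]:
            return True
    return False

print(access("sled-1", 0x0100, "w"))  # True: read/write region
print(access("sled-1", 0x1100, "w"))  # False: read-only region
```
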
  • Publication number: 20190339869
    Abstract: Examples are disclosed for configuring a solid state drive (SSD) to operate in a storage mode or a memory mode. In some examples, one or more configuration commands may be received at a controller for an SSD having one or more non-volatile memory arrays. The SSD may be configured to operate in at least one of a storage mode, a memory mode or a combination of the storage mode or the memory mode based on the one or more configuration commands. Other examples are described and claimed.
    Type: Application
    Filed: May 20, 2019
    Publication date: November 7, 2019
    Inventors: Blaise Fanning, Mark A. Schmisseur, Raymond S. Tetrick, Robert J. Royer, Jr., David B. Minturn, Shane Matthews
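
A hedged sketch of the mode-selection behavior described above: a configuration command names storage mode, memory mode, or a combination, and the controller records how the non-volatile arrays are exposed. The command format and the memory_fraction field are invented for this example.

```python
# Hypothetical sketch of mode selection from configuration commands: the
# controller accepts a command naming the mode and configures itself as
# storage, memory, or a split of both. All names here are illustrative.
VALID_MODES = {"storage", "memory", "combined"}

class SSDController:
    def __init__(self):
        self.mode = "storage"       # default: block storage behavior
        self.memory_fraction = 0.0  # portion of the arrays exposed as memory

    def configure(self, command):
        mode = command["mode"]
        if mode not in VALID_MODES:
            raise ValueError(f"unknown mode: {mode}")
        self.mode = mode
        self.memory_fraction = {"storage": 0.0, "memory": 1.0}.get(
            mode, command.get("memory_fraction", 0.5))

ssd = SSDController()
ssd.configure({"mode": "combined", "memory_fraction": 0.25})
print(ssd.mode, ssd.memory_fraction)  # combined 0.25
```
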