Patents by Inventor Mark Schmisseur

Mark Schmisseur has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190042294
    Abstract: A method and system for implementing virtualized network functions (VNFs) in a network. Physical resources of the network are abstracted into virtual resource pools and shared by virtual network entities. A virtual channel is set up for communicating data between a first VNF and a second VNF. A memory pool is allocated for the virtual channel from a set of memory pools. New interfaces are provided for communication between VNFs. The new interfaces may allow payloads or data units to be pushed and pulled from one VNF to another. The data may be stored in a queue in the pooled memory allocated for the VNFs/services. Certain processing may be performed before the data is stored in the memory pool.
    Type: Application
    Filed: April 13, 2018
    Publication date: February 7, 2019
    Inventors: Francesc Guim Bernat, Karthik Kumar, Timothy Verrall, Suraj Prabhakaran, Mark Schmisseur
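    Sketch: the push/pull interface over a pooled-memory queue can be pictured with a minimal Python model. This is an illustrative sketch only, not the patented design; the names MemoryPool, VirtualChannel, push, and pull are assumptions made for illustration.
    ```python
    from collections import deque

    class MemoryPool:
        """Toy stand-in for a pooled-memory region shared by VNFs."""
        def __init__(self, capacity_bytes):
            self.capacity_bytes = capacity_bytes
            self.used_bytes = 0

        def allocate(self, size):
            if self.used_bytes + size > self.capacity_bytes:
                raise MemoryError("pool exhausted")
            self.used_bytes += size

        def release(self, size):
            self.used_bytes -= size

    class VirtualChannel:
        """Queue in pooled memory that lets one VNF push payloads and another pull them."""
        def __init__(self, pool):
            self.pool = pool
            self.queue = deque()

        def push(self, payload: bytes):
            self.pool.allocate(len(payload))   # reserve pooled memory for the payload
            self.queue.append(payload)

        def pull(self):
            payload = self.queue.popleft()
            self.pool.release(len(payload))    # return the space to the pool
            return payload

    # Example: VNF A pushes a packet, VNF B pulls it.
    channel = VirtualChannel(MemoryPool(capacity_bytes=1 << 20))
    channel.push(b"packet-from-vnf-a")
    print(channel.pull())
    ```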
  • Publication number: 20190042617
    Abstract: Examples provide a network component, a network switch, a central office, a base station, a data storage element, a method, an apparatus, a computer program, a machine readable storage, and a machine readable medium. A network component (10) is configured to manage data consistency among two or more data storage elements (20, 30) in a network (40). The network component (10) comprises one or more interfaces (12) configured to register information on the two or more data storage elements (20, 30) comprising the data, information on a temporal range for the data consistency, and information on one or more address spaces at the two or more data storage elements (20, 30) to address the data.
    Type: Application
    Filed: April 10, 2018
    Publication date: February 7, 2019
    Inventors: Francesc Guim Bernat, Karthik Kumar, Mark Schmisseur, Timothy Verrall, Thomas Willhalm
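    Sketch: the registration step can be modeled as a small record keeper. A minimal illustrative sketch, not the patented network component; ConsistencyManager, register, and the field names are assumed for illustration.
    ```python
    from dataclasses import dataclass, field

    @dataclass
    class ConsistencyEntry:
        storage_elements: list        # identifiers of the data storage elements holding the data
        temporal_range_ms: int        # window within which the copies must converge
        address_spaces: dict          # storage element -> (start_address, length)

    @dataclass
    class ConsistencyManager:
        """Toy model of a network component tracking data-consistency metadata."""
        entries: dict = field(default_factory=dict)

        def register(self, data_id, storage_elements, temporal_range_ms, address_spaces):
            self.entries[data_id] = ConsistencyEntry(storage_elements, temporal_range_ms, address_spaces)

        def lookup(self, data_id):
            return self.entries[data_id]

    mgr = ConsistencyManager()
    mgr.register("object-7",
                 storage_elements=["storage-20", "storage-30"],
                 temporal_range_ms=50,
                 address_spaces={"storage-20": (0x1000, 4096), "storage-30": (0x8000, 4096)})
    print(mgr.lookup("object-7"))
    ```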
  • Publication number: 20190042372
    Abstract: An in-memory database is mirrored in persistent memory in nodes in a computer cluster for redundancy. Data can be recovered from persistent memory in a node that is powered down through the use of out-of-band techniques.
    Type: Application
    Filed: June 19, 2018
    Publication date: February 7, 2019
    Inventors: Karthik Kumar, Francesc Guim Bernat, Mark A. Schmisseur, Mustafa Hajeer, Thomas Willhalm
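    Sketch: the mirroring and recovery flow can be approximated with a tiny key-value store whose contents are mirrored to a file standing in for persistent memory. Illustrative only; MirroredKVStore and the file-based "mirror" are assumptions, not the patented mechanism.
    ```python
    import json, os

    class MirroredKVStore:
        """Toy in-memory store whose contents are mirrored to a persistent file per node."""
        def __init__(self, mirror_path):
            self.mirror_path = mirror_path
            self.data = {}
            if os.path.exists(mirror_path):          # recover from the mirror if one exists
                with open(mirror_path) as f:
                    self.data = json.load(f)

        def put(self, key, value):
            self.data[key] = value
            with open(self.mirror_path, "w") as f:   # mirror every update to "persistent memory"
                json.dump(self.data, f)

        def get(self, key):
            return self.data[key]

    # Node A writes; after a simulated power-down, a recovery path reads the same mirror.
    store = MirroredKVStore("node_a.mirror.json")
    store.put("row:42", {"balance": 100})
    recovered = MirroredKVStore("node_a.mirror.json")   # e.g. an out-of-band recovery agent
    print(recovered.get("row:42"))
    ```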
  • Publication number: 20190034829
    Abstract: Technology for a data filter device operable to filter training data is described. The data filter device can receive training data from a data provider. The training data can be provided with corresponding metadata that indicates a model stored in a data store that is associated with the training data. The data filter device can identify a filter that is associated with the model stored in the data store. The data filter device can apply the filter to the training data received from the data provider to obtain filtered training data. The data filter device can provide the filtered training data to the model stored in the data store, wherein the filtered training data is used to train the model.
    Type: Application
    Filed: December 28, 2017
    Publication date: January 31, 2019
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Mark A. Schmisseur, Karthik Kumar, Thomas Willhalm
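    Sketch: the filter-by-model-metadata flow, as a minimal illustrative model. DataFilterDevice, register_filter, and ingest are assumed names; the real device operates on a data store, not Python dictionaries.
    ```python
    class DataFilterDevice:
        """Toy model: pick a filter by the model id in the metadata and apply it before training."""
        def __init__(self):
            self.filters = {}   # model_id -> filter function

        def register_filter(self, model_id, filter_fn):
            self.filters[model_id] = filter_fn

        def ingest(self, training_data, metadata, models):
            model_id = metadata["model_id"]
            filter_fn = self.filters[model_id]
            filtered = [x for x in training_data if filter_fn(x)]
            models[model_id].train(filtered)        # forward only the filtered samples to the model
            return filtered

    class DummyModel:
        def __init__(self):
            self.seen = []
        def train(self, samples):
            self.seen.extend(samples)

    device = DataFilterDevice()
    device.register_filter("fraud-v1", lambda sample: sample["amount"] > 0)  # drop invalid rows
    models = {"fraud-v1": DummyModel()}
    device.ingest([{"amount": 10}, {"amount": -3}], {"model_id": "fraud-v1"}, models)
    print(models["fraud-v1"].seen)   # -> [{'amount': 10}]
    ```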
  • Publication number: 20190035483
    Abstract: Technologies for managing errors in a remotely accessible memory pool include a memory sled. The memory sled includes a memory pool having one or more byte-addressable memory devices and a memory pool controller coupled to the memory pool. The memory sled is to write test data to a byte-addressable memory region in the memory pool. The memory region is to be accessed by a remote compute sled. The memory sled is also to read data from the memory region to which the test data was written, compare the read data to the test data to determine whether a threshold number of errors are present in the read data, and send, in response to a determination that the threshold number of errors are present in the read data, a notification to the remote compute sled that the memory region is faulty.
    Type: Application
    Filed: December 30, 2017
    Publication date: January 31, 2019
    Inventors: Mark Schmisseur, Dimitrios Ziakas, Murugasamy K. Nachimuthu
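    Sketch: the test-write/read-back/threshold check, as a minimal illustrative routine. check_region, ERROR_THRESHOLD, and the simulated faulty buffer are assumptions for illustration, not the sled's actual logic.
    ```python
    import random

    ERROR_THRESHOLD = 4   # assumed threshold of tolerated byte errors

    def check_region(write_fn, read_fn, region_size, notify_fn):
        """Write a test pattern, read it back, and flag the region if errors reach the threshold."""
        test_data = bytes((i * 31) & 0xFF for i in range(region_size))
        write_fn(test_data)
        read_back = read_fn(region_size)
        errors = sum(1 for a, b in zip(test_data, read_back) if a != b)
        if errors >= ERROR_THRESHOLD:
            notify_fn(f"memory region faulty: {errors} errors")   # tell the remote compute sled
        return errors

    # Simulated faulty region: a backing buffer whose reads corrupt a few bytes.
    buffer = bytearray(256)
    def write_fn(data): buffer[:len(data)] = data
    def read_fn(n):
        corrupted = bytearray(buffer[:n])
        for i in random.sample(range(n), 8):
            corrupted[i] ^= 0xFF
        return bytes(corrupted)

    check_region(write_fn, read_fn, 256, notify_fn=print)
    ```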
  • Publication number: 20190034383
    Abstract: Technologies for providing remote access to a shared memory pool include a memory sled. The memory sled includes a memory pool having one or more byte-addressable memory devices and a memory pool controller coupled to the memory pool. The memory pool controller is to produce, for each of a plurality of compute sleds, address space data indicative of addresses of byte-addressable memory in the memory pool accessible to the compute sled, and corresponding permissions associated with the addresses. The memory pool controller is also to provide the address space data to each corresponding compute sled and receive, from a requesting compute sled of the plurality of compute sleds, a memory access request. The memory access request includes an address from the address space data to be accessed. The memory pool controller is also to perform, in response to receiving the memory access request, a memory access operation on the memory pool. Other embodiments are also described and claimed.
    Type: Application
    Filed: December 30, 2017
    Publication date: January 31, 2019
    Inventors: Mark Schmisseur, Dimitrios Ziakas, Murugasamy K. Nachimuthu
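    Sketch: per-sled address-space data with permissions, checked on every access. A minimal illustrative model under assumed names (MemoryPoolController, grant, access); the real controller works on byte-addressable hardware, not a Python bytearray.
    ```python
    class MemoryPoolController:
        """Toy model: per-sled address ranges with permissions, checked on each access."""
        def __init__(self, pool_size):
            self.memory = bytearray(pool_size)
            self.address_space = {}   # sled_id -> list of (start, length, permissions)

        def grant(self, sled_id, start, length, permissions):
            self.address_space.setdefault(sled_id, []).append((start, length, permissions))

        def address_space_data(self, sled_id):
            return self.address_space.get(sled_id, [])   # what the controller publishes to the sled

        def access(self, sled_id, address, data=None):
            for start, length, perms in self.address_space.get(sled_id, []):
                if start <= address < start + length:
                    if data is None and "r" in perms:        # read request
                        return self.memory[address]
                    if data is not None and "w" in perms:    # write request
                        self.memory[address] = data
                        return None
            raise PermissionError(f"sled {sled_id} may not access {hex(address)}")

    ctrl = MemoryPoolController(pool_size=4096)
    ctrl.grant("sled-1", start=0x100, length=0x100, permissions="rw")
    ctrl.access("sled-1", 0x110, data=0x7F)
    print(ctrl.access("sled-1", 0x110))
    ```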
  • Publication number: 20190034763
    Abstract: Technology for a memory controller is described. The memory controller can receive a request to store training data. The request can include a model identifier (ID) that identifies a model that is associated with the training data. The memory controller can send a write request to store the training data associated with the model ID in a memory region in a pooled memory that is allocated for the model ID. The training data that is stored in the memory region in the pooled memory can be addressable based on the model ID.
    Type: Application
    Filed: December 27, 2017
    Publication date: January 31, 2019
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Mark A. Schmisseur, Thomas Willhalm
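    Sketch: pooled memory regions addressed by model ID, as a minimal illustrative model. PooledTrainingMemory and its methods are assumed names, not the patented controller interface.
    ```python
    class PooledTrainingMemory:
        """Toy memory controller: one region per model id, addressed by that id."""
        def __init__(self):
            self.regions = {}   # model_id -> list of stored training batches

        def write(self, model_id, training_data):
            # Allocate a region for the model id on first use, then append to it.
            self.regions.setdefault(model_id, []).append(training_data)

        def read(self, model_id):
            # Training data is addressable by model id rather than by raw address.
            return self.regions.get(model_id, [])

    controller = PooledTrainingMemory()
    controller.write("resnet-v2", [0.1, 0.2, 0.3])
    controller.write("resnet-v2", [0.4, 0.5])
    print(controller.read("resnet-v2"))   # both batches, looked up purely by model id
    ```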
  • Publication number: 20190007284
    Abstract: Technologies for producing proactive notifications of data storage performance include a compute device. The compute device is to obtain key indicator data indicative of a performance condition associated with operations of one or more data storage devices and an associated predefined threshold that, if satisfied, indicates the presence of a key indicator. The compute device is also to obtain remedial action data indicative of a remedial action to be performed by the compute device in response to identification of the key indicator in telemetry data produced by the compute device during operation, analyze the telemetry data to determine whether the key indicator is present in the telemetry data, perform, in response to a determination that the key indicator is present, the predefined remedial action, and send a notification of the predefined indicator to a remote compute device.
    Type: Application
    Filed: June 30, 2017
    Publication date: January 3, 2019
    Inventors: Kshitij A. Doshi, Francesc Guim Bernat, Mark A. Schmisseur
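    Sketch: key indicators with thresholds and remedial actions applied to telemetry. The indicator table, field names, and action strings are assumptions for illustration; only the overall flow follows the abstract.
    ```python
    # Assumed key-indicator table: telemetry field, threshold, and remedial action to run.
    KEY_INDICATORS = [
        {"field": "media_wear_pct", "threshold": 90.0, "action": "migrate_data"},
        {"field": "read_latency_us", "threshold": 500.0, "action": "throttle_background_scrub"},
    ]

    def analyze_telemetry(telemetry, perform_action, notify_remote):
        """Check telemetry against each key indicator; act and notify when one is present."""
        for indicator in KEY_INDICATORS:
            value = telemetry.get(indicator["field"], 0.0)
            if value >= indicator["threshold"]:            # key indicator is present
                perform_action(indicator["action"])        # predefined remedial action
                notify_remote(f"{indicator['field']}={value} exceeded {indicator['threshold']}")

    analyze_telemetry(
        {"media_wear_pct": 93.5, "read_latency_us": 120.0},
        perform_action=lambda a: print("remedial action:", a),
        notify_remote=lambda msg: print("notify remote compute device:", msg),
    )
    ```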
  • Publication number: 20190007334
    Abstract: A host fabric interface (HFI) apparatus, including: an HFI to communicatively couple to a fabric; and a remote hardware acceleration (RHA) engine to: query an orchestrator via the fabric to identify a remote resource having an accelerator; and send a remote accelerator request to the remote resource via the fabric.
    Type: Application
    Filed: June 30, 2017
    Publication date: January 3, 2019
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Kshitij A. Doshi, Mark A. Schmisseur, Narayan Ranganathan, John Chun Kwok Leung
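    Sketch: the query-then-offload flow of the remote hardware acceleration engine. Orchestrator, RemoteAccelerationEngine, and fabric_send are assumed names; the fabric is reduced to a callback.
    ```python
    class Orchestrator:
        """Toy orchestrator: knows which fabric-attached resources expose an accelerator."""
        def __init__(self, resources):
            self.resources = resources   # resource_id -> set of accelerator kinds

        def find(self, accelerator_kind):
            for resource_id, kinds in self.resources.items():
                if accelerator_kind in kinds:
                    return resource_id
            return None

    class RemoteAccelerationEngine:
        """Toy RHA engine: query the orchestrator, then send the request over the 'fabric'."""
        def __init__(self, orchestrator, fabric_send):
            self.orchestrator = orchestrator
            self.fabric_send = fabric_send

        def offload(self, accelerator_kind, payload):
            target = self.orchestrator.find(accelerator_kind)
            if target is None:
                raise RuntimeError("no remote accelerator available")
            return self.fabric_send(target, {"kind": accelerator_kind, "payload": payload})

    orch = Orchestrator({"node-3": {"crypto"}, "node-7": {"compression", "fpga-matmul"}})
    engine = RemoteAccelerationEngine(orch, fabric_send=lambda t, req: f"sent {req['kind']} job to {t}")
    print(engine.offload("fpga-matmul", payload=b"..."))
    ```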
  • Patent number: 10095629
    Abstract: Generally discussed herein are systems, devices, and methods for local and remote dual address decoding. According to an example, a node can include one or more processors to generate a first memory request, the first memory request including a first address and a node identification; a caching agent coupled to the one or more processors, the caching agent to determine that the first address is homed to a remote node remote to the local node; and a network interface controller (NIC) coupled to the caching agent, the NIC to produce a second memory request based on the first memory request. The one or more processors are further to receive a response to the second memory request, the response generated by a switch coupled to the NIC, where the switch includes a remote system address decoder to determine a node identification to which the second memory request is homed.
    Type: Grant
    Filed: September 28, 2016
    Date of Patent: October 9, 2018
    Assignee: Intel Corporation
    Inventors: Francesc Cesc Guim Bernat, Kshitij A. Doshi, Steen Larsen, Mark A. Schmisseur, Raj K. Ramanujan
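    Sketch: two-level (local, then switch-resident) address decoding. SystemAddressDecoder and the example address ranges are assumptions for illustration, not the patented decoder.
    ```python
    import bisect

    class SystemAddressDecoder:
        """Toy decoder: map address ranges to the node id that homes them."""
        def __init__(self, ranges):
            # ranges: list of (start_address, end_address, node_id)
            self.ranges = sorted(ranges)
            self.starts = [r[0] for r in self.ranges]

        def home_node(self, address):
            i = bisect.bisect_right(self.starts, address) - 1
            if i >= 0 and address < self.ranges[i][1]:
                return self.ranges[i][2]
            raise LookupError(f"address {hex(address)} is not homed anywhere")

    LOCAL_NODE = "node-0"
    local_decoder = SystemAddressDecoder([(0x0000, 0x8000, "node-0"), (0x8000, 0x10000, "remote")])
    switch_decoder = SystemAddressDecoder([(0x8000, 0xC000, "node-1"), (0xC000, 0x10000, "node-2")])

    def memory_request(address):
        """First-level decode at the node; second-level decode at the switch for remote addresses."""
        if local_decoder.home_node(address) == LOCAL_NODE:
            return f"serve {hex(address)} locally"
        target = switch_decoder.home_node(address)          # switch-resident remote decoder
        return f"forward {hex(address)} to {target}"

    print(memory_request(0x1234))   # local
    print(memory_request(0xD000))   # remote, decoded by the switch to node-2
    ```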
  • Publication number: 20180285288
    Abstract: In an example, there is disclosed a host-fabric interface (HFI), including: an interconnect interface to communicatively couple the HFI to an interconnect; a network interface to communicatively couple the HFI to a network; network interface logic to provide communication between the interconnect and the network; a coprocessor configured to provide an offloaded function for the network; a memory; and a caching agent configured to: designate a region of the memory as a shared memory between the HFI and a core communicatively coupled to the HFI via the interconnect; receive a memory operation directed to the shared memory; and issue a memory instruction to the memory according to the memory operation.
    Type: Application
    Filed: March 31, 2017
    Publication date: October 4, 2018
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Daniel Rivas Barragan, Kshitij A. Doshi, Mark A. Schmisseur
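    Sketch: a caching agent that designates a shared region and issues memory operations against it. HostFabricInterface and its methods are assumed names; caching and the interconnect are omitted.
    ```python
    class HostFabricInterface:
        """Toy HFI: designate a shared region, then apply memory operations to it."""
        def __init__(self, memory_size):
            self.memory = bytearray(memory_size)
            self.shared = None            # (start, length) visible to both the HFI and the core

        def designate_shared(self, start, length):
            self.shared = (start, length)

        def memory_op(self, op, offset, value=None):
            start, length = self.shared
            if not (0 <= offset < length):
                raise IndexError("operation falls outside the shared region")
            if op == "write":
                self.memory[start + offset] = value    # issue the store on behalf of the core
            elif op == "read":
                return self.memory[start + offset]

    hfi = HostFabricInterface(memory_size=1024)
    hfi.designate_shared(start=256, length=128)
    hfi.memory_op("write", offset=4, value=0xAB)       # core-side write lands in the shared region
    print(hex(hfi.memory_op("read", offset=4)))        # HFI-side offload logic can read it back
    ```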
  • Publication number: 20180288153
    Abstract: A technology is described for performing a multi-node storage operation. An example networked memory storage group coupled to a plurality of computing nodes through a network fabric can be configured to receive a transaction detail message from a master computing node that includes a transaction identifier and transaction details for a multi-node storage operation. Thereafter, storage operation requests that include the transaction identifier may be received from computing nodes assigned storage operation tasks associated with the multi-node storage operation. The networked memory storage group may be configured to determine that storage operations for the multi-node storage operation have been completed and send a message to the master computing node indicating a completion state of the multi-node storage operation.
    Type: Application
    Filed: April 1, 2017
    Publication date: October 4, 2018
    Applicant: Intel Corporation
    Inventors: Steen Larsen, Francesc Guim Bernat, Kshitij A. Doshi, Mark A. Schmisseur
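    Sketch: transaction tracking for a multi-node storage operation. NetworkedStorageGroup, begin, and storage_request are assumed names; the actual storage operations are elided.
    ```python
    class NetworkedStorageGroup:
        """Toy model: track per-transaction tasks and report completion to the master node."""
        def __init__(self):
            self.transactions = {}   # transaction_id -> {"pending": set, "notify": callable}

        def begin(self, transaction_id, task_nodes, notify_master):
            # Transaction detail message from the master: which nodes owe storage operations.
            self.transactions[transaction_id] = {"pending": set(task_nodes), "notify": notify_master}

        def storage_request(self, transaction_id, node_id, payload):
            txn = self.transactions[transaction_id]
            # ... perform the node's storage operation on `payload` here ...
            txn["pending"].discard(node_id)
            if not txn["pending"]:                       # all assigned operations have completed
                txn["notify"](transaction_id, "complete")

    group = NetworkedStorageGroup()
    group.begin("txn-9", task_nodes={"node-1", "node-2"},
                notify_master=lambda txn, state: print(f"{txn}: {state}"))
    group.storage_request("txn-9", "node-1", payload=b"part-1")
    group.storage_request("txn-9", "node-2", payload=b"part-2")   # prints "txn-9: complete"
    ```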
  • Publication number: 20180285009
    Abstract: The present disclosure relates to a dynamically composable computing system comprising a computing fabric with a plurality of different disaggregated computing hardware resources having respective hardware characteristics. A resource manager has access to the respective hardware characteristics of the different disaggregated computing hardware resources and is configured to assemble a composite computing node by selecting one or more disaggregated computing hardware resources with respective hardware characteristics meeting requirements of an application to be executed on the composite computing node. An orchestrator is configured to schedule the application using the assembled composite computing node.
    Type: Application
    Filed: March 30, 2017
    Publication date: October 4, 2018
    Inventors: Francesc Guim Bernat, Karthik Kumar, John Chun Kwok Leung, Mark Schmisseur, Thomas Willhalm
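    Sketch: assembling a composite node by matching resource characteristics against application requirements. The RESOURCES catalogue and compose_node are assumptions for illustration.
    ```python
    # Assumed catalogue of disaggregated resources and their hardware characteristics.
    RESOURCES = [
        {"id": "cpu-a", "kind": "cpu", "cores": 16},
        {"id": "cpu-b", "kind": "cpu", "cores": 64},
        {"id": "mem-a", "kind": "memory", "gib": 128},
        {"id": "mem-b", "kind": "memory", "gib": 1024},
    ]

    def compose_node(requirements):
        """Pick one resource per kind whose characteristics meet the application's requirements."""
        node = {}
        for kind, needed in requirements.items():
            for res in RESOURCES:
                if res["kind"] == kind and all(res.get(k, 0) >= v for k, v in needed.items()):
                    node[kind] = res["id"]
                    break
            else:
                raise RuntimeError(f"no {kind} resource satisfies {needed}")
        return node

    # Orchestrator-side view: assemble a composite node, then schedule the application onto it.
    print(compose_node({"cpu": {"cores": 32}, "memory": {"gib": 512}}))
    # -> {'cpu': 'cpu-b', 'memory': 'mem-b'}
    ```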
  • Publication number: 20180284996
    Abstract: The present disclosure relates to a dynamically composable computing system. The dynamically composable computing system comprises at least one compute sled including a set of respective local computing hardware resources; a plurality of disaggregated memory modules; at least one disaggregated memory acceleration logic configured to perform one or more predefined computations on data stored in one or more of the plurality of disaggregated memory modules; and a resource manager module configured to assemble a composite computing node by associating, in accordance with requirements of a user, at least one of the plurality of disaggregated memory modules with the disaggregated memory acceleration logic to provide at least one accelerated disaggregated memory module and connecting the at least one accelerated disaggregated memory module to the compute sled.
    Type: Application
    Filed: March 30, 2017
    Publication date: October 4, 2018
    Inventors: Francesc Guim Bernat, Mark Schmisseur, Karthik Kumar, Thomas Willhalm, Lidia Warnes
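    Sketch: pairing a disaggregated memory module with near-memory acceleration logic. The class names and the sum reduction are assumptions for illustration; the point is that only the result leaves the module.
    ```python
    class DisaggregatedMemoryModule:
        """Toy memory module holding a list of values."""
        def __init__(self, values):
            self.values = list(values)

    class MemoryAccelerationLogic:
        """Toy near-memory accelerator: runs a predefined computation next to the module's data."""
        def __init__(self, module, computation):
            self.module = module
            self.computation = computation      # e.g. a reduction performed without moving the data

        def run(self):
            return self.computation(self.module.values)

    def compose(module, computation):
        # Resource-manager step: pair a memory module with acceleration logic for a compute sled.
        return MemoryAccelerationLogic(module, computation)

    accelerated = compose(DisaggregatedMemoryModule(range(1_000)), computation=sum)
    print(accelerated.run())   # the sled receives only the result, not the raw data
    ```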
  • Patent number: 10067879
    Abstract: Provided are an apparatus and method for using block windows configured in a memory module to provide block level access to memory chips in the memory module. A plurality of block windows are configured that map to addresses corresponding to the addressable locations in the memory chips. A read/write request is received indicating a requested read or write operation with respect to a target block window comprising one of the block windows. The requested read or write operation is performed with respect to the addresses that map to the target block window.
    Type: Grant
    Filed: December 16, 2015
    Date of Patent: September 4, 2018
    Assignee: Intel Corporation
    Inventors: Woojong Han, Andy M. Rudoff, Mark A. Schmisseur, Richard P. Mangold
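    Sketch: block windows that map block numbers onto byte-addressable memory. BlockWindowModule, the window IDs, and the 512-byte block size are assumptions for illustration.
    ```python
    BLOCK_SIZE = 512

    class BlockWindowModule:
        """Toy memory module: block windows map block numbers onto byte-addressable memory."""
        def __init__(self, size_bytes):
            self.memory = bytearray(size_bytes)
            self.windows = {}   # window_id -> base byte address

        def configure_window(self, window_id, base_address):
            self.windows[window_id] = base_address

        def write_block(self, window_id, block_number, data):
            base = self.windows[window_id] + block_number * BLOCK_SIZE
            self.memory[base:base + BLOCK_SIZE] = data.ljust(BLOCK_SIZE, b"\x00")

        def read_block(self, window_id, block_number):
            base = self.windows[window_id] + block_number * BLOCK_SIZE
            return bytes(self.memory[base:base + BLOCK_SIZE])

    module = BlockWindowModule(size_bytes=1 << 20)
    module.configure_window("win-0", base_address=0x10000)
    module.write_block("win-0", block_number=3, data=b"block-level payload")
    print(module.read_block("win-0", block_number=3)[:19])
    ```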
  • Publication number: 20180189177
    Abstract: Apparatus and method for distributed management of data objects in a network of compute nodes are disclosed herein. A first compute node interface may be communicatively coupled to a first compute node to receive a request from the first compute node for at least a portion of a particular version of a data object, wherein the first compute node interface is to include mapping information and logic, wherein the logic is to redirect the request to a second compute node interface associated with a second compute node when the second compute node is mapped to a plurality of data object addresses that includes an address associated with the data object in accordance with the mapping information, and wherein the first compute node is to receive, as a response to the request, the at least a portion of the particular version of the data object from a third compute node interface associated with a third compute node.
    Type: Application
    Filed: December 29, 2016
    Publication date: July 5, 2018
    Inventors: Francesc Guim Bernat, Kshitij A. Doshi, Mark A. Schmisseur, Steen Larsen, Chet R. Douglas
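    Sketch: redirecting an object request to the interface whose node owns the object's address. ComputeNodeInterface and the two-node setup are assumptions; the abstract's three-node response path is simplified here.
    ```python
    class ComputeNodeInterface:
        """Toy interface: redirect object requests to the interface whose node owns the address."""
        def __init__(self, name, owned_addresses, objects=None):
            self.name = name
            self.owned = set(owned_addresses)       # object addresses mapped to this node
            self.objects = objects or {}            # address -> versioned object payloads
            self.peers = []

        def request(self, address, version):
            if address in self.owned:
                return self.objects[address][version]
            for peer in self.peers:                 # consult the mapping information and redirect
                if address in peer.owned:
                    return peer.request(address, version)
            raise LookupError(f"no node owns address {address}")

    a = ComputeNodeInterface("node-a", owned_addresses=[])
    b = ComputeNodeInterface("node-b", owned_addresses=["obj-17"],
                             objects={"obj-17": {"v1": b"old", "v2": b"new"}})
    a.peers = [b]
    print(a.request("obj-17", version="v2"))   # served by node-b's interface on node-a's behalf
    ```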
  • Publication number: 20180157424
    Abstract: Provided are a method, system, computer readable storage medium, and switch for configuring a switch to assign partitions in storage devices to compute nodes. A management controller configures the switch to dynamically allocate partitions of at least one of the storage devices to the compute nodes based on a workload at the compute node.
    Type: Application
    Filed: November 7, 2017
    Publication date: June 7, 2018
    Inventors: Mark A. Schmisseur, Mohan J. Kumar, Balint Fleischer, Debendra Das Sharma, Raj Ramanujan
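    Sketch: a management controller handing out storage partitions in proportion to a node's workload. SwitchConfig and the one-partition-per-2-GB/s rule are assumptions for illustration.
    ```python
    PARTITIONS = ["ssd0:p0", "ssd0:p1", "ssd1:p0", "ssd1:p1", "ssd1:p2"]

    class SwitchConfig:
        """Toy management controller: allocate storage partitions based on node workload."""
        def __init__(self, partitions):
            self.free = list(partitions)
            self.assignments = {}   # compute node -> list of partitions

        def allocate_for_workload(self, node, io_demand_gbps):
            wanted = max(1, int(io_demand_gbps // 2))      # assumed rule: one partition per ~2 GB/s
            granted = [self.free.pop() for _ in range(min(wanted, len(self.free)))]
            self.assignments.setdefault(node, []).extend(granted)
            return granted

    switch = SwitchConfig(PARTITIONS)
    print(switch.allocate_for_workload("node-1", io_demand_gbps=5.0))   # heavier workload, 2 partitions
    print(switch.allocate_for_workload("node-2", io_demand_gbps=1.0))   # light workload, 1 partition
    ```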
  • Publication number: 20180150256
    Abstract: Technologies for providing data deduplication in a disaggregated architecture include a network device. The network device is to receive, from a compute sled, a request to write a data block to one or more data storage sleds and determine, for each of one or more data sub-blocks within the data block and from deduplication data indicative of physical addresses of data sub-blocks, whether each data sub-block is already stored in a data storage device of a data storage sled. Additionally, the network device is to write, in the deduplication data and in response to a determination that a data sub-block is already stored in a data storage device, a pointer to a physical address of the already-stored data sub-block in association with a logical address of the data sub-block.
    Type: Application
    Filed: September 27, 2017
    Publication date: May 31, 2018
    Inventors: Karthik Kumar, Francesc Guim Bernat, Thomas Willhalm, Mark A. Schmisseur
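    Sketch: a deduplication index keyed by sub-block content hash, recording pointers instead of duplicate data. DedupIndex, the 4 KiB sub-block size, and the SHA-256 keying are assumptions; the abstract describes the idea in terms of physical and logical addresses, not a specific hash.
    ```python
    import hashlib

    SUB_BLOCK = 4096   # assumed sub-block size

    class DedupIndex:
        """Toy dedup table: content hash -> physical address, plus logical -> physical pointers."""
        def __init__(self):
            self.by_hash = {}     # sha256 of sub-block -> physical address
            self.pointers = {}    # logical address -> physical address
            self.next_phys = 0

        def write_block(self, logical_base, block):
            for i in range(0, len(block), SUB_BLOCK):
                sub = block[i:i + SUB_BLOCK]
                digest = hashlib.sha256(sub).hexdigest()
                if digest in self.by_hash:
                    phys = self.by_hash[digest]          # already stored: just record a pointer
                else:
                    phys = self.next_phys                # new data: store once, remember its hash
                    self.next_phys += SUB_BLOCK
                    self.by_hash[digest] = phys
                self.pointers[logical_base + i] = phys

    index = DedupIndex()
    index.write_block(0x0000, b"A" * SUB_BLOCK + b"B" * SUB_BLOCK)
    index.write_block(0x2000, b"A" * SUB_BLOCK)              # duplicate sub-block, no new storage used
    print(index.pointers[0x0000] == index.pointers[0x2000])  # True: both point at the same physical address
    ```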
  • Publication number: 20180150240
    Abstract: Technologies for offloading I/O intensive workload phases to a data storage sled include a compute sled. The compute sled is to execute a workload that includes multiple phases. Each phase is indicative of a different resource utilization over a time period. Additionally, the compute sled is to identify an I/O intensive phase of the workload, wherein the amount of data to be communicated through a network path between the compute sled and the data storage sled to execute the I/O intensive phase satisfies a predefined threshold. The compute sled is also to migrate the workload to the data storage sled to execute the I/O intensive phase locally on the data storage sled. Other embodiments are also described and claimed.
    Type: Application
    Filed: September 29, 2017
    Publication date: May 31, 2018
    Inventors: Francesc Guim Bernat, Karthik Kumar, Mark A. Schmisseur, Thomas Willhalm
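    Sketch: deciding per phase whether to run on the compute sled or migrate next to the data. place_phase, IO_THRESHOLD_GB, and the phase descriptors are assumptions for illustration.
    ```python
    IO_THRESHOLD_GB = 50   # assumed threshold for data moved across the network per phase

    def place_phase(phase):
        """Run a phase locally unless its network I/O volume justifies migrating to the storage sled."""
        if phase["expected_io_gb"] >= IO_THRESHOLD_GB:
            return "migrate to data storage sled"     # execute next to the data
        return "execute on compute sled"

    workload = [
        {"name": "preprocess", "expected_io_gb": 120},   # I/O-intensive phase
        {"name": "train",      "expected_io_gb": 8},     # compute-bound phase
    ]
    for phase in workload:
        print(phase["name"], "->", place_phase(phase))
    ```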
  • Patent number: 9935653
    Abstract: Methods and apparatus related to enhanced Cyclical Redundancy Check (CRC) circuit based on Galois-Field arithmetic are described. In one embodiment, a plurality of exclusive OR logic include first exclusive OR logic and second exclusive OR logic. First Galois Field multiplier logic multiplies a first output from the first exclusive OR logic and a first portion of a plurality of portions of the input data. Second Galois Field multiplier logic multiplies a second output from the second exclusive OR logic and a second portion of the plurality of portions of the input data. Other embodiments are also disclosed and claimed.
    Type: Grant
    Filed: December 28, 2015
    Date of Patent: April 3, 2018
    Assignee: Intel Corporation
    Inventors: Sivakumar Radhakrishnan, Sin S. Tan, Kenneth C. Haren, Mark A. Schmisseur
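    Sketch: the Galois-Field folding idea behind parallel CRC circuits, shown in software. This computes a plain polynomial-remainder CRC (no init value, reflection, or final XOR, so it is not standard CRC-32) and checks that the CRCs of two message portions can be combined with a carry-less multiply, which is the property the hardware GF multipliers exploit; the function names and the split point are assumptions, not the patented circuit.
    ```python
    POLY = 0x104C11DB7          # CRC-32 generator polynomial with the x^32 term included
    R = 32                      # degree of the generator

    def clmul(a, b):
        """Carry-less (GF(2)) multiplication of two polynomials given as integers."""
        result = 0
        while b:
            if b & 1:
                result ^= a
            a <<= 1
            b >>= 1
        return result

    def poly_mod(value, poly=POLY):
        """Remainder of a GF(2) polynomial division by the generator."""
        while value.bit_length() > R:
            value ^= poly << (value.bit_length() - R - 1)
        return value

    def crc_raw(data: bytes):
        """Plain remainder-style CRC (no init/final XOR, no reflection), over the whole message."""
        value = int.from_bytes(data, "big") << R        # append R zero bits
        return poly_mod(value)

    def crc_fold(part_a: bytes, part_b: bytes):
        """Combine the CRCs of two portions with a GF multiply, as a parallel circuit would."""
        shift = poly_mod(1 << (8 * len(part_b)))        # x^(bit length of B) mod P
        return poly_mod(clmul(crc_raw(part_a), shift) ^ crc_raw(part_b))

    message = b"galois field folding"
    assert crc_fold(message[:8], message[8:]) == crc_raw(message)
    print(hex(crc_raw(message)))
    ```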