Patents by Inventor Mark A. Schmisseur

Mark A. Schmisseur has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11269801
    Abstract: There is disclosed an example of an artificial intelligence (AI) system, including: a first hardware platform; a fabric interface configured to communicatively couple the first hardware platform to a second hardware platform; a processor hosted on the first hardware platform and programmed to operate on an AI problem; and a first training accelerator, including: an accelerator hardware; a platform inter-chip link (ICL) configured to communicatively couple the first training accelerator to a second training accelerator on the first hardware platform without aid of the processor; a fabric ICL to communicatively couple the first training accelerator to a third training accelerator on a second hardware platform without aid of the processor; and a system decoder configured to operate the fabric ICL and platform ICL to share data of the accelerator hardware between the first training accelerator and second and third training accelerators without aid of the processor.
    Type: Grant
    Filed: December 17, 2020
    Date of Patent: March 8, 2022
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Da-Ming Chiang, Kshitij A. Doshi, Suraj Prabhakaran, Mark A. Schmisseur
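
As an informal illustration of the claim above, here is a minimal Python sketch of the routing decision in patent 11269801: a system decoder shares accelerator data over the platform inter-chip link when the peer accelerator sits on the same hardware platform, and over the fabric inter-chip link otherwise, without involving the host processor. The class and method names are assumptions made for readability, not terms from the patent.

# Illustrative software model only; "system decoder", ICL classes, and payloads
# are assumptions, not the patented hardware.
from dataclasses import dataclass

@dataclass
class Accelerator:
    name: str
    platform: str          # which hardware platform hosts this accelerator

class PlatformICL:
    """Chip-to-chip link between accelerators on the same platform."""
    def send(self, src, dst, payload):
        return f"platform-ICL: {src.name} -> {dst.name}: {payload}"

class FabricICL:
    """Link that tunnels accelerator traffic over the platform's fabric interface."""
    def send(self, src, dst, payload):
        return f"fabric-ICL: {src.name} -> {dst.name} via fabric: {payload}"

class SystemDecoder:
    """Chooses the platform ICL or the fabric ICL for a share request."""
    def __init__(self):
        self.platform_icl = PlatformICL()
        self.fabric_icl = FabricICL()

    def share(self, src, dst, payload):
        link = self.platform_icl if src.platform == dst.platform else self.fabric_icl
        return link.send(src, dst, payload)

if __name__ == "__main__":
    a1 = Accelerator("accel-1", platform="platform-A")
    a2 = Accelerator("accel-2", platform="platform-A")
    a3 = Accelerator("accel-3", platform="platform-B")
    decoder = SystemDecoder()
    print(decoder.share(a1, a2, "gradient shard 0"))   # same platform
    print(decoder.share(a1, a3, "gradient shard 1"))   # remote platform
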
  • Patent number: 11263162
    Abstract: There is disclosed an example of an artificial intelligence (AI) system, including: a first hardware platform; a fabric interface configured to communicatively couple the first hardware platform to a second hardware platform; a processor hosted on the first hardware platform and programmed to operate on an AI problem; and a first training accelerator, including: an accelerator hardware; a platform inter-chip link (ICL) configured to communicatively couple the first training accelerator to a second training accelerator on the first hardware platform without aid of the processor; a fabric ICL to communicatively couple the first training accelerator to a third training accelerator on a second hardware platform without aid of the processor; and a system decoder configured to operate the fabric ICL and platform ICL to share data of the accelerator hardware between the first training accelerator and second and third training accelerators without aid of the processor.
    Type: Grant
    Filed: December 20, 2017
    Date of Patent: March 1, 2022
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Da-Ming Chiang, Kshitij A. Doshi, Suraj Prabhakaran, Mark A. Schmisseur
  • Publication number: 20220038388
    Abstract: There is disclosed in one example an application-specific integrated circuit (ASIC), including: an artificial intelligence (AI) circuit; and circuitry to: identify a flow, the flow including traffic diverted from a core cloud service of a network to be serviced by an edge node closer to an edge of the network than to the core of the network; receive telemetry related to the flow, the telemetry including fine-grained and flow-level network monitoring data for the flow; operate the AI circuit to predict, from the telemetry, a future service-level demand for the edge node; and cause a service parameter of the edge node to be tuned according to the prediction.
    Type: Application
    Filed: October 13, 2021
    Publication date: February 3, 2022
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Mark A. Schmisseur, Timothy Verrall
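
Publication 20220038388 describes a telemetry-to-tuning loop; the sketch below models it in Python under loose assumptions: a moving average stands in for the AI circuit's prediction, and the tuned service parameter is a worker count. Neither choice comes from the disclosure.

# Assumed stand-ins: moving-average predictor, worker-count tuning knob.
from collections import deque

class EdgeNodeTuner:
    def __init__(self, window=4, requests_per_worker=100):
        self.samples = deque(maxlen=window)   # recent flow-level demand samples
        self.requests_per_worker = requests_per_worker
        self.workers = 1

    def ingest_telemetry(self, requests_per_second):
        """Record fine-grained, flow-level monitoring data for the diverted flow."""
        self.samples.append(requests_per_second)

    def predict_demand(self):
        """Stand-in for the AI circuit: predict near-future service-level demand."""
        return sum(self.samples) / len(self.samples) if self.samples else 0.0

    def tune(self):
        """Adjust a service parameter of the edge node to match the prediction."""
        predicted = self.predict_demand()
        self.workers = max(1, round(predicted / self.requests_per_worker))
        return self.workers

if __name__ == "__main__":
    tuner = EdgeNodeTuner()
    for rps in (80, 240, 310, 290):
        tuner.ingest_telemetry(rps)
    print("predicted demand:", tuner.predict_demand())
    print("workers after tuning:", tuner.tune())
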
  • Publication number: 20220004468
    Abstract: An embodiment of an electronic apparatus may comprise one or more substrates, and a controller coupled to the one or more substrates, the controller to allocate a first secure portion of a pooled memory to a first instantiation of an application on a first node, and circuitry coupled to the one or more substrates and the controller, the circuitry to provide a failover interface for a second instantiation of the application on a second node to access the first secure portion of the pooled memory in the event of a failure of the first node. Other embodiments are disclosed and claimed.
    Type: Application
    Filed: September 20, 2021
    Publication date: January 6, 2022
    Applicant: Intel Corporation
    Inventors: Thomas Willhalm, Francesc Guim Bernat, Karthik Kumar, Rita Gupta, Mark Schmisseur, Dimitrios Ziakas
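
The failover interface in publication 20220004468 can be pictured with the small Python model below. The controller object, the region bookkeeping, and the rule that failover is honored only after the owning node has been marked failed are illustrative assumptions, not the claimed circuitry.

# Illustrative in-memory model; region sizes, node names, and the failover
# policy are assumptions.
class PooledMemoryController:
    def __init__(self):
        self.regions = {}        # app name -> dict(owner, data)
        self.failed_nodes = set()

    def allocate(self, app, node):
        """Allocate a secure portion of pooled memory to an app instance on a node."""
        self.regions[app] = {"owner": node, "data": {}}

    def access(self, app, node):
        """Only the owning node may access the region during normal operation."""
        region = self.regions[app]
        if node != region["owner"]:
            raise PermissionError(f"{node} does not own the region for {app}")
        return region["data"]

    def failover(self, app, standby_node):
        """Failover interface: hand the region to a standby instance if the owner failed."""
        region = self.regions[app]
        if region["owner"] not in self.failed_nodes:
            raise RuntimeError("owning node has not failed; failover refused")
        region["owner"] = standby_node
        return region["data"]

if __name__ == "__main__":
    ctrl = PooledMemoryController()
    ctrl.allocate("billing-app", "node-1")
    ctrl.access("billing-app", "node-1")["last_txn"] = 42   # primary writes state
    ctrl.failed_nodes.add("node-1")                         # node-1 crashes
    state = ctrl.failover("billing-app", "node-2")          # standby takes over
    print("recovered state on node-2:", state)
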
  • Patent number: 11216396
    Abstract: Aspects of the disclosure are directed to systems, methods, and devices that include an application processor. The application processor includes interface logic to interface with a communication module using a bidirectional interconnect link compliant with a peripheral component interconnect express (PCIe) protocol. The interface logic is to receive a data packet from across the link, the data packet comprising a header and a data payload; determine a hint bit set in the header of the data packet; determine a steering tag value in the data packet header based on the hint bit set; and transmit the data payload to non-volatile memory based on the steering tag set in the header.
    Type: Grant
    Filed: September 29, 2016
    Date of Patent: January 4, 2022
    Assignee: Intel Corporation
    Inventors: Mark A. Schmisseur, Raj K. Ramanujan, Filip Schmole, David M. Lee, Ishwar Agarwal, David J. Harriman
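
The header-driven placement in patent 11216396 is sketched below in Python. The packet layout is deliberately simplified (a boolean hint plus an integer steering tag) rather than the actual PCIe TLP processing-hints encoding, and the tag value that selects non-volatile memory is assumed.

# Simplified stand-in for TPH-style hints; not the real PCIe field layout.
NVM_STEERING_TAG = 0x10        # assumed tag value meaning "place in NVM"

def route_packet(packet, volatile_mem, non_volatile_mem):
    """Deliver the payload to non-volatile memory when the hint and tag say so."""
    header = packet["header"]
    if header.get("hint") and header.get("steering_tag") == NVM_STEERING_TAG:
        non_volatile_mem.append(packet["payload"])
        return "non-volatile"
    volatile_mem.append(packet["payload"])
    return "volatile"

if __name__ == "__main__":
    vmem, nvmem = [], []
    pkt_hinted = {"header": {"hint": True, "steering_tag": 0x10}, "payload": b"log-record"}
    pkt_plain = {"header": {"hint": False}, "payload": b"scratch-data"}
    print(route_packet(pkt_hinted, vmem, nvmem))   # -> non-volatile
    print(route_packet(pkt_plain, vmem, nvmem))    # -> volatile
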
  • Publication number: 20210389880
    Abstract: Systems, apparatuses, and methods provide for memory management where an infrastructure processing unit bypasses a central processing unit. Such an infrastructure processing unit determines if incoming packets of memory traffic trigger memory rules stored by the infrastructure processing unit. The incoming packets are routed to the central processing unit in a default mode when the incoming packets do not trigger the memory rules. Conversely, the incoming packets are routed to the infrastructure processing unit and bypass the central processing unit in an inline mode when the incoming packets trigger the memory rules. A memory architecture communicatively coupled to the central processing unit receives a set of atomic transactions from the infrastructure processing unit that bypasses the central processing unit and performs the set of atomic transactions from the infrastructure processing unit.
    Type: Application
    Filed: August 26, 2021
    Publication date: December 16, 2021
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Mark A. Schmisseur
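
A compact Python sketch of the default-versus-inline decision in publication 20210389880 follows. The rule representation (plain predicates over packet fields) is an assumption; the disclosure stores the memory rules in the infrastructure processing unit itself.

# Assumed rule format: a list of predicate functions over a packet dict.
def make_ipu_router(memory_rules):
    """Return a router closure over the IPU's stored memory rules."""
    def route(packet):
        if any(rule(packet) for rule in memory_rules):
            return "inline: IPU issues atomic transactions, CPU bypassed"
        return "default: forwarded to CPU"
    return route

if __name__ == "__main__":
    # Assumed rule: counter-increment traffic for a hot key is handled inline.
    rules = [lambda p: p.get("op") == "atomic_add" and p.get("key", "").startswith("stats/")]
    route = make_ipu_router(rules)
    print(route({"op": "atomic_add", "key": "stats/requests", "value": 1}))
    print(route({"op": "read", "key": "config/limits"}))
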
  • Patent number: 11200104
    Abstract: Racks and rack pods to support a plurality of sleds are disclosed herein. Switches for use in the rack pods are also disclosed herein. A rack comprises a plurality of sleds and a plurality of electromagnetic waveguides. The plurality of sleds are vertically spaced from one another. The plurality of electromagnetic waveguides communicate data signals between the plurality of sleds.
    Type: Grant
    Filed: November 29, 2017
    Date of Patent: December 14, 2021
    Assignee: Intel Corporation
    Inventors: Matthew J. Adiletta, Myles Wilde, Aaron Gorius, Michael T. Crocker, Paul H. Dormitzer, Mark A. Schmisseur
  • Publication number: 20210373954
    Abstract: Data management for edge architected computing systems extends current storage and memory schemes of edge resources to expose interfaces to allow a device, such as an endpoint or client device, or another edge resource, to specify criteria for managing data originating from the device and stored in an edge resource, and extends the storage and memory controllers to manage data in accordance with the criteria, including removing stored data that no longer satisfies the criteria. The criteria includes a temporal hint to specify a time after which the data can be removed, a physical hint to specify a list of edge resources outside of which the data can be removed, an event-based hint to specify an event after which the data can be removed, and a quality of service condition to modify the time specified in the temporal hint based on a condition, such as memory and storage capacity of the edge resource in which the data is managed.
    Type: Application
    Filed: August 13, 2021
    Publication date: December 2, 2021
    Inventors: Francesc Guim Bernat, Ramanathan Sethuraman, Karthik Kumar, Mark A. Schmisseur, Brinda Ganesh
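
The removal criteria in publication 20210373954 map naturally onto a small data structure; the Python sketch below is one hedged reading of them. The field names, the purge check, and the way the quality-of-service condition shortens the temporal hint under capacity pressure are all assumptions.

# Illustrative criteria record and purge check; not the claimed controllers.
from __future__ import annotations
import time
from dataclasses import dataclass, field

@dataclass
class RetentionCriteria:
    expire_at: float | None = None                             # temporal hint
    allowed_resources: set[str] = field(default_factory=set)   # physical hint
    expire_on_events: set[str] = field(default_factory=set)    # event-based hint

def should_remove(criteria, resource_id, seen_events, now=None, capacity_pressure=False):
    """Return True when stored data no longer satisfies its criteria."""
    now = time.time() if now is None else now
    expire_at = criteria.expire_at
    # Assumed QoS condition: under capacity pressure, halve the remaining lifetime.
    if expire_at is not None and capacity_pressure:
        expire_at = now + (expire_at - now) / 2
    if expire_at is not None and now >= expire_at:
        return True
    if criteria.allowed_resources and resource_id not in criteria.allowed_resources:
        return True
    return bool(criteria.expire_on_events & seen_events)

if __name__ == "__main__":
    crit = RetentionCriteria(expire_at=time.time() + 3600,
                             allowed_resources={"edge-cache-1", "edge-cache-2"},
                             expire_on_events={"session-closed"})
    print(should_remove(crit, "edge-cache-1", set()))                # keep
    print(should_remove(crit, "core-dc-9", set()))                   # outside physical hint
    print(should_remove(crit, "edge-cache-1", {"session-closed"}))   # event fired
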
  • Publication number: 20210357520
    Abstract: An embodiment of a semiconductor apparatus may include technology to receive data with a unique identifier, and bypass encryption logic of a media controller based on the unique identifier. Other embodiments are disclosed and claimed.
    Type: Application
    Filed: July 27, 2021
    Publication date: November 18, 2021
    Inventors: Francesc Guim Bernat, Mark Schmisseur, Kshitij Doshi, Kapil Sood, Tarun Viswanathan
  • Patent number: 11176091
    Abstract: Techniques and apparatus for providing access to data in a plurality of storage formats are described. In one embodiment, for example, an apparatus may include logic, at least a portion of which is comprised in hardware coupled to at least one memory, to determine a first storage format of a database operation on a database having a second storage format, and perform a format conversion process responsive to the first storage format being different than the second storage format, the format conversion process to translate a virtual address of the database operation to a physical address, and determine a converted physical address comprising a memory address according to the first storage format. Other embodiments are described and claimed.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: November 16, 2021
    Assignee: Intel Corporation
    Inventors: Mark A. Schmisseur, Thomas Willhalm, Francesc Guim Bernat, Karthik Kumar
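
Patent 11176091's format conversion can be illustrated with simple address arithmetic, as in the Python sketch below. The virtual-to-physical translation step is reduced to adding a base address, and the element size and layout names are assumptions.

# Row-oriented vs. column-oriented offsets for the same table element.
def element_offset(row, col, n_rows, n_cols, elem_size, layout):
    """Byte offset of table element (row, col) under the given storage format."""
    if layout == "row":          # row-oriented: rows are contiguous
        index = row * n_cols + col
    elif layout == "column":     # column-oriented: columns are contiguous
        index = col * n_rows + row
    else:
        raise ValueError(layout)
    return index * elem_size

if __name__ == "__main__":
    base = 0x1000                # assumed physical base of a 4x3 table of 8-byte values
    # The database operation was written against a row-oriented layout...
    requested = base + element_offset(2, 1, 4, 3, 8, "row")
    # ...but the table is physically column-oriented, so the converted address differs.
    converted = base + element_offset(2, 1, 4, 3, 8, "column")
    print(hex(requested), "->", hex(converted))
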
  • Patent number: 11178063
    Abstract: A host fabric interface (HFI) apparatus, including: an HFI to communicatively couple to a fabric; and a remote hardware acceleration (RHA) engine to: query an orchestrator via the fabric to identify a remote resource having an accelerator; and send a remote accelerator request to the remote resource via the fabric.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: November 16, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Kshitij A. Doshi, Mark A. Schmisseur, Narayan Ranganathan, John Chun Kwok Leung
  • Patent number: 11157422
    Abstract: In an example, there is disclosed a host-fabric interface (HFI), including: an interconnect interface to communicatively couple the HFI to an interconnect; a network interface to communicatively couple the HFI to a network; network interface logic to provide communication between the interconnect and the network; a coprocessor configured to provide an offloaded function for the network; a memory; and a caching agent configured to: designate a region of the memory as a shared memory between the HFI and a core communicatively coupled to the HFI via the interconnect; receive a memory operation directed to the shared memory; and issue a memory instruction to the memory according to the memory operation.
    Type: Grant
    Filed: March 31, 2017
    Date of Patent: October 26, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Daniel Rivas Barragan, Kshitij A. Doshi, Mark A. Schmisseur
  • Patent number: 11157642
    Abstract: An embodiment of a semiconductor apparatus may include technology to receive data with a unique identifier, and bypass encryption logic of a media controller based on the unique identifier. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: September 27, 2018
    Date of Patent: October 26, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Mark Schmisseur, Kshitij Doshi, Kapil Sood, Tarun Viswanathan
  • Publication number: 20210318929
    Abstract: Methods and apparatus for application-aware memory patrol scrubbing techniques. The method may be performed on a computing system including one or more memory devices and running multiple applications with associated processes. The computing system may be implemented in a multi-tenant environment, where virtual instances of physical resources provided by the system are allocated to separate tenants, such as through virtualization schemes employing virtual machines or containers. Quality of Service (QoS) scrubbing logic and novel interfaces are provided to enable memory scrubbing QoS policies to be applied at the tenant, application, and/or process level. These QoS policies may include memory ranges for which specific policies are applied, as well as bandwidth allocations for performing scrubbing operations. A pattern generator is also provided for generating scrubbing patterns based on observed or predicted memory access patterns and/or predefined patterns.
    Type: Application
    Filed: June 23, 2021
    Publication date: October 14, 2021
    Inventors: Francesc Guim Bernat, Karthik Kumar, Mark A. Schmisseur, Thomas Willhalm, Marcos E. Carranza
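
Publication 20210318929's per-tenant scrubbing policies are sketched below in Python. The policy fields (an address range and a per-tick byte budget) and the round-robin scheduler are assumptions; actual patrol scrubbing runs in the memory controller hardware.

# Software model of QoS-bounded scrub scheduling; fields and pacing are assumed.
from dataclasses import dataclass

@dataclass
class ScrubPolicy:
    tenant: str
    start: int             # first address of the range this policy covers
    end: int               # one past the last address
    bytes_per_tick: int    # bandwidth allocated to scrubbing this range

def scrub_tick(policies, cursors, read_and_check):
    """Scrub each tenant's range within its bandwidth budget for one tick."""
    for p in policies:
        cursor = cursors.get(p.tenant, p.start)
        budget = p.bytes_per_tick
        while budget > 0:
            read_and_check(cursor)                 # verify/correct one location
            cursor += 1
            budget -= 1
            if cursor >= p.end:                    # wrap to the start of the range
                cursor = p.start
        cursors[p.tenant] = cursor
    return cursors

if __name__ == "__main__":
    touched = []
    policies = [ScrubPolicy("tenant-A", 0x000, 0x100, bytes_per_tick=4),
                ScrubPolicy("tenant-B", 0x100, 0x400, bytes_per_tick=16)]
    cursors = scrub_tick(policies, {}, read_and_check=touched.append)
    print("locations scrubbed this tick:", len(touched))
    print("next cursors:", {t: hex(c) for t, c in cursors.items()})
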
  • Publication number: 20210318965
    Abstract: The platform data aging for adaptive memory scaling described herein provides technical solutions for technical problems facing power management for electronic device processors. Technical solutions described herein include improved processor power management based on a memory region life-cycle (e.g., short-lived, long-lived, static). In an example, a short-term memory request is allocated to a short-term memory region, and that short-term memory region is powered down upon expiration of the lifetime of all short-term memory requests on the short-term memory region. Multiple memory regions may be scaled down (e.g., shut down) or scaled up based on demands for memory capacity and bandwidth.
    Type: Application
    Filed: June 24, 2021
    Publication date: October 14, 2021
    Inventors: Karthik Kumar, Francesc Guim Bernat, Mark A. Schmisseur
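
The lifetime-driven scaling rule in publication 20210318965 is easy to state in code: once every short-lived allocation in a short-term region has expired, the region can be powered down. The Python sketch below assumes a simple per-region list of expiry timestamps.

# Illustrative bookkeeping only; the real mechanism scales physical memory regions.
import time

class ShortTermRegion:
    def __init__(self, name):
        self.name = name
        self.expirations = []     # expiry timestamps of live allocations
        self.powered = True

    def allocate(self, lifetime_s, now=None):
        now = time.time() if now is None else now
        self.expirations.append(now + lifetime_s)

    def maybe_power_down(self, now=None):
        now = time.time() if now is None else now
        self.expirations = [t for t in self.expirations if t > now]
        if not self.expirations:
            self.powered = False       # scale the region down
        return self.powered

if __name__ == "__main__":
    region = ShortTermRegion("short-term-0")
    region.allocate(lifetime_s=5, now=0.0)
    region.allocate(lifetime_s=8, now=0.0)
    print(region.maybe_power_down(now=6.0))    # one allocation still live -> True
    print(region.maybe_power_down(now=9.0))    # all expired -> False (powered down)
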
  • Patent number: 11132353
    Abstract: Examples provide a network component, a network switch, a central office, a base station, a data storage element, a method, an apparatus, a computer program, a machine readable storage, and a machine readable medium. A network component (10) is configured to manage data consistency among two or more data storage elements (20, 30) in a network (40). The network component (10) comprises one or more interfaces (12) configured to register information on the two or more data storage elements (20, 30) comprising the data, information on a temporal range for the data consistency, and information on one or more address spaces at the two or more data storage elements (20, 30) to address the data.
    Type: Grant
    Filed: April 10, 2018
    Date of Patent: September 28, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Mark Schmisseur, Timothy Verrall, Thomas Willhalm
  • Publication number: 20210273863
    Abstract: Embodiments may be generally directed to techniques to cause communication of a registration request between a first end-point and a second end-point of an end-to-end path, the registration request to establish resource load monitoring for one or more resources of the end-to-end path, receive one or more acknowledgements indicating resource loads for each of the one or more resources of the end-to-end path, at least one of the acknowledgements to indicate a resource of the one or more resources is not meeting a threshold requirement for the end-to-end path, and perform an action for communication traffic utilizing the one or more resources based on the acknowledgement.
    Type: Application
    Filed: March 16, 2021
    Publication date: September 2, 2021
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Kshitij A. Doshi, Daniel Rivas Barragan, Mark A. Schmisseur, Steen Larsen
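
The registration-and-acknowledgement flow in publication 20210273863 is sketched below in Python. The resource names, the utilization metric, and the chosen action (throttling) are assumptions used only to show the control flow from registration to per-resource acknowledgements to a path-level decision.

# Assumed metric: fractional utilization per resource on the end-to-end path.
def register_path_monitoring(path_resources, max_utilization):
    """Ask every resource on the path to report its load against a threshold."""
    acks = []
    for name, utilization in path_resources.items():
        acks.append({"resource": name,
                     "load": utilization,
                     "meets_threshold": utilization <= max_utilization})
    return acks

def act_on_acks(acks):
    """Throttle traffic if any resource on the path cannot meet the requirement."""
    failing = [a["resource"] for a in acks if not a["meets_threshold"]]
    return f"throttle traffic; overloaded: {failing}" if failing else "proceed at full rate"

if __name__ == "__main__":
    path = {"nic-A": 0.40, "switch-1": 0.92, "nic-B": 0.35}
    acks = register_path_monitoring(path, max_utilization=0.80)
    print(act_on_acks(acks))
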
  • Patent number: 11106427
    Abstract: Examples may include a data center in which memory sleds are provided with logic to filter data stored on the memory sled responsive to filtering requests from a compute sled. Memory sleds may include memory filtering logic arranged to receive filtering requests, filter data stored on the memory sled, and provide filtering results to the requesting entity. Additionally, a data center is provided in which the fabric interconnect protocols by which sleds in the data center communicate are provided with filtering instructions such that compute sleds can request filtering on memory sleds.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: August 31, 2021
    Assignee: Intel Corporation
    Inventors: Karthik Kumar, Francesc Guim Bernat, Thomas Willhalm, Mark A. Schmisseur
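
Patent 11106427's near-memory filtering is sketched below in Python. The request format (a field name, a predicate, and an operand) is an assumption; the point illustrated is that the memory sled filters locally, so only matching records cross the fabric back to the compute sled.

# Illustrative memory-sled model; predicate names and record layout are assumed.
class MemorySled:
    PREDICATES = {
        "greater_than": lambda value, operand: value > operand,
        "equals": lambda value, operand: value == operand,
    }

    def __init__(self, records):
        self.records = records          # data resident on the memory sled

    def handle_filter_request(self, field, predicate, operand):
        """Filter locally so only matching records are returned over the fabric."""
        test = self.PREDICATES[predicate]
        return [r for r in self.records if test(r[field], operand)]

if __name__ == "__main__":
    sled = MemorySled([{"id": 1, "temp": 61}, {"id": 2, "temp": 74}, {"id": 3, "temp": 58}])
    # A compute sled asks for records with temp > 60 instead of pulling the
    # whole dataset over the fabric and filtering it itself.
    print(sled.handle_filter_request("temp", "greater_than", 60))
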
  • Patent number: 11093287
    Abstract: Data management for edge architected computing systems extends current storage and memory schemes of edge resources to expose interfaces to allow a device, such as an endpoint or client device, or another edge resource, to specify criteria for managing data originating from the device and stored in an edge resource, and extends the storage and memory controllers to manage data in accordance with the criteria, including removing stored data that no longer satisfies the criteria. The criteria includes a temporal hint to specify a time after which the data can be removed, a physical hint to specify a list of edge resources outside of which the data can be removed, an event-based hint to specify an event after which the data can be removed, and a quality of service condition to modify the time specified in the temporal hint based on a condition, such as memory and storage capacity of the edge resource in which the data is managed.
    Type: Grant
    Filed: May 24, 2019
    Date of Patent: August 17, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Ramanathan Sethuraman, Karthik Kumar, Mark A. Schmisseur, Brinda Ganesh
  • Patent number: 11086520
    Abstract: Provided are a method, system, computer readable storage medium, and switch for configuring a switch to assign partitions in storage devices to compute nodes. A management controller configures the switch to dynamically allocate partitions of at least one of the storage devices to the compute nodes based on a workload at the compute node.
    Type: Grant
    Filed: July 12, 2019
    Date of Patent: August 10, 2021
    Assignee: Intel Corporation
    Inventors: Mark A. Schmisseur, Mohan J. Kumar, Balint Fleischer, Debendra Das Sharma, Raj K. Ramanujan
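
Finally, the workload-driven partition assignment in patent 11086520 is sketched below in Python. The proportional-allocation policy and the data structures are assumptions; in the disclosure, a management controller programs the switch that connects the compute nodes to the storage devices.

# Assumed policy: split free partitions across nodes in proportion to workload.
def allocate_partitions(free_partitions, node_workloads):
    """Assign partitions to compute nodes in proportion to their workload."""
    total = sum(node_workloads.values()) or 1
    assignment, remaining = {}, list(free_partitions)
    for node, load in sorted(node_workloads.items(), key=lambda kv: -kv[1]):
        share = round(len(free_partitions) * load / total)
        assignment[node], remaining = remaining[:share], remaining[share:]
    # Hand any leftovers from rounding to the busiest node.
    if remaining:
        busiest = max(node_workloads, key=node_workloads.get)
        assignment[busiest] += remaining
    return assignment

if __name__ == "__main__":
    partitions = [f"ssd0/p{i}" for i in range(8)]
    workloads = {"node-a": 300, "node-b": 100}   # e.g. outstanding I/O requests
    print(allocate_partitions(partitions, workloads))
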