Patents by Inventor Mark A. Schmisseur
Mark A. Schmisseur has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12204471
Abstract: In an example, there is disclosed a host-fabric interface (HFI), including: an interconnect interface to communicatively couple the HFI to an interconnect; a network interface to communicatively couple the HFI to a network; network interface logic to provide communication between the interconnect and the network; a coprocessor configured to provide an offloaded function for the network; a memory; and a caching agent configured to: designate a region of the memory as a shared memory between the HFI and a core communicatively coupled to the HFI via the interconnect; receive a memory operation directed to the shared memory; and issue a memory instruction to the memory according to the memory operation.
Type: Grant
Filed: September 7, 2023
Date of Patent: January 21, 2025
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Daniel Rivas Barragan, Kshitij A. Doshi, Mark A. Schmisseur
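To make the caching agent's role concrete, here is a minimal Python sketch of a component that designates a shared region, accepts memory operations aimed at it, and turns them into reads and writes. The class name, region bookkeeping, and byte-level operations are invented for illustration; this is not the patented implementation.

```python
# Minimal, hypothetical model of a caching agent that exposes a shared
# memory region and turns incoming memory operations into memory accesses.
class CachingAgent:
    def __init__(self, memory_size: int):
        self.memory = bytearray(memory_size)   # backing memory on the HFI
        self.shared_regions = []               # (base, size) regions shared with cores

    def designate_shared(self, base: int, size: int) -> None:
        """Designate [base, base+size) as shared between the HFI and a core."""
        self.shared_regions.append((base, size))

    def _in_shared(self, addr: int) -> bool:
        return any(base <= addr < base + size for base, size in self.shared_regions)

    def handle_operation(self, op: str, addr: int, data: bytes = b"") -> bytes:
        """Receive a memory operation directed to the shared memory and
        issue the corresponding memory instruction (here, a read or write)."""
        if not self._in_shared(addr):
            raise ValueError("address is outside any shared region")
        if op == "write":
            self.memory[addr:addr + len(data)] = data
            return b""
        if op == "read":
            return bytes(self.memory[addr:addr + 8])
        raise ValueError(f"unsupported operation: {op}")


agent = CachingAgent(memory_size=4096)
agent.designate_shared(base=0x100, size=256)
agent.handle_operation("write", 0x100, b"payload!")
print(agent.handle_operation("read", 0x100))
```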
-
Publication number: 20240421025
Abstract: A semiconductor chip package is described. The semiconductor chip package has a substrate. The substrate has side I/Os on the additional surface area of the substrate. The side I/Os are coupled to I/Os of a semiconductor chip within the semiconductor chip package. A cooling assembly has also been described. The cooling assembly has a passageway to guide a cable to connect to a semiconductor chip's side I/Os that are located between a base of a cooling mass and an electronic circuit board that is between a bolster plate and a back plate and that is coupled to second I/Os of the semiconductor chip through a socket that the semiconductor chip's package is plugged into.
Type: Application
Filed: December 16, 2021
Publication date: December 19, 2024
Inventors: Lianchang DU, Jeffory L. SMALLEY, Srikant NEKKANTY, Eric W. BUDDRIUS, Yi ZENG, Xinjun ZHANG, Maoxin YIN, Zhichao ZHANG, Chen ZHANG, Yuehong FAN, Mingli ZHOU, Guoliang YING, Yinglei REN, Chong J. ZHAO, Jun LU, Kai WANG, Timothy Glen HANNA, Vijaya K. BODDU, Mark A. SCHMISSEUR, Lijuan FENG
-
Publication number: 20240396852
Abstract: There is disclosed in one example an application-specific integrated circuit (ASIC), including: an artificial intelligence (AI) circuit; and circuitry to: identify a flow, the flow including traffic diverted from a core cloud service of a network to be serviced by an edge node closer to an edge of the network than to the core of the network; receive telemetry related to the flow, the telemetry including fine-grained and flow-level network monitoring data for the flow; operate the AI circuit to predict, from the telemetry, a future service-level demand for the edge node; and cause a service parameter of the edge node to be tuned according to the prediction.
Type: Application
Filed: August 1, 2024
Publication date: November 28, 2024
Applicant: Intel Corporation
Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Mark A. Schmisseur, Timothy Verrall
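As a rough illustration of the telemetry-to-tuning loop, the sketch below extrapolates future demand from flow telemetry and converts it into a service parameter. The least-squares extrapolation stands in for the AI circuit, and the sample values, function names, and worker-count parameter are assumptions.

```python
# Hypothetical stand-in for the abstract's flow: collect per-flow telemetry,
# predict future demand, and tune a service parameter of an edge node.
# A least-squares line replaces the AI circuit purely for illustration.
from statistics import mean


def predict_demand(samples: list[float]) -> float:
    """Extrapolate the next demand value from recent telemetry samples."""
    n = len(samples)
    if n < 2:
        return samples[-1] if samples else 0.0
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(samples)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, samples)) / \
            sum((x - x_bar) ** 2 for x in xs)
    return y_bar + slope * (n - x_bar)     # value at the next time step


def tune_edge_node(predicted_demand: float, capacity_per_worker: float = 100.0) -> int:
    """Turn the prediction into a service parameter (here, a worker count)."""
    return max(1, round(predicted_demand / capacity_per_worker))


telemetry = [120.0, 150.0, 180.0, 240.0]   # requests/s observed for one flow
demand = predict_demand(telemetry)
print(f"predicted demand: {demand:.0f} req/s -> workers: {tune_edge_node(demand)}")
```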
-
Publication number: 20240378160
Abstract: In accordance with embodiments disclosed herein, there are provided methods, systems, mechanisms, techniques, and apparatuses for presentation of direct accessed storage under a logical drive model; for implementing a distributed architecture for cooperative NVM data protection; for data mirroring for consistent SSD latency; for boosting a controller's performance and RAS with DIF support via concurrent RAID processing; for implementing arbitration and resource schemes of a doorbell mechanism, including doorbell arbitration for fairness and prevention of attack congestion; and for implementing multiple interrupt generation using a messaging unit and NTB in a controller through use of an interrupt coalescing scheme.
Type: Application
Filed: July 22, 2024
Publication date: November 14, 2024
Inventors: Thomas M. Slaight, Sivakumar Radhakrishnan, Mark Schmisseur, Pankaj Kumar, Saptarshi Mondal, Sin S. Tan, David C. Lee, Marc T. Jones, Geetani R. Edirisooriya, Bradley A. Burres, Brian M. Leitner, Kenneth C. Haren, Michael T. Klinglesmith, Matthew R. Wilcox, Eric J. Dahlen
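Among the mechanisms listed, doorbell arbitration for fairness is the easiest to sketch. The round-robin arbiter below is a hypothetical illustration of that one idea; the queue structure, burst limit, and names are invented, and the publication covers the remaining mechanisms separately.

```python
# Hypothetical round-robin doorbell arbiter: each submission queue rings a
# doorbell, and the arbiter services queues fairly so that one noisy queue
# cannot congest the controller (the "fairness" goal named in the abstract).
from collections import deque


class DoorbellArbiter:
    def __init__(self, num_queues: int, burst_limit: int = 4):
        self.doorbells = [deque() for _ in range(num_queues)]  # pending commands
        self.burst_limit = burst_limit   # max commands taken from a queue per turn
        self.next_queue = 0

    def ring(self, queue_id: int, command: str) -> None:
        """A host rings the doorbell for queue_id with a new command."""
        self.doorbells[queue_id].append(command)

    def arbitrate(self) -> list[str]:
        """One arbitration pass: visit queues round-robin, up to burst_limit each."""
        served = []
        for _ in range(len(self.doorbells)):
            q = self.doorbells[self.next_queue]
            for _ in range(min(self.burst_limit, len(q))):
                served.append(q.popleft())
            self.next_queue = (self.next_queue + 1) % len(self.doorbells)
        return served


arbiter = DoorbellArbiter(num_queues=2)
for i in range(10):                      # queue 0 floods the controller
    arbiter.ring(0, f"q0-cmd{i}")
arbiter.ring(1, "q1-cmd0")               # queue 1 still gets served promptly
print(arbiter.arbitrate())
```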
-
Publication number: 20240320179
Abstract: There is disclosed an example of an artificial intelligence (AI) system, including: a first hardware platform; a fabric interface configured to communicatively couple the first hardware platform to a second hardware platform; a processor hosted on the first hardware platform and programmed to operate on an AI problem; and a first training accelerator, including: an accelerator hardware; a platform inter-chip link (ICL) configured to communicatively couple the first training accelerator to a second training accelerator on the first hardware platform without aid of the processor; a fabric ICL to communicatively couple the first training accelerator to a third training accelerator on a second hardware platform without aid of the processor; and a system decoder configured to operate the fabric ICL and platform ICL to share data of the accelerator hardware between the first training accelerator and second and third training accelerators without aid of the processor.
Type: Application
Filed: May 31, 2024
Publication date: September 26, 2024
Applicant: Intel Corporation
Inventors: Francesc Guim Bernat, Da-Ming Chiang, Kshitij A. Doshi, Suraj Prabhakaran, Mark A. Schmisseur
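A small sketch can show how a system decoder might choose between the platform ICL and the fabric ICL. The accelerator and decoder classes below are illustrative assumptions; only the routing rule (same platform versus remote platform) comes from the abstract.

```python
# Hypothetical system decoder: route accelerator-to-accelerator transfers over
# the platform ICL when both accelerators sit on the same hardware platform,
# and over the fabric ICL when the peer is on another platform. The host
# processor is never involved, mirroring the "without aid of the processor"
# language in the abstract.
from dataclasses import dataclass


@dataclass
class Accelerator:
    accel_id: int
    platform_id: int


class SystemDecoder:
    def __init__(self, local: Accelerator):
        self.local = local

    def route(self, peer: Accelerator) -> str:
        """Return which link would carry data shared with the peer accelerator."""
        if peer.platform_id == self.local.platform_id:
            return "platform ICL"   # same hardware platform
        return "fabric ICL"         # remote platform, reached over the fabric


first = Accelerator(accel_id=0, platform_id=0)
second = Accelerator(accel_id=1, platform_id=0)   # same platform
third = Accelerator(accel_id=2, platform_id=1)    # different platform
decoder = SystemDecoder(first)
print(decoder.route(second))   # -> platform ICL
print(decoder.route(third))    # -> fabric ICL
```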
-
Patent number: 12088507
Abstract: There is disclosed in one example an application-specific integrated circuit (ASIC), including: an artificial intelligence (AI) circuit; and circuitry to: identify a flow, the flow including traffic diverted from a core cloud service of a network to be serviced by an edge node closer to an edge of the network than to the core of the network; receive telemetry related to the flow, the telemetry including fine-grained and flow-level network monitoring data for the flow; operate the AI circuit to predict, from the telemetry, a future service-level demand for the edge node; and cause a service parameter of the edge node to be tuned according to the prediction.
Type: Grant
Filed: July 11, 2022
Date of Patent: September 10, 2024
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Mark A. Schmisseur, Timothy Verrall
-
Patent number: 12079149
Abstract: In accordance with embodiments disclosed herein, there are provided methods, systems, mechanisms, techniques, and apparatuses for presentation of direct accessed storage under a logical drive model; for implementing a distributed architecture for cooperative NVM data protection; for data mirroring for consistent SSD latency; for boosting a controller's performance and RAS with DIF support via concurrent RAID processing; for implementing arbitration and resource schemes of a doorbell mechanism, including doorbell arbitration for fairness and prevention of attack congestion; and for implementing multiple interrupt generation using a messaging unit and NTB in a controller through use of an interrupt coalescing scheme.
Type: Grant
Filed: February 8, 2023
Date of Patent: September 3, 2024
Assignee: SK hynix NAND Product Solutions Corp.
Inventors: Thomas M. Slaight, Sivakumar Radhakrishnan, Mark Schmisseur, Pankaj Kumar, Saptarshi Mondal, Sin S. Tan, David C. Lee, Marc T. Jones, Geetani R. Edirisooriya, Bradley A. Burres, Brian M. Leitner, Kenneth C. Haren, Michael T. Klinglesmith, Matthew R. Wilcox, Eric J. Dahlen
-
Publication number: 20240256685
Abstract: An embodiment of an electronic apparatus may include one or more substrates, and logic coupled to the one or more substrates, the logic to process memory operation requests from a memory controller, and provide a front end interface to remote pooled memory hosted at a near edge device. An embodiment of another electronic apparatus may include local memory and logic communicatively coupled to the local memory, the logic to allocate a range of the local memory as remote pooled memory, and provide a back end interface to the remote pooled memory for memory requests from a far edge device. Other embodiments are disclosed and claimed.
Type: Application
Filed: April 8, 2024
Publication date: August 1, 2024
Applicant: Intel Corporation
Inventors: Francesc Guim Bernat, Karthik Kumar, Mark Schmisseur, Thomas Willhalm
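The front end/back end split can be sketched as two cooperating objects: a near edge back end that carves a range of its local memory into a pool, and a far edge front end that forwards memory requests to it. All names, and the in-process "link" between the two halves, are assumptions made for illustration.

```python
# Hypothetical pairing of the two apparatuses in the abstract: a near edge
# device exposes part of its local memory as a pool (back end), and a far
# edge device forwards its memory requests to that pool (front end).
class PooledMemoryBackEnd:
    """Runs at the near edge device; serves a carved-out range of local memory."""

    def __init__(self, local_memory_size: int, pooled_size: int):
        self.local_memory = bytearray(local_memory_size)
        self.pool_base = local_memory_size - pooled_size   # range allocated as pool

    def read(self, offset: int, length: int) -> bytes:
        return bytes(self.local_memory[self.pool_base + offset:
                                       self.pool_base + offset + length])

    def write(self, offset: int, data: bytes) -> None:
        self.local_memory[self.pool_base + offset:
                          self.pool_base + offset + len(data)] = data


class PooledMemoryFrontEnd:
    """Runs at the far edge device; forwards memory controller requests."""

    def __init__(self, back_end: PooledMemoryBackEnd):
        self.back_end = back_end   # in reality this would be a network/fabric link

    def handle_request(self, op: str, offset: int,
                       data: bytes = b"", length: int = 8) -> bytes:
        if op == "write":
            self.back_end.write(offset, data)
            return b""
        return self.back_end.read(offset, length)


front = PooledMemoryFrontEnd(PooledMemoryBackEnd(local_memory_size=1024, pooled_size=256))
front.handle_request("write", 0, b"edge data")
print(front.handle_request("read", 0, length=9))
```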
-
Patent number: 12038861
Abstract: There is disclosed an example of an artificial intelligence (AI) system, including: a first hardware platform; a fabric interface configured to communicatively couple the first hardware platform to a second hardware platform; a processor hosted on the first hardware platform and programmed to operate on an AI problem; and a first training accelerator, including: an accelerator hardware; a platform inter-chip link (ICL) configured to communicatively couple the first training accelerator to a second training accelerator on the first hardware platform without aid of the processor; a fabric ICL to communicatively couple the first training accelerator to a third training accelerator on a second hardware platform without aid of the processor; and a system decoder configured to operate the fabric ICL and platform ICL to share data of the accelerator hardware between the first training accelerator and second and third training accelerators without aid of the processor.
Type: Grant
Filed: October 25, 2022
Date of Patent: July 16, 2024
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Da-Ming Chiang, Kshitij A. Doshi, Suraj Prabhakaran, Mark A. Schmisseur
-
Patent number: 12019768
Abstract: An embodiment of an electronic apparatus may include one or more substrates, and logic coupled to the one or more substrates, the logic to process memory operation requests from a memory controller, and provide a front end interface to remote pooled memory hosted at a near edge device. An embodiment of another electronic apparatus may include local memory and logic communicatively coupled to the local memory, the logic to allocate a range of the local memory as remote pooled memory, and provide a back end interface to the remote pooled memory for memory requests from a far edge device. Other embodiments are disclosed and claimed.
Type: Grant
Filed: March 26, 2020
Date of Patent: June 25, 2024
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Karthik Kumar, Mark Schmisseur, Thomas Willhalm
-
Publication number: 20240179078
Abstract: Embodiments may be generally directed to techniques to cause communication of a registration request between a first end-point and a second end-point of an end-to-end path, the registration request to establish resource load monitoring for one or more resources of the end-to-end path, receive one or more acknowledgements indicating resource loads for each of the one or more resources of the end-to-end path, at least one of the acknowledgements to indicate a resource of the one or more resources is not meeting a threshold requirement for the end-to-end path, and perform an action for communication traffic utilizing the one or more resources based on the acknowledgement.
Type: Application
Filed: December 1, 2023
Publication date: May 30, 2024
Applicant: Intel Corporation
Inventors: Francesc Guim Bernat, Kshitij A. Doshi, Daniel Rivas Barragan, Mark A. Schmisseur, Steen Larsen
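A compact sketch of the registration and acknowledgement flow: monitoring is registered for each resource on the path, acknowledged loads are recorded, and traffic is acted on when any resource misses its threshold. The resource names, threshold semantics, and throttling action are assumptions.

```python
# Hypothetical illustration of the registration/acknowledgement flow: register
# load monitoring for every resource on an end-to-end path, collect per-resource
# load acknowledgements, and act on traffic when any resource misses its
# threshold requirement.
def register_monitoring(path_resources: list[str]) -> dict[str, float]:
    """Registration step: start with no load reported for any path resource."""
    return {resource: 0.0 for resource in path_resources}


def process_acknowledgements(loads: dict[str, float],
                             acks: dict[str, float],
                             threshold: float) -> str:
    """Record acknowledged loads and pick an action for the path's traffic."""
    loads.update(acks)
    overloaded = [r for r, load in loads.items() if load > threshold]
    if overloaded:
        return f"throttle traffic; overloaded resources: {', '.join(overloaded)}"
    return "no action; all resources within threshold"


path = ["nic-A", "switch-1", "switch-2", "nic-B"]
loads = register_monitoring(path)
acks = {"nic-A": 0.42, "switch-1": 0.91, "switch-2": 0.38, "nic-B": 0.20}
print(process_acknowledgements(loads, acks, threshold=0.85))
```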
-
Patent number: 11994997
Abstract: Systems, apparatuses and methods may provide for a memory controller to manage quality of service enforcement and migration between local and pooled memory. A memory controller may include logic to communicate with a local memory and with a pooled memory controller to track memory page usage on a per application basis, instruct the pooled memory controller to perform a quality of service enforcement in response to a determination that an application is latency bound or bandwidth bound, wherein the determination that the application is latency bound or bandwidth bound is based on a cycles per instruction determination, and instruct a Direct Memory Access engine to perform a migration from a remote memory to the local memory in response to a determination that the quality of service cannot be enforced.
Type: Grant
Filed: December 23, 2020
Date of Patent: May 28, 2024
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Karthik Kumar, Mark A. Schmisseur
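The control flow in this abstract can be sketched as a small decision routine: classify the application from its cycles-per-instruction value, request QoS enforcement from the pooled memory controller, and fall back to a DMA migration when enforcement fails. The CPI cutoff, class names, and return values are invented for illustration.

```python
# Hypothetical decision flow from the abstract: classify an application as
# latency bound or bandwidth bound from a cycles-per-instruction (CPI) figure,
# ask the pooled memory controller to enforce QoS, and fall back to a DMA
# migration into local memory when QoS cannot be enforced.
class PooledMemoryController:
    def enforce_qos(self, app: str, profile: str) -> bool:
        print(f"enforcing {profile} QoS for {app} on pooled memory")
        return False      # pretend enforcement fails to show the migration path


class DMAEngine:
    def migrate_to_local(self, app: str) -> None:
        print(f"migrating {app}'s hot pages from remote to local memory")


def manage_application(app: str, cpi: float,
                       pooled: PooledMemoryController, dma: DMAEngine) -> None:
    # High CPI suggests the app stalls on memory latency; otherwise treat it
    # as bandwidth bound (the cutoff of 1.5 is chosen arbitrarily here).
    profile = "latency-bound" if cpi > 1.5 else "bandwidth-bound"
    if not pooled.enforce_qos(app, profile):
        dma.migrate_to_local(app)


manage_application("app-42", cpi=2.3,
                   pooled=PooledMemoryController(), dma=DMAEngine())
```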
-
Patent number: 11983408
Abstract: An embodiment of an electronic apparatus may include one or more substrates, and logic coupled to the one or more substrates, the logic to allocate a first memory portion to a first application as a combination of a local memory and remote memory, wherein the remote memory is shared between multiple compute nodes, and manage a first memory balloon associated with the first memory portion based on two or more memory tiers associated with the local memory and the remote memory. Other embodiments are disclosed and claimed.
Type: Grant
Filed: May 3, 2023
Date of Patent: May 14, 2024
Assignee: Intel Corporation
Inventors: Rasika Subramanian, Lidia Warnes, Francesc Guim Bernat, Mark A. Schmisseur, Durgesh Srivastava
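Below is a minimal sketch of a two-tier balloon, assuming a policy that reclaims remote pages before local ones; the tier names, page counts, and reclaim order are illustrative, not the claimed method.

```python
# Hypothetical two-tier allocation with a balloon: an application's memory
# portion is split across a fast local tier and a slower remote tier shared
# among nodes, and inflating the balloon reclaims pages, remote tier first.
class TieredAllocation:
    def __init__(self, local_pages: int, remote_pages: int):
        self.tiers = {"local": local_pages, "remote": remote_pages}
        self.balloon = {"local": 0, "remote": 0}   # pages reclaimed per tier

    def inflate(self, pages: int) -> None:
        """Reclaim pages from the application, preferring the remote tier."""
        for tier in ("remote", "local"):
            available = self.tiers[tier] - self.balloon[tier]
            take = min(pages, available)
            self.balloon[tier] += take
            pages -= take
            if pages == 0:
                break

    def usable(self) -> dict:
        return {t: self.tiers[t] - self.balloon[t] for t in self.tiers}


alloc = TieredAllocation(local_pages=512, remote_pages=1024)
alloc.inflate(1100)            # balloon grows past the remote tier into local
print(alloc.usable())          # -> {'local': 436, 'remote': 0}
```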
-
Patent number: 11907136
Abstract: An apparatus and/or system is described including a memory device including a memory range and a temporal data management unit (TDMU) coupled to the memory device to receive from an interface, the memory range and a temporal range corresponding to validity of data in the memory range, check the temporal range against a time and/or date value provided by a timer or clock to identify the data in the memory range as expired, and invalidate the data that is expired in the memory device. In some embodiments, the TDMU includes hardware logic that resides on a memory module with the memory device and is coupled to invalidate expired data when the memory module is decoupled from the interface. Other embodiments may be disclosed and claimed.
Type: Grant
Filed: March 16, 2020
Date of Patent: February 20, 2024
Assignee: Intel Corporation
Inventors: Ginger H. Gilsdorf, Karthik Kumar, Mark A. Schmisseur, Thomas Willhalm, Francesc Guim Bernat
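A small model of the TDMU's behavior: register a memory range with its validity window, then sweep against a clock and invalidate whatever has expired. Zero-filling as the invalidation step and the in-memory registry are illustrative assumptions.

```python
# Hypothetical model of a temporal data management unit (TDMU): it records a
# memory range together with the time window in which its data is valid, and
# on a clock check it invalidates (here, zeroes) any range whose window passed.
import time


class TemporalDataManagementUnit:
    def __init__(self, memory: bytearray):
        self.memory = memory
        self.ranges = []   # (start, end, valid_until) tuples received over the interface

    def register_range(self, start: int, end: int, valid_until: float) -> None:
        """Receive a memory range and the temporal range for its validity."""
        self.ranges.append((start, end, valid_until))

    def sweep(self, now: float | None = None) -> int:
        """Check registered ranges against the clock and invalidate expired data."""
        now = time.time() if now is None else now
        expired = [r for r in self.ranges if r[2] <= now]
        for start, end, valid_until in expired:
            self.memory[start:end] = bytes(end - start)   # invalidate by zero-fill
            self.ranges.remove((start, end, valid_until))
        return len(expired)


mem = bytearray(b"temporary-result" + b"\x00" * 48)
tdmu = TemporalDataManagementUnit(mem)
tdmu.register_range(0, 16, valid_until=time.time() - 1)   # already expired
print(tdmu.sweep(), mem[:16])
```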
-
Publication number: 20240045814
Abstract: In an example, there is disclosed a host-fabric interface (HFI), including: an interconnect interface to communicatively couple the HFI to an interconnect; a network interface to communicatively couple the HFI to a network; network interface logic to provide communication between the interconnect and the network; a coprocessor configured to provide an offloaded function for the network; a memory; and a caching agent configured to: designate a region of the memory as a shared memory between the HFI and a core communicatively coupled to the HFI via the interconnect; receive a memory operation directed to the shared memory; and issue a memory instruction to the memory according to the memory operation.
Type: Application
Filed: September 7, 2023
Publication date: February 8, 2024
Applicant: Intel Corporation
Inventors: Francesc Guim Bernat, Daniel Rivas Barragan, Kshitij A. Doshi, Mark A. Schmisseur
-
Patent number: 11870662
Abstract: Embodiments may be generally directed to techniques to cause communication of a registration request between a first end-point and a second end-point of an end-to-end path, the registration request to establish resource load monitoring for one or more resources of the end-to-end path, receive one or more acknowledgements indicating resource loads for each of the one or more resources of the end-to-end path, at least one of the acknowledgements to indicate a resource of the one or more resources is not meeting a threshold requirement for the end-to-end path, and perform an action for communication traffic utilizing the one or more resources based on the acknowledgement.
Type: Grant
Filed: March 9, 2022
Date of Patent: January 9, 2024
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Kshitij A. Doshi, Daniel Rivas Barragan, Mark A. Schmisseur, Steen Larsen
-
Patent number: 11809338
Abstract: In an example, there is disclosed a host-fabric interface (HFI), including: an interconnect interface to communicatively couple the HFI to an interconnect; a network interface to communicatively couple the HFI to a network; network interface logic to provide communication between the interconnect and the network; a coprocessor configured to provide an offloaded function for the network; a memory; and a caching agent configured to: designate a region of the memory as a shared memory between the HFI and a core communicatively coupled to the HFI via the interconnect; receive a memory operation directed to the shared memory; and issue a memory instruction to the memory according to the memory operation.
Type: Grant
Filed: October 4, 2021
Date of Patent: November 7, 2023
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Daniel Rivas Barragan, Kshitij A. Doshi, Mark A. Schmisseur
-
Patent number: 11797343
Abstract: Data management for edge architected computing systems extends current storage and memory schemes of edge resources to expose interfaces to allow a device, such as an endpoint or client device, or another edge resource, to specify criteria for managing data originating from the device and stored in an edge resource, and extends the storage and memory controllers to manage data in accordance with the criteria, including removing stored data that no longer satisfies the criteria. The criteria include a temporal hint to specify a time after which the data can be removed, a physical hint to specify a list of edge resources outside of which the data can be removed, an event-based hint to specify an event after which the data can be removed, and a quality of service condition to modify the time specified in the temporal hint based on a condition, such as memory and storage capacity of the edge resource in which the data is managed.
Type: Grant
Filed: August 13, 2021
Date of Patent: October 24, 2023
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Ramanathan Sethuraman, Karthik Kumar, Mark A. Schmisseur, Brinda Ganesh
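The criteria-driven cleanup can be sketched as a sweep over stored objects carrying temporal, physical, and event hints, with the temporal deadline tightened under a capacity condition. The field names, the 90% capacity cutoff, and the 60-second adjustment are assumptions for illustration.

```python
# Hypothetical sketch of criteria-driven data management at an edge resource:
# each stored object carries a temporal hint, a physical hint (which edge
# resources may keep it), and an event hint, and the controller drops anything
# that no longer satisfies them.
import time
from dataclasses import dataclass, field


@dataclass
class StoredData:
    payload: bytes
    expires_at: float                                      # temporal hint
    allowed_resources: set = field(default_factory=set)    # physical hint
    remove_on_event: str | None = None                     # event-based hint


def sweep(store: dict, resource_id: str, capacity_used: float,
          fired_events: set, now: float | None = None) -> list:
    """Remove entries whose criteria are no longer satisfied; return removed keys."""
    now = time.time() if now is None else now
    removed = []
    for key, item in list(store.items()):
        deadline = item.expires_at
        if capacity_used > 0.9:            # QoS condition: tighten the deadline
            deadline -= 60.0
        expired = now >= deadline
        misplaced = item.allowed_resources and resource_id not in item.allowed_resources
        event_hit = item.remove_on_event in fired_events
        if expired or misplaced or event_hit:
            removed.append(key)
            del store[key]
    return removed


store = {"sensor-1": StoredData(b"...", expires_at=time.time() + 30,
                                allowed_resources={"edge-a"},
                                remove_on_event="session-closed")}
print(sweep(store, resource_id="edge-b", capacity_used=0.5, fired_events=set()))
```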
-
Publication number: 20230333738
Abstract: An embodiment of an electronic apparatus may include one or more substrates, and logic coupled to the one or more substrates, the logic to allocate a first memory portion to a first application as a combination of a local memory and remote memory, wherein the remote memory is shared between multiple compute nodes, and manage a first memory balloon associated with the first memory portion based on two or more memory tiers associated with the local memory and the remote memory. Other embodiments are disclosed and claimed.
Type: Application
Filed: May 3, 2023
Publication date: October 19, 2023
Applicant: Intel Corporation
Inventors: Rasika Subramanian, Lidia Warnes, Francesc Guim Bernat, Mark A. Schmisseur, Durgesh Srivastava
-
Patent number: 11704424
Abstract: An embodiment of a semiconductor apparatus may include technology to receive data with a unique identifier, and bypass encryption logic of a media controller based on the unique identifier. Other embodiments are disclosed and claimed.
Type: Grant
Filed: July 27, 2021
Date of Patent: July 18, 2023
Assignee: Intel Corporation
Inventors: Francesc Guim Bernat, Mark Schmisseur, Kshitij Doshi, Kapil Sood, Tarun Viswanathan
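A minimal sketch of the bypass path: writes tagged with an identifier on a bypass list skip the controller's encryption step. The XOR stand-in for encryption and the bypass set are purely illustrative.

```python
# Hypothetical media controller path for the abstract's idea: data arrives
# tagged with a unique identifier, and identifiers on a bypass list skip the
# controller's encryption logic (e.g., data already encrypted at the source).
class MediaController:
    def __init__(self, bypass_ids: set, key: int = 0x5A):
        self.bypass_ids = bypass_ids
        self.key = key
        self.media = {}

    def _encrypt(self, data: bytes) -> bytes:
        return bytes(b ^ self.key for b in data)   # stand-in for real encryption

    def write(self, unique_id: str, data: bytes) -> None:
        """Write data, bypassing encryption when the identifier allows it."""
        if unique_id in self.bypass_ids:
            self.media[unique_id] = data           # bypass encryption logic
        else:
            self.media[unique_id] = self._encrypt(data)


controller = MediaController(bypass_ids={"pre-encrypted-stream"})
controller.write("pre-encrypted-stream", b"ciphertext-from-source")
controller.write("plain-stream", b"needs encryption")
print(controller.media)
```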