Patents by Inventor Mohan J. Kumar

Mohan J. Kumar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200192710
    Abstract: Technologies for enabling and metering the utilization of components on demand include a compute device. The compute device includes a network interface controller and circuitry configured to receive, through a network and with the network interface controller, a request to enable a component of a sled to assist in the execution of a workload. The circuitry is further configured to enable, in response to the request, the component to assist in the execution of the workload, and meter the utilization of the component by the workload to determine a total monetary cost to a customer associated with the workload for the utilization of the component.
    Type: Application
    Filed: August 30, 2018
    Publication date: June 18, 2020
    Inventors: Mohan J. KUMAR, Murugasamy K. NACHIMUTHU
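A minimal sketch of the metering idea in the abstract above: a component on a sled is enabled in response to a request, its utilization by a workload is accumulated, and a monetary cost is derived from that utilization. The class name, rate table, and method names are illustrative assumptions, not the patent's interfaces.

```python
# Hypothetical per-component billing rates (dollars per second of utilization).
RATES = {"fpga0": 0.0004, "gpu1": 0.0012}


class ComponentMeter:
    """Sketch of enable-on-demand plus utilization metering for one sled component."""

    def __init__(self, component_id: str):
        self.component_id = component_id
        self.enabled = False
        self.busy_seconds = 0.0

    def enable(self) -> None:
        # In the patent's terms the request arrives over the network through the
        # network interface controller; here we simply flip a flag.
        self.enabled = True

    def record_utilization(self, busy_seconds: float) -> None:
        if self.enabled:
            self.busy_seconds += busy_seconds

    def total_cost(self) -> float:
        # Total monetary cost charged to the customer for this workload's use of the component.
        return self.busy_seconds * RATES[self.component_id]


if __name__ == "__main__":
    meter = ComponentMeter("fpga0")
    meter.enable()                    # component enabled in response to a request
    meter.record_utilization(120.0)   # workload used the component for two minutes
    print(f"cost: ${meter.total_cost():.4f}")
```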
  • Patent number: 10687434
    Abstract: Mechanisms for SAS-free cabling in Rack Scale Design (RSD) environments and associated methods, apparatus, and systems. Pooled compute drawers containing multiple compute nodes are coupled to pooled storage drawers using fabric infrastructure, such as Ethernet links and switches. Each pooled storage drawer includes a storage distributor that is coupled to a plurality of storage devices and includes one or more fabric ports and a PCIe switch with multiple PCIe ports. Under one configuration, the PCIe ports are connected to one or more IO hubs, each including a PCIe switch coupled to multiple storage device interfaces that are coupled to the storage devices. In another configuration, the PCIe ports are connected directly to PCIe storage devices. The storage distributor implements an NVMe-oF server driver that interacts with an NVMe-oF client driver running on compute nodes or a fabric switch.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: June 16, 2020
    Assignee: Intel Corporation
    Inventors: Mohan J. Kumar, Murugasamy K. Nachimuthu
  • Patent number: 10649690
    Abstract: In an example, there is disclosed a memory controller, including: a data buffer to drive a determinate value to a data bus to communicatively couple to a memory; and a register clock driver to: receive a memory initialization command from a processor; and incrementally step through a plurality of initialization addresses, sequentially driving each initialization address to an address bus to communicatively couple to the memory. There is also disclosed a computing device comprising the memory controller, and a method of initializing memory comprising incrementally stepping through a plurality of initialization addresses and sequentially writing a determinate value to each address.
    Type: Grant
    Filed: December 26, 2015
    Date of Patent: May 12, 2020
    Assignee: Intel Corporation
    Inventors: Mohan J. Kumar, George Vergis, Sarathy Jayakumar
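The initialization sequence in the abstract above can be pictured as a simple loop: step through a range of addresses and write one determinate value at each, standing in for the register clock driver sequentially driving addresses while the data buffer drives a fixed pattern. The address range, pattern, and dict-backed "memory" are assumptions used only for illustration.

```python
# Model memory as a simple dict of address -> value for illustration only.
memory = {}

DETERMINATE_VALUE = 0x00   # fixed pattern the data buffer would drive onto the data bus
START_ADDRESS = 0x0000
NUM_ADDRESSES = 4096       # size of the illustrative region


def initialize_memory() -> None:
    """Incrementally step through initialization addresses, writing the
    determinate value at each one (the role of the register clock driver)."""
    for offset in range(NUM_ADDRESSES):
        address = START_ADDRESS + offset
        memory[address] = DETERMINATE_VALUE


initialize_memory()
assert all(value == DETERMINATE_VALUE for value in memory.values())
print(f"initialized {len(memory)} addresses")
```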
  • Publication number: 20200133683
    Abstract: Technologies for fast boot-up of a compute device with error-correcting code (ECC) memory are disclosed. A basic input/output system (BIOS) of a compute device may assign memory addresses of the ECC memory to different processors on the compute device. The processors may then initialize the ECC memory in parallel by writing to the ECC memory. The processors may write to the ECC memory with direct-store operations that are immediately written to the ECC memory instead of being cached. The BIOS may continue to operate on one processor while the rest of the processors initialize the ECC memory.
    Type: Application
    Filed: December 28, 2019
    Publication date: April 30, 2020
    Inventors: Murugasamy K. Nachimuthu, Rajat Agarwal, Mohan J. Kumar
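A compact sketch of the parallel initialization described in the abstract above: a BIOS-like coordinator splits the address space among processors, and each processor zero-fills its share concurrently. The thread pool, bytearray-backed "memory", and chunking scheme are assumptions standing in for direct-store writes by real processors.

```python
from concurrent.futures import ThreadPoolExecutor

MEMORY_SIZE = 1 << 16                       # 64 KiB stand-in for the ECC memory region
NUM_PROCESSORS = 4                          # processors the BIOS assigns address ranges to
memory = bytearray(b"\xff" * MEMORY_SIZE)   # uninitialized pattern


def init_range(start: int, end: int) -> None:
    # Stands in for direct-store writes that land in ECC memory instead of the cache.
    memory[start:end] = bytes(end - start)   # zero-fill this processor's share


chunk = MEMORY_SIZE // NUM_PROCESSORS
ranges = [(i * chunk, (i + 1) * chunk) for i in range(NUM_PROCESSORS)]

# The BIOS keeps running on one processor while the others initialize in parallel.
with ThreadPoolExecutor(max_workers=NUM_PROCESSORS) as pool:
    for start, end in ranges:
        pool.submit(init_range, start, end)

assert all(b == 0 for b in memory)
print("ECC region initialized")
```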
  • Publication number: 20200117526
    Abstract: Examples may include a basic input/output system (BIOS) for a computing platform communicating with a controller for a non-volatile dual in-line memory module (NVDIMM). Communication between the BIOS and the controller may include a request for the controller to scan and identify error locations in non-volatile memory at the NVDIMM. The non-volatile memory may be capable of providing persistent memory for the NVDIMM.
    Type: Application
    Filed: September 16, 2019
    Publication date: April 16, 2020
    Inventors: Mohan J. KUMAR, Murugasamy K. NACHIMUTHU, Camille C. RAAD
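The BIOS-to-controller exchange in the abstract above reduces to: the BIOS asks the NVDIMM controller to scan its persistent media, and the controller replies with the addresses of any error locations. The simulated media, addresses, and function names below are illustrative assumptions.

```python
# Simulated non-volatile media on the NVDIMM: address -> True if the location is bad.
nvdimm_media = {addr: (addr in {0x1F40, 0x9C00}) for addr in range(0, 0x10000, 0x40)}


def controller_scan_errors() -> list[int]:
    """NVDIMM controller side: scan the persistent memory and report error locations."""
    return [addr for addr, is_bad in nvdimm_media.items() if is_bad]


def bios_request_scan() -> list[int]:
    """BIOS side: request the scan and receive the identified error locations."""
    return controller_scan_errors()


print([hex(a) for a in bios_request_scan()])   # -> ['0x1f40', '0x9c00']
```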
  • Patent number: 10616669
    Abstract: Examples may include sleds for a rack in a data center including physical compute resources and memory for the physical compute resources. The memory can be disaggregated, or organized into first level and second level memory. A first sled can comprise the physical compute resources and a first set of physical memory resources while a second sled can comprise a second set of physical memory resources. The first set of physical memory resources can be coupled to the physical compute resources via a local interface while the second set of physical memory resources can be coupled to the physical compute resources via a fabric.
    Type: Grant
    Filed: March 31, 2017
    Date of Patent: April 7, 2020
    Assignee: INTEL CORPORATION
    Inventors: Mohan J. Kumar, Murugasamy K. Nachimuthu
  • Patent number: 10592162
    Abstract: Examples include methods for obtaining one or more location hints applicable to a range of logical block addresses of a received input/output (I/O) request for a storage subsystem coupled with a host system over a non-volatile memory express over fabric (NVMe-oF) interconnect. The following steps are performed for each logical block address in the I/O request. A most specific location hint of the one or more location hints that matches that logical block address is applied to identify a destination in the storage subsystem for the I/O request. When the most specific location hint is a consistent hash hint, the consistent hash hint is processed. The I/O request is forwarded to the destination and a completion status for the I/O request is returned. When a location hint log page has changed, the location hint log page is processed. When any location hint refers to NVMe-oF qualified names not included in the immediately preceding query by the discovery service, the immediately preceding query is processed again.
    Type: Grant
    Filed: August 22, 2018
    Date of Patent: March 17, 2020
    Assignee: Intel Corporation
    Inventors: Scott D. Peterson, Sujoy Sen, Anjaneya R. Chagam Reddy, Murugasamy K. Nachimuthu, Mohan J. Kumar
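The per-LBA hint selection described in the abstract above can be sketched as choosing, for each logical block address, the matching hint that covers the narrowest LBA range, and treating a consistent-hash hint as a hash-modulo mapping onto a set of targets (a simplification of true consistent hashing). The hint structure, field names, and hashing function are illustrative assumptions, not the NVMe-oF data structures themselves.

```python
from dataclasses import dataclass
from typing import Optional
import zlib


@dataclass
class LocationHint:
    start_lba: int
    length: int                 # number of LBAs the hint covers
    destination: Optional[str]  # direct destination, or None for a consistent-hash hint
    hash_targets: tuple = ()    # candidate destinations for a consistent-hash hint

    def matches(self, lba: int) -> bool:
        return self.start_lba <= lba < self.start_lba + self.length


def destination_for(lba: int, hints: list[LocationHint]) -> str:
    """Apply the most specific (narrowest-range) matching hint for this LBA."""
    matching = [h for h in hints if h.matches(lba)]
    best = min(matching, key=lambda h: h.length)
    if best.destination is not None:
        return best.destination
    # Consistent-hash hint: map the LBA onto one of the hint's targets.
    return best.hash_targets[zlib.crc32(lba.to_bytes(8, "little")) % len(best.hash_targets)]


hints = [
    LocationHint(0, 1 << 20, "subsys-a"),                              # broad hint
    LocationHint(4096, 1024, None, hash_targets=("nqn-1", "nqn-2")),   # narrower hash hint
]
print(destination_for(10, hints))     # only the broad hint matches -> subsys-a
print(destination_for(4200, hints))   # narrower consistent-hash hint wins
```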
  • Patent number: 10567166
    Abstract: Technologies for dividing resources across partitions include a compute sled. The compute sled is to determine partitions among sockets of the compute sled. Each socket is associated with a corresponding processor. The compute sled is also to establish a separate memory space for each determined partition, obtain, from an application executed in one of the sockets, a request to access a logical memory address, identify the partition associated with the memory access request, determine a corresponding physical memory address as a function of the identified partition and the logical memory address, and access a memory of the compute sled at the determined physical memory address. Other embodiments are also described and claimed.
    Type: Grant
    Filed: December 28, 2017
    Date of Patent: February 18, 2020
    Assignee: Intel Corporation
    Inventors: Murugasamy K. Nachimuthu, Mohan J. Kumar
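One way to picture the address translation in the abstract above: each partition has its own memory space, and a logical address from an application is converted to a physical address as a function of the partition it came from. The base-plus-offset scheme and the table contents below are illustrative assumptions.

```python
# Hypothetical per-partition memory spaces: partition id -> (physical base, size in bytes).
PARTITION_MAP = {
    0: (0x0000_0000, 0x4000_0000),   # partition for socket 0
    1: (0x4000_0000, 0x4000_0000),   # partition for socket 1
}


def translate(partition_id: int, logical_address: int) -> int:
    """Map a logical address to a physical address as a function of the identified partition."""
    base, size = PARTITION_MAP[partition_id]
    if logical_address >= size:
        raise ValueError("logical address outside the partition's memory space")
    return base + logical_address


# An application running in socket 1 requests logical address 0x1000.
print(hex(translate(1, 0x1000)))   # -> 0x40001000
```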
  • Publication number: 20200053438
    Abstract: Embodiments are generally directed to apparatuses, methods, techniques, and so forth to receive a sled manifest comprising identifiers for physical resources of a sled, receive results of authentication and validation operations performed to authenticate and validate the physical resources of the sled, and determine whether those results indicate that the physical resources are authenticated or not. Further, in response to a determination that the results indicate the physical resources are authenticated, the physical resources are permitted to process a workload; in response to a determination that the results indicate the physical resources are not authenticated, the physical resources are prevented from processing the workload.
    Type: Application
    Filed: October 17, 2019
    Publication date: February 13, 2020
    Applicant: INTEL CORPORATION
    Inventors: ALBERTO J. MUNOZ, MURUGASAMY K. NACHIMUTHU, MOHAN J. KUMAR, WOJCIECH POWIERTOWSKI, SERGIU D. GHETIE, NEERAJ S. UPASANI, SAGAR V. DALVI, CHUKWUNENYE S. NNEBE, JEANNE GUILLORY
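The gating logic in the abstract above reduces to a simple check: the sled may process a workload only if every physical resource listed in its manifest passed authentication and validation. The manifest format and results dictionary below are illustrative assumptions.

```python
def may_process_workload(manifest: list[str], results: dict[str, bool]) -> bool:
    """Permit the workload only if every physical resource in the sled manifest
    has passing authentication/validation results; otherwise prevent it."""
    return all(results.get(resource_id, False) for resource_id in manifest)


manifest = ["cpu-0", "cpu-1", "nic-0", "ssd-0"]
results = {"cpu-0": True, "cpu-1": True, "nic-0": True, "ssd-0": False}

if may_process_workload(manifest, results):
    print("physical resources authenticated: workload permitted")
else:
    print("authentication failed: workload prevented")
```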
  • Publication number: 20200050497
    Abstract: Technologies for composing a managed node with multiple processors on multiple compute sleds to cooperatively execute a workload include a memory, one or more processors connected to the memory, and an accelerator. The accelerator further includes a coherence logic unit that is configured to receive a node configuration request to execute a workload. The node configuration request identifies the compute sled and a second compute sled to be included in a managed node. The coherence logic unit is further configured to modify a portion of local working data associated with the workload on the compute sled in the memory with the one or more processors of the compute sled, determine coherence data indicative of the modification made by the one or more processors of the compute sled to the local working data in the memory, and send the coherence data to the second compute sled of the managed node.
    Type: Application
    Filed: November 29, 2017
    Publication date: February 13, 2020
    Inventors: Mohan J. KUMAR, Murugasamy K. NACHIMUTHU, Krishna BHUYAN
  • Publication number: 20200004429
    Abstract: Provided are a method, system, computer readable storage medium, and switch for configuring a switch to assign partitions in storage devices to compute nodes. A management controller configures the switch to dynamically allocate partitions of at least one of the storage devices to the compute nodes based on a workload at the compute node.
    Type: Application
    Filed: July 12, 2019
    Publication date: January 2, 2020
    Inventors: Mark A. SCHMISSEUR, Mohan J. KUMAR, Balint FLEISCHER, Debendra DAS SHARMA, Raj K. RAMANUJAN
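The dynamic allocation described in the abstract above can be sketched as a management controller handing out storage-device partitions in proportion to each compute node's reported workload. The workload metric, partition pool size, and proportional policy are illustrative assumptions; the patent only says allocation is based on workload.

```python
TOTAL_PARTITIONS = 16   # partitions available across the attached storage devices

# Hypothetical workload weights reported for each compute node.
workloads = {"node-a": 5.0, "node-b": 2.0, "node-c": 1.0}


def allocate_partitions(workloads: dict[str, float], total: int) -> dict[str, int]:
    """Management controller policy: assign partitions in proportion to workload."""
    weight_sum = sum(workloads.values())
    allocation = {node: int(total * w / weight_sum) for node, w in workloads.items()}
    # Hand any leftover partitions to the busiest node.
    leftover = total - sum(allocation.values())
    allocation[max(workloads, key=workloads.get)] += leftover
    return allocation


print(allocate_partitions(workloads, TOTAL_PARTITIONS))   # -> {'node-a': 10, 'node-b': 4, 'node-c': 2}
```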
  • Publication number: 20200004633
    Abstract: An apparatus and method are described for detecting and correcting data fetch errors within a processor core. For example, one embodiment of an instruction processing apparatus for detecting and recovering from data fetch errors comprises: at least one processor core having a plurality of instruction processing stages including a data fetch stage and a retirement stage; and error processing logic in communication with the processing stages to perform the operations of: detecting an error associated with data in response to a data fetch operation performed by the data fetch stage; and responsively performing one or more operations to ensure that the error does not corrupt an architectural state of the processor core within the retirement stage.
    Type: Application
    Filed: March 4, 2019
    Publication date: January 2, 2020
    Inventors: Theodros Yigzaw, Geeyarpuram N. Santhanakrishnan, Ganapati N. Srinivasa, Jose A. Vargas, Hisham Shafi, Michael Mishaeli, Ehud Cohen, Zeev Sperber, Shlomo Raikin, Mohan J. Kumar, Julius Y. Mandelblat
  • Patent number: 10521003
    Abstract: A method is described that includes deciding to enter a lower power state and, in response, shutting down a memory channel in a computer system; the other memory channels in the computer system remain active thereafter, so the computer remains operative while the memory channel is shut down.
    Type: Grant
    Filed: April 3, 2017
    Date of Patent: December 31, 2019
    Assignee: Intel Corporation
    Inventors: Murugasamy K. Nachimuthu, Mohan J. Kumar
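The power-saving decision in the abstract above is easy to illustrate: on entering a lower power state, one memory channel is taken offline while the others stay active so the machine keeps running. The channel names and state strings below are illustrative assumptions.

```python
# Illustrative memory channels and their states.
channels = {"ch0": "active", "ch1": "active", "ch2": "active", "ch3": "active"}


def enter_low_power_state(channel_to_shut_down: str) -> None:
    """Shut down one memory channel; the rest remain active so the computer
    stays operative while that channel is down."""
    channels[channel_to_shut_down] = "shutdown"


enter_low_power_state("ch3")
active = [name for name, state in channels.items() if state == "active"]
print("active channels:", active)   # -> ['ch0', 'ch1', 'ch2']
```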
  • Patent number: 10503587
    Abstract: Apparatuses, systems and methods are disclosed herein that generally relate to distributed network storage and filesystems, such as Ceph, Hadoop®, or other big data storage environments utilizing resources and/or storage that may be remotely located across a communication link such as a network. More particularly, disclosed are techniques for one or more machines or devices to scrub data on remote resources and/or storage without requiring all or substantially all of the remote data to be read across the communication link in order to scrub it. Some disclosed embodiments discuss performing validation relatively local to the storage being scrubbed, and some embodiments discuss providing the scrubbing machines only selected results of that relatively local scrubbing over the communication link.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: December 10, 2019
    Assignee: Intel Corporation
    Inventors: Anjaneya R. Chagam Reddy, Mohan J. Kumar, Sujoy Sen, Tushar Gohad
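The storage-local validation idea in the abstract above can be sketched as follows: the storage side hashes its own objects and returns only the digests, so the scrubbing machine compares small results instead of pulling the bulk data across the link. The object store, checksum choice, and report format are assumptions for illustration only.

```python
import hashlib

# Objects held on the remote storage node and the digests the scrubbing machine
# expects (e.g., from the primary copy's metadata).
remote_objects = {"obj-1": b"payload-aaaa", "obj-2": b"payload-bbbb"}
expected_digests = {
    "obj-1": hashlib.sha256(b"payload-aaaa").hexdigest(),
    "obj-2": hashlib.sha256(b"payload-cccc").hexdigest(),   # simulated corruption
}


def local_scrub(objects: dict[str, bytes]) -> dict[str, str]:
    """Runs next to the storage: hash each object locally and return only the digests."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in objects.items()}


# Only the small digest report crosses the communication link, not the object data.
report = local_scrub(remote_objects)
mismatches = [name for name, digest in report.items() if digest != expected_digests[name]]
print("objects needing repair:", mismatches)   # -> ['obj-2']
```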
  • Patent number: 10498604
    Abstract: The present disclosure is directed to capability determination for computing resource allocation. A device may comprise a management engine (ME) to determine device information for use in generating an enhanced universally unique identifier (UUID) based on a UUID corresponding to the device. The ME may interact with equipment in the device to obtain the device information, and may augment the UUID using at least part of the device information. Device information may include a device media access control (MAC) address, a central processing unit (CPU) identification (ID) for at least one CPU in the device, and a device capability ID. The capability ID may be generated utilizing capability information obtained from the equipment, with the capability information encoded into the capability ID based on tables that describe different capabilities. The device may provide the enhanced UUID to a group agent that may group the device with other devices comprising similar capabilities.
    Type: Grant
    Filed: December 31, 2013
    Date of Patent: December 3, 2019
    Assignee: Intel Corporation
    Inventors: Mrittika Ganguli, Jaiber J. John, Mohan J. Kumar, Tessil Thomas
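A toy version of the enhanced-UUID construction in the abstract above: a capability ID is encoded from a table of known capabilities, then appended to the device UUID together with the MAC address and CPU ID. The bit layout, capability table, and joining format are illustrative assumptions, not the encoding defined by the patent.

```python
import uuid

# Hypothetical capability table: each capability maps to one bit in the capability ID.
CAPABILITY_BITS = {"sr-iov": 0x1, "aes-ni": 0x2, "nvme": 0x4, "rdma": 0x8}


def capability_id(capabilities: list[str]) -> int:
    """Encode the device's capabilities into a single capability ID."""
    cap_id = 0
    for cap in capabilities:
        cap_id |= CAPABILITY_BITS[cap]
    return cap_id


def enhanced_uuid(base_uuid: uuid.UUID, mac: str, cpu_id: str, capabilities: list[str]) -> str:
    """Augment the device UUID with the MAC address, CPU ID, and capability ID."""
    return f"{base_uuid}-{mac.replace(':', '')}-{cpu_id}-{capability_id(capabilities):04x}"


device_uuid = uuid.uuid4()
print(enhanced_uuid(device_uuid, "aa:bb:cc:dd:ee:ff", "0x50654", ["nvme", "rdma"]))
```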
  • Patent number: 10489156
    Abstract: Embodiments are generally directed to apparatuses, methods, techniques, and so forth to receive a sled manifest comprising identifiers for physical resources of a sled, receive results of authentication and validation operations performed to authenticate and validate the physical resources of the sled, and determine whether those results indicate that the physical resources are authenticated or not. Further, in response to a determination that the results indicate the physical resources are authenticated, the physical resources are permitted to process a workload; in response to a determination that the results indicate the physical resources are not authenticated, the physical resources are prevented from processing the workload.
    Type: Grant
    Filed: July 21, 2017
    Date of Patent: November 26, 2019
    Assignee: INTEL CORPORATION
    Inventors: Alberto J. Munoz, Murugasamy K. Nachimuthu, Mohan J. Kumar, Wojciech Powiertowski, Sergiu D. Ghetie, Neeraj S. Upasani, Sagar V. Dalvi, Chukwunenye S. Nnebe, Jeanne Guillory
  • Patent number: 10474596
    Abstract: In one embodiment, a processor includes a plurality of cores including a first core to be reserved for execution in a protected domain, the first core to be hidden from an operating system. The processor may further include a filter coupled to the plurality of cores, where the filter includes a plurality of fields each associated with one of the plurality of cores to indicate whether an interrupt of the protected domain is to be directed to the corresponding core. Other embodiments are described and claimed.
    Type: Grant
    Filed: June 25, 2015
    Date of Patent: November 12, 2019
    Assignee: Intel Corporation
    Inventors: Sarathy Jayakumar, Ashok Raj, John G. Holm, Narayan Ranganathan, Mohan J. Kumar, Sergiu D. Ghetie
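The per-core filter in the abstract above is essentially a bitmap consulted on interrupt delivery: a protected-domain interrupt is steered only to cores whose filter bit is set, such as the reserved core hidden from the operating system. The bitmask representation and routing function below are illustrative assumptions.

```python
NUM_CORES = 8
RESERVED_CORE = 7   # core hidden from the OS and reserved for the protected domain

# One field (bit) per core: set means protected-domain interrupts may be directed to that core.
protected_filter = 1 << RESERVED_CORE


def route_protected_interrupt(vector: int) -> list[int]:
    """Return the cores allowed to receive this protected-domain interrupt vector."""
    return [core for core in range(NUM_CORES) if protected_filter & (1 << core)]


print(route_protected_interrupt(vector=0x41))   # -> [7]
```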
  • Publication number: 20190324811
    Abstract: Technologies for providing latency-aware consensus management in a disaggregated system include a compute device. The compute device includes circuitry to determine latencies associated with subsystems of the disaggregated system. Additionally, the circuitry is to determine, as a function of the determined latencies, a time period in which a configuration change to the disaggregated system is to reach a consistent state in the subsystems.
    Type: Application
    Filed: July 2, 2019
    Publication date: October 24, 2019
    Inventors: Mrittika Ganguli, Murugasamy K. Nachimuthu, Muralidharan Sundararajan, Susanne M. Balle, Mohan J. Kumar
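The core computation in the abstract above, deriving from measured subsystem latencies a time period within which a configuration change should become consistent everywhere, can be sketched very simply. Using the worst-case latency scaled by a safety margin is an assumption; the abstract only says the period is a function of the determined latencies.

```python
# Measured latencies (in milliseconds) to each subsystem of the disaggregated system.
subsystem_latencies_ms = {
    "pooled-compute": 0.8,
    "pooled-memory": 1.6,
    "pooled-storage": 2.4,
    "pooled-accelerators": 1.1,
}


def consensus_window_ms(latencies_ms: dict[str, float], margin: float = 2.0) -> float:
    """Time period in which a configuration change should reach a consistent
    state in every subsystem: here, the worst-case latency scaled by a margin."""
    return max(latencies_ms.values()) * margin


print(f"configuration change should settle within "
      f"{consensus_window_ms(subsystem_latencies_ms):.1f} ms")
```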
  • Publication number: 20190320021
    Abstract: Mechanisms for disaggregated storage class memory over fabric and associated methods, apparatus, and systems. A rack is populated with pooled system drawers including pooled compute drawers and pooled storage class memory (SCM) drawers, also referred to as SCM nodes. Optionally, a pooled memory drawer may include a plurality of SCM nodes. Each SCM node provides access to multiple storage class memory devices. Compute nodes including one or more processors and local storage class memory devices are installed in the pooled compute drawers, and are enabled to be selectively-coupled to access remote storage class memory devices over a low-latency fabric. During a memory access from an initiator node (e.g., a compute node) to a target node including attached disaggregated memory (e.g., an SCM node), a fabric node identifier (ID) corresponding to the target node is identified, and an access request is forwarded to that target node over the low-latency fabric.
    Type: Application
    Filed: April 25, 2019
    Publication date: October 17, 2019
    Applicant: Intel Corporation
    Inventors: Murugasamy K. Nachimuthu, Mohan J. Kumar
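The access path described in the abstract above, mapping a remote memory address to the fabric node ID of the SCM node that owns it and forwarding the request over the fabric, can be sketched with a small lookup table. The address ranges, node IDs, and request tuple are illustrative assumptions.

```python
# Hypothetical map of remote SCM address ranges to fabric node IDs of SCM nodes.
SCM_RANGES = [
    (0x1_0000_0000, 0x2_0000_0000, "scm-node-3"),
    (0x2_0000_0000, 0x3_0000_0000, "scm-node-7"),
]


def forward_access(initiator: str, address: int, length: int) -> dict:
    """Identify the target SCM node for this address and build the fabric request."""
    for start, end, node_id in SCM_RANGES:
        if start <= address < end:
            # The request would be forwarded to node_id over the low-latency fabric.
            return {"initiator": initiator, "target": node_id,
                    "address": address, "length": length}
    raise ValueError("address is not backed by disaggregated SCM")


print(forward_access("compute-node-12", 0x2_4000_0000, 4096))
```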
  • Patent number: 10445154
    Abstract: This disclosure is directed to firmware-related event notification. A device may comprise an operating system (OS) configured to operate on a platform. During initialization of the device a firmware module in the platform may load at least one globally unique identifier (GUID) into a firmware configuration table. When the platform notifies the OS, the firmware module may load at least one GUID into a platform notification table and may set a platform notification bit in a platform notification table status field. Upon detecting the notification, an OS management module may establish a source of the notification by querying the platform notification table. The platform notification bit may cause the OS management module to compare GUIDs in the platform notification table and the firmware configuration table. Services may be called based on any matching GUIDs. If no GUIDs match, the services may be called based on firmware variables in the device.
    Type: Grant
    Filed: February 17, 2017
    Date of Patent: October 15, 2019
    Assignee: INTEL CORPORATION
    Inventors: Sarathy Jayakumar, Mohan J. Kumar, Vincent J. Zimmer, Rajesh Poornachandran
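The notification flow at the end of the abstract above can be reduced to a set comparison: when the platform notification bit is set, the OS intersects the GUIDs in the platform notification table with those in the firmware configuration table and calls the services registered for any matches, falling back to firmware variables when nothing matches. The GUID strings and handling function below are illustrative assumptions.

```python
# Hypothetical GUIDs loaded by the firmware module during initialization.
firmware_configuration_table = {
    "8be4df61-93ca-11d2-aa0d-00e098032b8c",
    "4c19049f-4137-4dd3-9c10-8b97a83ffdfa",
}

# GUIDs the platform placed in the notification table when it notified the OS.
platform_notification_table = {
    "4c19049f-4137-4dd3-9c10-8b97a83ffdfa",
    "11111111-2222-3333-4444-555555555555",
}


def handle_platform_notification(bit_set: bool) -> None:
    """OS management module: establish the notification source and dispatch services."""
    if not bit_set:
        return
    matches = platform_notification_table & firmware_configuration_table
    if matches:
        for guid in sorted(matches):
            print(f"calling service registered for GUID {guid}")
    else:
        print("no matching GUIDs: calling services based on firmware variables")


handle_platform_notification(bit_set=True)
```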