Patents by Inventor Mohan J. Kumar

Mohan J. Kumar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180285123
    Abstract: A rack system including a plurality of compute nodes can implement controller consolidation, a definition of a user mode, and/or debuggability hooks. In the controller consolidation, a plurality of nodes each include a minicontroller to communicate with a baseboard management controller. The baseboard management controller manages the nodes through communication with the minicontrollers. In the definition of a user mode, a compute node receives a request for an update and, based on a determination that the update is to firmware of the compute node, blocks the update to prevent an in-band Basic Input/Output System (BIOS) update in a composed system in a rack scale environment. With the debuggability hooks, a processor receives from one of a plurality of processing cores a first message including a first POST code and either a first identifier of a first processing core or a second identifier of a second processing core.
    Type: Application
    Filed: March 29, 2017
    Publication date: October 4, 2018
    Applicant: Intel Corporation
    Inventors: Mohan J. Kumar, Murugasamy K. Nachimuthu
  • Publication number: 20180287949
    Abstract: A rack system including a plurality of nodes can implement thermal/power throttling, sub-node composition, and processing balancing based on voltage/frequency. In the thermal/power throttling, at least one resource is throttled, based at least in part on a heat event or a power event. In the sub-node composition, a plurality of computing cores is divided into a target number of domains. In the processing balancing based on voltage/frequency, a first core performs a first processing job at a first voltage or frequency, and a second core performs a second processing job at a second voltage or frequency different from the first voltage or frequency.
    Type: Application
    Filed: March 29, 2017
    Publication date: October 4, 2018
    Applicant: Intel Corporation
    Inventors: Mohan J. Kumar, Murugasamy K. Nachimuthu, Vasudevan Srinivasan
  • Publication number: 20180276137
    Abstract: An apparatus and method are described for system physical address to memory module address translation. For example, one embodiment of an apparatus comprises: a fetch circuit of a core to fetch a system physical address (SPA) translate instruction from memory; a decode circuit of the core to decode the SPA translate instruction; a first register to store an SPA associated with the SPA translate instruction; a memory controller comprising one or more channel controllers to initiate a translation using the SPA, the memory controller to transmit a translation request to a first channel controller; the first channel controller to synthesize a response including dual in-line memory module (DIMM) address information; and a second register to store the DIMM address information to be used to identify the DIMM during subsequent memory transactions.
    Type: Application
    Filed: March 21, 2017
    Publication date: September 27, 2018
    Inventors: Ashok Raj, Sreenivas Mandava, Sarathy Jayakumar, Mohan J. Kumar, Theodros Yigzaw, Ronald N. Story
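As a rough illustration of the kind of SPA-to-DIMM decode this abstract describes, here is a minimal sketch. The interleave scheme, field widths, and constants are hypothetical and not drawn from the filing:

```python
# Illustrative sketch: decoding a system physical address (SPA) into
# channel/DIMM fields under an assumed simple cache-line interleave.
# All widths and the interleave order are hypothetical.

CACHELINE_BITS = 6        # 64-byte interleave granularity (assumed)
NUM_CHANNELS = 4          # channels interleaved below DIMM select
DIMMS_PER_CHANNEL = 2

def spa_translate(spa: int) -> dict:
    """Map an SPA to (channel, dimm, offset) for the assumed layout."""
    line = spa >> CACHELINE_BITS            # cache-line index
    channel = line % NUM_CHANNELS           # low line bits pick channel
    per_channel = line // NUM_CHANNELS      # line index within channel
    dimm = per_channel % DIMMS_PER_CHANNEL
    offset = (per_channel // DIMMS_PER_CHANNEL) << CACHELINE_BITS
    offset |= spa & ((1 << CACHELINE_BITS) - 1)
    return {"channel": channel, "dimm": dimm, "offset": offset}
```

A real channel controller would synthesize this response from its programmed interleave registers rather than fixed constants.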
  • Publication number: 20180262479
    Abstract: Technologies for verifying authorized operation include an administration server to query a dual-headed identification device of a server for identification data indicative of an identity of the server. The dual-headed identification device includes a wired communication circuit, a wireless communication circuit, and a memory having the identification data stored therein. The administration server further obtains the identification data from the dual-headed identification device of the server, determines a context of the server, and determines whether boot of the server is authorized based on the context of the server, the identification data of the server, and a security policy of the server.
    Type: Application
    Filed: May 10, 2018
    Publication date: September 13, 2018
    Inventors: Rajesh Poornachandran, Vincent J. Zimmer, Shahrok Shahidzadeh, Mohan J. Kumar, Sergiu D. Ghetie
  • Publication number: 20180219797
    Abstract: Technologies for pooling accelerators over fabric are disclosed. In the illustrative embodiment, an application may access an accelerator device through an application programming interface (API), and the API can access either a local accelerator device or a remote accelerator device located on a remote accelerator sled over a network fabric. The API may employ a send queue and a receive queue to send and receive command capsules to and from the accelerator sled.
    Type: Application
    Filed: June 12, 2017
    Publication date: August 2, 2018
    Inventors: Sujoy Sen, Mohan J. Kumar, Donald L. Faw, Susanne M. Balle, Narayan Ranganathan
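A minimal sketch of the send/receive queue pair the abstract mentions, carrying command capsules toward an accelerator endpoint. The capsule layout and the local-vs-remote flag are assumptions for illustration only:

```python
# Sketch of a queue pair carrying command "capsules" to an accelerator.
# Capsule fields and the remote flag are hypothetical.
from collections import deque

class AcceleratorQueuePair:
    def __init__(self, remote: bool = False):
        self.remote = remote          # remote sled vs. local device
        self.send_q = deque()         # capsules to the accelerator
        self.recv_q = deque()         # completions back from it

    def submit(self, opcode: str, payload: bytes) -> None:
        self.send_q.append({"opcode": opcode, "payload": payload})

    def process(self) -> None:
        """Stand-in for the accelerator draining the send queue."""
        while self.send_q:
            capsule = self.send_q.popleft()
            self.recv_q.append({"opcode": capsule["opcode"], "status": 0})

    def poll_completion(self):
        return self.recv_q.popleft() if self.recv_q else None
```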
  • Patent number: 10025686
    Abstract: In an embodiment, a processor includes a plurality of counters each to provide a count of a performance metric of at least one core of the processor, a plurality of threshold registers each to store a threshold value with respect to a corresponding one of the plurality of counters, and an event logic to generate an event digest packet including a plurality of indicators each to indicate whether an event occurred based on a corresponding threshold value and a corresponding count value. Other embodiments are described and claimed.
    Type: Grant
    Filed: October 30, 2012
    Date of Patent: July 17, 2018
    Assignee: Intel Corporation
    Inventors: Mrittika Ganguli, Tessil Thomas, Vinila Rose, Hussam Mousa, Mohan J. Kumar
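The event-digest idea in this abstract can be sketched as a bit vector with one indicator per counter/threshold pair. The packing below is an assumption, not the patented format:

```python
# Sketch: build an event digest where bit i indicates that counter i
# met or exceeded its threshold. Bit ordering is hypothetical.
def event_digest(counts, thresholds) -> int:
    digest = 0
    for i, (count, limit) in enumerate(zip(counts, thresholds)):
        if count >= limit:           # event occurred for this metric
            digest |= 1 << i
    return digest
```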
  • Patent number: 10019354
    Abstract: Apparatus, systems, and methods to manage memory operations are described. A cache controller is provided comprising logic to receive a transaction to operate on a data element in a cache memory, determine whether the data element is to be stored in a nonvolatile memory by querying a source address decoder (SAD), and, in response to a determination that the data element is to be stored in the nonvolatile memory, to forward the transaction to a memory controller coupled to the nonvolatile memory, and, in response to a determination that the data element is not to be stored in the nonvolatile memory, to drop the transaction from a cache flush procedure of the cache controller. Additionally, the cache controller may receive a confirmation signal from the memory controller that the data element was stored in the nonvolatile memory, and return a completion signal to an originator of the transaction. The cache controller may also include logic to place a processor core in a low power state.
    Type: Grant
    Filed: December 9, 2013
    Date of Patent: July 10, 2018
    Assignee: Intel Corporation
    Inventors: Sarathy Jayakumar, Mohan J. Kumar, Eswaramoorthi Nallusamy
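The flush-filtering behavior this abstract describes can be sketched as follows; the SAD model (a plain list of address ranges) is an assumption for illustration:

```python
# Sketch: a cache flush path that queries a source address decoder
# (SAD) and forwards only NVM-backed lines to the memory controller,
# dropping the rest from the flush. The SAD model is hypothetical.
class SourceAddressDecoder:
    def __init__(self, nvm_ranges):
        self.nvm_ranges = nvm_ranges  # list of (start, end) NVM ranges

    def is_nonvolatile(self, addr: int) -> bool:
        return any(lo <= addr < hi for lo, hi in self.nvm_ranges)

def flush_dirty_lines(dirty_addrs, sad, forward_to_mc):
    """Forward NVM-backed lines; return how many were dropped."""
    dropped = 0
    for addr in dirty_addrs:
        if sad.is_nonvolatile(addr):
            forward_to_mc(addr)       # persist via memory controller
        else:
            dropped += 1              # volatile-backed: skip flush
    return dropped
```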
  • Publication number: 20180189188
    Abstract: Unified hardware and software two-level memory mechanisms and associated methods, systems, and software. Data is stored on near and far memory devices, wherein an access latency for a near memory device is less than an access latency for a far memory device. The near memory devices store data in data units having addresses in a near memory virtual address space, while the far memory devices store data in data units having addresses in a far memory address space, with a portion of the data being stored on both near and far memory devices. In response to a memory read access request, a determination is made as to whether data corresponding to the request is located on a near memory device, and if so the data is read from the near memory device; otherwise, the data is read from a far memory device. Memory access patterns are observed, and portions of far memory that are frequently accessed are copied to near memory to reduce access latency for subsequent accesses.
    Type: Application
    Filed: December 31, 2016
    Publication date: July 5, 2018
    Applicant: Intel Corporation
    Inventors: Mohan J. Kumar, Murugasamy K. Nachimuthu
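The near/far read path with hot-page promotion can be sketched like this; the access-count threshold and page-granular model are assumptions, not the mechanism claimed in the filing:

```python
# Sketch of two-level memory: reads hit near memory when present;
# far-memory pages crossing an access-count threshold are copied to
# near memory. Threshold and eviction policy are hypothetical.
class TwoLevelMemory:
    def __init__(self, far: dict, promote_after: int = 3):
        self.far = far                  # far-memory contents by page
        self.near = {}                  # near-memory subset copy
        self.hits = {}                  # per-page far access counts
        self.promote_after = promote_after

    def read(self, page):
        if page in self.near:           # low-latency near path
            return self.near[page]
        value = self.far[page]          # slow far path
        self.hits[page] = self.hits.get(page, 0) + 1
        if self.hits[page] >= self.promote_after:
            self.near[page] = value     # promote frequently-read page
        return value
```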
  • Publication number: 20180189081
    Abstract: Dynamically configurable server platforms and associated apparatus and methods. A server platform including a plurality of CPUs installed in respective sockets may be dynamically configured as multiple single-socket servers and as a multi-socket server. The CPUs are connected to a platform manager component comprising an SoC including one or more processors and an embedded FPGA. Following a platform reset, an FPGA image is loaded, dynamically configuring functional blocks and interfaces on the platform manager. The platform manager also includes pre-defined functional blocks and interfaces. During platform initialization the dynamically-configured functional blocks and interfaces are used to initialize the server platform, while both the pre-defined and dynamically-configured functional blocks and interfaces are used to support run-time operations.
    Type: Application
    Filed: December 30, 2016
    Publication date: July 5, 2018
    Inventors: Neeraj S. Upasani, Jeanne Guillory, Wojciech Powiertowski, Sergiu D Ghetie, Mohan J. Kumar, Murugasamy K. Nachimuthu
  • Publication number: 20180188966
    Abstract: Systems and methods for dynamic address-based mirroring are disclosed. A system may include a processor comprising a mirror address range register to store data indicating a location and a size of a first portion of a system memory to be mirrored. The processor may further include a memory controller coupled to the mirror address range register and including circuitry to cause a second portion of the system memory to mirror the first portion of the system memory.
    Type: Application
    Filed: December 29, 2016
    Publication date: July 5, 2018
    Inventors: Sarathy Jayakumar, Mohan J. Kumar, Ashok Raj, Hemalatha Gurumoorthy, Ronald N. Story
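A minimal sketch of range-based mirroring as described above; the base/size register model and the fixed destination offset are illustrative assumptions:

```python
# Sketch: a memory controller that duplicates writes falling inside a
# configured address range into a mirror region. Register layout is
# hypothetical.
class MirroringController:
    def __init__(self, mirror_base, mirror_size, mirror_dest):
        self.base = mirror_base       # start of mirrored range
        self.size = mirror_size       # length of mirrored range
        self.dest = mirror_dest       # start of mirror copy
        self.mem = {}

    def write(self, addr, value):
        self.mem[addr] = value
        if self.base <= addr < self.base + self.size:
            # duplicate into the mirror region at the same offset
            self.mem[self.dest + (addr - self.base)] = value

    def read(self, addr):
        return self.mem[addr]
```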
  • Publication number: 20180188989
    Abstract: A system, method, and machine-readable medium implementing a mechanism for selecting and providing reconfigurable hardware resources in a rack architecture system are described herein. One embodiment of a system includes a plurality of nodes and a configuration manager. Each of the nodes further includes a plurality of memory resources and a node manager. The node manager is to track the memory resources that are available in the node, determine different possible configurations of memory resources, and generate a performance estimate for each of the possible configurations. The configuration manager is to receive a request to select one or more nodes based on a set of performance requirements, receive from each node the different possible configurations of memory resources and the performance estimate for each of the possible configurations, and iterate through the collected configurations and performance estimates to determine one or more node configurations best matching the set of performance requirements.
    Type: Application
    Filed: December 31, 2016
    Publication date: July 5, 2018
    Inventors: Murugasamy K. Nachimuthu, Mohan J. Kumar
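The configuration manager's matching step can be sketched as a scoring pass over node-reported estimates. The scoring rule (sum of shortfalls against each requirement) is an assumption for illustration:

```python
# Sketch: score candidate (node_id, estimate) pairs against requested
# performance requirements and pick the best match. The shortfall
# scoring is hypothetical.
def best_configuration(candidates, requirements):
    """candidates: list of (node_id, estimate dict); lower score wins."""
    def score(estimate):
        # penalize how far each metric falls short of its requirement
        return sum(max(0, requirements[k] - estimate.get(k, 0))
                   for k in requirements)
    return min(candidates, key=lambda c: score(c[1]))
```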
  • Publication number: 20180192540
    Abstract: Mechanisms for SAS-free cabling in Rack Scale Design (RSD) environments and associated methods, apparatus, and systems. Pooled compute drawers containing multiple compute nodes are coupled to pooled storage drawers using fabric infrastructure, such as Ethernet links and switches. Each pooled storage drawer includes a storage distributor that is coupled to a plurality of storage devices and includes one or more fabric ports and a PCIe switch with multiple PCIe ports. Under one configuration, the PCIe ports are connected to one or more IO hubs including a PCIe switch coupled to multiple storage device interfaces that are coupled to the storage devices. In another configuration, the PCIe ports are connected directly to PCIe storage devices. The storage distributor implements an NVMe-oF server driver that interacts with an NVMe-oF client driver running on compute nodes or a fabric switch.
    Type: Application
    Filed: December 30, 2016
    Publication date: July 5, 2018
    Inventors: Mohan J. Kumar, Murugasamy K. Nachimuthu
  • Publication number: 20180165207
    Abstract: One embodiment provides for a data processing system comprising a multi-level system memory including a first memory level of volatile memory and a second memory level that is larger and slower in comparison with the first memory level. The second memory level includes non-volatile memory and can additionally include volatile memory. The multi-level system memory includes a multi-level memory controller including logic to manage a list of faulty addresses within the multi-level system memory. The multi-level memory controller is configured to satisfy a request for data stored in the first memory level from the second memory level when the data is stored at an address on the list of faulty addresses.
    Type: Application
    Filed: December 9, 2016
    Publication date: June 14, 2018
    Inventors: Theodros Yigzaw, Ashok Raj, Robert C. Swanson, Mohan J. Kumar
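The faulty-address redirect described above can be sketched in a few lines; the data structures are illustrative, not the claimed implementation:

```python
# Sketch: serve a read from the slower second memory level when the
# requested first-level address is on the faulty-address list.
class FaultAwareController:
    def __init__(self, level1, level2, faulty):
        self.level1 = level1
        self.level2 = level2
        self.faulty = set(faulty)      # known-bad first-level addresses

    def read(self, addr):
        if addr in self.faulty:
            return self.level2[addr]   # satisfy from second level
        return self.level1[addr]       # normal fast path
```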
  • Publication number: 20180165196
    Abstract: Embodiments provide for a processor including a cache, a caching agent, and a processing node to decode an instruction including at least one operand specifying an address range within a distributed shared memory (DSM) and to perform a flush to a first of a plurality of memory devices in the DSM at the specified address range.
    Type: Application
    Filed: December 12, 2016
    Publication date: June 14, 2018
    Inventors: Francesc Guim Bernat, Karthik Kumar, Mohan J. Kumar, Thomas Willhalm, Robert G. Blankenship
  • Publication number: 20180165144
    Abstract: A processor includes an instruction decoder to receive an instruction to perform a machine check operation, the instruction having a first operand and a second operand. The processor further includes machine check logic coupled to the instruction decoder to determine, based on a command value stored in a first storage location indicated by the first operand, that the instruction is to determine a type of a machine check bank; to determine the type of a machine check bank identified by a machine check bank identifier (ID) stored in a second storage location indicated by the second operand; and to store the determined type of the machine check bank in the first storage location indicated by the first operand.
    Type: Application
    Filed: December 8, 2016
    Publication date: June 14, 2018
    Inventors: Ashok Raj, Narayan Ranganathan, Mohan J. Kumar, Vincent J. Zimmer
  • Publication number: 20180157424
    Abstract: Provided are a method, system, computer readable storage medium, and switch for configuring a switch to assign partitions in storage devices to compute nodes. A management controller configures the switch to dynamically allocate partitions of at least one of the storage devices to the compute nodes based on a workload at the compute node.
    Type: Application
    Filed: November 7, 2017
    Publication date: June 7, 2018
    Inventors: Mark A. Schmisseur, Mohan J. Kumar, Balint Fleischer, Debendra Das Sharma, Raj Ramanujan
  • Publication number: 20180159722
    Abstract: Apparatus and methods to dynamically compose network resources are disclosed herein. In some embodiments, a network management fabric controller may include a module that, in response to a request for a network service, is to identify a child pool included in a particular network service pool, from among a plurality of network service pools associated with respective network services, that is capable of providing the network service, the child pool comprising identification of one or more particular ports of a particular compute node switch within the network; and another module that is to establish a connection between a compute component and the one or more particular ports of the particular compute node switch and between the one or more particular ports of the particular compute node switch and one or more particular ports of the main network switch in accordance with the particular network service pool.
    Type: Application
    Filed: December 6, 2016
    Publication date: June 7, 2018
    Inventors: Deepak Soma Reddy, Mrittika Ganguli, Mohan J. Kumar
  • Publication number: 20180150372
    Abstract: Technologies for generating manifest data for a sled include a sled to generate manifest data indicative of one or more characteristics of the sled (e.g., hardware resources, firmware resources, a configuration of the sled, or a health of sled components). The sled is also to associate an identifier with the manifest data. The identifier uniquely identifies the sled from other sleds. Additionally, the sled is to send the manifest data and the associated identifier to a server. The sled may also detect a change in the hardware resources, firmware resources, the configuration, or component health of the sled. The sled may also generate an update of the manifest data based on the detected change, where the update specifies the detected change in the hardware resources, firmware resources, the configuration, or component health of the sled. The sled may also send the update of the manifest data to the server.
    Type: Application
    Filed: November 29, 2017
    Publication date: May 31, 2018
    Inventors: Murugasamy K. Nachimuthu, Mohan J. Kumar, Alberto J. Munoz
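The manifest generation and change-only update flow described above can be sketched as follows; the manifest fields are illustrative assumptions:

```python
# Sketch: build sled manifest data keyed by a unique identifier, and
# emit an update containing only the fields that changed. Field names
# are hypothetical.
def make_manifest(sled_id, hardware, firmware, config):
    return {"id": sled_id, "hardware": hardware,
            "firmware": firmware, "config": config}

def manifest_update(old, new):
    """Return only the changed characteristics for the same sled."""
    assert old["id"] == new["id"]       # updates are per-sled
    return {k: new[k] for k in ("hardware", "firmware", "config")
            if old[k] != new[k]}
```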
  • Publication number: 20180150293
    Abstract: Technologies for lifecycle management include multiple computing devices in communication with a lifecycle management server. On boot, a computing device loads a lightweight firmware boot environment. The lightweight firmware boot environment connects to the lifecycle management server and downloads one or more firmware images for controllers of the computing device. The controllers may include baseboard management controllers, network interface controllers, solid-state drive controllers, or other controllers. The lifecycle management server may select firmware images and/or versions of firmware images based on the controllers or the computing device. The computing device installs each firmware image to a controller memory device coupled to a controller, and in use, each controller accesses the firmware image in the controller memory device. The controller memory device may be a DRAM device or a high-performance byte-addressable non-volatile memory. Other embodiments are described and claimed.
    Type: Application
    Filed: November 28, 2017
    Publication date: May 31, 2018
    Inventors: Murugasamy Nachimuthu, Mohan J. Kumar
  • Publication number: 20180143678
    Abstract: A non-volatile random access memory (NVRAM) is used in a computer system to enhance support for sleep states. The computer system includes a processor, a non-volatile random access memory (NVRAM) that is byte-rewritable and byte-erasable, and a power management (PM) module. A dynamic random access memory (DRAM) provides a portion of system address space. The PM module intercepts a request initiated by an operating system for entry into a sleep state, copies data from the DRAM to the NVRAM, maps the portion of the system address space from the DRAM to the NVRAM, and turns off the DRAM when transitioning into the sleep state. Upon occurrence of a wake event, the PM module returns control to the operating system such that the computer system resumes working state operations without the operating system knowing that the portion of the system address space has been mapped to the NVRAM.
    Type: Application
    Filed: November 27, 2017
    Publication date: May 24, 2018
    Inventors: Mohan J. Kumar, Murugasamy K. Nachimuthu
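The sleep-entry sequence this abstract describes (copy DRAM to NVRAM, remap, power off DRAM) can be sketched as below; the mapping model, a dict from region name to backing store, is an assumption for illustration:

```python
# Sketch of the sleep-entry flow: copy DRAM contents to NVRAM, repoint
# the address-space mapping at NVRAM, then clear DRAM so it can be
# powered off. The region-to-backing-store map is hypothetical.
def enter_sleep(address_map, dram, nvram):
    nvram.update(dram)                      # copy data DRAM -> NVRAM
    for region, backing in address_map.items():
        if backing == "dram":
            address_map[region] = "nvram"   # transparent remap
    dram.clear()                            # DRAM may now be turned off
    return address_map
```

On wake, the OS resumes with the remapped region still backed by NVRAM, which is what lets the transition stay invisible to it.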