Patents by Inventor Dejan S. Milojicic

Dejan S. Milojicic has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200097440
    Abstract: A method of computing in memory, the method including inputting a packet including data into a computing in memory unit having a control unit, loading the data into at least one computing in memory micro-unit, processing the data in the computing in memory micro-unit, and outputting the processed data. Also, a computing in memory system including a computing in memory unit having a control unit, wherein the computing in memory unit is configured to receive a packet having data, and a computing in memory micro-unit disposed in the computing in memory unit, the computing in memory micro-unit having at least one of a memory matrix and a logic elements matrix.
    Type: Application
    Filed: September 24, 2018
    Publication date: March 26, 2020
    Inventors: Dejan S. Milojicic, Kirk M. Bresniker, Paolo Faraboschi, John Paul Strachan
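    A minimal sketch of the packet flow this abstract describes, with data loaded into a micro-unit's memory matrix and a toy result returned; the class names (ComputeInMemoryUnit, MicroUnit) and the per-row-sum "processing" are illustrative assumptions, not the patent's design.

```python
class MicroUnit:
    """A compute-in-memory micro-unit holding a small memory matrix."""
    def __init__(self, rows, cols):
        self.memory = [[0] * cols for _ in range(rows)]

    def process(self, data):
        # Toy "processing": load the data into the memory matrix row by row,
        # then return a per-row sum as the computed result.
        for r, row in enumerate(data):
            self.memory[r][:len(row)] = row
        return [sum(row) for row in self.memory]

class ComputeInMemoryUnit:
    """Control unit that receives packets and dispatches them to micro-units."""
    def __init__(self, micro_units):
        self.micro_units = micro_units

    def input_packet(self, packet):
        unit = self.micro_units[packet["target"] % len(self.micro_units)]
        return unit.process(packet["data"])

cim = ComputeInMemoryUnit([MicroUnit(2, 2), MicroUnit(2, 2)])
print(cim.input_packet({"target": 0, "data": [[1, 2], [3, 4]]}))  # [3, 7]
```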
  • Patent number: 10592437
    Abstract: Memory blocks are associated with each memory level of a hierarchy of memory levels. Each memory block has a matching key capability (MaKC). The MaKC of a memory block governs access to the memory block, in accordance with permissions specified by the MaKC. The MaKC of a memory block can uniquely identify the memory block across the hierarchy of memory levels, and can be globally unique across the memory blocks. An MaKC of a memory block includes a block protection key (BPK) stored with the memory block, and an execution protection key (EPK). If a provided EPK for a memory block matches the memory block's BPK upon comparison, access to the memory block is allowed according to the permissions specified by the MaKC.
    Type: Grant
    Filed: July 31, 2017
    Date of Patent: March 17, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Geoffrey Ndu, Dejan S. Milojicic, Paolo Faraboschi, Chris I. Dalton
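    A minimal sketch of the matching key capability (MaKC) check described above: the execution protection key (EPK) presented with an access is compared against the block protection key (BPK) stored with the block, and permissions are enforced only on a match. The data structures and the constant-time comparison are assumptions for illustration.

```python
import hmac

class MemoryBlock:
    def __init__(self, bpk: bytes, permissions: set, data: bytearray):
        self.bpk = bpk              # block protection key, stored with the block
        self.permissions = permissions
        self.data = data

def access(block: MemoryBlock, epk: bytes, op: str):
    # Constant-time comparison of the provided EPK against the stored BPK.
    if not hmac.compare_digest(block.bpk, epk):
        raise PermissionError("EPK does not match the block's BPK")
    if op not in block.permissions:
        raise PermissionError(f"operation {op!r} not permitted by the MaKC")
    return block.data if op == "read" else None

blk = MemoryBlock(bpk=b"\x01" * 16, permissions={"read"}, data=bytearray(b"secret"))
print(access(blk, b"\x01" * 16, "read"))   # bytearray(b'secret')
```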
  • Patent number: 10592431
    Abstract: According to examples, an apparatus may include a processor to address a physical memory having memory sections, in which a first set of memory sections may be shared between processes and a second set of memory sections may be specific to an individual process. The apparatus may also include a shared virtual address space register to provide translation for the first set of memory sections shared between processes and a process virtual address space register to provide translation for the second set of memory sections specific to the individual process. The translation for the second set of memory sections may be independent from the translation for the first set of memory sections.
    Type: Grant
    Filed: August 13, 2018
    Date of Patent: March 17, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Izzat El Hajj, Alexander Marshall Merritt, Gerd Zellweger, Dejan S. Milojicic, Paolo Faraboschi
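    A toy sketch, under assumed section sizes and register layouts, of how a shared virtual address space register and a process virtual address space register could translate independently: shared sections resolve through one table, process-private sections through another.

```python
SECTION_BITS = 12            # 4 KiB sections in this toy model
SECTION_SIZE = 1 << SECTION_BITS

def translate(vaddr, shared_reg, process_reg, shared_limit):
    """Return a physical address for vaddr.

    Virtual sections below shared_limit are translated through shared_reg
    (common to all processes); the rest go through process_reg (per process),
    so the two translations remain independent of each other.
    """
    section = vaddr >> SECTION_BITS
    offset = vaddr & (SECTION_SIZE - 1)
    base = shared_reg if section < shared_limit else process_reg
    return base[section] * SECTION_SIZE + offset

shared = {0: 100, 1: 101}          # shared sections -> physical frames
private = {2: 500, 3: 501}         # process-private sections -> physical frames
print(hex(translate(0x2010, shared, private, shared_limit=2)))  # 0x1f4010
```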
  • Publication number: 20200073755
    Abstract: A computer system includes multiple memory array components that include respective analog memory arrays which are sequenced to implement a multi-layer process. An error array data structure is obtained for at least a first memory array component, and from which a determination is made as to whether individual nodes (or cells) of the error array data structure are significant. A determination can be made as to any remedial operations that can be performed to mitigate errors of significance.
    Type: Application
    Filed: August 28, 2018
    Publication date: March 5, 2020
    Inventors: John Paul Strachan, Catherine Graves, Dejan S. Milojicic, Paolo Faraboschi, Martin Foltin, Sergey Serebryakov
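    A hedged sketch of the error-significance pass: each cell of an error array is compared against a tolerance, and a remedial operation is planned for significant cells. The threshold value and the "reprogram" remedy are assumptions, not the patent's specifics.

```python
SIGNIFICANCE_THRESHOLD = 0.05   # assumed tolerance for per-cell error

def plan_remediation(error_array, threshold=SIGNIFICANCE_THRESHOLD):
    """Return (row, col, action) for every cell whose error is significant."""
    plan = []
    for r, row in enumerate(error_array):
        for c, err in enumerate(row):
            if abs(err) > threshold:
                # e.g. reprogram the cell, or move its value to a spare column
                plan.append((r, c, "reprogram"))
    return plan

errors = [[0.01, 0.12], [0.00, -0.07]]
print(plan_remediation(errors))   # [(0, 1, 'reprogram'), (1, 1, 'reprogram')]
```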
  • Publication number: 20200050493
    Abstract: Examples relate to firmware-based provisioning of hardware resources. In some of the examples, firmware discovers and takes ownership of a hardware resource. At this stage, the firmware performs a test to verify the hardware resource. The firmware then assigns the hardware resource to an OS instance. At this stage, the firmware can suspend assigning further hardware resources to the OS instance in response to a satisfied notification from the OS instance.
    Type: Application
    Filed: August 8, 2018
    Publication date: February 13, 2020
    Inventors: Dejan S. Milojicic, Derek Schumacher, Zhikui Wang
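    A schematic sketch of the provisioning loop described above: discover and take ownership, test, assign to an OS instance, and suspend further assignment once the instance reports it is satisfied. All class and attribute names are illustrative.

```python
def provision(resources, os_instance):
    for resource in resources:
        if os_instance.satisfied:
            break                      # suspend further assignment
        resource.owner = "firmware"    # take ownership
        if not resource.self_test():   # verify the resource
            continue
        os_instance.assign(resource)

class Resource:
    def __init__(self, name, healthy=True):
        self.name, self.healthy, self.owner = name, healthy, None
    def self_test(self):
        return self.healthy

class OSInstance:
    def __init__(self, wanted):
        self.wanted, self.assigned, self.satisfied = wanted, [], False
    def assign(self, resource):
        self.assigned.append(resource.name)
        self.satisfied = len(self.assigned) >= self.wanted

os1 = OSInstance(wanted=2)
provision([Resource("cpu0"), Resource("cpu1", healthy=False), Resource("mem0"),
           Resource("mem1")], os1)
print(os1.assigned)   # ['cpu0', 'mem0']
```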
  • Publication number: 20200050553
    Abstract: According to examples, an apparatus may include a processor to address a physical memory having memory sections, in which a first set of memory sections may be shared between processes and a second set of memory sections may be specific to an individual process. The apparatus may also include a shared virtual address space register to provide translation for the first set of memory sections shared between processes and a process virtual address space register to provide translation for the second set of memory sections specific to the individual process. The translation for the second set of memory sections may be independent from the translation for the first set of memory sections.
    Type: Application
    Filed: August 13, 2018
    Publication date: February 13, 2020
    Applicant: Hewlett Packard Enterprise Development LP
    Inventors: Izzat El Hajj, Alexander Marshall Merritt, Gerd Zellweger, Dejan S. Milojicic, Paolo Faraboschi
  • Publication number: 20200042287
    Abstract: Disclosed techniques provide for dynamically changing the precision of a multi-stage compute process, for example by changing neural network (NN) parameters on a per-layer basis depending on properties of incoming data streams and per-layer performance of an NN, among other considerations. NNs include multiple layers that may each be calculated with a different degree of accuracy and, therefore, a different compute resource overhead (e.g., memory, processor resources, etc.). NNs are usually trained with 32-bit or 16-bit floating-point numbers. Once trained, an NN may be deployed in production. One approach to reducing compute overhead is to reduce the parameter precision of NNs to 16 or 8 bits for deployment. The conversion to an acceptable lower precision is usually determined manually before deployment, and precision levels are fixed while deployed.
    Type: Application
    Filed: August 1, 2018
    Publication date: February 6, 2020
    Inventors: Sai Rahul Chalamalasetti, Paolo Faraboschi, Martin Foltin, Catherine Graves, Dejan S. Milojicic, Sergey Serebryakov, John Paul Strachan
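    A small sketch, under assumed accuracy-loss metrics, of choosing a per-layer precision: each layer gets the narrowest bit width whose estimated accuracy loss stays within a budget, rather than a single fixed precision for the whole NN.

```python
PRECISIONS = (8, 16, 32)     # candidate bit widths, narrowest first

def choose_precision(per_layer_loss, budget=0.01):
    """per_layer_loss maps bit width -> estimated accuracy loss for one layer."""
    for bits in PRECISIONS:
        if per_layer_loss[bits] <= budget:
            return bits
    return PRECISIONS[-1]

layers = [
    {8: 0.002, 16: 0.001, 32: 0.0},   # tolerant layer  -> 8-bit
    {8: 0.050, 16: 0.004, 32: 0.0},   # sensitive layer -> 16-bit
]
print([choose_precision(layer) for layer in layers])   # [8, 16]
```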
  • Patent number: 10545909
    Abstract: A system management command is stored in a management partition of a global memory by a first node of a multi-node computing system. The global memory is shared by each node of the multi-node computing system. In response to an indication to access the management partition, the system management command is accessed from the management partition by a second node of the multi-node computing system. The system management command is executed by the second node. Executing the system management command includes managing the second node.
    Type: Grant
    Filed: April 29, 2014
    Date of Patent: January 28, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Yuan Chen, Daniel Juergen Gmach, Dejan S. Milojicic, Vanish Talwar, Zhikui Wang
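    A toy sketch of the shared management partition: one node posts a system management command into a region of global memory, and a second node later reads and executes the command against itself. The command set and queue representation are assumptions.

```python
from collections import deque

management_partition = deque()        # stands in for a region of global memory

def post_command(command):            # executed on the first node
    management_partition.append(command)

class Node:
    def __init__(self, name):
        self.name, self.power = name, "on"

    def handle_management_partition(self):   # executed on the second node
        while management_partition:
            cmd = management_partition.popleft()
            if cmd == "power_off":
                self.power = "off"    # executing the command manages this node

post_command("power_off")
node_b = Node("node-b")
node_b.handle_management_partition()
print(node_b.power)    # off
```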
  • Patent number: 10540286
    Abstract: Systems and methods for dynamically modifying coherence domains are discussed herein. In various embodiments, a hardware controller may be provided that is configured to automatically recognize application behavior and dynamically reconfigure coherence domains in hardware and software to trade off performance for reliability and scalability. Modifying the coherence domains may comprise repartitioning the system based on cache coherence independently of one or more software layers of the system. Memory-driven algorithms may be invoked to determine one or more dynamic coherence domain operations to implement. In some embodiments, declarative policy statements may be received from a user via one or more interfaces associated with the controller. The controller may be configured to dynamically adjust cache coherence policy based on the declarative policy statements received from the user.
    Type: Grant
    Filed: April 30, 2018
    Date of Patent: January 21, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Dejan S. Milojicic, Keith Packard, Michael S. Woodacre, Andrew R. Wheeler
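    A speculative sketch of policy-driven repartitioning: a declarative policy statement selects a coherence-domain size, and the controller regroups nodes accordingly. The policy names and domain sizes are invented for illustration and are not the patent's mechanism.

```python
POLICIES = {"favor_performance": 8, "favor_reliability": 2}   # nodes per domain

def repartition(nodes, policy):
    """Split nodes into coherence domains of the size the policy prescribes."""
    size = POLICIES[policy]
    return [nodes[i:i + size] for i in range(0, len(nodes), size)]

nodes = [f"n{i}" for i in range(8)]
print(repartition(nodes, "favor_reliability"))
# [['n0', 'n1'], ['n2', 'n3'], ['n4', 'n5'], ['n6', 'n7']]
```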
  • Patent number: 10528752
    Abstract: Example implementations relate to non-volatile storage of management data. In example implementations, a system is disclosed, the system including a plurality of computing devices, a management device, and a non-volatile memory including a plurality of management spaces corresponding to the plurality of computing devices. In example implementations, at least one of the plurality of management spaces is to be accessible by the management device and by the corresponding computing device, be inaccessible by computing devices other than the corresponding computing device, and store management data associated with the corresponding computing device.
    Type: Grant
    Filed: August 13, 2014
    Date of Patent: January 7, 2020
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Dejan S. Milojicic, Chris I. Dalton, Zhikui Wang, Chandrasekar Venkatraman, Adrian Shaw
  • Publication number: 20190334771
    Abstract: Systems and methods for dynamically and programmatically controlling hardware and software to optimize bandwidth and latency across partitions in a computing system are discussed herein. In various embodiments, performance within a partitioned computing system may be monitored and used to automatically reconfigure the computing system to optimize aggregate bandwidth and latency. Reconfiguring the computing system may comprise reallocating hardware resources among partitions, programming firewalls to enable higher bandwidth for specific inter-partition traffic, switching programming models associated with individual partitions, starting additional instances of one or more applications running on the partitions, and/or one or more other operations to optimize the overall aggregate bandwidth and latency of the system.
    Type: Application
    Filed: April 30, 2018
    Publication date: October 31, 2019
    Inventors: Dejan S. Milojicic, Sharad Singhal, Andrew R. Wheeler, Michael S. Woodacre
  • Publication number: 20190332538
    Abstract: Systems and methods for dynamically modifying coherence domains are discussed herein. In various embodiments, a hardware controller may be provided that is configured to automatically recognize application behavior and dynamically reconfigure coherence domains in hardware and software to trade off performance for reliability and scalability. Modifying the coherence domains may comprise repartitioning the system based on cache coherence independently of one or more software layers of the system. Memory-driven algorithms may be invoked to determine one or more dynamic coherence domain operations to implement. In some embodiments, declarative policy statements may be received from a user via one or more interfaces associated with the controller. The controller may be configured to dynamically adjust cache coherence policy based on the declarative policy statements received from the user.
    Type: Application
    Filed: April 30, 2018
    Publication date: October 31, 2019
    Inventors: Dejan S. Milojicic, Keith Packard, Michael S. Woodacre, Andrew R. Wheeler
  • Publication number: 20190332441
    Abstract: A Neural Network (NN) scheduler and techniques to implement features of different possible NN schedulers are disclosed. In a first example, an NN scheduler is provided that accepts NN models in an interoperable format and performs optimizations on this interoperable format as part of converting it to a run-time format. In a second example, an NN scheduler analyzes operations and annotations associated with those operations to determine scheduling options based on hardware availability, data availability, hardware efficiency, processor affinity, etc. In a third example, an NN scheduler that may be integrated with a feedback loop to recognize actual run-time attributes may be used to "learn" and adapt, changing its future scheduling behavior. Each of these examples may be integrated individually, or together, to provide an NN scheduler that optimizes and adapts processing functions for an NN model either prior to processing or for just-in-time determination of operation scheduling.
    Type: Application
    Filed: April 26, 2018
    Publication date: October 31, 2019
    Inventors: Guilherme James De Angelis Fachini, Dejan S. Milojicic, Gustavo Henrique Rodrigues Pinto Tomas, Francisco Silveria
  • Patent number: 10461926
    Abstract: Example implementations relate to cryptographic evidence of persisted capabilities. In an example implementation, in response to a request to access a persisted capability stored in a globally shared memory, a system may decide whether to trust the persisted capability by verification of cryptographic evidence accompanying the persisted capability. The system may load the persisted capability upon a decision to trust the persisted capability based on successful verification.
    Type: Grant
    Filed: August 31, 2016
    Date of Patent: October 29, 2019
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Chris I. Dalton, Dejan S. Milojicic
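    A hedged sketch of verify-then-load for a persisted capability: the capability stored in shared memory carries cryptographic evidence that is checked before the capability is loaded. HMAC-SHA256 is used here only as a stand-in; the patent specifies verifiable cryptographic evidence, not a particular scheme.

```python
import hashlib
import hmac

TRUSTED_KEY = b"system-provisioning-key"     # illustrative trust anchor

def persist(capability: bytes, key: bytes = TRUSTED_KEY):
    # What lands in the globally shared memory: capability plus its evidence.
    tag = hmac.new(key, capability, hashlib.sha256).digest()
    return capability, tag

def load(persisted, key: bytes = TRUSTED_KEY):
    capability, tag = persisted
    expected = hmac.new(key, capability, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise PermissionError("capability evidence failed verification")
    return capability                        # verified: safe to load

print(load(persist(b"region:0x1000-0x2000,rw")))
```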
  • Patent number: 10452472
    Abstract: A dot-product engine (DPE) implemented on an integrated circuit as a crossbar array (CA) includes memory elements comprising a memristor and a transistor in series. A crossbar with N rows and M columns may have N×M memory elements. A vector input register provides N voltage inputs to the CA, and a vector output register receives M voltage outputs from the CA. An analog-to-digital converter (ADC) and/or a digital-to-analog converter (DAC) may be coupled to each input/output register. Values representing a first matrix may be stored in the CA. Voltages/currents representing a second matrix may be applied to the crossbar. Ohm's Law and Kirchhoff's Law may be used to determine values representing the dot-product as read from the crossbar. A portion of the crossbar may compute error-correcting codes (ECC) concurrently with calculating the dot-product results. ECC codes may be used only to indicate detection of errors, or for both detection and correction of results.
    Type: Grant
    Filed: June 4, 2018
    Date of Patent: October 22, 2019
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Catherine Graves, John Paul Strachan, Dejan S. Milojicic, Paolo Faraboschi, Martin Foltin, Sergey Serebryakov
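    A worked sketch of the crossbar arithmetic: per Ohm's Law each cell contributes conductance times input voltage, and per Kirchhoff's Current Law each column current is the sum of those contributions, which is exactly a matrix-vector dot product. The numeric values are arbitrary.

```python
def crossbar_dot_product(conductances, voltages):
    """conductances: N rows x M columns (siemens); voltages: N inputs (volts).
    Returns the M column currents (amperes), i.e. the dot products."""
    rows, cols = len(conductances), len(conductances[0])
    return [sum(conductances[r][c] * voltages[r] for r in range(rows))
            for c in range(cols)]

G = [[1.0, 2.0],      # 2 x 2 conductance matrix programmed into the crossbar
     [3.0, 4.0]]
V = [0.5, 1.0]        # input voltage vector applied to the rows
print(crossbar_dot_product(G, V))   # [3.5, 5.0]
```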
  • Publication number: 20190319838
    Abstract: Systems and methods for system reconfiguration of a computing system that includes a plurality of memory and computing resources may include: assigning a reconfiguration capability to a user, the reconfiguration capability granting the user a right to reconfigure at least one of the memory and computing resources in the computing system; a controller of the computing system receiving a reconfiguration request from a user for a requested system reconfiguration along with that user's reconfiguration capability; the controller of the computing system verifying that the user from which the reconfiguration request was received has the rights to make the requested system reconfiguration; and the controller of the computing system executing the requested system reconfiguration if the user has the rights to make the requested system reconfiguration.
    Type: Application
    Filed: April 17, 2018
    Publication date: October 17, 2019
    Inventor: Dejan S. Milojicic
  • Publication number: 20190303249
    Abstract: Example implementations relate to an apparatus to support providing a computing service to a client including transferring control between a primary data processing system and a secondary data processing system in response to an event; the primary data processing system comprising a processor and associated memory and the secondary data processing system comprising a processor and associated memory; the apparatus comprising: circuitry to identify restoration data; the restoration data comprising at least data associated with at least one predetermined type of memory operation of the memory associated with the primary data processing system, and circuitry to output any identified restoration data for storage in the memory associated with the processor of the secondary data processing system.
    Type: Application
    Filed: April 2, 2018
    Publication date: October 3, 2019
    Inventors: Dejan S. Milojicic, Keith Packard, Michael Woodacre, Andrew R. Wheeler
  • Patent number: 10387059
    Abstract: According to an example, memory-driven out-of-band (OOB) management may include OOB management of a computing node of a plurality of computing nodes. The OOB management may be executed independently of an OS of the computing node. A memory fabric may be used to provide for shared access to a plurality of non-volatile memory (NVM) nodes by the plurality of computing nodes.
    Type: Grant
    Filed: January 30, 2015
    Date of Patent: August 20, 2019
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Zhikui Wang, Andy Brown, Stephen B. Lyle, Dejan S. Milojicic, Chandrasekar Venkatraman
  • Patent number: 10387335
    Abstract: In one example, a processor sends a memory access request including a data capability and a handle which references a master capability. In response to receiving the memory access request, a memory controller checks whether the handle references a valid master capability and checks whether the data capability is within a scope of the master capability. In response to determining that the master capability is valid and the data capability is within the scope of the master capability, the memory controller returns a result of the memory access request to the processor.
    Type: Grant
    Filed: September 28, 2017
    Date of Patent: August 20, 2019
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Dejan S. Milojicic, Moritz Josef Hoffmann, Alexander Richardson, Qiong Cai
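    A minimal sketch of the two-level check described above: the memory controller resolves the handle to a master capability, confirms the master capability is valid, and confirms the data capability falls within its scope before completing the access. Field names and the table layout are assumptions.

```python
MASTER_CAPABILITIES = {
    7: {"valid": True, "base": 0x1000, "length": 0x1000, "perms": {"read"}},
}

def memory_access(handle, data_base, data_length, op):
    master = MASTER_CAPABILITIES.get(handle)
    if master is None or not master["valid"]:
        raise PermissionError("handle does not reference a valid master capability")
    in_scope = (data_base >= master["base"] and
                data_base + data_length <= master["base"] + master["length"])
    if not in_scope or op not in master["perms"]:
        raise PermissionError("data capability is outside the master capability's scope")
    return f"{op} of {data_length} bytes at {hex(data_base)} completed"

print(memory_access(handle=7, data_base=0x1800, data_length=0x100, op="read"))
```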
  • Patent number: 10372897
    Abstract: Example implementations relate to encrypted capabilities stored in global memory. For example, in an implementation, a capability protection system may store an encrypted capability into global memory, where the encrypted capability is encrypted based on a condition. The capability protection system may receive, from a node in communication with the global memory, a request to access the encrypted capability stored in the global memory. The capability protection system may provide to the node a decrypted form of the encrypted capability upon satisfaction of the condition by the node.
    Type: Grant
    Filed: October 20, 2016
    Date of Patent: August 6, 2019
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Chris I. Dalton, Dejan S. Milojicic
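    A hedged sketch of condition-gated capabilities: the capability sits in global memory only in encrypted form, and a decrypted copy is returned to a requesting node only once the node satisfies the condition. The XOR keystream cipher and string-equality condition check are placeholders for whatever encryption and condition the system actually uses.

```python
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    # Deterministic keystream derived from the condition-specific key.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(capability: bytes, condition: str) -> bytes:
    key = condition.encode()
    return bytes(a ^ b for a, b in zip(capability, _keystream(key, len(capability))))

def request_capability(encrypted: bytes, node_condition: str, required: str) -> bytes:
    if node_condition != required:            # condition not satisfied
        raise PermissionError("condition not satisfied; capability stays encrypted")
    key = required.encode()
    return bytes(a ^ b for a, b in zip(encrypted, _keystream(key, len(encrypted))))

blob = encrypt(b"region:0x4000,rw", condition="attested-node")
print(request_capability(blob, node_condition="attested-node", required="attested-node"))
```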