Patents by Inventor Felipe Huici

Felipe Huici has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240036908
    Abstract: A method for supporting memory deduplication for unikernel images includes aligning, by a memory aligner entity, memory pages of unikernel images such that a consistent memory alignment is generated across the unikernel images. A memory deduplication identifier entity generates a unique page identifier for a plurality of memory pages of the unikernel images. The memory deduplication identifier entity matches page identifiers of memory pages for a unikernel image, which is to be loaded into a physical memory, with page identifiers of memory pages that have already been loaded into the physical memory and provides matching information about the matching to a page merger entity. The page merger entity performs page merging based on the matching information provided by the memory deduplication identifier entity.
    Type: Application
    Filed: April 21, 2021
    Publication date: February 1, 2024
    Inventors: Felipe Huici, Giuseppe Siracusano, Davide Sanvito
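
    A minimal Python sketch of the workflow in the abstract above, not the patented mechanism: page-sized SHA-256 hashes stand in for the unique page identifiers, and a plain dictionary stands in for physical memory and the page merger. PAGE_SIZE and all function names are illustrative assumptions.

      import hashlib

      PAGE_SIZE = 4096  # assumed page granularity

      def align_image(image: bytes) -> bytes:
          # Memory aligner: pad the image so it spans whole pages (simplified).
          return image + b"\x00" * ((-len(image)) % PAGE_SIZE)

      def page_identifiers(image: bytes):
          # Deduplication identifier: a content hash per page of the aligned image.
          return [hashlib.sha256(image[i:i + PAGE_SIZE]).hexdigest()
                  for i in range(0, len(image), PAGE_SIZE)]

      def load_with_dedup(image: bytes, loaded: dict):
          # Match new page identifiers against pages already in "physical memory"
          # and merge duplicates instead of loading them again.
          frames = []
          for offset, page_id in zip(range(0, len(image), PAGE_SIZE), page_identifiers(image)):
              if page_id not in loaded:
                  loaded[page_id] = image[offset:offset + PAGE_SIZE]
              frames.append(loaded[page_id])
          return frames

      memory = {}
      load_with_dedup(align_image(b"unikernel-A" * 1000), memory)
      load_with_dedup(align_image(b"unikernel-A" * 1000), memory)
      print(len(memory), "unique pages backing two identical images")
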
  • Patent number: 11733989
    Abstract: Systems and methods for automatically generating a secure image with a reduced or minimal set of system calls (syscalls) required by an application to run. A method includes the steps of receiving as input a configuration file specifying one or more image parameters to vary; generating a set of one or more unikernel images, or experiment images, each unikernel image including a specification of how to build the image and how to run the image, each unikernel image based on one of the one or more image parameters; populating a run queue with the one or more unikernel images; and iteratively: executing each of the one or more unikernel images in a host virtual machine; and monitoring, at run-time, a usage of syscalls in the executing image to identify syscalls actually used at any point in time during the executing.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: August 22, 2023
    Assignee: NEC CORPORATION
    Inventors: Felipe Huici, Sharan Santhanam
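
    A schematic Python sketch of the loop the abstract above describes, with the build and VM execution stubbed out; the parameter names, build/run strings, and the returned syscall set are placeholders, not anything prescribed by the patent.

      from collections import deque
      from itertools import product

      # Hypothetical configuration file: image parameters to vary (names are made up).
      CONFIG = {"net_stack": ["lwip", "none"], "allocator": ["buddy", "tlsf"]}

      def generate_images(cfg):
          # One experiment image per combination of the parameters to vary, each with
          # a specification of how to build it and how to run it.
          keys = list(cfg)
          for values in product(*(cfg[k] for k in keys)):
              params = dict(zip(keys, values))
              yield {"build": " ".join(f"{k}={v}" for k, v in params.items()),
                     "run": "qemu-system-x86_64 -kernel image.bin"}

      def run_and_trace(image):
          # Stub: execute the image in a host VM and record the syscalls actually used
          # at run time (e.g. via a tracer); here it returns a fixed placeholder set.
          return {"read", "write", "mmap"}

      queue = deque(generate_images(CONFIG))   # populate the run queue
      used = set()
      while queue:                             # iteratively execute and monitor
          used |= run_and_trace(queue.popleft())
      print("syscalls observed across all experiment images:", sorted(used))
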
  • Publication number: 20220300266
    Abstract: Systems and methods for automatically generating a secure image with a reduced or minimal set of system calls (syscalls) required by an application to run. A method includes the steps of receiving as input a configuration file specifying one or more image parameters to vary; generating a set of one or more unikernel images, or experiment images, each unikernel image including a specification of how to build the image and how to run the image, each unikernel image based on one of the one or more image parameters; populating a run queue with the one or more unikernel images; and iteratively: executing each of the one or more unikernel images in a host virtual machine; and monitoring, at run-time, a usage of syscalls in the executing image to identify syscalls actually used at any point in time during the executing.
    Type: Application
    Filed: May 26, 2021
    Publication date: September 22, 2022
    Inventors: Felipe Huici, Sharan Santhanam
  • Publication number: 20220292013
    Abstract: A method searches and tests for performance optima in an operating system (OS) configuration space. The method includes generating a plurality of OS configurations. For at least a first OS configuration, of the generated OS configurations, the method further includes: fetching a plurality of OS modules based on the first OS configuration; building a first OS image from the fetched OS modules; and testing the first OS image to determine a first value of a performance metric.
    Type: Application
    Filed: June 16, 2021
    Publication date: September 15, 2022
    Inventors: Felipe Huici, Simon Kuenzer, Roberto Bifulco
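
    A toy Python sketch of the generate/fetch/build/test pipeline from the abstract above; the configuration space is exhaustively enumerated and every stage is stubbed, so the option names and the random "metric" are assumptions, not part of the patented method.

      import itertools, random

      # Illustrative OS configuration space: option name -> candidate values.
      SPACE = {"scheduler": ["coop", "preempt"], "net_buffers": [64, 512], "page_size": [4096, 2097152]}

      def generate_configs(space):
          keys = list(space)
          for values in itertools.product(*(space[k] for k in keys)):
              yield dict(zip(keys, values))

      def fetch_modules(cfg):
          # Stub: map a configuration onto the OS modules it needs.
          return [f"lib-{k}-{v}" for k, v in cfg.items()]

      def build_image(modules):
          # Stub: build an OS image from the fetched modules.
          return "+".join(modules)

      def test_image(image):
          # Stub: boot and benchmark the image, returning a performance metric value.
          return random.uniform(0, 100)

      best_cfg, best_metric = max(
          ((cfg, test_image(build_image(fetch_modules(cfg)))) for cfg in generate_configs(SPACE)),
          key=lambda pair: pair[1])
      print("best configuration:", best_cfg, "metric:", round(best_metric, 1))
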
  • Patent number: 11429855
    Abstract: A method for accelerating a neural network includes identifying neural network layers that meet a locality constraint. Code is generated to implement depth-first processing for different hardware based on the identified neural network layers. The generated code is then used to perform the depth-first processing on the neural network.
    Type: Grant
    Filed: February 6, 2018
    Date of Patent: August 30, 2022
    Assignee: NEC CORPORATION
    Inventors: Nicolas Weber, Felipe Huici, Mathias Niepert
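
    An illustrative Python/NumPy sketch of depth-first processing over layers that satisfy a locality constraint; the toy "layers", the tile size, and the notion of locality used here are simplifying assumptions (the patent generates code for different hardware rather than interpreting layers like this).

      import numpy as np

      # Toy "network": element-wise layers satisfy the locality constraint, the reduction does not.
      LAYERS = [("relu", None), ("scale", 2.0), ("sum", None)]

      def is_local(layer):
          return layer[0] in ("relu", "scale")   # output element depends only on its input element

      def apply_layer(layer, x):
          op, arg = layer
          if op == "relu":
              return np.maximum(x, 0.0)
          if op == "scale":
              return x * arg
          return x.sum()                          # "sum" breaks locality

      def depth_first(x, layers, tile=4):
          # Run each tile through the whole block of local layers before the next tile
          # (small intermediates), then process the remaining layers layer-by-layer.
          n_local = 0
          while n_local < len(layers) and is_local(layers[n_local]):
              n_local += 1
          out = np.empty_like(x)
          for i in range(0, x.size, tile):        # depth-first over the identified block
              t = x[i:i + tile]
              for layer in layers[:n_local]:
                  t = apply_layer(layer, t)
              out[i:i + tile] = t
          for layer in layers[n_local:]:          # remaining (non-local) layers
              out = apply_layer(layer, out)
          return out

      print(depth_first(np.arange(-4.0, 4.0), LAYERS))   # equals sum(2 * relu(x))
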
  • Publication number: 20210049469
    Abstract: A method of memory remapping for utilizing dense neural network computations with a sparse neural network includes the step of densifying the sparse neural network. The input and output data is remapped onto the densified neural network. The dense neural network computations are utilized for a prediction using the remapped input and output data.
    Type: Application
    Filed: August 16, 2019
    Publication date: February 18, 2021
    Inventors: Nicolas Weber, Felipe Huici
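
    A small NumPy sketch of the remapping idea in the abstract above, assuming a single fully connected layer: all-zero rows and columns are dropped, inputs and outputs are remapped through index arrays, and the prediction runs as a dense computation. It is a sketch of the general idea, not the patented remapping scheme.

      import numpy as np

      def densify(weights):
          # Drop all-zero rows/columns from a sparse weight matrix and remember the mapping.
          in_map = np.where(weights.any(axis=0))[0]      # input indices that are actually used
          out_map = np.where(weights.any(axis=1))[0]     # output indices that are actually produced
          return weights[np.ix_(out_map, in_map)], in_map, out_map

      def predict(dense_w, in_map, out_map, x, n_out):
          # Remap the input onto the densified network, run the dense computation,
          # then scatter the result back to the original output layout.
          y = np.zeros(n_out)
          y[out_map] = dense_w @ x[in_map]               # dense matmul on the compacted weights
          return y

      # Sparse 4x5 layer with only two active rows and three active columns.
      W = np.zeros((4, 5))
      W[1, [0, 2]] = [1.0, -2.0]
      W[3, 4] = 0.5
      dense_w, in_map, out_map = densify(W)
      x = np.arange(5.0)
      assert np.allclose(predict(dense_w, in_map, out_map, x, 4), W @ x)
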
  • Patent number: 10817402
    Abstract: Methods and systems for building an optimized image for an application are provided. An operating system is decomposed into granular modules. An initial configuration file is provided to a build system. The build system builds an initial image including initial modules for the application based on the initial configuration file. A monitoring system monitors performance indicators for the initial image. Using a machine learning algorithm, a subsequent configuration file based on the performance indicators is derived. The build system builds a subsequent image for the application.
    Type: Grant
    Filed: January 3, 2018
    Date of Patent: October 27, 2020
    Assignee: NEC CORPORATION
    Inventors: Felipe Huici, Simon Kuenzer
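
    A minimal Python sketch of the build/monitor/re-configure loop described above; a random single-module flip with hill climbing stands in for the machine learning algorithm, and the module names and scoring model are invented for illustration.

      import random

      MODULES = ["net_stack", "filesystem", "debug", "tracing", "full_libc"]

      def build_image(config):
          # Stub for the build system: assemble an image from the enabled granular modules.
          return frozenset(m for m, on in config.items() if on)

      def monitor(image):
          # Stub for the monitoring system: toy performance indicator
          # (fewer modules -> smaller image -> higher score), plus measurement noise.
          return 100 - 10 * len(image) + random.uniform(-1, 1)

      # Initial configuration file: everything enabled.
      config = {m: True for m in MODULES}
      best_score = monitor(build_image(config))

      for _ in range(50):
          # Derive a subsequent configuration; a random single-module flip stands in
          # for the learning step that derives the next configuration file.
          candidate = dict(config)
          flip = random.choice(MODULES)
          candidate[flip] = not candidate[flip]
          score = monitor(build_image(candidate))
          if score > best_score:                 # keep configurations that improve the indicator
              config, best_score = candidate, score

      print("final configuration:", config, "score:", round(best_score, 1))
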
  • Patent number: 10579494
    Abstract: A method for monitoring resources in a computing system having system information includes transforming, via representation learning, variable-size information into fixed-size information, and creating a machine learning neural network model and training the machine learning model to predict future resource usage of an application. The method further includes providing the prediction of future resource usage of the application as an input to an action component, wherein the action component is one of an anomaly detector or a reinforcement learner that drives a scheduler. The method additionally includes performing, by the action component, at least one of scheduling resources within the computing system or detecting a resource usage anomaly.
    Type: Grant
    Filed: September 18, 2018
    Date of Patent: March 3, 2020
    Assignee: NEC CORPORATION
    Inventors: Florian Schmidt, Mathias Niepert, Felipe Huici
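
    A compact NumPy sketch of the pipeline in the abstract above, under strong simplifications: mean/max/count pooling stands in for representation learning, least squares stands in for the neural network model, and a fixed threshold stands in for the anomaly-detector action component. All data here is synthetic.

      import numpy as np

      def to_fixed_size(samples):
          # Stand-in for representation learning: pool a variable-length list of
          # per-process CPU readings into a fixed-size vector (mean, max, count).
          a = np.asarray(samples, dtype=float)
          return np.array([a.mean(), a.max(), float(a.size)])

      # Variable-size system snapshots (one CPU reading per running process) and the
      # application's resource usage observed one step later (synthetic data).
      history = [[10, 20], [15, 25, 30], [40, 41, 39, 42], [5]]
      future_usage = np.array([22.0, 31.0, 45.0, 8.0])

      X = np.stack([to_fixed_size(s) for s in history])
      w, *_ = np.linalg.lstsq(X, future_usage, rcond=None)   # tiny model instead of a neural network

      def predict(snapshot):
          return float(to_fixed_size(snapshot) @ w)

      # Action component: a threshold-based anomaly detector driven by the prediction.
      snapshot, observed = [38, 44, 40], 90.0
      predicted = predict(snapshot)
      if abs(observed - predicted) > 3 * np.std(future_usage - X @ w) + 1.0:
          print(f"anomaly: observed {observed}, predicted {predicted:.1f}")
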
  • Patent number: 10579412
    Abstract: A method for operating virtual machines on a virtualization platform includes: embedding control information in a predetermined memory area of a front-end virtual machine where at least one virtual device is to be initialized, the control information being required for initiating a communication with a back-end virtual machine where at least one back-end driver runs; retrieving, by the front-end virtual machine, the control information from the predetermined memory area of the front-end virtual machine; and performing the communication between the front-end virtual machine and the back-end virtual machine via a direct communication channel to exchange information for initializing the at least one virtual device of the front-end virtual machine, by communicating with the at least one back-end driver via the direct communication channel. The direct communication channel is established based on the control information embedded in the predetermined memory area of the front-end virtual machine.
    Type: Grant
    Filed: April 7, 2017
    Date of Patent: March 3, 2020
    Assignee: NEC CORPORATION
    Inventors: Filipe Manco, Simon Kuenzer, Florian Schmidt, Felipe Huici
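
    A toy Python model of the control flow described above, nothing more: Python queues stand in for the shared-memory channel, dictionaries for the reserved memory area, and threads for the two virtual machines. Names such as vif0 and virtual-nic are invented.

      import queue, threading

      CHANNELS = {}                                # channel id -> queue (stand-in for shared memory)

      class FrontEndVM:
          def __init__(self):
              self.reserved_page = {}              # the predetermined memory area

          def init_virtual_device(self):
              info = self.reserved_page["control"]     # retrieve the embedded control information
              chan = CHANNELS[info["channel_id"]]      # direct channel to the back-end
              chan.put(("init", info["device"]))       # talk to the back-end driver directly
              return chan.get()

      class BackEndVM:
          def serve(self, chan):
              msg, device = chan.get()
              chan.put(f"{device} initialized ({msg} acknowledged by back-end driver)")

      def toolstack_setup(front, channel_id="vif0"):
          # "Toolstack": create the channel and embed control information in the
          # front-end's reserved memory area before the device is initialized.
          CHANNELS[channel_id] = queue.Queue()
          front.reserved_page["control"] = {"channel_id": channel_id, "device": "virtual-nic"}

      front, back = FrontEndVM(), BackEndVM()
      toolstack_setup(front)
      threading.Thread(target=back.serve, args=(CHANNELS["vif0"],), daemon=True).start()
      print(front.init_virtual_device())
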
  • Publication number: 20190392300
    Abstract: A method for processing a neural network includes performing a decompression step before executing operations associated with a block of layers of the neural network, performing a compression step after executing operations associated with the block of layers of a neural network, gathering performance indicators for the executing the operations associated with the block of layers of the neural network, and determining whether target performance metrics have been met with a compression format used for at least one of the decompression step and the compression step.
    Type: Application
    Filed: June 20, 2018
    Publication date: December 26, 2019
    Inventors: Nicolas Weber, Felipe Huici, Mathias Niepert
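
    A rough Python sketch of the decompress/run/recompress loop with performance feedback; zlib compression levels stand in for different compression formats, np.tanh for the block of layers, and the 5 ms target is arbitrary. It only illustrates the control loop, not the patented scheme.

      import time, zlib
      import numpy as np

      TARGET_MS = 5.0                                    # illustrative target performance metric

      def run_block(x):
          # Stand-in for executing the block of layers.
          return np.tanh(x) * 2.0

      def process_block(compressed, fmt):
          # Decompress -> run block -> recompress, timing the whole step.
          t0 = time.perf_counter()
          x = np.frombuffer(zlib.decompress(compressed), dtype=np.float32)
          y = run_block(x).astype(np.float32)
          out = zlib.compress(y.tobytes(), level=fmt)
          return out, (time.perf_counter() - t0) * 1000

      activations = zlib.compress(np.random.rand(1 << 18).astype(np.float32).tobytes(), level=6)
      fmt = 9                                            # start with the densest/slowest format
      for step in range(3):                              # recycle the same data to keep the demo short
          activations, ms = process_block(activations, fmt)
          print(f"format level {fmt}: {ms:.1f} ms")
          if ms > TARGET_MS and fmt > 1:                 # target metric missed: pick a cheaper format
              fmt -= 4
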
  • Publication number: 20190258503
    Abstract: A method for operating virtual machines on a virtualization platform, the method comprising: embedding, preferably by a toolstack of the virtualization platform, control information in a predetermined memory area of a front-end virtual machine where at least one virtual device is to be initialized, wherein said control information is required for initiating a communication with a back-end virtual machine where at least one back-end driver runs; retrieving, by the front-end virtual machine, said control information from said predetermined memory area of the front-end virtual machine; and performing the communication between the front-end virtual machine and the back-end virtual machine via a direct communication channel in order to exchange information for initializing said at least one virtual device of the front-end virtual machine, in particular by communicating with said at least one back-end driver via said direct communication channel, wherein said direct communication channel is established based on said control information embedded in said predetermined memory area of the front-end virtual machine.
    Type: Application
    Filed: April 7, 2017
    Publication date: August 22, 2019
    Inventors: Filipe Manco, Simon Kuenzer, Florian Schmidt, Felipe Huici
  • Publication number: 20190244091
    Abstract: A method for accelerating a neural network includes identifying neural network layers that meet a locality constraint. Code is generated to implement depth-first processing for different hardware based on the identified neural network layers. The generated code is then used to perform the depth-first processing on the neural network.
    Type: Application
    Filed: February 6, 2018
    Publication date: August 8, 2019
    Inventors: Nicolas Weber, Felipe Huici, Mathias Niepert
  • Publication number: 20190213099
    Abstract: A method for monitoring resources in a computing system having system information includes transforming, via representation learning, variable-size information into fixed-size information, and creating a machine learning neural network model and training the machine learning model to predict future resource usage of an application. The method further includes providing the prediction of future resource usage of the application as an input to an action component, wherein the action component is one of an anomaly detector or a reinforcement learner that drives a scheduler. The method additionally includes performing, by the action component, at least one of scheduling resources within the computing system or detecting a resource usage anomaly.
    Type: Application
    Filed: September 18, 2018
    Publication date: July 11, 2019
    Inventors: Florian Schmidt, Mathias Niepert, Felipe Huici
  • Publication number: 20190205241
    Abstract: Methods and systems for building an optimized image for an application are provided. An operating system is decomposed into granular modules. An initial configuration file is provided to a build system. The build system builds an initial image including initial modules for the application based on the initial configuration file. A monitoring system monitors performance indicators for the initial image. Using a machine learning algorithm, a subsequent configuration file based on the performance indicators is derived. The build system builds a subsequent image for the application.
    Type: Application
    Filed: January 3, 2018
    Publication date: July 4, 2019
    Inventors: Felipe Huici, Simon Kuenzer
  • Patent number: 9648126
    Abstract: A method for caching objects at one or more cache servers of a content delivery network (CDN) includes: determining, by a processor, attributes of objects of a set of objects; calculating, by the processor, an efficiency metric for each object of the set of objects based on the attributes of each object, wherein the attributes of each object include an expected future popularity associated with the object; selecting, by the processor, a subset of objects from the set of objects for caching based on calculated efficiency metrics; and caching the subset of objects at the one or more cache servers.
    Type: Grant
    Filed: April 25, 2014
    Date of Patent: May 9, 2017
    Assignee: NEC CORPORATION
    Inventors: Felipe Huici, Mohamed Ahmed, Sofia Nikitaki, Saverio Niccolini
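
    A minimal Python sketch of the selection step: score each object by an efficiency metric built from its attributes (the popularity-per-byte ratio used here is an assumption, not the patented metric), then cache the best-scoring subset that fits the servers.

      objects = [
          {"id": "video.mp4", "size": 800, "expected_popularity": 50},
          {"id": "logo.png",  "size": 5,   "expected_popularity": 200},
          {"id": "page.html", "size": 20,  "expected_popularity": 120},
          {"id": "iso.img",   "size": 900, "expected_popularity": 3},
      ]

      def efficiency(obj):
          # Expected future hits saved per cached byte.
          return obj["expected_popularity"] / obj["size"]

      def select_for_cache(objs, capacity):
          chosen, used = [], 0
          for obj in sorted(objs, key=efficiency, reverse=True):
              if used + obj["size"] <= capacity:
                  chosen.append(obj["id"])
                  used += obj["size"]
          return chosen

      print(select_for_cache(objects, capacity=850))    # small, popular objects are cached first
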
  • Patent number: 9338075
    Abstract: For providing a simple monitoring mechanism with reduced resource and performance requirements, a method for monitoring traffic in a network is claimed, wherein a monitoring activity of at least two monitoring probes of the network is coordinated by a coordinating element, wherein at least two nodes of the network are able to operate as coordinating elements and wherein the responsibility for coordinating the monitoring activity of the monitoring probes is split between the nodes according to a compressed representation of flow parameter keys. Further, a corresponding network is described, preferably for carrying out the above-mentioned method.
    Type: Grant
    Filed: October 9, 2009
    Date of Patent: May 10, 2016
    Assignee: NEC EUROPE LTD.
    Inventors: Andrea Di Pietro, Felipe Huici, Saverio Niccolini
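
    A small Python sketch of how coordination responsibility might be split between nodes; hashing the flow key stands in for the compressed representation of flow parameter keys, and the node names and flow tuples are invented.

      import hashlib

      COORDINATORS = ["nodeA", "nodeB", "nodeC"]       # nodes able to act as coordinating elements

      def flow_key(src, dst, dport):
          return f"{src}->{dst}:{dport}"

      def responsible_coordinator(key):
          # Split coordination responsibility by a compact hash of the flow key.
          digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
          return COORDINATORS[digest % len(COORDINATORS)]

      # Each monitoring probe forwards its flow records to the coordinator owning that key,
      # so no single node has to coordinate every probe for every flow.
      flows = [("10.0.0.1", "10.0.0.9", 80), ("10.0.0.2", "10.0.0.9", 443), ("10.0.0.3", "10.0.0.7", 53)]
      for f in flows:
          print(flow_key(*f), "->", responsible_coordinator(flow_key(*f)))
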
  • Patent number: 9305265
    Abstract: A method for probabilistic processing of data, wherein the data is provided in form of a data set S composed of multidimensional n-tuples of the form (x1, ..., xn), is characterized in that an n-dimensional data structure is generated by way of providing a bit matrix, providing a number K of independent hash functions Hk that are employed in order to address the bits in the matrix, and inserting the n-tuples (x1, ..., xn) into the bit matrix by computing the hash values Hk(x) for all values x of the n-tuple for each of the number K of independent hash functions Hk, and by setting the resulting bits [Hk(x1), ..., Hk(xn)] of the matrix. Furthermore, a respective system is disclosed.
    Type: Grant
    Filed: September 29, 2010
    Date of Patent: April 5, 2016
    Assignee: NEC EUROPE LTD.
    Inventors: Andrea Di Pietro, Felipe Huici, Saverio Niccolini
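
    A Python sketch of one possible reading of the abstract above: for each of K hash functions, the bit at coordinates (Hk(x1), ..., Hk(xn)) of an n-dimensional bit array is set, giving a Bloom-filter-like membership test over n-tuples. The matrix size, the salted SHA-1 hashes, and this interpretation are all assumptions.

      import hashlib
      import numpy as np

      M, K, N = 64, 3, 2                  # matrix side, number of hash functions, tuple arity

      def h(k, value):
          # k-th "independent" hash function (salted SHA-1, an illustrative choice).
          return int(hashlib.sha1(f"{k}:{value}".encode()).hexdigest(), 16) % M

      class TupleFilter:
          # Probabilistic membership structure for n-tuples.
          def __init__(self):
              self.bits = np.zeros((M,) * N, dtype=bool)

          def insert(self, tup):
              for k in range(K):
                  self.bits[tuple(h(k, x) for x in tup)] = True

          def __contains__(self, tup):
              return all(self.bits[tuple(h(k, x) for x in tup)] for k in range(K))

      f = TupleFilter()
      f.insert(("10.0.0.1", 443))
      print(("10.0.0.1", 443) in f, ("10.0.0.2", 80) in f)   # True, (almost certainly) False
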
  • Patent number: 9253098
    Abstract: For allowing a very flexible scheduling of data flows within an OpenFlow (OF) switch, a method for operating an OpenFlow switch within a network includes using the OpenFlow switch to direct arriving data flows out of different ports of the OpenFlow switch. The method is characterized in that a scheduling mechanism for performing at least one scheduling task is assigned to the OpenFlow switch, wherein a metric will be used to assign weights to the arriving data flows and wherein the data flows will then be scheduled based on the assigned weights and based on a scheduling policy. Further, a corresponding OpenFlow switch and a corresponding network are described, preferably for carrying out the above-mentioned method.
    Type: Grant
    Filed: March 7, 2011
    Date of Patent: February 2, 2016
    Assignee: NEC EUROPE LTD.
    Inventors: Felipe Huici, Mohamed Ahmed, Saverio Niccolini
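
    An illustrative Python sketch of weight-based flow scheduling: a configured priority stands in for the metric, and weighted fair queueing via virtual finish times stands in for the scheduling policy (the abstract only specifies that weights plus a policy drive the scheduling, not which policy).

      import heapq

      def metric(flow):
          # Illustrative metric for assigning a weight to an arriving flow.
          return flow["priority"]

      def schedule(flows, rounds=6):
          # Serve each flow in proportion to its weight using virtual finish times.
          heap = [(1.0 / metric(f), f["name"], f) for f in flows]
          heapq.heapify(heap)
          order = []
          for _ in range(rounds):
              vtime, name, f = heapq.heappop(heap)
              order.append((name, f["out_port"]))                 # direct the flow out of its port
              heapq.heappush(heap, (vtime + 1.0 / metric(f), name, f))
          return order

      flows = [{"name": "voip", "priority": 4, "out_port": 1},
               {"name": "bulk", "priority": 1, "out_port": 2}]
      print(schedule(flows))    # "voip" is served roughly four times as often as "bulk"
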
  • Publication number: 20150370490
    Abstract: A method for caching using a solid-state drive (SSD)-based cache includes: determining a set of potential objects for storage at the SSD-based cache; ranking the potential objects for storage based on expected utility values corresponding to each potential object for storage; selecting objects for storage from the potential objects for storage based on the ranking; and causing the selected objects to be written to the SSD-based cache. Further, a reserve capacity for the SSD-based cache may be dynamically adjusted based on the write speed associated with an object being written to the SSD-based cache.
    Type: Application
    Filed: June 24, 2014
    Publication date: December 24, 2015
    Inventors: Felipe Huici, Mohamed Ahmed, Saverio Niccolini
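
    A minimal Python sketch of the admission step and the reserve-capacity adjustment; the utility function, the write-speed stub, and the adjustment thresholds are invented for illustration and are not the patented policy.

      CACHE_SIZE = 100
      reserve = 10                                    # spare capacity kept free for incoming writes

      def expected_utility(obj):
          # Illustrative utility: expected hits weighed against the cost of writing the object.
          return obj["expected_hits"] / obj["size"]

      def write_to_ssd(obj):
          # Stub for the SSD write; returns an observed write speed in MB/s.
          return 180.0 if obj["size"] < 30 else 90.0

      def admit(candidates):
          global reserve
          used = 0
          for obj in sorted(candidates, key=expected_utility, reverse=True):
              if used + obj["size"] > CACHE_SIZE - reserve:
                  break                               # keep the reserve free
              speed = write_to_ssd(obj)
              used += obj["size"]
              # Dynamically adjust the reserve: slow writes suggest the drive needs more headroom.
              reserve = min(40, reserve + 5) if speed < 120 else max(5, reserve - 1)
          return used, reserve

      candidates = [{"id": i, "size": s, "expected_hits": h}
                    for i, (s, h) in enumerate([(50, 40), (20, 90), (25, 30), (10, 80)])]
      print(admit(candidates))
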
  • Publication number: 20150312367
    Abstract: A method for caching objects at one or more cache servers of a content delivery network (CDN) includes: determining, by a processor, attributes of objects of a set of objects; calculating, by the processor, an efficiency metric for each object of the set of objects based on the attributes of each object, wherein the attributes of each object include an expected future popularity associated with the object; selecting, by the processor, a subset of objects from the set of objects for caching based on calculated efficiency metrics; and caching the subset of objects at the one or more cache servers.
    Type: Application
    Filed: April 25, 2014
    Publication date: October 29, 2015
    Inventors: Felipe Huici, Mohamed Ahmed, Sofia Nikitaki, Saverio Niccolini