Patents by Inventor Paulo Abelha Ferreira

Paulo Abelha Ferreira has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11521017
    Abstract: A prediction manager for providing responsiveness predictions for deployments includes persistent storage and a predictor. The persistent storage stores training data and conditioned training data. The predictor is programmed to obtain training data based on: a configuration of at least one deployment of the deployments, and a measured responsiveness of the at least one deployment, perform a peak extraction analysis on the measured responsiveness to obtain conditioned training data, obtain a prediction model using: the training data, and a first untrained prediction model, obtain a confidence prediction model using: the conditioned training data, and a second untrained prediction model, obtain a combined prediction using: the prediction model, and the confidence prediction model, and perform, based on the combined prediction, an action set to prevent a responsiveness failure.
    Type: Grant
    Filed: April 27, 2020
    Date of Patent: December 6, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Paulo Abelha Ferreira, Adriana Bechara Prado, Pablo Nascimento da Silva
  • Publication number: 20220383184
    Abstract: Techniques described herein relate to a method for model updating based on maximal cliques. The method may include transmitting, by a model coordinator, a probability distribution request signal to a plurality of edge nodes; receiving, by the model coordinator, a separate feature probability distribution from each of the plurality of edge nodes; executing, by the model coordinator, a maximal clique identification algorithm using the feature probability distributions to obtain a plurality of maximal cliques; selecting, by the model coordinator, a representative edge node from each of the plurality of maximal cliques to obtain a plurality of representative edge nodes; transmitting, by the model coordinator, a feature data request signal to each of the plurality of representative edge nodes; receiving, by the model coordinator, feature data from each of the plurality of representative edge nodes; and performing machine learning (ML) model training using a first portion of the feature data.
    Type: Application
    Filed: May 28, 2021
    Publication date: December 1, 2022
    Inventors: Paulo Abelha Ferreira, Pablo Nascimento da Silva, Vinicius Michel Gottin
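The clique-based selection step described in this abstract can be illustrated with a short sketch. This is not the patented method: the total-variation distance, the similarity threshold, and the `representatives` helper are all illustrative assumptions; only the overall shape (link nodes with similar feature distributions, enumerate maximal cliques, keep one representative per clique) follows the abstract.

```python
from itertools import combinations

def tv_distance(p, q):
    """Total variation distance between two discrete feature distributions."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def bron_kerbosch(r, p, x, adj, out):
    """Enumerate maximal cliques of the similarity graph (classic Bron-Kerbosch)."""
    if not p and not x:
        out.append(sorted(r))
        return
    for v in list(p):
        bron_kerbosch(r | {v}, p & adj[v], x & adj[v], adj, out)
        p.remove(v)
        x.add(v)

def representatives(dists, threshold=0.1):
    """Link edge nodes whose feature distributions are within `threshold`,
    find maximal cliques, and keep one representative node per clique."""
    nodes = list(dists)
    adj = {n: set() for n in nodes}
    for a, b in combinations(nodes, 2):
        if tv_distance(dists[a], dists[b]) <= threshold:
            adj[a].add(b)
            adj[b].add(a)
    cliques = []
    bron_kerbosch(set(), set(nodes), set(), adj, cliques)
    return [c[0] for c in cliques]  # first node of each clique as representative
```

The model coordinator would then request full feature data only from the returned representatives, rather than from every edge node.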
  • Patent number: 11513961
    Abstract: A method and system for assessing sequentiality of a data stream are disclosed. Specifically, the method and system disclosed herein may entail receiving an incoming request to access a page in a cache memory, wherein the page is identified by a page address of an address space in a main memory; identifying, in a memory, a bin corresponding to an address range including the page address of the page of the incoming request, wherein the bin includes k address ranges of the address space of the main memory; determining whether to update an occupation count of the bin in the memory; locating the bin in a heuristics table to obtain an estimated total number of expected proximal accesses based on an updated occupation count of the bin; and determining, based on the estimated total number of expected proximal accesses, the sequentiality of the data stream in order to generate a policy for the cache memory.
    Type: Grant
    Filed: January 28, 2021
    Date of Patent: November 29, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Vinicius Michel Gottin, Tiago Salviano Calmon, Paulo Abelha Ferreira, Hugo de Oliveira Barbalho, Rômulo Teixeira de Abreu Pinho
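The bin-and-heuristics flow in this abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the claimed method: the bin width, page size, heuristics-table values, and the `SequentialityTracker` class are all hypothetical.

```python
class SequentialityTracker:
    """Hypothetical bin-based sequentiality tracker: each bin covers k
    consecutive page-sized address ranges of the main-memory address space."""

    def __init__(self, k=8, page_size=4096, heuristics=None):
        self.k = k
        self.page_size = page_size
        self.counts = {}                   # bin id -> occupation count
        # heuristics table: occupation count -> estimated proximal accesses
        self.heuristics = heuristics or {}

    def access(self, page_address):
        """Map the page address to its bin, update the occupation count,
        and look up the estimated number of expected proximal accesses."""
        bin_id = (page_address // self.page_size) // self.k
        count = self.counts.get(bin_id, 0) + 1
        self.counts[bin_id] = count
        return self.heuristics.get(count, 0)

    def is_sequential(self, page_address, threshold=4):
        """Classify the stream as sequential when the estimate crosses a
        threshold, e.g. to select a prefetch-friendly cache policy."""
        return self.access(page_address) >= threshold
```

A caching subsystem could consult `is_sequential` on each request to decide between, say, a recency-based and a prefetching cache policy.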
  • Patent number: 11455556
    Abstract: A deployment manager includes storage for storing a prediction model based on telemetry data from the deployments and a prediction manager. The prediction manager generates, using the prediction model and second telemetry data obtained from a deployment of the deployments: a prediction, and a prediction error estimate; in response to a determination that the prediction indicates a negative impact on the deployment: generates a confidence estimation for the prediction based on a variability of the second telemetry data from the telemetry data; in response to a second determination that the confidence estimation indicates that the prediction error estimate is inaccurate: remediates the prediction based on the variability to obtain an updated prediction; and performs an action set, based on the updated prediction, to reduce an impact of the negative impact on the deployment.
    Type: Grant
    Filed: April 27, 2020
    Date of Patent: September 27, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Paulo Abelha Ferreira, Adriana Bechara Prado, Pablo Nascimento da Silva
  • Publication number: 20220237338
    Abstract: A system and method for implementing design cycles for developing a hardware component, including receiving sets of experimental data, each set of experimental data resulting from an application of a set of variables to the hardware component during a common or a different design cycle of the hardware component, where each variable represents an aspect of the hardware component; determining discretized classes of the experimental data based on one or more quality metrics; and obtaining statistical measurements of the variables to determine correlations between the discretized classes of the quality metrics and the statistical measurements of the variables, for determining a pattern of the quality metrics that reduces the number of design cycles implemented during development of the hardware component.
    Type: Application
    Filed: January 28, 2021
    Publication date: July 28, 2022
    Inventors: Paulo Abelha Ferreira, Adriana Bechara Prado, Jonas Furtado Dias
  • Publication number: 20220237211
    Abstract: An information handling system for managing detection of objects includes a storage and a processor. The storage is for storing an encoder; a critical class classifier; a general classifier; and a decoder. The processor obtains data that may include one or more of the objects; encodes the data using the encoder to obtain encoded data; obtains a critical class classification for the encoded data using the critical class classifier; obtains a general classification for the encoded data using the general classifier; conditions the encoded data to obtain conditioned encoded data; decodes the conditioned encoded data using the decoder to obtain reconstructed data; makes a determination that the reconstructed data and the critical class classification indicate that the data is an unknown classification; classifies the data as being an unknown classification based on the determination; and performs an action set based on the unknown classification of the data.
    Type: Application
    Filed: January 28, 2021
    Publication date: July 28, 2022
    Inventors: Vinicius Michel Gottin, Tiago Salviano Calmon, Paulo Abelha Ferreira
  • Publication number: 20220237124
    Abstract: A method and system for assessing sequentiality of a data stream are disclosed. Specifically, the method and system disclosed herein may entail receiving an incoming request to access a page in a cache memory, wherein the page is identified by a page address of an address space in a main memory; identifying, in a memory, a bin corresponding to an address range including the page address of the page of the incoming request, wherein the bin includes k address ranges of the address space of the main memory; determining whether to update an occupation count of the bin in the memory; locating the bin in a heuristics table to obtain an estimated total number of expected proximal accesses based on an updated occupation count of the bin; and determining, based on the estimated total number of expected proximal accesses, the sequentiality of the data stream in order to generate a policy for the cache memory.
    Type: Application
    Filed: January 28, 2021
    Publication date: July 28, 2022
    Inventors: Vinicius Michel Gottin, Tiago Salviano Calmon, Paulo Abelha Ferreira, Hugo de Oliveira Barbalho, Rômulo Teixeira de Abreu Pinho
  • Publication number: 20220230092
    Abstract: A method for model updating in a federated learning environment, including distributing a current model to a plurality of client nodes; receiving a first set of gradient sign vectors, wherein each gradient sign vector of the first set of gradient sign vectors is received from one client node; generating a first updated model based on the first set of gradient sign vectors; distributing the first updated model to the plurality of client nodes; storing a first shape parameter and a second shape parameter; receiving, in response to distributing the first updated model, a second set of gradient sign vectors, wherein each gradient sign vector of the second set of gradient sign vectors is received from one client node; generating a second updated model based on the second set of gradient sign vectors, the first shape parameter, and the second shape parameter; and distributing the second updated model to the plurality of client nodes.
    Type: Application
    Filed: January 21, 2021
    Publication date: July 21, 2022
    Inventors: Paulo Abelha Ferreira, Pablo Nascimento da Silva, Tiago Salviano Calmon, Roberto Nery Stelling Neto, Vinicius Michel Gottin
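The gradient-sign exchange in this abstract resembles signSGD-style federated learning, which can be sketched as below. This is an assumption-laden illustration, not the claimed method: the majority-vote aggregation and the learning rate are generic choices, and the patent's first and second shape parameters are omitted entirely.

```python
def sign_vector(gradient):
    """Client side: compress a gradient to its elementwise signs."""
    return [1 if g > 0 else -1 if g < 0 else 0 for g in gradient]

def aggregate_signs(sign_vectors):
    """Server side: elementwise majority vote over the client sign vectors."""
    agg = []
    for components in zip(*sign_vectors):
        s = sum(components)
        agg.append(1 if s > 0 else -1 if s < 0 else 0)
    return agg

def update_model(weights, agg_signs, lr=0.01):
    """Apply one sign-based descent step to the current model."""
    return [w - lr * s for w, s in zip(weights, agg_signs)]
```

Each round, clients send only one sign per parameter instead of a full-precision gradient, which is the bandwidth saving that motivates this family of methods.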
  • Patent number: 11354061
    Abstract: One or more aspects of the present disclosure relate to providing storage system configuration recommendations. System configurations of one or more storage devices can be determined based on their respective collected telemetry information. Performance of storage devices having different system configurations can be predicted based on one or more of: the collected telemetry information and each of the different system configurations. In response to receiving one or more requested performance characteristics and workload conditions, one or more recommended storage device configurations can be provided for each request based on the predicted performance characteristics, the requested performance characteristics, and the workload conditions.
    Type: Grant
    Filed: January 17, 2020
    Date of Patent: June 7, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Adriana Bechara Prado, Pablo Nascimento Da Silva, Paulo Abelha Ferreira
  • Publication number: 20220172075
    Abstract: Decoding random forest problem solving through node labeling and subtree distributions. Random forests, like any other type of machine learning algorithm, are designed and configured to solve classification, regression, and/or prediction problems. Solutions (or outputs) provided by random forests, given inputs in the form of values for a set of features, may sometimes be inaccurate, unexpected, or undesirable. Understanding or decoding how a random forest solves a given problem may be a way to correct or improve the random forest. The disclosed method, accordingly, proposes decoding random forest problem solving through the identification of subtrees (by way of node labeling) amongst a random forest, as well as the frequencies with which these subtrees appear (or distributions thereof) throughout the random forest.
    Type: Application
    Filed: November 30, 2020
    Publication date: June 2, 2022
    Inventors: Paulo Abelha Ferreira, Jonas Furtado Dias, Adriana Bechara Prado
  • Publication number: 20220138498
    Abstract: A method for compression switching includes distributing a model to client nodes, which use the model to generate a gradient vector (GV) based on a client node data set. The method includes receiving a model update that includes a gradient sign vector (GSV) based on the gradient vector; generating an updated model using the GSV; and distributing the updated model to the client nodes. The client node uses the updated model to generate a second GV based on a second client node data set. The method also includes making a determination that a compression switch condition exists; based on the determination, transmitting an instruction to the client node to perform a compression switch; receiving, in response to the instruction, another model update including a subset GSV based on the second gradient vector; generating a second updated model using the subset GSV; and distributing the second updated model to the client nodes.
    Type: Application
    Filed: October 29, 2020
    Publication date: May 5, 2022
    Inventors: Paulo Abelha Ferreira, Pablo Nascimento Da Silva, Tiago Salviano Calmon, Roberto Nery Stelling Neto, Vinicius Michel Gottin
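The switch between a full gradient sign vector and a subset GSV can be illustrated roughly as follows. This is a hedged sketch, not the claimed method: the top-k magnitude criterion for the subset, the `bandwidth_limited` switch condition, and all function names are hypothetical.

```python
def full_sign_update(gradient):
    """Full GSV: one (index, sign) pair per model parameter."""
    return [(i, 1 if g > 0 else -1) for i, g in enumerate(gradient)]

def subset_sign_update(gradient, k=2):
    """Subset GSV: signs of only the k largest-magnitude components,
    a natural (assumed) way to shrink the update further."""
    top = sorted(range(len(gradient)), key=lambda i: abs(gradient[i]), reverse=True)[:k]
    return [(i, 1 if gradient[i] > 0 else -1) for i in sorted(top)]

def choose_update(gradient, bandwidth_limited, k=2):
    """Compression switch: fall back to the subset GSV when the
    (hypothetical) switch condition holds."""
    return subset_sign_update(gradient, k) if bandwidth_limited else full_sign_update(gradient)
```

The server-side instruction in the abstract corresponds here to flipping `bandwidth_limited` for a given client.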
  • Patent number: 11320986
    Abstract: A distribution of response times of a storage system can be estimated for a proposed workload using a trained learning process. Collections of information about operational characteristics of multiple storage systems are obtained, in which each collection includes parameters describing the configuration of the storage system that was used to create the collection, workload characteristics describing features of the workload that the storage system processed, and storage system response times. For each collection, workload characteristics are aggregated, and the storage system response information is used to train a probabilistic mixture model. The aggregated workload information, storage system characteristics, and probabilistic mixture model parameters of the collections form training examples that are used to train the learning process.
    Type: Grant
    Filed: January 20, 2020
    Date of Patent: May 3, 2022
    Assignee: Dell Products, L.P.
    Inventors: Paulo Abelha Ferreira, Adriana Bechara Prado, Pablo Nascimento da Silva
  • Publication number: 20220129786
    Abstract: A framework for rapidly prototyping federated learning algorithms. Specifically, the disclosed framework proposes a method and system for evaluating different hypotheses for configuring a learning model, which may be optimized through federated learning. Through the disclosed framework, these hypotheses may be tested for scalability, hardware and network resource performance, as well as for the effectiveness of new learning state compression and/or aggregation techniques. Further, these hypotheses may be tested through federated learning simulations, which avoid the costs associated with deploying the hypotheses under test across production systems.
    Type: Application
    Filed: October 27, 2020
    Publication date: April 28, 2022
    Inventors: Pablo Nascimento da Silva, Paulo Abelha Ferreira, Tiago Salviano Calmon, Roberto Nery Stelling Neto, Vinicius Michel Gottin
  • Publication number: 20210383197
    Abstract: A method for adaptive stochastic learning state compression for federated learning in infrastructure domains. Specifically, the disclosed method introduces an adaptive data compressor directed to reducing the amount of information exchanged between nodes participating in the optimization of a shared machine learning model through federated learning. The adaptive data compressor may employ stochastic k-level quantization, and may include functionality to handle exceptions stemming from the detection of unbalanced and/or irregularly sized data.
    Type: Application
    Filed: June 4, 2020
    Publication date: December 9, 2021
    Inventors: Pablo Nascimento Da Silva, Paulo Abelha Ferreira, Roberto Nery Stelling Neto, Tiago Salviano Calmon, Vinicius Michel Gottin
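Stochastic k-level quantization, which this abstract names explicitly, can be sketched as below. The level spacing, the min/max normalization, and the function signature are generic assumptions; the adaptive and exception-handling aspects of the disclosed method are not represented.

```python
import random

def stochastic_quantize(values, k=4, rng=None):
    """Stochastic k-level quantization: map each value to one of k evenly
    spaced levels between the vector's min and max, rounding up or down at
    random with probability proportional to proximity, so the quantizer is
    unbiased in expectation."""
    rng = rng or random.Random()
    lo, hi = min(values), max(values)
    if hi == lo:
        return list(values)
    step = (hi - lo) / (k - 1)
    out = []
    for v in values:
        pos = (v - lo) / step               # position in units of levels
        low = int(pos)                      # index of the level below v
        frac = pos - low                    # distance above that level
        level = low + (1 if rng.random() < frac else 0)
        out.append(lo + min(level, k - 1) * step)
    return out
```

Transmitting a level index (log2 k bits) plus the min/max per vector, instead of a full float per component, is the bandwidth reduction such a quantizer buys.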
  • Publication number: 20210342712
    Abstract: Training examples are created from telemetry data, in which each training example includes engineered features derived from the telemetry data, storage system characteristics about the storage system that processed the workload associated with the telemetry data, and the response time of the storage system while processing the workload. The training examples are provided to an unsupervised learning process which assigns the training examples to clusters. Training examples of each cluster are used to train/test a separate supervised learning process for the cluster, to cause each supervised learning process to learn a regression between independent variables (system characteristics and workload features) and a dependent variable (storage system response time). To determine a response time of a proposed storage system, the proposed workload is used to select one of the clusters, and then the trained learning process for the selected cluster is used to determine the response time of the proposed storage system.
    Type: Application
    Filed: May 4, 2020
    Publication date: November 4, 2021
    Inventors: Paulo Abelha Ferreira, Adriana Bechara Prado, Pablo Nascimento da Silva
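The cluster-then-regress pipeline in this abstract can be sketched with NumPy. Everything here is illustrative: the synthetic telemetry, the nearest-centroid assignment standing in for a real clustering algorithm, and the per-cluster least-squares fit standing in for the trained supervised learners.

```python
import numpy as np

# Hypothetical telemetry: columns are [system characteristic, workload feature];
# y is the measured response time (synthetic, exactly linear here).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([1.0, 1.0], 0.1, size=(50, 2)),
               rng.normal([5.0, 5.0], 0.1, size=(50, 2))])
y = 2.0 * X[:, 0] + 0.5 * X[:, 1]

# Unsupervised step (minimal stand-in for k-means): assign each training
# example to the nearest of two centroids.
centroids = np.array([X[:50].mean(axis=0), X[50:].mean(axis=0)])
labels = np.argmin(((X[:, None, :] - centroids) ** 2).sum(axis=2), axis=1)

# Supervised step: fit a separate linear regression (with intercept) per cluster.
coefs = {}
for c in (0, 1):
    A = np.hstack([X[labels == c], np.ones((int(np.sum(labels == c)), 1))])
    coefs[c], *_ = np.linalg.lstsq(A, y[labels == c], rcond=None)

def predict_response_time(x):
    """Route a proposed workload to its cluster, then apply that cluster's
    regression to estimate the storage system response time."""
    c = int(np.argmin(((np.asarray(x) - centroids) ** 2).sum(axis=1)))
    return float(np.append(x, 1.0) @ coefs[c])
```

The routing step mirrors the abstract's "the proposed workload is used to select one of the clusters", after which only that cluster's regressor is consulted.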
  • Publication number: 20210334678
    Abstract: A deployment manager includes storage for storing a prediction model based on telemetry data from the deployments and a prediction manager. The prediction manager generates, using the prediction model and second telemetry data obtained from a deployment of the deployments: a prediction, and a prediction error estimate; in response to a determination that the prediction indicates a negative impact on the deployment: generates a confidence estimation for the prediction based on a variability of the second telemetry data from the telemetry data; in response to a second determination that the confidence estimation indicates that the prediction error estimate is inaccurate: remediates the prediction based on the variability to obtain an updated prediction; and performs an action set, based on the updated prediction, to reduce an impact of the negative impact on the deployment.
    Type: Application
    Filed: April 27, 2020
    Publication date: October 28, 2021
    Inventors: Paulo Abelha Ferreira, Adriana Bechara Prado, Pablo Nascimento da Silva
  • Publication number: 20210334597
    Abstract: A prediction manager for providing responsiveness predictions for deployments includes persistent storage and a predictor. The persistent storage stores training data and conditioned training data. The predictor is programmed to obtain training data based on: a configuration of at least one deployment of the deployments, and a measured responsiveness of the at least one deployment, perform a peak extraction analysis on the measured responsiveness to obtain conditioned training data, obtain a prediction model using: the training data, and a first untrained prediction model, obtain a confidence prediction model using: the conditioned training data, and a second untrained prediction model, obtain a combined prediction using: the prediction model, and the confidence prediction model, and perform, based on the combined prediction, an action set to prevent a responsiveness failure.
    Type: Application
    Filed: April 27, 2020
    Publication date: October 28, 2021
    Inventors: Paulo Abelha Ferreira, Adriana Bechara Prado, Pablo Nascimento da Silva
  • Publication number: 20210334668
    Abstract: A global prediction manager for generating predictions using data from data zones includes storage for storing a model repository comprising a global model set and a prediction manager. The prediction manager obtains a local model set from a data zone of the data zones indicating that the global model set is unacceptable; makes a determination that the local model set is acceptable; in response to the determination: distributes the local model set to at least one second data zone of the data zones; obtains compressed telemetry data that was compressed using the local model set from the data zone and the at least one second data zone; and generates a global prediction regarding a future operating condition of the data zones using the compressed telemetry data and the local model set.
    Type: Application
    Filed: April 27, 2020
    Publication date: October 28, 2021
    Inventors: Paulo Abelha Ferreira, Adriana Bechara Prado, Pablo Nascimento da Silva, Tiago Salviano Calmon
  • Publication number: 20210241110
    Abstract: Dynamically adapting neural networks. A latency of a neural network, such as time to inference, is controlled by dynamically compressing/decompressing the neural network. The level of compression, or the compression ratio, is based on a relationship between the latency and the desired service level. The compression ratio, and thus the level of compression, can be adjusted until the latency complies with a required latency. A minimum level of accuracy is maintained such that catastrophic forgetting does not occur in the neural network.
    Type: Application
    Filed: January 30, 2020
    Publication date: August 5, 2021
    Inventors: Tiago Salviano Calmon, Vinicius Michel Gottin, Paulo Abelha Ferreira
  • Publication number: 20210232968
    Abstract: An autoregressor that compresses input data for a specific purpose. Input data is compressed using a compression/decompression framework and by accounting for a purpose of a prediction model. The compression aspect of the framework is distributed and the decompression aspect of the framework may be centralized. The compression/decompression framework and a machine learning prediction model can be centrally trained. The compressor is distributed to nodes such that the input data can be compressed and transmitted to a central node. The model and the compression/decompression framework are continually trained on new data. This allows for lossy compression and higher compression rates while maintaining low prediction error rates.
    Type: Application
    Filed: January 29, 2020
    Publication date: July 29, 2021
    Inventors: Paulo Abelha Ferreira, Pablo Nascimento da Silva, Adriana Bechara Prado