Patents by Inventor Vinicius Gottin

Vinicius Gottin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240028974
    Abstract: Techniques are disclosed for dynamic edge-weighted quantization. For example, a system can include at least one processing device including a processor coupled to a memory, the at least one processing device being configured to implement the following steps: selecting edge nodes for sampling based on an edge node sampling algorithm configured to use a specified number of edge nodes to be sampled; causing the selected edge nodes to execute a quantization selection procedure; receiving, from the selected edge nodes, identifications of a quantization procedure based on the quantization selection procedure; and selecting a quantization procedure for each edge node, based on the identifications of the quantization procedures for the selected edge nodes.
    Type: Application
    Filed: July 21, 2022
    Publication date: January 25, 2024
    Applicant: Dell Products L.P.
    Inventors: Vinicius Gottin, Pablo Nascimento da Silva, Paulo Abelha Ferreira
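
A minimal sketch of the coordinator-side flow described in the abstract above, under stated assumptions: the candidate bit-widths, the EdgeNode class, its local error table, and the majority-vote fallback for unsampled nodes are all illustrative, not the patented procedure.

```python
import random
from collections import Counter
from dataclasses import dataclass

CANDIDATE_BITWIDTHS = [4, 8, 16]  # assumed set of candidate quantization procedures

@dataclass
class EdgeNode:
    node_id: int
    local_error: dict  # assumed: bit-width -> quantization error observed locally

    def select_quantization(self) -> int:
        # local quantization selection procedure: cheapest bit-width whose
        # error stays under an assumed tolerance
        for bits in sorted(self.local_error):
            if self.local_error[bits] < 0.05:
                return bits
        return max(self.local_error)

def coordinate(nodes, num_sampled):
    sampled = random.sample(nodes, num_sampled)            # edge node sampling
    picks = {n.node_id: n.select_quantization() for n in sampled}
    fallback = Counter(picks.values()).most_common(1)[0][0]
    # sampled nodes keep their own pick; the rest receive the majority pick
    return {n.node_id: picks.get(n.node_id, fallback) for n in nodes}

if __name__ == "__main__":
    nodes = [EdgeNode(i, {4: random.uniform(0.0, 0.10),
                          8: random.uniform(0.0, 0.05),
                          16: 0.0}) for i in range(10)]
    print(coordinate(nodes, num_sampled=3))
```
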
  • Patent number: 11868890
    Abstract: A computer implemented method, computer program product, and system for managing execution of a workflow comprising a set of subworkflows, comprising optimizing the set of subworkflows using a deep neural network, wherein each subworkflow of the set of subworkflows has a set of tasks, wherein each task of the sets of tasks has a requirement of resources of a set of resources; wherein each task of the sets of tasks is enabled to be dependent on another task of the sets of tasks, training the deep neural network by: executing the set of subworkflows, collecting provenance data from the execution, and collecting monitoring data that represents the state of said set of resources, wherein the training causes the neural network to learn relationships between the states of said set of resources, the said sets of tasks, their parameters and the obtained performance, optimizing an allocation of resources of the set of resources to each task of the sets of tasks to ensure compliance with a user-defined quality metric b
    Type: Grant
    Filed: April 6, 2022
    Date of Patent: January 9, 2024
    Assignees: Landmark Graphics Corporation, EMC IP Holding Company LLC
    Inventors: Chandra Yeleshwarapu, Jonas F. Dias, Angelo Ciarlini, Romulo D. Pinho, Vinicius Gottin, Andre Maximo, Edward Pacheco, David Holmes, Keshava Rangarajan, Scott David Senften, Joseph Blake Winston, Xi Wang, Clifton Brent Walker, Ashwani Dev, Nagaraj Sirinivasan
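
A hedged sketch of the resource-allocation idea in the abstract above: provenance and monitoring samples are fitted to a small model that is then queried for the smallest allocation meeting a user-defined runtime target. A hand-engineered linear fit stands in for the deep neural network, and all numbers and names are assumptions.

```python
import numpy as np

# assumed provenance/monitoring samples: (allocated CPUs, input size in GB) -> runtime (min)
X = np.array([[1, 10], [2, 10], [4, 10], [1, 40], [2, 40], [4, 40]], dtype=float)
y = np.array([50.0, 27.0, 15.0, 200.0, 105.0, 56.0])

# fit runtime ~ a * (input_size / cpus) + b as a stand-in for the trained network
features = np.column_stack([X[:, 1] / X[:, 0], np.ones(len(X))])
coef, *_ = np.linalg.lstsq(features, y, rcond=None)

def predict_runtime(cpus, size_gb):
    return coef[0] * size_gb / cpus + coef[1]

def allocate(size_gb, runtime_target, available_cpus=(1, 2, 4, 8)):
    # smallest allocation whose predicted runtime satisfies the quality target
    for cpus in sorted(available_cpus):
        if predict_runtime(cpus, size_gb) <= runtime_target:
            return cpus
    return max(available_cpus)

print(allocate(size_gb=40, runtime_target=60))  # expected to print 4 with these samples
```
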
  • Patent number: 11847175
    Abstract: Techniques for table row identification using machine learning are disclosed herein. For example, a method can include detecting a table body in a document by processing the document using a machine learning (ML)-based table body model; predicting an initial table row index for one or more words among a plurality of words obtained in the document, wherein the one or more words are determined to be within the table body; and determining a table row index for the one or more words using an ML-based table row model that is trained based on the predicted initial table row index for the one or more words.
    Type: Grant
    Filed: January 27, 2022
    Date of Patent: December 19, 2023
    Assignee: Dell Products L.P.
    Inventors: Paulo Abelha Ferreira, Romulo Teixeira de Abreu Pinho, Pablo Nascimento Da Silva, Vinicius Gottin
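
An illustrative sketch of the two-stage row indexing in the abstract above: an initial row index is guessed from each word's vertical position inside the detected table body, and a second pass, standing in for the ML-based table row model, merges initial indices whose vertical gap is too small to be separate rows. The Word class, coordinates, and gap threshold are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    y: float  # vertical center of the word's bounding box

def initial_row_index(words):
    # initial table row index: rank of the word's (rounded) vertical position
    ys = sorted({round(w.y, 1) for w in words})
    return {id(w): ys.index(round(w.y, 1)) for w in words}

def refine_rows(words, initial, max_gap=3.0):
    # stand-in for the trained row model: merge adjacent initial indices
    # whose representative vertical positions are closer than max_gap
    rep_y = {}
    for w in words:
        rep_y.setdefault(initial[id(w)], w.y)
    mapping, row, last_y = {}, -1, None
    for idx in sorted(rep_y):
        if last_y is None or rep_y[idx] - last_y > max_gap:
            row += 1
        mapping[idx] = row
        last_y = rep_y[idx]
    return {id(w): mapping[initial[id(w)]] for w in words}

words = [Word("Qty", 10.0), Word("Price", 10.4), Word("2", 21.0), Word("9.99", 21.3)]
rows = refine_rows(words, initial_row_index(words))
print([rows[id(w)] for w in words])  # expected: [0, 0, 1, 1]
```
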
  • Publication number: 20230244957
    Abstract: Techniques are disclosed for machine learning model management using edge concept drift duration prediction. For example, a system can include at least one processing device including a processor coupled to a memory, the at least one processing device being configured to implement the following steps: detecting a drift period in a dataset, the drift period including a start time, wherein the dataset pertains to a machine learning (ML)-based model; determining a first confidence value for a period preceding the start time and a second confidence value for a period following the start time; and predicting a drift period duration for the dataset using an ML-based drift model that is trained based on the first and second confidence values.
    Type: Application
    Filed: January 28, 2022
    Publication date: August 3, 2023
    Applicant: Dell Products L.P.
    Inventors: Vinicius Gottin, Jaumir Valenca Da Silveira, JR., Eduardo Vera Sousa
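
A hedged sketch of the prediction step in the abstract above: model confidence is averaged over a window before and after a detected drift start, and the pair is fed to a regressor trained on past drift episodes to predict how long the drift will last. The nearest-neighbour regressor and the synthetic history stand in for the ML-based drift model and are assumptions.

```python
import numpy as np

# assumed history of past drifts: (confidence before, confidence after) -> duration (hours)
history_X = np.array([[0.95, 0.60], [0.92, 0.75], [0.90, 0.40], [0.97, 0.85]])
history_y = np.array([24.0, 8.0, 72.0, 3.0])

def confidences_around(conf_series, start_idx, window=5):
    # first and second confidence values around the detected drift start time
    before = float(np.mean(conf_series[max(0, start_idx - window):start_idx]))
    after = float(np.mean(conf_series[start_idx:start_idx + window]))
    return before, after

def predict_drift_duration(conf_before, conf_after, k=2):
    # k-nearest-neighbour regression over past episodes (illustrative only)
    query = np.array([conf_before, conf_after])
    nearest = np.argsort(np.linalg.norm(history_X - query, axis=1))[:k]
    return float(history_y[nearest].mean())

confidence = [0.95, 0.96, 0.94, 0.95, 0.93, 0.55, 0.58, 0.60, 0.57, 0.59]
before, after = confidences_around(confidence, start_idx=5)  # drift detected at index 5
print(predict_drift_duration(before, after))
```
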
  • Publication number: 20230237100
    Abstract: Techniques for table row identification using machine learning are disclosed herein. For example, a method can include detecting a table body in a document by processing the document using a machine learning (ML)-based table body model; predicting an initial table row index for one or more words among a plurality of words obtained in the document, wherein the one or more words are determined to be within the table body; and determining a table row index for the one or more words using an ML-based table row model that is trained based on the predicted initial table row index for the one or more words.
    Type: Application
    Filed: January 27, 2022
    Publication date: July 27, 2023
    Applicant: Dell Products L.P.
    Inventors: Paulo Abelha Ferreira, Rômulo Teixeira de Abreu Pinho, Pablo Nascimento Da Silva, Vinicius Gottin
  • Publication number: 20230239658
    Abstract: Techniques for edge-enabled trajectory map generation are disclosed herein. For example, a method can include segmenting one or more node trajectories based on data collected at a node, determining one or more initial trajectories based on the segmented node trajectories, and generating a map including trajectories generated by associating attribute data and event data with the initial trajectories.
    Type: Application
    Filed: January 26, 2022
    Publication date: July 27, 2023
    Applicant: Dell Products L.P.
    Inventors: Vinicius Gottin, Pablo Nascimento Da Silva, Paulo Abelha Ferreira
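
An illustrative sketch of the pipeline in the abstract above: location samples collected at a node are segmented into trajectories wherever the time gap exceeds a threshold, and each resulting trajectory is annotated with attribute and event data to form a simple map structure. The field names, gap threshold, and map layout are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    t: float    # timestamp in seconds
    lat: float
    lon: float

def segment(samples, max_gap_s=60.0):
    # split the node trajectory wherever consecutive samples are too far apart in time
    samples = sorted(samples, key=lambda s: s.t)
    trajectories, current = [], [samples[0]]
    for prev, cur in zip(samples, samples[1:]):
        if cur.t - prev.t > max_gap_s:
            trajectories.append(current)
            current = []
        current.append(cur)
    trajectories.append(current)
    return trajectories

def build_map(samples, attributes, events):
    # attach attribute data and time-overlapping events to each initial trajectory
    return [{"points": [(s.lat, s.lon) for s in traj],
             "attributes": attributes,
             "events": [e for e in events if traj[0].t <= e["t"] <= traj[-1].t]}
            for traj in segment(samples)]

samples = [Sample(0, 0.0, 0.0), Sample(10, 0.0, 0.1), Sample(300, 1.0, 1.0)]
print(build_map(samples, {"road": "unknown"}, [{"t": 5, "type": "stop"}]))
```
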
  • Publication number: 20230237272
    Abstract: Techniques are disclosed for predicting a table column using machine learning. For example, a system can include at least one processing device including a processor coupled to a memory, the processing device being configured to implement the following: determining a local word density for words in a table, the local word density measuring a count of other words in a first region surrounding the words; determining a local numeric density for the words, the local numeric density measuring a proportion of digits in a second region surrounding the words; determining semantic associations for the words by processing the words using an ML-based semantic association model trained based on surrounding words in nearby table columns and rows; and predicting a table column index for the words by processing the table using an ML-based table column model trained based on the local word density, local numeric density, and semantic association.
    Type: Application
    Filed: January 27, 2022
    Publication date: July 27, 2023
    Applicant: Dell Products L.P.
    Inventors: Romulo Teixeira de Abreu Pinho, Paulo Abelha Ferreira, Vinicius Gottin, Pablo Nascimento Da Silva
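
A sketch of the two hand-crafted features named in the abstract above: local word density (a count of other words in a region around a word) and local numeric density (the share of digit characters in a region around a word). The region size and the simple nearest-center column guess that consumes the features are assumptions; the ML-based semantic association and column models are not reproduced.

```python
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    x: float
    y: float

def local_word_density(word, words, radius=50.0):
    # count of other words inside a square region around the word
    return sum(1 for w in words
               if w is not word
               and abs(w.x - word.x) <= radius and abs(w.y - word.y) <= radius)

def local_numeric_density(word, words, radius=50.0):
    # proportion of digit characters among words inside the region
    chars = "".join(w.text for w in words
                    if abs(w.x - word.x) <= radius and abs(w.y - word.y) <= radius)
    return sum(c.isdigit() for c in chars) / max(len(chars), 1)

def guess_column(word, column_centers):
    # stand-in for the ML-based table column model: nearest column center by x
    return min(range(len(column_centers)), key=lambda i: abs(word.x - column_centers[i]))

words = [Word("Item", 10, 0), Word("Price", 120, 0),
         Word("Bolt", 12, 20), Word("9.50", 118, 20)]
for w in words:
    print(w.text, local_word_density(w, words),
          round(local_numeric_density(w, words), 2), guess_column(w, [10, 120]))
```
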
  • Publication number: 20220300812
    Abstract: A computer implemented method, computer program product, and system for managing execution of a workflow comprising a set of subworkflows, comprising optimizing the set of subworkflows using a deep neural network, wherein each subworkflow of the set of subworkflows has a set of tasks, wherein each task of the sets of tasks has a requirement of resources of a set of resources; wherein each task of the sets of tasks is enabled to be dependent on another task of the sets of tasks, training the deep neural network by: executing the set of subworkflows, collecting provenance data from the execution, and collecting monitoring data that represents the state of said set of resources, wherein the training causes the neural network to learn relationships between the states of said set of resources, the said sets of tasks, their parameters and the obtained performance, optimizing an allocation of resources of the set of resources to each task of the sets of tasks to ensure compliance with a user-defined quality metric b
    Type: Application
    Filed: April 6, 2022
    Publication date: September 22, 2022
    Applicants: Landmark Graphics Corporation, EMC IP Holding Company LLC
    Inventors: Chandra Yeleshwarapu, Jonas F. Dias, Angelo Ciarlini, Romulo D. Pinho, Vinicius Gottin, Andre Maximo, Edward Pacheco, David Holmes, Keshava Rangarajan, Scott David Senften, Joseph Blake Winston, Xi Wang, Clifton Brent Walker, Ashwani Dev, Nagaraj Sirinivasan
  • Patent number: 11347645
    Abstract: Managing a cache memory in a storage system includes maintaining a queue that stores data indicative of the read requests for a particular logical storage unit of the storage system in an order that the read requests are received by the storage system, receiving a read request for a particular page of the particular logical storage unit, and removing a number of elements in the queue and resizing the queue in response to the queue being full. Managing the cache memory also includes placing data indicative of the read request in the queue, determining a prefetch metric that varies according to a number of adjacent elements in a sorted version of the queue having a difference that is less than a predetermined value and greater than zero, and prefetching a plurality of pages that come after the particular page sequentially if the prefetch metric is greater than a predefined value.
    Type: Grant
    Filed: October 14, 2019
    Date of Patent: May 31, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Vinicius Gottin, Jonas F. Dias, Hugo de Oliveira Barbalho, Romulo D. Pinho, Tiago Calmon
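
A hedged sketch of the prefetch decision in the abstract above: recent page numbers for a logical storage unit are kept in a bounded queue (a fixed-length deque stands in for the resize-on-full behavior), the metric counts adjacent pairs in a sorted copy whose difference is small but non-zero, and sequential pages are prefetched when the metric clears a threshold. Queue length, thresholds, and prefetch count are assumptions.

```python
from collections import deque

QUEUE_LEN = 16
NEAR = 3           # "less than a predetermined value and greater than zero"
METRIC_MIN = 5     # prefetch only when the metric exceeds this value
PREFETCH_PAGES = 4

recent = deque(maxlen=QUEUE_LEN)   # drops the oldest entry when full

def on_read(page):
    recent.append(page)
    ordered = sorted(recent)
    metric = sum(1 for a, b in zip(ordered, ordered[1:]) if 0 < b - a < NEAR)
    if metric > METRIC_MIN:
        return list(range(page + 1, page + 1 + PREFETCH_PAGES))  # pages to prefetch
    return []

for p in [100, 101, 103, 104, 106, 107, 109, 110]:
    prefetched = on_read(p)
print(prefetched)  # a near-sequential history like this should trigger a prefetch
```
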
  • Patent number: 11315014
    Abstract: A computer implemented method, computer program product, and system for managing execution of a workflow comprising a set of subworkflows, comprising optimizing the set of subworkflows using a deep neural network, wherein each subworkflow of the set of subworkflows has a set of tasks, wherein each task of the sets of tasks has a requirement of resources of a set of resources; wherein each task of the sets of tasks is enabled to be dependent on another task of the sets of tasks, training the deep neural network by: executing the set of subworkflows, collecting provenance data from the execution, and collecting monitoring data that represents the state of said set of resources, wherein the training causes the neural network to learn relationships between the states of said set of resources, the said sets of tasks, their parameters and the obtained performance, optimizing an allocation of resources of the set of resources to each task of the sets of tasks to ensure compliance with a user-defined quality metric b
    Type: Grant
    Filed: August 16, 2018
    Date of Patent: April 26, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Jonas F. Dias, Angelo Ciarlini, Romulo D. Pinho, Vinicius Gottin, Andre Maximo, Edward Pacheco, David Holmes, Keshava Rangarajan, Scott David Senften, Joseph Blake Winston, Xi Wang, Clifton Brent Walker, Ashwani Dev, Chandra Yeleshwarapu, Nagaraj Srinivasan
  • Patent number: 11093404
    Abstract: Managing a cache memory in a storage system includes maintaining a first queue that stores data indicative of the read requests for a particular logical storage unit of the storage system in an order that the read requests are received by the storage system and maintaining a second queue that stores data indicative of the read requests for the particular logical storage unit in a sort order corresponding to page numbers of the read requests, the second queue persisting for a plurality of iterations of read requests. A read request is received and data indicative of the read request is placed in the first queue and in the second queue while maintaining the sort order of the second queue. The second queue is used to determine a prefetch metric that varies according to a number of adjacent elements in the second queue.
    Type: Grant
    Filed: October 14, 2019
    Date of Patent: August 17, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Vinicius Gottin, Jonas F. Dias, Hugo de Oliveira Barbalho, Romulo D. Pinho, Tiago Calmon
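
A minimal sketch of the two-queue arrangement in the abstract above, under simplifying assumptions: one bounded queue keeps recent page requests in arrival order, a second, longer-lived list keeps the same pages in sorted order across iterations, and the prefetch metric counts near-adjacent pairs in the sorted queue. Sizes and thresholds are illustrative.

```python
import bisect
from collections import deque

arrival = deque(maxlen=16)   # first queue: order in which requests were received
by_page = []                 # second queue: persistent, kept in page-number order
NEAR, METRIC_MIN = 3, 5

def on_read(page):
    arrival.append(page)
    bisect.insort(by_page, page)          # maintain the sort order of the second queue
    if len(by_page) > 64:                 # bound the persistent queue
        by_page.pop(0)
    metric = sum(1 for a, b in zip(by_page, by_page[1:]) if 0 < b - a < NEAR)
    return metric > METRIC_MIN            # True -> prefetch the following pages

decisions = [on_read(p) for p in [200, 201, 203, 204, 206, 207, 209, 210]]
print(decisions[-1])  # expected True for this near-sequential request stream
```
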
  • Publication number: 20210117808
    Abstract: A software agent running on a SAN node performs machine learning to adjust caching policy parameters. Learned cache hit rate distributions and cache hit rate rewards relative to baselines are used to dynamically adjust caching parameters such as prefetch size to improve state features such as cache hit rate. The agent may also detect performance degradation. The agent uses efficient state representations to learn the distribution of hit rates as a function of different caching policy parameters. Baselines are used to learn the difference between the baseline cache hit rate and the cache hit rate under an adjusted caching policy, rather than learning the cache hit rate directly.
    Type: Application
    Filed: October 17, 2019
    Publication date: April 22, 2021
    Applicant: EMC IP Holding Company LLC
    Inventors: Vinicius Gottin, Jonas Dias, Tiago Calmon
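
A sketch of the general shape only: an agent keeps a running estimate of the cache hit-rate improvement over a baseline for each candidate prefetch size and picks sizes epsilon-greedily. The bandit-style update and the simulated hit rates are assumptions; the patent's state representation and learning scheme are not reproduced.

```python
import random

prefetch_sizes = [1, 2, 4, 8]
baseline_hit_rate = 0.60                              # assumed measured baseline
reward_estimate = {s: 0.0 for s in prefetch_sizes}    # learned hit rate minus baseline
counts = {s: 0 for s in prefetch_sizes}

def observe_hit_rate(size):
    # stand-in for measuring the cache under the adjusted policy:
    # pretend a prefetch size of 4 suits this workload best
    return 0.60 + 0.30 - 0.05 * abs(size - 4) + random.uniform(-0.02, 0.02)

def choose_size(epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(prefetch_sizes)                       # explore
    return max(prefetch_sizes, key=lambda s: reward_estimate[s])   # exploit

for _ in range(200):
    size = choose_size()
    reward = observe_hit_rate(size) - baseline_hit_rate   # learn the difference, not the raw rate
    counts[size] += 1
    reward_estimate[size] += (reward - reward_estimate[size]) / counts[size]

print(max(reward_estimate, key=reward_estimate.get))   # typically converges to 4 here
```
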
  • Publication number: 20210109859
    Abstract: Managing a cache memory in a storage system includes maintaining a queue that stores data indicative of the read requests for a particular logical storage unit of the storage system in an order that the read requests are received by the storage system, receiving a read request for a particular page of the particular logical storage unit, and removing a number of elements in the queue and resizing the queue in response to the queue being full. Managing the cache memory also includes placing data indicative of the read request in the queue, determining a prefetch metric that varies according to a number of adjacent elements in a sorted version of the queue having a difference that is less than a predetermined value and greater than zero, and prefetching a plurality of pages that come after the particular page sequentially if the prefetch metric is greater than a predefined value.
    Type: Application
    Filed: October 14, 2019
    Publication date: April 15, 2021
    Applicant: EMC IP Holding Company LLC
    Inventors: Vinicius Gottin, Jonas F. Dias, Hugo de Oliveira Barbalho, Romulo D. Pinho, Tiago Calmon
  • Publication number: 20210109860
    Abstract: Managing a cache memory in a storage system includes maintaining a first queue that stores data indicative of the read requests for a particular logical storage unit of the storage system in an order that the read requests are received by the storage system and maintaining a second queue that stores data indicative of the read requests for the particular logical storage unit in a sort order corresponding to page numbers of the read requests, the second queue persisting for a plurality of iterations of read requests. A read request is received and data indicative of the read request is placed in the first queue and in the second queue while maintaining the sort order of the second queue. The second queue is used to determine a prefetch metric that varies according to a number of adjacent elements in the second queue.
    Type: Application
    Filed: October 14, 2019
    Publication date: April 15, 2021
    Applicant: EMC IP Holding Company LLC
    Inventors: Vinicius Gottin, Jonas F. Dias, Hugo de Oliveira Barbalho, Romulo D. Pinho, Tiago Calmon
  • Patent number: 10977177
    Abstract: A pre-fetching technique determines what data, if any, to pre-fetch on a per-logical storage unit basis. For a given logical storage unit, what, if any, data to prefetch is based at least in part on a collective sequential proximity of the most recently requested pages of the logical storage unit. Determining what, if any, data to pre-fetch for a logical storage unit may include determining a value for a proximity metric indicative of the collective sequential proximity of the most recently requested pages, comparing the value to a predetermined proximity threshold value, and determining whether to pre-fetch one or more pages of the logical storage unit based on the result of the comparison. A data structure may be maintained that includes most recently requested pages for one or more logical storage units. This data structure may be a table.
    Type: Grant
    Filed: July 11, 2019
    Date of Patent: April 13, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Vinicius Gottin, Tiago Calmon, Romulo D. Pinho, Jonas F. Dias, Eduardo Sousa, Roberto Nery Stelling Neto, Hugo de Oliveira Barbalho
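
A hedged sketch of the per-logical-storage-unit decision in the abstract above: a small table keeps the most recently requested pages per logical storage unit, a proximity metric over those pages (here, the fraction of gaps in the sorted recent set that fall under a small bound) is compared against a threshold, and nearby pages are prefetched when it passes. The exact metric, window, and threshold are assumptions, not the patented ones.

```python
from collections import defaultdict, deque

RECENT = 8
GAP_BOUND = 4
PROXIMITY_THRESHOLD = 0.6
recent_pages = defaultdict(lambda: deque(maxlen=RECENT))   # table: LSU -> recent pages

def proximity(pages):
    # collective sequential proximity of the most recently requested pages
    ordered = sorted(pages)
    gaps = [b - a for a, b in zip(ordered, ordered[1:])]
    return sum(g <= GAP_BOUND for g in gaps) / len(gaps) if gaps else 0.0

def on_read(lsu, page, prefetch_count=4):
    recent_pages[lsu].append(page)
    if proximity(recent_pages[lsu]) > PROXIMITY_THRESHOLD:
        return [page + i for i in range(1, prefetch_count + 1)]   # pages to prefetch
    return []

for p in [10, 11, 13, 14, 500, 15, 16, 18]:
    decision = on_read("lsu-0", p)
print(decision)   # the mostly sequential history for lsu-0 should trigger a prefetch
```
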
  • Publication number: 20210011851
    Abstract: A pre-fetching technique determines what data, if any, to pre-fetch on a per-logical storage unit basis. For a given logical storage unit, what, if any, data to prefetch is based at least in part on a collective sequential proximity of the most recently requested pages of the logical storage unit. Determining what, if any, data to pre-fetch for a logical storage unit may include determining a value for a proximity metric indicative of the collective sequential proximity of the most recently requested pages, comparing the value to a predetermined proximity threshold value, and determining whether to pre-fetch one or more pages of the logical storage unit based on the result of the comparison. A data structure may be maintained that includes most recently requested pages for one or more logical storage units. This data structure may be a table.
    Type: Application
    Filed: July 11, 2019
    Publication date: January 14, 2021
    Applicant: EMC IP Holding Company LLC
    Inventors: Vinicius Gottin, Tiago Calmon, Romulo D. Pinho, Jonas F. Dias, Eduardo Sousa, Roberto Nery Stelling Neto, Hugo de Oliveira Barbalho
  • Publication number: 20200057675
    Abstract: A computer implemented method, computer program product, and system for managing execution of a workflow comprising a set of subworkflows, comprising optimizing the set of subworkflows using a deep neural network, wherein each subworkflow of the set of subworkflows has a set of tasks, wherein each task of the sets of tasks has a requirement of resources of a set of resources; wherein each task of the sets of tasks is enabled to be dependent on another task of the sets of tasks, training the deep neural network by: executing the set of subworkflows, collecting provenance data from the execution, and collecting monitoring data that represents the state of said set of resources, wherein the training causes the neural network to learn relationships between the states of said set of resources, the said sets of tasks, their parameters and the obtained performance, optimizing an allocation of resources of the set of resources to each task of the sets of tasks to ensure compliance with a user-defined quality metric b
    Type: Application
    Filed: August 16, 2018
    Publication date: February 20, 2020
    Inventors: Jonas F. Dias, Angelo Ciarlini, Romulo D. Pinho, Vinicius Gottin, Andre Maximo, Edward Pacheco, David Holmes, Keshava Rangarajan, Scott David Senften, Joseph Blake Winston, Xi Wang, Clifton Brent Walker, Ashwani Dev, Chandra Yeleshwarapu, Nagaraj Srinivasan