Patents by Inventor Romulo D. Pinho

Romulo D. Pinho is a named inventor on the patent filings listed below. The listing includes patent applications that are still pending as well as patents that have been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11868890
    Abstract: A computer-implemented method, computer program product, and system for managing execution of a workflow comprising a set of subworkflows, comprising optimizing the set of subworkflows using a deep neural network, wherein each subworkflow of the set of subworkflows has a set of tasks, wherein each task of the sets of tasks has a requirement of resources of a set of resources, and wherein each task of the sets of tasks is enabled to be dependent on another task of the sets of tasks; training the deep neural network by executing the set of subworkflows, collecting provenance data from the execution, and collecting monitoring data that represents the state of said set of resources, wherein the training causes the neural network to learn relationships between the states of said set of resources, said sets of tasks, their parameters, and the obtained performance; and optimizing an allocation of resources of the set of resources to each task of the sets of tasks to ensure compliance with a user-defined quality metric… (An illustrative sketch of this idea appears after this entry.)
    Type: Grant
    Filed: April 6, 2022
    Date of Patent: January 9, 2024
    Assignees: LANDMARK GRAPHICS CORPORATION, EMC IP HOLDING COMPANY LLC
    Inventors: Chandra Yeleshwarapu, Jonas F. Dias, Angelo Ciarlini, Romulo D. Pinho, Vinicius Gottin, Andre Maximo, Edward Pacheco, David Holmes, Keshava Rangarajan, Scott David Senften, Joseph Blake Winston, Xi Wang, Clifton Brent Walker, Ashwani Dev, Nagaraj Sirinivasan
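    The workflow-optimization idea above can be pictured, very loosely, as: learn a performance model from provenance and monitoring data gathered while subworkflows run, then use that model to pick a resource allocation for each task that satisfies a user-defined quality metric. The Python sketch below is only an illustration under that reading, not the patented method; the feature layout, the use of scikit-learn's MLPRegressor as a stand-in "deep neural network", and all names and thresholds (allocate, candidates, max_runtime_s) are assumptions.
      # Illustrative only: a learned runtime model gates per-task resource allocation.
      import numpy as np
      from sklearn.neural_network import MLPRegressor

      # Hypothetical provenance/monitoring data from past subworkflow executions:
      # features = [cpu_cores, memory_gb, input_size_gb], target = observed runtime (s).
      X_train = np.array([[2, 8, 10], [4, 16, 10], [8, 32, 10], [2, 8, 50], [8, 32, 50]], dtype=float)
      y_train = np.array([120.0, 70.0, 45.0, 600.0, 210.0])

      model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
      model.fit(X_train, y_train)

      def allocate(task_input_gb, candidates, max_runtime_s):
          """Pick the smallest candidate allocation whose predicted runtime
          meets the user-defined quality metric, here a runtime bound."""
          for cores, mem in sorted(candidates):
              predicted = model.predict([[cores, mem, task_input_gb]])[0]
              if predicted <= max_runtime_s:
                  return cores, mem, predicted
          return None  # no candidate complies; the caller may relax the metric

      print(allocate(50, candidates=[(2, 8), (4, 16), (8, 32)], max_runtime_s=300.0))
    The patent describes learning relationships among resource states, tasks, their parameters, and observed performance; the stand-in model here only maps an allocation and an input size to a predicted runtime.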
  • Publication number: 20220300812
    Abstract: A computer-implemented method, computer program product, and system for managing execution of a workflow comprising a set of subworkflows, comprising optimizing the set of subworkflows using a deep neural network, wherein each subworkflow of the set of subworkflows has a set of tasks, wherein each task of the sets of tasks has a requirement of resources of a set of resources, and wherein each task of the sets of tasks is enabled to be dependent on another task of the sets of tasks; training the deep neural network by executing the set of subworkflows, collecting provenance data from the execution, and collecting monitoring data that represents the state of said set of resources, wherein the training causes the neural network to learn relationships between the states of said set of resources, said sets of tasks, their parameters, and the obtained performance; and optimizing an allocation of resources of the set of resources to each task of the sets of tasks to ensure compliance with a user-defined quality metric…
    Type: Application
    Filed: April 6, 2022
    Publication date: September 22, 2022
    Applicants: Landmark Graphics Corporation, EMC IP Holding Company LLC
    Inventors: Chandra YELESHWARAPU, Jonas F. DIAS, Angelo CIARLINI, Romulo D. Pinho, Vinicius GOTTIN, Andre MAXIMO, Edward PACHECO, David HOLMES, Keshava RANGARAJAN, Scott David SENFTEN, Joseph Blake WINSTON, Xi WANG, Clifton Brent WALKER, Ashwani DEV, Nagaraj SIRINIVASAN
  • Patent number: 11347645
    Abstract: Managing a cache memory in a storage system includes maintaining a queue that stores data indicative of the read requests for a particular logical storage unit of the storage system in an order that the read requests are received by the storage system, receiving a read request for a particular page of the particular logical storage unit, and removing a number of elements in the queue and resizing the queue in response to the queue being full. Managing the cache memory also includes placing data indicative of the read request in the queue, determining a prefetch metric that varies according to a number of adjacent elements in a sorted version of the queue having a difference that is less than a predetermined value and greater than zero, and prefetching a plurality of pages that come after the particular page sequentially if the prefetch metric is greater than a predefined value. (An illustrative sketch of this heuristic appears after this entry.)
    Type: Grant
    Filed: October 14, 2019
    Date of Patent: May 31, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Vinicius Gottin, Jonas F. Dias, Hugo de Oliveira Barbalho, Romulo D. Pinho, Tiago Calmon
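    A minimal sketch of this kind of heuristic is below; it is an interpretation of the abstract, not the claimed method. The capacity, gap, and threshold values are invented, and the queue removal-and-resize step described in the abstract is simplified to dropping the oldest entry.
      # Sketch: keep recent page numbers for a logical storage unit; prefetch when
      # a sorted view of the queue shows many near-adjacent requests.
      from collections import deque

      QUEUE_SIZE = 16            # hypothetical queue capacity
      NEAR_GAP = 4               # "difference less than a predetermined value"
      PREFETCH_THRESHOLD = 0.5   # metric level that triggers prefetching
      PREFETCH_PAGES = 8         # sequential pages to prefetch after the request

      class PrefetchQueue:
          def __init__(self):
              self.queue = deque()

          def on_read(self, page):
              # The abstract removes a number of elements and resizes the queue
              # when it is full; this sketch simply drops the oldest entry.
              if len(self.queue) >= QUEUE_SIZE:
                  self.queue.popleft()
              self.queue.append(page)
              return self._prefetch_metric() > PREFETCH_THRESHOLD

          def _prefetch_metric(self):
              # Fraction of adjacent pairs in a sorted view of the queue whose
              # page difference is greater than zero and less than NEAR_GAP.
              pages = sorted(self.queue)
              if len(pages) < 2:
                  return 0.0
              near = sum(1 for a, b in zip(pages, pages[1:]) if 0 < b - a < NEAR_GAP)
              return near / (len(pages) - 1)

      q = PrefetchQueue()
      for page in [100, 101, 103, 500, 104, 106]:
          if q.on_read(page):
              print(f"prefetch pages {page + 1}..{page + PREFETCH_PAGES}")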
  • Patent number: 11315014
    Abstract: A computer-implemented method, computer program product, and system for managing execution of a workflow comprising a set of subworkflows, comprising optimizing the set of subworkflows using a deep neural network, wherein each subworkflow of the set of subworkflows has a set of tasks, wherein each task of the sets of tasks has a requirement of resources of a set of resources, and wherein each task of the sets of tasks is enabled to be dependent on another task of the sets of tasks; training the deep neural network by executing the set of subworkflows, collecting provenance data from the execution, and collecting monitoring data that represents the state of said set of resources, wherein the training causes the neural network to learn relationships between the states of said set of resources, said sets of tasks, their parameters, and the obtained performance; and optimizing an allocation of resources of the set of resources to each task of the sets of tasks to ensure compliance with a user-defined quality metric…
    Type: Grant
    Filed: August 16, 2018
    Date of Patent: April 26, 2022
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Jonas F. Dias, Angelo Ciarlini, Romulo D. Pinho, Vinicius Gottin, Andre Maximo, Edward Pacheco, David Holmes, Keshava Rangarajan, Scott David Senften, Joseph Blake Winston, Xi Wang, Clifton Brent Walker, Ashwani Dev, Chandra Yeleshwarapu, Nagaraj Srinivasan
  • Patent number: 11093404
    Abstract: Managing a cache memory in a storage system includes maintaining a first queue that stores data indicative of the read requests for a particular logical storage unit of the storage system in an order that the read requests are received by the storage system and maintaining a second queue that stores data indicative of the read requests for the particular logical storage unit in a sort order corresponding to page numbers of the read requests, the second queue persisting for a plurality of iterations of read requests. A read request is received and data indicative of the read request is placed in the first queue and in the second queue while maintaining the sort order of the second queue. The second queue is used to determine a prefetch metric that varies according to a number of adjacent elements in the second queue. (An illustrative sketch of this two-queue approach appears after this entry.)
    Type: Grant
    Filed: October 14, 2019
    Date of Patent: August 17, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Vinicius Gottin, Jonas F. Dias, Hugo de Oliveira Barbalho, Romulo D. Pinho, Tiago Calmon
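    As a rough reading of this two-queue variant, the sketch below keeps a FIFO queue in arrival order alongside a persistently sorted queue of the same pages and computes the prefetch metric over the sorted queue. The class name, constants, and metric definition are invented for illustration.
      # Sketch: arrival-order FIFO plus a persistent sorted queue used for the metric.
      import bisect
      from collections import deque

      QUEUE_SIZE = 16
      NEAR_GAP = 4
      PREFETCH_THRESHOLD = 0.5

      class TwoQueuePrefetcher:
          def __init__(self):
              self.fifo = deque()    # read requests in the order they were received
              self.sorted_q = []     # the same pages, kept in page-number order

          def on_read(self, page):
              if len(self.fifo) >= QUEUE_SIZE:
                  evicted = self.fifo.popleft()
                  # keep the sorted queue consistent with the FIFO contents
                  self.sorted_q.pop(bisect.bisect_left(self.sorted_q, evicted))
              self.fifo.append(page)
              bisect.insort(self.sorted_q, page)   # maintains the sort order
              return self._prefetch_metric() > PREFETCH_THRESHOLD

          def _prefetch_metric(self):
              # Based on the number of near-adjacent elements in the sorted queue.
              if len(self.sorted_q) < 2:
                  return 0.0
              near = sum(1 for a, b in zip(self.sorted_q, self.sorted_q[1:])
                         if 0 < b - a < NEAR_GAP)
              return near / (len(self.sorted_q) - 1)

      prefetcher = TwoQueuePrefetcher()
      print([prefetcher.on_read(p) for p in (10, 11, 12, 40, 13, 14)])
    Keeping the sorted queue persistent avoids re-sorting the recent pages on every read request, which is the practical difference from the single-queue entry above.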
  • Publication number: 20210109860
    Abstract: Managing a cache memory in a storage system includes maintaining a first queue that stores data indicative of the read requests for a particular logical storage unit of the storage system in an order that the read requests are received by the storage system and maintaining a second queue that stores data indicative of the read requests for the particular logical storage unit in a sort order corresponding to page numbers of the read requests, the second queue persisting for a plurality of iterations of read requests. A read request is received and data indicative of the read request is placed in the first queue and in the second queue while maintaining the sort order of the second queue. The second queue is used to determine a prefetch metric that varies according to a number of adjacent elements in the second queue.
    Type: Application
    Filed: October 14, 2019
    Publication date: April 15, 2021
    Applicant: EMC IP Holding Company LLC
    Inventors: Vinicius Gottin, Jonas F. Dias, Hugo de Oliveira Barbalho, Romulo D. Pinho, Tiago Calmon
  • Publication number: 20210109859
    Abstract: Managing a cache memory in a storage system includes maintaining a queue that stores data indicative of the read requests for a particular logical storage unit of the storage system in an order that the read requests are received by the storage system, receiving a read request for a particular page of the particular logical storage unit, and removing a number of elements in the queue and resizing the queue in response to the queue being full. Managing the cache memory also includes placing data indicative of the read request in the queue, determining a prefetch metric that varies according to a number of adjacent elements in a sorted version of the queue having a difference that is less than a predetermined value and greater than zero, and prefetching a plurality of pages that come after the particular page sequentially if the prefetch metric is greater than a predefined value.
    Type: Application
    Filed: October 14, 2019
    Publication date: April 15, 2021
    Applicant: EMC IP Holding Company LLC
    Inventors: Vinicius Gottin, Jonas F. Dias, Hugo de Oliveira Barbalho, Romulo D. Pinho, Tiago Calmon
  • Patent number: 10977177
    Abstract: A pre-fetching technique determines what data, if any, to pre-fetch on a per-logical-storage-unit basis. For a given logical storage unit, what data, if any, to pre-fetch is based at least in part on a collective sequential proximity of the most recently requested pages of the logical storage unit. Determining what, if any, data to pre-fetch for a logical storage unit may include determining a value for a proximity metric indicative of the collective sequential proximity of the most recently requested pages, comparing the value to a predetermined proximity threshold value, and determining whether to pre-fetch one or more pages of the logical storage unit based on the result of the comparison. A data structure may be maintained that includes the most recently requested pages for one or more logical storage units; this data structure may be a table. (An illustrative sketch of this proximity check appears after this entry.)
    Type: Grant
    Filed: July 11, 2019
    Date of Patent: April 13, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Vinicius Gottin, Tiago Calmon, Romulo D. Pinho, Jonas F. Dias, Eduardo Sousa, Roberto Nery Stelling Neto, Hugo de Oliveira Barbalho
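    The sketch below is one loose interpretation of the per-logical-storage-unit proximity check: a table keyed by logical storage unit holds the most recently requested pages, and a proximity metric over those pages is compared against a threshold to decide whether to pre-fetch. The metric used here (mean gap between consecutive pages) and all constants are assumptions, not the patented definition.
      # Sketch: per-LSU table of recent pages; prefetch when they are collectively close.
      from collections import defaultdict, deque

      RECENT_PAGES = 8           # pages remembered per logical storage unit
      PROXIMITY_THRESHOLD = 3.0  # mean gap at or below this is "sequential enough"

      recent = defaultdict(lambda: deque(maxlen=RECENT_PAGES))  # the per-LSU table

      def should_prefetch(lsu_id, page):
          pages = recent[lsu_id]
          pages.append(page)
          if len(pages) < 2:
              return False
          ordered = sorted(pages)
          # Stand-in proximity metric: mean gap between consecutive requested pages.
          mean_gap = sum(b - a for a, b in zip(ordered, ordered[1:])) / (len(ordered) - 1)
          return mean_gap <= PROXIMITY_THRESHOLD

      for p in (10, 11, 13, 12, 40, 41):
          print("lsu-0", p, should_prefetch("lsu-0", p))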
  • Publication number: 20210011851
    Abstract: A pre-fetching technique determines what data, if any, to pre-fetch on a per-logical-storage-unit basis. For a given logical storage unit, what data, if any, to pre-fetch is based at least in part on a collective sequential proximity of the most recently requested pages of the logical storage unit. Determining what, if any, data to pre-fetch for a logical storage unit may include determining a value for a proximity metric indicative of the collective sequential proximity of the most recently requested pages, comparing the value to a predetermined proximity threshold value, and determining whether to pre-fetch one or more pages of the logical storage unit based on the result of the comparison. A data structure may be maintained that includes the most recently requested pages for one or more logical storage units; this data structure may be a table.
    Type: Application
    Filed: July 11, 2019
    Publication date: January 14, 2021
    Applicant: EMC IP Holding Company LLC
    Inventors: Vinicius Gottin, Tiago Calmon, Romulo D. Pinho, Jonas F. Dias, Eduardo Sousa, Roberto Nery Stelling Neto, Hugo de Oliveira Barbalho
  • Publication number: 20200057675
    Abstract: A computer-implemented method, computer program product, and system for managing execution of a workflow comprising a set of subworkflows, comprising optimizing the set of subworkflows using a deep neural network, wherein each subworkflow of the set of subworkflows has a set of tasks, wherein each task of the sets of tasks has a requirement of resources of a set of resources, and wherein each task of the sets of tasks is enabled to be dependent on another task of the sets of tasks; training the deep neural network by executing the set of subworkflows, collecting provenance data from the execution, and collecting monitoring data that represents the state of said set of resources, wherein the training causes the neural network to learn relationships between the states of said set of resources, said sets of tasks, their parameters, and the obtained performance; and optimizing an allocation of resources of the set of resources to each task of the sets of tasks to ensure compliance with a user-defined quality metric…
    Type: Application
    Filed: August 16, 2018
    Publication date: February 20, 2020
    Inventors: Jonas F. Dias, Angelo Ciarlini, Romulo D. Pinho, Vinicius Gottin, Andre Maximo, Edward Pacheco, David Holmes, Keshava Rangarajan, Scott David Senften, Joseph Blake Winston, Xi Wang, Clifton Brent Walker, Ashwani Dev, Chandra Yeleshwarapu, Nagaraj Srinivasan