Patents by Inventor Irina Calciu

Irina Calciu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11947458
    Abstract: A device is connected via a coherence interconnect to a CPU with a cache. The device monitors cache coherence events via the coherence interconnect, where the cache coherence events relate to the cache of the CPU. The device also includes a buffer that can contain representations, such as addresses, of cache lines. If a coherence event occurs on the coherence interconnect indicating that a cache line in the CPU's cache is dirty, then the device is configured to add an entry to the buffer to record the dirty cache line.
    Type: Grant
    Filed: July 27, 2018
    Date of Patent: April 2, 2024
    Assignee: VMware, Inc.
    Inventors: Irina Calciu, Jayneel Gandhi, Aasheesh Kolli, Pratap Subrahmanyam
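The mechanism described in the abstract, a device that snoops a coherence interconnect and buffers the addresses of cache lines that become dirty, can be illustrated with a minimal sketch. The event names and class shape below are invented for illustration; they are not part of the patent.

```python
class CoherenceDevice:
    """Toy model of a device on a coherence interconnect that records
    which cache lines in the CPU's cache have become dirty."""

    def __init__(self):
        self.dirty_buffer = []  # representations (addresses) of dirty lines

    def on_coherence_event(self, event_type, cache_line_addr):
        # An ownership/write event indicates the CPU now holds the line dirty,
        # so the device appends an entry to its buffer.
        if event_type == "write":
            self.dirty_buffer.append(cache_line_addr)

dev = CoherenceDevice()
dev.on_coherence_event("read", 0x1000)   # clean access: not recorded
dev.on_coherence_event("write", 0x2040)  # line 0x2040 is now dirty
```

A hypervisor could later drain `dirty_buffer` to know exactly which lines must be copied, e.g. during live migration.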
  • Patent number: 11886470
    Abstract: A non-transitory computer readable storage medium has instructions executed by a processor to receive from a network connection different sources of unstructured data, where the unstructured data has multiple modes of semantically distinct data types and the unstructured data has time-varying data instances aggregated over time. An entity combining different sources of the unstructured data is formed. A representation for the entity is created, where the representation includes embeddings that are numeric vectors computed using machine learning embedding models. These operations are repeated to form an aggregation of multimodal, time-varying entities and a corresponding index of individual entities and corresponding embeddings. Proximity searches are performed on embeddings within the index.
    Type: Grant
    Filed: February 23, 2022
    Date of Patent: January 30, 2024
    Assignee: Graft, Inc.
    Inventors: Adam Oliner, Maria Kazandjieva, Eric Schkufza, Mher Hakobyan, Irina Calciu, Brian Calvert, Daniel Woolridge
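The pipeline in the abstract, forming entities from multiple unstructured sources, embedding them as numeric vectors, indexing them, and running proximity searches, can be sketched as follows. The `embed` function is a crude stand-in for a learned embedding model; everything here is hypothetical, not the patented system.

```python
import math

def embed(text):
    # Stand-in for a machine learning embedding model:
    # hash characters into a normalized 4-dimensional vector.
    v = [0.0] * 4
    for i, ch in enumerate(text):
        v[i % 4] += ord(ch)
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

# Each entity aggregates unstructured data from different sources over time;
# the index maps entity ids to their embeddings.
index = {}
index["user:1"] = embed("clicked product page, wrote review")
index["user:2"] = embed("uploaded photo, posted comment")

def proximity_search(query, k=1):
    # Nearest-neighbor search over embeddings in the index.
    q = embed(query)
    dist = lambda v: sum((a - b) ** 2 for a, b in zip(q, v))
    return sorted(index, key=lambda e: dist(index[e]))[:k]
```

A production system would use trained models and an approximate-nearest-neighbor index rather than brute-force distance over a dict.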
  • Patent number: 11809417
    Abstract: A non-transitory computer readable storage medium has instructions executed by a processor to receive from a network connection different sources of unstructured data. An entity is formed by combining one or more sources of the unstructured data, where the entity has relational data attributes. A representation for the entity is created, where the representation includes embeddings that are numeric vectors computed using machine learning embedding models, including trunk models, where a trunk model is a machine learning model trained on data in a self-supervised manner. An enrichment model is created to predict a property of the entity. A query is processed to produce a query result, where the query is applied to one or more of the entity, the embeddings, the machine learning embedding models, and the enrichment model.
    Type: Grant
    Filed: September 28, 2021
    Date of Patent: November 7, 2023
    Assignee: Graft, Inc.
    Inventors: Adam Oliner, Maria Kazandjieva, Eric Schkufza, Mher Hakobyan, Irina Calciu, Brian Calvert
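The relationship between trunk models, enrichment models, and queries in this abstract can be sketched with toy functions. The threshold rule and entity names below are invented for illustration only.

```python
def trunk_model(text):
    # Stand-in for a trunk model; in practice this would be trained on
    # data in a self-supervised manner. Here: two crude numeric features.
    return [len(text), sum(map(ord, text)) % 100]

def enrichment_model(embedding):
    # Predicts a property of the entity from its embedding (toy rule).
    return "long" if embedding[0] > 20 else "short"

entities = {"doc:1": "a short note",
            "doc:2": "a considerably longer document body"}
embeddings = {eid: trunk_model(t) for eid, t in entities.items()}

def query(predicate):
    # A query applied over the entities, their embeddings,
    # and the enrichment model's predictions.
    return [eid for eid, emb in embeddings.items()
            if predicate(enrichment_model(emb))]
```

The point of the split is that one expensive trunk model can serve many cheap enrichment models, each predicting a different property.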
  • Patent number: 11782832
    Abstract: In a computer system, a processor and an I/O device controller communicate with each other via a coherence interconnect and according to a cache coherence protocol. Registers of the I/O device controller are mapped to the cache coherent memory space to allow the processor to treat the registers as cacheable memory. As a result, latency of processor commands executed by the I/O device controller is decreased, and the size of data stored in the I/O device controller that can be accessed by the processor is increased from the size of a single register to the size of an entire cache line.
    Type: Grant
    Filed: August 25, 2021
    Date of Patent: October 10, 2023
    Assignee: VMware, Inc.
    Inventors: Isam Wadih Akkawi, Andreas Nowatzyk, Pratap Subrahmanyam, Nishchay Dua, Adarsh Seethanadi Nayak, Venkata Subhash Reddy Peddamallu, Irina Calciu
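The benefit described above, a full cache line of device-register state per access instead of a single register, can be shown with a small simulation. The class and byte layout are assumptions for illustration.

```python
CACHE_LINE = 64  # bytes per cache line

class IODeviceController:
    """Toy controller whose registers are backed by cache-coherent memory,
    so the processor reads a whole 64-byte line of register state at once."""

    def __init__(self):
        self.regs = bytearray(CACHE_LINE)

    def read_line(self, offset=0):
        # One coherence transaction returns an entire cache line,
        # not just the single register at `offset`.
        start = (offset // CACHE_LINE) * CACHE_LINE
        return bytes(self.regs[start:start + CACHE_LINE])

dev = IODeviceController()
dev.regs[0:4] = (1234).to_bytes(4, "little")  # hypothetical status register
line = dev.read_line()
```

With memory-mapped uncacheable registers, each 4-byte register would cost a separate slow access; here one line read exposes all 64 bytes.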
  • Patent number: 11720447
    Abstract: Techniques for achieving application high availability via application-transparent battery-backed replication of persistent data are provided. In one set of embodiments, a computer system can detect a failure that causes an application of the computer system to stop running. In response to detecting the failure, the computer system can copy persistent data written by the application and maintained locally at the computer system to one or more remote destinations, where the copying is performed in a manner that is transparent to the application and while the computer system runs on battery power. The application can then be restarted on another computer system using the copied data.
    Type: Grant
    Filed: January 7, 2021
    Date of Patent: August 8, 2023
    Assignee: VMware, Inc.
    Inventors: Pratap Subrahmanyam, Rajesh Venkatasubramanian, Kiran Tati, Qasim Ali, Marcos Aguilera, Irina Calciu, Venkata Subhash Reddy Peddamallu, Xavier Deguillard, Yi Yao
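The failover flow in this abstract, detect failure, copy locally persisted data to remote destinations while on battery power, restart elsewhere, can be sketched as a toy simulation. Host names and data are illustrative.

```python
class Host:
    def __init__(self, name):
        self.name = name
        self.persistent_data = {}

def replicate_on_failure(failed_host, destinations):
    # Transparent to the application: on failure detection, copy the
    # application's locally persisted data to one or more remote
    # destinations (while the failed host runs on battery power).
    for dest in destinations:
        dest.persistent_data.update(failed_host.persistent_data)

src = Host("src")
dst = Host("dst")
src.persistent_data["app.state"] = b"checkpoint-42"
replicate_on_failure(src, [dst])   # triggered by the detected failure
```

After replication, the application can be restarted on `dst` using the copied data, without having participated in the replication itself.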
  • Publication number: 20230069958
    Abstract: A non-transitory computer readable storage medium has instructions executed by a processor to receive from a network connection different sources of unstructured data, where the unstructured data has multiple modes of semantically distinct data types and the unstructured data has time-varying data instances aggregated over time. An entity combining different sources of the unstructured data is formed. A representation for the entity is created, where the representation includes embeddings that are numeric vectors computed using machine learning embedding models. These operations are repeated to form an aggregation of multimodal, time-varying entities and a corresponding index of individual entities and corresponding embeddings. Proximity searches are performed on embeddings within the index.
    Type: Application
    Filed: February 23, 2022
    Publication date: March 9, 2023
    Inventors: Adam OLINER, Maria KAZANDJIEVA, Eric SCHKUFZA, Mher HAKOBYAN, Irina CALCIU, Brian CALVERT, Daniel WOOLRIDGE
  • Publication number: 20230072311
    Abstract: A non-transitory computer readable storage medium has instructions executed by a processor to receive from a network connection different sources of unstructured data. An entity is formed by combining one or more sources of the unstructured data, where the entity has relational data attributes. A representation for the entity is created, where the representation includes embeddings that are numeric vectors computed using machine learning embedding models, including trunk models, where a trunk model is a machine learning model trained on data in a self-supervised manner. An enrichment model is created to predict a property of the entity. A query is processed to produce a query result, where the query is applied to one or more of the entity, the embeddings, the machine learning embedding models, and the enrichment model.
    Type: Application
    Filed: September 28, 2021
    Publication date: March 9, 2023
    Inventors: Adam OLINER, Maria KAZANDJIEVA, Eric SCHKUFZA, Mher HAKOBYAN, Irina CALCIU, Brian CALVERT
  • Publication number: 20230069152
    Abstract: In a computer system, a processor and an I/O device controller communicate with each other via a coherence interconnect and according to a cache coherence protocol. Registers of the I/O device controller are mapped to the cache coherent memory space to allow the processor to treat the registers as cacheable memory. As a result, latency of processor commands executed by the I/O device controller is decreased, and the size of data stored in the I/O device controller that can be accessed by the processor is increased from the size of a single register to the size of an entire cache line.
    Type: Application
    Filed: August 25, 2021
    Publication date: March 2, 2023
    Inventors: Isam Wadih AKKAWI, Andreas NOWATZYK, Pratap SUBRAHMANYAM, Nishchay DUA, Adarsh Seethanadi NAYAK, Venkata Subhash Reddy PEDDAMALLU, Irina CALCIU
  • Patent number: 11586545
    Abstract: Memory pages of a local application program are prefetched from a memory of a remote host. A method of prefetching the memory pages from the remote memory includes detecting that a cache-line access made by a processor executing the local application program is an access to a cache line containing page table data of the local application program, identifying data pages that are referenced by the page table data, and fetching the identified data pages from the remote memory and storing the fetched data pages in a local memory.
    Type: Grant
    Filed: July 2, 2021
    Date of Patent: February 21, 2023
    Assignee: VMware, Inc.
    Inventors: Irina Calciu, Andreas Nowatzyk, Isam Wadih Akkawi, Venkata Subhash Reddy Peddamallu, Pratap Subrahmanyam
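The prefetch trigger described here, noticing that a cache-line access touches page-table data and then fetching the data pages that table references, can be sketched with a toy page table. All structures below are simplified stand-ins.

```python
PAGE = 4096

# Toy page table: virtual page number -> remote physical page number.
page_table = {0: 7, 1: 8, 2: 9}
remote_memory = {7: b"A" * PAGE, 8: b"B" * PAGE, 9: b"C" * PAGE}
local_memory = {}

def on_cache_line_access(is_page_table_line, referenced_vpns):
    # If the CPU touched a cache line containing page-table data, prefetch
    # every data page that line references from the remote host's memory.
    if not is_page_table_line:
        return
    for vpn in referenced_vpns:
        ppn = page_table[vpn]
        local_memory[vpn] = remote_memory[ppn]  # fetch + store locally

on_cache_line_access(True, [0, 2])
```

The insight is that a page-table walk predicts imminent data accesses, so those pages can be pulled from remote memory before the application faults on them.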
  • Publication number: 20230028825
    Abstract: A device tracks accesses to pages of code executed by processors and modifies a portion of the code without terminating the execution of the code. The device is connected to the processors via a coherence interconnect and a local memory of the device stores the code pages. As a result, any requests to access cache lines of the code pages made by the processors will be placed on the coherence interconnect, and the device is able to track any cache-line accesses of the code pages by monitoring the coherence interconnect. In response to a request to read a cache line having a particular address, a modified code portion is returned in place of the code portion stored in the code pages.
    Type: Application
    Filed: November 19, 2021
    Publication date: January 26, 2023
    Inventors: Irina CALCIU, Andreas NOWATZYK, Pratap SUBRAHMANYAM
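The live-patching idea in this abstract, serving a modified code portion in place of the stored one when a particular cache line is read, reduces to an interposition on reads. The addresses and byte strings below are purely illustrative.

```python
# Original code pages live in the device's local memory,
# keyed here by cache-line address.
code_pages = {0x400000: b"orig-insn", 0x400040: b"next-insn"}

# Modified code portions, installed without terminating execution.
patches = {0x400000: b"patched-insn"}

def read_code_line(addr):
    # The device observes the read request on the coherence interconnect
    # and returns patched bytes in place of the stored code portion.
    return patches.get(addr, code_pages[addr])
```

Because all code-page reads must cross the interconnect to reach the device's local memory, the device sees every fetch and can substitute bytes transparently.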
  • Publication number: 20230023256
    Abstract: A method of performing a copy-on-write on a shared memory page is carried out by a device communicating with a processor via a coherence interconnect. The method includes: adding a page table entry so that a request to read a first cache line of the shared memory page includes a cache-line address of the shared memory page and a request to write to a second cache line of the shared memory page includes a cache-line address of a new memory page; in response to the request to write to the second cache line, storing new data of the second cache line in a second memory and associating the second cache-line address with the new data stored in the second memory; and in response to a request to read the second cache line, reading the new data of the second cache line from the second memory.
    Type: Application
    Filed: September 28, 2021
    Publication date: January 26, 2023
    Inventors: Irina CALCIU, Andreas NOWATZYK, Pratap SUBRAHMANYAM
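The cache-line-granularity copy-on-write in this abstract can be sketched as follows: writes are routed to a new page's cache-line address and stored in a second memory, while unwritten lines still come from the shared page. Addresses and sizes are toy values.

```python
LINE = 64
shared_page = bytearray(b"S" * 4096)   # the original shared memory page
second_memory = {}                     # new-page cache lines live here

def write_line(new_page_line_addr, data):
    # The added page table entry makes writes carry the new page's
    # cache-line address; the new data is stored in the second memory.
    second_memory[new_page_line_addr] = data

def read_line(addr, is_new_page):
    # Reads of written lines are served from the second memory;
    # everything else still comes from the shared page.
    if is_new_page and addr in second_memory:
        return second_memory[addr]
    return bytes(shared_page[addr % 4096: addr % 4096 + LINE])

write_line(0x5000, b"N" * LINE)
```

Unlike conventional copy-on-write, only the written cache lines are copied, not the entire 4 KiB page.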
  • Publication number: 20230022096
    Abstract: While an application or a virtual machine (VM) is running, a device tracks accesses to cache lines to detect access patterns that indicate security attacks, such as cache-based side channel attacks or row hammer attacks. To enable the device to detect accesses to cache lines, the device is connected to processors via a coherence interconnect, and the application/VM data is stored in a local memory of the device. The device collects the cache lines of the application/VM data that are accessed while the application/VM is running into a buffer and the buffer is analyzed for access patterns that indicate security attacks.
    Type: Application
    Filed: July 22, 2021
    Publication date: January 26, 2023
    Inventors: Irina CALCIU, Andreas NOWATZYK, Pratap SUBRAHMANYAM
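The detection loop in this abstract, collect accessed cache lines into a buffer and scan the buffer for suspicious patterns, can be sketched with a crude row-hammer heuristic. The threshold and detection rule are invented for illustration; real detectors are far more sophisticated.

```python
from collections import Counter

access_buffer = []  # cache lines of application/VM data seen by the device

def record_access(cache_line_addr):
    # The device collects accessed cache lines into a buffer.
    access_buffer.append(cache_line_addr)

def flag_hammered_lines(threshold=1000):
    # Toy analysis pass: any line accessed past the threshold within the
    # observation window is flagged as a possible row hammer target.
    counts = Counter(access_buffer)
    return [addr for addr, n in counts.items() if n >= threshold]

for _ in range(1500):
    record_access(0x2000)   # suspiciously repeated access
record_access(0x3000)       # ordinary access
```

The same buffer could feed other analyses, e.g. timing-correlated patterns characteristic of cache-based side channels.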
  • Publication number: 20230004497
    Abstract: A method of prefetching memory pages from remote memory includes detecting that a cache-line access made by a processor executing an application program is an access to a cache line containing page table data of the application program, identifying data pages that are referenced by the page table data, initiating a fetch of a data page, which is one of the identified data pages, and starting a timer. If the fetch completes prior to expiration of the timer, the data page is stored in a local memory. On the other hand, if the fetch does not complete prior to expiration of the timer, a presence bit of the data page in the page table data is set to indicate that the data page is not present.
    Type: Application
    Filed: July 25, 2022
    Publication date: January 5, 2023
    Inventors: Irina CALCIU, Andreas NOWATZYK, Isam Wadih AKKAWI, Venkata Subhash Reddy PEDDAMALLU, Pratap SUBRAHMANYAM
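The timer-bounded prefetch in this abstract can be sketched as follows. Rather than real timers and fetches, the sketch takes fetch time and timeout as parameters; the page-table-entry tuple layout is an assumption for illustration.

```python
PRESENT = 1

def prefetch(page_table, vpn, fetch_time, timeout):
    # Start a fetch of the data page and a timer. If the fetch beats the
    # timer, store the page locally and keep it present; otherwise clear
    # the presence bit so a later access takes a normal page fault.
    ppn, _, _ = page_table[vpn]
    if fetch_time <= timeout:
        page_table[vpn] = (ppn, PRESENT, b"fetched-page")
    else:
        page_table[vpn] = (ppn, 0, None)  # marked not present

# Page table entries: (physical page, presence bit, local copy).
pt = {0: (7, PRESENT, None), 1: (8, PRESENT, None)}
prefetch(pt, 0, fetch_time=5, timeout=10)   # completes in time
prefetch(pt, 1, fetch_time=20, timeout=10)  # times out
```

Bounding the wait keeps a slow remote fetch from stalling progress: the timed-out page simply faults later like any non-resident page.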
  • Publication number: 20230004496
    Abstract: Memory pages of a local application program are prefetched from a memory of a remote host. A method of prefetching the memory pages from the remote memory includes detecting that a cache-line access made by a processor executing the local application program is an access to a cache line containing page table data of the local application program, identifying data pages that are referenced by the page table data, and fetching the identified data pages from the remote memory and storing the fetched data pages in a local memory.
    Type: Application
    Filed: July 2, 2021
    Publication date: January 5, 2023
    Inventors: Irina CALCIU, Andreas NOWATZYK, Isam Wadih AKKAWI, Venkata Subhash Reddy PEDDAMALLU, Pratap SUBRAHMANYAM
  • Patent number: 11544194
    Abstract: A method of performing a copy-on-write on a shared memory page is carried out by a device communicating with a processor via a coherence interconnect. The method includes: adding a page table entry so that a request to read a first cache line of the shared memory page includes a cache-line address of the shared memory page and a request to write to a second cache line of the shared memory page includes a cache-line address of a new memory page; in response to the request to write to the second cache line, storing new data of the second cache line in a second memory and associating the second cache-line address with the new data stored in the second memory; and in response to a request to read the second cache line, reading the new data of the second cache line from the second memory.
    Type: Grant
    Filed: September 28, 2021
    Date of Patent: January 3, 2023
    Assignee: VMware, Inc.
    Inventors: Irina Calciu, Andreas Nowatzyk, Pratap Subrahmanyam
  • Publication number: 20220414254
    Abstract: A non-transitory computer readable storage medium with instructions executed by a processor maintains a collection of data access connectors configured to access different sources of unstructured data. A user interface with prompts for designating a selected data access connector from the data access connectors is supplied. Unstructured data is received from the selected data access connector. Numeric vectors characterizing the unstructured data are created from the unstructured data. The numeric vectors are stored and indexed.
    Type: Application
    Filed: May 3, 2022
    Publication date: December 29, 2022
    Inventors: Adam OLINER, Maria KAZANDJIEVA, Eric SCHKUFZA, Mher HAKOBYAN, Irina CALCIU, Brian CALVERT, Deven NAVANI
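The connector-driven ingestion in this abstract, select a connector from a collection, pull unstructured data through it, vectorize, and index, can be sketched with stub connectors. The connector names, sample data, and vectorizer are all invented for illustration.

```python
# Collection of data access connectors for different unstructured sources.
connectors = {
    "s3_text": lambda: ["hello world", "goodbye world"],
    "web_logs": lambda: ["GET /index", "POST /login"],
}

def vectorize(text):
    # Stand-in for an embedding model: vowel-frequency vector.
    return [text.count(c) for c in "aeiou"]

def ingest(connector_name, index):
    # The user designates a connector; its records are received,
    # turned into numeric vectors, and stored in the index.
    for i, record in enumerate(connectors[connector_name]()):
        index[(connector_name, i)] = vectorize(record)

index = {}
ingest("s3_text", index)
```

In the described system the connector choice comes from a user interface prompt; here it is just a function argument.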
  • Publication number: 20220414157
    Abstract: A non-transitory computer readable storage medium has instructions executed by a processor to maintain a repository of machine learning directed acyclic graphs. Each machine learning directed acyclic graph has machine learning artifacts as nodes and machine learning executors as edges joining machine learning artifacts. Each machine learning artifact has typed data that has associated conflict rules maintained by the repository. Each machine learning executor specifies executable code that executes a machine learning artifact as an input and produces a new machine learning artifact as an output. A request about an object in the repository is received. A response with information about the object is supplied.
    Type: Application
    Filed: June 29, 2022
    Publication date: December 29, 2022
    Inventors: Adam OLINER, Maria KAZANDJIEVA, Eric SCHKUFZA, Mher HAKOBYAN, Irina CALCIU, Brian CALVERT, Daniel WOOLRIDGE, Deven NAVANI
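The repository structure in this abstract, artifacts as nodes and executors as edges that consume one artifact and produce another, can be sketched minimally. Typed conflict rules are omitted; all names are illustrative.

```python
class Repository:
    """Toy repository of machine learning DAGs: artifacts are nodes,
    executors are edges mapping one artifact to a new one."""

    def __init__(self):
        self.artifacts = {}   # name -> (type, payload)
        self.executors = []   # (src, dst, fn) edges

    def add_artifact(self, name, typ, payload):
        self.artifacts[name] = (typ, payload)

    def add_executor(self, src, dst, fn):
        # An executor runs code on an input artifact and produces
        # a new artifact of the same type as its output.
        typ, payload = self.artifacts[src]
        self.executors.append((src, dst, fn))
        self.add_artifact(dst, typ, fn(payload))

    def query(self, name):
        # Respond to a request about an object in the repository.
        return {"name": name, "value": self.artifacts[name]}

repo = Repository()
repo.add_artifact("raw", "dataset", [1, 2, 3])
repo.add_executor("raw", "scaled", lambda xs: [2 * x for x in xs])
```

Because every derived artifact records the edge that produced it, the DAG doubles as a lineage record for the whole ML workflow.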
  • Publication number: 20220398199
    Abstract: Techniques for implementing user-space remote memory paging are provided. In one set of embodiments, these techniques include a user-space remote memory paging (RMP) runtime that can: (1) pre-allocate one or more regions of remote memory for use by an application; (2) at a time of receiving/intercepting a memory allocation function call invoked by the application, map the virtual memory address range of the allocated local memory to a portion of the pre-allocated remote memory; (3) at a time of detecting a page fault directed to a page that is mapped to remote memory, retrieve the page via Remote Direct Memory Access (RDMA) from its remote memory location and store the retrieved page in a local main memory cache; and (4) on a periodic basis, identify pages in the local main memory cache that are candidates for eviction and write out the identified pages via RDMA to their mapped remote memory locations if they have been modified.
    Type: Application
    Filed: June 15, 2021
    Publication date: December 15, 2022
    Inventors: Irina Calciu, Muhammad Talha Imran, Nadav Amit
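The four RMP runtime duties enumerated in the abstract (pre-allocation, mapping on allocation, fault-time fetch, periodic writeback eviction) can be sketched as a simulation. RDMA is replaced by plain dictionary copies; all names and sizes are assumptions.

```python
PAGE = 4096

# (1) Pre-allocated remote memory region, 16 pages.
remote_region = {i: bytearray(PAGE) for i in range(16)}
local_cache = {}   # local main-memory cache: vpage -> [bytes, dirty flag]
mapping = {}       # virtual page -> remote page

def rmp_malloc(vpage, rpage):
    # (2) Intercepted allocation: map the virtual range to remote memory.
    mapping[vpage] = rpage

def page_fault(vpage):
    # (3) Fault handler: retrieve the page via (simulated) RDMA and
    # store it in the local main-memory cache.
    local_cache[vpage] = [bytearray(remote_region[mapping[vpage]]), False]

def evict(vpage):
    # (4) Periodic eviction: write modified pages back to their
    # mapped remote locations; clean pages are simply dropped.
    data, dirty = local_cache.pop(vpage)
    if dirty:
        remote_region[mapping[vpage]][:] = data

rmp_malloc(0, 3)
page_fault(0)
local_cache[0][0][0] = 0xFF   # the application writes the page
local_cache[0][1] = True      # mark dirty
evict(0)
```

Running entirely in user space means no kernel swap path is involved; the runtime intercepts allocations and faults itself.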
  • Patent number: 11442865
    Abstract: A method of prefetching memory pages from remote memory includes detecting that a cache-line access made by a processor executing an application program is an access to a cache line containing page table data of the application program, identifying data pages that are referenced by the page table data, initiating a fetch of a data page, which is one of the identified data pages, and starting a timer. If the fetch completes prior to expiration of the timer, the data page is stored in a local memory. On the other hand, if the fetch does not complete prior to expiration of the timer, a presence bit of the data page in the page table data is set to indicate that the data page is not present.
    Type: Grant
    Filed: July 2, 2021
    Date of Patent: September 13, 2022
    Assignee: VMware, Inc.
    Inventors: Irina Calciu, Andreas Nowatzyk, Isam Wadih Akkawi, Venkata Subhash Reddy Peddamallu, Pratap Subrahmanyam
  • Patent number: 11231949
    Abstract: Disclosed are embodiments for migrating a virtual machine (VM) from a source host to a destination host while the virtual machine is running on the destination host. The system includes an RDMA facility connected between the source and destination hosts and a device coupled to a local memory, the local memory being responsible for memory pages of the VM instead of the source host. The device is configured to copy pages of the VM to the destination host and to maintain correct operation of the VM by monitoring coherence events, such as a cache miss, caused by the virtual machine running on the destination host. The device services these cache misses using the RDMA facility and copies the cache line satisfying the cache miss to the CPU running the VM. The device also tracks the cache misses to create an access pattern that it uses to predict future cache misses.
    Type: Grant
    Filed: July 27, 2018
    Date of Patent: January 25, 2022
    Assignee: VMware, Inc.
    Inventors: Irina Calciu, Jayneel Gandhi, Aasheesh Kolli, Pratap Subrahmanyam
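The migration device described in this final abstract services the destination host's cache misses from source memory and mines the miss stream for a predictive access pattern. A minimal sketch, with RDMA replaced by a dictionary lookup and a deliberately naive sequential predictor:

```python
class MigrationDevice:
    """Toy device: serves destination-host cache misses from the source
    host's memory and logs misses to predict future ones."""

    LINE = 64  # cache-line size in bytes

    def __init__(self, source_memory):
        self.source_memory = source_memory  # RDMA stand-in
        self.miss_log = []                  # observed access pattern

    def on_cache_miss(self, line_addr):
        # Service the miss: copy the cache line from source memory
        # to the CPU running the migrated VM.
        self.miss_log.append(line_addr)
        return self.source_memory[line_addr]

    def predict_next(self):
        # Naive predictor: assume sequential access continues.
        return self.miss_log[-1] + self.LINE if self.miss_log else None

dev = MigrationDevice({0: b"line0", 64: b"line1"})
dev.on_cache_miss(0)
```

A real device would use the prediction to pre-copy lines before they miss, shrinking the VM's post-migration warm-up period.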