Patents by Inventor Eric Schkufza
Eric Schkufza has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11886470
Abstract: A non-transitory computer readable storage medium has instructions executed by a processor to receive from a network connection different sources of unstructured data, where the unstructured data has multiple modes of semantically distinct data types and the unstructured data has time-varying data instances aggregated over time. An entity combining different sources of the unstructured data is formed. A representation for the entity is created, where the representation includes embeddings that are numeric vectors computed using machine learning embedding models. These operations are repeated to form an aggregation of multimodal, time-varying entities and a corresponding index of individual entities and corresponding embeddings. Proximity searches are performed on embeddings within the index.
Type: Grant
Filed: February 23, 2022
Date of Patent: January 30, 2024
Assignee: Graft, Inc.
Inventors: Adam Oliner, Maria Kazandjieva, Eric Schkufza, Mher Hakobyan, Irina Calciu, Brian Calvert, Daniel Woolridge
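The core operation this abstract describes, a proximity search over an index of entity embeddings, can be illustrated with a minimal sketch. The patent does not specify an index structure or similarity measure; the in-memory index, cosine similarity, and all names below are invented for illustration.

```python
import numpy as np

# Hypothetical illustration: a tiny in-memory index mapping entity ids to
# embedding vectors, with a cosine-similarity proximity search.
class EmbeddingIndex:
    def __init__(self):
        self.ids = []          # entity identifiers
        self.vectors = []      # one embedding per entity

    def add(self, entity_id, vector):
        self.ids.append(entity_id)
        self.vectors.append(np.asarray(vector, dtype=float))

    def search(self, query, k=3):
        # Rank entities by cosine similarity to the query embedding.
        m = np.stack(self.vectors)
        q = np.asarray(query, dtype=float)
        sims = (m @ q) / (np.linalg.norm(m, axis=1) * np.linalg.norm(q))
        order = np.argsort(-sims)[:k]
        return [(self.ids[i], float(sims[i])) for i in order]

index = EmbeddingIndex()
index.add("doc-1", [1.0, 0.0, 0.0])
index.add("doc-2", [0.9, 0.1, 0.0])
index.add("doc-3", [0.0, 1.0, 0.0])
results = index.search([1.0, 0.05, 0.0], k=2)
```

A production system would use an approximate-nearest-neighbor index rather than a brute-force scan, but the interface is the same: add embeddings, then query by proximity.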
-
Patent number: 11809417
Abstract: A non-transitory computer readable storage medium has instructions executed by a processor to receive from a network connection different sources of unstructured data. An entity is formed by combining one or more sources of the unstructured data, where the entity has relational data attributes. A representation for the entity is created, where the representation includes embeddings that are numeric vectors computed using machine learning embedding models, including trunk models, where a trunk model is a machine learning model trained on data in a self-supervised manner. An enrichment model is created to predict a property of the entity. A query is processed to produce a query result, where the query is applied to one or more of the entity, the embeddings, the machine learning embedding models, and the enrichment model.
Type: Grant
Filed: September 28, 2021
Date of Patent: November 7, 2023
Assignee: Graft, Inc.
Inventors: Adam Oliner, Maria Kazandjieva, Eric Schkufza, Mher Hakobyan, Irina Calciu, Brian Calvert
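The trunk/enrichment split described here can be sketched as a frozen embedding function plus a lightweight model trained on its outputs to predict an entity property. The toy bag-of-characters "trunk" and nearest-centroid "enrichment" below are stand-ins invented for illustration, not the patent's method.

```python
import numpy as np

# Hypothetical stand-in for a self-supervised trunk model: a normalized
# bag-of-characters vector for a text entity.
def trunk_embed(text):
    v = np.zeros(26)
    for ch in text.lower():
        if ch.isalpha():
            v[ord(ch) - ord("a")] += 1.0
    return v / max(np.linalg.norm(v), 1e-9)

class NearestCentroidEnrichment:
    # Predicts a property label by proximity to per-label centroids.
    def fit(self, embeddings, labels):
        self.centroids = {
            lbl: np.mean([e for e, l in zip(embeddings, labels) if l == lbl], axis=0)
            for lbl in set(labels)
        }
        return self

    def predict(self, embedding):
        return min(self.centroids,
                   key=lambda lbl: np.linalg.norm(embedding - self.centroids[lbl]))

texts = ["aaa aab", "abab", "zzzy", "yzzy"]
labels = ["a-ish", "a-ish", "z-ish", "z-ish"]
model = NearestCentroidEnrichment().fit([trunk_embed(t) for t in texts], labels)
pred = model.predict(trunk_embed("aaba"))
```

The design point is that the trunk is trained once, self-supervised, and reused; only the small enrichment model needs labeled data.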
-
Publication number: 20230072311
Abstract: A non-transitory computer readable storage medium has instructions executed by a processor to receive from a network connection different sources of unstructured data. An entity is formed by combining one or more sources of the unstructured data, where the entity has relational data attributes. A representation for the entity is created, where the representation includes embeddings that are numeric vectors computed using machine learning embedding models, including trunk models, where a trunk model is a machine learning model trained on data in a self-supervised manner. An enrichment model is created to predict a property of the entity. A query is processed to produce a query result, where the query is applied to one or more of the entity, the embeddings, the machine learning embedding models, and the enrichment model.
Type: Application
Filed: September 28, 2021
Publication date: March 9, 2023
Inventors: Adam OLINER, Maria KAZANDJIEVA, Eric SCHKUFZA, Mher HAKOBYAN, Irina CALCIU, Brian CALVERT
-
Publication number: 20230069958
Abstract: A non-transitory computer readable storage medium has instructions executed by a processor to receive from a network connection different sources of unstructured data, where the unstructured data has multiple modes of semantically distinct data types and the unstructured data has time-varying data instances aggregated over time. An entity combining different sources of the unstructured data is formed. A representation for the entity is created, where the representation includes embeddings that are numeric vectors computed using machine learning embedding models. These operations are repeated to form an aggregation of multimodal, time-varying entities and a corresponding index of individual entities and corresponding embeddings. Proximity searches are performed on embeddings within the index.
Type: Application
Filed: February 23, 2022
Publication date: March 9, 2023
Inventors: Adam OLINER, Maria KAZANDJIEVA, Eric SCHKUFZA, Mher HAKOBYAN, Irina CALCIU, Brian CALVERT, Daniel WOOLRIDGE
-
Patent number: 11573817
Abstract: Examples provide a method of virtualizing a hardware accelerator in a virtualized computing system. The virtualized computing system includes a hypervisor supporting execution of a plurality of virtual machines (VMs). The method includes: receiving a plurality of sub-programs at a compiler in the hypervisor from a plurality of compilers in the respective plurality of VMs, each of the sub-programs including a hardware-description language (HDL) description; combining, at the compiler in the hypervisor, the plurality of sub-programs into a monolithic program; generating, by the compiler in the hypervisor, a circuit implementation for the monolithic program, the circuit implementation including a plurality of sub-circuits for the respective plurality of sub-programs; and loading, by the compiler in the hypervisor, the circuit implementation to a programmable device of the hardware accelerator.
Type: Grant
Filed: July 21, 2020
Date of Patent: February 7, 2023
Assignee: VMware, Inc.
Inventors: Eric Schkufza, Christopher J. Rossbach
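The combining step above, merging per-VM HDL sub-programs into one monolithic program whose top level instantiates each sub-circuit, can be sketched at the level of source text. The Verilog-like fragments and all function names below are invented for illustration; a real hypervisor compiler operates on parsed HDL, not strings.

```python
# Hypothetical sketch: sub-programs from several VMs, each an HDL fragment,
# are merged into one monolithic program whose top module instantiates each
# renamed sub-circuit.
def combine(sub_programs):
    body = "\n".join(f"  sub_{i} inst_{i}();" for i in range(len(sub_programs)))
    monolith = "\n\n".join(
        src.replace("module sub", f"module sub_{i}", 1)   # rename to avoid clashes
        for i, src in enumerate(sub_programs)
    )
    return monolith + "\n\nmodule top;\n" + body + "\nendmodule\n"

vm_a = "module sub; /* circuit from VM A */ endmodule"
vm_b = "module sub; /* circuit from VM B */ endmodule"
program = combine([vm_a, vm_b])
```

The renaming step mirrors why a single hypervisor-level compiler is needed: independently written VM programs may collide on names, and only a global view can lay them out as disjoint sub-circuits on one device.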
-
Publication number: 20220414157
Abstract: A non-transitory computer readable storage medium has instructions executed by a processor to maintain a repository of machine learning directed acyclic graphs. Each machine learning directed acyclic graph has machine learning artifacts as nodes and machine learning executors as edges joining machine learning artifacts. Each machine learning artifact has typed data that has associated conflict rules maintained by the repository. Each machine learning executor specifies executable code that executes a machine learning artifact as an input and produces a new machine learning artifact as an output. A request about an object in the repository is received. A response with information about the object is supplied.
Type: Application
Filed: June 29, 2022
Publication date: December 29, 2022
Inventors: Adam OLINER, Maria KAZANDJIEVA, Eric SCHKUFZA, Mher HAKOBYAN, Irina CALCIU, Brian CALVERT, Daniel WOOLRIDGE, Deven NAVANI
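The repository structure this abstract describes, artifacts as nodes and executors as edges that consume one artifact and produce another, can be sketched in a few lines. All class, method, and artifact names below are invented for illustration; the publication's conflict rules and typing discipline are omitted.

```python
# Hypothetical sketch of a DAG repository: machine learning artifacts as
# nodes, executors as edges from an input artifact to a produced artifact.
class Repository:
    def __init__(self):
        self.artifacts = {}   # name -> (artifact_type, payload)
        self.executors = []   # (input_name, output_name, fn)

    def add_artifact(self, name, artifact_type, payload):
        self.artifacts[name] = (artifact_type, payload)

    def add_executor(self, input_name, fn, output_name, output_type):
        # Running the executor materializes a new artifact node and records
        # the edge from input to output.
        _, payload = self.artifacts[input_name]
        self.add_artifact(output_name, output_type, fn(payload))
        self.executors.append((input_name, output_name, fn))

    def query(self, name):
        # Answer a request about an object with its type, value, and lineage.
        artifact_type, payload = self.artifacts[name]
        parents = [src for src, dst, _ in self.executors if dst == name]
        return {"type": artifact_type, "payload": payload, "derived_from": parents}

repo = Repository()
repo.add_artifact("raw", "dataset", [1, 2, 3])
repo.add_executor("raw", lambda xs: [x * 2 for x in xs], "scaled", "dataset")
info = repo.query("scaled")
```

Recording executors as edges is what makes lineage queries possible: any artifact can be traced back through the executors that produced it.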
-
Publication number: 20220414254
Abstract: A non-transitory computer readable storage medium with instructions executed by a processor maintains a collection of data access connectors configured to access different sources of unstructured data. A user interface with prompts for designating a selected data access connector from the data access connectors is supplied. Unstructured data is received from the selected data access connector. Numeric vectors characterizing the unstructured data are created from the unstructured data. The numeric vectors are stored and indexed.
Type: Application
Filed: May 3, 2022
Publication date: December 29, 2022
Inventors: Adam OLINER, Maria KAZANDJIEVA, Eric SCHKUFZA, Mher HAKOBYAN, Irina CALCIU, Brian CALVERT, Deven NAVANI
-
Patent number: 11347373
Abstract: Methods and systems to sample event messages are described. As event messages are generated by one or more sources, the event messages are stored in a storage queue. An event message policy that represents conditions for storing event messages in a sample log file is input. For each event message output from the storage queue, the event message may be stored in a sample log file when one or more of the conditions of the event message policy are satisfied. The event messages of the sample log file may be displayed in a graphical user interface that enables a user to change the event message policy.
Type: Grant
Filed: October 5, 2016
Date of Patent: May 31, 2022
Assignee: VMware, Inc.
Inventors: Udi Wieder, Dahlia Malkhi, Eric Schkufza, Mayank Agarwal, Nicholas Kushmerick, Ramses Morales
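The flow above, messages entering a storage queue and a policy deciding which ones reach the sample log, can be sketched as follows. The policy predicate and event fields are invented for illustration; the patent's policy language is not specified here.

```python
from collections import deque

# Hypothetical sketch: generated event messages pass through a storage
# queue, and a policy predicate decides which are kept in the sample log.
def sample_events(messages, policy):
    queue = deque(messages)       # storage queue of incoming event messages
    sample_log = []
    while queue:
        msg = queue.popleft()
        if policy(msg):           # keep only messages satisfying the policy
            sample_log.append(msg)
    return sample_log

events = [
    {"level": "info", "text": "heartbeat"},
    {"level": "error", "text": "disk failure"},
    {"level": "warn", "text": "slow response"},
]
# Illustrative policy: sample only warnings and errors.
log = sample_events(events, lambda m: m["level"] in {"warn", "error"})
```

Because the policy is a separate input rather than hard-coded, a user interface can swap it at runtime, which is the interactive loop the abstract describes.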
-
Publication number: 20220027181
Abstract: Examples provide a method of virtualizing a hardware accelerator in a virtualized computing system. The virtualized computing system includes a hypervisor supporting execution of a plurality of virtual machines (VMs). The method includes: receiving a plurality of sub-programs at a compiler in the hypervisor from a plurality of compilers in the respective plurality of VMs, each of the sub-programs including a hardware-description language (HDL) description; combining, at the compiler in the hypervisor, the plurality of sub-programs into a monolithic program; generating, by the compiler in the hypervisor, a circuit implementation for the monolithic program, the circuit implementation including a plurality of sub-circuits for the respective plurality of sub-programs; and loading, by the compiler in the hypervisor, the circuit implementation to a programmable device of the hardware accelerator.
Type: Application
Filed: July 21, 2020
Publication date: January 27, 2022
Inventors: Eric SCHKUFZA, Christopher J. ROSSBACH
-
Patent number: 11003472
Abstract: A system and method are disclosed for executing a hardware component of a design in a hardware engine, where the component includes a pre-compiled library component. The hardware component is compiled to include an interface that supports a ‘forward()’ function which, when invoked, requests that the hardware engine running the hardware component run so that interactions between the library component and the hardware component are handled locally by the hardware engine, without communicating with the runtime system. Handling the library component without the runtime system intervening allows the library component to run at a speed that is close to the native speed of the target re-programmable hardware fabric. In addition, library components targeted to the specific reprogrammable hardware fabric are available to the design without compilation.
Type: Grant
Filed: January 25, 2019
Date of Patent: May 11, 2021
Assignee: VMware, Inc.
Inventors: Eric Schkufza, Michael Wei
-
Patent number: 11003471
Abstract: A system and method are disclosed for executing a component of a design in a hardware engine. The component is compiled to include an interface that supports an ‘open_loop(n)’ function which, when invoked, requests that the hardware engine run for a specified number of steps before communicating with other hardware or software engines via a runtime system. After the compiled hardware component is transferred to the hardware engine, the hardware engine runs for the specified number of steps unless and until it encounters a system function, such as a ‘display(s)’ function, in the code of the component that requires the runtime system to intervene. The hardware engine pauses while awaiting completion of the system function and then resumes execution. The ‘open_loop(n)’ operation of the hardware engine permits components in hardware engines to run at a speed close to the native speed of the target programmable hardware fabric.
Type: Grant
Filed: January 25, 2019
Date of Patent: May 11, 2021
Assignee: VMware, Inc.
Inventors: Eric Schkufza, Michael Wei
-
Patent number: 10997338
Abstract: A system and method for executing a hardware component of a design are disclosed. The system and method execute hardware components that are constructed with a ‘display(s)’ function that permits the hardware component to display values “s” internal to the hardware component while the component is executing on a hardware engine. The values are displayed on a user output interface, such as a user terminal, supported by a runtime system controlling the execution of the hardware engine and thus allows the user to debug the component while it is executing on the hardware engine.
Type: Grant
Filed: January 25, 2019
Date of Patent: May 4, 2021
Assignee: VMware, Inc.
Inventors: Eric Schkufza, Michael Wei
-
Patent number: 10990730
Abstract: A method for implementing a distributed hardware system includes retrieving a hardware design described in a hardware description language, where the hardware design includes a plurality of modules. The method includes sending modules of the design to software engines, where the runtime software maintains, for each module being simulated, an update queue and an evaluate queue. The update queue contains events that update stateful objects in the module and cause evaluation events to be enqueued onto the evaluate queue, while the evaluate queue contains evaluate events that update stateless objects and cause update events to be enqueued onto the update queue. Having an update queue and an evaluate queue for each module permits the runtime to manage module simulations so that the modules execute concurrently with one another.
Type: Grant
Filed: July 9, 2018
Date of Patent: April 27, 2021
Assignee: VMware, Inc.
Inventors: Eric Schkufza, Michael Wei
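The two-queue scheduling loop described above can be sketched for a single toy module. The module modeled here (a register whose value feeds a combinational doubler) and all names are invented for illustration; real HDL simulation schedules many objects per queue.

```python
from collections import deque

# Hypothetical sketch of the per-module loop: update events mutate stateful
# objects and enqueue evaluation events; evaluation events recompute
# stateless objects and may enqueue further update events.
def simulate(initial_value, steps):
    state = {"reg": initial_value, "wire": None}
    update_q, evaluate_q = deque(), deque()
    update_q.append(("reg", initial_value))
    for _ in range(steps):
        while update_q:
            name, value = update_q.popleft()
            state[name] = value                    # update a stateful object
            evaluate_q.append("wire")              # schedule re-evaluation
        while evaluate_q:
            evaluate_q.popleft()
            state["wire"] = state["reg"] * 2       # recompute stateless object
        update_q.append(("reg", state["reg"] + 1)) # next-cycle update event
    return state

final = simulate(0, 3)
```

Because each module owns its queue pair, the runtime can drain different modules' queues in parallel, which is the concurrency claim in the abstract.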
-
Patent number: 10885247
Abstract: A method for implementing a distributed hardware system includes retrieving a hardware design described in a hardware description language, where the hardware design includes a plurality of components. The method further includes, for each component of the hardware design, sending the component to a hardware compiler and to one of a plurality of software engines, where the hardware compiler compiles the component to run in one of a plurality of hardware engines and the one software engine simulates the component while the hardware compiler compiles the component for the one hardware engine, and upon completion of the compilation of the component, sending the compiled component to one of the hardware engines to be executed by the one hardware engine and monitoring communication so that the one hardware engine can interact with other components in other hardware engines or software engines.
Type: Grant
Filed: January 26, 2018
Date of Patent: January 5, 2021
Assignee: VMware, Inc.
Inventors: Eric Schkufza, Michael Wei
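The overlap this abstract describes, simulating a component in software while the slow hardware compile runs, then switching to the compiled form, can be sketched with a background thread standing in for the compiler. Timings, names, and the component's behavior are invented for illustration.

```python
import queue
import threading
import time

# Hypothetical sketch: a software engine simulates a component while a
# (slow) hardware compile runs in the background; once compilation finishes,
# execution hot-swaps to the compiled form.
def run_component(component):
    compiled = queue.Queue()

    def hardware_compile():
        time.sleep(0.05)                      # stand-in for a slow FPGA compile
        compiled.put(lambda x: component(x))  # the "compiled" artifact

    threading.Thread(target=hardware_compile, daemon=True).start()
    results, engine = [], component           # start in software simulation
    for step in range(5):
        if not compiled.empty():
            engine = compiled.get()           # swap to the hardware engine
        results.append(engine(step))
        time.sleep(0.02)
    return results

out = run_component(lambda x: x + 1)
```

The point of the design is that the user sees a running system immediately; the swap to hardware is transparent because both engines implement the same component semantics.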
-
Patent number: 10515029
Abstract: Techniques for facilitating conversion of an application from a block-based persistence model to a byte-based persistence model are provided. In one embodiment, a computer system can receive source code of the application and automatically identify data structures in the source code that are part of the application's semantic persistent state. The computer system can then output a list of data types corresponding to the identified data structures.
Type: Grant
Filed: November 18, 2016
Date of Patent: December 24, 2019
Assignee: VMware, Inc.
Inventors: Vijaychidambaram Velayudhan Pillai, Irina Calciu, Himanshu Chauhan, Eric Schkufza, Onur Mutlu, Pratap Subrahmanyam
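The analysis step above, scanning source code for data structures that belong to the application's persistent state and reporting their types, can be sketched with a toy heuristic. A real tool would use a compiler front end; the regexes, the `persist()` call, and the C fragment below are purely illustrative assumptions.

```python
import re

# Hypothetical sketch: find struct variables that are passed to a persist()
# call and report their declared types.
PERSIST_CALL = re.compile(r"persist\s*\(\s*&?(\w+)")
STRUCT_DECL = re.compile(r"struct\s+(\w+)\s+(\w+)\s*;")

def persistent_types(source):
    declared = {var: typ for typ, var in STRUCT_DECL.findall(source)}
    persisted_vars = set(PERSIST_CALL.findall(source))
    return sorted({declared[v] for v in persisted_vars if v in declared})

code = """
struct inode root;
struct cache tmp;
persist(&root);
"""
types = persistent_types(code)
```

The output, a list of persistent data types, is what a developer would use to decide which structures need byte-granular persistence handling during the conversion.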
-
Publication number: 20190236229
Abstract: A method for implementing a distributed hardware system includes retrieving a hardware design described in a hardware description language, where the hardware design includes a plurality of components. The method further includes, for each component of the hardware design, sending the component to a hardware compiler and to one of a plurality of software engines, where the hardware compiler compiles the component to run in one of a plurality of hardware engines and the one software engine simulates the component while the hardware compiler compiles the component for the one hardware engine, and upon completion of the compilation of the component, sending the compiled component to one of the hardware engines to be executed by the one hardware engine and monitoring communication so that the one hardware engine can interact with other components in other hardware engines or software engines.
Type: Application
Filed: January 26, 2018
Publication date: August 1, 2019
Inventors: Eric SCHKUFZA, Michael WEI
-
Publication number: 20190235892
Abstract: A system and method are disclosed for executing a component of a design in a hardware engine. The component is compiled to include an interface that supports an ‘open_loop(n)’ function which, when invoked, requests that the hardware engine run for a specified number of steps before communicating with other hardware or software engines via a runtime system. After the compiled hardware component is transferred to the hardware engine, the hardware engine runs for the specified number of steps unless and until it encounters a system function, such as a ‘display(s)’ function, in the code of the component that requires the runtime system to intervene. The hardware engine pauses while awaiting completion of the system function and then resumes execution. The ‘open_loop(n)’ operation of the hardware engine permits components in hardware engines to run at a speed close to the native speed of the target programmable hardware fabric.
Type: Application
Filed: January 25, 2019
Publication date: August 1, 2019
Inventors: Eric SCHKUFZA, Michael WEI
-
Publication number: 20190236231
Abstract: A system and method for executing a hardware component of a design are disclosed. The system and method execute hardware components that are constructed with a ‘display(s)’ function that permits the hardware component to display values internal to the hardware component while the component is executing on a hardware engine. The values are displayed on a user output interface, such as a user terminal, supported by a runtime system controlling the execution of the hardware engine, and thus allow the user to have a view into the executing hardware that would otherwise be unavailable. This view permits the user to debug the component executing on the hardware engine.
Type: Application
Filed: January 25, 2019
Publication date: August 1, 2019
Inventors: Eric SCHKUFZA, Michael WEI
-
Publication number: 20190236230
Abstract: A method for implementing a distributed hardware system includes retrieving a hardware design described in a hardware description language, where the hardware design includes a plurality of modules. The method includes sending modules of the design to software engines, where the runtime maintains, for each module being simulated, a queue for update and evaluation events. Update events on the queue are those that update stateful objects in the module and cause evaluation events to be enqueued onto the module's queue, while evaluation events are those that update stateless objects in the module and cause update events to be enqueued onto the module's queue. Having a queue for each module permits the runtime to manage module simulations so that the executions of each module run concurrently with each other. This leads to faster executions of the modules and less complex communications between modules during execution.
Type: Application
Filed: July 9, 2018
Publication date: August 1, 2019
Inventors: Eric SCHKUFZA, Michael WEI
-
Publication number: 20190235893
Abstract: A system and method are disclosed for executing a hardware component of a design in a hardware engine, where the component includes a pre-compiled library component. The hardware component is compiled to include an interface that supports a ‘forward()’ function which, when invoked, requests that the hardware engine running the hardware component run so that interactions between the library component and the hardware component are handled locally by the hardware engine, without communicating with the runtime system. Handling the library component without the runtime system intervening allows the library component to run at a speed that is close to the native speed of the target re-programmable hardware fabric. In addition, library components targeted to the specific reprogrammable hardware fabric are available to the design without compilation.
Type: Application
Filed: January 25, 2019
Publication date: August 1, 2019
Inventors: Eric SCHKUFZA, Michael WEI