Patents by Inventor Michael Robillard
Michael Robillard has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240394586
Abstract: One example method includes obtaining information about a first pre-defined implementation of a QUBO (quadratic unconstrained binary optimization) problem configured for execution on a gate-based device, obtaining information about a second pre-defined implementation of the QUBO problem configured for execution on an annealing device, receiving information about a QUBO job that is to be executed, identifying first hardware and second hardware that are available to execute the QUBO job, where the first hardware is different from the second hardware, using the information about the first and second pre-defined implementations of the QUBO problem to generate respective predictions concerning performance of the QUBO job on the first hardware and the second hardware, comparing the predictions, and, based on the comparing, selecting one of the first hardware and the second hardware for execution of the QUBO job.
Type: Application
Filed: May 22, 2023
Publication date: November 28, 2024
Inventors: Rômulo Teixeira de Abreu Pinho, Michael Robillard, Benjamin E. Santaus, Brendan Burns Healy, Victor Fong, Miguel Paredes Quiñones
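The selection step described in the abstract can be sketched roughly as follows. This is an illustrative toy, not the patented method: the function names, the per-variable runtime model, and the profile dictionaries are all assumptions made for the example.

```python
# Hypothetical sketch: predict how a QUBO job would perform on two
# candidate backends (gate-based vs. annealing), compare the
# predictions, and select the better backend.

def predict_runtime(num_vars: int, profile: dict) -> float:
    """Scale a pre-defined implementation's baseline cost to the job size."""
    return profile["runtime_per_var"] * num_vars

def select_hardware(job_vars: int, gate_profile: dict, annealer_profile: dict) -> str:
    """Compare per-backend predictions and pick the faster one."""
    gate_pred = predict_runtime(job_vars, gate_profile)
    anneal_pred = predict_runtime(job_vars, annealer_profile)
    return "gate-based" if gate_pred < anneal_pred else "annealing"

choice = select_hardware(
    job_vars=64,
    gate_profile={"runtime_per_var": 0.8},
    annealer_profile={"runtime_per_var": 0.5},
)
print(choice)  # annealing
```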
-
Patent number: 11900174
Abstract: Techniques are disclosed for processing unit virtualization with scalable over-provisioning in an information processing system. For example, the method accesses a data structure that maps a correspondence between a plurality of virtualized processing units and a plurality of abstracted processing units, wherein the plurality of abstracted processing units are configured to decouple an allocation decision from the plurality of virtualized processing units, and further wherein at least one of the virtualized processing units is mapped to multiple ones of the abstracted processing units. The method allocates one or more virtualized processing units to execute a given application by allocating one or more abstracted processing units identified from the data structure. The method also enables migration of one or more virtualized processing units across the system.
Type: Grant
Filed: June 22, 2022
Date of Patent: February 13, 2024
Assignee: Dell Products L.P.
Inventors: Anzhou Hou, Zhen Jia, Qiang Chen, Victor Fong, Michael Robillard
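The mapping data structure can be pictured as a simple table from virtualized processing units (vPUs) to the abstracted processing units (aPUs) that back them, with allocation performed at the aPU level. The names `vpu_to_apus` and `allocate` are illustrative assumptions, not the patented design.

```python
# Illustrative sketch of the vPU -> aPU mapping: one vPU may be backed
# by multiple aPUs, and allocation decisions operate on the aPUs.
vpu_to_apus = {
    "vpu0": ["apu0", "apu1"],   # one vPU mapped to multiple aPUs
    "vpu1": ["apu2", "apu3"],
}

def allocate(app: str, vpu: str, free_apus: set) -> list:
    """Allocate a vPU to an application by claiming its backing aPUs."""
    needed = vpu_to_apus[vpu]
    if not all(a in free_apus for a in needed):
        raise RuntimeError("aPUs over-committed")
    for a in needed:
        free_apus.discard(a)
    return needed

free = {"apu0", "apu1", "apu2", "apu3"}
print(allocate("training-job", "vpu0", free))  # ['apu0', 'apu1']
```

Because applications see only vPUs, the aPU layer can be re-mapped underneath them, which is what makes over-provisioning and migration tractable.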
-
Patent number: 11874719
Abstract: Techniques are disclosed for management of edge devices. For example, a method comprises coordinating operation of a plurality of edge devices in a system to process a plurality of workloads. The coordinating of the operation of the plurality of edge devices in the system comprises coordinating one or more times for changing a state of at least a subset of the plurality of edge devices from a first state corresponding to a first level of activity to a second state corresponding to a second level of activity. By way of further example, the coordinating of the operation of the plurality of edge devices in the system may further comprise coordinating one or more times for the processing of the plurality of workloads by at least the subset of the plurality of edge devices in the system.
Type: Grant
Filed: February 28, 2022
Date of Patent: January 16, 2024
Assignee: Dell Products L.P.
Inventors: Michael Robillard, Amy N. Seibel
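The coordinated state change can be sketched as a scheduler that, at an agreed time, moves a chosen subset of devices from one activity level to another. The class and function names here are illustrative only.

```python
# Toy coordinator: at the scheduled time, a subset of edge devices
# transitions from a first activity level to a second one.
from dataclasses import dataclass

@dataclass
class EdgeDevice:
    name: str
    state: str = "low-power"

def coordinate(devices, subset, new_state, at_time, now):
    """Change state for the chosen subset once the coordinated time arrives;
    leave the remaining devices untouched."""
    if now >= at_time:
        for d in devices:
            if d.name in subset:
                d.state = new_state
    return [d.state for d in devices]

fleet = [EdgeDevice("cam-1"), EdgeDevice("cam-2"), EdgeDevice("gw-1")]
print(coordinate(fleet, {"cam-1", "cam-2"}, "active", at_time=100, now=120))
# ['active', 'active', 'low-power']
```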
-
Publication number: 20240012570
Abstract: One example method includes receiving a hybrid classical/quantum algorithm, determining a runtime characteristic of the algorithm, checking, based on the runtime characteristic, a memory availability for execution of the algorithm, when adequate memory is not available to support execution of the algorithm, modifying a classical/quantum memory fabric to provide enough memory to support execution of the algorithm, and orchestrating the hybrid classical/quantum algorithm to an execution environment that includes the classical/quantum memory fabric.
Type: Application
Filed: July 7, 2022
Publication date: January 11, 2024
Inventors: Kenneth Durazzo, Stephen J. Todd, Michael Robillard, Victor Fong, Brendan Burns Healy, Eric Bruno
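The check-then-grow flow can be sketched in a few lines. The function name, the gigabyte units, and the fixed expansion step are assumptions made for illustration; the abstract does not specify how the fabric is resized.

```python
# Hedged sketch: compare the algorithm's estimated memory need against
# the fabric's current capacity, and grow the classical/quantum memory
# fabric until the job fits.

def orchestrate(required_gb: float, fabric_gb: float, step_gb: float = 8.0) -> float:
    """Return the fabric capacity after any expansion needed for the job."""
    while fabric_gb < required_gb:
        fabric_gb += step_gb   # attach another memory segment to the fabric
    return fabric_gb

print(orchestrate(required_gb=20.0, fabric_gb=16.0))  # 24.0
```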
-
Publication number: 20240012678
Abstract: Distribution of quantum jobs is disclosed. When a quantum processing unit is underutilized or when wait times are long, quantum jobs may be distributed from the job queue of one vendor to another vendor. This improves utilization and reduces wait times.
Type: Application
Filed: July 8, 2022
Publication date: January 11, 2024
Inventors: Kenneth Durazzo, Stephen J. Todd, Michael Robillard, Victor Fong, Eric Bruno, Amy N. Seibel, Benjamin Santaus, Brendan Burns Healy
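The rebalancing idea can be sketched as moving jobs from the deepest vendor queue to the shallowest until queue depths are roughly even. The `rebalance` function and its threshold parameter are illustrative assumptions.

```python
# Illustrative sketch of cross-vendor redistribution: when one vendor's
# queue is much longer than another's, move jobs to the underutilized QPU.

def rebalance(queues: dict, threshold: int) -> dict:
    """Repeatedly move a job from the deepest queue to the shallowest
    until no two vendors differ by more than `threshold` jobs."""
    while True:
        longest = max(queues, key=lambda v: len(queues[v]))
        shortest = min(queues, key=lambda v: len(queues[v]))
        if len(queues[longest]) - len(queues[shortest]) <= threshold:
            return queues
        queues[shortest].append(queues[longest].pop())

queues = {"vendorA": ["j1", "j2", "j3", "j4"], "vendorB": []}
rebalance(queues, threshold=1)
print({v: len(q) for v, q in queues.items()})  # {'vendorA': 2, 'vendorB': 2}
```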
-
Publication number: 20240012681
Abstract: Quantum job prioritization is disclosed. Quantum jobs may be stored as placeholders in a job queue associated with a quantum processing unit. The quantum jobs are prioritized to improve the usage of the quantum processing unit. Prioritizing quantum jobs allows the quantum processing unit to execute quantum jobs in different orders rather than on an application basis. This allows grace periods to be used for executing quantum jobs.
Type: Application
Filed: July 8, 2022
Publication date: January 11, 2024
Inventors: Kenneth Durazzo, Stephen J. Todd, Michael Robillard, Victor Fong, Eric Bruno, Benjamin Santaus
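Executing by priority rather than by submitting application can be illustrated with an ordinary priority queue. The priority values and job names below are made up for the example.

```python
# Sketch: placeholder entries in a priority queue for a QPU. Jobs are
# popped in priority order (lowest number first), regardless of which
# application submitted them.
import heapq

job_queue = []
heapq.heappush(job_queue, (2, "app-A/job-1"))
heapq.heappush(job_queue, (1, "app-B/job-9"))   # highest priority
heapq.heappush(job_queue, (3, "app-A/job-2"))

order = [heapq.heappop(job_queue)[1] for _ in range(3)]
print(order)  # ['app-B/job-9', 'app-A/job-1', 'app-A/job-2']
```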
-
Publication number: 20240012786
Abstract: One example method includes receiving, by a hybrid classical-quantum computing system, data from a node of a data confidence fabric, processing the data to create processed data, generating one or more confidence scores relating to the processed data, and making the one or more confidence scores and the processed data available to an end user. The hybrid classical-quantum computing system may also be a node of the data confidence fabric and may perform classical and/or quantum computing operations on the data.
Type: Application
Filed: July 7, 2022
Publication date: January 11, 2024
Inventors: Kenneth Durazzo, Stephen J. Todd, Michael Robillard, Victor Fong
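A confidence score in a data confidence fabric is commonly built up from trust annotations attached to the data as it moves through nodes. The annotation names and weights below are purely illustrative assumptions, not taken from the application.

```python
# Hedged sketch of confidence scoring: each trust annotation a node can
# attest to contributes a weight to the data's overall score.
WEIGHTS = {"signed": 0.4, "provenance": 0.3, "immutable-ledger": 0.3}

def confidence_score(annotations: set) -> float:
    """Sum the weights of the trust annotations attached to the data."""
    return round(sum(w for name, w in WEIGHTS.items() if name in annotations), 2)

print(confidence_score({"signed", "provenance"}))  # 0.7
```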
-
Publication number: 20240012691
Abstract: Global optimization of quantum jobs in a multi-cloud or multi-edge environment is disclosed. The quantum jobs of multiple vendors are consolidated in a telemetry plane. The quantum jobs are evaluated based on user intents, quantum job characteristics, and quantum processing unit characteristics. The quantum jobs are then assigned to the quantum systems of the vendors based on the evaluation.
Type: Application
Filed: July 8, 2022
Publication date: January 11, 2024
Inventors: Kenneth Durazzo, Stephen J. Todd, Michael Robillard, Victor Fong, Brendan Burns Healy, Benjamin Santaus
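The assignment step can be sketched as matching each job's characteristics (here, qubit count) against each vendor QPU's characteristics, then choosing per job. The field names and the "lowest queue depth wins" rule are assumptions for the example.

```python
# Illustrative assignment: for every job, keep only the QPUs with enough
# qubits, then pick the one with the shortest queue.

def assign(jobs: list, qpus: dict) -> dict:
    """Map each job id to the QPU that fits it and has the lowest queue depth."""
    placement = {}
    for job in jobs:
        fits = {name: q for name, q in qpus.items()
                if q["qubits"] >= job["qubits_needed"]}
        placement[job["id"]] = min(fits, key=lambda n: fits[n]["queue_depth"])
    return placement

jobs = [{"id": "j1", "qubits_needed": 5}, {"id": "j2", "qubits_needed": 30}]
qpus = {"vendorA": {"qubits": 27, "queue_depth": 4},
        "vendorB": {"qubits": 127, "queue_depth": 9}}
print(assign(jobs, qpus))  # {'j1': 'vendorA', 'j2': 'vendorB'}
```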
-
Publication number: 20230419378
Abstract: One example method includes receiving job configuration information from a user with a quantum computing job to be performed, receiving quantum computing information from a quantum computing service vendor, generating, based on the quantum computing information, a vendor score for the quantum computing service vendor, and transmitting the vendor score to the user. The quantum computing information received from the quantum computing service vendor may include information about an accuracy of results produced by execution of a quantum circuit or other quantum hardware operated by the quantum computing service vendor.
Type: Application
Filed: June 27, 2022
Publication date: December 28, 2023
Inventors: Kenneth Durazzo, Stephen J. Todd, Michael Robillard, Victor Fong, Brendan Burns Healy, Benjamin E. Santaus, Eric Bruno
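A vendor score of this kind might blend reported result accuracy with other vendor telemetry such as queue wait. The weights and the wait-time transform below are assumptions invented for the sketch, not the claimed scoring method.

```python
# Illustrative vendor scoring: combine reported accuracy with average
# queue wait into one number that can be sent to the user.

def vendor_score(accuracy: float, avg_wait_s: float,
                 w_acc: float = 0.7, w_wait: float = 0.3) -> float:
    """Weighted score in [0, 1]; shorter waits and higher accuracy score higher."""
    wait_term = 1.0 / (1.0 + avg_wait_s / 60.0)
    return round(w_acc * accuracy + w_wait * wait_term, 3)

print(vendor_score(accuracy=0.92, avg_wait_s=120.0))  # 0.744
```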
-
Publication number: 20230419160
Abstract: One example method includes evaluating a function invoked by a request that is received at a local classical computing execution environment, where the request also implies performance of a quantum computing function in a quantum computing execution environment, determining, based on an outcome of the evaluating, whether the function should be run in the local classical computing execution environment or in a separate classical computing execution environment, and, when the determining indicates that the function should be run in the separate classical computing execution environment, forwarding the request to the separate classical computing execution environment for execution of the function. The local classical computing execution environment, the separate classical computing execution environment, and the quantum computing execution environment are respective first, second, and third tiers of a hybrid computing execution environment.
Type: Application
Filed: June 27, 2022
Publication date: December 28, 2023
Inventors: Kenneth Durazzo, Stephen J. Todd, Michael Robillard, Victor Fong, Brendan Burns Healy, Benjamin E. Santaus, Xuebin He
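The local-versus-remote decision for the classical tiers can be sketched as a simple cost check. The cost/budget heuristic here is an assumption; the application does not disclose its actual evaluation criteria.

```python
# Sketch of the tiering decision: run the classical function in the
# local tier unless its estimated cost exceeds the local budget, in
# which case forward it to the separate classical tier.

def route(function_cost: float, local_budget: float) -> str:
    """Return the classical tier in which the function should run."""
    return "local-classical" if function_cost <= local_budget else "remote-classical"

print(route(function_cost=0.4, local_budget=1.0))   # local-classical
print(route(function_cost=5.0, local_budget=1.0))   # remote-classical
```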
-
Publication number: 20230418679
Abstract: Techniques are disclosed for processing unit virtualization with scalable over-provisioning in an information processing system. For example, the method accesses a data structure that maps a correspondence between a plurality of virtualized processing units and a plurality of abstracted processing units, wherein the plurality of abstracted processing units are configured to decouple an allocation decision from the plurality of virtualized processing units, and further wherein at least one of the virtualized processing units is mapped to multiple ones of the abstracted processing units. The method allocates one or more virtualized processing units to execute a given application by allocating one or more abstracted processing units identified from the data structure. The method also enables migration of one or more virtualized processing units across the system.
Type: Application
Filed: June 22, 2022
Publication date: December 28, 2023
Inventors: Anzhou Hou, Zhen Jia, Qiang Chen, Victor Fong, Michael Robillard
-
Publication number: 20230273663
Abstract: Techniques are disclosed for management of edge resources. For example, a method comprises receiving energy consumption data corresponding to operation of a plurality of edge devices from a plurality of edge service providers. In the method, a plurality of energy efficiency scores are computed based, at least in part, on the energy consumption data. The energy efficiency scores correspond to operation of one or more of the edge devices associated with respective ones of the edge service providers. The method further comprises receiving one or more energy consumption parameters from at least one user device for the operation of the one or more edge devices, and identifying based, at least in part, on the energy efficiency scores, a subset of the edge devices corresponding to the energy consumption parameters. Data corresponding to the subset of the edge devices is transmitted to the at least one user device.
Type: Application
Filed: February 28, 2022
Publication date: August 31, 2023
Inventors: Amy N. Seibel, Michael Robillard
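One way to picture the score-then-filter flow: compute an efficiency score per device from consumption data, then return the devices that satisfy the user's energy parameter, best first. The "work units per watt-hour" metric and all field names are assumptions for this sketch.

```python
# Illustrative efficiency scoring and filtering of edge devices against
# a user-supplied energy consumption parameter.

def efficiency_score(work_units: float, watt_hours: float) -> float:
    """Useful work delivered per watt-hour consumed."""
    return work_units / watt_hours

def matching_devices(devices: list, max_wh: float) -> list:
    """Names of devices within the user's energy budget, most efficient first."""
    ok = [d for d in devices if d["watt_hours"] <= max_wh]
    ok.sort(key=lambda d: efficiency_score(d["work_units"], d["watt_hours"]),
            reverse=True)
    return [d["name"] for d in ok]

devices = [
    {"name": "edge-1", "work_units": 100.0, "watt_hours": 50.0},
    {"name": "edge-2", "work_units": 90.0, "watt_hours": 30.0},
    {"name": "edge-3", "work_units": 200.0, "watt_hours": 120.0},
]
print(matching_devices(devices, max_wh=60.0))  # ['edge-2', 'edge-1']
```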
-
Publication number: 20230273665
Abstract: Techniques are disclosed for management of edge devices. For example, a method comprises coordinating operation of a plurality of edge devices in a system to process a plurality of workloads. The coordinating of the operation of the plurality of edge devices in the system comprises coordinating one or more times for changing a state of at least a subset of the plurality of edge devices from a first state corresponding to a first level of activity to a second state corresponding to a second level of activity. By way of further example, the coordinating of the operation of the plurality of edge devices in the system may further comprise coordinating one or more times for the processing of the plurality of workloads by at least the subset of the plurality of edge devices in the system.
Type: Application
Filed: February 28, 2022
Publication date: August 31, 2023
Inventors: Michael Robillard, Amy N. Seibel
-
Patent number: 11086739
Abstract: A system includes a host processor, a volatile memory device coupled to the host processor, and at least a first persistent memory device coupled to the host processor. The host processor is configured to execute one or more applications. The volatile memory device and the first persistent memory device are in respective distinct fault domains of the system, and at least one of a plurality of data objects generated by a given one of the applications is accessible from multiple distinct storage locations in respective ones of the distinct fault domains. For example, the host processor and the volatile memory device may be in a first one of the distinct fault domains and the first persistent memory device may be in a second one of the distinct fault domains. The data object remains accessible in one of the fault domains responsive to a failure in another of the fault domains.
Type: Grant
Filed: August 29, 2019
Date of Patent: August 10, 2021
Assignee: EMC IP Holding Company LLC
Inventors: Michael Robillard, Adrian Michaud, Dragan Savic
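The availability property can be modeled in a few lines: a data object is placed in storage locations in two distinct fault domains, and remains readable when one domain fails. The class below is a toy model, not the hardware design.

```python
# Toy model of the fault-domain behavior: dual placement across domains
# keeps the object readable through a single-domain failure.
class FaultDomain:
    def __init__(self, name):
        self.name, self.store, self.failed = name, {}, False
    def read(self, key):
        if self.failed:
            raise IOError(f"{self.name} is down")
        return self.store[key]

volatile = FaultDomain("volatile-domain")      # host processor + DRAM
persistent = FaultDomain("persistent-domain")  # persistent memory device

for dom in (volatile, persistent):             # store in both domains
    dom.store["obj1"] = b"payload"

volatile.failed = True                         # failure in one domain...
print(persistent.read("obj1"))                 # ...object still accessible
```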
-
Publication number: 20210064489
Abstract: A system includes a host processor, a volatile memory device coupled to the host processor, and at least a first persistent memory device coupled to the host processor. The host processor is configured to execute one or more applications. The volatile memory device and the first persistent memory device are in respective distinct fault domains of the system, and at least one of a plurality of data objects generated by a given one of the applications is accessible from multiple distinct storage locations in respective ones of the distinct fault domains. For example, the host processor and the volatile memory device may be in a first one of the distinct fault domains and the first persistent memory device may be in a second one of the distinct fault domains. The data object remains accessible in one of the fault domains responsive to a failure in another of the fault domains.
Type: Application
Filed: August 29, 2019
Publication date: March 4, 2021
Inventors: Michael Robillard, Adrian Michaud, Dragan Savic
-
Patent number: 10922078
Abstract: A system includes a host processor and at least one storage device coupled to the host processor. The host processor is configured to execute instructions of an instruction set, the instruction set comprising a first move instruction for moving data identified by at least one operand of the first move instruction into each of multiple distinct storage locations. The host processor, in executing the first move instruction, is configured to store the data in a first one of the storage locations identified by one or more additional operands of the first move instruction, and to store the data in a second one of the storage locations identified based at least in part on the first storage location. The instruction set in some embodiments further comprises a second move instruction for moving the data from the multiple distinct storage locations to another storage location.
Type: Grant
Filed: June 18, 2019
Date of Patent: February 16, 2021
Assignee: EMC IP Holding Company LLC
Inventors: Michael Robillard, Adrian Michaud, Dragan Savic
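The dual-destination move can be modeled abstractly: one operation writes the data to an explicitly named first location and to a second location derived from the first. The fixed mirror offset used to derive the second location is purely an assumption for the model; the patent only says the second location is identified based at least in part on the first.

```python
# Toy model of the first move instruction: a single "move" stores the
# data into the explicit destination and into a derived second one.
MIRROR_OFFSET = 0x1000   # assumed derivation rule for the sketch

def move_dual(memory: dict, data: bytes, first_addr: int) -> None:
    memory[first_addr] = data                    # explicit destination
    memory[first_addr + MIRROR_OFFSET] = data    # derived second destination

mem = {}
move_dual(mem, b"\x2a", 0x40)
print(sorted(mem))  # [64, 4160]
```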
-
Publication number: 20200401404
Abstract: A system includes a host processor and at least one storage device coupled to the host processor. The host processor is configured to execute instructions of an instruction set, the instruction set comprising a first move instruction for moving data identified by at least one operand of the first move instruction into each of multiple distinct storage locations. The host processor, in executing the first move instruction, is configured to store the data in a first one of the storage locations identified by one or more additional operands of the first move instruction, and to store the data in a second one of the storage locations identified based at least in part on the first storage location. The instruction set in some embodiments further comprises a second move instruction for moving the data from the multiple distinct storage locations to another storage location.
Type: Application
Filed: June 18, 2019
Publication date: December 24, 2020
Inventors: Michael Robillard, Adrian Michaud, Dragan Savic
-
Patent number: 10873630
Abstract: Systems, methods, and articles of manufacture comprising processor-readable storage media are provided for implementing server architectures having dedicated systems for processing infrastructure-related workloads. For example, a computing system includes a server node. The server node includes a first processor, a second processor, and a shared memory system. The first processor is configured to execute data computing functions of an application. The second processor is configured to execute input/output (I/O) functions for the application in parallel with the data computing functions of the application executed by the first processor. The shared memory system is configured to enable exchange of messages and data between the first and second processors.
Type: Grant
Filed: September 10, 2018
Date of Patent: December 22, 2020
Assignee: EMC IP Holding Company LLC
Inventors: Dragan Savic, Michael Robillard, Adrian Michaud
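The compute/I/O split can be illustrated with two workers exchanging messages through a shared queue standing in for the shared memory system. Threads are used purely for illustration; the patent describes separate physical processors.

```python
# Minimal sketch: a compute worker and an I/O worker run in parallel
# and exchange messages through a shared queue.
import threading, queue

shared = queue.Queue()   # stand-in for the shared memory system

def compute_worker(values):
    """First processor: data computing functions."""
    for v in values:
        shared.put(("write", v * v))   # hand results to the I/O side
    shared.put(("done", None))

def io_worker(sink):
    """Second processor: I/O functions, running in parallel."""
    while True:
        kind, payload = shared.get()
        if kind == "done":
            break
        sink.append(payload)

sink = []
t1 = threading.Thread(target=compute_worker, args=([1, 2, 3],))
t2 = threading.Thread(target=io_worker, args=(sink,))
t1.start(); t2.start(); t1.join(); t2.join()
print(sink)  # [1, 4, 9]
```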
-
Patent number: 10719238
Abstract: A first endpoint comprises a fabric attach point for attachment to a memory fabric, a first media controller, and a first non-volatile memory media. The memory fabric comprises a reliability zone comprising the first endpoint and at least a second endpoint. The first media controller is configured to receive, from at least one processor coupled to the first endpoint via the at least one fabric attach point, a memory fabric store command to store an object in the reliability zone. The first media controller is further configured to store the object in the first non-volatile memory media, to receive from the second endpoint a message indicating that the same object has been stored by the second endpoint, and to send to the at least one processor a single acknowledgement indicating that the at least one object has been stored in both the first and second endpoints of the reliability zone.
Type: Grant
Filed: October 12, 2017
Date of Patent: July 21, 2020
Assignee: EMC IP Holding Company LLC
Inventors: James Espy, William P. Dawkins, Dragan Savic, Amnon Izhar, Patrick J. Weiler, Michael Robillard
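The single-acknowledgement flow can be sketched as: the first endpoint stores the object, waits for the peer's "stored" message, then returns one acknowledgement covering both copies. Message passing is simplified to direct calls; all names are illustrative.

```python
# Sketch of the reliability-zone store: two endpoints each persist the
# object, but the processor receives only a single acknowledgement.
class Endpoint:
    def __init__(self, name):
        self.name, self.media = name, {}
    def store(self, key, obj):
        self.media[key] = obj
        return ("stored", self.name, key)   # peer's confirmation message

def fabric_store(primary, secondary, key, obj):
    """Store in both endpoints, then emit one ack covering both copies."""
    primary.store(key, obj)
    msg = secondary.store(key, obj)         # peer reports its copy
    assert msg == ("stored", secondary.name, key)
    return ("ack", key, [primary.name, secondary.name])

print(fabric_store(Endpoint("ep1"), Endpoint("ep2"), "obj", b"x"))
# ('ack', 'obj', ['ep1', 'ep2'])
```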
-
Patent number: 10241906
Abstract: Systems and methods are provided for implementing a memory subsystem to augment physical memory of a computing system. For example, a system comprises a memory subsystem, and a computing system coupled to the memory subsystem. The computing system comprises a processor, a first memory module, and a second memory module. The first memory module comprises random access memory which is utilized by the processor to store data associated with an application executing on the computing system. The second memory module comprises control logic circuitry that is configured to control access to the memory subsystem on behalf of the processor to store and retrieve data associated with the application executing on the computing system.
Type: Grant
Filed: July 30, 2016
Date of Patent: March 26, 2019
Assignee: EMC IP Holding Company LLC
Inventors: Michael Robillard, Dragan Savic, Adrian Michaud, Robert Beauchamp
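A toy two-tier model captures the idea: the first module is local RAM, while the second module's control logic transparently fetches from the external memory subsystem on a miss. The caching behavior shown is an assumption for the sketch, not a claimed detail.

```python
# Toy model of the augmented memory: local RAM backed by an external
# memory subsystem reached through the second module's control logic.
class AugmentedMemory:
    def __init__(self, backing: dict):
        self.local = {}            # first module: local RAM
        self.backing = backing     # memory subsystem behind module two
    def load(self, addr):
        if addr in self.local:
            return self.local[addr]
        value = self.backing[addr]  # control logic fetches on a miss
        self.local[addr] = value    # keep a local copy for later accesses
        return value

mem = AugmentedMemory(backing={0x10: 42})
print(mem.load(0x10))  # 42  (served from the subsystem, then cached)
print(mem.load(0x10))  # 42  (now served locally)
```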