Abstract: Systems and methods for resolving dependencies of computing interfaces. An example method includes identifying at least one example value with respect to a specification of a first computing interface, wherein each of the at least one example value is a value included in a respective request to the first computing interface; and linking the first computing interface to at least one second computing interface based on the identified at least one example value.
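The linking step described above can be sketched as follows. This is a minimal illustration, not the patented method: the spec format, the field names (`parameters`, `example`, `response_examples`), and the linking criterion (an example request value of one interface appearing among another interface's response examples) are all assumptions for demonstration.

```python
# Hypothetical spec format: each interface spec lists request parameters
# (some carrying example values) and example values its responses produce.

def extract_example_values(spec):
    """Collect example values from a spec's request parameters."""
    return {p["example"] for p in spec.get("parameters", []) if "example" in p}

def link_interfaces(first_spec, candidate_specs):
    """Link the first interface to candidates whose responses can produce
    one of the first interface's example request values."""
    examples = extract_example_values(first_spec)
    links = []
    for cand in candidate_specs:
        produced = set(cand.get("response_examples", []))
        if examples & produced:   # shared example value -> likely dependency
            links.append(cand["name"])
    return links
```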
Abstract: A processor unit comprising a first controller to couple to a host processing unit over a first link; a second controller to couple to a second processor unit over a second link, wherein the second processor unit is to couple to the host processing unit via a third link; and circuitry to determine whether to send a cache coherent request to the host processing unit over the first link or over the second link via the second processor unit.
Type:
Grant
Filed:
June 25, 2021
Date of Patent:
January 14, 2025
Assignee:
Intel Corporation
Inventors:
Rahul Pal, Nayan Amrutlal Suthar, David M. Puffer, Ashok Jagannathan
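The routing decision in the abstract above can be modeled as a simple link-selection policy. The abstract does not state the selection criterion, so the occupancy-threshold heuristic below is purely an assumption for illustration; the function name and thresholds are invented.

```python
# Assumed policy: prefer the direct first link to the host unless its
# occupancy exceeds a threshold, then detour via the peer processor unit
# over the second link if that path is less loaded.

def choose_link(first_link_occupancy, second_link_occupancy, threshold=0.75):
    """Return 'first' (direct to host) or 'second' (via peer processor unit)
    for a cache coherent request."""
    if first_link_occupancy <= threshold:
        return "first"
    if second_link_occupancy < first_link_occupancy:
        return "second"
    return "first"
```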
Abstract: An apparatus and a method for pipeline control are provided. The apparatus includes a preload predictor, an arithmetic logic unit (ALU) and a data buffer. The preload predictor is configured to determine whether a load instruction conforms to at least one specific condition, to generate a preload determination result. The ALU is configured to perform arithmetic logic operations, and the data buffer is configured to provide data to be used by the ALU. When the preload determination result indicates that the load instruction conforms to the at least one specific condition, the data buffer fetches preload data from a cache memory according to information carried by the load instruction and stores the preload data, where the preload data is data requested by a subsequent load instruction.
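The preload mechanism above can be sketched in software terms. The "specific condition" is not specified in the abstract, so the sketch assumes a stride-like access pattern as the trigger; the class and method names are invented for illustration.

```python
# Toy model: when a load meets the (assumed) stride condition, the data
# buffer prefetches the next address's data from the cache so a subsequent
# load hits in the buffer instead of going to the cache.

class DataBuffer:
    def __init__(self, cache):
        self.cache = cache      # address -> data (stands in for cache memory)
        self.preloaded = {}     # address -> data staged for a later load

    def handle_load(self, addr, stride=None):
        """Service a load; if the stride condition holds, preload the next line."""
        data = self.cache.get(addr)
        if stride is not None:          # preload determination result: condition met
            nxt = addr + stride
            if nxt in self.cache:
                self.preloaded[nxt] = self.cache[nxt]
        return data

    def subsequent_load(self, addr):
        """A later load is served from the buffer when preloaded data exists."""
        if addr in self.preloaded:
            return self.preloaded.pop(addr)
        return self.cache.get(addr)
```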
Abstract: Technologies for a quantum/classical hybrid approach to solving optimization problems are disclosed. In the illustrative embodiment, an optimization problem is decomposed into two sub-problems. The first sub-problem is solved on a classical computer, and a result from the first sub-problem is provided to a quantum computer. The quantum computer then solves the second sub-problem based on the result of the first sub-problem from the classical computer. The quantum computer can then provide a result to the classical computer to re-solve the first sub-problem. The iterative calculation continues until an end condition is met.
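The iterative decomposition above has the shape of an alternating fixed-point loop. The skeleton below illustrates only the control flow; both solvers are stand-in functions (not real classical or quantum hardware interfaces), and the convergence-based end condition is an assumption.

```python
# Skeleton of the hybrid loop: a classical step and a (simulated) quantum
# step exchange results until successive iterates stop changing.

def hybrid_solve(classical_step, quantum_step, x0, max_iters=50, tol=1e-6):
    """Alternate the two sub-problem solvers until the end condition is met."""
    x = x0
    for _ in range(max_iters):
        classical_result = classical_step(x)        # solve sub-problem 1
        quantum_result = quantum_step(classical_result)  # solve sub-problem 2
        if abs(quantum_result - x) < tol:           # end condition: converged
            return quantum_result
        x = quantum_result                          # feed back and re-solve
    return x
```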
Abstract: An electronic device with a connector supporting multiple connection standards includes the connector, a processor, a controller, an EDID (Extended Display Identification Data) ROM, a first multiplexer circuit and a second multiplexer circuit. The first multiplexer circuit is coupled to at least one signal pin of the connector, the processor and the controller. The second multiplexer circuit is coupled to the EDID ROM, the first multiplexer circuit, the processor and the controller. Under an update state, the controller is electrically connected to the EDID ROM through the second multiplexer circuit, and updates EDID in the EDID ROM with update data.
Abstract: An integrated circuit chip device includes monitoring circuitry for monitoring system circuitry, the monitoring circuitry having units connected in a tree-based structure for routing communications through the integrated circuit chip device. The tree-based structure has branches extending from a root unit, each branch comprising a plurality of units, each unit connected to a single unit above in the branch and a single unit below in the branch. Communications are routable between the root unit and a destination unit of a branch via intermediate units of that branch. Crosslinks connect corresponding units of adjacent branches. When an intermediate unit in the same branch as a destination unit is deemed defective, a crosslink can be enabled to route communications between the root unit and that destination unit via the other branch the crosslink connects.
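The crosslink fallback above can be illustrated with a toy two-branch model. The two-branch layout, branch labels, and path representation are assumptions for demonstration, not the patented topology.

```python
# Toy routing rule: take the destination's own branch unless it contains a
# defective unit, in which case descend the adjacent branch and cross back
# over an enabled crosslink at matching depth.

def route(dest_branch, dest_depth, defective, crosslink_depths):
    """Return hops (branch, depth) from the root to the destination, or None."""
    other = "B" if dest_branch == "A" else "A"
    direct = [(dest_branch, i) for i in range(dest_depth + 1)]
    if not any(hop in defective for hop in direct):
        return direct
    for d in sorted(crosslink_depths):
        if d > dest_depth:
            break
        # Descend the adjacent branch to depth d, cross the crosslink,
        # then continue down the destination's branch.
        detour = ([(other, i) for i in range(d + 1)]
                  + [(dest_branch, i) for i in range(d, dest_depth + 1)])
        if not any(hop in defective for hop in detour):
            return detour
    return None
```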
Abstract: A method for anticipatory bidirectional packet steering involves receiving, by a first packet steering module of a network, a first encapsulated packet traveling in a forward traffic direction. The first encapsulated packet includes a first encapsulating data structure. The network includes two or more packet steering modules and two or more network nodes. Each of the packet steering modules includes a packet classifier module, a return path learning module, a flow policy table, and a replicated data structure (RDS). The return path learning module of the first packet steering module generates return traffic path information associated with the first encapsulated packet and based on the first encapsulating data structure. The first packet steering module updates the RDS using the return traffic path information and transmits the return traffic path information to one or more other packet steering modules.
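The return-path learning and RDS replication above can be modeled minimally as follows. The contents of the encapsulating data structure are not specified in the abstract; the sketch assumes it carries a flow identifier and an ingress node, and the class and field names are invented.

```python
# Minimal model: learning a forward packet's ingress lets reply traffic be
# steered back through the same node; the replicated data structure (RDS)
# is a dict kept in sync by explicit updates to peer modules.

class PacketSteeringModule:
    def __init__(self):
        self.rds = {}       # flow_id -> node for the return path
        self.peers = []     # other steering modules receiving RDS updates

    def on_forward_packet(self, encap):
        """Learn return-path info from the encapsulating data structure."""
        flow_id, ingress = encap["flow_id"], encap["ingress_node"]
        self.rds[flow_id] = ingress          # update own RDS
        for peer in self.peers:              # replicate to peer modules
            peer.rds[flow_id] = ingress

    def steer_return(self, flow_id):
        """Steer return traffic using the learned path, if any."""
        return self.rds.get(flow_id)
```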
Abstract: Methods and apparatus for client-allocatable bandwidth pools are disclosed. A system includes a plurality of resources of a provider network and a resource manager. In response to a determination to accept a bandwidth pool creation request from a client for a resource group, where the resource group comprises a plurality of resources allocated to the client, the resource manager stores an indication of a total network traffic rate limit of the resource group. In response to a bandwidth allocation request from the client to allocate a specified portion of the total network traffic rate limit to a particular resource of the resource group, the resource manager initiates one or more configuration changes to allow network transmissions within one or more network links of the provider network accessible from the particular resource at a rate up to the specified portion.
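The pool-and-allocation accounting described above reduces to a simple invariant: per-resource allocations must never exceed the group's total rate limit. The sketch below shows only that invariant; the class shape and the behavior of re-allocating an existing resource are assumptions.

```python
# Minimal model of a client-allocatable bandwidth pool: a total traffic
# rate limit for a resource group, carved into per-resource allocations.

class BandwidthPool:
    def __init__(self, total_rate):
        self.total_rate = total_rate
        self.allocations = {}   # resource id -> allocated rate

    def allocate(self, resource, rate):
        """Allocate a portion of the pool to a resource; reject if the
        group's total limit would be exceeded. Re-allocating replaces the
        resource's previous share."""
        in_use = sum(self.allocations.values()) - self.allocations.get(resource, 0)
        if in_use + rate > self.total_rate:
            return False
        self.allocations[resource] = rate
        return True
```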
Abstract: To flexibly set up an execution environment according to the contents of the processing to be executed, while taking stability and security level into consideration, the multiple processor system includes: an execution environment main control unit 10, which determines CPU assignment; execution environment sub control units 20a through 20n, which control starting, stopping and switching of an execution environment according to instructions from the execution environment main control unit 10 and synchronize with it; and an execution environment management unit 30, which receives management information or reference refusal information of shared resources for each CPU 4 or each execution environment 100, thereby separating the execution environment main control unit 10 from the execution environment sub control units 20a through 20n, and the sub control units 20a through 20n from each other.
Abstract: A data processing system features a hardware trusted platform module (TPM), and a virtual TPM (vTPM) manager. When executed, the vTPM manager detects a first request from a service virtual machine (VM) in the processing system, the first request to involve access to the hardware TPM (hTPM). In response, the vTPM manager automatically determines whether the first request should be allowed, based on filter rules identifying allowed or disallowed operations for the hTPM. The vTPM manager may also detect a second request to involve access to a software TPM (sTPM) in the processing system. In response, the vTPM manager may automatically determine whether the second request should be allowed, based on a second filter list identifying allowed or disallowed operations for the sTPM. Other embodiments are described and claimed.
Type:
Grant
Filed:
December 21, 2007
Date of Patent:
November 12, 2013
Assignee:
Intel Corporation
Inventors:
Tasneem Brutch, Alok Kumar, Murari Kumar, Kalpana M. Roge, Vincent R. Scarlata, Ned M. Smith, Faraz A. Siddiqi, Willard M. Wiseman
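The filter-rule check performed by the vTPM manager above can be illustrated with a small lookup. The rule format and the operation names below are invented for demonstration; the abstract only states that per-TPM filter lists identify allowed or disallowed operations.

```python
# Assumed rule form: each TPM type (hardware hTPM or software sTPM) has a
# set of allowed operations; the vTPM manager consults the matching set
# before forwarding a VM's request.

FILTER_RULES = {
    "hTPM": {"allowed": {"quote", "extend_pcr"}},
    "sTPM": {"allowed": {"quote", "seal", "unseal"}},
}

def request_allowed(tpm_type, operation, rules=FILTER_RULES):
    """Decide whether a request targeting the given TPM should be allowed."""
    rule = rules.get(tpm_type)
    return rule is not None and operation in rule["allowed"]
```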
Abstract: A shared memory management system and method are described. In one embodiment, a memory management system includes a memory management unit for concurrently managing memory access requests from a plurality of engines. The shared memory management system independently controls access to the context memory without interference from other engine activities. In one exemplary implementation, the memory management unit tracks an identifier for each of the plurality of engines making a memory access request. The memory management unit associates each of the plurality of engines with its own translation information, which is specified by a block bind operation. In one embodiment, the translation information is stored in a portion of instance memory. A memory management unit can be non-blocking and can also permit a hit under miss.
Type:
Grant
Filed:
November 1, 2006
Date of Patent:
January 1, 2013
Inventors:
David B. Glasco, John S. Montrym, Lingfeng Yuan
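The per-engine tracking in the abstract above can be sketched as follows. The linear address translation and method names are placeholders; the point is only that a block bind associates an engine identifier with its own translation information, so each engine's requests translate independently.

```python
# Sketch: the MMU keeps a per-engine binding (engine id -> translation
# info, here a toy page-table base) established by a block bind operation.

class MemoryManagementUnit:
    def __init__(self):
        self.bindings = {}   # engine id -> page-table base (translation info)

    def block_bind(self, engine_id, page_table_base):
        """Associate an engine with its translation information."""
        self.bindings[engine_id] = page_table_base

    def translate(self, engine_id, virtual_addr):
        """Translate using the requesting engine's own binding only."""
        base = self.bindings[engine_id]
        return base + virtual_addr   # toy linear translation for illustration
```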
Abstract: A data processing device and method are provided. The data processing device includes a code storage unit storing an original code to be translated into a machine language code, a code analyzer analyzing the original code stored in the code storage unit, a register allocator allocating a predesignated register for a command included in the original code based on a result of the analysis, and a code executor executing a machine language code generated using the allocated register.
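The predesignated-register allocation above can be illustrated minimally. The command kinds and register names below are invented; the sketch only shows the idea of mapping analyzed commands to registers fixed in advance rather than chosen dynamically.

```python
# Assumed mapping: the analyzer classifies each command, and the allocator
# assigns the register predesignated for that command kind (falling back
# to a hypothetical default register "r0").

PREDESIGNATED = {"load": "r1", "store": "r2", "add": "r3"}

def allocate_registers(commands, table=PREDESIGNATED):
    """Return (command, register) pairs using the predesignated mapping."""
    return [(cmd, table.get(cmd, "r0")) for cmd in commands]
```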