SYSTEMS AND METHODS FOR REDUCING CONGESTION ON NETWORK-ON-CHIP
Systems or methods of the present disclosure may provide a programmable logic device including a network-on-chip (NoC) to facilitate data transfer between one or more main intellectual property components (main IP) and one or more secondary intellectual property components (secondary IP). To reduce or prevent excessive congestion on the NoC, the NoC may include one or more traffic throttlers that may receive feedback from a data buffer, a main bridge, or both, and adjust a data injection rate based on the feedback. Additionally, the NoC may include a data mapper to enable data transfer to be remapped from a first destination to a second destination if congestion is detected at the first destination.
The present disclosure relates generally to relieving network traffic congestion in an integrated circuit. More particularly, the present disclosure relates to relieving congestion in a network-on-chip (NoC) implemented on a field-programmable gate array (FPGA).
A NoC may be implemented on an FPGA to facilitate data transfer between various intellectual property (IP) cores of the FPGA. However, data may be transferred between main IP (e.g., accelerator functional units (AFUs) of a host processor, direct memory accesses (DMAs)) and secondary IP (e.g., memory controllers, artificial intelligence engines) of an FPGA system faster than the data can be processed at the NoC, and the NoC may become congested. Network congestion may degrade performance of the FPGA and the FPGA system, causing greater power consumption, reduced data transfer, and/or slower data processing.
This section is intended to introduce the reader to various aspects of art that may be related to various aspects of the present disclosure, which are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it may be understood that these statements are to be read in this light, and not as admissions of prior art.
Various aspects of this disclosure may be better understood upon reading the following detailed description and upon reference to the drawings in which:
One or more specific embodiments will be described below. In an effort to provide a concise description of these embodiments, not all features of an actual implementation are described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
When introducing elements of various embodiments of the present disclosure, the articles “a,” “an,” and “the” are intended to mean that there are one or more of the elements. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements. Additionally, it should be understood that references to “one embodiment” or “an embodiment” of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features.
The present systems and techniques relate to embodiments for reducing data traffic congestion on a network-on-chip (NoC). A NoC may be implemented on an integrated circuit such as a field-programmable gate array (FPGA) integrated circuit to facilitate data transfer between various intellectual property (IP) cores of the FPGA or FPGA system. However, data may be transferred between main IP (e.g., accelerator functional units (AFUs) of a host processor, direct memory accesses (DMAs)) and secondary IP (e.g., memory controllers, artificial intelligence engines) of an FPGA or FPGA system faster than the data can be processed, and the NoC may become congested. Network congestion may degrade performance of the FPGA and the FPGA system, causing greater power consumption, reduced data transfer, and/or slower data processing. The main IP and secondary IP may be internal (e.g., on the FPGA itself) or external to the FPGA (e.g., implemented on an external processor of the FPGA system).
In some embodiments, a traffic throttler may be implemented to reduce incoming data when the NoC becomes congested. However, some traffic throttlers may be static, meaning the traffic throttler does not have a feedback mechanism, and thus may not be aware of the traffic on the NoC. Without the feedback mechanism, traffic throttlers may over-throttle or under-throttle the data on the NoC, which may limit the FPGA's overall performance. Accordingly, in some embodiments, the traffic throttler may be provided with feedback from various components of the NoC (e.g., a buffer, a secondary bridge, a main bridge, and so on), enabling the traffic throttler to dynamically adjust the throttle rate according to the amount of data coming into the NoC, the amount of data processing in the buffer, the amount of data processing in the main bridge, and so on.
In other embodiments, congestion at the NoC may be prevented or alleviated by remapping a logical address from one physical address to another physical address. For example, if multiple logical addresses are mapped to a single physical address or are mapped to multiple physical addresses pointing to one component (e.g., a memory controller), congestion may form in the NoC at those physical addresses. As such, it may be beneficial to enable one or more logical addresses to be remapped to any number of available physical addresses. For example, data from an AFU may have a logical address corresponding to a physical address associated with a double data rate (DDR) memory controller. However, if congestion occurs at a secondary bridge or a buffer of the DDR memory controller, the logical address of the AFU may be remapped to correspond to a physical address associated with another DDR controller or a controller for another type of memory (e.g., high-bandwidth memory (HBM)). By enabling a logical address to be remapped to various physical addresses, congestion on the NoC may be alleviated without adjusting user logic of the FPGA and without consuming additional logic resources.
While the NoC may facilitate data transfer between multiple main IP and secondary IP, different applications running on the FPGA may communicate using a variety of data widths. For example, certain components of the integrated circuit, such as a main bridge, may support 256-bit data throughput while other components may support 128-bit, 64-bit, 32-bit, 16-bit, or 8-bit data throughput. In some embodiments, the lower data widths may be supported by providing at least one instance of a component for each data width a user may wish to support. Continuing with the above example, to support the desired data widths, the integrated circuit may be designed with a 256-bit main bridge, a 128-bit main bridge, a 64-bit main bridge, a 32-bit main bridge, a 16-bit main bridge, and an 8-bit main bridge. Certain components, such as main bridges, may take up significant space on the integrated circuit and may each draw significant power, even when not in use. As such, it may be beneficial to enable an integrated circuit to support various data widths using only one instance of the component instead of several instances of the same component, each supporting a different data width.
As such, a flexible data width adapter that enables narrow data bus conversion for a variety of data widths may be deployed on the integrated circuit. The data width adapter may include an embedded shim that handles narrow transfers by emulating narrow data width components, thus enabling support of various data widths without implementing costly and redundant components that may consume excessive space and power.
With the foregoing in mind,
In a configuration mode of the integrated circuit 12, a designer may use an electronic device 13 (e.g., a computer) to implement high-level designs (e.g., a system user design) using design software 14, such as a version of INTEL® QUARTUS® by INTEL CORPORATION. The electronic device 13 may use the design software 14 and a compiler 16 to convert the high-level program into a lower-level description (e.g., a configuration program, a bitstream). The compiler 16 may provide machine-readable instructions representative of the high-level program to a host 18 and the integrated circuit 12. The host 18 may receive a host program 22 that may be implemented by the kernel programs 20. To implement the host program 22, the host 18 may communicate instructions from the host program 22 to the integrated circuit 12 via a communications link 24 that may be, for example, direct memory access (DMA) communications or peripheral component interconnect express (PCIe) communications. In some embodiments, the kernel programs 20 and the host 18 may enable configuration of programmable logic 26 on the integrated circuit 12. The programmable logic 26 may include circuitry and/or other logic elements and may be configurable to implement arithmetic operations, such as addition and multiplication.
The designer may use the design software 14 to generate and/or to specify a low-level program, such as the low-level hardware description languages described above. Further, in some embodiments, the system 10 may be implemented without a separate host program 22. Thus, embodiments described herein are intended to be illustrative and not limiting.
Turning now to a more detailed discussion of the integrated circuit 12,
Programmable logic devices, such as the integrated circuit 12, may include programmable elements 50 within the programmable logic 26. For example, as discussed above, a designer (e.g., a customer) may program (e.g., configure) or reprogram (e.g., reconfigure, partially reconfigure) the programmable logic 26 to perform one or more desired functions. By way of example, some programmable logic devices may be programmed or reprogrammed by configuring programmable elements 50 using mask programming arrangements that are performed during semiconductor manufacturing. Other programmable logic devices are configurable after semiconductor fabrication operations have been completed, such as by using electrical programming or laser programming to program programmable elements 50. In general, programmable elements 50 may be based on any suitable programmable technology, such as fuses, antifuses, electrically programmable read-only-memory technology, random-access memory cells, mask-programmed elements, and so forth.
Many programmable logic devices are electrically programmed. With electrical programming arrangements, the programmable elements 50 may be formed from one or more memory cells. For example, during programming (i.e., configuration), configuration data is loaded into the memory cells using input/output pins 44 and input/output circuitry 42. In one embodiment, the memory cells may be implemented as random-access-memory (RAM) cells. The use of memory cells based on RAM technology described herein is intended to be only one example. Further, since these RAM cells are loaded with configuration data during programming, they are sometimes referred to as configuration RAM cells (CRAM). These memory cells may each provide a corresponding static control output signal that controls the state of an associated logic component in programmable logic 26. For instance, in some embodiments, the output signals may be applied to the gates of metal-oxide-semiconductor (MOS) transistors within the programmable logic 26.
Keeping the discussion of
The integrated circuit device 12 may include any programmable logic device such as a field programmable gate array (FPGA) 70, as shown in
In the example of
A power supply 78 may provide a source of voltage (e.g., supply voltage) and current to a power distribution network (PDN) 80 that distributes electrical power to the various components of the FPGA 70. Operating the circuitry of the FPGA 70 causes power to be drawn from the power distribution network 80.
There may be any suitable number of programmable logic sectors 74 on the FPGA 70. Indeed, while 29 programmable logic sectors 74 are shown here, it should be appreciated that more or fewer may appear in an actual implementation (e.g., in some cases, on the order of 50, 100, 500, 1000, 5000, 10,000, 50,000 or 100,000 sectors or more). Programmable logic sectors 74 may include a sector controller (SC) 82 that controls operation of the programmable logic sector 74. Sector controllers 82 may be in communication with a device controller (DC) 84.
Sector controllers 82 may accept commands and data from the device controller 84 and may read data from and write data into the configuration memory 76 based on control signals from the device controller 84. In addition to these operations, the sector controller 82 may be augmented with numerous additional capabilities. For example, such capabilities may include locally sequencing reads and writes to implement error detection and correction on the configuration memory 76 and sequencing test control signals to effect various test modes.
The sector controllers 82 and the device controller 84 may be implemented as state machines and/or processors. For example, operations of the sector controllers 82 or the device controller 84 may be implemented as a separate routine in a memory containing a control program. This control program memory may be fixed in a read-only memory (ROM) or stored in a writable memory, such as random-access memory (RAM). The ROM may have a size larger than would be used to store only one copy of each routine. This may allow routines to have multiple variants depending on “modes” the local controller may be placed into. When the control program memory is implemented as RAM, the RAM may be written with new routines to implement new operations and functionality into the programmable logic sectors 74. This may provide usable extensibility in an efficient and easily understood way. This may be useful because new commands could bring about large amounts of local activity within the sector at the expense of only a small amount of communication between the device controller 84 and the sector controllers 82.
Sector controllers 82 thus may communicate with the device controller 84 that may coordinate the operations of the sector controllers 82 and convey commands initiated from outside the FPGA 70. To support this communication, the interconnection resources 46 may act as a network between the device controller 84 and sector controllers 82. The interconnection resources 46 may support a wide variety of signals between the device controller 84 and sector controllers 82. In one example, these signals may be transmitted as communication packets.
The use of configuration memory 76 based on RAM technology as described herein is intended to be only one example. Moreover, configuration memory 76 may be distributed (e.g., as RAM cells) throughout the various programmable logic sectors 74 of the FPGA 70. The configuration memory 76 may provide a corresponding static control output signal that controls the state of an associated programmable logic element 50 or programmable component of the interconnection resources 46. The output signals of the configuration memory 76 may be applied to the gates of metal-oxide-semiconductor (MOS) transistors that control the states of the programmable logic elements 50 or programmable components of the interconnection resources 46.
As discussed above, some embodiments of the programmable logic fabric may be included in programmable fabric-based packages that include multiple die connected using 2-D, 2.5-D, or 3-D interfaces. Each of the die may include logic and/or tiles that correspond to a power state and thermal level. Additionally, the power usage and thermal level of each die within the package may be monitored, and control circuitry may dynamically control operations of the one or more die based on the power data and thermal data collected.
With the foregoing in mind,
The main bridges 106 may interface with the host 112 and the host memory 113 via a host interface 122 to send data packets to and receive data packets from the main IP in the host 112. Although only one host interface 122 is illustrated, it should be noted that there may be any appropriate number of host interfaces 122, such as one host interface 122 per programmable logic sector 128, per column of programmable logic sectors 128, or any other suitable number of host interfaces 122 in the integrated circuit 12. The host interface 122 may communicate with the host 112 via an Altera Interface Bus (AIB) 114, a Master Die Altera Interface Bus (MAIB) 118, and a PCI Express (PCIe) bus 120. The AIB 114 and the MAIB 118 may interface via an Embedded Multi-Die Interconnect Bridge (eMIB) 116. The main bridges 106 of the NoC 104 may be communicatively coupled to one or more programmable logic sectors 128 of the FPGA 102 and may facilitate data transfer from one or more of the programmable logic sectors 128. In some embodiments, one or more micro-NoCs may be present in each sector 128 to facilitate local data transfers (e.g., between the NoC 104 and a sector controller or device controller of a single programmable logic sector 128).
As previously mentioned, the secondary bridges 108 may serve as network interfaces between the secondary IP and the NoC 104, facilitating transmission and reception of data packets to and from the secondary IP. For instance, the NoC 104 may interface with DDR memory (e.g., by interfacing with a DDR memory controller). The NoC 104 may interface with the high bandwidth memory 126 (e.g., by interfacing with a high bandwidth memory controller) via a universal interface board (UIB) 124. The UIB 124 may interface with the high bandwidth memory 126 via an eMIB 125. It should be noted that the FPGA 102 and the FPGA 70 may be the same device or may be separate devices. Regardless, the FPGA 102 may be or include a programmable logic device, such as an FPGA, or any other suitable integrated circuit, such as an ASIC. Likewise, the host 112 may be the same device as or a different device than the host 18. Regardless, the host 112 may be a processor that is internal or external to the FPGA 102. While 10 programmable logic sectors 128 are shown here, it should be appreciated that more or fewer may appear in an actual implementation (e.g., in some cases, on the order of 50, 100, 500, 1000, 5000, 10,000, 50,000 or 100,000 sectors or more).
For example, the switching network 156 may transmit data from the main IP 152A via the main bridge 106A and send the data to the secondary IP 154A via the secondary bridge 108A, to the secondary IP 154B via the secondary bridge 108B, to the secondary IP 154C via the secondary bridge 108C, and/or to the secondary IP 154D via the secondary bridge 108D. Additionally, any of the secondary IP 154 (e.g., 154A) may receive data from the main IP 152A, from the main IP 152B, from the main IP 152C, and/or from the main IP 152D. While the switching network 156 is illustrated as a 2×2 mesh for simplicity, it should be noted that there may be any appropriate number of switches 110, links 111, main IP 152, main bridges 106, secondary IP 154, and secondary bridges 108 arranged in a mesh of any appropriate size (e.g., a 10×10 mesh, a 100×100 mesh, a 1000×1000 mesh, and so on).
While the system 150 (and the NoC 104 generally) may increase the efficiency of data transfer in the FPGA 102 and in the system 100, congestion may still occur in the NoC 104. Congestion may be concentrated at the main bridge 106, at the secondary bridge 108, or in the switching network 156. As will be discussed in greater detail below, in certain embodiments a dynamic throttler may be implemented in the NoC 104 to adjust the flow of traffic according to the level of congestion that may occur in different areas of the NoC 104.
Dynamic Traffic Throttler
The dynamic throttler 202 may include a target selector 272 that may receive congestion control signals from the buffer 204 and, based on the particular control signals received from the buffer 204, may send throttling instructions to the throttle action control 274. The throttling instructions may include throttle rate instructions (e.g., based on whether the warning control signals 260 and/or the stop control signals 262 are asserted or deasserted) to reduce the data injection rate, increase the data injection rate, or stop data injection. The throttling instructions may also include target instructions, which may include identifiers (e.g., based on the target enable signals 264 and 266 sent to the target selector from the secondary IP) for identifying which secondary IP is to have its incoming data increased, throttled, or stopped. The buffer 204 may be programmed (e.g., by a designer of the FPGA 102 or the system 100) with a warning threshold 254 and a stop threshold 256. For example, the warning threshold 254 may be set at 20% or more of the total buffer depth of the buffer 204, 33.333% or more of the total buffer depth of the buffer 204, 50% or more of the total buffer depth of the buffer 204, and so on. Further, the warning threshold 254 may be set as a range. For example, the warning threshold 254 may be set as the range from 25% of the total buffer depth to 75% of the total buffer depth of the buffer 204. Once the warning threshold 254 is reached, the buffer 204 may output a warning control signal 260 to the target selector 272 of the dynamic throttler 202 to slow traffic due to elevated congestion.
Similarly, the stop threshold 256 may be set at a higher threshold, such as 51% or more of the total buffer depth of the buffer 204, 66.666% or more of the total buffer depth of the buffer 204, 75% or more of the total buffer depth of the buffer 204, 90% or more of the total buffer depth of the buffer 204, and so on. Once the stop threshold 256 is reached, the buffer 204 may output a stop control signal 262 to the target selector 272. As will be discussed in greater detail below, the dynamic throttler 202 may apply a single throttle rate upon receiving the warning control signal 260 or may gradually increment the throttle rate (e.g., increase the throttle rate by an amount less than a maximum throttle rate, or adjust the throttle rate in multiple incremental increases) as the congestion in the buffer 204 increases. Once the dynamic throttler 202 receives the stop control signal 262, the dynamic throttler 202 may stop all incoming data (e.g., may apply a throttle rate of 100%). The programmable stop threshold 256 and warning threshold 254 may be determined by a user and loaded into the buffer 204 via a threshold configuration register 258.
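By way of a minimal behavioral sketch (in Python), the buffer-side signaling described above may be modeled as follows, assuming a 64-entry buffer with the example 50% warning and 75% stop thresholds; the class, attribute, and signal names are illustrative assumptions rather than the hardware implementation.

class BufferMonitor:
    def __init__(self, depth, warning_threshold=0.50, stop_threshold=0.75):
        self.depth = depth                            # total buffer depth, in entries
        self.warning_threshold = warning_threshold    # e.g., loaded via a threshold configuration register
        self.stop_threshold = stop_threshold
        self.occupancy = 0

    def control_signals(self):
        """Return (warning, stop) assertions based on the current fill level."""
        fill = self.occupancy / self.depth
        warning = fill >= self.warning_threshold      # analogous to the warning control signal 260
        stop = fill >= self.stop_threshold            # analogous to the stop control signal 262
        return warning, stop

# Example: a 64-entry buffer holding 40 entries asserts only the warning signal.
monitor = BufferMonitor(depth=64)
monitor.occupancy = 40
print(monitor.control_signals())  # (True, False)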
In some embodiments, multiple secondary IP 154 may be communicatively coupled to multiple buffers 204, and multiple buffers 204 may communicate with multiple dynamic throttlers 202. In this way, multiple dynamic throttlers 202 may have the ability to throttle data injection rates for multiple buffers 204 and multiple secondary IP 154. As such, the target selector 272 may receive feedback signals from multiple buffers 204 corresponding to multiple secondary IP 154. For example, the target selector 272 may receive the stop control signal 262 and a target enable signal 264 corresponding to a first secondary IP 154, indicating to the target selector 272 that the data being sent to the first secondary IP 154 is to be reduced or stopped completely according to the programmed stop threshold 256. The target selector 272 may also receive the warning control signal 260 and a target enable signal 266 corresponding to a second secondary IP 154, indicating to the target selector 272 that the data being sent to the second secondary IP 154 is to be reduced according to the warning threshold 254. The target selector 272 may send this information as a single signal to a throttle action control 274. In some embodiments, the target selector 272 may receive control signals from other secondary IP 154 in the NoC, such as the stop control signal 268 and the warning control signal 270. It should be noted that, while certain components (e.g., the throttle action control 274, the target selector 272) are illustrated as physical hardware components, they may be implemented as physical hardware components implementing software instructions or software modules or components. As used herein, a software module or component may include any type of computer instruction or computer-executable code located within a memory device and/or transmitted as electrical signals over a bus, wired connection, or wireless network, and executed on a processor or other physical component.
The throttle action control 274 may adjust the throttling rate based on the status of the control signal from the target selector 272. The throttling rate may be programmable. For example, for the warning threshold 254, a designer may set a throttling rate of 5% or more, 10% or more, 25% or more, 50% or more, 65% or more, and so on. The dynamic throttler 202 may apply a single throttling rate or may set a range of throttling rates. For example, the dynamic throttler 202 may throttle the incoming traffic 252 at 10% at the lower edge of a range of the warning threshold 254 (e.g., as the buffer 204 fills to 25% of the buffer depth) and may increment the throttling rate (e.g., adjust the throttle rate in multiple incremental increases) until the throttling rate is at 50% at the upper edge of the range of the warning threshold 254 (e.g., as the buffer 204 fills to 50% of the buffer depth). In some embodiments, once the buffer 204 reaches the stop threshold 256, the data transfer may be stopped completely (e.g., throttling rate set to 100%). However, in other embodiments the data transfer may be significantly reduced, but not stopped, by setting a throttling rate of 75% or more, 85% or more, 95% or more, 99% or more, and so on.
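As a worked illustration (in Python) of the ramped throttling in this example, the sketch below linearly interpolates the throttle rate across the warning range; the 25%/50% fill endpoints and 10%/50% rate endpoints are taken from the example above, while the linear transfer curve itself is an assumption, not necessarily how the throttle action control 274 is implemented.

def warning_throttle_rate(fill, lo=0.25, hi=0.50, rate_lo=0.10, rate_hi=0.50):
    """Map a buffer fill level (0.0-1.0) to a throttle rate within the warning range."""
    if fill < lo:
        return 0.0                      # below the warning range: no throttling
    if fill >= hi:
        return rate_hi                  # at or above the upper edge: maximum warning-state rate
    span = (fill - lo) / (hi - lo)      # relative position within the warning range
    return rate_lo + span * (rate_hi - rate_lo)

print(round(warning_throttle_rate(0.375), 2))  # midway through the range -> 0.3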
If the dynamic throttler 202 receives from the buffer 204 an assertion of the warning control signal 260, the dynamic throttler 202 may enter a warning state 354. Continuing with the example above, if the warning threshold 254 is programmed at 50% of the total buffer depth, once the buffer 204 fills to 50% of the total buffer depth the buffer 204 may send the warning control signal 260 to the dynamic throttler 202. As previously discussed, the warning threshold 254 may be programmed for a range. For example, the warning threshold 254 may be programmed for the range from 25% of the total buffer depth to 50% of the total buffer depth.
In the warning state 354, the dynamic throttler 202 may reduce the injection rate below full bandwidth. In some embodiments, the dynamic throttler 202 may reduce the injection rate to a programmed warning throttle rate all at once (e.g., the throttle action control 274 may set the throttle rate to the programmed warning throttle rate, such as 50%). In other embodiments, the dynamic throttler 202 may gradually reduce the injection rate by incrementing the throttle rate (e.g., decrementing the injection rate) by a particular increment. The increment may be a 1% or greater increase in throttle rate, a 2% or greater increase in throttle rate, a 5% or greater increase in throttle rate, a 10% or greater increase in throttle rate, and so on. Continuing with the example above, if the warning threshold 254 is programmed for the range from 25% of the total buffer depth to 50% of the total buffer depth, the dynamic throttler 202 may gradually increment the throttle rate (i.e., decrement the injection rate) as the buffer fills from 25% to 50% of the total buffer depth of the buffer 204. By incrementing or decrementing the throttle rate/injection rate (e.g., by adjusting the throttle rate in multiple incremental increases or decreases), the likelihood of causing instability in the dynamic throttler 202 may be reduced.
While in the warning state 354, if the dynamic throttler 202 receives an additional warning control signal 260 and does not receive a stop control signal 262, the dynamic throttler 202 may further reduce the injection rate (e.g., the throttle action control 274 may increase the throttle rate by a predetermined amount, such as increasing the throttle rate by 1%, 2%, 5%, 10%, and so on).
While in the warning state 354, if the dynamic throttler 202 receives neither the warning control signal 260 nor the stop control signal 262 (or receives a deassertion of the warning control signal 260 or the stop control signal 262), the dynamic throttler 202 may enter an increase bandwidth state 356. In the increase bandwidth state 356, the dynamic throttler 202 may increment the injection rate (i.e., decrement the throttle rate) of the incoming traffic 252 from the injection rate of the warning state 354. If, while in the increase bandwidth state 356, the dynamic throttler 202 receives an indication of a deassertion of the warning control signal 260 and/or the stop control signal 262 and the injection rate is less than an injection rate threshold, the dynamic throttler 202 may remain in the increase bandwidth state 356 and continue to increment the injection rate (e.g., until the injection rate meets or exceeds the injection rate threshold).
If, while in the increase bandwidth state 356, the dynamic throttler 202 receives neither the warning control signal 260 nor the stop control signal 262 (or receives a deassertion of the warning control signal 260 or the stop control signal 262) and the injection rate is greater than the injection rate threshold, the dynamic throttler 202 may reenter the idle state 352, and the injection rate may return to full capacity (e.g., the throttle action control 274 may decrease the throttle rate to 0%). However, if, while in the increase bandwidth state 356, the dynamic throttler 202 receives an indication of an assertion of the warning control signal 260, the dynamic throttler 202 may reenter the warning state 354.
While in the warning state 354, if the dynamic throttler 202 receives an indication of an assertion of the stop control signal 262, the dynamic throttler 202 may enter a stop state 358 and reduce the injection rate based on the stop threshold 256 (e.g., the throttle action control 274 may increase the throttle rate to 100%). While in the stop state 358, if the dynamic throttler 202 receives an indication of the assertion of the warning control signal 260 and the stop control signal 262, the dynamic throttler 202 may maintain a throttle rate of 100%. However, while in the stop state 358, if the warning control signal 260 remains asserted while the stop control signal 262 is deasserted, the dynamic throttler 202 may reenter the warning state 354 and may reduce the throttling rate accordingly. For example, the throttle action control 274 may reduce the throttle rate to the programmed warning throttle rate, such as 50%.
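The transitions among the idle, warning, increase bandwidth, and stop states described above may be summarized in the following simplified Python sketch; the 50% warning rate and 5% step are assumed example parameters, and the model intentionally collapses the injection rate threshold into the throttle rate reaching 0%.

IDLE, WARNING, INCREASE, STOP = "idle", "warning", "increase_bandwidth", "stop"

class DynamicThrottlerModel:
    def __init__(self, warning_rate=0.50, step=0.05):
        self.state = IDLE
        self.throttle = 0.0            # 0.0 = full injection rate, 1.0 = stopped
        self.warning_rate = warning_rate
        self.step = step

    def tick(self, warning_asserted, stop_asserted):
        """Advance one evaluation step given the control signal assertions."""
        if stop_asserted:
            self.state, self.throttle = STOP, 1.0       # stop state: throttle rate of 100%
        elif warning_asserted:
            if self.state == WARNING:
                # repeated warning: throttle further by the increment
                self.throttle = min(1.0, self.throttle + self.step)
            else:
                # entering (or reentering from stop) the warning state
                self.state, self.throttle = WARNING, self.warning_rate
        else:
            # both signals deasserted: increase bandwidth back toward full injection rate
            self.throttle = max(0.0, self.throttle - self.step)
            self.state = IDLE if self.throttle == 0.0 else INCREASE
        return self.state, self.throttle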
Returning to
As with the warning threshold 254 and the stop threshold 256 for the buffer 204, a warning threshold and a stop threshold for the pending transactions in the main bridge 106 may be programmable. For example, a user may set a warning threshold corresponding to a first number of pending transactions, and once the pending transaction counter 282 reaches the first number, the dynamic throttler 202 may throttle the incoming traffic 252 such that the transaction gating circuitry 280 receives fewer incoming transactions. A user may also set a stop threshold corresponding to a second number of pending transactions, and once the pending transaction counter 282 reaches the second number, the dynamic throttler 202 may throttle the incoming traffic 252, stopping or significantly limiting the number of transactions entering the transaction gating circuitry 280.
The warning and stop thresholds for pending transactions may be loaded into the throttle action control 274 from a write/read pending transaction threshold register 288. The mode (e.g., the transaction throttling rate) may be selected based on the number of pending transactions reported by the pending transaction counter 282, the warning and stop thresholds, and a value loaded into the throttle action control 274 via a mode selection register 290 that may indicate whether dynamic throttling is enabled for the integrated circuit 12. As with the throttling based on the feedback from the buffer 204, the throttling rate based on the pending transactions in the main bridge 106 may be a single throttling rate or may be a range of throttling rates incremented over a programmable range of pending transaction thresholds.
Once the number of pending read and write transactions is received from the pending transaction counter 282 and the mode is selected by the mode selection register 290, the throttle action control 274 may determine the amount of incoming transactions to limit. The throttle action control 274 may send instructions on the amount of incoming transactions to limit to a bandwidth limiter 276. The bandwidth limiter 276 may control how long the gating mechanism of the transaction gating circuitry 280 is engaged. Based on the instructions received from the throttle action control 274, the bandwidth limiter 276 may cause the transaction gating circuitry 280 to receive or limit the incoming transactions. It should be noted that, while certain components (e.g., the bandwidth limiter 276) are illustrated as physical hardware components, they may be implemented as physical hardware components implementing software instructions or software modules or components.
When the pending transaction counter 282 indicates that the number of pending transactions is high (e.g., indicating congestion at the main bridge 106), the bandwidth limiter 276 may set the read and write signals low (e.g., the gating for read transactions signal 402 may be set to low and the gating for write transactions signal 416 may be set to low); thus, the ready signals ARREADY 408 and AWREADY 420 may be low, the valid signals ARVALID 414 and AWVALID 424 may be low, and the main bridge 106 may not accept incoming read/write transactions. As a result, the injection rate of the incoming traffic 252 is reduced.
When the pending transaction counter 282 indicates that the number of pending transactions is low (e.g., indicating little or no congestion at the main bridge 106), the bandwidth limiter 276 may set the read and write signals high (e.g., the gating for read transactions signal 402 may be set to high and the gating for write transactions signal 416 may be set to high); thus, the ready signals ARREADY 408 and AWREADY 420 may be high, the valid signals ARVALID 414 and AWVALID 424 may be high, and the main bridge 106 may accept incoming read/write transactions. As a result, the traffic injection bandwidth will be high, and the transaction gating circuitry 280 may transmit the incoming traffic 252 to the main bridge 106.
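A minimal Python sketch of this gating policy follows, modeling the pending transaction count and thresholds as plain integers and expressing warning-region throttling as the fraction of cycles in which the ready signals are held high; the function name, parameters, and the 50% warning rate are illustrative assumptions.

def gating_duty_cycle(pending, warning_threshold, stop_threshold, warning_rate=0.5):
    """Map the pending transaction count to a ready-signal duty cycle."""
    if pending >= stop_threshold:
        return 0.0                    # stop threshold reached: gate all incoming transactions
    if pending >= warning_threshold:
        return 1.0 - warning_rate     # warning threshold reached: throttle incoming transactions
    return 1.0                        # uncongested: ARREADY/AWREADY held high every cycle

# Example: 12 pending transactions against warning/stop thresholds of 8/16.
print(gating_duty_cycle(12, warning_threshold=8, stop_threshold=16))  # 0.5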
If the dynamic throttler 202 receives from the main bridge 106 an assertion of the warning control signal, the dynamic throttler 202 may enter a warning state 454. In the warning state 454, the dynamic throttler 202 may increase the gating of the transaction gating circuitry 280 to reduce the injection rate of the transactions entering the main bridge 106. In some embodiments, the dynamic throttler 202 may reduce the injection rate to a programmed warning throttle rate all at once. For example, once the number of pending transactions received from the pending transaction counter 282 reaches a first threshold, the throttle action control 274 may set the throttling rate to a programmed warning throttling rate to throttle 50% of the transactions coming from the incoming traffic 252. In other embodiments, the dynamic throttler 202 may gradually reduce the injection rate by incrementing the throttle rate (e.g., decrementing the injection rate) by a particular increment. The increment may be a 1% or greater increase in throttling rate of incoming transactions, a 2% or greater increase in throttle rate of incoming transactions, a 5% or greater increase in throttle rate of incoming transactions, a 10% or greater increase in throttle rate of incoming transactions, and so on. By incrementing or decrementing the throttle rate/injection rate, the likelihood of causing instability in the dynamic throttler 202 may be reduced.
While in the warning state 454, if the dynamic throttler 202 receives an additional indication of an assertion of the warning control signal and does not receive a stop control signal (e.g., receives an indication of a deassertion of the stop control signal), the dynamic throttler 202 may further reduce the injection rate of the incoming transactions (e.g., the throttle action control 274 may increase the throttle rate by a predetermined amount, such as increasing the throttle rate by 1%, 2%, 5%, 10%, and so on).
While in the warning state 454, if the dynamic throttler 202 receives an indication of a deassertion of the warning control signal and/or the stop control signal, the dynamic throttler 202 may enter an increase transaction state 456. In the increase transactions state 456, the dynamic throttler 202 may increment the injection rate of the transactions in the incoming traffic 252 from the injection rate of the warning state 454. If, while in the increase transactions state 456, the dynamic throttler 202 receives an indication of a deassertion of the warning control signal and/or the stop control signal and the injection rate is less than an injection rate threshold, the dynamic throttler 202 may remain in the increase transactions state 456 and continue to increment the injection rate (e.g., until the injection rate meets or exceeds the injection rate threshold).
If, while in the increase transactions state 456, the dynamic throttler receives an indication of a deassertion of the warning control signal and/or the stop control signal and the injection rate is greater than an injection rate threshold, the dynamic throttler 202 may reenter the idle state 452. In the idle state 452, the injection rate may return to full capacity (e.g., the throttle action control 274 may decrease the throttle rate for incoming transactions to 0%). However, if, while in the increase transactions state 456, the dynamic throttler receives an indication of an assertion of the warning control signal, the dynamic throttler 202 may reenter the warning state 454.
While in the warning state 454, if the dynamic throttler 202 receives an indication of an assertion of the warning control signal and the stop control signal, the dynamic throttler 202 may enter a stop state 458 and reduce the injection rate of the incoming transactions based on the stop threshold (e.g., the throttle action control 274 may increase the throttle rate to 100%). While in the stop state 458, if the dynamic throttler 202 receives an indication of an assertion of both the warning control signal and the stop control signal, the dynamic throttler 202 may maintain a throttle rate of 100%. However, while in the stop state 458, if the dynamic throttler 202 receives an indication of an assertion of the warning control signal and an indication of a deassertion of the stop control signal, the dynamic throttler 202 may reenter the warning state 454, and the dynamic throttler 202 may reduce the throttling rate accordingly.
Programmable Address Remapping
As previously stated, congestion may accumulate on the NoC 104 as a result of multiple main IP 152 being mapped to multiple secondary IP 154.
For example, the main IP 152 may have an associated logical address 606 that may be mapped by the address mapper 608 to the secondary IP 154E. However, if congestion develops at the secondary IP 154E or the secondary bridge 108E, the address mapper 608 may remap the logical address 606 of the main IP 152 to a destination that has little or no congestion, such as the secondary IP 154H.
In query block 756, the address mapper 608 may perform a range check on the first memory type and determine whether the range check returns a match for a range corresponding to the first memory type. If the range check returns a match for a range corresponding to the first memory type, then in process block 758, the address mapper 608 may remap the logical address to a physical address corresponding to the first memory type. If the range check does not return a match for a range corresponding to the first memory type, then in query block 760, the address mapper 608 determines if the range check returns a match for a range corresponding to the second memory type. If the range check returns a match for a range corresponding to the second memory type, then, in process block 762, the address mapper 608 may remap the logical address to a physical address corresponding to the second memory type.
If the range check does not return a match for a range corresponding to the first memory type or the second memory type, then in query block 764, the address mapper 608 determines if the range check returns a match for a range corresponding to the third memory type. If the range check returns a match for a range corresponding to the third memory type, then, in process block 766, the address mapper 608 may remap the logical address to a physical address corresponding to the third memory type. However, if the range check does not return a match for a range corresponding to the first memory type, the second memory type, or the third memory type, the address mapper 608 may assign a default out-of-range address in block 768, which may generate a user-readable error message.
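The cascaded range check of the method 750 may be illustrated by the following Python sketch; the address ranges, physical bases, and default out-of-range value are placeholder assumptions, as the actual ranges would be programmed into the address mapper 608.

MEMORY_RANGES = [
    # (memory type, logical range start, logical range end, physical base) -- assumed values
    ("first",  0x0000_0000, 0x3FFF_FFFF, 0x8000_0000),
    ("second", 0x4000_0000, 0x5FFF_FFFF, 0xC000_0000),
    ("third",  0x6000_0000, 0x600F_FFFF, 0xE000_0000),
]
DEFAULT_OUT_OF_RANGE = 0xFFFF_0000  # placeholder default address (assumption)

def remap(logical_address):
    """Check each memory type in order and remap on the first range hit."""
    for memory_type, start, end, physical_base in MEMORY_RANGES:
        if start <= logical_address <= end:
            # range hit: translate the offset into the matching physical region
            return memory_type, physical_base + (logical_address - start)
    # no range matched: assign the default out-of-range address (block 768)
    return "out-of-range", DEFAULT_OUT_OF_RANGE

memory_type, physical = remap(0x4000_1000)
print(memory_type, hex(physical))  # second 0xc0001000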
The address mapper 608 may perform range checks on the different types of memory used in the address mapper 608 (e.g., as described in the method 750 above). As may be observed, the range checks for DDR memory (e.g., as described in the query block 756) may be performed by DDR range check circuitry 810, the range checks for HBM memory (e.g., as described in the query block 760) may be performed by the HBM range check circuitry 812, and the range checks for AXI4-Lite memory (e.g., as described in the query block 764) may be performed by the AXI4-Lite range check circuitry 814.
For example, the DDR range check circuitry 810 may perform a range check for DDR. If the address mapper 608 detects a DDR memory range match (i.e., a range hit) in the circuitry 816, the matching range may be sent to the remapping circuitry 818 and the logical address may be remapped to a physical address of the DDR memory. If there is no range hit for the DDR memory range, the HBM range check circuitry 812 may perform a range check for HBM memory. If the circuitry 816 detects an HBM memory range hit, the matching range may be sent to the remapping circuitry 818 and the logical address may be remapped to a physical address of the HBM memory.
If there is no range hit for the DDR memory range or the HBM memory range, the AXI4-Lite range check circuitry 814 may perform a range check for the AXI4-Lite memory. If the circuitry 816 detects an AXI4-Lite memory range hit, the matching range may be sent to the remapping circuitry 818 and the logical address may be remapped to a physical address of the AXI4-Lite memory. If there is no range hit for the DDR memory range, the HBM memory range, or the AXI4-Lite memory range, the address mapper 608 may assign a default out-of-range address in block 768, which may generate a user-readable error message. While
If the range check does not return a match for a range corresponding to the first memory type or the second memory type, then in query block 764, the address mapper 608 determines if the range check returns a match for a range corresponding to the third memory type. If the range check returns a match for a range corresponding to the third memory type, then, in process block 766, the address mapper 608 may remap the logical address to a physical address corresponding to the third memory type. In the example illustrated in
Similar to the example in
As may be observed, the logical address 1002 is mapped to the entry 1008C, corresponding to the secondary IP PC2 illustrated in table 1010. Similarly, the logical address 1006 is mapped to the entry 1008D, corresponding to the secondary IP PC3 illustrated in the table 1010. For example, the secondary IP PC2 and the secondary IP PC3 may include multiple HBM memory controllers. Using the lower logical address 960 and the upper logical address 962 keyed in by the user, the address mapper 608 may determine the values for the address base 968 and the address mapping data 970 using the systems and methods described in
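A heavily simplified Python sketch of this table-driven lookup follows; the window bounds are assumptions for illustration, the secondary IP names PC2 and PC3 come from the table 1010, and the derivation of the address base 968 and the address mapping data 970 is omitted.

REMAP_TABLE = [
    # (lower logical address, upper logical address, secondary IP) -- assumed windows
    (0x0000_0000, 0x0FFF_FFFF, "PC2"),  # e.g., a first HBM memory controller
    (0x1000_0000, 0x1FFF_FFFF, "PC3"),  # e.g., a second HBM memory controller
]

def lookup_secondary_ip(logical_address):
    """Return the secondary IP whose programmed window contains the address."""
    for lower, upper, secondary_ip in REMAP_TABLE:
        if lower <= logical_address <= upper:
            return secondary_ip
    return None  # no window matched; fall through to default handling

print(lookup_secondary_ip(0x1234_5678))  # PC3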
As previously stated, different applications running on an integrated circuit (e.g., the FPGA 102) may communicate using a variety of data widths.
For example, to support a 256-bit main IP 1110 (e.g., a 256-bit processing element), the NoC 104 may provide the 256-bit main bridge 1102. To support a 128-bit main IP 1112 (e.g., a 128-bit processing element), the NoC 104 may provide the 128-bit main bridge 1104. To support a 64-bit main IP 1114 (e.g., a 64-bit processing element), the NoC 104 may provide the 64-bit main bridge 1106. To support a 32-bit main IP 1116 (e.g., a 32-bit processing element), the NoC 104 may provide the 32-bit main bridge 1108. However, main bridges may be large and draw significant power. Thus, implementing numerous main bridges may consume excessive area on the FPGA 102 and may consume excessive power. To avoid disposing numerous components such as the main bridges 1102-1108 on the FPGA 102, a flexible data width converter may be implemented to use dynamic widths in fewer (e.g., one) main bridges.
To do so, the main bridge 106 may send 256-bit data to the data width converter 1152. To communicate with the main IP 152 that supports 256-bit data, the data width converter 1152 may leave the 256-bit data unchanged, as the data transferred between the main bridge 106 and the main IP 152 is already in a format supported by the main IP 152. For main IP 152 that supports 64-bit data, however, the data width converter 1152 may downconvert the 256-bit data to 64-bit data to communicate with the main IP 152 that supports 64-bit data. Likewise, for main IP 152 that supports 16-bit data, the data width converter 1152 may downconvert the 256-bit data to 16-bit data. Similarly, the data width converter 1152 may receive data from the main IP 152 that supports a first data width (e.g., 256-bit data, 128-bit data, 64-bit data, 32-bit data, 16-bit data, and so on) and may either downconvert or upconvert the data to a data width supported by the secondary IP 154 (e.g., 256-bit data, 128-bit data, 64-bit data, 32-bit data, 16-bit data, 8-bit data, and so on).
For example, the data width converter 1152 may receive 256-bit data from the main IP 152 and may downconvert the data to a data width supported by the secondary IP 154 (e.g., 128-bit data, 64-bit data, 32-bit data, 16-bit data, 8-bit data, and so on). As may be observed, by utilizing the data width converter 1152, the NoC 104 may be designed with significantly fewer main bridges. As such, the data width converter 1152 may enable space and power conservation on the FPGA 102.
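The down- and upconversion performed by the data width converter 1152 may be modeled as in the Python sketch below, which splits one 256-bit beat into narrower beats and merges narrow beats back together; the function names and the plain-integer representation of bus data are illustrative assumptions.

def downconvert(beat_256, out_width):
    """Split one 256-bit beat into a list of out_width-bit beats, least significant first."""
    assert 256 % out_width == 0
    mask = (1 << out_width) - 1
    return [(beat_256 >> (i * out_width)) & mask for i in range(256 // out_width)]

def upconvert(narrow_beats, in_width):
    """Merge narrow beats (least significant first) into one 256-bit beat."""
    beat = 0
    for i, b in enumerate(narrow_beats):
        beat |= (b & ((1 << in_width) - 1)) << (i * in_width)
    return beat

# Round trip: 64-bit data placed in the most significant 64-bit lane survives conversion.
original = 0x1122334455667788 << 192
assert upconvert(downconvert(original, 64), 64) == original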
The table 1254 illustrates the data from the user logic once it has been adapted by the data width converter 1152 to 256-bit data such that the data may be processed by the main bridge 106. It may be observed that each word of user logic data from the table 1252 is packed to occupy 32 bits of data on the main bridge 106. For instance, for WDATA0, the data from the user logic is packed to be preceded by 224 0s, and the WSTRB value of 32′h0000000F indicates that the least significant four byte lanes are enabled, and thus the least significant 32 bits are used to transfer the data. For WDATA1, the data is packed to be preceded by 192 0s and followed by 32 0s, and the WSTRB value of 32′h000000F0 indicates that byte lanes 4-7 (data bits 32-63) are used to transfer the data. For WDATA2, the data from the user logic is packed to be preceded by 160 0s and followed by 64 0s, and the WSTRB value of 32′h00000F00 indicates that byte lanes 8-11 (data bits 64-95) are used to transfer the data. For WDATA3, the data from the user logic is packed to be preceded by 128 0s and followed by 96 0s, and the WSTRB value of 32′h0000F000 indicates that byte lanes 12-15 (data bits 96-127) are used to transfer the data.
As the converted data is 256 bits long, WDATA2, similarly to WDATA0, is converted to occupy the first 128 bits while the last 128 bits are 0s, with the WSTRB value of 32′h0000FFFF indicating that the least significant 16 byte lanes are enabled, and thus the least significant 128 bits are used to transfer the data. Similarly to WDATA1, for WDATA3 the first 128 bits are 0s and the data from the user logic is converted to occupy the last 128 bits, with the WSTRB value of 32′hFFFF0000 indicating that the most significant 16 byte lanes are enabled, and thus the most significant 128 bits are used to transfer the data. As such, the data width converter 1152 may convert 128-bit write data to 256-bit write data.
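Both strobe patterns follow directly from WSTRB carrying one strobe bit per byte of the 256-bit (32-byte) WDATA bus, so narrow data packed into a given lane asserts one strobe bit per byte of the narrow width, shifted to that lane's byte offset. The short Python sketch below reproduces the values from both tables; the function name is an illustrative assumption.

def wstrb_for_lane(beat, width_bits):
    """Return the WSTRB mask for narrow data packed into the given lane of a 256-bit bus."""
    bytes_per_beat = width_bits // 8
    return ((1 << bytes_per_beat) - 1) << (beat * bytes_per_beat)

# 32-bit user data (table 1254): 32'h0000000F, 32'h000000F0, 32'h00000F00, 32'h0000F000
print([hex(wstrb_for_lane(i, 32)) for i in range(4)])   # ['0xf', '0xf0', '0xf00', '0xf000']

# 128-bit user data: 32'h0000FFFF and 32'hFFFF0000
print([hex(wstrb_for_lane(i, 128)) for i in range(2)])  # ['0xffff', '0xffff0000']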
While the embodiments set forth in the present disclosure may be susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and have been described in detail herein. However, it should be understood that the disclosure is not intended to be limited to the particular forms disclosed. The disclosure is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the disclosure as defined by the following appended claims.
The techniques presented and claimed herein are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible or purely theoretical. Further, if any claims appended to the end of this specification contain one or more elements designated as “means for [perform]ing [a function] . . . ” or “step for [perform]ing [a function] . . . ”, it is intended that such elements are to be interpreted under 35 U.S.C. 112(f). However, for any claims containing elements designated in any other manner, it is intended that such elements are not to be interpreted under 35 U.S.C. 112(f).
Example Embodiments
EXAMPLE EMBODIMENT 1. A field-programmable gate array, comprising:
a network-on-chip (NoC) communicatively coupled to one or more internal components and one or more external components, the NoC comprising:
- a first bridge configurable to receive data from the internal components;
- a second bridge configurable to transmit data to the one or more external components;
- one or more switches communicatively configurable to facilitate data transfer between the first bridge and the second bridge;
- a buffer configurable to store data transferred from the one or more internal components to the one or more external components; and
- dynamic data throttling circuitry configurable to adjust a rate of flow of the data transferred from the one or more internal components to the one or more external components based at least upon receiving a first control signal from the buffer, a second control signal from the buffer, or both.
EXAMPLE EMBODIMENT 2. The field-programmable gate array of example embodiment 1, wherein the buffer is configurable to:
send the first control signal to the dynamic data throttling circuitry in response to determining that a first buffer threshold has been reached or exceeded;
send the second control signal to the dynamic data throttling circuitry in response to determining that a second buffer threshold has been reached or exceeded; or
both.
EXAMPLE EMBODIMENT 3. The field-programmable gate array of example embodiment 2, wherein the first buffer threshold, the second buffer threshold, or both are programmable.
EXAMPLE EMBODIMENT 4. The field-programmable gate array of example embodiment 1, wherein the dynamic data throttling circuitry is configurable to adjust the rate of flow of the data to a first throttling rate in response to receiving the first control signal.
EXAMPLE EMBODIMENT 5. The field-programmable gate array of example embodiment 4, wherein the dynamic data throttling circuitry is configurable to adjust the rate of flow of the data to a second throttling rate in response to receiving a third control signal, wherein the second throttling rate comprises an incremental difference from the first throttling rate.
EXAMPLE EMBODIMENT 6. The field-programmable gate array of example embodiment 1, wherein the dynamic data throttling circuitry is configurable to adjust the rate of flow of the data to a throttling rate of 100% in response to receiving an assertion of the second control signal.
EXAMPLE EMBODIMENT 7. The field-programmable gate array of example embodiment 1, wherein the dynamic data throttling circuitry comprises one or more hardware components implementing software instructions.
EXAMPLE EMBODIMENT 8. The field-programmable gate array of example embodiment 1, wherein the dynamic data throttling circuitry is disposed between a switch of the one or more switches and the one or more external components.
EXAMPLE EMBODIMENT 9. The field-programmable gate array of example embodiment 1, wherein the first bridge is configurable to determine a number of pending transactions received at the first bridge, and, in response to determining the number of pending transactions, transmit a first gating control signal, a second gating control signal, or both to the dynamic data throttling circuitry.
EXAMPLE EMBODIMENT 10. The field-programmable gate array of example embodiment 9, wherein, in response to receiving the first gating control signal, the second gating control signal, or both, the dynamic data throttling circuitry is configurable to adjust a number of transactions entering transaction gating circuitry.
EXAMPLE EMBODIMENT 11. The field-programmable gate array of example embodiment 9, wherein the first gating control signal comprises a pending write transaction signal, and the second gating control signal comprises a pending read transaction signal.
EXAMPLE EMBODIMENT 12. A method, comprising:
receiving, from a data buffer, a first control signal indicating a first level of congestion at a first bridge, a second control signal indicating a second level of congestion at the first bridge, or both;
receiving, from an external component, an enable signal;
in response to receiving the enable signal and the first control signal, the second control signal, or both, sending a target instruction to a throttle action controller, wherein the target instruction identifies the external component;
determining, at the throttle action controller, a throttle rate for incoming data to the external component based on whether the first control signal or the second control signal has been asserted; and
throttling the incoming data to the identified external component at the determined throttle rate.
EXAMPLE EMBODIMENT 13. The method of example embodiment 12, wherein the throttle action controller comprises one or more hardware components implementing software instructions.
EXAMPLE EMBODIMENT 14. The method of example embodiment 12, comprising, in response to an assertion of the first control signal, throttling the incoming data at a first throttling rate.
EXAMPLE EMBODIMENT 15. The method of example embodiment 14, wherein the throttle action controller is configurable to adjust a rate of flow of the incoming data to a second throttling rate in response to receiving an assertion of the second control signal, wherein the second throttling rate is less than the first throttling rate.
EXAMPLE EMBODIMENT 16. The method of example embodiment 12, wherein receiving the enable signal comprises receiving the enable signal at target selection circuitry comprising one or more hardware components implementing software instructions.
EXAMPLE EMBODIMENT 17. A method, comprising:
receiving, at a hardened address mapper, a logical address from an internal component of an integrated circuit;
performing, at the hardened address mapper, a range check on a first memory type;
in response to determining that the logical address matches a first range corresponding to the first memory type, mapping, via the hardened address mapper, the logical address to a first physical address corresponding to the first memory type;
in response to determining that the logical address does not match the first range, performing a range check on a second memory type;
in response to determining that the logical address matches a second range corresponding to the second memory type, mapping, via the hardened address mapper, the logical address to a second physical address corresponding to the second memory type;
in response to determining that the logical address does not match the first range or the second range, performing a range check on a third memory type;
in response to determining that the logical address matches a third range corresponding to the third memory type, mapping, via the hardened address mapper, the logical address to a third physical address corresponding to the third memory type; and
in response to determining that the logical address does not match the first range, the second range, or the third range, mapping the logical address to a default out-of-range address.
EXAMPLE EMBODIMENT 18. The method of example embodiment 17, wherein the first memory type comprises double data rate memory.
EXAMPLE EMBODIMENT 19. The method of example embodiment 17, wherein the second memory type comprises high bandwidth memory.
EXAMPLE EMBODIMENT 20. The method of example embodiment 17, wherein the integrated circuit comprises a field-programmable gate array.
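For illustration only, the cascaded range checks of example embodiments 17-20 can be rendered directly in C. The range layout, the base-plus-offset mapping, and the out-of-range value are hypothetical; the first and second memory types are taken as double data rate (DDR) and high bandwidth memory (HBM) per embodiments 18 and 19.

```c
#include <stdint.h>

typedef struct {
    uint64_t start; /* first logical address in the range */
    uint64_t end;   /* one past the last logical address */
    uint64_t base;  /* physical base address the range maps onto */
} addr_range_t;

/* Hypothetical default out-of-range address. */
#define OUT_OF_RANGE_ADDR 0xFFFFFFFFFFFFFFFFULL

static int in_range(uint64_t addr, const addr_range_t *r)
{
    return addr >= r->start && addr < r->end;
}

/* Check the first (DDR) range, then the second (HBM) range, then the
 * third memory type; fall through to the default out-of-range address. */
uint64_t map_logical_address(uint64_t logical,
                             const addr_range_t *ddr,
                             const addr_range_t *hbm,
                             const addr_range_t *third)
{
    if (in_range(logical, ddr))
        return ddr->base + (logical - ddr->start);
    if (in_range(logical, hbm))
        return hbm->base + (logical - hbm->start);
    if (in_range(logical, third))
        return third->base + (logical - third->start);
    return OUT_OF_RANGE_ADDR;
}
```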
Claims
1. A field-programmable gate array, comprising:
- a network-on-chip (NoC) communicatively coupled to one or more internal components and one or more external components, the NoC comprising:
- a first bridge configurable to receive data from the one or more internal components;
- a second bridge configurable to transmit data to the one or more external components;
- one or more switches configurable to facilitate data transfer between the first bridge and the second bridge;
- a buffer configurable to store data transferred from the one or more internal components to the one or more external components; and
- dynamic data throttling circuitry configurable to adjust a rate of flow of the data transferred from the one or more internal components to the one or more external components based at least upon receiving a first control signal from the buffer, a second control signal from the buffer, or both.
2. The field-programmable gate array of claim 1, wherein the buffer is configurable to:
- send the first control signal to the dynamic data throttling circuitry in response to determining that a first buffer threshold has been reached or exceeded;
- send the second control signal to the dynamic data throttling circuitry in response to determining that a second buffer threshold has been reached or exceeded; or
- both.
3. The field-programmable gate array of claim 2, wherein the first buffer threshold, the second buffer threshold, or both are programmable.
4. The field-programmable gate array of claim 1, wherein the dynamic data throttling circuitry is configurable to adjust the rate of flow of the data to a first throttling rate in response to receiving the first control signal.
5. The field-programmable gate array of claim 4, wherein the dynamic data throttling circuitry is configurable to adjust the rate of flow of the data to a second throttling rate in response to receiving a third control signal, wherein the second throttling rate comprises an incremental difference from the first throttling rate.
6. The field-programmable gate array of claim 1, wherein the dynamic data throttling circuitry is configurable to adjust the rate of flow of the data to a throttling rate of 100% in response to receiving an assertion of the second control signal.
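As an illustrative sketch only, the following C model captures the buffer-threshold behavior of claims 2-6, treating the throttling rate as the percentage of traffic withheld, so that a rate of 100% stops the flow entirely as in claim 6. The threshold values, the first throttling rate, and the increment are hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t fill_level;       /* current buffer occupancy */
    uint32_t first_threshold;  /* programmable per claim 3 */
    uint32_t second_threshold; /* programmable per claim 3 */
} noc_buffer_t;

/* First control signal: first buffer threshold reached or exceeded. */
bool buffer_first_signal(const noc_buffer_t *b)
{
    return b->fill_level >= b->first_threshold;
}

/* Second control signal: second buffer threshold reached or exceeded. */
bool buffer_second_signal(const noc_buffer_t *b)
{
    return b->fill_level >= b->second_threshold;
}

/* Dynamic throttling: the second signal forces a 100% throttle
 * (claim 6); the first signal selects a first rate, which a further
 * (third) control signal adjusts by an increment (claims 4-5). */
uint32_t throttle_percent(bool first, bool second, bool third)
{
    const uint32_t first_rate = 50; /* hypothetical first throttling rate */
    const uint32_t increment  = 10; /* hypothetical incremental difference */
    if (second)
        return 100;
    if (first)
        return third ? first_rate + increment : first_rate;
    return 0;
}
```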
7. The field-programmable gate array of claim 1, wherein the dynamic data throttling circuitry comprises one or more hardware components implementing software instructions.
8. The field-programmable gate array of claim 1, wherein the dynamic data throttling circuitry is disposed between a switch of the one or more switches and the one or more external components.
9. The field-programmable gate array of claim 1, wherein the first bridge is configurable to determine a number of pending transactions received at the first bridge, and, in response to determining the number of pending transactions, transmit a first gating control signal, a second gating control signal, or both to the dynamic data throttling circuitry.
10. The field-programmable gate array of claim 9, wherein, in response to receiving the first gating control signal, the second gating control signal, or both, the dynamic data throttling circuitry is configurable to adjust a number of transactions entering transaction gating circuitry.
11. The field-programmable gate array of claim 9, wherein the first gating control signal comprises a pending write transaction signal, and the second gating control signal comprises a pending read transaction signal.
12. A method, comprising:
- receiving, from a data buffer, a first control signal indicating a first level of congestion at a first bridge, a second control signal indicating a second level of congestion at the first bridge, or both;
- receiving, from an external component, an enable signal;
- in response to receiving the enable signal and the first control signal, the second control signal, or both, sending a target instruction to a throttle action controller, wherein the target instruction identifies the external component;
- determining, at the throttle action controller, a throttle rate for incoming data to the external component based on whether the first control signal or the second control signal has been asserted; and
- throttling the incoming data to the identified external component at the determined throttle rate.
13. The method of claim 12, wherein the throttle action controller comprises one or more hardware components implementing software instructions.
14. The method of claim 12, comprising, in response to an assertion of the first control signal, throttling the incoming data at a first throttling rate.
15. The method of claim 14, wherein the throttle action controller is configurable to adjust a rate of flow of the incoming data to a second throttling rate in response to receiving an assertion of the second control signal, wherein the second throttling rate is less than the first throttling rate.
16. The method of claim 12, wherein receiving the enable signal comprises receiving the enable signal at target selection circuitry comprising one or more hardware components implementing software instructions.
17. A method, comprising:
- receiving, at a hardened address mapper, a logical address from an internal component of an integrated circuit;
- performing, at the hardened address mapper, a range check on a first memory type;
- in response to determining that the logical address matches a first range corresponding to the first memory type, mapping, via the hardened address mapper, the logical address to a first physical address corresponding to the first memory type;
- in response to determining that the logical address does not match the first range, performing a range check on a second memory type;
- in response to determining that the logical address matches a second range corresponding to the second memory type, mapping, via the hardened address mapper, the logical address to a second physical address corresponding to the second memory type;
- in response to determining that the logical address does not match the first range or the second range, performing a range check on a third memory type;
- in response to determining that the logical address matches a third range corresponding to the third memory type, mapping, via the hardened address mapper, the logical address to a third physical address corresponding to the third memory type; and
- in response to determining that the logical address does not match the first range, the second range, or the third range, mapping the logical address to a default out-of-range address.
18. The method of claim 17, wherein the first memory type comprises double data rate memory.
19. The method of claim 17, wherein the second memory type comprises high bandwidth memory.
20. The method of claim 17, wherein the integrated circuit comprises a field-programmable gate array.