Abstract: An inverter of an embodiment includes a power converter that can perform at least one of a first action of generating electricity to be output to a power system based on a pseudo inertia and a second action of generating electricity to be output to the power system without being based on the pseudo inertia, and a transmitter that transmits, to a high-order control system, first information indicating which of the first action and the second action the power converter is performing.
Abstract: Disclosed are techniques for external quiesce of a core in a multi-core system. In some aspects, a method for external quiesce of a core in a multi-core system-on-chip (SoC) comprises, at control circuitry for the multi-core SoC, receiving an indication that a core in the multi-core SoC should be quiesced, determining that the core should be externally quiesced, and asserting an external quiesce request input into the core.
Abstract: A method includes inputting at least one compressed image into a computing system. The method also includes an in-place patching process in which another image is decompressed over the compressed image by a processor. Local variables are stored periodically; after an interruption to the in-place patching, power is restored, and execution of the in-place patching is resumed at a later time by the processor by restoring the local variables. The method also includes completing the in-place patching process of decompressing the image over the inputted compressed image after restoring the local variables.
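A rough Python sketch of the resumable in-place patching idea in the abstract above. The chunked format, checkpoint file, and variable names are illustrative assumptions rather than details from the patent; each chunk is assumed to be independently compressed so decompression can restart cleanly at a chunk boundary:

```python
import json, os, zlib

CHECKPOINT = "patch_checkpoint.json"  # hypothetical persistent store

def inplace_patch(buf: bytearray, chunks: list) -> bytearray:
    """Decompress independently-compressed chunks over `buf` in place,
    checkpointing the loop's local variables so an interrupted run can
    resume where it left off after power is restored."""
    i, out_off = 0, 0
    if os.path.exists(CHECKPOINT):               # resume after interruption
        with open(CHECKPOINT) as f:
            saved = json.load(f)
        i, out_off = saved["i"], saved["out_off"]
    while i < len(chunks):
        piece = zlib.decompress(chunks[i])
        buf[out_off:out_off + len(piece)] = piece   # patch over old image
        i, out_off = i + 1, out_off + len(piece)
        with open(CHECKPOINT, "w") as f:            # store local variables
            json.dump({"i": i, "out_off": out_off}, f)
    os.remove(CHECKPOINT)
    return buf
```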
Abstract: A method of operating a storage device including a neural network processor includes: outputting, by a controller device, a trigger signal instructing the neural network processor to perform a neural network operation in response to a command from a host device; requesting, by the neural network processor, in response to the trigger signal, target model data about parameters of a target model and instruction data for instructing the neural network operation from a memory device storing the target model data and the instruction data; receiving, by the neural network processor, the target model data and the instruction data from the memory device; and outputting, by the neural network processor, inference data based on the target model data and the instruction data.
Type:
Grant
Filed:
May 1, 2023
Date of Patent:
July 30, 2024
Assignees:
SAMSUNG ELECTRONICS CO., LTD., SEOUL NATIONAL UNIVERSITY R&DB FOUNDATION
Inventors:
Sungroh Yoon, Hyeokjun Choe, Seongsik Park, Seijoon Kim
Abstract: A prefetcher for a coprocessor is disclosed. An apparatus includes a processor and a coprocessor that are configured to execute processor and coprocessor instructions, respectively. The processor and coprocessor instructions appear together in code sequences fetched by the processor, with the coprocessor instructions being provided to the coprocessor by the processor. The apparatus further includes a coprocessor prefetcher configured to monitor a code sequence fetched by the processor and, in response to identifying a presence of coprocessor instructions in the code sequence, capture the memory addresses, generated by the processor, of operand data for coprocessor instructions. The coprocessor prefetcher is further configured to issue, to a cache memory accessible to the coprocessor, prefetches for data associated with the memory addresses prior to execution of the coprocessor instructions by the coprocessor.
Type:
Grant
Filed:
July 28, 2023
Date of Patent:
July 30, 2024
Assignee:
Apple Inc.
Inventors:
Brandon H. Dwiel, Andrew J. Beaumont-Smith, Eric J. Furbish, John D. Pape, Stephen G. Meier, Tyler J. Huberty
Abstract: In an example there is provided a computer-implemented method which comprises: generating an execution plan for a received user query; converting the execution plan into bytecode; compiling the bytecode to unoptimized machine code and beginning execution of the execution plan by executing the unoptimized machine code; compiling optimized machine code from the bytecode whilst executing the unoptimized machine code; and switching to executing the optimized machine code, once it has been compiled, in order to execute the execution plan.
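A minimal sketch of this tiered-compilation scheme, assuming a hypothetical compile_plan stand-in (the real method compiles bytecode to machine code; here the "compiled" artifacts are plain Python callables):

```python
import threading, time

def compile_plan(bytecode, optimize):
    # Stand-in compiler: the optimized build takes longer to produce
    # but would run faster; both outputs here are plain callables.
    time.sleep(0.5 if optimize else 0.01)   # hypothetical compile cost
    return (lambda row: row * 2) if optimize else (lambda row: row + row)

def execute_plan(bytecode, rows):
    # Begin on quickly-produced unoptimized code, compile the optimized
    # version in the background, and switch over once it is ready.
    ready = threading.Event()
    slot = {}

    def background():
        slot["fn"] = compile_plan(bytecode, optimize=True)
        ready.set()

    threading.Thread(target=background, daemon=True).start()
    fast = compile_plan(bytecode, optimize=False)
    for row in rows:
        fn = slot["fn"] if ready.is_set() else fast
        yield fn(row)   # switch-over can happen mid-query
```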
Abstract: A processor may include an instruction pipeline that executes program instructions in-order according to a program order. During operation, the instruction pipeline may detect that data is missing for a first instruction. In response, the instruction pipeline may send a request to load the missing data for the first instruction. However, the instruction pipeline may not necessarily stall operation to wait for the missing data to be loaded. Instead, the instruction pipeline may continue executing instructions subsequent to the first instruction. During the continued execution, the instruction pipeline may detect that data is missing for a second instruction, and send a request to load the missing data for the second instruction. The instruction pipeline may continue such operation until it determines that a condition occurs that prevents the continued execution. When the condition occurs, the instruction pipeline may stop the continued execution, and then re-execute the first instruction.
Type:
Grant
Filed:
August 30, 2022
Date of Patent:
June 4, 2024
Assignee:
Apple Inc.
Inventors:
Justin M Deinlein, Michael L Karm, Brett S Feero, David E Kroesche
Abstract: Systems, methods, and apparatuses relating to instructions to reset software thread runtime property histories in a hardware processor are described. In one embodiment, a hardware processor includes a hardware guide scheduler comprising a plurality of software thread runtime property histories; a decoder to decode a single instruction into a decoded single instruction, the single instruction having a field that identifies a model-specific register; and an execution circuit to execute the decoded single instruction to check that an enable bit of the model-specific register is set, and when the enable bit is set, to reset the plurality of software thread runtime property histories of the hardware guide scheduler.
Type:
Grant
Filed:
May 3, 2023
Date of Patent:
April 23, 2024
Assignee:
Intel Corporation
Inventors:
Eliezer Weissmann, Mark Charney, Michael Mishaeli, Robert Valentine, Itai Ravid, Jason W. Brandt, Gilbert Neiger, Baruch Chaikin, Efraim Rotem
Abstract: A method of constructing an adaptive multiply accumulate layer in a convolutional neural network, including: determining an activation data map width, an activation data map height, a channel depth, a batch, a kernel width, a kernel height and a filter set number; setting a first dimension of an adaptive multiplier layer based on the activation data map width; setting a second dimension of the adaptive multiplier layer based on the channel depth; setting a third dimension of the adaptive multiplier layer based on the filter set number; and constructing the adaptive multiplier layer based on the first dimension, the second dimension and the third dimension.
Abstract: A multithread processor includes a time counter and a register scoreboard and provides a method for statically dispatching instructions with preset execution times based on a write time of a register in the register scoreboard and the time counter provided to an execution pipeline.
Abstract: Disclosed herein are vector index registers in vector processors that each store multiple addresses for accessing multiple positions in vectors. It is known to use scalar index registers in vector processors to access multiple positions of vectors by changing the scalar index registers in vector operations. By using a vector index register for indexing positions of one or more operand vectors, the scalar index register can be replaced and at least the continual changing of the scalar index register can be avoided.
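A small illustration of the contrast drawn above, using NumPy arrays as stand-ins for vector registers (the mapping to actual vector-processor registers is an assumption for illustration):

```python
import numpy as np

vec = np.array([10, 20, 30, 40, 50, 60])   # operand vector register
vidx = np.array([5, 0, 3])                 # vector index register

# One gather touches several non-contiguous positions at once, where a
# scalar index register would have to be rewritten between accesses.
gathered = vec[vidx]          # -> [60, 10, 40]
vec[vidx] = gathered * 2      # scatter back through the same indices
```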
Abstract: A method creates a memo table of keys and values. Each key is an element of an input array which is an input of a machine-learning pre-processing pipeline, and each value is an output of the pipeline. The method measures (1) a hit rate H of the memo table, (2) an average time T_table to look up the table, (3) an average time T_pipeline to execute the pipeline, and (4) a threshold T_elements on the number of elements of the input array. The method looks up the value in the table by using an element of the input array as a key when T_pipeline × H > T_table and the number of elements in the input array is less than T_elements. The method calls the pipeline in place of the lookup for all of the remaining elements in the input array when the value is not in the table.
Type:
Grant
Filed:
December 15, 2021
Date of Patent:
February 20, 2024
Assignee:
INTERNATIONAL BUSINESS MACHINES CORPORATION
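A minimal sketch of the decision rule in the abstract above, assuming the pipeline is an ordinary callable and the memo table a plain dict (both are stand-ins, not the patent's structures):

```python
def transform(arr, pipeline, memo, H, T_table, T_pipeline, T_elements):
    # Use the memo table only when a lookup is expected to pay off:
    # T_pipeline * H > T_table, and the input array is small enough.
    use_table = (T_pipeline * H > T_table) and (len(arr) < T_elements)
    out = []
    for key in arr:
        if use_table and key in memo:
            out.append(memo[key])      # cheap table hit
        else:
            val = pipeline(key)        # call the pipeline instead
            memo[key] = val
            out.append(val)
    return out
```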
Abstract: Facilitating running a multi-process application using a set of unikernels includes receiving an indication of a request to fork a first process running in a first unikernel virtual machine. It further includes, in response to receiving the indication of the request to fork the process running in the first unikernel virtual machine, deploying a second unikernel virtual machine to run a second process that is a child of the first process. Unikernel scaling includes determining that a unikernel virtual machine to be deployed is associated with at least a portion of a kernel image that is already cached. It further includes, in response to determining that the unikernel virtual machine to be deployed is associated with the at least portion of the kernel image that is already cached, mapping the unikernel virtual machine to the at least portion of the kernel image that is already cached.
Abstract: A processor includes a time counter and provides a method for statically dispatching instructions with preset execution times based on a time count from the time counter provided to an execution pipeline.
Abstract: A processor includes a time counter and a time-resource matrix and provides a method for statically dispatching instructions if the resources are available based on data stored in the time-resource matrix, and wherein execution times for the instructions use a time count from the time counter to specify when the instructions may be provided to an execution pipeline.
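A speculative software sketch of how a register scoreboard and time-resource matrix might combine to preset execution times, per the two abstracts above; the dict-based structures and fixed latency are assumptions for illustration:

```python
def try_dispatch(instr, time_count, busy, scoreboard, latency=3):
    # Earliest start: next cycle, or the preset write time of any
    # source register in the scoreboard, whichever is later.
    start = max([time_count + 1] +
                [scoreboard.get(r, 0) for r in instr["src"]])
    # Consult the time-resource matrix: every needed resource must be
    # free at the chosen time, or dispatch is retried next cycle.
    if any(start in busy.setdefault(res, set()) for res in instr["res"]):
        return None
    for res in instr["res"]:
        busy[res].add(start)                     # reserve the slot
    scoreboard[instr["dst"]] = start + latency   # preset the write time
    return start                                 # execution time is now static
```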
Abstract: A method performed by a coordinating entity in a disaggregated data center architecture wherein computing resources are separated in discrete resource pools and associated together to represent a functional server. The coordinating entity obtains a setup of processor cores that are coupled logically as the functional server, and determines an index indicating an identity of a cache coherency domain based on the obtained setup of processor cores. The coordinating entity further configures one or more communicating entities associated with the obtained setup of processor cores, to use the determined index when handling updated cache related data.
Type:
Grant
Filed:
June 20, 2019
Date of Patent:
September 12, 2023
Assignee:
Telefonaktiebolaget LM Ericsson (publ)
Inventors:
Chakri Padala, Amir Roozbeh, Ahsan Javed Awan
Abstract: An example method of generating an execution profile of a firmware module comprises: receiving an execution trace of a firmware module comprising a plurality of executable instructions, wherein the execution trace comprises a plurality of execution trace records, wherein each execution trace record of the plurality of execution trace records indicates a successful execution of an executable instruction identified by a program counter (PC) value; retrieving a first execution trace record of the plurality of execution trace records, wherein the first execution trace record comprises a first PC value; identifying a first executable instruction referenced by the first PC value; identifying a firmware function containing the first executable instruction; incrementing a cycle count for the firmware function by a number of cycles associated with the first executable instruction; and generating, using the cycle count, an execution profile of the firmware module.
Type:
Grant
Filed:
October 2, 2020
Date of Patent:
August 8, 2023
Assignee:
Micron Technology, Inc.
Inventors:
Yun Li, Harini Komandur Elayavalli, Mark Ish
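A condensed sketch of the profiling loop described in the abstract above, with a PC-range symbol map and a per-instruction cycle table standing in for the firmware's real metadata (both are assumptions):

```python
from collections import defaultdict

def build_profile(trace, functions, cycles_per_insn):
    # trace: iterable of records, each with the PC of a retired instruction.
    # functions: {(start_pc, end_pc): name} ranges for the firmware image.
    profile = defaultdict(int)
    for record in trace:
        pc = record["pc"]
        for (start, end), name in functions.items():
            if start <= pc < end:                  # containing function
                profile[name] += cycles_per_insn.get(pc, 1)
                break
    return dict(profile)   # per-function cycle counts = execution profile
```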
Abstract: Systems, methods, and other embodiments associated with autonomous cloud-node scoping for big-data machine learning use cases are described. In some example embodiments, an automated scoping tool, method, and system are presented that, for each of multiple combinations of parameter values, (i) set a combination of parameter values describing a usage scenario, (ii) execute a machine learning application according to the combination of parameter values on a target cloud environment, and (iii) measure the computational cost for the execution of the machine learning application. A recommendation regarding configuration of central processing unit(s), graphics processing unit(s), and memory for the target cloud environment to execute the machine learning application is generated based on the measured computational costs.
Type:
Grant
Filed:
May 26, 2022
Date of Patent:
August 8, 2023
Assignee:
Oracle International Corporation
Inventors:
Edward R. Wetherbee, Kenny C. Gross, Guang C. Wang, Matthew T. Gerdes
Abstract: Aspects disclosed in the detailed description include providing load address predictions using address prediction tables based on load path history in processor-based systems. In one aspect, a load address prediction engine provides a load address prediction table containing multiple load address prediction table entries. Each load address prediction table entry includes a predictor tag field and a memory address field for a load instruction. The load address prediction engine generates a table index and a predictor tag based on an identifier and a load path history for a detected load instruction. The table index is used to look up a corresponding load address prediction table entry. If the predictor tag matches the predictor tag field of the load address prediction table entry corresponding to the table index, the memory address field of the load address prediction table entry is provided as a predicted memory address for the load instruction.
Type:
Grant
Filed:
March 31, 2016
Date of Patent:
July 25, 2023
Assignee:
QUALCOMM Incorporated
Inventors:
Rami Mohammad Al Sheikh, Raguram Damodaran
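A toy sketch of the lookup path described in the abstract above; the hash split, table width, and entry layout are illustrative guesses rather than the patent's actual encoding:

```python
def predict_load_address(pc, load_path_history, table, table_bits=10):
    # Derive both the table index and the predictor tag from the load's
    # identifier (PC) combined with the load path history.
    h = hash((pc, load_path_history))
    index = h & ((1 << table_bits) - 1)
    tag = (h >> table_bits) & 0xFFFF
    entry = table[index]
    if entry is not None and entry["tag"] == tag:
        return entry["addr"]   # predicted memory address for the load
    return None                # tag mismatch: no prediction
```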
Abstract: Embodiments of the present disclosure provide an instruction processing apparatus comprising a first register configured to store a source string, wherein the source string comprises at least one byte, and execution circuitry communicatively coupled to the first register and configured to execute a comparison instruction to compare the at least one byte in the source string with an ending identifier to obtain a result value corresponding to the source string, wherein the comparison instruction is executed on each of the at least one byte in the source string and the comparison instruction is an assembly code instruction.
Abstract: Dataflow optimization by dead store elimination focuses on logically dividing a contiguous storage area into different portions by use, to allow a different number and type of dataflow and dead-store techniques on each portion. A first storage portion, containing the storage for control-flow-related metadata, is split from a remaining storage portion. Liveness analysis is executed on the first storage portion using bitvectors, with each bit representing four bytes. The remaining storage portion, containing the temporary storage for computational values, is processed using a deadness-range-based dataflow analysis. IN and OUT sets for each basic block are generated by processing the blocks' GEN and KILL sets with a backwards intersection dataflow analysis. Stores that write to the set of dead ranges in the IN sets of blocks are eliminated as dead stores.
Type:
Grant
Filed:
December 17, 2021
Date of Patent:
February 28, 2023
Assignee:
International Business Machines Corporation
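A compact sketch of the backwards intersection dataflow named in the abstract above, with dead ranges simplified to integer slot indices (a simplifying assumption):

```python
def deadness_analysis(blocks, succ, GEN, KILL):
    # Backwards dataflow: OUT[b] is the intersection of successors' IN
    # sets; IN[b] adds b's newly dead slots and removes re-used ones.
    IN = {b: set() for b in blocks}
    OUT = {b: set() for b in blocks}
    changed = True
    while changed:
        changed = False
        for b in reversed(blocks):
            OUT[b] = (set.intersection(*(IN[s] for s in succ[b]))
                      if succ[b] else set())
            new_in = GEN[b] | (OUT[b] - KILL[b])
            if new_in != IN[b]:
                IN[b], changed = new_in, True
    return IN   # stores writing only slots in IN[b] are dead stores
```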
Abstract: A data management method includes receiving, by a management server, a first request, determining, based on an identifier of a first user in the first request, whether a shadow tenant bucket associated with the identifier of the first user exists, and if the shadow tenant bucket associated with the identifier of the first user exists, storing, in the shadow tenant bucket associated with the identifier of the first user, an acceleration engine image (AEI) that the first user requests to register, where a shadow tenant bucket is used to store an AEI of a specified user, and each shadow tenant bucket is in a one-to-one correspondence with a user.
Abstract: A solution for the dynamic design, use, and modification of models is provided. The solution can receive an electronic communication identifying a request or event and process the electronic communication in a runtime environment by binding a model of a collection of models to dynamically construct an implementation of the model. Collective properties of the set of related models can emerge dynamically. The binding can comprise late binding of an application associated with the collection of models to enable at least one user to perform at least one interaction using the environment without disrupting either the environment or the application.
Abstract: A graph-based data multi-operation system includes a data multi-operation management subsystem coupled to an application and accelerator subsystems. The data multi-operation management subsystem receives a data multi-operation graph from the application that identifies first data and defines operations for performance on the first data to transform the first data into second data. The data multi-operation management subsystem assigns each of the operations to at least one of the accelerator subsystems, and configures the accelerator subsystems to perform the operations in a sequence that transforms the first data into the second data. When the data multi-operation management subsystem determines a completion status for the performance of the operations by the accelerator subsystems, it transmits a completion status communication to the application that indicates the completion status of the performance of the operations by the accelerator subsystems.
Type:
Grant
Filed:
October 21, 2020
Date of Patent:
November 15, 2022
Assignee:
Dell Products L.P.
Inventors:
Gaurav Chawla, Mark Steven Sanders, William Price Dawkins, Jimmy D. Pike, Elie Jreij, Robert W. Hormuth
Abstract: A priority queue sorting system including a priority queue and a message storage. The priority queue includes multiple priority blocks that are cascaded in order from a lowest priority block to a highest priority block. Each priority block includes a register block storing an address and an identifier, compare circuitry that compares a new identifier with the stored identifier for determining relative priority, and select circuitry that determines whether to keep or shift and replace the stored address and identifier within the priority queue based on the relative priority. The message storage stores message payloads, each pointed to by a corresponding stored address of a corresponding priority block. Each priority block contains its own compare and select circuitry and determines a keep, shift, or store operation. Thus, sorting time is independent of the length of the priority queue, achieving deterministic sorting latency.
Type:
Grant
Filed:
July 23, 2021
Date of Patent:
November 1, 2022
Assignee:
NXP B.V.
Inventors:
Abhijit Kumar Deb, Donald Robert Pannell, Claude Robert Gauthier
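A software analogue of the cascaded compare/shift behavior described in the abstract above; in hardware every block compares at once, which is what makes the latency independent of queue length, so this sequential loop is only a model of that:

```python
def enqueue(queue, new_id, new_addr):
    # queue: fixed-length list of (identifier, address), highest priority
    # first. Each "block" compares the new identifier with its own.
    pos = len(queue)
    for i, (ident, _addr) in enumerate(queue):
        if new_id < ident:          # lower identifier = higher priority
            pos = i                 # this block shifts; earlier ones keep
            break
    queue.insert(pos, (new_id, new_addr))
    queue.pop()                     # fixed length: lowest entry falls off
```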
Abstract: A computing device (e.g., a processor) having a plurality of branch target buffers. A first branch target buffer in the plurality of branch target buffers is used in execution of a set of instructions containing a call to a subroutine. In response to the call to the subroutine, a second branch target buffer is allocated from the plurality of branch target buffers for execution of instructions in the subroutine. The second branch target buffer is cleared before the execution of the instructions in the subroutine. The execution of the instructions in the subroutine is restricted to access the second branch target buffer and blocked from accessing branch target buffers other than the second branch target buffer.
Abstract: A processor comprises a trusted execution environment and a non-trusted execution environment. The processor further comprises a common resource accessible in both the trusted execution environment and the non-trusted execution environment and an instruction processing device including circuitry configured to fetch an instruction for decoding and execute the decoded instruction. The instruction processing device includes circuitry further configured to determine consistency between a current execution environment of the processor and a resource status in response to a result from instruction decoding indicating that the instruction involves access to the common resource, and load content corresponding to the current execution environment into the common resource in response to a determination that the current execution environment is inconsistent with the resource status, wherein the resource status indicates an execution environment corresponding to content in the common resource.
Abstract: Measurements of a device's firmware are made regularly and compared with prior, derived measurements. Prior measurements are derived from a set of identical firmware measurements obtained from multiple devices having the same make, model and firmware version number. The firmware integrity status is reported on a data and device security console for a group of managed endpoints. Alerts about firmware changes, which may be potential attacks on the firmware, are given automatically.
Abstract: Efficient scaling of in-network compute operations to large numbers of compute nodes is disclosed. Each compute node is connected to a same plurality of network compute nodes, such as compute-enabled network switches. Compute processes at the compute nodes generate local gradients or other vectors by, for instance, performing a forward pass on a neural network. Each vector comprises values for a same set of vector elements. Each network compute node is assigned to, based on the local vectors, reduce vector data for a different subset of the vector elements. Each network compute node returns a result chunk for the elements it processed back to each of the compute nodes, whereby each compute node receives the full result vector. This configuration may, in some embodiments, reduce buffering, processing, and/or other resource requirements for the network compute node or network at large.
Type:
Grant
Filed:
March 12, 2021
Date of Patent:
August 23, 2022
Assignee:
Innovium, Inc.
Inventors:
William Brad Matthews, Puneet Agarwal, Bruce Hui Kwan
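A minimal sketch of the element-chunking scheme in the abstract above, with NumPy sums standing in for the switches' in-network reduction (the even chunk split is an assumption):

```python
import numpy as np

def all_reduce(local_vectors, num_switches):
    # Each network compute node (switch) reduces a different subset of
    # the vector elements across all compute nodes' local vectors.
    length = len(local_vectors[0])
    chunks = np.array_split(np.arange(length), num_switches)
    result = np.empty(length)
    for idx in chunks:   # one chunk per switch
        result[idx] = np.sum([v[idx] for v in local_vectors], axis=0)
    return result        # every node receives the full result vector
```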
Abstract: The present disclosure relates to a processor that includes one or more processing elements associated with one or more instruction set architectures. The processor is configured to receive a request from an application executed by a first processing element of the one or more processing elements to enable a feature associated with an instruction set architecture. Additionally, the processor is configured to enable the application to utilize the feature without a system call occurring when the feature is associated with an instruction set architecture associated with the first processing element.
Type:
Grant
Filed:
September 27, 2019
Date of Patent:
August 9, 2022
Assignee:
Intel Corporation
Inventors:
Toby Opferman, Eliezer Weissmann, Robert Valentine, Russell Cameron Arnold
Abstract: A data management method includes receiving, by a management server, a first request, determining, based on an identifier of a first user in the first request, whether a shadow tenant bucket associated with the identifier of the first user exists, and if the shadow tenant bucket associated with the identifier of the first user exists, storing, in the shadow tenant bucket associated with the identifier of the first user, an acceleration engine image (AEI) that the first user requests to register, where a shadow tenant bucket is used to store an AEI of a specified user, and each shadow tenant bucket is in a one-to-one correspondence with a user.
Abstract: A method of performing safety-critical rendering at a graphics processing unit within a graphics processing system, the method comprising: receiving, at the graphics processing system, graphical data for safety-critical rendering at the graphics processing unit; scheduling at a safety controller, in accordance with a reset frequency, a plurality of resets of the graphics processing unit; rendering the graphical data at the graphics processing unit; and the safety controller causing the plurality of resets of the graphics processing unit to be performed commensurate with the reset frequency.
Type:
Grant
Filed:
September 30, 2020
Date of Patent:
July 5, 2022
Assignee:
Imagination Technologies Limited
Inventors:
Philip Morris, Mario Sopena Novales, Jamie Broome
Abstract: An accelerator manager monitors and logs performance of multiple accelerators, analyzes the logged performance, determines from the logged performance of a selected accelerator a desired programmable device for the selected accelerator, and specifies the desired programmable device to one or more accelerator developers. The accelerator manager can further analyze the logged performance of the accelerators, and generate from the analyzed logged performance an ordered list of test cases, ordered from fastest to slowest. A test case is selected, and when the estimated simulation time for the selected test case is less than the estimated synthesis time for the test case, the test case is simulated and run. When the estimated simulation time for the selected test case is greater than the estimated synthesis time for the test case, the selected test case is synthesized and run.
Type:
Grant
Filed:
August 24, 2020
Date of Patent:
June 28, 2022
Assignee:
International Business Machines Corporation
Inventors:
Paul E. Schardt, Jim C. Chen, Lance G. Thompson, James E. Carey
Abstract: Systems, methods, and other embodiments associated with autonomous cloud-node scoping for big-data machine learning use cases are described. In some example embodiments, an automated scoping tool, method, and system are presented that, for each of multiple combinations of parameter values, (i) set a combination of parameter values describing a usage scenario, (ii) execute a machine learning application according to the combination of parameter values on a target cloud environment, and (iii) measure the computational cost for the execution of the machine learning application. A recommendation regarding configuration of central processing unit(s), graphics processing unit(s), and memory for the target cloud environment to execute the machine learning application is generated based on the measured computational costs.
Type:
Grant
Filed:
January 2, 2020
Date of Patent:
June 21, 2022
Assignee:
Oracle International Corporation
Inventors:
Edward R. Wetherbee, Kenny C. Gross, Guang C. Wang, Matthew T. Gerdes
Abstract: Embodiments for implementing optimized accelerators in a computing environment are provided. Selected instruction sequence code blocks derived from one or more application workloads may be consolidated together to activate one or more accelerators subject to one or more constraints and projections.
Type:
Grant
Filed:
March 31, 2020
Date of Patent:
June 14, 2022
Assignee:
INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors:
Alper Buyuktosunoglu, David Trilla Rodriguez, John-David Wellman, Pradip Bose
Abstract: A mechanism is described for facilitating localized load-balancing for processors in computing devices. A method of embodiments, as described herein, includes facilitating hosting, at a processor of a computing device, a local load-balancing mechanism. The method may further include monitoring balancing of loads at the processor and serving as a local scheduler to maintain decentralized load-balancing at the processor and between the processor and one or more other processors.
Type:
Grant
Filed:
December 22, 2020
Date of Patent:
June 7, 2022
Assignee:
Intel Corporation
Inventors:
Prasoonkumar Surti, David Cowperthwaite, Abhishek R. Appu, Joydeep Ray, Vasanth Ranganathan, Altug Koker, Balaji Vembu
Abstract: Systems, methods, and computer-readable media are disclosed for associating and reconciling disparate key-value pairs corresponding to a target entity across multiple organizational entities using a distributed match. A shared output mapping may be generated that associates and reconciles common and/or conceptually aligned key-value pairs across the multiple organizational entities. The shared output mapping allows any given organizational entity to leverage information known to other organizational entities about a target entity. In this manner, the organizational entities participate in an information sharing ecosystem that enables each organizational entity to provide a user with a more optimally customized user experience based on the greater breadth of information available through the shared output mapping.
Type:
Grant
Filed:
December 8, 2017
Date of Patent:
April 5, 2022
Assignee:
INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors:
Thomas A. Brunet, Pushpalatha M. Hiremath, Soma Shekar Naganna, Willie L. Scott, II
Abstract: A method is implemented in a computing system for managing resources to decrease busy-looping, the method using a sliding window template including at least a first sliding window. The method includes initializing the sliding window template for a monitored resource, determining a current status of the monitored resource, updating the first sliding window with the current status, determining a first sliding window status based on whether a first sliding window threshold is met, and determining whether to sleep the monitored resource based on a decision-making table that uses at least the first sliding window status as input.
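A bare-bones sketch of a single sliding window from the template described above; the window size and threshold values are arbitrary placeholders:

```python
from collections import deque

class SlidingWindowMonitor:
    def __init__(self, size=64, threshold=0.1):
        self.window = deque(maxlen=size)   # initialized sliding window
        self.threshold = threshold

    def update(self, busy: bool) -> bool:
        # Record the monitored resource's current status, then decide:
        # sleep it when the busy fraction falls below the threshold.
        self.window.append(busy)
        if len(self.window) < self.window.maxlen:
            return False                    # not enough history yet
        busy_ratio = sum(self.window) / len(self.window)
        return busy_ratio < self.threshold  # True = sleep the resource
```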
Abstract: A method includes receiving a first input indicating at least a selected controller type and generating, based on the first input, a model that represents a controller corresponding to the selected controller type. The method also includes receiving a second input indicating at least one functional aspect of the selected controller type, updating, based on the second input, the model to represent the at least one functional aspect of the selected controller type, and compiling, using the model, a binary file that represents, at least, the at least one functional aspect of the selected controller type. The method also includes uploading the binary file to a controller corresponding to the selected controller type.
Type:
Grant
Filed:
July 17, 2020
Date of Patent:
April 5, 2022
Assignee:
Steering Solutions IP Holding Corporation
Inventors:
Anthony Champagne, Rangarajan Ramanujam, Michael Story, Owen K. Tosh
Abstract: A universal floating-point Instruction Set Architecture (ISA) implemented entirely in hardware. Using a single instruction, the universal floating-point ISA has the ability, in hardware, to compute directly with dual decimal character sequences up to IEEE 754-2008 “H=20” in length, without first having to explicitly perform a conversion-to-binary-format process in software before computing with these human-readable floating-point or integer representations. The ISA does not employ opcodes, but rather pushes and pulls “gobs” of data without the encumbering opcode fetch, decode, and execute bottleneck. Instead, the ISA employs stand-alone, memory-mapped operators, complete with their own pipeline that is completely decoupled from the processor's primary push-pull pipeline.
Abstract: The techniques disclosed herein improve the efficiency, reliability and scalability of flow processing systems by providing a multi-tier flow cache structure that can reduce the size of a flow table and also reduce replicated flow sets. In some configurations, a system can partition a flow space across workers and replicate the flows within a partition to a set of workers. In some configurations, a flow cache structure can include three tiers: (1) a scalable flow processing layer for executing the actions and transformations of a flow, (2) a flow state management layer for managing distributed flow state decisions, and (3) a flow decider layer for identifying actions and transformations that need to be executed on each packet of a flow. Flow replications allow other workers to pick up flows allocated to a particular worker that is taken offline in the event of a crash or update.
Type:
Grant
Filed:
August 6, 2020
Date of Patent:
February 22, 2022
Assignee:
Microsoft Technology Licensing, LLC
Inventors:
Selim Ciraci, Shekhar Agarwal, Geoffrey Outhred
Abstract: Disclosed are a resource selection method and apparatus under multiple carriers, a computer device and a storage medium. The resource selection method comprises: determining at least one candidate carrier according to a resource occupancy exclusion result on each carrier; setting a resource on the candidate carrier to be available, performing exclusion according to a sensing result and obtaining a set of available resources; and selecting a transmission resource from the set of available resources, and setting a semi-persistent scheduling counter for resource scheduling. The present application provides a resource selection solution that reduces half-duplex influence as far as possible, reduces the impact of lost receiving opportunities and the number of skipped subframes, and avoids the problem of overly severe power allocation caused by simultaneous transmission of multiple service packets.
Type:
Grant
Filed:
June 21, 2018
Date of Patent:
February 8, 2022
Assignee:
DATANG MOBILE COMMUNICATIONS EQUIPMENT CO., LTD.
Inventors:
Chenxin Li, Rui Zhao, Li Zhao, Lin Lin, Yuan Feng
Abstract: An apparatus includes a command buffer configured to temporarily store commands. The apparatus also includes processing units disposed at a substrate. The processing units are configured to access a plurality of copies of a command from the command buffer. The processing units include first processing units (such as fixed function hardware blocks) to perform geometry operations indicated by the command on a set of primitives. The geometry operations are performed concurrently by the first processing units. The processing units also include second processing units (such as shaders) to process mutually exclusive sets of pixels generated by rasterizing the set of primitives. The apparatus also includes a cache to temporarily store the pixels after shading by the shaders. The processing units stop or interrupt processing commands in response to detecting a synchronization point and resume processing the commands in response to all the processing units completing the commands before the synchronization point.
Abstract: A method and system of managing address translations where, in response to a request to invalidate an address translation, the scope of the address translation invalidation operation is determined; an address translation invalidation probe is installed or activated in a memory management unit (MMU) pipeline; whether an address translation undergoing a table walk operation is within a scope of the address translation invalidation probe is determined; and, in response to the address translation undergoing a table walk operation being within the scope of the address translation invalidation probe, the table walk operation is prevented or blocked from writing data to a translation buffer in the MMU. The probe also performs an address translation comparison to determine whether an address translation request coming down the MMU pipeline is within the scope of the probe, and if within the scope of the probe, prevents, blocks and/or rejects the address translation request.
Type:
Grant
Filed:
January 7, 2020
Date of Patent:
December 28, 2021
Assignee:
International Business Machines Corporation
Abstract: It is possible to reduce the latency attributable to memory protection in shared memory systems by performing access protection at a central Data Ownership Manager (DOM), rather than at distributed memory management units in the central processing unit (CPU) elements (CEs) responsible for parallel thread processing. In particular, the DOM may monitor read requests communicated over a data plane between the CEs and a memory controller, and perform access protection verification in parallel with the memory controller's generation of the data response. The DOM may be separate and distinct from both the CEs and the memory controller, and therefore may generally be able to make the access determination without interfering with data plane processing/generation of the read requests and data responses exchanged between the memory controller and the CEs.
Type:
Grant
Filed:
October 21, 2019
Date of Patent:
December 7, 2021
Assignee:
Futurewei Technologies, Inc.
Inventors:
Sushma Wokhlu, Lee Dobson McFearin, Alan Gatherer, Hao Luan
Abstract: Embodiments provide a service acceleration method, system, apparatus, and server in an NFV system. To achieve this, a programmable package determining entity in the NFV system can determine a target service function that needs to be accelerated. A target programmable package corresponding to the target service function can be obtained and sent to an acceleration engine in a network functions virtualization infrastructure (NFVI). The acceleration engine runs the target programmable package to accelerate the target service function. A programmable package of the acceleration engine can thus be dynamically replaced, and a service diversity requirement can be met, thereby improving scalability of the service acceleration function in the NFV system.
Abstract: A method, an apparatus, and a computer-readable storage medium having instructions for cancelling a redundancy of two or more redundant modules. Results of the two or more redundant modules are received; reliabilities of the results are ascertained; and, based on the ascertained reliabilities, an overall result is determined from the results. The overall result is output for further processing.
Abstract: A pipeline in a processor core includes: at least one stage that decodes instructions including load instructions that retrieve data stored at respective virtual addresses, at least one stage that issues at least some decoded load instructions out-of-order, and at least one stage that initiates at least one prefetch operation. Copies of page table entries mapping virtual addresses to physical addresses are stored in a TLB. Managing misses in the TLB includes: handling a load instruction issued out-of-order using a hardware page table walker after a miss in the TLB; handling a prefetch operation using the hardware page table walker after a miss in the TLB; and handling any software-calling faults triggered by out-of-order load instructions handled by the hardware page table walker differently from any software-calling faults triggered by prefetch operations handled by the hardware page table walker.
Type:
Grant
Filed:
August 6, 2019
Date of Patent:
November 16, 2021
Assignee:
Marvell Asia Pte, Ltd.
Inventors:
Shubhendu Sekhar Mukherjee, David Albert Carlson, Michael Bertone
Abstract: Control circuitry controls the operations of a central processing unit, CPU, which is associated with a nominal clock frequency. The CPU is further coupled to an I/O range and configured to deliver input to an application. The control circuitry controls the CPU to poll the I/O range for input to the application. The control circuitry also monitors whether or not each poll results in input to the application and adjusts a clock frequency at which the CPU operates to a clock frequency lower than the nominal clock frequency if a pre-defined number of polls resulting in no input is detected.
Abstract: Provided are techniques for extracting, deriving, and using legal matter semantics to generate e-discovery queries in an e-discovery system. A semantic knowledge graph is iteratively built by receiving meet and confer document instances, legal matter types, historical e-discovery queries for different legal matters, and legal semantic types extracted from the historical e-discovery queries. The legal semantic types are added to the semantic knowledge graph, and a list of terms that serve as a basis of an initial query are identified. An e-discovery query is generated for an e-discovery system. The e-discovery query is modified using the semantic knowledge graph and additional input by receiving a legal matter type and meet and confer information, obtaining the legal semantic types that are relevant to the legal matter type and the meet and confer information, and modifying the e-discovery query. The modified e-discovery query is provided. Then, the modified e-discovery query is executed.
Type:
Grant
Filed:
October 30, 2018
Date of Patent:
September 28, 2021
Assignee:
International Business Machines Corporation
Inventors:
Roger C. Raphael, Rajesh M. Desai, Nazrul Islam, Satwik Hebbar