Abstract: A solution for the dynamic design, use, and modification of models is provided. The solution can receive an electronic communication identifying a request or event and process the electronic communication in a runtime environment by binding a model of a collection of models to dynamically construct an implementation of the model. Collective properties of the set of related models can emerge dynamically. The binding can comprise late binding of an application associated with the collection of models to enable at least one user to perform at least one interaction using the environment without disrupting the environment or the application.
Abstract: A graph-based data multi-operation system includes a data multi-operation management subsystem coupled to an application and accelerator subsystems. The data multi-operation management subsystem receives a data multi-operation graph from the application that identifies first data and defines operations for performance on the first data to transform the first data into second data. The data multi-operation management subsystem assigns each of the operations to at least one of the accelerator subsystems, and configures the accelerator subsystems to perform the operations in a sequence that transforms the first data into the second data. When the data multi-operation management subsystem determines a completion status for the performance of the operations by the accelerator subsystems, it transmits a completion status communication to the application that indicates the completion status of the performance of the operations by the accelerator subsystems.
Type:
Grant
Filed:
October 21, 2020
Date of Patent:
November 15, 2022
Assignee:
Dell Products L.P.
Inventors:
Gaurav Chawla, Mark Steven Sanders, William Price Dawkins, Jimmy D. Pike, Elie Jreij, Robert W. Hormuth
Abstract: A priority queue sorting system including a priority queue and a message storage. The priority queue includes multiple priority blocks that are cascaded in order from a lowest priority block to a highest priority block. Each priority block includes a register block storing an address and an identifier, compare circuitry that compares a new identifier with the stored identifier for determining relative priority, and select circuitry that determines whether to keep or shift and replace the stored address and identifier within the priority queue based on the relative priority. The message storage stores message payloads, each pointed to by a corresponding stored address of a corresponding priority block. Each priority block contains its own compare and select circuitry and determines a keep, shift, or store operation. Thus, sorting time does not grow with the length of the priority queue, achieving deterministic sorting latency that is independent of the queue length.
Type:
Grant
Filed:
July 23, 2021
Date of Patent:
November 1, 2022
Assignee:
NXP B.V.
Inventors:
Abhijit Kumar Deb, Donald Robert Pannell, Claude Robert Gauthier
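The cascaded priority-block behavior in the abstract above can be modeled in software. The sketch below is a hedged illustration only (class and variable names are assumptions, not from the patent): each block holds one (identifier, address) entry and locally decides keep, shift-and-replace, or store. In hardware all blocks compare in parallel in one step, which is where the deterministic latency comes from; this Python loop models the same decision sequentially.

```python
# Hypothetical software model of the cascaded priority blocks; all names
# are illustrative assumptions. Lower identifier = higher priority.

class PriorityBlock:
    def __init__(self):
        self.entry = None  # (identifier, address) or None

def insert(blocks, identifier, address):
    """Insert into the cascade; each block decides keep/shift/store locally."""
    incoming = (identifier, address)
    for block in blocks:  # done in parallel in one tick in hardware
        if block.entry is None:
            block.entry = incoming          # store
            return
        if incoming[0] < block.entry[0]:    # higher priority: shift & replace
            block.entry, incoming = incoming, block.entry
        # else: keep, and the incoming entry moves to the next block

blocks = [PriorityBlock() for _ in range(4)]
for ident, addr in [(5, 0x10), (2, 0x20), (7, 0x30), (1, 0x40)]:
    insert(blocks, ident, addr)
print([b.entry for b in blocks])  # entries sorted, highest priority first
```

Because each block only compares against its own stored entry, the per-insert work at any block is constant, matching the abstract's claim that sorting latency is independent of queue length.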
Abstract: A computing device (e.g., a processor) having a plurality of branch target buffers. A first branch target buffer in the plurality of branch target buffers is used in execution of a set of instructions containing a call to a subroutine. In response to the call to the subroutine, a second branch target buffer is allocated from the plurality of branch target buffers for execution of instructions in the subroutine. The second branch target buffer is cleared before the execution of the instructions in the subroutine. The execution of the instructions in the subroutine is restricted to access the second branch target buffer and blocked from accessing branch target buffers other than the second branch target buffer.
Abstract: Measurements of a device's firmware are made regularly and compared with prior, derived measurements. Prior measurements are derived from a set of identical firmware measurements obtained from multiple devices having the same make, model and firmware version number. The firmware integrity status is reported on a data and device security console for a group of managed endpoints. Alerts about firmware changes, which may be potential attacks on the firmware, are given automatically.
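The firmware-integrity scheme above (derive a reference measurement from many identical devices, then compare each device against it) can be sketched as follows. This is a minimal illustration, assuming SHA-256 as the measurement function and majority vote as the derivation rule; neither detail is stated in the abstract.

```python
# Hedged sketch: derive a reference firmware measurement from a fleet of
# devices with the same make/model/version, then flag deviating endpoints.
# The hash choice and majority-vote rule are assumptions for illustration.
import hashlib
from collections import Counter

def measure(firmware_bytes):
    return hashlib.sha256(firmware_bytes).hexdigest()

def derive_reference(measurements):
    """Reference = the measurement shared by the majority of identical devices."""
    value, _count = Counter(measurements).most_common(1)[0]
    return value

def check_fleet(devices, reference):
    """Return device ids whose firmware measurement no longer matches."""
    return [dev_id for dev_id, m in devices.items() if m != reference]

fleet = {f"ep{i}": measure(b"fw-1.2.3") for i in range(5)}
fleet["ep3"] = measure(b"fw-1.2.3-tampered")   # simulated firmware change
ref = derive_reference(list(fleet.values()))
print(check_fleet(fleet, ref))  # -> ['ep3']
```

A console aggregating these results per managed endpoint group would then raise the automatic alerts the abstract describes.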
Abstract: A processor comprises a trusted execution environment and a non-trusted execution environment. The processor further comprises a common resource accessible in both the trusted execution environment and the non-trusted execution environment and an instruction processing device including circuitry configured to fetch an instruction for decoding and execute the decoded instruction. The instruction processing device includes circuitry further configured to determine consistency between a current execution environment of the processor and a resource status in response to a result from instruction decoding indicating that the instruction involves access to the common resource, and load content corresponding to the current execution environment into the common resource in response to a determination that the current execution environment is inconsistent with the resource status, wherein the resource status indicates an execution environment corresponding to content in the common resource.
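The consistency check described above can be modeled as a lazy reload: the common resource remembers which environment its contents belong to, and content is swapped in only when the current environment differs. This is a hedged software analogy of the hardware mechanism; all names are assumptions.

```python
# Minimal model of the environment/resource-status consistency check; the
# class and environment labels are illustrative assumptions.

class CommonResource:
    def __init__(self):
        self.status = None      # environment whose content is currently loaded
        self.content = None

    def access(self, current_env, load):
        # Reload only when the resource status is inconsistent with the
        # current execution environment.
        if self.status != current_env:
            self.content = load(current_env)
            self.status = current_env
        return self.content

resource = CommonResource()
loads = []
def load(env):
    loads.append(env)           # record each reload for demonstration
    return f"{env}-state"

resource.access("trusted", load)
resource.access("trusted", load)       # consistent: no reload
resource.access("non-trusted", load)   # inconsistent: reload
print(loads)  # -> ['trusted', 'non-trusted']
```

The second trusted access triggers no reload, mirroring how the processor avoids redundant loads when environment and resource status already agree.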
Abstract: Efficient scaling of in-network compute operations to large numbers of compute nodes is disclosed. Each compute node is connected to a same plurality of network compute nodes, such as compute-enabled network switches. Compute processes at the compute nodes generate local gradients or other vectors by, for instance, performing a forward pass on a neural network. Each vector comprises values for a same set of vector elements. Each network compute node is assigned to, based on the local vectors, reduce vector data for a different subset of the vector elements. Each network compute node returns a result chunk for the elements it processed back to each of the compute nodes, whereby each compute node receives the full result vector. This configuration may, in some embodiments, reduce buffering, processing, and/or other resource requirements for the network compute node or network at large.
Type:
Grant
Filed:
March 12, 2021
Date of Patent:
August 23, 2022
Assignee:
Innovium, Inc.
Inventors:
William Brad Matthews, Puneet Agarwal, Bruce Hui Kwan
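The element-wise reduction scheme above resembles a reduce-scatter followed by an all-gather: each network compute node sums one disjoint chunk of the vector across all compute nodes, then returns its chunk to everyone. The sketch below is an illustrative software model only; function and variable names are assumptions.

```python
# Hedged model of the in-network reduction: each "switch" s reduces a
# disjoint chunk of vector elements across all nodes, and the chunks
# concatenate into the full result every node receives.

def reduce_scatter_allgather(local_vectors, num_switches):
    length = len(local_vectors[0])
    chunk = (length + num_switches - 1) // num_switches  # ceil division
    result_chunks = []
    for s in range(num_switches):
        lo, hi = s * chunk, min((s + 1) * chunk, length)
        # Switch s element-wise reduces its chunk across all compute nodes.
        result_chunks.append([sum(v[i] for v in local_vectors)
                              for i in range(lo, hi)])
    # Each switch returns its result chunk to every node.
    return [x for ch in result_chunks for x in ch]

grads = [[1, 2, 3, 4], [10, 20, 30, 40], [100, 200, 300, 400]]
print(reduce_scatter_allgather(grads, 2))  # -> [111, 222, 333, 444]
```

Because each switch buffers and processes only its own chunk rather than whole vectors, per-switch resource needs shrink as switches are added, which is the scaling benefit the abstract claims.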
Abstract: The present disclosure relates to a processor that includes one or more processing elements associated with one or more instruction set architectures. The processor is configured to receive a request from an application executed by a first processing element of the one or more processing elements to enable a feature associated with an instruction set architecture. Additionally, the processor is configured to enable the application to utilize the feature without a system call occurring when the feature is associated with an instruction set architecture associated with the first processing element.
Type:
Grant
Filed:
September 27, 2019
Date of Patent:
August 9, 2022
Assignee:
Intel Corporation
Inventors:
Toby Opferman, Eliezer Weissmann, Robert Valentine, Russell Cameron Arnold
Abstract: A data management method includes receiving, by a management server, a first request, determining, based on an identifier of a first user in the first request, whether a shadow tenant bucket associated with the identifier of the first user exists, and if the shadow tenant bucket associated with the identifier of the first user exists, storing, in the shadow tenant bucket associated with the identifier of the first user, an acceleration engine image (AEI) that the first user requests to register, where a shadow tenant bucket is used to store an AEI of a specified user, and each shadow tenant bucket is in a one-to-one correspondence with a user.
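The per-user shadow tenant bucket lookup in the method above can be sketched with a simple mapping. This is a hedged illustration: bucket storage is modeled as a dict, creation-on-first-use is an assumption the abstract does not specify, and all names are invented for the example.

```python
# Illustrative model of the one-to-one user -> shadow tenant bucket
# correspondence; names and the create-on-first-use policy are assumptions.

class ManagementServer:
    def __init__(self):
        self.shadow_buckets = {}  # user_id -> list of registered AEIs

    def register_aei(self, user_id, aei):
        """Store the AEI the user requests to register in that user's bucket."""
        bucket = self.shadow_buckets.setdefault(user_id, [])
        bucket.append(aei)
        return len(bucket)

server = ManagementServer()
server.register_aei("user-1", "aei-fft-v1")
server.register_aei("user-1", "aei-fft-v2")
server.register_aei("user-2", "aei-crypto-v1")
print(server.shadow_buckets["user-1"])  # -> ['aei-fft-v1', 'aei-fft-v2']
```

Each user's AEIs land only in that user's bucket, modeling the one-to-one correspondence the abstract describes.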
Abstract: A method of performing safety-critical rendering at a graphics processing unit within a graphics processing system, the method comprising: receiving, at the graphics processing system, graphical data for safety-critical rendering at the graphics processing unit; scheduling at a safety controller, in accordance with a reset frequency, a plurality of resets of the graphics processing unit; rendering the graphical data at the graphics processing unit; and the safety controller causing the plurality of resets of the graphics processing unit to be performed commensurate with the reset frequency.
Type:
Grant
Filed:
September 30, 2020
Date of Patent:
July 5, 2022
Assignee:
Imagination Technologies Limited
Inventors:
Philip Morris, Mario Sopena Novales, Jamie Broome
Abstract: An accelerator manager monitors and logs performance of multiple accelerators, analyzes the logged performance, determines from the logged performance of a selected accelerator a desired programmable device for the selected accelerator, and specifies the desired programmable device to one or more accelerator developers. The accelerator manager can further analyze the logged performance of the accelerators, and generate from the analyzed logged performance an ordered list of test cases, ordered from fastest to slowest. A test case is selected, and when the estimated simulation time for the selected test case is less than the estimated synthesis time for the test case, the test case is simulated and run. When the estimated simulation time for the selected test case is greater than the estimated synthesis time for the test case, the selected test case is synthesized and run.
Type:
Grant
Filed:
August 24, 2020
Date of Patent:
June 28, 2022
Assignee:
International Business Machines Corporation
Inventors:
Paul E. Schardt, Jim C. Chen, Lance G. Thompson, James E. Carey
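The simulate-versus-synthesize decision above reduces to a per-test-case comparison of two time estimates over a fastest-first ordering. The following sketch is illustrative only; the field names and numbers are assumptions, not from the patent.

```python
# Hedged sketch of the accelerator manager's test-case planning: order
# test cases fastest to slowest, then pick simulation or synthesis per
# case by comparing estimated times. All names/values are assumptions.

def choose_runs(test_cases):
    """test_cases: dicts with estimated times (e.g. minutes) per case."""
    ordered = sorted(test_cases, key=lambda tc: tc["runtime"])  # fastest first
    plan = []
    for tc in ordered:
        mode = "simulate" if tc["sim_time"] < tc["synth_time"] else "synthesize"
        plan.append((tc["name"], mode))
    return plan

cases = [
    {"name": "tc-a", "runtime": 5, "sim_time": 30, "synth_time": 90},
    {"name": "tc-b", "runtime": 2, "sim_time": 120, "synth_time": 60},
]
print(choose_runs(cases))  # -> [('tc-b', 'synthesize'), ('tc-a', 'simulate')]
```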
Abstract: Systems, methods, and other embodiments associated with autonomous cloud-node scoping for big-data machine learning use cases are described. In some example embodiments, an automated scoping tool, method, and system are presented that, for each of multiple combinations of parameter values, (i) set a combination of parameter values describing a usage scenario, (ii) execute a machine learning application according to the combination of parameter values on a target cloud environment, and (iii) measure the computational cost for the execution of the machine learning application. A recommendation regarding configuration of central processing unit(s), graphics processing unit(s), and memory for the target cloud environment to execute the machine learning application is generated based on the measured computational costs.
Type:
Grant
Filed:
January 2, 2020
Date of Patent:
June 21, 2022
Assignee:
Oracle International Corporation
Inventors:
Edward R. Wetherbee, Kenny C. Gross, Guang C. Wang, Matthew T. Gerdes
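The scoping loop above (set a parameter combination, execute, measure cost, then recommend a configuration) can be sketched as a sweep over CPU/GPU/memory options. This is a hedged illustration: the cost model below is a stand-in assumption for actually executing the ML application on the target cloud.

```python
# Illustrative sketch of autonomous cloud-node scoping: sweep parameter
# combinations and recommend the cheapest configuration. The cost model
# is an invented stand-in for a real measured execution.
import itertools

def measure_cost(cpus, gpus, mem_gb):
    # Stand-in for running the ML workload and measuring computational cost:
    # more compute shortens runtime but raises the price per unit time.
    runtime = 100.0 / (cpus + 4 * gpus)
    price = 0.05 * cpus + 0.9 * gpus + 0.01 * mem_gb
    return runtime * price

def recommend(cpu_opts, gpu_opts, mem_opts):
    """Return the (cpus, gpus, mem_gb) combination with lowest measured cost."""
    combos = itertools.product(cpu_opts, gpu_opts, mem_opts)
    return min(combos, key=lambda c: measure_cost(*c))

best = recommend([2, 8], [0, 1], [16, 64])
print(best)  # cheapest configuration under the toy cost model
```

In a real scoping tool the `measure_cost` body would submit the workload to the target cloud environment and record the observed cost, as the abstract describes.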
Abstract: Embodiments for implementing optimized accelerators in a computing environment are provided. Selected instruction sequence code blocks derived from one or more application workloads may be consolidated together to activate one or more accelerators subject to one or more constraints and projections.
Type:
Grant
Filed:
March 31, 2020
Date of Patent:
June 14, 2022
Assignee:
INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors:
Alper Buyuktosunoglu, David Trilla Rodriguez, John-David Wellman, Pradip Bose
Abstract: A mechanism is described for facilitating localized load-balancing for processors in computing devices. A method of embodiments, as described herein, includes facilitating hosting, at a processor of a computing device, a local load-balancing mechanism. The method may further include monitoring balancing of loads at the processor and serving as a local scheduler to maintain de-centralized load-balancing at the processor and between the processor and one or more other processors.
Type:
Grant
Filed:
December 22, 2020
Date of Patent:
June 7, 2022
Assignee:
Intel Corporation
Inventors:
Prasoonkumar Surti, David Cowperthwaite, Abhishek R. Appu, Joydeep Ray, Vasanth Ranganathan, Altug Koker, Balaji Vembu
Abstract: A method includes receiving a first input indicating at least a selected controller type and generating, based on the first input, a model that represents a controller corresponding to the selected controller type. The method also includes receiving a second input indicating at least one functional aspect of the selected controller type, updating, based on the second input, the model to represent the at least one functional aspect of the selected controller type, and compiling, using the model, a binary file that represents, at least, the at least one functional aspect of the selected controller type. The method also includes uploading the binary file to a controller corresponding to the selected controller type.
Type:
Grant
Filed:
July 17, 2020
Date of Patent:
April 5, 2022
Assignee:
Steering Solutions IP Holding Corporation
Inventors:
Anthony Champagne, Rangarajan Ramanujam, Michael Story, Owen K. Tosh
Abstract: Systems, methods, and computer-readable media are disclosed for associating and reconciling disparate key-value pairs corresponding to a target entity across multiple organizational entities using a distributed match. A shared output mapping may be generated that associates and reconciles common and/or conceptually aligned key-value pairs across the multiple organizational entities. The shared output mapping allows any given organizational entity to leverage information known to other organizational entities about a target entity. In this manner, the organizational entities participate in an information sharing ecosystem that enables each organizational entity to provide a user with a more optimally customized user experience based on the greater breadth of information available through the shared output mapping.
Type:
Grant
Filed:
December 8, 2017
Date of Patent:
April 5, 2022
Assignee:
INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors:
Thomas A. Brunet, Pushpalatha M. Hiremath, Soma Shekar Naganna, Willie L. Scott, II
Abstract: A method is implemented in a computing system for managing resources to decrease busy-looping, the method using a sliding window template including at least a first sliding window. The method includes initializing the sliding window template for a monitored resource, determining a current status of the monitored resource, updating the first sliding window with the current status, determining a first sliding window status based on whether a first sliding window threshold is met, and determining whether to sleep the monitored resource based on a decision-making table that uses at least the first sliding window status as input.
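The sliding-window decision above can be sketched as follows. This is a hedged illustration assuming a single window; the concrete decision table (sleep when the window is full and an idle threshold is met) is an assumption for the example, as are all names.

```python
# Minimal sketch of the sliding-window sleep decision for a monitored
# resource; window size, threshold, and the decision rule are assumptions.
from collections import deque

class SlidingWindowMonitor:
    def __init__(self, window_size, idle_threshold):
        self.window = deque(maxlen=window_size)  # the first sliding window
        self.idle_threshold = idle_threshold

    def update(self, busy):
        """Record current status; return True if the resource should sleep."""
        self.window.append(busy)
        window_full = len(self.window) == self.window.maxlen
        idle_count = sum(1 for b in self.window if not b)
        # Decision table (illustrative): sleep only when the window is full
        # and the idle threshold is met.
        return window_full and idle_count >= self.idle_threshold

mon = SlidingWindowMonitor(window_size=4, idle_threshold=3)
decisions = [mon.update(b) for b in [True, False, False, False, False]]
print(decisions)  # -> [False, False, False, True, True]
```

Sleeping the resource once the window shows sustained idleness is what lets the method cut busy-looping without reacting to a single idle poll.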
Abstract: A universal floating-point Instruction Set Architecture (ISA) implemented entirely in hardware. Using a single instruction, the universal floating-point ISA has the ability, in hardware, to compute directly with dual decimal character sequences up to IEEE 754-2008 “H=20” in length, without first having to explicitly perform a conversion-to-binary-format process in software before computing with these human-readable floating-point or integer representations. The ISA does not employ opcodes, but rather pushes and pulls “gobs” of data without the encumbering opcode fetch, decode, and execute bottleneck. Instead, the ISA employs stand-alone, memory-mapped operators, complete with their own pipeline that is completely decoupled from the processor's primary push-pull pipeline.
Abstract: The techniques disclosed herein improve the efficiency, reliability and scalability of flow processing systems by providing a multi-tier flow cache structure that can reduce the size of a flow table and also reduce replicated flow sets. In some configurations, a system can partition a flow space across workers and replicate the flows within a partition to a set of workers. In some configurations, a flow cache structure can include three tiers: (1) a scalable flow processing layer for executing the actions and transformations of a flow, (2) a flow state management layer for managing distributed flow state decisions, and (3) a flow decider layer for identifying actions and transformations that need to be executed on each packet of a flow. Flow replications allow other workers to pick up flows allocated to a particular worker that is taken offline in the event of a crash or update.
Type:
Grant
Filed:
August 6, 2020
Date of Patent:
February 22, 2022
Assignee:
Microsoft Technology Licensing, LLC
Inventors:
Selim Ciraci, Shekhar Agarwal, Geoffrey Outhred
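The partition-and-replicate idea above can be sketched with a hash-based partition and a small replica set per flow, so a surviving replica picks up a flow when its primary worker goes offline. This is a hedged illustration: the hashing scheme, replica count, and all names are assumptions.

```python
# Illustrative sketch of flow partitioning with replication: a flow key
# hashes to a primary worker, and replicas cover crashes/updates.
# CRC32 and the successor-replica rule are assumptions for the example.
import zlib

def replica_set(flow_key, workers, replicas=2):
    start = zlib.crc32(flow_key.encode()) % len(workers)
    return [workers[(start + i) % len(workers)] for i in range(replicas)]

def owner(flow_key, workers, offline=frozenset()):
    """First online worker in the flow's replica set handles the flow."""
    for w in replica_set(flow_key, workers):
        if w not in offline:
            return w
    raise RuntimeError("all replicas for this flow are offline")

workers = ["w0", "w1", "w2", "w3"]
key = "10.0.0.1:443->10.0.0.2:8080"
primary = owner(key, workers)
failover = owner(key, workers, offline={primary})
print(primary != failover)  # -> True: a replica takes over the flow
```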
Abstract: Disclosed are a resource selection method and apparatus under multiple carriers, a computer device, and a storage medium. The resource selection method comprises: determining at least one candidate carrier according to a resource occupancy exclusion result on each carrier; setting a resource on the candidate carrier to be available, performing exclusion according to a sensing result, and obtaining a set of available resources; and selecting a transmission resource from the set of available resources and setting a semi-persistent scheduling counter for resource scheduling. The present application provides a resource selection solution that reduces half-duplex influence as far as possible, reduces the impact of lost receiving opportunities and of skipped subframes, and avoids the problem of excessively severe power allocation caused by simultaneous transmission with multiple service packets.
Type:
Grant
Filed:
June 21, 2018
Date of Patent:
February 8, 2022
Assignee:
DATANG MOBILE COMMUNICATIONS EQUIPMENT CO., LTD.
Inventors:
Chenxin Li, Rui Zhao, Li Zhao, Lin Lin, Yuan Feng
Abstract: An apparatus includes a command buffer configured to temporarily store commands. The apparatus also includes processing units disposed at a substrate. The processing units are configured to access a plurality of copies of a command from the command buffer. The processing units include first processing units (such as fixed function hardware blocks) to perform geometry operations indicated by the command on a set of primitives. The geometry operations are performed concurrently by the first processing units. The processing units also include second processing units (such as shaders) to process mutually exclusive sets of pixels generated by rasterizing the set of primitives. The apparatus also includes a cache to temporarily store the pixels after shading by the shaders. The processing units stop or interrupt processing commands in response to detecting a synchronization point and resume processing the commands in response to all of the processing units completing commands before the synchronization point.
Abstract: A method and system of managing address translations in which, in response to a request to invalidate an address translation, the scope of the address translation invalidation operation is determined; an address translation invalidation probe is installed or activated in a memory management unit (MMU) pipeline; whether an address translation undergoing a table walk operation is within the scope of the address translation invalidation probe is determined; and, in response to the address translation undergoing a table walk operation being within the scope of the probe, the table walk operation is prevented or blocked from writing data to a translation buffer in the MMU. The probe also performs an address translation comparison to determine whether an address translation request coming down the MMU pipeline is within the scope of the probe and, if so, prevents, blocks, and/or rejects the address translation.
Type:
Grant
Filed:
January 7, 2020
Date of Patent:
December 28, 2021
Assignee:
International Business Machines Corporation
Abstract: Embodiments provide a service acceleration method, system, apparatus, and server in an NFV system. To achieve this, a programmable package determining entity in the NFV system can determine a target service function that needs to be accelerated. A target programmable package corresponding to that target service function can be obtained and sent to an acceleration engine in a network functions virtualization infrastructure (NFVI). The acceleration engine runs the target programmable package to accelerate the target service function. A programmable package of the acceleration engine can thus be dynamically replaced, and a service diversity requirement can be met, thereby improving scalability of a service acceleration function in the NFV system.
Abstract: It is possible to reduce the latency attributable to memory protection in shared memory systems by performing access protection at a central Data Ownership Manager (DOM), rather than at distributed memory management units in the central processing unit (CPU) elements (CEs) responsible for parallel thread processing. In particular, the DOM may monitor read requests communicated over a data plane between the CEs and a memory controller, and perform access protection verification in parallel with the memory controller's generation of the data response. The DOM may be separate and distinct from both the CEs and the memory controller, and therefore may generally be able to make the access determination without interfering with data plane processing/generation of the read requests and data responses exchanged between the memory controller and the CEs.
Type:
Grant
Filed:
October 21, 2019
Date of Patent:
December 7, 2021
Assignee:
Futurewei Technologies, Inc.
Inventors:
Sushma Wokhlu, Lee Dobson McFearin, Alan Gatherer, Hao Luan
Abstract: A method, an apparatus, and a computer-readable storage medium having instructions for cancelling a redundancy of two or more redundant modules. Results of the two or more redundant modules are received; reliabilities of the results are ascertained; and, based on the ascertained reliabilities, an overall result is determined from the results. The overall result is output for further processing.
Abstract: A pipeline in a processor core includes: at least one stage that decodes instructions, including load instructions that retrieve data stored at respective virtual addresses; at least one stage that issues at least some decoded load instructions out-of-order; and at least one stage that initiates at least one prefetch operation. Copies of page table entries mapping virtual addresses to physical addresses are stored in a TLB. Managing misses in the TLB includes: handling a load instruction issued out-of-order using a hardware page table walker after a miss in the TLB; handling a prefetch operation using the hardware page table walker after a miss in the TLB; and handling any software-calling faults triggered by out-of-order load instructions handled by the hardware page table walker differently from any software-calling faults triggered by prefetch operations handled by the hardware page table walker.
Type:
Grant
Filed:
August 6, 2019
Date of Patent:
November 16, 2021
Assignee:
Marvell Asia Pte, Ltd.
Inventors:
Shubhendu Sekhar Mukherjee, David Albert Carlson, Michael Bertone
Abstract: Control circuitry controls the operations of a central processing unit, CPU, which is associated with a nominal clock frequency. The CPU is further coupled to an I/O range and configured to deliver input to an application. The control circuitry controls the CPU to poll the I/O range for input to the application. The control circuitry also monitors whether or not each poll results in input to the application and adjusts a clock frequency at which the CPU operates to a clock frequency lower than the nominal clock frequency if a pre-defined number of polls resulting in no input is detected.
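The control loop above can be modeled directly: count consecutive polls that yield no input and drop the clock below nominal once a pre-defined limit is reached. The sketch is a hedged illustration; the frequencies, the reset-on-input policy, and all names are assumptions.

```python
# Illustrative model of the poll-driven clock control described above;
# nominal/reduced frequencies and the threshold are invented values.

class PollingClockController:
    NOMINAL_MHZ = 2000
    REDUCED_MHZ = 800

    def __init__(self, empty_poll_limit):
        self.empty_poll_limit = empty_poll_limit
        self.empty_polls = 0
        self.clock_mhz = self.NOMINAL_MHZ

    def on_poll(self, got_input):
        """Record one poll of the I/O range; return the resulting clock."""
        if got_input:
            self.empty_polls = 0
            self.clock_mhz = self.NOMINAL_MHZ   # input arrived: full speed
        else:
            self.empty_polls += 1
            if self.empty_polls >= self.empty_poll_limit:
                self.clock_mhz = self.REDUCED_MHZ
        return self.clock_mhz

ctrl = PollingClockController(empty_poll_limit=3)
trace = [ctrl.on_poll(g) for g in [False, False, False, True, False]]
print(trace)  # -> [2000, 2000, 800, 2000, 2000]
```

Lowering the frequency only after a run of empty polls keeps the CPU responsive while avoiding full-speed spinning on an idle I/O range.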
Abstract: Provided are techniques for extracting, deriving, and using legal matter semantics to generate e-discovery queries in an e-discovery system. A semantic knowledge graph is iteratively built by receiving meet and confer document instances, legal matter types, historical e-discovery queries for different legal matters, and legal semantic types extracted from the historical e-discovery queries. The legal semantic types are added to the semantic knowledge graph, and a list of terms that serve as a basis of an initial query are identified. An e-discovery query is generated for an e-discovery system. The e-discovery query is modified using the semantic knowledge graph and additional input by receiving a legal matter type and meet and confer information, obtaining the legal semantic types that are relevant to the legal matter type and the meet and confer information, and modifying the e-discovery query. The modified e-discovery query is provided. Then, the modified e-discovery query is executed.
Type:
Grant
Filed:
October 30, 2018
Date of Patent:
September 28, 2021
Assignee:
International Business Machines Corporation
Inventors:
Roger C. Raphael, Rajesh M. Desai, Nazrul Islam, Satwik Hebbar
Abstract: A method may include the following. Preset events related to a peer communication party are determined. The preset events are generated by operations performed by the peer communication party based on a communication application. Whether a communication page of a local communication party with the peer communication party is in an open state is detected. Description information of the preset events is displayed in a centralized manner in an expedited processing page associated with the communication page when the communication page is detected to be in an open state. Using the technical solutions of the present application, the local communication party can open and view the expedited processing page conveniently and quickly when communicating with the peer communication party, and view and process corresponding preset events, thereby further simplifying user operations and improving processing efficiency.
Type:
Grant
Filed:
March 7, 2019
Date of Patent:
September 14, 2021
Assignee:
Alibaba Group Holding Limited
Inventors:
Hang Chen, Zhenhao Wu, Lili Zhang, Daping Zhang, Di Zhang, Lidong Cao, Di Su, Yixin Huang, Jianjun Zhao
Abstract: A computer system is presented. The computer system comprises a memory system that stores data, a computer processor, and a memory access engine. The memory access engine is configured to: receive a first instruction of a computing process from the computer processor, wherein the first instruction is for accessing the data from the memory system; acquire at least a part of the data from the memory system based on the first instruction; and after the acquisition of the at least a first part of the data, transmit an indication to the computer processor to enable the computer processor to execute a second instruction of the computing process.
Abstract: A computer-implemented method is provided and includes allocating, by a processor, an instruction to a first thread, decoding, by the processor, the instruction, determining, by the processor, a type of the instruction based on information obtained by decoding the instruction, and based on determining that the instruction is a disruptive complex instruction, changing a mode of allocating hardware resources to an instruction-based allocation mode. In the instruction-based allocation mode, the processor adjusts allocation of the hardware resources among a first thread and a second thread based on types of instructions allocated to the first and second threads.
Type:
Grant
Filed:
February 19, 2019
Date of Patent:
July 20, 2021
Assignee:
INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors:
Avery Francois, Gregory William Alexander, Christian Jacobi
Abstract: Apparatus and method for widened SIMD execution on a limited register file. For example, one embodiment of an apparatus comprises: instruction dispatch circuitry to dispatch instructions of a thread for execution, including a first instruction to indicate a start of a double execution instruction sequence and a second instruction to indicate an end of a double execution instruction sequence; and execution circuitry including single instruction multiple data (SIMD) circuitry, the execution circuitry to execute the double execution instruction sequence in a first pass using a first set of lanes of the SIMD circuitry and to execute the double execution instruction sequence in a second pass following the first pass using a second set of lanes of the SIMD circuitry.
Abstract: A method, system, and computer program product, the method comprising: obtaining a data path representing flow of data in processing a service request within a network computing environment having system resources; analyzing the data path to identify usage of the system resources required by the service request processing; determining, based on the usage of the system resources, an optimization action expected to improve the usage of the system resources; and implementing the optimization action in accordance with the data path, thereby modifying operation of the network computing environment in handling future service requests.
Abstract: Systems and methods of automated machine learning for modeling a data set according to a modeling intent are presented. A modeling service receives a data set from a submitting party as well as a set of constraints. A pipeline generator generates a set of pipelines according to a modeling intent of a data set and in view of the set of constraints. A machine-learning trained judge conducts an analysis of the pipelines to identify an optimal pipeline to train. Optimal results are generated according to the optimal pipeline and the optimal results are provided to the submitting party in response to receiving the data set and constraints.
Type:
Grant
Filed:
August 28, 2017
Date of Patent:
April 27, 2021
Assignee:
Microsoft Technology Licensing, LLC
Inventors:
Justin Ormont, Yunling Wang, Aidan C Crook, Sarthak Shah
Abstract: Technologies for hybrid acceleration of code include a computing device (100) having a processor (120), a field-programmable gate array (FPGA) (130), and an application-specific integrated circuit (ASIC) (132). The computing device (100) offloads a service request, such as a cryptographic request or a packet processing request, to the FPGA (130). The FPGA (130) performs one or more algorithmic tasks of an algorithm to perform the service request. The FPGA (130) determines one or more primitive tasks associated with an algorithm task and encapsulates each primitive task in a buffer that is accessible by the ASIC (132). The ASIC (132) performs the primitive tasks in response to encapsulation in the buffer, and the FPGA (130) returns results of the algorithm. The primitive operations may include cryptographic primitives such as modular exponentiation, modular multiplicative inverse, and modular multiplication.
Type:
Grant
Filed:
March 28, 2017
Date of Patent:
April 6, 2021
Assignee:
INTEL CORPORATION
Inventors:
Ned M. Smith, Changzheng Wei, Songwu Shen, Ziye Yang, Junyuan Wang, Weigang Li, Wenqian Yu
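The cryptographic primitives named above (modular exponentiation, modular multiplicative inverse, modular multiplication) are the building blocks an ASIC would accelerate. As a hedged illustration of what those primitives compute, here they are modeled with Python integers; this sketches the math only, not the FPGA/ASIC buffer protocol (the modular-inverse form of `pow` requires Python 3.8+).

```python
# Illustrative reference implementations of the named primitive tasks;
# the modulus and operands below are invented example values.

def mod_exp(base, exp, mod):
    return pow(base, exp, mod)          # modular exponentiation

def mod_inverse(a, mod):
    return pow(a, -1, mod)              # modular multiplicative inverse

def mod_mul(a, b, mod):
    return (a * b) % mod                # modular multiplication

p = 1000003  # a prime modulus, chosen only for the example
x = mod_exp(7, 12345, p)
inv = mod_inverse(x, p)
print(mod_mul(x, inv, p))  # -> 1, since inv is the inverse of x mod p
```

In the hybrid scheme the FPGA would decompose an algorithm such as RSA into calls like these, each encapsulated in a buffer for the ASIC to execute.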
Abstract: A system-on-chip (SoC) includes: a plurality of processors configured to store respective debugging information in response to respective information extraction commands received in a deadlock state, the plurality of processors having different architectures; a system bus connected to the plurality of processors; and an SoC manager configured to generate the respective information extraction commands differently according to an architecture of each of the plurality of processors in response to detecting occurrence of the deadlock state, and transmit the respective information extraction commands to the plurality of processors through a bus separate from the system bus.
Abstract: Methods, apparatus, and articles of manufacture are disclosed to trigger a scaling action for scaling an application having a set of one or more virtual machines (VMs). Virtualized Network Functions (VNF) are scaled by adding or removing resources to/from existing VMs. In an example method for triggering a scaling action for scaling an application having a set of one or more VMs, a threshold value is adapted based on an evaluation of a monitored system key performance indicator and a monitored external key performance indicator. The threshold value is used for triggering the scaling action. The scaling action is validated based on the monitored external key performance indicator.
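The adaptive-threshold trigger above can be sketched as a threshold that tightens when an external key performance indicator is worse than its target, so the scaling action fires earlier. This is a hedged illustration: the specific adaptation rule (a 20% tightening) and all names are assumptions, not from the patent.

```python
# Illustrative sketch of threshold adaptation for a VNF scaling trigger;
# the adaptation factor and KPI semantics are invented for the example.

def adapt_threshold(base_threshold, external_kpi, external_target):
    # Tighten the threshold when the external KPI (e.g. end-to-end
    # latency) is worse than its target, so scaling triggers earlier.
    if external_kpi > external_target:
        return base_threshold * 0.8
    return base_threshold

def should_scale(system_kpi, base_threshold, external_kpi, external_target):
    """Trigger scaling when the system KPI crosses the adapted threshold."""
    threshold = adapt_threshold(base_threshold, external_kpi, external_target)
    return system_kpi >= threshold

# CPU load 75% does not cross the base 80% threshold on its own...
print(should_scale(75, 80, external_kpi=90, external_target=100))   # -> False
# ...but does once poor external latency tightens the threshold to 64%.
print(should_scale(75, 80, external_kpi=120, external_target=100))  # -> True
```

Validating the triggered action against the monitored external KPI afterward, as the abstract describes, would confirm the resource change actually improved the end-to-end indicator.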
Abstract: A controller includes one or more hardware components for performing operations, an interconnect, and a plurality of processors connected to the one or more hardware components through the interconnect. Each processor of the plurality of processors is configured to perform multithreading to concurrently handle multiple threads of execution, and assign a different thread identifier or master ID value to each concurrently handled thread of execution. An instruction is generated for a hardware component by executing a thread of the concurrently handled threads of execution. The instruction includes the thread identifier or indicates the master ID value assigned to the thread. The generated instruction is sent to the hardware component through the interconnect.
Type:
Grant
Filed:
February 6, 2019
Date of Patent:
March 16, 2021
Assignee:
Western Digital Technologies, Inc.
Inventors:
Shay Benisty, Leonid Minz, Tal Sharifie
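The thread-tagging scheme in this abstract can be modeled in a few lines: each concurrently handled thread receives a distinct master ID, and every instruction the thread generates carries that ID onto the interconnect (class and method names are hypothetical):

```python
from itertools import count

class ControllerProcessor:
    """Model of one multithreaded controller processor."""
    _master_ids = count(1)            # distinct master ID per thread

    def __init__(self):
        self.threads = {}             # thread name -> master ID value

    def start_thread(self, name):
        self.threads[name] = next(self._master_ids)

    def issue(self, name, opcode):
        # The generated instruction indicates the master ID assigned to
        # the issuing thread, so the hardware component receiving it over
        # the interconnect can attribute it to that thread.
        return {"opcode": opcode, "master_id": self.threads[name]}
```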
Abstract: A method and device for the synchronization of processes, a first signal being sent by a clock-giving processor, the first signal having, in an alternating manner, first edges having a first direction and second edges having a second direction opposite the first direction, a temporal distance between at least one of the first edges and at least one of the second edges being determined as a function of a state of a counter in the clock-giving processor. A method for the synchronization of processes, a first signal being received by a clock-receiving processor, the first signal having, in an alternating manner, first edges having a first direction and second edges having a second direction opposite the first direction, a state of a counter in the clock-receiving processor being determined as a function of a temporal distance between at least one of the first edges and at least one of the second edges.
Type:
Grant
Filed:
August 16, 2018
Date of Patent:
February 9, 2021
Assignee:
Robert Bosch GmbH
Inventors:
Thomas Gebauer, Christoph Mueller, Cristina Murillo Miranda
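The encoding in this abstract, where the counter state modulates the temporal distance between a first (e.g. rising) edge and the following second (falling) edge, can be sketched numerically; the base distance and step size below are illustrative assumptions:

```python
BASE_NS = 100   # nominal rising-to-falling edge distance (illustrative)
STEP_NS = 10    # additional distance per counter increment (illustrative)

def encode_edges(counter, t0=0):
    """Clock-giving side: place the second edge at a temporal distance
    from the first edge that is a function of the counter state."""
    return (t0, t0 + BASE_NS + counter * STEP_NS)   # (rising, falling)

def decode_edges(rising, falling):
    """Clock-receiving side: recover the counter state from the measured
    temporal distance between the two edges."""
    return (falling - rising - BASE_NS) // STEP_NS
```

Because the counter rides on edge spacing rather than a separate data line, both processors stay synchronized using the clock signal alone.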
Abstract: The present disclosure provides apparatus and methods for the calibration of analog circuitry on an integrated circuit. One embodiment relates to a method of calibrating analog circuitry within an integrated circuit. A microcontroller that is embedded in the integrated circuit is booted up. A reset control signal is sent to reset an analog circuit in the integrated circuit, and a response signal for the analog circuit is monitored by the microcontroller. Based on the response signal, a calibration parameter for the analog circuit is determined, and the analog circuit is configured using the calibration parameter. Other embodiments, aspects and features are also disclosed.
Type:
Grant
Filed:
September 24, 2018
Date of Patent:
February 2, 2021
Assignee:
Altera Corporation
Inventors:
Neville Carvalho, Tim Tri Hoang, Sergey Shumarayev
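The reset/monitor/determine/configure loop in this abstract can be sketched as a parameter sweep; the `reset()`/`respond()`/`configure()` interface and the stand-in circuit below are hypothetical, not from the patent:

```python
def calibrate(circuit, candidates=range(16)):
    """Model of the embedded microcontroller's flow: reset the analog
    circuit, monitor its response signal for each candidate parameter,
    then configure the circuit with the best one."""
    best_param, best_error = None, float("inf")
    for param in candidates:
        circuit.reset()                       # send the reset control signal
        error = abs(circuit.respond(param))   # monitor the response signal
        if error < best_error:
            best_param, best_error = param, error
    circuit.configure(best_param)             # apply the calibration parameter
    return best_param

class FakeAnalogCircuit:
    """Stand-in circuit whose response is zero at its ideal parameter."""
    def __init__(self, ideal):
        self.ideal, self.param = ideal, None
    def reset(self):
        pass
    def respond(self, param):
        return param - self.ideal
    def configure(self, param):
        self.param = param
```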
Abstract: A cloud-based hardware accelerator is selected by deploying an accelerator image to first and second clouds to generate first and second cloud-based hardware accelerators, executing a first request on the first and second cloud-based hardware accelerators, monitoring characteristics of the first and second cloud-based hardware accelerators executing the first request, which may include execution time and monetary cost, and selecting one of the first and second hardware accelerators according to defined selection criteria. Subsequent requests are then routed to the selected cloud-based accelerator.
Type:
Grant
Filed:
November 29, 2018
Date of Patent:
January 12, 2021
Assignee:
International Business Machines Corporation
Inventors:
Paul E. Schardt, Jim C. Chen, Lance G. Thompson, James E. Carey
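The selection step in this abstract can be sketched as scoring the monitored characteristics of each deployed cloud-based accelerator and routing later requests to the winner; the equal weighting of execution time and monetary cost below is one possible selection criterion, not the patent's:

```python
def select_accelerator(measurements, time_weight=0.5, cost_weight=0.5):
    """Pick the accelerator with the best weighted score of measured
    execution time and monetary cost. `measurements` maps an accelerator
    name to a (execution_time, cost) tuple from the first request."""
    def score(item):
        _, (exec_time, cost) = item
        return time_weight * exec_time + cost_weight * cost
    return min(measurements.items(), key=score)[0]

def route(request, selected):
    """Route a subsequent request to the selected cloud-based accelerator."""
    return {"request": request, "target": selected}
```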
Abstract: A method and system for partial wavefront merger is described. Vector processing machines employ the partial wavefront merger to merge partial wavefronts into one or more wavefronts. The system includes a partial wavefront manager and unified registers. The partial wavefront manager detects wavefronts in different single-instruction-multiple-data (“SIMD”) units which contain inactive work items and active work items (hereinafter referred to as “partial wavefronts”), moves the partial wavefronts into one or more SIMD unit(s) and merges the partial wavefronts into one or more wavefront(s). The unified register allows each active work item in the one or more merged wavefront(s) to access the previously allocated registers in the originating SIMD units. Consequently, the contents of the unified registers do not have to be copied to the SIMD unit(s) executing the one or more merged wavefront(s).
Type:
Grant
Filed:
July 23, 2018
Date of Patent:
December 29, 2020
Assignees:
Advanced Micro Devices, Inc., ATI Technologies ULC
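The repacking step in this abstract can be sketched by collecting the active work items out of partial wavefronts and refilling full-width wavefronts; representing a wavefront as a fixed-width list with `None` for inactive lanes is an assumption for illustration:

```python
def merge_partial_wavefronts(wavefronts, width=8):
    """Model of the merger: collect active work items from partial
    wavefronts (those with inactive lanes) and repack them into
    full-width merged wavefronts."""
    partial = [w for w in wavefronts if None in w]
    full = [w for w in wavefronts if None not in w]
    active = [item for w in partial for item in w if item is not None]
    merged = [active[i:i + width] for i in range(0, len(active), width)]
    # Pad the last merged wavefront back to SIMD width with inactive lanes.
    if merged and len(merged[-1]) < width:
        merged[-1] += [None] * (width - len(merged[-1]))
    return full + merged
```

The unified-register indirection in the abstract is what lets the repacked items keep using their original registers; this sketch models only the lane repacking.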
Abstract: Methods and apparatuses relating to assigning a logical thread to a physical thread. In one embodiment, an apparatus includes a data storage device that stores code that when executed by a hardware processor causes the hardware processor to perform the following: translating an instruction into a translated instruction, assigning a logical thread for the translated instruction, and providing a thread map hint for the translated instruction; and a hardware scheduler to assign a physical thread of the hardware processor to execute the logical thread based on the thread map hint.
Type:
Grant
Filed:
March 10, 2015
Date of Patent:
December 29, 2020
Assignee:
Intel Corporation
Inventors:
Sebastian Winkel, Ethan Schuchman, Rainer Theuer, Gregor Stellpflug, Tyler N. Sondag
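The hint-driven assignment in this abstract can be sketched as a scheduler that honors the thread map hint produced during translation when the hinted physical thread is free, and otherwise falls back; the fallback policy below is an illustrative assumption:

```python
def assign_physical_threads(logical_threads, hints, num_physical=4):
    """Model of the hardware scheduler: assign a physical thread to each
    logical thread based on its thread map hint. `hints` maps a logical
    thread to a preferred physical thread id."""
    assignment, busy = {}, set()
    for lt in logical_threads:
        hint = hints.get(lt)
        if hint is not None and hint not in busy:
            pt = hint                 # hint honored: hinted thread is free
        else:
            # Fallback: first free physical thread (illustrative policy).
            pt = next(p for p in range(num_physical) if p not in busy)
        assignment[lt] = pt
        busy.add(pt)
    return assignment
```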
Abstract: Techniques for identifying permitted illegal access operations in a module system are disclosed. An operation, expressed in a first module, that attempts to access a module element of a second module is identified. Based on a module declaration associated with the second module, the module element is determined inaccessible to the first module. Additionally or alternatively, based on an access modifier associated with the module element, the module element is determined inaccessible to the operation. The operation is determined as an illegal access operation. The illegal access operation is permitted to access the module element. A warning corresponding to the illegal access operation is generated.
Type:
Grant
Filed:
October 17, 2017
Date of Patent:
November 24, 2020
Assignee:
Oracle International Corporation
Inventors:
Alan Bateman, Chris Hegarty, Alexander R. Buckley, Brian Goetz, Mark B. Reinhold
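The permit-but-warn behavior in this abstract can be sketched as a relaxed access check; the dictionary shapes for module declarations and access modifiers below are hypothetical stand-ins, not the real Java module-system representation:

```python
import warnings

def check_access(module_decl, access_modifiers, element, from_module):
    """Model of the relaxed check: an operation that is inaccessible under
    the module declaration or the element's access modifier is determined
    to be an illegal access operation, yet is permitted, and a warning
    corresponding to it is generated."""
    exported = element in module_decl.get("exports", set())
    public = access_modifiers.get(element) == "public"
    if exported and public:
        return "legal"
    warnings.warn(
        f"illegal access to {element} from {from_module} permitted",
        stacklevel=2,
    )
    return "illegal-but-permitted"
```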
Abstract: A method, executed by a computer, includes pairing a first core with a second core to form a first core group, wherein each core of the group has a plurality of functional units, transferring instructions received by the first core to the second core for execution via a first inter-core communication bus, and executing the instructions on the second core. A computer system and computer program product corresponding to the above method are also disclosed herein.
Type:
Grant
Filed:
June 15, 2016
Date of Patent:
November 10, 2020
Assignee:
International Business Machines Corporation
Inventors:
Manoj Dusanapudi, Prasanna Jayaraman, Rahul M. Rao
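The pairing-and-transfer flow in this abstract can be sketched with the inter-core communication bus modeled as a simple list (class and method names are illustrative):

```python
class CoreGroup:
    """Model of a paired-core group: instructions received by the first
    core are transferred to the second core over an inter-core bus and
    executed there."""

    def __init__(self, first, second):
        self.first, self.second = first, second
        self.bus = []                 # first inter-core communication bus

    def receive(self, instruction):
        self.bus.append(instruction)  # first core forwards the instruction

    def drain(self):
        # Execute the transferred instructions on the second core's
        # functional units (tagged here so the test can see where they ran).
        executed = [f"{self.second}:{i}" for i in self.bus]
        self.bus.clear()
        return executed
```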
Abstract: A single workload scheduler schedules sessions and tasks having a tree structure to resources, wherein the single workload scheduler has scheduling control of the resources and the tasks of the parent-child workload sessions and tasks. The single workload scheduler receives a request to schedule a child session created by a scheduled parent task that when executed results in a child task; the scheduled parent task is dependent on a result of the child task. The single workload scheduler receives a message from the scheduled parent task yielding a resource based on the resource not being used by the scheduled parent task, schedules tasks to backfill the resource, and returns the resource yielded by the scheduled parent task to the scheduled parent task based on receiving a resume request from the scheduled parent task or determining dependencies of the scheduled parent task have been met.
Type:
Grant
Filed:
April 18, 2017
Date of Patent:
November 10, 2020
Assignee:
International Business Machines Corporation
Inventors:
Alicia E. Chin, Yonggang Hu, Zhenhua Hu, Jason T S Lam, Zhimin Lin
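The yield/backfill/resume cycle in this abstract can be sketched as ownership transitions on a single resource (method names and the ownership map are illustrative, not from the patent):

```python
class WorkloadScheduler:
    """Model of the single workload scheduler's resource handoff."""

    def __init__(self):
        self.owner = {}                    # resource -> task holding it

    def schedule(self, resource, task):
        self.owner[resource] = task

    def yield_resource(self, resource, parent):
        # The scheduled parent task yields a resource it is not using
        # while blocked on its child task's result.
        if self.owner.get(resource) == parent:
            self.owner[resource] = None

    def backfill(self, resource, task):
        # The scheduler backfills the yielded resource with another task.
        if self.owner.get(resource) is None:
            self.owner[resource] = task
            return True
        return False

    def resume(self, resource, parent):
        # The resource is returned to the parent on a resume request, or
        # once the parent's dependencies (the child task) have been met.
        # (The model assumes any backfill task has finished by this point.)
        self.owner[resource] = parent
```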
Abstract: Provided are an apparatus and a method for effectively managing threads diverged by a conditional branch based on Single Instruction Multiple Data (SIMD). The apparatus includes: a plurality of Front End Units (FEUs) configured to fetch, for execution by SIMD lanes, instructions of thread groups of a program flow; and a controller configured to schedule a thread group based on SIMD lane availability information, activate an FEU of the plurality of FEUs, and control the activated FEU to fetch an instruction for processing the scheduled thread group.
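The controller's lane-availability-driven scheduling in this abstract can be sketched as picking the first pending thread group that fits the free lanes and activating a front-end unit for it (the data shapes below are illustrative assumptions):

```python
def schedule_thread_group(thread_groups, lane_availability):
    """Model of the controller: schedule a thread group based on SIMD
    lane availability information and activate an FEU to fetch for it.
    `lane_availability` is a per-lane list of booleans (True = free)."""
    free_lanes = sum(lane_availability)
    for group in thread_groups:
        if group["threads"] <= free_lanes:
            return {"fetch_for": group["name"], "feu_active": True}
    return {"fetch_for": None, "feu_active": False}
```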
Abstract: In an embodiment, a method includes identifying a core of a multicore processor to which an incoming packet that is received in a packet buffer is to be directed, and if the core is powered down, transmitting a first message to cause the core to be powered up prior to arrival of the incoming packet at a head of the packet buffer. Other embodiments are described and claimed.
Type:
Grant
Filed:
September 21, 2017
Date of Patent:
October 27, 2020
Assignee:
Intel Corporation
Inventors:
Steen K. Larsen, Bryan E. Veal, Daniel S. Lake, Travis T. Schluessler, Mazhar I. Memon
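The early-wake flow in this abstract can be sketched by identifying the target core as the packet enters the buffer and waking it before the packet reaches the buffer head; the flow-hash core selection below is a hypothetical stand-in for the real identification logic:

```python
def handle_incoming(packet, packet_buffer, core_power, wake):
    """Model of the flow: identify the core the incoming packet is
    directed to and, if that core is powered down, transmit the wake
    message prior to the packet's arrival at the head of the buffer.
    `core_power` is a per-core powered-up flag; `wake` sends the message."""
    core = packet["flow"] % len(core_power)   # illustrative core selection
    if not core_power[core]:
        wake(core)                            # power up ahead of arrival
        core_power[core] = True
    packet_buffer.append((packet, core))
    return core
```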
Abstract: A system and method for managing and prioritizing tasks amongst resources and, more particularly, for providing automatic task assignment and notification amongst globally dispersed human resources. The system includes a change of management application configured to store a list of tasks and a task notifier configured to retrieve a list of geographically dispersed resources and notify selected ones of the geographically dispersed resources of a priority of completion of one or more tasks retrieved from the change of management application. The system further includes a message application configured to be polled by the task notifier to determine which of the geographically dispersed resources is online or currently working.
Type:
Grant
Filed:
March 30, 2018
Date of Patent:
October 13, 2020
Assignee:
International Business Machines Corporation