Partitioning Patents (Class 712/13)
-
Patent number: 12072730
Abstract: The present disclosure provides a synchronization signal generating circuit, a chip, and a synchronization method and a synchronization device, based on a multi-core architecture, configured to generate a synchronization signal for M node groups, wherein each of the node groups includes at least one node, and M is an integer greater than or equal to 1. The synchronization signal generating circuit includes: a synchronization signal generating sub-circuit and M group ready signal generating sub-circuits. The M group ready signal generating sub-circuits are in one-to-one correspondence with the M node groups. The synchronization signal generating sub-circuit generates a first synchronization signal based on the first to-be-started signal, wherein the first synchronization signal is configured to instruct the K nodes in the first node group to start synchronization.
Type: Grant
Filed: January 28, 2022
Date of Patent: August 27, 2024
Assignee: Stream Computing Inc.
Inventors: Weiwei Wang, Fei Luo
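A minimal software analogue of the group-ready/sync-signal idea in this abstract, assuming a simple model in which a group's ready sub-circuit is just the AND of its nodes' ready flags; the class names (GroupReady, SyncGenerator) are illustrative and not taken from the patent.

```python
# Software sketch of per-group readiness feeding a synchronization signal generator.
class GroupReady:
    """Tracks readiness of the nodes in one node group."""
    def __init__(self, num_nodes):
        self.ready = [False] * num_nodes

    def set_ready(self, node_idx):
        self.ready[node_idx] = True

    def all_ready(self):
        return all(self.ready)


class SyncGenerator:
    """Raises a synchronization signal for a group once every node in it is ready."""
    def __init__(self, groups):
        self.groups = groups  # one GroupReady per node group (M groups)

    def sync_signal(self, group_idx):
        # The "to-be-started" condition is modeled here simply as all nodes
        # of the group having reported ready.
        return self.groups[group_idx].all_ready()


# Usage: two node groups, the first with three nodes.
groups = [GroupReady(3), GroupReady(2)]
gen = SyncGenerator(groups)
for n in range(3):
    groups[0].set_ready(n)
print(gen.sync_signal(0))  # True -> nodes in group 0 may start synchronization
```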
-
Patent number: 12067404
Abstract: A baseboard management controller in a multi-processor system may perform operations including: identifying a partitioning mode (partitioned state or unified state) to implement on the multi-processor system having first and second central processing units (CPUs) located on a single motherboard; accessing, in response to the partitioned state, a first partitioned node configuration (P1C) for a first partitioned node (P1) and a second partitioned node configuration (P2C) for a second partitioned node (P2), wherein P1C identifies a first firmware interface level (F1L) and a first operating system to be used by P1, and wherein P2C identifies a second firmware interface level (F2L) and a second operating system to be used by P2; and causing the first CPU to load a first firmware interface having the F1L identified in the P1C and causing the second CPU to load a second firmware interface having the F2L identified in the P2C.
Type: Grant
Filed: December 22, 2022
Date of Patent: August 20, 2024
Inventors: Gary D. Cudak, Mehul Shah, Pravin S. Patel, James Parsonese
-
Patent number: 11989416
Abstract: A computing device includes a system-on-a-chip. The computing device comprises a network interface controller (NIC) that hosts a plurality of virtual functions and physical functions. Two or more compute nodes are coupled to the NIC. Each compute node is configured to operate a plurality of Virtual Machines (VMs). Each VM is configured to operate in conjunction with a virtual function via a virtual function driver. A dedicated VM operates in conjunction with a virtual NIC using a physical function hosted by the NIC via a physical function driver hosted by the compute node. The computing device further comprises a fabric manager configured to own a physical function of the NIC, to bind virtual functions hosted by the NIC to individual compute nodes, and to pool I/O devices across the two or more compute nodes.
Type: Grant
Filed: October 24, 2022
Date of Patent: May 21, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Siamak Tavallaei, Ishwar Agarwal
-
Patent number: 11755240
Abstract: A method for an associative memory device includes storing a plurality of pairs of multi-bit operands X and Y in rows of a memory array of the associative memory device, each pair in a different column of the memory array. Cells in a column are connected by a first bit-line providing a value of activated cells and a second bit-line providing an inverse value of the activated cells. The bits of X are stored in first rows and the bits of Y are stored in second rows. The method includes reading an inverse value of a bit stored in each of the second rows using the second bit-line, writing it to third rows and concurrently, on all columns, performing multi-bit add operations between a value of X, an inverse value of Y and a carry-in bit initialized to 1, providing the difference between X and Y in each of the columns.
Type: Grant
Filed: February 23, 2022
Date of Patent: September 12, 2023
Assignee: GSI Technology Inc.
Inventors: Moshe Lazer, Eyal Amiel
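A worked example of the arithmetic this abstract relies on: adding X to the bitwise inverse of Y with a carry-in of 1 yields X - Y in two's complement. This is a plain software illustration of the identity, not the associative-memory circuit itself; the column width of 8 bits is an assumption.

```python
# X - Y computed as X + (~Y) + 1, one "column" per operand pair.
def subtract_via_inverted_add(x: int, y: int, width: int = 8) -> int:
    mask = (1 << width) - 1
    y_inverse = ~y & mask              # inverse value read out on the second bit-line
    return (x + y_inverse + 1) & mask  # multi-bit add with carry-in initialized to 1

# In the device, all columns would perform this add concurrently.
pairs = [(200, 55), (17, 42), (128, 128)]
print([subtract_via_inverted_add(x, y) for x, y in pairs])
# [145, 231, 0]  (231 == -25 modulo 256)
```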
-
Patent number: 11663454
Abstract: A digital integrated circuit with embedded memory for neural network inferring may include a controller and a matrix of processing blocks and cyclic bidirectional interconnections, where each processing block is coupled to 4 neighboring processing blocks regardless of its position in the matrix. A cyclic bidirectional interconnection may transmit every processing block's output to its upper, lower, left, right neighboring blocks or to its cyclic neighbors of the same row or column in replacement of any missing upper, lower, left or right neighbors. Each processing block may include invariant word buffers, variant word buffers, a multiplexer, and a processing unit. The multiplexer may select one of the 4 neighbor processing blocks' outputs. The processing unit may accept as inputs the multiplexer's selected value, a selected value from the variant word buffers and a selected value from the invariant word buffer and produce output which acts as the processing block's output.
Type: Grant
Filed: March 27, 2020
Date of Patent: May 30, 2023
Assignee: Aspiring Sky Co. Limited
Inventors: Yujie Wen, Zhijiong Luo
-
Patent number: 11526363
Abstract: An electronic apparatus includes: a memory; a storage configured to store a first operating system; and a processor configured to: perform booting by loading the first operating system stored in the storage to the memory, and store data, obtained based on the first operating system running, in the storage, load an obtained second operating system and the data stored in the storage to the memory, identify operation compatibility between the second operating system and the data loaded to the memory, perform booting by loading the second operating system to the memory, based on identification of normal operation compatibility, and perform booting by loading the first operating system to the memory, based on identification of abnormal operation compatibility.
Type: Grant
Filed: June 30, 2020
Date of Patent: December 13, 2022
Assignee: Samsung Electronics Co., Ltd.
Inventors: Hyungjong Shin, Changsu Lee
-
Patent number: 11436043
Abstract: For a process of an operating system, it is detected that a live migration has occurred, the live migration comprising a change in a hardware characteristic of a computer system on which the process executes. A first message is broadcast to a set of processors, the first message causing each processor in the set of processors to enter a waiting state. While each of the set of processors is in the waiting state, a portion of a set of program instructions of the operating system is modified.
Type: Grant
Filed: November 13, 2019
Date of Patent: September 6, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Brian Frank Veale, Juan M. Casas, Jr., Caleb Russell Olson, Amanda Liem
-
Patent number: 11256517
Abstract: A programmable hardware system for machine learning (ML) includes a core and an inference engine. The core receives commands from a host. The commands are in a first instruction set architecture (ISA) format. The core divides the commands into a first set for performance-critical operations, in the first ISA format, and a second set of performance non-critical operations, in the first ISA format. The core executes the second set to perform the performance non-critical operations of the ML operations and streams the first set to the inference engine. The inference engine generates a stream of the first set of commands in a second ISA format based on the first set of commands in the first ISA format. The first set of commands in the second ISA format programs components within the inference engine to execute the ML operations to infer data.
Type: Grant
Filed: December 19, 2018
Date of Patent: February 22, 2022
Assignee: Marvell Asia Pte Ltd
Inventors: Avinash Sodani, Ulf Hanebutte, Senad Durakovic, Hamid Reza Ghasemi, Chia-Hsin Chen
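A rough sketch of the command split described here: the core keeps performance-non-critical commands for itself and streams the performance-critical ones to the inference engine in a second ISA format. The command classes, dictionary layout, and the "translate" mapping are invented for illustration and are not the patented encoding.

```python
# Split first-ISA commands and re-encode the performance-critical set.
CRITICAL_OPS = {"matmul", "conv2d"}   # assumed performance-critical ML operations

def split_commands(commands):
    critical = [c for c in commands if c["op"] in CRITICAL_OPS]
    non_critical = [c for c in commands if c["op"] not in CRITICAL_OPS]
    return critical, non_critical

def to_second_isa(command):
    # Hypothetical re-encoding of a first-ISA command into the inference engine's format.
    return {"opcode": command["op"].upper(), "args": command.get("args", [])}

commands = [{"op": "alloc"}, {"op": "matmul", "args": ["A", "B"]}, {"op": "sync"}]
critical, non_critical = split_commands(commands)
stream = [to_second_isa(c) for c in critical]   # would program the inference engine
print(non_critical, stream)
```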
-
Patent number: 11237856
Abstract: According to one aspect of the present disclosure, a method and technique for mobility operation resource allocation is disclosed. The method includes: receiving a request to migrate a running application from a first machine to a second machine; displaying an adjustable resource allocation mobility setting interface indicating a plurality of mobility settings comprising at least one performance-based mobility setting and at least one concurrency-based mobility setting; receiving, via the interface, a selection of a mobility setting defining a resource allocation to utilize for the migration; and migrating the running application from the first machine to the second machine utilizing resources as set by the selected mobility setting.
Type: Grant
Filed: September 30, 2015
Date of Patent: February 1, 2022
Assignee: International Business Machines Corporation
Inventors: Maria Garza, Neal R. Marion, Nathaniel S. Tomsic, Vasu Vallabhaneni
-
Patent number: 11169811
Abstract: A method of context bouncing includes receiving, at a command processor of a graphics processing unit (GPU), a conditional execute packet providing a hash identifier corresponding to an encapsulated state. The encapsulated state includes one or more context state packets following the conditional execute packet. A command packet following the encapsulated state is executed based at least in part on determining whether the hash identifier of the encapsulated state matches one of a plurality of hash identifiers of active context states currently stored at the GPU.
Type: Grant
Filed: May 30, 2019
Date of Patent: November 9, 2021
Assignee: ADVANCED MICRO DEVICES, INC.
Inventors: Rex Eldon McCrary, Yi Luo, Harry J. Wise, Alexander Fuad Ashkar, Michael Mantor
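A sketch of the hash check this abstract describes: a conditional execute packet names an encapsulated state by hash, and the context state packets are applied only if that hash is not already among the active context states. The function signature, packet representation, and the policy of adding the hash to the active set are assumptions for illustration.

```python
# Apply encapsulated context state only when its hash is not already active.
active_context_hashes = set()

def handle_conditional_execute(hash_id, context_state_packets, command_packet,
                               apply_state, execute):
    if hash_id not in active_context_hashes:
        # State not resident: apply the encapsulated state packets first.
        for packet in context_state_packets:
            apply_state(packet)
        active_context_hashes.add(hash_id)
    # Either way, the command packet following the encapsulated state executes
    # against a context whose hash matches hash_id.
    execute(command_packet)

handle_conditional_execute(
    hash_id=0xBEEF,
    context_state_packets=["set_blend_state", "set_viewport"],
    command_packet="draw",
    apply_state=lambda p: print("apply", p),
    execute=lambda c: print("execute", c),
)
```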
-
Patent number: 11171788
Abstract: A converged infrastructure includes a shared device and compute devices. The compute devices include a baseboard management controller and applications including one or more entitled initiators. The baseboard management controllers generate a distributed provision list including certificate chains for the entitled initiators; and configure the shared device with the certificate chains. The shared device receives a critical command and an encrypted hash, determines a calculated hash of the critical command, decrypts the encrypted hash using keys from the certificate chains, and compares the calculated hash with the decrypted hashes to determine if the critical command comes from one of the entitled initiators.
Type: Grant
Filed: June 3, 2019
Date of Patent: November 9, 2021
Assignee: Dell Products L.P.
Inventors: Balaji Bapu Gururaja Rao, Cyril Jose, Chandrashekar Nelogal, Akshata Sheshagiri Naik
-
Patent number: 11080227
Abstract: The technology disclosed partitions a dataflow graph of a high-level program into memory allocations and execution fragments. The memory allocations represent creation of logical memory spaces in on-processor and/or off-processor memories for data required to implement the dataflow graph. The execution fragments represent operations on the data. The technology disclosed designates the memory allocations to virtual memory units and the execution fragments to virtual compute units. The technology disclosed partitions the execution fragments into memory fragments and compute fragments, and assigns the memory fragments to the virtual memory units and the compute fragments to the virtual compute units. The technology disclosed then allocates the virtual memory units to physical memory units and the virtual compute units to physical compute units.
Type: Grant
Filed: August 8, 2019
Date of Patent: August 3, 2021
Assignee: SambaNova Systems, Inc.
Inventors: David Alan Koeplinger, Raghu Prabhakar, Sumti Jairath
-
Patent number: 10990410
Abstract: Systems and methods for virtually partitioning an integrated circuit may include identifying dimensional attributes of a target input dataset and selecting a data partitioning scheme from a plurality of distinct data partitioning schemes for the target input dataset based on the dimensional attributes of the target dataset and architectural attributes of an integrated circuit. The method may include disintegrating the target dataset into a plurality of distinct subsets of data based on the selected data partitioning scheme and identifying a virtual processing core partitioning scheme from a plurality of distinct processing core partitioning schemes for an architecture of the integrated circuit based on the disintegration of the target input dataset.
Type: Grant
Filed: May 1, 2020
Date of Patent: April 27, 2021
Assignee: quadric.io, Inc.
Inventors: Nigel Drego, Aman Sikka, Mrinalini Ravichandran, Robert Daniel Firu, Veerbhan Kheterpal
-
Patent number: 10963263
Abstract: An apparatus of an aspect includes a plurality of cores and shared core extension logic coupled with each of the plurality of cores. The shared core extension logic has shared data processing logic that is shared by each of the plurality of cores. Instruction execution logic, for each of the cores, in response to a shared core extension call instruction, is to call the shared core extension logic. The call is to have data processing performed by the shared data processing logic on behalf of a corresponding core. Other apparatus, methods, and systems are also disclosed.
Type: Grant
Filed: August 8, 2018
Date of Patent: March 30, 2021
Assignee: Intel Corporation
Inventors: Eran Shifer, Mostafa Hagog, Eliyahu Turiel
-
Patent number: 10884952
Abstract: Enforcing memory operand types using protection keys is generally described herein. A processor system to provide sandbox execution support for protection key rights attacks includes a processor core to execute a task associated with an untrusted application and execute the task using a designated page of a memory; and a memory management unit to designate the page of the memory to support execution of the untrusted application.
Type: Grant
Filed: September 30, 2016
Date of Patent: January 5, 2021
Assignee: Intel Corporation
Inventors: Michael Lemay, David A Koufaty, Ravi Sahita
-
Patent number: 10783003
Abstract: Embodiments of the present disclosure relate to a method, a device and a computer readable medium for managing a dedicated processing resource. According to the embodiments of the present disclosure, a server receives a request of a first application from a client, and based on an index of a resource subset as comprised in the request, determines a dedicated processing resource corresponding to the resource subset for processing the first application request. According to the embodiments of the present disclosure, the dedicated processing resource is divided into a plurality of resource subsets, so that the utilization efficiency of the dedicated processing resource is improved.
Type: Grant
Filed: October 29, 2018
Date of Patent: September 22, 2020
Assignee: EMC IP Holding Company LLC
Inventors: Junping Zhao, Kun Wang, Fan Guo
-
Patent number: 10769097
Abstract: An autonomous memory device in a distributed memory sub-system can receive a database downloaded from a host controller. The autonomous memory device can pass configuration routing information and initiate instructions to disperse portions of the database to neighboring die using an interface that handles inter-die communication. Information is then extracted from the pool of autonomous memory and passed through a host interface to the host controller.
Type: Grant
Filed: October 2, 2017
Date of Patent: September 8, 2020
Assignee: Micron Technologies, Inc.
Inventors: Sean Eilert, Mark Leinwander, Jared Hulbert
-
Patent number: 10713558
Abstract: In one embodiment, a method comprises determining that a membrane potential of a first neuron of a first neuron core exceeds a threshold; determining a first plurality of synapse cores that each store at least one synapse weight associated with the first neuron; and sending a spike message to the determined first plurality of synapse cores.
Type: Grant
Filed: December 30, 2016
Date of Patent: July 14, 2020
Assignee: Intel Corporation
Inventors: Huseyin Ekin Sumbul, Gregory K. Chen, Raghavan Kumar, Phil Knag, Ram K. Krishnamurthy
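A sketch of the spike-delivery step in this abstract: when a neuron's membrane potential crosses its threshold, a spike message goes to every synapse core that stores a weight for that neuron. The weight layout (a per-core map from neuron id to weight) and the message format are assumptions for illustration.

```python
# Send spike messages to the synapse cores holding weights for a firing neuron.
# synapse_core_weights[core_id] maps neuron_id -> weight stored on that core.
synapse_core_weights = {
    0: {7: 0.5, 9: -0.2},
    1: {7: 0.1},
    2: {3: 0.8},
}

def deliver_spike(neuron_id, membrane_potential, threshold, send):
    if membrane_potential <= threshold:
        return
    # Determine every synapse core storing at least one weight for this neuron.
    targets = [core for core, weights in synapse_core_weights.items()
               if neuron_id in weights]
    for core in targets:
        send(core, {"type": "spike", "neuron": neuron_id})

deliver_spike(7, membrane_potential=1.3, threshold=1.0,
              send=lambda core, msg: print(f"core {core} <- {msg}"))
```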
-
Patent number: 10684865
Abstract: The present application is directed to access isolation for multi-operating system devices. In general, a device may be configured using firmware to accommodate more than one operating system (OS) operating concurrently on the device or to transition from one OS to another. An access isolation module (AIM) in the firmware may determine a device equipment configuration and may partition the equipment for use by multiple operating systems. The AIM may disable OS-based equipment sensing and may allocate at least a portion of the equipment to each OS using customized tables. When transitioning between operating systems, the AIM may help to ensure that information from one OS is not accessible to others. For example, the AIM may detect when a foreground OS is to be replaced by a background OS, and may protect (e.g., lockout or encrypt) the files of the foreground OS prior to the background OS becoming active.
Type: Grant
Filed: May 13, 2019
Date of Patent: June 16, 2020
Assignee: Intel Corporation
Inventors: Kevin Y. Li, Vincent J. Zimmer, Xiaohu Zhou, Ping Wu, Zijian You, Michael A. Rothman
-
Patent number: 10623383
Abstract: Disclosed aspects relate to symmetric multiprocessing (SMP) management. A first SMP topology may be identified by a service processor firmware. The first SMP topology may indicate a first set of connection paths for a plurality of processor chips of a multi-node server. A second SMP topology may be identified by the service processor firmware. The second SMP topology may indicate a second set of connection paths for the plurality of processor chips of the multi-node server. The second SMP topology may differ from the first SMP topology. An error event related to the first SMP topology may be detected. A set of traffic may be routed using the second SMP topology. The set of traffic may be routed by the service processor firmware in response to detecting the error event related to the first SMP topology.
Type: Grant
Filed: June 27, 2018
Date of Patent: April 14, 2020
Assignee: International Business Machines Corporation
Inventors: Deepak Kodihalli, Venkatesh Sainath, Dhruvaraj Subhashchandran
-
Patent number: 10563497
Abstract: A system including a plurality of subsystems having a controller coupled with a sensor, an actuator, and a processor; and a resource allocation processor coupled with each of the plurality of subsystems, the resource allocation processor comprising a memory storing instructions which cause the resource allocation processor to determine a drilling system model, receive the measurement from each of the plurality of subsystems, generate a subsystem interaction model based at least in part on the dynamic subsystem model of each of the plurality of subsystems, run a risk evaluation based at least in part on the subsystem interaction model and a risk threshold, generate a resource allocation model, and transmit the resource allocation model to each of the plurality of subsystems; and wherein the subsystem controller in each of the plurality of subsystems activates the actuator to adjust a subsystem parameter to meet the resource allocation model.
Type: Grant
Filed: February 18, 2016
Date of Patent: February 18, 2020
Assignee: HALLIBURTON ENERGY SERVICES
Inventors: Jason D. Dykstra, Yuzhen Xue
-
Patent number: 10567494
Abstract: A data processing system, a computing node, and a data processing method are provided. The data processing system includes a management node and a first class of computing nodes. The management node is configured to allocate first processing tasks to the first class of computing nodes. At least two computing nodes in the first class of computing nodes concurrently perform the first processing tasks allocated by the management node. A computing node performs a combine2 operation and a reduce2 operation on a data block Mx and a data block V1x, to obtain a first intermediate result. Then, the management node obtains a processing result for a to-be-processed dataset according to first intermediate results obtained by the first class of computing nodes. According to the data processing system, when a combine operation and a reduce operation are being performed on data blocks, memory space occupied by computation can be reduced.
Type: Grant
Filed: August 3, 2017
Date of Patent: February 18, 2020
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Guowei Huang, Youliang Yan, Wangbin Zhu
-
Patent number: 10515046
Abstract: Systems, methods, and apparatuses relating to a configurable spatial accelerator are described.
Type: Grant
Filed: July 1, 2017
Date of Patent: December 24, 2019
Assignee: Intel Corporation
Inventors: Kermin Fleming, Kent D. Glossop, Simon C. Steely, Jr.
-
Patent number: 10503551
Abstract: An information handling system may include a field-programmable gate array (FPGA), and a hypervisor to manage virtual machines. The hypervisor may host a first FPGA service manager that loads instances of binary images for FPGA services into respective regions of the FPGA for the benefit of software applications. The virtual machine may host a second FPGA service manager that receives a request for an FPGA service from a software application running in the virtual machine, and sends a query to the first FPGA service manager to determine whether a binary image for the FPGA service exists on the FPGA. The first FPGA service manager may receive the query and, if a binary image instance for the FPGA service exists on the FPGA, may provide information to the second FPGA service manager to facilitate the use of the FPGA service by the software application running in the virtual machine.
Type: Grant
Filed: June 7, 2017
Date of Patent: December 10, 2019
Assignee: Dell Products L.P.
Inventors: Shawn Joel Dube, Andrew Butcher
-
Patent number: 10425324
Abstract: A device and method for providing balanced routing paths in a computational grid including determining a type of topology of the computational grid having a plurality of levels, wherein each level includes a plurality of switches, determining whether the type of topology of the computational grid is a fat-tree, determining whether the fat-tree is odd, determining whether the fat-tree is a regular fat-tree, computing a first set of routing paths for the computational grid based on the determining of whether the fat-tree is odd and is a regular fat-tree, computing a second set of routing paths for the computational grid using a topology agnostic routing technique, and configuring forwarding tables in said switches with the first set of computed routing paths when the topology is determined to be a fat-tree and with the second set of computed routing paths when the topology is determined to not be a fat-tree.
Type: Grant
Filed: August 17, 2017
Date of Patent: September 24, 2019
Assignee: Fabriscale Technologies AS
Inventors: Jesus Camacho Villanueva, Tor Skeie, Sven-Arne Reinemo
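A sketch of the decision flow in this abstract: use fat-tree-specific routing when the grid is a fat-tree, otherwise fall back to a topology-agnostic technique. The dictionary-based topology description and the two routing callbacks are stand-ins; real detection would inspect the switch levels themselves.

```python
# Choose between fat-tree routing and topology-agnostic routing.
def compute_routing_paths(topology, fat_tree_routing, agnostic_routing):
    if topology.get("kind") == "fat-tree":
        is_odd = topology.get("levels", 0) % 2 == 1
        is_regular = topology.get("regular", False)
        # First set of paths, specialized by the odd/regular properties.
        return fat_tree_routing(topology, odd=is_odd, regular=is_regular)
    # Second set of paths: works for any topology.
    return agnostic_routing(topology)

paths = compute_routing_paths(
    {"kind": "fat-tree", "levels": 3, "regular": True},
    fat_tree_routing=lambda t, odd, regular: f"fat-tree paths (odd={odd}, regular={regular})",
    agnostic_routing=lambda t: "topology-agnostic paths",
)
print(paths)  # the chosen paths would then be written into the switch forwarding tables
```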
-
Patent number: 10379828
Abstract: A computer is configured to generate a parallel program for a multi-core microcomputer from a single program for a single-core microcomputer, based on a dependency analysis of a bundle of unit processes in the single program. The computer obtains dependency information that enables dependency determination of dependency un-analyzable unit processes. Further, the computer performs a dependency analysis of dependency analyzable unit processes. Then, the computer assigns the dependency un-analyzable unit processes and the dependency analyzable unit processes respectively to multiple cores of the multi-core microcomputer, while fulfilling dependency among those processes, based on the obtained dependency information of the dependency un-analyzable unit processes and an analysis result of the dependency analyzable unit processes.
Type: Grant
Filed: June 9, 2017
Date of Patent: August 13, 2019
Assignee: DENSO CORPORATION
Inventors: Kenichi Mineda, Takayuki Nagai, Yu Nakagawa
-
Patent number: 10318297
Abstract: A self-timed parallelized multi-core processor has an instruction decoder unit for receiving a program code instruction, determining an operating code and latency for the instruction, and assigning a loop index to the instruction. An instruction decomposer creates a primitive by decomposing the instruction, replacing the loop index with a core index, and broadcasting the primitive. Self-timed processing cores each having a unique core index compare the core index to their unique processing core index. The processing cores act on the primitive when their processing core index is within a threshold of the core index.
Type: Grant
Filed: January 30, 2015
Date of Patent: June 11, 2019
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Yiqun Ge, Wuxian Shi, Lan Hu
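A sketch of the broadcast-and-compare step in this abstract: the decomposer broadcasts a primitive carrying a core index, and each self-timed core acts only if its own index is within a threshold of the broadcast index. Modeling the cores as a simple loop is an assumption; in the device each core performs the comparison independently.

```python
# Each core compares the broadcast core index against its own index.
def broadcast_primitive(primitive, core_index, num_cores, threshold):
    acted = []
    for my_index in range(num_cores):            # stands in for the per-core check
        if abs(my_index - core_index) <= threshold:
            acted.append(my_index)               # this core executes the primitive
    return acted

print(broadcast_primitive({"op": "add"}, core_index=4, num_cores=8, threshold=1))
# [3, 4, 5] -> only cores within the threshold act on the primitive
```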
-
Patent number: 10229043
Abstract: Methods of requesting memory spaces and resources using a memory controller are provided. A particular method may include communicating, by a memory controller, a request to a computer program for a resource, and using the resource in response to an indication from the computer program that the resource is available. Another particular method may include communicating a request to a memory controller for at least one of a memory space of a memory or a second resource. The memory controller may be configured to communicate the request from the first resource to a computer program. Another particular method may also include using, by the first resource, at least one of the memory space or the second resource in response to an indication that the memory space or the second resource is available.
Type: Grant
Filed: July 23, 2013
Date of Patent: March 12, 2019
Assignee: International Business Machines Corporation
Inventors: Edgar R. Cordero, Varkey K. Varghese, Diyanesh B. Vidyapoornachary
-
Patent number: 10203960
Abstract: A reconfigurable processor and a conditional execution method for the same are provided. The reconfigurable processor includes: a routing unit, configured to assign a conditional judgment statement and a conditional execution statement to process the conditional judgment statement and the conditional execution statement in parallel; a first arithmetic logic unit, configured to process the conditional judgment statement according to an assignment of the routing unit to obtain a single-bit signal; a second arithmetic logic unit, configured to: process the conditional execution statement according to the assignment of the routing unit to obtain a conditional execution result; receive the single-bit signal; and control an output of the conditional execution result according to the single-bit signal.
Type: Grant
Filed: February 20, 2014
Date of Patent: February 12, 2019
Assignee: TSINGHUA UNIVERSITY
Inventors: Leibo Liu, Jianfeng Zhu, Xiao Yang, Shouyi Yin, Shaojun Wei
-
Patent number: 10185567
Abstract: A method for translating instructions for a processor. The method includes accessing a guest instruction and performing a first level translation of the guest instruction using a first level conversion table. The method further includes outputting a resulting native instruction when the first level translation proceeds to completion. A second level translation of the guest instruction is performed using a second level conversion table when the first level translation does not proceed to completion, wherein the second level translation further processes the guest instruction based upon a partial translation from the first level conversion table. The resulting native instruction is output when the second level translation proceeds to completion.
Type: Grant
Filed: December 7, 2015
Date of Patent: January 22, 2019
Assignee: Intel Corporation
Inventor: Mohammad Abdallah
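A sketch of the two-level translation flow in this abstract: try the first-level conversion table, and if it yields only a partial translation, finish with the second-level table seeded by that partial result. The table contents and the (result, complete?) encoding are invented for illustration.

```python
# Two-level guest-to-native instruction translation with fallback.
first_level = {
    "guest_add":   ("native_add", True),          # (result, translation complete?)
    "guest_vecop": ("native_vec_prefix", False),  # only a partial translation
}
second_level = {"native_vec_prefix": "native_vec_mul"}

def translate(guest_instruction):
    result, complete = first_level[guest_instruction]
    if complete:
        return result               # first-level translation proceeded to completion
    # Second-level translation continues from the partial first-level result.
    return second_level[result]

print(translate("guest_add"))    # native_add
print(translate("guest_vecop"))  # native_vec_mul
```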
-
Patent number: 10176021
Abstract: Actual capacity usage limits for one or more logical partitions or groups of logical partitions are managed based on hardware-specific determinations of actual capacity usage.
Type: Grant
Filed: November 23, 2015
Date of Patent: January 8, 2019
Assignee: CA, Inc.
Inventors: Johannes Gerardus Jozef Peeters, Friedhelm Herbert Stoehler, Horst Walter Doehler
-
Patent number: 10140156
Abstract: Automated techniques are disclosed for minimizing communication between nodes in a system comprising multiple nodes for executing requests in which a request type is associated with a particular node. For example, a technique comprises the following steps. Information is maintained about frequencies of compound requests received and individual requests comprising the compound requests. For a plurality of request types which frequently occur in a compound request, the plurality of request types is associated to a same node. As another example, a technique for minimizing communication between nodes, in a system comprising multiple nodes for executing a plurality of applications, comprises the steps of maintaining information about an amount of communication between said applications, and using said information to place said applications on said nodes to minimize communication among said nodes.
Type: Grant
Filed: January 8, 2014
Date of Patent: November 27, 2018
Assignee: International Business Machines Corporation
Inventors: Paul M. Dantzig, Arun Kwangil Iyengar, Francis Nicholas Parr, Gong Su
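A sketch of the placement idea in this abstract: count how often request types appear together in compound requests, then assign types that frequently co-occur to the same node so a compound request needs fewer cross-node hops. The greedy grouping and the frequency threshold below are illustrative assumptions, not the patented method itself.

```python
# Co-locate request types that frequently appear in the same compound request.
from collections import Counter
from itertools import combinations

compound_requests = [("login", "profile"), ("login", "profile", "feed"), ("search", "feed")]

pair_counts = Counter()
for request in compound_requests:
    pair_counts.update(combinations(sorted(set(request)), 2))

node_of = {}
next_node = 0
for (a, b), count in pair_counts.most_common():
    if count < 2:
        break                                   # only co-locate frequently paired types
    node = node_of.get(a, node_of.get(b, next_node))
    if node == next_node:
        next_node += 1
    node_of[a] = node_of[b] = node

print(node_of)  # e.g. {'login': 0, 'profile': 0} -> both served by the same node
```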
-
Patent number: 10133504
Abstract: A system and method of partitioning host processing system resources is provided. An integrated circuit device having a plurality of processors or processing cores and a number of interfaces is partitioned at boot into different hardware partitions based on the application needs of the host processing system. The technology provides a non-transitory memory storage including instructions; and a plurality of processors in communication with the memory. The integrated circuit device also includes a plurality of communication interfaces in communication with the processors. At least one of the plurality of processors executes instructions to configure a subset of the plurality of processors to a first hardware partition, and configure a different subset of the plurality of processors and at least one of the plurality of communication interfaces to a second hardware partition.
Type: Grant
Filed: April 6, 2016
Date of Patent: November 20, 2018
Assignee: FUTUREWEI TECHNOLOGIES, INC.
Inventors: Weimin Pan, Kangkang Shen
-
Patent number: 10069756
Abstract: Techniques are disclosed for integration, provisioning and management of entities and processes in a computing system such as, by way of example only, business entities and business processes. In particular, techniques are disclosed for implementing an extensible support system for multiple service offerings. For example, such a support system can be a business support system which may be employed in conjunction with a cloud computing environment.
Type: Grant
Filed: February 22, 2017
Date of Patent: September 4, 2018
Assignee: International Business Machines Corporation
Inventors: Yu Deng, Murthy V. Devarakonda, Michael Reuben Head, Rafah A. Hosn, Andrzej Kochut, Jonathan Paul Munson, Hidayatullah Habeebullah Shaikh
-
Patent number: 10025638
Abstract: The present application is directed to a multiple-cloud-computing-facility aggregation that provides multi-cloud aggregation and that includes a cloud-connector server and cloud-connector nodes that cooperate to provide services that are distributed across multiple clouds. These services include the transfer of virtual-machine containers, or workloads, between two different clouds and remote management interfaces.
Type: Grant
Filed: July 2, 2012
Date of Patent: July 17, 2018
Assignee: VMware, Inc.
Inventor: Jagannath N. Raghu
-
Patent number: 9946544
Abstract: Instructions and logic provide SIMD permute controls with leading zero count functionality. Some embodiments include processors with a register with a plurality of data fields, each of the data fields to store a second plurality of bits. A destination register has corresponding data fields, each of these data fields to store a count of the number of most significant contiguous bits set to zero for corresponding data fields. Responsive to decoding a vector leading zero count instruction, execution units count the number of most significant contiguous bits set to zero for each of data fields in the register, and store the counts in corresponding data fields of the first destination register. Vector leading zero count instructions can be used to generate permute controls and completion masks to be used along with the set of permute controls, to resolve dependencies in gather-modify-scatter SIMD operations.
Type: Grant
Filed: October 23, 2017
Date of Patent: April 17, 2018
Assignee: Intel Corporation
Inventors: Christopher J. Hughes, Mikhail Plotnikov, Andrey Naraikin, Robert Valentine
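A worked example of the per-element leading-zero count this abstract describes: each data field of the source gets the count of its most significant contiguous zero bits, written to the corresponding field of the destination. This is a scalar software illustration of the semantics, assuming 32-bit fields; it is not the SIMD instruction encoding.

```python
# Per-field leading-zero count over a vector of 32-bit data fields.
def vector_lzcnt(fields, width=32):
    counts = []
    for value in fields:
        if value == 0:
            counts.append(width)                    # all bits zero
        else:
            counts.append(width - value.bit_length())
    return counts

src = [0x00000001, 0x80000000, 0, 0x0000FF00]
print(vector_lzcnt(src))  # [31, 0, 32, 16] -> stored in the destination's fields
```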
-
Patent number: 9946665
Abstract: Fetch Less Instruction Processing (FLIP) Computer Architecture for Central Processing Units (CPU). This embodiment relates to computing systems, and more particularly to central processing units in computing systems. The principal object of this embodiment is to provide a Fetch Less Instruction Processing (FLIP) computer architecture using FLIP elements as building blocks for computer program processing. Another object of the embodiment is to use a protocol to interconnect FLIP elements, which makes the current operating systems, program execution models, compilers, libraries and so on to be easily transitioned to the FLIP computer architecture with minimal changes.
Type: Grant
Filed: May 14, 2012
Date of Patent: April 17, 2018
Assignee: MELANGE SYSTEMS PRIVATE LIMITED
Inventor: Narain Venkata Surendra Attili
-
Patent number: 9921880
Abstract: A system and method for facilitating allocating computing resources to workloads, facilitating workload performance isolation. An example method includes determining one or more workloads to be allocated a set of computing resources in the computing environment, the one or more workloads characterized by metadata describing one or more workload properties; and using the one or more workload properties to calculate a binding between each of the one or more workloads and one or more corresponding portions of the computing resources. Plural competing workloads may be isolated by binding each workload to a disjunct set of Central Processing Units (CPUs) that share as few common hardware resources as possible given a topology of the computing resources. Resource allocation adjustments need not require any reconfiguration of the system or adjustment to already provisioned workloads.
Type: Grant
Filed: January 28, 2016
Date of Patent: March 20, 2018
Assignee: Oracle International Corporation
Inventors: Nicolas Michael, Chen Wang, Jonathan Chew
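A sketch of the binding calculation in this abstract: competing workloads get disjoint CPU sets, chosen from a hardware topology so that different workloads share as little hardware as possible. The topology here is a simple socket-to-CPU map and the greedy placement is an illustrative assumption, not the patented algorithm.

```python
# Bind workloads to disjoint CPU sets, preferring separate sockets.
topology = {0: [0, 1, 2, 3], 1: [4, 5, 6, 7]}   # socket id -> CPU ids

def bind_workloads(workloads, topology):
    """workloads: list of (name, cpus_needed); returns name -> list of CPU ids."""
    bindings = {}
    free = {socket: list(cpus) for socket, cpus in topology.items()}
    for name, need in sorted(workloads, key=lambda w: -w[1]):
        # Place each workload on the socket with the most free CPUs, so
        # different workloads end up on different sockets when possible.
        socket = max(free, key=lambda s: len(free[s]))
        take, free[socket] = free[socket][:need], free[socket][need:]
        bindings[name] = take
    return bindings

print(bind_workloads([("db", 3), ("web", 2)], topology))
# {'db': [0, 1, 2], 'web': [4, 5]} -- disjoint CPU sets on different sockets
```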
-
Patent number: 9886307
Abstract: Methods, systems, and computer program products for cross-platform scheduling with fairness and platform-specific optimization are provided herein. A method includes determining dimensions of a set of containers in which multiple tasks associated with a request are to be executed; assigning each of the containers to a processing node on one of multiple platforms based on the dimensions of the given container, and to a platform owner selected from the multiple platforms based on a comparison of resource requirements of each of the multiple platforms and the dimensions of the given container; and generating container assignments across the set of containers by incorporating the assigned node of each container in the set of containers, the assigned platform owner of each container in the set of containers, one or more scheduling requirements of each of the platforms, one or more utilization objectives, and enforcing a sharing guarantee of each of the platforms.
Type: Grant
Filed: July 1, 2015
Date of Patent: February 6, 2018
Assignee: International Business Machines Corporation
Inventors: Kirsten W. Hildrum, Zubair Nabi, Viswanath Nagarajan, Robert Saccone, Kanthi K. Sarpatwar, Rohit Wagle, Joel Leonard Wolf
-
Patent number: 9774651
Abstract: A method and an apparatus for rapid data distribution, the method includes: sending, by a central processing unit, data description information to a rapid forwarding module, where the data description information includes an address and length information of data requested by a user; reading, by the rapid forwarding module according to the data description information, the data requested by the user and forwarding the data requested by the user to a network interface controller; and sending, by the network interface controller, the data requested by the user to the user. By using the method provided in the present invention, after services are increased, only the network interface controller and a storage device need to be added, and cost for the memory and the central processing unit does not need to be increased.
Type: Grant
Filed: April 14, 2014
Date of Patent: September 26, 2017
Assignee: Huawei Technologies Co., Ltd.
Inventors: Fan Fang, Keping Chen
-
Patent number: 9734071
Abstract: A method and apparatus for snooping caches is disclosed. In one embodiment, a system includes a number of processing nodes and a cache shared by each of the processing nodes. The cache is partitioned such that each of the processing nodes utilizes only one assigned partition. If a query by a processing node to its assigned partition of the cache results in a miss, a cache controller may determine whether to snoop other partitions in search of the requested information. The determination may be made based on history of where requested information was obtained from responsive to previous misses in that partition.
Type: Grant
Filed: May 15, 2015
Date of Patent: August 15, 2017
Assignee: Oracle International Corporation
Inventors: Serena Leung, Ramaswamy Sivaramakrishnan, Joann Lam, David Smentek
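A sketch of the snoop decision in this abstract: on a miss in a node's own partition, the history of where previous misses were satisfied decides whether snooping the other partitions is worthwhile. The history policy below (skip snooping only when recent snoops all failed) and the dict-based partitions are assumed stand-ins for the hardware mechanism.

```python
# Partitioned shared cache with a history-guided snoop decision.
from collections import deque

class PartitionedCache:
    def __init__(self, num_partitions, history_len=8):
        self.partitions = [dict() for _ in range(num_partitions)]
        self.snoop_history = deque(maxlen=history_len)   # True = snoop found the line

    def lookup(self, node_partition, address):
        if address in self.partitions[node_partition]:
            return self.partitions[node_partition][address]   # hit in own partition
        if self.snoop_history and sum(self.snoop_history) == 0:
            return None          # history says snooping rarely helps; go to memory
        for idx, part in enumerate(self.partitions):
            if idx != node_partition and address in part:
                self.snoop_history.append(True)
                return part[address]
        self.snoop_history.append(False)
        return None              # miss everywhere; fetch from memory

cache = PartitionedCache(num_partitions=2)
cache.partitions[1][0x100] = "data"
print(cache.lookup(0, 0x100))    # snoop of the other partition supplies the line
```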
-
Patent number: 9703721
Abstract: Embodiments are directed to a method of accessing a data frame, wherein a first portion of the data frame is in a first memory block, and wherein a second portion of the data frame is in a second memory block. The method includes determining that an access of the data frame crosses a boundary between the first and second memory blocks, determining that an attempted translation of an address of the first portion of the data frame in the first memory block did not result in a translation fault, and accessing the first portion of the data frame. The method further includes, based at least in part on a determination that an attempted translation of an address of the second portion of the data frame in the second memory block resulted in a translation fault, accessing at least one default character as a replacement for accessing the second portion of the data frame.
Type: Grant
Filed: December 29, 2014
Date of Patent: July 11, 2017
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michael Gschwind, Brett Olsson
-
Patent number: 9690509
Abstract: Embodiments are directed to a computer implemented method of accessing a data frame, wherein a first portion of the data frame is in a first memory block, and wherein a second portion of the data frame is in a second memory block. The method includes initiating, by a processor, an access of the data frame. The method further includes accessing, by the processor, the first portion of the data frame. The method further includes, based at least in part on a determination that the processor does not have access to the second memory block, accessing at least one default character as a replacement for accessing the second portion of the data frame.
Type: Grant
Filed: August 10, 2015
Date of Patent: June 27, 2017
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michael Gschwind, Brett Olsson, Raul E. Silvera
-
Patent number: 9684517
Abstract: A multi-core processor system includes a first resource, a first core, a second resource, and a second core. The first core runs a first operating system (OS), and the first resource is allocated to the first OS. The second core runs a second OS, and the second resource is exclusively allocated to the second OS. The first OS and the second OS are designed for running at the same time, and the second OS is configured for monitoring or debugging the first resource, the first core, or the first OS.
Type: Grant
Filed: October 29, 2014
Date of Patent: June 20, 2017
Assignee: Lenovo Enterprise Solutions (Singapore) PTE. LTD.
Inventors: Alpus P C Chen, Chun-Wei Chen, Elysee Y H Hsieh, Kelvin Shieh
-
Patent number: 9678886
Abstract: Embodiments are directed to a method of accessing a data frame, wherein a first portion of the data frame is in a first memory block, and wherein a second portion of the data frame is in a second memory block. The method includes determining that an access of the data frame crosses a boundary between the first and second memory blocks, determining that an attempted translation of an address of the first portion of the data frame in the first memory block did not result in a translation fault, and accessing the first portion of the data frame. The method further includes, based at least in part on a determination that an attempted translation of an address of the second portion of the data frame in the second memory block resulted in a translation fault, accessing at least one default character as a replacement for accessing the second portion of the data frame.
Type: Grant
Filed: August 19, 2015
Date of Patent: June 13, 2017
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michael Gschwind, Brett Olsson
-
Patent number: 9626207
Abstract: A computer implemented method of managing an adapter includes determining that an adapter is assigned to an operating system and generating a single root input/output virtualization (SR-IOV) function associated with the adapter. The SR-IOV function may be correlated to a non-SR-IOV function, and the non-SR-IOV function may be used to modify an operational status of the adapter.
Type: Grant
Filed: December 16, 2011
Date of Patent: April 18, 2017
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Charles S. Graham, Gregory M. Nordstrom, John R. Oberly, III
-
Patent number: 9582287
Abstract: An apparatus of an aspect includes a plurality of cores and shared core extension logic coupled with each of the plurality of cores. The shared core extension logic has shared data processing logic that is shared by each of the plurality of cores. Instruction execution logic, for each of the cores, in response to a shared core extension call instruction, is to call the shared core extension logic. The call is to have data processing performed by the shared data processing logic on behalf of a corresponding core. Other apparatus, methods, and systems are also disclosed.
Type: Grant
Filed: September 27, 2012
Date of Patent: February 28, 2017
Assignee: Intel Corporation
Inventors: Eran Shifer, Mostafa Hagog, Eliyahu Turiel
-
Patent number: 9569127
Abstract: Embodiments are directed to a method of accessing a data frame. The method includes, based at least in part on a determination that the data frame spans first and second memory blocks, and further based at least in part on a determination that the processor has access to the first and second memory blocks, accessing the data frame. The method includes, based at least in part on a determination that the data frame spans the first and second memory blocks, and based at least in part on a determination that the processor has access to the first memory block but does not have access to the second memory block, accessing a first portion of the data frame that is in the first memory block, and accessing at least one default character as a replacement for accessing a second portion of the data frame that is in the second memory block.
Type: Grant
Filed: December 29, 2014
Date of Patent: February 14, 2017
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michael Gschwind, Brett Olsson, Raul E. Silvera
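A sketch of the access policy in this entry (and the related 9703721, 9690509, and 9678886 entries above): if a data frame spans two memory blocks and only the first is accessible, return the first portion and pad the rest with a default character instead of faulting. Memory blocks, permissions, and the null default character are simulated assumptions.

```python
# Access a data frame that may span an inaccessible memory block.
DEFAULT_CHAR = b"\x00"

def load_frame(blocks, access_ok, start, length, block_size):
    out = b""
    for offset in range(length):
        addr = start + offset
        block_idx, block_off = divmod(addr, block_size)
        if access_ok[block_idx]:
            out += blocks[block_idx][block_off:block_off + 1]
        else:
            out += DEFAULT_CHAR      # replacement for the inaccessible portion
    return out

blocks = [b"ABCDEFGH", b"IJKLMNOP"]
access_ok = [True, False]            # no access to the second memory block
print(load_frame(blocks, access_ok, start=6, length=4, block_size=8))  # b'GH\x00\x00'
```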
-
Patent number: 9563432
Abstract: Various embodiments relating to executing different types of instruction code in a micro-processing system are provided.
Type: Grant
Filed: April 19, 2013
Date of Patent: February 7, 2017
Assignee: Nvidia Corporation
Inventors: Ross Segelken, Darrell D. Boggs, Shiaoli Mendyke
-
Patent number: 9558003
Abstract: A reconfigurable processor and an operation method of the reconfigurable processor may include: a status register configured to store a status value used to determine at least one execution mode in a processor; a parallel processing scheduler configured to schedule at least one of a very long instruction word (VLIW) logic and a coarse grained architecture (CGA) logic to be used based on the stored status value; a VLIW register configured to store processed data according to the VLIW logic; and a CGA register configured to store processed data according to the CGA logic.
Type: Grant
Filed: November 27, 2013
Date of Patent: January 31, 2017
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Doo Hyun Kim, Joon Ho Song, Do Hyung Kim, Shi Hwa Lee