Partitioning Patents (Class 712/13)
  • Patent number: 11755240
    Abstract: A method for an associative memory device includes storing a plurality of pairs of multi-bit operands X and Y in rows of a memory array of the associative memory device, each pair in a different column of the memory array. Cells in a column are connected by a first bit-line providing a value of activated cells and a second bit-line providing an inverse value of the activated cells. The bits of X are stored in first rows and the bits of Y are stored in second rows. The method includes reading an inverse value of a bit stored in each of the second rows using the second bit-line, writing it to third rows and concurrently, on all columns, performing multi-bit add operations between a value of X, an inverse value of Y and a carry-in bit initiated to 1, providing the difference between X and Y in each of the columns.
    Type: Grant
    Filed: February 23, 2022
    Date of Patent: September 12, 2023
    Assignee: GSI Technology Inc.
    Inventors: Moshe Lazer, Eyal Amiel
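The arithmetic in this abstract is the standard two's-complement identity X - Y = X + (inverse of Y) + 1, with the inverted Y bits read out over the second bit-line and the carry-in set to 1. Below is a minimal Python sketch of that identity applied to a few (X, Y) column pairs; the function name, the 8-bit operand width, and the column list are illustrative assumptions, not the patent's associative-memory circuitry.

```python
# Minimal sketch of the subtraction idea in patent 11755240: X - Y is computed
# as X + (~Y) + 1, with the inverse of Y read out and a carry-in set to 1.
# Names, width, and the column layout are illustrative assumptions.

WIDTH = 8  # assumed operand width

def subtract_via_inverse(x: int, y: int, width: int = WIDTH) -> int:
    """Return (x - y) mod 2**width using addition with the inverted Y bits."""
    y_inverse = (~y) & ((1 << width) - 1)   # second bit-line: inverse value of Y
    carry_in = 1                            # carry-in bit set to 1
    return (x + y_inverse + carry_in) & ((1 << width) - 1)

# Every "column" holds one (X, Y) pair; all columns are processed the same way.
columns = [(200, 57), (15, 40), (100, 100)]
for x, y in columns:
    diff = subtract_via_inverse(x, y)
    assert diff == (x - y) % (1 << WIDTH)
    print(f"{x} - {y} -> {diff} (mod {1 << WIDTH})")
```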
  • Patent number: 11663454
    Abstract: A digital integrated circuit with embedded memory for neural network inferring may include a controller and a matrix of processing blocks and cyclic bidirectional interconnections, where each processing block is coupled to 4 neighboring processing blocks regardless of its position in the matrix. A cyclic bidirectional interconnection may transmit every processing block's output to its upper, lower, left, right neighboring blocks or to its cyclic neighbors of the same row or column in replacement of any missing upper, lower, left or right neighbors. Each processing block may include invariant word buffers, variant word buffers, a multiplexer, and a processing unit. The multiplexer may select one of the 4 neighbor processing blocks' outputs. The processing unit may accept as inputs the multiplexer's selected value, a selected value from the variant word buffers and a selected value from the invariant word buffer and produce output which acts as the processing block's output.
    Type: Grant
    Filed: March 27, 2020
    Date of Patent: May 30, 2023
    Assignee: Aspiring Sky Co. Limited
    Inventors: Yujie Wen, Zhijiong Luo
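The "cyclic neighbors of the same row or column" rule amounts to torus-style wraparound indexing: a block on the top row uses the bottom block of its column as its upper neighbor, and similarly at the other edges. A small sketch of that index arithmetic, with an ordinary (row, col) grid and invented names standing in for the hardware interconnect:

```python
# Sketch of the cyclic neighbor rule in patent 11663454: every processing block
# at (row, col) talks to its upper, lower, left and right neighbors, and a
# missing neighbor at the matrix edge is replaced by the cyclic neighbor of
# the same row or column. The function name and grid model are assumptions.

def cyclic_neighbors(row: int, col: int, rows: int, cols: int):
    """Return the (row, col) indices of the four neighbors with wraparound."""
    return {
        "upper": ((row - 1) % rows, col),
        "lower": ((row + 1) % rows, col),
        "left":  (row, (col - 1) % cols),
        "right": (row, (col + 1) % cols),
    }

# Block (0, 0) in a 4x4 matrix: its "upper" neighbor wraps to row 3.
print(cyclic_neighbors(0, 0, rows=4, cols=4))
```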
  • Patent number: 11526363
    Abstract: An electronic apparatus includes: a memory; a storage configured to store a first operating system; and a processor configured to: perform booting by loading the first operating system stored in the storage to the memory, and store data, obtained based on the first operating system running, in the storage, load an obtained second operating system and the data stored in the storage to the memory, identify operation compatibility between the second operating system and the data loaded to the memory, perform booting by loading the second operating system to the memory, based on identification of normal operation compatibility, and perform booting by loading the first operating system to the memory, based on identification of abnormal operation compatibility.
    Type: Grant
    Filed: June 30, 2020
    Date of Patent: December 13, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hyungjong Shin, Changsu Lee
  • Patent number: 11436043
    Abstract: For a process of an operating system, it is detected that a live migration has occurred, the live migration comprising a change in a hardware characteristic of a computer system on which the process executes. A first message is broadcast to a set of processors, the first message causing each processor in the set of processors to enter a waiting state. While each of the set of processors is in the waiting state, a portion of a set of program instructions of the operating system is modified.
    Type: Grant
    Filed: November 13, 2019
    Date of Patent: September 6, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Brian Frank Veale, Juan M. Casas, Jr., Caleb Russell Olson, Amanda Liem
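The quiesce-and-patch sequence in this abstract (park every processor, rewrite the affected operating-system instructions, then release) can be mimicked in miniature with threads. The sketch below is an illustrative analogue only; the thread count, the "code" list standing in for patchable instructions, and the barrier/event names are assumptions, not IBM's implementation.

```python
# Toy analogue of patent 11436043: broadcast a message that parks every
# processor in a waiting state, modify the operating-system instructions while
# they wait, then release them.
import threading

NUM_CPUS = 4
code = ["read_hw_characteristic_old"]       # stand-in for patchable instructions
parked = threading.Barrier(NUM_CPUS + 1)    # all CPUs plus the patching thread
resume = threading.Event()

def cpu(index: int) -> None:
    parked.wait()      # "first message": enter the waiting state
    resume.wait()      # stay parked until patching is complete
    print(f"cpu{index} now runs {code[0]}")

threads = [threading.Thread(target=cpu, args=(i,)) for i in range(NUM_CPUS)]
for t in threads:
    t.start()

parked.wait()                                # every CPU is now parked
code[0] = "read_hw_characteristic_migrated"  # patch while they wait
resume.set()                                 # release the CPUs
for t in threads:
    t.join()
```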
  • Patent number: 11256517
    Abstract: A programmable hardware system for machine learning (ML) includes a core and an inference engine. The core receives commands from a host. The commands are in a first instruction set architecture (ISA) format. The core divides the commands into a first set for performance-critical operations, in the first ISA format, and a second set for performance non-critical operations, in the first ISA format. The core executes the second set to perform the performance non-critical operations of the ML operations and streams the first set to the inference engine. The inference engine generates a stream of the first set of commands in a second ISA format based on the first set of commands in the first ISA format. The first set of commands in the second ISA format programs components within the inference engine to execute the ML operations to infer data.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: February 22, 2022
    Assignee: Marvell Asia Pte Ltd
    Inventors: Avinash Sodani, Ulf Hanebutte, Senad Durakovic, Hamid Reza Ghasemi, Chia-Hsin Chen
  • Patent number: 11237856
    Abstract: According to one aspect of the present disclosure, a method and technique for mobility operation resource allocation is disclosed. The method includes: receiving a request to migrate a running application from a first machine to a second machine; displaying an adjustable resource allocation mobility setting interface indicating a plurality of mobility settings comprising at least one performance-based mobility setting and at least one concurrency-based mobility setting; receiving, via the interface, a selection of a mobility setting defining a resource allocation to utilize for the migration; and migrating the running application from the first machine to the second machine utilizing resources as set by the selected mobility setting.
    Type: Grant
    Filed: September 30, 2015
    Date of Patent: February 1, 2022
    Assignee: International Business Machines Corporation
    Inventors: Maria Garza, Neal R. Marion, Nathaniel S. Tomsic, Vasu Vallabhaneni
  • Patent number: 11169811
    Abstract: A method of context bouncing includes receiving, at a command processor of a graphics processing unit (GPU), a conditional execute packet providing a hash identifier corresponding to an encapsulated state. The encapsulated state includes one or more context state packets following the conditional execute packet. A command packet following the encapsulated state is executed based at least in part on determining whether the hash identifier of the encapsulated state matches one of a plurality of hash identifiers of active context states currently stored at the GPU.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: November 9, 2021
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Rex Eldon McCrary, Yi Luo, Harry J. Wise, Alexander Fuad Ashkar, Michael Mantor
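The conditional execute packet effectively de-duplicates context state: the command processor only replays the encapsulated state packets when their hash identifier is not already among the active context states on the GPU. A rough software analogue follows; the SHA-256 hashing, the packet representation, and the fixed number of active slots are assumptions made for illustration, not AMD's hardware scheme.

```python
# Rough sketch of the conditional-execute idea in patent 11169811: the
# encapsulated state carries a hash identifier, and its packets are only
# (re)applied when that hash is absent from the active context states.
import hashlib
from collections import OrderedDict

MAX_ACTIVE_STATES = 8  # assumed number of on-chip context slots

active_states = OrderedDict()  # hash identifier -> context state packets

def hash_state(context_packets: list[bytes]) -> str:
    return hashlib.sha256(b"".join(context_packets)).hexdigest()

def conditional_execute(context_packets: list[bytes], command: str) -> str:
    state_id = hash_state(context_packets)
    if state_id not in active_states:
        # Miss: load the encapsulated state, evicting the oldest slot if full.
        if len(active_states) >= MAX_ACTIVE_STATES:
            active_states.popitem(last=False)
        active_states[state_id] = context_packets
    # Hit or miss, the command packet that follows the state now executes.
    return f"executed {command!r} under context {state_id[:8]}"

state = [b"viewport=1080p", b"blend=alpha"]
print(conditional_execute(state, "draw_batch_0"))
print(conditional_execute(state, "draw_batch_1"))  # same hash: state not re-sent
```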
  • Patent number: 11171788
    Abstract: A converged infrastructure includes a shared device and compute devices. The compute devices include a baseboard management controller and applications including one or more entitled initiators. The baseboard management controllers generate a distributed provision list including certificate chains for the entitled initiators, and configure the shared device with the certificate chains. The shared device receives a critical command and an encrypted hash, determines a calculated hash of the critical command, decrypts the encrypted hash using keys from the certificate chains, and compares the calculated hash with the decrypted hashes to determine if the critical command comes from one of the entitled initiators.
    Type: Grant
    Filed: June 3, 2019
    Date of Patent: November 9, 2021
    Assignee: Dell Products L.P.
    Inventors: Balaji Bapu Gururaja Rao, Cyril Jose, Chandrashekar Nelogal, Akshata Sheshagiri Naik
  • Patent number: 11080227
    Abstract: The technology disclosed partitions a dataflow graph of a high-level program into memory allocations and execution fragments. The memory allocations represent creation of logical memory spaces in on-processor and/or off-processor memories for data required to implement the dataflow graph. The execution fragments represent operations on the data. The technology disclosed designates the memory allocations to virtual memory units and the execution fragments to virtual compute units. The technology disclosed partitions the execution fragments into memory fragments and compute fragments, and assigns the memory fragments to the virtual memory units and the compute fragments to the virtual compute units. The technology disclosed then allocates the virtual memory units to physical memory units and the virtual compute units to physical compute units.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: August 3, 2021
    Assignee: SambaNova Systems, Inc.
    Inventors: David Alan Koeplinger, Raghu Prabhakar, Sumti Jairath
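The abstract describes a two-stage mapping: execution fragments are split into memory and compute fragments, bound to virtual memory and compute units, and those virtual units are then allocated onto physical units. A condensed sketch of that bookkeeping follows; the toy graph, fragment names, and round-robin physical allocation are assumptions for illustration rather than SambaNova's compiler flow.

```python
# Condensed sketch of the mapping flow in patent 11080227: memory and compute
# fragments are bound first to virtual units, then to physical units.
from itertools import cycle

execution_fragments = {
    "ef0": {"memory": ["load_A"], "compute": ["matmul"]},
    "ef1": {"memory": ["load_B", "store_C"], "compute": ["relu"]},
}

# Step 1: assign memory fragments to virtual memory units (VMUs) and compute
# fragments to virtual compute units (VCUs).
virtual_map = {}
for name, frag in execution_fragments.items():
    virtual_map[name] = {
        "vmu": [f"vmu_{op}" for op in frag["memory"]],
        "vcu": [f"vcu_{op}" for op in frag["compute"]],
    }

# Step 2: allocate virtual units onto a fixed pool of physical units.
physical_memory_units = cycle(["pmu0", "pmu1", "pmu2"])
physical_compute_units = cycle(["pcu0", "pcu1"])

placement = {}
for name, units in virtual_map.items():
    placement[name] = {
        "memory": {v: next(physical_memory_units) for v in units["vmu"]},
        "compute": {v: next(physical_compute_units) for v in units["vcu"]},
    }

for name, mapping in placement.items():
    print(name, mapping)
```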
  • Patent number: 10990410
    Abstract: Systems and methods for virtually partitioning an integrated circuit may include identifying dimensional attributes of a target input dataset and selecting a data partitioning scheme from a plurality of distinct data partitioning schemes for the target input dataset based on the dimensional attributes of the target dataset and architectural attributes of an integrated circuit. The method may include disintegrating the target dataset into a plurality of distinct subsets of data based on the selected data partitioning scheme and identifying a virtual processing core partitioning scheme from a plurality of distinct processing core partitioning schemes for an architecture of the integrated circuit based on the disintegration of the target input dataset.
    Type: Grant
    Filed: May 1, 2020
    Date of Patent: April 27, 2021
    Assignee: quadric.io, Inc.
    Inventors: Nigel Drego, Aman Sikka, Mrinalini Ravichandran, Robert Daniel Firu, Veerbhan Kheterpal
  • Patent number: 10963263
    Abstract: An apparatus of an aspect includes a plurality of cores and shared core extension logic coupled with each of the plurality of cores. The shared core extension logic has shared data processing logic that is shared by each of the plurality of cores. Instruction execution logic, for each of the cores, in response to a shared core extension call instruction, is to call the shared core extension logic. The call is to have data processing performed by the shared data processing logic on behalf of a corresponding core. Other apparatus, methods, and systems are also disclosed.
    Type: Grant
    Filed: August 8, 2018
    Date of Patent: March 30, 2021
    Assignee: Intel Corporation
    Inventors: Eran Shifer, Mostafa Hagog, Eliyahu Turiel
  • Patent number: 10884952
    Abstract: Enforcing memory operand types using protection keys is generally described herein. A processor system to provide sandbox execution support for protection key rights attacks includes a processor core to execute a task associated with an untrusted application and execute the task using a designated page of a memory; and a memory management unit to designate the page of the memory to support execution of the untrusted application.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: January 5, 2021
    Assignee: Intel Corporation
    Inventors: Michael Lemay, David A Koufaty, Ravi Sahita
  • Patent number: 10783003
    Abstract: Embodiments of the present disclosure relate to a method, a device and a computer readable medium for managing a dedicated processing resource. According to the embodiments of the present disclosure, a server receives a request of a first application from a client, and based on an index of a resource subset as comprised in the request, determines a dedicated processing resource corresponding to the resource subset for processing the first application request. According to the embodiments of the present disclosure, the dedicated processing resource is divided into a plurality of resource subsets, so that the utilization efficiency of the dedicated processing resource is improved.
    Type: Grant
    Filed: October 29, 2018
    Date of Patent: September 22, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Junping Zhao, Kun Wang, Fan Guo
  • Patent number: 10769097
    Abstract: An autonomous memory device in a distributed memory sub-system can receive a database downloaded from a host controller. The autonomous memory device can pass configuration routing information and initiate instructions to disperse portions of the database to neighboring die using an interface that handles inter-die communication. Information is then extracted from the pool of autonomous memory and passed through a host interface to the host controller.
    Type: Grant
    Filed: October 2, 2017
    Date of Patent: September 8, 2020
    Assignee: Micron Technology, Inc.
    Inventors: Sean Eilert, Mark Leinwander, Jared Hulbert
  • Patent number: 10713558
    Abstract: In one embodiment, a method comprises determining that a membrane potential of a first neuron of a first neuron core exceeds a threshold; determining a first plurality of synapse cores that each store at least one synapse weight associated with the first neuron; and sending a spike message to the determined first plurality of synapse cores.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: July 14, 2020
    Assignee: Intel Corporation
    Inventors: Huseyin Ekin Sumbul, Gregory K. Chen, Raghavan Kumar, Phil Knag, Ram K. Krishnamurthy
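The spike-routing step is straightforward to model: when a neuron's membrane potential exceeds its threshold, a spike message goes to every synapse core that stores a weight associated with that neuron. A toy Python rendition, with the data structures and the flat message list standing in (as assumptions) for the on-chip network:

```python
# Toy sketch of the spike routing in patent 10713558: a neuron that crosses
# its threshold sends a spike message to every synapse core holding one of
# its weights. The threshold and core map are illustrative assumptions.

THRESHOLD = 1.0  # assumed firing threshold

# synapse core id -> set of neuron ids whose weights it stores
synapse_cores = {
    "syn0": {0, 1},
    "syn1": {1, 2},
    "syn2": {0, 2},
}

def fire_if_needed(neuron_id: int, membrane_potential: float) -> list[str]:
    """Return the spike messages generated by this neuron, if any."""
    if membrane_potential <= THRESHOLD:
        return []
    targets = [core for core, neurons in synapse_cores.items()
               if neuron_id in neurons]
    return [f"spike(neuron={neuron_id}) -> {core}" for core in targets]

print(fire_if_needed(0, 1.3))   # exceeds threshold: spikes to syn0 and syn2
print(fire_if_needed(2, 0.4))   # below threshold: no messages
```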
  • Patent number: 10684865
    Abstract: The present application is directed to access isolation for multi-operating system devices. In general, a device may be configured using firmware to accommodate more than one operating system (OS) operating concurrently on the device or to transition from one OS to another. An access isolation module (AIM) in the firmware may determine a device equipment configuration and may partition the equipment for use by multiple operating systems. The AIM may disable OS-based equipment sensing and may allocate at least a portion of the equipment to each OS using customized tables. When transitioning between operating systems, the AIM may help to ensure that information from one OS is not accessible to others. For example, the AIM may detect when a foreground OS is to be replaced by a background OS, and may protect (e.g., lockout or encrypt) the files of the foreground OS prior to the background OS becoming active.
    Type: Grant
    Filed: May 13, 2019
    Date of Patent: June 16, 2020
    Assignee: Intel Corporation
    Inventors: Kevin Y. Li, Vincent J. Zimmer, Xiaohu Zhou, Ping Wu, Zijian You, Michael A. Rothman
  • Patent number: 10623383
    Abstract: Disclosed aspects relate to symmetric multiprocessing (SMP) management. A first SMP topology may be identified by a service processor firmware. The first SMP topology may indicate a first set of connection paths for a plurality of processor chips of a multi-node server. A second SMP topology may be identified by the service processor firmware. The second SMP topology may indicate a second set of connection paths for the plurality of processor chips of the multi-node server. The second SMP topology may differ from the first SMP topology. An error event related to the first SMP topology may be detected. A set of traffic may be routed using the second SMP topology. The set of traffic may be routed by the service processor firmware in response to detecting the error event related to the first SMP topology.
    Type: Grant
    Filed: June 27, 2018
    Date of Patent: April 14, 2020
    Assignee: International Business Machines Corporation
    Inventors: Deepak Kodihalli, Venkatesh Sainath, Dhruvaraj Subhashchandran
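The failover logic amounts to keeping two routing tables, one per SMP topology, and switching which table is consulted once an error event is detected on the first. A schematic sketch with invented topology tables and link names, not IBM's firmware:

```python
# Schematic sketch of the failover in patent 10623383: two SMP topologies
# (two sets of connection paths between processor chips), with routing moved
# to the second topology after an error event on the first.

topologies = {
    "primary":   {("chip0", "chip1"): "linkA", ("chip1", "chip2"): "linkB"},
    "secondary": {("chip0", "chip1"): "linkC", ("chip1", "chip2"): "linkD"},
}

active_topology = "primary"

def report_error_event() -> None:
    """Switch routing to the secondary topology after an error on the primary."""
    global active_topology
    active_topology = "secondary"

def route(src: str, dst: str) -> str:
    return topologies[active_topology][(src, dst)]

print(route("chip0", "chip1"))  # linkA via the primary topology
report_error_event()
print(route("chip0", "chip1"))  # linkC via the secondary topology
```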
  • Patent number: 10567494
    Abstract: A data processing system, a computing node, and a data processing method are provided. The data processing system includes a management node and a first class of computing nodes. The management node is configured to allocate first processing tasks to the first class of computing nodes. At least two computing nodes in the first class of computing nodes concurrently perform the first processing tasks allocated by the management node. A computing node performs a combine2 operation and a reduce2 operation on a data block Mx and a data block V1x, to obtain a first intermediate result. Then, the management node obtains a processing result for a to-be-processed dataset according to first intermediate results obtained by the first class of computing nodes. According to the data processing system, when a combine operation and a reduce operation are being performed on data blocks, memory space occupied by computation can be reduced.
    Type: Grant
    Filed: August 3, 2017
    Date of Patent: February 18, 2020
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Guowei Huang, Youliang Yan, Wangbin Zhu
  • Patent number: 10563497
    Abstract: A system including a plurality of subsystems having a controller coupled with a sensor, an actuator, and a processor; and a resource allocation processor coupled with each of the plurality of subsystems, the resource allocation processor comprising a memory storing instructions which cause the resource allocation processor to determine a drilling system model, receive the measurement from each of the plurality of subsystems, generate a subsystem interaction model based at least in part on the dynamic subsystem model of each of the plurality of subsystems, run a risk evaluation based at least in part on the subsystem interaction model and a risk threshold, generate a resource allocation model, and transmit the resource allocation model to each of the plurality of subsystems; and wherein the subsystem controller in each of the plurality of subsystems activates the actuator to adjust a subsystem parameter to meet the resource allocation model.
    Type: Grant
    Filed: February 18, 2016
    Date of Patent: February 18, 2020
    Assignee: HALLIBURTON ENERGY SERVICES
    Inventors: Jason D. Dykstra, Yuzhen Xue
  • Patent number: 10515046
    Abstract: Systems, methods, and apparatuses relating to a configurable spatial accelerator are described.
    Type: Grant
    Filed: July 1, 2017
    Date of Patent: December 24, 2019
    Assignee: Intel Corporation
    Inventors: Kermin Fleming, Kent D. Glossop, Simon C. Steely, Jr.
  • Patent number: 10503551
    Abstract: An information handling system may include a field-programmable gate array (FPGA), and a hypervisor to manage virtual machines. The hypervisor may host a first FPGA service manager that loads instances of binary images for FPGA services into respective regions of the FPGA for the benefit of software applications. The virtual machine may host a second FPGA service manager that receives a request for an FPGA service from a software application running in the virtual machine, and sends a query to the first FPGA service manager to determine whether a binary image for the FPGA service exists on the FPGA. The first FPGA service manager may receive the query and, if a binary image instance for the FPGA service exists on the FPGA, may provide information to the second FPGA service manager to facilitate the use of the FPGA service by the software application running in the virtual machine.
    Type: Grant
    Filed: June 7, 2017
    Date of Patent: December 10, 2019
    Assignee: Dell Products L.P.
    Inventors: Shawn Joel Dube, Andrew Butcher
  • Patent number: 10425324
    Abstract: A device and method for providing balanced routing paths in a computational grid including determining a type of topology of the computational grid having a plurality of levels, wherein each level includes a plurality of switches, determining whether the type of topology of the computational grid is a fat-tree, determining whether the fat-tree is odd, determining whether the fat-tree is a regular fat-tree, computing a first set of routing paths for the computational grid based on the determining of whether the fat-tree is odd and is a regular fat-tree, computing a second set of routing paths for the computational grid using a topology agnostic routing technique, and configuring forwarding tables in said switches with the first set of computed routing paths when the topology is determined to be a fat-tree and with the second set of computed routing paths when the topology is determined to not be a fat-tree.
    Type: Grant
    Filed: August 17, 2017
    Date of Patent: September 24, 2019
    Assignee: Fabriscale Technologies AS
    Inventors: Jesus Camacho Villanueva, Tor Skeie, Sven-Arne Reinemo
  • Patent number: 10379828
    Abstract: A computer is configured to generate a parallel program for a multi-core microcomputer from a single program for a single-core microcomputer, based on a dependency analysis of a bundle of unit processes in the single program. The computer obtains dependency information that enables dependency determination of dependency un-analyzable unit processes. Further, the computer performs a dependency analysis of dependency analyzable unit processes. Then, the computer assigns the dependency un-analyzable unit processes and the dependency analyzable unit processes respectively to multiple cores of the multi-core microcomputer, while fulfilling dependency among those processes, based on the obtained dependency information of the dependency un-analyzable unit processes and an analysis result of the dependency analyzable unit processes.
    Type: Grant
    Filed: June 9, 2017
    Date of Patent: August 13, 2019
    Assignee: DENSO CORPORATION
    Inventors: Kenichi Mineda, Takayuki Nagai, Yu Nakagawa
  • Patent number: 10318297
    Abstract: A self-timed parallelized multi-core processor has an instruction decoder unit for receiving a program code instruction, determining an operating code and latency for the instruction, and assigning a loop index to the instruction. An instruction decomposer creates a primitive by decomposing the instruction, replacing the loop index with a core index, and broadcasting the primitive. Self-timed processing cores each having a unique core index compare the core index to their unique processing core index. The processing cores act on the primitive when their processing core index is within a threshold of the core index.
    Type: Grant
    Filed: January 30, 2015
    Date of Patent: June 11, 2019
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Yiqun Ge, Wuxian Shi, Lan Hu
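The broadcast-and-filter step is easy to model: the decomposed primitive carries a core index, and each self-timed core acts only when the difference between that index and its own index is within a threshold. A minimal sketch follows; the threshold value, core count, and return format are assumptions for illustration.

```python
# Minimal sketch of the core-selection rule in patent 10318297: a primitive is
# broadcast with a core index, and a core acts on it when its own index is
# within a threshold of that index.

THRESHOLD = 2          # assumed acceptance window
NUM_CORES = 8          # assumed number of self-timed cores

def broadcast_primitive(core_index: int, opcode: str) -> list[str]:
    """Return which cores accept the primitive (stand-in for parallel hardware)."""
    accepted = []
    for my_index in range(NUM_CORES):
        if abs(my_index - core_index) <= THRESHOLD:
            accepted.append(f"core{my_index} executes {opcode}")
    return accepted

# The decomposer replaced a loop index with core index 4 for this primitive.
print(broadcast_primitive(4, "add"))  # cores 2..6 act on it
```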
  • Patent number: 10229043
    Abstract: Methods of requesting memory spaces and resources using a memory controller are provided. A particular method may include communicating, by a memory controller, a request to a computer program for a resource, and using the resource in response to an indication from the computer program that the resource is available. Another particular method may include communicating a request to a memory controller for at least one of a memory space of a memory or a second resource. The memory controller may be configured to communicate the request from the first resource to a computer program. Another particular method may also include using, by the first resource, at least one of the memory space or the second resource in response to an indication that the memory space or the second resource is available.
    Type: Grant
    Filed: July 23, 2013
    Date of Patent: March 12, 2019
    Assignee: International Business Machines Corporation
    Inventors: Edgar R. Cordero, Varkey K. Varghese, Diyanesh B. Vidyapoornachary
  • Patent number: 10203960
    Abstract: A reconfigurable processor and a conditional execution method for the same are provided. The reconfigurable processor includes: a routing unit, configured to assign a conditional judgment statement and a conditional execution statement to process the conditional judgment statement and the conditional execution statement in parallel; a first arithmetic logic unit, configured to process the conditional judgment statement according to an assignment of the routing unit to obtain a single-bit signal; a second arithmetic logic unit, configured to: process the conditional execution statement according to the assignment of the routing unit to obtain a conditional execution result; receive the single-bit signal; and control an output of the conditional execution result according to the single-bit signal.
    Type: Grant
    Filed: February 20, 2014
    Date of Patent: February 12, 2019
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Leibo Liu, Jianfeng Zhu, Xiao Yang, Shouyi Yin, Shaojun Wei
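This is predicated execution: one arithmetic logic unit reduces the conditional judgment to a single-bit signal while the other computes the conditional body in parallel, and the bit merely gates whether the result is emitted. A software analogue, with invented function names and a None sentinel standing in for a suppressed output:

```python
# Software analogue of the conditional execution in patent 10203960: the
# judgment produces a single-bit signal that only gates the output of a
# result computed in parallel by a second unit.

def judge(a: int, b: int) -> int:
    """First ALU: reduce the conditional judgment to a single bit."""
    return 1 if a > b else 0

def execute(a: int, b: int) -> int:
    """Second ALU: compute the conditional body unconditionally."""
    return a - b

def conditional_output(a: int, b: int):
    bit = judge(a, b)          # in hardware these two run in parallel;
    result = execute(a, b)     # here they are simply evaluated back to back
    return result if bit else None   # the single-bit signal gates the output

print(conditional_output(7, 3))   # 4 (condition holds, result released)
print(conditional_output(3, 7))   # None (condition fails, result suppressed)
```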
  • Patent number: 10185567
    Abstract: A method for translating instructions for a processor. The method includes accessing a guest instruction and performing a first level translation of the guest instruction using a first level conversion table. The method further includes outputting a resulting native instruction when the first level translation proceeds to completion. A second level translation of the guest instruction is performed using a second level conversion table when the first level translation does not proceed to completion, wherein the second level translation further processes the guest instruction based upon a partial translation from the first level conversion table. The resulting native instruction is output when the second level translation proceeds to completion.
    Type: Grant
    Filed: December 7, 2015
    Date of Patent: January 22, 2019
    Assignee: Intel Corporation
    Inventor: Mohammad Abdallah
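The two-level scheme falls back to a second conversion table only when the first-level lookup yields a partial translation, and the second level continues from that partial result. A toy model with invented table contents and a "PARTIAL:" marker, purely for illustration:

```python
# Toy model of the two-level translation in patent 10185567: a guest
# instruction is looked up in a first-level table; if that yields only a
# partial translation, a second-level table completes it.

first_level = {
    "guest_add": "native_add",            # completes at the first level
    "guest_push": "PARTIAL:stack_op",     # needs the second level
}

second_level = {
    "PARTIAL:stack_op": "native_decrement_sp; native_store",
}

def translate(guest_instruction: str) -> str:
    result = first_level[guest_instruction]
    if result.startswith("PARTIAL:"):
        # Second-level translation continues from the partial translation.
        result = second_level[result]
    return result

print(translate("guest_add"))   # native_add
print(translate("guest_push"))  # native_decrement_sp; native_store
```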
  • Patent number: 10176021
    Abstract: Actual capacity usage limits for one or more logical partitions or groups of logical partitions are managed based on hardware-specific determinations of actual capacity usage.
    Type: Grant
    Filed: November 23, 2015
    Date of Patent: January 8, 2019
    Assignee: CA, Inc.
    Inventors: Johannes Gerardus Jozef Peeters, Friedhelm Herbert Stoehler, Horst Walter Doehler
  • Patent number: 10140156
    Abstract: Automated techniques are disclosed for minimizing communication between nodes in a system comprising multiple nodes for executing requests in which a request type is associated with a particular node. For example, a technique comprises the following steps. Information is maintained about frequencies of compound requests received and individual requests comprising the compound requests. For a plurality of request types which frequently occur in a compound request, the plurality of request types is associated to a same node. As another example, a technique for minimizing communication between nodes, in a system comprising multiple nodes for executing a plurality of applications, comprises the steps of maintaining information about an amount of communication between said applications, and using said information to place said applications on said nodes to minimize communication among said nodes.
    Type: Grant
    Filed: January 8, 2014
    Date of Patent: November 27, 2018
    Assignee: International Business Machines Corporation
    Inventors: Paul M. Dantzig, Arun Kwangil Iyengar, Francis Nicholas Parr, Gong Su
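The placement heuristic tracks how often request types appear together in compound requests and pins frequently co-occurring types to the same node, so a compound request touches as few nodes as possible. A simplified sketch follows; the counting threshold and the greedy grouping are assumptions, not the patented policy.

```python
# Simplified sketch of the co-location heuristic in patent 10140156: count
# co-occurrences of request types in compound requests and assign frequently
# co-occurring types to the same node.
from collections import Counter
from itertools import combinations

compound_requests = [
    ("debit", "credit"), ("debit", "credit"), ("debit", "audit"),
    ("quote", "trade"), ("quote", "trade"), ("quote", "trade"),
]

co_occurrence = Counter()
for request in compound_requests:
    for a, b in combinations(sorted(set(request)), 2):
        co_occurrence[(a, b)] += 1

FREQUENT = 2          # assumed threshold for "frequently occur together"
node_of = {}          # request type -> node id
next_node = 0

for (a, b), count in co_occurrence.most_common():
    if count < FREQUENT:
        break
    if a not in node_of and b not in node_of:
        node_of[a] = node_of[b] = next_node
        next_node += 1
    elif a in node_of:
        node_of.setdefault(b, node_of[a])
    else:
        node_of.setdefault(a, node_of[b])

print(node_of)  # {'quote': 0, 'trade': 0, 'credit': 1, 'debit': 1}
```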
  • Patent number: 10133504
    Abstract: A system and method of partitioning host processing system resources is provided. An integrated circuit device having a plurality of processors or processing cores and a number of interfaces is partitioned at boot into different hardware partitions based on the application needs of the host processing system. The technology provides a non-transitory memory storage including instructions; and a plurality of processors in communication with the memory. The integrated circuit device also includes a plurality of communication interfaces in communication with the processors. At least one of the plurality of processors executes instructions to configure a subset of the plurality of processors to a first hardware partition, and configure a different subset of the plurality of processors and at least one of the plurality of communication interfaces to a second hardware partition.
    Type: Grant
    Filed: April 6, 2016
    Date of Patent: November 20, 2018
    Assignee: FUTUREWEI TECHNOLOGIES, INC.
    Inventors: Weimin Pan, Kangkang Shen
  • Patent number: 10069756
    Abstract: Techniques are disclosed for integration, provisioning and management of entities and processes in a computing system such as, by way of example only, business entities and business processes. In particular, techniques are disclosed for implementing an extensible support system for multiple service offerings. For example, such a support system can be a business support system which may be employed in conjunction with a cloud computing environment.
    Type: Grant
    Filed: February 22, 2017
    Date of Patent: September 4, 2018
    Assignee: International Business Machines Corporation
    Inventors: Yu Deng, Murthy V. Devarakonda, Michael Reuben Head, Rafah A. Hosn, Andrzej Kochut, Jonathan Paul Munson, Hidayatullah Habeebullah Shaikh
  • Patent number: 10025638
    Abstract: The present application is directed to a multiple-cloud-computing-facility aggregation that provides multi-cloud aggregation and that includes a cloud-connector server and cloud-connector nodes that cooperate to provide services that are distributed across multiple clouds. These services include the transfer of virtual-machine containers, or workloads, between two different clouds and remote management interfaces.
    Type: Grant
    Filed: July 2, 2012
    Date of Patent: July 17, 2018
    Assignee: VMware, Inc.
    Inventor: Jagannath N. Raghu
  • Patent number: 9946665
    Abstract: Fetch Less Instruction Processing (FLIP) Computer Architecture for Central Processing Units (CPU). This embodiment relates to computing systems, and more particularly to central processing units in computing systems. The principal object of this embodiment is to provide a Fetch Less Instruction Processing (FLIP) computer architecture using FLIP elements as building blocks for computer program processing. Another object of the embodiment is to use a protocol to interconnect FLIP elements, which allows the current operating systems, program execution models, compilers, libraries and so on to be easily transitioned to the FLIP computer architecture with minimal changes.
    Type: Grant
    Filed: May 14, 2012
    Date of Patent: April 17, 2018
    Assignee: MELANGE SYSTEMS PRIVATE LIMITED
    Inventor: Narain Venkata Surendra Attili
  • Patent number: 9946544
    Abstract: Instructions and logic provide SIMD permute controls with leading zero count functionality. Some embodiments include processors with a register with a plurality of data fields, each of the data fields to store a second plurality of bits. A destination register has corresponding data fields, each of these data fields to store a count of the number of most significant contiguous bits set to zero for corresponding data fields. Responsive to decoding a vector leading zero count instruction, execution units count the number of most significant contiguous bits set to zero for each of data fields in the register, and store the counts in corresponding data fields of the first destination register. Vector leading zero count instructions can be used to generate permute controls and completion masks to be used along with the set of permute controls, to resolve dependencies in gather-modify-scatter SIMD operations.
    Type: Grant
    Filed: October 23, 2017
    Date of Patent: April 17, 2018
    Assignee: Intel Corporation
    Inventors: Christopher J. Hughes, Mikhail Plotnikov, Andrey Naraikin, Robert Valentine
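Per lane, the operation is just a leading-zero count over each packed data field. A scalar Python rendition, assuming 32-bit lanes; the real instruction performs this for every field of a vector register at once.

```python
# Scalar sketch of the per-lane operation behind patent 9946544: count the
# most-significant contiguous zero bits of each packed data field.

LANE_BITS = 32  # assumed data field width

def leading_zero_count(value: int, bits: int = LANE_BITS) -> int:
    """Number of most-significant contiguous zero bits in a 'bits'-wide field."""
    for position in range(bits - 1, -1, -1):
        if value & (1 << position):
            return bits - 1 - position
    return bits  # the field is all zeros

vector = [0x00000001, 0x80000000, 0x00010000, 0x00000000]
print([leading_zero_count(lane) for lane in vector])  # [31, 0, 15, 32]
```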
  • Patent number: 9921880
    Abstract: A system and method for facilitating the allocation of computing resources to workloads and facilitating workload performance isolation. An example method includes determining one or more workloads to be allocated a set of computing resources in the computing environment, the one or more workloads characterized by metadata describing one or more workload properties; and using the one or more workload properties to calculate a binding between each of the one or more workloads and one or more corresponding portions of the computing resources. Plural competing workloads may be isolated by binding each workload to a disjunct set of Central Processing Units (CPUs) that share as few common hardware resources as possible given the topology of the computing resources. Resource allocation adjustments need not require any reconfiguration of the system or adjustment to already provisioned workloads.
    Type: Grant
    Filed: January 28, 2016
    Date of Patent: March 20, 2018
    Assignee: Oracle International Corporation
    Inventors: Nicolas Michael, Chen Wang, Jonathan Chew
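The isolation goal can be approximated by handing each workload whole cores, so that competing workloads land on disjoint CPU sets that share as little hardware as possible. The sketch below uses an assumed two-socket topology (two cores per socket, two hardware threads per core) and a simple greedy assignment; none of it reflects Oracle's actual binding algorithm.

```python
# Simplified sketch of the isolation idea in patent 9921880: bind competing
# workloads to disjoint CPU sets, keeping sibling threads of a core together.

# cpu id -> (socket, core) for a small two-socket machine
topology = {
    0: (0, 0), 1: (0, 0), 2: (0, 1), 3: (0, 1),
    4: (1, 2), 5: (1, 2), 6: (1, 3), 7: (1, 3),
}

def bind_workloads(workloads: list[str], cpus_each: int) -> dict[str, list[int]]:
    """Greedily give each workload whole cores (disjoint CPU sets)."""
    # Sort CPUs so siblings of the same core are adjacent.
    free = sorted(topology, key=lambda cpu: topology[cpu])
    bindings = {}
    for workload in workloads:
        bindings[workload] = free[:cpus_each]
        free = free[cpus_each:]
    return bindings

print(bind_workloads(["db", "web"], cpus_each=4))
# {'db': [0, 1, 2, 3], 'web': [4, 5, 6, 7]}  -> no shared cores or sockets
```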
  • Patent number: 9886307
    Abstract: Methods, systems, and computer program products for cross-platform scheduling with fairness and platform-specific optimization are provided herein. A method includes determining dimensions of a set of containers in which multiple tasks associated with a request are to be executed; assigning each of the containers to a processing node on one of multiple platforms based on the dimensions of the given container, and to a platform owner selected from the multiple platforms based on a comparison of resource requirements of each of the multiple platforms and the dimensions of the given container; and generating container assignments across the set of containers by incorporating the assigned node of each container in the set of containers, the assigned platform owner of each container in the set of containers, one or more scheduling requirements of each of the platforms, one or more utilization objectives, and enforcing a sharing guarantee of each of the platforms.
    Type: Grant
    Filed: July 1, 2015
    Date of Patent: February 6, 2018
    Assignee: International Business Machines Corporation
    Inventors: Kirsten W. Hildrum, Zubair Nabi, Viswanath Nagarajan, Robert Saccone, Kanthi K. Sarpatwar, Rohit Wagle, Joel Leonard Wolf
  • Patent number: 9774651
    Abstract: A method and an apparatus for rapid data distribution, the method includes: sending, by a central processing unit, data description information to a rapid forwarding module, where the data description information includes an address and length information of data requested by a user; reading, by the rapid forwarding module according to the data description information, the data requested by the user and forwarding the data requested by the user to a network interface controller; and sending, by the network interface controller, the data requested by the user to the user. By using the method provided in the present invention, after services are increased, only the network interface controller and a storage device need to be added, and cost for the memory and the central processing unit does not need to be increased.
    Type: Grant
    Filed: April 14, 2014
    Date of Patent: September 26, 2017
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Fan Fang, Keping Chen
  • Patent number: 9734071
    Abstract: A method and apparatus for snooping caches is disclosed. In one embodiment, a system includes a number of processing nodes and a cache shared by each of the processing nodes. The cache is partitioned such that each of the processing nodes utilizes only one assigned partition. If a query by a processing node to its assigned partition of the cache results in a miss, a cache controller may determine whether to snoop other partitions in search of the requested information. The determination may be made based on history of where requested information was obtained from responsive to previous misses in that partition.
    Type: Grant
    Filed: May 15, 2015
    Date of Patent: August 15, 2017
    Assignee: Oracle International Corporation
    Inventors: Serena Leung, Ramaswamy Sivaramakrishnan, Joann Lam, David Smentek
  • Patent number: 9703721
    Abstract: Embodiments are directed to a method of accessing a data frame, wherein a first portion of the data frame is in a first memory block, and wherein a second portion of the data frame is in a second memory block. The method includes determining that an access of the data frame crosses a boundary between the first and second memory blocks, determining that an attempted translation of an address of the first portion of the data frame in the first memory block did not result in a translation fault, and accessing the first portion of the data frame. The method further includes, based at least in part on a determination that an attempted translation of an address of the second portion of the data frame in the second memory block resulted in a translation fault, accessing at least one default character as a replacement for accessing the second portion of the data frame.
    Type: Grant
    Filed: December 29, 2014
    Date of Patent: July 11, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael Gschwind, Brett Olsson
  • Patent number: 9690509
    Abstract: Embodiments are directed to a computer implemented method of accessing a data frame, wherein a first portion of the data frame is in a first memory block, and wherein a second portion of the data frame is in a second memory block. The method includes initiating, by a processor, an access of the data frame. The method further includes accessing, by the processor, the first portion of the data frame. The method further includes, based at least in part on a determination that the processor does not have access to the second memory block, accessing at least one default character as a replacement for accessing the second portion of the data frame.
    Type: Grant
    Filed: August 10, 2015
    Date of Patent: June 27, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael Gschwind, Brett Olsson, Raul E. Silvera
  • Patent number: 9684517
    Abstract: A multi-core processor system includes a first resource, a first core, a second resource, and a second core. The first core runs a first operating system (OS), and the first resource is allocated to the first OS. The second core runs a second OS, and the second resource is exclusively allocated to the second OS. The first OS and the second OS are designed for running at the same time, and the second OS is configured for monitoring or debugging the first resource, the first core, or the first OS.
    Type: Grant
    Filed: October 29, 2014
    Date of Patent: June 20, 2017
    Assignee: Lenovo Enterprise Solutions (Singapore) PTE. LTD.
    Inventors: Alpus P C Chen, Chun-Wei Chen, Elysee Y H Hsieh, Kelvin Shieh
  • Patent number: 9678886
    Abstract: Embodiments are directed to a method of accessing a data frame, wherein a first portion of the data frame is in a first memory block, and wherein a second portion of the data frame is in a second memory block. The method includes determining that an access of the data frame crosses a boundary between the first and second memory blocks, determining that an attempted translation of an address of the first portion of the data frame in the first memory block did not result in a translation fault, and accessing the first portion of the data frame. The method further includes, based at least in part on a determination that an attempted translation of an address of the second portion of the data frame in the second memory block resulted in a translation fault, accessing at least one default character as a replacement for accessing the second portion of the data frame.
    Type: Grant
    Filed: August 19, 2015
    Date of Patent: June 13, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael Gschwind, Brett Olsson
  • Patent number: 9626207
    Abstract: A computer implemented method of managing an adapter includes determining that an adapter is assigned to an operating system and generating a single root input/output virtualization (SR-IOV) function associated with the adapter. The SR-IOV function may be correlated to a non-SR-IOV function, and the non-SR-IOV function may be used to modify an operational status of the adapter.
    Type: Grant
    Filed: December 16, 2011
    Date of Patent: April 18, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Charles S. Graham, Gregory M. Nordstrom, John R. Oberly, III
  • Patent number: 9582287
    Abstract: An apparatus of an aspect includes a plurality of cores and shared core extension logic coupled with each of the plurality of cores. The shared core extension logic has shared data processing logic that is shared by each of the plurality of cores. Instruction execution logic, for each of the cores, in response to a shared core extension call instruction, is to call the shared core extension logic. The call is to have data processing performed by the shared data processing logic on behalf of a corresponding core. Other apparatus, methods, and systems are also disclosed.
    Type: Grant
    Filed: September 27, 2012
    Date of Patent: February 28, 2017
    Assignee: Intel Corporation
    Inventors: Eran Shifer, Mostafa Hagog, Eliyahu Turiel
  • Patent number: 9569127
    Abstract: Embodiments are directed to a method of accessing a data frame. The method includes, based at least in part on a determination that the data frame spans first and second memory blocks, and further based at least in part on a determination that the processor has access to the first and second memory blocks, accessing the data frame. The method includes, based at least in part on a determination that the data frame spans the first and second memory blocks, and based at least in part on a determination that the processor has access to the first memory block but does not have access to the second memory block, accessing a first portion of the data frame that is in the first memory block, and accessing at least one default character as a replacement for accessing a second portion of the data frame that is in the second memory block.
    Type: Grant
    Filed: December 29, 2014
    Date of Patent: February 14, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael Gschwind, Brett Olsson, Raul E. Silvera
  • Patent number: 9563432
    Abstract: Various embodiments relating to executing different types of instruction code in a micro-processing system are provided.
    Type: Grant
    Filed: April 19, 2013
    Date of Patent: February 7, 2017
    Assignee: Nvidia Corporation
    Inventors: Ross Segelken, Darrell D. Boggs, Shiaoli Mendyke
  • Patent number: 9558003
    Abstract: A reconfigurable processor and an operation method of the reconfigurable processor may include: a status register configured to store a status value used to determine at least one execution mode in a processor; a parallel processing scheduler configured to schedule at least one of a very long instruction word (VLIW) logic and a coarse grained architecture (CGA) logic to be used based on the stored status value; a VLIW register configured to store processed data according to the VLIW logic; and a CGA register configured to store processed data according to the CGA logic.
    Type: Grant
    Filed: November 27, 2013
    Date of Patent: January 31, 2017
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Doo Hyun Kim, Joon Ho Song, Do Hyung Kim, Shi Hwa Lee
  • Patent number: 9544586
    Abstract: A system and method for processing video information. Various aspects of the present invention may provide a decoder module that decodes block encoded video information. The system may, for example, include a first memory module, communicatively coupled to the decoder module, that stores video processing information utilized by the decoder module for decoding a current video block from a current video frame. The system may also, for example, include a second memory module, communicatively coupled to the decoder module, that stores reference video information from a previous video frame utilized by the decoder module for decoding the current video block. In a non-limiting exemplary scenario, the first memory module and the second memory module may be communicatively coupled to the decoder module with independent respective data and/or address buses.
    Type: Grant
    Filed: July 19, 2013
    Date of Patent: January 10, 2017
    Assignee: BROADCOM CORPORATION
    Inventors: Stephen Gordon, Darren Neuman
  • Patent number: 9544402
    Abstract: A multi-rule approach for encoding rules grouped in a rule chunk is provided. The approach includes a multi-rule with a multi-rule header representing headers of the rules and, in some cases, dimensional data representing dimensional data of the rules. The approach further includes disabling dimension matching of always matching dimensions, responding to an always match rule with a match response without matching, interleaving minimum/maximum values in a range field, interleaving value/mask values in a mask field, and for a given rule of rule chunk, encoding a priority field at the end of dimension data stored for the rule in the multi-rule. Advantageously, this approach provides efficient storage of rules and enables the efficient comparison of rules to keys.
    Type: Grant
    Filed: December 31, 2013
    Date of Patent: January 10, 2017
    Assignee: CAVIUM, INC.
    Inventors: Frank Worrell, Rajan Goyal, Satyanarayana Lakshmipathi Billa
  • Patent number: 9483503
    Abstract: A method and system for placing databases. The method includes: receiving a request to create a new database; determining whether there is a need to migrate a current database among current virtual machines based on resource demand and free resources in the current virtual machines; determining a database placement plan based on the resource demand, migration strategy and migration cost associated with the migration strategy in response to whether there is a need to migrate the database; and executing the database placement plan. The invention can help a database service provider optimize database layout in database provisioning through database migration.
    Type: Grant
    Filed: May 24, 2013
    Date of Patent: November 1, 2016
    Assignee: International Business Machines Corporation
    Inventors: Jie Qiu, Berthold Reinwald, Qi Rong Wang, Tao Yu, Lei Zhi