Distributed Processing System Patents (Class 712/28)
  • Patent number: 11100426
    Abstract: Systems and methods are disclosed to implement a distributed matrix decomposition system using gossip. In embodiments, the matrix decomposition system employs a scalable, parallel, and decentralized approach to divide an input matrix into a grid of blocks, and individually decompose the blocks into local decomposed matrices by communicating (gossiping) with a limited set of neighboring blocks. In embodiments, the decomposition may be implemented as an iterative process using Stochastic Gradient Descent, where the decomposed matrices are iteratively updated and kept in approximate agreement for neighboring blocks. The division of the input matrix allows the decomposition operation to be easily parallelized among nodes of a distributed computing system and scaled to suit the size of the input matrix. Moreover, the distributed approach eliminates the need for a central server, which in some systems may represent an operational bottleneck, a single point of failure, or a target for attacks.
    Type: Grant
    Filed: January 9, 2018
    Date of Patent: August 24, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Bamdev Mishra, Mukul Bhutani
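The entry above (US 11,100,426) describes decomposing each grid block locally with Stochastic Gradient Descent while gossiping with neighboring blocks to keep their factors in approximate agreement. Below is a minimal single-process sketch of that idea, assuming a rank-4 factorization R ≈ U·Vᵀ per block and simple averaging across a block-row or block-column in place of the patent's limited neighbor gossip; all function and variable names are illustrative, not taken from the patent.

```python
import numpy as np

def block_sgd_step(R_block, U, V, lr=0.01, reg=0.1):
    """One SGD pass over the entries of one block, modeling R_block ~= U @ V.T."""
    for i, j in zip(*np.nonzero(R_block)):
        err = R_block[i, j] - U[i] @ V[j]
        U[i], V[j] = U[i] + lr * (err * V[j] - reg * U[i]), V[j] + lr * (err * U[i] - reg * V[j])
    return U, V

def gossip(factors, grid_shape):
    """Keep neighboring blocks in approximate agreement: blocks in the same
    block-row average their row factors U; blocks in the same block-column
    average their column factors V."""
    P, Q = grid_shape
    for p in range(P):
        U_mean = np.mean([factors[(p, q)][0] for q in range(Q)], axis=0)
        for q in range(Q):
            factors[(p, q)] = (0.5 * factors[(p, q)][0] + 0.5 * U_mean, factors[(p, q)][1])
    for q in range(Q):
        V_mean = np.mean([factors[(p, q)][1] for p in range(P)], axis=0)
        for p in range(P):
            factors[(p, q)] = (factors[(p, q)][0], 0.5 * factors[(p, q)][1] + 0.5 * V_mean)
    return factors

# Illustrative driver: a 2x2 grid of 50x50 blocks, rank-4 factors, ten rounds.
rng = np.random.default_rng(0)
blocks = {(p, q): rng.random((50, 50)) for p in range(2) for q in range(2)}
factors = {k: (rng.random((50, 4)), rng.random((50, 4))) for k in blocks}
for _ in range(10):
    for k in blocks:                # in the patent, each block runs on its own node
        factors[k] = block_sgd_step(blocks[k], *factors[k])
    factors = gossip(factors, (2, 2))
```

In the patent's setting each block would run on its own node and exchange only the factors it shares with a limited set of neighbors, rather than averaging in shared memory as in this sketch.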
  • Patent number: 11087428
    Abstract: An image processing apparatus is provided for performing image processing on an input image. Each of N processing modules refers to a processing result for a reference pixel different from a processing target pixel and generates a processing result for the processing target pixel. Each of the N processing modules generates a processing result for a first pixel included in a first pixel line and next generates a processing result for a second pixel. The second pixel is included in a second pixel line different from the first pixel line in the processing target region and becomes processable in accordance with the generation of the processing result for the first pixel.
    Type: Grant
    Filed: March 8, 2019
    Date of Patent: August 10, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Shigeo Kodama, Kohei Kishi
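The abstract above (US 11,087,428) describes modules that produce a result for a pixel on one line and then move to a pixel on another line that becomes processable because of that result. The toy sketch below illustrates such dependency-driven ordering under the purely hypothetical rule that pixel (y, x) references pixel (y-1, x+1); the actual reference-pixel relationship is not specified in the abstract.

```python
# A toy wavefront schedule: pixel (y, x) can be processed once its reference
# pixel (y - 1, x + 1) on the previous line has a result (illustrative rule only).
H, W = 4, 6
result = {}

def ready(y, x):
    ref = (y - 1, x + 1)
    return y == 0 or ref[1] >= W or ref in result

pending = [(y, x) for y in range(H) for x in range(W)]
order = []
while pending:
    for p in list(pending):
        if ready(*p):
            result[p] = sum(p)        # stand-in for the real filter output
            order.append(p)
            pending.remove(p)
print(order[:8])   # first-line results unlock second-line pixels within the same pass
```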
  • Patent number: 11074110
    Abstract: A computer-implemented method for scheduling a series of recurring events including: receiving one or more requests to allocate resource(s) to a series of recurring events, wherein the one or more requests specify, for each event, a corresponding desired time period over which the resource(s) are to be allocated, and the one or more requests further specify one or more adjustment criteria for defining, for one or more of the events, one or more permissibly adjusted time periods from the desired time period; obtaining, for each event, resource availability data indicative of an availability of the resource(s) during the desired time period; and, for each event: determining, based on the resource availability data, a viable time period, wherein the viable time period is either the desired time period or a permissibly adjusted time period that satisfies the one or more adjustment criteria; and allocating the resource(s) to the viable time period.
    Type: Grant
    Filed: September 9, 2020
    Date of Patent: July 27, 2021
    Assignee: Hubstar International Limited
    Inventors: Stefanos Vatidis, Denis Mequinion
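US 11,074,110 above allocates each recurring event either its desired time period or a permissibly adjusted one, based on resource availability. Below is a minimal sketch of that decision, assuming availability is expressed as busy (start, end) hours and the adjustment criterion is a maximum shift of two hours; these representations and the viable_period function are illustrative assumptions.

```python
def viable_period(desired, busy, max_shift_hours=2, duration=1):
    """Return the desired start hour if free, else the closest permissibly
    shifted start hour that does not overlap existing bookings, else None."""
    def free(start):
        return all(not (start < b_end and start + duration > b_start)
                   for b_start, b_end in busy)
    if free(desired):
        return desired
    for shift in range(1, max_shift_hours + 1):
        for cand in (desired - shift, desired + shift):
            if free(cand):
                return cand
    return None

# Recurring meeting desired at 10:00 each day; days 2 and 3 have conflicts.
availability = {1: [], 2: [(10, 11)], 3: [(9, 12)]}
schedule = {day: viable_period(10, busy) for day, busy in availability.items()}
print(schedule)   # {1: 10, 2: 9, 3: 8}  (days 2 and 3 shifted earlier)
```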
  • Patent number: 11048656
    Abstract: Representative apparatus, method, and system embodiments are disclosed for configurable computing. A representative system includes an interconnection network; a processor; and a plurality of configurable circuit clusters. Each configurable circuit cluster includes a plurality of configurable circuits arranged in an array; a synchronous network coupled to each configurable circuit of the array; and an asynchronous packet network coupled to each configurable circuit of the array.
    Type: Grant
    Filed: March 31, 2019
    Date of Patent: June 29, 2021
    Assignee: Micron Technology, Inc.
    Inventor: Tony M. Brewer
  • Patent number: 11042412
    Abstract: A memory allocation method and a server are provided, wherein the method includes: identifying, by the server, a node topology table; generating fetch hop tables of the NUMA nodes based on the node topology table; calculating fetch priorities of the NUMA nodes based on the fetch hop tables of the NUMA nodes, using an NC hop count as an important parameter in the fetch priority calculation; and, when a NUMA node applies for memory, allocating memory based on the calculated fetch priorities, such that memory is preferentially allocated from NUMA nodes with higher priorities.
    Type: Grant
    Filed: October 8, 2019
    Date of Patent: June 22, 2021
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Beilei Sun, Shengyu Shen, Jianrong Xu
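The memory allocation entry above (US 11,042,412) computes a fetch priority per NUMA node from hop tables, treating NC (node controller) hops as especially costly, and then allocates from the best-priority node that has free memory. A rough sketch under assumed hop tables and an assumed NC weight; none of these numbers come from the patent.

```python
# Hypothetical per-node hop table for one requesting node: the number of
# direct hops and NC hops needed to reach every node's memory.
hop_table = {
    # target node: (direct_hops, nc_hops)
    "node0": (0, 0),
    "node1": (1, 0),
    "node2": (1, 1),
    "node3": (2, 1),
}
free_pages = {"node0": 0, "node1": 128, "node2": 512, "node3": 1024}

NC_WEIGHT = 10          # an NC hop is treated as far more expensive than a direct hop

def fetch_priority(hops):
    direct, nc = hops
    return direct + NC_WEIGHT * nc          # lower score = higher priority

def allocate(n_pages):
    """Allocate from the best-priority node that still has enough free pages."""
    for node in sorted(hop_table, key=lambda n: fetch_priority(hop_table[n])):
        if free_pages[node] >= n_pages:
            free_pages[node] -= n_pages
            return node
    raise MemoryError("no node has enough free memory")

print(allocate(64))   # node0 is closest but full, so allocation falls to node1
```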
  • Patent number: 10983795
    Abstract: Systems, apparatuses, and methods related to extended memory operations are described. Extended memory operations can include operations specified by a single address and operand and may be performed by a computing device that includes a processing unit and a memory resource. The computing device can perform extended memory operations on data streamed through the computing tile without receipt of intervening commands. In an example, a computing device is configured to receive a command to perform an operation that comprises performing an operation on data with the processing unit of the computing device and determine that an operand corresponding to the operation is stored in the memory resource. The computing device can further perform the operation using the operand stored in the memory resource.
    Type: Grant
    Filed: March 27, 2019
    Date of Patent: April 20, 2021
    Assignee: Micron Technology, Inc.
    Inventors: Richard C. Murphy, Glen E. Hush, Vijay S. Ramesh, Allan Porterfield, Anton Korzh
  • Patent number: 10936303
    Abstract: The disclosed technology is generally directed to updating of applications, firmware and/or other software on IoT devices. In one example of the technology, a request that is associated with a requested update is communicated from a normal world of a first application processor to a secure world of the first application processor. The secure world validates the requested update. Instructions associated with the validated update are communicated from the secure world to the normal world. Image requests are sent from the normal world to a cloud service for image binaries associated with the validated update. The secure world receives the requested image binaries from the cloud service. The secure world writes the received image binaries to memory, and validates the written image binaries.
    Type: Grant
    Filed: September 10, 2019
    Date of Patent: March 2, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Adrian Bonar, Reuben R. Olinsky, Sang Eun Kim, Edmund B. Nightingale, Thales de Carvalho
  • Patent number: 10740331
    Abstract: The present invention relates to an apparatus and method for executing a query, and a system for processing data by using the same. The apparatus for executing a query includes: a processor receiving a query and returning a result value; and a storage storing data on the query. The storage includes: a first storage temporarily storing data required for the execution of the query; and a second storage constructing a DB and storing data. The processor combines a plurality of primitives in the query to configure a composite primitive, generates binary code for the composite primitive at run time, and executes the generated code.
    Type: Grant
    Filed: August 7, 2014
    Date of Patent: August 11, 2020
    Assignee: COUPANG CORP.
    Inventor: Hyunsik Choi
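US 10,740,331 above fuses multiple query primitives into a composite primitive and generates code for it at run time. The sketch below imitates that fuse-then-compile step in pure Python by generating and exec-ing source for a composite per-row filter/map; the patent itself targets generated binary code, so this is only an analogy, and the primitive representation is invented for illustration.

```python
def fuse_primitives(primitives):
    """Fuse a chain of per-row primitives (filter/map expressions over `row`)
    into one generated function, compiled once at run time."""
    body = ["def composite(rows):", "    out = []", "    for row in rows:"]
    indent = "        "
    for kind, expr in primitives:
        if kind == "filter":
            body.append(f"{indent}if not ({expr}): continue")
        elif kind == "map":
            body.append(f"{indent}row = ({expr})")
    body.append(f"{indent}out.append(row)")
    body.append("    return out")
    namespace = {}
    exec(compile("\n".join(body), "<composite>", "exec"), namespace)
    return namespace["composite"]

# SELECT price * 0.9 FROM items WHERE price > 100, expressed as two primitives.
query = [("filter", "row > 100"), ("map", "row * 0.9")]
composite = fuse_primitives(query)
print(composite([50, 120, 300]))   # [108.0, 270.0]
```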
  • Patent number: 10725755
    Abstract: Systems, apparatuses, and methods for a hardware and software system to automatically decompose a program into multiple parallel threads are described. In some embodiments, the systems and apparatuses execute a method of original code decomposition and/or generated thread execution.
    Type: Grant
    Filed: June 6, 2017
    Date of Patent: July 28, 2020
    Assignee: Intel Corporation
    Inventors: David J. Sager, Ruchira Sasanka, Ron Gabor, Shlomo Raikin, Joseph Nuzman, Leeor Peled, Jason A. Domer, Ho-Seop Kim, Youfeng Wu, Koichi Yamada, Tin-Fook Ngai, Howard H. Chen, Jayaram Bobba, Jeffrey J. Cook, Omar M. Shaikh, Suresh Srinivas
  • Patent number: 10681125
    Abstract: A method of message-based communication is provided which includes executing, on one or more accelerated processing units, a plurality of groups of work items; receiving a first message from a first group of work items of the plurality of groups of work items executing on the one or more accelerated processing units; and storing the first message at a first segment of memory allocated to a second group of work items of the plurality of groups of work items executing on the one or more accelerated processing units.
    Type: Grant
    Filed: March 29, 2016
    Date of Patent: June 9, 2020
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventor: Shuai Che
  • Patent number: 10650322
    Abstract: Systems, computer-implemented methods, and computer program products to facilitate external port measurement of qubit port responses are provided. According to an embodiment, a system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise an analysis component that can analyze responses of a multi-mode readout device coupled to a qubit. The computer executable components can further comprise an assignment component that can assign a readout state of the qubit based on the responses. In some embodiments, the multi-mode readout device can be electrically coupled to at least one of the qubit or an environment of the qubit based on a defined electrical coupling value.
    Type: Grant
    Filed: December 13, 2018
    Date of Patent: May 12, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Paul Kristan Temme, Salvatore Bernardo Olivadese, Antonio Corcoles-Gonzalez, Jay M. Gambetta, Lev Samuel Bishop
  • Patent number: 10642761
    Abstract: An avionics system comprising a central processing unit to implement one or more hard real-time safety-critical applications, the central processing unit comprises a multi-core processor with a plurality of cores, avionics system software executable by the multi-core processor, a memory, and a common bus through which the multi-core processor can access the memory; the avionics system is characterized in that the avionics system software is designed to cause, when executed, the cores in the multi-core processor to access the memory through the common bus by sharing bus bandwidth according to assigned bus bandwidth shares.
    Type: Grant
    Filed: October 31, 2017
    Date of Patent: May 5, 2020
    Assignee: Leonardo S.P.A.
    Inventors: Marco Sozzi, Massimo Traversone
  • Patent number: 10635598
    Abstract: An embodiment of a semiconductor apparatus may include technology to determine one or more logical block addresses for a persistent storage media, determine one or more addresses for a physical memory space, and define a memory-mapped input/output region for the physical memory space with a direct mapping between the one or more addresses for the physical memory space and the one or more logical block addresses for the persistent storage media. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: March 30, 2018
    Date of Patent: April 28, 2020
    Assignee: Intel Corporation
    Inventors: Bryan Veal, Annie Foong
  • Patent number: 10620613
    Abstract: Techniques for controlling the operation of a process plant or several process plants within a process control system using a centralized or distributed controller farm allow for increased flexibility in the process control system. Any of the controllers in the controller farm may be utilized to execute modules corresponding to any of the field devices in one or several process plants. Control modules or other operations may be allocated amongst the controllers, distributing the load so that one controller is not performing several operations while others are inactive. Additionally, the controller farm may be located in a temperature-controlled room or area in an offsite location from the process plants. In some scenarios, load balancing techniques are performed to distribute the load for the modules equally, or at least similarly, amongst the controllers.
    Type: Grant
    Filed: August 21, 2017
    Date of Patent: April 14, 2020
    Assignee: FISHER-ROSEMOUNT SYSTEMS, INC.
    Inventors: Tiong P. Ong, Kent A. Burr, David R. Denison, Godfrey R. Sherriff, Gary Law, Brandon Hieb, David M. Smith
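The controller-farm entry above (US 10,620,613) balances control modules across controllers so that no controller is overloaded while others sit idle. Below is a minimal greedy least-loaded placement sketch; the module names, load estimates, and the balance function are hypothetical, and the patent does not prescribe this particular heuristic.

```python
import heapq

def balance(modules, controllers):
    """Greedy least-loaded assignment: give each control module (with an
    estimated load) to the controller currently carrying the least load."""
    heap = [(0.0, c) for c in controllers]     # (current load, controller id)
    heapq.heapify(heap)
    placement = {}
    for name, load in sorted(modules.items(), key=lambda kv: -kv[1]):
        current, ctrl = heapq.heappop(heap)
        placement[name] = ctrl
        heapq.heappush(heap, (current + load, ctrl))
    return placement

modules = {"PID_loop_A": 0.4, "PID_loop_B": 0.3, "batch_seq": 0.6, "alarm_scan": 0.2}
print(balance(modules, ["ctrl-1", "ctrl-2"]))
```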
  • Patent number: 10579264
    Abstract: A memory system may include: a plurality of memory dies; and a controller suitable for identifying a dependency between first and second commands and a priority order of the first and second commands through a check engine, and controlling the memory dies to sequentially perform first and second command operations in response to the first and second commands according to the dependency and the priority order.
    Type: Grant
    Filed: September 14, 2017
    Date of Patent: March 3, 2020
    Assignee: SK hynix Inc.
    Inventor: Dong-Sop Lee
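US 10,579,264 above has a check engine identify a dependency and a priority order between two commands so that dependent command operations are performed sequentially on the memory dies. A toy sketch of that decision, assuming commands are dictionaries and that "same die and block" implies a dependency; both assumptions are illustrative.

```python
def schedule(cmd_a, cmd_b):
    """Order two commands: if they touch the same die and block they are
    dependent and must run sequentially in priority order; otherwise they
    could be dispatched independently (here still returned in priority order)."""
    dependent = cmd_a["die"] == cmd_b["die"] and cmd_a["block"] == cmd_b["block"]
    ordered = sorted([cmd_a, cmd_b], key=lambda c: c["priority"])
    return dependent, ordered

write_cmd = {"op": "program", "die": 0, "block": 17, "priority": 1}
read_cmd  = {"op": "read",    "die": 0, "block": 17, "priority": 2}
dep, order = schedule(write_cmd, read_cmd)
print(dep, [c["op"] for c in order])   # True ['program', 'read']
```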
  • Patent number: 10545815
    Abstract: A method for data redistribution of job data in a first datanode (DN) to at least one additional DN in a Massively Parallel Processing (MPP) Database (DB) is provided. The method includes recording a snapshot of the job data, creating a first data portion in the first DN and a redistribution data portion in the first DN, collecting changes to a job data copy stored in a temporary table, and initiating transfer of the redistribution data portion to the at least one additional DN.
    Type: Grant
    Filed: August 3, 2016
    Date of Patent: January 28, 2020
    Assignee: Futurewei Technologies, Inc.
    Inventors: Le Cai, QingQing Zhou, Yang Sun
  • Patent number: 10509762
    Abstract: Systems, methods, and computer-readable media for transferring data between a host platform and modem circuitry are provided. At low data rates, data may be stored by on-chip memory, and data may be transferred from the on-chip memory to the host platform over an interconnect (IX) when a first aggregation period expires. At medium data rates, data may be stored in both the on-chip memory and in-package or off-chip memory, and the data may be transferred from the on-chip memory and off-chip memory to the host platform over the IX when a second aggregation period expires. At high data rates, the on-chip memory may serve as an elastic buffer, and the data may be streamed directly through the on-chip memory to the host platform over the IX. Other embodiments are described and/or claimed.
    Type: Grant
    Filed: April 30, 2018
    Date of Patent: December 17, 2019
    Assignee: Intel IP Corporation
    Inventors: Pavel Peleska, Reinhold Schneider
  • Patent number: 10502781
    Abstract: A detection circuit, a detection method, and an electronic system for detecting an I/O output status are provided. The detection circuit includes a comparison-window generating circuit configured to: detect an I/O data signal, generate a first single pulse signal determining a first time window in response to a rising edge of the I/O data signal, and generate a second single pulse signal determining a second time window in response to a falling edge of the I/O data signal. A first comparison circuit is configured to: receive the first single pulse signal, and compare the I/O drive signal with a preset high-level reference signal within the first time window to obtain a first comparison result. A second comparison circuit is configured to: receive the second single pulse signal, and compare the I/O drive signal with a preset low-level reference signal within the second time window to obtain a second comparison result.
    Type: Grant
    Filed: January 18, 2018
    Date of Patent: December 10, 2019
    Assignees: Semiconductor Manufacturing International (Shanghai) Corporation, Semiconductor Manufacturing International (Beijing) Corporation
    Inventors: Zhen Ye Guo, Zhen Jiang Su
  • Patent number: 10394615
    Abstract: An information processing apparatus takes each currently executing job as a candidate job, and when determining that a migration of a candidate job to a migration destination node selected from free nodes, which are not executing any jobs, is expected to expand a continued range of free nodes, specifies the migration as a possible migration. Then, on the basis of the amounts of communication needed to perform individual migrations based on a plurality of possible migrations and the numbers of nodes used for executing candidate jobs to be migrated in the individual possible migrations, the information processing apparatus determines a possible migration to be performed.
    Type: Grant
    Filed: February 9, 2017
    Date of Patent: August 27, 2019
    Assignee: FUJITSU LIMITED
    Inventor: Hiroki Yokota
  • Patent number: 10310588
    Abstract: In an embodiment, a processor includes a plurality of cores each to independently execute instructions, a power delivery logic coupled to the plurality of cores, and a power controller including a first logic to cause a first core to enter into a first low power state of an operating system power management scheme independently of the OS, during execution of at least one thread on the first core. Other embodiments are described and claimed.
    Type: Grant
    Filed: October 18, 2016
    Date of Patent: June 4, 2019
    Assignee: Intel Corporation
    Inventors: Ankush Varma, Krishnakanth V. Sistla, Allen W. Chu, Ian M. Steiner
  • Patent number: 10304156
    Abstract: A method is described. The method includes repeatedly loading a next sheet of image data from a first location of a memory into a two dimensional shift register array. The memory is locally coupled to the two-dimensional shift register array and an execution lane array having a smaller dimension than the two-dimensional shift register array along at least one array axis. The loaded next sheet of image data keeps within an image area of the two-dimensional shift register array. The method also includes repeatedly determining output values for the next sheet of image data through execution of program code instructions along respective lanes of the execution lane array, wherein, a stencil size used in determining the output values encompasses only pixels that reside within the two-dimensional shift register array.
    Type: Grant
    Filed: June 16, 2017
    Date of Patent: May 28, 2019
    Assignee: Google LLC
    Inventors: Albert Meixner, Hyunchul Park, Qiuling Zhu, Jason Rupert Redgrave
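The entry above (US 10,304,156) loads sheets of image data into a two-dimensional shift register array that extends beyond the execution lane array, so stencil reads never leave the register array. Below is a NumPy sketch of one sheet's worth of work, assuming an 8x8 lane array, a 3x3 box-filter stencil, and a one-pixel halo; these sizes are illustrative, not taken from the patent.

```python
import numpy as np

LANES = 8          # execution lane array is LANES x LANES
HALO = 1           # shift register extends past the lane array by the stencil radius

def process_sheet(sheet):
    """sheet is (LANES + 2*HALO)^2: the stencil for every lane-array pixel only
    touches pixels that are already resident in the shift register array."""
    out = np.zeros((LANES, LANES))
    for dy in range(-HALO, HALO + 1):        # these offsets model register shifts
        for dx in range(-HALO, HALO + 1):
            out += sheet[HALO + dy: HALO + dy + LANES, HALO + dx: HALO + dx + LANES]
    return out / 9.0

N = 3 * LANES + 2 * HALO      # 26: three full sheets per axis plus the halo border
image = np.arange(N * N, dtype=float).reshape(N, N)
# Repeatedly load the next sheet (with its halo) from "memory" and process it.
for y in range(0, N - 2 * HALO, LANES):
    for x in range(0, N - 2 * HALO, LANES):
        sheet = image[y: y + LANES + 2 * HALO, x: x + LANES + 2 * HALO]
        blurred = process_sheet(sheet)   # 8x8 output values per sheet
```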
  • Patent number: 10261831
    Abstract: Embodiments include computing devices, apparatus, and methods implemented by the apparatus for implementing speculative loop iteration partitioning (SLIP) for heterogeneous processing devices. A computing device may receive iteration information for a first partition of iterations of a repetitive process and select a SLIP heuristic based on available SLIP information and iteration information for the first partition. The computing device may determine a split value for the first partition using the SLIP heuristic, and partition the first partition using the split value to produce a plurality of next partitions.
    Type: Grant
    Filed: August 24, 2016
    Date of Patent: April 16, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Arun Raman, Han Zhao, Aravind Natarajan
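US 10,261,831 above computes a split value for a partition of loop iterations using a selected SLIP heuristic and splits the partition into next partitions for heterogeneous devices. The sketch below stands in a throughput-proportional heuristic and made-up device numbers for the patent's heuristics, which the abstract does not spell out.

```python
def slip_split(partition, throughputs):
    """Split a half-open iteration range among devices proportionally to their
    observed throughput (a stand-in for the patent's selected SLIP heuristic)."""
    start, end = partition
    total = sum(throughputs.values())
    splits, cursor = {}, start
    for dev, t in throughputs.items():
        count = round((end - start) * t / total)
        splits[dev] = (cursor, min(cursor + count, end))
        cursor = splits[dev][1]
    last = list(splits)[-1]
    splits[last] = (splits[last][0], end)   # give any rounding remainder to the last device
    return splits

# First partition: iterations [0, 1000); profiling says the GPU is much faster.
print(slip_split((0, 1000), {"big_core": 250, "little_core": 150, "gpu": 600}))
# {'big_core': (0, 250), 'little_core': (250, 400), 'gpu': (400, 1000)}
```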
  • Patent number: 10235102
    Abstract: Methods, systems, and computer readable media for submission queue pointer management are disclosed. One method is implemented in a data storage device including a controller and a memory. The method includes fetching a plurality of commands from a submission queue. The method further includes parsing at least one of the commands. The method further includes, in response to successful parsing of at least one of the commands and prior to executing all of the commands, notifying a host to advance a head entry pointer for the submission queue by a number of entries corresponding to a number of the commands successfully parsed.
    Type: Grant
    Filed: November 1, 2015
    Date of Patent: March 19, 2019
    Assignee: SanDisk Technologies LLC
    Inventors: Elkana Richter, Shay Benisty, Tal Sharifie
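The submission-queue entry above (US 10,235,102) notifies the host to advance the queue's head pointer by the number of commands successfully parsed, before those commands finish executing. Below is a minimal ring-buffer model of that behavior; the class layout and field names are illustrative rather than an NVMe implementation.

```python
class SubmissionQueue:
    def __init__(self, depth):
        self.depth = depth
        self.entries = [None] * depth
        self.head = 0              # advanced by the device once entries are consumed
        self.tail = 0              # advanced by the host as it submits commands

    def submit(self, cmd):
        self.entries[self.tail % self.depth] = cmd
        self.tail += 1

def fetch_parse_and_advance(sq, batch):
    """Device side: fetch up to `batch` commands, parse them, and immediately
    notify the host to advance the head by the number successfully parsed."""
    fetched = []
    while sq.head + len(fetched) < sq.tail and len(fetched) < batch:
        fetched.append(sq.entries[(sq.head + len(fetched)) % sq.depth])
    parsed = [c for c in fetched if isinstance(c, dict) and "opcode" in c]
    sq.head += len(parsed)         # host may now reuse those slots, pre-execution
    return parsed                  # execution of the parsed commands happens later

sq = SubmissionQueue(depth=8)
for lba in range(5):
    sq.submit({"opcode": "read", "lba": lba})
print(len(fetch_parse_and_advance(sq, batch=4)), sq.head)   # 4 4
```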
  • Patent number: 10049133
    Abstract: Techniques are described for managing the execution of one or more groups of queries. Embodiments of the present disclosure may generally receive a group of queries to be executed against a database. Embodiments also determine, based on one or more attributes of the group of queries, an expected amount of resources that will be used in executing the group of queries against the database. Embodiments further schedule one or more queries of the group of queries for execution against the database based on the expected amount of resources to be used for the group of queries.
    Type: Grant
    Filed: October 27, 2016
    Date of Patent: August 14, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Eric L. Barsness, Daniel E. Beuch, Alexander Cook, Brian R. Muras, John M. Santosuosso
  • Patent number: 10019292
    Abstract: A method for executing a comprehensive real-time computer application comprising application software that includes a description of functions on a distributed real-time computer system including sensors, actuators, computing nodes, and distributor units having access to a global time. The application software includes a number of real-time software components (RTSWCs). When executed, the RTSWCs exchange information by time-triggered messages. Each RTSWC is allocated a time-triggered virtual machine (TTVM), wherein, during a service interval (SI), an operating system running on a computing node provides a TTVM realized on the computing node with protected access to the network resources and memory resources of the computing node assigned to the TTVM, and wherein, during the SI, a defined computing power for processing the RTSWCs running in the TTVM is allocated to the TTVM by the operating system of the computing node such that the RTSWCs provide a result before the end of the SI.
    Type: Grant
    Filed: January 27, 2016
    Date of Patent: July 10, 2018
    Assignee: FTS COMPUTERTECHNIK GMBH
    Inventors: Hermann Kopetz, Stefan Poledna
  • Patent number: 9990607
    Abstract: A low-latency, high-bandwidth, and highly scalable method delivers data from a source device to multiple communication devices on a communication network. Under this method, the communication devices (also called player nodes) provide download and upload bandwidths for each other. In this manner, the bandwidth requirement on the data source is significantly reduced. Such a data delivery network scales without limit with the number of player nodes. In one embodiment, a computer network includes (a) a source server that provides a data stream for delivery in the computer network, (b) player nodes that exchange data with each other to obtain a complete copy of the data stream, the player nodes being capable of dynamically joining or exiting the computer network, and (c) a control server which maintains a topology graph representing connections between the source server and the player nodes, and the connections among the player nodes themselves.
    Type: Grant
    Filed: January 12, 2007
    Date of Patent: June 5, 2018
    Inventor: Wensheng Hua
  • Patent number: 9916226
    Abstract: A system of testing software is provided. The system comprises a first hardware system having hardware components to execute a first version of the software, and additionally comprises a second hardware system having hardware components to execute a second version of the software. Here, the first version of the software and the second version are different. In addition, the system includes a device configured to test the first hardware system and the second hardware system by providing first input data traffic to the first hardware system, providing second input data traffic to the second hardware system, and accessing performance values from the first hardware system and the second hardware system to evaluate a performance comparison between the first hardware system executing the first version of the software and the second hardware system executing the second version of the software.
    Type: Grant
    Filed: May 27, 2014
    Date of Patent: March 13, 2018
    Assignee: eBay Inc.
    Inventors: Jayaram Singonahalli, Darrin Curtis Alves, Douglas Ray Woolard
  • Patent number: 9911092
    Abstract: Various embodiments of the present invention provide systems and methods for enabling design, generation, and execution of real-time workflows. Such embodiments provide a graphical designer including a plurality of shapes representing the various objects of a workflow that are used to model the workflow. In addition, various embodiments of the graphical designer provide shapes to model aspects of the workflow not found in previous graphical designers. Various embodiments also provide a code generator that converts the representation of the workflow into executable code for multiple target languages. Various embodiments also provide a workflow engine based on a Petri net model responsible for executing the workflow and for delegating tasks to be performed for the workflow to an operating system. In various embodiments, the workflow engine further includes a platform abstraction layer that provides a transition layer from the Petri net language to the operating system language.
    Type: Grant
    Filed: March 4, 2014
    Date of Patent: March 6, 2018
    Assignee: UNITED PARCEL SERVICE OF AMERICA, INC.
    Inventor: Asheesh Goja
  • Patent number: 9886072
    Abstract: Systems and methods are provided for reducing power consumption of a multi-die device, such as a network processor FPGA (npFPGA). The multi-die device may include hardware resources such as FPGA dies, which may be coupled to NIC dies and/or memory dies. Power consumption of the multi-die device may be reduced by monitoring usage of hardware resources in the multi-die device, identifying hardware resources that are not in use, and gating power to the identified hardware resources. The status of processing elements (PEs) in the multi-die device may be tracked in a PE state table. Based on the PE state table, tasks from a task queue may be assigned to one or more processing elements.
    Type: Grant
    Filed: June 19, 2013
    Date of Patent: February 6, 2018
    Assignee: ALTERA CORPORATION
    Inventor: Krishnan Venkataraman
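US 9,886,072 above tracks processing-element status in a PE state table, assigns queued tasks to available PEs, and gates power to hardware resources that are not in use. Below is a small bookkeeping sketch of that flow; the state names, task names, and the in-memory "gating" set are placeholders for real power-rail control.

```python
from collections import deque

pe_state = {f"pe{i}": "idle" for i in range(4)}   # the patent's PE state table
task_queue = deque(["parse_hdr", "crc_check", "route_lookup"])
gated = set()

def power_gate_unused():
    """Gate power to any PE that has no work to do."""
    for pe, state in pe_state.items():
        if state == "idle" and not task_queue:
            gated.add(pe)                          # stand-in for real power gating

def dispatch():
    for pe, state in pe_state.items():
        if state == "idle" and task_queue:
            gated.discard(pe)                      # wake the PE if it was gated
            pe_state[pe] = task_queue.popleft()    # mark it busy with that task
    power_gate_unused()

dispatch()
print(pe_state)   # three PEs busy, pe3 stays idle
print(gated)      # {'pe3'} -- the unused PE is power gated
```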
  • Patent number: 9779470
    Abstract: An image processing system is described herein in which a multi-line processing block has multiple inputs and multiple outputs. In order to provide the multiple outputs the multi-line processing block has multiple processing units operating in parallel on the multiple inputs. The multiple outputs of the multi-line processing block are coupled to corresponding multiple inputs of a subsequent multi-line processing block in the image processing system.
    Type: Grant
    Filed: January 19, 2017
    Date of Patent: October 3, 2017
    Assignee: Imagination Technologies Limited
    Inventors: Michael Bishop, Morgyn Taylor
  • Patent number: 9760474
    Abstract: Novel tools and techniques are provided for implementing green software applications and/or certifying software applications with a green applications efficiency (“GAE”) rating. Implementing green software applications might include performing performance tests of a software application, measuring power consumption of one or more hardware components in response to execution of the software application during the one or more performance tests, generating a power consumption profile for the software application based on the measured power consumption, and tuning the software application such that power consumption of the one or more hardware components matches a power load caused by execution of the software application, based at least in part on the power consumption profile for the software application.
    Type: Grant
    Filed: March 24, 2015
    Date of Patent: September 12, 2017
    Assignee: CenturyLink Intellectual Property LLC
    Inventors: Vishak Shanmugam Pillai, Darshan Sonbarse, Viswanath Seetharam, Manoj U P
  • Patent number: 9660865
    Abstract: A system for gradually implementing network services to end users includes substantially redundant first and second control networks, connectable to the end users through a routable communications network. The first control network provides a first service capability to all the end users. The second control network provides a second service capability to a first portion of the end users, the second service capability replacing the first service capability of the first portion of the end users. The second control network subsequently provides the second service capability to a second portion of the end users, while continuing to provide the second service capability to the first portion, the second service capability replacing the first service capability of the second portion of the end users. The second service capability provided to the second portion of the end users may include revisions based on feedback from the first portion of end users.
    Type: Grant
    Filed: September 23, 2013
    Date of Patent: May 23, 2017
    Assignee: TIME WARNER CABLE INC.
    Inventors: Scott W. Ramsdell, Chris A. Cholas
  • Patent number: 9645982
    Abstract: A method for loading a web page is provided. Primary executable script is asynchronously loaded. Commands associated with the primary executable script are pushed onto a first queue and processed by: asynchronously loading secondary executable script and pushing the command onto a second queue if the command is a dependency command; registering the secondary executable script referenced in the command if the command is a fulfillment command; and pushing the command onto the second queue if the command is neither a dependency nor a fulfillment command. Commands in the second queue are processed by, if the command is a dependency command, determining whether the secondary executable script referenced in the dependency command is registered, and associating the secondary executable script with an object if the secondary executable script is registered. If the command is not a dependency command, then the command is executed and removed from the second queue.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: May 9, 2017
    Assignee: Google Inc.
    Inventors: Bradley David Townsend, Brian Kuhn, Xin Liu
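The web-page loading entry above (US 9,645,982) routes commands through two queues: dependency commands trigger an asynchronous script load and are deferred to the second queue, fulfillment commands register a loaded script, and other commands are deferred and later executed. The Python model below walks that two-queue flow synchronously; the command tuples and script names are invented, and a real implementation would be asynchronous JavaScript.

```python
from collections import deque

registered = set()       # secondary scripts that have been registered (fulfilled)
associations = {}        # script name -> object, once the script is registered
executed = []

first_q = deque([
    ("dependency", "analytics.js"),      # needs analytics.js loaded first
    ("other", "trackPageview"),
    ("fulfillment", "analytics.js"),     # analytics.js announces it has loaded
])
second_q = deque()

def process_first_queue():
    while first_q:
        kind, payload = first_q.popleft()
        if kind == "dependency":
            # (the real system would start an asynchronous load of `payload` here)
            second_q.append((kind, payload))
        elif kind == "fulfillment":
            registered.add(payload)
        else:
            second_q.append((kind, payload))

def process_second_queue():
    for _ in range(len(second_q)):
        kind, payload = second_q.popleft()
        if kind == "dependency":
            if payload in registered:
                associations[payload] = object()   # associate the script with an object
            else:
                second_q.append((kind, payload))   # not registered yet, retry later
        else:
            executed.append(payload)               # execute and drop the command

process_first_queue()
process_second_queue()
print(executed, list(associations))   # ['trackPageview'] ['analytics.js']
```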
  • Patent number: 9558152
    Abstract: A synchronization method is executed by a multi-core processor system. The synchronization method includes registering based on a synchronous command issued from a first CPU, CPUs to be synchronized and a count of the CPUs into a specific table; counting by each of the CPUs and based on a synchronous signal from the first CPU, an arrival count for a synchronous point, and creating by each of the CPUs, a second shared memory area that is a duplication of a first shared memory area accessed by processes executed by the CPUs; and comparing the first shared memory area and the second shared memory area when the arrival count becomes equal to the count of the CPUs, and based on a result of the comparison, judging the processes executed by the CPUs.
    Type: Grant
    Filed: September 13, 2013
    Date of Patent: January 31, 2017
    Assignee: FUJITSU LIMITED
    Inventors: Koichiro Yamashita, Hiromasa Yamauchi, Takahisa Suzuki, Koji Kurihara
  • Patent number: 9529640
    Abstract: A network processor includes a schedule, sync and order (SSO) module for scheduling and assigning work to multiple processors. The SSO includes an on-deck unit (ODU) that provides a table having several entries, each entry storing a respective work queue entry (WQE), and a number of lists. Each of the lists may be associated with a respective processor configured to execute the work, and includes pointers to entries in the table. A pointer is added to a list based on an indication of whether the associated processor accepts the WQE corresponding to the pointer.
    Type: Grant
    Filed: March 27, 2015
    Date of Patent: December 27, 2016
    Assignee: Cavium, Inc.
    Inventors: David Kravitz, Daniel E. Dever, Wilson P. Snyder, II
  • Patent number: 9495204
    Abstract: Constructing a logical tree topology in a parallel computer that includes compute nodes, where each compute node includes a hardware acceleration unit and executes an identical number of tasks and the tasks of each node have a rank, includes: creating hardware acceleration groups, with each hardware acceleration group including one task from each node, where the one task from each node has the same rank; assigning one task of a root compute node as a global root of the logical tree topology; assigning tasks of the root compute node other than the global root as local children of the global root; and assigning each of the global root and local children of the root compute node as a root of a subtree of tasks, wherein each subtree comprises the tasks of a hardware acceleration group.
    Type: Grant
    Filed: January 6, 2014
    Date of Patent: November 15, 2016
    Assignee: International Business Machines Corporation
    Inventors: Charles J. Archer, Nysal Jan K. A., Sameh S. Sharkawi
  • Patent number: 9495205
    Abstract: Constructing a logical tree topology in a parallel computer that includes compute nodes, where each compute node includes a hardware acceleration unit and executes an identical number of tasks and the tasks of each node have a rank, includes: creating hardware acceleration groups, with each hardware acceleration group including one task from each node, where the one task from each node has the same rank; assigning one task of a root compute node as a global root of the logical tree topology; assigning tasks of the root compute node other than the global root as local children of the global root; and assigning each of the global root and local children of the root compute node as a root of a subtree of tasks, wherein each subtree comprises the tasks of a hardware acceleration group.
    Type: Grant
    Filed: April 30, 2014
    Date of Patent: November 15, 2016
    Assignee: International Business Machines Corporation
    Inventors: Charles J. Archer, Nysal Jan K. A., Sameh S. Sharkawi
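The two related entries above (US 9,495,204 and US 9,495,205) build a logical tree: same-rank tasks across nodes form hardware acceleration groups, one root-node task becomes the global root, the root node's remaining tasks become its local children, and each of those tasks heads a subtree over its group. Below is a compact sketch of that construction with a hypothetical three-node, four-rank layout.

```python
# tasks[node][rank] identifies one task; every node runs the same number of tasks.
nodes = ["node0", "node1", "node2"]
ranks = range(4)
tasks = {n: {r: f"{n}.t{r}" for r in ranks} for n in nodes}

# 1. Hardware acceleration groups: one task of each rank from every node.
groups = {r: [tasks[n][r] for n in nodes] for r in ranks}

# 2. Global root and local children come from the designated root compute node.
root_node = "node0"
global_root = tasks[root_node][0]
local_children = [tasks[root_node][r] for r in ranks if r != 0]

# 3. Each root-node task heads a subtree made of its own acceleration group.
subtrees = {tasks[root_node][r]: [t for t in groups[r] if t != tasks[root_node][r]]
            for r in ranks}

print(global_root, local_children)
print(subtrees[global_root])   # ['node1.t0', 'node2.t0']
```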
  • Patent number: 9465743
    Abstract: Embodiments of the present invention disclose a method for accessing a cache and a pseudo cache agent (PCA). The method of the present invention is applied to a multiprocessor system, where the system includes at least one NC, at least one PCA conforming to a processor micro-architecture level interconnect protocol is embedded in the NC, the PCA is connected to at least one PCA storage device, and the PCA storage device stores data shared among memories in the multiprocessor system. The method of the present invention includes: if the NC receives a data request, obtaining, by the PCA, target data required in the data request from the PCA storage device connected to the PCA; and sending the target data to a sender of the data request. Embodiments of the present invention are mainly applied to a process of accessing cache data in the multiprocessor system.
    Type: Grant
    Filed: December 19, 2012
    Date of Patent: October 11, 2016
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Wei Zheng, Jiangen Liu, Gang Liu, Weiguang Cai
  • Patent number: 9436613
    Abstract: A central processing unit, connected to a main memory among a plurality of central processing units each including a cache memory, includes a control unit. The control unit executes a process including: classifying the plurality of central processing units into central processing unit groups whose number is smaller than the total number of the plurality of central processing units, and writing to the main memory presence information indicating whether or not the same data as data stored in the main memory is held in a cache memory included in any of the central processing units that belong to a corresponding central processing unit group, for each central processing unit group of the plurality of central processing unit groups obtained by the classifying.
    Type: Grant
    Filed: January 16, 2013
    Date of Patent: September 6, 2016
    Assignee: FUJITSU LIMITED
    Inventors: Go Sugizaki, Naoya Ishimura
  • Patent number: 9430148
    Abstract: A method is provided, for example, to implement multiplexed communication between a controller and a preamplifier in a storage device. For example, multiplexed communication is implemented by controlling a bidirectional serial data line of a digital bus to selectively transmit digital signals in either a first direction from the controller to the preamplifier or a second direction from the preamplifier to the controller, in response to a direction control signal, and concurrently transmitting a synchronous clock signal over a clock signal line of the digital bus from the controller to the preamplifier to synchronize transfer and processing of the digital signals transmitted on the bidirectional serial data line of the digital bus. The direction control signal is transmitted from the controller to the preamplifier on one of the bidirectional serial data line and the clock signal line of the digital bus.
    Type: Grant
    Filed: May 1, 2014
    Date of Patent: August 30, 2016
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Ross S. Wilson, David W. Kelly, Daniel J. Dolan, Richard Rauschmayer
  • Patent number: 9411778
    Abstract: The invention discloses a multiprocessor system and a synchronous engine device thereof.
    Type: Grant
    Filed: August 30, 2011
    Date of Patent: August 9, 2016
    Assignee: INSTITUTE OF COMPUTING TECHNOLOGY OF THE CHINESE ACADEMY OF SCIENCES
    Inventors: Ninghui Sun, Fei Chen, Zheng Cao, Kai Wang, Xuejun An
  • Patent number: 9367459
    Abstract: A scheduling method of a scheduler that manages threads is executed by a computer. The scheduling method includes selecting a CPU of relatively less load, when a second thread is generated from a first thread to be processed; determining whether the second thread operates exclusively from the first thread; copying a first storage area accessed by the first thread onto a second storage area managed by the CPU, when the second thread operates exclusively; calculating, based on an address of the second storage area and a predetermined value, an offset for a second address for the second thread to access the first storage area; and notifying the CPU of the offset for the second address to convert a first address to a third address for accessing the second storage area.
    Type: Grant
    Filed: July 3, 2013
    Date of Patent: June 14, 2016
    Assignee: FUJITSU LIMITED
    Inventors: Koichiro Yamashita, Hiromasa Yamauchi, Takahisa Suzuki, Koji Kurihara
  • Patent number: 9348393
    Abstract: In one embodiment, a system includes a power management controller that controls a duty cycle of a processor to manage power. By frequently powering up and powering down the processor during a period of time, the power consumption of the processor may be controlled while providing the perception that the processor is continuously available. Before powering the processor up, the power management controller may determine whether or not there is work for the processor to perform. If there is no work to perform, the power management controller may delay powering the processor up until there is work to perform, saving additional power. This additional power savings may be tracked, and may serve as a “credit” for the processor when subsequently powered up again.
    Type: Grant
    Filed: August 28, 2014
    Date of Patent: May 24, 2016
    Assignee: Apple Inc.
    Inventor: Jason P. Jane
  • Patent number: 9323574
    Abstract: A method for managing processor power optimization is provided. The method may include receiving a plurality of tasks for processing by a processor environment. The method may also include allocating a portion of a compute resource corresponding to the processor environment to each of the received plurality of tasks, the allocating of the portion being based on both an execution time and a response time associated with each of the received plurality of tasks.
    Type: Grant
    Filed: February 21, 2014
    Date of Patent: April 26, 2016
    Assignee: Lenovo Enterprise Solutions (Singapore) Pte. Ltd.
    Inventors: Ganesh Balakrishnan, Mohammad Peyravian, Srinivasan Ramani, Brian M. Rogers, Ken V. Vu
  • Patent number: 9317474
    Abstract: A semiconductor device of the present invention has processor elements each of which divides data that is contiguous in one direction into multiple data groups and processes them, a processor element control unit that issues a data shift instruction, and a data transfer network that performs data transfer between adjacent processor elements. The processor elements each have a data storage unit that stores one of the multiple data groups, a data selector that outputs transfer data obtained by selecting either of head data or end data of one data group according to a data shift instruction into a data transfer network, a data shifter that shifts a position at which the data group is stored to the right or to the left according to the data shift instruction, and a data connector that connects the data group which is shifted and the transfer data obtained through the data transfer network.
    Type: Grant
    Filed: August 4, 2013
    Date of Patent: April 19, 2016
    Assignee: Renesas Electronics Corporation
    Inventor: Shohei Nomoto
  • Patent number: 9274586
    Abstract: Many computer processing tasks require large numbers of memory intensive operations to be performed very rapidly. For example, a computer network requires that packets be placed into and removed from First-In First-Out (FIFO) queues, that numerous counters be maintained, and that routing table look-ups be performed. All of these operations must be performed at very high speeds in order to keep up with today's high-speed computer network traffic. To help perform these high-speed memory tasks, a high-speed intelligent memory subsystem has been developed. The high-speed intelligent memory subsystem handles the intricacies of these memory operations such that a main process is relieved of some of its duties. Various high-level memory interfaces are provided for interfacing with the intelligent memory subsystem. The memory interfaces may be hardware-based or software-based.
    Type: Grant
    Filed: September 7, 2005
    Date of Patent: March 1, 2016
    Assignee: Cisco Technology, Inc.
    Inventors: Sundar Iyer, Nick McKeown, Morgan Littlewood
  • Patent number: 9219769
    Abstract: Incoming data streams are managed by receiving a data stream on at least one network interface card (NIC) and performing operations on the data stream using a first process running several first threads for each network interface card and at least one group of second multiple processes each with an optional group of second threads. The first process and the one or more groups of second multiple processes are independent and communicate via shared memory. The first threads for each network interface card are different than the group of second threads. The system includes at least one network interface card that receives a data stream, a first processor that runs a first process that uses a plurality of first threads for each network interface card, and a second processor that runs at least one group of second multiple processes each with an optional group of second threads.
    Type: Grant
    Filed: June 10, 2013
    Date of Patent: December 22, 2015
    Assignee: VERISIGN, INC.
    Inventors: John Kenneth Gallant, Karl Henderson
  • Patent number: 9176669
    Abstract: An algorithm for mapping memory and a method for using a high performance computing (“HPC”) system are disclosed. The algorithm takes into account the number of physical nodes in the HPC system, and the amount of memory in each node. Some of the nodes in the HPC system also include input/output (“I/O”) devices like graphics cards and non-volatile storage interfaces that have on-board memory; the algorithm also accounts for the number of such nodes and the amount of I/O memory they each contain. The algorithm maximizes certain parameters in priority order, including the number of mapped nodes, the number of mapped I/O nodes, the amount of mapped I/O memory, and the total amount of mapped memory.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: November 3, 2015
    Assignee: Silicon Graphics International Corp.
    Inventors: Brian Justin Johnson, Michael John Habeck
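US 9,176,669 above maximizes mapping parameters in strict priority order: mapped nodes, then mapped I/O nodes, then mapped I/O memory, then total mapped memory. The sketch below expresses that as a lexicographic comparison over candidate mappings; the candidate data is made up, and the patent's actual search over possible mappings is not shown.

```python
def score(mapping):
    """Priority-ordered score: tuples compare lexicographically, so the number of
    mapped nodes dominates, then mapped I/O nodes, then I/O memory, then total memory."""
    return (
        len(mapping["nodes"]),
        sum(1 for n in mapping["nodes"] if n["io"]),
        sum(n["io_mem"] for n in mapping["nodes"]),
        sum(n["mem"] for n in mapping["nodes"]),
    )

candidates = [
    {"name": "A", "nodes": [{"io": True,  "io_mem": 4, "mem": 64},
                            {"io": False, "io_mem": 0, "mem": 128}]},
    {"name": "B", "nodes": [{"io": True,  "io_mem": 8, "mem": 32},
                            {"io": True,  "io_mem": 2, "mem": 32}]},
]
best = max(candidates, key=score)
print(best["name"])   # "B": same node count, but more I/O nodes wins despite less memory
```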
  • Patent number: 9147373
    Abstract: Executing a map reduce sequence may comprise executing all jobs in the sequence by a collection of a plurality of processes with each process running one or more mappers, combiners, partitioners and reducers for each job, and transparently sharing heap state between the jobs to improve metrics associated with the job. Processes may communicate among themselves to coordinate completion of map, shuffle and reduce phases, and completion of said all jobs in the sequence.
    Type: Grant
    Filed: August 25, 2012
    Date of Patent: September 29, 2015
    Assignee: International Business Machines Corporation
    Inventors: David Cunningham, Benjamin W. Herta, Vijay A. Saraswat, Avraham E. Shinnar
  • Patent number: 9086974
    Abstract: Cache lines in a multi-processor computing environment are configurable with a coherency mode. Cache lines in full-line coherency mode are operated or managed with full-line granularity. Cache lines in sub-line coherency mode are operated or managed as sub-cache line portions of a full cache line. Communications detected on a coherence interconnect may indicate that a cache line is associated with performance-reducing events. A high-contention cache line may be placed in sub-line coherency mode. Caches accessing the cache line are notified that the cache line is in sub-line coherency mode. The cache line may be associated with a counter in a centralized detection table that is incremented based on detecting the communications. The cache line may be a high-contention cache line when the counter satisfies a high-contention criterion, such as reaching a threshold value. The cache line may be returned to full-line coherency mode when a reset criterion is satisfied.
    Type: Grant
    Filed: September 26, 2013
    Date of Patent: July 21, 2015
    Assignee: International Business Machines Corporation
    Inventors: Fadi Y. Busaba, Harold W. Cain, III, Michael K. Gschwind, Maged M. Michael, Valentina Salapura, Eric M. Schwarz, Chung-Lung K. Shum
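The final entry above (US 9,086,974) counts contention-indicating coherence events per cache line in a centralized detection table, switches a line to sub-line coherency mode when the counter crosses a threshold, and returns it to full-line mode when a reset criterion is satisfied. Below is a small state-machine sketch of that policy; the threshold, the decay-based reset, and the event handling are illustrative assumptions.

```python
HIGH_CONTENTION_THRESHOLD = 8

detection_table = {}             # line address -> contention counter
line_mode = {}                   # line address -> "full" or "sub"

def on_coherence_event(line_addr, contended):
    """Called for traffic observed on the coherence interconnect for `line_addr`."""
    line_mode.setdefault(line_addr, "full")
    if contended:
        detection_table[line_addr] = detection_table.get(line_addr, 0) + 1
        if detection_table[line_addr] >= HIGH_CONTENTION_THRESHOLD:
            line_mode[line_addr] = "sub"      # manage the line as sub-line portions

def periodic_reset():
    """Reset criterion: lines whose counters decay to zero return to full-line mode."""
    for addr in list(detection_table):
        detection_table[addr] //= 2
        if detection_table[addr] == 0:
            line_mode[addr] = "full"
            del detection_table[addr]

for _ in range(10):
    on_coherence_event(0x1000, contended=True)
print(line_mode[0x1000])    # 'sub' once the counter crosses the threshold
periodic_reset(); periodic_reset(); periodic_reset(); periodic_reset()
print(line_mode[0x1000])    # 'full' after the counter decays to zero
```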