Distributed Processing System Patents (Class 712/28)
  • Patent number: 11747782
    Abstract: Systems and methods are described herein for novel uses and/or improvements to artificial intelligence applications in an environment with limited or no available data. In particular, systems and methods are described herein for providing power consumption predictions for selected applications within network arrangements featuring devices with non-homogenous or unknown specifications.
    Type: Grant
    Filed: January 20, 2023
    Date of Patent: September 5, 2023
    Assignee: Citibank, N.A.
    Inventors: Adam Hess, Dawid Orczyk, Dominik Wojnarowski, Krzysztof Andrzejewski, Pawel Chrabonszcz
  • Patent number: 11579882
    Abstract: Systems, apparatuses, and methods related to extended memory operations are described. Extended memory operations can include operations specified by a single address and operand and may be performed by a computing device that includes a processing unit and a memory resource. The computing device can perform extended memory operations on data streamed through the computing tile without receipt of intervening commands. In an example, a computing device is configured to receive a command to perform an operation that comprises performing an operation on data with the processing unit of the computing device and to determine that an operand corresponding to the operation is stored in the memory resource. The computing device can further perform the operation using the operand stored in the memory resource.
    Type: Grant
    Filed: March 25, 2021
    Date of Patent: February 14, 2023
    Assignee: Micron Technology, Inc.
    Inventors: Richard C. Murphy, Glen E. Hush, Vijay S. Ramesh, Allan Porterfield, Anton Korzh
  • Patent number: 11561914
    Abstract: An interrupt generation method of a storage device includes executing a command provided by a host, writing a completion entry in a completion queue of the host upon completing execution of the command, and issuing an interrupt corresponding to the completion entry to the host in response to at least one of a first interrupt generation condition, a second interrupt generation condition, and a third interrupt generation condition being satisfied. The first interrupt generation condition is satisfied when a difference between a tail pointer and a head pointer of the completion queue is equal to a first mismatch value. The second interrupt generation condition is satisfied when the difference between the tail pointer and the head pointer is at least equal to an aggregation threshold. The third interrupt generation condition is satisfied when an amount of time that has elapsed since a previous interrupt was issued exceeds a reference time.
    Type: Grant
    Filed: October 15, 2020
    Date of Patent: January 24, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Hyunseok Cha, Sarath Kumar Kunnumpurathu Sivan, Jungsoo Ryoo
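    The three interrupt generation conditions above reduce to two pointer comparisons and a timer check. A minimal Python sketch of that decision logic follows; the class and parameter names (mismatch_value, aggregation_threshold, reference_time_s) are illustrative assumptions, not terminology from the patent.
```python
import time

class CompletionQueueInterruptPolicy:
    """Decides when to raise an interrupt for newly written completion entries.

    Parameter names are illustrative; the abstract only describes the three conditions.
    """

    def __init__(self, queue_size, mismatch_value, aggregation_threshold, reference_time_s):
        self.queue_size = queue_size
        self.mismatch_value = mismatch_value
        self.aggregation_threshold = aggregation_threshold
        self.reference_time_s = reference_time_s
        self.last_interrupt = time.monotonic()

    def pending(self, tail, head):
        # Completion entries the host has not yet consumed (circular queue).
        return (tail - head) % self.queue_size

    def should_interrupt(self, tail, head):
        now = time.monotonic()
        diff = self.pending(tail, head)
        cond1 = diff == self.mismatch_value                          # first condition
        cond2 = diff >= self.aggregation_threshold                   # second condition
        cond3 = (now - self.last_interrupt) > self.reference_time_s  # third condition
        if cond1 or cond2 or cond3:
            self.last_interrupt = now
            return True
        return False

# Example: the interrupt fires once 4 completions have aggregated.
policy = CompletionQueueInterruptPolicy(queue_size=64, mismatch_value=1,
                                        aggregation_threshold=4, reference_time_s=0.001)
print(policy.should_interrupt(tail=5, head=1))  # True (4 pending >= threshold)
```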
  • Patent number: 11561840
    Abstract: The present disclosure provides a system comprising: a first group of computing nodes and a second group of computing nodes, wherein the first and second groups are neighboring devices and each of the first and second groups comprising: a set of computing nodes A-D, and a set of intra-group interconnects, wherein the set of intra-group interconnects communicatively couple computing node A with computing nodes B and C and computing node D with computing nodes B and C; and a set of inter-group interconnects, wherein the set of inter-group interconnects communicatively couple computing node A of the first group with computing node A of the second group, computing node B of the first group with computing node B of the second group, computing node C of the first group with computing node C of the second group, and computing node D of the first group with computing node D of the second group.
    Type: Grant
    Filed: January 30, 2020
    Date of Patent: January 24, 2023
    Assignee: Alibaba Group Holding Limited
    Inventors: Liang Han, Yang Jiao
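    The intra-group and inter-group wiring described in this abstract can be enumerated directly. The sketch below builds that link set for a row of neighboring groups; the node naming scheme is an illustrative assumption.
```python
def build_topology(num_groups=2):
    """Return the undirected links described in the abstract above.

    Nodes are named (group_index, label); the naming is illustrative only.
    """
    intra_pairs = [("A", "B"), ("A", "C"), ("D", "B"), ("D", "C")]
    links = set()
    # Intra-group interconnects: A<->B, A<->C, D<->B, D<->C in every group.
    for g in range(num_groups):
        for a, b in intra_pairs:
            links.add(frozenset({(g, a), (g, b)}))
    # Inter-group interconnects: like-labelled nodes of neighboring groups.
    for g in range(num_groups - 1):
        for label in "ABCD":
            links.add(frozenset({(g, label), (g + 1, label)}))
    return links

topology = build_topology(num_groups=2)
print(len(topology))  # 8 intra-group links + 4 inter-group links = 12
```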
  • Patent number: 11514294
    Abstract: A system and method for enhancing C*RAM, improving its performance for known applications such as video processing while also making it well suited to low-power implementation of neural nets. The required computing engine is decomposed into banks of enhanced C*RAM each having a SIMD controller, thus allowing operations at several scales simultaneously. Several configurations of suitable controllers are discussed, along with communication structures and enhanced processing elements.
    Type: Grant
    Filed: February 23, 2018
    Date of Patent: November 29, 2022
    Assignee: UNTETHER AI CORPORATION
    Inventors: William Martin Snelgrove, Darrick Wiebe
  • Patent number: 11461262
    Abstract: A printed circuit board comprises: a network controller; a memory controller; a heterogeneous processor; a field-programmable gate array (FPGA); and a non-volatile-media controller. The memory controller comprises: a fabric controller component configured to communicate with the network controller, the heterogeneous processor, the FPGA, and the non-volatile-media controller; and a media controller component configured to manage access relating to data stored in volatile memory media. The FPGA is configured to perform computations relating to data stored via the non-volatile-media controller. The heterogeneous processor is configured to perform computation tasks relating to data stored via the memory controller. The printed circuit board is configured to be plugged into a rack with a plurality of other plugged-in circuit boards.
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: October 4, 2022
    Assignee: Alibaba Group Holding Limited
    Inventor: Shu Li
  • Patent number: 11435758
    Abstract: There is provided an electronic control system including a plurality of blade processors and a plurality of backplanes. One or more of a vehicle, electronic control system, and autonomous driving vehicle, disclosed in the present invention, are able to realize connection with an Artificial Intelligence (AI) module, an Unmanned Aerial Vehicle (UAV), a robot, an Augmented Reality (AR) device, a Virtual Reality (VR) device, a 5G service device, and the like.
    Type: Grant
    Filed: August 26, 2019
    Date of Patent: September 6, 2022
    Assignee: LG Electronics Inc.
    Inventors: Jinkyoung Kim, Namyong Park, Namsu Lee, Sangwoo Han
  • Patent number: 11354203
    Abstract: A processing system encompasses several processing devices and a comparison device. A method for controlling the processing system encompasses: processing of identical information items by the processing devices using associated processing processes; furnishing a characteristic value of each processing process, respectively as a function of the processing that has occurred; and comparing the characteristic values by way of the comparison device and determining a defectively operating processing process on the basis of the comparison. The defectively operating processing process is replaced by a processing process restarted on the same processing device.
    Type: Grant
    Filed: March 21, 2018
    Date of Patent: June 7, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Peter Munk, Rainer Baumgaertner
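    One way to read the comparison step above: each redundant process emits a characteristic value derived from its processing, and the odd one out is judged defective and restarted. The sketch below uses a hash digest and a majority vote as stand-ins; the patent does not specify either choice.
```python
from collections import Counter
import hashlib

def characteristic_value(result_bytes):
    # One possible characteristic value: a digest of the process's output.
    return hashlib.sha256(result_bytes).hexdigest()

def find_defective(values):
    """Majority vote over characteristic values; returns indices of outliers.

    Assumes at least three redundant processes so a majority exists; this is
    one plausible comparison rule, not the patented one.
    """
    majority, _ = Counter(values).most_common(1)[0]
    return [i for i, v in enumerate(values) if v != majority]

# Identical input processed by three redundant processes; process 2 misbehaves.
outputs = [b"result-42", b"result-42", b"result-41"]
values = [characteristic_value(o) for o in outputs]
for idx in find_defective(values):
    print(f"process {idx} is defective; restart it on the same processing device")
```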
  • Patent number: 11308248
    Abstract: Apparatus and method for a full quantum system simulator. For example, one embodiment of a method comprises: initializing a quantum computing system simulator for simulating multiple layers of a quantum system including one or more non-quantum layers and one or more physical quantum device layers of the quantum system; simulating a first set of operations of the one or more non-quantum layers of the quantum system to generate first simulation results; simulating a second set of operations of the one or more quantum device layers of the quantum system to generate second simulation results; and analyzing the first and second simulation results to provide at least one configuration recommendation for the quantum system.
    Type: Grant
    Filed: May 5, 2018
    Date of Patent: April 19, 2022
    Assignee: Intel Corporation
    Inventors: Anne Matsuura, Sonika Johri, Justin Hogaboam
  • Patent number: 11169951
    Abstract: Systems and methods are provided for supporting a wide-protocol interface across a multi-die interconnect interface. Data signals of a wide-protocol interface are split into a plurality of data streams. A handshake signal is established between a first circuit and a second circuit, whereby the first circuit and second circuit are dies of a multi-die device. The first circuit transmits the plurality of data streams to the second circuit via a plurality of multi-die interconnect channels. Each data stream of the plurality of data streams is compressed based on the handshake signal in order to provide the wide-protocol interface with a reduced number of required pins.
    Type: Grant
    Filed: November 12, 2020
    Date of Patent: November 9, 2021
    Assignee: Altera Corporation
    Inventors: Gary Brian Wallichs, Keith Duwel, Cora Lynn Mau
  • Patent number: 11163609
    Abstract: A system and method of allocating memory to a thread of a multi-threaded program are disclosed. A method includes determining one or more thread-local blocks of memory that are available for the thread, and generating a count of the available one or more thread-local blocks for a thread-local freelist. If a thread-local block is available, allocating one block of the one or more thread-local blocks to the thread and decrementing the count in the thread-local freelist. When the count is zero, accessing a global freelist of available blocks of memory to determine a set of available blocks represented by the global freelist. Then, the set of available blocks are allocated from the global freelist to the thread-local freelist by copying one or more free block pointers of the global freelist to a thread-local state of the thread. Blocks can also be deallocated.
    Type: Grant
    Filed: April 2, 2019
    Date of Patent: November 2, 2021
    Assignee: SAP SE
    Inventor: Ivan Schreter
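    The allocation path described above, use the thread-local count while it is non-zero and then copy free block pointers from the global freelist into the thread-local state, can be sketched as follows. Class names, the batch size, and the use of a lock for the global list are illustrative assumptions.
```python
import threading

class GlobalFreelist:
    """Global pool of free block indices, shared by all threads."""
    def __init__(self, num_blocks, batch_size=4):
        self._lock = threading.Lock()
        self._free = list(range(num_blocks))
        self._batch_size = batch_size

    def take_batch(self):
        # Hand a batch of free block pointers to a thread-local freelist.
        with self._lock:
            batch, self._free = self._free[:self._batch_size], self._free[self._batch_size:]
            return batch

class ThreadLocalFreelist:
    """Per-thread freelist: allocate locally, refill from the global list when empty."""
    def __init__(self, global_freelist):
        self._global = global_freelist
        self._blocks = []   # copied free block pointers (thread-local state)
        self._count = 0     # count of available thread-local blocks

    def allocate(self):
        if self._count == 0:
            # Count is zero: copy free block pointers from the global freelist.
            self._blocks = self._global.take_batch()
            self._count = len(self._blocks)
            if self._count == 0:
                raise MemoryError("no free blocks")
        self._count -= 1
        return self._blocks.pop()

    def deallocate(self, block):
        self._blocks.append(block)
        self._count += 1

# Example: a thread allocates more blocks than its local list holds, forcing a refill.
glob = GlobalFreelist(num_blocks=8, batch_size=4)
local = ThreadLocalFreelist(glob)
print([local.allocate() for _ in range(6)])  # first 4 from the local batch, then a refill
```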
  • Patent number: 11100426
    Abstract: Systems and methods are disclosed to implement a distributed matrix decomposition system using gossip. In embodiments, the matrix decomposition system employs a scalable, parallel, and decentralized approach to divide an input matrix into a grid of blocks, and individually decompose the blocks into local decomposed matrices by communicating (gossiping) with a limited set of neighboring blocks. In embodiments, the decomposition may be implemented as an iterative process using Stochastic Gradient Descent, where the decomposed matrices are iteratively updated and kept in approximate agreement for neighboring blocks. The division of the input matrix allows the decomposition operation to be easily parallelized among nodes of a distributed computing system and scaled to suit the size of the input matrix. Moreover, the distributed approach eliminates the need for a central server, which in some systems may represent an operational bottleneck, a single point of failure, or a target for attacks.
    Type: Grant
    Filed: January 9, 2018
    Date of Patent: August 24, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Bamdev Mishra, Mukul Bhutani
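    A toy version of the gossip-based decomposition: the input matrix is split into a grid of blocks, each block runs a local SGD step on its own factor copies, and blocks sharing a row or column index average ("gossip") those copies to stay in approximate agreement. The update schedule and hyperparameters below are illustrative assumptions, not the patented procedure.
```python
import numpy as np

rng = np.random.default_rng(0)

def gossip_factorize(M, rank=4, grid=2, iters=300, lr=0.02):
    """Decentralized low-rank factorization sketch: M is approximated by U @ V.T."""
    m, n = M.shape
    rb, cb = m // grid, n // grid
    U_loc = [[rng.normal(scale=0.1, size=(rb, rank)) for _ in range(grid)] for _ in range(grid)]
    V_loc = [[rng.normal(scale=0.1, size=(cb, rank)) for _ in range(grid)] for _ in range(grid)]
    for _ in range(iters):
        # Local SGD step on every block (each block only touches its own data).
        for i in range(grid):
            for j in range(grid):
                block = M[i * rb:(i + 1) * rb, j * cb:(j + 1) * cb]
                U, V = U_loc[i][j], V_loc[i][j]
                err = block - U @ V.T
                U_loc[i][j] = U + lr * (err @ V)
                V_loc[i][j] = V + lr * (err.T @ U)
        # Gossip: blocks in the same block-row agree on U_i, same block-column on V_j.
        for i in range(grid):
            U_avg = sum(U_loc[i][j] for j in range(grid)) / grid
            for j in range(grid):
                U_loc[i][j] = U_avg
        for j in range(grid):
            V_avg = sum(V_loc[i][j] for i in range(grid)) / grid
            for i in range(grid):
                V_loc[i][j] = V_avg
    U = np.vstack([U_loc[i][0] for i in range(grid)])
    V = np.vstack([V_loc[0][j] for j in range(grid)])
    return U, V

M = rng.normal(size=(8, 4)) @ rng.normal(size=(4, 8))   # rank-4 test matrix
U, V = gossip_factorize(M)
print("relative error:", np.linalg.norm(M - U @ V.T) / np.linalg.norm(M))
```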
  • Patent number: 11087428
    Abstract: There is provided an image processing apparatus for performing image processing on an input image. Each of N processing modules refers to a processing result for a reference pixel different from a processing target pixel and generates a processing result for the processing target pixel. Each of the N processing modules generates a processing result for a first pixel included in a first pixel line and next generates a processing result for a second pixel. The second pixel is included in a second pixel line different from the first pixel line in the processing target region and becomes processable in accordance with the generation of the processing result for the first pixel.
    Type: Grant
    Filed: March 8, 2019
    Date of Patent: August 10, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Shigeo Kodama, Kohei Kishi
  • Patent number: 11074110
    Abstract: A computer-implemented method for scheduling a series of recurring events including: receiving one or more requests to allocate resource(s) to a series of recurring events, wherein the one or more requests specify, for each event, a corresponding desired time period over which the resource(s) are to be allocated, and the one or more requests further specify one or more adjustment criteria for defining, for one or more of the events, one or more permissibly adjusted time periods from the desired time period; obtaining, for each event, resource availability data indicative of an availability of the resource(s) during the desired time period; and, for each event: determining, based on the resource availability data, a viable time period, wherein the viable time period is either the desired time period or a permissibly adjusted time period that satisfies the one or more adjustment criteria; and allocating the resource(s) to the viable time period.
    Type: Grant
    Filed: September 9, 2020
    Date of Patent: July 27, 2021
    Assignee: Hubstar International Limited
    Inventors: Stefanos Vatidis, Denis Mequinion
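    The per-event logic above, try the desired time period first and otherwise fall back to a permissibly adjusted period that satisfies the adjustment criteria, can be sketched with a simple shift-based criterion. The shift window, step size, and availability callback are illustrative assumptions.
```python
from datetime import datetime, timedelta

def find_viable_period(desired_start, duration, is_available, max_shift_hours=2, step_minutes=30):
    """Return a viable time period for one event occurrence.

    Tries the desired period first, then probes shifted start times within
    +/- max_shift_hours (the "permissibly adjusted" periods). The shift-based
    adjustment criterion and all parameter names are illustrative assumptions;
    `is_available(start, end)` stands in for the resource availability data.
    """
    candidates = [timedelta(0)]
    step = timedelta(minutes=step_minutes)
    shift = step
    while shift <= timedelta(hours=max_shift_hours):
        candidates += [shift, -shift]   # closest adjustments first
        shift += step
    for delta in candidates:
        start = desired_start + delta
        if is_available(start, start + duration):
            return start, start + duration
    return None  # no viable period for this occurrence

# Example: a weekly one-hour meeting; the room is busy 09:00-10:00 in week 2.
busy = {(datetime(2024, 1, 8, 9, 0), datetime(2024, 1, 8, 10, 0))}
def room_free(start, end):
    return all(end <= b_start or start >= b_end for b_start, b_end in busy)

for week in range(3):
    desired = datetime(2024, 1, 1, 9, 0) + timedelta(weeks=week)
    print(find_viable_period(desired, timedelta(hours=1), room_free))
```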
  • Patent number: 11048656
    Abstract: Representative apparatus, method, and system embodiments are disclosed for configurable computing. A representative system includes an interconnection network; a processor; and a plurality of configurable circuit clusters. Each configurable circuit cluster includes a plurality of configurable circuits arranged in an array; a synchronous network coupled to each configurable circuit of the array; and an asynchronous packet network coupled to each configurable circuit of the array.
    Type: Grant
    Filed: March 31, 2019
    Date of Patent: June 29, 2021
    Assignee: Micron Technology, Inc.
    Inventor: Tony M. Brewer
  • Patent number: 11042412
    Abstract: A memory allocation method and a server are provided, wherein the method includes: identifying, by the server, a node topology table; generating fetch hop tables of the NUMA nodes based on the node topology table; calculating fetch priorities of the NUMA nodes based on their fetch hop tables, using an NC hop count as an important parameter in the fetch priority calculation; and, when a NUMA node applies for memory, allocating memory based on the fetch priority table, preferring NUMA nodes with higher priority.
    Type: Grant
    Filed: October 8, 2019
    Date of Patent: June 22, 2021
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Beilei Sun, Shengyu Shen, Jianrong Xu
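    A compact sketch of the allocation policy described above: fetch priorities are derived from hop tables in which NC hops are weighted heavily, and an allocation request falls through to the highest-priority node that still has free memory. The weight value and table layout are illustrative assumptions.
```python
def fetch_priorities(hop_table, nc_weight=10):
    """Rank NUMA nodes for one requesting node.

    hop_table maps target node -> (direct_hops, nc_hops). Weighting the NC hop
    count more heavily than ordinary hops reflects the abstract's "important
    parameter"; the exact weight is an illustrative assumption. Lower cost
    means higher fetch priority.
    """
    cost = {node: hops + nc_weight * nc_hops for node, (hops, nc_hops) in hop_table.items()}
    return sorted(cost, key=cost.get)

def allocate(requesting_node, size, hop_tables, free_mem):
    """Allocate `size` from the highest-priority node that still has free memory."""
    for node in fetch_priorities(hop_tables[requesting_node]):
        if free_mem.get(node, 0) >= size:
            free_mem[node] -= size
            return node
    raise MemoryError("no NUMA node can satisfy the request")

# Node 0's view: node 2 is close in plain hops but is reached through an NC.
hop_tables = {0: {0: (0, 0), 1: (2, 0), 2: (1, 1)}}
free_mem = {0: 0, 1: 4096, 2: 4096}     # the local node is exhausted
print(allocate(0, 1024, hop_tables, free_mem))  # -> 1 (fewer NC hops beats fewer total hops)
```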
  • Patent number: 10983795
    Abstract: Systems, apparatuses, and methods related to extended memory operations are described. Extended memory operations can include operations specified by a single address and operand and may be performed by a computing device that includes a processing unit and a memory resource. The computing device can perform extended memory operations on data streamed through the computing tile without receipt of intervening commands. In an example, a computing device is configured to receive a command to perform an operation that comprises performing an operation on data with the processing unit of the computing device and to determine that an operand corresponding to the operation is stored in the memory resource. The computing device can further perform the operation using the operand stored in the memory resource.
    Type: Grant
    Filed: March 27, 2019
    Date of Patent: April 20, 2021
    Assignee: Micron Technology, Inc.
    Inventors: Richard C. Murphy, Glen E. Hush, Vijay S. Ramesh, Allan Porterfield, Anton Korzh
  • Patent number: 10936303
    Abstract: The disclosed technology is generally directed to updating of applications, firmware and/or other software on IoT devices. In one example of the technology, a request that is associated with a requested update is communicated from a normal world of a first application processor to a secure world of the first application processor. The secure world validates the requested update. Instructions associated with the validated update are communicated from the secure world to the normal world. Image requests are sent from the normal world to a cloud service for image binaries associated with the validated update. The secure world receives the requested image binaries from the cloud service. The secure world writes the received image binaries to memory, and validates the written image binaries.
    Type: Grant
    Filed: September 10, 2019
    Date of Patent: March 2, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Adrian Bonar, Reuben R. Olinsky, Sang Eun Kim, Edmund B. Nightingale, Thales de Carvalho
  • Patent number: 10740331
    Abstract: The present invention relates to an apparatus and method for executing a query, and a system for processing data by using the same. The apparatus for executing a query includes: a processor receiving a query and returning a result value; and a storage storing data on the query. The storage includes: a first storage temporarily storing data required for the execution of the query; and a second storage constructing a DB and storing data. The processor combines a plurality of primitives in the query to configure a composite primitive, generates binary code for the composite primitive at run time, and executes the generated code.
    Type: Grant
    Filed: August 7, 2014
    Date of Patent: August 11, 2020
    Assignee: COUPANG CORP.
    Inventor: Hyunsik Choi
  • Patent number: 10725755
    Abstract: Systems, apparatuses, and methods for a hardware and software system to automatically decompose a program into multiple parallel threads are described. In some embodiments, the systems and apparatuses execute a method of original code decomposition and/or generated thread execution.
    Type: Grant
    Filed: June 6, 2017
    Date of Patent: July 28, 2020
    Assignee: Intel Corporation
    Inventors: David J. Sager, Ruchira Sasanka, Ron Gabor, Shlomo Raikin, Joseph Nuzman, Leeor Peled, Jason A. Domer, Ho-Seop Kim, Youfeng Wu, Koichi Yamada, Tin-Fook Ngai, Howard H. Chen, Jayaram Bobba, Jeffrey J. Cook, Omar M. Shaikh, Suresh Srinivas
  • Patent number: 10681125
    Abstract: A method of message-based communication is provided. The method includes executing, on one or more accelerated processing units, a plurality of groups of work items; receiving a first message from a first group of work items of the plurality of groups of work items executing on the one or more accelerated processing units; and storing the first message at a first segment of memory allocated to a second group of work items of the plurality of groups of work items executing on the one or more accelerated processing units.
    Type: Grant
    Filed: March 29, 2016
    Date of Patent: June 9, 2020
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventor: Shuai Che
  • Patent number: 10650322
    Abstract: Systems, computer-implemented methods, and computer program products to facilitate external port measurement of qubit port responses are provided. According to an embodiment, a system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise an analysis component that can analyze responses of a multi-mode readout device coupled to a qubit. The computer executable components can further comprise an assignment component that can assign a readout state of the qubit based on the responses. In some embodiments, the multi-mode readout device can be electrically coupled to at least one of the qubit or an environment of the qubit based on a defined electrical coupling value.
    Type: Grant
    Filed: December 13, 2018
    Date of Patent: May 12, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Paul Kristan Temme, Salvatore Bernardo Olivadese, Antonio Corcoles-Gonzalez, Jay M. Gambetta, Lev Samuel Bishop
  • Patent number: 10642761
    Abstract: An avionics system comprising a central processing unit to implement one or more hard real-time safety-critical applications, the central processing unit comprising a multi-core processor with a plurality of cores, avionics system software executable by the multi-core processor, a memory, and a common bus through which the multi-core processor can access the memory; the avionics system is characterized in that the avionics system software is designed to cause, when executed, the cores in the multi-core processor to access the memory through the common bus by sharing bus bandwidth according to assigned bus bandwidth shares.
    Type: Grant
    Filed: October 31, 2017
    Date of Patent: May 5, 2020
    Assignee: Leonardo S.P.A.
    Inventors: Marco Sozzi, Massimo Traversone
  • Patent number: 10635598
    Abstract: An embodiment of a semiconductor apparatus may include technology to determine one or more logical block addresses for a persistent storage media, determine one or more addresses for a physical memory space, and define a memory-mapped input/output region for the physical memory space with a direct mapping between the one or more addresses for the physical memory space and the one or more logical block addresses for the persistent storage media. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: March 30, 2018
    Date of Patent: April 28, 2020
    Assignee: Intel Corporation
    Inventors: Bryan Veal, Annie Foong
  • Patent number: 10620613
    Abstract: Techniques for controlling the operation of a process plant or several process plants within a process control system using a centralized or distributed controller farm allow for increased flexibility in the process control system. Any of the controllers in the controller farm may be utilized to execute modules corresponding to any of the field devices in one or several process plants. Control modules or other operations may be allocated amongst the controllers, distributing the load so that one controller is not performing several operations while others are inactive. Additionally, the controller farm may be located in a temperature-controlled room or area at an offsite location from the process plants. In some scenarios, load balancing techniques are performed to distribute the load for the modules equally or at least similarly amongst the controllers.
    Type: Grant
    Filed: August 21, 2017
    Date of Patent: April 14, 2020
    Assignee: FISHER-ROSEMOUNT SYSTEMS, INC.
    Inventors: Tiong P. Ong, Kent A. Burr, David R. Denison, Godfrey R. Sherriff, Gary Law, Brandon Hieb, David M. Smith
  • Patent number: 10579264
    Abstract: A memory system may include: a plurality of memory dies; and a controller suitable for identifying a dependency between first and second commands and a priority order of the first and the second commands through a check engine, and control the memory dies to sequentially perform first and second command operations in response to the first and second commands according to the dependency and the priority order.
    Type: Grant
    Filed: September 14, 2017
    Date of Patent: March 3, 2020
    Assignee: SK hynix Inc.
    Inventor: Dong-Sop Lee
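    The check engine's job, establishing a dependency-respecting, priority-ordered sequence of commands, amounts to a topological sort with a priority heap over the ready set. A generic sketch follows; it is not the controller's actual implementation, which drives memory dies rather than returning a list.
```python
import heapq

def order_commands(commands, depends_on):
    """Order commands so prerequisites run first and, among ready commands,
    higher priority (smaller number) runs first.

    commands: dict id -> priority; depends_on: dict id -> set of prerequisite ids.
    """
    remaining = {c: set(depends_on.get(c, ())) for c in commands}
    ready = [(commands[c], c) for c, deps in remaining.items() if not deps]
    heapq.heapify(ready)
    order = []
    while ready:
        _, cmd = heapq.heappop(ready)
        order.append(cmd)
        for other, deps in remaining.items():
            if cmd in deps:
                deps.remove(cmd)
                if not deps:
                    heapq.heappush(ready, (commands[other], other))
    return order

# A write must land before the read that depends on it, despite the read's priority.
commands = {"read_A": 0, "write_A": 5, "erase_B": 3}
depends_on = {"read_A": {"write_A"}}
print(order_commands(commands, depends_on))  # ['erase_B', 'write_A', 'read_A']
```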
  • Patent number: 10545815
    Abstract: A method for redistribution of job data in a first datanode (DN) to at least one additional DN in a Massively Parallel Processing (MPP) Database (DB) is provided. The method includes recording a snapshot of the job data, creating a first data portion in the first DN and a redistribution data portion in the first DN, collecting changes to a job data copy stored in a temporary table, and initiating transfer of the redistribution data portion to the at least one additional DN.
    Type: Grant
    Filed: August 3, 2016
    Date of Patent: January 28, 2020
    Assignee: Futurewei Technologies, Inc.
    Inventors: Le Cai, QingQing Zhou, Yang Sun
  • Patent number: 10509762
    Abstract: Systems, methods, and computer-readable media for transferring data between a host platform and modem circuitry are provided. At low data rates, data may be stored by on-chip memory, and data may be transferred from the on-chip memory to the host platform over an interconnect (IX) when a first aggregation period expires. At medium data rates, data may be stored in both the on-chip memory and in-package or off-chip memory, and the data may be transferred from the on-chip memory and off-chip memory to the host platform over the IX when a second aggregation period expires. At high data rates, the on-chip memory may serve as an elastic buffer, and the data may be streamed directly through the on-chip memory to the host platform over the IX. Other embodiments are described and/or claimed.
    Type: Grant
    Filed: April 30, 2018
    Date of Patent: December 17, 2019
    Assignee: Intel IP Corporation
    Inventors: Pavel Peleska, Reinhold Schneider
  • Patent number: 10502781
    Abstract: A detection circuit, a detection method, and an electronic system for detecting an I/O output status are provided. The detection circuit includes a comparison-window generating circuit configured to: detect an I/O data signal, generate a first single pulse signal determining a first time window in response to a rising edge of the I/O data signal, and generate a second single pulse signal determining a second time window in response to a falling edge of the I/O data signal. A first comparison circuit is configured to: receive the first single pulse signal, and compare the I/O drive signal with a preset high-level reference signal within the first time window to obtain a first comparison result. A second comparison circuit is configured to: receive the second single pulse signal, and compare the I/O drive signal with a preset low-level reference signal within the second time window to obtain a second comparison result.
    Type: Grant
    Filed: January 18, 2018
    Date of Patent: December 10, 2019
    Assignees: Semiconductor Manufacturing International (Shanghai) Corporation, Semiconductor Manufacturing International (Beijing) Corporation
    Inventors: Zhen Ye Guo, Zhen Jiang Su
  • Patent number: 10394615
    Abstract: An information processing apparatus takes each currently executing job as a candidate job, and when determining that a migration of a candidate job to a migration destination node selected from free nodes, which are not executing any jobs, is expected to expand a contiguous range of free nodes, specifies the migration as a possible migration. Then, on the basis of the amounts of communication needed to perform individual migrations based on a plurality of possible migrations and the numbers of nodes used for executing candidate jobs to be migrated in the individual possible migrations, the information processing apparatus determines a possible migration to be performed.
    Type: Grant
    Filed: February 9, 2017
    Date of Patent: August 27, 2019
    Assignee: FUJITSU LIMITED
    Inventor: Hiroki Yokota
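    Once the possible migrations are identified, the final choice weighs the communication volume of each migration against the number of nodes its candidate job occupies. The sketch below applies one plausible ranking; the patent's actual selection rule is not given in the abstract.
```python
def choose_migration(possible_migrations):
    """Pick one migration from the possible migrations.

    Each entry is (job_id, comm_bytes, nodes_used). Ranking by smallest
    communication volume, breaking ties in favor of the job occupying more
    nodes (more free space gained per byte moved), is one plausible reading,
    not the patented rule.
    """
    return min(possible_migrations, key=lambda m: (m[1], -m[2]))

candidates = [
    ("job-7",  2 * 1024**3, 16),   # 2 GiB to move, frees a 16-node region
    ("job-12", 512 * 1024**2, 4),  # 512 MiB to move, frees a 4-node region
]
print(choose_migration(candidates))  # ('job-12', ...): far less data to transfer
```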
  • Patent number: 10310588
    Abstract: In an embodiment, a processor includes a plurality of cores each to independently execute instructions, a power delivery logic coupled to the plurality of cores, and a power controller including a first logic to cause a first core to enter into a first low power state of an operating system power management scheme independently of the OS, during execution of at least one thread on the first core. Other embodiments are described and claimed.
    Type: Grant
    Filed: October 18, 2016
    Date of Patent: June 4, 2019
    Assignee: Intel Corporation
    Inventors: Ankush Varma, Krishnakanth V. Sistla, Allen W. Chu, Ian M. Steiner
  • Patent number: 10304156
    Abstract: A method is described. The method includes repeatedly loading a next sheet of image data from a first location of a memory into a two-dimensional shift register array. The memory is locally coupled to the two-dimensional shift register array and an execution lane array having a smaller dimension than the two-dimensional shift register array along at least one array axis. The loaded next sheet of image data keeps within an image area of the two-dimensional shift register array. The method also includes repeatedly determining output values for the next sheet of image data through execution of program code instructions along respective lanes of the execution lane array, wherein a stencil size used in determining the output values encompasses only pixels that reside within the two-dimensional shift register array.
    Type: Grant
    Filed: June 16, 2017
    Date of Patent: May 28, 2019
    Assignee: Google LLC
    Inventors: Albert Meixner, Hyunchul Park, Qiuling Zhu, Jason Rupert Redgrave
  • Patent number: 10261831
    Abstract: Embodiments include computing devices, apparatus, and methods implemented by the apparatus for implementing speculative loop iteration partitioning (SLIP) for heterogeneous processing devices. A computing device may receive iteration information for a first partition of iterations of a repetitive process and select a SLIP heuristic based on available SLIP information and iteration information for the first partition. The computing device may determine a split value for the first partition using the SLIP heuristic, and partition the first partition using the split value to produce a plurality of next partitions.
    Type: Grant
    Filed: August 24, 2016
    Date of Patent: April 16, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Arun Raman, Han Zhao, Aravind Natarajan
  • Patent number: 10235102
    Abstract: Methods, systems, and computer readable media for submission queue pointer management are disclosed. One method is implemented in a data storage device including a controller and a memory. The method includes fetching a plurality of commands from a submission queue. The method further includes parsing at least one of the commands. The method further includes, in response to successful parsing of at least one of the commands and prior to executing all of the commands, notifying a host to advance a head entry pointer for the submission queue by a number of entries corresponding to a number of the commands successfully parsed.
    Type: Grant
    Filed: November 1, 2015
    Date of Patent: March 19, 2019
    Assignee: SanDisk Technologies LLC
    Inventors: Elkana Richter, Shay Benisty, Tal Sharifie
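    The key point above is that the submission queue head pointer advances as soon as commands parse successfully, before they execute, so the host can reuse those queue slots earlier. A simplified sketch follows (dict-based commands, no wrap-around handling); the names and command format are illustrative assumptions.
```python
class SubmissionQueue:
    """Host-side submission queue with a head pointer the device advances."""
    def __init__(self, entries):
        self.entries = entries
        self.head = 0                      # next entry the device has not consumed
        self.tail = len(entries)           # host has queued all entries

def parse(cmd):
    # Stand-in for command parsing; returns False on a malformed command.
    return isinstance(cmd, dict) and "opcode" in cmd

def fetch_and_parse(sq, batch):
    """Fetch `batch` commands, parse them, and notify the host (advance the head
    pointer) by the number successfully parsed, before any command executes."""
    fetched = sq.entries[sq.head:sq.head + batch]
    parsed = []
    for cmd in fetched:
        if not parse(cmd):
            break
        parsed.append(cmd)
    sq.head += len(parsed)                 # host may now reuse these slots
    return parsed                          # execution of the commands happens later

sq = SubmissionQueue([{"opcode": "read", "lba": 0}, {"opcode": "write", "lba": 8}])
ready = fetch_and_parse(sq, batch=2)
print(sq.head, [c["opcode"] for c in ready])  # 2 ['read', 'write']
```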
  • Patent number: 10049133
    Abstract: Techniques are described for managing the execution of one or more groups of queries. Embodiments of the present disclosure may generally receive a group of queries to be executed against a database. Embodiments also determine, based on one or more attributes of the group of queries, an expected amount of resources that will be used in executing the group of queries against the database. Embodiments further schedule one or more queries of the group of queries for execution against the database based on the expected amount of resources to be used for the group of queries.
    Type: Grant
    Filed: October 27, 2016
    Date of Patent: August 14, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Eric L. Barsness, Daniel E. Beuch, Alexander Cook, Brian R. Muras, John M. Santosuosso
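    A minimal sketch of the scheduling idea above: estimate the expected resource cost of a query group from its attributes, then admit groups only while the running total fits a resource budget. The cost model, attributes, and budget are illustrative assumptions, not the patented estimator.
```python
def expected_resources(group):
    """Rough per-query cost estimate from group attributes (illustrative model:
    cost grows with the number of joined tables and scanned rows)."""
    return group["tables_joined"] * group["rows_scanned"] // 1000

def schedule(groups, budget):
    """Admit whole query groups while their summed expected cost fits the budget;
    the rest wait. A simple admission rule standing in for the patented scheduler."""
    running, waiting, used = [], [], 0
    for name, group in groups.items():
        cost = expected_resources(group) * group["num_queries"]
        if used + cost <= budget:
            running.append(name)
            used += cost
        else:
            waiting.append(name)
    return running, waiting

groups = {
    "nightly-reports": {"num_queries": 10, "tables_joined": 4, "rows_scanned": 50_000},
    "ad-hoc":          {"num_queries": 2,  "tables_joined": 2, "rows_scanned": 5_000},
}
print(schedule(groups, budget=2010))  # (['nightly-reports'], ['ad-hoc'])
```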
  • Patent number: 10019292
    Abstract: A method for executing a comprehensive real-time computer application including application software comprising a description of functions on a distributed real-time computer system including sensors, actuators, computing nodes, and distributor units having access to a global time. The application software includes a number of real-time software components (RTSWCs). When executed, the RTSWCs exchange information by time-triggered messages. Each RTSWC is allocated a time-triggered virtual machine TTVM, wherein, during a service interval SI, an operating system running on a computing node provides a TTVM realized on the computing node with protected access to the network resources and memory resources of the computing node assigned to the TTVM, and wherein, during the SI, a defined computing power for processing the RTSWCs running in the TTVM is allocated to the TTVM by the operating system of the computing node such that the RTSWCs provide a result before the end of the SI.
    Type: Grant
    Filed: January 27, 2016
    Date of Patent: July 10, 2018
    Assignee: FTS COMPUTERTECHNIK GMBH
    Inventors: Hermann Kopetz, Stefan Poledna
  • Patent number: 9990607
    Abstract: A low-latency, high-bandwidth, and highly scalable method delivers data from a source device to multiple communication devices on a communication network. Under this method, the communication devices (also called player nodes) provide download and upload bandwidths for each other. In this manner, the bandwidth requirement on the data source is significantly reduced. Such a data delivery network scales without limit as the number of player nodes grows. In one embodiment, a computer network includes (a) a source server that provides a data stream for delivery in the computer network, (b) player nodes that exchange data with each other to obtain a complete copy of the data stream, the player nodes being capable of dynamically joining or exiting the computer network, and (c) a control server which maintains a topology graph representing connections between the source server and the player nodes, and the connections among the player nodes themselves.
    Type: Grant
    Filed: January 12, 2007
    Date of Patent: June 5, 2018
    Inventor: Wensheng Hua
  • Patent number: 9916226
    Abstract: A system of testing software is provided. The system comprises a first hardware system having hardware components to execute a first version of the software, and additionally comprises a second hardware system having hardware components to execute a second version of the software. Here, the first version of the software and the second version are different. In addition, the system includes a device configured to test the first hardware system and the second hardware system by providing first input data traffic to the first hardware system, providing second input data traffic to the second hardware system, and accessing performance values from the first hardware system and the second hardware system to evaluate a performance comparison between the first hardware system executing the first version of the software and the second hardware system executing the second version of the software.
    Type: Grant
    Filed: May 27, 2014
    Date of Patent: March 13, 2018
    Assignee: eBay Inc.
    Inventors: Jayaram Singonahalli, Darrin Curtis Alves, Douglas Ray Woolard
  • Patent number: 9911092
    Abstract: Various embodiments of the present invention provide systems and methods for enabling design, generation, and execution of real-time workflows. Such embodiments provide a graphical designer including a plurality of shapes representing the various objects of a workflow that are used to model the workflow. In addition, various embodiments of the graphical designer provide shapes to model aspects of the workflow not found in previous graphical designers. Various embodiments also provide a code generator that converts the representation of the workflow into executable code for multiple target languages. Various embodiments also provide a workflow engine based on a Petri net model responsible for executing the workflow and for delegating tasks to be performed for the workflow to an operating system. In various embodiments, the workflow engine further includes a platform abstraction layer that provides a transition layer from the Petri net language to the operating system language.
    Type: Grant
    Filed: March 4, 2014
    Date of Patent: March 6, 2018
    Assignee: UNITED PARCEL SERVICE OF AMERICA, INC.
    Inventor: Asheesh Goja
  • Patent number: 9886072
    Abstract: Systems and methods are provided for reducing power consumption of a multi-die device, such as a network processor FPGA (npFPGA). The multi-die device may include hardware resources such as FPGA dies, which may be coupled to NIC dies and/or memory dies. Power consumption of the multi-die device may be reduced by monitoring usage of hardware resources in the multi-die device, identifying hardware resources that are not in use, and gating power to the identified hardware resources. The status of processing elements (PEs) in the multi-die device may be tracked in a PE state table. Based on the PE state table, tasks from a task queue may be assigned to one or more processing elements.
    Type: Grant
    Filed: June 19, 2013
    Date of Patent: February 6, 2018
    Assignee: ALTERA CORPORATION
    Inventor: Krishnan Venkataraman
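    The power-reduction loop above tracks processing-element state, assigns queued tasks to idle PEs, and gates power to hardware resources that end up with no busy PEs. A small sketch of that bookkeeping follows; the two-level PE/die model and all names are illustrative assumptions.
```python
from collections import deque

# Processing-element (PE) state table: PE id -> {"die": ..., "busy": bool}.
pe_state = {
    "pe0": {"die": "fpga0", "busy": False},
    "pe1": {"die": "fpga0", "busy": False},
    "pe2": {"die": "fpga1", "busy": False},
}
task_queue = deque(["parse-hdr", "checksum"])

def assign_tasks():
    """Assign queued tasks to idle PEs, marking them busy in the state table."""
    assignments = {}
    for pe, state in pe_state.items():
        if not task_queue:
            break
        if not state["busy"]:
            state["busy"] = True
            assignments[pe] = task_queue.popleft()
    return assignments

def dies_to_power_gate():
    """A die with no busy PEs is not in use and can be power gated."""
    busy_dies = {s["die"] for s in pe_state.values() if s["busy"]}
    all_dies = {s["die"] for s in pe_state.values()}
    return all_dies - busy_dies

print(assign_tasks())         # both tasks land on fpga0's PEs
print(dies_to_power_gate())   # {'fpga1'} is idle and can be gated off
```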
  • Patent number: 9779470
    Abstract: An image processing system is described herein in which a multi-line processing block has multiple inputs and multiple outputs. In order to provide the multiple outputs the multi-line processing block has multiple processing units operating in parallel on the multiple inputs. The multiple outputs of the multi-line processing block are coupled to corresponding multiple inputs of a subsequent multi-line processing block in the image processing system.
    Type: Grant
    Filed: January 19, 2017
    Date of Patent: October 3, 2017
    Assignee: Imagination Technologies Limited
    Inventors: Michael Bishop, Morgyn Taylor
  • Patent number: 9760474
    Abstract: Novel tools and techniques are provided for implementing green software applications and/or certifying software applications with a green applications efficiency ("GAE") rating. Implementing green software applications might include performing performance tests of a software application, measuring power consumption of one or more hardware components in response to execution of the software application during the one or more performance tests, generating a power consumption profile for the software application based on the measured power consumption, and tuning the software application such that power consumption of the one or more hardware components matches a power load caused by execution of the software application, based at least in part on the power consumption profile for the software application.
    Type: Grant
    Filed: March 24, 2015
    Date of Patent: September 12, 2017
    Assignee: CenturyLink Intellectual Property LLC
    Inventors: Vishak Shanmugam Pillai, Darshan Sonbarse, Viswanath Seetharam, Manoj U P
  • Patent number: 9660865
    Abstract: A system for gradually implementing network services to end users includes substantially redundant first and second control networks, connectable to the end users through a routable communications network. The first control network provides a first service capability to all the end users. The second control network provides a second service capability to a first portion of the end users, the second service capability replacing the first service capability of the first portion of the end users. The second control network subsequently provides the second service capability to a second portion of the end users, while continuing to provide the second service capability to the first portion, the second service capability replacing the first service capability of the second portion of the end users. The second service capability provided to the second portion of the end users may include revisions based on feedback from the first portion of end users.
    Type: Grant
    Filed: September 23, 2013
    Date of Patent: May 23, 2017
    Assignee: TIME WARNER CABLE INC.
    Inventors: Scott W. Ramsdell, Chris A. Cholas
  • Patent number: 9645982
    Abstract: A method for loading a web page is provided. Primary executable script is asynchronously loaded. Commands associated with the primary executable script are pushed onto a first queue and processed by asynchronously loading secondary executable script if the command is a dependency command and pushing the dependency command onto a second queue; registering secondary executable script referenced in the command if the command is a fulfillment command; and pushing the command onto the second queue if the command is neither a dependency nor a fulfillment command. Commands in the second queue are processed by, if the command is a dependency command, determining if the secondary executable script referenced in the dependency command is registered, and associating the secondary executable script with an object if the secondary executable script is registered. If the command is not a dependency command, then the command is executed and removed from the second queue.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: May 9, 2017
    Assignee: Google Inc.
    Inventors: Bradley David Townsend, Brian Kuhn, Xin Liu
  • Patent number: 9558152
    Abstract: A synchronization method is executed by a multi-core processor system. The synchronization method includes registering based on a synchronous command issued from a first CPU, CPUs to be synchronized and a count of the CPUs into a specific table; counting by each of the CPUs and based on a synchronous signal from the first CPU, an arrival count for a synchronous point, and creating by each of the CPUs, a second shared memory area that is a duplication of a first shared memory area accessed by processes executed by the CPUs; and comparing the first shared memory area and the second shared memory area when the arrival count becomes equal to the count of the CPUs, and based on a result of the comparison, judging the processes executed by the CPUs.
    Type: Grant
    Filed: September 13, 2013
    Date of Patent: January 31, 2017
    Assignee: FUJITSU LIMITED
    Inventors: Koichiro Yamashita, Hiromasa Yamauchi, Takahisa Suzuki, Koji Kurihara
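    A rough single-machine sketch of the synchronization flow above: CPUs are registered with a count, each one duplicates the shared memory area and works on its copy, an arrival count is incremented at the synchronous point, and once it equals the registered count the areas are compared to judge the processes. The comparison rule used below is an illustrative stand-in, not the patent's.
```python
import threading

NUM_CPUS = 3
sync_table = {"count": NUM_CPUS, "arrived": 0}   # registration from the synchronous command
lock = threading.Lock()

shared_area = [0, 0, 0, 0]      # first shared memory area
duplicates = {}                 # per-CPU second shared memory areas

def worker(cpu_id, faulty=False):
    # Each CPU duplicates the shared area and performs its processing on the copy.
    local = list(shared_area)
    local[cpu_id] = -999 if faulty else cpu_id + 1
    with lock:
        duplicates[cpu_id] = local
        sync_table["arrived"] += 1              # arrival count for the synchronous point

threads = [threading.Thread(target=worker, args=(i, i == 2)) for i in range(NUM_CPUS)]
for t in threads: t.start()
for t in threads: t.join()

# The arrival count now equals the registered CPU count: compare the areas and
# judge the processes (an expected-value check stands in for the comparison rule).
assert sync_table["arrived"] == sync_table["count"]
for cpu_id, copy in sorted(duplicates.items()):
    expected = list(shared_area)
    expected[cpu_id] = cpu_id + 1
    print(f"CPU {cpu_id}: {'ok' if copy == expected else 'mismatch detected'}")
```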
  • Patent number: 9529640
    Abstract: A network processor includes a schedule, sync and order (SSO) module for scheduling and assigning work to multiple processors. The SSO includes an on-deck unit (ODU) that provides a table having several entries, each entry storing a respective work queue entry, and a number of lists. Each of the lists may be associated with a respective processor configured to execute the work, and includes pointers to entries in the table. A pointer is added to a list based on an indication of whether the associated processor accepts the work queue entry (WQE) corresponding to the pointer.
    Type: Grant
    Filed: March 27, 2015
    Date of Patent: December 27, 2016
    Assignee: Cavium, Inc.
    Inventors: David Kravitz, Daniel E. Dever, Wilson P. Snyder, II
  • Patent number: 9495205
    Abstract: Constructing a logical tree topology in a parallel computer that includes compute nodes, where each compute node includes a hardware acceleration unit and executes an identical number of tasks and the tasks of each node have a rank, includes: creating hardware acceleration groups, with each hardware acceleration group including one task from each node, where the one task from each node has the same rank; assigning one task of a root compute node as a global root of the logical tree topology; assigning tasks of the root compute node other than the global root as local children of the global root; and assigning each of the global root and local children of the root compute node as a root of a subtree of tasks, wherein each subtree comprises the tasks of a hardware acceleration group.
    Type: Grant
    Filed: April 30, 2014
    Date of Patent: November 15, 2016
    Assignee: International Business Machines Corporation
    Inventors: Charles J. Archer, Nysal Jan K. A., Sameh S. Sharkawi
  • Patent number: 9495204
    Abstract: Constructing a logical tree topology in a parallel computer that includes compute nodes, where each compute node includes a hardware acceleration unit and executes an identical number of tasks and the tasks of each node have a rank, includes: creating hardware acceleration groups, with each hardware acceleration group including one task from each node, where the one task from each node has the same rank; assigning one task of a root compute node as a global root of the logical tree topology; assigning tasks of the root compute node other than the global root as local children of the global root; and assigning each of the global root and local children of the root compute node as a root of a subtree of tasks, wherein each subtree comprises the tasks of a hardware acceleration group.
    Type: Grant
    Filed: January 6, 2014
    Date of Patent: November 15, 2016
    Assignee: International Business Machines Corporation
    Inventors: Charles J. Archer, Nysal Jan K. A., Sameh S. Sharkawi
  • Patent number: 9465743
    Abstract: Embodiments of the present invention disclose a method for accessing a cache and a pseudo cache agent (PCA). The method of the present invention is applied to a multiprocessor system, where the system includes at least one NC, at least one PCA conforming to a processor micro-architecture level interconnect protocol is embedded in the NC, the PCA is connected to at least one PCA storage device, and the PCA storage device stores data shared among memories in the multiprocessor system. The method of the present invention includes: if the NC receives a data request, obtaining, by the PCA, target data required in the data request from the PCA storage device connected to the PCA; and sending the target data to a sender of the data request. Embodiments of the present invention are mainly applied to a process of accessing cache data in the multiprocessor system.
    Type: Grant
    Filed: December 19, 2012
    Date of Patent: October 11, 2016
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Wei Zheng, Jiangen Liu, Gang Liu, Weiguang Cai
  • Patent number: 9436613
    Abstract: A central processing unit, connected to a main memory among a plurality of central processing units each including a cache memory, includes a control unit. The control unit executes a process including: classifying the plurality of central processing units into a number of groups smaller than the total number of the plurality of central processing units, and writing to the main memory presence information indicating whether or not the same data as data stored in the main memory is held in a cache memory included in any of the central processing units that belong to a corresponding central processing unit group, for each central processing unit group of a plurality of central processing unit groups obtained by the classifying.
    Type: Grant
    Filed: January 16, 2013
    Date of Patent: September 6, 2016
    Assignee: FUJITSU LIMITED
    Inventors: Go Sugizaki, Naoya Ishimura