Patents Examined by William B Partridge
  • Patent number: 11030146
    Abstract: The execution engine is a new organization for a digital data processing apparatus, suitable for highly parallel execution of structured fine-grain parallel computations. The execution engine includes a memory for storing data and a domain flow program, a controller for requesting the domain flow program from the memory, and further for translating the program into programming information, a processor fabric for processing the domain flow programming information and a crossbar for sending tokens and the programming information to the processor fabric.
    Type: Grant
    Filed: April 1, 2019
    Date of Patent: June 8, 2021
    Assignee: Stillwater Supercomputing, Inc.
    Inventor: Erwinus Theodorus Leonardus Omtzigt
  • Patent number: 11023232
    Abstract: In an embodiment, the present invention includes a processor having an execution logic to execute instructions and a control transfer termination (CTT) logic coupled to the execution logic. This logic is to cause a CTT fault to be raised if a target instruction of a control transfer instruction is not a CTT instruction. Other embodiments are described and claimed.
    Type: Grant
    Filed: March 13, 2019
    Date of Patent: June 1, 2021
    Assignee: Intel Corporation
    Inventors: Vedvyas Shanbhogue, Jason W. Brandt, Uday Savagaonkar, Ravi L. Sahita
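    The following is a minimal sketch of the check described in the abstract above, assuming a toy instruction stream in which valid control-transfer targets are marked with a hypothetical CTT_LANDING opcode; it illustrates only the fault condition and is not Intel's actual implementation.
```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical opcodes; CTT_LANDING marks a valid control-transfer target
   (analogous to the CTT instruction in the abstract). */
enum opcode { OP_ADD, OP_JMP_INDIRECT, CTT_LANDING };

/* Raise a CTT fault if the instruction at the target of a control transfer
   is not a CTT instruction. */
static void check_control_transfer(const enum opcode *code, size_t target) {
    if (code[target] != CTT_LANDING) {
        fprintf(stderr, "CTT fault: target %zu is not a CTT instruction\n", target);
        exit(1);
    }
}

int main(void) {
    enum opcode program[] = {OP_JMP_INDIRECT, CTT_LANDING, OP_ADD};
    check_control_transfer(program, 1);  /* ok: lands on a CTT instruction     */
    check_control_transfer(program, 2);  /* faults: lands on a plain ADD       */
    return 0;
}
```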
  • Patent number: 11016778
    Abstract: Techniques are provided for vectorizing Heapsort. A K-heap is used as the underlying data structure for indexing values being sorted. The K-heap is vectorized by storing values in a contiguous memory array containing a beginning-most side and end-most side. The vectorized Heapsort utilizes horizontal aggregation SIMD instructions for comparisons, shuffling, and moving data. Thus, the number of comparisons required to find the maximum or minimum key value within a single node of the K-heap is reduced, resulting in faster retrieval operations.
    Type: Grant
    Filed: March 12, 2019
    Date of Patent: May 25, 2021
    Assignee: Oracle International Corporation
    Inventors: Benjamin Schlegel, Pit Fender, Harshad Kasture, Matthias Brantner, Hassan Chafi
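    A scalar sketch of the K-heap layout the abstract describes: values live in a contiguous array and a node's children are located by index arithmetic. The plain loop below stands in for the horizontal-aggregation SIMD comparison the patent vectorizes; the fan-out K and the array contents are illustrative.
```c
#include <stdio.h>

#define K 4  /* fan-out of the K-heap; chosen for illustration only */

/* Index of the smallest value among the children of node i in a contiguous
   array-backed K-heap (children of i live at K*i+1 .. K*i+K). A vectorized
   version would replace this loop with horizontal-min SIMD instructions. */
static int min_child(const int *heap, int n, int i) {
    int first = K * i + 1, best = -1;
    for (int c = first; c < first + K && c < n; c++)
        if (best < 0 || heap[c] < heap[best])
            best = c;
    return best;
}

int main(void) {
    int heap[] = {1, 3, 5, 7, 9, 4, 6, 8, 2};
    int n = sizeof heap / sizeof heap[0];
    int c = min_child(heap, n, 0);
    printf("smallest child of root: heap[%d] = %d\n", c, heap[c]);
    return 0;
}
```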
  • Patent number: 11010160
    Abstract: A data processor comprising a plurality of registers, and instruction execution circuitry having an associated instruction set, wherein the instruction set includes an instruction specifying at least a mask operand, a register operand and an immediate value operand, and the instruction execution circuitry, in response to an instance of the instruction, determines a Boolean value based on the mask operand and sets a respective one of a plurality of registers specified by the register operand of the instance to a value of the immediate value operand if the Boolean value is true. The instruction execution circuitry, in response to the instance of the instruction, may set the respective one of the plurality of registers specified by the register operand of the instance to zero if the Boolean value is false.
    Type: Grant
    Filed: February 21, 2019
    Date of Patent: May 18, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Wolfgang Gellerich, Martin Schwidefsky, Chung-Lung K. Shum, Kai Weber
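    A minimal model of the instruction semantics described above, with an assumed mask-to-Boolean rule ("any bit set") and invented register-file names; the patented encoding and mask interpretation may differ.
```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical model of the instruction in the abstract: derive a Boolean
   from the mask operand and either load the immediate into the selected
   register or clear it. */
static void mask_set_immediate(uint64_t regs[], int reg, uint64_t mask,
                               uint64_t imm) {
    int boolean = (mask != 0);          /* Boolean value derived from the mask */
    regs[reg] = boolean ? imm : 0;      /* immediate if true, zero otherwise   */
}

int main(void) {
    uint64_t regs[16] = {0};
    mask_set_immediate(regs, 3, 0x01, 42);  /* mask true  -> r3 = 42 */
    mask_set_immediate(regs, 4, 0x00, 42);  /* mask false -> r4 = 0  */
    printf("r3=%llu r4=%llu\n",
           (unsigned long long)regs[3], (unsigned long long)regs[4]);
    return 0;
}
```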
  • Patent number: 10997115
    Abstract: Systems and methods of propagating data within an integrated circuit include: identifying a coarse data propagation path for distinct subsets of data of an input dataset that includes: setting inter-core data movements for the distinct subsets of data, the inter-core data movements defining a predetermined propagation of a given subset of data between two or more of a plurality of cores of an integrated circuit array of the integrated circuit; identifying a granular data propagation path for each distinct subset of data that includes: setting intra-core data movements for each distinct subset of data, the intra-core data movements defining a predetermined propagation of the given subset of data within one or more of the plurality of cores of the integrated circuit array of the integrated circuit; and enabling a flow of the input dataset within the integrated circuit based on the coarse data propagation path and the granular propagation path.
    Type: Grant
    Filed: March 5, 2019
    Date of Patent: May 4, 2021
    Assignee: quadric.io, Inc.
    Inventors: Nigel Drego, Aman Sikka, Ananth Durbha, Mrinalini Ravichandran, Robert Daniel Firu, Veerbhan Kheterpal
  • Patent number: 10990405
    Abstract: A computer system includes a branch detection module and a branch predictor module. The branch detection module determines that a first program branch is a possible call branch having a next sequential instruction address (NSIA), and determines that a first routine branch is a possible return capable branch having a first routine instruction address that is detected as being offset from the NSIA of the first program branch. The branch predictor module determines that a second program branch is a possible call branch having an NSIA, and determines that a second routine branch is a predicted return branch having a predicted target instruction address based on the NSIA of the second program branch and the predicted offset.
    Type: Grant
    Filed: February 19, 2019
    Date of Patent: April 27, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Adam Collura, James Bonanno, Steven J. Hnatko, Brian Robert Prasky, Daniel Lipetz
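    A toy sketch of the prediction idea in the abstract: once a return is observed to land at a fixed offset from a call's next sequential instruction address (NSIA), later calls predict their return target as NSIA plus that offset. The single-entry history and the names below are assumptions.
```c
#include <stdint.h>
#include <stdio.h>

static uint64_t learned_offset;   /* offset detected by the branch detection module */
static int      offset_valid;

/* Train on an observed call/return pair: record how far past the NSIA
   the return actually landed. */
static void train(uint64_t nsia, uint64_t actual_return_target) {
    learned_offset = actual_return_target - nsia;
    offset_valid = 1;
}

/* Predict the return target of a later call using its NSIA and the
   previously detected offset; fall back to the NSIA if nothing is learned. */
static uint64_t predict_return(uint64_t nsia) {
    return offset_valid ? nsia + learned_offset : nsia;
}

int main(void) {
    train(0x1004, 0x100C);   /* return landed 8 bytes past the NSIA */
    printf("predicted return target: 0x%llx\n",
           (unsigned long long)predict_return(0x2004));  /* -> 0x200C */
    return 0;
}
```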
  • Patent number: 10977033
    Abstract: The present disclosure includes apparatuses and methods related to mask patterns generated in memory from seed vectors. An example method includes performing operations on a plurality of data units of a seed vector and generating, by performance of the operations, a vector element in a mask pattern.
    Type: Grant
    Filed: March 25, 2016
    Date of Patent: April 13, 2021
    Assignee: Micron Technology, Inc.
    Inventor: Jeremiah J. Willcock
  • Patent number: 10963265
    Abstract: Examples described herein include systems and methods which include an apparatus comprising a plurality of configurable logic units and a plurality of switches, with each switch being coupled to at least one configurable logic unit of the plurality of configurable logic units. The apparatus further includes an instruction register configured to provide respective switch instructions of a plurality of switch instructions to each switch based on a computation to be implemented among the plurality of configurable logic units. For example, the switch instructions may include allocating the plurality of configurable logic units to perform the computation and activating an input of the switch and an output of the switch to couple at least a first configurable logic unit and a second configurable logic unit. In various embodiments, configurable logic units can include arithmetic logic units (ALUs), bit manipulation units (BMUs), and multiplier-accumulator units (MACs).
    Type: Grant
    Filed: April 21, 2017
    Date of Patent: March 30, 2021
    Assignee: Micron Technology, Inc.
    Inventors: Fa-Long Luo, Tamara Schmitz, Jeremy Chritz, Jaime Cummins
  • Patent number: 10942748
    Abstract: In an embodiment, a method for processing instructions in a microcontroller is disclosed. In the embodiment, the method involves, upon receipt of an interrupt while an instruction is being executed, completing execution of the instruction by a shadow functional unit and, upon servicing the interrupt, terminating re-execution of the instruction and updating a main register file with the result of the execution of the instruction by the shadow functional unit.
    Type: Grant
    Filed: July 16, 2015
    Date of Patent: March 9, 2021
    Assignee: NXP B.V.
    Inventors: Surendra Guntur, Sebastien Antonius Josephus Fabrie, Jose de Jesus Pineda de Gyvez
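    An illustrative sketch, under invented structure names, of the commit path the abstract describes: a shadow functional unit finishes the interrupted instruction so its result can update the main register file after the interrupt is serviced, instead of re-executing the instruction.
```c
#include <stdio.h>

typedef struct { int result; int valid; } shadow_unit_t;

static int main_reg;            /* one entry standing in for the register file */
static shadow_unit_t shadow;

/* On interrupt arrival, the shadow functional unit completes the in-flight
   instruction (an add, in this toy example). */
static void on_interrupt(int operand_a, int operand_b) {
    shadow.result = operand_a + operand_b;
    shadow.valid = 1;
}

/* After servicing the interrupt, commit the shadow result and skip
   re-execution of the interrupted instruction. */
static void after_interrupt_service(void) {
    if (shadow.valid) {
        main_reg = shadow.result;
        shadow.valid = 0;
    }
}

int main(void) {
    on_interrupt(2, 3);
    after_interrupt_service();
    printf("r0 = %d\n", main_reg);  /* 5, without re-executing the add */
    return 0;
}
```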
  • Patent number: 10915272
    Abstract: Methods and systems for managing data in shared storage systems, such as virtualized storage arrays, physical disks and hypervisor data stores are provided. One method includes providing multiple data storage devices, including a bottom tier of data storage devices comprising a plurality of physical data storage devices and at least one higher tier of data storage devices comprising a plurality of virtual data storage devices, and storing, by a processor, a data type record including a list of data types recognized by a storage system and a plurality of access control records. The method further includes controlling movement of logical units of data within the storage system resulting from data operations that map data between the data storage tiers while maintaining one or more policies defined in the plurality of access control records. Systems and computer program products for performing the above method are also provided.
    Type: Grant
    Filed: May 16, 2018
    Date of Patent: February 9, 2021
    Assignee: International Business Machines Corporation
    Inventor: Fraser I. MacIntosh
  • Patent number: 10908963
    Abstract: Methods, apparatus, and products for deterministic real time business application processing in a service-oriented architecture (‘SOA’), the SOA including SOA services, each SOA service carrying out a processing step of the business application, where each SOA service is a real time process executable on a real time operating system of a generally programmable computer. Deterministic real time business application processing according to embodiments of the present invention includes configuring the business application with real time processing information and executing the business application in the SOA in accordance with the real time processing information.
    Type: Grant
    Filed: July 28, 2016
    Date of Patent: February 2, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Landon C. Miller, Siljan H. Simpson
  • Patent number: 10908998
    Abstract: A data storage device comprises a non-volatile semiconductor memory device and a solid-state drive controller communicatively coupled to the non-volatile semiconductor memory device, including a function level reset manager. The function level reset manager can receive a function level reset request from a host system, generate a function level reset bitmap based on the function level reset request, and broadcast the function level reset request to a command processing pipeline. The function level reset bitmap can indicate which functions are in a reset state. Further, the function level reset manager can determine which functions are in the reset state and instruct the command processing pipeline to cancel commands associated with the functions in the reset state.
    Type: Grant
    Filed: August 8, 2017
    Date of Patent: February 2, 2021
    Assignee: Toshiba Memory Corporation
    Inventors: Zhimin Ding, Sancar K. Olcay
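    A rough model of the function level reset (FLR) flow in the abstract: a bitmap records which functions are in the reset state, and queued commands belonging to those functions are cancelled. The bitmap width, command structure, and single pipeline array are assumptions.
```c
#include <stdint.h>
#include <stdio.h>

typedef struct { int function_id; int cancelled; } command_t;

static uint32_t flr_bitmap;   /* bit n set => function n is in the reset state */

/* Host issues a function level reset request: mark the function in the bitmap. */
static void request_flr(int function_id) {
    flr_bitmap |= 1u << function_id;
}

/* "Broadcast" to the command processing pipeline: cancel commands whose
   function is marked as being in the reset state. */
static void cancel_reset_commands(command_t *cmds, int n) {
    for (int i = 0; i < n; i++)
        if (flr_bitmap & (1u << cmds[i].function_id))
            cmds[i].cancelled = 1;
}

int main(void) {
    command_t pipeline[] = {{0, 0}, {1, 0}, {0, 0}};
    request_flr(0);
    cancel_reset_commands(pipeline, 3);
    for (int i = 0; i < 3; i++)
        printf("cmd %d (fn %d): %s\n", i, pipeline[i].function_id,
               pipeline[i].cancelled ? "cancelled" : "kept");
    return 0;
}
```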
  • Patent number: 10908821
    Abstract: A request to read data stored at a memory sub-system can be received. A determination can be made whether the data is stored at a cache of the memory sub-system. Responsive to determining that the data is not stored at the cache of the memory sub-system, a determination can be made, by a processing device, of a queue of a set of queues to store the request with other read requests for the data stored at the memory sub-system. Each queue of the set of queues corresponds to a respective cache line of the cache. The request can be stored at the determined queue with the other read requests for the data stored at the memory sub-system.
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: February 2, 2021
    Assignee: Micron Technology, Inc.
    Inventor: Dhawal Bavishi
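    A small sketch of the queue-selection step described above, assuming a direct-mapped cache and illustrative sizes: a read request that misses the cache is stored in the queue corresponding to the cache line its address maps to, alongside other pending reads for that line.
```c
#include <stdio.h>

#define NUM_CACHE_LINES 8
#define LINE_SIZE       64

/* One queue per cache line; each holds pending read requests for that line. */
static unsigned long queues[NUM_CACHE_LINES][16];
static int queue_len[NUM_CACHE_LINES];

static int queue_for_address(unsigned long addr) {
    return (int)((addr / LINE_SIZE) % NUM_CACHE_LINES);
}

/* On a cache miss, store the request in the queue for its cache line. */
static void enqueue_read_on_miss(unsigned long addr) {
    int q = queue_for_address(addr);
    queues[q][queue_len[q]++] = addr;
}

int main(void) {
    enqueue_read_on_miss(0x1040);
    enqueue_read_on_miss(0x1048);       /* same line, so same queue */
    int q = queue_for_address(0x1040);
    printf("queue %d holds %d pending reads\n", q, queue_len[q]);
    return 0;
}
```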
  • Patent number: 10901640
    Abstract: A memory access system includes a memory, a controller, and a redundancy elimination unit. The memory is a multi-way set associative memory, and the redundancy elimination unit records M record items. Each record item is used to store a tag of a stored data block in one of storage sets. The controller determines a read data block and a target storage set of the read data block and sends a query message to the redundancy elimination unit. The query message carries a set identifier of the target storage set of the read data block and a tag of the read data block. The redundancy elimination unit determines a record item corresponding to the set identifier of the target storage set, and matches the tag of the read data block with a tag of a stored data block in the record item corresponding to the target storage set of the read data block.
    Type: Grant
    Filed: November 30, 2017
    Date of Patent: January 26, 2021
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Fenglong Song, Guangfei Zhang, Tao Wang
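    A simplified model of the query exchange in the abstract: the controller supplies a set identifier and a tag, and the redundancy elimination unit matches the tag against what it has recorded for that set. The single tag per record item and the fixed set count are simplifying assumptions.
```c
#include <stdio.h>

#define NUM_SETS 4

static unsigned long record_items[NUM_SETS];  /* one recorded tag per set   */
static int           record_valid[NUM_SETS];  /* whether the item is filled */

/* Returns 1 if the read block's tag matches a stored block's tag in the
   record item for the target set (i.e. the block is already present). */
static int query(int set_id, unsigned long tag) {
    return record_valid[set_id] && record_items[set_id] == tag;
}

int main(void) {
    record_items[2] = 0xBEEF;   /* a block with this tag was stored in set 2 */
    record_valid[2] = 1;
    printf("set 2, tag 0xBEEF: %s\n", query(2, 0xBEEF) ? "match" : "miss");
    printf("set 2, tag 0xCAFE: %s\n", query(2, 0xCAFE) ? "match" : "miss");
    return 0;
}
```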
  • Patent number: 10901634
    Abstract: A storage system may include a plurality of logical storage units that each include a plurality of storage devices. One or more logical unit numbers may be stored across one or more of the plurality of logical storage units, and the logical unit numbers may be accessible by one or more host devices. A logical storage unit may include a plurality of storage devices. Upon detection of failure of a storage device of a logical storage unit, data of the logical storage unit is drained to one or more fault-tolerant logical storage units. The logical storage unit with the defective device is converted to a fault-tolerant logical storage unit using the available and non-defective devices. Data is rebalanced across the fault-tolerant logical storage units.
    Type: Grant
    Filed: January 12, 2018
    Date of Patent: January 26, 2021
    Assignee: SEAGATE TECHNOLOGY LLC
    Inventor: Chetan Bendakaluru Lingarajappa
  • Patent number: 10896128
    Abstract: Technology is provided for partitioning a shared unified cache in a multi-processor computer system. The technology can receive a request to allocate a portion of a shared unified cache memory for storing only executable instructions, partition the cache memory into multiple partitions, and allocate one of the partitions for storing only executable instructions. The technology can further determine the size of the portion of the cache memory to be allocated for storing only executable instructions as a function of the size of the multi-processor's L1 instruction cache and the number of cores in the multi-processor.
    Type: Grant
    Filed: December 23, 2016
    Date of Patent: January 19, 2021
    Assignee: Facebook, Inc.
    Inventors: Narsing Vijayrao, Keith Adams
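    One plausible reading of the sizing rule in the last sentence, sketched below: reserve enough of the shared cache to back one L1 instruction cache per core. The multiplicative formula is an assumption for illustration, not the patented function.
```c
#include <stdio.h>

/* Size the instruction-only partition of the shared unified cache as a
   function of the per-core L1 instruction cache size and the core count. */
static long instruction_partition_bytes(long l1_icache_bytes, int num_cores) {
    return l1_icache_bytes * num_cores;
}

int main(void) {
    long part = instruction_partition_bytes(32 * 1024, 16);  /* 32 KiB L1I, 16 cores */
    printf("instruction-only partition: %ld KiB\n", part / 1024);  /* 512 KiB */
    return 0;
}
```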
  • Patent number: 10884950
    Abstract: Memory management is provided which includes a page replacement process managed by a storage manager and a workload manager. The page replacement process swaps out the content associated with a frame of physical memory to an auxiliary storage in order to provide a free frame. The memory management process includes: determining that the physical memory runs out of free frames; providing priority information from the workload manager to the storage manager, the priority information indicating the priority or business relevance of a certain process; selecting one or more pages to be swapped to the auxiliary storage based on the priority information; and swapping out the contents of the one or more selected pages to the auxiliary storage.
    Type: Grant
    Filed: May 16, 2016
    Date of Patent: January 5, 2021
    Assignee: International Business Machines Corporation
    Inventors: Harris M. Morgenstern, Horst Sinram, Elpida Tzortzatos, Dieter Wellerdiek
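    A minimal sketch of the selection step the abstract describes: when free frames run out, pages owned by the least business-relevant process (per the workload manager's priority information) are chosen for swap-out. The page table layout and priority scale are invented for illustration.
```c
#include <stdio.h>

typedef struct { int frame; int owner_priority; } page_t;  /* higher = more important */

/* Pick the page whose owning process has the lowest priority. */
static int select_victim(const page_t *pages, int n) {
    int victim = 0;
    for (int i = 1; i < n; i++)
        if (pages[i].owner_priority < pages[victim].owner_priority)
            victim = i;
    return victim;
}

int main(void) {
    page_t pages[] = {{0, 5}, {1, 1}, {2, 9}};
    int v = select_victim(pages, 3);
    printf("swap out frame %d (owner priority %d)\n",
           pages[v].frame, pages[v].owner_priority);
    return 0;
}
```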
  • Patent number: 10877668
    Abstract: Techniques for offloading operations to access data that is compressed and distributed to multiple storage nodes are disclosed. A storage node includes one or more storage devices to store a portion of compressed data. Other portions of the compressed data are stored on other storage nodes. A storage node receives a request to perform an operation on the data, decompresses at least part of the portion of the locally stored compressed data, and performs the operation on the decompressed part, returning the operation result to a compute node. Any part that could not be decompressed can be sent with the request to the next storage node. The process continues until all the storage nodes storing the compressed data receive the request, decompress the locally stored data, and perform the operation on the decompressed data.
    Type: Grant
    Filed: December 5, 2018
    Date of Patent: December 29, 2020
    Assignee: Intel Corporation
    Inventors: Sanjeev N. Trika, Jawad B. Khan
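    A toy model of the offload chain in the abstract, with decompression stubbed out and a sum standing in for the offloaded operation; forwarding of parts that could not be decompressed locally is omitted for brevity.
```c
#include <stdio.h>

#define NODES 3
#define CHUNK 4

/* Each storage node holds one locally stored portion of the dataset. */
static const int stored[NODES][CHUNK] = {
    {1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12}
};

/* Stand-in for a real codec: "decompress" the local portion. */
static void decompress(const int *in, int *out, int n) {
    for (int i = 0; i < n; i++) out[i] = in[i];
}

/* The storage node decompresses its portion, performs the requested
   operation on it, and returns the partial result to the compute node. */
static int run_operation_on_node(int node) {
    int plain[CHUNK], sum = 0;
    decompress(stored[node], plain, CHUNK);
    for (int i = 0; i < CHUNK; i++) sum += plain[i];
    return sum;
}

int main(void) {
    int total = 0;
    for (int node = 0; node < NODES; node++)   /* the request visits each node in turn */
        total += run_operation_on_node(node);
    printf("offloaded sum = %d\n", total);     /* 78 */
    return 0;
}
```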
  • Patent number: 10877879
    Abstract: Techniques to manage usage of a flash-based storage are disclosed. In various embodiments, the execution time of the flash-based storage is divided into quanta. Within each quantum of at least a subset of the quanta, flash erasures are allowed without restriction up to a prescribed erasure quota. Erasures are throttled within a slack range bound at a lower end by the erasure quota and at an upper end by an upper bound, including by dividing the slack range into two or more intervals and within each interval applying a corresponding erasure control policy, wherein the respective corresponding erasure control policies applied to successive intervals in the slack range become increasingly strict in a stepwise manner.
    Type: Grant
    Filed: December 11, 2015
    Date of Patent: December 29, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Cheng Li, Philip Shilane, Grant Wallace, Frederick Douglis
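    A sketch of the stepwise throttling described above: erasures are unrestricted up to the quota, increasingly delayed across the intervals of the slack range, and refused at the upper bound. The quota, upper bound, interval count, and delay values are illustrative assumptions.
```c
#include <stdio.h>

#define QUOTA        100   /* erasures allowed without restriction per quantum */
#define UPPER_BOUND  160   /* hard ceiling for the quantum                     */
#define INTERVALS      3   /* slack range split into this many intervals       */

/* Returns the artificial delay (ms) to apply before the next erasure,
   or -1 to refuse erasures until the next quantum. */
static int erase_delay_ms(int erasures_this_quantum) {
    static const int delay_per_interval[INTERVALS] = {1, 10, 100}; /* stricter each step */
    if (erasures_this_quantum < QUOTA) return 0;
    if (erasures_this_quantum >= UPPER_BOUND) return -1;
    int slack_pos = erasures_this_quantum - QUOTA;                 /* position in slack range */
    int interval  = slack_pos / ((UPPER_BOUND - QUOTA) / INTERVALS);
    return delay_per_interval[interval];
}

int main(void) {
    int samples[] = {50, 105, 125, 145, 160};
    for (int i = 0; i < 5; i++)
        printf("%3d erasures -> delay %d ms\n", samples[i], erase_delay_ms(samples[i]));
    return 0;
}
```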
  • Patent number: 10877669
    Abstract: A system that implements a scaleable data storage service may maintain tables in a data store on behalf of storage service clients. The service may maintain data in partitions stored on respective computing nodes in the system. The service may support multiple throughput models, including a committed throughput model and a best effort throughput model. A service request to create a table may specify that requests directed to the table should be serviced under a committed throughput model and may specify the committed throughput level in terms of logical service request units. The service may reserve low-latency storage and other resources sufficient to meet the specified committed throughput level. A client/user may request a modification to the committed throughput level in anticipation of workload changes, such as an increase or decrease in traffic or data volume. In response, the system may increase or decrease the resources reserved for the table.
    Type: Grant
    Filed: June 30, 2011
    Date of Patent: December 29, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Swaminathan Sivasubramanian, Stefano Stefani, Wei Xiao, Timothy Andrew Rath, Rande A. Blackman, Grant A. M. McAlister, Raymond S. Bradford