Access Request Queuing Patents (Class 710/39)
  • Patent number: 10990120
    Abstract: A method operates a first-in-first-out (FIFO) buffer with a first clock, and operates one of a read pointer or a write pointer of the FIFO buffer with the first clock while operating the other one of the read pointer or write pointer with a second clock. One of a serializer fed from the FIFO buffer output, or a de-serializer feeding the FIFO buffer input, is operated with the second clock. Timing pulses indicate that the pointer operating with the second clock has reached a predetermined point in its cycle. The phase of the second clock is adjusted based on a relationship between the timing pulses and an advance period of the pointer operating with the first clock. The pointer operating with the first clock is reset to achieve a desired value for the relationship. A skew created from adjusting the phase of the second clock is corrected.
    Type: Grant
    Filed: June 26, 2019
    Date of Patent: April 27, 2021
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Bhuvanachandran K. Nair
  • Patent number: 10929356
    Abstract: Hidden data co-occurrence relationships may be detected by a computer-implemented method, including monitoring data processing events on one or more server computers, gathering co-occurrences between a plurality of the data processing events, and generating one or more lineages between a plurality of directories associated with the plurality of the data processing events based on the gathered co-occurrences.
    Type: Grant
    Filed: June 4, 2018
    Date of Patent: February 23, 2021
    Assignee: International Business Machines Corporation
    Inventors: Yuriko Nishikawa, Masaki Saitoh, Yoshinori Tahara
  • Patent number: 10928850
    Abstract: A FIFO apparatus includes write registers, a first control circuit, a multiplexer, and a second control circuit. The write registers are for receiving an input signal and a first clock signal, and outputting first outputs to the multiplexer. The first control circuit is for receiving the first clock signal, generating a first toggling pulse, and enabling the write registers according to a sequence. The second control circuit is for controlling the multiplexer according to the first toggling pulse and a second clock signal. The multiplexer outputs a second output according to the sequence. The first and second clock signals have a first delay time and a second delay time, respectively. The difference between the first and second delay times is equal to M cycle(s) of the first clock signal, and the number of the write registers is equal to or larger than M.
    Type: Grant
    Filed: April 25, 2019
    Date of Patent: February 23, 2021
    Assignee: REALTEK SEMICONDUCTOR CORPORATION
    Inventors: Huan-Wen Chen, Po-Hsien Wu, Li-Yu Chen
  • Patent number: 10853078
    Abstract: A processor includes a store buffer to store store instructions to be processed to store data in main memory, a load buffer to store load instructions to be processed to load data from main memory, and a loop invariant code motion (LICM) protection table (LPT) coupled to the store buffer and the load buffer. The LPT tracks information to compare an address of a store or snoop microoperation with entries in the LPT and re-loads a load microoperation of a matching entry.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: December 1, 2020
    Assignee: INTEL CORPORATION
    Inventors: Vineeth Mekkat, Mark Dechene, Zhongying Zhang, John Faistl, Janghaeng Lee, Hou-Jen Ko, Sebastian Winkel, Oleg Margulis
  • Patent number: 10853299
    Abstract: A hot-plugged PCIe device configuration system includes a PCIe device with a PCIe configuration space having PCIe configuration space registers. A computing system includes a PCIe connector and a PCIe setting record database storing a first PCIe setting record having a first register write location value and first register value information. The computing system detects that the PCIe device has been hot-plugged into the PCIe connector, and uses the first register write location value in the first PCIe setting record to determine a location in the PCIe configuration space that provides a first PCIe configuration space register. The computing system then uses the first register value information in the first PCIe setting record to determine at least one register value change for the first PCIe configuration register, and writes the at least one register value change to the first PCIe configuration space register using the location.
    Type: Grant
    Filed: September 15, 2017
    Date of Patent: December 1, 2020
    Assignee: Dell Products L.P.
    Inventors: Austin Patrick Bolen, Vijay Bharat Nijhawan
  • Patent number: 10817183
    Abstract: An information processing system includes a first processor that issues a first write request group including a plurality of data write requests for writing first data to a memory. The first processor issues a first completion write request after issuing the first write request group. The first completion write request is a request for writing completion information to the memory. The completion information indicates completion of write processing requested by the first write request group. The first processor inserts a first barrier instruction into the issued requests, between the first write request group and the first completion write request. The first processor outputs all of the plurality of data write requests included in the first write request group, subsequently outputs the first barrier instruction, and subsequently outputs the first completion write request.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: October 27, 2020
    Assignee: FUJITSU LIMITED
    Inventor: Kentaro Katayama
  • Patent number: 10819648
    Abstract: In one embodiment, a computer-implemented method comprises receiving a plurality of digital data messages in a first-in first-out (FIFO) primary queue of an electronic digital memory that is coupled to a message consuming process that is executed using computer instructions that are programmed to serially obtain messages from the primary queue and to process the messages; determining that processing a first message of the plurality of messages has failed; in response to determining that processing the first message failed, using the message consuming process, sending a first acknowledgement to the primary queue, sending the first message to a retry queue that is different from the primary queue, and processing one or more other messages from the primary queue; obtaining the first message from the retry queue; reprocessing the first message; repeating periodically selecting and processing one or more other messages from the primary queue and periodically selecting and processing one or more different other messages from the retry queue.
    Type: Grant
    Filed: June 27, 2017
    Date of Patent: October 27, 2020
    Assignees: ATLASSIAN PTY LTD., ATLASSIAN INC.
    Inventors: Igor Katkov, Alex Kudinov
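The primary-queue/retry-queue flow in patent 10819648 above lends itself to a short sketch. Below is a minimal Python illustration, assuming a single-threaded consumer and invented names (`RetryingConsumer`, `run_once`); it is not the patent's implementation.

```python
from collections import deque

class RetryingConsumer:
    """Consume messages from a FIFO primary queue; a failed message is
    acknowledged and moved to a separate retry queue so it does not block
    the messages behind it (cf. patent 10819648 above)."""

    def __init__(self, process_fn):
        self.primary = deque()   # FIFO primary queue
        self.retry = deque()     # separate retry queue
        self.process_fn = process_fn

    def enqueue(self, message):
        self.primary.append(message)

    def run_once(self):
        # Serially obtain the next message from the primary queue.
        if self.primary:
            message = self.primary.popleft()   # "acknowledge" by removing it
            try:
                self.process_fn(message)
            except Exception:
                # Processing failed: send it to the retry queue and keep
                # going with the other messages in the primary queue.
                self.retry.append(message)

        # Periodically reprocess one message from the retry queue.
        if self.retry:
            message = self.retry.popleft()
            try:
                self.process_fn(message)
            except Exception:
                self.retry.append(message)   # still failing; retry later

# Example: "b" fails once, is moved to the retry queue, and is reprocessed
# from there without holding up the primary queue.
if __name__ == "__main__":
    seen = []
    def handler(m):
        if m == "b" and "b" not in seen:
            seen.append(m)
            raise RuntimeError("transient failure")
        print("processed", m)

    consumer = RetryingConsumer(handler)
    for m in ("a", "b", "c"):
        consumer.enqueue(m)
    for _ in range(4):
        consumer.run_once()
```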
  • Patent number: 10795728
    Abstract: A sharing expansion device, a controlling method and a computer using the same are provided. The computer has at least one first user account and a second user account. The first user account has been logged in to the computer. The computer is connected to a first input device and a first monitor. The first input device provides at least one first command. The sharing expansion device includes at least two first ports, a second port, a hub unit and a graphic processor. The first ports connect the computer and a second input device. The second input device provides at least one second command. The computer executes the first command and the second command by way of time division multiplexing. The computer provides a first frame and a second frame to the first monitor and the second monitor according to the first user account and the second user account respectively.
    Type: Grant
    Filed: November 15, 2018
    Date of Patent: October 6, 2020
    Assignee: QUANTA COMPUTER INC.
    Inventor: Hsiang-Chih Chi
  • Patent number: 10635350
    Abstract: Technology is disclosed herein for aborting a tail portion of a command queue in a storage device. In one aspect, one or more control circuits of a storage system are configured to abort tasks at a tail end of a command queue in response to receiving a task tail abort command. However, tasks at the head end of the command queue may still be executed. Thus, the head end of the command queue need not be rebuilt after the task tail abort command is performed. Therefore, considerable time is saved by not having to rebuild the head end of the command queue. Note that the task tail abort command may be received while the storage system is in a sequential command execution mode, in which tasks are executed in the order of their respective task identifiers.
    Type: Grant
    Filed: January 23, 2018
    Date of Patent: April 28, 2020
    Assignee: Western Digital Technologies, Inc.
    Inventors: Prashant Singhal, Vallivelraja Ponnudurai, Anil Jain
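A minimal sketch of the tail-abort behavior described in patent 10635350 above, assuming tasks carry monotonically increasing identifiers and execute in identifier order (the sequential command execution mode the abstract mentions); class and method names are invented.

```python
from collections import deque

class CommandQueue:
    """Command queue that executes tasks in task-identifier order and
    supports aborting only the tail of the queue, so the head does not
    have to be rebuilt (cf. patent 10635350 above)."""

    def __init__(self):
        self.queue = deque()   # tasks kept in ascending task-id order

    def submit(self, task_id, work):
        self.queue.append((task_id, work))

    def tail_abort(self, first_aborted_id):
        """Drop every queued task whose id is >= first_aborted_id."""
        while self.queue and self.queue[-1][0] >= first_aborted_id:
            self.queue.pop()

    def execute_next(self):
        if self.queue:
            task_id, work = self.queue.popleft()
            work()
            return task_id
        return None

if __name__ == "__main__":
    q = CommandQueue()
    for i in range(6):
        q.submit(i, lambda i=i: print("ran task", i))
    q.tail_abort(3)          # tasks 3..5 are aborted; 0..2 stay queued
    while q.execute_next() is not None:
        pass
```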
  • Patent number: 10606776
    Abstract: Provided are a computer program product, system, and method for adding dummy requests to a submission queue to manage processing of queued requests according to priorities of the queued requests. A determination is made of a priority for a request to stage a track from the storage device to the cache or to destage a track from the cache to the storage device, comprising a first priority or a second priority. The first priority is higher than the second priority. At least one dummy request is added to a queue in response to the request having the second priority. The controller upon processing a dummy request in the queue discards the dummy request without performing an operation with respect to the storage device. An I/O request having the second priority is added to the queue. The controller processes the I/O request to stage or destage data.
    Type: Grant
    Filed: April 16, 2018
    Date of Patent: March 31, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lokesh M. Gupta, Matthew G. Borlick, Kevin J. Ash
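A minimal sketch of the dummy-request padding described in patent 10606776 above. The padding factor and the queue API are assumptions; the abstract only states that dummy requests are added for second-priority requests and that the controller discards them without performing a storage operation.

```python
from collections import deque

DUMMY = object()          # sentinel for a dummy request the controller discards
DUMMIES_PER_LOW_PRIO = 2  # assumed padding factor; not specified in the abstract

submission_queue = deque()

def add_request(request, priority):
    """Add a stage/destage request; second-priority requests are preceded
    by dummy requests so the controller spends proportionally more of its
    queue slots on first-priority work."""
    if priority == 2:
        for _ in range(DUMMIES_PER_LOW_PRIO):
            submission_queue.append(DUMMY)
    submission_queue.append(request)

def controller_step():
    """Controller processes one queue entry; dummies are discarded without
    touching the storage device."""
    if not submission_queue:
        return
    entry = submission_queue.popleft()
    if entry is DUMMY:
        return                      # discarded, no storage operation
    print("stage/destage:", entry)

if __name__ == "__main__":
    add_request("destage track 7", priority=2)
    add_request("stage track 12", priority=1)
    for _ in range(5):
        controller_step()
```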
  • Patent number: 10592107
    Abstract: Embodiments are directed to a method performed by a computing device. The method includes (a) receiving, by the computing device, a stream of storage management commands directed at logical disks hosted by a DSS, the logical disks being accessible to VMs running on a remote host, each storage management command having a command type of a plurality of command types, each command type of the plurality of command types having a respective timeout period, (b) placing the storage management commands of the stream into a VM storage management queue stored on the computing device, and (c) selectively dequeueing storage management commands stored in the VM storage management queue to be performed by the DSS, wherein selectively dequeueing includes applying a set of dequeueing criteria, the set of dequeueing criteria including a criterion that selects storage management commands from the VM storage management queue according to their respective command types.
    Type: Grant
    Filed: March 30, 2016
    Date of Patent: March 17, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Sergey Alexandrovich Alexeev, Alexey Vladimirovich Shusharin, Ilya Konstantinovich Morev, Sergey Alexandrovich Zaporozhtsev, Yakov Stanislavovich Belikov
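A minimal sketch of selective dequeueing by command type with per-type timeout periods, loosely following the abstract of patent 10592107 above; the command types, timeout values, and selection rule are illustrative assumptions.

```python
import time
from collections import deque

# Assumed per-command-type timeout periods, in seconds.
TIMEOUTS = {"snapshot": 30.0, "resize": 120.0, "delete": 10.0}

queue = deque()   # entries are (enqueue_time, command_type, payload)

def enqueue(command_type, payload):
    queue.append((time.monotonic(), command_type, payload))

def dequeue(allowed_types):
    """Selectively dequeue: pick the oldest command whose type is currently
    selectable, skip the rest, and drop commands that have already exceeded
    the timeout period for their type."""
    kept = deque()
    selected = None
    now = time.monotonic()
    while queue:
        entry = queue.popleft()
        enq_time, ctype, payload = entry
        if now - enq_time > TIMEOUTS[ctype]:
            continue                      # timed out; never run it
        if selected is None and ctype in allowed_types:
            selected = entry              # first eligible command wins
        else:
            kept.append(entry)            # leave the rest queued
    queue.extend(kept)
    return selected

if __name__ == "__main__":
    enqueue("snapshot", "vm-1")
    enqueue("delete", "vm-2")
    print(dequeue(allowed_types={"delete"}))   # picks the delete command
```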
  • Patent number: 10552056
    Abstract: Techniques for performing storage tiering in a data storage system taking into account the write endurance of flash drives and the frequencies with which data are written to storage extents in a data storage system are disclosed. Such storage tiering tends to maximize data temperature of a flash tier by selecting hot extents for placement thereon, but subject to a constraint that doing so does not cause flash drives in the flash tier to operate beyond their endurance levels.
    Type: Grant
    Filed: December 28, 2016
    Date of Patent: February 4, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Roman Vladimirovich Marchenko, Nickolay Alexandrovich Dalmatov, Alexander Valeryevich Romanyukov
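A minimal sketch of hot-extent selection under a flash write-endurance constraint, in the spirit of patent 10552056 above; the greedy strategy and the per-extent fields are assumptions, since the abstract does not specify the selection algorithm.

```python
from dataclasses import dataclass

@dataclass
class Extent:
    name: str
    temperature: float     # access activity: "hotter" is better to promote
    write_rate: float      # expected writes/day this extent would add to flash

def select_for_flash(extents, write_budget):
    """Greedily pick the hottest extents for the flash tier while keeping the
    total expected write rate within the tier's endurance budget."""
    chosen = []
    remaining = write_budget
    for ext in sorted(extents, key=lambda e: e.temperature, reverse=True):
        if ext.write_rate <= remaining:
            chosen.append(ext)
            remaining -= ext.write_rate
    return chosen

if __name__ == "__main__":
    extents = [
        Extent("A", temperature=9.0, write_rate=5.0),
        Extent("B", temperature=8.0, write_rate=9.0),
        Extent("C", temperature=4.0, write_rate=1.0),
    ]
    # Budget of 10 writes/day: A fits, B would exceed the budget, C still fits.
    print([e.name for e in select_for_flash(extents, write_budget=10.0)])
```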
  • Patent number: 10545701
    Abstract: Techniques relating to arbitration in a memory controller are disclosed. In some embodiments, the memory controller implements a per-bank priority-based arbitration scheme for different types of memory traffic (e.g., with different quality of service parameters). In some embodiments, the memory controller is configured to provide per-bank overrides to the arbitration scheme based on latency tolerance reported by one or more requesters sending a particular type of memory traffic. Various techniques disclosed herein may improve performance, improve fairness among different types of memory traffic, and/or reduce power consumption.
    Type: Grant
    Filed: August 17, 2018
    Date of Patent: January 28, 2020
    Assignee: Apple Inc.
    Inventors: Gregory S. Mathews, Kai Lun Hsiung, Lakshmi Narasimha Murthy Nukala, Peter Fu, Rakesh L. Notani, Sukalpa Biswas, Thejasvi Magudilu Vijayaraj, Yanzhe Liu, Shane J. Keil
  • Patent number: 10528393
    Abstract: A method for operating a data storage device includes determining a first weight based on the sum of data sizes for commands queued in a command queue; determining a second weight by summing weights by types of the commands; and controlling an urgent command selection threshold value for selecting an urgent command existing in the command queue, based on at least one of the first weight and the second weight.
    Type: Grant
    Filed: September 12, 2017
    Date of Patent: January 7, 2020
    Assignee: SK hynix Inc.
    Inventor: Jeen Park
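A minimal sketch of the two weights and the urgent-command selection threshold from patent 10528393 above; the per-type weight table and the adjustment formula are invented for illustration, as the abstract only says the threshold is controlled based on at least one of the weights.

```python
# Assumed per-command-type weights; the abstract does not give concrete values.
TYPE_WEIGHTS = {"read": 1, "write": 2, "erase": 4}

def first_weight(commands):
    """Sum of the data sizes of the commands queued in the command queue."""
    return sum(size for _type, size in commands)

def second_weight(commands):
    """Sum of the per-type weights of the queued commands."""
    return sum(TYPE_WEIGHTS[ctype] for ctype, _size in commands)

def urgent_threshold(commands, base=8, scale=0.01):
    """Illustrative rule: a more heavily loaded queue lowers the threshold,
    so an urgent command in the queue gets selected sooner."""
    load = scale * first_weight(commands) + second_weight(commands)
    return max(1, base - int(load // 4))

if __name__ == "__main__":
    queue = [("write", 4096), ("read", 512), ("erase", 0)]
    print(first_weight(queue), second_weight(queue), urgent_threshold(queue))
```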
  • Patent number: 10523341
    Abstract: A method includes deactivating transmitters of a first plurality of transceivers that are associated with an endpoint to a multi-channel communication fabric. A given transceiver of the first plurality of transceivers includes a receiver. The method includes controlling the given transceiver to cause the given transceiver to couple a reference source of the given transceiver to a first node of the receiver, measure a first value at a second node of the receiver, and determine a gain between the first node and the second node based on the measured first value. The method includes controlling the given transceiver to cause the given transceiver to isolate the reference source from the first node of the receiver; and measuring, by the given transceiver, a second value at the second node and determining, by the given transceiver, an intrinsic noise based on the measured second value.
    Type: Grant
    Filed: March 1, 2019
    Date of Patent: December 31, 2019
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Petar Ivanov Krotnev, Davide Tonietto, Marc-Andre LaCroix
  • Patent number: 10522185
    Abstract: A data storage device is disclosed comprising a head actuated over a disk. A plurality of access commands are stored in a command queue, wherein the access commands are for accessing the disk using the head. A future access command is predicted, and an execution order for the access commands in the command queue is determined based on an associated execution cost of at least some of the access commands in the command queue and an associated execution cost of the future access command. At least one of the access commands in the command queue is executed based on the execution order.
    Type: Grant
    Filed: February 9, 2019
    Date of Patent: December 31, 2019
    Assignee: Western Digital Technologies, Inc.
    Inventor: David R. Hall
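A minimal sketch of ordering queued access commands by combining each command's own execution cost with the cost it leaves relative to a predicted future command, per patent 10522185 above; the absolute-LBA-distance cost model is an assumption.

```python
def pick_next(queue, head_position, predicted_lba, cost=lambda a, b: abs(a - b)):
    """Choose the queued command that minimizes its own seek cost from the
    current head position plus the follow-on cost to the predicted future
    command's LBA (cf. patent 10522185 above)."""
    def total_cost(lba):
        return cost(head_position, lba) + cost(lba, predicted_lba)
    return min(queue, key=total_cost)

if __name__ == "__main__":
    queued_lbas = [100, 900, 480]
    # Head at LBA 500; the predictor expects the next command near LBA 450,
    # so the command at LBA 480 is chosen.
    print(pick_next(queued_lbas, head_position=500, predicted_lba=450))
```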
  • Patent number: 10481951
    Abstract: A system and method of device assignment includes receiving, by a supervisor, an assignment request to assign a device to a first application and a second application. The first application is associated with a first memory and has a first address. The second application is associated with a second memory and has a second address. The supervisor selects a first bus address offset and a second bus address offset, which is different from the first bus address offset. The supervisor sends, to the first application, the first bus address offset. The supervisor sends, to the second application, the second bus address offset. The supervisor updates a mapping to the first address to include the first bus address offset and updates a mapping to the second address to include the second bus address offset. The device is assigned to the first application and the second application.
    Type: Grant
    Filed: November 15, 2016
    Date of Patent: November 19, 2019
    Assignee: Red Hat Israel, Ltd.
    Inventors: Michael Tsirkin, Marcel Apfelbaum
  • Patent number: 10423568
    Abstract: A method and system for transferring NVMe data over a network comprises using a discrete buffer memory device to generate a write command from an NVMe-over-RDMA write command request, store the user data from a client host of the network, and send an interrupt signal to a NVMe storage device of the network. The NVMe storage device retrieves the write command from the discrete buffer memory device and performs a direct memory access transfer of the stored user data from the discrete buffer memory device to the NVMe storage device. The discrete buffer memory device comprises a controller and a random access memory for generating commands and storing the commands in a submission queue of the random access memory. The controller can clear commands from the submission queue based on completion commands received in a completion queue of the random access memory.
    Type: Grant
    Filed: December 20, 2016
    Date of Patent: September 24, 2019
    Assignee: Microsemi Solutions (U.S.), Inc.
    Inventors: Oren Berman, Stephen Bates
  • Patent number: 10379595
    Abstract: In various embodiments and/or usage scenarios, device power control, such as relating to one or more power control commands, requests to transition operation to a specific power mode, and/or device power management commands, is advantageous and improves one or more of: performance, reliability, unit cost, and development cost of one or more devices, such as storage devices (e.g. a Solid-State Disk (SSD)) or systems including same.
    Type: Grant
    Filed: September 14, 2015
    Date of Patent: August 13, 2019
    Assignee: Seagate Technology LLC
    Inventor: Ross John Stenfort
  • Patent number: 10379770
    Abstract: A storage system according to the present invention includes a plurality of storage devices, each of which includes a control unit and a storage unit that stores data. The control unit of the storage device that receives a request specifies, among the plurality of storage devices, the storage device whose storage unit stores the target data targeted by the request. The control unit of the specified storage device transmits, as a response to the request, the target data together with header information in which the destination identifier indicating the destination of the request is set as the source identifier of the response, and the source identifier indicating the source of the request is set as the destination identifier of the response.
    Type: Grant
    Filed: March 20, 2018
    Date of Patent: August 13, 2019
    Assignee: NEC CORPORATION
    Inventor: Yuko Chinone
  • Patent number: 10372378
    Abstract: Technology is described herein for operating non-volatile storage. In one aspect, a memory controller replaces an original data buffer pointer(s) to a host memory data buffer(s) with a replacement data buffer pointer(s) to a different data buffer(s) in the host memory. The original data buffer pointer(s) may be associated with a specific read command. For example, the original data buffer pointer(s) may point to data buffer(s) to which data for some range of logical addresses (which may be read from the non-volatile storage) is to be transferred by a memory controller of the non-volatile storage. The replacement data buffer pointer(s) could be associated with a different read command. However, it is not required for the replacement data buffer pointer(s) to be associated with a read command. The replacement data buffer pointer(s) may point to a region of memory that is allocated for exclusive use of the memory controller.
    Type: Grant
    Filed: February 15, 2018
    Date of Patent: August 6, 2019
    Assignee: Western Digital Technologies, Inc.
    Inventors: Shay Benisty, Judah Gamliel Hahn, Alon Marcu, Ariel Navon, Alexander Bazarsky
  • Patent number: 10289516
    Abstract: A processor core includes a decode circuit to decode an instruction, where the instruction specifies an address to be monitored. The processor core further includes a monitor circuit, where the monitor circuit includes a data structure to store a plurality of entries for addresses that are being monitored by the monitor circuit and a triggered queue, where the monitor circuit is to enqueue an address being monitored by the monitor circuit into the triggered queue in response to a determination that a triggering event for the address being monitored by the monitor circuit occurred. The processor core further includes an execution circuit to execute the decoded instruction to add an entry for the specified address to be monitored into the data structure and ensure, using a cache coherence protocol, that a coherency status of a cache line corresponding to the specified address to be monitored is in a shared state.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: May 14, 2019
    Assignee: INTEL CORPORATION
    Inventors: Wim Heirman, Yves Vandriessche
  • Patent number: 10270713
    Abstract: A system for communicating a multi-destination packet through a network switch fabric with a plurality of input and output ports is described. This system receives the multi-destination packet at an input port, wherein the multi-destination packet includes a multicast packet or a broadcast packet that is directed to multiple output ports, and wherein the network switch fabric maintains a separate virtual output queue (VOQ) for each output port. Next, the system sends the multi-destination packet from the input port to the multiple output ports by inserting the multi-destination packet into VOQs associated with the multiple output ports. The multi-destination packet is inserted into one VOQ at a time, so that after the multi-destination packet is read out of a VOQ and is sent to a corresponding output port, the multi-destination packet is inserted in another VOQ until the multi-destination packet is sent to all of the multiple output ports.
    Type: Grant
    Filed: December 16, 2014
    Date of Patent: April 23, 2019
    Assignee: Oracle International Corporation
    Inventors: Arvind Srinivasan, Shimon Muller
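A minimal sketch of moving a multi-destination packet through per-output virtual output queues one VOQ at a time, following the abstract of patent 10270713 above; the scheduling is reduced to explicit per-port drain calls for clarity.

```python
from collections import deque

class Fabric:
    """Network switch fabric that keeps one virtual output queue (VOQ) per
    output port. A multi-destination packet sits in only one VOQ at a time;
    after it is read out and sent, it is inserted into the next VOQ."""

    def __init__(self, num_ports):
        self.voqs = [deque() for _ in range(num_ports)]

    def send_multidest(self, packet, dest_ports):
        remaining = list(dest_ports)
        first = remaining.pop(0)
        # The queued entry carries the destinations still to be served.
        self.voqs[first].append((packet, remaining))

    def drain_voq(self, port):
        """Read one packet out of a VOQ, deliver it on that port, and
        re-insert it into the VOQ of the next pending destination, if any."""
        if not self.voqs[port]:
            return
        packet, remaining = self.voqs[port].popleft()
        print(f"delivered {packet!r} on port {port}")
        if remaining:
            self.voqs[remaining[0]].append((packet, remaining[1:]))

if __name__ == "__main__":
    fabric = Fabric(num_ports=4)
    fabric.send_multidest("bcast-frame", dest_ports=[0, 2, 3])
    for port in (0, 2, 3):
        fabric.drain_voq(port)
```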
  • Patent number: 10228873
    Abstract: A method for swapping out tape cartridges in tape libraries is disclosed. In one embodiment, such a method includes maintaining, in a tape library, old tape cartridges backing up data in a primary storage system. The method adds, to the tape library, new tape cartridges to replace the old tape cartridges. The method then initiates a data transfer process to move active data to the new tape cartridges. This data transfer process first moves active data in less frequently accessed storage elements, followed by active data in more frequently accessed storage elements. During the data transfer process, the method backs up updates to data in the primary storage system to the new tape cartridges. A corresponding system and computer program product are also disclosed.
    Type: Grant
    Filed: June 28, 2017
    Date of Patent: March 12, 2019
    Assignee: International Business Machines Corporation
    Inventors: Joshua J. Crawford, Paul A. Jennas, II, Jason L. Peipelman, Matthew J. Ward
  • Patent number: 10223032
    Abstract: A memory controller is provided for accessing shared memory objects by read and write requests made to a memory. The memory controller includes a list for registering address locations of the shared objects in the memory, and having slots for a lock bit. The memory controller includes a read wait queue and a write wait queue for selectively inputting, outputting, holding, and purging requests. The memory controller includes a read initiated queue and a write initiated queue for selectively inputting and purging requests transferred from the read wait queue and the write wait queue, respectively, upon memory access initiation and completion. The memory controller includes a controller for controlling the wait queues using policies by determining which requests to output, hold, and purge, based on a list entry, a lock bit and TTL information set to each request upon a hold being applied thereto and decremented in each cycle.
    Type: Grant
    Filed: April 28, 2017
    Date of Patent: March 5, 2019
    Assignee: International Business Machines Corporation
    Inventor: Yasunao Katayama
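A minimal sketch of the wait-queue policy in patent 10223032 above (and its continuation 10209925 below): requests to a locked address are held with a TTL that is decremented each cycle and purged when it expires. The TTL value, the single combined wait queue, and the lock-release API are simplifying assumptions.

```python
from collections import deque

TTL_ON_HOLD = 3   # assumed number of cycles a held request survives

class SharedObjectController:
    def __init__(self):
        self.locks = {}          # address -> lock bit for registered objects
        self.wait = deque()      # wait-queue entries: [kind, address, ttl]

    def request(self, kind, address):
        self.locks.setdefault(address, False)
        self.wait.append([kind, address, None])

    def cycle(self):
        """One controller cycle: output, hold, or purge each waiting request."""
        still_waiting = deque()
        while self.wait:
            kind, address, ttl = self.wait.popleft()
            if not self.locks[address]:
                # Lock bit clear: initiate the access (writes take the lock).
                print(f"initiate {kind} @ {address:#x}")
                if kind == "write":
                    self.locks[address] = True
            else:
                # Lock bit set: hold the request, decrementing its TTL.
                ttl = TTL_ON_HOLD if ttl is None else ttl - 1
                if ttl > 0:
                    still_waiting.append([kind, address, ttl])
                # else: TTL exhausted, purge the request.
        self.wait = still_waiting

    def complete(self, address):
        self.locks[address] = False   # access completed; release the lock

if __name__ == "__main__":
    mc = SharedObjectController()
    mc.request("write", 0x1000)
    mc.request("read", 0x1000)    # will be held behind the write's lock
    mc.cycle()                    # write initiates, read is held
    mc.complete(0x1000)
    mc.cycle()                    # read initiates
```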
  • Patent number: 10209925
    Abstract: A memory controller is provided for accessing shared memory objects by read and write requests made to a memory. The memory controller includes a list for registering address locations of the shared objects in the memory, and having slots for a lock bit. The memory controller includes a read wait queue and a write wait queue for selectively inputting, outputting, holding, and purging requests. The memory controller includes a read initiated queue and a write initiated queue for selectively inputting and purging requests transferred from the read wait queue and the write wait queue, respectively, upon memory access initiation and completion. The memory controller includes a controller for controlling the wait queues using policies by determining which requests to output, hold, and purge, based on a list entry, a lock bit and TTL information set to each request upon a hold being applied thereto and decremented in each cycle.
    Type: Grant
    Filed: December 11, 2017
    Date of Patent: February 19, 2019
    Assignee: International Business Machines Corporation
    Inventor: Yasunao Katayama
  • Patent number: 10204047
    Abstract: An apparatus is described that includes a memory controller having an interface to couple to a multi-level system memory. The memory controller also includes a coherency buffer and coherency services logic circuitry. The coherency buffer is to keep cache lines for which read and/or write requests have been received. The coherency services logic circuitry is coupled to the interface and the coherency buffer. The coherency services logic circuitry is to merge a cache line that has been evicted from a level of the multi-level system memory with another version of the cache line within the coherency buffer before writing the cache line back to a deeper level of the multi-level system memory if at least one of the following is true: the another version of said cache line is in a modified state; the memory controller has a pending write request for the cache line.
    Type: Grant
    Filed: March 27, 2015
    Date of Patent: February 12, 2019
    Assignee: Intel Corporation
    Inventors: Israel Diamand, Nir Misgav, Aravindh Anantaraman, Zvika Greenfield
  • Patent number: 10175897
    Abstract: A data server, method and computer readable storage medium for receiving a current request relating to a data archive, determining a number of queued requests relating to the data archive present in a request queue, determining a waiting time for the current request based on the number of queued requests and adding the current request to the request queue after the waiting time has elapsed.
    Type: Grant
    Filed: November 15, 2017
    Date of Patent: January 8, 2019
    Assignee: VIACOM INTERNATIONAL INC.
    Inventor: Richard Torpey
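A minimal sketch of delaying admission to the request queue based on the number of queued requests, per patent 10175897 above (the same mechanism appears again in patents 9846549 and 9557920 below); the linear backoff constant is an assumption.

```python
import time
from collections import deque

request_queue = deque()
DELAY_PER_QUEUED_REQUEST = 0.05   # assumed seconds of delay per queued request

def submit(request):
    """Compute a waiting time from the current queue depth, wait it out,
    then add the request to the request queue."""
    waiting_time = DELAY_PER_QUEUED_REQUEST * len(request_queue)
    time.sleep(waiting_time)
    request_queue.append(request)
    return waiting_time

if __name__ == "__main__":
    for name in ("restore-a", "restore-b", "restore-c"):
        print(name, "waited", submit(name), "seconds")
```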
  • Patent number: 10169272
    Abstract: A data processing apparatus is provided, which includes: a plurality of processor cores; a shared processor cache, the shared processor cache being connected to each of the processor cores and to a main memory; a bus controller, the bus controller being connected to the shared processor cache and performing, in response to receiving a descriptor sent by one of the processor cores, a transfer of requested data indicated by the descriptor from the shared processor cache to an input/output (I/O) device; a bus unit, the bus unit being connected to the bus controller and transferring data to/from the I/O device; wherein the shared processor cache includes means for prefetching the requested data from the shared processor cache or main memory by performing a direct memory access in response to receiving a descriptor from the one of the processor cores.
    Type: Grant
    Filed: August 17, 2015
    Date of Patent: January 1, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ekaterina M. Ambroladze, Norbert Hagspiel, Sascha Junghans, Matthias Klein, Jeorg Walter
  • Patent number: 10146293
    Abstract: Systems, methods, and firmware for power control of data storage devices are provided herein. In one example, a data storage device is presented. The data storage device includes a storage control system to identify a power threshold for the data storage device. The data storage device determines power consumption characteristics for the data storage device and enters into a power controlled mode for the data storage device that adjusts at least a storage transaction queue depth in the data storage device to establish the power consumption characteristics as below the power threshold for the data storage device.
    Type: Grant
    Filed: September 22, 2014
    Date of Patent: December 4, 2018
    Assignee: Western Digital Technologies, Inc.
    Inventors: Mohammed Ghiath Khatib, Damien Cyril Daniel Le Moal
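A minimal sketch of the power-controlled mode in patent 10146293 above, which adjusts the storage transaction queue depth until measured power drops below the device's threshold; the power model and the halving step are invented for illustration.

```python
def adjust_queue_depth(measure_power_w, power_threshold_w,
                       queue_depth=32, min_depth=1):
    """Enter a power-controlled mode: shrink the transaction queue depth
    until the measured power consumption falls below the threshold."""
    while queue_depth > min_depth and measure_power_w(queue_depth) >= power_threshold_w:
        queue_depth //= 2
    return queue_depth

if __name__ == "__main__":
    # Toy power model (assumption): 1.5 W baseline + 0.2 W per outstanding command.
    model = lambda depth: 1.5 + 0.2 * depth
    print(adjust_queue_depth(model, power_threshold_w=4.0))   # -> 8
```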
  • Patent number: 10142231
    Abstract: Technologies for accelerating non-uniform network input/output accesses include a multi-home network interface controller (NIC) of a network computing device communicatively coupled to a plurality of non-uniform memory access (NUMA) nodes, each of which include an allocated number of processor cores of a physical processor package and an allocated portion of a main memory directly linked to the physical processor package. The multi-home NIC includes a logical switch communicatively coupled to a plurality of logical NICs, each of which is communicatively coupled to a corresponding NUMA node. The multi-home NIC is configured to facilitate the ingress and egress of network packets by determining a logical path for each network packet received at the multi-home NIC based on a relationship between one of the NUMA nodes and/or a logical NIC (e.g., to forward the network packet from the multi-home NIC) coupled to the one of the NUMA nodes. Other embodiments are described herein.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: November 27, 2018
    Assignee: Intel Corporation
    Inventor: Anil Vasudevan
  • Patent number: 10116732
    Abstract: A provider network hosting multiple network-based services that implement different resources for a client may provide automated management of resource attributes across the multiple network-based services. A client may send a request to a resource attribute service implemented at the provider network to add a resource attribute to different resources implemented among different network-based services that satisfy resource metadata selection criteria. In response to receiving the request, resource metadata maintained for the different resources implemented among the different network-based resources, which may include one or more previously applied resource attributes, may be evaluated to identify those resources that satisfy the resource metadata selection criteria. For those resources that satisfy the resource metadata selection criteria, the resource attribute may be added to the resource metadata maintained for the different resources.
    Type: Grant
    Filed: December 8, 2014
    Date of Patent: October 30, 2018
    Assignee: Amazon Technologies, Inc.
    Inventors: Jeffrey Cicero Canton, William Frederick Hingle Kruse
  • Patent number: 9946496
    Abstract: A computing system includes a storage device and a host. The storage device includes a volatile memory and a non-volatile memory, and is configured to receive data for storage in the non-volatile memory, to buffer at least some of the received data temporarily in the volatile memory, and to guarantee that any data, which is not part of a predefined amount of data that was most recently received, has been committed to the non-volatile memory. The host is configured to send the data for storage in the storage device, and, in response to a need to commit given data to the non-volatile memory, to send the given data to the storage device followed by at least the predefined amount of additional data.
    Type: Grant
    Filed: March 20, 2016
    Date of Patent: April 17, 2018
    Assignee: Elastifile Ltd.
    Inventors: Eyal Lotem, Avraham Meir, Shahar Frank
  • Patent number: 9928179
    Abstract: Cache replacement policy. In accordance with a first embodiment of the present invention, an apparatus comprises a queue memory structure configured to queue cache requests that miss a second cache after missing a first cache. The apparatus further comprises additional memory, associated with the queue memory structure, configured to record an evict way of the cache requests for the cache. The apparatus may be further configured to lock the evict way recorded in the additional memory, for example, to prevent reuse of the evict way. The apparatus may be further configured to unlock the evict way responsive to a fill from the second cache to the cache. The additional memory may be a component of a higher level cache.
    Type: Grant
    Filed: December 16, 2011
    Date of Patent: March 27, 2018
    Assignee: Intel Corporation
    Inventors: Karthikeyan Avudaiyappan, Mohammad Abdallah
  • Patent number: 9922107
    Abstract: A processing platform integrates ETL (extract, transform, and load), real time stream processing, and “big data” data stores into a high performance analytic system that runs in a public or private cloud. The platform performs real time pre-storage enrichment of data records to form a single comprehensive record usable for analytics, searching and alerting. The platform further supports sharing of components and plug-ins and performs automatic scaling of resources based on real time resource monitoring and analysis.
    Type: Grant
    Filed: November 21, 2016
    Date of Patent: March 20, 2018
    Assignee: Leidos, Inc.
    Inventors: Thomas James Cannaliato, Joshua A. Decker, Matthew William Vahlberg
  • Patent number: 9846549
    Abstract: A data server, method and computer readable storage medium for receiving a current request relating to a data archive, determining a number of queued requests relating to the data archive present in a request queue, determining a waiting time for the current request based on the number of queued requests and adding the current request to the request queue after the waiting time has elapsed.
    Type: Grant
    Filed: December 20, 2016
    Date of Patent: December 19, 2017
    Assignee: VIACOM INTERNATIONAL INC.
    Inventor: Richard Torpey
  • Patent number: 9753734
    Abstract: A method for sorting elements in hardware structures is disclosed. The method comprises selecting a plurality of elements to order from an unordered input queue (UIQ) within a predetermined range in response to finding a match between at least one most significant bit of the predetermined range and corresponding bits of a respective identifier associated with each of the plurality of elements. The method further comprises presenting each of the plurality of elements to a respective multiplexer. Further the method comprises generating a select signal for an enabled multiplexer in response to finding a match between at least one least significant bit of a respective identifier associated with each of the plurality of elements and a port number of the ordered queue. Finally, the method comprises forwarding a packet associated with a selected element identifier to a matching port number of the ordered queue from the enabled multiplexer.
    Type: Grant
    Filed: July 20, 2016
    Date of Patent: September 5, 2017
    Assignee: Intel Corporation
    Inventors: Mohammad A. Abdallah, Mandeep Singh
  • Patent number: 9747044
    Abstract: In an all-flash storage array, write requests can take about 9 to 10 times longer than a read request of the same size. There could be several problems when reading or writing from all-flash storage, such as a large write request slowing down small read requests, or other write requests. Also, a large read request may slow down smaller read requests by filling the incoming requests queue. In one implementation, a determination is made on what is the maximum size of a request to flash storage that improves the throughput of a flash chip (e.g., write requests beyond a certain size do not improve throughput). A chunklet is defined as a block of data having the calculated maximum size. As write requests come in, the write requests are broken into chunklets, and then the chunklets are queued for processing by the flash chip. One chunklet is processed at a time per write request.
    Type: Grant
    Filed: October 4, 2016
    Date of Patent: August 29, 2017
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Anil Kumar Nanduri, Murali Krishna Vishnumolakala
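A minimal sketch of chunklet-based write scheduling per patent 9747044 above: write requests are broken into chunklets of the measured maximum-throughput size and queued, with one chunklet processed at a time per write request. The chunklet size and the round-robin policy here are placeholders.

```python
from collections import deque

CHUNKLET_SIZE = 64 * 1024   # placeholder; the real value is measured per flash chip

def split_into_chunklets(data):
    """Break a write request into fixed-size chunklets."""
    return [data[i:i + CHUNKLET_SIZE] for i in range(0, len(data), CHUNKLET_SIZE)]

class ChunkletScheduler:
    """Round-robin over write requests, issuing one chunklet at a time per
    request so a large write cannot monopolize the flash chip."""

    def __init__(self):
        self.requests = deque()   # each entry is a deque of pending chunklets

    def submit_write(self, data):
        self.requests.append(deque(split_into_chunklets(data)))

    def issue_next(self):
        if not self.requests:
            return None
        chunklets = self.requests.popleft()
        chunk = chunklets.popleft()
        if chunklets:
            self.requests.append(chunklets)   # come back to this write later
        return chunk

if __name__ == "__main__":
    sched = ChunkletScheduler()
    sched.submit_write(b"A" * (3 * CHUNKLET_SIZE))   # large write: 3 chunklets
    sched.submit_write(b"B" * 512)                   # small write: 1 chunklet
    order = []
    while (c := sched.issue_next()) is not None:
        order.append(chr(c[0]))
    print(order)   # interleaved: ['A', 'B', 'A', 'A']
```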
  • Patent number: 9742683
    Abstract: A method is provided in one example embodiment and includes determining whether a packet received at a network node in a communications network is a high priority packet; determining whether a low priority queue of the network node has been deemed to be starving; if the packet is a high priority packet and the low priority queue has not been deemed to be starving, adding the packet to a high priority queue, wherein the high priority queue has strict priority over the low priority queue; and if the packet is a high priority packet and the low priority queue has been deemed to be starving, adding the packet to the low priority queue.
    Type: Grant
    Filed: November 3, 2015
    Date of Patent: August 22, 2017
    Assignee: Cisco Technology, Inc.
    Inventors: Erico Vanini, Rong Pan, Thomas J. Edsall
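A minimal sketch of the enqueue rule in patent 9742683 above: a high-priority packet is diverted into the low-priority queue while that queue is deemed to be starving. The starvation test used here (age of the head-of-queue packet) is an assumption; the abstract does not define how starvation is detected.

```python
import time
from collections import deque

STARVATION_AGE_S = 0.5   # assumed: low queue is starving if its head waited this long

high_q = deque()   # strict priority over low_q
low_q = deque()    # entries are (enqueue_time, packet)

def low_queue_starving():
    return bool(low_q) and (time.monotonic() - low_q[0][0]) > STARVATION_AGE_S

def enqueue(packet, high_priority):
    if high_priority and not low_queue_starving():
        high_q.append(packet)
    else:
        # Either a low-priority packet, or a high-priority packet demoted
        # so the starving low-priority queue gets drained.
        low_q.append((time.monotonic(), packet))

def dequeue():
    if high_q:
        return high_q.popleft()          # strict priority
    if low_q:
        return low_q.popleft()[1]
    return None

if __name__ == "__main__":
    enqueue("ctrl-1", high_priority=True)
    enqueue("bulk-1", high_priority=False)
    print(dequeue(), dequeue())   # ctrl-1 first, then bulk-1
```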
  • Patent number: 9734101
    Abstract: A method, system, and computer program product identify extraneous input/output interrupts for a queued input/output device architecture. At least one interrupt is determined to have been generated for at least one queue in a plurality of queues of a queued input/output device architecture. The interrupt is identified as an extraneous interrupt in response to determining either that the queue is associated with at least one reply message waiting to be dequeued for a previously processed interrupt, or that the queue fails to include at least one pending reply for a previously received unprocessed interrupt.
    Type: Grant
    Filed: October 2, 2014
    Date of Patent: August 15, 2017
    Assignee: International Business Machines Corporation
    Inventor: Louis P. Gomes
  • Patent number: 9727364
    Abstract: A hypervisor retrieves a packet written by a guest operating system of a virtual machine from hypervisor memory accessible to the guest operating system. The hypervisor then adds the packet of the guest operating system to at least one receive queue associated with a virtual device. The hypervisor pauses the retrieving of additional packets from the guest upon determining that the at least one receive queue size has met a first predetermined threshold condition. The hypervisor processes queued packets from the at least one receive queue sequentially. The hypervisor restarts the retrieving of the additional packets from the guest upon determining that the at least one receive queue size has met a second predetermined threshold condition.
    Type: Grant
    Filed: September 5, 2014
    Date of Patent: August 8, 2017
    Assignee: Red Hat Israel, Ltd.
    Inventor: Michael Tsirkin
  • Patent number: 9665719
    Abstract: A system and method can support controlled and secure firmware upgrade in a middleware machine environment. The system can provide a boot image of an operating system (OS) in a host node, wherein the host node connects to a shared resource, such as a network fabric, via an input/output (I/O) device. The boot image can receive at least one of a firmware image and a firmware update from the host node, and upgrade firmware in the I/O device associated with the host node. Furthermore, the host-based firmware upgrade can be based on a special boot image that is prevented from accessing local information on the host node, or a normal boot image that is prevented from controlling the I/O device.
    Type: Grant
    Filed: December 5, 2013
    Date of Patent: May 30, 2017
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: Bjørn Dag Johnsen, Martin Paul Mayhead
  • Patent number: 9658968
    Abstract: A method and controller for implementing enhanced storage adapter write cache management, and a design structure on which the subject controller circuit resides are provided. The controller includes a hardware write cache engine implementing hardware acceleration for storage write cache management. The controller manages write cache data and metadata with minimum or no firmware involvement for greatly enhancing performance.
    Type: Grant
    Filed: November 12, 2015
    Date of Patent: May 23, 2017
    Assignee: International Business Machines Corporation
    Inventors: Brian E. Bakke, Joseph R. Edwards, Robert E. Galbraith, Adrian C. Gerhard, Daniel F. Moertl, Gowrisankar Radhakrishnan, Rick A. Weckwerth
  • Patent number: 9652385
    Abstract: An apparatus and method are provided for handling atomic update operations. The apparatus has a cache storage to store data for access by processing circuitry, the cache storage having a plurality of cache lines. Atomic update handling circuitry is used to handle performance of an atomic update operation in respect of data at a specified address. When data at the specified address is determined to be stored within a cache line of the cache storage, the atomic update handling circuitry performs the atomic update operation on the data from that cache line. Hazard detection circuitry is used to trigger deferral of performance of the atomic update operation upon detecting that a linefill operation for the cache storage is pending that will cause a chosen cache line to be populated with data that includes data at the specified address. The linefill operation causes the apparatus to receive a sequence of data portions that collectively form the data for storing in the chosen cache line.
    Type: Grant
    Filed: November 27, 2015
    Date of Patent: May 16, 2017
    Assignee: ARM Limited
    Inventors: Gregory Andrew Chadwick, Adnan Khan
  • Patent number: 9588913
    Abstract: Embodiments of the present invention provide systems, methods, and computer program products for managing computing devices to handle an input/output (I/O) request. In one embodiment, the I/O request may be eligible for performance throttling based, at least in part, on the associated importance level for performing the received I/O request and one or more characteristics of the received I/O request. Embodiments of the present invention provide systems, methods, and computer program products for throttling the I/O request and transmitting the I/O request to a storage controller.
    Type: Grant
    Filed: June 29, 2015
    Date of Patent: March 7, 2017
    Assignee: International Business Machines Corporation
    Inventors: Susan K. Candelaria, Scott B. Compton, Deborah A. Furman, Ilene A. Goldman, Matthew J. Kalos, John R. Paveza, Beth A. Peterson, Dale F. Riedy, David M. Shackelford, Harry M. Yudenfriend
  • Patent number: 9557935
    Abstract: Provided is a method of writing data of a storage system. The method includes causing a host to issue a first writing command; causing the host, when a queue depth of the first writing command is a first value, to store the first writing command in an entry which is assigned in advance and is included in a cache; causing the host to generate a writing completion signal for the first writing command; and causing the host to issue a second writing command.
    Type: Grant
    Filed: July 21, 2014
    Date of Patent: January 31, 2017
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Pradeep Bisht, Jiurong Cheng, Jong-tae Park, Sung-chul Kim, Seung-yeun Jeong, Sang-jin Oh, Jung-ho Kim
  • Patent number: 9557920
    Abstract: A data server, method and computer readable storage medium for receiving a current request relating to a data archive, determining a number of queued requests relating to the data archive present in a request queue, determining a waiting time for the current request based on the number of queued requests and adding the current request to the request queue after the waiting time has elapsed.
    Type: Grant
    Filed: June 11, 2013
    Date of Patent: January 31, 2017
    Assignee: VIACOM INTERNATIONAL INC.
    Inventor: Richard Torpey
  • Patent number: 9558149
    Abstract: A dual system according to the present invention includes: a memory copying unit configured to, when an arithmetic device of a first computer module is installed into the dual system, execute a memory copy process of copying data in a memory region of a second computer module into a memory region of the first computer module; a substitute processing unit configured to execute a service substitute process that is executed by a different arithmetic device from the arithmetic device executing the memory copy process and that is part of the processes involved in the information processing service provided by the dual system; and a shared memory that stores data of the service substitute process performed by the substitute processing unit. The shared memory is excluded from the memory copy process.
    Type: Grant
    Filed: March 10, 2014
    Date of Patent: January 31, 2017
    Assignee: NEC CORPORATION
    Inventor: Sayuri Fuse
  • Patent number: 9535864
    Abstract: The present invention is a clustered storage system in which, even when an access is sent from the processor of one controller to the processor of the other controller, the receiving processor is able to prioritize processing of that access so that I/O processing is not delayed. With the storage system of the present invention, the first processor of the first controller transmits request information that is to be processed by the second processor of the second controller to the second processor while differentiating between request information whose processing is to be prioritized by the second processor and request information whose processing is not to be prioritized, and the second processor acquires the request information while making the same differentiation.
    Type: Grant
    Filed: February 26, 2015
    Date of Patent: January 3, 2017
    Assignee: Hitachi, Ltd.
    Inventors: Shintaro Kudo, Yusuke Nonaka
  • Patent number: 9514072
    Abstract: An input/output (I/O) request is received that indicates a priority for performing the received I/O request by a storage controller. If a base device is not available to handle the received I/O request, whether the received I/O request is eligible for performance throttling is determined. The received I/O request is transmitted to the storage controller indicating whether the received I/O request is eligible for performance throttling. An alias device is allocated to the base device based on the priority for performing the received I/O request. If the throttling information received from the storage controller for the previous I/O request indicates that a request type of the received I/O request is not being throttled, and it is determined that the received I/O request is a new request, then a control block representing the base device is flagged, indicating that the received I/O request is eligible for performance throttling.
    Type: Grant
    Filed: March 11, 2016
    Date of Patent: December 6, 2016
    Assignee: International Business Machines Corporation
    Inventors: Susan K. Candelaria, Scott B. Compton, Deborah A. Furman, Ilene A. Goldman, Matthew J. Kalos, John R. Paveza, Beth A. Peterson, Dale F. Riedy, David M. Shackelford, Harry M. Yudenfriend