Coherency Patents (Class 711/141)
  • Patent number: 10216692
    Abstract: A multiprocessor system on a chip (MPSoC) implements parallel processing and includes a plurality of cores with inter-core communication. This communication is implemented by an on-chip switch fabric in communication with each core, or by shared memory in communication with each core. In another embodiment, a parallel processing system is implemented as a Howard Cascade and uses shared memory for implementing inter-chip communication. The parallel processing system includes a plurality of chips, each formed as an MPSoC, and implements communication between the chips using shared memory.
    Type: Grant
    Filed: June 17, 2010
    Date of Patent: February 26, 2019
    Assignee: Massively Parallel Technologies, Inc.
    Inventor: Kevin D. Howard
  • Patent number: 10216662
    Abstract: Embodiments of systems, apparatuses, and methods for remote action handling are described. In an embodiment, a hardware apparatus comprises: a first register to store a memory address of a payload corresponding to an action to be performed associated with a remote action request (RAR) interrupt, a second register to store a memory address of an action list accessible by a plurality of processors, and a remote action handler circuit to identify a received RAR interrupt, perform an action of the received RAR interrupt, and signal acknowledgment to an initiating processor upon completion of the action.
    Type: Grant
    Filed: September 26, 2015
    Date of Patent: February 26, 2019
    Assignee: Intel Corporation
    Inventors: Michael Mishaeli, Ido Ouziel, Baruch Chaikin, Yoav Zach
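A minimal C sketch of the remote-action-request flow described in the abstract above: two "registers" hold the payload and action-list addresses, and a handler performs the pending action and acknowledges the initiator. The register layout, action codes, and function names are illustrative assumptions, not the patented hardware interface.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical action codes carried by a remote action request (RAR). */
enum rar_action { RAR_TLB_INVALIDATE = 1, RAR_CACHE_FLUSH = 2 };

struct rar_payload {              /* payload pointed to by the first register */
    enum rar_action action;
    uint64_t        addr;         /* address the action applies to            */
};

struct rar_action_entry {         /* one slot of the shared action list       */
    volatile int       pending;   /* set by initiator, cleared on ack         */
    struct rar_payload payload;
};

/* Two "registers": addresses of the payload and of the shared action list. */
static struct rar_action_entry action_list[4];
static struct rar_action_entry *payload_reg     = &action_list[0];
static struct rar_action_entry *action_list_reg = action_list;

/* Remote action handler: identify the pending RAR, perform it, signal ack. */
static void rar_handle_interrupt(int cpu)
{
    struct rar_action_entry *e = payload_reg;
    if (!e->pending)
        return;
    switch (e->payload.action) {
    case RAR_TLB_INVALIDATE:
        printf("cpu%d: invalidate TLB entry for %#llx\n",
               cpu, (unsigned long long)e->payload.addr);
        break;
    case RAR_CACHE_FLUSH:
        printf("cpu%d: flush cache line %#llx\n",
               cpu, (unsigned long long)e->payload.addr);
        break;
    }
    e->pending = 0;               /* acknowledgment to the initiating processor */
}

int main(void)
{
    payload_reg->payload = (struct rar_payload){ RAR_TLB_INVALIDATE, 0x7f001000 };
    payload_reg->pending = 1;           /* initiator posts the request          */
    rar_handle_interrupt(3);            /* receiving core services the interrupt */
    return action_list_reg[0].pending;  /* 0 == acknowledged                    */
}
```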
  • Patent number: 10210091
    Abstract: A processor of an aspect includes a plurality of packed data registers, and a decode unit to decode a no-locality hint vector memory access instruction. The no-locality hint vector memory access instruction is to indicate a packed data register of the plurality of packed data registers that is to have source packed memory indices. The source packed memory indices are to have a plurality of memory indices. The no-locality hint vector memory access instruction is to provide a no-locality hint to the processor for data elements that are to be accessed with the memory indices. The processor also includes an execution unit coupled with the decode unit and the plurality of packed data registers. The execution unit, in response to the no-locality hint vector memory access instruction, is to access the data elements at memory locations that are based on the memory indices.
    Type: Grant
    Filed: February 15, 2017
    Date of Patent: February 19, 2019
    Assignee: Intel Corporation
    Inventor: Christopher J. Hughes
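A software model, in C, of the gather-with-no-locality-hint idea from the abstract above. The hint is represented only as a flag and a comment; the actual instruction and its cache behavior are hardware features, so everything here is an illustrative assumption.

```c
#include <stddef.h>
#include <stdio.h>

/* Software model of a gather with a "no-locality" hint: the hint tells the
 * hardware not to install the fetched lines in the cache hierarchy.  Here it
 * is just a flag; real hardware would use non-temporal accesses instead. */
static void gather_no_locality(float *dst, const float *base,
                               const int *indices, size_t n, int no_locality_hint)
{
    for (size_t i = 0; i < n; i++) {
        /* With the hint set, a real implementation would bypass or
         * minimally perturb the caches when touching base[indices[i]]. */
        (void)no_locality_hint;
        dst[i] = base[indices[i]];
    }
}

int main(void)
{
    float table[16];
    for (int i = 0; i < 16; i++) table[i] = (float)i;
    int   idx[4] = { 3, 9, 0, 12 };
    float out[4];
    gather_no_locality(out, table, idx, 4, /*no_locality_hint=*/1);
    printf("%.0f %.0f %.0f %.0f\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```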
  • Patent number: 10204049
    Abstract: Methods and apparatus relating to improving the value of F-state by increasing a local caching agent's data forwarding are described. In one embodiment, the opportunity for forwarding from a local caching agent is improved by allowing the local caching agent to keep an F-state copy of the line while sending an S-state copy to the requestor (e.g., in response to a non-ownership read operation). Other embodiments are also disclosed.
    Type: Grant
    Filed: January 6, 2012
    Date of Patent: February 12, 2019
    Assignee: Intel Corporation
    Inventors: Vedaraman Geetha, Jeffrey D. Chamberlain, Sailesh Kottapalli, Ganesh Kumar, Henk G. Neefs, Neil J. Achtman, Bongjin Jung
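A minimal C sketch of the F-state forwarding behavior described in the abstract above: on a non-ownership read, the local caching agent keeps its F-state copy and hands the requestor an S-state copy. The state names follow MESIF; the data structures and function are illustrative assumptions, not the patented logic.

```c
#include <stdio.h>

/* MESIF-style line states. */
enum line_state { INVALID, SHARED, EXCLUSIVE, MODIFIED, FORWARD };

struct cache_line { enum line_state state; };

/* Non-ownership read serviced by a local caching agent.  Instead of
 * degrading its own copy, the agent keeps the F-state copy (so it can keep
 * forwarding later reads) and gives the requestor an S-state copy. */
static enum line_state serve_read(struct cache_line *local)
{
    if (local->state == FORWARD || local->state == EXCLUSIVE) {
        local->state = FORWARD;   /* retain forwarding rights locally */
        return SHARED;            /* requestor receives an S copy     */
    }
    return INVALID;               /* must fetch from memory/home agent */
}

int main(void)
{
    struct cache_line l = { FORWARD };
    enum line_state requestor = serve_read(&l);
    printf("local=%d requestor=%d\n", l.state, requestor); /* FORWARD, SHARED */
    return 0;
}
```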
  • Patent number: 10205673
    Abstract: Disclosed is a data caching method, comprising: according to an input port number of a cell, storing the cell in a corresponding first-in first-out queue; determining that a cell to be dequeued can be dequeued in the current Kth cycle, scheduling for the cell to be dequeued to be dequeued, acquiring the actual value of the number of splicing units occupied by the cell to be dequeued, and storing the cell to be dequeued in a register the same number of bits wide as a bus in a cell splicing manner, wherein determining that the cell to be dequeued can be dequeued is conducted in accordance with the fact that a first back pressure count value of the (K−1)th cycle is less than or equal to a first preset threshold value, and the first back pressure count value of the (K−1)th cycle is obtained in accordance with an estimated value of the number of the splicing units occupied when the previous cell to be dequeued is dequeued, the number of splicing units capable of being transmitted by the bus in each cycle, and a fi
    Type: Grant
    Filed: April 28, 2015
    Date of Patent: February 12, 2019
    Assignee: Sanechips Technology Co. Ltd.
    Inventors: Jiao Zhao, Mingliang Lai, Haoxuan Tian, Yanrui Chang
  • Patent number: 10198261
    Abstract: A method of performing memory synchronization operations is provided that includes receiving, at a programmable cache controller in communication with one or more caches, an instruction in a first language to perform a memory synchronization operation of synchronizing a plurality of instruction sequences executing on a processor, mapping the received instruction in the first language to one or more selected cache operations in a second language executable by the cache controller and executing the one or more cache operations to perform the memory synchronization operation. The method further comprises receiving a second mapping that provides mapping instructions to map the received instruction to one or more other cache operations, mapping the received instruction to one or more other cache operations and executing the one or more other cache operations to perform the memory synchronization operation.
    Type: Grant
    Filed: April 11, 2016
    Date of Patent: February 5, 2019
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Shuai Che, Marc S. Orr, Bradford M. Beckmann
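A short C sketch of the programmable mapping described in the abstract above: a high-level synchronization request expands into a sequence of low-level cache operations, and a "second mapping" can be installed at runtime to change that expansion. The operation names and table layout are illustrative assumptions.

```c
#include <stdio.h>

/* High-level synchronization request ("first language"). */
enum sync_op { SYNC_ACQUIRE, SYNC_RELEASE };

/* Low-level cache operations ("second language") the controller can run. */
enum cache_op { CO_NONE, CO_INVALIDATE_ALL, CO_WRITEBACK_DIRTY, CO_FLUSH_ALL };

#define MAX_STEPS 3

/* Programmable mapping: each sync op expands to up to MAX_STEPS cache ops.
 * Installing a new row models receiving a "second mapping". */
static enum cache_op mapping[2][MAX_STEPS] = {
    [SYNC_ACQUIRE] = { CO_INVALIDATE_ALL, CO_NONE, CO_NONE },
    [SYNC_RELEASE] = { CO_WRITEBACK_DIRTY, CO_NONE, CO_NONE },
};

static void remap(enum sync_op op, const enum cache_op steps[MAX_STEPS])
{
    for (int i = 0; i < MAX_STEPS; i++) mapping[op][i] = steps[i];
}

static void execute(enum sync_op op)
{
    static const char *name[] = { "none", "invalidate-all", "writeback-dirty", "flush-all" };
    for (int i = 0; i < MAX_STEPS && mapping[op][i] != CO_NONE; i++)
        printf("cache op: %s\n", name[mapping[op][i]]);
}

int main(void)
{
    execute(SYNC_RELEASE);                        /* default expansion        */
    enum cache_op stronger[MAX_STEPS] = { CO_FLUSH_ALL, CO_NONE, CO_NONE };
    remap(SYNC_RELEASE, stronger);                /* install a second mapping */
    execute(SYNC_RELEASE);
    return 0;
}
```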
  • Patent number: 10185639
    Abstract: An example computer-implemented method for performing failover operations in a data storage system is described herein. The data storage system can include a first storage controller and a second storage controller for processing input/output (“I/O”) operations for the data storage system. The method can include, in response to a failure of the first storage controller, performing failover operations with the second storage controller, and processing the I/O operations with the second storage controller. The failover operations can include preparing a disk subsystem layer for I/O operations, preparing a device manager layer for the I/O operations, and preparing a network layer for the I/O operations. The disk subsystem, device manager, and network layers can be prepared for the I/O operations without dependencies. In particular, preparation of the network layer is not dependent on preparation of the disk subsystem layer or the device manager layer.
    Type: Grant
    Filed: May 2, 2016
    Date of Patent: January 22, 2019
    Assignee: AMERICAN MEGATRENDS, INC.
    Inventors: Paresh Chatterjee, Vijayarankan Muthirisavenugopal, Jomy Jose Maliakal, Sharon Samuel Enoch
  • Patent number: 10186302
    Abstract: A semiconductor system includes a controller. The controller is configured to have a write buffer that stores first write data outputted from a host before the first write data is written into a memory circuit. The controller is configured to write the first write data stored in the write buffer into the memory circuit under a first condition and configured to double-write the first write data stored in the write buffer into the memory circuit under a second condition.
    Type: Grant
    Filed: July 19, 2017
    Date of Patent: January 22, 2019
    Assignee: SK hynix Inc.
    Inventors: Soojin Kim, Sangwon Lee
  • Patent number: 10180796
    Abstract: A memory system includes: a plurality of first memory devices directly or indirectly coupled to one another, each first memory device including a first memory and a first memory controller suitable for controlling the first memory to store data; a second memory device including a second memory and a second memory controller suitable for controlling the second memory to store data; and a multi-processor including a plurality of processors, each processor executing an operating system (OS) and an application to access a data storage memory through the first and second memory devices.
    Type: Grant
    Filed: October 14, 2016
    Date of Patent: January 15, 2019
    Assignee: SK Hynix Inc.
    Inventors: Chang-Hyun Kim, Min-Chang Kim, Do-Yun Lee, Yong-Woo Lee, Jae-Jin Lee, Hoe-Kwon Jung
  • Patent number: 10175992
    Abstract: Systems and methods are disclosed for initialization of a processor. Embodiments relate to alleviating any BIOS code size limitation. In one example, a system includes a memory having stored thereon a basic input/output system (BIOS) program comprising a readable code region and a readable and writeable data stack, a circuit coupled to the memory and to: read, during a boot mode and while using a cache as RAM (CAR), at least one datum from each cache line of the data stack, and write at least one byte of each cache line of the data stack to set a state of each cache line of the data stack to modified, enter a no-modified-data-eviction mode to protect modified data from eviction, and to allow eviction and replacement of readable data, and begin reading from the readable code region and executing the BIOS program after entering the no-modified-data-eviction mode.
    Type: Grant
    Filed: October 1, 2016
    Date of Patent: January 8, 2019
    Assignee: Intel Corporation
    Inventors: Leon Polishuk, Pavel Konev, Larisa Novakovsky, Julius Mandelblat
  • Patent number: 10169236
    Abstract: A cache coherency controller comprises a directory indicating, for memory addresses cached by one or more of a group of one or more cache memories connectable in a coherent cache structure, which of the cache memories are caching those memory addresses; and control circuitry configured to detect a directory entry relating to a memory address to be accessed so as to coordinate, amongst the cache memories, an access to a memory address by one of the cache memories or a coherent agent in instances when the directory entry indicates that another of the cache memories is caching that memory address; the control circuitry being responsive to status data indicating whether each cache memory in the group is currently subject to cache coherency control so as to take into account, in the detection of the directory entry relating to the memory address to be accessed, only those cache memories in the group which are currently subject to cache coherency control.
    Type: Grant
    Filed: April 20, 2016
    Date of Patent: January 1, 2019
    Assignee: ARM Limited
    Inventors: Sean James Salisbury, Andrew David Tune
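A minimal C sketch of the filtering step described in the abstract above: the directory's sharer vector is masked by a status bitmap so that only caches currently subject to coherency control are considered when deciding which caches to snoop. Structure names and the bitmap encoding are illustrative assumptions.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_CACHES 8

/* One directory entry: bit i set => cache i holds a copy of the line. */
struct dir_entry { uint64_t tag; uint8_t sharers; };

/* Status bitmap: bit i set => cache i is currently subject to cache
 * coherency control.  Caches that have opted out are ignored. */
static uint8_t coherent_mask = 0xFF;

/* Return the set of caches that must be snooped for an access to `tag`,
 * taking into account only caches currently under coherency control. */
static uint8_t sharers_to_snoop(const struct dir_entry *e, uint64_t tag)
{
    if (e->tag != tag)
        return 0;
    return e->sharers & coherent_mask;
}

int main(void)
{
    struct dir_entry e = { .tag = 0x1000, .sharers = 0x0F }; /* caches 0-3 share */
    coherent_mask = 0x0D;          /* cache 1 is no longer coherency-controlled  */
    printf("snoop mask = %#x\n", sharers_to_snoop(&e, 0x1000)); /* prints 0xd    */
    return 0;
}
```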
  • Patent number: 10157139
    Abstract: Aspects include computing devices, apparatus, and methods implemented by the apparatus for implementing asynchronous cache maintenance operations on a computing device, including activating a first asynchronous cache maintenance operation, determining whether an active address of a memory access request to a cache is in a first range of addresses of the first active asynchronous cache maintenance operation, and queuing the first active asynchronous cache maintenance operation as the first asynchronous cache maintenance operation in a fixup queue in response to determining that the active address is in the first range of addresses.
    Type: Grant
    Filed: September 19, 2016
    Date of Patent: December 18, 2018
    Assignee: QUALCOMM Incorporated
    Inventor: Andrew Edmund Turner
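A brief C sketch of the queueing decision described in the abstract above: when a memory access falls inside the address range of an active asynchronous cache maintenance operation, that operation is placed in a fixup queue. The range check, queue depth, and names are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* An active asynchronous cache maintenance operation over [start, end). */
struct acmo { uint64_t start, end; const char *what; };

#define FIXUP_DEPTH 4
static struct acmo fixup_queue[FIXUP_DEPTH];
static int fixup_count;

static bool access_hits_active_op(uint64_t addr, const struct acmo *op)
{
    return addr >= op->start && addr < op->end;
}

/* On a cache access, defer a colliding maintenance operation to the fixup
 * queue so the access does not observe partially maintained lines. */
static void on_cache_access(uint64_t addr, struct acmo *active)
{
    if (active && access_hits_active_op(addr, active) && fixup_count < FIXUP_DEPTH) {
        fixup_queue[fixup_count++] = *active;
        printf("queued '%s' for fixup (access to %#llx)\n",
               active->what, (unsigned long long)addr);
    }
}

int main(void)
{
    struct acmo clean_range = { 0x80000000, 0x80010000, "clean-by-range" };
    on_cache_access(0x80004000, &clean_range);   /* inside range -> queued    */
    on_cache_access(0x90000000, &clean_range);   /* outside range -> ignored  */
    return fixup_count == 1 ? 0 : 1;
}
```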
  • Patent number: 10154092
    Abstract: A network of PCs includes an I/O channel adapter and network adapter, and is configured for management of a distributed cache memory stored in the plurality of PCs interconnected by the network. The use of standard PCs reduces the cost of the data storage system. The use of the network of PCs permits building large, high-performance, data storage systems.
    Type: Grant
    Filed: January 15, 2016
    Date of Patent: December 11, 2018
    Assignee: LS CLOUD STORAGE TECHNOLOGIES, LLC
    Inventor: Ilya Gertner
  • Patent number: 10148318
    Abstract: A method and a system for communicating personal health data in a Near Field Communication (NFC) environment are provided. An NFC manager sets control information in an NFC Data Exchange Format (NDEF) for providing synchronized communication of personal health data between the NFC manager and an NFC agent. The control information may include a direction flag, a state flag, sequence identifier field, and request/response flag. The NFC manager writes the NDEF format including the control information and payload data into an NFC tag associated with the NFC agent. Subsequently, the NFC manager reads the NDEF record stored in the NFC tag and determines whether the NDEF record is written into the NFC tag by the NFC agent based on the control information in the read NDEF format. Accordingly, the NFC manager repeats the above mentioned steps if the NDEF record includes payload data of the NFC agent.
    Type: Grant
    Filed: October 25, 2011
    Date of Patent: December 4, 2018
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jayabharath Reddy Badvel, Thenmozhi Arunan, Eun-Tae Won
  • Patent number: 10133641
    Abstract: A computer system comprises a processor unit arranged to run a hypervisor running one or more virtual machines; a cache connected to the processor unit and comprising a plurality of cache rows, each cache row comprising a memory address, a cache line and an image modification flag; and a memory connected to the cache and arranged to store an image of at least one virtual machine. The processor unit is arranged to define a log in the memory and the cache further comprises a cache controller arranged to set the image modification flag for a cache line modified by a virtual machine being backed up, but not for a cache line modified by the hypervisor operating in privilege mode; periodically check the image modification flags; and write only the memory address of the flagged cache rows in the defined log.
    Type: Grant
    Filed: November 7, 2017
    Date of Patent: November 20, 2018
    Assignee: International Business Machines Corporation
    Inventors: Guy Lynn Guthrie, Naresh Nayar, Geraint North, Hugh Shen, William Starke, Phillip Williams
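A minimal C sketch of the logging scheme described in the abstract above (and in the two related entries further down): the image modification flag is set only for writes made by the virtual machine being backed up, and a periodic sweep writes only the addresses of flagged rows to the log. Field and function names are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define ROWS 4

/* One cache row: address, (data line omitted), and an image-modification
 * flag set only for writes by the VM being backed up, never for writes by
 * the hypervisor operating in privilege mode. */
struct cache_row { uint64_t addr; bool img_modified; };

static struct cache_row cache[ROWS];
static uint64_t log_buf[ROWS];
static int log_len;

static void cache_write(int row, uint64_t addr, bool from_hypervisor)
{
    cache[row].addr = addr;
    if (!from_hypervisor)
        cache[row].img_modified = true;   /* only guest writes dirty the image */
}

/* Periodic sweep: record only the addresses of flagged rows in the log. */
static void sweep_flags(void)
{
    for (int i = 0; i < ROWS; i++)
        if (cache[i].img_modified) {
            log_buf[log_len++] = cache[i].addr;
            cache[i].img_modified = false;
        }
}

int main(void)
{
    cache_write(0, 0x1000, false);   /* guest write: will be logged         */
    cache_write(1, 0x2000, true);    /* hypervisor write: not logged        */
    sweep_flags();
    printf("logged %d address(es), first %#llx\n",
           log_len, (unsigned long long)log_buf[0]);
    return 0;
}
```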
  • Patent number: 10126964
    Abstract: Apparatus and method for managing map data in a data storage device. A programmable processor issues a find command to locate and place a requested map page of a map structure into a first cache to service a received host command. A non-programmable hardware circuit searches a forward table to determine whether the requested map page is in a second cache, and if so, loads the map page to the first cache. If not, the hardware circuit requests the requested map page from a back end processor which retrieves the requested map page from a non-volatile memory (NVM), such as a flash memory array. The hardware circuit searches a reverse table and the first cache to select a candidate location in the second cache for the retrieved requested map page from the NVM, and directs the storage of a copy of the requested map page at the candidate location.
    Type: Grant
    Filed: May 25, 2017
    Date of Patent: November 13, 2018
    Assignee: Seagate Technology LLC
    Inventors: Jeffrey Munsil, Jackson Ellis, Ryan J. Goss
  • Patent number: 10082968
    Abstract: A data storage system and associated method are provided wherein a policy engine continuously collects qualitative information about a network load to the data storage system in order to dynamically characterize the load and continuously correlates the load characterization to the content of a command queue of transfer requests for writeback commands and host read commands, selectively limiting the content with respect to writeback commands to only those transfer requests for writeback data that are selected on a physical zone basis of a plurality of predefined physical zones of a storage media.
    Type: Grant
    Filed: May 2, 2016
    Date of Patent: September 25, 2018
    Assignee: Seagate Technology LLC
    Inventors: Clark Edward Lubbers, Robert Michael Lester
  • Patent number: 10078589
    Abstract: Interconnect circuitry and a method of operating the interconnect circuitry are provided, where the interconnect circuitry is suitable to couple at least two master devices to a memory, each comprising a local cache. Any access to the memory mediated by the interconnect circuitry is policed by a memory protection controller situated between the interconnect circuitry and the memory. The interconnect circuitry modifies a coherency type associated with a memory transaction received from one of the master devices to a type which ensures that when a modified version of a copy of a transaction target specified by the issuing master device is stored in a local cache of another master device an access to the transaction target in the memory must take place and therefore must be policed by the memory protection controller.
    Type: Grant
    Filed: April 30, 2015
    Date of Patent: September 18, 2018
    Assignee: ARM Limited
    Inventors: Daniel Sara, Antony John Harris, Håkan Lars-Göran Persson, Andrew Christopher Rose, Ian Bratt
  • Patent number: 10075741
    Abstract: A system for layered local caching of downstream shared media in a hierarchical tree network arrangement includes a first network node on a first distribution network having a first caching controller. The first network node configured to store a video segment transmitted on the first distribution network based on a first instruction received by the first caching controller from a central caching controller communicatively coupled to the first distribution network. The central caching controller is located upstream from the first network node. The system includes a second network node on a second distribution network having a second caching controller and communicatively coupled to the first network node. The second network node configured to store a video segment transmitted on the second distribution network based on a second instruction received by the second caching controller from the first caching controller.
    Type: Grant
    Filed: July 22, 2013
    Date of Patent: September 11, 2018
    Assignee: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.
    Inventors: Yong Li, Xuemin Chen
  • Patent number: 10073641
    Abstract: Cluster families for cluster selection and cooperative replication are created. The clusters are grouped into family members of a cluster family based on their relationships and roles. Members of the cluster family determine which family member is in the best position to obtain replicated information and become cumulatively consistent within their cluster family. Once the cluster family becomes cumulatively consistent, the data is shared within the cluster family so that all copies within the cluster family are consistent.
    Type: Grant
    Filed: May 1, 2017
    Date of Patent: September 11, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Thomas W. Bish, Takeshi Nohta, Joseph M. Swingler, Rufus-John Y. Twito
  • Patent number: 10075473
    Abstract: A computer implemented method and apparatus comprises detecting a file content update on a first client computer system, the file to be synchronized on a plurality of different types of client computer systems in a plurality of formats. The method further comprises associating a security policy with the file, wherein the security policy includes restrictions to limit one or more actions that can be performed with the file, and synchronizing the file to a second client computing system while applying the security policy to provide controls for enforcement of the restrictions at the second client computer system.
    Type: Grant
    Filed: February 12, 2015
    Date of Patent: September 11, 2018
    Assignee: BlackBerry Limited
    Inventors: Adi Ruppin, Doron Peri, Yigal Ben-Natan, Gil S. Shidlansik, Miron Liram, Ori Saporta, David Potashinsky, Uri Yulevich, Timothy Choi
  • Patent number: 10073649
    Abstract: A method and a system for storing metadata. The method includes requesting an update of metadata from an external source. The method includes storing updated metadata to a fast storage medium using an update thread. The method further includes moving the updated metadata from the fast storage medium to a slow storage medium using a flush thread.
    Type: Grant
    Filed: July 24, 2014
    Date of Patent: September 11, 2018
    Assignee: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
    Inventor: David Michael DeJong
  • Patent number: 10067870
    Abstract: An apparatus and method are described for low overhead synchronous page table updates.
    Type: Grant
    Filed: April 1, 2016
    Date of Patent: September 4, 2018
    Assignee: Intel Corporation
    Inventors: Kshitij A. Doshi, Christopher J. Hughes
  • Patent number: 10061918
    Abstract: In one embodiment, a processor comprises: a first storage including a plurality of entries to store an address of a portion of a memory in which information has been modified; a second storage to store an identifier of a process for which information is to be stored into the first storage; and a first logic to identify a modification to a first portion of the memory and store a first address of the first portion of the memory in a first entry of the first storage, responsive to a determination that a current identifier of a current process corresponds to the identifier stored in the second storage. Other embodiments are described and claimed.
    Type: Grant
    Filed: April 1, 2016
    Date of Patent: August 28, 2018
    Assignee: Intel Corporation
    Inventors: Salmin Sultana, David M. Durham, Michael Lemay, Karanvir S. Grewal, Ravi L. Sahita
  • Patent number: 10061700
    Abstract: A method for writing data, the method may include: receiving or generating, by an interfacing module, a data unit coherent write request for performing a coherent write operation of a data unit to a first address; receiving, by the interfacing module and from a circuit that comprises a cache and a cache controller, a cache coherency indicator that indicates that a most updated version of the content stored at the first address is stored in the cache; and instructing, by the interfacing module, the cache controller to invalidate a cache line of the cache that stored the most updated version of the first address without sending the most updated version of the content stored at the first address from the cache to a memory module that differs from the cache if a length of the data unit equals a length of the cache line.
    Type: Grant
    Filed: August 5, 2016
    Date of Patent: August 28, 2018
    Assignee: Amazon Technologies, Inc.
    Inventors: Adi Habusha, Gil Stoler, Said Bshara, Nafea Bshara
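A short C sketch of the decision described in the abstract above: when the cache holds the most up-to-date copy and the incoming coherent write covers a full cache line, the line is simply invalidated rather than written back first, since none of its contents survive the write. The line size and function name are illustrative assumptions.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define LINE_SIZE 64

/* Decide how to handle a coherent write when the cache holds the most
 * up-to-date copy of the target line.  A full-line write makes the cached
 * copy entirely stale, so writing it back first would waste bandwidth. */
static const char *coherent_write_policy(bool cache_has_latest, size_t write_len)
{
    if (!cache_has_latest)
        return "write straight to memory";
    if (write_len == LINE_SIZE)
        return "invalidate cache line, skip writeback";
    return "write back cache line, then merge partial write";
}

int main(void)
{
    printf("%s\n", coherent_write_policy(true, 64));  /* full-line case    */
    printf("%s\n", coherent_write_policy(true, 16));  /* partial-line case */
    return 0;
}
```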
  • Patent number: 10055259
    Abstract: A method for performing processor resource allocation in an electronic device is provided, where the method may include the steps of: obtaining task-related information to determine whether a task of a plurality of tasks is a heavy task (e.g. the heavy task may correspond to heavier loading than others of the plurality of tasks), to selectively utilize a specific processor core within a plurality of processor cores to perform the task, and determining whether at least one scenario task exists within others of the plurality of tasks, to selectively determine according to application requirements a minimum processor core count and a minimum operating frequency for performing the at least one scenario task; and performing processor resource allocation according to a power table and system loading, to perform any remaining portion of the plurality of tasks. An apparatus for performing processor resource allocation according to the above method is provided.
    Type: Grant
    Filed: December 14, 2015
    Date of Patent: August 21, 2018
    Assignee: MEDIATEK INC.
    Inventors: Tzu-Jen Lo, Yu-Ming Lin, Jia-Ming Chen, Ya-Ting Chang, Nicholas Ching Hui Tang, Yin Chen, Hung-Lin Chou
  • Patent number: 9998378
    Abstract: A traffic control method and device are provided. The method includes receiving traffic monitoring information of a first service flow reported by a reaction point RP; when a congestion state of a first service flow of a congestion point CP satisfies a congestion condition, determining a reaction point RP needing traffic adjustment from a designated reaction point RP according to the received traffic monitoring information, and calculating, according to the traffic monitoring information, a new traffic value of a first service flow of each reaction point RP needing traffic adjustment; sending each calculated new traffic value of the first service flow to a corresponding reaction point RP needing traffic adjustment, so that the reaction point RP performs traffic control on the first service flow of the reaction point RP according to the new traffic value.
    Type: Grant
    Filed: March 25, 2015
    Date of Patent: June 12, 2018
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Xing Hu, Xiaojun Shen, Zhiyun Chen
  • Patent number: 9996398
    Abstract: An application processor includes a first core and a second core. The first core is configured to implement a scheduler which monitors a workload of a task of the first core, and the first core is further configured to implement an idle checker which determines whether the second core is idle.
    Type: Grant
    Filed: April 26, 2016
    Date of Patent: June 12, 2018
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Gyeong Taek Lee, Seung Kyu Kim, Kyung Min Park, Jong Lae Park, Ji Eun Park
  • Patent number: 9990287
    Abstract: An apparatus and method are described for efficiently transferring data from a core of a central processing unit (CPU) to a graphics processing unit (GPU). For example, one embodiment of a method comprises: writing data to a buffer within the core of the CPU until a designated amount of data has been written; upon detecting that the designated amount of data has been written, responsively generating an eviction cycle, the eviction cycle causing the data to be transferred from the buffer to a cache accessible by both the core and the GPU; setting an indication to indicate to the GPU that data is available in the cache; and upon the GPU detecting the indication, providing the data to the GPU from the cache upon receipt of a read signal from the GPU.
    Type: Grant
    Filed: December 21, 2011
    Date of Patent: June 5, 2018
    Assignee: Intel Corporation
    Inventors: Shlomo Raikin, Raanan Sade, Robert Valentine, Julius Yuli Mandelblat, Ron Shalev, Larisa Novakovsky
  • Patent number: 9992299
    Abstract: Technologies for identifying a cache line of a network packet for eviction from an on-processor cache of a network device communicatively coupled to a network controller. The network device is configured to determine whether a cache line of the cache corresponding to the network packet is to be evicted from the cache based on a determination that the network packet is not needed subsequent to processing the network packet, and provide an indication that the cache line is to be evicted from the cache based on an eviction policy received from the network controller.
    Type: Grant
    Filed: February 7, 2017
    Date of Patent: June 5, 2018
    Assignee: Intel Corporation
    Inventors: Ren Wang, Sameh Gobriel, Christian Maciocco, Tsung-Yuan C. Tai, Ben-Zion Friedman, Hang T. Nguyen, Namakkal N. Venkatesan, Michael A. O'Hanlon, Shrikant M. Shah, Sanjeev Jain
  • Patent number: 9983820
    Abstract: In an embodiment, a method for re-programming memory is disclosed. In the embodiment, the method involves selecting a memory page based on version information and re-programming the selected memory page using cyclic redundancy check (CRC) data for the memory page.
    Type: Grant
    Filed: March 29, 2016
    Date of Patent: May 29, 2018
    Assignee: NXP B.V.
    Inventors: Sönke Ostertun, Wolfgang Stidl, Raffaele Costa
  • Patent number: 9965889
    Abstract: A ray tracing core includes a ray generation unit and a plurality of T&I (Traversal & Intersection) units with MIMD (Multiple Instruction stream Multiple Data stream) architecture. The ray generation unit generates at least one eye ray based on eye ray generation information. The eye ray generation information includes a screen coordinate value. Each of the plurality of T&I units receives the at least one eye ray and checks whether there exists a triangle intersected with the received at least one eye ray. The triangle configures a space.
    Type: Grant
    Filed: March 23, 2016
    Date of Patent: May 8, 2018
    Assignees: SILICONARTS, INC., INDUSTRY-ACADEMIA COOPERATION FOUNDATION OF SEJONG UNIVERSITY
    Inventors: Jin Suk Hur, Woo Chan Park
  • Patent number: 9965220
    Abstract: Various aspects include methods for managing memory subsystems on a computing device. Various aspect methods may include determining a period of time to force a memory subsystem on the computing device into a low power mode, inhibiting memory access requests to the memory subsystem during the determined period of time, forcing the memory subsystem into the low power mode for the determined period of time, and executing the memory access requests to the memory subsystem inhibited during the determined period of time in response to expiration of the determined period of time.
    Type: Grant
    Filed: February 5, 2016
    Date of Patent: May 8, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Olivier Alavoine, Sejoong Lee, Tauseef Kazi, Simon Booth, Edoardo Regini, Renatas Jakushokas, Saurabh Patodia, Jeffrey Gemar, Haw-Jing Lo, Vinod Chamarty, Boris Andreev, Tao Shen, Aravind Bhaskara, Wenbiao Wang, Stephen Molloy
  • Patent number: 9940071
    Abstract: A memory system includes a non-volatile memory and a controller circuit. The controller circuit is configured to carry out an atomic write operation in the non-volatile memory in response to an atomic write command, and selectively carry out one of a first operation and a second operation corresponding to address mapping between a logical address and a physical address of the non-volatile memory, along with the atomic write operation. When the first operation is selected, the controller circuit starts to update the address mapping after receiving a notification that writing of all data of the atomic write operation has been completed. When the second operation is carried out, the controller circuit starts to update the address mapping before receiving the notification.
    Type: Grant
    Filed: August 22, 2016
    Date of Patent: April 10, 2018
    Assignee: Toshiba Memory Corporation
    Inventors: Hiroyuki Nemoto, Shunitsu Kohara, Kazuya Kitsunai, Satoshi Arai
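A minimal C sketch of the "second operation" described in the abstract above: the controller starts updating the logical-to-physical mapping before the all-data-written notification, keeping the old entries so an aborted atomic write can be rolled back (the "first operation" would instead defer the update until after the notification). Table layout and names are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdio.h>

#define N_LBAS 8

/* Logical-to-physical mapping table and a shadow of the pre-atomic state. */
static int l2p[N_LBAS];
static int shadow[N_LBAS];

/* Second operation: update the mapping eagerly, remembering the old entry
 * so atomicity can be preserved if the atomic write never completes. */
static void atomic_map_early(int lba, int new_ppa)
{
    shadow[lba] = l2p[lba];   /* remember old physical address */
    l2p[lba]    = new_ppa;    /* eager update                  */
}

static void atomic_finish(bool completed, int lba)
{
    if (!completed)
        l2p[lba] = shadow[lba];   /* roll back: atomic guarantee preserved */
}

int main(void)
{
    l2p[3] = 100;
    atomic_map_early(3, 200);
    atomic_finish(false, 3);                 /* the atomic write was aborted */
    printf("lba 3 -> ppa %d\n", l2p[3]);     /* prints 100                   */
    return 0;
}
```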
  • Patent number: 9940236
    Abstract: A first pointer dereferencer receives a location of a portion of a first node of a data structure. The first node is to be stored in a first storage element. A first pointer is obtained from the first node of the data structure. A location of a portion of a second node of the data structure is determined based on the first pointer. The second node is to be stored in a second storage element. The location of the portion of the second node of the data structure is sent to a second pointer dereferencer that is to access the portion of the second node from the second storage element.
    Type: Grant
    Filed: December 17, 2014
    Date of Patent: April 10, 2018
    Assignee: Intel Corporation
    Inventors: Mark A. Anders, Himanshu Kaul, Gregory K. Chen
  • Patent number: 9928119
    Abstract: In a multithreaded data processing system including a plurality of processor cores, storage-modifying requests of a plurality of concurrently executing hardware threads are received in a shared queue. The storage-modifying requests include a translation invalidation request of an initiating hardware thread. The translation invalidation request is removed from the shared queue and buffered in sidecar logic in one of a plurality of sidecars each associated with a respective one of the plurality of hardware threads. While the translation invalidation request is buffered in the sidecar, the sidecar logic broadcasts the translation invalidation request so that it is received and processed by the plurality of processor cores. In response to confirmation of completion of processing of the translation invalidation request by the initiating processor core, the sidecar logic removes the translation invalidation request from the sidecar.
    Type: Grant
    Filed: March 28, 2016
    Date of Patent: March 27, 2018
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Hugh Shen, Derek E. Williams
  • Patent number: 9910968
    Abstract: A content management system can detect file events that are suspected to be in error, and notify users having access to files affected by the detected file events of the detected events. The content management system can maintain a log of file events including a plurality of file identifiers. The file identifiers identify files that are associated with a namespace, a file event, and a user account responsible for the file event. An analytics module can analyze the log of file events and notify the user of a suspected error when the file events appear to have been inadvertent. A notification can include a link to restore (undo) the file events if the user confirms that the file events were in error.
    Type: Grant
    Filed: December 30, 2015
    Date of Patent: March 6, 2018
    Assignee: Dropbox, Inc.
    Inventors: Matt Eccleston, Stacey Sern, Marcio von Muhlen
  • Patent number: 9898418
    Abstract: A processor including a translation lookaside buffer (TLB), an instruction translator, and a memory subsystem. The TLB caches virtual to physical address translations. The instruction translator incorporates a microinstruction set for the processor that includes a single invalidate page instruction. The invalidate page instruction, when executed by the processor, causes the processor to perform a pseudo translation process in which a virtual address is submitted to the TLB to identify matching entries in the TLB that match the virtual address. The memory subsystem invalidates the matching entries in the TLB. The TLB may include a data TLB and an instruction TLB. The memory subsystem may include a tablewalk engine that performs a pseudo tablewalk to invalidate entries in the TLB and in one or more paging caches. The invalidate page instruction may specify invalidation of only those entries indicated as local.
    Type: Grant
    Filed: May 21, 2015
    Date of Patent: February 20, 2018
    Assignee: VIA ALLIANCE SEMICONDUCTOR CO., LTD.
    Inventor: Colin Eddy
  • Patent number: 9898404
    Abstract: An improved garbage collection (“GC”) process configured to recover new blocks from used storage space is disclosed. After initiating the GC process for a flash memory in accordance with at least one of predefined triggering events, a first valid page within a first block marked as an erasable block is identified. Upon determining a first signature representing the content of the first valid page according to a predefined signature generator, the process identifies a second valid page within a second block as a duplicated page of the first valid page in response to the first signature. The process subsequently associates the logical block address (“LBA”) of the first valid page to the second valid page. In an alternative embodiment, page compression and sequential order of page arrangement can also be implemented to further enhance efficiency of garbage collection.
    Type: Grant
    Filed: June 25, 2014
    Date of Patent: February 20, 2018
    Assignee: CNEX LABS
    Inventors: Yiren Ronnie Huang, Aaron Huang
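A short C sketch of the deduplicating garbage-collection step described in the abstract above: compute a signature for a valid page in an erasable block, look for another valid page with the same content, and if one exists repoint the LBA to it instead of copying the page forward. The signature generator (FNV-1a here) and all names are illustrative assumptions; the patent leaves the generator unspecified.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 32
#define N_PAGES   4

struct page { int valid; int lba; uint8_t data[PAGE_SIZE]; };

/* Toy signature generator (FNV-1a) standing in for the predefined one. */
static uint64_t signature(const uint8_t *d)
{
    uint64_t h = 1469598103934665603ULL;
    for (int i = 0; i < PAGE_SIZE; i++) { h ^= d[i]; h *= 1099511628211ULL; }
    return h;
}

/* During GC of an erasable block, if another valid page already holds the
 * same content, remap the LBA to it instead of copying the page forward. */
static int gc_relocate(struct page pool[], int victim, int *lba_map)
{
    uint64_t sig = signature(pool[victim].data);
    for (int i = 0; i < N_PAGES; i++) {
        if (i != victim && pool[i].valid && signature(pool[i].data) == sig) {
            lba_map[pool[victim].lba] = i;   /* point LBA at the duplicate  */
            pool[victim].valid = 0;          /* no copy needed              */
            return i;
        }
    }
    return -1;                               /* no duplicate: copy as usual */
}

int main(void)
{
    struct page pool[N_PAGES] = {0};
    int lba_map[16] = {0};
    pool[0] = (struct page){ 1, 5, "hello" };
    pool[2] = (struct page){ 1, 9, "hello" };          /* duplicated content */
    printf("remapped to page %d\n", gc_relocate(pool, 0, lba_map));
    return 0;
}
```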
  • Patent number: 9891938
    Abstract: The present disclosure is related to methods, systems, and machine-readable media for modifying an instance catalog to perform operations. A storage system can include a plurality of packfiles that store data. The storage system can include a plurality of streams that include a plurality of hashes that identify the plurality of packfiles. The storage system can include an instance catalog that includes an identification of the plurality of streams. The storage system can include an operation engine to perform a number of operations on the plurality of packfiles by modifying the instance catalog using the identification of the plurality of streams.
    Type: Grant
    Filed: June 26, 2015
    Date of Patent: February 13, 2018
    Assignee: VMware, Inc.
    Inventors: David W. Barry, Joanne Ren, Mike Zucca, Keith Farkas
  • Patent number: 9892072
    Abstract: Interconnect circuitry for connecting transaction masters to transaction slaves includes response modification circuitry. The response modification circuitry includes shortlist buffer circuitry storing identification for modification target transaction responses. The response modification circuitry uses this identification data to identify among a stream of transaction responses in transit a modification target transaction response. The response modification circuitry then serves to form a modified transaction response to be sent in place of the modification target transaction response to the transaction master.
    Type: Grant
    Filed: October 5, 2015
    Date of Patent: February 13, 2018
    Assignee: ARM Limited
    Inventors: Andrew David Tune, Arthur Brian Laughton, Daniel Adam Sara, Sean James Salisbury, Peter Andrew Riocreux
  • Patent number: 9886350
    Abstract: A computer system comprises a processor unit arranged to run a hypervisor running one or more virtual machines; a cache connected to the processor unit and comprising a plurality of cache rows, each cache row comprising a memory address, a cache line and an image modification flag; and a memory connected to the cache and arranged to store an image of at least one virtual machine. The processor unit is arranged to define a log in the memory and the cache further comprises a cache controller arranged to set the image modification flag for a cache line modified by a virtual machine being backed up, but not for a cache line modified by the hypervisor operating in privilege mode; periodically check the image modification flags; and write only the memory address of the flagged cache rows in the defined log.
    Type: Grant
    Filed: February 19, 2016
    Date of Patent: February 6, 2018
    Assignee: International Business Machines Corporation
    Inventors: Guy Lynn Guthrie, Naresh Nayar, Geraint North, Hugh Shen, William Starke, Phillip Williams
  • Patent number: 9887924
    Abstract: Embodiments of the disclosure provide techniques for measuring congestion and controlling quality of service to a shared resource. A module that interfaces with the shared resource monitors the usage of the shared resource by accessing clients. Upon detecting that the rate of usage of the shared resource has exceeded a maximum rate supported by the shared resource, the module determines and transmits a congestion metric to clients that are currently attempting to access the shared resource. Clients, in turn determine a delay period based on the congestion metric prior to attempting another access of the shared resource.
    Type: Grant
    Filed: August 26, 2013
    Date of Patent: February 6, 2018
    Assignee: VMware, Inc.
    Inventors: William Earl, Christos Karamanolis
  • Patent number: 9880939
    Abstract: According to one embodiment, a memory system includes a nonvolatile memory, a device controller, and a tag memory. The device controller stores a part of a logical-to-physical address translation table (L2P table) stored in the nonvolatile memory in a memory of a host as a cache. The tag memory includes a plurality of entries associated with a plurality of cache lines of the cache. Each entry includes a tag indicating which area of the L2P table is stored in a corresponding cache line, and a plurality of bitmap flags indicating whether a plurality of sub-lines included in the corresponding cache line are valid or not.
    Type: Grant
    Filed: February 5, 2016
    Date of Patent: January 30, 2018
    Assignee: TOSHIBA MEMORY CORPORATION
    Inventors: Konosuke Watanabe, Satoshi Kaburaki, Tetsuhiko Azuma
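A minimal C sketch of the tag-memory lookup described in the abstract above: each entry records which region of the L2P table a host-memory cache line holds, plus one valid bit per sub-line. The direct-mapped indexing, field widths, and names are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SUBLINES_PER_LINE 8
#define NUM_CACHE_LINES   4

/* One tag-memory entry: which area of the L2P table the corresponding
 * cache line (in host memory) holds, plus one valid bit per sub-line. */
struct tag_entry {
    uint32_t tag;       /* L2P region index held by this cache line */
    uint8_t  valid_map; /* bit i => sub-line i of the line is valid */
};

static struct tag_entry tag_mem[NUM_CACHE_LINES];

/* Check whether the translation for `region`/`subline` can be served from
 * the cache the device keeps in host memory. */
static bool l2p_cached(uint32_t region, int subline)
{
    int line = region % NUM_CACHE_LINES;     /* direct-mapped for the sketch */
    return tag_mem[line].tag == region &&
           ((tag_mem[line].valid_map >> subline) & 1);
}

int main(void)
{
    tag_mem[2] = (struct tag_entry){ .tag = 6, .valid_map = 0x03 }; /* sub-lines 0,1 */
    printf("%d %d\n", l2p_cached(6, 1), l2p_cached(6, 5));          /* prints 1 0    */
    return 0;
}
```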
  • Patent number: 9880937
    Abstract: The present invention provides a dynamic set associative cache apparatus for a processor. When read access occurs, the apparatus first determines a valid/invalid bit of each cache block in a cache set to be accessed, and sets, according to the valid/invalid bit of each cache block, an enable/disable bit of a cache way in which the cache block is located; then, reads valid cache blocks, compares a tag section in a memory address with a tag block in each cache block that is read, and if there is a hit, reads data from a data block in a hit cache block according to an offset section of the memory address.
    Type: Grant
    Filed: July 10, 2014
    Date of Patent: January 30, 2018
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Lingjun Fan, Shibin Tang, Da Wang, Hao Zhang, Dongrui Fan
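A short C sketch of the read path described in the abstract above: the valid bit of each block in the addressed set is examined first, only the ways holding valid blocks are enabled for tag comparison, and on a hit the data block is read at the offset. The geometry and names are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define WAYS      4
#define SETS      16
#define BLOCK_LEN 8

struct cache_block { bool valid; uint32_t tag; uint8_t data[BLOCK_LEN]; };
static struct cache_block cache[SETS][WAYS];

/* Read access: check valid bits first and skip (disable) invalid ways, then
 * compare tags only in the enabled ways; on a hit return the requested byte. */
static int cache_read(uint32_t addr, uint8_t *out)
{
    uint32_t offset = addr % BLOCK_LEN;
    uint32_t set    = (addr / BLOCK_LEN) % SETS;
    uint32_t tag    = addr / (BLOCK_LEN * SETS);

    for (int way = 0; way < WAYS; way++) {
        if (!cache[set][way].valid)
            continue;                      /* way disabled: no tag/data read */
        if (cache[set][way].tag == tag) {
            *out = cache[set][way].data[offset];
            return 1;                      /* hit */
        }
    }
    return 0;                              /* miss */
}

int main(void)
{
    uint32_t addr = 0x345;                 /* tag 6, set 8, offset 5 */
    cache[8][1] = (struct cache_block){ .valid = true, .tag = 6 };
    cache[8][1].data[5] = 42;
    uint8_t v = 0;
    printf("hit=%d value=%d\n", cache_read(addr, &v), v);
    return 0;
}
```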
  • Patent number: 9880905
    Abstract: A computer system comprises a processor unit arranged to run a hypervisor running one or more virtual machines; a cache connected to the processor unit and comprising a plurality of cache rows, each cache row comprising a memory address, a cache line and an image modification flag; and a memory connected to the cache and arranged to store an image of at least one virtual machine. The processor unit is arranged to define a log in the memory and the cache further comprises a cache controller arranged to set the image modification flag for a cache line modified by a virtual machine being backed up, but not for a cache line modified by the hypervisor operating in privilege mode; periodically check the image modification flags; and write only the memory address of the flagged cache rows in the defined log.
    Type: Grant
    Filed: July 2, 2014
    Date of Patent: January 30, 2018
    Assignee: International Business Machines Corporation
    Inventors: Guy Lynn Guthrie, Naresh Nayar, Geraint North, Hugh Shen, William Starke, Phillip Williams
  • Patent number: 9858204
    Abstract: A cache device connected to a storage device and connected to a plurality of sources, including a cache unit that relays a read request and a read response between a source and a storage device, and a storage area control unit that stores the source as a first history in association with specification of first data in a read request, and, if the first data are not retained by the cache unit and a storage area sufficient for retaining the first data does not exist, selects data associated with a less number of the sources in the first history as second data in preference to data associated with a greater number of the sources out of the first data or data retained by the cache unit, and, if the second data differ from the first data, causes the cache unit to discard the second data and then store the first data.
    Type: Grant
    Filed: December 9, 2015
    Date of Patent: January 2, 2018
    Assignee: NEC Corporation
    Inventor: Youhei Kajimoto
  • Patent number: 9836400
    Abstract: In an embodiment, a first portion of a cache memory is associated with a first core. This first cache memory portion is of a distributed cache memory, and may be dynamically controlled to be one of a private cache memory for the first core and a shared cache memory shared by a plurality of cores (including the first core) according to an addressing mode, which itself is dynamically controllable. Other embodiments are described and claimed.
    Type: Grant
    Filed: October 31, 2013
    Date of Patent: December 5, 2017
    Assignee: Intel Corporation
    Inventors: Kebing Wang, Zhaojuan Bian, Wei Zhou, Zhihong Wang
  • Patent number: 9836411
    Abstract: A synchronization capability to synchronize updates to page tables by forcing updates in cached entries to be made visible in memory (i.e., in in-memory page table entries). A synchronization instruction is used that ensures after the instruction has completed that updates to the cached entries that occurred prior to the synchronization instruction are made visible in memory. Synchronization may be used to facilitate memory management operations, such as bulk operations used to change a large section of memory to read-only, operations to manage a free list of memory pages, and/or operations associated with terminating processes.
    Type: Grant
    Filed: June 29, 2016
    Date of Patent: December 5, 2017
    Assignee: International Business Machines Corporation
    Inventor: Michael K. Gschwind
  • Patent number: 9824009
    Abstract: Systems and methods for coherency maintenance are presented. The systems and methods include utilization of multiple information state tracking approaches or protocols at different memory or storage levels. In one embodiment, a first coherency maintenance approach (e.g., similar to a MESI protocol, etc.) can be implemented at one storage level while a second coherency maintenance approach (e.g., similar to a MOESI protocol, etc.) can be implemented at another storage level. Information at a particular storage level or tier can be tracked by a set of local state indications and a set of essence state indications. The essence state indication can be tracked “externally” from a storage layer or tier directory (e.g., in a directory of another cache level, in a hub between cache levels, etc.). One storage level can control operations based upon the local state indications and another storage level can control operations based in least in part upon an essence state indication.
    Type: Grant
    Filed: December 21, 2012
    Date of Patent: November 21, 2017
    Assignee: NVIDIA CORPORATION
    Inventors: Anurag Chaudhary, Guillermo Juan Rozas
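A minimal C sketch of the two-level tracking described in the abstract above: a line carries a local MESI-like state used at one storage level and an externally tracked MOESI-like "essence" state used at another. The transition shown (a remote read of a modified line) and all names are illustrative assumptions, not the patented protocol.

```c
#include <stdio.h>

/* MESI-like states tracked locally within one storage level. */
enum local_state   { L_INVALID, L_SHARED, L_EXCLUSIVE, L_MODIFIED };
/* MOESI-like "essence" states tracked externally (e.g., in a hub between
 * cache levels or in the directory of another cache level). */
enum essence_state { E_INVALID, E_SHARED, E_OWNED, E_EXCLUSIVE, E_MODIFIED };

struct tracked_line {
    enum local_state   local;    /* drives decisions at this level       */
    enum essence_state essence;  /* drives decisions at the other level  */
};

/* A remote read of a locally modified line: locally the copy is demoted to
 * shared, while the external essence state records ownership (O) so the
 * dirty data need not be written back to memory yet. */
static void on_remote_read(struct tracked_line *ln)
{
    if (ln->local == L_MODIFIED) {
        ln->local   = L_SHARED;
        ln->essence = E_OWNED;
    }
}

int main(void)
{
    struct tracked_line ln = { L_MODIFIED, E_MODIFIED };
    on_remote_read(&ln);
    printf("local=%d essence=%d\n", ln.local, ln.essence); /* shared, owned */
    return 0;
}
```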