Access Control Bit Patents (Class 711/145)
  • Patent number: 11941611
    Abstract: A method for using shareable and nested transactions on hash chains includes storing transaction data of a transaction of a hash chain. A lock block is appended to the hash chain. Appending the lock block includes setting a tail block identifier of the hash chain from a preceding tail block of a preceding transaction to the lock block. A data block is appended to the hash chain. Appending the data block includes setting the tail block identifier of the hash chain to the data block. The method further includes removing the transaction data from the transaction without invalidating the hash chain. The method further includes appending an updated data block to the hash chain to update the transaction with updated transaction data.
    Type: Grant
    Filed: December 11, 2019
    Date of Patent: March 26, 2024
    Assignee: Intuit Inc.
    Inventors: Glenn Carter Scott, Michael Richard Gabriel
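A minimal Python sketch of the append-only bookkeeping described in patent 11941611 above, under the assumption that each block records the hash of its predecessor and the chain keeps a tail-block identifier; the class and method names (HashChain, begin_transaction, write) are illustrative, not taken from the patent.

```python
import hashlib
import json


class Block:
    """A block that records the hash of its predecessor plus a payload."""

    def __init__(self, prev_hash: str, kind: str, payload: dict):
        self.prev_hash = prev_hash
        self.kind = kind              # "genesis", "lock", or "data"
        self.payload = payload
        self.block_id = hashlib.sha256(
            json.dumps([prev_hash, kind, payload], sort_keys=True).encode()
        ).hexdigest()


class HashChain:
    """Hash chain whose tail-block identifier advances on every append."""

    def __init__(self):
        genesis = Block(prev_hash="", kind="genesis", payload={})
        self.blocks = {genesis.block_id: genesis}
        self.tail_id = genesis.block_id      # tail block identifier of the chain

    def _append(self, kind: str, payload: dict) -> Block:
        block = Block(prev_hash=self.tail_id, kind=kind, payload=payload)
        self.blocks[block.block_id] = block
        self.tail_id = block.block_id        # tail moves from the preceding block
        return block

    def begin_transaction(self, txn_id: str) -> Block:
        # Appending the lock block sets the tail from the preceding
        # transaction's tail block to the lock block.
        return self._append("lock", {"txn": txn_id})

    def write(self, txn_id: str, data: dict) -> Block:
        # Appending a data block sets the tail to the data block.
        return self._append("data", {"txn": txn_id, "data": data})

    def verify(self) -> bool:
        # Walk back from the tail, checking each block's identifier against a
        # hash of its contents and that its predecessor exists.
        block = self.blocks[self.tail_id]
        while True:
            expected = hashlib.sha256(
                json.dumps([block.prev_hash, block.kind, block.payload],
                           sort_keys=True).encode()).hexdigest()
            if expected != block.block_id:
                return False
            if not block.prev_hash:
                return True
            block = self.blocks.get(block.prev_hash)
            if block is None:
                return False


# Updating a transaction appends a newer data block instead of rewriting
# history, so stale transaction data can be dropped from application state
# without invalidating the chain's hash linkage.
chain = HashChain()
chain.begin_transaction("t1")
chain.write("t1", {"amount": 10})
chain.write("t1", {"amount": 12})    # updated transaction data
assert chain.verify()
```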
  • Patent number: 11797474
    Abstract: Implementations relate to a data processor that includes a data processing unit having a plurality of processing elements and a cache hierarchy including a plurality of levels of data caches. The data caches include a first level data cache connected to a second level data cache, and a main memory connected to the highest level cache of the cache hierarchy. At least one of the first level data cache or second level data cache is divided into a plurality of cache segments, and during operation of the data processor, at least some of the plurality of cache segments are excluded from cache operation. Each of the excluded cache segments is dedicated to an associated processing element as tightly coupled local access memory.
    Type: Grant
    Filed: October 24, 2020
    Date of Patent: October 24, 2023
    Assignee: Hyperion Core, Inc.
    Inventor: Martin Vorbach
  • Patent number: 11580033
    Abstract: Provided herein may be a storage device and a method of operating the same. The method of operating a storage device including a replay protected memory block (RPMB) may include receiving a write request for the RPMB from an external host, selectively storing data in the RPMB based on an authentication operation, receiving a read request from the external host, and providing result data to the external host in response to the read request, wherein the read request includes a message indicating that a read command to be subsequently received from the external host is a command related to the result data.
    Type: Grant
    Filed: June 1, 2020
    Date of Patent: February 14, 2023
    Assignee: SK hynix Inc.
    Inventor: Kwang Su Kim
  • Patent number: 11513694
    Abstract: Systems and methods for storage pruning can enable users to delete, edit, or copy backed up data that matches a pattern. Storage pruning can enable fine-grain deletion or copying of these files from backups stored in secondary storage devices. Systems and methods can also enable editing of metadata associated with backups so that when the backups are restored or browsed, the logical edits to the metadata can then be performed physically on the data to create a custom restore or a custom view. A user may perform operations such as renaming, deleting, modifying flags, and modifying retention policies on backed up items. Although the underlying data in the backup may not change, the view of the backup data when the user browses the backup data can appear to include the user's changes. A restore of the data can cause those changes to be performed on the backup data.
    Type: Grant
    Filed: October 29, 2020
    Date of Patent: November 29, 2022
    Assignee: Commvault Systems, Inc.
    Inventors: Manas Bhikchand Mutha, Pavan Kumar Reddy Bedadala, Prosenjit Sinha
  • Patent number: 11385796
    Abstract: Techniques perform storage management. Such techniques involve: in response to receiving, at a first processor of a storage system, a write request from a host for writing user data, caching the user data in a first cache of the first processor, and generating cache metadata in the first cache, the cache metadata including information associated with writing the user data; sending the user data and the cache metadata to a second cache of a second processor, for the second processor to perform, in the second cache, data processing related to cache mirroring by the second processor; and sending, to the host, an indication of completion of the write request, without waiting for the second processor to complete the data processing. Such techniques can improve system performance, for example by reducing latency and shortening the I/O handling path for write requests.
    Type: Grant
    Filed: September 19, 2019
    Date of Patent: July 12, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Yousheng Liu, Ruiyong Jia, Xinlei Xu
  • Patent number: 11294707
    Abstract: A method includes receiving, by a L2 controller, a request to perform a global operation on a L2 cache and preventing new blocking transactions from entering a pipeline coupled to the L2 cache while permitting new non-blocking transactions to enter the pipeline. Blocking transactions include read transactions and non-victim write transactions. Non-blocking transactions include response transactions, snoop transactions, and victim transactions.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: April 5, 2022
    Assignee: Texas Instruments Incorporated
    Inventors: Abhijeet Ashok Chachad, Naveen Bhoria, David Matthew Thompson, Neelima Muralidharan
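A small Python sketch of the gating rule in patent 11294707 above: while a global cache operation is in flight, new blocking transactions (reads and non-victim writes) are held back, while response, snoop, and victim transactions keep entering the pipeline. The controller class and queue layout are assumptions for illustration.

```python
from collections import deque

# Transaction kinds, split as in the abstract: blocking kinds are stalled
# during a global operation, non-blocking kinds keep flowing.
BLOCKING = {"read", "write"}                 # write = non-victim write
NON_BLOCKING = {"response", "snoop", "victim"}


class L2Controller:
    def __init__(self):
        self.pipeline = deque()      # transactions admitted to the L2 pipeline
        self.stalled = deque()       # blocking transactions held back
        self.global_op_active = False

    def start_global_operation(self):
        # e.g. a whole-cache invalidate or writeback request
        self.global_op_active = True

    def finish_global_operation(self):
        self.global_op_active = False
        # Release transactions that were stalled while the operation ran.
        while self.stalled:
            self.pipeline.append(self.stalled.popleft())

    def submit(self, txn_kind: str, addr: int):
        txn = (txn_kind, addr)
        if self.global_op_active and txn_kind in BLOCKING:
            self.stalled.append(txn)       # new blocking work waits
        else:
            self.pipeline.append(txn)      # non-blocking work proceeds


ctrl = L2Controller()
ctrl.start_global_operation()
ctrl.submit("read", 0x100)    # stalled
ctrl.submit("snoop", 0x200)   # admitted
ctrl.finish_global_operation()
assert [kind for kind, _ in ctrl.pipeline] == ["snoop", "read"]
```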
  • Patent number: 11163681
    Abstract: A method for identifying, in a system including two or more computing devices that are able to communicate with each other, each computing device having a cache and connected to a corresponding memory, the computing device accessing one of the memories, includes monitoring memory access to any of the memories; monitoring cache coherency commands between computing devices; and identifying the computing device accessing one of the memories by using information related to the memory access and cache coherency commands.
    Type: Grant
    Filed: January 14, 2019
    Date of Patent: November 2, 2021
    Assignee: International Business Machines Corporation
    Inventors: Nobuyuki Ohba, Atsuya Okazaki
  • Patent number: 11106587
    Abstract: A system includes a memory, a producer processor and a consumer processor. The memory includes a shared ring buffer, which has a partially overlapping active ring and processed ring. The producer processor is in communication with the memory and is configured to receive a request associated with a memory entry, store the request in a first slot of the shared ring buffer at a first offset, receive another request associated with another memory entry, and store the other request in a second slot (in the overlapping region adjacent to the first slot) of the shared ring buffer. The consumer processor is in communication with the memory and is configured to process the request and write the processed request in a third slot (outside of the overlapping region at a second offset and in a different cache-line than the second slot) of the shared ring buffer.
    Type: Grant
    Filed: June 18, 2019
    Date of Patent: August 31, 2021
    Assignee: Red Hat, Inc.
    Inventor: Michael Tsirkin
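A loose Python model of the partially overlapping ring layout in patent 11106587 above. The slot counts and offsets are invented; the point it illustrates is that the producer stores requests at one offset (which may fall in the overlapping region) while the consumer writes processed requests at a second offset outside the overlap, keeping producer and consumer writes apart.

```python
# Shared buffer of SLOTS entries. The active ring is slots [0, ACTIVE_END);
# the processed ring is slots [OVERLAP_START, SLOTS); the two rings overlap
# in [OVERLAP_START, ACTIVE_END).
SLOTS = 16
ACTIVE_END = 12
OVERLAP_START = 8

buffer = [None] * SLOTS


class Producer:
    """Stores incoming requests in the active ring (first offset)."""

    def __init__(self):
        self.next_slot = 0

    def store_request(self, request):
        slot = self.next_slot                 # may land in the overlapping region
        buffer[slot] = ("request", request)
        self.next_slot = (self.next_slot + 1) % ACTIVE_END
        return slot


class Consumer:
    """Processes requests and writes results outside the overlapping region."""

    def __init__(self):
        self.next_slot = ACTIVE_END           # second offset: past the overlap

    def process(self, slot):
        kind, request = buffer[slot]
        assert kind == "request"
        out_slot = self.next_slot             # different region (and cache line)
        buffer[out_slot] = ("processed", request)
        span = SLOTS - ACTIVE_END
        self.next_slot = ACTIVE_END + (self.next_slot + 1 - ACTIVE_END) % span
        return out_slot


producer, consumer = Producer(), Consumer()
s1 = producer.store_request("entry-A")
s2 = producer.store_request("entry-B")
consumer.process(s1)
consumer.process(s2)
assert buffer[ACTIVE_END][0] == "processed"
```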
  • Patent number: 11106464
    Abstract: Systems, methods, and apparatuses relating to access synchronization in a shared memory are described. In one embodiment, a processor includes a decoder to decode an instruction into a decoded instruction, and an execution unit to execute the decoded instruction to: receive a first input operand of a memory address to be tracked and a second input operand of an allowed sequence of memory accesses to the memory address, and cause a block of a memory access that violates the allowed sequence of memory accesses to the memory address. In one embodiment, a circuit separate from the execution unit compares a memory address for a memory access request to one or more memory addresses in a tracking table, and blocks a memory access for the memory access request when a type of access violates a corresponding allowed sequence of memory accesses to the memory address for the memory access request.
    Type: Grant
    Filed: September 27, 2016
    Date of Patent: August 31, 2021
    Assignee: Intel Corporation
    Inventors: Swagath Venkataramani, Dipankar Das, Sasikanth Avancha, Ashish Ranjan, Subarno Banerjee, Bharat Kaul, Anand Raghunathan
  • Patent number: 11023162
    Abstract: Techniques are disclosed relating to caches that support transient storage fields for cache entries. In some embodiments, cache circuitry includes a set of multiple cache entries that each include a tag field and a data field. In some embodiments, transient storage circuitry includes a transient storage field for each of the multiple cache entries. In some embodiments, cache control circuitry stores received first data in the data field of a cache entry and stores received transient data in a corresponding transient storage field. In response to an eviction determination for the cache entry, however, the cache control circuitry may write the first data but not the transient data to a backing memory for the cache circuitry. In various embodiments, disclosed techniques may allow caching additional data that is transient without increasing bandwidth to the backing memory.
    Type: Grant
    Filed: August 22, 2019
    Date of Patent: June 1, 2021
    Assignee: Apple Inc.
    Inventors: Jeffrey T. Brady, Sindhuja Sethuraman, Frank W. Liljeros, Adil M. Sadik
  • Patent number: 10909046
    Abstract: Apparatuses and methods related to computer memory access determination are described. A command can be received at a memory system (e.g., a system with or exploiting DRAM). The command can comprise a memory operation and a plurality of privilege bits. The privilege level of a memory address that is associated with the memory operation can be identified. The privilege level corresponding to the memory address describes a privilege level that can access the memory address. A determination can be made as to whether the memory operation, or the application requesting certain data or prompting corresponding instructions, is entitled to access the memory address using the plurality of privilege bits and the privilege level. Responsive to determining that the memory operation has access to the memory address, the memory operation can be processed.
    Type: Grant
    Filed: June 15, 2018
    Date of Patent: February 2, 2021
    Assignee: Micron Technology, Inc.
    Inventor: Richard C. Murphy
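A toy Python sketch of the access check in patent 10909046 above, assuming each page of memory carries a required privilege level and each command carries privilege bits encoding the requester's level; the 3-bit encoding and the page table contents are invented for illustration.

```python
# Required privilege level per page (address >> 12). Level 0 is the most
# privileged; larger values are less privileged. Values here are invented.
PAGE_PRIVILEGE = {0x0: 0, 0x1: 1, 0x2: 3}


def privilege_from_bits(privilege_bits: int) -> int:
    """Decode the privilege level carried in a command's privilege bits
    (illustrative 3-bit field)."""
    return privilege_bits & 0b111


def handle_command(op: str, address: int, privilege_bits: int,
                   memory: dict, data=None):
    page = address >> 12
    required = PAGE_PRIVILEGE.get(page, 0)     # unknown pages: most restrictive
    level = privilege_from_bits(privilege_bits)
    # The memory operation is processed only if its level is at least as
    # privileged as the level that may access this address.
    if level > required:
        raise PermissionError(f"level {level} may not access page {page:#x}")
    if op == "write":
        memory[address] = data
    return memory.get(address)


mem = {}
handle_command("write", 0x2000, 0b011, mem, data=42)   # page 0x2 allows level 3
print(handle_command("read", 0x2000, 0b011, mem))       # -> 42
```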
  • Patent number: 10901940
    Abstract: A processor includes a widest set of data registers that corresponds to a given logical processor. Each of the data registers of the widest set have a first width in bits. A decode unit that corresponds to the given logical processor is to decode instructions that specify the data registers of the widest set, and is to decode an atomic store to memory instruction. The atomic store to memory instruction is to indicate data that is to have a second width in bits that is wider than the first width in bits. The atomic store to memory instruction is to indicate memory address information associated with a memory location. An execution unit is coupled with the decode unit. The execution unit, in response to the atomic store to memory instruction, is to atomically store the indicated data to the memory location.
    Type: Grant
    Filed: April 2, 2016
    Date of Patent: January 26, 2021
    Assignee: INTEL CORPORATION
    Inventors: Vedvyas Shanbhogue, Stephen J. Robinson, Christopher D. Bryant, Jason W. Brandt
  • Patent number: 10891228
    Abstract: A cache memory control device for controlling a first cache memory of a multi-cache memory system that includes logic circuitry operable for storing state information assigned to an invalid copy of a cache line stored in the first cache memory, where the state information includes a cache memory identifier identifying an individual second cache memory of the multi-cache memory system that is likely to contain a valid copy of the cache line.
    Type: Grant
    Filed: February 12, 2018
    Date of Patent: January 12, 2021
    Assignee: International Business Machines Corporation
    Inventor: Burkhard Steinmacher-Burow
  • Patent number: 10783089
    Abstract: The present disclosure includes systems and methods for securing data direct I/O (DDIO) for a secure accelerator interface, in accordance with various embodiments. Historically, DDIO has enabled performance advantages that have outweighed its security risks. DDIO circuitry may be configured to secure DDIO data by using encryption circuitry that is manufactured for use in communications with main memory along the direct memory access (DMA) path. DDIO circuitry may be configured to secure DDIO data by using DDIO encryption circuitry manufactured for use by or manufactured within the DDIO circuitry. Enabling encryption and decryption in the DDIO path by the DDIO circuitry has the potential to close a security gap in modern data central processor units (CPUs).
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: September 22, 2020
    Assignee: Intel Corporation
    Inventors: Siddhartha Chhabra, Prashant Dewan, Abhishek Basak, David M. Durham
  • Patent number: 10783085
    Abstract: Techniques are disclosed relating to filtering cache accesses. In some embodiments, a control unit is configured to, in response to a request to process a set of data, determine a size of a portion of the set of data to be handled using a cache. In some embodiments, the control unit is configured to determine filtering parameters indicative of a set of addresses corresponding to the determined size. In some embodiments, the control unit is configured to process one or more access requests for the set of data based on the determined filter parameters, including: using the cache to process one or more access requests having addresses in the set of addresses and bypassing the cache to access a backing memory directly, for access requests having addresses that are not in the set of addresses. The disclosed techniques may reduce average memory bandwidth or peak memory bandwidth.
    Type: Grant
    Filed: March 1, 2019
    Date of Patent: September 22, 2020
    Assignee: Apple Inc.
    Inventors: Karthik Ramani, Fang Liu, Steven Fishwick, Jonathan M. Redshaw
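A hedged Python sketch of the cache filtering idea in patent 10783085 above: given the size of the portion of a data set worth caching, derive filter parameters (modeled here, as an assumption, by a contiguous address window), use the cache for addresses inside the window, and bypass it to reach backing memory directly for the rest. The class name and window shape are illustrative.

```python
class FilteredCache:
    """Toy model: addresses inside the filter window go through a small cache;
    everything else bypasses the cache and reads backing memory directly."""

    def __init__(self, backing: dict, cached_bytes: int, base_addr: int):
        self.backing = backing
        self.cache = {}
        # Filter parameters: the set of addresses handled by the cache is the
        # window [base_addr, base_addr + cached_bytes).
        self.lo = base_addr
        self.hi = base_addr + cached_bytes

    def _in_filter(self, addr: int) -> bool:
        return self.lo <= addr < self.hi

    def read(self, addr: int):
        if not self._in_filter(addr):
            return self.backing[addr]               # bypass: direct backing access
        if addr not in self.cache:
            self.cache[addr] = self.backing[addr]   # fill on miss
        return self.cache[addr]


# Processing a 64 KiB data set but deciding that only the first 16 KiB is
# worth caching; the rest streams straight from backing memory.
backing = {a: a & 0xFF for a in range(0, 0x10000, 4)}
fc = FilteredCache(backing, cached_bytes=0x4000, base_addr=0x0)
fc.read(0x100)      # served through the cache
fc.read(0x8000)     # bypasses the cache
```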
  • Patent number: 10732898
    Abstract: A method for accessing a flash memory device, and a flash memory device. After receiving a write request for an address, a flash memory controller obtains an indicator of the address, where the indicator indicates the last access type of the address, which may be a write operation or a read operation. When the indicator indicates a write operation, meaning the address is normally written, the flash memory controller performs a fast-write operation on the address to save time. When the indicator indicates a read operation, meaning the address is likely to see many read operations, the controller performs a slow-write operation on the address to facilitate future reads.
    Type: Grant
    Filed: August 20, 2018
    Date of Patent: August 4, 2020
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Liang Shi, Chun Xue, Qiao Li, Dongfang Shan, Jun Xu, Yuangang Wang
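A compact Python sketch of the write-path decision in patent 10732898 above, assuming a per-address indicator of the last access type; the fast_write and slow_write bodies are stand-ins for the actual flash program operations.

```python
# Per-address indicator of the last access type: "read" or "write".
last_access = {}
flash = {}


def fast_write(addr, data):
    # Placeholder for a quick program step (e.g. fewer verify iterations).
    flash[addr] = ("fast", data)


def slow_write(addr, data):
    # Placeholder for a careful program step that leaves the cell easier to read.
    flash[addr] = ("slow", data)


def handle_write(addr, data):
    indicator = last_access.get(addr, "write")
    if indicator == "write":
        # Address is normally written: save time with a fast write.
        fast_write(addr, data)
    else:
        # Address was last read: expect more reads, so take the slow write
        # that makes future reads cheaper.
        slow_write(addr, data)
    last_access[addr] = "write"


def handle_read(addr):
    last_access[addr] = "read"
    return flash.get(addr)


handle_write(0x10, b"a")       # fast write (no read history)
handle_read(0x10)
handle_write(0x10, b"b")       # slow write: last access was a read
```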
  • Patent number: 10719444
    Abstract: The disclosure provides for a reactive cache coherence protocol that has efficiencies over proactive approaches. Rather than proactively performing remediation when a data item is invalidated, a destination endpoint checks cache coherence upon receiving an indication of a cache hit, and based at least on detecting a lack of coherence, performs a reactive remediation process. For example, the incoherence may be fixed by replacing, as a cached data item, a data block indicated by the cache hit with a replacement data block that triggered the cache hit.
    Type: Grant
    Filed: November 23, 2018
    Date of Patent: July 21, 2020
    Assignee: VMware, Inc.
    Inventor: Oleg Zaydman
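A rough Python sketch of the reactive remediation in patent 10719444 above, assuming a deduplication-style cache keyed by some identifier: coherence is only checked when a cache hit is reported, and a stale entry is fixed by replacing it with the block that triggered the hit. Class and method names are invented.

```python
class ReactiveCache:
    """On a cache hit, the destination endpoint checks whether the cached
    block still matches the block that produced the hit; if not, it repairs
    the entry in place instead of relying on proactive invalidation."""

    def __init__(self):
        self.entries = {}        # key -> data block cached earlier

    def insert(self, key, block: bytes):
        self.entries[key] = block

    def on_cache_hit(self, key, incoming_block: bytes) -> bytes:
        cached = self.entries[key]
        if cached != incoming_block:
            # Incoherence detected reactively: fix it by replacing the cached
            # data block with the replacement block that triggered the hit.
            self.entries[key] = incoming_block
        return self.entries[key]


cache = ReactiveCache()
cache.insert("chunk-7", b"old contents")
# Later, a lookup for "chunk-7" reports a hit, but the block that produced
# the hit has different contents; the stale entry is replaced on the spot.
assert cache.on_cache_hit("chunk-7", b"new contents") == b"new contents"
```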
  • Patent number: 10706069
    Abstract: Techniques for replication of a client database to remote devices are described. In one embodiment, an apparatus may comprise a server database management component operative to receive a collection subscription command from a client device at a database synchronization system, the collection subscription command specifying an object collection; and detect a collection change for the object collection; and an update queue management component operative to register the client device for push notification with a collection update queue associated with the object collection; and add a collection update to the collection update queue, the collection update based on the collection change. Other embodiments are described and claimed.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: July 7, 2020
    Assignee: FACEBOOK, INC.
    Inventors: Vishal Kathuria, Joshua Scott Evenson, Andras Biczo, Hong-Seok Kim, Leigh Jonathan Henry Pauls
  • Patent number: 10691601
    Abstract: A cache coherence management method, a node controller, and a multiprocessor system that includes a first table, a second table, a node controller, and at least two nodes, where the node controller determines, in the first table according to address information of data, a first entry, where the first entry includes a first field and a second field. The first field records an occupation status of the data, the second field indicates a node that occupies the data exclusively when the first field includes an exclusive state, and the node controller determines a second entry in the second table according to the address information of the data and the second field when the first field includes a shared state, where the second entry includes a third field, and the third field indicates nodes that share the data.
    Type: Grant
    Filed: October 19, 2018
    Date of Patent: June 23, 2020
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Tao Li, Yongbo Cheng, Chenghong He
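A small Python sketch of the two-table lookup in patent 10691601 above: the first table's entry carries an occupation state and, for the exclusive state, the owning node; only in the shared state does the node controller consult the second table for the sharer set. Table shapes and field names are assumptions.

```python
# First table: per-address entry with an occupation-state field and, when the
# state is exclusive, the identifier of the owning node.
first_table = {}     # addr -> {"state": "exclusive"|"shared"|"invalid", "owner": node}
# Second table: per-address sharer sets, consulted only in the shared state.
second_table = {}    # addr -> set of node ids sharing the data


def nodes_holding(addr):
    """Node-controller lookup: which node(s) currently hold this address?"""
    entry = first_table.get(addr)
    if entry is None or entry["state"] == "invalid":
        return set()
    if entry["state"] == "exclusive":
        # The second field of the first entry names the exclusive owner directly.
        return {entry["owner"]}
    # Shared state: the first table does not list sharers; consult the second
    # table entry selected by the address.
    return set(second_table.get(addr, set()))


def record_exclusive(addr, node):
    first_table[addr] = {"state": "exclusive", "owner": node}
    second_table.pop(addr, None)


def record_shared(addr, nodes):
    first_table[addr] = {"state": "shared", "owner": None}
    second_table[addr] = set(nodes)


record_exclusive(0x40, node=2)
assert nodes_holding(0x40) == {2}
record_shared(0x40, nodes={0, 2, 3})
assert nodes_holding(0x40) == {0, 2, 3}
```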
  • Patent number: 10628329
    Abstract: A processing system includes a first processor configured to issue a first request in a first format, an adapter configured to receive the first request in the first format and send the first request in a second format, and a memory coherency interconnect configured to receive the first request in the second format and determine whether the first request in the second format is for a translation lookaside buffer (TLB) operation or a non-TLB operation based on information in the first request in the second format. When the first request in the second format is for a TLB operation, the interconnect routes the first request in the second format to a TLB global ordering point (GOP). When the first request in the second format is not for a TLB operation, the interconnect routes the first request in the second format to a non-TLB GOP.
    Type: Grant
    Filed: April 26, 2016
    Date of Patent: April 21, 2020
    Assignee: NXP USA, Inc.
    Inventor: Sanjay R. Deshpande
  • Patent number: 10571982
    Abstract: Embodiments include method, systems and computer program products for operating a resettable write once read many (RWORM) memory. The method includes receiving, by a processor, a request for at least a portion of memory in a computer system to be designated as RWORM memory. The processor further writes data to the RWORM memory. The processor further maintains the RWORM memory in a read-only state after the RWORM memory is written to. The processor further re-designates the RWORM memory to a read/write state in response to encountering a system reset.
    Type: Grant
    Filed: November 21, 2017
    Date of Patent: February 25, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: John Eells
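A minimal Python sketch of the RWORM behavior in patent 10571982 above: the region is writable until written, then held read-only until a system reset re-designates it read/write. The class is illustrative, not the patented implementation.

```python
class RWORMRegion:
    """Resettable write-once-read-many region: writable until written, then
    read-only until a system reset returns it to the read/write state."""

    def __init__(self, size: int):
        self.data = bytearray(size)
        self.read_only = False

    def write(self, offset: int, payload: bytes):
        if self.read_only:
            raise PermissionError("RWORM region is in its read-only state")
        self.data[offset:offset + len(payload)] = payload
        self.read_only = True        # maintained read-only after being written

    def read(self, offset: int, length: int) -> bytes:
        return bytes(self.data[offset:offset + length])

    def system_reset(self):
        # A reset re-designates the region as read/write.
        self.read_only = False


region = RWORMRegion(64)
region.write(0, b"audit record")
try:
    region.write(0, b"tampered")         # rejected: region is read-only
except PermissionError:
    pass
region.system_reset()
region.write(0, b"next boot's record")   # writable again after reset
```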
  • Patent number: 10423877
    Abstract: Three-dimensional (3D) neuromorphic computing systems are provided. A system includes a logic wafer having a plurality of processors. The system further includes a double-sided interposer bonded to the logic wafer and incorporating a signal port ring for sending and receiving signals. The system also includes a plurality of 3D memory modules bonded to the double-sided interposer. The double-sided interposer is a wafer scale or a panel scale providing communication between the plurality of processors and the plurality of 3D memory modules.
    Type: Grant
    Filed: August 15, 2016
    Date of Patent: September 24, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Charles E. Cox, Harald Huels, Arvind Kumar, Pritish Narayanan, Ahmet S. Ozcan, J. Campbell Scott, Winfried W. Wilcke
  • Patent number: 10366049
    Abstract: A method of controlling a processor includes receiving from a command buffer a first command corresponding to a first instruction that is processed by a second processing core and starting processing of the first command by the first processing core, storing in the command buffer a second command corresponding to a second instruction that is processed by the second processing core before the processing of the first command is completed, and starting processing of a third instruction by the second processing core before the processing of the first command is completed.
    Type: Grant
    Filed: December 12, 2014
    Date of Patent: July 30, 2019
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Ki-seok Kwon, Suk-jin Kim, Do-hyung Kim
  • Patent number: 10360079
    Abstract: A synchronization method in a multiprocessor system is provided. The method includes providing a plurality of synchronization mechanisms for synchronizing data to be accessed by a plurality of concurrently executable tasks, analyzing design information and runtime information for application software that includes the concurrently executable tasks, identifying, based on the analysis, software architecture patterns for the concurrently executable tasks that access a shared variable, and associating, based on the analysis, each of the software architecture patterns to one or more of the synchronization mechanisms.
    Type: Grant
    Filed: June 16, 2017
    Date of Patent: July 23, 2019
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Shige Wang, Stephen G. Lusko
  • Patent number: 10331550
    Abstract: This disclosure describes, in one embodiment, an apparatus. The apparatus includes a processor, a memory, an application, collector circuitry, and aggregator circuitry. The memory is to store one or more tasks. The application is associated with the one or more tasks. The collector circuitry is to identify a local free address range in at least one address space. The aggregator circuitry is to provide address range data to a subgroup aggregator. The provided address range data includes at least one local free address range.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: June 25, 2019
    Assignee: Intel Corporation
    Inventors: David Keppel, Charles J. Archer
  • Patent number: 10310735
    Abstract: Data storage apparatus comprises detection circuitry configured to detect a match between a multi-bit reference memory address and a test address, the test address being a combination of a multi-bit base address and a multi-bit address offset, the detection circuitry comprising: a comparator configured to compare, as a first comparison, a first subset of bits of the reference memory address with a combination of the corresponding first subset of bits of the base address and the corresponding first subset of bits of the address offset; the comparator being configured to compare, as a second comparison, a second, different subset of bits of the reference memory address with the corresponding second subset of bits of the base address; a detector configured to detect the match between the reference memory address and the test address when both of the first comparison and the second comparison detect a respective match; and control circuitry configured to control operation of the data storage apparatus in dependence on the detection of the match.
    Type: Grant
    Filed: February 23, 2017
    Date of Patent: June 4, 2019
    Assignee: ARM Limited
    Inventors: Cédric Denis Robert Airaud, Max John Batley, Ian Michael Caulfield, Thomas Edward Roberts
  • Patent number: 10310982
    Abstract: A computer-implemented method for managing cache memory in a distributed symmetric multiprocessing computer is described. The method may include receiving, at a first central processor (CP) chip, a fetch request from a first chip. The method may further include determining via address compare mechanisms on the first CP chip whether one or more of a second CP chip and a third CP chip is requesting access to a target line. The first chip, the second chip, and the third chip are within the same chip cluster. The method further includes providing access to the target line if both of the second CP chip and the third CP chip have accessed the target line at least one time since the first CP chip has accessed the target line.
    Type: Grant
    Filed: December 15, 2016
    Date of Patent: June 4, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Deanna Postles Dunn Berger, Johnathon J. Hoste, Pak-kin Mak, Arthur J. O'Neill, Jr., Robert J. Sonnelitter, III
  • Patent number: 10198301
    Abstract: A semiconductor device includes a central processing unit and a processor on one semiconductor substrate. The processor includes a buffer for storing a first register setting list and notifies the central processing unit of an access complete signal indicating completion of reading a second register setting list within a memory. The central processing unit changes the second register setting list within the memory based on the access complete signal and notifies the processor of an update request signal. The processor reads the second register setting list changed by the central processing unit into the buffer to update the first register setting list based on the update request information.
    Type: Grant
    Filed: August 10, 2018
    Date of Patent: February 5, 2019
    Assignee: RENESAS ELECTRONICS CORPORATION
    Inventors: Tetsuji Tsuda, Masaru Hase, Yuki Inoue, Naohiro Nishikawa
  • Patent number: 10114758
    Abstract: One embodiment of the present invention includes techniques to support demand paging across a processing unit. Before a host unit transmits a command to an engine that does not tolerate page faults, the host unit ensures that the virtual memory addresses associated with the command are appropriately mapped to physical memory addresses. In particular, if the virtual memory addresses are not appropriately mapped, then the processing unit performs actions to map the virtual memory address to appropriate locations in physical memory. Further, the processing unit ensures that the access permissions required for successful execution of the command are established. Because the virtual memory address mappings associated with the command are valid when the engine receives the command, the engine does not encounter page faults upon executing the command. Consequently, in contrast to prior-art techniques, the engine supports demand paging regardless of whether the engine is involved in remedying page faults.
    Type: Grant
    Filed: September 13, 2013
    Date of Patent: October 30, 2018
    Assignee: NVIDIA CORPORATION
    Inventors: Samuel H. Duncan, Jerome F. Duluk, Jr., Jonathon Stuart Ramsay Evans, James Leroy Deming
  • Patent number: 10089231
    Abstract: Improving access to a cache by a processing unit. One or more previous requests to access data from a cache are stored. A current request to access data from the cache is retrieved. A determination is made whether the current request is seeking the same data from the cache as at least one of the one or more previous requests. A further determination is made whether the at least one of the one or more previous requests seeking the same data was successful in arbitrating access to a processing unit when seeking access. A next cache write access is suppressed if the at least one of previous requests seeking the same data was successful in arbitrating access to the processing unit.
    Type: Grant
    Filed: November 7, 2017
    Date of Patent: October 2, 2018
    Assignee: International Business Machines Corporation
    Inventors: Simon H. Friedmann, Girish G. Kurup, Markus Kaltenbach, Ulrich Mayer, Martin Recktenwald
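A brief Python sketch of the suppression check in patent 10089231 above, assuming requests are identified by a tag: if an earlier request for the same data already won arbitration to the processing unit, the next cache write access for that data is suppressed. The names and the history depth are invented.

```python
from collections import deque


class CacheAccessFilter:
    """Remembers recent requests; if the current request seeks the same data
    as an earlier request that already won arbitration to the processing
    unit, the next cache write access is suppressed."""

    def __init__(self, history: int = 4):
        self.previous = deque(maxlen=history)   # (tag, won_arbitration)

    def should_suppress_write(self, tag) -> bool:
        return any(prev_tag == tag and won for prev_tag, won in self.previous)

    def record(self, tag, won_arbitration: bool):
        self.previous.append((tag, won_arbitration))


filt = CacheAccessFilter()
filt.record(tag=0xABC0, won_arbitration=True)
# The same line is requested again: the earlier attempt already delivered the
# data, so the redundant cache write access can be suppressed.
assert filt.should_suppress_write(0xABC0)
assert not filt.should_suppress_write(0xDEF0)
```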
  • Patent number: 10089237
    Abstract: Certain embodiments herein relate to, among other things, designing data cache systems to enhance energy efficiency and performance of computing systems. A data filter cache herein may be designed to store a portion of data stored in a level one (L1) data cache. The data filter cache may reside between the L1 data cache and a register file in the primary compute unit. The data filter cache may therefore be accessed before the L1 data cache when a request for data is received and processed. Upon a data filter cache hit, access to the L1 data cache may be avoided. The smaller data filter cache may therefore be accessed earlier in the pipeline than the larger L1 data cache to promote improved energy utilization and performance. The data filter cache may also be accessed speculatively based on various conditions to increase the chances of having a data filter cache hit.
    Type: Grant
    Filed: March 3, 2017
    Date of Patent: October 2, 2018
    Assignee: Florida State University Research Foundation, Inc.
    Inventors: David Whalley, Magnus Själander, Alen Bardizbanyan, Per Larsson-Edefors
  • Patent number: 10082969
    Abstract: A system and method for managing a storage system may include recording, in a cache memory, data related to user writes to the storage system; setting a time in a next consistency point (NCP) object with a value greater than the current time; and maintaining a first counter related to the number of user writes recorded in the cache memory and that occurred before the time included in the NCP object and after a time included in a consistency point (CP) object; maintaining a second counter related to the number of user writes that were stored in a persistent storage system and that occurred before the time in the NCP object and after a time in the CP object. A system and method for managing a storage system may include initializing the storage system to a consistent state based on the time included in the CP object.
    Type: Grant
    Filed: January 26, 2017
    Date of Patent: September 25, 2018
    Assignee: Reduxio Systems Ltd.
    Inventor: Uri Weissbrem
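A hedged Python sketch of the counter bookkeeping in patent 10082969 above. The two counters track user writes in the window between the consistency-point (CP) time and the next-consistency-point (NCP) time, one for writes recorded in cache and one for writes that reached persistent storage; the try_advance policy shown here is an assumption, not taken from the patent.

```python
import time


class ConsistencyTracker:
    """Tracks how many user writes in the window (CP time, NCP time] have
    been cached versus persisted; when the counters match, the NCP time can
    serve as the new consistent state to initialize from."""

    def __init__(self, window_seconds: float = 5.0):
        self.cp_time = time.time()                      # last consistency point
        self.ncp_time = self.cp_time + window_seconds   # strictly in the future
        self.cached_writes = 0       # first counter: writes recorded in cache
        self.persisted_writes = 0    # second counter: writes in persistent storage

    def _in_window(self, ts: float) -> bool:
        return self.cp_time < ts <= self.ncp_time

    def record_cached_write(self, ts: float):
        if self._in_window(ts):
            self.cached_writes += 1

    def record_persisted_write(self, ts: float):
        if self._in_window(ts):
            self.persisted_writes += 1

    def try_advance(self, window_seconds: float = 5.0) -> bool:
        # Everything cached in the window has also reached persistent storage,
        # so the system could be initialized consistently to ncp_time.
        if self.cached_writes and self.cached_writes == self.persisted_writes:
            self.cp_time = self.ncp_time
            self.ncp_time = self.cp_time + window_seconds
            self.cached_writes = self.persisted_writes = 0
            return True
        return False


tracker = ConsistencyTracker()
now = time.time() + 1.0
tracker.record_cached_write(now)
tracker.record_persisted_write(now)
assert tracker.try_advance()
```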
  • Patent number: 10050788
    Abstract: The invention provides a method for the contactless readout of an electronic identification document by means of a terminal, wherein in a data reading step encrypted identification data from a data memory are transmitted to the terminal, in a key reading step the data key with which the identification data can be decrypted is transmitted to the terminal, and in the terminal the identification data are decrypted with the data key. The data reading step is carried out employing a long-range radio connection, and the key reading step is carried out employing a short-range radio connection.
    Type: Grant
    Filed: December 18, 2012
    Date of Patent: August 14, 2018
    Assignee: GIESECKE+DEVRIENT MOBILE SECURITY GMBH
    Inventors: Jan Eichholz, Gisela Meister, Thomas Aichberger
  • Patent number: 10021209
    Abstract: Embodiments as disclosed provide a distributed caching solution that improves the performance and functionality of a content management platform for sites that are physically or logically remote from the primary site of the content management platform. In particular, according to embodiments, a remote cache server may be associated with a remote site to store local copies of documents that are managed by the primary content management platform. Periodically, a portion of the remote site's cache may be synchronized with the content management platform's primary site using an extensible architecture to ensure that content at the remote cache server is current.
    Type: Grant
    Filed: April 8, 2016
    Date of Patent: July 10, 2018
    Assignee: Open Text SA ULC
    Inventors: Nicolae Ionescu, Dan-Horia Trufasiu, Peter Varga, Tao Zhou, Franz Pauthner, Yue Kuk Wong
  • Patent number: 9971684
    Abstract: A device for interleaving/deinterleaving digital data delivered by processing elements (P0 . . . Pn-1) suitable for being used both with turbo-codes and with LDPC codes. The device includes memory banks (B0 . . . Bm-1) for storing data coming from or going to the processing elements, an interconnection network (INT) for directing the data between the processing elements and the memory banks, and a control unit (CTRL) for controlling the interconnection network and the memory banks. The control unit (CTRL) includes a calculation circuit (CAL) capable of the online generation of command words for the interconnection network and addressing and control sequences of the memory banks, ensuring conflict-free memory access on the basis of the interleaving rule to be applied, the size of the digital data frames, the number of processing units and memory banks, and the interconnection network.
    Type: Grant
    Filed: February 22, 2013
    Date of Patent: May 15, 2018
    Assignees: UNIVERSITE DE BRETAGNE SUD, CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE—CNRS
    Inventors: Philippe Coussy, Cyrille Chavet
  • Patent number: 9971700
    Abstract: A processing device includes a cache implementing a set of at least three cache slices. Each cache slice is to store a corresponding set of cache lines. The cache further includes cache control logic coupled to the set of at least three cache slices. The cache control logic is to map addresses of an address space to the cache such that each address within the address space maps to a corresponding strict subset of two or more cache slices of the set of cache slices.
    Type: Grant
    Filed: November 6, 2015
    Date of Patent: May 15, 2018
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Gabriel H. Loh
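A short Python sketch of the slice mapping in patent 9971700 above: with at least three cache slices, each address hashes to a strict subset of two slices, and lookups probe only that subset. The hash choice and the slice counts are illustrative.

```python
import hashlib

NUM_SLICES = 4          # at least three slices
SLICES_PER_ADDRESS = 2  # each address maps to a strict subset of two slices


def slice_subset(line_address: int) -> tuple:
    """Pick the strict subset of cache slices an address may live in.
    Derived from two hash bytes so the subset is stable per address."""
    digest = hashlib.blake2b(line_address.to_bytes(8, "little"),
                             digest_size=4).digest()
    first = digest[0] % NUM_SLICES
    # Second slice chosen among the remaining ones, so the subset has size 2.
    second = (first + 1 + digest[1] % (NUM_SLICES - 1)) % NUM_SLICES
    return (first, second)


def lookup(caches: list, line_address: int):
    """Probe only the slices in the address's subset."""
    for s in slice_subset(line_address):
        if line_address in caches[s]:
            return caches[s][line_address]
    return None


caches = [dict() for _ in range(NUM_SLICES)]
subset = slice_subset(0x12340)
caches[subset[0]][0x12340] = b"line"
assert lookup(caches, 0x12340) == b"line"
assert len(set(slice_subset(0x999))) == SLICES_PER_ADDRESS   # strict subset of 2
```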
  • Patent number: 9916179
    Abstract: In a transactional memory environment including a first processor and one or more additional processors, a computer-implemented method includes, by the first processor, initializing a time record, listening for zero or more probes from the one or more additional processors, responding to each probe of the zero or more probes, and logging each probe of the zero or more probes to yield a probe log. The computer-implemented method further includes, by the first processor, receiving a probe report directive and, responsive to the probe report directive, generating a probe report indication based on the probe log. The probe report indication denotes whether, since the time record, the first processor has received any of the zero or more probes. The computer-implemented method further includes ending the time record. A corresponding computer program product and computer system are also disclosed.
    Type: Grant
    Filed: October 29, 2015
    Date of Patent: March 13, 2018
    Assignee: International Business Machines Corporation
    Inventors: Michael Karl Gschwind, Maged M. Michael, Valentina Salapura, Eric M. Schwarz, Chung-Lung K. Shum, Timothy J. Slegel
  • Patent number: 9916180
    Abstract: In a transactional memory environment including a first processor and one or more additional processors, a computer-implemented method includes, by the first processor, initializing a time record, listening for zero or more probes from the one or more additional processors, responding to each probe of the zero or more probes, and logging each probe of the zero or more probes to yield a probe log. The computer-implemented method further includes, by the first processor, receiving a probe report directive and, responsive to the probe report directive, generating a probe report indication based on the probe log. The probe report indication denotes whether, since the time record, the first processor has received any of the zero or more probes. The computer-implemented method further includes ending the time record. A corresponding computer program product and computer system are also disclosed.
    Type: Grant
    Filed: May 25, 2016
    Date of Patent: March 13, 2018
    Assignee: International Business Machines Corporation
    Inventors: Michael Karl Gschwind, Maged M. Michael, Valentina Salapura, Eric M. Schwarz, Chung-Lung K. Shum, Timothy J. Slegel
  • Patent number: 9727472
    Abstract: Systems and methods presented herein provide for region lock management in an expander. In one embodiment, an expander, being operable to link a plurality of initiators to a plurality of Redundant Array of Independent Disks logical volumes, includes a plurality of physical transceivers, each being operable to link the logical volumes to the initiators, and a region lock manager operable to receive a request from a first of the initiators to lock a region of the logical volumes for an input/output operation by the first initiator. The region lock manager is also operable to determine if the requested region is unlocked, to lock the requested region from the remaining initiators to allow the input/output operation of the first initiator after determining the requested region is unlocked, and to unlock the requested region after the input/output operation of the first initiator is complete.
    Type: Grant
    Filed: February 14, 2013
    Date of Patent: August 8, 2017
    Assignee: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.
    Inventors: Naresh Madhusudana, Naveen Krishnamurthy
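A small Python sketch of the region lock manager in patent 9727472 above, assuming regions are LBA ranges of a logical volume: a lock is granted only if the requested region does not overlap an already-locked one, and it is released when the initiator's I/O completes. Class and method names are invented.

```python
class RegionLockManager:
    """Grants a lock on an LBA range of a logical volume to one initiator at
    a time; other initiators must wait until the range is unlocked."""

    def __init__(self):
        self.locks = {}   # volume -> list of (start_lba, end_lba, initiator)

    def _overlaps(self, volume, start, end):
        return any(s <= end and start <= e
                   for s, e, _ in self.locks.get(volume, []))

    def try_lock(self, initiator, volume, start_lba, end_lba) -> bool:
        # Lock only if the requested region is currently unlocked.
        if self._overlaps(volume, start_lba, end_lba):
            return False
        self.locks.setdefault(volume, []).append((start_lba, end_lba, initiator))
        return True

    def unlock(self, initiator, volume, start_lba, end_lba):
        # Called once the initiator's I/O on the region is complete.
        self.locks[volume].remove((start_lba, end_lba, initiator))


mgr = RegionLockManager()
assert mgr.try_lock("initiator-A", volume=0, start_lba=0, end_lba=1023)
assert not mgr.try_lock("initiator-B", volume=0, start_lba=512, end_lba=2047)
mgr.unlock("initiator-A", volume=0, start_lba=0, end_lba=1023)
assert mgr.try_lock("initiator-B", volume=0, start_lba=512, end_lba=2047)
```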
  • Patent number: 9729434
    Abstract: Provided are a computer program product, system, and method for processing requests for multiple services in a service request. A receiving controller, comprising one of a controlling forwarder or a data forwarder, receives a service request for a service from an originating device node. The receiving controller forwards an internal service request to a processing controller providing response information for the service request. The processing controller comprises a data forwarder when the receiving controller comprises the controlling forwarder or comprises the controlling forwarder when the receiving controller comprises one of the at least one data forwarder. The processing controller processes the internal service request to generate response information requested by the service request and forwards a reply including the response information to the receiving controller, which forwards the response information in a reply to the service request to the originating device node.
    Type: Grant
    Filed: April 12, 2013
    Date of Patent: August 8, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Roger G. Hathorn, Henry J. May
  • Patent number: 9710280
    Abstract: In one embodiment, the present invention includes a processor having a core to execute instructions. This core can include various structures and logic that enable instructions of different atomic regions to be executed in an overlapping manner. To this end, the core can include a register file having registers to store data for use in execution of the instructions, and multiple shadow register files each to store a register checkpoint on initiation of a given atomic region. In this way, overlapping execution of atomic regions identified by a programmer or compiler can occur. Other embodiments are described and claimed.
    Type: Grant
    Filed: December 30, 2011
    Date of Patent: July 18, 2017
    Assignee: Intel Corporation
    Inventors: Jaewoong Chung, Cheng Wang, Youfeng Wu
  • Patent number: 9703718
    Abstract: Managing cache evictions during transactional execution of a process. Based on initiating transactional execution of a memory data accessing instruction, memory data is fetched from a memory location, the memory data to be loaded as a new line into a cache entry of the cache. Based on determining that a threshold number of cache entries have been marked as read-set cache lines, determining whether a cache entry that is a read-set cache line can be replaced by identifying a cache entry that is a read-set cache line for the transaction that contains memory data from a memory address within a predetermined non-conflict address range. Then invalidating the identified cache entry of the transaction. Then loading the fetched memory data into the identified cache entry, and then marking the identified cache entry as a read-set cache line of the transaction.
    Type: Grant
    Filed: June 27, 2014
    Date of Patent: July 11, 2017
    Assignee: International Business Machines Corporation
    Inventors: Dan F. Greiner, Michael Karl Gschwind, Eric M. Schwarz, Chung-Lung K. Shum, Timothy J. Slegel
  • Patent number: 9674283
    Abstract: Methods and systems are provided for negotiating a session with a first server, wherein data within the session travels through at least a second server such as a proxy server; replacing a first client global unique identifier (GUID) with a second GUID generated by the second server; maintaining a GUID map table at the second server mapping the second GUID with the first GUID; requesting a plurality of leases on a file from the first server, wherein the each of the plurality of lease requests comprises a lease key and the second GUID, wherein the lease key is identical for each of the plurality of leases; providing caching services, wherein caching services are associated with a lease state corresponding to one of the plurality of leases; receiving an indication that a second client has made a lease request for the file; breaking the first lease upon receipt of the indication; and communicating a lease break notification to addresses associated with the second GUID.
    Type: Grant
    Filed: January 15, 2014
    Date of Patent: June 6, 2017
    Assignee: Cisco Technology, Inc.
    Inventors: Jitesh Mulchandani, Mangal Vithal Shrirame, Magesh Babu Nagalingam, Srinivas Dharmasanam, Paul Theodore Mathison, Suresh Pachiappan
  • Patent number: 9652413
    Abstract: A signal processing system comprising at least one master device, at least one memory element, and a prefetch module arranged to perform prefetching from the at least one memory element upon a memory access request to the at least one memory element from the at least one master device. Upon receiving a memory access request from the at least one master device, the prefetch module is arranged to configure the enabling of prefetching of at least one of instruction information and data information in relation to that memory access request based at least partly on an address to which the memory access request relates.
    Type: Grant
    Filed: July 20, 2009
    Date of Patent: May 16, 2017
    Assignee: NXP USA, Inc.
    Inventors: Alistair Robertson, Joseph Circello, Mark Maiolani
  • Patent number: 9588924
    Abstract: A hybrid messaging model including a method that sends a first request message from a control process executing on a computer to a plurality of subordinate processes. The first request message directs the subordinate processes to enter a first state. An expected state is set equal to the first state in response to sending the first request message. A status message, including the expected state, is periodically broadcast from the control process to the plurality of subordinate processes. At least one confirmation message is received from each of the subordinate processes confirming that the subordinate process is in the expected state. Each of the confirmation messages is responsive to either the first request message or to the status message. A second request message is sent from the control process to the plurality of subordinate processes in response to receiving at least one confirmation message from each of the subordinate processes.
    Type: Grant
    Filed: May 26, 2011
    Date of Patent: March 7, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kevin J. Kathmann, Alexei L. Pytel, Steven J. Simonson, Bruce W. Talbott, Thomas J. Wasik
  • Patent number: 9563564
    Abstract: Systems and methods for cache allocation with code and data prioritization. An example system may comprise: a cache; a processing core, operatively coupled to the cache; and a cache control logic, responsive to receiving a cache fill request comprising an identifier of a request type and an identifier of a class of service, to identify a subset of the cache corresponding to a capacity bit mask associated with the request type and the class of service.
    Type: Grant
    Filed: April 7, 2015
    Date of Patent: February 7, 2017
    Assignee: Intel Corporation
    Inventors: Andrew J. Herdrich, Edwin Verplanke, Ravishankar Iyer, Christopher C. Gianos, Jeffrey D. Chamberlain, Ronak Singhal, Julius Mandelblat, Bret L. Toll
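A toy Python sketch of the mask-based fill placement in patent 9563564 above: each (request type, class of service) pair selects a capacity bit mask, and the fill may only occupy, and evict within, the cache ways that the mask permits. The mask values and the eight-way layout are assumptions.

```python
# Capacity bit masks: one bit per cache way (8 ways here). A set bit means a
# fill may be placed in that way. Keyed by (request_type, class_of_service).
CAPACITY_MASKS = {
    ("code", 0): 0b11110000,   # high-priority code fills: ways 4-7
    ("data", 0): 0b00001111,   # high-priority data fills: ways 0-3
    ("code", 1): 0b11000000,   # lower class of service gets fewer ways
    ("data", 1): 0b00000011,
}


def allowed_ways(request_type: str, class_of_service: int) -> list:
    """Ways the cache fill is allowed to occupy for this request."""
    mask = CAPACITY_MASKS[(request_type, class_of_service)]
    return [way for way in range(8) if mask & (1 << way)]


def choose_victim_way(request_type: str, class_of_service: int,
                      lru_order: list) -> int:
    """Pick the least-recently-used way among those permitted by the mask."""
    permitted = set(allowed_ways(request_type, class_of_service))
    for way in lru_order:                 # lru_order lists least-recent first
        if way in permitted:
            return way
    raise ValueError("mask permits no way")


# A code fill from class-of-service 0 may only evict within ways 4-7.
print(choose_victim_way("code", 0, lru_order=[2, 5, 7, 0, 1, 3, 4, 6]))  # -> 5
```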
  • Patent number: 9396227
    Abstract: A system, method, and non-transitory computer readable medium for providing controlled lock violation for data transactions are presented. The system includes a processor for executing a first data transaction and a second data transaction, the first and second data transactions operating on a plurality of data resources. A controlled lock violation module grants to the second transaction a conflicting lock to a data resource locked by the first transaction with a lock, the conflicting lock granted to the second transaction while the first transaction holds its lock. The controlled lock violation module can be applied to distributed transactions in a two-phase commit and to canned transactions.
    Type: Grant
    Filed: March 29, 2012
    Date of Patent: July 19, 2016
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Goetz Graefe, Harumi Kuno
  • Patent number: 9383998
    Abstract: A system and method for fencing memory accesses. Memory loads can be fenced, or all memory access can be fenced. The system receives a fencing instruction that separates memory access instructions into older accesses and newer accesses. A buffer within the memory ordering unit is allocated to the instruction. The access instructions newer than the fencing instruction are stalled. The older access instructions are gradually retired. When all older memory accesses are retired, the fencing instruction is dispatched from the buffer.
    Type: Grant
    Filed: April 5, 2012
    Date of Patent: July 5, 2016
    Assignee: Intel Corporation
    Inventors: Salvador Palanca, Stephen A. Fischer, Subramaniam Maiyuran, Shekoufeh Qawami
  • Patent number: 9378210
    Abstract: Managing the writing of a dataset by initiating a first computer-implemented process to write a plurality of portions of a dataset to a corresponding plurality of data storage locations on at least one data storage device, identifying a request made by a second computer-implemented process to write data to one of the data storage locations before the first computer-implemented process has finished writing all of the portions of the dataset to all of the data storage locations, and excluding the data storage location associated with the request from future writes by the first computer-implemented process of any portion of the dataset.
    Type: Grant
    Filed: July 1, 2013
    Date of Patent: June 28, 2016
    Assignee: HAPPY CLOUD INC.
    Inventors: Gavriel Raanan, Lawrence Reisler
  • Patent number: 9367363
    Abstract: Systems and methods for integrating multiple best effort hardware transactional support mechanisms, such as Read Set Monitoring (RSM) and Best Effort Hardware Transactional Memory (BEHTM), in a single transactional memory implementation are described. The best effort mechanisms may be integrated such that the overhead associated with support of multiple mechanisms may be reduced and/or the performance of the resulting transactional memory implementations may be improved over those that include any one of the mechanisms, or an un-integrated collection of multiple such mechanisms. Two or more of the mechanisms may be employed concurrently or serially in a single attempt to execute a transaction, without aborting or retrying the transaction. State maintained or used by a first mechanism may be shared with or transferred to another mechanism for use in execution of the transaction. This transfer may be performed automatically by the integrated mechanisms (e.g., without user, programmer, or software intervention).
    Type: Grant
    Filed: September 25, 2008
    Date of Patent: June 14, 2016
    Assignee: Oracle America, Inc.
    Inventors: Mark S. Moir, David Dice