Caching Patents (Class 711/118)
  • Patent number: 10353682
    Abstract: A device includes a processor configured to: divide a loop in a program into a first loop and a second loop when compiling the program, the loop accessing data of an array and prefetching, at each repetition, data of the array to be accessed at the repetition that lies a prescribed number of repetitions ahead; the first loop including one or more repetitions from an initial repetition to the repetition immediately before the repetition after the prescribed repetitions; the second loop including one or more repetitions from the repetition after the prescribed repetitions to a last repetition; and generate intermediate language code configured to access data of the array using a first region in a cache memory and to prefetch data of the array using a second region in the cache memory in the first loop, and to access and prefetch data of the array using the second region in the second loop.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: July 16, 2019
    Assignee: FUJITSU LIMITED
    Inventor: Yuta Mukai
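The loop-splitting idea above can be sketched in plain C. This is only an illustration, not Fujitsu's implementation: it shows a loop divided into a first loop covering the iterations before the prefetch distance is reached and a second loop covering the rest, with GCC's __builtin_prefetch standing in for the generated prefetches; the patented first/second cache-region assignment is hardware-specific and appears only in comments. The prefetch distance PF_DIST and the function name are assumptions.

```c
#include <stddef.h>

#define PF_DIST 16  /* the "prescribed repetitions": how far ahead to prefetch (assumed) */

/* Sum an array while prefetching PF_DIST elements ahead.
 * First loop (iterations 0 .. PF_DIST-1): in the patent, these accesses would
 * use a first cache region while the prefetched lines land in a second region.
 * Second loop (iterations PF_DIST .. n-1): every element accessed here was
 * prefetched earlier, so accesses and prefetches both use the second region. */
double sum_with_split_prefetch(const double *a, size_t n)
{
    double s = 0.0;
    size_t i = 0;
    size_t split = (n < PF_DIST) ? n : PF_DIST;

    for (; i < split; i++) {                      /* first loop */
        if (i + PF_DIST < n)
            __builtin_prefetch(&a[i + PF_DIST], 0 /* read */, 3 /* keep in cache */);
        s += a[i];
    }
    for (; i < n; i++) {                          /* second loop */
        if (i + PF_DIST < n)
            __builtin_prefetch(&a[i + PF_DIST], 0, 3);
        s += a[i];
    }
    return s;
}
```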
  • Patent number: 10353885
    Abstract: Embodiments of the present invention provide a method, computer program product, and computer system for storing data records in extents. According to one embodiment, a data record comprising an attribute value is received. One or more data records stored in a first extent are identified, wherein the stored data records in the first extent have at least one attribute value. The attribute value of the received data record is compared to the attribute values of the identified data records stored in the first extent. It is then determined whether to store the received data record in the first extent. Responsive to determining not to store the received data record in the first extent, the received data record is stored in a second extent. If the received data record is stored in the second extent, attribute value information of the second extent is determined.
    Type: Grant
    Filed: July 29, 2016
    Date of Patent: July 16, 2019
    Assignee: International Business Machines Corporation
    Inventors: Michal Bodziony, Artur M. Gruszecki, Tomasz Kazalski, Konrad K. Skibski
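The placement decision described in this abstract can be pictured as zone-map style bookkeeping: each extent records a summary (here a min/max range) of the attribute values of the records it already holds, a new record goes into the first extent only when its attribute value is close to that range, and otherwise it is written to a second extent whose attribute information is then updated. The sketch below is a hedged illustration under that assumption; the struct layout, the "fits" rule, and all names are invented, not taken from the patent.

```c
#include <stdbool.h>
#include <stdio.h>

/* Per-extent attribute-value summary (a simple min/max "zone map"). */
struct extent {
    int attr_min;
    int attr_max;
    int count;
};

/* Decide whether a record with attribute value v belongs in extent e: here,
 * only if v lies inside (or immediately adjacent to) the range of values
 * already stored there. The actual comparison policy in the patent is unspecified. */
static bool fits_extent(const struct extent *e, int v)
{
    if (e->count == 0)
        return true;
    return v >= e->attr_min - 1 && v <= e->attr_max + 1;
}

/* Store the record in e, updating the extent's attribute value information. */
static void store_in_extent(struct extent *e, int v)
{
    if (e->count == 0 || v < e->attr_min) e->attr_min = v;
    if (e->count == 0 || v > e->attr_max) e->attr_max = v;
    e->count++;
}

int main(void)
{
    struct extent first = {0, 0, 0}, second = {0, 0, 0};
    int incoming[] = {10, 12, 11, 500, 13, 498};

    for (size_t i = 0; i < sizeof incoming / sizeof incoming[0]; i++) {
        int v = incoming[i];
        if (fits_extent(&first, v))
            store_in_extent(&first, v);   /* keep the first extent's range tight */
        else
            store_in_extent(&second, v);  /* spill, and record the second extent's range too */
    }
    printf("first extent:  [%d, %d] (%d records)\n", first.attr_min, first.attr_max, first.count);
    printf("second extent: [%d, %d] (%d records)\n", second.attr_min, second.attr_max, second.count);
    return 0;
}
```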
  • Patent number: 10353820
    Abstract: Systems and methods for a low-overhead index for a cache. The index is used to access content or segments in the cache by storing at least an identifier and a location. The index is accessed using the identifier. The identifier may be shortened or be a short identifier. Because a collision may occur, the index may also include one or more meta-data values associated with the data segment. Collisions can be resolved by also comparing the metadata of the segment with the metadata stored in the index. If both the short identifier and metadata match those of the segment, the segment is likely in the cache and can be accessed. Segments can also be inserted into the cache.
    Type: Grant
    Filed: August 14, 2018
    Date of Patent: July 16, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Grant R. Wallace, Philip N. Shilane
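A hedged sketch of the lookup described above: the index is keyed by a shortened identifier, and because short identifiers can collide, a hit is only trusted when a stored metadata value also matches the segment's metadata. The table size, field widths, and direct-mapped layout below are assumptions for illustration, not the patented design.

```c
#include <stdbool.h>
#include <stdint.h>

#define INDEX_BUCKETS 4096

/* One index entry: a short identifier, a metadata value used to resolve
 * collisions, and the location of the segment in the cache. */
struct index_entry {
    uint16_t short_id;   /* truncated fingerprint of the segment */
    uint32_t meta;       /* e.g. segment size or a secondary checksum */
    uint64_t location;   /* offset of the segment in the cache */
    bool     valid;
};

static struct index_entry cache_index[INDEX_BUCKETS];

/* Look up a segment: both the short identifier and the metadata must match,
 * otherwise treat it as a miss (a collision on the short id alone). */
static bool index_lookup(uint16_t short_id, uint32_t meta, uint64_t *location_out)
{
    const struct index_entry *e = &cache_index[short_id % INDEX_BUCKETS];
    if (e->valid && e->short_id == short_id && e->meta == meta) {
        *location_out = e->location;
        return true;       /* segment is very likely in the cache */
    }
    return false;          /* miss, or a short-id collision */
}

/* Insert (or overwrite) an entry when a segment is added to the cache. */
static void index_insert(uint16_t short_id, uint32_t meta, uint64_t location)
{
    struct index_entry *e = &cache_index[short_id % INDEX_BUCKETS];
    e->short_id = short_id;
    e->meta = meta;
    e->location = location;
    e->valid = true;
}

int main(void)
{
    uint64_t loc;
    index_insert(0xBEEF, 4096, 123456);
    return index_lookup(0xBEEF, 4096, &loc) ? 0 : 1;
}
```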
  • Patent number: 10346316
    Abstract: Provided are a computer program product, system, and method for processing cache miss rates to determine memory space to add to an active cache to reduce a cache miss rate for the active cache. During caching operations to the active cache, information is gathered on an active cache miss rate based on a rate of access to tracks that are not indicated in the active cache list and a cache demote rate. A determination is made as to whether adding additional memory space to the active cache would result in the active cache miss rate being less than the cache demote rate when the active cache miss rate exceeds the cache demote rate. A message is generated indicating to add the additional memory space when adding the additional memory space would result in the active cache miss rate being less than the cache demote rate.
    Type: Grant
    Filed: June 21, 2017
    Date of Patent: July 9, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kyler A. Anderson, Kevin J. Ash, Matthew G. Borlick, Lokesh M. Gupta
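The decision described above reduces to comparing a measured miss rate, the cache demote rate, and a projected miss rate if memory were added. The sketch below assumes events-per-second units and a simple inverse-capacity projection model purely for illustration; the patent does not specify how the projection is computed.

```c
#include <stdio.h>

/* Gathered during caching operations (assumed units: events per second). */
struct cache_stats {
    double miss_rate;    /* accesses to tracks not indicated in the active cache list */
    double demote_rate;  /* rate at which tracks are demoted from the active cache */
};

/* Placeholder projection of the miss rate with extra memory added; assume
 * misses scale down in proportion to the capacity increase (an assumption). */
static double projected_miss_rate(const struct cache_stats *s,
                                  double current_gb, double extra_gb)
{
    return s->miss_rate * current_gb / (current_gb + extra_gb);
}

int main(void)
{
    struct cache_stats s = { .miss_rate = 1200.0, .demote_rate = 1000.0 };
    double current_gb = 64.0, extra_gb = 32.0;

    /* Only consider adding memory when misses outpace demotions. */
    if (s.miss_rate > s.demote_rate) {
        double projected = projected_miss_rate(&s, current_gb, extra_gb);
        if (projected < s.demote_rate)
            printf("Recommend adding %.0f GB: projected miss rate %.0f/s "
                   "would fall below demote rate %.0f/s\n",
                   extra_gb, projected, s.demote_rate);
    }
    return 0;
}
```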
  • Patent number: 10345883
    Abstract: A power state transformer, a system and a method thereof are disclosed. The power state transformer is coupled with a processing unit model. The power state transformer is configured for counting performance activities executed in the processing unit model, and further for determining a power state of the processing unit model according to count values of the performance activities.
    Type: Grant
    Filed: May 31, 2016
    Date of Patent: July 9, 2019
    Assignee: TAIWAN SEMICONDUCTOR MANUFACTURING CO., LTD.
    Inventors: Kai-Yuan Ting, Shereef Shehata, Tze-Chiang Huang, Sandeep Kumar Goel, Mei Wong, Yun-Han Lee
  • Patent number: 10341288
    Abstract: Disclosed are methods, circuits, devices, systems and associated computer executable code for providing Domain Name Resolution functionality to a data client device accessing a networked data resource through an access point of a data communication network. According to some embodiments, an access point or node of a data communication network may be integral or otherwise functionally associated with a conditional domain name system (CDNS), which CDNS may include a local cache of conditional DNS records.
    Type: Grant
    Filed: October 28, 2015
    Date of Patent: July 2, 2019
    Assignee: SAGUNA NETWORKS LTD.
    Inventors: Daniel Nathan Frydman, Lior Fite
  • Patent number: 10339094
    Abstract: A computer processor is disclosed. The computer processor comprises one or more processor resources. The computer processor further comprises a plurality of hardware thread units coupled to the one or more processor resources. The computer processor may be configured to permit simultaneous access to the one or more processor resources by only a subset of hardware thread units of the plurality of hardware thread units. The number of hardware threads in the subset may be less than the total number of hardware threads of the plurality of hardware thread units.
    Type: Grant
    Filed: May 19, 2015
    Date of Patent: July 2, 2019
    Assignee: OPTIMUM SEMICONDUCTOR TECHNOLOGIES, INC.
    Inventors: Mayan Moudgill, Gary J. Nacer, C. John Glossner, Arthur Joseph Hoane, Paul Hurtley, Murugappan Senthilvelan, Pablo Balzola, Vitaly Kalashnikov, Sitij Agrawal
  • Patent number: 10339058
    Abstract: Aspects include computing devices and methods implemented by the computing devices for automatic cache coherency for page table data on a computing device. Some aspects may include modifying, by a first processing device, page table data stored in a first cache associated with the first processing device, receiving, at a page table coherency unit, a page table cache invalidate signal from the first processing device, issuing, by the page table coherency unit, a cache maintenance operation command to the first processing device, and writing, by the first processing device, the modified page table data stored in the first cache to a shared memory accessible by the first processing device and a second processing device associated with a second cache storing the page table data.
    Type: Grant
    Filed: July 25, 2017
    Date of Patent: July 2, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Andrew Edmund Turner, Farrukh Hijaz, Bohuslav Rychlik
  • Patent number: 10331373
    Abstract: A data processing system includes at least one processor core each having an associated store-through upper level cache and an associated store-in lower level cache. In response to execution of a memory move instruction sequence including a plurality of copy-type instructions and a plurality of paste-type instructions, the at least one processor core transmits a corresponding plurality of copy-type and paste-type requests to its associated lower level cache, where each copy-type request specifies a source real address and each paste-type request specifies a destination real address. In response to receipt of each copy-type request, the associated lower level cache copies a respective data granule from a respective storage location specified by the source real address of that copy-type request into a non-architected buffer.
    Type: Grant
    Filed: August 22, 2016
    Date of Patent: June 25, 2019
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, William J. Starke, Derek E. Williams
  • Patent number: 10331572
    Abstract: Techniques are provided for maintaining data persistently in one format, but making that data available to a database server in more than one format. For example, one of the formats in which the data is made available for query processing is based on the on-disk format, while another of the formats in which the data is made available for query processing is independent of the on-disk format. Data that is in the format that is independent of the disk format may be maintained exclusively in volatile memory to reduce the overhead associated with keeping the data in sync with the on-disk format copies of the data. Selection of data to be maintained in the volatile memory may be based on various factors. Once selected, the data may also be compressed to save space in the volatile memory. The compression level may depend on one or more factors that are evaluated for the selected data.
    Type: Grant
    Filed: May 14, 2018
    Date of Patent: June 25, 2019
    Assignee: Oracle International Corporation
    Inventors: Chinmayi Krishnappa, Vineet Marwah, Amit Ganesh
  • Patent number: 10324861
    Abstract: According to embodiments described herein, the hierarchical complexity for coherence protocols associated with clustered cache architectures can be encapsulated in a simple function, i.e., that of determining when a data block is shared entirely within a cluster (i.e., a sub-tree of the hierarchy) and is private from the outside. This allows embodiments to eliminate complex recursive coherence operations that span the hierarchy and instead employ simple coherence mechanisms such as self-invalidation and write-through but which are restricted to operate where a data block is shared. Thus embodiments recognize that, in the context of clustered cache hierarchies, data can be shared entirely within one cluster but can be private (unshared) to this cluster when viewed from the perspective of other clusters. This characteristic of the data can be determined and then used to locally simplify coherence protocols.
    Type: Grant
    Filed: February 4, 2016
    Date of Patent: June 18, 2019
    Assignee: ETA SCALE AB
    Inventors: Alberto Ros, Stefanos Kaxiras
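The central classification in this abstract, deciding whether a data block is shared entirely within one cluster (a sub-tree of the hierarchy) and is therefore private from the outside, can be expressed as a small check over the block's sharer set. The sketch below assumes a flat two-level hierarchy, a fixed cluster size, and a bitmask sharer representation; it illustrates the classification only, not the self-invalidation/write-through protocol built on top of it.

```c
#include <stdbool.h>
#include <stdio.h>

#define CORES_PER_CLUSTER 4

/* A block's sharers as a core bitmask (assumed representation). */
typedef unsigned long long sharer_mask_t;

/* Return true when every sharer of the block lives in the same cluster,
 * i.e. the block is shared inside that cluster but private when viewed
 * from any other cluster, so coherence actions can stay local. */
static bool shared_only_within_one_cluster(sharer_mask_t sharers, int *cluster_out)
{
    if (sharers == 0)
        return false;
    /* Cluster of the lowest-numbered sharer... */
    int first_core = __builtin_ctzll(sharers);
    int cluster = first_core / CORES_PER_CLUSTER;
    /* ...and check that no sharer falls outside that cluster's core range. */
    sharer_mask_t cluster_mask =
        ((1ULL << CORES_PER_CLUSTER) - 1) << (cluster * CORES_PER_CLUSTER);
    if ((sharers & ~cluster_mask) != 0)
        return false;
    *cluster_out = cluster;
    return true;
}

int main(void)
{
    int cluster;
    sharer_mask_t block_a = (1ULL << 1) | (1ULL << 3);  /* cores 1 and 3: both in cluster 0 */
    sharer_mask_t block_b = (1ULL << 2) | (1ULL << 5);  /* cores 2 and 5: two clusters */

    printf("block A cluster-private: %d\n", shared_only_within_one_cluster(block_a, &cluster));
    printf("block B cluster-private: %d\n", shared_only_within_one_cluster(block_b, &cluster));
    return 0;
}
```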
  • Patent number: 10324856
    Abstract: Technical solutions are described for executing one or more out-of-order instructions by a processing unit. An example method includes executing, by a load-store unit (LSU), instructions from an out-of-order (OoO) window. The OoO execution includes determining an effective address being used by a load instruction from the OoO window. Further, the execution includes determining presence of the effective address in an effective address directory (EAD) by identifying an EAD entry in the EAD, the EAD entry maps the effective address with an index of a corresponding effective-real table (ERT) entry from an effective-real table (ERT). In response to the effective address being present in the EAD, the execution includes accessing the corresponding ERT entry of the effective address of the load instruction, the corresponding ERT entry including a real address for the effective address, and issuing the load instruction using the real address from the corresponding ERT entry.
    Type: Grant
    Filed: November 29, 2017
    Date of Patent: June 18, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Bryan Lloyd, Balaram Sinharoy, Shih-Hsiung S. Tung
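The lookup path described above amounts to two table accesses: the effective address directory (EAD) maps a load's effective address to an index, and the effective-real table (ERT) entry at that index supplies the real address used to issue the load. The structures and the linear search below are invented for illustration; real hardware would use associative lookup.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define EAD_ENTRIES 64
#define ERT_ENTRIES 32

struct ead_entry {             /* effective address directory entry */
    uint64_t effective_addr;   /* tag: effective (virtual) address */
    size_t   ert_index;        /* index of the matching ERT entry */
    bool     valid;
};

struct ert_entry {             /* effective-real table entry */
    uint64_t real_addr;        /* translated real (physical) address */
    bool     valid;
};

static struct ead_entry ead[EAD_ENTRIES];
static struct ert_entry ert[ERT_ENTRIES];

/* For a load's effective address, find its EAD entry and, through it, the
 * real address in the ERT; returns false when the EAD misses and a slower
 * translation path would have to be taken instead. */
static bool lookup_real_address(uint64_t effective_addr, uint64_t *real_out)
{
    for (size_t i = 0; i < EAD_ENTRIES; i++) {
        if (ead[i].valid && ead[i].effective_addr == effective_addr) {
            const struct ert_entry *e = &ert[ead[i].ert_index];
            if (e->valid) {
                *real_out = e->real_addr;  /* issue the load with this address */
                return true;
            }
        }
    }
    return false;
}

int main(void)
{
    uint64_t ra;
    ert[7] = (struct ert_entry){ .real_addr = 0x1000, .valid = true };
    ead[0] = (struct ead_entry){ .effective_addr = 0xBEEF000, .ert_index = 7, .valid = true };
    return (lookup_real_address(0xBEEF000, &ra) && ra == 0x1000) ? 0 : 1;
}
```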
  • Patent number: 10318427
    Abstract: An instruction in a first cache line may be identified and an address associated with the instruction may be determined. The address may be determined to cross a cache line boundary associated with the first cache line and a second cache line. In response to determining that the address crosses the cache line boundary, the instruction may be adjusted based on a portion of the address included in the first cache line and a second instruction may be created based on a portion of the address included in the second cache line. The second instruction may be injected into an instruction pipeline after the adjusted instruction.
    Type: Grant
    Filed: December 18, 2014
    Date of Patent: June 11, 2019
    Assignee: Intel Corporation
    Inventors: Ramon Matas, Chung-Lun Chan, Alexey P. Suprun, Aditya Kesiraju
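The boundary condition in this abstract is plain address arithmetic: an access crosses a cache line boundary when the bytes remaining in the first line are fewer than the access size, and the two portions (one for the adjusted instruction, one for the injected second instruction) follow from the offset within the line. A small sketch, assuming 64-byte lines:

```c
#include <stdint.h>
#include <stdio.h>

#define CACHE_LINE_SIZE 64u

/* Split an access of 'size' bytes at 'addr' into the part that lies in the
 * first cache line and the remainder that spills into the second line.
 * Returns 1 when the access crosses the boundary, i.e. when, per the abstract,
 * the instruction would be adjusted and a second instruction injected. */
static int split_at_line_boundary(uint64_t addr, unsigned size,
                                  unsigned *bytes_in_first, unsigned *bytes_in_second)
{
    unsigned offset = (unsigned)(addr % CACHE_LINE_SIZE);
    unsigned room   = CACHE_LINE_SIZE - offset;    /* bytes left in this line */

    if (size <= room) {              /* fits entirely in the first line */
        *bytes_in_first  = size;
        *bytes_in_second = 0;
        return 0;
    }
    *bytes_in_first  = room;         /* handled by the adjusted instruction */
    *bytes_in_second = size - room;  /* handled by the injected second instruction */
    return 1;
}

int main(void)
{
    unsigned first, second;
    int crosses = split_at_line_boundary(0x103C, 8, &first, &second);
    printf("crosses=%d first=%u second=%u\n", crosses, first, second);
    return 0;
}
```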
  • Patent number: 10318511
    Abstract: In non-limiting examples of the present disclosure, systems and methods for interning expression trees are provided. Hash code for a plurality of expression tree nodes is recursively computed and a determination is made as to whether hash code for each of a plurality of expression tree nodes is stored in a cached intern pool. Upon determining that at least one of a plurality of expression tree nodes is not stored in a cached intern pool, one or more functions may be run on at least one of a plurality of expression tree nodes for determining whether at least one of a plurality of expression tree nodes should be stored in a cached intern pool. Normalization of expression trees may also be employed to effectuate effective sharing of expression tree nodes.
    Type: Grant
    Filed: November 25, 2015
    Date of Patent: June 11, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Bart De Smet, Eric Anthony Rozell
  • Patent number: 10318513
    Abstract: Embodiments of the present invention provide a method, computer program product, and computer system for storing data records in extents. According to one embodiment, a data record comprising an attribute value is received. One or more data records stored in a first extent are identified, wherein the stored data records in the first extent have at least one attribute value. The attribute value of the received data record is compared to the attribute values of the identified data records stored in the first extent. It is then determined whether to store the received data record in the first extent. Responsive to determining not to store the received data record in the first extent, the received data record is stored in a second extent. If the received data record is stored in the second extent, attribute value information of the second extent is determined.
    Type: Grant
    Filed: December 5, 2017
    Date of Patent: June 11, 2019
    Assignee: International Business Machines Corporation
    Inventors: Michal Bodziony, Artur M. Gruszecki, Tomasz Kazalski, Konrad K. Skibski
  • Patent number: 10310769
    Abstract: A memory system may include a memory device comprising a plurality of memory blocks each having a plurality of pages; and a controller suitable for storing data in a first memory block among the memory blocks, storing map data of the data in a second memory block among the memory blocks, and scanning the map data by performing filtering on logical information of the data in response to a command.
    Type: Grant
    Filed: April 11, 2016
    Date of Patent: June 4, 2019
    Assignee: SK hynix Inc.
    Inventor: Jong-Min Lee
  • Patent number: 10310988
    Abstract: Technical solutions are described for executing one or more out-of-order instructions by a processing unit. An example method includes executing, by a load-store unit (LSU), instructions from an out-of-order (OoO) window. The OoO execution includes determining an effective address being used by a load instruction from the OoO window. Further, the execution includes determining presence of the effective address in an effective address directory (EAD) by identifying an EAD entry in the EAD, the EAD entry maps the effective address with an index of a corresponding effective-real table (ERT) entry from an effective-real table (ERT). In response to the effective address being present in the EAD, the execution includes accessing the corresponding ERT entry of the effective address of the load instruction, the corresponding ERT entry including a real address for the effective address, and issuing the load instruction using the real address from the corresponding ERT entry.
    Type: Grant
    Filed: October 6, 2017
    Date of Patent: June 4, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Bryan Lloyd, Balaram Sinharoy, Shih-Hsiung S. Tung
  • Patent number: 10310757
    Abstract: Systems, methods, and computer programs are disclosed for reducing memory power consumption. An exemplary method comprises configuring a power saving memory balloon associated with a volatile memory. Memory allocations are steered to the power saving memory balloon. In response to initiating a memory power saving mode, data is migrated from the power saving memory balloon. A power saving feature is executed on the power saving memory balloon while in the memory power saving mode.
    Type: Grant
    Filed: August 23, 2017
    Date of Patent: June 4, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Yanru Li, Larry Bassel, Thomas Zeng, Dexter Chun
  • Patent number: 10305986
    Abstract: A cloud-based storage service hosts content information that may be accessed by client machines in a peer-to-peer network. The content information is a compact representation of the content which is stored outside of the cloud-based storage service. The cloud-based storage service generates the content information and a content information hash. The content information hash is used to validate the content information when the content information is downloaded to the peer-to-peer network. The cloud-based storage service also generates metadata that describes the content information so that a client machine in the peer-to-peer network may access the content information from the cloud-based storage service.
    Type: Grant
    Filed: July 21, 2015
    Date of Patent: May 28, 2019
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC.
    Inventors: Daniel Kappes, Jian Lin, Igor Liokumovich, Hemant Nanivadekar, Mandar Gokhale
  • Patent number: 10303603
    Abstract: A special class of loads and stores accesses a user-defined memory region where coherency and memory ordering are only enforced at the coherent point. Coherent memory requests, which are limited to the user-defined memory region, are dispatched to the common memory ordering buffer. Non-coherent memory requests (e.g., all other memory requests) can be routed via non-coherent lower level caches to the shared last level cache. By assigning a private, non-overlapping address space to each of the processor cores, the lower-level caches do not need to implement the logic necessary to maintain cache coherency. This can reduce power consumption and integrated circuit die area. This can also improve memory bandwidth and performance for applications with predominantly non-coherent memory accesses while still providing memory coherence for specific memory range(s)/applications that demand it.
    Type: Grant
    Filed: June 13, 2017
    Date of Patent: May 28, 2019
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventor: Patrick P. Lai
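The routing decision above can be pictured as a range check per request: addresses inside the user-defined coherent region are dispatched to the common memory ordering buffer, and everything else takes the non-coherent path through the lower-level caches to the shared last-level cache. A minimal sketch assuming a single, fixed region; the bounds and names are invented:

```c
#include <stdint.h>
#include <stdio.h>

/* User-defined memory region for which coherence is enforced (assumed bounds). */
#define COHERENT_REGION_BASE 0x40000000ull
#define COHERENT_REGION_SIZE 0x00100000ull   /* 1 MiB */

enum route { ROUTE_COHERENT_ORDERING_BUFFER, ROUTE_NONCOHERENT_CACHES };

/* Dispatch a memory request: coherent requests (inside the region) go to the
 * common memory ordering buffer, everything else is routed via the
 * non-coherent lower-level caches toward the shared last-level cache. */
static enum route route_request(uint64_t addr)
{
    if (addr >= COHERENT_REGION_BASE &&
        addr < COHERENT_REGION_BASE + COHERENT_REGION_SIZE)
        return ROUTE_COHERENT_ORDERING_BUFFER;
    return ROUTE_NONCOHERENT_CACHES;
}

int main(void)
{
    printf("%d\n", route_request(0x40000040));  /* inside the region  -> 0 */
    printf("%d\n", route_request(0x80000000));  /* outside the region -> 1 */
    return 0;
}
```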
  • Patent number: 10303604
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for caching data not frequently accessed. One of the methods includes receiving a request for data from a component of a device, determining that the data satisfies an infrequency condition, in response to determining that the data satisfies the infrequency condition: determining a target cache level which defines a cache level within a cache level hierarchy of a particular cache at which to store infrequently accessed data, the target cache level being lower than a highest cache level in the cache level hierarchy, requesting and receiving the data from a memory that is not a cache of the device, and storing the data in a level of the particular cache that is at or below the target cache level in the cache level hierarchy, and providing the data to the component.
    Type: Grant
    Filed: February 10, 2017
    Date of Patent: May 28, 2019
    Assignee: Google LLC
    Inventors: Richard Yoo, Liqun Cheng, Benjamin C. Serebrin, Parthasarathy Ranganathan, Rama Krishna Govindaraju
  • Patent number: 10303480
    Abstract: Embodiments herein provide for improved store-to-load-forwarding (STLF) logic and linear aliasing effect reduction logic. In one embodiment, a load instruction to be executed is selected. Whether a first linear address associated with said load instruction matches a linear address of a store instruction of a plurality of store instructions in a queue is determined. Data associated with said store instruction for executing said load instruction is forwarded, in response to determining that the first linear address matches the linear address of the store instruction.
    Type: Grant
    Filed: October 30, 2013
    Date of Patent: May 28, 2019
    Assignee: Advanced Micro Devices
    Inventors: David A Kaplan, Daniel Hopper, John M. King, Jeff Rupley
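A hedged sketch of the forwarding check described above: the store queue is searched, youngest entry first, for a store whose linear address matches the load's, and that store's data is forwarded instead of reading the cache. Access sizes, partial overlaps, and the linear-aliasing handling are ignored; the queue layout is invented for illustration.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define STORE_QUEUE_DEPTH 8

struct store_entry {
    uint64_t linear_addr;  /* linear (virtual) address of the store */
    uint64_t data;
    bool     valid;
};

/* In-flight stores; index 0 oldest, STORE_QUEUE_DEPTH-1 youngest (assumed). */
static struct store_entry store_queue[STORE_QUEUE_DEPTH];

/* Store-to-load forwarding: if a queued store has the same linear address as
 * the load, forward its data; otherwise the load must go to the cache. */
static bool stlf_lookup(uint64_t load_linear_addr, uint64_t *data_out)
{
    for (int i = STORE_QUEUE_DEPTH - 1; i >= 0; i--) {   /* youngest first */
        if (store_queue[i].valid &&
            store_queue[i].linear_addr == load_linear_addr) {
            *data_out = store_queue[i].data;
            return true;
        }
    }
    return false;
}

int main(void)
{
    uint64_t v;
    store_queue[3] = (struct store_entry){ .linear_addr = 0x7fff0010, .data = 42, .valid = true };
    if (stlf_lookup(0x7fff0010, &v))
        printf("forwarded %llu from the store queue\n", (unsigned long long)v);
    return 0;
}
```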
  • Patent number: 10282309
    Abstract: Systems, apparatuses, and methods for implementing per-page control of physical address space distribution among memory modules are disclosed. A computing system includes a plurality of processing units coupled to a plurality of memory modules. A determination is made as to which physical address space distribution granularity to implement for physical memory pages allocated for a first data structure. The determination can be made on a per-data-structure basis (e.g., file, page, block, etc.) or on a per-application basis. A physical address space distribution granularity is encoded as a property of each physical memory page allocated for the first data structure, and the physical memory pages of the first data structure are distributed across the plurality of memory modules based on the selected physical address space distribution granularity.
    Type: Grant
    Filed: February 24, 2017
    Date of Patent: May 7, 2019
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Nuwan S. Jayasena, Hyojong Kim, Hyesoon Kim
  • Patent number: 10282302
    Abstract: Examples disclosed herein relate to programmable memory-side cache management. Some examples disclosed herein may include a programmable memory-side cache and a programmable memory-side cache controller. The programmable memory-side cache may locally store data of a system memory. The programmable memory-side cache controller may include programmable processing cores, each of the programmable processing cores configurable by cache configuration codes to manage the programmable memory-side cache for different applications.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: May 7, 2019
    Assignee: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
    Inventors: Qiong Cai, Paolo Faraboschi
  • Patent number: 10284437
    Abstract: Cloud-based virtual machines and offices are provided herein. Methods may include establishing a cloud-based virtual office using a runbook that is pre-configured with computing resource settings for VMs as well as VM dependencies and sequences that create the virtual office or virtual private cloud. Multiple runbooks can be created to cover various scenarios such as disaster recovery and sandbox testing, for example.
    Type: Grant
    Filed: November 23, 2016
    Date of Patent: May 7, 2019
    Assignee: eFOLDER, INC.
    Inventors: Todd Scallan, Shravya Yelisetti, Saurabh Modh, Vlad Ananyev, Leonid Kornilenko, William Scott Edwards
  • Patent number: 10283698
    Abstract: A device, which may include a semiconductor device, may include a contact plug, a first barrier metal covering a bottom surface of the contact plug and a lower sidewall of the contact plug, such that the first barrier metal exposes an upper sidewall of the contact plug, and an insulation pattern covering the upper sidewall of the contact plug such that the insulation pattern isolates the first barrier metal from exposure. A magnetic tunnel junction pattern may cover a top surface of the contact plug. Each element of the contact plug, the first barrier metal, and the insulation pattern may be in a contact hole of a first interlayer dielectric layer.
    Type: Grant
    Filed: August 2, 2017
    Date of Patent: May 7, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Seung Pil Ko, Kiseok Suh, Kilho Lee, Daeeun Jeong
  • Patent number: 10275360
    Abstract: Provided are a computer program product, system, and method for considering a density of tracks to destage in groups of tracks to select groups of tracks to destage. Groups of tracks in the cache are scanned to determine whether they are ready to destage. A determination is made as to whether the tracks in one of the groups are ready to destage in response to scanning the tracks in the group. A density for the group is increased in response to determining that the group is not ready to destage. The group is destaged in response to determining that the density of the group exceeds a density threshold.
    Type: Grant
    Filed: November 22, 2017
    Date of Patent: April 30, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Kevin J. Ash, Lokesh M. Gupta
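The control loop above needs only a little per-group state: each scan that finds a group not yet ready raises the group's density, and a group is destaged once it is ready or its density exceeds a threshold. The sketch below uses an invented threshold and group representation:

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_GROUPS        4
#define DENSITY_THRESHOLD 3   /* assumed threshold */

struct track_group {
    int  density;   /* raised each time the group is scanned but not destaged */
    bool ready;     /* all tracks in the group are ready to destage */
};

/* One scan pass over the groups: destage groups that are ready, or whose
 * density has grown past the threshold; otherwise increase their density. */
static void scan_and_destage(struct track_group groups[], int n)
{
    for (int i = 0; i < n; i++) {
        if (groups[i].ready || groups[i].density > DENSITY_THRESHOLD) {
            printf("destage group %d (density %d)\n", i, groups[i].density);
            groups[i].density = 0;
            groups[i].ready = false;
        } else {
            groups[i].density++;   /* not ready yet: raise its density */
        }
    }
}

int main(void)
{
    struct track_group groups[NUM_GROUPS] = {
        { .density = 0, .ready = true  },
        { .density = 4, .ready = false },   /* past the threshold: destaged anyway */
        { .density = 1, .ready = false },
        { .density = 0, .ready = false },
    };
    scan_and_destage(groups, NUM_GROUPS);
    return 0;
}
```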
  • Patent number: 10268415
    Abstract: According to one embodiment, a data storage device includes a first storage unit, a second storage unit, a first queue, a second queue, and a distributor. The second storage unit is used as a cache of the first storage unit and has a lower write transfer rate and a faster response time than the first storage unit. The first queue corresponds to the first storage unit. The second queue corresponds to the second storage unit. The distributor distributes a write command received presently from a host to one of the first and second queues in which the number of write commands registered presently is smaller.
    Type: Grant
    Filed: July 29, 2016
    Date of Patent: April 23, 2019
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Shinichi Kanno, Kazuhiro Fukutomi
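The distributor's rule above is a single comparison per incoming write: register the command in whichever queue currently holds fewer write commands. A minimal sketch; the tie-breaking choice and all names are assumptions:

```c
#include <stdio.h>

struct write_queue {
    const char *device;   /* e.g. the first (backing) or second (cache) storage unit */
    int pending;          /* write commands currently registered in this queue */
};

/* Distribute a write command to the queue with fewer pending writes; ties
 * go to the first queue here (the abstract does not specify tie-breaking). */
static struct write_queue *distribute(struct write_queue *q1, struct write_queue *q2)
{
    return (q2->pending < q1->pending) ? q2 : q1;
}

int main(void)
{
    struct write_queue first  = { "first storage unit",          5 };
    struct write_queue second = { "second storage unit (cache)", 2 };

    for (int i = 0; i < 4; i++) {
        struct write_queue *target = distribute(&first, &second);
        target->pending++;                       /* register the write command */
        printf("write %d -> %s\n", i, target->device);
    }
    return 0;
}
```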
  • Patent number: 10268609
    Abstract: A resource management and task allocation controller for installation in a multicore processor having a plurality of interconnected processor elements providing resources for processing executable transactions, at least one of said elements being a master processing unit, the controller being adapted to communicate, when installed, with each of the processor elements including the master processing unit, and comprising control logic for allocating executable transactions within the multicore processor to particular processor elements in accordance with pre-defined allocation parameters.
    Type: Grant
    Filed: August 12, 2013
    Date of Patent: April 23, 2019
    Assignees: Synopsys, Inc., Fujitsu Semiconductor Limited
    Inventor: Mark David Lippett
  • Patent number: 10261906
    Abstract: A data accessing method includes: determining whether a preset cache area has cached data that a read target address points to when receiving a read instruction that includes the read target address; and finding a cache address corresponding to the read target address according to a first mapping relationship if the preset cache area has cached the data that the read target address points to, and reading data that the cache address points to from the preset cache area, where the first mapping relationship is used to record a correspondence between the target address and the cache address; or reading, from non-volatile storage space, the data that the read target address points to if the preset cache area has not cached the data that the read target address points to. By means of the method, data read errors caused by write interference can be reduced.
    Type: Grant
    Filed: June 22, 2017
    Date of Patent: April 16, 2019
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Jianhua Zhou, Yan Li, Po Zhang, Fei Wang
  • Patent number: 10255190
    Abstract: Systems, apparatuses, and methods for implementing a hybrid cache. A processor may include a hybrid L2/L3 cache which allows the processor to dynamically adjust a size of the L2 cache and a size of the L3 cache. In some embodiments, the processor may be a multi-core processor and there may be a single cache partitioned into a logical L2 cache and a logical L3 cache for use by the cores. In one embodiment, the processor may track the cache hit rates of the logical L2 and L3 caches and adjust the sizes of the logical L2 and L3 cache based on the cache hit rates. In another embodiment, the processor may adjust the sizes of the logical L2 and L3 caches based on which application is currently being executed by the processor.
    Type: Grant
    Filed: December 17, 2015
    Date of Patent: April 9, 2019
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Gabriel H. Loh
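A hedged sketch of the resizing idea: track hit rates for the logical L2 and logical L3 partitions of a single physical cache and shift capacity, one way at a time, toward the partition with the lower hit rate. The way-based partitioning, the rebalancing rule, and every constant below are assumptions; the patent only says the sizes are adjusted based on hit rates or the running application.

```c
#include <stdio.h>

#define TOTAL_WAYS 16   /* one physical cache partitioned by ways (assumed) */

struct hybrid_cache {
    int    l2_ways, l3_ways;        /* current logical partition sizes */
    double l2_hit_rate, l3_hit_rate;
};

/* Move one way toward the logical cache with the lower hit rate, on the
 * assumption that it is the one starved for capacity. */
static void rebalance(struct hybrid_cache *c)
{
    if (c->l2_hit_rate < c->l3_hit_rate && c->l3_ways > 1) {
        c->l3_ways--; c->l2_ways++;
    } else if (c->l3_hit_rate < c->l2_hit_rate && c->l2_ways > 1) {
        c->l2_ways--; c->l3_ways++;
    }
}

int main(void)
{
    struct hybrid_cache c = { .l2_ways = 8, .l3_ways = 8,
                              .l2_hit_rate = 0.62, .l3_hit_rate = 0.91 };
    rebalance(&c);
    printf("L2 ways: %d, L3 ways: %d (total %d)\n", c.l2_ways, c.l3_ways, TOTAL_WAYS);
    return 0;
}
```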
  • Patent number: 10257264
    Abstract: A system for reducing data center latency including a webserver having a processor, a memory system controller, external memory, cache memory including a plurality of cache blocks, where each cache block includes provider parameters and at least one user identifier (ID), and program memory including code segments executable by the processor. In an embodiment, the webserver receives a request sent by a requestor having requestor parameters including at least a requestor ID and a user ID; identifies a predictive cache block set; formulates a reply based, at least in part, upon a probability that a number of replies associated with a user ID of the predictive cache block set will exceed a frequency floor number within a predetermined period of time; and sends the reply to the requestor.
    Type: Grant
    Filed: February 21, 2017
    Date of Patent: April 9, 2019
    Assignee: YUME, INC.
    Inventors: Ayyappan Sankaran, Priya Wasnikar, Ayusman Sarangi
  • Patent number: 10248566
    Abstract: Systems and methods for caching data from a plurality of virtual machines may comprise detecting, using a computer processor executing cache management software, initiation of migration of a cached virtual machine from a first virtualization platform to a second virtualization platform, disabling caching for the virtual machine on the first virtualization platform, detecting completion of the migration of the virtual machine to the second virtualization platform, and enabling caching for the virtual machine on the second virtualization platform.
    Type: Grant
    Filed: May 27, 2015
    Date of Patent: April 2, 2019
    Assignee: Western Digital Technologies, Inc.
    Inventors: Anurag Agarwal, Anand Mitra, Prasad Joshi, Kanishk Rastogi
  • Patent number: 10250709
    Abstract: A data processing apparatus has multiple caches and a controller for controlling the caches. The controller and caches communicate over a first network and a second network. The first network is used for unicast communication from the controller to a specific one of the caches. The second network is used for communication of a multicast communication from the controller to two or more of the caches.
    Type: Grant
    Filed: April 14, 2016
    Date of Patent: April 2, 2019
    Assignee: Arm Limited
    Inventors: Jesus Javier de los Reyes Darias, Hakan Persson, Roko Grubisic, Vinod Pisharath Hari Pai
  • Patent number: 10248515
    Abstract: An apparatus includes an interface and storage circuitry. The interface is configured to communicate with a memory that includes multiple memory cells arranged in multiple planes that each includes one or more blocks of the memory cells. The storage circuitry is configured to apply a multi-plane storage operation to multiple blocks simultaneously across the respective planes. In response to detecting that the multi-plane storage operation has failed, the storage circuitry is configured to apply a single-plane storage operation to one or more of the blocks that were accessed in the multi-plane storage operation, including a given block, and to identify the given block as a bad block if the single-plane operation applied to the given block fails. The storage circuitry is further configured to store data in the blocks that were accessed in the multi-plane operation but were not identified as bad blocks.
    Type: Grant
    Filed: January 19, 2017
    Date of Patent: April 2, 2019
    Assignee: Apple Inc.
    Inventors: Charan Srinivasan, Eyal Gurgi
  • Patent number: 10248424
    Abstract: One embodiment provides an apparatus. The apparatus includes collector circuitry to capture processor trace (PT) data from a PT driver. The PT data includes a first target instruction pointer (TIP) packet including a first runtime target address of an indirect branch instruction of an executing target application. The apparatus further includes decoder circuitry to extract the first TIP packet from the PT data and to decode the first TIP packet to yield the first runtime target address. The apparatus further includes control flow validator circuitry to determine whether a control flow transfer to the first runtime target address corresponds to a control flow violation based, at least in part, on a control flow graph (CFG). The CFG includes a plurality of nodes, each node including a start address of a first basic block, an end address of the first basic block, and a next possible address of a second basic block or a not found tag.
    Type: Grant
    Filed: October 1, 2016
    Date of Patent: April 2, 2019
    Assignee: Intel Corporation
    Inventors: Salmin Sultana, Stanislav Bratanov, David M. Durham, Beeman C. Strong
  • Patent number: 10235101
    Abstract: Example apparatus and methods provide a log structured block device for a hard disk drive (HDD). Data that is to be stored on an HDD is serialized and written as a series of data blocks using a sequential write. Information about where individual data blocks were supposed to be written (e.g., actual address, neighboring data blocks), where data blocks were actually written, and how often data blocks are accessed is maintained. During garbage collection, data blocks that are being accessed with similar frequencies may be relocated together, with the most frequently accessed (e.g., hottest) data blocks migrating to the outer cylinders of the disk and the least frequently accessed (e.g., coldest) data blocks migrating to the inner cylinders. Blocks stored in the same temperature regions that were intended to be located together when written may be repositioned to facilitate sequential reads.
    Type: Grant
    Filed: November 6, 2017
    Date of Patent: March 19, 2019
    Assignee: Quantum Corporation
    Inventor: Don Doerner
  • Patent number: 10230542
    Abstract: In various embodiments, the present disclosure provides a system comprising a first plurality of processing cores, ones of the first plurality of processing cores coupled to a respective core interface module among a first plurality of core interface modules, the first plurality of core interface modules configured to be coupled to form a first ring network of processing cores; a second plurality of processing cores, ones of the second plurality of processing cores coupled to a respective core interface module among a second plurality of core interface modules, the second plurality of core interface modules configured to be coupled to form a second ring network of processing cores; a first global interface module to form an interface between the first ring network and a third ring network; and a second global interface module to form an interface between the second ring network and the third ring network.
    Type: Grant
    Filed: March 8, 2017
    Date of Patent: March 12, 2019
    Assignee: Marvell World Trade Ltd.
    Inventors: Eitan Joshua, Shaul Chapman, Erez Amit, Noam Mizrahi, Moshe Raz, Husam Khshaiboun, Amit Shmilovich, Sujat Jamil, Frank O'Bleness
  • Patent number: 10229060
    Abstract: Embodiments provide for a processor comprising a cache, a prefetcher to select information according to a prefetcher algorithm and to send the selected information to the cache, and a prefetch tuning buffer including tuning state for the set of candidate prefetcher algorithms, wherein the prefetcher is to adjust operation of the prefetcher algorithm based on the tuning state.
    Type: Grant
    Filed: December 5, 2016
    Date of Patent: March 12, 2019
    Assignee: INTEL CORPORATION
    Inventors: Christopher B. Wilkerson, Ren Wang, Namakkal N. Venkatesan, Patrick Lu
  • Patent number: 10223289
    Abstract: In an aspect, a cache memory device receives a request to read an instruction or data associated with a memory device. The request includes a first realm identifier and a realm indicator bit, where the first realm identifier enables identification of a realm that includes one or more selected regions in the memory device. The cache memory device determines whether the first realm identifier matches a second realm identifier in a cache tag when the instruction or data is stored in the cache memory device, where the instruction or data stored in the cache memory device has been decrypted based on an ephemeral encryption key associated with the second realm identifier when the first realm identifier indicates the realm and when the realm indicator bit is enabled. The cache memory device transmits the instruction or data when the first realm identifier matches the second realm identifier.
    Type: Grant
    Filed: March 15, 2016
    Date of Patent: March 5, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Roberto Avanzi, David Hartley, Rosario Cammarota
  • Patent number: 10223282
    Abstract: Disclosed aspects relate to memory affinity management in a shared pool of configurable computing resources that utilizes non-uniform memory access (NUMA). An access relationship is monitored between a set of hardware memory components and a set of software assets. A set of memory affinity data is stored. The set of memory affinity data indicates the access relationship between the set of software assets and the set of hardware memory components. Using the set of memory affinity data, a NUMA utilization configuration with respect to the set of software assets is determined. Based on the NUMA utilization configuration, a set of accesses pertaining to the set of software assets and the set of hardware memory components is executed.
    Type: Grant
    Filed: May 23, 2017
    Date of Patent: March 5, 2019
    Assignee: International Business Machines Corporation
    Inventors: Mehulkumar Patel, Vaidyanathan Srinivasan, Venkatesh Sainath
  • Patent number: 10223305
    Abstract: A computing system includes a processor and a memory unit that stores program instructions. The system purges an entry from an address translation cache in response to the processor executing the program instructions to perform issuing, via an operating system running on the computing system, a command indicating a request to perform an I/O transaction requiring a translation entry. A host bridge monitors a total data length of the address translation entry to be transferred during the I/O transaction. An address translation entry is selected from an address translation table, loaded into the address translation cache, and data corresponding to the I/O transaction is transferred using the selected address translation entry. The host bridge automatically purges the selected address translation entry from the address translation cache in response to determining the transferred amount of data matches the total data length for the address translation entry.
    Type: Grant
    Filed: June 27, 2016
    Date of Patent: March 5, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Matthias Klein, Eric N. Lais, Darwin W. Norton, Jr.
  • Patent number: 10223393
    Abstract: A computing resource service provider may operate a build service configured to store data objects on behalf of a customer of the computing resource service provider. The build service may receive a stream of data objects, including data objects that reference one or more other data objects. A Bloom filter may be used to determine whether one or more referenced data objects have been previously processed by the build service. This may enable the build service to reorder processing of the data objects based at least in part on whether the one or more referenced data objects have previously been processed by the build service.
    Type: Grant
    Filed: June 25, 2015
    Date of Patent: March 5, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Matthew Roy Noble, Wade Alvin Matveyenko, Yilun Cui, Clare Emma Liguori
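A Bloom filter answers "possibly seen before" or "definitely not seen", which is what the reordering decision above relies on: if every bit for a referenced data object is set, it was probably already processed; if any bit is clear, it certainly was not. The sketch below uses two FNV-1a-style hashes and an invented bit-array size; it illustrates the data structure, not the build service's actual implementation.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define FILTER_BITS 1024

static uint8_t filter[FILTER_BITS / 8];

/* FNV-1a with two different offset bases gives two reasonably independent hashes. */
static uint32_t hash(const char *key, uint32_t basis)
{
    uint32_t h = basis;
    for (const char *p = key; *p; p++) {
        h ^= (uint8_t)*p;
        h *= 16777619u;
    }
    return h % FILTER_BITS;
}

/* Record that an object (identified by a string id) has been processed. */
static void bloom_add(const char *object_id)
{
    uint32_t a = hash(object_id, 2166136261u), b = hash(object_id, 0x9747b28cu);
    filter[a / 8] |= (uint8_t)(1u << (a % 8));
    filter[b / 8] |= (uint8_t)(1u << (b % 8));
}

/* False means "definitely not processed yet"; true means "probably processed". */
static bool bloom_maybe_contains(const char *object_id)
{
    uint32_t a = hash(object_id, 2166136261u), b = hash(object_id, 0x9747b28cu);
    return (filter[a / 8] & (1u << (a % 8))) && (filter[b / 8] & (1u << (b % 8)));
}

int main(void)
{
    bloom_add("object-123");
    printf("object-123 processed? %d\n", bloom_maybe_contains("object-123"));  /* 1 */
    printf("object-999 processed? %d\n", bloom_maybe_contains("object-999"));  /* almost surely 0 */
    return 0;
}
```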
  • Patent number: 10216533
    Abstract: A method includes using a network interface controller to monitor a transmit ring, wherein the transmit ring comprises a circular ring data structure that stores descriptors, wherein a descriptor describes data and comprises a guest bus address that provides a virtual memory location of the data. The method also includes using the network interface controller to determine that a descriptor has been written to the transmit ring. The method further includes using the network interface controller to attempt to retrieve a translation for the guest bus address. The method includes using the network interface controller to read the descriptor from the transmit ring.
    Type: Grant
    Filed: October 1, 2015
    Date of Patent: February 26, 2019
    Assignee: Altera Corporation
    Inventor: Kenneth Vincent Bridgers
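The transmit ring in this abstract is a classic circular descriptor ring shared by a driver and the controller: the driver writes descriptors (each carrying a guest bus address for the data) at a producer index, and the controller notices and reads them at a consumer index. The single-threaded sketch below uses invented field names and omits doorbells, memory barriers, and the address-translation step the patent describes:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 8   /* power of two so wrap-around is a simple modulo (assumed) */

struct tx_descriptor {
    uint64_t guest_bus_addr;  /* virtual/guest memory location of the packet data */
    uint32_t length;
};

struct tx_ring {
    struct tx_descriptor desc[RING_SIZE];
    unsigned head;   /* producer index: where the driver writes next */
    unsigned tail;   /* consumer index: where the controller reads next */
};

/* Driver side: post a descriptor if the ring is not full. */
static bool ring_post(struct tx_ring *r, uint64_t addr, uint32_t len)
{
    if (r->head - r->tail == RING_SIZE)
        return false;                        /* ring full */
    r->desc[r->head % RING_SIZE] = (struct tx_descriptor){ addr, len };
    r->head++;                               /* a real driver would then ring a doorbell */
    return true;
}

/* Controller side: detect that a descriptor has been written and read it. */
static bool ring_consume(struct tx_ring *r, struct tx_descriptor *out)
{
    if (r->tail == r->head)
        return false;                        /* nothing new on the ring */
    *out = r->desc[r->tail % RING_SIZE];     /* controller would now translate guest_bus_addr */
    r->tail++;
    return true;
}

int main(void)
{
    struct tx_ring ring = { .head = 0, .tail = 0 };
    struct tx_descriptor d;
    ring_post(&ring, 0x7f00deadbeefull, 1500);
    if (ring_consume(&ring, &d))
        printf("descriptor: addr=0x%llx len=%u\n",
               (unsigned long long)d.guest_bus_addr, d.length);
    return 0;
}
```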
  • Patent number: 10216635
    Abstract: Techniques relate to handling outstanding cache miss prefetches. A processor pipeline recognizes that a prefetch canceling instruction is being executed. In response to recognizing that the prefetch canceling instruction is being executed, all outstanding prefetches are evaluated according to a criterion as set forth by the prefetch canceling instruction in order to select qualified prefetches. In response to evaluating, a cache subsystem is communicated with to cause canceling of the qualified prefetches that fit the criterion. In response to successful canceling of the qualified prefetches, a local cache is prevented from being updated from the qualified prefetches.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: February 26, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael Karl Gschwind, Maged M. Michael, Valentina Salapura, Eric M. Schwarz, Chung-Lung K. Shum
  • Patent number: 10216581
    Abstract: Machines, systems and methods for recovering data objects in a distributed data storage system, the method comprising storing one or more replicas of a first data object on one or more clusters in one or more data centers connected over a data communications network; recording health information about said one or more replicas, wherein the health information comprises data about availability of a replica to participate in a restoration process; calculating a query-priority for the first data object; querying, based on the calculated query-priority, the health information for the one or more replicas to determine which of the one or more replicas is available for restoration of the object data; calculating a restoration-priority for the first data object based on the health information for the one or more replicas; and restoring the first data object from the one or more of the available replicas, based on the calculated restoration-priority.
    Type: Grant
    Filed: December 9, 2015
    Date of Patent: February 26, 2019
    Assignee: International Business Machines Corporation
    Inventors: Michael E. Factor, David Hadas, Elliot K. Kolodner
  • Patent number: 10216632
    Abstract: A memory system implements a plurality of cache eviction policies and optionally a plurality of virtual address modification policies. A cache storage unit of the memory system has a plurality of cache storage sub-units. The cache storage unit is optionally managed by a cache management unit in accordance with the cache eviction policies. The cache storage sub-units are allocated for retention of information associated with respective memory addresses and are associated with the cache eviction policies in accordance with the respective memory addresses. For example, in response to a reference to an address that misses in a cache, the address is used to access a page table entry having an indicator specifying an eviction policy to use when selecting a cache line from the cache to evict in association with allocating a cache line of the cache to retain data obtained via the address.
    Type: Grant
    Filed: December 30, 2013
    Date of Patent: February 26, 2019
    Inventor: Michael Henry Kass
  • Patent number: 10210590
    Abstract: In one embodiment, a computing device receives a request for particular content associated with an application. The device may determine, based on a first recycling policy associated with a first recycler, that the first recycler associated with the application includes a display object that is capable of being used for containing the particular content. The device may encapsulate the display object with the particular content in a wrapper object and return the wrapper object encapsulating the display object in response to the request. The device may receive an indication that the display object is no longer needed, and extract the display object from the wrapper object. The display object may be stored in the first recycler. The wrapper object without the display object may be disposed in accordance with a second recycling policy associated with a second recycler associated with an operating system of the computing device.
    Type: Grant
    Filed: July 20, 2017
    Date of Patent: February 19, 2019
    Assignee: Facebook, Inc.
    Inventors: Qixing Du, Ashwin Bhat, Jonathan M. Kaldor, I Chien Peng, Joshua Li, Kang Zhang
  • Patent number: 10210047
    Abstract: Machines, systems and methods for recovering data objects in a distributed data storage system, the method comprising storing one or more replicas of a first data object on one or more clusters in one or more data centers connected over a data communications network; recording health information about said one or more replicas, wherein the health information comprises data about availability of a replica to participate in a restoration process; calculating a query-priority for the first data object; querying, based on the calculated query-priority, the health information for the one or more replicas to determine which of the one or more replicas is available for restoration of the object data; calculating a restoration-priority for the first data object based on the health information for the one or more replicas; and restoring the first data object from the one or more of the available replicas, based on the calculated restoration-priority.
    Type: Grant
    Filed: December 9, 2015
    Date of Patent: February 19, 2019
    Assignee: International Business Machines Corporation
    Inventors: Michael E. Factor, David Hadas, Elliot K. Kolodner
  • Patent number: 10210093
    Abstract: A method of operating a memory device that includes at least one sub-memory supporting a cache mode and a memory mode, the method including receiving a mode change signal instructing the memory device to change an operation mode of the at least one sub-memory from the cache mode to the memory mode; and changing the operation mode of the at least one sub-memory from the cache mode to the memory mode without flushing the at least one sub-memory, according to the mode change signal.
    Type: Grant
    Filed: September 5, 2014
    Date of Patent: February 19, 2019
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Nak Hee Seong