Patent Applications Published on January 17, 2019
  • Publication number: 20190018761
    Abstract: A method includes identifying a set of tests for a source code, analyzing the set of tests to identify overlapping blocks of the source code that are to be tested by each of the set of tests, merging a subset of the tests that include the overlapping blocks of the source code to create a merged test, and causing the merged test to be executed to test the source code. In an implementation, code coverage results are used when analyzing the set of tests to identify overlapping blocks of the source code.
    Type: Application
    Filed: July 17, 2017
    Publication date: January 17, 2019
    Inventors: Oded Ramraz, Boaz Shuster
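    As a rough illustration of the overlap-based merging described in this abstract (not the patented implementation), each test can be modeled by the set of source blocks its coverage report shows it exercising, and any tests whose block sets intersect can be greedily merged into one group:

    ```python
    def merge_overlapping_tests(coverage):
        """coverage: dict mapping test name -> set of covered block ids.
        Returns a list of merged test groups (sorted lists of test names)."""
        groups = []  # each group: (set of test names, union of covered blocks)
        for test, blocks in coverage.items():
            merged = {test}
            covered = set(blocks)
            remaining = []
            for names, group_blocks in groups:
                if covered & group_blocks:  # overlapping blocks -> merge tests
                    merged |= names
                    covered |= group_blocks
                else:
                    remaining.append((names, group_blocks))
            remaining.append((merged, covered))
            groups = remaining
        return [sorted(names) for names, _ in groups]
    ```

    For example, tests covering blocks {1, 2} and {2, 3} collapse into one merged group, while a test covering only block 7 stays separate.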
  • Publication number: 20190018762
    Abstract: A test program is run repeatedly (either as a loop that is programmed into the code of the test program itself, or by repeatedly running the test program manually in response to user input instructing repeated run(s) of the test program). At least some run(s) of the test program use a cipher key that was derived and saved by the test program during a previous run of the test program (rather than re-deriving the cipher key based on information provided by the operating system). In this way, if the corresponding cipher key, as stored in the system space of the operating system, has become corrupted during previous run(s) of the test program, then the incompatibility between the corrupted cipher key in the system space and the previously saved cipher key that was previously derived by the test program will be more easily detected.
    Type: Application
    Filed: July 14, 2017
    Publication date: January 17, 2019
    Inventor: Louis P. Gomes
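    A minimal sketch of the save-and-compare idea in this abstract, with an invented SHA-256 derivation and an in-memory dict standing in for whatever key derivation and saved state the real test program uses:

    ```python
    import hashlib

    SAVED_KEY_FILE = {}  # stands in for the test program's saved state

    def derive_key(system_material: bytes) -> bytes:
        # Hypothetical derivation; the patent does not specify the algorithm.
        return hashlib.sha256(system_material).digest()

    def run_test(system_material: bytes) -> bool:
        """Returns True if the system's key material matches the saved key."""
        if "key" not in SAVED_KEY_FILE:          # first run: derive and save
            SAVED_KEY_FILE["key"] = derive_key(system_material)
            return True
        # Later runs compare against the previously saved key instead of
        # trusting a fresh derivation from possibly corrupted system state.
        return derive_key(system_material) == SAVED_KEY_FILE["key"]
    ```

    If the operating system's copy of the key material is corrupted between runs, the mismatch against the saved key surfaces immediately.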
  • Publication number: 20190018763
    Abstract: A test program is run repeatedly (either as a loop that is programmed into the code of the test program itself, or by repeatedly running the test program manually in response to user input instructing repeated run(s) of the test program). At least some run(s) of the test program use a cipher key that was derived and saved by the test program during a previous run of the test program (rather than re-deriving the cipher key based on information provided by the operating system). In this way, if the corresponding cipher key, as stored in the system space of the operating system, has become corrupted during previous run(s) of the test program, then the incompatibility between the corrupted cipher key in the system space and the previously saved cipher key that was previously derived by the test program will be more easily detected.
    Type: Application
    Filed: November 8, 2017
    Publication date: January 17, 2019
    Inventor: Louis P. Gomes
  • Publication number: 20190018764
    Abstract: A test program is run repeatedly (either as a loop that is programmed into the code of the test program itself, or by repeatedly running the test program manually in response to user input instructing repeated run(s) of the test program). At least some run(s) of the test program use a cipher key that was derived and saved by the test program during a previous run of the test program (rather than re-deriving the cipher key based on information provided by the operating system). In this way, if the corresponding cipher key, as stored in the system space of the operating system, has become corrupted during previous run(s) of the test program, then the incompatibility between the corrupted cipher key in the system space and the previously saved cipher key that was previously derived by the test program will be more easily detected.
    Type: Application
    Filed: February 13, 2018
    Publication date: January 17, 2019
    Inventor: Louis P. Gomes
  • Publication number: 20190018765
    Abstract: Using, as a target pattern, each of the combinations of a plurality of input conditions, a plurality of output conditions, and a plurality of arrival points each at which attainment of a process is confirmed by a test method based on a software structure, a test case generation apparatus determines whether or not generation of a test case is possible. The test case is formed of values of input and output signals and enables simultaneous checking of an arrival point and an input-output condition being a pair of an input condition and an output condition in the target pattern. With this arrangement, the test case generation apparatus identifies a set of the test cases that enable checking of each of the plurality of input conditions, each of the plurality of output conditions, and each of the plurality of arrival points.
    Type: Application
    Filed: February 24, 2016
    Publication date: January 17, 2019
    Applicant: MITSUBISHI ELECTRIC CORPORATION
    Inventor: Makoto ISODA
  • Publication number: 20190018766
    Abstract: The present disclosure may include a method that comprises: partitioning data on an on-chip and/or an off-chip storage medium into different data blocks according to a pre-determined data partitioning principle, wherein data with a reuse distance less than a pre-determined distance threshold value is partitioned into the same data block; and a data indexing step for successively loading different data blocks to at least one on-chip processing unit according to a pre-determined ordinal relation of a replacement policy, wherein repeated data in a loaded data block is subjected to on-chip repetitive addressing. Data with a reuse distance less than a pre-determined distance threshold value is partitioned into the same data block, and the data partitioned into the same data block can be loaded on a chip once for storage, and is then used as many times as possible, so that the access is more efficient.
    Type: Application
    Filed: August 9, 2016
    Publication date: January 17, 2019
    Inventors: Qi GUO, Tianshi CHEN, Yunji CHEN
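    The reuse-distance partitioning above can be sketched as follows. This assumes one common definition of reuse distance (the number of distinct other items accessed between two uses of the same item); the patent's exact metric and block layout may differ:

    ```python
    def reuse_distances(trace):
        """Minimum reuse distance per item: distinct other items accessed
        between consecutive uses of that item."""
        last_seen, dist = {}, {}
        for i, item in enumerate(trace):
            if item in last_seen:
                between = len(set(trace[last_seen[item] + 1 : i]))
                dist[item] = min(dist.get(item, between), between)
            last_seen[item] = i
        return dist

    def partition(trace, threshold):
        """Items reused within the threshold go in the same ('hot') block,
        so the block is loaded on-chip once and reused many times."""
        dist = reuse_distances(trace)
        hot = {x for x, d in dist.items() if d < threshold}
        cold = set(trace) - hot
        return hot, cold
    ```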
  • Publication number: 20190018767
    Abstract: A data storage device includes a first nonvolatile memory device including first LSB, CSB and MSB pages; a second nonvolatile memory device including second LSB, CSB and MSB pages; a data cache memory configured to store data write-requested from a host device; and a control unit suitable for configuring the first and second LSB pages as an LSB super page, configuring the first and second CSB pages as a CSB super page, and configuring the first and second MSB pages as an MSB super page, wherein the control unit is configured to one-shot program the data stored in the data cache memory in the first LSB, CSB and MSB pages when a data stability mode is determined, and to one-shot program data stored in the data cache memory in the LSB, CSB and MSB super pages in a performance-improving mode.
    Type: Application
    Filed: January 31, 2018
    Publication date: January 17, 2019
    Inventors: Duck Hoi KOO, Yong JIN
  • Publication number: 20190018768
    Abstract: A method for operating a data storage device including a non-volatile memory device including a first region and a second region includes: storing data from a data cache memory in memory blocks in the first region; determining a first garbage collection cost with respect to a first target memory block having the fewest valid pages among the memory blocks in the first region in which the data are kept; determining a second garbage collection cost with respect to a second target memory block having the fewest valid pages among the memory blocks in the first region from which the data are cleared; and performing a garbage collection operation to copy valid data of a garbage collection target memory block into memory blocks in the second region based on a comparison result of the first garbage collection cost and the second garbage collection cost.
    Type: Application
    Filed: December 4, 2017
    Publication date: January 17, 2019
    Inventors: Yong Tae Kim, Duck Hoi Koo, Soong Sun Shin, Cheon Ok Jeong
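    A hypothetical rendering of the cost comparison above: from each candidate group, take the block with the fewest valid pages, then pick the cheaper of the two as the garbage-collection victim. Using the valid-page count directly as the cost is an assumption (copying valid pages typically dominates GC cost); the patent's cost model may include other terms:

    ```python
    def pick_gc_victim(kept_blocks, cleared_blocks):
        """Each argument: dict block_id -> number of valid pages.
        Returns (block_id, cost) of the cheaper candidate."""
        first = min(kept_blocks.items(), key=lambda kv: kv[1])
        second = min(cleared_blocks.items(), key=lambda kv: kv[1])
        # Fewer valid pages to copy into the second region = cheaper GC.
        return first if first[1] <= second[1] else second
    ```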
  • Publication number: 20190018769
    Abstract: A computer implemented method to operate different processor cache levels of a cache hierarchy for a processor with pipelined execution is suggested. The cache hierarchy comprises at least a lower hierarchy level entity and a higher hierarchy level entity. The method comprises: sending a fetch request to the cache hierarchy; detecting a miss event from a lower hierarchy level entity; sending a fetch request to a higher hierarchy level entity; and scheduling at least one write pass.
    Type: Application
    Filed: July 14, 2017
    Publication date: January 17, 2019
    Inventors: Simon H. Friedmann, Christian Jacobi, Markus Kaltenbach, Ulrich Mayer, Anthony Saporito
  • Publication number: 20190018770
    Abstract: A computer implemented method to operate different processor cache levels of a cache hierarchy for a processor with pipelined execution is suggested. The cache hierarchy comprises at least a lower hierarchy level entity and a higher hierarchy level entity. The method comprises: sending a fetch request to the cache hierarchy; detecting a miss event from a lower hierarchy level entity; sending a fetch request to a higher hierarchy level entity; and scheduling at least one write pass.
    Type: Application
    Filed: December 28, 2017
    Publication date: January 17, 2019
    Inventors: Simon H. Friedmann, Christian Jacobi, Markus Kaltenbach, Ulrich Mayer, Anthony Saporito
  • Publication number: 20190018771
    Abstract: A computer implemented method to operate different processor cache levels of a cache hierarchy for a processor with pipelined execution is suggested. The cache hierarchy comprises at least a lower hierarchy level entity and a higher hierarchy level entity. The method comprises: sending a fetch request to the cache hierarchy; detecting a miss event from a lower hierarchy level entity; sending a fetch request to a higher hierarchy level entity; and scheduling at least one write pass.
    Type: Application
    Filed: February 14, 2018
    Publication date: January 17, 2019
    Inventors: Simon H. Friedmann, Christian Jacobi, Markus Kaltenbach, Ulrich Mayer, Anthony Saporito
  • Publication number: 20190018772
    Abstract: A first request is received to access a first set of data in a first cache. A likelihood that a second request to a second cache for the first set of data will be canceled is determined. Access to the first set of data is completed based on the determining the likelihood that the second request to the second cache for the first set of data will be canceled.
    Type: Application
    Filed: July 13, 2017
    Publication date: January 17, 2019
    Inventors: Willm Hinrichs, Markus Kaltenbach, Eyal Naor, Martin Recktenwald
  • Publication number: 20190018773
    Abstract: A first request is received to access a first set of data in a first cache. A likelihood that a second request to a second cache for the first set of data will be canceled is determined. Access to the first set of data is completed based on the determining the likelihood that the second request to the second cache for the first set of data will be canceled.
    Type: Application
    Filed: November 15, 2017
    Publication date: January 17, 2019
    Inventors: Willm Hinrichs, Markus Kaltenbach, Eyal Naor, Martin Recktenwald
  • Publication number: 20190018774
    Abstract: A method for coordinating cache and memory reservation in a computerized system includes identifying at least one running application, recognizing the at least one application as a latency-critical application, monitoring information associated with a current cache access rate and a required memory bandwidth of the at least one application, allocating a cache partition whose size corresponds to the cache access rate and the required memory bandwidth of the at least one application, defining a threshold value including a number of cache misses per time unit, determining a reduction of cache misses per time unit, retaining the cache partition in response to the reduction of cache misses per time unit being above the threshold value, assigning a memory request scheduling priority including a medium priority level, and assigning a memory channel to the at least one application to avoid memory channel contention.
    Type: Application
    Filed: July 12, 2017
    Publication date: January 17, 2019
    Inventors: Robert Birke, Yiyu Chen, Navaneeth Rameshan, Martin Schmatz
  • Publication number: 20190018775
    Abstract: Embodiments include methods, systems and computer program products method for maintaining ordered memory access with parallel access data streams associated with a distributed shared memory system. The computer-implemented method includes performing, by a first cache, a key check, the key check being associated with a first ordered data store. A first memory node signals that the first memory node is ready to begin pipelining of a second ordered data store into the first memory node to an input/output (I/O) controller. A second cache returns a key response to the first cache indicating that the pipelining of the second ordered data store can proceed. The first memory node sends a ready signal indicating that the first memory node is ready to continue pipelining of the second ordered data store into the first memory node to the I/O controller, wherein the ready signal is triggered by receipt of the key response.
    Type: Application
    Filed: July 17, 2017
    Publication date: January 17, 2019
    Inventors: Ekaterina M. Ambroladze, Timothy C. Bronson, Matthias Klein, Pak-kin Mak, Vesselina K. Papazova, Robert J. Sonnelitter, III, Lahiruka S. Winter
  • Publication number: 20190018776
    Abstract: A first information processing apparatus includes a buffer having entries to store first request data received and transmitted to a second information processing apparatus, a memory, and a processor coupled to the memory and configured to: transmit, to the second information processing apparatus, the first request data and second request data to be transmitted to the second or a third information processing apparatus; when a state where the number of buffer entries storing data is equal to or larger than a first threshold continues for longer than a first time, and a state where transmissions of the first and second request data to the second information processing apparatus are suppressed continues for longer than a second time, change the number of usable entries in the buffer to a second threshold larger than the first threshold; and when the number of usable entries is the second threshold, suppress transmission of the second request data.
    Type: Application
    Filed: June 14, 2018
    Publication date: January 17, 2019
    Applicant: FUJITSU LIMITED
    Inventor: KENTA SATO
  • Publication number: 20190018777
    Abstract: An apparatus has an address translation cache with entries for storing address translation data. Partition configuration storage circuitry stores multiple sets of programmable configuration data each corresponding to a partition identifier identifying a corresponding software execution environment or master device and specifying a corresponding subset of entries of the cache. In response to a translation lookup request specifying a target address and a requesting partition identifier, control circuitry triggers a lookup operation to identify whether the target address hits or misses in the corresponding subset of entries specified by the set of partition configuration data for the requesting partition identifier.
    Type: Application
    Filed: July 11, 2017
    Publication date: January 17, 2019
    Inventor: Andrew Brookfield SWAINE
  • Publication number: 20190018778
    Abstract: A hierarchical NAND memory device includes: memory units each including memory groups; dynamic cache register (DCR) units each including DCR groups; switching circuit units each including switching circuits that are respectively coupled to the memory groups of a respective memory unit and that are respectively coupled to the DCR groups of a respective DCR unit; data register units each including data registers that are respectively coupled to the switching circuits of a respective switching circuit unit; a data line (DL) unit including DLs; and DL switch units each including switches that are respectively coupled between the data registers of a respective data register unit and the DLs of the DL unit.
    Type: Application
    Filed: August 31, 2018
    Publication date: January 17, 2019
    Inventor: Peter Wung LEE
  • Publication number: 20190018779
    Abstract: Improving access to a cache by a processing unit. One or more previous requests to access data from a cache are stored. A current request to access data from the cache is retrieved. A determination is made whether the current request is seeking the same data from the cache as at least one of the one or more previous requests. A further determination is made whether the at least one of the one or more previous requests seeking the same data was successful in arbitrating access to a processing unit when seeking access. A next cache write access is suppressed if the at least one of the one or more previous requests seeking the same data was successful in arbitrating access to the processing unit.
    Type: Application
    Filed: July 14, 2017
    Publication date: January 17, 2019
    Inventors: Simon H. Friedmann, Girish G. Kurup, Markus Kaltenbach, Ulrich Mayer, Martin Recktenwald
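    The suppression rule above can be sketched as a small bookkeeping structure: remember earlier requests and whether they won arbitration, and skip the next cache write when a winning request for the same address is already on record. The class and method names are invented for illustration:

    ```python
    class WriteSuppressor:
        def __init__(self):
            self.history = []  # (address, won_arbitration) per past request

        def record(self, address, won_arbitration):
            self.history.append((address, won_arbitration))

        def should_write(self, address):
            """Suppress the cache write if a previous request for the same
            address already succeeded in arbitrating access."""
            for prev_addr, won in self.history:
                if prev_addr == address and won:
                    return False
            return True
    ```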
  • Publication number: 20190018780
    Abstract: A computer implemented method for saving cache access power is suggested. The cache is provided with a set predictor logic for providing a generated set selection for selecting a set in the cache, and with a set predictor cache for pre-caching generated set indices of the cache. The method further comprises: receiving a part of a requested memory address; checking, in the set predictor cache, whether a set index for the requested memory address has already been generated; in the case that it has: securing that the set predictor cache is switched off; issuing the pre-cached generated set index towards the cache; and securing that only that part of the cache is switched on that is associated with the pre-cached generated set index.
    Type: Application
    Filed: July 14, 2017
    Publication date: January 17, 2019
    Inventors: Christian Jacobi, Markus Kaltenbach, Ulrich Mayer, Johannes C. Reichart, Anthony Saporito, Siegmund Schlechter
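    A toy model of the power-saving lookup described above: if the set index for a requested address part is already pre-cached, the generation logic is not exercised and only the predicted set needs to be powered. The modulo "generation" stands in for the real predictor logic and is purely illustrative:

    ```python
    class SetPredictor:
        def __init__(self, n_sets):
            self.n_sets = n_sets
            self.pred_cache = {}  # address part -> pre-cached set index

        def generate_index(self, addr_part):
            # Invented stand-in for the set predictor logic.
            return addr_part % self.n_sets

        def lookup(self, addr_part):
            """Returns (set_index, predictor_logic_used)."""
            if addr_part in self.pred_cache:
                # Pre-cached hit: the generation path stays powered down.
                return self.pred_cache[addr_part], False
            index = self.generate_index(addr_part)
            self.pred_cache[addr_part] = index
            return index, True
    ```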
  • Publication number: 20190018781
    Abstract: A computer implemented method for saving cache access power is suggested. The cache is provided with a set predictor logic for providing a generated set selection for selecting a set in the cache, and with a set predictor cache for pre-caching generated set indices of the cache. The method comprises further: receiving a part of a requested memory address; checking, in the set predictor cache, whether the requested memory address is already generated; in the case, that the requested memory address has already been generated: securing that the set predictor cache is switched off; issuing the pre-cached generated set index towards the cache; and securing that only that part of the cache is switched on that is associated with the pre-cached generated set index.
    Type: Application
    Filed: December 27, 2017
    Publication date: January 17, 2019
    Inventors: Christian Jacobi, Markus Kaltenbach, Ulrich Mayer, Johannes C. Reichart, Anthony Saporito, Siegmund Schlechter
  • Publication number: 20190018782
    Abstract: The present disclosure includes apparatuses and methods for compute enabled cache. An example apparatus comprises a compute component, a memory and a controller coupled to the memory. The controller is configured to operate on a block select and a subrow select as metadata to a cache line to control placement of the cache line in the memory to allow for a compute enabled cache.
    Type: Application
    Filed: September 10, 2018
    Publication date: January 17, 2019
    Inventor: Richard C. Murphy
  • Publication number: 20190018783
    Abstract: The present invention discloses a data access device and method applicable to a processor. An embodiment of the data access device comprises: an instruction cache memory; a data cache memory; a processor circuit configured to read specific data from the instruction cache memory for the Nth time and read the specific data from the data cache memory for the Mth time, in which both N and M are positive integers and M is greater than N; a duplication circuit configured to copy the specific data from the instruction cache memory to the data cache memory when the processor circuit reads the specific data for the Nth time; and a decision circuit configured to determine whether data requested by a read request from the processor circuit are stored in the data cache memory according to the read request.
    Type: Application
    Filed: July 13, 2018
    Publication date: January 17, 2019
    Inventors: Yen-Ju LU, Chao-Wei HUANG
  • Publication number: 20190018784
    Abstract: A storing unit stores therein mapping management information indicating mappings between each of a plurality of divided regions created by dividing logical storage space on a storage apparatus and one of a plurality of identification numbers each representing a different write frequency. A control unit measures the write frequency of each of the plurality of divided regions and updates the mappings indicated by the mapping management information based on results of the write frequency measurement. Upon request for a data write to a write address included in a divided region after the update of the mappings, the control unit identifies an identification number associated with the divided region based on the mapping management information, appends the identified identification number to a write request for the write address, and transmits the write request with the identified identification number appended thereto to the storage apparatus.
    Type: Application
    Filed: July 3, 2018
    Publication date: January 17, 2019
    Applicant: FUJITSU LIMITED
    Inventors: Takanori ISHII, Tomoka Aoki
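    The abstract above can be sketched as a mapper that counts writes per divided region and tags each outgoing write request with an identification number reflecting that region's measured write frequency. The region size, the three-level cold/warm/hot split, and the thresholds are all invented for illustration:

    ```python
    class RegionMapper:
        def __init__(self, region_size, thresholds=(10, 100)):
            self.region_size = region_size
            self.counts = {}            # region -> writes observed so far
            self.thresholds = thresholds  # invented frequency boundaries

        def record_write(self, address):
            region = address // self.region_size
            self.counts[region] = self.counts.get(region, 0) + 1

        def stream_id(self, address):
            """Map measured write frequency to an id (0=cold, 1=warm, 2=hot)."""
            count = self.counts.get(address // self.region_size, 0)
            return sum(count >= t for t in self.thresholds)

        def tagged_request(self, address):
            """Append the region's id to the write request before sending."""
            self.record_write(address)
            return {"address": address, "stream_id": self.stream_id(address)}
    ```

    A storage device receiving such tagged requests can then group data of similar write frequency together, which is the point of the mapping.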
  • Publication number: 20190018785
    Abstract: A data processing network includes a network of devices addressable via a system address space, the network including a computing device configured to execute an application in a virtual address space. A virtual-to-system address translation circuit is configured to translate a virtual address to a system address. A memory node controller has a first interface to a data resource addressable via a physical address space, a second interface to the computing device, and a system-to-physical address translation circuit, configured to translate a system address in the system address space to a corresponding physical address in the physical address space of the data resource. The virtual-to-system mapping may be a range table buffer configured to retrieve a range table entry comprising an offset address of a range together with a virtual address base and an indicator of the extent of the range.
    Type: Application
    Filed: November 21, 2017
    Publication date: January 17, 2019
    Applicant: Arm Limited
    Inventors: Jonathan Curtis BEARD, Roxana RUSITORU, Curtis Glenn DUNHAM
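    The range-table translation above can be illustrated with a minimal lookup: each entry holds a virtual base, an extent, and an offset into the system address space, and translating is a bounds check plus an addition rather than a multi-level page walk. This is a sketch of the idea, not Arm's implementation:

    ```python
    from dataclasses import dataclass

    @dataclass
    class RangeEntry:
        virt_base: int   # start of the virtual address range
        extent: int      # size of the range in bytes
        offset: int      # virtual address + offset = system address

    def translate(ranges, virt):
        """Translate a virtual address via range entries (hypothetical)."""
        for r in ranges:
            if r.virt_base <= virt < r.virt_base + r.extent:
                return virt + r.offset
        raise KeyError(f"no range covers virtual address {virt:#x}")
    ```

    One entry can cover an arbitrarily large contiguous region, which is why range-based translation can be faster than walking fixed-size page tables.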
  • Publication number: 20190018786
    Abstract: A mechanism is provided for efficient coherence state modification of cached data stored in a range of addresses in a coherent data processing system in which data coherency is maintained across multiple caches. A tag search structure is maintained that identifies address tags and coherence states of cached data indexed by address tags. In response to a request from a device internal to or external from the coherence network, the tag search structure is searched to identify address tags of cached data for which the coherence state is to be modified and requests are issued in the data processing system to modify a coherence state of cached lines with the identified address tags. The request from the external device may specify a range of addresses for which a coherence state change is sought. The tag search structure may be implemented as a search tree, for example.
    Type: Application
    Filed: November 21, 2017
    Publication date: January 17, 2019
    Applicant: Arm Limited
    Inventors: Jonathan Curtis Beard, Stephan Diestelhorst
  • Publication number: 20190018787
    Abstract: A host machine uses a range-based address translation system rather than a conventional page-based system. This enables address translation to be performed with improved efficiency, particularly when nested virtual machines are used. A data processing system utilizes range-based address translation to provide fast address translation for virtual machines that use virtual address space.
    Type: Application
    Filed: November 21, 2017
    Publication date: January 17, 2019
    Applicant: Arm Limited
    Inventors: Roxana Rusitoru, Jonathan Curtis Beard, Curtis Glenn Dunham
  • Publication number: 20190018788
    Abstract: According to one embodiment, a memory system receives a write request specifying a first logical address to which first data is to be written, and a length of the first data, from a host. The memory system writes the first data to a nonvolatile memory, and stores a first physical address indicating a physical storage location on the nonvolatile memory to which the first data is written, and the length of the first data, in an entry of a logical-to-physical address translation table corresponding to the first logical address. When the memory system receives a read request specifying the first logical address, the memory system acquires the first physical address and the length from the address translation table, and reads the first data from the nonvolatile memory.
    Type: Application
    Filed: November 22, 2017
    Publication date: January 17, 2019
    Inventors: Hideki Yoshida, Shinichi Kanno
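    A toy model of the logical-to-physical mapping described above: the table entry for a logical address stores both the physical address and the length of the data, so a later read needs only that single entry. The flat dict "flash" and append-only allocation are simplifications for illustration:

    ```python
    class SimpleFTL:
        def __init__(self):
            self.l2p = {}     # logical address -> (physical address, length)
            self.flash = {}   # physical address -> stored byte
            self.next_phys = 0

        def write(self, logical, data: bytes):
            phys = self.next_phys
            for i, b in enumerate(data):
                self.flash[phys + i] = b
            self.next_phys += len(data)
            # Store the physical address AND the length in one table entry.
            self.l2p[logical] = (phys, len(data))

        def read(self, logical) -> bytes:
            phys, length = self.l2p[logical]  # one lookup yields both
            return bytes(self.flash[phys + i] for i in range(length))
    ```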
  • Publication number: 20190018789
    Abstract: Memory address translation apparatus comprises a translation data store to store one or more instances of translation data providing address range boundary values defining a range of virtual memory addresses between respective virtual memory address boundaries in a virtual memory address space, and indicating a translation between a virtual memory address in the range of virtual memory addresses and a corresponding output memory address in an output address space; detector circuitry to detect whether a given virtual memory address to be translated lies in the range of virtual memory addresses defined by an instance of the translation data in the translation data store; in which the detector circuitry is configured, when the given virtual memory address to be translated lies outside the ranges of virtual memory addresses defined by any instances of the translation data stored by the translation data store, to retrieve one or more further instances of the translation data; and translation circuitry to apply the
    Type: Application
    Filed: July 14, 2017
    Publication date: January 17, 2019
    Applicant: ARM LTD
    Inventors: Jonathan Curtis BEARD, Roxana RUSITORU, Curtis Glenn DUNHAM
  • Publication number: 20190018790
    Abstract: A system, apparatus and method are provided in which a range of virtual memory addresses and a copy of that range are mapped to the same first system address range in a data processing system until an address in the virtual memory address range, or its copy, is written to. The common system address range includes a number of divisions. Responsive to a write request to an address in a division of the common address range, a second system address range is generated. The second system address range is mapped to the same physical addresses as the first system address range, except that the division containing the address to be written to and its corresponding division in the second system address range are mapped to different physical addresses. First layer mapping data may be stored in a range table buffer and updated when the second system address range is generated.
    Type: Application
    Filed: July 14, 2017
    Publication date: January 17, 2019
    Applicant: ARM LTD
    Inventors: Jonathan Curtis BEARD, Roxana RUSITORU, Curtis Glenn DUNHAM
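    The copy-on-write range scheme above can be sketched as follows: a range and its clone share physical divisions until one division is written, at which point only that division receives a fresh physical mapping. As a simplification, this sketch splits on any write to a shared division, even when the writer is the sole remaining owner; the names are invented:

    ```python
    class CowRange:
        def __init__(self, n_divisions, alloc):
            self.alloc = alloc                      # allocator of physical ids
            self.divs = [alloc() for _ in range(n_divisions)]
            self.shared = [False] * n_divisions

        def clone(self):
            """Clone shares every physical division with the original."""
            other = CowRange.__new__(CowRange)
            other.alloc = self.alloc
            other.divs = list(self.divs)
            other.shared = [True] * len(self.divs)
            self.shared = [True] * len(self.divs)
            return other

        def write(self, division):
            """First write to a shared division remaps just that division."""
            if self.shared[division]:
                self.divs[division] = self.alloc()
                self.shared[division] = False
            return self.divs[division]
    ```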
  • Publication number: 20190018791
    Abstract: Improving operation of a processing unit to access data within a cache system. A first fetch request and one or more subsequent fetch requests are accessed in an instruction stream. An address of data sought by the first fetch request is obtained. At least a portion of the address of data sought by the first fetch request is inserted in each of the one or more subsequent fetch requests. The portion of the address inserted in each of the one or more subsequent fetch requests is utilized to retrieve the data sought by the first fetch request first in order from the cache system.
    Type: Application
    Filed: July 14, 2017
    Publication date: January 17, 2019
    Inventors: Markus Kaltenbach, Ulrich Mayer, Siegmund Schlechter, Maxim Scholl
  • Publication number: 20190018792
    Abstract: Improving operation of a processing unit to access data within a cache system. A first fetch request and one or more subsequent fetch requests are accessed in an instruction stream. An address of data sought by the first fetch request is obtained. At least a portion of the address of data sought by the first fetch request is inserted in each of the one or more subsequent fetch requests. The portion of the address inserted in each of the one or more subsequent fetch requests is utilized to retrieve the data sought by the first fetch request first in order from the cache system.
    Type: Application
    Filed: January 24, 2018
    Publication date: January 17, 2019
    Inventors: Markus Kaltenbach, Ulrich Mayer, Siegmund Schlechter, Maxim Scholl
  • Publication number: 20190018793
    Abstract: A system comprises a processor with one or more cores; and memory including instructions to configure the processor to perform the method comprising receiving a data set of a data length; determining a bit pattern of the data set; generating a reference set of bit patterns, the set being of a set length, the set length being equivalent to the data length, the set of bit patterns including every possible different bit pattern from all zeros to all ones; determining a first test bit pattern using a first bit pattern generation function applied to test data; determining a distance between the first test bit pattern and the bit pattern of the data set using a location of the first test bit pattern and a location of the bit pattern of the data set, the locations being relative to the reference set of bit patterns; iterating the first test pattern generation function in a direction of the bit pattern of the data set and combining the first test pattern generation function with at least one second test pattern gener
    Type: Application
    Filed: June 26, 2018
    Publication date: January 17, 2019
    Inventor: Stephen Tarin
  • Publication number: 20190018794
    Abstract: A data processing system includes a memory system, a first processing element, a first address translator that maps virtual addresses to system addresses, a second address translator that maps system address to physical addresses, and a task management unit. A first program task uses a first virtual memory space that is mapped to a first system address range using a first table. The context of the first program task includes an address of the first table and is cloned by creating a second table indicative of a mapping from a second virtual address space to a second range of system addresses, where the second range is mapped to the same physical addresses as the first range until a write occurs, at which time memory is allocated and the mapping of the second range is updated. The cloned context includes an address of the second table.
    Type: Application
    Filed: July 14, 2017
    Publication date: January 17, 2019
    Applicant: ARM LTD
    Inventors: Jonathan Curtis BEARD, Roxana RUSITORU, Curtis Glenn DUNHAM
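The copy-on-write cloning described in this abstract can be sketched in a few lines. This is a minimal illustration, not the patented design: the class and method names (`PageTable`, `clone`, `write`) and the frame allocator are illustrative assumptions.

```python
# Minimal sketch of copy-on-write context cloning: a cloned table shares
# the parent's physical frames until a write allocates a fresh frame.

class PageTable:
    """Maps virtual page numbers to physical frame numbers."""
    def __init__(self, mapping):
        self.mapping = dict(mapping)

    def clone(self):
        # The clone initially points at the same physical frames as the
        # original; no memory is copied yet.
        return PageTable(self.mapping)

    def read(self, vpn):
        return self.mapping[vpn]

    def write(self, vpn, alloc):
        # First write: allocate a fresh frame and remap only this table.
        self.mapping[vpn] = alloc()
        return self.mapping[vpn]

# usage
frames = iter(range(100, 200))          # hypothetical frame allocator
parent = PageTable({0: 1, 1: 2})
child = parent.clone()
assert child.read(0) == parent.read(0)  # shared until a write occurs
child.write(0, lambda: next(frames))
assert child.read(0) != parent.read(0)  # child now owns a private frame
```

The point of the design is that cloning a context is O(1) in copied memory; the cost of duplication is deferred to the first write.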
  • Publication number: 20190018795
    Abstract: The present disclosure relates to a method of operating a translation lookaside buffer (TLB) arrangement for a processor supporting virtual addressing, wherein multiple translation engines are used to perform translations at the request of one of a plurality of dedicated processor units. The method comprises: maintaining, by a cache unit, a dependency matrix for the engines to track, for each processing unit, whether an engine is assigned to that processing unit for a table walk. The cache unit may block a processing unit from allocating an engine to a translation request when an engine is already assigned to the processing unit in the dependency matrix.
    Type: Application
    Filed: July 14, 2017
    Publication date: January 17, 2019
    Inventors: Michael Johannes Jaspers, Markus Kaltenbach, Girish G. Kurup, Ulrich Mayer
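The dependency-matrix bookkeeping in this abstract reduces to a small table keyed by engine and processing unit. The sketch below is illustrative only; the class shape and the rule of blocking a unit that already holds any engine are assumptions drawn from the abstract's wording.

```python
# Sketch of the dependency matrix: the cache unit refuses to allocate a
# translation engine to a processing unit that already has one assigned
# for an in-flight table walk.

class DependencyMatrix:
    def __init__(self, n_engines, n_units):
        # m[e][u] is True when engine e is walking tables for unit u
        self.m = [[False] * n_units for _ in range(n_engines)]

    def try_allocate(self, engine, unit):
        # Block if any engine is already assigned to this unit.
        if any(row[unit] for row in self.m):
            return False
        self.m[engine][unit] = True
        return True

    def release(self, engine, unit):
        self.m[engine][unit] = False

dm = DependencyMatrix(n_engines=2, n_units=4)
assert dm.try_allocate(0, 1) is True
assert dm.try_allocate(1, 1) is False   # unit 1 already holds an engine
dm.release(0, 1)
assert dm.try_allocate(1, 1) is True
```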
  • Publication number: 20190018796
    Abstract: The present disclosure relates to a method of operating a translation lookaside buffer (TLB) arrangement for a processor supporting virtual addressing, wherein multiple translation engines are used to perform translations at the request of one of a plurality of dedicated processor units. The method comprises: maintaining, by a cache unit, a dependency matrix for the engines to track, for each processing unit, whether an engine is assigned to that processing unit for a table walk. The cache unit may block a processing unit from allocating an engine to a translation request when an engine is already assigned to the processing unit in the dependency matrix.
    Type: Application
    Filed: December 15, 2017
    Publication date: January 17, 2019
    Inventors: Michael Johannes Jaspers, Markus Kaltenbach, Girish G. Kurup, Ulrich Mayer
  • Publication number: 20190018797
    Abstract: An information processing apparatus includes a first memory, a second memory, and a processor coupled to the first memory and the second memory. The first memory is configured to store data and has a first access speed. The second memory is configured to store data and has a second access speed different from the first access speed. The processor is configured to determine respective storage destinations of first data stored in the first memory and second data stored in the second memory from among the first memory and the second memory based on a first access probability and a first latency of the first data and a second access probability and a second latency of the second data.
    Type: Application
    Filed: July 11, 2018
    Publication date: January 17, 2019
    Applicant: FUJITSU LIMITED
    Inventor: SHUN GOKITA
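The placement decision in this abstract weighs access probability and latency to decide which memory each datum should occupy. The sketch below simplifies that to a greedy rule driven by access probability alone (hottest data goes to the faster memory); both the rule and the function shape are assumptions, since the abstract does not specify the decision procedure.

```python
# Sketch of two-tier data placement: put the most frequently accessed
# items in the faster (smaller) memory to lower expected access time.

def place(data, fast_capacity):
    """data: list of (name, access_probability); returns (fast, slow)."""
    ranked = sorted(data, key=lambda d: d[1], reverse=True)
    fast = [name for name, _ in ranked[:fast_capacity]]
    slow = [name for name, _ in ranked[fast_capacity:]]
    return fast, slow

fast, slow = place([("a", 0.1), ("b", 0.7), ("c", 0.2)], fast_capacity=1)
assert fast == ["b"] and slow == ["c", "a"]
```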
  • Publication number: 20190018798
    Abstract: Systems and methods relate to cost-aware cache management policies. In a cost-aware least recently used (LRU) replacement policy, temporal locality as well as miss cost is taken into account in selecting a cache line for replacement, wherein the miss cost is based on an associated operation type including instruction cache read, data cache read, data cache write, prefetch, and write back. In a cost-aware dynamic re-reference interval prediction (DRRIP) based cache management policy, miss costs associated with operation types pertaining to a cache line are considered for assigning re-reference interval prediction values (RRPV) for inserting the cache line, pursuant to a cache miss and for updating the RRPV upon a hit for the cache line. The operation types comprise instruction cache access, data cache access, prefetch, and write back. These policies improve victim selection, while minimizing cache thrashing and scans.
    Type: Application
    Filed: September 18, 2018
    Publication date: January 17, 2019
    Inventors: Rami Mohammad AL SHEIKH, Shivam PRIYADARSHI, Harold Wade CAIN III
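The cost-aware LRU idea in this abstract (weighing recency against the miss cost of the line's operation type) can be sketched as a scoring function over candidate victims. The cost table values and the linear score are illustrative assumptions, not figures from the patent.

```python
# Sketch of cost-aware LRU victim selection: prefer to evict lines that
# are both old and cheap to re-fetch, where cost depends on the
# operation type that brought the line in.

MISS_COST = {"ifetch": 3, "dread": 2, "dwrite": 2, "prefetch": 1, "wb": 1}

def choose_victim(lines):
    """lines: list of (tag, age, op_type); higher age = less recent."""
    def score(line):
        _, age, op = line
        return MISS_COST[op] - age     # hypothetical weighting
    # Lower score -> better victim
    return min(lines, key=score)[0]

victim = choose_victim([("x", 5, "ifetch"), ("y", 5, "prefetch"), ("z", 1, "dread")])
assert victim == "y"   # as old as x, but cheaper to lose
```

Plain LRU would have no preference between `x` and `y`; the cost term breaks the tie toward the line whose loss is less expensive.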
  • Publication number: 20190018799
    Abstract: A hybrid hierarchical cache is implemented at the same level in the access pipeline to get the faster access behavior of a smaller cache and, at the same time, a higher hit rate at lower power for a larger cache, in some embodiments. A split cache at the same level in the access pipeline includes two caches that work together. In the hybrid, split, low-level cache (e.g., L1), evictions are coordinated locally between the two L1 portions, and on a miss to both L1 portions, a line is allocated from a larger L2 cache into the smaller L1 portion.
    Type: Application
    Filed: August 27, 2018
    Publication date: January 17, 2019
    Inventors: Abhishek R. Appu, Joydeep Ray, James A. Valerio, Altug Koker, Prasoonkumar P. Surti, Balaji Vembu, Wenyin Fu, Bhushan M. Borole, Kamal Sinha
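The split-L1 lookup path can be sketched as: probe both same-level portions, and only on a miss in both fill from L2 into the smaller portion. The set-based model below is an illustrative simplification (no eviction coordination is shown).

```python
# Sketch of the hybrid split-L1 lookup: two caches sit at the same
# pipeline level; a miss in both fills the smaller one from L2.

def l1_lookup(addr, small_l1, big_l1, l2):
    if addr in small_l1 or addr in big_l1:
        return "hit"
    # Miss in both portions: allocate into the smaller cache.
    small_l1.add(addr)
    return "fill-from-L2" if addr in l2 else "fill-from-memory"

small, big, l2 = set(), {0x40}, {0x80}
assert l1_lookup(0x40, small, big, l2) == "hit"
assert l1_lookup(0x80, small, big, l2) == "fill-from-L2"
assert 0x80 in small          # the fill landed in the smaller portion
```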
  • Publication number: 20190018800
    Abstract: A host processor receives an address translation request from an accelerator, which may be trusted or un-trusted. The address translation request includes a virtual address in a virtual address space that is shared by the host processor and the accelerator. The host processor encrypts a physical address in a host memory indicated by the virtual address in response to the accelerator being permitted to access the physical address. The host processor then provides the encrypted physical address to the accelerator. The accelerator provides memory access requests including the encrypted physical address to the host processor, which decrypts the physical address and selectively accesses a location in the host memory indicated by the decrypted physical address depending upon whether the accelerator is permitted to access the location indicated by the decrypted physical address.
    Type: Application
    Filed: July 14, 2017
    Publication date: January 17, 2019
    Inventors: Nuwan JAYASENA, Brandon K. POTTER, Andrew G. KEGEL
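The flow in this abstract (host encrypts the physical address, the accelerator holds only an opaque handle, and the host re-checks permission on every access) can be sketched end to end. A real design would use a proper authenticated cipher; the XOR "cipher" and all names here are purely illustrative of the opaque-handle protocol.

```python
# Sketch of the encrypted-physical-address handshake: the accelerator
# never sees a raw physical address, and the host gates every access.

HOST_KEY = 0x5A5A5A5A            # hypothetical host-private key

def translate(virtual, page_table, allowed):
    pa = page_table[virtual]
    if pa not in allowed:
        raise PermissionError("accelerator may not access this page")
    return pa ^ HOST_KEY          # opaque handle given to the accelerator

def access(encrypted_pa, memory, allowed):
    pa = encrypted_pa ^ HOST_KEY  # only the host can recover the address
    if pa not in allowed:
        raise PermissionError("access revoked")
    return memory[pa]

memory, allowed = {0x1000: 42}, {0x1000}
handle = translate(0xFEED, {0xFEED: 0x1000}, allowed)
assert handle != 0x1000           # accelerator never sees the raw address
assert access(handle, memory, allowed) == 42
```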
  • Publication number: 20190018801
    Abstract: Data security access and management may require a server dedicated to monitoring document access requests and enforcing rules and policies that deny access to anyone not specifically identified as having access to the data. One example of operation may include selecting data to be protected via a user device, applying at least one policy to the data, storing the at least one policy in a data record identifying the data, modifying a data format of the data to create modified data, and storing the modified data in memory.
    Type: Application
    Filed: September 10, 2018
    Publication date: January 17, 2019
    Inventors: Prakash Linga, Ajay Arora, Vladimir Buzuev, Maurice C. Evans
  • Publication number: 20190018802
    Abstract: In one example, a Universal Serial Bus (USB) controller comprises at least one memory register to store one or more enumeration parameters for a USB connection with the USB controller, and logic, at least partially including hardware logic, to detect a USB connection with a remote device, retrieve one or more connection enumeration parameters for the USB connection from the at least one memory register on the USB controller, and implement a connection enumeration process using the one or more connection enumeration parameters retrieved from the memory register on the USB controller. Other examples may be described.
    Type: Application
    Filed: April 17, 2018
    Publication date: January 17, 2019
    Applicant: Intel Corporation
    Inventors: SATHEESH CHELLAPPAN, KISHORE KASICHAINULA, LAY CHENG ONG, CHEE LIM POON, HARISH G. KAMAT
  • Publication number: 20190018803
    Abstract: A computer system with a configurable ordering controller for coupling transactions. The computer system comprises a coupling device configured to send first data packets with an unordered attribute being set to an ordering controller. The computer system further comprises the coupling device configured to send second data packets with requested ordering to the ordering controller, back-to-back after the first data packets, without waiting until all of the first data packets are completed. The computer system further comprises the ordering controller configured to send the first data packets to a memory subsystem in a relaxed ordering mode, wherein the ordering controller sends the first data packets to the memory subsystem in an arbitrary order, and wherein the ordering controller sends the second data packets to the memory subsystem after sending all of the first data packets to the memory subsystem.
    Type: Application
    Filed: July 11, 2017
    Publication date: January 17, 2019
    Inventors: Norbert Hagspiel, Sascha Junghans, Matthias Klein, Girish Kurup
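The ordering rule in this abstract is simple to state: packets flagged unordered may reach the memory subsystem in any order, while the ordered packets that follow are released only after every unordered packet has been sent. The queue model below is an illustrative sketch of that rule, not the controller's hardware.

```python
# Sketch of the configurable ordering controller: relaxed-mode packets
# drain in arbitrary order; ordered packets drain strictly afterward.

import random

def drain(packets):
    """packets: list of (payload, unordered_flag) in arrival order."""
    unordered = [p for p, u in packets if u]
    random.shuffle(unordered)              # relaxed mode: arbitrary order
    sent = list(unordered)
    # Ordered packets go out only after all unordered ones completed.
    sent.extend(p for p, u in packets if not u)
    return sent

out = drain([("a", True), ("b", True), ("c", False)])
assert out[-1] == "c"                      # ordered packet is last
assert sorted(out[:2]) == ["a", "b"]
```

The benefit claimed by the abstract is that the sender need not wait for the unordered packets to complete before issuing the ordered ones; the ordering controller enforces the barrier itself.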
  • Publication number: 20190018804
    Abstract: A method for coupling transactions with a configurable ordering controller in a computer system. The method comprises sending, by a coupling device, first data packets with an unordered attribute being set to an ordering controller. The method further comprises sending, by the coupling device, second data packets with requested ordering to the ordering controller, back-to-back after the first data packets, without waiting until all of the first data packets are completed. The method further comprises sending, by the ordering controller, the first data packets to a memory subsystem in a relaxed ordering mode, wherein the ordering controller sends the first data packets to the memory subsystem in an arbitrary order, and wherein the ordering controller sends the second data packets to the memory subsystem after sending all of the first data packets to the memory subsystem.
    Type: Application
    Filed: November 8, 2017
    Publication date: January 17, 2019
    Inventors: Norbert Hagspiel, Sascha Junghans, Matthias Klein, Girish Kurup
  • Publication number: 20190018805
    Abstract: Systems and methods for fast execution of in-capsule commands are disclosed. NVM Express (NVMe) over fabrics is a standard in which a host device sends commands in a command capsule to a memory device. The memory device then saves the command capsule as an entry to a submission queue, and thereafter fetches the command capsule from the submission queue for execution. In certain instances, such as when the command capsule includes a write command, the memory device may decide to by-pass the submission queue and instead begin execution of the command without fetching the command from the submission queue. In these instances of bypassing, the memory device may instead insert a no-operation entry in the submission queue. Further, the memory device may send a response capsule prior to beginning execution of the command.
    Type: Application
    Filed: July 17, 2017
    Publication date: January 17, 2019
    Applicant: Western Digital Technologies, Inc.
    Inventor: Shay Benisty
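The submission-queue bypass in this abstract can be sketched as a dispatch decision: a write capsule starts executing immediately and a no-operation entry is placed in the submission queue so queue accounting stays consistent, while other capsules take the normal fetch path. All names here are illustrative; this is not the NVMe over Fabrics wire format.

```python
# Sketch of the in-capsule fast path: bypass the submission queue for
# write capsules and leave a NOP placeholder in their slot.

def receive_capsule(capsule, submission_queue, executed):
    if capsule["opcode"] == "write":
        executed.append(capsule["id"])     # begin execution right away
        submission_queue.append("NOP")     # placeholder keeps queue in step
        return "response-sent-early"
    submission_queue.append(capsule)       # normal path: fetch later
    return "queued"

sq, done = [], []
assert receive_capsule({"opcode": "write", "id": 7}, sq, done) == "response-sent-early"
assert sq == ["NOP"] and done == [7]
assert receive_capsule({"opcode": "read", "id": 8}, sq, done) == "queued"
```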
  • Publication number: 20190018806
    Abstract: Techniques and apparatus to manage access to accelerator-attached memory are described. In one embodiment, an apparatus to provide coherence bias for accessing accelerator memory may include at least one processor, a logic device communicatively coupled to the at least one processor, a logic device memory communicatively coupled to the logic device, and logic, at least a portion comprised in hardware, the logic to receive a request to access the logic device memory from the logic device, determine a bias mode associated with the request, and provide the logic device with access to the logic device memory via a device bias pathway responsive to the bias mode being a device bias mode. Other embodiments are described and claimed.
    Type: Application
    Filed: July 14, 2017
    Publication date: January 17, 2019
    Applicant: INTEL CORPORATION
    Inventors: DAVID A. KOUFATY, RAJESH M. SANKARAN, STEPHEN R. VAN DOREN
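The bias-mode routing in this abstract amounts to a per-request mode check: when the target of a request to accelerator-attached memory is in device bias, the access takes a direct device pathway; otherwise it goes through the host coherence path. The two-mode table below is an illustrative assumption.

```python
# Sketch of coherence-bias routing for accelerator-attached memory.

BIAS = {0x100: "device", 0x200: "host"}    # hypothetical per-page bias table

def route(page):
    mode = BIAS.get(page, "host")
    # Device bias: the accelerator reaches its own memory directly,
    # without consulting the host's coherence machinery.
    return "device-bias-pathway" if mode == "device" else "host-coherence-pathway"

assert route(0x100) == "device-bias-pathway"
assert route(0x200) == "host-coherence-pathway"
```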
  • Publication number: 20190018807
    Abstract: An electronic device includes a memory, plural master circuits, a transmission path, a detection unit, and a reset control unit. The plural master circuits read and write data from and into the memory. Plural instructions and data are transmitted through the transmission path, which buffers and arbitrates the instructions and the data. The detection unit detects a buffer overrun in the transmission path. The reset control unit performs reset control for the portion of the transmission path affected by the buffer overrun and for the master circuits, of the plural master circuits, affected by the buffer overrun.
    Type: Application
    Filed: June 27, 2018
    Publication date: January 17, 2019
    Applicant: FUJI XEROX CO., LTD.
    Inventors: Tomoyuki ONO, Masaki NUDEJIMA, Takayuki HASHIMOTO, Suguru OUE
  • Publication number: 20190018808
    Abstract: A memory node controller for a node of a data processing network, the network including at least one computing device and at least one data resource, each data resource addressed by a physical address. The node is configured to couple the at least one computing device with the at least one data resource. Elements of the data processing network are addressed via a system address space. The memory node controller includes a first interface to the at least one data resource, a second interface to the at least one computing device, and a system to physical address translator cache configured to translate a system address in the system address space to a physical address in the physical address space of the at least one data resource.
    Type: Application
    Filed: July 14, 2017
    Publication date: January 17, 2019
    Applicant: ARM LTD
    Inventors: Jonathan Curtis BEARD, Roxana RUSITORU, Curtis Glenn DUNHAM
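The system-to-physical address translator cache in this abstract is a small cache sitting in front of a slower authoritative translation. The sketch below assumes 4 KiB pages, a tiny fully associative table with crude eviction, and an arbitrary translation function; all of those are illustrative choices.

```python
# Sketch of a system-to-physical translation cache: cache page-granular
# translations and fall back to the slow authoritative lookup on a miss.

class TranslationCache:
    def __init__(self, translate_fn, size=4):
        self.translate_fn = translate_fn   # authoritative (slow) lookup
        self.size = size
        self.entries = {}                  # system page base -> physical base
        self.misses = 0

    def lookup(self, system_addr):
        page = system_addr & ~0xFFF        # 4 KiB pages, illustrative
        if page not in self.entries:
            self.misses += 1
            if len(self.entries) >= self.size:
                self.entries.pop(next(iter(self.entries)))  # crude eviction
            self.entries[page] = self.translate_fn(page)
        return self.entries[page] | (system_addr & 0xFFF)

tc = TranslationCache(lambda p: p + 0x10000)   # hypothetical mapping
assert tc.lookup(0x2004) == 0x12004
assert tc.lookup(0x2ABC) == 0x12ABC
assert tc.misses == 1                      # second access hit the cache
```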
  • Publication number: 20190018809
    Abstract: A semiconductor chip comprising memory controller circuitry having interface circuitry to couple to a memory channel. The memory controller includes first logic circuitry to implement a first memory channel protocol on the memory channel. The first memory channel protocol is specific to a first volatile system memory technology. The interface also includes second logic circuitry to implement a second memory channel protocol on the memory channel. The second memory channel protocol is specific to a second non volatile system memory technology.
    Type: Application
    Filed: July 26, 2018
    Publication date: January 17, 2019
    Inventors: Bill NALE, Raj K. RAMANUJAN, Muthukumar P. SWAMINATHAN, Tessil THOMAS, Taarinya POLEPEDDI
  • Publication number: 20190018810
    Abstract: A method and system for programming a microcontroller (MCU) to implement a data transfer, the MCU having a flash memory, a central processing unit (CPU) and a direct memory access controller (DMAC). In one embodiment, the method includes calling a function stored in the flash memory, wherein a first parameter is passed to the function when it is called, wherein the first parameter identifies a first data structure that is stored in flash memory, and wherein the first data structure includes first DMAC control values. The CPU reads the first DMAC control values in response to the CPU executing instructions of the function. The CPU then writes the first DMAC control values to respective control registers of the DMAC in response to the CPU executing instructions of the function.
    Type: Application
    Filed: February 28, 2017
    Publication date: January 17, 2019
    Inventor: Dale SPARLING
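The flash-driven DMA setup in this abstract (a function receives a parameter identifying a control structure in flash, then copies that structure's values into the DMAC's control registers) can be sketched as follows. Register names, the structure layout, and the table identifiers are all illustrative assumptions; on a real MCU the register writes would target memory-mapped hardware.

```python
# Sketch of programming a DMAC from pre-computed control values held in
# flash: the first parameter selects a table, and the CPU writes each of
# its values into the corresponding DMAC control register.

DMAC_REGS = {"SRC": 0, "DST": 0, "LEN": 0, "CTRL": 0}   # stand-in registers

# "Flash": read-only tables of pre-computed DMAC control values
FLASH_TABLES = {
    "uart_rx": {"SRC": 0x4000_0000, "DST": 0x2000_1000, "LEN": 64, "CTRL": 1},
}

def start_transfer(table_id):
    ctrl = FLASH_TABLES[table_id]          # parameter identifies the structure
    for reg, value in ctrl.items():
        DMAC_REGS[reg] = value             # CPU writes each control register
    return DMAC_REGS["CTRL"] == 1          # transfer enabled

assert start_transfer("uart_rx") is True
assert DMAC_REGS["LEN"] == 64
```

Keeping the control values as constant data in flash means the setup code is a single generic copy loop, and new transfer configurations need only a new table, not new code.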