Interleaved Patents (Class 711/127)
  • Patent number: 11726909
    Abstract: A memory controller maintains a mapping of target ranges in system memory space interleaved two ways across locations in a three-rank environment. For each range of the target ranges, the mapping comprises a two-way interleaving of the range across two ranks of the three-rank environment and offsets from base locations in the two ranks. At least one of the ranges has offsets that differ relative to each other. Such offsets allow the three ranks to be fully interleaved, two ways. An instruction to read data at a rank-agnostic location in the diverse-offset range causes the memory controller to map the rank-agnostic location to two interleaved locations offset by different amounts from their respective base locations in their ranks. The controller may then effect the transfer of the data at the two interleaved locations.
    Type: Grant
    Filed: July 13, 2022
    Date of Patent: August 15, 2023
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Brett Kenneth Dodds, Monish Shantilal Shah
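    Illustrative sketch (not from the patent): a minimal C model of the mapping described in the abstract above, in which each target range is two-way interleaved across two of the three ranks and carries a per-rank offset, so a rank-agnostic address resolves to a (rank, location) pair. The structure, field names, line size, and example values are assumptions for illustration only.
      #include <stdint.h>
      #include <stdio.h>

      #define LINE_BYTES 64u                     /* assumed interleave granularity */

      struct range_map {
          uint64_t base;                         /* start of the target range       */
          unsigned rank[2];                      /* the two ranks for this range    */
          uint64_t offset[2];                    /* per-rank offsets from rank base */
      };

      /* Map a rank-agnostic system address to (rank, location within rank). */
      static void map_address(const struct range_map *r, uint64_t addr,
                              unsigned *rank, uint64_t *loc)
      {
          uint64_t rel  = addr - r->base;        /* offset within the range  */
          uint64_t line = rel / LINE_BYTES;      /* interleave line index    */
          unsigned way  = (unsigned)(line & 1u); /* two-way: even/odd lines  */
          *rank = r->rank[way];
          /* Each rank holds every other line; the two offsets may differ,
           * which is what lets three ranks be fully covered two ways. */
          *loc = r->offset[way] + (line / 2u) * LINE_BYTES + rel % LINE_BYTES;
      }

      int main(void)
      {
          /* Example range interleaved across ranks 0 and 2 with unequal offsets. */
          struct range_map r = { 0x40000000u, { 0, 2 }, { 0x0, 0x80000u } };
          unsigned rank; uint64_t loc;
          map_address(&r, 0x40000040u, &rank, &loc);
          printf("rank %u, location 0x%llx\n", rank, (unsigned long long)loc);
          return 0;
      }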
  • Patent number: 11232031
    Abstract: A memory allocation method and a device, where the method is applied to a computer system including a processor and a memory, and comprises, after receiving a memory access request carrying a to-be-accessed virtual address and determining that no memory page has been allocated to the virtual address, the processor selecting a target rank group from at least two rank groups of the memory based on access traffic of the rank groups. The processor selects, from idle memory pages, a to-be-allocated memory page for the virtual address, where information about a first preset location in a physical address of the to-be-allocated memory page is the same as first portions of address information in addresses of ranks in the target rank group.
    Type: Grant
    Filed: May 22, 2019
    Date of Patent: January 25, 2022
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Shihai Xiao, Xing Hu, Kanwen Wang, Wei Yang
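    Illustrative sketch (not from the patent): a C outline of the allocation policy in the abstract above, choosing the rank group with the lowest access traffic and then a free page whose physical-address bits at a preset position match that group. The bit position, group count, and data structures are assumptions.
      #include <stdbool.h>
      #include <stddef.h>
      #include <stdint.h>
      #include <stdio.h>

      #define NUM_GROUPS      2
      #define GROUP_BIT_SHIFT 16                 /* assumed "first preset location" */
      #define GROUP_BIT_MASK  0x1u

      /* Pick the rank group with the least access traffic. */
      static unsigned pick_target_group(const uint64_t *traffic, size_t n)
      {
          unsigned best = 0;
          for (size_t i = 1; i < n; i++)
              if (traffic[i] < traffic[best]) best = (unsigned)i;
          return best;
      }

      /* Return the first idle page whose preset PA bits match the target group. */
      static int pick_free_page(const uint64_t *page_pa, const bool *idle, size_t n,
                                unsigned group_bits, uint64_t *out_pa)
      {
          for (size_t i = 0; i < n; i++) {
              unsigned bits = (unsigned)((page_pa[i] >> GROUP_BIT_SHIFT) & GROUP_BIT_MASK);
              if (idle[i] && bits == group_bits) { *out_pa = page_pa[i]; return 0; }
          }
          return -1;                             /* no matching idle page */
      }

      int main(void)
      {
          uint64_t traffic[NUM_GROUPS] = { 900, 150 };   /* group 1 is less loaded */
          uint64_t pages[3] = { 0x00010000, 0x00020000, 0x00030000 };
          bool idle[3] = { true, true, true };
          uint64_t pa;
          unsigned g = pick_target_group(traffic, NUM_GROUPS);
          if (pick_free_page(pages, idle, 3, g, &pa) == 0)
              printf("group %u gets page at PA 0x%llx\n", g, (unsigned long long)pa);
          return 0;
      }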
  • Patent number: 11169937
    Abstract: A memory control device of the present invention controls a memory that includes multiple bank groups, each including multiple banks. The memory control device includes a request buffer configured to store memory requests to be issued to the banks, a bank busy manager configured to manage busy states of the banks, a bank group checker configured to manage, for each of the bank groups, the number of banks in the not-busy state in that bank group, a bank group determination unit configured to determine a bank group to which a memory request is issued on the basis of the numbers of banks in the not-busy state in the respective bank groups, and a request issuer configured to issue the memory request in the request buffer to a bank in the determined bank group.
    Type: Grant
    Filed: January 11, 2018
    Date of Patent: November 9, 2021
    Assignee: NEC CORPORATION
    Inventor: Yasushi Kanoh
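    Illustrative sketch (not from the patent): a small C model of the determination rule in the abstract above: count the not-busy banks in each bank group and issue the next request to the group with the most. Group and bank counts are assumptions.
      #include <stdbool.h>
      #include <stdio.h>

      #define NUM_GROUPS      4
      #define BANKS_PER_GROUP 4

      static bool bank_busy[NUM_GROUPS][BANKS_PER_GROUP];   /* bank busy manager */

      /* Bank group checker + determination unit: pick the group with the
       * largest number of banks in the not-busy state. */
      static int pick_bank_group(void)
      {
          int best_group = 0, best_free = -1;
          for (int g = 0; g < NUM_GROUPS; g++) {
              int free_banks = 0;
              for (int b = 0; b < BANKS_PER_GROUP; b++)
                  if (!bank_busy[g][b]) free_banks++;
              if (free_banks > best_free) { best_free = free_banks; best_group = g; }
          }
          return best_group;
      }

      int main(void)
      {
          bank_busy[0][0] = bank_busy[0][1] = true;          /* group 0 partly busy */
          printf("issue next request to bank group %d\n", pick_bank_group());
          return 0;
      }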
  • Patent number: 10877686
    Abstract: An apparatus is described that includes a solid state drive having non-volatile buffer memory and non-volatile primary storage memory. The non-volatile buffer memory stores fewer bits per cell than the non-volatile primary storage memory. The solid state drive includes a controller to flush the buffer in response to a buffer flush command received from a host. The controller causes the solid state drive to service read/write requests that are newly received from the host in between flushes of smaller portions of the buffer's content that are performed to service the buffer flush command.
    Type: Grant
    Filed: April 13, 2018
    Date of Patent: December 29, 2020
    Assignee: Intel Corporation
    Inventors: Shankar Natarajan, Romesh Trivedi, Suresh Nagarajan, Sriram Natarajan
  • Patent number: 10853236
    Abstract: A storage device and a method of operating the storage device are provided. The storage device includes a plurality of memory devices, each including at least one read cache memory block and a plurality of main memory blocks, and a memory controller configured to spread data stored in the main memory blocks of a single memory device, whose read count (the number of read requests) exceeds a threshold value, across the read cache memory blocks included in each of the plurality of memory devices.
    Type: Grant
    Filed: October 8, 2018
    Date of Patent: December 1, 2020
    Assignee: SK hynix Inc.
    Inventor: Jong Ju Park
  • Patent number: 10838873
    Abstract: There is provided a method for managing addresses in a distributed system. The distributed system comprises a client and a resource pool, the resource pool comprising multiple hosts, a host among the multiple hosts comprising a computing node. The method comprises: receiving an access request from the client, the access request being for accessing first target data in a physical memory of the computing node via a first virtual address; determining the physical memory on the basis of the first virtual address; and determining a first physical address of the first target data in the physical memory on the basis of the first virtual address, wherein the computing node is a graphics processing unit, and the physical memory is a memory of the graphics processing unit.
    Type: Grant
    Filed: April 12, 2019
    Date of Patent: November 17, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Wei Cui, Kun Wang
  • Patent number: 10782916
    Abstract: A memory system having memory components and a processing device to receive, from a host system, write commands to store data in the memory components, store the write commands in a buffer, and execute at least a portion of the write commands. The write buffer capacity can be represented, for example, by write credit values on the host and the subsystem. The processing device determines the amount of buffer capacity that becomes available after execution of at least the portion of the write commands, and signals the host system to receive information identifying the amount of available capacity, without a pending information request received from the host system.
    Type: Grant
    Filed: August 8, 2018
    Date of Patent: September 22, 2020
    Assignee: Micron Technology, Inc.
    Inventors: Trevor Conrad Meyerowitz, Dhawal Bavishi
  • Patent number: 10504572
    Abstract: A method for addressing memory device data arranged in rows and columns indexed by a first number of row address bits and a second number of column address bits, and addressed by a row command specifying a third number of row address bits followed by a column command specifying a fourth number of column address bits, the first number being greater than the third number or the second number being greater than the fourth number, includes: splitting the first number of row address bits into first and second subsets, and specifying the first subset in the row command and the second subset in a next address command, when the first number is greater than the third number; otherwise, splitting the second number of column address bits into third and fourth subsets, and specifying the fourth subset in the column command and the third subset in a previous address command.
    Type: Grant
    Filed: November 13, 2017
    Date of Patent: December 10, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Mu-Tien Chang, Dimin Niu, Hongzhong Zheng, Sun Young Lim, Indong Kim, Jangseok Choi
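    Illustrative sketch (not from the patent): a C sketch of the bit-splitting rule in the abstract above: when the row address is wider than the row command, the excess subset is carried in the next address command; otherwise the column address is split between the column command and a previous address command. The command widths and example values are assumptions.
      #include <stdint.h>
      #include <stdio.h>

      struct encoded { uint32_t row_cmd, col_cmd, extra_cmd; };

      static struct encoded encode(uint32_t row, unsigned row_bits, unsigned row_cmd_width,
                                   uint32_t col, unsigned col_cmd_width)
      {
          struct encoded e;
          if (row_bits > row_cmd_width) {
              /* First subset in the row command, second subset in the next command. */
              e.row_cmd   = row & ((1u << row_cmd_width) - 1u);
              e.extra_cmd = row >> row_cmd_width;
              e.col_cmd   = col;
          } else {
              /* Otherwise split the column bits across the column command and a
               * previous address command. */
              e.row_cmd   = row;
              e.col_cmd   = col & ((1u << col_cmd_width) - 1u);
              e.extra_cmd = col >> col_cmd_width;
          }
          return e;
      }

      int main(void)
      {
          /* 18 row address bits but a 16-bit row command: two bits spill over. */
          struct encoded e = encode(0x3FFFF, 18, 16, 0x3FF, 10);
          printf("row=0x%x extra=0x%x col=0x%x\n", e.row_cmd, e.extra_cmd, e.col_cmd);
          return 0;
      }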
  • Patent number: 10419242
    Abstract: A method involves compiling a first amount of high-level programming language code (for example, P4) and a second amount of a low-level programming language code (for example, C) thereby obtaining a first amount of native code and a second amount of native code. The high-level programming language code at least in part defines how an SDN switch performs matching in a first condition. The low-level programming language code at least in part defines how the SDN switch performs matching in a second condition. The low-level code can be a type of plugin or patch for handling special packets. The amounts of native code are loaded into the SDN switch such that a first processor (for example, x86 of the host) executes the first amount of native code and such that a second processor (for example, ME of an NFP on the NIC) executes the second amount of native code.
    Type: Grant
    Filed: February 12, 2018
    Date of Patent: September 17, 2019
    Assignee: Netronome Systems, Inc.
    Inventors: Johann H. Tönsing, David George
  • Patent number: 10275245
    Abstract: A simulation system that includes a simulation accelerator that uses parallel processing to accelerate the simulation of register transfer level codes (RTLs) while minimizing memory access latency is disclosed. The accelerator has an array of parallel computing resources. The simulation accelerator receives compiled RTLs in which the components of the design are mapped to instructions. The instructions are divided into groups, in which instructions belonging to a same group are logically independent of each other. The simulation accelerator fetches instructions and data for processing by the parallel computing resources for one group of instructions at a time.
    Type: Grant
    Filed: January 6, 2017
    Date of Patent: April 30, 2019
    Assignee: Montana Systems Inc.
    Inventors: Vivian Chou, Julien Lamoureux, Sherman Lee
  • Patent number: 10223010
    Abstract: A method or system for allocating the storage space of a storage medium into a permanently allocated media cache storage region, a dynamically mapped media cache storage region, and a statically mapped storage region. In one implementation, the dynamically mapped media cache storage region is used for performance and/or reliability enhancing functions.
    Type: Grant
    Filed: December 1, 2016
    Date of Patent: March 5, 2019
    Assignee: SEAGATE TECHNOLOGY LLC
    Inventor: Andrew Michael Kowles
  • Patent number: 10127166
    Abstract: Techniques are disclosed relating to processing data in a storage controller. In one embodiment, a method includes receiving data at a storage controller of a storage device. The method further includes processing data units of the data in parallel via a plurality of write pipelines in the storage controller. The method further includes writing the data units to a storage medium of the storage device. In some embodiments, the method may include inserting header information into the data for a plurality of data units before processing, and the header information may include sequence information. In some embodiments, writing the data units may include writing according to a sequence determined prior to processing the data units.
    Type: Grant
    Filed: April 16, 2014
    Date of Patent: November 13, 2018
    Assignee: SanDisk Technologies LLC
    Inventor: James G. Peterson
  • Patent number: 9990302
    Abstract: A tag memory and a cache system with automating tag comparison mechanism and a cache method thereof are provided. The tag memory in the cache system includes a memory cell array, sensing amplifiers and a tag comparison circuit. The memory cell array stores cache tags, and outputs row tags of the cache tags according to an index in a memory address. The sensing amplifiers perform signal amplifications on the row tags to serve as comparison tags. The tag comparison circuit performs parallel comparisons between a target tag in the memory address and the row tags. When one of the row tags matches the target tag, the tag comparison circuit outputs a location of the matched row tag to serve as a first column address. The first column address is a column address where the memory address corresponds to a first data memory in the cache system.
    Type: Grant
    Filed: May 25, 2016
    Date of Patent: June 5, 2018
    Assignee: National Chiao Tung University
    Inventors: Tien-Fu Chen, Meng-Fan Chang, Keng-Hao Yang
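    Illustrative sketch (not from the patent): a C model of the tag lookup flow in the abstract above: the index field selects a row of cache tags, every tag in that row is compared with the target tag in parallel (modelled here as a loop), and the matching position is returned as the column address for the data memory. The cache geometry and field widths are assumptions.
      #include <stdint.h>
      #include <stdio.h>

      #define NUM_SETS 256      /* rows in the tag memory cell array            */
      #define NUM_WAYS 8        /* tags per row, compared in parallel in hardware */

      static uint32_t tag_array[NUM_SETS][NUM_WAYS];

      /* Returns the matching way (first column address), or -1 on a miss. */
      static int tag_lookup(uint32_t addr)
      {
          uint32_t index  = (addr >> 6) & (NUM_SETS - 1);   /* 64-byte lines assumed */
          uint32_t target = addr >> 14;                     /* remaining bits = tag  */
          for (int way = 0; way < NUM_WAYS; way++)
              if (tag_array[index][way] == target)
                  return way;
          return -1;
      }

      int main(void)
      {
          uint32_t addr = 0xDEADBE40u;
          tag_array[(addr >> 6) & (NUM_SETS - 1)][3] = addr >> 14;   /* preload a hit */
          printf("column address (way): %d\n", tag_lookup(addr));
          return 0;
      }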
  • Patent number: 9916253
    Abstract: A method for supporting a plurality of requests for access to a data cache memory (“cache”) is disclosed. The method comprises accessing a first set of requests to access the cache, wherein the cache comprises a plurality of blocks. Further, responsive to the first set of requests to access the cache, the method comprises accessing a tag memory that maintains a plurality of copies of tags for each entry in the cache and identifying tags that correspond to individual requests of the first set. The method also comprises performing arbitration in a same clock cycle as the accessing and identifying of tags, wherein the arbitration comprises: (a) identifying a second set of requests to access the cache from the first set, wherein the second set accesses a same block within the cache; and (b) selecting each request from the second set to receive data from the same block.
    Type: Grant
    Filed: February 5, 2014
    Date of Patent: March 13, 2018
    Assignee: Intel Corporation
    Inventors: Karthikeyan Avudaiyappan, Sourabh Alurkar
  • Patent number: 9747994
    Abstract: A memory system includes first through fifth pins connectable to a host device to output to the host device a first signal through the third pin and to receive from the host device a first chip select signal through the first pin, a second chip select signal through the second pin, a second signal through the fourth pin, and a clock signal through the fifth pin; an interface circuit configured to recognize, as a command, the second signal received through the fourth pin immediately after detecting the first or second chip select signal; and first and second memory cell arrays. The interface circuit and the first and second memory cell arrays are provided in one common package, and the interface circuit is configured to access the first memory cell array when detecting the first chip select signal and to access the second memory cell array when detecting the second chip select signal.
    Type: Grant
    Filed: August 10, 2016
    Date of Patent: August 29, 2017
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Hirosuke Narai, Toshihiko Kitazume, Kenichirou Kada, Nobuhiro Tsuji, Shunsuke Kodera, Tetsuya Iwata, Yoshio Furuyama, Shinya Takeda
  • Patent number: 9542306
    Abstract: A method or system for allocating the storage space of a storage medium into a permanently allocated media cache storage region, a dynamically mapped media cache storage region, and a statically mapped storage region, wherein the dynamically mapped media cache storage region is used for performance and/or reliability enhancing functions.
    Type: Grant
    Filed: March 13, 2013
    Date of Patent: January 10, 2017
    Assignee: SEAGATE TECHNOLOGY LLC
    Inventor: Andrew Michael Kowles
  • Patent number: 9507645
    Abstract: A thread processing method that is executed by a multi-core processor, includes supplying a command to execute a first thread to a first processor; judging a dependence relationship between the first thread and a second thread to be executed by a second processor; comparing a first threshold and a frequency of access of any one among shared memory and shared cache memory by the first thread; and changing a phase of a first operation clock of the first processor when the access frequency is greater than the first threshold and upon judging that no dependence relationship exists.
    Type: Grant
    Filed: October 18, 2013
    Date of Patent: November 29, 2016
    Assignee: FUJITSU LIMITED
    Inventors: Hiromasa Yamauchi, Koichiro Yamashita, Takahisa Suzuki, Koji Kurihara, Toshiya Otomo, Naoki Odate
  • Patent number: 9372736
    Abstract: Systems and methods for determining a representation of an execution trace include identifying at least one execution trace of a business process model, the business process model including parallel paths where a path influences an outcome of a decision. Path information of the business process model is determined using a processor, the path information including at least one of task execution order for each parallel path, task execution order across parallel paths, and dependency between parallel paths. A path representation for the at least one execution trace is selected based upon the path information to determine a representation of the at least one execution trace.
    Type: Grant
    Filed: May 6, 2014
    Date of Patent: June 21, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Francisco Curbera, Yurdaer N. Doganata, Geetika T. Lakshmanan, Merve Unuvar
  • Patent number: 9323607
    Abstract: An apparatus comprising a memory and a controller. The memory is configured to process a plurality of read/write operations. The memory comprises a plurality of memory modules each having a size less than a total size of the memory. The controller is configured to salvage data stored in a failed page of the memory determined to exceed a maximum number of errors. The controller copies raw data stored in the failed page. The controller identifies locations of a first type of data cells that fail erase identification. The controller identifies locations of a second type of data cells that have program errors. The controller flips data values in the raw data at the locations of the first type of data cells and the locations of the second type of data cells. The controller is configured to perform error correcting code decoding on the raw data having flipped data values. The controller salvages data stored in the failed page.
    Type: Grant
    Filed: May 9, 2014
    Date of Patent: April 26, 2016
    Assignee: Seagate Technology LLC
    Inventors: Yu Cai, Yunxiang Wu, Erich F. Haratsch
  • Patent number: 9251070
    Abstract: A multi-level cache structure in accordance with one embodiment includes a first cache structure and a second cache structure. The second cache structure is hierarchically above the first cache. The second cache includes a tag array comprising a plurality of tag entries corresponding to respective addresses of data within a system memory; a selector array associated with the tag array; and a data array configured to store a subset of the data. The selector array is configured to specify, for each corresponding tag entry, whether the data array includes the data corresponding to that tag entry.
    Type: Grant
    Filed: December 19, 2012
    Date of Patent: February 2, 2016
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventor: Jin Haeng Cho
  • Patent number: 9244823
    Abstract: Systems, methods, and other embodiments associated with speculating whether a read request will cause an access to a memory are described. In one embodiment, a method includes detecting, in a memory, a read request from a processor and speculating whether the read request will cause an access to a memory bank in the memory based, at least in part, on an address identified by the read request. The method selectively enables power to the memory bank in the memory based, at least in part, on speculating whether the read request will cause an access to the memory bank.
    Type: Grant
    Filed: November 9, 2012
    Date of Patent: January 26, 2016
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: Hoyeol Cho, Ioannis Orginos, Jinho Kwack
  • Patent number: 8977800
    Abstract: Provided is a multi-port cache memory apparatus and a method of operating the multi-port cache memory apparatus. The multi-port memory apparatus may divide an address space into address regions and allocate the divided address regions to cache banks, thereby preventing the concentration of accesses on a particular cache bank.
    Type: Grant
    Filed: January 31, 2012
    Date of Patent: March 10, 2015
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Moo-Kyoung Chung, Soo-Jung Ryu, Ho-Young Kim, Woong Seo, Young-Chul Cho
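    Illustrative sketch (not from the patent): a tiny C illustration of the region-to-bank allocation in the abstract above: the address space is cut into equal regions and each region is pinned to one cache bank, so ports touching different regions never contend for the same bank. The region size and bank count are assumptions.
      #include <stdint.h>
      #include <stdio.h>

      #define NUM_BANKS   4
      #define REGION_BITS 20             /* 1 MiB address regions (assumed) */

      /* Each address region maps to exactly one cache bank. */
      static unsigned bank_for_address(uint64_t addr)
      {
          return (unsigned)((addr >> REGION_BITS) % NUM_BANKS);
      }

      int main(void)
      {
          uint64_t a = 0x0012345678ull, b = 0x0112345678ull;
          printf("addr A -> bank %u, addr B -> bank %u\n",
                 bank_for_address(a), bank_for_address(b));
          return 0;
      }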
  • Patent number: 8838896
    Abstract: The present patent application discloses a method and apparatus for using external and internal memory for cancelling traffic interference, comprising storing data samples in an external memory and processing the data samples in an internal memory, wherein the external memory is low-bandwidth memory and the internal memory is high-bandwidth on-board cache. The method and apparatus also comprise caching portions of the data on the internal memory, filling the internal memory by reading the newest data from the external memory and updating the internal memory, and writing the older data back to the external memory from the internal memory, wherein the data is incoming data samples.
    Type: Grant
    Filed: May 20, 2008
    Date of Patent: September 16, 2014
    Assignee: QUALCOMM Incorporated
    Inventors: Senthil Govindaswamy, Jeffrey A. Levin, Raghu Sagar Madala, Sharad Deepak Sambhwani
  • Patent number: 8819359
    Abstract: A memory system that interleaves storage of data across and within a plurality of memory modules is described. The memory system includes a hybrid interleaving mechanism which maps physical addresses to locations within memory modules and ranks so that physical addresses for a given page all map to the same memory module, and physical addresses for the given page are interleaved across the plurality of ranks which comprise that memory module.
    Type: Grant
    Filed: June 29, 2009
    Date of Patent: August 26, 2014
    Assignee: Oracle America, Inc.
    Inventors: Sanjiv Kapil, Blake Alan Jones
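    Illustrative sketch (not from the patent): a C decode of the hybrid interleave in the abstract above: all physical addresses of one page land in the same memory module, while the cache lines inside that page are spread across the ranks of that module. Page size, line size, and module/rank counts are assumptions.
      #include <stdint.h>
      #include <stdio.h>

      #define PAGE_BITS        12   /* 4 KiB pages         */
      #define LINE_BITS         6   /* 64-byte cache lines */
      #define NUM_MODULES       4
      #define RANKS_PER_MODULE  2

      static void decode(uint64_t pa, unsigned *module, unsigned *rank)
      {
          uint64_t page = pa >> PAGE_BITS;
          uint64_t line = pa >> LINE_BITS;
          *module = (unsigned)(page % NUM_MODULES);        /* whole page -> one module */
          *rank   = (unsigned)(line % RANKS_PER_MODULE);   /* lines alternate ranks    */
      }

      int main(void)
      {
          unsigned m, r;
          for (uint64_t pa = 0x1000; pa < 0x1100; pa += 0x40) {
              decode(pa, &m, &r);
              printf("pa=0x%llx module=%u rank=%u\n", (unsigned long long)pa, m, r);
          }
          return 0;
      }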
  • Patent number: 8819348
    Abstract: Provided is a method for uniquely masking addressing to the cache memory for each user, thereby reducing risk of a timing attack by one user on another user. The method comprises assigning a first mask value to the first user and a second mask value to the second user. The mask values are unique to one another. While executing a first instruction on behalf of the first user, the method comprises applying the first mask value to set selection bits in a memory address accessed by the first instruction. While executing a second instruction on behalf of the second user, the method comprises applying the second mask value to set selection bits in the memory address accessed by the second instruction. The result offers an additional level of security between users as well as reducing the occurrence of threads or processes contending for the same memory address.
    Type: Grant
    Filed: July 12, 2006
    Date of Patent: August 26, 2014
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Blaine D. Gaither, Benjamin D. Osecky
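    Illustrative sketch (not from the patent): a C sketch of the per-user index masking in the abstract above: each user gets a unique mask applied to the set-selection bits of every address issued on that user's behalf, so identical access patterns from two users land in different cache sets. The mask values, the XOR application, and the cache geometry are assumptions.
      #include <stdint.h>
      #include <stdio.h>

      #define LINE_BITS 6
      #define SET_BITS  10              /* 1024 sets assumed */

      static uint32_t set_index(uint32_t addr, uint32_t user_mask)
      {
          uint32_t raw = (addr >> LINE_BITS) & ((1u << SET_BITS) - 1u);
          return raw ^ user_mask;       /* apply the user's mask to the selection bits */
      }

      int main(void)
      {
          uint32_t addr = 0x0004F2C0u;
          printf("user1 set=%u user2 set=%u\n",
                 set_index(addr, 0x155), set_index(addr, 0x2AA));
          return 0;
      }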
  • Patent number: 8787145
    Abstract: A method and apparatus for de-interleaving interleaved data in a deinterleaver memory in an Orthogonal Frequency Division Multiplexing (OFDM) based Integrated Services Digital Broadcasting Terrestrial (ISDB-T) receiver. In different embodiments, the apparatus comprises an OFDM symbol counter along with a divider or a buffer pointer RAM with circular pointer logic, a first lookup table to obtain delay buffer size and interleaving lengths for a given OFDM transmission layer, and a second lookup table to obtain buffer base address and interleaving lengths for a given OFDM transmission layer.
    Type: Grant
    Filed: September 10, 2012
    Date of Patent: July 22, 2014
    Assignee: Newport Media, Inc.
    Inventor: Philip Treigherman
  • Publication number: 20140129775
    Abstract: Disclosed is a method of pre-fetching NFA instructions to an NFA cell array. The method and system fetch instructions for use in an L1 cache during NFA instruction execution. Successive instructions from a current active state are fetched and loaded in the L1 cache. Disclosed is a system comprising an external memory, a cache line fetcher, and an L1 cache where the L1 cache is accessible and searchable by an NFA cell array and where successive instructions from a current active state in the NFA are fetched from external memory in an atomic cache line manner into a plurality of banks in the L1 cache.
    Type: Application
    Filed: November 6, 2012
    Publication date: May 8, 2014
    Applicant: LSI CORPORATION
    Inventor: Michael Ruehle
  • Patent number: 8645628
    Abstract: Various embodiments of the present invention manage access to a cache memory. In one or more embodiments, a request for a targeted interleave within a cache memory is received. The request is associated with an operation of a given type. The request is granted in response to determining that the target is available. A first interleave availability table, associated with a first busy time of the cache memory, is updated based on the operation associated with the request in response to granting the request. A second interleave availability table, associated with a second busy time of the cache memory, is likewise updated based on the operation associated with the request in response to granting the request.
    Type: Grant
    Filed: June 24, 2010
    Date of Patent: February 4, 2014
    Assignee: International Business Machines Corporation
    Inventors: Deanna P. Berger, Michael F. Fee, Arthur J. O'Neill
  • Patent number: 8635409
    Abstract: A method of providing requests to a cache pipeline includes receiving a plurality of requests from one or more state machines at an arbiter; selecting one of the plurality of requests as a selected request, the selected request having been provided by a first state machine; determining that the selected request includes a mode that requires a first step and a second step, the first step including an access to a location in a cache; determining that the location in the cache is unavailable; and replacing the mode with a modified mode that only includes the second step.
    Type: Grant
    Filed: June 23, 2010
    Date of Patent: January 21, 2014
    Assignee: International Business Machines Corporation
    Inventors: Deanna Postles Dunn Berger, Michael F. Fee, Kenneth D. Klapproth, Robert J. Sonnelitter, III
  • Publication number: 20130339612
    Abstract: An apparatus generates test data for a cache memory that caches data in a cache line in accordance with a memory address. The apparatus generates a memory address to be accessed, data to be arranged in a storage area designated by the memory address, an access instruction for the memory address, and an expected value of the data that is to be cached in the cache memory when memory access is performed in accordance with the access instruction. The apparatus generates an address list including the first memory address, the access instruction, and the expected value, so that the address list is stored in a cache block that is cacheable in the cache memory by one memory access. The apparatus generates test data in which the address list and the data are arranged, so that the address list is cached in a different cache line from the data.
    Type: Application
    Filed: May 1, 2013
    Publication date: December 19, 2013
    Applicant: FUJITSU LIMITED
    Inventor: Shintarou SUZUKI
  • Patent number: 8612664
    Abstract: Memory management process for optimizing access to a central memory located within a processing system comprising a set of specific units communicating with each other through said memory, said process involving the steps of: a) arranging in a local memory at least a first and a second bank of storage (A, B) for the temporary storage of objects exchanged between a first data object producer (400) and a second data object consumer (410); b) arranging an address translation process for mapping the real address of an object to be stored within said banks into the address of the bank; c) receiving one object produced by said producer and dividing it into stripes of reduced size; d) storing the first stripe into said first bank; e) storing the next stripe into said second bank while the preceding stripe is read by said object consumer (410); f) storing the next stripe into said first bank again while the preceding stripe is read by said object consumer (410).
    Type: Grant
    Filed: December 29, 2009
    Date of Patent: December 17, 2013
    Assignee: ST-Ericsson SA
    Inventor: David Coupe
  • Patent number: 8560795
    Abstract: A hardware memory architecture or arrangement suited for multi-processor systems or arrays is disclosed. In one aspect, the memory arrangement includes at least one memory queue between a functional unit (e.g., computation unit) and at least one memory device, which the functional unit accesses (for write and/or read access).
    Type: Grant
    Filed: December 28, 2007
    Date of Patent: October 15, 2013
    Assignees: IMEC, Samsung Electronics Co., Ltd.
    Inventors: Bingfeng Mei, Suk Jin Kim, Osman Allam
  • Publication number: 20130232304
    Abstract: Accelerated interleaved memory data transfers in processor-based systems, and related devices, methods, and computer-readable media, are disclosed. Disclosed embodiments include accelerated large and small memory data transfers. As a non-limiting example, a large data transfer is a data transfer whose size is greater than the interleaved address block size provided in the interleaved memory; as another non-limiting example, a small data transfer is a data transfer whose size is less than the interleaved address block size provided in the interleaved memory.
    Type: Application
    Filed: March 4, 2013
    Publication date: September 5, 2013
    Applicant: QUALCOMM INCORPORATED
    Inventors: Terence J. Lohman, Brent L. DeGraaf, Gregory Allan Reid
  • Patent number: 8468410
    Abstract: An address generation apparatus for a quadratic permutation polynomial (QPP) interleaver receives several configurable parameters and uses a plurality of QPP units to compute and output in parallel a plurality of interleaving addresses according to a QPP function π(i) = (f1·i + f2·i²) mod k, where f1 and f2 are QPP coefficients, k is the information block length of an input sequence, 0 ≤ i ≤ k−1, and mod is the modulus operation. Each of the plurality of QPP units is a parallel computation unit and outputs in parallel a corresponding group of interleaver addresses, where π(i) is the i-th interleaving address generated by the apparatus.
    Type: Grant
    Filed: September 28, 2010
    Date of Patent: June 18, 2013
    Assignees: Industrial Technology Research Institute, National Chiao Tung University
    Inventors: Shuenn-Gi Lee, Chung Hsuan Wang, Wern-Ho Sheen
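    Illustrative sketch (not from the patent): a C evaluation of the stated QPP function π(i) = (f1·i + f2·i²) mod k, with the indices partitioned across a few "units" that could each emit their group of addresses in parallel. The grouping scheme, unit count, and example parameters (k = 40, f1 = 3, f2 = 10) are assumptions for illustration.
      #include <stdint.h>
      #include <stdio.h>

      #define NUM_UNITS 4   /* parallel QPP units, each emitting one group of addresses */

      static uint32_t qpp(uint32_t i, uint32_t f1, uint32_t f2, uint32_t k)
      {
          /* 64-bit intermediates keep f2*i*i from overflowing. */
          return (uint32_t)(((uint64_t)f1 * i + (uint64_t)f2 * i * i) % k);
      }

      int main(void)
      {
          const uint32_t k = 40, f1 = 3, f2 = 10;   /* example block parameters */
          for (uint32_t unit = 0; unit < NUM_UNITS; unit++) {
              printf("unit %u:", unit);
              /* Unit u handles indices u, u+NUM_UNITS, u+2*NUM_UNITS, ... */
              for (uint32_t i = unit; i < k; i += NUM_UNITS)
                  printf(" %u", qpp(i, f1, f2, k));
              printf("\n");
          }
          return 0;
      }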
  • Publication number: 20130145096
    Abstract: A method, system, and computer program product are disclosed for generating an ordered sequence from a predetermined sequence of symbols using protected interleaved caches, such as semaphore-protected interleaved caches. The approach commences by dividing the predetermined sequence of symbols into two or more interleaved caches, then mapping each of the two or more interleaved caches to a particular semaphore of a group of semaphores. The group of semaphores is organized into bytes or machine words for storing the group of semaphores into a shared memory, the shared memory being accessible by a plurality of session processes. Protected (serialized) access by the session processes is provided by granting access to one of the two or more interleaved caches only after one of the plurality of session processes performs a semaphore-altering read-modify-write operation (e.g., a CAS) on the particular semaphore.
    Type: Application
    Filed: December 1, 2011
    Publication date: June 6, 2013
    Applicant: ORACLE INTERNATIONAL CORPORATION
    Inventors: Fulu LI, Chern Yih CHEAH, Michael ZOLL
  • Publication number: 20130145097
    Abstract: An apparatus includes a cache memory that includes a state array configured to store state information. The state information includes a state indicating that updated data corresponding to a particular address of the cache memory is not stored in the cache memory but is available from at least one of multiple sources external to the cache memory, where at least one of the multiple sources is a store buffer.
    Type: Application
    Filed: December 5, 2011
    Publication date: June 6, 2013
    Applicant: QUALCOMM INCORPORATED
    Inventors: Ajay Anant Ingle, Lucian Codrescu
  • Patent number: 8443147
    Abstract: A memory interleave system for providing memory interleave for a heterogeneous computing system is provided. The memory interleave system effectively interleaves memory that is accessed by heterogeneous compute elements in different ways, such as via cache-block accesses by certain compute elements and via non-cache-block accesses by certain other compute elements. The heterogeneous computing system may comprise one or more cache-block oriented compute elements and one or more non-cache-block oriented compute elements that share access to a common main memory. The cache-block oriented compute elements access the memory via cache-block accesses (e.g., 64 bytes per access), while the non-cache-block oriented compute elements access memory via sub-cache-block accesses (e.g., 8 bytes per access).
    Type: Grant
    Filed: December 5, 2011
    Date of Patent: May 14, 2013
    Assignee: Convey Computer
    Inventors: Tony M. Brewer, Terrell Magee, J. Michael Andrewartha
  • Patent number: 8438434
    Abstract: Various embodiments relate to a memory device in a turbo decoder and a related method for allocating data into the memory device. Different communications standards use data blocks of varying sizes when enacting block decoding of concatenated convolutional codes. The memory device efficiently minimizes space while enabling a higher throughput of the turbo decoder by employing a plurality of memory banks of equal size. The number of memory banks may be limited by the amount of unused space in the memory banks, which may be a waste of area on an IC chip. Using the address associated with the maximum value of the data block, the memory may be split into a plurality of memory banks according to the most-significant bits of the maximum address, with a number of parallel SISO decoders matching the number of memory banks. This may enable higher throughput while minimizing area on the IC chip.
    Type: Grant
    Filed: December 30, 2009
    Date of Patent: May 7, 2013
    Assignee: NXP B.V.
    Inventor: Nur Engin
  • Patent number: 8370558
    Abstract: We describe a system and method to merge and align data from distributed memory controllers. A memory system includes a command bus to transmit a predetermined memory access command, and a memory interface to manipulate data from at least two memory channels, each memory channel corresponding to a portion of a distributed memory, responsive to the predetermined memory access command. The memory interface includes a plurality of memory controllers coupled to the command bus, each memory controller being operable to control a corresponding memory channel responsive to the predetermined memory access command, and a push arbiter coupled to each memory controller. The push arbiter is operable to merge and align data retrieved responsive to each split read align command.
    Type: Grant
    Filed: January 20, 2009
    Date of Patent: February 5, 2013
    Assignee: Intel Corporation
    Inventors: Rohit Natarajan, Sridhar Lakshmanamurthy, Chen-Chi Kuo
  • Publication number: 20130031309
    Abstract: A cache memory associated with a main memory and a processor capable of executing a dataflow processing task, includes a plurality of disjoint storage segments, each associated with a distinct data category. A first segment is dedicated to input data originating from a dataflow consumed by the processing task. A second segment is dedicated to output data originating from a dataflow produced by the processing task. A third segment is dedicated to global constants, corresponding to data available in a single memory location to multiple instances of the processing task.
    Type: Application
    Filed: October 5, 2012
    Publication date: January 31, 2013
    Applicant: Commissariat a l'Energie Atomique et aux Energies Alternatives
    Inventor: Commissariat a l'Energie Atomique et aux Energie
  • Patent number: 8359433
    Abstract: A method and system to facilitate full throughput operation of cache memory line split accesses in a device. By facilitating full throughput operation of cache memory line split accesses in the device, the device minimizes the performance and throughput loss associated with the handling of non-aligned cache memory accesses that cross two or more cache memory lines and/or page memory boundaries in one embodiment of the invention. When the device receives a non-aligned cache memory access request, the merge logic combines or merges the incoming data of a particular cache memory line from a data cache memory with the stored data of the preceding cache memory line of the particular cache memory line.
    Type: Grant
    Filed: August 17, 2010
    Date of Patent: January 22, 2013
    Assignee: Intel Corporation
    Inventor: Gad S. Sheaffer
  • Patent number: 8358573
    Abstract: A method and apparatus for de-interleaving interleaved data in a deinterleaver memory in an Orthogonal Frequency Division Multiplexing (OFDM) based Integrated Services Digital Broadcasting Terrestrial (ISDB-T) receiver. In different embodiments, the apparatus comprises an OFDM symbol counter along with a divider or a buffer pointer RAM with circular pointer logic, a first lookup table to obtain delay buffer size and interleaving lengths for a given OFDM transmission layer, and a second lookup table to obtain buffer base address and interleaving lengths for a given OFDM transmission layer.
    Type: Grant
    Filed: May 13, 2010
    Date of Patent: January 22, 2013
    Assignee: Newport Media, Inc.
    Inventor: Philip Treigherman
  • Patent number: 8352808
    Abstract: A data storage device receives write data and includes a controller configured to determine a characteristic of the write data and provide a first control signal in response to the determined characteristic, a randomizer configured to selectively randomize or not randomize the write data in response to the first control signal to thereby generate randomized write data, and a data storage unit configured to store the randomized write data.
    Type: Grant
    Filed: October 5, 2009
    Date of Patent: January 8, 2013
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yong June Kim, Jun Jin Kong, Jae Hong Kim, Kyoung Lae Cho
  • Publication number: 20120324168
    Abstract: A method for protecting an operation sequence executed by a portable data carrier from spying out, wherein the data carrier has at least a processor core, a main memory and a cache memory with a plurality of cache lines. The processor core is able to access, upon executing the operation sequence, at least two data values, with the data values occupying at least one cache line in the cache memory and being respectively divided into several portions so that the occurrence of a cache miss or a cache hit is independent of which data value is accessed. A computer program product and a device have corresponding features. The invention serves to thwart attacks based on an evaluation of the cache accesses during the execution of the operation sequence.
    Type: Application
    Filed: March 3, 2011
    Publication date: December 20, 2012
    Applicant: Giesecke & Devrient GmbH
    Inventor: Christof Rempel
  • Patent number: 8332701
    Abstract: An address generation apparatus for a quadratic permutation polynomial (QPP) interleaver is provided. It comprises a basic recursive unit and L recursive units, represented by a first recursive unit up to an Lth recursive unit. The apparatus inputs a plurality of configurable parameters according to a QPP function π(i) = (f1·i + f2·i²) mod k, generates a plurality of interleaver addresses serially via the basic recursive unit, and generates L groups of corresponding interleaver addresses via the first up to the Lth recursive units, where π(i) is the i-th interleaver address generated by the apparatus, f1 and f2 are QPP coefficients, and k is the information block length of an input sequence, 0 ≤ i ≤ k−1.
    Type: Grant
    Filed: December 25, 2009
    Date of Patent: December 11, 2012
    Assignees: Industrial Technology Research Institute, National Chiao Tung University
    Inventors: Shuenn-Gi Lee, Chung-Hsuan Wang, Wern-Ho Sheen
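    Illustrative sketch (not from the patent): a serial, recursion-based generation of the same QPP addresses, offered as one plausible reading of a "basic recursive unit". It uses the standard identity π(i+1) − π(i) = (f1 + f2·(2i+1)) mod k, so each next address needs only two modular additions instead of a multiplication. Parameter values are illustrative.
      #include <stdint.h>
      #include <stdio.h>

      int main(void)
      {
          const uint32_t k = 40, f1 = 3, f2 = 10;   /* example block parameters   */
          uint32_t pi = 0;                          /* pi(0) = 0                  */
          uint32_t g  = (f1 + f2) % k;              /* pi(i+1) - pi(i) at i = 0   */
          const uint32_t step = (2 * f2) % k;       /* increment of g per index   */

          for (uint32_t i = 0; i < k; i++) {
              printf("%u ", pi);
              pi = (pi + g) % k;                    /* next interleaver address   */
              g  = (g + step) % k;
          }
          printf("\n");
          return 0;
      }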
  • Patent number: 8326895
    Abstract: A computer readable storage medium for associating a phase with an activation of a computer program that supports garbage collection includes: a plurality of stacks, each stack including at least one stack frame that includes an activation count; and a processor with logic for performing steps of: zeroing the activation count whenever the program creates a new stack frame and after garbage collection is performed; determining whether an interval has transpired during program execution; examining each stack frame's content and incrementing the activation count for each frame of the stacks once the interval has transpired; detecting the phase whose activation count is non-zero and associating the phase with the activation; and ensuring that when the phase ends, an action is immediately performed.
    Type: Grant
    Filed: May 22, 2010
    Date of Patent: December 4, 2012
    Assignee: International Business Machines Corporation
    Inventors: Stephen J Fink, David P. Grove
  • Patent number: 8321634
    Abstract: A processor may operate in one of a plurality of operating states. In a Normal operating state, the processor is not involved with a memory transaction. Upon receipt of a transaction instruction to access a memory location, the processor transitions to a Transaction operating state. In the Transaction operating state, the processor performs changes to a cache line and data associated with the memory location. While in the Transaction operating state, any changes to the data and the cache line are not visible to other processors in the computing system. These changes become visible upon the processor entering a Commit operating state in response to receipt of a commit instruction. After changes become visible, the processor returns to the Normal operating state. If an abort event occurs prior to receipt of the commit instruction, the processor transitions to an Abort operating state where any changes to the data and cache line are discarded.
    Type: Grant
    Filed: April 11, 2011
    Date of Patent: November 27, 2012
    Assignee: Silicon Graphics International Corp.
    Inventors: Steven C. Miller, Martin M. Deneroff, Curt F. Schimmel, Larry Rudolph, Charles E. Leiserson, Bradley C. Kuszmaul, Krste Asanovic
  • Patent number: 8281106
    Abstract: A processor includes at least one execution unit that executes instructions, at least one register file, coupled to the at least one execution unit, that buffers operands for access by the at least one execution unit, and an instruction sequencing unit that fetches instructions for execution by the execution unit. The processor further includes an operand data structure and an address generation accelerator. The operand data structure specifies a first relationship between addresses of sequential accesses within a first address region and a second relationship between addresses of sequential accesses within a second address region. The address generation accelerator computes a first address of a first memory access in the first address region by reference to the first relationship and a second address of a second memory access in the second address region by reference to the second relationship.
    Type: Grant
    Filed: December 16, 2008
    Date of Patent: October 2, 2012
    Assignee: International Business Machines Corporation
    Inventors: Ravi K. Arimilli, Balaram Sinharoy
  • Patent number: 8281052
    Abstract: A microprocessor system having a microprocessor and a double data rate memory device having separate groups of external pins adapted to receive addressing, data, and control information and a memory controller adapted to set a burst type of the double data rate memory to interleaved or sequential by sending a signal through one of the external pins of the double data rate memory device, such that when a read command is sent by the controller, depending on the burst type set, the double data rate memory device returns interleaved or sequentially output data to the memory controller.
    Type: Grant
    Filed: April 9, 2012
    Date of Patent: October 2, 2012
    Assignee: Round Rock Research, LLC
    Inventor: Christopher S. Johnson
  • Patent number: 8272059
    Abstract: A system and method to protect web applications from malicious attacks and, in particular, a system and method for identification and blocking of malicious code for web browser script engines. The system includes at least one module configured to protect web applications from malicious attacks by detecting an occurrence of heap spraying and blocking the occurrence of heap spraying.
    Type: Grant
    Filed: May 28, 2008
    Date of Patent: September 18, 2012
    Assignee: International Business Machines Corporation
    Inventor: Robert G. Freeman