Multiport Cache Patents (Class 711/131)
  • Patent number: 10481867
    Abstract: A data input/output unit is provided. The data input/output unit, which is connected to a processor and receives and outputs data in sequence based on a first schedule, includes a first-in first-out (FIFO) memory connected to an external unit and the processor, and a reordering buffer connected to one side of the FIFO memory. The reordering buffer stores data output from, or input to, the FIFO memory in a plurality of buffer regions in sequence, and outputs data stored in one of the plurality of buffer regions based on a control signal provided from the processor. (A code sketch of the reordering idea follows this entry.)
    Type: Grant
    Filed: October 6, 2017
    Date of Patent: November 19, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jae-un Park, Jong-hun Lee, Ki-seok Kwon, Dong-kwan Suh, Kang-jin Yoon, Jung-uk Cho
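    Illustrative sketch (not from the patent): a minimal C model of the reordering idea in patent 10481867, where words leaving a FIFO are staged into numbered buffer regions in sequence and a control value from the processor selects which region is drained next. The names and sizes (reorder_buf, NUM_REGIONS, REGION_WORDS) are assumptions for illustration only.
      #include <stdio.h>

      #define NUM_REGIONS  4
      #define REGION_WORDS 8

      /* Reordering buffer: data arriving in FIFO order is staged into
       * successive regions; a control signal picks which region drains next. */
      typedef struct {
          int data[NUM_REGIONS][REGION_WORDS];
          int fill[NUM_REGIONS];   /* words currently held in each region */
          int wr_region;           /* region being filled in sequence     */
      } reorder_buf;

      /* Stage one word coming out of the FIFO into the current region. */
      static void rob_push(reorder_buf *rb, int word) {
          rb->data[rb->wr_region][rb->fill[rb->wr_region]++] = word;
          if (rb->fill[rb->wr_region] == REGION_WORDS)
              rb->wr_region = (rb->wr_region + 1) % NUM_REGIONS;
      }

      /* Drain the region selected by the processor's control signal. */
      static void rob_drain(reorder_buf *rb, int ctrl_region) {
          for (int i = 0; i < rb->fill[ctrl_region]; i++)
              printf("%d ", rb->data[ctrl_region][i]);
          rb->fill[ctrl_region] = 0;
          printf("\n");
      }

      int main(void) {
          reorder_buf rb = {0};
          for (int w = 0; w < 16; w++)  /* 16 words fill regions 0 and 1 */
              rob_push(&rb, w);
          rob_drain(&rb, 1);            /* control signal picks region 1 first */
          rob_drain(&rb, 0);            /* then region 0: output is reordered  */
          return 0;
      }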
  • Patent number: 10360031
    Abstract: Fast unaligned memory access. In accordance with a first embodiment of the present invention, a computing device includes a load queue memory structure configured to queue load operations and a store queue memory structure configured to queue store operations. The computing device also includes at least one bit configured to indicate the presence of an unaligned address component for an entry of said load queue memory structure, and at least one bit configured to indicate the presence of an unaligned address component for an entry of said store queue memory structure. The load queue memory may also include memory configured to indicate data forwarding of an unaligned address component from said store queue memory structure to said load queue memory structure. (A code sketch of these queue-entry flags follows this entry.)
    Type: Grant
    Filed: October 21, 2011
    Date of Patent: July 23, 2019
    Assignee: Intel Corporation
    Inventors: Mandeep Singh, Mohammad Abdallah
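    A hedged C sketch of the bookkeeping in patent 10360031: each load/store queue entry carries a bit marking an unaligned address component, and a load entry can additionally note that the unaligned component was forwarded from the store queue. The struct layout and field names are assumptions, not the patent's encoding.
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define CACHE_LINE 64

      /* Store-queue entry: 'unaligned' flags an access whose address component
       * straddles an alignment (here, cache-line) boundary. */
      typedef struct { uint64_t addr; uint8_t size; bool unaligned; } store_entry;

      /* Load-queue entry: also records that the unaligned component was
       * forwarded from the store queue rather than read from the cache. */
      typedef struct { uint64_t addr; uint8_t size; bool unaligned; bool fwd_from_stq; } load_entry;

      static bool crosses_boundary(uint64_t addr, uint8_t size) {
          return (addr / CACHE_LINE) != ((addr + size - 1) / CACHE_LINE);
      }

      int main(void) {
          store_entry st = { 0x103E, 8, false };
          load_entry  ld = { 0x103E, 8, false, false };
          st.unaligned = crosses_boundary(st.addr, st.size);
          ld.unaligned = crosses_boundary(ld.addr, ld.size);
          /* Older store to the same unaligned address: note the forwarding. */
          ld.fwd_from_stq = ld.unaligned && st.unaligned && st.addr == ld.addr;
          printf("load unaligned=%d forwarded=%d\n", ld.unaligned, ld.fwd_from_stq);
          return 0;
      }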
  • Patent number: 10261705
    Abstract: Data verification includes obtaining a logical block address (LBA), which is associated with a data block of a file, to be verified. Data verification further includes reading, from a solid state drive (SSD) comprising one or more flash storage elements, data content that corresponds to the LBA. Data verification further includes determining whether an access latency associated with the reading of the data content exceeds a threshold. Data verification further includes, in the event that the access latency does not exceed the threshold, evaluating the data content to determine whether it is consistently stored in a physical memory included in the SSD. Data verification further includes, in the event that the data content is determined not to be consistently stored in the physical memory included in the SSD, recording an indication indicating that the LBA is not successfully verified.
    Type: Grant
    Filed: December 15, 2016
    Date of Patent: April 16, 2019
    Assignee: Alibaba Group Holding Limited
    Inventor: Shu Li
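    A simplified C sketch of the verification flow in patent 10261705: read the block for an LBA, check the access latency against a threshold, and only compare the content when the read was fast enough. The ssd_read_block stand-in and the 2 ms threshold are assumptions; a real implementation would issue SSD I/O and record an "unverified" indication for the LBA.
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>
      #include <time.h>

      #define BLOCK_SIZE 4096
      #define LATENCY_THRESHOLD_NS 2000000L   /* 2 ms, an arbitrary illustrative threshold */

      /* Stand-in for the device read; a real implementation would issue an I/O for the LBA. */
      static int ssd_read_block(uint64_t lba, uint8_t *buf) {
          (void)lba;
          memset(buf, 0xA5, BLOCK_SIZE);
          return 0;
      }

      /* Verify one LBA: measure the read latency first, and only compare the
       * content when the read was fast enough to be trusted for verification. */
      static bool verify_lba(uint64_t lba, const uint8_t *expected) {
          uint8_t buf[BLOCK_SIZE];
          struct timespec t0, t1;

          clock_gettime(CLOCK_MONOTONIC, &t0);
          if (ssd_read_block(lba, buf) != 0)
              return false;                                   /* read failed */
          clock_gettime(CLOCK_MONOTONIC, &t1);

          long ns = (t1.tv_sec - t0.tv_sec) * 1000000000L + (t1.tv_nsec - t0.tv_nsec);
          if (ns > LATENCY_THRESHOLD_NS)
              return false;               /* latency too high: skip the content check */

          return memcmp(buf, expected, BLOCK_SIZE) == 0;      /* consistency check */
      }

      int main(void) {
          uint8_t expected[BLOCK_SIZE];
          memset(expected, 0xA5, sizeof expected);
          printf("LBA 42 verified: %d\n", verify_lba(42, expected));
          return 0;
      }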
  • Patent number: 10209991
    Abstract: A system and method for reducing latencies of main memory data accesses are described. A non-blocking load (NBLD) instruction identifies an address of requested data and a subroutine. The subroutine includes instructions dependent on the requested data. A processing unit verifies that address translations are available for both the address and the subroutine. The processing unit continues processing instructions with no stalls caused by younger-in-program-order instructions waiting for the requested data. The non-blocking load unit performs a cache coherent data read request on behalf of the NBLD instruction and requests that the processing unit perform an asynchronous jump to the subroutine upon return of the requested data from lower-level memory.
    Type: Grant
    Filed: November 16, 2016
    Date of Patent: February 19, 2019
    Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Meenakshi Sundaram Bhaskaran, Elliot H. Mednick, David A. Roberts, Anthony Asaro, Amin Farmahini-Farahani
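    The NBLD mechanism in patent 10209991 pairs a load address with a subroutine that consumes the data once it arrives, so younger instructions never stall on it. The callback-based C sketch below is only a software analogy of that pairing, not the hardware mechanism the patent describes.
      #include <stdio.h>

      /* Software analogy of a non-blocking load: the request names both the
       * data address and the dependent "subroutine" to run when data returns. */
      typedef void (*nbld_handler)(int value);

      typedef struct {
          const int   *addr;      /* address of the requested data           */
          nbld_handler handler;   /* code that depends on the requested data */
      } nbld_request;

      /* Issue side: record the request and keep executing independent work. */
      static void nbld_issue(nbld_request *req, const int *addr, nbld_handler h) {
          req->addr = addr;
          req->handler = h;
      }

      /* Completion side: when the memory system returns the data, "jump" to
       * the dependent subroutine (modeled here as a plain call). */
      static void nbld_complete(const nbld_request *req) {
          req->handler(*req->addr);
      }

      static void consume(int value) { printf("dependent work sees %d\n", value); }

      int main(void) {
          int memory_cell = 123;
          nbld_request req;
          nbld_issue(&req, &memory_cell, consume);
          printf("independent work continues without stalling\n");
          nbld_complete(&req);   /* models the asynchronous jump on data return */
          return 0;
      }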
  • Patent number: 10067713
    Abstract: In a data processing system implementing a weak memory model, a lower level cache receives, from a processor core, a plurality of copy-type requests and a plurality of paste-type requests that together indicate a memory move to be performed. The lower level cache also receives, from the processor core, a barrier request that requests enforcement of ordering of memory access requests prior to the barrier request with respect to memory access requests after the barrier request. In response to the barrier request, the lower level cache enforces a barrier indicated by the barrier request with respect to a final paste-type request ending the memory move but not with respect to other copy-type requests and paste-type requests in the memory move.
    Type: Grant
    Filed: August 22, 2016
    Date of Patent: September 4, 2018
    Assignee: International Business Machines Corporation
    Inventors: Bradly G. Frey, Guy L. Guthrie, Cathy May, William J. Starke, Derek E. Williams
  • Patent number: 10013270
    Abstract: Embodiments relate to application-level initiation of processor parameter adjustment. An aspect includes receiving, by a hypervisor in a computer system from an application running on the computer system, a request to adjust an operating parameter of a processor of the computer system. Another aspect includes determining an adjusted value for the operating parameter during execution of the application by the hypervisor. Another aspect includes setting the operating parameter in a parameter register of the processor to the adjusted value by the hypervisor. Yet another aspect includes executing the application according to the parameter register of the processor.
    Type: Grant
    Filed: December 3, 2015
    Date of Patent: July 3, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Giles R. Frazier, Michael Karl Gschwind
  • Patent number: 9910857
    Abstract: Methods and systems for data management are disclosed. With embodiments of the present disclosure, data files originating from the same source data can be de-duplicated. One such method comprises calculating one or more of a first characteristic value for first data in a first format, and one or more second characteristic values for one or more data in one or more second formats into which the first data can be converted, said characteristic value uniquely representing an arrangement characteristic of at least part of bits of data in a particular format. The method also includes storing one of the first data and the second data in response to one of the calculated characteristic values being the same as a stored characteristic value corresponding to a second data.
    Type: Grant
    Filed: April 28, 2014
    Date of Patent: March 6, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Peng Hui Jiang, Pi Jun Jiang, Xi Ning Wang, Liang Xue, Wen Yin
  • Patent number: 9842047
    Abstract: A storage device controller addresses consecutively-addressed portions of incoming data to consecutive data tracks on a storage medium and writes the consecutively-addressed portions to the consecutive data tracks in a non-consecutive track order. In one implementation, the storage device controller reads the data back from the consecutive data tracks in a consecutive address order in a single sequential read operation.
    Type: Grant
    Filed: April 16, 2015
    Date of Patent: December 12, 2017
    Assignee: SEAGATE TECHNOLOGY LLC
    Inventors: Kaizhong Gao, Wenzhong Zhu, Tim Rausch, Edward Gage
  • Patent number: 9753858
    Abstract: A system and method for efficient cache data access in a large row-based memory of a computing system. A computing system includes a processing unit and an integrated three-dimensional (3D) dynamic random access memory (DRAM). The processing unit uses the 3D DRAM as a cache. Each row of the multiple rows in the memory array banks of the 3D DRAM stores at least multiple cache tags and multiple corresponding cache lines indicated by the multiple cache tags. In response to receiving a memory request from the processing unit, the 3D DRAM performs a memory access according to the received memory request on a given cache line indicated by a cache tag within the received memory request. Rather than utilizing multiple DRAM transactions, a single, complex DRAM transaction may be used to reduce latency and power consumption.
    Type: Grant
    Filed: November 30, 2011
    Date of Patent: September 5, 2017
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Gabriel H. Loh, Mark D. Hill
  • Patent number: 9612833
    Abstract: Technologies are presented that optimize data processing cost and efficiency. A computing system may comprise at least one processing element; a memory communicatively coupled to the at least one processing element; at least one compressor-decompressor communicatively coupled to the at least one processing element, and communicatively coupled to the memory through a memory interface; and a cache fabric comprising a plurality of distributed cache banks communicatively coupled to each other, to the at least one processing element, and to the at least one compressor-decompressor via a plurality of nodes. In this system, the at least one compressor-decompressor and the cache fabric are configured to manage and track uncompressed data of variable length for data requests by the processing element(s), allowing usage of compressed data in the memory.
    Type: Grant
    Filed: February 28, 2014
    Date of Patent: April 4, 2017
    Assignee: Intel Corporation
    Inventors: Altug Koker, Hong Jiang, James M. Holland
  • Patent number: 9600183
    Abstract: Techniques and mechanisms for determining comparison information at a memory device. In an embodiment, the memory device receives from a memory controller signals that include or otherwise indicate an address corresponding to a memory location of the memory device. Where it is determined that the signals indicate a compare operation, the memory device retrieves data stored at the memory location, and performs a comparison of the data to a reference data value that is included in or otherwise indicated by the received signals. The memory device sends to the memory controller information representing a result of the comparison. In another embodiment, a memory controller provides signals to control a compare operation by such a memory device.
    Type: Grant
    Filed: September 22, 2014
    Date of Patent: March 21, 2017
    Assignee: Intel Corporation
    Inventors: Shigeki Tomishima, Shih-Lien L. Lu
  • Patent number: 9454482
    Abstract: An apparatus for processing cache requests in a computing system is disclosed. The apparatus may include a single-port memory, a dual-port memory, and a control circuit. The single-port memory may be configured to store tag information associated with a cache memory, and the dual-port memory may be configured to store state information associated with the cache memory. The control circuit may be configured to receive a request which includes a tag address, and to access the tag and state information stored in the single-port memory and the dual-port memory, respectively, dependent upon the received tag address. A determination of whether the data associated with the received tag address is contained in the cache memory may be made by the control circuit, and the control circuit may update and store state information in the dual-port memory responsive to the determination. (A code sketch of this tag/state split follows this entry.)
    Type: Grant
    Filed: June 27, 2013
    Date of Patent: September 27, 2016
    Assignee: Apple Inc.
    Inventors: Harshavardhan Kaushikkar, Muditha Kanchana, Odutola O. Ewedemi
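    A toy C model of the split described in patent 9454482: tag lookups are served from storage modeled on a single-port array, while per-line state is kept in a separately indexed structure modeled on a dual-port array so the old state can be read and the new state written for the same request. Set count, line size, and field names are assumptions.
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define NUM_SETS 256

      /* Tag array: modeled after a single-port SRAM (one access per cycle). */
      static uint32_t tag_array[NUM_SETS];

      /* State array: modeled after a dual-port SRAM so the controller can read
       * the old state and write the updated state for the same request. */
      typedef struct { bool valid; bool dirty; } line_state;
      static line_state state_array[NUM_SETS];

      /* Handle one request: look up the tag, decide hit/miss, update state. */
      static bool cache_lookup(uint32_t addr, bool is_write) {
          uint32_t set = (addr / 64) % NUM_SETS;
          uint32_t tag = addr / (64 * NUM_SETS);

          bool hit = state_array[set].valid && tag_array[set] == tag;  /* tag port read */
          line_state next = state_array[set];                          /* state port 0  */
          if (!hit) { next.valid = true; next.dirty = false; tag_array[set] = tag; }
          if (is_write) next.dirty = true;
          state_array[set] = next;                                     /* state port 1  */
          return hit;
      }

      int main(void) {
          printf("first access hit=%d\n", cache_lookup(0x4000, false));
          printf("second access hit=%d\n", cache_lookup(0x4000, true));
          return 0;
      }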
  • Patent number: 9454492
    Abstract: One method includes streaming a data segment to a write buffer corresponding to a virtual page including at least two physical pages. Each physical page is defined within a respective solid-state storage element. The method also includes programming contents of the write buffer to the virtual page, such that a first portion of the data segment is programmed to a first one of the physical pages, and a second portion of the data segment is programmed to a second one of the physical pages.
    Type: Grant
    Filed: December 28, 2012
    Date of Patent: September 27, 2016
    Assignee: LONGITUDE ENTERPRISE FLASH S.A.R.L.
    Inventors: David Flynn, Bert Lagerstedt, John Strasser, Jonathan Thatcher, John Walker, Michael Zappe
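    A small C sketch of the programming step in patent 9454492: a data segment streamed into a write buffer is split so that its first portion lands in the physical page of one solid-state storage element and the remainder in the page of a second element, the two pages together forming the virtual page. The tiny page size and names are illustrative only.
      #include <stdio.h>
      #include <string.h>

      #define PAGE_SIZE 8   /* tiny pages so the split is easy to see */

      /* A virtual page spanning two physical pages on two storage elements. */
      typedef struct {
          char element0_page[PAGE_SIZE];
          char element1_page[PAGE_SIZE];
      } virtual_page;

      /* Program the write-buffer contents into the virtual page: the first portion
       * goes to element 0's page, the second portion to element 1's page.
       * Assumes len <= 2 * PAGE_SIZE. */
      static void program_virtual_page(virtual_page *vp, const char *buf, size_t len) {
          size_t first = len < PAGE_SIZE ? len : PAGE_SIZE;
          memcpy(vp->element0_page, buf, first);
          memcpy(vp->element1_page, buf + first, len - first);
      }

      int main(void) {
          virtual_page vp = {{0}, {0}};
          const char segment[] = "ABCDEFGHIJKLMN";       /* 14-byte data segment */
          program_virtual_page(&vp, segment, strlen(segment));
          printf("element 0 page: %.8s\n", vp.element0_page);
          printf("element 1 page: %.8s\n", vp.element1_page);
          return 0;
      }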
  • Patent number: 9372798
    Abstract: A data processing apparatus (2) comprises a first protocol domain A configured to operate under a write progress protocol and a second protocol domain B configured to operate under a snoop progress protocol. A deadlock condition is detected if a write target address for a pending write request issued from the first domain A to the second domain B is the same as a snoop target address of a pending snoop request issued from the second domain B to the first domain A. When the deadlock condition is detected, a bridge (4) between the domains may issue an early response to a selected one of the deadlocked write and snoop requests without waiting for the selected request to be serviced. The early response indicates to the domain that issued the selected request that the selected request has been serviced, enabling the other request to be serviced by the issuing domain. (A code sketch of the deadlock check follows this entry.)
    Type: Grant
    Filed: March 2, 2012
    Date of Patent: June 21, 2016
    Assignee: ARM Limited
    Inventors: William Henry Flanders, Ramamoorthy Guru Prasadh, Ashok Kumar Tummala, Jamshed Jalal, Phanindra Kumar Mannava
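    A behavioral C sketch of the deadlock handling in patent 9372798: the bridge compares the address of the pending write headed from domain A to B with the address of the pending snoop headed from B to A, and on a match issues an early response for one selected request so the other can make progress. Protocol details are not modeled.
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      typedef struct { bool pending; uint64_t addr; } request;

      /* Bridge state: one outstanding write from domain A to B, and one
       * outstanding snoop from domain B to A. */
      typedef struct {
          request write_a_to_b;
          request snoop_b_to_a;
      } bridge;

      /* Deadlock is detected when both requests are pending to the same address. */
      static bool deadlock(const bridge *br) {
          return br->write_a_to_b.pending && br->snoop_b_to_a.pending &&
                 br->write_a_to_b.addr == br->snoop_b_to_a.addr;
      }

      int main(void) {
          bridge br = { {true, 0x80001000}, {true, 0x80001000} };
          if (deadlock(&br)) {
              /* Issue an early response for one selected request (here, the snoop)
               * without waiting for it to actually be serviced, which unblocks
               * the write and breaks the cycle. */
              br.snoop_b_to_a.pending = false;
              printf("early snoop response issued; write can now proceed\n");
          }
          return 0;
      }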
  • Patent number: 9335947
    Abstract: Embodiments relate to an inter-processor memory. An aspect includes a plurality of memory banks, each of the plurality of memory banks comprising a respective plurality of parallel memory modules, wherein a number of the plurality of memory banks is equal to a number of read ports of the inter-processor memory, and a number of parallel memory modules within a memory bank is equal to a number of write ports of the inter-processor memory. Another aspect includes each memory bank corresponding to a single respective read port of the inter-processor memory, and wherein, within each memory bank, each memory module of the plurality of parallel memory modules is writable in parallel by a single respective write port of the inter-processor memory.
    Type: Grant
    Filed: June 30, 2014
    Date of Patent: May 10, 2016
    Assignee: RAYTHEON COMPANY
    Inventors: Pen C. Chien, Frank N. Cheung, Kuan Y. Huang
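    A C model of the organization in patent 9335947: with two read ports and three write ports (arbitrary numbers), the memory is built from two banks of three parallel modules each, every write is replicated into its own module in every bank, and each read port reads only its own bank. The last_writer table used to pick the live module on a read is an assumption added to make the sketch work; it is not described in the abstract.
      #include <stdint.h>
      #include <stdio.h>

      #define READ_PORTS  2          /* number of banks                    */
      #define WRITE_PORTS 3          /* number of modules inside each bank */
      #define DEPTH       16         /* words per module                   */

      /* mem[bank][module][addr]: bank r serves read port r; module w inside
       * every bank is written only by write port w, so writes never collide. */
      static uint32_t mem[READ_PORTS][WRITE_PORTS][DEPTH];

      /* Tracks which write port wrote an address most recently, so a read
       * port knows which module inside its bank holds the live copy. */
      static int last_writer[DEPTH];

      static void write_port(int w, int addr, uint32_t value) {
          for (int bank = 0; bank < READ_PORTS; bank++)  /* replicate to every bank */
              mem[bank][w][addr] = value;
          last_writer[addr] = w;
      }

      static uint32_t read_port(int r, int addr) {
          return mem[r][last_writer[addr]][addr];        /* read only from bank r */
      }

      int main(void) {
          write_port(0, 5, 111);     /* write ports 0 and 2 can write in parallel */
          write_port(2, 7, 222);
          printf("read port 0, addr 5 -> %u\n", read_port(0, 5));
          printf("read port 1, addr 7 -> %u\n", read_port(1, 7));
          return 0;
      }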
  • Patent number: 9299429
    Abstract: A nonvolatile memory device includes a buffer memory, a read circuit configured to read first data stored in the buffer memory in a first read operation, and a write circuit configured to write second data in the buffer memory in a first write operation, wherein the first write operation is performed when a first internal write command is generated during the first read operation.
    Type: Grant
    Filed: June 27, 2014
    Date of Patent: March 29, 2016
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Yong-Jun Lee, Hoi-Ju Chung, Yong-Jin Kwon, Hyo-Jin Kwon, Eun-Hye Park
  • Patent number: 9202552
    Abstract: Dual port static random access memory (SRAM) bitcell structures with improved symmetry in access transistor physical placement are provided. The bitcell structures may include, for example, two pairs of parallel pull-down transistors. The bitcell structures may also include pass-gate transistors PGLA and PGRA forming a first port, and pass-gate transistors PGLB and PGRB forming a second port. The pass-gate transistors PGLA and PGLB may be adjacent one another and a first side of the bitcell structure, and pass-gate transistors PGRA and PGRB may be adjacent one another and a second side of the bitcell structure. Each of the pass-gate transistors PGLA and PGLB may be connected with one of the pull-down transistors of one of the pairs of parallel pull-down transistors. Similarly, each of the pass-gate transistors PGRA and PGRB may be connected with one of the pull-down transistors of the other pair of parallel pull-down transistors.
    Type: Grant
    Filed: December 13, 2013
    Date of Patent: December 1, 2015
    Assignee: GLOBALFOUNDRIES INC.
    Inventors: Bipul C. Paul, Randy W. Mann, Sangmoon J. Kim
  • Patent number: 9128850
    Abstract: A multi-ported memory that supports multiple read and write accesses is described. The multi-ported memory may include a number of read/write ports that is greater than the number of read/write ports of each memory bank of the multi-ported memory. The multi-ported memory allows for read operation(s) and write operation(s) to be received during the same clock cycle. In the event that an incoming write operation is blocked by read operation(s), data for that write operation may be stored in one of a plurality of cache banks included in the multi-ported memory. The cache banks are accessible to both write and read operations. In the event that the write operation is not blocked by read operation(s), a determination is made as to whether data for that incoming write operation is stored in the memory bank targeted by that incoming write operation or in one of the cache banks. (A code sketch of this write handling follows this entry.)
    Type: Grant
    Filed: December 17, 2012
    Date of Patent: September 8, 2015
    Assignee: Broadcom Corporation
    Inventors: Weihuang Wang, Chien-Hsien Wu
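    A behavioral C sketch of the conflict handling in patent 9128850: a write whose memory bank is blocked by reads in the same cycle parks its data in a cache slot visible to both readers and writers; otherwise the write (and every read) first checks whether the address currently lives in a cache slot or in the bank. Slot counts and names are assumptions.
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define BANK_WORDS  64
      #define CACHE_SLOTS 4

      static uint32_t bank[BANK_WORDS];                  /* one single-port memory bank */
      static struct { bool used; int addr; uint32_t data; } cache[CACHE_SLOTS];

      /* Write path: an address already parked in a cache slot is updated there;
       * otherwise, if the bank is blocked by reads this cycle, the data is parked
       * in a free cache slot (a real design would stall if none is free). */
      static void do_write(int addr, uint32_t data, bool bank_blocked_by_read) {
          for (int i = 0; i < CACHE_SLOTS; i++)
              if (cache[i].used && cache[i].addr == addr) { cache[i].data = data; return; }
          if (bank_blocked_by_read)
              for (int i = 0; i < CACHE_SLOTS; i++)
                  if (!cache[i].used) {
                      cache[i].used = true; cache[i].addr = addr; cache[i].data = data;
                      return;
                  }
          bank[addr] = data;
      }

      /* Read path: cache slots are visible to reads as well as writes. */
      static uint32_t do_read(int addr) {
          for (int i = 0; i < CACHE_SLOTS; i++)
              if (cache[i].used && cache[i].addr == addr) return cache[i].data;
          return bank[addr];
      }

      int main(void) {
          do_write(10, 0xAA, true);   /* bank busy with reads: data parked in a cache slot */
          do_write(11, 0xBB, false);  /* bank free: data written straight to the bank      */
          printf("%x %x\n", do_read(10), do_read(11));
          return 0;
      }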
  • Patent number: 9063794
    Abstract: A computer system includes: a main storage unit; a processing executing unit sequentially executing processing to be executed on virtual processors; a level-1 cache memory shared among the virtual processors; a level-2 cache memory including storage areas partitioned based on the number of the virtual processors, the storage areas each (i) corresponding to one of the virtual processors and (ii) holding the data to be used by the corresponding one of the virtual processors; a context memory holding a context item corresponding to the virtual processor; a virtual processor control unit saving and restoring a context item of one of the virtual processors; a level-1 cache control unit; and a level-2 cache control unit.
    Type: Grant
    Filed: October 4, 2012
    Date of Patent: June 23, 2015
    Assignee: SOCIONEXT INC.
    Inventors: Teruyuki Morita, Yoshihiro Koga, Kouji Nakajima
  • Patent number: 9043489
    Abstract: A method begins by a router receiving data for storage and interpreting the data to determine whether the data is to be forwarded or error encoded. The method continues with the router obtaining a routing table that includes a plurality of routing options when the data is to be error encoded. Next, the method continues with the router selecting a routing option from the plurality of routing options and determining error coding dispersal storage function parameters based on the routing option. Next, the method continues with the router encoding the data based on the error coding dispersal storage function parameters to produce a plurality of sets of encoded data slices. Next, the method continues with the router outputting at least some of the encoded data slices of a set of the plurality of sets of encoded data slices to an entry point of the routing option.
    Type: Grant
    Filed: August 4, 2010
    Date of Patent: May 26, 2015
    Assignee: Cleversafe, Inc.
    Inventors: Gary W. Grube, Timothy W. Markison
  • Patent number: 8977800
    Abstract: Provided is a multi-port cache memory apparatus and a method of the multi-port cache memory apparatus. The multi-port cache memory apparatus may divide an address space into address regions and allocate the divided address regions to cache banks, thereby preventing the concentration of access to a particular cache bank. (A code sketch of this region-to-bank mapping follows this entry.)
    Type: Grant
    Filed: January 31, 2012
    Date of Patent: March 10, 2015
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Moo-Kyoung Chung, Soo-Jung Ryu, Ho-Young Kim, Woong Seo, Young-Chul Cho
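    A minimal C sketch of the partitioning in patent 8977800 (the same abstract appears under publication 20120221797 below): the address space is divided into equal regions and each region is statically allocated to its own cache bank, so traffic to different regions cannot concentrate on one bank. The region count and the contiguous mapping are assumptions.
      #include <stdint.h>
      #include <stdio.h>

      #define NUM_BANKS   4
      #define ADDR_BITS   32
      /* Divide the 32-bit address space into NUM_BANKS contiguous regions. */
      #define REGION_SIZE (1u << (ADDR_BITS - 2))   /* 2^32 / 4 */

      /* Each address region is statically allocated to one cache bank, so
       * traffic to different regions never concentrates on the same bank. */
      static int bank_for_address(uint32_t addr) {
          return (int)(addr / REGION_SIZE);
      }

      int main(void) {
          uint32_t samples[] = { 0x00001000u, 0x40002000u, 0x80003000u, 0xC0004000u };
          for (int i = 0; i < 4; i++)
              printf("addr 0x%08X -> cache bank %d\n", samples[i], bank_for_address(samples[i]));
          return 0;
      }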
  • Patent number: 8966180
    Abstract: A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.
    Type: Grant
    Filed: March 1, 2013
    Date of Patent: February 24, 2015
    Assignee: Intel Corporation
    Inventors: Daehyun Kim, Christopher J. Hughes, Yen-Kuang Chen, Partha Kundu
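    The scatter/gather abstract shared by patents 8966180, 8954674, 8578097 and publication 20140040542 is about touching only the useful elements of an unstructured access pattern. The plain C gather/scatter below shows that access pattern at the software level; it does not model the address-calculation, data-shuffling, or format-conversion offload the hardware provides.
      #include <stdio.h>

      /* Gather: pull only the useful elements, named by an index vector,
       * out of a large array into a dense buffer. */
      static void gather(const double *src, const int *idx, int n, double *dst) {
          for (int i = 0; i < n; i++)
              dst[i] = src[idx[i]];
      }

      /* Scatter: write a dense buffer back to the sparse locations. */
      static void scatter(double *dst, const int *idx, int n, const double *src) {
          for (int i = 0; i < n; i++)
              dst[idx[i]] = src[i];
      }

      int main(void) {
          double mem[100];
          for (int i = 0; i < 100; i++) mem[i] = i;
          int idx[4] = { 3, 97, 41, 8 };       /* fine-grained, unstructured accesses */
          double dense[4];
          gather(mem, idx, 4, dense);
          for (int i = 0; i < 4; i++) dense[i] *= 2.0;
          scatter(mem, idx, 4, dense);
          printf("mem[97] = %.1f\n", mem[97]);
          return 0;
      }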
  • Patent number: 8954674
    Abstract: A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.
    Type: Grant
    Filed: October 8, 2013
    Date of Patent: February 10, 2015
    Assignee: Intel Corporation
    Inventors: Daehyun Kim, Christopher J. Hughes, Yen-Kuang Chen, Partha Kundu
  • Patent number: 8914649
    Abstract: A computing device (101, 400, 500) has a processor (401) and at least one peripheral device port (106, 107, 108, 109, 410-1 to 410-5). The processor (401) is configured to selectively power the at least one peripheral device port (106, 107, 108, 109, 410-1 to 410-5) when the processor (401) is in a sleep state (302, 303, 304, 305, 306) according to at least one setting stored by firmware (405) of the processor (401).
    Type: Grant
    Filed: February 9, 2009
    Date of Patent: December 16, 2014
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Chi W. So, Binh T. Truong, Luke Mulcahy
  • Patent number: 8732400
    Abstract: Interconnect circuitry for a data processing apparatus is disclosed. The interconnect circuitry is configured to provide data routes via which at least one initiator device may access at least one recipient device.
    Type: Grant
    Filed: October 5, 2010
    Date of Patent: May 20, 2014
    Assignee: ARM Limited
    Inventors: Peter Andrew Riocreux, Bruce James Mathewson, Christopher William Laycock, Richard Roy Grisenthwaite
  • Patent number: 8732384
    Abstract: A device and methods are provided for accessing memory. In one embodiment, a method includes receiving a request for data stored in a device, checking a local memory for data based on the request to determine if one or more blocks of data associated with the request are stored in the local memory, and generating a memory access request for one or more blocks of data stored in a memory of the device when one or more blocks of data are not stored in the local memory. In one embodiment, data stored in memory of the device may be arranged in a configuration to include a plurality of memory access units each having adjacent lines of pixel data to define a single line of memory within the memory access units. Memory access units may be configured based on memory type and may reduce the number of undesired pixels read.
    Type: Grant
    Filed: July 21, 2010
    Date of Patent: May 20, 2014
    Assignee: CSR Technology Inc.
    Inventors: Eran Scharam, Costia Parfenyev, Liron Ain-Kedem, Ophir Turbovich, Tuval Berler
  • Patent number: 8677070
    Abstract: According to an aspect of the embodiment, an FP includes a plurality of entries which hold requests to be processed, and each of the plurality of entries includes a requested flag indicating that a data transfer has already been requested. An FP-TOQ holds information indicating an entry holding the oldest request. A data transfer request prevention determination circuit checks the requested flag of a request to be processed and the FP-TOQ, and when a transfer request of data as a target of the request to be processed has already been issued and the entry holding the request to be processed is not the entry indicated by the FP-TOQ, transmits a signal which prevents the transfer request of the data to a data transfer request control circuit. Even when a cache miss occurs in a primary cache RAM, the data transfer request control circuit does not issue a data transfer request when the signal which prevents the transfer request is received.
    Type: Grant
    Filed: December 16, 2009
    Date of Patent: March 18, 2014
    Assignee: Fujitsu Limited
    Inventor: Naohiro Kiyota
  • Patent number: 8671232
    Abstract: A system and method for dynamically migrating stash transactions include first and second processing cores, an input/output memory management unit (IOMMU), an IOMMU mapping table, an input/output (I/O) device, a stash transaction migration management unit (STMMU), and an operating system (OS) scheduler. The first core executes a first thread associated with a frame manager. The OS scheduler migrates the first thread from the first core to the second core and generates pre-empt notifiers to indicate scheduling-out and scheduling-in of the first thread from the first core and to the second core. The STMMU uses the pre-empt notifiers to enable dynamic stash transaction migration.
    Type: Grant
    Filed: March 7, 2013
    Date of Patent: March 11, 2014
    Assignee: Freescale Semiconductor, Inc.
    Inventors: Vakul Garg, Varun Sethi
  • Patent number: 8661200
    Abstract: Disclosed herein is a channel controller for a multi-channel cache memory, and a method that includes receiving a memory address associated with a memory access request to a main memory of a data processing system; translating the memory address to form a first access portion identifying at least one partition of a multi-channel cache memory, and at least one further access portion, where the at least one partition includes at least one channel; and applying the at least one further access portion to the at least one channel of the multi-channel cache memory.
    Type: Grant
    Filed: February 5, 2010
    Date of Patent: February 25, 2014
    Assignee: Nokia Corporation
    Inventors: Jari Nikara, Eero Aho, Kimmo Kuusilinna
  • Publication number: 20140040542
    Abstract: A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.
    Type: Application
    Filed: October 8, 2013
    Publication date: February 6, 2014
    Inventors: Daehyun Kim, Christopher J. Hughes, Yen-Kuang Chen, Partha Kundu
  • Patent number: 8639884
    Abstract: Systems and methods are disclosed for multi-threading computer systems. In a computer system executing multiple program threads in a processing unit, a first load/store execution unit is configured to handle instructions from a first program thread and a second load/store execution unit is configured to handle instructions from a second program thread. When the computer system executes a single program thread, the first and second load/store execution units are reconfigured to handle instructions from the single program thread, and a Level 1 (L1) data cache is reconfigured with a first port to communicate with the first load/store execution unit and a second port to communicate with the second load/store execution unit.
    Type: Grant
    Filed: February 28, 2011
    Date of Patent: January 28, 2014
    Assignee: Freescale Semiconductor, Inc.
    Inventor: Thang M. Tran
  • Publication number: 20130326131
    Abstract: A security context management system within a security accelerator that can operate with high latency memories and can provide line-rate processing on several security protocols. The method employed hides the memory latencies by having the processing engines working in a pipelined fashion. It is designed to auto-fetch security context from external memory, and will allow any number of simultaneous security connections by caching only limited contexts on-chip and fetching other contexts as needed. The module does the task of fetching and associating security context with ingress packet, and populates the security context RAM with data from the external memory.
    Type: Application
    Filed: May 29, 2012
    Publication date: December 5, 2013
    Applicant: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Amritpal Singh Mundra, Denis Beaudoin, Eric Lasmana
  • Patent number: 8595442
    Abstract: Methods and systems redundantly validate values that are stored in a memory arrangement. The memory arrangement includes a first port and a second port that provide coherent access to one or more caches in the memory arrangement, and the first and second ports provide this coherent access at the same priority level. An instruction processor verifies that a first expected value matches a first check value calculated from the values as read from the memory arrangement via the first port. A check circuit verifies that a second expected value matches a second check value calculated from the values as read from the memory arrangement via the second port. A recovery operation is performed in response to the first or second expected values not matching the first and second check values, respectively.
    Type: Grant
    Filed: November 16, 2010
    Date of Patent: November 26, 2013
    Assignee: XILINX, Inc.
    Inventors: Philip B. James-Roxby, Austin H. Lesea
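    A C sketch of the redundant validation in patent 8595442: the same stored values are read through two ports, a check value is computed independently for each read path, and each must match its expected value or a recovery operation is triggered. The XOR checksum here is a stand-in assumption for whatever check value the design actually uses.
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define WORDS 8

      static uint32_t memory[WORDS] = { 1, 2, 3, 4, 5, 6, 7, 8 };

      /* Model of one read port: returns the value at index i. */
      static uint32_t read_port(int port, int i) { (void)port; return memory[i]; }

      /* Check value over everything seen through the given port
       * (XOR is a stand-in for whatever check the design uses). */
      static uint32_t check_value(int port) {
          uint32_t c = 0;
          for (int i = 0; i < WORDS; i++)
              c ^= read_port(port, i);
          return c;
      }

      int main(void) {
          uint32_t expected = 1 ^ 2 ^ 3 ^ 4 ^ 5 ^ 6 ^ 7 ^ 8;
          bool ok_port1 = (check_value(1) == expected);   /* instruction processor's check */
          bool ok_port2 = (check_value(2) == expected);   /* independent check circuit     */
          if (!ok_port1 || !ok_port2)
              printf("mismatch: trigger recovery operation\n");
          else
              printf("both ports validated the stored values\n");
          return 0;
      }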
  • Patent number: 8583873
    Abstract: A multiport data cache apparatus and a method of controlling the same are provided. The multiport data cache apparatus includes a plurality of cache banks configured to share a cache line, and a data cache controller configured to receive cache requests for the cache banks, each of which includes a cache bank identifier, transfer the received cache requests to the respective cache banks according to the cache bank identifiers, and process the cache requests independently from one another.
    Type: Grant
    Filed: February 28, 2011
    Date of Patent: November 12, 2013
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jae-Un Park, Ki-Seok Kwon, Suk-Jin Kim
  • Patent number: 8578097
    Abstract: A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.
    Type: Grant
    Filed: October 24, 2011
    Date of Patent: November 5, 2013
    Assignee: Intel Corporation
    Inventors: Daehyun Kim, Christopher J. Hughes, Yen-Kuang Chen, Partha Kundu
  • Patent number: 8516196
    Abstract: A processor may include several processor cores, each including a respective higher-level cache; a lower-level cache including several tag units each including several controllers, where each controller corresponds to a respective cache bank configured to store data, and where the controllers are concurrently operable to access their respective cache banks; and an interconnect network configured to convey data between the cores and the lower-level cache. The controllers in a given tag unit may share access to a resource that may include one or more of an interconnect egress port coupled to the interconnect network, an interconnect ingress port coupled to the interconnect network, a test controller, or a data storage structure.
    Type: Grant
    Filed: June 1, 2012
    Date of Patent: August 20, 2013
    Assignee: Oracle America, Inc.
    Inventors: Prashant Jain, Yoganand Chillarige, Sandip Das, Shukur Moulali Pathan, Srinivasan R. Iyengar, Sanjay Patel
  • Patent number: 8499128
    Abstract: According to the disclosure, a unique and novel archiving system that provides one or more application layer partitions to archive data is disclosed. Embodiments include an active archive including a fixed storage. The active archive can create application layer partitions that associate the application layer partitions with portions of the fixed storage. Each application layer partition, in embodiments, has a separate set of controls that allow for customized storage of different data within a single archiving system. Further, embodiments of methods for ensuring storage capacity in the active archive and the application layer partitions within the active archive are also disclosed.
    Type: Grant
    Filed: February 9, 2012
    Date of Patent: July 30, 2013
    Assignee: Imation Corp.
    Inventors: Matthew D. Bondurant, S. Christopher Alaimo, Randy Kerns
  • Patent number: 8495303
    Abstract: A processor and a computing system include a processor core and a buffer memory to read word data from a memory. The read word data includes first byte data read by the processor core from the memory. The buffer memory also stores the read word data, and determines whether second byte data requested by the processor core is stored in the buffer memory.
    Type: Grant
    Filed: July 21, 2008
    Date of Patent: July 23, 2013
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sang Suk Lee, Suk Jin Kim, Yeon Gon Cho
  • Patent number: 8489814
    Abstract: A cache controller, a method for controlling the cache controller, and a computing system comprising the same are provided. The computer system comprises a processor and a cache controller. The cache controller is electrically connected to the processor and comprises a first port, a second port, and at least one cache. The first port is configured to receive an address of a content, wherein a type of the content is one of instruction and data. The second port is configured to receive an information bit corresponding to the content, wherein the information bit indicates the type of the content. The at least one cache comprises at least one cache line. Each of the cache lines comprises a content field and a corresponding information field. The content and the information bit are stored in the content field of one of the cache lines and the corresponding information field, respectively, according to the information bit and the address. Thereby, instructions and data are separated in a unified cache.
    Type: Grant
    Filed: June 23, 2009
    Date of Patent: July 16, 2013
    Assignee: Mediatek, Inc.
    Inventors: Po-Hung Chen, Chang-Hsien Tai
  • Patent number: 8484421
    Abstract: Embodiments of the present disclosure provide a system on a chip (SOC) comprising a processing core, and a cache including a cache instruction port, a cache data port, and a port utilization circuitry configured to selectively fetch instructions through the cache instruction port and selectively pre-fetch instructions through the cache data port. Other embodiments are also described and claimed.
    Type: Grant
    Filed: November 23, 2009
    Date of Patent: July 9, 2013
    Assignee: Marvell Israel (M.I.S.L) Ltd.
    Inventors: Tarek Rohana, Adi Habusha, Gil Stoler
  • Patent number: 8447931
    Abstract: One embodiment of the present invention provides a processor that supports multiple-issue execution. This processor includes a register file, which contains an array of memory cells, wherein the memory cells contain bits for architectural registers of the processor. The register file also includes multiple read ports and multiple write ports to support multiple-issue execution. During operation, if multiple read ports simultaneously read from a given register, the register file is configured to: read each bit of the given register out of the array of memory cells through a single bitline associated with the bit; and to use a driver located outside of the array of memory cells to drive the bit to the multiple read ports. In this way, each memory cell only has to drive a single bitline (instead of multiple bitlines) during a multiple-port read operation, thereby allowing memory cells to use smaller and more power-efficient drivers for read operations.
    Type: Grant
    Filed: July 1, 2005
    Date of Patent: May 21, 2013
    Assignee: Oracle America, Inc.
    Inventors: Shailender Chaudhry, Paul Caprioli, Marc Tremblay
  • Publication number: 20130111143
    Abstract: A multi-core system includes processor cores having caches; an external input/output bus connected to the processor cores; memory accessed by the processor cores via the external input/output bus; profile information indicating the volume of a write access to the memory by tasks concurrently allocated to the processor cores and whether a cache miss will occur in a read access to the caches; and an operating system that controls the clock frequency of the external input/output bus to be a first frequency, based on the volume of the write access to the memory by the tasks and the bus width of the external input/output bus, when a cache miss in read access is judged to not occur in executing the tasks, and that controls the clock frequency of the external input/output bus to be a second frequency higher than the first frequency when a cache miss in read access is judged to occur.
    Type: Application
    Filed: December 18, 2012
    Publication date: May 2, 2013
    Applicant: FUJITSU LIMITED
    Inventor: FUJITSU LIMITED
  • Patent number: 8423717
    Abstract: A multi-core processor chip comprises at least one shared cache having a plurality of ports and a plurality of address spaces and a plurality of processor cores. Each processor core is coupled to one of the plurality of ports such that each processor core is able to access the at least one shared cache simultaneously with another of the plurality of processor cores. Each processor core is assigned one of a unique application or a unique application task and the multi-core processor is operable to execute a partitioning operating system that temporally and spatially isolates each unique application and each unique application task such that each of the plurality of processor cores does not attempt to write to the same address space of the at least one shared cache at the same time as another of the plurality of processor cores.
    Type: Grant
    Filed: December 2, 2009
    Date of Patent: April 16, 2013
    Assignee: Honeywell International Inc.
    Inventors: Scott Gray, Nicholas Wilt
  • Patent number: 8412886
    Abstract: In a configuration in which a port unit is provided that is shared among threads and has a plurality of entries for holding access requests, and in which the access requests for a cache shared by a plurality of threads being executed at the same time are controlled using the port unit, the access request issued from each thread is registered on a port section of the port unit which is assigned to that thread, thereby controlling the port unit to be divided for use in accordance with the thread configuration. In selecting the access request, the access requests are selected for each thread based on the specified priority control from among the access requests issued from the threads held in the port unit, after which a final access request is selected in accordance with a thread selection signal from among those selected access requests. (A code sketch of this two-level selection follows this entry.)
    Type: Grant
    Filed: December 16, 2009
    Date of Patent: April 2, 2013
    Assignee: Fujitsu Limited
    Inventor: Naohiro Kiyota
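    A compact C sketch of the two-level selection in patent 8412886: the shared port unit is divided into per-thread sections, a per-thread priority rule (oldest-first here, an assumption) picks one pending request from each section, and a thread selection signal then picks the final access request. Entry counts are illustrative.
      #include <stdbool.h>
      #include <stdio.h>

      #define THREADS          2
      #define ENTRIES_PER_SECT 4

      typedef struct { bool valid; int age; int addr; } port_entry;

      /* The shared port unit, divided into one section per thread. */
      static port_entry port[THREADS][ENTRIES_PER_SECT];

      /* Level 1: per-thread priority selection (oldest valid entry wins here). */
      static int select_within_thread(int t) {
          int best = -1;
          for (int e = 0; e < ENTRIES_PER_SECT; e++)
              if (port[t][e].valid && (best < 0 || port[t][e].age > port[t][best].age))
                  best = e;
          return best;
      }

      /* Level 2: the thread selection signal picks the final access request. */
      static const port_entry *select_final(int thread_select) {
          int e = select_within_thread(thread_select);
          return e < 0 ? NULL : &port[thread_select][e];
      }

      int main(void) {
          port[0][1] = (port_entry){ true, 7, 0x100 };
          port[1][0] = (port_entry){ true, 3, 0x200 };
          const port_entry *req = select_final(1);   /* thread 1 selected this cycle */
          if (req) printf("issue access to 0x%x\n", req->addr);
          return 0;
      }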
  • Patent number: 8387147
    Abstract: A method and system for detecting and removing a hidden pestware file is described. One illustrative embodiment detects, using direct drive access, a file on a computer storage device; determines whether the file is also detectable by the operating system by attempting to access the file using a standard file Application-Program-Interface (API) function call of the operating system; identifies the file as a potential hidden pestware file, when the file is undetectable by the operating system; confirms through an automated pestware-signature scan of the potential hidden pestware file that the potential hidden pestware file is a hidden pestware file; and removes automatically, using direct drive access, the hidden pestware file from the storage device.
    Type: Grant
    Filed: July 18, 2011
    Date of Patent: February 26, 2013
    Assignee: Webroot Inc.
    Inventor: Patrick Sprowls
  • Patent number: 8374050
    Abstract: A memory operative to provide multi-port functionality includes multiple single-port memory cells forming a first memory array. The first memory array is organized into multiple memory banks, each of the memory banks comprising a corresponding subset of the single-port memory cells. The memory further includes a second memory array that includes multiple multi-port memory cells and is operative to track status information of data stored in corresponding locations in the first memory array. At least one cache memory is connected with the first memory array and is operative to store data for resolving concurrent read and write access conflicts in the first memory array.
    Type: Grant
    Filed: June 4, 2011
    Date of Patent: February 12, 2013
    Assignee: LSI Corporation
    Inventors: Ting Zhou, Ephrem Wu, Sheng Liu, Hyuck Jin Kwon
  • Patent number: 8312216
    Abstract: The data processing apparatus according to an embodiment of the present invention includes: a first processor; a second processor; and an external RAM to/from which the first processor writes/reads data, the first processor including a cache memory for storing data used in the first processor in association with an address on the external RAM, and the data being written to the cache memory by the second processor not through the external RAM.
    Type: Grant
    Filed: January 11, 2012
    Date of Patent: November 13, 2012
    Assignee: Renesas Electronics Corporation
    Inventor: Mitsunobu Tanigawa
  • Patent number: 8275942
    Abstract: According to one embodiment of the invention, a method is disclosed for selecting a first subset of a plurality of cache ways in a cache for storing hardware threads identified as high priority hardware threads for processing by a multi-threaded processor in communication with the cache; assigning high priority hardware threads to the selected first subset; monitoring a cache usage of a high priority hardware thread assigned to the selected first subset of plurality of cache ways; and reassigning the assigned high priority hardware thread to any cache way of the plurality of cache ways if the cache usage of the high priority hardware thread exceeds a predetermined inactive cache usage threshold value based on the monitoring.
    Type: Grant
    Filed: December 22, 2005
    Date of Patent: September 25, 2012
    Assignee: Intel Corporation
    Inventors: Theodros Yigzaw, Geeyarpuram N. Santhanakrishnan, Mark Rowland, Ganapati Srinivasa
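    A small C sketch of the way assignment in patent 8275942: high-priority hardware threads are first confined to a subset of the cache ways (expressed here as a way mask, an assumed representation), their usage of that subset is monitored, and a thread whose inactivity measure exceeds the threshold is reassigned to any of the ways. The threshold and counter are illustrative.
      #include <stdint.h>
      #include <stdio.h>

      #define NUM_WAYS            8
      #define ALL_WAYS            ((1u << NUM_WAYS) - 1)
      #define HIGH_PRIO_WAYS      0x03u   /* first subset: ways 0 and 1      */
      #define INACTIVE_THRESHOLD  1000    /* cycles without using the subset */

      typedef struct {
          uint32_t way_mask;          /* ways this thread may allocate into   */
          uint32_t inactive_cycles;   /* monitored usage of the assigned ways */
      } thread_cache_policy;

      /* Called periodically by the monitor: if a high-priority thread has not
       * used its reserved ways for too long, give it back the full way set. */
      static void update_policy(thread_cache_policy *t) {
          if (t->way_mask == HIGH_PRIO_WAYS && t->inactive_cycles > INACTIVE_THRESHOLD)
              t->way_mask = ALL_WAYS;
      }

      int main(void) {
          thread_cache_policy hi = { HIGH_PRIO_WAYS, 0 };
          hi.inactive_cycles = 2500;          /* monitored: subset has gone unused */
          update_policy(&hi);
          printf("thread way mask is now 0x%02X\n", hi.way_mask);
          return 0;
      }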
  • Patent number: 8261023
    Abstract: A data processor includes a cache memory control section which includes: a hit/miss determination section which is supplied with a request for data processing to determine whether data to be processed is present in a cache memory and outputs a cache hit/miss determination result and, if having determined that the data is not present in the cache memory, feeds a read command to make an upper memory control section read the data from the upper memory; a FIFO storage which stores the cache hit/miss determination result and the in-block read position information according to a FIFO system; and a cache memory read/write section which reads the hit/miss determination result and the in-block read position information from the FIFO storage and reads the data from the cache memory, or writes the data from the upper memory control section into the cache memory and outputs the data.
    Type: Grant
    Filed: December 15, 2009
    Date of Patent: September 4, 2012
    Assignee: Kabushiki Kaisha Toshiba
    Inventor: Kentaro Yoshikawa
  • Publication number: 20120221797
    Abstract: Provided is a multi-port cache memory apparatus and a method of the multi-port cache memory apparatus. The multi-port cache memory apparatus may divide an address space into address regions and allocate the divided address regions to cache banks, thereby preventing the concentration of access to a particular cache bank.
    Type: Application
    Filed: January 31, 2012
    Publication date: August 30, 2012
    Inventors: Moo-Kyoung Chung, Soo-Jung Ryu, Ho-Young Kim, Woong Seo, Young-Chul Cho