Cache Status Data Bit Patents (Class 711/144)
-
Patent number: 10007619
Abstract: Systems and methods relate to performing address translations in a multithreaded memory management unit (MMU). Two or more address translation requests can be received by the multithreaded MMU and processed in parallel to retrieve address translations to addresses of a system memory. If the address translations are present in a translation cache of the multithreaded MMU, the address translations can be received from the translation cache and scheduled for access of the system memory using the translated addresses. If there is a miss in the translation cache, two or more address translation requests can be scheduled in two or more translation table walks in parallel.
Type: Grant
Filed: September 20, 2015
Date of Patent: June 26, 2018
Assignee: QUALCOMM Incorporated
Inventors: Jason Edward Podaima, Paul Christopher John Wiercienski, Carlos Javier Moreira, Alexander Miretsky, Meghal Varia, Kyle John Ernewein, Manokanthan Somasundaram, Muhammad Umar Choudry, Serag Monier Gadelrab
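As a rough illustration only (not the patented design), the C++ sketch below shows one way translation requests could be split between translation-cache hits and parallel table walks; TranslationCache and table_walk are hypothetical stand-ins.

```cpp
#include <cstdint>
#include <future>
#include <iostream>
#include <optional>
#include <unordered_map>
#include <vector>

// Hypothetical translation cache: virtual page -> physical page.
struct TranslationCache {
    std::unordered_map<uint64_t, uint64_t> entries;
    std::optional<uint64_t> lookup(uint64_t vpage) const {
        auto it = entries.find(vpage);
        if (it == entries.end()) return std::nullopt;
        return it->second;
    }
};

// Stand-in for a translation table walk; real hardware would walk page tables.
uint64_t table_walk(uint64_t vpage) { return vpage ^ 0xABCD; }

int main() {
    TranslationCache tc;
    tc.entries[0x10] = 0x90;                       // one cached translation
    std::vector<uint64_t> requests = {0x10, 0x20, 0x30};

    std::vector<std::future<uint64_t>> walks;      // misses are walked in parallel
    for (uint64_t vpage : requests) {
        if (auto ppage = tc.lookup(vpage)) {
            std::cout << "hit  vpage " << vpage << " -> " << *ppage << "\n";
        } else {
            walks.push_back(std::async(std::launch::async, table_walk, vpage));
        }
    }
    for (auto& w : walks) std::cout << "walk result -> " << w.get() << "\n";
}
```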
-
Patent number: 9990257
Abstract: One or more techniques and/or systems are provided for hosting a virtual machine from a snapshot. In particular, a snapshot of a virtual machine hosted on a primary computing device may be created. The virtual machine may be hosted on a secondary computing device using the snapshot, for example, when a failure of the virtual machine on the primary computing device occurs. If a virtual machine type (format) of the snapshot is not supported by the secondary computing device, then the virtual machine within the snapshot may be converted to a virtual machine type supported by the secondary computing device. In this way, the virtual machine may be operable and/or accessible on the secondary computing device despite the failure. Hosting the virtual machine on the secondary computing device provides, among other things, fault tolerance for the virtual machine and/or applications comprised therein.
Type: Grant
Filed: September 7, 2015
Date of Patent: June 5, 2018
Assignee: NetApp Inc.
Inventors: Eric Paul Forgette, Deepak Kenchammana-Hosekote, Shravan Gaonkar, Arthur Franklin Lent
-
Patent number: 9983995
Abstract: A cache and a method for operating a cache are disclosed. In an embodiment, the cache includes a cache controller, data cache and a delay write through cache (DWTC), wherein the data cache is separate and distinct from the DWTC, wherein cacheable write accesses are split into shareable cacheable write accesses and non-shareable cacheable write accesses, wherein the cacheable shareable write accesses are allocated only to the DWTC, and wherein the non-shareable cacheable write accesses are not allocated to the DWTC.
Type: Grant
Filed: April 18, 2016
Date of Patent: May 29, 2018
Assignee: Futurewei Technologies, Inc.
Inventors: Sushma Wokhlu, Alan Gatherer, Ashish Rai Shrivastava
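A minimal sketch, assuming a simple map-backed cache model, of how cacheable writes might be routed: shareable writes allocate only in a hypothetical DWTC structure, non-shareable writes in the ordinary data cache. This is illustrative, not the patented circuit.

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_map>

// Two hypothetical write destinations: a regular data cache and a
// delay-write-through cache (DWTC) reserved for shareable write accesses.
struct SimpleCache { std::unordered_map<uint64_t, uint32_t> lines; };

void cacheable_write(SimpleCache& data_cache, SimpleCache& dwtc,
                     uint64_t addr, uint32_t value, bool shareable) {
    if (shareable)
        dwtc.lines[addr] = value;        // shareable writes allocate only in the DWTC
    else
        data_cache.lines[addr] = value;  // non-shareable writes stay in the data cache
}

int main() {
    SimpleCache data_cache, dwtc;
    cacheable_write(data_cache, dwtc, 0x100, 1, /*shareable=*/true);
    cacheable_write(data_cache, dwtc, 0x200, 2, /*shareable=*/false);
    std::cout << "DWTC lines: " << dwtc.lines.size()
              << ", data cache lines: " << data_cache.lines.size() << "\n";
}
```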
-
Patent number: 9971693
Abstract: Various embodiments provide for a system that prefetches data from a main memory to a cache and then evicts unused data to a lower level cache. The prefetching system will prefetch data from a main memory to a cache, and data that is not immediately useable or is part of a data set which is too large to fit in the cache can be tagged for eviction to a lower level cache, which keeps the data available with a shorter latency than if the data had to be loaded from main memory again. This lowers the cost of prefetching useable data too far ahead and prevents cache thrashing.
Type: Grant
Filed: May 13, 2015
Date of Patent: May 15, 2018
Assignee: Ampere Computing LLC
Inventor: Kjeld Svendsen
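Illustrative only: a toy two-level model in which prefetched lines that are not immediately usable are tagged and later demoted to a lower-level cache rather than discarded. The TwoLevel structure and its fields are hypothetical.

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_map>

// Hypothetical two-level structure: prefetched lines enter L1; lines tagged as
// "not immediately usable" are demoted to L2 instead of being dropped, so a
// later access still avoids a full trip to main memory.
struct TwoLevel {
    std::unordered_map<uint64_t, bool> l1;  // addr -> tagged-for-demotion flag
    std::unordered_map<uint64_t, bool> l2;

    void prefetch(uint64_t addr, bool usable_soon) {
        l1[addr] = !usable_soon;            // tag for demotion if fetched too early
    }
    void make_room() {
        for (auto it = l1.begin(); it != l1.end();) {
            if (it->second) { l2[it->first] = false; it = l1.erase(it); }
            else ++it;
        }
    }
};

int main() {
    TwoLevel c;
    c.prefetch(0xA0, /*usable_soon=*/true);
    c.prefetch(0xB0, /*usable_soon=*/false);   // too far ahead: marked for demotion
    c.make_room();
    std::cout << "L1 size " << c.l1.size() << ", L2 size " << c.l2.size() << "\n";
}
```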
-
Patent number: 9946648
Abstract: A method includes linking together a set of cache lines, tracking historical changes to the cache coherency protocol states of the set of cache lines, and prefetching to a cache the set of cache lines based at least on the historical changes to the set of cache lines. A cache circuit includes one or more random access memories (RAMs) to store cache lines and cache coherency protocol states of the cache lines and a prefetch logic unit to link together a set of cache lines, to track historical changes to cache coherency protocol states of the set of cache lines, and to prefetch to the cache circuit the set of cache lines based at least on the historical changes to the set of cache lines.
Type: Grant
Filed: March 17, 2016
Date of Patent: April 17, 2018
Assignee: Marvell International Ltd.
Inventor: Wei Zhang
-
Patent number: 9946656
Abstract: A completion packet may be returned before a data packet is written to a memory, if a field of the data packet indicates the data packet was sent due to a cache capacity eviction. The completion packet is returned after the data packet is written to the memory, if the field indicates the data packet was sent due to a flush operation.
Type: Grant
Filed: August 30, 2013
Date of Patent: April 17, 2018
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Gregg B. Lesartre, Derek Alan Sherlock
-
Patent number: 9940247
Abstract: The present application describes embodiments of a method and apparatus for concurrently accessing dirty bits in a cache. One embodiment of the apparatus includes a cache configurable to store a plurality of lines. The lines are grouped into a plurality of subsets of the plurality of lines. This embodiment of the apparatus also includes a plurality of dirty bits associated with the plurality of lines and first circuitry configurable to concurrently access the plurality of dirty bits associated with at least one of the plurality of subsets of lines.
Type: Grant
Filed: June 26, 2012
Date of Patent: April 10, 2018
Assignee: Advanced Micro Devices, Inc.
Inventor: William L. Walker
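A small sketch of the grouping idea, assuming 64 lines split into 8 subsets with one dirty-bit vector per subset so a whole subset's dirty bits can be read in one access; the layout and sizes are illustrative, not taken from the patent.

```cpp
#include <array>
#include <bitset>
#include <iostream>

// Hypothetical layout: 64 lines grouped into 8 subsets of 8 lines each, with
// one 8-bit dirty vector per subset so all dirty bits of a subset can be read
// or cleared in a single access.
constexpr int kSubsets = 8;
constexpr int kLinesPerSubset = 8;
std::array<std::bitset<kLinesPerSubset>, kSubsets> dirty;

void mark_dirty(int line) { dirty[line / kLinesPerSubset].set(line % kLinesPerSubset); }
std::bitset<kLinesPerSubset> read_subset(int subset) { return dirty[subset]; } // one access

int main() {
    mark_dirty(3);
    mark_dirty(5);
    std::cout << "subset 0 dirty bits: " << read_subset(0) << "\n"; // 00101000
}
```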
-
Patent number: 9898398
Abstract: Reusing data in a memory buffer. A method includes reading data into a first portion of memory of a buffer implemented in the memory. The method further includes invalidating the data and marking the first portion of memory as free such that the first portion of memory is marked as being usable for storing other data, but where the data is not yet overwritten. The method further includes reusing the data in the first portion of memory after the data has been invalidated and the first portion of the memory is marked as free.
Type: Grant
Filed: December 30, 2013
Date of Patent: February 20, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Cristian Petculescu, Amir Netz
-
Patent number: 9892051
Abstract: A method can include executing a store instruction that instructs storing of data at an address and, in response to the store instruction, inserting a preloading instruction after the store instruction but before a dependent load instruction to the address. Executing the store instruction can include invalidating a data entry of a cache array at an address of the cache array corresponding to the address and writing the data to a backing memory at an address of the backing memory corresponding to the address. The preloading instruction can cause filling the data entry of the cache array, at the address of the cache array corresponding to the address, with the data from the backing memory at the address of the backing memory corresponding to the address and validating the data entry of the cache array.
Type: Grant
Filed: January 26, 2015
Date of Patent: February 13, 2018
Assignee: MARVELL INTERNATIONAL LTD.
Inventors: Sujat Jamil, R. Frank O'Bleness, Russell J. Robideau, Tom Hameenanttila, Joseph Delgross, David E. Miner
-
Patent number: 9891916
Abstract: A hardware data prefetcher is comprised in a memory access agent, wherein the memory access agent is one of a plurality of memory access agents that share a memory. The hardware data prefetcher includes a prefetch trait that is initially either exclusive or shared. The hardware data prefetcher also includes a prefetch module that performs hardware prefetches from a memory block of the shared memory using the prefetch trait. The hardware data prefetcher also includes an update module that performs analysis of accesses to the memory block by the plurality of memory access agents and, based on the analysis, dynamically updates the prefetch trait to either exclusive or shared while the prefetch module performs hardware prefetches from the memory block using the prefetch trait.
Type: Grant
Filed: February 18, 2015
Date of Patent: February 13, 2018
Assignee: VIA TECHNOLOGIES, INC.
Inventors: Rodney E. Hooker, Albert J. Loper, John Michael Greer, Meera Ramani-Augustin
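A hedged sketch of the trait-update policy: prefetches start exclusive and fall back to shared once other agents are seen touching the same memory block. The threshold and the Prefetcher type are invented for illustration.

```cpp
#include <iostream>

// Hypothetical policy: prefetch a block exclusive (expecting to write) until
// other agents are observed accessing the same memory block, then fall back
// to shared prefetches to avoid ping-ponging ownership.
enum class Trait { Exclusive, Shared };

struct Prefetcher {
    Trait trait = Trait::Exclusive;
    int other_agent_accesses = 0;

    void observe_access(bool by_other_agent) {
        if (by_other_agent && ++other_agent_accesses >= 2)
            trait = Trait::Shared;          // dynamic update based on the analysis
    }
    const char* prefetch() const {
        return trait == Trait::Exclusive ? "prefetch-exclusive" : "prefetch-shared";
    }
};

int main() {
    Prefetcher p;
    std::cout << p.prefetch() << "\n";      // exclusive at first
    p.observe_access(true);
    p.observe_access(true);
    std::cout << p.prefetch() << "\n";      // shared after contention is seen
}
```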
-
Patent number: 9891936
Abstract: An apparatus and method for page level monitoring are described. For example, one embodiment of a method for monitoring memory pages comprises storing information related to each of a plurality of memory pages including an address identifying a location for a monitor variable for each of the plurality of memory pages in a data structure directly accessible only by a software layer operating at or above a first privilege level; detecting virtual-to-physical page mapping consistency changes or other page modifications to a particular memory page for which information is maintained in the data structure; responsively updating the monitor variable to reflect the consistency changes or page modifications; checking a first monitor variable associated with a first memory page prior to execution of first program code; and refraining from executing the first program code if the first monitor variable indicates consistency changes or page modifications to the first memory page.
Type: Grant
Filed: September 27, 2013
Date of Patent: February 13, 2018
Assignee: INTEL CORPORATION
Inventors: Jiwei Oliver Lu, Koichi Yamada, James D. Beany, Palaniverlrajan Shanmugavelayutham, Bo Zhang
-
Patent number: 9886392
Abstract: A method of enhancing a refresh PCI translation (RPCIT) operation to refresh a translation lookaside buffer (TLB) includes determining, by a computer processor, a request to perform at least one RPCIT instruction for purging at least one translation from the TLB. The method further includes purging, by the computer processor, the at least one translation from the TLB in response to executing the at least one RPCIT instruction. The computer processor selectively performs a synchronization operation prior to completing the at least one RPCIT instruction.
Type: Grant
Filed: May 19, 2014
Date of Patent: February 6, 2018
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: David F. Craddock, Thomas A. Gregg, Dan F. Greiner, Damian L. Osisek
-
Patent number: 9880942
Abstract: A method of enhancing a refresh PCI translation (RPCIT) operation to refresh a translation lookaside buffer (TLB) includes determining, by a computer processor, a request to perform at least one RPCIT instruction for purging at least one translation from the TLB. The method further includes purging, by the computer processor, the at least one translation from the TLB in response to executing the at least one RPCIT instruction. The computer processor selectively performs a synchronization operation prior to completing the at least one RPCIT instruction.
Type: Grant
Filed: June 22, 2016
Date of Patent: January 30, 2018
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: David F. Craddock, Thomas A. Gregg, Dan F. Greiner, Damian L. Osisek
-
Patent number: 9864692
Abstract: Managing cache evictions during transactional execution of a process. Based on initiating transactional execution of a memory data accessing instruction, memory data is fetched from a memory location, the memory data to be loaded as a new line into a cache entry of the cache. Based on determining that a threshold number of cache entries have been marked as read-set cache lines, determining whether a cache entry that is a read-set cache line can be replaced by identifying a cache entry that is a read-set cache line for the transaction that contains memory data from a memory address within a predetermined non-conflict address range. Then invalidating the identified cache entry of the transaction. Then loading the fetched memory data into the identified cache entry, and then marking the identified cache entry as a read-set cache line of the transaction.
Type: Grant
Filed: August 12, 2015
Date of Patent: January 9, 2018
Assignee: International Business Machines Corporation
Inventors: Dan F. Greiner, Michael Karl Gschwind, Eric M. Schwarz, Chung-Lung K. Shum, Timothy J. Slegel
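Illustrative pseudologic, assuming the non-conflict address range is supplied by the caller: pick a read-set entry inside that range, invalidate it, and reuse the slot for the newly fetched line.

```cpp
#include <cstdint>
#include <iostream>
#include <optional>
#include <vector>

// Hypothetical cache entry for a transaction: address plus a read-set mark.
struct Entry { uint64_t addr; bool read_set; };

// Pick a read-set entry from a caller-declared non-conflict address range to
// give up, so the new line can be tracked without aborting the transaction.
std::optional<size_t> find_replaceable(const std::vector<Entry>& cache,
                                       uint64_t lo, uint64_t hi) {
    for (size_t i = 0; i < cache.size(); ++i)
        if (cache[i].read_set && cache[i].addr >= lo && cache[i].addr < hi)
            return i;
    return std::nullopt;
}

int main() {
    std::vector<Entry> cache = {{0x1000, true}, {0x2000, true}};
    if (auto victim = find_replaceable(cache, 0x2000, 0x3000)) {
        cache[*victim] = {0x5000, true};   // invalidate, reload, re-mark as read-set
        std::cout << "replaced read-set line, new addr 0x5000\n";
    }
}
```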
-
Patent number: 9804801
Abstract: A method of processing data in a memory system including a control unit and a hybrid memory device having a first memory and a second memory, includes; receiving first write data, storing the first write data in the first memory and assigning a first group state from among a plurality of group states to the stored first write data in response to first attribution information, completing a data processing operation in the memory system directed to the stored first write data that changes the attribution information associated with the stored first write data by monitoring of the first attribution information using an operating system running on the memory controller, and changing the first group state assigned to the stored first write data to a second group state from among the plurality of group states, the second group state having a different priority than a priority for the first group state.
Type: Grant
Filed: March 24, 2015
Date of Patent: October 31, 2017
Assignee: Samsung Electronics Co., Ltd.
Inventors: Sangkwon Moon, Jin-Soo Kim, Young-Sik Lee, Jinkyu Jeong, Kyung Ho Kim
-
Patent number: 9773048
Abstract: A method includes processing a transaction on an in memory database where data being processed has a validity time, updating a time dependent data view responsive to the transaction being processed to capture time validity information regarding the data, and storing the time validity information in a historization table to provide historical access to past time dependent data following expiration of the validity time.
Type: Grant
Filed: September 12, 2013
Date of Patent: September 26, 2017
Assignee: SAP SE
Inventor: Siar Sarferaz
-
Patent number: 9766820
Abstract: An arithmetic processing device which connects to a main memory includes a cache memory which stores data, an arithmetic unit which performs an arithmetic operation for data stored in the cache memory, a first control device which controls the cache memory and outputs a first request which reads the data stored in the main memory, and a second control device which is connected to the main memory and transmits a plurality of second requests which are divided from the first request output from the first control device, receives data corresponding to the plurality of second requests which is transmitted from the main memory and sends each of the data to the first control device.
Type: Grant
Filed: May 11, 2015
Date of Patent: September 19, 2017
Assignee: FUJITSU LIMITED
Inventors: Yuta Toyoda, Koji Hosoe, Masatoshi Aihara, Akio Tokoyoda, Makoto Suga
-
Patent number: 9760486
Abstract: Technologies are generally described herein for accelerating a cache state transfer in a multicore processor. The multicore processor may include first, second, and third tiles. The multicore processor may initiate migration of a thread executing on the first core at the first tile from the first tile to the second tile. The multicore processor may determine block addresses of blocks to be transferred from a first cache at the first tile to a second cache at the second tile, and identify that a directory at the third tile corresponds to the block addresses. The multicore processor may update the directory to reflect that the second cache shares the blocks. The multicore processor may transfer the blocks from the first cache in the first tile to the second cache in the second tile effective to complete the migration of the thread from the first tile to the second tile.
Type: Grant
Filed: March 25, 2016
Date of Patent: September 12, 2017
Assignee: EMPIRE TECHNOLOGY DEVELOPMENT LLC
Inventor: Yan Solihin
-
Patent number: 9760488
Abstract: A cache system is provided. The cache system includes a first cache and a second cache. The first cache is configured for storing a first status of a plurality of data. The second cache is configured for storing a table. The table includes the plurality of data arranged from a highest level to a lowest level. The cache system is configured to update the first status of the plurality of data in the first cache. The cache system is further configured to update the table in the second cache according to the first status of the plurality of data.
Type: Grant
Filed: August 13, 2015
Date of Patent: September 12, 2017
Assignee: MACRONIX INTERNATIONAL CO., LTD.
Inventors: Hung-Sheng Chang, Hsiang-Pang Li, Yuan-Hao Chang, Tei-Wei Kuo
-
Patent number: 9749190
Abstract: A computer-implemented method is operable on a device having hardware including memory and at least one processor. The method includes maintaining invalidation information in a list at a service on the device, where the invalidation information includes a plurality of invalidation commands. At least some of the invalidation commands in the list are selectively combined to form at least one other invalidation command in the list.
Type: Grant
Filed: March 13, 2013
Date of Patent: August 29, 2017
Assignee: LEVEL 3 COMMUNICATIONS, LLC
Inventors: Christopher Newton, Lewis Robert Varney, Laurence R. Lipstone, William Crowder, Andrew Swart
-
Patent number: 9710266
Abstract: A Load Count to Block Boundary instruction is provided that provides a distance from a specified memory address to a specified memory boundary. The memory boundary is a boundary that is not to be crossed in loading data. The boundary may be specified a number of ways, including, but not limited to, a variable value in the instruction text, a fixed instruction text value encoded in the opcode, or a register based boundary; or it may be dynamically determined.
Type: Grant
Filed: March 15, 2012
Date of Patent: July 18, 2017
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Jonathan D. Bradbury, Michael K. Gschwind, Christian Jacobi, Eric M. Schwarz, Timothy J. Slegel
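The arithmetic behind such an instruction is simple; the sketch below computes the byte count from an address to the next power-of-two block boundary (a 64-byte block is assumed for the example).

```cpp
#include <cstdint>
#include <iostream>

// The count a "load to block boundary" style operation would return: how many
// bytes can be loaded from addr before crossing the next boundary-aligned
// address (boundary assumed to be a power of two, e.g. a 64-byte block).
uint64_t count_to_boundary(uint64_t addr, uint64_t boundary) {
    return boundary - (addr & (boundary - 1));
}

int main() {
    std::cout << count_to_boundary(0x1000, 64) << "\n";  // 64: already aligned
    std::cout << count_to_boundary(0x103A, 64) << "\n";  // 6 bytes left in the block
}
```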
-
Patent number: 9710267
Abstract: A Load Count to Block Boundary instruction is provided that provides a distance from a specified memory address to a specified memory boundary. The memory boundary is a boundary that is not to be crossed in loading data. The boundary may be specified a number of ways, including, but not limited to, a variable value in the instruction text, a fixed instruction text value encoded in the opcode, or a register based boundary; or it may be dynamically determined.
Type: Grant
Filed: March 3, 2013
Date of Patent: July 18, 2017
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Jonathan D. Bradbury, Michael K. Gschwind, Christian Jacobi, Eric M. Schwarz, Timothy J. Slegel
-
Patent number: 9697130
Abstract: A cache automation module detects the deployment of storage resources in a virtual computing environment and, in response, automatically configures cache services for the detected storage resources. The automation module may detect new storage resources by monitoring storage operations and/or requests, by use of an interface provided by virtualization infrastructure, and/or the like. The cache automation module may deterministically identify storage resources that are to be cached and automatically configure caching services for the identified storage resources.
Type: Grant
Filed: July 16, 2014
Date of Patent: July 4, 2017
Assignee: SanDisk Technologies LLC
Inventors: Jaidil Karippara, Pavan Pamula, Yuepeng Feng, Vikuto Atoka Sema
-
Patent number: 9665658
Abstract: One embodiment provides an eviction system for dynamically-sized caching comprising a non-blocking data structure for maintaining one or more data nodes. Each data node corresponds to a data item in a cache. Each data node comprises information relating to a corresponding data item. The eviction system further comprises an eviction module configured for removing a data node from the data structure, and determining whether the data node is a candidate for eviction based on information included in the data node. If the data node is not a candidate for eviction, the eviction module inserts the data node back into the data structure; otherwise the eviction module evicts the data node and a corresponding data item from the system and the cache, respectively. Data nodes of the data structure circulate through the eviction module until a candidate for eviction is determined.
Type: Grant
Filed: December 30, 2013
Date of Patent: May 30, 2017
Assignee: Samsung Electronics Co., Ltd.
Inventors: Gage W. Eads, Juan A. Colmenares
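A software sketch of the circulate-until-candidate idea, using std::deque in place of the non-blocking data structure and a recent-use flag as the per-node information; both are stand-ins for illustration.

```cpp
#include <deque>
#include <iostream>
#include <string>

// Hypothetical node: a cached key plus a recent-use flag that stands in for
// "information relating to the data item".
struct Node { std::string key; bool recently_used; };

// Circulate nodes: non-candidates are pushed back to the tail; the first
// candidate found is evicted. (A lock-free queue would replace std::deque.)
std::string evict_one(std::deque<Node>& q) {
    while (!q.empty()) {
        Node n = q.front(); q.pop_front();
        if (n.recently_used) { n.recently_used = false; q.push_back(n); }
        else return n.key;                  // candidate found: evict it
    }
    return "";
}

int main() {
    std::deque<Node> q = {{"a", true}, {"b", false}, {"c", true}};
    std::cout << "evicted: " << evict_one(q) << "\n";   // "b"
}
```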
-
Patent number: 9639276
Abstract: A request is received over a link that requests a particular line in memory. A directory state record is identified in memory that identifies a directory state of the particular line. A type of the request is identified from the request. It is determined that the directory state of the particular line is to change from the particular state to a new state based on the directory state of the particular line and the type of the request. The directory state record is changed, in response to receipt of the request, to reflect the new state. A copy of the particular line is sent in response to the request.
Type: Grant
Filed: March 27, 2015
Date of Patent: May 2, 2017
Assignee: Intel Corporation
Inventor: Robert G. Blankenship
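Illustrative only: a toy directory keyed by line address, where the record is looked up, changed according to the request type, and a copy of the line is sent. A real directory would also track sharers and consult the current state.

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_map>

// Hypothetical directory: one state record per line address. This sketch
// only shows the record lookup, the state change, and the reply.
enum class DirState { Invalid, Shared, Exclusive };
enum class ReqType { ReadShared, ReadExclusive };

std::unordered_map<uint64_t, DirState> directory;

void handle_request(uint64_t line, ReqType req) {
    DirState& record = directory[line];                   // identify the record
    record = (req == ReqType::ReadExclusive) ? DirState::Exclusive
                                             : DirState::Shared;  // change it
    std::cout << "sending copy of line 0x" << std::hex << line << std::dec << "\n";
}

int main() {
    handle_request(0x40, ReqType::ReadShared);
    handle_request(0x80, ReqType::ReadExclusive);
}
```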
-
Patent number: 9619396
Abstract: A memory controller receives a memory invalidation request that references a line of far memory in a two level system memory topology with far memory and near memory, identifies an address of the near memory corresponding to the line, and reads data at the address to determine whether a copy of the line is in the near memory. Data of the address is to be flushed to the far memory if the data includes a copy of another line of the far memory and the copy of the other line is dirty. A completion is sent for the memory invalidation request to indicate that a coherence agent is granted exclusive access to the line. With exclusive access, the line is to be modified to generate a modified version of the line and the address of the near memory is to be overwritten with the modified version of the line.
Type: Grant
Filed: March 27, 2015
Date of Patent: April 11, 2017
Assignee: Intel Corporation
Inventors: Robert G. Blankenship, Jeffrey D. Chamberlain, Yen-Cheng Liu, Vedaraman Geetha
-
Patent number: 9600438
Abstract: A method and apparatus for controlling and coordinating a multi-component system. Each component in the system contains a computing device. Each computing device is controlled by software running on the computing device. A first portion of the software resident on each computing device is used to control operations needed to coordinate the activities of all the components in the system. This first portion is known as a “coordinating process.” A second portion of the software resident on each computing device is used to control local processes (local activities) specific to that component. Each component in the system is capable of hosting and running the coordinating process. The coordinating process continually cycles from component to component while it is running.
Type: Grant
Filed: March 24, 2011
Date of Patent: March 21, 2017
Assignee: Florida Institute for Human and Machine Cognition, Inc.
Inventors: Kenneth M. Ford, Niranjan Suri
-
Patent number: 9569282
Abstract: Fine-grained parallelism within isolated object graphs is used to provide safe concurrent operations within the isolated object graphs. One example provides an abstraction labeled IsolatedObjectGraph that encapsulates at least one object graph, but often two or more object graphs, rooted by an instance of a type member. By encapsulating the object graph, no references from outside of the object graph are allowed to objects inside of the object graph. Also, the encapsulated object graph does not contain references to objects outside of the graphs. The isolated object graphs provide for safe data parallel operations, including safe data parallel mutations such as for each loops. In an example, the ability to isolate the object graph is provided through type permissions.
Type: Grant
Filed: April 24, 2009
Date of Patent: February 14, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: John J. Duffy, Niklas Gustafsson, Vance Morrison
-
Patent number: 9558147
Abstract: A system and method for monitoring a plurality of data streams is disclosed. At a first processing stage, a first memory area is associated to an element of a plurality of data streams. Upon arrival of a frame associated with one of the plurality of data streams, a second memory area is associated to the arrived frame based on the element. In the second memory area, a data indicating an arrival of the arrived frame is recorded and on a successful recording, the frame is forwarded to a second processing stage. An independent process executes at a preselected time interval to erase contents of the first memory area.
Type: Grant
Filed: June 12, 2014
Date of Patent: January 31, 2017
Assignee: NXP B.V.
Inventors: Nicola Concer, Sujan Pandey, Hubertus Gerardus Hendrikus Vermeulen
-
Patent number: 9501418
Abstract: The present disclosure relates to caches, methods, and systems for using an invalidation data area. The cache can include a journal configured for tracking data blocks, and an invalidation data area configured for tracking invalidated data blocks associated with the data blocks tracked in the journal. The invalidation data area can be on a separate cache region from the journal. A method for invalidating a cache block can include determining a journal block tracking a memory address associated with a received write operation. The method can also include determining a mapped journal block based on the journal block and on an invalidation record. The method can also include determining whether write operations are outstanding. If so, the method can include aggregating the outstanding write operations and performing a single write operation based on the aggregated write operations.
Type: Grant
Filed: June 26, 2014
Date of Patent: November 22, 2016
Assignee: HGST Netherlands B.V.
Inventor: Pulkit Misra
-
Patent number: 9477514
Abstract: A TRANSACTION BEGIN instruction and a TRANSACTION END instruction are provided. The TRANSACTION BEGIN instruction causes either a constrained or nonconstrained transaction to be initiated, depending on a field of the instruction. A constrained transaction has one or more restrictions associated therewith, while a nonconstrained transaction is not limited in the manner of a constrained transaction. The TRANSACTION END instruction ends the transaction started by the TRANSACTION BEGIN instruction.
Type: Grant
Filed: March 7, 2013
Date of Patent: October 25, 2016
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Dan F. Greiner, Christian Jacobi, Marcel Mitran, Timothy J. Slegel
-
Patent number: 9471503
Abstract: A computer system processor of a multi-processor computer system having a cache subsystem, the computer system having exclusive ownership of a cache line, executes a demote instruction to cause its own exclusively owned cache line to become shared or read-only in the computer processor cache.
Type: Grant
Filed: February 11, 2016
Date of Patent: October 18, 2016
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Chung-Lung Kevin Shum, Kathryn Marie Jackson, Charles Franklin Webb
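A minimal MESI-style sketch of the demote effect: an exclusively owned (or modified) line in the local cache becomes Shared, i.e. read-only, without being evicted. The state names and the demote helper are illustrative; a real demote of a modified line would also write the data back.

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_map>

// Toy MESI-style states for lines in one processor's cache. A "demote" of an
// exclusively owned line turns it read-only (Shared) without evicting it.
enum class State { Invalid, Shared, Exclusive, Modified };

std::unordered_map<uint64_t, State> my_cache;

void demote(uint64_t addr) {
    auto it = my_cache.find(addr);
    if (it != my_cache.end() &&
        (it->second == State::Exclusive || it->second == State::Modified))
        it->second = State::Shared;    // give up exclusivity, keep a readable copy
}

int main() {
    my_cache[0x100] = State::Exclusive;
    demote(0x100);
    std::cout << "line 0x100 is now "
              << (my_cache[0x100] == State::Shared ? "Shared" : "not Shared") << "\n";
}
```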
-
Patent number: 9471446
Abstract: A system includes a plurality of storage devices and an information processing device including a cache memory. The information processing device is configured to access the plurality of storage devices. When a failure has occurred in a first storage device included in the plurality of storage devices, the information processing device performs a procedure including: specifying a second storage device in which no failure has occurred, among the plurality of storage devices, creating an invisible file including a cache that has been stored in the cache memory and is to be stored in the first storage device, and storing the created invisible file in the second storage device. The information processing device stores the cache included in the invisible file stored in the second storage device, in the first storage device when the failure of the first storage device is eliminated.
Type: Grant
Filed: October 6, 2014
Date of Patent: October 18, 2016
Assignee: FUJITSU LIMITED
Inventor: Yoshihisa Chujo
-
Patent number: 9465739
Abstract: A system, method, and computer program product are provided for conditionally sending a request for data to a node based on a determination. In operation, a first request for data is sent to a cache of a first node. Additionally, it is determined whether the first request can be satisfied within the first node, where the determining includes at least one of determining a type of the first request and determining a state of the data in the cache. Furthermore, a second request for the data is conditionally sent to a second node, based on the determination.
Type: Grant
Filed: October 17, 2013
Date of Patent: October 11, 2016
Assignee: Broadcom Corporation
Inventors: Gaurav Garg, David T. Hass
-
Patent number: 9460011
Abstract: A computer system that includes a processor, a memory and a processor cache for the main memory with a check-in-cache instruction may be provided. The processor executes computer readable instructions stored in the memory that include receiving a check-in-cache instruction from a check-in-cache storage location. The instructions also include responsive to receiving the check-in-cache instruction, determining whether data bytes specified by the check-in-cache instruction are at least partially available in the processor cache. The instructions further include storing a condition code of the determination result in a storage location.
Type: Grant
Filed: December 14, 2015
Date of Patent: October 4, 2016
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Marco Kraemer, Carsten Otte, Christoph Raisch
-
Patent number: 9454491
Abstract: Systems and methods for accessing a unified translation lookaside buffer (TLB) are disclosed. A method includes receiving an indicator of a level one translation lookaside buffer (L1TLB) miss corresponding to a request for a virtual address to physical address translation, searching a cache that includes virtual addresses and page sizes that correspond to translation table entries (TTEs) that have been evicted from the L1TLB, where a page size is identified, and searching a second level TLB and identifying a physical address that is contained in the second level TLB. Access is provided to the identified physical address.
Type: Grant
Filed: January 6, 2015
Date of Patent: September 27, 2016
Assignee: SOFT MACHINES INC.
Inventors: Karthikeyan Avudaiyappan, Mohammad Abdallah
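Illustrative C++ sketch, with hypothetical evicted_tte_sizes and l2_tlb maps: on an L1 TLB miss, the page size recorded for evicted TTEs tells us how to mask the virtual address before probing the second-level TLB.

```cpp
#include <cstdint>
#include <iostream>
#include <optional>
#include <unordered_map>

// Hypothetical structures: a small cache of (virtual page -> page size) for
// TTEs evicted from the L1 TLB, and a second-level TLB keyed by the
// page-aligned virtual address.
std::unordered_map<uint64_t, uint64_t> evicted_tte_sizes;  // virtual page -> page size
std::unordered_map<uint64_t, uint64_t> l2_tlb;             // virtual page -> physical base

std::optional<uint64_t> on_l1tlb_miss(uint64_t vaddr) {
    for (auto& [page, size] : evicted_tte_sizes) {
        if ((vaddr & ~(size - 1)) == page) {               // page size identified
            auto hit = l2_tlb.find(page);
            if (hit != l2_tlb.end())
                return hit->second + (vaddr & (size - 1)); // physical address
        }
    }
    return std::nullopt;                                   // fall back to a page walk
}

int main() {
    evicted_tte_sizes[0x200000] = 0x200000;                // one 2 MiB page
    l2_tlb[0x200000] = 0x40000000;
    if (auto pa = on_l1tlb_miss(0x2000F0))
        std::cout << "translated to 0x" << std::hex << *pa << "\n";
}
```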
-
Patent number: 9448954
Abstract: The subject matter discloses a method for data coherency; the method comprising receiving an interrupt request for interrupting a CPU; wherein the interrupt request is from one of a plurality of modules; wherein the interrupt request notifies a writing instruction of a first data by the one of the plurality of modules to a shared memory; and wherein the shared memory is accessible to the plurality of modules through a shared bus; suspending the interrupt request; validating a completion of an execution of the writing instruction; wherein the validating is performed after the suspending; and resuming the interrupt request after the completion of the execution of the writing is validated, whereby the CPU is notified about the completion of the execution of the writing instruction.
Type: Grant
Filed: February 28, 2011
Date of Patent: September 20, 2016
Assignee: DSP GROUP LTD.
Inventors: Leonardo Vainsencher, Yaron P. Folk, Yuval Itkin
-
Patent number: 9430410
Abstract: A method for supporting a plurality of load accesses is disclosed. A plurality of requests to access a data cache is accessed, and in response, a tag memory is accessed that maintains a plurality of copies of tags for each entry in the data cache. Tags are identified that correspond to individual requests. The data cache is accessed based on the tags that correspond to the individual requests. A plurality of requests to access the same block of the plurality of blocks causes an access arbitration that is executed in the same clock cycle as is the access of the tag memory.
Type: Grant
Filed: July 30, 2012
Date of Patent: August 30, 2016
Assignee: SOFT MACHINES, INC.
Inventors: Karthikeyan Avudaiyappan, Mohammad Abdallah
-
Patent number: 9432679
Abstract: A data processing system is provided for processing video data on a window basis. At least one memory unit (L1) is provided for fetching and storing video data from an image memory (IM) according to a first window (R) in a first scanning order. At least one second memory unit (L0) is provided for fetching and storing video data from the first memory unit (L1) according to a second window in a second scanning order (SO). Furthermore, at least one processing unit (PU) is provided for performing video processing on the video data of the second window as stored in the at least one second memory unit (L0) based on the second scanning order (SO). The second scanning order (SO) is a meandering scanning order being orthogonal to the first scanning order (SO1).
Type: Grant
Filed: October 27, 2006
Date of Patent: August 30, 2016
Assignee: ENTROPIC COMMUNICATIONS, LLC
Inventors: Aleksandar Beric, Ramanathan Sethuraman
-
Patent number: 9424198
Abstract: A processor includes at least one execution unit, a near memory, and memory management logic to manage the near memory and a far memory external to the processor as a unified exclusive memory. Each of a plurality of data blocks may be exclusively stored in either the far memory or the near memory. The unified exclusive memory space may be divided into a plurality of sets and a plurality of ways. In response to a request for a first block stored in the far memory, the memory management logic may move the first block from the far memory to the near memory, and may move a second block from the near memory to the far memory. A tag buffer may store tags associated with blocks being moved between the near memory and the far memory. Fill and drain buffers may also be used. Other implementations are described and claimed.
Type: Grant
Filed: November 30, 2012
Date of Patent: August 23, 2016
Assignee: Intel Corporation
Inventors: Shlomo Raikin, Zvika Greenfield
-
Patent number: 9396102
Abstract: For cache/data management in a computing storage environment, incoming data segments into a Non Volatile Storage (NVS) device of the computing storage environment are validated against a bitmap to determine if the incoming data segments are currently in use. Those of the incoming data segments determined to be currently in use are designated to the computing storage environment to protect data integrity.
Type: Grant
Filed: September 14, 2012
Date of Patent: July 19, 2016
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Kevin John Ash, Michael Thomas Benhase, Lokesh Mohan Gupta, Kenneth Wayne Todd
-
Patent number: 9390023
Abstract: According to at least one example embodiment, a method and corresponding apparatus for conditionally storing data include initiating an atomic sequence by executing, by a core processor, an instruction/operation designed to initiate an atomic sequence. Executing the instruction designed to initiate the atomic sequence includes loading content associated with a memory location into a first cache memory, and maintaining an indication of the memory location and a copy of the corresponding content loaded. A conditional storing operation is then performed, the conditional storing operation includes a compare-and-swap operation, executed by a controller associated with a second cache memory, based on the maintained copy of the content and the indication of the memory location.
Type: Grant
Filed: October 3, 2013
Date of Patent: July 12, 2016
Assignee: Cavium, Inc.
Inventors: Richard E. Kessler, David H. Asher, Michael Sean Bertone, Shubhendu S. Mukherjee, Wilson P. Snyder, II, John M. Perveiler, Christopher J. Comis
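A software analogue of the described sequence using std::atomic compare-and-swap: the begin step loads a value and keeps a copy, and the conditional store succeeds only if the location still holds that copy. The hardware mechanism in the patent differs; this is only a sketch of the idea.

```cpp
#include <atomic>
#include <cstdint>
#include <iostream>

// Software analogue: "begin" loads a location and keeps a copy; the
// conditional store succeeds only if the location still holds that copy.
struct AtomicSequence {
    std::atomic<uint64_t>* loc = nullptr;
    uint64_t snapshot = 0;

    void begin(std::atomic<uint64_t>& l) { loc = &l; snapshot = l.load(); }
    bool store_conditional(uint64_t new_value) {
        return loc->compare_exchange_strong(snapshot, new_value);
    }
};

int main() {
    std::atomic<uint64_t> counter{41};
    AtomicSequence seq;
    seq.begin(counter);                       // load and remember 41
    bool ok = seq.store_conditional(42);      // succeeds: nobody interfered
    std::cout << (ok ? "stored 42" : "retry") << ", counter=" << counter << "\n";
}
```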
-
Patent number: 9372800
Abstract: A multi-chip system includes multiple chip devices configured to communicate to each other and share resources. According to at least one example embodiment, a method of providing memory coherence within the multi-chip system comprises maintaining, at a first chip device of the multi-chip system, state information indicative of one or more states of one or more copies, residing in one or more chip devices of the multi-chip system, of a data block. The data block is stored in a memory associated with one of the multiple chip devices. The first chip device receives a message associated with a copy of the one or more copies of the data block from a second chip device of the multiple chip devices, and, in response, executes a scheme of one or more actions determined based on the state information maintained at the first chip device and the message received.
Type: Grant
Filed: March 7, 2014
Date of Patent: June 21, 2016
Assignee: Cavium, Inc.
Inventors: Isam Akkawi, Richard E. Kessler, David H. Asher, Bryan W. Chin, Wilson P. Snyder, II
-
Patent number: 9367464
Abstract: A method is described that includes alternating cache requests sent to a tag array between data requests and dataless requests.
Type: Grant
Filed: December 30, 2011
Date of Patent: June 14, 2016
Assignee: Intel Corporation
Inventor: Larisa Novakovsky
-
Patent number: 9336144
Abstract: Three-dimensional processing systems are provided which have multiple layers of conjoined chips, wherein one or more chip layers include processor cores that share cache hierarchies over multiple chip layers. The caches can be partitioned, conjoined, and managed according to various sets of rules and configurations.
Type: Grant
Filed: July 25, 2013
Date of Patent: May 10, 2016
Assignee: GLOBALFOUNDRIES INC.
Inventors: Alper Buyuktosunoglu, Philip G. Emma, Allan M. Hartstein, Michael B. Healy, Krishnan K. Kailas
-
Patent number: 9336146
Abstract: Technologies are generally described herein for accelerating a cache state transfer in a multicore processor. The multicore processor may include first, second, and third tiles. The multicore processor may initiate migration of a thread executing on the first core at the first tile from the first tile to the second tile. The multicore processor may determine block addresses of blocks to be transferred from a first cache at the first tile to a second cache at the second tile, and identify that a directory at the third tile corresponds to the block addresses. The multicore processor may update the directory to reflect that the second cache shares the blocks. The multicore processor may transfer the blocks from the first cache in the first tile to the second cache in the second tile effective to complete the migration of the thread from the first tile to the second tile.
Type: Grant
Filed: December 29, 2010
Date of Patent: May 10, 2016
Assignee: EMPIRE TECHNOLOGY DEVELOPMENT LLC
Inventor: Yan Solihin
-
Patent number: 9329800
Abstract: A data storage system and associated method are provided wherein a policy engine continuously collects qualitative information about a network load to the data storage system in order to dynamically characterize the load and continuously correlates the load characterization to the content of a command queue of transfer requests for writeback commands and host read commands, selectively limiting the content with respect to writeback commands to only those transfer requests for writeback data that are selected on a physical zone basis of a plurality of predefined physical zones of a storage media.
Type: Grant
Filed: June 29, 2007
Date of Patent: May 3, 2016
Assignee: Seagate Technology LLC
Inventors: Clark Edward Lubbers, Robert Michael Lester
-
Patent number: 9311244
Abstract: An interconnect has transaction tracking circuitry for enforcing ordering of a set of data access transactions so that they are issued to slave devices in an order in which they are received from master devices. The transaction tracking circuitry is reused for also enforcing ordering of snoop transactions which are triggered by the set of data access transactions, for snooping master devices identified by a snoop filter as holding cache data for the target address of the transactions.
Type: Grant
Filed: August 25, 2014
Date of Patent: April 12, 2016
Assignee: ARM Limited
Inventors: Sean James Salisbury, Andrew David Tune, Daniel Sara
-
Patent number: 9304863
Abstract: A method of backstepping through a program execution includes dividing the program execution into a plurality of epochs, wherein the program execution is performed by an active core, determining, during a subsequent epoch of the plurality of epochs, that a rollback is to be performed, performing the rollback including re-executing a previous epoch of the plurality of epochs, wherein the previous epoch includes one or more instructions of the program execution stored by a checkpointing core, and adjusting a granularity of the plurality of epochs according to a frequency of the rollback.
Type: Grant
Filed: June 27, 2013
Date of Patent: April 5, 2016
Assignee: International Business Machines Corporation
Inventors: Harold W. Cain, III, David M. Daly, Kattamuri Ekanadham, Jose E. Moreira, Mauricio J. Serrano
-
Patent number: 9304946
Abstract: Technologies are described herein for providing a hardware-based accelerator adapted to manage copy-on-write. Some example technologies may identify a read request adapted to read a block at an original memory address. The technologies may utilize the hardware-based accelerator to determine whether the block is located at the original memory address. When a determination is made that the block is located at the original memory address, the technologies may utilize the hardware-based accelerator to pass the original memory address so that the read request can be performed utilizing the original memory address. When a determination is made that the block is not located in the memory at the original memory address, the technologies may utilize the hardware-based accelerator to generate a new memory address and to pass the new memory address so that the read request can be performed utilizing the new memory address.
Type: Grant
Filed: June 25, 2012
Date of Patent: April 5, 2016
Assignee: EMPIRE TECHNOLOGY DEVELOPMENT LLC
Inventor: Yan Solihin
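A sketch of the address-resolution step only, assuming a hypothetical cow_remap table: reads of blocks that were never copied pass through the original address, while reads of copied blocks are redirected to the new address.

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_map>

// Hypothetical remap table maintained by the accelerator: blocks that have
// been copied on write are looked up here; everything else still lives at
// its original address.
std::unordered_map<uint64_t, uint64_t> cow_remap;   // original addr -> new addr

uint64_t resolve_read_address(uint64_t original) {
    auto it = cow_remap.find(original);
    return it == cow_remap.end() ? original          // block not moved: pass original
                                 : it->second;       // block copied: pass new address
}

int main() {
    cow_remap[0x1000] = 0x8000;                      // this block was copied on write
    std::cout << std::hex
              << "read of 0x1000 goes to 0x" << resolve_read_address(0x1000) << "\n"
              << "read of 0x2000 goes to 0x" << resolve_read_address(0x2000) << "\n";
}
```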