Patent Applications Published on December 31, 2015
-
Publication number: 20150378896
Abstract: A transactional execution of a set of instructions in a transaction of a program may be initiated to collect memory operand access characteristics of a set of instructions of a transaction during the transactional execution. The memory operand access characteristics may be stored upon a termination of the transactional execution of the set of instructions. The memory operand access characteristics may include an address of an accessed storage location; a count of a number of times the storage location is accessed; a purpose value indicating whether the storage location is accessed for a fetch, store, or update operation; a count of a number of times the storage location is accessed for one or more of a fetch, store, or update operation; a translation mode in which the storage location is accessed; and an addressing mode.
Type: Application
Filed: June 30, 2014
Publication date: December 31, 2015
Inventors: Dan F. Greiner, Michael Karl Gschwind, Valentina Salapura, Timothy J. Slegel
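The characteristics listed in this abstract can be pictured as a small per-address record. The sketch below is purely illustrative — the record and collector names, and the string-based purpose values, are assumptions for exposition, not structures from the patent.

```python
from dataclasses import dataclass

# Hypothetical record mirroring the characteristics the abstract lists;
# field and class names are illustrative, not from the patent.
@dataclass
class OperandAccessRecord:
    address: int
    fetch_count: int = 0
    store_count: int = 0
    update_count: int = 0

    @property
    def total_accesses(self) -> int:
        # Count of all accesses to this storage location, any purpose.
        return self.fetch_count + self.store_count + self.update_count


class AccessCollector:
    """Collects per-address access characteristics during a (simulated) transaction."""

    def __init__(self):
        self.records = {}  # address -> OperandAccessRecord

    def access(self, address: int, purpose: str) -> None:
        # 'purpose' plays the role of the abstract's purpose value
        # (fetch, store, or update).
        rec = self.records.setdefault(address, OperandAccessRecord(address))
        if purpose == "fetch":
            rec.fetch_count += 1
        elif purpose == "store":
            rec.store_count += 1
        elif purpose == "update":
            rec.update_count += 1


collector = AccessCollector()
collector.access(0x1000, "fetch")
collector.access(0x1000, "store")
collector.access(0x2000, "update")
rec = collector.records[0x1000]
```

On abort or completion, such records could be dumped as the "stored characteristics" the abstract refers to.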
-
Publication number: 20150378897
Abstract: Throttling instruction execution in a transaction operating in a processor configured to execute memory instructions out-of-order in a pipelined processor is provided, wherein memory instructions are instructions for accessing operands in memory. Included is executing, by the processor, instructions of a transaction, comprising determining whether the transaction is in throttling mode and, based on the transaction being in throttling mode, executing memory instructions in program order. Also included is, based on the transaction not being in throttling mode, executing memory instructions out of program order.
Type: Application
Filed: September 2, 2015
Publication date: December 31, 2015
Inventors: Michael Karl Gschwind, Valentina Salapura, Eric M. Schwarz, Chung-Lung K. Shum, Timothy J. Slegel
-
Publication number: 20150378898
Abstract: A higher level shared cache of a hierarchical cache of a multi-processor system utilizes transaction identifiers to manage memory conflicts in corresponding transactions. The higher level cache is shared with two or more processors. A processor may have a corresponding accelerator that performs operations on behalf of the processor. Transaction indicators are set in the higher level cache corresponding to the cache lines being accessed. The transaction aborts if a memory conflict with the transaction's cache lines from another transaction is detected, and the corresponding cache lines are invalidated. For a successfully completing transaction, the corresponding cache lines are committed and the data from store operations is stored.
Type: Application
Filed: June 27, 2014
Publication date: December 31, 2015
Inventors: Fadi Y. Busaba, Michael Karl Gschwind, Eric M. Schwarz, Chung-Lung K. Shum
-
Publication number: 20150378899
Abstract: A higher level shared cache of a hierarchical cache of a multi-processor system utilizes transaction identifiers to manage memory conflicts in corresponding transactions. The higher level cache is shared with two or more processors. A processor may have a corresponding accelerator that performs operations on behalf of the processor. Transaction indicators are set in the higher level cache corresponding to the cache lines being accessed. The transaction aborts if a memory conflict with the transaction's cache lines from another transaction is detected, and the corresponding cache lines are invalidated. For a successfully completing transaction, the corresponding cache lines are committed and the data from store operations is stored.
Type: Application
Filed: September 9, 2015
Publication date: December 31, 2015
Inventors: Fadi Y. Busaba, Michael Karl Gschwind, Eric M. Schwarz, Chung-Lung K. Shum
-
Publication number: 20150378900
Abstract: Buffering a memory operand from a co-processor associated with a processor as a speculative store in a store accumulator of the processor, the processor including a cache and the store accumulator, the store accumulator buffering memory operands for writing to a higher level cache. Executing a transactional memory transaction, including the following. Suspending storing of memory operands in the store accumulator entries into the next higher level cache. Initiating a co-processor operation. Accumulating co-processor memory operands into corresponding locations of one or more queue entries of the accumulator associated with the transaction, and not storing the co-processor memory operands into the cache. Based on the transaction ending, storing accumulated co-processor memory operands from the one or more store accumulator entries associated with the transaction into the next higher level cache.
Type: Application
Filed: June 27, 2014
Publication date: December 31, 2015
Inventors: Jonathan D. Bradbury, Michael Karl Gschwind, Eric M. Schwarz, Chung-Lung K. Shum, Timothy J. Slegel
-
Publication number: 20150378901
Abstract: Buffering a memory operand from a co-processor associated with a processor as a speculative store in a store accumulator of the processor, the processor including a cache and the store accumulator, the store accumulator buffering memory operands for writing to a higher level cache. Executing a transactional memory transaction, including the following. Suspending storing of memory operands in the store accumulator entries into the next higher level cache. Initiating a co-processor operation. Accumulating co-processor memory operands into corresponding locations of one or more queue entries of the accumulator associated with the transaction, and not storing the co-processor memory operands into the cache. Based on the transaction ending, storing accumulated co-processor memory operands from the one or more store accumulator entries associated with the transaction into the next higher level cache.
Type: Application
Filed: August 13, 2015
Publication date: December 31, 2015
Inventors: Jonathan D. Bradbury, Michael Karl Gschwind, Eric M. Schwarz, Chung-Lung K. Shum, Timothy J. Slegel
-
Publication number: 20150378902
Abstract: Determining, by a processor having a cache, if data in the cache is to be monitored for cache coherency conflicts in a transactional memory (TM) environment. A processor executes a TM transaction, that includes the following. Executing a memory data access instruction that accesses an operand at an operand memory address. Based on either a prefix instruction associated with the memory data access instruction, or an operand tag associated with the operand of the memory data access instruction, determining whether a cache entry having the operand is to be marked for monitoring for cache coherency conflicts while the processor is executing the transaction. Based on determining that the cache entry is to be marked for monitoring for cache coherency conflicts while the processor is executing the transaction, marking the cache entry for monitoring for conflicts.
Type: Application
Filed: June 27, 2014
Publication date: December 31, 2015
Inventors: Michael Karl Gschwind, Eric M. Schwarz, Chung-Lung K. Shum, Timothy J. Slegel
-
Publication number: 20150378903
Abstract: Monitoring, by a processor having a cache, addresses accessed by a co-processor associated with the processor during transactional execution of a transaction by the processor. The processor executes a transactional memory (TM) transaction, including receiving, by the processor, a memory address range of data that a co-processor may access to perform a co-processor operation. The processor saves the memory address range. Based on receiving, by the processor, a cache coherency request that conflicts with the saved address range, the processor aborts the TM transaction.
Type: Application
Filed: June 27, 2014
Publication date: December 31, 2015
Inventors: Jonathan D. Bradbury, Michael Karl Gschwind, Eric M. Schwarz, Chung-Lung K. Shum, Timothy J. Slegel
-
Publication number: 20150378904
Abstract: A processor in a multi-processor configuration is configured to execute an instruction that specifies a virtual address range to be monitored to protect reads in a transaction. The processor translates the virtual address range to a series of real pages. The real starting address and ending address pairs for each real page are stored for use later on to resolve a potential cross-interrogation (XI) conflict with a real address on the XI bus.
Type: Application
Filed: June 27, 2014
Publication date: December 31, 2015
Inventors: Michael Karl Gschwind, Eric M. Schwarz, Chung-Lung K. Shum, Timothy J. Slegel
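The translation step this abstract describes — splitting a virtual range into per-page real (start, end) pairs and checking a snooped real address against them — can be sketched as follows. This is a minimal model, assuming a 4 KiB page size and a caller-supplied `translate` function; none of these names come from the patent.

```python
PAGE_SIZE = 4096  # assumed page size for illustration

def real_pairs(vstart, vend, translate):
    """Split virtual range [vstart, vend] into per-page real (start, end) pairs.
    `translate` maps a virtual page base to its real page base (a stand-in
    for the processor's address translation)."""
    pairs = []
    page = vstart & ~(PAGE_SIZE - 1)  # base of the first page in the range
    while page <= vend:
        rbase = translate(page)
        lo = max(vstart, page)                 # first covered byte in this page
        hi = min(vend, page + PAGE_SIZE - 1)   # last covered byte in this page
        pairs.append((rbase + (lo - page), rbase + (hi - page)))
        page += PAGE_SIZE
    return pairs

def xi_conflict(pairs, real_addr):
    """True if a real address seen on the XI bus falls inside any stored pair."""
    return any(lo <= real_addr <= hi for lo, hi in pairs)

# Toy translation: real pages sit at a fixed offset from virtual pages.
translate = lambda vpage: vpage + 0x100000
pairs = real_pairs(0x1F00, 0x2100, translate)
```

A range crossing a page boundary yields one pair per page, so a single comparison per pair resolves the potential conflict.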
-
Publication number: 20150378905
Abstract: Monitoring, by a processor having a cache, addresses accessed by a co-processor associated with the processor during transactional execution of a transaction by the processor. The processor executes a transactional memory (TM) transaction, including receiving, by the processor, a memory address range of data that a co-processor may access to perform a co-processor operation. The processor saves the memory address range. Based on receiving, by the processor, a cache coherency request that conflicts with the saved address range, the processor aborts the TM transaction.
Type: Application
Filed: August 13, 2015
Publication date: December 31, 2015
Inventors: Jonathan D. Bradbury, Michael Karl Gschwind, Eric M. Schwarz, Chung-Lung K. Shum, Timothy J. Slegel
-
Publication number: 20150378906
Abstract: Determining, by a processor having a cache, if data in the cache is to be monitored for cache coherency conflicts in a transactional memory (TM) environment. A processor executes a TM transaction, that includes the following. Executing a memory data access instruction that accesses an operand at an operand memory address. Based on either a prefix instruction associated with the memory data access instruction, or an operand tag associated with the operand of the memory data access instruction, determining whether a cache entry having the operand is to be marked for monitoring for cache coherency conflicts while the processor is executing the transaction. Based on determining that the cache entry is to be marked for monitoring for cache coherency conflicts while the processor is executing the transaction, marking the cache entry for monitoring for conflicts.
Type: Application
Filed: August 19, 2015
Publication date: December 31, 2015
Inventors: Michael Karl Gschwind, Eric M. Schwarz, Chung-Lung K. Shum, Timothy J. Slegel
-
Publication number: 20150378907
Abstract: A transactional memory system predicts the outcome of coalescing outermost memory transactions, the coalescing causing committing of memory store data to memory for a first transaction to be done at transaction execution (TX) end of a second transaction. A processor of the transactional memory system determines whether a first plurality of outermost transactions from an associated program that were coalesced experienced an abort, the first plurality of outermost transactions including a first instance of a first transaction. The processor updates a history of the associated program to reflect the results of the determination. The processor coalesces a second plurality of outermost transactions from the associated program, based, at least in part, on the updated history.
Type: Application
Filed: September 4, 2015
Publication date: December 31, 2015
Inventors: Fadi Y. Busaba, Harold W. Cain, III, Michael Karl Gschwind, Maged M. Michael, Eric M. Schwarz
-
Publication number: 20150378908
Abstract: A processor in a multi-processor configuration is configured to execute an instruction that specifies a virtual address range to be monitored to protect reads in a transaction. The processor translates the virtual address range to a series of real pages. The real starting address and ending address pairs for each real page are stored for use later on to resolve a potential cross-interrogation (XI) conflict with a real address on the XI bus.
Type: Application
Filed: September 9, 2015
Publication date: December 31, 2015
Inventors: Michael Karl Gschwind, Eric M. Schwarz, Chung-Lung K. Shum, Timothy J. Slegel
-
Publication number: 20150378909
Abstract: A controller receives a request to perform staging or destaging operations with respect to an area of a cache. A determination is made as to whether more than a threshold number of discard scans are waiting to be performed. The controller avoids satisfying the request to perform the staging or the destaging operations or a read hit with respect to the area of the cache, in response to determining that more than the threshold number of discard scans are waiting to be performed.
Type: Application
Filed: September 9, 2015
Publication date: December 31, 2015
Inventors: Michael T. Benhase, Lokesh M. Gupta, Matthew J. Kalos
-
Publication number: 20150378910
Abstract: A higher level shared cache of a hierarchical cache of a multi-processor system utilizes transaction identifiers to manage memory conflicts in corresponding transactions. The higher level cache is shared with two or more processors. Transaction indicators are set in the higher level cache corresponding to the cache lines being accessed. The transaction aborts if a memory conflict with the transaction's cache lines from another transaction is detected.
Type: Application
Filed: September 2, 2015
Publication date: December 31, 2015
Inventors: Fadi Y. Busaba, Michael Karl Gschwind, Eric M. Schwarz, Chung-Lung K. Shum
-
Publication number: 20150378911
Abstract: A computer allows non-cacheable loads or stores in a hardware transactional memory environment. Transactional loads or stores, by a processor, are monitored in a cache for TX conflicts. The processor accepts a request to execute a transactional execution (TX) transaction. Based on processor execution of a cacheable load or store instruction for loading or storing first memory data of the transaction, the computer can perform a cache miss operation on the cache. Based on processor execution of a non-cacheable load instruction for loading second memory data of the transaction, the computer can not-perform the cache miss operation on the cache based on a cache line associated with the second memory data being not-cached, and load an address of the second memory data into a non-cache-monitor. The TX transaction can be aborted based on the non-cache monitor detecting a memory conflict from another processor.
Type: Application
Filed: September 9, 2015
Publication date: December 31, 2015
Inventors: Jonathan D. Bradbury, Michael Karl Gschwind, Valentina Salapura, Chung-Lung K. Shum
-
Publication number: 20150378912
Abstract: Throttling instruction execution in a transaction operating in a processor configured to execute memory instructions out-of-order in a pipelined processor is provided, wherein memory instructions are instructions for accessing operands in memory. Included is executing, by the processor, instructions of a transaction, comprising determining whether the transaction is in throttling mode and, based on the transaction being in throttling mode, executing memory instructions in program order. Also included is, based on the transaction not being in throttling mode, executing memory instructions out of program order.
Type: Application
Filed: June 27, 2014
Publication date: December 31, 2015
Inventors: Michael Karl Gschwind, Valentina Salapura, Eric M. Schwarz, Chung-Lung K. Shum, Timothy J. Slegel
-
Publication number: 20150378913
Abstract: A memory system includes a plurality of memory nodes provided at different hierarchical levels of the memory system, each of the memory nodes including a corresponding memory storage and a cache. A memory node at a first of the different hierarchical levels is coupled to a processor with lower communication latency than a memory node at a second of the different hierarchical levels. The memory nodes are to cooperate to decide which of the memory nodes is to cache data of a given one of the memory nodes.
Type: Application
Filed: March 20, 2013
Publication date: December 31, 2015
Inventors: Norman P. Jouppi, Sheng Li, Ke Chen
-
Publication number: 20150378914
Abstract: Embodiments are disclosed for implementing a priority queue in a storage device, e.g., a solid state drive. At least some of the embodiments can use an in-memory block to store items until the block is full, and commit the full block to the storage device. Upon storing a full block, a block having a lowest priority can be deleted. An index storing correspondences between items and blocks can be used to update priorities and indicate deleted items. By using the in-memory blocks and index, operations transmitted to the storage device can be reduced.
Type: Application
Filed: June 26, 2014
Publication date: December 31, 2015
Inventors: Wyatt Andrew Lloyd, Linpeng Tang, Qi Huang
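The batching scheme this abstract outlines — buffer items in memory, commit a full block, evict the lowest-priority block, and track item-to-block correspondences in an index — can be sketched roughly as below. Everything here (class name, the list standing in for device storage, the eviction-by-best-priority rule) is an illustrative assumption, not the patented design.

```python
class BlockedPriorityQueue:
    """Sketch: buffer (priority, item) pairs in an in-memory block; when the
    block fills, commit it to 'storage' (a dict standing in for the SSD) and,
    if over capacity, drop the block whose best priority is lowest."""

    def __init__(self, block_size=4, max_blocks=2):
        self.block_size = block_size
        self.max_blocks = max_blocks
        self.current = []   # in-memory block of (priority, item)
        self.storage = {}   # block id -> committed block (sorted high-to-low)
        self.next_id = 0
        self.index = {}     # item -> block id, or "mem" while buffered

    def put(self, item, priority):
        self.current.append((priority, item))
        self.index[item] = "mem"
        if len(self.current) == self.block_size:
            self._commit()

    def _commit(self):
        block = sorted(self.current, reverse=True)  # highest priority first
        bid = self.next_id
        self.next_id += 1
        self.storage[bid] = block
        for _, item in block:
            self.index[item] = bid
        self.current = []
        if len(self.storage) > self.max_blocks:
            # Evict the block whose highest priority is lowest overall.
            worst = min(self.storage, key=lambda b: self.storage[b][0][0])
            for _, item in self.storage.pop(worst):
                self.index.pop(item, None)


q = BlockedPriorityQueue(block_size=4, max_blocks=1)
for i in range(1, 9):
    q.put(f"item{i}", i)
```

Because only whole blocks are committed or deleted, the number of operations sent to the device scales with blocks rather than individual items, which matches the abstract's stated goal.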
-
Publication number: 20150378915
Abstract: Throttling execution in a transaction operating in a processor configured to execute memory instructions out-of-program-order in a pipelined processor, wherein memory instructions are instructions for accessing operands in memory. Included is executing instructions of a transaction. Also included is determining whether the transaction is in throttling mode and, based on determining that a transaction is in throttling mode, executing memory instructions in-program-order and dynamically prefetching memory operands of memory instructions.
Type: Application
Filed: June 27, 2014
Publication date: December 31, 2015
Inventor: Michael Karl Gschwind
-
Publication number: 20150378916
Abstract: Various embodiments mitigate busy time in a hierarchical store-through memory cache structure including a cache directory associated with a memory cache. The cache directory is divided into a plurality of portions each associated with a portion of memory cache. A determination is made that a first subpipe of a shared cache pipeline comprises a non-store request. The shared pipeline is communicatively coupled to the plurality of portions of the cache directory. A store command is prevented from being placed in a second subpipe of the shared cache pipeline based on determining that a first subpipe of the shared cache pipeline comprises a non-store request. Simultaneous cache lookup operations are supported between the plurality of portions of the cache directory and cache write operations. Two or more store commands are simultaneously processed in a shared cache pipeline communicatively coupled to the plurality of portions of the cache directory.
Type: Application
Filed: September 8, 2015
Publication date: December 31, 2015
Applicant: International Business Machines Corporation
Inventors: Deanna P. Berger, Michael F. Fee, Christine C. Jones, Arthur J. O'Neill, Diana L. Orf, Robert J. Sonnelitter, III
-
Publication number: 20150378917
Abstract: Discontiguous storage locations are prefetched by a prefetch instruction. Addresses of the discontiguous storage locations are provided by a list directly or indirectly specified by a parameter of the prefetch instruction, along with metadata and information about the list entries. Fetching of corresponding data blocks to cache lines is initiated. A processor may enter transactional execution mode and memory instructions of a program may be executed using the prefetched data blocks.
Type: Application
Filed: June 30, 2014
Publication date: December 31, 2015
Inventors: Fadi Y. Busaba, Dan F. Greiner, Michael Karl Gschwind, Maged M. Michael, Valentina Salapura, Eric M. Schwarz, Timothy J. Slegel
-
Publication number: 20150378918
Abstract: Transactional execution of a transaction beginning instruction initiates prefetching, by a CPU, of discontiguous storage locations specified by a list. The list includes entries specifying addresses and may also include corresponding metadata. The list may be specified by levels of indirection. Fetching of corresponding discontiguous cache lines is initiated while in TX mode. Additional instructions in the transaction may be executed and use the prefetched cache lines.
Type: Application
Filed: June 30, 2014
Publication date: December 31, 2015
Inventors: Fadi Y. Busaba, Dan F. Greiner, Michael Karl Gschwind, Maged M. Michael, Valentina Salapura, Eric M. Schwarz, Timothy J. Slegel
-
Publication number: 20150378919
Abstract: A memory subsystem includes a memory hierarchy that performs selective prefetching based on prefetch hints. A lower level memory detects a cache miss for a requested cache line that is part of a superline. The lower level memory generates a request vector for the cache line that triggered the cache miss, including a field for each cache line of the superline. The request vector includes a demand request for the cache line that caused the cache miss, and the lower level memory modifies the request vector with prefetch hint information. The prefetch hint information can indicate a prefetch request for one or more other cache lines in the superline. The lower level memory sends the request vector to the higher level memory with the prefetch hint information, and the higher level memory services the demand request and selectively either services a prefetch hint or drops the prefetch hint.
Type: Application
Filed: June 30, 2014
Publication date: December 31, 2015
Inventors: Aravindh V. Anantaraman, Zvika Greenfield, Anant V. Nori, Julius Yuli Mandelblat
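The request vector this abstract describes — one field per cache line in the superline, carrying a demand request plus optional prefetch hints that the higher level may honor or drop — can be modeled simply. The superline width and the string markers below are assumptions for illustration only.

```python
SUPERLINE_LINES = 8  # assumed number of cache lines per superline

def build_request_vector(miss_line, prefetch_lines):
    """One field per cache line in the superline: 'demand' for the line that
    missed, 'prefetch' hints for the others, None where nothing is requested."""
    vec = [None] * SUPERLINE_LINES
    vec[miss_line] = "demand"
    for line in prefetch_lines:
        if vec[line] is None:  # never downgrade the demand request to a hint
            vec[line] = "prefetch"
    return vec

def service(vec, accept_prefetch):
    """Higher-level memory: the demand is always serviced; prefetch hints are
    serviced or dropped depending on `accept_prefetch` (e.g., bandwidth)."""
    return [i for i, field in enumerate(vec)
            if field == "demand" or (field == "prefetch" and accept_prefetch)]


vec = build_request_vector(miss_line=2, prefetch_lines=[3, 4])
```

The key property is that dropping the hints never affects correctness, only the later miss rate.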
-
Publication number: 20150378920
Abstract: In one embodiment, an improved graphics data cache prefetcher includes a cache prefetch unit and a prefetch determination unit (PDU). The PDU determines if there is space available to retrieve some or all of the resources for an upcoming graphics operation while a current graphics operation is processed, and whether the retrieval can be performed without impacting the performance of the current operation. If there is space available to retrieve some or all of the upcoming operation's resources into one or more GPU caches, the prefetch determination unit programs the cache prefetch unit to retrieve the data into the one or more caches before performing the upcoming operation.
Type: Application
Filed: June 30, 2014
Publication date: December 31, 2015
Inventors: John G. Gierach, Travis T. Schluessler
-
Publication number: 20150378921
Abstract: A cache automation module detects the deployment of storage resources in a virtual computing environment and, in response, automatically configures cache services for the detected storage resources. The automation module may detect new storage resources by monitoring storage operations and/or requests, by use of an interface provided by virtualization infrastructure, and/or the like. The cache automation module may deterministically identify storage resources that are to be cached and automatically configure caching services for the identified storage resources.
Type: Application
Filed: July 16, 2014
Publication date: December 31, 2015
Applicant: FUSION-IO, INC.
Inventors: Jaidil Karippara, Pavan Pamula, Yuepeng Feng, Vikuto Atoka Sema
-
Publication number: 20150378922
Abstract: A transactional execution of a set of instructions in a transaction of a program may be initiated to collect memory operand access characteristics of a set of instructions of a transaction during the transactional execution. The memory operand access characteristics may be stored upon a termination of the transactional execution of the set of instructions. The memory operand access characteristics may include an address of an accessed storage location; a count of a number of times the storage location is accessed; a purpose value indicating whether the storage location is accessed for a fetch, store, or update operation; a count of a number of times the storage location is accessed for one or more of a fetch, store, or update operation; a translation mode in which the storage location is accessed; and an addressing mode.
Type: Application
Filed: August 20, 2015
Publication date: December 31, 2015
Inventors: Dan F. Greiner, Michael Karl Gschwind, Valentina Salapura, Timothy J. Slegel
-
Publication number: 20150378923
Abstract: Embodiments of the current invention permit a user to allocate cache memory to main memory more efficiently. The processor or a user allocates the cache memory and associates the cache memory to the main memory location, but suppresses or bypasses reading the main memory data into the cache memory. Some embodiments of the present invention permit the user to specify how many cache lines are allocated at a given time. Further, embodiments of the present invention may initialize the cache memory to a specified pattern. The cache memory may be zeroed or set to some desired pattern, such as all ones. Alternatively, a user may determine the initialization pattern through the processor.
Type: Application
Filed: August 26, 2015
Publication date: December 31, 2015
Inventors: Steven Gerard LeMire, Vuong Cao Nguyen
-
Publication number: 20150378924
Abstract: A tool for determining eviction of store cache entries based on store pressure. The tool determines, by one or more computer processors, a count value for one or more new store cache entry allocations. The tool determines, by one or more computer processors, whether a new store cache entry allocation limit is exceeded. Responsive to determining the new store cache entry allocation limit is exceeded, the tool determines, by one or more computer processors, an allocation value for one or more existing store cache entries, the allocation value indicating an allocation class for each of the one or more existing store cache entries. The tool determines, by one or more computer processors, based, at least in part, on the allocation value for the one or more existing store cache entries, at least one allocation class for eviction. The tool determines, by one or more computer processors, an eviction request setting for evicting the one or more existing store cache entries.
Type: Application
Filed: June 25, 2014
Publication date: December 31, 2015
Inventors: Uwe Brandt, Willm Hinrichs, Walter Lipponer, Martin Recktenwald, Hans-Werner Tast
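The decision flow in this abstract — count new allocations, compare against a limit, and only then pick an allocation class to evict from — can be sketched as a small function. The "evict from the most-populated class" heuristic below is an assumption chosen for illustration; the patent does not specify this policy.

```python
from collections import Counter

def choose_eviction_class(entries, new_alloc_count, limit):
    """entries: list of (entry_id, allocation_class) for existing store cache
    entries. Returns the allocation class to evict from when the new-entry
    allocation limit is exceeded, else None (no eviction pressure).

    Policy is hypothetical: evict from the most-populated class."""
    if new_alloc_count <= limit:
        return None  # under the limit: no eviction needed
    counts = Counter(cls for _, cls in entries)
    return counts.most_common(1)[0][0]


entries = [(1, "A"), (2, "B"), (3, "A")]
```

An eviction request setting would then be raised only for entries in the returned class.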
-
Publication number: 20150378925
Abstract: The present disclosure relates to caches, methods, and systems for using an invalidation data area. The cache can include a journal configured for tracking data blocks, and an invalidation data area configured for tracking invalidated data blocks associated with the data blocks tracked in the journal. The invalidation data area can be on a separate cache region from the journal. A method for invalidating a cache block can include determining a journal block tracking a memory address associated with a received write operation. The method can also include determining a mapped journal block based on the journal block and on an invalidation record. The method can also include determining whether write operations are outstanding. If so, the method can include aggregating the outstanding write operations and performing a single write operation based on the aggregated write operations.
Type: Application
Filed: June 26, 2014
Publication date: December 31, 2015
Inventor: Pulkit Misra
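The aggregation step at the end of this abstract — merging outstanding invalidation writes so a single write is issued — can be illustrated with bitmask records. The (journal block, bitmask) representation is an assumption for the sketch, not the patent's on-disk format.

```python
def aggregate_invalidations(outstanding):
    """Merge outstanding invalidation records into one write payload.

    outstanding: list of (journal_block, invalid_mask) pairs, where each bit
    in invalid_mask marks a cache block within that journal block as invalid.
    Records for the same journal block are OR-ed together, so one write per
    journal block replaces many individual writes."""
    merged = {}
    for journal_block, invalid_mask in outstanding:
        merged[journal_block] = merged.get(journal_block, 0) | invalid_mask
    return sorted(merged.items())


payload = aggregate_invalidations([(1, 0b001), (2, 0b100), (1, 0b010)])
```

Three outstanding records collapse into two entries, i.e., a single aggregated write to the invalidation data area.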
-
Publication number: 20150378926
Abstract: A higher level shared cache of a hierarchical cache of a multi-processor system utilizes transaction identifiers to manage memory conflicts in corresponding transactions. The higher level cache is shared with two or more processors. Transaction indicators are set in the higher level cache corresponding to the cache lines being accessed. The transaction aborts if a memory conflict with the transaction's cache lines from another transaction is detected.
Type: Application
Filed: June 27, 2014
Publication date: December 31, 2015
Inventors: Fadi Y. Busaba, Michael Karl Gschwind, Eric M. Schwarz, Chung-Lung K. Shum
-
Publication number: 20150378927
Abstract: A computer allows non-cacheable loads or stores in a hardware transactional memory environment. Transactional loads or stores, by a processor, are monitored in a cache for TX conflicts. The processor accepts a request to execute a transactional execution (TX) transaction. Based on processor execution of a cacheable load or store instruction for loading or storing first memory data of the transaction, the computer can perform a cache miss operation on the cache. Based on processor execution of a non-cacheable load instruction for loading second memory data of the transaction, the computer can not-perform the cache miss operation on the cache based on a cache line associated with the second memory data being not-cached, and load an address of the second memory data into a non-cache-monitor. The TX transaction can be aborted based on the non-cache monitor detecting a memory conflict from another processor.
Type: Application
Filed: June 27, 2014
Publication date: December 31, 2015
Inventors: Jonathan D. Bradbury, Michael Karl Gschwind, Valentina Salapura, Chung-Lung K. Shum
-
Publication number: 20150378928
Abstract: Managing cache evictions during transactional execution of a process. Based on initiating transactional execution of a memory data accessing instruction, memory data is fetched from a memory location, the memory data to be loaded as a new line into a cache entry of the cache. Based on determining that a threshold number of cache entries have been marked as read-set cache lines, determining whether a cache entry that is a read-set cache line can be replaced by identifying a cache entry that is a read-set cache line for the transaction that contains memory data from a memory address within a predetermined non-conflict address range. Then invalidating the identified cache entry of the transaction. Then loading the fetched memory data into the identified cache entry, and then marking the identified cache entry as a read-set cache line of the transaction.
Type: Application
Filed: August 12, 2015
Publication date: December 31, 2015
Inventors: Dan F. Greiner, Michael Karl Gschwind, Eric M. Schwarz, Chung-Lung K. Shum, Timothy J. Slegel
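The selection rule in this abstract — once the read set reaches a threshold, replace only a read-set line whose address falls in a known non-conflict range — reduces to a simple search. The flat address list and inclusive range bounds below are simplifying assumptions for illustration.

```python
def pick_replaceable(read_set, non_conflict_lo, non_conflict_hi, threshold):
    """If the transaction's read set has reached `threshold` entries, return
    the address of a read-set entry inside the non-conflict range (safe to
    invalidate and reuse for the new line); otherwise return None."""
    if len(read_set) < threshold:
        return None  # still room: no need to replace a read-set line
    for addr in read_set:
        if non_conflict_lo <= addr <= non_conflict_hi:
            return addr
    return None  # every read-set line might conflict; cannot safely replace


read_set = [0x10, 0x5000, 0x20]
```

A hit lets the cache invalidate that entry, load the new line there, and re-mark it as part of the read set, as the abstract describes.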
-
Publication number: 20150378929
Abstract: A computational device maintains a first type of cache and a second type of cache. The computational device receives a command from the host to release space. The computational device synchronously discards tracks from the first type of cache, and asynchronously discards tracks from the second type of cache.
Type: Application
Filed: September 11, 2015
Publication date: December 31, 2015
Inventors: Michael T. Benhase, Lokesh M. Gupta
-
Publication number: 20150378930
Abstract: Systems and methods for validating virtual address translation. An example processing system comprises: a processing core to execute a first application associated with a first privilege level and a second application associated with a second privilege level, wherein a first set of privileges associated with the first privilege level includes a second set of privileges associated with the second privilege level; and an address validation component to validate, in view of an address translation data structure maintained by the first application, a mapping of a first address defined in a first address space of the second application to a second address defined in a second address space of the second application.
Type: Application
Filed: June 27, 2014
Publication date: December 31, 2015
Inventors: Ravi L. Sahita, Gilbert Neiger, David M. Durham, Vedvyas Shanbhogue, Michael LeMay, Ido Ouziel, Stanislav Shwartsman, Barry Huntley, Andrew V. Anderson
-
Publication number: 20150378931
Abstract: A system, method, and computer program product are provided for implementing shared reference counters among a plurality of virtual storage devices. The method includes the steps of allocating a first portion of a real storage device to store data, wherein the first portion is divided into a plurality of blocks of memory, and allocating a second portion of the real storage device to store a plurality of reference counters that correspond to the plurality of blocks of memory. The reference counters may be updated by two or more virtual storage devices hosted in one or more nodes to manage the allocation of the blocks of memory in the real storage device.
Type: Application
Filed: June 27, 2014
Publication date: December 31, 2015
Inventor: Philip Andrew White
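The core bookkeeping here — one reference counter per data block, incremented by each virtual device that uses the block and decremented on release, with the block reclaimable at zero — can be sketched in a few lines. The class and method names are illustrative assumptions; real implementations would also need atomic updates across nodes, which this sketch omits.

```python
class RefCountedBlocks:
    """Sketch of shared reference counters: one counter per data block in the
    real storage device; a block is freeable when its count drops to zero."""

    def __init__(self, num_blocks):
        self.refcounts = [0] * num_blocks

    def acquire(self, block):
        # A virtual storage device takes a reference to the block.
        self.refcounts[block] += 1

    def release(self, block):
        # Drop one reference; returns True when the block can be reclaimed.
        assert self.refcounts[block] > 0, "release without matching acquire"
        self.refcounts[block] -= 1
        return self.refcounts[block] == 0


blocks = RefCountedBlocks(num_blocks=4)
blocks.acquire(1)
blocks.acquire(1)  # two virtual devices share block 1
```

Storing the counters in a dedicated region of the same real device, as the abstract describes, lets any hosting node update them without a separate metadata service.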
-
Publication number: 20150378932Abstract: A system and method for executing user-provided code securely on a solid state drive (SSD) to perform data processing on the SSD. In one embodiment, a user uses a security-oriented cross-compiler to compile user-provided source code for a data processing task on a host computer containing, or otherwise connected to, an SSD. The resulting binary is combined with lists of input and output file identifiers and sent to the SSD. A central processing unit (CPU) on the SSD extracts the binary and the lists of file identifiers. The CPU obtains from the host file system the addresses of storage areas in the SSD containing the data in the input files, reads the input data, executes the binary using a container, and writes the results of the data processing task back to the SSD, in areas corresponding to the output file identifiers.Type: ApplicationFiled: December 5, 2014Publication date: December 31, 2015Inventors: Kamyar Souri, Joao Alcantara, Ricardo Cassia
-
Publication number: 20150378933Abstract: A storage management apparatus configured to allocate physical addresses in a physical storage area to virtual addresses in a virtual storage area for storing data is provided. The storage management apparatus includes a processor that executes a process to define, in the physical storage area, a continuous area having a plurality of continuous physical addresses; define, based on a virtual address to which a physical address in the continuous area has initially been allocated, an allocation range of virtual addresses for allocating the defined continuous area; and allocate a physical address in the defined continuous area to a virtual address in the defined allocation range.Type: ApplicationFiled: May 21, 2015Publication date: December 31, 2015Inventors: Fumihiro Ooba, Shuko Yasumoto, Hisashi Osanai, Shunsuke Motoi, Daisuke Fujita, Tetsuya Nakashima, Eiji Hamamoto
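The allocation scheme above can be sketched as follows: the first virtual address that gets a physical address from a continuous run anchors a range of virtual addresses that will be served from that same run. The class, the run size of 4, and the anchoring rule (aligning the range to the run size) are assumptions for illustration.

```python
class StorageManager:
    """Toy sketch: carve a continuous physical run, anchor it at the first
    virtual address that uses it, and serve nearby virtual addresses from it."""
    RUN = 4  # physical addresses per continuous area (assumed size)

    def __init__(self):
        self.next_phys = 0
        self.mapping = {}    # virtual address -> physical address
        self.areas = []      # (virtual base, physical base) per continuous area

    def allocate(self, vaddr):
        if vaddr in self.mapping:
            return self.mapping[vaddr]
        for vbase, pbase in self.areas:
            if vbase <= vaddr < vbase + self.RUN:   # inside an area's range
                self.mapping[vaddr] = pbase + (vaddr - vbase)
                return self.mapping[vaddr]
        # Define a new continuous area, anchored by this virtual address.
        vbase = (vaddr // self.RUN) * self.RUN
        pbase = self.next_phys
        self.next_phys += self.RUN
        self.areas.append((vbase, pbase))
        self.mapping[vaddr] = pbase + (vaddr - vbase)
        return self.mapping[vaddr]

mgr = StorageManager()
p0 = mgr.allocate(10)   # anchors an area covering virtual addresses 8..11
p1 = mgr.allocate(11)   # falls in the same area: physically adjacent
assert p1 == p0 + 1
```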
-
Publication number: 20150378934Abstract: A method, medium, and system to receive a request to add a resource to a cache, the resource including a data object and a context item key that is associated with the resource and uniquely identifies its context of use; determine whether the resource is stored in the cache; store, in response to a determination that the resource is not stored in the cache, the resource in the cache; and add the context item key of the resource stored in the cache to a reference list of resources.Type: ApplicationFiled: June 26, 2014Publication date: December 31, 2015Inventors: Eyal Nathan, Oleg Kossoy, David Malachi
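A minimal sketch of the add path described above: the data object is stored at most once, while every context item key that references it is appended to a reference list. The class name, resource id, and key format are illustrative assumptions.

```python
class ContextCache:
    """Toy sketch: store each resource once, but record every context item
    key that references it in a reference list."""
    def __init__(self):
        self.store = {}   # resource id -> data object
        self.refs = {}    # resource id -> list of context item keys

    def add(self, rid, data, context_key):
        if rid not in self.store:   # store only if not already cached
            self.store[rid] = data
        self.refs.setdefault(rid, []).append(context_key)

cache = ContextCache()
cache.add("logo.png", b"\x89PNG", "page:home")
cache.add("logo.png", b"\x89PNG", "page:about")   # already cached: key only
assert len(cache.store) == 1
assert cache.refs["logo.png"] == ["page:home", "page:about"]
```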
-
Publication number: 20150378935Abstract: A storage table replacement method uses an index table, a storage table containing multiple rows of storage cells, and a correlation table. The method includes storing information in one or more rows of storage cells in the storage table; and storing track addresses of the storage cells in the storage table in the index table. Every track address includes a row address and a column address. The method further includes recording, in every row in the correlation table, a total number of index rows/index table memory cells that use the row as an index target in the index table and addresses of a certain number of index rows/index table memory cells, where the correlation table and the storage table have a same number of rows; and, when a row of new information is generated, based on the correlation table, selecting and replacing a row in the storage table.Type: ApplicationFiled: January 29, 2014Publication date: December 31, 2015Inventor: KENNETH CHENGHAO LIN
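The replacement policy above can be sketched in miniature: each storage-table row carries a count of index entries that target it, and a new row of information replaces the least-referenced row. The lists below stand in for the storage and correlation tables; the tie-breaking rule (first minimum wins) is an assumption.

```python
# Toy sketch of correlation-driven replacement: each storage-table row has a
# count of index entries that target it; new information replaces the
# least-referenced row.
storage = ["info-A", "info-B", "info-C"]
index_targets = [2, 0, 1]   # correlation table: references per storage row

def replace(new_info):
    victim = index_targets.index(min(index_targets))   # least-referenced row
    storage[victim] = new_info
    index_targets[victim] = 0                          # no index entries yet
    return victim

row = replace("info-D")
assert row == 1
assert storage == ["info-A", "info-D", "info-C"]
```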
-
Publication number: 20150378936Abstract: A system, a method and a computer program product for managing memory access of an avionics control system having at least one control computer having at least one memory control device. The method includes assigning a memory access of at least one unique memory region of at least one memory unit to each of at least one application task or task set. A memory access of at least one application data update task is assigned to at least one subregion of one or more of the at least one unique memory region. At least one data parameter is written to the at least one subregion, and the assigned memory access of the at least one application data update task is de-activated.Type: ApplicationFiled: June 21, 2012Publication date: December 31, 2015Inventors: Torkel DANIELSSON, Jan HÅKEGÅRD, Anders GRIPSBORN, Björn HASSELQVIST
-
Publication number: 20150378937Abstract: The present disclosure relates to systems and methods for locking a storage device to prevent inadvertent modification when the device is mounted on a different system or different host. The method can include selecting, on a storage device, a location and contents of a byte region for locking, where the byte region comprises a boot sector of the device. The method can also include encoding the selected contents of the byte region, and locking the device by replacing the contents of the identified byte region with the encoded byte region at the identified location on the device. In some embodiments, encoding the selected contents of the byte region can include inverting the contents of the selected byte region using a binary not operation. In some embodiments, encoding the selected contents of the byte region can include modifying the selected contents of the byte region based on a generated unique identifier.Type: ApplicationFiled: June 26, 2014Publication date: December 31, 2015Inventors: Henglin YANG, Pulkit MISRA, Jonathan DEPRIZIO
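The binary-NOT embodiment mentioned above is easy to illustrate: inverting a byte region makes the boot sector unreadable, and because NOT is its own inverse, applying it again restores the original contents. The function names and the toy boot-sector bytes are assumptions for illustration.

```python
def lock(image, start, length):
    """Replace a byte region (e.g. boot-sector bytes) with its bitwise NOT."""
    region = image[start:start + length]
    encoded = bytes(~b & 0xFF for b in region)   # binary-NOT encoding
    return image[:start] + encoded + image[start + length:]

def unlock(image, start, length):
    return lock(image, start, length)            # NOT is its own inverse

disk = bytes([0x55, 0xAA, 0x90, 0x90])           # toy boot-sector bytes
locked = lock(disk, 0, 2)
assert locked != disk                            # signature now invalid
assert unlock(locked, 0, 2) == disk              # round-trips exactly
```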
-
Publication number: 20150378938Abstract: A wearable computer system comprising one or more processors, memory, and an attachment accessory is disclosed. The wearable computer system includes one or more removable link components, with the attachment accessory operating to secure the system to the person of a user. The wearable computer system is configured such that the removable link components can be added to or removed from the attachment band, and the capabilities of the wearable computer system change as components are added or removed.Type: ApplicationFiled: June 30, 2014Publication date: December 31, 2015Inventor: Nate L. Lyman
-
Publication number: 20150378939Abstract: A memory mechanism for providing semaphore functionality in a multi-master processing environment is disclosed. An exemplary memory unit includes a memory controller that manages access to a shared memory. The memory controller includes a semaphore context monitor associated with each master having access to the shared memory. A semaphore context monitor associated with a semaphore-capable master is activated by the semaphore-capable master (for example, by exclusive request signal(s) received by memory controller from semaphore-capable master). A semaphore context monitor associated with a non-semaphore-capable master is activated by the memory controller (for example, by exclusive request signal(s) generated by the memory controller).Type: ApplicationFiled: June 27, 2014Publication date: December 31, 2015Applicant: ANALOG DEVICES, INC.Inventor: Ahmed Ali Mohamed
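The two activation paths described above can be sketched as follows: a semaphore-capable master activates its own monitor via an exclusive-request signal, while the memory controller activates the monitor on behalf of a non-semaphore-capable master. The class, the master names, and the signal model are hypothetical stand-ins.

```python
class MemoryController:
    """Toy sketch: one semaphore context monitor per master. Semaphore-capable
    masters assert their own exclusive request; the controller generates the
    request for masters that cannot."""
    def __init__(self, masters):
        # master name -> [is_semaphore_capable, monitor_active]
        self.monitors = {name: [cap, False] for name, cap in masters.items()}

    def request_exclusive(self, master):
        capable, _ = self.monitors[master]
        if capable:
            self.monitors[master][1] = True   # master activates its monitor
        else:
            self._activate_for(master)        # controller activates it instead

    def _activate_for(self, master):
        self.monitors[master][1] = True

mc = MemoryController({"dsp": True, "dma": False})
mc.request_exclusive("dsp")   # semaphore-capable path
mc.request_exclusive("dma")   # controller-driven path
assert all(active for _, active in mc.monitors.values())
```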
-
Publication number: 20150378940Abstract: A computer can manage an interruption while a processor is executing a transaction in a transactional-execution (TX) mode. Execution, in a program context, of the transaction is begun by a processor in TX mode. An interruption request is detected for an interruption, by the processor, in TX mode. The interruption is accepted by the processor to execute a TX compatible routine in a supervisor context for changing supervisor resources. The TX compatible routine is executed within the TX mode. The processor returns to the program context to complete the execution of the transaction. Based on the transaction aborting, the processor does not commit changes to the supervisor resources.Type: ApplicationFiled: June 27, 2014Publication date: December 31, 2015Inventors: Jonathan D. Bradbury, Dan F. Greiner, Michael Karl Gschwind, Chung-Lung K. Shum
-
Publication number: 20150378941Abstract: Instructions and logic interrupt and resume paging in secure enclaves. Embodiments include instructions that specify page addresses allocated to a secure enclave; the instructions are decoded for execution by a processor. The processor includes an enclave page cache to store secure data in a first cache line and in a last cache line for a page corresponding to the page address. A page state is read from the first or last cache line for the page when an entry in an enclave page cache mapping for the page indicates that only a partial page is stored in the enclave page cache. The entry for a partial page may be set, and a new page state may be recorded in the first cache line when writing back, or in the last cache line when loading the page, when the instruction's execution is being interrupted. Thus the writing back or loading can be resumed.Type: ApplicationFiled: June 27, 2014Publication date: December 31, 2015Inventors: Carlos V. Rozas, Ilya Alexandrovich, Gilbert Neiger, Francis X. McKeen, Ittai Anati, Vedvyas Shanbhogue, Shay Gueron
-
Publication number: 20150378942Abstract: A computer can manage an interruption while a processor is executing a transaction in a transactional-execution (TX) mode. Execution, in a program context, of the transaction is begun by a processor in TX mode. An interruption request is detected for an interruption, by the processor, in TX mode. The interruption is accepted by the processor to execute a TX compatible routine in a supervisor context for changing supervisor resources. The TX compatible routine is executed within the TX mode. The processor returns to the program context to complete the execution of the transaction. Based on the transaction aborting, the processor does not commit changes to the supervisor resources.Type: ApplicationFiled: September 4, 2015Publication date: December 31, 2015Inventors: Jonathan D. Bradbury, Dan F. Greiner, Michael Karl Gschwind, Chung-Lung K. Shum
-
Publication number: 20150378943Abstract: A computer implemented method and system for delaying a floating interruption while a processor is in a transactional-execution (TX) mode. A floating interruption mechanism can detect a floating interruption request for one or more floating-interruption-eligible processors. Based on each eligible processor being in TX mode, the method and system can delay performing the floating interruption at a selected processor of the one or more processors for a predetermined period of time. A first processor of the one or more processors can be selected based on the first processor exiting the transactional-execution mode within the predetermined period of time. Based on the predetermined period of time expiring, the method and system can cause an interrupt to one of the processors, and the interrupt can cause that processor to abort a transaction.Type: ApplicationFiled: September 8, 2015Publication date: December 31, 2015Inventors: Dan F. Greiner, Michael Karl Gschwind, Valentina Salapura, Chung-Lung K. Shum
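The grace-period behavior described above can be sketched as a polling loop: wait up to a predetermined period for some processor to leave TX mode, and if none does, force delivery by aborting a transaction. The `Processor` model, the timing constants, and the "remaining TX time" abstraction are assumptions; real hardware would use interrupt masking, not polling.

```python
import time

class Processor:
    def __init__(self, tx_remaining):
        self.tx_remaining = tx_remaining   # seconds left in TX mode (toy model)
        self.aborted = False

def deliver_floating_interrupt(cpus, grace=0.2, poll=0.05):
    """Toy sketch: wait up to `grace` for some CPU to exit TX mode;
    if the period expires, abort a transaction to force delivery."""
    deadline = time.monotonic() + grace
    while time.monotonic() < deadline:
        for cpu in cpus:
            cpu.tx_remaining = max(0.0, cpu.tx_remaining - poll)
            if cpu.tx_remaining == 0.0:    # exited TX: take the interrupt
                return cpu
        time.sleep(poll)
    cpus[0].aborted = True                 # grace expired: abort one CPU's TX
    return cpus[0]

cpus = [Processor(0.1), Processor(5.0)]
chosen = deliver_floating_interrupt(cpus)
assert chosen is cpus[0] and not chosen.aborted   # exited TX within the grace
```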
-
Publication number: 20150378944Abstract: A method of controlling access by a master to a peripheral includes receiving one or more interrupt priority levels from one or more interrupt controllers associated with the peripheral, comparing the one or more interrupt priority levels with respective one or more pre-established interrupt access levels to obtain an interrupt level comparison result, establishing whether an access condition is satisfied in dependence on at least the interrupt level comparison result, and, if the access condition is satisfied, granting access. If the access condition is not satisfied, access is denied. Further, circuitry is described including one or more masters, one or more peripherals, and access control circuitry including one or more interrupt controllers associated with the one or more peripherals. The access control circuitry is arranged to perform a method of controlling access by a master of the one or more masters to a peripheral of the one or more peripherals.Type: ApplicationFiled: February 12, 2013Publication date: December 31, 2015Applicant: FREESCALE SEMICONDUCTOR, INC.Inventors: Alistair ROBERTSON, Carl CULSHAW, Alan DEVINE, Andrei KOVALEV
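The comparison step above can be sketched as a single predicate over the reported priorities and the pre-established access levels. The direction of the comparison (priority must be at or above the access level) is an assumption; the abstract only specifies that a comparison result gates the access condition.

```python
def access_granted(interrupt_priorities, access_levels):
    """Toy sketch: compare each reported interrupt priority with its
    pre-established access level; grant access only if all comparisons pass."""
    comparison = all(p >= lvl
                     for p, lvl in zip(interrupt_priorities, access_levels))
    return comparison   # the access condition follows the comparison result

assert access_granted([5, 3], [4, 2]) is True    # both priorities sufficient
assert access_granted([5, 1], [4, 2]) is False   # second controller too low
```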
-
Publication number: 20150378945Abstract: A computer implemented method and system for evading a floating interruption while a processor is in a transactional-execution (TX) mode. A floating interruption request can be detected, by a floating interrupt control mechanism, for a plurality of processors for execution by any one of the plurality of processors. An evasive action can be initiated for at least one of the plurality of processors in a transactional-execution mode, for evading the floating interruption such that another one of the plurality of processors can execute the floating interruption.Type: ApplicationFiled: September 4, 2015Publication date: December 31, 2015Inventors: Jonathan D. Bradbury, Fadi Y. Busaba, Harold W. Cain, III, Dan F. Greiner, Michael Karl Gschwind, Valentina Salapura, Eric M. Schwarz
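The evasive action described above can be sketched as routing: processors in TX mode decline the floating interruption so that a processor not in TX mode executes it. The dictionary-based CPU model and the fallback to the first CPU when all are in TX mode are assumptions for illustration.

```python
def dispatch_floating_interrupt(cpus):
    """Toy sketch: CPUs in TX mode evade the floating interruption so a
    non-TX CPU executes it instead."""
    eligible = [c for c in cpus if not c["in_tx"]]   # TX CPUs take evasive action
    target = eligible[0] if eligible else cpus[0]    # fallback if all are in TX
    target["took_interrupt"] = True
    return target

cpus = [{"in_tx": True, "took_interrupt": False},
        {"in_tx": False, "took_interrupt": False}]
t = dispatch_floating_interrupt(cpus)
assert t is cpus[1] and t["took_interrupt"]          # non-TX CPU handled it
```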