Decentralized Address Translation, E.g., In Distributed Shared Memory Systems, Etc. (epo) Patents (Class 711/E12.066)
-
Patent number: 12242428
Abstract: Techniques are provided for utilizing a log to free pages from persistent memory. A log is maintained to comprise a list of page block numbers of pages within persistent memory of a node to free. A page block number, of a page, within the log is identified for processing. A reference count, corresponding to a number of references to the page block number, is identified. In response to the reference count being greater than 1, the reference count is decremented and the page block number is removed from the log. In response to the reference count being 1, the page is freed from the persistent memory and the page block number is removed from the log.
Type: Grant. Filed: November 20, 2023. Date of Patent: March 4, 2025. Assignee: NetApp, Inc. Inventors: Rupa Natarajan, Ananthan Subramanian
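A minimal Python sketch of the free-log flow this abstract describes. The class and method names (PageFreeLog, process_one) and the in-memory dictionaries standing in for persistent memory are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of a page-free log driven by reference counts.
# Page block numbers (PBNs) queued in the log are either dereferenced
# (refcount > 1) or actually freed (refcount == 1), then removed from the log.

class PageFreeLog:
    def __init__(self):
        self.log = []            # list of PBNs waiting to be processed
        self.refcount = {}       # PBN -> number of references
        self.persistent = set()  # PBNs currently allocated in "persistent memory"

    def append(self, pbn):
        self.log.append(pbn)

    def process_one(self):
        if not self.log:
            return None
        pbn = self.log.pop(0)
        count = self.refcount.get(pbn, 0)
        if count > 1:
            # Still referenced elsewhere: just drop one reference.
            self.refcount[pbn] = count - 1
            return ("decremented", pbn)
        # Last reference: free the page itself.
        self.refcount.pop(pbn, None)
        self.persistent.discard(pbn)
        return ("freed", pbn)

# Usage example
log = PageFreeLog()
log.persistent.update({10, 11})
log.refcount.update({10: 2, 11: 1})
log.append(10)
log.append(11)
print(log.process_one())  # ('decremented', 10)
print(log.process_one())  # ('freed', 11)
```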
-
Patent number: 12229079
Abstract: A computing system can include a first system on chip (SoC) and a second SoC. Each SoC can comprise a memory in which the SoC publishes state information. For the first SoC, the state information can correspond to a set of tasks being performed by the first SoC, where the first SoC utilizes a plurality of computational components to perform the set of tasks. The second SoC can directly access the memory of the first SoC to dynamically read the state information published by the first SoC. In a backup role, the second SoC maintains a subset of its computational components in a low power state. When the second SoC detects a trigger while reading the state information published in the first memory of the first SoC, the second SoC powers the subset of computational components to take over the set of tasks.
Type: Grant. Filed: May 10, 2023. Date of Patent: February 18, 2025. Assignee: Mercedes-Benz Group AG. Inventor: Francois Piednoel
-
Patent number: 12189603
Abstract: The present disclosure provides techniques and solutions for executing requests for database operations involving a remote data source in a system that includes an anchor node and one or more non-anchor nodes. A first request for one or more database operations is received, where at least a first database operation includes a data request for a remote data object. It is determined that the first database operation is not an insert, delete, or update operation, and therefore is assignable to the anchor node or one of the non-anchor nodes. The first database operation is assigned to a non-anchor node for execution. In a particular implementation, for a particular set of requests for a database operation, once an insert, delete, or update operation is received for the remote data object, subsequent operations for the remote data object in the set of requests are assigned to the anchor node for execution.
Type: Grant. Filed: October 21, 2022. Date of Patent: January 7, 2025. Assignee: SAP SE. Inventors: Won Wook Hong, Joo Yeon Lee, Hyeong Seog Kim, Jane Jung Lee, Younkyoung Lee
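A minimal Python sketch of the assignment rule in this abstract: reads may go to any node, but once a write is seen for a remote object, later operations on that object go to the anchor. The Dispatcher class and round-robin choice of non-anchor nodes are illustrative assumptions, not details from the patent.

```python
# Hypothetical sketch of anchor/non-anchor assignment for remote-object operations.
import itertools

WRITE_OPS = {"insert", "delete", "update"}

class Dispatcher:
    def __init__(self, anchor, non_anchors):
        self.anchor = anchor
        self.non_anchors = itertools.cycle(non_anchors)
        self.pinned_objects = set()   # remote objects that have seen a write

    def assign(self, op, remote_object):
        if op in WRITE_OPS:
            self.pinned_objects.add(remote_object)
            return self.anchor
        if remote_object in self.pinned_objects:
            return self.anchor          # keep subsequent ops on the anchor
        return next(self.non_anchors)   # reads can be spread across non-anchors

d = Dispatcher("anchor", ["node1", "node2"])
print(d.assign("select", "remote_t"))  # node1
print(d.assign("update", "remote_t"))  # anchor
print(d.assign("select", "remote_t"))  # anchor (pinned after the update)
```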
-
Patent number: 12189623
Abstract: A central node can: receive a query comprising at least one parameter comprising a time range of a dataset stored in a cloud storage system; transmit one or more of the query parameters comprising the time range to a metadata service; receive from the metadata service a list of files related to the query; and assign to each processing node of a plurality of processing nodes a subset of the files. Each processing node can: determine that the subset is not stored on a cache; retrieve the subset not stored on the cache from the cloud storage system; store the retrieved subset in a local memory; scan the subset stored in the local memory for data matching the at least one parameter to generate a subset of query results; and concurrently copy, using a thread separate from the scanning, the subset stored in the local memory to the cache.
Type: Grant. Filed: January 13, 2023. Date of Patent: January 7, 2025. Assignee: Sentinel Labs Israel Ltd. Inventor: Steve Newman
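A minimal Python sketch of the processing-node side of this flow: fetch files missing from the cache, scan them in local memory for the time range, and copy the fetched data into the cache on a separate thread. The function name, the dict-based "cache" and "storage", and the row format are assumptions for illustration only.

```python
# Hypothetical sketch of a processing node handling its assigned file subset.
import threading

def process_subset(files, cache, storage, time_range):
    local = {}
    for f in files:
        local[f] = cache[f] if f in cache else storage[f]   # cache miss -> fetch

    # Scan the local copies for rows matching the time-range parameter.
    results = [row for f in files for row in local[f]
               if time_range[0] <= row["ts"] <= time_range[1]]

    # Populate the cache on a separate thread (here it simply runs after the scan).
    copier = threading.Thread(target=lambda: cache.update(local))
    copier.start()
    copier.join()
    return results

storage = {"a": [{"ts": 5}], "b": [{"ts": 50}]}
cache = {}
print(process_subset(["a", "b"], cache, storage, (0, 10)))  # [{'ts': 5}]
print("b" in cache)  # True
```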
-
Patent number: 12019605
Abstract: An index handler determines, with respect to a key to be inserted into an index, that a candidate destination node of the index meets a split criterion. The index handler generates and embeds a deferred split descriptor comprising an identifier of a new node within the destination node. Before an insert-completed indication is provided, the destination node is written to a back-end data store without acquiring a lock and without writing out the new node to the back-end data store. During the traversal of the index, the index handler identifies another deferred split descriptor indicating a second new node. After providing the indication that the key was successfully inserted, the index handler writes the second new node to the back-end data store.
Type: Grant. Filed: April 26, 2019. Date of Patent: June 25, 2024. Assignee: Amazon Technologies, Inc. Inventor: Andrew Ross Evenson
-
Patent number: 12020026
Abstract: An instruction writing method, apparatus, and network device are provided to reduce a requirement for a storage space of a microcode processor. The method includes obtaining, by a first device, first indication information, where the first indication information indicates that the first device is to enable a first service function, and writing, by the first device, a first microcode instruction set corresponding to the first service function into an unused storage space of a target microcode processor in a network processor, where a size of the unused storage space is greater than or equal to a size of the first microcode instruction set.
Type: Grant. Filed: June 14, 2022. Date of Patent: June 25, 2024. Assignee: Huawei Technologies Co., Ltd. Inventors: Taixu Tian, Zhongzhen Wang, Jincong Huang, Wenliang Shen, Weijian Luo
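A minimal Python sketch of the size check the abstract relies on: the microcode instruction set is written only if it fits into the unused storage space. The function name and the byte-string representation of instructions are hypothetical.

```python
# Hypothetical sketch of writing a microcode instruction set into unused space,
# guarded by the "size of unused space >= size of instruction set" condition.

def write_microcode(unused_space_bytes, instruction_set):
    size = sum(len(instr) for instr in instruction_set)
    if size > unused_space_bytes:
        raise MemoryError("instruction set does not fit in unused storage space")
    written = list(instruction_set)     # stand-in for the actual write
    return unused_space_bytes - size, written

remaining, image = write_microcode(64, [b"\x01\x02", b"\x03\x04\x05"])
print(remaining)  # 59 bytes of unused space left after the write
```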
-
Patent number: 12007892
Abstract: Methods and systems are provided for allocating memory. An example method includes: allocating, for an application logic, a region of external primary memory included in a memory appliance; selecting, by a client device in response to a first request to reclaim a first portion of local primary memory in the client device, a portion of external primary memory from the region of external primary memory; copying data from the first portion of local primary memory to the portion of external primary memory; and converting a portion of a first virtual address space at the client device by remapping at least one virtual address in the first virtual address space at the client device from the first portion of local primary memory to the portion of external primary memory.
Type: Grant. Filed: May 19, 2023. Date of Patent: June 11, 2024. Assignee: Kove IP, LLC. Inventors: Timothy A. Stabrawa, Zachary A Cornelius, John Overton, Andrew S. Poling, Jesse Taylor
-
Patent number: 11941030
Abstract: Methods, non-transitory machine readable media, and computing devices that provide more efficient hierarchical propagation in tree structures are disclosed. With this technology, a first delta record for a first interior node is created optionally in an atomic transaction along with updating a first tally record for a leaf node based on a first value. The transaction is in response to an action associated with the leaf node and the first interior node is a parent of the leaf node in a hierarchical tree. A timer associated with the first delta record is then set. A second value is updated in a second tally record for the first interior node based on the first value, when the timer has expired. Accordingly, this technology advantageously maintains recursive properties or values throughout a hierarchical tree continually, with reduced cost, even in a distributed network and in hierarchical trees with large numbers of nodes.
Type: Grant. Filed: December 21, 2022. Date of Patent: March 26, 2024. Assignee: NETAPP, INC. Inventors: Richard Jernigan, Keith Bare, Bill Zumach
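A minimal Python sketch of the delta-record-plus-timer idea: a leaf update creates a pending delta on the parent, and the parent's tally absorbs the delta only when the timer expires. The Node class, field names, and the 10 ms delay are assumptions for illustration; the patent's atomic-transaction and distributed aspects are not modeled.

```python
# Hypothetical sketch of deferred hierarchical propagation with a delta record.
import time

class Node:
    def __init__(self):
        self.tally = 0       # recursive value maintained for this node
        self.delta = 0       # pending, not-yet-propagated change
        self.deadline = None # timer for the pending delta

def record_leaf_change(leaf, parent, value, delay=0.01):
    leaf.tally += value              # update the leaf's tally record
    parent.delta += value            # create/extend the parent's delta record
    if parent.deadline is None:
        parent.deadline = time.time() + delay   # set the timer

def flush_if_expired(parent):
    if parent.deadline is not None and time.time() >= parent.deadline:
        parent.tally += parent.delta             # fold the delta into the tally
        parent.delta, parent.deadline = 0, None

leaf, parent = Node(), Node()
record_leaf_change(leaf, parent, 3)
time.sleep(0.02)
flush_if_expired(parent)
print(leaf.tally, parent.tally)  # 3 3
```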
-
Patent number: 11928367
Abstract: Some embodiments provide a method for, at a network interface controller (NIC) of a computer, accessing data in a network. From the computer, the method receives a request to access data stored at a logical memory address. The method translates the logical memory address into a memory address of a particular network device storing the requested data. The method sends a data message to the particular network device to retrieve the requested data.
Type: Grant. Filed: June 21, 2022. Date of Patent: March 12, 2024. Assignee: VMware LLC. Inventors: Alex Markuze, Shay Vargaftik, Igor Golikov, Yaniv Ben-Itzhak, Avishay Yanai
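A minimal Python sketch of the translation step described here: a logical memory address is mapped to a (network device, device-local offset) pair and a read message is built for that device. The DEVICES table, IP addresses, and message format are hypothetical.

```python
# Hypothetical sketch of NIC-side logical-address-to-device translation.

DEVICES = [("10.0.0.1", 0x0000, 0x4000),   # (device, base, size)
           ("10.0.0.2", 0x4000, 0x4000)]

def translate(logical_addr):
    for device, base, size in DEVICES:
        if base <= logical_addr < base + size:
            return device, logical_addr - base
    raise ValueError("unmapped logical address")

def build_read_message(logical_addr, length):
    device, offset = translate(logical_addr)
    return {"to": device, "op": "read", "offset": offset, "len": length}

print(build_read_message(0x4123, 64))
# {'to': '10.0.0.2', 'op': 'read', 'offset': 291, 'len': 64}
```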
-
Patent number: 11809338
Abstract: In an example, there is disclosed a host-fabric interface (HFI), including: an interconnect interface to communicatively couple the HFI to an interconnect; a network interface to communicatively couple the HFI to a network; network interface logic to provide communication between the interconnect and the network; a coprocessor configured to provide an offloaded function for the network; a memory; and a caching agent configured to: designate a region of the memory as a shared memory between the HFI and a core communicatively coupled to the HFI via the interconnect; receive a memory operation directed to the shared memory; and issue a memory instruction to the memory according to the memory operation.
Type: Grant. Filed: October 4, 2021. Date of Patent: November 7, 2023. Assignee: Intel Corporation. Inventors: Francesc Guim Bernat, Daniel Rivas Barragan, Kshitij A. Doshi, Mark A. Schmisseur
-
Patent number: 11809888
Abstract: A method includes receiving a request to migrate a virtual machine from a source host to a destination host, mapping, by a hypervisor running on the source host, a first portion of a memory of the virtual machine to a persistent memory device, where the persistent memory device is accessible by the source host machine and the destination host machine, responsive to determining that a time period to execute a synchronization operation with respect to the first portion of the memory by the persistent memory device is below a threshold, stopping the virtual machine on the source host, and starting the virtual machine on the destination host.
Type: Grant. Filed: April 29, 2019. Date of Patent: November 7, 2023. Assignee: Red Hat, Inc. Inventor: Michael Tsirkin
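A minimal Python sketch of the migration decision in this abstract: the virtual machine is stopped on the source and started on the destination only once the estimated time to synchronize the persistent-memory-backed portion drops below a threshold. The loop structure, dictionary fields, and threshold value are assumptions, not the patent's implementation.

```python
# Hypothetical sketch: stop/start the VM only when the remaining sync time is small.

def migrate(vm, estimate_sync_seconds, threshold_seconds=0.5):
    # Keep synchronizing until the remaining sync time falls below the threshold.
    while estimate_sync_seconds() >= threshold_seconds:
        vm["synced_rounds"] = vm.get("synced_rounds", 0) + 1
    vm["running_on"] = "destination"   # stop on source, start on destination
    return vm

estimates = iter([3.0, 1.2, 0.1])      # shrinking estimates from successive rounds
vm = migrate({"name": "vm1"}, lambda: next(estimates))
print(vm["running_on"], vm["synced_rounds"])  # destination 2
```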
-
Patent number: 11762940
Abstract: A method and system for component level data management in web applications is disclosed. In some embodiments, the method includes identifying at least one component within a web application. The method further includes saving an initial state of the at least one component in a Redux store, tracking the at least one component being updated with data by a user, updating a current state of the at least one component being tracked in real-time in the Redux store, receiving a predefined action on the at least one component or the web application, determining whether the current state of the at least one component is updated in the Redux store, and generating a warning to the user in response to receiving the predefined action, when the current state of the at least one component is not updated in the Redux store.
Type: Grant. Filed: September 12, 2020. Date of Patent: September 19, 2023. Inventors: Nicholas Board, Peter Kamenkovich, Asiyah Ahmad, Heath Thomann
-
Patent number: 11726669
Abstract: Methods, systems, and devices for coherency locking are described in which different types of writes have different coherency locking schemes. The types of writes can be associated with different sources of write commands, such as external commands from a host system or internal commands from a garbage collection procedure. Coherency locking can be performed for external write commands received from a host system, while coherency locking is not performed for internal write commands. If an internal write is received for data that has been previously written at a prior location, a write to one or more physical memory devices can be performed and, once an acknowledgment is received that the write is complete, an update to a mapping table with the new location of the data is performed.
Type: Grant. Filed: March 24, 2022. Date of Patent: August 15, 2023. Assignee: Micron Technology, Inc. Inventors: Yun Li, John Traver
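A minimal Python sketch of the per-source locking scheme: host (external) writes take a coherency lock, garbage-collection (internal) writes skip it and only update the mapping table after the device write completes. The function signature, the lock, and the dict-based mapping table are assumptions for illustration.

```python
# Hypothetical sketch of coherency locking that depends on the write's source.
import threading

coherency_lock = threading.Lock()
mapping_table = {}   # logical address -> physical location

def write(addr, data, physical_loc, source, device):
    if source == "external":
        with coherency_lock:                  # coherency locking for host writes
            device[physical_loc] = data
            mapping_table[addr] = physical_loc
    else:                                     # internal write (e.g. GC): no lock
        device[physical_loc] = data           # perform the physical write first...
        mapping_table[addr] = physical_loc    # ...then update mapping on completion

device = {}
write(0x10, b"host", 100, "external", device)
write(0x10, b"gc-moved", 200, "internal", device)
print(mapping_table[0x10])  # 200, the relocated copy written by the internal write
```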
-
Patent number: 11720416
Abstract: A computer's processes and/or threads generate and store in memory data to reimplement or reverse a transaction on a database, so that the database can be recovered. This data is written to persistent memory storage (“persisted”) by another process, for which the processes and/or threads may wait. This wait includes at least a sleep phase, and additionally a spin phase which is entered if, after awakening from sleep and checking (“on-awakening” check), the data to be persisted is found to not have been persisted. To sleep in the sleep phase, each process/thread specifies a sleep duration determined based at least partially on previous results of on-awakening checks. The previous results in which to-be-persisted data was found to be not persisted are indications the sleeps were insufficient, and these indications are counted and used to determine the sleep duration. Repeated determination of sleep duration makes the sleep phase adaptive.
Type: Grant. Filed: September 13, 2019. Date of Patent: August 8, 2023. Assignee: Oracle International Corporation. Inventors: Graham Ivey, Yunrui Li
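A minimal Python sketch of the adaptive sleep-then-spin wait: the sleep duration grows with the count of on-awakening checks that found the data not yet persisted, and a spin phase covers the remainder. The class name, the linear scaling rule, and the base sleep value are assumptions; the patent leaves the exact duration formula open.

```python
# Hypothetical sketch of an adaptive sleep phase followed by a spin phase.
import time

class AdaptiveWaiter:
    def __init__(self, base_sleep=0.001):
        self.base_sleep = base_sleep
        self.insufficient_sleeps = 0   # awakenings where data was not yet persisted

    def wait_until_persisted(self, is_persisted):
        # Sleep phase: duration scales with past insufficient sleeps.
        time.sleep(self.base_sleep * (1 + self.insufficient_sleeps))
        if not is_persisted():                 # on-awakening check
            self.insufficient_sleeps += 1
            while not is_persisted():          # spin phase
                pass

deadline = time.time() + 0.005
w = AdaptiveWaiter()
w.wait_until_persisted(lambda: time.time() > deadline)
print(w.insufficient_sleeps)  # 1 (the first sleep was too short, so spinning occurred)
```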
-
Patent number: 11706317
Abstract: A memory system having one or more memory components and a controller. The controller can receive access requests from a communication connection. The access requests can identify data items associated with the access requests, addresses of the data items, and contexts of the data items in which the data items are used for the access requests. The controller can identify separate memory regions for separate contexts respectively, determine placements of the data items in the separate memory regions based on the contexts of the data items, and determine a mapping between the addresses of the data items and memory locations that are within the separate memory regions corresponding to the contexts of the data items. The memory system stores the data items at the memory locations separated by different memory regions according to different contexts.
Type: Grant. Filed: December 28, 2020. Date of Patent: July 18, 2023. Assignee: Micron Technology, Inc. Inventors: Parag R. Maharana, Anirban Ray, Gurpreet Anand
-
Patent number: 11694299
Abstract: Embodiments are disclosed for emulation of graphics processing unit instructions. An example method includes: executing an instrumented kernel using a logic circuit, the instrumented kernel including an emulation sequence; saving, in response to a determination that the emulation sequence is to be executed, source data to a shared memory; setting an emulation request flag to indicate to processor circuitry separate from the logic circuit that offloaded execution of the emulation sequence is to be executed; monitoring the emulation request flag to determine whether the offloaded execution of the emulation sequence is complete; and accessing resulting data from the shared memory.
Type: Grant. Filed: September 24, 2021. Date of Patent: July 4, 2023. Assignee: INTEL CORPORATION. Inventors: Konstantin Levit-Gurevich, Michael Berezalsky, Noam Itzhaki, Arik Narkis, Orr Goldman
-
Patent number: 11656985
Abstract: Methods and systems are provided for allocating memory. A portion of memory may be allocated by: selecting a type of memory to allocate in a client device from a group of memory types in response to a memory allocation request and/or in response to a request to access a portion of an address space, wherein the selection of the type of memory to allocate is based on an available memory determination; selecting a portion of the local primary memory, a portion of the external primary memory, or a portion of the memory-mapped file for the portion of memory to allocate at the client device depending on the selected type of memory; and mapping at least the selected portion to the address space.
Type: Grant. Filed: January 29, 2021. Date of Patent: May 23, 2023. Assignee: Kove IP, LLC. Inventors: Timothy A. Stabrawa, Zachary A. Cornelius, John Overton, Andrew S. Poling, Jesse I. Taylor
-
Patent number: 11652717
Abstract: Example methods and systems are provided for simulation-based cross-cloud connectivity checks. One example method may include injecting a connectivity check packet in a first cloud environment, and obtaining first report information associated with a first stage of forwarding the connectivity check packet from one or more first observation points in the first cloud environment. The method may also comprise: based on configuration information associated with one or more second observation points in the second cloud environment, simulating a second stage of forwarding the connectivity check packet towards a second virtualized computing instance via the one or more second observation points. The method may further comprise: generating second report information associated with the simulated second stage to identify a connectivity status between a first virtualized computing instance and the second virtualized computing instance based on the first report information and the second report information.
Type: Grant. Filed: June 24, 2021. Date of Patent: May 16, 2023. Assignee: VMWARE, INC. Inventors: Qiao Huang, Donghai Han, Qiong Wang, Jia Cheng, Xiaoyan Jin, Qiaoyan Hou
-
Patent number: 11630803
Abstract: Methods, non-transitory computer readable media, computing devices and systems for persistent indexing and space management for a flat directory include creating, using at least one of said at least one processor, an index file to store mapping information, computing, using at least one of said at least one processor, a hash based on a lookup filename, searching, using at least one of said at least one processor, the index file to find all matching directory cookies based on the computed hash, selecting, using at least one of said at least one processor, the directory entity associated with the lookup filename from among the matched directory cookies, and returning, using at least one of said at least one processor, the determined directory entity.
Type: Grant. Filed: June 19, 2020. Date of Patent: April 18, 2023. Assignee: NETAPP, INC. Inventor: Ravi Basrani
-
Patent number: 11593399
Abstract: System and method for managing copy-on-write (COW) B tree structures for metadata of storage objects stored in a storage system determine, when a request to modify a target storage object stored in the storage system that requires a modification of a target leaf node in a B tree structure for metadata of the target storage object is received, whether an operation sequence number of the target leaf node is greater than a snapshot sequence number of a parent snapshot of a running point of the B tree structure. When the operation sequence number is greater than the snapshot sequence number, the target leaf node is modified in place without copying the target leaf node. When the operation sequence number is not greater than the snapshot sequence number, the target leaf node is copied as a new leaf node for the B tree structure and the new leaf node is modified.
Type: Grant. Filed: June 22, 2021. Date of Patent: February 28, 2023. Assignee: VMWARE, INC. Inventors: Enning Xiang, Wenguang Wang, Pranay Singh, Subhradyuti Sarkar, Nitin Rastogi
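A minimal Python sketch of the in-place-versus-copy decision: the leaf's operation sequence number is compared against the snapshot sequence number of the running point's parent snapshot. The Leaf class, field names, and sequence-number handling are illustrative assumptions; tree traversal and persistence are omitted.

```python
# Hypothetical sketch of the COW B-tree leaf modification rule.

class Leaf:
    def __init__(self, op_seq, entries):
        self.op_seq = op_seq
        self.entries = dict(entries)

def modify_leaf(leaf, key, value, parent_snapshot_seq, next_op_seq):
    if leaf.op_seq > parent_snapshot_seq:
        leaf.entries[key] = value                    # leaf newer than snapshot: modify in place
        return leaf
    new_leaf = Leaf(next_op_seq, leaf.entries)       # leaf shared with snapshot: copy first
    new_leaf.entries[key] = value
    return new_leaf

old = Leaf(op_seq=5, entries={"a": 1})
updated = modify_leaf(old, "a", 2, parent_snapshot_seq=7, next_op_seq=8)
print(updated is old, old.entries["a"], updated.entries["a"])  # False 1 2
```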
-
Patent number: 11556518
Abstract: An embodiment relates to a computer-implemented data processing system and method for storing a data set at a plurality of data centers. The data centers and hosts within the data centers may, for example, be organized according to a multi-tiered ring arrangement. A hashing arrangement may be used to implement the ring arrangement to select the data centers and hosts where the writing and reading of the data sets occurs. Version histories may also be written and read at the hosts and may be used to evaluate causal relationships between the data sets after the reading occurs.
Type: Grant. Filed: March 2, 2016. Date of Patent: January 17, 2023. Assignee: Amazon Technologies, Inc. Inventors: Peter S. Vosshall, Swaminathan Sivasubramanian, Giuseppe deCandia, Deniz Hastorun, Avinash Lakshman, Alex Pilchin, Ivan D. Rosero
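A minimal Python sketch of a hash-ring placement in the spirit of this abstract: a key is hashed onto a ring of hosts and stored on the hosts that follow it around the ring. This is plain consistent hashing with illustrative names and an MD5 hash; the patent's multi-tiered arrangement and version histories are not modeled.

```python
# Hypothetical sketch of hash-ring selection of hosts for a key.
import bisect
import hashlib

def ring_position(name):
    return int(hashlib.md5(name.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, hosts, replicas=3):
        self.replicas = replicas
        self.ring = sorted((ring_position(h), h) for h in hosts)  # (position, host)

    def hosts_for(self, key):
        positions = [p for p, _ in self.ring]
        start = bisect.bisect(positions, ring_position(key)) % len(self.ring)
        return [self.ring[(start + i) % len(self.ring)][1]
                for i in range(self.replicas)]

ring = Ring(["dc1-host1", "dc1-host2", "dc2-host1", "dc2-host2"])
print(ring.hosts_for("user:42"))   # the three hosts that store this key's replicas
```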
-
Patent number: 11507512
Abstract: The described technology is generally directed towards fault tolerant cluster data handling techniques, as well as devices and computer readable media configured to perform the disclosed fault tolerant cluster data handling techniques. Nodes in a computing cluster can be configured to generate wire format resources corresponding to operating system resources. A wire format resource can comprise a cache key and hint information to locate data, such as a file, corresponding to the operating system resource. The wire format resource can be stored in a resource cache along with a pointer that points to the operating system resource. The wire format resource can also be provided to client devices. Nodes in the computing cluster can be configured to receive and process client instructions that include wire format resources, as well as to use hint information to re-allocate data associated with a wire format resource.
Type: Grant. Filed: December 8, 2020. Date of Patent: November 22, 2022. Assignee: EMC IP Holding Company LLC. Inventors: Ben Ellerby, Austin Voecks, Evgeny Popovich
-
Patent number: 11494308
Abstract: A calculation system comprises a computing device having one or more instruction-controlled processing cores and a memory controller, the memory controller including a cache memory; and a memory circuit coupled to the memory controller via a data bus and an address bus, the memory circuit being adapted to have a first m-bit memory location accessible by a plurality of first addresses provided on the address bus, the computing device being configured to select, for each memory operation accessing the first m-bit memory location, one address among the plurality of first addresses.
Type: Grant. Filed: September 6, 2017. Date of Patent: November 8, 2022. Assignee: UPMEM. Inventors: Jean-François Roy, Fabrice Devaux
-
Patent number: 11449450
Abstract: A processing and storage circuit includes an internal bus, one or more first-level internal memory units, a central processing unit (CPU), one or more hardware acceleration engines, and an arbiter. The first-level internal memory unit is coupled to the internal bus. The CPU includes a second-level internal memory unit, and is configured to access the first-level internal memory unit via the internal bus; when the CPU accesses data, the first-level internal memory unit is accessed preferentially. The hardware acceleration engine is configured to access the first-level internal memory unit via the internal bus. The arbiter is coupled to the internal bus and is configured to decide whether the CPU or the hardware acceleration engine is allowed to access the first-level internal memory unit. The arbiter sets the priority of the CPU for accessing the first-level internal memory unit to be higher than that of the hardware acceleration engine.
Type: Grant. Filed: December 31, 2020. Date of Patent: September 20, 2022. Assignee: RAYMX MICROELECTRONICS CORP. Inventors: Shuai Lin, Yu Zhang
-
Patent number: 11451441
Abstract: Described herein are enhancements for operating content nodes in a content delivery network. In at least one implementation, a content node deploys a request handler configuration and a key-value object, wherein the key-value object includes one or more key-value pairs and wherein the request handler configuration calls, in response to a content request from an end user device, the key-value object using a key associated with the content request and the key-value object returns a value associated with the key. The content node further obtains a request to modify the key-value object, identifies a modification to the key-value object based on the request, and updates the key-value object with the modification.
Type: Grant. Filed: January 10, 2017. Date of Patent: September 20, 2022. Assignee: Fastly, Inc. Inventor: Tyler B. McMullen
-
Patent number: 11392508
Abstract: A first processor is configured to detect migration of a page from a second memory associated with a second processor to a first memory associated with the first processor or to detect duplication of the page in the first memory and the second memory. The first processor implements a translation lookaside buffer (TLB) and the first processor is configured to insert an entry in the TLB in response to the duplication or the migration of the page. The entry maps a virtual address of the page to a physical address in the first memory and the entry is inserted into the TLB without modifying a corresponding entry in a page table that maps the virtual address of the page to a physical address in the second memory. In some cases, a duplicate translation table (DTT) stores a copy of the entry that is accessed in response to a TLB miss.
Type: Grant. Filed: November 29, 2017. Date of Patent: July 19, 2022. Assignee: ADVANCED MICRO DEVICES, INC. Inventors: Nuwan Jayasena, Yasuko Eckert
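A minimal Python sketch of the TLB behavior described here: after a page is migrated or duplicated, the first processor inserts a mapping to its local physical copy directly into its TLB while the shared page table entry, which still points at the second memory, is left untouched. The TLB class and the tuple-based addresses are assumptions for illustration; the hardware DTT is not modeled.

```python
# Hypothetical sketch: TLB-only mapping insert on page migration/duplication.

class TLB:
    def __init__(self, page_table):
        self.page_table = page_table   # shared, maps VA -> remote physical address
        self.entries = {}              # private, maps VA -> local physical address

    def on_page_migrated(self, va, local_pa):
        self.entries[va] = local_pa    # insert into the TLB; page table unchanged

    def translate(self, va):
        if va in self.entries:
            return self.entries[va]    # TLB hit: use the local copy
        return self.page_table[va]     # TLB miss: fall back to the page table

page_table = {0x1000: ("mem2", 0x8000)}
tlb = TLB(page_table)
tlb.on_page_migrated(0x1000, ("mem1", 0x2000))
print(tlb.translate(0x1000), page_table[0x1000])
# ('mem1', 8192) ('mem2', 32768)  -> TLB points locally, page table still remote
```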
-
Patent number: 8892845
Abstract: A method begins by a processing module receiving data of a file for storage in a dispersed storage network (DSN) memory and determining a segmentation scheme for storing the data. The method continues with the processing module determining how to store the data in accordance with the segmentation scheme to produce information for storing the data and generating an entry within a segment allocation table associated with the file, wherein the entry includes the information for storing the data and the segmentation scheme. The method continues with the processing module facilitating storage of the segment allocation table in the DSN memory. The method continues with the processing module segmenting the data in accordance with the segmentation scheme to produce a plurality of data segments and facilitating storage of the plurality of data segments in the DSN memory in accordance with the information for storing the data.
Type: Grant. Filed: December 1, 2011. Date of Patent: November 18, 2014. Assignee: Cleversafe, Inc. Inventors: Ilya Volvovski, Andrew Baptist, Wesley Leggette
-
Patent number: 8677068
Abstract: Techniques using scalable storage devices represent a plurality of host-accessible storage devices as a single logical interface, conceptually aggregating storage implemented by the devices. A primary agent of the devices accepts storage requests from the host using a host-interface protocol, processing the requests internally and/or forwarding the requests as sub-requests to secondary agents of the storage devices using a peer-to-peer protocol. The secondary agents accept and process the sub-requests, and report sub-status information for each of the sub-requests to the primary agent and/or the host. The primary agent optionally accumulates the sub-statuses into an overall status for providing to the host. Peer-to-peer communication between the agents is optionally used to communicate redundancy information during host accesses and/or failure recoveries. Various failure recovery techniques reallocate storage, reassign agents, recover data via redundancy information, or any combination thereof.
Type: Grant. Filed: June 17, 2011. Date of Patent: March 18, 2014. Assignee: LSI Corporation. Inventors: Timothy Lawrence Canepa, Carlton Gene Amdahl
-
Patent number: 8645634
Abstract: One embodiment of the present invention sets forth a technique for reducing the copying of data between memory allocated to a primary processor and a coprocessor. The system memory is aliased as device memory to allow the coprocessor and the primary processor to share the same portion of memory. Either device may write and/or read the shared portion of memory to transfer data between the devices rather than copying data from a portion of memory that is only accessible by one device to a different portion of memory that is only accessible by the other device. Removal of the need for explicit primary processor memory to coprocessor memory and coprocessor memory to primary processor memory copies improves the performance of the application and reduces physical memory requirements for the application since one portion of memory is shared rather than allocating separate private portions of memory.
Type: Grant. Filed: January 16, 2009. Date of Patent: February 4, 2014. Assignee: NVIDIA Corporation. Inventors: Michael Brian Cox, Nicholas Patrick Wilt, Richard Hough
-
Patent number: 8635412
Abstract: A multi-processor system is disclosed comprising a first processor, a first memory coupled to the first processor, a second processor, and a shared memory subsystem including a shared memory and a data transfer unit. The first processor is configured to build a data structure in the first memory and to send a direct memory access (DMA) transfer request to the data transfer unit of the shared memory subsystem, the DMA transfer request including an address of the data structure in the first memory. The data transfer unit is configured to retrieve the data structure from the first memory based on the DMA transfer request, to store the data structure in the shared memory, and to send a shared memory pointer to the second processor indicating an address of the data structure in the shared memory.
Type: Grant. Filed: June 23, 2011. Date of Patent: January 21, 2014. Assignee: Western Digital Technologies, Inc. Inventor: James C. Wilshire
-
Patent number: 8464009
Abstract: A distributed shared memory multiprocessor system that supports both fine- and coarse-grained interleaving of the shared memory address space. A ceiling mask sets a boundary between the fine-grain interleaved and coarse-grain interleaved memory regions of the distributed shared memory. A method for satisfying a memory access request in a distributed shared memory subsystem of a multiprocessor system having both fine- and coarse-grain interleaved memory segments. Certain low or high order address bits, depending on whether the memory segment is fine- or coarse-grain interleaved, respectively, are used to determine if the memory address is local to a processor node. A method for setting the ceiling mask of a distributed shared memory multiprocessor system to optimize performance of a first application run on a single node and performance of a second application run on a plurality of nodes.
Type: Grant. Filed: June 4, 2008. Date of Patent: June 11, 2013. Assignee: Oracle America, Inc. Inventors: Ramaswamy Sivaramakrishnan, Connie Cheung, William Bryg
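A minimal Python sketch of the ceiling-mask split: addresses below the ceiling are fine-grain interleaved, with the home node taken from low-order (cache-line-granularity) bits, while addresses above it are coarse-grain interleaved, with the home node taken from high-order bits. The node count, line size, ceiling value, and coarse segment size are assumed constants for illustration.

```python
# Hypothetical sketch of fine-grain vs coarse-grain node selection by address bits.

NODES = 4
CACHE_LINE_BITS = 6          # assumed 64-byte cache lines
CEILING = 1 << 30            # assumed boundary between the two regions (1 GiB)
COARSE_SHIFT = 28            # assumed segment-size bits for the coarse region

def home_node(addr):
    if addr < CEILING:
        return (addr >> CACHE_LINE_BITS) % NODES   # fine-grain: low-order bits
    return (addr >> COARSE_SHIFT) % NODES          # coarse-grain: high-order bits

def is_local(addr, node):
    return home_node(addr) == node

print(home_node(0x40), home_node(0x80))                # consecutive lines land on nodes 1, 2
print(home_node(CEILING), home_node(CEILING + 0x40))   # coarse region: same node for both
```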
-
Patent number: 8423723
Abstract: A method of declaring and using variables includes: determining whether variables are independent variables or common variables, declaring and storing the independent variables in a plurality of data structures respectively corresponding to a plurality of processors, declaring and storing the common variables in a shared memory area, allowing each one of the plurality of processors to simultaneously use the independent variables in a corresponding one of the plurality of data structures, and allowing only one of the plurality of processors at a time to use the common variables in the shared memory area.
Type: Grant. Filed: March 16, 2010. Date of Patent: April 16, 2013. Assignee: Samsung Electronics Co., Ltd. Inventors: Hye-ran Jeon, Woo-hyong Lee, Min-gyu Lee, Woon-gee Kim, Ji-seong Oh, Ja-gun Kwon, Taek-gyun Ko
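A minimal Python sketch of the declaration scheme: independent variables live in per-processor structures used without coordination, while common variables live in a shared area that only one processor uses at a time. Threads stand in for processors, and the lock stands in for whatever mutual-exclusion mechanism the patent contemplates; the names are illustrative.

```python
# Hypothetical sketch: per-processor independent variables vs locked common variables.
import threading

NUM_PROCESSORS = 4
independent = [{"counter": 0} for _ in range(NUM_PROCESSORS)]  # one structure per processor
common = {"total": 0}                                          # shared memory area
common_lock = threading.Lock()

def work(proc_id):
    for _ in range(1000):
        independent[proc_id]["counter"] += 1     # no coordination needed: private structure
    with common_lock:                            # only one processor at a time
        common["total"] += independent[proc_id]["counter"]

threads = [threading.Thread(target=work, args=(i,)) for i in range(NUM_PROCESSORS)]
for t in threads: t.start()
for t in threads: t.join()
print(common["total"])  # 4000
```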
-
Patent number: 8347065
Abstract: A shared memory management system and method are described. In one embodiment, a memory management system includes a memory management unit for concurrently managing memory access requests from a plurality of engines. The shared memory management system independently controls access to the context memory without interference from other engine activities. In one exemplary implementation, the memory management unit tracks an identifier for each of the plurality of engines making a memory access request. The memory management unit associates each of the plurality of engines with particular translation information respectively. This translation information is specified by a block bind operation. In one embodiment the translation information is stored in a portion of instance memory. A memory management unit can be non-blocking and can also permit a hit under miss.
Type: Grant. Filed: November 1, 2006. Date of Patent: January 1, 2013. Inventors: David B. Glasco, John S. Montrym, Lingfeng Yuan
-
Patent number: 8327086
Abstract: Migration management is provided for a shared memory logical partition migrating from a source system to a target system. The management approach includes managing migration of the logical partition from the source system to the target system by: transferring a portion of logical partition state information for the migrating logical partition from the source system to the target system by copying at the source system contents of a logical page of the migrating logical partition into a state record buffer for forwarding to the target system; forwarding the state record buffer to the target system; and determining whether the migrating logical partition is suspended at the source system, and if not, copying at the target system contents of the state record buffer to paging storage of the target system, the paging storage being external to physical memory managed by a hypervisor of the target system.
Type: Grant. Filed: January 6, 2012. Date of Patent: December 4, 2012. Assignee: International Business Machines Corporation. Inventors: Stuart Z. Jacobs, David A. Larson, Wade B. Ouren, Kenneth C. Vossen
-
Publication number: 20120110276
Abstract: Migration management is provided for a shared memory logical partition migrating from a source system to a target system. The management approach includes managing migration of the logical partition from the source system to the target system by: transferring a portion of logical partition state information for the migrating logical partition from the source system to the target system by copying at the source system contents of a logical page of the migrating logical partition into a state record buffer for forwarding to the target system; forwarding the state record buffer to the target system; and determining whether the migrating logical partition is suspended at the source system, and if not, copying at the target system contents of the state record buffer to paging storage of the target system, the paging storage being external to physical memory managed by a hypervisor of the target system.
Type: Application. Filed: January 6, 2012. Publication date: May 3, 2012. Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION. Inventors: Stuart Z. Jacobs, David A. Larson, Wade B. Ouren, Kenneth C. Vossen
-
Patent number: 8171236
Abstract: Migration management is provided for a shared memory logical partition migrating from a source system to a target system. The management approach includes managing migration of the logical partition from the source system to the target system by: transferring a portion of logical partition state information for the migrating logical partition from the source system to the target system by copying at the source system contents of a logical page of the migrating logical partition into a state record buffer for forwarding to the target system; forwarding the state record buffer to the target system; and determining whether the migrating logical partition is suspended at the source system, and if not, copying at the target system contents of the state record buffer to paging storage of the target system, the paging storage being external to physical memory managed by a hypervisor of the target system.
Type: Grant. Filed: March 13, 2009. Date of Patent: May 1, 2012. Assignee: International Business Machines Corporation. Inventors: Stuart Z. Jacobs, David A. Larson, Wade B. Ouren, Kenneth C. Vossen
-
Patent number: 8156289
Abstract: The claimed matter provides systems and/or methods that effectuate utilization of fine-grained concurrency in parallel processing and efficient management of established memory structures. The system can include devices that establish memory structures associated with individual processors that can comprise a parallel processing phalanx. The system can thereafter utilize various enqueuing and/or dequeuing directives to add or remove work descriptors to or from the memory structures individually associated with each of the individual processors thereby providing improved work flow synchronization amongst the processors that comprise the parallel processing complex.
Type: Grant. Filed: June 3, 2008. Date of Patent: April 10, 2012. Assignee: Microsoft Corporation. Inventor: David T. Harper, III
-
Patent number: 8145878
Abstract: A system may comprise one or more source agents, target agents, and a plurality of directory agents, which may determine the target agent to which one or more transactions generated by the source agents are to be sent. A controller may identify one of a plurality of directory agents to process the transactions. The directory agent may determine the control and status registers of the target agents to which the transaction is to be sent. The target agent may complete the transaction after receiving the transaction from the directory agent. The directory agents may store a memory map to resolve the target agent to which the transactions are to be sent. The directory based distributed CSR access may provide scalability to an ever-increasing number of heterogeneous agents in the system.
Type: Grant. Filed: December 17, 2007. Date of Patent: March 27, 2012. Assignee: Intel Corporation. Inventors: Ramacharan Sundararaman, Faisal Azeem
-
Patent number: 8055872
Abstract: A data processing system in the form of an integrated circuit includes a general purpose programmable processor and a hardware accelerator. A shared memory management unit provides memory management operations on behalf of both of the processor core and the hardware accelerator. The processor and the hardware accelerator share a memory system. A first communication channel between the processor and the hardware accelerator communicates at least control signals therebetween. A second communication channel coupling the hardware accelerator and the memory system allows the hardware accelerator to perform its own data access operations upon the memory system.
Type: Grant. Filed: February 21, 2008. Date of Patent: November 8, 2011. Assignee: ARM Limited. Inventors: Stuart David Biles, Nigel Charles Paver, Chander Sudanthi
-
Patent number: 7975109
Abstract: A data processing system includes one or more nodes, each node including a memory sub-system. The sub-system includes a fine-grained memory and a less-fine-grained (e.g., page-based) memory. The fine-grained memory optionally serves as a cache and/or as a write buffer for the page-based memory. Software executing on the system uses a node address space which enables access to the page-based memories of all nodes. Each node optionally provides ACID memory properties for at least a portion of the space. In at least a portion of the space, memory elements are mapped to locations in the page-based memory. In various embodiments, some of the elements are compressed, the compressed elements are packed into pages, the pages are written into available locations in the page-based memory, and a map maintains an association between those elements and the locations.
Type: Grant. Filed: May 30, 2008. Date of Patent: July 5, 2011. Assignee: Schooner Information Technology, Inc. Inventors: Thomas M. McWilliams, Earl T. Cohen, James M. Bodwin, Ulrich Bruening
-
Publication number: 20110161604
Abstract: Multiple types of executable agents operate within a domain. The domain includes mutable shared state and immutable shared state, with agents internal to the domain only operating on the shared state. Writer agents are defined to be agents that have read access and write access to mutable shared state and read access only to immutable shared state. General reader agents have read access to both mutable shared state and immutable shared state and have no write access. Immutable reader agents have read access to only immutable shared state and have no write access. By appropriate scheduling of the different types of agents, data races may be reduced or eliminated.
Type: Application. Filed: December 29, 2009. Publication date: June 30, 2011. Applicant: MICROSOFT CORPORATION. Inventors: Artur Laksberg, Joshua D. Phillips, Niklas Gustafsson
-
Publication number: 20110161602
Abstract: An object storage system comprises one or more computer processors or threads that can concurrently access a shared memory, the shared memory comprising an array of equally-sized cells. In one embodiment, each cell is of the size used by the processors to represent a pointer, e.g., 64 bits. Using an algorithm performing only one memory write, and using a hardware-provided transactional operation, such as a compare-and-swap instruction, to implement the memory write, concurrent access is safely accommodated in a lock-free manner.
Type: Application. Filed: December 31, 2009. Publication date: June 30, 2011. Inventors: Keith Adams, Spencer Ahrens
-
Publication number: 20110153959
Abstract: A method for implementing data storage and a dual port, dual element storage device are provided. A storage device includes a predefined form factor including a first port and a second port, and a first storage element and a second storage element. A controller coupled between the first port and second port, and the first storage element and second storage element controls access and provides two separate data paths to the first storage element and second storage element.
Type: Application. Filed: December 23, 2009. Publication date: June 23, 2011. Applicant: Hitachi Global Storage Technologies Netherlands B.V. Inventors: Frank R. Chu, Spencer W. Ng, Motoyasu Tsunoda, Marco Sanvido
-
Publication number: 20110125973
Abstract: The transactional memory system described herein may apply a mix of read validation techniques to validate read operations (e.g., invisible reads and/or semi-visible reads) in different transactions, or to validate different read operations within a single transaction (including reads of the same location). The system may include mechanisms to dynamically determine that a read validation technique should be replaced by a different technique for reads of particular locations or for all subsequent reads, and/or to dynamically adjust the balance between different read validation techniques to manage costs. Some of the read validation techniques may be supported by hardware transactional memory (HTM). The system may delay acquisition of ownership records for reading, and may acquire two or more ownership records back-to-back (e.g., within a single hardware transaction). The user code of a software transaction may be divided into multiple segments, some of which may be executed within a hardware transaction.
Type: Application. Filed: November 25, 2009. Publication date: May 26, 2011. Inventors: Yosef Lev, Marek K. Olszewski, Mark S. Moir
-
Publication number: 20110119453
Abstract: A method for implementing a high-availability system that includes a plurality of controllers that each includes a shared memory. The method includes storing in the shared memory, by each controller, status data related to each of a plurality of failure modes, and calculating, by each controller, an availability score based on the status data. The method also includes determining, by each controller, one of the plurality of controllers having a highest availability score, and identifying the one of the plurality of controllers having the highest availability score as a master controller.
Type: Application. Filed: November 19, 2009. Publication date: May 19, 2011. Inventors: Yan Hua Xu, Mark Reitzel, Jerry Simons, Terrance John Walsh
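A minimal Python sketch of electing a master by availability score: each controller scores itself from per-failure-mode status data held in shared memory, and the controller with the highest score becomes the master. The failure modes, weights, and controller names are invented for illustration; the publication does not specify a scoring formula.

```python
# Hypothetical sketch of availability-score-based master selection.

WEIGHTS = {"power_supply": 3, "network_link": 2, "sensor_bus": 1}  # assumed weights

def availability_score(status):
    # status maps failure mode -> True if that subsystem is healthy
    return sum(WEIGHTS[mode] for mode, ok in status.items() if ok)

shared_memory = {
    "ctrl_a": {"power_supply": True,  "network_link": True,  "sensor_bus": False},
    "ctrl_b": {"power_supply": True,  "network_link": True,  "sensor_bus": True},
}

scores = {c: availability_score(s) for c, s in shared_memory.items()}
master = max(scores, key=scores.get)
print(scores, "master:", master)   # ctrl_b wins with the higher score
```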
-
Publication number: 20110087847
Abstract: Systems and methods for improved multiple-port memory are provided. In one embodiment, a processing system comprises: at least one processing core; a peripheral bus; and a memory for storing digital data, the memory divided into a first and a second partition of memory segments. The memory includes a first port coupled to the peripheral bus providing read access and write access only to the first partition, wherein the first partition stores peripheral data associated with one or more peripheral components coupled to the peripheral bus; a second port coupled to the at least one processor providing read-only access to only the second partition, wherein the second partition stores executable code for the at least one processing core; and a third port coupled to the at least one processor providing read access and write access to the entire first partition and the second partition.
Type: Application. Filed: October 8, 2009. Publication date: April 14, 2011. Applicant: HONEYWELL INTERNATIONAL INC. Inventors: Scott Gray, Nicholas Wilt
-
Publication number: 20110055492
Abstract: Sorting data using a multi-core processing system is disclosed. An unsorted data set is copied from a global memory device to a shared memory device. The global memory device can store data sets for the multi-core processing system. The shared memory device can store unsorted data sets for sorting. The unsorted data set can include a plurality of data elements. The unsorted data set can be sorted into sorted data in parallel on the shared memory device using a cluster of processors of the multi-core processing system. The cluster of processors may include at least as many processors as a number of the data elements in the unsorted data set. The sorted data can be copied from the shared memory device to the global memory device.
Type: Application. Filed: September 3, 2009. Publication date: March 3, 2011. Inventors: Ren Wu, Bin Zhang, Meichun Hsu
-
Publication number: 20100332770
Abstract: A system and method for concurrency control may use slotted read-write locks. A slotted read-write lock is a lock data structure associated with a shared memory area, wherein the slotted read-write lock indicates whether any thread has a read-lock and/or a write-lock for the shared memory area. Multiple threads may concurrently have the read-lock but only one thread can have the write-lock at any given time. The slotted read-write lock comprises multiple slots, each associated with a single thread. To acquire the slotted read-write lock for reading, a thread assigned to a slot performs a store operation to the slot and then attempts to determine that no other thread holds the slotted read-write lock for writing. To acquire the slotted read-write lock for writing, a thread assigned to a slot sets its write-bit and then attempts to determine that the write-lock is not held.
Type: Application. Filed: June 26, 2009. Publication date: December 30, 2010. Inventors: David Dice, Nir N. Shavit
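A minimal Python sketch of the slot structure: each thread owns a slot, a reader stores into its slot and then checks for writers, and a writer sets its write bit and then checks that no other reader or writer is present. This is a simplified single-threaded illustration with invented names; a real implementation needs atomic memory operations, memory fences, and retry logic, none of which are modeled here.

```python
# Hypothetical sketch of a slotted read-write lock (non-atomic, illustrative only).

class SlottedRWLock:
    def __init__(self, num_slots):
        self.read = [0] * num_slots     # one read flag per thread slot
        self.write = [0] * num_slots    # one write bit per thread slot

    def try_read_lock(self, slot):
        self.read[slot] = 1                      # store to own slot first
        if any(self.write):                      # then check for writers
            self.read[slot] = 0
            return False
        return True

    def try_write_lock(self, slot):
        self.write[slot] = 1                     # set own write bit first
        others = any(self.read) or any(w for i, w in enumerate(self.write) if i != slot)
        if others:                               # another reader or writer is present
            self.write[slot] = 0
            return False
        return True

    def unlock(self, slot):
        self.read[slot] = 0
        self.write[slot] = 0

lock = SlottedRWLock(num_slots=2)
print(lock.try_read_lock(0), lock.try_read_lock(1))   # True True (readers share)
print(lock.try_write_lock(0))                          # False (readers still hold slots)
```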
-
Publication number: 20100250851
Abstract: A method of declaring and using variables includes: determining whether variables are independent variables or common variables, declaring and storing the independent variables in a plurality of data structures respectively corresponding to a plurality of processors, declaring and storing the common variables in a shared memory area, allowing each one of the plurality of processors to simultaneously use the independent variables in a corresponding one of the plurality of data structures, and allowing only one of the plurality of processors at a time to use the common variables in the shared memory area.
Type: Application. Filed: March 16, 2010. Publication date: September 30, 2010. Applicant: SAMSUNG ELECTRONICS CO., LTD. Inventors: Hye-ran JEON, Woo-hyong LEE, Min-gyu LEE, Woon-gee KIM, Ji-seong OH, Ja-gun KWON, Taek-gyun KO
-
Patent number: 7783857
Abstract: In a data management method for supervising a non-volatile memory having a plurality of blocks erasable in a lump, each of the blocks being formed by a plurality of pages, each of the pages including a redundant area, the aggregate management information is used for data management to enable prompt booting. The distributed management information, as the management information for the respective blocks, is stored in the redundant area of each page, and the aggregate management information supervises data stored in each block, in a lump, in association with the distributed management information. It is verified, at the time of booting, whether the aggregate management information is effective. The data is supervised based on the aggregate management information when the aggregate management information is effective and, when the aggregate management information is not effective, the data is supervised based on the distributed management information.
Type: Grant. Filed: October 8, 2004. Date of Patent: August 24, 2010. Assignee: Sony Corporation. Inventors: Hiroaki Fuse, Akira Sassa, Atsushi Onoe