Shared Memory Area Patents (Class 711/147)
-
Publication number: 20150046661
Abstract: Mobile computing devices may be configured to compile and execute portions of a general purpose software application in an auxiliary processor (e.g., a DSP) of a multiprocessor system by reading and writing information to a shared memory. A first process (P1) on the applications processor may request address negotiation with a second process (P2) on the auxiliary processor, obtain a first address map from a first operating system, and send the first address map to the auxiliary processor. The second process (P2) may receive the first address map, obtain a second address map from a second operating system, identify matching addresses in the first and second address maps, store the matching addresses as common virtual addresses, and send the common virtual addresses back to the applications processor. The first and second processes (i.e., P1 and P2) may each use the common virtual addresses to map physical pages to the memory.
Type: Application. Filed: August 7, 2013. Publication date: February 12, 2015. Applicant: QUALCOMM Incorporated. Inventors: Anil Gathala, Andrey Ermolinskiy, Christopher A. Vick
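The negotiation step described above — each side contributing its operating system's address map and keeping only the intersection — can be sketched as follows. The maps, addresses, and function name are illustrative assumptions, not taken from the filing:

```python
def negotiate_common_addresses(map_p1, map_p2):
    """Keep only the virtual addresses that appear in both address maps."""
    return sorted(set(map_p1) & set(map_p2))

# Hypothetical free-page addresses reported by each OS.
map_p1 = [0x10000, 0x11000, 0x12000, 0x20000]   # seen by P1
map_p2 = [0x11000, 0x12000, 0x30000]            # seen by P2

# P1 and P2 may each map physical pages at these common addresses.
common = negotiate_common_addresses(map_p1, map_p2)
```

In this sketch the intersection is the whole negotiation; the abstract's round trip (P1 sends its map, P2 computes and returns the common addresses) is the protocol that gets both sides the same result.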
-
Patent number: 8954674
Abstract: A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.
Type: Grant. Filed: October 8, 2013. Date of Patent: February 10, 2015. Assignee: Intel Corporation. Inventors: Daehyun Kim, Christopher J. Hughes, Yen-Kuang Chen, Partha Kundu
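The gather/scatter idea — touching only the useful elements rather than streaming over everything — reduces in spirit to the following sketch. Real implementations do this in hardware at fine granularity; the Python below is only a functional model with made-up data:

```python
def gather(memory, indices):
    """Read only the useful elements, at element granularity."""
    return [memory[i] for i in indices]

def scatter(memory, indices, values):
    """Write values back to the same sparse locations."""
    for i, v in zip(indices, values):
        memory[i] = v

mem = [0, 10, 20, 30, 40, 50]
vals = gather(mem, [1, 3, 5])                 # fetch only what is needed
scatter(mem, [1, 3, 5], [v + 1 for v in vals])  # write results back sparsely
```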
-
Patent number: 8954683
Abstract: A translation table has entries that each include a share bit and a delta bit, with pointers that point to a memory block that includes reuse bits. When two translation table entries reference identical fragments in a memory block, one of the translation table entries is changed to refer to the same memory block referenced in the other translation table entry, which frees up a memory block. The share bit is set to indicate a translation table entry is sharing its memory block with another translation table entry. In addition, a translation table entry may include a private delta in the form of a pointer that references a memory fragment in the memory block that is not shared with other translation table entries. When a translation table has a private delta, its delta bit is set.
Type: Grant. Filed: August 16, 2012. Date of Patent: February 10, 2015. Assignee: Lenovo Enterprise Solutions (Singapore) Pte. Ltd. Inventors: Bulent Abali, James A. Marcella, Michael Mi Tsao, Steven M. Wheeler
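A minimal software model of the sharing step — two entries found to reference identical block contents are collapsed onto one block, the share bit is set, and the duplicate block is freed — might look like this. The dict-based table and direct content comparison are simplifying assumptions; the patent operates on fragments within blocks and tracks reuse bits as well:

```python
def share_identical_blocks(table, blocks):
    """Collapse entries whose blocks hold identical contents onto one block.
    Returns the indices of blocks freed by the collapse."""
    first_with = {}   # block contents -> index of the first block holding it
    freed = []
    for entry in table:
        data = blocks[entry["block"]]
        if data in first_with and first_with[data] != entry["block"]:
            freed.append(entry["block"])      # duplicate block is freed
            entry["block"] = first_with[data]  # point at the shared block
            entry["share"] = 1                 # mark the entry as sharing
        else:
            first_with.setdefault(data, entry["block"])
    return freed

table = [{"block": 0, "share": 0}, {"block": 1, "share": 0}]
blocks = ["fragment-A", "fragment-A"]          # identical contents
freed = share_identical_blocks(table, blocks)
```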
-
Patent number: 8954684
Abstract: A translation table has entries that each include a share bit and a delta bit, with pointers that point to a memory block that includes reuse bits. When two translation table entries reference identical fragments in a memory block, one of the translation table entries is changed to refer to the same memory block referenced in the other translation table entry, which frees up a memory block. The share bit is set to indicate a translation table entry is sharing its memory block with another translation table entry. In addition, a translation table entry may include a private delta in the form of a pointer that references a memory fragment in the memory block that is not shared with other translation table entries. When a translation table has a private delta, its delta bit is set.
Type: Grant. Filed: December 3, 2012. Date of Patent: February 10, 2015. Assignee: Lenovo Enterprise Solutions (Singapore) Pte. Ltd. Inventors: Bulent Abali, James A. Marcella, Michael M. Tsao, Steven M. Wheeler
-
Patent number: 8954698
Abstract: Memory is dynamically switched through the optical-switching fabric using at least one communication pattern to transfer memory space in the memory blades from one processor to an alternative processor in the processor blades without physically copying data in the memory to the processors. Various communication patterns for the dynamic switching are supported.
Type: Grant. Filed: April 13, 2012. Date of Patent: February 10, 2015. Assignee: International Business Machines Corporation. Inventors: Eugen Schenfeld, Abhirup Chakraborty
-
Patent number: 8954701
Abstract: Memory is dynamically switched through the optical-switching fabric using at least one communication pattern to transfer memory space in the memory blades from one processor to an alternative processor in the processor blades without physically copying data in the memory to the processors. Various communication patterns for the dynamic switching are supported.
Type: Grant. Filed: March 6, 2013. Date of Patent: February 10, 2015. Assignee: International Business Machines Corporation. Inventors: Eugen Schenfeld, Abhirup Chakraborty
-
Publication number: 20150039840
Abstract: A data processing node has an inter-node messaging module including a plurality of sets of registers each defining an instance of a GET/PUT context and a plurality of data processing cores each coupled to the inter-node messaging module. Each one of the data processing cores includes a mapping function for mapping each one of a plurality of user level processes to a different one of the sets of registers and thereby to a respective GET/PUT context instance. Mapping each one of the user level processes to the different one of the sets of registers enables a particular one of the user level processes to utilize the respective GET/PUT context instance thereof for performing a GET/PUT action to a ring buffer of a different data processing node coupled to the data processing node through a fabric without involvement of an operating system of any one of the data processing cores.
Type: Application. Filed: August 5, 2013. Publication date: February 5, 2015. Inventors: Prashant R. Chandra, Thomas A. Volpe, Mark Bradley Davis, Niall Joseph Dalton
-
Publication number: 20150039821
Abstract: A communication apparatus comprises a general-purpose memory, and a high-speed memory that allows higher-speed access than the general-purpose memory. Protocol processing is executed to packetize transmission data using a general-purpose buffer allocated to the general-purpose memory and/or a high-speed buffer allocated to the high-speed memory as network buffers.
Type: Application. Filed: July 18, 2014. Publication date: February 5, 2015. Inventor: Akitomo Sasaki
-
Patent number: 8949548
Abstract: One or more methods and systems of sharing an external memory between functional modules of an integrated circuit chip are presented. The invention provides a system and method of reducing the amount of off-chip memory utilized by one or more integrated circuit chips. In one embodiment, a method for sharing an off-chip memory among one or more on-chip functional modules comprises arbitrating the communication of data between one or more on-chip functional modules and the off-chip memory. In one embodiment, the arbitration is facilitated by using an internal data bus that is controlled by a bus arbiter control unit. In one embodiment, a system for sharing an off-chip memory between functional modules of an integrated circuit comprises a security processing module, a media access controller module, a data interface, and a data bus.
Type: Grant. Filed: July 27, 2004. Date of Patent: February 3, 2015. Assignee: Broadcom Corporation. Inventor: Mark Buer
-
Patent number: 8949550
Abstract: The present invention relates to a coarse-grained reconfigurable array, comprising: at least one processor; a processing element array including a plurality of processing elements, and a configuration cache where commands being executed by the processing elements are saved; and a plurality of memory units forming a one-to-one mapping with the processor and the processing element array. The coarse-grained reconfigurable array further comprises a central memory performing data communications between the processor and the processing element array by switching the one-to-one mapping, such that when the processor transfers data from/to a main memory to/from a frame buffer, a significant bottleneck phenomenon that may occur due to the limited bandwidth and latency of a system bus can be mitigated.
Type: Grant. Filed: June 1, 2010. Date of Patent: February 3, 2015. Assignee: SNU R&DB Foundation. Inventors: Ki Young Choi, Kyung Wook Chang, Jong Kyung Paek
-
Patent number: 8949549
Abstract: A method to exchange data in a shared memory system includes the use of a buffer in communication with a producer processor and a consumer processor. The cache data is temporarily stored in the buffer. The method includes having the consumer and the producer indicate intent to acquire ownership of the buffer. In response to the indication of intent, the producer, consumer, and buffer are prepared for the access. If the consumer intends to acquire the buffer, the producer places the cache data into the buffer. If the producer intends to acquire the buffer, the consumer removes the cache data from the buffer. The access to the buffer, however, is delayed until the producer, consumer, and buffer are prepared.
Type: Grant. Filed: November 26, 2008. Date of Patent: February 3, 2015. Assignee: Microsoft Corporation. Inventors: David T. Harper, III, Charles David Callahan, II
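The intent/prepare/transfer protocol can be modeled, for the consumer-acquires direction only, roughly as follows. This is a single-slot, single-threaded sketch under assumed names; the patented mechanism covers both directions and concurrent access by real processors:

```python
class HandoffBuffer:
    """Single-slot buffer: the producer's fill is delayed until the
    consumer has indicated intent (i.e., everyone is prepared)."""

    def __init__(self):
        self.slot = None
        self.consumer_intent = False

    def consumer_request(self):
        """Consumer indicates intent to acquire the buffer."""
        self.consumer_intent = True

    def producer_fill(self, data):
        """Producer places cache data into the buffer, but only once the
        consumer is prepared; otherwise access is delayed (False)."""
        if not self.consumer_intent:
            return False
        self.slot = data
        return True

    def consumer_take(self):
        """Consumer removes the cache data, completing the exchange."""
        data, self.slot = self.slot, None
        self.consumer_intent = False
        return data
```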
-
Patent number: 8949307
Abstract: A computer-implemented method and system for updating application data for a first instance of an application via C2DM is disclosed. An application server may receive a request from the first client computing device for updated application data via a network connection. The updated application data may correspond to a second instance of the application at a second client computing device. In response to determining the second instance of the application at the second client computing device supports push notifications, the system and method may generate a C2DM message including a user ID corresponding to the first client computing device and the request for updated application data. A server may then send the C2DM message to the second client computing device, wherein the C2DM message causes the second instance to wake up and generate the updated application data. The updated application data may be returned to the first client computing device.
Type: Grant. Filed: November 15, 2011. Date of Patent: February 3, 2015. Assignee: Google Inc. Inventors: Andrew Oplinger, Ken Leftin, Philip C. Verghese, Kenneth Norton, Joseph LaPenna
-
Publication number: 20150032924
Abstract: In one embodiment, the present invention includes a method for receiving a non-coherent atomic request from a device coupled to an agent via a non-coherent link, accessing a mapping table of the agent to convert the non-coherent atomic request into a coherent atomic request, and transmitting the coherent atomic request via a coherent link to a second agent coupled to the agent to cause the second agent to be a completer of the non-coherent atomic request. Other embodiments are described and claimed.
Type: Application. Filed: September 10, 2014. Publication date: January 29, 2015. Inventor: Ramakrishna Saripalli
-
Publication number: 20150032960
Abstract: Electronic devices have a semiconductor memory unit including a magnetization compensation layer in a contact plug. One implementation of the semiconductor memory unit includes a variable resistance element having a stacked structure of a first magnetic layer, a tunnel barrier layer, and a second magnetic layer, and a contact plug arranged in at least one side of the variable resistance element and comprising a magnetization compensation layer. Another implementation includes a variable resistance element having a stacked structure of a first magnetic layer having a variable magnetization, a tunnel barrier layer, and a second magnetic layer having a pinned magnetization; and a contact plug arranged at one side of and separated from the variable resistance element to include a magnetization compensation layer that produces a magnetic field to reduce an influence of a magnetic field of the second magnetic layer on the first magnetic layer.
Type: Application. Filed: December 29, 2013. Publication date: January 29, 2015. Applicant: SK HYNIX INC. Inventor: Cha-Deok Dong
-
Patent number: 8943502
Abstract: A method, system, and computer usable program product for retooling lock interfaces to use a dual-mode reader-writer lock (DML). An invocation of a method is received using an interface. The method is configured to operate on a lock associated with a resource in a data processing system. A determination is made whether the lock is an upgraded lock. The upgraded lock is the DML operating in an upgraded mode. An operation corresponding to the method is executed on the DML if the lock is the upgraded lock.
Type: Grant. Filed: March 15, 2010. Date of Patent: January 27, 2015. Assignee: International Business Machines Corporation. Inventors: Bruce Mealey, James Bernard Moody
-
Patent number: 8937965
Abstract: A switch unit, which is connected to one or more computers and one or more storage systems, comprises an update function for updating transfer management information (a routing table, for example). The storage system has a function for adding a virtual port to a physical port. The storage system migrates the virtual port addition destination from a first physical port to a second physical port and transmits a request of a predetermined type which includes identification information on the virtual port of the migration target to the switch unit. The transfer management information is updated by the update function of the switch unit so that the transfer destination which corresponds with the migration target virtual port is the switch port connected to the second physical port.
Type: Grant. Filed: November 8, 2011. Date of Patent: January 20, 2015. Assignee: Hitachi, Ltd. Inventors: Norio Shimozono, Shintaro Ito
-
Patent number: 8938595
Abstract: A method for removing redundant data from a backup storage system is presented. In one example, the method may include identifying a first back-up data object, identifying a second back-up data object, detecting a first portion of the first back-up data object that is a copy of a second portion of the second back-up data object, and replacing the second portion with a pointer to the first portion.
Type: Grant. Filed: June 29, 2007. Date of Patent: January 20, 2015. Assignee: Sepaton, Inc. Inventors: Miklos Sandorfi, Timmie G. Reiter
-
Patent number: 8938561
Abstract: A time-sharing buffer access system manages a buffer among plural master devices. Plural buffer handling units are operable to associatively couple the master devices, respectively, and a first end of each buffer handling unit is used to independently transfer data to or from the associated master device. A second end of each buffer handling unit is coupled to a buffer switch. A time slot controller defines a time slot, during which one of the buffer handling units is selected by the buffer switch such that data are only transferred between the selected buffer handling unit and the buffer.
Type: Grant. Filed: January 10, 2013. Date of Patent: January 20, 2015. Assignee: Skymedi Corporation. Inventors: Ting Wei Chen, Hsingho Liu, Chuang Cheng
-
Patent number: 8938631
Abstract: A technique for determining if a processor in a multiprocessor system implementing a read-copy update (RCU) subsystem may be placed in a low power state. The technique may include performing a first predictive query of the RCU subsystem to request permission for the processor to enter the low power state. If permission is denied, the processor is not placed in the low power state. If permission is granted, the processor is placed in the low power state for a non-fixed duration. Regardless of whether permission is denied or granted, a second confirming query of the RCU subsystem is performed to redetermine whether it is permissible for the processor to be in the low power state.
Type: Grant. Filed: June 30, 2012. Date of Patent: January 20, 2015. Assignee: International Business Machines Corporation. Inventor: Paul E. McKenney
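The predictive-then-confirming pattern might be sketched as below. The callback names are assumptions, and a real RCU subsystem makes these checks from kernel context rather than through callables; this only shows the two-phase control flow:

```python
def maybe_enter_low_power(rcu_permits, enter_low_power, exit_low_power):
    """Two-phase check: a predictive query, then a confirming query.
    Returns True only if the processor remains in the low power state."""
    granted = rcu_permits()          # first, predictive query
    if granted:
        enter_low_power()            # low power state, non-fixed duration
    if not rcu_permits():            # second, confirming query (always made)
        if granted:
            exit_low_power()         # permission revoked on recheck
        return False
    return granted
```

A usage example: if the subsystem grants the predictive query but denies the confirming one, the processor is woken back up.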
-
Patent number: 8937562
Abstract: This disclosure relates to synchronizing dictionaries of acceleration nodes in a computer network. For example, dictionaries of a plurality of acceleration nodes of a client-server network can be synchronized to each include one or more identical data items and data identifier pairs. Synchronization can include transmitting a particular data item, or a combination of a data item and an associated data identifier, to another acceleration node which includes it in its dictionary. A particular acceleration node can, instead of transmitting a data item, transmit an associated data identifier to another acceleration node. As all (or a subset) of the acceleration nodes can have an identical dictionary when employing the methods described herein, the particular acceleration node can use the same dictionary to communicate with all (or the subset of) other acceleration nodes of the computer network.
Type: Grant. Filed: July 29, 2013. Date of Patent: January 20, 2015. Assignee: SAP SE. Inventor: Or Igelka
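One plausible reading of the item-versus-identifier decision is sketched below: when the peer's dictionary already holds the item, only its identifier crosses the wire; otherwise the item/identifier pair is transmitted and added to the peer's dictionary, keeping both dictionaries identical. The integer identifier scheme and function name are illustrative assumptions:

```python
def transmit(dictionary, peer_dict, item):
    """Send an identifier if the peer already has the item; otherwise
    send the (identifier, item) pair and synchronize the peer's dictionary."""
    for ident, data in dictionary.items():
        if data == item:
            if ident in peer_dict:
                return ("id", ident)            # peer expands it locally
            peer_dict[ident] = data             # sync the pair to the peer
            return ("item", ident, data)
    ident = len(dictionary)                     # illustrative id assignment
    dictionary[ident] = item
    peer_dict[ident] = item
    return ("item", ident, item)
```

With identical dictionaries on all nodes, the same `transmit` logic works against any peer, which is the point the abstract makes.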
-
Patent number: 8935500
Abstract: Distributed storage resources having multiple storage units are managed based on data collected from online monitoring of workloads on the storage units and performance characteristics of the storage units. The collected data is sampled at discrete time intervals over a time period of interest, such as a congested time period. Normalized load metrics are computed for each storage unit based on time-correlated sums of the workloads running on the storage unit over the time period of interest and the performance characteristic of the storage unit. Workloads that are migration candidates and storage units that are migration destinations are determined from a representative value of the computed normalized load metrics, which may be the 90th percentile value or a weighted sum of two or more different percentile values.
Type: Grant. Filed: November 10, 2011. Date of Patent: January 13, 2015. Assignee: VMware, Inc. Inventors: Ajay Gulati, Irfan Ahmad, Carl A. Waldspurger, Chethan Kumar
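The normalized load metric — a time-correlated sum of per-workload samples divided by the unit's performance, then summarized by a percentile — can be sketched as follows. The sampling layout, performance scalar, and nearest-rank percentile rounding are assumptions, not the patent's exact formulation:

```python
def normalized_load(workload_samples, perf):
    """Sum the workloads sample-by-sample (time-correlated) and normalize
    by the storage unit's performance characteristic."""
    totals = [sum(t) for t in zip(*workload_samples)]
    return [x / perf for x in totals]

def percentile(samples, p):
    """Nearest-rank percentile of the normalized load samples."""
    s = sorted(samples)
    k = min(len(s) - 1, int(round(p / 100.0 * (len(s) - 1))))
    return s[k]

# Two workloads sampled at three intervals on one storage unit (made up).
loads = normalized_load([[2, 4, 6], [1, 1, 1]], perf=2.0)
representative = percentile(loads, 90)   # the 90th percentile value
```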
-
Patent number: 8935485
Abstract: A data processing apparatus 2 includes a plurality of transaction sources 8, 10 each including a local cache memory. A shared cache memory 16 stores cache lines of data together with shared cache tag values. Snoop filter circuitry 14 stores snoop filter tag values tracking which cache lines of data are stored within the local cache memories. When a transaction is received for a target cache line of data, then the snoop filter circuitry 14 compares the target tag value with the snoop filter tag values and the shared cache circuitry 16 compares the target tag value with the shared cache tag values. The shared cache circuitry 16 operates in a default non-inclusive mode. The shared cache memory 16 and the snoop filter 14 accordingly behave non-inclusively in respect of data storage within the shared cache memory 16, but inclusively in respect of tag storage given the combined action of the snoop filter tag values and the shared cache tag values.
Type: Grant. Filed: August 8, 2011. Date of Patent: January 13, 2015. Assignee: ARM Limited. Inventors: Jamshed Jalal, Brett Stanley Feero, Mark David Werkheiser, Michael Alan Filippo
-
Publication number: 20150012714
Abstract: A method and system for multiple processors to share memory are disclosed. The method includes that: at least one local interconnection network is set, each of which is connected with at least two function modules; a local shared memory unit connected with the local interconnection network is set, and address space of each function module is mapped to the local shared memory unit; a first function module of the at least two function modules writes processed initial data into the local shared memory unit through the local interconnection network; and a second function module of the at least two function modules acquires data from the local shared memory unit via the local interconnection network. The technical solution of the disclosure can address the drawbacks of a conventional system in which multiple processors globally share memory, such as large transmission delay and high management overhead.
Type: Application. Filed: May 8, 2012. Publication date: January 8, 2015. Applicant: ZHONGXING MICROELECTRONICS TECHNOLOGY CO.LTD. Inventors: Cissy Yuan, Fang Qiu, Xuehong Tian, Wanting Tian, Daibing Zeng, Zhigang Zhu
-
Patent number: 8930672
Abstract: A multiprocessor using a shared virtual memory (SVM) is provided. The multiprocessor includes a plurality of processing cores and a memory manager configured to transform a virtual address into a physical address to allow a processing core to access a memory region corresponding to the physical address.
Type: Grant. Filed: March 29, 2011. Date of Patent: January 6, 2015. Assignees: SNU R&DB Foundation, Samsung Electronics Co., Ltd. Inventors: Choon-Ki Jang, Jaejin Lee, Soo-Jung Ryu, Bernhard Egger, Yoon-Jin Kim, Woong Seo, Young-Chul Cho
-
Patent number: 8930642
Abstract: Embodiments of a multi-port memory device may include a plurality of ports and a plurality of memory banks some of which are native to each port and some of which are non-native to each port. The memory device may include a configuration register that stores configuration data indicative of the mapping of the memory banks to the ports. In response to the configuration data, for example, a steering logic may couple each of the ports either to one or all of the native memory banks or to one or all of the non-native memory banks.
Type: Grant. Filed: August 20, 2012. Date of Patent: January 6, 2015. Assignee: Micron Technology, Inc. Inventors: Robert Walker, Dan Skinner
-
Patent number: 8930660
Abstract: A distributing device for generating private information correctly even if shared information is destroyed or tampered with. A shared information distributing device for use in a system for managing private information by a secret sharing method, including: segmenting unit that segments private information into a first through an nth pieces of shared information; first distribution unit that distributes the n pieces of shared information to n holding devices on a one-to-one basis; and second distribution unit that distributes the n pieces of shared information to the n holding devices so that each holding device holds an ith piece of shared information distributed by the first distribution unit, as well as pieces of shared information different from the ith piece of shared information in ordinal position among the n pieces of shared information, "i" being an integer in a range from 1 to n.
Type: Grant. Filed: January 31, 2008. Date of Patent: January 6, 2015. Assignee: Panasonic Corporation. Inventors: Manabu Maeda, Masao Nonaka, Yuichi Futa, Kaoru Yokota, Natsume Matsuzaki, Hiroki Shizuya, Masao Sakai, Shuji Isobe, Eisuke Koizumi, Shingo Hasegawa, Masaki Yoshida
-
Patent number: 8930436
Abstract: Provided is an apparatus and method of dynamically distributing load occurring in multiple cores that may determine a corresponding core to perform functions constituting an application program, thereby enhancing the overall processing rate.
Type: Grant. Filed: October 6, 2010. Date of Patent: January 6, 2015. Assignee: Samsung Electronics Co., Ltd. Inventors: Min Soo Kim, Shi Hwa Lee, Do Hyung Kim, Joon Ho Song, Sang Jo Lee, Won Chang Lee, Doo Hyun Kim
-
Patent number: 8930893
Abstract: Embodiments of the disclosure are directed to inserting a declaration of a non-overwritable variable pointing to a current object in a source code, and inserting a code of storing a value referencing the current object to the non-overwritable variable. Embodiments of the disclosure are directed to converting a source code to generate a shared object in a lock-free mode by inserting a declaration of a non-overwritable variable pointing to a current object in the source code, and inserting a code of storing a value referencing the current object to the non-overwritable variable.
Type: Grant. Filed: June 28, 2012. Date of Patent: January 6, 2015. Assignee: International Business Machines Corporation. Inventor: Takeshi Ogasawara
-
Patent number: 8930639
Abstract: A transactional memory (TM) receives a lookup command across a bus from a processor. The command includes a memory address. In response to the command, the TM pulls an input value (IV). The memory address is used to read a word containing multiple result values (RVs), multiple reference values, and multiple prefix values from memory. A selecting circuit within the TM uses a starting bit position and a mask size to select a portion of the IV. The portion of the IV is a lookup key value (LKV). Mask values are generated based on the prefix values. The LKV is masked by each mask value thereby generating multiple masked values that are compared to the reference values. Based on the comparison a lookup table generates a selector value that is used to select a result value. The selected result value is then communicated to the processor via the bus.
Type: Grant. Filed: November 13, 2012. Date of Patent: January 6, 2015. Assignee: Netronome Systems, Incorporated. Inventor: Gavin J. Stark
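A software approximation of that lookup path — extract the key from the input value using a starting bit position and mask size, mask it per prefix length, compare against the reference values, and return the matching result — might look like this. The assumption that prefixes cover the most-significant bits, and the first-match selection, are illustrative choices:

```python
def tm_lookup(iv, start, size, prefixes, refs, results):
    """Select a lookup key from IV, mask it per prefix, compare against
    references, and return the first matching result value (else None)."""
    key = (iv >> start) & ((1 << size) - 1)     # lookup key value (LKV)
    for plen, ref, res in zip(prefixes, refs, results):
        mask = ((1 << plen) - 1) << (size - plen)   # top `plen` bits of key
        if key & mask == ref:
            return res
    return None

# Hypothetical word: two entries, a 4-bit prefix and an 8-bit exact match.
result = tm_lookup(0xAF, start=0, size=8,
                   prefixes=[4, 8], refs=[0xA0, 0x4F],
                   results=["wide", "exact"])
```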
-
Patent number: 8930669
Abstract: A method for managing memory in a system for an application, comprising: assigning a first block (i.e., a big block) of the memory to the application when the application is initiated, the first block having a first size, the first block being assigned to the application until the application is terminated; dividing the first block into second blocks (i.e., intermediate blocks), each second block having a same second size, a second block of the second blocks for containing data for one or more components of a single data structure to be accessed by one thread of the application at a time; and, dividing the second block into third blocks (i.e., small blocks), each third block having a same third size, a third block of the third blocks for containing data for a single component of the single data structure.
Type: Grant. Filed: June 7, 2013. Date of Patent: January 6, 2015. Assignee: Inetco Systems Limited. Inventors: Thomas Bryan Rushworth, Angus Richard Telfer
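The three-level carving — one big block per application, intermediate blocks for single data structures, small blocks for single components — can be illustrated with byte offsets. The sizes and class layout are arbitrary assumptions for the sketch:

```python
class PoolAllocator:
    """Big block -> same-size intermediate blocks -> same-size small blocks.
    Offsets stand in for addresses inside the application's big block."""

    def __init__(self, big=4096, mid=512, small=64):
        assert big % mid == 0 and mid % small == 0
        self.big, self.mid, self.small = big, mid, small
        # The big block is assigned once and carved into intermediate blocks.
        self.free_mid = list(range(0, big, mid))

    def take_mid(self):
        """Claim one intermediate block (one data structure, one thread)."""
        return self.free_mid.pop()

    def small_slots(self, mid_off):
        """Small-block offsets inside an intermediate block (one component each)."""
        return list(range(mid_off, mid_off + self.mid, self.small))
```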
-
Patent number: 8930643
Abstract: Multi-port memory having an additional control bus for passing commands between ports has individual ports that can be configured to respond to a command received from an external control bus or to a command received from the additional control bus. This facilitates various combinations of ports to vary the bandwidth or latency of the memory, tailoring performance characteristics to differing applications.
Type: Grant. Filed: June 9, 2014. Date of Patent: January 6, 2015. Assignee: Micron Technology, Inc. Inventors: Dan Skinner, J. Thomas Pawlowski
-
Publication number: 20150006826
Abstract: Embodiments include integrated circuits (ICs), system-on-chips (SoCs), and related methods for a strap-based multiplexing scheme for a memory control module. In one embodiment, a memory control module may include a first memory controller coupled to a first bus including a first conductor configured to carry a first signal, and a second memory controller coupled to a second bus including a second conductor configured to carry a second signal. The memory control module may further include a fuse configured to have a fuse setting, and a strap register configured to store a register value. The memory control module may further include a multiplexer configured to selectively pass the first signal or the second signal responsive to the fuse setting and the register value. Other embodiments may be described and claimed.
Type: Application. Filed: June 28, 2013. Publication date: January 1, 2015. Inventor: Yean Kee Yong
-
Patent number: 8924653
Abstract: A method for providing a transactional memory is described. A cache coherency protocol is enforced upon a cache memory including cache lines, wherein each line is in one of a modified state, an owned state, an exclusive state, a shared state, and an invalid state. Upon initiation of a transaction accessing at least one of the cache lines, each of the lines is ensured to be either shared or invalid. During the transaction, in response to an external request for any cache line in the modified, owned, or exclusive state, each line in the modified or owned state is invalidated without writing the line to a main memory. Also, each exclusive line is demoted to either the shared or invalid state, and the transaction is aborted.
Type: Grant. Filed: October 31, 2006. Date of Patent: December 30, 2014. Assignee: Hewlett-Packard Development Company, L.P. Inventors: Blaine D. Gaither, Judson E. Veazey
-
Patent number: 8924596
Abstract: A shared counter resource, such as a register, is disclosed in the hardware, where the register, representing how much free space there is in the command queue, is accessible to one or more processing elements. When a processing element reads the "reservation" register, the hardware automatically decrements the available free space by a preconfigured amount (e.g., 1) and returns the value of the free space immediately prior to the read/reservation. If the read returns 0 (or a number less than the preconfigured amount), there was insufficient free space to satisfy the request. In the event there was insufficient space to satisfy the request, the reservation register may be configured to reserve however much space was available or to not reserve any space at all. Any number of processing elements may read these registers, and various scenarios are described where the input and output queues are accessible via various processing elements.
Type: Grant. Filed: December 6, 2013. Date of Patent: December 30, 2014. Assignee: Concurrent Ventures, LLC. Inventors: Jesse D. Beeson, Jesse B. Yates
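The read-as-reservation semantics can be emulated in software, with a lock standing in for the hardware's atomicity (the hardware needs no lock; this is purely a model, and the "reserve nothing when insufficient" policy is one of the configurations the abstract mentions):

```python
import threading

class ReservationRegister:
    """Reading the register atomically reserves queue space and returns
    the free-space value immediately prior to the reservation."""

    def __init__(self, free):
        self._free = free
        self._lock = threading.Lock()   # models the hardware atomic

    def read_reserve(self, amount=1):
        with self._lock:
            before = self._free
            if before >= amount:        # enough space: take the reservation
                self._free -= amount
            # else: reserve nothing at all (one configurable policy)
            return before
```

A reader that gets back a value less than `amount` knows its reservation failed, without any separate check-then-reserve race.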
-
Patent number: 8924654
Abstract: A computerized method, apparatus, and executable instructions on a machine readable medium for using multiple processors in parallel to create a pack vector from an array in memory. In some embodiments creating the pack vector includes reading portions of the array into a plurality of processors that each select a subset of elements from their respective portions of the array based on a predetermined criterion. Some embodiments further include counting each of the selected subsets of elements and storing each count in a commonly accessible storage location, reading into the processors at least some of the count values once all of the processors have stored their count, and storing only the selected subsets of elements in the pack vector based at least in part on the count values.
Type: Grant. Filed: August 18, 2003. Date of Patent: December 30, 2014. Assignee: Cray Inc. Inventors: Vincent J. Graziano, James R. Kohn
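A sequential emulation of the parallel pack is sketched below: each "worker" filters its portion of the array, the counts are published to a commonly accessible location, and the pack vector is assembled from the per-portion results in order (in the real scheme each processor would use the counts to compute its own write offset). The chunking scheme is an assumption:

```python
def pack_vector(array, pred, nworkers=4):
    """Build a pack vector: each worker selects elements from its portion,
    counts are shared, and results are packed contiguously in order."""
    n = len(array)
    chunk = (n + nworkers - 1) // nworkers
    # Each worker's selected subset (emulated sequentially here).
    parts = [[x for x in array[i * chunk:(i + 1) * chunk] if pred(x)]
             for i in range(nworkers)]
    counts = [len(p) for p in parts]   # the commonly accessible counts
    packed = []
    for p in parts:                    # offsets follow from the counts
        packed.extend(p)
    return packed, counts
```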
-
Publication number: 20140379999
Abstract: A method for transferring messages from a producer element to a consumer element uses a memory shared between the producer element and the consumer element, and a hardware queue including several registers designed to contain addresses of the shared memory. The method includes the steps of storing each message for the consumer element in the shared memory in the form of a node of a linked list, including a pointer to a next node in the list, the pointer being initially void; writing successively the address of each node in a free slot of the queue, whereby the node identified by each slot of the queue is the first node of a linked list assigned to the slot; and, when the queue is full, writing the address of the current node in memory, in the pointer of the last node of the linked list assigned to the last slot of the queue, whereby the current node is placed at the end of the linked list assigned to the last slot of the queue.
Type: Application. Filed: June 19, 2014. Publication date: December 25, 2014. Inventors: Gilles Pelissier, Jean-Philippe Cousin, Badr Bentaybi
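The full-queue fallback — chaining each new node onto the linked list headed by the queue's last slot — can be modeled as below. Dict-based nodes stand in for shared-memory nodes, and `drain` is added only to show the resulting message order; the slot count is arbitrary:

```python
class NodeQueue:
    """Fixed hardware queue of head pointers; when full, new nodes are
    chained onto the linked list assigned to the last slot."""

    def __init__(self, slots=2):
        self.slots = slots
        self.queue = []   # each slot holds the head node of a linked list

    def push(self, payload):
        node = {"data": payload, "next": None}   # pointer initially void
        if len(self.queue) < self.slots:
            self.queue.append(node)              # free slot: new list head
        else:
            tail = self.queue[-1]                # queue full: walk last list
            while tail["next"] is not None:
                tail = tail["next"]
            tail["next"] = node                  # append at the end

    def drain(self):
        """Consume every message, walking each slot's list in order."""
        out = []
        for head in self.queue:
            n = head
            while n is not None:
                out.append(n["data"])
                n = n["next"]
        self.queue = []
        return out
```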
-
Patent number: 8918593
Abstract: A single-ported memory for storing information and only accessible to a plurality of clients, and a dual-ported memory for storing links and accessible to the plurality of clients and to a list manager that maintains a data structure for allocating memory blocks from the first memory and the second memory to the plurality of clients. The dual-ported memory is accessible to both the plurality of clients and the list manager. A method includes receiving a request from a client for access to memory storage at the single-ported memory and the dual-ported memory, and allocating a block of the single-ported memory to the client and a block of the dual-ported memory to the client. After the client has used the memory storage, the allocated block of the single-ported memory and the dual-ported memory are released to a free list data structure used by the list manager to assign storage.
Type: Grant. Filed: September 25, 2013. Date of Patent: December 23, 2014. Assignee: QLOGIC, Corporation. Inventors: Biswajit Khandai, Oscar L. Grijalva
-
Patent number: 8918786
Abstract: A multiprocessing system executes a plurality of processes concurrently. A process execution circuit (10) issues requests to access a shared resource (16) from the processes. A shared access circuit (14) sequences conflicting ones of the requests. A simulating access circuit (12) generates signals to stall at least one of the processes at simulated stall time points selected as a predetermined function of requests from only the at least one of the processes and/or the timing of the requests from only the at least one of the processes, irrespective of whether said stalling is made necessary by sequencing of conflicting ones of the requests. Thus, apart from predetermined maximum response times, predetermined average timing can be guaranteed, independent of the combination of processes that is executed.
Type: Grant
Filed: March 26, 2008
Date of Patent: December 23, 2014
Assignee: NXP, B.V.
Inventors: Marco J. G. Bekooij, Jan W. Van Den Brand
-
Patent number: 8909872
Abstract: A computer system is provided including a central processing unit having an internal cache, a memory controller coupled to the central processing unit, and a closely coupled peripheral coupled to the central processing unit. A coherent interconnection may exist between the internal cache and both the memory controller and the closely coupled peripheral, wherein the coherent interconnection is a bus.
Type: Grant
Filed: October 31, 2006
Date of Patent: December 9, 2014
Assignee: Hewlett-Packard Development Company, L. P.
Inventors: Michael S. Schlansker, Boon Ang, Erwin Oertli
-
Publication number: 20140359231
Abstract: A system and method for efficient buffer management for banked shared memory designs. In one embodiment, a controller within the switch is configured to manage the buffering of the shared memory banks by allocating full address sets to write sources. Each full address set that is allocated to a write source includes a number of memory addresses, wherein each memory address is associated with a different shared memory bank. A size of the full address set can be based on a determined number of buffer access contenders.
Type: Application
Filed: June 26, 2013
Publication date: December 4, 2014
Inventor: William Brad Matthews
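A minimal sketch of the full-address-set idea, with assumed names (`allocate_full_address_set`, `bank_free_lists`) that are not from the publication: a write source receives one free address from every bank, so whichever bank is free on a given cycle, the source already holds a usable address there.

```python
def allocate_full_address_set(bank_free_lists):
    """Toy model: give a write source one free address from EACH shared
    memory bank. `bank_free_lists` maps bank id -> list of free addresses."""
    address_set = {}
    for bank, free_addresses in bank_free_lists.items():
        # One address per bank; a real controller would size the number
        # of such sets by the number of buffer access contenders.
        address_set[bank] = free_addresses.pop(0)
    return address_set
```

The returned mapping is the "full address set": one valid write target per bank, removed from each bank's free pool.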
-
Publication number: 20140359232
Abstract: A method and apparatus of a device that includes a shared memory hash table that notifies one or more readers of changes to the shared memory hash table is described. In an exemplary embodiment, a device modifies a value in the shared memory hash table, where the value has a corresponding key. The device further stores a notification in a notification queue that indicates the value has changed. In addition, the device invalidates a previous entry in the notification queue that indicates the value has been modified. The device signals to the reader that a notification is ready to be processed.
Type: Application
Filed: May 5, 2014
Publication date: December 4, 2014
Inventors: Hugh W. Holbrook, Duncan Stuart Ritchie, Sebastian Sapa, Simon Francis Capper
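The write-then-invalidate flow above can be shown in a small Python sketch. This is a single-process toy, not the patented shared-memory implementation, and all names (`NotifyingHashTable`, `put`, `drain`) are assumptions: each write appends a notification and invalidates any stale, unconsumed notification for the same key, so the reader sees each changed key once.

```python
class NotifyingHashTable:
    """Toy model: a hash table whose writer queues change notifications
    and invalidates superseded ones for the same key."""
    def __init__(self):
        self.table = {}
        self.queue = []    # entries are [key, valid] pairs
        self.pending = {}  # key -> index of its live queue entry

    def put(self, key, value):
        self.table[key] = value
        # Invalidate any previous, not-yet-consumed notification for key.
        if key in self.pending:
            self.queue[self.pending[key]][1] = False
        self.queue.append([key, True])
        self.pending[key] = len(self.queue) - 1

    def drain(self):
        """Reader side: consume valid notifications in queue order."""
        changed = [key for key, valid in self.queue if valid]
        self.queue.clear()
        self.pending.clear()
        return changed
```

Writing the same key twice before the reader runs yields one notification for that key, positioned at its most recent write.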
-
Patent number: 8904103
Abstract: A data processing apparatus includes a calculating unit configured to calculate a compression ratio when a block selected from among the plurality of blocks is compressed; a determining unit configured to determine whether a block is to be compressed by comparing the calculated compression ratio with a threshold; a recording unit configured to record the block on the storage device in a compressed or uncompressed state on a basis of a result of the determination; a management information creating unit configured to create management information in association with data identification information for identifying the data, where state information indicating a compressed or uncompressed state is recorded to the management information in association with each block when that block is recorded on the storage device; and a storage processing unit configured to store the management information created by the management information creating unit on a memory.
Type: Grant
Filed: November 8, 2011
Date of Patent: December 2, 2014
Assignee: Fujitsu Limited
Inventor: Yukio Taniyama
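A hedged sketch of the threshold decision, using Python's standard `zlib` rather than anything named in the patent; `record_block` and the dictionary layout of the management information are illustrative assumptions. A block is stored compressed only when the achieved ratio (compressed size over original size) beats the threshold, and its state is recorded alongside.

```python
import zlib


def record_block(block: bytes, threshold: float):
    """Toy model: compress a block, keep the compressed form only if the
    compression ratio is below `threshold`, and record the state as
    management information for that block."""
    compressed = zlib.compress(block)
    ratio = len(compressed) / len(block)
    if ratio < threshold:
        return compressed, {"state": "compressed", "ratio": ratio}
    # Not worth it: record the block uncompressed.
    return block, {"state": "uncompressed", "ratio": ratio}
```

Highly repetitive data passes the threshold; already-compressed data does not, so it is recorded as-is with its state noted in the management information.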
-
Patent number: 8904118
Abstract: A mechanism for efficient intra-die collective processing across nodelets with separate shared memory coherency domains is provided. An integrated circuit die may include a hardware collective unit implemented on the integrated circuit die. A plurality of cores on the integrated circuit die is grouped into a plurality of shared memory coherence domains. Each of the plurality of shared memory coherence domains is connected to the collective unit for performing collective operations between the plurality of shared memory coherence domains.
Type: Grant
Filed: January 7, 2011
Date of Patent: December 2, 2014
Assignee: International Business Machines Corporation
Inventors: Amith R. Mamidala, Valentina Salapura, Robert W. Wisniewski
-
Patent number: 8904115
Abstract: Parallel pipelines are used to access a shared memory. The shared memory is accessed via a first pipeline by a processor to access cached data from the shared memory. The shared memory is accessed via a second pipeline by a memory access unit to access the shared memory. A first set of tags is maintained for use by the first pipeline to control access to the cache memory, while a second set of tags is maintained for use by the second pipeline to access the shared memory. Arbitrating for access to the cache memory for a transaction request in the first pipeline and for a transaction request in the second pipeline is performed after each pipeline has checked its respective set of tags.
Type: Grant
Filed: August 18, 2011
Date of Patent: December 2, 2014
Assignee: Texas Instruments Incorporated
Inventors: Abhijeet Ashok Chachad, Raguram Damodaran, Jonathan (Son) Hung Tran, Timothy David Anderson, Sanjive Agarwala
-
Publication number: 20140351516
Abstract: A method of virtualizing an application to execute on a plurality of operating systems without installation. The method includes creating an input configuration file for each operating system. The templates each include a collection of configurations that were made by the application during installation on a computing device executing the operating system. The templates are combined into a single application template having a layer including the collection of configurations for each operating system. The collection of configurations includes files and registry entries. The collection also identifies and configures environmental variables, systems, and the like. Files in the collection of configurations and references to those files may be replaced with references to files stored on installation media. The application template is used to build an executable of the virtualized application.
Type: Application
Filed: August 5, 2014
Publication date: November 27, 2014
Inventors: Stefan I. Larimore, C. Michael Murphey, Kenji C. Obata
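The layering step can be sketched abstractly in Python. Nothing here comes from the publication; `combine_templates` and `select_layer` are assumed names, and the per-OS configurations are reduced to plain dictionaries: each OS's captured configuration becomes one layer of a single application template, and the matching layer is selected at launch.

```python
def combine_templates(per_os_templates):
    """Toy model: fold per-OS configuration collections (files, registry
    entries, etc.) into one application template with a layer per OS."""
    return {"layers": dict(per_os_templates)}


def select_layer(template, running_os):
    """At launch, the virtualized app applies the layer for the host OS."""
    return template["layers"][running_os]
```

The single template thus carries every OS's collection, but only one layer is consulted on any given host.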
-
Publication number: 20140351525
Abstract: A method of providing memory accesses for a multi-core processor includes reserving a group of pins of a multi-core processor to transmit either data or address information in communication with one or more memory chips, receiving memory access requests from the plurality of processor cores, determining granularity of the memory access requests by a memory controller, and dynamically adjusting the number of pins in the group of pins to be used to transmit address information based on the granularity of the memory access requests.
Type: Application
Filed: December 30, 2013
Publication date: November 27, 2014
Applicant: Peking University
Inventors: Yifeng Chen, Weilong Cui, Xiang Cui
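A toy Python sketch of the adjustment policy, under stated assumptions: the function name `split_pins`, the 8-byte fine/coarse cutoff, and the halving rule are all illustrative choices, not from the publication. The idea shown is only that fine-grained requests spend more of the shared pin group on address bits, while coarse requests free those pins for data.

```python
def split_pins(total_pins, granularity_bytes, max_addr_pins):
    """Toy model: divide a shared pin group between address and data
    lines based on the granularity of the current access requests."""
    if granularity_bytes <= 8:
        # Fine-grained scatter: many small requests need more address pins.
        addr_pins = max_addr_pins
    else:
        # Coarse-grained streaming: fewer addresses, wider data path.
        addr_pins = max_addr_pins // 2
    return addr_pins, total_pins - addr_pins
```

The controller would re-evaluate this split as the observed request granularity changes.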
-
Patent number: 8898396
Abstract: Memory sharing in a software pipeline on a network on chip (‘NOC’), the NOC including integrated processor (‘IP’) blocks, routers, memory communications controllers, and network interface controllers, with each IP block adapted to a router through a memory communications controller and a network interface controller, where each memory communications controller controls communications between an IP block and memory, and each network interface controller controls inter-IP block communications through routers, including segmenting a computer software application into stages of a software pipeline, the software pipeline comprising one or more paths of execution; allocating memory to be shared among at least two stages including creating a smart pointer, the smart pointer including data elements for determining when the shared memory can be deallocated; determining, in dependence upon the data elements for determining when the shared memory can be deallocated, that the shared memory can be deallocated; and deallocating the shared memory.
Type: Grant
Filed: April 23, 2012
Date of Patent: November 25, 2014
Assignee: International Business Machines Corporation
Inventors: Eric O. Mejdrich, Paul E. Schardt, Robert A. Shearer
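The smart-pointer lifecycle can be modeled minimally in Python. This is a sketch under assumptions (the class name `SmartPointer` and a plain reference count as the "data elements" are illustrative, not the patent's design): shared memory handed to several pipeline stages is deallocated only when the last stage releases it.

```python
class SmartPointer:
    """Toy model: shared memory tracked by a reference count; the last
    pipeline stage to release it triggers deallocation."""
    def __init__(self, buffer, num_stages):
        self.buffer = buffer
        self.refs = num_stages   # data element: stages still using the memory
        self.deallocated = False

    def release(self):
        """Called by a stage when it is done with the shared memory."""
        self.refs -= 1
        if self.refs == 0:
            # No stage still needs the memory: safe to deallocate.
            self.buffer = None
            self.deallocated = True
```

Until every stage has called `release`, the buffer stays live; the final release flips it to deallocated.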
-
Publication number: 20140344527
Abstract: A multiprocessor system and method for swapping applications executing on the multiprocessor system are disclosed. The plurality of applications may include a first application and a plurality of other applications. The first application may be dynamically swapped with a second application. The swapping may be performed without stopping the plurality of other applications. The plurality of other applications may continue to execute during the swapping to perform a real-time operation and process real-time data. After the swapping, the plurality of other applications may continue to execute with the second application, and at least a subset of the plurality of other applications may communicate with the second application to perform the real-time operation and process the real-time data.
Type: Application
Filed: May 17, 2013
Publication date: November 20, 2014
Applicant: COHERENT LOGIX INCORPORATED
Inventors: Wilbur William Kaku, Michael Lyle Purnell, Geoffrey Neil Ellis, John Mark Beardslee, Zhong Qing Shang, Teng-I Wang, Stephen E. Lim
-
Patent number: 8892678
Abstract: In a method for writing (S9, S11) operating data (6) through a writing system (1, 2) comprising a central station (1) and at least one distribution station (2) to a portable data carrier (3) connected with the at least one distribution station (2) within the framework of production of the data carrier (3), an individual addressing is generated (S4, S5) for the data carrier (3) connected with the at least one distribution station (2), via which addressing the data carrier (3) is uniquely addressable system-wide upon the writing (S9, S11) of the operating data (6). In doing so, at least a part of the system-wide unique individual addressing can be generated (S4, S5) by the data carrier (3) itself or by the distribution station (2) with which the data carrier (3) is connected.
Type: Grant
Filed: November 26, 2008
Date of Patent: November 18, 2014
Assignee: Giesecke & Devrient GmbH
Inventors: Erich Englbrecht, Walter Hinz, Thomas Palsherm, Stephan Spitz
-
Patent number: 8886857
Abstract: Multiple devices are provided access to a common, single instance of data and may use it without consuming resources beyond what would be required if only one device were using that data in a traditional configuration. In order to retain the device-specific differences, they are kept separate, but their relationship to the common data is maintained. All of this is done in a fashion that allows a given device to perceive and use its data as though it was its own separately accessible data.
Type: Grant
Filed: March 6, 2014
Date of Patent: November 11, 2014
Assignee: DataCore Software Corporation
Inventors: Jeffry Z. Slutzky, Roni J. Putra, Ziya Aral
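The single-instance-plus-differences idea above can be sketched as a layered read path. The class and method names (`SingleInstanceStore`, `read`, `write`) and the dictionary representation are illustrative assumptions, not the patented mechanism: every device reads through to one shared copy, while its own writes land in a small per-device delta map.

```python
class SingleInstanceStore:
    """Toy model: devices share one instance of common data; per-device
    differences live in a delta map layered over it."""
    def __init__(self, common):
        self.common = dict(common)  # the single shared instance
        self.deltas = {}            # device -> {key: value} overrides

    def read(self, device, key):
        # A device-specific value wins; otherwise fall through to the
        # common data, so the device perceives one seamless dataset.
        return self.deltas.get(device, {}).get(key, self.common[key])

    def write(self, device, key, value):
        # Writes never touch the shared copy, only this device's delta.
        self.deltas.setdefault(device, {})[key] = value
```

One device's customization is invisible to the others, yet each device sees what looks like its own complete, writable copy.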