Patents Examined by Pierre-Michel Bataille
-
Patent number: 9274991
Abstract: A processor-based system includes a processor coupled to a system controller through a processor bus. The system controller is used to couple at least one input device, at least one output device, and at least one data storage device to the processor. Also coupled to the processor bus is a memory hub controller coupled to a memory hub of at least one memory module having a plurality of memory devices coupled to the memory hub. The memory hub is coupled to the memory hub controller through a downstream bus and an upstream bus. The downstream bus has a width of M bits, and the upstream bus has a width of N bits. Although the sum of M and N is fixed, the individual values of M and N can be adjusted during the operation of the processor-based system to adjust the bandwidths of the downstream bus and the upstream bus.
Type: Grant
Filed: June 10, 2014
Date of Patent: March 1, 2016
Assignee: Micron Technology, Inc.
Inventors: Jeffrey R. Jobs, Thomas A. Stenglein
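The adjustable split can be sketched as a simple allocation policy (a hypothetical model: the 48-bit total, the proportional rule, and the per-direction minimum are illustrative assumptions, not taken from the patent):

```python
# Hypothetical sketch of the adjustable-width scheme: the sum of the
# downstream (M) and upstream (N) widths is fixed, but the split can
# shift at run time to match traffic demand in each direction.

TOTAL_WIDTH = 48  # assumed fixed M + N, in bits

def split_bus_width(downstream_demand, upstream_demand, minimum=8):
    """Allocate the fixed lane budget proportionally to demand,
    keeping at least `minimum` bits in each direction."""
    total_demand = downstream_demand + upstream_demand
    if total_demand == 0:
        m = TOTAL_WIDTH // 2
    else:
        m = round(TOTAL_WIDTH * downstream_demand / total_demand)
    m = max(minimum, min(TOTAL_WIDTH - minimum, m))
    n = TOTAL_WIDTH - m
    return m, n
```

Whatever the policy, the invariant the abstract states is the one the last line enforces: M + N never changes, only the split does.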
-
Patent number: 9262077
Abstract: The solid state drive device includes a memory device including a plurality of flash memories and a memory controller connected with a host and configured to control the memory device. The memory controller includes first and second cores, a host interface configured to interface with the host, and a flash memory controller configured to control the plurality of flash memories. The first core is configured to control transmission and reception of data to and from the host. The second core is configured to control transmission and reception of data to and from the memory device.
Type: Grant
Filed: May 13, 2015
Date of Patent: February 16, 2016
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Seungho Lim, Sil Wan Chang, Woonhyug Jee
-
Patent number: 9262312
Abstract: The present disclosure describes systems and techniques relating to processing of network communications. According to an aspect of the described systems and techniques, a network device includes a content addressable memory (CAM); and processing circuitry configured to receive records to be stored in the CAM, compare the records to identify similar bit values at respective bit positions of at least a portion of the records, store in the CAM the similar bit values in a single sample record corresponding to the portion of the records, store in the CAM remaining non-similar bit values of the portion of the records, thereby compressing the portion of the records stored in the CAM, store in the CAM one or more remaining records of the received records not included in the portion of the records, and search the CAM including the compressed portion of the records and the one or more remaining records.
Type: Grant
Filed: October 9, 2013
Date of Patent: February 16, 2016
Assignee: Marvell International Ltd.
Inventors: Hillel Gazit, Sohail Syed, Gevorg Torjyan
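The compression idea in this abstract — bit positions shared by a group of records are stored once in a sample record, and only the disagreeing positions are stored per record — can be sketched in software (a minimal model with bit-string records; the function names and wildcard notation are illustrative, not from the patent):

```python
def compress_records(records):
    """Compress a group of equal-length bit strings: positions where all
    records agree are kept once in a 'sample' record (fixed bits); only
    the disagreeing positions are stored per record as a residual."""
    width = len(records[0])
    same = [all(r[i] == records[0][i] for r in records) for i in range(width)]
    sample = "".join(records[0][i] if same[i] else "*" for i in range(width))
    residuals = ["".join(r[i] for i in range(width) if not same[i])
                 for r in records]
    return sample, residuals

def match(sample, residual, key):
    """Search: a key matches a compressed record if it agrees with the
    sample's fixed bits and with that record's residual bits at the
    wildcard positions."""
    res_iter = iter(residual)
    for s, k in zip(sample, key):
        if s == "*":
            if next(res_iter) != k:
                return False
        elif s != k:
            return False
    return True
```

The storage saving is the point: for a group of similar records, the shared bits are stored once instead of once per record.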
-
Patent number: 9262343
Abstract: A memory access operand of an instruction that accesses memory may be treated as a transaction atomic access. The processor may execute one or more processor state setting instructions, causing state information to be set in the processor. Upon executing a transaction policy override instruction, the default conflict detection policy is overridden for one or more subsequent memory accessing instructions.
Type: Grant
Filed: March 26, 2014
Date of Patent: February 16, 2016
Assignee: International Business Machines Corporation
Inventors: Michael Karl Gschwind, Maged M. Michael, Valentina Salapura, Eric M. Schwarz, Chung-Lung K. Shum
-
Patent number: 9262284
Abstract: Embodiments of the invention address deficiencies of the art in respect to memory fault tolerance, and provide a novel and non-obvious method, system and apparatus for single channel memory mirroring. In one embodiment of the invention, a single channel memory mirroring system can be provided. The single channel memory mirroring system can include a memory controller, a single communications channel, an operational data portion of memory, and a duplicate data portion of memory, both portions being communicatively coupled to the memory controller over the single communications channel. Finally, the system can include single channel memory mirror logic. The logic can include program code enabled to mirror data in the operational data portion of memory in the duplicate data portion of memory.
Type: Grant
Filed: December 7, 2006
Date of Patent: February 16, 2016
Assignee: LENOVO ENTERPRISE SOLUTIONS (SINGAPORE) PTE. LTD.
Inventors: William E. Atherton, Jimmy G. Foster, Sr.
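A toy model of the mirroring behavior (illustrative only: the class, the fault set, and the read-fallback policy are assumptions for the sketch, not claims from the patent):

```python
class MirroredMemory:
    """Minimal model of single-channel mirroring: every write travels
    over the one channel and is applied to both the operational portion
    and the duplicate portion; a read falls back to the mirror when the
    operational copy is marked faulty."""
    def __init__(self, size):
        self.operational = [0] * size
        self.duplicate = [0] * size
        self.faulty = set()  # addresses with a fault in the operational portion

    def write(self, addr, value):
        self.operational[addr] = value
        self.duplicate[addr] = value  # mirrored over the same channel

    def read(self, addr):
        if addr in self.faulty:
            return self.duplicate[addr]
        return self.operational[addr]
```

The trade-off against dual-channel mirroring is implicit in `write`: both copies share one channel's bandwidth, in exchange for not needing a second channel.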
-
Patent number: 9262337
Abstract: A translation lookaside buffer (TLB) of a computing device is a cache of virtual to physical memory address translations. A TLB flush promotion threshold value indicates when all entries of the TLB are to be flushed rather than individual entries of the TLB. The TLB flush promotion threshold value is determined dynamically by the computing device by determining an amount of time it takes to flush and repopulate all entries of the TLB. A determination is then made as to the number of TLB entries that can be individually flushed and repopulated in that same amount of time. The threshold value is set based on (e.g., equal to) the number of TLB entries that can be individually flushed and repopulated in that amount of time.
Type: Grant
Filed: October 9, 2013
Date of Patent: February 16, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventor: Landy Wang
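The threshold computation described above reduces to a division, and the flush decision to a comparison (a sketch; the nanosecond figures and function names are illustrative assumptions):

```python
def flush_promotion_threshold(full_flush_ns, per_entry_flush_ns):
    """The threshold is the number of individual entry flush+repopulate
    operations that fit in the time a full TLB flush+repopulate takes.
    Beyond that count, flushing everything is the cheaper option."""
    return full_flush_ns // per_entry_flush_ns

def choose_flush(num_entries_to_flush, threshold):
    """Promote to a full flush once the per-entry cost would exceed the
    full-flush cost."""
    if num_entries_to_flush >= threshold:
        return "flush_all"
    return "flush_individual"
```

The point of measuring rather than hardcoding the threshold is that both costs vary by machine, so the break-even count does too.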
-
Patent number: 9256553
Abstract: A memory access operand of an instruction that accesses memory may be treated as a transaction atomic access. Non-default atomicity handling of memory accesses is enabled based on successful comparison by the processor of specified storage values at run-time. Upon executing a transaction policy override instruction, the default conflict detection policy is overridden for one or more subsequent memory accessing instructions.
Type: Grant
Filed: March 26, 2014
Date of Patent: February 9, 2016
Assignee: International Business Machines Corporation
Inventor: Chung-Lung K. Shum
-
Patent number: 9252131
Abstract: By arranging dies in a stack such that failed cores are aligned with adjacent good cores, fast connections between good cores and cache of failed cores can be implemented. Cache can be allocated according to a priority assigned to each good core, by latency between a requesting core and available cache, and/or by load on a core.
Type: Grant
Filed: October 10, 2013
Date of Patent: February 2, 2016
Assignee: GLOBALFOUNDRIES INC.
Inventors: Edgar R. Cordero, Anand Haridass, Subrat K. Panda, Saravanan Sethuraman, Diyanesh Babu Chinnakkonda Vidyapoornachary
-
Patent number: 9251060
Abstract: Described herein are embodiments of an apparatus configured for compression-enabled blending of data, a system including the apparatus configured for compression-enabled blending of data, and a method for compression-enabled blending of data. An apparatus configured for compression-enabled blending of data may include non-volatile memory configured to operate in a single-level cell mode and a multi-level cell mode, a compression module configured to compress data to generate compressed data, and a memory controller configured to write, in response to a reduction ratio of the compressed data being less than a threshold compression ratio, a first portion of the compressed data to the non-volatile memory in the single-level cell mode, and a second portion of the compressed data to the non-volatile memory in the multi-level cell mode. Other embodiments may be described and/or claimed.
Type: Grant
Filed: March 29, 2012
Date of Patent: February 2, 2016
Assignee: INTEL CORPORATION
Inventors: Jawad B. Khan, Richard L. Coulson
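The placement rule — compress, then blend a first portion into SLC mode and the rest into MLC mode when the data compresses well enough — can be sketched as follows (zlib stands in for the hardware compression module; the 0.5 threshold and 25% SLC fraction are illustrative assumptions):

```python
import zlib

SLC_THRESHOLD = 0.5  # assumed threshold compression ratio

def place_compressed(data, slc_fraction=0.25):
    """Compress the data, then decide placement: when the reduction
    ratio is below the threshold, blend it across modes — a first
    portion to fast single-level-cell storage, the second portion to
    dense multi-level-cell storage. Otherwise keep it all in MLC."""
    compressed = zlib.compress(data)
    ratio = len(compressed) / len(data)
    if ratio < SLC_THRESHOLD:
        cut = int(len(compressed) * slc_fraction)
        return {"slc": compressed[:cut], "mlc": compressed[cut:]}
    return {"slc": b"", "mlc": compressed}
```

The intuition is that well-compressed data has slack capacity to spend, so part of it can afford the lower density of SLC in exchange for speed.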
-
Patent number: 9244620
Abstract: A storage device includes a semiconductor memory storing data. A controller instructs data to be written to the semiconductor memory in accordance with a request the controller receives. A register holds performance class information indicating which one of the specified performance classes the storage device requires in order to demonstrate the best performance it supports.
Type: Grant
Filed: April 13, 2015
Date of Patent: January 26, 2016
Assignee: KABUSHIKI KAISHA TOSHIBA
Inventor: Akihisa Fujimoto
-
Patent number: 9244616
Abstract: A storage system manages a pool to which multiple VVOLs (virtual logical volumes conforming to thin provisioning) are associated, assigns a real area (RA) from any tier in an available tier pattern associated with a write-destination VVOL to a write-destination virtual area (VA), and carries out a reassignment process for migrating data inside this RA to an RA of a different tier than the tier having this RA based on the access status of the RA assigned to the VA. A management system assumes that a specified tier has been removed from the available tier pattern of a target VVOL, predicts the performance of the target VVOL and all the other VVOLs associated with the pool to which the target VVOL is associated, determines whether or not there is a VVOL for which the predicted performance is lower than a required performance, and when such a VVOL does not exist, instructs the storage system to remove the specified tier from the available tier pattern of the target VVOL.
Type: Grant
Filed: March 24, 2015
Date of Patent: January 26, 2016
Assignee: Hitachi, Ltd.
Inventors: Kyoko Miwa, Tsukasa Shibayama, Masayasu Asano
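The management system's check — hypothetically remove a tier, re-predict, and allow the removal only if no VVOL falls below its requirement — can be sketched with a deliberately naive performance model (the IOPS figures, tier names, and best-tier predictor are illustrative assumptions; the real prediction would account for pool-wide interactions):

```python
def predict_iops(tiers_available):
    """Toy performance model (an assumption, not from the patent): a
    VVOL's predicted IOPS is the best among the tiers it may still use."""
    TIER_IOPS = {"ssd": 50000, "sas": 10000, "sata": 2000}
    return max((TIER_IOPS[t] for t in tiers_available), default=0)

def can_remove_tier(vvols, target, tier_to_remove):
    """Simulate removing `tier_to_remove` from the target VVOL's
    available tier pattern, re-predict every VVOL in the pool, and
    approve only if none falls below its required performance."""
    for name, (tiers, required) in vvols.items():
        available = set(tiers)
        if name == target:
            available -= {tier_to_remove}
        if predict_iops(available) < required:
            return False
    return True
```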
-
Patent number: 9229858
Abstract: An example method of managing memory for an application includes identifying a plurality of regions of a heap storing one or more objects of a first type and one or more objects of a second type. Each object of the second type stores a memory address of an object of the first type. The method also includes selecting a set of target collection regions of the heap. The method includes, in a concurrent marking phase, marking one or more reachable objects of the first type as live data. The method further includes, for each region of the plurality, maintaining a calculation of live data in the respective region. The method also includes traversing the objects of the first type marked in the concurrent marking phase and evacuating a set of traversed objects from a target collection region to a destination region of the heap.
Type: Grant
Filed: October 8, 2013
Date of Patent: January 5, 2016
Assignee: Red Hat, Inc.
Inventor: Christine H. Flood
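One use of the per-region live-data calculation is choosing which regions to evacuate: regions with little live data are the cheapest to empty. A sketch of that selection step (the tuple layout and reclaim-goal policy are illustrative assumptions, not the patented method itself):

```python
def choose_target_regions(regions, free_goal):
    """After marking, each region carries a live-byte count. Pick the
    regions with the least live data (cheapest to evacuate) until
    evacuating them would reclaim at least `free_goal` bytes.

    `regions` is a list of (name, live_bytes, region_size) tuples."""
    reclaimed, targets = 0, []
    for name, live, size in sorted(regions, key=lambda r: r[1]):
        if reclaimed >= free_goal:
            break
        targets.append(name)
        reclaimed += size - live  # garbage space freed by evacuation
    return targets
```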
-
Patent number: 9229854
Abstract: This disclosure provides for improvements in managing multi-drive, multi-die or multi-plane NAND flash memory. In one embodiment, the host directly assigns physical addresses and performs logical-to-physical address translation in a manner that reduces or eliminates the need for a memory controller to handle these functions, and initiates functions such as wear leveling in a manner that avoids competition with host data accesses. A memory controller optionally educates the host on array composition, capabilities and addressing restrictions. Host software can therefore interleave write and read requests across dies in a manner unencumbered by memory controller address translation. For multi-plane designs, the host writes related data in a manner consistent with multi-plane device addressing limitations. The host is therefore able to "plan ahead" in a manner supporting host issuance of true multi-plane read commands.
Type: Grant
Filed: October 7, 2013
Date of Patent: January 5, 2016
Assignee: Radian Memory Systems, LLC
Inventors: Andrey V. Kuzmin, James G. Wayda
-
Patent number: 9223690
Abstract: Freeing memory safely with low performance overhead in a concurrent environment is described. An example method includes creating a reference count for each sub block in a global memory block, where each global memory block includes a plurality of sub blocks aged based on their respective allocation times. A reference count for a first sub block is incremented when a thread operates on a collection of data items and accesses the first sub block for a first time. Reference counts for the first sub block and a second sub block are lazily updated. Subsequently, the sub blocks are scanned through in the order of their age until a sub block with a non-zero reference count is encountered. Accordingly, one or more sub blocks whose corresponding reference counts are equal to zero are freed safely and with low performance overhead.
Type: Grant
Filed: October 4, 2013
Date of Patent: December 29, 2015
Assignee: Sybase, Inc.
Inventor: Vivek Kandiyanallur
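The age-ordered scan is the key step: walk sub blocks oldest first and free only the leading run of zero-count blocks, stopping at the first block still referenced. A minimal sketch (the list-of-tuples representation is an illustrative assumption):

```python
def free_aged_blocks(sub_blocks):
    """Scan sub blocks from oldest to newest and free every leading
    block whose reference count is zero. Stop at the first non-zero
    count: blocks younger than a still-referenced block are not freed,
    which is what makes the scan safe without a full sweep.

    `sub_blocks` is a list of (name, refcount) tuples, oldest first."""
    freed = []
    for name, refcount in sub_blocks:
        if refcount != 0:
            break
        freed.append(name)
    return freed
```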
-
Patent number: 9223716
Abstract: Processors and methods disclosed herein include a cache memory unit, n processor cores where n ≥ 1, a controller connected to the cache memory unit and to each of the n processor cores, and n obstruction monitoring units, where each obstruction monitoring unit is connected to the controller and to a different one of the n processor cores, and where during operation of the electronic processor, each obstruction monitoring unit is configured to detect an obstruction corresponding to an operation from the processor core connected to the obstruction monitoring unit before the operation executes in the cache memory unit.
Type: Grant
Filed: October 4, 2013
Date of Patent: December 29, 2015
Assignee: The Penn State Research Foundation
Inventors: Jue Wang, Yuan Xie
-
Patent number: 9213646
Abstract: Systems and methods are disclosed for cache data value tracking. In an embodiment, a controller may be configured to select data; set a node weight for the data representing a cache hit potential for the data; store a first time stamp value for the data representing when the data was accessed; and store the data in a cache memory based on the node weight and the first time stamp value. In another embodiment, a method may comprise setting a node weight for data associated with a data access command, storing a first access counter value for the data representing a number of times new data has been stored to the cache memory when the data was accessed, and removing the data from the cache memory or maintaining the data in the cache memory based on the node weight and the first access counter value.
Type: Grant
Filed: June 20, 2013
Date of Patent: December 15, 2015
Assignee: Seagate Technology LLC
Inventors: Margot Ann LaPanse, Joseph Masaki Baum, Stanton MacDonough Keeler, Michael Edward Baum, Thomas Dale Hosman, Robert Dale Murphy
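An eviction decision combining a node weight with an age signal, as the abstract describes, might look like this (the weight floor, age limit, and combination rule are illustrative assumptions; the patent does not specify them):

```python
def evict_candidates(entries, now, weight_floor, max_age):
    """Keep an entry while its node weight (hit potential) stays high or
    its time stamp is recent; evict only when both the weight has fallen
    below the floor and the stamp is older than `max_age`.

    `entries` is a list of (name, node_weight, time_stamp) tuples."""
    keep, evict = [], []
    for name, node_weight, time_stamp in entries:
        if node_weight < weight_floor and now - time_stamp > max_age:
            evict.append(name)
        else:
            keep.append(name)
    return keep, evict
```

Requiring both signals to agree is what distinguishes this from plain LRU: high-potential data survives a quiet period, and fresh data survives a low weight.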
-
Patent number: 9208092
Abstract: A coherent attached processor proxy (CAPP) includes transport logic having a first interface configured to support communication with a system fabric of a primary coherent system and a second interface configured to support communication with an attached processor (AP) that is external to the primary coherent system and that includes a cache memory that holds copies of memory blocks belonging to a coherent address space of the primary coherent system. The CAPP further includes one or more master machines that initiate memory access requests on the system fabric of the primary coherent system on behalf of the AP, one or more snoop machines that service requests snooped on the system fabric, and a CAPP directory that includes a precise directory, having a plurality of entries each associated with a smaller data granule, and a coarse directory, having a plurality of entries each associated with a larger data granule.
Type: Grant
Filed: September 24, 2013
Date of Patent: December 8, 2015
Assignee: GLOBALFOUNDRIES INC.
Inventors: Bartholomew Blaner, Michael S. Siegel, Jeffrey A. Stuecheli, Charles F. Marino
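The two-granularity directory structure can be modeled as two tag sets checked on every snoop (a toy model: the granule sizes, class name, and set-based representation are illustrative assumptions, not the hardware design):

```python
LINE = 128      # assumed smaller data granule, in bytes
GRANULE = 4096  # assumed larger data granule, in bytes

class CappDirectory:
    """Toy model of a two-granularity directory: the precise directory
    tracks individual lines, the coarse directory tracks whole large
    granules, and a snooped address is a possible hit if either level
    covers it. The coarse level trades precision for capacity."""
    def __init__(self):
        self.precise = set()  # line-granule tags
        self.coarse = set()   # large-granule tags

    def record_line(self, addr):
        self.precise.add(addr // LINE)

    def record_granule(self, addr):
        self.coarse.add(addr // GRANULE)

    def may_be_cached(self, addr):
        return (addr // LINE in self.precise
                or addr // GRANULE in self.coarse)
```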
-
Patent number: 9208091
Abstract: A coherent attached processor proxy (CAPP) includes transport logic having a first interface configured to support communication with a system fabric of a primary coherent system and a second interface configured to support communication with an attached processor (AP) that is external to the primary coherent system and that includes a cache memory that holds copies of memory blocks belonging to a coherent address space of the primary coherent system. The CAPP further includes one or more master machines that initiate memory access requests on the system fabric of the primary coherent system on behalf of the AP, one or more snoop machines that service requests snooped on the system fabric, and a CAPP directory that includes a precise directory, having a plurality of entries each associated with a smaller data granule, and a coarse directory, having a plurality of entries each associated with a larger data granule.
Type: Grant
Filed: June 19, 2013
Date of Patent: December 8, 2015
Assignee: GLOBALFOUNDRIES INC.
Inventors: Bartholomew Blaner, Michael S. Siegel, Jeffrey A. Stuecheli, Charles F. Marino
-
Patent number: 9208901
Abstract: A memory device, and a method of operating same, utilize a memory buffer associated with a memory array to maintain information to be available subsequent to a program-fail event associated with the memory array.
Type: Grant
Filed: October 6, 2014
Date of Patent: December 8, 2015
Assignee: Micron Technology, Inc.
Inventors: Cimmino Pasquale, Falanga Francesco, Massimo Iaculo, Minopoli Dionisio, Marco Ferrara, Campardo Giovanni
-
Patent number: 9203919
Abstract: In one embodiment, one or more first computing devices receive updated values for user data associated with a plurality of users; and for each of the user data for which an updated value has been received, determine one or more second systems that each have subscribed to be notified when the value of the user datum is updated and each have a pre-established relationship with the user associated with the user datum; and push notifications to the second systems indicating that the value of the user datum has been updated without providing the updated value for the user datum to the second systems.
Type: Grant
Filed: July 11, 2014
Date of Patent: December 1, 2015
Assignee: Facebook, Inc.
Inventors: Wei Zhu, Ray C. He, Luke Jonathan Shepard
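The notification flow — tell subscribers that a datum changed, but never include the new value, which they must fetch through their own authorized channel — can be sketched as (the dict shapes and function name are illustrative assumptions):

```python
def notify_updates(updated_fields, subscriptions):
    """For each updated user datum, notify every subscribed system that
    the value changed, deliberately omitting the new value itself.

    `updated_fields` maps (user, field) -> new value;
    `subscriptions` maps (user, field) -> list of subscriber systems."""
    notifications = []
    for (user, field), _new_value in updated_fields.items():
        for system in subscriptions.get((user, field), []):
            notifications.append((system, user, field))  # no value included
    return notifications
```

Omitting the value keeps the push channel from becoming a data-disclosure path: a subscriber learns only that something changed, and authorization is enforced at fetch time.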