VOLATILE MEMORY ACCESS VIA SHARED BITLINES
A memory includes an array of memory cells that form rows and columns. The rows of the array include memory cell pairs. The memory cells may include two cross-coupled inverters and two pass-devices that couple to alternate sides of the cross-coupled inverters. The two memory cells of a memory cell pair share a common intra-pair bitline. Adjacent memory cell pairs share a common inter-pair bitline. To perform a data read operation on a particular memory cell in a memory cell pair in the rows and columns of the array, wordline drive circuitry transmits wordline activate signals to select both the row for the data read operation and a particular one of the pair of memory cells for the data read operation.
This patent application relates to the U.S. patent application entitled “Single-Ended Volatile Memory Access”, inventors Michael Lee and Bao Truong, Attorney Docket No. AUS920110481US1 (application Ser. No. to be assigned, filed on the same day as the subject patent application, and assigned to the same assignee), the disclosure of which is incorporated herein by reference in its entirety.
BACKGROUND
The disclosures herein relate generally to volatile memory, and more specifically, to writing information to and reading information from static random access memory (SRAM). Writing to and reading information from SRAM expends valuable energy. Reduction of such energy expenditures by SRAM is desirable. One use of SRAM is in an information handling system (IHS) to store information in an SRAM array.
BRIEF SUMMARY
In one embodiment, a memory array is disclosed that includes a plurality of memory cells configured in rows and columns. The memory array includes a first pair of memory cells situated in a first row of the memory array. The first pair of memory cells includes first and second memory cells that couple to a first intra-pair bitline between the first and second cells to share the first intra-pair bitline. The first and second memory cells also couple to first and second opposed inter-pair bitlines, respectively. The first and second memory cells are configured to couple via the first and second opposed inter-pair bitlines to second and third pairs of memory cells, respectively, in the first row of the memory array.
In another embodiment, an information handling system (IHS) is disclosed. The IHS includes a processor. The IHS also includes a memory that is coupled to the processor. The memory includes a plurality of memory cells configured in a memory array of rows and columns. The memory includes a first pair of memory cells situated in a first row of the memory array. The first pair of memory cells includes first and second memory cells that couple to a first intra-pair bitline between the first and second cells to share the first intra-pair bitline. The first and second memory cells also couple to first and second opposed inter-pair bitlines, respectively. The first and second memory cells are configured to couple via the first and second opposed inter-pair bitlines to second and third pairs of memory cells, respectively, in the first row of the memory array.
In yet another embodiment, a method is disclosed that includes configuring a plurality of memory cells in rows and columns, wherein a first pair of memory cells is situated in a first row of the memory array. The first pair of memory cells includes first and second memory cells that couple to a first intra-pair bitline between the first and second cells to share the first intra-pair bitline. The first and second memory cells also couple to first and second opposed inter-pair bitlines, respectively. The method also includes sharing the first intra-pair bitline for writing and reading operations of the first pair of memory cells. The method further includes sharing the first opposed inter-pair bitline with a second pair of memory cells adjacent the first pair of memory cells in the first row for writing and reading operations of the first pair of memory cells and the second pair of memory cells. The method still further includes sharing the second opposed inter-pair bitline with a third pair of memory cells adjacent the first pair of memory cells in the first row for writing and reading operations of the first pair of memory cells and the third pair of memory cells.
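The sharing arrangement recited in the method above can be illustrated with a short sketch (a hypothetical Python model, not part of the disclosure): because each pair of cells shares one intra-pair bitline and adjacent pairs share inter-pair bitlines at their boundaries, a row of paired cells needs roughly half the bitlines of a conventional layout.

```python
def bitlines_shared(num_pairs):
    """Bitlines needed for one row when every boundary between
    adjacent cells carries a shared bitline: each pair shares an
    intra-pair bitline between its two cells, and adjacent pairs
    share an inter-pair bitline at their boundary."""
    num_cells = 2 * num_pairs
    # A chain of num_cells cells has num_cells + 1 cell boundaries,
    # and each boundary carries exactly one bitline.
    return num_cells + 1

def bitlines_unshared(num_pairs):
    """Conventional layout for comparison: every cell has its own
    dedicated true/complement bitline pair."""
    return 4 * num_pairs
```

For four pairs, the shared scheme uses 9 bitlines versus 16 in the unshared case, and the ratio approaches one half as the row widens.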
The appended drawings illustrate only exemplary embodiments of the invention and therefore do not limit its scope because the inventive concepts lend themselves to other equally effective embodiments.
In one embodiment, the disclosed memory circuit includes an array of memory cells wherein memory cells in adjacent columns share both complement (bl′) and true (bl) bitlines on boundaries between cells to provide energy saving during a read operation. More particularly, the two memory cells of a particular memory cell pair share an intra-cell bitline between the two cells of that pair. Moreover, adjacent pairs of memory cells may share an inter-cell bitline between the pairs of memory cells. A read/write head provides robust differential writing of data to the memory cells of the memory cell pairs, as well as energy efficient reading of memory cell data. Memory cells may be manufactured on integrated circuit wafers. Overlapping memory cells slightly may provide desirably efficient use of wafer area while leaving sufficient space for placing wordline pairs in each row of memory.
Bitlines bl 125 and blb 130, and also wordlines wl_A 190 and wl_B 195, couple to memory cell 101. The designations “bl” and “blb” indicate that these bitlines are differential bitlines that complement one another. In one embodiment, bitline bl 125 is a true bitline and bitline blb 130 is a complement bitline. In other embodiments, the roles of bitlines 125 and 130 may reverse. Bitlines bl 165 and blb 170, and also wordlines wl_A 190 and wl_B 195, couple to memory cell 102. Bitlines blb 130 and blb 170 couple to respective downstream output gates 103 and 103′. In one embodiment, data output gates 103 and 103′ function as evaluation gates for the data content of memory cells 101 and 102. In actual practice, gates 103 and 103′ may be implemented as two inverters, wherein one inverter couples to bitline 130 and the other inverter couples to bitline 170. Bitlines blb 130 and blb 170 are corresponding bitlines of SRAM memory cells 101 and 102 because they each exhibit the same logic convention in their respective SRAM cells. In a single-ended read operation, data output gate 103 senses bitline blb 130 and data output gate 103′ senses bitline blb 170, each receiving the complement of the logic value that cell 101 or cell 102 stores, depending on which wordline, wl_A 190 or wl_B 195, activates during the read operation. Gate 103 or gate 103′ thus acts as an evaluation gate for the data contents of the selected memory cell and, in one embodiment, outputs the data content of the addressed memory cell on output data_out 104 or data_out 104′, respectively.
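The single-ended read path just described reduces to simple logic, sketched below as a hypothetical Python model (the cell, bitline, and gate names refer to the reference numerals above):

```python
def single_ended_read(stored_bit, wordline_active):
    """Model of one cell (e.g. cell 101) read through its inverting
    evaluation gate (e.g. gate 103). The pass device places the
    complement of the stored bit on the blb bitline when the wordline
    activates; otherwise blb stays at its precharged logic 1.
    Returns (blb, data_out)."""
    blb = (1 - stored_bit) if wordline_active else 1
    data_out = 1 - blb  # inverter evaluation gate restores the true value
    return blb, data_out
```

Reading a cell that stores logic 1 pulls blb to 0 and the evaluation gate restores a 1 at data_out; an unselected cell leaves its bitline at the precharged logic 1.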
More particularly, the output data bit at data_out 104 corresponds to the stored bit in memory cell 101 when the wordline wl_A 190 activates for a single-ended read operation. Alternatively, the output data bit at data_out 104′ corresponds to the stored bit in memory cell 102 when wordline wl_B 195 activates for a single-ended read operation. Bitline drive circuit (230 in
To store a data bit in memory cell 101 during a differential write operation, wordline drive circuit (240 in
When wordline drive circuit (240 in
Memory cell arrays may include a single row or multiple rows with multiple columns. The particular aspect ratio of the rows and columns may depend on the application for the memory cell array and other considerations such as the energy needed to pre-charge bitlines and timing considerations. At least two columns of memory cells form the exemplary embodiment of the disclosed memory circuit.
In memory array 200 of
Returning to
In one embodiment, bitline drive circuit 230 precharges all of the bitlines 210 to the supply voltage (not specifically shown) when memory array 200 is in the quiescent or inactive state. The pre-charge voltage level corresponds to a logic 1. A memory that needlessly causes a memory cell bitline to discharge carries a penalty in wasted energy in the memory array. The disclosed memory array 200 may avoid wasting energy by arranging memory cells in pairs, as exemplified by memory cell pair 101 and 102 of
Gate 103 or gate 103′ senses the state of memory cell 101 or memory cell 102, respectively, by passing the data bit from the selected memory cell to the data output line data_out 104 or 104′ via either bitline 130 or 170. The output data reflects the state of the memory cell that uniquely appears on one of bitlines 130 or 170, as addressed by the corresponding wordline wl_A 190 or wl_B 195. The non-selected bitline remains at logic level 1. More specifically, gates 103 and 103′ couple to input bitlines 130 and 170, respectively. When wordline wl_A 190 activates pass gate 120 of memory cell 101, the complement of the logic state of memory cell 101 appears on bitline 130, while bitline 170 remains in a pre-charged logical 1 state. Alternatively, when wordline wl_B 195 activates pass gate 160, the complement of the state of memory cell 102 appears on bitline 170, while bitline 130 remains in a pre-charged logical 1 state.
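A rough way to see the energy benefit is to count bitline discharges per read. Under the assumption stated above, only the selected cell's complement bitline can leave its precharged state; a minimal sketch (hypothetical model, not from the disclosure):

```python
def read_discharges(stored_bits, selected):
    """Count bitlines that discharge during a single-ended read of
    one cell in a row. Only the selected cell's complement bitline
    can fall from the precharged logic 1, and it does so only when
    the complement of the stored bit is 0, i.e. the cell stores 1."""
    return sum(1 for i, bit in enumerate(stored_bits)
               if i == selected and bit == 1)
```

At most one bitline discharges per read, regardless of row width, whereas a scheme that evaluates every bitline would discharge one line per cell storing a 1.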
TABLE 1 shows the logic states or “truth table” of gate 103 when gate 103 is an inverter.
In reading the contents of memory cell 101 of row 1, bitline 170 is in its pre-charged state (logical 1), which corresponds to the TABLE 1 entries having a logic 1 in the bitline 170 column. Wordline wl_A 190 activates pass device 120, which reflects the complement of the memory contents of memory cell 101 to bitline 130. If the memory cell contains a logic 1, then the complement 0 appears on bitline 130. From TABLE 1, a logic level 1 then appears at the data output data_out 104. Similarly, if memory cell 101 contains a logic level 0, then the complement logic value 1 appears on bitline 130, which results in a logic level 0 appearing at the data output data_out 104.
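The TABLE 1 behavior reduces to the truth table of the two inverter evaluation gates. A small illustrative model (the names are the reference numerals from the figures, and this sketch is not part of the disclosure):

```python
def evaluation_gates(blb_130, blb_170):
    """Inverter evaluation gates 103 and 103' applied to the two
    complement bitlines; returns (data_out, data_out')."""
    return 1 - blb_130, 1 - blb_170

# Worked example from the text: cell 101 stores logic 1 and wordline
# wl_A is active, so complement 0 appears on bitline 130 while
# bitline 170 stays precharged at 1.
```

With bitline 130 at 0 and bitline 170 at 1, data_out reads 1 (the stored bit) while data_out′ reads 0, a discarded value since cell 102 is not addressed.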
In a similar manner, to read the contents of memory cell 102, the data bit stored in memory cell 102 reflects in the data output data_out 104′ when wordline wl_B 195 activates pass device 160 of memory cell 102. In this case, bitline blb 130 stays at a logic 1 level. Memory cell 102 content of logic level 1 appears as the complement 0 on bitline blb 170, which appears as a logic level 1 at data output data_out 104′. Similarly, memory cell 102 content of logic level 0 appears as the complement 1 on bitline blb 170, which appears as a logic level 0 at data output data_out 104′.
In one embodiment, since one of the two wordlines wl_A 190 and wl_B 195 uniquely activates only one of the single-ended bitlines blb 130 and blb 170 of the pair of memory cells 101, 102, this action effectively uniquely addresses one of memory cells 101, 102 in the addressed row. This approach may avoid the use of multiplexer circuitry otherwise needed to distinguish which bitline is addressed in other methods of reading cells. Thus, the disclosed memory array 200 may reduce the discharge of energy on unneeded bitlines. In this embodiment, gates 103 and 103′ act as evaluation gates that sense respective single-ended read bitlines of a pair of cells and pass the data from the memory cell selected by the wordline. In a preferred embodiment, gate 103 is an inverter gate.
Exemplary memory cells 101 and 102 each include a true memory bitline bl and a complement memory bitline blb. Although the read operation of one embodiment operates on complement memory bit lines, a memory read operation may be configured to sense the true bitlines with substantially equal results to the scenario wherein a memory read operation senses the complement bit lines blb. Sensing true bitlines produces the complement of the memory cell logic state at the data output.
In summary, for one embodiment of the disclosed methodology, Table 2 below shows the state changes of the bitlines of SRAM cells 101 and 102 in row 1 of memory array 200 when wordline drive circuit 240 addresses one of cells 101 and 102. Bitline state changes consume energy. The disclosed methodology may reduce bitline state changes. For discussion purposes, assume that wordline circuit 240 addresses memory cell 101 to read the data contents of that SRAM cell. In this scenario, memory cell 101 is the addressed cell and SRAM cell 102 is the unaddressed cell of a memory cell pair.
When wordline circuit 240 addresses memory cell 101 for a read operation, the blb bitline 130 of addressed SRAM cell 101 may change state, depending on the memory content of the cell, and drives evaluation gate 103. This state change on the blb bitline of addressed SRAM cell 101, and also the possible state change on the bl line of the unaddressed cell, may consume energy. However, in one embodiment, by virtue of the effective termination of bitline 165 at 165A for read operations (
In summary, the choice of activating either wordline wl_A or wordline wl_B selects which one of the memory cells of column A or column B, respectively, outputs data to its respective complement bitline blb. The evaluation gates process the respective complement bitlines blb, evaluating and outputting the data from the selected memory cell to the respective data_out line.
Returning now to
During a differential write operation to memory cell 501 of SRAM cell pair 501, 502, wordline wl_a 595 activates pass devices 515 and 520 via nodes 910 and 915, respectively. More particularly, a read/write head 700 (discussed below with reference to
During a differential write operation to memory cell 501 of an SRAM memory cell pair 501 and 502, addressing circuitry (not shown) transmits an enable signal on write enable input 710A of AND gate 710 to enable driver 705, while input 702 enables driver 720. For this write operation to memory cell 501 to proceed, the addressing circuitry (not shown) also transmits an enable signal to the remaining input 710B of gate 710 (and also to wordline wl_a 595), thus enabling gate 710. Driver 705 sends a data bit on input 701 to bitline bl 525. Simultaneously, inverter 725 complements (inverts) the data bit and drives the complement of the data bit through the write-enabled driver 720 onto bitline bl′ 530 for a robust write operation to memory cell 501 through the enabled pass devices 515 and 520.
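The gated differential write path can be sketched the same way (a hypothetical Python model of the AND-gated drivers and the complementing inverter described above, using the reference numerals from the figures):

```python
def differential_write(data_bit, write_enable, wordline_select):
    """Model of the write path: the AND of the write-enable and
    wordline-select signals enables the drivers; the true driver
    places the data bit on bl while an inverter drives the
    complement onto bl'. Disabled drivers leave both bitlines at
    their precharged logic 1. Returns (bl, bl_prime)."""
    if not (write_enable and wordline_select):  # AND gate 710
        return 1, 1
    return data_bit, 1 - data_bit  # inverter 725 supplies the complement
```

Driving the bitline pair to opposite levels is what makes the write robust: the cell's cross-coupled inverters are pulled toward the new state from both sides at once.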
For a differential write operation to memory cell 502 of
During a single-ended read operation, the wordline wl_a 595 or wl_b 590 activates pass device 520 or pass device 555, respectively, of
Returning to
Returning again to
Table 3 summarizes the different types of bitline sharing that memory array 600 employs to efficiently write data to, and read data from, the memory array. As seen in Table 3, write operations employ both the disclosed intra-pair bitline sharing and inter-pair bitline sharing, while read operations employ the disclosed single-ended intra-pair bitline sharing.
Returning to
Returning to
Sharing of the bitlines as provided by the exemplary embodiments has the benefit that the memory read operation does not needlessly discharge bitlines associated with memory cells for which the data would be discarded. Practicing the disclosed technology may achieve significant energy savings.
Returning now to
IHS 400 may include a computer program product on digital media 475 such as a CD, DVD or other media. In one embodiment, digital media 475 includes an application 482. A user may load application 482 on nonvolatile storage 445 as application 482′. Nonvolatile storage 445 may store an operating system 481. When IHS 400 initializes, the IHS loads operating system 481 and application 482′ into system memory 420 for execution as operating system 481′ and application 482″. Operating system 481′ governs the operation of IHS 400.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, blocks, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, blocks, operations, elements, components, and/or groups thereof.
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. For example, those skilled in the art will appreciate that the logic sense (logic high (1), logic low (0)) of the apparatus and methods described herein may be reversed and still achieve equivalent results. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.
Claims
1. A memory array, comprising:
- a plurality of memory cells configured in rows and columns;
- a first pair of memory cells situated in a first row of the memory array, the first pair of memory cells including first and second memory cells that couple to a first intra-pair bitline between the first and second cells to share the first intra-pair bitline, the first and second memory cells also coupling to first and second opposed inter-pair bitlines, respectively;
- wherein the first and second memory cells are configured to couple via the first and second opposed inter-pair bitlines to second and third pairs of memory cells, respectively, in the first row of the memory array.
2. The memory array of claim 1, wherein the first and second memory cells share the first intra-pair bitline for both reading and writing operations.
3. The memory array of claim 2, wherein the first and second memory cells share the first intra-pair bitline for single-ended reading operations.
4. The memory array of claim 1, wherein the first pair of memory cells shares the first opposed inter-pair bitline with the second pair of memory cells.
5. The memory array of claim 4, wherein the first pair of memory cells shares the second opposed inter-pair bitline with the third pair of memory cells.
6. The memory array of claim 1, wherein each memory cell of the first, second and third memory cell pairs includes a cross-coupled inverter pair and two pass-devices that couple to alternate sides of the cross-coupled inverter pair.
7. The memory array of claim 1, further comprising first, second and third read/write heads coupled to the first, second and third memory cell pairs, respectively.
8. The memory array of claim 1, wherein the rows and columns of the memory cells overlap and cell pairs exhibit quadrilateral symmetry.
9. The memory array of claim 1, wherein the memory cells comprise static random access memory (SRAM).
10. An information handling system (IHS), comprising:
- a processor;
- a memory, coupled to the processor, the memory including: a plurality of memory cells configured in a memory array of rows and columns; a first pair of memory cells situated in a first row of the memory array, the first pair of memory cells including first and second memory cells that couple to a first intra-pair bitline between the first and second cells to share the first intra-pair bitline, the first and second memory cells also coupling to first and second opposed inter-pair bitlines, respectively; wherein the first and second memory cells are configured to couple via the first and second opposed inter-pair bitlines to second and third pairs of memory cells, respectively, in the first row of the memory array.
11. The IHS of claim 10, wherein the first and second memory cells share the first intra-pair bitline for both reading and writing operations.
12. The IHS of claim 11, wherein the first and second memory cells share the first intra-pair bitline for single-ended reading operations.
13. The IHS of claim 10, wherein the first pair of memory cells shares the first opposed inter-pair bitline with the second pair of memory cells.
14. The IHS of claim 10, wherein the first pair of memory cells shares the second opposed inter-pair bitline with the third pair of memory cells.
15. The IHS of claim 10, wherein each memory cell of the first, second and third memory cell pairs includes a cross-coupled inverter pair and two pass-devices that couple to alternate sides of the cross-coupled inverter pair.
16. The IHS of claim 10, further comprising first, second and third read/write heads coupled to the first, second and third memory cell pairs, respectively.
17. The IHS of claim 10, wherein the rows and columns of the memory cells overlap and cell pairs exhibit quadrilateral symmetry.
18. The IHS of claim 10, wherein the plurality of memory cells comprise static random access memory (SRAM).
19. A method, comprising:
- configuring a plurality of static random access memory (SRAM) cells in rows and columns of a memory array, wherein a first pair of memory cells is situated in a first row of the memory array, the first pair of memory cells including first and second memory cells that couple to a first intra-pair bitline between the first and second cells to share the first intra-pair bitline, the first and second memory cells also coupling to first and second opposed inter-pair bitlines, respectively;
- sharing the first intra-pair bitline for writing and reading operations of the first pair of memory cells;
- sharing the first opposed inter-pair bitline with a second pair of memory cells adjacent the first pair of memory cells in the first row for writing and reading operations of the first pair of memory cells and the second pair of memory cells; and
- sharing the second opposed inter-pair bitline with a third pair of memory cells adjacent the first pair of memory cells in the first row for writing and reading operations of the first pair of memory cells and the third pair of memory cells.
20. The method of claim 19, further comprising accessing, by respective first, second and third write heads, the first, second and third pairs of memory cells.
Type: Application
Filed: Dec 6, 2011
Publication Date: Jun 6, 2013
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION (Armonk, NY)
Inventors: Michael Ju Hyeok Lee (Austin, TX), Bao G. Truong (Austin, TX)
Application Number: 13/312,867
International Classification: G11C 7/00 (20060101);