Patents by Inventor William S. Wu

William S. Wu has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9176864
    Abstract: A non-volatile memory organized into flash erasable blocks sorts units of data according to a temperature assigned to each unit of data, where a higher temperature indicates a higher probability that the unit will suffer subsequent rewrites due to garbage collection operations. The units of data come either from a host write or from a relocation operation. Depending on their temperatures, the data are sorted either into different storage portions, such as SLC and MLC, or into different operating streams. This allows data of similar temperature to be handled in a manner appropriate to that temperature, minimizing rewrites. Examples of a unit of data include a logical group and a block.
    Type: Grant
    Filed: May 10, 2012
    Date of Patent: November 3, 2015
    Assignee: SanDisk Technologies, Inc.
    Inventors: Sergey Anatolievich Gorobets, Alan David Bennett, Tom Hugh Shippey, Liam Michael Parker, Yauheni Yaromenka, Steven T. Sprouse, William S. Wu, Marielle Bundukin
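    The following minimal Python sketch illustrates the kind of temperature-based routing this abstract describes; it is illustrative only, and the names (LogicalGroup, route_by_temperature) and the HOT_THRESHOLD value are assumptions, not details taken from the patent.
      # Hypothetical sketch of temperature-based routing (names and threshold are assumed).
      from dataclasses import dataclass

      HOT_THRESHOLD = 3  # assumed cutoff: higher temperature means more likely to be rewritten

      @dataclass
      class LogicalGroup:
          address: int
          temperature: int  # raised on host rewrites and relocations, decayed over time

      def route_by_temperature(groups):
          """Sort units of data into SLC (hot) and MLC (cold) streams by temperature."""
          slc_stream, mlc_stream = [], []
          for g in groups:
              (slc_stream if g.temperature >= HOT_THRESHOLD else mlc_stream).append(g)
          return slc_stream, mlc_stream

      if __name__ == "__main__":
          groups = [LogicalGroup(0, 5), LogicalGroup(1, 1), LogicalGroup(2, 4)]
          hot, cold = route_by_temperature(groups)
          print([g.address for g in hot], [g.address for g in cold])  # [0, 2] [1]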
  • Patent number: 9070449
    Abstract: In a flash memory, erase blocks containing shorted or broken word lines may be used, at least in part, to store user data. Such blocks may use parameters different from those used by non-defective blocks, may be subject to different wear leveling, and may store data selected to reduce the number of access operations.
    Type: Grant
    Filed: April 26, 2013
    Date of Patent: June 30, 2015
    Assignee: SanDisk Technologies Inc.
    Inventors: Nian Niles Yang, Uday Chandrasekhar, Yichao Huang, Alexandra Bauche, William S. Wu
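    A minimal sketch, assuming hypothetical names (PartiallyGoodBlock, PAGES_PER_BLOCK) and a simplified one-page-per-word-line layout, of how a block with known-bad word lines might still store user data by skipping the affected pages; this is not the patented method itself.
      # Hypothetical sketch: storing user data in a block with known-bad word lines
      # by skipping the affected pages (layout and parameters are simplified assumptions).
      PAGES_PER_BLOCK = 64

      class PartiallyGoodBlock:
          def __init__(self, block_id, bad_word_lines):
              self.block_id = block_id
              self.bad_pages = set(bad_word_lines)  # pages on shorted or broken word lines
              self.max_pe_cycles = 1000             # assumed: wear-leveled more conservatively
              self.usable_pages = [p for p in range(PAGES_PER_BLOCK) if p not in self.bad_pages]

          def program(self, data_pages):
              """Write data only to pages on good word lines; return the page mapping used."""
              if len(data_pages) > len(self.usable_pages):
                  raise ValueError("data does not fit in the usable portion of this block")
              return dict(zip(self.usable_pages, data_pages))

      if __name__ == "__main__":
          blk = PartiallyGoodBlock(block_id=7, bad_word_lines={3, 10})
          print(sorted(blk.program(["a", "b", "c"])))  # written to pages 0, 1, 2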
  • Publication number: 20140321202
    Abstract: In a flash memory, erase blocks containing shorted or broken word lines may be used, at least in part, to store user data. Such blocks may use parameters different from those used by non-defective blocks, may be subject to different wear leveling, and may store data selected to reduce the number of access operations.
    Type: Application
    Filed: April 26, 2013
    Publication date: October 30, 2014
    Applicant: SanDisk Technologies, Inc.
    Inventors: Nian Niles Yang, Uday Chandrasekhar, Yichao Huang, Alexandra Bauche, William S. Wu
  • Patent number: 8700840
    Abstract: A portion of a nonvolatile memory is partitioned from a main multi-level memory array to operate as a cache. The cache memory is configured to store data at lower capacity per memory cell and with finer granularity of write units than the main memory. In a block-oriented memory architecture, the cache serves multiple functions: it not only improves access speed but is an integral part of a sequential update block system. Decisions to archive data from the cache memory to the main memory depend on the attributes of the data to be archived, the state of the blocks in the main memory portion, and the state of the blocks in the cache portion.
    Type: Grant
    Filed: January 5, 2009
    Date of Patent: April 15, 2014
    Assignee: SanDisk Technologies, Inc.
    Inventors: Alexander Paley, Sergey Anatolievich Gorobets, Eugene Zilberman, Alan David Bennett, Shai Traister, Andrew Tomlin, William S. Wu, Bum Suck So
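    The sketch below gives one illustrative way such an archive decision could be expressed; the thresholds and names (should_archive, min_free_cache, age_limit) are assumptions, not the policy claimed in the patent.
      # Hypothetical sketch of an archive (eviction) decision from a binary cache to the
      # main MLC array; the thresholds and argument names are assumptions, not the policy.
      def should_archive(data_is_sequential, data_age, free_cache_blocks, free_main_blocks,
                         min_free_cache=2, age_limit=100):
          """Return True if cached data should be folded into the main memory."""
          if free_main_blocks == 0:
              return False               # nowhere to archive to
          if free_cache_blocks < min_free_cache:
              return True                # cache under pressure: make room
          if data_is_sequential:
              return True                # long sequential runs belong in the main array
          return data_age > age_limit    # cold random fragments can be archived lazily

      if __name__ == "__main__":
          print(should_archive(False, 20, free_cache_blocks=8, free_main_blocks=50))  # False
          print(should_archive(True, 5, free_cache_blocks=8, free_main_blocks=50))    # True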
  • Publication number: 20120297122
    Abstract: A non-volatile memory organized into flash erasable blocks sorts units of data according to a temperature assigned to each unit of data, where a higher temperature indicates a higher probability that the unit will suffer subsequent rewrites due to garbage collection operations. The units of data come either from a host write or from a relocation operation. Depending on their temperatures, the data are sorted either into different storage portions, such as SLC and MLC, or into different operating streams. This allows data of similar temperature to be handled in a manner appropriate to that temperature, minimizing rewrites. Examples of a unit of data include a logical group and a block.
    Type: Application
    Filed: May 10, 2012
    Publication date: November 22, 2012
    Inventors: Sergey Anatolievich Gorobets, Alan David Bennett, Tom Hugh Shippey, Liam Michael Parker, Yauheni Yaromenka, Steven T. Sprouse, William S. Wu, Marielle Bundukin
  • Publication number: 20120297121
    Abstract: A non-volatile memory organized into flash erasable blocks receives data from host writes by first staging it into logical groups before writing it into the blocks. Each logical group contains data from a predefined set of ordered logical addresses and has a fixed size smaller than a block. The totality of logical groups is obtained by partitioning a logical address space of the host into non-overlapping sub-ranges of ordered logical addresses, each logical group having a predetermined size within a range whose minimum is at least one page and whose maximum allows at least two logical groups to fit in a block, up to an order of magnitude larger than a typical host write. In this way, the excessive garbage collection caused by operating with large logical groups is avoided, while the address space is reduced enough to minimize the size of a caching RAM.
    Type: Application
    Filed: May 10, 2012
    Publication date: November 22, 2012
    Inventors: Sergey Anatolievich Gorobets, William S. Wu, Steven T. Sprouse
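    A minimal sketch of partitioning a logical address space into fixed-size logical groups smaller than a block, using assumed sizes (PAGE_SECTORS, GROUP_PAGES) purely for illustration.
      # Hypothetical sketch of partitioning the host's logical address space into
      # fixed-size logical groups smaller than a block (all sizes are illustrative).
      PAGE_SECTORS = 8                          # assumed page size: 8 sectors (4 KiB)
      GROUP_PAGES = 8                           # assumed logical-group size: 8 pages
      GROUP_SECTORS = PAGE_SECTORS * GROUP_PAGES

      def logical_group_of(lba):
          """Map a logical block address (sector number) to its logical group number."""
          return lba // GROUP_SECTORS

      def group_range(group):
          """Return the contiguous, non-overlapping LBA sub-range covered by a group."""
          start = group * GROUP_SECTORS
          return start, start + GROUP_SECTORS - 1

      if __name__ == "__main__":
          print(logical_group_of(1000))                  # 15
          print(group_range(logical_group_of(1000)))     # (960, 1023)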
  • Patent number: 8244960
    Abstract: A portion of a nonvolatile memory is partitioned from a main multi-level memory array to operate as a cache. The cache memory is configured to store data at lower capacity per memory cell and with finer granularity of write units than the main memory. In a block-oriented memory architecture, the cache serves multiple functions: it not only improves access speed but is an integral part of a sequential update block system. The cache's capacity is dynamically increased by allocating blocks from the main memory in response to demand. Preferably, a block with a higher-than-average endurance count is allocated. The logical addresses of data are partitioned into zones to limit the size of the indices for the cache.
    Type: Grant
    Filed: January 5, 2009
    Date of Patent: August 14, 2012
    Assignee: SanDisk Technologies Inc.
    Inventors: Alexander Paley, Sergey Anatolievich Gorobets, Eugene Zilberman, Alan David Bennett, Shai Traister, Andrew Tomlin, William S. Wu, Bum Suck So
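    An illustrative sketch of growing a cache by borrowing a block from the main pool and preferring a block with a higher-than-average erase count; the function name (grow_cache) and the exact selection rule are assumptions, not the patented mechanism.
      # Hypothetical sketch of growing the binary cache on demand by borrowing a block
      # from the main pool, preferring a block whose erase count is above average.
      def grow_cache(cache_blocks, main_free_blocks):
          """Move one (block_id, erase_count) entry from the main free pool to the cache."""
          if not main_free_blocks:
              raise RuntimeError("no free main blocks available to extend the cache")
          avg = sum(ec for _, ec in main_free_blocks) / len(main_free_blocks)
          # A block erased more often than average is better suited to the cache's
          # binary (lower-stress) use than to continued MLC storage.
          candidates = [b for b in main_free_blocks if b[1] >= avg] or main_free_blocks
          block = max(candidates, key=lambda b: b[1])
          main_free_blocks.remove(block)
          cache_blocks.append(block)
          return block

      if __name__ == "__main__":
          cache, main = [], [(10, 120), (11, 300), (12, 90)]  # (block_id, erase_count)
          print(grow_cache(cache, main))                      # (11, 300)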
  • Patent number: 8094500
    Abstract: A portion of a nonvolatile memory is partitioned from a main multi-level memory array to operate as a cache. The cache memory is configured to store data at lower capacity per memory cell and with finer granularity of write units than the main memory. In a block-oriented memory architecture, the cache serves multiple functions: it not only improves access speed but is an integral part of a sequential update block system. Decisions to write data to the cache memory or directly to the main memory depend on the attributes and characteristics of the data to be written, the state of the blocks in the main memory portion, and the state of the blocks in the cache portion.
    Type: Grant
    Filed: January 5, 2009
    Date of Patent: January 10, 2012
    Assignee: SanDisk Technologies Inc.
    Inventors: Alexander Paley, Sergey Anatolievich Gorobets, Eugene Zilberman, Alan David Bennett, Shai Traister, Andrew Tomlin, William S. Wu, Bum Suck So
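    A minimal sketch of a write-routing decision of the kind this abstract describes, assuming a hypothetical write-unit size (MAIN_WRITE_UNIT) and criteria that are illustrative rather than taken from the patent.
      # Hypothetical sketch of routing an incoming host write either to the binary cache
      # or directly to the MLC main memory; the criteria shown are illustrative only.
      MAIN_WRITE_UNIT = 16   # assumed: sectors in one full MLC write unit

      def route_write(length_sectors, start_sector, free_cache_blocks):
          """Return 'main' for large aligned writes, otherwise 'cache' when it has room."""
          aligned = start_sector % MAIN_WRITE_UNIT == 0
          if aligned and length_sectors >= MAIN_WRITE_UNIT:
              return "main"      # sequential data: write straight to the main array
          if free_cache_blocks > 0:
              return "cache"     # short or unaligned fragments: stage in the cache
          return "main"          # cache full: fall back to the main array

      if __name__ == "__main__":
          print(route_write(64, 0, free_cache_blocks=4))  # main
          print(route_write(3, 5, free_cache_blocks=4))   # cache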
  • Publication number: 20110153912
    Abstract: A method of operating a memory system is presented. The memory system includes a controller and a non-volatile memory circuit, where the non-volatile memory circuit has a first portion, where data is stored in a binary format, and a second portion, where data is stored in a multi-state format. The controller manages the transfer of data to and from the memory system and the storage of data on the non-volatile memory circuit. The method includes receiving a first set of data and storing this first set of data in a first location in the second portion of the non-volatile memory circuit. The memory system subsequently receives updated data for a first subset of the first data set. The updated data is stored in a second location in the first portion of the non-volatile memory circuit, where the controller maintains a logical correspondence between the second location and the first subset of the first set of data.
    Type: Application
    Filed: December 18, 2009
    Publication date: June 23, 2011
    Inventors: Sergey Anatolievich Gorobets, William S. Wu, Shai Traister, Alexander Lyashuk, Steven T. Sprouse
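    The sketch below models the described logical correspondence with a hypothetical HybridStore class; the data structures shown are assumptions for illustration, not the controller's actual tables.
      # Hypothetical sketch of holding updated fragments in a binary (SLC) portion while
      # the original data stays in the multi-state (MLC) portion; the mapping shown is
      # an assumption, not the controller's actual data structures.
      class HybridStore:
          def __init__(self):
              self.mlc = {}          # logical address -> data stored in multi-state format
              self.slc_updates = {}  # logical address -> newer data held in binary format

          def write_initial(self, addr, data):
              self.mlc[addr] = data

          def write_update(self, addr, data):
              # Updates land in the binary portion; the controller keeps the logical
              # correspondence instead of rewriting the multi-state block.
              self.slc_updates[addr] = data

          def read(self, addr):
              return self.slc_updates.get(addr, self.mlc.get(addr))

      if __name__ == "__main__":
          s = HybridStore()
          s.write_initial(100, "v1")
          s.write_update(100, "v2")
          print(s.read(100))  # v2, served from the binary portion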
  • Publication number: 20100174846
    Abstract: A portion of a nonvolatile memory is partitioned from a main multi-level memory array to operate as a cache. The cache memory is configured to store data at lower capacity per memory cell and with finer granularity of write units than the main memory. In a block-oriented memory architecture, the cache serves multiple functions: it not only improves access speed but is an integral part of a sequential update block system. Decisions to archive data from the cache memory to the main memory depend on the attributes of the data to be archived, the state of the blocks in the main memory portion, and the state of the blocks in the cache portion.
    Type: Application
    Filed: January 5, 2009
    Publication date: July 8, 2010
    Inventors: Alexander Paley, Sergey Anatolievich Gorobets, Eugene Zilberman, Alan David Bennett, Shai Traister, Andrew Tomlin, William S. Wu, Bum Suck So
  • Publication number: 20100172180
    Abstract: A portion of a nonvolatile memory is partitioned from a main multi-level memory array to operate as a cache. The cache memory is configured to store data at lower capacity per memory cell and with finer granularity of write units than the main memory. In a block-oriented memory architecture, the cache serves multiple functions: it not only improves access speed but is an integral part of a sequential update block system. Decisions to write data to the cache memory or directly to the main memory depend on the attributes and characteristics of the data to be written, the state of the blocks in the main memory portion, and the state of the blocks in the cache portion.
    Type: Application
    Filed: January 5, 2009
    Publication date: July 8, 2010
    Inventors: Alexander Paley, Sergey Anatolievich Gorobets, Eugene Zilberman, Alan David Bennett, Shai Traister, Andrew Tomlin, William S. Wu, Bum Suck So
  • Publication number: 20100174847
    Abstract: A portion of a nonvolatile memory is partitioned from a main multi-level memory array to operate as a cache. The cache memory is configured to store data at lower capacity per memory cell and with finer granularity of write units than the main memory. In a block-oriented memory architecture, the cache serves multiple functions: it not only improves access speed but is an integral part of a sequential update block system. The cache's capacity is dynamically increased by allocating blocks from the main memory in response to demand. Preferably, a block with a higher-than-average endurance count is allocated. The logical addresses of data are partitioned into zones to limit the size of the indices for the cache.
    Type: Application
    Filed: January 5, 2009
    Publication date: July 8, 2010
    Inventors: Alexander Paley, Sergey Anatolievich Gorobets, Eugene Zilberman, Alan David Bennett, Shai Traister, Andrew Tomlin, William S. Wu, Bum Suck So
  • Patent number: 6598103
    Abstract: A method and apparatus for transferring data between bus agents in a computer system. The present invention includes transmitting a control signal, from a first agent to a second agent, via a first transfer protocol; and, transmitting data corresponding to the control signal, from the first agent to the second agent, via a second transfer protocol. In one embodiment, the control signals are transmitted from the first agent to the second agent via a synchronous transmission with respect to a bus clock; and, the data is transmitted via an asynchronous transmission with respect to the bus clock, which has a data width greater than the synchronous transmission. In addition, in one embodiment of the present invention, the synchronous transmission is a common clock data transfer protocol, and the asynchronous transmission is a source clock data transfer protocol.
    Type: Grant
    Filed: November 6, 2001
    Date of Patent: July 22, 2003
    Assignee: Intel Corporation
    Inventors: Peter D. MacWilliams, William S. Wu, Dilip K. Sampath, Bindi A. Prasad
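    A rough illustrative model, under assumed timing constants (BUS_CLOCK_NS, STROBES_PER_CLOCK), of the contrast the abstract draws between a common-clock control phase and a wider, source-clocked data phase; it is not a description of the actual bus protocol.
      # Hypothetical model contrasting the two transfer styles named in the abstract:
      # control latched on the common bus clock, wider data moved source-synchronously
      # at a higher rate. Timing constants are assumptions, not the bus specification.
      BUS_CLOCK_NS = 10          # assumed common clock period
      STROBES_PER_CLOCK = 4      # assumed: source clocking moves 4 chunks per bus clock

      def common_clock_transfer(control_words):
          """One control word per bus clock edge (synchronous to the bus clock)."""
          return len(control_words) * BUS_CLOCK_NS

      def source_clock_transfer(data_words):
          """Data travels with its own strobe, several transfers per bus clock period."""
          clocks = -(-len(data_words) // STROBES_PER_CLOCK)  # ceiling division
          return clocks * BUS_CLOCK_NS

      if __name__ == "__main__":
          print(common_clock_transfer(["READ line 0x40"]))  # 10 ns control phase
          print(source_clock_transfer(list(range(16))))     # 40 ns for 16 data words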
  • Patent number: 6434692
    Abstract: A high-throughput memory access interface allows higher data transfer rates between a system memory controller and video/graphics adapters than is possible using standard local bus architectures. The interface enables data to be written directly to a peripheral device at either of two selectable speeds. The peripheral device may be a graphics adapter. A signal indicating whether the adapter's write buffers are full is used to determine whether a write transaction to the adapter can proceed. If the transaction cannot proceed at that time, it can be enqueued in the interface.
    Type: Grant
    Filed: February 23, 2001
    Date of Patent: August 13, 2002
    Assignee: Intel Corporation
    Inventors: Norman J. Rasmussen, William S. Wu
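    A minimal sketch of the write path the abstract describes: a write proceeds when the adapter can accept it and is queued in the interface otherwise; the class and method names (AdapterWriteInterface, drain) are assumptions.
      # Hypothetical sketch of the described write path: a write is sent to the adapter
      # at one of two speeds when its buffers have room, and queued in the interface
      # otherwise. Class and method names are assumptions.
      from collections import deque

      class AdapterWriteInterface:
          def __init__(self):
              self.pending = deque()

          def write(self, data, adapter_buffers_full, fast=True):
              """Send the write now if the adapter can accept it, else enqueue it."""
              if adapter_buffers_full:
                  self.pending.append((data, fast))
                  return "enqueued"
              return f"sent ({'fast' if fast else 'slow'} speed)"

          def drain(self):
              """Retry queued writes once the adapter signals that buffer space is free."""
              sent = len(self.pending)
              self.pending.clear()
              return sent

      if __name__ == "__main__":
          iface = AdapterWriteInterface()
          print(iface.write(b"pixels", adapter_buffers_full=False))  # sent (fast speed)
          print(iface.write(b"pixels", adapter_buffers_full=True))   # enqueued
          print(iface.drain())                                       # 1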
  • Publication number: 20020065967
    Abstract: A method and apparatus for transferring data between bus agents in a computer system. The present invention includes transmitting a control signal, from a first agent to a second agent, via a first transfer protocol; and, transmitting data corresponding to the control signal, from the first agent to the second agent, via a second transfer protocol. In one embodiment, the control signals are transmitted from the first agent to the second agent via a synchronous transmission with respect to a bus clock; and, the data is transmitted via an asynchronous transmission with respect to the bus clock, which has a data width greater than the synchronous transmission. In addition, in one embodiment of the present invention, the synchronous transmission is a common clock data transfer protocol, and the asynchronous transmission is a source clock data transfer protocol.
    Type: Application
    Filed: November 6, 2001
    Publication date: May 30, 2002
    Inventors: Peter D. MacWilliams, William S. Wu, Dilip K. Sampath, Bindi A. Prasad
  • Patent number: 6336159
    Abstract: A method and apparatus for transferring data between bus agents in a computer system. The present invention includes transmitting a control signal, from a first agent to a second agent, via a first transfer protocol; and, transmitting data corresponding to the control signal, from the first agent to the second agent, via a second transfer protocol. In one embodiment, the control signals are transmitted from the first agent to the second agent via a synchronous transmission with respect to a bus clock; and, the data is transmitted via an asynchronous transmission with respect to the bus clock, which has a data width greater than the synchronous transmission. In addition, in one embodiment of the present invention, the synchronous transmission is a common clock data transfer protocol, and the asynchronous transmission is a source clock data transfer protocol.
    Type: Grant
    Filed: January 13, 1998
    Date of Patent: January 1, 2002
    Assignee: Intel Corporation
    Inventors: Peter D. MacWilliams, William S. Wu, Dilip K. Sampath, Bindi A. Prasad
  • Patent number: 6266719
    Abstract: A high-throughput memory access interface allows higher data transfer rates between a system memory controller and video/graphics adapters than is possible using standard local bus architectures. The interface enables data to be written directly to a peripheral device at either of two selectable speeds. The peripheral device may be a graphics adapter. A signal indicating whether the adapter's write buffers are full is used to determine whether a write transaction to the adapter can proceed. If the transaction cannot proceed at that time, it can be enqueued in the interface.
    Type: Grant
    Filed: September 6, 2000
    Date of Patent: July 24, 2001
    Assignee: Intel Corporation
    Inventors: Norman J. Rasmussen, William S. Wu
  • Patent number: 6263397
    Abstract: An I/O agent delivers an interrupt message through a chipset to a system bus connected to a number of processors. The interrupt message includes the transaction type and a destination identification. The servicing processor on the system bus matches the destination identification against its own identification to determine whether it is the intended recipient of the interrupt message. The I/O agent writes the data associated with the interrupt into the buffer queue inside the chipset. The chipset automatically flushes the contents of the buffer queue to main memory before the interrupt message is delivered. This delivery mechanism avoids the complexity and delay of handshaking operations between the chipset and the I/O agent.
    Type: Grant
    Filed: December 7, 1998
    Date of Patent: July 17, 2001
    Assignee: Intel Corporation
    Inventors: William S. Wu, Mani Azimi, Stephen Pawlowski, Daniel G. Lau, M. Jayakumar
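    An illustrative sketch of the described ordering, in which buffered data is flushed to main memory before the interrupt message is delivered and only the processor whose identification matches services it; the class and field names are assumptions.
      # Hypothetical sketch of the described ordering: the chipset flushes the buffered
      # data to main memory before the interrupt message (transaction type plus
      # destination identification) is delivered on the system bus. Names are assumed.
      class Chipset:
          def __init__(self, main_memory):
              self.buffer_queue = []
              self.main_memory = main_memory

          def post_data(self, data):
              self.buffer_queue.append(data)

          def deliver_interrupt(self, message, processors):
              # Flush buffered data first, so the interrupt handler sees it in memory.
              self.main_memory.extend(self.buffer_queue)
              self.buffer_queue.clear()
              # Only the processor whose identification matches services the interrupt.
              for cpu in processors:
                  if cpu["id"] == message["destination_id"]:
                      return f"cpu{cpu['id']} handles {message['type']}"
              return "no matching processor"

      if __name__ == "__main__":
          memory = []
          chipset = Chipset(memory)
          chipset.post_data("dma payload")
          print(chipset.deliver_interrupt({"type": "interrupt", "destination_id": 2},
                                          [{"id": 1}, {"id": 2}]))
          print(memory)  # ['dma payload'], flushed before delivery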
  • Publication number: 20010007999
    Abstract: A high-throughput memory access interface allows higher data transfer rates between a system memory controller and video/graphics adapters than is possible using standard local bus architectures. The interface enables data to be written directly to a peripheral device at either of two selectable speeds. The peripheral device may be a graphics adapter. A signal indicating whether the adapter's write buffers are full is used to determine whether a write transaction to the adapter can proceed. If the transaction cannot proceed at that time, it can be enqueued in the interface.
    Type: Application
    Filed: February 23, 2001
    Publication date: July 12, 2001
    Inventors: Norman J. Rasmussen, William S. Wu
  • Patent number: RE40921
    Abstract: A bus agent defers an ordered transaction if the transaction cannot be completed in order. When an ordered transaction is deferred, its visibility to the next ordered transaction is asserted if the agent can guarantee that the deferred transaction and the next ordered transaction will complete in sequential order. This visibility indication allows the bus agent to proceed with the next ordered transaction without waiting for the completion status of the deferred transaction, providing fast processing of ordered transactions.
    Type: Grant
    Filed: October 4, 2001
    Date of Patent: September 22, 2009
    Assignee: Intel Corporation
    Inventors: William S. Wu, Peter D. MacWilliams, Stephen Pawlowski, Muthurajan Jayakumar
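    A minimal sketch of the deferral-with-visibility idea, assuming a hypothetical OrderedBusAgent class; it illustrates the decision, not the bus-level signaling defined in the patent.
      # Hypothetical sketch of deferring an ordered transaction while asserting
      # "visibility" when ordering can still be guaranteed, so the next ordered
      # transaction may start immediately. Data structures are illustrative assumptions.
      class OrderedBusAgent:
          def __init__(self):
              self.deferred = []  # transactions to be completed later, order guaranteed

          def issue(self, txn, can_complete_now, order_guaranteed):
              if can_complete_now:
                  return "completed in order"
              self.deferred.append(txn)
              if order_guaranteed:
                  # Visibility asserted: the next ordered transaction need not wait
                  # for this one's completion status.
                  return "deferred, visible: next transaction may proceed"
              return "deferred, not visible: next transaction must wait"

      if __name__ == "__main__":
          agent = OrderedBusAgent()
          print(agent.issue("write A", can_complete_now=False, order_guaranteed=True))
          print(agent.issue("write B", can_complete_now=True, order_guaranteed=True))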