Patents by Inventor Dejan Vucinic

Dejan Vucinic has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11204705
    Abstract: A memory array controller includes memory media scanning logic to sample a bit error rate of memory blocks of a first memory device. A data management logic may then move data from the first memory device to a second memory device if the bit error rate matches a threshold level. The threshold level is derived from a configurable data retention time parameter for the first memory device. The configurable data retention time parameter may be received from a user or determined utilizing various known machine learning techniques.
    Type: Grant
    Filed: March 5, 2019
    Date of Patent: December 21, 2021
    Assignee: Western Digital Technologies, Inc.
    Inventors: Chao Sun, Pi-Feng Chiu, Dejan Vucinic
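
The entry above describes sampling per-block bit error rates and migrating data once a threshold derived from a configurable retention-time parameter is reached. The sketch below is a minimal, hypothetical illustration of that flow; the device API (block_ids, sample_bit_error_rate, read_block, write_block) and the retention-to-threshold mapping are assumptions, not the patented derivation.

```python
# Minimal sketch (not the patented implementation): sample per-block bit error
# rates and migrate data from a first device to a second once the BER reaches
# a threshold derived from a configurable data-retention-time parameter.

def ber_threshold(retention_days: float, base_ber: float = 1e-6) -> float:
    """Hypothetical mapping: longer required retention -> lower tolerated BER."""
    return base_ber / max(retention_days, 1.0)

def scan_and_migrate(first_dev, second_dev, retention_days: float) -> list[int]:
    """Scan blocks on first_dev and copy any block whose sampled BER meets
    the threshold to second_dev. Returns the migrated block indices."""
    threshold = ber_threshold(retention_days)
    migrated = []
    for block_id in first_dev.block_ids():          # assumed device API
        ber = first_dev.sample_bit_error_rate(block_id)
        if ber >= threshold:
            data = first_dev.read_block(block_id)
            second_dev.write_block(block_id, data)
            migrated.append(block_id)
    return migrated
```

As the abstract notes, the retention parameter feeding the threshold could come from a user setting or a learned model; the simple division above is only a stand-in.
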
  • Publication number: 20210365333
    Abstract: In some implementations, the present disclosure relates to a method. The method includes obtaining a set of weights for a neural network comprising a plurality of nodes and a plurality of connections between the plurality of nodes. The method also includes identifying a first subset of weights and a second subset of weights based on the set of weights. The first subset of weights comprises weights that are used by the neural network. The second subset of weights comprises weights that are prunable. The method further includes storing the first subset of weights in a first portion of a memory. A first error correction code is used for the first portion of the memory. The method further includes storing the second subset of weights in a second portion of the memory. A second error correction code is used for the second portion of the memory. The second error correction code is weaker than the first error correction code.
    Type: Application
    Filed: August 2, 2021
    Publication date: November 25, 2021
    Inventors: Chao Sun, Yan Li, Dejan Vucinic
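
As a rough illustration of the two-region storage scheme in the application above, the following sketch partitions weights by a magnitude heuristic and writes each subset to a memory region assumed to carry a stronger or weaker error correction code. The region objects, their write API, and the pruning rule are placeholders rather than the claimed method.

```python
# Sketch: weights the network relies on go to a region protected by a stronger
# ECC; prunable (near-zero) weights go to a region with a weaker, cheaper ECC.

def partition_weights(weights, prune_threshold=1e-3):
    """Split weights into (used, prunable) by magnitude -- a common pruning
    heuristic used here purely for illustration."""
    used = [(i, w) for i, w in enumerate(weights) if abs(w) >= prune_threshold]
    prunable = [(i, w) for i, w in enumerate(weights) if abs(w) < prune_threshold]
    return used, prunable

def store_weights(weights, strong_ecc_region, weak_ecc_region):
    """Write each subset to its region; the regions are assumed to apply the
    first (stronger) and second (weaker) error correction codes respectively."""
    used, prunable = partition_weights(weights)
    strong_ecc_region.write(used)      # assumed region API
    weak_ecc_region.write(prunable)
```
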
  • Patent number: 11165717
    Abstract: Embodiments disclosed herein generally relate to the use of Network-on-Chip architecture for solid state memory structures, both volatile and non-volatile, which provide for the access of memory storage blocks via a router. As such, data may be sent to and/or from the memory storage blocks as data packets on the chip. The Network-on-Chip architecture may further be utilized to interconnect unlimited numbers of memory cell matrices, spread on a die, thus allowing for reduced latencies among matrices, selective power control, unlimited memory density growth without major latency penalties, and reduced parasitic capacitance and resistance. Other benefits may include improved signal integrity, larger die areas available to implement memory arrays, and higher frequency of operation.
    Type: Grant
    Filed: October 26, 2015
    Date of Patent: November 2, 2021
    Assignee: Western Digital Technologies, Inc.
    Inventors: Zvonimir Z. Bandic, Luis Cargnini, Dejan Vucinic
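
The Network-on-Chip entry above is architectural, but a toy model can show the basic idea of memory accesses traveling as packets addressed to a memory cell matrix behind an on-chip router. Everything below (the packet fields, the router behavior, the dict-backed matrices) is invented for illustration and is not the patented design.

```python
# Toy model of packetized access to memory blocks behind an on-chip router:
# read/write requests travel as packets addressed to a (matrix, block) pair.

from dataclasses import dataclass

@dataclass
class MemPacket:
    op: str            # "read" or "write" (or "ack"/"data" in replies)
    matrix_id: int     # which memory cell matrix on the die
    block_addr: int    # block within that matrix
    payload: bytes = b""

class Router:
    def __init__(self, matrices):
        self.matrices = matrices   # matrix_id -> dict acting as a block store

    def route(self, pkt: MemPacket) -> MemPacket:
        matrix = self.matrices[pkt.matrix_id]
        if pkt.op == "write":
            matrix[pkt.block_addr] = pkt.payload
            return MemPacket("ack", pkt.matrix_id, pkt.block_addr)
        return MemPacket("data", pkt.matrix_id, pkt.block_addr,
                         matrix.get(pkt.block_addr, b""))
```
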
  • Patent number: 11106534
    Abstract: An apparatus is disclosed having a parity buffer having a plurality of parity pages and one or more dies, each die having a plurality of layers in which data may be written. The apparatus also includes a storage controller configured to write a stripe of data across two or more layers of the one or more dies, the stripe having one or more data values and a parity value. When a first data value of the stripe is written, it is stored as the current value in a parity page of the parity buffer, the parity page corresponding to the stripe. For each subsequent data value that is written, an XOR operation is performed with the subsequent data value and the current value of the corresponding parity page and the result of the XOR operation is stored as the current value of the corresponding parity page.
    Type: Grant
    Filed: February 27, 2019
    Date of Patent: August 31, 2021
    Assignee: Western Digital Technologies, Inc.
    Inventors: Chao Sun, Pi-Feng Chiu, Dejan Vucinic
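
A minimal sketch of the running-parity mechanism described above: the first data value of a stripe seeds that stripe's parity page, and each subsequent value is XORed into it. The in-memory dictionary stands in for the hardware parity buffer; class and method names are hypothetical.

```python
# Running parity per stripe: seed with the first value, XOR in each later one.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

class ParityBuffer:
    def __init__(self):
        self.pages = {}            # stripe_id -> current parity value

    def on_write(self, stripe_id: int, value: bytes) -> None:
        if stripe_id not in self.pages:
            self.pages[stripe_id] = value          # first value seeds parity
        else:
            self.pages[stripe_id] = xor_bytes(self.pages[stripe_id], value)

    def parity(self, stripe_id: int) -> bytes:
        return self.pages[stripe_id]
```

Once every data value of a stripe has been written, parity(stripe_id) holds the value to be stored as the stripe's parity.
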
  • Patent number: 11080152
    Abstract: In some implementations, the present disclosure relates to a method. The method includes obtaining a set of weights for a neural network comprising a plurality of nodes and a plurality of connections between the plurality of nodes. The method also includes identifying a first subset of weights and a second subset of weights based on the set of weights. The first subset of weights comprises weights that are used by the neural network. The second subset of weights comprises weights that are prunable. The method further includes storing the first subset of weights in a first portion of a memory. A first error correction code is used for the first portion of the memory. The method further includes storing the second subset of weights in a second portion of the memory. A second error correction code is used for the second portion of the memory. The second error correction code is weaker than the first error correction code.
    Type: Grant
    Filed: May 15, 2019
    Date of Patent: August 3, 2021
    Assignee: Western Digital Technologies, Inc.
    Inventors: Chao Sun, Yan Li, Dejan Vucinic
  • Publication number: 20210194829
    Abstract: A programmable network switch includes at least one pipeline including a packet parser configured to parse packets, and a plurality of ports for communication with network devices including a plurality of Data Storage Devices (DSDs). A packet comprising a write command is received to store data in a DSD of the plurality of DSDs, and an identifier generated for the data is compared to a plurality of identifiers generated for data stored in the plurality of DSDs. It is determined whether to send the write command to store the data to the DSD based on whether the generated identifier matches an identifier of the plurality of identifiers. In one aspect, the data to be stored for the write command is extracted from the packet using a pipeline of the programmable network switch, and at least a portion of the extracted data is used to generate the identifier for the data.
    Type: Application
    Filed: December 21, 2019
    Publication date: June 24, 2021
    Inventors: Chao Sun, Pietro Bressana, Dejan Vucinic
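
A hypothetical sketch of the switch-side deduplication check described above, using a SHA-256 digest of the extracted write payload as the identifier and a simple lookup table of identifiers for data already stored; the actual identifier generation and forwarding path in the application may differ.

```python
# Sketch: forward a write command only if no stored data shares its identifier.

import hashlib

class DedupSwitch:
    def __init__(self):
        self.known_ids = {}        # identifier -> DSD holding that data

    def handle_write(self, payload: bytes, target_dsd: str, forward) -> bool:
        """Return True if the write was forwarded, False if deduplicated."""
        ident = hashlib.sha256(payload).hexdigest()
        if ident in self.known_ids:
            return False           # duplicate: do not send the write command
        self.known_ids[ident] = target_dsd
        forward(target_dsd, payload)   # assumed send-to-port callable
        return True
```
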
  • Publication number: 20210194830
    Abstract: A programmable network switch includes at least one pipeline including a packet parser configured to parse packets received by the programmable network switch. The programmable network switch further includes a plurality of ports for communication with a plurality of Data Storage Devices (DSDs). Packets comprising commands are received by the programmable network switch to perform at least one of retrieving data from and storing data in the plurality of DSDs. The commands are sent by the programmable network switch to the plurality of DSDs via the plurality of ports, and the use of each port for sending the commands is monitored. According to one aspect, it is determined which port to use to send a command based on the monitored use of at least one port of the plurality of ports.
    Type: Application
    Filed: December 23, 2019
    Publication date: June 24, 2021
    Inventors: Chao Sun, Pietro Bressana, Dejan Vucinic, Huynh Tu Dang
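
A minimal sketch of load-aware port selection for the application above: outstanding commands are counted per port and the least-used port is chosen for the next command. The selection policy and the transmit callable are assumptions, not the claimed mechanism.

```python
# Sketch: monitor per-port use and send each command over the least-used port.

class PortBalancer:
    def __init__(self, ports):
        self.outstanding = {p: 0 for p in ports}

    def pick_port(self) -> int:
        return min(self.outstanding, key=self.outstanding.get)

    def send(self, command, transmit) -> int:
        port = self.pick_port()
        self.outstanding[port] += 1
        transmit(port, command)        # assumed transmit(port, command) callable
        return port

    def on_complete(self, port: int) -> None:
        self.outstanding[port] -= 1
```
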
  • Patent number: 10949115
    Abstract: A Data Storage Device (DSD) includes a flash memory for storing data. Portions of the flash memory are grouped into logical groups based on at least one of a number of Program/Erase (P/E) cycles and a physical level location of the portions of the flash memory. A command performance latency is monitored for each logical group, and at least one polling time for each respective logical group is set based on the monitored command performance latency for the logical group. The at least one polling time indicates a time to wait before checking whether a portion of the flash memory in the logical group has completed a command.
    Type: Grant
    Filed: June 24, 2019
    Date of Patent: March 16, 2021
    Assignee: Western Digital Technologies, Inc.
    Inventors: Chao Sun, Xinde Hu, Dejan Vucinic
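
The polling scheme above can be illustrated with a small sketch that tracks a smoothed command latency per logical group and derives that group's polling time from it. The exponential smoothing rule and the 0.9 factor are illustrative choices, not taken from the patent.

```python
# Sketch: per-group latency history drives how long to wait before polling.

class GroupPoller:
    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha
        self.avg_latency_us = {}     # group_id -> smoothed command latency
        self.poll_time_us = {}       # group_id -> time to wait before polling

    def record_latency(self, group_id: int, latency_us: float) -> None:
        prev = self.avg_latency_us.get(group_id, latency_us)
        avg = (1 - self.alpha) * prev + self.alpha * latency_us
        self.avg_latency_us[group_id] = avg
        self.poll_time_us[group_id] = 0.9 * avg   # poll a bit before the mean

    def polling_time(self, group_id: int, default_us: float = 50.0) -> float:
        return self.poll_time_us.get(group_id, default_us)
```
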
  • Publication number: 20200401339
    Abstract: A Data Storage Device (DSD) includes a flash memory for storing data. Portions of the flash memory are grouped into logical groups based on at least one of a number of Program/Erase (P/E) cycles and a physical level location of the portions of the flash memory. A command performance latency is monitored for each logical group, and at least one polling time for each respective logical group is set based on the monitored command performance latency for the logical group. The at least one polling time indicates a time to wait before checking whether a portion of the flash memory in the logical group has completed a command.
    Type: Application
    Filed: June 24, 2019
    Publication date: December 24, 2020
    Inventors: Chao Sun, Xinde Hu, Dejan Vucinic
  • Publication number: 20200364118
    Abstract: In some implementations, the present disclosure relates to a method. The method includes obtaining a set of weights for a neural network comprising a plurality of nodes and a plurality of connections between the plurality of nodes. The method also includes identifying a first subset of weights and a second subset of weights based on the set of weights. The first subset of weights comprises weights that are used by the neural network. The second subset of weights comprises weights that are prunable. The method further includes storing the first subset of weights in a first portion of a memory. A first error correction code is used for the first portion of the memory. The method further includes storing the second subset of weights in a second portion of the memory. A second error correction code is used for the second portion of the memory. The second error correction code is weaker than the first error correction code.
    Type: Application
    Filed: May 15, 2019
    Publication date: November 19, 2020
    Inventors: Chao Sun, Yan Li, Dejan Vucinic
  • Publication number: 20200351370
    Abstract: A programmable switch includes a plurality of ports for communication with devices on a network. Circuitry of the programmable switch is configured to receive a cache line request from a client on the network to obtain a cache line for performing an operation by the client. A port is identified for communicating with a memory device storing the cache line. The memory device is one of a plurality of memory devices used for a distributed cache. The circuitry is further configured to update a cache directory for the distributed cache based on the cache line request, and send the cache line request to the memory device using the identified port. In one aspect, it is determined whether the cache line request is for modifying the cache line.
    Type: Application
    Filed: November 26, 2019
    Publication date: November 5, 2020
    Inventors: Marjan Radi, Dejan Vucinic
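
A toy model of the in-switch cache directory described above: each cache line request is looked up to find the owning memory device and egress port, the directory state is updated (including whether the request modifies the line), and the request is forwarded. The directory layout, state names, and message fields are assumptions made for illustration.

```python
# Sketch: look up the owning device/port, update the directory, forward the request.

class CacheDirectory:
    def __init__(self, line_to_device, device_to_port):
        self.line_to_device = line_to_device   # cache line addr -> memory device
        self.device_to_port = device_to_port   # memory device -> switch port
        self.state = {}                        # line addr -> (state, last client)

    def handle_request(self, line_addr, client_id, modify, send):
        device = self.line_to_device[line_addr]
        port = self.device_to_port[device]
        # Update the directory before forwarding, as the abstract describes.
        self.state[line_addr] = ("modified" if modify else "shared", client_id)
        send(port, {"line": line_addr, "client": client_id, "modify": modify})
        return port
```
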
  • Publication number: 20200349080
    Abstract: A programmable switch receives a cache line request from a client of a plurality of clients on a network to obtain a cache line. One or more additional cache lines are identified based on the received cache line request and prefetch information. The cache line and the one or more additional cache lines are requested from one or more memory devices on the network. The requested cache line and the one or more additional cache lines are received from the one or more memory devices, and are sent to the client.
    Type: Application
    Filed: August 22, 2019
    Publication date: November 5, 2020
    Inventors: Marjan Radi, Dejan Vucinic
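
A minimal sketch of switch-side prefetching, assuming a simple sequential policy as the "prefetch information": the requested line plus the next few lines are fetched from the memory devices and returned to the client. The fetch and send callables are placeholders for the switch's actual data path.

```python
# Sketch: fetch the requested cache line plus a few sequential neighbors.

def handle_cache_request(line_addr, fetch_line, send_to_client, prefetch_depth=2):
    lines = [line_addr] + [line_addr + i for i in range(1, prefetch_depth + 1)]
    for addr in lines:
        data = fetch_line(addr)        # assumed request to a memory device
        send_to_client(addr, data)
```
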
  • Publication number: 20200310667
    Abstract: Systems and methods are disclosed for determining whether data to be written to a memory should be deduplicated. In some implementations, a method is provided. The method includes determining whether data to be written to a memory should be deduplicated based, at least in part, on status information of a controller and media characteristics of the memory, wherein the status information of the controller indicates a level of resources available for a deduplication operation. In response to determining that the data should be deduplicated, the method further includes determining whether the data is duplicative based on the type of memory to which the data is being written.
    Type: Application
    Filed: March 28, 2019
    Publication date: October 1, 2020
    Inventors: Chao Sun, Qingbo Wang, Dejan Vucinic
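
The two-stage decision in the application above might look roughly like the sketch below: a first check uses controller resource status and media characteristics to decide whether deduplication is worth attempting at all, and only then is a duplicate check performed. The thresholds, the hash-based duplicate check, and the media-cost rule are placeholders, not the claimed criteria.

```python
# Sketch: decide whether to attempt dedup, then check for an actual duplicate.

import hashlib

def should_attempt_dedup(free_resources: float, media_write_cost: float) -> bool:
    """Attempt dedup only when the controller has spare resources or the
    medium is costly enough to write that the check pays off (illustrative)."""
    return free_resources > 0.5 or media_write_cost > 1.0

def write_with_dedup(data: bytes, store, seen_hashes: set,
                     free_resources: float, media_write_cost: float) -> bool:
    """Return True if data was written, False if it was deduplicated away."""
    if should_attempt_dedup(free_resources, media_write_cost):
        digest = hashlib.sha256(data).hexdigest()
        if digest in seen_hashes:
            return False
        seen_hashes.add(digest)
    store(data)                        # assumed write path
    return True
```
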
  • Patent number: 10789003
    Abstract: Systems and methods are disclosed for determining whether data to be written to a memory should be deduplicated. In some implementations, a method is provided. The method includes determining whether data to be written to a memory should be deduplicated based, at least in part, on status information of a controller and media characteristics of the memory, wherein the status information of the controller indicates a level of resources available for a deduplication operation. In response to determining that the data should be deduplicated, the method further includes determining whether the data is duplicative based on the type of memory to which the data is being written.
    Type: Grant
    Filed: March 28, 2019
    Date of Patent: September 29, 2020
    Assignee: Western Digital Technologies, Inc.
    Inventors: Chao Sun, Qingbo Wang, Dejan Vucinic
  • Publication number: 20200285391
    Abstract: A memory array controller includes memory media scanning logic to sample a bit error rate of memory blocks of a first memory device. A data management logic may then move data from the first memory device to a second memory device if the bit error rate matches a threshold level. The threshold level is derived from a configurable data retention time parameter for the first memory device. The configurable data retention time parameter may be received from a user or determined utilizing various known machine learning techniques.
    Type: Application
    Filed: March 5, 2019
    Publication date: September 10, 2020
    Inventors: Chao Sun, Pi-Feng Chiu, Dejan Vucinic
  • Publication number: 20200272540
    Abstract: An apparatus is disclosed having a parity buffer having a plurality of parity pages and one or more dies, each die having a plurality of layers in which data may be written. The apparatus also includes a storage controller configured to write a stripe of data across two or more layers of the one or more dies, the stripe having one or more data values and a parity value. When a first data value of the stripe is written, it is stored as the current value in a parity page of the parity buffer, the parity page corresponding to the stripe. For each subsequent data value that is written, an XOR operation is performed with the subsequent data value and the current value of the corresponding parity page and the result of the XOR operation is stored as the current value of the corresponding parity page.
    Type: Application
    Filed: February 27, 2019
    Publication date: August 27, 2020
    Inventors: Chao Sun, Pi-Feng Chiu, Dejan Vucinic
  • Patent number: 10691537
    Abstract: Techniques are presented for efficiently storing deep neural network (DNN) weights or similar type data sets in non-volatile memory. For data sets, such as DNN weights, where the elements are multi-bit values, bits of the same level of significance from the elements of the data set are formed into data streams. For example, the most significant bits of the data elements are formed into one data stream, the next most significant bits into a second data stream, and so on. The different bit streams are then encoded with differing strengths of error correction code (ECC), with streams corresponding to more significant bits encoded with stronger ECC code than streams corresponding to less significant bits, giving the more significant bits of the data set elements a higher level of protection.
    Type: Grant
    Filed: October 12, 2018
    Date of Patent: June 23, 2020
    Assignee: Western Digital Technologies, Inc.
    Inventors: Chao Sun, Minghai Qin, Dejan Vucinic
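
A small sketch of the bit-plane idea from the patent above: bit i of every 8-bit weight is gathered into stream i, and more significant streams are paired with stronger ECC labels. Real encoding is omitted; the "strong"/"weak" labels only mark where different codes would be applied, and the 4/4 split is arbitrary.

```python
# Sketch: split multi-bit values into bit-plane streams and grade ECC by significance.

def split_bit_planes(values, bits=8):
    """Return a list of bit streams, index 0 = most significant bit plane."""
    planes = []
    for i in range(bits - 1, -1, -1):
        planes.append([(v >> i) & 1 for v in values])
    return planes

def encode_planes(planes, ecc_strengths):
    """Pair each plane with an ECC strength (stronger for more significant
    planes); real ECC encoding would happen here."""
    return [{"ecc": s, "bits": p} for p, s in zip(planes, ecc_strengths)]

# Example: three 8-bit weights, stronger ECC for the top four bit planes.
encoded = encode_planes(split_bit_planes([200, 13, 97]),
                        ["strong"] * 4 + ["weak"] * 4)
```
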
  • Publication number: 20200117539
    Abstract: Techniques are presented for efficiently storing deep neural network (DNN) weights or similar type data sets in non-volatile memory. For data sets, such as DNN weights, where the elements are multi-bit values, bits of the same level of significance from the elements of the data set are formed into data streams. For example, the most significant bits of the data elements are formed into one data stream, the next most significant bits into a second data stream, and so on. The different bit streams are then encoded with differing strengths of error correction code (ECC), with streams corresponding to more significant bits encoded with stronger ECC code than streams corresponding to less significant bits, giving the more significant bits of the data set elements a higher level of protection.
    Type: Application
    Filed: October 12, 2018
    Publication date: April 16, 2020
    Applicant: Western Digital Technologies, Inc.
    Inventors: Chao Sun, Minghai Qin, Dejan Vucinic
  • Patent number: 10552251
    Abstract: Disclosed herein are a device and a method for storing a neural network. The device includes a plurality of memory cells configured to store weights of the neural network. The plurality of memory cells may include one or more faulty cells. The device further includes a processor coupled to the plurality of memory cells. The processor is configured to construct the neural network based on a structure of the neural network and a subset of the weights stored by the plurality of memory cells. The subset of the weights may exclude another subset of the weights stored by one or more memory cells comprising the one or more faulty cells.
    Type: Grant
    Filed: December 12, 2017
    Date of Patent: February 4, 2020
    Assignee: Western Digital Technologies, Inc.
    Inventors: Minghai Qin, Dejan Vucinic, Chao Sun
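
A minimal sketch of the fault-aware reconstruction idea above, assuming the faulty cells are known and the weights they hold are simply excluded (treated as zero) when the network is rebuilt; the two-layer forward pass and the mask representation are purely illustrative.

```python
# Sketch: drop weights stored in faulty cells, then run the rebuilt network.

def usable_weights(weights, faulty_mask):
    """Zero out weights whose cells are marked faulty; the network structure
    itself is unchanged."""
    return [0.0 if bad else w for w, bad in zip(weights, faulty_mask)]

def forward(x, w1, w2, hidden):
    """Tiny dense network: len(x)*hidden weights in w1, hidden weights in w2."""
    h = [max(0.0, sum(x[i] * w1[j * len(x) + i] for i in range(len(x))))
         for j in range(hidden)]
    return sum(h[j] * w2[j] for j in range(hidden))

stored_w1 = [0.5, -0.2, 0.8, 0.1, 0.3, -0.7]
faulty    = [False, True, False, False, False, True]
w1 = usable_weights(stored_w1, faulty)
print(forward([1.0, 2.0], w1, [0.4, -0.6, 0.2], hidden=3))
```
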
  • Publication number: 20190311267
    Abstract: The system described herein can include neural networks with noise-injection layers. The noise-injection layers can enable the neural networks to be trained such that the neural networks are able to maintain their classification and prediction performance in the presence of noisy data signals. Once trained, the parameters from the neural networks with noise-injection layers can be used in the neural networks of systems that include resistive random-access memory (ReRAM), memristors, or phase change memory (PCM), which use analog signals that can introduce noise into the system. The use of ReRAM, memristors, or PCM can enable large-scale parallelism that improves the speed and computational efficiency of neural network training and classification. Using the parameters from the neural networks trained with noise-injection layers enables the neural networks to make robust predictions and calculations in the presence of noisy data.
    Type: Application
    Filed: June 28, 2018
    Publication date: October 10, 2019
    Inventors: Minghai Qin, Dejan Vucinic
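
A small sketch of a noise-injection layer in the spirit of the application above: zero-mean Gaussian noise is added to a layer's activations during training and skipped at inference, so the learned parameters tolerate the analog noise of ReRAM, memristor, or PCM hardware. The noise model and scale are illustrative, not the application's specific formulation.

```python
# Sketch: add Gaussian noise to activations during training only.

import random

def noise_injection_layer(activations, sigma=0.05, training=True):
    """Add zero-mean Gaussian noise during training; pass through at inference."""
    if not training:
        return list(activations)
    return [a + random.gauss(0.0, sigma) for a in activations]

# During training, the layer would be inserted after each linear or conv layer:
hidden = [0.8, -0.1, 0.33]
noisy_hidden = noise_injection_layer(hidden, sigma=0.05, training=True)
```
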