Patents by Inventor Dejan Vucinic
Dejan Vucinic has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11573718
Abstract: A network device includes at least one control path port and a plurality of data path ports configured to communicate on a network. A connection request is received from a host via a control path port, and a resource of the network device is allocated to the host. A data path port is determined from among the plurality of data path ports for communication between the host and the allocated resource. An indication of the determined data path port is sent to the host via the control path port for communication on a data path between the host and the allocated resource. In one aspect, a network interface includes at least one control path port and a first plurality of data path ports configured to communicate on a network. A connection request is received from a host via a control path port, and a locally connected device is allocated to the host.
Type: Grant
Filed: February 12, 2021
Date of Patent: February 7, 2023
Assignee: Western Digital Technologies, Inc.
Inventors: Qingbo Wang, Martin Lueker-Boden, Dejan Vucinic
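The control-path/data-path split described in this abstract can be sketched as follows. This is a minimal illustration, not the patented implementation; the class, the first-fit resource allocation, and the least-loaded port choice are assumptions for the sketch.

```python
# Hypothetical sketch: a connection request arrives on the control path,
# a resource is allocated, and a data path port is chosen and indicated
# back to the host. Names are illustrative, not from the patent.

class NetworkDevice:
    def __init__(self, data_ports, resources):
        self.data_ports = list(data_ports)   # ports usable for data paths
        self.resources = list(resources)     # e.g. locally connected devices
        self.assignments = {}                # host -> (resource, data port)

    def connect(self, host):
        """Handle a connection request received via the control path port."""
        if not self.resources:
            raise RuntimeError("no free resource")
        resource = self.resources.pop(0)     # allocate a resource to the host
        # choose the data path port currently serving the fewest hosts
        load = {p: 0 for p in self.data_ports}
        for _, port in self.assignments.values():
            load[port] += 1
        port = min(self.data_ports, key=lambda p: load[p])
        self.assignments[host] = (resource, port)
        # the indication sent to the host over the control path
        return {"resource": resource, "data_port": port}

dev = NetworkDevice(data_ports=[1, 2], resources=["ssd0", "ssd1"])
reply = dev.connect("hostA")
```

A second `connect` call would receive the remaining resource and the other, now less-loaded, data path port.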
-
Patent number: 11546272
Abstract: Embodiments disclosed herein generally relate to the use of Network-on-Chip architecture for solid state memory structures which provide for the access of memory storage blocks via a router. As such, data may be sent to and/or from the memory storage blocks as data packets on the chip. The Network-on-Chip architecture may further be utilized to interconnect unlimited numbers of memory cell matrices, spread on a die, thus allowing for reduced latencies among matrices, selective power control, unlimited memory density growth without major latency penalties, and reduced parasitic capacitance and resistance. Other benefits may include improved signal integrity, larger die areas available to implement memory arrays, and higher frequency of operation.
Type: Grant
Filed: September 27, 2021
Date of Patent: January 3, 2023
Assignee: Western Digital Technologies, Inc.
Inventors: Zvonimir Z. Bandic, Luis Cargnini, Dejan Vucinic
-
Patent number: 11537293
Abstract: A data storage device includes a memory device that includes a plurality of zones of a zoned namespace and a controller coupled to the memory device. During operation, the controller maintains a window-based read and write monitor data structure to determine the read density and write density of each of the zones. The read density and write density are utilized to determine a cost for allocating wear leveling data for each zone. Based on the cost and the available storage class memory capacity, data is moved in a data management operation to either the storage class memory or the zone with the lowest cost. The host device is informed of the storage class memory usage for future data management operations.
Type: Grant
Filed: February 18, 2021
Date of Patent: December 27, 2022
Assignee: Western Digital Technologies, Inc.
Inventors: Chao Sun, Xinde Hu, Dejan Vucinic
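The cost-based placement decision described in this abstract can be sketched as follows. The linear weighting of read and write density is an assumption for illustration; the patent does not specify the cost formula.

```python
# Hypothetical sketch of the window-based cost computation: hot zones
# (high read/write density) cost more to hold wear-leveling data.
# Weights and the SCM-first policy are assumptions, not from the patent.

def zone_costs(read_density, write_density, w_read=1.0, w_write=2.0):
    """Per-zone cost for allocating wear-leveling data."""
    return [w_read * r + w_write * w
            for r, w in zip(read_density, write_density)]

def place_data(read_density, write_density, scm_free_bytes, data_bytes):
    """Move data to storage class memory (SCM) if it fits there,
    otherwise to the lowest-cost zone."""
    costs = zone_costs(read_density, write_density)
    if data_bytes <= scm_free_bytes:
        return ("scm", None)
    lowest = min(range(len(costs)), key=lambda z: costs[z])
    return ("zone", lowest)

# zone 1 is the coldest, so it wins when SCM is full
target = place_data([5, 1, 9], [2, 0, 7], scm_free_bytes=0, data_bytes=4096)
```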
-
Publication number: 20220407625
Abstract: A programmable switch includes a plurality of ports for communicating with a plurality of network devices. A packet for a distributed system is received via a port and at least one indicator is identified in the received packet. Reliability metadata associated with a network device used for the distributed system is generated using the at least one indicator. The generated reliability metadata is sent to a controller for the distributed system for predicting or determining a reliability of at least one of the network device and a communication link for the network device and the programmable switch.
Type: Application
Filed: June 21, 2021
Publication date: December 22, 2022
Inventors: Marjan Radi, Dejan Vucinic
-
Publication number: 20220398037
Abstract: Certain aspects of the present disclosure provide techniques for performing compute in memory (CIM) computations. A device comprises a CIM module configured to apply analog weights to input data using multiply-accumulate operations to generate an output. The device further comprises a digital weight storage unit configured to store digital weight references, wherein a digital weight reference corresponds to an analog weight of the analog weights. The device also comprises a device controller configured to program the analog weights to the CIM module, cause the CIM module to process the input data, and reprogram one or more analog weights that are degraded. The digital weight references in the digital weight storage unit are populated with values from a host processing device. Degraded analog weights in the CIM module are reprogrammed based on the corresponding digital weight references from the digital weight storage unit without reference to the host processing device.
Type: Application
Filed: June 14, 2021
Publication date: December 15, 2022
Applicant: Western Digital Technologies, Inc.
Inventors: Chao Sun, Tung Thanh Hoang, Dejan Vucinic
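The reprogram-on-degradation flow described in this abstract can be sketched as follows. The drift tolerance and the representation of weights as plain floats are assumptions for illustration only.

```python
# Hypothetical sketch: analog weights that drift too far from their on-device
# digital references are rewritten from those references, with no host
# involvement. Tolerance value is an assumption, not from the publication.

TOLERANCE = 0.05  # max deviation before a weight counts as degraded

def find_degraded(analog, digital_refs, tol=TOLERANCE):
    """Indices of analog weights that drifted beyond the tolerance."""
    return [i for i, (a, d) in enumerate(zip(analog, digital_refs))
            if abs(a - d) > tol]

def reprogram(analog, digital_refs, tol=TOLERANCE):
    """Rewrite degraded cells from the digital weight storage unit."""
    out = list(analog)
    for i in find_degraded(analog, digital_refs, tol):
        out[i] = digital_refs[i]
    return out

refs = [0.5, -0.25, 0.75]      # digital references populated by the host
drifted = [0.5, -0.10, 0.74]   # second analog cell has drifted badly
repaired = reprogram(drifted, refs)
```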
-
Publication number: 20220398036
Abstract: Certain aspects of the present disclosure provide techniques for performing compute in memory (CIM) computations. A device comprises a CIM module configured to apply a plurality of analog weights to data using multiply-accumulate operations to generate an output. The device further comprises a digital weight storage unit configured to store digital weight references, wherein a digital weight reference corresponds to an analog weight of the plurality of analog weights. The device also comprises a device controller configured to program the plurality of analog weights to the CIM module based on the digital weight references and determine degradation of one or more analog weights. The digital weight references in the digital weight storage unit are populated with values from a host device. Degraded analog weights in the CIM module are replaced with corresponding digital weight references from the digital weight storage unit without reference to the host device.
Type: Application
Filed: June 14, 2021
Publication date: December 15, 2022
Applicant: Western Digital Technologies, Inc.
Inventors: Chao Sun, Tung Thanh Hoang, Dejan Vucinic
-
Publication number: 20220385732
Abstract: A programmable switch includes ports to communicate with nodes including at least one node providing a cache accessible by other nodes. The programmable switch inspects received packets to identify information related to the cache. One or more cache metrics are determined for the cache based on the identified information and at least a portion of the cache is allocated to at least one application executed by at least one of the nodes based on the one or more cache metrics. According to one aspect, a distributed cache is formed of caches stored at nodes. The network controller stores distributed cache metrics and receives cache metrics from programmable switches for the caches to update the distributed cache metrics. Portions of the distributed cache are allocated to different applications based on the updated distributed cache metrics.
Type: Application
Filed: May 26, 2021
Publication date: December 1, 2022
Inventors: Marjan Radi, Dejan Vucinic
-
Patent number: 11503140
Abstract: A programmable network interface for a server includes at least one memory storing connection parameters for previously active Non-Volatile Memory express over Fabric (NVMeoF) connections with different NVMe nodes. An NVMeoF connection request is received from an NVMe node, and it is determined whether the NVMe node is associated with connection parameters stored in the at least one memory. In response to determining that the NVMe node is associated with connection parameters stored in the at least one memory, a new NVMeoF connection is established for communicating with the NVMe node using the stored connection parameters. In one aspect, an address space of the server is partitioned, and an NVMe request queue is assigned to each partition of the address space. At least one address is identified in a received NVMeoF message, and an NVMe request queue is determined for performing an NVMe request included in the NVMeoF message.
Type: Grant
Filed: February 10, 2021
Date of Patent: November 15, 2022
Assignee: Western Digital Technologies, Inc.
Inventors: Marjan Radi, Dejan Vucinic
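The two mechanisms in this abstract — reusing stored connection parameters on reconnect, and mapping addresses to per-partition request queues — can be sketched as follows. The class, the fixed-size partitioning, and the negotiation callback are assumptions for illustration, not the patented design.

```python
# Hypothetical sketch of a programmable network interface that caches
# NVMeoF connection parameters per node and assigns an NVMe request queue
# per address-space partition. Names are illustrative.

class ProgrammableNIC:
    def __init__(self, num_partitions, partition_size):
        self.params = {}                     # node id -> connection parameters
        self.num_partitions = num_partitions
        self.partition_size = partition_size

    def connect(self, node, negotiate):
        """Reuse stored parameters when available; else negotiate and store."""
        if node in self.params:
            return self.params[node], True   # fast path: previously active node
        p = negotiate(node)                  # full connection setup
        self.params[node] = p
        return p, False

    def queue_for(self, address):
        """NVMe request queue assigned to the partition holding `address`."""
        return (address // self.partition_size) % self.num_partitions

nic = ProgrammableNIC(num_partitions=4, partition_size=1 << 20)
p1, cached1 = nic.connect("node1", lambda n: {"mtu": 4096})
p2, cached2 = nic.connect("node1", lambda n: {"mtu": 4096})
queue = nic.queue_for(3 << 20)   # address in the fourth 1 MiB partition
```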
-
Publication number: 20220261165
Abstract: A network device includes at least one control path port and a plurality of data path ports configured to communicate on a network. A connection request is received from a host via a control path port, and a resource of the network device is allocated to the host. A data path port is determined from among the plurality of data path ports for communication between the host and the allocated resource. An indication of the determined data path port is sent to the host via the control path port for communication on a data path between the host and the allocated resource. In one aspect, a network interface includes at least one control path port and a first plurality of data path ports configured to communicate on a network. A connection request is received from a host via a control path port, and a locally connected device is allocated to the host.
Type: Application
Filed: February 12, 2021
Publication date: August 18, 2022
Inventors: Qingbo Wang, Martin Lueker-Boden, Dejan Vucinic
-
Publication number: 20220261160
Abstract: A data storage device includes a memory device that includes a plurality of zones of a zoned namespace and a controller coupled to the memory device. During operation, the controller maintains a window-based read and write monitor data structure to determine the read density and write density of each of the zones. The read density and write density are utilized to determine a cost for allocating wear leveling data for each zone. Based on the cost and the available storage class memory capacity, data is moved in a data management operation to either the storage class memory or the zone with the lowest cost. The host device is informed of the storage class memory usage for future data management operations.
Type: Application
Filed: February 18, 2021
Publication date: August 18, 2022
Inventors: Chao Sun, Xinde Hu, Dejan Vucinic
-
Patent number: 11403529
Abstract: The system described herein can include neural networks with noise-injection layers. The noise-injection layers can enable the neural networks to be trained such that the neural networks are able to maintain their classification and prediction performance in the presence of noisy data signals. Once trained, the parameters from the neural networks with noise-injection layers can be used in the neural networks of systems that include resistive random-access memory (ReRAM), memristors, or phase change memory (PCM), which use analog signals that can introduce noise into the system. The use of ReRAM, memristors, or PCM can enable large-scale parallelism that improves the speed and computational efficiency of neural network training and classification. Using the parameters from neural networks trained with noise-injection layers enables the networks to make robust predictions and calculations in the presence of noisy data.
Type: Grant
Filed: June 28, 2018
Date of Patent: August 2, 2022
Assignee: Western Digital Technologies, Inc.
Inventors: Minghai Qin, Dejan Vucinic
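A noise-injection layer of the kind this abstract describes can be sketched in a few lines of NumPy: noise is added to activations during training so the learned parameters tolerate the analog noise of ReRAM/memristor/PCM hardware, and the layer is an identity at inference. The Gaussian noise model and scale are assumptions for illustration.

```python
# Toy noise-injection layer: zero-mean Gaussian noise during training only.
# The noise distribution and sigma are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility

def noise_injection(x, sigma=0.1, training=True):
    """Add noise to activations while training; pass through at inference."""
    if not training:
        return x
    return x + rng.normal(0.0, sigma, size=x.shape)

x = np.ones(4)
train_out = noise_injection(x, sigma=0.1, training=True)   # perturbed
infer_out = noise_injection(x, training=False)             # identity
```

In a full network, such a layer would sit between linear layers during training; at deployment the trained weights are simply copied to the analog hardware, whose physical noise then plays the role the injected noise played in training.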
-
Publication number: 20220200867
Abstract: A programmable switch includes ports configured to communicate with Non-Volatile Memory express (NVMe) nodes. The programmable switch is configured to store a mapping of NVMe namespaces to physical storage locations located in the NVMe nodes. An NVMe node is determined by the programmable switch to have become inactive, and one or more NVMe namespaces are removed from the mapping that are associated with one or more physical storage locations in the inactive NVMe node. A notification of the one or more removed NVMe namespaces is sent to a network controller. According to one aspect, the network controller stores a global mapping of NVMe namespaces to physical storage locations in the NVMe nodes. The network controller sends at least one notification of the update to at least one other programmable switch to update at least one mapping stored at the at least one other programmable switch.
Type: Application
Filed: February 12, 2021
Publication date: June 23, 2022
Inventors: Marjan Radi, Dejan Vucinic
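The namespace-cleanup flow described in this abstract can be sketched as follows. The data structures and the notification format are assumptions for illustration.

```python
# Hypothetical sketch: when an NVMe node goes inactive, the switch drops
# every namespace backed by that node from its mapping and notifies the
# network controller of the removals. Names are illustrative.

class SwitchMapping:
    def __init__(self):
        # namespace id -> (node id, physical storage location)
        self.mapping = {"ns1": ("nodeA", 0), "ns2": ("nodeB", 0),
                        "ns3": ("nodeA", 1)}
        self.notifications = []   # stand-in for messages to the controller

    def node_inactive(self, node):
        """Remove namespaces on the inactive node; notify the controller."""
        removed = [ns for ns, (n, _) in self.mapping.items() if n == node]
        for ns in removed:
            del self.mapping[ns]
        self.notifications.append(("removed", node, removed))
        return removed

sw = SwitchMapping()
gone = sw.node_inactive("nodeA")
```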
-
Publication number: 20220191306
Abstract: A programmable network interface for a server includes at least one memory storing connection parameters for previously active Non-Volatile Memory express over Fabric (NVMeoF) connections with different NVMe nodes. An NVMeoF connection request is received from an NVMe node, and it is determined whether the NVMe node is associated with connection parameters stored in the at least one memory. In response to determining that the NVMe node is associated with connection parameters stored in the at least one memory, a new NVMeoF connection is established for communicating with the NVMe node using the stored connection parameters. In one aspect, an address space of the server is partitioned, and an NVMe request queue is assigned to each partition of the address space. At least one address is identified in a received NVMeoF message, and an NVMe request queue is determined for performing an NVMe request included in the NVMeoF message.
Type: Application
Filed: February 10, 2021
Publication date: June 16, 2022
Inventors: Marjan Radi, Dejan Vucinic
-
Patent number: 11360899
Abstract: A programmable switch includes a plurality of ports for communication with devices on a network. Circuitry of the programmable switch is configured to receive a cache line request from a client on the network to obtain a cache line for performing an operation by the client. A port is identified for communicating with a memory device storing the cache line. The memory device is one of a plurality of memory devices used for a distributed cache. The circuitry is further configured to update a cache directory for the distributed cache based on the cache line request, and send the cache line request to the memory device using the identified port. In one aspect, it is determined whether the cache line request is for modifying the cache line.
Type: Grant
Filed: November 26, 2019
Date of Patent: June 14, 2022
Assignee: Western Digital Technologies, Inc.
Inventors: Marjan Radi, Dejan Vucinic
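The in-switch directory update described in this abstract can be sketched as follows. The directory layout and the shared/modified states are assumptions borrowed from conventional coherence protocols, not details taken from the patent.

```python
# Hypothetical sketch: the switch tracks each cache line's state and
# sharers, distinguishes read from modify requests, and forwards the
# request out the port facing the owning memory device.

class CacheSwitch:
    def __init__(self, line_to_port):
        self.line_to_port = line_to_port  # cache line -> port of its device
        self.directory = {}               # cache line -> (state, sharers)

    def handle_request(self, client, line, modify):
        """Update the directory, then return the port to forward on."""
        state, sharers = self.directory.get(line, ("invalid", set()))
        if modify:
            state, sharers = "modified", {client}      # exclusive ownership
        else:
            state, sharers = "shared", sharers | {client}
        self.directory[line] = (state, sharers)
        return self.line_to_port[line]

sw = CacheSwitch({0x10: 3})
port = sw.handle_request("clientA", 0x10, modify=False)
state, sharers = sw.directory[0x10]
```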
-
Patent number: 11297010
Abstract: A programmable network switch includes at least one pipeline including a packet parser configured to parse packets received by the programmable network switch. The programmable network switch further includes a plurality of ports for communication with a plurality of Data Storage Devices (DSDs). Packets comprising commands are received by the programmable network switch to perform at least one of retrieving data from and storing data in the plurality of DSDs. The commands are sent by the programmable network switch to the plurality of DSDs via the plurality of ports, and the use of each port for sending the commands is monitored. According to one aspect, it is determined which port to use to send a command based on the monitored use of at least one port of the plurality of ports.
Type: Grant
Filed: December 23, 2019
Date of Patent: April 5, 2022
Assignee: Western Digital Technologies, Inc.
Inventors: Chao Sun, Pietro Bressana, Dejan Vucinic, Huynh Tu Dang
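The monitored-port selection in this abstract can be sketched as follows. Using an in-flight command count as the "use" metric is an assumption; the patent only says port use is monitored.

```python
# Hypothetical sketch: commands go out the currently least-used port,
# and each send is recorded so the monitor stays current.

class StorageSwitch:
    def __init__(self, ports):
        self.in_flight = {p: 0 for p in ports}  # monitored use per port

    def send_command(self, command):
        """Pick the least-used port, record the use, return the port."""
        port = min(self.in_flight, key=self.in_flight.get)
        self.in_flight[port] += 1
        return port

    def command_done(self, port):
        """Called on completion so the monitor reflects freed capacity."""
        self.in_flight[port] -= 1

sw = StorageSwitch(ports=[0, 1, 2])
first = sw.send_command("read")
second = sw.send_command("write")
```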
-
Publication number: 20220052970
Abstract: A programmable switch includes a plurality of ports for communicating with devices on a network. Circuitry of the programmable switch is configured to receive a series of related messages from a first device on the network via at least one port, and determine whether one or more messages of the series of related messages have been received out-of-order based at least in part on a sequence number included in the one or more messages. The series of related messages are sent by the programmable switch to a second device via one or more ports in an order indicated by sequence numbers included in the series of related messages by delaying at least one message. According to one aspect, a network controller selects a programmable switch between the first device and the second device to serve as a message sequencer for reordering out-of-order messages using a stored network topology.
Type: Application
Filed: February 12, 2021
Publication date: February 17, 2022
Inventors: Marjan Radi, Dejan Vucinic
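The message-sequencer behavior described in this abstract can be sketched as follows: out-of-order arrivals are held back, and messages are released to the second device strictly by sequence number. The buffering structure is an assumption for illustration.

```python
# Hypothetical sketch of an in-switch message sequencer: delay any message
# whose predecessors have not arrived, then release runs in order.

class Sequencer:
    def __init__(self, first_seq=0):
        self.next_seq = first_seq
        self.pending = {}   # seq -> message held back (the "delay")

    def receive(self, seq, msg):
        """Return whatever can now be forwarded in sequence order."""
        self.pending[seq] = msg
        out = []
        while self.next_seq in self.pending:
            out.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return out

s = Sequencer()
held = s.receive(1, "B")       # out of order: delayed, nothing forwarded
released = s.receive(0, "A")   # gap filled: both forwarded, in order
```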
-
Publication number: 20220014480
Abstract: Embodiments disclosed herein generally relate to the use of Network-on-Chip architecture for solid state memory structures which provide for the access of memory storage blocks via a router. As such, data may be sent to and/or from the memory storage blocks as data packets on the chip. The Network-on-Chip architecture may further be utilized to interconnect unlimited numbers of memory cell matrices, spread on a die, thus allowing for reduced latencies among matrices, selective power control, unlimited memory density growth without major latency penalties, and reduced parasitic capacitance and resistance. Other benefits may include improved signal integrity, larger die areas available to implement memory arrays, and higher frequency of operation.
Type: Application
Filed: September 27, 2021
Publication date: January 13, 2022
Applicant: Western Digital Technologies, Inc.
Inventors: Zvonimir Z. Bandic, Luis Cargnini, Dejan Vucinic
-
Publication number: 20210406124
Abstract: An apparatus is disclosed having a parity buffer having a plurality of parity pages and one or more dies, each die having a plurality of layers in which data may be written. The apparatus also includes a storage controller configured to write a stripe of data across two or more layers of the one or more dies, the stripe having one or more data values and a parity value. When a first data value of the stripe is written, it is stored as a current value in a parity page of the parity buffer, the parity page corresponding to the stripe. For each subsequent data value that is written, an XOR operation is performed with the subsequent data value and the current value of the corresponding parity page and the result of the XOR operation is stored as the current value of the corresponding parity page.
Type: Application
Filed: August 30, 2021
Publication date: December 30, 2021
Inventors: Chao Sun, Pi-Feng Chiu, Dejan Vucinic
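The running-parity update this abstract describes maps directly to a small sketch: the parity page starts as the stripe's first data value, and every later value is XORed into it. Representing pages as integers is an assumption for illustration.

```python
# Hypothetical sketch of the per-stripe parity accumulation: first write
# seeds the parity page; subsequent writes XOR-accumulate into it.

class ParityBuffer:
    def __init__(self):
        self.pages = {}   # stripe id -> current value of its parity page

    def write(self, stripe, value):
        """Record a data value of the stripe; return the updated parity."""
        if stripe not in self.pages:
            self.pages[stripe] = value        # first data value: store as-is
        else:
            self.pages[stripe] ^= value       # XOR with the current value
        return self.pages[stripe]

buf = ParityBuffer()
buf.write("s0", 0b1100)
buf.write("s0", 0b1010)          # parity now 0b0110
parity = buf.write("s0", 0b0001)
```

Because XOR is associative, the final parity page equals the XOR of every data value in the stripe, which is exactly what is needed to rebuild a lost value from the others.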
-
Publication number: 20210406191
Abstract: A programmable switch includes at least one memory configured to store a cache directory for a distributed cache, and circuitry configured to receive a cache line request from a client device to obtain a cache line. The cache directory is updated based on the received cache line request, and the cache line request is sent to a memory device to obtain the requested cache line. An indication of the cache directory update is sent to a controller for the distributed cache to update a global cache directory. In one aspect, the controller sends at least one additional indication of the update to at least one other programmable switch to update at least one backup cache directory stored at the at least one other programmable switch.
Type: Application
Filed: June 30, 2020
Publication date: December 30, 2021
Inventors: Marjan Radi, Dejan Vucinic
-
Publication number: 20210409506
Abstract: A programmable switch includes ports, and circuitry to receive cache messages for a distributed cache from client devices. The cache messages are queued for sending to memory devices from the ports. Queue occupancy information is generated and sent to a controller that determines, based at least in part on the queue occupancy information, at least one of a cache message transmission rate for a client device, and one or more weights for the queues used by the programmable switch. In another aspect, the programmable switch extracts cache request information from a cache message. The cache request information indicates a cache usage and is sent to the controller, which determines, based at least in part on the extracted cache request information, at least one of a cache message transmission rate for a client device, and one or more weights for queues used in determining an order for sending cache messages.
Type: Application
Filed: June 26, 2020
Publication date: December 30, 2021
Inventors: Marjan Radi, Dejan Vucinic
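The controller-side decision this abstract describes can be sketched as follows. The specific rule — back off the client transmission rate once the fullest queue passes a threshold, and weight emptier queues more heavily — is an assumption for illustration; the publication only says the rate and weights are derived from queue occupancy.

```python
# Hypothetical sketch: from per-queue occupancy (0..1), derive a client
# cache-message transmission rate and per-queue scheduling weights.
# Scaling rule and constants are illustrative assumptions.

def adjust(queue_occupancy, max_rate=1000, threshold=0.5):
    """Return (transmission rate, per-queue weights) for the switch."""
    worst = max(queue_occupancy)
    if worst <= threshold:
        rate = max_rate                # queues healthy: full rate
    else:
        # back off linearly as the fullest queue approaches capacity
        rate = int(max_rate * (1 - worst) / (1 - threshold))
    # emptier queues get proportionally more scheduling weight
    weights = [round(1 - occ, 2) for occ in queue_occupancy]
    return rate, weights

rate, weights = adjust([0.2, 0.75])
```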