Patents Examined by Cheng-Yuan Tseng
-
Patent number: 11720509
Abstract: A system includes a fabric switch including a motherboard, a baseboard management controller (BMC), a network switch configured to transport network signals, and a PCIe switch configured to transport PCIe signals; a midplane; and a plurality of device ports. Each of the plurality of device ports is configured to connect a storage device to the motherboard of the fabric switch over the midplane and carry the network signals and the PCIe signals over the midplane. The storage device is configurable in multiple modes based on a protocol established over a fabric connection between the system and the storage device.
Type: Grant
Filed: October 5, 2020
Date of Patent: August 8, 2023
Inventors: Sompong Paul Olarig, Fred Worley, Son Pham
-
Patent number: 11704271
Abstract: A system-in-package architecture in accordance with aspects includes a logic die and one or more memory dice coupled together in a three-dimensional stack. The logic die can include one or more global building blocks and a plurality of local building blocks. The number of local building blocks can be scalable. The local building blocks can include a plurality of engines and memory controllers. The memory controllers can be configured to directly couple one or more of the engines to the one or more memory dice. The number and type of local building blocks, and the number and types of engines and memory controllers can be scalable.
Type: Grant
Filed: August 20, 2020
Date of Patent: July 18, 2023
Assignee: Alibaba Group Holding Limited
Inventors: Lide Duan, Wei Han, Yuhao Wang, Fei Xue, Yuanwei Fang, Hongzhong Zheng
-
Patent number: 11704124
Abstract: Disclosed embodiments relate to executing a vector multiplication instruction. In one example, a processor includes fetch circuitry to fetch the vector multiplication instruction having fields for an opcode, first and second source identifiers, and a destination identifier, decode circuitry to decode the fetched instruction, execution circuitry to, on each of a plurality of corresponding pairs of fixed-sized elements of the identified first and second sources, execute the decoded instruction to generate a double-sized product of each pair of fixed-sized elements, the double-sized product being represented by at least twice a number of bits of the fixed size, and generate an unsigned fixed-sized result by rounding the most significant fixed-sized portion of the double-sized product to fit into the identified destination.
Type: Grant
Filed: January 11, 2022
Date of Patent: July 18, 2023
Assignee: Intel Corporation
Inventors: Venkateswara R. Madduri, Carl Murray, Elmoustapha Ould-Ahmed-Vall, Mark J. Charney, Robert Valentine, Jesus Corbal
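As a rough illustration of the widening-multiply-then-round operation this abstract describes, the sketch below models it for 16-bit unsigned elements. The function name, element width, and round-to-nearest rule are assumptions for illustration, not details taken from the patent.

```python
# Hypothetical sketch: multiply two fixed-size unsigned elements into a
# double-width product, then round the most significant half to fit the
# destination. Width and rounding mode are illustrative assumptions.

def mul_round_high(a: int, b: int, bits: int = 16) -> int:
    product = a * b                     # double-sized product (up to 2*bits wide)
    half = 1 << (bits - 1)              # round-to-nearest increment for the low half
    rounded = (product + half) >> bits  # keep the most significant fixed-size portion
    return rounded & ((1 << bits) - 1)  # wrap into the fixed-size destination

# 0xFFFF * 0xFFFF = 0xFFFE0001; the rounded high half is 0xFFFE.
assert mul_round_high(0xFFFF, 0xFFFF) == 0xFFFE
assert mul_round_high(0x8000, 0x8000) == 0x4000
```

The same pattern would be applied element-wise across each corresponding pair of vector lanes.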
-
Patent number: 11698790
Abstract: Methods and parallel processing units for avoiding inter-pipeline data hazards identified at compile time. For each identified inter-pipeline data hazard the primary instruction and secondary instruction(s) thereof are identified as such and are linked by a counter which is used to track that inter-pipeline data hazard. When a primary instruction is output by the instruction decoder for execution the value of the counter associated therewith is adjusted to indicate that there is a hazard related to the primary instruction, and when the primary instruction has been resolved by one of multiple parallel processing pipelines the value of the counter associated therewith is adjusted to indicate that the hazard related to the primary instruction has been resolved.
Type: Grant
Filed: November 10, 2021
Date of Patent: July 11, 2023
Assignee: Imagination Technologies Limited
Inventors: Luca Iuliano, Simon Nield, Yoong-Chert Foo, Ollie Mower
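The counter-linked tracking this abstract describes can be sketched as follows. All class and method names here are hypothetical, chosen only to illustrate the adjust-on-issue / adjust-on-resolve pattern; the patent's actual hardware mechanism may differ.

```python
# Hypothetical model of counter-based inter-pipeline hazard tracking:
# the counter is bumped when the primary instruction issues and
# decremented when a pipeline reports it resolved; secondaries wait
# while the counter indicates an outstanding hazard.

class HazardCounter:
    def __init__(self):
        self.outstanding = 0  # > 0 means the hazard is unresolved

    def on_primary_issued(self):
        # Decoder adjusts the counter when the primary is output for execution.
        self.outstanding += 1

    def on_primary_resolved(self):
        # One of the parallel pipelines signals the primary has resolved.
        self.outstanding -= 1

    def secondary_may_issue(self):
        # A linked secondary instruction stalls while the hazard is outstanding.
        return self.outstanding == 0

counter = HazardCounter()
counter.on_primary_issued()
assert not counter.secondary_may_issue()  # secondary must wait
counter.on_primary_resolved()
assert counter.secondary_may_issue()      # hazard cleared
```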
-
Patent number: 11687820
Abstract: Methods, systems, and apparatus for operating a system of qubits. In one aspect, a method includes operating a first qubit from a first plurality of qubits at a first qubit frequency from a first qubit frequency region, and operating a second qubit from the first plurality of qubits at a second qubit frequency from a second first qubit frequency region, the second qubit frequency and the second first qubit frequency region being different to the first qubit frequency and the first qubit frequency region, respectively, wherein the second qubit is diagonal to the first qubit in a two-dimensional grid of qubits.
Type: Grant
Filed: June 17, 2021
Date of Patent: June 27, 2023
Assignee: Google LLC
Inventors: John Martinis, Rami Barends, Austin Greig Fowler
-
Patent number: 11687764
Abstract: A method of flattening channel data of an input feature map in an inference system includes retrieving pixel values of a channel of a plurality of channels of the input feature map from a memory and storing the pixel values in a buffer, extracting first values of a first region having a first size from among the pixel values stored in the buffer, the first region corresponding to an overlap region of a kernel of the inference system with channel data of the input feature map, rearranging second values corresponding to the overlap region of the kernel from among the first values in the first region, and identifying a first group of consecutive values from among the rearranged second values for supplying to a first dot-product circuit of the inference system.
Type: Grant
Filed: June 12, 2020
Date of Patent: June 27, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventors: Ali Shafiee Ardestani, Joseph Hassoun
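The extract-and-rearrange step this abstract describes resembles the widely used im2col-style flattening, which can be sketched as below. The function, the list-of-lists channel layout, and the toy sizes are assumptions for illustration only.

```python
# Hypothetical sketch: extract the kernel-overlap region of one channel
# and rearrange it into one consecutive run of values, ready to feed a
# dot-product circuit alongside similarly flattened kernel weights.

def flatten_overlap(channel, row, col, k):
    """Flatten the k-by-k region of `channel` whose top-left corner
    is (row, col) into a single consecutive list of values."""
    return [channel[r][c]
            for r in range(row, row + k)
            for c in range(col, col + k)]

channel = [[1, 2, 3],
           [4, 5, 6],
           [7, 8, 9]]
# 2x2 overlap region at the top-left corner of the channel:
assert flatten_overlap(channel, 0, 0, 2) == [1, 2, 4, 5]
```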
-
Patent number: 11681640
Abstract: Enabling multi-channel communications between controllers in a storage array, including: creating a plurality of logical communications channels between two or more storage array controllers; inserting, into a buffer utilized by a direct memory access (‘DMA’) engine of a first storage array controller, a data transfer descriptor describing data stored in memory of the first storage array controller and a location to write the data to memory of a second storage array controller; retrieving, in dependence upon the data transfer descriptor, the data stored in memory of the first storage array controller; and writing, via a predetermined logical communications channel, the data into the memory of the second storage array controller in dependence upon the data transfer descriptor.
Type: Grant
Filed: October 27, 2021
Date of Patent: June 20, 2023
Assignee: PURE STORAGE, INC.
Inventors: Roland Dreier, Yan Liu, Sandeep Mann
-
Patent number: 11681529
Abstract: Systems, methods, and apparatuses relating to access synchronization in a shared memory are described. In one embodiment, a processor includes a decoder to decode an instruction into a decoded instruction, and an execution unit to execute the decoded instruction to: receive a first input operand of a memory address to be tracked and a second input operand of an allowed sequence of memory accesses to the memory address, and cause a block of a memory access that violates the allowed sequence of memory accesses to the memory address. In one embodiment, a circuit separate from the execution unit compares a memory address for a memory access request to one or more memory addresses in a tracking table, and blocks a memory access for the memory access request when a type of access violates a corresponding allowed sequence of memory accesses to the memory address for the memory access request.
Type: Grant
Filed: August 24, 2021
Date of Patent: June 20, 2023
Assignee: Intel Corporation
Inventors: Swagath Venkataramani, Dipankar Das, Sasikanth Avancha, Ashish Ranjan, Subarno Banerjee, Bharat Kaul, Anand Raghunathan
-
Patent number: 11675326
Abstract: In one embodiment, an apparatus comprises a fabric controller of a first computing node. The fabric controller is to receive, from a second computing node via a network fabric that couples the first computing node to the second computing node, a request to execute a kernel on a field-programmable gate array (FPGA) of the first computing node; instruct the FPGA to execute the kernel; and send a result of the execution of the kernel to the second computing node via the network fabric.
Type: Grant
Filed: May 26, 2021
Date of Patent: June 13, 2023
Assignee: Intel Corporation
Inventors: Nicolas A. Salhuana, Karthik Kumar, Thomas Willhalm, Francesc Guim Bernat, Narayan Ranganathan
-
Patent number: 11669470
Abstract: The present disclosure provides a storage system including a first storage device (e.g., a main storage device) and one or more additional storage devices (e.g., sub storage devices). The first storage device includes a host interface for communicating with a host device and is directly connected to the host device. The additional storage devices may be directly connected to the first storage device and may communicate with the host device through the host interface included in the first storage device. The storage system thus has a total combined capacity of both the capacity of the first storage device and the capacity of the one or more additional storage devices. Further, the one or more additional storage devices may be added or removed to increase or decrease the total capacity of the storage system, and the one or more additional storage devices may not necessarily themselves include a host interface.
Type: Grant
Filed: March 24, 2021
Date of Patent: June 6, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Sungwon Jeong, Jinhyuk Lee, Younghoi Heo, Jaeshin Lee
-
Patent number: 11669471
Abstract: A method, computer program product, and computing system for receiving an input/output (IO) command for processing data within a storage system. An IO command-specific entry may be generated in a register based upon, at least in part, the IO command. A compare-and-swap operation may be performed on the IO command-specific entry to determine an IO command state associated with the IO command. The IO command may be processed based upon, at least in part, the IO command state associated with the IO command.
Type: Grant
Filed: October 21, 2021
Date of Patent: June 6, 2023
Assignee: EMC IP Holding Company, LLC
Inventors: Eldad Zinger, Ran Anner, Amit Engel
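The compare-and-swap pattern this abstract describes can be modeled as below. The state names, the lock used to stand in for hardware atomicity, and the claim/observe flow are all illustrative assumptions, not details from the patent.

```python
# Hypothetical model: an IO command's register entry is advanced with a
# compare-and-swap; a failed swap still reports the state it observed,
# which is how the caller determines the command's current state.

import threading

class IOEntry:
    def __init__(self):
        self._state = "pending"
        self._lock = threading.Lock()  # stands in for the atomicity of CAS

    def compare_and_swap(self, expected, new):
        """Atomically move the entry from `expected` to `new`;
        return the state that was observed either way."""
        with self._lock:
            observed = self._state
            if observed == expected:
                self._state = new
            return observed

entry = IOEntry()
# First path claims the command: pending -> in_progress.
assert entry.compare_and_swap("pending", "in_progress") == "pending"
# A second attempt fails the swap but learns the command's state.
assert entry.compare_and_swap("pending", "in_progress") == "in_progress"
```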
-
Patent number: 11656668
Abstract: A method is disclosed for synchronizing operating modes across a plurality of peripheral devices. The method includes each of the plurality of peripheral devices transmitting requests to change a rate of power consumption in response to a set of criteria for each peripheral device. A host computing device determines if one or more peripheral devices should change power modes and transmits corresponding commands to synchronize the plurality of peripheral devices into a single operating mode. Each peripheral device may have a unique power mode for a given operating mode and each peripheral device includes one or more common features, such as a lighting display, that is synchronized across all peripheral devices for a given operating mode.
Type: Grant
Filed: February 18, 2021
Date of Patent: May 23, 2023
Assignee: Logitech Europe S.A.
Inventor: David Tarongi Vanrell
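The host-side arbitration this abstract describes might look like the sketch below. The message names and the all-must-agree decision rule are assumptions made for illustration; the patent's actual criteria are not specified here.

```python
# Hypothetical host-side synchronization of peripheral operating modes:
# peripherals request power-consumption changes, and the host broadcasts
# one common command so every device (and shared features such as
# lighting) ends up in the same operating mode.

def synchronize(requests):
    """requests maps device name -> requested mode. Enter low power
    only when every peripheral asked for it (assumed rule)."""
    if requests and all(r == "low_power" for r in requests.values()):
        target = "low_power"
    else:
        target = "active"
    return {device: target for device in requests}

# One device still active keeps the whole set in the active mode.
assert synchronize({"mouse": "low_power", "keyboard": "active"}) == \
    {"mouse": "active", "keyboard": "active"}
```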
-
Patent number: 11657015
Abstract: A device is provided with two or more uplink ports to connect the device via two or more links to one or more sockets, where each of the sockets includes one or more processing cores, and each of the two or more links is compliant with a particular interconnect protocol. The device further includes I/O logic to identify data to be sent to the one or more processing cores for processing, determine an affinity attribute associated with the data, and determine which of the two or more links to use to send the data to the one or more processing cores based on the affinity attribute.
Type: Grant
Filed: January 20, 2021
Date of Patent: May 23, 2023
Assignee: Intel Corporation
Inventors: Debendra Das Sharma, Anil Vasudevan, David Harriman
-
Patent number: 11657267
Abstract: A neural network apparatus (20) includes a storage unit (24) storing a neural network model, and an arithmetic unit (22) inputting input information into an input layer of the neural network and outputting an output layer. A weight matrix (W) of an FC layer of the neural network model is constituted by a product of a weight basis matrix (Mw) of integers and a weight coefficient matrix (Cw) of real numbers. In the FC layer, the arithmetic unit (22) uses an output vector from a previous layer as an input vector (x) to decompose the input vector (x) into a product of a binary input basis matrix (Mx) and an input coefficient vector (cx) of real numbers and an input bias (bx) and derives a product of the input vector (x) and a weight matrix (W).
Type: Grant
Filed: July 20, 2017
Date of Patent: May 23, 2023
Assignee: DENSO IT LABORATORY, INC.
Inventor: Mitsuru Ambai
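The weight decomposition W = Mw · Cw this abstract describes can be demonstrated with a tiny example. The matrix values below are toy numbers chosen for illustration, and the helper is a plain-Python matrix product, not anything from the patent.

```python
# Illustrative sketch: a real-valued FC-layer weight matrix W expressed
# as the product of an integer (here +/-1) basis matrix Mw and a
# real-valued coefficient matrix Cw. Toy values, for demonstration only.

def matmul(A, B):
    """Plain-Python matrix product."""
    return [[sum(a * b for a, b in zip(row, col))
             for col in zip(*B)]
            for row in A]

Mw = [[1, -1],
      [1,  1]]          # integer basis matrix
Cw = [[0.5, 0.25],
      [0.5, 0.75]]      # real-valued coefficient matrix

W = matmul(Mw, Cw)      # reconstructed weight matrix
assert W == [[0.0, -0.5], [1.0, 1.0]]
```

Because Mw is integer-valued (and Mx binary, per the abstract), the expensive inner products can be carried out with cheap integer or bitwise arithmetic, with the real-valued coefficients applied afterward.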
-
Patent number: 11650949
Abstract: A system includes a fabric switch including a motherboard, a baseboard management controller (BMC), a network switch configured to transport network signals, and a PCIe switch configured to transport PCIe signals; a midplane; and a plurality of device ports. Each of the plurality of device ports is configured to connect a storage device to the motherboard of the fabric switch over the midplane and carry the network signals and the PCIe signals over the midplane. The storage device is configurable in multiple modes based on a protocol established over a fabric connection between the system and the storage device.
Type: Grant
Filed: October 5, 2020
Date of Patent: May 16, 2023
Inventors: Sompong Paul Olarig, Fred Worley, Son Pham
-
Patent number: 11645212
Abstract: Processing elements include interfaces that allow direct access to memory banks on one or more DRAMs in an integrated circuit stack. These additional (e.g., per processing element) direct interfaces may allow the processing elements to have direct access to the data in the DRAM stack. Based on the size/type of operands being processed, and the memory bandwidth of the direct interfaces, rate calculation circuitry on the processor die determines the speed each processing element and/or processing nodes within each processing element are operated.
Type: Grant
Filed: October 19, 2021
Date of Patent: May 9, 2023
Assignee: Rambus Inc.
Inventors: Steven C. Woo, Thomas Vogelsang, Joseph James Tringali, Pooneh Safayenikoo
-
Patent number: 11635969
Abstract: Systems and methods described herein are directed to upgrading one or more of add-on firmware and disk firmware for a server, which can involve connecting a port of the server to an isolated network, the isolated network dedicated to firmware upgrades for the server; caching onto cache memory of the server, an operating system received through the isolated network; booting the operating system on the server from the cache memory; conducting a Network File System (NFS) mount on the server to determine hardware information associated with the upgrading of the one or more of the add-on firmware and the disk firmware; and upgrading the one or more of the add-on firmware and the disk firmware based on the hardware information.
Type: Grant
Filed: January 23, 2020
Date of Patent: April 25, 2023
Assignee: HITACHI VANTARA LLC
Inventors: Francis Kin-Wing Hong, Arturo Cruz, Liren Zhao
-
Patent number: 11630785
Abstract: The present disclosure generally relates to improving data transfer speed. A data storage device includes both a controller and a memory device. The controller provides instructions regarding read and/or write commands to the memory device through the use of control lines. The data to be written/read is transferred between the controller and the memory device along data lines. The control lines typically are not used during data transfer. During data transfer, the control lines can be used to increase data transfer speed by utilizing the otherwise idle control lines for data transfer in addition to the data lines. Hence, data transfer speed is increased by using not only the data lines, but additionally the control lines. Once the data transfer is complete, the control lines return to their legacy function.
Type: Grant
Filed: February 24, 2021
Date of Patent: April 18, 2023
Assignee: Western Digital Technologies, Inc.
Inventors: Refael Ben-Rubi, Moshe Cohen
-
Patent number: 11614947
Abstract: An example device includes a plurality of computational memory banks. Each computational memory bank of the plurality of computational memory banks includes an array of memory units and a plurality of processing elements connected to the array of memory units. The device further includes a plurality of single instruction, multiple data (SIMD) controllers. Each SIMD controller of the plurality of SIMD controllers is contained within at least one computational memory bank of the plurality of computational memory banks. Each SIMD controller is to provide instructions to the at least one computational memory bank.
Type: Grant
Filed: August 31, 2018
Date of Patent: March 28, 2023
Assignee: UNTETHER AI CORPORATION
Inventors: William Martin Snelgrove, Darrick Wiebe
-
Patent number: 11615334Abstract: Quantum memory management is becoming a pressing problem, especially given the recent research effort to develop new and more complex quantum algorithms. The disclosed technology concerns various example memory management schemes for quantum computing. For example, certain embodiments concern methods for managing quantum memory based on reversible pebbling games constructed from SAT-encodings.Type: GrantFiled: June 28, 2019Date of Patent: March 28, 2023Assignee: Microsoft Technology Licensing, LLCInventors: Martin Roetteler, Giulia Meuli