Patents Examined by Christopher A Daley
-
Patent number: 12217147
Abstract: Techniques in wavelet filtering for advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element comprises a compute element to execute programmed instructions using the data and a router to route the wavelets in accordance with virtual channel specifiers. Each processing element is enabled to perform local filtering of wavelets received at the processing element, selectively, conditionally, and/or optionally discarding zero or more of the received wavelets, thereby preventing further processing of the discarded wavelets. The wavelet filtering is performed by one or more configurable wavelet filters operable in various modes, such as counter, sparse, and range modes.
Type: Grant
Filed: October 15, 2020
Date of Patent: February 4, 2025
Assignee: Cerebras Systems Inc.
Inventors: Michael Morrison, Michael Edwin James, Sean Lie, Srikanth Arekapudi, Gary R. Lauterbach
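The abstract above names three filter modes (counter, sparse, and range) without defining them. A minimal sketch of how such per-wavelet discard logic might behave, where the mode semantics are assumptions rather than the patent's definitions, and a wavelet is modeled as an (index, value) pair:

```python
# Hypothetical sketch of per-processing-element wavelet filtering.
# Each filter returns the wavelets that are kept; the rest are discarded
# and never reach the compute element.

def counter_filter(wavelets, keep_every):
    """Assumed counter mode: keep every Nth wavelet."""
    return [w for i, w in enumerate(wavelets) if i % keep_every == 0]

def sparse_filter(wavelets):
    """Assumed sparse mode: discard zero-valued wavelets."""
    return [w for w in wavelets if w[1] != 0]

def range_filter(wavelets, lo, hi):
    """Assumed range mode: keep wavelets whose index falls in [lo, hi)."""
    return [w for w in wavelets if lo <= w[0] < hi]

wavelets = [(0, 0.5), (1, 0.0), (2, -1.2), (3, 0.0), (4, 2.0)]
kept = sparse_filter(wavelets)   # zero-valued wavelets dropped
```

Filtering at the receiving element means discarded wavelets consume no compute cycles downstream, which is where the claimed energy savings would come from.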
-
Patent number: 12212431
Abstract: The present invention relates to a CAN node configured to predict, based on at least one response message and a reference response, a fault of the CAN network, and to determine the location of the predicted fault. The present disclosure also relates to a CAN system and a method for the CAN node.
Type: Grant
Filed: June 13, 2023
Date of Patent: January 28, 2025
Assignee: NXP B.V.
Inventors: Clemens Gerhardus Johannes de Haas, Matthias Berthold Muth, Gerald Kwakernaat, Lucas Pieter Lodewijk van Dijk
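The core idea above is a comparison of observed responses against a reference response to both predict a fault and localize it. A minimal sketch under assumed semantics (node IDs and the deviation-equals-fault-location rule are illustrative, not from the patent):

```python
# Hypothetical sketch: predict a CAN network fault by comparing each node's
# response message against a stored reference response.

REFERENCE = {1: "ACK", 2: "ACK", 3: "ACK", 4: "ACK"}  # assumed reference

def predict_fault(responses, reference=REFERENCE):
    """Return (fault_predicted, suspected_nodes). A node whose response
    deviates from the reference is treated as a candidate fault location."""
    suspects = [n for n, ref in reference.items() if responses.get(n) != ref]
    return (len(suspects) > 0, suspects)

# Node 3 failed to respond: a fault is predicted and localized to node 3.
fault, where = predict_fault({1: "ACK", 2: "ACK", 3: None, 4: "ACK"})
```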
-
Patent number: 12204482
Abstract: Semiconductor devices, packaging architectures and associated methods are disclosed. In one embodiment, a memory chiplet is disclosed. The memory chiplet includes a D2D interface of a first type for coupling to a host IC chip via multiple lanes. The D2D interface includes multiple unit interface modules, each of the multiple unit interface modules corresponding to a first set of signal path resources of a lowest granularity provided by the multiple lanes. A memory port includes a memory physical interface of a first memory type for accessing memory storage of the first memory type. The memory physical interface of the first memory type includes a second set of signal path resources corresponding to multiple memory channels of the first memory type. Mapping circuitry maps the second set of signal path resources to the first set of signal path resources in a manner that utilizes all of the first set of signal path resources for an integer number of the multiple unit interface modules.
Type: Grant
Filed: May 1, 2024
Date of Patent: January 21, 2025
Assignee: Eliyan Corporation
Inventors: Ramin Farjadrad, Kevin Donnelly
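The mapping constraint above is that the memory channels' signal paths must exactly fill an integer number of unit interface modules, leaving no module resource unused. A minimal sketch of that packing step, with illustrative wire counts (the sizes are assumptions):

```python
# Hypothetical sketch: pack the signal-path resources of several memory
# channels onto an integer number of unit interface modules, with every
# resource of every used module consumed.

def map_channels_to_modules(n_channels, wires_per_channel, wires_per_module):
    total = n_channels * wires_per_channel
    # The abstract requires an integer number of fully utilized modules,
    # so the channel total must divide evenly into modules.
    if total % wires_per_module != 0:
        raise ValueError("channels do not fill an integer number of modules")
    n_modules = total // wires_per_module
    # Map each channel wire to a (module, lane-within-module) pair.
    mapping = {}
    for ch in range(n_channels):
        start = ch * wires_per_channel
        mapping[ch] = [(w // wires_per_module, w % wires_per_module)
                       for w in range(start, start + wires_per_channel)]
    return n_modules, mapping

# 4 channels x 20 wires = 80 wires → exactly 5 modules of 16 lanes each.
n_modules, mapping = map_channels_to_modules(4, 20, 16)
```

Note that a channel may straddle a module boundary; the constraint is only that the total is module-aligned.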
-
Patent number: 12204954
Abstract: Techniques in placement of compute and memory for accelerated deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element comprises a compute element to execute programmed instructions using the data and a router to route the wavelets. The routing is in accordance with virtual channel specifiers of the wavelets and controlled by routing configuration information of the router. A software stack determines placement of compute resources and memory resources based on a description of a neural network. The determined placement is used to configure the routers including usage of the respective colors. The determined placement is used to configure the compute elements including the respective programmed instructions each is configured to execute.
Type: Grant
Filed: October 29, 2020
Date of Patent: January 21, 2025
Assignee: Cerebras Systems Inc.
Inventors: Vladimir Kibardin, Michael Edwin James, Michael Morrison, Sean Lie, Gary R. Lauterbach, Stanislav Funiak
-
Patent number: 12204468
Abstract: Semiconductor devices, packaging architectures and associated methods are disclosed. In one embodiment, a memory chiplet is disclosed. The memory chiplet includes at least one memory die of a first memory type. Memory control circuitry is coupled to the at least one memory die. An interface circuit is for coupling to a host IC chiplet. The interface circuit includes data input/output (I/O) circuitry for coupling to multiple data lanes. Link directional control circuitry selects, for a first memory transaction, a first subset of the multiple data lanes to transfer data between the memory chiplet and the host IC chiplet.
Type: Grant
Filed: May 1, 2024
Date of Patent: January 21, 2025
Assignee: Eliyan Corporation
Inventors: Curtis McAllister, Syrus Ziai
-
Patent number: 12197351
Abstract: Various examples are directed to systems and methods for requesting an atomic operation. A first hardware compute element may send a first request via a network structure, where the first request comprises an atomic opcode indicating an atomic operation to be performed by a second hardware compute element. The network structure may provide an address bus from the first hardware compute element for providing the atomic opcode to the second hardware compute element. The second hardware compute element may execute the atomic operation and send confirmation data indicating completion of the atomic operation. The network structure may provide a second bus between the second hardware compute element and the first hardware compute element. The second bus may be for providing the confirmation data from the second hardware compute element to the first hardware compute element.
Type: Grant
Filed: July 20, 2022
Date of Patent: January 14, 2025
Assignee: Micron Technology, Inc.
Inventors: Christopher Baronne, Tony M. Brewer
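The flow above is a two-bus request/confirm handshake: the opcode travels on an address bus, the confirmation returns on a second bus. A minimal sketch modeling the two buses as queues (the `fetch_add` opcode and the queue-based buses are assumptions for illustration):

```python
from queue import Queue

def fetch_add(mem, addr, val):
    """Atomic fetch-and-add: return the old value, store old + val."""
    old = mem[addr]
    mem[addr] = old + val
    return old

address_bus, confirm_bus = Queue(), Queue()   # the two buses, as queues
memory = {0x10: 5}                            # element B's local memory

def requester():
    # Element A puts the atomic opcode and operands on the address bus.
    address_bus.put(("fetch_add", 0x10, 3))

def responder():
    # Element B decodes the opcode, executes it locally, and returns
    # confirmation data on the second bus.
    opcode, addr, val = address_bus.get()
    result = fetch_add(memory, addr, val) if opcode == "fetch_add" else None
    confirm_bus.put(("done", result))

requester()
responder()
status, old_value = confirm_bus.get()
```

Executing the operation at the element that owns the memory is what makes it atomic: no other requester can interleave between the read and the write.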
-
Patent number: 12182039
Abstract: The implementation of the present disclosure provides a memory, an operation method thereof and a memory system. For example, the memory can include a first memory plane, a second memory plane, and a plane data bus connected to each of the first memory plane and the second memory plane. The plane data bus can be configured to receive input data. The first memory plane can be configured to store first data of the input data. The second memory plane can be configured to store second data of the input data. The second data can be configured to indicate whether an inversion operation was performed on the first data prior to transmission.
Type: Grant
Filed: March 15, 2023
Date of Patent: December 31, 2024
Assignee: Yangtze Memory Technologies Co., Ltd.
Inventors: Wenjie Mu, Jiawei Chen, Shu Xie
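Storing an inversion flag alongside the data is the classic data-bus-inversion pattern: invert a byte before transmission when that reduces bus toggling, and record the choice so the receiver can undo it. A minimal sketch, where the more-than-half-the-bits-toggle heuristic is an assumption (the abstract does not state when inversion is applied):

```python
# Hypothetical sketch of the inversion scheme: the stored byte plays the
# role of the first plane's data, the flag the role of the second plane's.

def encode(prev, byte):
    """Return (stored_byte, inverted_flag): invert if more than half of the
    8 data bits would toggle relative to the previously driven value."""
    toggles = bin(prev ^ byte).count("1")
    if toggles > 4:
        return (~byte) & 0xFF, True
    return byte, False

def decode(stored, inverted):
    """Undo the inversion using the flag read from the second plane."""
    return (~stored) & 0xFF if inverted else stored

stored, flag = encode(0x00, 0xFE)   # 7 of 8 bits would toggle → invert
```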
-
Patent number: 12182611
Abstract: An apparatus includes an interrupt cache having cache storage configured to store a plurality of interrupts received from an interrupt source, the plurality of interrupts corresponding to a plurality of interrupt events configured for execution by a plurality of interrupt service routines, and a cache manager component configured to generate an interrupt message for transmission to a processing unit, the interrupt message generated to include at least one interrupt of the plurality of interrupts from the cache storage.
Type: Grant
Filed: December 22, 2022
Date of Patent: December 31, 2024
Assignees: Advanced Micro Devices, Inc.; ATI Technologies ULC
Inventors: Philip Ng, Anil Kumar
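The mechanism above accumulates interrupts in cache storage and lets a cache manager emit a single message carrying several of them. A minimal sketch where the capacity-triggered flush policy and the message shape are assumptions:

```python
# Hypothetical sketch of the cache-manager behavior: interrupts are stored,
# then coalesced into one message to the processing unit.

class InterruptCache:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.storage = []          # the cache storage

    def post(self, interrupt):
        """Store an interrupt; return a coalesced message when full,
        otherwise None (the message is deferred)."""
        self.storage.append(interrupt)
        if len(self.storage) >= self.capacity:
            return self.flush()
        return None

    def flush(self):
        # The cache manager generates one interrupt message containing
        # every cached interrupt, then clears the storage.
        message = {"interrupts": list(self.storage)}
        self.storage.clear()
        return message

cache = InterruptCache(capacity=3)
cache.post("irq_a")
cache.post("irq_b")
msg = cache.post("irq_c")   # third interrupt triggers one combined message
```

Coalescing amortizes the per-message delivery cost when an interrupt source is bursty.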
-
Patent number: 12177133
Abstract: Techniques in dynamic routing for advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element comprises a compute element enabled to execute programmed instructions using the data and a router enabled to route the wavelets via static routing, dynamic routing, or both. The routing is in accordance with a respective virtual channel specifier of each of the wavelets and controlled by routing configuration information of the router. The static techniques enable statically specifiable neuron connections. The dynamic techniques enable information from the wavelets to alter the routing configuration information during neural network processing.
Type: Grant
Filed: October 14, 2020
Date of Patent: December 24, 2024
Assignee: Cerebras Systems Inc.
Inventors: Michael Morrison, Michael Edwin James, Sean Lie, Srikanth Arekapudi, Gary R. Lauterbach, Vijay Anand Reddy Korthikanti
-
Patent number: 12169466
Abstract: Passage of data packets on a data pipeline is arbitrated in a distributed manner along the pipeline. Multiple data arbiters each operate to merge data from a respective data source to the data pipeline at a distinct point in the pipeline. At each stage, a multiplexer selectively passes, to the data pipeline, an upstream data packet or a local data packet from the respective data source. A register stores an indication of data packets passed by the multiplexer based on the respective data source originating the data packet. A controller controls the multiplexer to select the upstream data packet or the local data packet based on the indication of data packets passed by the multiplexer.
Type: Grant
Filed: January 26, 2023
Date of Patent: December 17, 2024
Assignee: MARVELL ASIA PTE LTD
Inventor: Thomas Lorne Drabenstott
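Each stage above is a multiplexer plus a register of past selections plus a controller that uses that history. A minimal sketch of one stage, where the alternate-after-passing-upstream policy is an assumption (the abstract only says selection is based on the recorded history):

```python
# Hypothetical sketch of one distributed-arbiter stage: the register
# remembers which source supplied the last packet passed, and the
# controller alternates so a busy upstream cannot starve the local source.

class ArbiterStage:
    def __init__(self):
        self.last_passed = None   # the register: source of last packet

    def step(self, upstream, local):
        """Pass one of two candidate packets (either may be None)."""
        if upstream is not None and local is not None:
            # Controller: prefer whichever source did not go last.
            choice = "local" if self.last_passed == "upstream" else "upstream"
        elif upstream is not None:
            choice = "upstream"
        elif local is not None:
            choice = "local"
        else:
            return None           # nothing to pass this cycle
        self.last_passed = choice
        return upstream if choice == "upstream" else local

stage = ArbiterStage()
out = [stage.step(u, l) for u, l in [("u1", "l1"), ("u2", "l2"), ("u3", None)]]
```

Because every stage makes only local decisions from its own register, no central arbiter is needed anywhere along the pipeline.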
-
Patent number: 12169771
Abstract: Techniques in wavelet filtering for advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element comprises a compute element to execute programmed instructions using the data and a router to route the wavelets in accordance with virtual channel specifiers. Each processing element is enabled to perform local filtering of wavelets received at the processing element, selectively, conditionally, and/or optionally discarding zero or more of the received wavelets, thereby preventing further processing of the discarded wavelets. The wavelet filtering is performed by one or more configurable wavelet filters operable in various modes, such as counter, sparse, and range modes.
Type: Grant
Filed: October 15, 2020
Date of Patent: December 17, 2024
Assignee: Cerebras Systems Inc.
Inventors: Michael Morrison, Michael Edwin James, Sean Lie, Srikanth Arekapudi, Gary R. Lauterbach
-
Patent number: 12164446
Abstract: According to one embodiment, a memory system includes a first chip and a second chip. The second chip is bonded with the first chip. The memory system includes a semiconductor memory device and a memory controller. The semiconductor memory device includes a memory cell array, a peripheral circuit, and an input/output module. The memory controller is configured to receive an instruction from an external host device and control the semiconductor memory device via the input/output module. The first chip includes the memory cell array. The second chip includes the peripheral circuit, the input/output module, and the memory controller.
Type: Grant
Filed: September 29, 2023
Date of Patent: December 10, 2024
Assignee: KIOXIA CORPORATION
Inventors: Kenji Sakaue, Toshiyuki Furusawa, Shinya Takeda
-
Patent number: 12166851
Abstract: A slave device for IO-Link communication with a master device, wherein the master device and the slave device operate on a common basic timing, the slave device including at least one Universal Asynchronous Receiver Transmitter (UART) module configured to detect an INIT request sent from the master device during communication setup, calculate an actual timing of the master device from the INIT request, and correct an initial timing of the slave device to an actual timing of the slave device based on the actual timing of the master device.
Type: Grant
Filed: February 25, 2022
Date of Patent: December 10, 2024
Assignee: Renesas Electronics Germany GmbH
Inventors: Lars Goepfert, Thomas Reichel, Tilo Schubert, Miru Richard George
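The correction step above amounts to measuring the master's actual bit time from the INIT request and rescaling the slave's own timing to match. A minimal sketch under assumed numbers (the nominal bit time and the divider model are illustrative, not from the patent or the IO-Link specification):

```python
# Hypothetical sketch of the timing correction: the slave measures the
# observed bit time of the master's INIT request and rescales its baud
# divider so both ends share a common timing.

NOMINAL_BIT_TIME_US = 4.34   # illustrative nominal bit time, microseconds

def corrected_divider(measured_bit_time_us, current_divider,
                      nominal=NOMINAL_BIT_TIME_US):
    """Scale the slave's divider by the ratio of the master's measured
    bit time to the nominal bit time."""
    return round(current_divider * measured_bit_time_us / nominal)

# Master's oscillator runs 2% slow, so its bits are 2% longer than nominal:
# the slave stretches its divider by the same 2% to stay aligned.
new_div = corrected_divider(NOMINAL_BIT_TIME_US * 1.02, 100)
```

Deriving the correction from the INIT request itself means no extra calibration traffic is needed during communication setup.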
-
Patent number: 12164460
Abstract: Systems, methods, and apparatus are configured to enable a receiver to provide feedback. In one example, a method performed at a device coupled to a serial bus includes receiving a write command from the serial bus in a datagram, writing a data byte received in a first data frame of the datagram to a register address identified by the datagram, and using a second data frame of the datagram to provide feedback regarding the datagram. Feedback may be provided by driving a data line of the serial bus to provide a negative acknowledgement during the second data frame when a transmission error is detected in the datagram, and refraining from driving the data line of the serial bus during the second data frame when no transmission error is detected in the datagram, thereby providing an acknowledgement of the datagram.
Type: Grant
Filed: April 16, 2021
Date of Patent: December 10, 2024
Assignee: QUALCOMM Incorporated
Inventors: Sharon Graif, Navdeep Mer, Naveen Kumar Narala, Sriharsha Chakka
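The feedback scheme above is asymmetric: the receiver actively drives the line only for a negative acknowledgement and signals success by staying off the bus. A minimal sketch of the receiver side, where parity as the error check is an assumption (the abstract does not say how errors are detected):

```python
# Hypothetical sketch of the receiver's datagram handling: commit the write
# and release the line (ACK), or drive the line low (NACK) on error.

def handle_datagram(register_file, addr, data_byte, parity_bit):
    """Return the receiver's action on the data line during the second
    data frame: 'drive_low' is a NACK, 'release' is an ACK by omission."""
    expected = bin(data_byte).count("1") % 2   # assumed parity check
    if parity_bit != expected:
        return "drive_low"            # error detected → actively NACK
    register_file[addr] = data_byte   # commit the write, stay off the line
    return "release"

regs = {}
ok = handle_datagram(regs, 0x05, 0b1011, 1)    # parity matches → ACK
bad = handle_datagram(regs, 0x06, 0b1011, 0)   # parity mismatch → NACK
```

Signaling ACK by not driving the line keeps the common case free of bus contention; only errors cost a driven frame.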
-
Patent number: 12147376
Abstract: Systems and methods for translation and transmission of video and audio data over a first-in-first-out interface (FIFO) in a field programmable gate array (FPGA) are provided. The method includes receiving audio and video data including a number of video frames, each with a plurality of video lines separated by a line blanking interval. A first video line is translated and transmitted to a packet-based network through the FIFO in the FPGA while concurrently buffering the audio data in an audio buffer in the FPGA. Next, at least a portion of the audio data in the audio buffer is transmitted to the packet-based network through the FIFO during the line blanking interval separating the first video line from a second video line. Where video frames are separated by frame blanking intervals, the method further includes transmitting through the FIFO any data remaining in the buffer after the preceding line blanking interval.
Type: Grant
Filed: March 27, 2023
Date of Patent: November 19, 2024
Assignee: Cypress Semiconductor Corporation
Inventors: Rajagopal Narayanasamy, Ashwin Nair, Harsh Vinodchandra Gandhi, Sanat Kumar Mishra
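The schedule above sends video lines through the FIFO as they arrive, drains buffered audio during line blanking, and flushes any remainder during frame blanking. A minimal sketch of that interleaving (the per-blank audio budget and the single merged stream are assumptions):

```python
from collections import deque

def transmit_frame(video_lines, audio_samples, audio_per_blank=2):
    """Interleave one frame's video lines with buffered audio: audio is
    drained only in the blanking gaps between lines, and whatever remains
    goes out during the frame blanking interval."""
    audio = deque(audio_samples)   # the audio buffer
    stream = []                    # what passes through the FIFO, in order
    for line in video_lines:
        stream.append(("video", line))
        # Line blanking interval: send a bounded amount of buffered audio.
        for _ in range(min(audio_per_blank, len(audio))):
            stream.append(("audio", audio.popleft()))
    # Frame blanking interval: flush any audio left in the buffer.
    while audio:
        stream.append(("audio", audio.popleft()))
    return stream

out = transmit_frame(["L0", "L1"], ["a0", "a1", "a2", "a3", "a4"])
```

Using the blanking intervals lets a single FIFO carry both streams without ever delaying video data.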
-
Patent number: 12130763
Abstract: A storage enclosure is connected to a server via an external network and includes a network switch, an expander that is connected to the network switch and that is configured to generate enclosure data that supports a format conforming with SCSI Enclosure Services, and a board management controller (BMC) that is connected to the network switch and the expander. The BMC is configured to translate the enclosure data into enclosure translating data that supports a Redfish® format. The expander is configured to, after generating the enclosure data, transmit the enclosure data through the network switch to the BMC via an internal network. The BMC is configured to translate the enclosure data into the enclosure translating data, and to transmit the enclosure translating data to the network switch. The network switch transmits the enclosure translating data to the server through the external network.
Type: Grant
Filed: December 14, 2022
Date of Patent: October 29, 2024
Assignee: MITAC COMPUTING TECHNOLOGY CORPORATION
Inventors: Jyun-Jie Wang, Shao-Che Chang, Cheng-Tung Wang, Yen-Lun Tseng, Chin-Hung Tan
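The BMC's job above is format translation: SCSI Enclosure Services element status in, Redfish-style resource out. A minimal sketch of that mapping, where both the SES status names and the Redfish property subset are simplified assumptions rather than the actual schemas:

```python
# Hypothetical sketch of the BMC's SES → Redfish translation step.

SES_TO_REDFISH_STATE = {"OK": "Enabled", "CRITICAL": "Enabled",
                        "NOT_INSTALLED": "Absent"}
SES_TO_REDFISH_HEALTH = {"OK": "OK", "CRITICAL": "Critical",
                         "NOT_INSTALLED": None}

def translate(ses_elements):
    """Map SES-style element statuses onto a Redfish-like chassis payload."""
    return {
        "@odata.type": "#Chassis.v1_0_0.Chassis",
        "Drives": [
            {"Id": e["slot"],
             "Status": {"State": SES_TO_REDFISH_STATE[e["status"]],
                        "Health": SES_TO_REDFISH_HEALTH[e["status"]]}}
            for e in ses_elements
        ],
    }

payload = translate([{"slot": 0, "status": "OK"},
                     {"slot": 1, "status": "CRITICAL"}])
```

Doing the translation in the BMC lets the server manage the enclosure over the external network with a single modern API, without speaking SES itself.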
-
Patent number: 12130772
Abstract: A multi-processor device is disclosed. The multi-processor device includes interface circuitry to receive requests from at least one host device. A primary processor is coupled to the interface circuitry to process the requests in the absence of a failure event associated with the primary processor. A secondary processor processes operations on behalf of the primary processor and selectively receives the requests from the interface circuitry based on detection of the failure event associated with the primary processor.
Type: Grant
Filed: October 24, 2022
Date of Patent: October 29, 2024
Assignee: Rambus Inc.
Inventors: Michael Raymond Miller, Evan Lawrence Erickson
-
Patent number: 12130760
Abstract: A system-on-chip including: a first slave having a first safety level; a second slave having a second safety level; a first master having a third safety level, the first master outputs a first access request for the first slave and a second access request for the second slave; a safety function protection controller that outputs first attribute information corresponding to the first safety level, second attribute information corresponding to the second safety level, and third attribute information corresponding to the third safety level; and an interconnect bus that receives the first, second and third attribute information, transfers the first access request to the first slave when it is determined that the third safety level is higher than or equal to the first safety level, and blocks the second access request when it is determined that the third safety level is lower than the second safety level.
Type: Grant
Filed: September 29, 2023
Date of Patent: October 29, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Byungtak Lee, Hee-Seong Lee, Myungkyoon Yim
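The interconnect's rule above is a simple dominance check: an access request is transferred only when the master's safety level is greater than or equal to the target slave's level, and blocked otherwise. A minimal sketch (the numeric levels, with higher meaning safer, are an assumption consistent with the abstract):

```python
# Hypothetical sketch of the interconnect bus check, using the attribute
# information the safety function protection controller would supply.

SLAVE_LEVELS = {"slave1": 1, "slave2": 3}   # attribute info per slave

def route(master_level, target):
    """Transfer the access request iff the master's safety level is at
    least the target slave's safety level; otherwise block it."""
    if master_level >= SLAVE_LEVELS[target]:
        return "transfer"
    return "block"

# A master at level 2 may reach slave1 (level 1) but not slave2 (level 3).
decisions = [route(2, "slave1"), route(2, "slave2")]
```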
-
Patent number: 12124709
Abstract: The present application discloses a computing system and an associated method. The computing system includes a first host, a second host, a first memory extension device and a second memory extension device. The first host includes a first memory, and the second host includes a second memory. The first host has a plurality of first memory addresses corresponding to a plurality of memory spaces of the first memory, and a plurality of second memory addresses corresponding to a plurality of memory spaces of the second memory. The first memory extension device is coupled to the first host. The second memory extension device is coupled to the second host and the first memory extension device. The first host accesses the plurality of memory spaces of the second memory through the first memory extension device and the second memory extension device.
Type: Grant
Filed: December 12, 2022
Date of Patent: October 22, 2024
Assignee: ALIBABA (CHINA) CO., LTD.
Inventors: Tianchan Guan, Yijin Guan, Dimin Niu, Hongzhong Zheng
-
Patent number: 12124392
Abstract: Multiple device stacks are interconnected in a ring topology. The inter-device stack communication may utilize a handshake protocol. This ring topology may include the host so that the host may initialize and load the device stacks with data and/or commands (e.g., software, algorithms, etc.). The inter-device stack interconnections may also be configured to include/remove the host and/or to implement varying numbers of separate ring topologies. By configuring the system with more than one ring topology, and assigning different problems to different rings, multiple, possibly unrelated, machine learning tasks may be performed in parallel by the device stack system.
Type: Grant
Filed: August 4, 2023
Date of Patent: October 22, 2024
Assignee: Rambus Inc.
Inventor: Steven C. Woo
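The configurability above, including or excluding the host from the ring, comes down to how the next-hop table is built. A minimal sketch of ring construction and hop counting (the node names and dictionary representation are assumptions for illustration):

```python
# Hypothetical sketch of inter-stack communication on a configurable ring:
# each node forwards to exactly one successor, and the host can be included
# in or removed from the ring when it is (re)built.

def build_ring(stacks, include_host=True):
    """Return a next-hop table for a unidirectional ring over the stacks,
    optionally inserting the host as a ring member."""
    nodes = (["host"] if include_host else []) + stacks
    return {nodes[i]: nodes[(i + 1) % len(nodes)] for i in range(len(nodes))}

def hops(ring, src, dst):
    """Count forwarding hops from src to dst around the ring."""
    count, node = 0, src
    while node != dst:
        node = ring[node]
        count += 1
    return count

ring = build_ring(["stack0", "stack1", "stack2"])   # host → stack0 → ...
n = hops(ring, "host", "stack2")
```

Building several disjoint tables of this form yields the multiple independent rings the abstract describes, one per machine learning task.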