Patents Assigned to Marvell Asia PTE, Ltd.
-
Patent number: 12293174
Abstract: A method includes receiving a machine learning (ML) network model in high-level code; generating an internal representation (IR) that is mapped to components in a multi-processing tile device; determining, based on the IR, whether a first processing tile with a first on-chip memory (OCM) has the same dimension for an input/output tensor data as a second processing tile with a second OCM performing the same primitive function; allocating the same memory address range within the first and the second OCM for that primitive function if the first processing tile has the same dimension for the input/output tensor data as the second processing tile; linking the memory address range of the first OCM to the memory address range of the second OCM to form a grouped memory space within the first and the second OCM, respectively; and compiling low-level instructions based on the linking.
Type: Grant
Filed: July 26, 2023
Date of Patent: May 6, 2025
Assignee: Marvell Asia Pte Ltd
Inventors: Nikhil Bernard John Stephen, Senad Durakovic, Chien-Chun Chou, Pranav Jonnalagadda, Ulf Hanebutte
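A minimal sketch, assuming a simplified tile model and hypothetical field names (not the patented compiler pass), of how tiles running the same primitive on identically shaped tensors could be grouped and given the same OCM address range:

```python
from collections import defaultdict

def prod(shape):
    out = 1
    for d in shape:
        out *= d
    return out

def link_ocm_ranges(tiles):
    """Group tiles by (primitive, tensor shape) and assign each group one
    shared OCM address range, emulating the 'grouped memory space' idea."""
    groups = defaultdict(list)
    for tile in tiles:
        groups[(tile["primitive"], tile["io_shape"])].append(tile)

    linked = []
    next_base = 0x0000
    for (primitive, shape), members in groups.items():
        size = prod(shape) * 2  # bytes, assuming fp16 tensor elements
        addr_range = (next_base, next_base + size)
        for tile in members:
            tile["ocm_range"] = addr_range  # same range in every member's OCM
        next_base += size
        linked.append((primitive, shape, addr_range, [t["id"] for t in members]))
    return linked

tiles = [
    {"id": 0, "primitive": "matmul", "io_shape": (64, 64)},
    {"id": 1, "primitive": "matmul", "io_shape": (64, 64)},
    {"id": 2, "primitive": "relu",   "io_shape": (128,)},
]
print(link_ocm_ranges(tiles))
```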
-
Patent number: 12294399
Abstract: Transceiver circuitry in an integrated circuit device includes a receive path including an analog front end for receiving analog signals from an analog transmission path and conditioning the analog signals, and an analog-to-digital converter configured to convert the conditioned analog signals into received digital signals for delivery to functional circuitry, and a transmit path including a digital front end configured to accept digital signals from the functional circuitry and to condition the accepted digital signals, and a digital-to-analog converter configured to convert the conditioned digital signals into analog signals for transmission onto the analog transmission path. At least one of the analog front end and the digital front end introduces distortion and outputs a distorted conditioned signal.
Type: Grant
Filed: January 4, 2024
Date of Patent: May 6, 2025
Assignee: Marvell Asia Pte Ltd
Inventors: Ray Luan Nguyen, Benjamin Tomas Reyes, Geoffrey Hatcher, Stephen Jantzi
-
Patent number: 12292978
Abstract: A new approach is proposed to support SRAM-less bootup of an electronic device. A portion of a cache unit of a processor is utilized as an SRAM to maintain data to be accessed via read and/or write operations for bootup of the electronic device. First, the portion of the cache unit is mapped to a region of a memory that has not been initialized. The processor reads data from a non-modifiable storage to be used for the bootup process of the electronic device and writes the data into the portion of the cache unit serving as the SRAM. To avoid reading from or writing to the uninitialized memory, any read operation to the memory region returns a specific value and any write operation to the memory region is dropped. The processor then accesses the data stored in the portion of the cache unit to boot up the electronic device.
Type: Grant
Filed: November 2, 2021
Date of Patent: May 6, 2025
Assignee: Marvell Asia Pte Ltd
Inventors: Ramacharan Sundararaman, Avinash Sodani
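A hypothetical, much-simplified model of the behavior described above (class and value names are invented): reads that miss the cache slice return a fixed value, writes that would reach the uninitialized region are dropped, and the boot data lives in the cache portion acting as SRAM.

```python
UNINIT_READ_VALUE = 0x00  # assumed value returned for reads of the uninitialized region

class CacheAsSram:
    """Toy model: a cache slice mapped over an uninitialized memory region."""
    def __init__(self, size):
        self.cache = {}   # cache lines used as SRAM
        self.size = size

    def write(self, addr, value):
        if addr < self.size:
            self.cache[addr] = value     # lands in the cache, not in DRAM
        # writes that would reach the uninitialized memory are silently dropped

    def read(self, addr):
        if addr in self.cache:
            return self.cache[addr]      # boot data held in the cache slice
        return UNINIT_READ_VALUE         # uninitialized memory returns a fixed value

# load boot code from a non-modifiable store (here just a byte string)
rom = bytes.fromhex("deadbeef")
sram = CacheAsSram(size=4096)
for offset, byte in enumerate(rom):
    sram.write(offset, byte)
print([hex(sram.read(i)) for i in range(6)])  # last two reads hit the "uninitialized" path
```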
-
Patent number: 12293190
Abstract: In a pipeline configured for out-of-order issuing, handling translation of virtual addresses to physical addresses includes: storing translations in a translation lookaside buffer (TLB), and updating at least one entry in the TLB based at least in part on an external instruction received from outside a first processor core. Managing external instructions includes: updating issue status information for each of multiple instructions stored in an instruction queue, and processing the issue status information in response to receiving a first external instruction to identify at least two instructions in the instruction queue, including a first queued instruction and a second queued instruction. An instruction for performing an operation associated with the first external instruction is inserted into a stage of the pipeline so that the operation associated with the first external instruction is committed before the first queued instruction is committed and after the second queued instruction is committed.
Type: Grant
Filed: September 29, 2020
Date of Patent: May 6, 2025
Assignee: Marvell Asia Pte, Ltd.
Inventors: Shubhendu Sekhar Mukherjee, David Albert Carlson, Michael Bertone
-
Patent number: 12289185
Abstract: Method and apparatus for providing an equalizer that achieves a given precision for non-invertible matrices. The equalizer receives a plurality of symbols of an uplink transmission in a wireless communication system and performs an equalization operation on the plurality of received symbols, wherein the equalization operation requires performing an inversion of a matrix. The equalization operation on the plurality of received symbols is completed within a user-specified precision, without adding any bit to the precision, when the matrix is non-invertible. A gain normalizer performs a gain normalization operation on the plurality of received symbols following the equalization operation, with certain values excluded from an IRC average of gain normalization factors used for the gain normalization operation.
Type: Grant
Filed: October 12, 2022
Date of Patent: April 29, 2025
Assignee: Marvell Asia Pte Ltd
Inventors: Sabih Guzelgoz, Hong Jik Kim, Fariba Heidari
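A rough numerical sketch under stated assumptions, not the claimed method: a pseudo-inverse stands in for however the patent actually bounds precision for a singular matrix, and an arbitrary threshold (eps) models the exclusion of certain values from the gain-normalization average.

```python
import numpy as np

def equalize(H, y, eps=1e-6):
    """Equalize received symbols y through channel H, tolerating a singular H."""
    try:
        W = np.linalg.inv(H)
    except np.linalg.LinAlgError:
        W = np.linalg.pinv(H)        # fall back when H is non-invertible
    x_hat = W @ y

    # gain normalization: average per-stream gains, excluding near-zero values
    gains = np.abs(np.diag(W @ H))
    usable = gains[gains > eps]      # excluded values do not affect the average
    avg_gain = usable.mean() if usable.size else 1.0
    return x_hat / avg_gain

H = np.array([[1.0, 2.0], [2.0, 4.0]])   # rank-deficient channel matrix
y = np.array([1.0, 2.0])
print(equalize(H, y))
```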
-
Patent number: 12288104
Abstract: A system and corresponding method consumerize cloud computing by incorporating consumer devices into the infrastructure of a cloud computing environment. The consumer device comprises a client job manager that spawns a processing task on the consumer device responsive to a job request to perform at least a portion of a computational job. The computational job is requested by an end user device to be performed via cloud computing. The consumer device further comprises a network interface. The job request is received via the network interface from a cloud job manager of a cloud service provider system of a cloud service provider. The processing task performs the at least a portion of the computational job. The consumer device is selected by the cloud job manager based, at least in part, on proximity of the consumer device to the end user device and at least one characteristic of the consumer device.
Type: Grant
Filed: April 30, 2021
Date of Patent: April 29, 2025
Assignee: Marvell Asia Pte Ltd
Inventor: Thiagarajan Muthuganesan
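A sketch of one plausible selection policy (proximity plus a capability check); the field names, distance metric, and scoring are illustrative assumptions, not the patented cloud job manager.

```python
def select_consumer_device(devices, end_user_location, required_cores):
    """Pick the closest consumer device that meets the job's requirements."""
    eligible = [d for d in devices if d["idle_cores"] >= required_cores]
    if not eligible:
        return None

    def distance(d):
        # proximity: simple squared distance to the end user device
        dx = d["location"][0] - end_user_location[0]
        dy = d["location"][1] - end_user_location[1]
        return dx * dx + dy * dy

    return min(eligible, key=distance)

devices = [
    {"name": "tv-livingroom", "location": (0, 1), "idle_cores": 4},
    {"name": "nas-basement",  "location": (5, 5), "idle_cores": 8},
]
print(select_consumer_device(devices, end_user_location=(0, 0), required_cores=2))
```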
-
Patent number: 12289256
Abstract: Link data is stored in a distributed link descriptor memory (“DLDM”) including memory instances storing protocol data unit (“PDU”) link descriptors (“PLDs”) or cell link descriptors (“CLDs”). Responsive to receiving a request for buffering a current transfer data unit (“TDU”) in a current PDU, a current PLD is accessed in a first memory instance in the DLDM. It is determined whether any data field designated to store address information in connection with a TDU is currently unoccupied within the current PLD. If no such data field is currently unoccupied within the current PLD, a current CLD is accessed in a second memory instance in the plurality of memory instances of the same DLDM. Current address information in connection with the current TDU is stored in an address data field within the current CLD.
Type: Grant
Filed: October 12, 2023
Date of Patent: April 29, 2025
Assignee: Marvell Asia Pte, Ltd.
Inventors: William Brad Matthews, Puneet Agarwal, Ajit Kumar Jain
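A simplified, hypothetical model of the descriptor-spill behavior (slot counts and names are invented): TDU addresses fill the PDU link descriptor first and overflow into a cell link descriptor held in another memory instance.

```python
PLD_ADDR_SLOTS = 4   # assumed number of address fields in a PDU link descriptor
CLD_ADDR_SLOTS = 8   # assumed number of address fields in a cell link descriptor

class PduLinkDescriptor:
    def __init__(self):
        self.addrs = []   # TDU addresses stored directly in the PLD
        self.cld = None   # spill descriptor, allocated on demand

def buffer_tdu(pld, tdu_addr):
    """Store a TDU address in the PLD if a slot is free, else in its CLD."""
    if len(pld.addrs) < PLD_ADDR_SLOTS:
        pld.addrs.append(tdu_addr)
    else:
        if pld.cld is None:
            pld.cld = []   # model of accessing a CLD in a second memory instance
        if len(pld.cld) >= CLD_ADDR_SLOTS:
            raise RuntimeError("PDU exceeds modeled descriptor capacity")
        pld.cld.append(tdu_addr)

pld = PduLinkDescriptor()
for addr in range(6):
    buffer_tdu(pld, addr)
print(pld.addrs, pld.cld)   # first four in the PLD, the rest spilled to the CLD
```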
-
Patent number: 12282658
Abstract: A system and corresponding method perform large memory transaction (LMT) stores. The system comprises a processor associated with a data-processing width and a processor accelerator. The processor accelerator performs an LMT store of a data set to a coprocessor in response to an instruction from the processor targeting the coprocessor. The data set corresponds to the instruction. The LMT store includes storing data from the data set, atomically, to the coprocessor based on an LMT line (LMTLINE). The LMTLINE is wider than the data-processing width. The processor accelerator sends, to the processor, a response to the instruction. The response is based on completion of the LMT store of the data set in its entirety. The processor accelerator enables the processor to perform useful work in parallel with the LMT store, thereby improving processing performance of the processor.
Type: Grant
Filed: February 21, 2024
Date of Patent: April 22, 2025
Assignee: Marvell Asia Pte Ltd
Inventors: Aadeetya Shreedhar, Jason D. Zebchuk, Wilson P. Snyder, II, Albert Ma, Joseph Featherston
-
Patent number: 12284059
Abstract: A data channel on an integrated circuit device includes a non-linear equalizer having as inputs digitized samples of signals on the data channel, decoding circuitry configured to determine from outputs of the non-linear equalizer a respective value of each of the signals, and adaptation circuitry configured to adapt parameters of the non-linear equalizer based on respective ones of the values. The non-linear equalizer includes a non-linear filter portion, and a front-end filter portion configured to reduce the number of inputs from the digitized samples. The non-linear equalizer may be a neural network equalizer, such as a multi-layer perceptron neural network equalizer, a reduced-complexity multi-layer perceptron neural network equalizer, or a radial-basis function neural network equalizer. Alternatively, the non-linear equalizer may include a linear filter and a non-linear activation function, which may be a hyperbolic tangent function.
Type: Grant
Filed: January 25, 2022
Date of Patent: April 22, 2025
Assignee: Marvell Asia Pte Ltd
Inventors: Nitin Nangare, Ahmed Medhat Ahmed Fahmi Eid Aboutaleb
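A toy forward pass of a one-hidden-layer perceptron equalizer with a tanh activation, only to illustrate the structure; the tap count, layer sizes, and random weights are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_equalizer(samples, w1, b1, w2, b2):
    """One hidden layer with tanh non-linearity, linear output (soft symbol)."""
    hidden = np.tanh(samples @ w1 + b1)   # non-linear filter portion
    return hidden @ w2 + b2               # soft decision fed to the decoder

n_taps, n_hidden = 8, 4                   # front-end keeps only 8 input samples
w1 = rng.normal(size=(n_taps, n_hidden)) * 0.1
b1 = np.zeros(n_hidden)
w2 = rng.normal(size=(n_hidden, 1)) * 0.1
b2 = np.zeros(1)

window = rng.normal(size=(1, n_taps))     # digitized samples from the channel
print(mlp_equalizer(window, w1, b1, w2, b2))
```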
-
Patent number: 12284122
Abstract: A circuit and corresponding method perform resource arbitration. The circuit comprises a pending arbiter (PA) that outputs a PA selection for accessing a resource. The PA selection is based on PA input. The PA input represents respective pending-state of requesters of the resource. The circuit further comprises a valid arbiter (VA) that outputs a VA selection for accessing the resource. The VA selection is based on VA input. The VA input represents respective valid-state of the requesters. The circuit performs a validity check on the PA selection output. The circuit outputs a final selection for accessing the resource by selecting, based on the validity check performed, the PA selection output or VA selection output. The circuit addresses arbitration fairness issues that may result when multiple requesters are arbitrating to be selected for access to a shared resource and such requesters require a credit (token) to be eligible for arbitration.
Type: Grant
Filed: February 6, 2024
Date of Patent: April 22, 2025
Assignee: Marvell Asia Pte Ltd
Inventors: Joseph Featherston, Aadeetya Shreedhar
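A small behavioral sketch of the two-arbiter scheme under stated assumptions (a trivial fixed-priority pick in place of whatever arbitration policy the circuit actually uses): the pending arbiter's choice is kept only if it passes the validity check, otherwise the valid arbiter's choice becomes the final selection.

```python
def first_set(bits):
    """Index of the lowest set bit, or None (a trivial fixed-priority arbiter)."""
    for i, b in enumerate(bits):
        if b:
            return i
    return None

def arbitrate(pending, valid):
    """pending[i] / valid[i]: requester i is pending / currently valid (has credit)."""
    pa_sel = first_set(pending)
    va_sel = first_set(valid)
    if pa_sel is not None and valid[pa_sel]:   # validity check on the PA selection
        return pa_sel
    return va_sel                              # fall back to the VA selection

pending = [0, 1, 0, 1]
valid   = [0, 0, 0, 1]   # requester 1 is pending but holds no credit
print(arbitrate(pending, valid))   # -> 3
```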
-
Patent number: 12284708
Abstract: A link between a PHY device and a link partner is established by performing link training. If the link becomes at least partially inoperable, a first fast retrain technique is executed. A signature indicating that the first fast retrain technique successfully began but failed during execution before completion is received at the PHY device. In response to receiving the signature, a second fast retrain technique is executed.
Type: Grant
Filed: January 28, 2022
Date of Patent: April 22, 2025
Assignee: Marvell Asia Pte Ltd
Inventors: Seid Alireza Razavi Majomard, Ehab Tahir
-
Patent number: 12284010
Abstract: Methods and apparatus for beamforming in MIMO systems are disclosed. In an embodiment, a method is provided that includes associating a plurality of signal-to-noise ratio (SNR) ranges with a plurality of precoding schemes, respectively, identifying groups of user equipment (UE) that have SNRs within each SNR range, and configuring downlink transmissions to each group of UE to use a precoding scheme associated with the SNR range of that group.
Type: Grant
Filed: March 1, 2023
Date of Patent: April 22, 2025
Assignee: Marvell Asia Pte, Ltd.
Inventors: Sabih Guzelgoz, Nagabhushana Kurapati
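One way such an SNR-to-precoder mapping could look in practice; the thresholds and scheme names below are invented for illustration and are not taken from the patent.

```python
# hypothetical SNR ranges (dB) mapped to precoding schemes
SNR_RANGES = [
    ((-float("inf"), 5.0), "conjugate_beamforming"),
    ((5.0, 15.0),          "zero_forcing"),
    ((15.0, float("inf")), "regularized_zero_forcing"),
]

def group_ues_by_precoder(ue_snrs):
    """Return {scheme: [UE ids]} so each group's downlink uses one precoder."""
    groups = {scheme: [] for _, scheme in SNR_RANGES}
    for ue_id, snr in ue_snrs.items():
        for (lo, hi), scheme in SNR_RANGES:
            if lo <= snr < hi:
                groups[scheme].append(ue_id)
                break
    return groups

print(group_ues_by_precoder({"ue0": 2.1, "ue1": 9.7, "ue2": 22.4}))
```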
-
Patent number: 12283997
Abstract: In an optical receiver apparatus for use with multiple optical modulation techniques, a photodiode circuit is configured to process optical signals corresponding to multiple optical modulation techniques, including a first modulation technique and a second modulation technique different from the first modulation technique. The photodiode circuit includes: a first photodiode configured to receive a first optical signal corresponding to the first modulation technique, and a multiple-input second photodiode coupled in series with the first photodiode. The multiple-input second photodiode is configured to receive i) a second optical signal corresponding to the first modulation technique, and ii) a third optical signal corresponding to the second modulation technique. An input of a transimpedance amplifier is coupled to the first photodiode and the second photodiode via a node between the first photodiode and the second photodiode.
Type: Grant
Filed: April 11, 2023
Date of Patent: April 22, 2025
Assignee: Marvell Asia Pte Ltd
Inventors: Tonmoy Shankar Mukherjee, Gary Mak, Masaki Kato, Lenin Kumar Patra, Radhakrishnan Nagarajan
-
Patent number: 12278798
Abstract: A cable assembly includes connectors and protection switching circuitry. A first connector connects the cable assembly to a first switch, which has a first network path to a first host device. A second connector connects the cable assembly to a second switch, which has a second network path to the first host device. A third connector is connected to the first and second connectors via a first cable and a second cable, respectively, and connects the cable assembly to a second host device. The protection switching circuitry is embedded in the cable assembly and: establishes a communications connection to transfer data between the host devices using a first data path, which includes the first network path, connector, cable, and switch; determines that the first data path is degraded; and, in response, switches the communications connection to a second data path, which includes the second network path, connector, cable, and switch.
Type: Grant
Filed: February 15, 2024
Date of Patent: April 15, 2025
Assignee: Marvell Asia Pte Ltd
Inventors: Whay Sing Lee, Arash Farhoodfar
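A behavioral sketch of the failover decision, with an illustrative bit-error-rate threshold and path names that are assumptions rather than the embedded circuitry's actual criteria: monitor the active data path and move the connection to the backup path when quality degrades.

```python
DEGRADED_BER = 1e-4   # assumed bit-error-rate threshold for "degraded"

class ProtectionSwitch:
    def __init__(self):
        self.active_path = "path_1"   # via the first cable/connector/switch

    def on_ber_sample(self, path, ber):
        """Switch the communications connection if the active path degrades."""
        if path == self.active_path and ber > DEGRADED_BER:
            self.active_path = "path_2" if path == "path_1" else "path_1"
        return self.active_path

psw = ProtectionSwitch()
print(psw.on_ber_sample("path_1", 1e-6))   # healthy, stay on path_1
print(psw.on_ber_sample("path_1", 5e-4))   # degraded, fail over to path_2
```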
-
Patent number: 12271593
Abstract: A memory device includes a plurality of memory cells. Each memory cell stores a plurality of signal levels representing a plurality of values corresponding to a respective plurality of bits, bits in corresponding respective positions of significance across the plurality of memory cells constituting respective memory pages of the memory device. The memory device also includes decoding circuitry to decode each bit value of one of the respective memory pages using bit values read from at least one other one of the respective memory pages, adjacent to the one of the respective memory pages. The plurality of signal levels may represent the plurality of values according to a Gray code. The decoding circuitry may be configured to compare each signal level to a set of voltage thresholds, and to decode a subset of the plurality of signal levels using fewer than all voltage thresholds in the set of voltage thresholds.
Type: Grant
Filed: April 28, 2023
Date of Patent: April 8, 2025
Assignee: Marvell Asia Pte Ltd
Inventors: Nirmal V. Shende, Nedeljko Varnica, Mats Oberg
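A toy 2-bit-per-cell illustration of the general idea (the Gray mapping, threshold voltages, and page labels are assumptions, not the patented scheme): one page is read with a single middle threshold, and the other page is resolved with only one additional threshold chosen from the first page's bit, i.e. using fewer than all thresholds.

```python
# Toy Gray mapping, level -> (page A bit, page B bit):
# L0=(1,1)  L1=(1,0)  L2=(0,0)  L3=(0,1)   (adjacent levels differ in one bit)
THRESHOLDS = (1.0, 2.0, 3.0)   # T0, T1, T2 separate the four signal levels

def read_page_a(voltage):
    """Page A needs only the middle threshold T1."""
    return 1 if voltage < THRESHOLDS[1] else 0

def read_page_b(voltage, page_a_bit):
    """Page B is decoded with one threshold chosen using page A's bit."""
    if page_a_bit == 1:                        # cell is at level 0 or 1
        return 1 if voltage < THRESHOLDS[0] else 0
    else:                                      # cell is at level 2 or 3
        return 0 if voltage < THRESHOLDS[2] else 1

for v in (0.5, 1.5, 2.5, 3.5):                 # one sample voltage per level
    a = read_page_a(v)
    print(v, (a, read_page_b(v, a)))           # recovers (1,1),(1,0),(0,0),(0,1)
```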
-
Patent number: 12273186
Abstract: A network includes a first plurality of nodes operating in a first clock domain based on a first clock source, a second plurality of nodes operating in a second clock domain based on a second clock source, and synchronization circuitry accessible to both of the clock domains without requiring network traffic between the clock domains. The synchronization circuitry is configured to periodically calculate a drift rate between the time of day in the respective clock domains. Each node in one of the clock domains is configured to, when sending a message to a node in the other of the clock domains, calculate a time of day in the other of the clock domains based on an actual time of day in the one of the clock domains and the drift rate, and to include, in the message to the node in the other clock domain, the calculated time of day.
Type: Grant
Filed: February 22, 2022
Date of Patent: April 8, 2025
Assignee: Marvell Asia Pte Ltd
Inventors: Olaf Mater, Lukas Reinbold, Xiongzhi Ning, Steffen Dolling
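A minimal sketch of the time-of-day conversion, assuming drift is modeled as a linear rate relative to a shared reference point (the function and parameter names are hypothetical):

```python
def remote_time_of_day(local_tod, reference_tod_local, reference_tod_remote, drift_rate):
    """Estimate the other clock domain's time of day from the local one.

    drift_rate: seconds of remote drift per local second, measured periodically
    by synchronization circuitry visible to both domains."""
    elapsed_local = local_tod - reference_tod_local
    return reference_tod_remote + elapsed_local * (1.0 + drift_rate)

# stamp a message for the other domain without cross-domain network traffic
print(remote_time_of_day(local_tod=1000.500,
                         reference_tod_local=1000.000,
                         reference_tod_remote=999.998,
                         drift_rate=2e-6))
```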
-
Patent number: 12273269
Abstract: A memory array circuit routes packet data to a destination within the array. The memory array includes memory devices arranged in a plurality of rows and columns, as well as passthrough channels connecting non-adjacent memory devices. Each of the memory devices includes a memory configured to store packet data, and a packet router configured to interface with at least one adjacent memory device of the memory array. The packet router determines a destination address for a packet and, based on the destination address, selectively forwards the packet to a non-adjacent memory device via a passthrough channel of the plurality of passthrough channels. A memory interface routes the packet from a source to the memory array and selectively forwards the packet to one of the plurality of memory devices based on the destination address.
Type: Grant
Filed: April 12, 2023
Date of Patent: April 8, 2025
Assignee: Marvell Asia Pte Ltd
Inventors: Nir Ofir, Robert Michael Bunce
-
Patent number: 12273248
Abstract: An integrated circuit (IC) for use in a network device includes a receiver and a Link Quality Estimation Circuit (LQEC). The receiver is configured to receive a signal over a link and to process the received signal. The LQEC is configured to predict a link quality measure indicative of future communication quality over the link, by analyzing at least one or more settings of circuitry of the receiver, and to initiate a responsive action depending on the predicted link quality measure.
Type: Grant
Filed: January 27, 2022
Date of Patent: April 8, 2025
Assignee: Marvell Asia Pte Ltd
Inventors: Venugopal Balasubramonian, Lenin Kumar Patra
-
Patent number: 12271737
Abstract: An instruction execution circuit operable to reduce two or more micro-operations into one by producing multiple permutation and merge results in one execution cycle. The execution circuit includes a permutation and merge switching fabric and a bank of multiplexers. For a fetched instruction, a decoder decodes an opcode to generate a set of control indications used to control the multiplexers to select bytes from the respective inputs that are destined for each of the multiple results. In this manner, multiple permutation results can be output from the execution circuit in one micro-operation.
Type: Grant
Filed: January 31, 2019
Date of Patent: April 8, 2025
Assignee: Marvell Asia Pte, Ltd.
Inventors: David Kravitz, David A. Carlson
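A software analogy, not the hardware design: two permutation-and-merge results are produced from the same pair of inputs in a single step, with the byte selections standing in for decoded control indications driving the multiplexers (the control format and values are illustrative).

```python
def permute_and_merge(src_a, src_b, controls):
    """controls: for each result, a list of (source, byte_index) picks,
    mimicking multiplexers selecting the bytes destined for that result."""
    sources = {"a": src_a, "b": src_b}
    return [bytes(sources[s][i] for s, i in picks) for picks in controls]

src_a = bytes(range(0x00, 0x08))   # 00 01 .. 07
src_b = bytes(range(0x10, 0x18))   # 10 11 .. 17

# two results emitted from one "micro-operation"
controls = [
    [("a", 0), ("b", 0), ("a", 1), ("b", 1)],   # interleave low bytes
    [("b", 7), ("b", 6), ("a", 7), ("a", 6)],   # merge and reverse high bytes
]
r0, r1 = permute_and_merge(src_a, src_b, controls)
print(r0.hex(), r1.hex())
```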
-
Patent number: 12274036
Abstract: An optical communication system includes a co-packaged optical module and a heatsink mounted to the co-packaged optical module. The co-packaged optical module includes a processor disposed on a substrate and a plurality of light engines disposed at different locations around the processor on the substrate. The processor and the light engines generate different amounts of heat during operation. The heatsink includes a plurality of heat pipes non-uniformly distributed throughout the heatsink to remove the different amounts of heat generated at the location of the processor and the respective locations of the different ones of the light engines.
Type: Grant
Filed: March 16, 2023
Date of Patent: April 8, 2025
Assignee: MARVELL ASIA PTE LTD
Inventors: Radhakrishnan L. Nagarajan, Liang Ding, Mark Patterson, Roberto Coccioli, Steve Aboagye