Patents Issued on August 20, 2024
  • Patent number: 12066937
    Abstract: Techniques for flushing metadata involve: receiving a flushing request, the flushing request instructing to flush metadata in at least one cache region to a persistent storage device; acquiring a plurality of target indicators, each target indicator at least indicating a type of a cache region and a block in the cache region, where the plurality of target indicators are classified based on the types of cache regions they indicate; determining, from the plurality of target indicators, at least one target indicator of the same type as the at least one cache region; and flushing metadata in a block indicated by the at least one target indicator. Such techniques avoid flushing cache regions that do not need to be flushed, shorten the response time to the flushing request, and reduce the occupancy of system resources.
    Type: Grant
    Filed: May 17, 2022
    Date of Patent: August 20, 2024
    Assignee: EMC IP Holding Company LLC
    Inventors: Ming Zhang, Chen Gong, Qiaosheng Zhou
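    A minimal Python sketch of the selection step this abstract describes: target indicators are grouped by the cache-region type they indicate, and only blocks whose indicators match the region types named in the flush request are flushed. The names (TargetIndicator, flush_block) and data shapes are illustrative assumptions, not the patent's terminology.

    ```python
    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TargetIndicator:
        region_type: str   # type of the cache region the indicator refers to
        block_id: int      # block within that cache region

    def flush_metadata(flush_request_types, indicators, flush_block):
        """Flush only blocks whose indicators match the requested region types.

        flush_request_types: set of cache-region types named in the flush request
        indicators: iterable of TargetIndicator
        flush_block: callable(region_type, block_id) that persists one block
        """
        # Classify indicators by the cache-region type they indicate.
        by_type = defaultdict(list)
        for ind in indicators:
            by_type[ind.region_type].append(ind)

        # Flush only the blocks belonging to the requested region types,
        # leaving unrelated cache regions untouched.
        for region_type in flush_request_types:
            for ind in by_type.get(region_type, []):
                flush_block(ind.region_type, ind.block_id)

    # Example: only the "mapping" region is flushed; "superblock" indicators are skipped.
    if __name__ == "__main__":
        inds = [TargetIndicator("mapping", 3), TargetIndicator("superblock", 7)]
        flush_metadata({"mapping"}, inds, lambda t, b: print(f"flush {t} block {b}"))
    ```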
  • Patent number: 12066938
    Abstract: A cache memory circuit that evicts cache lines based on which cache lines are storing background data patterns is disclosed. The cache memory circuit can store multiple cache lines and, in response to receiving a request to store a new cache line, can select a particular one of previously stored cache lines. The selection may be performed based on data patterns included in the previously stored cache lines. The cache memory circuit can also perform accesses where the internal storage arrays are not activated in response to determining data in the location specified by the requested address is background data. In systems employing virtual addresses, a translation lookaside buffer can track the location of background data in the cache memory circuit.
    Type: Grant
    Filed: July 27, 2023
    Date of Patent: August 20, 2024
    Assignee: Apple Inc.
    Inventor: Michael R. Seningen
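    A hedged sketch of the eviction choice described above: when a new line must be stored, a line whose contents match a background data pattern is preferred as the victim, since it can be dropped without a writeback. The all-zeros pattern and the LRU fallback are assumptions made for illustration.

    ```python
    def pick_victim(cache_lines, background=bytes(64)):
        """Pick a victim line for eviction.

        cache_lines: dict mapping line index -> (data: bytes, last_used: int)
        background: the background data pattern (assumed all zeros here)
        Returns the index of the line to evict.
        """
        # Prefer any line that currently stores the background pattern:
        # evicting it loses no information the pattern itself cannot reproduce.
        for idx, (data, _) in cache_lines.items():
            if data == background:
                return idx
        # Otherwise fall back to a conventional policy (least recently used).
        return min(cache_lines, key=lambda idx: cache_lines[idx][1])

    if __name__ == "__main__":
        lines = {0: (b"\x01" * 64, 10), 1: (bytes(64), 99), 2: (b"\x02" * 64, 5)}
        print(pick_victim(lines))  # -> 1, the all-zeros line, despite being recently used
    ```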
  • Patent number: 12066939
    Abstract: Examples described herein relate to a manner of demoting multiple cache lines to shared memory. In some examples, a shared cache is accessible by at least two processor cores and a region of the cache is larger than a cache line and is designated for demotion from the cache to the shared cache. In some examples, the cache line corresponds to a memory address in a region of memory. In some examples, an indication that the region of memory is associated with a cache line demote operation is provided in an indicator in a page table entry (PTE). In some examples, the indication that the region of memory is associated with a cache line demote operation is based on a command in an application executed by a processor. In some examples, the cache is a level 1 (L1) or level 2 (L2) cache.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: August 20, 2024
    Assignee: Intel Corporation
    Inventors: Rahul R. Shah, Omkar Maslekar, Priya Autee, Edwin Verplanke, Andrew J. Herdrich, Jeffrey D. Chamberlain
  • Patent number: 12066940
    Abstract: Data reuse cache techniques are described. In one example, a load instruction is generated by an execution unit of a processor unit. In response to the load instruction, data is loaded by a load-store unit for processing by the execution unit and is also stored to a data reuse cache communicatively coupled between the load-store unit and the execution unit. Upon receipt of a subsequent load instruction for the data from the execution unit, the data is loaded from the data reuse cache for processing by the execution unit.
    Type: Grant
    Filed: September 29, 2022
    Date of Patent: August 20, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Alok Garg, Neil N Marketkar, Matthew T. Sobel
  • Patent number: 12066941
    Abstract: Described are methods and a system for atomic memory operations with contended cache lines. A processing system includes at least two cores, each core having a local cache, and a lower level cache in communication with each local cache. One local cache is configured to request a cache line to execute an atomic memory operation (AMO) instruction, receive the cache line via the lower level cache, receive a probe downgrade due to another local cache requesting the cache line prior to execution of the AMO, and send the AMO instruction to the lower level cache for remote execution in response to the probe downgrade.
    Type: Grant
    Filed: October 6, 2022
    Date of Patent: August 20, 2024
    Assignee: SiFive, Inc.
    Inventors: John Ingalls, Wesley Waylon Terpstra, Henry Cook, Leigang Kou
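    A toy Python model of the local-versus-remote decision in this abstract: the local cache requests the line to run the AMO itself, but if a probe downgrade arrives before the AMO executes, the AMO is handed to the lower level cache for remote execution. Class and method names (LocalCache, execute_amo_remotely, and so on) are illustrative assumptions, and coherence bookkeeping is elided.

    ```python
    class LocalCache:
        """Toy model of the local-vs-remote AMO decision in a contended system."""

        def __init__(self, lower_level_cache):
            self.llc = lower_level_cache
            self.owned_lines = set()

        def execute_amo(self, addr, amo):
            # Request ownership of the line from the lower-level cache.
            self.llc.grant_line(addr, self)
            self.owned_lines.add(addr)

            # A probe downgrade may arrive before the AMO actually executes,
            # because another local cache asked for the same line.
            if self.llc.probe_downgrade_pending(addr, self):
                self.owned_lines.discard(addr)
                # Contended: hand the AMO to the lower-level cache for remote execution.
                return self.llc.execute_amo_remotely(addr, amo)

            # Uncontended: execute the AMO locally on the owned line
            # (the llc.memory dict stands in for the line's data in this sketch).
            old = self.llc.memory[addr]
            self.llc.memory[addr] = amo(old)
            return old


    class LowerLevelCache:
        def __init__(self):
            self.memory = {0x100: 5}
            self._contended = set()

        def grant_line(self, addr, requester):
            pass  # coherence bookkeeping omitted in this sketch

        def mark_contended(self, addr):
            self._contended.add(addr)

        def probe_downgrade_pending(self, addr, requester):
            return addr in self._contended

        def execute_amo_remotely(self, addr, amo):
            old = self.memory[addr]
            self.memory[addr] = amo(old)
            return old


    if __name__ == "__main__":
        llc = LowerLevelCache()
        core0 = LocalCache(llc)
        llc.mark_contended(0x100)                         # another core wants the line
        print(core0.execute_amo(0x100, lambda v: v + 1))  # AMO executed remotely -> 5
        print(llc.memory[0x100])                          # -> 6
    ```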
  • Patent number: 12066942
    Abstract: A multi-resolution cache includes first, second, and third cache segments, the first segment having a first resolution and the second and third segments having a second resolution that is less than the first resolution. The first and third cache segments are communicatively coupled to an off-chip memory and are each configured to receive a cache line of data having the first and second resolutions, respectively. Fourth and fifth cache segments have the second resolution. A first downscaler communicatively coupled to the first and fourth cache segments is configured to reduce the resolution when first-resolution cached data is shifted from the first cache segment to the fourth cache segment, and a first upscaler communicatively coupled to all cache segments that have the second resolution is configured to increase the reduced-resolution cached data to the first resolution and output it.
    Type: Grant
    Filed: April 3, 2023
    Date of Patent: August 20, 2024
    Assignee: V-Silicon Semiconductor (Hefei) Co., Ltd
    Inventors: Bahman Zafarifar, Jeroen Maria Kettenis
  • Patent number: 12066943
    Abstract: The present disclosure relates to the field of hardware chip design, and in particular to an alias processing method and system based on L1D-L2 caches and a related device. A method for solving the alias problem of an L1D cache based on an L1D cache-L2 cache structure, and a corresponding system module, are disclosed. The method can maximize hardware resource efficiency without limiting the chip structure, hardware system type, operating system compatibility, or chip performance, and the cache-based module does not significantly increase the power consumption of the whole system, giving the approach good expandability.
    Type: Grant
    Filed: November 20, 2023
    Date of Patent: August 20, 2024
    Assignee: Rivai Technologies (Shenzhen) Co., Ltd.
    Inventors: Muyang Liu, Rong Chen, Zhilei Yang
  • Patent number: 12066944
    Abstract: A coherency management device receives requests to read data from or write data to an address in a main memory. On a write, if the data includes zero data, an entry corresponding to the memory address is created in a cache directory if it does not already exist, is set to an invalid state, and indicates that the data includes zero data. The zero data is not written to main memory or a cache. On a read, the cache directory is checked for an entry corresponding to the memory address. If the entry exists in the cache directory, is invalid, and includes an indication that data corresponding to the memory address includes zero data, the coherency management device returns zero data in response to the request without fetching the data from main memory or a cache.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: August 20, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Vydhyanathan Kalyanasundharam, Amit P. Apte
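    A sketch of the zero-data shortcut described above, assuming a simple directory keyed by address: a write of all zeros only records an invalid, zero-marked directory entry and never touches main memory, and a read that hits such an entry returns zeros without a fetch.

    ```python
    class CoherencyDirectory:
        """Toy cache directory that tracks which addresses hold zero data."""

        def __init__(self, main_memory):
            self.main_memory = main_memory          # dict addr -> bytes
            self.entries = {}                       # addr -> {"state": str, "zero": bool}

        def write(self, addr, data):
            if data == bytes(len(data)):
                # Zero data: create/update a directory entry marked invalid + zero,
                # and skip writing the zeros to main memory or any cache.
                self.entries[addr] = {"state": "invalid", "zero": True}
                return
            self.entries[addr] = {"state": "modified", "zero": False}
            self.main_memory[addr] = data

        def read(self, addr, length):
            entry = self.entries.get(addr)
            if entry and entry["state"] == "invalid" and entry["zero"]:
                # Return zeros directly; no fetch from memory or caches is needed.
                return bytes(length)
            return self.main_memory.get(addr, bytes(length))


    if __name__ == "__main__":
        mem = {}
        d = CoherencyDirectory(mem)
        d.write(0x40, bytes(64))
        print(d.read(0x40, 64) == bytes(64), 0x40 in mem)  # True False
    ```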
  • Patent number: 12066945
    Abstract: An embodiment of an integrated circuit may comprise a core, a first level core cache memory coupled to the core, a shared core cache memory coupled to the core, a first cache controller coupled to the core and communicatively coupled to the first level core cache memory, a second cache controller coupled to the core and communicatively coupled to the shared core cache memory, and circuitry coupled to the core and communicatively coupled to the first cache controller and the second cache controller to determine if a workload has a large code footprint, and, if so determined, partition N ways of the shared core cache memory into first and second chunks of ways with the first chunk of M ways reserved for code cache lines from the workload and the second chunk of N minus M ways reserved for data cache lines from the workload, where N and M are positive integer values and N minus M is greater than zero. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: August 20, 2024
    Assignee: Intel Corporation
    Inventors: Prathmesh Kallurkar, Anant Vithal Nori, Sreenivas Subramoney
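    A sketch of the way-partitioning decision this abstract describes: when the workload is judged to have a large code footprint, the N ways of the shared cache are split into M ways reserved for code lines and N minus M ways reserved for data lines. The footprint threshold, the 50/50 split, and the function names are assumptions for illustration.

    ```python
    def partition_ways(n_ways, code_footprint_lines, large_code_threshold=4096,
                       code_fraction=0.5):
        """Return (code_ways, data_ways) index sets for a shared cache.

        If the workload's code footprint is not large, no partition is applied
        and every way may hold either code or data.
        """
        all_ways = set(range(n_ways))
        if code_footprint_lines < large_code_threshold:
            return all_ways, all_ways            # unpartitioned: any way for anything

        # Large code footprint: reserve the first M ways for code cache lines
        # and the remaining N - M ways for data cache lines (M and N - M > 0).
        m = max(1, min(n_ways - 1, int(n_ways * code_fraction)))
        code_ways = set(range(m))
        data_ways = all_ways - code_ways
        return code_ways, data_ways


    def ways_for_fill(is_code_line, code_ways, data_ways):
        """Choose the candidate ways for an incoming cache line."""
        return code_ways if is_code_line else data_ways


    if __name__ == "__main__":
        code_w, data_w = partition_ways(n_ways=8, code_footprint_lines=10_000)
        print(sorted(code_w), sorted(data_w))                        # [0..3] and [4..7]
        print(sorted(ways_for_fill(True, code_w, data_w)))           # code line -> code ways
    ```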
  • Patent number: 12066946
    Abstract: Embodiments are generally directed to methods and apparatuses for dynamically changing data priority in a cache. An embodiment of an apparatus comprises a priority controller to: receive a memory access request to request data; and set a priority flag for the memory access request based on an accumulated access amount of data stored in a memory block to be accessed by the memory access request, to dynamically change a priority level of the requested data.
    Type: Grant
    Filed: March 25, 2022
    Date of Patent: August 20, 2024
    Assignee: INTEL CORPORATION
    Inventors: Xiaodong Qiu, Yong Jiang, Changwon Rhee, Cui Tang, Shuangpeng Zhou, Lei Chen, Danyu Bi, Peiqing Jiang, Chengxi Wu
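    A sketch of the priority controller described above: accesses accumulate a per-block access amount, and the priority flag on a request is set once the accumulated amount for the targeted block crosses a threshold. The threshold, block size, and class name are assumptions.

    ```python
    from collections import defaultdict

    class PriorityController:
        """Sets a priority flag per request based on accumulated access amount."""

        def __init__(self, threshold_bytes=64 * 1024, block_size=4096):
            self.threshold = threshold_bytes
            self.block_size = block_size
            self.accumulated = defaultdict(int)   # block id -> bytes accessed so far

        def handle_request(self, addr, size):
            block = addr // self.block_size
            self.accumulated[block] += size
            # Hot block: flag the request so the cache keeps its data at high priority.
            high_priority = self.accumulated[block] >= self.threshold
            return {"addr": addr, "size": size, "priority": high_priority}


    if __name__ == "__main__":
        pc = PriorityController(threshold_bytes=8192)
        for _ in range(3):
            req = pc.handle_request(addr=0x1000, size=4096)
        print(req["priority"])   # True once 8 KiB of accesses accumulate on the block
    ```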
  • Patent number: 12066947
    Abstract: Systems, methods and apparatuses to control quality of service of a data storage device. For example, the data storage device receives an input data stream and provides an output data stream. Based at least in part on the input data stream and/or the output data stream, the data storage device determines a quality of service configuration using an artificial neural network. A controller of the data storage device uses the quality of service configuration to control operations of the data storage device that are relevant to quality of service of the data storage device. For example, the configuration identifies optimized strategies and parameters of caching or buffering, and optimized timing and frequency of background maintenance processes, such as garbage collection, wear leveling, etc.
    Type: Grant
    Filed: February 14, 2020
    Date of Patent: August 20, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Robert Richard Noel Bielby, Poorna Kale
  • Patent number: 12066948
    Abstract: Memories that are configurable to operate in either a banked mode or a bit-separated mode. The memories include a plurality of memory banks; multiplexing circuitry; input circuitry; and output circuitry. The input circuitry inputs at least a portion of a memory address and configuration information to the multiplexing circuitry. The multiplexing circuitry generates read data by combining a selected subset of data corresponding to the address from each of the plurality of memory banks, the subset selected based on the configuration information, if the configuration information indicates a bit-separated mode. The multiplexing circuitry generates the read data by combining data corresponding to the address from one of the memory banks, the one of the memory banks selected based on the configuration information, if the configuration information indicates a banked mode. The output circuitry outputs the generated read data from the memory.
    Type: Grant
    Filed: March 21, 2022
    Date of Patent: August 20, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventor: Russell J. Schreiber
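    A sketch of the read-path multiplexing in this abstract: in banked mode the read word comes from one bank selected by the address, while in bit-separated mode the word is assembled from a slice contributed by every bank, with the mode supplied as configuration information. The bank count, slice width, and data layout are assumptions.

    ```python
    def read_word(banks, addr, mode):
        """Return one read word from a list of memory banks.

        banks: list of lists; banks[b][addr] is that bank's data for the address
               (a full word in banked mode, a bit-slice in bit-separated mode)
        mode:  "banked" or "bit_separated" (the configuration information)
        """
        n = len(banks)
        if mode == "banked":
            # Low address bits select the bank; the rest index within the bank.
            bank, offset = addr % n, addr // n
            return banks[bank][offset]
        if mode == "bit_separated":
            # Combine the slice each bank holds for this address into one word.
            word = 0
            slice_bits = 8                      # assumed slice width per bank
            for b in range(n):
                word |= banks[b][addr] << (b * slice_bits)
            return word
        raise ValueError("unknown mode")


    if __name__ == "__main__":
        banks = [[0xAA, 0x11], [0xBB, 0x22], [0xCC, 0x33], [0xDD, 0x44]]
        print(hex(read_word(banks, 1, "banked")))          # 0xbb: word from bank 1
        print(hex(read_word(banks, 1, "bit_separated")))   # 0x44332211: one slice per bank
    ```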
  • Patent number: 12066949
    Abstract: Translated addresses of a memory device can be stored in a first LUT maintained by control circuitry. Untranslated addresses can be stored in a second LUT maintained by the control circuitry. In response to a translation request for a particular translated address of the memory device corresponding to a target untranslated address, an index of the second LUT associated with the target untranslated address can be determined, the index of the second LUT can be mapped to an index of the first LUT, and the particular translated address corresponding to the target untranslated address can be retrieved from the first LUT.
    Type: Grant
    Filed: December 3, 2021
    Date of Patent: August 20, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Chung Kuang Chin, Di Hsien Ngu, Horia C. Simionescu
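    A sketch of the two-LUT translation flow described above: the second LUT is searched for the target untranslated address, its index is mapped to an index of the first LUT, and the translated address is read from there. Representing the index mapping as a plain dictionary is an assumption for illustration.

    ```python
    class TwoLevelLut:
        """Translated addresses live in LUT1; untranslated addresses live in LUT2."""

        def __init__(self, lut1, lut2, index_map):
            self.lut1 = lut1            # list of translated (physical) addresses
            self.lut2 = lut2            # list of untranslated (logical) addresses
            self.index_map = index_map  # LUT2 index -> LUT1 index

        def translate(self, untranslated_addr):
            # 1) Find the index in LUT2 associated with the target untranslated address.
            idx2 = self.lut2.index(untranslated_addr)
            # 2) Map that LUT2 index to the corresponding LUT1 index.
            idx1 = self.index_map[idx2]
            # 3) Retrieve the translated address from LUT1.
            return self.lut1[idx1]


    if __name__ == "__main__":
        lut = TwoLevelLut(lut1=[0x9000, 0x7000, 0x5000],
                          lut2=[0x010, 0x020, 0x030],
                          index_map={0: 2, 1: 0, 2: 1})
        print(hex(lut.translate(0x020)))   # -> 0x9000
    ```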
  • Patent number: 12066950
    Abstract: An approach is provided for managing PIM commands and non-PIM commands at a memory controller. A memory controller enqueues PIM commands and non-PIM commands and selects the next command to process based upon various selection criteria. The memory controller maintains and uses a page table to properly configure memory elements, such as banks in a memory module, for the next memory command, whether a PIM command or a non-PIM command. The page table tracks the status of memory elements as of the most recent memory command that was issued. The page table includes an "All Banks" entry that indicates the status of banks after processing the most recent PIM command. For example, the All Banks entry indicates whether all the banks have a row open and, if so, specifies the open row for all the banks.
    Type: Grant
    Filed: December 23, 2021
    Date of Patent: August 20, 2024
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Niti Madan, John Kalamatianos
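    A speculative sketch of the page-table bookkeeping this abstract describes: per-bank entries record each bank's open row for ordinary commands, while a single All Banks entry records the row left open across every bank by the most recent PIM command, letting the controller decide whether the next command needs a row activate. The command representation and the needs_activate check are assumptions, not the patent's design.

    ```python
    class MemControllerPageTable:
        """Tracks open rows per bank, plus an 'All Banks' entry for PIM commands."""

        def __init__(self, num_banks):
            self.open_row = {b: None for b in range(num_banks)}  # per-bank open row
            self.all_banks_row = None   # row open in all banks after the last PIM command

        def issue(self, cmd):
            # cmd: {"pim": bool, "bank": int or None, "row": int}
            if cmd["pim"]:
                # A PIM command operates on every bank: record the row in the
                # All Banks entry and mirror it into each per-bank entry.
                self.all_banks_row = cmd["row"]
                for b in self.open_row:
                    self.open_row[b] = cmd["row"]
            else:
                self.open_row[cmd["bank"]] = cmd["row"]
                self.all_banks_row = None   # banks no longer uniformly open

        def needs_activate(self, cmd):
            """Does the next command need a row activate before it can issue?"""
            if cmd["pim"]:
                return self.all_banks_row != cmd["row"]
            return self.open_row[cmd["bank"]] != cmd["row"]


    if __name__ == "__main__":
        pt = MemControllerPageTable(num_banks=4)
        pt.issue({"pim": True, "bank": None, "row": 7})
        print(pt.needs_activate({"pim": True, "bank": None, "row": 7}))   # False
        print(pt.needs_activate({"pim": False, "bank": 2, "row": 9}))     # True
    ```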
  • Patent number: 12066951
    Abstract: A computer system includes physical memory devices of different types that store randomly-accessible data in memory of the computer system. In one approach, access to memory in an address space is maintained by an operating system of the computer system. A virtual page is associated with a first memory type. A page table entry is generated to map a virtual address of the virtual page to a physical address in a first memory device of the first memory type. The page table entry is used by a memory management unit to store the virtual page at the physical address.
    Type: Grant
    Filed: October 12, 2022
    Date of Patent: August 20, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Samuel E. Bradshaw, Justin M. Eno, Sean Stephen Eilert, Shivasankar Gunasekaran, Hongyu Wang, Shivam Swami
  • Patent number: 12066952
    Abstract: Provided is a data processing method, which includes: in response to a logical volume receiving a write request, determining whether a logical address carried in the write request is occupied by a data unit in the logical volume; if not, determining a data grain that is closest to, and greater than, the size of the data block; creating a new data unit in the logical volume using the logical address as the initial address and the closest data grain as the length, and recording the logical address range occupied by the data block in the new data unit; writing the data block into an underlying storage and returning the written physical address; and establishing and saving a mapping relationship between the initial address and the physical address.
    Type: Grant
    Filed: October 29, 2021
    Date of Patent: August 20, 2024
    Assignee: INSPUR SUZHOU INTELLIGENT TECHNOLOGY CO., LTD.
    Inventor: Yazhou Gang
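    A sketch of the grain selection and data-unit creation steps in this abstract: the smallest supported data grain that is greater than the data block is chosen, a new data unit starting at the write's logical address is sized to that grain, and the logical address range occupied by the block is recorded. The set of grain sizes is an assumption.

    ```python
    import bisect

    GRAIN_SIZES = [4 << 10, 64 << 10, 1 << 20, 4 << 20]   # assumed supported grains (bytes)

    def pick_grain(block_size):
        """Smallest supported grain strictly greater than the data block size."""
        i = bisect.bisect_right(GRAIN_SIZES, block_size)
        if i == len(GRAIN_SIZES):
            raise ValueError("block larger than the largest supported grain")
        return GRAIN_SIZES[i]

    def create_data_unit(logical_addr, block_size):
        """New data unit: starts at the write's logical address, sized to the grain."""
        grain = pick_grain(block_size)
        return {
            "unit_start": logical_addr,          # initial address of the new data unit
            "unit_length": grain,                # length = closest larger grain
            "block_range": (logical_addr, logical_addr + block_size),  # range the block occupies
        }

    if __name__ == "__main__":
        unit = create_data_unit(logical_addr=0x100000, block_size=48 << 10)
        print(unit["unit_length"] == 64 << 10, unit["block_range"])
    ```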
  • Patent number: 12066953
    Abstract: A memory management unit comprises an interface for receiving an address translation request from a device, the address translation request specifying a virtual address to be translated. Translation circuitry translates the virtual address into an intermediate address different from a physical address directly specifying a memory location. The interface provides an address translation response specifying the intermediate address to the device in response to the address translation request. This improves security by avoiding exposure of physical addresses to the device.
    Type: Grant
    Filed: August 6, 2021
    Date of Patent: August 20, 2024
    Assignee: Arm Limited
    Inventor: Matthew Lucien Evans
  • Patent number: 12066954
    Abstract: A secure demand paging system (1020) includes a processor (1030) operable for executing instructions, an internal memory (1034) for a first page in a first virtual machine context, an external memory (1024) for a second page in a second virtual machine context, and a security circuit (1038) coupled to the processor (1030) and to the internal memory (1034) for maintaining the first page secure in the internal memory (1034).
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: August 20, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: Steven C. Goss, Gregory Remy Philippe Conti, Narendar M. Shankar, Mehdi-Laurent Akkar, Aymeric Vial
  • Patent number: 12066955
    Abstract: Systems and methods for transferring data are disclosed herein. In an embodiment, a method of transferring data includes reading a plurality of bytes from a first memory, discarding first bytes of the plurality of bytes, realigning second bytes of the plurality of bytes, and storing the realigned second bytes in a second memory.
    Type: Grant
    Filed: December 30, 2021
    Date of Patent: August 20, 2024
    Assignee: HUGHES NETWORK SYSTEMS, LLC
    Inventors: Aneeshwar Danda, Robert H. Lager, Sahithi Vemuri
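    A sketch of the transfer described above: a run of bytes is read from the first memory, the leading unwanted bytes are discarded, and the remaining bytes are realigned to a boundary before being stored in the second memory. The 8-byte alignment and buffer representation are assumptions.

    ```python
    def transfer(first_memory, src_offset, length, discard_count,
                 second_memory, dst_offset, alignment=8):
        """Read bytes, drop the first `discard_count`, realign, store the rest.

        first_memory / second_memory: bytearray-like buffers
        """
        # 1) Read a plurality of bytes from the first memory.
        chunk = bytes(first_memory[src_offset:src_offset + length])

        # 2) Discard the first bytes (e.g., a header or partial word we do not want).
        kept = chunk[discard_count:]

        # 3) Realign the remaining bytes: place them at the next aligned position
        #    in the second memory so downstream consumers see aligned data.
        aligned_dst = (dst_offset + alignment - 1) // alignment * alignment

        # 4) Store the realigned bytes in the second memory.
        second_memory[aligned_dst:aligned_dst + len(kept)] = kept
        return aligned_dst, len(kept)


    if __name__ == "__main__":
        src = bytearray(b"HDRpayload-bytes")
        dst = bytearray(64)
        where, n = transfer(src, 0, len(src), discard_count=3,
                            second_memory=dst, dst_offset=5)
        print(where, dst[where:where + n])   # 8 bytearray(b'payload-bytes')
    ```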
  • Patent number: 12066956
    Abstract: A semiconductor device includes a controller circuit and a signal generating circuit. The controller circuit is coupled to a plurality of memory devices and configured to generate a plurality of chip enable signals. One of the chip enable signals is provided to one of the memory devices, so as to respectively enable the corresponding memory device. The signal generating circuit is disposed outside of the controller circuit and configured to receive the chip enable signals and generate a termination circuit enable signal according to the chip enable signals. The termination circuit enable signal is provided to the memory devices. When a state of any of the chip enable signals is set to an enabled state, a state of the termination circuit enable signal generated by the signal generating circuit is set to an enabled state.
    Type: Grant
    Filed: July 6, 2022
    Date of Patent: August 20, 2024
    Assignee: Realtek Semiconductor Corp.
    Inventor: Tsan-Lin Chen
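    A sketch of the signal-generating circuit's behavior as the abstract states it: the termination circuit enable is asserted whenever any chip enable signal is in an enabled state, which reduces to a logical OR over the chip enables. Active-high polarity is an assumption.

    ```python
    def termination_enable(chip_enables):
        """Termination-circuit enable for the memory devices.

        chip_enables: iterable of booleans, one per memory device (True = enabled).
        Asserted as soon as any chip enable is asserted (assumed active-high).
        """
        return any(chip_enables)

    if __name__ == "__main__":
        print(termination_enable([False, False, True, False]))   # True
        print(termination_enable([False, False, False, False]))  # False
    ```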
  • Patent number: 12066957
    Abstract: Memory controllers, devices, modules, systems and associated methods are disclosed. In one embodiment, an integrated circuit (IC) memory component is disclosed that includes a memory core, a primary interface, and a secondary interface. The primary interface includes data input/output (I/O) circuitry and control/address (C/A) input circuitry, and accesses the memory core during a normal mode of operation. The secondary interface accesses the memory core during a fault mode of operation.
    Type: Grant
    Filed: April 3, 2023
    Date of Patent: August 20, 2024
    Assignee: Rambus Inc.
    Inventors: Frederick A. Ware, Kenneth L. Wright
  • Patent number: 12066958
    Abstract: A memory controller includes a clock generator to generate a first clock signal and a timing circuit to generate a second clock signal from the first clock signal. The second clock signal times communications with any of a plurality of memory devices in respective ranks, including a first memory device in a first rank and a second memory device in a second rank. The timing circuit is configured to adjust a phase of the first clock signal, when the memory controller is communicating with the second memory device, based on calibration data associated with the second memory device and timing adjustment data associated with feedback from at least the first memory device.
    Type: Grant
    Filed: April 14, 2023
    Date of Patent: August 20, 2024
    Assignee: RAMBUS INC.
    Inventors: Jared L. Zerbe, Ian P. Shaeffer, John Eble
  • Patent number: 12066959
    Abstract: Techniques and mechanisms for determining a reference voltage which is to be provided with an integrated circuit (IC) die. In an embodiment, the IC die comprises a resistor, and a hardware interface which accommodates coupling of the IC die to a test unit. The test unit provides functionality to perform an evaluation of a resistance of the resistor, wherein said resistance is indicative of the respective resistances of one or more other resistors of the IC die. Based on the evaluation, the test unit provides to the IC die an indication of a scale factor, wherein the reference voltage is generated based on the scale factor. In another embodiment, the IC die further comprises an amplifier circuit which receives the reference voltage, wherein a variable resistance circuit of the IC die is configured based on an output of the amplifier circuit.
    Type: Grant
    Filed: May 12, 2022
    Date of Patent: August 20, 2024
    Assignee: Intel Corporation
    Inventors: Vijayalakshmi Ramachandran, Mingming Xu, Dror Lazar
  • Patent number: 12066960
    Abstract: Systems, devices, and methods for direct memory access. A system direct memory access (SDMA) device disposed on a processor die sends a message which includes physical addresses of a source buffer and a destination buffer, and a size of a data transfer, to a data fabric device. The data fabric device sends an instruction which includes the physical addresses of the source and destination buffers, and the size of the data transfer, to first agent devices. Each of the first agent devices reads a portion of the source buffer from a memory device at the physical address of the source buffer. Each of the first agent devices sends the portion of the source buffer to one of the second agent devices. Each of the second agent devices writes the portion of the source buffer to the destination buffer.
    Type: Grant
    Filed: December 27, 2021
    Date of Patent: August 20, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Vydhyanathan Kalyanasundharam, Narendra Kamat
  • Patent number: 12066961
    Abstract: A method for improving reliability of a storage system and a related apparatus, where the storage system includes a first control device and a second control device. The method includes receiving, by a target controller, a write request, where the write request includes to-be-written data, and the target controller belongs to the first control device; writing, by the target controller, the to-be-written data into a memory of the target controller; and writing, by the target controller, the to-be-written data into a memory of a mirror controller of the target controller, where at least one mirror controller belongs to the second control device.
    Type: Grant
    Filed: January 21, 2022
    Date of Patent: August 20, 2024
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Ping Lin, Jian Xiao, Bin Wang
  • Patent number: 12066962
    Abstract: A device includes a master device, a set of slave devices, and a bus. The master device is configured to transmit first messages carrying a set of operation data message portions indicative of operations for implementation by slave devices of the set of slave devices, and second messages addressed to slave devices in the set of slave devices. The second messages convey identifiers identifying respective ones of the slave devices to which the second messages are addressed, requesting respective reactions towards the master device within respective expected reaction intervals. The slave devices are configured to receive the first messages transmitted from the master device, read respective operation data message portions in the set of operation data message portions, implement respective operations as a function of the respective operation data message portions read, and receive the second messages transmitted from the master device.
    Type: Grant
    Filed: April 28, 2023
    Date of Patent: August 20, 2024
    Assignees: STMicroelectronics Application GMBH, STMicroelectronics Design & Application S.R.O.
    Inventors: Fred Rennig, Ludek Beran
  • Patent number: 12066963
    Abstract: A universal serial bus (USB) server includes USB connectors. Each USB connector is configured to interface via USB to an endpoint server. The server includes a terminal manager configured to issue a command to a first endpoint server via a selected one of the USB connectors. The selected USB connector is associated with and connected to the first endpoint server. The terminal manager is further configured to determine whether a response has been received to the command, and, based on a determination that no response has been received to the command, attempt to power up the first endpoint server through the selected one of the USB connectors.
    Type: Grant
    Filed: September 21, 2022
    Date of Patent: August 20, 2024
    Assignee: SOFTIRON LIMITED
    Inventors: Phillip Edward Straw, Stephen Hardwick
  • Patent number: 12066964
    Abstract: A system includes a rack with multiple hardware acceleration devices and multiple modular controllers coupled together into a single system implementing one or more servers. Each modular hardware acceleration device includes multiple hardware accelerators, such as graphical processing units, field programmable gate arrays or other specialized processing circuits. In each modular hardware acceleration device, hardware accelerators are communicatively coupled to a multi-port connection device, such as a switch, and also communicatively coupled to at least two external ports. A modular controller of a particular server coordinates operation of hardware accelerators of multiple hardware acceleration devices included in the particular server to provide advanced processing capabilities. Hardware accelerators may be dynamically assigned to particular processing servers to adjust processing capabilities of those servers.
    Type: Grant
    Filed: December 10, 2021
    Date of Patent: August 20, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Diwakar Urjan Anandakumar, Bassam Abdel-Dayem
  • Patent number: 12066965
    Abstract: Data are serially communicated over an interconnect between an encoder and a decoder. The encoder includes a first training unit to count a frequency of symbol values in symbol blocks of a set of N number of symbol blocks in an epoch. A circular shift unit of the encoder stores a set of most-recently-used (MRU) amplitude values. An XOR unit is coupled to the first training unit and the first circular shift unit as inputs and to the interconnect as output. A transmitter is coupled to the encoder XOR unit and the interconnect and thereby contemporaneously sends symbols and trains on the symbols. In a system, a device includes a receiver and decoder that receive, from the encoder, symbols over the interconnect. The decoder includes its own training unit for decoding the transmitted symbols.
    Type: Grant
    Filed: April 30, 2020
    Date of Patent: August 20, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: SeyedMohammad Seyedzadehdelcheh, Steven Raasch, Sergey Blagodurov
  • Patent number: 12066966
    Abstract: A device is configured to receive a set of bits for a first package from a previous device of a plurality of devices connected in a daisy chain configuration, the set of bits for the first package including a first priority value, and, if the device does not have a second package for output to a destination device of the plurality of devices, to output the set of bits for the first package to a subsequent device of the plurality of devices. When the device has the second package for output to the destination device of the plurality of devices, the device is configured to determine whether the second priority value is higher than the first priority value and, if the second priority value is higher than the first priority value, to output a set of bits for the second package to the subsequent device of the plurality of devices.
    Type: Grant
    Filed: January 13, 2022
    Date of Patent: August 20, 2024
    Assignee: Infineon Technologies AG
    Inventor: Ewald Frensch
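    A sketch of the forwarding rule this abstract describes: a device in the daisy chain relays the incoming package unless it holds its own package for the destination with a higher priority value, in which case its own package is output instead. Treating a larger value as higher priority is an assumption about the encoding.

    ```python
    def forward(incoming_bits, incoming_priority, own_bits=None, own_priority=None):
        """Decide which package a daisy-chained device outputs to the next device.

        Returns the (bits, priority) pair that continues down the chain.
        Assumes a larger priority value means higher priority.
        """
        # No package of our own for the destination: just relay the incoming package.
        if own_bits is None:
            return incoming_bits, incoming_priority
        # We have a package: it preempts the incoming one only if strictly higher priority.
        if own_priority > incoming_priority:
            return own_bits, own_priority
        return incoming_bits, incoming_priority


    if __name__ == "__main__":
        print(forward(b"pkg-A", 2))                    # relays package A
        print(forward(b"pkg-A", 2, b"pkg-B", 5))       # B preempts A
        print(forward(b"pkg-A", 7, b"pkg-B", 5))       # A keeps priority
    ```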
  • Patent number: 12066967
    Abstract: Systems and methods of communicating in a network use a physical device. The physical device includes hardware including a management data input/output interface and firmware configured to cause the hardware to provide a logical message interface using the management data input/output interface. The logical message interface is used to receive messages for configuring and/or managing the physical device.
    Type: Grant
    Filed: January 31, 2022
    Date of Patent: August 20, 2024
    Assignee: Avago Technologies International Sales Pte. Limited
    Inventor: Sathish Kumar Reddy Yenna
  • Patent number: 12066968
    Abstract: A communication interface structure and a Die-to-Die package are provided. The communication interface structure includes first bumps arranged in a first row-column configuration, second bumps arranged in a second row-column configuration, and conductive lines disposed between the first bumps and the second bumps to connect each of the first bumps to each of the second bumps. The first bumps in neighboring rows are alternately shifted with each other. The second bumps are disposed under or over the first bumps, wherein each of the second bumps in even rows is at a position shifted in a column direction from a center of each of the first bumps in the even rows, and each of the second bumps in odd rows is at a position between two of the second bumps in the even rows in the column direction.
    Type: Grant
    Filed: July 13, 2022
    Date of Patent: August 20, 2024
    Assignees: Global Unichip Corporation, Taiwan Semiconductor Manufacturing Company, Ltd.
    Inventors: Sheng-Fan Yang, Chih-Chiang Hung, Yuan-Hung Lin, Shih-Hsuan Hsu, Igor Elkanovich
  • Patent number: 12066969
    Abstract: Embodiments herein describe using an adaptive chip-to-chip (C2C) interface to interconnect two chips, wherein the adaptive C2C interface includes circuitry for performing multiple different C2C protocols to communicate with the other chip. One or both of the chips in the C2C connection can include the adaptive C2C interface. During boot time, the adaptive C2C interface is configured to perform one of the different C2C protocols. During runtime, the chip then uses the selected C2C protocol to communicate with the other chip in the C2C connection.
    Type: Grant
    Filed: January 31, 2022
    Date of Patent: August 20, 2024
    Assignee: XILINX, INC.
    Inventors: Krishnan Srinivasan, Sagheer Ahmad, Ygal Arbel, Millind Mittal
  • Patent number: 12066970
    Abstract: In one embodiment, a method includes connecting, via a first interface of a controller card, a multiplexer of the controller card to a central processing unit (CPU) of the controller card. The method also includes connecting, via an interface of a first remote card, the multiplexer of the controller card to the first remote card. The method further includes interconnecting, by the multiplexer, the first interface of the controller card to the interface of the first remote card.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: August 20, 2024
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: Mridul Bajpai, Hsi-Wen Chen, Mete Yilmaz
  • Patent number: 12066971
    Abstract: A network interface peripheral device (NIP) may include a network interface for communicating with a network, and an interconnect interface for communicating with a processor subsystem. First buffers in the NIP may hold data received from and/or distributed to peer peripherals by the NIP, and second buffers may hold payload data of scheduled data streams transmitted to and/or received from the network by the NIP. Payload data from the data in the first buffers may be stored in the second buffers and transmitted to the network according to transmit events generated based on a received schedule. Data may be received from the network according to receive events generated based on the received schedule, and distributed from the second buffers to the first buffers. A centralized system configuration entity may generate the schedule, manage configuration of the NIP, and coordinate the internal configuration of the NIP with a network configuration flow.
    Type: Grant
    Filed: February 11, 2022
    Date of Patent: August 20, 2024
    Assignee: National Instruments Corporation
    Inventors: Sundeep Chandhoke, Glen O. Sescila, III, Rafael Castro Scorsi
  • Patent number: 12066972
    Abstract: The communication device 111 included in the active cable comprises a controller 11, a comparator 12, a resistor 13, a voltage source 14, and a redriver 16. The comparator 12 receives the voltage value of the SBU signal line and the reference voltage value output from the voltage source 14, and compares the voltage value of the SBU signal line with the reference voltage value to detect the level of the sideband signal. The controller 11 receives the detection result of the sideband signal level from the comparator 12, and sets the redriver 16, which is an active device, to the low-power-consumption state when the sideband signal level stays at L level for a predetermined period of time or longer.
    Type: Grant
    Filed: August 11, 2021
    Date of Patent: August 20, 2024
    Assignee: THINE ELECTRONICS, INC.
    Inventor: Yusuke Fujita
  • Patent number: 12066973
    Abstract: A computer system that includes at least one host device comprising at least one processor. The at least one processor is configured to implement, in a host operating system (OS) space, a teamed network interface card (NIC) software program that provides a unified interface to host OS space upper layer protocols including at least a remote direct memory access (RDMA) protocol and an Ethernet protocol. The teamed NIC software program provides multiplexing for at least two data pathways. The at least two data pathways include an RDMA data pathway that transmits communications to and from an RDMA interface of a physical NIC, and an Ethernet data pathway that transmits communications to and from an Ethernet interface of the physical NIC through a virtual switch that is implemented in a host user space and a virtual NIC that is implemented in the host OS space.
    Type: Grant
    Filed: June 4, 2021
    Date of Patent: August 20, 2024
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventor: Omar Cardona
  • Patent number: 12066974
    Abstract: An information handling system may include a processor and non-transitory computer-readable media communicatively coupled to the processor and having stored thereon a program of instructions configured to, when read and executed by the processor, perform data collection to retrieve hardware information regarding a second information handling system and analyze the hardware information to determine one or more recommended purposes for the second information handling system.
    Type: Grant
    Filed: January 12, 2022
    Date of Patent: August 20, 2024
    Assignee: Dell Products L.P.
    Inventors: Venkatesan K, Latchumi K, Suren Kumar
  • Patent number: 12066975
    Abstract: Embodiments are generally directed to cache structure and utilization. An embodiment of an apparatus includes one or more processors including a graphics processor; a memory for storage of data for processing by the one or more processors; and a cache to cache data from the memory; wherein the apparatus is to provide for dynamic overfetching of cache lines for the cache, including receiving a read request and accessing the cache for the requested data, and upon a miss in the cache, overfetching data from memory or a higher level cache in addition to fetching the requested data, wherein the overfetching of data is based at least in part on a current overfetch boundary and provides for data to be prefetched extending to the current overfetch boundary.
    Type: Grant
    Filed: March 14, 2020
    Date of Patent: August 20, 2024
    Assignee: INTEL CORPORATION
    Inventors: Altug Koker, Lakshminarayanan Striramassarma, Aravindh Anantaraman, Valentin Andrei, Abhishek R. Appu, Sean Coleman, Varghese George, K Pattabhiraman, Mike MacPherson, Subramaniam Maiyuran, ElMoustapha Ould-Ahmed-Vall, Vasanth Ranganathan, Joydeep Ray, S Jayakrishna P, Prasoonkumar Surti
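    A speculative sketch of dynamic overfetching as described above: on a cache miss the requested line is fetched and additional lines are overfetched up to the current overfetch boundary; here the boundary grows when overfetched lines turn out to be used, which is an assumed adaptation policy rather than the patent's.

    ```python
    class OverfetchCache:
        """Toy cache that overfetches extra lines up to a dynamic boundary on a miss."""

        LINE = 64

        def __init__(self, backing, boundary_lines=4):
            self.backing = backing                 # callable(line_addr) -> data
            self.lines = {}                        # line_addr -> data
            self.boundary_lines = boundary_lines   # current overfetch boundary
            self.overfetched = set()               # lines brought in speculatively

        def read(self, addr):
            line = addr - (addr % self.LINE)
            if line in self.lines:
                if line in self.overfetched:       # an overfetched line proved useful:
                    self.overfetched.discard(line) # grow the boundary (assumed policy)
                    self.boundary_lines = min(self.boundary_lines + 1, 16)
                return self.lines[line]

            # Miss: fetch the requested line ...
            self.lines[line] = self.backing(line)
            # ... and overfetch the following lines out to the current boundary.
            for i in range(1, self.boundary_lines):
                extra = line + i * self.LINE
                if extra not in self.lines:
                    self.lines[extra] = self.backing(extra)
                    self.overfetched.add(extra)
            return self.lines[line]


    if __name__ == "__main__":
        cache = OverfetchCache(backing=lambda a: f"data@{a:#x}", boundary_lines=4)
        cache.read(0x1000)            # miss: fetches 0x1000 and overfetches 3 more lines
        print(cache.read(0x1040))     # hit on an overfetched line; boundary grows to 5
        print(cache.boundary_lines)
    ```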
  • Patent number: 12066976
    Abstract: This invention provides a generalized electronic computer architecture with multiple cores, memory distributed amongst the cores (a core-local memory). This arrangement provides predictable, low-latency memory response time, as well as a flexible, code-supplied flow of memory from one specific operation to another (using an operation graph). In one instantiation, the operation graph consists of a set of math operations, each accompanied by an ordered list of one or more input addresses. Input addresses may be specific addresses in memory, references to other math operations in the graph, or references to the next item in a particular data stream, where data streams are iterators through a continuous block of memory. The arrangement can also be packaged as a PCIe daughter card, which can be selectively plugged into a host server/PC constructed/organized according to traditional von Neumann architecture.
    Type: Grant
    Filed: August 28, 2023
    Date of Patent: August 20, 2024
    Assignee: The Trustees of Dartmouth College
    Inventors: Elijah F. W. Bowen, Richard H. Granger, Jr.
  • Patent number: 12066977
    Abstract: Embodiments include a content collaboration system that can be configured to display a hierarchical document tree that includes graphical objects corresponding to content items hosted by the content collaboration system. The collaboration system can receive a selection of a graphical object corresponding to a content item for archiving, and in response, generate a first updated hierarchical relationship that includes the archived content item and generate a second updated hierarchical relationship that excludes the archived content item. The collaboration system can construct a first hierarchical document tree instance based on the first updated hierarchical relationship for displaying the graphical objects for the first user account and construct a second hierarchical document tree instance based on the second updated hierarchical relationship for displaying the graphical objects for a second user account.
    Type: Grant
    Filed: December 22, 2021
    Date of Patent: August 20, 2024
    Assignees: ATLASSIAN PTY LTD., ATLASSIAN US, INC.
    Inventors: Thirumalaivel Alagianambi, Shaziya Tambawala, Puneet Jain, Ali Dasdan
  • Patent number: 12066978
    Abstract: The present disclosure relates to linking electronic activities between systems of record based on a comparison of electronic activity signals and system of record signals. Indexed files can be generated for each of a plurality of record objects of a system of record. An electronic activity may be accessed. A search query may be generated. Match scores for the record objects may be generated. An association between an electronic activity and a record object may be stored. Instructions to link to the electronic activity to the record object may be transmitted.
    Type: Grant
    Filed: December 30, 2022
    Date of Patent: August 20, 2024
    Inventors: Sergey Surkov, Mykola Pavlov, Andrii Kvachov
  • Patent number: 12066979
    Abstract: In some embodiments, a meta-data inspection data store may contain hierarchical components and subcomponents of an industrial asset and define points of interest. An industrial asset inspection platform may access that information and generate an inspection plan, including an association of at least one sensor type with each of the points of interest. The platform may then store information about the inspection plan in an inspection plan data store and receive inspection data (e.g., from a manual inspection, from an inspection robot, from a fixed sensor, etc.). A smart tagging algorithm may be executed to associate at least one point of interest with an appropriate portion of the received inspection data based on information in the inspection plan data store.
    Type: Grant
    Filed: November 9, 2020
    Date of Patent: August 20, 2024
    Assignee: General Electric Company
    Inventors: Alok Gupta, John Spirtos, Robert Schwaber, Andrew Chappell, Ashish Jain, Alex Tepper
  • Patent number: 12066980
    Abstract: Aspects for remote analysis of file system metadata are described. In an example, a computer-readable file from a client system is received. The computer-readable file comprises file system metadata of a file system, and corresponding source location of the file system metadata on a volume of the client system. Thereafter, a target location on a target volume is identified, wherein the target location corresponds to the source location on the volume of the client system. In an example, the file system metadata is replicated onto the target location based on the computer-readable file, for analysis.
    Type: Grant
    Filed: April 19, 2021
    Date of Patent: August 20, 2024
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Anand Andaneppa Ganjihal, Ankit Gupta
  • Patent number: 12066981
    Abstract: A data ingestion system prevents data duplication during a data ingestion operation by determining whether a current instance of an ingestion pending indicator file associated with a data set is present at initialization of the data ingestion operation. Upon determining that no current instance of the ingestion pending indicator file is present, the system generates and stores a new ingestion pending indicator file and performs the data ingestion operation using current watermark data. Upon determining that a current instance of the ingestion pending indicator file is present, the system generates corrected watermark data and performs the data ingestion operation with respect to the corrected watermark data. Upon completion of the data ingestion operation, the system deletes the current instance of the ingestion pending indicator file.
    Type: Grant
    Filed: April 20, 2023
    Date of Patent: August 20, 2024
    Assignee: Honeywell International Inc.
    Inventors: Nikhil Bansal, Saurabh Jaiswal, Arnab Bhattacharjee
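    A sketch of the duplicate-prevention flow in this abstract: if no ingestion pending indicator file exists, one is created and ingestion runs from the current watermark data; if one is already present, a prior run did not complete, so corrected watermark data is derived first; the indicator is deleted on completion. The file names and the correction callback are assumptions.

    ```python
    import json
    from pathlib import Path

    def run_ingestion(data_dir, ingest, correct_watermark):
        """Run one data ingestion pass without duplicating previously ingested data.

        ingest: callable(watermark) -> new_watermark, performs the actual ingestion
        correct_watermark: callable(stale_watermark) -> corrected watermark
        """
        data_dir = Path(data_dir)
        pending = data_dir / "ingestion.pending"     # assumed indicator file name
        watermark_file = data_dir / "watermark.json"

        watermark = json.loads(watermark_file.read_text()) if watermark_file.exists() else None

        if pending.exists():
            # A previous run left its indicator behind: it did not complete, so the
            # stored watermark may be stale. Derive corrected watermark data first.
            watermark = correct_watermark(watermark)
        else:
            # Normal case: record that an ingestion is now pending, then use the
            # current watermark data as-is.
            pending.write_text("pending")

        new_watermark = ingest(watermark)

        # Completion: persist the new watermark and delete the pending indicator.
        watermark_file.write_text(json.dumps(new_watermark))
        pending.unlink(missing_ok=True)
        return new_watermark


    if __name__ == "__main__":
        import tempfile
        with tempfile.TemporaryDirectory() as d:
            wm = run_ingestion(d, ingest=lambda w: {"rows": 100},
                               correct_watermark=lambda w: w)
            print(wm)   # {'rows': 100}
    ```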
  • Patent number: 12066982
    Abstract: A computer system provides shared access to electronic data assets. The system may perform operations including: receiving, from a first user, a request to access a shared data asset, wherein: the shared data asset is associated with a shared data asset object, and the shared data asset object identifies at least a second user authorized to approve sharing of the shared data asset; in response to receiving the request from the first user: generating a data access request object including at least an identification of the first user and an identification of the shared data asset object; and providing an indication of the data access request object to the second user associated with the shared data asset object; receiving, from the second user, an approval of the request; and in response to receiving the approval of the request from the second user: granting the first user access to the shared data asset associated with the shared data asset object.
    Type: Grant
    Filed: August 18, 2022
    Date of Patent: August 20, 2024
    Assignee: Palantir Technologies Inc.
    Inventors: Alexandra Greehy, Craig Massie, Alexander Bell-Thomas, Helena Kertesz, Mihai Condur, Nicolas Prettejohn, Pieris Christofi, Sam Stoll
  • Patent number: 12066983
    Abstract: Methods, systems, and devices supporting managing a data processing flow are described. A device (e.g., an application server) may host a cloud-based collaboration application, such as an interactive document application. The device may receive an instance of a data processing flow for a flow application based on a first user input to the cloud-based collaboration application. The device may receive the instance of the data processing flow from a source device hosting the flow application. The device may embed the flow application in the cloud-based collaboration application. The device may then receive user inputs to the data processing flow from multiple users collaborating on the same flow in the cloud-based collaboration application. Based on the user inputs, the device may modify the instance of the data processing flow and transmit the modified instance back to the source device to synchronize the data processing flow in the flow application.
    Type: Grant
    Filed: January 23, 2023
    Date of Patent: August 20, 2024
    Assignee: Salesforce, Inc.
    Inventors: Kongposh Sapru, Joshua Goodman, Alexander John Trzeciak
  • Patent number: 12066984
    Abstract: Compact size, extensibility, and built-in security are provided by enclosing into a file's header custom specifications and preventing file execution without knowing these specifications. The format allows for defined sections, organizing preliminary pre-processing of data before operating system (OS) execution. A file header, including standard and user-defined sections, is created and read by delegated processing; forming an executable file's header with inclusive specifications using the abstract data syntax description language (ASN.1); encoding the header with compression encoding rules (PER); and creating a separate header section of an interfaces table for components. Program assembly output includes an executable file in machine and/or byte code with a dynamic extensible header encoded according to ASN.1 with PER.
    Type: Grant
    Filed: November 25, 2020
    Date of Patent: August 20, 2024
    Assignee: LIMITED LIABILITY COMPANY “PEERF”
    Inventors: Nikolay Olegovich Ilyin, Vladimir Nikolaevich Bashev
  • Patent number: 12066985
    Abstract: A method is provided, comprising: receiving, at a source system, a first copy instruction, the first copy instruction being associated with a token that represents one or more data items, the first copy instruction instructing the source system to copy the one or more data items from a first volume to a second volume; in response to the first copy instruction, retrieving one or more hash digests from a snapshot that is associated with the token, each of the one or more hash digests being associated with a different one of the one or more data items; and transmitting, to a target system, a second copy instruction that is associated with the one or more hash digests, the second copy instruction instructing the target system to copy the one or more data items to a replica of the second volume that is stored at the target system.
    Type: Grant
    Filed: May 1, 2020
    Date of Patent: August 20, 2024
    Assignee: EMC IP Holding Company LLC
    Inventors: Xiangping Chen, David Meiri
  • Patent number: 12066986
    Abstract: Disclosed are systems, apparatus, methods, and computer readable media for suppressing network feed activities using an information feed in an on-demand database service environment. In one embodiment, a message is received, including data indicative of a user action. An entity associated with the user action is identified, where the entity is a type of record stored in a database. A type of the entity is identified. It is determined whether the entity type is a prohibited entity type. When the entity type is not a prohibited entity type, the message data is saved to one or more tables in the database. The tables are configured to store feed items of an information feed capable of being displayed on a device. When the entity type is a prohibited entity type, the saving of the message data, to the one or more tables in the database configured to store the feed items, is prohibited.
    Type: Grant
    Filed: September 21, 2022
    Date of Patent: August 20, 2024
    Assignee: Salesforce, Inc.
    Inventors: William Gradin, Matthew Davidchuk, Qiu Ma, Leonid Zemskov, Amy Palke