Patents Issued on February 20, 2024
-
Patent number: 11907122
Abstract: The disclosure relates to technology for up-evicting cache lines. An apparatus comprises a hierarchy of caches comprising a first cache having a first cache controller and a second cache having a second cache controller. The first cache controller is configured to store cache lines evicted from a first processor group to the first cache and to down-evict cache lines from the first cache to the second cache. The second cache controller is configured to store cache lines evicted from a second processor group into the second cache, to up-evict a first cache line from the second cache to the first cache in response to an eviction of a second cache line from the second processor group to the second cache, and to provide the up-evicted first cache line from the first cache to the second processor group in response to a request from the second processor group.
Type: Grant
Filed: August 12, 2022
Date of Patent: February 20, 2024
Assignee: Huawei Technologies Co., Ltd.
Inventors: Yuejian Xie, Qian Wang, Xingyu Jiang
-
Patent number: 11907123
Abstract: Embodiments include methods, systems and computer program products for managing a flash memory device. Aspects include monitoring a percentage of memory of the flash memory device that is in a ready to use state. Aspects also include operating the flash memory device in a first operating mode based on a determination that the percentage is greater than a first threshold value. Aspects further include operating the flash memory device in a second operating mode based on a determination that the percentage has fallen below the first threshold value. Aspects include operating the flash memory device in a third operating mode until the percentage exceeds the first threshold value based on a determination that the percentage has fallen below a second threshold value, which is lower than the first threshold value. The erasing of ready to erase memory block stripes is only performed during the third operating mode.
Type: Grant
Filed: April 20, 2021
Date of Patent: February 20, 2024
Assignee: International Business Machines Corporation
Inventors: Robert Edward Galbraith, Daniel Frank Moertl, Rick A. Weckwerth, Matthew Szekely
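The mode selection described in this abstract is essentially a two-threshold state machine with hysteresis on the third mode. Below is a minimal Python sketch of that idea; the class name, threshold values, and the rule for leaving the third mode are illustrative assumptions, not the patent's implementation.

```python
# Sketch of the two-threshold operating-mode selection described above.
# Mode names and threshold values are assumptions for illustration only.
FIRST_MODE, SECOND_MODE, THIRD_MODE = "first", "second", "third"

class FlashModeSelector:
    def __init__(self, first_threshold=0.20, second_threshold=0.05):
        assert second_threshold < first_threshold
        self.first_threshold = first_threshold
        self.second_threshold = second_threshold
        self.mode = FIRST_MODE

    def update(self, ready_to_use_fraction):
        """Pick the operating mode from the fraction of ready-to-use memory."""
        if self.mode == THIRD_MODE:
            # Remain in the third mode (the only mode in which ready-to-erase
            # block stripes are erased) until the fraction exceeds the first threshold.
            if ready_to_use_fraction > self.first_threshold:
                self.mode = FIRST_MODE
        elif ready_to_use_fraction < self.second_threshold:
            self.mode = THIRD_MODE
        elif ready_to_use_fraction < self.first_threshold:
            self.mode = SECOND_MODE
        else:
            self.mode = FIRST_MODE
        return self.mode

# Example: dropping below the lower threshold forces the erase-enabled mode.
selector = FlashModeSelector()
print(selector.update(0.03))  # -> "third"
print(selector.update(0.10))  # -> still "third" until above the first threshold
```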
-
Patent number: 11907124
Abstract: Aspects include using a shadow copy of a level 1 (L1) cache in a cache hierarchy. A method includes maintaining the shadow copy of the L1 cache in the cache hierarchy. The maintaining includes updating the shadow copy of the L1 cache with memory content changes to the L1 cache a number of pipeline cycles after the L1 cache is updated with the memory content changes.
Type: Grant
Filed: March 31, 2022
Date of Patent: February 20, 2024
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Yair Fried, Aaron Tsai, Eyal Naor, Christian Jacobi, Timothy Bronson, Chung-Lung K. Shum
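Read literally, the shadow copy lags the L1 cache by a fixed number of pipeline cycles, which can be modeled as a short delay queue of pending updates. The sketch below is a simplified software model under that assumption, not the hardware design.

```python
from collections import deque

class ShadowedL1:
    """Toy model: the shadow copy applies each L1 update N cycles after the L1."""
    def __init__(self, delay_cycles=3):
        self.l1 = {}             # address -> data
        self.shadow = {}         # lagging copy of the L1 contents
        self.pending = deque()   # (apply_at_cycle, address, data)
        self.delay = delay_cycles
        self.cycle = 0

    def write(self, address, data):
        # The L1 is updated immediately; the shadow update is scheduled for later.
        self.l1[address] = data
        self.pending.append((self.cycle + self.delay, address, data))

    def tick(self):
        # Advance one pipeline cycle and apply any shadow updates that are due.
        self.cycle += 1
        while self.pending and self.pending[0][0] <= self.cycle:
            _, address, data = self.pending.popleft()
            self.shadow[address] = data
```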
-
Patent number: 11907125
Abstract: A computer-implemented method is provided. The method includes determining whether a rejection of a request is required and determining whether the request is software forward progress (SFP)-likely or SFP-unlikely upon determining that the rejection of the request is required. The method also includes executing a first pseudo random decision to set or not set a requested state of the request in an event the request is SFP-likely or SFP-unlikely, respectively, and rejecting the request following execution of the second pseudo random decision.
Type: Grant
Filed: April 5, 2022
Date of Patent: February 20, 2024
Assignee: International Business Machines Corporation
Inventors: Gregory William Alexander, Tu-An T. Nguyen, Deanna Postles Dunn Berger, Timothy Bronson, Christian Jacobi
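The abstract is terse, but the gist appears to be a probabilistic gate on whether a soon-to-be-rejected request has its "requested" state set, biased by whether the request is judged SFP-likely. The sketch below is one speculative reading; the probabilities and field names are invented for illustration.

```python
import random

def reject_with_state_decision(request, sfp_likely,
                               p_set_if_likely=0.75, p_set_if_unlikely=0.25):
    """Speculative sketch: before rejecting, pseudo-randomly decide whether to
    set the request's 'requested' state, favoring SFP-likely requests.
    Probabilities and keys are illustrative assumptions, not patent values."""
    p_set = p_set_if_likely if sfp_likely else p_set_if_unlikely
    request["requested_state"] = random.random() < p_set
    request["rejected"] = True
    return request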
-
Patent number: 11907126
Abstract: A processor employs a plurality of op cache pipelines to concurrently provide previously decoded operations to a dispatch stage of an instruction pipeline. In response to receiving a first branch prediction at a processor, the processor selects a first op cache pipeline of the plurality of op cache pipelines of the processor based on the first branch prediction, and provides a first set of operations associated with the first branch prediction to the dispatch queue via the selected first op cache pipeline.
Type: Grant
Filed: December 9, 2020
Date of Patent: February 20, 2024
Assignee: Advanced Micro Devices, Inc.
Inventors: Robert B. Cohen, Tzu-Wei Lin, Anthony J. Bybell, Sudherssen Kalaiselvan, James Mossman
-
Patent number: 11907127
Abstract: In certain aspects, one or more solid-state storage devices (SSDs) are provided that include a controller and non-volatile memory coupled to the controller. The non-volatile memory can include one or more portions configured as main memory or cache memory. When data stored in the main memory is written to the cache memory for processing, the data in the main memory is erased. In certain aspects, storage systems are provided that include one or more of such SSDs coupled to a host system. In certain aspects, methods are provided that include: receiving, by a first such SSD, a first command to write data to memory; determining that the data is stored in a main memory and is to be written to the cache memory for processing; writing the data to the cache memory; and erasing the data from the main memory.
Type: Grant
Filed: May 6, 2022
Date of Patent: February 20, 2024
Assignee: SMART IOPS, INC.
Inventors: Ashutosh Kumar Das, Manuel Antonio d'Abreu
-
Patent number: 11907128
Abstract: A technique for managing a storage system involves determining, in response to a first write operation on a first data block on a persistent storage device, whether a first group of data corresponding to the first data block is included in a cache; updating the first group of data in the cache if it is determined that the first group of data is included in the cache; and adding the first group of data to an associated data set of the cache to serve as a first record. Accordingly, such a technique can associatively manage different types of cached data corresponding to a data block, thereby optimizing the system performance.
Type: Grant
Filed: May 10, 2022
Date of Patent: February 20, 2024
Assignee: EMC IP Holding Company LLC
Inventors: Ming Zhang, Chen Gong, Qiaosheng Zhou
-
Patent number: 11907129
Abstract: Disclosed herein is an information processing device including a host unit adapted to request data access by specifying a logical address of a secondary storage device, and a controller adapted to accept the data access request and convert the logical address into a physical address using an address conversion table to perform data access to an associated area of the secondary storage device, in which an address space defined by the address conversion table includes a coarsely granular address space that collectively associates, with logical addresses, physical addresses that are in units larger than those in which data is read.
Type: Grant
Filed: September 9, 2021
Date of Patent: February 20, 2024
Assignee: SONY INTERACTIVE ENTERTAINMENT INC.
Inventor: Hideyuki Saito
-
Patent number: 11907130
Abstract: An apparatus comprising a cache comprising a plurality of cache entries, cache access circuitry responsive to a cache access request to perform, based on a target memory address associated with the cache access request, a cache lookup operation, tracking circuitry to track pending requests to modify cache entries of the cache, and prediction circuitry responsive to the cache access request to make a prediction of whether the pending requests tracked by the tracking circuitry include a pending request to modify a cache entry associated with the target memory address, wherein the cache access circuitry is responsive to the cache access request to determine, based on the prediction, whether to perform an additional lookup of the tracking circuitry. A method and a non-transitory computer-readable medium to store computer-readable code for fabrication of the apparatus are also provided.
Type: Grant
Filed: January 26, 2023
Date of Patent: February 20, 2024
Assignee: Arm Limited
Inventors: Alexander Alfred Hornung, Kenny Ju Min Yeoh
-
Patent number: 11907131
Abstract: Techniques for efficiently flushing a user data log may postpone or delay establishing chains of metadata pages used as mapping information to map logical addresses to storage locations of content stored at the logical addresses. Processing can include: receiving a write operation that writes data to a logical address; storing an entry for the write operation in the user data log; and flushing the entry from the user data log. Flushing can include storing a metadata log entry in a metadata log, wherein the metadata log entry represents a binding of the logical address to a data block including the data stored at the logical address; and destaging the metadata log entry. Destaging can include updating mapping information used to map the logical address to the data block. The mapping information can include a metadata page in accordance with the metadata log entry.
Type: Grant
Filed: July 1, 2022
Date of Patent: February 20, 2024
Assignee: Dell Products L.P.
Inventors: Vladimir Shveidel, Bar David
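The flow described is: absorb the write into a user data log, later flush it by emitting a metadata-log entry that binds the logical address to a data block, and only at destage time materialize the mapping information. A simplified Python sketch of that pipeline follows; the structure names are assumptions, not the patent's on-disk layout.

```python
class LogStructuredMapper:
    """Toy sketch of the write -> flush -> destage flow described above."""
    def __init__(self):
        self.user_data_log = []   # (logical_address, data)
        self.metadata_log = []    # (logical_address, data_block_id) bindings
        self.data_blocks = {}     # data_block_id -> data
        self.mapping = {}         # logical_address -> data_block_id ("metadata page")
        self._next_block = 0

    def write(self, logical_address, data):
        self.user_data_log.append((logical_address, data))

    def flush(self):
        # Move entries out of the user data log; record only a binding in the metadata log.
        while self.user_data_log:
            logical_address, data = self.user_data_log.pop(0)
            block_id = self._next_block
            self._next_block += 1
            self.data_blocks[block_id] = data
            self.metadata_log.append((logical_address, block_id))

    def destage(self):
        # Only now update the mapping information used to resolve logical addresses.
        while self.metadata_log:
            logical_address, block_id = self.metadata_log.pop(0)
            self.mapping[logical_address] = block_id

    def read(self, logical_address):
        # A real system would also consult the logs; the sketch reads mapped data only.
        block_id = self.mapping.get(logical_address)
        return None if block_id is None else self.data_blocks[block_id]
```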
-
Patent number: 11907132
Abstract: A method for managing designated authority status in a cache line includes identifying an initial designated authority (DA) member cache for a cache line, transferring DA status from the initial DA member cache to a new DA member cache, determining whether the new DA member cache is active, indicating a final state of the initial DA cache responsive to determining that the new DA member cache is active, and overriding a DA state in a cache control structure in a directory. A method for managing cache accesses during a designated authority transfer includes receiving a designated authority (DA) status transfer request, receiving an indication that a first cache will invalidate its copy of the cache line, allowing a second cache to assume DA status for the cache line, and denying access to the first cache's copy of the cache line until invalidation by the first cache is complete.
Type: Grant
Filed: March 23, 2022
Date of Patent: February 20, 2024
Assignee: International Business Machines Corporation
Inventors: Jason D Kohl, Gregory William Alexander, Timothy Bronson, Akash V. Giri, Winston Herring
-
Patent number: 11907133
Abstract: Standardized address generation from address substrings includes receiving an address string for a place-of-interest, one-to-many mapping at least one of a plurality of address substrings of the address string to respective address components, concatenating the address substrings using a template that specifies an order of concatenating the address substrings, and making the concatenated address substrings available for further use.
Type: Grant
Filed: July 29, 2022
Date of Patent: February 20, 2024
Assignee: SafeGraph, Inc.
Inventor: Vera Sazonova
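The mechanism is concrete enough to sketch: map substrings of a raw address to labeled components, then emit a standardized string by concatenating the components in a template-defined order. The mapping, component labels, and template below are invented for illustration, not SafeGraph's actual data.

```python
def standardize_address(raw_address, substring_to_components, template):
    """Map address substrings to components, then concatenate per the template.
    The mapping and template are illustrative assumptions."""
    components = {}
    for substring in raw_address.split(","):
        substring = substring.strip()
        labels = substring_to_components.get(substring.lower(), [])
        # "One-to-many" in the abstract's terms: a substring may feed several components.
        for label in labels:
            components.setdefault(label, substring)
    return " ".join(components[c] for c in template if c in components)

# Example usage with an invented mapping and template:
mapping = {"100 main st": ["street"], "springfield": ["city"], "il": ["state"]}
template = ["street", "city", "state"]
print(standardize_address("100 Main St, Springfield, IL", mapping, template))
# -> "100 Main St Springfield IL"
```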
-
Patent number: 11907134
Abstract: This disclosure provides techniques for hierarchical address virtualization within a memory controller and configurable block device allocation. By performing address translation only at select hierarchical levels, a memory controller can be designed to have predictable I/O latency, with brief or otherwise negligible logical-to-physical address translation time. In one embodiment, address translation may be implemented entirely with logical gates and look-up tables of a memory controller integrated circuit, without requiring processor cycles. The disclosed virtualization scheme also provides for flexibility in customizing the configuration of virtual storage devices, to present nearly any desired configuration to a host or client.
Type: Grant
Filed: September 8, 2021
Date of Patent: February 20, 2024
Assignee: Radian Memory Systems, Inc.
Inventors: Robert Lercari, Alan Chen, Mike Jadon, Craig Robertson, Andrey V. Kuzmin
-
Patent number: 11907135
Abstract: To increase the speed with which a Second Layer Address Table (SLAT) is traversed, memory having the same access permissions is contiguously arranged such that one or more hierarchical levels of the SLAT need not be referenced, thereby resulting in more efficient SLAT traversal. "Slabs" of memory are established whose memory range is sufficiently large that reference to a hierarchically lower level table can be skipped and a hierarchically higher level table's entries can directly identify relevant memory addresses. Such slabs are aligned to avoid smaller intermediate memory ranges. The loading of code or data into memory is performed based on a next available memory location within a slab having equivalent access permissions, or, if such a slab is not available, or if an existing slab does not have a sufficient quantity of available memory remaining, a new slab with the proper access permissions is established.
Type: Grant
Filed: February 6, 2023
Date of Patent: February 20, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Yevgeniy Bak, Mehmet Iyigun, Jonathan E. Lange
-
Patent number: 11907136
Abstract: An apparatus and/or system is described including a memory device including a memory range and a temporal data management unit (TDMU) coupled to the memory device to receive from an interface, the memory range and a temporal range corresponding to validity of data in the memory range, check the temporal range against a time and/or date value provided by a timer or clock to identify the data in the memory range as expired, and invalidate the data that is expired in the memory device. In some embodiments, the TDMU includes hardware logic that resides on a memory module with the memory device and is coupled to invalidate expired data when the memory module is decoupled from the interface. Other embodiments may be disclosed and claimed.
Type: Grant
Filed: March 16, 2020
Date of Patent: February 20, 2024
Assignee: Intel Corporation
Inventors: Ginger H. Gilsdorf, Karthik Kumar, Mark A. Schmisseur, Thomas Willhalm, Francesc Guim Bernat
-
Patent number: 11907137
Abstract: Disclosed are systems and methods for leader node election, comprising a cluster system including a plurality of nodes, a node registry, wherein nodes are configured to transmit registration requests to the node registry, receive node data in response, and to determine a leader node based on the earliest registered node, and wherein the leader node is configured to dynamically allocate data slots between the plurality of nodes, and each of the nodes are configured to store data associated with allocated data slots in an in-memory least recently used component and data associated with all of the data slots in a persistent storage component.
Type: Grant
Filed: January 26, 2022
Date of Patent: February 20, 2024
Assignee: CAPITAL ONE SERVICES, LLC
Inventors: Rohit Joshi, Ashish Gupta
-
Patent number: 11907138
Abstract: Various embodiments include methods and devices for implementing a criterion aware cache replacement policy by a computing device. Embodiments may include updating a staling counter, writing a value of a local counter to a system cache in association with a location in the system cache for data, in which the value of the local counter includes a value of the staling counter when (i.e., at the time) the associated data is written to the system cache, and using the value of the local counter of the associated data to determine whether the associated data is stale.
Type: Grant
Filed: December 31, 2021
Date of Patent: February 20, 2024
Assignee: QUALCOMM Incorporated
Inventors: Hiral Nandu, Subbarao Palacharla, George Patsilaras, Alain Artieri, Simon Peter William Booth, Vipul Gandhi, Girish Bhat, Yen-Kuan Wu, Younghoon Kim
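The core bookkeeping here is: each cached entry stores a snapshot of a global "staling counter" taken when the entry was written, and staleness is judged by how far the global counter has since advanced. A small sketch follows; the staleness threshold and the eviction-candidate helper are illustrative assumptions.

```python
class StalingCounterCache:
    """Toy sketch of the criterion-aware staleness check described above."""
    def __init__(self, staleness_threshold=4):
        self.staling_counter = 0
        self.lines = {}  # location -> (data, local_counter)
        self.staleness_threshold = staleness_threshold

    def advance_staling_counter(self):
        # Advanced whenever the system's criterion is met (assumed periodic here).
        self.staling_counter += 1

    def write(self, location, data):
        # The local counter records the staling counter at the time of the write.
        self.lines[location] = (data, self.staling_counter)

    def is_stale(self, location):
        _, local_counter = self.lines[location]
        return (self.staling_counter - local_counter) >= self.staleness_threshold

    def evict_candidates(self):
        return [loc for loc in self.lines if self.is_stale(loc)]
```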
-
Patent number: 11907139
Abstract: A mother board topology including a processor operable to be coupled to one or more communication channels for communicating commands. The topology includes a first communication channel electrically coupling a first set of two or more dual in-line memory modules (DIMMs) and a first primary data buffer on a mother board. The topology includes a second communication channel electrically coupling a second set of two or more DIMMs and a second primary data buffer on the mother board. The topology includes a third channel electrically coupling the first primary data buffer, the second primary data buffer, and the processor.
Type: Grant
Filed: December 20, 2022
Date of Patent: February 20, 2024
Assignee: Rambus Inc.
Inventors: Chi-Ming Yeung, Yoshie Nakabayashi, Thomas Giovannini, Henry Stracovsky
-
Patent number: 11907140
Abstract: A system for serial communication includes a controller, a semiconductor package comprising a plurality of semiconductor die, and a serial interface configured to connect the plurality of semiconductor die to the controller. The serial interface includes a controller-to-package connection and a package-to-controller connection, and the serial interface is configured to employ a signaling protocol using differential data signaling with no separate clock signals.
Type: Grant
Filed: March 21, 2022
Date of Patent: February 20, 2024
Assignee: KIOXIA CORPORATION
Inventors: Benjamin Kerr, Philip Rose, Robert Reed
-
Patent number: 11907141
Abstract: Various embodiments include methods for implementing flexible ranks in a memory system. Embodiments may include receiving, at a memory controller, a first memory access command and a first address at which to implement the first memory access command in a logical rank, generating, by the memory controller, a first signal configured to indicate to a first memory device of the logical rank to implement the first memory access command via a first partial channel, sending, from the memory controller, the first signal to the first memory device, generating, by the memory controller, a second signal configured to indicate to a second memory device of the logical rank that is different from the first memory device to implement the first memory access command via a second partial channel, and sending, from the memory controller, the second signal to the second memory device.
Type: Grant
Filed: September 6, 2022
Date of Patent: February 20, 2024
Assignee: QUALCOMM Incorporated
Inventors: Jungwon Suh, Pankaj Deshmukh, Shyamkumar Thoziyoor, Subbarao Palacharla
-
Patent number: 11907142
Abstract: Excessive polling that may result in wasted computing resources and unnecessary network traffic can be avoided using some techniques described herein. In one example, a method can include obtaining historical data indicating execution times associated with computing operations. The method can also include determining polling times to assign to the computing operations by applying a model to the historical data. The method may also include configuring a software application to implement the polling times in relation to polling processes for transmitting requests to execute the computing operations to one or more destinations.
Type: Grant
Filed: February 4, 2022
Date of Patent: February 20, 2024
Assignee: RED HAT, INC.
Inventors: Brian Gallagher, Cathal O'Connor
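One straightforward reading of this is to derive a per-operation polling interval from the distribution of past execution times, so that the first poll lands around when the operation is likely finished. The percentile-based "model" below is an assumption for illustration; the patent does not specify which model is applied.

```python
def polling_times_from_history(history, quantile=0.9):
    """history: {operation_name: [past execution times in seconds]}.
    Returns a polling interval per operation. Using a high quantile of past
    execution times as the 'model' is an illustrative assumption."""
    polling_times = {}
    for operation, times in history.items():
        if not times:
            continue
        ordered = sorted(times)
        index = min(len(ordered) - 1, int(quantile * len(ordered)))
        polling_times[operation] = ordered[index]
    return polling_times

# Example: poll a slow report job far less often than a quick lookup.
print(polling_times_from_history({"report": [55, 61, 58, 70], "lookup": [0.4, 0.5, 0.6]}))
# -> {'report': 70, 'lookup': 0.6}
```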
-
Patent number: 11907143
Abstract: A method for timestamping and synchronization with high-accuracy timestamps in low-power sensor systems is provided. The method is performed by a device and includes: receiving, by a sensor hub of the device, an interrupt signal from a sensor and performing an interrupt service routine (ISR) to obtain an interrupt timestamp obtained by a latch, wherein the interrupt timestamp is obtained from an always-running unified time reference; obtaining, by the sensor hub, sensor data from the sensor; predicting, by the sensor hub, a prediction timestamp based on an amount of sensor data and the interrupt timestamp by using a filtering algorithm; and correcting, by the sensor hub, a timestamp of each sensor data based on the prediction timestamp.
Type: Grant
Filed: April 14, 2022
Date of Patent: February 20, 2024
Assignee: MEDIATEK SINGAPORE PTE. LTD.
Inventors: Hongxu Zhao, Cunliang Du, Chieh-Lin Chuang, Zhen Jiang
-
Patent number: 11907144
Abstract: Techniques to reduce the latency in notifying that space in a memory has been freed up are described. For example, when moving data from on-chip memory of a computing engine to system memory, the computing engine can be notified that its on-chip memory is free before an acknowledgment is provided by the system memory that the data being moved has been written into the system memory. The computing engine can be given access to the on-chip memory sooner by generating an early semaphore update based on a determination that the set of data being moved to system memory has been read out from the on-chip memory. The early semaphore update need not wait for the acknowledgement from the system memory, thus reducing the latency of notifying the computing engine that the on-chip memory is free.
Type: Grant
Filed: June 3, 2022
Date of Patent: February 20, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Raymond S. Whiteside, Thomas A. Volpe
-
Patent number: 11907145
Abstract: An integrated circuit (IC) includes first and second memory devices and a bridge. The IC also includes a first interconnect segment coupled between the first memory device and the bridge. The IC further includes a second interconnect segment coupled between the first and second memory devices, and a third interconnect segment coupled between the bridge and the second memory device. The IC includes a first DMA circuit coupled to the first interconnect segment, and a second DMA circuit coupled to the second interconnect segment. A fourth interconnect segment is coupled between the first and second DMA circuits.
Type: Grant
Filed: October 24, 2022
Date of Patent: February 20, 2024
Assignee: Texas Instruments Incorporated
Inventors: Brian Jason Karguth, Charles Lance Fuoco, Samuel Paul Visalli, Michael Anthony Denio
-
Patent number: 11907146
Abstract: System and method for implementing accelerated memory transfers in an integrated circuit includes identifying memory access parameters for configuring memory access instructions for accessing a target corpus of data from within a defined region of an n-dimensional memory; converting the memory access parameters to direct memory access (DMA) controller-executable instructions, wherein the converting includes: (i) defining dimensions of a data access tile based on a first parameter of the memory access parameters; (ii) generating multi-directional data accessing instructions that, when executed, automatically moves the data access tile along multiple distinct axes within the defined region of the n-dimensional memory based at least on a second parameter of the memory access parameters; transferring a corpus of data from the n-dimensional memory to a target memory based on executing the DMA controller-executable instructions.
Type: Grant
Filed: November 15, 2022
Date of Patent: February 20, 2024
Assignee: quadric.io, Inc.
Inventors: Aman Sikka, Marian Petre, Nigel Drego, Veerbhan Kheterpal
-
Patent number: 11907147
Abstract: A message inspection engine, implemented in hardware in a System on Chip (SOC), is configured using configuration information to obtain a configured message inspection engine. An input message is received at the configured message inspection engine from an upstream functional module in the SOC. The configured message inspection engine is used to analyze the input message to determine a content modification plan and a destination control plan and to generate an output message based at least in part on the input message, the content modification plan, and the destination control plan, including by populating the output message with a downstream functional module specified by the destination control plan. The output message is output from the configured message inspection engine.
Type: Grant
Filed: June 30, 2023
Date of Patent: February 20, 2024
Assignee: Beijing Tenafe Electronic Technology Co., Ltd.
Inventors: Priyanka Nilay Thakore, Lyle E. Adams
-
Patent number: 11907148
Abstract: An open compute project (OCP) adapter card and a computer device are disclosed. The adapter card includes an OCP connector, a controller, a selector, and a motherboard connector. The OCP connector is configured to connect to an OCP network interface card (NIC). The controller is configured for bandwidth allocation, in-situ control and power-on/off control of the OCP NIC. The selector gates a single-homed host or a dual-homed host based on working mode configuration information stored in the controller. The motherboard connector is configured to connect to a motherboard device.
Type: Grant
Filed: September 28, 2020
Date of Patent: February 20, 2024
Assignee: INSPUR SUZHOU INTELLIGENT TECHNOLOGY CO., LTD.
Inventors: Shuming Wang, Xiangtao Kong
-
Patent number: 11907149
Abstract: Sideband signaling in a Universal Serial Bus (USB) Type-C communication link allows multiple protocols to be tunneled through a USB link, where sideband signals may be provided through the sideband use (SBU) pins. Further, the SBU pins may be transitioned between different modes of sideband signals. In particular, signals in an initial mode may indicate a need or desire to transition to a second mode. After a negotiation in which the linked devices agree to transition, the two devices may transition to the second mode. By providing this inband sideband signaling that allows mode changes, more protocols can be tunneled with accompanying sideband signaling and the flexibility of the USB link is expanded.
Type: Grant
Filed: December 9, 2020
Date of Patent: February 20, 2024
Assignee: QUALCOMM Incorporated
Inventors: Lalan Jee Mishra, Richard Dominic Wietfeldt, Yiftach Benjamini
-
Patent number: 11907150
Abstract: A flexible storage system. A storage motherboard accommodates, on a suitable connector, a storage adapter circuit that provides protocol translation between a host bus interface and a storage interface, and that provides routing, to accommodate a plurality of mass storage devices that may be connected to the storage adapter circuit through the storage motherboard. The storage adapter circuit may be replaced with a circuit supporting a different host interface or a different storage interface.
Type: Grant
Filed: May 10, 2021
Date of Patent: February 20, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Fred Worley, Harry Rogers, Sreenivas Krishnan, Zhan Ping, Michael Scriber
-
Patent number: 11907151
Abstract: Described are methods for configuring computing systems for, and computing systems for, PCIe communication between remote computing assets. The system uses a fabric interface device configured to receive multi-lane serial PCIe data from functional elements of a computing asset through a multi-lane PCIe bus, and to transparently extend the multi-lane PCIe bus by converting the multi-lane PCIe data into a retimed parallel version of the PCIe multi-lane data to be sent on bidirectional data communication paths. The fabric interface device is also configured so that the multi-lane PCIe bus can have a first number of lanes and the bidirectional data communication paths can have a different second number of lanes.
Type: Grant
Filed: November 11, 2021
Date of Patent: February 20, 2024
Assignee: Drut Technologies Inc.
Inventors: Jitender Miglani, Will Ferry, Dileep Desai
-
Patent number: 11907152
Abstract: A reconfigurable server includes improved bandwidth connection to adjacent servers and allows for improved access to near-memory storage and for an improved ability to provision resources for an adjacent server. The server includes a processor array and a near-memory accelerator module that includes near-memory, and the near-memory accelerator module helps provide sufficient bandwidth between the processor array and near-memory. A hardware plane module can be used to provide additional bandwidth and interconnectivity between adjacent servers and/or adjacent switches.
Type: Grant
Filed: November 18, 2022
Date of Patent: February 20, 2024
Assignee: Molex, LLC
Inventors: Augusto Panella, Allan Cantle, Ray Matyka, John W. Comish, Jr.
-
Patent number: 11907153
Abstract: Methods, systems, and devices for providing computer implemented services using managed systems are disclosed. To provide the computer implemented services, the managed systems may need to operate in a predetermined manner conducive to, for example, execution of applications that provide the computer implemented services. Similarly, the managed system may need access to certain hardware resources (e.g., and also software resources such as drivers, firmware, etc.) to provide the desired computer implemented services. To improve the likelihood of the computer implemented services being provided, the managed devices may be managed using a subscription based model. The subscription model may utilize a highly accessible service to obtain information regarding desired capabilities (e.g., a subscription) of a managed system, and use the acquired information to automatically configure and manage the features and capabilities of the managed systems.
Type: Grant
Filed: January 7, 2022
Date of Patent: February 20, 2024
Assignee: Dell Products L.P.
Inventors: Lucas A. Wilson, Dharmesh M. Patel
-
Patent number: 11907154
Abstract: A receive clock generated at a receiver coupled to a one-wire bus is synchronized in each clock cycle, permitting reception of a data frame of unlimited length without clock overrun or underrun. A base clock signal provided by an oscillator is passed by a clock gating circuit while the clock gating circuit is enabled. A counter counts positive and negative edges in an output of the clock gating circuit. The clock gating circuit is disabled when an output of the counter indicates a preconfigured maximum count value. An edge synchronization circuit that synchronizes edges in the base clock signal with edges in a data signal received over the one-wire bus ignores edges in the data signal while the counter output has a value that is less than the maximum count value, and resets the counter in response to an edge detected in the data signal received over the one-wire bus.
Type: Grant
Filed: July 11, 2022
Date of Patent: February 20, 2024
Assignee: QUALCOMM Incorporated
Inventors: Lalan Jee Mishra, Umesh Srikantiah, Francesco Gatta, Muhlis Kenan Ozel, Richard Dominic Wietfeldt
-
Patent number: 11907155
Abstract: A bus system is provided. A plurality of slave devices are electrically connected to a master device through an enhanced serial peripheral interface (eSPI) bus. Each slave device has an alert handshake pin. The alert handshake pins of the slave devices are electrically connected together via an alert handshake control line. In a first phase of a plurality of phases in each assignment period of an assignment stage after a synchronization stage, the first slave device is configured to control the alert handshake control line to a second voltage level via the alert handshake pin. In the phases of each of the assignment periods except for the first phase, a first slave device of the slave devices is configured to control the alert handshake control line to communicate with the slave devices via the alert handshake pin. The first phase corresponds to a first slave device.
Type: Grant
Filed: January 12, 2022
Date of Patent: February 20, 2024
Assignee: NUVOTON TECHNOLOGY CORPORATION
Inventors: Kang-Fu Chiu, Chih-Hung Huang, Hao-Yang Chang
-
Patent number: 11907156
Abstract: According to one aspect, provision is made of a system-on-chip comprising a master device, a slave device, a clock configured to clock the operation of the slave device, a clock controller configured to activate or deactivate the clock and/or a power-on controller configured to power on/off the slave device, a control system configured to detect that the clock is deactivated and/or that the slave device is powered off when the master device emits an access request to the slave device, the master device being configured for activating the clock when the control system detects that this clock is deactivated and/or powering on the slave device when the control system detects that the slave device is powered off, then emitting a new access request to the slave device.
Type: Grant
Filed: December 3, 2021
Date of Patent: February 20, 2024
Assignees: STMicroelectronics (Alps) SAS, STMicroelectronics France
Inventors: Michael Soulie, Thomas Martin
-
Patent number: 11907157
Abstract: A representative reconfigurable processing circuit and a reconfigurable arithmetic circuit are disclosed, each of which may include input reordering queues; a multiplier shifter and combiner network coupled to the input reordering queues; an accumulator circuit; and a control logic circuit, along with a processor and various interconnection networks. A representative reconfigurable arithmetic circuit has a plurality of operating modes, such as floating point and integer arithmetic modes, logical manipulation modes, Boolean logic, shift, rotate, conditional operations, and format conversion, and is configurable for a wide variety of multiplication modes. Dedicated routing connecting multiplier adder trees allows multiple reconfigurable arithmetic circuits to be reconfigurably combined, in pair or quad configurations, for larger adders, complex multiplies and general sum of products use, for example.
Type: Grant
Filed: December 31, 2022
Date of Patent: February 20, 2024
Assignee: Cornami, Inc.
Inventors: Paul L. Master, Steven K. Knapp, Raymond J. Andraka, Alexei Beliaev, Martin A. Franz, Rene Meessen, Frederick Curtis Furtek
-
Patent number: 11907158
Abstract: A vector processor with a vector first and multi-lane configuration. A vector operation for a vector processor can include a single vector or multiple vectors as input. Multiple lanes for the input can be used to accelerate the operation in parallel. And, a vector first configuration can enhance the multiple lanes by reducing the number of elements accessed in the lanes to perform the operation in parallel.
Type: Grant
Filed: December 28, 2020
Date of Patent: February 20, 2024
Assignee: Micron Technology, Inc.
Inventor: Steven Jeffrey Wallach
-
Patent number: 11907159
Abstract: A method of representing a distributed computing system, the distributed computing system comprising a plurality of processing devices connected together according to a predefined topology. The method comprises receiving at least one piece of data from an activity log file relating to at least one processing device among the plurality of processing devices, receiving at least one metric relating to at least one processing device among the plurality of processing devices, receiving at least the predefined topology of the distributed computing system, constructing a graph representative of a distributed computing system operation, the graph comprising the data item extracted from the received log file, the received metric, and the received topology, and embedding at least one part of the graph to obtain at least one state vector representing the at least one part of the embedded graph.
Type: Grant
Filed: August 18, 2022
Date of Patent: February 20, 2024
Assignees: BULL SAS, LE COMMISSARIAT À L'ÉNERGIE ATOMIQUE ET AUX ÉNERGIES ALTERNATIVES
Inventors: Emeric Dynomant, Pierre Seroul
-
Patent number: 11907160
Abstract: This disclosure relates to a distributed processing system for configuring multiple processing channels. The distributed processing system includes a main processor, such as an ARM processor, communicatively coupled to a plurality of co-processors, such as stream processors. The co-processors can execute instructions in parallel with each other and interrupt the ARM processor. Longer latency instructions can be executed by the main processor and lower latency instructions can be executed by the co-processors. There are several ways that a stream can be triggered in the distributed processing system. In an embodiment, the distributed processing system is a stream processor system that includes an ARM processor and stream processors configured to access different register sets. The stream processors can include a main stream processor and stream processors in respective transmit and receive channels. The stream processor system can be implemented in a radio system to configure the radio for operation.
Type: Grant
Filed: August 5, 2022
Date of Patent: February 20, 2024
Assignee: Analog Devices, Inc.
Inventors: Manish J. Manglani, Shipra Bhal, Christopher Mayer
-
Patent number: 11907161
Abstract: An example method of upgrading a distributed storage object from a first version to a second version includes: querying metadata of a first component configured according to the first version of the distributed storage object, the metadata defining extents of data on a disk group of the first component; populating, for a second component configured according to the second version of the distributed storage object, logical and middle maps based on the metadata such that initial entries in the logical map point to initial entries in the middle map, and the initial entries in the middle map point to physical addresses of the disk group of the first component; and reading the data from the disk group of the first component and writing the data to a disk group of the second component while updating the initial entries in the middle map.
Type: Grant
Filed: July 2, 2021
Date of Patent: February 20, 2024
Assignee: VMware, Inc.
Inventors: Asit Desai, Abhay Kumar Jain, Wenguang Wang, Eric Knauft, Enning Xiang
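The two-level structure described here (logical-map entries pointing at middle-map entries, which point at physical addresses) can be sketched in a few lines. The dictionaries and copy loop below are a simplified illustration under assumed interfaces, not VMware's on-disk format.

```python
def build_maps_from_metadata(extent_metadata):
    """extent_metadata: list of (logical_block, physical_address) describing the
    first (old-version) component. Returns logical and middle maps for the
    second (new-version) component; the structures are illustrative assumptions."""
    logical_map, middle_map = {}, {}
    for middle_id, (logical_block, physical_address) in enumerate(extent_metadata):
        logical_map[logical_block] = middle_id      # logical entry -> middle entry
        middle_map[middle_id] = physical_address    # middle entry -> old physical address
    return logical_map, middle_map

def copy_and_repoint(middle_map, read_old, write_new):
    """Read data from the old disk group and write it to the new one,
    updating the initial middle-map entries to the new physical addresses."""
    for middle_id, old_address in list(middle_map.items()):
        data = read_old(old_address)        # assumed callable for the old disk group
        new_address = write_new(data)       # assumed callable for the new disk group
        middle_map[middle_id] = new_address
```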
-
Patent number: 11907162
Abstract: Computer-readable media, methods, and systems are disclosed for minimizing data volume growth in a database system under changes to an encryption status of a plurality of data pages persisted to a database. Initially, a request is received to update an encryption parameter associated with the database. Next, it is determined whether a candidate page requires encryption changes. In response to determining that the candidate page is not currently in use by one or more active database snapshots and not currently loaded in main memory, the candidate page is loaded into main memory. Next, an encryption operation is performed on the candidate page, and the encrypted page is designated for persistence. Finally, based on a current number of candidate pages already encrypted during a current save point cycle, the selective iteration is paused until a subsequent save point cycle.
Type: Grant
Filed: May 28, 2021
Date of Patent: February 20, 2024
Assignee: SAP SE
Inventors: Dirk Thomsen, Axel Schroeder
-
Cloud snapshot lineage mobility between virtualization software running on different storage systems
Patent number: 11907163
Abstract: An apparatus comprises a processing device configured to identify a cloud snapshot lineage that is being managed by first virtualization software running on a first storage system, the cloud snapshot lineage comprising one or more snapshots of at least one storage volume, the cloud snapshot lineage being stored on cloud storage of at least one cloud external to the first storage system. The processing device is also configured to pause management of the cloud snapshot lineage by the first virtualization software running on the first storage system, to obtain, from the first virtualization software running on the first storage system, configuration data for the cloud snapshot lineage, to provide, to second virtualization software running on a second storage system, the configuration data for the cloud snapshot lineage, and to resume management of the cloud snapshot lineage by the second virtualization software running on the second storage system.
Type: Grant
Filed: January 5, 2023
Date of Patent: February 20, 2024
Assignee: Dell Products L.P.
Inventors: Michael Malamud, Shane Sullivan, Shanmuga A. Gunasekaran, Mithun Mahendra Varma
-
Patent number: 11907164
Abstract: This application relates to a file loading method performed at an electronic device, and a non-transitory computer-readable storage medium thereof. The method including: receiving, in response to a user operation, an instruction for loading a target file; determining an associated feature of at least one piece of resource information in the target file; determining a type of the resource information according to the associated feature of the resource information; and loading the resource information by using a loading algorithm corresponding to the type of the resource information.
Type: Grant
Filed: May 19, 2021
Date of Patent: February 20, 2024
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventor: Liang Du
-
Patent number: 11907165
Abstract: The described technology is generally directed towards coordinating the generation, validation and enabling of content selection graphs in an in-memory content selection graph data store. When a set of content selection graphs is requested, a coordinator starts the generation of the relevant graphs. Upon successful generation, the coordinator starts a validation of the generated graphs against rules for the nodes/response data in the graphs. If the generated graphs pass validation, the coordinator enables the graph set for use in an in-memory cache, whereby when a request to return content selection data is received, an active graph that corresponds to the request and the current time is accessed to obtain and return the response data as the requested content selection data.
Type: Grant
Filed: September 27, 2022
Date of Patent: February 20, 2024
Assignee: HOME BOX OFFICE, INC.
Inventors: Jonathan David Lutz, Allen Arthur Gay, Dylan Carney
-
Patent number: 11907166
Abstract: Various embodiments of the disclosure disclose a method and an apparatus, which includes: a display, a memory, and a processor operatively connected to the display and/or the memory, wherein the processor is configured to: add a frame to an appended file based on a request to update application data, allocate a reserved space to the appended file, update a database file based on an update condition, and allocate the reserved space to the database file.
Type: Grant
Filed: December 20, 2021
Date of Patent: February 20, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Kisung Lee, Hyeeun Jun, Kiwon Song
-
Patent number: 11907167
Abstract: A multi-cluster configuration of a database management system in a virtual computing system includes a server that defines a first policy for a source database on a first cluster of a plurality of clusters. Each of the plurality of clusters is registered with the server and the first policy defines capture of snapshots and/or transactional logs from the source database on the first cluster. The server defines a second policy for the source database to replicate at least some of the snapshots and/or transactional logs from the first cluster to a second cluster of the plurality of clusters, captures a first snapshot and/or a first transactional log from the source database in accordance with the first policy, and replicates the first snapshot and/or the first transactional log to the second cluster in accordance with the second policy.
Type: Grant
Filed: June 2, 2021
Date of Patent: February 20, 2024
Assignee: Nutanix, Inc.
Inventors: Kamaldeep Khanuja, Yashesh Mankad, Sagar Sontakke, Bakul Banthia, Balasubrahmanyam Kuchibhotla, Anil Madan, Manish Pratap Singh
-
Patent number: 11907168
Abstract: Data storage operations, including content-indexing, containerized deduplication, and policy-driven storage, are performed within a cloud environment. The systems support a variety of clients and cloud storage sites that may connect to the system in a cloud environment that requires data transfer over wide area networks, such as the Internet, which may have appreciable latency and/or packet loss, using various network protocols, including HTTP and FTP. Methods are disclosed for content indexing data stored within a cloud environment to facilitate later searching, including collaborative searching. Methods are also disclosed for performing containerized deduplication to reduce the strain on a system namespace, effectuate cost savings, etc. Methods are disclosed for identifying suitable storage locations, including suitable cloud storage sites, for data files subject to a storage policy.
Type: Grant
Filed: March 16, 2022
Date of Patent: February 20, 2024
Assignee: Commvault Systems, Inc.
Inventors: Anand Prahlad, Marcus S. Muller, Rajiv Kottomtharayil, Srinivas Kavuri, Parag Gokhale, Manoj Kumar Vijayan
-
Patent number: 11907169
Abstract: A delta set information management device (delta device) stores full versions of files and updates such files based upon delta information. The delta device can be a web server running delta software. It can store original files as either seed files or node files in a tree structure and store modifications to seed files and node files based upon the time and identity of the entity (e.g., user or computer) that requested or made such modifications.
Type: Grant
Filed: December 13, 2019
Date of Patent: February 20, 2024
Inventor: Steven Reynolds
-
Patent number: 11907170
Abstract: Provided are a computer program product, system, and method for switching serialization techniques for handling concurrent write requests to a shared file. A first node serializes write requests from client nodes to write to the shared file. The first node determines whether to switch to a second node to manage write requests to the shared file based on a pattern of write requests to the shared file. The client nodes are notified to direct write requests to the shared file to the second node in response to determining to switch to the second node. The second node processes write requests to the shared file to serialize writes to the shared file after the client nodes are notified to submit the write requests to the shared file to the second node.
Type: Grant
Filed: June 14, 2021
Date of Patent: February 20, 2024
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Enci Zhong, Frank Schmuck, Felipe Knop, Owen T. Anderson, Huzefa Pancha, Abhishek Jain
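A simple way to model the switching criterion: the node currently serializing writes tracks which client nodes are sending them, and recommends handing ownership to another node when that node's traffic dominates the recent pattern. The window size and threshold below are invented for illustration; the patent does not spell out the pattern test.

```python
from collections import Counter, deque

class WriteSerializer:
    """Toy sketch of pattern-based handoff of write serialization for a shared file."""
    def __init__(self, current_node, window=100, switch_fraction=0.7):
        self.current_node = current_node
        self.recent_writers = deque(maxlen=window)   # sliding window of requesters
        self.switch_fraction = switch_fraction

    def record_write(self, client_node):
        self.recent_writers.append(client_node)

    def node_to_switch_to(self):
        """Return the node that should take over serialization, or None."""
        if not self.recent_writers:
            return None
        node, count = Counter(self.recent_writers).most_common(1)[0]
        if node != self.current_node and count / len(self.recent_writers) >= self.switch_fraction:
            return node  # clients would then be notified to direct writes to this node
        return None
```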
-
Patent number: 11907171
Abstract: Techniques for implementing a dynamic intelligent log analysis tool are disclosed. In some embodiments, a computer system performs operations comprising: obtaining a log file comprising a plurality of log entries, each log entry comprising an error message; identifying a set of unique words from the error messages; for each error message, computing a term-frequency vector based on a frequency of occurrence for each unique word of the set of unique words in the error message; for each error message, computing a similarity measure between the term-frequency vectors of the error message and every other error message of the log entries; for each error message, computing a score based on a sum of the similarity measures; and displaying an indication of one or more of the error messages on a computing device based on the scores for the one or more of the error messages.
Type: Grant
Filed: October 29, 2021
Date of Patent: February 20, 2024
Assignee: SAP SE
Inventors: Anviti Srivastava, Nirjar Gandhi, Akash Gupta, Sudhir Verma, Divyanshu Bajpai
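This one maps directly onto a few lines of standard text processing: build term-frequency vectors over the words in the error messages, compute pairwise similarities, and score each message by the sum of its similarities. A minimal sketch follows; whitespace tokenization and cosine similarity are illustrative choices, since the abstract only specifies a "similarity measure".

```python
import math
from collections import Counter

def score_error_messages(error_messages):
    """Return {message_index: score}, where score is the sum of cosine
    similarities between that message's term-frequency vector and every
    other message's vector."""
    vectors = [Counter(msg.lower().split()) for msg in error_messages]

    def cosine(a, b):
        shared = set(a) & set(b)
        dot = sum(a[w] * b[w] for w in shared)
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    scores = {}
    for i, vec_i in enumerate(vectors):
        scores[i] = sum(cosine(vec_i, vec_j) for j, vec_j in enumerate(vectors) if j != i)
    return scores

# Messages with many near-duplicates score high; rare, unusual errors score low.
logs = ["connection timed out", "connection timed out", "disk quota exceeded"]
print(score_error_messages(logs))  # -> {0: 1.0, 1: 1.0, 2: 0.0}
```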