Patents Issued on November 12, 2024
-
Patent number: 12141049
Abstract: A system and method is disclosed for injecting in-process agents into processes executing self-contained, statically linked binaries that do not interact with a dynamic loader mechanism that identifies and resolves required libraries at run time. System calls directed to the execution of binaries in processes are intercepted and the targeted binary is analyzed to determine whether it is statically linked. In case a statically linked binary is identified, a proxy launcher process is started instead of the binary, which starts the original binary as a traceable child process. After the child process has loaded the original binary into its process memory, the memory image of the child process is copied to the launcher process and the child process is terminated. An agent is loaded into the launcher process to instrument the copied memory image.
Type: Grant
Filed: December 8, 2023
Date of Patent: November 12, 2024
Assignee: Dynatrace LLC
Inventors: Gernot Reisinger, Thomas Koeckerbauer, Michael Obermueller
-
Patent number: 12141050
Abstract: Systems and methods include, responsive to a request to test or troubleshoot a software system including a plurality of sub-components that communicate with one another via Application Programming Interfaces (APIs), creating one or more gadgets that one or more of inject data into any sub-component and probe responses from any sub-component; performing one or more tests of one or more of the plurality of sub-components utilizing the one or more gadgets to invoke specific behavior of the software system and to collect internal data to examine correctness of the behavior; and subsequent to the one or more tests, removing the one or more gadgets. The one or more gadgets are non-intrusive and do not alter behavior of the plurality of sub-components.
Type: Grant
Filed: August 26, 2021
Date of Patent: November 12, 2024
Assignee: Ciena Corporation
Inventors: Padma Sanampudi, David Henry Gilson, Bruce Todd Jorgens
-
Patent number: 12141051
Abstract: Various embodiments of the present invention provide methods, apparatuses, computing devices, and/or the like that are configured to perform software application framework monitoring using alert signatures for the software applications that are generated by at least one of: (i) a conditional ensemble machine learning framework comprising one or more alert priority score generation machine learning models, one or more alert priority explanation generation machine learning models, and a conditional ensemble machine learning model that is configured to generate an explanation-inclusive alert signature if a deep-learning-based alert priority score generated by the alert priority score generation machine learning models is identical to a decision-tree-based alert priority designation generated by the alert priority explanation generation machine learning models, and (ii) a set of alert priority score adjustment models such as an entity-based alert priority score adjustment model, a temporal alert priority score adju
Type: Grant
Filed: May 31, 2022
Date of Patent: November 12, 2024
Assignees: Atlassian Pty Ltd, Atlassian US, Inc.
Inventors: Vipul Gupta, Akshar Prasad
-
Patent number: 12141052
Abstract: According to some embodiments, a system, method and non-transitory computer-readable medium are provided to protect a cyber-physical system having a plurality of monitoring nodes comprising: a normal space data source storing, for each of the plurality of monitoring nodes, a series of normal monitoring node values over time that represent normal operation of the cyber-physical system; a situational awareness module including an abnormal data generation platform, wherein the abnormal data generation platform is operative to generate abnormal data to represent abnormal operation of the cyber-physical system using values in the normal space data source and a generative model; a memory for storing program instructions; and a situational awareness processor, coupled to the memory, and in communication with the situational awareness module and operative to execute the program instructions to: receive a data signal, wherein the received data signal is an aggregation of data signals received from one or more of the p
Type: Grant
Filed: May 22, 2023
Date of Patent: November 12, 2024
Assignee: GENERAL ELECTRIC COMPANY
Inventors: Hema K Achanta, Masoud Abbaszadeh, Weizhong Yan, Mustafa Tekin Dokucu
-
Patent number: 12141053
Abstract: A test flakiness system retrieves, from a repository, a software test and a software module. The test flakiness system performs the flakiness test against the software module, determining a flakiness value for the software test. On a condition that a difference between the flakiness value and a set of historical flakiness values exceeds a threshold, the test flakiness system creates a defect record.
Type: Grant
Filed: November 22, 2022
Date of Patent: November 12, 2024
Assignee: Red Hat, Inc.
Inventors: Alexander Rukletsov, Benjamin Bannier
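The flakiness check described in this abstract can be sketched in a few lines. All names and the threshold below are invented for illustration; the patent does not specify how the flakiness value or the historical comparison is computed.

```python
def flakiness_value(results):
    """Fraction of failing runs among repeated executions of one test."""
    return sum(1 for passed in results if not passed) / len(results)

def should_file_defect(current, history, threshold=0.2):
    """Create a defect record when the current flakiness value drifts
    from the historical average by more than the threshold."""
    baseline = sum(history) / len(history)
    return abs(current - baseline) > threshold

# Example: the test passed 6 of 10 runs, but was historically stable.
cur = flakiness_value([True] * 6 + [False] * 4)     # 0.4
print(should_file_defect(cur, [0.05, 0.0, 0.1]))    # True
```

A real system would persist the history per test and open the defect record in a tracker; the core decision is just this comparison.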
-
Patent number: 12141054
Abstract: A test device includes a first determination unit that determines, based on setting information set in advance, whether each screen element of a first screen is a non-test target, and an execution unit that automatically executes an operation with respect to the screen element determined not to be a non-test target by the first determination unit, whereby it is possible to exclude a part of the elements related to a screen from test targets.
Type: Grant
Filed: May 9, 2019
Date of Patent: November 12, 2024
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Yu Adachi, Haruto Tanno, Toshiyuki Kurabayashi, Yu Yoshimura, Hiroyuki Kirinuki
-
Patent number: 12141055
Abstract: Disclosed in some examples are methods, systems, devices, and machine-readable mediums which solve the above problems using a global shared region of memory that combines memory segments from multiple CXL devices. Each memory segment is a same size and naturally aligned in its own physical address space. The global shared region is contiguous and naturally aligned in the virtual address space. By organizing the global shared region in this manner, a series of three tables may be used to quickly translate a virtual address in the global shared region to a physical address. This prevents TLB thrashing and improves performance of the computing system.
Type: Grant
Filed: August 31, 2022
Date of Patent: November 12, 2024
Assignee: Micron Technology, Inc.
Inventors: Bryan Hornung, Patrick Estep
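Because each segment is the same power-of-two size and naturally aligned, translation reduces to a shift, a table lookup, and a mask. The sketch below uses one table standing in for the patent's three; the segment size, table contents, and addresses are all invented for illustration.

```python
SEG_SHIFT = 30  # hypothetical 1 GiB segments, naturally aligned

# Maps a segment index in the global shared region to a
# (CXL device id, physical base address) pair.
segment_table = {0: (0, 0x0000_0000), 1: (3, 0x4000_0000)}

def translate(vaddr, region_base=0):
    """Translate a virtual address in the global shared region."""
    off = vaddr - region_base
    idx = off >> SEG_SHIFT                      # which segment
    dev, phys_base = segment_table[idx]         # table lookup
    return dev, phys_base + (off & ((1 << SEG_SHIFT) - 1))

dev, paddr = translate((1 << 30) + 0x1234)
print(dev, hex(paddr))  # 3 0x40001234
```

Since every step is constant-time arithmetic plus small-table lookups, no large per-page mapping structure is needed, which is what keeps the TLB pressure low.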
-
Patent number: 12141056
Abstract: A mnemonic phrase management method. The method comprises: generating a random number having a first preset length and performing calculation on the random number; splicing the random number and data having a second preset length acquired from the random number calculation result to obtain a first spliced value; grouping the first spliced value according to a third preset length; sequentially searching an offset storage area for corresponding indexes according to values of the groups; obtaining mnemonic phrase starting offset addresses and mnemonic phrase lengths according to the indexes; and acquiring corresponding mnemonic phrases from a mnemonic phrase storage area according to the mnemonic phrase starting offset addresses and the mnemonic phrase lengths and sequentially storing the mnemonic phrases into a mnemonic phrase buffer. The present invention relates to the field of information security.
Type: Grant
Filed: April 22, 2022
Date of Patent: November 12, 2024
Assignee: Feitian Technologies Co., Ltd.
Inventors: Zhou Lu, Huazhang Yu
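The splice-and-group steps resemble the widely used BIP-39 derivation, sketched below under that assumption: entropy is spliced with leading checksum bits of its SHA-256 digest, and the result is grouped into 11-bit word-list indices. The patent's actual preset lengths and offset-storage layout are not specified here.

```python
import hashlib

def mnemonic_indices(entropy: bytes):
    """Splice entropy with leading checksum bits from its SHA-256
    digest, then group the result into 11-bit indexes."""
    cs_bits = len(entropy) * 8 // 32
    digest = hashlib.sha256(entropy).digest()
    bits = bin(int.from_bytes(entropy, "big"))[2:].zfill(len(entropy) * 8)
    bits += bin(digest[0] >> (8 - cs_bits))[2:].zfill(cs_bits)
    return [int(bits[i:i + 11], 2) for i in range(0, len(bits), 11)]

idx = mnemonic_indices(bytes(16))  # 128 bits of entropy -> 12 indexes
print(len(idx))  # 12
```

Each index would then be used to look up a word's starting offset and length in the offset storage area and copy the word into the mnemonic buffer.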
-
Patent number: 12141057
Abstract: The present technology includes a controller and a method of operating the same. The controller includes a stress manager configured to generate a conversion value according to a number of selected planes during an erase operation and configured to calculate a stress index of a memory block based on the conversion value, a register configured to store the stress index corresponding to the memory block, and a garbage collection manager configured to compare the stress index to a garbage collection reference value to output a garbage collection control signal.
Type: Grant
Filed: December 7, 2022
Date of Patent: November 12, 2024
Assignee: SK hynix Inc.
Inventor: Jong Wook Kim
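The stress-tracking loop can be sketched as follows. The per-plane conversion values and the reference value are invented for illustration; the patent does not disclose concrete numbers.

```python
PLANE_WEIGHT = {1: 1, 2: 3, 4: 8}   # assumed conversion values per erase
GC_REFERENCE = 10                    # assumed garbage collection reference

stress = {}                          # block id -> stress index ("register")

def record_erase(block, planes):
    """Accumulate the conversion value for an erase; return the
    garbage-collection control signal for this block."""
    stress[block] = stress.get(block, 0) + PLANE_WEIGHT[planes]
    return stress[block] >= GC_REFERENCE

print(record_erase(7, 4))  # False (stress index 8)
print(record_erase(7, 2))  # True  (stress index 11 >= 10)
```

Weighting multi-plane erases more heavily reflects that they stress the block more per operation, so heavily stressed blocks reach the GC trigger sooner.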
-
Patent number: 12141058
Abstract: Methods, computer systems, and computer readable medium are described for low latency reads using cached deduplicated data, including: receiving a request to read data from a storage system; querying, using a generated hash value associated with the request to read data, one or more deduplication tables that correspond to the hash value; and responsive to determining that the one or more deduplication tables includes an entry that corresponds to the hash value, using a mapping contained in the entry to perform the request to read data, wherein the mapping includes a pointer to a physical location where at least a portion of the data is stored.
Type: Grant
Filed: April 24, 2023
Date of Patent: November 12, 2024
Assignee: PURE STORAGE, INC.
Inventors: John Colgrove, John Hayes, Ethan Miller, Feng Wang
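The fast path can be sketched with a dictionary standing in for the deduplication table. The hash function choice, addresses, and data below are illustrative assumptions, not details from the patent.

```python
import hashlib

physical = {0x100: b"hello world"}   # stand-in for the physical store
dedup = {hashlib.sha256(b"hello world").hexdigest(): 0x100}

def read(content_hash):
    """Serve a read via the deduplication table when possible."""
    loc = dedup.get(content_hash)
    if loc is None:
        return None                  # fall back to the normal read path
    return physical[loc]             # low-latency path via the mapping

h = hashlib.sha256(b"hello world").hexdigest()
print(read(h))  # b'hello world'
```

The point of the design is that a hit in the table resolves the read with one pointer dereference instead of a full metadata walk.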
-
Patent number: 12141059
Abstract: Methods, systems, and devices for data separation for garbage collection are described. A control component coupled to the memory array may identify a source block for a garbage collection procedure. In some cases, a first set of pages of the source block may be identified as a first type associated with a first access frequency and a second set of pages of the source block may be identified as a second type associated with a second access frequency. Once the pages are identified as either the first type or the second type, the first set of pages may be transferred to a first destination block, and the second set of pages may be transferred to a second destination block as part of the garbage collection procedure.
Type: Grant
Filed: November 22, 2022
Date of Patent: November 12, 2024
Assignee: Micron Technology, Inc.
Inventors: Nicola Colella, Antonino Pollio
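The hot/cold separation step can be sketched as a simple partition by access count. The threshold and page data below are invented for illustration; the patent does not define how access frequency is measured.

```python
HOT_THRESHOLD = 4   # assumed access-count cutoff between the two types

def separate(pages):
    """Partition a source block's pages into two destination groups
    by access frequency before garbage collection relocates them."""
    hot, cold = [], []
    for page, access_count in pages:
        (hot if access_count >= HOT_THRESHOLD else cold).append(page)
    return hot, cold

hot, cold = separate([("p0", 9), ("p1", 1), ("p2", 5)])
print(hot, cold)  # ['p0', 'p2'] ['p1']
```

Keeping frequently rewritten pages together means future garbage collection passes invalidate whole blocks at once, reducing write amplification.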
-
Patent number: 12141060
Abstract: A method of managing a garbage collection (GC) operation on a flash memory includes: dividing a GC operation into a plurality of partial GC operations; determining a default partial GC operation time period for each partial GC operation; determining a partial GC intensity according to at least a basic adjustment factor and an amplification factor; determining the basic adjustment factor according to a type of one or more source blocks corresponding to the GC operation; determining the amplification factor according to a percentage of invalid pages in the one or more source blocks corresponding to the GC operation; and performing the plurality of partial GC operations according to the partial GC intensity and the default partial GC operation time period.
Type: Grant
Filed: May 9, 2023
Date of Patent: November 12, 2024
Assignee: Silicon Motion, Inc.
Inventors: Chia-Chi Liang, Cheng-Yu Tsai
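The intensity computation can be sketched as a product of the two factors. The block-type factors, the invalid-page cutoff, and the direction of the amplification are all assumptions for illustration, since the abstract only says what each factor depends on.

```python
BASIC = {"slc": 1.0, "tlc": 2.0}   # assumed per-block-type adjustment

def amplification(invalid_pct):
    """Assume: mostly-invalid blocks need less copying per reclaimed
    page, so a lower intensity suffices."""
    return 1.0 if invalid_pct >= 50 else 2.0

def partial_gc_intensity(block_type, invalid_pct):
    return BASIC[block_type] * amplification(invalid_pct)

print(partial_gc_intensity("tlc", 30))  # 4.0
```

Each partial GC operation would then run at this intensity for its default time period, spreading the reclamation cost across host idle slices.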
-
Patent number: 12141061
Abstract: The technology disclosed herein may detect, avoid, or protect against "use after free" or "double free" programming logic errors. An example method may involve: receiving, by a processing device, a memory allocation request; identifying a physical memory address referencing a chunk of memory; identifying a security parameter specifying a number of virtual memory addresses comprised by a set of virtual memory addresses that are mapped to the identified physical memory address; generating a plurality of pointers to the chunk of memory, wherein each pointer of the plurality of pointers references a corresponding virtual memory address of the set of virtual memory addresses; determining a sequential number assigned to the memory allocation request; selecting, among the plurality of pointers, a pointer corresponding to the sequential number; providing the pointer in response to the memory allocation request; and updating pointer validation data indicating validity of the pointer.
Type: Grant
Filed: January 27, 2023
Date of Patent: November 12, 2024
Assignee: Red Hat, Inc.
Inventor: Michael Tsirkin
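The rotation through virtual aliases can be sketched as follows: each allocation of the same physical chunk hands out a different alias, and validation data records which alias is currently live, so a stale or doubly freed pointer is caught. The class and parameter names are invented; integers stand in for virtual addresses.

```python
NUM_ALIASES = 4                      # the security parameter

class Chunk:
    """One physical chunk reachable through NUM_ALIASES virtual aliases."""
    def __init__(self):
        self.seq = 0                 # sequential allocation number
        self.valid = None            # pointer validation data

    def alloc(self):
        alias = self.seq % NUM_ALIASES   # pointer chosen by sequence no.
        self.valid = alias
        self.seq += 1
        return alias

    def free(self, alias):
        if alias != self.valid:      # use-after-free or double free
            raise RuntimeError("invalid pointer")
        self.valid = None

c = Chunk()
p1 = c.alloc()
c.free(p1)
p2 = c.alloc()
print(p1 != p2)  # True: a stale copy of p1 no longer validates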
-
Patent number: 12141063
Abstract: A method for efficient write-back for journal truncation is provided. The method includes maintaining, in a memory of a computing system, a journal including a plurality of records. Each record indicates a transaction associated with one or more pages in an ordered data structure. The method includes maintaining a dirty list including an entry for each page indicated by a record in the journal. Each entry in the dirty list includes a respective first log sequence number (LSN) associated with a least recent record of the plurality of records that indicates the page and a respective second LSN associated with a most recent record of the plurality of records that indicates the page. The method includes determining to truncate the journal. The method includes identifying one or more records, of the plurality of records, from the journal to write back to a disk, where the identifying is based on the dirty list.
Type: Grant
Filed: September 1, 2022
Date of Patent: November 12, 2024
Assignee: VMware LLC
Inventors: Jiaqi Zuo, Junlong Gao, Wenguang Wang, Eric Knauft, Hardik Singh Negi
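The role of the dirty list can be sketched as follows: truncating the journal up to some cutoff LSN requires first writing back every page whose oldest (first) record falls below the cutoff. The page names and LSNs are invented for illustration.

```python
# dirty list: page -> (first LSN touching it, most recent LSN touching it)
dirty = {"pageA": (5, 12), "pageB": (20, 21), "pageC": (8, 30)}

def pages_to_write_back(cutoff_lsn):
    """Pages that must reach disk before records below cutoff_lsn
    can be dropped from the journal."""
    return sorted(page for page, (first, _last) in dirty.items()
                  if first < cutoff_lsn)

print(pages_to_write_back(15))  # ['pageA', 'pageC']
```

Tracking both the first and the most recent LSN lets the system pick a cutoff that frees the most journal space for the fewest page write-backs.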
-
Patent number: 12141064
Abstract: A container image can be used to determine a caching algorithm for a software application. For example, a storage system can receive a context tag indicating an input/output (IO) pattern associated with a software application of a container. The context tag can be determined based on a container image of the container. The storage system can determine a caching algorithm for the software application based on the context tag. The storage system can apply the caching algorithm to the software application.
Type: Grant
Filed: November 30, 2021
Date of Patent: November 12, 2024
Assignee: Red Hat, Inc.
Inventors: Orit Wasserman, Gabriel Zvi BenHanokh, Yehoshua Salomon
-
Patent number: 12141065
Abstract: In at least one embodiment, processing can include determining, by a first node, an update to a metadata (MD) page, wherein the first node includes a first cache; sending, from the first node to a second node, a commit message including the update to the MD page; receiving, at the second node, the commit message from the first node; and storing, by the second node, an updated version of the MD page in a second cache of the second node only if the second cache of the second node includes a cached copy of the MD page, wherein the updated version of the MD page, as stored in the second cache of the second node, is constructed by applying the update to the cached copy of the MD page.
Type: Grant
Filed: March 1, 2023
Date of Patent: November 12, 2024
Assignee: Dell Products L.P.
Inventors: Ami Sabo, Vladimir Shveidel, Dror Zalstein
-
Patent number: 12141066
Abstract: A data processing system includes a plurality of coherent masters, a plurality of coherent slaves, and a coherent data fabric. The coherent data fabric has upstream ports coupled to the plurality of coherent masters and downstream ports coupled to the plurality of coherent slaves for selectively routing accesses therebetween. The coherent data fabric includes a probe filter and a directory cleaner. The probe filter is associated with at least one of the downstream ports and has a plurality of entries that store information about each entry. The directory cleaner periodically scans the probe filter and selectively removes a first entry from the probe filter after the first entry is scanned.
Type: Grant
Filed: December 20, 2021
Date of Patent: November 12, 2024
Assignee: Advanced Micro Devices, Inc.
Inventors: Amit P. Apte, Kevin Michael Lepak, Ganesh Balakrishnan, Vydhyanathan Kalyanasundharam
-
Patent number: 12141067
Abstract: A second memory stores a plurality of input data sets DSi composed of a plurality of pieces of input data. N multiply-accumulate units are capable of performing parallel processings, and each performs a multiply-accumulate operation on any one of the plurality of weight parameter sets and any one of the plurality of input data sets. A second DMA controller transfers the input data set from the second memory to the N multiply-accumulate units. A measurement circuit measures a degree of matching/mismatching of logic levels among the plurality of pieces of input data contained in the input data set within the memory MEM2, and the sequence controller controls the number of parallel processings by the N multiply-accumulate units based on a measurement result by the measurement circuit.
Type: Grant
Filed: July 5, 2023
Date of Patent: November 12, 2024
Assignee: RENESAS ELECTRONICS CORPORATION
Inventor: Kazuaki Terashima
-
Patent number: 12141068
Abstract: Methods, systems, and devices for loading data in a tiered memory system are described. A respective allocation of computing resources may be determined for each node in a cluster, where at least one of the nodes may include multiple memory tiers, and a data set to be processed by the nodes may be analyzed. Based on the allocation of computing resources and the analysis of the data set, respective data processing instructions indicating respective portions of the data set to be processed by respective nodes may be generated and sent to the respective nodes. The respective data processing instructions may also indicate a respective distribution of subsets of the respective portions of the data set across the multiple memory tiers at the respective nodes.
Type: Grant
Filed: September 9, 2022
Date of Patent: November 12, 2024
Assignee: Micron Technology, Inc.
Inventors: Sudharshan Sankaran Vazhkudai, Moiz Arif, Kevin Assogba, Muhammad Mustafa Rafique
-
Patent number: 12141069
Abstract: A data processing apparatus is provided. Prefetch circuitry generates a prefetch request for a cache line prior to the cache line being explicitly requested. The cache line is predicted to be required for a store operation in the future. Issuing circuitry issues the prefetch request to a memory hierarchy and filter circuitry filters the prefetch request based on at least one other prefetch request made to the cache line, to control whether the prefetch request is issued by the issuing circuitry.
Type: Grant
Filed: December 28, 2022
Date of Patent: November 12, 2024
Assignee: Arm Limited
Inventors: Luca Maroncelli, Cedric Denis Robert Airaud, Florent Begon, Peter Raphael Eid
-
Patent number: 12141070
Abstract: Computer-readable media, methods, and systems are disclosed for an in-memory cache in a memory of a client device. The system may send a first request for a first data from the client device to the in-memory cache and may receive a null response. The system may send a second request from the client device for the first data to a server and may receive a response from the server with the first data. The system may then send the first data to the in-memory cache and store the first data in the in-memory cache, thereby eliminating an additional request for the first data from the server.
Type: Grant
Filed: December 5, 2022
Date of Patent: November 12, 2024
Assignee: BUSINESS OBJECTS SOFTWARE LTD
Inventor: Raffaele Sangiovanni
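The flow in this abstract is the classic read-through cache pattern, sketched below with dictionaries standing in for the in-memory cache and the remote server (keys, values, and the hit counter are invented for illustration).

```python
cache = {}                    # in-memory cache on the client device
server = {"k": "v"}           # stand-in for the remote server
server_hits = 0               # counts round trips to the server

def get(key):
    global server_hits
    if key in cache:          # first request: the in-memory cache
        return cache[key]
    server_hits += 1          # null response -> second request: server
    value = server[key]
    cache[key] = value        # store so later reads stay local
    return value

print(get("k"), get("k"), server_hits)  # v v 1
```

After the first miss, every later read of the same key is served locally, which is exactly the eliminated round trip the abstract describes.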
-
Patent number: 12141071
Abstract: Provided is a processor that includes a load and store unit (LSU) and a cache memory, and transfers data information from a store queue in the LSU to the cache memory. The cache memory requests an information packet from the LSU when the cache memory determines that an available entry exists in a store queue within the cache memory. The LSU acknowledges the request and transfers an information packet to the cache memory. The LSU anticipates that an additional available entry exists in the cache memory, transmits an additional acknowledgement to the cache memory, and transfers an additional information packet, before receiving an additional request from the cache memory.
Type: Grant
Filed: July 21, 2022
Date of Patent: November 12, 2024
Assignee: International Business Machines Corporation
Inventors: Shakti Kapoor, Nelson Wu, Manoj Dusanapudi
-
Patent number: 12141072
Abstract: Techniques described herein relate to a method for managing training data. The method includes monitoring, by a training data stream manager (TDSM), a cache comprising a plurality of training data examples associated with streams of mini-batch sequences scheduled to be transmitted to a machine learning training environment; making a first determination that a cache eviction is required; in response to the first determination: selecting a training data example of the plurality of training data examples; making a second determination that the training data example is eligible for cache eviction; in response to the second determination: evicting the training data example from the cache; and updating a training data example database entry to indicate that the training data example is evicted from the cache.
Type: Grant
Filed: March 31, 2023
Date of Patent: November 12, 2024
Assignee: Dell Products, L.P.
Inventors: John Thomas Cardente, Qi Bao
-
Methods and apparatus for inflight data forwarding and invalidation of pending writes in store queue
Patent number: 12141073
Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to forward and invalidate inflight data in a store queue. An example apparatus includes a cache storage, a cache controller coupled to the cache storage and operable to receive a first memory operation, determine that the first memory operation corresponds to a read miss in the cache storage, determine a victim address in the cache storage to evict in response to the read miss, issue a read-invalidate command that specifies the victim address, compare the victim address to a set of addresses associated with a set of memory operations being processed by the cache controller, and in response to the victim address matching a first address of the set of addresses corresponding to a second memory operation of the set of memory operations, provide data associated with the second memory operation.
Type: Grant
Filed: April 24, 2023
Date of Patent: November 12, 2024
Assignee: Texas Instruments Incorporated
Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
-
Patent number: 12141074
Abstract: A method of managing data in a storage device is provided. The storage device includes a plurality of nonvolatile memory chips each including a plurality of pages. A first data object is received from an external host device. The first data object has an unfixed size and corresponds to a first logical address which is a single address. Based on determining that it is impossible to store the first data object in a single page among the plurality of pages, a buffering policy for the first data object is set based on at least one selection parameter. While mapping the first logical address of the first data object and a first physical address of pages in which the first data object is stored, a first buffering direction representing the buffering policy for the first data object is stored with a mapping result.
Type: Grant
Filed: June 7, 2023
Date of Patent: November 12, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Jaeju Kim, Youngho Park, Sangyoon Oh, Hyungchul Jang, Jekyeom Jeon
-
Patent number: 12141075
Abstract: In one example of the present technology, an input/output memory management unit (IOMMU) of a computing device is configured to: receive a prefetch message including a virtual address from a central processing unit (CPU) core of a processor of the computing device; perform a page walk on the virtual address through a page table stored in a main memory of the computing device to obtain a prefetched translation of the virtual address to a physical address; and store the prefetched translation of the virtual address to the physical address in a translation lookaside buffer (TLB) of the IOMMU.
Type: Grant
Filed: June 9, 2022
Date of Patent: November 12, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ramakrishna Huggahalli, Shachar Raindel
-
Patent number: 12141076
Abstract: Disclosed herein is a virtual cache and method in a processor for supporting multiple threads on the same cache line. The processor is configured to support virtual memory and multiple threads. The virtual cache directory includes a plurality of directory entries, each of which is associated with a cache line. Each cache line has a corresponding tag. The tag includes a logical address, an address space identifier, a real address bit indicator, and a per thread validity bit for each thread that accesses the cache line. When a subsequent thread determines that the cache line is valid for that thread, the validity bit for that thread is set, while not invalidating any validity bits for other threads.
Type: Grant
Filed: August 18, 2023
Date of Patent: November 12, 2024
Assignee: International Business Machines Corporation
Inventors: Markus Helms, Christian Jacobi, Ulrich Mayer, Martin Recktenwald, Johannes C. Reichart, Anthony Saporito, Aaron Tsai
-
Patent number: 12141077
Abstract: This application is directed to memory management in an electronic device. A memory includes a plurality of superblocks and receives a plurality of access requests. The electronic device stores information of an ordered list of superblocks in a cache, and each of a first subset of superblocks has a hint value and is ordered based on the hint value. In response to the plurality of access requests, the electronic device accumulates respective hint values of the first subset of superblocks and dynamically determines positions of the first subset of superblocks in the ordered list of superblocks based on the respective hint values of the first subset of superblocks. The ordered list of superblocks is pruned to generate a pruned list of superblocks. Based on the pruned list of superblocks, the electronic device converts a second subset of superblocks from a first memory type to a second memory type.
Type: Grant
Filed: June 8, 2023
Date of Patent: November 12, 2024
Assignee: SK Hynix NAND Product Solutions Corp.
Inventor: Sriram Natarajan
-
Patent number: 12141078
Abstract: A caching system including a first sub-cache, and a second sub-cache coupled in parallel with the first sub-cache; wherein the second sub-cache includes line type bits configured to store an indication that a corresponding line of the second sub-cache is configured to store write-miss data.
Type: Grant
Filed: May 22, 2020
Date of Patent: November 12, 2024
Assignee: Texas Instruments Incorporated
Inventors: Naveen Bhoria, Timothy David Anderson, Pete Hippleheuser
-
Patent number: 12141079
Abstract: Methods, apparatus, systems and articles of manufacture to facilitate an atomic operation and/or a histogram operation in a cache pipeline are disclosed. An example system includes a cache storage coupled to an arithmetic component; and a cache controller coupled to the cache storage, wherein the cache controller is operable to: receive a memory operation that specifies a set of data; retrieve the set of data from the cache storage; utilize the arithmetic component to determine a set of counts of respective values in the set of data; generate a vector representing the set of counts; and provide the vector.
Type: Grant
Filed: November 22, 2022
Date of Patent: November 12, 2024
Assignee: Texas Instruments Incorporated
Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
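The histogram operation's result, a vector of counts of each value in the data set, can be sketched in software (the hardware performs this near the cache with its arithmetic component; the bin count and data are illustrative).

```python
def histogram(data, num_bins):
    """Return a vector where element i counts occurrences of value i."""
    counts = [0] * num_bins
    for value in data:
        counts[value] += 1
    return counts

print(histogram([1, 3, 1, 0, 3, 3], 4))  # [1, 2, 0, 3]
```

Doing this in the cache pipeline avoids shuttling the whole data set to the CPU just to count values, which is the operation's motivation.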
-
Patent number: 12141080
Abstract: A communication method, a related computing system, and a storage medium are described. The communication method is for a computing system that runs at least one process, wherein the at least one process comprises a plurality of modules, and the method comprises: acquiring attribute information of each of the plurality of modules, wherein the plurality of modules at least comprise a first module and a second module; in response to determining that data is to be transmitted from the first module to the second module, comparing attribute information of the first module with attribute information of the second module; and selecting a communication channel for each of the first module and the second module according to the comparison, to transmit the data from the first module to the second module through the selected communication channel.
Type: Grant
Filed: November 14, 2022
Date of Patent: November 12, 2024
Assignee: Beijing Tusen Zhitu Technology Co., Ltd.
Inventors: Yifan Gong, Jiangming Jin
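The channel-selection step can be sketched as a comparison of module attributes. The attribute key and channel names below are invented for illustration; the patent does not enumerate which attributes or channels are compared.

```python
def select_channel(attr_a, attr_b):
    """Pick a faster channel when two modules share an attribute
    (here: the device they run on), else a generic one."""
    if attr_a["device"] == attr_b["device"]:
        return "shared_memory"       # same device: zero-copy channel
    return "network_socket"          # different devices: generic channel

print(select_channel({"device": "gpu0"}, {"device": "gpu0"}))  # shared_memory
print(select_channel({"device": "gpu0"}, {"device": "cpu"}))   # network_socket
```

The design choice is to defer the transport decision until the endpoints' placement is known, so co-located modules automatically get the cheaper path.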
-
Patent number: 12141081
Abstract: System and method for training and performing operations (e.g., read and write operations) on a double buffered memory topology. In some embodiments, eight DIMMs are coupled to a single channel. The training and operations schemes are configured with timing and signaling to allow training and operations with the double buffered memory topology. In some embodiments, the double buffered memory topology includes one or more buffers on a system board (e.g., motherboard).
Type: Grant
Filed: August 21, 2023
Date of Patent: November 12, 2024
Assignee: Rambus Inc.
Inventors: Chi-Ming Yeung, Yoshie Nakabayashi, Thomas Giovannini, Henry Stracovsky
-
Patent number: 12141082
Abstract: A parallel processing unit comprises a plurality of processors each being coupled to a memory access hardware circuitry. Each memory access hardware circuitry is configured to receive, from the coupled processor, a memory access request specifying a coordinate of a multidimensional data structure, wherein the memory access hardware circuit is one of a plurality of memory access circuitry each coupled to a respective one of the processors; and, in response to the memory access request, translate the coordinate of the multidimensional data structure into plural memory addresses for the multidimensional data structure and, using the plural memory addresses, asynchronously transfer at least a portion of the multidimensional data structure for processing by at least the coupled processor. The memory locations may be in the shared memory of the coupled processor and/or an external memory.
Type: Grant
Filed: March 10, 2022
Date of Patent: November 12, 2024
Assignee: NVIDIA CORPORATION
Inventors: Alexander L. Minkin, Alan Kaatz, Oliver Giroux, Jack Choquette, Shirish Gadre, Manan Patel, John Tran, Ronny Krashinsky, Jeff Schottmiller
-
Patent number: 12141083
Abstract: One example system for preventing data loss during memory blackout events comprises a memory device, a sensor, and a controller operably coupled to the memory device and the sensor. The controller is configured to perform one or more operations that coordinate at least one memory blackout event of the memory device and at least one data transmission of the sensor.
Type: Grant
Filed: February 17, 2023
Date of Patent: November 12, 2024
Assignee: Waymo LLC
Inventors: Sabareeshkumar Ravikumar, Daniel Rosenband
-
Patent number: 12141084
Abstract: Separate inter-die connectors for data and error correction information and related apparatuses, methods, and computing systems are disclosed. An apparatus including a master die, a target die, inter-die data connectors, and inter-die error correction connectors. The target die includes data storage elements. The inter-die data connectors electrically couple the master die to the target die. The inter-die data connectors are configured to conduct write data bits from the master die to the target die. The write data bits are written to the data storage elements. The inter-die error correction connectors electrically couple the master die to the target die. The inter-die error correction connectors are configured to conduct error correction information corresponding to the write data bits from the master die to the target die. The target die includes error correction circuitry configured to generate new error correction information responsive to the write data bits received from the master die.
Type: Grant
Filed: August 4, 2023
Date of Patent: November 12, 2024
Assignee: Lodestar Licensing Group LLC
Inventor: Vijayakrishna J. Vankayala
-
Patent number: 12141085. Abstract: A transmitter includes a pull-down circuit coupled between an output of the transmitter and a first rail, a first pull-up circuit coupled between a second rail and the output of the transmitter, and a second pull-up circuit coupled between the second rail and the output of the transmitter. The transmitter also includes a control circuit coupled to a control input of the first pull-up circuit and a control input of the second pull-up circuit. The control circuit is configured to output a first control signal to the control input of the first pull-up circuit, wherein the first control signal controls a drive strength of the first pull-up circuit. The control circuit is also configured to output a second control signal to the control input of the second pull-up circuit, wherein the second control signal controls a drive strength of the second pull-up circuit. Type: Grant. Filed: December 14, 2022. Date of Patent: November 12, 2024. Assignee: QUALCOMM INCORPORATED. Inventors: Changkyo Lee, Ashwin Sethuram
-
Patent number: 12141086. Abstract: A system on chip, semiconductor device, and/or method are provided that include a plurality of masters, an interface, and a semaphore unit. The interface interfaces the plurality of masters with a slave device. The semaphore unit detects requests from the plurality of masters, which control the slave device, for access to the interface and assigns a semaphore to each of the plurality of masters by a specific operation unit according to the detection result. Type: Grant. Filed: November 20, 2023. Date of Patent: November 12, 2024. Assignee: Samsung Electronics Co., Ltd. Inventors: DongSik Cho, Jeonghoon Kim, Rohitaswa Bhattacharya, Jaeshin Lee, Honggi Jeong
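The arbitration role of the semaphore unit can be sketched in software, with `threading.Semaphore` standing in for the on-chip hardware; the class name and grant-order log are illustrative assumptions:

```python
# Sketch: a semaphore unit serializing several masters' access to a shared
# slave interface. Python threads stand in for hardware masters.
import threading

class SemaphoreUnit:
    def __init__(self):
        self._sem = threading.Semaphore(1)  # one master holds access at a time
        self.grant_order = []

    def access(self, master_id):
        with self._sem:                     # semaphore assigned to this master
            self.grant_order.append(master_id)

unit = SemaphoreUnit()
threads = [threading.Thread(target=unit.access, args=(m,)) for m in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Whatever order the masters arrive in, each is granted the interface exactly once and accesses never overlap.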
-
Patent number: 12141087. Abstract: Methods and apparatus for improved data movement operations through an interconnect fabric. In one embodiment, Non-Transparent Bridge (NTB) technology is used to perform data movement operations between a host and multiple peer devices using a DMA (direct memory access) engine and at least one descriptor ring having enhanced descriptor entries. In one implementation, descriptor ring entries include source and destination address information, address translation information, and fabric partition information. In one implementation, a DMA engine is configured to directly access host memory and generate data packets using the descriptor entry information. In one embodiment, the descriptor ring is a virtual descriptor ring located on DMA hardware, in host memory, or elsewhere in the NT fabric address space, and may be accessed by user processes. Type: Grant. Filed: May 24, 2022. Date of Patent: November 12, 2024. Assignee: GigaIO Networks, Inc. Inventor: Doug Meyer
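A sketch of an enhanced descriptor entry and ring carrying the fields the abstract lists (source/destination addresses, translation, fabric partition); the field names and ring mechanics are illustrative assumptions, not the patented layout:

```python
# Sketch: a DMA descriptor ring whose entries carry the enhanced fields
# the abstract lists. Field names are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Descriptor:
    src_addr: int          # host-memory source address
    dst_addr: int          # peer-device destination address
    addr_translation: int  # NT-fabric address translation offset
    partition_id: int      # fabric partition for routing

class DescriptorRing:
    def __init__(self, size):
        self.entries = [None] * size
        self.head = 0

    def post(self, desc):
        self.entries[self.head] = desc
        self.head = (self.head + 1) % len(self.entries)  # ring wrap-around

ring = DescriptorRing(4)
for i in range(5):  # the fifth post wraps back to slot 0
    ring.post(Descriptor(0x1000 * i, 0x2000 * i, 0x100, partition_id=1))
```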
-
Patent number: 12141088. Abstract: A tile of an FPGA provides memory, arithmetic functions, or both. Connections directly between multiple instances of the tile are available, allowing multiple tiles to be treated as larger memories or arithmetic circuits. By using these connections, referred to as cascade inputs and outputs, the input and output bandwidth of the arithmetic and memory circuits is increased, operand sizes are increased, or both. By using the cascade connections, multiple tiles can be used together as a single, larger tile. Thus, implementations that need memories of different sizes, arithmetic functions operating on different-sized operands, or both, can use the same FPGA without additional programming or waste. Using cascade communications, more tiles are used when a large memory is needed and fewer when a small memory is needed, avoiding waste. Type: Grant. Filed: June 13, 2023. Date of Patent: November 12, 2024. Assignee: Achronix Semiconductor Corporation. Inventors: Daniel Pugh, Raymond Nijssen, Michael Philip Fitton, Marcel Van der Goot
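The "several tiles behave as one larger memory" idea can be modeled simply: upper address bits pick a tile, lower bits pick a word within it. The tile depth and address-splitting rule below are assumptions for illustration:

```python
# Sketch: treating N fixed-size tiles as one larger memory via cascade
# connections. TILE_DEPTH and the address split are illustrative.
TILE_DEPTH = 1024  # words per tile (assumed)

class CascadedMemory:
    def __init__(self, n_tiles):
        self.tiles = [[0] * TILE_DEPTH for _ in range(n_tiles)]

    def write(self, addr, value):
        # Upper address bits select the tile, lower bits the word in it.
        self.tiles[addr // TILE_DEPTH][addr % TILE_DEPTH] = value

    def read(self, addr):
        return self.tiles[addr // TILE_DEPTH][addr % TILE_DEPTH]

mem = CascadedMemory(n_tiles=4)  # behaves as one 4096-word memory
mem.write(3000, 42)              # lands in tile 2, word 952
```

A design needing only 1024 words would instantiate one tile; the same fabric serves both cases, which is the waste-avoidance the abstract claims.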
-
Patent number: 12141089. Abstract: Data manipulation using intraoral, connected devices is disclosed. Wired connectivity is provided between a first device and a second device, both inside the mouth. The first device comprises a processor device. The wired connectivity is provided by a serpentine coupling. The serpentine coupling enables three-dimensional flexibility inside the mouth for the coupling. The serpentine coupling comprises an electrical cable. The serpentine coupling can be routed in the valley space between teeth, routed behind a last molar tooth, or routed over one or more teeth inside the mouth. Wireless connectivity is provided between the first device and a device outside the mouth. The wireless connectivity is enabled using a wireless transceiver. Data transmission is enabled between the first device and the device outside the mouth. The enabling includes powering at least the first device using an intraoral energy source. Type: Grant. Filed: March 17, 2022. Date of Patent: November 12, 2024. Assignee: Augmental Technologies Inc. Inventors: Tomas Alfonso Vega Galvez, Corten Singer, Carlos Nunez Lopez
-
Patent number: 12141090. Abstract: Embodiments herein provide more efficient, more flexible, and more cost-effective ways to provide additional and/or increased functionality to an information handling system. Presented herein are embodiments of an application acceleration port interface module (which embodiments may be referred to herein for convenience as “AAPIM”) into which pluggable I/O (input/output) modules may be inserted, with the other ends inserted into ports of an information handling system, to provide the information handling system with increased capabilities (e.g., increased resources, such as added processing, and increased services, such as new applications or accelerated services). AAPIM embodiments are versatile solutions to address application acceleration needs that can be quickly reprogrammed to address the specific needs of a user. Type: Grant. Filed: July 21, 2022. Date of Patent: November 12, 2024. Assignee: DELL PRODUCTS L.P. Inventors: Padmanabhan Narayanan, Raja Sathianarayan Jayakumar, Anoop Ghanwani, Per Henrik Fremrot
-
Patent number: 12141091. Abstract: A semiconductor device capable of communicating with a host apparatus includes a symbol generation unit, a coding unit, and a transmission unit. The symbol generation unit includes a random number generation circuit and generates a symbol according to a random number generated by the random number generation circuit. The coding unit performs 8b/10b coding on the symbol. The transmission unit transmits the symbol coded by the coding unit to the host apparatus. Type: Grant. Filed: August 3, 2023. Date of Patent: November 12, 2024. Assignee: KIOXIA CORPORATION. Inventors: Kunihiko Yamagishi, Toshitada Saito
-
Patent number: 12141092. Abstract: The invention relates to a computer program comprising a sequence of instructions for execution on a processing unit having instruction storage for holding the computer program, an execution unit for executing the computer program, and data storage for holding data, the computer program comprising one or more computer-executable instructions which, when executed, implement: a send function which causes a data packet destined for a recipient processing unit to be transmitted on a set of connection wires connected to the processing unit, the data packet having no destination identifier but being transmitted at a predetermined transmit time; and a switch control function which causes the processing unit to control switching circuitry to connect a set of connection wires of the processing unit to a switching fabric to receive a data packet at a predetermined receive time. Type: Grant. Filed: April 6, 2022. Date of Patent: November 12, 2024. Assignee: GRAPHCORE LIMITED. Inventors: Simon Christian Knowles, Daniel John Pelham Wilkinson, Richard Luke Southwell Osborne, Alan Graham Alexander, Stephen Felix, Jonathan Mangnall, David Lacey
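The key idea, packets carrying no destination identifier because timing alone determines delivery, can be sketched as a deterministic tick model; the tick values and wire delay below are illustrative assumptions:

```python
# Sketch: time-deterministic exchange where a packet's arrival is fixed
# entirely by its send tick. WIRE_DELAY and tick values are illustrative.
WIRE_DELAY = 3  # ticks from send to arrival through the switching fabric

def exchange(send_tick, packets):
    """Return {arrival_tick: payload}; packets carry no destination headers."""
    fabric = {}
    for i, payload in enumerate(packets):
        # Each packet's arrival time is fully determined by its send time,
        # so the receiver's switch connects at exactly that tick.
        fabric[send_tick + i + WIRE_DELAY] = payload
    return fabric

arrivals = exchange(send_tick=10, packets=["a", "b"])
```

The receiver needs no routing information at runtime: the compiler schedules both the sender's transmit tick and the receiver's switch-control tick in advance.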
-
Patent number: 12141093. Abstract: A system includes a first processing device and a second processing device, each of which is coupled to a NIC implemented with an RDMA interface. The NICs are capable of rendezvous flows of RDMA write exchange. In an example where the first NIC is at the sender side and the second NIC is at the receiver side, a rendezvous flow is initiated by an execution of an RDMA write operation by the second NIC. The second NIC provides at least an address of a buffer in the second processing device to the first NIC through the RDMA write operation. Then the first NIC initiates an RDMA write operation to send data in a buffer in the first processing device to the second NIC. The second NIC may acknowledge receipt of the data to the first NIC. The second NIC can update a CI of the WQE based on the acknowledgement. Type: Grant. Filed: January 25, 2022. Date of Patent: November 12, 2024. Assignee: Habana Labs Ltd. Inventors: Itay Zur, Ira Joffe, Shlomi Gridish, Amit Pessach, Yanai Pomeranz
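The receiver-initiated rendezvous flow can be traced in a small simulation; the message names and the dict standing in for remote memory are stand-ins, not the patented wire format:

```python
# Sketch: the three-step rendezvous flow the abstract outlines, simulated
# in-process. Message names ("rts_addr", "ack") are assumptions.
def rendezvous(sender_buf, receiver_buf_addr, memory):
    log = []
    # 1. Receiver's NIC advertises its buffer address via an RDMA write.
    log.append(("rts_addr", receiver_buf_addr))
    # 2. Sender's NIC writes its data directly to that address.
    memory[receiver_buf_addr] = sender_buf
    log.append(("rdma_write", receiver_buf_addr))
    # 3. Receipt is acknowledged so the completion index (CI) can advance.
    log.append(("ack", receiver_buf_addr))
    return log

mem = {}
trace = rendezvous(b"payload", receiver_buf_addr=0x8000, memory=mem)
```

The point of the rendezvous is step 1: the sender never needs to know the receiver's buffer address ahead of time, so no intermediate bounce buffer is required.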
-
Patent number: 12141094. Abstract: Embodiments described herein include software, firmware, and hardware logic that provides techniques to perform arithmetic on sparse data via a systolic processing unit. One embodiment provides techniques to optimize training and inference on a systolic array when using sparse data. One embodiment provides techniques to use decompression information when performing sparse compute operations. One embodiment enables the disaggregation of special function compute arrays via a shared reg file. One embodiment enables packed data compress and expand operations on a GPGPU. One embodiment provides techniques to exploit block sparsity within the cache hierarchy of a GPGPU. Type: Grant. Filed: March 14, 2020. Date of Patent: November 12, 2024. Assignee: Intel Corporation. Inventors: Prasoonkumar Surti, Subramaniam Maiyuran, Valentin Andrei, Abhishek Appu, Varghese George, Altug Koker, Mike Macpherson, Elmoustapha Ould-Ahmed-Vall, Vasanth Ranganathan, Joydeep Ray, Lakshminarayanan Striramassarma, SungYe Kim
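The packed-data compress and expand operations mentioned here have a simple functional meaning, sketched below with a per-lane mask; the mask representation is an assumption, and real GPGPU implementations do this per SIMD lane in hardware:

```python
# Sketch: functional model of packed-data compress/expand over a
# sparsity mask. The list-based mask is an illustrative simplification.
def compress(values, mask):
    """Keep only elements whose mask bit is set (the non-zero lanes)."""
    return [v for v, m in zip(values, mask) if m]

def expand(packed, mask, fill=0):
    """Scatter packed elements back to masked lanes, filling elsewhere."""
    it = iter(packed)
    return [next(it) if m else fill for m in mask]

packed = compress([3, 0, 7, 0], mask=[1, 0, 1, 0])   # drop zero lanes
restored = expand(packed, mask=[1, 0, 1, 0])          # round-trips exactly
```

Compress shrinks the data the systolic unit must move and multiply; the mask is the "decompression information" needed to expand results back.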
-
Patent number: 12141095. Abstract: A systolic array includes a plurality of basic computation units arranged in a matrix. A basic computation unit includes a feature input register configured to store first feature data, a result buffer configured to store first temporary data, a comparator connected to the feature input register and the result buffer, and a control register connected to the feature input register, the result buffer, and the comparator. The comparator is configured to compare the first feature data with the first temporary data successively. The control register is configured to control the first feature data of the feature input register and the first temporary data to be input to the comparator, output a comparison result to the result buffer and the feature input register of the next basic computation unit, and, after sorting, output the first temporary data last stored in the result buffer as a first data result. Type: Grant. Filed: January 24, 2023. Date of Patent: November 12, 2024. Assignee: Nanjing SemiDrive Technology LTD. Inventors: Yu Wang, Junyuan Wu
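The compare-and-forward behavior of a chain of such units amounts to a systolic insertion sort: each unit keeps one value in its result buffer and passes the loser of the comparison to the next unit. The keep-the-smaller policy below is an assumption (the patent's comparison direction is not specified in the abstract):

```python
# Sketch: a chain of basic computation units, each retaining the smallest
# value it has seen and forwarding the displaced value downstream.
def systolic_sort(values, n_units):
    """After all inputs stream through, units hold the n smallest, sorted."""
    kept = [None] * n_units  # each unit's result buffer
    for v in values:
        for i in range(n_units):
            if kept[i] is None or v < kept[i]:
                kept[i], v = v, kept[i]  # unit keeps smaller, forwards old
            if v is None:
                break  # value absorbed; nothing left to forward
    return kept

result = systolic_sort([5, 1, 4, 2, 3], n_units=3)
```

In hardware all units compare in parallel each cycle; the nested Python loop serializes the same dataflow.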
-
Patent number: 12141096. Abstract: A computerized method for determining and utilizing the effectiveness of lifecycle management for storage of interactions-related objects. In a computerized system communicating with multi-tier storage in a cloud environment, having a lifecycle-rules data-storage to store one or more lifecycle-rules, a Retention Effectiveness Calculation (REC) module is operated. The operating of the REC module includes: (i) retrieving all lifecycle-rules from the lifecycle-rules data-storage; (ii) for each lifecycle-rule in the lifecycle-rules data-storage, calculating a Rule Effectiveness Score (RES); (iii) grouping all the calculated RES by media type; (iv) for each media type, calculating an Object Retention Score (ORS) for the media type; (v) dividing an aggregation of the ORS of all media types by the total number of media types to yield a total ORS for a contact-center; and (vi) updating each lifecycle-rule by changing the span of interactions-related objects in active-storage. Type: Grant. Filed: April 17, 2023. Date of Patent: November 12, 2024. Assignee: INCONTACT INC. Inventor: Seemit Shah
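Steps (i) through (v) can be sketched as a small pipeline. The abstract does not give the RES or ORS formulas, so the scoring below (fraction of the active-storage span during which objects were accessed, averaged per media type) is a labeled assumption:

```python
# Sketch of the REC pipeline (i)-(v). The RES and ORS formulas are NOT
# from the patent; they are illustrative assumptions.
def retention_effectiveness(rules):
    """rules: dicts with 'media_type', 'active_days', 'accessed_days'."""
    # (ii) per-rule effectiveness score (assumed formula).
    for r in rules:
        r["res"] = r["accessed_days"] / r["active_days"]
    # (iii) group the calculated RES by media type.
    by_media = {}
    for r in rules:
        by_media.setdefault(r["media_type"], []).append(r["res"])
    # (iv) ORS per media type: mean RES of that type (assumed).
    ors = {m: sum(v) / len(v) for m, v in by_media.items()}
    # (v) total ORS: aggregate across media types / number of media types.
    return sum(ors.values()) / len(ors)

total = retention_effectiveness([
    {"media_type": "audio", "active_days": 100, "accessed_days": 50},
    {"media_type": "video", "active_days": 100, "accessed_days": 100},
])
```

Step (vi) would then feed `total` (and the per-rule scores) back into each rule to shorten or lengthen the active-storage span.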
-
Patent number: 12141097. Abstract: The present teaching generally relates to providing pre-validated data buckets for online experiments. In a non-limiting embodiment, user activity data representing user activity for a first plurality of user identifiers may be obtained. A first set of values and a second set of values, representing first and second user engagement parameters, respectively, may be generated for each user identifier based on the user activity data. A first ranking and a second ranking may be determined for the first and second sets, respectively. A first exclusion range including a first number of values to be removed from the first and second sets may be determined. A homogenous value set may be generated by removing the first number of values from the first and second sets, where each value from the homogenous value set corresponds to a user identifier available to be placed in a data bucket for an online experiment. Type: Grant. Filed: August 4, 2023. Date of Patent: November 12, 2024. Assignee: YAHOO ASSETS LLC. Inventors: Russell Chen, Miao Chen, Don Matheson, Mahendrasinh Ramsinh Jadav
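A sketch of the rank-and-exclude step: users whose engagement values fall at the extremes of either ranking are removed, leaving a homogeneous pool for bucketing. Trimming both tails of each ranking is an assumed reading of the "exclusion range":

```python
# Sketch: trimming extreme users from two engagement rankings to yield a
# homogeneous pool. The trim-both-tails policy is an assumption.
def homogenize(values_a, values_b, n_exclude):
    """Drop users ranked in the top/bottom n_exclude of either metric."""
    def extremes(vals):
        ranked = sorted(range(len(vals)), key=lambda i: vals[i])
        return set(ranked[:n_exclude]) | set(ranked[-n_exclude:])

    excluded = extremes(values_a) | extremes(values_b)
    return [i for i in range(len(values_a)) if i not in excluded]

# Users 0 and 4 are the extremes of both metrics; users 1-3 remain.
pool = homogenize([1, 50, 51, 52, 99], [5, 40, 41, 42, 90], n_exclude=1)
```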
-
Patent number: 12141098. Abstract: An exemplary system and method facilitate the identification and/or extraction of content-hiding software, e.g., in a software curation environment (e.g., Apple's App Store). In some embodiments, the exemplary system and method may be applied to U.S.-based platforms as well as international platforms in Russia, India, China, among others. Type: Grant. Filed: June 30, 2022. Date of Patent: November 12, 2024. Assignee: The Florida State University Research Foundation, Inc. Inventors: Gokila Dorai, Sudhir Aggarwal, Charisa Powell, Neet Patel
-
Patent number: 12141099. Abstract: Examples described herein generally relate to a scalable multi-tier storage system. An entry may be added and/or deleted within the storage system. To delete an entry, the storage system may determine whether the entry corresponds to a file or a directory based on directory metadata, request deletion of the directory metadata associated with the entry from the directory volume based on determining that the entry corresponds to the directory, and further request deletion of the file from a file volume based on a determination that the entry corresponds to the file. To add an entry, the storage system may generate directory metadata associated with the entry in the directory volume based on a determination that the entry corresponds to the directory, and may further allocate file metadata in the file volume based on a determination that the entry corresponds to the file. Type: Grant. Filed: March 8, 2022. Date of Patent: November 12, 2024. Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC. Inventors: Vladimirs Petters, Roopesh Battepati, David Kruse, Mathew George
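The file-versus-directory routing on deletion can be sketched with plain dicts standing in for the directory and file volumes; the metadata fields (`is_dir`, `file_id`) are illustrative assumptions:

```python
# Sketch: routing entry deletion to the directory volume or file volume
# based on directory metadata. Dicts stand in for the real storage tiers.
def delete_entry(name, directory_volume, file_volume):
    meta = directory_volume.get(name)
    if meta is None:
        return False                            # no such entry
    if meta["is_dir"]:
        del directory_volume[name]              # directory: metadata only
    else:
        file_volume.pop(meta["file_id"], None)  # file: remove file data too
        del directory_volume[name]
    return True

dirs = {"docs": {"is_dir": True}, "a.txt": {"is_dir": False, "file_id": 7}}
files = {7: b"contents"}
delete_entry("a.txt", dirs, files)  # removes both metadata and file data
```

Splitting metadata and file data across separate volumes is what lets each tier scale independently.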