Patents Issued on January 9, 2024
-
Patent number: 11868231
Abstract: A technique is described for evaluating code at a local computing device before deploying the code to a cloud computing platform to be compiled. In an example embodiment, class files including the code in a programming language associated with the cloud computing environment are loaded by a local computer system, for example, associated with a software developer. The local computer system then parses the code to identify elements in the code and checks the identified elements. Errors in the code are identified based on the checking and are displayed to a user (e.g., the developer), for example, via a graphical user interface of a code editor application.
Type: Grant
Filed: October 28, 2021
Date of Patent: January 9, 2024
Assignee: Certinia Inc.
Inventors: Kevin James Jones, Simon Kristiansen Ejsing
-
Patent number: 11868232
Abstract: The execution-time reporting of telemetry of execution of a software program. Subscribers submit subscriptions to telemetry of the software program. As each subscription is received, the telemetry scope of the subscription is evaluated to determine what portion of an object model is to be augmented. The augmented portion will include portion(s) related to the scope of telemetry subscribed to in the subscription. Thereafter, that portion of the object model is indeed augmented as execution of the computer program proceeds further. Subsequently, telemetry reports are generated for a subscription based on the interpretation of a defined subscription-specific portion of the object model.
Type: Grant
Filed: September 2, 2022
Date of Patent: January 9, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Pritesh Rajesh Kanani, Siunie Aquawati Sutjahjo, James Feore, Wei Zhong
-
Patent number: 11868233
Abstract: A system for read-access of a regulated system, the system comprising a specialized data store, at least one memory, and a flexible reader. The specialized data store is able to receive at least a portion of a set of procedures that define a respective set of systematic data and executable operations. The at least one memory includes at least one set of data related to the set of procedures.
Type: Grant
Filed: November 9, 2022
Date of Patent: January 9, 2024
Assignee: GE Aviation Systems LLC
Inventors: Joachim Karl Ulf Hochwarth, Terrell Michael Brace, Víctor Mario Leal Herrera, Antonio Lugo Trejo
-
Patent number: 11868234
Abstract: Monitoring and troubleshooting tools provide the capability to visualize different levels of a client's application that is deployed as a suite of independent but cooperating services (e.g., an application that includes a monolithic application and a microservices-based application), collect values of monitored or tracked metrics at those different levels, and visualize values of the metrics at those levels. For example, metrics values can be generated for components of the monolithic application and/or for components of a microservice of the microservice-based application.
Type: Grant
Filed: March 18, 2022
Date of Patent: January 9, 2024
Assignee: SPLUNK Inc.
Inventors: Mayank Agarwal, Steven Karis, Justin Smith
-
Patent number: 11868235
Abstract: Examples include aggregating logs, where each of the logs is associated with a workflow instance. Each log includes information indicative of an event occurring during the workflow instance. Further, examples include assigning, based on user intent of the workflow instance, a workflow name to each log, where the user intent is indicative of an outcome of execution of the workflow instance, and assigning an instance identifier to each log, where the instance identifier corresponds to the workflow instance. Further, identifying a subset of the plurality of logs having an identical workflow name and an identical instance identifier, associating a tracking identifier to the subset, and creating an index of processed logs, wherein each processed log in the index includes the tracking identifier. Further, analyzing the index of processed logs based on a set of rules and identifying, based on the analysis, an error in execution of each workflow instance.
Type: Grant
Filed: July 21, 2021
Date of Patent: January 9, 2024
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Akshar Kumar Ranka, Nitish Midha, Christopher Wild
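The grouping and indexing steps this abstract describes can be sketched in a few lines. This is an illustrative sketch only, not the patented implementation; the field names (`workflow_name`, `instance_id`, `tracking_id`) are hypothetical:

```python
from collections import defaultdict

def index_logs(logs):
    """Group logs by (workflow_name, instance_id) and tag each group
    with a tracking identifier, producing an index of processed logs."""
    groups = defaultdict(list)
    for log in logs:
        groups[(log["workflow_name"], log["instance_id"])].append(log)
    index = []
    # Each distinct (name, instance) pair gets its own tracking identifier.
    for tracking_id, subset in enumerate(groups.values()):
        for log in subset:
            index.append({**log, "tracking_id": tracking_id})
    return index

logs = [
    {"workflow_name": "provision-vm", "instance_id": "a1", "event": "start"},
    {"workflow_name": "provision-vm", "instance_id": "a1", "event": "error"},
    {"workflow_name": "delete-vm", "instance_id": "b2", "event": "start"},
]
indexed = index_logs(logs)
```

A rule engine could then scan `indexed` for error events sharing a tracking identifier to localize which workflow instance failed.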
-
Patent number: 11868236
Abstract: Certain aspects of the present disclosure provide techniques for handling crash events in a software application using application-agnostic machine learning models. An example method generally includes receiving a data set of crash reports from a software application for analysis. Using a first neural network, a representation of each respective crash report in the data set is generated. The data set of crash reports and a mapping between functions in the software application and a multidimensional space are input into the first neural network. Each respective crash report in the data set is classified using a second neural network and the representation of each crash report in the data set. One or more actions are taken with respect to the software application based on the classifying each respective crash report in the data set.
Type: Grant
Filed: July 27, 2021
Date of Patent: January 9, 2024
Assignee: INTUIT INC.
Inventors: Sudhindra A, Sri Aurobindo Munagala
-
Patent number: 11868237
Abstract: Techniques for monitoring operating statuses of an application and its dependencies are provided. A monitoring application may collect and report the operating status of the monitored application and each dependency. Through use of existing monitoring interfaces, the monitoring application can collect operating status without requiring modification of the underlying monitored application or dependencies. The monitoring application may determine a problem service that is a root cause of an unhealthy state of the monitored application. Dependency analyzer and discovery crawler techniques may automatically configure and update the monitoring application. Machine learning techniques may be used to determine patterns of performance based on system state information associated with performance events and provide health reports relative to a baseline status of the monitored application. Also provided are techniques for testing a response of the monitored application through modifications to API calls.
Type: Grant
Filed: December 15, 2022
Date of Patent: January 9, 2024
Assignee: Capital One Services, LLC
Inventors: Muralidharan Balasubramanian, Eric K. Barnum, Julie Dallen, David Watson
-
Patent number: 11868238
Abstract: A method includes receiving a test input, obtaining a resource access feedback indicating an access for a resource based on the test input, selectively, based on the resource access feedback indicating whether a first-time accessed resource is accessed based on the test input, adding the test input to a test input queue, and performing a fuzz test based on the test input queue with the test input.
Type: Grant
Filed: June 2, 2021
Date of Patent: January 9, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Hongze Wu, Jianyun Qu
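The feedback-driven queueing this abstract describes resembles coverage-guided fuzzing, but keyed on resource accesses rather than code paths. A minimal sketch, assuming a toy `run_target` that reports which resources an input touched (all names hypothetical):

```python
def fuzz_round(test_inputs, run_target):
    """Keep only inputs whose execution touches a resource not seen
    before, mimicking first-time-access feedback."""
    seen_resources = set()
    queue = []
    for test_input in test_inputs:
        accessed = run_target(test_input)   # resources touched by this input
        if accessed - seen_resources:       # any first-time accessed resource?
            queue.append(test_input)        # worth fuzzing further
            seen_resources |= accessed
    return queue

# Toy target: the input's length determines which "file" it opens.
def run_target(data):
    return {f"/res/{len(data) % 3}"}

queue = fuzz_round([b"a", b"bb", b"ccc", b"dddd"], run_target)
# b"dddd" touches the same resource as b"a", so it is dropped.
```

Inputs retained in `queue` would then seed the next round of mutation and fuzzing.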
-
Patent number: 11868239
Abstract: Embodiments presented herein provide techniques for evaluating an asynchronous application using a test framework. The test framework may perform a load test of an asynchronous application or service composed from a collection of applications or services. To do so, the test framework may submit transactions to a distributed application at a specified transaction rate and monitor how the distributed application operates at that transaction rate. An aggregate load test component may evaluate the remaining work pending at work accumulation points of the distributed application to determine whether the distributed application can sustain the specified transaction rate. A transaction tracking component may initiate transactions to generate load at the specified transaction rate without blocking while the transactions are processed by the distributed application.
Type: Grant
Filed: June 7, 2016
Date of Patent: January 9, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Ryan Preston Gantt, Carlos Alejandro Arguelles, Aman Ahmed, Brian Thomas Kachmarck, Phillip Scott Segel, Michael Leo Weiss
-
Patent number: 11868240
Abstract: An information processing device comprises one or more known hardware devices including a processor and memory. An intelligent test program is provided to drive the device to develop a smoke test for a target program the name, use and functionality of which is unknown to the test program. The intelligent test program can generate a report on the functionality of the target program and can capture call back functions associated with the target program in order to automatically develop a smoke test script file for use in subsequent smoke test runs on the device.
Type: Grant
Filed: July 23, 2021
Date of Patent: January 9, 2024
Assignee: Rimo Capital Ltd.
Inventor: Alon Moss
-
Patent number: 11868241
Abstract: A method for optimizing a verification regression includes obtaining data, by a processor, of previously executed runs of at least one verification regression session; extracting from the data, by the processor, values of one or a plurality of control knobs and values of one or a plurality of verification metrics that were recorded during the execution for each of the previously executed runs of said at least one verification regression; finding, by the processor, correlation between said one or a plurality of the control knobs and each said one or a plurality of verification metrics, and generating a set of one or a plurality of control conditions based on the found correlation; and applying, by the processor, the generated set of one or a plurality of control conditions on the verification environment or on the DUT, or on both, to obtain a new verification regression session.
Type: Grant
Filed: December 10, 2019
Date of Patent: January 9, 2024
Assignee: Cadence Design Systems, Inc.
Inventors: Yael Kinderman, Yosinori Watanabe, Michele Petracca, Ido Avraham
-
Patent number: 11868242
Abstract: Embodiments of the present disclosure provide methods, systems, apparatuses, and computer program products for selecting a test suite for an API. In one embodiment, a computing entity or apparatus is configured to receive test patterns and heuristics, receive an input API, the input API comprising API specifications, parse the input API to extract the API specifications, and based at least in part on the extracted API specifications and the test patterns and heuristics, select a test suite, wherein the test suite is programmatically generated using a machine learning model and comprises one or more test routines, one or more data values, and one or more expected results.
Type: Grant
Filed: May 27, 2022
Date of Patent: January 9, 2024
Assignee: Liberty Mutual Insurance Company
Inventor: Gordon Merritt
-
Patent number: 11868243
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing topological scheduling on a machine-learning accelerator having an array of tiles. One of the methods includes performing, at each time step of a plurality of time steps corresponding respectively to columns within each of a plurality of wide columns of the tile array, operations comprising: performing respective multiplications using tiles in a respective tile column for the time step, computing a respective output result for each respective tile column for the time step including computing a sum of results of the multiplications for the tile column, and storing the respective output result for the tile column in a particular output RAM having a location within the same tile column and on a row from which the output result will be read by a subsequent layer of the model.
Type: Grant
Filed: June 21, 2022
Date of Patent: January 9, 2024
Assignee: Google LLC
Inventor: Lukasz Lew
-
Patent number: 11868244
Abstract: A compressed memory system of a processor-based system includes a memory partitioning circuit for partitioning a memory region into data regions with different priority levels. The system also includes a cache line selection circuit for selecting a first cache line from a high priority data region and a second cache line from a low priority data region. The system also includes a compression circuit for compressing the cache lines to obtain a first and a second compressed cache line. The system also includes a cache line packing circuit for packing the compressed cache lines such that the first compressed cache line is written to a first predetermined portion and the second cache line or a portion of the second compressed cache line is written to a second predetermined portion of the candidate compressed cache line. The first predetermined portion is larger than the second predetermined portion.
Type: Grant
Filed: January 10, 2022
Date of Patent: January 9, 2024
Assignee: QUALCOMM Incorporated
Inventors: Norris Geng, Richard Senior, Gurvinder Singh Chhabra, Kan Wang
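The packing scheme described here (a larger fixed portion for the high-priority compressed line, the remainder for the low-priority line or its prefix) can be sketched in software. This is a simplified illustration with made-up sizes, not the patented circuit:

```python
def pack_cache_lines(high, low, line_size=64, high_portion=48):
    """Pack two compressed cache lines into one candidate line: the
    high-priority line fills the larger fixed portion, and the
    low-priority line (or a prefix of it) fills the smaller remainder."""
    low_portion = line_size - high_portion
    assert len(high) <= high_portion, "high-priority line must fit its portion"
    packed = bytearray(line_size)
    packed[:len(high)] = high                       # larger predetermined portion
    fit = min(len(low), low_portion)
    packed[high_portion:high_portion + fit] = low[:fit]   # smaller portion
    return bytes(packed)

# 40-byte high-priority line and 20-byte low-priority line into a 64-byte line:
packed = pack_cache_lines(b"H" * 40, b"L" * 20)
```

Only the first 16 bytes of the low-priority line fit here; a real design would record how much of each line was stored so decompression knows where to resume.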
-
Patent number: 11868245
Abstract: Devices and techniques for improving memory access operations of a memory device are provided. In an example, a method can include loading multiple LBA-to-physical address (L2P) regions of an L2P table from memory arrays of the memory device to a mapping cache in response to determining the LBA of the memory access command is not within an L2P region currently loaded in the mapping cache. When the memory access command is a sequential command, the multiple L2P regions loaded to the mapping cache can provide improved memory access performance.
Type: Grant
Filed: September 21, 2020
Date of Patent: January 9, 2024
Assignee: Micron Technology, Inc.
Inventors: Xinghui Duan, Bin Zhao, Jianxiong Huang
-
Patent number: 11868246
Abstract: According to one embodiment, a memory system includes a nonvolatile memory, configuration unit, address translation unit, write unit and control unit. The configuration unit assigns write management areas included in the nonvolatile memory to spaces. The write management area is a unit of an area which manages the number of writes. The address translation unit translates a logical address of write data into a physical address of a space corresponding to the write data. The write unit writes the write data to a position indicated by the physical address in the nonvolatile memory. The control unit controls the spaces individually with respect to the nonvolatile memory.
Type: Grant
Filed: April 19, 2022
Date of Patent: January 9, 2024
Assignee: Kioxia Corporation
Inventor: Shinichi Kanno
-
Patent number: 11868247
Abstract: This disclosure provides for improvements in managing multi-drive, multi-die or multi-plane NAND flash memory. In one embodiment, the host directly assigns physical addresses and performs logical-to-physical address translation in a manner that reduces or eliminates the need for a memory controller to handle these functions, and initiates functions such as wear leveling in a manner that avoids competition with host data accesses. A memory controller optionally educates the host on array composition, capabilities and addressing restrictions. Host software can therefore interleave write and read requests across dies in a manner unencumbered by memory controller address translation. For multi-plane designs, the host writes related data in a manner consistent with multi-plane device addressing limitations. The host is therefore able to "plan ahead" in a manner supporting host issuance of true multi-plane read commands.
Type: Grant
Filed: March 21, 2023
Date of Patent: January 9, 2024
Assignee: Radian Memory Systems, Inc.
Inventors: Andrey V. Kuzmin, James G. Wayda
-
Patent number: 11868248
Abstract: A garbage collection process is performed in a storage system which comprises a storage control node, and storage nodes which implement a striped volume comprising a plurality of stripes having strips that are distributed over the storage nodes. The storage control node selects a victim stripe for garbage collection, and an empty stripe in the striped volume. The storage control node determines a data strip of the victim stripe having predominantly valid data based on a specified threshold, and sends a copy command to a target storage node which comprises the predominantly valid data strip, to cause the target storage node to copy the predominantly valid data strip to a data strip of the empty stripe which resides on the target storage node. The storage control node writes valid data blocks of the victim stripe to remaining data strips of the empty stripe, and releases the victim stripe for reuse.
Type: Grant
Filed: February 25, 2022
Date of Patent: January 9, 2024
Assignee: Dell Products L.P.
Inventors: Yosef Shatsky, Doron Tal
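The key decision in this scheme is which data strips of the victim stripe are "predominantly valid" and worth copying whole on the node where they reside, versus rewriting their valid blocks individually. A minimal sketch of that selection, with a hypothetical 80% threshold and made-up strip records:

```python
def plan_stripe_gc(victim_strips, threshold=0.8):
    """Split a victim stripe's data strips into those copied whole
    (predominantly valid, per the threshold) and individual valid
    blocks to be rewritten into the empty stripe."""
    copy_whole, rewrite_blocks = [], []
    for strip in victim_strips:
        valid = strip["valid"]                  # per-block validity bitmap
        if sum(valid) / len(valid) >= threshold:
            copy_whole.append(strip["id"])      # node-local copy command
        else:
            rewrite_blocks.extend(
                (strip["id"], i) for i, v in enumerate(valid) if v)
    return copy_whole, rewrite_blocks

strips = [
    {"id": "s0", "valid": [1, 1, 1, 1]},   # fully valid -> copy whole
    {"id": "s1", "valid": [1, 0, 0, 1]},   # sparse -> rewrite valid blocks
]
copy_whole, rewrite_blocks = plan_stripe_gc(strips)
```

Copying a predominantly valid strip in place on its node avoids moving most of its data across the network, which is the efficiency the abstract is after.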
-
Patent number: 11868249
Abstract: A method, performed by an electronic device, includes: based on a target event associated with an application being initiated, transmitting initiation of the target event to a runtime environment of the application, and after transmitting the initiation of the target event to the runtime environment, based on a memory value allocated to the application exceeding a threshold value for determining whether to initiate a garbage collection, skipping performing the garbage collection and updating a bound memory value, defined in the garbage collection, and the threshold value.
Type: Grant
Filed: August 3, 2022
Date of Patent: January 9, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Kwanhee Jeong, Hyojong Kim
-
Patent number: 11868250
Abstract: A processor having a functional slice architecture is divided into a plurality of functional units ("tiles") organized into a plurality of slices. Each slice is configured to perform specific functions within the processor, which may include memory slices (MEM) for storing operand data, and arithmetic logic slices for performing operations on received operand data. The tiles of the processor are configured to stream operand data across a first dimension, and receive instructions across a second dimension orthogonal to the first dimension. The timing of data and instruction flows are configured such that corresponding data and instructions are received at each tile with a predetermined temporal relationship, allowing operand data to be transmitted between the slices of the processor without any accompanying metadata. Instead, each slice is able to determine what operations to perform on received data based upon the timing at which the data is received.
Type: Grant
Filed: January 24, 2022
Date of Patent: January 9, 2024
Assignee: Groq, Inc.
Inventors: Jonathan Alexander Ross, Dennis Charles Abts, John Thompson, Gregory M. Thorson
-
Patent number: 11868251
Abstract: Provided are a memory access method and a server for performing the same. A memory access method performed by an optical interleaver included in a server includes receiving a request message from a requester processing engine included in the server, setting receiving buffers corresponding to different wavelengths corresponding to the number of external memory/storage devices connected to the server, multiplexing the same request message at the different wavelengths according to a wavelength division multiplexing (WDM) scheme, and transmitting the multiplexed request messages to the respective external memory/storage devices, wherein an address of a virtual memory managed by the server is separated and stored according to an interleaving scheme by a responder included in each of the external memory/storage devices.
Type: Grant
Filed: July 6, 2022
Date of Patent: January 9, 2024
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jongtae Song, Daeub Kim, Ji Wook Youn, Kyeong-Eun Han, Joon Ki Lee
-
Patent number: 11868252
Abstract: Memory devices and systems with post-packaging master die selection, and associated methods, are disclosed herein. In one embodiment, a memory device includes a plurality of memory dies. Each memory die of the plurality includes a command/address decoder. The command/address decoders are configured to receive command and address signals from external contacts of the memory device. The command/address decoders are also configured, when enabled, to decode the command and address signals and transmit the decoded command and address signals to every other memory die of the plurality. Each memory die further includes circuitry configured to enable, or disable, or both individual command/address decoders of the plurality of memory dies. In some embodiments, the circuitry can enable a command/address decoder of a memory die of the plurality after the plurality of memory dies are packaged into a memory device.
Type: Grant
Filed: December 6, 2019
Date of Patent: January 9, 2024
Assignee: Micron Technology, Inc.
Inventors: Evan C. Pearson, John H. Gentry, Michael J. Scott, Greg S. Gatlin, Lael H. Matthews, Anthony M. Geidl, Michael Roth, Markus H. Geiger, Dale H. Hiscock
-
Patent number: 11868253
Abstract: Memory devices, systems and methods include a buffer interface to translate high speed data interactions on a host interface side into slower, wider data interactions on a DRAM interface side. The slower, and wider DRAM interface may be configured to substantially match the capacity of the narrower, higher speed host interface. In some configurations, the buffer interface may be configured to provide multiple sub-channel interfaces each coupled to one or more regions within the memory structure and configured to facilitate data recovery in the event of a failure of some portion of the memory structure. Selected memory devices, systems and methods include an individual DRAM die, or one or more stacks of DRAM dies coupled to a buffer die.
Type: Grant
Filed: July 11, 2022
Date of Patent: January 9, 2024
Inventors: Brent Keeth, Owen Fay, Chan H. Yoo, Roy E. Greeff, Matthew B. Leslie
-
Patent number: 11868254
Abstract: An electronic device includes a cache, a memory, and a controller. The controller stores an epoch counter value in metadata for a location in the memory when a cache block evicted from the cache is stored in the location. The controller also controls how the cache block is retained in the cache based at least in part on the epoch counter value when the cache block is subsequently retrieved from the location and stored in the cache.
Type: Grant
Filed: September 30, 2021
Date of Patent: January 9, 2024
Assignee: Advanced Micro Devices, Inc.
Inventor: Nuwan Jayasena
-
Patent number: 11868255
Abstract: Techniques for providing horizontally scaled caching of versioned data are provided. In some aspects, the techniques described herein relate to a method including initializing a first version cache (VC) object based on a version of data stored in a data storage device; replicating the first VC to generate a second VC; receiving a write operation at the first VC; generating a delta for the write operation, the delta representing a change in the version of data; writing the delta to a persistent replication log, the persistent replication log storing an ordered set of deltas including the delta; writing data in the write operation to the data storage device; and applying the ordered set of deltas at the second VC to update data stored by the second VC.
Type: Grant
Filed: January 28, 2022
Date of Patent: January 9, 2024
Assignee: WORKDAY, INC.
Inventors: Darren Lee, Christof Bornhoevd
-
Patent number: 11868256
Abstract: Processing a read request to read metadata from an entry of a metadata page may include: determining whether the metadata page is cached; responsive to determining the metadata page is cached, obtaining the first metadata from the cached metadata page; responsive to determining the metadata page is not cached, determining whether the requested metadata is in a metadata log of metadata changes stored in a volatile memory; and responsive to determining the metadata is in the metadata log of metadata changes stored in the volatile memory, obtaining the requested metadata from the metadata log. Processing a write request that overwrites an existing value of a metadata page with an updated value may include: recording a metadata change in the metadata log that indicates to update the metadata page with the updated value; and performing additional processing during destaging that uses the existing value prior to overwriting it with the updated value.
Type: Grant
Filed: July 20, 2021
Date of Patent: January 9, 2024
Assignee: EMC IP Holding Company LLC
Inventors: Philip Love, Vladimir Shveidel, Bar David
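The read path this abstract walks through (cached page first, then the in-memory metadata log, then presumably disk) can be sketched directly. This is an illustrative sketch under assumed data shapes, not the claimed implementation:

```python
def read_metadata(page_id, entry, page_cache, metadata_log):
    """Read-path sketch: serve from the cached page if present,
    otherwise look for a pending change in the in-memory metadata log."""
    if page_id in page_cache:
        return page_cache[page_id][entry]
    # Scan the ordered log newest-first so the latest pending change wins.
    for logged_page, logged_entry, value in reversed(metadata_log):
        if (logged_page, logged_entry) == (page_id, entry):
            return value
    return None  # would fall back to reading the metadata page from disk

cache = {"p1": {0: "A"}}
log = [("p2", 3, "old"), ("p2", 3, "new")]
```

Serving reads from the log lets writes complete by appending a change record, deferring the actual page update to destaging as the abstract's write path describes.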
-
Patent number: 11868257
Abstract: Embodiments of the present disclosure generally relate to a target device handling overlap write commands. In one embodiment, a target device includes a non-volatile memory and a controller coupled to the non-volatile memory. The controller includes a random accumulated buffer, a sequential accumulated buffer, and an overlap accumulated buffer. The controller is configured to receive a new write command, classify the new write command, and write data associated with the new write command to one of the random accumulated buffer, the sequential accumulated buffer, or the overlap accumulated buffer. Once the overlap accumulated buffer becomes available, the controller first flushes to the non-volatile memory the data in the random accumulated buffer and the sequential accumulated buffer that was received prior in sequence to the data in the overlap accumulated buffer. The controller then flushes the available overlap accumulated buffer, ensuring that new write commands override prior write commands.
Type: Grant
Filed: July 8, 2022
Date of Patent: January 9, 2024
Assignee: Western Digital Technologies, Inc.
Inventor: Shay Benisty
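The classification step (random vs. sequential vs. overlap) can be illustrated with LBA ranges. This is a simplified sketch of one plausible classification rule, not the controller's actual logic:

```python
def classify_write(start_lba, length, pending):
    """Classify a write against pending writes: 'overlap' if its LBA
    range intersects a pending command, 'sequential' if it starts
    right after one, otherwise 'random'."""
    new = range(start_lba, start_lba + length)
    for p_start, p_len in pending:
        existing = range(p_start, p_start + p_len)
        # Two ranges intersect when the later start precedes the earlier stop.
        if max(new.start, existing.start) < min(new.stop, existing.stop):
            return "overlap"
    if any(start_lba == p_start + p_len for p_start, p_len in pending):
        return "sequential"
    return "random"

pending = [(0, 8)]  # one pending write covering LBAs 0-7
```

Writes classified as overlap go to the overlap accumulated buffer, which the abstract's flush ordering drains last so the newer data overrides the earlier commands.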
-
Patent number: 11868258
Abstract: A scalable cache coherency protocol for a system including a plurality of coherent agents coupled to one or more memory controllers is described. The memory controller may implement a precise directory for cache blocks from the memory to which the memory controller is coupled. Multiple requests to a cache block may be outstanding, and snoops and completions for requests may include an expected cache state at the receiving agent, as indicated by a directory in the memory controller when the request was processed, to allow the receiving agent to detect race conditions. In an embodiment, the cache states may include a primary shared and a secondary shared state. The primary shared state may apply to a coherent agent that bears responsibility for transmitting a copy of the cache block to a requesting agent. In an embodiment, at least two types of snoops may be supported: snoop forward and snoop back.
Type: Grant
Filed: January 27, 2023
Date of Patent: January 9, 2024
Assignee: Apple Inc.
Inventors: James Vash, Gaurav Garg, Brian P. Lilly, Ramesh B. Gunna, Steven R. Hutsell, Lital Levy-Rubin, Per H. Hammarlund, Harshavardhan Kaushikkar
-
Patent number: 11868259
Abstract: Embodiments herein describe a coherency protocol for a distributed computing topology that permits large stalls on various interfaces. In one embodiment, the computing topology includes multiple boards which each contain multiple processors. When a particular core on a processor wants access to data that is not currently stored in its cache, the core can first initiate a request to search for the cache line in the caches for other cores on the same processor. If the cache line is not found, the cache coherency protocol permits the processor to then broadcast a request to the other processors on the same board. If a processor on the same board does not have the data, the processor can then broadcast the request to the other boards in the system. The processors in those boards can then search their caches to identify the data.
Type: Grant
Filed: April 4, 2022
Date of Patent: January 9, 2024
Assignee: International Business Machines Corporation
Inventors: Vesselina Papazova, Robert J. Sonnelitter, III, Chad G. Wilson, Chakrapani Rayadurgam
-
Patent number: 11868260
Abstract: A method of caching large data objects of greater than 1 GB, comprising: populating a sharded cache with large data objects backfilled from a data store; servicing large data object requests from a plurality of worker nodes via the sharded cache, comprising deterministically addressing objects within the sharded cache; and if a number of requests for an object within a time exceeds a threshold: after receiving a request from a worker node for the object, sending the worker node a redirect message directed to a hot cache, wherein the hot cache is to backfill from a hot cache backfill, and wherein the hot cache backfill is to backfill from the sharded cache.
Type: Grant
Filed: December 23, 2021
Date of Patent: January 9, 2024
Assignee: GM CRUISE HOLDINGS LLC
Inventors: Hui Dai, Seth Alexander Bunce
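The two mechanisms in this claim, deterministic shard addressing and a threshold-triggered redirect to a hot cache, can be sketched together. This is an illustrative sketch with an arbitrary hash and a hypothetical threshold, not the patented system:

```python
import hashlib

def lookup(key, shards, request_counts, hot_threshold=3):
    """Deterministically route a key to a shard; once a key's request
    count passes the threshold, answer with a redirect to the hot cache."""
    request_counts[key] = request_counts.get(key, 0) + 1
    if request_counts[key] > hot_threshold:
        return ("redirect", "hot-cache")
    # Same key always hashes to the same shard (deterministic addressing).
    shard = int(hashlib.sha256(key.encode()).hexdigest(), 16) % len(shards)
    return ("shard", shard)

counts = {}
shards = ["s0", "s1", "s2"]
first = lookup("obj-42", shards, counts)
for _ in range(3):
    last = lookup("obj-42", shards, counts)   # 4th request crosses the threshold
```

Redirecting only hot objects keeps the sharded tier simple while letting a dedicated hot cache absorb request spikes for popular multi-gigabyte objects.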
-
Patent number: 11868261
Abstract: Techniques are described herein for prediction of a buffer pool size (BPS). Before performing BPS prediction, gathered data are used to determine whether a target workload is in a steady state. Historical utilization data gathered while the workload is in a steady state are used to predict object-specific BPS components for database objects, accessed by the target workload, that are identified for BPS analysis based on shares of the total disk I/O requests, for the workload, that are attributed to the respective objects. Preference of analysis is given to objects that are associated with larger shares of disk I/O activity. An object-specific BPS component is determined based on a coverage function that returns a percentage of the database object size (on disk) that should be available in the buffer pool for that database object. The percentage is determined using either a heuristic-based or a machine learning-based approach.
Type: Grant
Filed: July 20, 2021
Date of Patent: January 9, 2024
Assignee: Oracle International Corporation
Inventors: Peyman Faizian, Mayur Bency, Onur Kocberber, Seema Sundara, Nipun Agarwal
-
Patent number: 11868262
Abstract: A memory request, including an address, is accessed. The memory request also specifies a type of an operation (e.g., a read or write) associated with an instance (e.g., a block) of data. A group of caches is selected using a bit or bits in the address. A first hash of the address is performed to select a cache in the group. A second hash of the address is performed to select a set of cache lines in the cache. Unless the operation results in a cache miss, the memory request is processed at the selected cache. When there is a cache miss, a third hash of the address is performed to select a memory controller, and a fourth hash of the address is performed to select a bank group and a bank in memory.
Type: Grant
Filed: February 9, 2023
Date of Patent: January 9, 2024
Assignee: Marvell Asia Pte, Ltd.
Inventors: Richard E. Kessler, David Asher, Shubhendu S Mukherjee, Wilson P. Snyder, II, David Carlson, Jason Zebchuk, Isam Akkawi
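The layered selection this abstract describes (address bits pick the group, then successive hashes pick the cache and the set) can be sketched with arbitrary multiplicative hashes. The hash constants and sizes below are made up for illustration; the patent does not specify them:

```python
def route_request(address, n_groups=2, caches_per_group=4, n_sets=64):
    """Sketch of the layered selection: low bit(s) pick the cache group,
    then two different hashes of the address pick the cache within the
    group and the set of cache lines within that cache."""
    group = address & (n_groups - 1)                          # address bit(s)
    cache = (address * 0x9E3779B1 >> 16) % caches_per_group   # first hash
    cache_set = (address * 0x85EBCA6B >> 8) % n_sets          # second hash
    return group, cache, cache_set

g, c, s = route_request(0x12345678)
```

On a miss, the same pattern repeats with two further hashes to pick the memory controller and the bank; using distinct hash functions at each level spreads addresses evenly so no single cache, controller, or bank becomes a hotspot.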
-
Patent number: 11868263
Abstract: A microprocessor includes a virtually-indexed L1 data cache that has an allocation policy that permits multiple synonyms to be co-resident. Each L2 entry is uniquely identified by a set index and a way number. A store unit, during a store instruction execution, receives a store physical address proxy (PAP) for a store physical memory line address (PMLA) from an L1 entry hit upon by a store virtual address, and writes the store PAP to a store queue entry. The store PAP comprises the set index and the way number of an L2 entry that holds a line specified by the store PMLA. The store unit, during the store commit, reads the store PAP from the store queue, looks up the store PAP in the L1 to detect synonyms, writes the store data to one or more of the detected synonyms, and evicts the non-written detected synonyms.
Type: Grant
Filed: May 24, 2022
Date of Patent: January 9, 2024
Assignee: Ventana Micro Systems Inc.
Inventors: John G. Favor, Srivatsan Srinivasan, Robert Haskell Utley
-
Patent number: 11868264
Abstract: One embodiment provides circuitry coupled with cache memory and a memory interface, the circuitry to compress compute data at multiple cache line granularity, and a processing resource coupled with the memory interface and the cache memory. The processing resource is configured to perform a general-purpose compute operation on compute data associated with multiple cache lines of the cache memory. The circuitry is configured to compress the compute data before a write of the compute data via the memory interface to the memory bus, in association with a read of the compute data associated with the multiple cache lines via the memory interface, decompress the compute data, and provide the decompressed compute data to the processing resource.
Type: Grant
Filed: February 13, 2023
Date of Patent: January 9, 2024
Assignee: Intel Corporation
Inventors: Abhishek R. Appu, Altug Koker, Joydeep Ray, David Puffer, Prasoonkumar Surti, Lakshminarayanan Striramassarma, Vasanth Ranganathan, Kiran C. Veernapu, Balaji Vembu, Pattabhiraman K
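The compress-on-write / decompress-on-read flow at multi-cache-line granularity can be sketched with a software stand-in for the hardware compressor. Here `zlib` and the dictionary "memory" are illustrative only; the patent describes dedicated circuitry, not a software codec.

```python
import zlib

CACHE_LINE = 64  # bytes per cache line (typical, assumed here)

def write_compute_data(memory, addr, lines):
    # Compress compute data spanning multiple cache lines before the
    # write goes out over the memory interface.
    assert all(len(l) == CACHE_LINE for l in lines)
    memory[addr] = zlib.compress(b"".join(lines))

def read_compute_data(memory, addr):
    # Decompress on read and hand cache-line-sized chunks back to the
    # processing resource.
    blob = zlib.decompress(memory[addr])
    return [blob[i:i + CACHE_LINE] for i in range(0, len(blob), CACHE_LINE)]

mem = {}
lines = [bytes([i]) * CACHE_LINE for i in range(4)]
write_compute_data(mem, 0x1000, lines)
assert read_compute_data(mem, 0x1000) == lines
```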
-
Patent number: 11868265
Abstract: Techniques are described herein for processing asynchronous power transition events while maintaining a persistent memory state. In some embodiments, a system may proxy asynchronous reset events through system logic, which generates an interrupt to invoke a special persistent flush interrupt handler that performs a persistent cache flush prior to invoking a hardware power transition. Additionally or alternatively, the system may include a hardware backup mechanism to ensure all resets and power transitions requested in hardware reliably complete within a bounded window of time, independent of whether the persistent cache flush handler succeeds.
Type: Grant
Filed: March 25, 2022
Date of Patent: January 9, 2024
Assignee: Oracle International Corporation
Inventor: Benjamin John Fuller
-
Patent number: 11868266
Abstract: Memory bank redistribution based on power consumption of multiple memory banks of a memory die can provide an overall reduced power consumption of a memory device. The respective power consumption of each bank can be determined and memory operations to the banks can be distributed based on the determined power consumption. The memory die can include an interface coupled to each bank. Control circuitry can remap logical to physical addresses of the banks based on one or more parameters such as a power consumption of each bank, counts of memory operations for each bank, and/or a relative physical distance of each bank.
Type: Grant
Filed: March 11, 2021
Date of Patent: January 9, 2024
Assignee: Micron Technology, Inc.
Inventors: Ji-Hye G Shin, Kazuaki Ohara, Rosa M. Avila-Hernandez, Rachael R. Skreen
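One plausible remap policy consistent with the abstract is to steer the most frequently accessed logical banks to the physical banks that draw the least power. This pairing heuristic is an assumption for illustration; the patent only says the remap considers parameters such as power, operation counts, and physical distance.

```python
def remap_banks(power_mw, op_counts):
    # Hypothetical policy: rank logical banks by access count (hottest
    # first) and physical banks by power draw (cheapest first), then
    # pair them off to build the logical-to-physical remap table.
    logical = sorted(range(len(op_counts)), key=lambda b: op_counts[b], reverse=True)
    physical = sorted(range(len(power_mw)), key=lambda b: power_mw[b])
    return dict(zip(logical, physical))

# Bank 2 is the hottest and bank 1 draws the least power, so 2 -> 1.
mapping = remap_banks(power_mw=[10, 7, 9, 8], op_counts=[5, 2, 50, 9])
assert mapping[2] == 1
```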
-
Patent number: 11868267
Abstract: A system includes a first memory component having a particular access size associated with performance of memory operations, a second memory component to store a logical to physical data structure whose entries map management segments to respective physical locations in the memory component, wherein each management segment corresponds to an aggregated plurality of logical access units having the particular access size, and a processing device, operably coupled to the memory component. The processing device can perform memory management operations on a per management segment basis by: for each respective management segment, tracking access requests to constituent access units corresponding to the respective management segment, and determining whether to perform a particular memory management operation on the respective management segment based on the tracking.
Type: Grant
Filed: March 30, 2022
Date of Patent: January 9, 2024
Assignee: Micron Technology, Inc.
Inventors: Edward C. McGlaughlin, Gary J. Lucas, Joseph M. Jeddeloh
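The per-segment bookkeeping can be sketched briefly. The aggregation factor of 8 access units per segment and the hot-segment threshold are hypothetical; the abstract leaves both the segment size and the triggered management operation unspecified.

```python
UNITS_PER_SEGMENT = 8  # hypothetical aggregation factor

class SegmentTracker:
    def __init__(self, threshold=10):
        self.counts = {}
        self.threshold = threshold

    def record_access(self, access_unit):
        # Access units are aggregated into management segments; tracking
        # and management decisions happen per segment, not per unit.
        seg = access_unit // UNITS_PER_SEGMENT
        self.counts[seg] = self.counts.get(seg, 0) + 1
        return self.should_manage(seg)

    def should_manage(self, seg):
        # e.g., trigger wear leveling or migration once a segment is hot.
        return self.counts.get(seg, 0) >= self.threshold

t = SegmentTracker(threshold=3)
for unit in (0, 1, 2):       # all land in segment 0
    triggered = t.record_access(unit)
assert triggered             # third access crosses the threshold
```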
-
Patent number: 11868268
Abstract: A computer system includes physical memory devices of different types that store randomly-accessible data in a main memory of the computer system. In one approach, data is stored in memory at one or more logical addresses allocated to an application by an operating system. The data is physically stored in a first memory device of a first memory type (e.g., NVRAM). The operating system determines an access pattern for the stored data. In response to determining the access pattern, the data is moved from the first memory device to a second memory device of a different memory type (e.g., DRAM).
Type: Grant
Filed: February 7, 2022
Date of Patent: January 9, 2024
Assignee: Micron Technology, Inc.
Inventors: Kenneth Marion Curewitz, Sean S. Eilert, Hongyu Wang, Samuel E. Bradshaw, Shivasankar Gunasekaran, Justin M. Eno, Shivam Swami
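A tiny sketch of the migration decision, using a hypothetical hotness threshold as the "access pattern" trigger (the patent leaves the pattern-detection criterion open):

```python
HOT_THRESHOLD = 4  # hypothetical accesses-per-interval cutoff

def maybe_migrate(page, accesses_per_interval, placement):
    # The OS observes the access pattern of data allocated to an app
    # and moves hot pages from slow NVRAM to fast DRAM.
    if placement[page] == "nvram" and accesses_per_interval >= HOT_THRESHOLD:
        placement[page] = "dram"
    return placement[page]

placement = {0x10: "nvram", 0x20: "nvram"}
assert maybe_migrate(0x10, 9, placement) == "dram"   # hot: migrated
assert maybe_migrate(0x20, 1, placement) == "nvram"  # cold: stays put
```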
-
Patent number: 11868269
Abstract: Tracking memory block access frequency in processor-based devices is disclosed herein. In one exemplary embodiment, a processor-based device provides a processing element (PE) that is configured to include an access count table for tracking accesses to memory blocks. The access count table is a packed table that comprises a plurality of access count values, each of which corresponds to a memory block of a plurality of memory blocks. Upon detecting a memory access operation (i.e., data-side operations such as memory load operations, memory store operations, atomic increment operations, set operations, and the like, or instruction-side operations such as code fetch operations) directed to a given memory block, the PE increments an access count value corresponding to the memory block. The access count value then can be accessed (e.g., by a process executing on the PE), and used to determine an access frequency for the memory block.
Type: Grant
Filed: September 28, 2021
Date of Patent: January 9, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Andrew Joseph Rushing, Thomas Philip Speier
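A packed table squeezes one small counter per memory block into contiguous bits. The 4-bit saturating counter width below is an assumption for illustration; the patent does not fix a counter size or a saturation policy.

```python
COUNTER_BITS = 4  # hypothetical packed counter width
COUNTER_MAX = (1 << COUNTER_BITS) - 1

class AccessCountTable:
    """Packed table: one small saturating counter per memory block."""
    def __init__(self, num_blocks):
        self.bits = 0            # all counters packed into one bit vector
        self.num_blocks = num_blocks

    def increment(self, block):
        shift = block * COUNTER_BITS
        count = (self.bits >> shift) & COUNTER_MAX
        if count < COUNTER_MAX:  # saturate rather than wrap into a neighbor
            self.bits += 1 << shift

    def read(self, block):
        return (self.bits >> (block * COUNTER_BITS)) & COUNTER_MAX

t = AccessCountTable(16)
for _ in range(3):
    t.increment(5)
assert t.read(5) == 3 and t.read(4) == 0  # neighbors are untouched
```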
-
Patent number: 11868270
Abstract: A storage device includes a storage controller and a host interface which sends an address translation service request to a host. The host interface includes an address translation cache which stores first address information included in the address translation service request, and an address translation service latency storage which stores latency-related information including a first time until the address translation cache receives an address translation service response corresponding to the address translation service request from the host. After the host interface sends the address translation service request to the host based on the latency-related information including the first time, and after the first time elapses, the storage controller polls the host interface.
Type: Grant
Filed: August 18, 2022
Date of Patent: January 9, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Seung Moon Woo, Seon Bong Kim, Han-Ju Lee
-
Patent number: 11868271
Abstract: A method for accessing compressed computer memory residing in physical computer memory is disclosed. In the method, compressed memory blocks are represented as sectors, wherein all sectors contain a fixed number of compressed memory blocks, have a fixed logical size in the form of the fixed number of compressed memory blocks, and have varying physical sizes in the form of the total size of data stored in the respective compressed memory blocks.
Type: Grant
Filed: November 14, 2019
Date of Patent: January 9, 2024
Assignee: Zeropoint Technologies AB
Inventors: Angelos Arelakis, Vasileios Spiliopoulos, Per Stenström
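The fixed-logical / variable-physical split can be captured in a small data-structure sketch. The sector size of 4 blocks is a hypothetical choice; the patent only requires that the block count be fixed across all sectors.

```python
from dataclasses import dataclass, field

BLOCKS_PER_SECTOR = 4  # fixed logical size, in compressed memory blocks

@dataclass
class Sector:
    # Every sector holds the same number of compressed blocks (fixed
    # logical size), but its physical size varies with how well each
    # block compressed.
    blocks: list = field(default_factory=list)

    def physical_size(self):
        return sum(len(b) for b in self.blocks)

s = Sector(blocks=[b"\x00" * 10, b"\x01" * 3, b"\x02" * 7, b"\x03" * 12])
assert len(s.blocks) == BLOCKS_PER_SECTOR  # fixed logical size
assert s.physical_size() == 32             # varying physical size
```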
-
Patent number: 11868272
Abstract: Methods, apparatus, systems and articles of manufacture are disclosed for allocation in a victim cache system. An example apparatus includes a first cache storage, a second cache storage, a cache controller coupled to the first cache storage and the second cache storage and operable to receive a memory operation that specifies an address, determine, based on the address, that the memory operation evicts a first set of data from the first cache storage, determine that the first set of data is unmodified relative to an extended memory, and cause the first set of data to be stored in the second cache storage.
Type: Grant
Filed: September 30, 2022
Date of Patent: January 9, 2024
Assignee: Texas Instruments Incorporated
Inventors: Naveen Bhoria, Timothy David Anderson, Pete Michael Hippleheuser
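The eviction path described above can be sketched as follows; the dictionary-backed caches and the class name are illustrative stand-ins for the hardware controller:

```python
class VictimCacheController:
    """Sketch: clean victims evicted from the first cache are allocated
    into the second (victim) cache instead of being dropped."""
    def __init__(self):
        self.first = {}    # addr -> (data, dirty)
        self.second = {}   # victim storage: addr -> data

    def evict(self, addr):
        data, dirty = self.first.pop(addr)
        if not dirty:
            # Unmodified relative to extended memory: keep the line in
            # the victim cache so a re-access is cheap.
            self.second[addr] = data
        return dirty  # dirty data would be written back instead

c = VictimCacheController()
c.first[0x40] = (b"hello", False)
c.evict(0x40)
assert c.second[0x40] == b"hello"
```

Only clean lines are allocated this way; a modified line still owes a writeback and is handled separately.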
-
Patent number: 11868273
Abstract: Embodiments are directed to memory protection with hidden inline metadata to indicate data type and capabilities. An embodiment of a processor includes a processor core and cache memory. The processor core is to implant hidden inline metadata in one or more cachelines for the cache memory, the hidden inline metadata hidden at a linear address level, hidden from software, the hidden inline metadata to indicate data type or capabilities for the associated data stored on the same cacheline.
Type: Grant
Filed: June 29, 2019
Date of Patent: January 9, 2024
Assignee: Intel Corporation
Inventor: David M. Durham
-
Patent number: 11868274
Abstract: Systems, apparatuses, and methods related to a computer system having a processor and a main memory storing scrambled data are described. The processor may have a secure zone configured to store keys and an unscrambled zone configured to operate on unscrambled data. The processor can convert the scrambled data into the unscrambled data in the unscrambled zone using the keys retrieved from the secure zone in response to execution of instructions configured to operate on the unscrambled data. Another processor may also be coupled with the memory, but can be prevented from accessing the unscrambled data in the unscrambled zone.
Type: Grant
Filed: June 8, 2021
Date of Patent: January 9, 2024
Assignee: Lodestar Licensing Group LLC
Inventor: Steven Jeffrey Wallach
-
Patent number: 11868275
Abstract: Aspects of the present disclosure relate to encrypted data processing (EDAP). A processor includes a register file configured to store ciphertext data, an instruction fetch and decode unit configured to fetch and decode instructions, and a functional unit configured to process the stored ciphertext data. The functional unit further includes a decryption module configured to decrypt ciphertext data from the register file to obtain cleartext data using an encryption key stored within the functional unit. The functional unit further includes a local buffer configured to store the cleartext data. The functional unit further includes an arithmetic logical unit configured to generate cleartext computation results using the cleartext data. The functional unit further includes an encryption module configured to encrypt the cleartext computation results to generate ciphertext computation results for storage back into the register file.
Type: Grant
Filed: June 24, 2021
Date of Patent: January 9, 2024
Assignee: International Business Machines Corporation
Inventors: Manoj Kumar, Gianfranco Bilardi, Kattamuri Ekanadham, Jose E. Moreira, Pratap C. Pattnaik, Jessica Hui-Chun Tseng
-
Patent number: 11868276
Abstract: An example non-transitory computer readable storage medium comprising instructions that when executed cause a processor of a computing device to: in response to a trigger of a system management mode (SMM), verify all processor threads have been pulled into the SMM; in response to a successful verification, enable write access to a non-volatile memory of the computing device via two registers, where write access is disabled upon booting of the computing device; and upon exiting the SMM, disable the write access via the two registers.
Type: Grant
Filed: June 2, 2022
Date of Patent: January 9, 2024
Assignee: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
Inventors: Richard A Bramley, Baraneedharan Anbazhagan, Valiuddin Ali
-
Patent number: 11868277
Abstract: The data processing apparatus includes a memory protection setting storage unit capable of storing a plurality of address sections as memory protection setting targets, a plurality of first determination units provided for each of the address sections stored in the memory protection setting storage unit and provisionally determining whether or not an access request is permitted based on whether or not an access destination address specified by the access request corresponds to the address section acquired from the memory protection setting storage unit, and a second determination unit finally determining whether or not the access request is permitted based on the classification information and the results of provisional determinations by the first determination units.
Type: Grant
Filed: December 22, 2021
Date of Patent: January 9, 2024
Assignee: RENESAS ELECTRONICS CORPORATION
Inventor: Yasuhiro Sugita
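The two-stage permit check can be sketched as below. The section bounds, the "supervisor" classification, and the final-policy callable are hypothetical; the patent does not define what the classification information contains.

```python
def check_access(addr, sections, classification, final_policy):
    # Stage 1: one provisional check per configured address section;
    # each provisionally permits the access if the destination address
    # falls inside its section.
    provisional = [lo <= addr <= hi for (lo, hi) in sections]
    # Stage 2: the final determination combines the provisional results
    # with the access's classification information.
    return any(provisional) and final_policy(classification)

sections = [(0x1000, 0x1FFF), (0x8000, 0x8FFF)]
allow_supervisor = lambda cls: cls == "supervisor"
assert check_access(0x1200, sections, "supervisor", allow_supervisor)
assert not check_access(0x1200, sections, "user", allow_supervisor)
assert not check_access(0x5000, sections, "supervisor", allow_supervisor)
```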
-
Patent number: 11868278
Abstract: Embodiments are provided for protecting boot block space in a memory device. Such a memory device may include a memory array having a protected portion and a serial interface controller. The memory device may have a register that enables or disables access to the portion when data indicating whether to enable or disable access to the portion is written into the register via a serial data in (SI) input.
Type: Grant
Filed: February 24, 2022
Date of Patent: January 9, 2024
Inventor: Theodore T. Pekny
-
Patent number: 11868279
Abstract: Designs for a rackmount chassis having multiple card slots are presented herein. In one example, an apparatus includes a chassis configured to mount into a server rack, including a plurality of peripheral card slots, and a plurality of status lights configured to provide indications of operational status for an associated slot. The chassis further includes switch circuitry, including at least three switch elements, configured to couple the slots, wherein a first portion of ports on each of the switch elements is coupled to corresponding slots, a second portion of the ports on each of the switch elements is coupled to external ports of the chassis, and a third portion of the ports on each of the switch elements is coupled to at least another among the switch elements. The chassis may further include a plurality of external ports on the chassis communicatively coupled to the slots through the switch circuitry.
Type: Grant
Filed: September 26, 2022
Date of Patent: January 9, 2024
Assignee: Liqid Inc.
Inventors: Christopher R. Long, Andrew Rudolph Heyd, Brenden Rust
-
Patent number: 11868280
Abstract: A system can include a plurality of sequencers each configured to provide a number of sequenced output signals responsive to assertion of a respective sequencer enable signal provided thereto. The system can include chaining circuitry coupled to the plurality of sequencers. The chaining circuitry can comprise logic to: responsive to assertion of a primary enable signal received thereby, assert respective sequencer enable signals provided to the plurality of sequencers in accordance with a first sequence; and responsive to deassertion of the primary enable signal, assert the respective sequencer enable signals provided to the plurality of sequencers in accordance with a second sequence.
Type: Grant
Filed: January 3, 2023
Date of Patent: January 9, 2024
Assignee: Micron Technology, Inc.
Inventors: Keith A Benjamin, Thomas Dougherty
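A minimal sketch of the chaining behavior: on assertion of the primary enable, the sequencers are enabled in a first order; on deassertion, in a second order. Using the reverse order for power-down is a hypothetical choice here (a common convention for supply sequencing), not something the abstract specifies.

```python
def chain_sequencers(sequencers, primary_asserted):
    # First sequence on assertion of the primary enable; a distinct
    # second sequence (here, simply the reverse) on deassertion.
    return list(sequencers) if primary_asserted else list(reversed(sequencers))

seqs = ["core", "io", "memory"]
assert chain_sequencers(seqs, True) == ["core", "io", "memory"]
assert chain_sequencers(seqs, False) == ["memory", "io", "core"]
```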