Patent Applications Published on December 24, 2020
-
Publication number: 20200401486
Abstract: The invention provides a method for backing up insulator on-site monitoring data, comprising establishing connections between a CPU and a communication module, and between the communication module and a backup terminal, such that leakage current data of horizontal and inclined insulators, and environmental humidity data, are transmitted to the backup terminal for storage to perform the backup; the number of CPUs matches that of backup terminals; each CPU is connected with all backup terminals through a corresponding communication module; each CPU is also connected with a flash memory. By incorporating other structures and methods, the invention effectively addresses the problems of the existing data transmission mode, namely the volume of data that can be transmitted in a set time and the effect of the communication module's transmission capacity on data transmission.
Type: Application
Filed: December 8, 2019
Publication date: December 24, 2020
Inventors: Qihou SONG, Honggao FENG, Baichuan XU
-
Publication number: 20200401487
Abstract: Files are identified for file-level restore by bitmaps of cloud snapshots of a storage volume. The bitmaps comprise a data structure for each snapshot containing record numbers of files of the storage volume and a file status bit for each such file that indicates the presence or absence of the file in the corresponding snapshot. File numbers of files of interest are obtained from record numbers of records in a volume files index of the storage volume. The bitmaps can be searched to locate the snapshots containing particular files of interest without the necessity of mounting and searching the separate snapshots on a virtual machine.
Type: Application
Filed: June 19, 2019
Publication date: December 24, 2020
Applicant: EMC IP Holding Company, LLC
Inventors: Ganesh Ghodake, Sachin Patil
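The per-snapshot bitmap scheme described above can be sketched as follows. This is a minimal illustrative model, not the applicant's implementation; the class and method names, and the choice of a `bytearray` bitmap keyed by file record number, are assumptions:

```python
# Sketch: one presence bitmap per snapshot, one bit per file record number.
# Searching the bitmaps finds which snapshots contain a file without
# mounting any snapshot on a virtual machine.

class SnapshotIndex:
    def __init__(self):
        self.bitmaps = {}  # snapshot id -> bytearray bitmap

    def record(self, snapshot_id, present_records, volume_size):
        """Build the bitmap for one snapshot: bit set = file present."""
        bitmap = bytearray((volume_size + 7) // 8)
        for rec in present_records:
            bitmap[rec // 8] |= 1 << (rec % 8)
        self.bitmaps[snapshot_id] = bitmap

    def has_file(self, snapshot_id, rec):
        bitmap = self.bitmaps[snapshot_id]
        return bool(bitmap[rec // 8] & (1 << (rec % 8)))

    def snapshots_containing(self, rec):
        # File-level search across all snapshots using only the bitmaps.
        return [sid for sid in self.bitmaps if self.has_file(sid, rec)]
```

A lookup such as `snapshots_containing(record_number)` returns every snapshot holding that file, which is the search the abstract says avoids mounting each snapshot.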
-
Publication number: 20200401488
Abstract: Recovery of a database system by taking the database system offline is initiated. Thereafter, recovery operations specified by a redo log of the database system are replayed. A cleanup log is generated that identifies cleanup operations occurring during the replay of the recovery operations for garbage collection. Concurrent with the startup of the database, garbage collection of the cleanup operations as specified in the database savepoint is initiated. In addition, concurrent with the replay of the recovery operations, garbage collection of the cleanup operations specified by the cleanup log is initiated. The database system is later brought online after all of the recovery operations are replayed.
Type: Application
Filed: August 31, 2020
Publication date: December 24, 2020
Inventors: Thorsten Glebe, Werner Thesing, Christoph Roterring
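The ordering in this abstract (replay the redo log, accumulate a cleanup log, garbage-collect savepoint and replay-time cleanup work, then come online) can be modeled with a toy key-value store. The concurrency is modeled sequentially here, and all names and the log format are illustrative assumptions:

```python
# Toy model of the described recovery flow: replay redo operations while
# recording cleanup work, then hand savepoint cleanups plus the new
# cleanup log to garbage collection before the system goes online.

def recover(redo_log, savepoint_cleanups):
    cleanup_log = []
    state = {}
    for op, key, value in redo_log:          # replay recovery operations
        if op == "put":
            state[key] = value
        elif op == "delete":
            state.pop(key, None)
            cleanup_log.append(key)          # identify cleanup for GC
    # Savepoint cleanups and replay-time cleanups are both garbage collected;
    # in the patent this runs concurrently with startup and replay.
    garbage = list(savepoint_cleanups) + cleanup_log
    return state, garbage                    # system comes online after replay
```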
-
Publication number: 20200401489
Abstract: A streamlined approach analyzes block-level backups of VM virtual disks and creates both coarse and fine indexes of backed up VM data files in the block-level backups. The indexes (collectively the "content index") enable granular searching by filename, by file attributes (metadata), and/or by file contents, and further enable granular live browsing of backed up VM files. Thus, by using the illustrative data storage management system, ordinary block-level backups of virtual disks are "opened to view" through indexing. Any block-level copies can be indexed according to the illustrative embodiments, including file system block-level copies. The indexing occurs offline in an illustrative data storage management system, after VM virtual disks are backed up into block-level backup copies, and therefore the indexing does not cut into the source VM's performance. The disclosed approach is widely applicable to VMs executing in cloud computing environments and/or in non-cloud data centers.
Type: Application
Filed: December 10, 2019
Publication date: December 24, 2020
Inventors: Amit MITKAR, Vinit Dilip DHATRAK
-
Publication number: 20200401490
Abstract: Methods, systems, and devices for an efficient power scheme for redundancy are described. A memory device may include circuitry that stores memory address information related to one or more defective or unreliable memory components and that compares memory address information to memory addresses targeted for memory access operations. The memory device may selectively distribute a targeted memory address to one or more circuits within the circuitry based on whether those circuits store memory address information. Additionally or alternatively, the memory device may selectively power one or more circuits within the circuitry based on whether those circuits store memory address information.
Type: Application
Filed: June 19, 2019
Publication date: December 24, 2020
Inventors: Richard E. Fackenthal, Duane R. Mills
-
Publication number: 20200401491
Abstract: The disclosed embodiments provide a system for testing machine learning workflows. During operation, the system obtains a configuration for a staging test of a machine learning model, wherein the configuration includes a model name for the machine learning model, a duration of the staging test, and a use case associated with the machine learning model. Next, the system selects a staging test host for the staging test. The system then deploys the staging test on the staging test host in a staging environment, wherein the deployed staging test executes the machine learning model based on live traffic received from a production environment. After the staging test has completed, the system outputs a set of metrics representing a system impact of the machine learning model on the staging test host.
Type: Application
Filed: June 20, 2019
Publication date: December 24, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Ali Sadiq Mohamed, Manish Swaminathan, Shunlin Liang, Prateek Sachdev, Vivek Desai, Adam R. Peck, Sunny Sanjiv Ketkar
-
Publication number: 20200401492
Abstract: Embodiments of the present disclosure relate to container-level monitoring. Embodiments include detecting, by an agent of a virtual machine, an event. Embodiments include determining, by the agent of the virtual machine, an address related to the event. Embodiments include accessing, by the agent of the virtual machine, container mapping information. Embodiments include locating, by the agent of the virtual machine, the address in the container mapping information. Embodiments include determining, by the agent of the virtual machine, based on the locating, that the event is associated with a container. Embodiments include determining, by the agent of the virtual machine, one or more attributes of the container. Embodiments include determining, by the agent of the virtual machine, based on information related to the event and the one or more attributes of the container, whether to block or allow an action related to the event.
Type: Application
Filed: August 8, 2019
Publication date: December 24, 2020
Inventors: Shirish Vijayvargiya, Alok Nemchand Kataria, Rayanagouda Bheemanagouda Patil
-
Publication number: 20200401493
Abstract: Embodiments disclose a method and a device for processing frequency converter monitoring data and a storage medium. An embodiment of the method includes: acquiring a script file containing a monitoring parameter specifying field and a storage location; parsing the script file to acquire the storage location and the monitoring parameter specifying field and determining the monitoring parameter specified by the monitoring parameter specifying field; collecting the monitoring data corresponding to the monitoring parameter; and storing the monitoring data in the storage location. The embodiments provide a script file based processing solution for frequency converter monitoring data, with no special tracking or debugging software tool required. Thus, the implementation complexity is reduced and the service cost and time are saved.
Type: Application
Filed: February 12, 2019
Publication date: December 24, 2020
Applicant: Siemens Aktiengesellschaft
Inventor: Jing Wei ZHANG
-
Publication number: 20200401494
Abstract: Systems and methods for controlling one or more visual indicators, such as unit identification devices (UIDs), are provided. Control of such visual indicators allows lighting sequences to be displayed via the visual indicators in a data center or similar environment including multiple hardware devices or units, such as server blades in multiple chassis enclosures in a data center. In this way, users such as data center administrators, information technology (IT) personnel, etc. can be alerted to hardware events that impact hardware devices or units, such as hardware faults. The visual indicators can be controlled in such a way that animated lighting sequences can be used to guide users to the hardware devices or units experiencing the hardware event(s).
Type: Application
Filed: June 24, 2019
Publication date: December 24, 2020
Inventors: CRAIG A. BOEKER, JUSTIN YORK
-
Publication number: 20200401495
Abstract: Techniques for message selection for hardware tracing in system-on-chip (SoC) post-silicon debugging are described herein. An aspect includes receiving SoC design information corresponding to an SoC. Another aspect includes determining, based on the SoC design information, a set of messages that are exchanged between blocks of the SoC. Another aspect includes determining a set of possible combinations of messages of the set of messages. Another aspect includes determining a respective mutual information gain for each possible combination of messages in the set of possible combinations of messages. Another aspect includes selecting a combination of messages having a highest determined mutual information gain for monitoring via hardware tracing in the SoC.
Type: Application
Filed: June 21, 2019
Publication date: December 24, 2020
Inventors: Flavio M De Paula, Debjit Pal, Shobha Vasudevan, Abhishek Sharma
-
Publication number: 20200401496
Abstract: Techniques for displaying comments relative to video frames are described herein. The disclosed techniques include obtaining page data comprising a video file and rendering the page data to play a video comprising a plurality of frames; obtaining a comment file comprising a plurality of comments on the video; displaying the plurality of comments relative to the plurality of frames while playing the video; detecting a computer performance parameter while rendering the page data; and reducing a density of displayed comments in response to determining that the computer performance parameter is less than a predetermined value.
Type: Application
Filed: June 18, 2020
Publication date: December 24, 2020
Inventors: Zhaoxin Tan, Jingqiang Zhang, Qi Tang, Jianqiang Ding, Hao Liu
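The throttling step at the end of this abstract (fewer on-screen comments when a performance parameter drops below a threshold) can be sketched in a few lines. The function name, the use of frame rate as the performance parameter, and the 50% reduction ratio are all illustrative assumptions:

```python
# Sketch: thin out displayed comments when a measured performance
# parameter (e.g., frames per second) falls below a threshold.

def visible_comments(comments, perf_value, perf_threshold, reduced_ratio=0.5):
    """Return the subset of comments to display for the current window."""
    if perf_value >= perf_threshold:
        return list(comments)          # performance is fine: show all
    # Below threshold: reduce the density, keeping at least one comment.
    keep = max(1, int(len(comments) * reduced_ratio))
    return list(comments)[:keep]
```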
-
Publication number: 20200401497
Abstract: A detecting device includes a memory, and processing circuitry coupled to the memory and configured to: collect communication information from a communication device; cause a model to learn a characteristic of the communication information of the communication device using the communication information collected for each of the communication devices; input communication information on a detection target to the model; detect whether the communication information on the detection target indicates abnormal communication on the basis of an output result from the model; and cause the model to relearn when the number of detected abnormalities in the communication information during a predetermined evaluation period exceeds a first threshold value.
Type: Application
Filed: February 25, 2019
Publication date: December 24, 2020
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Takuya SAEKI, Iifan TYOU, Yukio NAGAFUCHI, Masaki TANIKAWA
-
Publication number: 20200401498
Abstract: Disclosed are methods, circuits, apparatus, systems and associated software modules for dynamically evaluating code behavior in runtime. There is provided a code testing platform and/or framework which may include: (1) a code execution environment instancing module (CEEIM), (2) code execution resources, (3) executed code isolation logic, and (4) code call response logic. The CEEIM may instance, on a computing platform, a code execution environment (CEE) which is at least partially isolated from external resources functionally associated with the computing platform. The CEE may include code execution resources adapted to execute code whose behavior is to be evaluated, wherein a resource call generated from code execution may be analyzed by the code isolation logic and may under certain conditions be routed to the code call response logic.
Type: Application
Filed: January 10, 2020
Publication date: December 24, 2020
Inventors: Eli Lopian, Doron Peretz
-
Publication number: 20200401499
Abstract: A real-time debugger implementation maintains and manages multiple debug contexts allowing developers to interact with real-time applications without "breaking" the system in which the debug application is executing. The debugger allows multiple debug contexts to exist and allows break points in real-time and non-real-time code portions of one or more applications executing on a debug enabled core of a processor. A debug monitor function may be implemented as a hardware logic module on the same integrated circuit as the processor. Higher priority interrupt service requests may be serviced while otherwise maintaining a context for the debug session (e.g., stopped at a developer defined breakpoint). Accordingly, the application developer executing the debugger may not have to be concerned with processing occurring on the processor that may be unrelated to the current debug session.
Type: Application
Filed: August 31, 2020
Publication date: December 24, 2020
Inventors: Jason Lynn PECK, Gary A. COOPER, Markus KOESLER
-
Publication number: 20200401500
Abstract: A real-time debugger implementation maintains and manages multiple debug contexts allowing developers to interact with real-time applications without "breaking" the system in which the debug application is executing. The debugger allows multiple debug contexts to exist and allows break points in real-time and non-real-time code portions of one or more applications executing on a debug enabled core of a processor. A debug monitor function may be implemented as a hardware logic module on the same integrated circuit as the processor. Higher priority interrupt service requests may be serviced while otherwise maintaining a context for the debug session (e.g., stopped at a developer defined breakpoint). Accordingly, the application developer executing the debugger may not have to be concerned with processing occurring on the processor that may be unrelated to the current debug session.
Type: Application
Filed: August 31, 2020
Publication date: December 24, 2020
Inventors: Jason Lynn PECK, Gary A. COOPER, Markus KOESLER
-
Publication number: 20200401501
Abstract: A real-time debugger implementation maintains and manages multiple debug contexts allowing developers to interact with real-time applications without "breaking" the system in which the debug application is executing. The debugger allows multiple debug contexts to exist and allows break points in real-time and non-real-time code portions of one or more applications executing on a debug enabled core of a processor. A debug monitor function may be implemented as a hardware logic module on the same integrated circuit as the processor. Higher priority interrupt service requests may be serviced while otherwise maintaining a context for the debug session (e.g., stopped at a developer defined breakpoint). Accordingly, the application developer executing the debugger may not have to be concerned with processing occurring on the processor that may be unrelated to the current debug session.
Type: Application
Filed: September 3, 2020
Publication date: December 24, 2020
Inventors: Jason Lynn PECK, Gary A. COOPER, Markus KOESLER
-
Publication number: 20200401502
Abstract: Methods and systems for detecting hard-coded strings in source code are described herein. According to an aspect of an example method, a first list of strings may be generated via a processor. The first list of strings may include strings that are embedded in source code of an application. A second list of strings may be generated. The second list of strings may include strings that are rendered via a user interface of the application. Each string of the first list of strings may be compared against the strings of the second list of strings. Based on the comparison, a filtered list of strings may be generated by removing, from the first list of strings, at least one string that does not have a match in the second list of strings. By this method, the software development process, and especially updating, maintaining, and localizing code, may become more efficient and cost-effective.
Type: Application
Filed: August 2, 2019
Publication date: December 24, 2020
Inventors: Bo Zang, Tianze Jiang, Taodong Lu
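The comparison step in this abstract reduces to a set intersection: keep only the source-embedded strings that also appear among UI-rendered strings, dropping those with no UI match. A minimal sketch, with illustrative names:

```python
# Sketch: filter strings embedded in source code against strings
# rendered by the UI. Source strings with no UI match are removed,
# leaving candidate hard-coded, user-visible strings.

def find_hardcoded_strings(source_strings, ui_strings):
    ui = set(ui_strings)  # fast membership test for the second list
    return [s for s in source_strings if s in ui]
```

For example, a debug-only identifier embedded in source but never rendered would be filtered out, while a literal like "Save" that reaches the UI survives as a localization candidate.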
-
Publication number: 20200401503
Abstract: A method and a system are provided for testing AI applications/systems on a System for Testing Artificial Intelligence Systems (STAIS) connectable to under-test AI systems (UTAIS) via AI test connectors/interfaces through an AI test platform. The method includes test modeling, data preparation, testing script generation, testing automation, and quality assurance. The system automatically identifies, analyzes, and displays all quality issues of a UTAIS.
Type: Application
Filed: January 23, 2020
Publication date: December 24, 2020
Inventor: Zeyu GAO
-
Publication number: 20200401504
Abstract: Methods, systems, and computer readable media for configuring a test system using source code of a device being tested are disclosed. According to one method, the method occurs at a network equipment test device. The method includes receiving one or more device source files associated with a device under test (DUT); analyzing the one or more device source files to determine configuration source code for configuring at least one test system resource in the network equipment test device, wherein analyzing the one or more device source files includes identifying functionality of the DUT based on device source code portions and determining, using the device source code portions, the configuration source code for testing the functionality of the DUT; configuring, using the configuration source code, the at least one test system resource; and testing the DUT using the at least one test system resource.
Type: Application
Filed: June 19, 2019
Publication date: December 24, 2020
Inventors: Christian Paul Sommers, Peter J. Marsico
-
Publication number: 20200401505
Abstract: The present invention relates to a method for automated testing of an Application Program Interface (API). Test requirement data for testing an API is received from a first database. Further, the test requirement data is translated into a first set of vectors. Furthermore, one or more test scripts are selected from a plurality of test scripts stored in a second database based on the output of a trained artificial neural network. The output, which indicates a probability of effectiveness associated with the one or more test scripts, is generated using the first set of vectors as inputs to the trained artificial neural network. The one or more test scripts are executed to test and validate the API.
Type: Application
Filed: August 5, 2019
Publication date: December 24, 2020
Inventors: Gopinath Chenguttuvan, Vaishali Rajakumari
-
Publication number: 20200401506
Abstract: A framework and a method for ad-hoc batch testing of APIs are provided, where batches of API calls are dynamically generated directly through the framework according to inputs identifying the required tests and the sources of the test data, rather than through execution of prewritten test scripts that explicitly write out the test API calls in preset sequences. When performing the validation for an API test, a test payload is generated for the test, and an endpoint is called using the test payload to obtain the response used for validation, where generating the test payload includes determining an API reference corresponding to the test, obtaining relevant data from the test data according to a reference key in the test, generating input assignment operations for one or more input parameters in the API reference according to the relevant data, and generating an API call based on the API reference.
Type: Application
Filed: September 25, 2019
Publication date: December 24, 2020
Inventors: Ramanathan Sathianarayanan, Krishna Bharath Kashyap
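The payload-generation steps in this abstract (look up the API reference, fetch test data via the test's reference key, assign inputs, build the call) can be sketched with plain dictionaries. All names, the table layouts, and the call format are hypothetical, not the patented framework's actual structures:

```python
# Sketch of dynamic test-payload generation for an ad-hoc batch API test.

API_REFERENCES = {
    # API reference: declares the input parameters of each endpoint.
    "create_user": {"params": ["name", "email"]},
}

TEST_DATA = {
    # Test data rows, addressed by a reference key carried in each test.
    "user_ok": {"name": "Ada", "email": "ada@example.com"},
}

def build_api_call(test):
    ref = API_REFERENCES[test["api"]]             # API reference for the test
    row = TEST_DATA[test["reference_key"]]        # relevant test data
    payload = {p: row[p] for p in ref["params"]}  # input assignment operations
    return {"endpoint": test["api"], "payload": payload}
```

A batch of such `test` dictionaries can then be expanded into API calls without any prewritten test script.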
-
Publication number: 20200401507
Abstract: A method for testing a software component implemented in a host system on the basis of one or more test campaigns, each test campaign including computer test cases and being associated with input test data. The method comprises the steps of: executing the computer test cases of each test campaign for an operating time of the software component, which provides output test data associated with each test campaign; determining a reference operating model and a data partition on the basis of the input and output test data associated with each test campaign; running the software component using input production run data, which provides output production run data; determining an operating characteristic of the software component on the basis of the reference operating models according to a comparison between the input and output production run data and the data from the data partitions associated with the one or more test campaigns.
Type: Application
Filed: June 19, 2020
Publication date: December 24, 2020
Inventors: Fateh KAAKAI, Béatrice PESQUET
-
Publication number: 20200401508
Abstract: The invention introduces a method for multi-namespace data access, performed by a controller, at least including: obtaining a host write command from a host, which includes user data and metadata associated with one or more Logical Block Addresses (LBAs); and programming the user data and the metadata into a user-data part and a metadata part of a segment of a Logical Unit Number (LUN), respectively, wherein a length of the metadata part is the maximum metadata length of a plurality of LBA formats that the controller supports.
Type: Application
Filed: January 2, 2020
Publication date: December 24, 2020
Applicant: Silicon Motion, Inc.
Inventor: Che-Wei HSU
-
Publication number: 20200401509
Abstract: The invention introduces an apparatus for handling flash physical-resource sets, at least including a random access memory (RAM), a processing unit and an address conversion circuit. The RAM includes multiple segments of temporary space and each segment thereof stores variables associated with a specific flash physical-resource set. The processing unit accesses user data of a flash physical-resource set when executing program code of a Flash Translation Layer (FTL). The address conversion circuit receives a memory address issued from the FTL, converts the memory address into a relative address of one segment of temporary space associated with the flash physical-resource set and outputs the relative address to the RAM for accessing a variable of the associated segment of temporary space.
Type: Application
Filed: January 2, 2020
Publication date: December 24, 2020
Applicant: Silicon Motion, Inc.
Inventor: Che-Wei HSU
-
Publication number: 20200401510
Abstract: There is provided a data storage device for managing memory resources by using a flash translation layer (FTL) for condensing mapping information. The FTL divides a total logical address space for input and output requests of a host into n virtual logical address streams, generates a preliminary cluster mapping table in accordance with stream attributes of the n virtual logical address streams, generates a condensed cluster mapping table by performing a k-means clustering algorithm on the preliminary cluster mapping table, and generates a cache cluster mapping table configured as a part of the condensed cluster mapping table frequently referred to by using a DFTL method. The FTL extends a space of data buffers allotted to non-mapped physical address streams to a DFTL cache map in a data buffer of a volatile memory device by the condensed cluster mapping table.
Type: Application
Filed: February 4, 2020
Publication date: December 24, 2020
Applicants: Samsung Electronics Co., Ltd., RESEARCH & BUSINESS FOUNDATION SUNGKYUNKWAN UNIVERSITY
Inventors: Sangwoo LIM, Dongkun SHIN
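The condensing step above merges streams with similar attributes into k clusters. A generic one-dimensional k-means, with a single numeric attribute per stream, illustrates the idea; the initialization scheme and all names are assumptions and this is not the patented FTL algorithm itself:

```python
# Illustrative 1-D k-means: streams whose attribute values land in the
# same cluster would share one entry in a condensed mapping table.

def kmeans_1d(values, k, iters=20):
    # Naive initialization: evenly spaced values from the sorted input.
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            i = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[i].append(v)
        # Move each center to the mean of its group (keep empty groups' centers).
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    assignment = [min(range(len(centers)), key=lambda i: abs(v - centers[i]))
                  for v in values]
    return centers, assignment
```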
-
Publication number: 20200401511
Abstract: A molded reinforced polymer case, which houses an integrated solid-state data storage drive, which attaches to a laptop. The solid-state drive connects to the laptop drive through a connective data cable.
Type: Application
Filed: March 17, 2020
Publication date: December 24, 2020
Applicant: Byrdbyte Creations Inc.
Inventors: Emil Fadi Kort, Maximillan Kent Dykes
-
Publication number: 20200401512
Abstract: An object-based storage system comprising a host system capable of executing applications for and with an object-based storage device (OSD). Exemplary configurations include a call interface, a physical layer interface, an object-based storage solid-state device (OSD-SSD), and are further characterized by the presence of a storage processor capable of processing object-based storage device algorithms interleaved with processing of physical storage device management. Embodiments include a storage controller capable of executing recognition, classification and tagging of application files, especially including image, music, and other media. Also disclosed are methods for initializing and configuring an OSD-SSD device.
Type: Application
Filed: September 2, 2020
Publication date: December 24, 2020
Inventor: Paul A. Duran
-
Publication number: 20200401513
Abstract: Systems and methods for adapting garbage collection (GC) operations in a memory device to a host write activity are described. A host write progress can be represented by an actual host write count relative to a target host write count. The host write activity may be estimated in a unit time such as per day, or accumulated over a specified time period. A memory controller can adjust an amount of memory space to be freed by a GC operation according to the host write progress. The memory controller can also dynamically reallocate a portion of the memory cells between a single level cell (SLC) cache and a multi-level cell (MLC) storage according to the host write progress.
Type: Application
Filed: June 19, 2019
Publication date: December 24, 2020
Inventors: Deping He, Qing Liang, David Aaron Palmer
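The adjustment described above (scale the space freed per GC pass by actual-vs-target host write progress) can be sketched as a simple policy function. The linear scaling and the clamping bounds are illustrative assumptions, not the patented controller logic:

```python
# Sketch: pick how many blocks a GC pass should free, scaled by
# host write progress (actual writes relative to the target count).

def gc_free_target(base_blocks, actual_host_writes, target_host_writes,
                   min_blocks=1, max_blocks=64):
    progress = actual_host_writes / target_host_writes
    blocks = round(base_blocks * progress)   # ahead of target -> free more
    return max(min_blocks, min(max_blocks, blocks))
```

A controller ahead of its write target frees more space per pass to keep free blocks available; one behind target frees less, reducing write amplification.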
-
Publication number: 20200401514
Abstract: Systems and methods for adapting garbage collection (GC) operations in a memory device to an estimated device age are discussed. An exemplary memory device includes a memory controller to track an actual device age, determine a device wear metric using a physical write count and total writes over an expected lifetime of the memory device, estimate a wear-indicated device age, and adjust an amount of memory space to be freed by a GC operation according to the wear-indicated device age relative to the actual device age. The memory controller can also dynamically reallocate a portion of the memory cells between a single level cell (SLC) cache and a multi-level cell (MLC) storage according to the wear-indicated device age relative to the actual device age.
Type: Application
Filed: June 19, 2019
Publication date: December 24, 2020
Inventors: Qing Liang, Deping He, David Aaron Palmer
-
Publication number: 20200401515
Abstract: Systems and methods for adapting garbage collection (GC) operations in a memory device to a pattern of host accessing the device are discussed. The host access pattern can be represented by how frequently the device is in idle states free of active host access. An exemplary memory device includes a memory controller to track a count of idle periods during a specified time window, and to adjust an amount of memory space to be freed by a GC operation in accordance with the count of idle periods. The memory controller can also dynamically reallocate a portion of the memory cells between a single level cell (SLC) cache and a multi-level cell (MLC) storage according to the count of idle periods during the specified time window.
Type: Application
Filed: June 19, 2019
Publication date: December 24, 2020
Inventors: Qing Liang, Deping He, David Aaron Palmer
-
Publication number: 20200401516
Abstract: A data storage device includes a memory device and a memory controller. The memory device includes multiple memory blocks. The memory controller determines whether execution of a garbage collection procedure is required according to a number of spare memory blocks. When the execution of the garbage collection procedure is required, the memory controller determines an execution period according to a latest editing status of a plurality of open memory blocks; starts the execution of the garbage collection procedure so as to perform at least a portion of the garbage collection procedure in the execution period; and suspends the execution of the garbage collection procedure when the execution period has expired but the garbage collection procedure is not finished. The memory controller further determines a time interval for continuing the execution of the garbage collection procedure later according to the latest editing status of the open memory blocks.
Type: Application
Filed: March 26, 2020
Publication date: December 24, 2020
Inventor: Kuan-Yu Ke
-
Publication number: 20200401517
Abstract: An arena-based memory management system is disclosed. In response to a call to reclaim memory storing a group of objects allocated in an arena, an object not in use of the group of objects allocated in the arena is collected. A live object of the group of objects is copied from the arena to a heap.
Type: Application
Filed: June 20, 2019
Publication date: December 24, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Peter Franz Valentin Sollich, Robert Lovejoy Goodwin, Charles Ryan Salada
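The reclaim step described above can be modeled in a few lines: when an arena is reclaimed, dead objects are dropped with the arena and live objects are copied out to the heap. In this sketch liveness comes from a caller-supplied predicate; a real collector would trace references. Names are illustrative:

```python
# Toy model of arena reclaim: live objects escape to the heap,
# everything else goes away with the arena's memory.

def reclaim_arena(arena, heap, is_live):
    for obj in arena:
        if is_live(obj):
            heap.append(obj)   # copy live object from arena to heap
    arena.clear()              # the arena's memory is reclaimed wholesale
    return heap
```

Reclaiming the whole arena at once is what makes arena allocation cheap; only the (ideally few) survivors pay the copying cost.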
-
Publication number: 20200401518
Abstract: There are provided a memory controller for performing a program operation and a memory system having the memory controller. The memory system includes a memory device including first and second planes each including a plurality of m-bit (m is a natural number of 2 or more) multi-level cell (MLC) blocks; and a memory controller for allocating a first address corresponding to a first MLC block of the m-bit MLC blocks in which first m-bit MLC data is to be programmed and a second address corresponding to a second MLC block of the m-bit MLC blocks in which second m-bit MLC data is to be programmed, and transmitting the allocated addresses and logical page data included in the m-bit MLC data to the memory device. The memory controller differently determines a transmission sequence of the logical page data according to whether the addresses correspond to the same plane among the planes.
Type: Application
Filed: November 27, 2019
Publication date: December 24, 2020
Inventor: Jeen PARK
-
Publication number: 20200401519
Abstract: Systems, apparatuses, and methods for maintaining region-based cache directories split between node and memory are disclosed. The system with multiple processing nodes includes cache directories split between the nodes and memory to help manage cache coherency among the nodes' cache subsystems. In order to reduce the number of entries in the cache directories, the cache directories track coherency on a region basis rather than on a cache line basis, wherein a region includes multiple cache lines. Each processing node includes a node-based cache directory to track regions which have at least one cache line cached in any cache subsystem in the node. The node-based cache directory includes a reference count field in each entry to track the aggregate number of cache lines that are cached per region. The memory-based cache directory includes entries for regions which have an entry stored in any node-based cache directory of the system.
Type: Application
Filed: July 2, 2020
Publication date: December 24, 2020
Inventors: Vydhyanathan Kalyanasundharam, Kevin M. Lepak, Amit P. Apte, Ganesh Balakrishnan
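The node-based directory's reference-count bookkeeping can be sketched as follows. The region size (16 lines) and all names are illustrative assumptions; the point is that one entry with a counter replaces up to sixteen per-line entries:

```python
# Sketch of a node-level region-based cache directory: coherence is
# tracked per region (a group of cache lines), and each entry keeps a
# reference count of lines currently cached within that region.

LINES_PER_REGION_SHIFT = 4   # 16 cache lines per region (assumption)

class RegionDirectory:
    def __init__(self):
        self.entries = {}    # region id -> cached-line reference count

    def line_cached(self, line_addr):
        region = line_addr >> LINES_PER_REGION_SHIFT
        self.entries[region] = self.entries.get(region, 0) + 1

    def line_evicted(self, line_addr):
        region = line_addr >> LINES_PER_REGION_SHIFT
        self.entries[region] -= 1
        if self.entries[region] == 0:
            del self.entries[region]   # last line gone: free the entry
```

When the count for a region drops to zero, its entry can be dropped, which is how region tracking keeps the directory small relative to per-line tracking.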
-
Publication number: 20200401520Abstract: A system includes a memory, a producer processor and a consumer processor. The memory includes a shared ring buffer, which has a partially overlapping active ring and processed ring. The producer processor is in communication with the memory and is configured to receive a request associated with a memory entry, store the request in a first slot of the shared ring buffer at a first offset, receive another request associated with another memory entry, and store the other request in a second slot (in the overlapping region adjacent to the first slot) of the shared ring buffer. The consumer processor is in communication with the memory and is configured to process the request and write the processed request in a third slot (outside of the overlapping region at a second offset and in a different cache-line than the second slot) of the shared ring buffer.Type: ApplicationFiled: June 18, 2019Publication date: December 24, 2020Inventor: Michael Tsirkin
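A simplified sketch of the shared-ring idea, with assumed sizes and offsets: the producer fills active slots at one offset, and the consumer writes each processed result at a second offset chosen so it lands in a different cache line than the slots the producer is filling, reducing cache-line bouncing between the two processors. The single overlapping buffer and the specific offsets are illustrative, not the patented layout.

```python
CACHE_LINE_SLOTS = 8      # assumed slots per cache line
RING_SIZE = 32
PROCESSED_OFFSET = 16     # assumed offset of the processed region

ring = [None] * RING_SIZE
head = 0                  # next active slot the producer fills
tail = 0                  # next slot the consumer processes

def produce(request):
    global head
    ring[head % RING_SIZE] = request
    head += 1

def consume():
    global tail
    req = ring[tail % RING_SIZE]
    # The processed copy is written two cache lines away from the slot just
    # consumed, so producer and consumer writes do not share a cache line.
    out = (tail + PROCESSED_OFFSET) % RING_SIZE
    ring[out] = ("processed", req)
    tail += 1
    return ring[out]
```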
-
Publication number: 20200401521Abstract: An example memory system may include a central processing unit (CPU) comprising a CPU cache, a storage class memory, a volatile memory and a memory controller. The memory controller is to store, in the storage class memory, a first cache line including first data and a first directory tag corresponding to the first data. The memory controller is to further store, in the storage class memory, a second cache line including second data and a second directory tag corresponding to the second data. The memory controller is to store, in the volatile memory, a third cache line that comprises the first directory tag and the second directory tag, the third cache line excluding the first data and the second data.Type: ApplicationFiled: June 19, 2019Publication date: December 24, 2020Inventors: Robert C. Elliott, James A. Fuxa
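The split the abstract describes can be sketched with two dictionaries standing in for the memories: each storage-class-memory line holds data plus its directory tag, while a volatile line packs the tags of adjacent lines together (and holds no data), so tag lookups avoid touching the slower medium. The two-tags-per-line packing ratio is an assumption.

```python
scm = {}    # storage-class memory: line addr -> (data, directory_tag)
dram = {}   # volatile memory: tag-line addr -> packed directory tags (no data)

def write_line(addr, data, tag):
    # First/second cache lines: data and tag stored together in SCM.
    scm[addr] = (data, tag)
    # Third cache line: tags of two adjacent lines packed into volatile
    # memory, excluding the data itself (packing ratio is assumed).
    tag_line = addr // 2
    tags = dram.setdefault(tag_line, [None, None])
    tags[addr % 2] = tag

def read_tag_fast(addr):
    # Directory state served from volatile memory without an SCM access.
    return dram[addr // 2][addr % 2]
```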
-
Publication number: 20200401522Abstract: Systems, methods, and non-transitory computer-readable media can determine that a user is interacting with a software application running on a computing device. One or more content items to be prefetched for the software application are identified based on one or more machine learning models. A request to prefetch the one or more content items for the software application is generated.Type: ApplicationFiled: March 30, 2018Publication date: December 24, 2020Inventor: Chenyong Xu
-
Publication number: 20200401523Abstract: According to one general aspect, an apparatus may include a multi-tiered cache system that includes at least one upper cache tier relatively closer, hierarchically, to a processor and at least one lower cache tier relatively closer, hierarchically, to a system memory. The apparatus may include a memory interconnect circuit hierarchically between the multi-tiered cache system and the system memory. The apparatus may include a prefetcher circuit coupled with a lower cache tier of the multi-tiered cache system, and configured to issue a speculative prefetch request to the memory interconnect circuit for data to be placed into the lower cache tier. The memory interconnect circuit may be configured to cancel the speculative prefetch request if the data exists in an upper cache tier of the multi-tiered cache system.Type: ApplicationFiled: August 16, 2019Publication date: December 24, 2020Inventors: Vikas SINHA, Teik TAN, Tarun NAKRA
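The cancellation behavior can be sketched as follows, with the cache tiers modeled as simple sets: the memory interconnect checks whether the requested data already exists in an upper tier before servicing the lower tier's speculative prefetch, and cancels the redundant fetch if so. Function and variable names are illustrative.

```python
upper_cache = set()   # lines present in an upper (closer-to-CPU) cache tier
lower_cache = set()   # lines held by the lower cache tier

def speculative_prefetch(line):
    # The interconnect cancels the speculative request when the data is
    # already cached in an upper tier, avoiding a redundant memory fetch.
    if line in upper_cache:
        return "cancelled"
    lower_cache.add(line)
    return "fetched"
```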
-
Publication number: 20200401524Abstract: A high-frequency and low-power L1 cache and associated access technique. The method may include inspecting a virtual address of an L1 data cache load instruction, and indexing into a row and a column of a way predictor table using metadata and a virtual address associated with the load instruction. The method may include matching information stored at the row and the column of the way predictor table to a location of a cache line. The method may include predicting the location of the cache line within the L1 data cache based on the information match. A hierarchy of way predictor tables may be used, with higher level way predictor tables refreshing smaller lower level way predictor tables. The way predictor tables may be trained to make better predictions over time. Only selected circuit macros need to be enabled based on the predictions, thereby saving power.Type: ApplicationFiled: August 21, 2019Publication date: December 24, 2020Inventor: Karthik SUNDARAM
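A minimal sketch of the row/column indexing and the training step, under assumed table dimensions and an assumed hash: the row comes from virtual-address bits and the column from a metadata hash, the stored value predicts the way (so only that circuit macro need be enabled), and mispredictions update the table so predictions improve over time. The hierarchy of tables is omitted for brevity.

```python
WAYS = 4
ROWS = 64
COLS = 8

# way_predictor[row][col] -> predicted way; dimensions and hashes are assumptions.
way_predictor = [[0] * COLS for _ in range(ROWS)]

def _index(vaddr, metadata):
    row = (vaddr >> 6) % ROWS        # row from virtual-address bits
    col = (vaddr ^ metadata) % COLS  # column from a metadata hash
    return row, col

def predict_way(vaddr, metadata):
    # Only the predicted way's circuit macro would be enabled, saving power.
    row, col = _index(vaddr, metadata)
    return way_predictor[row][col]

def train(vaddr, metadata, actual_way):
    # On a misprediction the entry is corrected, training the table.
    row, col = _index(vaddr, metadata)
    way_predictor[row][col] = actual_way
```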
-
Publication number: 20200401525Abstract: A storage system and method for enabling host-driven regional performance in memory are provided. In one embodiment, a method is provided comprising receiving a directive from a host device as to a preferred logical region of a non-volatile memory in a storage system; and based on the directive, modifying a caching policy specifying which pages of a logical-to-physical address map stored in the non-volatile memory are to be cached in a volatile memory of the storage system. Other embodiments are provided, such as modifying a garbage collection policy of the storage system based on information from the host device regarding a preferred logical region of the memory.Type: ApplicationFiled: June 18, 2019Publication date: December 24, 2020Applicant: Western Digital Technologies, Inc.Inventors: Ramanathan Muthiah, Ramkumar Ramamurthy, Judah Gamliel Hahn
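The modified caching policy might look like the sketch below, with one logical-to-physical (L2P) page per region for simplicity: after the host's directive names a preferred region, the eviction policy pins that region's L2P page in the volatile cache and evicts non-preferred pages first. Capacity and the eviction order are assumptions.

```python
CACHED_L2P_PAGES = 2   # assumed volatile-cache capacity, in L2P pages

preferred_region = None
l2p_cache = []         # L2P pages currently cached in volatile memory

def host_directive(region):
    # Host names a preferred logical region; its L2P page is now pinned.
    global preferred_region
    preferred_region = region

def touch_l2p_page(page):
    # One L2P page per logical region in this toy model.
    if page in l2p_cache:
        return
    if len(l2p_cache) >= CACHED_L2P_PAGES:
        # Evict a non-preferred page first; fall back to the oldest page.
        for victim in l2p_cache:
            if victim != preferred_region:
                l2p_cache.remove(victim)
                break
        else:
            l2p_cache.pop(0)
    l2p_cache.append(page)
```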
-
Publication number: 20200401526Abstract: A streaming engine employed in a digital data processor specifies a fixed read-only data stream defined by plural nested loops. An address generator produces addresses of data elements. A stream head register stores the data elements next to be supplied to functional units for use as operands. The streaming engine stores, for each of the nested loops, an early address of the data elements next to be fetched and a late address of a data element in the stream head register. The streaming engine likewise stores, for each of the nested loops, an early loop count of the data elements next to be fetched and a late loop count of a data element in the stream head register.Type: ApplicationFiled: June 30, 2020Publication date: December 24, 2020Inventors: Joseph Zbiciak, Timothy D. Anderson
-
Publication number: 20200401527Abstract: Apparatuses and methods related to providing coherent memory access. An apparatus for providing coherent memory access can include a memory array, a first processing resource, a first cache line and a second cache line coupled to the memory array, a first cache controller, and a second cache controller. The first cache controller coupled to the first processing resource and to the first cache line can be configured to provide coherent access to data stored in the second cache line and corresponding to a memory address. A second cache controller coupled through an interface to a second processing resource external to the apparatus and coupled to the second cache line can be configured to provide coherent access to the data stored in the first cache line and corresponding to the memory address.Type: ApplicationFiled: September 4, 2020Publication date: December 24, 2020Inventors: Timothy P. Finkbeiner, Troy D. Larsen
-
Publication number: 20200401528Abstract: A method, system, and computer program product for maintaining a cache obtain request data associated with a plurality of previously processed requests for aggregated data; predict, based on the request data, (i) a subset of the aggregated data associated with a subsequent request and (ii) a first time period associated with the subsequent request; determine, based on the first time period and a second time period associated with a performance of a data aggregation operation that generates the aggregated data, a third time period associated with instructing a memory controller managing a cache to evict cached data stored in the cache and load the subset of the aggregated data into the cache; and provide an invalidation request to the memory controller managing the cache to evict the cached data stored in the cache and load the subset of the aggregated data into the cache during the third time period.Type: ApplicationFiled: June 19, 2019Publication date: December 24, 2020Inventors: Abhinav Sharma, Sonny Thanh Truong
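The timing arithmetic in the abstract can be sketched simply: a stand-in predictor estimates when the next request for the aggregated data will arrive (here, the mean historical inter-arrival gap), and the evict-and-load window (the "third time period") is that predicted time minus the duration of the aggregation operation, so fresh data is in the cache just in time. The mean-gap predictor is an assumption standing in for whatever prediction the system actually uses.

```python
def predict_next_request(request_times):
    # Assumed stand-in predictor: next request arrives after the mean
    # historical inter-arrival interval.
    gaps = [b - a for a, b in zip(request_times, request_times[1:])]
    return request_times[-1] + sum(gaps) / len(gaps)

def schedule_cache_refresh(request_times, aggregation_duration):
    # Evict-and-load must begin early enough that the data aggregation
    # operation completes before the predicted request.
    return predict_next_request(request_times) - aggregation_duration
```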
-
Publication number: 20200401529Abstract: Wavefront loading in a processor is managed and includes monitoring a selected wavefront of a set of wavefronts. Reuse of memory access requests for the selected wavefront is counted. A cache hit rate in one or more caches of the processor is determined based on the counted reuse. Based on the cache hit rate, subsequent memory requests of other wavefronts of the set of wavefronts are modified by including a type of reuse of cache lines in requests to the caches. In the caches, storage of data in the caches is based on the type of reuse indicated by the subsequent memory access requests. Reused cache lines are protected by preventing cache line contents from being replaced by another cache line for a duration of processing the set of wavefronts. Caches are bypassed when streaming access requests are made.Type: ApplicationFiled: June 19, 2019Publication date: December 24, 2020Inventors: Xianwei ZHANG, John KALAMATIANOS, Bradford BECKMANN
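A minimal sketch of how the measured hit rate could be turned into the per-request reuse tag the abstract describes: no observed reuse means streaming accesses that bypass the caches, high reuse means cache lines get protected from replacement for the duration of the wavefront set, and everything else is handled normally. The threshold value is an assumption.

```python
def classify_access(reuse_count, total_accesses, threshold=0.5):
    # After monitoring one selected wavefront, later wavefronts tag their
    # memory requests based on the measured cache hit rate.
    hit_rate = reuse_count / total_accesses
    if hit_rate == 0:
        return "bypass"    # streaming: skip the caches entirely
    if hit_rate >= threshold:
        return "protect"   # prevent these lines from being replaced
    return "normal"
```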
-
Publication number: 20200401530Abstract: Various embodiments are provided for providing byte granularity accessibility of memory in a unified memory-storage hierarchy in a computing system by a processor. A location of one or more secondary memory medium pages in a secondary memory medium may be mapped into an address space of a primary memory medium to extend a memory-storage hierarchy of the secondary memory medium. The one or more secondary memory medium pages may be promoted from the secondary memory medium to the primary memory medium. The primary memory medium functions as a cache to provide byte level accessibility to the one or more primary memory medium pages. A memory request for the secondary memory medium page may be redirected using a promotion look-aside buffer (“PLB”) in a host bridge associated with the primary memory medium and the secondary memory medium.Type: ApplicationFiled: June 20, 2019Publication date: December 24, 2020Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATIONInventors: Ahmed ABULILA, Vikram SHARMA MAILTHODY, Zaid QURESHI, Jian HUANG, Nam SUNG KIM, Jinjun XIONG, Wen-Mei HWU
-
Publication number: 20200401531Abstract: A method for managing memory access for implementing at least one layer of a convolutional neural network is provided. The method comprises predicting an access procedure in relation to a portion of memory based on a characteristic of the convolutional neural network. In response to the prediction, the method comprises performing an operation to obtain and store a memory address translation, corresponding to the portion of memory, in storage in advance of the predicted access procedure. An apparatus is provided comprising at least one processor and storage. The apparatus is configured to predict an access procedure in relation to a portion of memory which is external to the processor. In response to the prediction, the apparatus is configured to obtain and store a memory address translation corresponding to the portion of memory in storage in advance of the predicted access procedure.Type: ApplicationFiled: June 20, 2019Publication date: December 24, 2020Inventors: Sharjeel SAEED, Daren CROXFORD, Graeme Leslie INGRAM
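Because a convolution layer walks its feature map in a predictable order, the pages it will touch next can be translated ahead of time, as in this sketch: a pre-filled translation store stands in for the storage the abstract mentions, and `translate` stands in for the page-table walk. The row-major walk pattern and all names are illustrative assumptions.

```python
tlb = {}  # virtual page -> physical page, filled in advance of the access

def prefetch_translations(base_page, pages_per_row, rows, translate):
    # The layer's characteristics make the upcoming pages predictable, so
    # their translations are obtained and stored before they are needed.
    for r in range(rows):
        page = base_page + r * pages_per_row
        tlb[page] = translate(page)

def access(page):
    # The predicted access now hits in the pre-filled translation store
    # instead of paying a translation walk on the critical path.
    return tlb.get(page)
```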
-
Publication number: 20200401532Abstract: A queuing requester for access to a memory system is provided. Transaction requests are received from two or more requesters for access to the memory system. Each transaction request includes an associated priority value. A request queue of the received transaction requests is formed in the queuing requester. A highest priority value of all pending transaction requests within the request queue is determined. An elevated priority value is selected when the highest priority value is higher than the priority value of the oldest transaction request in the request queue; otherwise the priority value of the oldest transaction request is selected. The oldest transaction request in the request queue is then provided to the memory system with the selected priority value. An arbitration contest with other requesters for access to the memory system is performed using the selected priority value.Type: ApplicationFiled: June 30, 2020Publication date: December 24, 2020Inventors: Abhijeet Ashok Chachad, Raguram Damodaran, Ramakrishnan Venkatasubramanian, Joseph Raymond Michael Zbiciak
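The priority-elevation rule reduces to a small function, sketched here with the queue as an oldest-first list of (request, priority) pairs and larger numbers meaning higher priority: the oldest request is issued with the highest pending priority when that exceeds its own, so it cannot be starved behind newer urgent traffic. The representation is an assumption for illustration.

```python
def select_priority(queue):
    # queue: oldest-first list of (request, priority); higher number wins.
    oldest_priority = queue[0][1]
    highest = max(p for _, p in queue)
    # Elevate the oldest request when a newer, higher-priority request is
    # pending behind it; otherwise keep its own priority.
    return highest if highest > oldest_priority else oldest_priority
```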
-
Publication number: 20200401533Abstract: The disclosed embodiments describe devices and methods for preventing unauthorized access to memory devices. The disclosed embodiments utilize a one-time programmable (OTP) memory added to both a memory device and a processing device. The OTP memory stores encryption keys and the encryption and decryption of messages between the two devices are used as a heartbeat to determine that the memory device has not been separated from the processing device and, in some instances, connected to a malicious processing device.Type: ApplicationFiled: June 18, 2019Publication date: December 24, 2020Inventor: Gil Golov
-
Publication number: 20200401534Abstract: A memory device includes a memory module that encrypts and decrypts data with a key. To encrypt, the memory module performs a first modified XOR operation in which a ciphertext has a same logical value as a corresponding key when the data has a low logical value and the ciphertext has an inverse of the logical value of the corresponding key when the data is at a high logical value. To decrypt, the memory module performs a second modified XOR operation in which the logical value of the ciphertext forms the logical value of the data when the corresponding key is at the low logical value and the inverse of the logical value of the ciphertext forms the logical value of the corresponding data when the corresponding key is at the high logical value.Type: ApplicationFiled: June 24, 2019Publication date: December 24, 2020Applicant: SanDisk Technologies LLCInventors: Federico Nardi, Ali Al-Shamma
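The two truth tables in the abstract can be written out directly, one bit at a time; note that per bit each table happens to coincide with an ordinary XOR against the key bit, so decryption inverts encryption. This is a behavioral sketch of the stated logic, not the circuit-level implementation the patent covers.

```python
def encrypt_bit(d, k):
    # Ciphertext equals the key bit when the data bit is low (0), and the
    # inverse of the key bit when the data bit is high (1).
    return k if d == 0 else 1 - k

def decrypt_bit(c, k):
    # Data equals the ciphertext bit when the key bit is low, and the
    # inverse of the ciphertext bit when the key bit is high.
    return c if k == 0 else 1 - c
```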
-
Publication number: 20200401535Abstract: The present disclosure provides methods, apparatus, and systems for implementing and operating a memory module that receive, using dedicated processing circuitry implemented in a memory module, a first data object and a second data object. The memory module performs pre-processing of the first data object and post-processing of the second data object.Type: ApplicationFiled: August 21, 2020Publication date: December 24, 2020Inventor: Richard C. Murphy