Patent Applications Published on January 25, 2018
-
Publication number: 20180024905
Abstract: A method includes, based on communication times regarding an execution command and a response of each of a plurality of services included in a task executed two or more times, and on information on an information processing device that executes each of the plurality of services, generating a group of one or more services executed continuously by a same information processing device in each of the executions of the task, calculating a first processing time of an entirety of the one or more services in each group, calculating a second processing time per service obtained by dividing the first processing time by a number of the one or more services in each group, calculating an average processing time by averaging the second processing times for each of the plurality of services for each task, and outputting a specific service based on the average processing time.
Type: Application
Filed: July 12, 2017
Publication date: January 25, 2018
Applicant: FUJITSU LIMITED
Inventors: Shinya KITAJIMA, Shinji KIKUCHI
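The grouping-and-averaging procedure in the abstract above can be made concrete with a short sketch. The following Python is only an illustrative reading of the claim language: the record layout, the use of itertools.groupby for "services executed continuously by a same information processing device", and reporting the slowest service as the output are assumptions, not the claimed implementation.

```python
from collections import defaultdict
from itertools import groupby

def slowest_service(executions):
    """executions: one list per task execution, each a list of
    (service, device, elapsed_time) records in call order."""
    per_service = defaultdict(list)
    for execution in executions:
        # Group services executed consecutively on the same device.
        for _, records in groupby(execution, key=lambda r: r[1]):
            records = list(records)
            group_time = sum(r[2] for r in records)   # first processing time
            per_call = group_time / len(records)      # second processing time
            for service, _, _ in records:
                per_service[service].append(per_call)
    # Average the per-service times over all executions of the task.
    averages = {s: sum(v) / len(v) for s, v in per_service.items()}
    return max(averages, key=averages.get), averages

runs = [
    [("auth", "devA", 5), ("lookup", "devA", 7), ("render", "devB", 20)],
    [("auth", "devA", 6), ("lookup", "devA", 8), ("render", "devB", 26)],
]
print(slowest_service(runs))
```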
-
Publication number: 20180024906
Abstract: An object of the present invention is to provide an information processing apparatus capable of performing a performance evaluation easily without using a specific communication protocol.
Type: Application
Filed: March 9, 2015
Publication date: January 25, 2018
Applicant: Mitsubishi Electric Corporation
Inventors: Norio IKEDA, Mitsuo SHIMOTANI, Shinji OTA
-
Publication number: 20180024907
Abstract: A technique for generating terminal resource recommendations is discussed. Terminal activity status data is retrieved from a number of sensors in order to generate and render terminal resource recommendations. The activity status data is used to calculate a current active terminal recommendation indicative of the number of terminals needed to currently satisfy a particular service metric. A predicted active terminal recommendation indicative of the number of terminals needed at a specified future time period in order to satisfy a particular service metric at the specified future time period may also be generated.
Type: Application
Filed: July 19, 2017
Publication date: January 25, 2018
Applicant: Wal-Mart Stores, Inc.
Inventors: Stephen Tyler Caution, Tricia Mcpherson Hicks, Lori Lee Wise, Douglas Jahe Ryner, Joshua David Osmon, Jaclyn Moreda
-
Publication number: 20180024908
Abstract: A debugger for distributed software running on multiple computer systems analyzes and compares system environments for the multiple computer systems. When a breakpoint occurs, or when a failure in one of the computer systems occurs, the debugger determines when one or more values of interest in the distributed software differ among the different computer systems. The debugger then determines whether the one or more differing values correlate to the system environment for the corresponding computer systems. When the one or more differing values correlate to the system environment for the corresponding computer systems, the user of the debugger is notified of the correlation between the differing values and the system environments of the computer systems, to help potentially identify differences in system environments that could be contributing to the differing values.
Type: Application
Filed: July 22, 2016
Publication date: January 25, 2018
Inventors: Eric L. Barsness, Jay S. Bryant, James E. Carey, Joseph W. Cropper, John M. Santosuosso
-
Publication number: 20180024909
Abstract: Computer implemented methods for monitoring growth of memory buffers in logging and dynamically adapting quantity and detail of logging. In one method, a computer determines whether an operation of a thread has a failure and whether the failure is severe and logs details from a per-thread logging buffer. In another method, a computer calculates an increase in a log buffer size, reads from a configuration file a maximum allowed increase in the log buffer size, and returns logging details, in response to determining that the increase is more than the maximum allowed increase. In yet another method, a computer writes a log of a use case to a disk, calculates an actual size of the log in the database, and returns logging details, in response to determining that the actual size is more than the allowed size.
Type: Application
Filed: July 25, 2016
Publication date: January 25, 2018
Inventors: Scott J. Broussard, Thangadurai Muthusamy, Amartey S. Pearson, Rejy V. Sasidharan
-
Publication number: 20180024910
Abstract: According to one general aspect, a method may include monitoring the execution of at least a portion of a software application. The method may also include collecting subroutine call information regarding a plurality of subroutine calls included by the portion of the software application, wherein one or more of the subroutine calls is selected for detailed data recording. The method may further include pruning, as the software application is being executed, a subroutine call tree to include only the subroutine calls selected for detailed data recording and one or more parent subroutine calls of each subroutine call selected for detailed data recording.
Type: Application
Filed: September 11, 2017
Publication date: January 25, 2018
Inventors: Eyal Koren, Asaf Dafner, Shiri Semo Judelman
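The pruning step described above, keeping only the subroutine calls selected for detailed recording plus the parent calls leading to them, can be sketched as a small tree transform. The node class and recursive prune function below are hypothetical illustrations, not the patented profiler's data structures.

```python
class CallNode:
    def __init__(self, name, selected=False, children=None):
        self.name = name
        self.selected = selected          # marked for detailed data recording
        self.children = children or []

def prune(node):
    """Return a copy of the tree keeping only selected calls and the parent
    chain leading to them; return None if nothing below this node survives."""
    kept = [c for c in (prune(ch) for ch in node.children) if c]
    if node.selected or kept:
        return CallNode(node.name, node.selected, kept)
    return None

# main -> parse -> tokenize (selected), main -> log
tree = CallNode("main", children=[
    CallNode("parse", children=[CallNode("tokenize", selected=True)]),
    CallNode("log"),
])

def show(node, depth=0):
    print("  " * depth + node.name)
    for child in node.children:
        show(child, depth + 1)

show(prune(tree))   # prints main, parse, tokenize; the "log" branch is pruned
```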
-
Publication number: 20180024911
Abstract: Example methods, apparatuses, and systems are presented for a software code debugger tool. The code debugger tool may be configured to access source code intended to be examined for debugging. The debugger tool may compile the source code using a specially designed compiler that incorporates state recording functionality and other debugging functionality directly into the source code. When the source code is executed, the debugging tool may be configured to record a plurality of states that the application progresses through upon executing the application. The debugger tool can read recorded states while the application runs or after its execution. Various visual indicators may also provide additional annotations to aid in debugging, such as displaying the content of variables in a visual annotation showing their historical values, and providing a temporal status indicator to provide context to the user about what the present state represents in relation to specific functions in the source code.
Type: Application
Filed: March 7, 2017
Publication date: January 25, 2018
Applicant: T Komp Tomasz Kruszewski
Inventors: Tomasz Kruszewski, Adam Kruszewski
-
Publication number: 20180024912
Abstract: Embodiments of the present invention are directed to a computer implemented web based application testing system and method for testing at least one software application. The system and method receive at least one test selection from a user using a user interface at a display device. The test selection may include at least one of a feature, a scenario, a background and a predefined condition. A feature file generation engine may then generate at least one feature file based on the test selection. Also, the feature file may be stored in a non-transitory computer memory. A feature file execution engine may execute the feature file and generate at least one execution result. A reporting engine may then generate a report based on the execution result. The execution result may then be displayed at the web dashboard.
Type: Application
Filed: September 15, 2017
Publication date: January 25, 2018
Inventor: Naveen Verma
-
Publication number: 20180024913
Abstract: Implementations of the present disclosure include methods, systems, and computer-readable storage mediums for receiving source code of an application, providing intermediate code based on the source code, the intermediate code including at least one instruction for profiling at least one line of the source code, providing profiling data by processing the intermediate code, processing the profiling data based on one or more of a latency model and an energy model to respectively provide at least one latency metric and at least one energy metric of the at least one line, and storing modified source code that is provided based on a modification of the at least one line of source code.
Type: Application
Filed: July 19, 2016
Publication date: January 25, 2018
Inventor: Ahmad Hassan
-
Publication number: 20180024914
Abstract: There is provided a computer-implemented method of testing an application. The method obtains first temporary test scripts for testing at least one test case of a first version of the application, and the first temporary test scripts are recorded with first mark data used for testing the first version of the application. The method obtains a first correspondence between the first mark data and test data. The method substitutes the test data for the first mark data in the first temporary test scripts based on the first correspondence to obtain first test scripts for testing the at least one test case of the first version of the application.
Type: Application
Filed: July 20, 2016
Publication date: January 25, 2018
Inventor: Ang Yi
-
Publication number: 20180024915
Abstract: A user interface automation framework is described. A system records multiple user interface screenshots during a session of a user interacting with a user interface application executing on a host computer. The system records metadata associated with the host computer during the session. The system executes a test of the user interface application based on the multiple user interface screenshots and the metadata.
Type: Application
Filed: July 20, 2016
Publication date: January 25, 2018
Inventor: Vikas TANEJA
-
Publication number: 20180024916
Abstract: Provided are techniques for system testing using time compression. A first program and a second program of a workload are executed in accordance with a test clock, wherein the test clock is independent of a computer system clock, and wherein the first program and the second program are to be run in a specified sequence and each at a specified date and time. In response to the first program completing, the test clock is dynamically updated to the specified date and time of the second program to start execution of the second program.
Type: Application
Filed: July 25, 2016
Publication date: January 25, 2018
Inventors: Hassan A. Shazly, Debra K. Wagner
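The time-compression idea above, where a test clock that is independent of the system clock is jumped to the next program's scheduled date and time as soon as the previous program completes, might look like the following sketch. The TestClock class and the workload format are assumptions made purely for illustration.

```python
import datetime

class TestClock:
    """A test clock independent of the system clock; it can be jumped
    forward to compress idle time between scheduled programs."""
    def __init__(self, start):
        self.now = start
    def advance_to(self, when):
        if when > self.now:
            self.now = when

def run_workload(programs, clock):
    # programs: list of (scheduled_datetime, callable) in the required sequence.
    for scheduled_at, program in sorted(programs, key=lambda p: p[0]):
        # Instead of waiting in real time, jump the test clock to the next
        # program's scheduled date and time as soon as the previous one ends.
        clock.advance_to(scheduled_at)
        program(clock.now)

clock = TestClock(datetime.datetime(2018, 1, 1, 0, 0))
run_workload(
    [(datetime.datetime(2018, 1, 1, 2, 0), lambda t: print("first program at", t)),
     (datetime.datetime(2018, 1, 3, 4, 0), lambda t: print("second program at", t))],
    clock,
)
```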
-
Publication number: 20180024917
Abstract: A machine may be configured to perform A/B testing on mobile applications. For example, the machine receives an identifier. The machine identifies a particular experiment variant for a mobile application based on the identifier. The machine generates an instruction executable by the mobile application to cause a display of a user interface on a mobile device according to a user interface layout based on the particular experiment variant. The machine, in response to the receiving of the identifier of the mobile device, transmits the instruction to the mobile device. An execution of the instruction on the mobile device results in the display of the user interface on the mobile device according to the user interface layout based on the particular experiment variant. The machine generates metric data associated with the particular experiment variant based on tracking one or more interactions with the user interface on the mobile device.
Type: Application
Filed: October 2, 2017
Publication date: January 25, 2018
Inventor: Dawnray Young
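One plausible way to map an identifier deterministically to an experiment variant and to build the instruction sent back to the device is sketched below. The hash-based bucketing, the JSON payload, and the variant layouts are assumptions for illustration, not the mechanism claimed in this application.

```python
import hashlib
import json

# Hypothetical user-interface layouts for the two experiment variants.
VARIANTS = {
    "A": {"layout": "single_column", "button_color": "blue"},
    "B": {"layout": "two_column", "button_color": "green"},
}

def pick_variant(device_id: str) -> str:
    # Deterministic bucketing: the same identifier always gets the same variant.
    digest = hashlib.sha256(device_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def build_instruction(device_id: str) -> str:
    """Return an instruction (here just JSON) telling the mobile app which
    user-interface layout to render for its assigned experiment variant."""
    variant = pick_variant(device_id)
    return json.dumps({"variant": variant, "ui": VARIANTS[variant]})

print(build_instruction("device-1234"))
```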
-
Publication number: 20180024918
Abstract: Techniques for calculating a test confidence metric (TCM) are disclosed. Calculating the TCM involves obtaining test results of a testing application. Calculating the TCM is based on confidence scores corresponding respectively to the test results. Calculating a confidence score for a particular test result involves identifying a failure reason for the test result, determining a weight corresponding to the failure reason, and calculating the confidence score based on the weight.
Type: Application
Filed: July 20, 2017
Publication date: January 25, 2018
Applicant: Oracle International Corporation
Inventors: Mayank Agarwal, Jagannadha Prasad Srinivas Vadlamani, Wendy Mui
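The weighted-confidence calculation reads naturally as a small scoring function. The failure-reason weights and the mean aggregation below are illustrative assumptions; the application itself does not specify them.

```python
# Hypothetical weights per failure reason; lower weight = less confidence
# that the failing result reflects a real product problem.
FAILURE_WEIGHTS = {
    "product_defect": 1.0,
    "environment_issue": 0.3,
    "test_script_error": 0.1,
}

def confidence_score(result):
    if result["status"] == "passed":
        return 1.0
    return FAILURE_WEIGHTS.get(result["failure_reason"], 0.5)

def test_confidence_metric(results):
    """Aggregate per-result confidence scores into a single TCM (here, the mean)."""
    scores = [confidence_score(r) for r in results]
    return sum(scores) / len(scores)

results = [
    {"status": "passed"},
    {"status": "failed", "failure_reason": "environment_issue"},
    {"status": "failed", "failure_reason": "product_defect"},
]
print(round(test_confidence_metric(results), 3))
```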
-
Publication number: 20180024919
Abstract: In some examples, a storage device includes a first non-volatile memory array configured to store data from a host device and the storage device and a second non-volatile memory array configured to store data from the storage device, wherein the second non-volatile memory array is separate from the first non-volatile memory array. The storage device also includes a controller configured to store a virtual-to-physical mapping table to the first non-volatile memory array and store a portion of the virtual-to-physical mapping table to the second non-volatile memory array.
Type: Application
Filed: July 19, 2016
Publication date: January 25, 2018
Inventors: Adam Christopher Geml, Colin Christopher McCambridge, Philip James Sanders, Lee Anton Sendelbach
-
Publication number: 20180024920
Abstract: A system and method is disclosed for tracking block mapping overhead in a non-volatile memory. The system may include a non-volatile memory having multiple memory blocks and a processor configured to track a block level mapping overhead for closed blocks of the multiple memory blocks. The processor may be configured to track predetermined logical address ranges within which data written to a block fall, and then store the sum of the number of different logical address ranges for each respective block as a block address entropy metric. The method may include the processor using the block address entropy metric to select source blocks for garbage collection with a lower block address entropy metric or to adjust other operational characteristics such as data routing within the non-volatile memory system based on average block address entropy for a group of blocks.
Type: Application
Filed: July 20, 2016
Publication date: January 25, 2018
Applicant: SanDisk Technologies LLC
Inventors: Nicholas James Thomas, Oleg Kragel, Michael Anthony Moser
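A minimal sketch of the block address entropy metric follows, assuming fixed-width logical address ranges and a garbage-collection policy that prefers low-entropy source blocks; both the range width and the selection rule are assumptions made for illustration.

```python
RANGE_SIZE = 0x1000   # width of each predetermined logical address range (assumed)

def block_address_entropy(written_lbas, range_size=RANGE_SIZE):
    """Count how many distinct logical address ranges the data written to
    one block falls into; this count is the block's address entropy metric."""
    return len({lba // range_size for lba in written_lbas})

def pick_gc_source(closed_blocks):
    # Prefer a source block whose writes clustered into few address ranges.
    return min(closed_blocks, key=lambda b: block_address_entropy(b["lbas"]))

blocks = [
    {"id": 0, "lbas": [0x0010, 0x0020, 0x5000, 0x9000]},   # spans 3 ranges
    {"id": 1, "lbas": [0x2000, 0x2040, 0x20F0]},           # spans 1 range
]
print(pick_gc_source(blocks)["id"])   # block 1 has the lower entropy metric
```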
-
Publication number: 20180024921
Abstract: According to one embodiment, a memory system includes a nonvolatile memory, detection unit, management unit, selection unit, and garbage collection unit. The nonvolatile memory includes memory areas divided into units of execution of garbage collection. The detection unit detects a data amount of data written to a different memory area among the memory areas when the garbage collection is executed, for each of the memory areas. The management unit manages a threshold set for each of the memory areas. The selection unit selects, based on the data amount and the threshold for each of the memory areas, a memory area whose data amount is smaller than the threshold of the memory area. The garbage collection unit executes the garbage collection on the memory area selected by the selection unit.
Type: Application
Filed: October 2, 2017
Publication date: January 25, 2018
Inventor: Shinichi Kanno
-
Publication number: 20180024922
Abstract: Implementations of the present disclosure include methods, systems, and computer-readable storage mediums for receiving an annotated query execution plan (aQEP), the aQEP being processed to execute a query on an in-memory database in a hybrid memory system, and including one or more annotations, each annotation indicating an output of a respective operator that is to be provided as input to a join operator, determining a payload size at least partially based on an estimated size of an intermediate output of the join operator, selecting a memory type from a plurality of memory types in the hybrid memory system based on the payload size and a cache size, and storing, after execution of the aQEP, the intermediate output on the memory type in the hybrid memory system.
Type: Application
Filed: July 19, 2016
Publication date: January 25, 2018
Inventor: Ahmad Hassan
-
Publication number: 20180024923
Abstract: Implementations of the present disclosure include methods, systems, and computer-readable storage mediums for determining that an object implicated in an executing application is to be allocated to memory in an in-memory system, determining a type of the object, and allocating the object to one of a first size of virtual memory page and a second size of virtual memory page of an operating system based on the type of the object.
Type: Application
Filed: July 19, 2016
Publication date: January 25, 2018
Inventor: Ahmad Hassan
-
Publication number: 20180024924
Abstract: According to some embodiments, system and methods are provided comprising providing one or more applications that can be used by a processor; storing one or more data elements in one or more systems of record; providing a cache associated with the one or more applications; selecting a default cache expiration time via a caching mechanism; determining if the default cache expiration time is met in response to execution of a query associated with the one or more applications; retrieving one or more data elements from the one or more systems of record and transmitting the retrieved one or more data elements to a cache optimization module in response to execution of the query; retrieving one or more cache stored data elements from the cache and transmitting the retrieved one or more cache stored data elements to the cache optimization module in response to execution of the query; determining, via the cache optimization module, whether the retrieved one or more cache stored data elements are the same value as the
Type: Application
Filed: July 25, 2016
Publication date: January 25, 2018
Inventor: Steve WINKLER
-
Publication number: 20180024925
Abstract: An example system on a chip (SoC) includes a processor, a cache, and a main memory. The SoC can include a first memory to store data in a memory line, wherein the memory line is set to an invalid state. The SoC can include a processor coupled to the first memory. The processor can determine that a data size of a first data set received from an application is within a data size range. The processor can determine that an aggregate data size of the first data set and a second data set received from the application is at least the same data size as the data size of the memory line. The processor can perform an invalid-to-modify (I2M) operation to change the memory line from the invalid state to a modified state. The processor can write the first data set and the second data set to the memory line.
Type: Application
Filed: July 20, 2016
Publication date: January 25, 2018
Inventors: Raanan Sade, Joseph Nuzman, Stanislav Shwartsman, Igor Yanover, Liron Zur
-
Publication number: 20180024926
Abstract: The present disclosure includes apparatuses and methods related to shifting data. An example apparatus comprises a cache coupled to an array of memory cells and a controller. The controller is configured to perform a first operation beginning at a first address to transfer data from the array of memory cells to the cache, and perform a second operation concurrently with the first operation, the second operation beginning at a second address.
Type: Application
Filed: July 20, 2016
Publication date: January 25, 2018
Inventors: Daniel B. Penney, Gary L. Howe
-
Publication number: 20180024927
Abstract: A data storage device and a data processing system having the same are disclosed. The data storage device includes a nonvolatile memory and a controller, coupled to the nonvolatile memory, configured to receive first and second commands generated by a host and control an operation of the nonvolatile memory in response to the first command. The controller includes a core configured to receive and process the first command, a trace circuit corresponding to the core and configured to generate and output first data, based on pieces of information generated while the core processes the first command, and a trace controller configured to control output of the first data and second data differing from the first data, based on a result of performing at least one authentication control operation corresponding to the second command.
Type: Application
Filed: July 20, 2017
Publication date: January 25, 2018
Applicant: Samsung Electronics Co., Ltd.
Inventors: Seung Chul RYU, Bum Seok YU, Chan Ho YOON
-
Publication number: 20180024928
Abstract: Implementations of the present disclosure include methods, systems, and computer-readable storage mediums for receiving a query from an application, processing a query execution plan (QEP) of the query using a cache simulator to simulate queries to an in-memory database in a hybrid memory system, providing a miss-curve based on the QEP, the miss-curve relating miss-ratios to memory sizes, and determining relative sizes of a first type of memory and a second type of memory in the hybrid memory system at least partially based on the miss-curve.
Type: Application
Filed: July 19, 2016
Publication date: January 25, 2018
Inventor: Ahmad Hassan
-
Publication number: 20180024929
Abstract: A prefetch request having a priority assigned thereto is obtained, based on executing a prefetch instruction included within a program. Based on obtaining the prefetch request, a determination is made as to whether the prefetch request may be placed on a prefetch queue. This determination includes determining whether the prefetch queue is full; checking, based on determining the prefetch queue is full, whether the priority of the prefetch request is considered a high priority; determining, based on the checking indicating the priority of the prefetch request is considered a high priority, whether another prefetch request on the prefetch queue may be removed; removing the other prefetch request from the prefetch queue, based on determining the other prefetch request may be removed; and adding the prefetch request to the prefetch queue, based on removing the other prefetch request.
Type: Application
Filed: July 20, 2016
Publication date: January 25, 2018
Inventors: Dan F. Greiner, Michael K. Gschwind, Christian Jacobi, Anthony Saporito, Chung-Lung K. Shum, Timothy J. Slegel
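The queue-placement decision described above can be sketched in a few lines. The priority encoding, the victim-selection rule (remove the lowest-priority entry), and the list-backed queue are assumptions made only to make the flow concrete; the actual hardware policy is not specified here.

```python
class PrefetchQueue:
    """Bounded prefetch queue sketch: a high-priority request may displace a
    lower-priority entry when the queue is full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = []                         # list of (address, priority)

    def try_enqueue(self, address, priority, high_threshold=1):
        if len(self.entries) < self.capacity:
            self.entries.append((address, priority))
            return True
        if priority < high_threshold:             # full and not high priority
            return False
        # Full and high priority: look for a lower-priority entry to remove.
        victim = min(range(len(self.entries)), key=lambda i: self.entries[i][1])
        if self.entries[victim][1] >= priority:
            return False                          # nothing removable
        del self.entries[victim]
        self.entries.append((address, priority))
        return True

q = PrefetchQueue(capacity=2)
print(q.try_enqueue(0x100, 0), q.try_enqueue(0x200, 0),
      q.try_enqueue(0x300, 0),                    # full, low priority: dropped
      q.try_enqueue(0x400, 2))                    # full, high priority: admitted
```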
-
Publication number: 20180024930
Abstract: Processing of prefetched data based on cache residency. Data to be used in future processing is prefetched. A block of data being prefetched is selected for processing, and a check is made as to whether the block of data is resident in a selected cache (e.g., L1 cache). If the block of data is resident in the selected cache, it is processed; otherwise, processing is bypassed until a later time when it is resident in the selected cache.
Type: Application
Filed: July 20, 2016
Publication date: January 25, 2018
Inventors: Michael K. Gschwind, Timothy J. Slegel
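The residency-gated processing loop might be sketched as follows; the deferred-list retry and the toy residency check are purely illustrative assumptions about how "bypassed until a later time" could be modeled in software.

```python
def process_prefetched_blocks(blocks, in_l1_cache, process):
    """Process each prefetched block only once it is resident in the selected
    (L1) cache; non-resident blocks are deferred to a later pass."""
    pending = list(blocks)
    while pending:
        deferred = []
        for block in pending:
            if in_l1_cache(block):
                process(block)            # resident: process now
            else:
                deferred.append(block)    # not resident yet: bypass for now
        pending = deferred

# Toy residency model: a block becomes resident after it has been checked once.
seen = set()
def in_l1(block):
    resident = block in seen
    seen.add(block)
    return resident

process_prefetched_blocks(["blk0", "blk1"], in_l1, lambda b: print("processed", b))
```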
-
Publication number: 20180024931
Abstract: A processor applies a transfer policy to a portion of a cache based on access metrics for different test regions of the cache, wherein each test region applies a different transfer policy for data in cache entries that were stored in response to a prefetch request but were not the subject of demand requests. One test region applies a transfer policy under which unused prefetches are transferred to a higher level cache in a cache hierarchy upon eviction from the test region of the cache. The other test region applies a transfer policy under which unused prefetches are replaced without being transferred to a higher level cache (or are transferred to the higher level cache but stored as invalid data) upon eviction from the test region of the cache.
Type: Application
Filed: July 20, 2016
Publication date: January 25, 2018
Inventor: Paul James Moyer
-
Publication number: 20180024932
Abstract: Various embodiments are generally directed to an apparatus, method and other techniques for prefetching data for a workload based on memory access information of the workload. For example, an apparatus may include at least one memory, at least one processor, and logic, at least a portion of the logic comprised in hardware, the logic to determine a workload to be executed via the at least one processor, monitor a plurality of memory accesses of the at least one memory by the workload during execution, and generate memory access information for the workload. Other embodiments are described.
Type: Application
Filed: March 31, 2017
Publication date: January 25, 2018
Inventors: MURUGASAMY K. NACHIMUTHU, MOHAN J. KUMAR
-
Publication number: 20180024933
Abstract: A query is performed to obtain cache residency and/or other information regarding selected data. The data to be queried is data of a cache line, prefetched or otherwise. The capability includes a Query Cache instruction that obtains cache residency information and/or other information and returns an indication of the requested information.
Type: Application
Filed: July 20, 2016
Publication date: January 25, 2018
Inventors: Dan F. Greiner, Michael K. Gschwind, Christian Jacobi, Anthony Saporito, Chung-Lung K. Shum, Timothy J. Slegel
-
Publication number: 20180024934
Abstract: A processor includes an operations scheduler to schedule execution of operations at, for example, a set of execution units or a cache of the processor. The operations scheduler periodically adds sets of operations to a tracking array, and further identifies when an operation in the tracked set is blocked from execution scheduling in response to, for example, identifying that the operation is dependent on another operation that has not completed execution. The processor further includes a counter that is adjusted each time an operation in the tracking array is blocked from execution, and is reset each time an operation in the tracking array is executed. When the value of the counter exceeds a threshold, the operations scheduler prioritizes the remaining tracked operations for execution scheduling.
Type: Application
Filed: July 19, 2016
Publication date: January 25, 2018
Inventors: Paul James Moyer, Richard Martin Born
-
Publication number: 20180024935
Abstract: The described embodiments include a computing device that caches data acquired from a main memory in a high-bandwidth memory (HBM), the computing device including channels for accessing data stored in corresponding portions of the HBM. During operation, the computing device sets each of the channels so that data blocks stored in the corresponding portions of the HBM include corresponding numbers of cache lines. Based on records of accesses of cache lines in the HBM that were acquired from pages in the main memory, the computing device sets a data block size for each of the pages, the data block size being a number of cache lines. The computing device stores, in the HBM, data blocks acquired from each of the pages in the main memory using a channel having a data block size corresponding to the data block size for each of the pages.
Type: Application
Filed: July 21, 2016
Publication date: January 25, 2018
Inventors: Mitesh R. Meswani, Jee Ho Ryoo
-
Publication number: 20180024936
Abstract: A method, a computing device, and a non-transitory machine-readable medium for modifying cache settings in the array cache are provided. Cache settings are set in an array cache, such that the array cache caches data in an input/output (I/O) stream based on the cache settings. Multiple cache simulators simulate caching the data from the I/O stream in the array cache using different cache settings in parallel with the array cache. The cache settings in the array cache are replaced with the cache settings from one of the cache simulators based on the determination that the cache simulators increase effectiveness of caching data in the array cache.
Type: Application
Filed: April 25, 2017
Publication date: January 25, 2018
Inventors: Brian McKean, Sai Susarla, Ariel Hoffman
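Running several cache simulators against the same I/O stream and adopting the settings of the most effective one could be modeled roughly as below. The LRU model, the hit-ratio criterion, and the candidate settings are assumptions for illustration only, not the claimed simulator design.

```python
from collections import OrderedDict

class LruSimulator:
    """Simulates an LRU array cache of a given size and tracks its hit ratio."""
    def __init__(self, settings):
        self.settings = settings
        self.cache = OrderedDict()
        self.hits = self.accesses = 0
    def access(self, block):
        self.accesses += 1
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)
        else:
            self.cache[block] = True
            if len(self.cache) > self.settings["size"]:
                self.cache.popitem(last=False)    # evict least recently used
    def hit_ratio(self):
        return self.hits / self.accesses if self.accesses else 0.0

def best_settings(io_stream, candidate_settings):
    sims = [LruSimulator(s) for s in candidate_settings]
    for block in io_stream:
        for sim in sims:                  # every simulator sees the same I/O stream
            sim.access(block)
    return max(sims, key=LruSimulator.hit_ratio).settings

stream = [1, 2, 3, 1, 2, 4, 1, 2, 3, 4] * 10
print(best_settings(stream, [{"size": 2}, {"size": 4}]))
```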
-
Publication number: 20180024937
Abstract: Various systems and methods for caching and tiering in cloud storage are described herein. A system for managing storage allocation comprises a storage device management system to maintain an access history of a plurality of storage blocks of solid state drives (SSDs) managed by the storage device management system; and automatically configure each of a plurality of storage blocks to operate in cache mode or tier mode, wherein a ratio of storage blocks operating in cache mode and storage blocks operating in tier mode is based on the access history.
Type: Application
Filed: March 6, 2017
Publication date: January 25, 2018
Inventors: Sudip Chahal, Husni Bahra, Nigel Wayman, Terry Yoshii, Charles Lockwood, Shane Healy
-
Publication number: 20180024938
Abstract: A processing system for reduction of a virtual memory page fault rate that includes a first memory to store a dataset, a second memory to store a subset of the dataset, and a processing unit. The processing unit is configured to receive a memory access request including a virtual address and determine whether the virtual address is mapped to a first physical page in the first memory or a second physical page in the second memory. The processing unit maps a third physical page in a free page pool of the second memory to the virtual address in response to the virtual address not being mapped to the second physical page. The processing unit also grants access to the third physical page that is mapped to the virtual address.
Type: Application
Filed: July 21, 2016
Publication date: January 25, 2018
Inventors: Timour T. Paltashev, Christopher Brennan
-
Publication number: 20180024939
Abstract: This method for executing a request to exchange data, between first and second disjoint physical addressing spaces controlled by first and second distinct circuits for first and second respective software processes, comprises the creation of a communication channel between these two circuits. It further comprises sending, by the first process, of said request to exchange data, this request designating a virtual address in a virtual addressing space of the second process, and execution of the request to exchange data between the disjoint physical addressing spaces of the two processes, without invoking a processor executing the second process. During creation of the channel, a translation of the virtual addressing space of the second process into its physical addressing space is created and associated with this channel in the second circuit. During execution of the request, data for identification of the channel is added to the virtual address designated in the request.
Type: Application
Filed: February 4, 2016
Publication date: January 25, 2018
Applicant: Commissariat a l'energie atomique et aux energies alternatives
Inventors: Remy GAUGUEY, Denis DUTOIT, Eric GUTHMULLER, Jerome MARTIN
-
Publication number: 20180024940
Abstract: Systems and methods for accessing a unified translation lookaside buffer (TLB) are disclosed. A method includes receiving an indicator of a level one translation lookaside buffer (L1TLB) miss corresponding to a request for a virtual address to physical address translation, searching a cache that includes virtual addresses and page sizes that correspond to translation table entries (TTEs) that have been evicted from the L1TLB, where a page size is identified, and searching a second level TLB and identifying a physical address that is contained in the second level TLB. Access is provided to the identified physical address.
Type: Application
Filed: August 15, 2017
Publication date: January 25, 2018
Inventors: Karthikeyan Avudaiyappan, Mohammad Abdallah
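A software analogue of this lookup flow (L1 TLB miss, consult a small cache of page sizes recorded for evicted TTEs, then probe the unified second-level TLB) is sketched below. The dictionary-based structures, the page sizes, and the fallback order are assumptions; real hardware would use associative arrays rather than Python dicts.

```python
def translate_on_l1_miss(vaddr, evicted_tte_cache, l2_tlb,
                         page_sizes=(4096, 2 * 1024 * 1024)):
    """After an L1 TLB miss, first look up the page size recorded for an
    evicted TTE, then probe the second-level TLB with that size."""
    hinted = evicted_tte_cache.get(vaddr)          # page size hint, if any
    sizes_to_try = [hinted] if hinted else list(page_sizes)
    for size in sizes_to_try:
        vpn = vaddr // size
        entry = l2_tlb.get((vpn, size))
        if entry is not None:
            return entry + (vaddr % size)          # physical address
    return None                                    # fall back to a page-table walk

l2 = {(0x12345, 4096): 0xABC000}                   # (virtual page, size) -> frame
evicted = {0x12345678: 4096}                       # page size hint for this address
print(hex(translate_on_l1_miss(0x12345678, evicted, l2)))   # 0xabc678
```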
-
Publication number: 20180024941
Abstract: A system for generating predictions for a hardware table walk to find a map of a given virtual address to a corresponding physical address is disclosed. The system includes a plurality of memories, which each includes a respective plurality of entries, each of which includes a prediction of a particular one of a plurality of buffers which includes a portion of a virtual to physical address translation map. A first circuit may generate a plurality of hash values to retrieve a plurality of predictions from the plurality of memories, where each hash value depends on a respective address and information associated with a respective thread. A second circuit may select a particular prediction of the retrieved predictions to use based on a history of previous predictions.
Type: Application
Filed: July 20, 2016
Publication date: January 25, 2018
Inventors: John Pape, Manish Shah, Gideon Levinsky, Jared Smolens
-
Publication number: 20180024942
Abstract: Systems and methods for using encryption keys to manage data retention are described. In one embodiment, the systems and methods may include receiving data such as user data from a host of the storage drive, encrypting the data using an encryption key, writing the encrypted data to the storage drive, and retaining the encrypted data on the storage drive based at least in part on a validity of the encryption key.
Type: Application
Filed: July 22, 2016
Publication date: January 25, 2018
Applicant: SEAGATE TECHNOLOGY LLC
Inventors: Timothy Canepa, Ramdas Kachare
-
Publication number: 20180024943
Abstract: The disclosure generally describes computer-implemented methods, software, and systems for risk identification of first address information. A computer-implemented method includes receiving a user input including identification information of a user requesting a service, processing the identification information to determine a first address coding information, retrieving a second address coding information based on the identification information, matching the first address coding information with the second address coding information to generate an address matching result, and determining a risk score based on the address matching result and a service type of the service.
Type: Application
Filed: September 29, 2017
Publication date: January 25, 2018
Applicant: Alibaba Group Holding Limited
Inventors: Min Xu, Kai Xu, Dijun He
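The address-matching-to-risk-score flow can be read as a small scoring routine. The geocoding stand-in, the per-service weights, and the numeric scores below are hypothetical; the application does not disclose concrete values or formulas.

```python
def geocode(raw_address):
    # Placeholder "address coding": normalize to an uppercase, comma-free key.
    return raw_address.upper().replace(",", "").strip()

def risk_score(first_coding, second_coding, service_type):
    """Combine an address-match result with the requested service's weight
    into a simple risk score (higher means riskier)."""
    matched = first_coding == second_coding
    service_weights = {"password_reset": 0.9, "balance_inquiry": 0.2}
    base = 0.1 if matched else 0.8        # mismatched addresses raise the risk
    return round(base * service_weights.get(service_type, 0.5), 3)

user_input = geocode("1 Main Street, Springfield")     # from the user's request
on_file = geocode("1 MAIN STREET SPRINGFIELD")          # retrieved second coding
print(risk_score(user_input, on_file, "password_reset"))   # low risk: addresses match
```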
-
Publication number: 20180024944
Abstract: Disclosed are methods and apparatus for memory management in shared virtual memory (SVM) systems. The methods and apparatus provide SVM access control on a per master basis through the assignment of a first classification identifier (ID) upon reception of a memory access request from a memory master. The first classification ID assigned to the memory request is compared with a second classification ID stored in at least one page table entry of a page table used to manage the SVM system. The page table entry (PTE) corresponds to one or more memory locations of the SVM being requested in the memory access request. SVM system access operations for the memory access request are then denied if the first classification ID does not match the second classification ID, thereby providing added per master access control for the SVM system.
Type: Application
Filed: July 22, 2016
Publication date: January 25, 2018
Inventors: Thomas Zeng, Azzedine Touzni, Mitchel Humpherys
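A per-master classification check of this kind can be sketched with a toy page table. The PageTableEntry class, the master-to-classification mapping, and the PermissionError are illustrative assumptions, not the claimed hardware mechanism.

```python
class PageTableEntry:
    def __init__(self, phys_addr, classification_id):
        self.phys_addr = phys_addr
        self.classification_id = classification_id   # second ID, stored per PTE

PAGE_TABLE = {0x4000: PageTableEntry(0x9000, classification_id=7)}

def svm_access(master_classes, master, virt_page):
    """Per-master SVM access check: the classification ID assigned to the
    requesting master must match the one stored in the target PTE."""
    first_id = master_classes[master]                # assigned when the request arrives
    pte = PAGE_TABLE.get(virt_page)
    if pte is None or pte.classification_id != first_id:
        raise PermissionError(f"SVM access denied for master {master}")
    return pte.phys_addr

masters = {"gpu": 7, "dsp": 3}
print(hex(svm_access(masters, "gpu", 0x4000)))       # classification IDs match: allowed
# svm_access(masters, "dsp", 0x4000) would raise PermissionError (IDs differ)
```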
-
Publication number: 20180024945
Abstract: A context-based protection system uses tiered protection structures including master protection units, shared memory protection units, and peripheral protection units to provide security to bus transfer operations between central processing units (CPUs), memory arrays or portions of arrays, and peripherals.
Type: Application
Filed: July 18, 2017
Publication date: January 25, 2018
Inventors: Jan-Willem Van de Waerdt, Kai Dieffenbach, Uwe Moslehner, Jens Wagner, Mathias Sedner, Venkat NATARAJAN
-
Publication number: 20180024946
Abstract: One embodiment provides a method, including: detecting, using a processor of a host device, that the host device is busy with respect to an impending data transfer to a connectable storage device operatively coupled to the host device; and communicating, to the connectable storage device, data that triggers an indicator of the connectable storage device. Other aspects are described and claimed.
Type: Application
Filed: July 22, 2016
Publication date: January 25, 2018
Inventors: Robert James Kapinos, Russell Speight VanBlon, Timothy Winthrop Kingsbury, Scott Wentao Li
-
Publication number: 20180024947
Abstract: Technologies for a low-latency interface with data storage of a storage sled in a data center are disclosed. In the illustrative embodiment, a storage sled stores metadata including the location of data in a storage device in low-latency non-volatile memory. When accessing data, the storage sled may access the metadata on the low-latency non-volatile memory and then, based on the location determined by the access to the metadata, access the location of the data in the storage device. Such an approach results in only one access to the data storage in order to read the data instead of two.
Type: Application
Filed: December 30, 2016
Publication date: January 25, 2018
Inventor: Steven C. Miller
-
Publication number: 20180024948
Abstract: Systems and methods for controlling data flow and data alignment using data expand and compress circuitry arranged between a variable data rate bi-directional first in, first out (FIFO) buffer and one or more memory arrays to compensate for bad column locations within the one or more memory arrays are described. The bi-directional FIFO may have a variable data rate with the array side and a fixed data rate with a serializer/deserializer (SERDES) circuit that drives input/output (I/O) circuitry. The data expand and compress circuitry may pack and unpack data and then align the data passing between the one or more memory arrays and the bi-directional FIFO using a temporary buffer, data shuffling logic, and selective pipeline stalls.
Type: Application
Filed: March 14, 2017
Publication date: January 25, 2018
Applicant: SANDISK TECHNOLOGIES LLC
Inventors: Wanfang Tsai, Yan Li
-
Publication number: 20180024949
Abstract: A data storage system includes a host having a write buffer, a memory region, a submission queue and a driver therein. The driver is configured to: (i) transfer data from the write buffer to the memory region in response to a write command; (ii) generate a write command completion notice; and (iii) send at least an address of the data in the memory region to the submission queue. The host may also be configured to transfer the address to a storage device external to the host, and the storage device may use the address during an operation to transfer the data in the memory region to the storage device.
Type: Application
Filed: July 7, 2017
Publication date: January 25, 2018
Inventor: Venkataratnam NIMMAGADDA
-
Publication number: 20180024950
Abstract: Ring bus architectures for use in a memory module are disclosed. A memory module may include a primary ring bus; a ring bus controller positioned on the primary ring bus; a secondary ring bus in communication with the primary ring bus via a first bus bridge; and a tertiary ring bus in communication with the secondary ring bus via a second bus bridge. The ring bus controller is configured to direct the first bus bridge to route data between the primary ring bus and the secondary ring bus and is configured to direct the second bus bridge to route data between the secondary ring bus and the tertiary ring bus.
Type: Application
Filed: September 26, 2017
Publication date: January 25, 2018
Applicant: SanDisk Technologies LLC
Inventor: Alan Welsh Sinclair
-
Publication number: 20180024951
Abstract: A heterogeneous multi-processor device having a first processor component arranged to issue a data access command request, a second processor component arranged to execute a set of threads, a task scheduling component arranged to schedule the execution of threads by the second processor component, and an internal memory component. In response to the data access command request being issued by the first processor component, the task scheduling component is arranged to wait for activities relating to the indicated subset of threads to finish, and when the activities relating to the indicated subset of threads have finished to load a command thread for execution by the second processor component, the command thread being arranged to cause the second processor component to read the indicated data from the at least one region of memory and make the read data available to the first processor component.
Type: Application
Filed: July 19, 2016
Publication date: January 25, 2018
Inventor: Graham Edmiston
-
Publication number: 20180024952
Abstract: The aim is to realize DMA data transfer between a host computer and another computer even in the case that the host computer and the other computer are each independently equipped with a CPU, a memory, and so forth. A computer communicably connected with a first computer including a first memory and a driver for controlling a device, the computer comprising: the device; and a second memory, wherein a first DMA transfer is executed based on a DMA transfer request received from the driver, a second DMA transfer is executed to transfer data existing at a transfer destination address of the first DMA transfer between the first memory and the second memory, and the transfer destination address is detected as a result of executing the first DMA transfer.
Type: Application
Filed: January 15, 2016
Publication date: January 25, 2018
Applicant: NEC Corporation
Inventor: Masahiko TAKAHASHI
-
Publication number: 20180024953
Abstract: A peripheral interface chip and a data transmission method thereof are provided. The peripheral interface chip includes a switching circuit, a universal serial bus (USB) host controller, a keyboard controller and a microprocessor. The switching circuit receives a device identification transmitted from a USB device, and the device identification is used for determining whether the USB device is a keyboard device. When the USB device is the keyboard device, input data of the USB device is transmitted to a controller hub through a first USB interface, the switching circuit, the USB host controller, the microprocessor, the keyboard controller and a low pin count (LPC) interface.
Type: Application
Filed: October 14, 2016
Publication date: January 25, 2018
Applicant: ITE Tech. Inc.
Inventors: Shang-Heng Lin, Jiun-Shiue Huang, Yu-Hsiang Lee
-
Publication number: 20180024954
Abstract: Apparatus and methods for USB hosts and USB devices to dynamically switch roles such that a product which initially operates as a USB host may instead operate as a USB device and vice versa. Products such as smartphones and tablets which initially operate as USB devices may dynamically switch roles to become USB hosts. Similarly, products such as PCs and in-vehicle infotainment systems which initially operate as USB hosts may dynamically switch roles to become USB devices. Dynamic USB role switching is permitted in a variety of topologies including those in which a direct connection exists between a host and a device as well as those in which a USB hub is present. In addition, such dynamic role switching may be performed in topologies which incorporate widely used USB Type A connectors and cables, thus avoiding the need for a special connector or cable.
Type: Application
Filed: October 4, 2017
Publication date: January 25, 2018
Inventor: Terrill M. Moore