Patent Applications Published on November 29, 2018
  • Publication number: 20180341555
    Abstract: A data processing method, a data processing system and a computer program product are provided. The data processing method includes executing a running operation. The data processing method also includes suspending the running operation at a preset time point in a period of the running operation and calculating a remaining processing time according to a transfer amount of a plurality of dirty pages which are collected before the preset time point in the period of the running operation. The data processing method further includes continuing to execute the running operation, suspending the running operation, and executing a snapshot operation to generate a corresponding data snapshot based on the remaining processing time, and executing a transfer operation to transfer the corresponding data snapshot.
    Type: Application
    Filed: July 27, 2017
    Publication date: November 29, 2018
    Applicant: Industrial Technology Research Institute
    Inventors: Po-Jui Tsao, Yi-Feng Sun, Chuan-Yu Cho, Tzi-Cker Chiueh
  • Publication number: 20180341556
    Abstract: A data backup method and device, a storage medium and a server are provided. The data backup method is applied to a first server, and includes: a backup request containing first data to be backed up is acquired from a terminal, the backup request being configured to request the first server to back up the first data; a key acquisition request is sent to a second server according to the backup request, the key acquisition request containing characteristic information of the first data; a first encryption key is acquired from the second server, the first encryption key being generated according to the characteristic information of the first data; and the first data is encrypted to generate first encrypted data according to the first encryption key, and the first encrypted data is stored. The data backup method, device, and server provided by the embodiments improve the security of data stored in the server.
    Type: Application
    Filed: November 13, 2017
    Publication date: November 29, 2018
    Applicant: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventor: Lian LIN
  • Publication number: 20180341557
    Abstract: A method for operating a data storage device which includes a nonvolatile memory device including a plurality of memory blocks, includes generating a valid page count table including the number of valid pages of each of closed blocks among the plurality of memory blocks in which data are written in all pages thereof and the number of valid pages of at least one open block among the plurality of memory blocks in which data is written in a part of pages thereof; generating a valid page scan table including a scan pointer for scanning the number of valid pages of the open block; and backing up the valid page count table and the valid page scan table in a meta block among the plurality of memory blocks.
    Type: Application
    Filed: December 4, 2017
    Publication date: November 29, 2018
    Inventors: Duck Hoi KOO, Yong Tae KIM, Soong Sun SHIN, Cheon Ok JEONG
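The entry above (20180341557) describes bookkeeping structures rather than code, so the following is only an illustrative sketch, not the patented implementation: a valid page count table for closed and open blocks plus a scan pointer for the open block, both of which would be backed up to a designated meta block. The class and field names are assumptions.

```python
# Hypothetical sketch of the valid-page bookkeeping in 20180341557.
from dataclasses import dataclass, field

@dataclass
class Block:
    pages: list            # entries: True = valid data, False = stale, None = unwritten
    is_open: bool = False  # open block: only part of its pages are written

@dataclass
class ValidPageTables:
    count_table: dict = field(default_factory=dict)  # block id -> valid page count
    scan_table: dict = field(default_factory=dict)   # open block id -> scan pointer

    def rebuild(self, blocks: dict):
        for block_id, block in blocks.items():
            if block.is_open:
                # For an open block, scan only the written pages and remember
                # where the scan stopped so it can be resumed later.
                written = [p for p in block.pages if p is not None]
                self.count_table[block_id] = sum(1 for p in written if p)
                self.scan_table[block_id] = len(written)
            else:
                self.count_table[block_id] = sum(1 for p in block.pages if p)

    def backup(self):
        # Stand-in for writing both tables to a meta block in the NVM.
        return {"count": dict(self.count_table), "scan": dict(self.scan_table)}
```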
  • Publication number: 20180341558
    Abstract: In accordance with implementations of the present disclosure, a backup of live data received by a data forwarder is generated at the data forwarder while the live data is provided to a real-time data pipeline for forwarding from the data forwarder. A first portion of the live data is recovered from the backup to a stale data pipeline of the data forwarder. A request to forward the live data to a destination node is received by the data forwarder. In response to the request, data is forwarded to the destination node, where the first portion of the live data from the stale data pipeline is added to a second portion of the live data from the real-time data pipeline based on determining that headroom remains to reach an amount of data identified for inclusion in the response.
    Type: Application
    Filed: July 18, 2018
    Publication date: November 29, 2018
    Inventors: PANAGIOTIS PAPADOMITSOS, IOANNIS VLACHOGIANNIS
  • Publication number: 20180341559
    Abstract: The present disclosure provides virtual machine deployment methods and apparatuses. One exemplary virtual machine deployment method comprises: acquiring a fragment node by scrambling and fragmenting to-be-processed data; allocating a target virtual machine to the fragment node according to the data amount of the fragment node; and deploying the fragment node onto the target virtual machine. According to some embodiments of the present disclosure, when a virtual machine is allocated to a fragment node, the data amount of the fragment node can first be controlled by scrambling the data. Then a virtual machine matching the data amount can be allocated according to the actual data amount of the fragment node, so as to prevent the virtual machine from overloading, thereby achieving better load balancing.
    Type: Application
    Filed: August 3, 2018
    Publication date: November 29, 2018
    Inventor: Jirong YANG
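For 20180341559 above, a minimal sketch of the general idea (scramble so fragments come out roughly even, then match each fragment to a virtual machine with adequate capacity) is shown below; the functions, the shuffle-based scrambling, and the smallest-fit allocation rule are illustrative assumptions, not the patented method, and it assumes at least as many VMs as fragments.

```python
import random

def scramble_and_fragment(records, num_fragments, seed=0):
    shuffled = list(records)
    random.Random(seed).shuffle(shuffled)          # scrambling evens out skew
    fragments = [[] for _ in range(num_fragments)]
    for i, record in enumerate(shuffled):
        fragments[i % num_fragments].append(record)
    return fragments

def allocate_vms(fragments, vms):
    """vms: list of (vm_id, capacity); returns fragment index -> vm_id."""
    free = sorted(vms, key=lambda v: v[1])          # smallest capacity first
    placement = {}
    for idx, fragment in sorted(enumerate(fragments),
                                key=lambda kv: len(kv[1]), reverse=True):
        # Pick the smallest VM that still fits the fragment to avoid overload;
        # fall back to the largest remaining VM if none fits.
        chosen = next((v for v in free if v[1] >= len(fragment)), free[-1])
        free.remove(chosen)
        placement[idx] = chosen[0]
    return placement
```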
  • Publication number: 20180341560
    Abstract: A method, computer program product, and computer system to store information related to changed data in response to a transaction with a first database of the on-line system requested during a planned period and data in the first database changed by the transaction. In response to a planned event a back-up system with a second database is switched to set up a new connection and a new transaction with the second database, wherein the second database is a backup of the first database. The on-line system prevents setting up a new connection to the on-line system and prevents conducting a new transaction with the first database, sends information related to the changed data from the on-line system to the back-up system, and switches to the back-up system for a new connection and for a new transaction. The on-line system synchronizes data between the first database and the second database.
    Type: Application
    Filed: May 23, 2017
    Publication date: November 29, 2018
    Inventors: E Feng Lu, Yu Fang, Ying Mao, Ning LL Liu, Lu Yu
  • Publication number: 20180341561
    Abstract: A computer-implemented method of determining modified portions of a RAID storage array for use in resynchronizing said RAID storage array after a failure, the computer-implemented method comprising: resolving areas in the RAID storage array that represent space allocated to volumes; resolving which of said allocated volumes comprise gathered writes; and for said allocated volumes that comprise gathered writes, resolving a set of writes that potentially have incomplete parity updates at the time of the failure.
    Type: Application
    Filed: May 23, 2017
    Publication date: November 29, 2018
    Applicant: International Business Machines Corporation
    Inventors: Gordon D. Hutchison, Miles Mulholland, Lee J. Sanders, Ben Sasson
  • Publication number: 20180341562
    Abstract: A method, computer program product, and computer system to store information related to changed data in response to a transaction with a first database of the on-line system requested during a planned period and data in the first database changed by the transaction. In response to a planned event a back-up system with a second database is switched to set up a new connection and a new transaction with the second database, wherein the second database is a backup of the first database. The on-line system prevents setting up a new connection to the on-line system and prevents conducting a new transaction with the first database, sends information related to the changed data from the on-line system to the back-up system, and switches to the back-up system for a new connection and for a new transaction. The on-line system synchronizes data between the first database and the second database.
    Type: Application
    Filed: November 29, 2017
    Publication date: November 29, 2018
    Inventors: E Feng Lu, Yu Fang, Ying Mao, Ning LL Liu, Lu Yu
  • Publication number: 20180341563
    Abstract: A server system, a server device, and a power supply recovery method therefor are provided. The server system includes a plurality of servers and a power controller. Each of the servers includes an AC power supply and a DC power supply. The DC power supplies of the servers are mutually connected through a cable. The power controller communicates with the servers. When a specific server detects that its AC power supply does not operate normally, the power controller informs the other servers of the power required by the specific server, and the specific server controls its DC power supply to obtain power provided by the other servers through the cable, so as to maintain the operation of the specific server.
    Type: Application
    Filed: September 20, 2017
    Publication date: November 29, 2018
    Applicant: Wistron Corporation
    Inventors: Wei-Yu Chiang, Kui-Yeh Chen, Yi-Chen Luo, Chih-Yuan Hsu, Chung-Chin Li
  • Publication number: 20180341564
    Abstract: A system and method for managing bad blocks in a hardware accelerated caching solution are provided. The disclosed method includes receiving an Input/Output (I/O) request, performing a hash search for the I/O request against a hash slot data structure, and based on the results of the hash search, either performing the I/O request with a data block identified in the I/O request or diverting the I/O request to a new data block not identified in the I/O request. The diversion may also include diverting the I/O request from hardware to firmware of a memory controller.
    Type: Application
    Filed: May 25, 2017
    Publication date: November 29, 2018
    Inventors: Horia Simionescu, Gowrisankar Radhakrishnan, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit
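A loose sketch of the bad-block handling flow in 20180341564 follows: a hash lookup on the requested block decides whether the I/O proceeds in place or is diverted to a replacement block (standing in here for the firmware-handled path). The table layout, names, and spare-block numbering are assumptions, not the controller's actual format.

```python
class BadBlockCache:
    def __init__(self):
        self.remap = {}          # bad block address -> replacement block address
        self.next_spare = 0x10000

    def handle_io(self, op, block_addr, payload=None):
        hit = self.remap.get(block_addr)       # hash search on the block address
        if hit is None:
            return (op, block_addr, payload)   # perform I/O with the requested block
        # Divert the request to the previously assigned replacement block.
        return (op, hit, payload)

    def mark_bad(self, block_addr):
        # Assign a new data block not identified in future requests.
        self.remap[block_addr] = self.next_spare
        self.next_spare += 1
        return self.remap[block_addr]
```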
  • Publication number: 20180341565
    Abstract: A data storage device is disclosed wherein a first host block is written to a first data sector, and when writing a second host block to a second data sector the first host block is read from the first data sector. When the read of the first host block fails, a partial map is generated identifying a location of the second host block in the second data sector, the partial map is stored in a non-volatile memory, and the second host block is written to the second data sector. When a power failure occurs after writing the second host block to the second data sector, an exception entry is updated using the partial map, wherein the exception entry is associated with the first and second host blocks.
    Type: Application
    Filed: May 25, 2017
    Publication date: November 29, 2018
    Inventor: Ajay S. Nair
  • Publication number: 20180341566
    Abstract: Methods and systems are directed to quantifying and prioritizing the impact of problems or changes in a computer system. Resources of a computer system are monitored by management tools. When a change occurs at a resource of a computer system or in log data generated by event sources of the computer system, one or more of the management tools generates an alert. The alert may be an alert that indicates a problem with the computer system resource or the alert may be an alert trigger identified in an event message of the log data. Methods described herein compute an impact factor that serves as a measure of the difference between event messages generated before the alert and event messages generated after the alert. The value of the impact factor associated with an alert may be used to quantitatively prioritize the alert and generate appropriate recommendations for responding to the alert.
    Type: Application
    Filed: May 24, 2017
    Publication date: November 29, 2018
    Applicant: VMware, Inc.
    Inventors: Ashot Nshan Harutyunyan, Vardan Movsisyan, Arnak Poghosyan, Naira Movses Grigoryan
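For 20180341566 above, the following is a hedged sketch of one way to compute an "impact factor" as a difference between event messages before and after an alert; Jensen-Shannon divergence is used only as a plausible measure, and the patent does not prescribe this particular metric or these function names.

```python
from collections import Counter
import math

def _distribution(events, vocabulary):
    counts = Counter(events)
    total = sum(counts.values()) or 1
    return [counts[e] / total for e in vocabulary]

def impact_factor(events_before, events_after):
    vocab = sorted(set(events_before) | set(events_after))
    p = _distribution(events_before, vocab)
    q = _distribution(events_after, vocab)
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]

    def kl(a, b):
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0 and bi > 0)

    # Jensen-Shannon divergence in [0, 1]; larger values mean the alert changed
    # the mix of event messages more, so it would be prioritized higher.
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Example: impact_factor(["ok", "ok", "warn"], ["error", "error", "warn"])
# returns roughly 0.67, indicating the alert substantially changed the event mix.
```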
  • Publication number: 20180341567
    Abstract: Systems, methods and computer program products assess processor performance metrics by monitoring probes constructed using instruction sequences. A first probe value can be determined from execution of a broad spectrum probe in an execution environment. In response to determining that the first probe value is not a first expected probe value, a targeted probe providing a second probe value directed to a subsystem of the execution environment, a feature of the subsystem, or a component of the execution environment is executed. In response to determining that the second probe value is not a second expected probe value, a differential between the second probe value and the second expected probe value can be used to determine that a bottleneck exists in at least one of the subsystem of the execution environment, the feature of the subsystem, or the component of the execution environment.
    Type: Application
    Filed: July 16, 2018
    Publication date: November 29, 2018
    Inventors: Mark Robert Funk, Aaron Christoph Sawdey, Philip Lee Vitale
  • Publication number: 20180341568
    Abstract: A method, terminal and computer-readable storage medium are provided for displaying activity record information. The method includes: acquiring specified activity record information after switching to a specified interface; and displaying the specified activity record information in the specified interface. In the present disclosure, after the specified activity record information is obtained by the operating system by extracting and integrating the activity record information of at least one application installed in the terminal, the specified activity record information may be displayed in a specified interface. Since the specified activity record information may come from at least one application in the terminal, activity record information scattered across various applications may be displayed in the specified interface.
    Type: Application
    Filed: May 29, 2018
    Publication date: November 29, 2018
    Applicant: BEIJING XIAOMI MOBILE SOFTWARE CO., LTD.
    Inventors: Bo LIU, Chao TANG, Tong QIN
  • Publication number: 20180341569
    Abstract: Presented herein are methods, non-transitory computer readable media, and devices providing an application centric view of storage within a network storage system, which include: creating an application instance, by the network storage system, wherein the application instance comprises at least one application-component determined based on application configuration information of the application instance; tracking the application configuration information of the application; and displaying the application configuration information in view of the storage within the network storage system.
    Type: Application
    Filed: May 26, 2017
    Publication date: November 29, 2018
    Applicant: NetApp, Inc.
    Inventors: Anureita RAO, Rupa NATARAJAN, Srishylam SIMHARAJAN
  • Publication number: 20180341570
    Abstract: In response to identifying a type of change in audio/video (A/V) content that was received from a content source, an A/V hub may generate display instructions specifying a display layout of the A/V content on a display in an A/V display device, and may provide the display instructions and the A/V content to the display to dynamically modify the display of the A/V content. For example, the A/V hub may display the A/V content in a central window of the display. This may involve swapping the A/V content with other A/V content that was previously displayed in the central window, and the other A/V content may be displayed in a tiled window (which may be smaller than the central window and may be located proximate to a periphery of the display). Alternatively, the A/V hub may display the A/V content in a new tiled window of the display.
    Type: Application
    Filed: May 29, 2017
    Publication date: November 29, 2018
    Applicant: EVA Automation, Inc.
    Inventor: Gaylord Yu
  • Publication number: 20180341571
    Abstract: An autonomous vehicle software management system can distribute AV software versions to safety-driven autonomous vehicles (SDAVs) operating within a given region. The system can receive log data from the SDAVs indicating any trip anomalies of the SDAVs while executing the AV software version. When a predetermined safety standard has been met based on the log data, the system can verify the AV software version for execution on fully autonomous vehicles (FAVs) operating within the given region.
    Type: Application
    Filed: May 23, 2017
    Publication date: November 29, 2018
    Inventors: Dima Kislovskiy, David McAllister Bradley
  • Publication number: 20180341572
    Abstract: A user system includes a user interface, a processor, and one or more stored sequences of instructions. The one or more stored sequences of instructions, when executed by the processor, cause the processor to display a script field within an editor dashboard, of a runtime environment, displayed on the user interface, the editor dashboard configured to define an interactive dashboard of the runtime environment, identify a script entry input into the script field, parse the script entry to identify an operation to be performed within the interactive dashboard in response to a trigger event, and associate the operation with the interactive dashboard, so that the operation will be performed within the interactive dashboard in response to the trigger event based on the association.
    Type: Application
    Filed: May 23, 2017
    Publication date: November 29, 2018
    Applicant: salesforce.com, inc.
    Inventors: Zuye ZHENG, James DIEFENDERFER, Srividhya AGANDESWARAN, Deepinder BADESHA
  • Publication number: 20180341573
    Abstract: Method and apparatus for efficient test execution in a testing environment are provided. The method may use a test file, which may include one or more test cases for test execution, and a test execution request may include one or more test files. The method may further use an execution server for handling and managing the test request, and a plurality of agents that process and execute the test execution requests being handled and managed by the execution server. The processing and execution of the test requests may produce real-time test results. The method may further cause the execution server to connect, in real-time, to an agent, which may display, on a GUI, the real-time status of the test execution requests.
    Type: Application
    Filed: May 24, 2017
    Publication date: November 29, 2018
    Inventors: Akshay Patel, Alexander Arkadyev, Ramesh Sharma
  • Publication number: 20180341574
    Abstract: A computer-implemented facility is provided for intelligent mobile device selection for mobile application testing. The computer-implemented facility determines features of a new mobile application to be tested, and compares the features of the new mobile application with features of multiple known mobile applications to identify one or more known mobile applications with similar features. Based at least in part on automated analysis of user reviews of the one or more known mobile applications operating in one or more types of mobile devices, the facility provides one or more risk scores for operation of the new mobile application in the one or more types of mobile devices. Further, based on the risk scores, a recommended set of mobile devices in which to test the new mobile application may be generated for use in testing the new mobile application.
    Type: Application
    Filed: August 7, 2018
    Publication date: November 29, 2018
    Inventors: Vijay EKAMBARAM, Roger SNOOK, Leigh A. WILLIAMSON, Shinoj ZACHARIAS
  • Publication number: 20180341575
    Abstract: An example apparatus includes a first semiconductor chip and a second semiconductor chip; and a first via and a plurality of second vias coupling the first semiconductor chip and the second semiconductor chip. The first semiconductor chip provides a first timing signal to the first via and further provides first data responsive to the first timing signal to the plurality of second vias. The second semiconductor chip receives the first timing signal from the first via and the first data from the plurality of second vias and further provides the first data responsive to the first timing signal, when the first semiconductor chip is designated, and provides a second timing signal and further provides second data responsive to the second timing signal, when the second semiconductor chip is designated.
    Type: Application
    Filed: May 26, 2017
    Publication date: November 29, 2018
    Applicant: Micron Technology, Inc.
    Inventors: Seiji Narui, Homare Sato, Chikara Kondo
  • Publication number: 20180341576
    Abstract: A memory system may include: a memory device; and a controller. When at least one data group is received, the data group including a plurality of data which is required to be collectively processed, the controller reads preceding logical-to-physical (L2P) map information for the data group from a first table and stores the read L2P map information in a second table before reception of the plurality of the data of the data group is committed, and the controller stores the plurality of the data in the memory device, and the controller updates the L2P map information for the data group that is stored in the first table in response to the storing of the plurality of the data in the memory device.
    Type: Application
    Filed: November 20, 2017
    Publication date: November 29, 2018
    Inventors: Hae-Gi CHOI, Kyeong-Rho KIM, Dong-Hyun CHO, Su-Chang KIM
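A simplified, assumption-laden model of the two-table L2P flow described in 20180341576: before a data group is committed, its current map entries are copied from the main table into a staging table; only after every member of the group is stored is the main table updated. The class shape and the in-memory dictionaries are illustrative, not the memory system's actual structures.

```python
class L2PManager:
    def __init__(self):
        self.first_table = {}    # logical address -> physical address (current map)
        self.second_table = {}   # staged copy of entries touched by a data group
        self.next_phys = 0

    def begin_group(self, logical_addrs):
        # Pre-read the preceding L2P map info so it survives until commit.
        self.second_table = {la: self.first_table.get(la) for la in logical_addrs}

    def write_group(self, group):
        """group: dict of logical address -> data; returns new physical addresses."""
        new_map = {}
        for la, data in group.items():
            new_map[la] = self.next_phys     # program data to a fresh location
            self.next_phys += 1
        # The first table is updated only after the whole group is stored, so a
        # partially received group never becomes visible in the map.
        self.first_table.update(new_map)
        self.second_table.clear()
        return new_map
```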
  • Publication number: 20180341577
    Abstract: A data processing system includes a host suitable for providing an access request; and a plurality of memory systems suitable for storing or reading data thereto or therefrom in response to the access request, wherein the host includes a host memory buffer suitable for storing a plurality of meta-data respectively corresponding to the plurality of memory systems, wherein each of the plurality of meta-data includes a first threshold value representing storage capacity for user data in a corresponding memory system among the plurality of memory systems, a second threshold value representing a number of read operations for logical block addresses (LBAs) of the corresponding memory system, a third threshold value representing a temperature of the corresponding memory system and respective LBAs of the plurality of memory systems.
    Type: Application
    Filed: December 5, 2017
    Publication date: November 29, 2018
    Inventors: Soong-Sun Shin, Duck-Hoi Koo, Yong-Tae Kim, Cheon-Ok Jeong
  • Publication number: 20180341578
    Abstract: A data storage device for storing a plurality of data includes a memory and a controller. The memory includes a plurality of blocks, and each of the blocks includes a plurality of physical pages. The controller is coupled to the memory, maps the logical pages to the physical pages of the memory, and performs a leaping linear search for the logical pages. The controller searches the Nth logical page of the logical pages according to a predetermined value N, where N is a positive integer greater than 1. When the Nth logical page is a currently-used logical page, the controller incrementally decreases the predetermined value N to keep searching the logical pages until a non-currently-used logical page is detected.
    Type: Application
    Filed: December 22, 2017
    Publication date: November 29, 2018
    Inventor: Chiu-Han CHANG
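A minimal sketch of the "leaping linear search" described in 20180341578, under the assumption that the goal is to find a logical page that is not currently in use: jump ahead by N pages at a time and shrink the leap whenever the probed page turns out to be in use. The function name and the boolean-list representation are illustrative.

```python
def leaping_search(used, start=0, leap=8):
    """used: list of booleans, True if the logical page is currently used.
    Returns the index of a non-currently-used logical page, or None."""
    assert leap > 1
    i = start
    while i < len(used):
        if not used[i]:
            return i            # found a page that is not currently used
        # The probed page is in use: incrementally decrease the leap so the
        # search degrades toward an ordinary linear scan near occupied runs.
        leap = max(1, leap - 1)
        i += leap
    return None

# Example: leaping_search([True] * 10 + [False] * 2, leap=4) probes part of the
# occupied run instead of testing every page, and returns index 10.
```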
  • Publication number: 20180341579
    Abstract: A method for controlling an SSD (Solid State Disk), performed by a processing unit when loading and executing a driver, including: obtaining a data access command including information indicating a namespace, a command type, and a logical storage address; determining one of a plurality of storage mapping tables according to the namespace; reading a physical location corresponding to the logical storage address from the determined storage mapping table; generating a data access request including information indicating a request type and the physical location; and issuing the data access request to a SSD.
    Type: Application
    Filed: January 9, 2018
    Publication date: November 29, 2018
    Inventor: Ningzhong MIAO
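A rough driver-level sketch of the lookup path in 20180341579: the namespace in an incoming command selects one of several storage mapping tables, the logical address is translated through that table, and a request carrying the physical location is issued to the SSD. The names and dictionary-based tables below are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class AccessCommand:
    namespace: str
    command_type: str      # e.g. "read" or "write"
    logical_address: int

def handle_command(cmd, mapping_tables, issue_to_ssd):
    table = mapping_tables[cmd.namespace]             # one mapping table per namespace
    physical_location = table[cmd.logical_address]    # read the physical location
    request = {"request_type": cmd.command_type,
               "physical_location": physical_location}
    return issue_to_ssd(request)                      # issue the data access request

# Usage:
# tables = {"ns1": {0: 4096, 1: 8192}}
# handle_command(AccessCommand("ns1", "read", 1), tables, print)
```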
  • Publication number: 20180341580
    Abstract: A method for accessing an SSD (Solid State Disk), performed by a processing unit when loading and executing a driver, including: selecting either a first queue or a second queue, wherein the first queue stores a plurality of regular access commands issued by an application and the second queue stores a plurality of access optimization commands; removing the data access command that arrived earliest from the selected queue; and generating a data access request comprising a physical location according to the removed data access command and sending the data access request to the SSD.
    Type: Application
    Filed: January 9, 2018
    Publication date: November 29, 2018
    Inventor: Ningzhong MIAO
  • Publication number: 20180341581
    Abstract: An object of the present invention is to provide a semiconductor device, and a control method thereof, that suppress an increase in circuit scale while maintaining high interrupt response performance.
    Type: Application
    Filed: March 7, 2018
    Publication date: November 29, 2018
    Inventors: Hiroshi Ueki, Eiji Koeta
  • Publication number: 20180341582
    Abstract: An access method of a nonvolatile memory device included in a user device includes receiving a write request to write data into the nonvolatile memory device; detecting an application issuing the write request, a user context, a queue size of a write buffer, an attribute of the write-requested data, or an operation mode of the user device; and deciding one of a plurality of write modes to use for writing the write-requested data into the nonvolatile memory device according to the detected information. The write modes have different program voltages and verify voltage sets.
    Type: Application
    Filed: August 7, 2018
    Publication date: November 29, 2018
    Inventors: Sangkwon Moon, Kyung Ho Kim, Seunguk Shin, Sung Won Jung
  • Publication number: 20180341583
    Abstract: A method for operating a data storage device including memory regions each including memory units of levels, the levels respectively corresponding to bitmaps, and each of the bitmaps including entries respectively corresponding to the memory regions includes controlling a read operation for a first memory unit of a first level among the levels in a first memory region among the memory regions; increasing a read count by checking a first entry corresponding to the first memory region in a first bit map corresponding to the first level, wherein each of entries included in the first bitmap reflects whether a corresponding memory region is included in at least one second memory region in which a memory unit of the first level has been read, during a predetermined period before the read operation for the first memory unit; and performing a management operation for the memory regions, based on the read count.
    Type: Application
    Filed: August 7, 2018
    Publication date: November 29, 2018
    Inventor: Ik Joon SON
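For 20180341583 above, the following is a speculative sketch of one reading of the per-level bitmap bookkeeping: one bitmap per level with one entry per memory region, where the read count is increased only when the region's bit for that level is not yet set, so repeated reads of the same region within a period are counted once. The threshold, the reset policy, and the management action are placeholders.

```python
class ReadTracker:
    def __init__(self, num_regions, num_levels, threshold=1000):
        self.bitmaps = [[False] * num_regions for _ in range(num_levels)]
        self.read_count = 0
        self.threshold = threshold

    def on_read(self, region, level):
        bitmap = self.bitmaps[level]
        if not bitmap[region]:
            bitmap[region] = True      # remember this region was read at this level
            self.read_count += 1
        if self.read_count >= self.threshold:
            self.manage()

    def manage(self):
        # Placeholder for the management operation (e.g. read reclaim), after
        # which the period's bookkeeping is reset.
        for bitmap in self.bitmaps:
            for i in range(len(bitmap)):
                bitmap[i] = False
        self.read_count = 0
```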
  • Publication number: 20180341584
    Abstract: The disclosed technology is generally directed to data security. In one example of the technology, data is stored in a memory. The memory includes a plurality of memory banks including a first memory bank and a second memory bank. At least a portion of the data is interleaved amongst at least two of the plurality of memory banks. Access is caused to be prevented to at least one of the plurality of memory banks while a debug mode or recovery mode is occurring. Also, access is caused to be prevented to the at least one of the plurality of memory banks starting with initial boot until a verification by a security complex is successful. The verification by the security complex includes the security complex verifying a signature.
    Type: Application
    Filed: May 26, 2017
    Publication date: November 29, 2018
    Inventors: George Thomas LETEY, Douglas L. STILES, Edmund B. NIGHTINGALE
  • Publication number: 20180341585
    Abstract: Systems and methods provide a storage controller with write-back caching capabilities that may be used during scenarios where the storage controller is required to provide write-through caching, and thus unable to utilize internal cache memory for write-back caching. The storage controller utilizes an allocation of persistent memory that is made available by the host IHS (Information Handling System), to which the storage controller is coupled. In scenarios where the storage controller is required to provide write-through caching, the storage controller may be configured to route received write data to the allocated host memory. In this manner, the data integrity provided by write-through operations is maintained, while also providing the host IHS with the speed of write-back operations. When ready to store the write data, the storage controller may request the flushing of write data from the allocated host memory.
    Type: Application
    Filed: May 23, 2017
    Publication date: November 29, 2018
    Applicant: Dell Products, L.P.
    Inventors: Deepu Syam Sreedhar M, Stuart A. Berke, Sandeep Agarwal, Amit Pratap Singh
  • Publication number: 20180341586
    Abstract: Embodiments of the present invention are directed to managing a shared high-level cache for dual clusters of fully connected integrated circuit multiprocessors. An example of a computer-implemented method includes: providing a drawer comprising a plurality of clusters, each of the plurality of clusters comprising a plurality of processors; providing a shared cache integrated circuit to manage a shared cache memory among the plurality of clusters; receiving, by the shared cache integrated circuit, an operation of one of a plurality of operation types from one of the plurality of processors; and processing, by the shared cache integrated circuit, the operation based at least in part on the operation type of the operation according to a set of rules for processing the operation type.
    Type: Application
    Filed: May 26, 2017
    Publication date: November 29, 2018
    Inventors: Michael A. Blake, Timothy C. Bronson, Pak-kin Mak, Vesselina K. Papazova, Robert J. Sonnelitter, III
  • Publication number: 20180341587
    Abstract: Embodiments of the present invention are directed to managing a shared high-level cache for dual clusters of fully connected integrated circuit multiprocessors. An example of a computer-implemented method includes: providing a drawer comprising a plurality of clusters, each of the plurality of clusters comprising a plurality of processors; providing a shared cache integrated circuit to manage a shared cache memory among the plurality of clusters; receiving, by the shared cache integrated circuit, an operation of one of a plurality of operation types from one of the plurality of processors; and processing, by the shared cache integrated circuit, the operation based at least in part on the operation type of the operation according to a set of rules for processing the operation type.
    Type: Application
    Filed: November 1, 2017
    Publication date: November 29, 2018
    Inventors: Michael A. Blake, Timothy C. Bronson, Pak-kin Mak, Vesselina K. Papazova, Robert J. Sonnelitter, III
  • Publication number: 20180341588
    Abstract: A system and method are described for integrating a memory and storage hierarchy including a non-volatile memory tier within a computer system. In one embodiment, PCMS memory devices are used as one tier in the hierarchy, sometimes referred to as “far memory.” Higher performance memory devices such as DRAM placed in front of the far memory and are used to mask some of the performance limitations of the far memory. These higher performance memory devices are referred to as “near memory.” In one embodiment, the “near memory” is configured to operate in a plurality of different modes of operation including (but not limited to) a first mode in which the near memory operates as a memory cache for the far memory and a second mode in which the near memory is allocated a first address range of a system address space with the far memory being allocated a second address range of the system address space, wherein the first range and second range represent the entire system address space.
    Type: Application
    Filed: August 1, 2018
    Publication date: November 29, 2018
    Inventors: Raj K. RAMANUJAN, Rajat AGARWAL, Glenn J. HINTON
  • Publication number: 20180341589
    Abstract: Provided herein is a computer-implemented method. The computer-implemented method includes updating, by a processor, a value of a delta field of an entry of a data structure indexed for the processor. The computer-implemented method also includes comparing, by the processor, a predefined threshold for a global field corresponding to the delta field and the value of the delta field. The computer-implemented method also includes rolling, by the processor, the value of the delta field into the global field when an absolute value of the value of the delta field meets or exceeds the predefined threshold for the global field. Note that the data structure is stored in a first area of a memory in communication with the processor that is separate from a second area of the memory storing the global field.
    Type: Application
    Filed: May 23, 2017
    Publication date: November 29, 2018
    Inventors: Harris M. Morgenstern, Steven M. Partlow, Christopher L. Wood
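A hedged sketch of the per-processor delta / shared global counter pattern described in 20180341589 and 20180341590: each processor updates a private delta entry and folds it into the global field only when the delta's magnitude reaches a threshold, which keeps contention on the shared field low. The class, the lock, and the threshold value are assumptions for illustration.

```python
import threading

class ScalableCounter:
    def __init__(self, num_cpus, threshold=64):
        self.deltas = [0] * num_cpus      # per-processor delta fields
        self.global_value = 0             # global field, stored separately
        self.threshold = threshold
        self._lock = threading.Lock()     # protects only the global field

    def add(self, cpu, amount):
        self.deltas[cpu] += amount
        if abs(self.deltas[cpu]) >= self.threshold:
            # Roll the accumulated delta into the global field.
            with self._lock:
                self.global_value += self.deltas[cpu]
            self.deltas[cpu] = 0

    def approximate_total(self):
        return self.global_value + sum(self.deltas)
```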
  • Publication number: 20180341590
    Abstract: Provided herein is a computer-implemented method. The computer-implemented method includes updating, by a processor, a value of a delta field of an entry of a data structure indexed for the processor. The computer-implemented method also includes comparing, by the processor, a predefined threshold for a global field corresponding to the delta field and the value of the delta field. The computer-implemented method also includes rolling, by the processor, the value of the delta field into the global field when an absolute value of the value of the delta field meets or exceeds the predefined threshold for the global field. Note that the data structure is stored in a first area of a memory in communication with the processor that is separate from a second area of the memory storing the global field.
    Type: Application
    Filed: November 13, 2017
    Publication date: November 29, 2018
    Inventors: Harris M. Morgenstern, Steven M. Partlow, Christopher L. Wood
  • Publication number: 20180341591
    Abstract: Techniques are disclosed for identifying data streams in a processor that are likely to and not likely to benefit from data prefetching. A prefetcher receives at least a first request in a plurality of requests to pre-fetch data from a stream in a plurality of streams. The prefetcher assigns a confidence level to the first request based on an amount of confirmations observed in the stream. The request is in a confident state if the confidence level exceeds a specified value. The first request is in a non-confident state if the confidence level does not exceed the specified value. Requests to prefetch data in the plurality of requests that are associated with respective streams with a low prefetch utilization are deprioritized. Doing so allows a memory controller to determine whether to drop the at least the first request based on the confidence level, prefetch utilization, and memory resource utilization.
    Type: Application
    Filed: May 26, 2017
    Publication date: November 29, 2018
    Inventors: Bernard C. Drerup, Richard J. Eickemeyer, Guy L. Guthrie, Mohit Karve, George W. Rohrbaugh, III, Brian W. Thompto
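An illustrative model of the confidence scheme in 20180341591 and 20180341592: a stream's confidence grows with each confirmation, and requests from streams below the confidence cutoff or with low prefetch utilization become candidates for the memory controller to drop under memory pressure. The numeric thresholds and method names are assumptions.

```python
class StreamPrefetcher:
    CONFIDENT = 4          # confirmations needed to reach the confident state

    def __init__(self):
        self.confirmations = {}   # stream id -> confirmations observed
        self.issued = {}          # stream id -> prefetches issued
        self.used = {}            # stream id -> prefetched lines actually used

    def record_prefetch(self, stream):
        self.issued[stream] = self.issued.get(stream, 0) + 1

    def record_confirmation(self, stream):
        self.confirmations[stream] = self.confirmations.get(stream, 0) + 1
        self.used[stream] = self.used.get(stream, 0) + 1

    def classify(self, stream):
        confident = self.confirmations.get(stream, 0) >= self.CONFIDENT
        issued = self.issued.get(stream, 0)
        utilization = self.used.get(stream, 0) / issued if issued else 1.0
        return confident, utilization

    def should_drop(self, stream, memory_busy):
        confident, utilization = self.classify(stream)
        # Under memory pressure, non-confident or poorly utilized streams are
        # the ones whose prefetch requests may be dropped.
        return memory_busy and (not confident or utilization < 0.5)
```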
  • Publication number: 20180341592
    Abstract: Techniques are disclosed for identifying data streams in a processor that are likely to and not likely to benefit from data prefetching. A prefetcher receives at least a first request in a plurality of requests to pre-fetch data from a stream in a plurality of streams. The prefetcher assigns a confidence level to the first request based on an amount of confirmations observed in the stream. The request is in a confident state if the confidence level exceeds a specified value. The first request is in a non-confident state if the confidence level does not exceed the specified value. Requests to prefetch data in the plurality of requests that are associated with respective streams with a low prefetch utilization are deprioritized. Doing so allows a memory controller to determine whether to drop the at least the first request based on the confidence level, prefetch utilization, and memory resource utilization.
    Type: Application
    Filed: November 13, 2017
    Publication date: November 29, 2018
    Inventors: Bernard C. Drerup, Richard J. Eickemeyer, Guy L. Guthrie, Mohit Karve, George W. Rohrbaugh, III, Brian W. Thompto
  • Publication number: 20180341593
    Abstract: In one example, a system for embedded image management includes a blade enclosure that includes a set of server blades, an embedded image management appliance coupled to the blade enclosure, comprising: an image repository stored in a memory resource, an appliance operating system that operates on the memory resource, and an image resource manager that operates on the appliance operating system.
    Type: Application
    Filed: November 29, 2015
    Publication date: November 29, 2018
    Inventors: Aland B. Adams, Michael S. Bunker, John Smi, Gary W. Thome
  • Publication number: 20180341594
    Abstract: Method and apparatus for managing data in a memory, such as a flash memory. A memory module has a non-volatile memory (NVM) and a memory module electronics (MME) circuit configured to program data to and read data from solid-state non-volatile memory cells of the NVM. A map structure associates logical addresses of user data blocks with physical addresses in the NVM at which the user data blocks are stored. A controller circuit arranges the user data blocks into map units (MUs), and directs the MME circuit to write the MUs to a selected page of the NVM. The controller circuit updates the map structure to list only a single occurrence of a physical address for all of the MUs written to the selected page. The map structure is further updated to list an MU offset and an MU length for each of the MUs written to the selected page.
    Type: Application
    Filed: May 26, 2017
    Publication date: November 29, 2018
    Inventors: Timothy Canepa, Jeffrey Munsil, Jackson Ellis, Mark Ish
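A simplified picture of the map-unit packing in 20180341594: several user data blocks are gathered into map units (MUs), the MUs are written to one page, and the map keeps a single physical address for the whole page plus a per-MU offset and length rather than one full address per block. The dictionary layout is an assumption, not the device's actual map format.

```python
def build_page_map(page_addr, map_units):
    """map_units: list of (mu_id, [logical block addresses], length_in_blocks).
    Returns a compact map with one physical address for the entire page."""
    entries = []
    offset = 0
    for mu_id, lbas, length in map_units:
        entries.append({"mu": mu_id, "lbas": lbas,
                        "offset": offset, "length": length})
        offset += length
    return {"physical_address": page_addr, "units": entries}

# Usage: build_page_map(0x4000, [("mu0", [10, 11], 2), ("mu1", [12, 13], 2)])
# yields one physical address (0x4000) shared by both MUs, each located by its
# offset and length within the page.
```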
  • Publication number: 20180341595
    Abstract: A memory address assignment method and a virtual machine that runs on a host machine are provided. The host machine includes a physical memory area with power failure protection, and the virtual machine includes a virtual memory area with power failure protection. The method includes determining that a virtual memory address in which a page fault occurs in the virtual machine belongs to the virtual memory area with power failure protection, and assigning a physical memory address of the host machine from the physical memory area with power failure protection to the virtual memory address.
    Type: Application
    Filed: August 3, 2018
    Publication date: November 29, 2018
    Inventors: Meng Gao, Kunpeng Liu
  • Publication number: 20180341596
    Abstract: A hashing scheme includes a cache-friendly, latchless, non-blocking dynamically resizable hash index with constant-time lookup operations that is also amenable to fast lookups via remote memory access. Specifically, the hashing scheme provides each of the following features: latchless reads, fine grained lightweight locks for writers, non-blocking dynamic resizability, cache-friendly access, constant-time lookup operations, amenable to remote memory access via RDMA protocol through one sided read operations, as well as non-RDMA access.
    Type: Application
    Filed: May 26, 2017
    Publication date: November 29, 2018
    Inventors: Siddharth Teotia, Krishna Kunchithapadam, Tirthankar Lahiri, Jesse Kamp, Michael J. Gleeson, Juan R. Loaiza, Garret F. Swart, Neil J.S. MacNaughton, Kam Shergill
  • Publication number: 20180341597
    Abstract: The present disclosure is related to a virtual register file. Source code can be compiled to include references to a virtual register file for data subject to a logical operation. The references can be dereferenced at runtime to obtain physical addresses of memory device elements according to the virtual register file. The logical operation can be performed in the memory device on data stored in the memory device elements.
    Type: Application
    Filed: August 3, 2018
    Publication date: November 29, 2018
    Inventors: John D. Leidel, Geoffrey C. Rogers
  • Publication number: 20180341598
    Abstract: Disclosed aspects relate to memory affinity management in a shared pool of configurable computing resources that utilizes non-uniform memory access (NUMA). An access relationship is monitored between a set of hardware memory components and a set of software assets. A set of memory affinity data is stored. The set of memory affinity data indicates the access relationship between the set of software assets and the set of hardware memory components. Using the set of memory affinity data, a NUMA utilization configuration with respect to the set of software assets is determined. Based on the NUMA utilization configuration, a set of accesses pertaining to the set of software assets and the set of hardware memory components is executed.
    Type: Application
    Filed: May 23, 2017
    Publication date: November 29, 2018
    Inventors: Mehulkumar Patel, Vaidyanathan Srinivasan, Venkatesh Sainath
  • Publication number: 20180341599
    Abstract: Provided are a computer program product, system, and method for adjusting active cache size based on cache usage. An active cache in at least one memory device caches tracks in a storage during computer system operations. An inactive cache in the at least one memory device is not available to cache tracks in the storage during the computer system operations. During caching operations in the active cache, information is gathered on cache hits to the active cache and cache hits that would occur if the inactive cache was available to cache data during the computer system operations. The gathered information is used to determine whether to configure a portion of the inactive cache as part of the active cache for use during the computer system operations.
    Type: Application
    Filed: August 3, 2018
    Publication date: November 29, 2018
    Inventors: Kevin J. Ash, Matthew G. Borlick, Lokesh M. Gupta, Will A. Wright
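For 20180341599 above, a hedged approximation of the measurement idea: besides counting hits in the active cache, keep a directory (not the data) of tracks that would have fit if the inactive cache were also in use, and count how often requests hit that directory. Comparing the two counts indicates whether enlarging the active cache is worthwhile. LRU behavior and the expansion threshold are assumptions.

```python
from collections import OrderedDict

class CacheSizer:
    def __init__(self, active_size, inactive_size):
        self.active = OrderedDict()        # track id -> cached (simulated)
        self.ghost = OrderedDict()         # track ids only; no data stored
        self.active_size = active_size
        self.inactive_size = inactive_size
        self.active_hits = 0
        self.would_be_hits = 0

    def access(self, track):
        if track in self.active:
            self.active_hits += 1
            self.active.move_to_end(track)
            return
        if track in self.ghost:
            self.would_be_hits += 1        # would hit if the inactive cache were active
            del self.ghost[track]
        self.active[track] = True
        if len(self.active) > self.active_size:
            evicted, _ = self.active.popitem(last=False)
            self.ghost[evicted] = True     # evicted tracks fall into the ghost directory
            if len(self.ghost) > self.inactive_size:
                self.ghost.popitem(last=False)

    def should_expand(self):
        # Expand the active cache when the extra capacity would add enough hits.
        return self.would_be_hits > 0.1 * max(1, self.active_hits)
```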
  • Publication number: 20180341600
    Abstract: In various examples a compute node is described. The compute node has a central processing unit which implements a hardware transactional memory using at least one cache of the central processing unit. The compute node has a memory in communication with the central processing unit, the memory storing information comprising at least one of: code and data. The compute node has a processor which loads at least part of the information, from the memory into the cache. The processor executes transactions using the hardware transactional memory and at least the loaded information, such that the processor ensures that the loaded information remains in the cache until completion of the execution.
    Type: Application
    Filed: June 29, 2017
    Publication date: November 29, 2018
    Inventors: Felix Schuster, Olga Ohrimenko, Istvan Haller, Manuel Silverio da Silva Costa, Daniel Gruss, Julian Lettner
  • Publication number: 20180341601
    Abstract: When software for embedded devices is executed in a virtual environment, a simulation stops if an access occurs to an area that is not installed in the virtual environment. An information processing apparatus includes a virtual environment for executing an embedded program for a predetermined embedded device. The virtual environment includes a virtual bus unit. The virtual bus unit includes an access processing unit that processes bus accesses during execution of the embedded program. The virtual bus unit also includes an area reserving unit that reserves, when the access destination of a bus access is not defined in the virtual environment, a storage area corresponding to the access destination as a stub area in the virtual bus unit.
    Type: Application
    Filed: March 10, 2018
    Publication date: November 29, 2018
    Inventor: Eiichi ARAI
  • Publication number: 20180341602
    Abstract: A method utilizes a system comprising a free pool buffer, a deadlock avoidance buffer, and a controller communicatively coupled to the free pool buffer and the deadlock avoidance buffer to reorder out-of-order responses to fetch requests into the correct order by: receiving a fetch request on behalf of a consumer; allocating space first in the free pool buffer and, when such space is not available, allocating space in a division associated with the consumer in the deadlock avoidance buffer; issuing segment(s) of the fetch request, including associated tag(s), to one of one or more memories; writing response data for each of the segment(s) to the allocated space in the free pool buffer or the deadlock avoidance buffer according to each of the associated tag(s); and transferring the response data to the consumer according to an entry in an ordering first-in, first-out buffer and an entry in a pending request array.
    Type: Application
    Filed: May 23, 2017
    Publication date: November 29, 2018
    Applicant: Cavium, Inc.
    Inventors: Kalyana Sundaram Venkataraman, Jason Daniel Zebchuk, Gregg Alan Bouchard, Tejas Maheshbhai Bhatt, Hong Jik Kim, Eric Marenger, Ahmed Shahid
  • Publication number: 20180341603
    Abstract: An expandable memory system that enables a fixed signaling bandwidth to be configurably re-allocated among dedicated memory channels. Memory channels having progressively reduced widths are dedicated to respective memory sockets, thus enabling point-to-point signaling with respect to each memory socket without signal-compromising traversal of unloaded sockets or costly replication of a full-width memory channel for each socket.
    Type: Application
    Filed: May 29, 2018
    Publication date: November 29, 2018
    Inventors: Ian P. Shaeffer, Arun Vaidyanath, Sanku Mukherjee
  • Publication number: 20180341604
    Abstract: A digital processing device with high input/output connectivity and modular architecture comprises a first plurality of input ports, a second plurality of output ports, and a third plurality of at least four basic elementary modules. The third plurality of the elementary modules is split up according to a partitioning of at least two sub-assemblies of module(s), at least two of which form different islets comprising at least two modules.
    Type: Application
    Filed: May 23, 2018
    Publication date: November 29, 2018
    Inventors: Hélène GACHON, Norbert VENET