Patent Applications Published on December 6, 2018
-
Publication number: 20180349232
Abstract: A method for creating a common platform graphical user interface is provided. The interface may enable a user to trigger a data load job from a tool. The tool may monitor file upload events, trigger jobs and identify lists of missing or problematic file names. The tool may run on a single thread, thereby consuming fewer system resources than a multi-thread program performing the same tasks. The tool may enable selection of file names using wildcard variables or keyword variables. The tool may validate a list of files received against a master file list for each data load job. The tool may receive user input relating to each data load job. The tool may generate a loop within the single thread to receive information. The tool may analyze the received information and use the received information to predict future metadata associated with future data load jobs.
Type: Application
Filed: June 5, 2017
Publication date: December 6, 2018
Inventors: Sireesh Kumar Vasantha, Suki Ramasamy
-
Publication number: 20180349233
Abstract: A synchronization process for virtual-machine images (and other segmented files) provides for generating a “delta” bitmap indicating which segments (e.g., clusters) of a first virtual-machine image were changed to obtain a second (e.g., updated) virtual-machine image on a source node. The delta bitmap can be applied to the second virtual-machine image to generate a delta file. The delta file can be sent along with the delta bitmap to a target node that already has a copy of the first virtual-machine image. The transferred delta bitmap and delta file can then be used on the target node to generate a replica of the second virtual-machine image, thereby effecting synchronization. In variations, different bitmaps and delta files can be transferred to optimize the synchronization process.
Type: Application
Filed: June 5, 2017
Publication date: December 6, 2018
Applicant: VMware, Inc.
Inventors: Oleg ZAYDMAN, Preeti KOTA
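The delta-bitmap scheme described in this abstract can be illustrated with a minimal Python sketch. The segment size, function names, and byte strings below are illustrative assumptions, not taken from the patent; real images would use cluster-sized segments.

```python
# Illustrative sketch of delta-bitmap synchronization (assumed names/sizes).
SEGMENT = 4  # hypothetical segment (cluster) size in bytes

def make_delta(old: bytes, new: bytes):
    """Return (bitmap, delta): bitmap[i] is True iff segment i changed;
    delta concatenates only the changed segments of the new image."""
    n = (len(new) + SEGMENT - 1) // SEGMENT
    bitmap, chunks = [], []
    for i in range(n):
        seg = new[i * SEGMENT:(i + 1) * SEGMENT]
        changed = seg != old[i * SEGMENT:(i + 1) * SEGMENT]
        bitmap.append(changed)
        if changed:
            chunks.append(seg)
    return bitmap, b"".join(chunks)

def apply_delta(old: bytes, bitmap, delta: bytes):
    """On the target node, rebuild the new image from the old copy,
    the bitmap, and the delta file."""
    out, pos = [], 0
    for i, changed in enumerate(bitmap):
        if changed:
            out.append(delta[pos:pos + SEGMENT])
            pos += SEGMENT
        else:
            out.append(old[i * SEGMENT:(i + 1) * SEGMENT])
    return b"".join(out)
```

Only the changed segments cross the network; the unchanged ones are reused from the copy the target already holds.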
-
Publication number: 20180349234
Abstract: Embodiments of the present disclosure provide a method for a storage system, a storage system and a computer program product. The method comprises determining that a first drive in a drive array is temporarily unavailable. The method further comprises setting the first drive in a frozen state. The method further comprises: in response to receiving a write request for the first drive during the frozen state, pending the write request or recording the write request in a second drive in the drive array. The method further comprises: in response to receiving a read request for the first drive during the frozen state, reconstructing data to which the read request is directed through data stored in a third drive in the drive array.
Type: Application
Filed: May 31, 2018
Publication date: December 6, 2018
Inventors: Bing Liu, Man Lv
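A common way to reconstruct reads for an unavailable drive, as in the abstract above, is XOR parity across the remaining drives of a stripe. This sketch assumes a RAID-5-style parity layout; the class and function names are illustrative, not from the patent.

```python
# Illustrative sketch: serve reads for a frozen drive by XOR reconstruction,
# and hold (pend) writes until the drive thaws. Names are assumptions.
def xor_reconstruct(surviving_chunks):
    """Rebuild the missing chunk of a parity stripe from the other chunks."""
    out = bytes(len(surviving_chunks[0]))
    for chunk in surviving_chunks:
        out = bytes(a ^ b for a, b in zip(out, chunk))
    return out

class FrozenDrive:
    """Minimal model of the 'frozen state': writes are pended, not failed."""
    def __init__(self):
        self.pending_writes = []
    def write(self, request):
        self.pending_writes.append(request)  # replay these after the thaw
```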
-
Publication number: 20180349235
Abstract: A redundant computer system utilizing comparison diagnostics and voting techniques includes a plurality of redundant channels. Each pair of processors receives process information from I/O modules via dual redundant sensors (DRS). The processors execute an application program, and an output module is utilized for comparing the output data of the two processors. The output module receives output data from neighboring modules if there is a deviation or other disparity in the output data. For each pair of processors, a voter and an improper-sequence-detector component disable the output module if a majority of signals vote that the output module has failed. In addition, because the system uses 2-of-3 voting, it remains operational in the presence of up to two transient or hard failures.
Type: Application
Filed: April 30, 2018
Publication date: December 6, 2018
Applicant: The University of Akron
Inventors: Lev Raphaelovich Freydel, Nathan Ida
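The 2-of-3 (two-out-of-three) voting mentioned in the abstract can be sketched in a few lines. This is a generic majority voter, not the patent's specific circuit; the function name is an assumption.

```python
# Illustrative 2-of-3 majority voter over three redundant channel outputs.
def vote_2oo3(a, b, c):
    """Return the value agreed on by at least two of the three channels."""
    if a == b or a == c:
        return a
    if b == c:
        return b
    raise RuntimeError("no majority: all three channels disagree")
```

With this scheme a single faulty channel is simply outvoted, which is why such systems tolerate a failure without interrupting operation.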
-
Publication number: 20180349236
Abstract: A method for transmitting a request message and an apparatus are disclosed, to resolve a prior-art problem in ICT networks whereby, when a request message is transmitted, the probability that the request message fails to be transmitted is increased and the reliability of transmitting the request message is reduced. The method includes: determining, by a dispatcher according to information that is about a first controller and that is included in a received request message, a corresponding first driver adaptation plug-in group, where the first controller is connected to at least one driver adaptation plug-in included in the first driver adaptation plug-in group; and selecting, by the dispatcher from the at least one driver adaptation plug-in, at least one to-be-selected driver adaptation plug-in whose running status is normal, and eventually sending the request message to the first controller by using one to-be-selected driver adaptation plug-in.
Type: Application
Filed: August 13, 2018
Publication date: December 6, 2018
Inventors: Yuejin Liu, Youyu Jiang
-
Publication number: 20180349237
Abstract: Provided are a computer program product, system, and method for monitoring correctable errors on a bus interface to determine whether to redirect traffic to another bus interface. A processing unit sends Input/Output (I/O) requests from a host to a storage over a first bus interface to a first device adaptor, wherein the first device adaptor provides a first connection to the storage. A determination is made as to whether a number of correctable errors on the first bus interface exceeds an error threshold. The correctable errors are detected and corrected in the first bus interface by hardware of the first bus interface. In response to determining that the number of correctable errors on the first bus interface exceeds the error threshold, at least a portion of I/O requests are redirected to use a second bus interface to connect to a second device adaptor providing a second connection to the storage.
Type: Application
Filed: June 2, 2017
Publication date: December 6, 2018
Inventors: Matthew G. Borlick, Lokesh M. Gupta, Trung N. Nguyen
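The threshold-based redirection in the abstract above amounts to a counter compared against a limit. This minimal sketch uses assumed names and an arbitrary threshold; the real system tracks hardware-corrected errors per bus interface.

```python
# Illustrative sketch: redirect I/O once correctable errors pass a threshold.
ERROR_THRESHOLD = 3  # assumed value for illustration

class BusMonitor:
    """Route I/O over the first bus interface until its correctable-error
    count exceeds the threshold, then redirect traffic to the second bus."""
    def __init__(self):
        self.correctable_errors = 0
    def record_correctable_error(self):
        self.correctable_errors += 1
    def pick_bus(self):
        if self.correctable_errors > ERROR_THRESHOLD:
            return "bus2"  # second device adaptor / second connection
        return "bus1"      # first device adaptor / first connection
```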
-
Publication number: 20180349238
Abstract: Account data comprising metadata for primary application instances running at a primary active cloud environment instance (ACEI) is stored. Application data associated with the primary application instances is stored at primary databases (DBs). The account and application data are transferred to secondary DBs at a secondary ACEI. The secondary ACEI may be a backup instance to substitute services provided by the primary ACEI in case of unavailability. For example, the location where the primary ACEI is hosted may be affected by a disaster. To failover a primary data center hosting the primary ACEI, a database takeover to the secondary DBs is performed. The secondary ACEI is configured correspondingly to the primary ACEI based on the transferred account data. Secondary application instances corresponding to the primary application instances are started at the secondary ACEI. Requests directed to the primary application instances are redirected to the secondary application instances.
Type: Application
Filed: May 31, 2017
Publication date: December 6, 2018
Inventors: Stoyan Boshev, Petio Petev, Thomas Walter, Bogdan Vatkov, Hristo Dobtchev, Borislav Arnaudov
-
Publication number: 20180349239
Abstract: A system may include a first device to provide a uniform resource identifier (URI) resolution or routing service among a first data center and a second data center. The first device may provide a first failover service among devices associated with the first data center for a set of interfaces. The system may include a first set of devices and a second set of devices associated with a first application and a second application. The first device may provide a second failover service for the first and second sets of devices. The system may include a first database cluster to provide software or a service related to clustering another set of devices or providing a threshold level of availability for the other set of devices. The first database cluster may provide a failover service for the other set of devices.
Type: Application
Filed: June 2, 2017
Publication date: December 6, 2018
Inventors: Harshad GOHIL, Jeffrey James BATH, Shyam Kumar DESU, Jeetendra PRADHAN
-
Publication number: 20180349240
Abstract: A device that provides power fail handling using command suspension includes non-volatile memory circuits and a controller that is configured to determine that a power fail event has occurred. The controller is configured to determine, in response to the determination that the power fail event has occurred, which of the non-volatile memory circuits are executing a first type of memory commands. The controller is also configured to issue a stop command to the determined non-volatile memory circuits to stop execution of the first type of memory commands.
Type: Application
Filed: May 31, 2017
Publication date: December 6, 2018
Inventors: YungLi JI, Yuriy PAVLENKO, Kum-Jung SONG
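The selection step in the abstract above (find which memory circuits are executing a given command type, then stop them) can be sketched as a simple filter. The command names and die identifiers are illustrative assumptions.

```python
# Illustrative sketch: on power fail, pick the dies executing a suspendable
# command type so the controller can issue stop commands to them.
PROGRAM, READ = "program", "read"  # assumed command-type labels

def dies_to_stop(die_commands, stop_type=PROGRAM):
    """Given a mapping die -> currently executing command type, return the
    dies that should receive a stop command for `stop_type`."""
    return [die for die, cmd in die_commands.items() if cmd == stop_type]
```

Stopping long-running program/erase operations first leaves more of the residual hold-up energy for flushing critical state.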
-
Publication number: 20180349241
Abstract: A processor includes an execution pipeline that includes a plurality of execution stages, execution pipeline control logic, and a debug system. The execution pipeline control logic is configured to control flow of an instruction through the execution stages. The debug system includes a debug pipeline and debug pipeline control logic. The debug pipeline includes a plurality of debug stages. Each debug pipeline stage corresponds to an execution pipeline stage, and the total number of debug stages corresponds to the total number of execution stages. The debug pipeline control logic is coupled to the execution pipeline control logic. The debug pipeline control logic is configured to control flow through the debug stages of debug information associated with the instruction, and to advance the debug information into a next of the debug stages in correspondence with the execution pipeline control logic advancing the instruction into a corresponding stage of the execution pipeline.
Type: Application
Filed: August 13, 2018
Publication date: December 6, 2018
Inventors: Shrey Bhatia, Christian Wiencke, Armin Stingl, Ralph Ledwa, Wolfgang Lutsch
-
Publication number: 20180349242
Abstract: An accessory communication control protocol can facilitate faster and more secure transmission of status updates from an accessory to a controller (or network base station). An accessory can register with a controller, where the controller can provide some subscription and key generation information to the accessory. The accessory can detect changes to characteristics of the accessory and generate a broadcast notification that includes updates to the state of the characteristic. The broadcast notification can also include a counter, a device identifier, and a key. According to timing or rules, the accessory can transmit the broadcast notification to the controller without the need to establish a secure session with the controller.
Type: Application
Filed: September 21, 2017
Publication date: December 6, 2018
Applicant: Apple Inc.
Inventor: Dennis Mathews
-
Publication number: 20180349243
Abstract: Provided herein may be a storage device and a method of operating the same. In a storage device for controlling operational performance depending on temperature, a memory controller configured to control a memory device may include an internal temperature sensing unit configured to generate internal temperature information by sensing a temperature of the memory controller, and a performance adjustment unit configured to receive external temperature information from an external temperature sensing unit and to control operational performance of the memory controller using the internal temperature information and the external temperature information, wherein the external temperature information represents a temperature of the memory device.
Type: Application
Filed: December 12, 2017
Publication date: December 6, 2018
Inventors: Soong Sun SHIN, Sang Hyun KIM
-
Publication number: 20180349244
Abstract: Circuits, methods, and apparatus that may estimate the power being consumed by an OLED display screen of an electronic device, may provide further information about that power usage, may modify or change functions performed by the electronic device based on that power usage, and may inform an application's developer about the amount of power being used by the electronic device while the electronic device is running the application. One example may estimate the power being used by an OLED display screen of an electronic device by determining the content of images being displayed during a duration. The estimated power may then be presented to a user. The estimated power may be used in decisions to modify or change parameters of the screen or other device components.
Type: Application
Filed: February 20, 2018
Publication date: December 6, 2018
Applicant: Apple Inc.
Inventors: Abhinav Pathak, Conor J. O'Reilly, Shashi K. Dua, Udaykumar R. Raval, Christopher W. Chaney, Amit K. Vyas, Albert S. Liu, Roberto Alvarez, Rohit Mundra, Vladislav Sahnovich, Patrick Y. Law, Paul M. Thompson, Paolo Sacchetto, Chaohao Wang, Arthur L. Spence, Jean-Pierre Simon Guillou, Mohammad Ali Jangda, Christopher Edward Glazowski, Yifan Zhang, Prajakta S. Karandikar, Han Ming Ong
-
Publication number: 20180349245
Abstract: A computer-implemented method, a computer program product, and a computer system for parallel task management. A computer system receives a new task that requests access to a resource. In response to an access workload being above a first threshold, the computer system dispatches the new task to at least one predefined processing unit, wherein the access workload may be associated with the resource that is accessed in parallel by a plurality of existing tasks.
Type: Application
Filed: May 30, 2017
Publication date: December 6, 2018
Inventors: Ping Ping Cheng, Jun Hua Gao, Guan Jun Liu, Xue Yong Zhang, Xi Bo Zhu, Bei Chun Zhou
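The dispatch decision described in the abstract above reduces to comparing the resource's current access workload against a threshold. The names and threshold value in this sketch are illustrative assumptions.

```python
# Illustrative sketch: route a new task to a predefined processing unit when
# the resource it wants is already under heavy parallel access.
def dispatch(task, access_workload, threshold=10):
    """Return the destination chosen for `task` based on the workload of
    the resource it requests."""
    if access_workload > threshold:
        return ("predefined-unit", task)  # contended resource: isolate it
    return ("general-pool", task)         # normal scheduling path
```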
-
Publication number: 20180349246
Abstract: A computer program product and a computer system for parallel task management. A computer system receives a new task that requests access to a resource. In response to an access workload being above a first threshold, the computer system dispatches the new task to at least one predefined processing unit, wherein the access workload may be associated with the resource that is accessed in parallel by a plurality of existing tasks.
Type: Application
Filed: October 31, 2017
Publication date: December 6, 2018
Inventors: Ping Ping Cheng, Jun Hua Gao, Guan Jun Liu, Xue Yong Zhang, Xi Bo Zhu, Bei Chun Zhou
-
Publication number: 20180349247
Abstract: In one embodiment, a server determines a particular computer network outside of a lab environment to recreate, and also determines, for the particular computer network, hardware components and their interconnectivity, as well as installed software components and their configuration. The server then controls interconnection of lab hardware components within the lab environment according to the interconnectivity of the hardware components of the particular computer network. The server also installs and configures lab software components on the lab hardware components according to the configuration of the particular computer network. Accordingly, the server operates the installed lab software components on the interconnected lab hardware components within the lab environment to recreate operation of the particular computer network within the lab environment, and provides information about the recreated operation of the particular computer network.
Type: Application
Filed: June 5, 2017
Publication date: December 6, 2018
Inventors: Michael David Hanes, Joseph Michael Clarke, Charles Calvin Byers, Gonzalo Salgueiro
-
Publication number: 20180349248
Abstract: A software analysis device which efficiently analyzes a computer environment in which software is capable of running is provided. The software analysis device sets at least two configurations for a virtual machine, executes processing at a timing on each individual configuration, determines whether or not the results of the processing satisfy a predetermined criterion, and determines that the software is executed when the results satisfy the predetermined criterion.
Type: Application
Filed: November 17, 2016
Publication date: December 6, 2018
Applicant: NEC Corporation
Inventor: Yuki ASHINO
-
Publication number: 20180349249
Abstract: An apparatus includes a memory and a processor, where the processor includes a performance counter that stores performance data for the processor. The apparatus stores plural groups of calculation instructions in the memory. The apparatus calculates a first execution result by executing, based on the performance data obtained from the performance counter, each calculation instruction included in a first group of calculation instructions, and selects a second group of calculation instructions to be executed next, from among the plural groups of calculation instructions, based on the calculated first execution result.
Type: Application
Filed: May 30, 2018
Publication date: December 6, 2018
Applicant: FUJITSU LIMITED
Inventor: Fumitake ABE
-
Publication number: 20180349250
Abstract: Systems and methods for implementing content-level anomaly detection for devices having limited memory are provided. At least one log content model is generated based on training log content of training logs obtained from one or more sources associated with the computer system. The at least one log content model is transformed into at least one modified log content model to limit memory usage. Anomaly detection is performed for testing log content of testing logs obtained from one or more sources associated with the computer system based on the at least one modified log content model. In response to the anomaly detection identifying one or more anomalies associated with the testing log content, the one or more anomalies are output.
Type: Application
Filed: May 3, 2018
Publication date: December 6, 2018
Inventors: Biplob Debnath, Hui Zhang, Erik Kruus
-
Publication number: 20180349251
Abstract: Disclosed herein are system, method, and computer program product embodiments for error root cause detection. An embodiment operates by a computer implemented method that includes receiving, by at least one processor, a request to determine a root cause of an error associated with a code and executing a first execution path and a second execution path, where the first and second execution paths correspond to the code. The method further includes determining whether a difference between first data generated by the execution of the first execution path and second data generated by the execution of the second execution path affects the error associated with the code. The method also includes identifying a code component that contributed to the difference between the first data and the second data, if the difference between the first data and the second data affects the error associated with the code.
Type: Application
Filed: June 6, 2017
Publication date: December 6, 2018
Inventors: Sebastian MIETKE, Toni Fabijancic
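The core idea in the abstract above (run two execution paths, compare the data they produce, and attribute the divergence to a component) can be sketched generically. Everything here, from the function name to the toy components, is an illustrative assumption rather than the patented method.

```python
# Illustrative sketch: find the first component whose output differs between
# two execution paths of the same code.
def find_divergent_component(path_a, path_b, inputs):
    """Each path is a list of (component_name, function) pairs applied in
    order. Return the name of the first component where outputs diverge,
    or None if the paths agree throughout."""
    state_a = state_b = inputs
    for (name, fa), (_, fb) in zip(path_a, path_b):
        state_a, state_b = fa(state_a), fb(state_b)
        if state_a != state_b:
            return name  # this component contributed to the difference
    return None
```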
-
Publication number: 20180349252
Abstract: Methods and devices for testing graphics hardware may include reading content of a selected capture file from a plurality of capture files. The methods and devices may include transferring content from the selected capture file to an emulator memory of an emulator separate from the computer device. The methods and devices may include executing at least one pseudo central processing unit (pseudo CPU) operation to coordinate the execution of work on a graphics processing unit (GPU) of the emulator using the content from the selected capture file to test the GPU. The methods and devices may include receiving and storing rendered image content from the emulator when the work is completed.
Type: Application
Filed: June 6, 2017
Publication date: December 6, 2018
Inventors: Aaron RODRIGUEZ HERNANDEZ, Jason GOULD, Cole BROOKING, Nihar MOHAPATRA, Parikshit NARKHEDE, Veena K. MALWANKAR
-
Publication number: 20180349253
Abstract: Embodiments include apparatuses, methods, and computer devices including a processor and a device programmer coupled to the processor. The processor may detect an error during an execution of a program on the processor, and transmit an error message to the device programmer. Afterwards, the processor may receive a probe input signal from the device programmer to stop the execution of the program on the processor, and transmit a probe output signal to the device programmer to indicate that the execution of the program on the processor has stopped. On the other hand, the device programmer may receive the error message from the processor, and transmit the probe input signal to the processor to stop the execution of the program on the processor. Afterwards, the device programmer may receive the probe output signal from the processor.
Type: Application
Filed: June 2, 2017
Publication date: December 6, 2018
Inventor: James S. Woodward
-
Publication number: 20180349254
Abstract: The present disclosure generally relates to end-to-end testing of applications using simulated data. More particularly, the present disclosure relates to systems and methods that test applications in a production environment by dynamically generating and tracking the simulated data in real time. In some implementations, an expected number of simulated user profiles (e.g., based on a protocol for generating simulated user profiles) can be compared against an actual number of simulated user profiles stored in a state machine to identify issues within the end-to-end environment of the application being tested.
Type: Application
Filed: May 7, 2018
Publication date: December 6, 2018
Applicant: Oracle International Corporation
Inventor: Vernon W. Hui
-
Publication number: 20180349255
Abstract: A method and device for transmitting metrologically acquired and digitized measured data in a test device. The measured data corresponds to a program task, and a direction of the transmission of the measured data from a measured data transmitter of the test device is provided via a data channel to a measured data receiver of the test device. The measured data transmitter has a signal preprocessing processor, a task monitoring processor and a data channel arbiter. Via the task monitoring processor, a task ID data packet is generated at an execution start of the program task or at an execution end of the program task, and the task ID data packet is transmitted to the data channel arbiter. Via the data channel arbiter, the measured data and the task ID data packet are successively forwarded via the data channel as a data stream to the measured data receiver.
Type: Application
Filed: June 4, 2018
Publication date: December 6, 2018
Applicant: dSPACE digital signal processing and control engineering GmbH
Inventors: Matthias FROMME, Jochen SAUER, Matthias SCHMITZ
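The packet interleaving described in the abstract above (task-ID packets framing the measured-data packets in one stream) can be modeled as a generator. The packet labels are illustrative assumptions; the patent's actual framing is hardware-level.

```python
# Illustrative sketch of the arbiter's output: a task-ID packet at execution
# start, the measured-data packets, then a task-ID packet at execution end,
# so the receiver can attribute each sample to the program task.
def stream_task(task_id, samples):
    yield ("TASK_START", task_id)
    for s in samples:
        yield ("DATA", s)
    yield ("TASK_END", task_id)
```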
-
Publication number: 20180349256
Abstract: Computer implemented methods and systems are provided for generating one or more test cases based on received one or more natural language strings. An example system comprises a natural language classification unit that utilizes a trained neural network in conjunction with a reinforcement learning model, the system receiving as inputs various natural language strings and providing as outputs mapped test actions, mapped by the neural network.
Type: Application
Filed: June 1, 2018
Publication date: December 6, 2018
Inventor: Cory FONG
-
Publication number: 20180349257
Abstract: The present disclosure generally relates to predicting automated software tests for testing units of work delivery in a continuous integration development environment. More particularly, the present disclosure relates to systems and methods for improving the efficiency of code integration by predicting a subset of automated software tests from amongst a set of all available automated software tests, thereby improving testing time and reducing processing loads.
Type: Application
Filed: May 31, 2017
Publication date: December 6, 2018
Applicant: Oracle International Corporation
Inventors: Abhijit Bhattacharjee, Manoj Dash
-
Publication number: 20180349258
Abstract: Methods and systems for generating a combined metric parameter for A/B testing comprising: acquiring a respective first metric parameter for a first and second plurality of feature vectors, a combination of the respective first metric parameters being indicative of a direction of a change in user interactions between the control version and the treatment version; acquiring a respective second metric parameter for the first and second plurality of feature vectors, a combination of the respective second metric parameters being indicative of a magnitude of the change in user interactions between the control and treatment version; and generating a respective combined control metric parameter for the first plurality of feature vectors and the second plurality of feature vectors, the combination of the respective combined metric parameters being simultaneously indicative of the magnitude and the direction of the change in user interactions between the control and treatment version.
Type: Application
Filed: June 4, 2018
Publication date: December 6, 2018
Inventors: Evgeny Vyacheslavovich KHARITONOV, Aleksey Valyerevich DRUTSA, Pavel Viktorovich SERDYUKOV
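One simple way to combine a direction-indicating metric with a magnitude-indicating metric, in the spirit of the abstract above, is to take the sign of one and the absolute value of the other. This is an illustrative construction, not the patented combination.

```python
# Illustrative sketch: fold a direction delta and a magnitude delta into one
# number whose sign gives the direction of the change between control and
# treatment, and whose absolute value gives its size.
def combined_metric(direction_delta, magnitude_delta):
    sign = (direction_delta > 0) - (direction_delta < 0)  # -1, 0, or +1
    return sign * abs(magnitude_delta)
```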
-
Publication number: 20180349259
Abstract: A method for executing programs (P) in an electronic system for applications with functional safety that comprises a single-processor or multiprocessor processing system (10) and a further independent control module (15), including: carrying out a decomposition of a program (P) that includes a safety function (SF) to be executed via said system (10) into a plurality of parallel subprograms (P1, . . . , Pn); assigning execution of each parallel subprogram (P1, . . . , Pn) to a respective processing module (11) of the system, in particular a processor (C1, . . . , Cm) of said multiprocessor architecture (10) or a virtual machine (V1, . . . , Vn) associated to one of said processors (C1, . . . , Cm); carrying out in the system (10), periodically according to a cycle frequency (fcyc) of the program (P) during normal operation of said system (10), in the context of said safety function (SF), self-test operations (Astl, Asys, Achk) associated to each of said subprograms (P1, . . .
Type: Application
Filed: October 12, 2016
Publication date: December 6, 2018
Applicant: Intel Corporation
Inventor: Riccardo MARIANI
-
Publication number: 20180349260
Abstract: Separating data of trusted and untrusted data types in a memory of a computer during execution of a software program. Assigning mutually separated memory regions in the memory, namely, for each of the data types, a memory region for storing any data of the respective data type, and an additional memory region for storing any data which cannot be uniquely assigned to one of the data types. For each allocation instruction, performing a memory allocation including linking the allocation instruction to at least one data source, generating instruction-specific context information, evaluating the data source to determine the data type, associating the data type with the context information, based on the context information, assigning the allocation instruction to the memory region assigned to the evaluated data type, and allocating memory for storing data from the data source in the assigned memory region.
Type: Application
Filed: June 1, 2017
Publication date: December 6, 2018
Inventors: Anil Kurmus, Matthias Neugschwandtner, Alessandro Sorniotti
-
Publication number: 20180349261
Abstract: According to one embodiment, it is determined whether data stored in a compressor pool exceeds a first predetermined threshold, the compressor pool being a fixed-size memory pool maintained in a kernel of an operating system. The compressor pool stores a plurality of compressed memory pages, each memory page storing compressed data pages that can be paged out to or paged in from a persistent storage device. The compressed memory pages are associated with a plurality of processes. A memory consumption reduction action is performed to reduce memory usage, including terminating at least one of the processes to reclaim a memory space occupied by the process, in response to determining that the data stored in the compressor pool exceeds the first predetermined threshold.
Type: Application
Filed: June 1, 2018
Publication date: December 6, 2018
Inventors: LIONEL D. DESAI, RUSSELL A. BLAINE, BENJAMIN C. TRUMBULL
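The threshold-triggered reclamation described in the abstract above can be sketched as a small policy function. The pool size, threshold fraction, and victim-selection rule (largest owner) are illustrative assumptions.

```python
# Illustrative sketch: when compressed pages exceed a threshold of the fixed
# compressor pool, terminate a process to reclaim its compressed memory.
POOL_LIMIT = 100   # assumed pool size in bytes
THRESHOLD = 0.8    # assumed fraction of the pool that triggers reclamation

def maybe_reclaim(processes):
    """`processes` maps process name -> bytes of compressed pages it owns.
    Return (victim, reclaimed_bytes); (None, 0) if under the threshold."""
    used = sum(processes.values())
    if used <= POOL_LIMIT * THRESHOLD:
        return None, 0
    victim = max(processes, key=processes.get)  # assumed policy: biggest owner
    return victim, processes.pop(victim)
```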
-
Publication number: 20180349262
Abstract: A method includes using a memory address map, locating a first portion of an application in a first memory and loading a second portion of the application from a second memory. The method includes executing in place from the first memory the first portion of the application, during a first period, and by completion of the loading of the second portion of the application from the second memory. The method further includes executing the second portion of the application during a second period, wherein the first period precedes the second period.
Type: Application
Filed: June 15, 2018
Publication date: December 6, 2018
Applicant: Cypress Semiconductor Corporation
Inventors: Stephan Rosner, Qamrul Hasan, Venkat NATARAJAN
-
Publication number: 20180349263
Abstract: In one or more embodiments, one or more methods, processes and/or systems may receive multiple characters (e.g., a string), determine multiple offsets respectively corresponding to the multiple characters, determine multiple addresses based on a base address and the multiple offsets respectively corresponding to the multiple characters, and execute multiple subroutine call instructions to each of the multiple addresses. In one or more embodiments, an execution log of the subroutine call instructions to each of the multiple addresses may be analyzed. For instance, the execution log of the subroutine call instructions to each of the multiple addresses may be utilized in determining the multiple characters (e.g., the string) that were received. In one or more embodiments, determining the multiple characters may include determining offsets from a base address and utilizing the offsets as a mapping to characters. For example, the string may be recovered and/or recreated from the offsets.Type: Application
Filed: June 1, 2017
Publication date: December 6, 2018
Inventor: Craig Lawrence Chaiken
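The character-to-offset-to-address mapping and its inverse, as described in the abstract above, can be shown in a few lines. The base address and the use of code points as offsets are illustrative assumptions.

```python
# Illustrative sketch: encode a string as call addresses (base + per-character
# offset), then recover the string from a log of those addresses.
BASE = 0x1000  # hypothetical base address of the call-target table

def chars_to_addresses(s):
    """Map each character to an offset (its code point) added to the base."""
    return [BASE + ord(c) for c in s]

def recover_string(call_log):
    """Recreate the original string from logged call addresses."""
    return "".join(chr(addr - BASE) for addr in call_log)
```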
-
Publication number: 20180349264
Abstract: Disclosed are methods, systems and devices for operation of memory device. In one aspect, bit positions of a portion of a memory array may be placed in a first value state. Values to be written to the bit positions may be determined subsequent to placement of the bit positions in the first value state. Values at selected ones of the bit positions may then be changed from the first value state to a second value state while maintaining remaining unselected ones of the bit positions in the first value state so that the bit positions store or represent the values determined to be written to the bit positions.
Type: Application
Filed: June 5, 2017
Publication date: December 6, 2018
Inventors: Joel Thornton Irby, Mudit Bhargava, Alan Jeremy Becker
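The two-phase write in the abstract above (drive every position to one state first, then flip only the selected positions) can be modeled with bitwise operations. Treating the first value state as 1 and the second as 0, and taking `values` least-significant-bit first, are illustrative assumptions.

```python
# Illustrative sketch of the two-phase write: all positions to state 1 first,
# then clear only the positions that must encode 0.
def program_word(values, width=8):
    """`values` lists the desired bit per position (LSB first)."""
    word = (1 << width) - 1             # phase 1: every position in state 1
    for pos, bit in enumerate(values):  # phase 2: flip selected positions
        if bit == 0:
            word &= ~(1 << pos)
    return word
```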
-
Publication number: 20180349265
Abstract: According to one embodiment, data stored in the past can be used effectively without being influenced by the characteristics and capacity of a memory storing storable data. Maintenance data are generated by managing a state in a memory area, alert notification data are transferred on the basis of the generated maintenance data, and/or at least a part of the data which are already stored in the memory area is transferred.
Type: Application
Filed: August 9, 2018
Publication date: December 6, 2018
Applicants: KABUSHIKI KAISHA TOSHIBA, Toshiba Digital Solutions Corporation
Inventors: Shinichi KASHIMOTO, Keisuke AZUMA, Yuji CHOTOKU
-
Publication number: 20180349266Abstract: Method and apparatus for managing data such as in a flash memory. In some embodiments, a memory module electronics (MME) circuit writes groups of user data blocks to consecutive locations within a selected section of a non-volatile memory (NVM), and concurrently writes a directory map structure as a sequence of map entries distributed among the groups of user data blocks. Each map entry stores address information for the user data blocks in the associated group and a pointer to a subsequent map entry in the sequence. A control circuit accesses a first map entry in the sequence and uses the address information and pointer in the first map entry to locate the remaining map entries and the locations of the user data blocks in the respective groups. Lossless data compression may be applied to the groups prior to writing.Type: ApplicationFiled: May 31, 2017Publication date: December 6, 2018Inventors: Timothy Canepa, Ryan J. Goss, Stephen Hanna
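The chained directory-map structure this abstract describes can be sketched as a linked sequence of entries interleaved with groups of data blocks; the class and field names here are assumptions for illustration.

```python
class MapEntry:
    """One directory map entry stored alongside a group of user data
    blocks: the group's block addresses plus a pointer to the next entry."""
    def __init__(self, addresses, next_ptr):
        self.addresses = addresses
        self.next_ptr = next_ptr  # index of the next map entry, or None

def walk_directory(entries, first_index=0):
    """Start from the first map entry and follow pointers to locate
    every remaining entry and the data blocks in each group."""
    addresses, idx = [], first_index
    while idx is not None:
        entry = entries[idx]
        addresses.extend(entry.addresses)
        idx = entry.next_ptr
    return addresses
```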
-
Publication number: 20180349267Abstract: Querying an asynchronous operation of an asynchronous function is disclosed. One or more state machine objects of the asynchronous function are identified. The identified objects are queried to determine information regarding their current state. For state machines that have not completed, the heap is examined to determine whether the identified object is rooted.Type: ApplicationFiled: June 5, 2017Publication date: December 6, 2018Applicant: Microsoft Technology Licensing, LLCInventors: Stephen H. Toub, Clayton L. Culver
-
Publication number: 20180349268Abstract: An overlaid erase block (EB) mapping scheme for a flash memory provides efficient wear-leveling and reduces mount operation latency. The overlaid EB mapping scheme maps a first type of EB onto one of a plurality of physical erase blocks, in a corresponding portion of the flash memory. The first type of EB includes a plurality of pointers. The overlaid EB mapping scheme also maps each of second and third types of EBs onto one of the physical EBs that is not mapped to the first type of EB. The second type of EBs store system management information and the third type of EBs store user data. When the flash memory is started up, the overlaid EB mapping scheme scans the corresponding portion to locate the first type of EB, locates the system EBs using the pointers, and locates the data EBs using the system management information.Type: ApplicationFiled: May 18, 2018Publication date: December 6, 2018Applicant: Cypress Semiconductor CorporationInventors: Shinsuke Okada, Sunil Atri, Hiroyuki Saito
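The mount-time lookup chain in this abstract (scan a reserved region for the first-type EB, follow its pointers to the system EBs, then use their management information to find the data EBs) can be sketched roughly as below; the dictionary-based flash model and all field names are assumptions.

```python
def mount(flash, reserved_range):
    """Locate system and data EBs starting from the first-type EB.

    flash: mapping of physical EB index -> EB contents (a dict).
    reserved_range: the portion of flash scanned for the first-type EB.
    """
    # Scan only the reserved portion to find the first-type EB.
    first_eb = next(
        flash[i] for i in reserved_range if flash[i]["type"] == "first"
    )
    # Its pointers give the physical EBs holding system management info.
    system_ebs = [flash[p] for p in first_eb["pointers"]]
    # The system EBs' management info locates the user-data EBs.
    data_ebs = [
        flash[d] for s in system_ebs for d in s["data_eb_locations"]
    ]
    return system_ebs, data_ebs
```

Confining the scan to a small reserved portion is what keeps mount latency low in the scheme described.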
-
Publication number: 20180349269Abstract: A lifecycle of an item is controlled based on a type of the item. Particular item types may be required by law, industry, or organizational policies and procedures to be retained for a defined time period. Often these required retention policies are triggered by events, and thus embodiments are directed to providing event triggered data retention. Items stored in a hosted service environment may each be associated with a label that defines an item category, a retention type, a retention period, and/or a retention trigger for the item. In response to detecting an occurrence of a retention trigger event associated with a person or a project, the items may be queried to determine each item associated with an asset identifier identifying the person or the project. The retention period and type for each item may be updated or set based on a retention policy associated with each item.Type: ApplicationFiled: June 6, 2017Publication date: December 6, 2018Applicant: MICROSOFT TECHNOLOGY LICENSING, LLCInventors: Nakul GARG, Yong Hua YANG, Tho V. NGUYEN, Dheepak RAMASWAMY
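Event-triggered retention as described above can be sketched with a minimal label model; the field names, asset identifiers, and trigger function are hypothetical.

```python
from datetime import date, timedelta

# Hypothetical items, each labeled with an asset identifier and a
# retention period that starts when a trigger event occurs.
items = [
    {"asset_id": "project-42", "retention_days": 365, "expires": None},
    {"asset_id": "person-7", "retention_days": 30, "expires": None},
]

def on_trigger_event(asset_id, event_date):
    """Query items for the triggering asset and set each matching
    item's retention expiry from the event date and its label."""
    for item in items:
        if item["asset_id"] == asset_id:
            item["expires"] = event_date + timedelta(days=item["retention_days"])

on_trigger_event("project-42", date(2018, 12, 6))
```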
-
Publication number: 20180349270Abstract: A system for an in-memory row storage architecture can be provided. In some implementations, the system performs operations including processing a database statement comprising a first database transaction affecting one or more rows in an in-memory row store, registering the database statement at a start of execution of the database statement, the start of execution occurring at a first time, unregistering the registered database statement at an end of execution of the database statement, determining a second time at which execution of an oldest currently registered database statement was started, assigning a garbage collection thread to a second database transaction committed at a third time and affecting at least one row of the plurality of rows, and activating the garbage collection thread to reclaim memory within the in-memory row store when the third time is less than the second time. Related systems, methods, and articles of manufacture are also described.Type: ApplicationFiled: February 28, 2018Publication date: December 6, 2018Inventors: Rahul Mittal, Amit Pathak, Jay Sudrik, Simhachala Sasikanth Gottapu
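The activation rule in this abstract (reclaim only when the transaction's commit time precedes the start time of the oldest still-registered statement) can be sketched as below; the function names and the use of a simple dict registry are assumptions.

```python
import threading

registered = {}  # statement id -> start time of execution
lock = threading.Lock()

def register(stmt_id, start_time):
    """Register a statement at the start of its execution."""
    with lock:
        registered[stmt_id] = start_time

def unregister(stmt_id):
    """Unregister the statement at the end of its execution."""
    with lock:
        registered.pop(stmt_id, None)

def can_reclaim(commit_time):
    """Old row versions of a committed transaction are reclaimable only
    if it committed before the oldest registered statement started."""
    with lock:
        oldest = min(registered.values(), default=float("inf"))
    return commit_time < oldest
```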
-
Publication number: 20180349271Abstract: An upper limit is set on the number of socket objects that can be generated by a virtual machine. If the upper limit is exceeded when a socket object is to be generated, garbage collection is executed. In garbage collection, a socket object is closed, and a resource that has been used by the socket is released.Type: ApplicationFiled: May 25, 2018Publication date: December 6, 2018Inventor: Kentaro Takahashi
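The limit-triggered collection in this abstract can be sketched as follows; the pool class, the injected `is_garbage` predicate, and the dummy socket interface are illustrative assumptions, not the patent's implementation.

```python
class SocketPool:
    """Enforce an upper limit on live socket objects; run a collection
    pass when creating a new socket would exceed the limit."""
    def __init__(self, limit):
        self.limit = limit
        self.open_sockets = []

    def create(self, make_socket, is_garbage):
        if len(self.open_sockets) >= self.limit:
            self._collect(is_garbage)
        if len(self.open_sockets) >= self.limit:
            raise RuntimeError("socket limit reached even after collection")
        sock = make_socket()
        self.open_sockets.append(sock)
        return sock

    def _collect(self, is_garbage):
        # Close unreachable socket objects, releasing their resources.
        survivors = []
        for sock in self.open_sockets:
            if is_garbage(sock):
                sock.close()
            else:
                survivors.append(sock)
        self.open_sockets = survivors
```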
-
Publication number: 20180349272Abstract: A storage system and a system garbage collection method are provided. The storage system includes a first controller, a second controller, and a solid state disk. The first controller or the second controller manages storage space of the solid state disk in a unit of a segment. The first controller is configured to perform system garbage collection on multiple segments of segments managed by the first controller. The second controller is configured to: when the first controller performs system garbage collection, perform system garbage collection on multiple segments of segments managed by the second controller. The multiple segments of the segments managed by the first controller and the multiple segments of the segments managed by the second controller are allocated within a same time period. Therefore, a quantity of times of write amplification in the solid state disk can be reduced.Type: ApplicationFiled: August 9, 2018Publication date: December 6, 2018Applicant: HUAWEI TECHNOLOGIES CO.,LTD.Inventors: Qiang Xue, Peijun Jiang
-
Publication number: 20180349273Abstract: A garbage collection facility is provided for memory management within a computer. The facility implements, in part, grouping of infrequently accessed data units in a designated transient memory area, and includes designating an area of the memory as a transient memory area and an area as a conventional memory area, and counting, for each data unit in the transient or conventional memory areas a number of accesses to the data unit. The counting provides a respective access count for each data unit. For each data unit in the transient memory area or the conventional memory area, a determination is made whether the respective access count is below a transient threshold ascertained to separate frequently accessed data units and infrequently used data units. Data units with respective access counts below the transient threshold are grouped together as transient data units within the transient memory area.Type: ApplicationFiled: May 31, 2017Publication date: December 6, 2018Inventors: Giles R. FRAZIER, Michael K. GSCHWIND, Christian JACOBI, Younes MANTON, Anthony SAPORITO, Chung-Lung K. SHUM
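The threshold-based grouping this abstract describes reduces to a simple partition over per-unit access counts; the function name is an assumption.

```python
def partition_by_access(counts, transient_threshold):
    """Group data units whose access count falls below the threshold
    as transient; the rest remain in the conventional memory area."""
    transient = [u for u, c in counts.items() if c < transient_threshold]
    conventional = [u for u, c in counts.items() if c >= transient_threshold]
    return transient, conventional
```

The hard part in practice is ascertaining the threshold that separates frequently and infrequently accessed units, which the facility derives from the observed counts.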
-
Publication number: 20180349274Abstract: A garbage collection facility is provided for memory management within a computer. The facility implements, in part, grouping of infrequently accessed data units in a designated transient memory area, and includes designating an area of the memory as a transient memory area and an area as a conventional memory area, and counting, for each data unit in the transient or conventional memory areas a number of accesses to the data unit. The counting provides a respective access count for each data unit. For each data unit in the transient memory area or the conventional memory area, a determination is made whether the respective access count is below a transient threshold ascertained to separate frequently accessed data units and infrequently used data units. Data units with respective access counts below the transient threshold are grouped together as transient data units within the transient memory area.Type: ApplicationFiled: November 22, 2017Publication date: December 6, 2018Inventors: Giles R. FRAZIER, Michael K. GSCHWIND, Christian JACOBI, Younes MANTON, Anthony SAPORITO, Chung-Lung K. SHUM
-
Publication number: 20180349275Abstract: The subject matter described herein relates to a file system with adaptive flushing for an electronic device. The file system keeps data in memory much longer and its policy for flushing in-memory write cache to storage is application-aware and adaptive. More specifically, what parts of the cached data are ready for flushing could be determined according to the access characteristic of an application. In addition, when to do flushing can be selected flexibly at least partly based on user input interactions with an application of the electronic device or with the electronic device. Further, a multi-priority scheduling mechanism for scheduling data units that are ready to be flushed could be employed, which ensures fairness among applications and further improves flushing performance.Type: ApplicationFiled: August 15, 2014Publication date: December 6, 2018Inventors: Jinglei Ren, Mike Chieh-Jan Liang, Thomas Moscibroda
-
Publication number: 20180349276Abstract: A system, method, and computer-readable medium are disclosed for performing a multi-level application cache operation, comprising: defining a first application level cache; defining an intermediate second application level cache; communicating with a last memory level, the last memory level including a source for a plurality of data objects; and, accessing a data object via the first application level cache when the data object is present and valid within the first application level cache.Type: ApplicationFiled: May 31, 2017Publication date: December 6, 2018Applicant: Dell Products L.P.Inventors: Zachary S. Toliver, Luis E. Bocaletti
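The lookup path in this abstract (first application level cache, then the intermediate second level, then the last memory level acting as the source of data objects) can be sketched with plain dictionaries; the class and attribute names are assumptions.

```python
class MultiLevelCache:
    """Two application-level caches in front of a last memory level."""
    def __init__(self, source):
        self.l1 = {}          # first application level cache
        self.l2 = {}          # intermediate second application level cache
        self.source = source  # last memory level (source of data objects)

    def get(self, key):
        if key in self.l1:    # present and valid in the first level
            return self.l1[key]
        if key in self.l2:    # promote from the second level
            self.l1[key] = self.l2[key]
            return self.l1[key]
        value = self.source[key]  # fall back to the last memory level
        self.l2[key] = value
        self.l1[key] = value
        return value
```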
-
Publication number: 20180349277Abstract: A method and computer processor performs a translation lookaside buffer (TLB) purge with concurrent cache updates. Each cache line contains a virtual address field and a data field. A TLB purge process performs operations for invalidating data in the primary cache memory which do not conform to the current state of the translation lookaside buffer. Whenever the TLB purge process and a cache update process perform a write operation to the primary cache memory concurrently, the write operation by the TLB purge process has no effect on the content of the primary cache memory and the cache update process overwrites a data field in a cache line of the primary cache memory but does not overwrite a virtual address field of said cache line. The translation lookaside buffer purge process is subsequently restored to an earlier state and restarted from the earlier state.Type: ApplicationFiled: June 5, 2017Publication date: December 6, 2018Inventors: Simon H. Friedmann, Markus Kaltenbach, Dietmar Schmunkamp, Johannes C. Reichart
-
Publication number: 20180349278Abstract: A method and computer processor performs a translation lookaside buffer (TLB) purge with concurrent cache updates. Each cache line contains a virtual address field and a data field. A TLB purge process performs operations for invalidating data in the primary cache memory which do not conform to the current state of the translation lookaside buffer. Whenever the TLB purge process and a cache update process perform a write operation to the primary cache memory concurrently, the write operation by the TLB purge process has no effect on the content of the primary cache memory and the cache update process overwrites a data field in a cache line of the primary cache memory but does not overwrite a virtual address field of said cache line. The translation lookaside buffer purge process is subsequently restored to an earlier state and restarted from the earlier state.Type: ApplicationFiled: October 23, 2017Publication date: December 6, 2018Inventors: Simon H. Friedmann, Markus Kaltenbach, Dietmar Schmunkamp, Johannes C. Reichart
-
Publication number: 20180349279Abstract: A system includes sensors, a first memory component, a second memory component, and an interface. The sensors are configured to generate data responsive to stimuli. Each sensor may transmit its associated data as it becomes available. The first memory component may receive and store sensor data. The second memory component may receive data from the first memory component. The interface may receive data from the second memory component. The sensor data generated during a time which the interface is receiving data from the second memory component is transmitted to the first memory component and stored thereto. No data is transmitted from the first memory component or from the sensors to the second memory component during the time which the interface is receiving data from the second memory component. Subsequently, a subset of data stored on the first memory component is advanced to the second memory component.Type: ApplicationFiled: June 2, 2017Publication date: December 6, 2018Inventors: Vinod BHAT, Amr ZAKY, Jatin GANGANI
-
Publication number: 20180349280Abstract: Techniques are disclosed relating to cache coherency and snoop filtering. In some embodiments, an apparatus includes multiple processor cores and corresponding filter circuitry that is configured to filter snoops to the processor cores. The filter circuitry may implement a Bloom filter. The filter circuitry may include a first set of counters. The filter circuitry may determine a group of counters in the first set based on applying multiple hash functions to an incoming address. For allocations, the filter circuitry may increment the counters in the corresponding group of counters; for evictions, the filter circuitry may decrement the counters in the corresponding group of counters; and for snoops, the filter circuitry may determine whether to filter the snoop based on whether any of the counters in the corresponding group are at a start value.Type: ApplicationFiled: June 2, 2017Publication date: December 6, 2018Inventors: Bipin Prasad, John Fernando, Benjamin Michelson
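The counting Bloom filter described in this abstract can be sketched as below. The counter count, number of hash functions, and the hash construction are assumptions; hardware would use simple address hashes rather than SHA-256.

```python
import hashlib

class CountingBloomSnoopFilter:
    """Per-core snoop filter: a set of counters indexed by multiple
    hashes of the cache-line address."""
    def __init__(self, n_counters=64, n_hashes=3):
        self.counters = [0] * n_counters
        self.n_hashes = n_hashes

    def _group(self, address):
        # Determine the group of counters via multiple hash functions.
        return [
            int(hashlib.sha256(f"{address}:{i}".encode()).hexdigest(), 16)
            % len(self.counters)
            for i in range(self.n_hashes)
        ]

    def allocate(self, address):
        for idx in self._group(address):
            self.counters[idx] += 1

    def evict(self, address):
        for idx in self._group(address):
            self.counters[idx] -= 1

    def filter_snoop(self, address):
        """Filter (drop) the snoop if any counter in the group is at the
        start value: the line cannot be cached in this core."""
        return any(self.counters[idx] == 0 for idx in self._group(address))
```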
-
Publication number: 20180349281Abstract: Disclosed are a method for controlling a device including at least one memory, and a smart TV. The method comprises the steps of: receiving, from a remote controller, a signal for executing at least one application; outputting video data and audio data of the executed application; temporarily storing the executed application in an internal memory; swapping, to an external memory, a page corresponding to a specific application of the at least one application stored in the internal memory; and displaying information on the application swapped to the external memory.Type: ApplicationFiled: December 11, 2014Publication date: December 6, 2018Inventors: Gunho LEE, Baeguen KANG