Patent Applications Published on February 20, 2020
-
Publication number: 20200057640
Abstract: A system, apparatus and method for ordering a sequence of processing transactions. The method includes accessing, from a memory, a program sequence of operations that are to be executed. Instructions are received, some of them having an identifier, or mnemonic, that is used to distinguish those identified operations from other operations that do not have an identifier, or mnemonic. The mnemonic indicates a distribution of the execution of the program sequence of operations. The program sequence of operations is grouped based on the mnemonic such that certain operations are separated from other operations.
Type: Application
Filed: August 16, 2018
Publication date: February 20, 2020
Applicant: Arm Limited
Inventors: Curtis Glenn DUNHAM, Pavel SHAMIS, Jamshed JALAL, Michael FILIPPO
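The grouping idea in this abstract can be sketched in a few lines: operations carrying a distribution mnemonic are split out from unmarked operations so the two sets can be dispatched separately. This is a minimal illustrative reading, not the patented system; the `(op, mnemonic)` tuple format is an assumption.

```python
# Split a program sequence into mnemonic-marked groups and unmarked operations.
# An operation with mnemonic None has no identifier and stays in program order.
def group_by_mnemonic(program):
    """program: list of (op, mnemonic-or-None) tuples."""
    marked, unmarked = {}, []
    for op, mnemonic in program:
        if mnemonic is None:
            unmarked.append(op)
        else:
            marked.setdefault(mnemonic, []).append(op)
    return marked, unmarked
```

For example, operations tagged with a hypothetical "DIST" mnemonic end up in their own group while the rest remain a single untagged stream.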
-
Publication number: 20200057641
Abstract: A processor-implemented method is provided. The processor-implemented method includes reading, by a processor, an instruction stream by fetching instructions from an instruction cache of the processor. The processor then executes a branch prediction operation based on a context of the instruction stream and an index when one of the instructions includes a branch instruction. The branch prediction operation outputs a prediction and a context. The processor then compares the context of the instruction stream and the context from the branch prediction operation to determine whether to execute a stop fetch.
Type: Application
Filed: August 16, 2018
Publication date: February 20, 2020
Inventors: Jentje Leenstra, Nicholas R. Orzol, Christian Zoellin, Michael J. Genden, Robert A. Philhower
-
Publication number: 20200057642
Abstract: A methodology for populating an instruction word for simultaneous execution of instruction operations by a plurality of ALUs in a data path is provided. The methodology includes: creating a dependency graph of instruction nodes, each instruction node including at least one instruction operation; first selecting a first available instruction node from the dependency graph; first assigning the selected first available instruction node to the instruction word; second selecting any available dependent instruction nodes that are dependent upon a result of the selected first available instruction node and do not violate any predetermined rule; second assigning to the instruction word the selected any available dependent instruction nodes; and updating the dependency graph to remove any instruction nodes assigned during the first and second assigning from further consideration for assignment.
Type: Application
Filed: August 14, 2019
Publication date: February 20, 2020
Applicant: TACHYUM LTD.
Inventor: Radoslav DANILAK
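The two-phase selection described above (pick a ready node, then pull its direct dependents into the same instruction word) can be sketched as a small greedy scheduler. This is a simplified sketch under assumptions, not the claimed method: the only "predetermined rule" modeled is a slot limit per word, and the input is assumed to be a valid DAG so at least one node is always ready.

```python
# Greedy packing of a dependency graph into instruction words.
# deps: node -> set of prerequisite nodes. Returns a list of words (lists).
def pack_instruction_words(deps, slots_per_word=4):
    done, words, remaining = set(), [], set(deps)
    while remaining:
        # First selection: an available node whose prerequisites are complete.
        first = min(n for n in remaining if deps[n] <= done)
        word = [first]
        # Second selection: direct dependents of `first` whose remaining
        # inputs are already satisfied within this word or earlier words.
        for n in sorted(remaining - {first}):
            if len(word) >= slots_per_word:
                break  # stand-in for the "predetermined rule"
            if first in deps[n] and deps[n] <= done | set(word):
                word.append(n)
        done |= set(word)        # update the graph: remove assigned nodes
        remaining -= set(word)
        words.append(word)
    return words
```

A diamond-shaped graph (a feeding b and c, which both feed d) packs into two words under this sketch: one holding a with its dependents b and c, and one holding d.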
-
Publication number: 20200057643
Abstract: An apparatus and method are provided for performing branch prediction. The apparatus has processing circuitry for executing instructions out-of-order with respect to original program order, and event counting prediction circuitry for maintaining event count values for branch instructions, for use in making branch outcome predictions for those branch instructions. Further, checkpointing storage stores state information of the apparatus at a plurality of checkpoints to enable the state information to be restored for a determined one of those checkpoints in response to a flush event. The event counting prediction circuitry has training storage with a first number of training entries, each training entry being associated with a branch instruction.
Type: Application
Filed: August 20, 2018
Publication date: February 20, 2020
Inventors: Houdhaifa BOUZGUARROU, Guillaume BOLBENES, Vincenzo CONSALES
-
Publication number: 20200057644
Abstract: An arithmetic processing apparatus includes weight tables each configured to store weighting factors in one-to-one correspondence with indexes associated with instruction addresses, a first weight arithmetic unit configured to perform a first operation and a second operation based on the weighting factors retrieved from the weight tables in response to an instruction fetch address, the first operation producing a first value for branch prediction for the instruction fetch address, the second operation producing second values for future branch prediction, and a second weight arithmetic unit configured to perform, in parallel with the second operation, a third operation equivalent to the second operation based on the weighting factors retrieved from the weight tables in response to an address of a completed branch instruction, wherein the second values stored in the first weight arithmetic unit are replaced with the third values upon detection of a wrong branch prediction.
Type: Application
Filed: August 9, 2019
Publication date: February 20, 2020
Applicant: FUJITSU LIMITED
Inventors: Takashi Suzuki, Seiji HIRAO
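The weight-table prediction described here resembles a perceptron-style branch predictor: weights indexed by the instruction address are combined with branch history to produce a signed prediction value. The sketch below shows only that basic mechanism under simplifying assumptions (one weight table, a short global history, update on misprediction only); the patent's parallel second/third operations and value replacement are omitted.

```python
# Minimal perceptron-style branch predictor: a weight vector per table entry,
# dotted with the recent outcome history (+1 taken, -1 not taken).
class PerceptronPredictor:
    def __init__(self, n_entries=64, hist_len=8):
        self.hist_len = hist_len
        self.table = [[0] * (hist_len + 1) for _ in range(n_entries)]
        self.history = [1] * hist_len            # +1 taken, -1 not taken

    def predict(self, addr):
        w = self.table[addr % len(self.table)]
        y = w[0] + sum(wi * h for wi, h in zip(w[1:], self.history))
        return y >= 0                            # True -> predict taken

    def update(self, addr, taken):
        w = self.table[addr % len(self.table)]
        t = 1 if taken else -1
        if self.predict(addr) != taken:          # train only on a wrong prediction
            w[0] += t
            for i, h in enumerate(self.history):
                w[i + 1] += t * h
        self.history = self.history[1:] + [t]    # shift in the real outcome
```

After a handful of consistently not-taken outcomes at one address, the signed sum goes negative and the predictor settles on "not taken" for that entry.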
-
Publication number: 20200057645
Abstract: A methodology for preparing a series of instruction operations for execution by a plurality of arithmetic logic units (ALUs) is provided. The methodology includes first assigning a first instruction operation to the first ALU; first determining, for a second instruction operation having an input that depends directly on an output of a first instruction operation, whether all inputs for the second instruction operation are available within a locally predefined range from the first ALU; second assigning, in response to at least a positive result of the first determining, the second instruction operation to the second ALU; in response to a negative result of the first determining: ensuring a pause of at least one clock cycle will occur between execution of the first instruction operation and the second instruction operation; and third assigning the second instruction operation to an ALU of the plurality of ALUs.
Type: Application
Filed: August 14, 2019
Publication date: February 20, 2020
Applicant: TACHYUM LTD.
Inventor: Radoslav DANILAK
-
Publication number: 20200057646
Abstract: A methodology for populating multiple instruction words is provided. The methodology includes: creating a dependency graph of instruction nodes, each instruction node including at least one instruction operation; first assigning a first instruction node to a first instruction word; identifying a dependent instruction node that is directly dependent upon a result of the first instruction node; first determining whether the dependent instruction node requires any input from two or more sources that are outside of a predefined physical range of each other, the range being smaller than the full extent of the data path; and second assigning, in response to satisfaction of at least one predetermined criteria including a negative result of the first determining, the dependent instruction node to the first instruction word.
Type: Application
Filed: August 14, 2019
Publication date: February 20, 2020
Inventor: Radoslav DANILAK
-
Publication number: 20200057647
Abstract: A convolution operation method and a processing device for performing the same are provided. The method is performed by a processing device. The processing device includes a main processing circuit and a plurality of basic processing circuits. The basic processing circuits are configured to perform convolution operation in parallel. The technical solutions disclosed by the present disclosure can provide short operation time and low energy consumption.
Type: Application
Filed: October 24, 2019
Publication date: February 20, 2020
Applicant: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Shaoli Liu, Tianshi Chen, Bingrui Wang, Yao Zhang
-
Publication number: 20200057648
Abstract: A convolution operation method and a processing device for performing the same are provided. The method is performed by a processing device. The processing device includes a main processing circuit and a plurality of basic processing circuits. The basic processing circuits are configured to perform convolution operation in parallel. The technical solutions disclosed by the present disclosure can provide short operation time and low energy consumption.
Type: Application
Filed: October 24, 2019
Publication date: February 20, 2020
Applicant: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Shaoli Liu, Tianshi Chen, Bingrui Wang, Yao Zhang
-
Publication number: 20200057649
Abstract: A pooling operation method and a processing device for performing the same are provided. The pooling operation method may rearrange a dimension order of the input data before pooling is performed. The technical solutions provided by the present disclosure have the advantages of short operation time and low energy consumption.
Type: Application
Filed: October 24, 2019
Publication date: February 20, 2020
Applicant: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Shaoli Liu, Tianshi Chen, Bingrui Wang, Yao Zhang
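The dimension-rearranging step can be illustrated with a plain-Python sketch: channel-last input is transposed to channel-first before max pooling, so each channel's values are contiguous when the pooling window sweeps them. The layouts and the choice of max pooling are illustrative assumptions; the actual device performs this in hardware.

```python
# Transpose [H][W][C] data to [C][H][W], then 2x2 (by default) max pooling
# per channel with a non-overlapping stride equal to the window size.
def reorder_and_maxpool(data_hwc, window=2):
    H, W, C = len(data_hwc), len(data_hwc[0]), len(data_hwc[0][0])
    chw = [[[data_hwc[h][w][c] for w in range(W)] for h in range(H)]
           for c in range(C)]                      # dimension reorder
    pooled = []
    for ch in chw:
        pooled.append([
            [max(ch[i + di][j + dj]
                 for di in range(window) for dj in range(window))
             for j in range(0, W - window + 1, window)]
            for i in range(0, H - window + 1, window)
        ])
    return pooled
```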
-
Publication number: 20200057650
Abstract: A fully connected operation method and a processing device for performing the same are provided. The fully connected operation method designates distribution data and broadcast data. The distribution data is divided into basic data blocks and distributed to parallel processing units, and the broadcast data is broadcasted to the parallel processing units. Operations between the basic data blocks and the broadcasted data are carried out by the parallel processing units before the results are returned to a main unit for further processing. The technical solutions disclosed by the present disclosure provide short operation time and low energy consumption.
Type: Application
Filed: October 24, 2019
Publication date: February 20, 2020
Applicant: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Shaoli Liu, Tianshi Chen, Bingrui Wang, Yao Zhang
-
Publication number: 20200057651
Abstract: A matrix-multiplying-matrix operation method and a processing device for performing the same are provided. The matrix-multiplying-matrix method includes distributing, by a main processing circuit, basic data blocks of one matrix and broadcasting the other matrix to a plurality of the basic processing circuits. That way, the basic processing circuits can perform inner-product operations between the basic data blocks and the broadcasted matrix in parallel. The results are then provided back to the main processing circuit for combining. The technical solutions proposed by the present disclosure provide short operation time and low energy consumption.
Type: Application
Filed: October 24, 2019
Publication date: February 20, 2020
Applicant: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Shaoli Liu, Tianshi Chen, Bingrui Wang, Yao Zhang
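The distribute-and-broadcast pattern in this abstract can be sketched in plain Python: matrix A is split into row blocks handed to "basic units", matrix B is broadcast to all of them, each unit computes its partial product, and the "main unit" concatenates the results. This is a sequential software analogy under assumed data shapes, not the hardware design; a real device would run the units in parallel.

```python
# Block-distributed matrix multiply: distribute row blocks of A, broadcast B,
# compute per-block inner products, then combine the partial results.
def blocked_matmul(A, B, n_units=2):
    rows = len(A)
    step = (rows + n_units - 1) // n_units
    blocks = [A[i:i + step] for i in range(0, rows, step)]  # distribute A
    partials = []
    for blk in blocks:                                      # each "basic unit"
        partials.append([
            [sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in blk
        ])
    return [row for part in partials for row in part]       # main unit combines
```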
-
Publication number: 20200057652
Abstract: A matrix-multiplying-vector operation method and a processing device for performing the same are provided. The matrix-multiplying-vector method includes distributing, by a main processing circuit, basic data blocks of the matrix and broadcasting the vector to a plurality of the basic processing circuits. That way, the basic processing circuits can perform inner-product operations between the basic data blocks and the broadcasted vector in parallel. The results are then provided back to the main processing circuit for combining. The technical solutions proposed by the present disclosure provide short operation time and low energy consumption.
Type: Application
Filed: October 24, 2019
Publication date: February 20, 2020
Applicant: CAMBRICON TECHNOLOGIES CORPORATION LIMITED
Inventors: Shaoli Liu, Tianshi Chen, Bingrui Wang, Yao Zhang
-
Publication number: 20200057653
Abstract: A data processing system comprising: a host; and a memory system comprising a nonvolatile memory device and a controller suitable for controlling the nonvolatile memory device, wherein the controller comprises: a first reset circuitry suitable for loading firmware from the nonvolatile memory device to a volatile memory, and setting a reset default status; a second reset circuitry suitable for determining whether a reason for a reset request coincides with the reset default status, when the reset request is received from the host, and resetting the memory system; and a firmware load determination circuitry suitable for determining whether to reload the firmware by checking the reset default status.
Type: Application
Filed: April 4, 2019
Publication date: February 20, 2020
Inventor: Joo-Young LEE
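The reload decision the abstract describes can be sketched as a small state machine: the reason carried by a host reset request is compared with the stored reset-default status, and firmware is reloaded only when they differ. The status names and the class shape are illustrative assumptions, not the claimed circuitry.

```python
# Toy model of the firmware-reload decision: reload only when the reset
# reason does not coincide with the stored reset default status.
class ResetController:
    def __init__(self):
        self.default_status = None
        self.firmware_loaded = False

    def load_firmware(self):
        self.firmware_loaded = True      # stands in for NAND -> RAM copy

    def first_reset(self):
        self.load_firmware()
        self.default_status = "power_on" # assumed status name

    def handle_reset_request(self, reason):
        reload_needed = (reason != self.default_status)
        if reload_needed:
            self.load_firmware()
        return reload_needed
```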
-
Publication number: 20200057654
Abstract: A method for mirror image package preparation and application operation includes: acquiring a launch operation package and launch mirror image package; upon launching of the target application, establishing a first channel between a local buffer manager and the launch mirror image package, and a second channel between the local buffer manager and a server; creating a first virtual file system locally, and establishing a third channel between the local buffer manager and the first virtual file system; if a received file access request is a read request, acquiring first data from the launch mirror image package, and/or from an original mirror image package on the server; feeding the first data back to the target application, wherein storage directory structures of data sets in the launch mirror image package and original mirror image package individually correspond to logic directory relations of file sets in an original data package.
Type: Application
Filed: March 30, 2017
Publication date: February 20, 2020
Inventors: Zheng YANG, Cong LU
-
Publication number: 20200057655
Abstract: Concurrent maintenance of an input/output (I/O) adapter backing a virtual network interface connection (VNIC) including receiving, by a hardware management console (HMC), a request to disconnect the I/O adapter from a computing system, wherein the computing system comprises a logical partition and virtual I/O server; instructing, by the HMC over a communications link, the virtual I/O server to deconfigure and remove the server VNIC driver; determining, by the HMC, that a replacement I/O adapter is installed on the computing system; and in response to determining that the replacement I/O adapter is installed on the computing system, instructing, by the HMC over the communications link, the virtual I/O server to add and configure a replacement server VNIC driver.
Type: Application
Filed: October 25, 2019
Publication date: February 20, 2020
Inventors: CURTIS S. EIDE, DWAYNE G. MCCONNELL, XIAOHAN QIN
-
Publication number: 20200057656
Abstract: A method for preparing fast boot of an information handling apparatus. The information handling apparatus contains a first CPU configured to connect to a storage device storing firmware and a second CPU connected to the first CPU. The method contains the steps of: allocating a firmware region in memories associated with each one of the first and second CPUs respectively; and copying a firmware from a storage device to the firmware region of each one of the memories. By utilizing a system memory such as NVDIMM, which provides higher access speed than NAND flash as well as persistent data storage, one or more CPUs can be booted from firmware images in the NVDIMM much faster, thus reducing the total booting time.
Type: Application
Filed: August 19, 2019
Publication date: February 20, 2020
Inventors: ZHIJUN LIU, CHEKIM CHHUOR, WEN WEI TANG
-
Publication number: 20200057657
Abstract: One embodiment provides a method, including: identifying, using a processor of an information handling device, a presence of at least one other device; requesting, from the at least one other device, configuration information; receiving, at the information handling device, the configuration information; and configuring, responsive to the receiving, one or more settings on the information handling device. Other aspects are described and claimed.
Type: Application
Filed: August 20, 2018
Publication date: February 20, 2020
Inventors: Russell Speight VanBlon, Aaron Michael Stewart, Joshua Neil Novak
-
Publication number: 20200057658
Abstract: Loading resources is disclosed including sending, using a first thread, a resource loading request to a second thread, the resource loading request including a request for a resource, the first thread and the second thread being located in one process, and the first thread running on a dynamic language runtime platform; receiving, using the first thread, an instruction sent back by the second thread in response to the resource loading request; and based on the instruction and the resource preloaded by the process, loading, using the first thread, the resource included in the resource loading request, wherein the resource preloaded by the process comprises a web engine.
Type: Application
Filed: August 30, 2019
Publication date: February 20, 2020
Inventors: Hongbo Min, Yongsheng Zhu, Zhenhua Lu, Zhiping Lin, Yanming Cai, Xu Zeng
-
Publication number: 20200057659
Abstract: Embodiments described herein provide for systems and methods to enable an operating environment that supports multi-OS applications. One embodiment provides for a non-transitory machine-readable medium storing instructions that cause a data processing system to perform operations to detect conflicts during a build process for a dynamic library. The operations include loading program code for the dynamic library to build for a first platform, parsing the set of interfaces and data structures exported by the dynamic library to verify consistency of a build contract for the dynamic library, and generating a build error during a build process for the dynamic library upon detecting an inconsistent build contract specifying at least an application binary interface (ABI) and an API for the dynamic library.
Type: Application
Filed: October 25, 2019
Publication date: February 20, 2020
Inventors: Jeremiah R. Sequoia, Juergen Ributzka, Shengzhao Wu
-
Publication number: 20200057660
Abstract: Rendering user interfaces is disclosed including acquiring, using a first thread, a to-be-handled user interface render event, the first thread being a thread on a dynamic language application runtime platform, and the dynamic language application runtime platform being preloaded with a render engine, and calling, using the first thread, a corresponding user interface rendering function provided by the render engine based on an interface that corresponds to the event and that is used to call the render engine.
Type: Application
Filed: September 3, 2019
Publication date: February 20, 2020
Inventors: Zheng Liu, Xu Zeng, Yongcai Ma, Lidi Jiang, Kerong Shen, Decai Jin, Chong Zhang, Qinghe Xu
-
Publication number: 20200057661
Abstract: Systems and methods for customizing an output based on user data are described herein. An example method for customizing an output based on user data may commence with continuously capturing, by at least one sensor, the user data. The method may continue with analyzing, by at least one computing resource, the user data received from the at least one sensor and determining dependencies between the user data and output data. The method may further include determining, based on predetermined criteria, that an amount of the user data and the dependencies is sufficient to customize the output data. The method may continue with continuously customizing, by an adaptive interface, the output data using at least one machine learning technique based on the analysis of the user data. The customized output data may be intended to elicit a personalized change.
Type: Application
Filed: September 23, 2019
Publication date: February 20, 2020
Inventor: Hannes Bendfeldt
-
Publication number: 20200057662
Abstract: The specification provides example service processing methods and devices. One example method includes detecting a device type of an electronic device. An instruction processing rule corresponding to the device type is obtained. The instruction processing rule includes an instruction set conversion rule defining a process for converting display modification instructions generated by the electronic device into unified display modification instructions. A first display modification instruction initiated in response to a user interacting with the electronic device is obtained based on the instruction processing rule. The first display modification instruction is converted into a corresponding first unified display modification instruction according to the instruction processing rule. A portion of interaction data output to a display is updated by invoking a service processing mode corresponding to the first unified display modification instruction.
Type: Application
Filed: October 25, 2019
Publication date: February 20, 2020
Applicant: Alibaba Group Holding Limited
Inventor: Yuguo Zhou
-
Publication number: 20200057663
Abstract: Systems and methods are provided for categorizing individual virtual machines, as well as the associated application that they form by working in concert, into groups based on the feasibility of hosting the processes that occur on a virtual machine within a container, as well as the relative difficulty of doing so on a virtual machine and application level. The data used to create these scores is collected from the individual machines at regular intervals through the use of an automated scoring engine that collects and aggregates the data. Said data is then analyzed by the system, which, with the aid of passed-in configuration data, is configured to generate the scores, allowing for an educated and focused effort to migrate from hosting applications on virtual machines to hosting applications on containers.
Type: Application
Filed: August 15, 2019
Publication date: February 20, 2020
Inventors: Jacob ABBOTT, James BECK, Jacquelyn DU, Charles WALKER
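The scoring-and-grouping idea can be sketched as a small aggregation over collected metrics. Everything concrete below is an assumption for illustration: the metric names, the weights, the bucket thresholds, and the use of passed-in configuration to tune one weight; the patent does not disclose these specifics.

```python
# Score each VM's container-migration feasibility from collected metrics,
# then bucket the VMs into groups by score.
def score_vm(metrics, config):
    score = 0
    if metrics["stateless"]:
        score += config.get("stateless_weight", 40)   # tunable via config
    if metrics["processes"] <= config.get("max_processes", 5):
        score += 30
    if not metrics["uses_kernel_modules"]:
        score += 30
    return score

def group_vms(vms, config):
    groups = {"easy": [], "moderate": [], "hard": []}
    for name, metrics in vms.items():
        s = score_vm(metrics, config)
        bucket = "easy" if s >= 70 else "moderate" if s >= 40 else "hard"
        groups[bucket].append(name)
    return groups
```

A stateless web VM with few processes lands in the "easy" group, while a stateful VM relying on kernel modules lands in "hard", focusing migration effort where it pays off first.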
-
Publication number: 20200057664
Abstract: A host Virtual Machine Monitor (VMM) operates “blindly,” without the host VMM having the ability to access data within a guest virtual machine (VM) or the ability to access directly control structures that control execution flow of the guest VM. Guest VMs execute within a protected region of memory (called a key domain) that even the host VMM cannot access. Virtualization data structures that pertain to the execution state (e.g., a Virtual Machine Control Structure (VMCS)) and memory mappings (e.g., Extended Page Tables (EPTs)) of the guest VM are also located in the protected memory region and are also encrypted with the key domain key. The host VMM and other guest VMs, which do not possess the key domain key for other key domains, cannot directly modify these control structures nor access the protected memory region. The host VMM, however, using VMPageIn and VMPageOut instructions, can build virtual machines in key domains and page VM pages in and out of key domains.
Type: Application
Filed: March 30, 2019
Publication date: February 20, 2020
Applicant: Intel Corporation
Inventors: David Durham, Siddhartha Chhabra, Geoffrey Strongin, Ronald Perez
-
Publication number: 20200057665
Abstract: The current document is directed to automated methods and systems that employ unsupervised-machine-learning approaches as well as rule-based systems to discover distributed applications within distributed-computing environments. These automated methods and systems provide a basis for higher-level distributed-application administration and management tools and subsystems that provide distributed-application-level user interfaces and operations. In one implementation, the currently disclosed methods and systems employ agents within virtual machines that execute routines and programs and that together comprise a distributed application to continuously furnish information about the virtual machines to a pipeline of stream processors that collect and filter the information to provide for periodic application-discovery.
Type: Application
Filed: August 15, 2018
Publication date: February 20, 2020
Applicant: VMware, Inc.
Inventor: Nicholas Kushmerick
-
Publication number: 20200057666
Abstract: Concepts and technologies directed to agentless personal firewall security in virtualized datacenters are disclosed herein. Embodiments can include a computer system that can host a hypervisor via a memory and a processor. Upon execution, the processor can cause the computer system to perform operations. The operations can include receiving an inbound communication request to a virtual machine associated with the hypervisor. The operations also can include identifying a virtual port associated with the virtual machine based on the inbound communication request. The operations can include determining that the inbound communication request lacks an identity of a virtual application process that executes on the virtual machine. The operations also can include building a virtual machine memory map. The operations also can include forcing exposure of the virtual application process based on the virtual machine memory map.
Type: Application
Filed: August 20, 2018
Publication date: February 20, 2020
Applicant: Interwise Ltd.
Inventors: Sofia Belikovetsky, Ofer HaCohen
-
Publication number: 20200057667
Abstract: A system and method can include requesting, by a network agent in a virtual machine in a hypervisor-attached infrastructure, a first identifier of a first resource device. The method can include comparing the first identifier to a plurality of known identifiers. The method can include determining a first location of the first resource device in response to matching the first identifier to one of the plurality of known identifiers. The method can include requesting a second identifier of a second resource device. The method can include determining a second location of the second resource device in response to the second identifier being different from each of the plurality of known identifiers. The second location can be different than the first location.
Type: Application
Filed: August 20, 2018
Publication date: February 20, 2020
Applicant: Nutanix, Inc.
Inventors: Partha Ramachandran, Ritesh Rekhi, Srini Ramasubramanian, Gregory A. Smith
-
Publication number: 20200057668
Abstract: A method of identifying historical snapshots for a virtual machine (VM) is provided. Some example operations include receiving a request for a historical snapshot of a VM, the request indicating an ID for the VM. A detection is made that the ID for the VM in the request received is a new ID assigned to a VM. A determination is made whether the new ID corresponds to a newly created VM or an existing VM that has been previously registered using a previous ID, wherein the determining includes accessing a property of the VM including a use case identifier associated with an instant recovery request for a specific VM. Based on identifying that the new ID corresponds to a newly created VM, a new VM Group (VMG) object is created for the newly created VM corresponding to the new ID.
Type: Application
Filed: June 26, 2019
Publication date: February 20, 2020
Inventors: ABDULLAH AL REZA, FABIANO BOTELHO, MUDIT MALPANI, PRATEEK PANDEY
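The ID-resolution logic described can be sketched as a registry lookup: a known ID maps straight to its VM group; a new ID whose recovery-use-case property points at a registered VM inherits that VM's group; otherwise a new group is created. The data shapes and the `recovered_from` property name are illustrative assumptions, not the patent's schema.

```python
# Resolve a VM ID to its snapshot group, creating a new group only when the
# ID belongs to a genuinely new VM rather than a re-registered existing one.
def resolve_vm_group(vm_id, registry, vm_properties):
    if vm_id in registry:                       # known ID: direct hit
        return registry[vm_id]
    recovered_from = vm_properties.get(vm_id, {}).get("recovered_from")
    if recovered_from in registry:              # existing VM under a new ID
        registry[vm_id] = registry[recovered_from]
    else:                                       # newly created VM
        registry[vm_id] = {"group": f"vmg-{vm_id}", "snapshots": []}
    return registry[vm_id]
```

A recovered VM thus keeps access to its historical snapshots through the original group, while an unrelated new VM starts with an empty one.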
-
Publication number: 20200057669
Abstract: The Prioritization and Source-Nonspecific Based Virtual Machine Recovery Apparatuses, Methods and Systems (“MBR”) transforms backup configuration request, restore request inputs via MBR components into backup configuration response, restore response outputs. A restore request is obtained. A reestablishing virtual machine is booted. A recovery virtual machine configuration identifying source-nonspecific software is determined. A recovery prioritization index for data blocks of the associated backup disk image is determined. Essential data blocks of the backup disk image are prefetched to build a pseudo abridged virtual machine. User access to the reestablishing virtual machine is provided. A latent virtual machine is created inside the reestablishing virtual machine. Command data blocks are fetched for both the reestablishing virtual machine and the latent virtual machine when a user command is received. Remaining data blocks are fetched for the latent virtual machine in priority order.
Type: Application
Filed: October 23, 2019
Publication date: February 20, 2020
Inventors: Campbell Hutcheson, William Robert Speirs, Robert J. Gibbons, JR.
-
Publication number: 20200057670
Abstract: Embodiments include systems, methods, and computer program products to perform an operation for managing different virtual machine images as a single virtual machine image. The operation generally includes generating a representation of a virtual machine (VM) image, and generating a first VM instance from the VM image. The representation of the VM image includes a set of artifacts associated with the VM image. The operation also includes receiving an indication of an available software update. Upon determining that the software update is applicable to the representation of the VM image, the operation further includes applying the software update to the first VM instance image.
Type: Application
Filed: October 24, 2019
Publication date: February 20, 2020
Inventors: Gianluca BERNARDINI, Antonio DI COCCO, Claudio MARINELLI, Luigi PICHETTI
-
Publication number: 20200057671
Abstract: A computer implemented method manages access to resources of a philanthropy cloud platform. The method includes retrieving, at a computing device of the philanthropy cloud platform, context data and load policies for a requestor and an identified resource; combining, by the computing device, loaded policies with context data into a combined data structure; evaluating, by the computing device, a resource request and applying policies for the requestor based on the role of the requestor using the combined data structure; generating, by the computing device, resource permissions for the requestor; and returning, by the computing device, resource permissions to the requestor.
Type: Application
Filed: December 20, 2018
Publication date: February 20, 2020
Inventors: Nicholas Bailey, Jon Stahl, David Manelski, Michael McCormick, Nicholaus Lacock
-
Publication number: 20200057672
Abstract: Data can be processed in parallel across a cluster of nodes using a parallel processing framework. Using Web services calls between components allows the number of nodes to be scaled as necessary, and allows developers to build applications on the framework using a Web services interface. A job scheduler works together with a queuing service to distribute jobs to nodes as the nodes have capacity, such that jobs can be performed in parallel as quickly as the nodes are able to process the jobs. Data can be loaded efficiently across the cluster, and levels of nodes can be determined dynamically to process queries and other requests on the system.
Type: Application
Filed: August 29, 2019
Publication date: February 20, 2020
Applicant: Amazon Technologies, Inc.
Inventors: Govindaswamy Bacthavachalu, Peter Grant Gavares, Ahmed A. Badran, James E. Scharf, JR.
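The scheduler-plus-queue pattern can be illustrated with a toy in-process version: a scheduler pushes jobs onto a shared queue and worker threads pull them as they have capacity. The in-process queue and the doubling "work" stand in for the Web-services calls and real jobs; this is an analogy, not the framework itself.

```python
# Toy job distribution: jobs go onto a shared queue, workers drain it
# concurrently, and results are collected under a lock.
import queue
import threading

def run_cluster(jobs, n_workers=3):
    q, results, lock = queue.Queue(), [], threading.Lock()
    for job in jobs:
        q.put(job)

    def worker():
        while True:
            try:
                job = q.get_nowait()   # take work only when we have capacity
            except queue.Empty:
                return                 # no jobs left: worker exits
            out = job * 2              # stand-in for real processing
            with lock:
                results.append(out)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sorted(results)
```

Because workers pull rather than being pushed to, a slow node simply takes fewer jobs, which is the load-balancing property the abstract highlights.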
-
Publication number: 20200057673
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for identifying a set of resources in response to crawling multiple webpages that use at least one resource in the set. For each resource in the set, a system determines an age of the resource using a timestamp for the resource. The system determines a pre-fetch measure of the resource based on the age of the resource and usage information that describes use of the resource at a webpage. The system selects a first resource from the set based on the pre-fetch measure and determines whether a respective age of the selected first resource exceeds a threshold age. The system generates an index entry for a pre-fetch index. The index entry includes a command to pre-fetch the first resource based on a determination that the respective age of the first resource exceeds the threshold age.
Type: Application
Filed: August 20, 2019
Publication date: February 20, 2020
Inventor: Dani Suleman
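The select-then-gate flow can be sketched as a two-step pipeline: rank resources by a pre-fetch measure combining usage and age, then emit index entries only for selected resources whose age exceeds the threshold (an old, stable resource is worth pre-fetching). The field names, the ranking formula, and the entry format are illustrative assumptions; the patent does not disclose a specific formula.

```python
# Rank resources by an assumed usage/age measure, keep the top candidates,
# and emit a pre-fetch index entry only when the age gate is passed.
def build_prefetch_index(resources, threshold_age, top_k=2):
    scored = []
    for r in resources:
        measure = r["uses"] / (1 + r["age_days"])   # assumed formula
        scored.append((measure, r))
    scored.sort(key=lambda t: -t[0])                # best measure first
    return [
        {"url": r["url"], "command": "prefetch"}
        for _, r in scored[:top_k]
        if r["age_days"] > threshold_age            # age-gate the entry
    ]
```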
-
Publication number: 20200057674
Abstract: A service provider may provide a companion container instance associated with a mobile device in order to facilitate operation of the mobile device. The companion container instance and the mobile device may be associated in a database operated by the service provider. Furthermore, the companion container instance may execute various operations on behalf of the mobile device based at least in part on a task definition indicating a software function to be executed by the companion container instance. The software function is configured to execute the various operations on behalf of the mobile device.
Type: Application
Filed: October 23, 2019
Publication date: February 20, 2020
Inventors: Marco Argenti, Khawaja Salman Shams
-
Publication number: 20200057675Abstract: A computer implemented method, computer program product, and system for managing execution of a workflow comprising a set of subworkflows, comprising optimizing the set of subworkflows using a deep neural network, wherein each subworkflow of the set of subworkflows has a set of tasks, wherein each task of the sets of tasks has a requirement of resources of a set of resources; wherein each task of the sets of tasks is enabled to be dependent on another task of the sets of tasks, training the deep neural network by: executing the set of subworkflows, collecting provenance data from the execution, and collecting monitoring data that represents the state of said set of resources, wherein the training causes the neural network to learn relationships between the states of said set of resources, the said sets of tasks, their parameters and the obtained performance, optimizing an allocation of resources of the set of resources to each task of the sets of tasks to ensure compliance with a user-defined quality metric bType: ApplicationFiled: August 16, 2018Publication date: February 20, 2020Inventors: Jonas F. Dias, Angelo Ciarlini, Romulo D. Pinho, Vinicius Gottin, Andre Maximo, Edward Pacheco, David Holmes, Keshava Rangarajan, Scott David Senften, Joseph Blake Winston, Xi Wang, Clifton Brent Walker, Ashwani Dev, Chandra Yeleshwarapu, Nagaraj Srinivasan
-
Publication number: 20200057676Abstract: The disclosure provides an approach for distribution of functions among data centers of a cloud system that provides function-as-a-service (FaaS). For example, the disclosure provides one or more function distributors configured to receive a request for loading or executing a function, automatically determine an appropriate data center to load or execute the function, and automatically load or execute the function on the determined data center. In certain embodiments, the function distributors are further configured to determine an appropriate data center to provide storage resources for the function and configure the function to utilize the storage resources of the determined data center.Type: ApplicationFiled: December 10, 2018Publication date: February 20, 2020Inventor: AMOL MANOHAR VAIKAR
-
Publication number: 20200057677Abstract: Techniques are described for providing security-aware partitioning of processes. An example method includes identifying an integration scenario for optimization in a cloud-based system based on optimization constraints. The identified integration scenario is translated into a directed graph comprising connections between particular flows within the integration scenario. Each flow in the identified scenario is automatically analyzed to determine whether the flow is shareable across processing units associated with a different tenant, and each flow can be annotated in the directed graph with results of the analysis. At least one optimization to the integration scenario is determined based on the annotated directed graph and a set of optimization constraints. An assignment of flows to particular processing units is generated based on the determined at least one optimization.Type: ApplicationFiled: August 20, 2018Publication date: February 20, 2020Inventors: Daniel Ritter, Phillipp Stefan Womser
-
Publication number: 20200057678Abstract: A set of test cases is obtained to evaluate the resource configuration of a computing environment. One or more test cases of the set of test cases are randomly selected and sent to one or more logical partitions of the computing environment. Execution of the one or more test cases on the one or more logical partitions is monitored. Based on the monitoring, it is verified whether processing associated with the one or more logical partitions is being performed at an acceptable level. Based on the verifying indicating that the processing is not at an acceptable level, reconfiguring of resources of at least one logical partition of the one or more logical partitions is initiated.Type: ApplicationFiled: August 17, 2018Publication date: February 20, 2020Inventors: Ali Y. Duale, Paul Wojciak
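The select-run-verify loop above can be sketched briefly. The acceptance threshold and data shapes are invented for illustration; real monitoring would measure actual partition performance.

```python
import random

ACCEPTABLE_SCORE = 0.8   # illustrative "acceptable level"

def select_tests(test_cases, k, seed=None):
    # Randomly select k test cases from the obtained set.
    rng = random.Random(seed)
    return rng.sample(test_cases, k)

def partitions_needing_reconfig(scores):
    # scores: {partition_id: observed performance in [0, 1]}.
    # Any partition below the acceptable level is flagged for
    # resource reconfiguration.
    return [p for p, s in scores.items() if s < ACCEPTABLE_SCORE]

picked = select_tests(["t1", "t2", "t3", "t4"], k=2, seed=7)
needs_reconfig = partitions_needing_reconfig({"lpar1": 0.95, "lpar2": 0.6})
```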
-
Publication number: 20200057679Abstract: In a switch fabric-based infrastructure a flexible scalable server is obtained by physical disaggregation of converged resources to obtain pools of a plurality of operationally independent resource element types such as storage, computing, networking and more. A plurality of computing facilities can be created either dynamically or statically by resource element managers by composing instances of resources from such pools of a plurality of resource element types expressed across a single disaggregated logical resource plane.Type: ApplicationFiled: January 27, 2017Publication date: February 20, 2020Applicant: Kaleao LimitedInventors: John GOODACRE, Giampietro TECCHIOLLI
-
Publication number: 20200057680Abstract: Systems and methods are described for adjusting a number of concurrent code executions allowed to be performed for a given user on an on-demand code execution environment or other distributed code execution environments. Such environments utilize pre-initialized virtual machine instances to enable execution of user-specified code in a rapid manner, without delays typically caused by initialization of the virtual machine instances. However, to improve utilization of computing resources, such environments may temporarily restrict the number of concurrent code executions performed on behalf of the given user to a number less than the maximum number of concurrent code executions allowed for the given user. Such environments may adjust the temporary restriction on the number of concurrent code executions based on the number of incoming code execution requests associated with the given user.Type: ApplicationFiled: August 27, 2019Publication date: February 20, 2020Inventors: Dylan Owen Marriner, Mauricio Roman, Marc John Brooker, Julian Embry Herwitz, Sean Reque
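The temporary-restriction idea above can be sketched as a small throttle object. The class, policy, and numbers are hypothetical, a minimal illustration rather than the service's implementation.

```python
import threading

class ConcurrencyThrottle:
    """Caps concurrent executions at a temporary limit below the hard maximum;
    the limit can be raised as incoming request volume grows."""

    def __init__(self, hard_limit, initial_limit):
        self.hard_limit = hard_limit
        self.limit = initial_limit
        self.in_flight = 0
        self.lock = threading.Lock()

    def try_acquire(self):
        with self.lock:
            if self.in_flight < self.limit:
                self.in_flight += 1
                return True
            return False          # over the temporary limit: wait or reject

    def release(self):
        with self.lock:
            self.in_flight -= 1

    def scale_up(self, step=10):
        # Adjust the temporary restriction, never past the hard maximum.
        with self.lock:
            self.limit = min(self.hard_limit, self.limit + step)

throttle = ConcurrencyThrottle(hard_limit=100, initial_limit=2)
granted = [throttle.try_acquire() for _ in range(3)]   # third exceeds limit
```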
-
Publication number: 20200057681Abstract: Exploiting FPGAs for acceleration may be performed by transforming concurrent programs. One example mode of operation may provide one or more of creating synchronous hardware accelerators from concurrent asynchronous programs at software level, by obtaining input as software instructions describing concurrent behavior via a model of communicating sequential processes (CSP) of message exchange between concurrent processes performed via channels, mapping, on a computing device, each of the concurrent processes to synchronous dataflow primitives, comprising at least one of join, fork, merge, steer, variable, and arbiter, producing a clocked digital logic description for upload to one or more field programmable gate array (FPGA) devices, performing primitive remapping of the output design for throughput, clock rate and resource usage via retiming, and creating an annotated graph of the input software description for debugging of concurrent code for the field FPGA devices.Type: ApplicationFiled: November 1, 2017Publication date: February 20, 2020Applicant: RECONFIGURE.IO LIMITEDInventors: Mahdi Jelodari Mamaghani, Robert James Taylor
-
Publication number: 20200057682Abstract: A barrier-free atomic transfer method of multiword data is described. In the barrier-free method, a producer processor deconstructs an original parameter set of data into a deconstructed parameter set; and performs a series of single-copy-atomic writes to a series of single-copy-atomic locations. Each single-copy-atomic location in the series of single-copy-atomic locations comprises a portion of the deconstructed parameter set and a sequence number. A consumer processor reads the series of single-copy-atomic locations; verifies that the sequence number for each single-copy-atomic location in the series of single-copy-atomic locations is consistent (e.g., that all are the same sequence number); and reconstructs the portions of the deconstructed parameter set into the original parameter set.Type: ApplicationFiled: May 15, 2019Publication date: February 20, 2020Inventor: Alasdair GRANT
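The sequence-number scheme above resembles a seqlock-style protocol and can be illustrated in a single-process sketch. Each slot here stands in for one single-copy-atomic location; the names and word count are illustrative.

```python
NUM_WORDS = 4
slots = [(0, 0)] * NUM_WORDS   # each entry: (sequence_number, word)
seq = 0

def produce(words):
    # Producer: bump the sequence number, then write each word tagged
    # with that number (each tuple write stands in for one
    # single-copy-atomic write).
    global seq
    seq += 1
    for i, w in enumerate(words):
        slots[i] = (seq, w)

def consume():
    # Consumer: snapshot all slots, accept only if every slot carries
    # the same sequence number; otherwise the read was torn.
    snapshot = list(slots)
    seqs = {s for s, _ in snapshot}
    if len(seqs) != 1:
        return None
    return [w for _, w in snapshot]

produce([10, 20, 30, 40])
```

In the real method the consumer would retry on a mismatch; here a torn read simply returns None.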
-
Publication number: 20200057683Abstract: Methods and systems for allocating disk space and other limited resources (e.g., network bandwidth) for a cluster of data storage nodes using distributed semaphores with atomic updates are described. The distributed semaphores may be built on top of a distributed key-value store and used to reserve disk space, global disk streams for writing data to disks, and per node network bandwidth settings. A distributed semaphore comprising two or more semaphores that are accessed with different keys may be used to reduce contention and allow a globally accessible semaphore to scale as the number of data storage nodes within the cluster increases over time. In some cases, the number of semaphores within the distributed semaphore may be dynamically adjusted over time and may be set based on the total amount of disk space within the cluster and/or the number of contention fails that have occurred to the distributed semaphore.Type: ApplicationFiled: September 20, 2019Publication date: February 20, 2020Inventor: Noel Moldvai
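The sharded-semaphore idea above can be sketched with a dict standing in for the distributed key-value store. The compare-and-swap helper, key layout, and retry count are illustrative assumptions.

```python
import random

store = {}   # stand-in for a distributed key-value store

def cas(key, expected, new):
    # Stand-in for the store's atomic compare-and-swap primitive.
    if store.get(key) == expected:
        store[key] = new
        return True
    return False

def init_semaphore(name, total_permits, shards):
    # Split capacity across several keys to reduce contention.
    per_shard = total_permits // shards
    for i in range(shards):
        store[f"{name}/{i}"] = per_shard

def acquire(name, shards, attempts=8):
    # Try a randomly chosen shard; on CAS failure (contention), retry.
    for _ in range(attempts):
        key = f"{name}/{random.randrange(shards)}"
        free = store[key]
        if free > 0 and cas(key, free, free - 1):
            return key            # caller remembers the shard for release
    return None                   # contention failure

def release(key):
    while True:
        free = store[key]
        if cas(key, free, free + 1):
            return

init_semaphore("disk", total_permits=4, shards=2)
```

Counting contention failures (the `None` returns) would give the signal the abstract mentions for dynamically adjusting the number of shards.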
-
Publication number: 20200057684Abstract: Techniques for transforming plug-in application recipe (PIAR) variables are disclosed. A PIAR definition identifies a trigger and an action. Trigger variable values, exposed by a first plug-in application, are necessary to evaluate the trigger. Evaluating the trigger involves determining whether a condition is satisfied, based on values of trigger variables. A second plug-in application exposes an interface for carrying out an action. Evaluating the action involves carrying out the action based on input variable values. A user selects, via a graphical user interface of a PIAR management application, a variable for a trigger or action operation and a transformation operation to be applied to the variable. The PIAR management application generates a PIAR definition object defining the trigger, the action, and the transformation operation, and stores the PIAR definition object for evaluation on an ongoing basis.Type: ApplicationFiled: October 24, 2019Publication date: February 20, 2020Applicant: Oracle International CorporationInventors: Tim Diekmann, Tuck Chang, Najeeb Andrabi, Anna Igorevna Bokhan-Dilawari
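The trigger-transform-action flow above can be shown with a tiny functional sketch. The recipe contents are invented for illustration; the real system persists a PIAR definition object and evaluates it on an ongoing basis.

```python
def make_recipe(condition, transform, action):
    # Build an evaluator: check the trigger condition on the trigger
    # variables, apply the transformation, then carry out the action.
    def evaluate(trigger_vars):
        if not condition(trigger_vars):
            return None           # trigger condition not satisfied
        return action(transform(trigger_vars))
    return evaluate

# Hypothetical recipe: when an "email" trigger variable is present,
# lowercase it before it feeds the action's input variable.
recipe = make_recipe(
    condition=lambda v: "email" in v,
    transform=lambda v: v["email"].lower(),
    action=lambda email: f"notified:{email}",
)
```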
-
Publication number: 20200057685Abstract: An accelerator manager monitors one or more requests for accelerators from one or more users, and deploys one or more accelerators to one or more programmable devices based on the one or more requests from the one or more users. In a first embodiment, the accelerator manager deploys an accelerator that satisfies multiple user requests. The multiple user requests can be multiple requests from one user, or could be multiple requests from multiple users. In a second embodiment, the accelerator manager compiles historical data from the monitored requests, identifies one or more accelerators that are requested following the request for a first accelerator, and deploys the one or more accelerators after the first accelerator is requested and before they are requested.Type: ApplicationFiled: August 14, 2018Publication date: February 20, 2020Inventors: Paul E. Schardt, Jim C. Chen, Lance G. Thompson, James E. Carey
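The second embodiment above — learning which accelerator tends to follow another and deploying it ahead of the request — can be sketched as a simple follower-frequency model. This is an illustrative guess at the mechanism, not the patented logic.

```python
from collections import Counter, defaultdict

follow_counts = defaultdict(Counter)   # accelerator -> Counter of successors

def record_history(request_sequence):
    # Compile historical data: count which accelerator was requested
    # immediately after each one.
    for a, b in zip(request_sequence, request_sequence[1:]):
        follow_counts[a][b] += 1

def predict_next(accelerator):
    # The accelerator to pre-deploy once `accelerator` is requested.
    followers = follow_counts[accelerator]
    if not followers:
        return None
    return followers.most_common(1)[0][0]

record_history(["fft", "matmul", "fft", "matmul", "fft", "crypto"])
```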
-
Publication number: 20200057686Abstract: A compute node, a failure detection method thereof and a cloud data processing system are provided. The method is adapted to the cloud data processing system having a plurality of compute nodes and at least one management node, and includes following steps: performing a self-inspection on operating statuses of services being provided and resource usage statuses, and reporting an inspection result to the management node by each compute node; dynamically adjusting a time interval of a next report and informing the management node of the time interval by the compute node; and checking a report condition of the inspection result according to the time interval by the management node, so as to determine whether the compute node fails.Type: ApplicationFiled: January 10, 2019Publication date: February 20, 2020Applicant: Industrial Technology Research InstituteInventors: Chun-Chieh Huang, Tzu-Chia Wang
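The adaptive-interval reporting scheme above can be sketched as follows. The interval policy, grace period, and class names are illustrative assumptions.

```python
class ComputeNode:
    def __init__(self, base_interval=10.0):
        self.base_interval = base_interval

    def self_inspect(self, cpu_load):
        # Self-inspection of service/resource status, plus a dynamically
        # chosen interval for the next report (busier => report less often,
        # one plausible policy).
        healthy = cpu_load < 0.95
        next_interval = self.base_interval * (2.0 if cpu_load > 0.5 else 1.0)
        return {"healthy": healthy, "next_interval": next_interval}

class ManagementNode:
    def __init__(self, grace=5.0):
        self.grace = grace
        self.deadline = {}

    def receive_report(self, node_id, report, now):
        # The node informed us of its next interval; expect the next
        # report by then (plus a grace period).
        self.deadline[node_id] = now + report["next_interval"] + self.grace

    def failed_nodes(self, now):
        # A node whose promised report never arrived is considered failed.
        return [n for n, d in self.deadline.items() if now > d]

mgr = ManagementNode()
node = ComputeNode()
mgr.receive_report("node-1", node.self_inspect(cpu_load=0.2), now=0.0)
```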
-
Publication number: 20200057687Abstract: A method of writing data into a memory device includes utilizing a pipeline to process write operations of a first plurality of data words addressed to a memory bank. The method further comprises writing a second plurality of data words into an error buffer, wherein the second plurality of data words comprises data words that are awaiting write verification associated with the memory bank. The method further comprises searching for a data word that is awaiting write verification in the error buffer, wherein the verify operation occurs in a same row as the write operation. The method also comprises determining if an address of the data word is proximal to an address for the write operation and, responsive to a positive determination, delaying a start of the verify operation so that a rising edge of the verify operation occurs a predetermined delay after a rising edge of the write operation.Type: ApplicationFiled: October 10, 2019Publication date: February 20, 2020Inventors: Benjamin LOUIE, Neal BERGER, Lester CRUDELE
-
Publication number: 20200057688Abstract: Devices and methods for error checking transmissions include using error checking circuitry configured to receive a clock and reset. The error checking circuitry includes an input counter that is configured to receive the clock and to count out multiple input clocks from the received clock. The error checking circuitry also includes a delay model that is configured to receive the clock and to output a delayed clock. Also, the error checking circuitry includes an output counter that is configured to receive the delayed clock and to count out multiple output clocks from the received delayed clock. Furthermore, the error checking circuitry includes multiple error calculation circuits arranged in parallel that each are configured to: receive data based on a respective input clock, generate an error indicator based on the received data with the error indicator indicating whether an error exists in the received data, and output the error indicator based at least in part on a respective output clock.Type: ApplicationFiled: August 20, 2018Publication date: February 20, 2020Inventor: William C. Waldrop
-
Publication number: 20200057689Abstract: Example implementations described herein involve a system for maintenance recommendation based on data-driven failure prediction. The example implementations can involve estimating the probability of having a failure event in the near future given sensor measurements and events from the equipment, and then alerting the system user or maintenance staff if the probability of failure exceeds a certain threshold. The example implementations utilize historical failure cases along with the associated sensor measurements and events to learn a group of classification models that differentiate between failure and non-failure cases. In example implementations, the system then chooses the optimal model for failure prediction such that the overall cost of the maintenance process is minimized.Type: ApplicationFiled: July 26, 2017Publication date: February 20, 2020Inventors: Ahmed Khairy FARAHAT, Chetan GUPTA, Kosta RISTOVSKI
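The cost-minimizing model choice and threshold alert above can be illustrated with toy numbers. The cost weights, threshold, and model statistics are invented stand-ins for trained classifiers and real maintenance costs.

```python
ALERT_THRESHOLD = 0.7   # illustrative alert threshold

def maintenance_cost(false_alarms, missed_failures,
                     cost_per_alarm=1.0, cost_per_miss=10.0):
    # Overall maintenance cost: false alarms waste inspections,
    # missed failures cause unplanned downtime (weighted more heavily).
    return false_alarms * cost_per_alarm + missed_failures * cost_per_miss

def choose_model(model_stats):
    # model_stats: {model_name: (false_alarms, missed_failures)}
    # Pick the classification model minimizing overall cost.
    return min(model_stats,
               key=lambda m: maintenance_cost(*model_stats[m]))

def should_alert(failure_probability):
    return failure_probability > ALERT_THRESHOLD

stats = {"model_a": (8, 1), "model_b": (2, 3)}
best = choose_model(stats)   # cost: model_a = 18, model_b = 32
```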