Patent Applications Published on January 9, 2020
-
Publication number: 20200012487
Abstract: A firmware updating method is provided. The firmware updating method is adapted to a data storage device, and it can generate a new parameter table according to a conversion formula segment in an update image file required for updating the data storage device. Therefore, even in a condition where there is a parameter change between a code segment of an old version firmware and a code segment of a new version firmware, the updated or upgraded data storage device can still operate normally.
Type: Application
Filed: March 21, 2019
Publication date: January 9, 2020
Inventor: Chien-Ting LIN
-
Publication number: 20200012488
Abstract: The present disclosure is directed to devices, systems and methods for tracking and upgrading firmware in intelligent electronic devices (IEDs). The present disclosure provides for tracking firmware versions of at least one or a fleet of IEDs, e.g., electronic power or revenue meters, notifying a user that an update to an existing firmware is available and providing the ability to automatically upload the current or latest version of the firmware to all IEDs.
Type: Application
Filed: September 20, 2019
Publication date: January 9, 2020
Inventors: Rory A. Koval, Erran Kagan
-
Publication number: 20200012489
Abstract: A method of combined file firmware upgrade includes providing a combo file comprising a plurality of firmware files for a plurality of data storage device product categories. The method also includes downloading the combo file to a data storage device that belongs to one of the plurality of data storage device product categories. The method further includes comparing parameters of the data storage device with parameters of individual firmware files of the plurality of firmware files. When parameters of a particular one of the plurality of firmware files are found to correspond with the parameters of the data storage device, the method includes utilizing the particular one of the plurality of firmware files to perform an automatic firmware upgrade in the data storage device.
Type: Application
Filed: July 6, 2018
Publication date: January 9, 2020
Inventors: Choo Swee Kieong, Chng Yong Peng, Wang Lina
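The matching step this abstract describes can be sketched in a few lines. This is an illustrative sketch, not code from the patent; the combo-file layout, field names (`images`, `params`), and the first-match policy are all assumptions:

```python
# Hypothetical sketch of combo-file firmware selection: each bundled
# image declares the device parameters it applies to, and the upgrade
# uses the first image whose declared parameters all match the device.

def select_firmware(device_params, combo_file):
    """Return the firmware image whose parameters match the device, or None."""
    for image in combo_file["images"]:
        # Every parameter declared by the image must match the device.
        if all(device_params.get(k) == v for k, v in image["params"].items()):
            return image
    return None  # no compatible image: skip the upgrade

combo = {
    "images": [
        {"name": "fw_hdd_v2", "params": {"category": "hdd", "controller": "A1"}},
        {"name": "fw_ssd_v5", "params": {"category": "ssd", "controller": "B7"}},
    ]
}
device = {"category": "ssd", "controller": "B7", "capacity_gb": 512}

print(select_firmware(device, combo)["name"])  # fw_ssd_v5
```

A device whose parameters match no bundled image simply gets no upgrade, which mirrors the abstract's "when parameters ... are found to correspond" condition.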
-
Publication number: 20200012490
Abstract: A data storage device includes: a storage configured to store flag information on attributes, each attribute corresponding to a revision version, and firmware comprising register setting information and firmware execution code branch information for each attribute; and a controller configured to read the flag information and the firmware from the storage to execute the firmware according to the flag information.
Type: Application
Filed: December 5, 2018
Publication date: January 9, 2020
Inventor: Jung Ae KIM
-
Publication number: 20200012491
Abstract: A method of upgrading an encryption machine, including: a controller for managing the upgrading of encryption machines determines a first encryption machine to be upgraded; the controller transfers the data of the first encryption machine to a second encryption machine; and the controller sends an upgrade command, instructing the first encryption machine to conduct the upgrade, to the first encryption machine. The above method addresses the problem that, with conventional techniques, the process of upgrading an encryption machine is extremely complicated, which easily causes operation errors and interruption of user service.
Type: Application
Filed: July 3, 2019
Publication date: January 9, 2020
Inventors: Libei Yang, Long Lin, Xianwei Lin, Haitao Jiang, Jiandong Su
-
Publication number: 20200012492
Abstract: The disclosed technology is generally directed to updating of applications, firmware and/or other software on IoT devices. In one example of the technology, a request that is associated with a requested update is communicated from a normal world of a first application processor to a secure world of the first application processor. The secure world validates the requested update. Instructions associated with the validated update are communicated from the secure world to the normal world. Image requests are sent from the normal world to a cloud service for image binaries associated with the validated update. The secure world receives the requested image binaries from the cloud service. The secure world writes the received image binaries to memory, and validates the written image binaries.
Type: Application
Filed: September 10, 2019
Publication date: January 9, 2020
Inventors: Adrian Bonar, Reuben R. Olinsky, Sang Eun Kim, Edmund B. Nightingale, Thales de Carvalho
-
Publication number: 20200012493
Abstract: A system and method for comparative performance monitoring of software release versions is disclosed. A remote network management platform may include a computational instance for managing a network. Transactions between a server of the computational instance and a client device in the managed network may be logged to a database. Transactions may be carried out by a release version of a set of program code units executing on the server. A software application executing on a computing device may retrieve and analyze a first set of transactions carried out by a first release version of the set of program code units to determine a first set of performance metrics, and do the same for a second set of transactions carried out by a second release version of the set of program code units to determine a second set of performance metrics.
Type: Application
Filed: July 3, 2018
Publication date: January 9, 2020
Inventor: Giora Sagy
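The per-release metric computation described above amounts to grouping logged transactions by release and aggregating. A minimal sketch, with an assumed log schema (`release`, `latency_ms`) and mean latency as the only metric:

```python
# Illustrative sketch (not from the patent): derive a performance metric
# per release version from a transaction log, so two releases can be
# compared side by side.

from statistics import mean

def metrics_by_release(transactions):
    """Group logged transactions by release and compute mean latency."""
    by_release = {}
    for t in transactions:
        by_release.setdefault(t["release"], []).append(t["latency_ms"])
    return {rel: mean(vals) for rel, vals in by_release.items()}

log = [
    {"release": "v1.0", "latency_ms": 120},
    {"release": "v1.0", "latency_ms": 100},
    {"release": "v1.1", "latency_ms": 80},
    {"release": "v1.1", "latency_ms": 90},
]
print(metrics_by_release(log))  # {'v1.0': 110, 'v1.1': 85}
```

A real system would compute a richer metric set (throughput, error rate, percentiles), but the group-then-aggregate shape stays the same.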
-
Publication number: 20200012494
Abstract: Techniques for cognitive interpretation of source code are provided. Metadata associated with a section of code in a software project is analyzed to determine a change history of the section of code. A plurality of discussions related to the section of code is evaluated, where each of the plurality of discussions occurred during a code review process. Further, a plurality of support records related to the section of code is analyzed. A sentiment score for the section of code is determined based on the associated metadata, the evaluation of the plurality of discussions, and the analysis of the plurality of support records. Additionally, a display color for the section of code is selected based on the sentiment score. Finally, generation of a graphical user interface (GUI) is facilitated, where the GUI displays the selected display color in association with the section of code.
Type: Application
Filed: July 9, 2018
Publication date: January 9, 2020
Inventors: Rafal P. KONIK, Alec J. MATSCHINER, Avery GRANUM, Kyle G. CHRISTIANSON, Jim C. CHEN
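The score-to-color mapping at the end of this abstract can be sketched directly. The weights and color thresholds below are pure assumptions for illustration; the patent does not specify them:

```python
# Hypothetical sketch: combine change history, review discussions, and
# support records into a sentiment score for a code section, then map
# the score to a display color. Weights and thresholds are invented.

def sentiment_score(change_count, negative_discussions, support_tickets):
    # More churn, more negative review discussion, and more support
    # records all lower the (0..100) sentiment of a code section.
    score = 100 - 2 * change_count - 5 * negative_discussions - 10 * support_tickets
    return max(0, min(100, score))

def display_color(score):
    if score >= 70:
        return "green"   # healthy section
    if score >= 40:
        return "yellow"  # worth a closer look
    return "red"         # historically troublesome

s = sentiment_score(change_count=10, negative_discussions=4, support_tickets=3)
print(s, display_color(s))  # 30 red
```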
-
Publication number: 20200012495
Abstract: The present disclosure relates to systems and methods that provide a reconfigurable cryptographic coprocessor. An example system includes an instruction memory configured to provide ARX instructions and mode control instructions. The system also includes an adjustable-width arithmetic logic unit, an adjustable-width rotator, and a coefficient memory. A bit width of the adjustable-width arithmetic logic unit and a bit width of the adjustable-width rotator are adjusted according to the mode control instructions. The coefficient memory is configured to provide variable-width words to the arithmetic logic unit and the rotator. The arithmetic logic unit and the rotator are configured to carry out the ARX instructions on the provided variable-width words. The systems and methods described herein could accelerate various applications, such as deep learning, by assigning one or more of the disclosed reconfigurable coprocessors to work as a central computation unit in a neural network.
Type: Application
Filed: July 3, 2018
Publication date: January 9, 2020
Inventors: Mohamed E Aly, Wen-Mei W. Hwu, Kevin Skadron
-
Publication number: 20200012496
Abstract: Systems, methods, and computer program products are disclosed that control issuing branch instructions in a simultaneous multi-threading (SMT) system. An embodiment system includes an SMT processor circuit that receives, from one of a plurality of threads, a branch instruction having a favor bit. The SMT processor circuit schedules the branch instruction to issue, relative to branch instructions received from other threads in the plurality of threads, based on the favor bit. When the favor bit has a first value, the branch instruction is scheduled to have a higher priority to issue before the branch instructions received from other threads in the plurality of threads. When the favor bit has a second value, the branch instruction is scheduled to issue based on an age of the branch instruction relative to respective ages of the branch instructions received from other threads in the plurality of threads.
Type: Application
Filed: July 5, 2018
Publication date: January 9, 2020
Inventors: Salma Ayub, Glenn O. Kincaid, Christopher M. Mueller, Dung Q. Nguyen, Eula Faye Abalos Tolentino, Albert J. Van Norstrand, Jr., Kenneth L. Ward
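The issue rule above reduces to a two-key sort: favored branches first, then oldest first. A software sketch of the hardware behavior, with invented field names (`favor`, `age`):

```python
# Illustrative model of the favor-bit issue rule: a branch with favor=1
# issues ahead of others; among equally favored branches, the older
# instruction (smaller age value) issues first.

def issue_order(branches):
    """Sort pending branch instructions: favored first, then oldest."""
    # favor=1 sorts before favor=0; lower age value means older.
    return sorted(branches, key=lambda b: (-b["favor"], b["age"]))

pending = [
    {"thread": 0, "favor": 0, "age": 1},
    {"thread": 1, "favor": 1, "age": 5},
    {"thread": 2, "favor": 0, "age": 3},
]
print([b["thread"] for b in issue_order(pending)])  # [1, 0, 2]
```

Thread 1 wins despite being the youngest because its favor bit is set; the remaining threads fall back to age order.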
-
Publication number: 20200012497
Abstract: A processor includes two or more branch target buffer (BTB) tables for branch prediction, each BTB table storing entries of a different target size or width or storing entries of a different branch type. Each BTB entry includes at least a tag and a target address. For certain branch types that only require a few target address bits, the respective BTB tables are narrower thereby allowing for more BTB entries in the processor separated into respective BTB tables by branch instruction type. An increased number of available BTB entries are stored in a same or a less space in the processor thereby increasing a speed of instruction processing. BTB tables can be defined that do not store any target address and rely on a decode unit to provide it. High value BTB entries have dedicated storage and are therefore less likely to be evicted than low value BTB entries.
Type: Application
Filed: July 9, 2018
Publication date: January 9, 2020
Inventors: Thomas CLOUQUEUR, Anthony JARVIS
-
Publication number: 20200012498
Abstract: A method of accelerating inversion of injective operations includes identifying a first injective operation, storing information related to the first injective operation, identifying a second operation as an inverse of the first injective operation, and storing information related to the second operation. Accelerated action may be taken based on identifying the second operation as the inverse of the first injective operation, and may include preloading a cache with data and performing operations using data associated with the first injective operation.
Type: Application
Filed: July 5, 2018
Publication date: January 9, 2020
Inventors: Lucas CROWTHERS, John INGALLS
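The core idea, answering an inverse operation from remembered forward results instead of recomputing, can be sketched as follows. This is a speculative software analogy, not the patent's hardware mechanism; all names are invented:

```python
# Speculative sketch: record the input/output pairs of an injective
# operation, and when a later operation asks to reverse such a pair,
# serve the preimage from the recorded data instead of recomputing.

class InverseAccelerator:
    def __init__(self):
        self.forward = {}   # output -> input of observed injective ops

    def record(self, x, y):
        """Record one application x -> y of the injective operation."""
        self.forward[y] = x

    def invert(self, y, slow_inverse):
        """Return the preimage of y, using recorded data when possible."""
        if y in self.forward:            # accelerated path (cache hit)
            return self.forward[y]
        return slow_inverse(y)           # fall back to real computation

acc = InverseAccelerator()
for x in range(5):
    acc.record(x, x * 2 + 1)            # injective: x -> 2x + 1

print(acc.invert(7, lambda y: (y - 1) // 2))  # 3 (served from the cache)
```

Injectivity is what makes the reverse map well defined: each output has exactly one preimage, so the `output -> input` dictionary never has collisions.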
-
Publication number: 20200012499
Abstract: An apparatus, and corresponding method, for input/output (I/O) value determination, generates an I/O instruction for an I/O device, the I/O device including a state machine with state transition logic. The apparatus comprises a controller that includes a simplified state machine with a reduced version of the state transition logic of the state machine of the I/O device. The controller is configured to improve instruction execution performance of a processor core by employing the simplified state machine to predict at least one state value of at least one I/O device true state value to be affected by the I/O instruction at the I/O device.
Type: Application
Filed: December 4, 2018
Publication date: January 9, 2020
Inventors: Jason D. Zebchuk, Wilson P. Snyder, II, Michael S. Bertone
-
Publication number: 20200012500
Abstract: A method for deployment of a machine learning model (MLM) on a target field device is disclosed herein. The method includes automatically generating a set of source code files based on the machine learning model, wherein the set of source code files is configured to execute the machine learning model according to predetermined capabilities of the target field device; transforming the generated source code files into a model binary using a tool chain specific to the target field device; and deploying the model binary in a memory of the target field device.
Type: Application
Filed: March 1, 2018
Publication date: January 9, 2020
Inventors: Christian Kern, Igor Kogan, Josep Soler Garrido
-
Publication number: 20200012501
Abstract: The present disclosure provides an information handling system (IHS) and related methods that provide secure shared memory access (SMA) to shared memory locations within a Peripheral Component Interconnect (PCI) device of an IHS. The IHS and methods disclosed herein provide secure SMA to one or more operating system (OS) applications that are granted access to the shared memory. According to one embodiment, the disclosed method provides secure SMA to one or more OS applications by receiving a secure runtime request from at least one OS application to access shared memory locations within a PCI device, authenticating the secure runtime request received from the OS application, creating a secure session for communicating with the OS application, and providing the OS application secure runtime access to the shared memory locations within the PCI device.
Type: Application
Filed: July 9, 2018
Publication date: January 9, 2020
Inventors: Shekar B. Suryanarayana, Chandrasekhar Puthillanthe
-
Publication number: 20200012502
Abstract: The present disclosure provides a method for loading a driver during startup of a terminal, and a terminal device. The terminal includes at least one component having a driver to be loaded during startup. The method includes: receiving a startup instruction; reading a component list; determining whether the component list comprises a driver related to a component; and if the component list includes the driver related to the component, loading the related driver. By such means, the present disclosure increases startup speed and improves user experience.
Type: Application
Filed: September 20, 2019
Publication date: January 9, 2020
Inventor: Bin Song
-
Publication number: 20200012503
Abstract: A data-serialization system initially uses a recursive serialization algorithm to serialize a hierarchy of nested data objects by translating those objects into a serial stream of data. The system determines that a stack-overflow error is likely to occur whenever the number of objects serialized by the system exceeds a threshold value, or whenever the stack has reached an unacceptable level of utilization. When the system determines that a stack-overflow error is likely or if the system detects that a stack-overflow error will definitely occur if another object is serialized, the system either transfers control to a nonrecursive algorithm that does not require a stack data structure or reduces stack utilization by transferring contents of the stack to a variable-size queue-like data structure.
Type: Application
Filed: September 16, 2019
Publication date: January 9, 2020
Inventors: Timothy P. Ellison, Amit S. Mane, Sathiskumar Palaniappan, Vijay Sundaresan
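The handoff this abstract describes, recursing until overflow becomes likely and then switching to a nonrecursive algorithm, can be sketched concretely. Everything below (the depth threshold as a stand-in for stack pressure, the list-only data model, the bracket format) is an illustrative assumption, not the patent's implementation:

```python
# Minimal sketch of the fallback: serialize nested objects recursively,
# but once nesting depth crosses a threshold, hand the remaining work to
# a non-recursive serializer driven by an explicit work list instead of
# the call stack.

DEPTH_LIMIT = 50  # assumed proxy for "stack-overflow is likely"

def serialize(obj, depth=0):
    """Recursive serializer that bails out before the stack overflows."""
    if depth > DEPTH_LIMIT:
        return serialize_iterative(obj)
    if isinstance(obj, list):
        return "[" + ",".join(serialize(v, depth + 1) for v in obj) + "]"
    return str(obj)

def serialize_iterative(obj):
    """Non-recursive serializer; data items here are ints, tokens strings."""
    out, work = [], [obj]
    while work:
        item = work.pop()
        if isinstance(item, list):
            work.append("]")                     # emitted after the children
            for i, child in enumerate(reversed(item)):
                work.append(child)
                if i != len(item) - 1:
                    work.append(",")
            work.append("[")                     # emitted first
        else:
            out.append(str(item))
    return "".join(out)

nested = 0
for _ in range(200):                             # 200 levels of nesting
    nested = [nested]
print(serialize(nested) == "[" * 200 + "0" + "]" * 200)  # True
```

A purely recursive serializer would risk blowing the interpreter's recursion limit on such input; the iterative half bounds stack usage regardless of nesting depth.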
-
Publication number: 20200012504
Abstract: A dynamic cloud stack testing system comprises a cloud network with cloud components and a cloud stack server coupled to the network. The server includes an interface, a memory, a cloud stack configuration engine, and a cloud stack testing engine. The interface receives a cloud stack request from a user device that includes functionality parameters. The memory stores historic cloud stack combinations. The cloud stack configuration engine identifies cloud components associated with the functionality parameters and determines a cloud stack configuration that incorporates them. The cloud stack testing engine determines a cloud stack configuration test. The cloud stack testing engine executes the test, and stores results and the associated cloud stack configuration in the memory.
Type: Application
Filed: September 20, 2019
Publication date: January 9, 2020
Inventors: Sandeep Kumar Chauhan, Sasidhar Purushothaman
-
Publication number: 20200012505
Abstract: A system and method of migrating a first module to a second module in a data center are disclosed. In certain aspects, a method includes instantiating a migration of the first module to the second module, wherein the first module is operating with a configuration in a data center. The method also includes retrieving results of a compatibility check performed by a migration coordinator to determine potential incompatibilities between the configuration of the first module and the second module, the results including a first set of incompatibilities to be resolved. The method further includes requesting user input relating to the first set of incompatibilities. The method also includes periodically determining whether user input relating to the first set of incompatibilities has been received until a time threshold is reached.
Type: Application
Filed: July 3, 2018
Publication date: January 9, 2020
Inventor: Suman Chandra SHIL
-
Publication number: 20200012506
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for allocating a number of first containers to implement one primary segment instance each and a number of second containers to implement one mirror segment instance each. In one example system, the second containers are configured to have less computing resources than the first containers. The containers are distributed among a number of physical computers. The system receives an indication of a failure of a physical computer hosting a particular first container implementing a first primary segment instance. In response to receiving the indication, the system promotes a second mirror segment instance that is a mirror of the first primary segment instance to be a new primary segment instance. The system allocates additional resources to the particular second container implementing the promoted mirror segment instance.
Type: Application
Filed: July 3, 2018
Publication date: January 9, 2020
Inventors: Ivan D. Novick, Lawrence Hamel, Oz Basarir, Goutam Tadi
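The failover sequence above, promote the mirror, then grow its container, can be sketched as follows. The segment records, the host-failure trigger, and the fixed CPU bump are illustrative assumptions rather than the patent's mechanism:

```python
# Illustrative sketch of mirror promotion: when the host of a primary
# segment fails, the corresponding mirror segment is promoted to primary
# and its (smaller) container receives additional resources.

def handle_host_failure(segments, failed_host):
    """Promote mirrors of primaries that lived on the failed host."""
    for seg in segments:
        if seg["role"] == "primary" and seg["host"] == failed_host:
            mirror = next(s for s in segments
                          if s["role"] == "mirror" and s["segment"] == seg["segment"])
            mirror["role"] = "primary"          # promotion
            mirror["cpu"] += 2                  # allocate additional resources
            seg["role"] = "failed"
    return segments

cluster = [
    {"segment": 1, "role": "primary", "host": "h1", "cpu": 4},
    {"segment": 1, "role": "mirror",  "host": "h2", "cpu": 1},
]
handle_host_failure(cluster, "h1")
print(cluster[1])  # {'segment': 1, 'role': 'primary', 'host': 'h2', 'cpu': 3}
```

The point of the asymmetric sizing in the abstract is visible here: the mirror idles cheaply (1 CPU) and is only grown at the moment it becomes a primary.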
-
Publication number: 20200012507
Abstract: Provided is a microkernel architecture control system of an industrial server and an industrial server, which relate to the technical field of industrial servers. According to the microkernel architecture control system, scheduling configuration information is customized on the basis of an architecture including a plurality of microkernels and a virtual machine monitor prior to startup of a system, each microkernel including industrial control middleware and a real-time operating system.
Type: Application
Filed: January 9, 2019
Publication date: January 9, 2020
Applicant: KYLAND TECHNOLOGY CO., LTD
Inventors: Ping Li, Zhiwei Yan, Qiyun Jiang, Xueqiang Qiu, Xingpei Tang
-
Publication number: 20200012508
Abstract: A performance manager (400, 500) and a method (200) performed thereby are provided, for managing the performance of a logical server of a data center. The data center comprises at least one memory pool in which a memory block has been allocated to the logical server. The method (200) comprises determining (230) performance characteristics associated with a first portion of the memory block, comprised in a first memory unit of the at least one memory pool; and identifying (240) a second portion of the memory block, comprised in a second memory unit of the at least one memory pool, to which data of the first portion of the memory block may be migrated to apply performance characteristics associated with the second portion. The method (200) further comprises initiating migration (250) of the data to the second portion of the memory block.
Type: Application
Filed: March 31, 2017
Publication date: January 9, 2020
Inventors: Mozhgan Mahloo, Amir Roozbeh
-
Publication number: 20200012509
Abstract: A method of improving performance of a software application executing with a virtualized computing infrastructure wherein the application has associated: a hypervisor profile of characteristics of a hypervisor in the infrastructure; a network communication profile of characteristics of network communication for the application; a data storage profile of characteristics of data storage for the infrastructure; and an application profile defined collectively by the other profiles.
Type: Application
Filed: March 13, 2018
Publication date: January 9, 2020
Applicant: British Telecommunications Public Limited Company
Inventors: Kashaf Khan, Newland Andrews
-
Publication number: 20200012510
Abstract: Systems, methods, apparatuses, and computer program products for multi-tiered virtualized network function (VNF) scaling are provided. One method includes detecting a need to scale at least one virtualized network function component (VNFC) implemented as a container, monitoring resource utilization by containers and determining remaining capacity within a current virtual machine hosting the containers, and deciding an allocation of the container to a virtual machine based at least on the resource utilization and the remaining capacity. When it is determined that the remaining capacity is low, the method may further include vertical scaling of the current virtual machine by allocating additional virtualized resources to the current virtual machine, and/or horizontal scaling of the current virtual machine by instantiating a new virtual machine and deploying the container to the newly instantiated virtual machine.
Type: Application
Filed: March 24, 2017
Publication date: January 9, 2020
Inventors: Anatoly ANDRIANOV, Uwe RAUSCHENBACH, Gergely CSATARI
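The placement decision above has three outcomes: fit on the current VM, grow it vertically, or scale horizontally with a new VM. A hedged sketch, with invented resource fields and the simplifying assumption that CPU is the only constrained resource:

```python
# Hypothetical sketch of the VNFC placement decision: place the new
# container on the current VM if capacity remains; otherwise grow the
# VM (vertical scaling) when allowed, or instantiate a new VM
# (horizontal scaling) when not.

def place_container(vm, needed_cpu, can_grow_vm):
    remaining = vm["cpu_total"] - vm["cpu_used"]
    if remaining >= needed_cpu:
        vm["cpu_used"] += needed_cpu
        return "placed-on-current"
    if can_grow_vm:                      # vertical scaling
        vm["cpu_total"] += needed_cpu
        vm["cpu_used"] += needed_cpu
        return "vertical-scale"
    # horizontal scaling: a freshly instantiated VM hosts the container
    return "horizontal-scale-new-vm"

vm = {"cpu_total": 8, "cpu_used": 7}
print(place_container(vm, 2, can_grow_vm=True))   # vertical-scale
print(place_container(vm, 2, can_grow_vm=False))  # horizontal-scale-new-vm
```

Whether vertical growth is possible depends on headroom on the physical host, which is exactly the "remaining capacity" monitoring the abstract calls for.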
-
Publication number: 20200012511
Abstract: A method for operating an electronic device, the method including spawning a name space tool (NST) as part of a boot process of a host OS, wherein the NST is a process with a plurality of root privileges of the host OS. The method further includes spawning, by the NST, a container for a guest OS, wherein the container for the guest OS is mapped to a dedicated domain in the host OS, and dropping, by the NST, a root privilege of the host OS in response to spawning the container for the guest OS.
Type: Application
Filed: July 5, 2019
Publication date: January 9, 2020
Inventors: Guruprasad Ganesh, Sudhi Herle, Ahmed M. Azab, Rohan Bhutkar, Ivan Getta, Xun Chen, Wenbo Shen, Ruowen Wang, Haining Chen, Khaled Elwazeer, Mengmeng Li, Peng Ning, Hyungseok Yu, Myungsu Cha, Kyungsun Lee, Se Young Choi, Yurak Choe, Yong Shin, Kyoung-Joong Shin, Donguk Seo, Junyong Choi
-
Publication number: 20200012512
Abstract: An outboard motor and methods of use thereof in general, includes a powerhead removeably affixed to the transom of a boat, and a gear case rotationally connected to a propeller shaft, the outboard motor including a telescopic drive shaft, the telescopic drive shaft having a first drive shaft section rotationally connected to the motor and a second drive shaft section rotationally connected to the gear case, and a telescopic drive shaft housing, the telescopic drive shaft housing configured to support the telescopic drive shaft internally therethrough, whereby the telescopic drive shaft and the telescopic drive shaft housing are configured to provide depth adjustment for the gear case and the propeller shaft, and thus enable the propeller to be raised and lowered during propulsion to improve propulsion efficiency.
Type: Application
Filed: August 26, 2019
Publication date: January 9, 2020
Inventor: Robert J. GALLETTA, Jr.
-
Publication number: 20200012513
Abstract: Virtual redundancy for active-standby cloud applications is disclosed herein. A virtual machine (“VM”) placement scheduling system is disclosed herein. The system can compute, for each standby VM of a plurality of available standby VMs, a minimum required placement overlap delta to meet an entitlement assurance rate (“EAR”) threshold. The system can compute a minimum number of available VM slots for activating each standby VM to meet the EAR threshold. For each standby VM of a given application, the system can filter out any server of a plurality of servers that does not meet criteria. If a given server meets the criteria, the system can add the given server to a candidate list; sort, in descending order, the candidate list by the minimum required placement overlap delta and the number of available virtual machine slots; and select, from the candidate list of servers, a candidate server from atop the candidate list.
Type: Application
Filed: September 16, 2019
Publication date: January 9, 2020
Applicants: AT&T Intellectual Property I, L.P., The Regents of the University of Colorado, a Body Corporate
Inventors: Gueyoung Jung, Kaustubh Joshi, Sangtae Ha
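The filter-sort-select loop in this abstract maps directly onto a list comprehension plus a descending two-key sort. The filtering criteria and field names below are invented for illustration; the descending sort order follows the abstract:

```python
# Illustrative sketch of candidate-server selection: drop servers that
# fail the criteria, sort survivors in descending order by placement
# overlap delta and free VM slots, and pick the server atop the list.

def pick_candidate(servers, min_slots):
    candidates = [s for s in servers if s["free_slots"] >= min_slots]
    candidates.sort(key=lambda s: (s["overlap_delta"], s["free_slots"]),
                    reverse=True)                # descending, per the abstract
    return candidates[0] if candidates else None

servers = [
    {"name": "srv-a", "overlap_delta": 2, "free_slots": 4},
    {"name": "srv-b", "overlap_delta": 5, "free_slots": 3},
    {"name": "srv-c", "overlap_delta": 5, "free_slots": 1},  # too few slots
]
print(pick_candidate(servers, min_slots=2)["name"])  # srv-b
```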
-
Publication number: 20200012514
Abstract: Systems, methods, and apparatuses for resource monitoring identification reuse are described. In an embodiment, a system is described comprising: a hardware processor core to execute instructions; storage for resource monitoring identification (RMID) recycling instructions to be executed by the hardware processor core; and a logical processor to execute on the hardware processor core, the logical processor including associated storage for an RMID and state.
Type: Application
Filed: May 9, 2019
Publication date: January 9, 2020
Inventors: Matthew FLEMING, Edwin VERPLANKE, Andrew HERDRICH, Ravishankar IYER
-
Publication number: 20200012515
Abstract: Implementing static loaders and savers for the transfer of local and distributed data containers to and from storage systems can be difficult because there are so many different configurations of output formats, data containers and storage systems. Described herein is an extensible componentized data transfer framework for performant and scalable authoring of data loaders and data savers. Abstracted local and distributed workflows drive selection of plug-ins that can be composed by the framework into particular local or distributed scenario loaders and savers. Reusability and code sparsity are maximized.
Type: Application
Filed: September 16, 2019
Publication date: January 9, 2020
Inventors: Tong Wen, Parry Husbands, Samuel Weiss
-
Publication number: 20200012516
Abstract: A migration management method includes referring to a performance deterioration rate of a specific virtual server when utilization of a virtual server other than the specific virtual server in virtual servers that work on a physical server is changed in a stepwise manner, and calculating a first index value relating to a load state of the physical server before the performance deterioration rate exceeds a threshold based on a number of virtual CPUs allocated to each of the virtual servers and utilization of the virtual CPUs; calculating a second index value relating to the load state based on the number of virtual CPUs and the utilization while the specific virtual server is activated on the physical server; and conducting migration to another physical server for a virtual server other than the specific virtual server in the virtual servers when the calculated second index value exceeds the calculated first index value.
Type: Application
Filed: May 31, 2019
Publication date: January 9, 2020
Applicant: FUJITSU LIMITED
Inventors: Hiroyoshi Kodama, Shigeto SUZUKI, Hiroyuki FUKUDA, Kazumi Kojima
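Both index values in this abstract derive from vCPU counts and vCPU utilization. The abstract does not give the formula, so the sum-of-products below, and the calibrated threshold value, are pure assumptions made to show the comparison that triggers migration:

```python
# Hedged sketch: a load index for a physical server computed from each
# hosted virtual server's vCPU count and utilization. Migration of a
# non-specific virtual server is triggered when the live (second) index
# exceeds the calibrated (first) index.

def load_index(virtual_servers):
    """Assumed index: sum of (vCPU count x utilization) over virtual servers."""
    return sum(vs["vcpus"] * vs["util"] for vs in virtual_servers)

# First index: value observed just before the specific server's
# performance deterioration rate crossed its threshold (assumed here).
first_index = 6.0

live = [
    {"name": "specific", "vcpus": 4, "util": 0.9},
    {"name": "other-1",  "vcpus": 4, "util": 0.8},
]
second_index = load_index(live)
if second_index > first_index:
    print("migrate a non-specific virtual server off this host")
```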
-
Publication number: 20200012517
Abstract: A computer system infrastructure includes at least one edge computer system and at least one cloud computer system, wherein the edge computer system is connectable to the cloud computer system, both in the edge computer system and in the cloud computer system a virtual environment for hosting an application software is configured, respectively, the virtual environment of the edge computer system and the virtual environment of the cloud computer system are configured as unified host environments for the application software, respectively, the application software is provided within one of the virtual environments of the edge computer system and the cloud computer system, and the edge computer system and the cloud computer system are configured to transfer the application software between the two virtual environments of the edge computer system and the cloud computer system.
Type: Application
Filed: July 9, 2019
Publication date: January 9, 2020
Inventors: Timo Bruderek, Jürgen Atzkern
-
Publication number: 20200012518
Abstract: A hardware scheduling circuit may receive priority indications for a plurality of threads for processing, by an execution unit, multiple data samples associated with a signal. A particular thread of the plurality of threads may be scheduled for execution by the execution unit based on a priority of the particular thread and based on an availability of some of the multiple data samples that are to be processed by the particular thread.
Type: Application
Filed: July 6, 2018
Publication date: January 9, 2020
Inventors: Richard T. Witek, Peter C. Eastty
-
Publication number: 20200012519
Abstract: Provided is a method and apparatus for implementing a microkernel architecture of an industrial server. The method includes calculation of the dependency of control programs according to a microkernel task type weight and a microkernel task priority weight and/or a control program running time weight prior to startup of a system, and determination of the number of the control programs running on each physical core and each control program running on multiple physical cores according to the dependency.
Type: Application
Filed: January 9, 2019
Publication date: January 9, 2020
Applicant: KYLAND TECHNOLOGY CO., LTD.
Inventors: Ping Li, Zhiwei Yan, Qiyun Jiang, Xueqiang Qiu, Xingpei Tang
-
Publication number: 20200012520
Abstract: Exemplary embodiments include a method for scheduling multiple batches of concurrent jobs. The method includes: scheduling a plurality of batches where each batch has a plurality of jobs; identifying one or more dependencies via a configuration file, wherein the configuration file manages dependencies for each of the jobs of each batch; monitoring the one or more jobs; identifying and reporting one or more errors; and resolving the one or more errors by modifying one or more of hardware performance, CPU usage, memory consumption, database performance and/or other metrics to optimize system resource usage.
Type: Application
Filed: June 27, 2019
Publication date: January 9, 2020
Inventor: Utkarsh Sudhir BIDKAR
-
Publication number: 20200012521
Abstract: The present disclosure provides a task parallel processing method, a device, a system, a storage medium and computer equipment, which are capable of distributing and regulating tasks to be executed according to a task directed acyclic graph, and may thereby realize task parallelism of a multi-core processor and improve the efficiency of data processing.
Type: Application
Filed: September 18, 2019
Publication date: January 9, 2020
Applicant: Shanghai Cambricon Information Technology Co., Ltd
Inventors: Linyang WU, Xiaofu MENG
-
Publication number: 20200012522
Abstract: A method for automatically generating scheduling algorithms, including determining a scheduling policy for a plurality of jobs to be executed on a computer system, where the scheduling policy specifies an execution order of a plurality of jobs; using the scheduling policy in a production environment for a period of time, and collecting data indicative of a business impact of each job executed during the period of time; generating a list of all pairwise comparisons of business impact between the plurality of jobs, together with outcomes of the comparisons; marking each pair for which the comparison outcome is inconsistent with the relative execution order of the pair of jobs according to a predefined criteria to create a reinforcement learning batch; and using the reinforcement learning batch to adjust a decision criteria used to determine the scheduling policy.
Type: Application
Filed: September 19, 2019
Publication date: January 9, 2020
Inventors: Carlos Henrique CARDONHA, Renato Luiz de FREITAS CUNHA, Vitor Henrique LEAL MESQUITA, Eduardo ROCHA RODRIGUES
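The batch-construction step above, marking pairs whose business-impact ordering contradicts the scheduled execution order, can be sketched with `itertools.combinations`. The "higher impact should have run earlier" criterion and the field names are assumptions standing in for the abstract's "predefined criteria":

```python
# Speculative sketch: compare business impact pairwise between jobs in
# their scheduled execution order, and mark each pair whose impact
# ordering is inconsistent with that execution order. The marked pairs
# form the reinforcement-learning batch.

from itertools import combinations

def build_rl_batch(jobs):
    """jobs: list of dicts, ordered by their scheduled execution order."""
    batch = []
    for a, b in combinations(jobs, 2):       # a was scheduled before b
        if a["impact"] < b["impact"]:        # but b mattered more: inconsistent
            batch.append((a["name"], b["name"]))
    return batch

schedule = [
    {"name": "job1", "impact": 10},
    {"name": "job2", "impact": 30},
    {"name": "job3", "impact": 20},
]
print(build_rl_batch(schedule))  # [('job1', 'job2'), ('job1', 'job3')]
```

Each marked pair is a training signal telling the learner how to reorder the next policy; consistent pairs contribute nothing.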
-
Publication number: 20200012523Abstract: Disclosed is a method and system for using a credit-based approach to scheduling workload in a compute environment. The method includes determining server capacity and load of a compute environment and running a first benchmark job to calibrate a resource scheduler. The method includes partitioning, based on the calibration, the compute environment into multiple priority portions (e.g. first portion, second portion etc.) and optionally a reserve portion. Credits are assigned to allocate system capacity or resources per time quanta. The method includes running a benchmark job to calibrate a complexity of supported job types to be run in the compute environment. When a request for capacity is received, the workload is assigned one or more credits and credits are withdrawn from the submitting entity's account for access to the compute environment at a scheduled time.Type: ApplicationFiled: September 19, 2019Publication date: January 9, 2020Inventors: Rajesh Kumar, Amit Mittal, Anjali Gugle, Hetal N. Badheka, Vasantha K. Tammana, Priyatam Prasad Veyyakula
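The credit-withdrawal step can be sketched as a toy admission controller; the class, numbers, and account model are illustrative, not from the application:

```python
class CreditScheduler:
    """Toy credit-based admission control for a compute environment."""

    def __init__(self, capacity_per_quantum):
        self.capacity = capacity_per_quantum   # system credits this quantum
        self.accounts = {}                     # submitting entity -> credits

    def open_account(self, entity, credits):
        self.accounts[entity] = credits

    def request(self, entity, job_cost):
        """Admit the job only if the entity can pay and capacity remains;
        on admission, withdraw the credits from the entity's account."""
        if self.accounts.get(entity, 0) >= job_cost and self.capacity >= job_cost:
            self.accounts[entity] -= job_cost
            self.capacity -= job_cost
            return True
        return False
```

The benchmark-driven calibration of job complexity described in the abstract would determine `job_cost` in a full implementation.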
-
Publication number: 20200012524Abstract: A processor and an instruction scheduling method for X-channel interleaved multi-threading, where X is an integer greater than one. The processor includes a decoding unit and a processing unit. The decoding unit is configured to obtain one instruction from each of Z predefined threads in each cyclic period, decode the Z obtained instructions to obtain Z decoding results, and send the Z decoding results to the processing unit, where each cyclic period includes X sending periods, one decoding result is sent to the processing unit in each sending period, a decoding result of the Z decoding results may be repeatedly sent by the decoding unit in a plurality of sending periods, wherein 1≤Z<X or Z=X, and wherein Z is an integer. The processing unit is configured to execute the instruction based on the decoding result.Type: ApplicationFiled: September 20, 2019Publication date: January 9, 2020Inventors: Shorin Kyo, Ye Gao, Shinri Inamori
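One cyclic period of this interleaved schedule can be simulated in a few lines; the round-robin repeat policy used to fill the remaining slots when Z is less than X is an assumption, as the abstract leaves the repeat policy open:

```python
def one_cyclic_period(threads, X):
    """Simulate one cyclic period of X sending slots over Z threads
    (1 <= Z <= X). One instruction is fetched and decoded per thread;
    when Z < X, earlier decoding results are re-sent round-robin
    (an illustrative policy) to fill the remaining sending periods."""
    Z = len(threads)
    assert 1 <= Z <= X
    decoded = [f"decoded({t})" for t in threads]   # Z decoding results
    return [decoded[slot % Z] for slot in range(X)]

slots = one_cyclic_period(["T0", "T1"], X=4)
```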
-
Publication number: 20200012525Abstract: A virtual machine memory overcommit system includes an initialization memory, a device memory, at least one processor in communication with the initialization memory and the device memory, a guest operating system (OS) including a device driver, and a hypervisor executing on the at least one processor. The hypervisor is configured to expose the initialization memory to the guest OS of a virtual machine, initialize the guest OS, and expose the device memory to the guest OS. The device driver is configured to query an amount of memory available from the device memory and report the amount of memory available to the guest OS.Type: ApplicationFiled: September 16, 2019Publication date: January 9, 2020Inventors: David Hildenbrand, Michael Tsirkin
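The two-stage memory exposure described (initialization memory first, device memory reported later by a driver) can be sketched as follows; all class names and sizes are illustrative:

```python
class DeviceMemory:
    """Stand-in for the device memory exposed by the hypervisor (MiB)."""
    def __init__(self, size_mib):
        self.size_mib = size_mib

class GuestOS:
    """Toy guest: boots with initialization memory only; its device
    driver later queries the device and reports the extra memory."""
    def __init__(self, init_mib):
        self.usable_mib = init_mib

    def driver_query(self, device):
        available = device.size_mib        # query amount available
        self.usable_mib += available       # report it to the guest OS
        return available

def boot_with_overcommit(init_mib, device):
    guest = GuestOS(init_mib)              # hypervisor exposes init memory
    guest.driver_query(device)             # then exposes device memory
    return guest

guest = boot_with_overcommit(512, DeviceMemory(1024))
```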
-
Publication number: 20200012526Abstract: A system receives a request to deploy a virtual machine on one of a plurality of nodes running a plurality of virtual machines in a cloud computing system. The system receives a predicted lifetime for the virtual machine and an indication of an average lifetime of virtual machines running on each of the plurality of nodes. The system allocates the virtual machine to a first node when a first policy of collocating virtual machines with similar lifetimes on a node is adopted and the predicted lifetime is within a predetermined range of the average lifetime of virtual machines running on the first node. The system allocates the virtual machine to a second node when a second policy of collocating virtual machines with dissimilar lifetimes on a node is adopted and the predicted lifetime is not within the predetermined range of the average lifetime of virtual machines running on the second node.Type: ApplicationFiled: September 19, 2019Publication date: January 9, 2020Inventors: Ricardo BIANCHINI, Eli CORTEZ, Marcus Felipe FONTOURA, Anand BONDE
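The two collocation policies can be sketched as one placement function; the 25% tolerance band and the node lifetimes are illustrative values, not taken from the application:

```python
def place_vm(predicted, nodes, policy, band=0.25):
    """Choose a node under a lifetime-collocation policy. `nodes` maps
    node name -> average lifetime of the VMs already running there."""
    for name, avg_lifetime in nodes.items():
        similar = abs(predicted - avg_lifetime) <= band * avg_lifetime
        if policy == "similar" and similar:
            return name
        if policy == "dissimilar" and not similar:
            return name
    return None

nodes = {"node-1": 100.0, "node-2": 10.0}
```

Under the "similar" policy a VM predicted to live 95 time units lands on node-1; under the "dissimilar" policy it lands on node-2.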
-
Publication number: 20200012527Abstract: The current document is directed to methods and systems that establish secure, verifiable chains of control for computational entities within a distributed computing system. When a computational entity is first instantiated or introduced into the distributed computing system, public and private identities are generated for the computational entity and secure control is established over the computational entity by an initial controlling entity. Subsequently, control of the computational entity may be transferred from the initial controlling entity to a different controlling entity using a secure, three-party transaction that records the transfer of control in a distributed public ledger. As control of the computational entity is subsequently transferred to different controlling entities by secure three-party transactions, a chain of control from one controlling entity to another is established and recorded in the distributed public ledger.Type: ApplicationFiled: July 5, 2018Publication date: January 9, 2020Applicant: VMware, Inc.Inventor: Shawn Rud Hartsock
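The recorded chain of control can be sketched as a hash-chained list standing in for the distributed public ledger; the three-party handshake itself is elided and all names are illustrative:

```python
import hashlib
import json

def transfer_control(ledger, entity, old_ctrl, new_ctrl):
    """Append a control-transfer record linked to its predecessor."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    record = {"entity": entity, "from": old_ctrl, "to": new_ctrl, "prev": prev}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    ledger.append(record)

def verify_chain(ledger):
    """A tampered record or broken link anywhere is detectable."""
    prev = "0" * 64
    for rec in ledger:
        if rec["prev"] != prev:
            return False
        body = {k: rec[k] for k in ("entity", "from", "to", "prev")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True
```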
-
Publication number: 20200012528Abstract: Systems and methods for coordinating components can include: determining, by a first application executing on a client device, a need to perform a sharable functional task; identifying a first software component installed on the client device and capable of performing a first variation of the sharable functional task; identifying a second software component installed on the client device and capable of performing a second variation of the sharable functional task, wherein the second variation of the sharable functional task is functionally overlapping with and not identical to the first variation; identifying a set of characteristics of both the first software component and the second software component; selecting the second software component for performing the sharable functional task based on the set of characteristics, where the set of characteristics includes at least a version number; and delegating performance of the sharable functional task to the second software component.Type: ApplicationFiled: September 16, 2019Publication date: January 9, 2020Applicant: LOOKOUT, INC.Inventors: Matthew John Joseph LaMantia, Brian James Buck, Stephen J. Edwards, William Neil Robinson
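The selection step can be sketched by choosing the highest-versioned candidate, which is one simple reading of "selecting based on a set of characteristics that includes at least a version number"; component names and the registry shape are illustrative:

```python
# Hypothetical registry of installed components able to perform
# overlapping variations of the same sharable task ("scan_url" here).
components = [
    {"name": "scanner-builtin", "task": "scan_url", "version": (1, 2, 0)},
    {"name": "scanner-pro",     "task": "scan_url", "version": (2, 0, 1)},
]

def select_component(task, registry):
    """Delegate the task to the highest-versioned capable component."""
    candidates = [c for c in registry if c["task"] == task]
    return max(candidates, key=lambda c: c["version"]) if candidates else None

chosen = select_component("scan_url", components)
```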
-
Publication number: 20200012529Abstract: An embedded device, a method for executing a virus scan program, and a non-transitory storage medium storing instructions for executing the virus scan program are provided. The embedded device on which the virus scan program for detecting computer virus operates starts a virus scan, displays a first display component for receiving an instruction to pause the virus scan, receives the instruction to pause the virus scan, and pauses the virus scan when the instruction to pause the virus scan is received.Type: ApplicationFiled: June 17, 2019Publication date: January 9, 2020Applicant: Ricoh Company, Ltd.Inventor: Junya JIMBO
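A pausable scan loop of the kind described can be sketched with a `threading.Event`; the file names are illustrative and the display component that issues the pause instruction is out of scope here:

```python
import threading

class VirusScan:
    """Toy scan that can be paused between files and resumed."""

    def __init__(self, files):
        self.files = files
        self.scanned = []
        self._running = threading.Event()
        self._running.set()                 # unpaused by default

    def pause(self):                        # invoked by the pause control
        self._running.clear()

    def resume(self):
        self._running.set()

    def run(self):
        for f in self.files:
            self._running.wait()            # blocks here while paused
            self.scanned.append(f)

scan = VirusScan(["boot.img", "app.bin"])
scan.run()
```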
-
Publication number: 20200012530Abstract: Techniques for scalable virtualization of an Input/Output (I/O) device are described. An electronic device composes a virtual device comprising one or more assignable interface (AI) instances of a plurality of AI instances of a hosting function exposed by the I/O device. The electronic device emulates device resources of the I/O device via the virtual device. The electronic device intercepts a request from the guest pertaining to the virtual device, and determines whether the request from the guest is a fast-path operation to be passed directly to one of the one or more AI instances of the I/O device or a slow-path operation that is to be at least partially serviced via software executed by the electronic device. For a slow-path operation, the electronic device services the request at least partially via the software executed by the electronic device.Type: ApplicationFiled: March 12, 2019Publication date: January 9, 2020Inventors: Utkarsh Y. KAKAIYA, Rajesh SANKARAN, Sanjay KUMAR, Kun TIAN, Philip LANTZ
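The fast-path/slow-path decision can be sketched as a small dispatcher; the operation names and the assumption that data-path operations are fast-path while control operations are emulated are illustrative:

```python
# Hypothetical split of guest requests: data-path operations go straight
# to an assignable interface (AI) instance; everything else is emulated.
FAST_PATH_OPS = {"tx_submit", "rx_poll"}

def handle_guest_request(op, ai_instance, emulate_in_software):
    """Intercept a request and route it fast-path or slow-path."""
    if op in FAST_PATH_OPS:
        return ai_instance(op)          # passed directly to the device
    return emulate_in_software(op)      # serviced (at least partly) in software

hw = lambda op: f"hw:{op}"
sw = lambda op: f"sw:{op}"
```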
-
Publication number: 20200012531Abstract: A mechanism is described for facilitating hybrid processing of workloads for graphics processors in computing devices. A method of embodiments, as described herein, includes detecting workloads for a graphics processor, and checking the status of a shared function unit (SFU) associated with the graphics processor to determine distribution of the workloads between the SFU and an execution unit (EU) associated with the graphics processor.Type: ApplicationFiled: April 1, 2017Publication date: January 9, 2020Applicant: Intel CorporationInventors: YUANYUAN LI, YONG ZHIANG, YUTING YANG, JAIJIE YAO, GUIZI LI, LIXIANG LIN
-
Publication number: 20200012532Abstract: For workflow test, a processor executes a workflow instance. The workflow instance includes a first workflow description of step names for a plurality of jobs. The processor further receives a modification to the first workflow description as the workflow instance executes. In addition, the processor synchronizes the modified workflow description to the executing workflow instance as the workflow instance executes.Type: ApplicationFiled: July 6, 2018Publication date: January 9, 2020Inventors: Qingda Wang, Kinson Chik, Jia Xin Gao, Qiang Jia, Dang Peng Liu, Yi Min Zhang
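Synchronizing a modified workflow description into a running instance can be sketched by letting the instance read step names from a live, shared list; the step names are illustrative:

```python
class WorkflowInstance:
    """Executes step names from a live description list; modifications
    made while the instance runs are picked up for steps that have not
    executed yet."""

    def __init__(self, description):
        self.description = description      # shared, mutable step-name list
        self.executed = []
        self._pos = 0

    def step(self):
        if self._pos < len(self.description):
            self.executed.append(self.description[self._pos])
            self._pos += 1
            return True
        return False

description = ["build", "test"]
wf = WorkflowInstance(description)
wf.step()                                   # runs "build"
description.append("deploy")                # modified while executing
while wf.step():
    pass
```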
-
Publication number: 20200012533Abstract: A gateway in a computing system for interfacing a host with a subsystem for acting as a work accelerator to the host, the gateway having: an accelerator interface for enabling the transfer of batches of data to the subsystem at pre-compiled data exchange synchronisation points attained by the subsystem; a data connection interface for receiving data to be processed from storage; and a gateway interface for connection to a third gateway. The gateway is configured to store a number of credits indicating at least one of: the availability of data for transfer to the subsystem at a pre-compiled data exchange synchronisation point; and the availability of storage for receiving data from the subsystem at a pre-compiled data exchange synchronisation point. The gateway uses these credits to control whether or not a synchronisation barrier is passed, by transmitting synchronisation requests upstream to the third gateway or simply acknowledging the requests received.Type: ApplicationFiled: December 28, 2018Publication date: January 9, 2020Applicant: Graphcore LimitedInventors: Ola Tørudbakken, Daniel John Pelham Wilkinson, Brian Manula, Harald Høeg
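The credit-gated barrier can be sketched in miniature; the method names and return values are illustrative stand-ins for the gateway's behaviour:

```python
class Gateway:
    """Toy credit gate: a sync request only passes the barrier (is
    forwarded to the upstream gateway) when a credit shows data is
    available; otherwise it stalls."""

    def __init__(self, credits=0):
        self.credits = credits

    def data_arrived(self):             # storage delivered a batch
        self.credits += 1

    def sync_request(self):
        if self.credits > 0:
            self.credits -= 1
            return "forwarded-upstream"  # barrier passed
        return "stalled"                 # barrier held

gw = Gateway(credits=1)
```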
-
Publication number: 20200012534Abstract: A gateway for interfacing a host with a subsystem for acting as a work accelerator to the host. The gateway enables the transfer of batches of data to the subsystem at precompiled data exchange synchronisation points. The gateway comprises a streaming engine having a data mover engine and a memory management engine, the data mover engine and memory management engine being configured to execute instructions in coordination from work descriptors. The memory management engine is configured to execute instructions from the work descriptor to transfer data between external storage and the local memory associated with the gateway. The data mover engine is configured to execute instructions from the work descriptor to transfer data between the local memory associated with the gateway and the subsystem.Type: ApplicationFiled: December 28, 2018Publication date: January 9, 2020Applicant: Graphcore LimitedInventors: Ola Tørudbakken, Daniel John Pelham Wilkinson, Richard Luke Southwell Osborne, Brian Manula, Harald Høeg
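The coordinated two-engine execution of a work descriptor can be sketched as follows; the descriptor format, engine tags, and data are all illustrative:

```python
# A work descriptor lists coordinated instructions for the two engines:
# "mm" entries make the memory-management engine stage data from external
# storage into gateway-local memory; "mover" entries make the data-mover
# engine push staged data on to the accelerator subsystem.
def run_descriptor(descriptor, external, local, subsystem):
    for engine, key in descriptor:
        if engine == "mm":
            local[key] = external[key]
        elif engine == "mover":
            subsystem[key] = local[key]

external = {"batch0": [1, 2, 3]}
local, subsystem = {}, {}
run_descriptor([("mm", "batch0"), ("mover", "batch0")],
               external, local, subsystem)
```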
-
Publication number: 20200012535Abstract: Embodiments disclosed herein can allow a user of a mobile device in a network environment to switch between using public network services and using private network services. To access private network services, a virtualization cloud client application running on the mobile device connects to a virtualized device hosted in the virtualization cloud and brokers access to private network services as well as local device functions. Embodiments disclosed herein provide a system, method, and computer program product for capturing touch events for a virtual mobile device platform and relaying the captured touch events to the virtual mobile device platform while ensuring that movements and speed of touch events are accurately represented at the virtual mobile device platform.Type: ApplicationFiled: September 20, 2019Publication date: January 9, 2020Applicant: INTELLIGENT WAVES LLCInventors: Brian J. Vetter, Rajesh P. Gopi, Galib Arrieta
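Preserving the movement and speed of relayed touch events can be sketched by rebasing captured timestamps to relative offsets; the event fields (`t` in milliseconds, `x`, `y`) are illustrative:

```python
def rebase_touch_events(events):
    """Rebase captured touch timestamps so the virtual device can
    replay movements with the original relative timing."""
    if not events:
        return []
    t0 = events[0]["t"]
    return [{"dt": e["t"] - t0, "x": e["x"], "y": e["y"]} for e in events]

captured = [{"t": 1000, "x": 0, "y": 0}, {"t": 1016, "x": 6, "y": 2}]
relayed = rebase_touch_events(captured)
```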
-
Publication number: 20200012536Abstract: A system comprising: a first subsystem comprising one or more first processors, and a second subsystem comprising one or more second processors. The second subsystem is configured to process code over a series of steps delineated by barrier synchronizations, and in a current step, to send a descriptor to the first subsystem specifying a value of each of one or more parameters of each of one or more interactions that the second subsystem is programmed to perform with the first subsystem via an inter-processor interconnect in a subsequent step. The first subsystem is configured to execute a portion of code to perform one or more preparatory operations, based on the specified values of at least one of the one or more parameters of each interaction as specified by the descriptor, to prepare for said one or more interactions prior to the barrier synchronization leading into the subsequent phase.Type: ApplicationFiled: February 15, 2019Publication date: January 9, 2020Applicant: Graphcore LimitedInventors: David Lacey, Daniel John Pelham Wilkinson, Richard Luke Southwell Osborne, Matthew David Fyles
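The descriptor-driven preparatory work can be sketched in miniature; pre-allocating receive buffers is one plausible preparatory operation, and all field names are illustrative:

```python
class FirstSubsystem:
    """Receives a descriptor during the current step and performs a
    preparatory operation (here: pre-allocating receive buffers)
    before the barrier synchronization into the next step."""

    def __init__(self):
        self.buffers = {}

    def receive_descriptor(self, descriptor):
        # Each interaction specifies parameter values, e.g. id and size.
        for interaction in descriptor:
            self.buffers[interaction["id"]] = bytearray(interaction["size"])

    def ready_for(self, interaction_id, size):
        buf = self.buffers.get(interaction_id)
        return buf is not None and len(buf) >= size

fs = FirstSubsystem()
fs.receive_descriptor([{"id": "xfer-0", "size": 64}])
```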