Patent Applications Published on October 25, 2018
-
Publication number: 20180307472
Abstract: To simultaneously deploy a software package on hosts such as cloud devices and on-premise devices, a request is received at a central server to deploy a software package on hosts in a landscape. A topology file of the hosts in the landscape is generated. The hosts in the landscape include one or more hosts located in the cloud environment and one or more hosts located in the on-premise environment. A server-side security certificate corresponding to the central server and a host-side security certificate corresponding to each of the hosts in the landscape are generated. The server-side security certificate from the central server is sent to each of the hosts to establish trusted communication between the central server and the hosts. Accordingly, the software package is deployed simultaneously on hosts such as cloud devices and on-premise devices.
Type: Application
Filed: April 20, 2017
Publication date: October 25, 2018
Inventors: DHRUBAJYOTI PAUL, RAHUL LODHE, GANESH MOORTHY DURAISAMY
-
Publication number: 20180307473
Abstract: Some embodiments of the present invention monitor and track usage patterns of various applications (apps) distributed among multiple mobile devices owned by a user. The data gathered during monitoring is stored in a mobile app usage repository. If the user considers installing a new app, a cognitive app analysis engine compares features and functionality of the new app against the usage patterns in the repository, to formulate recommendations as to which mobile device(s) the app should be installed on. The analysis engine provides its recommendations to the user, and may additionally perform automated installation of the app on the recommended device(s).
Type: Application
Filed: April 25, 2017
Publication date: October 25, 2018
Inventors: Martin G. Keen, Brian M. O'Connell, James E. Bostick, John M. Ganci, JR.
-
Publication number: 20180307474
Abstract: Examples disclosed herein relate to applying a firmware update to a stacked network device. In an example, ports on a stacked network device that are used in a data path between an uplink device and a downlink device via the stacked network device may be identified. The stacked network device may be in a stacked configuration with a second network device. Each of the identified ports on the network device may be in respective link aggregation groups with respective ports on the second network device. An LACP collecting flag for the ports on the stacked network device may be disabled. The egress traffic on the ports of the stacked network device may be drained. The data path may be disabled. A firmware update may be applied to the stacked network device.
Type: Application
Filed: June 12, 2017
Publication date: October 25, 2018
Inventors: Sandesh V. Madhyastha, Vijay Chakravarthy Gilakala, Harish B. Kamath, Ravindhar Uppada, Muralidhar Hadamagere Venkataramu
-
Publication number: 20180307475
Abstract: An information processing device, for an information processing system including a plurality of information processing devices, executes a process causing a processor of the information processing device to: classify the plurality of information processing devices into a plurality of device groups each including a given number of information processing devices; select information processing devices one by one from each of the plurality of device groups based on a given selection condition; assign the selected information processing devices to a first update group; and transmit, to first information processing devices each of which is any of the plurality of information processing devices and belongs to the first update group, an instruction to update software applied to the first information processing devices.
Type: Application
Filed: April 12, 2018
Publication date: October 25, 2018
Applicant: FUJITSU LIMITED
Inventor: Keiya Ishikawa
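The group-then-select scheme in this abstract can be illustrated with a minimal sketch: devices are partitioned into fixed-size groups, and each update wave takes one device from each group, so no group ever has more than one member updating at a time. The function name and wave-based structure are illustrative assumptions, not from the application itself.

```python
def make_update_groups(devices, group_size):
    """Split devices into fixed-size groups, then build update waves by
    selecting one device from each group per wave, so that at most one
    device per group is updating at any time."""
    groups = [devices[i:i + group_size] for i in range(0, len(devices), group_size)]
    waves = []
    wave_count = max(len(g) for g in groups)
    for round_idx in range(wave_count):
        # Take the round_idx-th member of every group that still has one.
        waves.append([g[round_idx] for g in groups if round_idx < len(g)])
    return waves
```

For six devices in groups of two, the first wave would update one device from each group while the other group members keep serving.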
-
Publication number: 20180307476
Abstract: An update processing method executed by a processor included in an update processing apparatus, the update processing method includes storing, in a memory, update information that is updated in accordance with update processing executed by using information called from another computer in accordance with accepted request information, the update information regarding a frequency of the call, and response information that is used for response to the request information; when the request information corresponding to the update processing is accepted, determining in accordance with the update information whether to transmit the response information stored in the memory as a response to the request information to a transmission source of the request information; and transmitting the response information selected in accordance with a result of the determination to the transmission source.
Type: Application
Filed: April 18, 2018
Publication date: October 25, 2018
Applicant: FUJITSU LIMITED
Inventor: Shinya Kitajima
-
Publication number: 20180307477
Abstract: In an example, a system is provided and the system includes a motor vehicle component client, a server located in the cloud, and an application to be installed on a personal portable device, such as a mobile phone or other portable, mobile electronic device. In some examples, the system enables efficient vehicle software updates to the Engine Control Unit (ECU), the head unit, or the like, or combinations thereof, and/or enables efficient wireless transmission of vehicle data analytics associated with diagnostic information, location information, or the like, or combinations thereof.
Type: Application
Filed: June 29, 2018
Publication date: October 25, 2018
Applicant: Airbiquity Inc.
Inventor: Leon Hong
-
Publication number: 20180307478
Abstract: A source control system is used for the distributed incremental updating of trays that include all of the dependencies needed for an application to execute within a computing environment. An application of a first version of a tray is executed on a server responsive to the first version of the tray being retrieved from a source control system. Tray management software of the first version of the tray receives a request to update the tray to a second version. The tray management software requests a changeset including file differences between the first and second versions of the tray from the source control system. Responsive to a determination by the tray management software that there are no pending requests preventing an update, the tray is updated from the first version to the second version by updating files in the tray according to the changeset.
Type: Application
Filed: April 19, 2017
Publication date: October 25, 2018
Inventor: Jeremy Norris
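The changeset step described above — updating a tray's files from one version to the next by applying only the differences — can be sketched as follows. The dict-based representation and the added/modified/removed keys are assumptions for illustration; the application does not specify a changeset format.

```python
def apply_changeset(tray_files, changeset):
    """Apply a changeset (file differences between two tray versions) to a
    dict mapping file paths to contents, producing the second version."""
    for path, content in changeset.get("added", {}).items():
        tray_files[path] = content          # new files introduced in v2
    for path, content in changeset.get("modified", {}).items():
        tray_files[path] = content          # files whose contents changed
    for path in changeset.get("removed", []):
        tray_files.pop(path, None)          # files dropped in v2
    return tray_files
```

Only the changed files travel from the source control system, which is what makes the update incremental rather than a full tray download.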
-
Publication number: 20180307479
Abstract: Systems and methods for performing a firmware update on an embedded system by patching. In operation, a computing device may receive an image of replacement firmware, which is a different version of current firmware stored in a non-volatile memory of the embedded system. The computing device then determines the portions of the replacement firmware that differ from the current firmware by comparing the image of the replacement firmware to the current firmware, and retrieves the differing portions from the image of the replacement firmware to form the fragments. In this case, the computing device may create a patch file from the data of the fragments, and send the patch file to the embedded system, such that the embedded system may use the patch file to update the current firmware. The size of the patch file would be relatively smaller than the firmware image, thereby reducing update time and resource consumption.
Type: Application
Filed: April 24, 2017
Publication date: October 25, 2018
Inventors: Yugender P. Subramanian, Balasubramanian Chandrasekaran, David Yoon, Manikandan Ganesan Malliga
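The compare-and-fragment idea in this abstract can be shown with a minimal sketch that diffs two same-size firmware images chunk by chunk, keeps only the differing fragments as a patch, and applies the patch on the other side. The fixed chunk size and same-size-image assumption are simplifications; a real implementation would handle resizing and alignment.

```python
def build_patch(current, replacement, chunk=4):
    """Compare two firmware images chunk by chunk and keep only the
    (offset, fragment) pairs that differ, so the patch is smaller than
    shipping the whole replacement image."""
    patch = []
    for off in range(0, len(replacement), chunk):
        frag = replacement[off:off + chunk]
        if current[off:off + chunk] != frag:
            patch.append((off, frag))
    return patch

def apply_patch(current, patch):
    """Overlay each differing fragment onto the current image."""
    image = bytearray(current)
    for off, frag in patch:
        image[off:off + len(frag)] = frag
    return bytes(image)
```

If only one chunk of the image changed between versions, the patch carries just that chunk plus its offset.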
-
Publication number: 20180307480
Abstract: Methods and systems for updating a file using heuristics. One system includes an electronic processor configured to identify a code file stored on a storage device and determine a signature of the code file. The electronic processor is also configured to compare the signature of the code file to each of a plurality of signatures to determine a degree of similarity between the signature of the code file and each of the plurality of signatures, wherein each of the plurality of signatures is associated with a code update, and, in response to the degree of similarity between the signature of the code file and one of the plurality of signatures satisfying a predetermined threshold, apply the code update associated with the one of the plurality of signatures to the code file.
Type: Application
Filed: April 25, 2017
Publication date: October 25, 2018
Inventors: Darren Doyle, Terry Farrell, Thomas Doyle
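The similarity-threshold heuristic above can be sketched by comparing the file's signature against each known signature and returning the associated update only when the best match clears the threshold. Using `difflib.SequenceMatcher` as the similarity measure is an assumption for illustration; the application does not name a specific metric.

```python
import difflib

def pick_code_update(file_signature, updates, threshold=0.8):
    """Return the code update whose associated signature is most similar
    to the file's signature, provided the similarity satisfies the
    predetermined threshold; otherwise return None."""
    best, best_score = None, 0.0
    for signature, update in updates.items():
        score = difflib.SequenceMatcher(None, file_signature, signature).ratio()
        if score >= threshold and score > best_score:
            best, best_score = update, score
    return best
```

A file whose signature is close to a known one gets that signature's update; an unrelated file matches nothing and is left alone.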
-
Publication number: 20180307481
Abstract: A method, system, and computer readable medium are provided for software defect reduction. To perform the software defect reduction, implementation parameters for a software application in a development phase are collected, and an Extract, Transform and Load (ETL) is performed. The ETL analyzes data from one or more databases based on the implementation parameters to obtain relevant implementation data. The one or more databases store implementation data related to previously developed software applications, and the relevant implementation data is data stored in the one or more databases that is relevant to the implementation parameters. The relevant implementation data is then summarized to obtain predicted data relevant to the software application in the development phase.
Type: Application
Filed: April 21, 2017
Publication date: October 25, 2018
Inventors: Sriram Ganesan, Kraig Dandan, Joseph P. Foley, Songdong Tian, John R. Sullivan, Abrar Desai
-
Publication number: 20180307482
Abstract: An interrelated set of tools and methods are disclosed for recording the identity of software components responsible for creating files, recording the identity of software components that access software files, reasoning about the dependency relationships between software components, identifying and reporting undesirable dependencies between them, and reporting other useful information about a large-scale software architecture by instrumenting a software build process or test process.
Type: Application
Filed: June 25, 2018
Publication date: October 25, 2018
Inventor: Daniel J. Sturtevant
-
Publication number: 20180307483
Abstract: Examples described herein include systems and methods which include an apparatus comprising a plurality of configurable logic units and a plurality of switches, with each switch being coupled to at least one configurable logic unit of the plurality of configurable logic units. The apparatus further includes an instruction register configured to provide respective switch instructions of a plurality of switch instructions to each switch based on a computation to be implemented among the plurality of configurable logic units. For example, the switch instructions may include allocating the plurality of configurable logic units to perform the computation and activating an input of the switch and an output of the switch to couple at least a first configurable logic unit and a second configurable logic unit. In various embodiments, configurable logic units can include arithmetic logic units (ALUs), bit manipulation units (BMUs), and multiplier-accumulator units (MACs).
Type: Application
Filed: April 21, 2017
Publication date: October 25, 2018
Applicant: MICRON TECHNOLOGY, INC.
Inventors: FA-LONG LUO, TAMARA SCHMITZ, JEREMY CHRITZ, JAIME CUMMINS
-
Publication number: 20180307484
Abstract: A method and system to provide user-level multithreading are disclosed. The method according to the present techniques comprises receiving programming instructions to execute one or more shared resource threads (shreds) via an instruction set architecture (ISA). One or more instruction pointers are configured via the ISA; and the one or more shreds are executed simultaneously with a microprocessor, wherein the microprocessor includes multiple instruction sequencers.
Type: Application
Filed: February 20, 2018
Publication date: October 25, 2018
Inventors: Ed Grochowski, Hong Wang, John P. Shen, Perry H. Wang, Jamison D. Collins, James Held, Partha Kundu, Raya Leviathan, Tin-Fook Ngai
-
Publication number: 20180307485
Abstract: In an example, an apparatus comprises a plurality of execution units, and logic, at least partially including hardware logic, to assemble a general register file (GRF) message and hold the GRF message in storage in a data port until all data for the GRF message is received. Other embodiments are also disclosed and claimed.
Type: Application
Filed: April 21, 2017
Publication date: October 25, 2018
Applicant: Intel Corporation
Inventors: Abhishek R. Appu, Altug Koker, Joydeep Ray, Ramkumar Ravikumar, Kiran C. Veernapu, Prasoonkumar Surti, Vasanth Ranganathan
-
Publication number: 20180307486
Abstract: An apparatus has processing circuitry comprising multiplier circuitry for performing multiplication on a pair of input operands. In response to a shift instruction specifying at least one shift amount and a source operand comprising at least one data element, the source operand and a shift operand determined in dependence on the shift amount are provided as input operands to the multiplier circuitry, and the multiplier circuitry is controlled to perform at least one multiplication which is equivalent to shifting a corresponding data element of the source operand by a number of bits specified by a corresponding shift amount to generate a shift result value.
Type: Application
Filed: April 24, 2017
Publication date: October 25, 2018
Inventors: François Christopher Jacques BOTMAN, Thomas Christopher GROCUTT
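The equivalence this abstract relies on is that a left shift by s bits equals multiplication by 2^s, so a shift instruction can be serviced by a multiplier array once the shift amount is converted into the multiplicand 2^s. A minimal sketch (result width and function name are illustrative assumptions):

```python
def shift_left_via_multiply(value, shift_amount, width=32):
    """Perform a left shift using only multiplication: the shift operand
    2**shift_amount is derived from the shift amount and fed to the
    multiplier, and the product is truncated to the register width."""
    shift_operand = 1 << shift_amount            # multiplicand 2**s
    return (value * shift_operand) & ((1 << width) - 1)
```

The truncation mirrors how bits shifted past the top of a fixed-width register are discarded in hardware.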
-
Publication number: 20180307487
Abstract: An apparatus to facilitate control flow in a graphics processing system is disclosed. The apparatus includes a plurality of execution units to execute single instruction, multiple data (SIMD) instructions, and flow control logic to detect a diverging control flow in a plurality of SIMD channels and reduce the execution of the control flow to a subset of the SIMD channels.
Type: Application
Filed: April 21, 2017
Publication date: October 25, 2018
Inventors: Subramaniam M. Maiyuran, Guei-Yuan Lueh, Supratim Pal, Gang Chen, Ananda V. Kommaraju, Joy Chandra, Altug Koker, Prasoonkumar Surti, David Puffer, Hong Bin Liao, Joydeep Ray, Abhishek R. Appu, Ankur N. Shah, Travis T. Schluessler, Jonathan Kennedy, Devan Burke
-
Publication number: 20180307488
Abstract: An apparatus has processing circuitry comprising an L×M multiplier array. An instruction decoder associated with the processing circuitry supports a multiply-and-accumulate-product (MAP) instruction for generating at least one result element corresponding to a sum of respective E×F products of E-bit and F-bit portions of J-bit and K-bit operands respectively, where 1&lt;E&lt;J≤L and 1&lt;F&lt;K≤M. In response to the MAP instruction, the instruction decoder controls the processing circuitry to rearrange F-bit portions of the second K-bit operand to form a transformed K-bit operand, and to control the L×M multiplier array in dependence on the first J-bit operand and the transformed K-bit operand to add the respective E×F products using a subset of the adders used for accumulating partial products for a conventional multiplication.
Type: Application
Filed: April 24, 2017
Publication date: October 25, 2018
Inventors: Neil BURGESS, David Raymond LUTZ, Javier Diaz BRUGUERA
-
Publication number: 20180307489
Abstract: An apparatus and method are provided for performing multiply-and-accumulate-products (MAP) operations. The apparatus has processing circuitry for performing data processing, the processing circuitry including an adder array having a plurality of adders for accumulating partial products produced from input operands. An instruction decoder is provided that is responsive to a MAP instruction specifying a first J-bit operand and a second K-bit operand, to control the processing circuitry to enable performance of a number of MAP operations, where the number is dependent on a parameter. For each performed MAP operation, the processing circuitry is arranged to generate a corresponding result element representing a sum of respective E×F products of E-bit portions within an X-bit segment of the first operand with F-bit portions within a Y-bit segment of the second operand, where E&lt;X≤J and F&lt;Y≤K.
Type: Application
Filed: January 2, 2018
Publication date: October 25, 2018
Inventors: Michael Alexander KENNEDY, Neil BURGESS
-
Publication number: 20180307490
Abstract: One embodiment of the present invention sets forth a graphics processing system. The graphics processing system includes a screen-space pipeline and a tiling unit. The screen-space pipeline is configured to perform visibility testing and fragment shading. The tiling unit is configured to determine that a first set of primitives overlaps a first cache tile. The tiling unit is also configured to first transmit the first set of primitives to the screen-space pipeline with a command configured to cause the screen-space pipeline to process the first set of primitives in a z-only mode, and then transmit the first set of primitives to the screen-space pipeline with a command configured to cause the screen-space pipeline to process the first set of primitives in a normal mode. In the z-only mode, at least some fragment shading operations are disabled in the screen-space pipeline. In the normal mode, fragment shading operations are enabled.
Type: Application
Filed: April 23, 2018
Publication date: October 25, 2018
Inventors: Ziyad S. HAKURA, Jerome F. DULUK, JR.
-
Publication number: 20180307491
Abstract: A processing pipeline has at least one front end stage for issuing micro-operations for execution in response to program instructions, and an execute stage for performing data processing in response to the micro-operations. At least one predicate register stores at least one predicate value. In response to a predicated vector instruction for triggering execution of two or more lanes of processing, the at least one front end stage issues at least one micro-operation to control the execute stage to mask an effect of a lane of processing indicated as disabled by a target predicate value. One of the front end stages may perform an early predicate lookup of the target predicate value to vary, in dependence on the early predicate lookup, which micro-operations are issued to the execute stage for a predicated vector instruction.
Type: Application
Filed: April 21, 2017
Publication date: October 25, 2018
Inventors: Alejandro Rico CARRO, Lee Evan EISEN
-
Publication number: 20180307492
Abstract: A processor that reduces pipeline stall, including a front end, a load queue, a scheduler, and a load buffer. The front end issues instructions while a first full indication is not provided, but otherwise stalls issuing instructions. The load queue stores issued load instruction entries including information needed to execute the issued load instruction. The load queue provides a second full indication when full. The scheduler dispatches issued instructions for execution except for stalled load instructions, such as those that have not yet been stored in the load queue. The load buffer transfers issued load instructions to the load queue when the load queue is not full. When the load queue is full, the load buffer temporarily buffers issued load instructions until the load queue is no longer full. The load buffer allows more accurate load queue full determination, and allows processing to continue even when the load queue is full.
Type: Application
Filed: November 13, 2017
Publication date: October 25, 2018
Inventor: Qianli DI
-
Publication number: 20180307493
Abstract: A system includes a processor configured to: initiate atomic execution of a plurality of instruction units in a thread, starting with a beginning instruction unit in the plurality of instruction units, wherein the plurality of instruction units in the thread are not programmatically specified to be executed atomically, and wherein the plurality of instruction units includes one or more memory modification instructions; in response to executing an instruction to commit inserted into the plurality of instruction units, incrementally commit a portion of the one or more memory modification instructions that have been atomically executed so far; and subsequent to incrementally committing the portion of the memory modification instructions that have been atomically executed so far, continue atomic execution of the plurality of instruction units. The system further includes a memory coupled to the processor, configured to provide the processor with the plurality of instruction units.
Type: Application
Filed: February 15, 2018
Publication date: October 25, 2018
Inventors: Gil Tene, Michael A. Wolf, Cliff N. Click, JR.
-
Publication number: 20180307494
Abstract: One embodiment provides for a compute apparatus to perform machine learning operations, the compute apparatus comprising instruction decode logic to decode a single instruction including multiple operands into a single decoded instruction, the multiple operands having differing precisions, and a general-purpose graphics compute unit including a first logic unit and a second logic unit, the general-purpose graphics compute unit to execute the single decoded instruction, wherein to execute the single decoded instruction includes to perform a first instruction operation on a first set of operands of the multiple operands at a first precision and to simultaneously perform a second instruction operation on a second set of operands of the multiple operands at a second precision.
Type: Application
Filed: April 24, 2017
Publication date: October 25, 2018
Applicant: Intel Corporation
Inventors: ELMOUSTAPHA OULD-AHMED-VALL, BARATH LAKSHMANAN, TATIANA SHPEISMAN, Joydeep Ray, Ping T. Tang, Michael Strickland, Xiaoming Chen, Anbang Yao, Ben J. Ashbaugh, Linda L. Hurd, Liwei Ma
-
Publication number: 20180307495
Abstract: One embodiment provides for a graphics processing unit (GPU) to accelerate machine learning operations, the GPU comprising an instruction cache to store a first instruction and a second instruction, the first instruction to cause the GPU to perform a floating-point operation, including a multi-dimensional floating-point operation, and the second instruction to cause the GPU to perform an integer operation; and a general-purpose graphics compute unit having a single instruction, multiple thread (SIMT) architecture, the general-purpose graphics compute unit to simultaneously execute the first instruction and the second instruction, wherein the integer operation corresponds to a memory address calculation.
Type: Application
Filed: November 21, 2017
Publication date: October 25, 2018
Applicant: Intel Corporation
Inventors: ELMOUSTAPHA OULD-AHMED-VALL, BARATH LAKSHMANAN, TATIANA SHPEISMAN, Joydeep Ray, Ping T. Tang, Michael Strickland, Xiaoming Chen, Anbang Yao, Ben J. Ashbaugh, Linda L. Hurd, Liwei Ma
-
Publication number: 20180307496
Abstract: The invention introduces a method for GC (garbage collection) POR (Power Off Recovery), performed by a processing unit, including at least the following steps: after a reboot subsequent to a power-off event, reading a GC recovery flag from a storage unit and determining whether the GC recovery flag indicates that a flash memory needs a POR; and, when the GC recovery flag indicates that the flash memory needs a POR, programming dummy data into a predefined number of empty pages next to the last programmed page of a destination block of the storage unit and performing an unfinished GC data-access operation.
Type: Application
Filed: January 6, 2018
Publication date: October 25, 2018
Inventor: Kuan-Yu Ke
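The recovery flow above — check a flag on reboot, pad the destination block with dummy data past the last programmed page, then resume the unfinished garbage-collection operation — can be sketched as follows. All field names (`gc_recovery_flag`, `last_programmed_page`, and so on) are hypothetical; the application does not define a data layout.

```python
def gc_power_off_recovery(storage):
    """After a reboot, consult the GC recovery flag; if set, program dummy
    data into the predefined number of empty pages following the last
    programmed page of the destination block, then resume the GC."""
    if not storage["gc_recovery_flag"]:
        return "no-recovery-needed"
    block = storage["destination_block"]          # page index -> contents
    start = storage["last_programmed_page"] + 1
    for page in range(start, start + storage["dummy_page_count"]):
        block[page] = "DUMMY"                     # pad partially written area
    return "resume-gc"
```

Padding the pages that may have been mid-program when power failed avoids reading back unreliable data once the interrupted GC operation restarts.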
-
Publication number: 20180307497
Abstract: An embedded system includes a processor, a volatile memory coupled to the processor, and a non-volatile memory storing a computer executable code. The computer executable code, when executed by the processor, is configured to: perform a boot process; display a splash screen during the boot process; and during the display of the splash screen: provide a plurality of interactive links between executing a plurality of applications and a plurality of inputs; receive one of the inputs from a user; and in response to receiving the one of the inputs, execute the corresponding application based on the interactive links after accomplishing the boot process.
Type: Application
Filed: April 19, 2017
Publication date: October 25, 2018
Inventors: Balasubramanian Chandrasekaran, David Yoon, Yugender P. Subramanian, Manikandan Ganesan Malliga
-
Publication number: 20180307498
Abstract: A driver loading method and a server, where when receiving a service request, the server determines a first global index and a first global virtual function (VF) identifier corresponding to a first function description of a designated function included in the service request, determines a virtual machine (VM) corresponding to the service request, associates the first global VF identifier with the VM, allocates a first local index on the VM to the designated function, creates a correspondence between the first local index and the first function description, and sends the correspondence to the VM. The VM loads, according to the correspondence, a driver of the designated function for a first VF corresponding to the first global VF identifier. According to the foregoing method, different drivers can be loaded for VFs that have different functions and that are virtualized by a Peripheral Component Interconnect Express (PCIe) device.
Type: Application
Filed: June 18, 2018
Publication date: October 25, 2018
Inventors: Dongtian Yang, Xinyu Hu, Yuming Xie, Yuping Zhao
-
Publication number: 20180307499
Abstract: A method for configuring an accelerator is applied to a server including at least one bare accelerator. The at least one bare accelerator is an accelerator that is generated after a basic logic function is loaded for accelerator hardware, and the basic logic function includes a communications interface function and a loading function. The method includes determining, by the server, a target service type and a target bare accelerator, determining, by the server, a service logic function corresponding to the target service type, and loading, by the server, the service logic function corresponding to the target service type for the target bare accelerator to generate a target service accelerator, where the target service accelerator is capable of providing an acceleration service for a service of the target service type.
Type: Application
Filed: July 2, 2018
Publication date: October 25, 2018
Inventors: Zhiping Chen, Chaofei Tang, Zhiming Yao
-
Publication number: 20180307500
Abstract: The invention introduces a method for uninstalling SSD (Solid-state Disk) cards, performed by a processing unit when loading and executing a driver, including at least the following steps: reading the value of the register of an SSD card on which there is an access attempt according to a data access command in the time period between reception of the data access command from an application and transmission of a data access request corresponding to the data access command to lower layers; and executing an uninstall procedure when detecting that the SSD card has been removed according to a result of the reading.
Type: Application
Filed: January 9, 2018
Publication date: October 25, 2018
Inventor: Ningzhong MIAO
-
Publication number: 20180307501
Abstract: A method, computer program product, and system includes a processor(s) connecting a first computer system to a boot swarm, initiating formation of a peer to peer network. The processor(s) receive a request from a second computer system, a request for a file. The processor(s) configure the second computer system, including implementing a client application hosted from a resource in the first computer system, to facilitate the second computer system joining the peer to peer network. The processor(s) determine immediate peer(s) in the peer to peer network available to provide the file to the second computer system. The processor(s) generate a magnet link that includes a listing of address(es) of the immediate peer(s), ranking address(es) from best source to worst source for downloading the file. The processor(s) provide the second computer system with the magnet link to utilize in downloading the file from a peer.
Type: Application
Filed: April 21, 2017
Publication date: October 25, 2018
Inventors: Alol A. CRASTA, Harshal S. PATIL, Kishorekumar G. PILLAI, Christoph RAISCH, Nishant RANJAN
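The magnet-link step above can be sketched by ranking available peers from best to worst source and embedding the ordered addresses in a magnet-style URI via the `x.pe` (peer address) parameter. Ranking by a single assumed bandwidth score is an illustrative simplification; the application does not specify the ranking criterion.

```python
def build_magnet_link(info_hash, peers):
    """Rank candidate peers from best to worst source (here, by an assumed
    bandwidth score) and list their addresses, in order, in a magnet-style
    link for the joining system to use when downloading the file."""
    ranked = sorted(peers, key=lambda p: p["bandwidth"], reverse=True)
    sources = "&".join("x.pe=" + p["addr"] for p in ranked)
    return "magnet:?xt=urn:btih:" + info_hash + "&" + sources
```

The joining system can then try peers in listed order, falling back from the best source to progressively worse ones.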
-
Publication number: 20180307502
Abstract: A method, computer program product, and system includes a processor(s) connecting a first computer system to a boot swarm, initiating formation of a peer to peer network. The processor(s) receive a request from a second computer system, a request for a file. The processor(s) configure the second computer system, including implementing a client application hosted from a resource in the first computer system, to facilitate the second computer system joining the peer to peer network. The processor(s) determine immediate peer(s) in the peer to peer network available to provide the file to the second computer system. The processor(s) generate a magnet link that includes a listing of address(es) of the immediate peer(s), ranking address(es) from best source to worst source for downloading the file. The processor(s) provide the second computer system with the magnet link to utilize in downloading the file from a peer.
Type: Application
Filed: November 16, 2017
Publication date: October 25, 2018
Inventors: Alol A. CRASTA, Harshal S. PATIL, Kishorekumar G. PILLAI, Christoph RAISCH, Nishant RANJAN
-
Publication number: 20180307503
Abstract: A device or apparatus may be configured to perform memory operations on a memory die while a current multi-level cell programming operation is being performed. In the event that a controller identifies pending memory operations to be performed in the memory die, the controller may communicate with the memory die to determine a status of auxiliary latches of the memory die. Depending on the status, the controller may determine if the memory die is in a suspend/resume period and/or which pending memory operations to have performed.
Type: Application
Filed: April 25, 2017
Publication date: October 25, 2018
Applicant: SanDisk Technologies LLC
Inventors: Uri Peltz, Amir Hadar, Mark Shlick, Mark Murin
-
Publication number: 20180307504
Abstract: Methods, apparatus, systems, and computer-readable media are provided for using selectable elements to invoke an automated assistant at a computing device. While operating the computing device, a user may not be aware that the automated assistant can be invoked according to certain invocation phrases. In order to inform the user of the functionality of the automated assistant, the user can be presented with selectable elements that can initialize the automated assistant when selected. Furthermore, a selectable element can provide an invocation phrase in textual form so that the user is aware of their ability to invoke the automated assistant by speaking the invocation phrase. The selectable element can be presented at different devices associated with the user, and the automated assistant can be initialized at a device that is separate from the device where the selectable element is presented.
Type: Application
Filed: April 25, 2017
Publication date: October 25, 2018
Inventors: Vikram Aggarwal, Dina Elhaddad
-
Publication number: 20180307505
Abstract: A method for dynamically loading one or more Extensible Mark-up Language (XML) schema definition (XSD) files into a JAVA™ Virtual Machine (JVM) during runtime is provided. The method includes generating JAVA™ objects from one or more initial XSD files. The method further includes grouping the JAVA™ objects by namespaces. The method also includes creating new XSD files for the namespaces. The new XSD file includes references to the initial XSD files that include a same namespace. The method further includes generating JAVA™ classes from the new XSD files. The method also includes compiling the new JAVA™ classes into bytecode. The bytecode is loaded into a ClassLoader, wherein the ClassLoader is available to the JVM during runtime.
Type: Application
Filed: June 25, 2018
Publication date: October 25, 2018
Inventor: Christopher Tomas Santiago
-
Publication number: 20180307506 Abstract: A device to execute an application design plugin associated with a user interface. The device may analyze, using the application design plugin, a set of historical applications to identify one or more dependencies included in the set of historical applications. The device may provide, to a storage device, historical application metadata relating to the one or more dependencies. The device may receive, via the user interface, a request to generate an application design. The device may update the user interface with design information that includes design feature metadata identifying the one or more dependencies. The device may determine that the application design is ready for validation. The device may validate the application design based on determining that the application design is ready for validation. Type: Application Filed: April 24, 2017 Publication date: October 25, 2018 Inventor: Shivakumar RUDRAPPA GONIWADA
-
Publication number: 20180307507 Abstract: An input device according to an exemplary embodiment of the present invention includes: a band including a contracted or extended flexible area; a sensor which detects a change of the flexible area and outputs a signal corresponding to the detected change of the flexible area; and a main body which is connected to the band and determines a cause operation for the change of the flexible area based on the signal output from the sensor, in which the flexible area includes fixed units, and connection units which connect the fixed units and are contracted or extended, and a distance between the fixed units is changed according to the cause operation. Type: Application Filed: October 12, 2016 Publication date: October 25, 2018 Applicant: SPHEREDYNE CO., LTD. Inventor: Sug Whan KIM
-
Publication number: 20180307508 Abstract: A method includes establishing a remote desktop connection between a first computing device and a first virtual machine executed by a second computing device. A stream of data generated by a first application executing on the first virtual machine is received in the first computing device over the remote desktop connection. A user interactivity metric associated with a user's interaction with the first application via the first computing device is determined. A compression metric is generated based on the user interactivity metric. The stream of data is compressed based on the compression metric. Type: Application Filed: April 24, 2018 Publication date: October 25, 2018 Applicant: Stratus Silver Lining, Inc. Inventors: Suman Banerjee, Alok Sharma, Arjang Ghassem Zedeh
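The interactivity-to-compression mapping can be sketched in a few lines. The thresholds and the use of zlib levels here are invented for illustration; the abstract does not specify a codec. The intuition: compress harder when the user is idle (latency matters less) and lighter during active interaction (responsiveness matters more).

```python
# Minimal sketch with invented thresholds: derive a compression level from a
# user-interactivity metric (input events per second).
import zlib

def compression_level(events_per_second):
    """Map a user-interactivity metric to a zlib level (1=fast, 9=small)."""
    if events_per_second >= 5.0:      # actively typing/clicking
        return 1
    if events_per_second >= 0.5:      # occasional interaction
        return 5
    return 9                          # idle: favor bandwidth over latency

def compress_stream(data, events_per_second):
    """Compress a chunk of the remote desktop stream per the current metric."""
    return zlib.compress(data, compression_level(events_per_second))
```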
-
Publication number: 20180307509 Abstract: A system including a master machine and a plurality of worker machines is disclosed. The master machine includes, for example, an API server configured to receive a job description; a resource allocation module configured to determine a number of virtual machines required to perform a job based on the job description; a container scheduling module configured to create a container containing the number of virtual machines required to perform the job, wherein at least two of the virtual machines in the container reside on different worker machines, and wherein each of the virtual machines is configured to run a same application to perform the job. Type: Application Filed: October 27, 2017 Publication date: October 25, 2018 Inventors: Wei Dai, Weiren Yu, Eric P. Xing, Aurick Qiao, Qirong Ho
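The spread-across-workers property can be shown with a toy scheduler. This is an assumed round-robin policy, not the patented scheduling module; names are illustrative. Round-robin guarantees that whenever more than one VM is requested (and more than one worker exists), at least two VMs land on different workers, as the abstract requires.

```python
# Hypothetical scheduling sketch: place the requested number of VMs
# round-robin across worker machines.
def schedule_container(num_vms, workers):
    """Return a list of (vm_index, worker) assignments, round-robin."""
    return [(i, workers[i % len(workers)]) for i in range(num_vms)]

placement = schedule_container(4, ["worker-a", "worker-b", "worker-c"])
# With 4 VMs on 3 workers, worker-a hosts VMs 0 and 3.
```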
-
Publication number: 20180307510 Abstract: Virtual machine (VM) proliferation may be reduced through the use of Virtual Server Agents (VSAs) assigned to a group of VM hosts that may determine the availability of a VM to perform a task. Tasks may be assigned to existing VMs instead of creating a new VM to perform the task. Furthermore, a VSA coordinator may determine a grouping of VMs or VM hosts based on one or more factors associated with the VMs or the VM hosts, such as VM type or geographical location of the VM hosts. The VSA coordinator may also assign one or more VSAs to facilitate managing the group of VM hosts. In some embodiments, the VSA coordinators may facilitate load balancing of VSAs during operation, such as during a backup operation, a restore operation, or any other operation between a primary storage system and a secondary storage system. Type: Application Filed: April 23, 2018 Publication date: October 25, 2018 Inventors: Rajiv Kottomtharayil, Rahul S. Pawar, Ashwin Gautamchand Sancheti, Sumer Dilip Deshpande, Sri Karthik Bhagi, Henry Wallace Dornemann, Ananda Venkatesha
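The reuse-before-create idea is easy to sketch. The data structures and function names below are invented for the example; the point is only the policy: an agent assigns a task to an existing idle VM when one is available, and creates a new VM only as a last resort, which is what limits proliferation.

```python
# Sketch of the reuse-before-create policy the abstract describes.
def assign_task(task, vms, create_vm):
    """Pick an idle existing VM for the task; otherwise create a new one.

    vms:        list of VM records, e.g. {"state": "idle"}
    create_vm:  callback that provisions and returns a fresh VM record
    """
    for vm in vms:
        if vm["state"] == "idle":
            vm["state"] = "busy"
            vm["task"] = task
            return vm                 # reused an existing VM
    new_vm = create_vm()              # last resort: provision a new VM
    new_vm.update(state="busy", task=task)
    vms.append(new_vm)
    return new_vm
```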
-
Publication number: 20180307511 Abstract: Disclosed aspects relate to virtual machine management in a shared pool of configurable computing resources. A single multi-node server may be established. The single multi-node server may include a running virtual machine, a set of computing resources that includes a possessed subset of the set of computing resources, a source hypervisor, and a target hypervisor. The possessed subset of the set of computing resources may be assigned to the target hypervisor from the source hypervisor. The running virtual machine may be run using the target hypervisor. Type: Application Filed: April 20, 2017 Publication date: October 25, 2018 Inventors: Saravanan Devendran, Venkatesh Sainath
-
Publication number: 20180307512 Abstract: A computer-implemented method according to one embodiment includes identifying a set of virtual machines to be placed within a system, receiving characteristics associated with the set of virtual machines, determining characteristics associated with a current state of the system, determining a placement of the set of virtual machines within the system, based on the characteristics associated with the set of virtual machines and the characteristics associated with a current state of the system, determining an updated placement of all virtual machines currently placed within the system, based on the characteristics associated with the set of virtual machines and the characteristics associated with a current state of the system, and determining a migration sequence within the system in order to implement the updated placement of all virtual machines currently placed within the system. Type: Application Filed: April 20, 2017 Publication date: October 25, 2018 Inventors: Ali Balma, Nejib Ben Hadj-Alouane, Aly Megahed, Mohamed Mohamed, Samir Tata, Hana Teyeb
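The placement-then-migration pipeline can be sketched with two small functions. The greedy first-fit heuristic here is an assumption for illustration (the abstract does not name a placement algorithm); the second function shows how an updated placement is turned into a migration sequence by diffing it against the current one.

```python
# Illustrative sketch: greedy first-fit placement, then a migration sequence
# derived by comparing old and new placements.
def first_fit(sizes, capacity):
    """Assign each VM (name -> size) to the first host with spare capacity."""
    used = {host: 0 for host in capacity}
    plan = {}
    for vm, size in sizes.items():
        for host, cap in capacity.items():
            if used[host] + size <= cap:
                plan[vm] = host
                used[host] += size
                break
    return plan

def migration_sequence(old, new):
    """Ordered list of (vm, src, dst) moves for VMs whose host changed."""
    return [(vm, old[vm], new[vm]) for vm in old if new.get(vm, old[vm]) != old[vm]]
```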
-
Publication number: 20180307513 Abstract: A method may include receiving one or more monitoring event definitions at an accelerator device from a first logical software entity to a first endpoint of the accelerator device having the first endpoint assigned for access by the first logical software entity, a second endpoint assigned to a second logical software entity such that second endpoint appears to the second logical software entity as a logical hardware adapter, and a third endpoint assigned to a third logical software entity, the accelerator device for accelerating data transfer operations between the second logical software entity and the third logical software entity via the second endpoint and the third endpoint. The method may also include monitoring by the accelerator device for one or more defined monitoring events occurring during the data transfer operations and communicating monitoring information to the first logical software entity from the accelerator device via the first endpoint. Type: Application Filed: April 21, 2017 Publication date: October 25, 2018 Applicant: Dell Products L.P. Inventors: Shyam T. IYER, William Price DAWKINS
-
Publication number: 20180307514 Abstract: Various examples are directed to systems and methods for orchestrating a first transaction workflow performed by a plurality of microservices. An orchestration service may write to a first tracking log a first log entry for a first action of the plurality of actions. The first tracking log may be stored at a persistent storage location that, for example, is accessible in the event that the orchestration service crashes. The first log entry may describe an initial state of the first action. The orchestration service may also write a second log entry for a second action of the plurality of actions to the first tracking log. The second log entry may describe an initial state of the second action. The orchestration service may determine that the first microservice successfully completed the first action and that the second microservice failed to complete the second action. The orchestration service may initiate a compensation action to reverse the first action. Type: Application Filed: April 24, 2017 Publication date: October 25, 2018 Inventors: Oleg Koutyrine, Michael Stephan
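The tracking-log/compensation pattern the abstract describes (widely known as a saga) can be sketched as follows. This is an illustrative model, not the claimed orchestration service: a plain list stands in for the persistent tracking log, and each action carries its own compensation callable.

```python
# Minimal saga-style sketch: log each action's state, and reverse completed
# actions when a later action fails.
def run_workflow(actions, log):
    """Run (name, do, undo) actions in order; compensate on failure.

    log is an append-only sequence standing in for the persistent tracking
    log, so progress is recoverable if the orchestrator crashes.
    """
    done = []
    for name, do, undo in actions:
        log.append((name, "started"))
        try:
            do()
        except Exception:
            log.append((name, "failed"))
            for prev_name, prev_undo in reversed(done):
                prev_undo()                       # compensation action
                log.append((prev_name, "compensated"))
            return False
        log.append((name, "completed"))
        done.append((name, undo))
    return True
```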
-
Publication number: 20180307515 Abstract: The present disclosure relates to an apparatus, method and system involving running at least one virtual machine on a portable computing device, wherein the virtual machine is operable to communicate with and control at least one device associated with at least one personal environment of a user of the portable computing device e.g. a motor vehicle, a house, an office or the body of the user of the portable computing device. By providing virtual machines dedicated to the control system of specific devices and physical hardware access, security is provided by preventing e.g. hacking and/or malware from entering the virtual machines. Type: Application Filed: October 2, 2015 Publication date: October 25, 2018 Applicant: RED BEND SOFTWARE Inventors: Evyatar MELLER, Yair NOAM, Micha RAVE, Tali EILAM
-
Publication number: 20180307516 Abstract: An example method of managing guest code in a virtualized computing instance of a virtualized computing system includes: receiving, at a hypervisor that manages the virtualized computing instance, identifiers for a first guest-physical memory page, which stores a patched version of the guest code, and a second guest-physical memory page, which stores an original version of the guest code; modifying an entry in a nested page table (NPT), which is associated with the first guest-physical memory page, to cause an exception to the hypervisor in response to a first read operation, performed by first software in the virtualized computing instance, which targets the first guest-physical memory page; and executing, at the hypervisor in response to the exception, a second read operation that emulates the first read operation, but targets the second guest-physical memory page. Type: Application Filed: July 7, 2017 Publication date: October 25, 2018 Inventors: PRASAD DABAK, Achindra Bhatnagar
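The redirect-on-read behavior can be shown with a toy model. Real NPT manipulation happens in hardware-assisted page tables; the class below only models the control flow with a dictionary, and all names are invented: a read of the patched page "traps" to a handler that returns the original page's contents, hiding the patch from in-guest readers.

```python
# Toy model of the NPT read-trap redirection described in the abstract.
class ToyHypervisor:
    def __init__(self, pages, patched_page, original_page):
        self.pages = pages            # guest-physical page number -> bytes
        self.patched = patched_page   # page holding the patched guest code
        self.original = original_page # page holding the pristine copy

    def read(self, page):
        """Guest read; the patched page 'faults' into the trap handler."""
        if page == self.patched:      # stands in for the NPT exception
            return self.handle_trap()
        return self.pages[page]

    def handle_trap(self):
        # Emulate the read, but target the original-code page instead.
        return self.pages[self.original]
```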
-
Publication number: 20180307517 Abstract: Disclosed aspects relate to virtual machine management in a shared pool of configurable computing resources. A single multi-node server may be established. The single multi-node server may include a running virtual machine, a set of computing resources that includes a possessed subset of the set of computing resources, a source hypervisor, and a target hypervisor. The possessed subset of the set of computing resources may be assigned to the target hypervisor from the source hypervisor. The running virtual machine may be run using the target hypervisor. Type: Application Filed: October 6, 2017 Publication date: October 25, 2018 Inventors: Saravanan Devendran, Venkatesh Sainath
-
Publication number: 20180307518 Abstract: A machine system includes a physical machine, a memory pool, and a memory pool management machine. The memory pool management machine manages, with respect to a memory region of the memory pool, an allocated region, a cleared region, and an uncleared region. When generating a virtual machine, a hypervisor in the physical machine sends a memory allocation request to the memory pool management machine. When a response, to the request, received from the memory pool management machine includes an address range belonging to the uncleared region, the hypervisor clears the memory region of the address range belonging to the uncleared region and then generates the virtual machine. Type: Application Filed: January 8, 2016 Publication date: October 25, 2018 Inventors: Takayuki IMADA, Toshiomi MORIKI
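The region bookkeeping can be sketched with an invented pool API. The class below is illustrative only: it prefers the cleared region and, when it must fall back to the uncleared region, flags the allocation so the hypervisor knows to zero those pages before creating the VM, which is the decision the abstract describes.

```python
# Sketch (invented API) of cleared/uncleared/allocated region bookkeeping.
class MemoryPool:
    def __init__(self, cleared_pages, uncleared_pages):
        self.cleared = set(cleared_pages)     # already zeroed, ready to use
        self.uncleared = set(uncleared_pages) # may hold stale data
        self.allocated = set()

    def allocate(self, count):
        """Return (pages, needs_clear): the pages handed out, and whether
        the hypervisor must zero them before generating the VM."""
        if len(self.cleared) >= count:
            pages = {self.cleared.pop() for _ in range(count)}
            needs_clear = False
        else:
            pages = {self.uncleared.pop() for _ in range(count)}
            needs_clear = True                # hypervisor clears, then boots
        self.allocated |= pages
        return sorted(pages), needs_clear
```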
-
Publication number: 20180307519 Abstract: A processor comprises a register to store a first pointer to a context data structure specifying a virtual machine context, the context data structure comprising a first field to store a second pointer to a plurality of realm switch control structures (RSCSs), and an execution unit comprising a logic circuit to execute a virtual machine (VM) according to the virtual machine context, wherein the VM comprises a guest operating system (OS) comprising a plurality of kernel components, and wherein each RSCS of the plurality of RSCSs specifies a respective component context associated with a respective kernel component of the plurality of kernel components, and execute a first kernel component of the plurality of kernel components using a first component context specified by a first RSCS of the plurality of RSCSs. Type: Application Filed: April 13, 2018 Publication date: October 25, 2018 Inventors: Deepak K. Gupta, Ravi L. Sahita, Barry E. Huntley
-
Publication number: 20180307520 Abstract: A non-transitory computer-readable recording medium storing a program that causes a computer including a first-processor in which a first-thread is executed at a first-node having a first-buffer and a communication device and a second-processor in which a second-thread is executed at a second-node having a second-buffer, the first-thread includes setting an output-destination of the communication device to the second-buffer with respect to a flow addressed to a virtual machine executed in the second-processor, notifying the second-processor of a switch notification information of the setting the output-destination of the communication device to the second-buffer, and transferring a packet stored in the first-buffer to the second-processor, and the second-thread includes receiving the switching notification, suspending temporarily a reception-process of the second-buffer for the flow, transferring the packet transferred by the first-processor to the virtual machine, and resuming the reception-process of the se… Type: Application Filed: April 19, 2018 Publication date: October 25, 2018 Applicant: FUJITSU LIMITED Inventor: Kazuki Hyoudou
-
Publication number: 20180307521 Abstract: A storage device is disclosed. The storage device may include storage for data and at least one Input/Output (I/O) queue for requests from at least one virtual machine (VM) on a host device. The storage device may support an I/O queue creation command to request the allocation of an I/O queue for a VM. The I/O queue creation command may include an LBA range attribute for a range of Logical Block Addresses (LBAs) to be associated with the I/O queue. The storage device may map the range of LBAs to a range of Physical Block Addresses (PBAs) in the storage. Type: Application Filed: April 20, 2018 Publication date: October 25, 2018 Inventor: Oscar P. PINTO
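The queue-creation command with an LBA-range attribute can be modeled with a short sketch. The class, method names, and linear LBA-to-PBA mapping below are assumptions for illustration (the abstract does not specify the mapping scheme): each created queue gets its own LBA window mapped onto a PBA range, so a VM's queue can only reach its own slice of the media.

```python
# Hypothetical model of an I/O-queue-creation command carrying an LBA range.
class ToyStorageDevice:
    def __init__(self, pba_base=0):
        self.next_pba = pba_base
        self.queues = {}   # queue id -> (lba_start, lba_end, pba_start)

    def create_io_queue(self, qid, lba_start, lba_end):
        """Handle a queue-creation command with an LBA-range attribute."""
        pba_start = self.next_pba
        self.next_pba += lba_end - lba_start + 1
        self.queues[qid] = (lba_start, lba_end, pba_start)
        return qid

    def translate(self, qid, lba):
        """Map an LBA submitted on a queue to its PBA; reject out-of-range."""
        lba_start, lba_end, pba_start = self.queues[qid]
        if not lba_start <= lba <= lba_end:
            raise PermissionError("LBA outside this queue's range")
        return pba_start + (lba - lba_start)
```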