Patent Applications Published on April 14, 2016
-
Publication number: 20160103666
Abstract: A service system in an IoT environment is provided. The service system includes an instance manager configured to manage generation and operation of an instance for a virtual object corresponding to an IoT device. The instance manager includes an instance pool configured to pool the instance, a message router configured to filter a data stream according to the instance and to transmit the filtered result value together with the data stream, and an instance lifecycle manager configured to reuse the instance pooled in the instance pool according to the filtered result value or to skip calling of an instance according to the data stream.
Type: Application
Filed: September 15, 2015
Publication date: April 14, 2016
Applicant: Korea Advanced Institute of Science and Technology
Inventors: Jun Kyun Choi, Jin Hong Yang, Hyo Jin Park, Yong Rok Kim, Kyu Yeong Jeon
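The pooling idea above can be sketched in a few lines. This is a minimal, hypothetical illustration — the class and method names (`InstancePool`, `VirtualObject`, `acquire`) are invented here, not taken from the patent:

```python
# Minimal sketch of reusing pooled instances of virtual objects
# that stand in for IoT devices (all names are illustrative).
class VirtualObject:
    def __init__(self, device_id):
        self.device_id = device_id

    def handle(self, message):
        return f"{self.device_id}: {message}"

class InstancePool:
    def __init__(self):
        self._pool = {}

    def acquire(self, device_id):
        # Reuse a pooled instance when one exists; otherwise create one,
        # so repeated data-stream events avoid fresh instantiation.
        if device_id not in self._pool:
            self._pool[device_id] = VirtualObject(device_id)
        return self._pool[device_id]

pool = InstancePool()
a = pool.acquire("sensor-1")
b = pool.acquire("sensor-1")   # second acquire reuses the pooled instance
```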
-
Publication number: 20160103667
Abstract: Creating a deployment package for deploying an application. The method includes identifying a configuration dataset. The method further includes identifying a plurality of target environments. The method further includes transforming the configuration dataset, during build time, for each of the target environments to create a plurality of different configuration datasets corresponding to the different target environments. The method further includes packaging the plurality of configuration datasets with a deployable application entity to create a package that can be deployed to a plurality of different targets to make application deployment across multiple targets more efficient.
Type: Application
Filed: October 14, 2014
Publication date: April 14, 2016
Inventors: Dong Chen, Haonan Tan, Tao Cao
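The build-time transformation step can be sketched as merging one base configuration with per-environment overrides. The environment names and keys below are invented for illustration:

```python
# Sketch: transform one base configuration dataset into per-target
# datasets at build time (environment names and keys are illustrative).
base_config = {"db_host": "localhost", "log_level": "DEBUG"}

overrides = {
    "staging":    {"db_host": "db.staging.example"},
    "production": {"db_host": "db.prod.example", "log_level": "WARN"},
}

def build_package(base, overrides):
    # One transformed dataset per target environment; all of them are
    # packaged together with the single deployable application entity.
    return {env: {**base, **delta} for env, delta in overrides.items()}

package = build_package(base_config, overrides)
```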
-
Publication number: 20160103668
Abstract: An electronic device displays a first application. The device detects a user input requesting an operation of a first type, and displays a user interface with application icons. Each application icon in the application icons corresponds to a respective application that is capable of performing the operation of the first type when stored in the memory of the device. The application icons include one or more application icons that correspond to one or more applications that are stored in the memory of the device and one or more application icons that correspond to one or more applications that are not stored in the memory of the device. The device detects activation of an application icon, in the application icons, that corresponds to a second application that is not stored in the memory of the electronic device, and installs the second application in the memory of the device.
Type: Application
Filed: October 9, 2014
Publication date: April 14, 2016
Inventors: Ragavan Srinivasan, Ievgenii Nazaruk
-
Publication number: 20160103669
Abstract: In certain embodiments, a method includes accessing, in response to a request to monitor a host device, a first set of discovery information associated with the host device. The first set of discovery information indicates at least one characteristic of the host device. The method further includes determining, based on the first set of discovery information, a second set of discovery information associated with the host device. The method also includes determining, based on the first and second sets of discovery information and based on one or more pre-defined rules, a metric associated with the host device to be monitored. The method includes communicating, based on the metric to be monitored, an installation package to the host device. The installation package includes a probe that is configured to monitor the metric.
Type: Application
Filed: October 13, 2014
Publication date: April 14, 2016
Inventors: Nimal K. K. Gamage, Jeffrey Daniel Alley, Eric Matthew Grunzke
-
Publication number: 20160103670
Abstract: The disclosure discloses a firmware updating method for updating a main firmware component of an electronic apparatus electrically connected with a removable memory device. The method includes: (a) performing a firmware updating procedure to copy a firmware updating component from the removable memory device to the electronic apparatus, and to update the main firmware component by using the firmware updating component in the removable memory device; (b) deleting the firmware updating component and then rebooting the electronic apparatus if the firmware updating procedure is successfully finished, or directly rebooting the electronic apparatus if the firmware updating procedure is unsuccessfully finished; (c) determining whether the firmware updating component exists in the removable memory device after rebooting the electronic apparatus; and (d) sequentially performing the steps (a) to (c) again if the firmware updating component exists in the removable memory device in the step (c).
Type: Application
Filed: October 8, 2014
Publication date: April 14, 2016
Inventors: Kuan-Jen CHEN, Chia-Ming HU
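The retry logic in steps (a)–(d) hinges on one detail: the updating component is deleted only on success, so its presence after a reboot signals that the update must be attempted again. A minimal sketch, with all device I/O simulated and all names invented:

```python
# Sketch of the marker-based retry loop: "updater.bin" stands in for the
# firmware updating component on the removable device; apply_update stands
# in for the updating procedure. Everything here is simulated.
def update_cycle(removable, apply_update, max_reboots=3):
    for _ in range(max_reboots):          # each pass models one boot
        if "updater.bin" not in removable:
            return True                   # component gone: update complete
        if apply_update():
            removable.remove("updater.bin")  # delete component, then reboot
        # on failure: reboot directly and retry on the next pass
    return "updater.bin" not in removable

attempts = iter([False, True])            # first attempt fails, second succeeds
removable = {"updater.bin"}
ok = update_cycle(removable, lambda: next(attempts))
```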
-
Publication number: 20160103671
Abstract: A mechanism is provided for applying a maximum number of software patches to each computing system in a set of computing systems. A set of computing systems are grouped into a plurality of computing system groups based on characteristics associated with each computing system, the plurality of computing system groups comprising at least two different groups of computing systems that differ in implementation of previous software patches. For each group of computing systems, a set of pending software patches are bundled based on characteristics associated with that group of computing systems thereby forming a plurality of bundles of pending software patches, the plurality of bundles of pending software patches comprise at least two different sets of pending software patches. For the plurality of computing systems, an associated bundle of pending software patches is applied to an associated group of computing systems.
Type: Application
Filed: October 8, 2014
Publication date: April 14, 2016
Inventors: Paul Curran, Bradford A. Fisher, James K. MacKenzie, Dominic O'Toole
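The group-then-bundle flow above can be sketched concretely: systems that differ in previously applied patches land in different groups, and each group gets its own bundle of pending patches. The field names and patch IDs are invented for illustration:

```python
# Sketch: group systems by patch-relevant characteristics, then build one
# bundle of pending patches per group (all data here is illustrative).
from collections import defaultdict

systems = [
    {"name": "a", "os": "linux", "applied": frozenset({"p1"})},
    {"name": "b", "os": "linux", "applied": frozenset({"p1"})},
    {"name": "c", "os": "linux", "applied": frozenset()},
]
pending = {"p1", "p2", "p3"}

# Systems differing in previously applied patches fall into different groups.
groups = defaultdict(list)
for s in systems:
    groups[(s["os"], s["applied"])].append(s["name"])

# One bundle per group, excluding patches that group has already applied.
bundles = {key: sorted(pending - key[1]) for key in groups}
```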
-
Publication number: 20160103672
Abstract: Provided are a method and system for upgrading firmware. The method includes: an upgrading control single board receives a firmware upgrading request from a master control single board, wherein the firmware upgrading request carries firmware upgrading parameter information; the upgrading control single board determines, according to the firmware upgrading parameter information, a sublink to be upgraded corresponding to the firmware upgrading parameter information; and the upgrading control single board acquires, from the master control single board, firmware upgrading data corresponding to the sublink to be upgraded, and upgrades, by adopting the firmware upgrading data, one or more pieces of firmware on the sublink to be upgraded. According to this solution, remote firmware upgrading can be performed on a bare single board while it is powered on, reducing the risk that the firmware upgrading operation affects normal running of the system.
Type: Application
Filed: September 17, 2013
Publication date: April 14, 2016
Inventors: Miaomiao MA, Yong YANG, Shuang YANG, Qi YANG, Rong XU
-
Publication number: 20160103673
Abstract: A mechanism is provided for applying a maximum number of software patches to each computing system in a set of computing systems. A set of computing systems are grouped into a plurality of computing system groups based on characteristics associated with each computing system, the plurality of computing system groups comprising at least two different groups of computing systems that differ in implementation of previous software patches. For each group of computing systems, a set of pending software patches are bundled based on characteristics associated with that group of computing systems thereby forming a plurality of bundles of pending software patches, the plurality of bundles of pending software patches comprise at least two different sets of pending software patches. For the plurality of computing systems, an associated bundle of pending software patches is applied to an associated group of computing systems.
Type: Application
Filed: August 5, 2015
Publication date: April 14, 2016
Inventors: Paul Curran, Bradford A. Fisher, James K. MacKenzie, Dominic O'Toole
-
Publication number: 20160103674
Abstract: A sub-process is performed on a first computing platform to create a portable initialized object. The portable initialized object is communicated to a second computing platform. The second computing platform uses the portable initialized object to replace performing the sub-process.
Type: Application
Filed: December 22, 2015
Publication date: April 14, 2016
Inventors: David B. Lection, Ruthie D. Lyle, Eric L. Masselle
-
Publication number: 20160103675
Abstract: Embodiments of the present invention are directed at methods and systems for providing a partial personalization process that allows for more efficient and effective personalization of a mobile application on a communication device after updating the mobile application. For example, personalization profiles associated with multiple versions of the mobile application may be stored at an application update provisioning system and the application update provisioning system may determine the appropriate partial provisioning information to update the mobile application for each migration notification. Accordingly, a tailored partial personalization script including only that personalization information that is to be updated for the updated version of the mobile application may be generated and installed to enable new functionality and/or update the information contained within an updated mobile application, without requiring re-personalization of all personalized information into the updated mobile application.
Type: Application
Filed: October 13, 2015
Publication date: April 14, 2016
Inventors: Christian Aabye, Kiushan Pirzadeh, Glenn Powell, Igor Karpenko
-
Publication number: 20160103676
Abstract: At least one ALM artifact, indexed by a unified data store, that does not comply with at least one process convention can be identified. Responsive to identifying the ALM artifact, indexed by the unified data store, that does not comply with the process convention, a determination can be made by a process convention agent executed by a processor as to whether script code is available to update the ALM artifact to comply with the process convention. Responsive to the process convention agent determining that script code is available to update the ALM artifact to comply with the process convention, the process convention agent can automatically execute the script code to update the ALM artifact to comply with the process convention.
Type: Application
Filed: October 9, 2014
Publication date: April 14, 2016
Inventors: Muhtar B. Akbulut, Mark T. Buquor, Vivek Garg, Matthew P. Jarvis, David Liman, Nimit Patel, Scott Patterson, Richard Watts, Keith A. Wells
-
Publication number: 20160103677
Abstract: A method for executing program builds comprising: analyzing file dependency information and job duration information associated with jobs of the program build; scheduling jobs for a current program build based on the analysis of the dependency information and the job duration data; executing the jobs according to the schedule; collecting file usage information and new job duration information from each of the jobs; supplementing the file dependency information with the file usage information; and storing the new job duration information to be used for scheduling jobs in subsequent program builds.Type: Application
Filed: October 14, 2014
Publication date: April 14, 2016
Inventor: John Eric Melski
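Scheduling on recorded durations can be sketched with a classic longest-job-first heuristic: sort jobs by their last known duration and hand each to the currently least-loaded agent. This sketch ignores the file-dependency side of the method and invents its own job names and agent model:

```python
# Sketch: schedule independent build jobs onto agents using recorded
# durations (longest job first onto the least-loaded agent). Dependency
# ordering, which the method also uses, is omitted for brevity.
import heapq

def schedule(durations, agents):
    # Heap of (current load, agent index, assigned jobs).
    loads = [(0.0, i, []) for i in range(agents)]
    heapq.heapify(loads)
    for job in sorted(durations, key=durations.get, reverse=True):
        load, i, jobs = heapq.heappop(loads)   # least-loaded agent
        jobs.append(job)
        heapq.heappush(loads, (load + durations[job], i, jobs))
    return {i: jobs for _, i, jobs in loads}

plan = schedule({"compile": 30, "link": 10, "test": 25, "docs": 5}, agents=2)
```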
-
Publication number: 20160103678
Abstract: At least one ALM artifact, indexed by a unified data store, that does not comply with at least one process convention can be identified. Responsive to identifying the ALM artifact, indexed by the unified data store, that does not comply with the process convention, a determination can be made by a process convention agent executed by a processor as to whether script code is available to update the ALM artifact to comply with the process convention. Responsive to the process convention agent determining that script code is available to update the ALM artifact to comply with the process convention, the process convention agent can automatically execute the script code to update the ALM artifact to comply with the process convention.
Type: Application
Filed: May 28, 2015
Publication date: April 14, 2016
Inventors: Muhtar B. Akbulut, Mark T. Buquor, Vivek Garg, Matthew P. Jarvis, David Liman, Nimit K. Patel, Scott R. Patterson, Richard D. Watts, Keith A. Wells
-
Publication number: 20160103679
Abstract: Software code is analyzed to identify one or more symbols in the software code, the one or more symbols corresponding to a defined software syntax. For each of one or more identified symbols: a corresponding annotation that conveys a meaning of the identified symbol is determined; a location within a document to display the annotation is determined so that the annotation, when displayed, is visually associated with the identified symbol; and the annotation is displayed at the location.
Type: Application
Filed: October 12, 2015
Publication date: April 14, 2016
Inventor: Stephen WOLFRAM
-
Publication number: 20160103680
Abstract: An arithmetic circuit comprises first to N-th, N being an integer equal to or larger than three, element circuits respectively including: input circuits which input first operand data and second operand data; and element data selectors which select operand data of any of element circuits on the basis of a request element signal; and a data bus which supplies the operand data from the input circuits to the element data selectors. When a control signal is in a first state, the element data selectors select, on the basis of the request element signal included in the second operand data, the first operand data of any of the element circuits and output the first operand data.
Type: Application
Filed: August 24, 2015
Publication date: April 14, 2016
Inventor: Tomonori TANAKA
-
Publication number: 20160103681
Abstract: A mechanism for simultaneous multithreading is provided. Responsive to performing a store instruction for a given thread of threads on a processor core and responsive to the core having ownership of a cache line in a cache, an entry of the store instruction is placed in a given store queue belonging to the given thread. The entry for the store instruction has a starting memory address and an ending memory address on the cache line. The starting memory addresses through ending memory addresses of load queues of the threads are compared on a byte-per-byte basis against the starting through ending memory address of the store instruction. Responsive to one memory address byte in the starting through ending memory addresses in the load queues overlapping with a memory address byte in the starting through ending memory address of the store instruction, the threads having the one memory address byte are flushed.
Type: Application
Filed: October 10, 2014
Publication date: April 14, 2016
Inventors: Khary J. Alexander, Jonathan T. Hsieh, Christian Jacobi, Martin Recktenwald
-
Publication number: 20160103682
Abstract: A mechanism for simultaneous multithreading is provided. Responsive to performing a store instruction for a given thread of threads on a processor core and responsive to the core having ownership of a cache line in a cache, an entry of the store instruction is placed in a given store queue belonging to the given thread. The entry for the store instruction has a starting memory address and an ending memory address on the cache line. The starting memory addresses through ending memory addresses of load queues of the threads are compared on a byte-per-byte basis against the starting through ending memory address of the store instruction. Responsive to one memory address byte in the starting through ending memory addresses in the load queues overlapping with a memory address byte in the starting through ending memory address of the store instruction, the threads having the one memory address byte are flushed.
Type: Application
Filed: August 18, 2015
Publication date: April 14, 2016
Inventors: Khary J. Alexander, Jonathan T. Hsieh, Christian Jacobi, Martin Recktenwald
-
Publication number: 20160103683
Abstract: A compiler apparatus copies a branch instruction included in first code to produce a plurality of branch instructions. The compiler apparatus generates a control instruction to cause different threads running on a processor, which is able to execute a plurality of threads that share storage space for storing information to be used for branch prediction, to execute different ones of the plurality of branch instructions. The compiler apparatus generates second code including the plurality of branch instructions and the control instruction.
Type: Application
Filed: October 10, 2014
Publication date: April 14, 2016
Applicant: FUJITSU LIMITED
Inventors: Masakazu UENO, Masahiro DOTEGUCHI
-
Publication number: 20160103684
Abstract: According to one embodiment, a processor includes an instruction decoder to decode a first instruction to gather data elements from memory, the first instruction having a first operand specifying a first storage location and a second operand specifying a first memory address storing a plurality of data elements. The processor further includes an execution unit coupled to the instruction decoder, in response to the first instruction, to read a contiguous first and second of the data elements from a memory location based on the first memory address indicated by the second operand, and to store the first data element in a first entry of the first storage location and a second data element in a second entry of a second storage location corresponding to the first entry of the first storage location.
Type: Application
Filed: December 21, 2015
Publication date: April 14, 2016
Applicant: Intel Corporation
Inventors: Andrew T. Forsyth, Brian J. Hickmann, Jonathan C. Hall, Christopher J. Hughes
-
Publication number: 20160103685
Abstract: A data processing apparatus for accessing several system registers using a single command includes system registers and command generation circuitry capable of analysing a plurality of decoded system register access instructions, each specifying a system register identifier. In response to a predetermined condition, the command generation circuitry generates a single command to represent the plurality of decoded system register access instructions. The predetermined condition comprises a requirement that a total width of the system registers specified by the plurality of decoded system register access instructions is less than or equal to a predefined data processing width.
Type: Application
Filed: October 10, 2014
Publication date: April 14, 2016
Inventors: Loïc PIERRON, Antony John PENTON
-
Publication number: 20160103686
Abstract: An apparatus comprising: at least one processor; and at least one memory including computer program code; the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform static code analysis of a plurality of instructions comprising, for each instruction: determining whether a trace message is generated by the instruction; determining whether a size of the trace message generated by the instruction is dependent on a context; determining a size of the trace message generated by the instruction; and updating the context; and to perform determining a cumulative size of trace messages generated by the plurality of instructions.
Type: Application
Filed: December 9, 2014
Publication date: April 14, 2016
Applicant: FREESCALE SEMICONDUCTOR, INC.
Inventors: RADU-MARIAN IVAN, RAZVAN LUCIAN IONESCU, FLORINA MARIA TERZEA
-
Publication number: 20160103687
Abstract: A display control device for controlling a display unit in a vehicle, including a dedicated middleware that executes a dedicated application program on a vehicle side, a general purpose middleware that executes a general purpose application program from an external of the vehicle, and an interface that exchanges necessary information between the dedicated middleware and the general purpose middleware, includes: an activation device that activates the dedicated middleware first, and activates the general purpose middleware after the dedicated middleware; and a dedicated display control device that displays, before an activation of the general-purpose middleware is completed, a dedicated menu screen for activating the dedicated application program on the display unit via the dedicated middleware when the dedicated application program on the dedicated middleware is available.
Type: Application
Filed: May 12, 2014
Publication date: April 14, 2016
Applicant: DENSO CORPORATION
Inventors: Shigeo MATSUYAMA, Hiroshi ISHIGURO, Kiyohiko SAWADA
-
Publication number: 20160103688
Abstract: The present disclosure provides a method of starting a computer system. The method includes rebooting a baseboard management controller of the computer system; the baseboard management controller loading a real time clock driver, wherein the real time clock driver is used for enabling a real time clock function; checking a field of a real time clock register after the real time clock driver is loaded; enabling a real time clock update field and clearing a time tag of the real time clock register if a state of the field of the real time clock register is disabled; the baseboard management controller obtaining the time tag from the real time clock register; and the baseboard management controller completing a start process if the time tag is later than a predefined time.
Type: Application
Filed: December 23, 2014
Publication date: April 14, 2016
Inventors: XiLang Zhang, Peng Hu
-
Publication number: 20160103689
Abstract: Methods and apparatus for an inter-processor communication (IPC) link between two (or more) independently operable processors. In one aspect, the IPC protocol is based on a “shared” memory interface for run-time processing (i.e., the independently operable processors each share (either virtually or physically) a common memory interface). In another aspect, the IPC communication link is configured to support a host driven boot protocol used during a boot sequence to establish a basic communication path between the peripheral and the host processors. Various other embodiments described herein include sleep procedures (as defined separately for the host and peripheral processors), and error handling.
Type: Application
Filed: October 8, 2015
Publication date: April 14, 2016
Inventors: Karan Sanghi, Saurabh Garg, Haining Zhang
-
Publication number: 20160103690
Abstract: A multilingual information guidance system and device are provided. To elaborate, the system may include: at least one information output device configured to display guidance information to a user; and an information provision server connected to the information output device and configured to provide guidance information prepared in one or more languages to the information output device according to a request of the information output device. The information output device may output guidance information prepared in the native language corresponding to nationality information of a user, which is read from an RFID tag in which the nationality information of the user is stored or transmitted from the information provision server.
Type: Application
Filed: December 18, 2015
Publication date: April 14, 2016
Inventors: Dong Soo Kim, Sang Ho Choi
-
Publication number: 20160103691
Abstract: A data processing system comprising multiple GPUs 2, 4, 6, 8 includes instruction queue circuitry 28 storing data specifying program instructions for threads awaiting issue for execution. Instruction characterisation circuitry 30 determines one or more characteristics of the program instructions awaiting issue within the instruction queue circuitry 28 and supplies this to operating parameter control circuitry 20. The operating parameter control circuitry 20 alters one or more operating parameters of the system in response to the one or more characteristics of the program instructions awaiting issue.
Type: Application
Filed: October 9, 2014
Publication date: April 14, 2016
Inventors: Ankit SETHIA, Scott MAHLKE
-
Publication number: 20160103692
Abstract: One embodiment of the present invention provides a switch. The switch includes a packet processor, a persistent storage module, and a boot-up management module. The packet processor identifies a switch identifier associated with the switch in the header of a packet. The persistent storage module stores configuration information of the switch in a first table in a local persistent storage. This configuration information is included in a configuration file, and the first table includes one or more columns for the attribute values of the configuration information. The boot-up management module loads the attribute values to corresponding switch modules from the first table without processing the configuration file.
Type: Application
Filed: October 9, 2014
Publication date: April 14, 2016
Inventors: Vidyasagara R. Guntaka, Suresh Vobbilisetty, Manjunath A.G. Gowda, Pasupathi Duraiswamy
-
Publication number: 20160103693
Abstract: An example apparatus may comprise a processor and a memory device including computer program code. The memory device and the computer program code, with the processor, may cause the apparatus to execute a client application, the client application to consume a first protocol, the protocol having been produced by a Unified Extensible Firmware Interface (UEFI) wrapper driver; invoke, with the client application, the UEFI wrapper driver to perform at least one operation of the protocol; and load a binary image of a worker application with the wrapper driver to invoke the at least one operation. The worker application calls at least one function of a software library to perform the at least one operation.
Type: Application
Filed: June 14, 2013
Publication date: April 14, 2016
Inventors: Kimon Berlin, Guilherme Antonio Anzilago Tesser, Luis Fernando Pollo, Charles Ricardo Slaub, Cristiano Fernandes, Benito Silva
-
Publication number: 20160103694
Abstract: A system and method can support distributed class loading in a computing environment, such as a virtual machine. A class loader can break a classpath into one or more subsets of a classpath, wherein the classpath is associated with a class. Furthermore, the class loader can use one or more threads to locate the class based on said one or more subsets of the classpath. Then, the class loader can load the class after a said thread locates the class.
Type: Application
Filed: October 8, 2015
Publication date: April 14, 2016
Inventor: GAJANAN KULKARNI
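The split-and-search idea can be sketched as follows: the classpath is partitioned into subsets, one worker thread searches each subset, and the first hit identifies where to load the class from. The classpath entries and the in-memory index standing in for jar contents are invented for illustration:

```python
# Sketch: break a classpath into subsets and search them concurrently.
# CONTENTS simulates "which classes does each classpath entry contain".
from concurrent.futures import ThreadPoolExecutor

CLASSPATH = ["lib/a.jar", "lib/b.jar", "lib/c.jar", "lib/d.jar"]
CONTENTS = {"lib/c.jar": {"com.example.Foo"}}   # illustrative index

def find_class(name, classpath, workers=2):
    # One subset of the classpath per worker thread.
    subsets = [classpath[i::workers] for i in range(workers)]

    def search(subset):
        for entry in subset:
            if name in CONTENTS.get(entry, ()):
                return entry
        return None

    with ThreadPoolExecutor(max_workers=workers) as pool:
        for hit in pool.map(search, subsets):
            if hit:
                return hit      # first subset that located the class
    return None

location = find_class("com.example.Foo", CLASSPATH)
```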
-
Publication number: 20160103695
Abstract: The present disclosure relates to assignment or generation of reducer virtual machines after the “map” phase is substantially complete in MapReduce. Instead of a priori placement, the distribution of keys over the mapper virtual machines after the “map” phase can be used to efficiently assign reducer tasks in virtualized cloud infrastructure like OpenStack. By solving a constraint optimization problem, reducer VMs can be optimally assigned to process keys subject to certain constraints. In particular, the present disclosure describes a special variable matrix. Furthermore, the present disclosure describes several possible cost matrices for representing the costs determined based on the key distribution over the mapper VMs (and other suitable factors).
Type: Application
Filed: October 8, 2014
Publication date: April 14, 2016
Applicant: CISCO TECHNOLOGY, INC.
Inventors: Yathiraj B. Udupi, Debojyoti Dutta, Madhav V. Marathe, Raghunath O. Nambiar
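The cost-matrix assignment can be sketched at toy scale by exhaustively minimising total cost over all key-to-VM assignments. The cost values below are invented; a real system would derive them from the key distribution over the mapper VMs and would use a proper solver rather than brute force:

```python
# Sketch: pick the key-to-reducer-VM assignment with minimum total cost,
# where cost[k][v] models how expensive it is for VM v to reduce key k
# (e.g. how much of k's mapped data is remote to v). Values are invented.
from itertools import permutations

cost = {                      # key -> {vm: cost}
    "k1": {"vm1": 1, "vm2": 4},
    "k2": {"vm1": 3, "vm2": 1},
}

def assign(cost):
    keys = list(cost)
    vms = list(next(iter(cost.values())))
    # Brute-force search over one-VM-per-key assignments (toy scale only).
    best = min(permutations(vms, len(keys)),
               key=lambda p: sum(cost[k][v] for k, v in zip(keys, p)))
    return dict(zip(keys, best))

assignment = assign(cost)
```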
-
Publication number: 20160103696
Abstract: An example method for touchless multi-domain VLAN based orchestration in a network environment is provided and includes receiving mobility domain information for a virtual machine associated with a processor executing the method in a network environment, the mobility domain information comprising a mobility domain identifier (ID) indicating a scope within which the virtual machine can be moved between servers, generating a virtual station interface (VSI) discovery protocol (VDP) message in a type-length-value (TLV) format with the mobility domain information, and transmitting the VDP message to a leaf switch directly attached to the server, wherein the leaf switch provisions a port according to the mobility domain information.
Type: Application
Filed: October 9, 2014
Publication date: April 14, 2016
Applicant: CISCO TECHNOLOGY, INC.
Inventors: Rajesh Babu Nataraja, Shyam Kapadia, Nilesh Shah
-
Publication number: 20160103697
Abstract: A streams manager monitors performance of parallel portions of a streaming application implemented in multiple virtual machines (VMs). When the performance provided by the multiple VMs is no longer needed, one or more of the VMs can be torn down. The performance of the VMs is monitored. When the least performing VM can be torn down, it is torn down. When the least performing VM cannot be torn down, information regarding a better performing VM is gathered, and it is determined whether the least performing VM can be made more similar to the better performing VM. When the least performing VM can be made more similar to the better performing VM, the least performing VM is changed to improve its performance, and the better performing VM is torn down.
Type: Application
Filed: October 10, 2014
Publication date: April 14, 2016
Inventors: Lance Bragstad, Michael J. Branson, Bin Cao, James E. Carey, Mathew R. Odden
-
Publication number: 20160103698
Abstract: Concepts and technologies are disclosed herein for providing a network virtualization policy management system. An event relating to a service can be detected, and virtual machines and virtual network functions that provide the service can be identified. A first policy that defines allocation of hardware resources to host the virtual machines and the virtual network functions can be obtained, as can a second policy that defines deployment of the virtual machines and the virtual network functions to the hardware resources. The hardware resources can be allocated based upon the first policy and the virtual machines and the virtual network functions can be deployed to the hardware resources based upon the second policy.
Type: Application
Filed: October 13, 2014
Publication date: April 14, 2016
Applicant: AT&T INTELLECTUAL PROPERTY I, L.P.
Inventors: Chen-Yui Yang, Paritosh Bajpay, David H. Lu, Chaoxin Qiu
-
Publication number: 20160103699
Abstract: A hybrid cloud computing system is managed by determining communication affinity between a cluster of virtual machines, where one virtual machine in the cluster executes in a virtualized computing system, and another virtual machine in the cluster executes in a cloud computing environment, and where the virtualized computing system is managed by a tenant that accesses the cloud computing environment. After determining a target location in the hybrid cloud computing system to host the cluster of virtual machines based on the determined communication affinity, at least one of the cluster of virtual machines is migrated to the target location.
Type: Application
Filed: October 30, 2014
Publication date: April 14, 2016
Inventors: Sachin THAKKAR, Debashis BASAK, Serge MASKALIK, Weiqing WU, Abhinav Vijay BHAGWAT
-
Publication number: 20160103700
Abstract: A streams manager monitors performance of parallel portions of a streaming application implemented in multiple virtual machines (VMs). When the performance provided by the multiple VMs is no longer needed, one or more of the VMs can be torn down. The performance of the VMs is monitored. When the least performing VM can be torn down, it is torn down. When the least performing VM cannot be torn down, information regarding a better performing VM is gathered, and it is determined whether the least performing VM can be made more similar to the better performing VM. When the least performing VM can be made more similar to the better performing VM, the least performing VM is changed to improve its performance, and the better performing VM is torn down.
Type: Application
Filed: October 30, 2014
Publication date: April 14, 2016
Inventors: Lance Bragstad, Michael J. Branson, Bin Cao, James E. Carey, Mathew R. Odden
-
Publication number: 20160103701
Abstract: Execution of an application is suspended and the runtime state of the application is collected and persisted. Maintenance operations may then be performed on the computer that the application was executing upon. The runtime state might also be moved to another computer. In order to resume execution of the application, the runtime state of the application is restored. Once the runtime state of the application has been restored, execution of the application may be restarted from the point at which execution was suspended. A proxy layer might also be utilized to translate requests received from the application for resources that are modified after the runtime state of the application is persisted.
Type: Application
Filed: December 17, 2015
Publication date: April 14, 2016
Inventors: Charles Kekeh, Aseem Kohli, Scott Elliot Stearns, Kristofer Hellick Reierson, Cread Wellington Mefford, Angela Mele Anderson
-
Publication number: 20160103702
Abstract: Low latency communication between a transactional system and analytic data store resources can be accomplished through a low latency key-value store with purpose-designed queues and status reporting channels. Posting by the transactional system to input queues and complementary posting by analytic system workers to output queues is described. On-demand production and splitting of analytic data stores requires significant elapsed processing time, so a separate process status reporting channel is described to which workers can periodically post their progress, thereby avoiding progress inquiries and interruptions of processing to generate report status. This arrangement produces low latency and reduced overhead for interactions between the transactional system and the analytic data store system.
Type: Application
Filed: October 10, 2014
Publication date: April 14, 2016
Applicant: SALESFORCE.COM, INC.
Inventors: Donovan Schneider, Fred Im, Daniel C. Silver, Vijayasarathy Chakravarthy
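The queue-and-status-channel arrangement described above can be sketched with plain Python structures standing in for the key-value store. This is a hedged toy, not Salesforce's API; the queue names and the worker's progress percentages are invented for illustration.

```python
# Toy sketch of the arrangement: the transactional side posts to an
# input queue, a worker posts periodic progress to a status channel
# (so no one has to interrupt it with progress inquiries), and the
# finished result lands on an output queue. Names are hypothetical.

store = {"input": [], "output": [], "status": {}}

def post_request(req_id, payload):
    store["input"].append((req_id, payload))    # transactional side posts work

def worker_step():
    req_id, payload = store["input"].pop(0)
    for pct in (25, 50, 100):                   # periodic progress posts
        store["status"][req_id] = pct
    store["output"].append((req_id, payload.upper()))  # completed result

post_request("r1", "build datastore")
worker_step()
```

After the worker runs, the status channel shows 100% for `r1` and the result sits on the output queue for the transactional system to collect.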
-
Publication number: 20160103703
Abstract: An application-level thread dispatcher that operates in a main full-weight thread allocated to an application is established. The application-level thread dispatcher initializes a group of application-level pseudo threads that operate as application-controlled threads within the main full-weight thread allocated to the application. The application-level thread dispatcher determines that at least one application-level pseudo thread meets configuration requirements to operate within a separate operating system-level thread in parallel with the main full-weight thread. In response to determining that the at least one application-level pseudo thread meets the configuration requirements to operate within the separate operating system-level thread in parallel with the main full-weight thread, the at least one application-level pseudo thread is dispatched from the main full-weight thread to the separate operating system-level thread by the application-level thread dispatcher.
Type: Application
Filed: October 8, 2014
Publication date: April 14, 2016
Inventors: Paul M. Cadarette, Robert D. Love, Austin J. Willoughby
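The dispatch decision above can be sketched in a few lines. This is an illustrative assumption-laden model, not the patented mechanism: pseudo threads are modeled as plain callables, and the "configuration requirements" are collapsed into a single hypothetical `parallel_ok` flag.

```python
import threading

# Sketch: pseudo threads that meet the (hypothetical) requirements get
# their own OS-level thread; the rest run cooperatively inside the main
# full-weight thread.

class PseudoThreadDispatcher:
    def __init__(self):
        self.pseudo_threads = []

    def register(self, fn, parallel_ok=False):
        self.pseudo_threads.append((fn, parallel_ok))

    def run(self):
        os_threads = []
        for fn, parallel_ok in self.pseudo_threads:
            if parallel_ok:       # dispatch to a separate OS-level thread
                t = threading.Thread(target=fn)
                t.start()
                os_threads.append(t)
            else:                 # stays inside the main full-weight thread
                fn()
        for t in os_threads:
            t.join()

results = []
d = PseudoThreadDispatcher()
d.register(lambda: results.append("main"))
d.register(lambda: results.append("parallel"), parallel_ok=True)
d.run()
```

Both pseudo threads complete, one inline and one on a real `threading.Thread`, before `run()` returns.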
-
Publication number: 20160103704
Abstract: A data processing device includes an instruction execution unit that executes a first task, a second task and an interrupt task, a counter that counts an execution time of one of the first task and the interrupt task, a first storage unit that stores a set value to start the counter when the instruction execution unit executes one of the first task and the interrupt task, a second storage unit that stores the set value stored in the first storage unit when the instruction execution unit switches from an execution of the first task to an execution of the second task, and a memory that stores the set value stored in the first storage unit when the instruction execution unit switches from the execution of the first task to an execution of the interrupt task.
Type: Application
Filed: December 18, 2015
Publication date: April 14, 2016
Inventors: Hitoshi Suzuki, Yukihiko Akaike
-
Publication number: 20160103705
Abstract: The present invention provides an operational-task-oriented system and method for dynamically adjusting the operational environment of a computer cluster. Each operational node of the computer cluster has two or more operating systems installed. After receiving an operational task, the control node estimates the time required for appropriate operational nodes to complete the subtasks requiring different operating systems, and compares the estimated finish time with the assigned finish time to decide how to adjust the operating systems running on the operational nodes. Thereby, the operational task can be completed within the assigned finish time. Another method is to use the control node to analyze the proportions of subtasks requiring different operating systems within an operational task, and adjust the operating system running on each operational node according to those proportions. Thereby, the operational task can be completed in the shortest time.
Type: Application
Filed: November 14, 2014
Publication date: April 14, 2016
Inventors: MING-JEN WANG, CHIH-WEN CHANG, CHUAN-LIN LAI, CHIA-CHEN KUO, JIANG-SIANG LIAN
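The finish-time comparison described above can be sketched as a small selection function. This is a made-up illustration under stated assumptions (a per-OS estimate table with invented numbers), not the patented method.

```python
# Sketch: for one operational node, compare estimated finish times under
# each installed OS against the assigned finish time and pick the OS to
# run (preferring the fastest). Estimates are illustrative numbers.

def plan_os(node_estimates, assigned_finish_time):
    """node_estimates: {os_name: estimated_seconds} for one node.
    Returns the fastest OS that meets the assigned finish time, or
    None if no installed OS can meet it."""
    best = min(node_estimates, key=node_estimates.get)
    return best if node_estimates[best] <= assigned_finish_time else None

estimates = {"linux": 120, "windows": 90}
```

With a 100-second deadline the node would be switched to (or kept on) the 90-second OS; with a 60-second deadline neither OS suffices and the control node must reassign the work.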
-
Publication number: 20160103706
Abstract: The present disclosure relates to automatically generating execution sequences from workflow definitions.
Type: Application
Filed: October 9, 2014
Publication date: April 14, 2016
Inventor: Marcos Novaes
-
Publication number: 20160103707
Abstract: A method includes receiving, by a system on a chip (SoC) from a logically centralized controller, configuration information and reading, from a semantics aware storage module of the SoC, a data block in accordance with the configuration information. The method also includes performing scheduling to produce a schedule in accordance with the configuration information and writing the data block to an input data queue in accordance with the schedule to produce a stored data block. Additionally, the method includes writing a tag to an input tag queue to produce a stored tag, where the tag corresponds to the data block.
Type: Application
Filed: October 7, 2015
Publication date: April 14, 2016
Inventors: Debashis Bhattacharya, Alan Gatherer, Ashish Rai Shrivastava, Mark Brown, Zhenguo Gu, Qiang Wang, Alex Elisa Chandra
-
Publication number: 20160103708
Abstract: A system and method for executing one or more tasks in data processing is disclosed. Data is received from at least one channel among multiple channels in order to generate a corresponding result. A set of tasks is generated to process the received data; the tasks receive the data as an input argument for generating the corresponding result. An idle worker node from a plurality of worker nodes is selected for executing the set of tasks in a pipeline. The set of tasks is executed by the selected worker nodes in order to generate the corresponding result. The results are stored in the system for a predefined time.
Type: Application
Filed: October 9, 2015
Publication date: April 14, 2016
Inventor: ANOOP THOMAS MATHEW
-
Publication number: 20160103709
Abstract: A method and an apparatus for managing and scheduling tasks in a many-core system are presented. The method improves process management efficiency in the many-core system. The method includes, when a process needs to be added to a task linked list, adding a process descriptor pointer of the process to a task descriptor entry corresponding to the process, and adding the task descriptor entry to the task linked list; if a process needs to be deleted, finding a task descriptor entry corresponding to the process, and removing the task descriptor entry from the task linked list; and when a processor core needs to run a new task, removing an available priority index register with a highest priority from a queue of the priority index register.
Type: Application
Filed: December 21, 2015
Publication date: April 14, 2016
Inventors: Lunkai Zhang, DongRui Fan, Hao Zhang, Xiaochun Ye
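The bookkeeping in the abstract can be sketched with plain Python structures. This is a hedged illustration only: the patent's descriptor entries and priority index registers become dicts and a list, and all field names are hypothetical.

```python
from collections import deque

# Sketch: add/delete processes on a task list of descriptor entries,
# and pop the highest available priority index when a core needs a new
# task. Structures and names are hypothetical stand-ins for hardware.

task_list = deque()            # task descriptor entries (the linked list)
priority_queue = [3, 7, 5]     # available priority index "registers"

def add_process(pid):
    entry = {"process_descriptor": pid}   # descriptor pointer -> entry
    task_list.append(entry)               # link entry into the task list
    return entry

def delete_process(pid):
    for entry in list(task_list):
        if entry["process_descriptor"] == pid:
            task_list.remove(entry)       # unlink from the task list
            return True
    return False

def next_task_priority():
    # Remove and return the highest available priority index.
    return priority_queue.pop(priority_queue.index(max(priority_queue)))

add_process(11)
add_process(12)
delete_process(11)
```

After these calls, only process 12 remains linked, and the next core to ask for work would receive priority index 7.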
-
Publication number: 20160103710
Abstract: The invention relates to a scheduling device for receiving a set of requests and providing a set of grants to the set of requests, the scheduling device comprising: a lookup vector prepare unit configured to provide a lookup vector prepared set of requests depending on the set of requests and a selection mask and to provide a set of acknowledgements to the set of requests; and a prefix forest unit coupled to the lookup vector prepare unit, wherein the prefix forest unit is configured to provide the set of grants as a function of the lookup vector prepared set of requests and to provide the selection mask based on the set of grants.
Type: Application
Filed: December 18, 2015
Publication date: April 14, 2016
Inventors: Yaron Shachar, Yoav Peleg, Alex Tal, Lixia Xiong, Yuchun Lu, Alex Umansky
-
Publication number: 20160103711
Abstract: Methods and systems of determining an optimum power-consumption profile for virtual machines running in a data center are disclosed. In one aspect, a power-consumption profile of a virtual machine and a unit-rate profile of electrical power cost over a period are received. The methods determine an optimum power-consumption profile based on the power-consumption profile and the unit-rate profile. The optimum power-consumption profile may be used to reschedule the virtual machine over the period.
Type: Application
Filed: November 26, 2014
Publication date: April 14, 2016
Inventors: KUMAR GAURAV, HEMANTH KUMAR PANNEM, BHASKARDAS KAMBIVELU
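One way to picture the rescheduling idea is to slide a VM's power-consumption profile across the unit-rate profile and keep the cheapest start offset. This is an illustrative sketch with invented numbers, not the patented optimization.

```python
# Sketch: find the start offset that minimizes total electricity cost
# for a VM whose hourly power draw is known, given a unit-rate profile
# over the same period. All numbers are illustrative.

def best_start(power_profile, unit_rates):
    """Return the start hour minimizing sum(power[i] * rate[start+i])."""
    costs = {}
    for start in range(len(unit_rates) - len(power_profile) + 1):
        costs[start] = sum(p * unit_rates[start + i]
                           for i, p in enumerate(power_profile))
    return min(costs, key=costs.get)

power = [2.0, 2.0, 1.0]                  # kW drawn in each hour the VM runs
rates = [0.30, 0.10, 0.10, 0.10, 0.30]   # $/kWh over the period
```

With these numbers, starting at hour 1 places the heavy draw entirely in the cheap hours (total $0.50, versus $0.90 at hour 0), so the VM would be rescheduled to hour 1.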
-
Publication number: 20160103712
Abstract: An example provides a method of creating an instance of a virtual machine in a cloud computing system that includes: accepting a network connection at a server resource in the cloud computing system from a first client resource in a first virtualized computing system to transfer a first virtual machine; receiving first signatures for guest files of the first virtual machine from the first client resource; checking the first signatures against a content library in the cloud computing system to identify first duplicate files of the guest files that match first base files stored in the content library, and to identify first unique files of the guest files; instructing the first client resource such that a response to the instructing will send the first unique files to the exclusion of the first duplicate files; and generating an instance of the first virtual machine in the cloud computing system having the first base files and the first unique files.
Type: Application
Filed: December 24, 2014
Publication date: April 14, 2016
Inventors: Sachin THAKKAR, Serge MASKALIK, Debashis BASAK, Weiqing WU, Allwyn SEQUEIRA
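The signature check at the heart of this method can be sketched with ordinary content hashes. This is a hedged illustration under assumptions of my own (SHA-256 as the signature, dicts for the guest file set); the patent does not specify these details.

```python
import hashlib

# Sketch: hash guest-file contents, compare against signatures already
# held in the cloud-side content library, and split the files into
# duplicates (skip transfer) and uniques (must be sent). Paths and
# contents are hypothetical.

def signature(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def split_unique(guest_files, content_library):
    """guest_files: {path: sig}; content_library: set of base-file sigs.
    Returns (duplicate_paths, unique_paths), each sorted."""
    dup = sorted(p for p, s in guest_files.items() if s in content_library)
    uniq = sorted(p for p, s in guest_files.items() if s not in content_library)
    return dup, uniq

library = {signature(b"base kernel"), signature(b"base libc")}
guest = {
    "/boot/vmlinuz": signature(b"base kernel"),
    "/etc/app.conf": signature(b"site specific"),
}
dup, uniq = split_unique(guest, library)
```

Only `/etc/app.conf` would cross the wire; the instantiated VM is then assembled from the library's base files plus the transferred unique files.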
-
Publication number: 20160103713
Abstract: A method for sequencing a plurality of tasks performed by a processing system and a processing system for implementing the same are disclosed herein. In one embodiment, a method for sequencing a plurality of tasks performed by a processing system is provided that includes generating a schedule by iteratively performing a scheduling process and processing a plurality of substrates using the plurality of semiconductor processing equipment stations according to the schedule. The scheduling process uses highly constrained tasks and determines whether a portion of the first list of the highly constrained tasks exceeds a capacity of the processing system. The scheduling process further includes updating the latest start time and the earliest start time associated with each of the plurality of tasks yet to be scheduled based on the assigned task.
Type: Application
Filed: October 9, 2015
Publication date: April 14, 2016
Inventor: David Everton NORMAN
-
Publication number: 20160103714
Abstract: A system includes a load balancer; apparatuses; and a control apparatus configured to execute a process including: selecting, from among the apparatuses, one or more first apparatuses as each processing node for processing data distributed by the load balancer; selecting, from among the apparatuses, one or more second apparatuses as each inputting and outputting node for inputting and outputting data processed by the each processing node; collecting load information from the one or more first apparatuses and the one or more second apparatuses; changing a number of the one or more first apparatuses or a number of the one or more second apparatuses based on the load information; and setting one or more third apparatuses not selected as the processing node and the inputting and outputting node from among the apparatuses based on the changing into a deactivated state.
Type: Application
Filed: September 30, 2015
Publication date: April 14, 2016
Applicant: FUJITSU LIMITED
Inventors: Yotaro Konishi, Takashi Miyoshi
-
Publication number: 20160103715
Abstract: A multithreaded data processing system performs processing using resource circuitry which is a finite resource. A saturation signal is generated to indicate when the resource circuitry is no longer able to perform processing operations issued to it. This saturation signal may be used to select a scheduling algorithm to be used for further scheduling, such as switching to scheduling from a single thread as opposed to round-robin scheduling from all of the threads. Re-execution queue circuitry is used to queue processing operations which have been enabled to be issued, so as to permit other processing operations, which may not be blocked by the saturated circuitry, to attempt issue.
Type: Application
Filed: October 9, 2014
Publication date: April 14, 2016
Inventors: Ankit SETHIA, Scott MAHLKE