Patent Applications Published on March 21, 2019
-
Publication number: 20190087181
Abstract: A storage system includes a management processor and main processors. Each of the main processors is configured to alternately switch between a period in which main function processing, including I/O processing in response to an I/O request from a host, is executed and a period in which a management instruction is executed. The management processor is configured to: manage information associating each of uncompleted management instructions, which are already transmitted to the main processors, with a transmission destination main processor to which each of the uncompleted management instructions is transmitted; select, based on the uncompleted management instructions of the main processors, a transmission destination main processor to which a next management instruction is to be transmitted, from among the main processors; and transmit the next management instruction to the selected transmission destination main processor.
Type: Application
Filed: February 21, 2018
Publication date: March 21, 2019
Inventors: Wataru OKADA, Keisuke OKAMURA
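The routing scheme in 20190087181 can be sketched as below. The abstract says only that selection is "based on the uncompleted management instructions"; the concrete least-loaded policy and the class/method names here are assumptions for illustration, not the patented method:

```python
# Sketch of the management-instruction routing described in the abstract.
# The "fewest uncompleted instructions" policy is an assumption; the
# abstract does not name a specific selection rule.
class ManagementProcessor:
    def __init__(self, main_processors):
        self.main_processors = main_processors
        self.uncompleted = {}  # instruction id -> destination main processor

    def transmit(self, instruction_id):
        # Count uncompleted instructions currently assigned to each processor.
        load = {p: 0 for p in self.main_processors}
        for dest in self.uncompleted.values():
            load[dest] += 1
        # Pick the least-loaded processor as the next destination.
        dest = min(self.main_processors, key=lambda p: load[p])
        self.uncompleted[instruction_id] = dest
        return dest

    def complete(self, instruction_id):
        # A completion report removes the instruction/destination association.
        del self.uncompleted[instruction_id]
```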
-
Publication number: 20190087182
Abstract: In an aspect of the disclosure, a method, a computer-readable medium, and an apparatus are provided. The apparatus receives a first command or data in accordance with a first management protocol from a first device. The apparatus then translates the first command or data into a second command or data in accordance with a second management protocol. The apparatus further sends the second command or data to a second device. One of the first device and the second device is a first managed element managing a first host.
Type: Application
Filed: September 18, 2017
Publication date: March 21, 2019
Inventors: Satheesh Thomas, Aruna Venkataraman, Baskar Parthiban
-
Publication number: 20190087183
Abstract: A method and apparatus for including in a processor instructions for performing logical-comparison and branch support operations on packed or unpacked data. In one embodiment, instruction decode logic decodes instructions for an execution unit to operate on packed data elements including logical comparisons. A register file including 128-bit packed data registers stores packed single-precision floating point (SPFP) and packed integer data elements. The logical comparisons may include comparison of SPFP data elements and comparison of integer data elements and setting at least one bit to indicate the results. Based on these comparisons, branch support actions are taken. Such branch support actions may include setting the at least one bit, which in turn may be utilized by a branching unit in response to a branch instruction. Alternatively, the branch support actions may include branching to an indicated target code location.
Type: Application
Filed: October 18, 2018
Publication date: March 21, 2019
Inventors: Rajiv KAPOOR, Ronen ZOHAR, Mark J. BUXTON, Zeev SPERBER, Koby GOTTLIEB
-
Publication number: 20190087184
Abstract: Systems and methods are directed to instruction execution in a computer system having an out-of-order instruction picker, which is typically used in computing systems capable of executing multiple instructions in parallel. Such systems are typically block based, and multiple instructions are grouped in execution units such as Reservation Station (RSV) Arrays. If an event, such as an exception, page fault, or similar event occurs, the block may have to be swapped out, that is, removed from execution, until the event clears. When the event clears, the block is brought back to be executed, but will typically be assigned a different RSV Array and re-executed from the beginning of the block. Tagging instructions that may cause such events and then untagging them, by resetting the tag, once they have executed can eliminate much of the typical unnecessary re-execution of instructions.
Type: Application
Filed: September 15, 2017
Publication date: March 21, 2019
Inventors: Vignyan Reddy KOTHINTI NARESH, Lisa HSU, Vinay MURTHY, Anil KRISHNA, Gregory WRIGHT, III
-
Publication number: 20190087185
Abstract: An electronic device and a control method thereof are provided. The electronic device includes a memory configured to include a non-secure region operating in a normal world and a secure region operating in a secure world, and a processor configured to selectively operate in one of the normal world and the secure world, check integrity of a plurality of code blocks loaded on a first area of the non-secure region while operating in the secure world, and when one of the plurality of code blocks is compromised, change a memory region corresponding to a compromised code block to a secure region, and load an original code block of the compromised code block on a second area of the non-secure region.
Type: Application
Filed: September 17, 2018
Publication date: March 21, 2019
Inventor: In-ho KIM
-
Publication number: 20190087186
Abstract: A data processing apparatus including a waveform data acquisition unit which acquires waveform data of a consumption current and/or a voltage of a target device, a feature value extraction unit which extracts a waveform feature value from the waveform data, an environment data acquisition unit which acquires environment data indicating an environment of the target device at the time when the waveform data is acquired, an operation state data acquisition unit which acquires operation state data indicating an operation state of the target device at the time the waveform data is acquired, a distance calculation unit which calculates a distance between each of members including the waveform feature value, the environment data, and the operation state data, and each of a plurality of reference members, a grouping unit which groups the members, and a registration unit which registers a group satisfying a predetermined condition as training data.
Type: Application
Filed: January 20, 2017
Publication date: March 21, 2019
Applicant: NEC CORPORATION
Inventor: Kaoru ENDO
-
Publication number: 20190087187
Abstract: Predicting a Table of Contents (TOC) pointer value responsive to branching to a subroutine. A subroutine is called from a calling module executing on a processor. Based on calling the subroutine, a value of a pointer to a reference data structure, such as a TOC, is predicted. The predicting is performed prior to executing a sequence of one or more instructions in the subroutine to compute the value. The value that is predicted is used to access the reference data structure to obtain a variable value for a variable of the subroutine.
Type: Application
Filed: September 19, 2017
Publication date: March 21, 2019
Inventors: Michael K. Gschwind, Valentina Salapura
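The prediction mechanism in 20190087187 (and its sibling filings below) can be pictured as a small table keyed by callee entry address. This is a minimal sketch under assumed structure; the abstract does not specify how the predictor is indexed or trained:

```python
# Sketch of TOC-pointer prediction: a predictor keyed by the callee's
# entry address returns the last TOC value computed there, so the
# subroutine's reference data structure can be accessed before the
# TOC-computing instruction sequence actually runs.
class TocPredictor:
    def __init__(self):
        self.table = {}  # callee entry address -> predicted TOC value

    def predict(self, callee):
        # Consulted at call time, prior to the callee computing its TOC.
        return self.table.get(callee)

    def train(self, callee, actual_toc):
        # Called once the in-subroutine sequence computes the real value.
        self.table[callee] = actual_toc
```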
-
Publication number: 20190087188
Abstract: An apparatus to facilitate register allocation is disclosed. The apparatus includes an execution unit (EU) to execute processing threads. The EU includes a plurality of registers and register allocation logic to map the plurality of registers into logical register banks and allocate the processing threads to one or more of the logical register banks.
Type: Application
Filed: September 19, 2017
Publication date: March 21, 2019
Inventors: Karthik Vaidyanathan, Tomasz Janczak, Travis Schluessler, Subramaniam Maiyuran
-
Publication number: 20190087189
Abstract: Predicting a Table of Contents (TOC) pointer value responsive to branching to a subroutine. A subroutine is called from a calling module executing on a processor. Based on calling the subroutine, a value of a pointer to a reference data structure, such as a TOC, is predicted. The predicting is performed prior to executing a sequence of one or more instructions in the subroutine to compute the value. The value that is predicted is used to access the reference data structure to obtain a variable value for a variable of the subroutine.
Type: Application
Filed: November 21, 2017
Publication date: March 21, 2019
Inventors: Michael K. Gschwind, Valentina Salapura
-
Publication number: 20190087190
Abstract: Table of Contents (TOC)-setting instructions are replaced in code with TOC predicting instructions. A determination is made as to whether code includes an instruction sequence to compute a value of a pointer to a reference data structure, such as a TOC. Based on determining the code includes the instruction sequence, the instruction sequence in the code is replaced with a set instruction. The set instruction predicts the value of the pointer to the reference data structure.
Type: Application
Filed: September 19, 2017
Publication date: March 21, 2019
Inventors: Michael K. Gschwind, Valentina Salapura
-
Publication number: 20190087191
Abstract: Table of Contents (TOC)-setting instructions are replaced in code with TOC predicting instructions. A determination is made as to whether code includes an instruction sequence to compute a value of a pointer to a reference data structure, such as a TOC. Based on determining the code includes the instruction sequence, the instruction sequence in the code is replaced with a set instruction. The set instruction predicts the value of the pointer to the reference data structure.
Type: Application
Filed: November 17, 2017
Publication date: March 21, 2019
Inventors: Michael K. Gschwind, Valentina Salapura
-
Publication number: 20190087192
Abstract: Systems and methods for constructing an instruction slice for prefetching data of a data-dependent load instruction include a slicer for identifying a load instruction in an instruction sequence as a first occurrence of a qualified load instruction which will miss in a last-level cache. A commit buffer stores information pertaining to the first occurrence of the qualified load instruction and shadow instructions which follow. For a second occurrence of the qualified load instruction, an instruction slice is constructed from the information in the commit buffer to form a slice payload. A pre-execution engine pre-executes the instruction slice based on the slice payload to determine an address from which data is to be fetched for execution of a third and any subsequent occurrences of the qualified load instruction. The data is prefetched from the determined address for the third and any subsequent occurrence of the qualified load instruction.
Type: Application
Filed: September 21, 2017
Publication date: March 21, 2019
Inventors: Shivam PRIYADARSHI, Rami Mohammad A. AL SHEIKH, Brandon DWIEL, Derek HOWER
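The three-occurrence protocol in 20190087192 (record, then build the slice, then prefetch) can be sketched as a small state machine. The slice construction and address computation themselves are elided here; the class and return labels are illustrative assumptions:

```python
# Simplified control flow for the abstract's protocol: record on the
# first occurrence of a qualified load, construct the slice on the
# second, and prefetch on the third and later occurrences.
class Slicer:
    def __init__(self):
        self.occurrences = {}    # load PC -> times seen
        self.commit_buffer = {}  # load PC -> info recorded on 1st occurrence
        self.slices = {}         # load PC -> slice payload built on 2nd

    def on_qualified_load(self, pc, info=None):
        n = self.occurrences.get(pc, 0) + 1
        self.occurrences[pc] = n
        if n == 1:
            self.commit_buffer[pc] = info  # record load + shadow instructions
            return "record"
        if n == 2:
            # Build the slice payload from the commit-buffer contents.
            self.slices[pc] = self.commit_buffer[pc]
            return "build-slice"
        return "prefetch"  # pre-execute the slice, prefetch from its address
```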
-
Publication number: 20190087193
Abstract: Systems and methods for branch prediction include identifying a subset of branch instructions from an execution trace of instructions executed by a processor. The identified subset of branch instructions have greater benefit from branch predictions made by a neural branch predictor than branch predictions made by a non-neural branch predictor. During runtime, the neural branch predictor is selectively used for obtaining branch predictions of the identified subset of branch instructions. For remaining branch instructions outside the identified subset of branch instructions, branch predictions are obtained from a non-neural branch predictor. Further, a weight vector matrix comprising weight vectors for the identified subset of branch instructions of the neural branch predictor is pre-trained based on the execution trace.Type: Application
Filed: September 21, 2017
Publication date: March 21, 2019
Inventors: Gurkanwal BRAR, Christopher AHN, Gurvinder Singh CHHABRA
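The selective scheme in 20190087193 can be sketched in two steps: offline, keep only the branches where the neural predictor outperformed the non-neural one on the trace; at runtime, route each lookup to whichever predictor owns the branch. The trace encoding and function names are assumptions:

```python
# Offline step: from a trace of (branch_pc, neural_correct, simple_correct)
# tuples, identify branches that benefit more from the neural predictor.
def identify_subset(trace):
    wins = {}
    for pc, neural_ok, simple_ok in trace:
        n, s = wins.get(pc, (0, 0))
        wins[pc] = (n + neural_ok, s + simple_ok)
    return {pc for pc, (n, s) in wins.items() if n > s}

# Runtime step: use the neural predictor only for the identified subset.
def predict(pc, subset, neural, simple):
    return neural(pc) if pc in subset else simple(pc)
```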
-
Publication number: 20190087194
Abstract: Embodiments of the present invention include methods, systems, and computer program products for implementing a split store data queue for an out-of-order (OoO) processor. A non-limiting example of the computer-implemented method includes detecting, by the OoO processor, a mode of the OoO processor. The method further includes partitioning, by the OoO processor, a first store data queue (SDQ) and a second SDQ based at least in part on the mode of the OoO processor. The method further includes receiving, by the OoO processor, a vector operand. The method further includes storing, by the OoO processor, the vector operand in at least one of the first SDQ and the second SDQ based at least in part on the mode of the OoO processor.
Type: Application
Filed: September 20, 2017
Publication date: March 21, 2019
Inventors: Bryan J. Lloyd, Balaram Sinharoy
-
Publication number: 20190087195
Abstract: Embodiments of the present invention include methods, systems, and computer program products for allocating and deallocating reorder queue entries for an out-of-order (OoO) processor. An example method includes dividing the reorder queue into a plurality of regions to store reorder queue entries; allocating a plurality of reorder queue entries into an instruction tag array for tracking the reorder queue entries based at least in part on an associated instruction tag; loading instruction tags into each region of the plurality of regions beginning with a first region of the plurality of regions, wherein a first plurality of instruction tags is loaded into the first region; deallocating all of the first plurality of instruction tags of the first region; and subsequent to all of the instruction tags of the first region being deallocated, loading a second plurality of instruction tags to the first region of the plurality of regions.
Type: Application
Filed: September 20, 2017
Publication date: March 21, 2019
Inventors: Bryan Lloyd, Balaram Sinharoy
-
Publication number: 20190087196
Abstract: Aspects of the invention include a computer-implemented method for executing one or more instructions by a processing unit. The method includes fetching, by an instruction fetch unit, a first instruction from an instruction cache. The method further includes associating, by an effective address table logic, an entry in an effective address table (EAT) with the first instruction. The method further includes fetching, by the instruction fetch unit, a second instruction from the instruction cache, wherein the first instruction occurs before a branch has been taken and the second instruction occurs after the branch has been taken. The method further includes associating at least a portion of the entry in the EAT associated with the first instruction with the second instruction, in response to the second instruction utilizing a cache line utilized by the first instruction, and processing the first instruction and the second instruction through a processor pipeline utilizing the entry of the EAT.
Type: Application
Filed: September 20, 2017
Publication date: March 21, 2019
Inventors: Richard J. Eickemeyer, Balaram Sinharoy
-
Publication number: 20190087197
Abstract: A processor may include a reorder buffer, reservation stations, and execution units. The reorder buffer may be a circular buffer with a head pointer and a tail pointer, configured to assign indexes to instructions. Reservation stations may be configured to host instructions with the assigned indexes, while waiting to be issued to the execution units. Responsive to an exception event, reservation stations may be configured to flush instructions that are younger, in program order, than the instruction executed with exception. Execution units may provide the reorder buffer index EX of the instruction executed with exception. The reorder buffer may provide the reorder buffer index TP stored in the tail pointer. Reservation stations may be configured to flush instructions with assigned indexes in the wrapped-around increasing interval from the index EX to the index TP.
Type: Application
Filed: August 7, 2018
Publication date: March 21, 2019
Inventor: Dejan Spasov
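The "wrapped-around increasing interval from the index EX to the index TP" in 20190087197 is a membership test on a circular index space. A minimal sketch, with assumed endpoint conventions (strictly after EX, up to and including TP; the abstract does not pin these down):

```python
# Does reorder-buffer index `idx` fall in the wrapped-around increasing
# interval from EX (the excepting instruction's index) to TP (the tail
# pointer's index)? Entries in this interval are younger and get flushed.
def in_flush_interval(idx, ex, tp):
    if ex < tp:
        return ex < idx <= tp          # plain interval, no wrap
    if ex > tp:
        return idx > ex or idx <= tp   # interval wraps past the buffer end
    return False                       # EX == TP: empty interval
```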
-
Publication number: 20190087198
Abstract: A method for camera processing using a camera application programming interface (API) is described. A processor executing the camera API may be configured to receive instructions that specify a use case for a camera pipeline, the use case defining at least one or more processing engines of a plurality of processing engines for processing image data with the camera pipeline, wherein the plurality of processing engines includes one or more of fixed-function image signal processing nodes internal to a camera processor and one or more processing engines external to the camera processor. The processor may be further configured to route image data to the one or more processing engines specified by the instructions, and return the results of processing the image data with the one or more processing engines to the application.
Type: Application
Filed: September 21, 2017
Publication date: March 21, 2019
Inventors: Christopher Paul Frascati, Rajakumar Govindaram, Hitendra Mohan Gangani, Murat Balci, Lida Wang, Avinash Seetharamaiah, Mansoor Aftab, Rajdeep Ganguly, Josiah Vivona
-
Publication number: 20190087199
Abstract: An electronic apparatus includes a first processor configured to restrict direct memory access by one or more peripheral circuits to a volatile memory, and thereafter make a transition from an active state to a sleep state, and a second processor configured to, after the first processor has been brought into the sleep state, set the volatile memory into a self-refresh mode in which a refresh circuit of the volatile memory periodically rewrites data stored in the volatile memory, and thereafter reboot the electronic apparatus.
Type: Application
Filed: September 12, 2018
Publication date: March 21, 2019
Applicant: Brother Kogyo Kabushiki Kaisha
Inventor: Tsutomu Tanaka
-
Publication number: 20190087200
Abstract: A method for displaying an animation by a display chip of an electronic device, which includes a non-volatile memory and a random-access memory. The display chip includes a video output register and a display register. The method includes a first static programming phase including configuring the video output register; writing n images in the memory, n being an integer higher than or equal to two; writing into the memory a plurality of nodes, such that each node includes the address in the memory of at least one portion of an image, as well as the address of the following node in the memory, the last node including the address in the random-access memory of the first node; and configuring the display register. The method also includes a second phase in which the n images are read by the display chip via the display register, to display the animation.
Type: Application
Filed: February 27, 2017
Publication date: March 21, 2019
Inventor: Julien BELLANGER
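The node chain in 20190087200 is a circular linked list: each node points at an image (or image portion) and at the next node, and the last node points back to the first, so the display hardware can loop over the n images indefinitely. A minimal sketch with assumed dict-based nodes standing in for in-memory structures:

```python
# Build the ring of nodes described in the abstract: each node holds an
# image address plus the address of the following node; the last node
# wraps back to the first.
def build_node_ring(image_addrs):
    nodes = [{"image": addr, "next": None} for addr in image_addrs]
    for i, node in enumerate(nodes):
        node["next"] = nodes[(i + 1) % len(nodes)]  # last wraps to first
    return nodes[0]

# Emulate the display register walking the ring for `count` frames.
def read_frames(first, count):
    frames, node = [], first
    for _ in range(count):
        frames.append(node["image"])
        node = node["next"]
    return frames
```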
-
Publication number: 20190087201
Abstract: Systems and methods for utilizing a defect map to avoid defects when configuring an automata processor. A system includes an automata processor having a state machine lattice. The system also includes a non-volatile memory having a defect map stored thereon and indicating logical defects found on the automata processor. By including the defect map, a compiler may access the defect map to map out defects in the automata processor during configuration to avoid such defects.
Type: Application
Filed: November 20, 2018
Publication date: March 21, 2019
Inventor: Dale Hiscock
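The compiler's use of the defect map in 20190087201 amounts to placing automaton states only on non-defective lattice elements. A minimal sketch, assuming the defect map is encoded as a set of defective element ids (the patent does not specify the encoding):

```python
# Defect-aware placement sketch: assign each automaton state to the
# first available lattice element that is not in the defect map.
def place_states(num_states, lattice_size, defect_map):
    usable = [e for e in range(lattice_size) if e not in defect_map]
    if len(usable) < num_states:
        raise ValueError("not enough defect-free elements")
    # state id -> lattice element id, skipping defective elements
    return {state: usable[state] for state in range(num_states)}
```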
-
Publication number: 20190087202
Abstract: The present disclosure describes a number of embodiments related to devices, systems, and methods related to a plurality of displays coupled to one or more processors to display images, and a device display manager to identify a gesture made on a first display of the plurality of displays, and to cause a second display to sleep or to wake based upon the identified gesture and a current state of the second display, where the first and second displays are different displays.
Type: Application
Filed: September 21, 2017
Publication date: March 21, 2019
Inventors: TARAKESAVA REDDY KOKI, JAGADISH V. SINGH
-
Publication number: 20190087203
Abstract: A system includes a processor configured to determine a set of context-variable values, responsive to a restriction imposition resulting in a limited-display capability for displaying selectable application icons. The processor is also configured to determine a correlation between the context-variable values and context states saved for each of a plurality of applications displayable as selectable application icons and display a predefined number of the plurality of selectable application icons corresponding to the applications having the highest correlation with the context-variable values.
Type: Application
Filed: September 20, 2017
Publication date: March 21, 2019
Inventors: Jeffrey Yizhou HU, Michael CRIMANDO
-
Publication number: 20190087204
Abstract: A technique to manage software licensing in an environment that provides virtual desktop infrastructure (VDI). A license manager is configured to receive first information identifying software applications associated with a virtual machine template used in the infrastructure, as well as second information that a user has logged into the VDI from a client device, thereby creating a VDI session. For a particular time period of interest, the license manager calculates software application usage information from the first and second information. Preferably, the software application usage information represents an application count that is based on the user and the client device "pair" when the user has the VDI session during at least some portion of the time period. The software application usage information is provided to one or more other computing systems to take a given action, such as tracking, managing, auditing, enforcing and accounting for software usage in the VDI environment.
Type: Application
Filed: September 15, 2017
Publication date: March 21, 2019
Inventors: Adam Babol, Jan Galda, Piotr P. Godowski, Lukasz Tomasz Jeda, Jacek Midura
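The pair-based count in 20190087204 counts an application once per distinct (user, client device) pair whose VDI session overlaps the period of interest. A minimal sketch with an assumed session-tuple layout:

```python
# Count distinct (user, device) pairs that used `app` during a session
# overlapping [period_start, period_end), per the abstract's rule.
def application_count(sessions, app, period_start, period_end):
    # sessions: list of (user, device, apps_in_template, start, end)
    pairs = set()
    for user, device, apps, start, end in sessions:
        overlaps = start < period_end and end > period_start
        if app in apps and overlaps:
            pairs.add((user, device))  # same pair counted only once
    return len(pairs)
```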
-
Publication number: 20190087205
Abstract: A digital assistant supported on a local device and/or a remote digital assistant service is configured to track contextual data associated with a user and dynamically load or pre-load various modalities to provide increased ease of use for the user. Various modalities can include adjustments to the graphical icons displayed on the user's device, such as the type, shape, color, size, orientation, and position of the icons. The digital assistant may track context data such as the user's location, upcoming schedule in the user's calendar, user interactions with the digital assistant, and the like to determine the best modality for the user. In one exemplary embodiment, the digital assistant may pre-load a modality with travel applications when the digital assistant learns that the user has scheduled a flight. The digital assistant may render the pre-loaded modality when the user arrives at the airport.
Type: Application
Filed: September 18, 2017
Publication date: March 21, 2019
Inventor: Shai Guday
-
Publication number: 20190087206
Abstract: Examples of the present disclosure describe systems and methods for contextual security training. In an example, a user may use a user device to perform a variety of actions within a computing environment. Occasionally, the user may encounter a computer issue, which may be identified by an issue detection processor. In some examples, it may be determined that the user should receive contextual training based on the identified issue so as to improve the likelihood that the user will avoid encountering or experiencing a similar issue in the future. Contextual training may be provided based on whether the user has a high incidence of encountering similar issues, among other criteria. If the criteria are satisfied, contextual training may be mandatory. In an example, contextual training may be adapted based on issue attributes to provide training tailored to a specific issue and/or issue type.
Type: Application
Filed: September 19, 2017
Publication date: March 21, 2019
Applicant: Webroot Inc.
Inventors: Paul Barnes, Niyazi Goknel
-
Publication number: 20190087207
Abstract: Methods and systems for accessing conflicting frameworks and classes are presented. In some embodiments, a conflicting frameworks computing platform may receive an application classloader corresponding to a mobile application. The application classloader may indicate one or more child application-defined classloaders. Subsequently, the conflicting frameworks computing platform may create a framework-defined classloader comprising a first class that conflicts with a second class in the one or more child application-defined classloaders. Further, the conflicting frameworks computing platform may create a framework-termination classloader. The framework-termination classloader may be a parent classloader of the framework-defined classloader. Next, the conflicting frameworks computing platform may replace, using a reflection function, the application classloader with a new application classloader.
Type: Application
Filed: September 21, 2017
Publication date: March 21, 2019
Inventor: James Robert Walker
-
Publication number: 20190087208
Abstract: The present invention discloses a method and apparatus for loading an ELF file of a Linux system into a Windows system. The method comprises: resolving the ELF file in accordance with a format of the ELF file; loading the whole ELF file into a Windows system memory according to a Windows system memory storage rule; acquiring a memory address, of a file content corresponding to a symbol recorded in a symbol table of the ELF file, in the Windows system in accordance with a resolution result of the ELF file; and linking the symbol with the memory address, of the file content corresponding to the symbol, in the Windows system.
Type: Application
Filed: October 31, 2016
Publication date: March 21, 2019
Inventors: Han YAN, Xin RAN, Zhihui LIANG
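The first step in 20190087208, resolving the ELF file in accordance with its format, begins with the fixed 16-byte `e_ident` header defined by the ELF specification. A minimal, standards-based sketch of that step (not the patented loader itself):

```python
import struct

# Validate the ELF magic and pull identification fields out of the
# standard e_ident header; e_type is the 16-bit field at offset 16.
def parse_elf_ident(data):
    if data[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    ei_class, ei_data = data[4], data[5]   # 32/64-bit class, endianness
    endian_fmt = "<H" if ei_data == 1 else ">H"
    return {
        "class": {1: "ELF32", 2: "ELF64"}[ei_class],
        "endian": {1: "little", 2: "big"}[ei_data],
        "type": struct.unpack_from(endian_fmt, data, 16)[0],
    }
```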
-
Publication number: 20190087209
Abstract: Systems and methods improve performance and resource-efficiency of Just-in-Time (JIT) compilation in a hypervisor-based virtualized computing environment. A user attempts to launch an application that has been previously compiled by a JIT compiler into an intermediate, platform-independent format. A JIT accelerator selects a unique function signature that identifies the application and the user's target platform. If the signature cannot be found in a repository, indicating that the application has never been run on the target platform, the accelerator generates and stores the requested executable program in shared memory and saves the signature in the repository. The system then returns to the user a pointer to the stored platform-specific executable. If multiple users of the same platform request the same application, the system recognizes an affinity among those requests identified by their shared signature, and provides each user a pointer to the same previously stored, shared executable.
Type: Application
Filed: July 31, 2018
Publication date: March 21, 2019
Inventors: Rafael Camarda Silva Folco, Plinio A. S. Freire, Breno Henrique Leitao
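The signature-keyed repository in 20190087209 is a cache: the first request for an (application, platform) pair compiles and stores the executable; later requests with the same signature share it. A minimal sketch with assumed names, using a tuple as the signature:

```python
# Sketch of the JIT accelerator's repository: requests sharing the
# (application, target platform) signature share one compiled executable.
class JitAccelerator:
    def __init__(self, compile_fn):
        self.repository = {}      # signature -> compiled executable
        self.compile_fn = compile_fn

    def launch(self, app_id, platform):
        signature = (app_id, platform)  # unique function signature
        if signature not in self.repository:
            # First run on this platform: JIT-compile and store shared copy.
            self.repository[signature] = self.compile_fn(app_id, platform)
        # Later requests get a reference to the same stored executable.
        return self.repository[signature]
```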
-
Publication number: 20190087210
Abstract: Described embodiments provide systems and methods for augmentation, instrumentation, and other runtime modifications of bytecode-based applications through introduction of static and dynamic hooks. In at least one aspect, described is a system for hooking Java native interface calls from native code to Java code in a Java virtual machine. In at least one aspect, described is a system for static hooking of a Windows Universal application. In at least one aspect, described is a system for dynamically hooking a Windows Universal application.
Type: Application
Filed: September 20, 2017
Publication date: March 21, 2019
Inventors: Jeff Dowling, Abraham Mir
-
Publication number: 20190087211
Abstract: Embodiments of this disclosure allow non-position-independent-code to be shared between a closed application and a subsequent application without converting the non-position-independent-code into position-independent-code. In particular, embodiment techniques store live data of a closed application during runtime of the closed application, and thereafter page a portion of the live data that is common to both the closed application and a subsequent application back into volatile memory at the same virtual memory address in which the portion of live data was stored during runtime of the closed application so that the paged live data may be re-used to execute the subsequent application in the managed runtime environment. Because the paged live data is stored at the same virtual memory address during the runtimes of both applications, non-position-independent-code can be shared between the applications.
Type: Application
Filed: February 6, 2018
Publication date: March 21, 2019
Inventors: Kai-Ting Amy Wang, Man Pok Ho, Peng Wu, Haichuan Wang
-
Publication number: 20190087212
Abstract: An Android simulator and a method for implementing an Android simulator are provided, wherein the Android simulator comprises an Android virtual machine and an application running module; the Android virtual machine comprises a data converting unit and a running unit, wherein the data converting unit is configured to convert a data structure of Linux-based Android-related data to a data structure of Windows-based Windows-related data; the running unit is configured to establish and manage a thread and a signal of the Linux system in the Windows system, and manage a memory allocation of the Linux system in the Windows system; and the application running module is configured to run an application in the Android virtual machine.
Type: Application
Filed: November 1, 2016
Publication date: March 21, 2019
Inventors: Han YAN, Xin RAN, Zhihui LIANG
-
Publication number: 20190087213
Abstract: Apparatuses, methods, systems, and program products are disclosed for workload management and distribution. A method includes parking a virtual instance of a workload in a repository. The workload may be executing in a first virtual environment that is configured with a first set of execution parameters prior to being parked. The method includes receiving a request to unpark the virtual instance of the workload from the repository to a second virtual environment. The method includes unparking the virtual instance of the workload at the second virtual environment. The second virtual environment may be configured with a second set of execution parameters that are different than the first set of execution parameters. The virtual instance of the workload may be unparked at the second virtual environment using the second set of execution parameters such that the unparked virtual instance of the workload retains its operating state from the first virtual environment.
Type: Application
Filed: September 20, 2017
Publication date: March 21, 2019
Inventors: Todd Matters, Aniket Kulkarni, Sash Sunkara
-
Publication number: 20190087214
Abstract: Methods and devices for determining settings for a virtual machine may include partitioning a physical network into a plurality of traffic classes. The methods and devices may include determining at least one virtual enhanced transmission selection (ETS) setting for one or more virtual machines, wherein the virtual ETS setting includes at least one virtual traffic class that corresponds to one of the plurality of traffic classes. The methods and devices may include transmitting a notification to the one or more virtual machines identifying the virtual ETS setting.
Type: Application
Filed: September 21, 2017
Publication date: March 21, 2019
Inventors: Khoa Anh TO, Omar CARDONA, Daniel FIRESTONE, Alireza DABAGH
-
Publication number: 20190087215
Abstract: This disclosure generally relates to time and timer techniques that may be used to virtualize one or more virtual machines. In an example, it may be possible to save and restore a timer of a virtual machine while preserving timer information associated with the timer (e.g., an expiration time, whether the most recent expiration has been signaled, and the enable bit, etc.). For example, a first mode may enable restoring a timer based on a previously-existing enable bit, thereby retaining the state of the timer (e.g., whether the timer is programmed to fire and/or whether the most recent expiration has been signaled). By contrast, a second mode of setting a timer may automatically set the enable bit, thereby automatically enabling the timer to fire, as may be expected by a virtual machine when setting a timer.
Type: Application
Filed: January 19, 2018
Publication date: March 21, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Aditya BHANDARI, Bruce J. SHERWIN, JR., Xin David ZHANG
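The two modes in 20190087215 differ only in how they treat the enable bit: restore preserves whatever was saved, while a normal set arms the timer automatically. A minimal sketch under assumed state (expiration plus enable bit; the signaled-expiration flag is omitted for brevity):

```python
# Sketch of the two set-timer modes from the abstract.
class VirtualTimer:
    def __init__(self):
        self.expiration = 0
        self.enabled = False

    def set_timer(self, expiration):
        # Second mode: setting a timer automatically sets the enable bit,
        # as a guest would expect when programming a timer to fire.
        self.expiration = expiration
        self.enabled = True

    def restore(self, expiration, enabled):
        # First mode: restore saved state as-is, including the enable bit,
        # so a timer that was disarmed at save time stays disarmed.
        self.expiration = expiration
        self.enabled = enabled

    def save(self):
        return (self.expiration, self.enabled)
```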
-
Publication number: 20190087216
Abstract: This disclosure generally relates to hypervisor memory virtualization. In an example, multiple page table stages may be used to provide a page table that may be used by a processor when processing a workload for a nested virtual machine. An intermediate (e.g., nested) hypervisor may request an additional page table stage from a parent hypervisor, which may be used to virtualize memory for one or more nested virtual machines managed by the intermediate hypervisor. Accordingly, a processor may use the additional page table stages to ultimately translate a virtual memory address for a nested virtual machine to a physical memory address.
Type: Application
Filed: January 19, 2018
Publication date: March 21, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Aditya BHANDARI, Bruce J. SHERWIN, JR., Xin David ZHANG
-
Publication number: 20190087217
Abstract: This disclosure generally relates to hypervisor memory virtualization. In an example, translation lookaside buffer (TLB) invalidation requests may be selectively delivered to processors to which they relate or may be ignored by processors to which they do not relate, so as to minimize the processing overhead that may be ordinarily associated with such TLB invalidation requests. In another example, a TLB invalidation request may be suspended in order to enable a hypervisor to finish executing instructions relating to one or more TLB entries that would be affected by the TLB invalidation request.
Type: Application
Filed: January 19, 2018
Publication date: March 21, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Aditya BHANDARI, Bruce J. SHERWIN, JR., Xin David ZHANG
-
Publication number: 20190087218
Abstract: A virtual machine (VM) can provision a region of memory for a queue to receive packet header, packet payload, and/or descriptors from the network interface. A virtual switch can provide a routing rule to a network interface to route a received packet header, packet payload, and/or descriptors associated with the VM to the provisioned queue. A direct memory access (DMA) transfer operation can be used to copy the received packet header, packet payload, and/or descriptors associated with the VM from the network interface to the provisioned queue without copying the packet header or payload to an intermediate buffer and from the intermediate buffer to the provisioned queue. A DMA operation can be used to transfer a packet or its descriptor from the provisioned queue to the network interface for transmission.Type: Application
Filed: November 5, 2018
Publication date: March 21, 2019
Inventors: Ciara LOFTUS, Subarna KAR, Namakkal VENKATESAN, Mark D. GRAY
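The receive path this abstract describes can be sketched as a routing rule that maps a packet match to a VM-provisioned queue, with matching packets landing in that queue in a single copy. A toy model under assumed names (`Nic`, `add_routing_rule`), not the patent's actual mechanism:

```python
# Toy model of the described flow: a virtual switch installs a routing
# rule on the NIC, and matching packets are "DMA'd" straight into the
# VM's provisioned queue with no intermediate-buffer copy.

from collections import deque

class Nic:
    def __init__(self):
        self.rules = {}                 # dest MAC -> provisioned VM queue

    def add_routing_rule(self, dest_mac, queue):
        self.rules[dest_mac] = queue

    def receive(self, dest_mac, payload):
        queue = self.rules.get(dest_mac)
        if queue is not None:
            queue.append(payload)       # single copy, NIC -> VM queue
            return True
        return False                    # no rule: fall back to default path


vm_queue = deque()                       # region provisioned by the VM
nic = Nic()
nic.add_routing_rule("aa:bb:cc:dd:ee:ff", vm_queue)

assert nic.receive("aa:bb:cc:dd:ee:ff", b"pkt0")      # routed to VM queue
assert not nic.receive("11:22:33:44:55:66", b"pkt1")  # no rule installed
assert vm_queue.popleft() == b"pkt0"
```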
-
Publication number: 20190087219
Abstract: Implementations of the disclosure describe manageable external wake of virtual machines. In one implementation, a method of the disclosure includes receiving, by a processor of a computer system, a message generated by a hardware device of the computer system while a virtual machine that is hosted by the computer system is asleep. The method further includes determining, by the processor, whether to wake the virtual machine in view of a hardware event of the hardware device that generated the message.
Type: Application
Filed: November 16, 2018
Publication date: March 21, 2019
Inventors: Michael Tsirkin, Dor Laor
-
Publication number: 20190087220
Abstract: A hyperconverged system is provided which comprises an orchestrator which installs and coordinates container pods on a cluster of container hosts; a plurality of containers installed by said orchestrator and running on a host operating system kernel cluster; and a configurations database in communication with said orchestrator by way of an application programming interface, wherein said configurations database provides shared configuration and service discovery for said cluster, and wherein said configurations database is readable and writable by containers installed by said orchestrator.
Type: Application
Filed: May 19, 2017
Publication date: March 21, 2019
Inventor: William Jason Turner
-
Publication number: 20190087221
Abstract: The present invention discloses a thread processor and a thread processing method. The thread processor implements processing of a Linux thread based on a Windows system and comprises: a thread function converting module, configured to convert a processing function of a Linux thread to a processing function of a corresponding Windows thread by resolving the processing function of the Linux thread; a thread data structure converting module, configured to convert a data structure applicable to the Linux thread to a data structure applicable to the corresponding Windows thread by resolving the data structure of the Linux thread; and a thread blocking management module, configured to process blocking of a Windows thread by performing a cyclic detection on the Windows thread running in the Windows system by means of a function conversion and a data structure conversion.
Type: Application
Filed: October 31, 2016
Publication date: March 21, 2019
Inventors: Han YAN, Xin RAN, Zhihui LIANG
-
Publication number: 20190087222
Abstract: This disclosure generally relates to enabling a hypervisor of a host machine to provide virtual interrupts to select virtual processors or a set of virtual processors. More specifically, the present disclosure describes how interrupts may be provided to targeted virtual processors, regardless of where the virtual processors are currently executing. That is, when an interrupt is received, the interrupt may be delivered to a specified virtual processor regardless of which logical processor is currently hosting the virtual processor.
Type: Application
Filed: January 19, 2018
Publication date: March 21, 2019
Inventors: Aditya BHANDARI, Bruce J. SHERWIN, JR., Xin David ZHANG
-
Publication number: 20190087223
Abstract: This disclosure generally relates to enabling a hypervisor of a host machine to provide virtual interrupts to select virtual processors or a set of virtual processors. More specifically, the present disclosure describes how a hypervisor of a host machine may monitor the status of one or more virtual processors that are executing on the host machine and deliver interrupts to the virtual processors based on a number of factors including, but not limited to, a priority of the interrupt, a priority of the virtual processor, a current workload of the virtual processor and so on.
Type: Application
Filed: January 19, 2018
Publication date: March 21, 2019
Inventors: Aditya BHANDARI, Bruce J. SHERWIN, JR., Xin David ZHANG
-
Publication number: 20190087224
Abstract: Various example embodiments herein provide a computerized method for scheduling a plurality of tasks for an operating system on a multicore processor. The method includes identifying the plurality of tasks to be executed on the multicore processor and determining a task schedule for scheduling of the plurality of tasks by providing a higher preference to the CPU-bound task than the non CPU-bound task. Further, the method includes scheduling the plurality of tasks on the multicore processor based on the task schedule.
Type: Application
Filed: August 2, 2018
Publication date: March 21, 2019
Applicant: Samsung Electronics Co., Ltd.
Inventors: Tushar VRIND, Chandan Kumar, Raju Udava Siddappa, Balaji Somu Kandaswamy, Venkata Raju Indukuri
-
Publication number: 20190087225
Abstract: Systems, apparatuses and methods may provide for technology that assigns a plurality of data portions associated with a workload to a plurality of cores, wherein each data portion from the plurality of data portions is only modifiable by a respective one of the plurality of cores. The technology may further pass a message between the plurality of cores to modify one or more of the data portions in response to an identification that the one or more of the data portions are unmodifiable by one or more of the plurality of cores.
Type: Application
Filed: November 15, 2018
Publication date: March 21, 2019
Inventors: Piotr Rozen, Sagar Koorapati
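The single-writer scheme described here — each data portion modifiable by exactly one core, with other cores requesting changes by message — can be modeled simply. All names (`Core`, `owner_of`, `modify`) are illustrative assumptions:

```python
# Sketch of single-writer core ownership: a core modifies its own data
# portions directly, and sends a message to the owning core when it
# identifies a portion it is not allowed to modify.

class Core:
    def __init__(self, core_id):
        self.core_id = core_id
        self.data = {}          # portions this core alone may modify
        self.inbox = []         # pending modification requests

    def deliver(self):
        # Apply queued modification requests from other cores.
        for key, value in self.inbox:
            self.data[key] = value
        self.inbox.clear()


def modify(cores, owner_of, requester, key, value):
    owner = cores[owner_of[key]]
    if owner.core_id == requester:
        owner.data[key] = value            # local portion: modify directly
    else:
        owner.inbox.append((key, value))   # foreign portion: pass a message


cores = [Core(0), Core(1)]
owner_of = {"flows": 0, "stats": 1}

modify(cores, owner_of, requester=0, key="stats", value=7)   # cross-core
assert "stats" not in cores[1].data      # not applied until message delivery
cores[1].deliver()
assert cores[1].data["stats"] == 7
```

Because only the owner ever writes a portion, no locks are needed on the data itself; contention moves into the message queues.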
-
Publication number: 20190087226
Abstract: Performance-hint-driven dynamic resource management, including: receiving workload requirements and sensor inputs of a system; determining a new allocation for resources of the system; reconfiguring the resources of the system using the new allocation; evaluating performance of the system based on the reconfigured resources of the system; and generating performance hints based on the evaluated performance of the system.
Type: Application
Filed: September 20, 2017
Publication date: March 21, 2019
Inventors: Suryanarayana Raju KATARI, Terance WIJESINGHE, Amir VAJID, Krishna VSSSR VANKA
-
Publication number: 20190087227
Abstract: The invention relates in particular to optimizing memory access in a microprocessor including several logic cores upon the resumption of executing a main application, and enabling the simultaneous execution of at least two processes in an environment including a hierarchically organized shared memory including a top portion and a bottom portion, a datum being copied from the bottom portion to the top portion for processing by the application. The computer is adapted to interrupt the execution of the main application. Upon an interruption in the execution of said application, a reference to a datum stored in a top portion of the memory is stored, wherein said datum must be used in order to enable the execution of the application. After programming a resumption of the execution of the application and before the resumption thereof, said datum is accessed in a bottom portion of the memory in accordance with the reference to be stored in a top portion of the memory.
Type: Application
Filed: July 3, 2018
Publication date: March 21, 2019
Inventors: Philippe COUVEE, Yann KALEMKARIAN, Benoît WELTERLEN
-
Publication number: 20190087228
Abstract: A system receives a time series of data values from instrumented software executing on an external system. Each data value corresponds to a metric of the external system, e.g., a metric related to a potential resource shortage event. The system stores a level value representing a current estimate of the time series and a trend value representing a trend in the time series. The level and trend values are based on data in a window having a trailing value. In response to receiving a most recent value, the system updates the level value and the trend value to add an influence of the most recent value and remove an influence of the trailing value. The system forecasts based on the updated level and trend values, and in response to determining that the forecast indicates the potential resource shortage event, takes action, e.g., assigning additional resources to the instrumented software.
Type: Application
Filed: September 12, 2018
Publication date: March 21, 2019
Inventor: Joseph Ari Ross
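The level/trend update described here resembles Holt's linear (double exponential) smoothing; the patent's sliding-window variant, which explicitly removes the trailing value's influence, may differ in detail. A sketch with illustrative smoothing constants:

```python
# Holt-style level/trend forecast: the level tracks the current estimate
# of the series, the trend tracks its slope, and the forecast projects
# `horizon` steps ahead. Smoothing constants are illustrative.

def holt_forecast(series, alpha=0.5, beta=0.3, horizon=3):
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend      # forecast `horizon` steps ahead


# Memory use climbing roughly linearly: the forecast keeps rising, which
# could trigger a resource-shortage action before hard limits are hit.
usage = [10, 12, 14, 17, 19, 22]
forecast = holt_forecast(usage)
assert forecast > usage[-1]             # projected shortage ahead of time
```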
-
Publication number: 20190087229
Abstract: A memory subsystem for use with a single-instruction multiple-data (SIMD) processor comprising a plurality of processing units configured for processing one or more workgroups each comprising a plurality of SIMD tasks, the memory subsystem comprising: a shared memory partitioned into a plurality of memory portions for allocation to tasks that are to be processed by the processor; and a resource allocator configured to, in response to receiving a memory resource request for first memory resources in respect of a first-received task of a workgroup, allocate to the workgroup a block of memory portions sufficient in size for each task of the workgroup to receive memory resources in the block equivalent to the first memory resources.
Type: Application
Filed: September 17, 2018
Publication date: March 21, 2019
Inventors: Luca Iuliano, Simon Nield, Yoong-Chert Foo, Ollie Mower, Jonathan Redshaw
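The allocator behavior described — size the whole workgroup's block from the first task's request, so every task in the group receives an equivalent slice — reduces to a short computation. The portion size and rounding are illustrative assumptions:

```python
# Sketch of the described allocation: the first-received task's request
# sizes the block for the whole workgroup, rounded up to whole memory
# portions, so each task gets resources equivalent to the first request.

def allocate_workgroup_block(first_request_bytes, tasks_per_workgroup,
                             portion_size=256):
    # Round the per-task request up to whole memory portions (ceil div).
    portions_per_task = -(-first_request_bytes // portion_size)
    total_portions = portions_per_task * tasks_per_workgroup
    return total_portions * portion_size


block = allocate_workgroup_block(first_request_bytes=300,
                                 tasks_per_workgroup=8)
assert block == 2 * 256 * 8   # 300 B rounds up to two 256 B portions per task
```

Allocating the whole block up front means later tasks of the same workgroup never wait on the allocator, at the cost of over-provisioning when the first request is not representative.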
-
Publication number: 20190087230
Abstract: Methods, systems, and computer readable media may be operable to facilitate an anticipation of an execution of a process termination tool. An allocation stall counter may be queried at a certain frequency, and from the query of the allocation stall counter, a number of allocation stall counter increments occurring over a certain duration of time may be determined. If the number of allocation stall counter increments is greater than a threshold, a determination may be made that system memory is running low and that an execution of a process termination tool is imminent. In response to the determination that system memory is running low, a flag indicating that system memory is running low may be set, and one or more programs, in response to reading the flag, may free memory that is not necessary or required for execution.
Type: Application
Filed: September 10, 2018
Publication date: March 21, 2019
Inventors: Doug R. Szperka, Ernest G. Schmitt, Rathnakar Shetty, Sandeep Guddekoppa Suresh
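The heuristic described reduces to comparing stall-counter increments over a sampling window against a threshold. A minimal sketch with illustrative names and values:

```python
# Sketch of the described heuristic: poll an allocation-stall counter,
# and if it incremented more than a threshold over the sampling window,
# raise a low-memory flag so programs can free non-essential memory
# before a process-termination tool (e.g., an OOM killer) runs.

def check_memory_pressure(prev_count, curr_count, threshold):
    # True (set the low-memory flag) when stall increments over the
    # window exceed the threshold.
    return (curr_count - prev_count) > threshold


# 80 stall increments in one window with a threshold of 50: flag goes up.
assert check_memory_pressure(prev_count=100, curr_count=180, threshold=50)

# Only 10 increments the next window: pressure has eased, flag stays down.
assert not check_memory_pressure(prev_count=180, curr_count=190, threshold=50)
```

On Linux, the counter polled this way could be a value like `allocstall` from `/proc/vmstat`, though the abstract does not name a specific source.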