Patent Applications Published on July 9, 2020
  • Publication number: 20200218539
    Abstract: A graphics processing device comprises a set of compute units to execute multiple threads of a workload, a cache coupled with the set of compute units, and a prefetcher to prefetch instructions associated with the workload. The prefetcher is configured to use a thread dispatch command that is used to dispatch threads to execute a kernel to prefetch instructions, parameters, and/or constants that will be used during execution of the kernel. Prefetch operations for the kernel can then occur concurrently with thread dispatch operations.
    Type: Application
    Filed: January 9, 2019
    Publication date: July 9, 2020
    Applicant: Intel Corporation
    Inventors: JAMES VALERIO, VASANTH RANGANATHAN, JOYDEEP RAY, PRADEEP RAMANI
  • Publication number: 20200218540
    Abstract: In an embodiment, a processor includes a buffer in an interface unit. The buffer may be used to accumulate coprocessor instructions to be transmitted to a coprocessor. In an embodiment, the processor issues the coprocessor instructions to the buffer when ready to be issued to the coprocessor. The interface unit may accumulate the coprocessor instructions in the buffer, generating a bundle of instructions. The bundle may be closed based on various predetermined conditions and then the bundle may be transmitted to the coprocessor. If a sequence of coprocessor instructions appears consecutively in a program, the rate at which the instructions are provided to the coprocessor (on average) at least matches the rate at which the coprocessor consumes the instructions, in an embodiment.
    Type: Application
    Filed: January 8, 2019
    Publication date: July 9, 2020
    Inventors: Aditya Kesiraju, Brett S. Feero, Nikhil Gupta, Viney Gautam
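    The accumulate-and-bundle scheme this abstract describes can be sketched in a few lines. This is an illustrative software model, not the patented hardware: the `BundleBuffer` class, the bundle size, and the instruction names are all assumptions, and closing a bundle when it reaches a maximum size stands in for the patent's "various predetermined conditions".

```python
# Illustrative sketch: an interface-unit buffer that accumulates coprocessor
# instructions into bundles and transmits a bundle once it is closed.
class BundleBuffer:
    def __init__(self, max_bundle_size=4):
        self.max_bundle_size = max_bundle_size
        self.pending = []          # instructions accumulated so far
        self.transmitted = []      # bundles already sent to the coprocessor

    def issue(self, instruction):
        """Issue one coprocessor instruction into the buffer."""
        self.pending.append(instruction)
        if len(self.pending) >= self.max_bundle_size:
            self.close_bundle()    # one of the bundle-closing conditions

    def close_bundle(self):
        """Close the current bundle and transmit it to the coprocessor."""
        if self.pending:
            self.transmitted.append(tuple(self.pending))
            self.pending = []

buf = BundleBuffer(max_bundle_size=2)
for insn in ["cop_add", "cop_mul", "cop_load"]:
    buf.issue(insn)
buf.close_bundle()   # e.g. end of a consecutive coprocessor sequence
```

    Bundling amortizes per-transfer overhead, which is how the average delivery rate can keep up with the coprocessor's consumption rate.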
  • Publication number: 20200218541
    Abstract: A method for camera processing using a camera application programming interface (API) is described. A processor executing the camera API may be configured to receive instructions that specify a use case for a camera pipeline, the use case defining at least one or more processing engines of a plurality of processing engines for processing image data with the camera pipeline, wherein the plurality of processing engines includes one or more of fixed-function image signal processing nodes internal to a camera processor and one or more processing engines external to the camera processor. The processor may be further configured to route image data to the one or more processing engines specified by the instructions, and return the results of processing the image data with the one or more processing engines to the application.
    Type: Application
    Filed: March 13, 2020
    Publication date: July 9, 2020
    Inventors: Christopher Paul Frascati, Rajakumar Govindaram, Hitendra Mohan Gangani, Murat Balci, Lida Wang, Avinash Seetharamaiah, Mansoor Aftab, Rajdeep Ganguly, Josiah Vivona
  • Publication number: 20200218542
    Abstract: Even when the cores of a multi-core system operate asynchronously, a data set can be transmitted between the cores with guaranteed simultaneity while improving the real-time behavior of the cores' processing. Bank memories are provided, along with a write core and read cores that can access them. An access control unit assigns exactly one write core to the bank memories in which writing is performed, assigns one or more read cores to the bank memories in which reading is performed, and exclusively controls access to the bank memories such that the banks being written and the banks being read are never the same.
    Type: Application
    Filed: September 14, 2018
    Publication date: July 9, 2020
    Inventor: Junji MIYAKE
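    The exclusivity rule at the heart of this abstract can be modeled compactly. This is an illustrative sketch only — the class and method names are invented, and the real arbitration happens in hardware: at most one writer per bank, and a bank is never read and written at the same time.

```python
# Sketch of the access control unit's invariant: the set of banks being
# read and the bank being written must never overlap.
class BankAccessControl:
    def __init__(self, num_banks):
        self.num_banks = num_banks
        self.write_bank = None     # at most one bank assigned for writing
        self.read_banks = set()    # banks assigned to one or more read cores

    def assign_writer(self, bank):
        if bank in self.read_banks:
            raise RuntimeError(f"bank {bank} is currently being read")
        self.write_bank = bank

    def assign_reader(self, bank):
        if bank == self.write_bank:
            raise RuntimeError(f"bank {bank} is currently being written")
        self.read_banks.add(bank)

ctrl = BankAccessControl(num_banks=2)
ctrl.assign_writer(0)   # the single write core fills bank 0
ctrl.assign_reader(1)   # read cores consume the previously written bank 1
```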
  • Publication number: 20200218543
    Abstract: Provided is an apparatus configured to determine a common neural network based on a comparison between a first neural network included in a first application program and a second neural network included in a second application program, and to utilize the common neural network when the first application program or the second application program is executed.
    Type: Application
    Filed: November 8, 2019
    Publication date: July 9, 2020
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Hyunjoo JUNG, Jaedeok KIM, Chiyoun PARK
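    The deduplication idea behind this abstract can be sketched as follows. The comparison method is an assumption — here two networks are "the same" if a hash of their serialized structure matches, whereas the patent leaves the comparison criterion open — and the function names and example networks are invented.

```python
import hashlib
import json

def network_fingerprint(net):
    """Hash a network's (toy) serialized form so identical networks compare equal."""
    return hashlib.sha256(json.dumps(net, sort_keys=True).encode()).hexdigest()

def build_shared_store(app_networks):
    """Map each app to a fingerprint; apps with identical networks share one copy."""
    store, assignment = {}, {}
    for app, net in app_networks.items():
        fp = network_fingerprint(net)
        store.setdefault(fp, net)   # first copy wins; duplicates reuse it
        assignment[app] = fp
    return store, assignment

store, assignment = build_shared_store({
    "app1": {"layers": [64, 64, 10]},
    "app2": {"layers": [64, 64, 10]},
})
```

    With one stored copy serving both applications, memory and load time are saved whenever either app runs.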
  • Publication number: 20200218544
    Abstract: An information handling system may perform a quick boot based on a determination that a boot does not require an update to at least one of a firmware and hardware of an information handling system. The information handling system may reboot and may determine whether a boot of the system requires an update to at least one of a firmware and hardware of the information handling system. If the boot does not require an update to the at least one of a firmware and hardware of the information handling system, the information handling system may boot by bypassing one or more basic input/output system (BIOS) power-on self-test (POST) operations.
    Type: Application
    Filed: January 8, 2019
    Publication date: July 9, 2020
    Applicant: Dell Products L.P.
    Inventors: Vaideeswaran Ganesan, Suren Kumar, B. Balaji Singh, David Keith Chalfant, Swamy Kadaba Chaluvaiah
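    The branch this abstract describes — skip POST work when nothing needs updating — reduces to a small decision function. A minimal sketch, assuming invented step names; real BIOS POST involves many more stages than shown here.

```python
def boot(firmware_update_needed, hardware_update_needed):
    """Return the ordered boot steps. With no firmware or hardware update
    pending, selected BIOS POST operations are bypassed for a quick boot."""
    steps = ["init_memory"]
    if firmware_update_needed or hardware_update_needed:
        steps += ["full_post", "apply_updates"]
    else:
        steps += ["quick_boot"]   # bypass one or more POST operations
    steps.append("load_os")
    return steps
```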
  • Publication number: 20200218545
    Abstract: An information handling system may reset components logged in a memory of the information handling system. For example, an information handling system may determine components logged in an information handling system memory and may perform a bulk reset of the logged components.
    Type: Application
    Filed: January 8, 2019
    Publication date: July 9, 2020
    Applicant: Dell Products L.P.
    Inventors: Vaideeswaran Ganesan, Suren Kumar, B. Balaji Singh, David Keith Chalfant, Swamy Kadaba Chaluvaiah
  • Publication number: 20200218546
    Abstract: Many mobile devices are used for documenting different scenarios that are encountered by the users as they go about their daily lives. In many situations, a mobile device may be used to document the scenario. This data may be of significant forensic interest to an investigator. In many situations, the owner of the phone may be willing to provide the investigator access to this data (through a documented consent agreement). Such consent is usually contingent upon the fact that not all the data available on the phone may be extracted for analysis, either due to privacy concerns or due to personal reasons. Courts have also opined in several cases that investigators must limit data extracted, so as to focus on only “relevant information” for the investigation at hand. Thus, only selective (or filtered) data should be extracted as per the consent available from the witness/victim (user). Described herein is the design and implementation of such a targeted data extraction system (TDES) for mobile devices.
    Type: Application
    Filed: January 6, 2020
    Publication date: July 9, 2020
    Inventors: Sudhir Aggarwal, Gokila Dorai, Umit Karabiyik, Tathagata Mukherjee, Nicholas A. Guerra, Manuel Hernandez-Romero, James Parsons, Khushboo Rathi
  • Publication number: 20200218547
    Abstract: Techniques for obtaining environment information are disclosed. In an embodiment, a host that has not yet completed a boot process obtains information (also referred to as “environment information”) about facilities that are available for use in a computing environment. The host does not need any functionality that is enabled through a complete boot process to obtain the environment information. The environment information is used for configuring a system service or application on the host, prior to initialization of the system service or application. Initializing the system service or application with such configurations prepares the system service or application to interact with the existing facilities. In an embodiment, a validator validates functional requirements for a computing environment. The validator obtains the environment information.
    Type: Application
    Filed: March 18, 2020
    Publication date: July 9, 2020
    Applicant: Oracle International Corporation
    Inventors: Mike Jared Carlson, Paul Gregory Greenstein
  • Publication number: 20200218548
    Abstract: Embodiments of the present disclosure provide a method and an apparatus for resource management in an edge cloud. The method comprises obtaining information on at least one of the following: resource occupation of a reconfigurable functional unit associated with hardware accelerator resources or GPP resources, power consumption of a hardware accelerator associated with hardware accelerator resources, and power consumption of a server associated with GPP resources; and performing processing on the reconfigurable functional unit based on the obtained information, the processing including at least one of configuration, reconfiguration, and migration. The method and apparatus of the embodiments of the present disclosure increase efficiency of resource management of the edge cloud, lower system energy consumption, and/or enable more efficient virtualization mechanisms for hardware accelerator resources.
    Type: Application
    Filed: June 23, 2017
    Publication date: July 9, 2020
    Inventors: Yan WAN, Chaohua GONG
  • Publication number: 20200218549
    Abstract: An electronic control unit includes a first non-volatile memory configured such that a control program is written thereto; a second non-volatile memory configured such that an identifier is written thereto; and a processor. The identifier is for verifying whether the control program is correct. The processor chooses either an identifier contained in advance in the control program or an identifier written in the second non-volatile memory, depending on how and/or whether the identifier is written in the second non-volatile memory. The processor verifies whether the control program is correct based on the chosen identifier.
    Type: Application
    Filed: March 20, 2018
    Publication date: July 9, 2020
    Inventor: Hisao ITO
  • Publication number: 20200218550
    Abstract: The present disclosure includes systems and methods for providing popups, including the following computer-implemented method. A trigger event is received that is generated by detection of a request for a presentation of a pop-up window. Based on the received trigger event, an activity pop-up component is launched that is configured to output the pop-up window, where a launch mode of the activity pop-up component is preconfigured as a single task mode. A determination is made whether the pop-up window output by the activity pop-up component is obscured by a pre-existing pop-up window. Upon determining that the pop-up window output by the activity pop-up component is obscured by the pre-existing pop-up window, the activity pop-up component is relaunched to trigger movement of the pop-up window to the top of an activity stack to force a non-obscured display of the pop-up window.
    Type: Application
    Filed: March 16, 2020
    Publication date: July 9, 2020
    Applicant: Alibaba Group Holding Limited
    Inventors: Xiangyu Zhao, Liangzi Ding
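    The relaunch trick in this abstract leans on single-task launch semantics: relaunching an activity that already exists moves its instance to the top of the stack rather than creating a duplicate. A hypothetical sketch of that stack behavior, with invented activity names:

```python
# Top of the activity stack is the last element of the list.
activity_stack = []

def launch_single_task(activity):
    """Single-task launch: reuse an existing instance by moving it on top."""
    if activity in activity_stack:
        activity_stack.remove(activity)   # no duplicate instance is created
    activity_stack.append(activity)       # (re)placed at the top of the stack

launch_single_task("popup")
launch_single_task("other_popup")   # a pre-existing pop-up now obscures ours
launch_single_task("popup")         # relaunch: back on top, non-obscured
```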
  • Publication number: 20200218551
    Abstract: Systems and methods for determining and presenting a graphical user interface including template metrics are disclosed. Exemplary implementations may: manage templates for work unit records that define units of work managed, created, and/or assigned within a collaboration environment; create one or more first work unit records based on the first template; monitor the units of work created using the templates to determine template information; determine template metric values for template metrics associated with the templates based on the template information such that first template metric values for the template metrics associated with the first template are determined based on the first template information; and effectuate presentation of a graphical user interface including the templates and the template metric values for the template metrics associated with the templates.
    Type: Application
    Filed: March 19, 2020
    Publication date: July 9, 2020
    Inventor: Gregory Louis Sabo
  • Publication number: 20200218552
    Abstract: A computerized personal assistant communicatively couples to a computer database including a plurality of available skills for the computerized personal assistant. The computerized personal assistant recognizes a current context of the user. The computerized personal assistant operates a previously-trained machine learning classifier to assess a match confidence for a candidate skill, the match confidence indicating a quality of match between the current context and a reference context previously associated with the candidate skill. The computerized personal assistant executes instructions defining an assistive action associated with the candidate skill responsive to the match confidence exceeding a predefined match confidence threshold. The computerized personal assistant executes the instructions defining a complementary help action associated with the candidate skill responsive to the match confidence not exceeding the predefined match confidence threshold.
    Type: Application
    Filed: March 23, 2020
    Publication date: July 9, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Vipindeep VANGALA, Swati VALECHA, Suryanarayana SHASTRI, Nitin PANDE, Tulasi MENON, Madan Gopal JHANWAR, Nishchay KUMAR
  • Publication number: 20200218553
    Abstract: Systems and methods for selective stack trace generation during Java exception handling are disclosed. In embodiments, a method includes determining, by a Java virtual machine (JVM) of a computing device, that an exception object escapes a catch block of Java bytecodes; setting, by the JVM of the computing device, an escaped flag based on the determining that the exception object escapes the catch block; walking, by the JVM of the computing device, a call stack to locate an applicable catch block for the exception object, wherein the applicable catch block is the catch block; determining, by the JVM of the computing device, that the escaped flag is set in response to locating the applicable catch block; and creating, by the JVM of the computing device, a stack trace based on the determining that the escaped flag is set.
    Type: Application
    Filed: January 7, 2019
    Publication date: July 9, 2020
    Inventors: Irwin D'SOUZA, Kevin J. LANGMAN, Daniel HEIDINGA
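    The payoff of the escaped-flag scheme is that the (expensive) stack trace is only materialized for exceptions that actually leave their catch block. The patent targets a JVM; the following is an illustrative Python analogue with invented names, not JVM internals:

```python
class LazyException(Exception):
    def __init__(self, msg):
        super().__init__(msg)
        self.escaped = False   # set only if the object escapes its catch block
        self.trace = None      # stack trace is created lazily

def handle(let_it_escape):
    try:
        raise LazyException("boom")
    except LazyException as e:
        if let_it_escape:
            e.escaped = True          # exception object escapes the catch block
            e.trace = "stack trace"   # only now pay the cost of trace creation
            return e
        return None                   # handled locally: no trace ever built
```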
  • Publication number: 20200218554
    Abstract: A client system presents, within an execution environment of an application, a third-party media stream distinct from the application, received from a remote host server via a network. The client system detects interaction events during presentation of the third-party media stream, and transmits descriptions of the detected interaction events to the remote host server. The application may be pre-cued prior to presentation, e.g., to minimize start-up time. In some implementations, a side-band message channel is established to facilitate communication between the client system and the remote host server.
    Type: Application
    Filed: March 13, 2020
    Publication date: July 9, 2020
    Applicant: GOOGLE LLC
    Inventors: Tuna Toksoz, Thomas Price
  • Publication number: 20200218555
    Abstract: A device includes a normalization engine configured to populate data fields in a normalized data structure with network information in accordance with normalization rules. The device further includes a virtualization engine configured to generate virtual data defining one or more virtual objects in accordance with virtualization rules that map data fields from the normalized data structure to physical attributes of virtual objects and to transmit the virtual data defining the one or more virtual objects for display on a user device. The virtualization engine is further configured to receive user feedback that identifies a selected virtual object, to identify data field values in the normalized data structure for the physical attributes of the selected virtual object, and to generate an error report comprising at least a portion of the identified data field values. The virtualization engine is further configured to send the error report to the user device.
    Type: Application
    Filed: March 19, 2020
    Publication date: July 9, 2020
    Inventors: James M. Thomas, Alan W. Shields
  • Publication number: 20200218556
    Abstract: Methods and apparatus for centralized networking configuration in distributed systems are disclosed. Networking related metrics from a plurality of sources within a distributed system are obtained at a networking configuration server. A set of rules to be used to apply a network configuration option to a particular category of traffic associated with a node of the distributed system is determined based on the collected metrics and on networking management policies. A representation of the set of rules is transmitted to the node of the distributed system to schedule network transmissions in accordance with the networking configuration option.
    Type: Application
    Filed: March 20, 2020
    Publication date: July 9, 2020
    Applicant: Amazon Technologies, Inc.
    Inventor: Avichai Mendle Lissack
  • Publication number: 20200218557
    Abstract: Methods and systems for an event-based virtual machine that hosts microservices are disclosed.
    Type: Application
    Filed: January 4, 2019
    Publication date: July 9, 2020
    Inventors: Gireesh Punathil, Deepthi Sebastian, Vijayalakshmi Kannan, Kabir Islam
  • Publication number: 20200218558
    Abstract: A method to provide network connectivity to a virtual machine hosted on a server computer system includes detecting a change in a configuration of a software-defined network to which the server computer system provides access; issuing a network configuration update (NCU) for consumption by the virtual machine, the NCU including a data structure reflecting the change in the configuration; and providing a link-state notification (LSN) to a virtual network interface card of the virtual machine pursuant to the change in the configuration, the LSN including data indicating a state of network connectivity of the virtual machine. Receipt of the LSN triggers a dynamic host-configuration protocol (DHCP) handshake by the virtual machine; the NCU is received by the virtual machine pursuant to the DHCP handshake.
    Type: Application
    Filed: January 4, 2019
    Publication date: July 9, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Abhishek ELLORE SREENATH, Madhan SIVAKUMAR, Abhishek SHUKLA, Rishabh TEWARI
  • Publication number: 20200218559
    Abstract: A computer system may include a plurality of client computing devices, and a plurality of host computing devices each configured to provide virtual computing sessions for the client computing devices. Each host computing device may have a virtual delivery agent (VDA) associated therewith configured to connect the client computing devices with the virtual computing sessions. The VDAs within a first group may be configured to operate during off-peak hours, and VDAs within a second group different than the first group may be configured not to operate during the off-peak hours. The client computing devices may be configured to request virtual computing sessions from the VDAs in accordance with respective VDA leases, and each VDA lease may include at least one of the VDAs from the first group.
    Type: Application
    Filed: January 7, 2019
    Publication date: July 9, 2020
    Inventors: Leo C. SINGLETON, IV, Georgy MOMCHILOV
  • Publication number: 20200218560
    Abstract: Communicating a low-latency event across a virtual machine boundary. Based on an event signaling request by a first process running at a first virtual machine, the first virtual machine updates a shared register that is accessible by a second virtual machine. Updating the shared register includes updating a signal stored in the shared register. The first virtual machine sends an event signal message, which includes a register identifier, through a virtualization fabric to the second virtual machine. The second virtual machine receives the event signaling message and identifies the register identifier from the message. Based on the register identifier, the second virtual machine reads the shared register, identifying a value of the signal stored in the shared register. Based at least on the value of the signal comprising a first value, the second virtual machine signals a second process running at the second virtual machine.
    Type: Application
    Filed: January 9, 2019
    Publication date: July 9, 2020
    Inventors: Jason LIN, Gregory John COLOMBO, Mehmet IYIGUN, Yevgeniy BAK, Christopher Peter KLEYNHANS, Stephen Louis-Essman HUFNAGEL, Michael EBERSOL, Ahmed Saruhan KARADEMIR, Shawn Michael DENBOW, Kevin BROAS, Wen Jia LIU
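    The two-part protocol in this abstract — update a shared register, then send a message carrying only the register identifier across the virtualization fabric — can be sketched with plain dictionaries and lists standing in for the shared memory and the fabric. All names are illustrative:

```python
registers = {}       # shared registers, keyed by register identifier
message_queue = []   # stands in for the virtualization fabric

def vm1_signal(reg_id, value):
    """First VM: update the shared register, then send the event signal message."""
    registers[reg_id] = value        # update the signal stored in the register
    message_queue.append(reg_id)     # message carries only the register id

def vm2_receive():
    """Second VM: identify the register from the message, then read its value."""
    reg_id = message_queue.pop(0)
    return registers[reg_id]         # based on the value, signal a local process

vm1_signal("evt0", 1)
```

    Keeping the payload in the shared register and only the identifier in the message is what keeps the cross-boundary event low-latency.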
  • Publication number: 20200218561
    Abstract: Methods and apparatus are disclosed that deploy a hybrid workload domain. An example apparatus includes a resource discoverer to determine whether a first bare metal server is available and a resource allocator to allocate virtual servers for a virtual server pool based on an availability of the virtual servers and, when the first bare metal server is available, allocate the first bare metal server for a bare metal server pool. The example apparatus further includes a hybrid workload domain generator to generate, for display in a user interface, a combination of the virtual server pool and the bare metal server pool and generate a hybrid workload domain used to run a user application based on a user selection in a user interface, the hybrid workload domain including virtual servers from the virtual server pool and bare metal servers from the bare metal server pool.
    Type: Application
    Filed: March 11, 2019
    Publication date: July 9, 2020
    Inventors: NAREN LAL, RANGANATHAN SRINIVASAN
  • Publication number: 20200218562
    Abstract: The present disclosure provides a communication method for virtual machines, an electronic device, and a non-transitory computer readable storage medium. The communication method for virtual machines suitable for a virtual machine architecture comprises the steps of: transmitting, through a shared link, an interrupt instruction to a second virtual machine by a first virtual machine; reading, in a shared configuration database, an instruction data corresponding to the interrupt instruction by the second virtual machine; and executing the instruction data and transmitting a result data through a virtual control plane to the first virtual machine by the second virtual machine, to exchange the data between the first virtual machine and the second virtual machine through the virtual control plane.
    Type: Application
    Filed: August 20, 2019
    Publication date: July 9, 2020
    Inventors: Wei-Chuan WANG, Po-Kai CHUANG, Yu-Ting TING, Chien-Kai TSENG, Tse HO LIN
  • Publication number: 20200218563
    Abstract: Systems and methods for enabling a user space process of a guest operating system to initiate hardware operations in a security-enhanced manner. An example method may comprise: configuring a storage unit to store resource requests of one or more user space processes, wherein the storage unit is accessible to a hypervisor and to a user space process managed by a guest operating system; determining, by a processing device, that the user space process managed by the guest operating system is authorized to store a resource request at the storage unit; and receiving, by the hypervisor, a signal from the user space process, wherein the signal is associated with the storage unit and initiates execution of the resource request.
    Type: Application
    Filed: March 19, 2020
    Publication date: July 9, 2020
    Inventors: Michael Tsirkin, Paolo Bonzini
  • Publication number: 20200218564
    Abstract: A method includes, with a Virtual Network Function (VNF) manager, managing a VNF that includes a plurality of VNF components running on a plurality of virtual machines, the virtual machines running on a plurality of physical computing machines, and with the VNF manager, causing a Network Function Virtualization Infrastructure (NFVI) to have a total number of virtual machines provisioned, the total number being equal to a number of virtual machines capable of providing for a current demand for VNF components plus an additional number of virtual machines equal to the highest number of virtual machines being provided by a single one of the plurality of physical computing machines.
    Type: Application
    Filed: March 23, 2020
    Publication date: July 9, 2020
    Inventor: Paul Miller
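    The sizing rule in this abstract is a one-line formula: provision enough VMs for current demand, plus spares equal to the largest VM count on any single physical machine, so the failure of that machine can be absorbed. A sketch, with illustrative numbers:

```python
def total_vms_to_provision(current_demand, vms_per_host):
    """Total VMs the NFVI should have provisioned: VMs needed for current
    VNF component demand, plus the highest number of VMs provided by any
    single physical computing machine."""
    return current_demand + max(vms_per_host)

# e.g. demand needs 10 VMs; physical hosts currently run 4, 3, and 3 VMs
# -> provision 10 + 4 VMs in total
```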
  • Publication number: 20200218565
    Abstract: Methods and apparatus for task processing in a distributed environment are disclosed and described. An example apparatus includes a task manager and a task dispatcher. The example task manager is to receive a task and create an execution context for the task, the execution context to associate the task with a routine for task execution. The example task dispatcher is to receive a report of task execution progress and provide an update regarding task execution progress, the task dispatcher, upon initiation of task execution, to facilitate blocking of interaction with a resource involved in the task execution. The example task dispatcher is to trigger an indication of task execution progress and, upon task finish, facilitate unblocking of the resource involved in the task execution.
    Type: Application
    Filed: January 20, 2020
    Publication date: July 9, 2020
    Inventors: Miroslav Mitevski, Zhan Ivanov, Tina Nakova, Ivan Strelkov, Nikola Atanasov
  • Publication number: 20200218566
    Abstract: In some examples, a system migrates, responsive to a request, a workload comprising components and relationships among the components as represented by a topology model, the migrating comprising migrating the workload from the source infrastructure to a target infrastructure, and migrating components of the workload from the source infrastructure to the target infrastructure.
    Type: Application
    Filed: January 7, 2019
    Publication date: July 9, 2020
    Inventors: Stephane Herman Maes, Srikanth Natarajan
  • Publication number: 20200218567
    Abstract: A master device that manages task processing is provided and includes a communication circuit and at least one processor to obtain first real-time resource information associated with resources that a first task processing device currently uses, obtain second real-time resource information associated with resources that a second task processing device currently uses, obtain information associated with processing of a distribution task to be distributed to at least one of the plurality of task processing devices, obtain an amount of resources required for processing the distribution task, identify the first task processing device to be a task processing device to which the distribution task is to be distributed on the basis of the first real-time resource information, the second real-time resource information, and the amount of resources required for processing the distribution task, and transmit the information associated with processing of the distribution task to the first task processing device.
    Type: Application
    Filed: September 27, 2019
    Publication date: July 9, 2020
    Inventors: Kyungrae KIM, Hyeonsang EOM, Seokwon CHOI, Yoonsung NAM, Hyunil SHIN, Youngmin WON
  • Publication number: 20200218568
    Abstract: An apparatus is described having multiple cores, each core having: a) a CPU; b) an accelerator; and, c) a controller and a plurality of order buffers coupled between the CPU and the accelerator. Each of the order buffers is dedicated to a different one of the CPU's threads. Each one of the order buffers is to hold one or more requests issued to the accelerator from its corresponding thread. The controller is to control issuance of the order buffers' respective requests to the accelerator.
    Type: Application
    Filed: December 30, 2019
    Publication date: July 9, 2020
    Inventors: Ronny Ronen, Boris Ginzburg, Eliezer Weissmann
  • Publication number: 20200218569
    Abstract: A scheduler of a batch job management service determines that a set of resources of a client is insufficient to execute one or more jobs. The scheduler prepares a multi-dimensional statistical representation of resource requirements of the jobs, and transmits it to a resource controller. The resource controller uses the multi-dimensional representation and resource usage state information to make resource allocation change decisions.
    Type: Application
    Filed: March 13, 2020
    Publication date: July 9, 2020
    Applicant: Amazon Technologies, Inc.
    Inventors: Dougal Stuart Ballantyne, James Edward Kinney, JR., Aswin Damodar, Chetan Hosmani, Rejith George Joseph, Chris William Ramsey, Kiuk Chung, Jason Roy Rupard
  • Publication number: 20200218570
    Abstract: A conflict resolution method for a remotely controlled device is provided. The method includes: issuing a command for the device by a remote control center or by the device; determining a criticality level of the command; depending on the criticality level of the command, sending the command to the other one of the device and the control center for acknowledgment or refusal of the command; and executing or disregarding the command by the device depending on the criticality level of the command and, if applicable, on the acknowledgment or refusal of the command.
    Type: Application
    Filed: March 18, 2020
    Publication date: July 9, 2020
    Inventors: Roman Schlegel, Thomas Locher
  • Publication number: 20200218571
    Abstract: Techniques for automated capacity management in computing systems are disclosed herein. In one embodiment, a method includes generating multiple time series models each representing predicted usage levels of the computing resource based on historical usage levels of the computing resource. The method can then include selecting, from the generated multiple time series models, one of the time series models that has a combined value of a forecast error and a forecast churn smaller than the other generated time series models. The method can further include determining a future usage level of the computing resource in the computing system at the future time point using the selected one of the time series models and allocating and provisioning an amount of the computing resource in the computing system in accordance with the predicted future usage level of the computing resource at the future time point.
    Type: Application
    Filed: January 9, 2019
    Publication date: July 9, 2020
    Inventor: Yingying Chen
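    The model-selection criterion in this abstract — pick the candidate with the smallest combined forecast error and forecast churn — is a simple argmin. The model names and scores below are illustrative, not from the patent:

```python
def select_model(models):
    """Given {model_name: (forecast_error, forecast_churn)}, return the
    name whose combined error + churn is smallest."""
    return min(models, key=lambda name: sum(models[name]))

candidates = {
    "arima": (0.12, 0.05),   # combined 0.17  <- smallest
    "ets":   (0.10, 0.09),   # combined 0.19
    "naive": (0.20, 0.01),   # combined 0.21
}
```

    Including churn alongside error penalizes models whose forecasts swing wildly between runs, which would otherwise cause thrashing in capacity allocation.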
  • Publication number: 20200218572
    Abstract: Embodiments of the invention are directed to systems, methods, and computer program products for implementing dynamic resource allocation. The system is configured for transformation of resource allocations of predetermined intervals based on dynamic indices. In some embodiments, the system is configured to determine an adapted resource component for a predetermined time interval of a plurality of time intervals associated with the dynamic resource allocation. Moreover, the system may then identify a cumulative growth rate associated with the adapted resource component. Subsequently, the system may construct a total resource availability for the predetermined time interval based on escalating or modifying a prior total resource availability in direct proportion with the cumulative growth rate.
    Type: Application
    Filed: January 9, 2019
    Publication date: July 9, 2020
    Applicant: Brighthouse Services, LLC
    Inventors: Michael Anthony Villella, Melissa Sue Cox, Jensen Palencia, Alan Scott Assner, Tara Jean Figard
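The escalation step described above reduces to simple arithmetic. A hypothetical illustration: the total resource availability for an interval is the prior interval's total scaled in direct proportion to the cumulative growth rate of the adapted resource component.

```python
def total_availability(prior_total, cumulative_growth_rate):
    """Escalate the prior total in direct proportion to the growth rate."""
    return prior_total * (1 + cumulative_growth_rate)

# Example: 3% cumulative growth over the predetermined interval.
print(total_availability(1000.0, 0.03))  # → 1030.0
```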
  • Publication number: 20200218573
    Abstract: The present disclosure discloses a memory management method, an electronic apparatus and a device with a storage function. The method includes: acquiring a currently required reserved memory value of a component/application, where the reserved memory is a part of physical memory reserved by an operating system for the component/application and not to be mapped into running memory; judging whether the currently required reserved memory value is smaller than an allocated reserved memory value, where the allocated reserved memory value is a reserved memory size allocated by the operating system to the component/application; and when the currently required reserved memory value is smaller than the allocated reserved memory value, recovering redundant reserved memory and reallocating the recovered reserved memory for use as running memory. In the above manner, the present disclosure can increase a utilization ratio of memory.
    Type: Application
    Filed: June 12, 2019
    Publication date: July 9, 2020
    Inventor: Daan Sun
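A minimal sketch of the reclamation logic in the abstract (all names are hypothetical): compare the currently required reserved memory against what was allocated, and when the requirement is smaller, return the surplus for use as running memory.

```python
def rebalance_reserved(required_mb, allocated_mb):
    """Return (new_reserved_mb, reclaimed_for_running_mb)."""
    if required_mb < allocated_mb:
        reclaimed = allocated_mb - required_mb
        return required_mb, reclaimed   # surplus is recovered as running memory
    return allocated_mb, 0              # nothing to reclaim

print(rebalance_reserved(48, 64))  # → (48, 16)
```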
  • Publication number: 20200218574
    Abstract: Improved techniques for dynamically responding to a fluctuating workload. Resources are reactively scaled for memory-intensive applications and automatically adapted in response to workload changes without requiring pre-specified thresholds. A miss ratio curve (MRC) is generated for an application based on application runtime statistics. This MRC is then modeled as a hyperbola. An area on the hyperbola is identified as satisfying a flatten threshold. A resource allocation threshold is then established based on the identified area. This resource allocation threshold indicates how many resources are to be provisioned for the application. The resources are scaled using a resource scaling policy that is based on the resource allocation threshold.
    Type: Application
    Filed: January 8, 2020
    Publication date: July 9, 2020
    Inventors: Joe H. Novak, Sneha K. Kasera, Ryan Stutsman
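A hedged sketch of the idea, with illustrative constants not taken from the patent: model the miss ratio curve as a hyperbola m(c) = a/c + b over cache size c, and take the smallest allocation at which the curve has "flattened", i.e. where the slope magnitude drops to the flatten threshold.

```python
def flatten_point(a, flatten_threshold):
    """For m(c) = a / c + b, the slope is m'(c) = -a / c**2, so the curve
    flattens once a / c**2 <= flatten_threshold; solve for c."""
    return (a / flatten_threshold) ** 0.5

# Provisioning beyond this point yields diminishing miss-ratio returns.
allocation = flatten_point(a=100.0, flatten_threshold=0.01)
print(allocation)  # → 100.0
```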
  • Publication number: 20200218575
    Abstract: A system and method for optimizing runtime environments for applications by running the applications in a plurality of runtime environments and iteratively selecting and creating new runtime environments based on a fitness score determined for the plurality of runtime environments.
    Type: Application
    Filed: August 19, 2019
    Publication date: July 9, 2020
    Applicant: PayPal, Inc.
    Inventor: Shlomi BOUTNARU
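The iterative select-and-create loop described above resembles an evolutionary search. This is an illustrative sketch, not PayPal's implementation: candidate runtime environments are scored by a fitness function, the best are kept, and new candidates are derived from them by mutation.

```python
import random

random.seed(0)  # deterministic for the example

def optimize(env, fitness, mutate, generations=10, population=8, keep=2):
    """Iteratively keep the fittest environments and mutate them into new ones."""
    pool = [env] + [mutate(env) for _ in range(population - 1)]
    for _ in range(generations):
        pool.sort(key=fitness, reverse=True)
        best = pool[:keep]                              # select survivors
        pool = best + [mutate(random.choice(best))      # create new candidates
                       for _ in range(population - keep)]
    return max(pool, key=fitness)

# Toy example: an "environment" is just a heap-size setting, and the
# fitness score (here a synthetic one) peaks at 512.
fitness = lambda heap: -abs(heap - 512)
mutate = lambda heap: heap + random.randint(-64, 64)
best = optimize(256, fitness, mutate, generations=50)
```

In practice the fitness score would come from measured application behavior across the running environments, and mutation would alter runtime parameters such as GC or threading settings.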
  • Publication number: 20200218576
    Abstract: Systems and methods may use models to generate predictions of specific access rights for users. Further, systems and methods may generate the predictions in an environment in which the availability of the specific access rights change frequently. The access rights, predicted using embodiments described herein, may be both available and associated with user affinities. An interface associated with the primary load management system may be configured to display the predicted access rights for a user operating a user device.
    Type: Application
    Filed: January 6, 2020
    Publication date: July 9, 2020
    Inventors: Ish Rishabh, Mark Roden, Chris Smith, Spencer Brown, Scott Kline, Krisha Zagura
  • Publication number: 20200218577
    Abstract: A method includes receiving a plurality of data processing requests and assigning each data processing request to a group based on the source of the data. The method further includes generating a primary processing stack indicating a queue for processing the first data, wherein: the primary processing stack comprises a plurality of layers; each layer comprises a plurality of slices, wherein each slice represents a portion of the first data of at least one data processing request; and the plurality of slices are arranged within each layer based at least on the priority indicator corresponding to the first data that each slice represents. The method further includes receiving resource information about a plurality of servers, assigning each slice of the primary processing stack to one of the servers, and sending processing instructions comprising an identification of each slice of the primary processing stack assigned to the respective server.
    Type: Application
    Filed: January 3, 2019
    Publication date: July 9, 2020
    Inventors: Aditya Kulkarni, Rama Venkata S. Kavali, Venugopala Rao Randhi, Lawrence Anthony D'Silva
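A hypothetical rendering of the structure in the abstract: the primary processing stack is a list of layers, each layer holds slices arranged by the priority indicator of the data they represent, and slices are then assigned across servers (round-robin here, standing in for assignment based on resource information).

```python
from dataclasses import dataclass, field

@dataclass
class Slice:
    request_id: str
    priority: int                       # priority indicator of the underlying data

@dataclass
class Layer:
    slices: list = field(default_factory=list)

    def add(self, s: Slice):
        self.slices.append(s)
        self.slices.sort(key=lambda x: x.priority)   # arrange slices by priority

def assign_to_servers(stack, servers):
    """Assign each slice of the stack to a server (round-robin for illustration)."""
    assignment = {s: [] for s in servers}
    flat = [sl for layer in stack for sl in layer.slices]
    for i, sl in enumerate(flat):
        assignment[servers[i % len(servers)]].append(sl.request_id)
    return assignment

layer = Layer()
layer.add(Slice("req-2", priority=5))
layer.add(Slice("req-1", priority=1))
print(assign_to_servers([layer], ["srv-a", "srv-b"]))
# → {'srv-a': ['req-1'], 'srv-b': ['req-2']}
```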
  • Publication number: 20200218578
    Abstract: Communication fabric-coupled computing architectures, platforms, and systems are provided herein. In one example, an apparatus includes a management entity configured to establish compute units each comprising components selected among a plurality of physical computing components. The apparatus includes a fabric interface configured to instruct a communication fabric communicatively coupling the plurality of physical computing components to establish logical isolation within the communication fabric to form the compute units.
    Type: Application
    Filed: March 16, 2020
    Publication date: July 9, 2020
    Applicant: Liqid Inc.
    Inventors: Christopher R. Long, James Scott Cannata, Jason Breakstone
  • Publication number: 20200218579
    Abstract: Example implementations relate to selecting a cloud service provider. A computing device may comprise a processing resource and a memory resource storing non-transitory machine-readable instructions to cause the processing resource to select a cloud service provider from a group of cloud service providers to provide a service for a workload, based on a received deployment preference for the workload and a service characteristic of the cloud service provider, and to deploy the workload to the selected cloud service provider.
    Type: Application
    Filed: January 8, 2019
    Publication date: July 9, 2020
    Inventors: Vikas D M, Lokesh S
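The selection step can be sketched as matching provider characteristics against the workload's deployment preference. All names and the scoring rule here are illustrative assumptions, not from the patent.

```python
def select_provider(providers, preference):
    """Pick the provider whose service characteristics best match the preference."""
    def score(p):
        return sum(1 for k, v in preference.items()
                   if p["characteristics"].get(k) == v)
    return max(providers, key=score)

providers = [
    {"name": "cloud-a", "characteristics": {"region": "eu", "tier": "standard"}},
    {"name": "cloud-b", "characteristics": {"region": "us", "tier": "premium"}},
]
preference = {"region": "us", "tier": "premium"}
print(select_provider(providers, preference)["name"])  # → cloud-b
```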
  • Publication number: 20200218580
    Abstract: A cloud platform system is configured to ensure the availability and extendibility of application services, enable multi/hybrid cloud integration management, and construct, operate, and manage an enterprise cloud enabling efficient development and operation.
    Type: Application
    Filed: April 30, 2018
    Publication date: July 9, 2020
    Inventor: In Seok KIM
  • Publication number: 20200218581
    Abstract: The present invention relates to a communication system (1) comprising one or more execution nodes (2) able to execute one or more microservices (5); a computer device, called a "host server" (3), including several routers (30) constituting an intermediate communication interface between each execution node (2) and the outside of the communication system (1); and a heterogeneous computing platform (4) consisting of a set (40) of hardware and software or executable code for the access to and deployment of the microservices (5) on the system in a Java runtime environment (J) on the host server (3) and the execution nodes (2), allowing the execution of computer programs based on the Java language. The communication system (1) allows the creation of ephemeral microservices (5) through a key/value system (6) stored in a distributed memory cache (8), referencing each microservice (5) at creation by filenames deposited in the system by a developer (10) and using an asynchronous TCP exchange protocol
    Type: Application
    Filed: July 24, 2018
    Publication date: July 9, 2020
    Inventor: Christophe BLETTRY
  • Publication number: 20200218582
    Abstract: A device and method for automatically allocating computing resources is disclosed herein.
    Type: Application
    Filed: March 18, 2020
    Publication date: July 9, 2020
    Inventors: Lanlan CONG, Heshan LIN, Yehui YANG
  • Publication number: 20200218583
    Abstract: Embodiments described include systems and methods for calling an application programming interface (API) of a client application for a network application via an embedded browser of the client application. The method includes establishing, by a client application on a client device, one or more sessions to one or more network applications accessed via an embedded browser of the client application. The client application provides a plurality of application programming interfaces (APIs). The client application can intercept a first API called by a network application of the one or more network applications and identify a policy for using the plurality of APIs of the client application. The client application can determine, based at least on the policy, a second API of the plurality of APIs to use for the intercepted first API, and execute, for the intercepted first API call, the second API of the plurality of APIs of the client application.
    Type: Application
    Filed: March 16, 2020
    Publication date: July 9, 2020
    Inventors: Vipin Borkar, Santosh Sampath, Deepak Sharma, Arvind SankaraSubramanian
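A hedged sketch of the policy step above: an intercepted API call is mapped, per policy, to a different API of the client application, and that API is executed instead. The class, API names, and policy shape are all illustrative assumptions.

```python
class EmbeddedBrowserShim:
    def __init__(self, policy, apis):
        self.policy = policy    # maps intercepted API name → replacement API name
        self.apis = apis        # the client application's own APIs

    def intercept(self, api_name, *args):
        """Resolve the intercepted call through the policy and execute the result."""
        target = self.policy.get(api_name, api_name)   # policy picks the second API
        return self.apis[target](*args)

apis = {
    "native_clipboard_read": lambda: "raw clipboard",
    "secure_clipboard_read": lambda: "sanitized clipboard",
}
policy = {"native_clipboard_read": "secure_clipboard_read"}
shim = EmbeddedBrowserShim(policy, apis)
print(shim.intercept("native_clipboard_read"))  # → sanitized clipboard
```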
  • Publication number: 20200218584
    Abstract: Methods and systems for an event loop based runtime are disclosed. A method includes: receiving, by a computing device, a first request to register a first event in an event loop; in response to determining that the first request to register the first event is not triggered by a callback routine, the computing device generating an event list and storing an event descriptor for the first event in the event list; determining, by the computing device, that the first event has occurred; executing, by the computing device, a callback routine associated with the first event; marking, by the computing device, the first event as complete in the event list; determining, by the computing device, information about a transaction including a start time and an end time based on the event list; and outputting, by the computing device, the information about the transaction.
    Type: Application
    Filed: January 3, 2019
    Publication date: July 9, 2020
    Inventor: Gireesh Punathil
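A minimal sketch, with an entirely hypothetical API, of the bookkeeping described above: an event registered outside a callback goes into an event list, completed events are marked in that list, and the transaction's start and end times are derived from the list afterwards.

```python
import time

class EventLoopTracker:
    def __init__(self):
        self.event_list = []
        self.in_callback = False

    def register(self, name):
        # Only top-level registrations (not triggered by a callback) start entries.
        if not self.in_callback:
            self.event_list.append({"event": name, "complete": False,
                                    "registered_at": time.time()})

    def fire(self, name, callback):
        self.in_callback = True
        try:
            callback()                       # execute the associated callback routine
        finally:
            self.in_callback = False
        for e in self.event_list:
            if e["event"] == name:           # mark the event complete in the list
                e["complete"] = True
                e["completed_at"] = time.time()

    def transaction_info(self):
        """Derive transaction start/end times from the event list."""
        done = [e for e in self.event_list if e["complete"]]
        return {"start": min(e["registered_at"] for e in done),
                "end": max(e["completed_at"] for e in done)}

tracker = EventLoopTracker()
tracker.register("read")
tracker.fire("read", lambda: None)
info = tracker.transaction_info()
```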
  • Publication number: 20200218585
    Abstract: Aspects of the technology described herein improve the clarity of information provided in automatically generated notifications, such as reminders, tasks, alerts or other messages or communications provided to a user. The clarity may be improved through augmentations that provide additional information or specificity to the user. For example, instead of providing a notification reminding the user, "remember to send the slides before the meeting," the user may be provided with a notification reminding the user, "remember to send the updated sales presentation before the executive committee meeting on Tuesday." The augmentation may take several forms, including substituting one word in the notification with another, more specific word; adding additional content, such as a word or phrase, to the notification without altering the existing content; and/or rephrasing the content for grammatical correctness and/or clarity.
    Type: Application
    Filed: January 8, 2019
    Publication date: July 9, 2020
    Inventors: Dikla DOTAN-COHEN, Ido PRINESS, Haim SOMECH, Anat INON, Amitay DROR, Michal Yarom Zarfati
  • Publication number: 20200218586
    Abstract: In an example, a WebSocket is used as an abstraction layer on top of one or more triggers. These triggers may be defined by DevOps tools and may be called bidirectionally. Specifically, a web application can call a trigger located in a Function as a Service layer at an ABAP application server, while the ABAP application server can also push data via a push channel through the WebSocket to trigger functions in the web application.
    Type: Application
    Filed: January 9, 2019
    Publication date: July 9, 2020
    Inventor: Masoud Aghadavoodi Jolfaei
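The bidirectional trigger idea can be sketched with a stand-in channel object (no real WebSocket or ABAP API is used; every name here is hypothetical): the web application calls server-side triggers through the channel, and the server pushes calls to web-application-side triggers over the same channel.

```python
class TriggerChannel:
    """Stand-in for a WebSocket acting as an abstraction layer over triggers."""
    def __init__(self):
        self.server_triggers = {}   # callable from the web application
        self.client_triggers = {}   # callable via server push

    def on_server(self, name, fn):
        self.server_triggers[name] = fn

    def on_client(self, name, fn):
        self.client_triggers[name] = fn

    def call_from_webapp(self, name, *args):    # web app → FaaS layer
        return self.server_triggers[name](*args)

    def push_to_webapp(self, name, *args):      # server push → web app
        return self.client_triggers[name](*args)

ch = TriggerChannel()
ch.on_server("build", lambda job: f"queued {job}")
ch.on_client("notify", lambda msg: f"shown: {msg}")
print(ch.call_from_webapp("build", "app-1"))   # → queued app-1
print(ch.push_to_webapp("notify", "done"))     # → shown: done
```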
  • Publication number: 20200218587
    Abstract: A data processing system arranged for receiving over a network, according to a data transfer protocol, data directed to any of a plurality of destination identities, the data processing system comprising: data storage for storing data received over the network; a first processing arrangement for performing processing in accordance with the data transfer protocol on received data in the data storage, for making the received data available to respective destination identities; and a response former arranged for receiving a message requesting a response indicating the availability of received data to each of a group of destination identities, and forming such a response; wherein the system is arranged to form the response in dependence on receiving the said message.
    Type: Application
    Filed: October 11, 2019
    Publication date: July 9, 2020
    Applicant: Solarflare Communications, Inc.
    Inventors: Steven Leslie Pope, Derek Edward Roberts, David James Riddoch, Greg Law, Steve Grantham, Matthew Slattery
  • Publication number: 20200218588
    Abstract: Techniques for an application programming interface (API) notebook tool are disclosed. In some implementations, an API notebook is a tool, framework, and ecosystem that enables easy exploration of services that expose APIs, creation and documentation of examples, use cases and workflows, and publishing and collaboration of APIs. In some embodiments, systems, processes, and computer program products for an API notebook tool include receiving a request for a client for calling an API for a service, and dynamically generating the client for the API for the service.
    Type: Application
    Filed: March 17, 2020
    Publication date: July 9, 2020
    Applicant: Mulesoft, LLC
    Inventor: Uri Sarid