Patent Applications Published on October 17, 2019
  • Publication number: 20190317816
    Abstract: Computational methods and systems to reclaim capacity of a virtual infrastructure of a distributed computing system are described. Methods and systems are directed to forecasting usage of resources that form a virtual infrastructure of a distributed computing system. Streams of metric data that represent usage of resources of the virtual infrastructure assigned to a virtual object are collected. A binary sequence of active status metric data is computed for the virtual object based on the streams of metric data. Forecasted active status metric data are computed in a forecast interval based on the sequence of active status metric data. The expected active or inactive status of the virtual object over the forecast interval is determined from the forecasted active status metric data. If the virtual object is expected to be inactive over the forecast interval, resources assigned to the virtual object are reclaimed for use by active virtual objects.
    Type: Application
    Filed: June 20, 2018
    Publication date: October 17, 2019
    Applicant: VMware, Inc.
    Inventors: Rachil Chandran, Lalit Jain, Harutyun Beybutyan, James Ang, Leah Nutman, Keshav Mathur
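The core idea of this abstract, mapping a metric stream to a binary activity sequence and reclaiming resources from objects forecast to stay inactive, can be sketched roughly as follows. This is an editor's illustration only, not code from the patent; the threshold, window, and forecast rule are all invented.

```python
# Illustrative sketch: derive a binary active-status sequence from a usage
# metric stream and decide whether a virtual object's resources can be
# reclaimed. Threshold and forecast rule are invented for illustration.

def active_status(metric_stream, threshold=0.05):
    """Map each metric sample to 1 (active) or 0 (inactive)."""
    return [1 if usage > threshold else 0 for usage in metric_stream]

def expected_inactive(status_sequence, window=4):
    """Naive forecast: inactive if the last `window` samples were all inactive."""
    recent = status_sequence[-window:]
    return len(recent) == window and sum(recent) == 0

def reclaim_decision(metric_stream):
    status = active_status(metric_stream)
    return "reclaim" if expected_inactive(status) else "keep"

# A virtual object idle for the last four samples is a reclaim candidate.
print(reclaim_decision([0.4, 0.3, 0.02, 0.01, 0.0, 0.03]))  # reclaim
print(reclaim_decision([0.4, 0.3, 0.2, 0.5, 0.6, 0.4]))     # keep
```

A real implementation would forecast the activity sequence into a future interval rather than look only at trailing samples, but the reclaim decision has the same shape.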
  • Publication number: 20190317817
    Abstract: Computational methods and systems that proactively manage usage of computational resources of a distributed computing system are described. A sequence of metric data representing usage of a resource is detrended to obtain a sequence of non-trendy metric data. Stochastic process models, a pulse wave model and a seasonal model of the sequence of non-trendy metric data are computed. When a forecast request is received, a sequence of forecasted metric data is computed over a forecast interval based on the estimated trend and one of the pulse wave or seasonal model that matches the periodicity of the sequence of non-trendy metric data. Alternatively, the sequence of forecasted metric data is computed based on the estimated trend and the stochastic process model with a smallest accumulated residual error. Usage of the resource by virtual objects of the distributed computing system may be adjusted based on the sequence of forecasted metric data.
    Type: Application
    Filed: April 12, 2018
    Publication date: October 17, 2019
    Applicant: VMware, Inc.
    Inventors: Darren Brown, Junyuan Lin, Paul Pedersen, Keshav Mathur, Leah Nutman, Peng Gao, Xing Wang
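The detrend-then-select pipeline described above can be illustrated with a small sketch: fit a linear trend, subtract it, fit candidate models to the residual, and forecast with whichever model accumulates the smallest residual error. This is an editor's simplification, not the patent's method; the two candidates here (constant mean vs. a fixed-period seasonal repeat) stand in for the stochastic, pulse wave, and seasonal models of the abstract.

```python
# Illustrative sketch of detrending plus residual-error model selection.

def linear_trend(series):
    """Ordinary least-squares fit of y = a + b*t."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    b = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series)) / \
        sum((t - t_mean) ** 2 for t in range(n))
    return y_mean - b * t_mean, b

def forecast(series, period, horizon):
    a, b = linear_trend(series)
    detrended = [y - (a + b * t) for t, y in enumerate(series)]
    # Candidate 1: constant-mean model of the detrended data.
    mean = sum(detrended) / len(detrended)
    err_mean = sum((y - mean) ** 2 for y in detrended)
    # Candidate 2: seasonal model that repeats with the given period.
    err_seasonal = sum((y - detrended[t % period]) ** 2
                       for t, y in enumerate(detrended))
    model = (lambda t: mean) if err_mean <= err_seasonal \
        else (lambda t: detrended[t % period])
    n = len(series)
    # Forecast = trend extrapolation + selected residual model.
    return [a + b * (n + h) + model(n + h) for h in range(horizon)]

# A rising series with a period-2 oscillation: the seasonal model should win.
print(forecast([0.0, 1.2, 1.0, 2.2, 2.0, 3.2], period=2, horizon=2))
```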
  • Publication number: 20190317818
    Abstract: A system and computer-implemented method for managing a smart devices network using fog computing is provided. The system comprises an application manager configured to receive service requests from devices in a smart devices network and collect data related to fog computing nodes and intermediate computing nodes and a resource utilization predictor configured to predict availability of the fog computing nodes and the intermediate computing nodes. Furthermore, the system comprises a resource manager configured to dynamically allocate at least one of: a specific fog computing node and a specific intermediate computing node, schedule triggering of fog applications based on the predicted availability, trigger, at the specific fog computing node and the specific intermediate computing node, the fog applications for executing the received service requests corresponding to the devices and perform actions corresponding to the executed one or more service requests.
    Type: Application
    Filed: June 14, 2018
    Publication date: October 17, 2019
    Inventors: Geelapaturu Subrahmanya Venkata Radha Krishna Rao, Natarajan Venkatachalam, Anuj Kulshreshtha
  • Publication number: 20190317819
    Abstract: A computer-implemented method of enabling distributed computers to communicate more effectively in an enterprise that provides flexible approval notifications in an organization is disclosed, wherein at least one of the distributed computers stores a graph database in which attributes regarding individuals of the organization are stored. The computer-implemented method includes receiving, at a server computer in the distributed computers, a request for a task to be performed, wherein the task requires approval by at least a first person in the organization who has authority to approve the request. The computer-implemented method also includes traversing, by the server computer, the graph database, to determine an identity of the first person, wherein traversing is performed based on criteria determined at least partially by information automatically extracted from the request.
    Type: Application
    Filed: April 17, 2018
    Publication date: October 17, 2019
    Inventors: Michael F. Brown, Robert Tucker, Kuntal Roy, Annelise Levitt, Edgardo Aviles Lopez, Kevin A. Miller, Lauren Miller, Lohit J. Sarma
  • Publication number: 20190317820
    Abstract: A system and method, including multi-channel, multi-control system and method, for transferring a logical partition in a virtualized computer network is disclosed. The system and method includes a source server having a logical partition and a Virtualized Input/Output Server (VIOS), where the VIOS has logical partition migration capabilities; a target server for receiving the logical partition; and a target migration console associated with the target server, where the system is configured to transfer the logical partition using the VIOS on the source server and the target migration console. The system and method may include, or be configured to prepare, a connection between the VIOS of the source server and the target migration console, and may further include and be configured in an embodiment to transfer the logical configuration using a management processor on the source server and a source management console associated with the source server.
    Type: Application
    Filed: April 12, 2018
    Publication date: October 17, 2019
    Inventors: Rizwan Sheikh Abdulla, Kuntal Dey, Konda Reddy Dumpa, Seema Nagar
  • Publication number: 20190317821
    Abstract: Certain aspects of the present disclosure provide a method for managing distributed computing resources, including: receiving a processing job request; determining that available system resources are insufficient to process the job; installing a container in a cloud processing node; installing an application in the container in the cloud processing node; splitting a processing job into a plurality of chunks; and distributing at least one of the plurality of chunks to an on-site processing node and at least another one of the plurality of chunks to a cloud processing node.
    Type: Application
    Filed: April 16, 2019
    Publication date: October 17, 2019
    Inventors: Tim O'NEAL, Andreas ROELL
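The split-and-distribute step in the abstract above can be sketched simply. This is an editor's illustration under invented assumptions (node names and a round-robin policy); the patent does not specify a distribution policy in its abstract.

```python
# Illustrative sketch: split a processing job into chunks and distribute them
# across an on-site node and a cloud node. Names and policy are invented.

def split_job(payload, chunk_size):
    """Split a job payload into fixed-size chunks."""
    return [payload[i:i + chunk_size] for i in range(0, len(payload), chunk_size)]

def distribute(chunks, nodes):
    """Round-robin assignment of chunks to processing nodes."""
    assignment = {node: [] for node in nodes}
    for i, chunk in enumerate(chunks):
        assignment[nodes[i % len(nodes)]].append(chunk)
    return assignment

chunks = split_job(list(range(10)), chunk_size=3)
plan = distribute(chunks, ["on-site-node", "cloud-node"])
print(plan)
```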
  • Publication number: 20190317822
    Abstract: A computer system configures processing elements within a distributed computing system. A processing element within a distributed computing environment is determined to be affected by a software update, wherein each processing element of the distributed computing system includes a plurality of components and the software update modifies the components of the processing elements. The determined processing element is split into a plurality of processing elements based on a set of factors. The plurality of components of the processing element are assigned among the plurality of processing elements based on components affected by the software update. Embodiments of the present invention further include a method and program product for configuring processing elements within a distributed computing system in substantially the same manner described above.
    Type: Application
    Filed: April 16, 2018
    Publication date: October 17, 2019
    Inventors: David M. Koster, Jason A. Nikolai, John M. Santosuosso
  • Publication number: 20190317823
    Abstract: A computer system configures processing elements within a distributed computing system. A processing element within a distributed computing environment is determined to be affected by a software update, wherein each processing element of the distributed computing system includes a plurality of components and the software update modifies the components of the processing elements. The determined processing element is split into a plurality of processing elements based on a set of factors. The plurality of components of the processing element are assigned among the plurality of processing elements based on components affected by the software update. Embodiments of the present invention further include a method and program product for configuring processing elements within a distributed computing system in substantially the same manner described above.
    Type: Application
    Filed: June 20, 2019
    Publication date: October 17, 2019
    Inventors: David M. Koster, Jason A. Nikolai, John M. Santosuosso
  • Publication number: 20190317824
    Abstract: According to examples, a system may include a plurality of clusters of nodes and a plurality of container manager hardware processors, in which each of the container manager hardware processors may manage the nodes in a respective cluster of nodes. The system may also include at least one service manager hardware processor to manage deployment of customer services across multiple clusters of the plurality of clusters of nodes through the plurality of container manager hardware processors.
    Type: Application
    Filed: April 11, 2018
    Publication date: October 17, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Ajay MANI, David A. Dion, Marcus F. Fontoura, Prajakta S. Patil, Saad Syed, Shailesh P. Joshi, Sushant P. Rewaskar, Vipins Gopinadhan, James Ernest Johnson
  • Publication number: 20190317825
    Abstract: Certain aspects of the present disclosure provide methods and systems for managing deployment of distributed computing resources, including: causing a node agent to be installed on a remote computing node, wherein the node agent is configured to run as an application with user-level privileges on the remote computing node; transmitting, to the node agent using a compact messaging protocol, a request to install a container on the remote computing node, wherein the container is pre-configured with an application; transmitting, to the node agent using the compact messaging protocol, a request to run the application in the container on the remote computing node; and receiving, from the application running on the remote computing node, application data.
    Type: Application
    Filed: September 28, 2018
    Publication date: October 17, 2019
    Inventors: Tim O'NEAL, Konstantin BOGATYREV
  • Publication number: 20190317826
    Abstract: Computational methods and systems that estimate time remaining and right size for usable capacities of resources used to run virtual objects of a distributed computing system are described. For each stream of metric data that represents usage of a resource of a distributed computing system, a model for forecasting metric data is determined and used to compute forecasted metric data in a forecast interval. A resource utilization metric is computed from the forecasted metric data and may be used to estimate a time remaining before the usable capacity of the resource is expected to be insufficient, at which point the resource usable capacity is adjusted. The resource utilization metric may be used to determine that the remaining capacity is insufficient. A right-size usable capacity for the resource is computed based on the resource utilization metric and the usable capacity of the resource is adjusted to at least the right-size usable capacity.
    Type: Application
    Filed: November 5, 2018
    Publication date: October 17, 2019
    Applicant: VMware, Inc.
    Inventors: Lalit Jain, Rachil Chandran, Keshav Mathur, James Ang, Kien Chiew Wong, Leah Nutman
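The time-remaining and right-size estimates in the abstract above reduce to simple arithmetic under a linear-growth assumption. The formulas below are editor-invented simplifications, not the patent's formulas; the 20% headroom figure is arbitrary.

```python
# Illustrative sketch: estimate days until usable capacity is exhausted, and
# a right-sized capacity, under an assumed linear usage growth.

def time_remaining(current_usage, growth_per_day, usable_capacity):
    """Days until forecasted usage reaches usable capacity (None if not growing)."""
    if growth_per_day <= 0:
        return None
    return (usable_capacity - current_usage) / growth_per_day

def right_size(peak_forecast_usage, headroom=0.2):
    """Right-sized capacity: forecasted peak plus a safety headroom."""
    return peak_forecast_usage * (1 + headroom)

print(time_remaining(current_usage=70, growth_per_day=2, usable_capacity=100))  # 15.0
print(right_size(peak_forecast_usage=90))  # 108.0
```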
  • Publication number: 20190317827
    Abstract: The present disclosure provides a method and apparatus for managing kernel services in a multi-core system. Embodiments herein provide a method for managing kernel services in a multi-core system. The method includes configuring a lock for a kernel and object-specific locks for shared resources of the kernel, and parallel processing IPC services for different shared resources on a plurality of cores of the multi-core system using the object-specific locks.
    Type: Application
    Filed: April 17, 2019
    Publication date: October 17, 2019
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Anup Manohar KAVERI, Vinayak HANAGANDI, Nischal JAIN, Rohit Kumar SARAF, Shwetang SINGH, Samarth VARSHNEY, Srinivasa Rao KOLA, Younjo OH
  • Publication number: 20190317828
    Abstract: In a logically partitioned host computer system comprising host processors (host CPUs) partitioned into a plurality of guest processors (guest CPUs) of a guest configuration, a perform topology function instruction is executed by a guest processor specifying a topology change of the guest configuration. The topology change preferably changes the polarization of guest CPUs, the polarization being related to the amount of a host CPU resource provided to a guest CPU.
    Type: Application
    Filed: June 27, 2019
    Publication date: October 17, 2019
    Inventors: Mark S. Farrell, Charles W. Gainey, JR., Jeffrey P. Kubala, Donald W. Schmidt
  • Publication number: 20190317829
    Abstract: Computational methods and systems that proactively manage usage of computational resources of a distributed computing system are described. A sequence of metric data representing usage of a resource is detrended to obtain a sequence of non-trendy metric data. Stochastic process models, a pulse wave model and a seasonal model of the sequence of non-trendy metric data are computed. When a forecast request is received, a sequence of forecasted metric data is computed over a forecast interval based on the estimated trend and one of the pulse wave or seasonal model that matches the periodicity of the sequence of non-trendy metric data. Alternatively, the sequence of forecasted metric data is computed based on the estimated trend and the stochastic process model with a smallest accumulated residual error. Usage of the resource by virtual objects of the distributed computing system may be adjusted based on the sequence of forecasted metric data.
    Type: Application
    Filed: July 26, 2018
    Publication date: October 17, 2019
    Applicant: VMware, Inc.
    Inventors: Darren Brown, Junyuan Lin, Paul Pedersen, Keshav Mathur, Peng Gao, Xing Wang, Leah Nutman
  • Publication number: 20190317830
    Abstract: Disclosed herein are system, apparatus, article of manufacture, method, and/or computer program product embodiments for a cross-cloud orchestration of data analytics for a plurality of research domains. An embodiment operates by receiving one or more command and control (C&C) requests to execute one or more analytic applications of a workflow. The workflow may include the one or more analytic applications for execution. The embodiment may further operate by generating one or more native access requests to execute the analytic applications at one or more analytics computing environments, and transmitting one or more native access requests to the analytics computing environments, wherein at least two native access requests are configured for different access protocols.
    Type: Application
    Filed: June 27, 2019
    Publication date: October 17, 2019
    Applicant: The MITRE Corporation
    Inventors: Joseph Peter JUBINSKI, Ransom Kershaw WINDER, Angela McIntee O'HANLON, Nathan Louis GILES
  • Publication number: 20190317831
    Abstract: A memory fence or other similar operation is executed with reduced latency. An early fence operation is executed and acts as a hint to the processor executing the thread that executes the fence. This hint causes the processor to begin performing sub-operations for the fence earlier than if no such hint were executed. Examples of sub-operations for the fence include operations to make data written to by writes prior to the fence operation available to other threads. A resolving fence, which occurs after the early fence, performs the remaining sub-operations for the fence. By triggering some or all of the sub-operations for a memory fence that will occur in the future, the early fence operation reduces the amount of latency associated with that memory fence operation.
    Type: Application
    Filed: April 12, 2018
    Publication date: October 17, 2019
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Amin Farmahini-Farahani, David A. Roberts, Nuwan Jayasena
  • Publication number: 20190317832
    Abstract: A thread holding a lock notifies a sleeping thread that is waiting on the lock that the lock holding thread is "about" to release the lock. In response to the notification, the waiting thread is woken up. While the waiting thread wakes up, the lock holding thread completes other operations prior to actually releasing the lock and then releases the lock. The notification to the waiting thread hides latency associated with waking up the waiting thread by allowing operations that wake up the waiting thread to occur while the lock holding thread is performing the other operations prior to releasing the lock.
    Type: Application
    Filed: April 12, 2018
    Publication date: October 17, 2019
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Nuwan Jayasena, Amin Farmahini-Farahani, David A. Roberts
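The "notify before release" idea in the abstract above can be sketched with an about-to-release event: the waiter's wakeup overlaps with the holder's remaining cleanup work. This is an editor's illustration in userspace Python, not the hardware-level mechanism the patent describes; names and timings are invented.

```python
# Illustrative sketch: the lock holder signals an about-to-release event so
# the waiter wakes up while the holder is still doing cleanup, hiding the
# wakeup latency behind the holder's remaining work.

import threading
import time

lock = threading.Lock()
about_to_release = threading.Event()
order = []  # records the interleaving (list.append is atomic under the GIL)

def holder():
    with lock:
        order.append("holder: critical section")
        about_to_release.set()   # early hint: start waking the waiter now
        time.sleep(0.05)         # cleanup overlapped with the waiter's wakeup
        order.append("holder: cleanup done")
    # lock actually released here

def waiter():
    about_to_release.wait()      # woken while the holder is still working
    order.append("waiter: awake early")
    with lock:                   # acquires as soon as the holder releases
        order.append("waiter: got lock")

t1 = threading.Thread(target=holder)
t2 = threading.Thread(target=waiter)
t1.start(); t2.start()
t1.join(); t2.join()
print(order)
```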
  • Publication number: 20190317833
    Abstract: A data processing system (10) connected to a plurality of first applications (80) and a plurality of second applications (90) includes a setting information generation function section (20), wherein the setting information generation function section includes a first interface generator (22) configured to generate a first interface for the first applications, a second interface generator (24) configured to generate a second interface for the second applications, and a storage (30) to store common data structure generation source information which is common information based on which the first and second interfaces are generated, and when the common data structure generation source information is updated, the first interface generator and the second interface generator automatically generate the first interface and the second interface, respectively, based on the updated common data structure generation source information.
    Type: Application
    Filed: December 13, 2017
    Publication date: October 17, 2019
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Keiichiro KASHIWAGI, Hisaharu ISHII, Kenji UMAKOSHI, Ryohei BANNO, Yui YOSHIDA
  • Publication number: 20190317834
    Abstract: Using and updating topological relationships amongst a set of nodes in event clustering is disclosed. A current event occurs on a current node. A first cluster of related events includes a first event, occurring on a first node, that is time-correlated with the current event. The first cluster does not include any event that is topologically-correlated with the current event based on the existing set of topological relationships. A level of interdependence is determined between (a) occurrence of events on the current node and (b) occurrence of events on the first node. Based on the level of interdependence, the current event is added to the first cluster. Further, an event-based topological relationship between the current node and the first node is added to the set of topological relationships. Subsequently, clustering for new events may be determined based on the event-based topological relationship between the current node and the first node.
    Type: Application
    Filed: April 11, 2018
    Publication date: October 17, 2019
    Applicant: Oracle International Corporation
    Inventors: Mohammad Sadegh Ebrahimi, Raghu Hanumanth Reddy Patti, Dustin Garvey
  • Publication number: 20190317835
    Abstract: A system, method and computer program product are provided for improved management of events. A plurality of events is received from an event source. Each event comprises event data relating to a monitored system associated with the event source. A set of data fields in the event data of the plurality of events is identified. One or more relationships between at least one data field in the set of data fields and at least one other data field in the set of data fields are determined. A mapping of the event data of the plurality of events to a predefined common format is determined, based on the one or more relationships.
    Type: Application
    Filed: April 12, 2018
    Publication date: October 17, 2019
    Inventors: Paul B. French, Gerd Breiter, Kristian J. Stewart, John Lee
  • Publication number: 20190317836
    Abstract: Events are raised by a worker thread that is executing a workflow, when the worker thread starts, suspends, resumes and ends execution of the workflow. A latency detection system, that is external to the worker thread, detects when these events are raised and logs a timestamp and a stack trace, corresponding to the workflow and the event that was raised.
    Type: Application
    Filed: April 13, 2018
    Publication date: October 17, 2019
    Inventors: Matthew Christopher Kuzior, Harsh Satyanarayan Dangayach, Alexander Lawrence Wilkins, Matthew John Smith
  • Publication number: 20190317837
    Abstract: A bipartite workflow graph, representing an understanding of an overall service, comprises two different graph elements: entities and processes. Each individual microservice defines its logical constructs as either an entity or a process in accordance with a universal schema. Notifications from such microservices conform to the universal schema, thereby enabling microservices to individually change how they operate internally, without affecting an understanding of the overall system as represented by the workflow graph. Each graph element has its state maintained by a separately addressable execution unit executing a state machine, which can be individually updated based on information received from the microservices. Changes to the workflow graph are logged and an insight engine monitors such a log to insert insight markers in accordance with predefined events, thereby enabling the collection of metrics on a service wide basis and across multiple microservices.
    Type: Application
    Filed: April 14, 2018
    Publication date: October 17, 2019
    Inventors: James FLETCHER, Robert Franz HAIN, Kelly Michael SMITH, Isaac MATICHUK, Jared James GOHR, Curtis Todd JOHNSON, Michael Dennis SCHUELLER
  • Publication number: 20190317838
    Abstract: In a system including a primary process followed by a secondary process, which are performed serially and sequentially, i.e., in a FIFO manner, where the secondary process is downstream of the primary process, the disclosed embodiments relate to selective/conditional secondary processing of electronic data transaction request messages, which speeds up the primary processing of the electronic data transaction request messages, reduces the amount of computing resources wasted on calculating inaccurate information, and reduces the usage of network resources associated with publishing market data feeds and receiving new responsive messages.
    Type: Application
    Filed: April 16, 2018
    Publication date: October 17, 2019
    Inventors: Kyle Dennis Kavanagh, Pearce Ian Peck-Walden
  • Publication number: 20190317839
    Abstract: A communication unit acquires sensor-side metadata, which is information relating to a sensor, and application-side metadata, which is information relating to an application. A comparison unit extracts a sensor that can provide the sensing data through matching between the sensor-side metadata and the application-side metadata, and a notification unit transmits, based on a result of the extraction, a data flow control command to a sensor management device. The sensor-side metadata and the application-side metadata contain data that can be handled as dynamic data.
    Type: Application
    Filed: October 25, 2017
    Publication date: October 17, 2019
    Applicant: OMRON Corporation
    Inventors: Shuichi MISUMI, Toshihiko ODA, Tetsuji YAMATO, Ryota YAMADA
  • Publication number: 20190317840
    Abstract: An approach is provided for providing transactional operations in an event-driven polyglot language runtime environment. Native functionalities of a transaction processing system are exposed as interfaces to multiple languages and frameworks in the runtime environment. The transactional operations are called from modules. The transaction processing system is integrated with the modules. A prepare operation is sent to a resource manager (RM) via a resolution thread. For a committed transaction outcome of the resolution thread, the commit is logged, an indication of the commit is sent to the RM, the commit is performed, a completion indication of the commit is sent, and a forget operation is logged. For a rollback transaction outcome of the resolution thread, the rollback is logged, an indication of the rollback is sent to the RM, the rollback is performed, a completion indication of the rollback is sent, and the forget operation is logged.
    Type: Application
    Filed: June 27, 2019
    Publication date: October 17, 2019
    Inventors: Nageswararao V. Gokavarapu, Gopalakrishnan P, Parameswaran Selvam, Hariharan N. Venkitachalam
  • Publication number: 20190317841
    Abstract: The disclosure provides a method for prompting a message in a terminal, and a terminal. The terminal includes multiple operating systems and a management system. The management system is configured to manage the multiple operating systems. The management system includes a cross-system application database. The method includes: when a first operating system in the multiple operating systems runs in a foreground, and a second operating system in the multiple operating systems runs in a background, if the second operating system receives a first message of a first application in the second operating system, sending, by the second operating system, a notification message to the management system; storing, by the management system, the notification message into the cross-system application database; and listening, by the first operating system, on the cross-system application database, and outputting a prompt of the first message when listening and obtaining the notification message.
    Type: Application
    Filed: June 27, 2019
    Publication date: October 17, 2019
    Inventors: Bo LU, Jianfei ZHONG, Yunjian YING
  • Publication number: 20190317842
    Abstract: Attribute-based application programming interface (API) comparative benchmarking is provided. In response to determining that a target API maps to an existing API classification based on attributes of the target API, a weighted average of benchmark confidence scores of other APIs in a same class as the target API is determined. A benchmark confidence score is determined for the target API based on feedback, reviews, and ratings. The benchmark confidence score of the target API is compared with the weighted average of benchmark scores. An attribute-based API classification mapping is updated based on the comparison. Pricing for the target API is determined based on a weighted average of API pricing across the other APIs in the same class as the target API.
    Type: Application
    Filed: April 17, 2018
    Publication date: October 17, 2019
    Inventors: Harish Bharti, Amol Dhondse, Abhay Patra, Anand Pikle, Rakesh Shinde
  • Publication number: 20190317843
    Abstract: An apparatus in one embodiment comprises a host device that includes at least one processor and an associated memory. The host device is configured to implement a plurality of processes each configured to access a shared region of the memory. The host device is further configured to establish a multi-process control group for the shared region, to maintain state information for the multi-process control group, and to track usage of the shared region by the processes based at least in part on the state information. At least a subset of the processes may comprise respective containers implemented utilizing operating system level virtualization of the processor of the host device. The multi-process control group established for the shared region illustratively comprises a coarse-grained control group having a granularity greater than a single page of the shared region.
    Type: Application
    Filed: April 11, 2018
    Publication date: October 17, 2019
    Inventors: Junping Zhao, Xiangping Chen
  • Publication number: 20190317844
    Abstract: Methods, apparatus, systems and articles of manufacture are disclosed to select code data structure types. An example disclosed apparatus includes an application programming interface (API) engine to generate an abstract data structure (ADS) placeholder in a location of a code sample corresponding to a memory operation, and a data structure selector to select a first candidate data structure having a first candidate data structure type, the first candidate data structure to service the memory operation of the ADS placeholder.
    Type: Application
    Filed: June 26, 2019
    Publication date: October 17, 2019
    Inventors: Justin Gottschlich, Mohammad Mejbah Ul Alam, Shengtian Zhou
  • Publication number: 20190317845
    Abstract: A device may identify a tool operating on a first device for integration into a lifecycle management platform operating on a second device. The tool may be associated with providing a functionality not included in the lifecycle management platform. The first device may be external to the second device. The device may determine a set of tool attributes for data events associated with the tool. The data events may include a data input, a data output, a new message, an updated message, a deleted message, or the like. The device may select a message format based on the set of tool attributes. The device may configure adaptation for a tool application programming interface (API) of the tool and a platform API of the lifecycle management platform based on the message format. The device may provide information associated with configuring adaptation for the tool API and the platform API.
    Type: Application
    Filed: June 28, 2019
    Publication date: October 17, 2019
    Inventors: Krupa Srivastava, Vijayaraghavan Koushik, Chandrashekhar Deshpande, Mark Lazarus, Le G. Dang, Madhusudhana Desai, Sumit Kute, Arpan Shukla, Tiarenla Jamir, Prashant Sawant, Rohan Sharma, Bhaskar Ghosh, Mohan Sekhar, Rajendra T. Prasad
  • Publication number: 20190317846
    Abstract: Embodiments of the present invention provide an application interaction method, apparatus, and system. The method includes: receiving, by a transmission configuration module, a first instruction, where the first instruction includes an identifier of a first application and an identifier of a second application; determining, by the transmission configuration module, deployment information of the first application and deployment information of the second application based on the first instruction; determining, by the transmission configuration module, an information transmission mode between the first application and the second application based on the deployment information of the first application, the deployment information of the second application, and a transmission mode selection policy; and using, by the first application, the transmission mode to transmit information to the second application.
    Type: Application
    Filed: June 27, 2019
    Publication date: October 17, 2019
    Inventors: Xiaoxu LIU, Kai ZHENG
  • Publication number: 20190317847
Abstract: An RPC conversion processing system (10) performs relaying between a first application (70) and a second application (80) that use different protocols. The RPC conversion processing system includes a first interface provider (20) connected to the first application and configured to provide a first interface for the first application, an RPC conversion relay function section (40), and a second interface provider (30) connected to the second application and configured to provide a second interface for the second application. The first interface provider receives a request for processing an RPC from the first application. The RPC conversion relay function section converts the RPC into an RPC of the second application and outputs the RPC of the second application to the second interface provider, thereby relaying an RPC between the first application and the second application.
    Type: Application
    Filed: December 13, 2017
    Publication date: October 17, 2019
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Tomoyuki FUJINO, Yuji OSHIMA, Keiichiro KASHIWAGI, Hisaharu ISHII, Yui YOSHIDA
  • Publication number: 20190317848
Abstract: Some embodiments include reception of a time-series of a respective data value generated by each of a plurality of sensors, calculation of a regression associated with a first sensor of the plurality of sensors based on the received plurality of time-series, the regression being a function of the respective data values of the others of the plurality of sensors, reception of respective data values associated with a time and generated by each of the plurality of sensors, determination of a predicted value associated with the time for the first sensor based on the regression associated with the first sensor and on the respective data values associated with the time, comparison of the predicted value with the received value associated with the time and generated by the first sensor, and determination of a value indicating a likelihood of an anomaly based on the comparison.
    Type: Application
    Filed: June 27, 2019
    Publication date: October 17, 2019
    Inventors: Robert Meusel, Jaakob Kind, Atreju Florian Tauschinsky, Janick Frasch, Minji Lee, Michael Otto
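The prediction step above is a regression of one sensor on the others, with the residual between predicted and observed values driving the anomaly score. A minimal single-predictor sketch (ordinary least squares; a real deployment would regress on all other sensors):

```python
def fit_regression(xs, ys):
    """Ordinary least squares for y ≈ a*x + b over a training window,
    where xs is the history of a peer sensor and ys the target sensor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def anomaly_score(model, x, observed):
    """Residual between the regression's prediction and the observed
    value; a large residual indicates a likely anomaly."""
    a, b = model
    return abs((a * x + b) - observed)
```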
  • Publication number: 20190317849
Abstract: A processor may identify, using historical data, an amount of computing resources consumed to remedy a failure of a process with an automatic remedy step. The processor may determine that the amount of consumed computing resources to remedy the failure is less than an amount of computing resources consumed by restarting the process. The processor may perform the automatic remedy step. The processor may identify that the automatic remedy step has failed. The processor may determine a waiting period based on an estimated time to receive a user response to the failure and an estimated load on a computing cluster. The processor may display a generated alert to a user during the waiting period. The processor may identify that no user input has been received during the waiting period. The processor may release computing resources corresponding to the process.
    Type: Application
    Filed: April 11, 2018
    Publication date: October 17, 2019
    Inventors: Lukasz G. Cmielowski, Pawel Slowikowski, Rafal Bigaj, Bartlomiej Malecki
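The two decisions in the abstract above (whether the remedy is cheaper than a restart, and how long to wait for user input) can be sketched with hypothetical cost units and a load-scaled wait heuristic:

```python
def choose_action(remedy_cost, restart_cost):
    """Prefer the automatic remedy only when it is expected to consume
    fewer computing resources than restarting the process."""
    return "remedy" if remedy_cost < restart_cost else "restart"

def waiting_period(est_user_response_s, cluster_load):
    """Hypothetical heuristic: wait up to the estimated user response
    time, shortened as the estimated cluster load (0..1) grows, since
    held resources are more valuable on a busy cluster."""
    return est_user_response_s * (1.0 - min(cluster_load, 0.9))
```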
  • Publication number: 20190317850
Abstract: Methods, computer program products, and systems are presented. The methods, computer program products, and systems can include, for instance: obtaining iteratively captured frames of image data representing a user interface screen, wherein one or more of the frames of image data represents an error screen indicating an error condition of one or more resources of a plurality of resources of a services system; performing recognition processing using image data of the captured frames of image data to determine an error classifier associated with the error screen; determining one or more actions based on the error classifier; and performing the one or more actions in response to the determining.
    Type: Application
    Filed: April 17, 2018
    Publication date: October 17, 2019
    Inventors: Kwan Yin Andrew CHAU, Tony LE, Patrick WONG, Dien D. NGUYEN
  • Publication number: 20190317851
    Abstract: A memory includes error correction circuitry that receives a data packet, outputs a correctable error flag indicating presence or absence of a correctable error in the data packet, and outputs an uncorrectable error flag indicating presence or absence of an uncorrectable error in the data packet. A response manager, operating in availability mode, generates output indicating that a correctable error was present if the correctable error flag indicates presence thereof, and generates an output indicating that an uncorrectable error was present if the uncorrectable error flag indicates presence thereof. In a coverage mode, the response manager generates an output indicating that a correctable error was potentially present but should be treated as an uncorrectable error if the correctable error flag indicates presence of the correctable error, and generates an output indicating that an uncorrectable error was present if the uncorrectable error flag indicates presence thereof.
    Type: Application
    Filed: June 27, 2019
    Publication date: October 17, 2019
    Applicants: STMicroelectronics International N.V., STMicroelectronics S.r.l.
    Inventors: Om RANJAN, Riccardo GEMELLI, Abhishek GUPTA
  • Publication number: 20190317852
Abstract: In one embodiment, content-addressable memory lookup result integrity checking and correcting operations are performed, such as, but not limited to, protecting the accuracy of packet processing operations. A lookup operation is performed in the content-addressable memory entries based on a lookup word resulting in one or more match vectors. One or multiple result match vectors are produced, depending on whether each of the content-addressable memory entries and the lookup word have been partitioned into multiple portions. An error accuracy code (e.g., error detection, error correction) is acquired for each portion of the one or multiple portions based on a corresponding portion of the lookup word. An accurate result is generated by processing each of the result match vector(s) with their corresponding error accuracy code. When using multiple portions, the (possibly corrected) result match vectors are combined into a single accurate result match vector.
    Type: Application
    Filed: April 16, 2018
    Publication date: October 17, 2019
    Applicant: Cisco Technology, Inc., a California corporation
    Inventor: Doron Shoham
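One way to read the result-vector check described above: on a true match the entry equals the lookup word, so an error-detection code stored per entry can be compared against a code recomputed from the (trusted) lookup word, clearing matches produced by corrupted entries. A toy parity-based sketch, not the patent's actual code construction:

```python
def parity(bits):
    """Even-parity code over a bit sequence."""
    return sum(bits) % 2

def cam_lookup(entries, codes, word):
    """Form a raw match vector over the CAM entries, then clear any
    asserted match whose stored code disagrees with the code recomputed
    from the lookup word (which equals the entry on a true match)."""
    raw = [entry == word for entry in entries]
    want = parity(word)
    return [m and codes[i] == want for i, m in enumerate(raw)]
```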
  • Publication number: 20190317853
Abstract: An apparatus for smart integrated cyclic data transport is provided. The apparatus may preserve the consistency and integrity of a file during the transfer of the file from a source system to a target system. The apparatus includes an orchestration subsystem. The orchestration subsystem includes an analyzer/generator module. The analyzer/generator module executes an algorithm on the file at the source location. An output is generated from the executed algorithm. The apparatus includes a consistency module. The consistency module pre-checks the output at the source location for pre-transfer validation and creates a copy of the output. The copy may preserve the consistency and the integrity of the file. The apparatus includes a data transfer subsystem which transfers the file and the output from the source system to the target system. The apparatus may also include a validation subsystem for validating the integrity and consistency of the file.
    Type: Application
    Filed: May 30, 2019
    Publication date: October 17, 2019
    Inventors: Manu Kurian, Sorin Cismas, Jay Varma, Paul Grayson Roscoe, Balaji Subramanian, Himabindu Keesara, Nathan Allen Eaton, JR., Vibhuti Damania
  • Publication number: 20190317854
Abstract: A semiconductor device includes an address conversion circuit which generates a second address for storing an error detecting code in a memory based on a first address for storing data; a write circuit which writes the data at the first address and writes the error detecting code at the second address; and a read circuit which reads the data from the first address, reads the error detecting code from the second address, and detects an error based on the data and the error detecting code. The address conversion circuit generates the second address by modifying the value of at least one bit of the first address so as to offset the storage position of the error detecting code from the storage position of the data, and by inverting the value of, or permuting the order of, a prescribed number of bits among the other bits.
    Type: Application
    Filed: June 25, 2019
    Publication date: October 17, 2019
    Inventors: Yukitoshi TSUBOI, Hiroyuki HAMASAKI
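The address conversion above can be pictured as two bit-level steps: set a high bit to offset the code region from the data region, then invert some low-order bits. A sketch with hypothetical bit positions:

```python
def ecc_address(addr, offset_bit=15, scramble_bits=3):
    """Derive the error-detecting-code address from the data address:
    one high bit offsets the code region from the data region, and the
    lowest bits are inverted so codes for neighboring data words do not
    land on neighboring code locations. Bit positions are illustrative."""
    offset = addr | (1 << offset_bit)      # move into the code region
    low_mask = (1 << scramble_bits) - 1
    return offset ^ low_mask               # invert the low bits
```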
  • Publication number: 20190317855
    Abstract: In the described examples, a memory controller includes a read-modify-write logic module that receives a partial write data request for partial write data in error-correcting code (ECC) memory and combines the partial write data in the partial write data request with read data provided from the ECC memory to form combined data prior to correcting the read data. The memory controller also includes a write control module that controls the writing of the combined data to the ECC memory.
    Type: Application
    Filed: June 26, 2019
    Publication date: October 17, 2019
    Inventors: Indu Prathapan, Prashanth Saraf, Desmond Pravin Martin Fernandes, Saket Jalan
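The read-modify-write merge itself is a byte splice: only the addressed bytes of the word read from ECC memory are overwritten. A sketch of that merge (the patent's notable detail, combining before correcting the read data, is a latency optimization not modeled here):

```python
def merge_partial_write(read_word, partial, byte_offset):
    """Combine a partial write with the full word read from memory:
    overwrite only the addressed bytes, keep the remaining read data.
    The combined word would then be re-encoded and written back."""
    merged = bytearray(read_word)
    merged[byte_offset:byte_offset + len(partial)] = partial
    return bytes(merged)
```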
  • Publication number: 20190317856
Abstract: Embodiments of the present invention include a memory module that includes a plurality of memory devices and a memory buffer device. Each of the memory devices is characterized as one of a high random bit error rate (RBER) and a low RBER memory device. The memory buffer device includes a read data interface to receive data read from a memory address on one of the memory devices. The memory buffer device also includes common error correction logic to detect and correct error conditions in data read from both high RBER and low RBER memory devices. The common error correction logic includes a plurality of error correction units which provide different complexity levels of error correction and have different latencies. The error correction units include a first fast path error correction unit for isolating and correcting random symbol errors.
    Type: Application
    Filed: April 16, 2018
    Publication date: October 17, 2019
    Inventors: James A. O'Connor, JR., Barry M. Trager, Warren E. Maule, Marc A. Gollub, Brad W. Michael, Patrick J. Meaney
  • Publication number: 20190317857
    Abstract: Technologies for providing error correction for row direction and column direction in a cross point memory include a memory that includes media access circuitry coupled to a memory media having a cross point architecture. The media access circuitry is configured to read, from the memory media, a column of data. Additionally, the media access circuitry is configured to read, from the memory media, column error correction code (ECC) check data appended to the column of data and perform error correction on the column of data with the column ECC check data to produce error-corrected data.
    Type: Application
    Filed: April 26, 2019
    Publication date: October 17, 2019
    Inventors: Jawad B. Khan, Richard Coulson, Srikanth Srinivasan
  • Publication number: 20190317858
    Abstract: Data availability in geographically-distributed object storage systems that utilize erasure coding is increased without any run-time overheads. In one aspect, non-intersecting sub matrices of a great encoding matrix can be utilized to erasure code data fragments of a chunk at the zone level and generate coding fragments. Accordingly, the data fragments that are stored within different zones are identical, while the coding fragments stored within the different zones are disparate. Subsequent to a multi-zone data failure, wherein it is determined that a decoding operation cannot be performed at the zone level, available fragments associated with the chunk can be collected from the zones and collectively decoded to recover the chunk. In one aspect, the fragments can be decoded by utilizing a great decoding matrix that corresponds to the great encoding matrix.
    Type: Application
    Filed: April 12, 2018
    Publication date: October 17, 2019
    Inventors: Mikhail Danilov, Konstantin Buinov
  • Publication number: 20190317859
Abstract: Data protection with meta chunks increases capacity use efficiency without verification and data copying. In one aspect, a meta chunk is a data protection unit, which combines two or more source chunks that are determined to have reduced sets of data fragments. The meta chunk can be encoded to generate a set of coding fragments, which can be stored and utilized to recover data fragments of any of the two or more source chunks. Further, the source chunks can be linked to the meta chunk. Furthermore, the sets of coding fragments, that were previously generated by individually encoding each source chunk, can be deleted.
    Type: Application
    Filed: April 12, 2018
    Publication date: October 17, 2019
    Inventors: Mikhail Danilov, Kirill Gusakov
  • Publication number: 20190317860
Abstract: Erasure code for data is generated by: calculating the size and bytes of an erasure code block, calculating a number of stripes for the erasure code, generating each stripe of each block for the erasure code such that the stripes alternate in a pattern for each block, and saving hashes. A portion of the data is repaired by: for each block of the portion of the data, calculating the stripe of the block, identifying as a bad block each block for which the hash of the block of the portion of data does not match the saved hash of the block, and for each identified bad block, generating a repair block for the bad block based on the stripe of the block and the corresponding block of the data in the erasure coding for the data.
    Type: Application
    Filed: June 20, 2018
    Publication date: October 17, 2019
    Inventor: Edmund B. NIGHTINGALE
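Rebuilding a bad block from its stripe can be illustrated with the simplest erasure code, single XOR parity (the patent does not specify this particular code; it stands in for whatever coding the implementation uses):

```python
def xor_parity(blocks):
    """Single parity 'repair block' over a stripe of equal-size blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def repair(blocks, parity_block, bad_index):
    """Rebuild one bad block (e.g. one whose hash failed to match the
    saved hash) from the surviving blocks plus the parity block."""
    survivors = [b for i, b in enumerate(blocks) if i != bad_index]
    return xor_parity(survivors + [parity_block])
```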
  • Publication number: 20190317861
    Abstract: Embodiments of the present invention provide systems and methods for recovering a high availability storage system. The storage system includes a first layer and a second layer, each layer including a controller board, a router board, and storage elements. When a component of a layer fails, the storage system continues to function in the presence of a single failure of any component, up to two storage element failures in either layer, or a single power supply failure. While a component is down, the storage system will run in a degraded mode. The passive zone is not serving input/output requests, but is continuously updating its state in dynamic random access memory to enable failover within a short period of time using the layer that is fully operational. When the issue with the failed zone is corrected, a failback procedure brings the system back to a normal operating state.
    Type: Application
    Filed: June 27, 2019
    Publication date: October 17, 2019
    Inventors: Ladislav STEFFKO, Vijay KARAMCHETI
  • Publication number: 20190317862
    Abstract: In one approach, data blocks or files that have a history of change are tagged for automatic transfer to backup on the assumption that they have changed since the last backup. Other data blocks and files are first tested for change, for example by comparing digital fingerprints of the current data versus the previously backed up data, before transferring to backup.
    Type: Application
    Filed: June 26, 2019
    Publication date: October 17, 2019
    Inventor: Looi Chow Lee
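The split above between "tagged for automatic transfer" and "first tested for change" can be sketched with a content fingerprint (SHA-256 here as a stand-in for whatever digest the implementation uses):

```python
import hashlib

def needs_backup(block, last_digest, hot=False):
    """Blocks with a history of change ('hot') are transferred without
    checking; cold blocks are fingerprinted and transferred only if the
    digest differs from the previously backed-up digest."""
    if hot:
        return True
    return hashlib.sha256(block).hexdigest() != last_digest
```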
  • Publication number: 20190317863
    Abstract: Various methods, systems, and processes for mitigating fragmentation in synthetic full backups are disclosed. One or more storage units out of multiple storage units are identified. The multiple storage units include one or more new storage units or one or more existing storage units. The multiple storage units are accessed and a determination is made as to whether the one or more storage units out of the multiple storage units meet a threshold. The threshold is a measure of a characteristic of those one or more storage units. If one or more storage units meet the threshold, those one or more storage units are included in a backup stream, and the backup stream is sent to a backup server.
    Type: Application
    Filed: March 25, 2019
    Publication date: October 17, 2019
    Inventor: Shuai Cheng
  • Publication number: 20190317864
Abstract: Provided are a data backup method, electronic device, and storage medium, the data backup method including: acquiring application data to be backed up and update frequencies of the application data in a terminal; generating backup priorities based on the update frequencies; and transmitting the application data to be backed up to a server based on the backup priorities.
    Type: Application
    Filed: July 18, 2017
    Publication date: October 17, 2019
    Inventor: Zhifeng MA
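Ordering application data by update frequency, as in the abstract above, is essentially a sort; a minimal sketch:

```python
def backup_order(update_freq):
    """Order applications for upload by backup priority: most frequently
    updated first, so the data most likely to change (and be lost) is
    transmitted to the server soonest."""
    return sorted(update_freq, key=update_freq.get, reverse=True)
```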
  • Publication number: 20190317865
Abstract: Disclosed herein are system, method, and computer program product embodiments for a database recovery and optimization with batch processing system. An embodiment operates by retrieving a database log that includes a plurality of operations for modifying data of a database stored across a plurality of tables. From the database log, a plurality of consecutive insert operations for inserting data into the database are identified. The consecutive insert operations are sorted by table. The sorted insert operations are grouped into a batch message. The batch message is transmitted to the database for replay. An acknowledgement is received that the replay has completed.
    Type: Application
    Filed: April 16, 2018
    Publication date: October 17, 2019
    Inventors: Martin Heidel, Xin Liu, Christoph Roterring, Shiping Chen, Vivek Kandiyanallur, Stephan Kottler, Joern Schmidt
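The identify-sort-batch steps above can be sketched as one pass over the log: runs of consecutive inserts are collected, sorted by table, and emitted as one batch, while other operations replay individually. The log shape here, `(op, table, row)` tuples, is a hypothetical simplification:

```python
def batch_inserts(log):
    """Group runs of consecutive insert operations from a database log
    into batch messages, sorting each run by table; non-insert
    operations break the run and are replayed on their own."""
    batches, run = [], []
    for op, table, row in log:
        if op == "insert":
            run.append((table, row))
        else:
            if run:
                batches.append(sorted(run))  # sort the run by table
                run = []
            batches.append([(table, row)])   # non-insert replayed alone
    if run:
        batches.append(sorted(run))
    return batches
```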