Patents by Inventor Karthik Kumar

Karthik Kumar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190158606
    Abstract: An architecture to perform resource management among multiple network nodes and associated resources is disclosed. Example resource management techniques include those relating to: proactive reservation of edge computing resources; deadline-driven resource allocation; speculative edge QoS pre-allocation; and automatic QoS migration across edge computing nodes.
    Type: Application
    Filed: December 28, 2018
    Publication date: May 23, 2019
    Inventors: FRANCESC GUIM BERNAT, PATRICK BOHAN, KSHITIJ ARUN DOSHI, BRINDA GANESH, ANDREW J. HERDRICH, MONICA KENGUVA, KARTHIK KUMAR, PATRICK G. KUTCH, FELIPE PASTOR BENEYTO, RASHMIN PATEL, SURAJ PRABHAKARAN, NED M. SMITH, PETAR TORRE, ALEXANDER VUL
  • Publication number: 20190155239
    Abstract: In one embodiment, an apparatus comprises a fabric controller of a first computing node. The fabric controller is to receive, from a second computing node via a network fabric that couples the first computing node to the second computing node, a request to execute a kernel on a field-programmable gate array (FPGA) of the first computing node; instruct the FPGA to execute the kernel; and send a result of the execution of the kernel to the second computing node via the network fabric.
    Type: Application
    Filed: June 30, 2016
    Publication date: May 23, 2019
    Applicant: Intel Corporation
    Inventors: Nicolas A. Salhuana, Karthik Kumar, Thomas Willhalm, Francesc Guim Bernat, Narayan Ranganathan
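The request/execute/respond flow described in this abstract can be sketched in a few lines. This is an illustrative toy, not the patented implementation: the class names (`Fpga`, `FabricController`) are made up, and the network fabric between nodes is modeled as a direct method call.

```python
class Fpga:
    """Stand-in for an FPGA that can execute a registered kernel."""
    def __init__(self):
        self.kernels = {}

    def register(self, name, fn):
        self.kernels[name] = fn

    def execute(self, name, args):
        return self.kernels[name](*args)


class FabricController:
    """Receives a kernel request from a remote node and returns the result."""
    def __init__(self, fpga):
        self.fpga = fpga

    def handle_request(self, request):
        # request: {"kernel": kernel_name, "args": argument_tuple}
        result = self.fpga.execute(request["kernel"], request["args"])
        # In the patented design this reply would travel back over the fabric.
        return {"kernel": request["kernel"], "result": result}


fpga = Fpga()
fpga.register("vector_add", lambda a, b: [x + y for x, y in zip(a, b)])
controller = FabricController(fpga)
reply = controller.handle_request({"kernel": "vector_add", "args": ([1, 2], [3, 4])})
```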
  • Publication number: 20190158580
    Abstract: A server set may provide a document service to various clients in view of considerations such as availability, fault tolerance, flexibility, and performance. Presented herein are document service architectures that involve partitioning the document set into at least two document ranges, and assigning the respective document ranges to an agent that is deployed to at least one assigned server. A request to apply an operation to a selected document may be fulfilled by identifying the document range of the document; identifying a selected server of the server set that hosts the agent to which the range is assigned; and forwarding the request to the selected server. In some variations, servers may retain detailed information about neighboring servers (e.g., according to logical and/or physical proximity) and scant information about distant servers, thereby avoiding both the extensive information exchange of highly informed network architectures and the inefficiency of uninformed routing algorithms.
    Type: Application
    Filed: November 26, 2018
    Publication date: May 23, 2019
    Inventors: Dharma SHUKLA, Madhan GAJENDRAN, Quetzalcoatl BRADLEY, Shireesh Kumar THOTA, Karthik RAMAN, Mark Connolly BENVENUTO, John MACINTYRE, Nemanja MATKOVIC, Constantin DULU, Elisa Marie FLASKO, Atul KATIYAR
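The routing step in this abstract, identify the document's range, then forward to the server hosting that range's agent, amounts to a range lookup. A minimal sketch, with made-up range boundaries and server names:

```python
import bisect

# Documents are partitioned into key ranges; each range is assigned to an
# agent deployed on a server. Boundaries below are illustrative:
# range 0 = keys < "g", range 1 = ["g", "p"), range 2 = keys >= "p".
range_bounds = ["g", "p"]
range_to_server = ["server-a", "server-b", "server-c"]

def route(doc_key):
    """Return the server hosting the agent for doc_key's range."""
    idx = bisect.bisect_right(range_bounds, doc_key)
    return range_to_server[idx]
```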
  • Patent number: 10298757
    Abstract: A curator captures input data corresponding to service tasks from an external source. Further, a browser extension collects intermediate service delivery data for the service tasks from the external source. Subsequently, a learner stores the input data and the intermediate service delivery data as training data. Then, a receiver receives a service request from a client. The service request is indicative of a service task to be performed and information associated with the service task. Further, an advisor processes the service request to generate an intermediate service response. Thereafter, the advisor determines a confidence level associated with the intermediate service response and ascertains whether the confidence level is below a pre-determined threshold level. If the confidence level is below the pre-determined threshold level, the advisor automatically generates a final service response corresponding to the service request based on the training data.
    Type: Grant
    Filed: February 21, 2018
    Date of Patent: May 21, 2019
    Assignee: ACCENTURE GLOBAL SOLUTIONS LIMITED
    Inventors: Chung-Sheng Li, Guanglei Xiong, Emmanuel Munguia Tapia, Kyle P. Johnson, Christopher Cole, Sachin Aul, Suraj Govind Jadhav, Saurabh Mahadik, Mohammad Ghorbani, Colin Connors, Chinnappa Guggilla, Naveen Bansal, Praveen Maniyan, Sudhanshu A Dwivedi, Ankit Pandey, Madhura Shivaram, Sumeet Sawarkar, Karthik Meenakshisundaram, Nagendra Kumar M R, Hariram Krishnamurth, Karthik Lakshminarayanan
  • Patent number: 10291467
    Abstract: Techniques for deploying a server stack having a cross-server dependency are disclosed. A deployment engine initiates a deployment process for a server stack. The deployment engine provisions servers of one server type (“requisite servers”). The deployment engine attempts to provision servers of another server type (“dependent servers”). The deployment engine executes a test that requires the dependent servers to invoke a service executed by the requisite servers. Based on the test results, the deployment engine determines that an operational requirement of the dependent servers is not satisfied. The deployment engine modifies a configuration for the requisite servers to satisfy the operational requirement of the dependent servers. The deployment engine re-provisions the requisite servers using the modified configuration. The deployment engine completes the deployment process for the server stack.
    Type: Grant
    Filed: May 18, 2017
    Date of Patent: May 14, 2019
    Assignee: Oracle International Corporation
    Inventors: Pradip Kumar Pandey, Steven Mark Fillipi, Clayton Drew Seeley, Karthik M U, Sanjeev Kumar Sharma
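The deployment loop described above, provision, test the cross-server dependency, modify the requisite servers' configuration, and re-provision until the dependents' requirement is satisfied, can be sketched as below. The configuration field and test callback are stand-ins, not details from the patent.

```python
def deploy_stack(requisite_config, run_dependency_test, max_attempts=3):
    """Re-provision requisite servers until the dependents' test passes."""
    config = dict(requisite_config)
    for _ in range(max_attempts):
        ok, needed = run_dependency_test(config)
        if ok:
            return config            # deployment process completes
        config.update(needed)        # modify config to satisfy dependents
    raise RuntimeError("dependency requirement never satisfied")


# Toy dependency test: dependent servers need the requisites' service on 8443.
def dependency_test(config):
    if config.get("service_port") == 8443:
        return True, {}
    return False, {"service_port": 8443}


final = deploy_stack({"service_port": 8080}, dependency_test)
```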
  • Publication number: 20190140919
    Abstract: An architecture to enable verification, ranking, and identification of respective edge service properties and associated service level agreement (SLA) properties, such as in an edge cloud or other edge computing environment, is disclosed. In an example, management and use of service information for an edge service includes: providing SLA information for an edge service to an operational device, for accessing an edge service hosted in an edge computing environment, with the SLA information providing reputation information for computing functions of the edge service according to an identified SLA; receiving a service request for use of the computing functions of the edge service, under the identified SLA; requesting, from the edge service, performance of the computing functions of the edge service according to the service request; and tracking the performance of the computing functions of the edge service according to the service request and compliance with the identified SLA.
    Type: Application
    Filed: December 28, 2018
    Publication date: May 9, 2019
    Inventors: Ned M. Smith, Ben McCahill, Francesc Guim Bernat, Felipe Pastor Beneyto, Karthik Kumar, Timothy Verrall
  • Publication number: 20190138361
    Abstract: Technologies for providing dynamic selection of edge and local accelerator resources include a device having circuitry to identify a function of an application to be accelerated, determine one or more properties of an accelerator resource available at the edge of a network where the device is located, and determine one or more properties of an accelerator resource available in the device. Additionally, the circuitry is to determine a set of acceleration selection factors associated with the function, wherein the acceleration factors are indicative of one or more objectives to be satisfied in the acceleration of the function. Further, the circuitry is to select, as a function of the one or more properties of the accelerator resource available at the edge, the one or more properties of the accelerator resource available in the device, and the acceleration selection factors, one or more of the accelerator resources to accelerate the function.
    Type: Application
    Filed: December 28, 2018
    Publication date: May 9, 2019
    Inventors: Francesc Guim Bernat, Karthik Kumar, Ned Smith, Thomas Willhalm, Timothy Verrall
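One way to read the selection step above is as a weighted score over each resource's properties, with the selection factors supplying the weights. The property names and scoring rule below are assumptions for illustration only:

```python
def select_accelerator(resources, factors):
    """Pick the accelerator whose properties best match the selection factors.

    resources: {name: {"latency_ms": float, "throughput": float}}
    factors:   objective weights; higher throughput and lower latency score better.
    """
    def score(props):
        return (factors["throughput_weight"] * props["throughput"]
                - factors["latency_weight"] * props["latency_ms"])
    return max(resources, key=lambda name: score(resources[name]))


resources = {
    "edge_fpga": {"latency_ms": 5.0, "throughput": 100.0},
    "local_gpu": {"latency_ms": 0.5, "throughput": 40.0},
}
# This workload values throughput despite the edge round-trip latency.
choice = select_accelerator(resources, {"latency_weight": 10.0,
                                        "throughput_weight": 1.0})
```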
  • Publication number: 20190141120
    Abstract: Technologies for providing selective offload of execution of an application to the edge include a device that includes circuitry to determine whether a section of an application to be executed by the device is available to be offloaded. Additionally, the circuitry is to determine one or more characteristics of an edge resource available to execute the section. Further, the circuitry is to determine, as a function of the one or more characteristics and a target performance objective associated with the section, whether to offload the section to the edge resource and offload, in response to a determination to offload the section, the section to the edge resource.
    Type: Application
    Filed: December 28, 2018
    Publication date: May 9, 2019
    Inventors: Francesc Guim Bernat, Ned Smith, Thomas Willhalm, Karthik Kumar, Timothy Verrall
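The offload decision described above, a function of the edge resource's characteristics and the section's target performance objective, reduces to a predicate. The characteristic names below are assumptions chosen for the demo, not terms from the publication:

```python
def should_offload(section, edge_resource):
    """Offload only if the section allows it and the edge meets its objective."""
    if not section["offloadable"]:
        return False
    meets_latency = edge_resource["latency_ms"] <= section["target_latency_ms"]
    meets_compute = edge_resource["flops"] >= section["required_flops"]
    return meets_latency and meets_compute


section = {"offloadable": True, "target_latency_ms": 10.0, "required_flops": 1e9}
edge = {"latency_ms": 4.0, "flops": 5e9}
decision = should_offload(section, edge)  # edge satisfies both objectives
```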
  • Publication number: 20190138647
    Abstract: A method, computer system, and computer program product for a conversational system driven by a semantic network with a library of templated query operators are provided. The embodiment may include loading one or more operators for the conversational system to the library of templated query operators. The embodiment may also include receiving a query statement from a user. The embodiment may further include identifying an operator from the library to process the received query. The embodiment may also include identifying one or more input terms for the identified operator within the received query. The embodiment may further include generating one or more output terms based on processing the one or more identified input terms using the identified operator. The embodiment may also include generating a natural language response to the received query based on the one or more generated output terms.
    Type: Application
    Filed: November 8, 2017
    Publication date: May 9, 2019
    Inventors: Pratyush Kumar, Karthik Sankaranarayanan
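The pipeline in this abstract, load operators into a library, match a query to an operator, extract input terms, compute outputs, and render a natural-language response, can be sketched as follows. The keyword matching and response templates are simplified stand-ins for the patented semantic-network machinery:

```python
import re

library = {}  # keyword -> (operator function, response template)

def load_operator(keyword, fn, template):
    """Load one templated query operator into the library."""
    library[keyword] = (fn, template)

def answer(query):
    """Match the query to an operator, run it, and build a response."""
    for keyword, (fn, template) in library.items():
        if keyword in query:
            terms = [int(t) for t in re.findall(r"\d+", query)]  # input terms
            return template.format(fn(terms))                    # output term
    return "Sorry, no operator matches that query."


load_operator("sum", sum, "The total is {}.")
response = answer("What is the sum of 3, 4 and 5?")
```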
  • Publication number: 20190140933
    Abstract: A device of a service coordinating entity includes communications circuitry to communicate with a plurality of access networks via a corresponding plurality of network function virtualization (NFV) instances, processing circuitry, and a memory device. The processing circuitry is to perform operations to monitor stored performance metrics for the plurality of NFV instances. Each of the NFV instances is instantiated by a corresponding scheduler of a plurality of schedulers on a virtualization infrastructure of the service coordinating entity. A plurality of stored threshold metrics is retrieved, indicating a desired level for each of the plurality of performance metrics. A threshold condition is detected for at least one of the performance metrics for an NFV instance of the plurality of NFV instances, based on the retrieved plurality of threshold metrics. A hardware resource used by the NFV instance to communicate with an access network is adjusted based on the detected threshold condition.
    Type: Application
    Filed: December 28, 2018
    Publication date: May 9, 2019
    Inventors: Francesc Guim Bernat, Andrew J. Herdrich, Karthik Kumar, Felipe Pastor Beneyto, Edwin Verplanke
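The monitoring loop above, compare each NFV instance's performance metrics against stored threshold metrics, then adjust a hardware resource when a threshold condition is detected, can be sketched like this. The metric names and the "add a queue" adjustment are illustrative assumptions:

```python
# Stored threshold metrics: desired level for each performance metric.
thresholds = {"packet_latency_us": 50.0, "drop_rate": 0.01}

def check_instance(metrics):
    """Return the names of metrics whose desired level is exceeded."""
    return [name for name, limit in thresholds.items()
            if metrics.get(name, 0.0) > limit]

def adjust_resources(instance, violated):
    """Toy adjustment: grant one extra hardware queue per violated metric."""
    instance["hw_queues"] += len(violated)
    return instance


nfv_instance = {"hw_queues": 2}
violated = check_instance({"packet_latency_us": 80.0, "drop_rate": 0.005})
nfv_instance = adjust_resources(nfv_instance, violated)
```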
  • Publication number: 20190139184
    Abstract: Methods, apparatuses and systems may provide for technology that processes portions of video frames in different hardware pipes. More particularly, implementations relate to technology that provides splitting of a frame into columns or rows, processing each of these in different hardware pipes, and managing the dependency in hardware. Such operations may achieve this support while at the same time providing enough flexibility to use these pipes independently when higher performance is not required.
    Type: Application
    Filed: December 28, 2018
    Publication date: May 9, 2019
    Inventors: Atthar Mohammed, Hiu-Fai Chan, Hyeong-Seok Ha, Jong Dae Oh, Karthik Nagasubramanian, Ping Liu, Samuel Wong, Satya Yedidi, Sumit Mohan, Vidhya Krishnan, Pavan Kumar Saranu, Ashokanand N
  • Publication number: 20190138534
    Abstract: Technologies for providing dynamic persistence of data in edge computing include a device including circuitry configured to determine multiple different logical domains of data storage resources for use in storing data from a client compute device at an edge of a network. Each logical domain has a different set of characteristics. The circuitry is also configured to receive, from the client compute device, a request to persist data. The request includes a target persistence objective indicative of an objective to be satisfied in the storage of the data. Additionally, the circuitry is configured to select, as a function of the characteristics of the logical domains and the target persistence objective, a logical domain into which to persist the data and provide the data to the selected logical domain.
    Type: Application
    Filed: December 28, 2018
    Publication date: May 9, 2019
    Inventors: Francesc Guim Bernat, Karthik Kumar, Suraj Prabhakaran, Ramanathan Sethuraman, Timothy Verrall
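The domain-selection step described above can be sketched as filtering domains by the target persistence objective and breaking ties on cost. The domain names, characteristic fields, and objective fields are all made up for illustration:

```python
# Logical domains of storage resources, each with a different characteristic set.
domains = {
    "replicated_ssd": {"durability": 0.9999,   "write_latency_ms": 2.0,  "cost": 3},
    "local_nvme":     {"durability": 0.99,     "write_latency_ms": 0.2,  "cost": 1},
    "object_store":   {"durability": 0.999999, "write_latency_ms": 30.0, "cost": 2},
}

def select_domain(objective):
    """Pick the cheapest domain satisfying the target persistence objective."""
    candidates = [
        (props["cost"], name) for name, props in domains.items()
        if props["durability"] >= objective["min_durability"]
        and props["write_latency_ms"] <= objective["max_write_latency_ms"]
    ]
    return min(candidates)[1] if candidates else None


chosen = select_domain({"min_durability": 0.999, "max_write_latency_ms": 5.0})
```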
  • Publication number: 20190129813
    Abstract: A system and method for handling one or more dependency services hosted by one or more dependency servers for an upstream service hosted by an administrative server in a distributed computer architecture is provided. The present invention provides for identifying any abnormality in the behavior of the dependency services on the basis of metric values associated with service-parameters of said dependency services. Further, the resiliency services are enabled in order to handle one or more faltering dependency services based on the faulty metric values associated with the service-parameters. Yet further, the one or more faltering dependency services are continuously monitored, and one or more resiliency services are withdrawn once the fault in said dependency services is resolved. Yet further, the present invention provides a conversational bot interface for managing the administrative server and associated dependency services.
    Type: Application
    Filed: January 12, 2018
    Publication date: May 2, 2019
    Inventors: Senthil Ramaswamy Sankarasubramanian, Deepak Panneerselvam, Karthik Kumar
  • Patent number: 10277096
    Abstract: A component for an electrical machine is disclosed. The component is a stator and/or a rotor. The component includes a core, a magnetic field-generating component, and an oscillating heat pipe assembly. The core includes a plurality of slots and the magnetic field-generating component is disposed in at least one slot of the plurality of slots. The oscillating heat pipe assembly is disposed in the core and the at least one slot of the plurality of slots. The oscillating heat pipe assembly is in contact with the core and the magnetic field-generating component. The oscillating heat pipe assembly includes a dielectric material, and where the oscillating heat pipe assembly has an in-plane thermal conductivity higher than a through-plane thermal conductivity.
    Type: Grant
    Filed: March 15, 2016
    Date of Patent: April 30, 2019
    Assignee: General Electric Company
    Inventors: Karthik Kumar Bodla, Joo Han Kim, Yogen Vishwas Utturkar
  • Publication number: 20190121564
    Abstract: Examples relate to an approximative memory deduplication method, a controller apparatus or controller device for a memory or storage controller, a memory or storage controller, a computer system and to a computer program. The approximative memory deduplication method comprises determining a hash value of a data block. The hash value is based on a user-defined approximative hashing function. The approximative memory deduplication method comprises storing a quantized version of the data block based on the hash value using a memory or storage device of the computer system.
    Type: Application
    Filed: December 17, 2018
    Publication date: April 25, 2019
    Inventors: Francesc GUIM BERNAT, Karthik KUMAR, Mustafa HAJEER, Thomas Willhalm, Amin FIROOZSHAHIAN, Chandan EGBERT
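The core idea above, quantize a block with a user-defined approximative function, hash the quantized form, and store only one copy per hash, can be demonstrated in a few lines. The specific quantization (masking the low bits of each byte) is an assumption for the demo, standing in for the user-defined approximative hashing function:

```python
import hashlib

store = {}  # hash value -> quantized version of the data block

def quantize(block, keep_bits=4):
    """Drop the low (8 - keep_bits) bits of every byte so nearby values collide."""
    mask = (0xFF << (8 - keep_bits)) & 0xFF
    return bytes(b & mask for b in block)

def dedup_store(block):
    """Store the quantized block keyed by its hash; near-duplicates deduplicate."""
    q = quantize(block)
    digest = hashlib.sha256(q).hexdigest()
    store.setdefault(digest, q)
    return digest


h1 = dedup_store(bytes([0x11, 0x22, 0x33]))
h2 = dedup_store(bytes([0x12, 0x21, 0x34]))  # close values quantize identically
```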
  • Patent number: 10255305
    Abstract: Technologies for object-based data consistency in a fabric architecture include a network switch communicatively coupled to a plurality of computing nodes. The network switch is configured to receive an object read request that includes an object identifier and a data consistency threshold from one of the computing nodes. The network switch is additionally configured to perform a lookup for a value of an object in the cache memory as a function of the object identifier and determine whether a condition of the value of the object violates the data consistency threshold in response to a determination that the lookup successfully returned the value of the object. The network switch is further configured to transmit the value of the object to the computing node in response to a determination that the condition of the value of the object does not violate the data consistency threshold. Other embodiments are described herein.
    Type: Grant
    Filed: September 9, 2016
    Date of Patent: April 9, 2019
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Thomas Willhalm, Karthik Kumar, Raj K. Ramanujan, Daniel Rivas Barragan
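The switch-side check above can be sketched as a cache lookup gated by the request's consistency threshold. Using the cached value's age as the "condition" being checked is an assumption for this demo:

```python
import time

# Switch-side object cache: object id -> value and when it was written.
cache = {"obj-1": {"value": 42, "written_at": time.time() - 3.0}}

def read_object(object_id, max_age_s):
    """Return the cached value only if it satisfies the consistency threshold."""
    entry = cache.get(object_id)
    if entry is None:
        return None                      # cache miss: would fetch from owner node
    if time.time() - entry["written_at"] > max_age_s:
        return None                      # condition violates the threshold
    return entry["value"]
```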
  • Publication number: 20190102315
    Abstract: Various embodiments are generally directed to an apparatus, method and other techniques to receive a request from a core, the request associated with a memory operation to read or write data, and the request comprising a first address and an offset, the first address to identify a memory location of a memory. Embodiments include performing a first iteration of a memory indirection operation comprising reading the memory at the memory location to determine a second address based on the first address, and determining a memory resource based on the second address and the offset, the memory resource to perform the memory operation for the computing resource or perform a second iteration of the memory indirection operation.
    Type: Application
    Filed: September 29, 2017
    Publication date: April 4, 2019
    Applicant: INTEL CORPORATION
    Inventors: FRANCESC GUIM BERNAT, KARTHIK KUMAR, MARK SCHMISSEUR, THOMAS WILLHALM
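The indirection operation above, read the first address to obtain a second address, apply the offset, and iterate if the target is itself another pointer, is classic pointer chasing. The flat-dictionary "memory" and the pointer/value tagging below are illustrative assumptions:

```python
# Memory modeled as address -> (kind, payload); "ptr" entries hold a second
# address, "val" entries hold data.
memory = {
    0x100: ("ptr", 0x200),   # first address holds a pointer to 0x200
    0x208: ("val", 7),       # 0x200 + offset 8 holds the data
}

def indirect_read(address, offset):
    """Follow pointers, applying the offset at each hop, until data is found."""
    kind, payload = memory[address]
    while kind == "ptr":
        kind, payload = memory[payload + offset]  # next iteration of indirection
    return payload


result = indirect_read(0x100, 8)  # 0x100 -> pointer to 0x200 -> read 0x208
```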
  • Publication number: 20190102147
    Abstract: Examples may include a data center in which memory sleds are provided with logic to filter data stored on the memory sled responsive to filtering requests from a compute sled. Memory sleds may include memory filtering logic arranged to receive filtering requests, filter data stored on the memory sled, and provide filtering results to the requesting entity. Additionally, a data center is provided in which fabric interconnect protocols in which sleds in the data center communicate is provided with filtering instructions such that compute sleds can request filtering on memory sleds.
    Type: Application
    Filed: September 29, 2017
    Publication date: April 4, 2019
    Applicant: INTEL CORPORATION
    Inventors: Karthik Kumar, Francesc Guim Bernat, Thomas Willhalm, Mark A. Schmisseur
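The near-data pattern above, a compute sled sends a filtering request and the memory sled applies the predicate to its locally stored data, returning only matches, can be sketched as follows. The request encoding (a "greater than threshold" op) is a made-up example of a filtering instruction:

```python
class MemorySled:
    """Holds data and filters it in place on behalf of compute sleds."""
    def __init__(self, data):
        self.data = data

    def handle_filter_request(self, request):
        # request: {"op": "gt", "threshold": n} -- the filter runs at the data,
        # so only matching results cross the fabric back to the requester.
        if request["op"] == "gt":
            return [x for x in self.data if x > request["threshold"]]
        raise ValueError("unsupported filter op")


sled = MemorySled([3, 18, 7, 42, 11])
matches = sled.handle_filter_request({"op": "gt", "threshold": 10})
```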
  • Publication number: 20190102403
    Abstract: Techniques and apparatus for providing access to data in a plurality of storage formats are described. In one embodiment, for example, an apparatus may include logic, at least a portion of which is comprised in hardware coupled to at least one memory, to determine a first storage format of a database operation on a database having a second storage format, and perform a format conversion process responsive to the first storage format being different than the second storage format, the format conversion process to translate a virtual address of the database operation to a physical address, and determine a converted physical address comprising a memory address according to the first storage format. Other embodiments are described and claimed.
    Type: Application
    Filed: September 29, 2017
    Publication date: April 4, 2019
    Applicant: INTEL CORPORATION
    Inventors: Mark A. Schmisseur, Thomas Willhalm, Francesc Guim Bernat, Karthik Kumar
  • Publication number: 20190102090
    Abstract: A memory controller method and apparatus, which includes a modification of at least one of a first timing scheme or a second timing scheme based on information about one or more data requests to be included in at least one of a first queue scheduler or a second queue scheduler, the first timing scheme indicating when one or more requests in the first queue scheduler are to be issued to the first memory set via a first memory set interface and over a channel, the second timing scheme indicating when one or more requests in the second queue scheduler are to be issued to the second memory set via a second memory set interface and over the channel. Furthermore, an issuance of a request to at least one of the first memory set in accordance with the modified first timing scheme or the second memory set in accordance with the modified second timing scheme may be included.
    Type: Application
    Filed: September 29, 2017
    Publication date: April 4, 2019
    Inventors: Francesc GUIM BERNAT, Karthik KUMAR, Thomas WILLHALM, Mark SCHMISSEUR