Operation Patents (Class 712/30)
  • Patent number: 11797448
    Abstract: A computer-implemented method, according to one embodiment, includes: in response to a determination that an available capacity of one or more buffers in a primary cache is not outside a predetermined range, using the one or more buffers in the primary cache to satisfy all incoming I/O requests. In response to a determination that the available capacity of the one or more buffers in the primary cache is outside the predetermined range, one or more buffers in a secondary cache are allocated, and the one or more buffers in the secondary cache are used to satisfy at least some of the incoming I/O requests.
    Type: Grant
    Filed: July 5, 2022
    Date of Patent: October 24, 2023
    Assignee: International Business Machines Corporation
    Inventors: Beth Ann Peterson, Kevin J. Ash, Lokesh Mohan Gupta, Warren Keith Stanley, Roger G. Hathorn
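The two-tier buffer selection described in this abstract amounts to a threshold check. A minimal sketch, assuming "available capacity" is a free-buffer count and that "outside the predetermined range" means falling below a low-water mark (the function name and threshold semantics are illustrative, not from the patent):

```python
def select_cache_tier(primary_free_buffers: int, low_water_mark: int) -> str:
    """Pick the cache tier that should satisfy an incoming I/O request.

    While the primary cache's free-buffer count stays above the low-water
    mark, all requests are satisfied from the primary cache; once capacity
    falls outside that range, secondary-cache buffers are allocated and
    used for at least some requests.
    """
    if primary_free_buffers > low_water_mark:
        return "primary"
    return "secondary"
```
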
  • Patent number: 11768939
    Abstract: An embodiment includes activating, responsive to receiving an update notification, an update mode of a mobile device, wherein the activating of the update mode includes disabling a primary communication interface and enabling a secondary communication interface, and wherein the update notification includes notification of a software update available for the mobile device. The embodiment also includes initiating execution of the software update on the mobile device while the mobile device remains in the update mode. The embodiment also includes deactivating, responsive to completing the software update, the update mode of the mobile device, wherein the deactivating of the update mode includes enabling the primary communication interface and disabling the secondary communication interface.
    Type: Grant
    Filed: March 25, 2021
    Date of Patent: September 26, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Manish Anand Bhide, Madan K Chukka, Phani Kumar V. U. Ayyagari, PurnaChandra Rao Jasti
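The update-mode state machine in this abstract toggles two communication interfaces in lockstep with the update lifecycle. A minimal sketch under the assumption that interface state can be modeled as two booleans (the class and function names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class MobileDevice:
    primary_enabled: bool = True
    secondary_enabled: bool = False

def enter_update_mode(dev: MobileDevice) -> None:
    """Activate update mode: disable primary, enable secondary interface."""
    dev.primary_enabled = False
    dev.secondary_enabled = True

def exit_update_mode(dev: MobileDevice) -> None:
    """Deactivate update mode: restore primary, disable secondary interface."""
    dev.primary_enabled = True
    dev.secondary_enabled = False
```
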
  • Patent number: 11765041
    Abstract: Methods and systems related to construction and implementation of high radix topologies are disclosed. The nodes of the network topology are divided into a number of groups. Intra-group connections are constructed by connecting the nodes of each group according to a first complementary base graph. Inter-group connections are constructed based on a second complementary base graph and a plurality of permutation matrices. Each permutation matrix represents a pattern for selecting source group and destination group for each inter-group connection. One permutation matrix is randomly assigned to each edge of the second complementary base graph. An inter-group connection is constructed by identifying a source node and a destination node corresponding to a selected edge of the second complementary base graph, and identifying a source group and a destination group according to the permutation matrix assigned to the selected edge.
    Type: Grant
    Filed: September 15, 2022
    Date of Patent: September 19, 2023
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Ashkan Sobhani, Amir Baniamerian, Xingjun Chu
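The inter-group construction above can be sketched by representing each permutation matrix as a shuffled index list assigned to an edge of the second complementary base graph: the edge picks the source and destination nodes, and the permutation picks the destination group for each source group. Everything below (node labels, seeding, the list-based permutation encoding) is an illustrative assumption:

```python
import random

def build_intergroup_edges(base_edges, num_groups, seed=0):
    """For each edge of the complementary base graph, assign a random
    permutation of groups and emit one inter-group connection per group."""
    rng = random.Random(seed)
    connections = []
    for (src_node, dst_node) in base_edges:
        perm = list(range(num_groups))
        rng.shuffle(perm)  # the permutation matrix, encoded as a mapping
        for src_group in range(num_groups):
            dst_group = perm[src_group]
            connections.append(((src_group, src_node), (dst_group, dst_node)))
    return connections
```

Because each edge's mapping is a permutation, every group appears exactly once as a source and once as a destination per base-graph edge.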
  • Patent number: 11748074
    Abstract: Certain example embodiments relate to techniques for use with mainframe computing systems that include both general-purpose processors (e.g., CPs) and special-purpose processors that can be used to perform only certain limited operations (e.g., zIIPs). Certain example embodiments automatically help these special-purpose processors perform user exits and other routines thereon, rather than requiring those operations to be performed on general-purpose processors. This approach advantageously can improve system performance when executing programs including these user exits and other routines, and in a preferred embodiment, it can be accomplished in connection with a suitably-configured user exit daemon. In a preferred embodiment, the daemon and its clients can use a user exit property table or the like to communicate with one another about the state of each user exit or other routine that has been analyzed, classified, and possibly modified.
    Type: Grant
    Filed: May 28, 2021
    Date of Patent: September 5, 2023
    Assignee: SOFTWARE AG
    Inventors: Uwe Henker, Arno Zude, Dieter Kessler
  • Patent number: 11720521
    Abstract: An accelerator system can include one or more clusters of eight processing units. The processing units can include seven communication ports. Each cluster of eight processing units can be organized into two subsets of four processing units. Each processing unit can be coupled to each of the other processing units in the same subset by a respective set of two bi-directional communication links. Each processing unit can also be coupled to a corresponding processing unit in the other subset by a respective single bi-directional communication link. Input data can be divided into one or more groups of four subsets of data. Each processing unit can be configured to sum corresponding subsets of the input data received on the two bi-directional communication links from the other processing units in the same subset with the input data of the respective processing unit to generate a respective set of intermediate data.
    Type: Grant
    Filed: March 29, 2021
    Date of Patent: August 8, 2023
    Assignee: Alibaba Singapore Holding Private Limited
    Inventor: Liang Han
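The per-unit reduce step described above — summing the data subsets received over the two intra-subset links with the unit's own data — can be sketched as follows (the dict-of-peers representation is an assumption; the patent describes hardware links, not message passing):

```python
def intermediate_sum(own_subset, received_subsets):
    """One processing unit's reduce step: sum its own data subset with the
    corresponding subsets received from the other units in its subset."""
    result = list(own_subset)
    for chunk in received_subsets.values():
        result = [r + c for r, c in zip(result, chunk)]
    return result
```
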
  • Patent number: 11714647
    Abstract: A system includes a memory-mapped register (MMR) associated with a claim logic circuit, a claim field for the MMR, a first firewall for a first address region, and a second firewall for a second address region. The MMR is associated with an address in the first address region and an address in the second address region. The first firewall is configured to pass a first write request for an address in the first address region to the claim logic circuit associated with the MMR. The claim logic circuit associated with the MMR is configured to grant or deny the first write request based on the claim field for the MMR. Further, the second firewall is configured to receive a second write request for an address in the second address region and grant or deny the second write request based on a permission level associated with the second write request.
    Type: Grant
    Filed: November 16, 2021
    Date of Patent: August 1, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Eric Robert Hansen, Krishnan Sridhar
  • Patent number: 11709835
    Abstract: A method includes determining, in accordance with a first ordering, a plurality of read requests for a memory device. The plurality of read requests are added to a memory device queue for the memory device in accordance with the first ordering. The plurality of read requests in the memory device queue are processed, in accordance with a second ordering that is different from the first ordering, to determine read data for each of the plurality of read requests. The read data for each of the plurality of read requests is added to one of a set of ordered positions, based on the first ordering, of a ring buffer as each of the plurality of read requests is processed. The read data of a subset of the plurality of read requests is submitted based on adding the read data to a first ordered position of the set of ordered positions of the ring buffer.
    Type: Grant
    Filed: January 6, 2022
    Date of Patent: July 25, 2023
    Assignee: Ocient Holdings LLC
    Inventor: George Kondiles
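The ring-buffer mechanism above is the classic pattern of completing requests out of order but submitting results in order. A minimal sketch, assuming completions arrive as (original index, data) pairs (the function shape is illustrative):

```python
def submit_in_order(num_requests, completions):
    """Slot out-of-order completions into their first-ordering positions in
    a ring buffer; submit contiguously from the head as positions fill."""
    ring = [None] * num_requests
    head = 0
    submitted = []
    for index, data in completions:
        ring[index] = data
        # Drain every filled position at the head of the ring in order.
        while head < num_requests and ring[head] is not None:
            submitted.append(ring[head])
            head += 1
    return submitted
```
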
  • Patent number: 11704248
    Abstract: A processor core associated with a first cache initiates entry into a powered-down state. In response, information representing a set of entries of the first cache are stored in a retention region that receives a retention voltage while the processor core is in a powered-down state. Information indicating one or more invalidated entries of the set of entries is also stored in the retention region. In response to the processor core initiating exit from the powered-down state, entries of the first cache are restored using the stored information representing the entries and the stored information indicating the at least one invalidated entry.
    Type: Grant
    Filed: November 6, 2020
    Date of Patent: July 18, 2023
    Assignee: Advanced Micro Devices, Inc.
    Inventors: William L. Walker, Michael L. Golden, Marius Evers
  • Patent number: 11689466
    Abstract: Methods, systems, and apparatus are described for throttling a distributed processing system. In one aspect, a method includes identifying records being processed by a distributed processing system that performs agent processes, each of the records including a corresponding timestamp; determining, based on timestamps of the records that have been processed by a first agent process, a first agent progress; identifying a dependent agent process performed by the distributed processing system, wherein the dependent agent process processes only records that have been processed by the first agent process; determining, based on timestamps of records that have been processed by the dependent agent process, a dependent agent progress; and throttling performance of the first process based on the first agent progress and the dependent agent progress.
    Type: Grant
    Filed: November 4, 2021
    Date of Patent: June 27, 2023
    Assignee: Google Inc.
    Inventors: Samuel Green McVeety, Vyacheslav Alekseyevich Chernyak
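The throttling decision above compares the progress of the upstream agent (measured by the timestamps of records it has processed) against the progress of the dependent agent. A minimal sketch, where the lag threshold is an assumed tuning parameter:

```python
def should_throttle(first_agent_progress, dependent_agent_progress, max_lag):
    """Throttle the first agent process when its progress timestamp runs
    more than max_lag ahead of the dependent agent's progress timestamp."""
    return first_agent_progress - dependent_agent_progress > max_lag
```
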
  • Patent number: 11663043
    Abstract: A system comprises a processor coupled to a plurality of memory units. Each of the plurality of memory units includes a request processing unit and a plurality of memory banks. The processor includes a plurality of processing elements and a communication network communicatively connecting the plurality of processing elements to the plurality of memory units. At least a first processing element of the plurality of processing elements includes a control logic unit and a matrix compute engine. The control logic unit is configured to access data from the plurality of memory units using a dynamically programmable distribution scheme.
    Type: Grant
    Filed: December 2, 2019
    Date of Patent: May 30, 2023
    Assignee: Meta Platforms, Inc.
    Inventors: Abdulkadir Utku Diril, Olivia Wu, Krishnakumar Narayanan Nair, Anup Ramesh Kadkol, Aravind Kalaiah, Pankaj Kansal
  • Patent number: 11651470
    Abstract: Example implementations relate to scheduling of jobs for a plurality of graphics processing units (GPUs) providing concurrent processing by a plurality of virtual GPUs (vGPUs). According to an example, a computing system including one or more GPUs receives a request to schedule a new job to be executed by the computing system. The new job is allocated to one or more vGPUs, and allocations of existing jobs to one or more vGPUs are updated. The operational cost of operating the one or more GPUs and the migration cost of allocating the new job and updating the allocations of the existing jobs on the one or more vGPUs are minimized. The new job and the existing jobs are processed by the one or more GPUs in the computing system.
    Type: Grant
    Filed: June 28, 2021
    Date of Patent: May 16, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Diman Zad Tootaghaj, Junguk Cho, Puneet Sharma
  • Patent number: 11621991
    Abstract: Systems and methods for adaptive content streaming based on bandwidth are disclosed. According to one example method, content is requested for delivery. An indication of complexity of a plurality of media content items associated with the content is received. Based on the indication of complexity and an available bandwidth at the user device, at least one of the plurality media content items is selected and retrieved from the media server.
    Type: Grant
    Filed: June 29, 2021
    Date of Patent: April 4, 2023
    Assignee: ROVI GUIDES, INC.
    Inventors: Padmassri Chandrashekar, Daina Emmanuel, Reda Harb
  • Patent number: 11580357
    Abstract: A memory for storing a directed acyclic graph (DAG) for access by an application being executed by one or more processors of a computing device is described. The DAG includes a plurality of nodes, wherein each node represents a data point within the DAG. The DAG further includes a plurality of directional edges. Each directional edge connects a pair of the nodes and represents a covering-covered relationship between two nodes. Each node comprises a subgraph consisting of the respective node and all other nodes reachable via a covering path that comprises a sequence of covering and covered nodes. Each node comprises a set of node parameters including at least an identifier and an address range. Each node and the legal address specify a cover path. Utilizing DAG Path Addressing with bindings the memory can be organized to store a generalization hierarchy of logical propositions.
    Type: Grant
    Filed: September 22, 2022
    Date of Patent: February 14, 2023
    Assignee: Practical Posets LLC
    Inventor: John W. Esch
  • Patent number: 11573801
    Abstract: A processor includes a register file and control logic that detects multiple different sets of sequential zero bits of a register in the register file, wherein each of the multiple different sets has a bit length that corresponds to a partial instruction width, and operates at a first partial instruction width or a second partial instruction width with the register file depending on the number of sets of zero bits detected in the register. In certain examples, the control logic causes operating at the first instruction width that avoids merging of a first bit length of data in the register and operating at the second instruction width that avoids merging of a second bit length of data in the register. In some examples, a register rename map table includes multiple zero bits that identify the detected multiple different sets of bits of sequential zeros.
    Type: Grant
    Filed: September 29, 2021
    Date of Patent: February 7, 2023
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Eric Dixon, Erik Swanson, Theodore Carlson, Ruchir Dalal, Michael Estlick
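The zero-detection step above can be illustrated in software: scan a register from the top in fixed-width chunks and count how many consecutive chunks are all zero, which determines the partial width the datapath can safely use. The 64-bit register and 16-bit chunk widths below are assumptions for illustration:

```python
def upper_zero_sets(value: int, full_width: int = 64, part_width: int = 16) -> int:
    """Count contiguous all-zero chunks at the top of a register.

    Each all-zero chunk of part_width bits means the register's live data
    fits in a narrower partial width, so merging of upper bits is avoided.
    """
    count = 0
    mask = (1 << part_width) - 1
    for hi in range(full_width - part_width, -1, -part_width):
        if (value >> hi) & mask:
            break
        count += 1
    return count
```
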
  • Patent number: 11550651
    Abstract: There is provided execution circuitry. Storage circuitry retains a stored state of the execution circuitry. Operation receiving circuitry receives, from issue circuitry, an operation signal corresponding to an operation to be performed that accesses the stored state of the execution circuitry from the storage circuitry. Functional circuitry seeks to perform the operation in response to the operation signal by accessing the stored state of the execution circuitry from the storage circuitry. Delete request receiving circuitry receives a deletion signal and in response to the deletion signal, deletes the stored state of the execution circuitry from the storage circuitry. State loss indicating circuitry responds to the operation signal when the stored state of the execution circuitry is not present and is required for the operation by indicating an error. In addition, there is provided a data processing apparatus comprising issue circuitry to issue an operation to execution circuitry.
    Type: Grant
    Filed: November 18, 2020
    Date of Patent: January 10, 2023
    Assignee: Arm Limited
    Inventors: Alasdair Grant, Robert James Catherall
  • Patent number: 11532066
    Abstract: A graphics pipeline reduces the number of tessellation factors written to and read from a graphics memory. A hull shader stage of the graphics pipeline detects whether at least a threshold percentage of the tessellation factors for a thread group of patches are the same and, in some embodiments, whether at least the threshold percentage of the tessellation factors for a thread group of patches have a same value that either indicates that the plurality of patches are to be culled or that the plurality of patches are to be passed to a tessellator stage of the graphics pipeline.
    Type: Grant
    Filed: May 12, 2021
    Date of Patent: December 20, 2022
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Mangesh P. Nijasure, Tad Litwiller, Todd Martin, Nishank Pathak
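The hull-shader check above reduces to: if enough of a thread group's tessellation factors share one value, write a single representative factor instead of the full set, and cull outright when that shared value indicates culling. A sketch where the 90% threshold and the convention that a factor of 0 means "cull" are assumptions:

```python
from collections import Counter

def tess_factor_decision(factors, threshold=0.9):
    """Decide whether a thread group's tessellation factors can be
    collapsed: "cull" or "pass" when enough factors agree, otherwise
    write the full set to graphics memory."""
    value, count = Counter(factors).most_common(1)[0]
    if count / len(factors) >= threshold:
        return "cull" if value == 0 else "pass"
    return "write_all"
```
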
  • Patent number: 11526432
    Abstract: There is provided a parallel processing device which allows consecutive parallel data processing to be performed. The parallel processing device includes: a plurality of addition units configured to selectively receive input data among output data from a plurality of input units according to configuration values for each addition unit of the plurality of addition units, and to perform an addition operation on the input data in parallel; and a plurality of delay units configured to delay input data for one cycle. Each delay unit of the plurality of delay units delays output data from each addition unit of the plurality of addition units and outputs the delayed output data to each input unit of the plurality of input units.
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: December 13, 2022
    Assignee: MORUMI Co., Ltd.
    Inventor: Tae Hyoung Kim
  • Patent number: 11497039
    Abstract: A transmission device includes: a communication circuit configured to wirelessly communicate with a reception device, by using a plurality of wireless services including a first wireless service having a first priority and a second wireless service having a second priority that is a priority lower than the first priority; and a processing circuit configured to perform, in accordance with a first information element, allocating of an uplink radio resource to transmission data of the first wireless service, the allocating of the uplink radio resource being performed in a situation, the situation being a situation that a medium access control—protocol data unit (MAC-PDU) has been generated or can be generated in response to allocating the uplink radio resource to a transmission data of the second wireless service, the first information element indicating a value configured to control logical channel prioritization (LCP) procedure.
    Type: Grant
    Filed: August 12, 2020
    Date of Patent: November 8, 2022
    Assignee: FUJITSU LIMITED
    Inventor: Yoshiaki Ohta
  • Patent number: 11442776
    Abstract: Deployment of arrangements of physical computing components coupled over a communication fabric are presented herein. In one example, a method includes receiving execution jobs directed to a computing cluster comprising a pool of computing components coupled to at least a communication fabric. Based on properties of the execution jobs, the method includes determining resource scheduling for handling the execution jobs, the resource scheduling indicating timewise allocations of resources of the computing cluster, and initiating the execution jobs on the computing cluster according to the resource scheduling by at least instructing the communication fabric to compose compute units comprising sets of computing components selected from among the pool of computing components to handle the execution jobs. Responsive to completions of the execution jobs, the compute units are decomposed back into the pool of computing components.
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: September 13, 2022
    Assignee: Liqid Inc.
    Inventor: Josiah Clark
  • Patent number: 11431673
    Abstract: This application relates to the field of mobile communication, and in particular, to a method, an apparatus, and a system for selecting an MEC node. The method includes: receiving a domain name request, initiated by a terminal and forwarded by the UPF, the domain name request comprising at least one of: a domain name, a destination address, or protocol port information; obtaining a corresponding edge-application VIP from the GSLB based on the domain name request; returning a domain name response to the terminal, the domain name response comprising the edge-application VIP; receiving a service request, initiated by the terminal and forwarded by the UPF, a destination address of the service request being the edge-application VIP; and determining a corresponding MEC processing server according to the service request and a preset offloading policy, and offloading the service request to the corresponding MEC processing server.
    Type: Grant
    Filed: July 20, 2021
    Date of Patent: August 30, 2022
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Zhiqiang You, Jiajia Lou
  • Patent number: 11409533
    Abstract: Devices and techniques for pipeline merging in a circuit are described herein. A parallel pipeline result can be obtained for a transaction index of a parallel pipeline. Here, the parallel pipeline is one of several parallel pipelines that share transaction indices. An element in a vector can be marked. The element corresponds to the transaction index. The vector is one of several vectors respectively assigned to the several parallel pipelines. Further, each element in the several vectors corresponds to a possible transaction index, with respective elements between vectors corresponding to the same transaction index. Elements between the several vectors that correspond to the same transaction index can be compared to determine when a transaction is complete. In response to the transaction being complete, the result can be released to an output buffer.
    Type: Grant
    Filed: October 20, 2020
    Date of Patent: August 9, 2022
    Assignee: Micron Technology, Inc.
    Inventor: Michael Grassi
  • Patent number: 11403731
    Abstract: Disclosed is an image upscaling apparatus that includes: multiple convolution layers, each configured to receive an input image or a feature map outputted by a previous convolution layer and extract features to output a feature map; and a multilayer configured to receive a final feature map outputted from the last convolution layer and output an upscaled output image. The multilayer includes: a first partition layer including first filters having a minimum size along the x-axis and y-axis directions and the same size as the final feature map along the z-axis direction; and at least one second partition layer, each including second filters having a size greater than that of the first filters in the x-axis and y-axis directions and the same number and size as the first filters in the z-axis direction, and configured to shuffle features in the x-axis and y-axis directions of the first shuffle map.
    Type: Grant
    Filed: September 23, 2020
    Date of Patent: August 2, 2022
    Assignee: INDUSTRY-ACADEMIC COOPERATION FOUNDATION, YONSEI UNIVERSITY
    Inventors: Seong Ook Jung, Sung Hwan Joo, Su Min Lee
  • Patent number: 11237757
    Abstract: An integrated circuit package includes a memory integrated circuit die and a coprocessor integrated circuit die that is coupled to the memory integrated circuit die. The coprocessor integrated circuit die has a logic sector that is configured to accelerate a function for a host processor. The logic sector generates an intermediate result of a computation performed as part of the function. The intermediate result is transmitted to and stored in the memory integrated circuit die.
    Type: Grant
    Filed: July 10, 2017
    Date of Patent: February 1, 2022
    Assignee: Intel Corporation
    Inventors: Ravi Gutala, Aravind Dasu
  • Patent number: 11216385
    Abstract: A memory management unit (MMU) in an application processor responds to an access request, corresponding to an inspection request, that includes a target context and a target virtual address; the inspection request is for translating the target virtual address to a first target physical address. The MMU includes a context cache, a translation cache, an invalidation queue, and an address translation manager (ATM). The context cache stores contexts and context identifiers of the stored contexts while avoiding duplicating contexts. The translation cache stores a first address, first context identifiers, and second addresses; the first address corresponds to a virtual address, the first context identifiers correspond to a first context, and the second addresses correspond to the first address and the first context. The invalidation queue stores at least one context identifier, of the context identifiers stored in the translation cache, that is to be invalidated. The ATM controls the context cache, the translation cache, and the invalidation queue.
    Type: Grant
    Filed: May 15, 2019
    Date of Patent: January 4, 2022
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Sung-Boem Park, Moinul Syed, Ju-Hee Choi
  • Patent number: 11194778
    Abstract: A database system, computer program product, and method for evaluating aggregates in database systems include hashing aggregation keys on a per-bucket basis and, depending on the number of hashed tuples per bucket, sorting those tuples. Additionally, depending on the number of hashed tuples per bucket, the bucket is kept without change. Moreover, depending on the number of hashed tuples per bucket, a secondary hash table is maintained for a particular bucket, tuples are mapped to it, and aggregation is performed while mapping.
    Type: Grant
    Filed: December 18, 2015
    Date of Patent: December 7, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Rajesh Ramkrishna Bordawekar, Vincent Kulandaisamy, Oded Shmueli
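The per-bucket strategy above — sort small buckets, give large buckets a secondary hash table that aggregates while mapping — can be sketched for a sum aggregate. The bucket count and size cutoff are assumed tuning parameters:

```python
def hash_aggregate(tuples, num_buckets=8, sort_cutoff=4):
    """Sum values per key using per-bucket strategies: a sort-based path
    for small buckets, a secondary hash table for larger ones."""
    buckets = [[] for _ in range(num_buckets)]
    for key, value in tuples:
        buckets[hash(key) % num_buckets].append((key, value))

    result = {}
    for bucket in buckets:
        if len(bucket) <= sort_cutoff:
            # Small bucket: sort, then aggregate in a single ordered scan.
            for key, value in sorted(bucket):
                result[key] = result.get(key, 0) + value
        else:
            # Large bucket: secondary hash table, aggregating as we map.
            secondary = {}
            for key, value in bucket:
                secondary[key] = secondary.get(key, 0) + value
            result.update(secondary)  # keys are disjoint across buckets
    return result
```
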
  • Patent number: 11188303
    Abstract: A processor system comprises one or more logic units configured to receive a processor instruction identifying a first floating point number to be multiplied with a second floating point number. The floating point numbers are each decomposed into a group of a plurality of component numbers, wherein a number of bits used to represent each floating point number is greater than a number of bits used to represent any component number in each group of the plurality of component numbers. The component numbers of the first group are multiplied with the component numbers of the second group to determine intermediate multiplication results that are summed together to determine an effective result that represents a result of multiplying the first floating point number with the second floating point number.
    Type: Grant
    Filed: October 2, 2019
    Date of Patent: November 30, 2021
    Assignee: Facebook, Inc.
    Inventors: Krishnakumar Narayanan Nair, Anup Ramesh Kadkol, Ehsan Khish Ardestani Zadeh, Olivia Wu, Yuchen Hao, Thomas Mark Ulrich, Rakesh Komuravelli
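The decomposition-and-sum scheme above can be demonstrated in software: split each operand into components narrower than the original mantissa, form the pairwise partial products, and sum them. The two-way split and 12-bit component width below are illustrative assumptions; the patent's hardware would fix its own widths:

```python
import math

def split(x: float, parts: int = 2, bits: int = 12):
    """Decompose x into lower-precision components that sum exactly to x,
    peeling off the top `bits` mantissa bits at each step."""
    components = []
    remainder = x
    for _ in range(parts - 1):
        m, e = math.frexp(remainder)  # remainder == m * 2**e, 0.5 <= |m| < 1
        head = math.ldexp(math.trunc(math.ldexp(m, bits)), e - bits)
        components.append(head)
        remainder -= head  # exact: head's bits are a prefix of remainder's
    components.append(remainder)
    return components

def multiply(a: float, b: float) -> float:
    """Sum the pairwise component products (the intermediate multiplication
    results) to form the effective product of a and b."""
    return sum(ca * cb for ca in split(a) for cb in split(b))
```
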
  • Patent number: 11157425
    Abstract: A memory device provides a first memory area and a second memory area. A smart buffer includes: a priority setting unit receiving sensing data and a corresponding weight, determining a priority of the sensing data based on the corresponding weight, and classifying the sensing data as first priority sensing data or second priority sensing data based on the priority; and a channel controller allocating a channel to a first channel group, allocating another channel to a second channel group, assigning the first channel group to process the first priority sensing data in relation to the first memory area, and assigning the second channel group to process the second priority sensing data in relation to the second memory area.
    Type: Grant
    Filed: April 21, 2020
    Date of Patent: October 26, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Youngmin Jo, Daeseok Byeon, Tongsung Kim
  • Patent number: 11137438
    Abstract: A method of testing a self-contained device under test having at least a circuit under test and a power source is provided. The method may include at least temporarily enabling power from the power source to the circuit under test, determining a first voltage across the circuit under test, determining a second voltage across the circuit under test after a test duration, and calculating an average current of the circuit under test based at least partially on the first voltage, the second voltage and the test duration.
    Type: Grant
    Filed: August 24, 2015
    Date of Patent: October 5, 2021
    Assignee: Disruptive Technologies
    Inventor: Bjornar Hernes
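The average-current calculation described above can be written out under one common interpretation: if the power source behaves as a known capacitance discharged by the circuit under test, the average current over the test is I_avg = C(V1 - V2)/t. The capacitance model is an assumption the abstract leaves open:

```python
def average_current(v_start: float, v_end: float,
                    duration_s: float, capacitance_f: float) -> float:
    """Average current drawn over the test duration, assuming the source
    acts as a capacitance C: I_avg = C * (V1 - V2) / t."""
    return capacitance_f * (v_start - v_end) / duration_s
```
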
  • Patent number: 11054883
    Abstract: A power management algorithm framework proposes: 1) a Quality-of-Service (QoS) metric for throughput-based workloads; 2) heuristics to differentiate between throughput and latency sensitive workloads; and 3) an algorithm that combines the heuristic and QoS metric to determine target frequency for minimizing idle time and improving power efficiency without any performance degradation. A management algorithm framework enables optimizing power efficiency in server-class throughput-based workloads while still providing desired performance for latency sensitive workloads. The power savings are achieved by identifying workloads in which one or more cores can be run at a lower frequency (and consequently lower power) without a significant negative performance impact.
    Type: Grant
    Filed: June 18, 2018
    Date of Patent: July 6, 2021
    Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Leonardo De Paula Rosa Piga, Samuel Naffziger, Ivan Matosevic, Indrani Paul
  • Patent number: 10983921
    Abstract: A method and apparatus for performing memory access operations during a memory relocation in a computing system are disclosed. In response to initiating a relocation operation from a source region of memory to a destination region of memory, copying one or more lines of the source region to the destination region, and activating a mirror operation mode in a communication circuit coupled to one or more devices included in the computing system. In response to receiving an access request from a device, reading previously stored data from the source region, and in response to determining the access request includes a write request, storing new data included in the write request to locations in both the source and destination regions.
    Type: Grant
    Filed: February 3, 2020
    Date of Patent: April 20, 2021
    Assignee: Oracle International Corporation
    Inventors: John Feehrer, Patrick Stabile, Gregory Onufer, John Johnson
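The mirror-mode behavior above — reads served from the source region, writes stored to both source and destination during relocation — can be sketched with a toy memory model (the class shape is illustrative; the patent operates on hardware regions via a communication circuit):

```python
class RelocatingMemory:
    """Toy model of a memory relocation in progress: reads come from the
    source region; with mirroring active, writes land in both regions."""

    def __init__(self, size: int):
        self.source = [0] * size
        self.dest = [0] * size
        self.mirroring = False

    def read(self, addr: int) -> int:
        return self.source[addr]  # previously stored data: source region

    def write(self, addr: int, data: int) -> None:
        self.source[addr] = data
        if self.mirroring:
            self.dest[addr] = data  # mirror new data into the destination
```
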
  • Patent number: 10977854
    Abstract: Embodiments of a device include on-board memory, an applications processor, a digital signal processor (DSP) cluster, a configurable accelerator framework (CAF), and at least one communication bus architecture. The communication bus communicatively couples the applications processor, the DSP cluster, and the CAF to the on-board memory. The CAF includes a reconfigurable stream switch and a data volume sculpting unit, which has an input and an output coupled to the reconfigurable stream switch. The data volume sculpting unit has a counter, a comparator, and a controller. The data volume sculpting unit is arranged to receive a stream of feature map data that forms a three-dimensional (3D) feature map. The 3D feature map is formed as a plurality of two-dimensional (2D) data planes.
    Type: Grant
    Filed: February 20, 2019
    Date of Patent: April 13, 2021
    Assignees: STMICROELECTRONICS INTERNATIONAL N.V., STMICROELECTRONICS S.R.L.
    Inventors: Surinder Pal Singh, Thomas Boesch, Giuseppe Desoli
  • Patent number: 10931825
    Abstract: A computer system routes contact center interactions. Interactions between contact center agents and contact center queries that are received at a contact center are monitored. A ranking model is trained according to the categories of the contact center queries and the interaction scores of each handled query using machine learning. The ranking model is tested according to various metrics to ensure that the ranking model ranks the agents according to one or more selected business outcomes. A net score may be determined for each contact center agent for each query category based on a predicted interaction score and one or more non-interaction features. Incoming queries may then be routed to an appropriate contact center agent based on the category of the incoming query. Embodiments may further include a method and program product for routing contact center interactions in substantially the same manner described above.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: February 23, 2021
    Assignee: Cisco Technology, Inc.
    Inventors: Ambareesh Revanur, Manigandan Ms, Sateesh Kumar Potturu Naga Venkata
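The net-score routing described above can be sketched as follows. The scores, the linear weighting, and the agent structure are assumptions for illustration; the patent's actual ranking model is learned from monitored interactions.

```python
# Hypothetical sketch: combine a predicted interaction score with
# non-interaction features into a net score per query category, then route
# an incoming query to the highest-scoring agent for its category.
def net_score(predicted_interaction, non_interaction_features, weight=0.5):
    """Simple linear combination standing in for the trained model."""
    return predicted_interaction + weight * sum(non_interaction_features)

def route(query_category, agents):
    """Pick the agent with the highest net score for the query's category."""
    return max(agents, key=lambda a: a["scores"].get(query_category, 0.0))

agents = [
    {"name": "A", "scores": {"billing": net_score(0.7, [0.2]), "tech": 0.4}},
    {"name": "B", "scores": {"billing": net_score(0.5, [0.1]), "tech": 0.9}},
]
best = route("billing", agents)
```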
  • Patent number: 10922412
    Abstract: A configuration manager is associated with a Networked Control System (NCS) comprising a plurality of sensors and actuators. The configuration manager automatically discovers the hardware and/or software configurations of the sensors and actuators, and analyzes that information in order to detect whether any of the sensors and actuators have been tampered with. If the configuration manager detects such tampering, it indicates the tampering to a control manager of the NCS, which then functions to minimize potential damage to the NCS.
    Type: Grant
    Filed: January 22, 2018
    Date of Patent: February 16, 2021
    Assignee: The Boeing Company
    Inventor: Balaje T. Thumati
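The discovery-and-compare step above amounts to diffing each device's observed configuration against a trusted baseline. A minimal sketch, with all device names and fields invented:

```python
# Hypothetical sketch: flag any sensor or actuator whose discovered
# configuration deviates from the trusted baseline; the caller would then
# report the flagged devices to the control manager.
def detect_tampering(baseline, discovered):
    """Return the set of device IDs whose configuration deviates from baseline."""
    return {dev for dev, cfg in discovered.items() if baseline.get(dev) != cfg}

baseline   = {"sensor-1": {"fw": "1.0"}, "actuator-1": {"fw": "2.1"}}
discovered = {"sensor-1": {"fw": "1.0"}, "actuator-1": {"fw": "9.9"}}
tampered = detect_tampering(baseline, discovered)
```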
  • Patent number: 10831698
    Abstract: Embodiments are provided herein for facilitating high link bandwidth utilization in a disaggregated computing system. A plurality of general purpose links are used to connect respective pluralities of computing elements. A traffic pattern between respective ones of a first plurality of computing elements of a first type and respective ones of a second plurality of computing elements of a second type is detected. The first and second pluralities of computing elements are dynamically connected through the respective ones of the plurality of general purpose links according to the detected traffic pattern.
    Type: Grant
    Filed: September 25, 2018
    Date of Patent: November 10, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Min Li, John A. Bivens, Ruchi Mahindru, Valentina Salapura, Eugen Schenfeld
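A toy sketch of connecting computing elements by detected traffic pattern. The greedy "heaviest pairs first" policy is an assumption; the patent does not specify a particular allocation algorithm.

```python
# Hypothetical sketch: grant the fixed pool of general-purpose links to the
# element pairs with the heaviest detected traffic.
def allocate_links(traffic, num_links):
    """traffic: {(src, dst): bytes observed}. Returns pairs granted a link."""
    ranked = sorted(traffic, key=traffic.get, reverse=True)
    return ranked[:num_links]

traffic = {("cpu-0", "gpu-1"): 900, ("cpu-1", "mem-0"): 300, ("cpu-0", "mem-1"): 50}
links = allocate_links(traffic, num_links=2)
```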
  • Patent number: 10817214
    Abstract: Provided is a storage device set. The storage device set includes a storage device configured to communicate with a host, the storage device including a controller configured to generate encrypted input data by encrypting data; and a reconfigurable logic chip configured to receive the encrypted input data from the storage device, generate processed data by processing the encrypted input data according to a configuration, and generate encrypted output data by encrypting the processed data.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: October 27, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jae-geun Park, Phil-yong Jung, Ho-jun Shim, Sang-young Ye
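The encrypt-process-encrypt flow above can be sketched end to end. XOR stands in for a real cipher purely for illustration, and the per-byte "configuration" is an invented placeholder for the reconfigurable logic.

```python
# Hypothetical sketch: the storage device encrypts its input; the
# reconfigurable logic chip processes the encrypted data according to its
# configuration and re-encrypts the result.
KEY = 0x5A

def encrypt(data, key=KEY):
    return bytes(b ^ key for b in data)   # toy XOR "cipher"

def process(encrypted, config):
    """Decrypt, apply the configured transform, return the plaintext result."""
    plain = encrypt(encrypted)            # XOR is its own inverse
    return bytes(config(b) for b in plain)

def logic_chip(encrypted_input, config):
    return encrypt(process(encrypted_input, config))  # encrypted output

cipher_in = encrypt(b"\x01\x02\x03")
cipher_out = logic_chip(cipher_in, config=lambda b: b + 1)
result = encrypt(cipher_out)              # host decrypts the final output
```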
  • Patent number: 10761585
    Abstract: In one embodiment an apparatus includes a multiplicity of processor components; one or more device components communicatively coupled to one or more processor components of the multiplicity of processor components; and a controller comprising logic at least a portion of which is in hardware, the logic to schedule one or more forced idle periods interspersed with one or more active periods, a forced idle period spanning a duration during which the multiplicity of processor components and the one or more device components are simultaneously placed in respective idle states that define a forced idle power state during isolated sub-periods of the forced idle period. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: October 11, 2018
    Date of Patent: September 1, 2020
    Assignee: INTEL CORPORATION
    Inventors: Paul S. Diefenbaugh, Eugene Gorbatov, Andrew Henroid, Eric C. Samson, Barnes Cooper
  • Patent number: 10732985
    Abstract: An information and entertainment system of a vehicle provides a number of functions that can be used by a user of the vehicle. In the method, an order of priority of the multiple functions is set by the user, wherein the order of priority states the time availability of the functions desired by the user after activation of the information and entertainment system. In accordance with the set order of priority, the multiple functions are carried out after the information and entertainment system is started, and sub-functions of the multiple functions can be made available immediately.
    Type: Grant
    Filed: November 13, 2015
    Date of Patent: August 4, 2020
    Assignee: VOLKSWAGEN AKTIENGESELLSCHAFT
    Inventor: Sven Höhne
  • Patent number: 10685143
    Abstract: Disabling communication in a multiprocessor fabric. The multiprocessor fabric may include a plurality of processors and a plurality of communication elements and each of the plurality of communication elements may include a memory. A configuration may be received for the multiprocessor fabric, which specifies disabling of communication paths between one or more of: one or more processors and one or more communication elements; one or more processors and one or more other processors; or one or more communication elements and one or more other communication elements. Accordingly, the multiprocessor fabric may be automatically configured in hardware to disable the communication paths specified by the configuration. The multiprocessor fabric may be operated to execute a software application according to the configuration.
    Type: Grant
    Filed: June 4, 2018
    Date of Patent: June 16, 2020
    Assignee: COHERENT LOGIX, INCORPORATED
    Inventors: Michael B. Doerr, Carl S. Dobbs, Michael B. Solka, Michael R. Trocino, David A. Gibson
  • Patent number: 10613619
    Abstract: Techniques and apparatuses are described that provide an ultra-low power mode for a low-cost force-sensing device. These techniques extend battery life of the device by minimizing power consumption for potential wake-up events. To do this, a high-pass filter (e.g., differentiator) is used to evaluate sensor signals in a time domain to provide an estimate of a rate of change of the signal. When the rate of change of the signal deviates from a baseline value by a threshold amount, then a microcontroller is woken to evaluate a large number of historical samples, such as 200 or more milliseconds worth of historical data. If a human gesture is not recognized, then the microcontroller returns to an idle state, but if a human gesture is recognized, then a high-power application processor is woken to execute an application configured to perform an operation mapped to the human gesture.
    Type: Grant
    Filed: December 15, 2017
    Date of Patent: April 7, 2020
    Assignee: Google LLC
    Inventors: Debanjan Mukherjee, James Brooks Miller
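The wake-path logic above can be sketched directly: a first-difference acts as the high-pass filter, and the microcontroller wakes only when the rate of change deviates from a baseline by a threshold. The threshold values and sample data below are illustrative assumptions.

```python
# Hypothetical sketch of the ultra-low-power wake decision: a differentiator
# estimates the signal's rate of change; a deviation beyond the threshold
# triggers waking the (simulated) microcontroller to inspect recent history.
def rate_of_change(samples):
    """First difference of the two most recent samples (a crude high-pass)."""
    return samples[-1] - samples[-2]

def should_wake(samples, baseline=0.0, threshold=5.0):
    return abs(rate_of_change(samples) - baseline) > threshold

history = [0.1, 0.2, 0.1, 0.2, 9.0]   # sudden force spike at the end
wake = should_wake(history)
```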
  • Patent number: 10599347
    Abstract: An information processing system includes: a processor in one information processing apparatus among information processing apparatuses coupled via a ring bus corresponding to a closed-loop bus; and a first memory, wherein the processor: generate a verification request for verification of completion of a write request after issuing the write request to a second memory in the information processing apparatuses; transmit the verification request to a subsequent information processing apparatus; transmit, when a request from a preceding information processing apparatus is not a verification request, the request to the subsequent information processing apparatus; transmit, when the request is a verification request to another information processing apparatus, the verification request and a request to the first memory to the subsequent information processing apparatus in order of receiving; and execute, when the request is a verification request to the one information processing apparatus, processing and generate
    Type: Grant
    Filed: June 8, 2018
    Date of Patent: March 24, 2020
    Assignee: FUJITSU LIMITED
    Inventors: Jun Kawahara, Masanori Higeta
  • Patent number: 10564633
    Abstract: A cloud-based virtualization generation service collects industrial data from multiple industrial automation systems of multiple industrial customers for storage and analysis on a cloud platform. A virtualization management component (VMC) generates a virtualized industrial automation system of an industrial automation system based on data analysis results. The VMC facilitates remotely controlling the industrial automation system based on user interactions with the virtualized industrial automation system, and updates the virtualized industrial automation system based on collected data relating to the industrial automation system. The VMC customizes a user's view of the virtualized industrial automation system based on a user's role, authorization, location, or preferences, wherein different views of the virtualized industrial automation system with different data overlays are presented on different communication devices of different users.
    Type: Grant
    Filed: June 13, 2017
    Date of Patent: February 18, 2020
    Assignee: Rockwell Automation Technologies, Inc.
    Inventors: Juan Asenjo, John Strohmenger, Stephen Nawalaniec, Bradford H. Hegrat, Joseph A. Harkulich, Jessica Lin Korpela, Jenifer Rydberg Wright, Rainer Hessmer, John Dyck, Edward Alan Hill, Sal Conti
  • Patent number: 10558490
    Abstract: An apparatus is described having multiple cores, each core having: a) a CPU; b) an accelerator; and, c) a controller and a plurality of order buffers coupled between the CPU and the accelerator. Each of the order buffers is dedicated to a different one of the CPU's threads. Each one of the order buffers is to hold one or more requests issued to the accelerator from its corresponding thread. The controller is to control issuance of the order buffers' respective requests to the accelerator.
    Type: Grant
    Filed: March 30, 2012
    Date of Patent: February 11, 2020
    Assignee: Intel Corporation
    Inventors: Ronny Ronen, Boris Ginzburg, Eliezer Weissmann
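A sketch of per-thread order buffers in front of a shared accelerator: each thread's requests stay in order within its own FIFO, and a controller drains the buffers into the accelerator. The round-robin issue policy is an assumption; the patent only states that the controller controls issuance.

```python
from collections import deque

# Hypothetical sketch: one FIFO order buffer per CPU thread, drained
# round-robin by the controller toward the accelerator.
class OrderBufferController:
    def __init__(self, num_threads):
        self.buffers = [deque() for _ in range(num_threads)]

    def submit(self, thread_id, request):
        self.buffers[thread_id].append(request)

    def issue(self):
        """Drain all buffers round-robin; returns the accelerator issue order."""
        issued = []
        while any(self.buffers):
            for buf in self.buffers:
                if buf:
                    issued.append(buf.popleft())
        return issued

ctrl = OrderBufferController(num_threads=2)
ctrl.submit(0, "t0-a")
ctrl.submit(0, "t0-b")
ctrl.submit(1, "t1-a")
order = ctrl.issue()
```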
  • Patent number: 10530823
    Abstract: A method, computer-readable medium, and device for processing a stream of records with a guarantee that each record is accounted for exactly once are disclosed. A method may receive, via a first operator, a data stream having a plurality of records, the plurality of records provided by a plurality of first data sources; allocate the data stream to a plurality of shards of the first operator; process the plurality of records by each shard of the plurality of shards to generate a first output stream, where each shard is implemented with at least two replicas; and output the first output stream to a third operator or a subscriber.
    Type: Grant
    Filed: August 10, 2016
    Date of Patent: January 7, 2020
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Theodore Johnson, Vladislav Shkapenyuk
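One way to picture the exactly-once guarantee with replicated shards is as duplicate suppression by record ID when replica outputs are merged. The record structure and merge policy below are illustrative assumptions, not the patent's protocol.

```python
# Hypothetical sketch: each record carries an ID; duplicates produced by
# replicated shards are suppressed so every record reaches the downstream
# operator exactly once.
def merge_replicas(*replica_outputs):
    """Union replica output streams, keeping each record ID exactly once."""
    seen, out = set(), []
    for stream in replica_outputs:
        for rec_id, payload in stream:
            if rec_id not in seen:
                seen.add(rec_id)
                out.append((rec_id, payload))
    return out

replica_a = [(1, "x"), (2, "y")]
replica_b = [(2, "y"), (3, "z")]   # replica overlap: record 2 duplicated
stream = merge_replicas(replica_a, replica_b)
```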
  • Patent number: 10462251
    Abstract: A file-mapping method and system can better manage the number of items (i.e., files, subdirectories, or a combination of them) within any single directory within a storage medium. The method and system can be used to limit the number of items within the directory, direct content and content components to different directories, and provide an internally recognizable name for the filename. When searching the storage medium, time is not wasted searching what appears to be a seemingly endless list of filenames or subdirectory names within any single directory. A client computer can have requests for content fulfilled quicker, and the network site can reduce the load on hardware or software components. While the method and system can be used for nearly any storage media, the method and system are well suited for cache memories used with web servers.
    Type: Grant
    Filed: July 10, 2017
    Date of Patent: October 29, 2019
    Assignee: Open Text SA ULC
    Inventors: Conleth S. O'Connell, Jr., Eric R. White, N. Isaac Rajkumar
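A common way to bound the number of items per directory, in the spirit of the abstract above, is to hash each content name into a fixed number of subdirectory buckets. The bucket scheme and path layout below are illustrative assumptions, not the patented mapping.

```python
import hashlib

# Hypothetical sketch: derive a bucketed path like root/3f/name from a
# stable hash of the name, so no single directory accumulates an unbounded
# list of items.
def map_path(root, name, fanout=256):
    digest = hashlib.sha256(name.encode()).hexdigest()
    bucket = int(digest[:2], 16) % fanout
    return f"{root}/{bucket:02x}/{name}"

p = map_path("/cache", "page-42.html")
```

Because the hash is stable, the same name always maps to the same bucket, so lookups never have to scan sibling directories.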
  • Patent number: 10234841
    Abstract: A programmable logic controller includes a state monitoring unit to monitor a state of another programmable logic controller that is its counterpart in the duplication, and a system switching control unit to transmit, to a slave device, status information including information indicating whether the programmable logic controller is a control system or a standby system, to receive the control data that is addressed to the programmable logic controller that is the control system and is transmitted from the slave device in a case where the programmable logic controller is the control system, and to switch the programmable logic controller to the control system when a fault in the other programmable logic controller is detected in a case where the programmable logic controller is the standby system.
    Type: Grant
    Filed: April 22, 2015
    Date of Patent: March 19, 2019
    Assignee: Mitsubishi Electric Corporation
    Inventors: Katsuhiro Annen, Katsumi Yamagiwa
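The control/standby switchover above reduces to a small state machine: the standby unit promotes itself when it detects a fault in its control-system counterpart. The roles and fault signal below are simplified assumptions.

```python
# Hypothetical sketch of duplicated-PLC switchover: the standby monitors the
# control system's status and takes over the control role on a fault.
class PLC:
    def __init__(self, role):
        self.role = role           # "control" or "standby"

    def on_peer_status(self, peer_faulted):
        """Standby promotes itself when the control peer reports a fault."""
        if self.role == "standby" and peer_faulted:
            self.role = "control"

standby = PLC("standby")
standby.on_peer_status(peer_faulted=True)
new_role = standby.role
```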
  • Patent number: 10229357
    Abstract: The present disclosure is directed to a high-capacity training and prediction machine learning platform that can support high-capacity parameter models (e.g., with 10 billion weights). The platform implements a generic feature transformation layer for joint updating and a distributed training framework utilizing shard servers to increase training speed for the high-capacity model size. The models generated by the platform can be utilized in conjunction with existing dense baseline models to predict compatibilities between different groupings of objects (e.g., a group of two objects, three objects, etc.).
    Type: Grant
    Filed: September 11, 2015
    Date of Patent: March 12, 2019
    Assignee: Facebook, Inc.
    Inventors: Ou Jin, Stuart Michael Bowers, Dmytro Dzhulgakov
  • Patent number: 10108455
    Abstract: Methods and apparatus to manage and execute action in computing environments are disclosed. An example system includes a virtual machine resource platform to host a virtual compute node and a resource manager to: in response to a user request associated with the virtual compute node: determine a type of the virtual compute node; determine if an installed adapter identifies a type associated with the type of the virtual compute node; and when the adapter identifies the type associated with the type of the virtual compute node, present a user selectable identification of the adapter.
    Type: Grant
    Filed: August 24, 2016
    Date of Patent: October 23, 2018
    Assignee: VMware, Inc.
    Inventors: Phillip Smith, Timothy Binkley-Jones, Sean Bryan, Lori Marshall, Kathleen McDonough, Richard Monteleone, David Springer, Brian Williams, David Wilson
  • Patent number: 10103940
    Abstract: A method of updating at least two interconnected devices in a local network, a local network comprising at least two interconnected devices, and a method of operating a remote management client and a device in this local network are provided. A resource location information of an update archive is communicated from a remote management client in the local network to the other devices in said network. The devices participating in the update communicate participation acknowledgement messages to the remote management client. The participating devices determine whether a next one of a predefined sequence of update statuses is reached. They notify the other participating devices that this update status has been reached and pause until all other participating devices have notified that they also have reached the same update status.
    Type: Grant
    Filed: June 4, 2014
    Date of Patent: October 16, 2018
    Assignee: Thomson Licensing
    Inventors: Sylvain Dumet, Dirk Van De Poel
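The notify-and-pause behaviour above is a barrier per update status: no device enters the next status until every participant has reached the current one. The sketch below simulates that lockstep in a single process; the real protocol exchanges network messages, and the status names are invented.

```python
# Hypothetical sketch: advance all participating devices through a
# predefined sequence of update statuses in lockstep, one barrier per status.
STATUSES = ["downloaded", "verified", "installed"]

def run_update(devices):
    log = []
    for status in STATUSES:          # barrier: nobody enters the next
        for dev in devices:          # status until all reached this one
            log.append((dev, status))
    return log

log = run_update(["stb-1", "router-1"])
```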
  • Patent number: 10067556
    Abstract: A method maintains power usage prediction information for one or more functional units in branch prediction logic for a processing unit such that the power consumption of a functional unit may be selectively reduced in association with the execution of branch instructions when it is predicted that the functional unit will be idle subsequent to the execution of such branch instructions.
    Type: Grant
    Filed: August 31, 2015
    Date of Patent: September 4, 2018
    Assignee: International Business Machines Corporation
    Inventors: Mark J. Hickey, Adam J. Muff, Matthew R. Tubbs, Charles D. Wait
  • Patent number: 10055454
    Abstract: The present invention discloses a method for executing an SQL operator on a compressed data chunk. The method comprises the steps of: receiving an SQL operator, accessing compressed data chunk blocks, receiving a full set of derivatives of the compression scheme, checking compression rules based on the compression scheme and the relevant operator to approve the SQL operation on compressed data, and, in case of approval, applying the respective SQL operator on the relevant compressed data chunks.
    Type: Grant
    Filed: September 24, 2013
    Date of Patent: August 21, 2018
    Assignee: SQREAM TECHNOLOGIES LTD
    Inventors: Kostya Varakin, Ami Gal
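As a concrete illustration of operating on compressed data: with run-length encoding, a COUNT with a predicate can be answered directly from the (value, run length) pairs without decompressing. The RLE scheme is an illustrative assumption; the patent covers general compression schemes and their rule checks.

```python
# Hypothetical sketch: run-length-encode a column chunk, then evaluate a
# COUNT(*) WHERE predicate on the compressed runs themselves.
def rle_compress(values):
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

def count_where(runs, predicate):
    """Each qualifying run contributes its full length to the count."""
    return sum(length for value, length in runs if predicate(value))

chunk = rle_compress([5, 5, 5, 2, 2, 7])
n = count_where(chunk, lambda v: v >= 5)
```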