Patents by Inventor Kun Wang

Kun Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190325148
    Abstract: A data processing method comprises: in response to data to be encrypted or decrypted, determining, at a blockchain node, whether an adapter coupled to the node has been initialized; in response to determining that the adapter has not been initialized, determining an access address of the adapter; initializing the adapter based on the access address; and enabling the initialized adapter to encrypt or decrypt the data. As such, data encryption or decryption at the blockchain node is accelerated via the adapter.
    Type: Application
    Filed: April 18, 2019
    Publication date: October 24, 2019
    Inventors: Fei Chen, Kun Wang
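
The lazy-initialization flow in this abstract can be pictured with a short Python sketch. The `Adapter` class, its `initialize` method and the address-discovery callback are hypothetical stand-ins for illustration, not the patented implementation.

```python
class Adapter:
    """Hypothetical hardware adapter that accelerates encryption/decryption."""

    def __init__(self):
        self.initialized = False
        self.access_address = None

    def initialize(self, access_address):
        # Bind the adapter to its access address before first use.
        self.access_address = access_address
        self.initialized = True

    def encrypt(self, data: bytes) -> bytes:
        # Placeholder transform; a real adapter would offload this to hardware.
        return bytes(b ^ 0xFF for b in data)


def encrypt_at_node(adapter: Adapter, data: bytes, discover_address) -> bytes:
    # Initialize the adapter only when there is data to process, then offload to it.
    if not adapter.initialized:
        adapter.initialize(discover_address())
    return adapter.encrypt(data)


print(encrypt_at_node(Adapter(), b"block payload", lambda: "0000:3b:00.0"))
```
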
  • Publication number: 20190324809
    Abstract: Implementations of the present disclosure relate to a method, apparatus and computer program product for processing a computing task. The method comprises: obtaining status information of multiple computing resources; in response to receiving a neural network model-based computing task, determining configuration information of multiple layers associated with the neural network model; obtaining parameter data associated with at least one part of the multiple layers on the basis of the configuration information; and based on the status information and the parameter data, selecting from the multiple computing resources a group of computing resources for processing the computing task. According to the example implementations of the present disclosure, the multiple computing resources may be utilized fully, and a load balance may be struck between them.
    Type: Application
    Filed: April 12, 2019
    Publication date: October 24, 2019
    Inventors: Junping Zhao, Layne Lin Peng, Zhi Ying, Kun Wang
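
The selection step described above can be illustrated with a minimal sketch; the free-memory status field, the per-layer parameter sizes and the `workers_needed` count are assumptions made for this example, not details from the application.

```python
def select_resources(resources, layer_param_bytes, workers_needed=2):
    """Pick a group of computing resources for a neural-network computing task.

    `resources` maps a resource id to its free memory in bytes (status
    information); `layer_param_bytes` lists the parameter size of each layer,
    derived from the model's configuration information.
    """
    required = sum(layer_param_bytes)
    # Prefer the least-loaded resources that can hold the parameters, which
    # keeps the load balanced across the pool.
    eligible = [(free, rid) for rid, free in resources.items() if free >= required]
    eligible.sort(reverse=True)
    if len(eligible) < workers_needed:
        raise RuntimeError("not enough computing resources for this task")
    return [rid for _, rid in eligible[:workers_needed]]


gpus = {"gpu0": 8 << 30, "gpu1": 2 << 30, "gpu2": 12 << 30}
print(select_resources(gpus, layer_param_bytes=[1 << 30, 2 << 30]))  # ['gpu2', 'gpu0']
```
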
  • Publication number: 20190324917
    Abstract: There is provided a method for managing addresses in a distributed system. The distributed system comprises a client and a resource pool, the resource pool comprising multiple hosts, a host among the multiple hosts comprising a computing node. The method comprises: receiving an access request from the client, the access request being for accessing first target data in a physical memory of the computing node via a first virtual address; determining the physical memory on the basis of the first virtual address; and determining a first physical address of the first target data in the physical memory on the basis of the first virtual address, wherein the computing node is a graphics processing unit, and the physical memory is a memory of the graphics processing unit.
    Type: Application
    Filed: April 12, 2019
    Publication date: October 24, 2019
    Inventors: Wei Cui, Kun Wang
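
A toy page table shows the translation the abstract describes, from a client-visible virtual address to a physical address in GPU memory. The 4 KB page size and the `GpuAddressMap` class are assumptions for illustration only.

```python
class GpuAddressMap:
    """Toy page table mapping virtual addresses to GPU physical memory."""

    PAGE = 4096

    def __init__(self):
        self.pages = {}  # virtual page number -> (gpu id, physical page base)

    def map_page(self, vaddr, gpu_id, phys_base):
        self.pages[vaddr // self.PAGE] = (gpu_id, phys_base)

    def translate(self, vaddr):
        # Determine which GPU memory holds the data and the physical address in it.
        gpu_id, phys_base = self.pages[vaddr // self.PAGE]
        return gpu_id, phys_base + vaddr % self.PAGE


amap = GpuAddressMap()
amap.map_page(0x10000, gpu_id="gpu0", phys_base=0x7F000)
gpu, paddr = amap.translate(0x10010)
print(gpu, hex(paddr))  # gpu0 0x7f010
```
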
  • Publication number: 20190327175
    Abstract: Embodiments of the present disclosure provide methods, apparatuses and computer program products for transmitting data. A method comprises determining, at a source node, a traffic type of a packet to be sent to a destination node, the source node and the destination node having therebetween a plurality of network paths for different traffic types. The method further comprises including a mark indicating the traffic type into the packet. In addition, the method further comprises sending the packet including the mark to the destination node such that the packet is forwarded along one of the plurality of network paths specific to the traffic type. Embodiments of the present disclosure can transmit data using different network paths based on different traffic types of data so as to optimize network performance for different network requirements.
    Type: Application
    Filed: April 9, 2019
    Publication date: October 24, 2019
    Inventors: Zhi Ying, Junping Zhao, Kun Wang
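
The marking and path-selection steps can be sketched in a few lines of Python; the three traffic types and path names are invented for the example and are not taken from the application.

```python
from dataclasses import dataclass

# One network path per traffic type between the source and destination nodes.
PATHS = {"bulk": "path-A", "latency-sensitive": "path-B", "control": "path-C"}


@dataclass
class Packet:
    payload: bytes
    traffic_type: str = ""  # mark added by the source node


def send(packet: Packet, traffic_type: str) -> str:
    # Include the traffic-type mark in the packet at the source node ...
    packet.traffic_type = traffic_type
    # ... so that forwarding can pick the path dedicated to this traffic type.
    return PATHS[packet.traffic_type]


print(send(Packet(b"checkpoint shard"), "bulk"))  # path-A
```
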
  • Publication number: 20190324810
    Abstract: A method of scheduling a dedicated processing resource includes: obtaining source code of an application to be compiled; extracting, during compiling of the source code, metadata associated with the application, the metadata indicating an amount of the dedicated processing resource required by the application; and obtaining, based on the metadata, the dedicated processing resource allocated to the application. In this manner, the performance of the dedicated processing resource scheduling system and resource utilization are improved.
    Type: Application
    Filed: April 19, 2019
    Publication date: October 24, 2019
    Inventors: Junping Zhao, Kun Wang, Layne Lin Peng, Fei Chen
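
As a rough sketch of the idea, the "compiler" below simply looks for a resource annotation in the source text and schedules against it; the JSON comment format and the `gpus` field are assumptions, not the metadata format used in the application.

```python
import json


def extract_metadata(source_code: str) -> dict:
    """Pretend compile-time pass: pull a resource annotation out of the source."""
    for line in source_code.splitlines():
        if line.strip().startswith("# resources:"):
            return json.loads(line.split(":", 1)[1])
    return {}


def allocate(metadata: dict, free_gpus: int) -> int:
    # Grant the application the amount of dedicated resource its metadata asks for.
    wanted = metadata.get("gpus", 1)
    if wanted > free_gpus:
        raise RuntimeError("not enough dedicated processing resources")
    return wanted


app = '# resources: {"gpus": 2, "gpu_memory_gb": 16}\nprint("train model")\n'
print(allocate(extract_metadata(app), free_gpus=4))  # 2
```
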
  • Publication number: 20190325155
    Abstract: In a multi-cloud computing environment comprising a plurality of cloud platforms with each cloud platform comprising one or more nodes, a method maintains a decentralized metadata database framework, wherein each node comprises a decentralized metadata database component operatively coupled to each other decentralized metadata database component of the framework and wherein each of at least two of the decentralized metadata database components stores a set of metadata records corresponding to protected data stored across the plurality of cloud platforms. Further, the method manages one or more access requests directed to the protected data through one or more of the decentralized metadata database components of the framework.
    Type: Application
    Filed: April 23, 2018
    Publication date: October 24, 2019
    Inventors: Pengfei Wu, Kun Wang, Stephen J. Todd, Assaf Natanzon
  • Publication number: 20190327344
    Abstract: The present disclosure provides a method, apparatus and computer program product for determining a data transfer manner. The method comprises determining a first transfer completion time for transferring a data block from a first device to a second device without compression; determining a second transfer completion time for transferring the data block from the first device to the second device with compression performed; and selecting, based on a comparison of the first and second transfer completion times, between a first transfer manner of compressing the data block and transferring the compressed data block, and a second transfer manner of transferring the data block directly without compression. Through these embodiments, compression and decompression are evaluated against the transfer completion times before the data transfer, so that a transfer manner suitable for the data to be transferred and the devices performing the transfer can be selected.
    Type: Application
    Filed: March 20, 2019
    Publication date: October 24, 2019
    Inventors: Pengfei Wu, Kun Wang, Ming Zhang, Jinpeng Liu
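
The comparison at the heart of this abstract is easy to sketch. The throughput figures and the simple time model (compress, send the smaller block, decompress) are assumptions for illustration; the application itself does not prescribe them.

```python
def pick_transfer_manner(size_bytes, bandwidth_bps, compress_bps,
                         decompress_bps, compression_ratio):
    """Estimate both transfer completion times and choose a transfer manner.

    `compression_ratio` is compressed size / original size (e.g. 0.4).
    """
    # First manner: send the raw block.
    t_plain = size_bytes / bandwidth_bps
    # Second manner: compress, send the smaller block, decompress at the receiver.
    t_compressed = (size_bytes / compress_bps
                    + size_bytes * compression_ratio / bandwidth_bps
                    + size_bytes * compression_ratio / decompress_bps)
    return ("compress" if t_compressed < t_plain else "plain"), t_plain, t_compressed


# A 1 GB block over a 1 Gb/s link with a fast compressor and a 2.5x size reduction.
print(pick_transfer_manner(1e9, 125e6, 500e6, 800e6, 0.4))  # ('compress', 8.0, 5.7)
```
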
  • Publication number: 20190327342
    Abstract: Embodiments of the present disclosure relate to methods and an electronic device for transmitting and receiving data. The data transmission method includes: determining a hash value of original data to be transmitted; determining whether the hash value exists in a predetermined set of hash values; in response to the hash value being present in the set of hash values, transmitting the hash value, rather than the original data, to a server; and in response to the hash value being absent from the set of hash values, transmitting the original data to the server and adding the hash value to the set of hash values. The embodiments of the present disclosure avoid transmitting duplicate data between a client and a server without requiring extra remote procedure call commands between the client and the server.
    Type: Application
    Filed: April 11, 2019
    Publication date: October 24, 2019
    Inventors: Wei Cui, Sanping Li, Kun Wang
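
A minimal Python sketch of the transmission path, using SHA-256 as the hash and a plain `set` as the predetermined set of hash values; both are assumptions made here, since the application does not name a particular hash function.

```python
import hashlib

known_hashes = set()  # the predetermined set of hash values shared with the server


def transmit(original: bytes) -> dict:
    """Send either the hash value or the original data, following the abstract."""
    digest = hashlib.sha256(original).hexdigest()
    if digest in known_hashes:
        # The server already holds this content, so the hash alone is enough.
        return {"kind": "hash", "value": digest}
    # Unknown content: send it once and remember its hash for next time.
    known_hashes.add(digest)
    return {"kind": "data", "value": original, "hash": digest}


print(transmit(b"weights-v1")["kind"])  # data
print(transmit(b"weights-v1")["kind"])  # hash
```
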
  • Publication number: 20190324817
    Abstract: According to one example embodiment of the present disclosure, there is provided a method for optimization in a distributed system, where the distributed system comprises a client and multiple hosts among which a host comprises a computing node. The method comprises: receiving a first command requesting to use the computing node from an application at the client; determining the type of the first command; and adjusting the first command on the basis of the type of the first command to optimize the execution of the first command in the distributed system, where the computing node is a graphics processing unit, and the first command is a remote procedure call of the graphics processing unit.
    Type: Application
    Filed: April 15, 2019
    Publication date: October 24, 2019
    Inventors: Wei Cui, Kun Wang
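
To make the "adjust by command type" idea concrete, here is a tiny dispatcher. The two example adjustments (batching small host-to-device copies, attaching a stream to kernel launches) are generic GPU-RPC optimizations invented for this sketch, not the ones claimed in the application.

```python
def adjust(command: dict) -> dict:
    """Adjust a GPU remote procedure call according to its type."""
    kind = command["op"]  # determine the type of the command
    if kind == "memcpy_h2d":
        # Illustrative optimization: mark small host-to-device copies for batching.
        return {**command, "batched": True}
    if kind == "kernel_launch":
        # Illustrative optimization: attach a stream so launches can overlap copies.
        return {**command, "stream": 0}
    return command


print(adjust({"op": "memcpy_h2d", "bytes": 4096}))
```
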
  • Publication number: 20190324816
    Abstract: Implementations of the present disclosure relate to a method, apparatus and computer program product for processing a computing task. According to one example implementation of the present disclosure, there is provided a method for processing a computing task, comprising: in response to the usage of multiple computing resources indicating that at least a part of the multiple computing resources is in use, determining the direction of a communication ring between that part of the computing resources; in response to receiving a request for processing the computing task, determining the number of computing resources associated with the request; and based on the usage and the direction of the communication ring, selecting from the multiple computing resources a sequence of computing resources that satisfies the number, to process the computing task. Other example implementations include an apparatus for processing a computing task and a computer program product thereof.
    Type: Application
    Filed: March 14, 2019
    Publication date: October 24, 2019
    Inventors: Junping Zhao, Kun Wang, Jinpeng Liu
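
One way to read the selection step is as a search for a contiguous run of free resources along the ring; the sketch below assumes exactly that, which is an interpretation rather than the claimed method.

```python
def select_ring_sequence(ring, in_use, count):
    """Pick `count` consecutive free resources following the ring's direction.

    `ring` lists resource ids in ring order; `in_use` marks resources already
    occupied by another communication ring.
    """
    n = len(ring)
    for start in range(n):
        window = [ring[(start + i) % n] for i in range(count)]
        # A contiguous window keeps the new ring's traffic on neighbouring links.
        if not any(r in in_use for r in window):
            return window
    raise RuntimeError("no contiguous sequence of free resources")


ring = ["gpu0", "gpu1", "gpu2", "gpu3", "gpu4", "gpu5"]
print(select_ring_sequence(ring, in_use={"gpu1", "gpu2"}, count=3))  # ['gpu3', 'gpu4', 'gpu5']
```
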
  • Publication number: 20190325307
    Abstract: According to an embodiment, a performance benchmark dataset is obtained, where the dataset at least includes structural data of one or more deep neural network models, and time performance data and computing resource consumption data of a plurality of deep learning applications based on the one or more deep neural network models; a training dataset is extracted from the performance benchmark dataset, where the training dataset has a plurality of parameter dimensions, including: the structures of the deep neural network models of the plurality of deep learning applications, the resource configuration of the plurality of deep learning applications, and the training time of the plurality of deep learning applications; and correspondence among the parameter dimensions of the training dataset is created so as to build an estimation model for estimating the resources utilized by deep learning applications.
    Type: Application
    Filed: April 11, 2019
    Publication date: October 24, 2019
    Inventors: Sanping Li, Kun Wang
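
The abstract leaves the estimation model open; as a stand-in, the sketch below fits a linear least-squares model on a toy benchmark table. The feature columns (layers, parameters, GPUs) and the sample numbers are invented for illustration.

```python
import numpy as np

# Toy benchmark records: (layers, parameters in millions, GPUs) -> training hours.
X = np.array([[10, 5, 1], [50, 25, 2], [100, 60, 4], [150, 110, 8]], dtype=float)
y = np.array([1.0, 3.5, 6.0, 9.0])

# Fit a linear estimation model with a bias term by least squares.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)


def estimate_training_hours(layers, params_m, gpus):
    return float(np.array([layers, params_m, gpus, 1.0]) @ coef)


print(round(estimate_training_hours(80, 40, 4), 2))
```
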
  • Publication number: 20190324930
    Abstract: A method, device and computer program product for enabling a Single Root Input/Output Virtualization (SR-IOV) function in an endpoint device. The method comprises: receiving, at an adapter, a request message from a virtual machine, the request message indicating an operation to be performed on the endpoint device by the virtual machine; parsing the request message to obtain a first request Transaction Layer Packet (TLP); determining whether the type of the first request TLP is a peer-to-peer transmission supported TLP or a peer-to-peer transmission unsupported TLP; in response to determining that the type of the first request TLP is a peer-to-peer transmission supported TLP, generating a second request TLP based on the first request TLP; and sending the second request TLP to the endpoint device. With this solution, the SR-IOV function is enabled in an endpoint device that does not natively support it, without the need to change the endpoint device.
    Type: Application
    Filed: March 11, 2019
    Publication date: October 24, 2019
    Inventors: Fucai Liu, Fei Chen, Kun Wang
  • Publication number: 20190324805
    Abstract: Embodiments of the present disclosure provide a method, apparatus and computer program product for resource scheduling. The method comprises obtaining a processing requirement for a deep learning task, the processing requirement being specified by a user and at least including a requirement related to a completion time of the deep learning task. The method further comprises determining, based on the processing requirement, a resource required by the deep learning task such that processing of the deep learning task based on the resource satisfies the processing requirement. Through the embodiments of the present disclosure, the resources can be scheduled reasonably and flexibly to satisfy the user's processing requirement for a particular deep learning task without requiring the user to manually specify the requirement on the resources.
    Type: Application
    Filed: April 10, 2019
    Publication date: October 24, 2019
    Inventors: Layne Lin Peng, Kun Wang, Sanping Li
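
A back-of-the-envelope sketch of turning a completion-time requirement into a resource amount. The near-linear scaling model and the 0.9 efficiency factor are assumptions for the example, not part of the claimed method.

```python
def resources_for_deadline(hours_on_one_gpu, deadline_hours,
                           scaling_efficiency=0.9, max_gpus=16):
    """Find the smallest GPU count whose estimated completion time meets the deadline."""
    for gpus in range(1, max_gpus + 1):
        # Assume near-linear speed-up, discounted per extra GPU.
        estimated = hours_on_one_gpu / (gpus * scaling_efficiency ** (gpus - 1))
        if estimated <= deadline_hours:
            return gpus, estimated
    raise RuntimeError("deadline cannot be met within the resource limit")


print(resources_for_deadline(hours_on_one_gpu=20, deadline_hours=6))
```
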
  • Publication number: 20190327238
    Abstract: Embodiments of the present disclosure provide a method, apparatus and computer program product for executing an application in clouds. In the method according to an embodiment of the present disclosure, an application execution request for executing an application in clouds is received from a user. In response to the application execution request, a monitoring module and a protection data configuration are uploaded into a runtime environment, the protection data configuration defining sensitive data that a user with a low authorization level is not allowed to access. Based on the protection data configuration, the monitoring module monitors the user's data input and data output during execution of the application to prevent a user with a low authorization level from accessing the sensitive data. Embodiments of the present disclosure can achieve effective protection of sensitive data during execution of the application.
    Type: Application
    Filed: March 7, 2019
    Publication date: October 24, 2019
    Inventors: Fei Chen, Fucai Liu, Kun Wang
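
The monitoring idea can be sketched as a wrapper around data access. The field names, the two-level authorization threshold and the `PermissionError` behaviour are all assumptions made for this illustration.

```python
PROTECTION_CONFIG = {"salary", "ssn"}  # fields named as sensitive in the configuration


def monitored_read(record: dict, field: str, authorization_level: int):
    """Monitor a data access during application execution.

    Users below an (arbitrary) authorization level of 2 are blocked from
    reading fields listed in the protection data configuration.
    """
    if field in PROTECTION_CONFIG and authorization_level < 2:
        raise PermissionError(f"access to sensitive field '{field}' denied")
    return record[field]


employee = {"name": "Lee", "salary": 100000}
print(monitored_read(employee, "name", authorization_level=1))  # allowed
try:
    monitored_read(employee, "salary", authorization_level=1)   # blocked
except PermissionError as err:
    print(err)
```
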
  • Patent number: 10450256
    Abstract: The present disclosure provides ketone waxes, methods of forming ketone waxes, and compositions comprising ketone waxes. In at least one embodiment, a ketone wax is provided. The ketone wax includes a C40-C90 ketone content of about 50 wt % or greater and a paraffin content of less than about 10 wt %, as determined by 2-dimensional gas chromatography, and about 50 wt % or greater of the ketone wax has a boiling point of 961° F. or greater. In at least one embodiment, a method for forming a C40-C90 ketone wax includes exposing a feedstock to a basic catalyst under conditions suitable for coupling unsaturated carbon chains from the feedstock to form a composition including a ketone wax, oligomerizing the ketone wax to form a C40-C90 ketone wax, and distilling and/or extracting the oligomerized ketone wax to provide a C40-C90 ketone wax of the present disclosure.
    Type: Grant
    Filed: September 21, 2018
    Date of Patent: October 22, 2019
    Assignee: ExxonMobil Research and Engineering Company
    Inventors: Virginia M. Reiner, Michel Daage, Kun Wang, Sarvesh K. Agrawal, Frank C. Wang
  • Patent number: 10430681
    Abstract: Provided is a character segmentation and recognition method. The method includes: collecting image data to obtain a to-be-recognized image; positioning a character line candidate region on the to-be-recognized image; obtaining pre-set character line prior information, where the character line prior information includes the number of characters, the character spacing and the character size; obtaining a corresponding segmentation point template based on the character line prior information; obtaining credibility degrees for different positions on the character line candidate region traversed by the segmentation point template; determining the position with the highest credibility degree as the optimal segmentation position; segmenting the character line candidate region based on the segmentation point template and the optimal segmentation position to obtain multiple single-character regions; and performing character recognition on each of the single-character regions to obtain a corresponding recognition result.
    Type: Grant
    Filed: June 26, 2015
    Date of Patent: October 1, 2019
    Assignee: GRG Banking Equipment Co., Ltd.
    Inventors: Weifeng Wang, Xinhua Qiu, Rongqiu Wang, Kun Wang
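
As a rough illustration of scoring template positions by credibility, the sketch below slides a fixed-width template over a 1-D ink profile and rewards offsets whose character windows cover ink while their gaps stay empty; the scoring rule is invented here and is not the patented measure.

```python
def best_segmentation_offset(ink_profile, char_width, spacing, num_chars):
    """Slide a segmentation-point template over a character line candidate region."""
    step = char_width + spacing
    template_len = num_chars * char_width + (num_chars - 1) * spacing
    best_offset, best_score = 0, float("-inf")
    for offset in range(len(ink_profile) - template_len + 1):
        score = 0
        for k in range(num_chars):
            start = offset + k * step
            # Ink inside the expected character window raises the credibility ...
            score += sum(ink_profile[start:start + char_width])
            # ... while ink in the expected gap between characters lowers it.
            if k < num_chars - 1:
                score -= sum(ink_profile[start + char_width:start + step])
        if score > best_score:
            best_offset, best_score = offset, score
    return best_offset


# Three 2-pixel-wide characters separated by 1-pixel gaps, starting at column 1.
profile = [0, 9, 9, 0, 9, 9, 0, 9, 9, 0, 0]
print(best_segmentation_offset(profile, char_width=2, spacing=1, num_chars=3))  # 1
```
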
  • Publication number: 20190296421
    Abstract: The present invention discloses an antenna wiring sleeve, and an antenna assembly provided with the wiring sleeve. The antenna assembly comprises a pole and several antennas distributed vertically on the pole along its axial direction, each antenna being internally provided with a wiring sleeve mounted on the pole, and the cables of each antenna passing along the axial direction of the pole through the wiring sleeves of the antennas located below it. Each wiring sleeve comprises a sleeve body and a plurality of wiring grooves formed on a side surface of the sleeve body for the cables to pass through. The wiring sleeve is additionally arranged on the antenna pole so as to route the antenna cables axially along the outside of the pole, thereby saving space, achieving an aesthetically pleasing and neat appearance, and facilitating assembly.
    Type: Application
    Filed: March 29, 2019
    Publication date: September 26, 2019
    Inventors: Kun Wang, Yan Zhang, Jing Sun
  • Publication number: 20190291230
    Abstract: A precision lapping and polishing device for the external cylindrical surface of a disk part, and a taper error adjustment method for the device. The device comprises a circular baseplate, slant rails, baffles, pressure plates, copper blocks, a washer blanket, blanket plates, a set of bead shafting, a friction driving wheel, a DC motor, a mobile power supply, an LED lamp and a cover body. By adopting the working principle that the generatrix rotates around a fixed axis to form the cylindrical surface, ultra-precision machining of the cylindrical surface of the disk part is realized. Radial, continuous, automatic micro-feeding of the disk part is realized by thinning the circular baseplate, which is internally tangent to the generatrix, during the process of lapping and polishing. The device has the advantages of simple operation, convenient adjustment and low cost, and is of important value for popularization and application.
    Type: Application
    Filed: June 13, 2017
    Publication date: September 26, 2019
    Inventors: Siying LING, Kun WANG, Baodi YU, Xiaodong WANG, Liding WANG
  • Patent number: 10415604
    Abstract: A pumping device includes a seal cap unit including a cap, an air relief member connected with the cap, and a regulating member connected with the air relief member. The cap has an air inlet hole and an air outlet hole. The air outlet hole has a through hole. The air relief member is mounted on the air outlet hole and has a guide hole and a plurality of apertures. The regulating member has an air relief hole and a plurality of draining holes. Thus, the air relief hole of the regulating member and the pressure relief member provide a double pressure relief function to release the excessive air pressure in the barrel.
    Type: Grant
    Filed: December 16, 2016
    Date of Patent: September 17, 2019
    Assignee: Crafts & Carriers Taiwan Inc.
    Inventor: Kun-Wang Wang
  • Patent number: 10409517
    Abstract: Embodiments of the present disclosure provide a device for data backup comprising: a secondary backup device coupled to a primary backup device, the secondary backup device further comprising: a data segmentation unit operable to divide target data to be backed up into a plurality of data segments; and a data fingerprint generation unit operable to generate a corresponding data fingerprint for each of the plurality of data segments and provide the data fingerprint to the primary backup device for backing up the target data at the primary backup device, wherein the data fingerprint is a mapped data segment whose length is less than the length of the corresponding data segment.
    Type: Grant
    Filed: March 15, 2017
    Date of Patent: September 10, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Colin Yong Zou, Kun Wang, Sean Cheng Ye, Junping Frank Zhao, Man Lv
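
The segmentation and fingerprinting pipeline translates naturally into a few lines of Python. SHA-1 and the 4 KB segment size are assumptions for illustration; the patent only requires that the fingerprint be shorter than the segment it maps.

```python
import hashlib


def make_fingerprints(target: bytes, segment_size: int = 4096):
    """Divide the backup target into segments and fingerprint each one."""
    segments = [target[i:i + segment_size]
                for i in range(0, len(target), segment_size)]
    # Each fingerprint is far shorter than the data segment it maps.
    return [(hashlib.sha1(seg).hexdigest(), len(seg)) for seg in segments]


for digest, length in make_fingerprints(b"x" * 10000):
    print(length, digest[:12])
```
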