Patents by Inventor Shoumeng Yan
Shoumeng Yan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Publication number: 20250077471
  Abstract: A method comprises: determining a container directory entry in a container virtual file system corresponding to a container, where the container virtual file system is configured to manage at least one file corresponding to the container, and has a container directory entry that reflects a hierarchical relationship between the at least one file; and tagging the container directory entry in the container virtual file system as a first control level, to make a distinction from a control level of a directory entry that is different from that in the container virtual file system and that is in a host virtual file system of a processing device, where the host virtual file system is configured to manage files in the processing device, and has a directory entry that reflects a hierarchical relationship between the files in the processing device.
  Type: Application
  Filed: September 4, 2024
  Publication date: March 6, 2025
  Inventors: Weijie LIU, Zhi LI, Hongliang TIAN, Shoumeng YAN
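The tagging idea in this abstract can be illustrated with a small sketch. The class and constant names below are invented for illustration and are not taken from the patent; the point is only that container directory entries carry a control level distinct from host entries.

```python
# Hypothetical sketch: tag a container VFS subtree with its own control
# level, distinct from entries in the host virtual file system.
HOST_LEVEL = 0        # control level of host VFS directory entries
CONTAINER_LEVEL = 1   # "first control level" for container entries

class Dentry:
    def __init__(self, name, level=HOST_LEVEL):
        self.name = name
        self.level = level
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

def tag_subtree(dentry, level):
    """Recursively tag a directory-entry subtree with a control level."""
    dentry.level = level
    for child in dentry.children:
        tag_subtree(child, level)

# Host VFS root with an ordinary host entry and a container mount point.
root = Dentry("/")
host_file = root.add(Dentry("etc"))
container_root = root.add(Dentry("containers/c1"))
container_root.add(Dentry("app.conf"))

# Tag only the container subtree as the first control level.
tag_subtree(container_root, CONTAINER_LEVEL)
```

Because the level is stored per entry, a lookup can distinguish container-managed files from host files without consulting a separate table.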
- Publication number: 20240403226
  Abstract: For managing a translation lookaside buffer (TLB), an example computing device is deployed with virtual machines (VMs) by using a virtual machine monitor and has multiple CPU cores, where the VMs include a common VM running a common execution environment and a secure VM running a trusted execution environment (TEE) instance. In an example, in response to a target request for modifying a secure memory, the common execution environment sends an inter-processor interrupt (IPI) to one or more CPU cores that run the secure VM. Control data corresponding to the secure VM include a TLB control field set to a target value that indicates that all TLBs of the VM are flushed when exiting from the secure VM. In response to the IPI, the one or more CPU cores exit from a secure mode, and flush all TLBs of the VM based on the target value.
  Type: Application
  Filed: May 31, 2024
  Publication date: December 5, 2024
  Applicant: ALIPAY (HANGZHOU) INFORMATION TECHNOLOGY CO., LTD.
  Inventors: Bojun Zhu, Shuang Liu, Shoumeng Yan
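The IPI-driven flush protocol described above can be modeled as a toy state machine. Everything below (class names, the TLB contents, the control-field value) is an assumption made for illustration, not Alipay's actual implementation.

```python
# Toy model of the described protocol: the common environment sends an
# IPI to secure-VM cores; the TLB control field says all TLBs of the VM
# are flushed on exit from the secure VM.
FLUSH_ALL_ON_EXIT = 1  # illustrative target value of the control field

class Core:
    def __init__(self, cid):
        self.cid = cid
        self.secure_mode = True          # currently running the secure VM
        self.tlb = {0x1000: 0x8000}      # cached translations (made up)

    def receive_ipi(self, tlb_control):
        # On the IPI the core exits secure mode; the control field
        # forces a flush of all the VM's TLB entries on that exit.
        self.secure_mode = False
        if tlb_control == FLUSH_ALL_ON_EXIT:
            self.tlb.clear()

def modify_secure_memory(secure_cores, tlb_control):
    # The common execution environment interrupts every CPU core that
    # currently runs the secure VM before touching secure memory.
    for core in secure_cores:
        core.receive_ipi(tlb_control)

cores = [Core(0), Core(1)]
modify_secure_memory(cores, FLUSH_ALL_ON_EXIT)
```

After the loop, no core can still hold a stale translation for the modified secure memory.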
- Publication number: 20240403231
  Abstract: Shared memory management methods and apparatuses are provided. In an implementation, an apparatus comprises a virtual machine monitor, a virtual machine, and a trusted execution environment (TEE). The virtual machine monitor is configured to determine, based on the address information, whether a first address is comprised in the shared memory when a page fault occurs in response to a request by the trusted part to access the first address, and in response to determining that the first address is comprised in the shared memory, send an interrupt notification to the virtual machine. The virtual machine is configured to: in response to the interrupt notification and determining, based on the address information, that the first address is comprised in the shared memory, validate a first page table entry in the first page table, and return a response message to the virtual machine monitor, wherein the first page table entry comprises address mapping information of a first page that comprises the first address.
  Type: Application
  Filed: May 31, 2024
  Publication date: December 5, 2024
  Applicant: ALIPAY (HANGZHOU) INFORMATION TECHNOLOGY CO., LTD.
  Inventors: Yiling Xu, Shuang Liu, Ran Duan, Shoumeng Yan
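The page-fault round trip between the monitor and the VM can be sketched as below. The address range, class names, and return strings are all illustrative assumptions; the sketch only shows the handshake order described in the abstract.

```python
# Hypothetical model of the shared-memory page-fault flow: the VMM
# checks the faulting address, notifies the VM, and the VM validates
# the page table entry for the page containing that address.
SHARED_RANGE = range(0x4000, 0x5000)  # made-up shared memory region

class Vm:
    def __init__(self):
        self.page_table = {}  # page base address -> entry

    def handle_interrupt(self, addr):
        # The VM re-checks that the address is in shared memory, then
        # validates the page table entry mapping its page.
        if addr in SHARED_RANGE:
            page = addr & ~0xFFF
            self.page_table[page] = {"valid": True, "maps": page}
            return "ok"
        return "not-shared"

class Vmm:
    def __init__(self, vm):
        self.vm = vm

    def on_page_fault(self, addr):
        # The monitor decides whether the fault falls in shared memory
        # and, if so, sends the interrupt notification to the VM.
        if addr in SHARED_RANGE:
            return self.vm.handle_interrupt(addr)
        return "fault"

vm = Vm()
vmm = Vmm(vm)
result = vmm.on_page_fault(0x4020)
```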
- Patent number: 12139036
  Abstract: An electric vehicle computing sharing system (100) is adapted to receive a signal indicating the electric vehicle (110, 120, 130) is connected to a charging station (115, 125, 135). The computing sharing system (100) may be further adapted to receive information about the electric vehicle (110, 120, 130). The computing sharing system (100) may be further adapted to determine a predicted charging duration (535) for the electric vehicle (110, 120, 130). The computing sharing system (100) may be further adapted to identify a task for execution by a computing resource of the electric vehicle (110, 120, 130) based on the predicted charging duration (535). The computing sharing system (100) may be further adapted to transmit the task to the electric vehicle (110, 120, 130). The computing sharing system (100) may be further adapted to receive a result for the task from the electric vehicle (110, 120, 130).
  Type: Grant
  Filed: June 28, 2019
  Date of Patent: November 12, 2024
  Assignee: Intel Corporation
  Inventors: Bin Yang, Shoumeng Yan, Zhifang Long, Yanmin Zhang, Jia Bao
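The core step here is matching tasks to the predicted charging window. A minimal sketch of one plausible policy, assuming made-up task names and runtimes and a simple greedy fit (the patent does not specify the selection algorithm):

```python
# Illustrative task selection for an EV charging window: greedily pick
# tasks whose estimated runtimes fit within the predicted duration.
def pick_tasks(tasks, predicted_charging_minutes):
    """Select (name, minutes) tasks that fit the charging window."""
    chosen, remaining = [], predicted_charging_minutes
    for name, minutes in sorted(tasks, key=lambda t: t[1]):
        if minutes <= remaining:
            chosen.append(name)
            remaining -= minutes
    return chosen

tasks = [("video-transcode", 50), ("map-tile-render", 20), ("ml-batch", 45)]
selected = pick_tasks(tasks, predicted_charging_minutes=70)
# selected -> ["map-tile-render", "ml-batch"]: 65 minutes fit in 70
```

Sorting shortest-first maximizes the number of tasks completed before the vehicle unplugs; other objectives (value, deadline) would change the sort key.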
- Patent number: 12106009
  Abstract: Technologies for framework-level audio device virtualization include a computing device that executes multiple application framework instances. The computing device monitors for an application framework instance switch and, in response to an application framework instance switch, determines whether the current application framework instance is in the foreground. If in the foreground, the computing device selects a physical audio output device. The computing device may output audio data associated with the current application framework instance using a kernel audio driver associated with the physical audio output device. If not in the foreground, the computing device selects a null audio output device using a null audio hardware abstraction layer (HAL). The null audio HAL may sleep for the duration of audio data associated with the current application framework instance. The null audio HAL may be an operating-system- and device-independent shared library of the computing device.
  Type: Grant
  Filed: September 14, 2020
  Date of Patent: October 1, 2024
  Assignee: Intel Corporation
  Inventors: Shoumeng Yan, Yuan Wu, Dahai Kou
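The foreground/background device choice reads naturally as a small dispatch. The classes below are invented stand-ins, not Intel's HAL interface; the null HAL records how long it would sleep instead of calling `time.sleep`, so the behavior is observable.

```python
# Sketch of selecting a physical vs. null audio output device based on
# whether the framework instance is in the foreground (names invented).
class PhysicalHal:
    def write(self, duration_ms):
        return f"played {duration_ms} ms on speaker"

class NullHal:
    def __init__(self):
        self.slept_ms = 0

    def write(self, duration_ms):
        # The null HAL "sleeps" for the audio's duration instead of
        # touching hardware; recorded here rather than time.sleep().
        self.slept_ms += duration_ms
        return "discarded"

def select_device(is_foreground, physical, null):
    return physical if is_foreground else null

null = NullHal()
fg = select_device(True, PhysicalHal(), null).write(100)
bg = select_device(False, PhysicalHal(), null).write(250)
```

Sleeping for the audio's duration keeps the background instance's playback clock advancing at the right rate even though nothing is audible.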
- Publication number: 20240205224
  Abstract: Trusted grid construction includes respectively loading, by a plurality of computing nodes, uniform target code in trusted execution environments (TEEs) of the plurality of computing nodes. A target metric value corresponding to the target code is stored to form a plurality of trusted nodes, where target logic corresponding to the uniform target code includes trusted proxy logic configured to provide a security related service for an upper-layer application. Each trusted node performs mutual verification with another trusted node based on the target metric value. A secure connection is established to the another trusted node after the mutual verification is passed, where a plurality of trusted nodes that establish secure connections to each other form a trusted grid.
  Type: Application
  Filed: December 1, 2023
  Publication date: June 20, 2024
  Applicant: ALIPAY (HANGZHOU) INFORMATION TECHNOLOGY CO., LTD.
  Inventors: Junxian Xiao, Shuai Wang, Shoumeng Yan
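Mutual verification against a stored metric can be sketched as follows. A SHA-256 hash stands in for a real TEE code measurement and remote attestation; the node names and code bytes are made up.

```python
# Toy mutual verification between trusted nodes: each side checks the
# other's code metric against the stored target metric value before a
# connection is allowed (hashing stands in for real attestation).
import hashlib

TARGET_CODE = b"uniform trusted proxy code"
TARGET_METRIC = hashlib.sha256(TARGET_CODE).hexdigest()

class TrustedNode:
    def __init__(self, name, loaded_code):
        self.name = name
        self.metric = hashlib.sha256(loaded_code).hexdigest()
        self.peers = set()

    def connect(self, other):
        # Mutual verification: both metrics must equal the target.
        if self.metric == TARGET_METRIC and other.metric == TARGET_METRIC:
            self.peers.add(other.name)
            other.peers.add(self.name)
            return True
        return False

a = TrustedNode("a", TARGET_CODE)
b = TrustedNode("b", TARGET_CODE)
evil = TrustedNode("evil", b"tampered code")
ok = a.connect(b)
rejected = a.connect(evil)
```

Nodes whose pairwise checks all pass end up fully connected, which is the "trusted grid" the abstract describes.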
- Publication number: 20240205204
  Abstract: This specification provides computer-implemented methods and apparatuses for data transmission protocol execution and data storage. Execution of a data transmission protocol involves an initiator and a receiver. In an example execution process, a first application serving as the initiator encapsulates protocol information related to a transmission handshake protocol, ciphertext data obtained by encrypting target privacy data using an encryption method determined based on the protocol information, signature information obtained by digitally signing the ciphertext data and the protocol information using a private key, field content of an attestation field, and the like into a data transmission unit. The attestation field is used to fill a remote attestation report that includes public key information of the initiator.
  Type: Application
  Filed: December 4, 2023
  Publication date: June 20, 2024
  Applicant: Alipay (Hangzhou) Information Technology Co., Ltd.
  Inventors: Junxian Xiao, Shuai Wang, Shoumeng Yan, Xiaomeng Zhang, Wenting Chang
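The encapsulation step can be sketched with standard-library stand-ins: HMAC-SHA256 replaces the private-key signature, a toy XOR replaces the negotiated encryption, and the attestation field is a placeholder string. None of these choices come from the patent; they only show the shape of the data transmission unit.

```python
# Hedged sketch of building and verifying a data transmission unit:
# protocol info + ciphertext + signature + attestation field.
import hmac, hashlib

KEY = b"initiator-private-key"  # placeholder for a real asymmetric key

def encrypt(data, key=0x5A):
    # Toy XOR cipher standing in for the negotiated encryption method.
    return bytes(b ^ key for b in data)

def build_unit(protocol_info, privacy_data):
    ciphertext = encrypt(privacy_data)
    # Sign the ciphertext together with the protocol information.
    signature = hmac.new(KEY, protocol_info + ciphertext,
                         hashlib.sha256).hexdigest()
    return {"protocol": protocol_info, "ciphertext": ciphertext,
            "signature": signature, "attestation": "report-placeholder"}

def verify_unit(unit):
    expected = hmac.new(KEY, unit["protocol"] + unit["ciphertext"],
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, unit["signature"])

unit = build_unit(b"handshake-v1", b"secret payload")
valid = verify_unit(unit)
```

Signing ciphertext and protocol info together means a receiver detects tampering with either before attempting decryption.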
- Patent number: 11537892
  Abstract: A mechanism is described for facilitating slimming of neural networks in machine learning environments. A method of embodiments, as described herein, includes learning a first neural network associated with machine learning processes to be performed by a processor of a computing device, where learning includes analyzing a plurality of channels associated with one or more layers of the first neural network. The method may further include computing a plurality of scaling factors to be associated with the plurality of channels such that each channel is assigned a scaling factor, wherein each scaling factor indicates relevance of a corresponding channel within the first neural network. The method may further include pruning the first neural network into a second neural network by removing one or more channels of the plurality of channels having low relevance as indicated by one or more scaling factors of the plurality of scaling factors assigned to the one or more channels.
  Type: Grant
  Filed: August 18, 2017
  Date of Patent: December 27, 2022
  Assignee: INTEL CORPORATION
  Inventors: Shoumeng Yan, Jianguo Li, Zhuang Liu
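The pruning step is easy to illustrate in isolation. Plain Python stands in for a real deep-learning framework; channel names, factor values, and the keep ratio are made up, and the patent itself does not prescribe this exact ranking rule.

```python
# Illustrative channel pruning: rank channels by their learned scaling
# factors and drop the least relevant ones.
def prune_channels(channels, scaling_factors, keep_ratio=0.5):
    """Keep the top fraction of channels by scaling factor."""
    ranked = sorted(zip(channels, scaling_factors),
                    key=lambda cs: cs[1], reverse=True)
    keep = max(1, int(len(ranked) * keep_ratio))
    return [c for c, _ in ranked[:keep]]

channels = ["ch0", "ch1", "ch2", "ch3"]
factors = [0.9, 0.01, 0.7, 0.05]  # per-channel relevance, as learned
slimmed = prune_channels(channels, factors, keep_ratio=0.5)
# slimmed -> ["ch0", "ch2"]: the two channels with near-zero factors go
```

The slimmed channel set defines the second, smaller network that replaces the first.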
- Patent number: 11392405
  Abstract: One or more implementations of the present specification provide a method and apparatus for securely entering a trusted execution environment in a hyper-threading scenario. The method can include: in response to that a logical processor running on a physical processor core generates a trusted execution environment entry event through an approach provided by a virtual machine monitor, labeling the logical processor with a state of expecting to enter a trusted execution environment; and in response to determining that all logical processors corresponding to the physical processor core are labeled with the state of expecting to enter a trusted execution environment, separately controlling each one of the logical processors to enter a trusted execution environment built on the physical processor core.
  Type: Grant
  Filed: June 23, 2021
  Date of Patent: July 19, 2022
  Assignee: Alipay (Hangzhou) Information Technology Co., Ltd.
  Inventors: Xiaojian Liu, Shoumeng Yan, Zongmin Gu
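The all-siblings gating rule is the heart of this method: no logical processor enters the TEE until every hyper-threading sibling on the same physical core expects to. A minimal sketch, with invented class and thread names:

```python
# Sketch of gated TEE entry on a hyper-threaded core: a logical
# processor is only moved into the TEE once all its siblings on the
# physical core are labeled as expecting to enter.
class PhysicalCore:
    def __init__(self, logical_ids):
        self.expecting = {lid: False for lid in logical_ids}
        self.in_tee = {lid: False for lid in logical_ids}

    def request_entry(self, lid):
        # Label this logical processor as expecting to enter the TEE.
        self.expecting[lid] = True
        # Only when every sibling is labeled does each of them enter.
        if all(self.expecting.values()):
            for sibling in self.in_tee:
                self.in_tee[sibling] = True

core = PhysicalCore(["lp0", "lp1"])
core.request_entry("lp0")
first = dict(core.in_tee)   # lp1 has not asked yet: nobody enters
core.request_entry("lp1")   # now both siblings enter together
```

Gating on all siblings prevents an untrusted thread from sharing the physical core with a TEE thread, which closes off side channels through shared core resources.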
- Publication number: 20220169140
  Abstract: An electric vehicle computing sharing system (100) is adapted to receive a signal indicating the electric vehicle (110, 120, 130) is connected to a charging station (115, 125, 135). The computing sharing system (100) may be further adapted to receive information about the electric vehicle (110, 120, 130). The computing sharing system (100) may be further adapted to determine a predicted charging duration (535) for the electric vehicle (110, 120, 130). The computing sharing system (100) may be further adapted to identify a task for execution by a computing resource of the electric vehicle (110, 120, 130) based on the predicted charging duration (535). The computing sharing system (100) may be further adapted to transmit the task to the electric vehicle (110, 120, 130). The computing sharing system (100) may be further adapted to receive a result for the task from the electric vehicle (110, 120, 130).
  Type: Application
  Filed: June 28, 2019
  Publication date: June 2, 2022
  Inventors: Bin Yang, Shoumeng Yan, Zhifang Long, Yanmin Zhang, Jia Bao
- Patent number: 11334086
  Abstract: Autonomous robots and methods of operating the same are disclosed. An autonomous robot includes a sensor and memory including machine readable instructions. The autonomous robot further includes at least one processor to execute the instructions to generate a velocity costmap associated with an environment in which the robot is located. The processor generates the velocity costmap based on a source image captured by the sensor. The velocity costmap includes velocity information indicative of movement of an obstacle detected in the environment.
  Type: Grant
  Filed: September 27, 2017
  Date of Patent: May 17, 2022
  Assignee: Intel Corporation
  Inventors: Bin Wang, Jianguo Li, Shoumeng Yan
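A velocity costmap augments an occupancy grid with how obstacles are moving. A rough sketch, assuming obstacle positions have already been detected in two frames (the grid size, cell units, and storage format are invented for illustration):

```python
# Rough sketch of a velocity costmap: obstacle displacement between two
# frames becomes per-cell velocity information on a small grid.
def velocity_costmap(prev_positions, curr_positions, dt, size=5):
    """Map each tracked obstacle's (vx, vy) onto its current cell."""
    grid = [[(0.0, 0.0) for _ in range(size)] for _ in range(size)]
    for oid, (x1, y1) in curr_positions.items():
        x0, y0 = prev_positions[oid]
        vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
        grid[y1][x1] = (vx, vy)  # store the obstacle's velocity
    return grid

prev = {"ped": (1, 1)}
curr = {"ped": (2, 3)}   # moved +1 cell in x, +2 in y
cmap = velocity_costmap(prev, curr, dt=0.5)
# cmap[3][2] -> (2.0, 4.0): cells per second in x and y
```

A planner can then penalize paths through cells whose velocity vectors point into the robot's intended trajectory, not just cells that are currently occupied.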
- Patent number: 11297163
  Abstract: Methods, systems, and apparatuses may provide for technology that divides an application into a plurality of portions that are each associated with one or more functions of the application and determine a plurality of transition probabilities between the plurality of portions. Some technology may also receive at least a first portion of the plurality of portions, and receive a relation file indicating the plurality of transition probabilities between the plurality of portions.
  Type: Grant
  Filed: January 17, 2019
  Date of Patent: April 5, 2022
  Inventors: Shoumeng Yan, Xiao Dong Lin, Yao Zu Dong, Zhen Zhou, Bin Yang
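Once the relation file's transition probabilities are available, the obvious use is predicting which portion to fetch next. A minimal sketch with an invented relation-file format and portion names:

```python
# Sketch of using a relation file's transition probabilities to decide
# which application portion to prefetch next (data are made up).
def next_portion(relation, current):
    """Return the portion most likely to follow the current one."""
    candidates = relation.get(current, {})
    return max(candidates, key=candidates.get) if candidates else None

# Relation file: portion -> {next portion: transition probability}
relation = {
    "login": {"home": 0.8, "settings": 0.2},
    "home": {"search": 0.6, "profile": 0.4},
}
prefetch = next_portion(relation, "login")
# prefetch -> "home": the most probable successor of "login"
```

Prefetching the highest-probability successor hides download latency for the portions a user is actually likely to reach.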
-
METHOD AND APPARATUS FOR SECURELY ENTERING TRUSTED EXECUTION ENVIRONMENT IN HYPER-THREADING SCENARIO
Publication number: 20220066809Abstract: One or more implementations of the present specification provide a method and apparatus for securely entering a trusted execution environment in a hyper-threading scenario. The method can include: in response to that a logical processor running on a physical processor core generates a trusted execution environment entry event through an approach provided by a virtual machine monitor, labeling the logical processor with a state of expecting to enter a trusted execution environment; and in response to determining that all logical processors corresponding to the physical processor core are labeled with the state of expecting to enter a trusted execution environment, separately controlling each one of the logical processors to enter a trusted execution environment built on the physical processor core.Type: ApplicationFiled: June 23, 2021Publication date: March 3, 2022Inventors: Xiaojian LIU, Shoumeng YAN, Zongmin GU -
- Publication number: 20210368023
  Abstract: Methods, systems, and apparatuses may provide for technology that divides an application into a plurality of portions that are each associated with one or more functions of the application and determine a plurality of transition probabilities between the plurality of portions. Some technology may also receive at least a first portion of the plurality of portions, and receive a relation file indicating the plurality of transition probabilities between the plurality of portions.
  Type: Application
  Filed: January 17, 2019
  Publication date: November 25, 2021
  Applicant: INTEL CORPORATION
  Inventors: Shoumeng Yan, Xiao Dong Lin, Yao Zu Dong, Zhen Zhou, Bin Yang
- Patent number: 10951521
  Abstract: A method for scheduling a computational task is proposed. The method includes receiving, at a server, a request for executing a computational task from a client device. The method further includes forwarding the computational task to a processing device if a predetermined condition is fulfilled. The predetermined condition can be based on an execution time or on a security level of data of the computational task, for example.
  Type: Grant
  Filed: June 1, 2018
  Date of Patent: March 16, 2021
  Assignee: MaxLinear, Inc.
  Inventors: Bin Yang, Shoumeng Yan, Yong Yao, Hongyu Zhang, Guobin Zhang
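The "predetermined condition" can be sketched concretely. The five-second threshold and the two security levels below are illustrative assumptions, not the patented criteria:

```python
# Sketch of the forwarding condition: offload long-running tasks, but
# never tasks whose data is marked confidential.
PUBLIC, CONFIDENTIAL = 0, 1  # illustrative security levels

def should_forward(expected_seconds, data_security, max_local_seconds=5):
    """Forward when the task is long-running and its data is public."""
    return expected_seconds > max_local_seconds and data_security == PUBLIC

offload_long = should_forward(expected_seconds=30, data_security=PUBLIC)
keep_secret = should_forward(expected_seconds=30, data_security=CONFIDENTIAL)
```

Combining a runtime threshold with a security gate matches the abstract's two example criteria: execution time and data security level.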
- Publication number: 20210064333
  Abstract: Technologies for framework-level audio device virtualization include a computing device that executes multiple application framework instances. The computing device monitors for an application framework instance switch and, in response to an application framework instance switch, determines whether the current application framework instance is in the foreground. If in the foreground, the computing device selects a physical audio output device. The computing device may output audio data associated with the current application framework instance using a kernel audio driver associated with the physical audio output device. If not in the foreground, the computing device selects a null audio output device using a null audio hardware abstraction layer (HAL). The null audio HAL may sleep for the duration of audio data associated with the current application framework instance. The null audio HAL may be an operating-system- and device-independent shared library of the computing device.
  Type: Application
  Filed: September 14, 2020
  Publication date: March 4, 2021
  Inventors: Shoumeng Yan, Yuan Wu, Dahai Kou
- Patent number: 10908952
  Abstract: Preemptive scheduling enclaves as disclosed herein support both cooperative and preemptive scheduling of in-enclave (IE) thread execution. These preemptive scheduling enclaves may include a scheduler configured to be executed as part of normal hardware interrupt processing by enclave threads. The scheduler identifies an IE thread to be scheduled and modifies enclave data structures so that when the enclave thread resumes processing after a hardware interrupt, the identified IE thread is executed, rather than the interrupted IE thread.
  Type: Grant
  Filed: April 21, 2017
  Date of Patent: February 2, 2021
  Assignee: INTEL CORPORATION
  Inventors: Hongliang Tian, Shoumeng Yan, Mona Vij
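The scheduling trick can be modeled abstractly: on each interrupt, the scheduler rewrites which in-enclave thread context the enclave resumes into. The class, thread names, and round-robin policy below are invented for illustration; the real mechanism manipulates SGX enclave data structures rather than a Python list.

```python
# Toy model of preemptive in-enclave scheduling: on a hardware
# interrupt, the scheduler changes which IE thread resumes, so the
# enclave re-enters into a different thread than was interrupted.
class Enclave:
    def __init__(self, ie_threads):
        self.ready = list(ie_threads)   # runnable in-enclave threads
        self.running = self.ready[0]

    def on_interrupt(self):
        # Scheduler runs as part of interrupt handling: rotate the
        # interrupted thread to the back and resume the next one.
        self.ready.append(self.ready.pop(0))
        self.running = self.ready[0]
        return self.running

enclave = Enclave(["ie-a", "ie-b", "ie-c"])
resumed = enclave.on_interrupt()
# resumed -> "ie-b": a different IE thread than the interrupted "ie-a"
```

This gives time-sliced fairness among IE threads without requiring each thread to yield cooperatively.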
- Patent number: 10776072
  Abstract: Technologies for framework-level audio device virtualization include a computing device that executes multiple application framework instances. The computing device monitors for an application framework instance switch and, in response to an application framework instance switch, determines whether the current application framework instance is in the foreground. If in the foreground, the computing device selects a physical audio output device. The computing device may output audio data associated with the current application framework instance using a kernel audio driver associated with the physical audio output device. If not in the foreground, the computing device selects a null audio output device using a null audio hardware abstraction layer (HAL). The null audio HAL may sleep for the duration of audio data associated with the current application framework instance. The null audio HAL may be an operating-system- and device-independent shared library of the computing device.
  Type: Grant
  Filed: March 29, 2016
  Date of Patent: September 15, 2020
  Assignee: Intel Corporation
  Inventors: Shoumeng Yan, Yuan Wu, Dahai Stephen Kou
- Publication number: 20200264626
  Abstract: Autonomous robots and methods of operating the same are disclosed. An autonomous robot includes a sensor and memory including machine readable instructions. The autonomous robot further includes at least one processor to execute the instructions to generate a velocity costmap associated with an environment in which the robot is located. The processor generates the velocity costmap based on a source image captured by the sensor. The velocity costmap includes velocity information indicative of movement of an obstacle detected in the environment.
  Type: Application
  Filed: September 27, 2017
  Publication date: August 20, 2020
  Inventors: Bin Wang, Jianguo Li, Shoumeng Yan
- Publication number: 20200234130
  Abstract: A mechanism is described for facilitating slimming of neural networks in machine learning environments. A method of embodiments, as described herein, includes learning a first neural network associated with machine learning processes to be performed by a processor of a computing device, where learning includes analyzing a plurality of channels associated with one or more layers of the first neural network. The method may further include computing a plurality of scaling factors to be associated with the plurality of channels such that each channel is assigned a scaling factor, wherein each scaling factor indicates relevance of a corresponding channel within the first neural network. The method may further include pruning the first neural network into a second neural network by removing one or more channels of the plurality of channels having low relevance as indicated by one or more scaling factors of the plurality of scaling factors assigned to the one or more channels.
  Type: Application
  Filed: August 18, 2017
  Publication date: July 23, 2020
  Applicant: Intel Corporation
  Inventors: Shoumeng Yan, Jianguo Li, Zhuang Liu