Patents by Inventor Vadim Sukhomlinov

Vadim Sukhomlinov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11171983
    Abstract: Embodiments are directed toward techniques to detect a first function associated with an address space initiating a call instruction to a second function in the address space, the first function to call the second function in a deprivileged mode of operation, and to define accessible address ranges for segments of the address space for the second function, each segment to have a different address range in the address space that the second function is permitted to access in the deprivileged mode of operation. Embodiments include switching to the stack associated with the second address space and the second function, and initiating execution of the second function in the deprivileged mode of operation.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: November 9, 2021
    Assignee: INTEL CORPORATION
    Inventors: Vadim Sukhomlinov, Kshitij Doshi, Michael Lemay, Dmitry Babokin, Areg Melik-Adamyan
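    A minimal Python sketch of the deprivileged-call idea described above; it is not taken from the patent, and `Segment`, `call_deprivileged`, and the address values are hypothetical. The caller defines per-segment accessible ranges, and the callee's accesses are checked against them while it runs in the deprivileged mode:

    ```python
    # Hypothetical sketch: per-segment access ranges enforced while a callee
    # runs "deprivileged". Names and structure are illustrative only.

    class Segment:
        def __init__(self, name, start, end):
            self.name, self.start, self.end = name, start, end  # [start, end) range

        def allows(self, addr):
            return self.start <= addr < self.end

    def call_deprivileged(callee, segments, *args):
        """Run `callee`, giving it an `access` callback that faults on
        any address outside the ranges the caller defined for it."""
        def access(addr):
            if not any(seg.allows(addr) for seg in segments):
                raise PermissionError(f"deprivileged access to {addr:#x} denied")
            return addr  # a real system would read or write memory here

        return callee(access, *args)

    # Usage: the callee may touch its own stack/data segments but nothing else.
    segments = [Segment("stack", 0x7000, 0x8000), Segment("data", 0x1000, 0x2000)]
    print(call_deprivileged(lambda access: access(0x7010), segments))  # allowed
    ```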
  • Publication number: 20210319098
    Abstract: Techniques and apparatuses to harden AI systems against various attacks are provided. Among them are techniques and apparatuses that expand the domain of an inference model to include both visible and hidden classes. The hidden classes can be used to detect possible probing attacks against the model.
    Type: Application
    Filed: April 23, 2019
    Publication date: October 14, 2021
    Applicant: INTEL CORPORATION
    Inventors: OLEG POGORELIK, ALEX NAYSHTUT, OMER BEN-SHALOM, DENIS KLIMOV, RAIZY KELLERMANN, GUY BARNHART-MAGEN, VADIM SUKHOMLINOV
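    A small sketch of the hidden-class hardening technique, under the assumption that the model's label space simply mixes visible and hidden (decoy) classes; `classify` is a stand-in for a real inference model, and all names are illustrative:

    ```python
    # Illustrative only: an inference model trained over visible classes plus
    # hidden "decoy" classes; hits on hidden classes suggest probing.

    VISIBLE = {"cat", "dog"}
    HIDDEN = {"decoy_0", "decoy_1"}  # hypothetical hidden classes

    def classify(features):
        # Stand-in for a real model: pick a label deterministically from the input.
        labels = sorted(VISIBLE | HIDDEN)
        return labels[hash(tuple(features)) % len(labels)]

    def infer(features):
        label = classify(features)
        if label in HIDDEN:
            return {"label": None, "probe_suspected": True}   # never expose hidden classes
        return {"label": label, "probe_suspected": False}

    print(infer([0.1, 0.7]))
    ```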
  • Patent number: 11126721
    Abstract: The disclosed embodiments generally relate to detecting malware through detection of micro-architectural changes (morphing events) when executing code at a hardware level (e.g., CPU). An exemplary embodiment relates to a computer system having: a memory circuitry comprising an executable code; a central processing unit (CPU) in communication with the memory circuitry and configured to execute the code; a performance monitoring unit (PMU) associated with the CPU, the PMU configured to detect and count one or more morphing events associated with execution of the code and to determine if the counted number of morphing events exceeds a threshold value; and a co-processor configured to initiate a memory scan of the memory circuitry to identify malware in the code.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: September 21, 2021
    Assignee: INTEL CORPORATION
    Inventors: Alex Nayshtut, Vadim Sukhomlinov, Koichi Yamada, Ajay Harikumar, Venkat Gokulrangan
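    The PMU-plus-threshold flow can be sketched in a few lines of Python; `MORPH_THRESHOLD`, `scan_memory`, and the event stream are hypothetical stand-ins for the hardware counters and co-processor described in the abstract:

    ```python
    # Hypothetical flow: count "morphing" events reported by a PMU-like source
    # and trigger a memory scan once a threshold is exceeded.

    MORPH_THRESHOLD = 3  # illustrative value

    def scan_memory(region):
        print(f"co-processor: scanning {region} for malware signatures")

    def monitor(events, region="code_pages"):
        count = 0
        for ev in events:                      # events as observed during execution
            if ev == "morphing":
                count += 1
                if count > MORPH_THRESHOLD:    # counted events exceed the threshold
                    scan_memory(region)
                    count = 0                  # reset after handing off to the scanner

    monitor(["load", "morphing", "morphing", "store", "morphing", "morphing"])
    ```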
  • Publication number: 20210271733
    Abstract: Detailed herein are embodiments related to bit matrix multiplication in a processor. For example, in some embodiments a processor is described comprising: decode circuitry to decode an instruction having fields for an opcode, an identifier of a first source bit matrix, an identifier of a second source bit matrix, an identifier of a destination bit matrix, and an immediate; and execution circuitry to execute the decoded instruction to perform a multiplication of a matrix of S-bit elements of the identified first source bit matrix with S-bit elements of the identified second source bit matrix, wherein the multiplication and accumulation operations are selected by the operation selector, and to store a result of the matrix multiplication into the identified destination bit matrix, where S indicates a plural bit size.
    Type: Application
    Filed: January 22, 2021
    Publication date: September 2, 2021
    Inventors: Dmitry Y. Babokin, Kshitij A. Doshi, Vadim Sukhomlinov
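    A rough software model of the described instruction, simplified to single-bit elements (S = 1); the `OPS` encoding of the immediate is an assumption for illustration, not the patent's encoding:

    ```python
    # Illustrative bit-matrix multiply: the "immediate" selects which Boolean
    # operations play the roles of multiply and accumulate (e.g. AND/XOR vs AND/OR).

    from operator import and_, or_, xor

    OPS = {0: (and_, xor), 1: (and_, or_)}   # hypothetical immediate encodings

    def bit_matmul(a, b, imm):
        mul, acc = OPS[imm]
        n, k, m = len(a), len(b), len(b[0])
        out = [[0] * m for _ in range(n)]
        for i in range(n):
            for j in range(m):
                v = 0
                for t in range(k):
                    v = acc(v, mul(a[i][t], b[t][j]))
                out[i][j] = v
        return out

    A = [[1, 0], [1, 1]]
    B = [[1, 1], [0, 1]]
    print(bit_matmul(A, B, imm=0))  # AND as multiply, XOR as accumulate (GF(2) product)
    ```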
  • Publication number: 20210263779
    Abstract: Embodiments of systems, apparatuses and methods provide enhanced function as a service (FaaS) to users, e.g., computer developers and cloud service providers (CSPs). A computing system configured to provide such enhanced FaaS service includes one or more control architectural subsystems, software and orchestration subsystems, network and storage subsystems, and security subsystems. The computing system executes functions in response to events triggered by the users in an execution environment provided by the architectural subsystems, which represent an abstraction of execution management and shield the users from the burden of managing the execution. The software and orchestration subsystems allocate computing resources for the function execution by intelligently spinning up and down containers for function code with decreased instantiation latency and increased execution scalability while maintaining secured execution.
    Type: Application
    Filed: April 16, 2019
    Publication date: August 26, 2021
    Applicant: Intel Corporation
    Inventors: Mohammad R. Haghighat, Kshitij Doshi, Andrew J. Herdrich, Anup Mohan, Ravishankar R. Iyer, Mingqiu Sun, Krishna Bhuyan, Teck Joo Goh, Mohan J. Kumar, Michael Prinke, Michael Lemay, Leeor Peled, Jr-Shian Tsai, David M. Durham, Jeffrey D. Chamberlain, Vadim A. Sukhomlinov, Eric J. Dahlen, Sara Baghsorkhi, Harshad Sane, Areg Melik-Adamyan, Ravi Sahita, Dmitry Yurievich Babokin, Ian M. Steiner, Alexander Bachmutsky, Anil Rao, Mingwei Zhang, Nilesh K. Jain, Amin Firoozshahian, Baiju V. Patel, Wenyong Huang, Yeluri Raghuram
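    One way to picture the container spin-up/spin-down behavior is a warm pool keyed by function name; this sketch is illustrative only, and `ContainerPool` is a hypothetical name rather than part of the described system:

    ```python
    # Hypothetical sketch of event-triggered function execution with a warm
    # container pool, spun up and down to trade latency against resource use.

    import collections

    class ContainerPool:
        def __init__(self, max_warm=2):
            self.warm = collections.defaultdict(list)   # function name -> idle containers
            self.max_warm = max_warm

        def invoke(self, fn_name, fn, event):
            container = self.warm[fn_name].pop() if self.warm[fn_name] else f"new-{fn_name}"
            result = fn(event)                            # run the function code in the container
            if len(self.warm[fn_name]) < self.max_warm:   # keep it warm for the next event
                self.warm[fn_name].append(container)
            return container, result

    pool = ContainerPool()
    print(pool.invoke("resize", lambda e: e.upper(), "image-1"))  # cold start
    print(pool.invoke("resize", lambda e: e.upper(), "image-2"))  # reuses the warm container
    ```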
  • Patent number: 11055226
    Abstract: Particular embodiments described herein provide for an electronic device that can be configured to receive a request for data, wherein the request is received on a system that regularly stores data in a cache, and to provide the requested data without causing the data or an address of the data to be cached or for changes to the cache to occur. In an example, the requested data is already in a level 1 cache, level 2 cache, or last level cache, and the cache does not change its state. Also, a snoop request can be broadcast to acquire the requested data, and the snoop request is a read request and not a request for ownership of the data. In addition, changes to a translation lookaside buffer are prevented when the data is obtained using a linear-to-physical address translation.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: July 6, 2021
    Assignee: Intel Corporation
    Inventor: Vadim Sukhomlinov
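    A toy cache model contrasting an ordinary read with the described no-side-effect read; `Cache`, `no_effect_read`, and the addresses are hypothetical illustrations, not the hardware mechanism:

    ```python
    # Illustrative model of a read that leaves cache state untouched: the request
    # is served from cache if present, otherwise via a read-only "snoop", and in
    # neither case is a new line allocated or an existing line's state changed.

    class Cache:
        def __init__(self):
            self.lines = {}          # address -> data

        def normal_read(self, addr, memory):
            if addr not in self.lines:
                self.lines[addr] = memory[addr]   # an ordinary read allocates a line
            return self.lines[addr]

        def no_effect_read(self, addr, memory):
            if addr in self.lines:
                return self.lines[addr]           # hit: state unchanged
            return memory[addr]                   # miss: snoop/read without allocation

    memory = {0x100: "secret", 0x200: "data"}
    c = Cache()
    print(c.no_effect_read(0x100, memory), c.lines)   # 'secret', cache still empty
    ```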
  • Publication number: 20210191789
    Abstract: A computing apparatus, including: a hardware computing platform; and logic to operate on the hardware computing platform, configured to: receive a microservice instance registration for a microservice accelerator, wherein the registration includes a microservice that the microservice accelerator is configured to provide, and a microservice connection capability indicating an ability of the microservice instance to communicate directly with other instances of the same or a different microservice; and log the registration in a microservice registration database.
    Type: Application
    Filed: December 4, 2020
    Publication date: June 24, 2021
    Applicant: Intel Corporation
    Inventors: Vadim Sukhomlinov, Kshitij A. Doshi
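    The registration flow can be pictured as a simple log of instance records; `register_instance` and the field names are hypothetical, not the patent's API:

    ```python
    # Hypothetical registration database for microservice accelerator instances,
    # logging each instance's service and its direct-connection capability.

    registry = []   # stand-in for the microservice registration database

    def register_instance(instance_id, microservice, can_connect_directly):
        entry = {
            "instance": instance_id,
            "microservice": microservice,                 # service the accelerator provides
            "direct_connect": can_connect_directly,       # can talk to peer instances directly
        }
        registry.append(entry)
        return entry

    register_instance("acc-0", "tls-termination", True)
    register_instance("acc-1", "compression", False)
    print([e["instance"] for e in registry if e["direct_connect"]])
    ```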
  • Publication number: 20210117535
    Abstract: Disclosed embodiments relate to encoded inline capabilities. In one example, a system includes a trusted execution environment (TEE) to partition an address space within a memory into a plurality of compartments each associated with code to execute a function, the TEE further to assign a message object in a heap to each compartment, receive a request from a first compartment to send a message block to a specified destination compartment, respond to the request by authenticating the request, generating a corresponding encoded capability, conveying the encoded capability to the destination compartment, and scheduling the destination compartment to respond to the request, and subsequently, respond to a check capability request from the destination compartment by checking the encoded capability and, when the check passes, providing a memory address to access the message block, and, otherwise, generating a fault, wherein each compartment is isolated from other compartments.
    Type: Application
    Filed: December 7, 2020
    Publication date: April 22, 2021
    Inventors: Michael LEMAY, David M. DURHAM, Michael E. KOUNAVIS, Barry E. HUNTLEY, Vedvyas SHANBHOGUE, Jason W. BRANDT, Josh TRIPLETT, Gilbert NEIGER, Karanvir GREWAL, Baiju PATEL, Ye ZHUANG, Jr-Shian TSAI, Vadim SUKHOMLINOV, Ravi SAHITA, Mingwei ZHANG, James C. FARWELL, Amitabh DAS, Krishna BHUYAN
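    A loose software analogy for the encoded-capability check, using an HMAC tag in place of the hardware encoding; the key, compartment names, and `check_capability` behavior are assumptions for illustration only:

    ```python
    # Illustrative compartment/capability flow: a trusted runtime hands the
    # destination compartment an encoded capability; the message address is only
    # released if the capability checks out, otherwise a fault is raised.

    import hmac, hashlib

    KEY = b"tee-secret"                     # hypothetical TEE-held key

    def encode_capability(src, dst, msg_addr):
        tag = hmac.new(KEY, f"{src}->{dst}@{msg_addr}".encode(), hashlib.sha256).hexdigest()
        return {"src": src, "dst": dst, "addr": msg_addr, "tag": tag}

    def check_capability(cap, requester):
        expected = hmac.new(KEY, f"{cap['src']}->{cap['dst']}@{cap['addr']}".encode(),
                            hashlib.sha256).hexdigest()
        if requester != cap["dst"] or not hmac.compare_digest(expected, cap["tag"]):
            raise PermissionError("capability check failed")    # fault
        return cap["addr"]                                      # address of the message block

    cap = encode_capability("compartment_A", "compartment_B", 0x5000)
    print(hex(check_capability(cap, "compartment_B")))
    ```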
  • Patent number: 10965597
    Abstract: Examples may include techniques to route packets to virtual network functions. A network function virtualization load balancer is provided which routes packets to both maximize a specified distribution and minimize switching of contexts between virtual network functions. Virtual network functions are arranged to be able to shift a context from one virtual network function to another. As such, the system can be managed, for example, scaled up or down, regardless of the statefulness of the virtual network functions and their local contexts or flows.
    Type: Grant
    Filed: July 1, 2017
    Date of Patent: March 30, 2021
    Assignee: INTEL CORPORATION
    Inventors: Vadim Sukhomlinov, Kshitij A. Doshi, Andrey Chilikin
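    A sketch of the routing trade-off: keep a flow on the VNF instance that already holds its context, otherwise pick an instance toward the target distribution. The weighting rule here is a simplification, not the patented algorithm:

    ```python
    # Hypothetical routing decision: prefer the VNF instance that already holds
    # the flow's context (avoiding a context shift) while respecting a target
    # weight distribution across instances.

    flow_context = {}        # flow id -> VNF instance currently holding its context
    load = {"vnf-a": 0, "vnf-b": 0}
    weights = {"vnf-a": 0.5, "vnf-b": 0.5}

    def route(flow_id):
        if flow_id in flow_context:                    # minimize context switching
            target = flow_context[flow_id]
        else:                                          # otherwise honor the distribution
            target = min(load, key=lambda v: load[v] - weights[v] * sum(load.values()))
            flow_context[flow_id] = target
        load[target] += 1
        return target

    print([route(f) for f in ["f1", "f2", "f1", "f3", "f1"]])
    ```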
  • Patent number: 10929504
    Abstract: Detailed herein are embodiments related to bit matrix multiplication in a processor. For example, in some embodiments a processor is described comprising: decode circuitry to decode an instruction having fields for an opcode, an identifier of a first source bit matrix, an identifier of a second source bit matrix, an identifier of a destination bit matrix, and an immediate; and execution circuitry to execute the decoded instruction to perform a multiplication of a matrix of S-bit elements of the identified first source bit matrix with S-bit elements of the identified second source bit matrix, wherein the multiplication and accumulation operations are selected by the operation selector, and to store a result of the matrix multiplication into the identified destination bit matrix, where S indicates a plural bit size.
    Type: Grant
    Filed: November 21, 2019
    Date of Patent: February 23, 2021
    Assignee: Intel Corporation
    Inventors: Dmitry Y. Babokin, Kshitij A. Doshi, Vadim Sukhomlinov
  • Patent number: 10929535
    Abstract: The present disclosure is directed to systems and methods for mitigating or eliminating the effectiveness of a side channel attack, such as a Meltdown or Spectre type attack by selectively introducing a variable, but controlled, quantity of uncertainty into the externally accessible system parameters visible and useful to the attacker. The systems and methods described herein provide perturbation circuitry that includes perturbation selector circuitry and perturbation block circuitry. The perturbation selector circuitry detects a potential attack by monitoring the performance/timing data generated by the processor. Upon detecting an attack, the perturbation selector circuitry determines a variable quantity of uncertainty to introduce to the externally accessible system data. The perturbation block circuitry adds the determined uncertainty into the externally accessible system data. The added uncertainty may be based on the frequency or interval of the event occurrences indicative of an attack.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: February 23, 2021
    Assignee: Intel Corporation
    Inventors: Vadim Sukhomlinov, Kshitij Doshi, Francesc Guim, Alex Nayshtut
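    The perturbation idea, reduced to software terms: once a suspicion threshold is crossed, a bounded random offset is added to timing readouts. The threshold, scaling, and `read_timestamp` wrapper are illustrative assumptions:

    ```python
    # Illustrative perturbation: when suspicious event frequency crosses a
    # threshold, add a bounded, variable amount of noise to externally visible
    # timing readouts.

    import random, time

    suspicious_events = 0
    THRESHOLD = 100                       # illustrative events-per-window threshold

    def read_timestamp():
        t = time.perf_counter_ns()
        if suspicious_events > THRESHOLD:
            jitter = random.randint(0, min(suspicious_events, 10_000))  # scale with frequency
            t += jitter                   # controlled uncertainty added to the readout
        return t

    suspicious_events = 500               # pretend the detector has tripped
    print(read_timestamp())
    ```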
  • Publication number: 20210036859
    Abstract: A method for authenticating a secure credential transfer to a device includes verifying user identity and device identity. In particular, the method includes verifying user identity by requesting and receiving a user identification input at a first client device and verifying device identity of a second client device by (i) determining a security status of the second client device from hardware of the second client device, (ii) invoking an identifier related to the security status of the second client device to an authentication server, and (iii) obtaining certification from the authentication server for the second client device based on the invoked identifier. After verifying the user identity and the device identity, the method includes establishing a secure channel between the first client device and the second client device for the secure credential transfer using one or more tokens generated by the authentication server.
    Type: Application
    Filed: July 30, 2019
    Publication date: February 4, 2021
    Inventors: Vadim Sukhomlinov, Alberto Martin, Andrey Pronin
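    A compressed sketch of the two verification steps followed by a token-protected transfer; the PIN check, `attest_device` logic, and field names are hypothetical stand-ins for the described user and device verification:

    ```python
    # Hypothetical end-to-end flow: verify the user on the first device, attest
    # the second device with an authentication server, then move the credential
    # over a token-protected channel. All names are illustrative.

    import secrets

    def verify_user(pin_entered, pin_expected):
        return secrets.compare_digest(pin_entered, pin_expected)

    def attest_device(device):
        # Stand-in for hardware-backed attestation plus server-issued certification.
        if device.get("security_status") != "verified_boot":
            raise ValueError("device attestation failed")
        return {"cert": f"cert-for-{device['id']}", "token": secrets.token_hex(16)}

    def transfer_credential(credential, pin_entered, pin_expected, second_device):
        if not verify_user(pin_entered, pin_expected):
            raise ValueError("user verification failed")
        session = attest_device(second_device)
        return {"to": second_device["id"], "token": session["token"], "payload": credential}

    print(transfer_credential("wifi-psk", "1234", "1234",
                              {"id": "phone-2", "security_status": "verified_boot"}))
    ```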
  • Publication number: 20210026651
    Abstract: Examples are described that relate to waking up or invoking a function such as a processor-executed application or a hardware device. The application or hardware device can specify which sources can cause wake-ups and which sources are not to cause wake-ups. A device or processor-executed software can monitor reads from or writes to a region of memory and cause the application or hardware device to wake up unless the wake-up is specified as inhibited. The updated region of memory can be precisely specified to allow pinpoint retrieval of updated content instead of scanning a memory range for changes. In some cases, a write to a region of memory can include various parameters that are to be used by the woken-up application or hardware device. Parameters can include a source of a wake-up, a timer to cap execution time, or any other information.
    Type: Application
    Filed: July 26, 2019
    Publication date: January 28, 2021
    Inventors: Alexander BACHMUTSKY, Kshitij A. DOSHI, Raghu KONDAPALLI, Vadim SUKHOMLINOV
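    The monitor-and-wake behavior, including the inhibit list and the parameters carried in the write, can be sketched as follows; `register`, `write`, and the region names are illustrative, not the described hardware interface:

    ```python
    # Illustrative monitor: writes to a watched region wake the registered
    # function unless the writer is on its inhibit list; the write carries the
    # wake-up parameters (source, time budget) for pinpoint retrieval.

    monitored = {}      # region name -> {"handler": fn, "inhibit": set of sources}

    def register(region, handler, inhibit=()):
        monitored[region] = {"handler": handler, "inhibit": set(inhibit)}

    def write(region, source, params):
        entry = monitored.get(region)
        if entry and source not in entry["inhibit"]:
            entry["handler"](params)                      # wake-up with the written parameters

    register("doorbell", lambda p: print("woken by", p["source"], "budget", p["budget_us"]),
             inhibit={"background_scrubber"})
    write("doorbell", "nic_rx", {"source": "nic_rx", "budget_us": 50})
    write("doorbell", "background_scrubber", {"source": "background_scrubber"})  # inhibited
    ```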
  • Patent number: 10860709
    Abstract: Disclosed embodiments relate to encoded inline capabilities. In one example, a system includes a trusted execution environment (TEE) to partition an address space within a memory into a plurality of compartments each associated with code to execute a function, the TEE further to assign a message object in a heap to each compartment, receive a request from a first compartment to send a message block to a specified destination compartment, respond to the request by authenticating the request, generating a corresponding encoded capability, conveying the encoded capability to the destination compartment, and scheduling the destination compartment to respond to the request, and subsequently, respond to a check capability request from the destination compartment by checking the encoded capability and, when the check passes, providing a memory address to access the message block, and, otherwise, generating a fault, wherein each compartment is isolated from other compartments.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: December 8, 2020
    Assignee: Intel Corporation
    Inventors: Michael Lemay, David M. Durham, Michael E. Kounavis, Barry E. Huntley, Vedvyas Shanbhogue, Jason W. Brandt, Josh Triplett, Gilbert Neiger, Karanvir Grewal, Baiju V. Patel, Ye Zhuang, Jr-Shian Tsai, Vadim Sukhomlinov, Ravi Sahita, Mingwei Zhang, James C. Farwell, Amitabh Das, Krishna Bhuyan
  • Patent number: 10860390
    Abstract: A computing apparatus, including: a hardware computing platform; and logic to operate on the hardware computing platform, configured to: receive a microservice instance registration for a microservice accelerator, wherein the registration includes a microservice that the microservice accelerator is configured to provide, and a microservice connection capability indicating an ability of the microservice instance to communicate directly with other instances of the same or a different microservice; and log the registration in a microservice registration database.
    Type: Grant
    Filed: June 28, 2017
    Date of Patent: December 8, 2020
    Assignee: Intel Corporation
    Inventors: Vadim Sukhomlinov, Kshitij A. Doshi
  • Patent number: 10831491
    Abstract: The present disclosure is directed to systems and methods for mitigating or eliminating the effectiveness of a side channel attack, such as a Spectre type attack, by limiting the ability of a user-level branch prediction inquiry to access system-level branch prediction data. The branch prediction data stored in the branch target buffer (BTB) may be apportioned into a plurality of BTB data portions. BTB control circuitry identifies the initiator of a received branch prediction inquiry. Based on the identity of the branch prediction inquiry initiator, the BTB control circuitry causes BTB look-up circuitry to selectively search one or more of the plurality of BTB data portions.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: November 10, 2020
    Assignee: Intel Corporation
    Inventors: Vadim Sukhomlinov, Kshitij Doshi
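    A toy partitioned BTB showing the selective search: user-initiated lookups only see the user portion. The two-way partitioning and initiator labels are simplifying assumptions, not the patented layout:

    ```python
    # Illustrative partitioned BTB: lookups from user-level initiators are only
    # allowed to search the user partition, so system-level branch prediction
    # data stays out of reach.

    btb = {
        "system": {0x1000: 0x2000},     # branch address -> predicted target
        "user":   {0x4000: 0x4100},
    }

    def btb_lookup(branch_addr, initiator):
        portions = ["user"] if initiator == "user" else ["system", "user"]
        for portion in portions:                       # selectively searched portions
            if branch_addr in btb[portion]:
                return btb[portion][branch_addr]
        return None                                    # no prediction

    print(btb_lookup(0x1000, "user"))     # None: a user inquiry cannot see system entries
    print(btb_lookup(0x1000, "system"))   # 0x2000
    ```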
  • Publication number: 20200320003
    Abstract: The present disclosure is directed to systems and methods that include cache operation storage circuitry that selectively enables/disables the Cache Line Flush (CLFLUSH) operation. The cache operation storage circuitry may also selectively replace the CLFLUSH operation with one or more replacement operations that provide similar functionality but beneficially and advantageously prevent an attacker from placing processor cache circuitry in a known state during a timing-based, side channel attack such as Spectre or Meltdown. The cache operation storage circuitry includes model specific registers (MSRs) that contain information used to determine whether to enable/disable CLFLUSH functionality. The cache operation storage circuitry may include model specific registers (MSRs) that contain information used to select appropriate replacement operations such as Cache Line Demote (CLDEMOTE) and/or Cache Line Write Back (CLWB) to selectively replace CLFLUSH operations.
    Type: Application
    Filed: June 22, 2020
    Publication date: October 8, 2020
    Applicant: Intel Corporation
    Inventors: Vadim Sukhomlinov, Kshitij Doshi
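    The MSR-driven substitution can be pictured as a small dispatch table; the register names and encodings here are invented for illustration and do not reflect actual MSR layouts:

    ```python
    # Illustrative MSR-driven dispatch: CLFLUSH is either executed as-is or
    # rewritten to CLDEMOTE/CLWB depending on configuration bits.

    MSR_CLFLUSH_ENABLE = 0                  # 0: disabled, 1: enabled (illustrative layout)
    MSR_CLFLUSH_REPLACEMENT = "CLDEMOTE"    # or "CLWB"

    def handle_clflush(line_addr):
        if MSR_CLFLUSH_ENABLE:
            return ("CLFLUSH", line_addr)                 # pass through
        return (MSR_CLFLUSH_REPLACEMENT, line_addr)       # substitute a safer operation

    print(handle_clflush(0x80))   # ('CLDEMOTE', 128): line demoted, not left in a known flushed state
    ```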
  • Patent number: 10771554
    Abstract: Disclosed embodiments relate to cloud scaling with non-blocking, non-spinning cross-domain event synchronization and data communication. In an example, a processor includes a memory to store multiple virtual hardware thread (VHTR) descriptors, each including an architectural state, a monitored address range, a priority, and an execution state, fetch circuitry to fetch instructions associated with a plurality of the multiple virtual network functions (VNFs), decode circuitry to decode the fetched instructions, scheduling circuitry to allocate and pin a VHTR to each of the plurality of VNFs, schedule execution of a VHTR on each of a plurality of cores, set the execution state of the scheduled VHTR; and in response to a monitor instruction received from a given VHTR, pause the given VHTR and switch in another VHTR to use the core previously used by the given VHTR, and, upon detecting a store to the monitored address range, trigger execution of the given VHTR.
    Type: Grant
    Filed: September 30, 2017
    Date of Patent: September 8, 2020
    Assignee: Intel Corporation
    Inventors: Vadim Sukhomlinov, Kshitij A. Doshi, Edwin Verplanke
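    A sketch of the monitor/pause/wake cycle for virtual hardware threads; the `vhtrs` table, state names, and address range are hypothetical:

    ```python
    # Illustrative scheduler: a virtual hardware thread that issues a monitor on
    # an address range is paused (no spinning) and another VHTR takes the core;
    # a store into the monitored range makes it runnable again.

    vhtrs = {"vhtr0": {"state": "running", "watch": None},
             "vhtr1": {"state": "ready",   "watch": None}}

    def monitor(vhtr, lo, hi):
        vhtrs[vhtr].update(state="paused", watch=(lo, hi))    # yield the core
        for other, d in vhtrs.items():
            if other != vhtr and d["state"] == "ready":
                d["state"] = "running"                        # switch in another VHTR
                break

    def store(addr):
        for d in vhtrs.values():
            if d["watch"] and d["watch"][0] <= addr < d["watch"][1]:
                d.update(state="ready", watch=None)           # wake the waiting VHTR

    monitor("vhtr0", 0x1000, 0x1040)
    store(0x1008)
    print(vhtrs)
    ```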
  • Patent number: 10762708
    Abstract: Embodiments herein relate to the display of enhanced stereographic imagery in augmented or virtual reality. In various embodiments, an apparatus to display enhanced stereographic imagery may include one or more processors, an image generation module to generate an enhanced stereoscopic image of a scene having a first two-dimensional (2D) image of the scene and a second 2D image of the same scene that is visually or optically different than the first 2D image to create binocular rivalry perception of the scene when the first and second 2D images are respectively presented to a first and a second eye of a user, and a display module to display the enhanced stereoscopic image to the user, with the first 2D image presented to the first eye of the user and the second 2D image presented to the second eye of the user. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: June 23, 2016
    Date of Patent: September 1, 2020
    Assignee: Intel Corporation
    Inventor: Vadim Sukhomlinov
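    A minimal example of producing two optically different views of the same scene (here by simply dimming one copy); real embodiments would apply richer visual differences, and this is only an illustration:

    ```python
    # Illustrative image pair: the right-eye view is derived from the left-eye
    # view but made optically different (dimmed) so the two eyes receive
    # rivalrous versions of the same scene.

    def dim(image, factor=0.5):
        return [[int(px * factor) for px in row] for row in image]

    left_eye = [[200, 180], [160, 140]]       # toy 2x2 grayscale "scene"
    right_eye = dim(left_eye)                 # visually different rendering of the same scene
    print(left_eye, right_eye)
    ```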
  • Publication number: 20200226009
    Abstract: Examples described herein relate to requesting execution of a workload by a next function, with data transport overhead tailored based on memory sharing capability with the next function. In some examples, data transport overhead is one or more of: sending a memory address pointer, sending a virtual memory address pointer, or sending data to the next function. In some examples, the memory sharing capability with the next function is based on one or more of: whether the next function shares an enclave with a sender function, the next function shares a physical memory domain with the sender function, or the next function shares a virtual memory domain with the sender function. In some examples, selection of the next function from among multiple instances of the next function is based on one or more of: sharing of memory domain, throughput performance, latency, cost, load balancing, or service level agreement (SLA) requirements.
    Type: Application
    Filed: March 31, 2020
    Publication date: July 16, 2020
    Inventors: Alexander BACHMUTSKY, Raghu KONDAPALLI, Francesc GUIM BERNAT, Vadim SUKHOMLINOV
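    The transport decision can be sketched as a preference for instances that share a memory domain, falling back to copying the data; `choose_transport` and the candidate fields are hypothetical:

    ```python
    # Illustrative transport selection: pass a pointer when the next function
    # shares a memory domain with the sender, otherwise copy the data.

    def choose_transport(sender, candidates, data_ref):
        def shares_memory(a, b):
            return a["enclave"] == b["enclave"] or a["phys_domain"] == b["phys_domain"]

        nxt = min(candidates, key=lambda c: (not shares_memory(sender, c), c["load"]))
        if shares_memory(sender, nxt):
            return nxt["name"], ("pointer", data_ref)          # low-overhead handoff
        return nxt["name"], ("copy", f"payload@{data_ref}")    # serialize and send the data

    sender = {"enclave": "e1", "phys_domain": "node0"}
    candidates = [{"name": "fB-1", "enclave": "e2", "phys_domain": "node1", "load": 1},
                  {"name": "fB-2", "enclave": "e1", "phys_domain": "node0", "load": 3}]
    print(choose_transport(sender, candidates, 0xdeadbeef))
    ```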