Patents by Inventor Dan Tsafrir

Dan Tsafrir has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240096014
    Abstract: A computer implemented method of creating data for a host vehicle simulation, comprising: in each of a plurality of iterations of a host vehicle simulation, using at least one processor for: obtaining from an environment simulation engine a semantic-data dataset representing a plurality of scene objects in a geographical area, each one of the plurality of scene objects comprising at least object location coordinates and a plurality of values of semantically described parameters; creating a 3D visual realistic scene emulating the geographical area according to the dataset; applying at least one noise pattern associated with at least one sensor of a vehicle simulated by the host vehicle simulation engine to the 3D visual realistic scene to create a sensory ranging data simulation of the geographical area; and converting the sensory ranging data simulation to an enhanced dataset emulating the geographical area, the enhanced dataset comprising a plurality of enhanced scene objects.
    Type: Application
    Filed: November 24, 2023
    Publication date: March 21, 2024
    Applicant: Cognata Ltd.
    Inventors: Dan ATSMON, Guy TSAFRIR, Eran ASA
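To make the pipeline in publication 20240096014 more concrete, here is a minimal, hypothetical Python sketch of one simulation iteration: a semantic-data dataset is turned into a 3D scene, a per-sensor noise pattern is applied to produce simulated ranging data, and the result is converted back into an enhanced dataset. All class and function names (SceneObject, render_scene, apply_lidar_noise) are illustrative assumptions, not the applicant's actual implementation.

```python
import random
from dataclasses import dataclass

@dataclass
class SceneObject:
    """One semantically described object: location plus semantic parameters."""
    x: float
    y: float
    z: float
    params: dict  # e.g. {"class": "car", "width_m": 1.8}

def render_scene(dataset):
    """Stand-in for the 3D visual realistic scene: here, just a point per object."""
    return [(o.x, o.y, o.z, o.params) for o in dataset]

def apply_lidar_noise(scene, sigma_m=0.05):
    """Apply a sensor-specific noise pattern to produce simulated ranging data."""
    return [(x + random.gauss(0, sigma_m),
             y + random.gauss(0, sigma_m),
             z + random.gauss(0, sigma_m),
             params) for (x, y, z, params) in scene]

def to_enhanced_dataset(ranging_data):
    """Convert the sensory ranging data back into enhanced scene objects."""
    return [SceneObject(x, y, z, dict(params, noisy=True))
            for (x, y, z, params) in ranging_data]

def simulation_iteration(dataset):
    scene = render_scene(dataset)
    ranging = apply_lidar_noise(scene)
    return to_enhanced_dataset(ranging)

if __name__ == "__main__":
    dataset = [SceneObject(10.0, 2.0, 0.0, {"class": "car"}),
               SceneObject(-3.5, 0.0, 0.0, {"class": "pedestrian"})]
    for obj in simulation_iteration(dataset):
        print(obj)
```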
  • Patent number: 11301397
    Abstract: A computing device, comprising at least one peripheral computing component, electrically connected to each of a plurality of hardware processors; wherein at least one of the plurality of hardware processors is adapted to execute code for: configuring the at least one peripheral computing component to access at least one first memory location in a first memory component electrically coupled with a first hardware processor of the plurality of hardware processors via a first electrical connection between the peripheral computing component and the first hardware processor; and configuring the at least one peripheral computing component to access at least one second memory location in a second memory component electrically coupled with a second hardware processor of the plurality of hardware processors via a second electrical connection between the peripheral computing component and the second hardware processor; and wherein the first hardware processor is not the second hardware processor.
    Type: Grant
    Filed: April 24, 2019
    Date of Patent: April 12, 2022
    Assignee: Technion Research & Development Foundation Limited
    Inventors: Dan Tsafrir, Igor Smolyar
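A rough way to picture patent 11301397: a single peripheral (for example a NIC or accelerator) is wired to every processor socket and is configured to reach a given buffer through the electrical connection of the processor whose local memory holds that buffer. The sketch below is a hypothetical Python model of that routing decision; names such as Peripheral and register_buffer are illustrative only.

```python
class Peripheral:
    """Toy model: one peripheral with a direct link to each hardware processor."""

    def __init__(self, num_processors):
        self.links = {cpu: f"link-to-cpu{cpu}" for cpu in range(num_processors)}
        self.buffers = {}  # buffer id -> (owning processor, address)

    def register_buffer(self, buf_id, owning_cpu, address):
        """Configure the peripheral to access a memory location that is
        electrically coupled with a specific hardware processor."""
        self.buffers[buf_id] = (owning_cpu, address)

    def access(self, buf_id):
        """Reach the buffer over the link of the processor that owns it,
        rather than bouncing through a single 'home' processor."""
        cpu, address = self.buffers[buf_id]
        return f"accessing 0x{address:x} via {self.links[cpu]}"

if __name__ == "__main__":
    dev = Peripheral(num_processors=2)
    dev.register_buffer("rx-ring", owning_cpu=0, address=0x1000)
    dev.register_buffer("tx-ring", owning_cpu=1, address=0x2000)
    print(dev.access("rx-ring"))   # goes over link-to-cpu0
    print(dev.access("tx-ring"))   # goes over link-to-cpu1
```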
  • Publication number: 20210248092
    Abstract: A computing device, comprising at least one peripheral computing component, electrically connected to each of a plurality of hardware processors; wherein at least one of the plurality of hardware processors is adapted to execute code for: configuring the at least one peripheral computing component to access at least one first memory location in a first memory component electrically coupled with a first hardware processor of the plurality of hardware processors via a first electrical connection between the peripheral computing component and the first hardware processor; and configuring the at least one peripheral computing component to access at least one second memory location in a second memory component electrically coupled with a second hardware processor of the plurality of hardware processors via a second electrical connection between the peripheral computing component and the second hardware processor; and wherein the first hardware processor is not the second hardware processor.
    Type: Application
    Filed: April 24, 2019
    Publication date: August 12, 2021
    Applicant: Technion Research & Development Foundation Limited
    Inventors: Dan TSAFRIR, Igor SMOLYAR
  • Patent number: 11068422
    Abstract: Described herein are embodiments that adaptively reduce the number of interrupts that occur between a device controller and a computer system. Device commands are submitted to the controller by an operating system on behalf of an application. The device performs the received commands and indicates command completions to the controller. A counter counts completions, and if the count exceeds a threshold number, the controller generates an interrupt to the computer system. If the count is greater than zero and the timeout interval has expired, then the controller generates an interrupt to the computer system. In some embodiments, the application attaches flags to one of the commands indicating that an interrupt relating to completion of the flagged command should be generated as soon as possible or that an interrupt relating to completion of all commands prior to and including the flagged command should be generated as soon as possible.
    Type: Grant
    Filed: February 28, 2020
    Date of Patent: July 20, 2021
    Assignee: VMware, Inc.
    Inventors: Amy Tai, Igor Smolyar, Dan Tsafrir, Michael Wei, Nadav Amit
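Patent 11068422 describes counter-plus-timeout interrupt coalescing with per-command "urgent" flags. The following Python sketch simulates that policy; it is an illustrative model under assumed names (CoalescingController, URGENT), not the device's actual firmware logic.

```python
import time

URGENT = 0x1  # assumed flag: interrupt as soon as this command completes

class CoalescingController:
    def __init__(self, threshold=8, timeout_s=0.001):
        self.threshold = threshold
        self.timeout_s = timeout_s
        self.count = 0
        self.first_completion = None
        self.interrupts = 0

    def _fire(self):
        self.interrupts += 1
        self.count = 0
        self.first_completion = None

    def on_completion(self, flags=0):
        """Called by the device for each completed command."""
        self.count += 1
        if self.first_completion is None:
            self.first_completion = time.monotonic()
        if flags & URGENT or self.count >= self.threshold:
            self._fire()

    def on_timer_tick(self):
        """Periodic check: flush pending completions once the timeout expires."""
        if self.count > 0 and time.monotonic() - self.first_completion >= self.timeout_s:
            self._fire()

if __name__ == "__main__":
    ctrl = CoalescingController(threshold=4)
    for _ in range(3):
        ctrl.on_completion()          # below threshold: no interrupt yet
    ctrl.on_completion(flags=URGENT)  # flagged: interrupt immediately
    print("interrupts raised:", ctrl.interrupts)  # 1
```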
  • Patent number: 10878085
    Abstract: In accordance with embodiments of the present disclosure, a compiler can compile source code to produce binary code that includes address shifting code inserted with memory operations. The address shifting code can shift addresses of memory operations that access locations in the kernel address space into address locations in the user space, thus avoiding speculative access into the kernel address space.
    Type: Grant
    Filed: June 8, 2018
    Date of Patent: December 29, 2020
    Assignee: VMware, Inc.
    Inventors: Michael Wei, Dan Tsafrir, Nadav Amit
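The core idea in patent 10878085 is that the compiler wraps each memory operation with shifting code, so that any address falling in the kernel half of the address space is remapped into user space before it can be dereferenced speculatively. A minimal Python sketch of that address transformation follows; the 48-bit split and the shift_address name are assumptions for illustration.

```python
# Assume a canonical 48-bit virtual address space split in half:
# user addresses below KERNEL_BASE, kernel addresses at or above it.
KERNEL_BASE = 1 << 47
USER_MASK = KERNEL_BASE - 1

def shift_address(addr):
    """What the inserted code would do before each memory access:
    fold kernel-space addresses back into the user half."""
    return addr & USER_MASK

def guarded_load(memory, addr):
    """A load whose effective address has been shifted by the compiler."""
    return memory.get(shift_address(addr), 0)

if __name__ == "__main__":
    memory = {0x1000: 42}                      # a user-space location
    kernel_alias = KERNEL_BASE | 0x1000        # same low bits, kernel half
    print(guarded_load(memory, 0x1000))        # 42: normal user access
    print(guarded_load(memory, kernel_alias))  # 42: shifted, never touches kernel space
```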
  • Patent number: 10824717
    Abstract: In accordance with embodiments of the present disclosure, a binary translator can perform address shifting on the binary code of an executing application. Address shifting serves to shift the addresses of memory operations that can access locations in the kernel address space into address locations in the user space, thus avoiding speculative access into the kernel address space.
    Type: Grant
    Filed: June 8, 2018
    Date of Patent: November 3, 2020
    Assignee: VMware, Inc.
    Inventors: Michael Wei, Dan Tsafrir, Nadav Amit
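Patent 10824717 applies the same address-shifting idea at run time: a binary translator rewrites the executing application's code so that each memory access goes through a shift. The toy Python translator below rewrites a made-up instruction list, inserting a masking step before every load; the instruction format and names are purely illustrative.

```python
KERNEL_BASE = 1 << 47
USER_MASK = KERNEL_BASE - 1

def translate(block):
    """Rewrite a basic block: insert an address-shifting op before each LOAD.
    Instructions are (opcode, operand) tuples in this toy encoding."""
    out = []
    for op, operand in block:
        if op == "LOAD":
            out.append(("AND_ADDR", USER_MASK))  # shift kernel addresses into user space
        out.append((op, operand))
    return out

def run(block, memory, addr_reg):
    """Tiny interpreter for the toy instruction set."""
    value = None
    for op, operand in block:
        if op == "AND_ADDR":
            addr_reg &= operand
        elif op == "LOAD":
            value = memory.get(addr_reg, 0)
    return value

if __name__ == "__main__":
    original = [("LOAD", None)]
    translated = translate(original)
    memory = {0x2000: 7}
    print(run(translated, memory, addr_reg=(1 << 47) | 0x2000))  # 7, via the shifted address
```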
  • Patent number: 10713353
    Abstract: The present disclosure addresses the meltdown vulnerability resulting from speculative execution in a multi-core processing system. The operating system (OS) can be loaded for execution on one of several processing cores (OS core), while an application can be loaded for execution on another of the processing cores (application core). The OS core uses process page tables that map the entire kernel address space to physical memory. Conversely, the application core uses page tables that map only a portion of the kernel address space to physical memory.
    Type: Grant
    Filed: June 22, 2018
    Date of Patent: July 14, 2020
    Assignee: VMware, Inc.
    Inventors: Michael Wei, Dan Tsafrir, Nadav Amit
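Patent 10713353 splits the mitigation across cores: the core running the OS keeps the full kernel mapping, while cores running applications map only the small kernel portion needed to enter the OS. The hypothetical Python sketch below models the two kinds of page tables as dictionaries; names like os_core_tables and ENTRY_STUB are assumptions.

```python
KERNEL_PAGES = {"entry_stub", "syscall_tables", "task_structs", "page_cache"}

# The OS core maps the entire kernel address space.
os_core_tables = {page: f"phys-{i}" for i, page in enumerate(sorted(KERNEL_PAGES))}

# An application core maps only the minimal portion needed to transition into the OS.
ENTRY_STUB = {"entry_stub"}
app_core_tables = {page: os_core_tables[page] for page in ENTRY_STUB}

def translate(tables, page):
    """Page-table walk in miniature: unmapped pages cannot be reached,
    not even by a speculative access."""
    return tables.get(page)

if __name__ == "__main__":
    print(translate(os_core_tables, "task_structs"))   # mapped on the OS core
    print(translate(app_core_tables, "task_structs"))  # None: invisible on the app core
    print(translate(app_core_tables, "entry_stub"))    # still reachable to enter the OS
```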
  • Patent number: 10599835
    Abstract: Embodiments are disclosed to mitigate the meltdown vulnerability by selectively using page table isolation. Page table isolation is enabled for 64-bit applications, so that unprivileged areas in the kernel address space cannot be accessed in user mode due to speculative execution by the processor. On the other hand, page table isolation is disabled for 32-bit applications thereby providing mapping into unprivileged areas in the kernel address space. However, speculative execution is limited to a 32-bit address space in a 32-bit application, and so access to unprivileged areas in the kernel address space can be inhibited.
    Type: Grant
    Filed: April 23, 2018
    Date of Patent: March 24, 2020
    Assignee: VMware, Inc.
    Inventors: Nadav Amit, Dan Tsafrir, Michael Wei
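Patent 10599835 turns page table isolation (PTI) on only where it matters: 64-bit processes get isolated page tables, while 32-bit processes keep the shared mapping because their speculative reach is confined to a 32-bit address space. A hedged Python sketch of that per-process decision follows; needs_pti and the Process fields are illustrative names.

```python
from dataclasses import dataclass

@dataclass
class Process:
    name: str
    is_64bit: bool

def needs_pti(proc):
    """Enable page table isolation only for 64-bit processes.
    A 32-bit process can only form 32-bit addresses, so its speculative loads
    cannot reach the kernel's unprivileged mappings anyway."""
    return proc.is_64bit

def build_user_page_tables(proc):
    mapping = {"user_space": True}
    # Without PTI the kernel stays mapped (though protected) while in user mode;
    # with PTI only a minimal entry stub remains mapped.
    mapping["kernel_space"] = not needs_pti(proc)
    mapping["kernel_entry_stub"] = True
    return mapping

if __name__ == "__main__":
    for proc in (Process("legacy32", is_64bit=False), Process("modern64", is_64bit=True)):
        print(proc.name, build_user_page_tables(proc))
```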
  • Patent number: 10379751
    Abstract: A method for reducing disk read rate by managing dataset mapping of virtual machine (VM) guest memory, comprising: monitoring a plurality of disk read/write operations of a VM guest; updating a dataset mapping between disk blocks allocated to the VM guest and corresponding physical addresses of memory pages of the VM guest containing replicas of data stored in the disk blocks, based on the plurality of disk read/write operations; when identifying writing to one of the memory pages, removing a mapping of corresponding disk block and corresponding physical address of memory page; when reclaiming a mapped memory page of the VM guest by a host of the VM guest, discarding data contained in the memory page; and when the data is requested by the VM guest after it was reclaimed by said host, retrieving the data from corresponding disk block according to the mapping.
    Type: Grant
    Filed: October 3, 2017
    Date of Patent: August 13, 2019
    Assignee: Technion Research & Development Foundation Limited
    Inventors: Assaf Schuster, Nadav Amit, Dan Tsafrir
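Patent 10379751 lets the host reclaim a guest page without writing it to swap whenever a clean copy already exists on the guest's own virtual disk. The Python sketch below keeps a block-to-page mapping, drops it on guest writes, and refetches from disk after reclamation; all names (DatasetMapping, reclaim, guest_access) are illustrative assumptions.

```python
class DatasetMapping:
    """Toy model of the disk-block <-> guest-page mapping."""

    def __init__(self, disk):
        self.disk = disk          # block -> data (the guest's virtual disk)
        self.pages = {}           # guest page -> data (guest memory)
        self.block_of_page = {}   # guest page -> disk block it replicates

    def on_disk_read(self, block, page):
        """Guest read a block into a page: record the replica mapping."""
        self.pages[page] = self.disk[block]
        self.block_of_page[page] = block

    def on_guest_write(self, page, data):
        """Page no longer replicates the block: drop the mapping."""
        self.pages[page] = data
        self.block_of_page.pop(page, None)

    def reclaim(self, page):
        """Host reclaims a mapped page: just discard it, no swap-out needed."""
        if page in self.block_of_page:
            del self.pages[page]

    def guest_access(self, page):
        """If the page was reclaimed, refetch it from the mapped disk block."""
        if page not in self.pages and page in self.block_of_page:
            self.pages[page] = self.disk[self.block_of_page[page]]
        return self.pages[page]

if __name__ == "__main__":
    m = DatasetMapping(disk={7: "block-7 contents"})
    m.on_disk_read(block=7, page=0x42)
    m.reclaim(0x42)                    # discarded, not swapped
    print(m.guest_access(0x42))        # transparently re-read from block 7
```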
  • Publication number: 20190243965
    Abstract: In accordance with embodiments of the present disclosure, a compiler can compile source code to produce binary code that includes address shifting code inserted with memory operations. The address shifting code can shift addresses of memory operations that access locations in the kernel address space into address locations in the user space, thus avoiding speculative access into the kernel address space.
    Type: Application
    Filed: June 8, 2018
    Publication date: August 8, 2019
    Inventors: Michael Wei, Dan Tsafrir, Nadav Amit
  • Publication number: 20190243776
    Abstract: Embodiments are disclosed to mitigate the meltdown vulnerability by selectively using page table isolation. Page table isolation is enabled for 64-bit applications, so that unprivileged areas in the kernel address space cannot be accessed in user mode due to speculative execution by the processor. On the other hand, page table isolation is disabled for 32-bit applications thereby providing mapping into unprivileged areas in the kernel address space. However, speculative execution is limited to a 32-bit address space in a 32-bit application, and so access to unprivileged areas in the kernel address space can be inhibited.
    Type: Application
    Filed: April 23, 2018
    Publication date: August 8, 2019
    Inventors: Nadav Amit, Dan Tsafrir, Michael Wei
  • Publication number: 20190243966
    Abstract: In accordance with embodiments of the present disclosure, a binary translator can perform address shifting on the binary code of an executing application. Address shifting serves to shift the addresses of memory operations that can access locations in the kernel address space into address locations in the user space, thus avoiding speculative access into the kernel address space.
    Type: Application
    Filed: June 8, 2018
    Publication date: August 8, 2019
    Inventors: Michael Wei, Dan Tsafrir, Nadav Amit
  • Publication number: 20190243990
    Abstract: The present disclosure addresses the meltdown vulnerability resulting from speculative execution in a multi-core processing system. The operating system (OS) can be loaded for execution on one of several processing cores (OS core), while an application can be loaded for execution on another of the processing cores (application core). The OS core uses process page tables that map the entire kernel address space to physical memory. Conversely, the application core uses page tables that map only a portion of the kernel address space to physical memory.
    Type: Application
    Filed: June 22, 2018
    Publication date: August 8, 2019
    Inventors: Michael Wei, Dan Tsafrir, Nadav Amit
  • Publication number: 20180046386
    Abstract: A method for reducing disk read rate by managing dataset mapping of virtual machine (VM) guest memory, comprising: monitoring a plurality of disk read/write operations of a VM guest; updating a dataset mapping between disk blocks allocated to the VM guest and corresponding physical addresses of memory pages of the VM guest containing replicas of data stored in the disk blocks, based on the plurality of disk read/write operations; when identifying writing to one of the memory pages, removing a mapping of corresponding disk block and corresponding physical address of memory page; when reclaiming a mapped memory page of the VM guest by a host of the VM guest, discarding data contained in the memory page; and when the data is requested by the VM guest after it was reclaimed by said host, retrieving the data from corresponding disk block according to the mapping.
    Type: Application
    Filed: October 3, 2017
    Publication date: February 15, 2018
    Applicant: Technion Research & Development Foundation Limited
    Inventors: Assaf SCHUSTER, Nadav AMIT, Dan TSAFRIR
  • Patent number: 9811268
    Abstract: A method for reducing disk read rate by managing dataset mapping of virtual machine (VM) guest memory, comprising: monitoring a plurality of disk read/write operations of a VM guest; updating a dataset mapping between disk blocks allocated to the VM guest and corresponding physical addresses of memory pages of the VM guest containing replicas of data stored in the disk blocks, based on the plurality of disk read/write operations; when identifying writing to one of the memory pages, removing a mapping of corresponding disk block and corresponding physical address of memory page; when reclaiming a mapped memory page of the VM guest by a host of the VM guest, discarding data contained in the memory page; and when the data is requested by the VM guest after it was reclaimed by said host, retrieving the data from corresponding disk block according to the mapping.
    Type: Grant
    Filed: February 16, 2015
    Date of Patent: November 7, 2017
    Assignee: Technion Research & Development Foundation Limited
    Inventors: Assaf Schuster, Nadav Amit, Dan Tsafrir
  • Publication number: 20170060437
    Abstract: A method for reducing disk read rate by managing dataset mapping of virtual machine (VM) guest memory, comprising: monitoring a plurality of disk read/write operations of a VM guest; updating a dataset mapping between disk blocks allocated to the VM guest and corresponding physical addresses of memory pages of the VM guest containing replicas of data stored in the disk blocks, based on the plurality of disk read/write operations; when identifying writing to one of the memory pages, removing a mapping of corresponding disk block and corresponding physical address of memory page; when reclaiming a mapped memory page of the VM guest by a host of the VM guest, discarding data contained in the memory page; and when the data is requested by the VM guest after it was reclaimed by said host, retrieving the data from corresponding disk block according to the mapping.
    Type: Application
    Filed: February 16, 2015
    Publication date: March 2, 2017
    Inventors: Assaf SCHUSTER, Nadav AMIT, Dan TSAFRIR
  • Patent number: 9535802
    Abstract: A method of data replica recovery that is based on separate storage drives connected to a network where each storage drive has a storage space divided into contiguous storage segments and is electronically connected to a memory support component via a connection. Pairs of replicas, each of one of a plurality of data units, are stored in a manner that allows, in response to detection of a storage failure in one storage drive, to create replacement replicas in the memory support components of the other storage drives to assure that two replicas of each data unit can be found in the storage system.
    Type: Grant
    Filed: January 30, 2014
    Date of Patent: January 3, 2017
    Assignee: Technion Research & Development Foundation Limited
    Inventors: Dan Tsafrir, Eitan Rosenfeld
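Patent 9535802 keeps two replicas of every data unit across networked drives and, when a drive fails, rebuilds the lost replicas into the memory support components of the surviving drives so that redundancy is restored quickly. The Python sketch below is a simplified, hypothetical model of that recovery step; class and field names are assumptions.

```python
from collections import defaultdict

class Drive:
    def __init__(self, name):
        self.name = name
        self.segments = {}        # persistent storage segments: unit -> data
        self.memory_support = {}  # memory component used for replacement replicas

class ReplicatedStore:
    def __init__(self, drives):
        self.drives = drives
        self.replicas = defaultdict(list)  # data unit -> drives holding a replica

    def put(self, unit, data):
        """Store two replicas of each data unit on two different drives."""
        a, b = sorted(self.drives, key=lambda d: len(d.segments))[:2]
        for d in (a, b):
            d.segments[unit] = data
            self.replicas[unit].append(d)

    def on_drive_failure(self, failed):
        """Recreate lost replicas in the memory support components of other drives."""
        self.drives.remove(failed)
        for unit, holders in self.replicas.items():
            if failed in holders:
                holders.remove(failed)
                survivor = holders[0]
                target = next(d for d in self.drives if d is not survivor)
                target.memory_support[unit] = survivor.segments[unit]
                holders.append(target)

if __name__ == "__main__":
    drives = [Drive("d0"), Drive("d1"), Drive("d2")]
    store = ReplicatedStore(drives)
    store.put("unit-A", b"payload")
    store.on_drive_failure(drives[0])
    print([d.name for d in store.replicas["unit-A"]])  # still two replicas
```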
  • Publication number: 20150370656
    Abstract: A method of data replica recovery that is based on separate storage drives connected to a network where each storage drive has a storage space divided into contiguous storage segments and is electronically connected to a memory support component via a connection. Pairs of replicas, each of one of a plurality of data units, are stored in a manner that allows, in response to detection of a storage failure in one storage drive, to create replacement replicas in the memory support components of the other storage drives to assure that two replicas of each data unit can be found in the storage system.
    Type: Application
    Filed: January 30, 2014
    Publication date: December 24, 2015
    Inventors: Dan TSAFRIR, Eitan ROSENFELD
  • Patent number: 9195550
    Abstract: A method for checking program correctness may include executing a program on a main hardware thread in speculative execution mode on a hardware execution context on a chip having a plurality of hardware execution contexts. In this mode, the main hardware thread's state is not committed to main memory. Correctness checks by a plurality of helper threads are executed in parallel to the main hardware thread. Each helper thread runs on a separate hardware execution context on the chip in parallel with the main hardware thread. The correctness checks determine a safe point in the program up to which the operations executed by the main hardware thread are correct. Once the main hardware thread reaches the safe point, the mode of execution of the main hardware thread is switched to non-speculative. The runtime then causes the main thread to re-enter speculative mode of execution.
    Type: Grant
    Filed: February 3, 2011
    Date of Patent: November 24, 2015
    Assignee: International Business Machines Corporation
    Inventors: Dan Tsafrir, Robert W. Wisniewski
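Patent 9195550 runs the main program speculatively while helper threads check correctness in parallel; the checks establish a safe point up to which execution is known correct, the main thread commits there, and then speculation resumes. The Python sketch below simulates that handshake with a thread pool; it is a schematic model under assumed names, not the patented hardware-thread mechanism.

```python
from concurrent.futures import ThreadPoolExecutor

def speculative_run(program, start, window):
    """Execute a window of operations without committing their results."""
    return [(i, op()) for i, op in enumerate(program[start:start + window], start)]

def correctness_check(op_index, result):
    """Helper-thread check; here, simply 'result must be non-negative'."""
    return result >= 0

def run_with_helpers(program, window=4):
    committed = []
    pc = 0
    with ThreadPoolExecutor(max_workers=4) as helpers:
        while pc < len(program):
            pending = speculative_run(program, pc, window)
            checks = list(helpers.map(lambda p: correctness_check(*p), pending))
            # Safe point: the last operation before the first failed check.
            safe = len(checks) if all(checks) else checks.index(False)
            committed.extend(pending[:safe])  # switch to non-speculative, commit
            pc += max(safe, 1)                # skip the faulty op in this toy, then re-enter speculation
    return committed

if __name__ == "__main__":
    program = [lambda: 1, lambda: 2, lambda: -3, lambda: 4, lambda: 5]
    print(run_with_helpers(program))
```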
  • Patent number: 8261283
    Abstract: A system and method for performing backfilling on an incoming flow of incoming parallel programs provided by a population of clients to a facility, the system comprising a computer-controlled prediction functionality operative to generate a current run-time prediction for at least some of the incoming parallel programs, a backfiller performing backfilling on the incoming flow of incoming parallel programs based at least partly on at least some of the run-time predictions; and a prediction updater operative, if the current run-time of an incoming parallel program currently in process exceeds the current run-time prediction, to generate an updated run-time prediction to exceed the current run-time, to re-define the current run-time to equal the updated run-time prediction, thereby to allow the backfiller to continue performing backfilling based on the updated run-time prediction.
    Type: Grant
    Filed: February 15, 2006
    Date of Patent: September 4, 2012
    Assignee: Yissum Research Development Company of the Hebrew University of Jerusalem
    Inventors: Dan Tsafrir, Yoav Etsion, Dror Feitelson, David Talby
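Patent 8261283 backfills a parallel-job queue using predicted runtimes, and when a running job outlives its prediction, the prediction is bumped so backfilling can continue instead of stalling. The Python sketch below shows just that update rule on a toy EASY-style scheduler; the job fields, the bump factor, and the function names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    predicted_runtime: float   # current run-time prediction used for backfilling
    elapsed: float = 0.0

def update_prediction(job, bump_factor=1.5):
    """If the job has run past its prediction, extend the prediction so the
    backfiller keeps working with a runtime that is at least the elapsed time."""
    if job.elapsed > job.predicted_runtime:
        job.predicted_runtime = job.elapsed * bump_factor
    return job.predicted_runtime

def shadow_time(running_jobs):
    """Time until the first processors free up, per current (possibly updated) predictions."""
    return min(update_prediction(j) - j.elapsed for j in running_jobs)

def can_backfill(candidate_runtime, running_jobs):
    """EASY-style check: a waiting job may be backfilled if it is predicted to
    finish before the head-of-queue job's reservation (the shadow time here)."""
    return candidate_runtime <= shadow_time(running_jobs)

if __name__ == "__main__":
    running = [Job("sim", predicted_runtime=60.0, elapsed=90.0)]  # overran its prediction
    print(can_backfill(30.0, running))   # prediction bumped to 135.0, shadow time 45.0 -> True
    print(can_backfill(80.0, running))   # 80 > 45 -> False
```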