Patents by Inventor Tamir Anavi

Tamir Anavi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11373354
    Abstract: A system and method for creating 3D graphics representations from video. The method includes: generating a skeletal model for each of at least one non-rigid object shown in a video feed, wherein the video feed illustrates a sports event in which at least one of the non-rigid objects is moving; determining at least one 3D rigged model for the at least one skeletal model; and rendering the at least one skeletal model as a 3D representation of the sports event, wherein rendering the 3D skeletal model further comprises wrapping each of at least one 3D skeletal model with one of the at least one 3D rigged model, each 3D skeletal model corresponding to one of the at least one skeletal model, wherein each 3D rigged model is moved according to the movement of the respective skeletal model when the 3D skeletal model is wrapped with the 3D rigged model.
    Type: Grant
    Filed: February 25, 2020
    Date of Patent: June 28, 2022
    Assignee: Track160, Ltd.
    Inventors: Michael Tamir, Michael Birnboim, Yaacov Chernoi, Antonio Dello Iacono, Tamir Anavi, Michael Priven, Alexander Yudashkin
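The "wrapping" step described in the abstract above, in which a 3D rigged model is moved according to its corresponding skeletal model, can be sketched roughly as follows. This is a minimal illustration under invented data shapes, not the patented implementation; the names `RiggedModel`, `bone_offsets`, and `pose` are all assumptions.

```python
# Hypothetical sketch: a rigged model bound to named bones follows the
# per-frame joint positions of a skeletal model estimated from video.
from dataclasses import dataclass


@dataclass
class RiggedModel:
    """A 3D mesh whose parts are bound (skinned) to named bones."""
    bone_offsets: dict  # bone name -> (dx, dy, dz) mesh offset from the joint

    def pose(self, skeleton: dict) -> dict:
        """Move each bound mesh part to follow the matching skeletal joint,
        so the rigged model inherits the skeleton's motion."""
        posed = {}
        for bone, (dx, dy, dz) in self.bone_offsets.items():
            jx, jy, jz = skeleton[bone]  # joint position from video tracking
            posed[bone] = (jx + dx, jy + dy, jz + dz)
        return posed


# One video frame's skeletal model: joint name -> estimated 3D position.
frame_skeleton = {"hip": (0.0, 1.0, 0.0), "knee": (0.0, 0.5, 0.1)}
model = RiggedModel(bone_offsets={"hip": (0.0, 0.05, 0.0), "knee": (0.0, 0.02, 0.0)})
posed = model.pose(frame_skeleton)
```

Re-running `pose` with each new frame's skeleton is what makes the rigged model move with the tracked player.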
  • Patent number: 11348255
    Abstract: A method and system for tracking movements of objects in a sports activity are provided. The method includes matching video captured by at least one camera with sensory data captured by each of a plurality of tags, wherein each of the at least one camera is deployed in proximity to a monitored area, wherein each of the plurality of tags is disposed on an object of a plurality of monitored objects moving within the monitored area; and determining, based on the video and sensory data, at least one performance profile for each of the monitored objects, wherein each performance profile is determined based on positions of the respective monitored object moving within the monitored area.
    Type: Grant
    Filed: June 4, 2018
    Date of Patent: May 31, 2022
    Assignee: Track160, Ltd.
    Inventors: Michael Tamir, Michael Birnboim, Antonio Dello Iacono, Yaacov Chernoi, Tamir Anavi, Michael Priven, Alexander Yudashkin
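The matching of camera video with tag sensor data and the position-based performance profile in the abstract above can be sketched as below. The timestamp-window matching and the distance metric are illustrative assumptions, not the patent's method.

```python
# Hypothetical sketch: fuse camera-derived positions with per-tag sensor
# samples by timestamp, then derive a simple performance profile
# (distance covered) from the matched positions.
from math import hypot


def match_and_profile(camera_track, tag_samples, max_dt=0.05):
    """camera_track / tag_samples: lists of (timestamp, x, y).
    Keep camera positions that have a tag sample within max_dt seconds,
    then total the distance moved across the matched positions."""
    tag_times = [t for t, _, _ in tag_samples]
    matched = [(x, y) for t, x, y in camera_track
               if any(abs(t - tt) <= max_dt for tt in tag_times)]
    distance = sum(hypot(x2 - x1, y2 - y1)
                   for (x1, y1), (x2, y2) in zip(matched, matched[1:]))
    return {"positions": matched, "distance_m": distance}


camera_track = [(0.00, 0.0, 0.0), (1.00, 3.0, 4.0)]
tag_samples = [(0.01, 0.0, 0.0), (1.02, 3.0, 4.0)]
profile = match_and_profile(camera_track, tag_samples)
```

One such profile would be computed per monitored object (player or ball), keyed by its tag.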
  • Publication number: 20210269045
Abstract: Systems and methods are disclosed for contextual driver monitoring. In one implementation, one or more first inputs are received and processed to determine a state of a driver present within a vehicle. One or more second inputs are received and processed to determine navigation condition(s) associated with the vehicle, the navigation condition(s) including a temporal road condition received from a cloud resource or a behavior of the driver. Based on the navigation condition(s), a driver attentiveness threshold is computed. One or more actions are initiated in correlation with the state of the driver and the driver attentiveness threshold.
    Type: Application
    Filed: June 26, 2019
    Publication date: September 2, 2021
    Inventors: Itay KATZ, Tamir ANAVI, Erez Shalom STEINBERG
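The decision flow in the abstract above — compute an attentiveness threshold from navigation conditions, then act on the driver's state relative to it — can be sketched as follows. The weights, score ranges, and action names are invented for illustration.

```python
# Hypothetical sketch: a driver-attentiveness threshold derived from
# navigation conditions, compared against the measured driver state.

def attentiveness_threshold(temporal_road_condition: float,
                            driver_behavior_risk: float) -> float:
    """Require more attentiveness when the road condition reported by the
    cloud resource, or the driver's recent behavior, is riskier.
    All inputs and the output are scores in [0, 1]."""
    base = 0.4
    threshold = base + 0.3 * temporal_road_condition + 0.3 * driver_behavior_risk
    return min(threshold, 1.0)


def choose_action(driver_attentiveness: float, threshold: float) -> str:
    """Initiate an action when the driver's state falls below the
    contextually computed threshold."""
    return "alert_driver" if driver_attentiveness < threshold else "no_action"


t = attentiveness_threshold(temporal_road_condition=0.8, driver_behavior_risk=0.5)
action = choose_action(driver_attentiveness=0.6, threshold=t)
```

The point of the contextual threshold is that the same driver state can be acceptable on an empty highway but trigger an alert in bad weather or dense traffic.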
  • Publication number: 20200207358
    Abstract: Systems and methods are disclosed for contextual driver monitoring. In one implementation, one or more first inputs are received. The one or more first inputs are processed to identify a first object in relation to a vehicle. One or more second inputs are received. The one or more second inputs are processed to determine, based on one or more previously determined states of attentiveness associated with the driver of the vehicle in relation to one or more objects associated with the first object, a state of attentiveness of a driver of the vehicle with respect to the first object. One or more actions are initiated based on the state of attentiveness of a driver.
    Type: Application
    Filed: September 9, 2019
    Publication date: July 2, 2020
    Inventors: Itay Katz, Tamir Anavi, Erez Steinberg
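The per-object variant in the abstract above infers attentiveness toward a newly detected object from previously determined attentiveness states toward related objects. A toy sketch, with category-based grouping as a stand-in assumption for whatever object association the system actually uses:

```python
# Hypothetical sketch: estimate attentiveness to a new object from past
# attentiveness scores toward associated (same-category) objects.

def estimate_attentiveness(new_object: str, history: dict) -> float:
    """history maps object category -> list of past attentiveness scores.
    A new object's expected attentiveness is the mean of past scores for
    its category; unseen categories default to a neutral 0.5."""
    scores = history.get(new_object, [])
    return sum(scores) / len(scores) if scores else 0.5


history = {"pedestrian": [0.9, 0.7], "road_sign": [0.2]}
sign_score = estimate_attentiveness("road_sign", history)
```

A low estimated score for a class of objects (here, road signs) is what would trigger an action for the next object of that class.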
  • Publication number: 20200193671
    Abstract: A system and method for creating 3D graphics representations from video. The method includes: generating a skeletal model for each of at least one non-rigid object shown in a video feed, wherein the video feed illustrates a sports event in which at least one of the non-rigid objects is moving; determining at least one 3D rigged model for the at least one skeletal model; and rendering the at least one skeletal model as a 3D representation of the sports event, wherein rendering the 3D skeletal model further comprises wrapping each of at least one 3D skeletal model with one of the at least one 3D rigged model, each 3D skeletal model corresponding to one of the at least one skeletal model, wherein each 3D rigged model is moved according to the movement of the respective skeletal model when the 3D skeletal model is wrapped with the 3D rigged model.
    Type: Application
    Filed: February 25, 2020
    Publication date: June 18, 2020
    Applicant: Track160, Ltd.
    Inventors: Michael TAMIR, Michael BIRNBOIM, Yaacov CHERNOI, Antonio Dello IACONO, Tamir ANAVI, Michael PRIVEN, Alexander YUDASHKIN
  • Publication number: 20190318181
Abstract: Systems and methods are disclosed for driver monitoring. In one implementation, one or more images are received, e.g., from an image sensor. Such image(s) can reflect at least a portion of a face of a driver. Using the images, a direction of a gaze of the driver is determined. A set of determined driver gaze directions is identified using at least one predefined direction. One or more features of one or more eyes of the driver are extracted using information associated with the identified set.
    Type: Application
    Filed: June 30, 2017
    Publication date: October 17, 2019
    Inventors: Itay KATZ, Yonatan SAMET, Tamir ANAVI
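The frame-selection step in the abstract above — identifying the set of gaze directions that match a predefined direction before extracting eye features — can be sketched as below. The angular threshold and the reference direction ("road ahead") are illustrative assumptions.

```python
# Hypothetical sketch: keep only the frames whose determined gaze
# direction lies within a small cone around a predefined direction;
# eye-feature extraction would then use only this identified set.
from math import acos, degrees


def within_direction(gaze, reference, max_angle_deg=10.0):
    """True if the unit gaze vector is within max_angle_deg of the
    predefined unit reference vector."""
    dot = sum(g * r for g, r in zip(gaze, reference))
    return degrees(acos(max(-1.0, min(1.0, dot)))) <= max_angle_deg


def select_frames(gazes, reference=(0.0, 0.0, 1.0)):
    """Identify the set of frame indices whose gaze matches the
    predefined direction."""
    return [i for i, g in enumerate(gazes) if within_direction(g, reference)]


gazes = [(0.0, 0.0, 1.0),        # straight ahead
         (0.7071, 0.0, 0.7071)]  # roughly 45 degrees off to the side
selected = select_frames(gazes)
```

Restricting feature extraction to frames with a known gaze direction gives the extractor eye images under consistent, predictable viewing geometry.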
  • Publication number: 20180350084
    Abstract: A method and system for tracking movements of objects in a sports activity are provided. The method includes matching video captured by at least one camera with sensory data captured by each of a plurality of tags, wherein each of the at least one camera is deployed in proximity to a monitored area, wherein each of the plurality of tags is disposed on an object of a plurality of monitored objects moving within the monitored area; and determining, based on the video and sensory data, at least one performance profile for each of the monitored objects, wherein each performance profile is determined based on positions of the respective monitored object moving within the monitored area.
    Type: Application
    Filed: June 4, 2018
    Publication date: December 6, 2018
    Applicant: Track160, Ltd.
    Inventors: Michael TAMIR, Michael BIRNBOIM, Antonio Dello IACONO, Yaacov CHERNOI, Tamir ANAVI, Michael PRIVEN, Alexander YUDASHKIN
  • Patent number: 10126826
Abstract: A user interface apparatus for controlling any kind of device. Images obtained by an image sensor in a region adjacent to the device are input to a gesture recognition system which analyzes images obtained by the image sensor to identify one or more gestures. A message decision maker generates a message based upon an identified gesture and a recognition mode of the gesture recognition system. The recognition mode is changed under one or more various conditions.
    Type: Grant
    Filed: June 27, 2016
    Date of Patent: November 13, 2018
    Assignee: Eyesight Mobile Technologies Ltd.
    Inventors: Itay Katz, Nadav Israel, Tamir Anavi, Shahaf Grofit, Itay Bar-Yosef
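The mode-dependent message generation in the abstract above can be sketched as a small state machine: the same gesture yields different messages in different recognition modes, and certain gestures change the mode. Mode names, gestures, and messages here are invented for illustration.

```python
# Hypothetical sketch: a message decision maker whose output depends on
# both the identified gesture and the current recognition mode.

class GestureMessenger:
    def __init__(self):
        self.mode = "idle"  # recognition mode of the gesture system

    def on_gesture(self, gesture: str) -> str:
        if self.mode == "idle":
            if gesture == "wave":
                self.mode = "active"  # a condition that changes the mode
                return "wake_device"
            return "ignored"          # other gestures are discarded while idle
        # In active mode, gestures map to device commands.
        if gesture == "swipe_left":
            return "previous_item"
        if gesture == "wave":
            self.mode = "idle"        # waving again puts the system back to sleep
            return "sleep_device"
        return "unknown_gesture"


m = GestureMessenger()
first = m.on_gesture("swipe_left")   # ignored while idle
second = m.on_gesture("wave")        # wakes the device, switches mode
third = m.on_gesture("swipe_left")   # now interpreted as a command
```

Gating gestures behind a mode is one way to keep incidental hand motion from triggering device commands.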
  • Publication number: 20180024643
Abstract: A user interface apparatus for controlling any kind of device. Images obtained by an image sensor in a region adjacent to the device are input to a gesture recognition system which analyzes images obtained by the image sensor to identify one or more gestures. A message decision maker generates a message based upon an identified gesture and a recognition mode of the gesture recognition system. The recognition mode is changed under one or more various conditions.
    Type: Application
    Filed: September 29, 2017
    Publication date: January 25, 2018
    Inventors: Itay Katz, Nadav Israel, Tamir Anavi, Shahaf Grofit, Itay Bar-Yosef
  • Publication number: 20160306435
Abstract: A user interface apparatus for controlling any kind of device. Images obtained by an image sensor in a region adjacent to the device are input to a gesture recognition system which analyzes images obtained by the image sensor to identify one or more gestures. A message decision maker generates a message based upon an identified gesture and a recognition mode of the gesture recognition system. The recognition mode is changed under one or more various conditions.
    Type: Application
    Filed: June 27, 2016
    Publication date: October 20, 2016
Inventors: Itay KATZ, Nadav Israel, Tamir Anavi, Shahaf Grofit, Itay Bar-Yosef
  • Patent number: 9377867
Abstract: A user interface apparatus for controlling any kind of device. Images obtained by an image sensor in a region adjacent to the device are input to a gesture recognition system which analyzes images obtained by the image sensor to identify one or more gestures. A message decision maker generates a message based upon an identified gesture and a recognition mode of the gesture recognition system. The recognition mode is changed under one or more various conditions.
    Type: Grant
    Filed: August 8, 2012
    Date of Patent: June 28, 2016
    Assignee: EYESIGHT MOBILE TECHNOLOGIES LTD.
    Inventors: Itay Katz, Nadav Israel, Tamir Anavi, Shahaf Grofit, Itay Bar-Yosef
  • Publication number: 20140306877
Abstract: A user interface apparatus for controlling any kind of device. Images obtained by an image sensor in a region adjacent to the device are input to a gesture recognition system which analyzes images obtained by the image sensor to identify one or more gestures. A message decision maker generates a message based upon an identified gesture and a recognition mode of the gesture recognition system. The recognition mode is changed under one or more various conditions.
    Type: Application
    Filed: August 8, 2012
    Publication date: October 16, 2014
    Inventors: Itay Katz, Nadav Israel, Tamir Anavi, Shahaf Grofit, Itay Bar-Yosef
  • Patent number: 8842919
    Abstract: Systems, methods, and computer-readable media for gesture recognition are disclosed. The systems include, for example, at least one processor that is configured to receive at least one image from at least one image sensor. The processor may also be configured to detect, in the image, data corresponding to an anatomical structure of a user. The processor may also be configured to identify, in the image, information corresponding to a suspected hand gesture by the user. In addition, the processor may also be configured to discount the information corresponding to the suspected hand gesture if the data corresponding to the anatomical structure of the user is not identified in the image.
    Type: Grant
    Filed: February 8, 2014
    Date of Patent: September 23, 2014
    Assignee: Eyesight Mobile Technologies Ltd.
    Inventors: Itay Katz, Nadav Israel, Tamir Anavi, Shahaf Grofit, Itay Bar-Yosef
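The discounting logic in the abstract above — rejecting a suspected hand gesture when no corresponding anatomical structure (e.g., a face) is identified in the same image — can be sketched as below. The boolean detector inputs are stand-ins for real detector outputs.

```python
# Hypothetical sketch: accept a suspected gesture only when supporting
# anatomical-structure data was also identified in the image.
from typing import Optional


def accept_gesture(suspected_gesture: str,
                   anatomical_structure_found: bool) -> Optional[str]:
    """Discount (reject) the suspected gesture when no supporting
    anatomical structure was identified in the image."""
    if not anatomical_structure_found:
        return None  # gesture discounted as a likely false positive
    return suspected_gesture


accepted = accept_gesture("swipe", anatomical_structure_found=True)
discounted = accept_gesture("swipe", anatomical_structure_found=False)
```

The effect is to suppress false positives: hand-like shapes with no accompanying user in frame are not treated as commands.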
  • Publication number: 20140157210
Abstract: A user interface apparatus for controlling any kind of device. Images obtained by an image sensor in a region adjacent to the device are input to a gesture recognition system which analyzes images obtained by the image sensor to identify one or more gestures. A message decision maker generates a message based upon an identified gesture and a recognition mode of the gesture recognition system. The recognition mode is changed under one or more various conditions.
    Type: Application
    Filed: February 8, 2014
    Publication date: June 5, 2014
    Inventors: Itay Katz, Nadav Israel, Tamir Anavi, Shahaf Grofit, Itay Bar-Yosef