Patents by Inventor Tamir Anavi
Tamir Anavi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11373354
Abstract: A system and method for creating 3D graphics representations from video. The method includes: generating a skeletal model for each of at least one non-rigid object shown in a video feed, wherein the video feed illustrates a sports event in which at least one of the non-rigid objects is moving; determining at least one 3D rigged model for the at least one skeletal model; and rendering the at least one skeletal model as a 3D representation of the sports event, wherein rendering the 3D skeletal model further comprises wrapping each of at least one 3D skeletal model with one of the at least one 3D rigged model, each 3D skeletal model corresponding to one of the at least one skeletal model, wherein each 3D rigged model is moved according to the movement of the respective skeletal model when the 3D skeletal model is wrapped with the 3D rigged model.
Type: Grant
Filed: February 25, 2020
Date of Patent: June 28, 2022
Assignee: Track160, Ltd.
Inventors: Michael Tamir, Michael Birnboim, Yaacov Chernoi, Antonio Dello Iacono, Tamir Anavi, Michael Priven, Alexander Yudashkin
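As an illustration of the "wrapping" step this abstract describes, the sketch below moves each vertex of a rigged model according to the joints of a skeletal model via per-vertex skinning weights, so the mesh follows the tracked skeleton. All names and the linear weighting scheme are invented assumptions for illustration, not the patented implementation.

```python
def wrap_rigged_model(vertices, weights, joint_positions):
    """Pose rigged-model vertices from skeletal joint positions.

    vertices:        iterable of vertex ids
    weights:         {vertex_id: {joint_name: weight}} (weights sum to 1)
    joint_positions: {joint_name: (x, y, z)} for the current frame
    """
    posed = {}
    for v in vertices:
        x = y = z = 0.0
        # each vertex is a weighted blend of the joints it is bound to,
        # so moving a joint moves every vertex weighted to it
        for joint, w in weights[v].items():
            jx, jy, jz = joint_positions[joint]
            x += w * jx
            y += w * jy
            z += w * jz
        posed[v] = (x, y, z)
    return posed
```

Re-running this per video frame with updated joint positions is what makes the rigged model "move according to the movement of the respective skeletal model."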
-
Patent number: 11348255
Abstract: A method and system for tracking movements of objects in a sports activity are provided. The method includes matching video captured by at least one camera with sensory data captured by each of a plurality of tags, wherein each of the at least one camera is deployed in proximity to a monitored area, wherein each of the plurality of tags is disposed on an object of a plurality of monitored objects moving within the monitored area; and determining, based on the video and sensory data, at least one performance profile for each of the monitored objects, wherein each performance profile is determined based on positions of the respective monitored object moving within the monitored area.
Type: Grant
Filed: June 4, 2018
Date of Patent: May 31, 2022
Assignee: Track160, Ltd.
Inventors: Michael Tamir, Michael Birnboim, Antonio Dello Iacono, Yaacov Chernoi, Tamir Anavi, Michael Priven, Alexander Yudashkin
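A loose sketch of the fusion this abstract describes: camera detections for one object are matched to that object's tag samples by nearest timestamp, and a simple performance metric (total distance covered) is derived from the matched positions. The matching rule, averaging, and threshold are assumptions for illustration, not the patented method.

```python
import math

def match_and_profile(video_positions, tag_samples, max_dt=0.1):
    """Fuse camera and tag tracks for one monitored object.

    video_positions: [(t, x, y)] detections from the camera
    tag_samples:     [(t, x, y)] samples from the tag on the object
    Returns the total distance covered by the fused track.
    """
    matched = []
    for t, x, y in video_positions:
        # nearest tag sample in time; average positions when close enough
        ts, tx, ty = min(tag_samples, key=lambda s: abs(s[0] - t))
        if abs(ts - t) <= max_dt:
            matched.append(((x + tx) / 2, (y + ty) / 2))
        else:
            matched.append((x, y))  # no tag sample near this frame
    return sum(math.dist(a, b) for a, b in zip(matched, matched[1:]))
```

Distance covered is only one example of a "performance profile"; speed or acceleration profiles follow the same way by differencing the matched positions over time.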
-
Publication number: 20210269045
Abstract: Systems and methods are disclosed for contextual driver monitoring. In one implementation, one or more first inputs are received and processed to determine a state of a driver present within a vehicle. One or more second inputs are received and processed to determine navigation condition(s) associated with the vehicle, the navigation condition(s) including a temporal road condition received from a cloud resource or a behavior of the driver. Based on the navigation condition(s), a driver attentiveness threshold is computed. One or more actions are initiated in correlation with the state of the driver and the driver attentiveness threshold.
Type: Application
Filed: June 26, 2019
Publication date: September 2, 2021
Inventors: Itay Katz, Tamir Anavi, Erez Shalom Steinberg
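The flow in this abstract can be sketched minimally: navigation conditions raise an attentiveness threshold above a base value, and an action fires when the driver's measured attentiveness falls below it. The base value, condition names, and weights below are invented for illustration.

```python
BASE_THRESHOLD = 0.5
# hypothetical contributions of temporal road conditions to the threshold
CONDITION_WEIGHTS = {"rain": 0.25, "heavy_traffic": 0.125, "night": 0.125}

def attentiveness_threshold(conditions):
    """Compute the required attentiveness given current navigation conditions."""
    extra = sum(CONDITION_WEIGHTS.get(c, 0.0) for c in conditions)
    return min(1.0, BASE_THRESHOLD + extra)

def monitor(driver_attentiveness, conditions):
    """Initiate an action when the driver's state is below the threshold."""
    if driver_attentiveness < attentiveness_threshold(conditions):
        return "alert"
    return "ok"
```

The key point the abstract makes is that the threshold is contextual: the same driver state that passes on an empty daytime road can trigger an action in rain or heavy traffic.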
-
Publication number: 20200207358
Abstract: Systems and methods are disclosed for contextual driver monitoring. In one implementation, one or more first inputs are received. The one or more first inputs are processed to identify a first object in relation to a vehicle. One or more second inputs are received. The one or more second inputs are processed to determine, based on one or more previously determined states of attentiveness associated with the driver of the vehicle in relation to one or more objects associated with the first object, a state of attentiveness of a driver of the vehicle with respect to the first object. One or more actions are initiated based on the state of attentiveness of the driver.
Type: Application
Filed: September 9, 2019
Publication date: July 2, 2020
Inventors: Itay Katz, Tamir Anavi, Erez Steinberg
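The distinctive idea here is inferring attentiveness toward a newly detected object from previously determined attentiveness states toward related objects. A minimal sketch, assuming objects are related simply by sharing a category and that past scores are averaged (both assumptions, not the disclosed method):

```python
def estimate_attentiveness(obj_category, history):
    """Estimate attentiveness toward a new object of obj_category.

    history: [(category, score)] of previously determined attentiveness
             states toward earlier objects.
    Returns the mean score over related (same-category) objects, or None
    when there is no basis to infer and a direct measurement is needed.
    """
    related = [score for category, score in history if category == obj_category]
    if not related:
        return None
    return sum(related) / len(related)
```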
-
Publication number: 20200193671
Abstract: A system and method for creating 3D graphics representations from video. The method includes: generating a skeletal model for each of at least one non-rigid object shown in a video feed, wherein the video feed illustrates a sports event in which at least one of the non-rigid objects is moving; determining at least one 3D rigged model for the at least one skeletal model; and rendering the at least one skeletal model as a 3D representation of the sports event, wherein rendering the 3D skeletal model further comprises wrapping each of at least one 3D skeletal model with one of the at least one 3D rigged model, each 3D skeletal model corresponding to one of the at least one skeletal model, wherein each 3D rigged model is moved according to the movement of the respective skeletal model when the 3D skeletal model is wrapped with the 3D rigged model.
Type: Application
Filed: February 25, 2020
Publication date: June 18, 2020
Applicant: Track160, Ltd.
Inventors: Michael Tamir, Michael Birnboim, Yaacov Chernoi, Antonio Dello Iacono, Tamir Anavi, Michael Priven, Alexander Yudashkin
-
Publication number: 20190318181
Abstract: Systems and methods are disclosed for driver monitoring. In one implementation, one or more images are received, e.g., from an image sensor. Such image(s) can reflect at least a portion of a face of a driver. Using the images, a direction of a gaze of the driver is determined. A set of determined driver gaze directions is identified using at least one predefined direction. One or more features of one or more eyes of the driver are extracted using information associated with the identified set.
Type: Application
Filed: June 30, 2017
Publication date: October 17, 2019
Inventors: Itay Katz, Yonatan Samet, Tamir Anavi
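One step in this abstract, identifying the set of gaze directions that match a predefined direction, can be sketched as an angular filter over gaze vectors. The 2D representation and the 10-degree threshold are illustrative assumptions, not the disclosed method.

```python
import math

def select_gaze_set(gaze_directions, reference, max_angle_deg=10.0):
    """Keep 2D gaze vectors within max_angle_deg of a predefined direction."""
    def angle_between(u, v):
        dot = u[0] * v[0] + u[1] * v[1]
        norm = math.hypot(*u) * math.hypot(*v)
        # clamp to guard against floating-point values just outside [-1, 1]
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return [g for g in gaze_directions if angle_between(g, reference) <= max_angle_deg]
```

Per the abstract, such a filtered set (e.g. frames where the driver looks roughly straight ahead) then supplies the images from which eye features are extracted.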
-
Publication number: 20180350084
Abstract: A method and system for tracking movements of objects in a sports activity are provided. The method includes matching video captured by at least one camera with sensory data captured by each of a plurality of tags, wherein each of the at least one camera is deployed in proximity to a monitored area, wherein each of the plurality of tags is disposed on an object of a plurality of monitored objects moving within the monitored area; and determining, based on the video and sensory data, at least one performance profile for each of the monitored objects, wherein each performance profile is determined based on positions of the respective monitored object moving within the monitored area.
Type: Application
Filed: June 4, 2018
Publication date: December 6, 2018
Applicant: Track160, Ltd.
Inventors: Michael Tamir, Michael Birnboim, Antonio Dello Iacono, Yaacov Chernoi, Tamir Anavi, Michael Priven, Alexander Yudashkin
-
Patent number: 10126826
Abstract: A user interface apparatus for controlling any kind of device. Images obtained by an image sensor in a region adjacent to the device are input to a gesture recognition system, which analyzes the images to identify one or more gestures. A message decision maker generates a message based upon an identified gesture and a recognition mode of the gesture recognition system. The recognition mode is changed under various conditions.
Type: Grant
Filed: June 27, 2016
Date of Patent: November 13, 2018
Assignee: Eyesight Mobile Technologies Ltd.
Inventors: Itay Katz, Nadav Israel, Tamir Anavi, Shahaf Grofit, Itay Bar-Yosef
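The message decision maker this abstract describes maps an identified gesture together with the current recognition mode to a control message, while the mode itself changes under certain conditions. The gesture names, modes, and transition rule below are invented for illustration, not the patented apparatus.

```python
# (gesture, recognition mode) -> message sent to the controlled device
MESSAGES = {
    ("swipe_left", "active"): "previous_track",
    ("swipe_right", "active"): "next_track",
    ("wave", "standby"): "wake",
}

def decide_message(gesture, mode):
    """Return the message for this gesture in this mode, or None."""
    return MESSAGES.get((gesture, mode))

def next_mode(gesture, mode):
    """Example mode change: a wave wakes the system from standby."""
    if mode == "standby" and gesture == "wave":
        return "active"
    return mode
```

The mode dependence is the point: the same swipe that changes tracks in "active" mode produces no message in "standby", which is how the apparatus avoids acting on incidental hand motion.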
-
Publication number: 20180024643
Abstract: A user interface apparatus for controlling any kind of device. Images obtained by an image sensor in a region adjacent to the device are input to a gesture recognition system, which analyzes the images to identify one or more gestures. A message decision maker generates a message based upon an identified gesture and a recognition mode of the gesture recognition system. The recognition mode is changed under various conditions.
Type: Application
Filed: September 29, 2017
Publication date: January 25, 2018
Inventors: Itay Katz, Nadav Israel, Tamir Anavi, Shahaf Grofit, Itay Bar-Yosef
-
Publication number: 20160306435
Abstract: A user interface apparatus for controlling any kind of device. Images obtained by an image sensor in a region adjacent to the device are input to a gesture recognition system, which analyzes the images to identify one or more gestures. A message decision maker generates a message based upon an identified gesture and a recognition mode of the gesture recognition system. The recognition mode is changed under various conditions.
Type: Application
Filed: June 27, 2016
Publication date: October 20, 2016
Inventors: Itay Katz, Nadav Israel, Tamir Anavi, Shahaf Grofit, Itay Bar-Yosef
-
Patent number: 9377867
Abstract: A user interface apparatus for controlling any kind of device. Images obtained by an image sensor in a region adjacent to the device are input to a gesture recognition system, which analyzes the images to identify one or more gestures. A message decision maker generates a message based upon an identified gesture and a recognition mode of the gesture recognition system. The recognition mode is changed under various conditions.
Type: Grant
Filed: August 8, 2012
Date of Patent: June 28, 2016
Assignee: Eyesight Mobile Technologies Ltd.
Inventors: Itay Katz, Nadav Israel, Tamir Anavi, Shahaf Grofit, Itay Bar-Yosef
-
Publication number: 20140306877
Abstract: A user interface apparatus for controlling any kind of device. Images obtained by an image sensor in a region adjacent to the device are input to a gesture recognition system, which analyzes the images to identify one or more gestures. A message decision maker generates a message based upon an identified gesture and a recognition mode of the gesture recognition system. The recognition mode is changed under various conditions.
Type: Application
Filed: August 8, 2012
Publication date: October 16, 2014
Inventors: Itay Katz, Nadav Israel, Tamir Anavi, Shahaf Grofit, Itay Bar-Yosef
-
Patent number: 8842919
Abstract: Systems, methods, and computer-readable media for gesture recognition are disclosed. The systems include, for example, at least one processor that is configured to receive at least one image from at least one image sensor. The processor may also be configured to detect, in the image, data corresponding to an anatomical structure of a user. The processor may also be configured to identify, in the image, information corresponding to a suspected hand gesture by the user. In addition, the processor may also be configured to discount the information corresponding to the suspected hand gesture if the data corresponding to the anatomical structure of the user is not identified in the image.
Type: Grant
Filed: February 8, 2014
Date of Patent: September 23, 2014
Assignee: Eyesight Mobile Technologies Ltd.
Inventors: Itay Katz, Nadav Israel, Tamir Anavi, Shahaf Grofit, Itay Bar-Yosef
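The discounting step in this abstract can be sketched as a confidence adjustment: when the expected anatomical structure (e.g. a face or arm) is not detected in the same image as the suspected hand gesture, the gesture's confidence is reduced. The scoring scheme and the 0.5 discount factor are illustrative assumptions.

```python
def score_gesture(gesture_confidence, anatomy_detected, discount=0.5):
    """Return the possibly discounted confidence of a suspected hand gesture.

    gesture_confidence: detector confidence in [0, 1]
    anatomy_detected:   True if the supporting anatomical structure was
                        also identified in the image
    """
    if anatomy_detected:
        return gesture_confidence
    # no supporting anatomy in the image: treat the gesture as less credible
    return gesture_confidence * discount
```

A downstream threshold on the discounted score would then suppress hand-like artifacts (shadows, background objects) that appear without the rest of the user, which is the false-positive problem the discounting addresses.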
-
Publication number: 20140157210
Abstract: A user interface apparatus for controlling any kind of device. Images obtained by an image sensor in a region adjacent to the device are input to a gesture recognition system, which analyzes the images to identify one or more gestures. A message decision maker generates a message based upon an identified gesture and a recognition mode of the gesture recognition system. The recognition mode is changed under various conditions.
Type: Application
Filed: February 8, 2014
Publication date: June 5, 2014
Inventors: Itay Katz, Nadav Israel, Tamir Anavi, Shahaf Grofit, Itay Bar-Yosef