Patents by Inventor Aviv HURVITZ

Aviv HURVITZ has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12333807
    Abstract: In a system including a processor and memory, the memory includes instructions that, when executed by the processor, cause the processor to control the system to perform receiving a video stream capturing objects; identifying, based on the received video stream, object areas corresponding to the objects, respectively; tracking the object areas in the received video stream; generating, based on the tracking of the object areas, visual data sets at a plurality of times, wherein each visual data set is generated at a different time and includes visual data representing each object area; determining a priority of each visual data in each visual data set; selecting, based on the determined priority of each visual data, a group of the visual data to be transmitted to a remote system; and transmitting, to the remote system, the selected group of the visual data.
    Type: Grant
    Filed: May 24, 2021
    Date of Patent: June 17, 2025
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Moshe David, Aviv Hurvitz, Eyal Krupka, Qingfen Lin, Arash Ghanaie-Sichanie
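
The entry above (US 12333807) describes a pipeline that tracks object areas in a video stream, builds per-time visual data sets, ranks each item by priority, and transmits only the highest-priority group to a remote system. The following is a minimal Python sketch of that flow, not the patented implementation; the class names, the size-and-motion priority heuristic, and the transmission stub are assumptions made for illustration.

from dataclasses import dataclass
from typing import List

@dataclass
class VisualData:
    object_id: int          # identity of the tracked object area
    timestamp: float        # time at which this crop was generated
    crop: bytes             # encoded pixels of the object area
    priority: float = 0.0   # filled in by determine_priority()

def determine_priority(item: VisualData, area_px: int, moved_px: int) -> float:
    # Hypothetical heuristic: larger and faster-moving areas are more urgent.
    return area_px * 0.001 + moved_px * 0.01

def select_for_transmission(visual_set: List[VisualData], budget: int) -> List[VisualData]:
    # Keep only the highest-priority items that fit the transmission budget.
    return sorted(visual_set, key=lambda v: v.priority, reverse=True)[:budget]

def transmit(selected: List[VisualData]) -> None:
    # Stand-in for the network call to the remote system.
    for item in selected:
        print(f"sending object {item.object_id} @ {item.timestamp:.2f}s "
              f"(priority {item.priority:.2f})")

if __name__ == "__main__":
    frame_set = [VisualData(1, 0.0, b"crop-1"), VisualData(2, 0.0, b"crop-2")]
    frame_set[0].priority = determine_priority(frame_set[0], area_px=4000, moved_px=12)
    frame_set[1].priority = determine_priority(frame_set[1], area_px=900, moved_px=40)
    transmit(select_for_transmission(frame_set, budget=1))
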
  • Publication number: 20230402038
    Abstract: A method for facilitating a remote conference includes receiving a digital video and a computer-readable audio signal. A face recognition machine is operated to recognize a face of a first conference participant in the digital video, and a speech recognition machine is operated to translate the computer-readable audio signal into a first text. An attribution machine attributes the text to the first conference participant. A second computer-readable audio signal is processed similarly, to obtain a second text attributed to a second conference participant. A transcription machine automatically creates a transcript including the first text attributed to the first conference participant and the second text attributed to the second conference participant.
    Type: Application
    Filed: May 15, 2023
    Publication date: December 14, 2023
    Inventors: Adi DIAMANT, Xuedong HUANG, Karen MASTER BEN-DOR, Eyal KRUPKA, Raz HALALY, Yoni SMOLIN, Ilya GURVICH, Aviv HURVITZ, Lijuan QIN, Wei XIONG, Shixiong ZHANG, Lingfeng WU, Xiong XIAO, Ido LEICHTER, Moshe DAVID, Amit Kumar AGARWAL
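
The conference-transcription entries above and below (publication 20230402038 and related patents 11688399 and 10867610) share one pipeline: recognize a participant's face in the video, transcribe the audio signal, attribute the text to that participant, and append the attributed text to a transcript. The sketch below is a rough Python outline of that pipeline under assumed interfaces; the recognizer protocols are placeholders standing in for whatever face-recognition and speech-recognition machines the system actually uses.

from dataclasses import dataclass, field
from typing import List, Protocol

class FaceRecognizer(Protocol):
    def identify(self, video_frame: bytes) -> str: ...   # returns a participant id

class SpeechRecognizer(Protocol):
    def transcribe(self, audio: bytes) -> str: ...        # returns recognized text

@dataclass
class TranscriptEntry:
    participant: str
    text: str

@dataclass
class Transcript:
    entries: List[TranscriptEntry] = field(default_factory=list)

    def add(self, participant: str, text: str) -> None:
        self.entries.append(TranscriptEntry(participant, text))

def process_segment(video_frame: bytes, audio: bytes,
                    faces: FaceRecognizer, speech: SpeechRecognizer,
                    transcript: Transcript) -> None:
    # Attribution step: pair the recognized speech with the recognized face,
    # then record the attributed text in the running transcript.
    participant = faces.identify(video_frame)
    text = speech.transcribe(audio)
    transcript.add(participant, text)

Running process_segment once per audio segment (one call per participant's speech) yields a transcript whose entries are already attributed, which is the output the abstract describes.
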
  • Patent number: 11688399
    Abstract: A method for facilitating a remote conference includes receiving a digital video and a computer-readable audio signal. A face recognition machine is operated to recognize a face of a first conference participant in the digital video, and a speech recognition machine is operated to translate the computer-readable audio signal into a first text. An attribution machine attributes the text to the first conference participant. A second computer-readable audio signal is processed similarly, to obtain a second text attributed to a second conference participant. A transcription machine automatically creates a transcript including the first text attributed to the first conference participant and the second text attributed to the second conference participant.
    Type: Grant
    Filed: December 8, 2020
    Date of Patent: June 27, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Adi Diamant, Karen Master Ben-Dor, Eyal Krupka, Raz Halaly, Yoni Smolin, Ilya Gurvich, Aviv Hurvitz, Lijuan Qin, Wei Xiong, Shixiong Zhang, Lingfeng Wu, Xiong Xiao, Ido Leichter, Moshe David, Xuedong Huang, Amit Kumar Agarwal
  • Publication number: 20220374636
    Abstract: In a system including a processor and memory, the memory includes instructions that, when executed by the processor, cause the processor to control the system to perform receiving a video stream capturing objects; identifying, based on the received video stream, object areas corresponding to the objects, respectively; tracking the object areas in the received video stream; generating, based on the tracking of the object areas, visual data sets at a plurality of times, wherein each visual data set is generated at a different time and includes visual data representing each object area; determining a priority of each visual data in each visual data set; selecting, based on the determined priority of each visual data, a group of the visual data to be transmitted to a remote system; and transmitting, to the remote system, the selected group of the visual data.
    Type: Application
    Filed: May 24, 2021
    Publication date: November 24, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Moshe DAVID, Aviv HURVITZ, Eyal KRUPKA, Qingfen LIN, Arash GHANAIE-SICHANIE
  • Publication number: 20220329960
    Abstract: The disclosed technology is generally directed to audio capture. In one example of the technology, recorded sounds are received such that the sounds recorded were emitted from multiple locations in an environment and such that the sounds recorded are sounds that can be converted to room impulse responses. The room impulse responses are generated from the recorded sounds. Location information that is associated with the multiple locations is received. At least the room impulse responses and the location information are used to generate at least one environment-specific model. Audio captured in the environment is received. An output is generated by processing the captured audio with the at least one environment-specific model such that the output includes at least one adjustment of the captured audio based on at least one acoustical property of the environment.
    Type: Application
    Filed: April 13, 2021
    Publication date: October 13, 2022
    Inventors: Stav YAGEV, Sharon KOUBI, Aviv HURVITZ, Igor ABRAMOVSKI, Eyal KRUPKA
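
The entry above (publication 20220329960) describes measuring room impulse responses (RIRs) at several known locations, combining them with the location information into an environment-specific model, and using that model to adjust audio later captured in the same room. Below is a minimal numpy sketch of that idea under stated assumptions: the "model" is just the average RIR across locations and the adjustment is a naive spectral compensation, stand-ins for whatever model and processing the system actually uses.

import numpy as np

def build_environment_model(rirs: dict[tuple[float, float], np.ndarray]) -> np.ndarray:
    # Environment-specific "model" (assumed form): the average RIR over all
    # measured locations, zero-padded to a common length.
    n = max(len(r) for r in rirs.values())
    padded = [np.pad(r, (0, n - len(r))) for r in rirs.values()]
    return np.mean(padded, axis=0)

def adjust_captured_audio(audio: np.ndarray, avg_rir: np.ndarray) -> np.ndarray:
    # Naive compensation: divide the captured audio's spectrum by the room's
    # average magnitude response (a crude stand-in for dereverberation).
    n = len(audio) + len(avg_rir) - 1
    room_mag = np.abs(np.fft.rfft(avg_rir, n))
    spectrum = np.fft.rfft(audio, n)
    cleaned = np.fft.irfft(spectrum / np.maximum(room_mag, 1e-3), n)
    return cleaned[: len(audio)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic decaying RIRs measured at two (x, y) locations in the room.
    rirs = {(0.0, 0.0): rng.standard_normal(256) * np.exp(-np.arange(256) / 40),
            (2.0, 1.0): rng.standard_normal(256) * np.exp(-np.arange(256) / 60)}
    model = build_environment_model(rirs)
    captured = rng.standard_normal(16000)          # 1 s of audio at 16 kHz
    cleaned = adjust_captured_audio(captured, model)
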
  • Publication number: 20210210097
    Abstract: A method for facilitating a remote conference includes receiving a digital video and a computer-readable audio signal. A face recognition machine is operated to recognize a face of a first conference participant in the digital video, and a speech recognition machine is operated to translate the computer-readable audio signal into a first text. An attribution machine attributes the text to the first conference participant. A second computer-readable audio signal is processed similarly, to obtain a second text attributed to a second conference participant. A transcription machine automatically creates a transcript including the first text attributed to the first conference participant and the second text attributed to the second conference participant.
    Type: Application
    Filed: December 8, 2020
    Publication date: July 8, 2021
    Inventors: Adi DIAMANT, Karen MASTER BEN-DOR, Eyal KRUPKA, Raz HALALY, Yoni SMOLIN, Ilya GURVICH, Aviv HURVITZ, Lijuan QIN, Wei XIONG, Shixiong ZHANG, Lingfeng WU, Xiong XIAO, Ido LEICHTER, Moshe DAVID, Xuedong HUANG, Amit Kumar AGARWAL
  • Patent number: 10867610
    Abstract: A method for facilitating a remote conference includes receiving a digital video and a computer-readable audio signal. A face recognition machine is operated to recognize a face of a first conference participant in the digital video, and a speech recognition machine is operated to translate the computer-readable audio signal into a first text. An attribution machine attributes the text to the first conference participant. A second computer-readable audio signal is processed similarly, to obtain a second text attributed to a second conference participant. A transcription machine automatically creates a transcript including the first text attributed to the first conference participant and the second text attributed to the second conference participant.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: December 15, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Adi Diamant, Karen Master Ben-Dor, Eyal Krupka, Raz Halaly, Yoni Smolin, Ilya Gurvich, Aviv Hurvitz, Lijuan Qin, Wei Xiong, Shixiong Zhang, Lingfeng Wu, Xiong Xiao, Ido Leichter, Moshe David, Xuedong Huang, Amit Kumar Agarwal
  • Patent number: 10488939
    Abstract: A gesture recognition method comprises receiving at a processor from a sensor a sequence of captured signal frames for extracting hand pose information for a hand and using at least one trained predictor executed on the processor to extract hand pose information from the received signal frames. For at least one defined gesture, defined as a time sequence comprising hand poses, with each of the hand poses defined as a conjunction or disjunction of qualitative propositions relating to interest points on the hand, truth values are computed for the qualitative propositions using the hand pose information extracted from the received signal frames, and execution of the gesture is tracked, by using the truth values to determine which of the hand poses in the time sequence have already been executed and which of the hand poses in the time sequence is expected next.
    Type: Grant
    Filed: August 7, 2017
    Date of Patent: November 26, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kfir Karmon, Aharon Bar-Hillel, Eyal Krupka, Noam Bloom, Ilya Gurvich, Aviv Hurvitz, Ido Leichter, Yoni Smolin, Yuval Tzairi, Alon Vinnikov
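
The gesture-recognition entry above (US 10488939) defines a gesture as a time sequence of hand poses, each pose a conjunction or disjunction of qualitative propositions about interest points on the hand, with truth values used to track which pose is expected next. The toy Python sketch below illustrates that structure; the HandPose fields and the example propositions are invented for illustration and are not taken from the patent.

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class HandPose:
    # Hypothetical extracted interest points: (x, y, z) per named point.
    points: Dict[str, Tuple[float, float, float]]

Proposition = Callable[[HandPose], bool]

@dataclass
class PoseSpec:
    propositions: List[Proposition]
    mode: str = "and"            # "and" = conjunction, "or" = disjunction

    def holds(self, pose: HandPose) -> bool:
        values = [p(pose) for p in self.propositions]
        return all(values) if self.mode == "and" else any(values)

class GestureTracker:
    """Tracks which poses in the gesture's time sequence have been executed."""
    def __init__(self, sequence: List[PoseSpec]):
        self.sequence = sequence
        self.next_index = 0       # index of the pose expected next

    def update(self, pose: HandPose) -> bool:
        # Advance the cursor when the expected pose's truth value becomes True;
        # return True once every pose in the sequence has been executed.
        if self.next_index < len(self.sequence) and self.sequence[self.next_index].holds(pose):
            self.next_index += 1
        return self.next_index == len(self.sequence)

# Example qualitative propositions (assumed, not from the patent):
def index_above_wrist(p: HandPose) -> bool:
    return p.points["index_tip"][1] > p.points["wrist"][1]

def thumb_near_index(p: HandPose) -> bool:
    ix, iy, iz = p.points["index_tip"]
    tx, ty, tz = p.points["thumb_tip"]
    return ((ix - tx) ** 2 + (iy - ty) ** 2 + (iz - tz) ** 2) ** 0.5 < 0.03

A "pinch after raise" gesture, for instance, would be GestureTracker([PoseSpec([index_above_wrist]), PoseSpec([thumb_near_index])]), updated once per captured signal frame.
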
  • Publication number: 20190341050
    Abstract: A method for facilitating a remote conference includes receiving a digital video and a computer-readable audio signal. A face recognition machine is operated to recognize a face of a first conference participant in the digital video, and a speech recognition machine is operated to translate the computer-readable audio signal into a first text. An attribution machine attributes the text to the first conference participant. A second computer-readable audio signal is processed similarly, to obtain a second text attributed to a second conference participant. A transcription machine automatically creates a transcript including the first text attributed to the first conference participant and the second text attributed to the second conference participant.
    Type: Application
    Filed: June 29, 2018
    Publication date: November 7, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Adi DIAMANT, Karen MASTER BEN-DOR, Eyal KRUPKA, Raz HALALY, Yoni SMOLIN, Ilya GURVICH, Aviv HURVITZ, Lijuan QIN, Wei XIONG, Shixiong ZHANG, Lingfeng WU, Xiong XIAO, Ido LEICHTER, Moshe DAVID, Xuedong HUANG, Amit Kumar AGARWAL
  • Publication number: 20180307319
    Abstract: A gesture recognition method comprises receiving at a processor from a sensor a sequence of captured signal frames for extracting hand pose information for a hand and using at least one trained predictor executed on the processor to extract hand pose information from the received signal frames. For at least one defined gesture, defined as a time sequence comprising hand poses, with each of the hand poses defined as a conjunction or disjunction of qualitative propositions relating to interest points on the hand, truth values are computed for the qualitative propositions using the hand pose information extracted from the received signal frames, and execution of the gesture is tracked, by using the truth values to determine which of the hand poses in the time sequence have already been executed and which of the hand poses in the time sequence is expected next.
    Type: Application
    Filed: August 7, 2017
    Publication date: October 25, 2018
    Inventors: Kfir KARMON, Eyal KRUPKA, Noam BLOOM, Ilya GURVICH, Aviv HURVITZ, Ido LEICHTER, Yoni SMOLIN, Yuval TZAIRI, Alon VINNIKOV, Aharon BAR-HILLEL