Patents by Inventor Karen Master Ben-Dor
Karen Master Ben-Dor has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240104699
Abstract: Techniques for generating a gallery view of tiles for in-area participants who are participating in an online meeting are disclosed. A video stream is accessed, where this stream includes an area view of an area in which an in-area participant is located. This area view comprises pixels representative of the area and pixels representative of the in-area participant. The pixels representative of the in-area participant are identified. A field of view of the in-area participant is generated. A tile of the in-area participant is generated based on the field of view. This tile is then displayed while the area view is not displayed.
Type: Application
Filed: September 22, 2022
Publication date: March 28, 2024
Inventors: Karen MASTER BEN-DOR, Eshchar ZYCHLINSKI, Stav YAGEV, Yoni SMOLIN, Raz HALALY, Adi DIAMANT, Ido LEICHTER, Tamir SHLOMI
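The core step of the abstract — isolating the participant's pixels and cutting a tile from the wider area view — can be sketched roughly as below. This is an illustrative sketch, not the patented implementation: the boolean mask is assumed to come from some person-segmentation model, and the padding is an arbitrary choice.

```python
import numpy as np

def crop_participant_tile(frame, mask, pad=20):
    """Crop a tile around the pixels flagged as belonging to a participant.

    frame: H x W x 3 image array; mask: H x W boolean array marking
    participant pixels (assumed to come from a segmentation model).
    Returns None when no participant pixels are present.
    """
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None  # no participant detected in this frame
    h, w = mask.shape
    # Expand the bounding box by `pad` pixels, clamped to the frame.
    top = max(ys.min() - pad, 0)
    bottom = min(ys.max() + pad, h - 1)
    left = max(xs.min() - pad, 0)
    right = min(xs.max() + pad, w - 1)
    return frame[top:bottom + 1, left:right + 1]

# Toy example: a 100x100 frame with a "participant" block at rows 40-60, cols 30-50.
frame = np.zeros((100, 100, 3), dtype=np.uint8)
mask = np.zeros((100, 100), dtype=bool)
mask[40:61, 30:51] = True
tile = crop_participant_tile(frame, mask, pad=10)
print(tile.shape)  # (41, 41, 3)
```

In a real gallery view, each such tile would then be rendered in place of the shared area view, as the abstract describes.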
-
Publication number: 20230402038
Abstract: A method for facilitating a remote conference includes receiving a digital video and a computer-readable audio signal. A face recognition machine is operated to recognize a face of a first conference participant in the digital video, and a speech recognition machine is operated to translate the computer-readable audio signal into a first text. An attribution machine attributes the text to the first conference participant. A second computer-readable audio signal is processed similarly, to obtain a second text attributed to a second conference participant. A transcription machine automatically creates a transcript including the first text attributed to the first conference participant and the second text attributed to the second conference participant.
Type: Application
Filed: May 15, 2023
Publication date: December 14, 2023
Inventors: Adi DIAMANT, Xuedong HUANG, Karen MASTER BEN-DOR, Eyal KRUPKA, Raz HALALY, Yoni SMOLIN, Ilya GURVICH, Aviv HURVITZ, Lijuan QIN, Wei XIONG, Shixiong ZHANG, Lingfeng WU, Xiong XIAO, Ido LEICHTER, Moshe DAVID, Amit Kumar AGARWAL
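One way to picture the attribution step — pairing recognized speech with a recognized participant — is to match speech segments against face-track time spans. This is a simplified sketch under assumed inputs (timestamped segments and tracks), not the patented attribution machine itself:

```python
def attribute(speech_segments, face_tracks):
    """Attribute each recognized speech segment to a participant.

    speech_segments: list of (start, end, text) from a speech recognizer.
    face_tracks: list of (start, end, participant) from face recognition.
    Each segment is attributed to the face track with the largest
    temporal overlap.
    """
    transcript = []
    for s_start, s_end, text in speech_segments:
        best, best_overlap = "unknown", 0.0
        for f_start, f_end, who in face_tracks:
            overlap = min(s_end, f_end) - max(s_start, f_start)
            if overlap > best_overlap:
                best, best_overlap = who, overlap
        transcript.append((best, text))
    return transcript

tracks = [(0.0, 5.0, "Alice"), (5.0, 9.0, "Bob")]
segments = [(1.0, 4.0, "Hello everyone."), (6.0, 8.0, "Hi Alice.")]
print(attribute(segments, tracks))
# [('Alice', 'Hello everyone.'), ('Bob', 'Hi Alice.')]
```

Concatenating the attributed pairs in time order would then yield the kind of speaker-labeled transcript the abstract describes.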
-
Publication number: 20230252061
Abstract: The disclosure herein describes providing responses to natural language queries associated with transcripts at least by searching multiple indexes. A transcript associated with a communication among a plurality of speakers is obtained, wherein sets of artifact sections are identified in the transcript. A set of section indexes is generated from the transcript based on artifact type definitions. A natural language query associated with the transcript is analyzed using a natural language model, and query metadata of the analyzed natural language query is obtained. At least one section index of the set of section indexes is selected based on the obtained query metadata, and the selected section index(es) are searched. A response to the natural language query is provided including result data from the searched section index(es), wherein the result data includes a reference to an artifact section referenced by the searched section index(es).
Type: Application
Filed: April 19, 2023
Publication date: August 10, 2023
Inventors: Karen MASTER BEN-DOR, Lili CHENG, Adi DIAMANT, Raz HALALY, Eshchar ZYCHLINSKI, Thomas Matthew LAIRD-MCCONNELL, Sonja Sabina KNOLL, Daniel DOS SANTOS MARQUES, Shunfu MAO
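The routing idea — analyze the query, pick the relevant per-artifact-type section index, and search only that index — can be sketched as follows. Everything here (the index names, the keyword-based stand-in for the natural language model, the result shape) is illustrative, not the patented system:

```python
# Per-artifact-type section indexes; keys reference transcript sections.
section_indexes = {
    "action_items": {"section-3": "Send the Q3 report to finance."},
    "decisions": {"section-7": "Approved the new hiring plan."},
}

def analyze_query(query):
    # Stand-in for a natural language model: derive query metadata,
    # here just the artifact types the query seems to target.
    types = []
    if "action" in query or "task" in query:
        types.append("action_items")
    if "decision" in query or "decide" in query:
        types.append("decisions")
    return {"artifact_types": types or list(section_indexes)}

def answer(query):
    # Select section index(es) from the query metadata, search only
    # those, and return results referencing the matching sections.
    metadata = analyze_query(query)
    results = []
    for index_name in metadata["artifact_types"]:
        for section_ref, text in section_indexes[index_name].items():
            results.append({"section": section_ref, "text": text})
    return results

print(answer("what decisions were made?"))
# [{'section': 'section-7', 'text': 'Approved the new hiring plan.'}]
```

Searching only the selected index, rather than every index, is what lets the response stay scoped to the artifact type the query actually asks about.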
-
Patent number: 11688399
Abstract: A method for facilitating a remote conference includes receiving a digital video and a computer-readable audio signal. A face recognition machine is operated to recognize a face of a first conference participant in the digital video, and a speech recognition machine is operated to translate the computer-readable audio signal into a first text. An attribution machine attributes the text to the first conference participant. A second computer-readable audio signal is processed similarly, to obtain a second text attributed to a second conference participant. A transcription machine automatically creates a transcript including the first text attributed to the first conference participant and the second text attributed to the second conference participant.
Type: Grant
Filed: December 8, 2020
Date of Patent: June 27, 2023
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Adi Diamant, Karen Master Ben-Dor, Eyal Krupka, Raz Halaly, Yoni Smolin, Ilya Gurvich, Aviv Hurvitz, Lijuan Qin, Wei Xiong, Shixiong Zhang, Lingfeng Wu, Xiong Xiao, Ido Leichter, Moshe David, Xuedong Huang, Amit Kumar Agarwal
-
Patent number: 11640418
Abstract: The disclosure herein describes providing responses to natural language queries associated with transcripts at least by searching multiple indexes. A transcript associated with a communication among a plurality of speakers is obtained, wherein sets of artifact sections are identified in the transcript. A set of section indexes is generated from the transcript based on artifact type definitions. A natural language query associated with the transcript is analyzed using a natural language model, and query metadata of the analyzed natural language query is obtained. At least one section index of the set of section indexes is selected based on the obtained query metadata, and the selected section index(es) are searched. A response to the natural language query is provided including result data from the searched section index(es), wherein the result data includes a reference to an artifact section referenced by the searched section index(es).
Type: Grant
Filed: June 25, 2021
Date of Patent: May 2, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Karen Master Ben-Dor, Lili Cheng, Adi Diamant, Raz Halaly, Eshchar Zychlinski, Thomas Matthew Laird-Mcconnell, Sonja Sabina Knoll, Daniel Dos Santos Marques, Shunfu Mao
-
Publication number: 20230092783
Abstract: Systems and methods are provided for configuring and utilizing botcasts, which comprise audio content with transitions corresponding to the audio content, to facilitate accessibility and presentation of the media content within the botcasts according to contextual relevance for different individual users. The systems identify, access, filter, augment, customize, personalize, create and/or otherwise configure the media content, as well as the content transitions in the botcasts, according to the individual preferences and profiles of each user, as well as the contextual circumstances for each user.
Type: Application
Filed: November 18, 2021
Publication date: March 23, 2023
Inventors: Karen MASTER BEN-DOR, Adi DIAMANT, Stav YAGEV, Eshchar ZYCHLINSKI, Yoni SMOLIN
-
Publication number: 20220414130
Abstract: The disclosure herein describes providing responses to natural language queries associated with transcripts at least by searching multiple indexes. A transcript associated with a communication among a plurality of speakers is obtained, wherein sets of artifact sections are identified in the transcript. A set of section indexes is generated from the transcript based on artifact type definitions. A natural language query associated with the transcript is analyzed using a natural language model, and query metadata of the analyzed natural language query is obtained. At least one section index of the set of section indexes is selected based on the obtained query metadata, and the selected section index(es) are searched. A response to the natural language query is provided including result data from the searched section index(es), wherein the result data includes a reference to an artifact section referenced by the searched section index(es).
Type: Application
Filed: June 25, 2021
Publication date: December 29, 2022
Inventors: Karen MASTER BEN-DOR, Lili CHENG, Adi DIAMANT, Raz HALALY, Eshchar ZYCHLINSKI, Thomas Matthew LAIRD-MCCONNELL, Sonja Sabina KNOLL, Daniel DOS SANTOS MARQUES, Shunfu MAO
-
Publication number: 20210210097
Abstract: A method for facilitating a remote conference includes receiving a digital video and a computer-readable audio signal. A face recognition machine is operated to recognize a face of a first conference participant in the digital video, and a speech recognition machine is operated to translate the computer-readable audio signal into a first text. An attribution machine attributes the text to the first conference participant. A second computer-readable audio signal is processed similarly, to obtain a second text attributed to a second conference participant. A transcription machine automatically creates a transcript including the first text attributed to the first conference participant and the second text attributed to the second conference participant.
Type: Application
Filed: December 8, 2020
Publication date: July 8, 2021
Inventors: Adi DIAMANT, Karen MASTER BEN-DOR, Eyal KRUPKA, Raz HALALY, Yoni SMOLIN, Ilya GURVICH, Aviv HURVITZ, Lijuan QIN, Wei XIONG, Shixiong ZHANG, Lingfeng WU, Xiong XIAO, Ido LEICHTER, Moshe DAVID, Xuedong HUANG, Amit Kumar AGARWAL
-
Patent number: 10956019
Abstract: Automatically alternating between input modes on a computing device based on a usage pattern is provided. A first input mode is initiated for interacting with content displayed on the computing device. An input corresponding to a second input mode on the computing device is then detected. A transition is then made from the first input mode to the second input mode on the computing device. Upon detecting a termination of the input on the displayed content in the second input mode, a gradual transition is made from the second input mode to the first input mode based on a current sensor state of the computing device and a threshold.
Type: Grant
Filed: June 29, 2017
Date of Patent: March 23, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Oded Elyada, Jeffrey M. Jo, Shaiket S. Das, Aditya R. Kalro, Zijia Zheng, Karen Master Ben-Dor, Adi Diamant, Inbal Ort Bengal
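The mode-switching behavior described above amounts to a small state machine: enter the second mode when its input appears, and revert only once that input ends and a sensor reading crosses a threshold. The sketch below is a loose illustration; the mode names, sensor semantics, and threshold value are all assumptions, not the patented method:

```python
class InputModeSwitcher:
    """Toy state machine for alternating between two input modes."""

    def __init__(self, threshold=0.5):
        self.mode = "first"        # e.g. touch input
        self.threshold = threshold

    def on_input(self, kind):
        # Detecting input matching the second mode (e.g. a pen)
        # transitions the device into that mode.
        if kind == "second":
            self.mode = "second"

    def on_input_ended(self, sensor_value):
        # Revert to the first mode only when the current sensor state
        # suggests the second-mode input is really gone.
        if self.mode == "second" and sensor_value < self.threshold:
            self.mode = "first"

sw = InputModeSwitcher()
sw.on_input("second")
print(sw.mode)  # second
sw.on_input_ended(sensor_value=0.9)  # sensor still above threshold: stay
print(sw.mode)  # second
sw.on_input_ended(sensor_value=0.1)  # below threshold: fall back
print(sw.mode)  # first
```

The threshold check is what makes the fallback "gradual" rather than immediate: the device waits for the sensor state to confirm the second-mode input has actually stopped.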
-
Patent number: 10936343
Abstract: Computer interfaces are provided for accessing and displaying content from disparate and remotely connected computer systems, and can be used for facilitating collaboration and visualization of physical and cloud resources for distributed event management. Systems are provided for generating, modifying, deploying, accessing, and otherwise managing the computer interfaces. Templates are used to build canvas interfaces that are contextually relevant for different entities based on the context of associated events and the assigned roles of the entities with respect to the different events. The canvas interfaces can be used to access and orchestrate resources associated with the different events.
Type: Grant
Filed: December 18, 2018
Date of Patent: March 2, 2021
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Eli Schwartz, Alok Srivastava, Michael Andrew Foynes, Eli Ben-David, Merav Davidson, Alexander Vakaluk, Nir Levy, Ami Luttwak, Irit Shalom Kantor, Eli Arbel, Eyal Livne, Avner Shahar-Kashtan, Rona Mayk, Ariel Ben-Horesh, Moaid Hathot, Alexander Pshul, Karen Master Ben-Dor, Adi Diamant, Eliazer Carmon
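The template idea — build a canvas whose panels depend on the event type and the viewer's assigned role — can be sketched minimally as below. The event types, panel names, and role rules are invented for illustration; the patent does not specify this schema:

```python
# Hypothetical canvas templates per event type, and per-role visibility.
templates = {
    "conference": ["agenda", "speaker_resources", "room_map"],
}
role_visibility = {
    "organizer": {"agenda", "speaker_resources", "room_map"},
    "attendee": {"agenda", "room_map"},
}

def build_canvas(event_type, role):
    """Return the panels of the canvas for this event and role,
    keeping only the panels the role is allowed to see."""
    return [panel for panel in templates[event_type]
            if panel in role_visibility[role]]

print(build_canvas("conference", "attendee"))   # ['agenda', 'room_map']
print(build_canvas("conference", "organizer"))  # all three panels
```

Filtering the template by role is one straightforward way to make the same event yield different, contextually relevant canvases for different entities.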
-
Patent number: 10867610
Abstract: A method for facilitating a remote conference includes receiving a digital video and a computer-readable audio signal. A face recognition machine is operated to recognize a face of a first conference participant in the digital video, and a speech recognition machine is operated to translate the computer-readable audio signal into a first text. An attribution machine attributes the text to the first conference participant. A second computer-readable audio signal is processed similarly, to obtain a second text attributed to a second conference participant. A transcription machine automatically creates a transcript including the first text attributed to the first conference participant and the second text attributed to the second conference participant.
Type: Grant
Filed: June 29, 2018
Date of Patent: December 15, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Adi Diamant, Karen Master Ben-Dor, Eyal Krupka, Raz Halaly, Yoni Smolin, Ilya Gurvich, Aviv Hurvitz, Lijuan Qin, Wei Xiong, Shixiong Zhang, Lingfeng Wu, Xiong Xiao, Ido Leichter, Moshe David, Xuedong Huang, Amit Kumar Agarwal
-
Patent number: 10762900
Abstract: In non-limiting examples of the present disclosure, systems, methods and devices for executing a command by a digital assistant in a group device environment are presented. A plurality of devices with digital assistants may be clustered for the duration of an event. One of the devices of the cluster may be assigned as an arbitrator device for the cluster. A user may issue a verbal command executable by a digital assistant of the cluster. The user that issued the verbal command may be identified via voice analysis. A determination may be made as to whether the verbal command corresponds to an intent to share content with a plurality of members of the cluster, or a specific member of the cluster, and a device of the cluster may be selected for executing a reply to the verbal command based on the determined intent and the executing device's presentation capabilities.
Type: Grant
Filed: March 7, 2018
Date of Patent: September 1, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Karen Master Ben-Dor, Roni Karassik, Adi Diamant, Adi Miller
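The final selection step — pick which clustered device should execute the reply, based on the determined intent and device capabilities — can be sketched as below. The device records, intent labels, and selection rule are illustrative assumptions, not the patented arbitration logic:

```python
# Hypothetical device registry for one event cluster.
devices = [
    {"id": "room-screen", "owner": None, "can_display": True},
    {"id": "alice-phone", "owner": "alice", "can_display": True},
    {"id": "bob-speaker", "owner": "bob", "can_display": False},
]

def select_device(intent, target=None):
    """Choose the device to execute a reply.

    'share_all' prefers a shared display everyone can see; sharing with
    one member prefers that member's own display-capable device.
    Returns None when no device in the cluster qualifies.
    """
    if intent == "share_all":
        candidates = [d for d in devices
                      if d["can_display"] and d["owner"] is None]
    else:
        candidates = [d for d in devices
                      if d["owner"] == target and d["can_display"]]
    return candidates[0]["id"] if candidates else None

print(select_device("share_all"))           # room-screen
print(select_device("share_one", "alice"))  # alice-phone
print(select_device("share_one", "bob"))    # None (speaker cannot display)
```

In the described system, the arbitrator device would run this kind of selection after voice analysis has identified the speaker and intent classification has resolved whether the content is for everyone or one member.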
-
Publication number: 20190341050
Abstract: A method for facilitating a remote conference includes receiving a digital video and a computer-readable audio signal. A face recognition machine is operated to recognize a face of a first conference participant in the digital video, and a speech recognition machine is operated to translate the computer-readable audio signal into a first text. An attribution machine attributes the text to the first conference participant. A second computer-readable audio signal is processed similarly, to obtain a second text attributed to a second conference participant. A transcription machine automatically creates a transcript including the first text attributed to the first conference participant and the second text attributed to the second conference participant.
Type: Application
Filed: June 29, 2018
Publication date: November 7, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Adi DIAMANT, Karen MASTER BEN-DOR, Eyal KRUPKA, Raz HALALY, Yoni SMOLIN, Ilya GURVICH, Aviv HURVITZ, Lijuan QIN, Wei XIONG, Shixiong ZHANG, Lingfeng WU, Xiong XIAO, Ido LEICHTER, Moshe DAVID, Xuedong HUANG, Amit Kumar AGARWAL
-
Publication number: 20190324986
Abstract: Computer interfaces are provided for accessing and displaying content from disparate and remotely connected computer systems, and can be used for facilitating collaboration and visualization of physical and cloud resources for distributed event management. Systems are provided for generating, modifying, deploying, accessing, and otherwise managing the computer interfaces. Templates are used to build canvas interfaces that are contextually relevant for different entities based on the context of associated events and the assigned roles of the entities with respect to the different events. The canvas interfaces can be used to access and orchestrate resources associated with the different events.
Type: Application
Filed: December 18, 2018
Publication date: October 24, 2019
Inventors: Eli Schwartz, Alok Srivastava, Michael Andrew Foynes, Eli Ben-David, Merav Davidson, Alexander Vakaluk, Nir Levy, Ami Luttwak, Irit Shalom Kantor, Eli Arbel, Eyal Livne, Avner Shahar-Kashtan, Rona Mayk, Ariel Ben-Horesh, Moaid Hathot, Alexander Pshul, Karen Master Ben-Dor, Adi Diamant, Eliazer Carmon
-
Publication number: 20190279615
Abstract: In non-limiting examples of the present disclosure, systems, methods and devices for executing a command by a digital assistant in a group device environment are presented. A plurality of devices with digital assistants may be clustered for the duration of an event. One of the devices of the cluster may be assigned as an arbitrator device for the cluster. A user may issue a verbal command executable by a digital assistant of the cluster. The user that issued the verbal command may be identified via voice analysis. A determination may be made as to whether the verbal command corresponds to an intent to share content with a plurality of members of the cluster, or a specific member of the cluster, and a device of the cluster may be selected for executing a reply to the verbal command based on the determined intent and the executing device's presentation capabilities.
Type: Application
Filed: March 7, 2018
Publication date: September 12, 2019
Inventors: Karen Master BEN-DOR, Roni KARASSIK, Adi DIAMANT, Adi MILLER
-
Publication number: 20190027147
Abstract: Query understanding using integrated image capture and recognition is provided. A user is enabled to speak an utterance which is received by a digital assistant executing on a computing device. The utterance includes a spoken trigger, which is detected by the digital assistant and activates a camera integrated in or communicatively attached to the computing device. The camera captures an image of an object or person of interest. The utterance, the image, and temporally relevant context information are provided to an image integrated query system, which performs speech recognition and image processing on the utterance and the image for understanding the user intent. The understood intent is provided to the digital assistant, which operates to perform a search query or complete a task indicated in the integrated utterance and image data.
Type: Application
Filed: July 18, 2017
Publication date: January 24, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Adi Diamant, Karen Master Ben-Dor
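The fusion step — combining the spoken utterance, the recognized contents of the captured image, and temporal context into one understood query — can be pictured roughly as below. The deictic-word substitution and the context fields are illustrative assumptions, not the patented query-understanding pipeline:

```python
def build_query(utterance, image_labels, context):
    """Fuse an utterance with image-recognition labels and context.

    utterance: text from speech recognition; image_labels: object labels
    from image processing (assumed); context: temporally relevant
    metadata such as location (assumed).
    """
    # Resolve a deictic reference like "this" against the top image
    # label, e.g. "what is this" -> "what is espresso machine".
    if "this" in utterance and image_labels:
        utterance = utterance.replace("this", image_labels[0])
    return {"query": utterance, "location": context.get("location")}

q = build_query("what is this", ["espresso machine"], {"location": "kitchen"})
print(q)  # {'query': 'what is espresso machine', 'location': 'kitchen'}
```

The resulting structured intent is what the digital assistant would act on, either as a search query or as a task to complete.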
-
Patent number: 9870063
Abstract: A system for associating between a computerized model of multimodal human interaction and application functions, comprising: (a) an interface for receiving instructions from a programmer defining one or more application functions; (b) a memory storing hand gestures, each defined by a dataset of discrete pose values and discrete motion values; (c) a code store storing a code; and (d) one or more processors coupled to the interface, the memory and the code store for executing the stored code, which comprises: (1) code instructions to define a logical sequence of user input per instructions of the programmer, where the logical sequence combines hand gestures with non-gesture user input; and (2) code instructions to associate the logical sequence with the application function(s) for initiating an execution of the application function(s) during runtime of the application, in response to detection of the logical sequence by analyzing captured data depicting a user during runtime.
Type: Grant
Filed: December 31, 2015
Date of Patent: January 16, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kfir Karmon, Adi Diamant, Karen Master Ben-Dor, Eyal Krupka
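The binding described above — a logical sequence mixing a hand gesture with non-gesture input, associated with an application function that fires when the sequence is detected — can be sketched minimally as below. The sequence encoding and event names are illustrative, not the patented model:

```python
# Map from logical input sequences to application functions.
bindings = {}

def bind(sequence, func):
    """Associate a logical sequence (gesture + non-gesture inputs,
    given as a list of event labels) with an application function."""
    bindings[tuple(sequence)] = func

def dispatch(observed):
    """Invoke the bound function when the observed sequence matches.
    In a real system `observed` would come from analyzing captured
    data (e.g. camera frames and audio) at runtime."""
    func = bindings.get(tuple(observed))
    if func:
        func()

calls = []
bind(["pinch_gesture", "say:open"], lambda: calls.append("open_document"))
dispatch(["pinch_gesture", "say:open"])  # matches: function fires
dispatch(["wave_gesture"])               # no binding: nothing happens
print(calls)  # ['open_document']
```

The point of the design is that the programmer declares the sequence once; detection against captured user data at runtime is what triggers the associated function.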
-
Publication number: 20170300090
Abstract: Automatically alternating between input modes on a computing device based on a usage pattern is provided. A first input mode is initiated for interacting with content displayed on the computing device. An input corresponding to a second input mode on the computing device is then detected. A transition is then made from the first input mode to the second input mode on the computing device. Upon detecting a termination of the input on the displayed content in the second input mode, a gradual transition is made from the second input mode to the first input mode based on a current sensor state of the computing device and a threshold.
Type: Application
Filed: June 29, 2017
Publication date: October 19, 2017
Applicant: Microsoft Technology Licensing, LLC
Inventors: Oded ELYADA, Jeffrey M. JO, Shaiket S. DAS, Aditya R. KALRO, Zijia ZHENG, Karen MASTER BEN-DOR, Adi DIAMANT, Inbal ORT BENGAL
-
Patent number: 9772764
Abstract: Automatically alternating between input modes on a computing device based on a usage pattern is provided. A first input mode is initiated for interacting with content displayed on the computing device. An input corresponding to a second input mode on the computing device is then detected. A transition is then made from the first input mode to the second input mode on the computing device. Upon detecting a termination of the input on the displayed content in the second input mode, a gradual transition is made from the second input mode to the first input mode based on a current sensor state of the computing device and a threshold.
Type: Grant
Filed: June 6, 2013
Date of Patent: September 26, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Oded Elyada, Jeffrey M. Jo, Shaiket S. Das, Aditya R. Kalro, Zijia Zheng, Karen Master Ben-Dor, Adi Diamant, Inbal Ort Bengal
-
Publication number: 20170192512
Abstract: A system for associating between a computerized model of multimodal human interaction and application functions, comprising: (a) an interface for receiving instructions from a programmer defining one or more application functions; (b) a memory storing hand gestures, each defined by a dataset of discrete pose values and discrete motion values; (c) a code store storing a code; and (d) one or more processors coupled to the interface, the memory and the code store for executing the stored code, which comprises: (1) code instructions to define a logical sequence of user input per instructions of the programmer, where the logical sequence combines hand gestures with non-gesture user input; and (2) code instructions to associate the logical sequence with the application function(s) for initiating an execution of the application function(s) during runtime of the application, in response to detection of the logical sequence by analyzing captured data depicting a user during runtime.
Type: Application
Filed: December 31, 2015
Publication date: July 6, 2017
Inventors: Kfir Karmon, Adi Diamant, Karen Master Ben-Dor, Eyal Krupka