Patents by Inventor Eyal Krupka
Eyal Krupka has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20190341050
Abstract: A method for facilitating a remote conference includes receiving a digital video and a computer-readable audio signal. A face recognition machine is operated to recognize a face of a first conference participant in the digital video, and a speech recognition machine is operated to translate the computer-readable audio signal into a first text. An attribution machine attributes the first text to the first conference participant. A second computer-readable audio signal is processed similarly to obtain a second text attributed to a second conference participant. A transcription machine automatically creates a transcript including the first text attributed to the first conference participant and the second text attributed to the second conference participant.
Type: Application
Filed: June 29, 2018
Publication date: November 7, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Adi DIAMANT, Karen MASTER BEN-DOR, Eyal KRUPKA, Raz HALALY, Yoni SMOLIN, Ilya GURVICH, Aviv HURVITZ, Lijuan QIN, Wei XIONG, Shixiong ZHANG, Lingfeng WU, Xiong XIAO, Ido LEICHTER, Moshe DAVID, Xuedong HUANG, Amit Kumar AGARWAL
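The attribution-and-transcription step described in this abstract can be sketched as follows. This is a minimal illustration, assuming the face recognition and speech recognition machines have already produced (participant, text) pairs; the `Utterance` type and `build_transcript` helper are hypothetical names, not from the patent.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    participant: str  # name attributed via face recognition (assumed upstream)
    text: str         # output of the speech recognition machine

def build_transcript(utterances):
    """Combine per-participant recognized speech into a single transcript.

    Sketches only the transcription machine's final step: each text is
    already attributed to a conference participant.
    """
    return "\n".join(f"{u.participant}: {u.text}" for u in utterances)

transcript = build_transcript([
    Utterance("Alice", "Let's review the agenda."),
    Utterance("Bob", "Sounds good."),
])
```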
-
Publication number: 20190341054
Abstract: Multi-modal speech localization is achieved using image data captured by one or more cameras and audio data captured by a microphone array. Audio data captured by each microphone of the array is transformed to obtain a frequency domain representation that is discretized into a plurality of frequency intervals. Image data captured by each camera is used to determine a positioning of each human face. Input data is provided to a previously trained audio source localization classifier, including the frequency domain representation of the audio data captured by each microphone and the positioning of each human face captured by each camera, in which the positioning of each human face represents a candidate audio source. Based on the input data, the classifier indicates an identified audio source: the human face estimated to be the one from which the audio data originated.
Type: Application
Filed: June 27, 2018
Publication date: November 7, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Eyal KRUPKA, Xiong XIAO
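The classifier input described above (per-microphone frequency-domain features plus candidate face positions) could be assembled roughly as below. This is a sketch under assumed shapes and feature choices; the trained localization classifier itself is not reproduced, and the binning scheme is illustrative.

```python
import numpy as np

def localization_features(mic_signals, face_positions, n_bins=8):
    """Build one input vector for an audio-source localization classifier:
    a discretized frequency-domain representation per microphone, followed
    by the (x, y) positioning of each candidate human face."""
    feats = []
    for sig in mic_signals:
        spectrum = np.abs(np.fft.rfft(sig))
        # discretize the spectrum into a fixed number of frequency intervals
        for interval in np.array_split(spectrum, n_bins):
            feats.append(interval.mean())
    for (x, y) in face_positions:
        feats.extend([x, y])
    return np.array(feats)

sig = np.sin(np.linspace(0.0, 10.0, 256))
f = localization_features([sig, sig], [(0.2, 0.5), (0.8, 0.5)])
```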
-
Patent number: 10460201
Abstract: A computer implemented method of training an image classifier, comprising: receiving training image data labeled according to image classes; selecting reference points of the images; and constructing a set of voting convolutional tables and binary features on a patch surrounding each reference point by performing, for each calculation stage: creating a voting table by: creating first candidate binary features; calculating a global loss reduction for each first candidate binary feature; selecting one first candidate binary feature having minimal global loss reduction; and repeating to select stage-size binary features; and performing a tree split using the voting table by: creating second candidate binary features; calculating a combined loss reduction for each stage-split size group of the second candidate binary features; selecting one of the groups having a maximal combined loss reduction; and creating a child-directing table using the selected binary features.
Type: Grant
Filed: December 31, 2015
Date of Patent: October 29, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Eyal Krupka, Aharon Bar Hillel
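The per-stage greedy selection loop in this abstract can be illustrated with a toy sketch. Here `loss_reduction` is a hypothetical stand-in for the global loss criterion computed over training patches, and the candidates are opaque tokens; the selection criterion follows the abstract's wording.

```python
def select_stage_features(candidates, loss_reduction, stage_size):
    """Greedy per-stage selection: score every remaining candidate binary
    feature with the global criterion, keep one per round, and repeat
    until stage_size features are chosen for the voting table."""
    chosen = []
    pool = list(candidates)
    for _ in range(stage_size):
        best = min(pool, key=loss_reduction)  # criterion per the abstract
        chosen.append(best)
        pool.remove(best)
    return chosen

# toy scores: the candidate's own value plays the role of the criterion
stage = select_stage_features([3, 1, 2], lambda c: c, stage_size=2)
```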
-
Publication number: 20190303177
Abstract: In one embodiment, a computing system generates a reference distance to classify a user in a first position or a second position, and measures a current distance of the user using a camera. The current distance is measured using features of the user and is not a measurement of the distance from the user to the camera. The computing system determines that the user is in one of the first position and the second position based on comparing the reference distance to the current distance. A user interface operates in a first mode when the user is in the first position and operates in a second mode when the user is in the second position.
Type: Application
Filed: March 29, 2018
Publication date: October 3, 2019
Inventors: Keren MASTER, Eyal KRUPKA, Ido LEICHTER, Raz HALALY
-
Patent number: 10310618
Abstract: A system for creating hand gesture representations, comprising an interface for interacting with a user, a storage storing a plurality of discrete pose values and discrete motion values, a memory storing a gesture visual builder code, and one or more processors coupled to the interface, storage, and memory to execute the gesture visual builder code, allowing the user to create hand gestures. The gesture visual builder code comprises code instructions to present the user with a GUI which displays a hierarchical menu driven interface, code instructions to iteratively receive user instructions through the hierarchical menu driven interface for creating a logical sequence of a hand gesture by defining one or more hand pose features records and hand motion features records, and code instructions to generate a code segment defining the one or more hand pose/motion features records through the discrete pose/motion values respectively.
Type: Grant
Filed: December 31, 2015
Date of Patent: June 4, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kfir Karmon, Eyal Krupka, Yuval Tzairi, Uri Levanon, Shelly Horowitz
-
Patent number: 10296811
Abstract: A user's collection of images may be analyzed to identify people's faces within the images, then create clusters of similar faces, where each of the clusters may represent a person. The clusters may be ranked in order of size to determine a relative importance of the associated person to the user. The ranking may be used in many social networking applications to filter and present content that may be of interest to the user. In one use scenario, the clusters may be used to identify images from a second user's image collection, where the identified images may be pertinent or interesting to the first user. The ranking may also be a function of user interactions with the images, as well as other input not related to the images. The ranking may be incrementally updated when new images are added to the user's collection.
Type: Grant
Filed: August 30, 2016
Date of Patent: May 21, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Eyal Krupka, Igor Abramovski, Igor Kviatkovsky
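The cluster-size ranking step described here is straightforward to sketch. The clustering itself (grouping similar faces, e.g. by face embeddings) is assumed to have happened upstream; this toy takes one cluster label per detected face and ranks people by how often they appear.

```python
from collections import Counter

def rank_people(face_labels):
    """Rank cluster labels (one per detected face) by cluster size,
    a proxy for the associated person's importance to the user."""
    counts = Counter(face_labels)
    return [person for person, _ in counts.most_common()]

ranking = rank_people(["mom", "mom", "friend", "mom", "friend", "coworker"])
```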
-
Patent number: 10230934
Abstract: Depth map correction using lookup tables is described. In an example, depth maps may be generated that measure a depth to an object using differences in phase between light transmitted from a camera which illuminates the object and light received at the camera which has been reflected from the object. In various embodiments, depth maps may be subject to errors caused by received light undergoing multiple reflections before being received by the camera. In an example, a correction for an estimated depth of an object may be computed and stored in a lookup table which maps the amplitude and phase of the received light to a depth correction. In an example, the amplitude and phase at each modulation frequency may be used to access a lookup table which stores corrections for the depth of an object, allowing an accurate depth map to be obtained.
Type: Grant
Filed: June 14, 2013
Date of Patent: March 12, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventor: Eyal Krupka
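The lookup-table correction can be sketched as a table keyed by quantized (amplitude, phase) pairs. The quantization steps and the correction values below are made-up illustrations, not values from the patent; a real table would be precomputed from a multipath error model.

```python
def lut_key(amplitude, phase, amp_step=0.1, phase_step=0.1):
    # quantize amplitude and phase so they can index a precomputed table
    return (round(amplitude / amp_step), round(phase / phase_step))

# hypothetical precomputed corrections (metres) for multipath error
correction_lut = {
    lut_key(0.8, 1.2): -0.03,
    lut_key(0.5, 2.0): -0.07,
}

def corrected_depth(raw_depth, amplitude, phase):
    """Apply a lookup-table depth correction; unmatched measurements
    fall back to no correction."""
    return raw_depth + correction_lut.get(lut_key(amplitude, phase), 0.0)
```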
-
Patent number: 10139921
Abstract: Hand gesture detection electrical device for detecting hand gestures, comprising an IC electronically integrating: (a) First interface connecting to imaging device(s). (b) Second interface connecting to controlled unit. (c) Data storage storing sequential logic models representing hand gestures. The sequential logic models map a sequence of pre-defined hand poses and/or motions. (d) Memory storing code. (e) Processor(s) coupled to the first and second interfaces, data storage and memory for executing the code to: (1) Receive timed images depicting a user's moving hand. (2) Generate a runtime sequence mapping runtime hand datasets, each defined by discrete hand values indicating the current state of the moving hand. (3) Estimate which hand gesture(s) best match the runtime sequence by optimizing the runtime sequence compared to the sequential logic models using SSVM functions. (4) Initiate action(s) to the controlled unit. The action(s) are associated with the selected hand gesture(s) based on the estimation.
Type: Grant
Filed: December 27, 2017
Date of Patent: November 27, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kfir Karmon, Eyal Krupka, Adi Diamant
-
Publication number: 20180307319
Abstract: A gesture recognition method comprises receiving, at a processor from a sensor, a sequence of captured signal frames for extracting hand pose information for a hand, and using at least one trained predictor executed on the processor to extract hand pose information from the received signal frames. For at least one defined gesture, defined as a time sequence comprising hand poses, with each of the hand poses defined as a conjunction or disjunction of qualitative propositions relating to interest points on the hand, truth values are computed for the qualitative propositions using the hand pose information extracted from the received signal frames, and execution of the gesture is tracked by using the truth values to determine which of the hand poses in the time sequence have already been executed and which of the hand poses in the time sequence is expected next.
Type: Application
Filed: August 7, 2017
Publication date: October 25, 2018
Inventors: Kfir KARMON, Eyal KRUPKA, Noam BLOOM, Ilya GURVICH, Aviv HURVITZ, Ido LEICHTER, Yoni SMOLIN, Yuval TZAIRI, Alon VINNIKOV, Aharon BAR-HILLEL
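The gesture-tracking idea above — a gesture as an ordered sequence of poses, each pose a boolean condition over extracted hand-pose information — can be sketched as a small progress tracker. The predicates here are trivial dictionary lookups standing in for conjunctions/disjunctions over hand interest points.

```python
def track_gesture(pose_predicates, frames):
    """Track progress through a gesture defined as a time sequence of hand
    poses. Each predicate maps a frame's extracted hand-pose info to a
    truth value; returns the index of the next expected pose."""
    expected = 0
    for frame in frames:
        if expected < len(pose_predicates) and pose_predicates[expected](frame):
            expected += 1  # this pose has now been executed
    return expected

pinch_then_spread = [lambda f: f["pinch"], lambda f: f["spread"]]
progress = track_gesture(pinch_then_spread, [
    {"pinch": False, "spread": False},
    {"pinch": True, "spread": False},
    {"pinch": False, "spread": True},
])
```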
-
Publication number: 20180120950
Abstract: Hand gesture detection electrical device for detecting hand gestures, comprising an IC electronically integrating: (a) First interface connecting to imaging device(s). (b) Second interface connecting to controlled unit. (c) Data storage storing sequential logic models representing hand gestures. The sequential logic models map a sequence of pre-defined hand poses and/or motions. (d) Memory storing code. (e) Processor(s) coupled to the first and second interfaces, data storage and memory for executing the code to: (1) Receive timed images depicting a user's moving hand. (2) Generate a runtime sequence mapping runtime hand datasets, each defined by discrete hand values indicating the current state of the moving hand. (3) Estimate which hand gesture(s) best match the runtime sequence by optimizing the runtime sequence compared to the sequential logic models using SSVM functions. (4) Initiate action(s) to the controlled unit. The action(s) are associated with the selected hand gesture(s) based on the estimation.
Type: Application
Filed: December 27, 2017
Publication date: May 3, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Kfir KARMON, Eyal KRUPKA, Adi DIAMANT
-
Patent number: 9898256
Abstract: A system for injecting a code section into code edited by a graphical user interface (GUI) of an integrated development environment (IDE), comprising: a memory storing a dataset associating each code segment with one hand pose feature or hand motion feature; an imager adapted to capture images of a hand while an IDE is being executed on a client terminal; and a processor for executing code of an application, comprising: code instructions to identify at least one of the features and at least one discrete value of the identified features from an analysis of the images; code instructions to select at least one of the code segments associated with the identified features; and code instructions to automatically add a code section, generated based on the code segments and the discrete value, to code presented by a code editor of the IDE.
Type: Grant
Filed: December 31, 2015
Date of Patent: February 20, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kfir Karmon, Adi Diamant, Eyal Krupka
-
Patent number: 9870063
Abstract: A system for associating between a computerized model of multimodal human interaction and application functions, comprising: (a) An interface for receiving instructions from a programmer defining one or more application functions. (b) A memory storing hand gestures, each defined by a dataset of discrete pose values and discrete motion values. (c) A code store storing a code. (d) One or more processors coupled to the interface, the memory and the code store for executing the stored code, which comprises: (1) Code instructions to define a logical sequence of user input per instructions of the programmer. The logical sequence combines hand gestures with non-gesture user input. (2) Code instructions to associate the logical sequence with the application function(s) for initiating an execution of the application function(s) during runtime of the application in response to detection of the logical sequence by analyzing captured data depicting a user during runtime.
Type: Grant
Filed: December 31, 2015
Date of Patent: January 16, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kfir Karmon, Adi Diamant, Karen Master Ben-Dor, Eyal Krupka
-
Patent number: 9857881
Abstract: Hand gesture detection electrical device for detecting hand gestures, comprising an IC electronically integrating: (a) First interface connecting to imaging device(s). (b) Second interface connecting to controlled unit. (c) Data storage storing sequential logic models representing hand gestures. The sequential logic models map a sequence of pre-defined hand poses and/or motions. (d) Memory storing code. (e) Processor(s) coupled to the first and second interfaces, data storage and memory for executing the code to: (1) Receive timed images depicting a user's moving hand. (2) Generate a runtime sequence mapping runtime hand datasets, each defined by discrete hand values indicating the current state of the moving hand. (3) Estimate which hand gesture(s) best match the runtime sequence by optimizing the runtime sequence compared to the sequential logic models using SSVM functions. (4) Initiate action(s) to the controlled unit. The action(s) are associated with the selected hand gesture(s) based on the estimation.
Type: Grant
Filed: December 31, 2015
Date of Patent: January 2, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kfir Karmon, Eyal Krupka, Adi Diamant
-
Patent number: 9819677
Abstract: A computer may identify an individual according to one or more biometrics based on various physiological aspects of the individual, such as metrics of various features of the face, gait, fingerprint, or voice of the individual. However, biometrics are often computationally intensive to compute, inaccurate, and unable to scale to identify an individual among a large set of known individuals. Therefore, the biometric identification of an individual may be supplemented by identifying one or more devices associated with the individual (e.g., a mobile phone, a vehicle driven by the individual, or an implanted medical device). When an individual is registered for identification, various device identifiers of devices associated with the individual may be stored along with the biometrics of the individual. Individuals may then be identified using both biometrics and detected device identifiers, thereby improving the efficiency, speed, accuracy, and scalability of the identification.
Type: Grant
Filed: June 20, 2016
Date of Patent: November 14, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Nir Nice, Eyal Krupka
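Supplementing biometrics with device identifiers, as described above, can be sketched as a combined scoring rule: candidates whose registered devices are detected nearby get a score boost over biometrics alone. The weight and the registry layout are illustrative assumptions.

```python
def identify(biometric_scores, detected_devices, registry, device_boost=0.3):
    """Pick the best-matching registered person by combining a biometric
    match score with a boost for each person whose registered device
    identifiers intersect the set of detected devices."""
    best, best_score = None, 0.0
    for person, devices in registry.items():
        score = biometric_scores.get(person, 0.0)
        if devices & detected_devices:  # a registered device was detected
            score += device_boost
        if score > best_score:
            best, best_score = person, score
    return best

who = identify(
    {"alice": 0.55, "bob": 0.60},          # ambiguous biometrics alone
    {"phone-123"},                          # detected device identifiers
    {"alice": {"phone-123"}, "bob": {"car-9"}},
)
```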
-
Patent number: 9734435
Abstract: Computer implemented method for computing a feature dataset classifying a pose of a human hand, comprising: (a) Selecting a global orientation category (GOC) defining a spatial orientation of a human hand in a 3D space by applying GOC classifying functions on a received image segment depicting the hand. (b) Identifying in-plane rotation by applying in-plane rotation classifying functions on the image segment; the in-plane rotation classifying functions are selected according to said GOC. (c) Aligning the image segment in a 2D plane according to the in-plane rotation. (d) Applying hand pose feature classifying functions on the aligned image segment. Each of the feature classifying functions outputs a current discrete pose value of an associated hand feature. (e) Outputting a features dataset defining a current discrete pose value for each of the hand pose features for classifying the current hand pose of the hand.
Type: Grant
Filed: December 31, 2015
Date of Patent: August 15, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Eyal Krupka, Alon Vinnikov, Kfir Karmon
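The staged pipeline in steps (a) through (e) — orientation category, then a rotation classifier conditioned on that category, then alignment, then per-feature classifiers — can be sketched with stand-in classifiers. All classifier functions and the alignment placeholder below are hypothetical.

```python
def classify_hand_pose(segment, goc_classifier, rotation_classifiers,
                       feature_classifiers):
    """Staged hand-pose pipeline sketch: (a) pick a global orientation
    category, (b) pick the in-plane rotation using a classifier selected
    by that category, (c) align, then (d)-(e) emit one discrete pose
    value per hand feature."""
    goc = goc_classifier(segment)
    rotation = rotation_classifiers[goc](segment)
    aligned = ("aligned", segment, rotation)  # placeholder 2D alignment
    return {name: clf(aligned) for name, clf in feature_classifiers.items()}

features = classify_hand_pose(
    "image-segment",
    lambda s: "palm-facing",                  # stand-in GOC classifier
    {"palm-facing": lambda s: 15},            # stand-in rotation (degrees)
    {"thumb": lambda a: "extended", "index": lambda a: "folded"},
)
```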
-
Publication number: 20170193328
Abstract: A computer implemented method of training an image classifier, comprising: receiving training image data labeled according to image classes; selecting reference points of the images; and constructing a set of voting convolutional tables and binary features on a patch surrounding each reference point by performing, for each calculation stage: creating a voting table by: creating first candidate binary features; calculating a global loss reduction for each first candidate binary feature; selecting one first candidate binary feature having minimal global loss reduction; and repeating to select stage-size binary features; and performing a tree split using the voting table by: creating second candidate binary features; calculating a combined loss reduction for each stage-split size group of the second candidate binary features; selecting one of the groups having a maximal combined loss reduction; and creating a child-directing table using the selected binary features.
Type: Application
Filed: December 31, 2015
Publication date: July 6, 2017
Inventors: Eyal KRUPKA, Aharon BAR HILLEL
-
Publication number: 20170193288
Abstract: Computer implemented method for detecting a hand gesture of a user, comprising: (a) Receiving sequential logic models, each representing a hand gesture. The sequential logic model maps pre-defined hand poses and motions, each represented by a hand features record defined by discrete hand values, each indicating a state of a respective hand feature. (b) Receiving a runtime sequence of runtime hand datasets, each defined by discrete hand value scores indicating the current state of hand features of a user's moving hand, which are inferred by analyzing timed images depicting the moving hand. (c) Submitting the runtime hand datasets and the pre-defined hand features records to SSVM functions to generate estimation terms for the runtime hand datasets with respect to the hand features records. (d) Estimating which of the hand gestures best matches the runtime sequence depicted in the timed images by optimizing score functions using the estimation terms for the runtime hand datasets.
Type: Application
Filed: December 31, 2015
Publication date: July 6, 2017
Inventors: Daniel FREEDMAN, Kfir KARMON, Eyal KRUPKA, Yagil ENGEL, Yevgeny SHAPIRO
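Step (d) — picking the gesture whose score over the estimation terms is best — can be sketched as a maximization over gesture models. The `term` function below is a toy stand-in for the learned SSVM estimation terms (here, negative distance between discrete values), and the model layout is hypothetical.

```python
def best_gesture(runtime_sequence, gesture_models, term):
    """Return the name of the gesture model whose total score, summed
    over per-step estimation terms between runtime hand datasets and the
    model's pre-defined hand feature records, is maximal."""
    def score(model):
        return sum(term(r, m) for r, m in zip(runtime_sequence, model["records"]))
    return max(gesture_models, key=score)["name"]

wave = {"name": "wave", "records": [1, 2, 1]}
swipe = {"name": "swipe", "records": [3, 3, 3]}
# toy term: closer discrete values score higher
match = best_gesture([1, 2, 1], [wave, swipe], lambda r, m: -abs(r - m))
```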
-
Publication number: 20170192514
Abstract: A system for creating hand gesture representations, comprising: (a) An interface for interacting with a user. (b) A storage storing a plurality of discrete pose values and discrete motion values. (c) A memory storing a gesture visual builder code. (d) One or more processors coupled to the interface, storage and memory to execute the gesture visual builder code, allowing the user to create hand gestures. The gesture visual builder code comprising: (1) Code instructions to present the user with a GUI which displays a hierarchical menu driven interface. (2) Code instructions to iteratively receive user instructions through the hierarchical menu driven interface for creating a logical sequence of a hand gesture by defining one or more hand pose features records and hand motion features records. (3) Code instructions to generate a code segment defining the one or more hand pose/motion features records through the discrete pose/motion values respectively.
Type: Application
Filed: December 31, 2015
Publication date: July 6, 2017
Inventors: Kfir KARMON, Eyal KRUPKA, Yuval TZAIRI, Uri LEVANON, Shelly HOROWITZ
-
Publication number: 20170193334
Abstract: Computer implemented method for computing a feature dataset classifying a pose of a human hand, comprising: (a) Selecting a global orientation category (GOC) defining a spatial orientation of a human hand in a 3D space by applying GOC classifying functions on a received image segment depicting the hand. (b) Identifying in-plane rotation by applying in-plane rotation classifying functions on the image segment; the in-plane rotation classifying functions are selected according to said GOC. (c) Aligning the image segment in a 2D plane according to the in-plane rotation. (d) Applying hand pose feature classifying functions on the aligned image segment. Each of the feature classifying functions outputs a current discrete pose value of an associated hand feature. (e) Outputting a features dataset defining a current discrete pose value for each of the hand pose features for classifying the current hand pose of the hand.
Type: Application
Filed: December 31, 2015
Publication date: July 6, 2017
Inventors: Eyal KRUPKA, Alon VINNIKOV, Kfir KARMON
-
Publication number: 20170192513
Abstract: Hand gesture detection electrical device for detecting hand gestures, comprising an IC electronically integrating: (a) First interface connecting to imaging device(s). (b) Second interface connecting to controlled unit. (c) Data storage storing sequential logic models representing hand gestures. The sequential logic models map a sequence of pre-defined hand poses and/or motions. (d) Memory storing code. (e) Processor(s) coupled to the first and second interfaces, data storage and memory for executing the code to: (1) Receive timed images depicting a user's moving hand. (2) Generate a runtime sequence mapping runtime hand datasets, each defined by discrete hand values indicating the current state of the moving hand. (3) Estimate which hand gesture(s) best match the runtime sequence by optimizing the runtime sequence compared to the sequential logic models using SSVM functions. (4) Initiate action(s) to the controlled unit. The action(s) are associated with the selected hand gesture(s) based on the estimation.
Type: Application
Filed: December 31, 2015
Publication date: July 6, 2017
Inventors: Kfir KARMON, Eyal KRUPKA, Adi DIAMANT