Patents by Inventor Shmuel Avidan

Shmuel Avidan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20080021899
    Abstract: A computer implemented method classifies securely a private query sample using exact k-nn classification. A secure dot product protocol is applied to determine securely distances between a private query sample and a plurality of private labeled samples. A secure k-rank protocol is applied to the distances to determine a nearest distance of a kth nearest labeled sample having a particular label. Then, a secure Parzen protocol is applied to the nearest distance to label the private query sample according to the particular label.
    Type: Application
    Filed: July 21, 2006
    Publication date: January 24, 2008
    Inventors: Shmuel Avidan, Ariel Elbaz
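    Illustrative sketch (editor's addition, not part of the patent text): a plain, non-secure Python analogue of the three stages the abstract names — distances from dot products, a k-rank step, and a Parzen-style vote; the secure protocols themselves are omitted and all names are hypothetical.

```python
import numpy as np

def knn_label(query, samples, labels, k=5):
    """Non-secure analogue of the protocol: distances via dot products,
    rank to find the k-th nearest labeled sample, then a Parzen-style
    vote among all samples within that radius."""
    # Squared Euclidean distances expressed through dot products,
    # mirroring the secure dot product stage.
    d = np.sum(samples ** 2, axis=1) - 2 * samples @ query + query @ query
    # k-rank stage: distance of the k-th nearest labeled sample.
    kth_dist = np.partition(d, k - 1)[k - 1]
    # Parzen stage: label by majority among samples inside that radius.
    inside = labels[d <= kth_dist]
    values, counts = np.unique(inside, return_counts=True)
    return values[np.argmax(counts)]
```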
  • Publication number: 20070237387
    Abstract: A method and system is presented for detecting humans in images of a scene acquired by a camera. Gradients of pixels in the image are determined and sorted into bins of a histogram. An integral image is stored for each bin of the histogram. Features are extracted from the integral images, the extracted features corresponding to a subset of a substantially larger set of variably sized and randomly selected blocks of pixels in the test image. The features are applied to a cascaded classifier to determine whether the test image includes a human or not.
    Type: Application
    Filed: April 11, 2006
    Publication date: October 11, 2007
    Inventors: Shmuel Avidan, Qiang Zhu
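    Illustrative sketch (editor's addition): the integral-histogram idea the abstract relies on — one integral image per orientation bin, so the histogram of any block comes from four lookups; the cascaded classifier on top is omitted.

```python
import numpy as np

def orientation_integral_images(img, n_bins=9):
    """Quantize gradient orientations into bins and keep one integral
    image per bin."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)               # unsigned orientation
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    integrals = []
    for b in range(n_bins):
        layer = np.where(bins == b, mag, 0.0)
        integrals.append(np.pad(layer.cumsum(0).cumsum(1), ((1, 0), (1, 0))))
    return integrals

def block_histogram(integrals, y0, x0, y1, x1):
    """Orientation histogram of the block [y0:y1, x0:x1) via the
    four-corner rule on each bin's integral image."""
    return np.array([ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
                     for ii in integrals])
```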
  • Publication number: 20070156471
    Abstract: A method maximizes a candidate solution to a cardinality-constrained combinatorial optimization problem of sparse principal component analysis. An approximate method has as input a covariance matrix A, a candidate solution, and a sparsity parameter k. A variational renormalization for the candidate solution vector x with regards to the eigenvalue structure of the covariance matrix A and the sparsity parameter k is then performed by means of a sub-matrix eigenvalue decomposition of A to obtain a variance maximized k-sparse eigenvector x that is the best possible solution. Another method solves the problem by means of a nested greedy search technique that includes a forward and backward pass. An exact solution to the problem initializes a branch-and-bound search with an output of a greedy solution.
    Type: Application
    Filed: November 29, 2005
    Publication date: July 5, 2007
    Inventors: Baback Moghaddam, Yair Weiss, Shmuel Avidan
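    Illustrative sketch (editor's addition): the sub-matrix eigen-decomposition step in Python — keep the k largest-magnitude entries of the candidate as the support and replace them with the leading eigenvector of the corresponding k-by-k sub-matrix of A; the greedy and branch-and-bound searches are omitted.

```python
import numpy as np

def variational_renormalize(A, x, k):
    """Variance-maximizing k-sparse eigenvector for the support
    implied by the candidate solution x."""
    support = np.argsort(-np.abs(x))[:k]          # zero-pattern of the candidate
    w, V = np.linalg.eigh(A[np.ix_(support, support)])
    x_hat = np.zeros_like(x, dtype=float)
    x_hat[support] = V[:, -1]                     # eigenvector of the top eigenvalue
    return x_hat, w[-1]                           # sparse vector and its variance
```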
  • Publication number: 20070122041
    Abstract: A computer implemented method maximizes candidate solutions to a cardinality-constrained combinatorial optimization problem of sparse linear discriminant analysis. A candidate sparse solution vector x with k non-zero elements is inputted, along with a pair of covariance matrices A, B measuring between-class and within-class covariance of binary input data to be classified, the sparsity parameter k denoting a desired cardinality of a final solution vector. A variational renormalization of the candidate solution vector x is performed with regards to the pair of covariance matrices A, B and the sparsity parameter k to obtain a variance maximized discriminant eigenvector {circumflex over (x)} with cardinality k that is locally optimal for the sparsity parameter k and zero-pattern of the candidate sparse solution vector x, and is the final solution vector for the sparse linear discriminant analysis optimization problem.
    Type: Application
    Filed: May 25, 2006
    Publication date: May 31, 2007
    Inventors: Baback Moghaddam, Yair Weiss, Shmuel Avidan
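    Illustrative sketch (editor's addition): the same renormalization idea for the discriminant case — on the candidate's support, maximize the Rayleigh quotient x'Ax / x'Bx by solving a k-by-k generalized eigenproblem; a sketch only, not the patented procedure.

```python
import numpy as np

def sparse_lda_renormalize(A, B, x, k):
    """k-sparse discriminant direction for the support implied by x,
    with A and B the between- and within-class covariance matrices."""
    support = np.argsort(-np.abs(x))[:k]
    A_s = A[np.ix_(support, support)]
    B_s = B[np.ix_(support, support)]
    vals, vecs = np.linalg.eig(np.linalg.solve(B_s, A_s))
    x_hat = np.zeros_like(x, dtype=float)
    x_hat[support] = vecs[:, np.argmax(vals.real)].real
    return x_hat
```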
  • Publication number: 20070081664
    Abstract: A method for securely classifying private data x of a first party Alice using a classifier H(x) of a second party Bob. The classifier is H(x) = sign(Σ_{n=1}^{N} h_n(x)), where h_n(x) = α_n if x^T y_n > θ_n and β_n otherwise; α_n, β_n and θ_n are scalar values and y_n is a vector storing parameters of the classifier. Bob generates a set of N random numbers, s_1, . . . , s_N, such that s = Σ_{n=1}^{N} s_n. For each n = 1, . . . , N, the following substeps are performed: applying a secure dot product to x^T y_n to obtain a_n for Alice and b_n for Bob; applying a secure millionaire protocol to determine whether a_n is larger than θ_n − b_n, and returning a result of α_n + s_n or β_n + s_n; and accumulating, by Alice, the result in c_n.
    Type: Application
    Filed: October 7, 2005
    Publication date: April 12, 2007
    Inventors: Shmuel Avidan, Ariel Elbaz
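    Illustrative sketch (editor's addition): the classifier being evaluated, written out in the clear; in the protocol each term would only ever be seen blinded by Bob's random s_n, which sum to s and cancel when the final sign is recovered.

```python
import numpy as np

def classify(x, Y, alpha, beta, theta):
    """H(x) = sign(sum_n h_n(x)) with
    h_n(x) = alpha_n if x . y_n > theta_n, else beta_n."""
    dots = Y @ x                                # secure-dot-product stage
    h = np.where(dots > theta, alpha, beta)     # millionaire-comparison stage
    return np.sign(h.sum())
```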
  • Publication number: 20070070226
    Abstract: A method extracts an alpha matte from images acquired of a scene by cameras. A depth plane is selected for a foreground in the scene. A trimap is determined from a set of images acquired of the scene. An epipolar plane image is constructed from the set of images and the trimap, the epipolar plane image including scan lines. Variances of intensities are measured along the scan lines in the epipolar image, and an alpha matte is extracted according to the variances.
    Type: Application
    Filed: September 29, 2005
    Publication date: March 29, 2007
    Inventors: Wojciech Matusik, Shmuel Avidan
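    Illustrative sketch (editor's addition, heavily simplified): variance measured along EPI scan lines mapped to alpha in the unknown region of the trimap — low variance (consistent with the chosen foreground depth plane) pushes alpha toward 1; the actual extraction in the application is more involved.

```python
import numpy as np

def alpha_from_epi_variance(epi, trimap):
    """epi: samples x width array of intensities along one epipolar
    plane image row; trimap: 1 = foreground, 0 = background, -1 = unknown."""
    var = epi.var(axis=0)                            # variance along scan lines
    v = (var - var.min()) / (var.ptp() + 1e-9)       # normalize to [0, 1]
    return np.where(trimap == 1, 1.0,
                    np.where(trimap == 0, 0.0, 1.0 - v))
```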
  • Publication number: 20070070200
    Abstract: A method and system for determining an alpha matte for a video is presented. A set of videos is acquired by an array of cameras. A centrally located camera in the array is designated as a reference camera and acquires a reference video. A foreground depth plane is selected from the set of videos. A trimap is determined from variances of pixel intensities in each image. Variances of the intensities of pixels labeled as background and pixels labeled as foreground are extrapolated to the pixels labeled as unknown in the trimap. Means of the intensities of the pixels labeled as background are extrapolated to the pixels labeled as unknown to determine an alpha matte for the reference video.
    Type: Application
    Filed: March 24, 2006
    Publication date: March 29, 2007
    Inventors: Wojciech Matusik, Shmuel Avidan
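    Illustrative sketch (editor's addition): once foreground and background statistics have been extrapolated into the unknown region, a per-pixel alpha can be estimated from the compositing equation I = alpha*F + (1 - alpha)*B; this is the standard estimate, not necessarily the exact formulation of the claims.

```python
import numpy as np

def alpha_from_means(I, F_mean, B_mean, trimap):
    """I, F_mean, B_mean: HxWx3 arrays; trimap: 1 = fg, 0 = bg, -1 = unknown."""
    diff = F_mean - B_mean
    num = ((I - B_mean) * diff).sum(axis=-1)
    den = (diff * diff).sum(axis=-1) + 1e-9
    alpha = np.clip(num / den, 0.0, 1.0)
    alpha[trimap == 1] = 1.0                     # known foreground
    alpha[trimap == 0] = 0.0                     # known background
    return alpha
```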
  • Patent number: 7155032
    Abstract: The present invention is embodied in a system and method for extracting structure from multiple images of a scene by representing the scene as a group of image layers, including reflection and transparency layers. In general, the present invention performs layer extraction from multiple images containing reflections and transparencies. The present invention includes an optimal approach for recovering layer images and their associated motions from an arbitrary number of composite images. The present invention includes image formation equations, the constrained least squares technique used to recover the component images, a novel method to estimate upper and lower bounds on the solution using min- and max-composites, and a motion refinement method.
    Type: Grant
    Filed: October 1, 2005
    Date of Patent: December 26, 2006
    Assignee: Microsoft Corp.
    Inventors: Richard S. Szeliski, Shmuel Avidan, Padmanabhan Anandan
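    Illustrative sketch (editor's addition): only the min-/max-composite bounding step mentioned in the abstract, assuming the images have already been warped into one layer's coordinate frame; the constrained least-squares recovery and motion refinement are omitted.

```python
import numpy as np

def min_max_composites(aligned_images):
    """Per-pixel min- and max-composites of images aligned to a layer.
    With non-negative additive layers, the darkest observation at each
    pixel upper-bounds that layer; max-composites are used analogously
    to bound the solution from the other side."""
    stack = np.stack(aligned_images).astype(float)
    return stack.min(axis=0), stack.max(axis=0)
```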
  • Publication number: 20060171594
    Abstract: A computer implemented method models a background in a sequence of frames of a video. For each frame, the method detects static corners using an array of pixels of the frame, and extracts, for each static corner, features from a window of pixels around the static corner. For each static corner, a descriptor is determined from the corresponding features. Each static corner and corresponding descriptor is stored in a memory, and each static corner is classified as a background or foreground according to the descriptor to model a background in the video.
    Type: Application
    Filed: February 1, 2005
    Publication date: August 3, 2006
    Inventors: Shmuel Avidan, Qiang Zhu
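    Illustrative sketch (editor's addition, assumes OpenCV for the corner detector): detect corners, describe each by a normalized pixel window, and call a corner background if a similar descriptor has already been seen near the same location in earlier frames; the patented classifier is more elaborate.

```python
import cv2
import numpy as np

WIN = 7                      # half-size of the descriptor window
memory = []                  # (position, descriptor) pairs seen so far

def window_descriptor(gray, x, y):
    patch = gray[y - WIN:y + WIN + 1, x - WIN:x + WIN + 1].astype(float)
    return (patch - patch.mean()) / (patch.std() + 1e-6)

def classify_corners(gray, threshold=0.6):
    """Label each detected corner background/foreground by whether its
    descriptor is stable over time at (roughly) the same position."""
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=5)
    labels = []
    for c in (corners if corners is not None else []):
        x, y = int(c[0][0]), int(c[0][1])
        if not (WIN <= x < gray.shape[1] - WIN and WIN <= y < gray.shape[0] - WIN):
            continue
        d = window_descriptor(gray, x, y)
        seen = any(abs(px - x) < 3 and abs(py - y) < 3 and
                   float((d * pd).mean()) > threshold
                   for (px, py), pd in memory)
        labels.append(((x, y), 'background' if seen else 'foreground'))
        memory.append(((x, y), d))
    return labels
```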
  • Publication number: 20060165258
    Abstract: A method locates an object in a sequence of frames of a video. A feature vector is constructed for every pixel in each frame. The feature vectors are used to train a set of weak classifiers. The weak classifiers separate pixels that are associated with the object from pixels that are associated with the background. The set of weak classifiers is combined into a strong classifier. The strong classifier labels pixels in a frame to generate a confidence map. A ‘peak’ in the confidence map is located using a mean-shift operation. The peak indicates a location of the object in the frame. That is, the confidence map distinguishes the object from the background in the video.
    Type: Application
    Filed: January 24, 2005
    Publication date: July 27, 2006
    Inventor: Shmuel Avidan
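    Illustrative sketch (editor's addition): the two pieces named in the abstract — a confidence map from a weighted vote of weak classifiers, and a mean-shift iteration that climbs that map to the object's location; the weak classifiers themselves (functions mapping the HxWxD feature array to per-pixel scores) are assumed given.

```python
import numpy as np

def confidence_map(features, weak_classifiers, weights):
    """Strong classifier = weighted vote of weak classifiers, evaluated
    at every pixel of the HxWxD feature array."""
    conf = np.zeros(features.shape[:2])
    for h, a in zip(weak_classifiers, weights):
        conf += a * h(features)                  # each h returns an HxW score map
    return conf

def mean_shift_peak(conf, start, win=15, iters=20):
    """Move a window to the weighted centroid of the confidence map
    until it stops shifting; the converged centre locates the object."""
    y, x = start
    H, W = conf.shape
    for _ in range(iters):
        y0, y1 = max(0, y - win), min(H, y + win + 1)
        x0, x1 = max(0, x - win), min(W, x + win + 1)
        w = np.clip(conf[y0:y1, x0:x1], 0, None)
        if w.sum() == 0:
            break
        ys, xs = np.mgrid[y0:y1, x0:x1]
        ny, nx = int((w * ys).sum() / w.sum()), int((w * xs).sum() / w.sum())
        if (ny, nx) == (y, x):
            break
        y, x = ny, nx
    return y, x
```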
  • Publication number: 20060120619
    Abstract: A method processes a sequence of input images securely. A sequence of input images is acquired in a client. Pixels in each input image are permuted randomly according to a permutation π to generate a permuted image for each input image. Each permuted image is transferred to a server, which maintains a background image from the permuted images. In the server, each permuted image is combined with the background image to generate a corresponding permuted motion image for each permuted image. Each permuted motion image is transferred to the client and the pixels in each permuted motion image are reordered according to an inverse permutation π⁻¹ to recover a corresponding motion image for each input image.
    Type: Application
    Filed: December 6, 2004
    Publication date: June 8, 2006
    Inventors: Shmuel Avidan, Moshe Butman, Ayelet Butman
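    Illustrative sketch (editor's addition): because background subtraction is a per-pixel operation, it commutes with any fixed pixel permutation — the client scrambles pixel order, the server differences against a background it maintains in the scrambled domain, and the client restores the order with the inverse permutation.

```python
import numpy as np

def make_permutation(n_pixels, seed=0):
    perm = np.random.default_rng(seed).permutation(n_pixels)
    return perm, np.argsort(perm)                 # permutation and its inverse

def client_permute(frame, perm):
    """Client: scramble pixel order before sending the frame out."""
    return frame.reshape(-1)[perm].reshape(frame.shape)

def server_motion(permuted_frame, permuted_background, thresh=25):
    """Server: per-pixel background subtraction in the scrambled domain."""
    return np.abs(permuted_frame.astype(int) - permuted_background.astype(int)) > thresh

def client_unpermute(permuted_motion, inv_perm):
    """Client: restore the original pixel order of the motion image."""
    return permuted_motion.reshape(-1)[inv_perm].reshape(permuted_motion.shape)
```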
  • Publication number: 20060120524
    Abstract: A method processes an input image securely. An input image I is acquired in a client. A set of m random images, H_1, . . . , H_m, and a coefficient vector, a = [a_1, . . . , a_m], are generated such that the input image I is I = Σ_{i=1}^{m} a_i H_i. The set of the random images is transferred to a server including a weak classifier. In the server, a set of m convolved random images H′ is determined, such that {H′_i = π_1(H_i * y)}, i = 1, . . . , m, where * is a convolution operator and π_1 is a first random pixel permutation. The set of convolved images is transferred to the client. In the client, a permuted image I′ is determined, such that I′ = π_2(Σ_{i=1}^{m} a_i H′_i), where π_2 is a second random pixel permutation. The permuted image is transferred to the server.
    Type: Application
    Filed: December 6, 2004
    Publication date: June 8, 2006
    Inventors: Shmuel Avidan, Moshe Butman, Ayelet Butman
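    Illustrative sketch (editor's addition, assumes SciPy for the convolution): the client hides I inside a random linear combination, the server convolves each random image with its private kernel and permutes the results, and linearity lets the client recombine them; the blinding details of the full protocol are omitted.

```python
import numpy as np
from scipy.signal import convolve2d

def split_image(I, m, rng):
    """Client: write I as I = sum_i a_i * H_i over random images H_i."""
    a = rng.normal(size=m)
    a[-1] = 1.0                                   # keep the last coefficient safe to divide by
    H = rng.normal(size=(m,) + I.shape)
    H[-1] = I - np.tensordot(a[:-1], H[:-1], axes=1)
    return a, H

def server_convolve(H, kernel, perm1):
    """Server: convolve each random image with the private kernel y and
    apply the first random pixel permutation."""
    out = np.stack([convolve2d(h, kernel, mode='same') for h in H])
    return out.reshape(len(H), -1)[:, perm1].reshape(out.shape)

def client_combine(a, H_conv, perm2):
    """Client: recombine with the coefficients (convolution and a fixed
    permutation are both linear) and apply the second permutation."""
    I_conv = np.tensordot(a, H_conv, axes=1)
    return I_conv.reshape(-1)[perm2].reshape(I_conv.shape)
```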
  • Publication number: 20060123245
    Abstract: A method processes an input image securely. An input image is acquired in a client and partitioned into a set of overlapping tiles. The set of overlapping tiles is transferred to a server. In the server, motion pixels in each tile that are immediately adjacent to other motion pixels in the tile are labeled locally to generate a set of locally labeled tiles. The set of locally labeled tiles is transferred to the client. In the client, the set of locally labeled tiles is labeled globally to generate a list of pairs of unique global labels. The list of pairs of unique global labels is transferred to the server. In the server, the pairs of unique global labels are classified into equivalence classes. The equivalence classes are transferred to the client and the motion pixels are relabeled in the client according to the equivalence classes to form connected components in the input image.
    Type: Application
    Filed: December 6, 2004
    Publication date: June 8, 2006
    Inventors: Shmuel Avidan, Moshe Butman, Ayelet Butman
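    Illustrative sketch (editor's addition): the merge step on the server side — collapsing the client's list of touching global-label pairs into equivalence classes with a small union-find; the tiling and local labeling steps are omitted.

```python
class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def equivalence_classes(label_pairs):
    """Group global labels that were reported as touching across tile
    borders into equivalence classes."""
    uf = UnionFind()
    for a, b in label_pairs:
        uf.union(a, b)
    classes = {}
    for label in list(uf.parent):
        classes.setdefault(uf.find(label), set()).add(label)
    return list(classes.values())

print(equivalence_classes([(1, 2), (2, 5), (7, 8)]))    # [{1, 2, 5}, {7, 8}]
```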
  • Publication number: 20060062434
    Abstract: The present invention is embodied in a system and method for extracting structure from multiple images of a scene by representing the scene as a group of image layers, including reflection and transparency layers. In general, the present invention performs layer extraction from multiple images containing reflections and transparencies. The present invention includes an optimal approach for recovering layer images and their associated motions from an arbitrary number of composite images. The present invention includes image formation equations, the constrained least squares technique used to recover the component images, a novel method to estimate upper and lower bounds on the solution using min- and max-composites, and a motion refinement method.
    Type: Application
    Filed: October 1, 2005
    Publication date: March 23, 2006
    Applicant: Microsoft Corporation
    Inventors: Richard Szeliski, Shmuel Avidan, Padmanabhan Anandan
  • Publication number: 20060056682
    Abstract: The present invention is embodied in a system and method for extracting structure from multiple images of a scene by representing the scene as a group of image layers, including reflection and transparency layers. In general, the present invention performs layer extraction from multiple images containing reflections and transparencies. The present invention includes an optimal approach for recovering layer images and their associated motions from an arbitrary number of composite images. The present invention includes image formation equations, the constrained least squares technique used to recover the component images, a novel method to estimate upper and lower bounds on the solution using min- and max-composites, and a motion refinement method.
    Type: Application
    Filed: October 1, 2005
    Publication date: March 16, 2006
    Applicant: Microsoft Corporation
    Inventors: Richard Szeliski, Shmuel Avidan, Padmanabhan Anandan
  • Publication number: 20060018521
    Abstract: A method represents a class of objects by first acquiring a set of positive training images of the class of objects. A matrix A is constructed from the set of positive training images. Each row in the matrix A corresponds to a vector of intensities of pixels of one positive training image. Correlated intensities are grouped into a set of segments of a feature mask image. Each segment includes a set of pixels with correlated intensities. From each segment, a subset of representative pixels is selected. A set of features is assigned to each pixel in each subset of representative pixels of each segment of the feature mask image to represent the class of objects.
    Type: Application
    Filed: July 23, 2004
    Publication date: January 26, 2006
    Inventor: Shmuel Avidan
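    Illustrative sketch (editor's addition): one way to realize the grouping the abstract describes — rows of A are flattened training images, pixels are clustered by the similarity of their correlation profiles (a crude k-means stands in for the patent's grouping), and one representative pixel is kept per segment.

```python
import numpy as np

def representative_pixels(images, n_segments=16, seed=0):
    A = np.stack([im.ravel().astype(float) for im in images])    # one image per row
    A = (A - A.mean(axis=0)) / (A.std(axis=0) + 1e-9)
    corr = (A.T @ A) / len(images)                # pixel-by-pixel correlation matrix
    rng = np.random.default_rng(seed)
    centers = corr[rng.choice(len(corr), n_segments, replace=False)]
    for _ in range(10):                           # a few k-means iterations
        d = np.stack([((corr - c) ** 2).sum(axis=1) for c in centers], axis=1)
        seg = d.argmin(axis=1)
        for s in range(n_segments):
            if np.any(seg == s):
                centers[s] = corr[seg == s].mean(axis=0)
    reps = [int(np.where(seg == s)[0][d[seg == s, s].argmin()])
            for s in range(n_segments) if np.any(seg == s)]
    return seg, reps                              # segment id per pixel, representatives
```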
  • Patent number: 6987865
    Abstract: The present invention is embodied in a system and method for extracting structure from multiple images of a scene by representing the scene as a group of image layers, including reflection and transparency layers. In general, the present invention performs layer extraction from multiple images containing reflections and transparencies. The present invention includes an optimal approach for recovering layer images and their associated motions from an arbitrary number of composite images. The present invention includes image formation equations, the constrained least squares technique used to recover the component images, a novel method to estimate upper and lower bounds on the solution using min- and max-composites, and a motion refinement method.
    Type: Grant
    Filed: September 9, 2000
    Date of Patent: January 17, 2006
    Assignee: Microsoft Corp.
    Inventors: Richard S. Szeliski, Shmuel Avidan, Padmanabhan Anandan