Patents by Inventor Amir Akbarzadeh

Amir Akbarzadeh has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8325999
    Abstract: The described implementations relate to assisted face recognition tagging of digital images, and specifically to context-driven assisted face recognition tagging. In one case, context-driven assisted face recognition tagging (CDAFRT) tools can access face images associated with a photo gallery. The CDAFRT tools can perform context-driven face recognition to identify individual face images at a specified probability. In such a configuration, the probability that the individual face images are correctly identified can be higher than attempting to identify individual face images in isolation.
    Type: Grant
    Filed: June 8, 2009
    Date of Patent: December 4, 2012
    Assignee: Microsoft Corporation
    Inventors: Ashish Kapoor, Gang Hua, Amir Akbarzadeh, Simon J. Baker
  • Publication number: 20120242798
    Abstract: A preferred method for sharing user-generated virtual and augmented reality scenes can include receiving at a server a virtual and/or augmented reality (VAR) scene generated by a user mobile device. Preferably, the VAR scene includes visual data and orientation data, which includes a real orientation of the user mobile device relative to a projection matrix. The preferred method can also include compositing the visual data and the orientation data into a viewable VAR scene; locally storing the viewable VAR scene at the server; and in response to a request received at the server, distributing the processed VAR scene to a viewer mobile device.
    Type: Application
    Filed: January 10, 2012
    Publication date: September 27, 2012
    Inventors: Terrence Edward McArdle, Benjamin Zeis Newhouse, Amir Akbarzadeh
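    The flow this abstract describes (receive a VAR scene, composite its visual and orientation data, store it, and distribute it on request) can be illustrated with a minimal, hypothetical sketch. The record layout, the Orientation representation, and the in-memory store below are assumptions, not the patent's actual data formats.
    ```python
    from dataclasses import dataclass

    @dataclass
    class Orientation:
        # Real orientation of the capturing device relative to a projection
        # matrix, represented here as a 4x4 row-major matrix (an assumption).
        matrix: list

    @dataclass
    class VARScene:
        visual_data: bytes        # e.g. encoded image frames
        orientation: Orientation  # orientation data accompanying the visuals

    _store = {}  # server-side storage of viewable scenes, keyed by scene id

    def composite(scene: VARScene) -> dict:
        """Combine visual and orientation data into one viewable record."""
        return {"visual": scene.visual_data, "orientation": scene.orientation.matrix}

    def receive_scene(scene_id: str, scene: VARScene) -> None:
        """Server receives a VAR scene from a capturing device and stores it."""
        _store[scene_id] = composite(scene)

    def distribute_scene(scene_id: str) -> dict:
        """On request, return the stored viewable scene for a viewer device."""
        return _store[scene_id]
    ```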
  • Publication number: 20120214590
    Abstract: A preferred method of acquiring virtual or augmented reality (VAR) scenes can include, at a plurality of locations of interest, providing one or more users with a predetermined pattern for image acquisition with an image capture device and, for each of the one or more users, in response to a user input, acquiring at least one image at the location of interest. The method of the preferred embodiment can also include, for each of the one or more users, in response to the acquisition of at least one image, providing the user with feedback to ensure a complete acquisition of the virtual or augmented reality scene; and receiving at a remote database, from each of the one or more users, one or more VAR scenes. One variation of the method of the preferred embodiment can include providing game mechanics to promote proper image acquisition and promote competition between users.
    Type: Application
    Filed: November 22, 2011
    Publication date: August 23, 2012
    Inventors: Benjamin Zeis Newhouse, Terrence Edward McArdle, Amir Akbarzadeh
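    As a rough illustration of the acquisition feedback this abstract describes, the sketch below assumes a hypothetical predetermined pattern of one capture every 45 degrees of yaw and reports which directions are still missing; the pattern, the tolerance, and the yaw-only model are illustrative assumptions.
    ```python
    # A hypothetical "predetermined pattern": one image every 45 degrees of yaw.
    TARGET_YAWS = [i * 45 for i in range(8)]
    TOLERANCE = 10  # degrees of allowed deviation per target orientation

    def coverage_feedback(captured_yaws):
        """Return which target orientations are still missing, as user feedback."""
        missing = []
        for target in TARGET_YAWS:
            hit = any(abs((yaw - target + 180) % 360 - 180) <= TOLERANCE
                      for yaw in captured_yaws)
            if not hit:
                missing.append(target)
        return missing

    # Example: after three captures, tell the user which directions remain.
    print(coverage_feedback([0, 44, 95]))   # -> [135, 180, 225, 270, 315]
    ```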
  • Publication number: 20120124036
    Abstract: Methods are provided for displaying image results responsive to a search query. In addition to displaying responsive results for a query, responsive results are also provided for related queries. The results are ordered along a plurality of display axes, including at least one axis corresponding to the ordering of the various search queries. The results can be displayed in an aligned or non-aligned manner. The results can then be translated along one or more of the display axes to allow a user to browse the various results.
    Type: Application
    Filed: November 16, 2010
    Publication date: May 17, 2012
    Applicant: Microsoft Corporation
    Inventors: Gonzalo A. Ramos, Steven M. Drucker, Amir Akbarzadeh
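    A minimal sketch of the browsing model described in this abstract, assuming a hypothetical two-axis layout where one axis indexes the original and related queries and the other indexes result rank; the queries, file names, and window size are made up for illustration.
    ```python
    # Hypothetical result grid: one display axis indexes the query (the original
    # query plus related queries), the other indexes result rank for that query.
    results = {
        "beach": ["beach1.jpg", "beach2.jpg", "beach3.jpg"],
        "beach sunset": ["sunset1.jpg", "sunset2.jpg"],
        "beach umbrella": ["umbrella1.jpg", "umbrella2.jpg", "umbrella3.jpg"],
    }
    queries = list(results)  # ordering of queries along the query axis

    def visible_window(query_offset, rank_offset, rows=2, cols=2):
        """Return the results in view after translating along both axes."""
        window = []
        for q in queries[query_offset:query_offset + rows]:
            window.append(results[q][rank_offset:rank_offset + cols])
        return window

    print(visible_window(0, 0))  # top-left corner of the grid
    print(visible_window(1, 1))  # after translating one step along each axis
    ```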
  • Publication number: 20120086792
    Abstract: Captured images are analyzed to identify portrayed individuals and/or scene elements therein. Upon user confirmation of one or more identified individuals and/or scene elements, entity information is accessed to determine whether there are any available communication addresses, e.g., email addresses, SMS-based addresses, websites, etc., that correspond with or are otherwise linked to an identified individual or scene element in the current captured image. A current captured image can then be automatically transmitted, with no need for any other user effort, to those addresses located for an identified individual or scene element.
    Type: Application
    Filed: October 11, 2010
    Publication date: April 12, 2012
    Applicant: Microsoft Corporation
    Inventors: Amir Akbarzadeh, Simon J. Baker, David Per Zachris Nister, Scott Fynn
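    The sharing flow in this abstract can be sketched roughly as follows; the recognize_entities stub, the address book, and the confirmation callback are hypothetical stand-ins for the recognition and entity-information lookup the abstract describes.
    ```python
    # Hypothetical address book linking recognized entities to communication addresses.
    ADDRESS_BOOK = {
        "alice": ["alice@example.com"],
        "golden gate bridge": ["+1-555-0100"],   # e.g. an SMS-based address
    }

    def recognize_entities(image_bytes):
        """Placeholder for the visual-analysis step; returns candidate entity names."""
        return ["alice", "golden gate bridge"]

    def send(address, image_bytes):
        print(f"sending image to {address}")

    def auto_share(image_bytes, confirm):
        """After user confirmation, transmit the image to every linked address."""
        for entity in recognize_entities(image_bytes):
            if not confirm(entity):           # user confirms each identification
                continue
            for address in ADDRESS_BOOK.get(entity, []):
                send(address, image_bytes)

    auto_share(b"...", confirm=lambda entity: True)
    ```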
  • Publication number: 20110142299
    Abstract: Face recognition may be performed using a combination of visual analysis and social context. In one example, a web site such as a social networking site or photo-sharing site allows users to upload photos, and allows faces that appear in the photo to be tagged with users' names. When user A uploads a new photo, two analyses may be performed. First, a face in the photo is compared with known faces of users to determine similarity. Second, it is determined which other users user A frequently uploads photos of. Two probability distributions are created. One distribution assigns high probabilities to users whose photos are similar to the new photo. The other assigns high probabilities to users who frequently appear in photos uploaded by user A. These probability distributions are combined, and the person in the photo is identified as being the person with the highest probability.
    Type: Application
    Filed: December 14, 2009
    Publication date: June 16, 2011
    Applicant: Microsoft Corporation
    Inventors: Amir Akbarzadeh, Gang Hua
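    This abstract describes combining a visual-similarity distribution with a social-context (upload-frequency) distribution and picking the most probable person. The sketch below illustrates that idea, assuming a simple product of the two normalized distributions; the actual combination rule is not specified in the abstract.
    ```python
    def normalize(scores):
        total = sum(scores.values())
        return {k: v / total for k, v in scores.items()}

    def identify(visual_similarity, upload_counts):
        """Combine a visual-similarity distribution with an upload-frequency
        distribution and return the most probable identity."""
        p_visual = normalize(visual_similarity)  # similarity of the face to known users
        p_social = normalize(upload_counts)      # how often user A uploads photos of each user
        combined = {u: p_visual[u] * p_social.get(u, 0) for u in p_visual}
        return max(combined, key=combined.get)

    # Example: Bob looks slightly less similar than Carol but appears far more
    # often in user A's uploads, so the combined distribution favours Bob.
    print(identify({"bob": 0.45, "carol": 0.55}, {"bob": 30, "carol": 5}))  # -> bob
    ```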
  • Publication number: 20110142298
    Abstract: Two faces may be compared by calculating distances between different regions of the two face images, and choosing one of the distances as the difference between the images. Two images are examined to detect the location of the face in the images. The faces may then be geometrically and photometrically rectified. A sliding window that is smaller than the whole face may be positioned at various locations over the images, and a descriptor is calculated for each window position. The descriptor for a window at one location in one image is compared with descriptors for windows in the neighborhood of that location in the other image. The lowest distance between window descriptors is chosen. The process is repeated for all window positions, resulting in a set of distances. The distances are sorted, and one of the distances is chosen to represent the difference between the two faces.
    Type: Application
    Filed: December 14, 2009
    Publication date: June 16, 2011
    Applicant: Microsoft Corporation
    Inventors: Amir Akbarzadeh, Gang Hua
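    A minimal sketch of the sliding-window comparison this abstract describes. Raw pixel blocks stand in for the window descriptors, Euclidean distance for the descriptor distance, and the median of the sorted distances for the chosen distance; all three are assumptions rather than the patent's actual choices.
    ```python
    import numpy as np

    def descriptor(image, y, x, size=8):
        """Toy window descriptor: the raw pixel block, flattened (an assumption)."""
        return image[y:y + size, x:x + size].ravel().astype(float)

    def face_distance(face_a, face_b, size=8, step=8, radius=8, percentile=50):
        """Compare two rectified faces with a sliding window.

        For each window position in face_a, find the closest-matching window in a
        small neighbourhood of the same position in face_b, collect those minimum
        distances, sort them, and return one of them (here: a percentile)."""
        h, w = face_a.shape
        distances = []
        for y in range(0, h - size + 1, step):
            for x in range(0, w - size + 1, step):
                d_a = descriptor(face_a, y, x, size)
                best = min(
                    np.linalg.norm(d_a - descriptor(face_b, ny, nx, size))
                    for ny in range(max(0, y - radius), min(h - size, y + radius) + 1, step)
                    for nx in range(max(0, x - radius), min(w - size, x + radius) + 1, step)
                )
                distances.append(best)
        distances.sort()
        return distances[int(len(distances) * percentile / 100)]

    a = np.random.rand(32, 32)
    print(face_distance(a, a))                       # identical faces -> 0.0
    print(face_distance(a, np.random.rand(32, 32)))  # different faces -> larger
    ```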
  • Publication number: 20100310134
    Abstract: The described implementations relate to assisted face recognition tagging of digital images, and specifically to context-driven assisted face recognition tagging. In one case, context-driven assisted face recognition tagging (CDAFRT) tools can access face images associated with a photo gallery. The CDAFRT tools can perform context-driven face recognition to identify individual face images at a specified probability. In such a configuration, the probability that the individual face images are correctly identified can be higher than attempting to identify individual face images in isolation.
    Type: Application
    Filed: June 8, 2009
    Publication date: December 9, 2010
    Applicant: Microsoft Corporation
    Inventors: Ashish Kapoor, Gang Hua, Amir Akbarzadeh, Simon J. Baker
  • Publication number: 20100284577
    Abstract: Representing a face by jointly quantizing features and spatial location to perform implicit elastic matching between features. A plurality of the features are extracted from a face image and expanded with a corresponding spatial location in the face image. Each of the expanded features is quantized based on one or more randomized decision trees. A histogram of the quantized features is calculated to represent the face image. The histogram is compared to histograms of other face images to identify a match, or to calculate a distance metric representative of a difference between faces.
    Type: Application
    Filed: May 8, 2009
    Publication date: November 11, 2010
    Applicant: Microsoft Corporation
    Inventors: Gang Hua, John Wright, Amir Akbarzadeh
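    A rough sketch of the representation this abstract describes: each feature is expanded with its spatial location, quantized by several randomized decision trees, and the leaf indices are histogrammed. The random-threshold trees (rather than trees learned from data) and the L1 histogram distance below are simplifying assumptions.
    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def random_tree(dim, depth=4):
        """A toy randomized decision tree: at each level, split on a random
        dimension against a random threshold (a stand-in for trees learned
        from data)."""
        return [(rng.integers(dim), rng.random()) for _ in range(depth)]

    def leaf_index(tree, feature):
        idx = 0
        for dim, threshold in tree:
            idx = idx * 2 + int(feature[dim] > threshold)
        return idx

    def face_histogram(features, locations, trees, depth=4):
        """Expand each feature with its (x, y) location, quantize it with every
        tree, and histogram the resulting leaf indices to represent the face."""
        hist = np.zeros(len(trees) * 2 ** depth)
        for f, (x, y) in zip(features, locations):
            expanded = np.concatenate([f, [x, y]])
            for t, tree in enumerate(trees):
                hist[t * 2 ** depth + leaf_index(tree, expanded)] += 1
        return hist / hist.sum()

    def face_difference(hist_a, hist_b):
        """Simple L1 distance between two face histograms."""
        return np.abs(hist_a - hist_b).sum()

    # Example with random 16-dimensional descriptors at normalized locations.
    trees = [random_tree(dim=18) for _ in range(8)]
    feats = rng.random((50, 16))
    locs = rng.random((50, 2))
    print(face_difference(face_histogram(feats, locs, trees),
                          face_histogram(feats, locs, trees)))  # identical faces -> 0.0
    ```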