Patents by Inventor Alexandros NEOFYTOU

Alexandros NEOFYTOU has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240071042
    Abstract: An image-processing technique is described herein for removing a visual effect in a face region of an image caused, at least in part, by screen illumination provided by an electronic screen. The technique can perform this removal without advance knowledge of the nature of the screen illumination provided by the electronic screen. The technique improves the quality of the image and also protects the privacy of a user by removing the visual effect in the face region that may reveal the characteristics of display information presented on the electronic screen. In some implementations, the technique first adjusts a face region of the image, and then adjusts other regions in the image for consistency with the face region. In some implementations, the technique is applied by a videoconferencing application, and is performed by a local computing device.
    Type: Application
    Filed: August 30, 2022
    Publication date: February 29, 2024
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Sunando SENGUPTA, Ebey Paulose ABRAHAM, Alexandros NEOFYTOU, Eric Chris Wolfgang SOMMERLADE
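The abstract above describes estimating and removing a screen-induced color cast from the face region, then adjusting the rest of the frame for consistency. The patent does not disclose a specific algorithm here; the following is a minimal hand-rolled sketch of the idea, using a gray-world-style tint estimate in place of whatever learned model the actual system uses. The function name and the 0.5 falloff factor for non-face regions are illustrative assumptions.

```python
import numpy as np

def remove_screen_glow(image, face_mask, strength=1.0):
    """Estimate a color cast in the face region and subtract it, then
    apply a milder version of the same correction elsewhere so the
    frame stays consistent. `image` is float HxWx3 in [0, 1];
    `face_mask` is a boolean HxW mask of the face region."""
    face = image[face_mask]                         # (N, 3) face pixels
    # Gray-world-style assumption: average face chroma should be neutral
    # relative to its own luminance; the residual is treated as glow.
    luminance = face.mean(axis=1, keepdims=True)
    cast = (face - luminance).mean(axis=0)          # per-channel tint
    out = image.copy()
    out[face_mask] = np.clip(face - strength * cast, 0.0, 1.0)
    rest = ~face_mask
    out[rest] = np.clip(image[rest] - 0.5 * strength * cast, 0.0, 1.0)
    return out
```

Run on a frame with a blue screen glow over the face, this reduces the blue channel in the face region toward the face's own luminance, without needing to know what the screen was displaying.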
  • Patent number: 11915398
    Abstract: In various embodiments, a computer-implemented method of training a neural network for relighting an image is described. A first training set that includes source images and a target illumination embedding is generated, the source images having respective illuminated subjects. A second training set that includes augmented images and the target illumination embedding is generated, where the augmented images correspond to the source images. A first autoencoder is trained using the first training set to generate a first output set that includes estimated source illumination embeddings and first reconstructed images that correspond to the source images, the reconstructed images having respective subjects that are i) from the corresponding source image, and ii) illuminated based on the target illumination embedding.
    Type: Grant
    Filed: March 1, 2023
    Date of Patent: February 27, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Alexandros Neofytou, Eric Chris Wolfgang Sommerlade, Sunando Sengupta, Yang Liu
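The core mechanism in this relighting abstract is an autoencoder whose latent splits into an illumination embedding and everything else, so relighting amounts to swapping the estimated source illumination for a target embedding before decoding. A toy linear sketch of that latent-swap structure follows; the weight matrices, sizes, and the hard 4-dimensional embedding split are all illustrative stand-ins for the trained autoencoder the patent actually describes.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, EMB = 8, 8, 4                 # toy image size and embedding size
w_enc = rng.standard_normal((H * W, EMB + 8)) * 0.1   # stand-in encoder
w_dec = rng.standard_normal((EMB + 8, H * W)) * 0.1   # stand-in decoder

def encode(image):
    """Toy linear encoder; the first EMB latent dims are read off as
    the estimated source-illumination embedding, the rest as content."""
    z = image.reshape(-1) @ w_enc
    return z[:EMB], z[EMB:]

def relight(image, target_embedding):
    """Swap the estimated source illumination for the target embedding
    and decode: same subject, target lighting (in spirit; the real
    system trains autoencoders end-to-end to achieve this)."""
    _, content = encode(image)
    z = np.concatenate([target_embedding, content])
    return (z @ w_dec).reshape(H, W)
```

The second training set of augmented images in the abstract shares the same target embedding, which encourages the encoder to route lighting variation into the embedding dims and subject identity into the content dims.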
  • Publication number: 20240054683
    Abstract: In various embodiments, a computer-implemented method of training a neural network for creating an output signal of a different modality from an input signal is described. In embodiments, the input signal may be a sound signal or a visual image, and the output signal is then a visual image or a sound signal, respectively. In embodiments, a model is trained using a first pair of visual and audio networks to train a set of codebooks using known visual and audio signals, and using a second pair of visual and audio networks to further train the set of codebooks using augmented visual and audio signals. Further, the first and second visual networks are equally weighted, and the first and second audio networks are equally weighted.
    Type: Application
    Filed: October 26, 2023
    Publication date: February 15, 2024
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Sunando SENGUPTA, Alexandros NEOFYTOU, Eric Chris Wolfgang SOMMERLADE, Yang LIU
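This cross-modal abstract (and its granted counterpart below) revolves around training shared codebooks from paired visual and audio signals. The patent does not spell out the quantization mechanics; as one plausible reading, here is a minimal vector-quantization sketch in the VQ-VAE style, with an EMA-style codebook update. Both functions and the decay constant are assumptions for illustration, not the patented training procedure.

```python
import numpy as np

def quantize(vectors, codebook):
    """Vector-quantize each row of `vectors` (N, D) to its nearest
    codeword (L2 distance) in `codebook` (K, D)."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(axis=1)
    return codebook[idx], idx

def ema_update(codebook, vectors, idx, decay=0.99):
    """Move each assigned codeword toward the mean of its assigned
    vectors (an EMA-style update common in VQ-VAE-type models; the
    patented procedure may differ)."""
    cb = codebook.copy()
    for k in np.unique(idx):
        cb[k] = decay * cb[k] + (1 - decay) * vectors[idx == k].mean(0)
    return cb
```

In the abstract's terms, visual and audio encoder outputs would each be quantized against their own codebook (plus a correlation codebook tying the modalities together), with the equally weighted network pairs supplying clean and augmented views of the same signals.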
  • Patent number: 11836952
    Abstract: In various embodiments, a computer-implemented method of training a neural network for creating an output signal of a different modality from an input signal is described. In embodiments, the input signal may be a sound signal or a visual image, and the output signal is then a visual image or a sound signal, respectively. In embodiments, a model is trained using a first pair of visual and audio networks to train a set of codebooks using known visual and audio signals, and using a second pair of visual and audio networks to further train the set of codebooks using augmented visual and audio signals. Further, the first and second visual networks are equally weighted, and the first and second audio networks are equally weighted. In aspects of the present disclosure, the set of codebooks comprises a visual codebook, an audio codebook, and a correlation codebook.
    Type: Grant
    Filed: April 26, 2021
    Date of Patent: December 5, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sunando Sengupta, Alexandros Neofytou, Eric Chris Wolfgang Sommerlade, Yang Liu
  • Publication number: 20230206406
    Abstract: In various embodiments, a computer-implemented method of training a neural network for relighting an image is described. A first training set that includes source images and a target illumination embedding is generated, the source images having respective illuminated subjects. A second training set that includes augmented images and the target illumination embedding is generated, where the augmented images correspond to the source images. A first autoencoder is trained using the first training set to generate a first output set that includes estimated source illumination embeddings and first reconstructed images that correspond to the source images, the reconstructed images having respective subjects that are i) from the corresponding source image, and ii) illuminated based on the target illumination embedding.
    Type: Application
    Filed: March 1, 2023
    Publication date: June 29, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Alexandros NEOFYTOU, Eric Chris Wolfgang SOMMERLADE, Sunando SENGUPTA, Yang LIU
  • Patent number: 11657833
    Abstract: A computing system includes an encoder that receives an input image and encodes the input image into real image features, a decoder that decodes the real image features into a reconstructed image, a generator that receives first audio data corresponding to the input image and generates first synthetic image features from the first audio data, and receives second audio data and generates second synthetic image features from the second audio data, a discriminator that receives both the real and synthetic image features and determines whether a target feature is real or synthetic, and a classifier that classifies a scene of the second audio data based on the second synthetic image features.
    Type: Grant
    Filed: October 26, 2021
    Date of Patent: May 23, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Eric Chris Wolfgang Sommerlade, Yang Liu, Alexandros Neofytou, Sunando Sengupta
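The abstract above names five components and the data flow between them: encoder, decoder, generator, discriminator, and classifier. The wiring can be made concrete with stub implementations; every function body below is a trivial placeholder standing in for a trained network, so only the component graph (not the math) reflects the claim.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stub networks: each stands in for a trained model from the claim.
def encoder(img):     return img.mean(axis=(0, 1))           # image -> real features
def decoder(f):       return np.tile(f, (8, 8, 1))           # features -> image
def generator(aud):   return np.tanh(aud @ rng.standard_normal((16, 3)))
def discriminator(f): return float(1.0 / (1.0 + np.exp(-f.sum())))
def classifier(f):    return int(np.argmax(f))               # feature -> scene class

img     = rng.random((8, 8, 3))
audio_a = rng.standard_normal(16)    # audio paired with the input image
audio_b = rng.standard_normal(16)    # second audio to be scene-classified

real_f = encoder(img)
recon  = decoder(real_f)             # reconstruction path
fake_a = generator(audio_a)          # synthetic features from paired audio
fake_b = generator(audio_b)          # synthetic features from second audio
p_real = discriminator(real_f)       # real-vs-synthetic score
scene  = classifier(fake_b)          # scene class from synthetic features
```

The adversarial pairing (discriminator over real vs. generated features) is what lets the audio generator learn features in the same space the image encoder produces, so the classifier trained on image-derived features can then label audio-only inputs.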
  • Patent number: 11647158
    Abstract: A computing system, a method, and a computer-readable storage medium for adjusting eye gaze are described. The method includes capturing a video stream including images of a user, detecting the user's face region within the images, and detecting the user's facial feature regions within the images based on the detected face region. The method includes determining whether the user is completely disengaged from the computing system and, if the user is not completely disengaged, detecting the user's eye region within the images based on the detected facial feature regions. The method also includes computing the user's desired eye gaze direction based on the detected eye region, generating gaze-adjusted images based on the desired eye gaze direction, wherein the gaze-adjusted images include a saccadic eye movement, a micro-saccadic eye movement, and/or a vergence eye movement, and replacing the images within the video stream with the gaze-adjusted images.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: May 9, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Steven N. Bathiche, Eric Sommerlade, Alexandros Neofytou, Panos C. Panay
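The gaze-adjustment method above is a sequence of detection stages with an early exit for fully disengaged users. A pipeline skeleton following those claimed steps is sketched below; each stage is a caller-supplied function, since the patent does not fix particular detectors, and the parameter names are illustrative.

```python
def adjust_gaze(frame, detect_face, detect_features, is_disengaged,
                detect_eyes, desired_gaze, render_adjusted):
    """Skeleton of the claimed pipeline: face region -> facial feature
    regions -> disengagement check -> eye region -> desired gaze
    direction -> gaze-adjusted frame. Stages are pluggable stubs."""
    face = detect_face(frame)
    if face is None:
        return frame                          # no face: pass through
    feats = detect_features(frame, face)
    if is_disengaged(feats):
        return frame                          # leave disengaged users alone
    eyes = detect_eyes(frame, feats)
    gaze = desired_gaze(eyes)
    return render_adjusted(frame, eyes, gaze)
```

The rendering stage is where the claimed saccadic, micro-saccadic, and vergence movements would be synthesized into the replacement frames.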
  • Patent number: 11615512
    Abstract: In various embodiments, a computer-implemented method of training a neural network for relighting an image is described. A first training set that includes source images and a target illumination embedding is generated, the source images having respective illuminated subjects. A second training set that includes augmented images and the target illumination embedding is generated, where the augmented images correspond to the source images. A first autoencoder is trained using the first training set to generate a first output set that includes estimated source illumination embeddings and first reconstructed images that correspond to the source images, the reconstructed images having respective subjects that are i) from the corresponding source image, and ii) illuminated based on the target illumination embedding.
    Type: Grant
    Filed: March 2, 2021
    Date of Patent: March 28, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Alexandros Neofytou, Eric Chris Wolfgang Sommerlade, Sunando Sengupta, Yang Liu
  • Publication number: 20220343543
    Abstract: In various embodiments, a computer-implemented method of training a neural network for creating an output signal of a different modality from an input signal is described. In embodiments, the input signal may be a sound signal or a visual image, and the output signal is then a visual image or a sound signal, respectively. In embodiments, a model is trained using a first pair of visual and audio networks to train a set of codebooks using known visual and audio signals, and using a second pair of visual and audio networks to further train the set of codebooks using augmented visual and audio signals. Further, the first and second visual networks are equally weighted, and the first and second audio networks are equally weighted. In aspects of the present disclosure, the set of codebooks comprises a visual codebook, an audio codebook, and a correlation codebook.
    Type: Application
    Filed: April 26, 2021
    Publication date: October 27, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Sunando SENGUPTA, Alexandros NEOFYTOU, Eric Chris Wolfgang SOMMERLADE, Yang LIU
  • Publication number: 20220284551
    Abstract: In various embodiments, a computer-implemented method of training a neural network for relighting an image is described. A first training set that includes source images and a target illumination embedding is generated, the source images having respective illuminated subjects. A second training set that includes augmented images and the target illumination embedding is generated, where the augmented images correspond to the source images. A first autoencoder is trained using the first training set to generate a first output set that includes estimated source illumination embeddings and first reconstructed images that correspond to the source images, the reconstructed images having respective subjects that are i) from the corresponding source image, and ii) illuminated based on the target illumination embedding.
    Type: Application
    Filed: March 2, 2021
    Publication date: September 8, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Alexandros Neofytou, Eric Chris Wolfgang Sommerlade, Sunando Sengupta, Yang Liu
  • Publication number: 20220221932
    Abstract: Aspects of the present disclosure relate to systems and methods for controlling a function of a computing system using gaze detection. In examples, one or more images of a user are received and gaze information may be determined from the received one or more images. Non-gaze information may be received when the gaze information is determined to satisfy a condition. Accordingly, a function may be enabled based on the received non-gaze information. In examples, the gaze information may be determined by extracting a plurality of features from the received one or more images, providing the plurality of features to a neural network, and determining, utilizing the neural network, a location at a display device at which a gaze of the user is directed.
    Type: Application
    Filed: January 12, 2021
    Publication date: July 14, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Steven N. BATHICHE, Eric Chris Wolfgang Sommerlade, Vivek PRADEEP, Alexandros NEOFYTOU
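The control flow in this abstract is: determine gaze, check it against a condition, and only then act on non-gaze input. A minimal sketch of that gating logic follows, using an on-screen rectangle as the example gaze condition; the region-based condition and all names are illustrative assumptions (the claimed gaze determination itself is a neural network over extracted image features, elided here).

```python
def maybe_enable(gaze_point, target_region, non_gaze_event, enable):
    """Enable a function only when the gaze satisfies a condition
    (here: falls inside a display region) AND a non-gaze input (e.g.
    a voice command or key press) has been received."""
    x, y = gaze_point
    x0, y0, x1, y1 = target_region
    if x0 <= x <= x1 and y0 <= y <= y1 and non_gaze_event is not None:
        return enable(non_gaze_event)
    return None                      # condition unmet: function stays disabled
```

Gating on gaze first means stray non-gaze input (e.g. background speech) cannot trigger the function unless the user is actually looking at the relevant part of the display.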
  • Patent number: 11330196
    Abstract: Technology is described herein that uses an object-encoding system to convert an object image into a combined encoding. The object image depicts a reference object, while the combined encoding represents an environment image. The environment image, in turn, depicts an estimate of an environment that has produced the illumination effects exhibited by the reference object. The combined encoding includes: a first part that represents image content in the environment image within a high range of intensity values; and a second part that represents image content within a low range of intensity values. Also described herein is a training system that trains the object-encoding system based on combined encodings produced by a separately-trained environment-encoding system. Also described herein are various applications of the object-encoding system and environment-encoding system.
    Type: Grant
    Filed: October 12, 2020
    Date of Patent: May 10, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Alexandros Neofytou, Eric Chris Wolfgang Sommerlade, Alejandro Sztrajman, Sunando Sengupta
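The two-part encoding in the abstract above separates bright and dim content of the environment image, which matters because high-intensity light sources dominate an object's illumination but occupy few pixels. As a hand-rolled stand-in for the learned encoders the patent describes, here is a sketch that splits an environment image at an intensity threshold and summarizes each part; the threshold and the mean/std summary are illustrative assumptions.

```python
import numpy as np

def combined_encoding(env_image, threshold=0.8):
    """Split an environment image (HxWx3, values in [0, 1]) into a
    high-intensity part and a low-intensity part, summarize each with
    per-channel mean and std, and concatenate the two summaries.
    (The real system uses learned encoders for each part.)"""
    intensity = env_image.mean(axis=-1)
    bright = env_image[intensity >= threshold]   # light sources
    dim = env_image[intensity < threshold]       # ambient surroundings
    def summary(pixels):
        if pixels.size == 0:
            return np.zeros(6)
        return np.concatenate([pixels.mean(0), pixels.std(0)])
    return np.concatenate([summary(bright), summary(dim)])
```

Keeping the bright part separate prevents the few very bright pixels from being averaged away by the much larger dim region, mirroring the abstract's motivation for a two-part encoding.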
  • Publication number: 20220141422
    Abstract: A computing system, a method, and a computer-readable storage medium for adjusting eye gaze are described. The method includes capturing a video stream including images of a user, detecting the user's face region within the images, and detecting the user's facial feature regions within the images based on the detected face region. The method includes determining whether the user is completely disengaged from the computing system and, if the user is not completely disengaged, detecting the user's eye region within the images based on the detected facial feature regions. The method also includes computing the user's desired eye gaze direction based on the detected eye region, generating gaze-adjusted images based on the desired eye gaze direction, wherein the gaze-adjusted images include a saccadic eye movement, a micro-saccadic eye movement, and/or a vergence eye movement, and replacing the images within the video stream with the gaze-adjusted images.
    Type: Application
    Filed: October 30, 2020
    Publication date: May 5, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Steven N. BATHICHE, Eric SOMMERLADE, Alexandros NEOFYTOU, Panos C. PANAY
  • Publication number: 20220116549
    Abstract: Technology is described herein that uses an object-encoding system to convert an object image into a combined encoding. The object image depicts a reference object, while the combined encoding represents an environment image. The environment image, in turn, depicts an estimate of an environment that has produced the illumination effects exhibited by the reference object. The combined encoding includes: a first part that represents image content in the environment image within a high range of intensity values; and a second part that represents image content within a low range of intensity values. Also described herein is a training system that trains the object-encoding system based on combined encodings produced by a separately-trained environment-encoding system. Also described herein are various applications of the object-encoding system and environment-encoding system.
    Type: Application
    Filed: October 12, 2020
    Publication date: April 14, 2022
    Inventors: Alexandros NEOFYTOU, Eric Chris Wolfgang SOMMERLADE, Alejandro SZTRAJMAN, Sunando SENGUPTA
  • Publication number: 20220044071
    Abstract: A computing system includes an encoder that receives an input image and encodes the input image into real image features, a decoder that decodes the real image features into a reconstructed image, a generator that receives first audio data corresponding to the input image and generates first synthetic image features from the first audio data, and receives second audio data and generates second synthetic image features from the second audio data, a discriminator that receives both the real and synthetic image features and determines whether a target feature is real or synthetic, and a classifier that classifies a scene of the second audio data based on the second synthetic image features.
    Type: Application
    Filed: October 26, 2021
    Publication date: February 10, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Eric Chris Wolfgang SOMMERLADE, Yang LIU, Alexandros NEOFYTOU, Sunando SENGUPTA
  • Patent number: 11164042
    Abstract: A computing system includes an encoder that receives an input image and encodes the input image into real image features, a decoder that decodes the real image features into a reconstructed image, a generator that receives first audio data corresponding to the input image and generates first synthetic image features from the first audio data, and receives second audio data and generates second synthetic image features from the second audio data, a discriminator that receives both the real and synthetic image features and determines whether a target feature is real or synthetic, and a classifier that classifies a scene of the second audio data based on the second synthetic image features.
    Type: Grant
    Filed: April 9, 2020
    Date of Patent: November 2, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Eric Chris Wolfgang Sommerlade, Yang Liu, Alexandros Neofytou, Sunando Sengupta
  • Publication number: 20210216817
    Abstract: A computing system includes an encoder that receives an input image and encodes the input image into real image features, a decoder that decodes the real image features into a reconstructed image, a generator that receives first audio data corresponding to the input image and generates first synthetic image features from the first audio data, and receives second audio data and generates second synthetic image features from the second audio data, a discriminator that receives both the real and synthetic image features and determines whether a target feature is real or synthetic, and a classifier that classifies a scene of the second audio data based on the second synthetic image features.
    Type: Application
    Filed: April 9, 2020
    Publication date: July 15, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Eric Chris Wolfgang SOMMERLADE, Yang LIU, Alexandros NEOFYTOU, Sunando SENGUPTA
  • Publication number: 20210097644
    Abstract: A method for image enhancement on a computing device includes receiving a digital input image depicting a human eye. From the digital input image, the computing device generates a gaze-adjusted image via a gaze adjustment machine learning model by changing an apparent gaze direction of the human eye. From the gaze-adjusted image and potentially in conjunction with the digital input image, the computing device generates a detail-enhanced image via a detail enhancement machine learning model by adding or modifying details. The computing device outputs the detail-enhanced image.
    Type: Application
    Filed: November 26, 2019
    Publication date: April 1, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Eric Chris Wolfgang SOMMERLADE, Alexandros NEOFYTOU, Sunando SENGUPTA
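The last abstract describes a two-stage pipeline: a gaze-adjustment model followed by a detail-enhancement model that can condition on the original input. The composition is simple enough to state directly; both models are caller-supplied stubs here, since the patent leaves the model architectures open.

```python
def enhance(frame, gaze_model, detail_model):
    """Two-stage pipeline from the abstract: first redirect the
    apparent gaze, then add or restore fine detail, with the second
    stage also seeing the original frame (as the abstract permits)."""
    gaze_adjusted = gaze_model(frame)
    return detail_model(gaze_adjusted, frame)
```

Passing the original frame to the detail stage lets it recover texture (skin, eyelashes) that the gaze warp may have smoothed out.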