Patents by Inventor Alexandros NEOFYTOU
Alexandros NEOFYTOU has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240071042
Abstract: An image-processing technique is described herein for removing a visual effect in a face region of an image caused, at least in part, by screen illumination provided by an electronic screen. The technique can perform this removal without advance knowledge of the nature of the screen illumination provided by the electronic screen. The technique improves the quality of the image and also protects the privacy of a user by removing the visual effect in the face region that may reveal the characteristics of display information presented on the electronic screen. In some implementations, the technique first adjusts a face region of the image, and then adjusts other regions in the image for consistency with the face region. In some implementations, the technique is applied by a videoconferencing application, and is performed by a local computing device.
Type: Application
Filed: August 30, 2022
Publication date: February 29, 2024
Applicant: Microsoft Technology Licensing, LLC
Inventors: Sunando SENGUPTA, Ebey Paulose ABRAHAM, Alexandros NEOFYTOU, Eric Chris Wolfgang SOMMERLADE
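The abstract above describes removing a screen-induced color cast from a face region without knowing what the screen was displaying. A minimal, purely illustrative stand-in for that idea is per-channel color balancing under a gray-world assumption; the function name and the neutral-gray target are assumptions for this sketch, not the patented method.

```python
import numpy as np

def remove_screen_cast(face_region: np.ndarray) -> np.ndarray:
    """Toy color-cast removal: rescale each channel so the face region's
    mean color becomes neutral gray (gray-world assumption). This stands
    in for the patented technique, which similarly needs no advance
    knowledge of the screen content."""
    means = face_region.reshape(-1, 3).mean(axis=0)  # per-channel mean
    gray = means.mean()                              # neutral target level
    gains = gray / means                             # per-channel gain
    return np.clip(face_region * gains, 0.0, 1.0)

# A face patch with a blue cast, as if lit by a bluish screen
face = np.ones((4, 4, 3)) * np.array([0.4, 0.4, 0.6])
out = remove_screen_cast(face)
```

After correction, the three channel means coincide, i.e. the bluish tint is gone while overall brightness is preserved.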
-
Patent number: 11915398
Abstract: In various embodiments, a computer-implemented method of training a neural network for relighting an image is described. A first training set that includes source images and a target illumination embedding is generated, the source images having respective illuminated subjects. A second training set that includes augmented images and the target illumination embedding is generated, where the augmented images correspond to the source images. A first autoencoder is trained using the first training set to generate a first output set that includes estimated source illumination embeddings and first reconstructed images that correspond to the source images, the reconstructed images having respective subjects that are i) from the corresponding source image, and ii) illuminated based on the target illumination embedding.
Type: Grant
Filed: March 1, 2023
Date of Patent: February 27, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Alexandros Neofytou, Eric Chris Wolfgang Sommerlade, Sunando Sengupta, Yang Liu
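The relighting autoencoder above encodes a source image into a content representation plus an estimated source illumination embedding, then decodes the content together with a *target* illumination embedding to produce a relit reconstruction. The following sketch shows only the data flow and shapes; the linear layers with random weights are stand-ins for the trained networks, and all sizes are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

IMG, CODE, LIGHT = 12, 6, 3  # toy sizes: flat image, content code, illumination embedding

# Random linear "networks" stand in for the trained encoder/decoder
W_enc_content = rng.normal(size=(CODE, IMG))
W_enc_light   = rng.normal(size=(LIGHT, IMG))
W_dec         = rng.normal(size=(IMG, CODE + LIGHT))

def encode(image):
    """Split an image into a content code and an estimated source
    illumination embedding."""
    return W_enc_content @ image, W_enc_light @ image

def decode(content, light):
    """Reconstruct an image whose subject comes from `content` but is
    lit according to the `light` embedding."""
    return W_dec @ np.concatenate([content, light])

source_image = rng.normal(size=IMG)
target_light = rng.normal(size=LIGHT)        # the target illumination embedding

content, est_source_light = encode(source_image)
relit = decode(content, target_light)        # subject from source, target lighting
```

Training would then compare `relit` against a reference and `est_source_light` against the true source illumination; that loss computation is omitted here.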
-
Publication number: 20240054683
Abstract: In various embodiments, a computer-implemented method of training a neural network for creating an output signal of a different modality from an input signal is described. In embodiments, the first modality may be a sound signal or a visual image, and the output signal would be a visual image or a sound signal, respectively. In embodiments, a model is trained using a first pair of visual and audio networks to train a set of codebooks on known visual and audio signals, and using a second pair of visual and audio networks to further train the set of codebooks on augmented visual and audio signals. Further, the first and second visual networks are equally weighted, as are the first and second audio networks.
Type: Application
Filed: October 26, 2023
Publication date: February 15, 2024
Applicant: Microsoft Technology Licensing, LLC
Inventors: Sunando SENGUPTA, Alexandros NEOFYTOU, Eric Chris Wolfgang SOMMERLADE, Yang LIU
-
Patent number: 11836952
Abstract: In various embodiments, a computer-implemented method of training a neural network for creating an output signal of a different modality from an input signal is described. In embodiments, the first modality may be a sound signal or a visual image, and the output signal would be a visual image or a sound signal, respectively. In embodiments, a model is trained using a first pair of visual and audio networks to train a set of codebooks on known visual and audio signals, and using a second pair of visual and audio networks to further train the set of codebooks on augmented visual and audio signals. Further, the first and second visual networks are equally weighted, as are the first and second audio networks. In aspects of the present disclosure, the set of codebooks comprises a visual codebook, an audio codebook, and a correlation codebook.
Type: Grant
Filed: April 26, 2021
Date of Patent: December 5, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sunando Sengupta, Alexandros Neofytou, Eric Chris Wolfgang Sommerlade, Yang Liu
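The codebooks in the abstract above are discrete dictionaries of embeddings, as in vector-quantized models: an encoder output is snapped to its nearest codebook entry, and a shared correlation codebook can link the visual and audio modalities. The sketch below shows only that nearest-neighbor lookup; the codebook contents and sizes are invented for illustration.

```python
import numpy as np

def quantize(embedding: np.ndarray, codebook: np.ndarray) -> int:
    """Return the index of the nearest codebook entry (L2 distance),
    the basic lookup step of vector-quantized codebooks."""
    dists = np.linalg.norm(codebook - embedding, axis=1)
    return int(np.argmin(dists))

# Hypothetical tiny codebooks: visual, audio, and a shared correlation codebook
visual_codebook      = np.array([[0.0, 0.0], [1.0, 1.0]])
audio_codebook       = np.array([[0.0, 1.0], [1.0, 0.0]])
correlation_codebook = np.array([[0.5, 0.5], [2.0, 2.0]])

visual_embedding = np.array([0.9, 1.1])      # pretend encoder output
idx = quantize(visual_embedding, visual_codebook)
shared = correlation_codebook[quantize(visual_embedding, correlation_codebook)]
```

Training two equally weighted network pairs, one on clean signals and one on augmented signals, would repeatedly update these codebook entries; here they are fixed for clarity.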
-
Publication number: 20230206406
Abstract: In various embodiments, a computer-implemented method of training a neural network for relighting an image is described. A first training set that includes source images and a target illumination embedding is generated, the source images having respective illuminated subjects. A second training set that includes augmented images and the target illumination embedding is generated, where the augmented images correspond to the source images. A first autoencoder is trained using the first training set to generate a first output set that includes estimated source illumination embeddings and first reconstructed images that correspond to the source images, the reconstructed images having respective subjects that are i) from the corresponding source image, and ii) illuminated based on the target illumination embedding.
Type: Application
Filed: March 1, 2023
Publication date: June 29, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventors: Alexandros NEOFYTOU, Eric Chris Wolfgang SOMMERLADE, Sunando SENGUPTA, Yang LIU
-
Patent number: 11657833
Abstract: A computing system includes an encoder that receives an input image and encodes the input image into real image features, a decoder that decodes the real image features into a reconstructed image, a generator that receives first audio data corresponding to the input image and generates first synthetic image features from the first audio data, and receives second audio data and generates second synthetic image features from the second audio data, a discriminator that receives both the real and synthetic image features and determines whether a target feature is real or synthetic, and a classifier that classifies a scene of the second audio data based on the second synthetic image features.
Type: Grant
Filed: October 26, 2021
Date of Patent: May 23, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Eric Chris Wolfgang Sommerlade, Yang Liu, Alexandros Neofytou, Sunando Sengupta
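The discriminator in this system judges whether a target feature vector came from the image encoder ("real") or from the audio-conditioned generator ("synthetic"). A toy version of that decision is a single linear scoring layer with a sigmoid; the weights here are hand-picked so the example separates cleanly, not learned as they would be in the patented system.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class Discriminator:
    """Toy linear discriminator: scores whether a target feature vector
    is 'real' (from the image encoder) or 'synthetic' (from the
    audio-conditioned generator). Weights are illustrative, not trained."""
    def __init__(self, weights, bias=0.0):
        self.w = np.asarray(weights)
        self.b = bias

    def is_real(self, feature):
        return bool(sigmoid(self.w @ feature + self.b) > 0.5)

disc = Discriminator([1.0, 1.0, 1.0])

real_feature      = np.array([0.5, 0.8, 0.6])     # e.g. encoded from an image
synthetic_feature = np.array([-0.4, -0.7, -0.2])  # e.g. generated from audio

real_verdict = disc.is_real(real_feature)
synthetic_verdict = disc.is_real(synthetic_feature)
```

In adversarial training, the generator would be updated to make `synthetic_verdict` flip to true while the discriminator is updated to keep the two apart; that loop is omitted here.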
-
Patent number: 11647158
Abstract: A computing system, a method, and a computer-readable storage medium for adjusting eye gaze are described. The method includes capturing a video stream including images of a user, detecting the user's face region within the images, and detecting the user's facial feature regions within the images based on the detected face region. The method includes determining whether the user is completely disengaged from the computing system and, if the user is not completely disengaged, detecting the user's eye region within the images based on the detected facial feature regions. The method also includes computing the user's desired eye gaze direction based on the detected eye region, generating gaze-adjusted images based on the desired eye gaze direction, wherein the gaze-adjusted images include a saccadic eye movement, a micro-saccadic eye movement, and/or a vergence eye movement, and replacing the images within the video stream with the gaze-adjusted images.
Type: Grant
Filed: October 30, 2020
Date of Patent: May 9, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Steven N. Bathiche, Eric Sommerlade, Alexandros Neofytou, Panos C. Panay
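The claimed method is a per-frame pipeline: detect the face, then facial features, bail out if the user is completely disengaged, otherwise detect the eye region, compute the desired gaze, and replace the frame. The control flow can be sketched as below; every callable is a hypothetical placeholder, not the patented detectors or renderer.

```python
def adjust_gaze_frame(frame, detect_face, detect_features, detect_eyes,
                      compute_gaze, render_adjusted, is_disengaged):
    """Per-frame flow sketched from the abstract above; all callables
    are placeholder stand-ins supplied by the caller."""
    face = detect_face(frame)
    features = detect_features(frame, face)
    if is_disengaged(features):
        return frame                      # completely disengaged: keep the frame
    eyes = detect_eyes(frame, features)
    gaze = compute_gaze(eyes)             # desired eye-gaze direction
    return render_adjusted(frame, gaze)   # gaze-adjusted replacement frame

# Toy stand-ins so the flow can run end to end
out = adjust_gaze_frame(
    "frame-1",
    detect_face=lambda f: "face-box",
    detect_features=lambda f, face: {"engaged": True},
    detect_eyes=lambda f, feats: "eye-region",
    compute_gaze=lambda eyes: "toward-camera",
    render_adjusted=lambda f, gaze: f + ":gaze-adjusted",
    is_disengaged=lambda feats: not feats["engaged"],
)
```

The real system would additionally inject saccadic, micro-saccadic, and/or vergence movements into the rendered eyes, which this sketch does not model.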
-
Patent number: 11615512
Abstract: In various embodiments, a computer-implemented method of training a neural network for relighting an image is described. A first training set that includes source images and a target illumination embedding is generated, the source images having respective illuminated subjects. A second training set that includes augmented images and the target illumination embedding is generated, where the augmented images correspond to the source images. A first autoencoder is trained using the first training set to generate a first output set that includes estimated source illumination embeddings and first reconstructed images that correspond to the source images, the reconstructed images having respective subjects that are i) from the corresponding source image, and ii) illuminated based on the target illumination embedding.
Type: Grant
Filed: March 2, 2021
Date of Patent: March 28, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Alexandros Neofytou, Eric Chris Wolfgang Sommerlade, Sunando Sengupta, Yang Liu
-
Publication number: 20220343543
Abstract: In various embodiments, a computer-implemented method of training a neural network for creating an output signal of a different modality from an input signal is described. In embodiments, the first modality may be a sound signal or a visual image, and the output signal would be a visual image or a sound signal, respectively. In embodiments, a model is trained using a first pair of visual and audio networks to train a set of codebooks on known visual and audio signals, and using a second pair of visual and audio networks to further train the set of codebooks on augmented visual and audio signals. Further, the first and second visual networks are equally weighted, as are the first and second audio networks. In aspects of the present disclosure, the set of codebooks comprises a visual codebook, an audio codebook, and a correlation codebook.
Type: Application
Filed: April 26, 2021
Publication date: October 27, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Sunando SENGUPTA, Alexandros NEOFYTOU, Eric Chris Wolfgang SOMMERLADE, Yang LIU
-
Publication number: 20220284551
Abstract: In various embodiments, a computer-implemented method of training a neural network for relighting an image is described. A first training set that includes source images and a target illumination embedding is generated, the source images having respective illuminated subjects. A second training set that includes augmented images and the target illumination embedding is generated, where the augmented images correspond to the source images. A first autoencoder is trained using the first training set to generate a first output set that includes estimated source illumination embeddings and first reconstructed images that correspond to the source images, the reconstructed images having respective subjects that are i) from the corresponding source image, and ii) illuminated based on the target illumination embedding.
Type: Application
Filed: March 2, 2021
Publication date: September 8, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Alexandros Neofytou, Eric Chris Wolfgang Sommerlade, Sunando Sengupta, Yang Liu
-
Publication number: 20220221932
Abstract: Aspects of the present disclosure relate to systems and methods for controlling a function of a computing system using gaze detection. In examples, one or more images of a user are received and gaze information may be determined from the received one or more images. Non-gaze information may be received when the gaze information is determined to satisfy a condition. Accordingly, a function may be enabled based on the received non-gaze information. In examples, the gaze information may be determined by extracting a plurality of features from the received one or more images, providing the plurality of features to a neural network, and determining, utilizing the neural network, a location at a display device at which a gaze of the user is directed.
Type: Application
Filed: January 12, 2021
Publication date: July 14, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Steven N. BATHICHE, Eric Chris Wolfgang Sommerlade, Vivek PRADEEP, Alexandros NEOFYTOU
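The gating described above, enable a function only when the gaze condition is satisfied and non-gaze input arrives, reduces to a simple predicate once a gaze location has been predicted. The region-containment condition and the function/parameter names below are assumptions for this sketch; the patent's condition and inputs may differ.

```python
def maybe_enable(gaze_xy, screen_region, non_gaze_input):
    """Enable the function only when the predicted gaze location falls
    inside the condition region AND non-gaze input (e.g. a key press)
    has been received. Names and shapes are illustrative."""
    x0, y0, x1, y1 = screen_region
    x, y = gaze_xy
    gazing = x0 <= x <= x1 and y0 <= y <= y1
    return gazing and non_gaze_input is not None

enabled = maybe_enable((120, 80), (100, 50, 300, 200), "Enter")  # gaze on target
ignored = maybe_enable((10, 10), (100, 50, 300, 200), "Enter")   # gaze elsewhere
```

In the disclosure, `gaze_xy` would come from a neural network that maps image features to a display location; here it is passed in directly.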
-
Patent number: 11330196
Abstract: Technology is described herein that uses an object-encoding system to convert an object image into a combined encoding. The object image depicts a reference object, while the combined encoding represents an environment image. The environment image, in turn, depicts an estimate of an environment that has produced the illumination effects exhibited by the reference object. The combined encoding includes: a first part that represents image content in the environment image within a high range of intensity values; and a second part that represents image content within a low range of intensity values. Also described herein is a training system that trains the object-encoding system based on combined encodings produced by a separately-trained environment-encoding system. Also described herein are various applications of the object-encoding system and environment-encoding system.
Type: Grant
Filed: October 12, 2020
Date of Patent: May 10, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Alexandros Neofytou, Eric Chris Wolfgang Sommerlade, Alejandro Sztrajman, Sunando Sengupta
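The combined encoding's two parts, high-intensity content and low-intensity content, reflect that bright light sources and dim surroundings in an environment map carry different information. A crude illustration is a threshold split of an HDR-like environment image; the threshold value and function name are invented for this sketch, and the real encoding is a learned representation rather than a pixel mask.

```python
import numpy as np

def combined_encoding(env_image: np.ndarray, threshold: float = 1.0):
    """Split an (HDR-like) environment image into the two parts of the
    combined encoding: content in the high intensity range and content
    in the low intensity range. The threshold is illustrative."""
    high = np.where(env_image > threshold, env_image, 0.0)
    low = np.where(env_image <= threshold, env_image, 0.0)
    return high, low

env = np.array([0.2, 3.0, 0.8, 5.0])  # toy environment map with bright sources
high_part, low_part = combined_encoding(env)
```

The two parts partition the image, so summing them recovers the original environment map exactly.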
-
Publication number: 20220141422
Abstract: A computing system, a method, and a computer-readable storage medium for adjusting eye gaze are described. The method includes capturing a video stream including images of a user, detecting the user's face region within the images, and detecting the user's facial feature regions within the images based on the detected face region. The method includes determining whether the user is completely disengaged from the computing system and, if the user is not completely disengaged, detecting the user's eye region within the images based on the detected facial feature regions. The method also includes computing the user's desired eye gaze direction based on the detected eye region, generating gaze-adjusted images based on the desired eye gaze direction, wherein the gaze-adjusted images include a saccadic eye movement, a micro-saccadic eye movement, and/or a vergence eye movement, and replacing the images within the video stream with the gaze-adjusted images.
Type: Application
Filed: October 30, 2020
Publication date: May 5, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Steven N. BATHICHE, Eric SOMMERLADE, Alexandros NEOFYTOU, Panos C. PANAY
-
Publication number: 20220116549
Abstract: Technology is described herein that uses an object-encoding system to convert an object image into a combined encoding. The object image depicts a reference object, while the combined encoding represents an environment image. The environment image, in turn, depicts an estimate of an environment that has produced the illumination effects exhibited by the reference object. The combined encoding includes: a first part that represents image content in the environment image within a high range of intensity values; and a second part that represents image content within a low range of intensity values. Also described herein is a training system that trains the object-encoding system based on combined encodings produced by a separately-trained environment-encoding system. Also described herein are various applications of the object-encoding system and environment-encoding system.
Type: Application
Filed: October 12, 2020
Publication date: April 14, 2022
Inventors: Alexandros NEOFYTOU, Eric Chris Wolfgang SOMMERLADE, Alejandro SZTRAJMAN, Sunando SENGUPTA
-
Publication number: 20220044071
Abstract: A computing system includes an encoder that receives an input image and encodes the input image into real image features, a decoder that decodes the real image features into a reconstructed image, a generator that receives first audio data corresponding to the input image and generates first synthetic image features from the first audio data, and receives second audio data and generates second synthetic image features from the second audio data, a discriminator that receives both the real and synthetic image features and determines whether a target feature is real or synthetic, and a classifier that classifies a scene of the second audio data based on the second synthetic image features.
Type: Application
Filed: October 26, 2021
Publication date: February 10, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Eric Chris Wolfgang SOMMERLADE, Yang LIU, Alexandros NEOFYTOU, Sunando SENGUPTA
-
Patent number: 11164042
Abstract: A computing system includes an encoder that receives an input image and encodes the input image into real image features, a decoder that decodes the real image features into a reconstructed image, a generator that receives first audio data corresponding to the input image and generates first synthetic image features from the first audio data, and receives second audio data and generates second synthetic image features from the second audio data, a discriminator that receives both the real and synthetic image features and determines whether a target feature is real or synthetic, and a classifier that classifies a scene of the second audio data based on the second synthetic image features.
Type: Grant
Filed: April 9, 2020
Date of Patent: November 2, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Eric Chris Wolfgang Sommerlade, Yang Liu, Alexandros Neofytou, Sunando Sengupta
-
Publication number: 20210216817
Abstract: A computing system includes an encoder that receives an input image and encodes the input image into real image features, a decoder that decodes the real image features into a reconstructed image, a generator that receives first audio data corresponding to the input image and generates first synthetic image features from the first audio data, and receives second audio data and generates second synthetic image features from the second audio data, a discriminator that receives both the real and synthetic image features and determines whether a target feature is real or synthetic, and a classifier that classifies a scene of the second audio data based on the second synthetic image features.
Type: Application
Filed: April 9, 2020
Publication date: July 15, 2021
Applicant: Microsoft Technology Licensing, LLC
Inventors: Eric Chris Wolfgang SOMMERLADE, Yang LIU, Alexandros NEOFYTOU, Sunando SENGUPTA
-
Publication number: 20210097644
Abstract: A method for image enhancement on a computing device includes receiving a digital input image depicting a human eye. From the digital input image, the computing device generates a gaze-adjusted image via a gaze adjustment machine learning model by changing an apparent gaze direction of the human eye. From the gaze-adjusted image, and potentially in conjunction with the digital input image, the computing device generates a detail-enhanced image via a detail enhancement machine learning model by adding or modifying details. The computing device outputs the detail-enhanced image.
Type: Application
Filed: November 26, 2019
Publication date: April 1, 2021
Applicant: Microsoft Technology Licensing, LLC
Inventors: Eric Chris Wolfgang SOMMERLADE, Alexandros NEOFYTOU, Sunando SENGUPTA