Patents by Inventor Sunando SENGUPTA
Sunando SENGUPTA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11947210
Abstract: The present disclosure relates to identifying an intended viewer and an unintended viewer of a liquid crystal display (LCD) using face recognition technology. Once identified, the system may determine a face position for the unintended viewer. The system may modulate the voltage applied at a third electrode on the color filter layer of the LCD to achieve a certain off-axis contrast that may reduce the unintended viewer's visibility of the LCD without restricting the visibility of the intended viewer. Ultimately, the present disclosure provides enhanced privacy options for the intended viewer with a lightweight, inexpensive, and highly transportable system.
Type: Grant
Filed: May 4, 2023
Date of Patent: April 2, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Timothy A. Large, Neil Emerton, Sunando Sengupta
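The abstract above describes two steps: locate the unintended viewer's face, then drive the third electrode to cut off-axis contrast. A minimal sketch of that control logic is below; the geometry-to-voltage mapping, the voltage range, and both function names are illustrative assumptions, not taken from the patent.

```python
import math

def off_axis_angle_deg(face_x_m, face_z_m):
    """Horizontal viewing angle of a detected face relative to the screen
    normal, from its lateral offset and distance (both in metres)."""
    return math.degrees(math.atan2(face_x_m, face_z_m))

def electrode_voltage(angle_deg, v_min=0.0, v_max=5.0, cutoff_deg=45.0):
    """Illustrative linear mapping from the unintended viewer's off-axis
    angle to a drive voltage for the third electrode: a viewer near the
    screen normal gets the maximum suppression voltage, and the voltage
    ramps down to v_min at the cutoff angle."""
    a = min(abs(angle_deg), cutoff_deg)
    return v_max - (v_max - v_min) * (a / cutoff_deg)
```

In a real system the mapping would come from the panel's measured off-axis contrast curve rather than this linear ramp.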
-
Publication number: 20240071042
Abstract: An image-processing technique is described herein for removing a visual effect in a face region of an image caused, at least in part, by screen illumination provided by an electronic screen. The technique can perform this removal without advance knowledge of the nature of the screen illumination provided by the electronic screen. The technique improves the quality of the image and also protects the privacy of a user by removing the visual effect in the face region that may reveal the characteristics of display information presented on the electronic screen. In some implementations, the technique first adjusts a face region of the image, and then adjusts other regions in the image for consistency with the face region. In some implementations, the technique is applied by a videoconferencing application, and is performed by a local computing device.
Type: Application
Filed: August 30, 2022
Publication date: February 29, 2024
Applicant: Microsoft Technology Licensing, LLC
Inventors: Sunando SENGUPTA, Ebey Paulose ABRAHAM, Alexandros NEOFYTOU, Eric Chris Wolfgang SOMMERLADE
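The two-stage idea in this abstract (correct the face region first, then shift the rest of the frame for consistency) can be sketched with a simple colour-cast subtraction; the neutral reference tone and the subtraction model are illustrative assumptions standing in for the learned correction the publication actually describes.

```python
import numpy as np

def remove_screen_glow(image, face_mask, reference_tone):
    """Illustrative two-stage correction: (1) estimate the colour cast on
    the face as the gap between the face region's mean colour and a neutral
    reference tone, then (2) subtract that same cast from the whole frame
    so the non-face regions stay consistent with the corrected face.

    image: HxWx3 float array in [0, 1]; face_mask: HxW boolean array.
    """
    face = image[face_mask]
    cast = face.mean(axis=0) - np.asarray(reference_tone, dtype=float)
    return np.clip(image - cast, 0.0, 1.0)
```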
-
Patent number: 11915398
Abstract: In various embodiments, a computer-implemented method of training a neural network for relighting an image is described. A first training set that includes source images and a target illumination embedding is generated, the source images having respective illuminated subjects. A second training set that includes augmented images and the target illumination embedding is generated, where the augmented images correspond to the source images. A first autoencoder is trained using the first training set to generate a first output set that includes estimated source illumination embeddings and first reconstructed images that correspond to the source images, the reconstructed images having respective subjects that are i) from the corresponding source image, and ii) illuminated based on the target illumination embedding.
Type: Grant
Filed: March 1, 2023
Date of Patent: February 27, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Alexandros Neofytou, Eric Chris Wolfgang Sommerlade, Sunando Sengupta, Yang Liu
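The data flow in this abstract can be sketched with toy linear maps: two training sets built around one shared target illumination embedding, and an autoencoder emitting (estimated source illumination, relit reconstruction) pairs. All dimensions, the brightness-jitter augmentation, and the linear encoder/decoder are illustrative assumptions; the patent's networks are learned, not random matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_img, d_illum = 8, 64, 16

# Source images with their own (unknown) illumination, plus one shared
# target illumination embedding.
source_images = rng.normal(size=(n, d_img))
target_illum = rng.normal(size=(d_illum,))

# First training set: (source image, target illumination embedding) pairs.
train_set_1 = [(img, target_illum) for img in source_images]

# Second training set: augmented copies of the sources (a brightness
# jitter stands in for the unspecified augmentation), same target embedding.
augmented = source_images * rng.uniform(0.8, 1.2, size=(n, 1))
train_set_2 = [(img, target_illum) for img in augmented]

def autoencoder(img, illum, W_enc, W_dec):
    """Toy linear stand-in: estimate the source illumination embedding from
    the image, then reconstruct the subject relit by the target embedding."""
    est_illum = W_enc @ img
    recon = W_dec @ np.concatenate([est_illum, illum])
    return est_illum, recon

W_enc = rng.normal(size=(d_illum, d_img)) * 0.1
W_dec = rng.normal(size=(d_img, 2 * d_illum)) * 0.1
est, recon = autoencoder(source_images[0], target_illum, W_enc, W_dec)
```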
-
Publication number: 20240054683
Abstract: In various embodiments, a computer-implemented method of training a neural network for creating an output signal of a different modality from an input signal is described. In embodiments, the first modality may be a sound signal or a visual image, and the output signal is then a visual image or a sound signal, respectively. In embodiments, a model is trained using a first pair of visual and audio networks to train a set of codebooks using known visual signals and audio signals, and using a second pair of visual and audio networks to further train the set of codebooks using augmented visual signals and augmented audio signals. Further, the first and second visual networks are equally weighted, as are the first and second audio networks.
Type: Application
Filed: October 26, 2023
Publication date: February 15, 2024
Applicant: Microsoft Technology Licensing, LLC
Inventors: Sunando SENGUPTA, Alexandros NEOFYTOU, Eric Chris Wolfgang SOMMERLADE, Yang LIU
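The codebooks this abstract trains are, at bottom, tables of codewords that embeddings get snapped to. A minimal sketch of that nearest-codeword lookup follows; the codebook sizes, dimensions, and random contents are illustrative assumptions, and the actual paired-network training that aligns the visual and audio codebooks is not shown.

```python
import numpy as np

def quantize(embedding, codebook):
    """Nearest-codeword lookup, the basic operation behind a trained
    codebook: return the index and value of the closest codeword."""
    dists = np.linalg.norm(codebook - embedding, axis=1)
    idx = int(np.argmin(dists))
    return idx, codebook[idx]

rng = np.random.default_rng(1)
visual_codebook = rng.normal(size=(32, 8))  # 32 codewords of dimension 8
audio_codebook = rng.normal(size=(32, 8))

# A visual embedding and an audio embedding are each snapped to their
# codebook; the paired, equally weighted networks described above would
# then be trained so that matching visual/audio inputs land on
# corresponding codewords.
v_idx, v_code = quantize(rng.normal(size=8), visual_codebook)
a_idx, a_code = quantize(rng.normal(size=8), audio_codebook)
```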
-
Patent number: 11871147
Abstract: Methods and systems for applying gaze adjustment techniques to participants in a video conference are disclosed. Some examples may include: receiving, at a computing system, image adjustment information associated with a video stream including images of a first participant; identifying, for a display layout of a communication application, a location displaying the images of the first participant; determining, based on the received image adjustment information, a location displaying images of a second participant for the display layout, the received image adjustment information indicating that an eye gaze of the first participant is directed toward the second participant; computing an eye gaze direction of the first participant based on the location displaying images of the second participant; generating gaze-adjusted images based on the computed eye gaze direction of the first participant; and replacing the images within the video stream with the gaze-adjusted images.
Type: Grant
Filed: June 9, 2021
Date of Patent: January 9, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Eric Chris Wolfgang Sommerlade, Alexandros Neophytou, Sunando Sengupta
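The "computing an eye gaze direction" step above boils down to geometry over the display layout: given where participant A's tile and participant B's tile sit on screen, derive the direction A should appear to look. A minimal sketch, with pixel coordinates and the angle convention as assumptions:

```python
import math

def gaze_direction(src_tile_center, dst_tile_center):
    """Desired on-screen gaze direction for a participant, computed from
    the layout position of their own tile and the tile of the participant
    they are looking at. Coordinates are (x, y) pixels with y pointing
    down; the result is an angle in degrees, 0 = to the right."""
    dx = dst_tile_center[0] - src_tile_center[0]
    dy = dst_tile_center[1] - src_tile_center[1]
    return math.degrees(math.atan2(dy, dx))
```

A gaze-redirection model would then warp the participant's eyes toward this angle before the frames are substituted back into the stream.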
-
Patent number: 11836952
Abstract: In various embodiments, a computer-implemented method of training a neural network for creating an output signal of a different modality from an input signal is described. In embodiments, the first modality may be a sound signal or a visual image, and the output signal is then a visual image or a sound signal, respectively. In embodiments, a model is trained using a first pair of visual and audio networks to train a set of codebooks using known visual signals and audio signals, and using a second pair of visual and audio networks to further train the set of codebooks using augmented visual signals and augmented audio signals. Further, the first and second visual networks are equally weighted, as are the first and second audio networks. In aspects of the present disclosure, the set of codebooks comprises a visual codebook, an audio codebook, and a correlation codebook.
Type: Grant
Filed: April 26, 2021
Date of Patent: December 5, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sunando Sengupta, Alexandros Neofytou, Eric Chris Wolfgang Sommerlade, Yang Liu
-
Publication number: 20230319233
Abstract: Methods and systems for applying gaze adjustment techniques to participants in a video conference are disclosed. Some examples may include: receiving, at a computing system, image adjustment information associated with a video stream including images of a first participant; identifying, for a display layout of a communication application, a location displaying the images of the first participant; determining, based on the received image adjustment information, a location displaying images of a second participant for the display layout, the received image adjustment information indicating that an eye gaze of the first participant is directed toward the second participant; computing an eye gaze direction of the first participant based on the location displaying images of the second participant; generating gaze-adjusted images based on the computed eye gaze direction of the first participant; and replacing the images within the video stream with the gaze-adjusted images.
Type: Application
Filed: June 5, 2023
Publication date: October 5, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventors: Eric Chris Wolfgang SOMMERLADE, Alexandros NEOPHYTOU, Sunando SENGUPTA
-
Publication number: 20230289919
Abstract: Aspects of the present disclosure relate to video stream refinement for a dynamic scene. In examples, a system is provided that includes at least one processor, and memory storing instructions that, when executed by the at least one processor, cause the system to perform a set of operations. The set of operations includes receiving an input video stream, identifying, within the input video stream, a frame portion containing features of interest, enlarging the frame portion containing the features of interest, enhancing the frame portion of the input video stream to increase fidelity within the frame portion, and displaying the enhanced frame portion.
Type: Application
Filed: March 11, 2022
Publication date: September 14, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventors: Sunando SENGUPTA, John G A WEISS, Luming LIANG, Ilya D. ZHARKOV, Eric CW SOMMERLADE
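The identify-crop-enlarge pipeline above can be sketched with a plain array crop and integer upsampling; nearest-neighbour repetition here is an illustrative placeholder for the fidelity-enhancing step, which the publication leaves to a separate enhancement model.

```python
import numpy as np

def enlarge_region(frame, top, left, height, width, scale=2):
    """Crop the frame portion containing the features of interest and
    enlarge it by integer nearest-neighbour upsampling (a stand-in for
    the actual enhancement step)."""
    crop = frame[top:top + height, left:left + width]
    return crop.repeat(scale, axis=0).repeat(scale, axis=1)
```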
-
Patent number: 11714881
Abstract: A method of improving image quality of a stream of input images is described. The stream of input images, including a current input image, is received. One or more target objects, including a first target object, are identified spatio-temporally within the stream of input images. The one or more target objects are tracked spatio-temporally within the stream of input images. The current input image is segmented into i) a foreground including the first target object, and ii) a background. The foreground is processed to have improved image quality in the current input image. Processing of the foreground further comprises processing the first target object using a same processing technique as for a prior input image of the stream of input images based on the tracking of the first target object. The background is processed differently from the foreground. An output image is generated by merging the foreground with the background.
Type: Grant
Filed: May 27, 2021
Date of Patent: August 1, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Eric Chris Wolfgang Sommerlade, Sunando Sengupta, Alexandros Neophytou
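The final merge step above (foreground and background processed by different techniques, then composited) is a per-pixel mask select. A minimal sketch, where the `enhance` and `blur` callables are illustrative placeholders for the patent's actual quality pipelines:

```python
import numpy as np

def process_frame(frame, fg_mask, enhance, blur):
    """Segment the frame into foreground and background via a boolean
    mask, process each with a different technique, and merge the results
    per pixel into the output image."""
    fg = enhance(frame)           # foreground quality pipeline (stand-in)
    bg = blur(frame)              # cheaper background pipeline (stand-in)
    return np.where(fg_mask[..., None], fg, bg)
```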
-
Patent number: 11706384
Abstract: Methods and systems for applying gaze adjustment techniques to participants in a video conference are disclosed. Some examples may include: receiving, at a computing system, image adjustment information associated with a video stream including images of a first participant; identifying, for a display layout of a communication application, a location displaying the images of the first participant; determining, based on the received image adjustment information, a location displaying images of a second participant for the display layout, the received image adjustment information indicating that an eye gaze of the first participant is directed toward the second participant; computing an eye gaze direction of the first participant based on the location displaying images of the second participant; generating gaze-adjusted images based on the computed eye gaze direction of the first participant; and replacing the images within the video stream with the gaze-adjusted images.
Type: Grant
Filed: June 9, 2021
Date of Patent: July 18, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Eric Chris Wolfgang Sommerlade, Alexandros Neophytou, Sunando Sengupta
-
Publication number: 20230206406
Abstract: In various embodiments, a computer-implemented method of training a neural network for relighting an image is described. A first training set that includes source images and a target illumination embedding is generated, the source images having respective illuminated subjects. A second training set that includes augmented images and the target illumination embedding is generated, where the augmented images correspond to the source images. A first autoencoder is trained using the first training set to generate a first output set that includes estimated source illumination embeddings and first reconstructed images that correspond to the source images, the reconstructed images having respective subjects that are i) from the corresponding source image, and ii) illuminated based on the target illumination embedding.
Type: Application
Filed: March 1, 2023
Publication date: June 29, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventors: Alexandros NEOFYTOU, Eric Chris Wolfgang SOMMERLADE, Sunando SENGUPTA, Yang LIU
-
Patent number: 11657833
Abstract: A computing system includes an encoder that receives an input image and encodes the input image into real image features, a decoder that decodes the real image features into a reconstructed image, a generator that receives first audio data corresponding to the input image and generates first synthetic image features from the first audio data, and receives second audio data and generates second synthetic image features from the second audio data, a discriminator that receives both the real and synthetic image features and determines whether a target feature is real or synthetic, and a classifier that classifies a scene of the second audio data based on the second synthetic image features.
Type: Grant
Filed: October 26, 2021
Date of Patent: May 23, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Eric Chris Wolfgang Sommerlade, Yang Liu, Alexandros Neofytou, Sunando Sengupta
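The five components in this abstract (encoder, decoder, generator, discriminator, classifier) and how features flow between them can be sketched with toy linear maps; all dimensions, the random weights, and the four-scene classifier are illustrative assumptions, not the patent's trained networks.

```python
import numpy as np

rng = np.random.default_rng(2)
d_in, d_feat = 32, 8

W_enc = rng.normal(size=(d_feat, d_in)) * 0.1   # encoder: image -> real features
W_dec = rng.normal(size=(d_in, d_feat)) * 0.1   # decoder: features -> image
W_gen = rng.normal(size=(d_feat, d_in)) * 0.1   # generator: audio -> synthetic features
w_disc = rng.normal(size=d_feat) * 0.1          # discriminator: real vs synthetic
W_cls = rng.normal(size=(4, d_feat)) * 0.1      # classifier over 4 scene classes

image = rng.normal(size=d_in)
audio = rng.normal(size=d_in)

real_feat = W_enc @ image                   # encode image into real features
recon = W_dec @ real_feat                   # decode features back to an image
synth_feat = W_gen @ audio                  # synthesize image features from audio
is_real = bool(w_disc @ synth_feat > 0)     # discriminator's real/synthetic call
scene = int(np.argmax(W_cls @ synth_feat))  # classify the audio's scene
```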
-
Patent number: 11615512
Abstract: In various embodiments, a computer-implemented method of training a neural network for relighting an image is described. A first training set that includes source images and a target illumination embedding is generated, the source images having respective illuminated subjects. A second training set that includes augmented images and the target illumination embedding is generated, where the augmented images correspond to the source images. A first autoencoder is trained using the first training set to generate a first output set that includes estimated source illumination embeddings and first reconstructed images that correspond to the source images, the reconstructed images having respective subjects that are i) from the corresponding source image, and ii) illuminated based on the target illumination embedding.
Type: Grant
Filed: March 2, 2021
Date of Patent: March 28, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Alexandros Neofytou, Eric Chris Wolfgang Sommerlade, Sunando Sengupta, Yang Liu
-
Publication number: 20220400228
Abstract: Methods and systems for applying gaze adjustment techniques to participants in a video conference are disclosed. Some examples may include: receiving, at a computing system, image adjustment information associated with a video stream including images of a first participant; identifying, for a display layout of a communication application, a location displaying the images of the first participant; determining, based on the received image adjustment information, a location displaying images of a second participant for the display layout, the received image adjustment information indicating that an eye gaze of the first participant is directed toward the second participant; computing an eye gaze direction of the first participant based on the location displaying images of the second participant; generating gaze-adjusted images based on the computed eye gaze direction of the first participant; and replacing the images within the video stream with the gaze-adjusted images.
Type: Application
Filed: June 9, 2021
Publication date: December 15, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Eric Chris Wolfgang SOMMERLADE, Alexandros NEOPHYTOU, Sunando SENGUPTA
-
Publication number: 20220383034
Abstract: A method of improving image quality of a stream of input images is described. The stream of input images, including a current input image, is received. One or more target objects, including a first target object, are identified spatio-temporally within the stream of input images. The one or more target objects are tracked spatio-temporally within the stream of input images. The current input image is segmented into i) a foreground including the first target object, and ii) a background. The foreground is processed to have improved image quality in the current input image. Processing of the foreground further comprises processing the first target object using a same processing technique as for a prior input image of the stream of input images based on the tracking of the first target object. The background is processed differently from the foreground. An output image is generated by merging the foreground with the background.
Type: Application
Filed: May 27, 2021
Publication date: December 1, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Eric Chris Wolfgang SOMMERLADE, Sunando SENGUPTA, Alexandros NEOPHYTOU
-
Publication number: 20220343543
Abstract: In various embodiments, a computer-implemented method of training a neural network for creating an output signal of a different modality from an input signal is described. In embodiments, the first modality may be a sound signal or a visual image, and the output signal is then a visual image or a sound signal, respectively. In embodiments, a model is trained using a first pair of visual and audio networks to train a set of codebooks using known visual signals and audio signals, and using a second pair of visual and audio networks to further train the set of codebooks using augmented visual signals and augmented audio signals. Further, the first and second visual networks are equally weighted, as are the first and second audio networks. In aspects of the present disclosure, the set of codebooks comprises a visual codebook, an audio codebook, and a correlation codebook.
Type: Application
Filed: April 26, 2021
Publication date: October 27, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Sunando SENGUPTA, Alexandros NEOFYTOU, Eric Chris Wolfgang SOMMERLADE, Yang LIU
-
Publication number: 20220284551
Abstract: In various embodiments, a computer-implemented method of training a neural network for relighting an image is described. A first training set that includes source images and a target illumination embedding is generated, the source images having respective illuminated subjects. A second training set that includes augmented images and the target illumination embedding is generated, where the augmented images correspond to the source images. A first autoencoder is trained using the first training set to generate a first output set that includes estimated source illumination embeddings and first reconstructed images that correspond to the source images, the reconstructed images having respective subjects that are i) from the corresponding source image, and ii) illuminated based on the target illumination embedding.
Type: Application
Filed: March 2, 2021
Publication date: September 8, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Alexandros Neofytou, Eric Chris Wolfgang Sommerlade, Sunando Sengupta, Yang Liu
-
Patent number: 11330196
Abstract: Technology is described herein that uses an object-encoding system to convert an object image into a combined encoding. The object image depicts a reference object, while the combined encoding represents an environment image. The environment image, in turn, depicts an estimate of an environment that has produced the illumination effects exhibited by the reference object. The combined encoding includes: a first part that represents image content in the environment image within a high range of intensity values; and a second part that represents image content within a low range of intensity values. Also described herein is a training system that trains the object-encoding system based on combined encodings produced by a separately-trained environment-encoding system. Also described herein are various applications of the object-encoding system and environment-encoding system.
Type: Grant
Filed: October 12, 2020
Date of Patent: May 10, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Alexandros Neofytou, Eric Chris Wolfgang Sommerlade, Alejandro Sztrajman, Sunando Sengupta
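The two-part split in this abstract (one part for high-intensity content, one for low) is the kind of decomposition commonly used for high-dynamic-range environment images. A minimal sketch follows; the clipping threshold and log compression are illustrative choices, not taken from the patent.

```python
import numpy as np

def combined_encoding(env_map, threshold=1.0):
    """Split an HDR environment image into two parts in the spirit of the
    combined encoding: a low-range part holding values up to the threshold,
    and a high-range part holding a log-compressed residual above it."""
    low = np.clip(env_map, 0.0, threshold)
    high = np.log1p(np.clip(env_map - threshold, 0.0, None))
    return low, high
```

Keeping the bright residual in a compressed channel lets a single encoder handle both the dim surroundings and the intense light sources that dominate an object's reflections.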
-
Publication number: 20220116549
Abstract: Technology is described herein that uses an object-encoding system to convert an object image into a combined encoding. The object image depicts a reference object, while the combined encoding represents an environment image. The environment image, in turn, depicts an estimate of an environment that has produced the illumination effects exhibited by the reference object. The combined encoding includes: a first part that represents image content in the environment image within a high range of intensity values; and a second part that represents image content within a low range of intensity values. Also described herein is a training system that trains the object-encoding system based on combined encodings produced by a separately-trained environment-encoding system. Also described herein are various applications of the object-encoding system and environment-encoding system.
Type: Application
Filed: October 12, 2020
Publication date: April 14, 2022
Inventors: Alexandros NEOFYTOU, Eric Chris Wolfgang SOMMERLADE, Alejandro SZTRAJMAN, Sunando SENGUPTA
-
Publication number: 20220044071
Abstract: A computing system includes an encoder that receives an input image and encodes the input image into real image features, a decoder that decodes the real image features into a reconstructed image, a generator that receives first audio data corresponding to the input image and generates first synthetic image features from the first audio data, and receives second audio data and generates second synthetic image features from the second audio data, a discriminator that receives both the real and synthetic image features and determines whether a target feature is real or synthetic, and a classifier that classifies a scene of the second audio data based on the second synthetic image features.
Type: Application
Filed: October 26, 2021
Publication date: February 10, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Eric Chris Wolfgang SOMMERLADE, Yang LIU, Alexandros NEOFYTOU, Sunando SENGUPTA