Patents by Inventor Lior Wolf
Lior Wolf has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240039559
Abstract: Disclosed herein are systems and methods for training neural network based decoders for decoding error correction codes. The method comprises obtaining a plurality of training samples comprising one or more codewords encoded using an error correction code and transmitted over a transmission channel, where the training samples are subject to gradual interference over a plurality of time steps and associate the encoded codeword(s) with an interference level and a parity check syndrome at each of the plurality of time steps. The training samples are used to train a neural network based decoder to decode codewords encoded using an error correction code by (1) estimating a multiplicative interference included in the encoded codeword(s) based on reverse diffusion applied to the encoded codeword(s) across the time steps, (2) computing an additive interference included in the encoded codeword(s) based on the multiplicative interference, and (3) recovering the codeword(s) by removing the additive interference.
Type: Application
Filed: July 18, 2023
Publication date: February 1, 2024
Applicant: Ramot at Tel-Aviv University Ltd.
Inventors: Yoni CHOUKROUN, Lior WOLF
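The three recovery steps in this abstract can be illustrated with a minimal sketch. Everything here is an assumption for illustration: `refine_h` is a toy stand-in for the trained reverse-diffusion network, and the channel model (`y = h * x + z` with BPSK codeword symbols) is the simplest one consistent with "multiplicative" plus "additive" interference.

```python
import numpy as np

rng = np.random.default_rng(0)

def reverse_diffusion_decode(y, refine_h, n_steps=5):
    """Toy sketch of the described pipeline: (1) estimate the
    multiplicative interference over the reverse-diffusion time steps,
    (2) derive the additive interference it implies, (3) remove it."""
    h_hat = np.ones_like(y)                 # neutral starting estimate
    for t in range(n_steps, 0, -1):
        h_hat = refine_h(y, h_hat, t)       # stand-in for the trained net
    x_hard = np.sign(y)                     # provisional hard decision
    z_hat = y - h_hat * x_hard              # additive part implied by h_hat
    return np.sign(y - z_hat)               # codeword with additive part removed

# Simulated channel: BPSK symbols with mild multiplicative and additive noise.
x = rng.choice([-1.0, 1.0], 32)
h = 0.8 + 0.2 * rng.random(32)
z = 0.05 * rng.standard_normal(32)
y = h * x + z
decoded = reverse_diffusion_decode(y, lambda y, hh, t: 0.5 * hh + 0.5 * np.abs(y))
```

With interference this mild the toy decoder recovers `x` exactly; the patent's contribution is learning `refine_h` so the same structure works at realistic noise levels.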
-
Patent number: 11854203
Abstract: In one embodiment, a method includes receiving a first image depicting a context including one or more persons having one or more respective poses, receiving a second image depicting a target person having an original pose, where the target person is to be inserted into the context depicted in the first image, generating a target segmentation mask specifying a new pose for the target person in the context of the first image based on the first image, generating a third image depicting the target person having the new pose based on the second image and the target segmentation mask, and generating an output image based on the first image and the third image, the output image depicting the one or more persons having the one or more respective poses and the target person having the new pose.
Type: Grant
Filed: December 18, 2020
Date of Patent: December 26, 2023
Assignee: Meta Platforms, Inc.
Inventors: Oran Gafni, Lior Wolf
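The abstract describes a three-stage pipeline (mask generation, re-posing, compositing). A minimal structural sketch, where `seg_model`, `pose_model`, and `blend` are hypothetical callables standing in for the patent's generative models:

```python
def insert_person(context_img, target_img, seg_model, pose_model, blend):
    """Pipeline sketch of the described method: derive a new-pose
    segmentation mask from the context image, re-pose the target
    person to fit that mask, then composite into the context."""
    mask = seg_model(context_img)            # target segmentation mask
    reposed = pose_model(target_img, mask)   # target person in the new pose
    return blend(context_img, reposed)       # output image with person inserted

# String stubs just to show the data flow through the three stages.
out = insert_person(
    "ctx", "tgt",
    seg_model=lambda c: c + "|mask",
    pose_model=lambda t, m: t + "@" + m,
    blend=lambda c, r: c + "+" + r,
)
```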
-
Patent number: 11727725
Abstract: A technique and system for counting the number of repetitions of approximately the same action in an input video sequence using 3D convolutional neural networks is disclosed. The proposed system runs online rather than on the complete video. It sequentially analyzes blocks of 20 non-consecutive frames. The cycle length within each block is evaluated using a deep network architecture, and the information is then integrated over time. A unique property of the disclosed method is that it is shown to train successfully on entirely synthetic data, created by synthesizing moving random patches. It therefore effectively exploits the high generalization capability of deep neural networks. Coupled with a region-of-interest detection mechanism and a suitable mechanism to identify the time scale of the video, the system is robust enough to handle real-world videos collected from YouTube and elsewhere, as well as non-video signals such as sensor data revealing repetitious physical movement.
Type: Grant
Filed: February 11, 2021
Date of Patent: August 15, 2023
Inventors: Lior Wolf, Ofir Levy
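The online counting scheme — blocks of 20 non-consecutive frames, a per-block cycle-length estimate, counts integrated over time — can be sketched as below. `estimate_cycle` is a hypothetical stand-in for the 3D CNN, and the subsampling `stride` plays the role of the time-scale mechanism:

```python
import numpy as np

def count_repetitions(signal, estimate_cycle, block_len=20, stride=2):
    """Online repetition counter sketch: subsample the input, walk it in
    blocks of `block_len` frames, estimate the cycle length inside each
    block, and integrate the per-block repetition counts over time."""
    frames = signal[::stride]                      # non-consecutive frames
    total = 0.0
    for start in range(0, len(frames) - block_len + 1, block_len):
        block = frames[start:start + block_len]
        cycle = estimate_cycle(block)              # stand-in for the 3D CNN
        if cycle:
            total += block_len / cycle             # repetitions in this block
    return int(round(total))

# A periodic test signal with a known cycle length of 4 samples.
signal = np.sin(2 * np.pi * np.arange(80) / 4.0)
count = count_repetitions(signal, estimate_cycle=lambda b: 4.0, stride=1)
```

On this 80-sample signal with cycle length 4 the sketch integrates to 20 repetitions.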
-
Patent number: 11727596
Abstract: A video generation system is described that extracts one or more characters or other objects from a video, re-animates the character, and generates a new video featuring the extracted characters. The system enables the extracted character(s) to be positioned and controlled within a new background scene different from the original background scene of the source video. In one example, the video generation system comprises a pose prediction neural network having a pose model trained with (i) a set of character pose training images extracted from an input video of the character and (ii) a simulated motion control signal generated from the input video. In operation, the pose prediction neural network generates, in response to a motion control input from a user, a sequence of images representing poses of a character. A frame generation neural network generates output video frames that render the character within a scene.
Type: Grant
Filed: May 17, 2021
Date of Patent: August 15, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Oran Gafni, Lior Wolf, Yaniv Nechemia Taigman
-
Patent number: 11461919
Abstract: A neural network system for detecting at least one object in at least one image. The system includes a plurality of object detectors, each receiving respective image information. Each object detector includes a respective neural network, and each neural network includes a plurality of layers. Layers in different object detectors are common layers when they receive the same input and produce the same output. Common layers are computed only once during object detection for all the different object detectors.
Type: Grant
Filed: April 9, 2020
Date of Patent: October 4, 2022
Assignee: Ramot at Tel Aviv University Ltd.
Inventors: Lior Wolf, Assaf Mushinsky
-
Patent number: 11430424
Abstract: Disclosed herein are a system, a method and a device for generating a voice model for a user. A device can include an encoder and a decoder to generate a voice model for converting text to an audio output that resembles the voice of the person sending the text. The encoder can include a neural network and can receive a plurality of audio samples from a user. The encoder can generate a sequence of values and provide the sequence of values to the decoder. The decoder can establish, using the sequence of values and one or more speaker embeddings of the user, a voice model corresponding to the plurality of audio samples of the user.
Type: Grant
Filed: November 13, 2019
Date of Patent: August 30, 2022
Assignee: Meta Platforms Technologies, LLC
Inventors: Lior Wolf, David Vazquez, Tali Zvi, Yaniv Nechemia Taigman, Adam Polyak, Hyunbin Park
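The encoder/decoder split in this abstract — a latent sequence from the encoder, conditioned on a speaker embedding in the decoder — can be sketched with toy linear layers. The dimensions, random weights, and `tanh` nonlinearity are all illustrative assumptions, not the patent's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dimensions chosen only for the sketch.
D_AUDIO, D_LATENT, D_SPK = 16, 8, 4

class VoiceModelSketch:
    """Toy encoder/decoder: the encoder maps audio frames to a sequence
    of latent values; the decoder conditions that sequence on a speaker
    embedding to produce speaker-specific output frames."""
    def __init__(self):
        self.enc = rng.standard_normal((D_AUDIO, D_LATENT))
        self.dec = rng.standard_normal((D_LATENT + D_SPK, D_AUDIO))

    def encode(self, frames):
        return np.tanh(frames @ self.enc)          # sequence of values

    def decode(self, latents, speaker_embedding):
        spk = np.tile(speaker_embedding, (len(latents), 1))
        return np.tanh(np.concatenate([latents, spk], axis=1) @ self.dec)

model = VoiceModelSketch()
frames = rng.standard_normal((5, D_AUDIO))      # 5 toy audio frames
latents = model.encode(frames)
output = model.decode(latents, rng.standard_normal(D_SPK))
```

Conditioning on the speaker embedding is what lets one decoder serve many voices: swapping the embedding changes the output identity without retraining.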
-
Patent number: 11373352
Abstract: In one embodiment, a method includes generating a keypoint pose and a dense pose for a first person in a first pose based on a first image comprising the first person in the first pose, generating an input semantic segmentation map corresponding to a second person in a second pose based on a second image comprising the second person in the second pose, generating a target semantic segmentation map corresponding to the second person in the first pose by processing the keypoint pose, the dense pose, and the input segmentation map using a first machine-learning model, generating an encoding vector representing the second person based on the second image, and generating a target image of the second person in the first pose by processing the encoding vector and the target segmentation map using a second machine-learning model.
Type: Grant
Filed: March 4, 2021
Date of Patent: June 28, 2022
Assignee: Meta Platforms, Inc.
Inventors: Oran Gafni, Oron Ashual, Lior Wolf
-
Publication number: 20220198617
Abstract: In one embodiment, a method includes generating, based on an image of a person, a first identity encoding representing a first facial identity of the person; generating, based on the first identity encoding, a second identity encoding representing a second facial identity different from the first facial identity; generating a source encoding by using an encoder to process a source image of the person having an expression; generating an intermediate image by using a decoder to process the source encoding and the second identity encoding, the intermediate image including a face having the second facial identity and the expression of the person in the source image; and generating an output image by blending the source image with facial features of the face in the intermediate image.
Type: Application
Filed: December 18, 2020
Publication date: June 23, 2022
Inventors: Oran Gafni, Lior Wolf
-
Publication number: 20210256993
Abstract: In one embodiment, a method includes receiving a mixed audio signal comprising a mixture of voice signals associated with a plurality of speakers, generating first audio signals by processing the mixed audio signal using a first machine-learning model configured with a first number of output channels, determining that at least one of the first number of output channels is silent based on the first audio signals, generating second audio signals by processing the mixed audio signal using a second machine-learning model configured with a second number of output channels that is fewer than the first number of output channels, determining that each of the second number of output channels is non-silent based on the second audio signals, and using the second machine-learning model to separate additional mixed audio signals associated with the plurality of speakers.
Type: Application
Filed: April 20, 2020
Publication date: August 19, 2021
Inventors: Eliya Nachmani, Lior Wolf, Yossef Mordechay Adi
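The model-selection logic here is a small, self-contained algorithm: try the separator with the most output channels, and if any channel comes back silent, fall back to a model with fewer channels until every channel is active. A sketch, with hypothetical stub models in place of the trained separation networks:

```python
import numpy as np

def select_separation_model(mixture, models, is_silent):
    """Try separation models in decreasing channel count until every
    output channel is non-silent, per the described procedure.
    `models` maps channel count -> separation callable (stubs here)."""
    for n_channels in sorted(models, reverse=True):
        outputs = models[n_channels](mixture)
        if not any(is_silent(ch) for ch in outputs):
            return n_channels, outputs      # model matches speaker count
    raise ValueError("no model produced all-active channels")

# Stub separators: the 3-channel model wastes a channel on silence.
a, b = np.ones(10), -np.ones(10)
models = {
    3: lambda m: [a, b, np.zeros(10)],
    2: lambda m: [a, b],
}
n, outs = select_separation_model(
    None, models, is_silent=lambda ch: float(np.max(np.abs(ch))) < 1e-6
)
```

The selected 2-channel model is then reused for further mixtures from the same speakers, as the abstract's final step describes.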
-
Publication number: 20210241067
Abstract: In one embodiment, a method includes inputting an encoded message with noise to a neural-network model comprising a variable layer and a check layer of nodes, each node being associated with at least one weight and a hyper-network node; updating the weights associated with the variable layer of nodes by processing the encoded message using the hyper-network nodes associated with the variable layer of nodes; generating a first set of outputs by processing the encoded message using the variable layer of nodes and their respective updated weights; updating the weights associated with the check layer of nodes by processing the first set of outputs using the hyper-network nodes associated with the check layer of nodes; and generating a decoded message without noise using the neural-network model by using at least the first set of outputs and the check layer of nodes and their respective updated weights.
Type: Application
Filed: February 5, 2020
Publication date: August 5, 2021
Inventors: Eliya Nachmani, Lior Wolf
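The distinctive idea — hyper-networks emitting input-dependent weight updates for the variable and check layers — can be reduced to a toy single iteration. The shapes, additive update rule, and `tanh` activations are illustrative assumptions only; the actual decoder follows the code's Tanner-graph structure:

```python
import numpy as np

rng = np.random.default_rng(2)

def hyper_decode_step(message, var_hyper, check_hyper, var_w, check_w):
    """One decoding iteration sketch: hyper-networks produce
    input-dependent updates to the variable-layer and check-layer
    weights before each layer processes its input."""
    var_w = var_w + var_hyper(message)        # hyper-net updates variable weights
    hidden = np.tanh(message @ var_w)         # variable-layer pass (first outputs)
    check_w = check_w + check_hyper(hidden)   # hyper-net updates check weights
    return np.tanh(hidden @ check_w)          # check-layer pass

message = rng.standard_normal((1, 5))         # noisy encoded message (toy size)
var_w = 0.1 * rng.standard_normal((5, 4))
check_w = 0.1 * rng.standard_normal((4, 3))
out = hyper_decode_step(
    message,
    var_hyper=lambda m: 0.01 * np.ones((5, 4)),    # stub hyper-networks
    check_hyper=lambda h: 0.01 * np.ones((4, 3)),
    var_w=var_w, check_w=check_w,
)
```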
-
Publication number: 20210166055
Abstract: A technique and system for counting the number of repetitions of approximately the same action in an input video sequence using 3D convolutional neural networks is disclosed. The proposed system runs online rather than on the complete video. It sequentially analyzes blocks of 20 non-consecutive frames. The cycle length within each block is evaluated using a deep network architecture, and the information is then integrated over time. A unique property of the disclosed method is that it is shown to train successfully on entirely synthetic data, created by synthesizing moving random patches. It therefore effectively exploits the high generalization capability of deep neural networks. Coupled with a region-of-interest detection mechanism and a suitable mechanism to identify the time scale of the video, the system is robust enough to handle real-world videos collected from YouTube and elsewhere, as well as non-video signals such as sensor data revealing repetitious physical movement.
Type: Application
Filed: February 11, 2021
Publication date: June 3, 2021
Inventors: Lior WOLF, Ofir Levy
-
Patent number: 11017560
Abstract: A video generation system is described that extracts one or more characters or other objects from a video, re-animates the character, and generates a new video featuring the extracted characters. The system enables the extracted character(s) to be positioned and controlled within a new background scene different from the original background scene of the source video. In one example, the video generation system comprises a pose prediction neural network having a pose model trained with (i) a set of character pose training images extracted from an input video of the character and (ii) a simulated motion control signal generated from the input video. In operation, the pose prediction neural network generates, in response to a motion control input from a user, a sequence of images representing poses of a character. A frame generation neural network generates output video frames that render the character within a scene.
Type: Grant
Filed: April 15, 2019
Date of Patent: May 25, 2021
Assignee: Facebook Technologies, LLC
Inventors: Oran Gafni, Lior Wolf, Yaniv Taigman
-
Publication number: 20210142782
Abstract: Disclosed herein are a system, a method and a device for generating a voice model for a user. A device can include an encoder and a decoder to generate a voice model for converting text to an audio output that resembles the voice of the person sending the text. The encoder can include a neural network and can receive a plurality of audio samples from a user. The encoder can generate a sequence of values and provide the sequence of values to the decoder. The decoder can establish, using the sequence of values and one or more speaker embeddings of the user, a voice model corresponding to the plurality of audio samples of the user.
Type: Application
Filed: November 13, 2019
Publication date: May 13, 2021
Inventors: Lior Wolf, David Vazquez, Tali Zvi, Yaniv Nechemia Taigman, Adam Polyak, Hyunbin Park
-
Patent number: 10922577
Abstract: A technique and system for counting the number of repetitions of approximately the same action in an input video sequence using 3D convolutional neural networks is disclosed. The proposed system runs online rather than on the complete video. It sequentially analyzes blocks of 20 non-consecutive frames. The cycle length within each block is evaluated using a deep network architecture, and the information is then integrated over time. A unique property of the disclosed method is that it is shown to train successfully on entirely synthetic data, created by synthesizing moving random patches. It therefore effectively exploits the high generalization capability of deep neural networks. Coupled with a region-of-interest detection mechanism and a suitable mechanism to identify the time scale of the video, the system is robust enough to handle real-world videos collected from YouTube and elsewhere, as well as non-video signals such as sensor data revealing repetitious physical movement.
Type: Grant
Filed: October 28, 2019
Date of Patent: February 16, 2021
Inventors: Lior Wolf, Ofir Levy
-
Publication number: 20210019635
Abstract: A method for generating a decision tree based response to a query that is related to one group, of at least one user, out of multiple such groups. The method may include obtaining the query and generating the decision tree based response, wherein the generating includes applying one or more decisions of a group-specific decision tree, wherein the group-specific decision tree is associated with the group and is generated by applying an embedding function and regression functions on group-related information, and wherein the embedding function and the regression functions are learnt using information related to other groups of the multiple groups.
Type: Application
Filed: June 29, 2020
Publication date: January 21, 2021
Inventors: Lior Wolf, Eyal Shulman
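The response-generation side of this abstract — route a query to its group's decision tree and walk the decisions to a leaf — can be sketched as follows. The dict-based tree encoding and the fallback behaviour are illustrative assumptions; how the trees are learnt (embedding plus regression functions) is not shown:

```python
def respond(query, group, group_trees, fallback):
    """Apply the decisions of the group-specific decision tree to a
    query. Internal nodes are dicts {"test", "yes", "no"}; any other
    value is a leaf holding the response (a hypothetical encoding)."""
    node = group_trees.get(group, fallback)
    while isinstance(node, dict):                       # internal decision node
        node = node["yes"] if node["test"](query) else node["no"]
    return node                                         # leaf response

# One-split toy tree for group "g1": threshold the (numeric) query.
tree = {"test": lambda q: q > 5, "yes": "high", "no": "low"}
hi = respond(7, "g1", {"g1": tree}, fallback="default")
lo = respond(3, "g1", {"g1": tree}, fallback="default")
other = respond(7, "g9", {"g1": tree}, fallback="default")
```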
-
Publication number: 20200279393
Abstract: A neural network system for detecting at least one object in at least one image. The system includes a plurality of object detectors, each receiving respective image information. Each object detector includes a respective neural network, and each neural network includes a plurality of layers. Layers in different object detectors are common layers when they receive the same input and produce the same output. Common layers are computed only once during object detection for all the different object detectors.
Type: Application
Filed: April 9, 2020
Publication date: September 3, 2020
Inventors: Lior Wolf, Assaf Mushinsky
-
Patent number: 10635934
Abstract: A method of recognizing image content comprises applying to the image a neural network which comprises an input layer for receiving the image, a plurality of hidden layers for processing the image, and an output layer for generating output pertaining to an estimated image content based on outputs of the hidden layers. The method further comprises applying to an output of at least one of the hidden layers a neural network branch, which is independent of the neural network and which has an output layer for generating output pertaining to an estimated error level of the estimate. A combined output indicative of the estimated image content and the estimated error level is generated.
Type: Grant
Filed: September 17, 2018
Date of Patent: April 28, 2020
Assignee: Ramot at Tel-Aviv University Ltd.
Inventors: Lior Wolf, Noam Mor
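The architecture in this abstract — a main network plus an independent branch tapped off a hidden layer to estimate the error level — can be shown with toy dense layers. All dimensions, random weights, and the sigmoid error head are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy weights: an 8-d input, one 6-unit hidden layer, 3 content classes.
W_hidden = rng.standard_normal((8, 6))
W_content = rng.standard_normal((6, 3))   # main output layer
W_error = rng.standard_normal((6, 1))     # independent error-level branch

def recognize_with_error_estimate(image_vec):
    """Combined output sketch: the main network predicts content, and a
    branch reading the same hidden-layer output predicts the estimated
    error level of that prediction (squashed to [0, 1] here)."""
    hidden = np.tanh(image_vec @ W_hidden)
    content = hidden @ W_content
    score = (hidden @ W_error)[0]
    error_level = 1.0 / (1.0 + np.exp(-score))     # sigmoid error head
    return int(content.argmax()), float(error_level)

label, err = recognize_with_error_estimate(rng.standard_normal(8))
```

Because the branch is trained separately from the main network, its error estimate does not perturb the content head's weights, which is the point of keeping it independent.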
-
Patent number: 10621477
Abstract: A convolutional neural network system for detecting at least one object in at least one image. The system includes a plurality of object detectors, each corresponding to a predetermined image window size in the at least one image. Each object detector is associated with a respective down-sampling ratio with respect to the at least one image. Each object detector includes a respective convolutional neural network and an object classifier coupled with the convolutional neural network. The respective convolutional neural network includes a plurality of convolution layers. The object classifier classifies objects in the image according to the results from the convolutional neural network. Object detectors associated with the same respective down-sampling ratio define at least one group of object detectors. Object detectors in a group of object detectors are associated with common convolution layers.
Type: Grant
Filed: March 1, 2018
Date of Patent: April 14, 2020
Assignee: Ramot at Tel Aviv University Ltd.
Inventors: Lior Wolf, Assaf Mushinsky
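The computational saving described here — detectors grouped by down-sampling ratio share convolution layers, so the shared layers run once per group — can be made concrete with stubs. `shared_stems` (one shared layer stack per ratio) and the string-based "features" are illustrative assumptions:

```python
def run_detector_groups(image, shared_stems, heads):
    """Sketch of grouped detection: the common convolution layers
    (`shared_stems`, keyed by down-sampling ratio) run once per group,
    and every detector head in the group reuses the cached features."""
    detections, stem_calls = [], 0
    for ratio, stem in shared_stems.items():
        features = stem(image)            # shared layers computed once
        stem_calls += 1
        for head in heads[ratio]:         # per-window-size detector heads
            detections.extend(head(features))
    return detections, stem_calls

# Two ratio groups, three detector heads, string stubs for the layers.
stems = {8: lambda img: img + "|s8", 16: lambda img: img + "|s16"}
heads = {
    8: [lambda f: [f + "|a"], lambda f: [f + "|b"]],
    16: [lambda f: [f + "|c"]],
}
dets, calls = run_detector_groups("img", stems, heads)
```

Three detector heads run, but the shared stems execute only twice, once per down-sampling ratio.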
-
Publication number: 20200065608Abstract: A technique and system for counting the number of repetitions of approximately the same action in an input video sequence using 3D convolutional neural networks is disclosed. The proposed system runs online and not on the complete video. It analyzes sequentially blocks of 20 non-consecutive frames. The cycle length within each block is evaluated using a deep network architecture and the information is then integrated over time. A unique property of the disclosed method is that it is shown to successfully train on entirely synthetic data, created by synthesizing moving random patches. It therefore effectively exploits the high generalization capability of deep neural networks. Coupled with a region of interest detection mechanism and a suitable mechanism to identify the time scale of the video, the system is robust enough to handle real world videos collected from youtube and elsewhere, as well as non-video signals such as sensor data revealing repetitious physical movement.Type: ApplicationFiled: October 28, 2019Publication date: February 27, 2020Inventors: Lior Wolf, Ofir Levy
-
Publication number: 20200003678
Abstract: A method of designing a nanostructure comprises: receiving a far field optical response and material properties; feeding the far field optical response and material properties to an artificial neural network having at least three hidden layers; and extracting from the artificial neural network a shape of a nanostructure corresponding to the far field optical response.
Type: Application
Filed: February 9, 2018
Publication date: January 2, 2020
Applicant: Ramot at Tel-Aviv University Ltd.
Inventors: Lior WOLF, Haim SUCHOWSKI, Michael MREJEN, Achiya NAGLER, Itzik MALKIEL, Uri ARIELI
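The inverse-design network the abstract describes — at least three hidden layers mapping a far-field optical response plus material properties to shape parameters — can be sketched as a small fully connected net. Layer widths, ReLU activations, and random (untrained) weights are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def make_inverse_design_net(d_in, d_out, hidden=(64, 64, 64)):
    """Fully connected sketch with three hidden layers (the minimum the
    abstract requires): input is a concatenated far-field response and
    material descriptor; output is a vector of shape parameters."""
    dims = (d_in, *hidden, d_out)
    weights = [0.1 * rng.standard_normal((a, b)) for a, b in zip(dims, dims[1:])]

    def forward(x):
        for w in weights[:-1]:
            x = np.maximum(0.0, x @ w)    # ReLU hidden layers
        return x @ weights[-1]            # linear shape-parameter output

    return forward

# Toy batch: 4 spectra of 12 values each, mapped to 3 shape parameters.
net = make_inverse_design_net(d_in=12, d_out=3)
shapes = net(rng.standard_normal((4, 12)))
```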