Abstract: Methods, apparatuses, and systems for using a compensating window to correct tolerance-placement effects on camera focus are provided. The system may receive a first captured image of a first test target from a surface of a target plane. The first captured image may be captured using a first lens of a camera. The system may determine a first modulation transfer function measurement for the first captured image. The system may determine that the first modulation transfer function measurement is within a threshold measurement. The system may send an alert indicative that the first lens is within the threshold measurement.
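Not part of the patent text: a minimal Python sketch of the threshold check the abstract above describes. The `compute_mtf` measurement and the 0.5 threshold are placeholders, not values from the patent.

```python
# Hypothetical sketch of the MTF threshold check described above.
# compute_mtf() is a placeholder; a real measurement would analyze a
# slanted-edge or similar test target in the captured image.

def compute_mtf(image) -> float:
    """Placeholder: return a modulation transfer function measurement in [0, 1]."""
    raise NotImplementedError

def check_lens_focus(image, threshold: float = 0.5) -> str:
    """Return an alert string indicating whether the lens meets the MTF threshold."""
    mtf = compute_mtf(image)
    if mtf >= threshold:
        return f"Lens within threshold (MTF={mtf:.3f} >= {threshold})"
    return f"Lens out of tolerance (MTF={mtf:.3f} < {threshold})"
```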
Abstract: Various aspects of the subject technology relate to systems, methods, and machine-readable media for outputting filtered visual media content items. Various aspects may include receiving an input frame of a visual media content item. Aspects may also include training a machine learning algorithm based on a dataset of bracketed images. Aspects may include configuring a neural network based on image filtering of the input frame and via a shader component of a graphics processing unit. Aspects may include determining portions of the input frame that are associated with an extent of darkness. Aspects may include performing an image enhancement operation to the portions of the input frame. Aspects may include providing instructions to display an output frame changed by the image enhancement operation.
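For illustration only: a pure-Python stand-in for the "find dark portions, then enhance them" step in the abstract above. The luminance threshold and gain below replace the shader and neural-network stages and are assumptions, not the patented method.

```python
# Illustrative only: detect dark portions of a frame and apply a simple
# brightness gain as the image enhancement operation.

def enhance_dark_regions(frame, darkness_threshold=40, gain=1.8):
    """frame: 2D list of 0-255 luminance values; returns the enhanced output frame."""
    output = []
    for row in frame:
        out_row = []
        for pixel in row:
            if pixel < darkness_threshold:           # portion associated with darkness
                pixel = min(255, int(pixel * gain))  # image enhancement operation
            out_row.append(pixel)
        output.append(out_row)
    return output

# Usage: enhance_dark_regions([[10, 200], [30, 80]])
```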
Abstract: Technology for customized crest factor reduction (CFR) noise shaping includes dividing a frequency band into a plurality of regions, assigning a constellation goal for each region, the respective constellation goal for at least two regions being different, determining a CFR noise level for each region based on the constellation goal for the region and a target CFR noise level for the divided frequency band, creating a cancellation pulse based on scaling factors, and based on the cancellation pulse, applying a cancellation pulse signal on a per-region basis to generate transmission signals having the determined CFR noise level for each region. In examples, a first region has a first constellation goal and a second region has a second constellation goal, and a determined CFR noise level for the first region supports the first constellation goal and a determined CFR noise level for the second region supports the second constellation goal.
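A hedged sketch of the per-region assignment step in the abstract above: each region's constellation goal maps to an allowed CFR noise level, capped by the overall target for the divided band. The dB margins per constellation are illustrative assumptions, not values from the patent.

```python
# Illustrative mapping from constellation goal to allowed CFR noise level (dB).
ALLOWED_NOISE_DB = {"QPSK": -15.0, "16QAM": -22.0, "64QAM": -28.0, "256QAM": -35.0}

def per_region_noise(regions, target_cfr_noise_db):
    """regions: list of (name, constellation_goal). Returns {name: noise_dB},
    never exceeding the overall target for the divided frequency band."""
    return {
        name: min(ALLOWED_NOISE_DB[goal], target_cfr_noise_db)
        for name, goal in regions
    }

# Example: two regions with different constellation goals get different levels.
levels = per_region_noise([("low_band", "QPSK"), ("high_band", "256QAM")], -20.0)
```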
Abstract: The disclosed computer-implemented method may include systems for incorporating a user's avatar into a real-time communication session. For example, the described systems establish a real-time communication session between two or more social networking system users. The described systems further generate a landmark map representing positioning of one of the real-time communication session participants, and transmit the landmark map with the participant's avatar to one or more recipients. On the recipient-side, the described systems render the transmitted avatar according to the landmark map. Various other methods, systems, and computer-readable media are also disclosed.
Type:
Grant
Filed:
November 8, 2022
Date of Patent:
November 19, 2024
Assignee:
Meta Platforms, Inc.
Inventors:
Zhuoping Zhou, Brijesh Navin Chandra Patel, Wing Mei Cheramie Cheung, Joel Alexander Bullard, Pablo Jose Barvo, Philipp Ojha, Shu Liang, Graham Hill, Mahalia Katherine Miller
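Not part of the listing: a minimal sketch, under stated assumptions, of the sender/recipient split in the real-time avatar abstract above. The landmark names, coordinates, and `render_avatar` output are hypothetical stand-ins for the actual landmark map and renderer.

```python
# Sender transmits a compact landmark map plus an avatar reference;
# the recipient poses the avatar from the received landmarks.

def build_landmark_map(participant_frame):
    """Placeholder: extract 2D facial landmark positions from a video frame."""
    return {"left_eye": (0.31, 0.42), "right_eye": (0.58, 0.41), "mouth": (0.45, 0.70)}

def render_avatar(avatar_id: str, landmark_map: dict) -> str:
    """Recipient side: pose the avatar according to the received landmark map."""
    pose = ", ".join(f"{name}@{xy}" for name, xy in landmark_map.items())
    return f"render {avatar_id} with pose [{pose}]"

# Sender: payload = {"avatar": "user_123_avatar", "landmarks": build_landmark_map(frame)}
# Recipient: render_avatar(payload["avatar"], payload["landmarks"])
```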
Abstract: Methods and systems for propagating light into a waveguide are provided. The system may include a light source configured to generate light. The system may include at least one mirror configured to direct the light into one or more rays of light. The system may include a Surface Relief Grating disposed on a Volume Bragg Grating. The Surface Relief Grating may receive the one or more rays of light and may diffract the one or more rays of light. The Volume Bragg Grating may be disposed on the waveguide in which the waveguide may be configured to receive the one or more rays of light from the Volume Bragg Grating and propagate the one or more rays of light throughout the waveguide such that an off-Bragg condition is exhibited by the one or more rays of light propagating through the waveguide.
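For orientation only (not from the patent text): a volume Bragg grating strongly diffracts light only when the Bragg-matching condition holds; rays whose in-guide angle or wavelength deviates from it are in the off-Bragg regime the abstract refers to and propagate through the waveguide without being re-diffracted.

```latex
% Standard Bragg-matching condition for a volume grating of period \Lambda,
% written for a medium of refractive index n and diffraction order m.
% Deviations in \theta_B or \lambda from this equality are "off-Bragg".
2\, n\, \Lambda \sin\theta_B = m\, \lambda
```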
Abstract: In one embodiment, a method includes receiving a first user request from a first user for generating a media montage from a client system during a dialog session with the first user, generating an initial media montage during the dialog session based on media collections associated with the first user, sending instructions for presenting the initial media montage to the client system during the dialog session, receiving a second user request from the first user from the client system during the dialog session for editing the initial media montage, generating an edited media montage from the initial media montage during the dialog session based on the second user request and a memory graph associated with the first user, and sending instructions for presenting the edited media montage to the client system during the dialog session.
Type:
Grant
Filed:
January 9, 2023
Date of Patent:
November 12, 2024
Assignee:
Meta Platforms, Inc.
Inventors:
Satwik Kottur, Seungwhan Moon, Aram Markosyan, Hardik Shah
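A hypothetical sketch of the dialog loop in the media-montage abstract above: generate an initial montage from the user's media collections, then apply a follow-up edit request within the same session. The selection heuristics, edit phrases, and memory-graph shape are illustrative assumptions.

```python
# Stand-in for "generate, then edit during the dialog session".

def generate_montage(media_collections, limit=5):
    """Pick the first few items as an initial montage (stand-in for real ranking)."""
    return [item for collection in media_collections for item in collection][:limit]

def edit_montage(montage, edit_request: str, memory_graph: dict):
    """Very rough stand-in for edit handling, e.g. 'remove last' or 'add favorite'."""
    if "remove last" in edit_request:
        return montage[:-1]
    if "add favorite" in edit_request:
        return montage + memory_graph.get("favorites", [])[:1]
    return montage

montage = generate_montage([["beach1.jpg", "beach2.jpg"], ["dog.mp4"]])
montage = edit_montage(montage, "remove last", memory_graph={"favorites": ["sunset.jpg"]})
```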
Abstract: The disclosed computer-implemented method may include systems and methods for embedding specific data into a call stack associated with an application session. For example, the systems and methods described herein can initialize a program thread that sequentially executes specialized application functions based on characters of a unique identifier to embed the unique identifier within a call stack of the application session. The systems and methods further provide the unique identifier in connection with other data sources associated with the application session such that further analysis of all data associated with the application session may be cross-referenced according to the unique identifier. Various other methods, systems, and computer-readable media are also disclosed.
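An illustrative sketch (not the patented implementation) of the core idea in the abstract above: invoke one marker function per identifier character, recursively, so the identifier is spelled out by the frame names in the call stack and can be recovered from a stack capture. The function-naming trick via `CodeType.replace` is one way to do this in CPython 3.8+ and is an assumption here.

```python
import traceback

def _make_marker(ch):
    def marker(rest, payload):
        return _embed(rest, payload)
    # Rename the code object so the character-derived name appears in stack traces.
    marker.__code__ = marker.__code__.replace(co_name=f"uid_char_{ch}")
    return marker

def _embed(identifier, payload):
    if not identifier:
        # Deepest frame: the stack now holds one uid_char_<c> frame per character.
        return payload(), traceback.format_stack()
    return _make_marker(identifier[0])(identifier[1:], payload)

# The captured stack contains uid_char_a, uid_char_1, uid_char_b, uid_char_2 frames.
result, stack_snapshot = _embed("a1b2", lambda: "session work here")
```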
Abstract: In one embodiment, a method includes accessing a decoded hypothesis corresponding to an utterance, computing a predicted probability of observing each token in the decoded hypothesis by having a local first machine-learning model process the decoded hypothesis, computing a confidence score for each token in the decoded hypothesis by having a second machine-learning model process the decoded hypothesis, where the confidence score indicates a degree of confidence for the token to be observed at its position, calculating a loss for the computed predicted probabilities of observing tokens in the decoded hypothesis based on the computed confidence scores, and updating parameters of the local first machine-learning model based on the calculated loss.
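A hedged numerical sketch of the training signal in the abstract above: the second model's per-token confidence scores act as targets for the local model's per-token predicted probabilities. The squared-error loss and the single scalar parameter updated by a numerical gradient are illustrative choices only, not the patent's loss or optimizer.

```python
def calibration_loss(predicted_probs, confidence_scores):
    """Mean squared error between per-token probabilities and confidence scores."""
    pairs = list(zip(predicted_probs, confidence_scores))
    return sum((p - c) ** 2 for p, c in pairs) / len(pairs)

def update_scale(scale, probs, confs, lr=0.1, eps=1e-4):
    """Toy parameter update: nudge a scalar scaling factor down the numerical gradient."""
    def loss_at(s):
        return calibration_loss([min(1.0, p * s) for p in probs], confs)
    grad = (loss_at(scale + eps) - loss_at(scale - eps)) / (2 * eps)
    return scale - lr * grad

probs, confs = [0.9, 0.4, 0.7], [0.8, 0.6, 0.7]   # per-token values for one hypothesis
scale = update_scale(1.0, probs, confs)
```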
Abstract: An image processing system enables a user wearing a head-mounted display to experience a virtual environment combined with a representation of a real-world object. The image processing system receives a captured scene of a real-world environment that includes a target object. The image processing system identifies the target object in the captured scene and generates a representation of the target object. In some cases, the image processing system may include a graphical overlay with the representation of the target object. The image processing system can generate a combined scene that includes the target object and the virtual environment. The combined scene is presented to the user, thereby allowing the user to interact with the real-world target object (or a representation thereof) in combination with the virtual environment.
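Not the patented pipeline: a toy compositing step showing how a segmented target object from the captured scene could be placed over a virtual-environment frame, with an optional graphical overlay. Frame representation, mask convention, and the overlay placement are assumptions.

```python
# Frames are 2D lists; None marks pixels outside the target-object mask.

def composite(virtual_frame, object_layer, overlay_label=None):
    combined = [row[:] for row in virtual_frame]
    for y, row in enumerate(object_layer):
        for x, pixel in enumerate(row):
            if pixel is not None:            # keep real-world object pixels
                combined[y][x] = pixel
    if overlay_label:
        combined[0][0] = overlay_label       # crude stand-in for a graphical overlay
    return combined

virtual = [["v"] * 4 for _ in range(3)]
keyboard = [[None, "k", "k", None], [None, "k", "k", None], [None] * 4]
scene = composite(virtual, keyboard, overlay_label="KB")
```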
Abstract: In one embodiment, a method includes, by a client system associated with a user, receiving, at the client system, a first user input from the user, parsing, by the client system, the first user input to identify a request to execute a function to be performed by an assistant system of several assistant systems associated with the client system, determining whether the user is authorized to access the assistant system by comparing a voiceprint of the user to several voiceprints stored on the client system, sending, from the client system to the assistant system in response to determining the user is authorized to access the assistant system, a request to set an assistant xbot of the assistant system into a listening mode, and receiving, at the client system from the assistant system, an indication that the assistant xbot is in listening mode.
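A hypothetical sketch of the client-side authorization gate in the abstract above: compare the speaker's voiceprint against enrolled voiceprints before asking the selected assistant system to enter listening mode. The cosine-similarity comparison and the 0.8 threshold are assumptions for illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def is_authorized(user_voiceprint, stored_voiceprints, threshold=0.8):
    return any(cosine(user_voiceprint, vp) >= threshold for vp in stored_voiceprints)

def handle_request(user_voiceprint, stored_voiceprints, send_to_assistant):
    if is_authorized(user_voiceprint, stored_voiceprints):
        # Ask the assistant system to set its xbot into a listening mode.
        return send_to_assistant({"request": "set_listening_mode"})
    return {"error": "user not authorized for this assistant system"}
```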
Abstract: In one embodiment, a method includes receiving a first user input from a first user, wherein the first user input comprises a partial request, presenting one or more suggested intent auto-completions corresponding to the partial request, receiving a selection by the first user of a first suggested intent auto-completion of the suggested intent auto-completions and a second user input, presenting one or more suggested slot auto-completions corresponding to one or more candidate slot-hypotheses corresponding to the second user input, respectively, wherein each of the candidate slot-hypotheses comprises a slot-suggestion, and wherein each suggested slot auto-completion comprises the second user input and the corresponding candidate slot-hypothesis, receiving a selection by the first user of a first suggested slot auto-completion of the suggested slot auto-completions, and presenting execution results of one or more tasks corresponding to the first suggested intent auto-completion and the first suggested slot auto-completion.
Type:
Grant
Filed:
October 22, 2020
Date of Patent:
October 29, 2024
Assignee:
Meta Platforms, Inc.
Inventors:
Jiedan Zhu, Fuchun Peng, Benoit F. Dumoulin, Xiaohu Liu, Rajen Subba, Mohsen Agsen, Michael Robert Hanson
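Illustrative only: a prefix-matching stand-in for the two-stage suggestion flow in the abstract above, suggesting intent completions for a partial request and then slot completions for the follow-up input. The intent list, slot hypotheses, and matching rule are assumptions, not the patented models.

```python
INTENTS = ["play music", "play movie", "set reminder"]
SLOT_HYPOTHESES = {"play music": ["by Artist X", "from my workout playlist"]}

def suggest_intents(partial_request: str):
    """Suggested intent auto-completions for a partial request."""
    return [intent for intent in INTENTS if intent.startswith(partial_request)]

def suggest_slots(selected_intent: str, second_input: str):
    """Each suggestion combines the second user input with a candidate slot-hypothesis."""
    return [f"{second_input} {slot}" for slot in SLOT_HYPOTHESES.get(selected_intent, [])]

# "pla" -> ["play music", "play movie"]; then "play music" + "something upbeat"
# -> ["something upbeat by Artist X", "something upbeat from my workout playlist"]
```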
Abstract: In one embodiment, a method includes receiving a first user input comprising a wake word associated with an assistant xbot from a first client system, setting the assistant xbot into a listening mode, wherein a continuous non-visual feedback is provided via the first client system while the assistant xbot is in the listening mode, receiving a second user input comprising a user utterance from the first client system while the assistant xbot is in the listening mode, determining the second user input has ended based on a completion of the user utterance, and setting the assistant xbot into an inactive mode, wherein the non-visual feedback is discontinued via the first client system while the assistant xbot is in the inactive mode.
Type:
Grant
Filed:
November 8, 2021
Date of Patent:
October 29, 2024
Assignee:
Meta Platforms, Inc.
Inventors:
Leif Haven Martinson, David Levison, Heath William Black, Ryan Frederick Stewart, Tara Ramanan, Samuel Steele Noertker
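A minimal state sketch (not the patented implementation) of the wake-word flow described in the abstract above: entering listening mode starts a continuous non-visual cue, and completion of the utterance returns the xbot to inactive mode and stops the cue. The `ToneCue` controller is a hypothetical stand-in.

```python
class AssistantXbot:
    def __init__(self, feedback):
        self.mode = "inactive"
        self.feedback = feedback          # e.g., an audio or haptic cue controller

    def on_wake_word(self):
        self.mode = "listening"
        self.feedback.start()             # continuous non-visual feedback begins

    def on_utterance_complete(self, utterance: str):
        self.mode = "inactive"
        self.feedback.stop()              # feedback discontinued in inactive mode
        return utterance

class ToneCue:
    def start(self): print("tone on")
    def stop(self): print("tone off")

xbot = AssistantXbot(ToneCue())
xbot.on_wake_word()
xbot.on_utterance_complete("what's the weather tomorrow")
```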
Abstract: In one embodiment, a method includes accessing visual signals comprising images portraying textual content in a real-world environment associated with a first user from a client system associated with the first user, recognizing the textual content based on machine-learning models and the visual signals, determining a context associated with the first user with respect to the real-world environment based on the visual signals, executing tasks determined based on the textual content and the determined context for the first user, and sending, to the client system, instructions for presenting execution results of the tasks to the first user.
Type:
Grant
Filed:
August 4, 2021
Date of Patent:
October 22, 2024
Assignee:
Meta Platforms, Inc.
Inventors:
Elizabeth Kelsey Santoro, Denis Savenkov, Koon Hui Geoffrey Goh, Kshitiz Malik, Ruchir Srivastava
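A hedged sketch of the text-plus-context routing in the abstract above. The recognition and context-detection steps are placeholders, and the mapping from (text, context) to a task is an illustrative assumption rather than the patented logic.

```python
def recognize_text(image) -> str:
    """Placeholder for the ML text-recognition step (e.g., a street sign or a menu)."""
    raise NotImplementedError

def determine_context(image) -> str:
    """Placeholder returning a coarse scene context such as 'restaurant' or 'street'."""
    raise NotImplementedError

def choose_task(text: str, context: str) -> dict:
    """Pick a task from the recognized text and the determined context."""
    if context == "restaurant":
        return {"task": "translate_menu", "input": text}
    if context == "street":
        return {"task": "navigate_to", "input": text}
    return {"task": "search", "input": text}
```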
Abstract: Systems, apparatuses and methods provide technology that compresses first data based on a first compression scheme to generate second data, where the first data is associated with a first machine learning model. The technology stores the second data into a memory, adjusts a first entry of a lookup table to correspond to the first compression scheme based on the first data being compressed based on the first compression scheme, provides the second data from the memory to processing elements of a processing array during execution of the first machine learning model, and decompresses, at the processing array, the second data based on the lookup table to obtain the first data.
Type:
Application
Filed:
April 17, 2023
Publication date:
October 17, 2024
Applicant:
Meta Platforms, Inc.
Inventors:
Kaushal Gandhi, Olivia Wu, Soheil Gharahi, Thomas Mark Ulrich, Abdulkadir Utku Diril, Khasim S. Dudekula, Eda Sahin
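Not the patented hardware path: a software analogue of the flow in the abstract above, compressing model data, recording the scheme used in a lookup-table entry, and decompressing on the consumer side by consulting that entry. The zlib/raw scheme table is an assumption for illustration.

```python
import zlib

SCHEMES = {
    "zlib": (lambda b: zlib.compress(b), lambda b: zlib.decompress(b)),
    "raw":  (lambda b: b, lambda b: b),
}

def store(first_data: bytes, scheme: str, memory: dict, lookup_table: dict, entry: int):
    compress, _ = SCHEMES[scheme]
    memory[entry] = compress(first_data)      # second (compressed) data into memory
    lookup_table[entry] = scheme              # LUT entry records the scheme used

def load(memory: dict, lookup_table: dict, entry: int) -> bytes:
    _, decompress = SCHEMES[lookup_table[entry]]
    return decompress(memory[entry])          # consumer side recovers the first data

memory, lut = {}, {}
store(b"\x00" * 1024, "zlib", memory, lut, entry=0)
assert load(memory, lut, 0) == b"\x00" * 1024
```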
Abstract: One or more embodiments of the disclosure include systems and methods that generate and utilize digital visual codes. In particular, in one or more embodiments, the disclosed systems and methods generate digital visual codes comprising a plurality of digital visual code points arranged in concentric circles, a plurality of anchor points, and an orientation anchor surrounding a digital media item. In addition, the disclosed systems and methods embed information in the digital visual code points regarding an account of a first user of a networking system. In one or more embodiments, the disclosed systems and methods display the digital visual codes via a computing device of the first user, scan the digital visual codes via a second computing device, and provide privileges to the second computing device in relation to the account of the first user in the networking system based on the scanned digital visual code.
Type:
Grant
Filed:
August 24, 2023
Date of Patent:
October 15, 2024
Assignee:
Meta Platforms, Inc.
Inventors:
Christopher Anthony Leach, Eugenio Padilla Garza, Anthony Tran, Russell William Andrews
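A geometry-only sketch, assumptions throughout, of laying out code points on concentric circles around a central media item as the digital-visual-code abstract above describes. The bit encoding ("point present for a 1 bit"), ring counts, and spacing are illustrative; anchor points and the orientation anchor are not modeled.

```python
import math

def code_point_positions(bits, rings=3, points_per_ring=24, inner_radius=1.0, ring_gap=0.5):
    """Return (x, y) for each 1-bit, filling concentric rings outward from the center."""
    positions = []
    for index, bit in enumerate(bits[: rings * points_per_ring]):
        ring, slot = divmod(index, points_per_ring)
        if bit:
            radius = inner_radius + ring * ring_gap
            angle = 2 * math.pi * slot / points_per_ring
            positions.append((radius * math.cos(angle), radius * math.sin(angle)))
    return positions

account_bits = [int(b) for b in format(0xBEEF, "032b")]   # stand-in for account data
points = code_point_positions(account_bits)
```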
Inventors:
Frederick Scott Gottesman, Patrick Francis Keenan, Aaron Albonetti, Wade Campbell, William Siemers, Annabel Strauss, Ishwarya Venkatachalam, Kevin Victor Wong