Patents by Inventor Kiran Bhat
Kiran Bhat has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250086873
Abstract: Various implementations relate to methods, systems, and computer-readable media to provide cross-device communication with adaptive avatar interaction. According to one aspect, a computer-implemented method includes receiving communication inputs from a first device, determining facial landmarks and head orientation of the first user, and generating an animated 3D avatar based on the inputs. A virtual camera position is adjusted according to the head orientation. The computer-implemented method further receives additional communication inputs from a second device with enhanced features, modifies the avatar accordingly, and provides the enhanced animation to the second device. Various implementations allow for different viewing modes, including picture-in-picture (PIP), side-by-side, and cinematic views, to adapt the experience across multiple devices such as virtual reality (VR) headsets, augmented reality (AR) devices, mobile phones, and desktop computers.
Type: Application
Filed: September 9, 2024
Publication date: March 13, 2025
Applicant: Roblox Corporation
Inventors: David B. BASZUCKI, Garima SINHA, Claus Christopher MOBERG, Raj BHATIA, Kiran BHAT
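The abstract's virtual-camera adjustment from head orientation can be sketched roughly as follows. This is an illustrative toy, not the patented method: the orbit geometry, the fixed radius, and the function name are all invented assumptions.

```python
import math

# Hypothetical sketch: place a virtual camera on a circle around the
# avatar's head, opposite the head yaw, so the avatar keeps facing the
# viewer. Coordinate conventions and radius are assumptions.
def camera_from_head(yaw_deg, head_pos, radius=2.0):
    """Return a camera position orbiting head_pos at the given yaw."""
    yaw = math.radians(yaw_deg)
    cx = head_pos[0] + radius * math.sin(yaw)
    cz = head_pos[2] + radius * math.cos(yaw)
    return (cx, head_pos[1], cz)

print(camera_from_head(0.0, (0.0, 1.7, 0.0)))  # directly in front: (0.0, 1.7, 2.0)
```

A real implementation would also smooth the camera motion over time and account for pitch; this only shows the yaw-to-position mapping.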
-
Publication number: 20250078377
Abstract: Various implementations relate to methods, systems, and computer-readable media to provide body tracking from monocular video. According to one aspect, a computer-implemented method includes obtaining a video including a set of video frames depicting movement of a human subject; extracting 2D images of the human subject from the video frames; and providing the 2D images as input to a pre-trained neural network model. The method further includes determining a pose of the subject based on the 2D images. The method further includes generating a 3D pose estimation of upper body joint positions of the human subject. The method further includes determining confidence scores, and selecting a set of keypoints of the upper body joints of the human subject based on the confidence scores. The method further includes animating a 3D avatar using at least the selected set of keypoints, and displaying the animated 3D avatar in a user interface.
Type: Application
Filed: September 6, 2024
Publication date: March 6, 2025
Applicant: Roblox Corporation
Inventors: Mubbasir Turab KAPADIA, Iñaki NAVARRO OIZA, Young-Yoon LEE, Joseph LIU, Haomiao JIANG, Che-jui CHANG, Seonghyeon MOON, Kiran BHAT
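The confidence-based keypoint selection step can be illustrated with a minimal sketch. The joint names, data layout, and threshold are assumptions for illustration; the patent does not specify them.

```python
# Hypothetical sketch of confidence-based keypoint selection, assuming each
# upper-body joint prediction is a (x, y, z, confidence) tuple keyed by name.
UPPER_BODY_JOINTS = ["head", "neck", "l_shoulder", "r_shoulder",
                     "l_elbow", "r_elbow", "l_wrist", "r_wrist"]

def select_keypoints(predictions, threshold=0.5):
    """Keep only joints whose model confidence meets the threshold.

    predictions: dict of joint name -> (x, y, z, confidence)
    Returns a dict of joint name -> (x, y, z) for confident joints.
    """
    selected = {}
    for name in UPPER_BODY_JOINTS:
        if name in predictions:
            x, y, z, conf = predictions[name]
            if conf >= threshold:
                selected[name] = (x, y, z)
    return selected

preds = {
    "head": (0.0, 1.7, 0.0, 0.98),
    "l_wrist": (-0.4, 1.0, 0.2, 0.31),  # occluded -> low confidence
    "r_wrist": (0.5, 1.1, 0.1, 0.87),
}
print(sorted(select_keypoints(preds)))  # ['head', 'r_wrist']
```

Dropping low-confidence joints lets the avatar animation fall back to a default pose for occluded limbs instead of tracking noisy estimates.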
-
Publication number: 20250069596
Abstract: A metaverse application receives a user-provided audio stream associated with a user. The metaverse application obtains portions of one or more audio streams. The metaverse application divides the user-provided audio stream into a plurality of portions, wherein each portion corresponds to a particular time window of the audio stream. The metaverse application provides the plurality of portions of the user-provided audio stream as input to an audio machine-learning model. The audio machine-learning model outputs, based on the portions of the user-provided audio stream, a determination of abuse in a particular portion of the plurality of portions. The metaverse application performs a remedial action responsive to the determination of abuse in the particular portion.
Type: Application
Filed: August 21, 2024
Publication date: February 27, 2025
Applicant: Roblox Corporation
Inventors: Mahesh Kumar NANDWANA, Joseph LIU, Morgan Samuel MCGUIRE, Kiran BHAT
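The windowing step (dividing the stream into portions, each covering a time window) can be sketched simply. The sample rate and window length are invented for the example; the patent leaves them unspecified.

```python
# Illustrative sketch (not Roblox's implementation): split a raw audio
# stream into consecutive fixed-length time windows before handing each
# window to a classifier.
def split_into_windows(samples, sample_rate, window_seconds):
    """Divide raw audio samples into consecutive time windows."""
    window_size = int(sample_rate * window_seconds)
    return [samples[i:i + window_size]
            for i in range(0, len(samples), window_size)]

# 3 seconds of dummy audio at 16 kHz, cut into 1-second windows
audio = [0.0] * (16000 * 3)
windows = split_into_windows(audio, sample_rate=16000, window_seconds=1.0)
print(len(windows))  # 3
```

Each window would then be fed to the audio machine-learning model, and a remedial action triggered for any window flagged as abusive.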
-
Publication number: 20250061670
Abstract: Implementations described herein relate to methods, systems, and computer-readable media to display a virtual character within a virtual environment on a display device. In some implementations, a method can include obtaining an input pose of the virtual character, wherein the virtual character is based on a rig that comprises a plurality of joints, receiving an indication of one or more of a position and an orientation of a target end effector located on the rig, determining an output pose for the rig wherein the determining comprises calculating a respective orientation and position of one or more joints of the plurality of joints of the rig based on the position of the target end effector and rotation constraints of a plurality of joints of a reference rig, and displaying the virtual character in the output pose on the display device.
Type: Application
Filed: September 1, 2023
Publication date: February 20, 2025
Applicant: Roblox Corporation
Inventors: Vincent PETRELLA, Simone GUGGIARI, David BROWN, Emiliano GAMBARETTO, Kiran BHAT
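The end-effector-driven pose solve described above is a form of inverse kinematics. As a rough, hedged illustration, here is a textbook 2D cyclic coordinate descent (CCD) solver; this is a stand-in technique, not the patented solver, and it omits the per-joint rotation constraints from a reference rig that the claim requires.

```python
import math

# Toy 2D CCD inverse-kinematics sketch: rotate each joint in turn so the
# chain tip moves toward the target end-effector position.
def fk(angles, lengths):
    """Forward kinematics: relative joint angles -> 2D joint positions."""
    x = y = theta = 0.0
    points = [(x, y)]
    for a, l in zip(angles, lengths):
        theta += a
        x += l * math.cos(theta)
        y += l * math.sin(theta)
        points.append((x, y))
    return points

def ccd_step(angles, lengths, target):
    """One CCD pass from the tip joint back to the root (mutates angles)."""
    for i in reversed(range(len(angles))):
        pts = fk(angles, lengths)
        jx, jy = pts[i]          # position of joint i
        ex, ey = pts[-1]         # current end-effector position
        tx, ty = target
        # Swing joint i so the effector direction lines up with the target.
        cur = math.atan2(ey - jy, ex - jx)
        want = math.atan2(ty - jy, tx - jx)
        angles[i] += want - cur
    return angles

angles, lengths, target = [0.3, 0.3], [1.0, 1.0], (1.2, 0.9)
for _ in range(20):
    ccd_step(angles, lengths, target)
tip = fk(angles, lengths)[-1]   # tip ends up close to the target
```

A production rig solver would additionally clamp each `angles[i]` update against the reference rig's rotation limits before applying it.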
-
Publication number: 20240428492
Abstract: Implementations described herein relate to methods, systems, and computer-readable media to generate animations for a 3D avatar from input video and audio captured at a client device. A camera may capture video of a face while a trained face detection model and a trained regression model output a set of video FACS weights, head poses, and facial landmarks to be translated into the animations of the 3D avatar. Additionally, a microphone may capture audio uttered by a user while a trained facial movement detection model and a trained regression model output a set of audio FACS weights. Additionally, a blending term is provided for identification of lapses in audio. A modularity mixing component fuses the video FACS weights and the audio FACS weights based on the blending term to create final FACS weights for animating the user's avatar, a character rig, or another animation-capable construct.
Type: Application
Filed: January 25, 2024
Publication date: December 26, 2024
Applicant: Roblox Corporation
Inventors: Iñaki NAVARRO, Dario KNEUBUEHLER, Tijmen VERHULSDONCK, Eloi DU BOIS, William WELCH, Charles SHANG, Ian SACHS, Kiran BHAT
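The fusion of video and audio FACS weights under a blending term can be sketched as a simple per-weight linear blend. The action-unit names and the specific blend rule are illustrative assumptions; the patent's mixing component is not specified here.

```python
# Hypothetical sketch of modality mixing: blend per-frame video and audio
# FACS weight dicts, letting video dominate during audio lapses.
def fuse_facs(video_weights, audio_weights, audio_activity):
    """Linearly blend two FACS weight dicts.

    audio_activity: 0.0 (audio lapse / silence) .. 1.0 (confident speech).
    Missing audio entries fall back to the video value.
    """
    fused = {}
    for au in video_weights:
        v = video_weights[au]
        a = audio_weights.get(au, v)
        fused[au] = (1.0 - audio_activity) * v + audio_activity * a
    return fused

video = {"jaw_open": 0.2, "lip_pucker": 0.1}
audio = {"jaw_open": 0.8, "lip_pucker": 0.0}
print(fuse_facs(video, audio, audio_activity=0.5))  # averages the two sources
```

With `audio_activity=0.0` (a detected lapse) the output reduces to the video weights alone, which matches the abstract's intent of using the blending term to handle audio gaps.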
-
Publication number: 20240378836
Abstract: Some implementations relate to methods, systems, and computer-readable media to create a variant of a template avatar. In some implementations, the method includes obtaining a template avatar that includes a template geometry obtained from a mesh of the template avatar, generating a template cage associated with the template avatar as a low-resolution approximation wrapped around the template geometry, creating a target cage from the template cage by modifying the template cage based on input from a user, and morphing the template geometry with the target cage to generate a target avatar that is a variant of the template avatar. The method may also include adjusting a rigging and a skinning of the target avatar to enable animation for the target avatar. Using these techniques makes it more efficient and less labor-intensive to create a variant of a template avatar.
Type: Application
Filed: May 10, 2024
Publication date: November 14, 2024
Applicant: Roblox Corporation
Inventors: Maurice Kyojin CHU, Ronald Matthew GRISWOLD, Jihyun YOON, Michael Vincent PALLESCHI, Adam Tucker BURR, Adrian Paul LONGLAND, Ian SACHS, Kiran BHAT, Andrew Alan JOHNSON
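The cage-morphing idea (deform a low-resolution cage, and the wrapped geometry follows) can be illustrated with a heavily simplified 2D sketch. Real cage deformers use coordinates such as mean-value or Green coordinates; the fixed binding weights here are an invented simplification.

```python
# Simplified cage-morphing sketch: each mesh vertex is bound to cage
# vertices by fixed weights, and moving the cage moves the mesh by the
# weighted sum of cage-vertex displacements (2D for brevity).
def morph(mesh_verts, bind_weights, template_cage, target_cage):
    """bind_weights[i][j] = influence of cage vertex j on mesh vertex i."""
    out = []
    for v, ws in zip(mesh_verts, bind_weights):
        dx = sum(w * (t[0] - c[0])
                 for w, c, t in zip(ws, template_cage, target_cage))
        dy = sum(w * (t[1] - c[1])
                 for w, c, t in zip(ws, template_cage, target_cage))
        out.append((v[0] + dx, v[1] + dy))
    return out

mesh = [(0.5, 0.5)]                         # one vertex, centered in the cage
weights = [[0.25, 0.25, 0.25, 0.25]]        # equal influence from each corner
template = [(0, 0), (1, 0), (1, 1), (0, 1)]
target = [(0, 0), (2, 0), (2, 1), (0, 1)]   # cage stretched along x
print(morph(mesh, weights, template, target))  # vertex follows: [(1.0, 0.5)]
```

Because edits happen on the coarse cage rather than the dense mesh, a user can reshape the avatar with a handful of control points, which is the labor saving the abstract claims.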
-
Publication number: 20240355028
Abstract: Implementations described herein relate to methods, systems, and computer-readable media to generate animations for a 3D avatar from input video captured at a client device. A camera may capture video of a face while a trained face detection model and a trained regression model output a set of FACS weights, head poses, and facial landmarks to be translated into the animations of the 3D avatar. Additionally, a higher level-of-detail may be intelligently selected based upon user preferences and/or computing conditions at the client device.
Type: Application
Filed: April 30, 2024
Publication date: October 24, 2024
Applicant: Roblox Corporation
Inventors: Inaki NAVARRO, Dario KNEUBUHLER, Tijmen VERHULSDONCK, Eloi DU BOIS, Will WELCH, Vivek VERMA, Ian SACHS, Kiran BHAT
-
Patent number: 12002139
Abstract: Implementations described herein relate to methods, systems, and computer-readable media to generate animations for a 3D avatar from input video captured at a client device. A camera may capture video of a face while a trained face detection model and a trained regression model output a set of FACS weights, head poses, and facial landmarks to be translated into the animations of the 3D avatar. Additionally, a higher level-of-detail may be intelligently selected based upon user preferences and/or computing conditions at the client device.
Type: Grant
Filed: February 22, 2022
Date of Patent: June 4, 2024
Assignee: Roblox Corporation
Inventors: Inaki Navarro, Dario Kneubuhler, Tijmen Verhulsdonck, Eloi Du Bois, Will Welch, Vivek Verma, Ian Sachs, Kiran Bhat
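The level-of-detail selection based on user preferences and computing conditions can be sketched as a simple rule; the inputs, tier names, and thresholds below are invented assumptions, since the patent does not disclose the selection logic here.

```python
# Illustrative level-of-detail selector for avatar animation, assuming a
# rule over user preference, frame rate, and battery level (all thresholds
# are made up for the example).
def select_lod(user_pref, fps, battery_pct):
    """Pick an animation level-of-detail tier: 'low', 'medium', or 'high'."""
    if user_pref == "low" or fps < 20 or battery_pct < 15:
        return "low"
    if user_pref == "high" and fps >= 45 and battery_pct >= 50:
        return "high"
    return "medium"

print(select_lod("high", fps=60, battery_pct=80))  # high
print(select_lod("high", fps=30, battery_pct=80))  # medium
```

The point of the sketch is only the shape of the decision: the device's runtime conditions can veto the user's preferred detail level.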
-
Publication number: 20240112691
Abstract: A computer-implemented method includes receiving a first audio stream of a performance associated with a first client device. The method further includes, during a time window of the performance, wherein the time window is less than a total time of the performance: generating a synthesized first audio stream that predicts a future of the performance based on audio features of the first audio stream and mixing the synthesized first audio stream and a second audio stream associated with a second client device to form a combined audio stream that synchronizes the synthesized first audio stream and the second audio stream, where the time window is advanced and the generating and the mixing are repeated until the performance is complete.
Type: Application
Filed: October 4, 2022
Publication date: April 4, 2024
Applicant: Roblox Corporation
Inventors: Mahesh Kumar NANDWANA, Kiran BHAT, Morgan McGuire
-
Publication number: 20240112689
Abstract: A computer-implemented method includes receiving, at a server, a first audio stream of a performance associated with a first client device. The method further includes receiving, at the server, a second audio stream of the performance associated with a second client device. The method further includes, during a time window of the performance, where the time window is less than a total time of the performance: generating a synthesized first audio stream that predicts a future of the performance based on audio features of the first audio stream and mixing the synthesized first audio stream and the second audio stream to form a combined audio stream that synchronizes the synthesized first audio stream and the second audio stream, where the time window is advanced and the generating and the mixing are repeated until the performance is complete. The method further includes transmitting the combined audio stream to the second client device.
Type: Application
Filed: October 4, 2022
Publication date: April 4, 2024
Applicant: Roblox Corporation
Inventors: Mahesh Kumar NANDWANA, Kiran BHAT, Morgan MCGUIRE
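The windowed predict-and-mix loop shared by these two applications can be sketched as follows. The "predictor" here is a trivial repeat-the-last-window stand-in, an assumption for illustration only; the actual synthesis model is not described in the abstract.

```python
# Toy sketch of the advancing-window predict-and-mix loop. The real system
# synthesizes a prediction of the performance's future; here a trivial
# "repeat the latest window" predictor stands in.
def predict_next_window(history, window_size):
    """Stand-in predictor: extrapolate by repeating the latest window."""
    return history[-window_size:] if len(history) >= window_size else history[:]

def mix(a, b):
    """Average two equal-length sample windows into one combined window."""
    return [(x + y) / 2.0 for x, y in zip(a, b)]

def combine_streams(stream1, stream2, window_size):
    """Advance a window over the performance, predicting stream1's future
    and mixing it with stream2, until the performance is complete."""
    combined = []
    for start in range(0, len(stream2), window_size):
        history = stream1[:start + window_size]
        predicted = predict_next_window(history, window_size)
        combined.extend(mix(predicted, stream2[start:start + window_size]))
    return combined

s1 = [1.0] * 8
s2 = [0.0] * 8
print(combine_streams(s1, s2, window_size=4))  # eight 0.5 samples
```

Predicting the first stream's near future is what lets the server align it against the second stream despite network latency between the two clients.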
-
Patent number: 11551393
Abstract: Systems and methods for animating from audio in accordance with embodiments of the invention are illustrated. One embodiment includes a method for generating animation from audio. The method includes steps for receiving input audio data, generating an embedding for the input audio data, and generating several predictions for several tasks from the generated embedding. The several predictions include at least one of blendshape weights, event detection, and/or voice activity detection. The method includes steps for generating a final prediction from the several predictions, where the final prediction includes a set of blendshape weights, and generating an output based on the generated final prediction.
Type: Grant
Filed: July 23, 2020
Date of Patent: January 10, 2023
Assignee: LoomAi, Inc.
Inventors: Chong Shang, Eloi Henri Homere Du Bois, Inaki Navarro, Will Welch, Rishabh Battulwar, Ian Sachs, Vivek Verma, Kiran Bhat
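One way the multi-task predictions might be combined into a final set of blendshape weights is sketched below. The gating rule (voice-activity damping mouth shapes) is an invented assumption; the patent's actual fusion is not described in the abstract.

```python
# Hypothetical sketch: combine per-task predictions (blendshape weights
# plus voice-activity detection) into final blendshape weights, damping
# mouth shapes when no speech is detected.
def final_blendshapes(task_predictions):
    """task_predictions: dict with 'blendshapes' (name -> weight) and
    'voice_activity' (0..1). Silence relaxes mouth weights toward neutral."""
    vad = task_predictions.get("voice_activity", 1.0)
    weights = dict(task_predictions["blendshapes"])
    if vad < 0.5:                        # treat the frame as non-speech
        for key in weights:
            if key.startswith("mouth"):
                weights[key] *= vad      # scale the mouth toward neutral
    return weights

preds = {"blendshapes": {"mouth_open": 0.8, "brow_raise": 0.3},
         "voice_activity": 0.2}
print(final_blendshapes(preds))  # mouth_open damped, brow_raise kept
```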
-
Publication number: 20220270314
Abstract: Implementations described herein relate to methods, systems, and computer-readable media to generate animations for a 3D avatar from input video captured at a client device. A camera may capture video of a face while a trained face detection model and a trained regression model output a set of FACS weights, head poses, and facial landmarks to be translated into the animations of the 3D avatar. Additionally, a higher level-of-detail may be intelligently selected based upon user preferences and/or computing conditions at the client device.
Type: Application
Filed: February 22, 2022
Publication date: August 25, 2022
Applicant: Roblox Corporation
Inventors: Inaki NAVARRO, Dario KNEUBUHLER, Tijmen VERHULSDONCK, Eloi DU BOIS, Will WELCH, Vivek VERMA, Ian SACHS, Kiran BHAT
-
Patent number: 11049332
Abstract: A method of transferring a facial expression from a subject to a computer generated character that includes: receiving a plate with an image of the subject's facial expression and an estimate of intrinsic parameters of a camera used to film the plate; generating a three-dimensional parameterized deformable model of the subject's face where different facial expressions of the subject can be obtained by varying values of the model parameters; solving for the facial expression in the plate by executing a deformation solver to solve for at least some parameters of the deformable model with a differentiable renderer and shape-from-shading techniques, using as inputs the three-dimensional parameterized deformable model, estimated intrinsic camera parameters, estimated lighting conditions, and albedo estimates over a series of iterations to infer geometry of the facial expression and generate an intermediate facial mesh; generating, from the intermediate facial mesh, refined albedo estimates for the deformable model; and
Type: Grant
Filed: March 3, 2020
Date of Patent: June 29, 2021
Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
Inventors: Matthew Loper, Stéphane Grabli, Kiran Bhat
-
Publication number: 20210027511
Abstract: Systems and methods for animating from audio in accordance with embodiments of the invention are illustrated. One embodiment includes a method for generating animation from audio. The method includes steps for receiving input audio data, generating an embedding for the input audio data, and generating several predictions for several tasks from the generated embedding. The several predictions include at least one of blendshape weights, event detection, and/or voice activity detection. The method includes steps for generating a final prediction from the several predictions, where the final prediction includes a set of blendshape weights, and generating an output based on the generated final prediction.
Type: Application
Filed: July 23, 2020
Publication date: January 28, 2021
Applicant: LoomAi, Inc.
Inventors: Chong Shang, Eloi Henri Homere Du Bois, Inaki Navarro, Will Welch, Rishabh Battulwar, Ian Sachs, Vivek Verma, Kiran Bhat
-
Publication number: 20200286301
Abstract: A method of transferring a facial expression from a subject to a computer generated character that includes: receiving a plate with an image of the subject's facial expression and an estimate of intrinsic parameters of a camera used to film the plate; generating a three-dimensional parameterized deformable model of the subject's face where different facial expressions of the subject can be obtained by varying values of the model parameters; solving for the facial expression in the plate by executing a deformation solver to solve for at least some parameters of the deformable model with a differentiable renderer and shape-from-shading techniques, using as inputs the three-dimensional parameterized deformable model, estimated intrinsic camera parameters, estimated lighting conditions, and albedo estimates over a series of iterations to infer geometry of the facial expression and generate an intermediate facial mesh; generating, from the intermediate facial mesh, refined albedo estimates for the deformable model; and
Type: Application
Filed: March 3, 2020
Publication date: September 10, 2020
Applicant: LUCASFILM ENTERTAINMENT COMPANY LTD.
Inventors: Matthew Loper, Stéphane Grabli, Kiran Bhat
-
Patent number: 10559111
Abstract: Systems and methods for computer animation of 3D models of heads generated from images of faces are disclosed. A 2D captured image that includes an image of a face can be received and used to generate a static 3D model of a head. A rig can be fit to the static 3D model to generate an animation-ready 3D generative model. Sets of rig parameters can each map to particular sounds or particular facial movement observed in a video. These mappings can be used to generate playlists of sets of rig parameters based upon received audio or video content. The playlist may be played in synchronization with an audio rendition of the audio content. Methods can receive a captured image, identify taxonomy attributes from the captured image, select a template model for the captured image, and perform a shape solve for the selected template model based on the identified taxonomy attributes.
Type: Grant
Filed: December 14, 2018
Date of Patent: February 11, 2020
Assignee: LoomAi, Inc.
Inventors: Ian Sachs, Kiran Bhat, Dominic Monn, Senthil Radhakrishnan, Will Welch
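The sound-to-rig-parameter mapping and playlist idea can be illustrated with a small sketch. The phoneme labels, parameter names, and values are all invented; only the lookup-and-sequence structure reflects the abstract.

```python
# Illustrative sketch: each recognized sound indexes a stored set of rig
# parameters, and a "playlist" is the sequence of those sets to play back
# in sync with the audio. Labels and values are made up.
SOUND_TO_RIG = {
    "AA": {"jaw_open": 0.9, "lip_wide": 0.2},
    "MM": {"jaw_open": 0.0, "lip_press": 0.8},
    "EE": {"jaw_open": 0.3, "lip_wide": 0.9},
}

def build_playlist(sound_sequence):
    """Map a sequence of detected sounds to a playlist of rig parameter
    sets, falling back to a neutral pose for unknown sounds."""
    neutral = {"jaw_open": 0.0}
    return [SOUND_TO_RIG.get(s, neutral) for s in sound_sequence]

playlist = build_playlist(["MM", "AA", "XX"])
print(len(playlist))  # 3 parameter sets, one per detected sound
```

Playing the playlist back frame-synchronized with the source audio yields lip-synced animation without per-frame video analysis.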
-
Publication number: 20190122411
Abstract: Systems and methods for computer animation of 3D models of heads generated from images of faces are disclosed. A 2D captured image that includes an image of a face can be received and used to generate a static 3D model of a head. A rig can be fit to the static 3D model to generate an animation-ready 3D generative model. Sets of rig parameters can each map to particular sounds or particular facial movement observed in a video. These mappings can be used to generate playlists of sets of rig parameters based upon received audio or video content. The playlist may be played in synchronization with an audio rendition of the audio content. Methods can receive a captured image, identify taxonomy attributes from the captured image, select a template model for the captured image, and perform a shape solve for the selected template model based on the identified taxonomy attributes.
Type: Application
Filed: December 14, 2018
Publication date: April 25, 2019
Applicant: LoomAi, Inc.
Inventors: Ian Sachs, Kiran Bhat, Dominic Monn, Senthil Radhakrishnan, Will Welch
-
Patent number: 10198845
Abstract: Systems and methods for animating expressions of 3D models from captured images of a user's face in accordance with various embodiments of the invention are disclosed. In many embodiments, expressions are identified based on landmarks from images of a user's face. In certain embodiments, weights for morph targets of a 3D model are calculated based on identified landmarks and/or weights for predefined facial expressions to animate expressions for the 3D model.
Type: Grant
Filed: May 29, 2018
Date of Patent: February 5, 2019
Assignee: LoomAi, Inc.
Inventors: Kiran Bhat, Mahesh Ramasubramanian, Michael Palleschi, Andrew A. Johnson, Ian Sachs
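The morph-target weighting described above is the standard blendshape model: an expression is a weighted sum of per-vertex deltas added to a neutral mesh. The vertex data and weights below are made up for illustration.

```python
# Minimal blendshape sketch: apply weighted morph-target deltas to a
# neutral mesh. Vertices are (x, y, z) tuples; deltas are per-vertex.
def apply_morph_targets(neutral, targets, weights):
    """neutral: list of (x, y, z) vertices; targets: dict of target name ->
    per-vertex delta list; weights: dict of target name -> weight in [0, 1]."""
    out = [list(v) for v in neutral]
    for name, deltas in targets.items():
        w = weights.get(name, 0.0)
        for vi, (dx, dy, dz) in enumerate(deltas):
            out[vi][0] += w * dx
            out[vi][1] += w * dy
            out[vi][2] += w * dz
    return [tuple(v) for v in out]

neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
targets = {"smile": [(0.0, 0.1, 0.0), (0.0, 0.2, 0.0)]}
print(apply_morph_targets(neutral, targets, {"smile": 0.5}))
```

In the patented pipeline the weights would come from detected facial landmarks and/or predefined expression weights rather than being hand-set as here.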
-
Patent number: 10169905
Abstract: Systems and methods for computer animation of 3D models of heads generated from images of faces are disclosed. A 2D captured image that includes an image of a face can be received and used to generate a static 3D model of a head. A rig can be fit to the static 3D model to generate an animation-ready 3D generative model. Sets of rig parameters can each map to particular sounds. These mappings can be used to generate playlists of sets of rig parameters based upon received audio content. The playlist may be played in synchronization with an audio rendition of the audio content.
Type: Grant
Filed: January 31, 2018
Date of Patent: January 1, 2019
Assignee: LoomAi, Inc.
Inventors: Kiran Bhat, Akash Garg, Michael Daniel Flynn, Will Welch
-
Patent number: 10147219
Abstract: Performance capture systems and techniques are provided for capturing a performance of a subject and reproducing an animated performance that tracks the subject's performance. For example, systems and techniques are provided for determining control values for controlling an animation model to define features of a computer-generated representation of a subject based on the performance. A method may include obtaining input data corresponding to a pose performed by the subject, the input data including position information defining positions on a face of the subject. The method may further include obtaining an animation model for the subject that includes adjustable controls that control the animation model to define facial features of the computer-generated representation of the face, and matching one or more of the positions on the face with one or more corresponding positions on the animation model.
Type: Grant
Filed: February 3, 2017
Date of Patent: December 4, 2018
Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
Inventors: Kiran Bhat, Michael Koperwas, Jeffery Yost, Ji Hun Yu, Sheila Santos