Temporal Interpolation Or Processing Patents (Class 345/475)
  • Patent number: 12260678
    Abstract: Systems and techniques are provided to identify, analyze, and evaluate key events and mechanical variables in videos of human motion related to an action, such as may be used in training for various sports and other activities. Information about the action is calculated based on analysis of the video such as via keypoint identification, pose identification and/or estimation, and related calculations, and provided automatically to the user to allow for improvement of the action.
    Type: Grant
    Filed: September 14, 2023
    Date of Patent: March 25, 2025
    Assignee: QualiaOS, Inc.
    Inventors: Kevin John Prince, Carlos Dietrich, Justin Ali Kennedy
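A minimal, hypothetical Python sketch of one "mechanical variable" such a system might compute from detected keypoints — a joint angle — not the patented pipeline itself:

```python
# Hypothetical sketch: a joint angle, one of the mechanical variables a
# keypoint-based video analysis could derive. Keypoints are (x, y) pixels.
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at keypoint b, formed by the segments b-a and b-c."""
    v1 = np.asarray(a, float) - np.asarray(b, float)
    v2 = np.asarray(c, float) - np.asarray(b, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# e.g. an elbow angle from shoulder, elbow, and wrist keypoints
print(joint_angle((0.0, 0.0), (1.0, 0.0), (1.0, 1.0)))  # 90.0
```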
  • Patent number: 12254642
    Abstract: A computer-implemented method is performed by one or more processors to automatically register a plurality of captured data sets, each obtained separately using a respective measurement device. The method includes accessing a first captured data of a portion of an environment, and a first image of said portion captured from a known relative position and angle with respect to the first captured data. From the plurality of captured data, a second captured data is then identified that at least partially overlaps said portion; the second captured data is identified based on a corresponding second image, captured from a known relative position and angle with respect to the second captured data. The method further includes transforming the second captured data and/or the first captured data into a common coordinate system.
    Type: Grant
    Filed: October 27, 2021
    Date of Patent: March 18, 2025
    Inventor: Jafar Amiri Parian
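Once an overlap between two captures is identified, transforming one capture into the other's coordinate system amounts to estimating a rigid transform. A generic sketch (the Kabsch algorithm over corresponding points, not the inventor's specific method):

```python
# Generic rigid alignment of corresponding 3D points (Kabsch algorithm);
# the patent's image-based overlap identification is assumed already done.
import numpy as np

def rigid_align(src, dst):
    """Least-squares R, t such that R @ src[i] + t ≈ dst[i]."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs
```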
  • Patent number: 12243135
    Abstract: Techniques for vector object blending are described to generate a transformed vector object based on a first vector object and a second vector object. A transformation module, for instance, receives a first vector object that includes a plurality of first paths and a second vector object that includes a plurality of second paths. The transformation module computes morphing costs based on a correspondence within candidate path pairs that include one of the first paths and one of the second paths. Based on the morphing costs, the transformation module generates a low-cost mapping of paths between the first paths and the second paths. To generate the transformed vector object, the transformation module adjusts one or more properties of at least one of the first paths based on the mapping, such as geometry, appearance, and z-order.
    Type: Grant
    Filed: November 4, 2022
    Date of Patent: March 4, 2025
    Assignee: Adobe Inc.
    Inventors: Tarun Beri, Matthew David Fisher
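A minimal sketch of the "low-cost mapping" step, assuming a caller-supplied placeholder morphing-cost function (the patent's actual cost model is richer):

```python
# Minimal path-mapping sketch: build a cost matrix over candidate path pairs
# and solve the assignment that minimizes total morphing cost.
import numpy as np
from scipy.optimize import linear_sum_assignment

def map_paths(first_paths, second_paths, morphing_cost):
    """morphing_cost(p, q) -> float; returns (i, j) index pairs of mapped paths."""
    costs = np.array([[morphing_cost(p, q) for q in second_paths]
                      for p in first_paths])
    rows, cols = linear_sum_assignment(costs)     # Hungarian-style matching
    return list(zip(rows.tolist(), cols.tolist()))
```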
  • Patent number: 12182944
    Abstract: Various methods and systems are provided for authoring and presenting 3D presentations. Generally, an augmented or virtual reality device for each author, presenter and audience member includes 3D presentation software. During authoring mode, one or more authors can use 3D and/or 2D interfaces to generate a 3D presentation that choreographs behaviors of 3D assets into scenes and beats. During presentation mode, the 3D presentation is loaded in each user device, and 3D images of the 3D assets and corresponding asset behaviors are rendered among the user devices in a coordinated manner. As such, one or more presenters can navigate the scenes and beats of the 3D presentation to deliver the 3D presentation to one or more audience members wearing augmented reality headsets.
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: December 31, 2024
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Darren Alexander Bennett, David J. W. Seymour, Charla M. Pereira, Enrico William Guld, Kin Hang Chu, Julia Faye Taylor-Hell, Jonathon Burnham Cobb, Helen Joan Hem Lam, You-Da Yang, Dean Alan Wadsworth, Andrew Jackson Klein
  • Patent number: 12148082
    Abstract: A system and method for animating an avatar in a virtual world, comprising: an image processor arranged to process a stream of images capturing an active user to obtain an activity data set arranged to track the activity of the user; an avatar spatial processor arranged to process the activity data set to determine a plurality of motion tracking points arranged to track the user's activity over a three-dimensional space; a facial expression detection engine arranged to process the activity data set to detect one or more facial expressions of the user; and an avatar animation engine arranged to animate the avatar in the virtual world with the plurality of motion tracking points and the detected one or more facial expressions so as to mirror the actions and facial expressions of the active user.
    Type: Grant
    Filed: August 23, 2022
    Date of Patent: November 19, 2024
    Assignee: The Education University of Hong Kong
    Inventors: Yanjie Song, Leung Ho Philip Yu, Chi Kin John Lee, Kaiyi Wu, Jiaxin Cao
  • Patent number: 12138543
    Abstract: Systems and methods are provided for enhanced animation generation based on generative control models. An example method includes accessing an autoencoder trained based on character control information generated using motion capture data, the character control information indicating, at least, trajectory information associated with the motion capture data, and the autoencoder being trained to reconstruct, via a latent feature space, the character control information. First character control information associated with a trajectory of an in-game character of an electronic game is obtained. A latent feature representation is generated and the latent feature representation is modified. A control signal is output to a motion prediction network for use in updating a character pose of the in-game character.
    Type: Grant
    Filed: January 20, 2021
    Date of Patent: November 12, 2024
    Assignee: Electronic Arts Inc.
    Inventors: Wolfram Sebastian Starke, Yiwei Zhao, Mohsen Sardari, Harold Henry Chaput, Navid Aghdaie, Kazi Atif-Uz Zaman
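A toy sketch of the control flow the abstract describes — encode control features to a latent vector, modify it, decode a control signal — with made-up dimensions and random stand-in weights:

```python
# Toy encode-modify-decode loop; weights are random stand-ins, not a
# trained autoencoder, and all dimensions are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(8, 32))    # 32-dim control features -> 8-dim latent
W_dec = rng.normal(size=(32, 8))    # latent -> control signal

def control_signal(control_features, latent_offset):
    z = np.tanh(W_enc @ control_features)   # latent feature representation
    z = z + latent_offset                   # modify latent (e.g. a style edit)
    return W_dec @ z                        # signal for the motion predictor

signal = control_signal(rng.normal(size=32), np.zeros(8))
```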
  • Patent number: 12136440
    Abstract: The present disclosure provides a video processing method, apparatus, device, and storage medium. The method includes: after determining a target effect style and a target video clip based on presentation of a video to be processed on a timeline, establishing a binding relationship between the target effect style and the target video clip in response to an effect-application trigger operation, so that the target effect style is applied to the target video clip. By binding an effect style to a single clip, the disclosed embodiments allow effect processing to target just one portion of a video, meeting the user's demand for clip-level effects, increasing the flexibility of video effect processing, and improving the user experience.
    Type: Grant
    Filed: December 26, 2023
    Date of Patent: November 5, 2024
    Assignee: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD.
    Inventors: Qifan Zheng, Chen Zhao, Yan Zeng, Pingfei Fu
  • Patent number: 12118827
    Abstract: An apparatus for detecting mounting behavior of an animal object includes a memory that stores a program and a processor that executes the program. The program extracts animal detection information about an animal object detected in a received image by inputting the image into an animal detection model. The program then extracts pairs of bounding boxes for which the distance between the coordinates of their central points is smaller than a first set value, the difference in rotational angle is smaller than a second set value, and the difference between the vector connecting the central points of the extracted bounding boxes and the orientation of each bounding box is smaller than a third set value. If activity information of the animal object is then extracted based on a motion history image (MHI) of the image, it is determined that mounting behavior has occurred.
    Type: Grant
    Filed: March 23, 2022
    Date of Patent: October 15, 2024
    Assignee: INTFLOW INC.
    Inventors: Kwang Myung Jeon, So Heun Ju
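One plausible reading of the three geometric tests in the abstract, in Python, with hypothetical thresholds and each detection reduced to (cx, cy, angle in degrees):

```python
# Hypothetical pairing test over two detections, each (cx, cy, angle_deg);
# the three thresholds stand in for the patent's first/second/third set values.
import numpy as np

def is_mounting_candidate(b1, b2, d_max=50.0, rot_max=30.0, align_max=25.0):
    c1, c2 = np.array(b1[:2], float), np.array(b2[:2], float)
    if np.linalg.norm(c2 - c1) >= d_max:                         # center distance
        return False
    if abs((b1[2] - b2[2] + 180.0) % 360.0 - 180.0) >= rot_max:  # rotation diff
        return False
    link = np.degrees(np.arctan2(c2[1] - c1[1], c2[0] - c1[0]))  # center-to-center
    return all(abs((link - a + 180.0) % 360.0 - 180.0) < align_max
               for a in (b1[2], b2[2]))                          # orientation alignment
```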
  • Patent number: 12039661
    Abstract: A device for performing parameterized generation of two-dimensional images from a three-dimensional model may include at least one processor configured to receive a set of parameters for generating a two-dimensional image from a three-dimensional model. The at least one processor may be further configured to position a render camera relative to the three-dimensional model based at least in part on a first parameter of the set of parameters, apply a pose to the three-dimensional model based at least in part on a second parameter of the set of parameters, and add at least one supplemental content item to the posed three-dimensional model based at least in part on a third parameter of the set of parameters. The at least one processor may be further configured to generate, using the positioned render camera, the two-dimensional image from the posed three-dimensional model including the added at least one supplemental content item.
    Type: Grant
    Filed: June 1, 2020
    Date of Patent: July 16, 2024
    Assignee: Apple Inc.
    Inventors: Jeffrey D. Harris, Amaury Balliet, Remi G. Santos, Jason D. Rickwald
  • Patent number: 12004836
    Abstract: A surgical manipulator and method of operating the same. The surgical manipulator includes an arm with a plurality of links and joints, wherein an angle between adjacent links forms a joint angle. The arm includes a distal end configured to support a surgical instrument with an energy applicator. At least one controller is coupled to the arm and models the surgical instrument and the energy applicator as a virtual rigid body. The controller(s) determine a commanded pose for the surgical instrument and the energy applicator based on a summation of a plurality of forces and/or torques, wherein the plurality of forces and/or torques are selectively applied to the virtual rigid body to emulate orientation and movement of the surgical instrument and the energy applicator. The controller(s) determine commanded joint angles for the arm that place the surgical instrument and the energy applicator according to the commanded pose.
    Type: Grant
    Filed: March 15, 2023
    Date of Patent: June 11, 2024
    Assignee: Stryker Corporation
    Inventors: David G. Bowling, John Michael Stuart, Joel N. Beer
  • Patent number: 12008702
    Abstract: A configuration is provided that causes an agent, such as a character in a virtual world or a robot in the real world, to perform actions by imitating the actions of a human. An environment map including type and layout information about objects in the real world is generated, and the actions of a person acting in the real world are analyzed. Time/action/environment-map correspondence data, comprising the environment map and time-series action analysis data, is generated, and a learning process using this correspondence data produces an action model that takes the environment map as an input value and outputs an action estimation result. Action control data for a character in a virtual world or for a robot is then generated using the action model, so that, for example, the agent performs an action by imitating an action of a human.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: June 11, 2024
    Assignee: SONY GROUP CORPORATION
    Inventors: Takashi Seno, Yohsuke Kaji, Tomoya Ishikawa, Gaku Narita
  • Patent number: 12002164
    Abstract: Various methods and systems are provided for authoring and presenting 3D presentations. Generally, an augmented or virtual reality device for each author, presenter and audience member includes 3D presentation software. During authoring mode, one or more authors can use 3D and/or 2D interfaces to generate a 3D presentation that choreographs behaviors of 3D assets into scenes and beats. During presentation mode, the 3D presentation is loaded in each user device, and 3D images of the 3D assets and corresponding asset behaviors are rendered among the user devices in a coordinated manner. As such, one or more presenters can navigate the scenes and beats of the 3D presentation to deliver the 3D presentation to one or more audience members wearing augmented reality headsets.
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: June 4, 2024
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Darren Alexander Bennett, David J. W. Seymour, Charla M. Pereira, Enrico William Guld, Kin Hang Chu, Julia Faye Taylor-Hell, Jonathon Burnham Cobb, Helen Joan Hem Lam, You-Da Yang, Dean Alan Wadsworth, Andrew Jackson Klein
  • Patent number: 11995847
    Abstract: During operation, an electronic device may capture images using multiple image sensors having different fields of view and positions. Then, based at least in part on the apparent size of an anatomical feature in the images (such as an interpupillary distance) and a predefined or predetermined size of that feature, the electronic device may determine the absolute motion of at least a portion of the individual along the direction between that portion of the individual and the electronic device. Moreover, the electronic device may compute, based at least in part on an estimated distance along the direction (corresponding to the apparent size and the predefined or predetermined size) and on angular information associated with one or more objects in the images relative to the positions, the absolute motion of at least the portion of the individual in a plane perpendicular to the direction.
    Type: Grant
    Filed: September 24, 2022
    Date of Patent: May 28, 2024
    Assignee: Echo Pixel, Inc.
    Inventors: Sergio Aguirre-Valencia, Janet H. Goldenstein
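The distance estimate the abstract relies on follows from simple pinhole geometry; a worked sketch (not the assignee's code):

```python
# Pinhole relation: distance ≈ focal_length_px * true_size_m / apparent_size_px.
def distance_from_apparent_size(focal_px, true_size_m, apparent_px):
    return focal_px * true_size_m / apparent_px

# An interpupillary distance of ~0.063 m that spans 40 px under a 1000 px
# focal length puts the face about 1.575 m from the camera.
print(distance_from_apparent_size(1000.0, 0.063, 40.0))  # 1.575
```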
  • Patent number: 11978145
    Abstract: An expression generation method for an animation object is provided. In the method, a first facial expression of a target animation object is acquired by a first animation application from a facial expression set generated by a second animation application. The facial expression set includes different facial expressions of the target animation object. A display parameter of the acquired first facial expression in the first animation application is adjusted based on a first user input to obtain a second facial expression of the target animation object. A target animation of the target animation object that includes an image frame of the second facial expression is generated.
    Type: Grant
    Filed: August 3, 2022
    Date of Patent: May 7, 2024
    Assignee: TENCENT AMERICA LLC
    Inventor: Chang Guo
  • Patent number: 11963741
    Abstract: The pose and shape of a human body may be recovered based on joint location information associated with the human body. The joint location information may be derived based on an image of the human body or from an output of a human motion capture system. The recovery of the pose and shape of the human body may be performed by a computer-implemented artificial neural network (ANN) trained to perform the recovery task using training datasets that include paired joint location information and human model parameters. The training of the ANN may be conducted in accordance with multiple constraints designed to improve the accuracy of the recovery and by artificially manipulating the training data so that the ANN can learn to recover the pose and shape of the human body even with partially observed joint locations.
    Type: Grant
    Filed: January 11, 2023
    Date of Patent: April 23, 2024
    Assignee: Shanghai United Imaging Intelligence Co., Ltd.
    Inventors: Ziyan Wu, Srikrishna Karanam, Changjiang Cai, Georgios Georgakis
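A sketch of the training-data manipulation the abstract mentions: randomly masking joints so the network learns to recover pose and shape from partially observed joint locations (array shapes are assumptions):

```python
# Random joint masking for training augmentation; joints is a (J, 3) array.
import numpy as np

def mask_joints(joints, drop_prob=0.3, rng=np.random.default_rng()):
    """Returns (masked_joints, visibility) with dropped joints zeroed."""
    visible = rng.random(len(joints)) > drop_prob
    masked = np.where(visible[:, None], joints, 0.0)
    return masked, visible
```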
  • Patent number: 11951390
    Abstract: A method of rendering a virtual environment is disclosed. First application data is received. The first application data includes first graph data. The first graph data corresponds to a first state of an application. Second application data is received after the first application data. The second application data corresponds to a second state of the application. The first application data and the second application data are analyzed to determine a change in the first graph data associated with the second application data. An island subgraph within the first graph data that includes the change is determined. Second graph data is generated for the second state. The generating of the second graph data includes partially reconstructing the first graph data. The partial reconstructing includes rebuilding the determined island subgraph. The generated second graph data is communicated for rendering of the virtual environment in the second state.
    Type: Grant
    Filed: June 7, 2021
    Date of Patent: April 9, 2024
    Assignee: Unity IPR ApS
    Inventors: Janus Lynggaard Thorborg, Toulouse de Margerie, Wayne Johnson
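A minimal sketch of the "island subgraph" rebuild, assuming an undirected adjacency-set graph: only the connected component containing the change is reconstructed, and the rest of the old graph data is reused.

```python
# Partial reconstruction: rebuild only the component that contains a change.
def island_of(graph, changed):
    """graph: {node: set_of_neighbors}; returns the component holding `changed`."""
    island, stack = set(), [changed]
    while stack:
        n = stack.pop()
        if n not in island:
            island.add(n)
            stack.extend(graph[n] - island)
    return island

def rebuild(old_graph, changed_node, rebuild_fn):
    island = island_of(old_graph, changed_node)
    fresh = rebuild_fn(island)                 # reconstruct this subgraph only
    kept = {n: adj for n, adj in old_graph.items() if n not in island}
    return {**kept, **fresh}
```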
  • Patent number: 11903659
    Abstract: A robotic device for a minimally invasive medical intervention on soft tissues is provided. The robotic device uses a medical instrument having a robot arm having several degrees of freedom and having an end suitable for receiving the medical instrument, an image capture system suitable for capturing position information concerning the anatomy of the patient, a storage medium having a biomechanical model of the human body, a processing circuit configured to determine a position setpoint and an orientation setpoint for said medical instrument on the basis of the biomechanical model, on the basis of the position information and on the basis of a trajectory to be followed by the medical instrument in order to perform the medical intervention, and a control circuit configured to control the robot arm in order to place the medical instrument in the position setpoint and the orientation setpoint.
    Type: Grant
    Filed: November 8, 2018
    Date of Patent: February 20, 2024
    Assignee: QUANTUM SURGICAL
    Inventors: Lucien Blondel, Fernand Badano, Bertin Nahum
  • Patent number: 11871146
    Abstract: A video processor is configured to perform the following steps: receiving a series of input frames; calculating a buffer stage value according to the series of input frames, wherein the buffer stage value corresponds to a status of the input frames stored in a frame buffer of the video processor; and selecting a frame set from the input frames stored in the frame buffer for generating an interpolated frame as an output frame to be output by the video processor according to the buffer stage value.
    Type: Grant
    Filed: August 16, 2022
    Date of Patent: January 9, 2024
    Assignee: NOVATEK Microelectronics Corp.
    Inventors: Chih Chang, I-Feng Lin, Hsiao-En Chang
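The abstract does not spell out how the buffer stage value selects a frame set; the sketch below is only a guess at the general shape of that logic:

```python
# Speculative sketch: buffer occupancy (the "stage value") decides whether
# interpolation is possible and which adjacent pair of frames to use.
from collections import deque

def select_frame_set(frame_buffer: deque):
    stage = len(frame_buffer)        # frames currently held in the buffer
    if stage < 2:
        return None                  # cannot interpolate; repeat a frame instead
    mid = stage // 2                 # pick a pair near the middle of the buffer
    return frame_buffer[mid - 1], frame_buffer[mid]
```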
  • Patent number: 11806162
    Abstract: Described herein are methods and systems for using three-dimensional (3D) human movement data as an interactive and synesthetic means of communication that allows body language to be shared between and among individuals and groups, permitting never-before-seen means of expressivity and sharing, and forming the basis for a novel type of media having numerous applications, for example as part of or to enhance the application of psychedelic-assisted therapy, especially where such therapy incorporates augmented or virtual reality.
    Type: Grant
    Filed: January 27, 2023
    Date of Patent: November 7, 2023
    Inventors: Sarah Hashkes, Matthew Hoe
  • Patent number: 11798318
    Abstract: Systems and techniques are provided to identify, analyze, and evaluate key events and mechanical variables in videos of human motion related to an action, such as may be used in training for various sports and other activities. Information about the action is calculated based on analysis of the video such as via keypoint identification, pose identification and/or estimation, and related calculations, and provided automatically to the user to allow for improvement of the action.
    Type: Grant
    Filed: July 30, 2021
    Date of Patent: October 24, 2023
    Assignee: QualiaOS, Inc.
    Inventors: Kevin John Prince, Carlos Dietrich, Justin Ali Kennedy
  • Patent number: 11785332
    Abstract: The described technology is directed towards a production shot design system that facilitates previsualizing scene shots, including by members of a production crew (running client devices) in different locations, in a collaborative and secure shot construction environment. Modifiable scene elements' properties and camera data can be manipulated to build a scene (shot) containing modifiable and non-modifiable scene elements. In an online, shared camera mode, changes to a scene can be communicated to other client devices virtually immediately, so that each client device displays the change for other users to see at an interactive frame rate. Scene changes can also be made locally and/or in an offline mode before being communicated to other users. In various aspects, animation and a video plane camera/video plane (e.g., greenscreen) are integrated into the production shot design system.
    Type: Grant
    Filed: January 24, 2022
    Date of Patent: October 10, 2023
    Assignee: HOME BOX OFFICE, INC.
    Inventors: Stephen Beres, Uwe Kranich
  • Patent number: 11776156
    Abstract: A method includes receiving video data that includes a series of frames of image data. Here, the video data is representative of an actor performing an activity. The method also includes processing the video data to generate a spatial input stream including a series of spatial images representative of spatial features of the actor performing the activity, a temporal input stream representative of motion of the actor performing the activity, and a pose input stream including a series of images representative of a pose of the actor performing the activity. Using at least one neural network, the method also includes processing the temporal input stream, the spatial input stream, and the pose input stream. The method also includes classifying, by the at least one neural network, the activity based on the temporal input stream, the spatial input stream, and the pose input stream.
    Type: Grant
    Filed: June 11, 2021
    Date of Patent: October 3, 2023
    Assignee: Google LLC
    Inventors: Yinxiao Li, Zhichao Lu, Xuehan Xiong, Jonathan Huang
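A sketch of the final fusion step under the assumption that each stream already produces per-class logits (the patented networks are far richer than this):

```python
# Late fusion of three streams into one activity prediction.
import numpy as np

def classify_activity(spatial_logits, temporal_logits, pose_logits):
    fused = np.mean([spatial_logits, temporal_logits, pose_logits], axis=0)
    return int(np.argmax(fused))     # index of the predicted activity class
```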
  • Patent number: 11769281
    Abstract: Vector object transformation techniques are described that support generation of a transformed vector object based on a first vector object and a second vector object. A plurality of paths for a first and second vector object, for instance, are generated. Corresponding paths are determined by detecting which of the plurality of paths from the first vector object correspond to which of the plurality of paths from the second vector object. A mapping of control points between the first and second vector objects is generated. Using the mapping, a transformation of the first vector object is generated by adjusting one or more control points of the first vector object. As a result, the transformed vector object includes visual characteristics based on both the first vector object and the second vector object.
    Type: Grant
    Filed: February 1, 2022
    Date of Patent: September 26, 2023
    Assignee: Adobe Inc.
    Inventors: Tarun Beri, Matthew David Fisher
  • Patent number: 11769346
    Abstract: Methods and apparatuses for inserting face and hair information from a source video (401) into a destination (driver) video (402) while mimicking the pose, illumination, and hair motion of the destination video (402). An apparatus embodiment comprises an identity encoder (404) configured to encode face and hair information of the source video (401) and to produce as an output an identity vector; a pose encoder (405) configured to encode pose information of the destination video (402) and to produce as an output a pose vector; an illumination encoder (406) configured to encode head and hair illumination of the destination video (402) and to produce as an output an illumination vector; and a hair motion encoder (414) configured to encode hair motion of the destination video (402) and to produce as an output a hair motion vector. The identity vector, pose vector, illumination vector, and hair motion vector are fed as inputs to a neural network generator (410).
    Type: Grant
    Filed: December 22, 2021
    Date of Patent: September 26, 2023
    Assignee: Spree3d Corporation
    Inventors: Mohamed N. Moustafa, Ahmed A. Ewais, Amr A. Ali
  • Patent number: 11722764
    Abstract: The present disclosure generally relates to displaying visual effects in image data. In some examples, visual effects include an avatar displayed on a user's face. In some examples, visual effects include stickers applied to image data. In some examples, visual effects include screen effects. In some examples, visual effects are modified based on depth data in the image data.
    Type: Grant
    Filed: November 12, 2021
    Date of Patent: August 8, 2023
    Assignee: Apple Inc.
    Inventors: Marcel Van Os, Jessica L. Aboukasm, Jean-Francois M. Albouze, David R. Black, Jae Woo Chang, Robert M. Chinn, Gregory L. Dudey, Katherine K. Ernst, Aurelio Guzman, Christopher J. Moulios, Joanna M. Newman, Grant Paul, Nicolas Scapel, William A. Sorrentino, III, Brian E. Walsh, Joseph-Alexander P. Weil, Christopher Wilson
  • Patent number: 11720081
    Abstract: A method includes receiving, by a mobile computing device from an electroencephalogram (EEG) monitoring headset, an incoming wireless communication signal including an EEG data stream. The method may further include processing, by an application running on the mobile computing device, the received EEG data stream to determine at least one actionable command for at least one peripheral device. The method may also include transmitting, by the mobile computing device to the at least one peripheral device, at least one outgoing wireless communication signal including the at least one determined actionable command.
    Type: Grant
    Filed: March 17, 2020
    Date of Patent: August 8, 2023
    Assignee: DUKE UNIVERSITY
    Inventors: Allen Song, Chris Petty
  • Patent number: 11704855
    Abstract: Disclosed herein are system, method, and device embodiments for implementing a customizable animation experience. A multi-tenant service may associate an animation element with a visual component of an application, and generate a markup component including an animation parameter configured to customize the animation element within the application code. Further, the multi-tenant service may receive a request for the animation from an animation manager based on execution of the application code, and send the animation information to the animation manager. In some embodiments, the animation manager is configured to set the animation parameter to the animation information and present an animation associated with the animation element based on the animation parameter.
    Type: Grant
    Filed: January 22, 2020
    Date of Patent: July 18, 2023
    Assignee: Salesforce, Inc.
    Inventors: Pavithra Ramamurthy, Kirupa Chinnathambi
  • Patent number: 11687045
    Abstract: Disclosed are platforms for communicating among one or more otherwise independent systems involved in controlling functions of buildings or other sites having switchable optical devices deployed therein. Such independent systems include a window control system and one or more other independent systems such as systems that control residential home products (e.g., thermostats, smoke alarms, etc.), HVAC systems, security systems, lighting control systems, and the like. Together the systems control and/or monitor multiple features and/or products, including switchable windows and other infrastructure of a site, which may be a commercial, residential, or public site.
    Type: Grant
    Filed: June 22, 2021
    Date of Patent: June 27, 2023
    Assignee: View, Inc.
    Inventors: Dhairya Shrivastava, Stephen Clark Brown, Vijay Mani, Ronald F. Cadet
  • Patent number: 11675418
    Abstract: There is provided a program, an information processor, and an information processing method that make it possible to blend the motions of a plurality of actors captured using a motion capture technique and to reproduce the blended motions in real time in an avatar or the like in a virtual space. The program causes a computer to implement a control function of dynamically controlling the motion of an avatar in a virtual space or a robot in the real space, the control function being configured to: capture the motions of a plurality of actors in the real space from respective motion sensors attached to the actors; blend the motions of the plurality of actors on the basis of a predetermined algorithm; and dynamically control the motion of the avatar or the robot on the basis of the blend result, causing the avatar or robot to make a motion reflecting the motions of the plurality of actors.
    Type: Grant
    Filed: April 15, 2019
    Date of Patent: June 13, 2023
    Assignee: SONY CORPORATION
    Inventors: Yasutaka Fukumoto, Nobuhiro Saijo, Kazuma Takahashi, Keita Mochizuki
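A toy blend of two actors' captured motions, assuming each motion is a per-joint Euler-angle dict and the "predetermined algorithm" is a fixed weighting (production systems blend quaternions instead):

```python
# Weighted per-joint blend over several actors' motions.
import numpy as np

def blend_motions(motions, weights):
    w = np.asarray(weights, float)
    w = w / w.sum()
    return {joint: sum(wi * np.asarray(m[joint], float)
                       for wi, m in zip(w, motions))
            for joint in motions[0]}

walk = {"elbow": (10.0, 0.0, 0.0)}
wave = {"elbow": (90.0, 0.0, 0.0)}
print(blend_motions([walk, wave], [0.7, 0.3]))  # elbow -> [34., 0., 0.]
```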
  • Patent number: 11641524
    Abstract: A method for displaying an image in an electronic device is provided. The method includes obtaining a first image including a plurality of subjects, setting a plurality of sub-regions respectively including the plurality of subjects, obtaining a distance between the plurality of sub-regions, when a distance between a first region and a second region, which are adjacent to each other, among the plurality of sub-regions is greater than or equal to a specified threshold distance, omitting at least a portion of a third region disposed between the first region and the second region in the first image, and displaying a second image obtained by resetting a size of each of the plurality of sub-regions and rearranging each of the plurality of sub-regions.
    Type: Grant
    Filed: February 5, 2021
    Date of Patent: May 2, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Kyuwon Kim, Yusic Kim, Chulsang Chang, Hyungmin Cho, Jaewoong Choi
  • Patent number: 11640235
    Abstract: This application provides an additional object display method, an additional object display apparatus, and a computer device, and relates to the field of computer application technologies. The method includes: displaying a trigger control in a video playback interface; pausing playback of a video in response to an activation operation on the trigger control, and displaying a reference picture frame; obtaining a target object in the reference picture frame in response to a drag operation on the trigger control; and displaying, corresponding to the target object, an additional object corresponding to the trigger control in a picture frame of the video during playback of the video, so that an additional object matches a video playback picture during playback of the video.
    Type: Grant
    Filed: May 12, 2020
    Date of Patent: May 2, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xianmin Xiao, Zhong Bao Zhang, Hui Jiang, Wen Tao Wang, Peng Xiao, Xiong Zhi Li, Yuanhao Zhang, Feng Lin
  • Patent number: 11599706
    Abstract: Systems, methods, and non-transitory computer readable media may provide a view of geospatial information. A user's selection of a location may be obtained. Characteristic information describing characteristics of the location may be obtained. Activities information describing activities of the location may be obtained. An interface (e.g., user interface, API) enabling presentation of a geospatial view of the activities of the location with respect to the characteristics of the location may be provided.
    Type: Grant
    Filed: May 3, 2018
    Date of Patent: March 7, 2023
    Assignee: Palantir Technologies Inc.
    Inventors: Alexander Mark, Andrew Elder, Brandon Marc-Aurele, David Montague, Eric Knudson, Eric Jeney, Jeffrey Bagdis, Daniel O'Malley, Vincent Tilson
  • Patent number: 11587279
    Abstract: Examples of systems and methods for augmented facial animation are generally described herein. A method for mapping facial expressions to an alternative avatar expression may include capturing a series of images of a face, and detecting a sequence of facial expressions of the face from the series of images. The method may include determining an alternative avatar expression mapped to the sequence of facial expressions, and animating an avatar using the alternative avatar expression.
    Type: Grant
    Filed: February 28, 2022
    Date of Patent: February 21, 2023
    Assignee: INTEL CORPORATION
    Inventors: Yikai Fang, Yangzhou Du, Qiang Eric Li, Xiaofeng Tong, Wenlong Li, Minje Park, Myung-Ho Ju, Jihyeon Kate Yi, Tae-Hoon Pete Kim
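The mapping itself can be as simple as a lookup from a detected expression sequence to an alternative avatar expression; a toy sketch with invented names:

```python
# Toy sequence-to-alternative-expression lookup; keys and values are invented.
ALTERNATIVES = {
    ("wink", "smile"): "heart_eyes",
    ("frown", "frown"): "storm_cloud",
}

def alternative_expression(detected_sequence, window=2):
    """Map the last `window` detected expressions to an alternative, if any."""
    return ALTERNATIVES.get(tuple(detected_sequence[-window:]))

print(alternative_expression(["neutral", "wink", "smile"]))  # heart_eyes
```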
  • Patent number: 11557391
    Abstract: The pose and shape of a human body may be recovered based on joint location information associated with the human body. The joint location information may be derived based on an image of the human body or from an output of a human motion capture system. The recovery of the pose and shape of the human body may be performed by a computer-implemented artificial neural network (ANN) trained to perform the recovery task using training datasets that include paired joint location information and human model parameters. The training of the ANN may be conducted in accordance with multiple constraints designed to improve the accuracy of the recovery and by artificially manipulating the training data so that the ANN can learn to recover the pose and shape of the human body even with partially observed joint locations.
    Type: Grant
    Filed: August 17, 2020
    Date of Patent: January 17, 2023
    Assignee: Shanghai United Imaging Intelligence Co., Ltd.
    Inventors: Ziyan Wu, Srikrishna Karanam, Changjiang Cai, Georgios Georgakis
  • Patent number: 11544855
    Abstract: A target tracking method and apparatus are provided. The target tracking apparatus includes a memory configured to store a neural network, and a processor configured to extract feature information of each of a target included in a target region in a first input image, a background included in the target region, and a searching region in a second input image, using the neural network, obtain similarity information of the target and the searching region and similarity information of the background and the searching region based on the extracted feature information, obtain a score matrix including activated feature values based on the obtained similarity information, and estimate a position of the target in the searching region from the score matrix.
    Type: Grant
    Filed: December 18, 2020
    Date of Patent: January 3, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: HyunJeong Lee, Changbeom Park, Hana Lee, Sung Kwang Cho
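A bare-bones version of the score-matrix step, using plain cross-correlation between target features and the search-region feature map (the patent additionally folds in background similarity to suppress distractors):

```python
# Sliding cross-correlation producing a similarity score matrix.
import numpy as np

def score_matrix(target_feat, search_feat):
    """target_feat: (C, h, w); search_feat: (C, H, W) -> (H-h+1, W-w+1) scores."""
    C, h, w = target_feat.shape
    H, W = search_feat.shape[1:]
    scores = np.empty((H - h + 1, W - w + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            scores[y, x] = np.sum(target_feat * search_feat[:, y:y+h, x:x+w])
    return scores  # argmax estimates the target position in the search region
```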
  • Patent number: 11532102
    Abstract: Views of a virtual environment can be displayed on mobile devices in a real-world environment simultaneously for multiple users. The users can operate selection devices in the real-world environment that interact with objects in the virtual environment. Virtual characters and objects can be moved and manipulated using selection shapes. A graphical interface can be instantiated and rendered as part of the virtual environment. Virtual cameras and screens can also be instantiated to create storyboards, backdrops, and animated sequences of the virtual environment. These immersive experiences with the virtual environment can be used to generate content for users and for feature films.
    Type: Grant
    Filed: January 31, 2022
    Date of Patent: December 20, 2022
    Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: Jose Perez, III, Peter Dollar, Barak Moshe
  • Patent number: 11511191
    Abstract: An interactive control system for a game object, including: a collision module configured to assign a collision attribute to each game object; a moving module configured to receive a moving instruction directed to a game group; and an interaction module, coupled to the collision module, configured to calculate a repulsive force for each game object and represent the object's interaction form based on that repulsive force. When the game group is moved to the moving target, a group circle is formed with the center of the game group as the center of the circle and a first length as the radius; when a game object is outside the group circle, the interaction module controls the collision module to impart a restoring force directed toward the center of the circle, so as to return the game object to the game group.
    Type: Grant
    Filed: August 5, 2020
    Date of Patent: November 29, 2022
    Assignee: SHANGHAI LILITH TECHNOLOGY CORPORATION
    Inventors: Ganlin Zhuang, Yifan Mao, Huan Jin
  • Patent number: 11497999
    Abstract: A method of determining blending coefficients for respective animations includes: obtaining animation data, the animation data defining at least two different animations that are at least in part to be simultaneously applied to the animated object, each animation comprising a plurality of frames; obtaining corresponding video game data, the video game data comprising an in-game state of the object; inputting the animation data and video game data into a machine learning model, the machine learning model being trained to determine, based on the animation data and corresponding video game data, a blending coefficient for each of the animations in the animation data; determining, based on the output of the machine learning model, one or more blending coefficients for at least one of the animations, the or each blending coefficient defining a relative weighting with which each animation is to be applied to the animated object; and blending the simultaneously applied parts of the two animations using the one or more blending coefficients.
    Type: Grant
    Filed: January 13, 2020
    Date of Patent: November 15, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Fabio Cappello, Oliver Hume
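A sketch of the inference step, with the trained model stubbed out (its real inputs and architecture are not specified in the abstract):

```python
# Model-predicted coefficients weight the simultaneously applied animations.
import numpy as np

def blend_step(model, animation_frames, game_state):
    coeffs = np.asarray(model(animation_frames, game_state), float)
    coeffs = coeffs / coeffs.sum()           # normalize relative weightings
    return sum(c * np.asarray(f, float)
               for c, f in zip(coeffs, animation_frames))

# Toy stand-in model that weights both animations equally:
frame = blend_step(lambda frames, state: np.ones(len(frames)),
                   [np.zeros(3), np.ones(3)], {"on_ground": True})
```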
  • Patent number: 11478704
    Abstract: Methods and systems are provided for displaying voice input of a spectator in a video game. The method includes receiving, by a server, the voice input produced by the spectator while viewing video game video of the video game. The method includes examining, by the server, the voice input to identify speech characteristics associated with the voice input of the spectator. The method includes processing, by the server, the voice input to generate a spectator video that includes text images representing the voice input of the spectator. In one embodiment, the text images are configured to be adjusted in visual appearance based on the speech characteristics of the voice input, wherein the text images are directed in a field of view of an avatar of the spectator. The method includes combining, by the server, video game video with the spectator video to produce an overlay of the text images graphically moving in toward a game scene provided by the video game video.
    Type: Grant
    Filed: November 4, 2020
    Date of Patent: October 25, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Masanori Omote
  • Patent number: 11481948
    Abstract: The present disclosure discloses a method, device and storage medium for generating an animation. The method includes: acquiring a configuration file corresponding to a configuration file identifier; determining behavior information and animated resources based on the configuration file; acquiring first animated resources based on first animated resource identifiers in the behavior information; and generating the animation by synthesizing the behavior information and the first animated resources.
    Type: Grant
    Filed: July 22, 2020
    Date of Patent: October 25, 2022
    Assignee: BEIJING DAJIA INTERNET INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Xuan Liu, Zhenlong Bai, Kaijian Jiang, Chao Wang
  • Patent number: 11478707
    Abstract: Embodiments relate to generating image frames that include the motion of a character with one or more stretchable body parts, either by blending prestored animation clips alone or by combining that blending with inverse kinematics operations in which one or more bones in the body parts are stretched or contracted. Whether blending alone or inverse kinematics is performed depends on whether predetermined conditions are satisfied. The prestored animation clips to be blended may be selected according to the speed of the character when performing the jumping motion. When inverse kinematics is performed, the physical properties of the character are simulated to determine the trajectory of the character during the jump.
    Type: Grant
    Filed: December 8, 2020
    Date of Patent: October 25, 2022
    Assignee: SQUARE ENIX LTD.
    Inventors: Stephen Perez, Noriyuki Imamura, Gary Linn Snethen
  • Patent number: 11475608
    Abstract: One aspect of the disclosure is a non-transitory computer-readable storage medium including program instructions. Operations performed by execution of the program instructions include obtaining an input image that depicts a face of a subject, having an initial facial expression and an initial pose, determining a reference shape description based on the input image, determining a target shape description based on the reference shape description, a facial expression difference, and a pose difference, generating a rendered target shape image using the target shape description, and generating an output image based on the input image and the rendered target shape using an image generator, wherein the output image is a simulated image of the subject of the input image that has a final expression that is based on the initial facial expression and the facial expression difference, and a final pose that is based on the initial pose and the pose difference.
    Type: Grant
    Filed: August 3, 2020
    Date of Patent: October 18, 2022
    Assignee: Apple Inc.
    Inventors: Barry-John Theobald, Nataniel Ruiz Gutierrez, Nicholas E. Apostoloff
  • Patent number: 11388226
    Abstract: Systems and methods for guided personal identity based actions are provided. In example embodiments, a user-specified action from a first user device of a first user is received. The user-specified action pertains to the first user and uses data of the first user when performed. The user-specified action is linked to an identifier. An indication of the identifier is received from a second user device of a second user. In response to receiving the indication of the identifier, the user-specified action linked to the identifier is identified, the data of the first user is accessed, a user interface that includes an option to perform the user-specified action using the data of the first user is generated, and the generated user interface is presented on the second user device.
    Type: Grant
    Filed: May 29, 2018
    Date of Patent: July 12, 2022
    Assignee: Snap Inc.
    Inventors: Landon Anderton, Garrett Gee, Ryan Hornberger, Kirk Ouimet, Kameron Sheffield, Benjamin Turley
  • Patent number: 11257294
    Abstract: A cross reality system enables any of multiple types of devices to efficiently and accurately access previously stored maps and render virtual content specified in relation to those maps. The cross reality system may include a cloud-based localization service that responds to requests from devices to localize with respect to a stored map. Devices of any type, with native hardware and software configured for augmented reality operations may be configured to work with the cross reality system by incorporating components that interface between the native AR framework of the device and the cloud-based localization service. These components may present position information about the device in a format recognized by the localization service. Additionally, these components may filter or otherwise process perception data provided by the native AR framework to increase the accuracy of localization.
    Type: Grant
    Filed: October 15, 2020
    Date of Patent: February 22, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Xuan Zhao, Ali Shahrokni, Daniel Olshansky, Christian Ivan Robert Moore, Rafael Domingos Torres, Joel David Holder
  • Patent number: 11238619
    Abstract: Views of a virtual environment can be displayed on mobile devices in a real-world environment simultaneously for multiple users. The users can operate selection devices in the real-world environment that interact with objects in the virtual environment. Virtual characters and objects can be moved and manipulated using selection shapes. A graphical interface can be instantiated and rendered as part of the virtual environment. Virtual cameras and screens can also be instantiated to create storyboards, backdrops, and animated sequences of the virtual environment. These immersive experiences with the virtual environment can be used to generate content for users and for feature films.
    Type: Grant
    Filed: March 16, 2020
    Date of Patent: February 1, 2022
    Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
    Inventors: Jose Perez, III, Peter Dollar, Barak Moshe
  • Patent number: 11216288
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for evaluating interactions with a user interface of an application are disclosed. In one aspect, a method includes, for each of a plurality of different user sessions of a native application, accessing frame bundles that each include data representing content presented by a frame of a user interface of the native application at a given time. Each frame bundle includes at least a portion of a view tree of the native application used to generate the user interface at the given time and data specifying content presented by each view of the portion of the view tree. Based on the frame bundles, playback data are generated that present visual changes of the user interface corresponding to changes to the view trees.
    Type: Grant
    Filed: December 11, 2019
    Date of Patent: January 4, 2022
    Assignee: FullStory, Inc.
    Inventors: Matthew Mastracci, Joel Grayson Webber, Michael Morrissey, Hollis Bruce Johnson, Jr.
  • Patent number: 11109042
    Abstract: Systems and methods for coding a video to be overlaid by annotations are devised. A motion compensated predictive coding is employed, wherein coding parameters of video pixel blocks are determined based on the pixel blocks' relation to the annotations. A decoder decodes the video and annotates it based on metadata, obtained from the coder or other sources, describing the annotations' appearance and rendering mode.
    Type: Grant
    Filed: May 23, 2019
    Date of Patent: August 31, 2021
    Assignee: Apple Inc.
    Inventors: Sudeng Hu, Xing Wen, Jae Hoon Kim, Peikang Song, Hang Yuan, Dazhong Zhang, Xiaosong Zhou, Hsi-Jung Wu, Christopher Garrido, Ming Jin, Patrick Miauton, Karthick Santhanam
  • Patent number: 11076082
    Abstract: An image processing method includes obtaining one or more sets of image data generated by an imaging sensor and one or more sets of positional data generated by one or more positional sensors. An individual set of image data is associated with an image timestamp based on a time at which the individual set of image data was generated. An individual set of positional data is associated with a positional timestamp based on a time at which the individual set of positional data was generated. The method further includes correlating one of the one or more sets of image data with a corresponding one of the one or more sets of positional data based on an image timestamp associated with the one of the one or more sets of image data and a positional timestamp associated with the corresponding one of the one or more sets of positional data.
    Type: Grant
    Filed: November 19, 2018
    Date of Patent: July 27, 2021
    Assignee: SZ DJI OSMO TECHNOLOGY CO., LTD.
    Inventors: Xianggen Li, Li Zhou
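Correlating the two streams reduces to nearest-timestamp matching; a small sketch (names illustrative):

```python
# Pair each image timestamp with the nearest positional-data timestamp.
import bisect

def correlate(image_ts, pos_ts):
    """Both lists sorted ascending; returns an index into pos_ts per image."""
    matches = []
    for t in image_ts:
        i = bisect.bisect_left(pos_ts, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(pos_ts)]
        matches.append(min(candidates, key=lambda j: abs(pos_ts[j] - t)))
    return matches

print(correlate([0.10, 0.25], [0.00, 0.12, 0.24, 0.36]))  # [1, 2]
```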
  • Patent number: 11036524
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for evaluating interactions with a user interface of an application are disclosed. In one aspect, a method includes, for each of a plurality of different user sessions of a native application, accessing frame bundles that each include data representing content presented by a frame of a user interface of the native application at a given time. Each frame bundle includes at least a portion of a view tree of the native application used to generate the user interface at the given time and data specifying content presented by each view of the portion of the view tree. Based on the frame bundles, playback data are generated that present visual changes of the user interface corresponding to changes to the view trees.
    Type: Grant
    Filed: July 17, 2018
    Date of Patent: June 15, 2021
    Assignee: FullStory, Inc.
    Inventors: Matthew Mastracci, Joel Grayson Webber, Michael Morrissey, Hollis Bruce Johnson, Jr.
  • Patent number: 11017577
    Abstract: The invention comprises a learned model of human body shape and pose-dependent shape variation that is more accurate than previous models and is compatible with existing graphics pipelines. Our Skinned Multi-Person Linear model (SMPL) is a skinned, vertex-based model that accurately represents a wide variety of body shapes in natural human poses. The parameters of the model are learned from data, including the rest-pose template, blend weights, pose-dependent blend shapes, identity-dependent blend shapes, and a regressor from vertices to joint locations. Unlike previous models, the pose-dependent blend shapes are a linear function of the elements of the pose rotation matrices. This simple formulation enables training the entire model from a relatively large number of aligned 3D meshes of different people in different poses. The invention quantitatively evaluates variants of SMPL using linear or dual-quaternion blend skinning and shows that both are more accurate than a BlendSCAPE model trained on the same data.
    Type: Grant
    Filed: August 14, 2019
    Date of Patent: May 25, 2021
    Assignee: Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V.
    Inventors: Michael J. Black, Matthew Loper, Naureen Mahmood, Gerard Pons-Moll, Javier Romero
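A compact, toy-sized rendering of the SMPL formulation described above — template plus identity- and pose-dependent blend shapes, then linear blend skinning. Real SMPL has thousands of vertices and excludes the root joint from the pose feature; the shapes here are placeholders:

```python
# Toy SMPL-style forward pass. T:(V,3) template; S:(V,3,B) shape basis;
# P:(V,3,9J) pose blend basis (linear in rotation-matrix elements);
# W:(V,J) skinning weights; joint_xforms:(J,4,4) posed joint transforms.
import numpy as np

def smpl_vertices(T, S, P, beta, pose_rotmats, W, joint_xforms):
    pose_feat = (pose_rotmats - np.eye(3)).reshape(-1)  # 9J rotation elements
    v = T + S @ beta + P @ pose_feat                    # shaped + posed rest mesh
    v_h = np.concatenate([v, np.ones((len(v), 1))], axis=1)
    skinned = np.einsum('vj,jab,vb->va', W, joint_xforms, v_h)
    return skinned[:, :3]                               # posed vertex positions
```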