Temporal Interpolation Or Processing Patents (Class 345/475)
-
Patent number: 11963741
Abstract: The pose and shape of a human body may be recovered based on joint location information associated with the human body. The joint location information may be derived based on an image of the human body or from an output of a human motion capture system. The recovery of the pose and shape of the human body may be performed by a computer-implemented artificial neural network (ANN) trained to perform the recovery task using training datasets that include paired joint location information and human model parameters. The training of the ANN may be conducted in accordance with multiple constraints designed to improve the accuracy of the recovery and by artificially manipulating the training data so that the ANN can learn to recover the pose and shape of the human body even with partially observed joint locations.
Type: Grant
Filed: January 11, 2023
Date of Patent: April 23, 2024
Assignee: Shanghai United Imaging Intelligence Co., Ltd.
Inventors: Ziyan Wu, Srikrishna Karanam, Changjiang Cai, Georgios Georgakis
-
Patent number: 11951390
Abstract: A method of rendering a virtual environment is disclosed. First application data is received. The first application data includes first graph data. The first graph data corresponds to a first state of an application. Second application data is received after the first application data. The second application data corresponds to a second state of the application. The first application data and the second application data are analyzed to determine a change in the first graph data associated with the second application data. An island subgraph within the first graph data that includes the change is determined. Second graph data is generated for the second state. The generating of the second graph data includes partially reconstructing the first graph data. The partial reconstructing includes rebuilding the determined island subgraph. The generated second graph data is communicated for rendering of the virtual environment in the second state.
Type: Grant
Filed: June 7, 2021
Date of Patent: April 9, 2024
Assignee: Unity IPR ApS
Inventors: Janus Lynggaard Thorborg, Toulouse de Margerie, Wayne Johnson
-
Patent number: 11903659
Abstract: A robotic device for a minimally invasive medical intervention on soft tissues is provided. The robotic device includes a robot arm having several degrees of freedom and an end suitable for receiving a medical instrument, an image capture system suitable for capturing position information concerning the anatomy of the patient, a storage medium having a biomechanical model of the human body, a processing circuit configured to determine a position setpoint and an orientation setpoint for said medical instrument on the basis of the biomechanical model, the position information, and a trajectory to be followed by the medical instrument in order to perform the medical intervention, and a control circuit configured to control the robot arm in order to place the medical instrument in the position setpoint and the orientation setpoint.
Type: Grant
Filed: November 8, 2018
Date of Patent: February 20, 2024
Assignee: QUANTUM SURGICAL
Inventors: Lucien Blondel, Fernand Badano, Bertin Nahum
-
Patent number: 11871146
Abstract: A video processor is configured to perform the following steps: receiving a series of input frames; calculating a buffer stage value according to the series of input frames, wherein the buffer stage value corresponds to a status of the input frames stored in a frame buffer of the video processor; and selecting a frame set from the input frames stored in the frame buffer for generating an interpolated frame as an output frame to be output by the video processor according to the buffer stage value.
Type: Grant
Filed: August 16, 2022
Date of Patent: January 9, 2024
Assignee: NOVATEK Microelectronics Corp.
Inventors: Chih Chang, I-Feng Lin, Hsiao-En Chang
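The abstract above describes a three-step pipeline: buffer incoming frames, derive a buffer stage value from the buffer's status, then pick which stored frames to interpolate between based on that value. A minimal sketch of that idea follows; the occupancy-based stage metric, the thresholds, and the selection policy are all illustrative assumptions, since the patent does not spell them out.

```python
def buffer_stage(frames_in_buffer, capacity):
    """Buffer stage value: how full the frame buffer is (0.0 to 1.0).
    The patent's actual metric is unspecified; occupancy is an assumption."""
    return frames_in_buffer / capacity

def select_frame_pair(buffer, stage, low=0.3, high=0.7):
    """Pick the two frames to interpolate between based on the stage value.
    When the buffer runs low, reuse the newest frames; when it is nearly
    full, drain the oldest frames first (illustrative policy only)."""
    if stage < low:
        return buffer[-2], buffer[-1]   # freshest pair
    elif stage > high:
        return buffer[0], buffer[1]     # oldest pair
    mid = len(buffer) // 2
    return buffer[mid - 1], buffer[mid]

def interpolate(frame_a, frame_b, t=0.5):
    """Linear blend of two frames represented as flat lists of pixel values."""
    return [(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)]
```

For example, with a half-full buffer of four frames, the middle pair is selected and blended halfway between the two.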
-
Patent number: 11806162
Abstract: Described herein are methods and systems for using three-dimensional (3D) human movement data as an interactive and synesthetic means of communication that allows body language to be shared between and among individuals and groups, permitting never-before-seen means of expressivity and sharing, and forming the basis for a novel type of media having numerous applications, for example as part of or to enhance the application of psychedelic-assisted therapy, especially where such therapy incorporates augmented or virtual reality.
Type: Grant
Filed: January 27, 2023
Date of Patent: November 7, 2023
Inventors: Sarah Hashkes, Matthew Hoe
-
Patent number: 11798318
Abstract: Systems and techniques are provided to identify, analyze, and evaluate key events and mechanical variables in videos of human motion related to an action, such as may be used in training for various sports and other activities. Information about the action is calculated based on analysis of the video, such as via keypoint identification, pose identification and/or estimation, and related calculations, and provided automatically to the user to allow for improvement of the action.
Type: Grant
Filed: July 30, 2021
Date of Patent: October 24, 2023
Assignee: QualiaOS, Inc.
Inventors: Kevin John Prince, Carlos Dietrich, Justin Ali Kennedy
-
Patent number: 11785332
Abstract: The described technology is directed towards a production shot design system that facilitates previsualizing scene shots, including by members of a production crew (running client devices) in different locations, in a collaborative and secure shot construction environment. Modifiable scene elements' properties and camera data can be manipulated to build a scene (shot) containing modifiable and non-modifiable scene elements. In an online, shared camera mode, changes to a scene can be communicated to other client devices, e.g., virtually immediately, so that each client device displays the change for other users to see at an interactive frame rate. Scene changes can also be made locally and/or in an offline mode before communicating to other users. In various aspects, animation and a video plane camera/video plane (e.g., greenscreen) are integrated into the production shot design system.
Type: Grant
Filed: January 24, 2022
Date of Patent: October 10, 2023
Assignee: HOME BOX OFFICE, INC.
Inventors: Stephen Beres, Uwe Kranich
-
Patent number: 11776156
Abstract: A method includes receiving video data that includes a series of frames of image data. Here, the video data is representative of an actor performing an activity. The method also includes processing the video data to generate a spatial input stream including a series of spatial images representative of spatial features of the actor performing the activity, a temporal input stream representative of motion of the actor performing the activity, and a pose input stream including a series of images representative of a pose of the actor performing the activity. Using at least one neural network, the method also includes processing the temporal input stream, the spatial input stream, and the pose input stream. The method also includes classifying, by the at least one neural network, the activity based on the temporal input stream, the spatial input stream, and the pose input stream.
Type: Grant
Filed: June 11, 2021
Date of Patent: October 3, 2023
Assignee: Google LLC
Inventors: Yinxiao Li, Zhichao Lu, Xuehan Xiong, Jonathan Huang
-
Patent number: 11769346
Abstract: Methods and apparatuses for inserting face and hair information from a source video (401) into a destination (driver) video (402) while mimicking pose, illumination, and hair motion of the destination video (402). An apparatus embodiment comprises an identity encoder (404) configured to encode face and hair information of the source video (401) and to produce as an output an identity vector; a pose encoder (405) configured to encode pose information of the destination video (402) and to produce as an output a pose vector; an illumination encoder (406) configured to encode head and hair illumination of the destination video (402) and to produce as an output an illumination vector; and a hair motion encoder (414) configured to encode hair motion of the destination video (402) and to produce as an output a hair motion vector. The identity vector, pose vector, illumination vector, and hair motion vector are fed as inputs to a neural network generator (410).
Type: Grant
Filed: December 22, 2021
Date of Patent: September 26, 2023
Assignee: Spree3d Corporation
Inventors: Mohamed N. Moustafa, Ahmed A. Ewais, Amr A. Ali
-
Patent number: 11769281
Abstract: Vector object transformation techniques are described that support generation of a transformed vector object based on a first vector object and a second vector object. A plurality of paths for a first and second vector object, for instance, are generated. Corresponding paths are determined by detecting which of the plurality of paths from the first vector object correspond to which of the plurality of paths from the second vector object. A mapping of control points between the first and second vector objects is generated. Using the mapping, a transformation of the first vector object is generated by adjusting one or more control points of the first vector object. As a result, the transformed vector object includes visual characteristics based on both the first vector object and the second vector object.
Type: Grant
Filed: February 1, 2022
Date of Patent: September 26, 2023
Assignee: Adobe Inc.
Inventors: Tarun Beri, Matthew David Fisher
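The core of the abstract above is a control-point mapping between two vector objects that drives the transformation. A toy sketch of that step, under the simplifying assumption that corresponding paths have equal length and map by index (the patent's correspondence detection is far more involved):

```python
def map_control_points(path_a, path_b):
    """Pair control points of two corresponding paths by index.
    Index pairing is an assumption for equal-length paths; the patent
    detects correspondences rather than assuming them."""
    assert len(path_a) == len(path_b)
    return list(zip(path_a, path_b))

def transform(path_a, path_b, weight):
    """Move each control point of path_a toward its counterpart in path_b
    by `weight`, yielding a shape blending characteristics of both inputs."""
    mapped = map_control_points(path_a, path_b)
    return [((1 - weight) * ax + weight * bx,
             (1 - weight) * ay + weight * by)
            for (ax, ay), (bx, by) in mapped]
```

Blending a unit square toward a diamond with `weight=0.5` moves each corner halfway to its mapped counterpart.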
-
Patent number: 11722764
Abstract: The present disclosure generally relates to displaying visual effects in image data. In some examples, visual effects include an avatar displayed on a user's face. In some examples, visual effects include stickers applied to image data. In some examples, visual effects include screen effects. In some examples, visual effects are modified based on depth data in the image data.
Type: Grant
Filed: November 12, 2021
Date of Patent: August 8, 2023
Assignee: Apple Inc.
Inventors: Marcel Van Os, Jessica L. Aboukasm, Jean-Francois M. Albouze, David R. Black, Jae Woo Chang, Robert M. Chinn, Gregory L. Dudey, Katherine K. Ernst, Aurelio Guzman, Christopher J. Moulios, Joanna M. Newman, Grant Paul, Nicolas Scapel, William A. Sorrentino, III, Brian E. Walsh, Joseph-Alexander P. Weil, Christopher Wilson
-
Patent number: 11720081
Abstract: A method includes receiving, by a mobile computing device from an electroencephalogram (EEG) monitoring headset, an incoming wireless communication signal including an EEG data stream. The method may further include processing, by an application running on the mobile computing device, the received EEG data stream to determine at least one actionable command for at least one peripheral device. The method may also include transmitting, by the mobile computing device to the at least one peripheral device, at least one outgoing wireless communication signal including the at least one determined actionable command.
Type: Grant
Filed: March 17, 2020
Date of Patent: August 8, 2023
Assignee: DUKE UNIVERSITY
Inventors: Allen Song, Chris Petty
-
Patent number: 11704855
Abstract: Disclosed herein are system, method, and device embodiments for implementing a customizable animation experience. A multi-tenant service may associate an animation element with a visual component of an application, and generate a markup component including an animation parameter configured to customize the animation element within the application code. Further, the multi-tenant service may receive a request for the animation from an animation manager based on execution of the application code, and send the animation information to the animation manager. In some embodiments, the animation manager is configured to set the animation parameter to the animation information and present an animation associated with the animation element based on the animation parameter.
Type: Grant
Filed: January 22, 2020
Date of Patent: July 18, 2023
Assignee: Salesforce, Inc.
Inventors: Pavithra Ramamurthy, Kirupa Chinnathambi
-
Patent number: 11687045
Abstract: Disclosed are platforms for communicating among one or more otherwise independent systems involved in controlling functions of buildings or other sites having switchable optical devices deployed therein. Such independent systems include a window control system and one or more other independent systems such as systems that control residential home products (e.g., thermostats, smoke alarms, etc.), HVAC systems, security systems, lighting control systems, and the like. Together the systems control and/or monitor multiple features and/or products, including switchable windows and other infrastructure of a site, which may be a commercial, residential, or public site.
Type: Grant
Filed: June 22, 2021
Date of Patent: June 27, 2023
Assignee: View, Inc.
Inventors: Dhairya Shrivastava, Stephen Clark Brown, Vijay Mani, Ronald F. Cadet
-
Patent number: 11675418
Abstract: There is provided a program, an information processor, and an information processing method that make it possible to blend motions of a plurality of actors captured by using a motion capture technique and to reproduce the blended motions in real time in an avatar or the like in a virtual space. The program causes a computer to implement a control function of dynamically controlling a motion of an avatar in a virtual space or a robot in a real space, the control function being configured to: capture motions of a plurality of actors in the real space from respective motion sensors attached to the actors; blend the motions of the plurality of actors on the basis of a predetermined algorithm; and dynamically control the motion of the avatar or the robot on the basis of the blend result to cause the avatar or the robot to make a motion reflecting the motions of the plurality of actors.
Type: Grant
Filed: April 15, 2019
Date of Patent: June 13, 2023
Assignee: SONY CORPORATION
Inventors: Yasutaka Fukumoto, Nobuhiro Saijo, Kazuma Takahashi, Keita Mochizuki
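The abstract above leaves the "predetermined algorithm" unspecified; a common baseline for blending captured motions is a per-joint weighted average. The sketch below assumes each actor's pose arrives as a list of (x, y, z) joint positions; a production system would blend joint rotations (e.g., quaternions) instead, so treat this as a stand-in, not the patented algorithm.

```python
def blend_actor_motions(actor_poses, weights):
    """Blend per-joint positions captured from several actors into one
    avatar pose using normalized weights. `actor_poses` is a list of
    poses; each pose is a list of (x, y, z) joint positions."""
    total = sum(weights)
    norm = [w / total for w in weights]
    n_joints = len(actor_poses[0])
    blended = []
    for j in range(n_joints):
        x = sum(w * pose[j][0] for w, pose in zip(norm, actor_poses))
        y = sum(w * pose[j][1] for w, pose in zip(norm, actor_poses))
        z = sum(w * pose[j][2] for w, pose in zip(norm, actor_poses))
        blended.append((x, y, z))
    return blended
```

Equal weights reproduce the midpoint pose; skewed weights pull the avatar toward one actor's motion, which is how a real-time system could fade between performers.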
-
Patent number: 11640235
Abstract: This application provides an additional object display method, an additional object display apparatus, and a computer device, and relates to the field of computer application technologies. The method includes: displaying a trigger control in a video playback interface; pausing playback of a video in response to an activation operation on the trigger control, and displaying a reference picture frame; obtaining a target object in the reference picture frame in response to a drag operation on the trigger control; and displaying, corresponding to the target object, an additional object corresponding to the trigger control in a picture frame of the video during playback of the video, so that the additional object matches the video playback picture during playback of the video.
Type: Grant
Filed: May 12, 2020
Date of Patent: May 2, 2023
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Xianmin Xiao, Zhong Bao Zhang, Hui Jiang, Wen Tao Wang, Peng Xiao, Xiong Zhi Li, Yuanhao Zhang, Feng Lin
-
Patent number: 11641524
Abstract: A method for displaying an image in an electronic device is provided. The method includes obtaining a first image including a plurality of subjects, setting a plurality of sub-regions respectively including the plurality of subjects, obtaining a distance between the plurality of sub-regions, when a distance between a first region and a second region, which are adjacent to each other, among the plurality of sub-regions is greater than or equal to a specified threshold distance, omitting at least a portion of a third region disposed between the first region and the second region in the first image, and displaying a second image obtained by resetting a size of each of the plurality of sub-regions and rearranging each of the plurality of sub-regions.
Type: Grant
Filed: February 5, 2021
Date of Patent: May 2, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventors: Kyuwon Kim, Yusic Kim, Chulsang Chang, Hyungmin Cho, Jaewoong Choi
-
Patent number: 11599706
Abstract: Systems, methods, and non-transitory computer readable media may provide a view of geospatial information. A user's selection of a location may be obtained. Characteristic information describing characteristics of the location may be obtained. Activities information describing activities of the location may be obtained. An interface (e.g., user interface, API) enabling presentation of a geospatial view of the activities of the location with respect to the characteristics of the location may be provided.
Type: Grant
Filed: May 3, 2018
Date of Patent: March 7, 2023
Assignee: Palantir Technologies Inc.
Inventors: Alexander Mark, Andrew Elder, Brandon Marc-Aurele, David Montague, Eric Knudson, Eric Jeney, Jeffrey Bagdis, Daniel O'Malley, Vincent Tilson
-
Patent number: 11587279
Abstract: Examples of systems and methods for augmented facial animation are generally described herein. A method for mapping facial expressions to an alternative avatar expression may include capturing a series of images of a face, and detecting a sequence of facial expressions of the face from the series of images. The method may include determining an alternative avatar expression mapped to the sequence of facial expressions, and animating an avatar using the alternative avatar expression.
Type: Grant
Filed: February 28, 2022
Date of Patent: February 21, 2023
Assignee: INTEL CORPORATION
Inventors: Yikai Fang, Yangzhou Du, Qiang Eric Li, Xiaofeng Tong, Wenlong Li, Minje Park, Myung-Ho Ju, Jihyeon Kate Yi, Tae-Hoon Pete Kim
-
Patent number: 11557391
Abstract: The pose and shape of a human body may be recovered based on joint location information associated with the human body. The joint location information may be derived based on an image of the human body or from an output of a human motion capture system. The recovery of the pose and shape of the human body may be performed by a computer-implemented artificial neural network (ANN) trained to perform the recovery task using training datasets that include paired joint location information and human model parameters. The training of the ANN may be conducted in accordance with multiple constraints designed to improve the accuracy of the recovery and by artificially manipulating the training data so that the ANN can learn to recover the pose and shape of the human body even with partially observed joint locations.
Type: Grant
Filed: August 17, 2020
Date of Patent: January 17, 2023
Assignee: SHANGHAI UNITED IMAGING INTELLIGENCE CO., LTD.
Inventors: Ziyan Wu, Srikrishna Karanam, Changjiang Cai, Georgios Georgakis
-
Patent number: 11544855
Abstract: A target tracking method and apparatus are provided. The target tracking apparatus includes a memory configured to store a neural network, and a processor configured to extract feature information of each of a target included in a target region in a first input image, a background included in the target region, and a searching region in a second input image, using the neural network; obtain similarity information of the target and the searching region and similarity information of the background and the searching region based on the extracted feature information; obtain a score matrix including activated feature values based on the obtained similarity information; and estimate a position of the target in the searching region from the score matrix.
Type: Grant
Filed: December 18, 2020
Date of Patent: January 3, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventors: HyunJeong Lee, Changbeom Park, Hana Lee, Sung Kwang Cho
-
Patent number: 11532102
Abstract: Views of a virtual environment can be displayed on mobile devices in a real-world environment simultaneously for multiple users. The users can operate selection devices in the real-world environment that interact with objects in the virtual environment. Virtual characters and objects can be moved and manipulated using selection shapes. A graphical interface can be instantiated and rendered as part of the virtual environment. Virtual cameras and screens can also be instantiated to create storyboards, backdrops, and animated sequences of the virtual environment. These immersive experiences with the virtual environment can be used to generate content for users and for feature films.
Type: Grant
Filed: January 31, 2022
Date of Patent: December 20, 2022
Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
Inventors: Jose Perez, III, Peter Dollar, Barak Moshe
-
Patent number: 11511191
Abstract: An interactive control system for a game object is provided, including: a collision module, configured to assign a collision attribute to each game object; a moving module, configured to receive a moving instruction directed to a game group; and an interaction module, coupled to the collision module, configured to calculate a repulsive force for each game object and represent an interaction form of the game object based on the repulsive force. When the game group is moved to the moving target, the interaction module makes a group circle with the center of the game group as the center of the circle and a first length as the radius; when a game object is outside the group circle, it controls the collision module to impart a restoring force directed toward the center of the circle to the game object, so as to control the game object to return to the game group.
Type: Grant
Filed: August 5, 2020
Date of Patent: November 29, 2022
Assignee: SHANGHAI LILITH TECHNOLOGY CORPORATION
Inventors: Ganlin Zhuang, Yifan Mao, Huan Jin
-
Patent number: 11497999
Abstract: A method of determining blending coefficients for respective animations includes: obtaining animation data, the animation data defining at least two different animations that are at least in part to be simultaneously applied to the animated object, each animation comprising a plurality of frames; obtaining corresponding video game data, the video game data comprising an in-game state of the object; inputting the animation data and video game data into a machine learning model, the machine learning model being trained to determine, based on the animation data and corresponding video game data, a blending coefficient for each of the animations in the animation data; determining, based on the output of the machine learning model, one or more blending coefficients for at least one of the animations, the or each blending coefficient defining a relative weighting with which each animation is to be applied to the animated object; and blending the simultaneously applied parts of the two animations using the one or more blending coefficients.
Type: Grant
Filed: January 13, 2020
Date of Patent: November 15, 2022
Assignee: Sony Interactive Entertainment Inc.
Inventors: Fabio Cappello, Oliver Hume
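Once the blending coefficients are in hand, the final step of the abstract above is a weighted combination of the simultaneously applied animation frames. A minimal sketch of that step follows; the machine learning model that predicts the coefficients from game state is out of scope here, and the scalar per-animation weights are a simplifying assumption.

```python
def blend_animations(frames_a, frames_b, coeff_a, coeff_b):
    """Combine two animations frame by frame using per-animation blending
    coefficients. Each frame is a flat list of channel values (e.g., joint
    angles); coefficients act as relative weightings, as in the abstract."""
    total = coeff_a + coeff_b
    return [[(coeff_a * va + coeff_b * vb) / total
             for va, vb in zip(fa, fb)]
            for fa, fb in zip(frames_a, frames_b)]
```

A coefficient three times larger pulls the blended frame three-quarters of the way toward that animation's values.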
-
Patent number: 11478707
Abstract: Embodiments relate to generating image frames including a motion of a character with one or more stretchable body parts by either performing only blending of prestored animation clips, or performing both the blending of prestored animation clips and inverse kinematics operations in which one or more bones in the body parts are stretched or contracted. Whether to perform blending or inverse kinematics depends on whether predetermined conditions are satisfied. Prestored animation clips to be blended may be determined according to the speed of the character when performing the jumping motion. When performing the inverse kinematics, physical properties of the character are simulated to determine the trajectory of the character during the jumping.
Type: Grant
Filed: December 8, 2020
Date of Patent: October 25, 2022
Assignee: SQUARE ENIX LTD.
Inventors: Stephen Perez, Noriyuki Imamura, Gary Linn Snethen
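The decision described above, blend prestored clips when they suffice, otherwise fall back to inverse kinematics with stretchable bones, can be sketched as a simple threshold test plus a capped bone-scaling step. The reach-based condition and the maximum stretch factor below are illustrative assumptions; the patent's actual predetermined conditions are not specified in the abstract.

```python
def choose_strategy(target_reach, max_clip_reach):
    """Blend prestored clips when they can cover the target; otherwise
    fall back to IK with stretchable bones (condition is illustrative)."""
    return "blend" if target_reach <= max_clip_reach else "ik_stretch"

def stretch_bone(bone_length, target_distance, max_stretch=1.5):
    """Scale a bone so the limb can reach the target, capped at a
    maximum stretch factor to avoid unnatural deformation."""
    factor = min(target_distance / bone_length, max_stretch)
    return bone_length * factor
```

Targets within clip coverage reuse blended animation data unchanged; targets beyond it trigger the IK path, where bones stretch only up to the cap.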
-
Patent number: 11478704
Abstract: Methods and systems are provided for displaying voice input of a spectator in a video game. The method includes receiving, by a server, the voice input produced by the spectator while viewing video game video of the video game. The method includes examining, by the server, the voice input to identify speech characteristics associated with the voice input of the spectator. The method includes processing, by the server, the voice input to generate a spectator video that includes text images representing the voice input of the spectator. In one embodiment, the text images are configured to be adjusted in visual appearance based on the speech characteristics of the voice input, wherein the text images are directed in a field of view of an avatar of the spectator. The method includes combining, by the server, video game video with the spectator video to produce an overlay of the text images graphically moving in toward a game scene provided by the video game video.
Type: Grant
Filed: November 4, 2020
Date of Patent: October 25, 2022
Assignee: Sony Interactive Entertainment Inc.
Inventor: Masanori Omote
-
Patent number: 11481948
Abstract: The present disclosure discloses a method, device, and storage medium for generating an animation. The method includes: acquiring a configuration file corresponding to a configuration file identifier; determining behavior information and animated resources based on the configuration file; acquiring first animated resources based on first animated resource identifiers in the behavior information; and generating the animation by synthesizing the behavior information and the first animated resources.
Type: Grant
Filed: July 22, 2020
Date of Patent: October 25, 2022
Assignee: BEIJING DAJIA INTERNET INFORMATION TECHNOLOGY CO., LTD.
Inventors: Xuan Liu, Zhenlong Bai, Kaijian Jiang, Chao Wang
-
Patent number: 11475608
Abstract: One aspect of the disclosure is a non-transitory computer-readable storage medium including program instructions. Operations performed by execution of the program instructions include obtaining an input image that depicts a face of a subject, having an initial facial expression and an initial pose; determining a reference shape description based on the input image; determining a target shape description based on the reference shape description, a facial expression difference, and a pose difference; generating a rendered target shape image using the target shape description; and generating an output image based on the input image and the rendered target shape using an image generator, wherein the output image is a simulated image of the subject of the input image that has a final expression that is based on the initial facial expression and the facial expression difference, and a final pose that is based on the initial pose and the pose difference.
Type: Grant
Filed: August 3, 2020
Date of Patent: October 18, 2022
Assignee: Apple Inc.
Inventors: Barry-John Theobald, Nataniel Ruiz Gutierrez, Nicholas E. Apostoloff
-
Patent number: 11388226
Abstract: Systems and methods for guided personal identity based actions are provided. In example embodiments, a user-specified action from a first user device of a first user is received. The user-specified action pertains to the first user and uses data of the first user when performed. The user-specified action is linked to an identifier. An indication of the identifier is received from a second user device of a second user. In response to receiving the indication of the identifier, the user-specified action linked to the identifier is identified, the data of the first user is accessed, a user interface that includes an option to perform the user-specified action using the data of the first user is generated, and the generated user interface is presented on the second user device.
Type: Grant
Filed: May 29, 2018
Date of Patent: July 12, 2022
Assignee: Snap Inc.
Inventors: Landon Anderton, Garrett Gee, Ryan Hornberger, Kirk Ouimet, Kameron Sheffield, Benjamin Turley
-
Patent number: 11257294
Abstract: A cross reality system enables any of multiple types of devices to efficiently and accurately access previously stored maps and render virtual content specified in relation to those maps. The cross reality system may include a cloud-based localization service that responds to requests from devices to localize with respect to a stored map. Devices of any type, with native hardware and software configured for augmented reality operations, may be configured to work with the cross reality system by incorporating components that interface between the native AR framework of the device and the cloud-based localization service. These components may present position information about the device in a format recognized by the localization service. Additionally, these components may filter or otherwise process perception data provided by the native AR framework to increase the accuracy of localization.
Type: Grant
Filed: October 15, 2020
Date of Patent: February 22, 2022
Assignee: Magic Leap, Inc.
Inventors: Xuan Zhao, Ali Shahrokni, Daniel Olshansky, Christian Ivan Robert Moore, Rafael Domingos Torres, Joel David Holder
-
Patent number: 11238619
Abstract: Views of a virtual environment can be displayed on mobile devices in a real-world environment simultaneously for multiple users. The users can operate selection devices in the real-world environment that interact with objects in the virtual environment. Virtual characters and objects can be moved and manipulated using selection shapes. A graphical interface can be instantiated and rendered as part of the virtual environment. Virtual cameras and screens can also be instantiated to create storyboards, backdrops, and animated sequences of the virtual environment. These immersive experiences with the virtual environment can be used to generate content for users and for feature films.
Type: Grant
Filed: March 16, 2020
Date of Patent: February 1, 2022
Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.
Inventors: Jose Perez, III, Peter Dollar, Barak Moshe
-
Patent number: 11216288
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for evaluating interactions with a user interface of an application are disclosed. In one aspect, a method includes, for each of a plurality of different user sessions of a native application, accessing frame bundles that each include data representing content presented by a frame of a user interface of the native application at a given time. Each frame bundle includes at least a portion of a view tree of the native application used to generate the user interface at the given time and data specifying content presented by each view of the portion of the view tree. Based on the frame bundles, playback data are generated that present visual changes of the user interface corresponding to changes to the view trees.
Type: Grant
Filed: December 11, 2019
Date of Patent: January 4, 2022
Assignee: FullStory, Inc.
Inventors: Matthew Mastracci, Joel Grayson Webber, Michael Morrissey, Hollis Bruce Johnson, Jr.
-
Patent number: 11109042
Abstract: Systems and methods for coding a video to be overlaid by annotations are devised. A motion compensated predictive coding is employed, wherein coding parameters of video pixel blocks are determined based on the pixel blocks' relation to the annotations. A decoder decodes the video and annotates it based on metadata, obtained from the coder or other sources, describing the annotations' appearance and rendering mode.
Type: Grant
Filed: May 23, 2019
Date of Patent: August 31, 2021
Assignee: Apple Inc.
Inventors: Sudeng Hu, Xing Wen, Jae Hoon Kim, Peikang Song, Hang Yuan, Dazhong Zhang, Xiaosong Zhou, Hsi-Jung Wu, Christopher Garrido, Ming Jin, Patrick Miauton, Karthick Santhanam
-
Patent number: 11076082
Abstract: An image processing method includes obtaining one or more sets of image data generated by an imaging sensor and one or more sets of positional data generated by one or more positional sensors. An individual set of image data is associated with an image timestamp based on a time at which the individual set of image data was generated. An individual set of positional data is associated with a positional timestamp based on a time at which the individual set of positional data was generated. The method further includes correlating one of the one or more sets of image data with a corresponding one of the one or more sets of positional data based on an image timestamp associated with the one of the one or more sets of image data and a positional timestamp associated with the corresponding one of the one or more sets of positional data.
Type: Grant
Filed: November 19, 2018
Date of Patent: July 27, 2021
Assignee: SZ DJI OSMO TECHNOLOGY CO., LTD.
Inventors: Xianggen Li, Li Zhou
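The correlation step described above, matching each image's timestamp against positional timestamps, can be sketched as a nearest-neighbor lookup. The abstract does not commit to a specific matching rule, so treat the nearest-timestamp policy below as one plausible instance rather than the patented method.

```python
def correlate(image_samples, positional_samples):
    """Pair each image with the positional sample whose timestamp is
    nearest to the image timestamp. Samples are (timestamp, payload)
    tuples; nearest-neighbor matching is an assumption, not the patent's
    only claimed scheme."""
    pairs = []
    for img_ts, img in image_samples:
        nearest = min(positional_samples, key=lambda p: abs(p[0] - img_ts))
        pairs.append((img, nearest[1]))
    return pairs
```

An interpolation-based variant (blending the two bracketing positional samples) would be the natural refinement when positional data arrives at a lower rate than frames.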
-
Patent number: 11036524Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for evaluating interactions with a user interface of an application are disclosed. In one aspect, a method includes, for each of a plurality of different user sessions of a native application, accessing frame bundles that each include data representing content presented by a frame of a user interface of the native application at a given time. Each frame bundle includes at least a portion of a view tree of the native application used to generate the user interface at the given time and data specifying content presented by each view of the portion of the view tree. Based on the frame bundles, playback data are generated that present visual changes of the user interface corresponding to changes to the view trees.Type: GrantFiled: July 17, 2018Date of Patent: June 15, 2021Assignee: FullStory, Inc.Inventors: Matthew Mastracci, Joel Grayson Webber, Michael Morrissey, Hollis Bruce Johnson, Jr.
-
Patent number: 11017577Abstract: The invention comprises a learned model of human body shape and pose-dependent shape variation that is more accurate than previous models and is compatible with existing graphics pipelines. The Skinned Multi-Person Linear model (SMPL) is a skinned, vertex-based model that accurately represents a wide variety of body shapes in natural human poses. The parameters of the model are learned from data including the rest pose template, blend weights, pose-dependent blend shapes, identity-dependent blend shapes, and a regressor from vertices to joint locations. Unlike previous models, the pose-dependent blend shapes are a linear function of the elements of the pose rotation matrices. This simple formulation enables training the entire model from a relatively large number of aligned 3D meshes of different people in different poses. Variants of SMPL using linear or dual quaternion blend skinning are quantitatively evaluated, showing that both are more accurate than a BlendSCAPE model trained on the same data.Type: GrantFiled: August 14, 2019Date of Patent: May 25, 2021Assignee: Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V.Inventors: Michael J. Black, Matthew Loper, Naureen Mahmood, Gerard Pons-Moll, Javier Romero
-
Patent number: 10988024Abstract: A display device is configured to, when displaying an indicator needle as rotating with a relatively small rotation amount per unit time, display an indicator needle image while rotating the indicator needle image, and, when displaying the indicator needle as rotating with a relatively large rotation amount per unit time, display an indicator-needle motion-blurred image that corresponds to a rotation range per unit time of the indicator needle. An indicator-needle inner end angle formed by two sides of the indicator needle image that form an end thereof that faces a center of the rotation is equal to a minimum motion-blurred inner end angle formed by two sides of the indicator-needle motion-blurred image that has the smallest rotation amount per unit time while the indicator needle is displayed as rotating, the two sides forming an end of the indicator-needle motion-blurred image that faces the center of the rotation.Type: GrantFiled: January 25, 2018Date of Patent: April 27, 2021Assignee: YAZAKI CORPORATIONInventors: Kazumasa Shoji, Ryoko Sone, Yousuke Suzuki
-
Patent number: 10957078Abstract: A raster unit is configured to generate different sample patterns for adjacent pixels within a given frame. In addition, the raster unit may adjust the sample patterns between frames. The raster unit includes an index unit that selects a sample pattern table for use with a current frame. For a given pixel, the index unit extracts a sample pattern from the selected sample pattern table. The extracted sample pattern is used to generate coverage information for the pixel. The coverage information for all pixels is then used to generate an image. The resultant image may then be filtered to reduce or remove artifacts induced by the changing of sample locations.Type: GrantFiled: December 4, 2018Date of Patent: March 23, 2021Assignee: NVIDIA CorporationInventors: Yury Y. Uralsky, Jonah M. Alben, Ankan Banerjee, Gregory Massal, Thomas Petersen, Oleg Kuznetsov, Eric B. Lum, Prakshep Mehta
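The per-pixel pattern selection described here can be sketched in a few lines: a table of sample patterns is chosen per frame, and each pixel indexes into it by its position within a small tile, so adjacent pixels get different patterns and the patterns rotate between frames. This is a hypothetical illustration of the general idea; the table contents, tile size, and function names are invented, not taken from the patent.

```python
# Hypothetical sketch: per-pixel sample patterns that differ between
# adjacent pixels and rotate across frames.
def sample_pattern(tables, frame, px, py, tile=2):
    """Pick a sample pattern for pixel (px, py) in a given frame.

    `tables` is a list of per-frame pattern tables; each table maps a
    tile-local pixel position to a list of sub-pixel sample offsets.
    """
    table = tables[frame % len(tables)]    # rotate tables across frames
    return table[(px % tile, py % tile)]   # adjacent pixels differ
```

Because sample locations change frame to frame, a temporal filter over the resulting images is what reduces the induced artifacts, as the abstract notes.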
-
Patent number: 10867416Abstract: Methods and systems are provided for generating harmonized images for input composite images. A neural network system can be trained, where the training includes training a neural network that generates harmonized images for input composite images. This training is performed based on a comparison of a training harmonized image and a reference image, where the reference image is modified to generate a training input composite image used to generate the training harmonized image. In addition, a mask of a region can be input to limit the area of the input image that is to be modified. Such a trained neural network system can be used to input a composite image and mask pair for which the trained system will output a harmonized image.Type: GrantFiled: March 10, 2017Date of Patent: December 15, 2020Assignee: ADOBE INC.Inventors: Xiaohui Shen, Zhe Lin, Yi-Hsuan Tsai, Xin Lu, Kalyan K. Sunkavalli
-
Patent number: 10845188Abstract: Methods and apparatus for capturing motion from a self-tracking device are disclosed. In embodiments, a device self-tracks motion of the device relative to a first reference frame while recording motion of a subject relative to a second reference frame, the second reference frame being a reference frame relative to the device. In the embodiments, the subject may be a real object or, alternately, the subject may be a virtual subject and a motion of the virtual object may be recorded relative to the second reference frame by associating a position offset relative to the device with the position of the virtual object in the recorded motion. The motion of the subject relative to the first reference frame may be determined from the tracked motion of the device relative to the first frame and the recorded motion of the subject relative to the second reference frame.Type: GrantFiled: January 5, 2016Date of Patent: November 24, 2020Assignee: Microsoft Technology Licensing, LLCInventors: John Weiss, Vivek Pradeep, Xiaoyan Hu
-
Patent number: 10839614Abstract: Systems and methods to rapidly create, view, and modify three-dimensional experiences may include a two-dimensional content editing device and application and a three-dimensional experience viewing device and application. Using the two-dimensional content editing application, two-dimensional objects may be created, and properties of the two-dimensional objects may be defined. Using the three-dimensional experience viewing application, the two-dimensional objects may be rendered within a three-dimensional environment based on the defined properties. In this manner, three-dimensional experiences may be quickly created, viewed, modified, reviewed, and published without the need for specialized tools, training, or experience in three-dimensional modeling or programming.Type: GrantFiled: June 26, 2018Date of Patent: November 17, 2020Assignee: Amazon Technologies, Inc.Inventors: Lane Daughtry, David Robert Cole, Jason Andrew Brightman
-
Patent number: 10825258Abstract: In one embodiment, a method includes by a computing device, displaying a user interface for designing augmented-reality effects. The method includes receiving user input through the user interface. The method includes displaying a graph generated based on the user input. The graph may include multiple nodes and one or more edges. The nodes may include a detector node and a filter node connected by one or more edges. The method includes detecting, in accordance with an object type specified by the detector node, one or more object instances of the object type appearing in a scene. The method includes selecting, in accordance with at least one criterion specified by the filter node, at least one of the one or more detected object instances that satisfies the criterion. The method includes rendering an augmented-reality effect based on at least the selected object instance.Type: GrantFiled: September 28, 2018Date of Patent: November 3, 2020Assignee: Facebook, Inc.Inventors: Stef Marc Smet, Thomas Paul Mann, Michael Slater, Hannes Luc Herman Verlinde
-
Patent number: 10796691Abstract: Systems, devices, and methods are described herein for providing a graphical user interface for configuring presentations of content and controlling distribution of content, for example, in conjunction with a management system.Type: GrantFiled: June 2, 2016Date of Patent: October 6, 2020Assignee: Sinclair Broadcast Group, Inc.Inventors: Benjamin Aaron Miller, Jason D. Justman, Lora Clark Bouchard, Michael Ellery Bouchard, Kevin James Cotlove, Mathew Keith Gitchell, Stacia Lynn Haisch, Jonathan David Kersten, Todd Christopher Tibbetts
-
Patent number: 10785489Abstract: The present teaching relates to method, system, medium, and implementations for rendering a moving object. An object data package related to a moving object appearing in a monitored scene with respect to a first time instance is first received, and features characterizing the moving object at the first time instance are extracted from the package; these features are estimated at a monitoring rate and include a current position of the object and a current motion vector at the first time instance. Information associated with a previously rendered object at a previously rendered position at a previous time instance is retrieved, and a next rendering position of the object is determined based on the current position, the current motion vector, and a rendering rate lower than the monitoring rate. The object is rendered at the next rendering position based on a motion vector and the information associated with the previously rendered object.Type: GrantFiled: February 15, 2019Date of Patent: September 22, 2020Assignee: DMAI, INC.Inventor: Jeremy Nelson
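The core extrapolation step here, determining where to render an object when the rendering rate is lower than the monitoring rate, can be sketched as simple dead reckoning along the most recent motion vector. This is a hedged, minimal illustration; the function and parameter names are invented, and the patent's full scheme involves more than this single step.

```python
# Hedged sketch: extrapolate a moving object's next rendering position
# from its current position and motion vector when rendering more slowly
# than the monitoring rate.
def next_render_position(position, velocity, render_interval):
    """Extrapolate position forward by one rendering interval.

    position, velocity: (x, y) in scene units and units/second.
    render_interval: seconds between rendered frames (1 / rendering rate).
    """
    return (position[0] + velocity[0] * render_interval,
            position[1] + velocity[1] * render_interval)
```

With a monitoring rate of, say, 30 Hz and a rendering rate of 10 Hz, `render_interval` would be 0.1 s, so each rendered frame jumps the object ahead by three monitored steps' worth of motion.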
-
Patent number: 10776532Abstract: A system and method for solving linear complementarity problems for rigid body simulation is disclosed. The method includes determining a plurality of modified effective masses for a plurality of contacts between a plurality of bodies, wherein each modified effective mass term is based on a corresponding number of contacts. A plurality of relative velocities is determined based on the plurality of body velocities determined from a last iteration. A plurality of impulse corrections is determined based on the plurality of modified effective masses and the plurality of relative velocities. A plurality of updated impulses is determined based on the impulse corrections. The plurality of updated impulses is applied to the plurality of bodies based on a plurality of original masses of the bodies, body velocities determined from the last iteration, to determine a plurality of updated velocities of the plurality of bodies.Type: GrantFiled: February 22, 2013Date of Patent: September 15, 2020Assignee: NVIDIA CORPORATIONInventors: Richard Tonge, Feodor Benevolenski, Andrey Voroshilov
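The iteration structure described in this abstract (compute relative velocity from the last iteration's body velocities, derive an impulse correction via an effective mass, clamp, and apply) follows the general sequential-impulse pattern. Below is a textbook single-contact, frictionless 1-D sketch of that pattern only; it does not reproduce the patent's modified-effective-mass scheme, and all names are invented.

```python
# Textbook sequential-impulse sketch for one frictionless 1-D contact
# (not the patented modified-effective-mass method).
def solve_contact(v_a, v_b, inv_m_a, inv_m_b, accumulated):
    """One impulse iteration; returns (new_v_a, new_v_b, new_accumulated).

    The contact normal points from body a toward body b, so a negative
    relative velocity means the bodies are approaching.
    """
    rel_v = v_b - v_a
    eff_mass = 1.0 / (inv_m_a + inv_m_b)           # effective (reduced) mass
    correction = -rel_v * eff_mass                 # impulse to cancel approach
    new_acc = max(accumulated + correction, 0.0)   # contacts push, never pull
    applied = new_acc - accumulated                # clamped delta actually applied
    return v_a - applied * inv_m_a, v_b + applied * inv_m_b, new_acc
```

Clamping the accumulated impulse, rather than each per-iteration correction, is what lets the solver recover from over-corrections across iterations.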
-
Patent number: 10762639Abstract: Current embodiments provided herein include methods for visualizing repetitive movements which use video image files acquired with an appropriate frame rate, which is based on the period of repetition and minimal exposure, to reorganize the presentation of the frames to freeze the motion of the object in motion at any point in the cycle of repetition or to display the isolated frequency of repetition or a video of the amplified motion to enable the detailed visual inspection of an object in motion, and without having to stop the motion.Type: GrantFiled: January 21, 2020Date of Patent: September 1, 2020Assignee: RDI TECHNOLOGIES, INC.Inventors: Jeffrey R. Hay, Mark William Slemp, Kenneth Ralph Piety
-
Patent number: 10742947Abstract: The labor of photographing images of an object can be reduced in a case where the images of the object viewed from a viewpoint moving along an orbit are displayed. Image obtaining means of a display control system obtains a plurality of image data pieces generated by photographing an object from a plurality of photographing positions that are different from one another in photographing directions. Information obtaining means obtains photograph information relating to the photographing positions of the respective image data pieces in a three-dimensional space based on the image data pieces. Image selecting means selects some of the image data pieces based on the photograph information of the respective image data pieces and orbit information relating to an orbit of a viewpoint that moves while changing a viewing direction in the three-dimensional space. Display control means displays, on display means, the image data pieces selected by the image selecting means in an order according to the orbit.Type: GrantFiled: March 30, 2015Date of Patent: August 11, 2020Assignee: RAKUTEN, INC.Inventors: Adiyan Mujibiya, Shogo Yamashita
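The selection step described here can be illustrated with a small sketch: for each viewpoint along the orbit, pick the photograph whose photographing position is closest, then display the picks in orbit order. This is a hypothetical nearest-position heuristic for illustration only; the patent's selection criteria and all names below are not taken from it.

```python
# Hypothetical sketch: choose, for each orbit viewpoint, the photo taken
# from the nearest photographing position, in orbit order.
import math

def select_images(photo_positions, orbit_points):
    """Return one photo index per orbit point, ordered along the orbit.

    photo_positions, orbit_points: lists of (x, y) coordinates.
    """
    return [min(range(len(photo_positions)),
                key=lambda i: math.dist(photo_positions[i], p))
            for p in orbit_points]
```

Selecting only the photos nearest the orbit keeps the displayed sequence smooth without requiring a photograph at every viewpoint, which is the labor saving the abstract refers to.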
-
Patent number: 10735798Abstract: A method and system for distributing video content across a distributed network is described. The system comprises a first device having video data provided thereon. A first application is operable on the first device and is configured for associating control data with the video data, wherein the control data contains information for creating auxiliary data which is to be presented with the video data subsequent to the video data being broadcast to one or more second devices across the network. A control centre is in communication with the first application for receiving the video data and the associated control data from the first device. The control centre is operable to broadcast the video data and the associated control data to one or more second devices. A media player is provided on the respective second devices which is operable in response to reading the control data to create the auxiliary data on the respective second device.Type: GrantFiled: April 12, 2019Date of Patent: August 4, 2020Assignee: HELEN BRADLEY LENNONInventors: Helen Bradley Lennon, Damien Purcell
-
Patent number: 10701253Abstract: Embodiments of the disclosure provide systems and methods for motion capture to generate content (e.g., motion pictures, television programming, videos, etc.). An actor or other performing being can have multiple markers on his or her face that are essentially invisible to the human eye, but that can be clearly captured by camera systems of the present disclosure. Embodiments can capture the performance using two different camera systems, each of which can observe the same performance but capture different images of that performance. For instance, a first camera system can capture the performance within a first light wavelength spectrum (e.g., visible light spectrum), and a second camera system can simultaneously capture the performance in a second light wavelength spectrum different from the first spectrum (e.g., invisible light spectrum such as the IR light spectrum). The images captured by the first and second camera systems can be combined to generate content.Type: GrantFiled: August 13, 2018Date of Patent: June 30, 2020Assignee: LUCASFILM ENTERTAINMENT COMPANY LTD.Inventors: John Knoll, Leandro Estebecorena, Stephane Grabli, Per Karefelt, Pablo Helman, John M. Levin
-
Patent number: 10617474Abstract: In accordance with one or more embodiments herein, a system for optimizing a position of an implant having an individually customized implant hat and at least one implant protrusion extending from the implant hat in the direction of an implant axis in an anatomical joint of a patient is provided.Type: GrantFiled: December 22, 2017Date of Patent: April 14, 2020Assignee: Episurf IP-Management ABInventors: Anders Karlsson, Richard LilliestrĂ¥le