Animation Patents (Class 345/473)
  • Patent number: 10997722
    Abstract: A method for identifying a body motion includes receiving a series of images including a visual presentation of a human face from an image capture device. The series of images may form an image sequence. Each of the series of images may have a previous image or a next image in the image sequence. The method also includes, for each of the series of images, determining a plurality of characteristic points on the human face, determining positions of the plurality of characteristic points on the human face, and determining an asymmetry value based on the positions of the plurality of characteristic points. The method further includes identifying a head-shaking movement of the human face based on the asymmetry values of the series of images.
    Type: Grant
    Filed: August 3, 2020
    Date of Patent: May 4, 2021
    Assignee: BEIJING DIDI INFINITY TECHNOLOGY AND DEVELOPMENT CO., LTD.
    Inventor: Xiubao Zhang
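The per-frame asymmetry value described above could be sketched as follows. This is a minimal illustration, not the patent's actual formulation: the landmark names, the mirrored-pair scheme measured against the nose tip, and the oscillation threshold are all assumptions.

```python
# Hedged sketch: an asymmetry value from mirrored facial landmarks,
# and head-shake detection as an oscillation of that value over frames.

def asymmetry_value(landmarks):
    """Sum of horizontal-offset differences between mirrored point pairs,
    measured relative to the nose tip (acting as the symmetry axis)."""
    nose_x = landmarks["nose_tip"][0]
    pairs = [("left_eye", "right_eye"), ("left_mouth", "right_mouth")]
    total = 0.0
    for left, right in pairs:
        d_left = nose_x - landmarks[left][0]    # distance to the left point
        d_right = landmarks[right][0] - nose_x  # distance to the right point
        total += d_left - d_right               # 0 for a frontal face
    return total

def is_head_shake(asymmetry_series, threshold=5.0, min_swings=2):
    """A head shake shows up as the asymmetry value repeatedly swinging
    past +/- threshold as the face turns left and right."""
    swings, prev_sign = 0, 0
    for a in asymmetry_series:
        if abs(a) < threshold:
            continue
        sign = 1 if a > 0 else -1
        if prev_sign and sign != prev_sign:
            swings += 1
        prev_sign = sign
    return swings >= min_swings

frontal = {"nose_tip": (50, 40), "left_eye": (30, 30),
           "right_eye": (70, 30), "left_mouth": (40, 60),
           "right_mouth": (60, 60)}
print(asymmetry_value(frontal))                  # symmetric face -> 0.0
print(is_head_shake([0, 8, 9, -7, -9, 6, 1]))    # oscillating series -> True
```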
  • Patent number: 10999229
    Abstract: One or more embodiments described herein include methods and systems of providing message status notifications. The status notifications can comprise one or more of sent, delivered, or accessed/read notifications. In one or more embodiments, a status notification is persistently displayed in a thread for each participant in a conversation. Each time the participant accesses a new message, the system can move the status notification adjacent to the new message.
    Type: Grant
    Filed: July 22, 2019
    Date of Patent: May 4, 2021
    Assignee: FACEBOOK, INC.
    Inventor: Benjamin S Langholz
  • Patent number: 10997764
    Abstract: Embodiments of the present disclosure provide a method and apparatus for generating an animation. A method may include: extracting an audio feature from a target speech segment by segment, and aggregating the features into an audio feature sequence composed of the audio feature of each speech segment; inputting the audio feature sequence into a pre-trained mouth-shape information prediction model, to obtain a mouth-shape information sequence corresponding to the audio feature sequence; generating, for mouth-shape information in the mouth-shape information sequence, a face image including a mouth-shape object indicated by the mouth-shape information; and using the generated face image as a key frame of a facial animation, to generate the facial animation.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: May 4, 2021
    Assignee: Beijing Baidu Netcom Science and Technology Co., Ltd.
    Inventors: Jianxiang Wang, Fuqiang Lyu, Xiao Liu, Jianchao Ji
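The segment-then-predict-then-keyframe pipeline above could be sketched in outline. Only the pipeline shape follows the abstract; the RMS "feature" and the energy-to-mouth-openness rule are toy stand-ins for the pre-trained mouth-shape prediction model.

```python
# Hedged sketch of the pipeline shape only: segment the audio, build a
# feature sequence, run a predictor, and emit one keyframe per prediction.

def segment(samples, size):
    return [samples[i:i + size] for i in range(0, len(samples), size)]

def rms(seg):
    """Root-mean-square energy of one speech segment."""
    return (sum(s * s for s in seg) / len(seg)) ** 0.5

def predict_mouth_shapes(feature_seq):
    # Toy stand-in for the trained model: louder segment -> open mouth.
    return ["open" if f > 0.5 else "closed" for f in feature_seq]

def make_keyframes(samples, seg_size, fps=25):
    feature_seq = [rms(s) for s in segment(samples, seg_size)]
    shapes = predict_mouth_shapes(feature_seq)
    return [{"time": i / fps, "mouth": m} for i, m in enumerate(shapes)]

speech = [0.9, -0.9, 0.8, -0.8,   # loud segment
          0.1, -0.1, 0.1, -0.1]   # quiet segment
keys = make_keyframes(speech, seg_size=4)
print(keys)
```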
  • Patent number: 10996834
    Abstract: Embodiments of the present invention provide a displaying method. The method includes the steps of: displaying an element at a first position on a touchscreen, obtaining touch information, determining the arrangement instruction that is obtained the greatest number of times within a predetermined time according to the touch information, and displaying the element at a second position on the touchscreen according to the arrangement instruction.
    Type: Grant
    Filed: December 5, 2016
    Date of Patent: May 4, 2021
    Assignee: HUAWEI DEVICE CO., LTD.
    Inventors: Fang Lan, Gang Wu, Jie Xu
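The "instruction obtained the greatest number of times within a predetermined time" step could be sketched as a frequency count over a time window. The instruction names and the (timestamp, instruction) event shape are illustrative assumptions, not taken from the patent.

```python
# Hedged sketch: pick the arrangement instruction that occurs most often
# among touch-derived events inside the predetermined time window.
from collections import Counter

def most_frequent_instruction(events, window_end, window):
    """events: (timestamp, instruction) pairs derived from touch input."""
    in_window = [ins for t, ins in events
                 if window_end - window <= t <= window_end]
    if not in_window:
        return None
    return Counter(in_window).most_common(1)[0][0]

touches = [(0.1, "move_left"), (0.2, "move_right"), (0.3, "move_right"),
           (0.9, "move_right"), (5.0, "move_left")]  # last one is stale
print(most_frequent_instruction(touches, window_end=1.0, window=1.0))
```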
  • Patent number: 10997766
    Abstract: An avatar motion generating method and a head-mounted display system are provided. In the method, an input event is received, and the input event is related to a sensing result of a user. A first avatar motion is generated based on one of predefined motion data, motion sensing data, or a combination thereof during a first period of time. A second avatar motion is generated based on another of the predefined motion data, the motion sensing data, or the combination thereof during a second period of time. Accordingly, the motion of the avatar can be smooth and natural.
    Type: Grant
    Filed: November 6, 2019
    Date of Patent: May 4, 2021
    Assignee: XRSPACE CO., LTD.
    Inventors: Wei-Zhe Hong, Pei-Wen Hsieh
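Switching a joint between predefined motion data and sensed motion data, with the "combination thereof" acting as a cross-fade, could look like the following. The linear blend weighting is an assumption; the patent does not specify how the combination is formed.

```python
# Hedged sketch: cross-fading an avatar joint pose between predefined
# motion data and live motion-sensing data so the transition stays smooth.

def blend(predefined, sensed, w):
    """w = 0 -> purely predefined pose, w = 1 -> purely sensed pose."""
    return [(1 - w) * p + w * s for p, s in zip(predefined, sensed)]

idle_pose = [0.0, 10.0, 20.0]     # predefined joint angles (degrees)
sensed_pose = [4.0, 14.0, 28.0]   # angles from the motion sensors

print(blend(idle_pose, sensed_pose, 0.0))   # first period: predefined only
print(blend(idle_pose, sensed_pose, 0.5))   # cross-fade midpoint
```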
  • Patent number: 10997770
    Abstract: Techniques are described for automating animation of fonts. In certain embodiments, segments of a glyph that symbolizes a font character are accessed. Sub-segments are then generated for the glyph by applying an automated segmenting function to the segments. Glyph points are then determined for the glyph based on the generated sub-segments of the glyph. For a glyph point in the glyph points, positions for the glyph point are computed at time points by, for each time point of the time points, applying an effect function to the glyph point. Keyframes are generated corresponding to the time points, wherein each keyframe in the keyframes corresponds to a respective time point in the time points and includes an animation effect generated for the glyph based on respective positions computed for the glyph points at the time point. A font animation is provided based on the keyframes.
    Type: Grant
    Filed: May 5, 2020
    Date of Patent: May 4, 2021
    Assignee: ADOBE INC.
    Inventor: Nirmal Kumawat
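The keyframe-generation step above — apply an effect function to each glyph point at each time point, then collect the results per time point — could be sketched as follows. The sine-wave effect is illustrative only; the patent covers effect functions generally, and the sampled glyph points are made up.

```python
# Hedged sketch: per-time-point application of an effect function to glyph
# points, producing one keyframe of displaced points per time point.
import math

def wave_effect(point, t):
    """Displace a glyph point vertically by a phase-shifted sine wave."""
    x, y = point
    return (x, y + 5.0 * math.sin(2 * math.pi * t + x * 0.1))

def build_keyframes(glyph_points, time_points, effect=wave_effect):
    return [{"t": t, "points": [effect(p, t) for p in glyph_points]}
            for t in time_points]

glyph = [(0.0, 0.0), (10.0, 0.0), (20.0, 0.0)]   # sampled outline points
frames = build_keyframes(glyph, [0.0, 0.25, 0.5])
print(len(frames), len(frames[0]["points"]))
```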
  • Patent number: 10984573
    Abstract: A non-transitory computer readable storage medium storing computer program code that, when executed by a processing device, cause the processing device to perform operations comprising: determining a first representative point, wherein the first representative point represents a first geometric primitive; determining a second representative point, wherein the second representative point represents a second geometric primitive; determining an initial distance between the first representative point and the second representative point; calculating a first displacement based on a velocity of the first representative point; calculating a second displacement based on a velocity of the second representative point; determining a separating direction between the first representative point and the second representative point; projecting the first displacement along the separating direction; projecting the second displacement along the separating direction; calculating a predicted minimum distance between the first repr
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: April 20, 2021
    Assignee: ELECTRONIC ARTS INC.
    Inventor: Christopher Charles Lewin
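The distance-prediction operations listed in the abstract can be traced in a small sketch: each primitive is reduced to a representative point, each point's displacement (velocity times a timestep) is projected onto the separating direction, and the projections adjust the current distance. The vector representation and the timestep are assumptions.

```python
# Hedged sketch: predicted minimum distance between two representative
# points from their displacements projected along the separating direction.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(v):
    m = dot(v, v) ** 0.5
    return [x / m for x in v]

def predicted_min_distance(p1, v1, p2, v2, dt):
    sep = norm([b - a for a, b in zip(p1, p2)])  # separating direction 1 -> 2
    dist = dot([b - a for a, b in zip(p1, p2)], sep)
    d1 = dot([v * dt for v in v1], sep)          # projected displacement of p1
    d2 = dot([v * dt for v in v2], sep)          # projected displacement of p2
    return dist + d2 - d1                        # closing motion reduces it

# Two points 10 units apart on the x-axis, moving toward each other.
print(predicted_min_distance([0, 0, 0], [2, 0, 0],
                             [10, 0, 0], [-3, 0, 0], dt=1.0))
```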
  • Patent number: 10984578
    Abstract: A rail manipulator indicates the possible range(s) of movement of a part of a computer-generated character in a computer animation system. The rail manipulator obtains a model of the computer-generated character. The model may be a skeleton structure of bones connected at joints. The interconnected bones may constrain the movements of one another. When an artist selects one of the bones for movement, the rail manipulator determines the range of movement of the selected bone. The determination may be based on the position and/or the ranges of movement of other bones in the skeleton structure. The range of movement is displayed on-screen to the artist, together with the computer-generated character. In this way, the rail manipulator directly communicates to the artist the degree to which a portion of the computer-generated character can be moved, in response to the artist's selection of the portion of the computer-generated character.
    Type: Grant
    Filed: September 28, 2017
    Date of Patent: April 20, 2021
    Assignee: DreamWorks Animation L.L.C.
    Inventor: Alexander P. Powell
  • Patent number: 10984589
    Abstract: Systems and methods relate to encoded video streams including geometric-data streams transmitted to a receiver for rendering of a viewpoint-adaptive 3D persona. A method includes obtaining a three-dimensional (3D) mesh of a subject generated from depth-camera-captured information about the subject, obtaining a facial-mesh model, locating a facial portion of the obtained 3D mesh of the subject, computing a geometric transform based on the facial portion and the facial-mesh model, the geometric transform determined in response to one or more aggregated error differences between a plurality of feature points on the facial-mesh model and a plurality of corresponding feature points on the facial portion of the obtained 3D mesh, generating a transformed facial-mesh model using the geometric transform and generating a hybrid mesh of the subject at least in part by combining the transformed facial-mesh model and at least a portion of the obtained 3D mesh.
    Type: Grant
    Filed: September 16, 2019
    Date of Patent: April 20, 2021
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Simion Venshtain, Po-Han Huang
  • Patent number: 10984574
    Abstract: The present disclosure relates to an AR animation generation system that identifies an animation profile for animating a virtual object displayed in an augmented reality (AR) scene. The AR animation generation system creates a link between the virtual object and the mobile computing system based upon a position of the virtual object within the AR scene and a position of a mobile device in a real-world environment. The link enables determining, for each position of the mobile device in the real-world environment, a corresponding position for the virtual object in the AR scene.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: April 20, 2021
    Assignee: Adobe Inc.
    Inventors: Yaniv De Ridder, Stefano Corazza, Lee Brimelow, Erwan Maigret, David Montero
  • Patent number: 10986312
    Abstract: An information processing apparatus according to an embodiment of the present technology includes a generation unit and a first transmission unit. The generation unit generates parameter information that shows a state of a user. The first transmission unit transmits the generated parameter information through a network to an information processing apparatus of a communication partner capable of generating an image that reflects the state of the user on the basis of the parameter information.
    Type: Grant
    Filed: November 15, 2018
    Date of Patent: April 20, 2021
    Assignee: Sony Corporation
    Inventor: Hiromitsu Fujii
  • Patent number: 10981078
    Abstract: Disclosed are methods that create the perception for an audience that an actor is being transported from one location to another during a stage performance. Also disclosed are methods for entrance onto/exit from a stage platform by an actor, or entrance onto/exit from a stage backdrop by a precision image that represents an actor. The disclosed methods involve interaction of an actor with an object/prop or image of an object/prop, and/or involve interaction of an image of an actor with an object/prop or image of an object/prop during a stage performance.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: April 20, 2021
    Inventor: Hongzhi Li
  • Patent number: 10977769
    Abstract: A processor displays an avatar moving from a starting coordinate to first movement-destination coordinates in accordance with each instruction in a first instruction set. The processor records the coordinates after movement by the instructions in the first instruction set, as a first group, in accordance with the instructions included in the first instruction set. The processor returns the avatar to the coordinates it occupied before starting movement and displays the avatar. The processor displays the avatar moving from those coordinates to second movement-destination coordinates in accordance with each instruction in a second instruction set. The processor records the coordinates after movement by the instructions in the second instruction set.
    Type: Grant
    Filed: September 4, 2018
    Date of Patent: April 13, 2021
    Assignee: CASIO COMPUTER CO., LTD.
    Inventor: Miki Suzuki
  • Patent number: 10979760
    Abstract: A method causes a display device to simultaneously display at least the following: a video depicting an item in a scene and a tag in a first position on the video. The tag is associated with the item depicted in the video. The tag includes text information associated with the item depicted in the video. The method also causes the tag to undergo motion relative to at least a portion of the video scene from the first position to a second position different from the first position, while causing the display device to display the video and the tag on the video.
    Type: Grant
    Filed: February 24, 2020
    Date of Patent: April 13, 2021
    Assignee: Gula Consulting Limited Liability Company
    Inventor: Charles J. Kulas
  • Patent number: 10977873
    Abstract: An electronic device is provided. The electronic device includes a camera, a display, and a processor configured to obtain a first image including one or more external objects by using the camera, output, using the display, a three-dimensional (3D) object generated based on attributes related to a face among the one or more external objects, receive a selection of at least one graphic attribute from a plurality of graphic attributes which can be applied to the 3D object, generate a 3D avatar for the face based on the at least one graphic attribute, and generate a second image including at least one object reflecting a predetermined facial expression or motion using the 3D avatar.
    Type: Grant
    Filed: February 21, 2019
    Date of Patent: April 13, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Wooyong Lee, Yonggyoo Kim, Byunghyun Min, Dongil Son, Chanhee Yoon, Kihuk Lee, Cheolho Cheong
  • Patent number: 10979751
    Abstract: A communication management apparatus includes a receiver to receive image type information indicating a type of an image from a first communication terminal; circuitry to generate image data identification information for identifying image data to be transmitted from the first communication terminal to a second communication terminal, based on reception of the image type information; and a transmitter to transmit the image data identification information that is generated to the first communication terminal, and transmit the image data identification information that is generated and the image type information that is received to the second communication terminal.
    Type: Grant
    Filed: February 25, 2019
    Date of Patent: April 13, 2021
    Assignee: Ricoh Company, Ltd.
    Inventors: Kenichiro Morita, Kumiko Yoshida, Yoichiro Matsuno, Takuya Imai, Shoh Nagamine, Junpei Mikami
  • Patent number: 10972984
    Abstract: Systems, methods, devices, computer readable media, and other various embodiments are described for location management processes in wearable electronic devices. Performance of such devices is improved with reduced time to first fix of location operations in conjunction with low-power operations. In one embodiment, low-power circuitry manages high-speed circuitry and location circuitry to provide location assistance data from the high-speed circuitry to the low-power circuitry automatically on initiation of location fix operations as the high-speed circuitry and location circuitry are booted from low-power states. In some embodiments, the high-speed circuitry is returned to a low-power state prior to completion of a location fix and after capture of content associated with initiation of the location fix. In some embodiments, high-speed circuitry is booted after completion of a location fix to update location data associated with content.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: April 6, 2021
    Assignee: Snap Inc.
    Inventors: Yu Jiang Tham, John James Robertson, Gerald Nilles, Jason Heger, Praveen Babu Vadivelu
  • Patent number: 10970901
    Abstract: A single-photo generating device is provided. The single-photo generating device includes an image capturing device and a processing device. The image capturing device generates a first image, wherein the first image includes a plurality of people. The processing device is coupled to the image capturing device and obtains the first image from the image capturing device. The processing device extracts each human image corresponding to the plurality of people from the first image and selects a background image, and the processing device generates a plurality of single photos corresponding to each human image according to the extracted human images and the background image.
    Type: Grant
    Filed: March 20, 2019
    Date of Patent: April 6, 2021
    Assignee: WISTRON CORP.
    Inventors: Cheng Yan Xu, Qi Cao
  • Patent number: 10970867
    Abstract: Techniques are described herein that overcome the limitations of conventional techniques by bridging a gap between user interaction with digital content using a computing device and a user's physical environment through use of augmented reality content. In one example, user interaction with augmented reality digital content as part of a live stream of digital images of a user's environment is used to specify a size of an area that is used to filter search results to find a “best fit”. In another example, a geometric shape is used to represent a size and shape of an object included in a digital image (e.g., a two-dimensional digital image). The geometric shape is displayed as augmented reality digital content as part of a live stream of digital images to “assess fit” of the object in the user's physical environment.
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: April 6, 2021
    Assignee: eBay Inc.
    Inventors: Preeti Patil Anadure, Mukul Arora, Ashwin Ganesh Krishnamurthy
  • Patent number: 10963648
    Abstract: A co-user list may be configured based on user interaction in a virtual world environment. A first user may be enabled to navigate the virtual world environment using an instant messenger application that includes the co-user list. A second user that is located proximate to the first user in the virtual world environment may be detected. An attribute associated with the second user may be determined. The co-user list may be configured based on the attribute associated with the second user.
    Type: Grant
    Filed: August 18, 2014
    Date of Patent: March 30, 2021
    Assignee: Verizon Media Inc.
    Inventor: David S. Bill
  • Patent number: 10964228
    Abstract: There is disclosed an educational system including a science experimental set and a computer system, the science experimental set comprising experimental set items, and the computer system including a processor, a detector and a display, the computer system configured to display educational media content on the display relating to the science experimental set in response to the detector detecting an item in the science experimental set, and the processor identifying the media content to be displayed based on the detection of the item. Related methods, computer program products and kits of parts are disclosed.
    Type: Grant
    Filed: March 9, 2016
    Date of Patent: March 30, 2021
    Assignee: MEL SCIENCE LIMITED
    Inventors: Vassili Philippov, Artem Messorosh, Sergey Safonov, Mikhail Perepelkin, Konstantin Gurianov
  • Patent number: 10964081
    Abstract: A user interface for animating digital artwork includes a two-part control to change scale, rotation, and/or shear. A stationary portion is manipulated by the user while a moveable portion moves during manipulation to reflect a deformation position resulting from the control. For example, a system may store an artwork having a tessellated mesh bounded by an alpha edge and a bend handle associated with at least a first vertex of the mesh. The system also includes a user interface that implements a control for the bend handle. The control includes a stationary portion enabling the user to select a control value for the bend handle and a moveable portion that moves, during a manipulation event of the stationary portion, to a deformed position determined from the control value and from a position of at least one other handle associated with at least a second vertex of the vertices.
    Type: Grant
    Filed: April 20, 2020
    Date of Patent: March 30, 2021
    Assignee: ADOBE INC.
    Inventor: William Amir Stone
  • Patent number: 10965932
    Abstract: An embodiment of an image processor for immersive video includes technology to re-order patches from a plurality of views based on one or more of relative position and orientation related information for a desired synthesized view, select a set of views to be used in each view synthesis pass, perform two or more view synthesis passes for the synthesized view to provide two or more intermediate view synthesis results, and mask and merge the two or more intermediate view synthesis results to provide a final view synthesis result. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: March 30, 2021
    Assignee: Intel Corporation
    Inventors: Basel Salahieh, Sumit Bhatia, Jill Boyce
  • Patent number: 10960297
    Abstract: Systems, methods, and devices are disclosed for tracking physical objects using a passive reflective object. A computer-implemented method includes obtaining a location profile derived from content capturing a passive object having a reflective surface reflecting one or more real-world objects. The passive object is attached to a physical object. The method further includes transmitting the location profile to a simulation device. The method further includes generating a virtual representation of the physical object based on the location profile of the passive object. The method further includes presenting the virtual representation in a simulation experience.
    Type: Grant
    Filed: September 17, 2018
    Date of Patent: March 30, 2021
    Assignee: DISNEY ENTERPRISES, INC.
    Inventor: Steven M. Chapman
  • Patent number: 10956036
    Abstract: Non-limiting examples of the present disclosure describe gesture input processing. As an example, a gesture input may be a continuous gesture input that is received through a soft keyboard application. The continuous gesture input may comprise query input and a selection of an application extension displayed within the soft keyboard application. The query input may be processed using a component associated with the application extension. A result for the query input may be provided. As an example, the result may be provided by the component associated with the application extension. Other examples are also described.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: March 23, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Sung Joon Won
  • Patent number: 10958973
    Abstract: A method, computer system, and computer program product for viewing preferences identification are provided. The embodiment may include receiving, by a processor, a plurality of data related to a user profile. The embodiment may also include collecting user interaction information from a streaming content service. The embodiment may further include analyzing the user habits and patterns based on the collected user interaction information. The embodiment may also include comparing the habits and patterns with the received user profile. The embodiment may further include prompting a user to confirm an identity associated with the user profile when there is a match between the user profile and the habits and patterns.
    Type: Grant
    Filed: June 4, 2019
    Date of Patent: March 23, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Harry Hoots, Hernan A. Cunico, Martin G. Keen, Uma Maheshwar Reddy Chamakura
  • Patent number: 10958877
    Abstract: The invention relates to systems and methods for inhibiting or causing automated actions based on estimated person locations, comprising multiple video sources configured to detect the location of one or more persons, wherein at least one video source is calibrated for a known location and pose. The invention further comprises at least one processor operably connected to a calibrated video source, wherein said processor aggregates possible person locations. These systems and methods may be useful for initiating or interrupting the automated activity of equipment in the presence of personnel.
    Type: Grant
    Filed: November 11, 2015
    Date of Patent: March 23, 2021
    Assignee: Helmerich & Payne Technologies, LLC
    Inventors: Peter A. Torrione, Kenneth D. Morton, Jr.
  • Patent number: 10949960
    Abstract: Techniques related to synthesizing an image of a person in an unseen pose are discussed. Such techniques include detecting a body part occlusion for a body part in a representation of the person in a first image and, in response to the detected occlusion, projecting a representation of the body part from a second image having a different view into the first image. A geometric transformation based on a source pose of the person and a target pose is then applied to the merged image to generate a synthesized image comprising a representation of the person in the target pose.
    Type: Grant
    Filed: June 20, 2019
    Date of Patent: March 16, 2021
    Assignee: Intel Corporation
    Inventors: Fanny Nina Paravecino, James Hall, Rita Brugarolas Brufau
  • Patent number: 10949978
    Abstract: A segmentation of an object depicted in a first visual representation may be determined. The segmentation may include for each image a first respective image portion that includes the object, a second respective image portion that includes a respective ground area located beneath the object, and a third respective image portion that includes a background area located above the second respective portion and behind the object. A second visual representation may be constructed that includes the first respective image portion and a target background image portion that replaces the third respective image portion and that is selected from a target background image based on an area of the third respective image portion relative to the respective image.
    Type: Grant
    Filed: July 22, 2019
    Date of Patent: March 16, 2021
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Matthias Reso, Abhishek Kar, Julius Santiago, Pavel Hanchar, Radu Bogdan Rusu
  • Patent number: 10949716
    Abstract: In one aspect, a computerized process useful for movement classification using a motion capture suit includes the step of providing the motion capture suit worn by a user. The motion capture suit comprises a set of position sensors and a Wi-Fi system configured to communicate a set of position sensor data to a computing system. The process includes the step of providing the computing system to: receive a set of position data from the motion capture suit for a specified time window, comprising X, Y, and Z axis positions and joints-angle data for each position sensor of the set of position sensors; transform each joints-angle data to a corresponding frequency domain using a fast Fourier transformation to remove any time-dependency value; after the fast Fourier transformation, train a support vector machine using the X, Y, and Z axis position data and the frequency-domain data as input; and use the support vector machine to predict a set of body positions and movements.
    Type: Grant
    Filed: August 23, 2018
    Date of Patent: March 16, 2021
    Inventors: Jakob Balslev, Anders Kullmann Klok, Maziar Taghiyar-Zamani, Matias Søndergaard, Lasse Petersen, Peter Jensen
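The frequency-domain step above — transforming a joint-angle time series so the classifier sees time-independent features — could be sketched as follows. A naive DFT stands in for the FFT, the walking-cycle signal is made up, and the SVM training itself is out of scope.

```python
# Hedged sketch of the feature step only: magnitudes of a discrete Fourier
# transform of a joint-angle series are unchanged by a shift in time,
# which is the point of removing the time dependency before training.
import cmath
import math

def dft_magnitudes(signal):
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

# A joint angle oscillating twice over the window (e.g. a walking cycle):
angles = [math.cos(2 * math.pi * 2 * t / 8) for t in range(8)]
mags = dft_magnitudes(angles)

# The same movement started later in the window yields the same spectrum.
shifted = angles[3:] + angles[:3]
print([round(m, 6) for m in mags])
print([round(m, 6) for m in dft_magnitudes(shifted)])
```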
  • Patent number: 10942632
    Abstract: An electronic device has a graphical user interface that displays a viewport of a canvas containing positioned and sized graphical information units. Smaller units are displayed in front of overlapping larger units. Relative size determination uses a specific-size-metric such as unit width. The device uses parentage determination rules to deduce a current hierarchical relationship between two units according to current sizes and positions. When there is full or partial overlap, the larger unit of each pair is deduced as the parent; otherwise there is no direct relationship. Clusters of decreasingly sized descendants result. User input adjusts the size and/or position of a selected unit while concurrently applying the adjustment proportionally to all descendants. Throughout continuous input, each adjusted and other canvas units are independently displayed with smaller in front of overlapping larger units.
    Type: Grant
    Filed: November 6, 2018
    Date of Patent: March 9, 2021
    Assignee: Zocomotion Ltd.
    Inventor: David Sefton
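The parentage-determination rule described above — overlapping units form a parent/child pair with the larger unit as parent, disjoint units have no direct relationship — could be sketched like this. The (x, y, width, height) rectangle layout and the use of width as the size metric are assumed representations.

```python
# Hedged sketch of the parentage rule: when two units overlap fully or
# partially, the larger one (by the width size-metric) is deduced as the
# parent; otherwise there is no direct hierarchical relationship.

def overlaps(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def deduce_parent(a, b):
    """Return 'a', 'b', or None for the parent of the pair."""
    if not overlaps(a, b):
        return None                     # no direct relationship
    return "a" if a[2] > b[2] else "b"  # wider unit is the parent

board = (0, 0, 100, 100)    # large canvas unit
note = (10, 10, 20, 20)     # smaller unit, displayed in front of it
far_away = (500, 500, 20, 20)

print(deduce_parent(board, note))      # the board is deduced as parent
print(deduce_parent(note, far_away))   # disjoint -> no relationship
```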
  • Patent number: 10944991
    Abstract: A decoding device, an encoding device, and methods for point cloud encoding and decoding are disclosed. The method for decoding includes receiving a bitstream and decoding from the bitstream first and second frames that are associated with a delta index. The first and second frames include patches that represent a 3D point cloud at different instances in time. The method additionally includes determining, based on decoding the delta index, that at least one of the patches included in the second frame matches a corresponding patch included in the first frame. The method further includes identifying a predictor index for a current patch; identifying a reference index associated with a reference patch in the first frame based on the delta index and the predictor index; and generating the 3D point cloud using the first frame, the second frame, and the reference patch.
    Type: Grant
    Filed: October 1, 2019
    Date of Patent: March 9, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Rajan Laxman Joshi
  • Patent number: 10937182
    Abstract: An electronic device estimates a pose of one or more subjects in an environment based on estimating a correspondence between a data volume containing a data mesh based on a current frame captured by a depth camera and a reference volume containing a plurality of fused prior data frames based on spectral embedding and performing bidirectional non-rigid matching between the reference volume and the current data frame to refine the correspondence so as to support location-based functionality. The electronic device predicts correspondences between the data volume and the reference volume based on spectral embedding. The correspondences provide constraints that accelerate the convergence between the data volume and the reference volume. By tracking changes between the current data mesh frame and the reference volume, the electronic device avoids tracking failures that can occur when relying solely on a previous data mesh frame.
    Type: Grant
    Filed: May 31, 2018
    Date of Patent: March 2, 2021
    Assignee: GOOGLE LLC
    Inventors: Mingsong Dou, Sean Ryan Fanello, Adarsh Prakash Murthy Kowdle, Christoph Rhemann, Sameh Khamis, Philip L. Davidson, Shahram Izadi, Vladimir Tankovich
  • Patent number: 10937220
    Abstract: Embodiments provide for animation streaming for media interaction by receiving, at a generator, inputs from a target device presenting a virtual environment; updating, based on the user inputs, a model of the virtual environment; determining network conditions between the generator and target device; generating a packet that includes a forecasted animation set for a virtual object in the updated model that comprises rig updates for the virtual object for at least two different states, where the number of states included in the packet is based on the network conditions; and streaming the packet to the target device, where the target device: receives a second input to interact with the virtual environment that changes the virtual environment to a given state; selects and applies a rig update associated with the given state to a local model of the virtual object; and outputs the updated local model on the target device.
    Type: Grant
    Filed: April 22, 2019
    Date of Patent: March 2, 2021
    Assignee: Disney Enterprises, Inc.
    Inventor: Kenneth J. Mitchell
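A minimal sketch of the core idea in this abstract: the number of forecast states packed into each packet grows with network latency so the target device can react locally. All function names, the 50 ms-per-state rate, and the state cap are assumptions for illustration, not the patent's actual method.

```python
# Hypothetical sketch: size the forecasted animation set by network latency.
def forecast_state_count(latency_ms, max_states=8, min_states=1):
    """Higher round-trip latency -> forecast more states, so the target
    device can apply a matching rig update without a round trip."""
    if latency_ms <= 0:
        return min_states
    # Assumed rate: one extra forecast state per 50 ms of latency.
    states = min_states + int(latency_ms // 50)
    return min(states, max_states)

def build_packet(rig_updates_by_state, latency_ms):
    """Pack rig updates for the N most likely next states."""
    n = forecast_state_count(latency_ms)
    forecast = dict(list(rig_updates_by_state.items())[:n])
    return {"states": forecast, "count": len(forecast)}
```

On receipt, the target device would select the rig update whose state matches the user's interaction and apply it to its local model, falling back to a fresh request only when no forecast state matches.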
  • Patent number: 10936858
    Abstract: A system and method for generating a mood log based on user images. In one embodiment, a system includes an image module that receives images taken by a user's mobile computing device and determines that a face of the user is included in an image, a mood module that determines a mood level of the user based on the face, and a log module that stores the mood level in a log of mood levels for the user.
    Type: Grant
    Filed: October 22, 2018
    Date of Patent: March 2, 2021
    Assignee: Snap Inc.
    Inventor: Sheldon Chang
  • Patent number: 10936149
    Abstract: A method includes providing a virtual space to a first user via a head-mounted device (HMD) including a display. The virtual space includes a first avatar associated with the first user, a character object operable based on an input operation on a controller, a second avatar associated with a second user, and a virtual camera defining a field-of-view image to be provided to the HMD. The method further includes determining a viewpoint mode. In a first viewpoint mode the virtual camera is associated with a viewpoint of the first avatar. In a second viewpoint mode the virtual camera is associated with a viewpoint of the character object. The viewpoint is determined based on at least one of an input operation determined in advance on the controller or a state of the controller. The method includes providing the field-of-view image via the HMD in accordance with the determined viewpoint mode.
    Type: Grant
    Filed: January 27, 2020
    Date of Patent: March 2, 2021
    Assignee: COLOPL, INC.
    Inventor: Atsushi Inomata
  • Patent number: 10930075
    Abstract: Devices, systems, and methods for interacting with a three-dimensional virtual environment, including receiving an input associated with a change in pose of a user's hand; estimating, based on at least the input, a first pose in the virtual environment for an input source associated with the hand; identifying a surface of a virtual object in the virtual environment; rendering a frame depicting elements of the virtual environment, the frame including a pixel rendered for a position on the surface; determining a distance between the position and a virtual input line extending through a position of the first pose and in a direction of the first pose; changing a pixel color rendered for the pixel based on the distance between the position and the virtual input line; and displaying the frame including the pixel with the changed pixel color to the user via a head-mounted display device.
    Type: Grant
    Filed: October 16, 2017
    Date of Patent: February 23, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Carlos Fernando Faria Costa, Mathew Julian Lamb, Brian Thomas Merrell
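The distance test this abstract describes is ordinary point-to-line geometry: the distance from a surface position to the virtual input line through the hand pose, which then drives a color blend. A minimal sketch under that reading (the falloff radius and blend function are assumptions, not the patent's values):

```python
import math

def point_to_line_distance(point, origin, direction):
    """Distance from a 3D point to the infinite line through `origin`
    along `direction`: |(p - o) x d| / |d|."""
    v = [p - o for p, o in zip(point, origin)]
    # 3D cross product v x direction
    cx = v[1] * direction[2] - v[2] * direction[1]
    cy = v[2] * direction[0] - v[0] * direction[2]
    cz = v[0] * direction[1] - v[1] * direction[0]
    cross_norm = math.sqrt(cx * cx + cy * cy + cz * cz)
    dir_norm = math.sqrt(sum(d * d for d in direction))
    return cross_norm / dir_norm

def highlight_factor(distance, radius=0.05):
    """Blend weight for the pixel color change: 1 on the line,
    falling to 0 at the assumed radius."""
    return max(0.0, 1.0 - distance / radius)
```

A renderer would mix the highlight color into each surface pixel by `highlight_factor`, producing a soft cursor spot where the input line meets the virtual object.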
  • Patent number: 10930070
    Abstract: A periphery monitoring device includes: an acquisition unit configured to acquire a captured image from an imaging unit that captures an image of a periphery of a vehicle; a generation unit configured to generate a vehicle surrounding image indicating a situation around the vehicle in a virtual space based on the captured image; and a processing unit configured to display, on a display device, an image in which an own vehicle image is overlapped on the vehicle surrounding image, the own vehicle image indicating the vehicle in which a transmissive state of a constituent plane representing a plane constituting the vehicle is determined according to a direction of the constituent plane, and the vehicle surrounding image being represented based on a virtual viewpoint facing the vehicle in the virtual space.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: February 23, 2021
    Assignee: AISIN SEIKI KABUSHIKI KAISHA
    Inventors: Kazuya Watanabe, Tetsuya Maruoka
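The abstract's "transmissive state of a constituent plane ... determined according to a direction of the constituent plane" can be read as view-dependent transparency: planes facing the virtual camera become see-through so the surroundings stay visible behind the own-vehicle image. A minimal sketch of that reading (the opacity values and sign convention are assumptions):

```python
def plane_opacity(normal, view_dir):
    """Opacity of one constituent plane of the own-vehicle image.
    `view_dir` points from the virtual viewpoint toward the vehicle;
    a plane whose normal faces the camera (negative dot product) is
    rendered mostly transparent, others opaque."""
    dot = sum(n * v for n, v in zip(normal, view_dir))
    if dot < 0:
        return 0.2  # camera-facing plane: mostly transparent (assumed value)
    return 1.0      # plane facing away: opaque
```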
  • Patent number: 10930074
    Abstract: Embodiments of the present application provide a method for real-time control of a three-dimensional model, addressing the technical problem that, in a mobile internet environment with limited resources, real-time feedback from an actual object cannot be used to control the actions of a three-dimensional model to form a live video. The method includes: capturing a real-time video of an actual object; marking an action of the actual object in an image of the real-time video; and forming an action control instruction for a corresponding 3D model according to a change in the marked action.
    Type: Grant
    Filed: January 29, 2019
    Date of Patent: February 23, 2021
    Assignee: APPMAGICS TECH (BEIJING) LIMITED
    Inventors: Yingna Fu, Yulin Jin
  • Patent number: 10930044
    Abstract: A control system provides an interface for virtual characters, or avatars, during live avatar-human interactions. A human interactor can select facial expressions, poses, and behaviors of the virtual character using an input device mapped to menus on a display device.
    Type: Grant
    Filed: October 16, 2019
    Date of Patent: February 23, 2021
    Assignee: MURSION, INC.
    Inventors: Alex Zelenin, Brian D. Kelly, Arjun Nagendran
  • Patent number: 10929849
    Abstract: A method for verifying the identity of an individual with a mobile device equipped with at least one camera, a graphical display, a wireless communication adapter, and a verification mobile application. The method includes capturing a video of a biometric attribute of the individual through the camera of the mobile device (step 120); reconstructing with the mobile device, in real time, a 3D model of the individual's biometric attribute from the captured video, where the 3D model contains shapes and/or textures, thereby forming a reconstructed 3D model (step 120); and comparing the reconstructed 3D model with a reference 3D model containing shapes and textures, stored in either the mobile device or a remote enrolment database following a previous enrolment phase of the individual with the mobile device, thereby providing a detailed comparison result.
    Type: Grant
    Filed: October 24, 2014
    Date of Patent: February 23, 2021
    Assignee: OneVisage SA
    Inventor: Christophe Remillet
  • Patent number: 10930086
    Abstract: The present disclosure illustrates systems and methods for automatically adjusting a following 3D asset based on a deformation of a related base 3D asset. The systems and methods may use geomaps to index the relationship between the following 3D asset and base 3D asset. By automatically adjusting a following 3D asset based on the base 3D asset, the following 3D asset may retain full functionality.
    Type: Grant
    Filed: October 30, 2017
    Date of Patent: February 23, 2021
    Assignee: DG Holdings, Inc.
    Inventors: Jesse Janzer, Jon Middleton, Berkley Frei
  • Patent number: 10922867
    Abstract: There are provided systems and methods for rendering of an animated avatar. An embodiment of the method includes: determining a first rendering time of a first clip as approximately equivalent to a predetermined acceptable rendering latency, with a first playing time of the first clip determined as approximately the first rendering time multiplied by a multiplicative factor; rendering the first clip; determining a subsequent rendering time for each of one or more subsequent clips, where each subsequent rendering time is determined to be approximately equivalent to the predetermined acceptable rendering latency plus the total playing time of the preceding clips, and each subsequent playing time is determined to be approximately the rendering time of the respective subsequent clip multiplied by the multiplicative factor; and rendering the one or more subsequent clips.
    Type: Grant
    Filed: December 10, 2019
    Date of Patent: February 16, 2021
    Inventors: Enas Tarawneh, Michael Jenkin
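The schedule in this abstract follows directly from its two rules: the first clip may take the acceptable latency L to render, and each later clip may render while all earlier clips are still playing, so clip i gets rendering time L plus the accumulated playing time, and its own playing time is that rendering time times the factor k. A minimal sketch (function and variable names are assumptions):

```python
def clip_schedule(acceptable_latency, factor, n_clips):
    """Return (rendering_time, playing_time) pairs per clip.
    Clip 1 renders within the acceptable latency; each subsequent clip
    may render for the latency plus the total playing time of all
    preceding clips, since those clips play while it renders."""
    schedule = []
    total_play = 0.0
    for _ in range(n_clips):
        render = acceptable_latency + total_play
        play = render * factor
        schedule.append((render, play))
        total_play += play
    return schedule
```

With L = 1 and k = 2, the budgets grow geometrically: each clip's playing time buys triple the rendering budget for the next, so arbitrarily long avatar animations can be produced without the viewer ever waiting more than the initial latency.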
  • Patent number: 10922895
    Abstract: Computing devices for content library projection in computer-based 3D environments are disclosed herein. In one embodiment, a computing device is configured to provide, on a display, a user interface containing a work area having a template of a 3D environment and a gallery containing models of two-dimensional (2D) or 3D content items. The computing device can then detect, via the user interface, a user input selecting a content library to be inserted as an object into the template of the 3D environment. In response to detecting the user input, the computing device can render and surface on the display graphical representations of the 2D or 3D content items corresponding to the models in the selected content library, arranged along a circle having a center spaced apart from a default position of a viewer of the 3D environment by a preset distance.
    Type: Grant
    Filed: May 4, 2018
    Date of Patent: February 16, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Vidya Srinivasan, Colton Brett Marshall, Harold Anthony Martinez Molina, Aniket Handa, Amy Scarfone, Justin Chung-Ting Lam, Edward Boyle Averett
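The layout this abstract describes, items on a circle whose center sits a preset distance from the viewer's default position, is simple polar geometry. A minimal top-down 2D sketch (the radius, distance, and even angular spacing are assumptions for illustration):

```python
import math

def circle_layout(n_items, radius=2.0, center_distance=3.0, viewer=(0.0, 0.0)):
    """Place n_items evenly on a circle whose center is center_distance
    in front of the viewer. Returns (x, z) ground-plane positions."""
    cx, cz = viewer[0], viewer[1] + center_distance
    positions = []
    for i in range(n_items):
        angle = 2 * math.pi * i / n_items
        positions.append((cx + radius * math.cos(angle),
                          cz + radius * math.sin(angle)))
    return positions
```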
  • Patent number: 10921877
    Abstract: A silhouette-based limb finder may be used to detect limbs from a camera image. This limb determination may be used to control an application, such as a game, either alone or in combination with other image processing. A first distance field indicating a distance from the edge of a silhouette in an image and a second distance field indicating distance from a location in the silhouette may be used to generate a path from an extremity point on the silhouette to the location. This path then may be used to determine a limb in the silhouette. This allows tracking of limbs even for hard to detect player poses.
    Type: Grant
    Filed: October 20, 2014
    Date of Patent: February 16, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jonathan R. Hoof, Daniel G. Kennett
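Both distance fields in this abstract can be computed the same way: a breadth-first flood restricted to the silhouette, seeded either from the silhouette's edge pixels or from an interior location such as the body center. A minimal sketch of that shared primitive (the BFS formulation and 4-connectivity are assumptions, not the patent's exact method):

```python
from collections import deque

def distance_field(grid, seeds):
    """BFS distance in pixels from seed cells, restricted to the
    silhouette (cells with value 1). `grid` is a list of 0/1 rows;
    cells outside the silhouette keep distance None."""
    h, w = len(grid), len(grid[0])
    dist = [[None] * w for _ in range(h)]
    q = deque()
    for r, c in seeds:
        dist[r][c] = 0
        q.append((r, c))
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and grid[nr][nc] \
                    and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist
```

With the second field seeded at the body location, descending it step by step from an extremity point (a local maximum of the first, edge-seeded field) traces a path back to the body, and that path approximates the limb.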
  • Patent number: 10922489
    Abstract: Various embodiments provide input to and facilitate various operations of media production. An automated script content generator uses recurrent artificial neural networks, trained using machine learning on a corpus of stories or scripts, to generate and suggest script content, and indicates the effects changes to the script would have on the scenes, characters, interactions, and other entities of the script. An automated producer breaks down the script to automatically generate storyboards, calendars, schedules, and budgets and provides this as input to the pre-production operation within a media production environment. The system also provides information to affect and facilitate the greenlighting operation and other operations in the media production environment in an iterative script review and revision process.
    Type: Grant
    Filed: January 10, 2019
    Date of Patent: February 16, 2021
    Assignee: RivetAI, Inc.
    Inventors: Debajyoti Ray, Andrew Kortschak, Walter Kortschak
  • Patent number: 10904192
    Abstract: Techniques are described for time-series-based enrichment of messages that are persisted and published in a flow according to the time series data. Inbound messages may be received and processed to add timing information. The modified messages may be stored as a time series in data storage. In response to a request for a particular sequence or set of messages, the messages may be retrieved from data storage and provided in a flow instance to the requestor. The requestor, such as a consumer application, may replay the data from the messages according to the order of the time series of the messages. In this way, implementations enable a time-ordered sequence of messages to be replayed at any time after the initial receipt of the messages, and enable any number of instances of such replay, including simultaneous replay of a particular message sequence to multiple consumers.
    Type: Grant
    Filed: July 27, 2016
    Date of Patent: January 26, 2021
    Assignee: SAP SE
    Inventors: Andreas Hoffner, Martin Bachmann
  • Patent number: 10902343
    Abstract: Training data from multiple types of sensors and captured in previous capture sessions can be fused within a physics-based tracking framework to train motion priors using different deep learning techniques, such as convolutional neural networks (CNN) and Recurrent Temporal Restricted Boltzmann Machines (RTRBMs). In embodiments employing one or more CNNs, two streams of filters can be used. In those embodiments, one stream of the filters can be used to learn the temporal information and the other stream of the filters can be used to learn spatial information. In embodiments employing one or more RTRBMs, all visible nodes of the RTRBMs can be clamped with values obtained from the training data or data synthesized from the training data. In cases where sensor data is unavailable, the input nodes may be unclamped and the one or more RTRBMs can generate the missing sensor data.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: January 26, 2021
    Assignee: DISNEY ENTERPRISES, INC.
    Inventors: Sheldon Andrews, Ivan Huerta Casado, Kenneth J. Mitchell, Leonid Sigal
  • Patent number: 10902618
    Abstract: Systems and methods are disclosed for universal body movement translation and character rendering. Motion data from a source character can be translated and used to direct movement of a target character model in a way that respects the anatomical differences between the two characters. The movement of biomechanical parts in the source character can be converted into normalized values based on defined constraints associated with the source character, and those normalized values can be used to inform the animation of movement of biomechanical parts in a target character based on defined constraints associated with the target character.
    Type: Grant
    Filed: June 14, 2019
    Date of Patent: January 26, 2021
    Assignee: Electronic Arts Inc.
    Inventors: Simon Payne, Darren Rudy
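The normalization this abstract describes maps each biomechanical value into a unit range using the source character's defined constraints, then maps it back out using the target character's constraints, so the motion respects each anatomy. A minimal sketch using joint-angle limits as the constraints (the limit-pair representation is an assumption for illustration):

```python
def normalize(angle, lo, hi):
    """Map a source joint angle into [0, 1] using the source
    character's defined limits (lo, hi)."""
    return (angle - lo) / (hi - lo)

def denormalize(t, lo, hi):
    """Map a normalized value back into the target character's limits."""
    return lo + t * (hi - lo)

def retarget(angle, source_limits, target_limits):
    """Translate one joint value from source anatomy to target anatomy."""
    return denormalize(normalize(angle, *source_limits), *target_limits)
```

For example, a source elbow halfway through a 0-90 degree range maps to halfway through a target whose elbow only bends 0-60 degrees, so the target never exceeds its own anatomical limits.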
  • Patent number: 10896320
    Abstract: In an embodiment, a children face alert system is provided for use with a smart device with a display screen. A neural network model trained with dataset images with embedded distance information can run in the background of the smart device. When receiving a captured image frame of the face of a child using the smart device, the neural network model can determine that the captured image frame is from a child, and further determine whether the face of the child is within a predetermined distance to the display screen based on a size of the face on the captured image frame. If the face is within the predetermined distance, the smart device can display an alert that the face of the child is too close to the display screen and pause one or more user applications until the child's face moves outside of the predetermined distance.
    Type: Grant
    Filed: November 14, 2018
    Date of Patent: January 19, 2021
    Assignee: BAIDU USA LLC
    Inventors: Shengdong Zhu, Lei Zhong, Kaisheng Song, Jia Guo
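Judging distance "based on a size of the face on the captured image frame", as this abstract describes, follows the pinhole-camera relation: distance ≈ focal length × real face width ÷ face width in pixels. A minimal sketch (the focal length, average face width, and alert threshold are assumed illustrative values, not the patent's trained model):

```python
def estimated_distance_cm(face_px_width, focal_px=600.0, real_face_cm=14.0):
    """Pinhole-camera estimate: a larger face in the frame means a
    shorter distance to the screen. Constants are assumptions."""
    return focal_px * real_face_cm / face_px_width

def too_close(face_px_width, threshold_cm=30.0):
    """True when the child's face is within the assumed alert distance,
    at which point the device would display an alert and pause apps."""
    return estimated_distance_cm(face_px_width) < threshold_cm
```

In the patented approach the distance judgment comes from a neural network trained on images with embedded distance information rather than an explicit formula, but the geometric relationship sketched here is what such a model implicitly learns.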