Animation Patents (Class 345/473)
  • Patent number: 10666920
    Abstract: A method of altering audio output from an electronic device based on image data is provided. In one embodiment, the method includes acquiring image data and determining one or more characteristics of the image data. Such characteristics may include sharpness, brightness, motion, magnification, zoom setting, and so forth, as well as variation in any of the preceding characteristics. The method may also include producing audio output, wherein at least one characteristic of the audio output is determined based on one or more of the image data characteristics. Various audio output characteristics that may be varied based on the video data characteristics may include, for instance, pitch, reverberation, tempo, volume, filter frequency response, added sound effects, or the like. Additional methods, devices, and manufactures are also disclosed.
    Type: Grant
    Filed: February 27, 2018
    Date of Patent: May 26, 2020
    Assignee: Apple Inc.
    Inventors: Aram Lindahl, Kelvin Chiu
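    The mapping the abstract describes, from image characteristics to audio output characteristics, might look like the following minimal sketch. The specific parameter names, ranges, and formulas are illustrative assumptions, not taken from the patent itself.

    ```python
    def audio_params_from_image(brightness: float, motion: float) -> dict:
        """Map normalized image characteristics (0.0-1.0) to audio output
        characteristics, in the spirit of the abstract above."""
        return {
            # Brighter frames -> louder output (volume in 0.0-1.0).
            "volume": 0.2 + 0.8 * brightness,
            # More inter-frame motion -> faster tempo (beats per minute).
            "tempo_bpm": 60 + 120 * motion,
            # Low motion -> more reverberation for a calmer feel.
            "reverb_wet": 0.6 * (1.0 - motion),
        }

    params = audio_params_from_image(brightness=0.5, motion=0.25)
    ```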
  • Patent number: 10661164
    Abstract: A method is provided for controlling game character movement for a server. The method includes receiving a movement-request data-packet sent by a first client, where the server is configured to manage a character movement on the first client and a character movement on a second client in a same game scene. The method also includes determining whether a target client is the first client or the second client according to the movement-request data-packet, where a character on the target client is a character whose movement needs to be controlled by the first client. Further, the method includes updating a movement identifier of the target client, and broadcasting the updated movement identifier of the target client to the first client and the second client.
    Type: Grant
    Filed: June 13, 2018
    Date of Patent: May 26, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Zhenxing Zhang, Bin Qiu
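    The server-side flow in the abstract (resolve the target client from a movement-request packet, update its movement identifier, broadcast the update to both clients) can be sketched as below. All field and method names are hypothetical.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class MovementServer:
        """Toy sketch of the movement-control server described above."""
        movement_ids: dict = field(default_factory=dict)  # client -> identifier
        broadcasts: list = field(default_factory=list)    # recorded broadcasts

        def handle_movement_request(self, packet: dict) -> str:
            # The packet names which client's character the requester
            # wants to move (itself or the other client in the scene).
            target = packet["target_client"]
            # Update the target's movement identifier (a counter here).
            self.movement_ids[target] = self.movement_ids.get(target, 0) + 1
            # Broadcast the updated identifier to both clients in the scene.
            for client in ("client_1", "client_2"):
                self.broadcasts.append((client, target, self.movement_ids[target]))
            return target

    server = MovementServer()
    server.handle_movement_request({"target_client": "client_2"})
    ```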
  • Patent number: 10664222
    Abstract: One or more embodiments of the disclosure provide systems and methods for providing media presentations to users of a media presentation system. A media presentation generally includes a plurality of media segments provided by multiple users of the media presentation system. In one or more embodiments, a user of the media presentation system may share a media presentation with a co-user. The media presentation system can enable the co-user, if authorized by the user, to contribute (e.g., add a media segment) to a media presentation shared with the co-user.
    Type: Grant
    Filed: June 10, 2019
    Date of Patent: May 26, 2020
    Assignee: FACEBOOK, INC.
    Inventors: Joshua Alexander Miller, Leo Litterello Mancini, Michael Slater
  • Patent number: 10657695
    Abstract: The present invention relates to a method for generating and causing display of a communication interface that facilitates the sharing of emotions through the creation of 3D avatars, and more particularly to the creation of such interfaces for displaying 3D avatars for use with mobile devices, cloud-based systems, and the like.

    Type: Grant
    Filed: October 30, 2017
    Date of Patent: May 19, 2020
    Assignee: Snap Inc.
    Inventors: Jesse Chand, Jeremy Voss
  • Patent number: 10656793
    Abstract: A technique is described herein for providing a personalized notification to a recipient-user. In one approach, the technique involves: receiving an original message sent by a sender-user; selecting a notification type from a set of possible notification types based on at least a portion of the original message; and selecting one or more property values from one or more respective ranges of possible property values. The selected notification type and selected property value(s) define a recipient-instantiated (RI) notification. The technique then displays the RI notification on a user interface presentation of a recipient-user computing device. In one approach, the technique can randomly select the notification type and/or the property value(s). In addition, or alternatively, the technique can make these choices based on context information.
    Type: Grant
    Filed: May 25, 2017
    Date of Patent: May 19, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Adrian Aisemberg
  • Patent number: 10657180
    Abstract: Technical solutions are described for reusing a solution for a test. An example method includes building, by a processor, a solution cache including a tree structure representative of a plurality of solutions, which stores a key configurable immediate value of a previous solution as a node, the previous solution as a leaf node of the tree, and an edge from the node indicative of a value of the key configurable immediate value at the node. The method includes traversing nodes of the tree structure in the solution cache to identify key configurable immediate values of a previous solution identical to configurable immediate values from the test, by identifying edges associated with values identical to those from the test. In response to reaching a leaf node of the tree structure, the method uses the solution(s) at the leaf node as a solution of the test.
    Type: Grant
    Filed: November 4, 2015
    Date of Patent: May 19, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Eyal Bin
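    The cache structure the abstract describes, interior nodes keyed by a configurable value, edges carrying concrete settings, and leaves holding previously computed solutions, can be sketched as follows. Names and the dict-based representation are illustrative assumptions, not the patent's own.

    ```python
    class SolutionCache:
        """Minimal sketch of the tree-structured solution cache above."""

        def __init__(self):
            self.root = {}

        def insert(self, values: dict, solution):
            # Walk/extend the tree along (key, value) edges; store the
            # solution at the resulting leaf.
            node = self.root
            for key in sorted(values):
                node = node.setdefault((key, values[key]), {})
            node["__solution__"] = solution

        def lookup(self, values: dict):
            # Traverse edges whose values match the test's configurable
            # values; reaching a leaf means the cached solution can be
            # reused for this test.
            node = self.root
            for key in sorted(values):
                edge = (key, values[key])
                if edge not in node:
                    return None
                node = node[edge]
            return node.get("__solution__")

    cache = SolutionCache()
    cache.insert({"mode": 1, "width": 32}, solution="regs-A")
    reused = cache.lookup({"mode": 1, "width": 32})
    ```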
  • Patent number: 10657656
    Abstract: Systems, computer-implemented methods, and computer program products to generate virtual motion sensor data from computer animations are provided. A system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise a tracker component that can track virtual location data corresponding to a feature of a computer animated character in a virtual environment. The computer executable components can further comprise a virtual motion sensor component that, based on the virtual location data, can generate virtual motion sensor data.
    Type: Grant
    Filed: June 15, 2018
    Date of Patent: May 19, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Marco Cavallo, Ravi Tejwani, Patrick Watson, Aldis Sipolins, Jenna Reinen, Hui Wu
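    Deriving virtual motion-sensor data from tracked virtual location data, as the abstract describes, reduces in the simplest case to finite-differencing a character feature's position samples. The sketch below assumes uniform frame spacing and one spatial dimension for brevity.

    ```python
    def virtual_imu(positions, dt):
        """Finite-difference position samples of an animated character's
        tracked feature to obtain velocity and acceleration traces,
        i.e. virtual motion sensor data."""
        velocity = [(b - a) / dt for a, b in zip(positions, positions[1:])]
        acceleration = [(b - a) / dt for a, b in zip(velocity, velocity[1:])]
        return velocity, acceleration

    vel, acc = virtual_imu([0.0, 1.0, 3.0, 6.0], dt=1.0)
    ```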
  • Patent number: 10650386
    Abstract: A method and system for improving network usage detection and presentation is provided. The method includes detecting and identifying a user accessing specified network content. Objects being viewed by the user via a network are detected and prior associations between the objects and the user are determined. Attributes of the objects with respect to the prior associations are determined and presented via a GUI.
    Type: Grant
    Filed: August 30, 2018
    Date of Patent: May 12, 2020
    Assignee: International Business Machines Corporation
    Inventors: Lisa Seacat DeLuca, Jeremy A. Greenberger
  • Patent number: 10649211
    Abstract: A fixed-distance display system includes a light source configured to generate a light beam. The system also includes a light guiding optical element configured to propagate at least a portion of the light beam by total internal reflection. The system further includes a first inertial measurement unit configured to measure a first value for calculating a head pose of a user. Moreover, the system includes a camera configured to capture an image for machine vision optical flow analysis. The display system is configured to display virtual images only within a tolerance range of a single predetermined optical plane.
    Type: Grant
    Filed: July 31, 2017
    Date of Patent: May 12, 2020
    Assignee: Magic Leap, INC.
    Inventors: Samuel A Miller, William Hudson Welch
  • Patent number: 10650171
    Abstract: A computer-implemented method and system automatically solves constraints in a computer-aided design (CAD) model. A CAD model of a real-world object capable of assuming various positions is constructed and a constraint solver process is initiated and executes while a user defines multiple positions of the CAD model. Input of data specified during a CAD design workflow is automatically input to the constraint solver process, and unknown variables are solved for as part of the CAD design workflow.
    Type: Grant
    Filed: December 22, 2015
    Date of Patent: May 12, 2020
    Assignee: DASSAULT SYSTEMES SOLIDWORKS CORPORATION
    Inventors: Shrikant Vitthal Savant, Kyeong Hwi Lee
  • Patent number: 10643252
    Abstract: An advertisement method of an electronic device and the electronic device thereof are provided. The operation method of the electronic device includes the processes of displaying banner advertisement comprising a user selection item, and displaying banner advertisement of a scenario matching to a result of selection of the user selection item.
    Type: Grant
    Filed: December 10, 2014
    Date of Patent: May 5, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sang-Heon Jeong, Jun-Seon Yun, Mi-Jung Kim
  • Patent number: 10642365
    Abstract: Parametric inertia and API techniques are described. In one or more implementations, functionality is exposed via an application programming interface by an operating system of a computing device to one or more applications that is configured to calculate an effect of inertia for movement in a user interface. The calculated effect of inertia for the movement on the user interface is managed by the operating system based on one or more rest points specified using one or more parametric curves by the one or more applications via interaction with the application programming interface.
    Type: Grant
    Filed: September 9, 2014
    Date of Patent: May 5, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Ming Huang, Nicolas J. Brun
  • Patent number: 10645370
    Abstract: Systems, methods, and computing devices for capturing synthetic stereoscopic content are provided. An example computing device includes at least one processor and memory. The memory stores instructions that cause the computing device to receive a three-dimensional scene. The instructions may additionally cause the computing device to reposition vertices of the three-dimensional scene to compensate for variations in camera location in a directional stereoscopic projection and generate a stereoscopic image based on the repositioned vertices. An example method includes projecting a three-dimensional scene onto a left eye image cube and a right eye image cube and repositioning vertices of the three-dimensional scene to adjust for rendering from a single camera location. The method also includes mapping pixels of a stereoscopic image to points on the left eye image cube and the right eye image cube and generating the stereoscopic image using the values of the mapped pixels.
    Type: Grant
    Filed: April 27, 2018
    Date of Patent: May 5, 2020
    Assignee: GOOGLE LLC
    Inventors: Jeremy Chernobieff, Houman Meshkin, Scott Dolim
  • Patent number: 10634918
    Abstract: A method for one or more processors to implement includes acquiring a synthetic image of an object from a second orientation different from a first orientation, using a three-dimensional model. The method further includes identifying, in the synthetic image, second edge points that are located on an edge of the object that is not a perimeter. The method further includes identifying matched edge points, which are first edge points and second edge points at substantially a same location on the object. The method further includes storing the matched edge points in a memory that can be accessed by an object-tracking device, so that the object-tracking device can identify the object in a real environment by identifying the matched edge points in images of the object.
    Type: Grant
    Filed: September 6, 2018
    Date of Patent: April 28, 2020
    Assignee: SEIKO EPSON CORPORATION
    Inventor: Mikhail Brusnitsyn
  • Patent number: 10635745
    Abstract: The described technology is directed towards a pre-child user interface element in a user interface tree that draws before the parent element draws (and thus before any conventional child element of the parent draws). For example, based upon current state data such as whether the parent element has focus, the pre-child may draw a highlight or the like before (so as to be beneath) drawing the representation of the parent element, to indicate the focused state (and/or other current states). The user interface tree maintains a property that it is composable because the parent user interface element code is independent of what any of its pre-child elements do when invoked.
    Type: Grant
    Filed: May 4, 2018
    Date of Patent: April 28, 2020
    Assignee: HOME BOX OFFICE, INC.
    Inventors: Brandon C. Furtwangler, Brendan Joseph Clark, J. Jordan C. Parker
  • Patent number: 10621384
    Abstract: An upper limb model of a virtual manikin includes a data conversion engine configured to produce converted data based on one or more data sets. Each data set represents dependencies between elements of the kinematic model. The upper limb model further includes a kinematic chain model configured to generate one or more constraints based on the converted data. The upper limb model also includes a posturing engine configured to determine, based on the one or more constraints, a trajectory from a first position to a second position. The kinematic model may further include a rendering engine configured to render a posture corresponding to the second position. The elements of the kinematic model may include one or more of a clavicle, a scapula, a humerus, a forearm, and a hand.
    Type: Grant
    Filed: December 9, 2015
    Date of Patent: April 14, 2020
    Assignee: Dassault Systemes Americas Corp.
    Inventors: Pierre-Olivier Lemieux, Arnaud Barré, Rachid Aissaoui, Nicola Hagemeister
  • Patent number: 10621779
    Abstract: Artificial intelligence based techniques are used for analysis of 3D objects in conjunction with each other. A 3D model of two or more 3D objects is generated. Features of 3D objects are matched to develop a correspondence between the 3D objects. Two 3D objects are geometrically mapped and an object is overlayed on another 3D object to obtain a superimposed object. Match analysis of 3D objects is performed based on machine learning based models to determine how well the objects are spatially matched. The analysis of the objects is used in augmented reality applications.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: April 14, 2020
    Assignee: FastVDO LLC
    Inventors: Pankaj N. Topiwala, Madhu Peringassery Krishnan, Wei Dai
  • Patent number: 10613827
    Abstract: A configuration receives, with a processor, a request for a voice-based, human-spoken language interpretation from a first human-spoken language to a second human-spoken language. Further, the configuration routes, with the processor, the request to a device associated with a remotely-located human interpreter. In addition, the configuration receives, with the processor, audio in the first human-spoken language from a telecommunication device. The configuration also augments, in real-time with the processor, the audio with one or more visual features corresponding to the audio. Further, the configuration sends, with the processor, the augmented audio to the device associated with the human interpreter for the voice-based, human-spoken language interpretation to be based on the augmented audio in a simulated video remote interpretation session.
    Type: Grant
    Filed: March 6, 2018
    Date of Patent: April 7, 2020
    Assignee: Language Line Services, Inc.
    Inventors: Jeffrey Cordell, Lindsay D'Penha, Julia Berke
  • Patent number: 10614299
    Abstract: This invention introduces an indoor person identification system that utilizes the capture and analysis of footstep-induced structural vibrations. The system senses floor vibration and detects the signal induced by footsteps. The system then extracts features from the signal that represent characteristics of each person's unique gait pattern. With these extracted features, the system conducts hierarchical classification at an individual-step level and at a collection-of-consecutive-steps level, achieving a high degree of accuracy in the identification of individuals.
    Type: Grant
    Filed: February 9, 2016
    Date of Patent: April 7, 2020
    Assignee: CARNEGIE MELLON UNIVERSITY
    Inventors: Pei Zhang, Hae Young Noh, Shijia Pan, Ningning Wang, Amelie Bonde, Mostafa Mirshekari
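    The two-level classification in the abstract, per-step decisions aggregated over a run of consecutive steps, might look like the sketch below. `classify_step` stands in for any trained per-step classifier and is an assumption of this sketch.

    ```python
    from collections import Counter

    def identify_person(step_features, classify_step):
        """Classify each footstep individually, then aggregate a
        collection of consecutive steps by majority vote."""
        step_labels = [classify_step(f) for f in step_features]
        # Trace-level decision: the person predicted most often wins.
        label, _count = Counter(step_labels).most_common(1)[0]
        return label

    # Toy per-step classifier: thresholds a single "stride energy" feature.
    toy = lambda f: "alice" if f > 0.5 else "bob"
    person = identify_person([0.7, 0.2, 0.8, 0.9], toy)
    ```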
  • Patent number: 10610301
    Abstract: The invention generally pertains to the combination of a surgical tool with a computer-assisted surgery system. The surgical tool may be used as an input device, allowing information to pass from the user to the computer-assisted surgery system, and providing functionality similar to common user interface devices, such as a mouse or any other input device. When used as an input device, it may be used for defining anatomical reference geometry, manipulating the position and/or orientation of virtual implants, manipulating the position and/or orientation of surgical approach trajectories, manipulating the position and/or orientation of bone resections, and the selection or placement of any other anatomical or surgical feature.
    Type: Grant
    Filed: February 13, 2017
    Date of Patent: April 7, 2020
    Assignee: MAKO Surgical Corp.
    Inventor: Arthur E. Quaid, III
  • Patent number: 10607075
    Abstract: A method for mapping an input device to a virtual object in virtual space displayed on a display device is disclosed. The method may include determining, via an eye tracking device, a gaze direction of a user. The method may also include, based at least in part on the gaze direction being directed to a virtual object in virtual space displayed on a display device, modifying an action to be taken by one or more processors in response to receiving a first input from an input device. The method may further include, thereafter, in response to receiving the input from the input device, causing the action to occur, wherein the action correlates the first input to an interaction with the virtual object.
    Type: Grant
    Filed: March 20, 2018
    Date of Patent: March 31, 2020
    Assignee: Tobii AB
    Inventors: Simon Gustafsson, Alexey Bezugly, Anders Kingbäck, Anders Clausen
  • Patent number: 10607065
    Abstract: Generation of parameterized avatars is described. An avatar generation system uses a trained machine-learning model to generate a parameterized avatar, from which digital visual content (e.g., images, videos, augmented and/or virtual reality (AR/VR) content) can be generated. The machine-learning model is trained to identify cartoon features of a particular style—from a library of these cartoon features—that correspond to features of a person depicted in a digital photograph. The parameterized avatar is data (e.g., a feature vector) that indicates the cartoon features identified from the library by the trained machine-learning model for the depicted person. This parameterization enables the avatar to be animated. The parameterization also enables the avatar generation system to generate avatars in non-photorealistic (relatively cartoony) styles such that, despite the style, the avatars preserve identities and expressions of persons depicted in input digital photographs.
    Type: Grant
    Filed: May 3, 2018
    Date of Patent: March 31, 2020
    Assignee: Adobe Inc.
    Inventors: Rebecca Ilene Milman, Jose Ignacio Echevarria Vallespi, Jingwan Lu, Elya Shechtman, Duygu Ceylan Aksit, David P. Simons
  • Patent number: 10606842
    Abstract: Presenting data from different data providers in a correlated fashion. A first query is performed on a first data set controlled by a first entity to capture a first set of data results. Then a second query is performed on a second data set controlled by a second entity to capture a second set of data results. A relationship ontology that correlates data stored in different data stores controlled by different entities is then consulted to identify one or more relationships between data in the selected results set and the second data set.
    Type: Grant
    Filed: July 25, 2017
    Date of Patent: March 31, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Pedram Faghihi Rezaei, Amir M. Netz, Patrick J. Baumgartner
  • Patent number: 10607652
    Abstract: A method and system of converting a first language of a soundtrack of a person speaking in a video to a second language. A meaning of a word of the first language is translated, and one or more synonym words in the second language, stored in a database of the computer system, are located. The first and second languages are different languages. Outlines of shapes of mouth openings of the one or more synonym words in the second language are compared with the outlines of the shapes of mouth openings of the word of the first language. A synonym word of the one or more synonym words translated from the second language into the first language having mouth openings with a smallest difference from the mouth openings of the word of the first language is selected.
    Type: Grant
    Filed: April 1, 2019
    Date of Patent: March 31, 2020
    Assignee: International Business Machines Corporation
    Inventors: Yuan Jin, Cheng Yu Peng, Yin Qian, Xiao Rui Shao, Jian Jun Wang
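    The selection step the abstract describes, choosing the synonym whose mouth-opening shapes differ least from the source word's, can be sketched as below. Mouth shapes are modeled as lists of numbers; a real system would compare outlines extracted from video. All names and the distance metric are illustrative assumptions.

    ```python
    def pick_synonym(source_mouth_shapes, synonyms):
        """Among candidate synonym words in the target language, pick
        the one whose sequence of mouth-opening shapes is closest to
        the source word's sequence."""
        def shape_distance(a, b):
            # Pad the shorter sequence, then sum absolute differences.
            n = max(len(a), len(b))
            a = a + [0.0] * (n - len(a))
            b = b + [0.0] * (n - len(b))
            return sum(abs(x - y) for x, y in zip(a, b))

        return min(synonyms,
                   key=lambda w: shape_distance(source_mouth_shapes,
                                                synonyms[w]))

    best = pick_synonym([0.8, 0.2, 0.6],
                        {"rapide": [0.7, 0.3, 0.5], "vite": [0.1, 0.9]})
    ```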
  • Patent number: 10599213
    Abstract: Disclosed herein are a method for remotely controlling virtual content and an apparatus for the method. The method for remotely controlling virtual content includes acquiring spatial data about a virtual space, creating at least one individual space by transforming the virtual space in accordance with a user interaction area that corresponds to a user based on the spatial data, visualizing the at least one individual space in the user interaction area and providing an interactive environment which enables an interaction between the user's body and a virtual object included in the at least one individual space, and controlling the virtual object in response to a user interaction event occurring in the interactive environment.
    Type: Grant
    Filed: May 16, 2018
    Date of Patent: March 24, 2020
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Ung-Yeon Yang, Ki-Hong Kim, Jin-Ho Kim
  • Patent number: 10600192
    Abstract: A first digital representation in an image and a second digital representation in a second image is identified based at least in part on specified criteria. A first epipolar line in the second image is determined based at least in part on a position of the first digital representation in the image. A second epipolar line is determined based at least in part on a position of the second digital representation in the second image. At least one cost value is determined based at least in part on the first digital representation, the second digital representation, the first epipolar line, and the second epipolar line. The first digital representation and the second digital representation are determined, based at least in part on the at least one cost value, to represent a same object. The first digital representation is associated in a data store with the second digital representation.
    Type: Grant
    Filed: May 14, 2018
    Date of Patent: March 24, 2020
    Assignee: Vulcan Inc.
    Inventors: Samuel Allan McKennoch, Cecil Lee Quartey, Jeremy Kyle Bensley
  • Patent number: 10600225
    Abstract: A sketch-based interface within an animation engine provides an end-user with tools for creating emitter textures and oscillator textures. The end-user may create an emitter texture by sketching one or more patch elements and then sketching an emitter. The animation engine animates the sketch by generating a stream of patch elements that emanate from the emitter. The end-user may create an oscillator texture by sketching a patch that includes one or more patch elements, and then sketching a brush skeleton and an oscillation skeleton. The animation engine replicates the patch along the brush skeleton, and then interpolates the replicated patches between the brush skeleton and the oscillation skeleton, thereby causing those replicated patches to periodically oscillate between the two skeletons.
    Type: Grant
    Filed: November 25, 2014
    Date of Patent: March 24, 2020
    Assignee: AUTODESK, INC.
    Inventors: Tovi Grossman, George Fitzmaurice, Rubaiat Habib Kazi, Fanny Chevalier, Shengdong Zhao
  • Patent number: 10596471
    Abstract: The present specification describes systems and methods that enable non-players to participate as spectators in online video games and, through a collective voting mechanism, determine the occurrence of certain events or contents of the gameplay in real time. Game event options are generated and presented to non-players. A specific one of the game event options is then selected based on a collective vote of the non-players. Once selected, the specific one or more of the game event options are then generated as actual gaming events and incorporated into a video game stream that is transmitted to the players as part of the gameplay session. In this manner, non-players may be able to directly affect the course of gameplay.
    Type: Grant
    Filed: August 3, 2018
    Date of Patent: March 24, 2020
    Assignee: Activision Publishing, Inc.
    Inventors: Josiah Eatedali, Jon Estanislao, Etienne Pouliot, Dave Bergeron, Maxime Babin, Mario Beckman Notaro
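    The collective voting mechanism the abstract describes reduces to tallying spectator votes over the presented event options and injecting the winner into the gameplay stream. A minimal sketch, with all names illustrative:

    ```python
    from collections import Counter

    def select_game_event(options, votes):
        """Tally non-player votes over candidate game events and return
        the winning option to incorporate into the gameplay session."""
        tally = Counter(v for v in votes if v in options)  # drop invalid votes
        if not tally:
            return options[0]  # fallback when no valid votes arrive
        event, _count = tally.most_common(1)[0]
        return event

    event = select_game_event(["airdrop", "fog", "double_xp"],
                              ["fog", "airdrop", "fog", "bogus", "fog"])
    ```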
  • Patent number: 10599286
    Abstract: A method includes defining a virtual space. The virtual space comprises a first avatar object, a first character object, a second avatar object, and a second character object. The method includes defining a plurality of operation modes for operating the virtual space. The method includes moving, in accordance with an operation of the virtual space by the first user being executed in the first mode, the first character object in accordance with the input to the first controller. The method includes moving, in accordance with an operation of the virtual space by the first user being executed in the second mode, the first avatar object based on the input to the first controller. The method includes generating a visual-field image in accordance with a motion of a head-mounted device (HMD) associated with the first user. The method includes displaying the visual-field image on the HMD.
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: March 24, 2020
    Assignee: COLOPL, INC.
    Inventor: Atsushi Inomata
  • Patent number: 10592391
    Abstract: A computer-implemented automated review method for transaction and datasource configuration source code files seeking to access a data store comprises the steps of receiving a request to review configuration source code files seeking to access the data store; checking the configuration source code files for a definition of a transaction manager; setting an issue flag if the configuration source code files do not include the definition of the transaction manager; checking the configuration source code files to determine whether a transaction definition is at an outer boundary of a service object or a method; setting the issue flag if the transaction definition does not appear before the start of the service object class or method definition; reviewing the status of the issue flag; issuing a halt signal if the issue flag is set; and issuing a proceed signal if the issue flag is not set.
    Type: Grant
    Filed: October 13, 2017
    Date of Patent: March 17, 2020
    Assignee: State Farm Mutual Automobile Insurance Company
    Inventors: Matthew Anderson, Richard T. Snyder, Daniel George Galvin
  • Patent number: 10594995
    Abstract: There is described a method and an apparatus for rendering realistic lighting on a subject to undergo chroma-key compositing into a scene environment. The method comprises providing translucent screens forming a closed environment around the subject, onto which projection from outside provides the realistic lighting, and identifying the area of the screens that is behind the subject and the area forming the contour thereof from the perspective of the camera. A withdrawable background is projected onto the areas of the screens behind the subject and along the contour of the subject, such that when an image of the subject is taken using the camera, the withdrawable background forming the contour of the subject is used to isolate the subject in the image and perform chroma-key compositing.
    Type: Grant
    Filed: December 13, 2017
    Date of Patent: March 17, 2020
    Assignee: BUF CANADA INC.
    Inventor: Pierre Louis Charles Buffin
  • Patent number: 10592724
    Abstract: A method for outputting information corresponding to an object includes identifying a shape of the object, receiving an image of a label corresponding to the object, generating a three-dimensional model of the object to which the image of the label is virtually attached based on the identified shape of the object and the image of the label, generating a plurality of pieces of two-dimensional image data corresponding to the three-dimensional model of the object, the plurality of pieces of two-dimensional image data being generated by changing a virtual capturing position for capturing the three-dimensional model of the object, comparing input image data to the plurality of pieces of two-dimensional image data and outputting the information corresponding to the object based on a positive comparison between the input image data and at least one of the plurality of pieces of two-dimensional image data.
    Type: Grant
    Filed: October 26, 2016
    Date of Patent: March 17, 2020
    Assignee: FUJITSU LIMITED
    Inventor: Takatoshi Tada
  • Patent number: 10586367
    Abstract: A method, apparatus, and computer readable medium for interactive cinemagrams. The method includes displaying a still frame of a cinemagram on a display of an electronic device, the cinemagram having an animated portion. The method also includes after the displaying, identifying occurrence of a triggering event based on an input from one or more sensors of the electronic device. Additionally, the method includes initiating animation of the animated portion of the cinemagram in response to identifying the occurrence of the triggering event. The method may also include generating the image as a cinemagram by identifying a reference frame from a plurality of frames and an object in the reference frame, segmenting the object from the reference frame, tracking the object across multiple of the frames, determining whether a portion of the reference frame lacks pixel information during motion of the object, and identifying pixel information to add to the portion.
    Type: Grant
    Filed: July 14, 2017
    Date of Patent: March 10, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sourabh Ravindran, Youngjun Yoo
  • Patent number: 10586369
    Abstract: One or more services may generate audio data and animations of an avatar based on input text. A speech input ingestion (SII) service may identify tags of objects in a virtual environment and associate tags of those objects with words in the input text, which may be stored as metadata in speech markup data. This association may enable an animation service to generate gestures toward objects while animating an avatar, or may be used to create animations or effects of the object. The SII service may analyze input text to identify dialog including multiple speakers associated with the text. The SII service may create metadata to associate certain words with respective speakers (avatars) of those words, which may be processed by the animation service to animate multiple avatars speaking the dialog.
    Type: Grant
    Filed: January 31, 2018
    Date of Patent: March 10, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Kyle Michael Roche, David Chiapperino, Christine Morten, Kathleen Alison Curry, Leo Chan
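The speaker-association step in the abstract above can be sketched as a small metadata pass. This is a hypothetical illustration, not Amazon's implementation: the input format (`"Speaker: words"` lines) and the output dictionary keys are assumptions.

```python
def tag_speakers(dialog_lines):
    """Hypothetical speech-markup step: associate each dialog line with a
    speaker (avatar) so an animation service can animate the right avatar.

    Input lines are assumed to look like 'Alice: Hello there'.
    """
    markup = []
    for line in dialog_lines:
        speaker, _, words = line.partition(":")
        markup.append({"speaker": speaker.strip(), "text": words.strip()})
    return markup
```

A downstream animation service could then group entries by `"speaker"` to drive each avatar's lip-sync and gestures in turn.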
  • Patent number: 10586368
    Abstract: The present invention relates to a joint automatic audio visual driven facial animation system that in some example embodiments includes a full-scale, state-of-the-art Large Vocabulary Continuous Speech Recognition (LVCSR) system with a strong language model for speech recognition, with phoneme alignment obtained from the word lattice.
    Type: Grant
    Filed: December 29, 2017
    Date of Patent: March 10, 2020
    Assignee: Snap Inc.
    Inventors: Chen Cao, Xin Chen, Wei Chu, Zehao Xue
  • Patent number: 10585277
    Abstract: According to the invention, a system for tracking a gaze of a user across a multi-display arrangement is disclosed. The system may include a first display, a first eye tracking device, a second display, a second eye tracking device, and a processor. The first eye tracking device may be configured to determine a user's gaze direction while the user is gazing at the first display. The second eye tracking device may be configured to determine the user's gaze direction while the user is gazing at the second display. The processor may be configured to determine that the user's gaze has moved away from the first display in a direction of the second display, and in response to determining that the user's gaze has moved away from the first display in the direction of the second display, deactivate the first eye tracking device, and activate the second eye tracking device.
    Type: Grant
    Filed: August 31, 2017
    Date of Patent: March 10, 2020
    Assignee: Tobii AB
    Inventors: Farshid Bagherpour, Mårten Skogö
  • Patent number: 10586361
    Abstract: Mesh art positioning techniques as part of digital content creation by a graphics editing application of a computing device are described. The graphics editing application is configured to obtain lists of vertices that are used to form mesh art. This list may then be used by a snapping module of graphics editing application to generate a snap point list that is used as a basis for mesh art positioning in relation to other objects within an item of digital content. Techniques are also described to address color diffusion within the mesh art, such as to identify a vertex that is a source of color diffusion and a boundary of color diffusion within the mesh art. The source and/or outer boundary of color diffusion within the mesh is then used as a basis to control mesh art positioning by the graphics editing application.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: March 10, 2020
    Assignee: Adobe Inc.
    Inventors: Avadhesh Kumar Sharma, Ashish Ranjan
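The snap-point idea above (a list of mesh vertices used as positioning targets) can be sketched in a few lines. This is an illustrative reduction, not Adobe's algorithm; the vertex and point representations are assumed to be 2D tuples.

```python
def nearest_snap_point(snap_points, cursor):
    """Sketch of snapping: choose the mesh vertex from the snap point list
    that is closest to the cursor position, by squared distance."""
    return min(
        snap_points,
        key=lambda v: (v[0] - cursor[0]) ** 2 + (v[1] - cursor[1]) ** 2,
    )
```

In a real editor the snap point list would be filtered further (e.g., to color-diffusion source or boundary vertices, as the abstract describes) before the nearest candidate is chosen.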
  • Patent number: 10584570
    Abstract: A method and systems for dynamically planning a well site are provided herein. The method includes generating, via a computing system, a three-dimensional model of a hydrocarbon field including a reservoir. The method also includes determining a location for a well site based on the three-dimensional model and determining reservoir targets for the determined location and a well trajectory for each reservoir target. The method also includes adjusting the location for the well site within the three-dimensional model and dynamically adjusting the reservoir targets and the well trajectories based on the dynamic adjustment of the location for the well site. The determination and the dynamic adjustment of the location, the reservoir targets, and the well trajectories for the well site are based on specified constraints. The method further includes determining a design for the well site based on the dynamic adjustment of the location, the reservoir targets, and the well trajectories for the well site.
    Type: Grant
    Filed: May 23, 2014
    Date of Patent: March 10, 2020
    Assignee: ExxonMobil Upstream Research Company
    Inventors: Yao-Chou Cheng, Ruben D. Uribe, Doug H. Freeman, Christopher A. Alba, Jose J. Sequeira, Jr.
  • Patent number: 10579672
    Abstract: This invention describes an audio snippet exchange network that allows people to subscribe to audio snippets that are published by other members on the network. The audio snippets may also have user contributed metadata related to them, such that the recipients can search a library of audio snippets and play back only those that match the search term. Oftentimes people want to take advantage of communication via social networks but are presently engaged in an activity such as driving or watching a live event. This audio snippet exchange network allows the person to have a largely uninterrupted experience while still publishing and consuming audio messages.
    Type: Grant
    Filed: March 29, 2016
    Date of Patent: March 3, 2020
    Inventors: David M. Orbach, Evan John Kaye
  • Patent number: 10580187
    Abstract: There are provided systems and methods for rendering of an animated avatar. An embodiment of the method includes: determining a first rendering time of a first clip as approximately equivalent to a predetermined acceptable rendering latency, a first playing time of the first clip determined as approximately the first rendering time multiplied by a multiplicative factor; rendering the first clip; determining a subsequent rendering time for each of one or more subsequent clips, each subsequent rendering time is determined to be approximately equivalent to the predetermined acceptable rendering latency plus the total playing time of the preceding clips, each subsequent playing time is determined to be approximately the rendering time of the respective subsequent clip multiplied by the multiplicative factor; and rendering the one or more subsequent clips.
    Type: Grant
    Filed: May 1, 2018
    Date of Patent: March 3, 2020
    Inventors: Enas Tarawneh, Michael Jenkin
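The timing recurrence in the abstract above is concrete enough to sketch: the first clip's rendering budget is roughly the acceptable latency, each clip plays for its rendering time times a multiplicative factor, and each later clip can render while the earlier clips play. A minimal sketch under those assumptions (function and parameter names are mine, not the patent's):

```python
def clip_schedule(latency, factor, n_clips):
    """Sketch of the clip timing scheme: returns (rendering_time,
    playing_time) pairs for each clip.

    Clip i may render for the acceptable latency plus the total playing
    time of all preceding clips, since those clips play while it renders.
    """
    schedule = []
    total_play = 0.0
    for _ in range(n_clips):
        render = latency + total_play  # rendering budget for this clip
        play = render * factor         # its playing time
        schedule.append((render, play))
        total_play += play
    return schedule
```

Note how the budgets grow geometrically: with `factor > 0`, each clip buys more rendering headroom for the next, which is what lets the avatar keep playing without a visible stall after the initial latency.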
  • Patent number: 10572134
    Abstract: The present disclosure provides a method for providing a prototyping tool, including at least: acquiring input data from a user; defining, as a trigger, a gesture generated by using the input data; and defining an interaction for allowing at least one action to be performed if the trigger occurs.
    Type: Grant
    Filed: August 8, 2018
    Date of Patent: February 25, 2020
    Assignee: STUDIO XID KOREA INC.
    Inventor: Soo Kim
  • Patent number: 10573065
    Abstract: The present specification describes systems and methods for automatically generating personalized blendshapes from actor performance measurements, while preserving the semantics of a template facial animation rig. The disclosed inventions facilitate the creation of an ensemble of realistic digital double face rigs for each individual, with consistent behaviour across the set, using sophisticated iterative optimization techniques.
    Type: Grant
    Filed: October 21, 2016
    Date of Patent: February 25, 2020
    Assignee: Activision Publishing, Inc.
    Inventors: Wan-Chun Ma, Chongyang Ma
  • Patent number: 10569180
    Abstract: A fantasy sports visual simulation system providing an audiovisual experience that allows contest participants and spectators to view a videogame-like model of a virtual fantasy sports contest. These simulations can occur at any time between any two or more contest participants, like mini-contests within the context of a broader league. Once a league is formed and team rosters are created, numerical calculations are performed based on real-athlete statistics to determine a current “state” of performance for all virtual athletes in gameplay. This statistical analysis is the basis for assigning performance variables to each virtual athlete, which allows the system to calculate numerical point values based on the performance of each virtual athlete and team in this fantasy sports visual simulation system.
    Type: Grant
    Filed: November 6, 2016
    Date of Patent: February 25, 2020
    Inventors: Alberto Murat Croci, Michael Joseph Karlin
  • Patent number: 10569135
    Abstract: Provided is an analysis device, including a processor that implements: an acquisition function of acquiring data indicating play events that are defined based on motions when a plurality of users play a sport and are arranged in a time interval; a calculation function of calculating a degree of correlation of the plays of the plurality of users in the interval based on a temporal relation of the play events of the plurality of users; and a relation estimation function of estimating a relation of the plurality of users in the interval based on the degree of correlation.
    Type: Grant
    Filed: November 6, 2014
    Date of Patent: February 25, 2020
    Assignee: SONY CORPORATION
    Inventor: Hideyuki Matsunaga
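One plausible reading of the degree-of-correlation calculation above can be sketched as follows. This is an assumed metric for illustration only (the patent does not specify this formula): the fraction of one user's play events that have a temporally nearby event from the other user.

```python
def play_correlation(events_a, events_b, window=1.0):
    """Hypothetical degree of correlation between two users' plays:
    the fraction of user A's play-event timestamps that have a user B
    event within `window` seconds."""
    if not events_a:
        return 0.0
    matched = sum(
        1
        for ta in events_a
        if any(abs(ta - tb) <= window for tb in events_b)
    )
    return matched / len(events_a)
```

A relation-estimation step could then threshold this value per interval, e.g. treating users whose correlation exceeds 0.5 as playing together (rallying) rather than independently.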
  • Patent number: 10574938
    Abstract: A depth camera assembly (DCA) for depth sensing of a local area. The DCA includes a light generator, a detector, and a controller. The light generator illuminates a local area with a light pattern. The detector captures portions of the light pattern reflected from an object in the local area. The detector includes pixel rows and pixel columns that form a dynamically adjustable read-out area. The controller reads first data of the captured portions of the reflected light pattern that correspond to a first read-out area, and locates the object based on the first data. The controller determines a second read-out area of the detector based on a portion of the read-out area associated with the object. The controller reads second data of the captured portions of the reflected light pattern that correspond to the second read-out area, and determines depth information for the object based on the second data.
    Type: Grant
    Filed: June 27, 2017
    Date of Patent: February 25, 2020
    Assignee: Facebook Technologies, LLC
    Inventor: Nicholas Daniel Trail
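The dynamically adjustable read-out area above reduces to simple geometry: once the object is located in the first read-out, the second read-out is a window of pixel rows and columns around it. A minimal sketch, with an assumed padding scheme and box representation (`(x0, y0, x1, y1)`), not Facebook's actual controller logic:

```python
def next_readout_area(obj_box, margin, sensor_w, sensor_h):
    """Sketch of determining the second read-out area: pad the located
    object's bounding box by `margin` pixels, clamped to the sensor."""
    x0, y0, x1, y1 = obj_box
    return (
        max(0, x0 - margin),
        max(0, y0 - margin),
        min(sensor_w, x1 + margin),
        min(sensor_h, y1 + margin),
    )
```

Reading only this window on the next frame reduces read-out time and power while still covering modest object motion (up to `margin` pixels between reads).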
  • Patent number: 10571695
    Abstract: The present invention provides a glass type terminal comprising: a frame formed to be fixable to a user's head; a display unit mounted on the frame and outputting visual information; an optical unit formed from at least one lens and forming an image from the visual information; a user input unit for sensing a control command for changing an operating mode; and a control unit for controlling the display unit such that the visual information changes on the basis of the change of the operating mode, and controlling the optical unit such that a focal length of the image changes according to the changed operating mode.
    Type: Grant
    Filed: December 16, 2015
    Date of Patent: February 25, 2020
    Assignee: LG ELECTRONICS INC.
    Inventor: Dongseuck Ko
  • Patent number: 10573051
    Abstract: Techniques are described for dynamically determining a transition, at run-time, between user interface states of an application based on a timing function that is used for multiple, different transitions within one or more applications. The timing function is applied to the various transitioning graphical elements in the user interface, such that the appearance of each shared element is progressively altered at a rate that is determined according to the timing function. Shared elements are transitioned using the timing function (e.g., as a whole) during the duration of the transition. Outgoing and incoming elements are transitioned, respectively, using a first subsection and second subsection of the timing function, wherein the subsections are bounded by an inflection time which, in some instances, corresponds to a time of peak velocity of the timing function.
    Type: Grant
    Filed: August 16, 2017
    Date of Patent: February 25, 2020
    Assignee: Google LLC
    Inventors: Eric Charles Henry, Ariel Benjamin Sachter-Zeltzer, Jonas Alon Naimark, Sharon Harris
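The subsection scheme above can be illustrated with a concrete curve. The specific timing function below (a smoothstep, whose velocity peaks at t = 0.5) is an assumption for the sketch; the patent covers arbitrary shared timing functions split at the peak-velocity inflection time.

```python
def timing(t):
    """Hypothetical shared timing curve (smoothstep). Its velocity,
    6t(1 - t), peaks at t = 0.5, which serves as the inflection time."""
    return 3 * t * t - 2 * t * t * t

T_PEAK = 0.5              # inflection time (peak velocity of the curve)
F_PEAK = timing(T_PEAK)   # curve value at the inflection

def shared_progress(t):
    """Shared elements animate over the whole curve."""
    return timing(t)

def outgoing_progress(t):
    """Outgoing elements use the first subsection [0, T_PEAK], renormalized
    so they finish fading out exactly at the inflection time."""
    return timing(min(t, T_PEAK)) / F_PEAK

def incoming_progress(t):
    """Incoming elements use the second subsection [T_PEAK, 1], renormalized
    so they begin fading in exactly at the inflection time."""
    return (timing(max(t, T_PEAK)) - F_PEAK) / (1.0 - F_PEAK)
```

Splitting at peak velocity means the hand-off from outgoing to incoming elements happens where motion is fastest, which tends to mask the swap visually.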
  • Patent number: 10575022
    Abstract: Disclosed are an image encoding and decoding method, image processing device, and computer storage medium, comprising: entropy-encoding a plurality of replication parameters of a current encoding block according to their order after adjustment, to generate a video code stream comprising information of the plurality of replication parameters; said plurality of replication parameters comprising one or more types of replication parameter components. Parsing the video code stream, comprising the information of the plurality of replication parameters, of a decoding block; entropy-decoding said plurality of replication parameters to obtain binary code of said plurality of replication parameters; adjusting said binary code to obtain the values of said plurality of replication parameters; said plurality of replication parameters comprising one or more types of replication parameter components.
    Type: Grant
    Filed: June 6, 2016
    Date of Patent: February 25, 2020
    Assignees: ZTE Corporation, TONGJI University
    Inventors: Tao Lin, Ming Li, Ping Wu, Guoqiang Shang, Zhao Wu
  • Patent number: 10573053
    Abstract: The present application describes techniques for animating images on mobile devices. One example method includes: drawing a final image to be displayed on a hidden canvas; storing the drawn image as an endpoint image; determining changing display parameters based on an animation effect, the display parameters comprising a display location parameter and a per-frame screenshot parameter; and displaying at least a part of the endpoint image frame by frame in an animation area at a certain interval, by displaying in the animation area, at each frame's display location, the part of the endpoint image captured according to that frame's screenshot parameter, until the endpoint image is fully displayed.
    Type: Grant
    Filed: April 18, 2019
    Date of Patent: February 25, 2020
    Assignee: Alibaba Group Holding Limited
    Inventor: Xiaoqing Dong
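The per-frame screenshot-and-location scheme above can be sketched with one concrete effect. The wipe-in effect, box format, and right-alignment below are illustrative assumptions, not the patent's specific parameters:

```python
def wipe_in_frames(width, height, n_frames):
    """Sketch of a hypothetical wipe-in animation of an endpoint image:
    each frame's screenshot parameter crops a growing slice of the image,
    and its display location keeps the slice right-aligned, until the
    final frame shows the full endpoint image."""
    frames = []
    for i in range(1, n_frames + 1):
        visible = int(width * i / n_frames)  # screenshot parameter: crop width
        x = width - visible                  # display location: right-aligned
        frames.append({"crop": (0, 0, visible, height), "pos": (x, 0)})
    return frames
```

Because every frame is a crop of one pre-rendered endpoint image, the animation avoids redrawing the scene per frame; only the crop rectangle and blit position change.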
  • Patent number: 10575044
    Abstract: A display device displays a tag overlaid on a video scene in a first portion of a video screen. The displayed tag is associated with content depicted in the video, includes descriptive text information, and is clickable, so that upon selection by a user, additional information associated with the tag is displayed. Based at least in part on an indication that the tag has been selected by a user, the tag undergoes vertical and/or horizontal repositioning relative to the first portion of the video screen, to a second portion of the video screen. The display device displays the video and the tag overlaid on the video in the second portion of the video screen. The displaying of the tag includes displaying at least a portion of the additional information associated with the tag.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: February 25, 2020
    Assignee: Gula Consulting Limited Liability Company
    Inventor: Charles J. Kulas