Animation Patents (Class 345/473)
  • Patent number: 9842094
    Abstract: Systems and methods for switching to different states of electronic content being developed in a content creation application. This involves storing different states of the electronic content using a content-addressable data store, where individual states are represented by identifiers that identify items of respective states stored in the content-addressable data store. Identical items that are included in multiple states are stored once in the content-addressable data store and referenced by common identifiers. Input is received to change the electronic content to a selected state of the different states and the electronic content is displayed in the selected state based on identifiers for the selected state. In this way, undo, redo, and other commands to switch to different states of electronic content being developed are provided.
    Type: Grant
    Filed: February 12, 2016
    Date of Patent: December 12, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: David P. Simons, James Acquavella, Gregory Scott Evans, Joel Brandt
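
The state-switching scheme in patent 9842094 amounts to a content-addressable store: each item is keyed by a hash of its content, so identical items shared by several document states are stored only once, and undo/redo just re-dereferences identifiers. A minimal sketch of that idea, assuming items are byte strings and a state is an ordered list of identifiers (all class and method names here are illustrative, not taken from the patent):

```python
import hashlib

class ContentAddressableStore:
    """Stores each distinct item once, keyed by the hash of its content."""

    def __init__(self):
        self._items = {}    # identifier -> item bytes
        self._states = []   # history of states; each state is a list of identifiers

    def _identify(self, item: bytes) -> str:
        return hashlib.sha256(item).hexdigest()

    def record_state(self, items: list[bytes]) -> list[str]:
        """Store the items of one document state; identical content is stored only once."""
        identifiers = []
        for item in items:
            ident = self._identify(item)
            self._items.setdefault(ident, item)   # shared items reuse the same entry
            identifiers.append(ident)
        self._states.append(identifiers)
        return identifiers

    def restore_state(self, index: int) -> list[bytes]:
        """Rebuild a previously recorded state (e.g., for undo/redo) from its identifiers."""
        return [self._items[ident] for ident in self._states[index]]

# Usage: switching back to an earlier state only dereferences identifiers.
store = ContentAddressableStore()
store.record_state([b"layer-a", b"layer-b"])
store.record_state([b"layer-a", b"layer-b-edited"])   # "layer-a" is not stored twice
print(store.restore_state(0))
```
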
  • Patent number: 9839844
    Abstract: Techniques are disclosed for generating 2D images of a 3D avatar in a virtual world. In one embodiment, a request is received specifying customizations to the 3D avatar. The 2D images are generated based on the request, each 2D image representing the 3D avatar from a different viewing angle in the virtual world. Advantageously, the 2D images may be sent to a client for display, without requiring the client to render the 3D avatar.
    Type: Grant
    Filed: June 9, 2011
    Date of Patent: December 12, 2017
    Assignee: Disney Enterprises, Inc.
    Inventors: Jackson Dunstan, Robert Todd Ogrin
  • Patent number: 9839852
    Abstract: An interactive game designed for learning to play a guitar. A guitar may be connected to a computer or other platform, capable of loading music and displaying notes and chords and other feedback and visual learning aids on a display screen, allowing a user to read music and play along. The goal of the software or interactive game engine is for players to learn how to play a guitar. Users may operate the game in a number of modes with different goals, playing mini-games throughout the levels of the game. The game provides feedback and statistics to help users learn how to play the guitar.
    Type: Grant
    Filed: August 31, 2015
    Date of Patent: December 12, 2017
    Assignee: Ubisoft Entertainment
    Inventor: Joseph Charles Epstein
  • Patent number: 9842405
    Abstract: A method of tracking a target includes classifying a pixel having a pixel address with one or more pixel cases. The pixel is classified based on one or more observed or synthesized values. An example of an observed value for a pixel address includes an observed depth value obtained from a depth camera. Examples of synthesized values for a pixel address include a synthesized depth value calculated by rasterizing a model of the target; one or more body-part indices estimating a body part corresponding to that pixel address; and one or more player indices estimating a target corresponding to that pixel address. One or more force vectors are calculated for the pixel based on the pixel case, and the force vector is mapped to one or more force-receiving locations of the model representing the target to adjust the model representing the target into an adjusted pose.
    Type: Grant
    Filed: October 23, 2013
    Date of Patent: December 12, 2017
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventor: Ryan M. Geiss
  • Patent number: 9836879
    Abstract: A computer-implemented method for computing skinning weights. The method includes traversing one or more paths from a first voxel included in a voxelization associated with a three-dimensional model to a second voxel included in the voxelization. The first voxel intersects a first influence included in the three-dimensional model. The second voxel intersects a target vertex associated with the three-dimensional model. The voxelization includes a set of interior voxels. The first voxel and the second voxel are included in the set of interior voxels. The method also includes identifying a first path included in the one or more paths that is associated with a first distance value related to the second voxel that indicates that the first path represents the shortest distance between the first voxel and the second voxel. The method further includes assigning a skinning weight to the target vertex based on the first distance value.
    Type: Grant
    Filed: April 11, 2014
    Date of Patent: December 5, 2017
    Assignee: AUTODESK, INC.
    Inventors: Olivier Dionne, Martin De Lasa
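
The path-based weighting in patent 9836879 is essentially a shortest-path search through interior voxels from an influence (e.g., a bone) to the voxel containing a vertex, with the path length driving the skinning weight. A rough sketch using breadth-first search on a 6-connected voxel grid; the unit-cost steps and the distance-to-weight falloff are assumptions for illustration, not the patent's exact method:

```python
from collections import deque

def shortest_voxel_distance(interior, start, goal):
    """BFS over 6-connected interior voxels; returns step count or None if unreachable."""
    frontier = deque([(start, 0)])
    visited = {start}
    while frontier:
        (x, y, z), dist = frontier.popleft()
        if (x, y, z) == goal:
            return dist
        for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            nxt = (x + dx, y + dy, z + dz)
            if nxt in interior and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, dist + 1))
    return None

def skinning_weight(interior, influence_voxel, vertex_voxel, falloff=0.5):
    """Map the shortest interior-path distance to a weight (closer influence -> larger weight)."""
    dist = shortest_voxel_distance(interior, influence_voxel, vertex_voxel)
    if dist is None:
        return 0.0
    return 1.0 / (1.0 + falloff * dist)

# Toy voxelization: a one-voxel-thick "arm" along x.
interior = {(x, 0, 0) for x in range(10)}
print(skinning_weight(interior, influence_voxel=(0, 0, 0), vertex_voxel=(6, 0, 0)))
```
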
  • Patent number: 9836177
    Abstract: Virtual assistants intelligently emulate a representative of a service provider by providing variable responses to user queries received via the virtual assistants. These variable responses may take the context of a user's query into account both when identifying an intent of a user's query and when identifying an appropriate response to the user's query.
    Type: Grant
    Filed: December 30, 2011
    Date of Patent: December 5, 2017
    Assignee: Next IT Innovation Labs, LLC
    Inventors: Fred A Brown, Tanya M Miller, Mark Zartler
  • Patent number: 9836590
    Abstract: Technologies are described herein for enhancing a user presence status determination. Visual data may be received from a depth camera configured to be arranged within a three-dimensional space. A current user presence status of a user in the three-dimensional space may be determined based on the visual data. A previous user presence status of the user may be transformed to the current user presence status, responsive to determining the current user presence status of the user.
    Type: Grant
    Filed: June 22, 2012
    Date of Patent: December 5, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Anne Marie Renee Archambault, Jeffrey Scott Berg, Xiping Zuo, Abhishek Agrawal
  • Patent number: 9830491
    Abstract: A particular implementation selects two or more fiducial markers to be embedded into a video to convey information. Specifically, the translation, scaling and rotation between a reference marker and a secondary marker can be used to transmit information. When more information needs to be embedded, more secondary markers can be used. The transformation between the fiducial markers can also evolve over time as the information to be embedded evolves over time. At the receiving side, a reader device captures a video including multiple fiducial markers and determines the translation, scaling and rotation between the fiducial markers. Based on the transformation of the fiducial markers, the reader device can retrieve the information embedded in the captured video by the fiducial markers.
    Type: Grant
    Filed: August 2, 2016
    Date of Patent: November 28, 2017
    Assignee: THOMSON Licensing
    Inventors: Anthony Laurent, Bernard Denis, Jean-Eudes Marvie, Eric Hubert
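
Decoding data from the relative pose of two markers, as patent 9830491 describes, reduces to recovering the translation, scale, and rotation that map the reference marker onto a secondary marker and then quantizing those values into symbols. A hedged 2D sketch, assuming each detected marker is reported as a center, size, and orientation angle; the quantization table is a made-up example, not the patent's encoding:

```python
def relative_transform(reference, secondary):
    """Translation, scale, and rotation of the secondary marker relative to the reference.
    Each marker is a dict with 'center' (x, y), 'size', and 'angle' in degrees."""
    tx = secondary["center"][0] - reference["center"][0]
    ty = secondary["center"][1] - reference["center"][1]
    scale = secondary["size"] / reference["size"]
    rotation = (secondary["angle"] - reference["angle"]) % 360.0
    return tx, ty, scale, rotation

def decode_symbol(rotation, bits=4):
    """Illustrative decoding: quantize the relative rotation into 2**bits symbols."""
    step = 360.0 / (2 ** bits)
    return int(rotation // step)

reference = {"center": (100.0, 100.0), "size": 40.0, "angle": 0.0}
secondary = {"center": (220.0, 160.0), "size": 60.0, "angle": 93.0}
tx, ty, scale, rotation = relative_transform(reference, secondary)
print(tx, ty, scale, decode_symbol(rotation))   # rotation carries 4 bits in this toy scheme
```
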
  • Patent number: 9830605
    Abstract: A system that incorporates teachings of the present disclosure may include, for example, a media processor having a controller to present media content provided by a media content source operating in an interactive television (iTV) network, access a client program which presents an overlay that superimposes onto the media content, wherein the client program enables the media processor to associate at least a portion of the media content with a user-generated comment, receive the user-generated comment, wherein the user-generated comment provides commentary on the portion of the media content, associate the user-generated comment with the portion of the media content, and transmit the user-generated comment to a third party for determination of marketing parameters of the portion of the media content. Other embodiments are disclosed.
    Type: Grant
    Filed: October 30, 2009
    Date of Patent: November 28, 2017
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Linda Roberts, E-Lee Chang, Ja-Young Sung, Natasha Barrett Schultz, Robert Arthur King
  • Patent number: 9821232
    Abstract: Embodiments for fostering integration of a user in a multi-player gaming environment by a processor. Each of a plurality of user bioanalytics is recorded over a period of time as the user interacts in the multi-player gaming environment. The recorded bioanalytics are compared against a plurality of game analytics corresponding to aspects of game play in the multi-player gaming environment over the period of time. Based on the bioanalytics and game analytics, an avatar representation of the user is constructed for the multi-player gaming environment.
    Type: Grant
    Filed: August 7, 2015
    Date of Patent: November 21, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Emmanuel Barajas Gonzalez, Shaun E. Harrington, Benjamin K. Rawlins
  • Patent number: 9814983
    Abstract: In an editor a plurality of valid start points are determined. Based on this plurality of start points a user may select one of the points. When a user selects one of the points, the editor determines at least one valid end point. The user may then draw a line between the selected point and a valid end point. As a result of the connection between the two points a new environment is created in the editor.
    Type: Grant
    Filed: July 30, 2014
    Date of Patent: November 14, 2017
    Assignee: NINTENDO CO., LTD.
    Inventors: Rory Johnston, Vivek Melwani, Stephen Mortimer, Yukimi Shimura
  • Patent number: 9811244
    Abstract: A display control device includes: an obtaining unit that obtains a stacking image formed by stacking plural specific images for specifying respective contents of images recorded on at least one surface of each of plural recording media; and a controller that exerts control so that a process is displayed on a display screen, the process sequentially performing an operation to select one specific image from the plural specific images in the stacking image obtained by the obtaining unit and to change the one specific image to represent a state in which at least a part of a recording medium corresponding to the one specific image is turned, to thereby show at least a part of another specific image hidden behind the one specific image while changing one specific image to be selected.
    Type: Grant
    Filed: July 6, 2012
    Date of Patent: November 7, 2017
    Assignee: FUJI XEROX CO., LTD.
    Inventors: Hiroshi Nakada, Kanji Itaki, Kimitake Hasuike, Yasuhiro Hirano
  • Patent number: 9813666
    Abstract: Systems and methods for reducing the bandwidth required to transmit video streams related to faces are described herein. In some aspects, contour information from face recognition technology is captured at a transmitting device and sent to a receiving device. The contour information may be used to reconstruct the face at the receiving device without the need to send an entire video frame of the face.
    Type: Grant
    Filed: May 29, 2012
    Date of Patent: November 7, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Henry Hing Law, Tung Chuen Kwong, Benjamin Koon Pan Chan, Yugang Zhou, Wilson Hung Yu
  • Patent number: 9811937
    Abstract: Techniques for rendering realistic depictions of conversational gestures are provided. Embodiments include generating a data model for a first conversational gesture type, by analyzing captured video data to determine motion attribute data for a plurality of conversational gestures. Additionally, upon receiving a request to splice a gesture of the first conversational gesture type into a first animation, embodiments determine a locomotion of a first virtual character, while the first virtual character is interacting with a second virtual character within the first animation. A gesture of the first conversational gesture type is then stylized, using the generated data model and based on the determined locomotion of the first virtual character within the animation. Embodiments splice the stylized gesture into the locomotion of the first virtual character within the received animation data.
    Type: Grant
    Filed: September 29, 2015
    Date of Patent: November 7, 2017
    Assignee: Disney Enterprises, Inc.
    Inventors: Carol A. O'Sullivan, Kerstin Ruhland, Michael Neff, Yingying Wang
  • Patent number: 9804732
    Abstract: A method of controlling an image forming apparatus using a user terminal includes displaying a popup window corresponding to an event generated in the image forming apparatus, determining whether at least one user terminal is connected to the image forming apparatus, and applying a previously set timeout to the popup window according to the connection of the user terminal.
    Type: Grant
    Filed: August 28, 2015
    Date of Patent: October 31, 2017
    Assignee: S-PRINTING SOLUTION CO., LTD.
    Inventor: Soo-young Kang
  • Patent number: 9799373
    Abstract: Disclosed are systems and methods for improving interactions with and between computers in content generating, searching, hosting and/or providing systems supported by or configured with personal computing devices, servers and/or platforms. The systems interact to identify and retrieve data within or across platforms, which can be used to improve the quality of data used in processing interactions between or among processors in such systems. The disclosed systems and methods provide systems and methods for automatically extracting and creating an animated Graphics Interchange Format (GIF) file from a media file. The disclosed systems and methods identify a number of GIF candidates from a video file, and based on analysis of each candidate's attributes, features and/or qualities, as well as determinations related to an optimal playback setting for the content of each GIF candidate, at least one GIF candidate is automatically provided to a user for rendering.
    Type: Grant
    Filed: November 5, 2015
    Date of Patent: October 24, 2017
    Assignee: YAHOO HOLDINGS, INC.
    Inventors: Yale Song, Alejandro Jaimes
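
Automatically picking a GIF segment, as described for patent 9799373, hinges on scoring candidate segments on their attributes and selecting the best. A small sketch where each candidate carries precomputed feature scores and a weighted sum decides which segment to export; the feature names and weights are invented for illustration and are not the patent's actual criteria:

```python
def score_candidate(candidate, weights):
    """Weighted sum of a candidate's feature scores; higher is better."""
    return sum(weights[name] * candidate["features"].get(name, 0.0) for name in weights)

def pick_gif_candidate(candidates, weights):
    """Return the highest-scoring segment (start/end times in seconds) from a video."""
    return max(candidates, key=lambda c: score_candidate(c, weights))

weights = {"motion": 0.5, "face_presence": 0.3, "sharpness": 0.2}   # illustrative only
candidates = [
    {"start": 12.0, "end": 14.5, "features": {"motion": 0.8, "face_presence": 0.2, "sharpness": 0.9}},
    {"start": 40.0, "end": 43.0, "features": {"motion": 0.6, "face_presence": 0.9, "sharpness": 0.7}},
]
best = pick_gif_candidate(candidates, weights)
print(best["start"], best["end"])
```
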
  • Patent number: 9799159
    Abstract: A wagering gaming apparatus is provided, comprising a 3-dimensional (3D) display device; at least one processor programmed to cause the 3D display device to display a 3D scene for a game, the 3D scene comprising a virtual 3D space in which a plurality of virtual game components are displayed; and at least one contactless sensor device configured to sense a location and shape of a physical object in a physical 3D space and generate 3D information indicative of the location and shape of the physical object in the physical 3D space. In some embodiments, the at least one processor is programmed to: update a 3D model for a virtual object in the 3D scene, the virtual object corresponding to the physical object; and detect an interaction between the virtual object and at least one virtual game component in the 3D scene.
    Type: Grant
    Filed: June 22, 2015
    Date of Patent: October 24, 2017
    Assignee: IGT CANADA SOLUTIONS ULC
    Inventors: Stefan Keilwert, Franz Pierer, Sven Aurich, Fayez Idris, David V. Froy, Jr.
  • Patent number: 9800527
    Abstract: The present disclosure relates to image processing technologies, and provides a method for displaying an image, comprising: receiving a picture uploaded by a client; intercepting a first image and a second image from the picture; and associating the intercepted first image and the intercepted second image with a network account of a user; when monitoring that the network account is logged in, pushing the first image and the second image as content of a data card of the network account to a client for displaying. The solution of the present invention can mix the avatar of the data card and the background image of the data card together, and display, via the background image of the data card, more information related to the avatar of the data card.
    Type: Grant
    Filed: September 23, 2014
    Date of Patent: October 24, 2017
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Jiawen Wu, Dongfang Hong
  • Patent number: 9799134
    Abstract: A method and system for high-performance real-time adjustment of colors and patterns on one or more pre-selected elements in a playing video, interactive 360° content, or other image, using graphics processing unit (“GPU”) shader code to process the asset per-pixel and blend in a target color or pattern based on prepared masks and special metadata lookup tables encoded visually in the video file. The method and system can generate asset-specific optimized GPU code that is generated from templates. Pixels are blended into the target asset using the source image for shadows and highlights, one or more masks, and various metadata lookup-tables, such as a texture lookup-table that allows for changing or adding patterns, z-depth to displace parts of the image, or normals to calculate light reflection and refraction.
    Type: Grant
    Filed: January 11, 2017
    Date of Patent: October 24, 2017
    Inventor: Daniel Haveman
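
The per-pixel recoloring in patent 9799134 runs as GPU shader code, but the core blend is easy to show on the CPU: a mask controls how strongly a target color replaces each source pixel while the source's shadows and highlights are preserved. A pure-Python sketch over (r, g, b) tuples in 0..255, using a simple luminance-preserving blend that is an assumption rather than the patented shader logic:

```python
def luminance(pixel):
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def recolor_pixel(source, target_color, mask_value):
    """Blend the target color into the source pixel, scaled by the source luminance
    (to keep shadows/highlights) and by the mask (0 = untouched, 255 = fully recolored)."""
    shade = luminance(source) / 255.0
    alpha = mask_value / 255.0
    blended = tuple(int(c * shade) for c in target_color)
    return tuple(int((1.0 - alpha) * s + alpha * b) for s, b in zip(source, blended))

def recolor_image(pixels, mask, target_color):
    """Apply the blend to a row-major list of rows of (r, g, b) pixels."""
    return [
        [recolor_pixel(p, target_color, m) for p, m in zip(prow, mrow)]
        for prow, mrow in zip(pixels, mask)
    ]

image = [[(200, 180, 160), (40, 40, 40)]]     # one bright and one dark pixel
mask = [[255, 255]]                           # recolor both fully
print(recolor_image(image, mask, target_color=(220, 30, 30)))
```
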
  • Patent number: 9798935
    Abstract: A method is described for determining a body parameter of a person outside a vehicle. The method may include capturing a first set of data of the person by a first data capturing device of the vehicle, the captured first set of data representative of a first body posture of the person, capturing a second set of data of the person by a second data capturing device of the vehicle, the captured second set of data representative of a second body posture of the person different from the first body posture, and using the first and second sets of data as input for estimation of the body parameter of the person. Use of a data capturing device of a vehicle is also described, and optionally a distance measurement system of the vehicle, for determining a body parameter of a person according to the method.
    Type: Grant
    Filed: June 10, 2013
    Date of Patent: October 24, 2017
    Assignee: Volvo Car Corporation
    Inventors: Andreas Sandahl, David De Val, Birgit Klinton Kaamark, Magnus Baeckelie, Marcus Rothoff
  • Patent number: 9799133
    Abstract: Examples of systems and methods for non-facial animation in facial performance driven avatar system are generally described herein. A method for facial gesture driven body animation may include capturing a series of images of a face, and computing facial motion data for each of the images in the series of images. The method may include identifying an avatar body animation based on the facial motion data, and animating a body of an avatar using the avatar body animation.
    Type: Grant
    Filed: December 23, 2014
    Date of Patent: October 24, 2017
    Assignee: Intel Corporation
    Inventors: Xiaofeng Tong, Qiang Eric Li, Yangzhou Du, Wenlong Li, Johnny C. Yip
  • Patent number: 9792725
    Abstract: The invention discloses a method for image and video virtual hairstyle modeling, including: performing data acquisition for a target subject by using a digital device and obtaining a hairstyle region from an image by segmentation; obtaining a uniformly distributed static hairstyle model which conforms to the original hairstyle region by solving an orientation ambiguity problem of an image hairstyle orientation field; calculating a movement of the hairstyle in a video by tracing a movement of a head model and estimating non-rigid deformation; and generating a dynamic hairstyle model at every moment during the moving process, so that the dynamic hairstyle model fits the real movement of the hairstyle in the video naturally. The method is used to perform virtual 3D model reconstruction with physical rationality for individual hairstyles in single views and video sequences, and is widely applied in creating virtual characters and in many hairstyle editing applications for images and videos.
    Type: Grant
    Filed: November 7, 2014
    Date of Patent: October 17, 2017
    Assignee: ZHEJIANG UNIVERSITY
    Inventors: Yanlin Weng, Menglei Chai, Lvdi Wang, Kun Zhou
  • Patent number: 9792363
    Abstract: A method for video playback uses only resources universally supported by a browser (“inline playback”) operating in virtually all handheld media devices. In one case, the method first prepares a video sequence for display by a browser by (a) dividing the video sequence into a silent video stream and an audio stream; (b) extracting from the silent video stream a number of still images, the number of still images corresponding to at least one of a desired output frame rate and a desired output resolution; and (c) combining the still images into a composite image. In one embodiment, the composite image has a number of rows, with each row being formed by the still images created from a fixed duration of the silent video stream.
    Type: Grant
    Filed: February 1, 2011
    Date of Patent: October 17, 2017
    Assignee: VDOPIA, INC.
    Inventors: Ryan Patrick McConville, Bhupendra Singh, Prashant Pandey, Chhavi Upadhyay, Srikanth Kakani
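
The inline-playback trick of patent 9792363 packs still frames into one composite image so the browser can "play" video by stepping across the grid. A sketch of just the packing step using Pillow, assuming the frames have already been extracted as equally sized images and that each row covers a fixed duration of the stream; the row length and tile size are illustrative choices:

```python
from PIL import Image  # Pillow

def build_composite(frames, frames_per_row):
    """Pack equally sized frames into a grid: row-major, frames_per_row per row."""
    tile_w, tile_h = frames[0].size
    rows = (len(frames) + frames_per_row - 1) // frames_per_row
    sheet = Image.new("RGB", (tile_w * frames_per_row, tile_h * rows))
    for index, frame in enumerate(frames):
        col, row = index % frames_per_row, index // frames_per_row
        sheet.paste(frame, (col * tile_w, row * tile_h))
    return sheet

# Usage with placeholder frames; in practice these would be stills sampled from
# the silent video stream at the desired output frame rate and resolution.
frames = [Image.new("RGB", (160, 90), color=(i * 25, 0, 0)) for i in range(10)]
build_composite(frames, frames_per_row=5).save("composite.png")
```
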
  • Patent number: 9792723
    Abstract: The disclosure provides an approach for progressively sculpting three-dimensional (3D) geometry. In one configuration, a sculpting application receives time-based sculpts and stores the sculpted changes from the original geometry, referred to herein as “offsets,” in “fixes” which include (time, offsets) pairs, with the offsets being defined in relation to a reference frame. Each fix may further be associated with a “set” which includes portions of the geometry that are managed together. The sculpting application automatically provides smooth transitions between sculpts by applying scatter-data interpolation to interpolate the offsets of successive fixes, thereby generating new offsets for frames in between user-provided fixes. Further, the user may modify an envelope curve for a set to scale offsets, including offsets in fixes and those automatically generated through interpolation.
    Type: Grant
    Filed: April 7, 2015
    Date of Patent: October 17, 2017
    Assignee: Disney Enterprises, Inc.
    Inventors: Gene S. Lee, Brian Whited, David Suroviec
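
The progressive-sculpting data model of patent 9792723 stores (time, offsets) fixes and interpolates offsets for the frames between them. A simplified sketch with per-vertex 3D offsets and plain linear interpolation standing in for the patent's scattered-data interpolation; the class and method names are invented:

```python
import bisect

class SculptSet:
    """Holds (time, offsets) fixes for one set of vertices and interpolates between them."""

    def __init__(self):
        self._times = []     # sorted frame times
        self._offsets = []   # per-fix list of (dx, dy, dz) offsets, one per vertex

    def add_fix(self, time, offsets):
        index = bisect.bisect(self._times, time)
        self._times.insert(index, time)
        self._offsets.insert(index, offsets)

    def offsets_at(self, time):
        """Offsets for an arbitrary frame: hold the ends, interpolate linearly in between."""
        if time <= self._times[0]:
            return self._offsets[0]
        if time >= self._times[-1]:
            return self._offsets[-1]
        hi = bisect.bisect(self._times, time)
        lo = hi - 1
        t = (time - self._times[lo]) / (self._times[hi] - self._times[lo])
        return [
            tuple(a + t * (b - a) for a, b in zip(p, q))
            for p, q in zip(self._offsets[lo], self._offsets[hi])
        ]

sculpt = SculptSet()
sculpt.add_fix(0, [(0.0, 0.0, 0.0)])
sculpt.add_fix(10, [(1.0, 0.5, 0.0)])
print(sculpt.offsets_at(5))   # halfway between the two fixes
```
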
  • Patent number: 9785883
    Abstract: Users receive content recommendations from a personalized, generalized recommendation service that aggregates and selects content of high personal relevance to each individual user from a large pool of both personal and public content. The received content is filtered and the content determined to be relevant is cached. When a user request for content is received, the cached content is rescored and the content determined to be most relevant based on satisfaction of a relevance threshold is selected and forwarded to the user. Feedback methodologies are also implemented so that a user's actions are taken into consideration in real time and can affect subsequent recommendations to the user.
    Type: Grant
    Filed: April 27, 2012
    Date of Patent: October 10, 2017
    Assignee: EXCALIBUR IP, LLC
    Inventors: Chris LuVogt, Vu B. Nguyen, Brian Theodore, Ketan Bhatia, Justine Shen, Deepa Mahalingam
  • Patent number: 9786085
    Abstract: A rail manipulator indicates the possible range(s) of movement of a part of a computer-generated character in a computer animation system. The rail manipulator obtains a model of the computer-generated character. The model may be a skeleton structure of bones connected at joints. The interconnected bones may constrain the movements of one another. When an artist selects one of the bones for movement, the rail manipulator determines the range of movement of the selected bone. The determination may be based on the position and/or the ranges of movements of other bones in the skeleton structure. The range of movement is displayed on-screen to the artist, together with the computer-generated character. In this way, the rail manipulator directly communicates to the artist the degree to which a portion of the computer-generated character can be moved, in response to the artist's selection of the portion of the computer-generated character.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: October 10, 2017
    Assignee: DreamWorks Animation L.L.C.
    Inventor: Alexander P. Powell
  • Patent number: 9786032
    Abstract: A graphic character object temporary storage stores parameters of a character and associated default values in a hierarchical data structure and one or more animation object data represented in a hierarchical data structure, the one or more animation object data having an associated animation, the graphic character object temporary storage and the animation object data being part of a local memory of a computer system. A method includes receiving a vector graphic object having character part objects which are represented as geometric shapes, displaying a two dimensional character, changing the scale of a part of the displayed two dimensional character, and storing an adjusted parameter in the graphic character object temporary storage as a percentage change from the default value, displaying a customized two dimensional character, applying keyframe data in an associated animation object data to the character parts objects, and displaying an animation according to the keyframe data.
    Type: Grant
    Filed: July 28, 2015
    Date of Patent: October 10, 2017
    Assignee: GOOGLE INC.
    Inventors: Asa Jonas Ivry Block, Suzanne Chambers, George Michael Brower, Igor Clark, Richard The
  • Patent number: 9772738
    Abstract: A mobile terminal and screen operation method for the same are disclosed. The screen operation method includes: displaying a screen containing an amorphous object that is changeable at least in part to a specific form according to an input event; receiving a generated input event; and displaying a concrete object that is generated from the amorphous object by modifying the amorphous object at least in part according to the input event.
    Type: Grant
    Filed: February 22, 2013
    Date of Patent: September 26, 2017
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Minwook Na, Jongwoo Shin, Kangsik Choi, Minsoo Kwon, Jeeyeun Wang
  • Patent number: 9773524
    Abstract: A method for video editing using a mobile terminal and a remote computer is disclosed. A user selects a user video to edit using a mobile application of the mobile terminal. The user selects a visual effect and parameters of the visual effect using the mobile application. Subsequently, the mobile application provides a preview of the visual effect superimposed over the user video using a series of still images representing the visual effect. When the user confirms the preview, the mobile terminal generates a request for video editing and sends the request to a server. The request includes identification of the visual effect for combining the visual effect and the user video as confirmed by the preview. Based on the request from the mobile terminal, the server combines a video clip of the visual effect and the user video into a resulting video.
    Type: Grant
    Filed: June 3, 2016
    Date of Patent: September 26, 2017
    Assignee: MAVERICK CO., LTD.
    Inventors: Joo Hyun Oh, Min Jung, Byulsaim Kwak
  • Patent number: 9772813
    Abstract: One or more embodiments of the disclosure provide systems and methods for providing media presentations to users of a media presentation system. A media presentation generally includes a plurality of media segments provided by multiple users of the media presentation system. In one or more embodiments, a user of the media presentation system may share a media presentation with a co-user. The media presentation system can enable the co-user, if authorized by the user, to contribute (e.g., add a media segment) to a media presentation shared with the co-user.
    Type: Grant
    Filed: March 31, 2015
    Date of Patent: September 26, 2017
    Assignee: FACEBOOK, INC.
    Inventors: Joshua Alexander Miller, Leo Litterello Mancini, Michael Slater
  • Patent number: 9766879
    Abstract: Supplemental functionalities may be provided for an executable program via an ontology instance. In some embodiments, a computer program (e.g., an executable program or other computer program) associated with an ontology may be caused to be run. The ontology may include information indicating attributes for a set of applications. An instance of the ontology may be obtained, which may correspond to an application of the set of applications. Based on the ontology instance, supplemental information may be generated for the computer program. The supplemental information may be related to one or more functionalities of the application to be added to the executable program. The supplemental information may be provided as input to the computer program. The supplemental information, at least in part, may cause the one or more functionalities of the application to be made available via the executable program.
    Type: Grant
    Filed: December 28, 2016
    Date of Patent: September 19, 2017
    Assignee: REACTIVECORE LLC
    Inventor: Michel Dufresne
  • Patent number: 9767855
    Abstract: In a model calculating apparatus (1) according to the invention, for the purpose of creating a detailed 3D model (20), a recording device (3) generates a first video data stream (4), generates a reduced video data stream (13) from frames (26) of the first video data stream (4), and processes this reduced video data stream further for the purpose of creating an approximated 3D model (30).
    Type: Grant
    Filed: February 6, 2014
    Date of Patent: September 19, 2017
    Assignee: Testo AG
    Inventors: Jan-Friso Evers-Senne, Martin Stratmann, Hellen Altendorf
  • Patent number: 9769566
    Abstract: Sound control circuit comprising a game port, installed on a digital audiovisual reproduction system managed by an operating system, characterized in that the game port in the sound control circuit is used to provide access to the configuration of the audiovisual reproduction system and/or additional management functions for the audiovisual reproduction system.
    Type: Grant
    Filed: April 30, 2012
    Date of Patent: September 19, 2017
    Assignee: TouchTunes Music Corporation
    Inventor: Guy Nathan
  • Patent number: 9766786
    Abstract: Techniques and apparatuses for visual storytelling on a mobile media-consumption device are described. These techniques and apparatuses enable a user to view events central to the story while also viewing context for the story. By so doing, a user may enjoy the story as the story's author intended without sacrificing a user's ability to engage with the story's context.
    Type: Grant
    Filed: July 16, 2014
    Date of Patent: September 19, 2017
    Assignee: Google Technology Holdings LLC
    Inventors: Baback Elmieh, Darren Mark Austin, Brian M. Collins, Mark Jason Oftedal, Jan J. Pinkava, Douglas Paul Sweetland
  • Patent number: 9762918
    Abstract: A method and apparatus of line buffer reduction for context adaptive entropy processing are disclosed. The context formation for context adaptive entropy processing depends on block information associated with one or more neighboring blocks. When the neighboring block is on a different side of a region boundary from the current block, the block information is replaced by replacement block information to reduce or remove line buffer requirement for storing the block information of neighboring blocks on the other side of the region boundaries from the current block. The context adaptive entropy processing is CABAC encoding, CABAC decoding, CAVLC encoding, or CAVLC decoding.
    Type: Grant
    Filed: April 23, 2012
    Date of Patent: September 12, 2017
    Assignee: HFI INNOVATION INC.
    Inventors: Tzu-Der Chuang, Yu-Wen Huang, Ching-Yeh Chen
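
The line-buffer reduction in patent 9762918 swaps in replacement data whenever a neighboring block lies on the other side of a region boundary (for example, the row above the current region), so that row's block information never has to be buffered. A hedged sketch of the context-selection step, with a toy two-neighbor context model standing in for the real CABAC/CAVLC context tables:

```python
def neighbor_flag(blocks, x, y, region_top, replacement=0):
    """Flag of the block at (x, y); use a replacement value instead of reading the
    line buffer when the neighbor sits above the current region boundary."""
    if y < region_top or x < 0 or y < 0:
        return replacement
    return blocks.get((x, y), replacement)

def context_index(blocks, x, y, region_top):
    """Toy context formation: depends on the left and above neighbors' flags, with
    the above neighbor replaced when it lies across the region boundary."""
    left = neighbor_flag(blocks, x - 1, y, region_top)
    above = neighbor_flag(blocks, x, y - 1, region_top)
    return left + above   # 0, 1, or 2 in this simplified model

# Row 3 is the first row of the current region (region_top = 3), so above-neighbors
# in row 2 are never read from a line buffer.
blocks = {(5, 3): 1, (4, 3): 1, (5, 2): 1}
print(context_index(blocks, 5, 3, region_top=3))   # above flag replaced by 0 -> context 1
```
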
  • Patent number: 9762651
    Abstract: Systems and methods are provided for sharing a screen from a mobile device. For example, a method includes capturing an image of a screen displayed on the mobile device in response to a command to share the screen, receiving user instructions for redacting a portion of the image, and transmitting the image with the selected portion redacted to a recipient device selected by the user. As another example, a method includes receiving, from a first mobile device, an identifier for a recipient and an image representing a captured screen of a first mobile device, copying the image to an image repository associated with the recipient, performing recognition on the image, generating annotation data for the image, based on the recognition, that includes at least one visual cue, and providing the image and the annotation data to a second mobile device, the second mobile device being associated with the recipient.
    Type: Grant
    Filed: August 21, 2014
    Date of Patent: September 12, 2017
    Assignee: Google Inc.
    Inventors: Matthew Sharifi, David Petrou
  • Patent number: 9760568
    Abstract: Systems and methods are provided for enabling communications between users of an instant messaging application and a virtual world environment. In accordance with one implementation, a method is provided that includes operations performed by one or more processors, including enabling a first user to navigate the virtual world environment by controlling an avatar representing the first user. The method also includes capturing a first paralinguistic indicator made by the first user, the first paralinguistic indicator configured for communications in the virtual world environment. In addition, the method includes translating the first paralinguistic indicator into a message configured for text-based communications in the instant messaging application, the message comprising at least one of a text description of the first paralinguistic indicator and a second paralinguistic indicator configured for communications in the instant messaging application.
    Type: Grant
    Filed: May 6, 2014
    Date of Patent: September 12, 2017
    Assignee: Oath Inc.
    Inventor: David S. Bill
  • Patent number: 9761035
    Abstract: Methods and systems for dynamic user interfaces are provided. A user interface allows a user to receive information about a computer system's state and to make changes to state, such as with touch screen devices. Dynamic user interfaces provide advanced methods of interfacing with the computer system, receiving information, and changing computer state. Advanced methods include improved gestural controls like interrupting or fast-forwarding an animated transition. Advanced methods of receiving information from the computer system are also provided, such as real-time data updates mid-animation and meaning conveyed through motion of and/or configuration change of UI elements. Defined animation pathways in the system can have different relative velocities as a function of percentage completion regardless of the duration(s) of the animation pathways, allowing for more fluid UIs.
    Type: Grant
    Filed: March 3, 2015
    Date of Patent: September 12, 2017
    Assignee: MINDSTREAM INTERFACE DYNAMICS, INC.
    Inventor: Jeremy Flores
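
Patent 9761035 describes animation pathways whose relative velocity is a function of percentage completion regardless of duration, which behaves like an easing curve evaluated on normalized progress, plus gestural controls such as interrupting a transition mid-flight. A minimal sketch assuming an ease-in-out curve and an interrupt that simply reverses direction from the current completion point; both choices are assumptions, not the patent's specifics:

```python
def ease_in_out(progress):
    """Relative position along the pathway as a function of completion in [0, 1];
    the same curve applies no matter how long the transition lasts."""
    return 3 * progress ** 2 - 2 * progress ** 3   # smoothstep

class Transition:
    def __init__(self, start_value, end_value, duration):
        self.start_value, self.end_value, self.duration = start_value, end_value, duration
        self.completion = 0.0   # percentage completion in [0, 1]
        self.direction = 1.0

    def step(self, dt):
        """Advance (or rewind) the transition by dt seconds and return the eased value."""
        self.completion = min(1.0, max(0.0, self.completion + self.direction * dt / self.duration))
        eased = ease_in_out(self.completion)
        return self.start_value + eased * (self.end_value - self.start_value)

    def interrupt(self):
        """Gesture interrupt: reverse the transition from wherever it currently is."""
        self.direction *= -1.0

t = Transition(start_value=0.0, end_value=100.0, duration=0.5)
print(t.step(0.1), t.step(0.1))   # partway through
t.interrupt()
print(t.step(0.1))                # heading back toward the start
```
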
  • Patent number: 9752889
    Abstract: A graphics display system for driver information and driver assistance applications generates controllable and dynamic graphical effects in conjunction with 3D visualization of maps. The system generates a display of a map in a 3D virtual environment that responds to the environment changes in a dynamic and visually intuitive manner for a vehicle operator. The system processes environment information, including lighting condition, weather condition, and other data acquired from different sensors in the vehicle such as cameras and lighting sensors, or through networked information services. The graphics display can be integrated with different driver information and driver assistance system embodiments including mobile platforms, in-vehicle information systems, web platforms, and PC systems.
    Type: Grant
    Filed: March 14, 2014
    Date of Patent: September 5, 2017
    Assignee: Robert Bosch GmbH
    Inventors: Liu Ren, Lei Yang
  • Patent number: 9754379
    Abstract: A method and a system for determining parameters of an off-axis virtual camera provided by embodiments of the present invention can extract a scene depth map for each video frame from a depth buffer, determine the minimum value of edge depth values of the scene depth map as the closest scene edge depth of each video frame, determine the depth of a first object as the depth of an object of interest of each video frame, use the smaller value between the closest scene edge depth and the depth of an object of interest as the zero-parallax value and obtain a zero-parallax value sequence constituted by the zero-parallax value of each video frame. The present invention realizes automatic determination of the zero parallax of each video frame rather than manual setting thereof, and thus the determination will not be affected by factors such as lack of experience, and the amount of work for a technician is also reduced.
    Type: Grant
    Filed: February 1, 2016
    Date of Patent: September 5, 2017
    Assignee: BEIJING UNIVERSITY OF POSTS AND TELECOMMUNICATIONS
    Inventors: Huadong Ma, Liang Liu, Aner Liu, Dawei Lu
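
The zero-parallax rule in patent 9754379 is a per-frame minimum of two depths: the closest scene-edge depth (the minimum along the border of the depth map) and the depth of the object of interest. A short sketch over depth maps given as lists of rows of floats; the object-of-interest depth is assumed to come from elsewhere in the pipeline:

```python
def closest_edge_depth(depth_map):
    """Minimum depth value along the border of the frame's depth map."""
    top, bottom = depth_map[0], depth_map[-1]
    sides = [row[0] for row in depth_map] + [row[-1] for row in depth_map]
    return min(min(top), min(bottom), min(sides))

def zero_parallax_sequence(depth_maps, object_depths):
    """Per-frame zero-parallax value: the smaller of edge depth and object-of-interest depth."""
    return [
        min(closest_edge_depth(depth_map), object_depth)
        for depth_map, object_depth in zip(depth_maps, object_depths)
    ]

depth_maps = [
    [[8.0, 7.5, 9.0],
     [6.0, 2.0, 7.0],
     [8.5, 7.0, 9.5]],
]
print(zero_parallax_sequence(depth_maps, object_depths=[4.0]))   # [4.0]: object is nearer than any edge
```
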
  • Patent number: 9753602
    Abstract: Systems and methods are provided for providing a playlist transport bar. The playlist transport bar provides an overlay which graphically represents assets (e.g., programs) of a playlist in a manner that enables a user to simultaneously ascertain a playback position within the playlist and a particular asset. The playlist transport bar may include asset regions which each correspond to an asset in a playlist and a position indication region which may provide information relating to a playback position.
    Type: Grant
    Filed: June 14, 2013
    Date of Patent: September 5, 2017
    Assignee: Rovi Guides, Inc.
    Inventors: Jon P. Radloff, Danny R. Gaydou, II, Thomas J. Carroll, Mark Heyner, Kenneth F. Carpenter, Jr.
  • Patent number: 9754400
    Abstract: In a method for reconstructing a motion of an object from a sequence of motion pattern segments of a computer model of the object, a motion transition between an initial motion state and a final motion state of the object in a time interval of the motion is captured based on position data of the at least one sampling point which is received from the position marker. Further, at least one motion pattern segment corresponding to the motion transition is selected from a plurality of motion patterns of the computer model which are stored in a database such that the selected motion pattern segment leads with sufficient probability from the initial motion state to the final motion state for the time interval. Furthermore, an image of the motion of the object for the time interval is reconstructed using the initial motion state and the selected motion pattern segment.
    Type: Grant
    Filed: November 12, 2013
    Date of Patent: September 5, 2017
    Assignee: FRAUNHOFER-GESELLSCHAFT ZUR FOERDERUNG DER ANGEWANDTEN FORSCHUNG E.V.
    Inventors: Stephan Otto, Ingmar Bretz, Norbert Franke, Thomas Von Der Gruen, Christopher Mutschler
  • Patent number: 9749494
    Abstract: A user terminal apparatus is provided. The user terminal apparatus includes a camera unit configured to photograph an object, a controller configured to detect an object image from an image of the object photographed by the camera unit, generate image metadata used to change a feature part of the object image, and generate an image file by matching the object image with the image metadata, a storage configured to store the image file, and a display configured to, in response to selecting the image file, display the object image in which the feature part is changed based on the image metadata.
    Type: Grant
    Filed: July 8, 2014
    Date of Patent: August 29, 2017
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jae-yun Jeong, Sung-jin Kim, Yong-gyoo Kim, Sung-dae Cho, Ji-hwan Choe
  • Patent number: 9747495
    Abstract: Systems and methods in accordance with embodiments of the invention enable collaborative creation, transmission, sharing, non-linear exploration, and modification of animated video messages. One embodiment includes a video camera, a processor, a network interface, and storage containing an animated message application, and a 3D character model. In addition, the animated message application configures the processor to: capture a video sequence using the video camera; detect a human face within a sequence of video frames; track changes in human facial expression of a human face detected within a sequence of video frames; map tracked changes in human facial expression to motion data, where the motion data is generated to animate the 3D character model; apply motion data to animate the 3D character model; render an animation of the 3D character model into a file as encoded video; and transmit the encoded video to a remote device via the network interface.
    Type: Grant
    Filed: March 6, 2013
    Date of Patent: August 29, 2017
    Assignee: ADOBE SYSTEMS INCORPORATED
    Inventors: Stefano Corazza, Daniel Babcock, Charles Pina, Sylvio Drouin
  • Patent number: 9741146
    Abstract: Embodiments disclose an animation system designed to generate animation that appears realistic to a user without using a physics engine. The animation system can use a measure of kinetic energy and reference information to determine whether the animation appears realistic or satisfies the laws of physics. Based, at least in part, on the kinetic energy, the animation system can determine whether to adjust a sampling rate of animation data to reflect more realistic motion compared to a default sampling rate.
    Type: Grant
    Filed: September 30, 2015
    Date of Patent: August 22, 2017
    Assignee: Electronic Arts, Inc.
    Inventor: Hitoshi Nishimura
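
The energy test in patent 9741146 compares a measure of kinetic energy against reference information to decide whether animation data should be resampled. A hedged sketch that sums 1/2·m·v² over the joints of two consecutive poses and raises the sampling rate when the energy exceeds a reference threshold; the threshold, rates, and uniform joint mass are invented for illustration:

```python
def kinetic_energy(prev_pose, pose, dt, mass=1.0):
    """Sum of 0.5 * m * v^2 over all joints, with velocity estimated by finite differences.
    Poses are dicts mapping joint name -> (x, y, z)."""
    energy = 0.0
    for joint, (x, y, z) in pose.items():
        px, py, pz = prev_pose[joint]
        v_squared = ((x - px) ** 2 + (y - py) ** 2 + (z - pz) ** 2) / (dt * dt)
        energy += 0.5 * mass * v_squared
    return energy

def choose_sampling_rate(prev_pose, pose, dt, reference_energy,
                         default_rate=30, fast_rate=60):
    """Sample the animation data more densely when the motion is energetic enough
    that the default rate would look unrealistic."""
    return fast_rate if kinetic_energy(prev_pose, pose, dt) > reference_energy else default_rate

prev_pose = {"hand": (0.0, 1.0, 0.0), "foot": (0.0, 0.0, 0.0)}
pose = {"hand": (0.4, 1.1, 0.0), "foot": (0.0, 0.0, 0.1)}
print(choose_sampling_rate(prev_pose, pose, dt=1 / 30, reference_energy=50.0))
```
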
  • Patent number: 9740377
    Abstract: A composite video including a plurality of videos in a single stream is sent from a video streamer server to a client, where it is presented on an electronic display. A user may make a selection in the composite video that is translated to an absolute media reference that may include information identifying which video of the composite video was selected, an absolute media time identifying an elapsed time from the beginning of the video to the selection, and/or an absolute media spatial coordinate identifying a spatial location of the video that was selected. Auxiliary information related to the composite video may be obtained based on the selection and the absolute media reference and displayed to the user.
    Type: Grant
    Filed: January 19, 2012
    Date of Patent: August 22, 2017
    Assignee: Vuemix, Inc.
    Inventors: Govind Kizhepat, Yung-Hsiao Lai, Erik Matthew Nystrom, Sarvesh Arun Telang
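
Mapping a click in a composite stream back to an absolute media reference, as in patent 9740377, needs the composite's tiling layout plus each tile's offset into its source video. A sketch assuming a simple uniform grid layout; the layout metadata and field names are illustrative, not taken from the patent:

```python
def absolute_media_reference(click_x, click_y, elapsed, layout):
    """Translate a click on the composite frame into (video id, absolute media time,
    absolute spatial coordinate within that video)."""
    tile_w, tile_h, columns = layout["tile_w"], layout["tile_h"], layout["columns"]
    col, row = int(click_x // tile_w), int(click_y // tile_h)
    tile = layout["tiles"][row * columns + col]
    local_x, local_y = click_x - col * tile_w, click_y - row * tile_h
    return {
        "video_id": tile["video_id"],
        "media_time": tile["start_offset"] + elapsed,   # elapsed time into the composite
        "coordinate": (local_x, local_y),
    }

layout = {
    "tile_w": 320, "tile_h": 180, "columns": 2,
    "tiles": [
        {"video_id": "news", "start_offset": 0.0},
        {"video_id": "sports", "start_offset": 12.0},
        {"video_id": "weather", "start_offset": 3.0},
        {"video_id": "music", "start_offset": 0.0},
    ],
}
print(absolute_media_reference(click_x=400, click_y=50, elapsed=7.5, layout=layout))
```
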
  • Patent number: 9740398
    Abstract: Detecting user input based on multiple gestures is provided. One or more interactions are received from a user via a user interface. An inferred interaction is determined based, at least in part, on a geometric operation, wherein the geometric operation is based on the one or more interactions. The inferred interaction is presented via the user interface. Whether a confirmation has been received for the inferred interaction is determined.
    Type: Grant
    Filed: November 2, 2016
    Date of Patent: August 22, 2017
    Assignee: International Business Machines Corporation
    Inventors: Rachel K. E. Bellamy, Bonnie E. John, Peter K. Malkin, John T. Richards, Calvin B. Swart, John C. Thomas, Jr., Sharon M. Trewin
  • Patent number: 9740947
    Abstract: An architecture for linear-time extraction of maximally stable extremal regions (MSERs) having an image memory, heap memory, a pointer array and processing hardware is disclosed. The processing hardware is configured to analyze, in real time, image pixels in the image memory using a linear-time algorithm to identify a plurality of components of the image. The processing hardware is also configured to place the image pixels in the heap memory for each of the plurality of components of the image, generate a pointer that points to a location in the heap memory that is associated with a start of flooding for another component and store the pointer in the array of pointers. The processing hardware is also configured to access the plurality of components using the array of pointers and determine MSER ellipses based on the components and MSER criteria.
    Type: Grant
    Filed: September 27, 2016
    Date of Patent: August 22, 2017
    Assignee: Khalifa University of Science and Technology
    Inventors: Sohailah Mohamed Rashed Alyammahi, Ehab Najeh Salahat, Hani Hasan Mustafa Saleh, Andrzej Stefan Sluzek, Mohammed Ismail Elnaggar
  • Patent number: 9740291
    Abstract: A presentation system includes the following: a reception unit that receives a start instruction to start a presentation, a detection unit that starts detecting a gesture of a presenter in response to the start instruction, and a control unit that controls an operation for distributing presentation material, based on details of the detected gesture.
    Type: Grant
    Filed: June 22, 2012
    Date of Patent: August 22, 2017
    Assignee: KONICA MINOLTA BUSINESS TECHNOLOGIES, INC.
    Inventors: Takeshi Morikawa, Kaitaku Ozawa, Takeshi Minami, Daisuke Sakiyama, Kazuya Anezaki
  • Patent number: 9734617
    Abstract: Embodiments relate to a method for real-time facial animation, and a processing device for real-time facial animation. The method includes providing a dynamic expression model, receiving tracking data corresponding to a facial expression of a user, estimating tracking parameters based on the dynamic expression model and the tracking data, and refining the dynamic expression model based on the tracking data and estimated tracking parameters. The method may further include generating a graphical representation corresponding to the facial expression of the user based on the tracking parameters. Embodiments pertain to a real-time facial animation system.
    Type: Grant
    Filed: May 27, 2016
    Date of Patent: August 15, 2017
    Assignee: faceshift AG
    Inventors: Sofien Bouaziz, Mark Pauly