Motion Planning Or Control Patents (Class 345/474)
  • Patent number: 10438392
    Abstract: A stylesheet data structure includes a plurality of stylesheet records, each comprising an ontology concept field, a presentation instruction field, and a presentation identifier field. Techniques for ontology-driven animation include receiving a request to render an instance of a first concept in an annotation with an associated ontology. It is determined whether a stylesheet file includes a first stylesheet record that indicates the first concept, wherein the first stylesheet record also indicates a first presentation identifier. If so, then an instance of a first component of the first concept is rendered according to a presentation instruction indicated in a second stylesheet record that also indicates the first presentation identifier. In some embodiments, the instance of the first component of the first concept is an instance of the first concept and the second stylesheet record is the first stylesheet record.
    Type: Grant
    Filed: December 13, 2017
    Date of Patent: October 8, 2019
    Inventor: Evan John Molinelli
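    The record lookup described in the abstract above can be illustrated with a minimal Python sketch; the field names and the example stylesheet below are hypothetical, not taken from the patent.
```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StylesheetRecord:            # hypothetical field names
    concept: str                   # ontology concept field
    presentation_id: str           # presentation identifier field
    instruction: Optional[str]     # presentation instruction field

def instruction_for(concept, records):
    """Resolve a rendering instruction for `concept` via its presentation identifier."""
    first = next((r for r in records if r.concept == concept), None)
    if first is None:
        return None                               # concept not styled
    # A second record sharing the presentation id carries the instruction
    # (it may be the same record, as the abstract notes).
    second = next(r for r in records if r.presentation_id == first.presentation_id
                  and r.instruction is not None)
    return second.instruction

records = [
    StylesheetRecord("Gene", "node-style-1", None),
    StylesheetRecord("*",    "node-style-1", "draw as labelled circle"),
]
print(instruction_for("Gene", records))   # -> "draw as labelled circle"
```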
  • Patent number: 10430642
    Abstract: A three-dimensional model (e.g., motion capture model) of a user is generated from captured images or captured video of the user. A machine learning network may track poses and expressions of the user to generate and refine the three-dimensional model. Refinement of the three-dimensional model may provide more accurate tracking of the user's face. Refining of the three-dimensional model may include refining the determinations of poses and expressions at defined locations (e.g., eye corners and/or nose) in the three-dimensional model. The refining may occur in an iterative process. Tracking of the three-dimensional model over time (e.g., during video capture) may be used to generate an animated three-dimensional model (e.g., an animated puppet) of the user that simulates the user's poses and expressions.
    Type: Grant
    Filed: March 23, 2018
    Date of Patent: October 1, 2019
    Assignee: Apple Inc.
    Inventors: Sofien Bouaziz, Brian Amberg, Thibaut Weise, Patrick Snape, Stefan Brugger, Alex Mansfield, Reinhard Knothe, Thomas Kiser
  • Patent number: 10395345
    Abstract: First and second spatial frame regions are identified in a sequence of motion picture image frames captured at a high frame rate. Different motion blur parameters are determined for each of the first and second spatial frame regions. First and second intermediate frame sequences having frame rates less than the capture frame rate are generated from the original frame sequence. The first motion blur parameter is applied to the first intermediate frame sequence and the second motion blur parameter is applied to the second intermediate frame sequence. The first and second spatial frame regions in the corresponding first and second intermediate frame sequences are composited to produce an output frame sequence having different motion blur in different regions of the scene.
    Type: Grant
    Filed: March 23, 2018
    Date of Patent: August 27, 2019
    Assignee: RealD Inc.
    Inventor: Anthony Davis
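    A minimal sketch of the per-region blur idea, assuming temporal averaging as the blur operator and a boolean mask for the two spatial regions (both are illustrative choices, not the patented method).
```python
import numpy as np

def temporal_blur(frames, start, window):
    """Average `window` consecutive frames starting at `start` (simple synthetic motion blur)."""
    return frames[start:start + window].mean(axis=0)

def downconvert(frames, capture_fps=240, out_fps=24, blur_short=2, blur_long=8, region_mask=None):
    """Produce an out_fps sequence where masked pixels get a longer blur window."""
    step = capture_fps // out_fps                   # high-rate frames per output frame
    out = []
    for t0 in range(0, len(frames) - max(blur_short, blur_long) + 1, step):
        a = temporal_blur(frames, t0, blur_short)   # light blur region
        b = temporal_blur(frames, t0, blur_long)    # heavy blur region
        out.append(np.where(region_mask, b, a))     # composite the two regions
    return np.stack(out)

rng = np.random.default_rng(0)
frames = rng.random((240, 4, 6))                    # 1 s of 240 fps toy footage
mask = np.zeros((4, 6), dtype=bool); mask[:, :3] = True
print(downconvert(frames, region_mask=mask).shape)  # -> (24, 4, 6)
```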
  • Patent number: 10388053
    Abstract: Embodiments of systems disclosed herein reduce or eliminate artifacts or visible discrepancies that may occur when transitioning from one animation to another animation. In certain embodiments, systems herein identify one or more pose or reference features for one or more objects in a frame of a currently displayed animation. Although not limited as such, often the one or more objects are characters within the animation. Systems herein can attempt to match the reference features for the one or more objects to reference features of corresponding objects in a set of potential starting frames for a second animation that is to start being displayed. The potential starting frame with reference features that are an acceptable match with the current frame of the current animation may be selected as a starting frame for playing the second animation, potentially resulting in a smoother transition than starting from the first frame.
    Type: Grant
    Filed: March 25, 2016
    Date of Patent: August 20, 2019
    Assignee: Electronic Arts Inc.
    Inventors: Ben Folsom Carter, Jr., Fabio Zinno
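    A minimal sketch of choosing a starting frame by pose similarity; the summed joint-position error used here is one plausible matching metric, not necessarily the patented one.
```python
import numpy as np

def pick_start_frame(current_pose, candidate_start_poses):
    """Return the index of the candidate starting pose closest to the current pose.

    Poses are (num_joints, 3) arrays of joint positions; distance is summed
    Euclidean joint error.
    """
    errors = [np.linalg.norm(current_pose - cand, axis=1).sum()
              for cand in candidate_start_poses]
    return int(np.argmin(errors))

rng = np.random.default_rng(1)
current = rng.random((20, 3))                          # pose at the end of animation A
candidates = [rng.random((20, 3)) for _ in range(8)]   # candidate starts of animation B
candidates[5] = current + 0.01 * rng.random((20, 3))   # a near-match
print(pick_start_frame(current, candidates))           # -> 5
```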
  • Patent number: 10380784
    Abstract: Provided herein is an electronic apparatus including a storage configured to store a texture image representing a characteristic of a particle of an object; and a processor configured to map the texture image to a plurality of locations where the particle exists and to generate a blending image by blending the mapped texture images, and to render the object based on the blending image.
    Type: Grant
    Filed: July 25, 2016
    Date of Patent: August 13, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Seung-ho Shin, Soo-wan Park, Joon-seok Lee
  • Patent number: 10372317
    Abstract: A method for presenting a media item of a set of media items in a user interface (UI) of a client device is disclosed. The UI includes a first scrub area associated with a first scrub rate and a second scrub area associated with a second scrub rate. The client device receives a first user input via the first scrub area of the UI to navigate through the set of media items at the first scrub rate. The client device receives a second user input that is separate from the first user input via the second scrub area of the UI to navigate through the set of media items at a second scrub rate.
    Type: Grant
    Filed: June 12, 2015
    Date of Patent: August 6, 2019
    Assignee: GOOGLE LLC
    Inventor: Baron Winfield Arnold
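    A small sketch of two scrub areas with different scrub rates; the pixel-per-item values and area names are invented for illustration.
```python
def scrub(index, drag_px, area, total, coarse_px_per_item=40, fine_px_per_item=200):
    """Map a horizontal drag to a new media-item index.

    `area` is "coarse" or "fine"; the fine area needs more pixels per item,
    i.e. a slower scrub rate through the set of media items.
    """
    px_per_item = coarse_px_per_item if area == "coarse" else fine_px_per_item
    new_index = index + drag_px // px_per_item
    return max(0, min(total - 1, new_index))

print(scrub(10, 400, "coarse", 100))  # -> 20 (fast navigation)
print(scrub(10, 400, "fine", 100))    # -> 12 (precise navigation)
```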
  • Patent number: 10369469
    Abstract: A method of runtime animation substitution may include detecting, by a processing device of a video game console, an interaction scenario in an instance of an interactive video game, wherein the interaction scenario comprises a target animation associated with a game character. The method may further include identifying, by the processing device, a valid transitional animation. The method may further include causing, by the processing device, the valid transitional animation to be performed by the game character in the instance of the interactive video game.
    Type: Grant
    Filed: November 9, 2016
    Date of Patent: August 6, 2019
    Assignee: Electronic Arts Inc.
    Inventors: Simon Sherr, Brett Peake
  • Patent number: 10357717
    Abstract: Disclosed is a technique to improve the user friendliness of switching a map display between the real world and a virtual world, while also improving the entertainment value of the game and avoiding the danger arising from using a smartphone while walking.
    Type: Grant
    Filed: July 26, 2016
    Date of Patent: July 23, 2019
    Assignees: EARTHBEAT, INC., DWANGO Co., Ltd.
    Inventors: Shigeo Okajima, Kazuya Asano, Hiroto Tamura
  • Patent number: 10324943
    Abstract: Examples of auto-monitoring and adjusting dynamic data visualizations are provided herein. A data visualization based on initial data can be generated. A series of data updates can be received. The data visualization can be updated based on the series of data updates. Various performance metrics can be monitored, and data updates and/or the updated data visualization can be adjusted accordingly. Performance metrics can include at least one of: a data visualization rendering time; a data transfer time; or a data update generation time. Upon determining that one or more performance metrics exceed a threshold: a time between data updates of the series of data updates can be increased; sampled data can be requested for subsequent data updates; and/or a time-dimension extent of the updated data visualization can be reduced.
    Type: Grant
    Filed: August 10, 2015
    Date of Patent: June 18, 2019
    Assignee: Business Objects Software, Ltd.
    Inventors: Sybil Shim, Daniel Georges, Charles Wilson, Paul van der Eerden, Saeed Jahankhani
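    A rough sketch of the monitoring loop, assuming all three adjustments are applied whenever the combined transfer and render time exceeds a budget (the abstract allows any subset of them); the class and attribute names are hypothetical.
```python
import time

class VisualizationUpdater:
    """Throttle dashboard updates when fetching or rendering gets slow (illustrative only)."""

    def __init__(self, interval_s=1.0, threshold_s=0.5):
        self.interval_s = interval_s        # time between data updates
        self.threshold_s = threshold_s      # budget per update cycle
        self.use_sampling = False           # request sampled data when over budget
        self.window_hours = 24.0            # time-dimension extent shown

    def apply_update(self, fetch, render):
        t0 = time.perf_counter(); data = fetch(); transfer = time.perf_counter() - t0
        t0 = time.perf_counter(); render(data); rendering = time.perf_counter() - t0
        if transfer + rendering > self.threshold_s:
            self.interval_s *= 2            # update less often
            self.use_sampling = True        # ask the server for sampled data
            self.window_hours /= 2          # show a shorter time extent
        return transfer, rendering

updater = VisualizationUpdater()
updater.apply_update(lambda: list(range(100_000)), lambda d: time.sleep(0.6))
print(updater.interval_s, updater.use_sampling, updater.window_hours)  # 2.0 True 12.0
```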
  • Patent number: 10297085
    Abstract: Systems, apparatuses and methods of creating virtual objects may provide for segmenting one or more objects in a scene and highlighting a selected object from the segmented one or more objects based on an input from a user. In one example, a scene-based virtual object is created from the selected object and a behavior is assigned to the scene-based virtual object.
    Type: Grant
    Filed: September 28, 2016
    Date of Patent: May 21, 2019
    Assignee: Intel Corporation
    Inventor: Glen J. Anderson
  • Patent number: 10274726
    Abstract: A head up display arrangement for a motor vehicle includes an image source providing illuminated images. At least one mirror is positioned to provide a first reflection of the illuminated images. A windshield is positioned to receive the first reflection and provide a second reflection of the illuminated image such that the second reflection is visible to a driver of the vehicle who has at least one eye within an eyebox defined by the second reflection. An image capturing device captures images of a head of a driver of the motor vehicle. An electronic processor adjusts, based on the captured images of the driver's head, the illuminated images and/or a position of the at least one mirror.
    Type: Grant
    Filed: April 13, 2017
    Date of Patent: April 30, 2019
    Assignee: Panasonic Automotive Systems Company of America, Division of Panasonic Corporation of North America
    Inventors: Dallas Dwight Hickerson, Thomas Ray Burns
  • Patent number: 10268781
    Abstract: The VISUAL MODELING APPARATUSES, METHODS AND SYSTEMS (“VISUAL MODELING SYSTEM”) transforms and maps visual imagery, photographs, video and the like onto a large scale model or Giant using a matrix of embedded lighting elements in the structure to create a large scale visitor and entertainment attraction.
    Type: Grant
    Filed: February 16, 2016
    Date of Patent: April 23, 2019
    Inventor: Paddy Dunning
  • Patent number: 10268737
    Abstract: Embodiments relate to techniques for performing data blending operations across multiple different data sets comprising data structures with columns and rows. The data sets may be classified and displayed in a visualization (i.e., chart) in a client interface. Columns and rows from the blended data sets may be mapped together (i.e., linked). Updates to the visualization, including adding elements from the data sets, may trigger a data blending process on the backend server in communication with a database. The server may blend the specified data by generating a runtime artifact representing a calculation graph for the blend operation and query the database to retrieve a resulting data set. The data blending operation may comprise collapsing dimensions of a primary data set with linked dimensions of a secondary data set into a blended column and aggregating values of measures in rows of the blended column of the resulting data structure.
    Type: Grant
    Filed: December 30, 2015
    Date of Patent: April 23, 2019
    Assignee: Business Objects Software Limited
    Inventors: Alfred Fung, Ali Moosavi, Erik Schmidt, David Mosimann, Jung-Rung Han
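    A toy sketch of collapsing linked dimensions and aggregating measures with plain Python dictionaries; the column names are hypothetical, and the real system generates a calculation graph and queries a database instead.
```python
from collections import defaultdict

def blend(primary, secondary, link, measures):
    """Group rows of both data sets by the linked dimension and sum their measures.

    `primary`/`secondary` are lists of row dicts, `link` pairs the primary column
    name with the secondary column name, `measures` names each set's measure column.
    """
    blended = defaultdict(lambda: {m: 0 for m in measures.values()})
    for row in primary:
        blended[row[link[0]]][measures["primary"]] += row[measures["primary"]]
    for row in secondary:
        blended[row[link[1]]][measures["secondary"]] += row[measures["secondary"]]
    return [{"Country": key, **vals} for key, vals in blended.items()]

primary = [{"Country": "DE", "Sales": 10}, {"Country": "DE", "Sales": 5}, {"Country": "FR", "Sales": 7}]
secondary = [{"Nation": "DE", "Target": 12}, {"Nation": "FR", "Target": 9}]
print(blend(primary, secondary, ("Country", "Nation"),
            {"primary": "Sales", "secondary": "Target"}))
# -> [{'Country': 'DE', 'Sales': 15, 'Target': 12}, {'Country': 'FR', 'Sales': 7, 'Target': 9}]
```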
  • Patent number: 10265847
    Abstract: A transfer source operation information acquisition unit acquires a plurality of pieces of action information about the transfer source robot; a transfer destination operation information acquisition unit acquires a plurality of pieces of first action information about the transfer destination robot; and a correction unit generates a plurality of pieces of second action information about the transfer destination robot by correcting action information about the transfer source robot by a prescribed update formula using the first action information about the transfer destination robot. The number of the pieces of the first action information about the transfer destination robot is smaller than the number of the pieces of the action information about the transfer source robot, and the number of the pieces of the second action information about the transfer destination robot is larger than the number of the pieces of the first action information about the transfer destination robot.
    Type: Grant
    Filed: December 2, 2015
    Date of Patent: April 23, 2019
    Assignee: SOINN HOLDINGS LLC
    Inventors: Daiki Kimura, Osamu Hasegawa
  • Patent number: 10269001
    Abstract: Disclosed herein are methods and systems for executing a first transaction at a checkout system at substantially the same time that a second transaction is started at the checkout system. For example, a cashier can scan one or more items, adding the items to a first transaction. When all items have been added to the first transaction and a first customer is making a payment in the first transaction, the cashier can begin to add items to a second transaction, such that execution of the first transaction occurs at substantially the same time that items are added to the second transaction.
    Type: Grant
    Filed: January 12, 2016
    Date of Patent: April 23, 2019
    Assignee: Target Brands, Inc.
    Inventors: Greg Rose, Mike Cooley, John Deters, Kevin Jansen, Joseph Brenny, Julie Wegmiller
  • Patent number: 10235468
    Abstract: Embodiments relate to performing data blending operations across multiple different data sets comprising data structures with columns and rows. Columns of data sets to be blended may be linked together. Filters may be applied to data sets before the data blend operation is performed to specify which columns to be displayed in a visualization at a client interface. A direct filter may be applied to one of the data sets to obtain a filtered resulting data set. Data elements of the filtered resulting data set can be identified that correspond to the linked columns of the data sets to be blended. The results of applying the direct filter may then be used as the filtering criteria for an indirect filter to filter a second data set. The results of applying the direct and indirect filters may then be combined together in the data blending operation.
    Type: Grant
    Filed: December 30, 2015
    Date of Patent: March 19, 2019
    Assignee: Business Objects Software Limited
    Inventors: Justin Wong, Ali Moosavi, Saeed Jahankhani
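    A toy sketch of deriving an indirect filter from a direct filter over linked columns; the column names and the final join step are illustrative assumptions.
```python
def blend_with_filters(primary, secondary, link, direct_filter):
    """Apply a direct filter to the primary set, derive an indirect filter for the
    secondary set from the linked column, and combine the survivors."""
    filtered_primary = [r for r in primary if direct_filter(r)]
    allowed = {r[link[0]] for r in filtered_primary}          # indirect filter criteria
    filtered_secondary = [r for r in secondary if r[link[1]] in allowed]
    # Combine: attach secondary rows to matching primary rows on the linked column.
    by_key = {r[link[1]]: r for r in filtered_secondary}
    return [{**p, **by_key.get(p[link[0]], {})} for p in filtered_primary]

primary = [{"Region": "EMEA", "Sales": 10}, {"Region": "APJ", "Sales": 4}]
secondary = [{"Area": "EMEA", "Target": 12}, {"Area": "APJ", "Target": 6}]
print(blend_with_filters(primary, secondary, ("Region", "Area"),
                         lambda r: r["Sales"] > 5))
# -> [{'Region': 'EMEA', 'Sales': 10, 'Area': 'EMEA', 'Target': 12}]
```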
  • Patent number: 10217490
    Abstract: A digital video editing system uses a graphical user interface which facilitates the selection of a video sequence of interest and its representation in a conveniently visualized form. Through the graphical user interface, the user may select a starting frame, a time interval, and a number of frames within the time interval which may be represented by thumbnail depictions of selected video frames. Once the video sequence is represented by a selected sequence of video frames over a selected interval, the user can then use editing techniques to manipulate the portions of the video sequence represented by the thumbnail depictions.
    Type: Grant
    Filed: July 31, 2014
    Date of Patent: February 26, 2019
    Assignee: Intel Corporation
    Inventor: Edward O. Clapper
  • Patent number: 10172566
    Abstract: A system for evaluating the action capacities of an individual comprising at least one measurement module, each measurement module being configured to produce at least one measurement point of a physiological parameter of the individual; a first computation module configured to determine at least one set of measurement points representative of a distribution law for the measurement point or points, termed the measured set; a second computation module configured to compute at least one conditional probability of having one or more states of health; a third computation module configured to compute an average of the computed conditional probability or probabilities; a fourth computation module configured to determine at least one level of action capacity of the individual; and a transmission module configured to transmit a signal representative of the level or levels of action capacity of the individual to a user device.
    Type: Grant
    Filed: May 16, 2017
    Date of Patent: January 8, 2019
    Assignee: AIRBUS OPERATIONS SAS
    Inventors: Benoit Papaix, Matthieu Pujos
  • Patent number: 10176592
    Abstract: The present disclosure relates to systems and processes for capturing an unstructured light field in a plurality of images. In particular embodiments, a plurality of keypoints are identified on a first keyframe in a plurality of captured images. A first convex hull is computed from all keypoints in the first keyframe and merged with previous convex hulls corresponding to previous keyframes to form a convex hull union. Each keypoint is tracked from the first keyframe to a second image. The second image is adjusted to compensate for camera rotation during capture, and a second convex hull is computed from all keypoints in the second image. If the overlapping region between the second convex hull and the convex hull union is equal to, or less than, a predetermined size, the second image is designated as a new keyframe, and the convex hull union is augmented with the second convex hull.
    Type: Grant
    Filed: February 7, 2017
    Date of Patent: January 8, 2019
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Radu Bogdan Rusu, Yuheng Ren
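    A sketch of the keyframe test using the shapely library for 2D convex hulls; the 50% overlap criterion and the synthetic keypoints are illustrative assumptions, not the patented thresholds.
```python
import numpy as np
from shapely.geometry import MultiPoint

def is_new_keyframe(hull_union, keypoints_2d, overlap_fraction=0.5):
    """Designate the frame as a keyframe when its keypoint hull overlaps the
    accumulated hull union by no more than `overlap_fraction` of its own area."""
    hull = MultiPoint([tuple(p) for p in keypoints_2d]).convex_hull
    overlap = hull.intersection(hull_union).area
    if overlap <= overlap_fraction * hull.area:
        return True, hull_union.union(hull)     # augment the union
    return False, hull_union

rng = np.random.default_rng(2)
union = MultiPoint([tuple(p) for p in rng.random((30, 2))]).convex_hull
shifted = rng.random((30, 2)) + np.array([0.9, 0.0])   # camera moved to the right
new_kf, union = is_new_keyframe(union, shifted)
print(new_kf)   # likely True: little overlap with the accumulated hull
```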
  • Patent number: 10134168
    Abstract: One embodiment of the present invention includes a double solve unit that configures a kinematic chain representing an animated character. The double solve unit generates a first solution for the kinematic chain based on a first solving order. While generating the first solution, the double solve unit determines the recursion depth of each output connector included in the kinematic chain. Subsequently, the double solve unit identifies any output connectors for which the recursion depth exceeds a corresponding expected recursion depth, indicating that a custom recursive dependency exists that is not reflected in the first solution. For these custom recursive output connectors, the double solve unit creates a second solving order and generates a more accurate solution.
    Type: Grant
    Filed: March 28, 2014
    Date of Patent: November 20, 2018
    Assignee: AUTODESK, INC.
    Inventor: Krystian Ligenza
  • Patent number: 10136242
    Abstract: A cloud computing system is provided to support a consumer in programming a smart phone/touch pad. A set of accessory members is provided to enable said consumer to build and program a consumer-designed article that comprises said consumer-programmed smart phone/touch pad.
    Type: Grant
    Filed: September 24, 2011
    Date of Patent: November 20, 2018
    Inventor: Peter Ar-Fu Lam
  • Patent number: 10122982
    Abstract: Recording images, including: receiving an optical effects selection, which indicates a selected optical effect to apply to raw image data capturing the images; receiving an optical effects parameter, which indicates how to apply the selected optical effects to the raw image data; storing the optical effects selection and the optical effects parameter as effects metadata; recording the raw image data using a sensor of the digital camera; marking the effects metadata with time information to associate the effects metadata with the recorded raw image data over time; applying the selected optical effect to the raw image data according to the optical effects parameter to create processed image data while preserving the recorded raw image data; and displaying the processed image data on a display of the digital camera. Key words include raw image data and effects metadata.
    Type: Grant
    Filed: May 21, 2014
    Date of Patent: November 6, 2018
    Assignees: SONY CORPORATION, SONY PICTURES ENTERTAINMENT INC
    Inventors: Spencer Stephens, Chris Cookson, Scot Barbour
  • Patent number: 10120539
    Abstract: The disclosure provides a method for setting a User Interface (UI). The method comprises the following steps: acquiring and storing image data in a file of a selected background image on a UI management interface; marking spatial coordinates of regions of different shapes cut from the background image, performing display effect processing on the cut regions, and outputting a display effect processing result; and recording a preset directory name and a corresponding menu linking path of each icon. The disclosure also provides a device for setting a UI. By adopting the scheme, a personalized UI can be obtained conveniently and quickly, and user experience is improved.
    Type: Grant
    Filed: February 15, 2011
    Date of Patent: November 6, 2018
    Assignee: ZTE Corporation
    Inventor: Qiang Wang
  • Patent number: 10123164
    Abstract: In a server system, a computer-implemented method initiates a proximity-based communication protocol involving a first client device and one or more second client devices. For each of a plurality of candidate second devices, location coordinates are retrieved and an associated axis-aligned bounding box (AABB) is calculated. When the AABB of such a candidate second device overlaps with an AABB for the first device, the candidate is presented to the user of the first device. Next, a selection of one or more candidate second devices is received from the first device, and the protocol is caused to be initiated between the first device and the one or more selected candidate second devices.
    Type: Grant
    Filed: May 18, 2017
    Date of Patent: November 6, 2018
    Assignee: Bunq B.V.
    Inventors: Ali Niknam, Stijn Van Drongelen, Andreas Verhoeven, Menno Arnold Den Hollander, Robert-Jan Mahieu
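    A minimal sketch of the AABB construction and overlap test; the padding radius, coordinates, and device names are made up for illustration.
```python
from dataclasses import dataclass

@dataclass
class AABB:
    min_x: float
    min_y: float
    max_x: float
    max_y: float

    @classmethod
    def around(cls, x, y, radius):
        """Axis-aligned bounding box around a device location, padded by `radius`."""
        return cls(x - radius, y - radius, x + radius, y + radius)

    def overlaps(self, other):
        return (self.min_x <= other.max_x and other.min_x <= self.max_x and
                self.min_y <= other.max_y and other.min_y <= self.max_y)

first = AABB.around(52.37, 4.90, 0.001)               # first client device
candidates = {"alice": AABB.around(52.3705, 4.9003, 0.001),
              "bob":   AABB.around(48.85, 2.35, 0.001)}
nearby = [name for name, box in candidates.items() if box.overlaps(first)]
print(nearby)   # -> ['alice']; these candidates would be presented for selection
```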
  • Patent number: 10097807
    Abstract: In an example embodiment a method, apparatus and computer program product are provided. The method includes facilitating access to a plurality of source multimedia content, wherein at least one source multimedia of the plurality of source multimedia content comprises corresponding depth information. The method further includes generating a blend map by defining a plurality of depth layers. At least one depth layer of the plurality of depth layers is associated with a respective depth limit. Defining the at least one depth layer comprises selecting pixels of the at least one depth layer from the at least one source multimedia content of the plurality of source multimedia content based on the respective depth limit associated with the at least one depth layer and the corresponding depth information of the at least one source multimedia content. The method also includes blending the plurality of source multimedia content based on the blend map.
    Type: Grant
    Filed: October 6, 2014
    Date of Patent: October 9, 2018
    Assignee: NOKIA TECHNOLOGIES OY
    Inventors: Tobias Karlsson, Tor Andrae, Amer Mustajbasic
  • Patent number: 10089909
    Abstract: A display control method includes: inputting a user's image that includes a hand-drawn portion and is a display target image; and performing image control including causing the input user's image to emerge from either a left end or a right end of a predetermined display region on which the user's image is to be displayed, and moving the user's image that has emerged.
    Type: Grant
    Filed: September 11, 2017
    Date of Patent: October 2, 2018
    Assignee: Ricoh Company, Limited
    Inventors: Atsushi Itoh, Aiko Ohtsuka, Tetsuya Sakayori, Hidekazu Suzuki, Takanobu Tanaka
  • Patent number: 10083007
    Abstract: Devices and methods for filtering data include calculating intermediate input values from input elements using a transformation function. The transformation function is based at least in part on a size of the filter and a number of filter outputs. Intermediate filter values are calculated from filter elements of the filter using the transformation function. Each intermediate input value is multiplied with a respective intermediate filter value to form intermediate values. These intermediate values are combined with each other using the transformation function to determine one or more output values.
    Type: Grant
    Filed: September 15, 2016
    Date of Patent: September 25, 2018
    Assignee: ALTERA CORPORATION
    Inventors: Utku Aydonat, Andrew Chaang Ling, Gordon Raymond Chiu, Shane O'Connell
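    The structure described above (transform the inputs, transform the filter, multiply elementwise, then combine) matches Winograd minimal filtering; below is the standard F(2, 3) instance as an illustration, which may differ from the patented transform.
```python
import numpy as np

# Transform matrices for Winograd minimal filtering F(2, 3):
# two outputs of a 3-tap FIR filter from four inputs, using only four multiplications.
BT = np.array([[1, 0, -1, 0],
               [0, 1,  1, 0],
               [0, -1, 1, 0],
               [0, 1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

def f23(d, g):
    """Intermediate input values (BT @ d) are multiplied elementwise with
    intermediate filter values (G @ g), then combined with AT into two outputs."""
    return AT @ ((G @ g) * (BT @ d))

d = np.array([1.0, 2.0, 3.0, 4.0])       # four input elements
g = np.array([0.5, 0.25, 0.25])          # three filter elements
print(f23(d, g))                          # -> [1.75 2.75]
print(np.array([d[0:3] @ g, d[1:4] @ g])) # direct filtering gives the same result
```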
  • Patent number: 10037709
    Abstract: A decision support method for use by an operator surrounded by adverse entities in a battlefield environment comprises generating a layered representation of the physical environment surrounding the operator from sensor information by mapping the spherical physical environment of the operator into a geometrical representation suitable for display on a screen, the representation being segmented into a plurality of layers having respective sizes, each layer being associated with a respective category of tactical actions. The representation further comprises visual elements representing adverse entities in the surrounding physical environment of the operator, each visual element being represented so as to be superposed with a given layer.
    Type: Grant
    Filed: December 8, 2014
    Date of Patent: July 31, 2018
    Assignee: THALES NEDERLAND B.V.
    Inventors: Jan-Egbert Hamming, Frank Koudijs, Frank Colijn, Pim Van Wensveen
  • Patent number: 10032305
    Abstract: A system includes hardware processor(s), an HMD, an input device, and an onion skin animation module. The animation module is configured to receive a character rig of a 3D character, receive a first 3D animation of the 3D character, the first 3D animation defining a motion sequence of the 3D character based on the character rig, create a virtual time bar within the virtual environment, the virtual time bar displaying a timeline associated with the first 3D animation, identify a first animation time within the first 3D animation, the first animation time being a point in time during the motion sequence, create a first pose object of the 3D character in the virtual environment, pose the first pose object based on the first 3D animation at the first animation time, and position the first pose object within the virtual environment proximate the first animation time on the virtual time bar.
    Type: Grant
    Filed: May 31, 2017
    Date of Patent: July 24, 2018
    Assignee: Unity IPR ApS
    Inventor: Timoni West
  • Patent number: 10022628
    Abstract: Embodiments of the systems and processes disclosed herein can use procedural techniques to calculate reactionary forces between character models. In some embodiments, the system can calculate a change in momentum of the character at the time of impact and simulate the reaction of the character model, using momentum-based inverse kinematic analysis. Procedural animation can be used to dynamically generate a target pose for the character model based on the inverse kinematic analysis for each rendered frame.
    Type: Grant
    Filed: March 25, 2016
    Date of Patent: July 17, 2018
    Assignee: ELECTRONIC ARTS INC.
    Inventors: Masatoshi Matsumiya, Paolo Rigiroli
  • Patent number: 10019828
    Abstract: An image generating apparatus, including a memory storing avatar data representing a motion of an avatar in a virtual space and a processor coupled to the memory and the processor configured to obtain sensor information that represents a motion of a person in a real space acquired from at least one sensor, determine a first value that indicates an impression of the person based on the obtained sensor information, determine a type of the motion based on the obtained sensor information, select at least one candidate data set corresponding to the type of the motion from the memory, determine a second value that indicates impression of the avatar for each of the selected at least one data set, select a representative data set based on the determined first value and the determined second value, and generate an avatar image based on the representative data set.
    Type: Grant
    Filed: October 5, 2016
    Date of Patent: July 10, 2018
    Assignee: FUJITSU LIMITED
    Inventor: Naoko Hayashida
  • Patent number: 10008020
    Abstract: There is presented a method for interactive, real-time animation of soft body dynamics, comprising the steps of: providing a 3D model of a soft body, the model comprising a set of vertices connected by edges; defining a set of physical constraints between vertices in the 3D model, the set of constraints forming a system of linear equations comprising a set of unknowns representing the positions of the vertices; applying a Brooks-Vizing node coloring algorithm in order to partition the system of linear equations into a set of partitions each including an independent subset of unknowns; for each partition, applying a Gauss-Seidel based solver in parallel in order to determine an approximation of the unknowns; and using the determined approximation of the unknowns to update the 3D model. There is also presented an animation system configured to perform the above-described method.
    Type: Grant
    Filed: December 21, 2016
    Date of Patent: June 26, 2018
    Assignee: CHALMERS TEKNISKA HÖGSKOLA AB
    Inventor: Marco Fratarcangeli
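    A small sketch of the solver structure: a greedy coloring partitions the constraint graph into independent sets, and a Gauss-Seidel sweep processes one color group at a time. A toy linear system stands in for the soft-body constraints, and plain greedy coloring stands in for the Brooks-Vizing coloring named in the abstract.
```python
import numpy as np

def greedy_coloring(n, edges):
    """Assign each vertex the smallest color not used by an already-colored neighbour."""
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)
    color = {}
    for v in range(n):
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(n) if c not in used)
    return color

def colored_gauss_seidel(A, b, color, sweeps=100):
    """Gauss-Seidel sweeps processing one color group at a time.

    Unknowns of the same color are not coupled in A, so each inner loop could
    run in parallel without changing the result."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(sweeps):
        for c in sorted(set(color.values())):
            for i in [v for v, col in color.items() if col == c]:
                x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x

# Toy constraint graph: a 6-vertex cycle (each edge stands for one constraint).
n, edges = 6, [(i, (i + 1) % 6) for i in range(6)]
L = np.zeros((n, n))
for a, b_ in edges:
    L[a, b_] = L[b_, a] = -1.0
    L[a, a] += 1.0; L[b_, b_] += 1.0
A = L + np.eye(n)                        # well-conditioned SPD system on the graph
rhs = np.arange(1.0, n + 1)
x = colored_gauss_seidel(A, rhs, greedy_coloring(n, edges))
print(np.allclose(A @ x, rhs, atol=1e-6))   # -> True
```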
  • Patent number: 10009550
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for synthetic imaging. In one aspect, a method includes receiving from each digital camera respective imaging data, each digital camera having a viewpoint that is different from the viewpoints of each other digital camera and having a field of view that is overlapping with at least one other digital camera; for a synthetic viewpoint that is a viewpoint that is within a geometry defined by the viewpoints of the digital cameras, selecting respective imaging data that each has a field of view that overlaps a field of view of the synthetic viewpoint and generating, from the selected respective imaging data, synthetic imaging data that depicts an image captured from a virtual camera positioned at the synthetic viewpoint.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: June 26, 2018
    Assignee: X Development LLC
    Inventor: Thomas Peter Hunt
  • Patent number: 9996940
    Abstract: Methods, devices, and systems for expression transfer are disclosed. The disclosure includes capturing a first image of a face of a person. The disclosure includes generating an avatar based on the first image of the face of the person, with the avatar approximating the first image of the face of the person. The disclosure includes transmitting the avatar to a destination device. The disclosure includes capturing a second image of the face of the person on a source device. The disclosure includes calculating expression information based on the second image of the face of the person, with the expression information approximating an expression on the face of the person as captured in the second image. The disclosure includes transmitting the expression information from the source device to the destination device. The disclosure includes animating the avatar on a display component of the destination device using the expression information.
    Type: Grant
    Filed: October 25, 2017
    Date of Patent: June 12, 2018
    Assignee: Connectivity Labs Inc.
    Inventors: Thomas Yamasaki, Rocky Chau-Hsiung Lin, Koichiro Kanda
  • Patent number: 9972123
    Abstract: Systems and methods for generating a model of an object that includes the surface reflectance details of the object are disclosed. The surface reflectance properties of the object comprising at least separate components for the object diffuse data and the object specular data are received. A 3D model of the object is generated wherein the reflectance properties of the model are configured based on the reflectance properties of the object surface. The object diffuse data determines the color to be used in generating the model and the object specular data determines one of the attributes of the coating for the model or the material to be used for generating the model.
    Type: Grant
    Filed: April 1, 2015
    Date of Patent: May 15, 2018
    Assignee: OTOY, INC.
    Inventor: Clay Sparks
  • Patent number: 9959634
    Abstract: Methods and systems for identifying depth data associated with an object are disclosed. The method includes capturing, with an image capturing device, a plurality of source images of the object. The image capturing device has a sensor that is tilted at a known angle with respect to an object plane of the object such that the image capturing device has a depth of field associated with each source image, the depth of field defining a plane that is angled with respect to the object plane. An image processor analyzes the plurality of source images to identify segments of the source images that satisfy an image quality metric. Position data is assigned to the identified segments of the source images, the position data including depth positions based on the plane defined by the depth of field.
    Type: Grant
    Filed: March 13, 2012
    Date of Patent: May 1, 2018
    Assignee: Google LLC
    Inventors: Peter Gregory Brueckner, Iain Richard Tyrone McClatchie, Matthew Thomas Valente
  • Patent number: 9959039
    Abstract: Operating a touch-screen device includes displaying at least a portion of a keyboard on a touch-screen, detecting a touch on the touch-screen, and detecting movement of the touch on the touch-screen. Operating the touch-screen device also includes moving the displayed keyboard in response to the detected movement of the touch on the touch-screen, detecting a release of the touch from the touch-screen, and assigning a character according to a final location of the touch relative to a location of the displayed keyboard.
    Type: Grant
    Filed: July 7, 2014
    Date of Patent: May 1, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Olivier Artigue, Jean-Michel Douliez, Francois Trible
  • Patent number: 9946436
    Abstract: An example method includes receiving, at a mobile device, one or more user selections by a user of the mobile device, where each user selection indicates a respective type of data item to be presented on the mobile device. The method also includes receiving, at the mobile device, one or more data items. The method also includes identifying data items that are associated with the types of data items to be presented on the mobile device, and responsive to identifying data items that are associated with the types of data items to be presented on the mobile device, presenting, on the mobile device, a dynamic icon to present the identified data items.
    Type: Grant
    Filed: May 21, 2015
    Date of Patent: April 17, 2018
    Assignee: Appelago Inc.
    Inventor: Peter Rolih
  • Patent number: 9940753
    Abstract: A method of augmenting a target object with projected light is disclosed. The method includes determining a blend of component attributes to define visual characteristics of the target object, modifying an input image based, at least in part, on an image of the target object, wherein the modified input image defines an augmented visual characteristic of the target object, determining a present location of one or more landmarks on the target object based, at least in part, on the image of the target object, predicting a future location of the one or more landmarks, deforming a model of the target object based on the future location of the one or more landmarks, generating an augmentation image based on the deformed model and the modified input image, and transmitting for projection the augmentation image.
    Type: Grant
    Filed: October 11, 2016
    Date of Patent: April 10, 2018
    Assignee: DISNEY ENTERPRISES, INC.
    Inventors: Anselm Grundhöfer, Amit Bermano
  • Patent number: 9933923
    Abstract: A graphical user interface for a display apparatus comprises an addressable window, which is related to a first formation action of a first start object to a target object and related to a second formation action of a second start object to the same target object, wherein each formation action comprises a transit from the respective start object to the target object using elements which can be selected from a plurality of element types, wherein assigned to each transit is an arrival time resulting from the speed of the corresponding formation, wherein the window includes a synchronisation button, by means of which the second formation action can be synchronised with the first formation action by delaying the second arrival time to the first arrival time.
    Type: Grant
    Filed: August 8, 2014
    Date of Patent: April 3, 2018
    Assignee: XYRALITY GMBH
    Inventor: Alexander Spohr
  • Patent number: 9911227
    Abstract: A method and system for providing access to and control of parameters within a scenegraph includes redefining the semantics of components or nodes within a scenegraph. The set of components or nodes (depending on the scenegraph structure) is required to enable access from the Application User Interface to selected scenegraph information. In one embodiment, a user interface is generated for controlling the scenegraph parameters. In addition, constraints can be implemented that allow or disallow access to certain scenegraph parameters and restrict their range of values.
    Type: Grant
    Filed: February 12, 2016
    Date of Patent: March 6, 2018
    Assignee: GVBB HOLDINGS S.A.R.L.
    Inventors: Ralph Andrew Silberstein, David Sahuc, Donald Johnson Childers
  • Patent number: 9892539
    Abstract: A method is disclosed for applying physics-based simulation to an animator-provided rig. The disclosure presents equations of motion for simulations performed in the subspace of deformations defined by an animator's rig. The method receives an input rig with a plurality of deformation parameters, and the dynamics of the character are simulated in the subspace of deformations described by the character's rig. Stiffness values defined on rig parameters are transformed to a non-homogeneous distribution of material parameters for the underlying rig.
    Type: Grant
    Filed: October 18, 2013
    Date of Patent: February 13, 2018
    Assignee: DISNEY ENTERPRISES, INC.
    Inventors: Bernhard Thomaszewski, Robert Sumner, Fabian Hahn, Stelian Coros, Markus Gross, Sebastian Martin
  • Patent number: 9893974
    Abstract: On a server, a collision handler is called by a physics simulation engine to categorize a plurality of rigid bodies in some simulation data as either colliding or not colliding. The simulation data relates to a triggering event involving the plurality of rigid bodies and is generated by a simulation of both gravitational trajectories and collisions of rigid bodies. Based on the categorization and the simulation data, a synchronization engine generates synchronization packets for the colliding bodies only and transmits the packets to one or more client computing devices configured to perform a reduced simulation function.
    Type: Grant
    Filed: October 11, 2015
    Date of Patent: February 13, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Marco Anastasi, Maurizio Sciglio
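    A toy sketch of categorizing bodies and synchronizing only the colliding ones; the sphere-overlap test and the packet format are placeholders, not the patent's collision handler.
```python
import itertools

def categorize_and_sync(bodies, send, radius=1.0):
    """Mark bodies as colliding when any pair is closer than 2*radius, then
    emit synchronization packets only for the colliding ones."""
    colliding = set()
    for (ia, a), (ib, b) in itertools.combinations(bodies.items(), 2):
        if sum((pa - pb) ** 2 for pa, pb in zip(a, b)) < (2 * radius) ** 2:
            colliding.update((ia, ib))
    for body_id in colliding:
        send({"id": body_id, "position": bodies[body_id]})    # sync packet
    return colliding

bodies = {"rock1": (0.0, 0.0, 0.0), "rock2": (1.5, 0.0, 0.0), "rock3": (50.0, 0.0, 0.0)}
print(categorize_and_sync(bodies, send=print))
# rock1 and rock2 collide and are synchronized; rock3's free-fall trajectory
# is left to the clients' reduced simulation.
```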
  • Patent number: 9894405
    Abstract: A real-time video exploration (RVE) system that allows users to pause, step into, move through, and explore 2D or 3D modeled worlds of scenes in a video. The RVE system may allow users to discover, select, explore, and manipulate objects within the modeled worlds used to generate video content. The RVE system may implement methods that allow users to view and explore in more detail the features, components, and/or accessories of selected objects that are being manipulated and explored. The RVE system may also implement methods that allow users to interact with interfaces of selected objects or interfaces of components of selected objects.
    Type: Grant
    Filed: June 27, 2014
    Date of Patent: February 13, 2018
    Assignee: Amazon Technologies, Inc.
    Inventors: Gerard Joseph Heinz, II, Michael Schleif Pesce, Collin Charles Davis, Michael Anthony Frazzini, Ashraf Alkarmi, Michael Martin George, David A. Limp, William Dugald Carr, Jr.
  • Patent number: 9892556
    Abstract: A real-time video exploration (RVE) system that allows users to pause, step into, and explore 2D or 3D modeled worlds of scenes in a video. The system may leverage network-based computation resources to render and stream new video content from the models to clients with low latency. A user may pause a video, step into a scene, and interactively change viewing positions and angles in the model to move through or explore the scene. The user may resume playback of the recorded video when done exploring the scene. Thus, rather than just viewing a pre-rendered scene in a movie from a pre-determined perspective, a user may step into and explore the scene from different angles, and may wander around the scene at will within the scope of the model to discover parts of the scene that are not visible in the original video.
    Type: Grant
    Filed: June 27, 2014
    Date of Patent: February 13, 2018
    Assignee: Amazon Technologies, Inc.
    Inventors: Gerard Joseph Heinz, II, Michael Schleif Pesce, Collin Charles Davis, Michael Anthony Frazzini, Ashraf Alkarmi, Michael Martin George, David A. Limp, William Dugald Carr, Jr.
  • Patent number: 9887112
    Abstract: An inkjet coating device comprises a support board and a sprinkler head. The support board is provided for placing the glass plate; the sprinkler head comprises a plurality of spray nozzles, wherein the spray nozzles comprise an ink entrance port and an ink exit port, the ink is poured in from the ink entrance port and poured onto the glass plate from the ink exit port, and the internal diameter of the ink entrance port is larger than that of the ink exit port. The inkjet coating device comprises trumpet-shaped spray nozzles which are closed together without gaps, so as to not only increase the spraying capacity, but also make the trumpet-shaped ink drops disperse more uniformly in each coating interval belt to form an ink coating layer with uniform thickness.
    Type: Grant
    Filed: May 12, 2014
    Date of Patent: February 6, 2018
    Assignee: SHENZHEN CHINA STAR OPTOELECTRONICS TECHNOLOGY CO., LTD
    Inventor: Maocheng Yan
  • Patent number: 9870622
    Abstract: A method for determining posture-related information of a subject using the subject's image is provided. The method comprises: determining, from a first image, first positions of a first pair of joints and a first body segment length of a first body segment associated with the first pair of joints; determining, from a second image, second positions of a second pair of joints and a second body segment length of a second body segment associated with the second pair of joints; determining, based on an algorithm that reduces a difference between the first and second body segment lengths, whether the first and second pairs of joints correspond to a pair of joints; if the first and second pairs of joints are determined to correspond to a pair of joints, determining, based on the second positions, information of a posture of the subject; and providing an indication regarding the information.
    Type: Grant
    Filed: July 19, 2016
    Date of Patent: January 16, 2018
    Assignee: DYACO INTERNATIONAL, INC.
    Inventors: Tung-Wu Lu, Hsuan-Lun Lu, Cheng-Kai Lin, Hao Chiang
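    A minimal sketch of matching joint pairs by minimizing the body-segment length difference; the tolerance and the example segments are invented for illustration.
```python
import numpy as np

def match_joint_pairs(pairs_a, pairs_b, tolerance=0.05):
    """Match each joint pair from image A to the image-B pair whose body-segment
    length differs the least, if the difference is within `tolerance` (metres)."""
    def seg_len(pair):                       # pair = (joint1_xyz, joint2_xyz)
        return float(np.linalg.norm(np.asarray(pair[0]) - np.asarray(pair[1])))
    matches = {}
    for name, pa in pairs_a.items():
        best = min(pairs_b, key=lambda n: abs(seg_len(pa) - seg_len(pairs_b[n])))
        if abs(seg_len(pa) - seg_len(pairs_b[best])) <= tolerance:
            matches[name] = best             # treated as the same anatomical segment
    return matches

pairs_a = {"forearm": ((0, 0, 0), (0.26, 0, 0)), "shin": ((0, 0, 0), (0.42, 0, 0))}
pairs_b = {"seg1": ((1, 1, 0), (1.27, 1, 0)), "seg2": ((1, 1, 0), (1.41, 1, 0))}
print(match_joint_pairs(pairs_a, pairs_b))   # -> {'forearm': 'seg1', 'shin': 'seg2'}
```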
  • Patent number: 9852327
    Abstract: A system facilitates automatic recognition of facial expressions or other facial attributes. The system includes a data access module and an expression engine. The expression engine further includes a set of specialized expression engines, a pose detection module, and a combiner module. The data access module accesses a facial image of a head. The set of specialized expression engines generates a set of specialized expression metrics, where each specialized expression metric is an indication of a facial expression of the facial image assuming a specific orientation of the head. The pose detection module determines the orientation of the head from the facial image. Based on the determined orientation of the head and the assumed orientations of each of the specialized expression metrics, the combiner module combines the set of specialized expression metrics to determine a facial expression metric for the facial image that is substantially invariant to the head orientation.
    Type: Grant
    Filed: January 13, 2017
    Date of Patent: December 26, 2017
    Assignee: Emotient, Inc.
    Inventors: Jacob Whitehill, Javier R. Movellan, Ian Fasel
  • Patent number: 9852237
    Abstract: An object testing system and method for testing an object. A three-dimensional environment is displayed with a model of an object and an avatar from a viewpoint relative to the avatar on a display system viewed by a human operator. The object is under testing in a live environment. Information about motions of the human operator that are detected is generated. Live information about the object that is under testing in the live environment is received. A change in the object from applying the live information to the model of the object is identified. The change in the model of the object is displayed on the display system as seen from the viewpoint relative to the avatar in the three-dimensional environment.
    Type: Grant
    Filed: September 16, 2015
    Date of Patent: December 26, 2017
    Assignee: THE BOEING COMPANY
    Inventors: Jonathan Wayne Gabrys, David William Bowen, Anthony Mathew Montalbano, Chong Choi
  • Patent number: 9814982
    Abstract: To mitigate collisions in a physical space during gaming, a set of physical objects and a user situated in the 3D space are mapped to determine a spacing between an object in the set of physical objects and the user, where the user moves in the 3D space to cause a motion in a virtual environment of a game. A prediction is computed that the user will make a motion in the 3D space during the gaming. For the motion in the 3D space, a motion trajectory of the user is computed using a measurement parameter corresponding to the user stored in a user profile. A detection is made that the motion trajectory of the user violates a spacing threshold between the user and the object, and the user is alerted about a risk of collision between the user and the object in the 3D space.
    Type: Grant
    Filed: February 25, 2015
    Date of Patent: November 14, 2017
    Assignee: GLOBALFOUNDRIES INC.
    Inventors: Prach Jerry Chuaypradit, Wendy Chong, Ronald C. Geiger, Jr., Janani Janakiraman, Joefon Jann, Jenny S. Li, Anuradha Rao, Tai-chi Su, Singpui Zee
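    A rough sketch of the trajectory prediction and spacing-threshold check, assuming simple linear extrapolation of the last two tracked positions (the patent computes the trajectory from stored per-user measurement parameters).
```python
import numpy as np

def collision_alert(user_positions, obj_position, spacing_threshold=0.5, horizon=1.0):
    """Extrapolate the user's motion linearly over `horizon` time steps and warn if
    the predicted trajectory comes within `spacing_threshold` metres of the object."""
    p_prev, p_now = (np.asarray(p, dtype=float) for p in user_positions[-2:])
    velocity = p_now - p_prev                       # displacement per time step
    for t in np.linspace(0.0, horizon, 11):         # sample the predicted trajectory
        predicted = p_now + velocity * t
        if np.linalg.norm(predicted - np.asarray(obj_position)) < spacing_threshold:
            return True                             # alert: risk of collision
    return False

track = [(0.0, 0.0, 0.0), (0.3, 0.0, 0.0)]          # user stepping toward +x
print(collision_alert(track, obj_position=(1.0, 0.0, 0.0)))   # -> True
print(collision_alert(track, obj_position=(0.0, 3.0, 0.0)))   # -> False
```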