Morphing Patents (Class 345/646)
-
Patent number: 8994736
Abstract: Methods and apparatus for interactive curve-based freeform deformation of three-dimensional (3-D) models may provide a user interface that allows a user to interactively deform 3-D models based on simple and intuitive manipulations of a curve drawn on the model (i.e., freeform deformation). The user may apply freeform deformations using touch and/or multitouch gestures to specify and manipulate a deformation curve. The deformations may be applied by deforming the space around a curve/sweep path and deforming the 3-D model accordingly. The freeform deformation methods are not dependent on manipulation of a fixed set of parameters to perform deformations, and may provide for both local and global deformation. One or more weights and user interface elements for controlling those weights may be provided that allow the user to control the extent (region of influence) of the freeform deformations along the curve and/or perpendicular to the curve.
Type: Grant
Filed: September 10, 2012
Date of Patent: March 31, 2015
Assignee: Adobe Systems Incorporated
Inventors: Nathan A. Carr, Pushkar P. Joshi, Fatemeh Abbasinejad
-
Patent number: 8963958
Abstract: Techniques for wrapping a two-dimensional texture conformally onto a surface of a three-dimensional virtual object within an arbitrarily-shaped, user-defined region. The techniques provide minimum distortion and allow interactive manipulation of the mapped texture. The techniques feature an energy minimization scheme in which distances between points on the surface of the three-dimensional virtual object serve as set lengths for springs connecting points of a planar mesh. The planar mesh is adjusted to minimize spring energy, and then used to define a patch upon which a two-dimensional texture is superimposed. Points on the surface of the virtual object are then mapped to corresponding points of the texture. A haptic/graphical user interface element allows a user to interactively and intuitively adjust texture mapped within the arbitrary, user-defined region.
Type: Grant
Filed: November 20, 2012
Date of Patent: February 24, 2015
Assignee: 3D Systems, Inc.
Inventors: Torsten Berger, Elaine Chen, Walter C. Shannon, III, Bob Tipton
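The spring-energy scheme this abstract describes can be sketched as a small gradient-descent loop: surface distances on the 3-D object become rest lengths of springs connecting planar-mesh points, and the planar mesh is relaxed to minimize total spring energy. All names, the step size, and the iteration count below are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def relax_planar_mesh(points, springs, rest_lengths, iters=200, step=0.1):
    """Minimize spring energy sum_k (|p_i - p_j| - L_k)^2 by gradient descent.

    points:       (N, 2) planar mesh vertex positions
    springs:      list of (i, j) vertex index pairs
    rest_lengths: per-spring target lengths (surface distances on the 3-D object)
    """
    p = points.astype(float).copy()
    for _ in range(iters):
        grad = np.zeros_like(p)
        for (i, j), L in zip(springs, rest_lengths):
            d = p[i] - p[j]
            dist = np.linalg.norm(d) + 1e-12
            # dE/dp_i for E = (dist - L)^2
            g = 2.0 * (dist - L) * d / dist
            grad[i] += g
            grad[j] -= g
        p -= step * grad
    return p

# A unit square whose springs ask for side length 2: the mesh should expand.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
springs = [(0, 1), (1, 2), (2, 3), (3, 0)]
relaxed = relax_planar_mesh(pts, springs, [2.0, 2.0, 2.0, 2.0])
```

The relaxed mesh would then define the patch onto which the 2-D texture is superimposed.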
-
Patent number: 8966528
Abstract: A method of providing a menu for video content is disclosed and may include delivering a looping video clip over a first video channel. The looping video clip may be configured to be displayed on a video plane. The method may also include delivering side channel data over a second video channel. The side channel data may include two or more navigable menu elements that may be configured to be displayed on a graphics overlay plane.
Type: Grant
Filed: September 19, 2012
Date of Patent: February 24, 2015
Assignee: Sony Corporation
Inventor: Thomas Patrick Dawson
-
Patent number: 8963955
Abstract: An information processing section of a game apparatus executes a program which includes: acquiring a real world image; setting the most recent view matrix of a virtual camera based on a detected marker (S204); reading the previous view matrix (S206); calculating correction view matrixes so as to change a blending ratio at which the most recent view matrix is blended depending on a distance (S208 to S210); selecting the correction view matrix such that the longer the distance between an object and the marker is, the lower the blending ratio is; and rendering a virtual object in a frame buffer in a superimposed manner by using the selected correction view matrix.
Type: Grant
Filed: January 5, 2011
Date of Patent: February 24, 2015
Assignees: Nintendo Co., Ltd., Hal Laboratory Inc.
Inventor: Tetsuya Noge
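The distance-dependent blending of view matrices reads, in outline, like a linear blend whose ratio falls off with the object-to-marker distance, so distant objects jitter less as the marker tracking updates. A hypothetical sketch (blending raw matrix entries is a simplification; a fuller implementation would interpolate rotation and translation separately, and `max_dist` is an assumed constant):

```python
import numpy as np

def blended_view(prev_view, current_view, distance, max_dist=5.0):
    """Blend the previous and most recent view matrices.

    The longer the distance between the object and the marker, the lower
    the blending ratio of the most recent matrix.
    """
    alpha = max(0.0, 1.0 - distance / max_dist)  # ratio of the new matrix
    return alpha * current_view + (1.0 - alpha) * prev_view

prev = np.eye(4)
curr = np.eye(4)
curr[0, 3] = 1.0  # camera shifted along x in the newest frame

near = blended_view(prev, curr, distance=0.0)  # near object: follow new matrix
far = blended_view(prev, curr, distance=5.0)   # far object: keep old matrix
```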
-
Publication number: 20150022556
Abstract: Systems, methods, and computer-readable medium containing instructions for processing image data. One system includes at least one processor configured to generate a graphical user interface ("GUI"). Through the GUI, the at least one processor receives a matching condition and a morphing action from a user. The morphing action includes an action to perform on a data attribute associated with image data when the image data satisfies the matching condition. Based on the received matching condition and morphing action, the at least one processor creates executable code. The at least one processor also receives image data including an image data attribute and executes the executable code to determine if the received image data satisfies the matching condition. If the image data satisfies the matching condition, the at least one processor automatically performs the morphing action on the received image data attribute.
Type: Application
Filed: May 20, 2014
Publication date: January 22, 2015
Inventor: Amit Khare
-
Patent number: 8933962
Abstract: Techniques for generating a personalized cartoon by using a few text queries are described herein. The present disclosure describes efficiently searching multiple images from a network, obtaining a clipart image from the multiple images, and vectorizing the clipart image. The present disclosure also describes techniques to change a style of the cartoon, such as recoloring one or more cartoon objects.
Type: Grant
Filed: November 15, 2010
Date of Patent: January 13, 2015
Assignee: Microsoft Corporation
Inventors: Jian Sun, Ying-Qing Xu, Litian Tao, Mengcheng Huang
-
Patent number: 8902232
Abstract: Acquisition, modeling, compression, and synthesis of realistic facial deformations using polynomial displacement maps are described. An analysis phase can be included where the relationship between motion capture markers and detailed facial geometry is inferred. A synthesis phase can be included where detailed animated facial geometry is driven by a sparse set of motion capture markers. For analysis, an actor can be recorded wearing facial markers while performing a set of training expression clips. Real-time high-resolution facial deformations are captured, including dynamic wrinkle and pore detail, using interleaved structured light 3D scanning and photometric stereo. Next, displacements are calculated between a neutral mesh driven by the motion capture markers and the high-resolution captured expressions. These geometric displacements are stored in one or more polynomial displacement maps parameterized according to the local deformations of the motion capture dots.
Type: Grant
Filed: February 2, 2009
Date of Patent: December 2, 2014
Assignee: University of Southern California
Inventors: Paul E. Debevec, Wan-Chun Ma, Timothy Hawkins
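The analysis/synthesis split can be illustrated with a toy one-parameter polynomial displacement map: analysis fits polynomial coefficients mapping a local deformation value to a measured displacement, and synthesis evaluates the stored polynomial at new deformation values. This is a drastic simplification (the patent parameterizes maps by local deformations of the motion-capture dots over a full map); all names and the quadratic training data are assumptions:

```python
import numpy as np

def fit_pdm(deform_params, displacements, degree=2):
    """Analysis: fit polynomial coefficients mapping a local deformation
    parameter x to a measured geometric displacement d(x)."""
    # Vandermonde basis [1, x, x^2, ...]
    X = np.vander(deform_params, degree + 1, increasing=True)
    coeffs, *_ = np.linalg.lstsq(X, displacements, rcond=None)
    return coeffs

def eval_pdm(coeffs, x):
    """Synthesis: evaluate the stored polynomial at a new deformation value."""
    powers = np.array([x**k for k in range(len(coeffs))])
    return powers @ coeffs

# Toy training data: wrinkle displacement grows quadratically, d = 0.5 * x^2.
xs = np.linspace(0.0, 1.0, 20)
ds = 0.5 * xs**2
c = fit_pdm(xs, ds)
```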
-
Patent number: 8803906
Abstract: A video receiver receives a compound transport stream (TS) comprising 3D program video streams and spliced advertising streams. The received one or more 3D program video streams are extracted and decoded. Targeted advertising streams are extracted from the received advertising streams according to user criteria. Targeted advertising graphic objects of the extracted or replaced targeted advertising streams are spliced into the decoded 3D program video streams. The decoded 3D program video with the spliced targeted advertising graphic objects is presented in a 2D video. The extracted or replaced targeted advertising streams are processed to generate the targeted advertising graphic objects to be spliced based on focal point of view. The generated targeted advertising graphic objects are located according to associated scene graph information. The decoded 3D program video streams and the spliced targeted advertising graphic objects are converted into a 2D video for display.
Type: Grant
Filed: August 24, 2009
Date of Patent: August 12, 2014
Assignee: Broadcom Corporation
Inventors: Xuemin Chen, Samir Hulyalkar, Marcus Kellerman, Ilya Klebanov
-
Publication number: 20140204126
Abstract: An apparatus and method pertaining to the display of a stylus path that includes both a validated portion and a predicted portion. Upon determining an error between subsequent stylus movement and that predicted portion, these teachings provide for morphing a display of the predicted portion to accord with the subsequent stylus movement over time, rather than abruptly switching the display to an immediately fully-corrected representation.
Type: Application
Filed: January 18, 2013
Publication date: July 24, 2014
Applicant: RESEARCH IN MOTION LIMITED
Inventors: Peter MANKOWSKI, Jacek S. IDZIK, Cornel MERCEA, Weimin Michael RANG, Yaran NAN
-
Patent number: 8786609
Abstract: The placement of one animated element in a virtualized three-dimensional environment can be accomplished with reference to a second animated element and a vector field derived from the relationship thereof. If the first animated element is "inside" the second animated element after the second one was moved to a new animation frame, a vector field can be calculated for the region where it is "inside". The vector field can comprise vectors that have a direction and magnitude commensurate with the initial velocity and direction required to move the first animated element back outside of the second one. Movement of the first animated element can then be simulated in accordance with the vector field, and afterwards a determination can be made whether any portion still remains inside. Such an iterative process can move and place the first animation element prior to the next move of the second animation element.
Type: Grant
Filed: June 1, 2010
Date of Patent: July 22, 2014
Assignee: Microsoft Corporation
Inventors: Pengpeng Wang, Nishant Dani, Cole Brooking, Pragyana K. Mishra, Manjula Ananthnarayanan Iyer
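The iterative push-back-outside process can be illustrated with a circular obstacle: inside the circle, a toy "vector field" points radially outward with just enough magnitude to carry the point back to the boundary, and the loop repeats until nothing remains inside. Everything here (the circular obstacle, names, iteration cap) is an assumption for illustration:

```python
import numpy as np

def push_outside(point, center, radius, max_iters=10):
    """Iteratively move `point` out of a circular obstacle along a
    penetration-resolving radial field (a toy stand-in for the patent's
    vector field between two animated elements)."""
    p = np.asarray(point, dtype=float)
    c = np.asarray(center, dtype=float)
    for _ in range(max_iters):
        d = p - c
        dist = np.linalg.norm(d)
        if dist >= radius:  # no portion remains inside: done
            break
        direction = d / dist if dist > 0 else np.array([1.0, 0.0])
        p = p + direction * (radius - dist)  # step along the field
    return p

p = push_outside([0.5, 0.0], center=[0.0, 0.0], radius=1.0)
```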
-
Patent number: 8780139
Abstract: Resolution monitoring when using visual manipulation tools is described, including determining a minimum resolution for a visual manipulation tool, monitoring a usage of the visual manipulation tool, and interrupting the usage of the visual manipulation tool if the visual manipulation tool is operating below the minimum resolution.
Type: Grant
Filed: March 27, 2006
Date of Patent: July 15, 2014
Assignee: Adobe Systems Incorporated
Inventor: Robert Murata
-
Patent number: 8766978
Abstract: Methods and apparatus for generating curved extrusions. A user interface may be provided via which the value of one or more extrusion parameters and/or a reference point may be changed. The extrusion parameters may include a depth parameter that controls the amount of extrusion, an X angle parameter that controls the angle of bend in the X direction, a Y angle parameter that controls the angle of bend in the Y direction, a scale parameter that controls the scale factor, and a twist parameter that controls the angle of extrusion twist. A weight function for changing one or more of the extrusion parameters non-uniformly along the sweep path may also be provided. An extrusion may be generated from an initial 2D object according to the set of extrusion parameters and the reference point.
Type: Grant
Filed: May 28, 2010
Date of Patent: July 1, 2014
Assignee: Adobe Systems Incorporated
Inventors: Pushkar P. Joshi, Gavin S. P. Miller, Peter F. Falco, Jr.
-
Patent number: 8760467
Abstract: Systems and techniques to apply an image distortion to two image objects of different graphic types. In general, in one implementation, the technique includes: receiving an image distortion description to be applied to an image portion including a vector graphic and a raster graphic, the raster graphic being distortable separate from the vector graphic; applying the image distortion description to the vector graphic to produce a distorted vector graphic; and applying the image distortion description to the raster graphic to produce a distorted raster graphic, the distorted vector graphic and the distorted raster graphic together forming a distorted image portion.
Type: Grant
Filed: May 20, 2008
Date of Patent: June 24, 2014
Assignee: Adobe Systems Incorporated
Inventor: John W. Peterson
-
Publication number: 20140168269
Abstract: The present invention re-renders data center visualizations at different levels of abstraction based on roles or activities of an avatar. Morphing of data center objects is accomplished by either combining or decomposing existing data center objects in a manner that will result in a new object that maintains its relationship to the original objects. An example of this would be creating an application object by combining existing infrastructure objects (e.g., a server, a network and storage) used to support the application object runtime environment. This allows the avatar to not only relate the application object to the supporting infrastructure objects, but also provides a view of how the application object is impacted whenever the supporting infrastructure objects change or break.
Type: Application
Filed: February 24, 2014
Publication date: June 19, 2014
Applicant: International Business Machines Corporation
Inventors: Christopher J. Dawson, Michael J. Osias, Brian W. Sledge
-
Publication number: 20140125706
Abstract: In the present invention, an input assembly unit (1) receives vertex information used before and after geomorphing and index information as input, and uses the index information to output vertex information used before and after the geomorphing of each polygon. A polygon processing unit (3) interpolates vertex information that has been processed at a vertex processing unit (2) in accordance with the time in order to perform output for each polygon. A rasterizing unit (4) detects pixels included in each polygon output by the polygon processing unit (3) within the output image in order to output pixel information in which the vertex information for the polygon has been interpolated in accordance with the position of each pixel. A pixel processing unit (5) determines the color of each pixel of the corresponding output image using the pixel information.
Type: Application
Filed: September 12, 2011
Publication date: May 8, 2014
Applicant: Mitsubishi Electric Corporation
Inventors: Satoshi Sakurai, Shoichiro Kuboyama
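The time-dependent interpolation of "before" and "after" vertex information that geomorphing relies on is, at its core, a per-vertex linear blend; the rasterizer then interpolates the result across each polygon. A minimal sketch (names and data assumed):

```python
def geomorph(verts_before, verts_after, t):
    """Interpolate per-vertex positions between the geometry used before
    and after geomorphing; t = 0 gives the 'before' mesh, t = 1 the
    'after' mesh."""
    return [
        tuple((1.0 - t) * a + t * b for a, b in zip(v0, v1))
        for v0, v1 in zip(verts_before, verts_after)
    ]

before = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
after = [(0.0, 1.0, 0.0), (1.0, 1.0, 2.0)]
mid = geomorph(before, after, 0.5)  # halfway through the transition
```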
-
Publication number: 20140118402
Abstract: A set of images is processed to modify and register the images to a reference image in preparation for blending the images to create a high-dynamic range image. To modify and register a source image to a reference image, a processing unit generates correspondence information for the source image based on a global correspondence algorithm, generates a warped source image based on the correspondence information, estimates one or more color transfer functions for the source image, and fills the holes in the warped source image. The holes in the warped source image are filled based on either a rigid transformation of a corresponding region of the source image or a transformation of the reference image based on the color transfer functions.
Type: Application
Filed: April 30, 2013
Publication date: May 1, 2014
Applicant: NVIDIA CORPORATION
Inventors: Orazio GALLO, Kari PULLI, Jun HU
-
Patent number: 8711150
Abstract: Methods and apparatus for deactivating internal constraint curves when inflating an N-sided patch. Given a patch representation, the methods simplify the construction of 3D models from 2D sketches. At least some interior constraint curves may be deactivated when inflating an N-sided patch generated from a 2D sketch, or when performing other surface deformation tasks. An inactive constraint is a passive curve that stays on the surface and that gets modified along with the surface when the surface is inflated, but that does not affect the surface itself. By changing parameters stored at the active constraints, embodiments may modify the surface and turn the inactive constraints from flat 2D curves into 3D space curves. The inactive constraints can be activated at any time when their 3D shape meets the user's expectations.
Type: Grant
Filed: April 23, 2010
Date of Patent: April 29, 2014
Assignee: Adobe Systems Incorporated
Inventors: Pushkar P. Joshi, Nathan A. Carr
-
Patent number: 8711178
Abstract: A method for generating an animated morph between a first image and a second image is provided. The method may include: (i) reading a first set of cephalometric landmark points associated with the first image; (ii) reading a second set of cephalometric landmark points associated with the second image; (iii) defining a first set of line segments by defining a line segment between each of the first set of cephalometric landmarks; (iv) defining a second set of line segments by defining a line segment between each of the second set of cephalometric landmarks such that each line segment of the second set of line segments corresponds to a corresponding line segment of the first set of line segments; and (v) generating an animation progressively warping the first image to the second image based at least on the first set of line segments and the second set of line segments.
Type: Grant
Filed: May 19, 2011
Date of Patent: April 29, 2014
Assignee: Dolphin Imaging Systems, LLC
Inventor: Emilio David Cortés Provencio
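Steps (i)-(iv) above can be sketched directly: pair up the landmark-derived segments of the two images, then interpolate each corresponding segment pair per animation frame. Step (v) is only stubbed here; a field-warping method (for example, segment-based warping in the style of Beier-Neely) would use the interpolated segments to deform the pixels. All names and data are illustrative:

```python
from itertools import combinations

def segments(landmarks):
    """Define a line segment between every pair of landmark points."""
    return [(landmarks[i], landmarks[j])
            for i, j in combinations(range(len(landmarks)), 2)]

def morph_frames(src_landmarks, dst_landmarks, n_frames):
    """For each frame, linearly interpolate every corresponding pair of
    segments between the source and destination configurations."""
    src_segs, dst_segs = segments(src_landmarks), segments(dst_landmarks)
    frames = []
    for f in range(n_frames):
        t = f / (n_frames - 1)
        frame = [
            tuple(tuple((1 - t) * a + t * b for a, b in zip(p0, p1))
                  for p0, p1 in zip(s0, s1))
            for s0, s1 in zip(src_segs, dst_segs)
        ]
        frames.append(frame)
    return frames

src = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # landmarks in image 1
dst = [(1.0, 1.0), (11.0, 1.0), (1.0, 11.0)]   # landmarks in image 2
frames = morph_frames(src, dst, n_frames=3)
```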
-
Patent number: 8704827
Abstract: The description relates to surgical computer systems, including computer program products, and methods for cumulative buffering for surface imaging. A display image is buffered that has been saved from a previous update. A model representing a tool is subtracted from the buffered display image. The subtracted display image is displayed using a CSG technique at a fixed angle. The subtracted display image is saved. This process is repeated so that the displayed image is cumulatively changed with each change in location of the model representing the tool.
Type: Grant
Filed: December 21, 2007
Date of Patent: April 22, 2014
Assignee: Mako Surgical Corp.
Inventor: Min Wu
-
Publication number: 20140104295
Abstract: The disclosure provides an approach for transferring image edits from a source image to target images. In one embodiment, a warping application receives a user-selected region of interest in a source image and determines content-aware bounded weight functions and seed locations for the region of interest. For each of the target images, the warping application initializes a linear blend skinning subspace warp to a projection onto a feature space of a piecewise affine map from scale invariant feature transform features of the source image to the target image. After initializing the warps, the warping application iteratively optimizes the warps by applying the inverse compositional Lucas-Kanade procedure and using the content-aware weight functions in said procedure. Edits made to the source image may automatically be transferred to target images by warping those edits via the optimized warp function for the respective target images.
Type: Application
Filed: October 17, 2012
Publication date: April 17, 2014
Applicant: Disney Enterprises, Inc.
Inventors: Alexander SORKINE-HORNUNG, Kaan YUCER, Alec Stefan JACOBSON, Olga SORKINE-HORNUNG
-
Publication number: 20140085324
Abstract: A system and method for validating GPU-rendered display data (e.g., a sequence of frames) by comparing, across substantially all of the pixel locations in a frame, the GPU-rendered display data to display data rendered by another processor. By checking substantially all of the pixel locations in a frame, errors in the display image data can be detected without prior knowledge of the format, layout, etc. of the display data. The system may be capable of operating without receiving input from a user or producing output to a user, and without receiving input from other applications or producing output to other applications. In this way, the validation is invisible to the user and to other applications.
Type: Application
Filed: September 24, 2012
Publication date: March 27, 2014
Applicant: BARCO N.V.
Inventors: Edouard Y-M J. Charvet, Lieven W. Demeestere, Maarten Zanders
-
Patent number: 8675952
Abstract: Provided is a method and apparatus of extracting a 3D facial expression of a user. When a facial image of the user is received, the 3D facial expression extracting method and apparatus may generate 3D expression information by tracking an expression of the user from the facial image using at least one of shape-based tracking and texture-based tracking, may generate a 3D expression model based on the 3D expression information, and reconstruct the 3D expression model to have a natural facial expression by adding muscle control points to the 3D expression model.
Type: Grant
Filed: July 26, 2010
Date of Patent: March 18, 2014
Assignee: Samsung Electronics Co., Ltd.
Inventors: Young Kyoo Hwang, Jung Bae Kim
-
Patent number: 8660382
Abstract: A computer system running image processing software receives an identification of a desired geographical area to be imaged and collected into an oblique-mosaic image; creates a mathematical model of a virtual camera looking down at an oblique angle, the mathematical model having an oblique-mosaic pixel map of the desired area encompassing multiple source images; assigns surface locations to pixels included in the oblique-mosaic pixel map; creates a ground elevation model of the ground and vertical structures within the oblique-mosaic pixel map using overlapping source images of the desired geographical area, wherein the source oblique images were captured at an oblique angle and compass direction similar to that of the virtual camera; and reprojects, with the mathematical model, source oblique image pixels of the overlapping source images for pixels included in the oblique-mosaic pixel map using the ground elevation model to thereby create an oblique-mosaic image of the desired area.
Type: Grant
Filed: February 27, 2013
Date of Patent: February 25, 2014
Assignee: Pictometry International Corp.
Inventors: Stephen L. Schultz, Frank Giuffrida, Robert Gray
-
Patent number: 8643641
Abstract: System and method for periodic body scan differencing, for detecting changes in surface and subsurface body scans over time. The system may include a scanner and a computer system configured to scan a portion of a body at a first point in time to yield a first scan, scan the portion of the body at a second point in time to yield a second scan, difference the two scans to produce a morphological difference image, and display the morphological difference image. Surface or subsurface scans may be utilized, and any type of scanner that scans to the desired resolution of morphological testing may be used. Scans can be morphed, geometrically, visually, or both, to account for age, weight, or color differences that have occurred between scans. Morphological surface or subsurface differences between scans can be displayed in multiple images or in an atlas view; differences can be displayed independently or overlaid onto the scans, and may be highlighted to make them more readily viewable.
Type: Grant
Filed: May 12, 2008
Date of Patent: February 4, 2014
Inventor: Charles G. Passmore
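The core differencing step, on already-registered scans, amounts to a signed subtraction plus a mask of locations whose change exceeds a threshold (the mask could then drive highlighting or an overlay). A toy sketch on height-map scans; the names, threshold, and data are assumptions:

```python
import numpy as np

def scan_difference(scan_a, scan_b, threshold=0.1):
    """Subtract two registered surface scans taken at different times.

    Returns the signed morphological difference image and a boolean mask
    of locations whose change exceeds `threshold`."""
    diff = np.asarray(scan_b, dtype=float) - np.asarray(scan_a, dtype=float)
    return diff, np.abs(diff) > threshold

a = np.zeros((4, 4))          # first scan: flat surface
b = np.zeros((4, 4))
b[1, 1] = 0.5                 # second scan: a new bump appeared
diff, changed = scan_difference(a, b)
```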
-
Publication number: 20140002502
Abstract: A method of outputting graphics to a display comprising: detecting an input from a user representative of an image manipulation request; performing a first image manipulation process on at least part of the retrieved image data set in accordance with the image manipulation request to produce second graphics; outputting the second graphics to a display area of the display; determining that a boundary condition relating to the retrieved image data set has been satisfied, the boundary condition relating to a limit of the retrieved image data set beyond which there is no further element of the retrieved image data set to be displayed; performing a second image manipulation process on at least part of the retrieved image data set to produce third graphics, the second image manipulation process providing a second type of alteration to the retrieved image data set, the second type of alteration being of a different type than the first type of alteration; and outputting the third graphics to the display area of the display.
Type: Application
Filed: June 27, 2013
Publication date: January 2, 2014
Inventor: Kapsu HAN
-
Patent number: 8610720
Abstract: Methods and apparatus for decomposing an N-sided patch into multiple patches. A single patch may be decomposed into multiple, disjoint, and possibly abutting patches. An internal constraint curve may be selected, and a new patch with the constraint curve as the boundary may be generated. If the constraint curve is closed, it is turned into a hole in the original patch. If the constraint curve is not closed, the system closes the curve. The 3D position, surface normal, and possibly other information such as an up direction required for every point along the boundary of the new patch may be taken from the original patch surface. The new patch(es) may be edited independent of the original patch and may be further decomposed into more patches.
Type: Grant
Filed: April 23, 2010
Date of Patent: December 17, 2013
Assignee: Adobe Systems Incorporated
Inventors: Pushkar P. Joshi, Nathan A. Carr
-
Patent number: 8576250
Abstract: A method, apparatus, media and signals for applying a shape transformation to at least a portion of a three dimensional representation of an appliance for a living body is disclosed. The representation is defined by an input plurality of coordinates representing a general shape of the appliance. The method involves identifying a coordinate location of a datum plane with respect to the representation of the appliance, the datum plane defining a transform volume within which the shape transformation is to be applied, the transform volume extending outwardly from and normal to a first surface of the datum plane. The method also involves identifying input coordinates in the plurality of input coordinates that are located within the transform volume. The method further involves modifying the identified input coordinates in accordance with the shape transformation to produce a modified representation of the appliance, and storing the modified representation of the appliance in a computer memory.
Type: Grant
Filed: October 24, 2007
Date of Patent: November 5, 2013
Assignee: Vorum Research Corporation
Inventors: Robert Malcolm Sabiston, Jeffrey David Chang, Christopher Cameron Handford
-
Patent number: 8576225
Abstract: Systems and processes for rendering fractures in an object are provided. In one example, a surface representation of an object may be converted into a volumetric representation of the object. The volumetric representation of the object may be divided into volumetric representations of two or more fragments. The volumetric representations of the two or more fragments may be converted into surface representations of the two or more fragments. Additional information associated with attributes of adjacent fragments may be used to convert the volumetric representations of the two or more fragments into surface representations of the two or more fragments. The surface representations of the two or more fragments may be displayed.
Type: Grant
Filed: July 13, 2010
Date of Patent: November 5, 2013
Assignee: DreamWorks Animation LLC
Inventors: Akash Garg, Kyle Maxwell, David Lipton
-
Patent number: 8547396
Abstract: Systems, methods, and computer storage media for generating a computer animation of a game. A custom animation platform receives game play data of the game and determines at least one scene based on the game play data. Then, one or more frames in the scene are set up, where at least one of the frames includes at least one non-game pre-production element of the game. Subsequently, the frames are rendered and the rendered frames are combined to generate a computer animation.
Type: Grant
Filed: December 31, 2007
Date of Patent: October 1, 2013
Inventor: Jaewoo Jung
-
Patent number: 8531484
Abstract: Embodiments of the present invention provide a method and a device for generating a morphing animation from multiple images. The method includes: performing hue preprocessing on adjacent images among multiple images; determining the quantity of intermediate frames between the adjacent images according to a feature point differential of the adjacent images on which the hue preprocessing has been performed; generating, between the adjacent images through an image warping technology, intermediate frame images, the quantity of which is the same as that of the intermediate frames; inserting the intermediate frame images between the adjacent images; and generating a morphing animation from the multiple images and the intermediate frame images that are inserted between all adjacent images among the multiple images. The morphing animation generated in the present invention is smooth and natural, thereby improving a morphing effect of the morphing animation.
Type: Grant
Filed: September 26, 2012
Date of Patent: September 10, 2013
Assignee: Huawei Technologies Co., Ltd.
Inventors: Lanfang Dong, Zeju Xia, Yuan Wu, Jingfan Qin
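The two key steps (choosing the intermediate-frame count from the feature-point differential, then generating that many in-betweens) might be sketched as follows. A plain cross-dissolve stands in for the patent's image-warping step, and the constants and names are assumptions:

```python
import numpy as np

def n_intermediate_frames(feat_a, feat_b, frames_per_unit=2.0, max_frames=30):
    """Pick the quantity of intermediate frames from the mean feature-point
    displacement: adjacent images that differ more get more in-betweens,
    so the morph stays smooth."""
    disp = np.linalg.norm(np.asarray(feat_b) - np.asarray(feat_a), axis=1)
    return min(max_frames, max(1, int(round(disp.mean() * frames_per_unit))))

def cross_dissolve(img_a, img_b, n):
    """Stand-in for warped in-betweens: n linearly blended frames,
    excluding the two endpoint images themselves."""
    return [(1 - t) * img_a + t * img_b
            for t in np.linspace(0, 1, n + 2)[1:-1]]

fa = [(0.0, 0.0), (4.0, 0.0)]          # feature points in image A
fb = [(2.0, 0.0), (6.0, 0.0)]          # corresponding points in image B
n = n_intermediate_frames(fa, fb)      # mean displacement 2.0
frames = cross_dissolve(np.zeros((2, 2)), np.ones((2, 2)), n)
```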
-
Patent number: 8493406
Abstract: The rendering on a user interface of a potentially complex computerized scene generation system. The user interface includes visual item(s) that have associated data. In addition, another set of visual items may be driven by data provided to input parameters, and may represent elements in the scene. Through user gestures, a user may correlate data items in the data source visual items with the element visual items to thereby automatically populate the element visual items with data, affecting the rendering of the data-driven element visual items. The element visual items might be linked, once again, perhaps through user gestures, to a parent visual item. In so doing, properties of the parent visual item might change and/or input parameters of the element visual items might change. Accordingly, complex visual scenes may be created through potentially quite simple user gestures.
Type: Grant
Filed: June 19, 2009
Date of Patent: July 23, 2013
Assignee: Microsoft Corporation
Inventors: Darryl E. Rubin, Vijay Mital, David G. Green, Jason A. Wolf, John A. Payne
-
Publication number: 20130155111
Abstract: A method of making an article of footwear is disclosed. The method includes the steps of providing a customer with a pre-selected set of graphics, allowing the customer to choose a set of input graphics, and generating a set of morphed graphics based on the set of input graphics. The user can select a morphed graphic and apply it to an article. The method may further include the step of limiting the number of times a customized graphic may be selected and applied to an article.
Type: Application
Filed: December 15, 2011
Publication date: June 20, 2013
Applicant: Nike, Inc.
Inventors: David J. Dirsa, Clifford B. Gerber, Petre Gheorghian, E. Scott Morris
-
Publication number: 20130155112
Abstract: Provided herein is a method, apparatus, and computer program product for graphically transitioning between multiple interactive levels of a program. In particular, the method of example embodiments may include providing for display of a first representation of a program including a first interaction level, and providing for presentation of a first graphical transition from the first representation of the program to an intermediate representation of the program in response to receiving a first input, where the first graphical transition provides a visual cue indicative of a relationship between the first representation of the program and the intermediate representation of the program. The first graphical transition may resemble the physical manipulation of a tangible object. The method may include providing for display of the intermediate representation, where the intermediate representation includes an intermediate interaction level.
Type: Application
Filed: December 20, 2011
Publication date: June 20, 2013
Applicant: Nokia Corporation
Inventors: Christopher Paretti, William Lindmeier
-
Publication number: 20130141457
Abstract: An electronic device capable of displaying correct characters in place of garbled characters includes a storage unit, a control unit, and a display unit. The storage unit stores a number of garbled characters and identifiable characters associated with the garbled characters; the identifiable characters are translated from original characters corresponding to the garbled characters. The control unit obtains any garbled character displayed on the display unit and determines whether the obtained garbled character matches a garbled character stored in the storage unit; if so, the control unit controls the display unit to display an identifiable character in place of the garbled character.
Type: Application
Filed: June 27, 2012
Publication date: June 6, 2013
Applicants: HON HAI PRECISION INDUSTRY CO., LTD., Fu Tai Hua Industry (Shenzhen) Co., Ltd.
Inventor: Qiang YOU
-
Patent number: 8452125. Abstract: A system, including a computer system running image processing software, receives an identification of a desired area to be imaged and collected into an oblique-mosaic image, and creates a mathematical model of a virtual camera having a sensor higher in elevation than that from which the source oblique images were captured and looking down at an oblique angle, the mathematical model having an oblique-mosaic pixel map for the sensor covering the desired area and encompassing multiple source images. The system determines geographic coordinates for pixels, and selects source oblique images of those geographic coordinates captured at an oblique angle and compass direction similar to those of the virtual camera. The computer system reprojects at least one source oblique image pixel of the area to be imaged for each pixel included in the oblique-mosaic pixel map to create the oblique-mosaic image. Type: Grant. Filed: October 19, 2011. Date of Patent: May 28, 2013. Assignee: Pictometry International Corp. Inventors: Stephen Schultz, Frank Giuffrida, Robert Gray
-
Patent number: 8436874. Abstract: An item editing device, an item editing method, and a program stored on a tangible medium enable easy editing of items, whether the item is a rectangle or an arch shape, by unifying the item editing operation and reducing the number of steps. Type: Grant. Filed: December 7, 2007. Date of Patent: May 7, 2013. Assignee: Seiko Epson Corporation. Inventors: Masakazu Honma, Junichi Otsuka
-
Patent number: 8421805. Abstract: Dynamic animated avatars selectively morph to reveal or depict the user's identity while simultaneously emulating or tracking active movements associated with the user's verbal stream or perceived movements of the user. A user elects morphing to transition the rendered avatar to another, more revealing avatar, for example from a cartoon caricature to a posed photographic image. Animation processing identifies active movements derived from an input stream to compute animations of the mouth and profile based on speech, text, or captured video from the user. The computed animations appear as active movements to the currently rendered avatar, and emulate the user speaking or moving according to text, audio or video being transmitted. A user maintains an anonymous or posed identity with respect to the receiving party, and chooses to reveal a different avatar depicting a truer identity at the user's discretion, while continuing to display active movements paralleling the user's verbal activities. Type: Grant. Filed: February 9, 2007. Date of Patent: April 16, 2013. Assignee: Dialogic Corporation. Inventor: Wendell E. Bishop
-
Patent number: 8417365. Abstract: A system includes an encoding module and a decoding module. The encoding module generates a three-dimensional (3D) model of a part, modifies the 3D model to include a 3D structure, and generates a computer-aided design (CAD) file based on the modified 3D model. The decoding module determines whether the CAD file includes the 3D structure, authorizes operation of analysis software on the CAD file when the CAD file includes the 3D structure, and prohibits operation of the analysis software when the CAD file does not include the 3D structure. Type: Grant. Filed: April 15, 2010. Date of Patent: April 9, 2013. Inventors: Paul N. Crepeau, Qigui Wang
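The encode/decode gate described above reduces to embedding a marker in the model and refusing analysis when it is absent. In this toy sketch a CAD file is a plain dictionary and `MARKER_TAG` is an invented placeholder; the patent embeds an actual 3D structure, not a string:

```python
# Invented placeholder for the embedded 3D structure that authorizes analysis.
MARKER_TAG = "AUTH_3D_STRUCTURE"

def encode(cad_model):
    """Encoding module: return a copy of the model with the marker embedded."""
    stamped = dict(cad_model)
    stamped["structures"] = list(cad_model.get("structures", [])) + [MARKER_TAG]
    return stamped

def authorize_analysis(cad_file):
    """Decoding module: permit analysis only when the marker is present."""
    return MARKER_TAG in cad_file.get("structures", [])
```

A file produced by `encode` passes the check; any other file is rejected, mirroring the authorize/prohibit split in the abstract.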
-
Patent number: 8411111. Abstract: At least one embodiment of the present invention relates to a method, a device and/or a computer program product for creating a (three- or four-dimensional) model from a number of different image datasets from a number of modalities. To this end, in at least one embodiment, the image datasets are fitted into a provided representation, the different image datasets being automatically enriched with contour lines and integrated into the representation. The model is created from this. Type: Grant. Filed: December 17, 2009. Date of Patent: April 2, 2013. Assignee: Siemens Aktiengesellschaft. Inventor: Ulrich Hartung
-
Patent number: 8407111. Abstract: In one aspect, embodiments of a method of correlating information and location comprise establishing a reference point for an entity. A three-dimensional coordinate is assigned to a device that comprises the entity. The assigned three-dimensional coordinate is relative to the reference point. Information about the device is correlated with the assigned three-dimensional coordinate and stored in a computing device. The computing device receives secondary information, and in response to the secondary information received by the computing device, the computing device provides at least one of the three-dimensional coordinate of the device or at least a portion of the information about the device. Type: Grant. Filed: March 31, 2011. Date of Patent: March 26, 2013. Assignee: General Electric Company. Inventors: Thomas Bernardy, Stefan Pieper
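The correlation step can be pictured as a small registry keyed by device. Everything here (`REFERENCE_POINT`, `DeviceRegistry`, the query shape) is an assumption for illustration, not the patented system:

```python
# Reference point established for the entity (e.g., a corner of a plant floor).
REFERENCE_POINT = (0.0, 0.0, 0.0)

class DeviceRegistry:
    """Correlates each device's 3D coordinate (relative to the reference
    point) with information about the device, and answers queries on it."""

    def __init__(self):
        self._coords = {}
        self._info = {}

    def register(self, device_id, coord, info):
        # Store the coordinate relative to the reference point.
        self._coords[device_id] = tuple(c - r for c, r in zip(coord, REFERENCE_POINT))
        self._info[device_id] = info

    def query(self, device_id):
        # "Secondary information" (a device id) yields coordinate and info.
        return self._coords.get(device_id), self._info.get(device_id)
```

With a nonzero reference point, all stored coordinates shift accordingly, which is the point of making them relative.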
-
Patent number: 8368722. Abstract: An interactive user interface element makes content (e.g., images, news, standard indexed Web content) available to a user of online map services (e.g., a virtual globe program). In some implementations, when zoomed out on a feature displayed in map imagery (e.g., virtual globe imagery), the user sees a non-interactive user interface element (e.g., a feature label). As the user expresses greater interest in the feature by, for example, flying ("zooming") toward the feature, the non-interactive user interface element is replaced by (or morphs into) an interactive user interface element (e.g., a feature label including a clickable icon). In some implementations, a user's interaction with the interactive user interface element (or navigation actions in the imagery) launches a content access portal (e.g., a balloon) for presenting content (e.g., text, digital photos, video, audio) and/or providing access (e.g., links) to related content. Type: Grant. Filed: April 18, 2007. Date of Patent: February 5, 2013. Assignee: Google Inc. Inventor: Rebecca Moore
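The zoom-dependent swap can be reduced to a threshold check. The threshold value, element fields, and handler name below are arbitrary assumptions, not Google's implementation:

```python
# Assumed zoom level at which the passive label becomes interactive.
ZOOM_THRESHOLD = 12

def element_for_zoom(feature_name, zoom):
    """Return a non-interactive label when zoomed out, and an interactive
    label (with a clickable content portal) once the user zooms in enough."""
    if zoom < ZOOM_THRESHOLD:
        return {"type": "label", "text": feature_name, "interactive": False}
    return {"type": "label", "text": feature_name, "interactive": True,
            "on_click": "open_content_balloon"}  # hypothetical handler name
```

A smooth morph between the two states would interpolate visual properties across the threshold rather than switching abruptly, but the gating logic is the same.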
-
Publication number: 20130021362. Abstract: An apparatus includes an input unit, a microphone, a control unit, and a voice recognition unit. The input unit is configured to receive a first type input and a second type input. The microphone is configured to receive an input sound signal. The control unit is configured to control a display to display feedback according to the type of input. The voice recognition unit is configured to perform recognition processing on the input sound signal. Type: Application. Filed: July 10, 2012. Publication date: January 24, 2013. Applicant: SONY CORPORATION. Inventors: Akiko Sakurada, Osamu Shigeta, Nariaki Sato, Yasuyuki Koga, Kazuyuki Yamamoto
-
Patent number: 8347329. Abstract: A method of providing a menu for video content is disclosed and may include delivering a looping video clip over a first video channel. The looping video clip may be configured to be displayed on a video plane. The method may also include delivering side channel data over a second video channel. The side channel data may include two or more navigable menu elements that may be configured to be displayed on a graphics overlay plane. Type: Grant. Filed: October 14, 2010. Date of Patent: January 1, 2013. Assignee: Sony Corporation. Inventor: Thomas Patrick Dawson
-
Patent number: 8314810. Abstract: A system for identifying prior selection of specific display information on an EPG. In one embodiment, a user selects an object on a screen, and upon selection of the object, an attribute of the object (e.g., color, transparency, etc.) is modified. The modified value is saved into memory so the user may later identify that the specific object was selected. Each subsequent selection modifies the attribute further, allowing the user to identify that the object was selected a number of times. In one embodiment, the attribute continues to be modified until a specific expiration limit has been reached. Type: Grant. Filed: January 30, 2012. Date of Patent: November 20, 2012. Assignee: JLB Ventures LLC. Inventor: Yakov Kamen
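The incremental-attribute idea above can be sketched as a brightness value that dims a step per selection until a cap. The step size, the cap, and the field names are invented for this example:

```python
# Assumed limits: at most 5 counted selections, each removing 15% brightness.
MAX_SELECTIONS = 5
DIM_STEP = 0.15

def select(obj):
    """Record one more selection of an EPG object and update its attribute."""
    count = min(obj.get("selections", 0) + 1, MAX_SELECTIONS)  # expiration limit
    obj["selections"] = count                                  # saved to memory
    obj["brightness"] = max(0.0, 1.0 - DIM_STEP * count)       # modified attribute
    return obj
```

After the fifth selection the attribute stops changing, matching the abstract's expiration limit.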
-
Patent number: 8314801. Abstract: Embodiments described herein are directed to automatically generating an animation for a transition between a current state and a new state. In one embodiment, a computer system accesses state properties of a visual element corresponding to the current state the visual element is in and the new state the visual element is to be transitioned to. The state properties include visual properties and transition description information. The computer system determines the differences between the visual properties of the current state and the new state and automatically generates an animation based on those differences, such that the animation is playable to transition the visual element from the current state to the new state. Type: Grant. Filed: February 29, 2008. Date of Patent: November 20, 2012. Assignee: Microsoft Corporation. Inventors: Kenneth L. Young, Steven Charles White, Christian B. Schormann
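The diff-then-animate pipeline can be sketched as follows; the property names, linear easing, and frame count are assumptions, not the patented transition description format:

```python
def diff_states(current, new):
    """Return only the visual properties whose values differ between states.
    Properties present only in the new state are ignored in this sketch."""
    return {k: (current[k], new[k])
            for k in current if current[k] != new.get(k, current[k])}

def generate_animation(current, new, frames=4):
    """Linearly interpolate each differing property over `frames` keyframes."""
    changes = diff_states(current, new)
    keyframes = []
    for i in range(1, frames + 1):
        t = i / frames  # animation progress in (0, 1]
        keyframes.append({k: a + (b - a) * t for k, (a, b) in changes.items()})
    return keyframes
```

Unchanged properties produce no animation work at all, which is the practical payoff of diffing the two states first.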
-
Publication number: 20120281019. Abstract: A novel algorithmic framework is presented for the simulation of hyperelastic soft tissues that drastically improves robustness, stability, and performance compared to existing techniques. The approach is robust to large deformation (even inverted configurations) and extremely stable by virtue of careful treatment of linearization. Additionally, a new multigrid approach is presented to efficiently support hundreds of thousands of degrees of freedom (rather than the few thousand typical of existing techniques) in a production environment. Furthermore, these performance and robustness improvements are guaranteed in the presence of both collision handling and quasistatic/implicit time-stepping techniques. The result is a significant advance in the applicability of hyperelastic simulation to skeleton-driven character skinning. Type: Application. Filed: December 20, 2011. Publication date: November 8, 2012. Applicant: Disney Enterprises, Inc. Inventors: Rasmus Tamstorf, Andrew Selle, Aleka McAdams, Eftychios Sifakis, Joseph Teran
-
Patent number: 8275590. Abstract: A user may simulate wearing real-wearable items, such as garments and accessories, via virtual counterparts. A virtual-outfitting interface may be provided for presentation to the user. An item-search/selection portion within the virtual-outfitting interface may be provided. The item-search/selection portion may depict one or more virtual-wearable items corresponding to one or more real-wearable items. The user may be allowed to select at least one virtual-wearable item from the item-search/selection portion. A main display portion within the virtual-outfitting interface may be provided. The main display portion may include a composite video feed that incorporates a video feed of the user and the selected at least one virtual-wearable item such that the user appears to be wearing the selected at least one virtual-wearable item in the main display portion. Type: Grant. Filed: June 23, 2010. Date of Patent: September 25, 2012. Assignee: Zugara, Inc. Inventors: Matthew Szymczyk, Aaron Von Hungen, Blake Callens, Hans Forsman, Jack Benoff
-
Publication number: 20120236105. Abstract: A method and apparatus for changing the appearance of a user during a video call is provided herein. Prior to making a call, a user identifies several images that can be used as morphing templates and also identifies a context where each template will be used. During a video call, a morphing template is then chosen based on the context of the call (e.g., time, place, caller identification, etc.), and the user's image is morphed based on the chosen template. Because the templates used for morphing can be easily changed based on the context of the call, the user is provided with a simple technique for morphing their image. Type: Application. Filed: March 14, 2011. Publication date: September 20, 2012. Applicant: MOTOROLA MOBILITY, INC. Inventors: William P. Alberth, Dean E. Thorson, Kenneth A. Haas
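Context-based template selection is essentially a first-match rule lookup. The context keys, template names, and matching rule below are illustrative assumptions:

```python
# Invented user-configured templates, each with context constraints that must
# all hold for the template to apply; the empty context is a catch-all.
TEMPLATES = [
    {"name": "business_face", "context": {"caller": "boss"}},
    {"name": "party_face",    "context": {"place": "home"}},
    {"name": "default_face",  "context": {}},
]

def choose_template(call_context):
    """Pick the first template whose context constraints all match the call."""
    for tpl in TEMPLATES:
        if all(call_context.get(k) == v for k, v in tpl["context"].items()):
            return tpl["name"]
    return "default_face"
```

Ordering the list from most to least specific makes the catch-all fire only when nothing else matches.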
-
Publication number: 20120223970. Abstract: A method for generating an animated morph between a first image and a second image is provided. The method may include: (i) reading a first set of cephalometric landmark points associated with the first image; (ii) reading a second set of cephalometric landmark points associated with the second image; (iii) defining a first set of line segments by defining a line segment between each of the first set of cephalometric landmarks; (iv) defining a second set of line segments by defining a line segment between each of the second set of cephalometric landmarks such that each line segment of the second set of line segments corresponds to a corresponding line segment of the first set of line segments; and (v) generating an animation progressively warping the first image to the second image based at least on the first set of line segments and the second set of line segments. Type: Application. Filed: May 19, 2011. Publication date: September 6, 2012. Applicant: DOLPHIN IMAGING SYSTEMS, LLC. Inventor: Emilio David Cortés Provencio
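Steps (i)–(iv) can be sketched directly: build a segment between every pair of landmarks, then interpolate corresponding segments for an animation frame. This is a much-simplified sketch; the actual step (v) warps image pixels guided by these segments (in the spirit of feature-based image morphing), which is omitted here:

```python
from itertools import combinations

def segments(landmarks):
    """Steps (iii)/(iv): a line segment between every pair of landmark points."""
    return [(p, q) for p, q in combinations(landmarks, 2)]

def interpolate_segments(segs_a, segs_b, t):
    """Blend each corresponding segment: t=0 gives the first image's segments,
    t=1 gives the second's, intermediate t gives an in-between frame."""
    def lerp(p, q):
        return tuple(a + (b - a) * t for a, b in zip(p, q))
    return [(lerp(p1, p2), lerp(q1, q2))
            for (p1, q1), (p2, q2) in zip(segs_a, segs_b)]
```

Because both images share the same landmark ordering, `combinations` produces corresponding segment lists automatically, satisfying the correspondence requirement of step (iv).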
-
Patent number: 8253739. Abstract: A method for interpolating an intermediate polygon P from two polygons P1 and P2. The method includes, in at least one embodiment, defining a similarity measure based on a geometrical reference object, the geometrical reference object being associated with the two polygons P1 and P2; and, based on the similarity measure, determining an initial pair of corresponding points. Based on this initial pair of corresponding points, in at least one embodiment of the method, a sequence of pairs of corresponding points is determined, from which sequence the intermediate polygon is interpolated. Type: Grant. Filed: April 3, 2008. Date of Patent: August 28, 2012. Assignee: Siemens Aktiengesellschaft. Inventor: Peter Hassenpflug
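Once the pairs of corresponding points are known, the final interpolation step is plain linear blending. This sketch assumes the correspondence is already given (the patent's similarity-measure machinery for finding it is not reproduced here):

```python
def interpolate_polygon(p1, p2, t=0.5):
    """Blend each pair of corresponding points: t=0 yields P1, t=1 yields P2,
    and intermediate t yields the intermediate polygon P."""
    if len(p1) != len(p2):
        raise ValueError("sketch assumes point correspondence is already known")
    return [tuple(a + (b - a) * t for a, b in zip(q1, q2))
            for q1, q2 in zip(p1, p2)]
```

The hard part the patent addresses is choosing which point of P1 pairs with which point of P2; a bad correspondence makes even this simple blend fold over itself.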