Patents Examined by Zhengxi Liu
  • Patent number: 10395622
    Abstract: An apparatus capable of displaying a plurality of images while a screen is scrolled, including a processing unit configured to process an image to be displayed on a display unit, a display control unit configured to control the display unit to display a plurality of images processed by the processing unit, a scroll control unit configured to set another plurality of images as display targets to be displayed on the display unit by scrolling the displayed plurality of images, and a control unit configured to control the processing unit to process the plurality of images as the display targets so as to prioritize the processing for an image disposed on an upstream side over an image disposed on a downstream side in a moving direction of scrolled images.
    Type: Grant
    Filed: February 23, 2017
    Date of Patent: August 27, 2019
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Yasufumi Oyama
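A minimal sketch of the prioritization idea in this abstract: order pending image-processing work so that images on the upstream side of the scroll direction are handled before downstream ones. The function name, data layout, and direction convention below are illustrative assumptions, not the patented implementation.

```python
def order_by_scroll_priority(images, scroll_direction):
    """Return images sorted so that items on the upstream side of the scroll
    are processed before downstream items.

    images: list of dicts with a 'position' key (index along the scroll axis).
    scroll_direction: +1 when new images enter from the high-index side,
    -1 when they enter from the low-index side (assumed convention).
    """
    # Upstream items are the ones entering the visible area; processing them
    # first keeps scrolling from exposing unprocessed images.
    reverse = scroll_direction > 0
    return sorted(images, key=lambda img: img["position"], reverse=reverse)


if __name__ == "__main__":
    pending = [{"position": i, "name": f"img_{i}"} for i in range(5)]
    for img in order_by_scroll_priority(pending, scroll_direction=+1):
        print("process", img["name"])
```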
  • Patent number: 10388042
    Abstract: Methods for efficient display of data points in a user interface are performed by systems and apparatuses. Efficient display of data points in a user interface includes maximizing coverage of data points prior to rendering. Coverage is determined using a radius value for represented data points in a data set. The radius may be increased to correspondingly generate additional coverage. Covered data points may be removed from the rendering subset as the radius is set and increased. The radius is increased until the number of represented data points to render is less than a threshold value. Multiple data sets may be efficiently rendered together.
    Type: Grant
    Filed: January 15, 2018
    Date of Patent: August 20, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Cristian Petculescu, Marius Dumitru, Radu C. Coman, Amir M. Netz
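The radius-growing reduction described above can be illustrated with a small sketch: repeatedly grow a coverage radius, drop points that fall inside the radius of an already-kept point, and stop once the surviving subset is under the render threshold. The names and the brute-force O(n²) coverage test are assumptions for brevity, not the patented method.

```python
import math

def reduce_points(points, max_points, radius=1.0, growth=1.5):
    """Shrink the set of points to render by letting kept points cover nearby ones.

    points: list of (x, y) tuples.
    max_points: render threshold; the radius keeps growing until the surviving
    subset is no larger than this.
    """
    kept = list(points)
    while len(kept) > max_points:
        survivors = []
        for p in kept:
            # Drop p if it is already covered by a survivor's radius.
            if all(math.dist(p, q) > radius for q in survivors):
                survivors.append(p)
        kept = survivors
        radius *= growth  # enlarge coverage for the next pass if still too many
    return kept

if __name__ == "__main__":
    pts = [(x * 0.5, y * 0.5) for x in range(20) for y in range(20)]
    print(len(reduce_points(pts, max_points=50)))
```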
  • Patent number: 10350497
    Abstract: When a data transfer process is started, a state in which a plurality of character objects lift an icon object in a three-dimensional virtual space, and carry the icon object from a start point toward a completion point in the three-dimensional virtual space, according to a degree of progress of the data transfer process, is displayed. At this time, a display range is sequentially shifted according to the degree of progress of the data transfer process so as to constantly display the icon object on a screen.
    Type: Grant
    Filed: May 30, 2012
    Date of Patent: July 16, 2019
    Assignee: Nintendo Co., Ltd.
    Inventors: Yosuke Fujino, Naoya Morimura
  • Patent number: 10319135
    Abstract: A computer-implemented method for simulating a human or animal body taking a posture, comprising the steps of: a) providing a model (AV) of said human or animal body, including a skeleton comprising a plurality of bones articulated by rotational joints to form at least one kinematic chain; b) defining a starting position and a starting rotational state for each rotational joint of the skeleton, a target point (T) and a bone, called end bone, of a kinematic chain, called active kinematic chain; c) for a set of bones of the active kinematic chain, including the end bone, defining at least one axis (GZ) having a fixed orientation with respect to the bone; d) determining a first posture of the body by performing bounded rotations of a set of joints of the active kinematic chain; and e) determining a second posture of the body by iteratively performing bounded rotations of a set of joints of the active kinematic chain in order to direct a selected axis (GZ) of the end bone toward the target.
    Type: Grant
    Filed: November 13, 2017
    Date of Patent: June 11, 2019
    Assignee: Dassault Systemes
    Inventor: Mickaël Brossard
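The two-stage posture search in this abstract resembles an inverse-kinematics solve with per-step rotation bounds. The planar, CCD-style sketch below only illustrates bounded joint rotations driving a chain tip toward a target; it is not the Dassault Systèmes algorithm.

```python
import math

def solve_chain(joint_angles, bone_lengths, target,
                limit=math.radians(10), iters=50):
    """Iteratively rotate each joint, bounded per step, so the chain tip
    approaches a 2D target point (a CCD-style solve)."""
    angles = list(joint_angles)

    def forward(angles):
        # Walk the kinematic chain and collect joint positions in the plane.
        pts, x, y, a = [(0.0, 0.0)], 0.0, 0.0, 0.0
        for ang, length in zip(angles, bone_lengths):
            a += ang
            x += length * math.cos(a)
            y += length * math.sin(a)
            pts.append((x, y))
        return pts

    for _ in range(iters):
        for j in range(len(angles) - 1, -1, -1):
            pts = forward(angles)
            tip, joint = pts[-1], pts[j]
            # Rotation that would swing the tip toward the target around joint j,
            # wrapped to [-pi, pi] and clamped to the per-step rotation bound.
            desired = (math.atan2(target[1] - joint[1], target[0] - joint[0])
                       - math.atan2(tip[1] - joint[1], tip[0] - joint[0]))
            desired = (desired + math.pi) % (2 * math.pi) - math.pi
            angles[j] += max(-limit, min(limit, desired))
    return angles

if __name__ == "__main__":
    final = solve_chain([0.3, 0.2, 0.1], [1.0, 1.0, 1.0], target=(1.5, 1.5))
    print([round(a, 3) for a in final])
```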
  • Patent number: 10311627
    Abstract: A method of processing a graphics pipeline in a graphics processing apparatus includes performing pixel shading to process pixels corresponding to an object, texturing the object, and transmitting data of a textured object to a processing path for a post-processing operation of the textured object. A graphics processing apparatus for processing a graphics pipeline includes a shading processor configured to perform pixel shading to process pixels corresponding to an object. A texturing processor is configured to texture the object, determine a post-processing operation mode to adjust visual effects of the textured object, and transmit data of the textured object to a processing path for the post-processing operation in accordance with the determined post-processing mode. A reorder buffer is configured to buffer data of the object in accordance with a processing order when the processing path bypasses the shading processor.
    Type: Grant
    Filed: August 30, 2017
    Date of Patent: June 4, 2019
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Yeon-gon Cho, Soo-jung Ryu
  • Patent number: 10269178
    Abstract: Some embodiments of the invention pertain to a method for visualizing surface data and panorama image data in a three-dimensional scene. In some embodiments, the method may include providing a map view mode and a panorama view mode to a user. In some embodiments, the map view mode and/or the panorama view mode may include a multitude of surface tiles representing features of the three-dimensional surface, and may be referenced relative to a coordinate reference system. In some embodiments, the panorama image data may be associated with at least one panorama image and may include panorama position data describing a position relative to the coordinate reference system. In some embodiments, the map view mode may include visualizing surface data of at least a part of the representation of the three-dimensional surface as perceived from a map viewpoint.
    Type: Grant
    Filed: September 9, 2015
    Date of Patent: April 23, 2019
    Assignee: MY VIRTUAL REALITY SOFTWARE AS
    Inventors: Asmund Moen Nordstoga, Olav Sylthe
  • Patent number: 10260840
    Abstract: A mobile ballistics processing and display system for receiving data associated with one or more ballistics variables, for processing such variables, and for displaying a ballistics solution associated with such variables in an easily and quickly understandable map format. One or more ballistics variables are inputted into a mobile computing device or are otherwise acquired by such device. Projected in-flight projectile characteristics are calculated by the computing device based upon the ballistics variables. Users are provided with the ability to input in-flight bullet characteristics criteria into the computing device. The computing device is configured to depict, in map format, projected paths of a projectile from one or more shooter locations to one or more target locations.
    Type: Grant
    Filed: January 19, 2015
    Date of Patent: April 16, 2019
    Assignee: GeoBallistics, LLC
    Inventors: Joe D. Baker, Jeffrey P. Barstad
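As a rough illustration of projecting an in-flight path from a shooter location toward a target, the sketch below integrates a simple point-mass trajectory with gravity and a quadratic drag term and returns samples that could be drawn along a map track. The drag model and parameter names are assumptions; the patent's actual ballistics solution is not reproduced here.

```python
import math

def trajectory(muzzle_velocity, elevation_deg, drag_coeff=0.001, dt=0.01, g=9.81):
    """Integrate a simple point-mass trajectory and return (downrange, height)
    samples until the projectile returns to ground level."""
    vx = muzzle_velocity * math.cos(math.radians(elevation_deg))
    vy = muzzle_velocity * math.sin(math.radians(elevation_deg))
    x = y = 0.0
    path = [(x, y)]
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        # Drag opposes the velocity; gravity acts on the vertical component.
        vx -= drag_coeff * speed * vx * dt
        vy -= (g + drag_coeff * speed * vy) * dt
        x += vx * dt
        y += vy * dt
        path.append((x, y))
    return path

if __name__ == "__main__":
    pts = trajectory(muzzle_velocity=800.0, elevation_deg=2.0)
    print(f"max range ~ {pts[-1][0]:.0f} m over {len(pts)} samples")
```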
  • Patent number: 10255887
    Abstract: A computer-readable recording medium storing an intensity of interest evaluation program that causes a computer to execute a procedure is provided. The procedure includes: using a movement amount detection sensor installed in an information processing terminal to detect a value of a movement amount of the information processing terminal in a period in which content is being displayed on the information processing terminal; and evaluating an intensity of interest toward the content based on a length of a first period, within the period, in which the detected value of the movement amount of the information processing terminal is a predetermined value or less.
    Type: Grant
    Filed: July 31, 2017
    Date of Patent: April 9, 2019
    Assignee: FUJITSU LIMITED
    Inventors: Teruyuki Sato, Koichiro Niinuma
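The evaluation described above boils down to measuring how long the terminal stays still while the content is on screen. A minimal sketch, assuming movement samples at a fixed rate and a hypothetical stillness threshold:

```python
def interest_score(movement_samples, threshold, sample_interval_s=0.1):
    """Estimate interest as the fraction of the display period during which the
    detected movement amount stays at or below the threshold.

    movement_samples: per-sample movement amounts from the motion sensor
    collected while the content was displayed.
    """
    if not movement_samples:
        return 0.0
    still = sum(1 for m in movement_samples if m <= threshold)
    still_seconds = still * sample_interval_s
    total_seconds = len(movement_samples) * sample_interval_s
    return still_seconds / total_seconds

if __name__ == "__main__":
    samples = [0.02, 0.01, 0.30, 0.05, 0.02, 0.01]   # arbitrary demo values
    print(f"interest: {interest_score(samples, threshold=0.05):.2f}")
```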
  • Patent number: 10249091
    Abstract: An augmented reality (AR) output device or virtual reality (VR) output device is worn by a user, and includes one or more sensors positioned to detect actions performed by a user of the immersive output device. A processor provides a data signal configured for the AR or VR output device, causing the immersive output device to provide AR output or VR output via a stereographic display device. The data signal encodes audio-video data. The processor controls a pace of scripted events defined by a narrative in the AR output or the VR output, based on output from the one or more sensors indicating actions performed by a user of the AR or VR output device. The audio-video data may be packaged in a non-transitory computer-readable medium with additional content that is coordinated with the defined narrative and is configured for providing an alternative output, such as 2D video output or the stereoscopic 3D output.
    Type: Grant
    Filed: October 7, 2016
    Date of Patent: April 2, 2019
    Assignee: WARNER BROS. ENTERTAINMENT INC.
    Inventors: Christopher DeFaria, Piotr Mintus, Gary Lake-Schaal, Lewis Ostrover
  • Patent number: 10242495
    Abstract: The present invention concerns a method for adapting a mesh model to make it match a target. The model comprises a plurality of reference interfaces, each reference interface being associated with a target interface in the target. The method comprises, for at least one pair of successive interfaces, defining four intersections between a current alignment and the reference interfaces or the associated target interfaces, and modifying the coordinates for each current corner of the alignment on the basis of the initial coordinates of the current corner, and the four defined intersections, the modified coordinates of the current corner being on the current alignment.
    Type: Grant
    Filed: July 21, 2014
    Date of Patent: March 26, 2019
    Assignee: TOTAL SA
    Inventors: Aurèle Forge, Frédérik Pivot
  • Patent number: 10210618
    Abstract: Within examples, object image masking is provided. An example method includes receiving a depth mask of an object, projecting the depth mask of the object onto an image of the object in a background so as to generate a depth image of the object in the background, determining portions of the depth image of the object in the background that are representative of the object and that are representative of the background, determining a foreground mask of the object based on the portions of the depth image of the object in the background that are representative of the object, and using the foreground mask of the object to identify portions of the image representative of the object.
    Type: Grant
    Filed: December 22, 2014
    Date of Patent: February 19, 2019
    Assignee: Google LLC
    Inventors: James Joseph Kuffner, James Robert Bruce, Ken Conley, Arshan Poursohi
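A small NumPy sketch of the masking idea: given a depth image registered to an RGB image of the object in its background, keep pixels whose depth indicates the object (closer than the background plane) as the foreground mask, then use that mask to pull out object pixels. The threshold choice and array names are assumptions, not the patented pipeline.

```python
import numpy as np

def foreground_mask(depth_image, background_depth, margin=0.05):
    """Pixels sufficiently closer than the known background depth are treated
    as belonging to the object."""
    valid = depth_image > 0  # zero depth commonly means "no reading"
    return valid & (depth_image < background_depth - margin)

def extract_object(rgb_image, mask):
    """Black out everything outside the foreground mask."""
    return rgb_image * mask[..., np.newaxis]

if __name__ == "__main__":
    depth = np.full((4, 4), 2.0)
    depth[1:3, 1:3] = 1.0                      # the object sits closer to the camera
    rgb = np.ones((4, 4, 3), dtype=np.uint8) * 255
    mask = foreground_mask(depth, background_depth=2.0)
    print(mask.astype(int))
    print(extract_object(rgb, mask)[..., 0])
```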
  • Patent number: 10198165
    Abstract: A system for mapping a cardiac image of a single heart chamber and a method thereof are disclosed. In the system and method, one heart chamber (such as a left atrium, a left ventricle, a right atrium, a right ventricle or an aortic structure) can be selected in a 3D-based cardiac image (such as a CT or MRI image), and slices of the selected heart chamber are reconstructed and unwrapped in a 2D mapping visual display manner. The visual display and the angle alignment planes required for subsequent analysis are adjusted manually by the operator, with specific global and regional architectural analysis performed by an automatic algorithm.
    Type: Grant
    Filed: July 6, 2016
    Date of Patent: February 5, 2019
    Assignee: NATIONAL YANG-MING UNIVERSITY
    Inventors: Tung-Hsin Wu, Jing-Yi Sun, Chun-Ho Yun, Chung-Lieh Hung
  • Patent number: 10198844
    Abstract: One exemplary process for animating hair includes receiving data representing a plurality of hairs and a plurality of objects in a timestep of a frame of animation. A first tree is populated to represent kinematic objects of the plurality of objects and a second tree is populated to represent dynamic objects of the plurality of objects based on the received data. A first elasticity preconditioner is created to represent internal elastic energy of the plurality of hairs based on the received data. Based on the first tree and the second tree, a first set of potential contacts is determined between two or more hairs of the plurality of hairs or between one or more hairs of the plurality of hairs and one or more objects of the plurality of objects. Positions of the plurality of hairs are determined based on the first set of potential contacts and the first elasticity preconditioner.
    Type: Grant
    Filed: September 13, 2016
    Date of Patent: February 5, 2019
    Assignee: DreamWorks Animation L.L.C.
    Inventors: Galen G. Gornowicz, Silviu Borac
  • Patent number: 10192357
    Abstract: A graphics processing apparatus and a method of performing a graphics pipeline in the graphics processing apparatus are provided. The method of performing a graphics pipeline in a graphics processing apparatus includes binning to generate a bounding box bitstream corresponding to a drawcall requiring tessellation, and, in response to a bounding box allocated by the bounding box bitstream being identified in a current tile to be processed, rendering the current tile by performing selective tessellation on drawcalls corresponding to the identified bounding box.
    Type: Grant
    Filed: July 6, 2016
    Date of Patent: January 29, 2019
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Sangoak Woo
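The tile-based decision described above can be illustrated with a toy overlap test: tessellation work for a drawcall is performed only when its binned bounding box intersects the tile currently being rendered. The rectangle representation and names below are illustrative assumptions, not the Samsung implementation.

```python
from typing import NamedTuple

class Box(NamedTuple):
    x0: float
    y0: float
    x1: float
    y1: float

def overlaps(a: Box, b: Box) -> bool:
    """Axis-aligned overlap test between a binned bounding box and a tile."""
    return a.x0 < b.x1 and b.x0 < a.x1 and a.y0 < b.y1 and b.y0 < a.y1

def drawcalls_to_tessellate(drawcall_boxes, tile):
    """Keep only the drawcalls whose binned bounding box reaches the current
    tile; tessellation is skipped for all the others."""
    return [name for name, box in drawcall_boxes.items() if overlaps(box, tile)]

if __name__ == "__main__":
    boxes = {"teapot": Box(0, 0, 32, 32), "terrain": Box(100, 100, 400, 400)}
    print(drawcalls_to_tessellate(boxes, Box(0, 0, 64, 64)))  # ['teapot']
```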
  • Patent number: 10132633
    Abstract: The technology causes disappearance of a real object in a field of view of a see-through, mixed reality display device system based on user disappearance criteria. Image data is tracked to the real object in the field of view of the see-through display for implementing an alteration technique on the real object causing its disappearance from the display. A real object may satisfy user disappearance criteria by being associated with subject matter that the user does not wish to see or by not satisfying relevance criteria for a current subject matter of interest to the user. In some embodiments, based on a 3D model of a location of the display device system, an alteration technique may be selected for a real object based on a visibility level associated with the position within the location. Image data for alteration may be prefetched based on a location of the display device system.
    Type: Grant
    Filed: December 29, 2015
    Date of Patent: November 20, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: James C. Liu, Stephen G. Latta, Benjamin I. Vaught, Christopher M. Novak, Darren Bennett
  • Patent number: 10134200
    Abstract: A method for CIG-mode rendering in a virtual fitting system is provided. The method comprises steps of: preparing a three-dimensional body; selecting a first garment for an outer and a second garment for an inner; assigning a plurality of inner-saver objects (ISOs); initializing all pixels of a stencil buffer for the screen; incrementing first pixels of the stencil buffer corresponding to interiors and RGB-drawing the interiors, where the interiors comprise pixels of the outer or the inner; decrementing second pixels of the stencil buffer corresponding to exteriors and RGB-drawing the exteriors; incrementing third pixels of the inner-saver objects and void-drawing the inner-saver objects; and RGB-drawing fourth pixels of the inner, where the stencil values are greater than zero.
    Type: Grant
    Filed: July 5, 2017
    Date of Patent: November 20, 2018
    Assignee: PHYSAN, INC.
    Inventors: Bong Ouk Choi, Won-Young Lee
  • Patent number: 10134183
    Abstract: Systems and methods for displaying labels in conjunction with geographic imagery provided, for instance, by a geographic information system, such as a mapping service or a virtual globe application, are provided. Candidate positions for displaying labels in conjunction with geographic imagery can be determined based at least in part on a virtual camera viewpoint. The candidate positions can be associated with non-occluded points on three-dimensional models corresponding to the labels. Adjusted positions for labels can be determined from the plurality of candidate positions. The labels can be provided for display in conjunction with the geographic imagery at the adjusted positions.
    Type: Grant
    Filed: October 2, 2017
    Date of Patent: November 20, 2018
    Assignee: Google LLC
    Inventor: Jonah Jones
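A toy version of the candidate-selection step: among candidate label positions tied to non-occluded points, pick the one nearest to the label's preferred anchor as the adjusted position. The occlusion test is stubbed out as a caller-provided predicate; everything here is an illustrative assumption, not the Google implementation.

```python
import math

def adjust_label_position(candidates, preferred, is_occluded):
    """Return the nearest non-occluded candidate position to the preferred
    anchor, or None when every candidate is hidden from the virtual camera.

    candidates: iterable of (x, y) screen positions.
    is_occluded: callable deciding visibility for a candidate position.
    """
    visible = [c for c in candidates if not is_occluded(c)]
    if not visible:
        return None
    return min(visible, key=lambda c: math.dist(c, preferred))

if __name__ == "__main__":
    cands = [(10, 10), (12, 40), (30, 12)]
    print(adjust_label_position(cands, preferred=(11, 11),
                                is_occluded=lambda c: c == (10, 10)))
```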
  • Patent number: 10121265
    Abstract: An image processing device includes a calculation unit which calculates a luminosity of an environmental light of an image for each region based on an overall average value which is an average value of the luminosity of all pixels forming the image of an object and region average values which are the average values of the luminosity of the pixels for each region that is obtained by dividing the image, and a generation unit which generates a virtual captured image which is an image of the object when the object is illuminated with light from a predetermined position based on the luminosity of the environmental light for each of the regions of the image that is calculated by the calculation unit and the image.
    Type: Grant
    Filed: March 9, 2015
    Date of Patent: November 6, 2018
    Assignee: SONY CORPORATION
    Inventor: Kazunori Kamio
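A small NumPy sketch of the per-region statistics this abstract relies on: the image is divided into a grid of regions, and each region's environmental-light luminosity is related to the overall average of the image (here simply as their ratio, which is an assumption; the patent does not spell out this formula).

```python
import numpy as np

def region_light_estimate(luminosity, grid=(4, 4)):
    """Split a 2D luminosity image into grid regions and relate each region's
    average luminosity to the overall average of the image."""
    h, w = luminosity.shape
    rows, cols = grid
    overall = luminosity.mean()
    estimate = np.empty(grid)
    for r in range(rows):
        for c in range(cols):
            region = luminosity[r * h // rows:(r + 1) * h // rows,
                                c * w // cols:(c + 1) * w // cols]
            # Relative brightness of this region versus the whole image.
            estimate[r, c] = region.mean() / overall
    return overall, estimate

if __name__ == "__main__":
    img = np.linspace(0.2, 1.0, 64 * 64).reshape(64, 64)
    overall, per_region = region_light_estimate(img)
    print(round(overall, 3))
    print(np.round(per_region, 2))
```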
  • Patent number: 10110847
    Abstract: A program image creation method that allows two-way communication for communication in a format of questions and answers. The method includes: a description image processing step of setting a description image based on image selection information; a voice processing step of synchronizing a voice from voice input information with the description image; an avatar processing step of combining an avatar that is set based on avatar selection information with the description image; a decoration processing step of combining a decoration material that is set based on decoration selection information with the description image; and an interactive processing step of setting a hyperlink based on interactive selection information.
    Type: Grant
    Filed: August 7, 2013
    Date of Patent: October 23, 2018
    Assignees: 4COLORS INC., ROKURO KAYAMA
    Inventor: Rokuro Kayama
  • Patent number: 10102660
    Abstract: A method, system and computer-program product for real-time virtual 3D reconstruction of a live scene in an animation system. The method comprises receiving 3D positional tracking data for a detected live scene by the processor, determining an event by analyzing the 3D positional tracking data by the processor, comprising steps of determining event characteristics from the 3D positional tracking data, receiving pre-defined event characteristics, determining an event probability by comparing the event characteristics to the pre-defined event characteristics, and selecting an event assigned to the event probability, determining a 3D animation data set from a plurality of 3D animation data sets assigned to the selected event and stored in the database by the processor, and providing the 3D animation data set to the output device.
    Type: Grant
    Filed: January 19, 2018
    Date of Patent: October 16, 2018
    Assignee: Virtually Live (Switzerland) GMBH
    Inventors: Karl-Heinz Hugel, Florian Struck
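The event-selection step described above amounts to scoring observed characteristics against pre-defined event templates and picking the best match. A minimal sketch with a made-up similarity measure and feature names; the actual probability model of the patent is not reproduced here.

```python
def detect_event(observed, templates):
    """Pick the template event whose characteristics best match the observed
    ones, returning (event_name, probability-like score in [0, 1]).

    observed / template values are dicts of numeric characteristics
    (e.g. speed, distance_to_goal) derived from 3D tracking data.
    """
    def similarity(a, b):
        # Inverse of the mean absolute difference over shared characteristics.
        keys = set(a) & set(b)
        if not keys:
            return 0.0
        diff = sum(abs(a[k] - b[k]) for k in keys) / len(keys)
        return 1.0 / (1.0 + diff)

    best = max(templates, key=lambda name: similarity(observed, templates[name]))
    return best, similarity(observed, templates[best])

if __name__ == "__main__":
    templates = {"shot_on_goal": {"speed": 25.0, "distance_to_goal": 10.0},
                 "pass": {"speed": 12.0, "distance_to_goal": 30.0}}
    print(detect_event({"speed": 23.0, "distance_to_goal": 12.0}, templates))
```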