Patents Examined by Kimbinh T. Nguyen
-
Patent number: 10621775
Abstract: 3-D rendering systems include a rasterization section that can fetch untransformed geometry, transform geometry, and cache data for transformed geometry in a memory. As an example, the rasterization section can transform the geometry into screen space. The geometry can include one or more of static geometry and dynamic geometry. The rasterization section can query the cache for the presence of data pertaining to a specific element or elements of geometry, use that data from the cache if present, and otherwise perform the transformation again, for actions such as hidden surface removal. The rasterization section can receive, from a geometry processing section, tiled geometry lists and perform hidden surface removal for pixels within the respective tiles to which those lists pertain.
Type: Grant
Filed: March 19, 2018
Date of Patent: April 14, 2020
Assignee: Imagination Technologies Limited
Inventor: John W. Howson
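The query-or-retransform behaviour this abstract describes can be sketched roughly as below. This is an illustrative Python sketch under assumed names (`TransformCache`, `get_or_transform`), not the patented implementation:

```python
# Hypothetical sketch of a transformed-geometry cache: query the cache for a
# geometry element's screen-space data; on a miss, redo the transformation
# and store the result for later passes such as hidden surface removal.

class TransformCache:
    def __init__(self):
        self._store = {}

    def get_or_transform(self, elem_id, vertices, transform):
        cached = self._store.get(elem_id)
        if cached is not None:
            return cached  # reuse previously transformed geometry
        # Cache miss: transform the untransformed geometry into screen space.
        screen = [transform(v) for v in vertices]
        self._store[elem_id] = screen
        return screen

# Usage: a trivial stand-in "transform" scaling model coordinates to screen space.
cache = TransformCache()
to_screen = lambda v: (v[0] * 100, v[1] * 100)
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
first = cache.get_or_transform(7, tri, to_screen)
second = cache.get_or_transform(7, tri, to_screen)  # served from the cache
```

The second call returns the cached screen-space vertices without repeating the transformation, which is the trade-off the abstract describes: memory for cached transformed geometry in exchange for avoided re-transformation work.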
-
Patent number: 10593122
Abstract: Techniques are described that enable a two-dimensional (2D) representation of three-dimensional (3D) virtual reality content to be generated and encoded. These techniques include modifying non-display pixels within the 2D representation to soften the transitions between display pixels and non-display pixels.
Type: Grant
Filed: January 24, 2017
Date of Patent: March 17, 2020
Assignee: Amazon Technologies, Inc.
Inventors: Charles Benjamin Franklin Waggoner, Yongjun Wu
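One simple way to soften such a transition is to pull each non-display pixel toward the mean of its nearby display pixels, which removes the hard edge that would otherwise be expensive to encode. The 1-D sketch below is a hypothetical illustration of the idea (the function name and radius are assumptions, not taken from the patent):

```python
# Hypothetical 1-D sketch: soften the seam between display pixels and
# non-display (padding) pixels by replacing each non-display value near the
# boundary with the mean of its display-pixel neighbours.

def soften(pixels, is_display, radius=1):
    out = list(pixels)
    for i, disp in enumerate(is_display):
        if disp:
            continue  # display pixels are left untouched
        neigh = [pixels[j]
                 for j in range(max(0, i - radius), min(len(pixels), i + radius + 1))
                 if is_display[j]]
        if neigh:
            out[i] = sum(neigh) / len(neigh)
    return out

row = [200, 210, 0, 0]               # two display pixels, two black padding pixels
mask = [True, True, False, False]
print(soften(row, mask))
```

Only the padding pixel adjacent to the display region changes; padding far from the boundary keeps its original value.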
-
Patent number: 10592067
Abstract: Embodiments herein relate to distributed interactive medical visualization systems with primary/secondary interaction features and related methods. In an embodiment, a distributed interactive medical visualization system is included. The system can include a first video processing circuit and a first central processing circuit in communication with the first video processing circuit. The system can also include a first communications circuit. The system can also include a primary user interface generated by the first video processing circuit. The primary user interface can include a three-dimensional model of at least a portion of a subject's anatomy from a first perspective, the first perspective configured to be controlled by a primary user. The primary user interface can include a command interface object, wherein engagement can cause a secondary user interface to begin mirroring the perspective of the primary user on the three-dimensional model of the subject's anatomy.
Type: Grant
Filed: August 8, 2017
Date of Patent: March 17, 2020
Assignee: Boston Scientific Scimed, Inc.
Inventors: Kenneth Matthew Merdan, David M. Flynn, Gregory Ernest Ostenson, Benjamin Bidne, Robbie Halvorson, Eric A. Ware
-
Patent number: 10586398
Abstract: The present invention relates to medical image editing. In order to facilitate the medical image editing process, a medical image editing device (50) is provided that comprises a processor unit (52), an output unit (54), and an interface unit (56). The processor unit (52) is configured to provide a 3D surface model of an anatomical structure of an object of interest. The 3D surface model comprises a plurality of surface sub-portions. The surface sub-portions each comprise a number of vertices, and each vertex is assigned a ranking value. The processor unit (52) is further configured to identify at least one of the vertices adjacent to a determined point of interest as an intended vertex. The identification is based on a function of a detected proximity distance to the point of interest and the assigned ranking value. The output unit (54) is configured to provide a visual presentation of the 3D surface model.
Type: Grant
Filed: December 7, 2015
Date of Patent: March 10, 2020
Assignee: KONINKLIJKE PHILIPS N.V.
Inventors: Fabian Wenzel, Thomas Heiko Stehle, Carsten Meyer, Lyubomir Georgiev Zagorchev, Jochen Peters, Martin Bergtholdt
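The selection rule, a function of proximity distance and ranking value, can be sketched as a weighted score over candidate vertices. The weights and function below are illustrative assumptions, not the function defined in the patent:

```python
# Hypothetical sketch of the vertex-selection rule: among candidate vertices
# near a point of interest, pick the one minimising a combined score of
# proximity distance and pre-assigned ranking value.

import math

def pick_vertex(vertices, ranks, poi, w_dist=1.0, w_rank=0.5):
    def score(i):
        d = math.dist(vertices[i], poi)        # proximity to the point of interest
        return w_dist * d + w_rank * ranks[i]  # lower ranking value = preferred
    return min(range(len(vertices)), key=score)

# Usage: the nearest vertex loses to a slightly farther one with a better rank.
verts = [(0.0, 0.0), (1.0, 0.0), (0.2, 0.1)]
ranks = [5.0, 0.0, 0.0]
print(pick_vertex(verts, ranks, (0.0, 0.0)))
```

The point of such a rule is that a vertex at zero distance can still be rejected as the "intended" vertex when its ranking value marks it as a poor editing target.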
-
Medical diagnostic apparatus, medical image processing apparatus and medical image processing method
Patent number: 10575823
Abstract: A medical diagnostic apparatus according to one embodiment comprises processing circuitry. The processing circuitry is configured to acquire a three-dimensional shape of a first part based on a contour of the first part in each of a plurality of sectional images intersecting each other along an extending direction of the first part connecting to a cardiac chamber; acquire a three-dimensional shape of a second part, based on a contour of the second part in a sectional image along an extending direction of the second part connecting to the cardiac chamber; and generate a three-dimensional image representing at least some of the first part, the second part, and the cardiac chamber using a three-dimensional shape representing the cardiac chamber and the acquired three-dimensional shapes of the first part and the second part.
Type: Grant
Filed: September 29, 2015
Date of Patent: March 3, 2020
Assignee: Canon Medical Systems Corporation
Inventors: Tomoya Okazaki, Yukinobu Sakata, Tomoyuki Takeguchi
-
Patent number: 10580167
Abstract: Techniques are described that enable a two-dimensional (2D) representation of three-dimensional (3D) virtual reality (VR) content to be encoded. These techniques include encoding VR content while excluding non-display pixels of the VR content from motion estimation during encoder processing.
Type: Grant
Filed: June 12, 2017
Date of Patent: March 3, 2020
Assignee: Amazon Technologies, Inc.
Inventors: Charles Benjamin Franklin Waggoner, Yongjun Wu
-
Patent number: 10580200
Abstract: An apparatus and method are described for performing an early depth test on graphics data. For example, one embodiment of a graphics processing apparatus comprises: early depth test circuitry to perform an early depth test on blocks of pixels to determine whether all pixels in the block of pixels can be resolved by the early depth test; a plurality of execution circuits to execute pixel shading operations on the blocks of pixels; and a scheduler circuit to schedule the blocks of pixels for the pixel shading operations, the scheduler circuit to prioritize the blocks of pixels in accordance with the determination as to whether all pixels in the block of pixels can be resolved by the early depth test.
Type: Grant
Filed: April 7, 2017
Date of Patent: March 3, 2020
Assignee: Intel Corporation
Inventors: Brent E. Insko, Prasoonkumar Surti
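The scheduling idea, prioritizing blocks whose pixels were all resolved by the early depth test, can be sketched with a priority queue. This is a hypothetical software illustration of hardware behaviour; the names and two-level priority are assumptions:

```python
# Hypothetical sketch of the scheduler: blocks in which every pixel was
# resolved by the early depth test are shaded first, since they need no
# further depth resolution after pixel shading.

import heapq

def schedule(blocks):
    # blocks: list of (block_id, fully_resolved_by_early_z)
    heap = []
    for order, (bid, resolved) in enumerate(blocks):
        priority = 0 if resolved else 1      # resolved blocks go first
        # `order` keeps submission order stable within a priority level.
        heapq.heappush(heap, (priority, order, bid))
    while heap:
        _, _, bid = heapq.heappop(heap)
        yield bid

blocks = [("A", False), ("B", True), ("C", False), ("D", True)]
print(list(schedule(blocks)))
```

Blocks "B" and "D" are emitted ahead of "A" and "C" even though they arrived later, mirroring the prioritization the abstract describes.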
-
Patent number: 10571794
Abstract: A video is superimposed on an object so that the object is perceived as if it were given a motion. The video includes a luminance motion component corresponding to the motion given to the object.
Type: Grant
Filed: April 21, 2015
Date of Patent: February 25, 2020
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Takahiro Kawabe, Shinya Nishida, Kazushi Maruya, Masataka Sawayama
-
Patent number: 10489957
Abstract: A control system provides an interface for virtual characters, or avatars, during live avatar-human interactions. A human interactor can select facial expressions, poses, and behaviors of the virtual character using an input device mapped to menus on a display device.
Type: Grant
Filed: November 7, 2016
Date of Patent: November 26, 2019
Assignee: MURSION, INC.
Inventors: Alex Zelenin, Brian D. Kelly, Arjun Nagendran
-
Patent number: 10482656
Abstract: A three-dimensional (3D) face modeling method and apparatus are disclosed. The 3D face modeling apparatus may generate a personalized 3D face model using a two-dimensional (2D) input image and a generic 3D face model, obtain a depth image and a texture image using the generated personalized 3D face model, determine a patch region of each of the depth image and the texture image, and adjust a shape of the personalized 3D face model based on a matching relationship between the patch region of the depth image and the patch region of the texture image.
Type: Grant
Filed: November 7, 2016
Date of Patent: November 19, 2019
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Seon Min Rhee, Jungbae Kim, Jaejoon Han
-
Patent number: 10482659
Abstract: Systems, methods, and other embodiments are disclosed that augment a visually displayed portion of a facility with superimposed virtual elements. In one embodiment, mobile position data is generated based on non-optical sensor readings taken by a mobile computing device. The mobile position data represents a location and an orientation of the mobile computing device in a three-dimensional (3D) space of the facility. Projection parameters are generated based on the mobile position data and are applied to modeled facility data to generate rendered image data. The modeled facility data represents hidden and unhidden elements of the facility within the 3D space. The rendered image data is superimposed on live real-time image data acquired by the mobile computing device and displayed by the mobile computing device. The projection parameters promote spatial alignment of the rendered image data with the live image data with respect to positions within the 3D space.
Type: Grant
Filed: July 15, 2015
Date of Patent: November 19, 2019
Assignee: ORACLE INTERNATIONAL CORPORATION
Inventors: Xin Li, John R. Punin, Rashmi Raja
-
Patent number: 10475392
Abstract: The present invention concerns a method of relighting a media item comprising media elements.
Type: Grant
Filed: March 7, 2016
Date of Patent: November 12, 2019
Assignee: ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE (EPFL)
Inventors: Niranjan Thanikachalam, Loic Arnaud Baboulaz, Damien Firmenich, Sabine Suesstrunk, Martin Vetterli
-
Patent number: 10475240
Abstract: A system and method for displaying three-dimensional robotic workcell data include generating the robotic workcell data, receiving the robotic workcell data in a standard format on a display device that includes a web browser, displaying a three-dimensional rendering of the robotic workcell data, and manipulating the three-dimensional rendering on the display device.
Type: Grant
Filed: October 31, 2011
Date of Patent: November 12, 2019
Assignee: FANUC ROBOTICS AMERICA CORPORATION
Inventors: Judy A. Evans, Kenneth W. Krause
-
Patent number: 10475225
Abstract: Avatar animation systems disclosed herein provide high-quality, real-time avatar animation that is based on the varying countenance of a human face. In some example embodiments, the real-time provision of high-quality avatar animation is enabled, at least in part, by a multi-frame regressor that is configured to map information descriptive of facial expressions depicted in two or more images to information descriptive of a single avatar blend shape. The two or more images may be temporally sequential images. This multi-frame regressor implements a machine learning component that generates the high-quality avatar animation from information descriptive of a subject's face and/or information descriptive of avatar animation frames previously generated by the multi-frame regressor.
Type: Grant
Filed: December 18, 2015
Date of Patent: November 12, 2019
Assignee: Intel Corporation
Inventors: Minje Park, Tae-Hoon Kim, Myung-Ho Ju, Jihyeon Yi, Xiaolu Shen, Lidan Zhang, Qiang Li
-
Patent number: 10467791
Abstract: The present disclosure relates to a motion edit method for editing the motion of an articulated object using a computing device with a touch screen, including a) when any one of a joint path and a body line of an articulated object is selected by a user, setting a position constraint of a higher-level joint according to the user's sketching input and the joint hierarchy, and b) generating motion of the articulated object for which the constraint is set.
Type: Grant
Filed: December 12, 2017
Date of Patent: November 5, 2019
Assignee: KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY
Inventors: Junyong Noh, Byungkuk Choi, Roger Blanco i Ribera, Seokpyo Hong, Haegwang Eom, Sunjin Jung, J. P. Lewis, Yeongho Seol
-
Patent number: 10466480
Abstract: Images perceived to be substantially full color or multi-colored may be formed using component color images that are distributed in unequal numbers across a plurality of depth planes. The distribution of component color images across the depth planes may vary based on color. In some embodiments, a display system includes a stack of waveguides that each output light of a particular color, with some colors having fewer numbers of associated waveguides than other colors. The stack of waveguides may include multiple pluralities (e.g., first and second pluralities) of waveguides, each configured to produce an image by outputting light corresponding to a particular color. The total number of waveguides in the second plurality of waveguides is less than the total number of waveguides in the first plurality of waveguides, and may be more than the total number of waveguides in a third plurality of waveguides in embodiments where three component colors are utilized.
Type: Grant
Filed: January 5, 2017
Date of Patent: November 5, 2019
Assignee: Magic Leap, Inc.
Inventors: Brian T. Schowengerdt, Hong Hua, Hui-Chuan Cheng, Christophe Peroz
-
Patent number: 10453252
Abstract: Features of the surface of an object of interest captured in a two-dimensional (2D) image are identified and marked for use in point matching to align multiple 2D images and generate a point cloud representative of the surface of the object in a photogrammetry process. The features, which represent actual surface features of the object, may have their local contrast enhanced to facilitate their identification. Reflections on the surface of the object are suppressed by correlating such reflections with, e.g., light sources not associated with the object of interest, so that during photogrammetry such reflections can be ignored, resulting in the creation of a 3D model that is an accurate representation of the object of interest. Prior to local contrast enhancement and the suppression of reflection information, identification and isolation of the object of interest can be improved through one or more filtering processes.
Type: Grant
Filed: May 8, 2017
Date of Patent: October 22, 2019
Assignee: Disney Enterprises, Inc.
Inventors: Steven M. Chapman, Mehul Patel
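Local contrast enhancement of this kind can be illustrated by stretching each pixel away from the mean of its local window, which amplifies small surface features that point matching relies on. The 1-D sketch below is a generic unsharp-style illustration under assumed names and gain, not the specific enhancement used in the patent:

```python
# Hypothetical sketch of local contrast enhancement: push each pixel away
# from its local-window mean so that subtle surface features stand out
# for photogrammetric point matching.

def enhance_local_contrast(pixels, radius=1, gain=2.0):
    n = len(pixels)
    out = []
    for i in range(n):
        window = pixels[max(0, i - radius):min(n, i + radius + 1)]
        local_mean = sum(window) / len(window)
        # Amplify the pixel's deviation from its neighbourhood.
        out.append(local_mean + gain * (pixels[i] - local_mean))
    return out

row = [100, 102, 100, 140]  # a faint feature next to a strong one
print(enhance_local_contrast(row))
```

Pixels close to their local mean barely change, while pixels that deviate from their neighbourhood deviate further after enhancement.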
-
Patent number: 10430961
Abstract: Methods and apparatus are disclosed for enhancing an urban surface model with image data obtained from a satellite image. Three-dimensional models of an urban cityscape obtained from digital surface models may comprise surface location information but lack image information associated with the cityscape, such as the color and texture of building facades. The location of the satellite at the time of recording the satellite image of interest may be obtained from metadata associated with the satellite image. A 3D model of a cityscape corresponding to the satellite image may be subjected to a transformation operation to determine portions of the 3D model that are viewable from a location corresponding to the location of the satellite when taking the picture. Visible building facades of the 3D model may be identified and mapped to portions of the satellite image, which may then be used in rendering 2D images from the 3D model.
Type: Grant
Filed: December 16, 2016
Date of Patent: October 1, 2019
Assignee: ObjectVideo Labs, LLC
Inventors: Gang Qian, Yunxian Zhou, David Conger, Allison Beach
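A basic form of the visibility step is a back-face test: a facade can only be viewable from the satellite if its outward normal has a positive dot product with the direction from the facade to the satellite. The sketch below illustrates only that test (names are hypothetical; occlusion between buildings and the projection/texturing steps are omitted):

```python
# Hypothetical sketch of the facade-visibility test: a facade of the 3D
# model faces the satellite when the dot product of its outward normal
# with the facade-to-satellite direction is positive.

def visible_facades(facades, sat_pos):
    # facades: list of (facade_id, centre_point, outward_normal)
    out = []
    for fid, centre, normal in facades:
        to_sat = tuple(s - c for s, c in zip(sat_pos, centre))
        dot = sum(n * t for n, t in zip(normal, to_sat))
        if dot > 0:
            out.append(fid)  # facade faces the satellite
    return out

facades = [
    ("north", (0, 0, 5), (0, 1, 0)),   # outward normal points toward +y
    ("south", (0, 0, 5), (0, -1, 0)),  # outward normal points toward -y
]
print(visible_facades(facades, (0, 100, 500)))
```

With the satellite on the +y side, only the north-facing facade passes the test; in a full pipeline this candidate set would still need occlusion testing before texture mapping.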
-
Patent number: 10417327
Abstract: Methods and devices for rendering interactive three-dimensional (3D) fonts may include receiving, at a text platform component executing on a computing device, a request from an application to render text. The methods and devices may include parsing the text to identify at least one glyph in the text. The methods and devices may include accessing a font file that includes a 3D glyph description associated with the at least one glyph and an interaction policy associated with the 3D glyph that determines when and how to animate the 3D glyph. The methods and devices may include rendering at least one 3D glyph based on the 3D glyph description and the interaction policy. The methods and devices may include transmitting at least one rendered 3D glyph.
Type: Grant
Filed: April 28, 2017
Date of Patent: September 17, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Worachai Chaoweeraprasit, Richard Kirkpatrick Manning, Simon Young Tao
-
Patent number: 10402940
Abstract: A method for processing images in an imaging device includes the steps of using real time scalar software (RTSS) for: receiving scalar input data (SID) from video preview application software (VPAS) within a host computer; and performing scaling and cropping operations within the imaging device on raw image frame data to create a scaled down frame (SDF) within the imaging device. As a result, images of high resolution can be transmitted efficiently, with significantly reduced amounts of data over the data links, and achieve a high number of frames per second.
Type: Grant
Filed: October 30, 2013
Date of Patent: September 3, 2019
Assignee: Pathway Innovations and Technologies, Inc.
Inventors: Ji Shen, Bruce Barnes, Hamid Kharrati
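The crop-then-downscale operation on a raw frame can be sketched as below. This is a generic, dependency-free illustration using nearest-neighbour sampling; the function name, crop region, and output size are assumptions, not details from the patent:

```python
# Hypothetical sketch of in-device frame reduction: crop a region of
# interest from the raw frame, then downscale it (nearest-neighbour) to a
# scaled down frame (SDF) before transmission to the host.

def crop_and_scale(frame, crop, out_w, out_h):
    # frame: 2-D list of pixel values; crop: (x, y, w, h) region of interest
    x, y, w, h = crop
    region = [row[x:x + w] for row in frame[y:y + h]]
    # Nearest-neighbour downscale of the cropped region to out_w x out_h.
    return [[region[r * h // out_h][c * w // out_w] for c in range(out_w)]
            for r in range(out_h)]

# Usage: an 8x8 synthetic frame reduced to a 2x2 scaled down frame.
frame = [[10 * r + c for c in range(8)] for r in range(8)]
sdf = crop_and_scale(frame, (0, 0, 8, 8), 2, 2)
print(sdf)
```

Reducing an 8x8 frame to 2x2 cuts the pixel count sixteen-fold, which is the point of doing the scaling inside the imaging device: far less data crosses the link per frame, so more frames per second fit through it.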