Patents Examined by Kimbinh T. Nguyen
- Patent number: 10230905
  Abstract: A system includes a video device for capturing, at a viewing time, a first video image corresponding to a foundation scene at a setting, the foundation scene viewed at the viewing time from a vantage position. A memory stores a library of image data including media generated at a time prior to the viewing time. A vantage position monitor tracks the vantage position and generates vantage position data of a human viewer. A digital video data controller selects from the image data in the library, at the viewing time and based on the vantage position data, a plurality of second images corresponding to a modifying scene at the setting, the modifying scene further corresponding to the vantage position. A combiner combines the first video image and the plurality of second images to create a composite image for display.
  Type: Grant
  Filed: March 24, 2016
  Date of Patent: March 12, 2019
  Assignee: Passewall Research LLC
  Inventor: Stuart Wilkinson
- Patent number: 10230939
  Abstract: A system, method and software for producing 3D effects in a video of a physical scene. The 3D effects can be observed when the video is viewed, either during a live stream or later when viewing the recorded video. A reference plane is defined. The reference plane has peripheral boundaries. A live event is viewed with stereoscopic video cameras. The stereoscopic camera viewpoints are calculated that enable the event to be recorded within the peripheral boundaries of the reference plane. The footage from the stereoscopic video cameras is digitally altered prior to being imaged. The altering of the footage includes bending, tapering, stretching and/or tilting a portion of the footage in real time. Once the footage is altered, a common set of boundaries is set for superimposed footage to create a final video production.
  Type: Grant
  Filed: August 1, 2017
  Date of Patent: March 12, 2019
  Assignee: Maxx Media Group, LLC
  Inventors: Richard S. Freeman, Scott A. Hollinger
- Patent number: 10220172
  Abstract: Systems and methods permit generation of a digital scan of a user's face, such as for obtaining a patient respiratory mask, or component(s) thereof, based on the digital scan. The method may include: receiving video data comprising a plurality of video frames of the user's face taken from a plurality of angles relative to the user's face, generating a three-dimensional representation of a surface of the user's face based on the plurality of video frames, receiving scale estimation data associated with the received video data, the scale estimation data indicative of a relative size of the user's face, and scaling the digital three-dimensional representation of the user's face based on the scale estimation data. In some aspects, the scale estimation data may be derived from motion information collected by the same device that collects the scan of the user's face.
  Type: Grant
  Filed: November 7, 2016
  Date of Patent: March 5, 2019
  Assignee: ResMed Limited
  Inventors: Simon Michael Lucey, Priyanshu Gupta, Benjamin Peter Johnston, Tzu-Chin Yu
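The scaling step in this abstract lends itself to a tiny illustration. The sketch below is not taken from the patent; the `scale_face_model` function, the use of a reference measurement such as interpupillary distance, and all numbers are assumptions standing in for whatever scale estimation data the claimed method actually uses.

```python
import numpy as np

def scale_face_model(vertices, model_reference, estimated_reference_mm):
    """Uniformly scale an unscaled 3D face reconstruction into real-world units.

    vertices: (N, 3) array of reconstructed surface points in arbitrary units.
    model_reference: length of some reference feature measured in the model.
    estimated_reference_mm: estimated physical length of that same feature,
        standing in for the scale estimation data described in the abstract.
    """
    scale = estimated_reference_mm / model_reference
    centroid = vertices.mean(axis=0)
    # Scale about the centroid so the model keeps its position.
    return (vertices - centroid) * scale + centroid

# Toy example: a random cloud whose reference feature measures 0.5 model units
# and is estimated at 63 mm (a typical interpupillary distance).
cloud = np.random.rand(1000, 3)
scaled = scale_face_model(cloud, model_reference=0.5, estimated_reference_mm=63.0)
print(scaled.shape)
```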
- Patent number: 10223605
  Abstract: An interactive virtual aquarium simulation system includes a two-dimensional (2D) fish image having a unique identifier associated therewith, with the unique identifier corresponding to predefined fish movements. A scanner scans the 2D fish image and converts it to a digital image. A three-dimensional (3D) mapping processor is coupled to the scanner to generate a 3D fish image based on the digital image. A virtual simulation processor is coupled to the 3D mapping processor to generate simulation video of a virtual aquarium including a plurality of fish and the 3D fish image. The 3D fish image swims within the virtual aquarium based on the predefined fish movements. The simulation video of the virtual aquarium with the 3D fish image is provided to a display.
  Type: Grant
  Filed: March 18, 2016
  Date of Patent: March 5, 2019
  Assignee: COLORVISION INTERNATIONAL, INC.
  Inventors: Henry Tyson, Bryan Wilkins, James William Guy, Mark Simmons
- Patent number: 10216355
  Abstract: Example systems and methods for virtual visualization of a three-dimensional (3D) model of an object in a two-dimensional (2D) environment. The method may include projecting a ray from a user device to a ground plane and determining an angle at which the projected ray touches the ground plane. The method further helps determine a level for the ground plane for positioning the 3D model of the object in the 2D environment.
  Type: Grant
  Filed: May 12, 2015
  Date of Patent: February 26, 2019
  Assignee: Atheer, Inc.
  Inventor: Milos Jovanovic
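As a rough sketch of the geometry described here (a ray projected from a user device to a ground plane, and the angle at which it meets that plane), the snippet below intersects a ray with a horizontal plane. The function name, the y-up convention, and the plane at y = 0 are illustrative assumptions, not details from the patent.

```python
import numpy as np

def ray_ground_intersection(origin, direction, ground_y=0.0):
    """Intersect a ray from the device with a horizontal ground plane (y = ground_y).

    Returns (hit_point, angle_deg), where angle_deg is the angle between the
    ray and the plane, or None if the ray never reaches the plane.
    """
    direction = direction / np.linalg.norm(direction)
    if abs(direction[1]) < 1e-9:             # ray is parallel to the ground plane
        return None
    t = (ground_y - origin[1]) / direction[1]
    if t <= 0:                                # plane is behind the device
        return None
    hit = origin + t * direction
    # Angle to the plane = 90 degrees minus the angle to the plane normal.
    plane_normal = np.array([0.0, 1.0, 0.0])
    angle_to_normal = np.degrees(np.arccos(abs(direction @ plane_normal)))
    return hit, 90.0 - angle_to_normal

# Device 1.5 m above the ground, looking forward and slightly down.
print(ray_ground_intersection(np.array([0.0, 1.5, 0.0]), np.array([0.0, -0.5, -1.0])))
```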
- Patent number: 10217267
  Abstract: Systems and methods for producing an acceleration structure provide for subdividing a 3-D scene into a plurality of volumetric portions, which have different sizes, each being addressable using a multipart address indicating a location and a relative size of each volumetric portion. A stream of primitives is processed by characterizing each according to one or more criteria, selecting a relative size of volumetric portions for use in bounding the primitive, and finding a set of volumetric portions of that relative size which bound the primitive. A primitive ID is stored in each location of a cache associated with each volumetric portion of the set of volumetric portions. A cache location is selected for eviction, responsive to each cache eviction decision made during the processing. Responsive to the eviction, an element of an acceleration structure is generated according to the contents of the evicted cache location.
  Type: Grant
  Filed: August 30, 2016
  Date of Patent: February 26, 2019
  Assignee: Imagination Technologies Limited
  Inventors: James A McCombe, Aaron Dwyer, Luke T Peterson, Neils Nesse
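A heavily simplified sketch of the build loop this abstract outlines: each primitive is characterized by its bounding-box size, a relative cell size (level) is selected, the cells of that size bounding it receive its primitive ID in a small cache, and evicted cache lines become acceleration-structure elements. The grid-based addressing, the LRU eviction policy, and every constant below are assumptions for illustration only.

```python
import math
from collections import OrderedDict

WORLD_SIZE = 1024.0        # assumed cubic scene extent
MAX_LEVEL = 6              # finest subdivision level (2**6 cells per axis)
CACHE_CAPACITY = 2         # deliberately tiny so an eviction happens in the demo

acceleration_elements = []  # elements emitted whenever a cache line is evicted
cache = OrderedDict()       # multipart address (level, ix, iy, iz) -> set of primitive IDs

def choose_level(aabb_min, aabb_max):
    """Characterize the primitive by its extent and pick a matching cell size."""
    extent = max(hi - lo for lo, hi in zip(aabb_min, aabb_max))
    level = int(math.log2(WORLD_SIZE / max(extent, 1e-6)))
    return max(0, min(MAX_LEVEL, level))

def cells_for_aabb(aabb_min, aabb_max, level):
    """Multipart addresses (relative size plus location) of the cells bounding the AABB."""
    cell = WORLD_SIZE / (1 << level)
    lo = [int(v // cell) for v in aabb_min]
    hi = [int(v // cell) for v in aabb_max]
    for ix in range(lo[0], hi[0] + 1):
        for iy in range(lo[1], hi[1] + 1):
            for iz in range(lo[2], hi[2] + 1):
                yield (level, ix, iy, iz)

def insert_primitive(prim_id, aabb_min, aabb_max):
    level = choose_level(aabb_min, aabb_max)
    for address in cells_for_aabb(aabb_min, aabb_max, level):
        cache.setdefault(address, set()).add(prim_id)
        cache.move_to_end(address)
        while len(cache) > CACHE_CAPACITY:
            # Evict the least-recently-used cell and turn it into a structure element.
            evicted_address, evicted_ids = cache.popitem(last=False)
            acceleration_elements.append((evicted_address, sorted(evicted_ids)))

insert_primitive(0, (0, 0, 0), (500, 500, 500))        # large primitive -> coarse cell
insert_primitive(1, (10, 10, 10), (12, 12, 12))        # small primitive -> fine cell
insert_primitive(2, (900, 900, 900), (903, 903, 903))  # forces the first eviction
print(acceleration_elements)
print(dict(cache))
```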
- Patent number: 10217292
  Abstract: According to various embodiments, devices, methods, and computer-readable media for reconstructing a 3D scene are described. A server device, sensor devices, and client devices may interoperate to reconstruct a 3D scene sensed by the sensor devices. The server device may generate one or more models for objects in the scene, including the identification of dynamic and/or static objects. The sensor devices may provide model data updates based on these generated models, such that only delta changes in the scene may be provided, in addition to raw sensor data. Models may utilize semantic knowledge, such as knowledge of the venue or identity of one or more persons in the scene, to further facilitate model generation and updating. Other embodiments may be described and/or claimed.
  Type: Grant
  Filed: April 1, 2016
  Date of Patent: February 26, 2019
  Assignee: Intel Corporation
  Inventors: Ignacio J. Alvarez, Ranganath Krishnan
- Patent number: 10192353
  Abstract: A machine can be specially configured to generate, compress, decompress, store, access, communicate, or otherwise process a special data structure that represents a three-dimensional surface of an object. The data structure can be or include a pruned sparse voxel octree in which each node in the octree corresponds to a different block of the octree, and children of the node in the octree correspond to the smaller blocks that subdivide the block. Moreover, each block occupied by the surface or a portion thereof can define its enclosed surface or portion thereof explicitly or implicitly.
  Type: Grant
  Filed: October 17, 2017
  Date of Patent: January 29, 2019
  Assignee: 8i Limited
  Inventors: Philip A. Chou, Maja Krivokuca, Robert James William Higgs, Charles Loop, Eugene Joseph d'Eon
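For a sense of the data structure, here is a minimal sparse voxel octree builder over a point cloud: only occupied octants get child nodes, so each node corresponds to a block and its children to the smaller blocks that subdivide it. The pruning step and the explicit/implicit surface representation mentioned in the abstract are omitted, and the class and function names are invented for this sketch.

```python
import numpy as np

class OctreeNode:
    """One block of the octree; children subdivide the block into eight octants."""
    __slots__ = ("children",)
    def __init__(self):
        self.children = {}            # octant index (0..7) -> OctreeNode

def build_svo(points, origin, size, depth):
    """Build a sparse voxel octree over `points` inside the cube (origin, size).

    Only octants actually occupied by surface points get child nodes, which is
    what keeps the structure sparse.
    """
    node = OctreeNode()
    if depth == 0 or len(points) == 0:
        return node
    half = size / 2.0
    centre = origin + half
    # Octant index packs the three "above centre?" bits into a value 0..7.
    octant = ((points[:, 0] >= centre[0]).astype(int)
              | ((points[:, 1] >= centre[1]).astype(int) << 1)
              | ((points[:, 2] >= centre[2]).astype(int) << 2))
    for idx in np.unique(octant):
        child_origin = origin + half * np.array([idx & 1, (idx >> 1) & 1, (idx >> 2) & 1])
        node.children[int(idx)] = build_svo(points[octant == idx], child_origin, half, depth - 1)
    return node

def count_nodes(node):
    return 1 + sum(count_nodes(c) for c in node.children.values())

# A thin spherical shell as a stand-in for a scanned surface.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(5000, 3))
pts = 0.5 + 0.45 * dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
root = build_svo(pts, origin=np.array([0.0, 0.0, 0.0]), size=1.0, depth=5)
print("nodes:", count_nodes(root))    # far fewer than a dense grid at depth 5
```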
- Patent number: 10182220
  Abstract: A modeled object distribution management system includes a creator storage unit, a first display controller and a second display controller. The creator storage unit stores modeled object-related information including information indicating stereoscopic images of created modeled objects. The first display controller displays the stereoscopic images to allow a client to browse the stereoscopic images, by using the information indicating the stereoscopic images which is stored in the creator storage unit. The second display controller displays a modeling plan to allow the client to browse the modeling plan. The modeling plan includes a modeling method and a material which are used to model a modeled object corresponding to a stereoscopic image selected by the client from the stereoscopic images displayed in the first display controller.
  Type: Grant
  Filed: November 30, 2016
  Date of Patent: January 15, 2019
  Assignee: FUJI XEROX CO., LTD.
  Inventor: Kazunori Onishi
- Patent number: 10163263
  Abstract: The technology uses image content to facilitate navigation in panoramic image data. Aspects include providing a first image including a plurality of avatars, in which each avatar corresponds to an object within the first image, and determining an orientation of at least one of the plurality of avatars to a point of interest within the first image. A viewport is determined for a first avatar in accordance with the orientation thereof relative to the point of interest, which is included within the first avatar's viewport. In response to received user input, a second image is selected that includes at least a second avatar and the point of interest from the first image. A viewport of the second avatar in the second image is determined and the second image is oriented to align the second avatar's viewpoint with the point of interest to provide navigation between the first and second images.
  Type: Grant
  Filed: March 16, 2017
  Date of Patent: December 25, 2018
  Assignee: Google LLC
  Inventors: Jiajun Zhu, Daniel Joseph Filip, Luc Vincent
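The orientation step, aligning an avatar's viewport with a point of interest, can be illustrated with a little heading arithmetic. The snippet below is an assumption-laden sketch (2D positions, compass-style headings measured clockwise from north), not the patent's method.

```python
import math

def heading_to_poi(avatar_xy, poi_xy):
    """Compass-style heading (degrees, 0 = north, clockwise) from an avatar to a POI."""
    dx = poi_xy[0] - avatar_xy[0]
    dy = poi_xy[1] - avatar_xy[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0

def orient_viewport(avatar_xy, avatar_heading_deg, poi_xy):
    """Yaw offset to apply to the avatar's panorama so the POI is centred in the viewport."""
    target = heading_to_poi(avatar_xy, poi_xy)
    # Smallest signed rotation from the current heading to the target heading.
    return (target - avatar_heading_deg + 180.0) % 360.0 - 180.0

# Second avatar south-west of the POI, currently facing due north: rotate 45 degrees.
print(orient_viewport(avatar_xy=(0.0, 0.0), avatar_heading_deg=0.0, poi_xy=(10.0, 10.0)))
```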
- Patent number: 10147226
  Abstract: A method of converting three dimensional image data into two dimensional image data includes identifying at least two vertices of an object to be rendered in a frame of three dimensional image data, calculating a three-dimensional (3D) motion vector for each vertex of the object to be rendered, determining a position of each vertex in a new frame, calculating the motion vectors for a block based upon the vertex position in the new frame and the motion vectors for the vertex, and using the motion vectors for the vertex to render pixels in the new frame.
  Type: Grant
  Filed: March 8, 2016
  Date of Patent: December 4, 2018
  Assignee: Pixelworks, Inc.
  Inventors: Songsong Chen, Bob Zhang, Neil Woodall
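A minimal sketch of the motion-vector idea: project each vertex in the old and new frames, take the screen-space difference as the vertex motion vector, and derive a block motion vector from the vertices that land in the block. The pinhole projection constants and the averaging rule are illustrative assumptions; the abstract does not specify how block vectors are computed from vertex vectors.

```python
import numpy as np

FOCAL = 500.0                      # assumed pinhole focal length
CENTER = np.array([320.0, 240.0])  # assumed principal point
BLOCK = 16                         # motion-vector block size in pixels

def project(v):
    """Pinhole projection of a camera-space point (positive z is in front of the camera here)."""
    return FOCAL * v[:2] / max(v[2], 1e-6) + CENTER

def block_motion_vectors(verts_old, verts_new, num_blocks_x, num_blocks_y):
    """Average the 2D motion of the vertices that land in each block of the new frame."""
    sums = np.zeros((num_blocks_y, num_blocks_x, 2))
    counts = np.zeros((num_blocks_y, num_blocks_x))
    for v_old, v_new in zip(verts_old, verts_new):
        p_old, p_new = project(v_old), project(v_new)
        mv = p_new - p_old                       # per-vertex screen-space motion vector
        bx, by = int(p_new[0] // BLOCK), int(p_new[1] // BLOCK)
        if 0 <= bx < num_blocks_x and 0 <= by < num_blocks_y:
            sums[by, bx] += mv
            counts[by, bx] += 1
    counts[counts == 0] = 1                      # leave empty blocks at zero motion
    return sums / counts[..., None]

# Two vertices of an object translating +0.1 units in x between frames.
old = np.array([[0.0, 0.0, 5.0], [0.2, 0.1, 5.0]])
new = old + np.array([0.1, 0.0, 0.0])
mvs = block_motion_vectors(old, new, num_blocks_x=40, num_blocks_y=30)
print(mvs[mvs.any(axis=-1)])                     # blocks that received vertex motion
```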
- Patent number: 10134179
  Abstract: Embodiments include systems and methods for synthesizing, recording, performing and playing back visual music for virtual immersive video playback environments. Virtual music (VM) compositions can include 3D VM instruments, which can be controlled in a highly dynamic and expressive manner using human interface controllers. Some embodiments include novel techniques for controlling, synthesizing, and rendering VM instruments having complex particle system architectures. Some embodiments further provide VM compositional techniques built on four-dimensional modeling constructs that include path-adaptive coordinate systems that define a compositional space-time for modeling, and path-anchored object locators that place objects in the compositional space-time.
  Type: Grant
  Filed: September 30, 2016
  Date of Patent: November 20, 2018
  Assignee: Visual Music Systems, Inc.
  Inventors: William B. Sebastian, Nathaniel Resnikoff
- Patent number: 10134178
  Abstract: Embodiments include systems and methods for synthesizing, recording, performing and playing back visual music for virtual immersive video playback environments. Virtual music (VM) compositions can include 3D VM instruments, which can be controlled in a highly dynamic and expressive manner using human interface controllers. Some embodiments include novel techniques for controlling, synthesizing, and rendering VM instruments having complex particle system architectures. Some embodiments further provide VM compositional techniques built on four-dimensional modeling constructs that include path-adaptive coordinate systems that define a compositional space-time for modeling, and path-anchored object locators that place objects in the compositional space-time.
  Type: Grant
  Filed: September 30, 2016
  Date of Patent: November 20, 2018
  Assignee: Visual Music Systems, Inc.
  Inventors: William B. Sebastian, Robert Eastwood
- Patent number: 10127720
  Abstract: Embodiments of the invention include a method of inserting a new face in a polygonal mesh, comprising receiving an input corresponding to: a polygonal mesh having a plurality of faces, a selection of a face (fm) of the plurality of faces, a direction vector (d), a modified target plane (pm), and a threshold angle. For each edge (e) of the selected face fm, the method further includes determining each adjacent face (fadj) to the selected face fm, and inserting a new face at edge e if no adjacent face exists or if fadj is substantially parallel to pm and within the threshold angle. In some embodiments, the new face has a normal orthogonal to e and d.
  Type: Grant
  Filed: December 12, 2016
  Date of Patent: November 13, 2018
  Assignee: ENVIRONMENTAL SYSTEMS RESEARCH INSTITUTE
  Inventors: Markus Lipp, Pascal Mueller
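The insertion test described here reduces to an angle check between normals plus a cross product for the new face's normal. The following sketch assumes the threshold is given in degrees and that face and plane orientations are represented by normal vectors; it illustrates the stated condition rather than the patented procedure.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def is_substantially_parallel(face_normal, plane_normal, threshold_deg):
    """True when the angle between a face and a plane is within the threshold."""
    cos_angle = abs(np.dot(unit(face_normal), unit(plane_normal)))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))) <= threshold_deg

def should_insert_face(adjacent_normal, plane_normal, threshold_deg):
    """Insert at an edge when it has no neighbour, or the neighbour is nearly parallel to pm."""
    if adjacent_normal is None:                       # boundary edge: no adjacent face exists
        return True
    return is_substantially_parallel(adjacent_normal, plane_normal, threshold_deg)

def new_face_normal(edge_vector, direction_vector):
    """Normal orthogonal to both the edge e and the direction vector d."""
    return unit(np.cross(edge_vector, direction_vector))

plane_n = np.array([0.0, 0.0, 1.0])                   # modified target plane normal
print(should_insert_face(None, plane_n, threshold_deg=10))                        # True
print(should_insert_face(np.array([0.05, 0.0, 1.0]), plane_n, threshold_deg=10))  # ~2.9 deg tilt: True
print(should_insert_face(np.array([1.0, 0.0, 0.2]), plane_n, threshold_deg=10))   # steep wall: False
print(new_face_normal(np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, -1.0])))     # [0, 1, 0]
```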
- Patent number: 10129385
  Abstract: A method and an electronic device are provided for transmitting a message from an electronic device to another electronic device. Handwritten input comprising one or more input objects is received. Playback information of the one or more input objects is generated using input coordinates of the one or more input objects or input times of the one or more input objects. An animation message including a first region in which one or more images are displayed, and a second region in which the one or more input objects are displayed, is generated according to the playback information. The animation message is transmitted to the other electronic device.
  Type: Grant
  Filed: June 20, 2016
  Date of Patent: November 13, 2018
  Assignee: Samsung Electronics Co., Ltd
  Inventors: Do-Hyeon Kim, Mu-Sik Kwon, Woo-Sung Kang
- Patent number: 10109102
  Abstract: A machine may render a view that includes a portion of an infinite plane within a three-dimensional (3D) space. The machine may determine a polygon within a frustum in the 3D space. The polygon may be determined by calculating an intersection of the frustum with the infinite plane. The polygon may represent that portion of the infinite plane which lies within the boundaries of the frustum. The machine may then determine a color of an element of this polygon according to one or more algorithms, default values, or other programming for depicting the infinite plane within the 3D space. The color of this element of the polygon may be that applied by the machine to a further element that is located on the far plane of the frustum, and this further element may be located at a height above the polygon within the 3D space.
  Type: Grant
  Filed: March 6, 2017
  Date of Patent: October 23, 2018
  Assignee: ADOBE SYSTEMS INCORPORATED
  Inventor: Nikolai Svakhin
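One common way to realise the frustum-plane intersection in this abstract is to clip a large proxy quad lying on the plane against the frustum's bounding half-spaces. The sketch below does exactly that with a Sutherland-Hodgman step per half-space; the proxy-quad trick, the box-shaped demo "frustum", and the function names are assumptions, not details from the patent.

```python
import numpy as np

def clip_polygon_against_halfspace(poly, normal, offset):
    """Keep the part of `poly` satisfying normal . p + offset >= 0 (Sutherland-Hodgman step)."""
    out = []
    for i in range(len(poly)):
        a, b = poly[i], poly[(i + 1) % len(poly)]
        da, db = np.dot(normal, a) + offset, np.dot(normal, b) + offset
        if da >= 0:
            out.append(a)
        if (da >= 0) != (db >= 0):                    # edge crosses the clipping plane
            out.append(a + (b - a) * (da / (da - db)))
    return out

def plane_frustum_polygon(plane_point, u, v, frustum_halfspaces, extent=1e4):
    """Polygon = (large proxy quad on the infinite plane) clipped by every frustum half-space.

    u and v are orthogonal directions spanning the plane; `extent` only needs to
    exceed the frustum's size so the proxy quad behaves like an infinite plane.
    """
    poly = [plane_point + s * extent * u + t * extent * v
            for s, t in [(-1, -1), (1, -1), (1, 1), (-1, 1)]]
    for normal, offset in frustum_halfspaces:
        poly = clip_polygon_against_halfspace(poly, np.asarray(normal, float), offset)
        if not poly:
            break                                     # plane lies entirely outside the frustum
    return poly

# Demo with a box-shaped "frustum" 0 <= x, y, z <= 10 (six inward-facing half-spaces)
# and the horizontal plane y = 2 spanned by the x and z axes.
box = [((1, 0, 0), 0), ((-1, 0, 0), 10), ((0, 1, 0), 0),
       ((0, -1, 0), 10), ((0, 0, 1), 0), ((0, 0, -1), 10)]
print(plane_frustum_polygon(np.array([0.0, 2.0, 0.0]),
                            np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]), box))
```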
- Patent number: 10091367
  Abstract: An information processing device is provided that can perform scroll operations without preparing model-specific tables. An image forming apparatus (1) causes a moving interval calculating part (110) to calculate moving interval values (250) of indication coordinates (320) of an object based on a ratio of an elapsed time (220) to the moving time (200) and a difference between end coordinates (240) and start coordinates (230). A moving interval value after setting wait part (120) adds the moving interval values (250) to the indication coordinates (320) of the object. An object drawing part (130) draws the object at the coordinates to which the moving interval values (250) are added and causes a display part to display the object.
  Type: Grant
  Filed: September 19, 2014
  Date of Patent: October 2, 2018
  Assignee: KYOCERA Document Solutions Inc.
  Inventor: Hideyuki Sasaki
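The calculation described here, an interval derived from the elapsed-time ratio and the end-start difference and then added to the object's coordinates each frame, can be sketched in a few lines. The exact formula and the 16 ms frame interval below are assumptions; the reference numerals in the abstract refer to parts of the apparatus, not to this code.

```python
def scroll_positions(start, end, moving_time_ms, frame_interval_ms=16):
    """Yield the indication coordinates drawn for each frame of the scroll animation.

    Each frame's moving interval is the elapsed-time ratio applied to the
    end-start difference (the precise combination is an assumption); the
    interval is added to the coordinates and the object is redrawn there.
    """
    x, y = start
    elapsed = 0
    while elapsed < moving_time_ms:
        step = min(frame_interval_ms, moving_time_ms - elapsed)
        elapsed += step
        ratio = step / moving_time_ms
        # Moving interval values for this frame, one per axis.
        dx, dy = ratio * (end[0] - start[0]), ratio * (end[1] - start[1])
        x, y = x + dx, y + dy      # add the moving interval values to the indication coordinates
        yield (x, y)               # the drawing part would redraw the object here

# Scroll an object from (0, 0) to (200, 80) over 100 ms.
for pos in scroll_positions((0, 0), (200, 80), moving_time_ms=100):
    print(pos)
```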
- Patent number: 10083537
  Abstract: A video may be presented on a touchscreen display. Reception of annotation input may be determined based on the user's engagement with the touchscreen display. Annotation input may define an in-frame visual annotation for the video. The in-frame visual annotation may be associated with a visual portion of the video and one or more points within a duration of the video such that a subsequent presentation of the video includes the in-frame visual annotation positioned at the visual portion of the video at the one or more points. A graphical user interface may be presented on the touchscreen display. The graphical user interface may include one or more animation fields that provide options for selection by the user. The options may define different properties of a moving visual element added to the video. The options may define visual characteristics, presentation periods, and motions of the moving visual element.
  Type: Grant
  Filed: September 16, 2016
  Date of Patent: September 25, 2018
  Assignee: GoPro, Inc.
  Inventors: Stephen Trey Moore, Ross Chinni, Nicholas D. Woodman
- Patent number: 10074209
  Abstract: A method for processing a current image of an image sequence is disclosed. According to the invention, the method includes: identification of at least one region to be constructed associated with the current image, called an unknown region; selection of at least one construction technique for constructing said at least one unknown region; and association of at least one confidence indicator with said at least one unknown region, the confidence indicator being obtained by: a first value representative of the use of temporal inpainting or inter-view inpainting, inversely proportional to the temporal or inter-view distance; a second value representative of the use of temporal inpainting or inter-view inpainting to construct said pixel; a third value representative of the minimum distance between said pixel and a pixel of a known region; and a fourth value representative of the application of a color and/or luminance compensation.
  Type: Grant
  Filed: July 26, 2014
  Date of Patent: September 11, 2018
  Assignee: Thomson Licensing
  Inventors: Matthieu Fradet, Joan Llach Pinsach, Philippe Robert
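The abstract names four values that feed the confidence indicator but not how they are combined, so the sketch below simply multiplies normalised terms; every weight, threshold, and the function name are illustrative assumptions rather than the patented computation.

```python
def inpainting_confidence(used_temporal_or_interview, frame_or_view_distance,
                          distance_to_known_px, compensation_applied,
                          max_distance_px=64.0):
    """Combine the four indicator values named in the abstract into one confidence score."""
    # First value: temporal or inter-view inpainting, discounted by temporal/inter-view distance.
    v1 = 1.0 / (1.0 + frame_or_view_distance) if used_temporal_or_interview else 0.5
    # Second value: whether temporal or inter-view inpainting constructed this pixel at all.
    v2 = 1.0 if used_temporal_or_interview else 0.5
    # Third value: pixels far from any known region are less trustworthy.
    v3 = max(0.0, 1.0 - distance_to_known_px / max_distance_px)
    # Fourth value: colour/luminance compensation slightly reduces confidence.
    v4 = 0.9 if compensation_applied else 1.0
    return v1 * v2 * v3 * v4

# A pixel filled by temporal inpainting from the previous frame, 4 px inside the hole.
print(inpainting_confidence(True, frame_or_view_distance=1,
                            distance_to_known_px=4, compensation_applied=False))
```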
- Patent number: 10068373
  Abstract: An electronic device for providing map information associated with a space of interest is provided. The electronic device includes a display and a processor configured to display, on the display, at least a portion of a map including at least one node associated with at least one image photographed at a corresponding position of the space of interest and additional information on the at least one image, change, in response to an input or an event, a first image associated with a first node among the at least one node or first additional information on the first image, and display, on the map through the display, at least a portion of the changed first image or at least a portion of the changed first additional information.
  Type: Grant
  Filed: June 30, 2015
  Date of Patent: September 4, 2018
  Assignee: Samsung Electronics Co., Ltd.
  Inventors: Shin-Jun Lee, Kyung-Tae Kim, Eun-Seon Noh, Sun-Kee Lee, Cheol-Ho Cheong, Jin-Ik Kim, Hyung-Suk Kim, Bu-Seop Jung, Sung-Dae Cho