Patents Assigned to IN3D Corporation
-
Patent number: 11854579
Abstract: Apparati, methods, and computer readable media for inserting identity information from a source image (static image or video) (301) into a destination video (302), while mimicking motion of the destination video (302). In an apparatus embodiment, an identity encoder (304) is configured to encode identity information of the source image (301). When source image (301) is a multi-frame static image or a video, an identity code aggregator (307) is positioned at an output of the identity encoder (304), and produces an identity vector (314). A driver encoder (313) is coupled to the destination (driver) video (302), and has two components: a pose encoder (305) configured to encode pose information of the destination video (302), and a motion encoder (315) configured to separately encode motion information of the destination video (302). The driver encoder (313) produces two vectors: a pose vector (308) and a motion vector (316).
Type: Grant
Filed: July 12, 2021
Date of Patent: December 26, 2023
Assignee: Spree3D Corporation
Inventors: Mohamed N. Moustafa, Ahmed A. Ewais, Amr A. Ali
-
Patent number: 11836905
Abstract: Apparati, methods, and computer readable media for inserting identity information from a source image (1) into a destination image (2), while mimicking illumination of the destination image (2). In an apparatus embodiment, an identity encoder (4) is configured to encode just identity information of the source image (1) and to produce an identity vector (7), where the identity encoder (4) does not encode any pose information or illumination information of the source image (1). A driver encoder (12) has two components: a pose encoder (5) configured to encode pose information of the destination image (2) and an illumination encoder (6) configured to separately encode illumination information of the destination image (2), and to produce two vectors: a pose vector (8) and an illumination vector (9). A neural network generator (10) is coupled to the identity encoder (4) and to the driver encoder (12), and has three inputs: the identity vector (7), the pose vector (8), and the illumination vector (9).
Type: Grant
Filed: June 3, 2021
Date of Patent: December 5, 2023
Assignee: Spree3D Corporation
Inventors: Mohamed N. Moustafa, Ahmed A. Ewais, Amr A. Ali
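The disentanglement described in this abstract (identity from the source, pose and illumination from the destination, all three codes fed to a generator) can be illustrated with a minimal structural sketch. The encoder and generator bodies below are placeholder functions chosen only to show the data flow; they are not the patented networks, and all function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def identity_encoder(source_img):
    # Placeholder: encodes only identity (here, a trivial per-channel summary)
    return source_img.mean(axis=(0, 1))  # shape: (channels,)

def pose_encoder(dest_img):
    # Placeholder: encodes pose information of the destination image
    return dest_img.std(axis=(0, 1))

def illumination_encoder(dest_img):
    # Placeholder: separately encodes lighting of the destination image
    return dest_img.max(axis=(0, 1))

def generator(identity_vec, pose_vec, illum_vec):
    # Placeholder generator: consumes the three disentangled codes
    return np.concatenate([identity_vec, pose_vec, illum_vec])

source = rng.random((64, 64, 3))       # source image: identity donor
destination = rng.random((64, 64, 3))  # destination/driver image

out = generator(identity_encoder(source),
                pose_encoder(destination),
                illumination_encoder(destination))
print(out.shape)  # (9,)
```

The point of the split is that the identity code carries nothing about pose or lighting, so the generator can recombine one subject's identity with another image's pose and illumination.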
-
Patent number: 11769346
Abstract: Methods and apparati for inserting face and hair information from a source video (401) into a destination (driver) video (402) while mimicking pose, illumination, and hair motion of the destination video (402). An apparatus embodiment comprises an identity encoder (404) configured to encode face and hair information of the source video (401) and to produce as an output an identity vector; a pose encoder (405) configured to encode pose information of the destination video (402) and to produce as an output a pose vector; an illumination encoder (406) configured to encode head and hair illumination of the destination video (402) and to produce as an output an illumination vector; and a hair motion encoder (414) configured to encode hair motion of the destination video (402) and to produce as an output a hair motion vector. The identity vector, pose vector, illumination vector, and hair motion vector are fed as inputs to a neural network generator (410).
Type: Grant
Filed: December 22, 2021
Date of Patent: September 26, 2023
Assignee: Spree3D Corporation
Inventors: Mohamed N. Moustafa, Ahmed A. Ewais, Amr A. Ali
-
Patent number: 11663764
Abstract: Provided are methods and systems for automatic creation of a customized avatar animation of a user. An example method commences with receiving production parameters and creating, based on the production parameters, a multidimensional array of a plurality of blank avatar animations. Each blank avatar animation has a predetermined number of frames and a plurality of features associated with each frame. The method further includes receiving user parameters including body dimensions, hair, and images of a face of a user. The method continues with selecting, from the plurality of blank avatar animations, two blank avatar animations closest to the user based on the body dimensions. The method further includes interpolating corresponding frames of the two blank avatar animations to produce an interpolated avatar animation. The method continues with compositing the face and the hair with the interpolated avatar animation using a machine learning technique to render the customized avatar animation.
Type: Grant
Filed: April 15, 2021
Date of Patent: May 30, 2023
Assignee: Spree3D Corporation
Inventors: Gil Spencer, Dmitriy Vladlenovich Pinskiy, Evan Smyth
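The interpolation step above (blending corresponding frames of the two blank animations closest to the user's body dimensions) can be sketched as a per-frame weighted average. The inverse-distance weighting below is an assumption for illustration; the abstract does not specify the exact weighting scheme.

```python
import numpy as np

def interpolate_animations(anim_a, anim_b, dims_a, dims_b, user_dims):
    """Blend two blank avatar animations frame by frame.

    The weight reflects how close the user's body dimensions are to each
    animation's reference dimensions (simple inverse-distance weighting;
    hypothetical, chosen only to demonstrate the idea).
    """
    da = np.linalg.norm(user_dims - dims_a)
    db = np.linalg.norm(user_dims - dims_b)
    w = db / (da + db) if (da + db) > 0 else 0.5  # closer animation weighs more
    return w * anim_a + (1.0 - w) * anim_b

frames, joints = 24, 16
anim_a = np.zeros((frames, joints, 3))  # blank animation for a smaller body
anim_b = np.ones((frames, joints, 3))   # blank animation for a larger body
dims_a = np.array([160.0, 55.0])        # e.g. height (cm), weight (kg)
dims_b = np.array([190.0, 90.0])
user = np.array([175.0, 72.5])          # user falls exactly midway

blended = interpolate_animations(anim_a, anim_b, dims_a, dims_b, user)
print(blended.shape, float(blended[0, 0, 0]))  # (24, 16, 3) 0.5
```

Because the user's dimensions sit midway between the two references, every blended frame is the even average of the corresponding frames.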
-
Patent number: 9804865
Abstract: Embodiments here include systems and methods for running an application via a microvisor processor in communication with a memory and a storage is disclosed. For example, one method includes installing an application. The method also includes identifying an operating system that the application is configured to execute within. The method also includes identifying a resource required by the application to execute, wherein the resource is part of the operating system. The method also includes identifying a location of the resource in the storage. The method also includes retrieving the resource from the storage. The method also includes bundling the application and the resource in the memory. The method also includes executing the application using the resource.
Type: Grant
Filed: May 22, 2015
Date of Patent: October 31, 2017
Assignee: Sphere 3D Corporation
Inventors: Peter G. Bookman, Giovanni J. Morelli
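The sequence of steps in this method (identify the target OS, locate each required OS resource in storage, retrieve it, and bundle it with the application in memory) can be mirrored in a toy sketch. The storage layout, resource names, and `bundle` function are all hypothetical; a real microvisor operates at a far lower level than this.

```python
from dataclasses import dataclass, field

@dataclass
class App:
    name: str
    target_os: str              # the OS the app is configured to execute within
    required: list = field(default_factory=list)

# Hypothetical storage index: OS -> resource name -> location in storage
OS_RESOURCES = {
    "win10": {"gdi32": "/store/win10/gdi32.dll"},
}

def bundle(app):
    """Follow the abstract's steps: find the OS's resource index, locate
    each required resource in storage, and bundle app + resources in memory."""
    store = OS_RESOURCES[app.target_os]       # identify the operating system
    bundle_mem = {"app": app.name}            # in-memory bundle
    for res in app.required:
        bundle_mem[res] = store[res]          # locate and retrieve the resource
    return bundle_mem

b = bundle(App("paint", "win10", ["gdi32"]))
print(b)
```

The bundle produced this way pairs the application with exactly the OS resources it needs, which is the precondition for the final "executing the application using the resource" step.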
-
Publication number: 20160189419
Abstract: According to one aspect, there are systems and methods for generating data indicative of a three-dimensional representation of a scene. Current depth data indicative of a scene is generated using a sensor. Salient features are detected within a depth frame associated with the depth data, and these salient features are matched with a saliency likelihoods distribution. The saliency likelihoods distribution represents the scene, and is generated from previously-detected salient features. The pose of the sensor is estimated based upon the matching of detected salient features, and this estimated pose is refined based upon a volumetric representation of the scene. The volumetric representation of the scene is updated based upon the current depth data and estimated pose. A saliency likelihoods distribution representation is updated based on the salient features. Image data indicative of the scene may also be generated and used along with depth data.
Type: Application
Filed: August 8, 2014
Publication date: June 30, 2016
Applicant: Sweep3D Corporation
Inventors: Adel FAKIH, John ZELEK
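Two of the steps above, detecting salient features in a depth frame and updating the saliency likelihoods distribution from them, can be sketched simply. Treating depth edges as salient and using an exponential moving average for the distribution are assumptions for illustration; the publication does not specify these choices, and pose estimation and the volumetric refinement are omitted.

```python
import numpy as np

def detect_salient(depth, thresh=0.4):
    # Salient features here: pixels where the depth gradient is strong
    # (depth discontinuities such as object edges)
    gy, gx = np.gradient(depth)
    mag = np.hypot(gx, gy)
    return np.argwhere(mag > thresh)  # (N, 2) array of (row, col) locations

def update_saliency_distribution(dist, features, lr=0.1):
    # Blend newly detected feature locations into the running likelihood map,
    # so the map comes to represent where salient features recur in the scene
    hit = np.zeros_like(dist)
    hit[tuple(features.T)] = 1.0
    return (1.0 - lr) * dist + lr * hit

depth = np.zeros((8, 8))
depth[:, 4:] = 1.0                       # a depth step edge (e.g. a wall corner)
features = detect_salient(depth)
dist = update_saliency_distribution(np.zeros_like(depth), features)
print(len(features), float(dist.max()))
```

Matching a new frame's detections against this accumulated map is what lets the method score candidate sensor poses before the volumetric refinement step.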
-
Patent number: 9129034
Abstract: A system and method for web browsing contemporaneously displays multiple web pages, advertisements, or other applications, preferably within a single window, for a user to view. In a preferred embodiment, a current web page, a past web page, a future web page, and/or an advertisement or other application are contemporaneously displayed in a single window. In this embodiment, the present invention tracks a past web page and renders it in a first panel, renders the current web page in a second panel, and identifies a hyperlink in the current web page to retrieve and render the future web page in a third panel. In other embodiments, a host provides a list of web pages that are to be displayed in the panels. In other embodiments, a user selects a list of web pages that are to be displayed in the panels. In other embodiments, hyperlinks are filtered and/or prioritized to determine which web pages are to be displayed in the panels.
Type: Grant
Filed: June 8, 2009
Date of Patent: September 8, 2015
Assignee: Browse3D Corporation
Inventors: David T. Shuping, William R. Johnson, Robert C. Randa
-
Publication number: 20090293012
Abstract: A handheld synthetic vision system includes a display, a sensor suite and a computer all housed in a handheld unit. The system enhances normal vision by displaying to a user actual or digitally created visual scenes of objects and information that may or may not be perceptible to unaided human senses.
Type: Application
Filed: June 9, 2005
Publication date: November 26, 2009
Applicant: NAV3D CORPORATION
Inventors: Keith Alter, Andrew Kevin Barrows, Chad Jennings
-
Patent number: 7546538
Abstract: A system and method for web browsing contemporaneously displays multiple web pages, advertisements, or other applications, preferably within a single window, for a user to view. In a preferred embodiment, a current web page, a past web page, a future web page, and/or an advertisement or other application are contemporaneously displayed in a single window. In this embodiment, the present invention tracks a past web page and renders it in a first panel, renders the current web page in a second panel, and identifies a hyperlink in the current web page to retrieve and render the future web page in a third panel. In other embodiments, a host provides a list of web pages that are to be displayed in the panels. In other embodiments, a user selects a list of web pages that are to be displayed in the panels. In other embodiments, hyperlinks are filtered and/or prioritized to determine which web pages are to be displayed in the panels.
Type: Grant
Filed: November 10, 2001
Date of Patent: June 9, 2009
Assignee: Browse3D Corporation
Inventors: David T. Shuping, William R. Johnson, Robert C. Randa
-
Patent number: 6362817
Abstract: A computer-based system for designing and using three-dimensional environments over a bandwidth limited network such as the Internet. The system allows an environment to be specified as a series of two-dimensional grids of text characters. Each character occupies a single grid position and represents an object in the environment. Objects can be given characteristics such as texture maps, and associated images and sounds that are triggered by events such as a user approaching the object. An object or image can be a hyperlink so that, when clicked or moved upon, the user is transported to a new location. A basic set of objects and media (images and sounds) is provided so that a designer of an environment does not have to perform low-level three-dimensional modeling. Objects can behave differently when placed near one another. For example, walls fuse together to provide a longer wall.
Type: Grant
Filed: May 18, 1998
Date of Patent: March 26, 2002
Assignee: IN3D Corporation
Inventors: Michael Powers, Philip Stephens
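The authoring model above (a 2D grid of text characters, one object per grid cell, with adjacent walls fusing into longer walls) can be sketched directly. The character legend and the `fuse_walls` rule below are hypothetical illustrations of the idea, not the patent's actual encoding.

```python
# Each character in the grid stands for an object in the 3D environment
# ('#' = wall, 'D' = door, ' ' = empty floor -- an assumed legend)
GRID = [
    "#####",
    "#   D",
    "#####",
]

LEGEND = {"#": "wall", "D": "door", " ": "floor"}

def parse_environment(grid, legend):
    """Turn the character grid into a list of placed objects."""
    objects = []
    for z, row in enumerate(grid):
        for x, ch in enumerate(row):
            kind = legend.get(ch, "unknown")
            if kind != "floor":
                objects.append({"type": kind, "x": x, "z": z})
    return objects

def fuse_walls(objects):
    """Adjacent wall cells in the same row fuse into one longer wall,
    mirroring the abstract's example of neighboring objects interacting."""
    walls = sorted((o for o in objects if o["type"] == "wall"),
                   key=lambda o: (o["z"], o["x"]))
    fused, run = [], None
    for w in walls:
        if run and w["z"] == run["z"] and w["x"] == run["x"] + run["len"]:
            run["len"] += 1          # extend the current wall run
        else:
            run = {"z": w["z"], "x": w["x"], "len": 1}
            fused.append(run)
    return fused

objs = parse_environment(GRID, LEGEND)
print(len(objs), len(fuse_walls(objs)))  # 12 3
```

The grid's eleven wall cells collapse into three fused wall segments, which is the kind of simplification that spares the designer from low-level 3D modeling.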
-
Patent number: 6313855
Abstract: The present invention provides a system and method for web browsing. Generally speaking, the present invention contemporaneously displays multiple web pages, preferably within a single window, for a user to view. In a preferred embodiment, a current web page, a past web page, and a future web page are contemporaneously displayed in a single window. In this embodiment, the present invention tracks a past web page and renders it in a first panel, renders the current web page in a second panel, and identifies a hyperlink in the current web page to retrieve and render the future web page in a third panel. Preferably, all of these panels are embedded within a single window. In this manner, the user contemporaneously views the current web page, the past page, and the future web page in the single window. Preferably, the present invention is implemented as a web browsing room in a three-dimensional space where walls of the rooms correspond to various ones of the aforementioned panels.
Type: Grant
Filed: February 4, 2000
Date of Patent: November 6, 2001
Assignee: Browse3D Corporation
Inventors: David T. Shuping, William R. Johnson
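The past/current/future panel logic these browsing patents describe (track the page just left, render the current page, and prefetch a hyperlinked page as the "future" panel) can be sketched with a toy page store. The page contents, the first-link prefetch policy, and the class name are all assumptions; the patents also describe host-provided and user-selected page lists that this sketch omits.

```python
import re

# Toy page store standing in for the web: each page is a snippet of HTML
PAGES = {
    "/home": '<a href="/news">news</a> <a href="/about">about</a>',
    "/news": '<a href="/home">home</a>',
    "/about": "",
}

def first_link(html):
    # Identify a hyperlink in the current page to prefetch as the future page
    m = re.search(r'href="([^"]+)"', html)
    return m.group(1) if m else None

class ThreePanelBrowser:
    """Past / current / future panels, as described in the abstract."""

    def __init__(self, start):
        self.past = None
        self.current = start
        self.future = first_link(PAGES[start])

    def navigate(self, url):
        self.past = self.current          # track the page being left
        self.current = url                # render the new current page
        self.future = first_link(PAGES[url])  # prefetch its first hyperlink

    def panels(self):
        return (self.past, self.current, self.future)

b = ThreePanelBrowser("/home")
b.navigate("/news")
print(b.panels())  # ('/home', '/news', '/home')
```

After one navigation, the three panels hold the previous page, the current page, and the prefetched target of the current page's first hyperlink, so all three are visible contemporaneously.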