Patents by Inventor Henry Fuchs
Henry Fuchs has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11049476
Abstract: Methods, systems, and computer readable media for minimal-latency tracking and display for matching real and virtual worlds in head-worn displays are disclosed. According to one aspect, a method for minimal-latency tracking and display for matching real and virtual worlds in head-worn displays includes calculating a desired image, calculating an error image as the difference between the desired image and an image currently being perceived by a user, identifying as an error portion a portion of the error image having the largest error, updating a portion of a projected image that corresponds to the error portion, and recalculating the image currently being perceived by a user based on the updated projected image.
Type: Grant
Filed: November 4, 2015
Date of Patent: June 29, 2021
Assignee: The University of North Carolina at Chapel Hill
Inventors: Henry Fuchs, Anselmo A. Lastra, John Turner Whitted, Feng Zheng, Andrei State, Gregory Welch
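The abstract above describes an error-driven partial-update loop: compute an error image, find its worst region, and refresh only that part of the projected image. The following is a minimal sketch of one such iteration; all function names are illustrative (not from the patent), and perception is modeled simplistically as the projected image itself.

```python
import numpy as np

def update_step(desired, perceived, projected, block=4):
    """One illustrative iteration of error-driven partial update.

    Computes the error image, locates the non-overlapping block with the
    largest total error, and updates only that region of the projected image.
    """
    error = desired - perceived
    h, w = error.shape
    # Sum absolute error over non-overlapping blocks to find the worst region.
    blocks = np.abs(error).reshape(h // block, block, w // block, block).sum(axis=(1, 3))
    by, bx = np.unravel_index(np.argmax(blocks), blocks.shape)
    ys, xs = by * block, bx * block
    # Update only the worst block of the projected image toward the desired image.
    projected = projected.copy()
    projected[ys:ys + block, xs:xs + block] = desired[ys:ys + block, xs:xs + block]
    # Recompute the perceived image; here perception is modeled as the projection.
    perceived = projected.copy()
    return projected, perceived
```

Because only one small region is rewritten per iteration, each update can be pushed to the display with minimal latency.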
-
Publication number: 20190380790
Abstract: The subject matter described herein includes methods, systems, and computer readable media for image guided ablation. One system for image guided ablation includes an ultrasound transducer for producing a real-time ultrasound image of a target volume and of surrounding tissue. The system further includes an ablation probe for ablating the target volume. The system further includes a display for displaying an image to guide positioning of the ablation probe during ablation of the target volume. The system further includes at least one tracker for tracking position and orientation of the ablation probe during the ablation of the target volume. The system further includes a rendering and display module for receiving a pre-ablation image of the target volume and for displaying a combined image on the display, where the combined image includes a motion tracked, rendered image of the ablation probe and an equally motion tracked real-time ultrasound image registered with the pre-ablation image.
Type: Application
Filed: June 25, 2019
Publication date: December 19, 2019
Inventors: Henry Fuchs, Hua Yang, Tabitha Peck, Anna Bulysheva, Andrei State
-
Patent number: 10365711
Abstract: Methods, systems, and computer readable media for unified scene acquisition and pose tracking in a wearable display are disclosed. According to one aspect, a system for unified scene acquisition and pose tracking in a wearable display includes a wearable frame configured to be worn by a user. Mounted on the frame are: at least one sensor for acquiring scene information for a real scene proximate to the user, the scene information including images and depth information; a pose tracker for estimating the user's head pose based on the acquired scene information; a rendering unit for generating a virtual reality (VR) image based on the acquired scene information and estimated head pose; and at least one display for displaying to the user a combination of the generated VR image and the scene proximate to the user.
Type: Grant
Filed: May 17, 2013
Date of Patent: July 30, 2019
Assignee: The University of North Carolina at Chapel Hill
Inventors: Henry Fuchs, Mingsong Dou, Gregory Welch, Jan-Michael Frahm
-
Patent number: 10321107
Abstract: A system for illuminating a spatial augmented reality object includes an augmented reality object including a projection surface having a plurality of apertures formed through the projection surface. The system further includes a lenslets layer including a plurality of lenslets and conforming to curved regions of the projection surface for directing light through the apertures. The system further includes a camera for measuring ambient illumination in an environment of the projection surface. The system further includes a projected image illumination adjustment module for adjusting illumination of a captured video image. The system further includes a projector for projecting the illumination adjusted captured video image onto the projection surface via the lenslets layer and the apertures.
Type: Grant
Filed: November 12, 2014
Date of Patent: June 11, 2019
Assignee: The University of North Carolina at Chapel Hill
Inventors: Henry Fuchs, Gregory Welch
-
Patent number: 10319154
Abstract: A system for providing auto-focus augmented reality (AR) viewing of real and virtual objects includes a frame for holding AR viewing components on a user's head and optically in front of the user's eyes. The AR viewing components include an internal virtual objects display for displaying virtual objects to a user. Internal and external focus correction modules respectively adjust focal distances of virtual and real objects, are respectively configurable to adjust the focal distances of the virtual and real objects differently based on the different user vision types, and operate to respectively adjust the focal distances of the virtual and real objects such that the virtual and real objects are simultaneously in focus based on the vision type of the user.
Type: Grant
Filed: July 20, 2018
Date of Patent: June 11, 2019
Assignee: The University of North Carolina at Chapel Hill
Inventors: Praneeth Kumar Chakravarthula, Henry Fuchs
-
Patent number: 9983412
Abstract: A head mountable augmented reality see through near eye display system includes a see through augmented reality image focal distance modulator for changing a distance at which augmented reality images appear in focus and including at least one light transmissive surface through which real world objects are viewable. The system further includes a display for generating an augmented reality image and projecting the augmented reality image onto the see through augmented reality image focal distance modulator. The system further includes an augmented reality image focal distance controller for controlling the see through augmented reality image focal distance modulator to cause the augmented reality image to appear in focus at a distance corresponding to a vergence distance and for changing the distance at which the augmented reality image appears to be in focus in correspondence with changes in the vergence distance.
Type: Grant
Filed: February 2, 2017
Date of Patent: May 29, 2018
Assignee: THE UNIVERSITY OF NORTH CAROLINA AT CHAPEL HILL
Inventors: Henry Fuchs, David Scott Dunn, Cary Aaron Tippets
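The controller in this abstract drives the focal distance modulator from the user's vergence distance. As a minimal sketch of the underlying geometry (not the patented controller itself): for symmetric fixation, the vergence distance follows from the interpupillary distance and the vergence angle, and the modulator's required optical power is the reciprocal of that distance. Function names are illustrative.

```python
import math

def vergence_distance(ipd_m, vergence_angle_rad):
    """Distance at which the two eyes' lines of sight converge.

    For symmetric fixation, each eye rotates inward by half the vergence
    angle, so d = (ipd / 2) / tan(angle / 2).
    """
    return (ipd_m / 2.0) / math.tan(vergence_angle_rad / 2.0)

def focal_power_diopters(distance_m):
    """Optical power (in diopters) that places the image plane at the
    given distance, so the virtual image appears in focus there."""
    return 1.0 / distance_m
```

As the eyes converge on nearer objects the vergence angle grows, the computed distance shrinks, and the modulator's power is increased accordingly.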
-
Patent number: 9898866
Abstract: Methods, systems, and computer readable media for low latency stabilization for head-worn displays are disclosed. According to one aspect, the subject matter described herein includes a system for low latency stabilization of a head-worn display. The system includes a low latency pose tracker having one or more rolling-shutter cameras that capture a 2D image by exposing each row of a frame at a later point in time than the previous row and that output image data row by row, and a tracking module for receiving image data row by row and using that data to generate a local appearance manifold. The generated manifold is used to track camera movements, which are used to produce a pose estimate.
Type: Grant
Filed: March 13, 2014
Date of Patent: February 20, 2018
Assignee: The University of North Carolina at Chapel Hill
Inventors: Henry Fuchs, Anselmo A. Lastra, Jan-Michael Frahm, Nate Michael Dierk, David Paul Perra
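The key observation in this abstract is that a rolling-shutter camera exposes each row at a slightly later time, so each row is an independent, very recent motion sample. The sketch below illustrates only that timing idea (not the patented appearance-manifold tracker): per-row timestamps plus per-row image shifts yield a least-squares estimate of image-space velocity. All names are illustrative.

```python
import numpy as np

def row_timestamps(n_rows, frame_time):
    """Exposure time of each row: a rolling shutter exposes row i at
    t = i * (frame_time / n_rows), later than the previous row."""
    return np.arange(n_rows) * (frame_time / n_rows)

def estimate_velocity(row_shifts_px, timestamps):
    """Least-squares fit of a constant image-space velocity (px/s)
    from the observed horizontal shift of each row."""
    # Fit shifts = v * t + b; return the slope v.
    A = np.vstack([timestamps, np.ones_like(timestamps)]).T
    solution, *_ = np.linalg.lstsq(A, row_shifts_px, rcond=None)
    return solution[0]
```

Because every row contributes a measurement, the tracker can update its motion estimate hundreds of times per frame instead of once per frame, which is what enables the low-latency stabilization.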
-
Patent number: 9858721
Abstract: The subject matter described herein includes systems, methods, and computer readable media for generating an augmented scene display. An exemplary method includes forming, using a display device operating in a first stage, an augmented virtual image by emitting light rays through a plurality of spatial light modulation layers included in a display device. The method also includes forming, using the display device operating in a second stage, an occluded real image by opening a shutter element of the display device to receive light rays from a real object and utilizing the plurality of spatial light modulation layers to block any light ray from the real object which coincides with the augmented virtual image. The method further includes generating an augmented scene display that includes both the occluded real image and the augmented virtual image by alternating the operation of the display device between the first stage and the second stage.
Type: Grant
Filed: January 15, 2014
Date of Patent: January 2, 2018
Assignee: The University of North Carolina at Chapel Hill
Inventors: Andrew Maimone, Henry Fuchs
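When the two stages alternate fast enough, the eye integrates them into a single image: real light blocked wherever it coincides with the virtual image, plus the virtual image itself. A minimal per-pixel sketch of that time-averaged composition (illustrative only; the patent's spatial light modulation layers are not modeled):

```python
import numpy as np

def compose_augmented_scene(real, virtual, virtual_alpha):
    """Time-averaged result of the two display stages.

    Stage 2 (shutter open): real light is blocked wherever the virtual
    image is present, i.e. attenuated by (1 - alpha).
    Stage 1 (shutter closed): the virtual image is emitted with weight alpha.
    """
    occluded_real = real * (1.0 - virtual_alpha)
    return occluded_real + virtual * virtual_alpha
```

With alpha = 1 at a pixel, the virtual content fully occludes the real scene there; with alpha = 0, the real scene passes through unchanged, which is what a shuttered, occlusion-capable see-through display achieves optically.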
-
Publication number: 20170345398
Abstract: Methods, systems, and computer readable media for minimal-latency tracking and display for matching real and virtual worlds in head-worn displays are disclosed. According to one aspect, a method for minimal-latency tracking and display for matching real and virtual worlds in head-worn displays includes calculating a desired image, calculating an error image as the difference between the desired image and an image currently being perceived by a user, identifying as an error portion a portion of the error image having the largest error, updating a portion of a projected image that corresponds to the error portion, and recalculating the image currently being perceived by a user based on the updated projected image.
Type: Application
Filed: November 4, 2015
Publication date: November 30, 2017
Applicant: The University of North Carolina at Chapel Hill
Inventors: Henry Fuchs, Anselmo A. Lastra, John Turner Whitted, Feng Zheng, Andrei State
-
Patent number: 9792715
Abstract: Methods, systems, and computer readable media for utilizing synthetic animatronics are disclosed. According to one aspect, a method for synthetic animatronics includes providing a display surface having different regions that accommodate different positions or deformations of a subject, mapping images of the subject to the different regions on the display surface, and displaying the mapped images on the different regions of the display surface at different times in accordance with a desired animation of the subject.
Type: Grant
Filed: May 17, 2013
Date of Patent: October 17, 2017
Assignee: THE UNIVERSITY OF NORTH CAROLINA AT CHAPEL HILL
Inventors: Gregory Welch, Kurtis P. Keller, Andrei State, Henry Fuchs, Ryan Edward Schubert
-
Patent number: 9538167
Abstract: Methods, systems, and computer readable media for shader lamps-based avatars of real and virtual people are disclosed. According to one method, shader lamps-based avatars of real and virtual objects are displayed on physical target objects. The method includes obtaining visual information of a source object and generating at least a first data set of pixels representing a texture image of the source object. At least one of a size, shape, position, and orientation of a 3D physical target object are determined. A set of coordinate data associated with various locations on the surface of the target object are also determined. The visual information is mapped to the physical target object. Mapping includes defining a relationship between the first and second sets of data, wherein each element of the first set is related to each element of the second set.
Type: Grant
Filed: March 8, 2010
Date of Patent: January 3, 2017
Assignee: THE UNIVERSITY OF NORTH CAROLINA AT CHAPEL HILL
Inventors: Gregory Francis Welch, Henry Fuchs, Peter Lincoln, Andrew Nashel, Andrei State
-
Publication number: 20160323553
Abstract: A system for illuminating a spatial augmented reality object includes an augmented reality object including a projection surface having a plurality of apertures formed through the projection surface. The system further includes a lenslets layer including a plurality of lenslets and conforming to curved regions of the projection surface for directing light through the apertures. The system further includes a camera for measuring ambient illumination in an environment of the projection surface. The system further includes a projected image illumination adjustment module for adjusting illumination of a captured video image. The system further includes a projector for projecting the illumination adjusted captured video image onto the projection surface via the lenslets layer and the apertures.
Type: Application
Filed: November 12, 2014
Publication date: November 3, 2016
Inventors: Henry Fuchs, Gregory Welch
-
Publication number: 20160270862
Abstract: The subject matter described herein includes methods, systems, and computer readable media for image guided ablation. One system for image guided ablation includes an ultrasound transducer for producing a real-time ultrasound image of a target volume and of surrounding tissue. The system further includes an ablation probe for ablating the target volume. The system further includes a display for displaying an image to guide positioning of the ablation probe during ablation of the target volume. The system further includes at least one tracker for tracking position and orientation of the ablation probe during the ablation of the target volume. The system further includes a rendering and display module for receiving a pre-ablation image of the target volume and for displaying a combined image on the display, where the combined image includes a motion tracked, rendered image of the ablation probe and an equally motion tracked real-time ultrasound image registered with the pre-ablation image.
Type: Application
Filed: February 11, 2016
Publication date: September 22, 2016
Inventors: Henry Fuchs, Hua Yang, Tabitha Peck, Anna Bulysheva, Andrei State
-
Patent number: 9361727
Abstract: Methods, systems, and computer readable media for generating autostereo three-dimensional views of a scene for a plurality of viewpoints are disclosed. According to one system, a display is configured to display images from plural different viewpoints using a barrier located in front of the display, where the barrier has a pseudo-random arrangement of light ports through which images on the display are viewable. A renderer coupled to the display simultaneously renders images from the different viewpoints such that pixels that should appear differently from the different viewpoints are displayed in a predetermined manner. The pseudo-random arrangement of the light ports in the barrier smoothes interference between the different viewpoints as perceived by viewers located at the different viewpoints.
Type: Grant
Filed: March 8, 2010
Date of Patent: June 7, 2016
Assignee: The University of North Carolina at Chapel Hill
Inventors: Henry Fuchs, Leonard McMillan, Andrew Nashel
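The barrier described here replaces the regular slits of a classic parallax barrier with a pseudo-random arrangement of light ports, turning structured crosstalk between viewpoints into unstructured noise. A minimal sketch of generating such a port mask (illustrative only; the patent does not specify this construction, and the function name and port density are assumptions):

```python
import numpy as np

def pseudo_random_barrier(height, width, port_fraction, seed=0):
    """Barrier mask with a pseudo-random arrangement of light ports.

    1 = open port, 0 = opaque. A seeded generator makes the arrangement
    reproducible so the renderer can precompute which display pixel is
    visible through each port from each viewpoint.
    """
    rng = np.random.default_rng(seed)
    return (rng.random((height, width)) < port_fraction).astype(np.uint8)
```

The renderer then assigns each display pixel behind an open port to whichever viewpoint sees it, so all views are rendered simultaneously rather than time-multiplexed.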
-
Patent number: 9299195
Abstract: A video conference server receives a plurality of video frames including a current frame and at least one previous frame. Each of the video frames includes a corresponding image and a corresponding depth map. The server produces a directional distance function (DDF) field that represents an area surrounding a target surface of the object captured in the current frame. A forward transformation is generated that modifies the reference surface to align with the target surface. Using at least a portion of the forward transformation, a backward transformation is calculated that modifies the target surface of the current frame to align with the reference surface. The backward transformation is then applied to the DDF to generate a transformed DDF. The server updates the reference model with the transformed DDF and transmits data for the updated reference model to enable a representation of the object to be produced at a remote location.
Type: Grant
Filed: March 25, 2014
Date of Patent: March 29, 2016
Assignee: Cisco Technology, Inc.
Inventors: Mingsong Dou, Henry Fuchs, Madhav Marathe
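Central to this abstract is the forward/backward transformation pair: the backward transformation (target to reference) is derived from the forward one (reference to target). The patent's transformations are general surface alignments, but the inversion idea can be sketched for the rigid special case, where the inverse has a closed form. All names below are illustrative.

```python
import numpy as np

def rigid_transform(R, t):
    """4x4 homogeneous transform from a 3x3 rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def backward_from_forward(T_forward):
    """Backward transform (target -> reference) as the inverse of the
    forward transform (reference -> target). For rigid motion the
    inverse is (R^T, -R^T t), avoiding a general matrix inversion."""
    R = T_forward[:3, :3]
    t = T_forward[:3, 3]
    return rigid_transform(R.T, -R.T @ t)
```

Applying the backward transform to the current frame's DDF resamples it into the reference model's coordinate frame, so the server can fuse it into the reference model before transmission.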
-
Patent number: 9265572
Abstract: The subject matter described herein includes methods, systems, and computer readable media for image guided ablation. One system for image guided ablation includes an ultrasound transducer for producing a real-time ultrasound image of a target volume and of surrounding tissue. The system further includes an ablation probe for ablating the target volume. The system further includes a display for displaying an image to guide positioning of the ablation probe during ablation of the target volume. The system further includes at least one tracker for tracking position and orientation of the ablation probe during the ablation of the target volume. The system further includes a rendering and display module for receiving a pre-ablation image of the target volume and for displaying a combined image on the display, where the combined image includes a motion tracked, rendered image of the ablation probe and an equally motion tracked real-time ultrasound image registered with the pre-ablation image.
Type: Grant
Filed: July 23, 2010
Date of Patent: February 23, 2016
Assignee: The University of North Carolina at Chapel Hill
Inventors: Henry Fuchs, Hua Yang, Tabitha Peck, Anna Bulysheva, Andrei State
-
Publication number: 20160035139
Abstract: Methods, systems, and computer readable media for low latency stabilization for head-worn displays are disclosed. According to one aspect, the subject matter described herein includes a system for low latency stabilization of a head-worn display. The system includes a low latency pose tracker having one or more rolling-shutter cameras that capture a 2D image by exposing each row of a frame at a later point in time than the previous row and that output image data row by row, and a tracking module for receiving image data row by row and using that data to generate a local appearance manifold. The generated manifold is used to track camera movements, which are used to produce a pose estimate.
Type: Application
Filed: March 13, 2014
Publication date: February 4, 2016
Inventors: Henry Fuchs, Anselmo A. Lastra, Jan-Michael Frahm, Nate Michael Dierk, David Paul Perra
-
Publication number: 20150363978
Abstract: The subject matter described herein includes systems, methods, and computer readable media for generating an augmented scene display. An exemplary method includes forming, using a display device operating in a first stage, an augmented virtual image by emitting light rays through a plurality of spatial light modulation layers included in a display device. The method also includes forming, using the display device operating in a second stage, an occluded real image by opening a shutter element of the display device to receive light rays from a real object and utilizing the plurality of spatial light modulation layers to block any light ray from the real object which coincides with the augmented virtual image. The method further includes generating an augmented scene display that includes both the occluded real image and the augmented virtual image by alternating the operation of the display device between the first stage and the second stage.
Type: Application
Filed: January 15, 2014
Publication date: December 17, 2015
Inventors: Andrew Maimone, Henry Fuchs
-
Publication number: 20150279118
Abstract: A video conference server receives a plurality of video frames including a current frame and at least one previous frame. Each of the video frames includes a corresponding image and a corresponding depth map. The server produces a directional distance function (DDF) field that represents an area surrounding a target surface of the object captured in the current frame. A forward transformation is generated that modifies the reference surface to align with the target surface. Using at least a portion of the forward transformation, a backward transformation is calculated that modifies the target surface of the current frame to align with the reference surface. The backward transformation is then applied to the DDF to generate a transformed DDF. The server updates the reference model with the transformed DDF and transmits data for the updated reference model to enable a representation of the object to be produced at a remote location.
Type: Application
Filed: March 25, 2014
Publication date: October 1, 2015
Applicant: Cisco Technology, Inc.
Inventors: Mingsong Dou, Henry Fuchs, Madhav Marathe
-
Publication number: 20150178973
Abstract: Methods, systems, and computer readable media for utilizing synthetic animatronics are disclosed. According to one aspect, a method for synthetic animatronics includes providing a display surface having different regions that accommodate different positions or deformations of a subject, mapping images of the subject to the different regions on the display surface, and displaying the mapped images on the different regions of the display surface at different times in accordance with a desired animation of the subject.
Type: Application
Filed: May 17, 2013
Publication date: June 25, 2015
Inventors: Gregory Welch, Kurtis P. Keller, Andrei State, Henry Fuchs, Ryan Edward Schubert