Patents by Inventor Tuotuo Li
Tuotuo Li has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11961201
Abstract: In one embodiment, a method includes accessing multiple 3D photos to be concurrently displayed through multiple frames positioned in a virtual space, each of the 3D photos having an optimal viewing point in the virtual space, and determining a reference point based on a head pose of a viewer within the virtual space. The method may further include adjusting each 3D photo by rotating the 3D photo so that the optimal viewing point of the 3D photo points at the reference point, translating the rotated 3D photo toward the reference point, and non-uniformly scaling the rotated and translated 3D photo based on a scaling factor determined using the reference point and a position of the frame through which the 3D photo is to be viewed. The method may further include rendering an image comprising the adjusted multiple 3D photos as seen through the multiple frames.
Type: Grant
Filed: March 7, 2022
Date of Patent: April 16, 2024
Assignee: Meta Platforms Technologies, LLC
Inventors: Johannes Peter Kopf, Xuejian Rong, Tuotuo Li, Ocean Quigley
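The per-photo adjustment this abstract describes (rotate toward a reference point, translate, then scale) can be sketched as below. The function names, the partial-translation step, and the distance-ratio scaling rule are illustrative assumptions, and uniform rather than non-uniform scaling is used for brevity; this is not the claimed method.

```python
import numpy as np

def rotation_between(a, b):
    """Rotation matrix taking unit vector a onto unit vector b (Rodrigues)."""
    v = np.cross(a, b)
    c = float(np.dot(a, b))
    if np.isclose(c, 1.0):  # already aligned
        return np.eye(3)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx * (1.0 / (1.0 + c))

def adjust_photo(points, photo_pos, optimal_dir, frame_pos, reference_point):
    """Rotate a photo's geometry so its optimal viewing direction points at the
    reference point, translate it partway toward that point, and scale it by a
    factor derived from the frame-to-reference distance (assumed rule)."""
    to_ref = reference_point - photo_pos
    dist = float(np.linalg.norm(to_ref))
    R = rotation_between(optimal_dir / np.linalg.norm(optimal_dir), to_ref / dist)
    rotated = (points - photo_pos) @ R.T + photo_pos
    translated = rotated + 0.5 * to_ref  # move partway toward the reference
    scale = float(np.linalg.norm(reference_point - frame_pos)) / dist
    return (translated - reference_point) * scale + reference_point
```

A renderer would run this per photo each frame, recomputing the reference point from the viewer's head pose.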
-
Publication number: 20230019187
Abstract: Disclosed herein are systems, apparatus, methods, and articles of manufacture to present three dimensional images without glasses. An example apparatus includes a micro lens array and at least one processor. The at least one processor is to: determine a first position of a first pupil of a viewer; determine a second position of a second pupil of the viewer; align a first eye box with the first position of the first pupil; align a second eye box with the second position of the second pupil; render, for presentation on a display, at least one of a color plus depth image or a light field image based on the first position of the first pupil and the second position of the second pupil; and cause backlight to be steered through the micro lens array and alternatingly through the first eye box and the second eye box.
Type: Application
Filed: August 26, 2022
Publication date: January 19, 2023
Inventors: Tuotuo Li, Joshua J. Ratcliff, Qiong Huang, Alexey M. Supikov, Ronald T. Azuma
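The eye-box alignment and alternating backlight steering described above can be sketched as a small control loop. The `EyeBox` type, the recentering rule, and the even/odd frame schedule are assumptions for illustration, not the patented mechanism.

```python
from dataclasses import dataclass

@dataclass
class EyeBox:
    center: tuple   # (x, y) in display coordinates
    width: float
    height: float

def align_eye_box(pupil_xy, box):
    """Recenter an eye box on the tracked pupil position (assumed rule)."""
    return EyeBox(center=pupil_xy, width=box.width, height=box.height)

def steer_backlight(frame_index, left_box, right_box):
    """Alternate the steered backlight between the two eye boxes, one per
    frame; the even/odd schedule here is an assumption."""
    return left_box if frame_index % 2 == 0 else right_box
```

Per frame, a driver would update both boxes from the eye tracker, then steer the backlight through whichever box is scheduled for that frame.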
-
Publication number: 20220383602
Abstract: In one embodiment, a method includes accessing multiple 3D photos to be concurrently displayed through multiple frames positioned in a virtual space, each of the 3D photos having an optimal viewing point in the virtual space, and determining a reference point based on a head pose of a viewer within the virtual space. The method may further include adjusting each 3D photo by rotating the 3D photo so that the optimal viewing point of the 3D photo points at the reference point, translating the rotated 3D photo toward the reference point, and non-uniformly scaling the rotated and translated 3D photo based on a scaling factor determined using the reference point and a position of the frame through which the 3D photo is to be viewed. The method may further include rendering an image comprising the adjusted multiple 3D photos as seen through the multiple frames.
Type: Application
Filed: March 7, 2022
Publication date: December 1, 2022
Inventors: Johannes Peter Kopf, Xuejian Rong, Tuotuo Li, Ocean Quigley
-
Patent number: 11483543
Abstract: An apparatus and method for hybrid rendering. For example, one embodiment of a method comprises: identifying left and right views of a user's eyes; generating at least one depth map for the left and right views; calculating depth clamping thresholds including a minimum depth value and a maximum depth value; transforming the depth map in accordance with the minimum depth value and maximum depth value; and performing view synthesis to render left and right views using the transformed depth map.
Type: Grant
Filed: April 7, 2020
Date of Patent: October 25, 2022
Assignee: Intel Corporation
Inventors: Joshua J. Ratcliff, Tuotuo Li
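The clamp-and-transform step in this abstract can be sketched as follows; using percentiles for the clamping thresholds and remapping to [0, 1] are assumptions made for illustration, not the patented calculation.

```python
import numpy as np

def clamp_and_normalize_depth(depth_map, low_pct=1.0, high_pct=99.0):
    """Pick min/max depth thresholds (percentiles here, an assumed choice),
    clamp the depth map to that range, and remap it to [0, 1]."""
    d_min = np.percentile(depth_map, low_pct)
    d_max = np.percentile(depth_map, high_pct)
    clamped = np.clip(depth_map, d_min, d_max)
    return (clamped - d_min) / max(d_max - d_min, 1e-9)
```

Clamping outliers before view synthesis keeps a few extreme depth samples from flattening the useful dynamic range of the transformed map.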
-
Patent number: 11438566
Abstract: Disclosed herein are systems, apparatus, methods, and articles of manufacture to present three dimensional images without glasses. An example apparatus includes a micro lens array and at least one processor. The at least one processor is to: determine a first position of a first pupil of a viewer; determine a second position of a second pupil of the viewer; align a first eye box with the first position of the first pupil; align a second eye box with the second position of the second pupil; render, for presentation on a display, at least one of a color plus depth image or a light field image based on the first position of the first pupil and the second position of the second pupil; and cause backlight to be steered through the micro lens array and alternatingly through the first eye box and the second eye box.
Type: Grant
Filed: February 1, 2021
Date of Patent: September 6, 2022
Assignee: Intel Corporation
Inventors: Tuotuo Li, Joshua J. Ratcliff, Qiong Huang, Alexey M. Supikov, Ronald T. Azuma
-
Publication number: 20220007007
Abstract: Disclosed herein are systems and methods for machine vision. A machine vision system includes a motion rendering device, a first image sensor, and a second image sensor. The machine vision system includes a processor configured to run a computer program stored in memory that is configured to determine a first transformation that allows mapping between the first coordinate system associated with the motion rendering device and the second coordinate system associated with the first image sensor, and to determine a second transformation that allows mapping between the first coordinate system associated with the motion rendering device and the third coordinate system associated with the second image sensor.
Type: Application
Filed: July 9, 2021
Publication date: January 6, 2022
Applicant: Cognex Corporation
Inventors: Tuotuo Li, Lifeng Liu, Cyril C. Marrion
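A stage-to-camera transformation of the kind described above is commonly estimated as a least-squares rigid fit between matched point sets (Kabsch/Procrustes). The sketch below shows that standard technique as a stand-in; the patent's actual calibration procedure is not reproduced here.

```python
import numpy as np

def fit_rigid_2d(src, dst):
    """Least-squares 2D rigid transform (rotation R, translation t) mapping
    src points onto dst points, via the Kabsch algorithm."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Run once per sensor (stage points vs. that sensor's observed points), this yields the two transformations the abstract names: stage-to-camera-1 and stage-to-camera-2.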
-
Patent number: 11070793
Abstract: Disclosed herein are systems and methods for machine vision. A machine vision system includes a motion rendering device, a first image sensor, and a second image sensor. The machine vision system includes a processor configured to run a computer program stored in memory that is configured to determine a first transformation that allows mapping between the first coordinate system associated with the motion rendering device and the second coordinate system associated with the first image sensor, and to determine a second transformation that allows mapping between the first coordinate system associated with the motion rendering device and the third coordinate system associated with the second image sensor.
Type: Grant
Filed: July 27, 2016
Date of Patent: July 20, 2021
Assignee: Cognex Corporation
Inventors: Tuotuo Li, Lifeng Liu, Cyril C. Marrion
-
Patent number: 11049280
Abstract: This invention provides a system and method that ties the coordinate spaces at the two locations together during calibration time using features on a runtime workpiece instead of a calibration target. Three possible scenarios are contemplated: wherein the same workpiece features are imaged and identified at both locations; wherein the imaged features of the runtime workpiece differ at each location (with a CAD or measured workpiece rendition available); and wherein the first location containing a motion stage has been calibrated to the motion stage using hand-eye calibration and the second location is hand-eye calibrated to the same motion stage by transferring the runtime part back and forth between locations. Illustratively, the quality of the first two techniques can be improved by running multiple runtime workpieces each with a different pose, extracting and accumulating such features at each location; and then using the accumulated features to tie the two coordinate spaces.
Type: Grant
Filed: May 13, 2019
Date of Patent: June 29, 2021
Assignee: Cognex Corporation
Inventors: Guruprasad Shivaram, Cyril C. Marrion, Jr., Lifeng Liu, Tuotuo Li
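The accumulation idea in the last sentence of the abstract can be sketched as follows: gather matched feature points from several runtime workpieces, each in a different pose, then fit one transform tying the two locations' coordinate spaces. The affine least-squares solver and the `runs` data layout are assumptions for illustration, not the product's actual method.

```python
import numpy as np

def accumulate_and_fit(runs):
    """Stack feature correspondences from multiple runtime workpieces and fit
    a single 2D affine transform from location 1's space to location 2's.
    `runs` is a list of (points_at_location1, points_at_location2) pairs."""
    src = np.vstack([a for a, _ in runs])
    dst = np.vstack([b for _, b in runs])
    A = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coordinates
    X, *_ = np.linalg.lstsq(A, dst, rcond=None)    # 3x2 solution [A | t]^T
    return X.T                                     # 2x3 matrix [A | t]
```

Because all poses contribute to one overdetermined system, adding workpieces in varied poses conditions the fit better than a single presentation would.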
-
Publication number: 20210152804
Abstract: Disclosed herein are systems, apparatus, methods, and articles of manufacture to present three dimensional images without glasses. An example apparatus includes a micro lens array and at least one processor. The at least one processor is to: determine a first position of a first pupil of a viewer; determine a second position of a second pupil of the viewer; align a first eye box with the first position of the first pupil; align a second eye box with the second position of the second pupil; render, for presentation on a display, at least one of a color plus depth image or a light field image based on the first position of the first pupil and the second position of the second pupil; and cause backlight to be steered through the micro lens array and alternatingly through the first eye box and the second eye box.
Type: Application
Filed: February 1, 2021
Publication date: May 20, 2021
Inventors: Tuotuo Li, Joshua J. Ratcliff, Qiong Huang, Alexey M. Supikov, Ronald T. Azuma
-
Patent number: 10939085
Abstract: In some examples, a three dimensional display system includes a display (for example, a display screen or a display panel), a micro lens array, and an eye tracker to track one or more eyes of a person and to provide eye location information. The display system also includes a rendering processor to render or capture color plus depth images (for example, RGB-D images) or light field images. The display system also includes a light field processor to use the eye location information to convert the rendered color plus depth images or light field images to display images to be provided to the display.
Type: Grant
Filed: October 24, 2019
Date of Patent: March 2, 2021
Assignee: Intel Corporation
Inventors: Tuotuo Li, Joshua J. Ratcliff, Qiong Huang, Alexey M. Supikov, Ronald T. Azuma
-
Publication number: 20200326814
Abstract: In one embodiment, the present disclosure pertains to human-computer interfaces. In one embodiment, a method includes an operation for detecting an optical signal emitted from an object on a surface. The optical signal forms a geometric pattern on the surface. The method also includes an operation for determining a three-dimensional position of the object relative to the surface based on the geometric pattern. In some embodiments, a plurality of angles and distances are determined from the geometric pattern. The angles and distances correspond to geometric shapes formed between the object and the surface as defined by the geometric pattern. The three-dimensional position may be determined based on the angles and distances.
Type: Application
Filed: December 5, 2019
Publication date: October 15, 2020
Inventor: Tuotuo Li
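One way such pattern geometry can yield a 3D position: if the object emits a cone of light with a known half-angle, the ellipse it projects on the surface encodes tilt and hover height. The sketch below is a deliberately simplified, hypothetical geometry, not the pattern the application actually claims.

```python
import math

def object_pose_from_pattern(major, minor, half_angle_deg):
    """From the projected ellipse's major/minor axes and the emitter cone's
    known half-angle, recover an approximate tilt (degrees) and hover height.
    First-order geometry; assumed for illustration only."""
    # Tilt of the cone axis relative to the surface normal, from eccentricity
    tilt = math.acos(min(minor / major, 1.0))
    # Height from the minor axis, which tilt leaves unchanged to first order
    height = (minor / 2.0) / math.tan(math.radians(half_angle_deg))
    return math.degrees(tilt), height
```

An untilted emitter projects a circle (major == minor), giving zero tilt and a height set purely by the circle's radius and the cone angle.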
-
Publication number: 20200304776
Abstract: An apparatus and method for hybrid rendering. For example, one embodiment of a method comprises: identifying left and right views of a user's eyes; generating at least one depth map for the left and right views; calculating depth clamping thresholds including a minimum depth value and a maximum depth value; transforming the depth map in accordance with the minimum depth value and maximum depth value; and performing view synthesis to render left and right views using the transformed depth map.
Type: Application
Filed: April 7, 2020
Publication date: September 24, 2020
Inventors: Joshua J. Ratcliff, Tuotuo Li
-
Publication number: 20200204781
Abstract: In some examples, a three dimensional display system includes a display (for example, a display screen or a display panel), a micro lens array, and an eye tracker to track one or more eyes of a person and to provide eye location information. The display system also includes a rendering processor to render or capture color plus depth images (for example, RGB-D images) or light field images. The display system also includes a light field processor to use the eye location information to convert the rendered color plus depth images or light field images to display images to be provided to the display.
Type: Application
Filed: October 24, 2019
Publication date: June 25, 2020
Applicant: Intel Corporation
Inventors: Tuotuo Li, Joshua J. Ratcliff, Qiong Huang, Alexey M. Supikov, Ronald T. Azuma
-
Patent number: 10623723
Abstract: An apparatus and method for hybrid rendering. For example, one embodiment of a method comprises: identifying left and right views of a user's eyes; generating at least one depth map for the left and right views; calculating depth clamping thresholds including a minimum depth value and a maximum depth value; transforming the depth map in accordance with the minimum depth value and maximum depth value; and performing view synthesis to render left and right views using the transformed depth map.
Type: Grant
Filed: September 29, 2016
Date of Patent: April 14, 2020
Assignee: Intel Corporation
Inventors: Joshua J. Ratcliff, Tuotuo Li
-
Publication number: 20200065995
Abstract: This invention provides a system and method that ties the coordinate spaces at the two locations together during calibration time using features on a runtime workpiece instead of a calibration target. Three possible scenarios are contemplated: wherein the same workpiece features are imaged and identified at both locations; wherein the imaged features of the runtime workpiece differ at each location (with a CAD or measured workpiece rendition available); and wherein the first location containing a motion stage has been calibrated to the motion stage using hand-eye calibration and the second location is hand-eye calibrated to the same motion stage by transferring the runtime part back and forth between locations. Illustratively, the quality of the first two techniques can be improved by running multiple runtime workpieces each with a different pose, extracting and accumulating such features at each location; and then using the accumulated features to tie the two coordinate spaces.
Type: Application
Filed: May 13, 2019
Publication date: February 27, 2020
Inventors: Guruprasad Shivaram, Cyril C. Marrion, Jr., Lifeng Liu, Tuotuo Li
-
Publication number: 20190146223
Abstract: Disclosed herein is a head mounted display including a projector to project multiple virtual images at different depth planes and an optical element to reflect the projected images to a viewpoint. A user can perceive the multiple virtual images at the different depth planes, realizing a large field-of-view display without vergence-accommodation conflict. The optical element can also transmit a view of the real world to provide an augmented reality experience.
Type: Application
Filed: December 21, 2018
Publication date: May 16, 2019
Inventor: Tuotuo Li
-
Patent number: 10290118
Abstract: This invention provides a system and method that ties the coordinate spaces at the two locations together during calibration time using features on a runtime workpiece instead of a calibration target. Three possible scenarios are contemplated: wherein the same workpiece features are imaged and identified at both locations; wherein the imaged features of the runtime workpiece differ at each location (with a CAD or measured workpiece rendition available); and wherein the first location containing a motion stage has been calibrated to the motion stage using hand-eye calibration and the second location is hand-eye calibrated to the same motion stage by transferring the runtime part back and forth between locations. Illustratively, the quality of the first two techniques can be improved by running multiple runtime workpieces each with a different pose, extracting and accumulating such features at each location; and then using the accumulated features to tie the two coordinate spaces.
Type: Grant
Filed: July 29, 2016
Date of Patent: May 14, 2019
Assignee: Cognex Corporation
Inventors: Guruprasad Shivaram, Cyril C. Marrion, Jr., Lifeng Liu, Tuotuo Li
-
Publication number: 20190124313
Abstract: In some examples, a three dimensional display system includes a display (for example, a display screen or a display panel), a micro lens array, and an eye tracker to track one or more eyes of a person and to provide eye location information. The display system also includes a rendering processor to render or capture color plus depth images (for example, RGB-D images) or light field images. The display system also includes a light field processor to use the eye location information to convert the rendered color plus depth images or light field images to display images to be provided to the display.
Type: Application
Filed: October 19, 2017
Publication date: April 25, 2019
Applicant: Intel Corporation
Inventors: Tuotuo Li, Joshua J. Ratcliff, Qiong Huang, Alexey M. Supikov, Ronald T. Azuma
-
Publication number: 20180091800
Abstract: An apparatus and method for hybrid rendering. For example, one embodiment of a method comprises: identifying left and right views of a user's eyes; generating at least one depth map for the left and right views; calculating depth clamping thresholds including a minimum depth value and a maximum depth value; transforming the depth map in accordance with the minimum depth value and maximum depth value; and performing view synthesis to render left and right views using the transformed depth map.
Type: Application
Filed: September 29, 2016
Publication date: March 29, 2018
Inventors: Joshua J. Ratcliff, Tuotuo Li
-
Publication number: 20170132807
Abstract: This invention provides a system and method that ties the coordinate spaces at the two locations together during calibration time using features on a runtime workpiece instead of a calibration target. Three possible scenarios are contemplated: wherein the same workpiece features are imaged and identified at both locations; wherein the imaged features of the runtime workpiece differ at each location (with a CAD or measured workpiece rendition available); and wherein the first location containing a motion stage has been calibrated to the motion stage using hand-eye calibration and the second location is hand-eye calibrated to the same motion stage by transferring the runtime part back and forth between locations. Illustratively, the quality of the first two techniques can be improved by running multiple runtime workpieces each with a different pose, extracting and accumulating such features at each location; and then using the accumulated features to tie the two coordinate spaces.
Type: Application
Filed: July 29, 2016
Publication date: May 11, 2017
Inventors: Guruprasad Shivaram, Cyril C. Marrion, Jr., Lifeng Liu, Tuotuo Li