Patents by Inventor Linjie LUO
Linjie LUO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10949648
Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and a method for accessing a set of images depicting at least a portion of a face. A set of facial regions of the face is identified, each facial region of the set of facial regions intersecting another facial region with at least one common vertex which is a member of a set of facial vertices. For each facial region of the set of facial regions, a weight formed from a set of region coefficients is generated. Based on the set of facial regions and the weight of each facial region of the set of facial regions, the face is tracked across the set of images.
Type: Grant
Filed: October 25, 2018
Date of Patent: March 16, 2021
Assignee: Snap Inc.
Inventors: Chen Cao, Menglei Chai, Linjie Luo, Oliver Woodford
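The weighted-region blend the abstract describes can be sketched in a few lines. This is a minimal NumPy illustration, not the patented method: the softmax-over-coefficient-norms weighting and the helper names (`region_weights`, `track_vertices`) are assumptions made for the example.

```python
import numpy as np

def region_weights(coeffs):
    """Collapse each region's coefficient vector into a scalar weight
    (softmax over the coefficient norms)."""
    norms = np.array([np.linalg.norm(c) for c in coeffs])
    e = np.exp(norms - norms.max())
    return e / e.sum()

def track_vertices(region_vertex_ids, region_estimates, coeffs, n_vertices):
    """Blend per-region vertex estimates into a single tracked face.

    region_vertex_ids: one index array per region (regions overlap at
                       shared boundary vertices).
    region_estimates:  one (len(ids), 3) array per region with that
                       region's estimate of its vertices for this frame.
    """
    w = region_weights(coeffs)
    acc = np.zeros((n_vertices, 3))
    tot = np.zeros((n_vertices, 1))
    for ids, est, wi in zip(region_vertex_ids, region_estimates, w):
        acc[ids] += wi * est   # weighted contribution to shared vertices
        tot[ids] += wi
    return acc / np.maximum(tot, 1e-8)
```

Vertices shared by several regions end up as a weight-normalized average of the competing regional estimates, which is the role the common vertices play in the abstract.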
-
Publication number: 20210037179
Abstract: A dolly zoom effect can be applied to one or more images captured via a resource-constrained device (e.g., a mobile smartphone) by manipulating the size of a target feature while the background in the one or more images changes due to physical movement of the resource-constrained device. The target feature can be detected using facial recognition or shape detection techniques. The target feature can be resized before the size is manipulated as the background changes (e.g., changes perspective).
Type: Application
Filed: July 20, 2020
Publication date: February 4, 2021
Inventors: Linjie Luo, Chongyang Ma, Zehao Xue
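As a rough illustration of the effect, the sketch below digitally crops and rescales each frame so a detected target keeps its reference height while the physically moving camera changes the background perspective. The nearest-neighbour resize and the assumption that the device moves away from the subject (so `zoom >= 1`) are simplifications for the example, not details from the publication.

```python
import numpy as np

def dolly_zoom_frame(frame, target_h_px, ref_h_px):
    """Digitally zoom so the detected target keeps its reference height."""
    zoom = ref_h_px / target_h_px   # >1 once the device has moved away
    h, w = frame.shape[:2]
    # crop window shrinks as the target shrinks (clamped: we cannot
    # widen beyond the captured frame)
    ch = max(1, min(h, int(round(h / zoom))))
    cw = max(1, min(w, int(round(w / zoom))))
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = frame[y0:y0 + ch, x0:x0 + cw]
    # nearest-neighbour resize back to full resolution (keeps this NumPy-only)
    yi = (np.arange(h) * ch // h).astype(int)
    xi = (np.arange(w) * cw // w).astype(int)
    return crop[np.ix_(yi, xi)]
```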
-
Publication number: 20200410773
Abstract: Systems and methods for local augmented reality (AR) tracking of an AR object are disclosed. In one example embodiment, a device captures a series of video image frames. A user input is received at the device associating a first portion of a first image of the video image frames with an AR sticker object and a target. A first target template is generated to track the target across frames of the video image frames. In some embodiments, global tracking based on a determination that the target is outside a boundary area is used. The global tracking comprises using a global tracking template for tracking movement in the video image frames captured following the determination that the target is outside the boundary area. When the global tracking determines that the target is within the boundary area, local tracking is resumed along with presentation of the AR sticker object on an output display of the device.
Type: Application
Filed: July 13, 2020
Publication date: December 31, 2020
Inventors: Jia Li, Linjie Luo, Rahul Bhupendra Sheth, Ning Xu, Jianchao Yang
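A toy version of the local/global mode switch can be written with plain template matching. The SSD matcher, the fixed search radius, and the reuse of a single template for both modes are assumptions made for this sketch; the disclosed system uses a separate global tracking template.

```python
import numpy as np

def ssd_match(frame, template, box=None):
    """Best top-left placement of template in frame (sum of squared
    differences); assumes float grayscale arrays."""
    fh, fw = frame.shape
    th, tw = template.shape
    y0, x0, y1, x1 = box if box else (0, 0, fh - th, fw - tw)
    best, pos = np.inf, (max(0, y0), max(0, x0))
    for y in range(max(0, y0), min(y1, fh - th) + 1):
        for x in range(max(0, x0), min(x1, fw - tw) + 1):
            d = frame[y:y + th, x:x + tw] - template
            s = float(np.sum(d * d))
            if s < best:
                best, pos = s, (y, x)
    return pos

def track(frames, template, boundary, search_r=16):
    """Yield (top_left, is_local) per frame. Search locally near the last
    position while the target centre stays inside `boundary`
    (y0, x0, y1, x1); fall back to a whole-frame search otherwise."""
    th, tw = template.shape
    pos, local = (boundary[0], boundary[1]), True
    for frame in frames:
        if local:
            y, x = pos
            pos = ssd_match(frame, template,
                            (y - search_r, x - search_r,
                             y + search_r, x + search_r))
        else:
            pos = ssd_match(frame, template)   # global pass over the frame
        cy, cx = pos[0] + th // 2, pos[1] + tw // 2
        local = boundary[0] <= cy < boundary[2] and boundary[1] <= cx < boundary[3]
        yield pos, local
```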
-
Patent number: 10861170
Abstract: Systems, devices, media and methods are presented for a human pose tracking framework. The human pose tracking framework may identify a message with video frames and generate, using a composite convolutional neural network, joint data representing joint locations of a human depicted in the video frames. The composite convolutional neural network generates the joint data with a deep convolutional neural network operating on one portion of the video frames and a shallow convolutional neural network operating on another portion of the video frames, and tracks the joint locations using a one-shot learner neural network that is trained to track the joint locations based on a concatenation of feature maps and a convolutional pose machine. The human pose tracking framework may store the joint locations and cause presentation of a rendition of the joint locations on a user interface of a client device.
Type: Grant
Filed: November 30, 2018
Date of Patent: December 8, 2020
Assignee: Snap Inc.
Inventors: Yuncheng Li, Linjie Luo, Xuecheng Nie, Ning Zhang
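The deep/shallow split lends itself to a simple scheduling sketch: refresh the joints with the expensive network on keyframes and propagate them cheaply in between. The networks and the tracker are passed in as callables; the keyframe interval and this exact routing are assumptions for illustration, not the patented pipeline.

```python
def track_pose(frames, deep_net, shallow_net, one_shot_tracker, key_every=8):
    """Run the expensive deep network on keyframes only; in between, a
    cheap shallow network supplies features and a one-shot tracker
    propagates the previous joint locations."""
    joints = None
    for i, frame in enumerate(frames):
        if joints is None or i % key_every == 0:
            joints = deep_net(frame)   # slow, accurate refresh
        else:
            # fast update from shallow features and the last joint estimate
            joints = one_shot_tracker(shallow_net(frame), joints)
        yield joints
```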
-
Publication number: 20200327738
Abstract: Systems and methods for image based location estimation are described. In one example embodiment, a first positioning system is used to generate a first position estimate. A set of structure façade data describing one or more structure façades associated with the first position estimate is then accessed. A first image of an environment is captured, and a portion of the image is matched to part of the structure façade data. A second position is then estimated based on a comparison of the structure façade data with the portion of the image matched to the structure façade data.
Type: Application
Filed: June 26, 2020
Publication date: October 15, 2020
Inventors: Nathan Jurgenson, Linjie Luo, Jonathan M. Rodriguez, II, Rahul Sheth, Jia Li, Xutao Lv
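In spirit, the refinement step is a nearest-descriptor match followed by a position fusion. The sketch below assumes precomputed façade descriptors paired with surveyed positions, plus an arbitrary `trust` weight; the actual matching and estimation in the disclosure are more involved.

```python
import numpy as np

def refine_position(gps_fix, image_descriptor, facades, trust=0.8):
    """facades: (descriptor, surveyed_position) pairs near the GPS fix.
    Match the captured image's descriptor to the closest façade and fuse
    that façade's surveyed position with the coarse first estimate."""
    desc, pos = min(facades,
                    key=lambda f: np.linalg.norm(f[0] - image_descriptor))
    return (1.0 - trust) * np.asarray(gps_fix, float) \
        + trust * np.asarray(pos, float)
```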
-
Patent number: 10757319
Abstract: A dolly zoom effect can be applied to one or more images captured via a resource-constrained device (e.g., a mobile smartphone) by manipulating the size of a target feature while the background in the one or more images changes due to physical movement of the resource-constrained device. The target feature can be detected using facial recognition or shape detection techniques. The target feature can be resized before the size is manipulated as the background changes (e.g., changes perspective).
Type: Grant
Filed: June 15, 2017
Date of Patent: August 25, 2020
Assignee: Snap Inc.
Inventors: Linjie Luo, Chongyang Ma, Zehao Xue
-
Patent number: 10748347
Abstract: Systems and methods for local augmented reality (AR) tracking of an AR object are disclosed. In one example embodiment, a device captures a series of video image frames. A user input is received at the device associating a first portion of a first image of the video image frames with an AR sticker object and a target. A first target template is generated to track the target across frames of the video image frames. In some embodiments, global tracking based on a determination that the target is outside a boundary area is used. The global tracking comprises using a global tracking template for tracking movement in the video image frames captured following the determination that the target is outside the boundary area. When the global tracking determines that the target is within the boundary area, local tracking is resumed along with presentation of the AR sticker object on an output display of the device.
Type: Grant
Filed: July 25, 2018
Date of Patent: August 18, 2020
Assignee: Snap Inc.
Inventors: Jia Li, Linjie Luo, Rahul Bhupendra Sheth, Ning Xu, Jianchao Yang
-
Patent number: 10733802
Abstract: Systems and methods for image based location estimation are described. In one example embodiment, a first positioning system is used to generate a first position estimate. A set of structure façade data describing one or more structure façades associated with the first position estimate is then accessed. A first image of an environment is captured, and a portion of the image is matched to part of the structure façade data. A second position is then estimated based on a comparison of the structure façade data with the portion of the image matched to the structure façade data.
Type: Grant
Filed: June 11, 2019
Date of Patent: August 4, 2020
Assignee: Snap Inc.
Inventors: Nathan Jurgenson, Linjie Luo, Jonathan M. Rodriguez, II, Rahul Sheth, Jia Li, Xutao Lv
-
Publication number: 20200219312
Abstract: Systems and methods for image based location estimation are described. In one example embodiment, a first positioning system is used to generate a first position estimate. Point cloud data describing an environment is then accessed. A two-dimensional surface of an image of an environment is captured, and a portion of the image is matched to a portion of key points in the point cloud data. An augmented reality object is then aligned within one or more images of the environment based on the match of the point cloud with the image. In some embodiments, building façade data may additionally be used to determine a device location and place the augmented reality object within an image.
Type: Application
Filed: March 19, 2020
Publication date: July 9, 2020
Inventors: Nathan Jurgenson, Linjie Luo, Jonathan M. Rodriguez, II, Rahul Sheth, Jia Li, Xutao Lv
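Once 2D key points have been matched to the point cloud and a camera pose has been recovered (e.g., with a PnP solver, which this sketch assumes has already run), placing the AR object reduces to projecting its anchor into the frame. The pinhole model below is standard; the pose and intrinsics in the usage lines are made-up values for illustration.

```python
import numpy as np

def project(points_3d, R, t, f, cx, cy):
    """Pinhole projection of world-space points into pixel coordinates,
    given rotation R (3x3), translation t (3,), focal length f, and
    principal point (cx, cy)."""
    cam = points_3d @ R.T + t              # world -> camera coordinates
    u = f * cam[:, 0] / cam[:, 2] + cx
    v = f * cam[:, 1] / cam[:, 2] + cy
    return np.stack([u, v], axis=1)

# Drop an AR anchor at a matched key point once the pose is known.
anchor = np.array([[2.0, 0.5, 6.0]])       # hypothetical world point
uv = project(anchor, np.eye(3), np.zeros(3), f=800.0, cx=320.0, cy=240.0)
```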
-
Patent number: 10657708
Abstract: Systems and methods for image based location estimation are described. In one example embodiment, a first positioning system is used to generate a first position estimate. 3D point cloud data describing an environment is then accessed. A first image of an environment is captured, and a portion of the image is matched to a portion of key points in the 3D point cloud data. An augmented reality object is then aligned within one or more images of the environment based on the match of the 3D point cloud with the image. In some embodiments, building façade data may additionally be used to determine a device location and place the augmented reality object within an image.
Type: Grant
Filed: May 4, 2018
Date of Patent: May 19, 2020
Assignee: Snap Inc.
Inventors: Nathan Jurgenson, Linjie Luo, Jonathan M. Rodriguez, II, Rahul Sheth, Jia Li, Xutao Lv
-
Publication number: 20200043145
Abstract: Systems, devices, media, and methods are presented for generating texture models for objects within a video stream. The systems and methods access a set of images as the set of images is being captured at a computing device. The systems and methods determine, within a portion of the set of images, an area of interest containing an eye and extract an iris area from the area of interest. The systems and methods segment a sclera area within the area of interest and generate a texture for the eye based on the iris area and the sclera area.
Type: Application
Filed: July 31, 2018
Publication date: February 6, 2020
Inventors: Chen Cao, Wen Zhang, Menglei Chai, Linjie Luo
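A toy version of the iris/sclera split can be done with intensity thresholds. The thresholds, the centred-disc iris, and the flat mean-colour texture are assumptions for this sketch; the disclosure's segmentation and texture generation are considerably richer.

```python
import numpy as np

def eye_texture(eye_gray):
    """eye_gray: float image of the eye area in [0, 1]. Segment sclera
    (bright) and iris (dark) by threshold, then paint a flat texture:
    sclera colour everywhere, a centred iris disc on top."""
    iris = eye_gray < 0.35
    sclera = eye_gray > 0.70
    tex = np.full_like(eye_gray,
                       eye_gray[sclera].mean() if sclera.any() else 0.9)
    h, w = eye_gray.shape
    yy, xx = np.mgrid[:h, :w]
    r = min(h, w) // 4
    disc = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= r * r
    tex[disc] = eye_gray[iris].mean() if iris.any() else 0.2
    return tex
```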
-
Patent number: 10552968
Abstract: Dense feature scale detection can be implemented using multiple convolutional neural networks trained on scale data to more accurately and efficiently match pixels between images. An input image can be used to generate multiple scaled images. The multiple scaled images are input into a feature net, which outputs feature data for the multiple scaled images. An attention net is used to generate an attention map from the input image. The attention map assigns emphasis as a soft distribution to different scales based on texture analysis. The feature data and the attention data can be combined through a multiplication process and then summed to generate dense features for comparison.
Type: Grant
Filed: September 22, 2017
Date of Patent: February 4, 2020
Assignee: Snap Inc.
Inventors: Shenlong Wang, Linjie Luo, Ning Zhang, Jia Li
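The fusion step maps cleanly to array operations: softmax the attention logits over the scale axis, multiply into the per-scale feature maps, and sum. The tensor shapes below are assumptions made for the example, not the networks' actual outputs.

```python
import numpy as np

def fuse_scales(features, attention_logits):
    """features: (S, H, W, C) feature maps from S scaled copies of the
    input; attention_logits: (S, H, W) per-pixel emphasis over scales.
    Returns (H, W, C) scale-fused dense features."""
    a = np.exp(attention_logits - attention_logits.max(axis=0, keepdims=True))
    a = a / a.sum(axis=0, keepdims=True)   # soft distribution over scales
    return (features * a[..., None]).sum(axis=0)
```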
-
Publication number: 20190295326
Abstract: Systems and methods for image based location estimation are described. In one example embodiment, a first positioning system is used to generate a first position estimate. A set of structure façade data describing one or more structure façades associated with the first position estimate is then accessed. A first image of an environment is captured, and a portion of the image is matched to part of the structure façade data. A second position is then estimated based on a comparison of the structure façade data with the portion of the image matched to the structure façade data.
Type: Application
Filed: June 11, 2019
Publication date: September 26, 2019
Inventors: Nathan Jurgenson, Linjie Luo, Jonathan M. Rodriguez, II, Rahul Sheth, Jia Li, Xutao Lv
-
Patent number: 10366543
Abstract: Systems and methods for image based location estimation are described. In one example embodiment, a first positioning system is used to generate a first position estimate. A set of structure façade data describing one or more structure façades associated with the first position estimate is then accessed. A first image of an environment is captured, and a portion of the image is matched to part of the structure façade data. A second position is then estimated based on a comparison of the structure façade data with the portion of the image matched to the structure façade data.
Type: Grant
Filed: September 20, 2018
Date of Patent: July 30, 2019
Assignee: Snap Inc.
Inventors: Nathan Jurgenson, Linjie Luo, Jonathan M. Rodriguez, II, Rahul Sheth, Jia Li, Xutao Lv
-
Patent number: 10102680
Abstract: Systems and methods for image based location estimation are described. In one example embodiment, a first positioning system is used to generate a first position estimate. A set of structure façade data describing one or more structure façades associated with the first position estimate is then accessed. A first image of an environment is captured, and a portion of the image is matched to part of the structure façade data. A second position is then estimated based on a comparison of the structure façade data with the portion of the image matched to the structure façade data.
Type: Grant
Filed: December 4, 2017
Date of Patent: October 16, 2018
Assignee: Snap Inc.
Inventors: Nathan Jurgenson, Linjie Luo, Jonathan M. Rodriguez, II, Rahul Bhupendra Sheth, Jia Li, Xutao Lv
-
Patent number: 10055895
Abstract: Systems and methods for local augmented reality (AR) tracking of an AR object are disclosed. In one example embodiment, a device captures a series of video image frames. A user input is received at the device associating a first portion of a first image of the video image frames with an AR sticker object and a target. A first target template is generated to track the target across frames of the video image frames. In some embodiments, global tracking based on a determination that the target is outside a boundary area is used. The global tracking comprises using a global tracking template for tracking movement in the video image frames captured following the determination that the target is outside the boundary area. When the global tracking determines that the target is within the boundary area, local tracking is resumed along with presentation of the AR sticker object on an output display of the device.
Type: Grant
Filed: January 29, 2016
Date of Patent: August 21, 2018
Assignee: Snap Inc.
Inventors: Jia Li, Linjie Luo, Rahul Sheth, Ning Xu, Jianchao Yang
-
Patent number: 10025882
Abstract: The disclosure provides a technique for recursively partitioning a 3D model of an object into two or more components such that each component fits within a predefined printing volume. The technique includes determining a set of planar cuts, each of which partitions the 3D model into at least two components, evaluating one or more objective functions for each cut in the set of planar cuts, and selecting a cut from the set of planar cuts based on the evaluations of the objective functions. In addition, the technique includes, upon determining that a component resulting from the selected cut does not fit within the predefined printing volume, further partitioning that component.
Type: Grant
Filed: August 14, 2012
Date of Patent: July 17, 2018
Assignee: Disney Enterprises, Inc.
Inventors: Ilya Baran, Linjie Luo, Wojciech Matusik
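The recursive structure is easy to see if meshes are stood in for by their bounding-box extents. In this sketch the only "objective function" is bisecting the longest axis, and the fit test ignores packing; both are simplifications of the disclosed technique, which scores candidate planar cuts against several objectives.

```python
def partition(extents, print_volume):
    """extents / print_volume: (dx, dy, dz) box sizes. Recursively bisect
    the longest axis until every piece fits the printer; the sorted()
    comparison lets a piece be reoriented to fit."""
    if all(e <= v for e, v in zip(sorted(extents), sorted(print_volume))):
        return [extents]                       # this piece fits as-is
    axis = max(range(3), key=lambda i: extents[i])
    half = list(extents)
    half[axis] = extents[axis] / 2.0           # one planar cut -> two halves
    halves = partition(tuple(half), print_volume)
    return halves + halves

# Example: a 30 x 10 x 8 cm model on a 12 cm-cube printer -> 4 pieces.
print(partition((30.0, 10.0, 8.0), (12.0, 12.0, 12.0)))
```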
-
Patent number: 10019840
Abstract: One embodiment involves receiving a fine mesh as input, the fine mesh representing a 3-Dimensional (3D) model and comprising fine mesh polygons. The embodiment further involves identifying, based on the fine mesh, near-planar regions represented by a coarse mesh of coarse mesh polygons, at least one of the near-planar regions corresponding to a plurality of the coarse mesh polygons. The embodiment further involves determining a deformation to deform the coarse mesh based on comparing normals between adjacent coarse mesh polygons. The deformation may involve reducing a first angle between coarse mesh polygons adjacent to one another in a same near-planar region. The deformation may additionally or alternatively involve increasing an angle between coarse mesh polygons adjacent to one another in different near-planar regions. The fine mesh can be deformed using the determined deformation.
Type: Grant
Filed: September 20, 2016
Date of Patent: July 10, 2018
Assignee: Adobe Systems Incorporated
Inventors: Daniel Robert Goldman, Jan Jachnik, Linjie Luo
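One way to read the deformation is as minimizing an energy over dihedral angles: small angles between neighbours inside a near-planar region, sharper creases across region boundaries. The sketch below scores such an energy; the 30° crease target and the penalty shape are assumptions for illustration, not values from the patent.

```python
import numpy as np

def dihedral(n1, n2):
    """Angle between two face normals, in radians."""
    c = np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

def planarity_energy(normals, adjacency, same_region, crease=np.radians(30)):
    """normals: list of coarse-polygon normals; adjacency: (i, j) pairs of
    adjacent coarse polygons; same_region[(i, j)]: True if both polygons
    lie in one near-planar region. Penalise angles inside a region and
    penalise angles below `crease` across region boundaries."""
    e = 0.0
    for i, j in adjacency:
        a = dihedral(normals[i], normals[j])
        e += a if same_region[(i, j)] else max(0.0, crease - a)
    return e
```

A deformation that lowers this energy flattens each near-planar region while sharpening the edges between regions, which matches the two angle adjustments the abstract describes.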
-
Patent number: 9984499
Abstract: Systems and methods for image based location estimation are described. In one example embodiment, a first positioning system is used to generate a first position estimate. 3D point cloud data describing an environment is then accessed. A first image of an environment is captured, and a portion of the image is matched to a portion of key points in the 3D point cloud data. An augmented reality object is then aligned within one or more images of the environment based on the match of the 3D point cloud with the image. In some embodiments, building façade data may additionally be used to determine a device location and place the augmented reality object within an image.
Type: Grant
Filed: November 30, 2015
Date of Patent: May 29, 2018
Assignee: Snap Inc.
Inventors: Nathan Jurgenson, Linjie Luo, Jonathan M. Rodriguez, II, Rahul Sheth, Jia Li, Xutao Lv
-
Publication number: 20180089904
Abstract: Systems and methods for image based location estimation are described. In one example embodiment, a first positioning system is used to generate a first position estimate. A set of structure façade data describing one or more structure façades associated with the first position estimate is then accessed. A first image of an environment is captured, and a portion of the image is matched to part of the structure façade data. A second position is then estimated based on a comparison of the structure façade data with the portion of the image matched to the structure façade data.
Type: Application
Filed: December 4, 2017
Publication date: March 29, 2018
Inventors: Nathan Jurgenson, Linjie Luo, Jonathan M. Rodriguez, II, Rahul Bhupendra Sheth, Jia Li, Xutao Lv