Patents by Inventor Yannan Wu
Yannan Wu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230062503
Abstract: Hierarchical structured sparse parameter pruning and processing improves runtime performance and energy efficiency of neural networks. In contrast with conventional (non-structured) pruning which allows for any distribution of the non-zero values within a matrix that achieves the desired sparsity degree (e.g., 50%) and is consequently difficult to accelerate, structured hierarchical sparsity requires each multi-element unit at the coarsest granularity of the hierarchy to be pruned to the desired sparsity degree. The global desired sparsity degree is a function of the per-level sparsity degrees. Distribution of non-zero values within each multi-element unit is constrained according to the per-level sparsity degree at the particular level of the hierarchy. Each level of the hierarchy may be associated with a hardware (e.g., logic or circuit) structure that can be enabled or disabled according to the per-level sparsity.
Type: Application
Filed: February 28, 2022
Publication date: March 2, 2023
Inventors: Yannan Wu, Po-An Tsai, Saurav Muralidharan, Joel Springer Emer
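The abstract above can be illustrated with a minimal two-level sketch (the block size, per-level fractions, and L2-norm block ranking below are illustrative assumptions, not taken from the patent): the matrix is split into fixed-size units, a fraction of units is kept at the coarse level, and each surviving unit is pruned to the fine-level density, so the global density is the product of the per-level densities.

```python
import numpy as np

def prune_block(block, keep_frac):
    """Zero all but the largest-magnitude keep_frac of entries in a 1-D unit."""
    k = int(round(block.size * keep_frac))
    out = np.zeros_like(block)
    if k > 0:
        idx = np.argsort(np.abs(block))[-k:]
        out[idx] = block[idx]
    return out

def hierarchical_prune(w, unit=8, keep_coarse=0.5, keep_fine=0.5):
    """Two-level structured pruning sketch: split the matrix into length-`unit`
    blocks, keep the top `keep_coarse` fraction of blocks by L2 norm, then
    prune each surviving block to `keep_fine` density. Global density is
    keep_coarse * keep_fine (here 0.25, i.e. 75% sparsity)."""
    blocks = w.reshape(-1, unit)
    norms = np.linalg.norm(blocks, axis=1)
    n_keep = int(round(len(norms) * keep_coarse))
    out = np.zeros_like(blocks)
    for u in np.argsort(norms)[-n_keep:]:
        out[u] = prune_block(blocks[u], keep_fine)
    return out.reshape(w.shape)

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 16))
pruned = hierarchical_prune(w)
density = np.count_nonzero(pruned) / w.size
print(density)  # 0.25
```

Because every unit has the same constrained density, hardware handling one hierarchy level can rely on a fixed non-zero budget per unit, which is what makes this easier to accelerate than unstructured sparsity.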
-
Publication number: 20210389762
Abstract: A method includes, with aid of one or more processors individually or collectively, analyzing stereoscopic video data of an environment to determine environmental information, generating augmented stereoscopic video data of the environment by fusing the stereoscopic video data and the environmental information, and controlling an unmanned aerial vehicle (UAV) to avoid an obstacle on a motion path of the UAV according to the augmented stereoscopic video data of the environment.
Type: Application
Filed: August 30, 2021
Publication date: December 16, 2021
Inventors: Zhicong HUANG, Cong ZHAO, Shuo YANG, Yannan WU, Kang YANG, Guyue ZHOU
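One common form of "environmental information" derived from stereoscopic video is depth, recovered from the horizontal disparity between the two views. The sketch below is a deliberately naive block-matching matcher on a synthetic rectified pair (real pipelines use calibrated cameras and optimized matchers; nothing here is claimed to be the patented method):

```python
import numpy as np

def disparity_map(left, right, max_disp=8, patch=5):
    """Naive block-matching stereo sketch: for each pixel in the left view,
    find the horizontal shift (disparity) minimizing the sum of absolute
    differences (SAD) between patches. Depth is then proportional to
    baseline * focal_length / disparity."""
    h, w = left.shape
    half = patch // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y-half:y+half+1, x-half:x+half+1].astype(float)
            costs = [np.abs(ref - right[y-half:y+half+1,
                                        x-d-half:x-d+half+1].astype(float)).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic check: a random texture shifted by 4 pixels has disparity 4.
rng = np.random.default_rng(0)
left = rng.integers(0, 255, (20, 40)).astype(np.uint8)
right = np.roll(left, -4, axis=1)
d = disparity_map(left, right)
print(d[10, 25])  # 4
```

A per-pixel depth map like this can then be fused back into the video stream (e.g., as an overlay or proximity warning) to drive obstacle avoidance.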
-
Publication number: 20210329177
Abstract: A method for sensing an environment in which an unmanned aerial vehicle (UAV) is configured to operate includes, with aid of one or more processors onboard the UAV individually or collectively, obtaining video data of the environment that is collected using a binocular video camera mounted in a forward-looking direction of the UAV, encoding the video data to generate stereoscopic video data, and transmitting the stereoscopic video data to a terminal remote to the movable object.
Type: Application
Filed: May 24, 2021
Publication date: October 21, 2021
Inventors: Cong ZHAO, Yannan WU, Kang YANG, Guyue ZHOU
-
Patent number: 11106203
Abstract: A method for generating a first person view (FPV) of an environment includes, with aid of one or more processors individually or collectively, analyzing stereoscopic video data of the environment to determine environmental information and generating augmented stereoscopic video data of the environment by fusing the stereoscopic video data and the environmental information.
Type: Grant
Filed: February 15, 2019
Date of Patent: August 31, 2021
Assignee: SZ DJI TECHNOLOGY CO., LTD.
Inventors: Zhicong Huang, Cong Zhao, Shuo Yang, Yannan Wu, Kang Yang, Guyue Zhou
-
Patent number: 11019280
Abstract: A method for processing video data of an environment includes, with aid of one or more processors individually or collectively, obtaining in or near real-time a reference position of an imaging device located on a movable object based on one or more previously traversed positions of the imaging device, and modifying an image frame in the video data to obtain a modified image frame based on the reference position of the imaging device and an actual position of the imaging device at which the image frame is taken. The one or more previously traversed positions are obtained using at least one sensor on the movable object. The video data is acquired by the imaging device.
Type: Grant
Filed: November 5, 2018
Date of Patent: May 25, 2021
Assignee: SZ DJI TECHNOLOGY CO., LTD.
Inventors: Cong Zhao, Yannan Wu, Kang Yang, Guyue Zhou
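The pattern described above is the core of sensor-based video stabilization: derive a smooth "reference position" from past positions, then warp each frame by the deviation of the actual position from that reference. The moving-average reference, the pixels-per-unit scale, and the simple translation warp below are all illustrative assumptions:

```python
import numpy as np
from collections import deque

class Stabilizer:
    """Sketch: smooth the camera path with a moving average of previously
    traversed positions (the 'reference position'), then shift each frame
    to compensate for the actual position's deviation from the reference.
    `px_per_unit` is a hypothetical scale mapping position error to pixels."""
    def __init__(self, window=5, px_per_unit=100.0):
        self.history = deque(maxlen=window)
        self.px = px_per_unit

    def reference(self, actual_xy):
        self.history.append(np.asarray(actual_xy, dtype=float))
        return np.mean(self.history, axis=0)  # moving-average reference

    def stabilize(self, frame, actual_xy):
        ref = self.reference(actual_xy)
        dx, dy = np.round((ref - actual_xy) * self.px).astype(int)
        # Translate the frame so content stays aligned with the smooth path.
        return np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
```

A production version would use a full homography warp from the 6-DoF pose rather than a pure translation, but the reference-versus-actual structure is the same.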
-
Publication number: 20210152621
Abstract: A computer-implemented method for controlling bit rate includes determining an expected adjustment for a first coding parameter that encodes a first slice based on a difference between a cumulative number of bits used to encode one or more slices of a frame up to and including the first slice and a maximum number of bits allowed to encode the one or more slices, updating a coding parameter threshold based on an indicator that indicates a relationship between the coding parameter threshold and one or more coding parameters used to encode the one or more slices, and determining an actual adjustment for the first coding parameter based on the expected adjustment and the updated coding parameter threshold.
Type: Application
Filed: January 4, 2021
Publication date: May 20, 2021
Inventors: Wenjun ZHAO, Yannan WU
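A minimal sketch of the three-step loop the abstract describes, with QP standing in for the "coding parameter" (the linear gain, the hit limit, and the clamping rule are hypothetical tuning choices, not the patented formulas):

```python
def rate_control_step(qp, used_bits, max_bits, qp_threshold, hit_count,
                      gain=6, hit_limit=3):
    """Sketch of the per-slice rate-control loop described above:
      1) the expected QP adjustment grows with the overshoot of cumulative
         bits over the slice budget,
      2) the threshold is relaxed when recent QPs kept reaching it
         (the 'indicator'),
      3) the actual adjustment is the expected one capped so the new QP
         does not exceed the (possibly updated) threshold."""
    overshoot = (used_bits - max_bits) / max(max_bits, 1)
    expected = round(gain * overshoot)          # expected adjustment
    if hit_count >= hit_limit:                  # indicator fired
        qp_threshold += 1                       # relax the cap
    actual = min(expected, qp_threshold - qp)   # actual adjustment
    return qp + actual, qp_threshold
```

Note that under budget (`used_bits < max_bits`) the expected adjustment goes negative, so spare bits are spent by lowering QP, while the adaptive threshold keeps a persistently saturating encoder from being stuck.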
-
Publication number: 20210058614
Abstract: Methods and systems of determining a quantization step for encoding video based on motion data are provided. Video captured by an image capture device is received. The video comprises a video frame component. Additionally, motion data associated with the video frame component is received. Further, a quantization step for encoding the video frame component is determined based on the motion data.
Type: Application
Filed: November 9, 2020
Publication date: February 25, 2021
Inventor: Yannan WU
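The idea of mapping motion data to a quantization step can be sketched in one function. The linear gain and the H.264-style 1–51 clamp below are hypothetical choices for illustration, not values from the patent:

```python
def quantization_step(base_q, motion_magnitude, gain=2.0, q_min=1, q_max=51):
    """Sketch: choose a coarser quantization step for fast-moving content
    (motion masks fine detail and inflates residuals) and a finer one for
    static content. gain and the QP-style clamp are assumed constants."""
    q = base_q + int(gain * motion_magnitude)
    return max(q_min, min(q, q_max))
```

On a drone, `motion_magnitude` could come directly from IMU/gimbal data rather than from image analysis, which is what makes a sensor-driven scheme like this cheap at encode time.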
-
Patent number: 10904562
Abstract: A method for constructing an optical flow field includes classifying a plurality of scenarios according to motions of a mobile platform carrying an imaging device and statuses of the imaging device. The plurality of scenarios include at least one of elementary scenarios or combined scenarios. The method further includes constructing a plurality of optical flow fields each corresponding to one of the plurality of scenarios, acquiring a motion of the mobile platform and a status of the imaging device relative to the mobile platform, and selecting a corresponding optical flow field from the constructed optical flow fields corresponding to the plurality of scenarios based upon the motion of the mobile platform and the status of the imaging device for the imaging device to capture a frame at a shooting direction.
Type: Grant
Filed: May 17, 2019
Date of Patent: January 26, 2021
Assignee: SZ DJI TECHNOLOGY CO., LTD.
Inventor: Yannan Wu
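The structure described above, a library of pre-constructed flow fields keyed by (platform motion, camera status) scenarios with selection at capture time, can be sketched as follows. The two elementary fields and the scenario labels are illustrative assumptions:

```python
import numpy as np

def translation_field(h, w, vx, vy):
    """Uniform flow induced by pure platform translation."""
    return np.full((h, w, 2), (vx, vy), dtype=float)

def rotation_field(h, w, omega):
    """Flow induced by camera rotation: tangential velocity about the
    image center, growing linearly with distance from it."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    return omega * np.stack([-(ys - cy), xs - cx], axis=-1)

# Library of fields keyed by (platform_motion, camera_status) scenarios;
# combined scenarios could sum elementary fields.
FIELDS = {
    ("hover", "rotating"): lambda h, w: rotation_field(h, w, 0.1),
    ("forward", "fixed"):  lambda h, w: translation_field(h, w, 0.0, 1.0),
}

def select_field(platform_motion, camera_status, h, w):
    """Pick the pre-constructed field matching the sensed scenario."""
    return FIELDS[(platform_motion, camera_status)](h, w)
```

Because the fields come from sensed motion rather than per-frame image analysis, selection is essentially a table lookup at shooting time.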
-
Patent number: 10887365
Abstract: A computer-implemented method for controlling bit rate includes determining a difference between a cumulative number of bits used to encode one or more slices of a frame up to and including a first slice encoded using a first coding parameter and a maximum number of bits allowed to encode the one or more slices of the frame, updating a coding parameter threshold based at least in part on a counter that indicates a number of times when one or more coding parameters used to encode the one or more slices reach or exceed the coding parameter threshold, and determining a second coding parameter used to encode a second slice of the frame based at least in part on the difference and the updated coding parameter threshold.
Type: Grant
Filed: January 30, 2019
Date of Patent: January 5, 2021
Assignee: SZ DJI TECHNOLOGY CO., LTD.
Inventors: Wenjun Zhao, Yannan Wu
-
Patent number: 10834392
Abstract: Methods and systems of determining a quantization step for encoding video based on motion data are provided. Video captured by an image capture device is received. The video comprises a video frame component. Additionally, motion data associated with the video frame component is received. Further, a quantization step for encoding the video frame component is determined based on the motion data.
Type: Grant
Filed: March 7, 2017
Date of Patent: November 10, 2020
Assignee: SZ DJI TECHNOLOGY CO., LTD.
Inventor: Yannan Wu
-
Publication number: 20200329254
Abstract: A video encoding method includes receiving a video captured by an image capture device on-board a movable object, where the video includes a video frame component; receiving sensor data from a plurality of sensors on-board the movable object; generating, according to the sensor data, an optical flow field associated with the video frame component; and evaluating motion of the video for video encoding based on the optical flow field.
Type: Application
Filed: June 25, 2020
Publication date: October 15, 2020
Inventors: Yannan WU, Xiaozheng TANG, Wei CHEN, Zisheng CAO, Mingyu WANG
-
Patent number: 10708617
Abstract: Methods and systems for evaluating a search area for encoding video are provided. The method comprises receiving video captured by an image capture device, the video comprising video frame components. Additionally, the method comprises receiving optical flow field data associated with the video frame component, wherein at least a portion of the optical flow field data is captured by sensors. The method also comprises determining a search area based on the optical flow field data.
Type: Grant
Filed: March 7, 2017
Date of Patent: July 7, 2020
Assignee: SZ DJI TECHNOLOGY CO., LTD.
Inventors: Yannan Wu, Xiaozheng Tang, Wei Chen, Zisheng Cao, Mingyu Wang
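A common way to use sensor-derived optical flow in motion estimation is to seed and size the block-matching search window from the predicted flow. The sketch below assumes a hypothetical radius heuristic; the specific formula is not taken from the patent:

```python
import math

def search_area(block_xy, predicted_flow, base_radius=8):
    """Sketch: center the motion-estimation search window at the block
    position shifted by the sensor-predicted optical flow, and widen the
    window with the flow magnitude (larger motion carries more residual
    uncertainty). The radius formula is an assumed heuristic."""
    fx, fy = predicted_flow
    center = (block_xy[0] + fx, block_xy[1] + fy)
    radius = base_radius + int(math.hypot(fx, fy) / 2)
    return center, radius
```

Starting the search at the flow-predicted offset rather than at the co-located block shrinks the area the encoder must exhaustively test, which is the efficiency win of folding sensor data into encoding.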
-
Publication number: 20190273945
Abstract: A method for constructing an optical flow field includes classifying a plurality of scenarios according to motions of a mobile platform carrying an imaging device and statuses of the imaging device. The plurality of scenarios include at least one of elementary scenarios or combined scenarios. The method further includes constructing a plurality of optical flow fields each corresponding to one of the plurality of scenarios, acquiring a motion of the mobile platform and a status of the imaging device relative to the mobile platform, and selecting a corresponding optical flow field from the constructed optical flow fields corresponding to the plurality of scenarios based upon the motion of the mobile platform and the status of the imaging device for the imaging device to capture a frame at a shooting direction.
Type: Application
Filed: May 17, 2019
Publication date: September 5, 2019
Inventor: Yannan WU
-
Publication number: 20190220002
Abstract: A method for generating a first person view (FPV) of an environment includes, with aid of one or more processors individually or collectively, analyzing stereoscopic video data of the environment to determine environmental information and generating augmented stereoscopic video data of the environment by fusing the stereoscopic video data and the environmental information.
Type: Application
Filed: February 15, 2019
Publication date: July 18, 2019
Inventors: Zhicong HUANG, Cong ZHAO, Shuo YANG, Yannan WU, Kang YANG, Guyue ZHOU
-
Patent number: 10321153
Abstract: A system constructs an optical flow field that corresponds with a selected video frame. The optical flow field is constructed based on a first motion of a mobile platform having an imaging device and a status of the imaging device. The first motion and the status are determined with measurements of sensors installed on the mobile platform and/or the imaging device installed on the mobile platform. The first motion includes at least one of a first rotation, a horizontal movement, or a vertical movement of the mobile platform. The status includes a rotation of the imaging device and/or an orientation of the imaging device relative to the mobile platform.
Type: Grant
Filed: March 14, 2017
Date of Patent: June 11, 2019
Assignee: SZ DJI TECHNOLOGY CO., LTD.
Inventor: Yannan Wu
-
Publication number: 20190166180
Abstract: A computer-implemented method for controlling bit rate includes determining a difference between a cumulative number of bits used to encode one or more slices of a frame up to and including a first slice encoded using a first coding parameter and a maximum number of bits allowed to encode the one or more slices of the frame, updating a coding parameter threshold based at least in part on a counter that indicates a number of times when one or more coding parameters used to encode the one or more slices reach or exceed the coding parameter threshold, and determining a second coding parameter used to encode a second slice of the frame based at least in part on the difference and the updated coding parameter threshold.
Type: Application
Filed: January 30, 2019
Publication date: May 30, 2019
Inventors: Wenjun ZHAO, Yannan WU
-
Publication number: 20190075252
Abstract: A method for processing video data of an environment includes, with aid of one or more processors individually or collectively, obtaining in or near real-time a reference position of an imaging device located on a movable object based on one or more previously traversed positions of the imaging device, and modifying an image frame in the video data to obtain a modified image frame based on the reference position of the imaging device and an actual position of the imaging device at which the image frame is taken. The one or more previously traversed positions are obtained using at least one sensor on the movable object.
Type: Application
Filed: November 5, 2018
Publication date: March 7, 2019
Inventors: Cong ZHAO, Yannan WU, Kang YANG, Guyue ZHOU
-
Publication number: 20170188046
Abstract: A system constructs an optical flow field that corresponds with a selected video frame. The optical flow field is constructed based on a first motion of a mobile platform having an imaging device and a status of the imaging device. The first motion and the status are determined with measurements of sensors installed on the mobile platform and/or the imaging device installed on the mobile platform. The first motion includes at least one of a first rotation, a horizontal movement, or a vertical movement of the mobile platform. The status includes a rotation of the imaging device and/or an orientation of the imaging device relative to the mobile platform.
Type: Application
Filed: March 14, 2017
Publication date: June 29, 2017
Inventor: Yannan WU
-
Publication number: 20170180729
Abstract: Methods and systems of determining a quantization step for encoding video based on motion data are provided. Video captured by an image capture device is received. The video comprises a video frame component. Additionally, motion data associated with the video frame component is received. Further, a quantization step for encoding the video frame component is determined based on the motion data.
Type: Application
Filed: March 7, 2017
Publication date: June 22, 2017
Inventor: Yannan Wu
-
Publication number: 20170180754
Abstract: Methods and systems for evaluating a search area for encoding video are provided. The method comprises receiving video captured by an image capture device, the video comprising video frame components. Additionally, the method comprises receiving optical flow field data associated with the video frame component, wherein at least a portion of the optical flow field data is captured by sensors. The method also comprises determining a search area based on the optical flow field data.
Type: Application
Filed: March 7, 2017
Publication date: June 22, 2017
Inventors: Yannan Wu, Xiaozheng Tang, Wei Chen, Zisheng Cao, Mingyu Wang