Patents by Inventor Wenjie Luo
Wenjie Luo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240121807
Abstract: A communication method and apparatus, and a system. A first terminal device obtains first information, where the first information includes a first multicast and broadcast service (MBS) service identifier and first sidelink (SL) resource information. A mapping relationship exists between the first MBS service identifier and the first SL resource information. A first MBS is an MBS supported by a first relay terminal device, and the first SL resource information is usable for transmitting data of the first MBS to the first terminal device. The first terminal device receives, based on the first SL resource information, the data that is of the first MBS and that is sent by the first relay terminal device. The first terminal device can thereby receive MBS data in a scenario in which a relay terminal device supports an MBS service.
Type: Application
Filed: December 18, 2023
Publication date: April 11, 2024
Inventors: Haiyan LUO, Wenjie PENG, Qinghai ZENG
-
Patent number: 11954895
Abstract: The present disclosure discloses a method for automatically identifying south troughs using an improved Laplacian, in the technical field of meteorology. The method includes the following steps: acquiring grid data of a geopotential height field; calculating the gradient field of the geopotential height field in the x direction; searching for turning points where the gradient value turns from negative to positive, and cleaning the gradient field; calculating the divergence in the x direction to obtain an improved Laplacian value L′; performing 0/1 binarization on L′ to obtain a black-and-white image containing a plurality of potential trough targets; merging the targets in the black-and-white image by dilation and recovering the original scale by erosion; and selecting effective targets based on the direction angle and axial ratio of each contour.
Type: Grant
Filed: July 20, 2023
Date of Patent: April 9, 2024
Assignee: Chengdu University of Information Technology
Inventors: Wendong Hu, Yanqiong Hao, Hongping Shu, Tiangui Xiao, Yan Chen, Ying Zhang, Jian Shao, Jianhong Gan, Yaqiang Wang, Fei Luo, Huahong Li, Balin Xu, Qiyang Peng, Juzhang Ren, Chengchao Li, Tao Zhang, Xiaohang Wen, Chao Wang, Yongkai Zhang, Wenjie Zhou, Jingyi Tao
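The gradient and sign-change steps above can be sketched in plain Python. The grid values and function names below are illustrative assumptions, not the patented implementation, and the later morphological and contour-selection stages are omitted:

```python
# Hypothetical sketch of the first stages of the trough-detection pipeline
# (toy data; names and values are assumptions, not the patented code).

def x_gradient(field):
    """Finite-difference gradient of a 2D grid along the x (column) axis."""
    return [[row[i + 1] - row[i] for i in range(len(row) - 1)] for row in field]

def sign_change_mask(grad):
    """Mark positions where the gradient turns from negative to positive,
    i.e. candidate trough axes in the geopotential height field."""
    mask = []
    for row in grad:
        flags = [1 if row[i] < 0 <= row[i + 1] else 0 for i in range(len(row) - 1)]
        mask.append(flags)
    return mask

# A toy "geopotential height" ridge-trough-ridge pattern:
field = [[5880, 5840, 5800, 5840, 5880],
         [5870, 5830, 5790, 5830, 5870]]
grad = x_gradient(field)       # negative west of the trough, positive east
mask = sign_change_mask(grad)  # 1 where a trough axis is detected
```

The dilation/erosion merging described in the abstract would then run on the binarized mask to join fragmented detections without changing their overall extent.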
-
Publication number: 20230415788
Abstract: Generally, the disclosed systems and methods utilize multi-task machine-learned models for object intention determination in autonomous driving applications. For example, a computing system can receive sensor data obtained relative to an autonomous vehicle and map data associated with a surrounding geographic environment of the autonomous vehicle. The sensor data and map data can be provided as input to a machine-learned intent model. The computing system can receive a jointly determined prediction from the machine-learned intent model for multiple outputs including at least one detection output indicative of one or more objects detected within the surrounding environment of the autonomous vehicle, a first corresponding forecasting output descriptive of a trajectory indicative of an expected path of the one or more objects towards a goal location, and/or a second corresponding forecasting output descriptive of a discrete behavior intention determined from a predefined group of possible behavior intentions.
Type: Application
Filed: September 11, 2023
Publication date: December 28, 2023
Inventors: Sergio Casas, Raquel Urtasun, Wenjie Luo
-
Publication number: 20230406361
Abstract: Methods, systems, and apparatus for generating trajectory predictions for one or more agents. In one aspect, a system comprises one or more computers configured to obtain scene context data characterizing a scene in an environment at a current time point, where the scene includes multiple agents. The one or more computers process the scene context data using a marginal trajectory prediction neural network to generate a respective marginal trajectory prediction for each of the agents that defines multiple possible trajectories for the agent after the current time point and a respective likelihood score for each of the possible future trajectories. The one or more computers can generate graph data based on the respective marginal trajectory predictions, and can process the graph data using a graph neural network to generate a joint trajectory prediction output for the multiple agents in the scene.
Type: Application
Filed: June 15, 2023
Publication date: December 21, 2023
Inventors: Wenjie Luo, Cheolho Park, Dragomir Anguelov, Benjamin Sapp
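The two-stage flow above (marginal per-agent hypotheses, then a joint prediction over all agents) can be illustrated with a toy sketch. Both stages are neural networks in the actual system; the functions, agent names, and probabilities here are stand-in assumptions, and the brute-force product over candidates merely mimics the role of the learned graph network:

```python
# Illustrative data flow only: marginal per-agent trajectory hypotheses are
# scored, then combined into a joint prediction (all names are assumptions).
import itertools

def marginal_predictions(agents):
    """Stand-in for the marginal trajectory prediction network: each agent
    gets several candidate future trajectories with likelihood scores."""
    return {a: [((a, "straight"), 0.6), ((a, "turn"), 0.4)] for a in agents}

def joint_prediction(marginals):
    """Stand-in for the graph-network stage: score every combination of
    per-agent candidates and return the most likely joint outcome."""
    agents = sorted(marginals)
    best, best_score = None, -1.0
    for combo in itertools.product(*(marginals[a] for a in agents)):
        score = 1.0
        for _, p in combo:
            score *= p
        if score > best_score:
            best, best_score = [traj for traj, _ in combo], score
    return best, best_score

joint, score = joint_prediction(marginal_predictions(["car", "cyclist"]))
```

The point of the joint stage is that, unlike independent marginals, it can assign consistent outcomes across interacting agents (e.g. one yields while the other proceeds).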
-
Publication number: 20230367318
Abstract: Systems and methods for generating motion plans including target trajectories for autonomous vehicles are provided. An autonomous vehicle may include or access a machine-learned motion planning model including a backbone network configured to generate a cost volume including data indicative of a cost associated with future locations of the autonomous vehicle. The cost volume can be generated from raw sensor data as part of motion planning for the autonomous vehicle. The backbone network can generate intermediate representations associated with object detections and object predictions. The motion planning model can include a trajectory generator configured to evaluate one or more potential trajectories for the autonomous vehicle and to select a target trajectory based at least in part on the cost volume generated by the backbone network.
Type: Application
Filed: July 25, 2023
Publication date: November 16, 2023
Inventors: Wenyuan Zeng, Wenjie Luo, Abbas Sadat, Bin Yang, Raquel Urtasun
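The trajectory-generator idea described above, scoring candidate trajectories against a cost volume and keeping the cheapest, can be sketched as follows. The grid, costs, and candidate trajectories are illustrative assumptions, not the learned cost volume:

```python
# Minimal sketch of cost-volume trajectory selection (toy data, assumed names).

def trajectory_cost(cost_volume, trajectory):
    """Total cost of a trajectory = sum of per-cell costs it passes through."""
    return sum(cost_volume[y][x] for x, y in trajectory)

def select_target(cost_volume, candidates):
    """Pick the candidate trajectory with the lowest accumulated cost."""
    return min(candidates, key=lambda t: trajectory_cost(cost_volume, t))

cost_volume = [  # [y][x]; high cost marks, e.g., a predicted object's path
    [0, 5, 0],
    [0, 5, 0],
    [0, 0, 0],
]
straight = [(1, 0), (1, 1), (1, 2)]  # drives through the high-cost cells
swerve = [(0, 0), (0, 1), (1, 2)]    # goes around them
best = select_target(cost_volume, [straight, swerve])
```

Because the cost volume is produced by the same backbone that performs detection and prediction, expensive regions naturally coincide with where other actors are expected to be.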
-
Patent number: 11794785
Abstract: Generally, the disclosed systems and methods utilize multi-task machine-learned models for object intention determination in autonomous driving applications. For example, a computing system can receive sensor data obtained relative to an autonomous vehicle and map data associated with a surrounding geographic environment of the autonomous vehicle. The sensor data and map data can be provided as input to a machine-learned intent model. The computing system can receive a jointly determined prediction from the machine-learned intent model for multiple outputs including at least one detection output indicative of one or more objects detected within the surrounding environment of the autonomous vehicle, a first corresponding forecasting output descriptive of a trajectory indicative of an expected path of the one or more objects towards a goal location, and/or a second corresponding forecasting output descriptive of a discrete behavior intention determined from a predefined group of possible behavior intentions.
Type: Grant
Filed: May 20, 2022
Date of Patent: October 24, 2023
Assignee: UATC, LLC
Inventors: Sergio Casas, Wenjie Luo, Raquel Urtasun
-
Patent number: 11755018
Abstract: Systems and methods for generating motion plans including target trajectories for autonomous vehicles are provided. An autonomous vehicle may include or access a machine-learned motion planning model including a backbone network configured to generate a cost volume including data indicative of a cost associated with future locations of the autonomous vehicle. The cost volume can be generated from raw sensor data as part of motion planning for the autonomous vehicle. The backbone network can generate intermediate representations associated with object detections and object predictions. The motion planning model can include a trajectory generator configured to evaluate one or more potential trajectories for the autonomous vehicle and to select a target trajectory based at least in part on the cost volume generated by the backbone network.
Type: Grant
Filed: August 15, 2019
Date of Patent: September 12, 2023
Assignee: UATC, LLC
Inventors: Wenyuan Zeng, Wenjie Luo, Abbas Sadat, Bin Yang, Raquel Urtasun
-
Patent number: 11733388
Abstract: A method, an apparatus and an electronic device for real-time object detection are provided according to the present disclosure. In the method, target point cloud data in a range of a preset angle is acquired, where the target point cloud data includes one or more data points, and the preset angle is less than a round angle (i.e., less than a full 360° sweep). A one-dimensional LiDAR point feature of each of the data points in the target point cloud data is determined. A previous frame of LiDAR feature vector is updated based on the one-dimensional LiDAR point feature of each of the data points in the target point cloud data to generate a current frame of LiDAR feature vector. Object information is determined in a real-time manner based on the current frame of LiDAR feature vector.
Type: Grant
Filed: September 28, 2020
Date of Patent: August 22, 2023
Assignee: Beijing Qingzhouzhihang Intelligent Technology Co., Ltd
Inventor: Wenjie Luo
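The incremental update described above can be sketched as follows: instead of waiting for a full 360° sweep, each wedge of points refreshes only its own sectors of a persistent feature vector. The sector count, the one-dimensional "range" feature, and all names are illustrative assumptions rather than the patented feature extraction:

```python
# Hedged sketch of streaming per-wedge LiDAR feature updates (toy feature).
import math

SECTORS = 8  # feature slots covering 360° / 8 = 45° each (assumed granularity)

def sector_of(point):
    """Angular sector index of a 2D LiDAR return."""
    x, y = point
    angle = math.atan2(y, x) % (2 * math.pi)
    return int(angle / (2 * math.pi) * SECTORS)

def update_features(features, wedge_points):
    """Overwrite only the sectors covered by the new wedge; the rest of the
    previous frame's feature vector is carried over unchanged."""
    for p in wedge_points:
        features[sector_of(p)] = math.hypot(*p)  # toy 1-D feature: range
    return features

features = [0.0] * SECTORS
update_features(features, [(1.0, 0.0)])  # wedge near 0°  -> sector 0
update_features(features, [(0.0, 2.0)])  # wedge near 90° -> sector 2
```

Detection can then run on the current feature vector after every wedge, which is what makes the method real-time rather than once-per-rotation.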
-
Publication number: 20230252777
Abstract: Systems and methods for performing semantic segmentation of three-dimensional data are provided. In one example embodiment, a computing system can be configured to obtain sensor data including three-dimensional data associated with an environment. The three-dimensional data can include a plurality of points and can be associated with one or more times. The computing system can be configured to determine data indicative of a two-dimensional voxel representation associated with the environment based at least in part on the three-dimensional data. The computing system can be configured to determine a classification for each point of the plurality of points within the three-dimensional data based at least in part on the two-dimensional voxel representation associated with the environment and a machine-learned semantic segmentation model. The computing system can be configured to initiate one or more actions based at least in part on the per-point classifications.
Type: Application
Filed: April 13, 2023
Publication date: August 10, 2023
Inventors: Chris Jia-Han Zhang, Wenjie Luo, Raquel Urtasun
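The voxelize-then-classify flow above can be illustrated with a minimal sketch: 3D points map to cells of a 2D (bird's-eye-view) grid, cells get labels, and each point inherits its cell's label. The cell size, the density-based toy classifier, and all names are assumptions standing in for the machine-learned semantic segmentation model:

```python
# Toy sketch: 3D points -> 2D voxel grid -> per-cell label -> per-point label.

def bev_cell(point, cell_size=1.0):
    """Map a 3D point to its 2D bird's-eye-view grid cell, dropping height."""
    x, y, _z = point
    return (int(x // cell_size), int(y // cell_size))

def segment_points(points, cell_size=1.0):
    """Label every point via its 2D cell: crowded cells become 'object',
    sparse ones 'background' (a toy stand-in for the learned model)."""
    counts = {}
    for p in points:
        c = bev_cell(p, cell_size)
        counts[c] = counts.get(c, 0) + 1
    return ["object" if counts[bev_cell(p, cell_size)] >= 2 else "background"
            for p in points]

points = [(0.2, 0.3, 1.0), (0.7, 0.1, 2.5), (3.4, 3.9, 0.2)]
labels = segment_points(points)  # first two points share a cell
```

Collapsing to a 2D grid is the design choice that makes the representation cheap enough for dense convolution while still supporting a per-point output.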
-
Patent number: 11657603
Abstract: Systems and methods for performing semantic segmentation of three-dimensional data are provided. In one example embodiment, a computing system can be configured to obtain sensor data including three-dimensional data associated with an environment. The three-dimensional data can include a plurality of points and can be associated with one or more times. The computing system can be configured to determine data indicative of a two-dimensional voxel representation associated with the environment based at least in part on the three-dimensional data. The computing system can be configured to determine a classification for each point of the plurality of points within the three-dimensional data based at least in part on the two-dimensional voxel representation associated with the environment and a machine-learned semantic segmentation model. The computing system can be configured to initiate one or more actions based at least in part on the per-point classifications.
Type: Grant
Filed: March 22, 2021
Date of Patent: May 23, 2023
Assignee: UATC, LLC
Inventors: Chris Jia-Han Zhang, Wenjie Luo, Raquel Urtasun
-
Patent number: 11620838
Abstract: Systems and methods for answering region specific questions are provided. A method includes obtaining a regional scene question including an attribute query and a spatial region of interest for a training scene depicting a surrounding environment of a vehicle. The method includes obtaining a universal embedding for the training scene and an attribute embedding for the attribute query of the scene question. The universal embedding can identify sensory data corresponding to the training scene that can be used to answer questions concerning a number of different attributes in the training scene. The attribute embedding can identify aspects of an attribute that can be used to answer questions specific to the attribute. The method includes determining an answer embedding based on the universal embedding and the attribute embedding and determining a regional scene answer to the regional scene question based on the spatial region of interest and the answer embedding.
Type: Grant
Filed: September 8, 2020
Date of Patent: April 4, 2023
Assignee: UATC, LLC
Inventors: Sean Segal, Wenjie Luo, Eric Randall Kee, Ersin Yumer, Raquel Urtasun, Abbas Sadat
-
Patent number: 11475351
Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices for object detection, tracking, and motion prediction are provided. For example, the disclosed technology can include receiving sensor data including information based on sensor outputs associated with detection of objects in an environment over one or more time intervals by one or more sensors. The operations can include generating, based on the sensor data, an input representation of the objects. The input representation can include a temporal dimension and spatial dimensions. The operations can include determining, based on the input representation and a machine-learned model, detected object classes of the objects, locations of the objects over the one or more time intervals, or predicted paths of the objects. Furthermore, the operations can include generating, based on the input representation and the machine-learned model, an output including bounding shapes corresponding to the objects.
Type: Grant
Filed: September 7, 2018
Date of Patent: October 18, 2022
Assignee: UATC, LLC
Inventors: Wenjie Luo, Bin Yang, Raquel Urtasun
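The input representation above, spatial dimensions plus a temporal dimension, can be sketched as a simple occupancy tensor stacked over time. Grid size, frame encoding, and all names are illustrative assumptions, not the patented encoding:

```python
# Sketch of a [time][y][x] occupancy tensor built from per-timestep detections.

def build_input(frames, grid=3):
    """frames: list of per-timestep point lists [(x, y), ...].
    Returns a [time][y][x] binary occupancy tensor."""
    tensor = []
    for points in frames:
        layer = [[0] * grid for _ in range(grid)]
        for x, y in points:
            layer[int(y)][int(x)] = 1
        tensor.append(layer)
    return tensor

# An object moving one cell per time step leaves a diagonal trace through
# the temporal dimension, which a model can exploit for tracking/prediction:
frames = [[(0.2, 0.4)], [(1.5, 1.1)], [(2.8, 2.6)]]
tensor = build_input(frames)
```

Feeding several past frames as one tensor is what lets a single model jointly handle detection, tracking, and motion prediction instead of chaining separate systems.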
-
Publication number: 20220289180
Abstract: Generally, the disclosed systems and methods utilize multi-task machine-learned models for object intention determination in autonomous driving applications. For example, a computing system can receive sensor data obtained relative to an autonomous vehicle and map data associated with a surrounding geographic environment of the autonomous vehicle. The sensor data and map data can be provided as input to a machine-learned intent model. The computing system can receive a jointly determined prediction from the machine-learned intent model for multiple outputs including at least one detection output indicative of one or more objects detected within the surrounding environment of the autonomous vehicle, a first corresponding forecasting output descriptive of a trajectory indicative of an expected path of the one or more objects towards a goal location, and/or a second corresponding forecasting output descriptive of a discrete behavior intention determined from a predefined group of possible behavior intentions.
Type: Application
Filed: May 20, 2022
Publication date: September 15, 2022
Inventors: Sergio Casas, Wenjie Luo, Raquel Urtasun
-
Patent number: 11370423
Abstract: Generally, the disclosed systems and methods utilize multi-task machine-learned models for object intention determination in autonomous driving applications. For example, a computing system can receive sensor data obtained relative to an autonomous vehicle and map data associated with a surrounding geographic environment of the autonomous vehicle. The sensor data and map data can be provided as input to a machine-learned intent model. The computing system can receive a jointly determined prediction from the machine-learned intent model for multiple outputs including at least one detection output indicative of one or more objects detected within the surrounding environment of the autonomous vehicle, a first corresponding forecasting output descriptive of a trajectory indicative of an expected path of the one or more objects towards a goal location, and/or a second corresponding forecasting output descriptive of a discrete behavior intention determined from a predefined group of possible behavior intentions.
Type: Grant
Filed: May 23, 2019
Date of Patent: June 28, 2022
Assignee: UATC, LLC
Inventors: Sergio Casas, Wenjie Luo, Raquel Urtasun
-
Publication number: 20210302584
Abstract: A method, an apparatus and an electronic device for real-time object detection are provided according to the present disclosure. In the method, target point cloud data in a range of a preset angle is acquired, where the target point cloud data includes one or more data points, and the preset angle is less than a round angle (i.e., less than a full 360° sweep). A one-dimensional LiDAR point feature of each of the data points in the target point cloud data is determined. A previous frame of LiDAR feature vector is updated based on the one-dimensional LiDAR point feature of each of the data points in the target point cloud data to generate a current frame of LiDAR feature vector. Object information is determined in a real-time manner based on the current frame of LiDAR feature vector.
Type: Application
Filed: September 28, 2020
Publication date: September 30, 2021
Inventor: Wenjie LUO
-
Publication number: 20210209370
Abstract: Systems and methods for performing semantic segmentation of three-dimensional data are provided. In one example embodiment, a computing system can be configured to obtain sensor data including three-dimensional data associated with an environment. The three-dimensional data can include a plurality of points and can be associated with one or more times. The computing system can be configured to determine data indicative of a two-dimensional voxel representation associated with the environment based at least in part on the three-dimensional data. The computing system can be configured to determine a classification for each point of the plurality of points within the three-dimensional data based at least in part on the two-dimensional voxel representation associated with the environment and a machine-learned semantic segmentation model. The computing system can be configured to initiate one or more actions based at least in part on the per-point classifications.
Type: Application
Filed: March 22, 2021
Publication date: July 8, 2021
Inventors: Chris Jia-Han Zhang, Wenjie Luo, Raquel Urtasun
-
Publication number: 20210150244
Abstract: Systems and methods for answering region specific questions are provided. A method includes obtaining a regional scene question including an attribute query and a spatial region of interest for a training scene depicting a surrounding environment of a vehicle. The method includes obtaining a universal embedding for the training scene and an attribute embedding for the attribute query of the scene question. The universal embedding can identify sensory data corresponding to the training scene that can be used to answer questions concerning a number of different attributes in the training scene. The attribute embedding can identify aspects of an attribute that can be used to answer questions specific to the attribute. The method includes determining an answer embedding based on the universal embedding and the attribute embedding and determining a regional scene answer to the regional scene question based on the spatial region of interest and the answer embedding.
Type: Application
Filed: September 8, 2020
Publication date: May 20, 2021
Inventors: Sean Segal, Wenjie Luo, Eric Randall Kee, Ersin Yumer, Raquel Urtasun, Abbas Sadat
-
Patent number: 10970553
Abstract: Systems and methods for performing semantic segmentation of three-dimensional data are provided. In one example embodiment, a computing system can be configured to obtain sensor data including three-dimensional data associated with an environment. The three-dimensional data can include a plurality of points and can be associated with one or more times. The computing system can be configured to determine data indicative of a two-dimensional voxel representation associated with the environment based at least in part on the three-dimensional data. The computing system can be configured to determine a classification for each point of the plurality of points within the three-dimensional data based at least in part on the two-dimensional voxel representation associated with the environment and a machine-learned semantic segmentation model. The computing system can be configured to initiate one or more actions based at least in part on the per-point classifications.
Type: Grant
Filed: September 6, 2018
Date of Patent: April 6, 2021
Assignee: UATC, LLC
Inventors: Chris Jia-Han Zhang, Wenjie Luo, Raquel Urtasun
-
Patent number: 10762650
Abstract: A system for estimating depth using a monocular camera may include one or more processors, a monocular camera, and a memory device. The monocular camera and the memory device may be operably connected to the one or more processors. The memory device may include an image capture module, an encoder-decoder module, a semantic information generating module, and a depth map generating module. The modules, when executed by the one or more processors, may cause the one or more processors to obtain a captured image from the monocular camera; generate a synthesized image based on the captured image, where the style transfer module was trained using a generative adversarial network; generate a feature map based on the synthesized image; generate semantic information based on the feature map; and generate a depth map based on the feature map and the semantic information.
Type: Grant
Filed: September 13, 2019
Date of Patent: September 1, 2020
Assignee: Toyota Motor Engineering & Manufacturing North America, Inc.
Inventors: Rui Guo, Wenjie Luo, Shalini Keshavamurthy, Haritha Muralidharan, Fangying Zhai, Kentaro Oguchi
-
Publication number: 20200159225
Abstract: Systems and methods for generating motion plans including target trajectories for autonomous vehicles are provided. An autonomous vehicle may include or access a machine-learned motion planning model including a backbone network configured to generate a cost volume including data indicative of a cost associated with future locations of the autonomous vehicle. The cost volume can be generated from raw sensor data as part of motion planning for the autonomous vehicle. The backbone network can generate intermediate representations associated with object detections and object predictions. The motion planning model can include a trajectory generator configured to evaluate one or more potential trajectories for the autonomous vehicle and to select a target trajectory based at least in part on the cost volume generated by the backbone network.
Type: Application
Filed: August 15, 2019
Publication date: May 21, 2020
Inventors: Wenyuan Zeng, Wenjie Luo, Abbas Sadat, Bin Yang, Raquel Urtasun