Patents by Inventor Bingbing Zhuang
Bingbing Zhuang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250115250
Abstract: Methods and systems for motion detection include performing a first prediction to predict voxel occupancy based on a sequence of input point clouds including a current point cloud and a set of previous point clouds. A second prediction is performed to predict voxel occupancy for the sequence of input point clouds using predicted voxel occupancy between the input point clouds. Motion detection is performed based on the completed voxel occupancy. An action is performed responsive to a detected motion.
Type: Application
Filed: October 1, 2024
Publication date: April 10, 2025
Inventors: Bingbing Zhuang, Manmohan Chandraker, Di Liu
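The motion-from-occupancy idea in the abstract above can be illustrated, very loosely, by comparing two occupancy grids and flagging voxels that appeared or vanished. This is a toy sketch only, not the patented method: the function name, the set-based grid encoding, and the change threshold are all hypothetical.

```python
def detect_motion(occ_prev, occ_curr, min_changed=1):
    """Toy motion check: occ_prev / occ_curr are sets of occupied voxel
    indices (i, j, k); any voxel that appears or vanishes counts as change."""
    changed = occ_prev ^ occ_curr  # symmetric difference: appeared or vanished
    return len(changed) >= min_changed, changed

# One voxel "moves" from (0, 0, 0) to (1, 1, 1); (2, 2, 2) stays static.
prev_occ = {(0, 0, 0), (2, 2, 2)}
curr_occ = {(1, 1, 1), (2, 2, 2)}
moved, changed_voxels = detect_motion(prev_occ, curr_occ)
```

In the patent, occupancy is first completed across frames by the two prediction passes; the comparison above only illustrates a final change test.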
-
Publication number: 20250117029
Abstract: Systems and methods for automatic multi-modality sensor calibration with near-infrared (NIR) images. Image keypoints from collected images and NIR keypoints from the NIR images can be detected. A deep-learning-based neural network that learns relation graphs between the image keypoints and the NIR keypoints can match the image keypoints and the NIR keypoints. Three-dimensional (3D) points from 3D point cloud data can be filtered based on corresponding 3D points from the NIR keypoints (NIR-to-3D points) to obtain filtered NIR-to-3D points. An extrinsic calibration can be optimized based on a reprojection error computed from the filtered NIR-to-3D points to obtain an optimized extrinsic calibration for an autonomous entity control system. An entity can be controlled by employing the optimized extrinsic calibration for the autonomous entity control system.
Type: Application
Filed: October 3, 2024
Publication date: April 10, 2025
Inventors: Tom Bu, Bingbing Zhuang, Manmohan Chandraker
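The calibration entry above optimizes an extrinsic based on a reprojection error over filtered NIR-to-3D matches. A much-simplified sketch of such an error term (translation-only extrinsic, pinhole intrinsics, exhaustive candidate search; every name and value is hypothetical, and this is not the patented pipeline):

```python
def project(point3d, extrinsic_t, f=500.0, cx=320.0, cy=240.0):
    # Pinhole projection after a translation-only extrinsic (toy model).
    x, y, z = (p + t for p, t in zip(point3d, extrinsic_t))
    return (f * x / z + cx, f * y / z + cy)

def reprojection_error(matches, extrinsic_t):
    # matches: list of (keypoint_uv, point3d) pairs, e.g. filtered NIR-to-3D points.
    err = 0.0
    for (u, v), p in matches:
        pu, pv = project(p, extrinsic_t)
        err += (pu - u) ** 2 + (pv - v) ** 2
    return err / len(matches)

# Synthetic ground truth: keypoints generated under a known translation.
true_t = (0.1, 0.0, 0.0)
points = [(0.0, 0.0, 2.0), (1.0, 1.0, 4.0), (-1.0, 0.5, 3.0)]
matches = [(project(p, true_t), p) for p in points]

# "Optimize" by picking the candidate with the lowest reprojection error.
candidates = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.2, 0.0, 0.0)]
best_t = min(candidates, key=lambda t: reprojection_error(matches, t))
```

A real calibration would optimize a full 6-DoF extrinsic with a nonlinear least-squares solver rather than a candidate search.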
-
Publication number: 20250118009
Abstract: A computer-implemented method for synthesizing an image includes capturing data from a scene and fusing grid-based representations of the scene from different encodings to inherit beneficial properties of the different encodings. The encodings include a Lidar encoding and a high-definition map encoding. Rays are rendered from the fused grid-based representations. A density and color are determined for points in the rays. A volume rendering is employed for the rays with the density and color. An image is synthesized from the volume-rendered rays with the density and the color.
Type: Application
Filed: October 1, 2024
Publication date: April 10, 2025
Inventors: Bingbing Zhuang, Ziyu Jiang, Buyu Liu, Manmohan Chandraker, Shanlin Sun
-
Publication number: 20250115254
Abstract: Systems and methods for a hybrid motion planner for autonomous vehicles. A multi-lane intelligent driver model (MIDM) can generate trajectory predictions from collected data by considering adjacent lanes of an ego vehicle. A multi-lane hybrid planning driver model (MPDM) can be trained using open-loop ground truth data and closed-loop simulations to obtain a trained MPDM. The trained MPDM can predict planned trajectories with collected data and the trajectory predictions to generate final trajectories for the autonomous vehicles. The final trajectories can be employed to control the autonomous vehicles.
Type: Application
Filed: October 3, 2024
Publication date: April 10, 2025
Inventors: Buyu Liu, Francesco Pittaluga, Bingbing Zhuang, Manmohan Chandraker, Samuel Sohn
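The MIDM above extends the classic intelligent driver model (IDM). As background only, here is the single-lane textbook IDM acceleration formula, not the multi-lane model claimed in this application; all parameter values are illustrative defaults.

```python
import math

def idm_accel(v, v_lead, gap, v0=30.0, T=1.5, a_max=1.0, b=2.0, s0=2.0):
    """Single-lane IDM acceleration (m/s^2).

    v: ego speed, v_lead: lead-vehicle speed, gap: bumper-to-bumper gap (m);
    v0 desired speed, T time headway, a_max/b accel/comfortable-decel limits,
    s0 minimum jam distance."""
    dv = v - v_lead
    s_star = s0 + v * T + v * dv / (2.0 * math.sqrt(a_max * b))  # desired gap
    return a_max * (1.0 - (v / v0) ** 4 - (s_star / gap) ** 2)

# Free road from standstill: near-maximum acceleration.
a_free = idm_accel(0.0, 0.0, 1000.0)
# Closing fast on a stopped lead vehicle at a 5 m gap: strong braking.
a_block = idm_accel(10.0, 0.0, 5.0)
```

Multi-lane variants add terms for vehicles in adjacent lanes, which is where the MIDM departs from this single-lane core.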
-
Publication number: 20250118010
Abstract: A computer-implemented method for synthesizing an image includes capturing data from a scene and decomposing the captured scene into static objects, dynamic objects, and sky. Bounding boxes are generated for the dynamic objects and motion is simulated for the dynamic objects as static movement of the bounding boxes. The dynamic objects and the static objects are merged according to density and color of sample points. The sky is blended into a merged version of the dynamic objects and the static objects, and an image is synthesized from volume-rendered rays.
Type: Application
Filed: October 1, 2024
Publication date: April 10, 2025
Inventors: Ziyu Jiang, Bingbing Zhuang, Manmohan Chandraker
-
Patent number: 12254681
Abstract: Systems and methods are provided for multi-modal test-time adaptation. The method includes inputting a digital image into a pre-trained Camera Intra-modal Pseudo-label Generator, and inputting a point cloud set into a pre-trained Lidar Intra-modal Pseudo-label Generator. The method further includes applying a fast 2-dimensional (2D) model, and a slow 2D model, to the inputted digital image to apply pseudo-labels, and applying a fast 3-dimensional (3D) model, and a slow 3D model, to the inputted point cloud set to apply pseudo-labels. The method further includes fusing pseudo-label predictions from the fast models and the slow models through an Inter-modal Pseudo-label Refinement module to obtain robust pseudo-labels, and measuring a prediction consistency for the pseudo-labels.
Type: Grant
Filed: September 6, 2022
Date of Patent: March 18, 2025
Assignee: NEC Corporation
Inventors: Yi-Hsuan Tsai, Bingbing Zhuang, Samuel Schulter, Buyu Liu, Sparsh Garg, Ramin Moslemi, Inkyu Shin
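To illustrate the flavor of fusing fast/slow pseudo-label predictions and measuring their consistency, here is a generic averaging-plus-majority-vote sketch. It is not the patented Inter-modal Pseudo-label Refinement module; the function, threshold, class names, and probabilities are all made up.

```python
def fuse_pseudo_labels(prob_maps, agree_thresh=0.75):
    """prob_maps: per-model class-probability dicts for one sample
    (e.g. fast/slow 2D and fast/slow 3D predictions).
    Returns (pseudo-label or None if inconsistent, consistency score)."""
    classes = prob_maps[0].keys()
    # Average the probabilities across models, then take the top class.
    fused = {c: sum(p[c] for p in prob_maps) / len(prob_maps) for c in classes}
    label = max(fused, key=fused.get)
    # Consistency: fraction of models whose own top class agrees.
    votes = sum(1 for p in prob_maps if max(p, key=p.get) == label)
    consistency = votes / len(prob_maps)
    return (label if consistency >= agree_thresh else None), consistency

fast_2d = {"car": 0.9, "road": 0.1}
slow_2d = {"car": 0.8, "road": 0.2}
fast_3d = {"car": 0.6, "road": 0.4}
slow_3d = {"car": 0.3, "road": 0.7}
label, consistency = fuse_pseudo_labels([fast_2d, slow_2d, fast_3d, slow_3d])
```

Dropping low-consistency pseudo-labels is a common way to keep test-time adaptation from drifting on unreliable predictions.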
-
Publication number: 20250028153
Abstract: A lens sequentially comprises, from an object side to an image surface: a first lens having negative focal power, an object side surface of the first lens being convex, and an image side surface of the first lens being concave; a second lens having positive focal power, an image side surface of the second lens being convex; a diaphragm; a third lens having positive focal power, an object side surface and an image side surface of the third lens both being convex; a fourth lens having positive focal power, an object side surface and an image side surface of the fourth lens both being convex; and a fifth lens having negative focal power, an object side surface of the fifth lens being concave, an image side surface of the fifth lens being convex, the fourth lens and the fifth lens constituting a cemented lens.
Type: Application
Filed: January 17, 2023
Publication date: January 23, 2025
Inventors: Bingbing LING, Yumin BAO, Linfan ZHUANG, Kemin WANG
-
Patent number: 12205324
Abstract: A computer-implemented method for fusing geometrical and Convolutional Neural Network (CNN) relative camera pose is provided. The method includes receiving two images having different camera poses. The method further includes inputting the two images into a geometric solver branch to return, as a first solution, an estimated camera pose and an associated pose uncertainty value determined from a Jacobian of a reprojection error function. The method also includes inputting the two images into a CNN branch to return, as a second solution, a predicted camera pose and an associated pose uncertainty value. The method additionally includes fusing, by a processor device, the first solution and the second solution in a probabilistic manner using Bayes' rule to obtain a fused pose.
Type: Grant
Filed: November 5, 2021
Date of Patent: January 21, 2025
Assignee: NEC Corporation
Inventors: Bingbing Zhuang, Manmohan Chandraker
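For a single scalar pose parameter with Gaussian uncertainty, fusing two estimates by Bayes' rule reduces to the standard precision-weighted product of Gaussians. The one-dimensional sketch below is an illustration of that rule only (the patent fuses full camera poses; the example values are hypothetical):

```python
def fuse_gaussian(mu_a, var_a, mu_b, var_b):
    """Bayes fusion of two Gaussian beliefs about one scalar:
    precisions add, and the mean is precision-weighted."""
    prec = 1.0 / var_a + 1.0 / var_b
    mu = (mu_a / var_a + mu_b / var_b) / prec
    return mu, 1.0 / prec

# Geometric-solver estimate (e.g. a yaw angle, radians) with high variance,
# CNN estimate with low variance: the fused mean leans toward the CNN.
mu_fused, var_fused = fuse_gaussian(0.10, 0.04, 0.20, 0.01)
```

Note the fused variance is smaller than either input variance, which is the benefit of combining two independent estimates.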
-
Patent number: 12131422
Abstract: A method for achieving high-fidelity novel view synthesis and 3D reconstruction for large-scale scenes is presented. The method includes obtaining images from a video stream received from a plurality of video image capturing devices, grouping the images into different image clusters representing a large-scale 3D scene, training a neural radiance field (NeRF) and an uncertainty multilayer perceptron (MLP) for each of the image clusters to generate a plurality of NeRFs and a plurality of uncertainty MLPs for the large-scale 3D scene, applying a rendering loss and an entropy loss to the plurality of NeRFs, performing uncertainty-based fusion to the plurality of NeRFs to define a fused NeRF, and jointly fine-tuning the plurality of NeRFs and the plurality of uncertainty MLPs, and during inference, applying the fused NeRF for novel view synthesis of the large-scale 3D scene.
Type: Grant
Filed: October 11, 2022
Date of Patent: October 29, 2024
Assignee: NEC Corporation
Inventors: Bingbing Zhuang, Samuel Schulter, Yi-Hsuan Tsai, Buyu Liu, Nanbo Li
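Uncertainty-based fusion can be illustrated for a single ray as an inverse-uncertainty weighted blend of per-NeRF color predictions. This is a toy scalarized sketch; the function name and values are hypothetical, and it is not the patented fusion scheme.

```python
def fuse_nerf_colors(colors, uncertainties):
    """Blend per-NeRF RGB predictions for one ray, weighting each NeRF
    by the inverse of its uncertainty-MLP output (lower uncertainty wins)."""
    weights = [1.0 / u for u in uncertainties]
    total = sum(weights)
    return tuple(
        sum(w * c[i] for w, c in zip(weights, colors)) / total for i in range(3)
    )

# NeRF A (confident, red) vs. NeRF B (less confident, green) on one ray.
fused_rgb = fuse_nerf_colors([(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)], [0.5, 1.0])
```

With uncertainties 0.5 and 1.0, NeRF A receives twice the weight of NeRF B, so the blend leans toward A's prediction.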
-
Patent number: 12131557
Abstract: A computer-implemented method for road layout prediction is provided. The method includes segmenting, by a first processor-based element, an RGB image to output pixel-level semantic segmentation results for the RGB image in a perspective view for both visible and occluded pixels in the perspective view based on contextual clues. The method further includes learning, by a second processor-based element, a mapping from the pixel-level semantic segmentation results for the RGB image in the perspective view to a top view of the RGB image using a road plane assumption. The method also includes generating, by a third processor-based element, an occlusion-aware parametric road layout prediction for road layout related attributes in the top view.
Type: Grant
Filed: November 8, 2021
Date of Patent: October 29, 2024
Assignee: NEC Corporation
Inventors: Buyu Liu, Bingbing Zhuang, Manmohan Chandraker
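The perspective-to-top-view mapping rests on a road plane assumption. A standard geometric back-projection sketch under that assumption (a pinhole camera at a hypothetical height `cam_height` above a flat road, with made-up intrinsics; this is textbook geometry, not the learned mapping described above):

```python
def pixel_to_ground(u, v, f=500.0, cx=320.0, cy=240.0, cam_height=1.5):
    """Back-project pixel (u, v) onto the flat road plane cam_height metres
    below a level camera. Returns (lateral x, forward z) in metres, or None
    for pixels at or above the horizon, which never hit the ground."""
    if v <= cy:
        return None
    z = f * cam_height / (v - cy)  # depth along the optical axis
    x = (u - cx) * z / f           # lateral offset
    return (x, z)

# Pixel straight below the principal point, 250 px under the horizon.
ground_xy = pixel_to_ground(320.0, 490.0)
```

Mapping every road pixel this way rasterizes the perspective segmentation into a top-view (bird's-eye) grid.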
-
Publication number: 20240354921
Abstract: Systems and methods for road defect level prediction. A depth map is obtained from an image dataset received from input peripherals by employing a vision transformer model. A plurality of semantic maps is obtained from the image dataset by employing a semantic segmentation model to give pixel-wise segmentation results of road scenes to detect road pixels. Regions of interest (ROI) are detected by utilizing the road pixels. Road defect levels are predicted by fitting the ROI and the depth map into a road surface model to generate road points classified into road defect levels. The predicted road defect levels are visualized on a road map.
Type: Application
Filed: March 26, 2024
Publication date: October 24, 2024
Inventors: Sparsh Garg, Bingbing Zhuang, Samuel Schulter, Manmohan Chandraker
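Once road points are compared against a fitted road surface model, classifying them into defect levels can be as simple as thresholding the height deviation. The toy classifier below, with made-up thresholds, illustrates that last step only; it is not the patented prediction pipeline.

```python
def defect_level(point_z, surface_z, thresholds=(0.01, 0.05)):
    """Classify one road point by its deviation from the fitted surface height.
    Hypothetical levels: 0 = smooth, 1 = minor defect, 2 = severe defect
    (thresholds in metres, chosen arbitrarily for illustration)."""
    dev = abs(point_z - surface_z)
    if dev < thresholds[0]:
        return 0
    return 1 if dev < thresholds[1] else 2
```

Running this over all road points in an ROI yields the per-point defect levels that would then be visualized on the road map.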
-
Publication number: 20240354583
Abstract: Methods and systems for training a model include annotating a subset of an unlabeled training dataset, that includes images of road scenes, with labels. A road defect detection model is iteratively trained, including adding pseudo-labels to a remainder of examples from the unlabeled training dataset and training the road defect detection model based on the labels and the pseudo-labels.
Type: Application
Filed: March 25, 2024
Publication date: October 24, 2024
Inventors: Sparsh Garg, Samuel Schulter, Bingbing Zhuang, Manmohan Chandraker
-
Publication number: 20240351582
Abstract: Methods and systems for trajectory prediction include encoding trajectories of agents in a scene from past images of the scene. Lane centerlines are encoded for agents in the scene. The agents in the scene are encoded using the encoded trajectories and the encoded lane centerlines. A hypercolumn trajectory is decoded from the encoded agents to generate predicted trajectories for the agents. A vehicle is automatically operated responsive to the predicted trajectories.
Type: Application
Filed: April 18, 2024
Publication date: October 24, 2024
Inventors: Buyu Liu, Sriram Nochur Narayanan, Bingbing Zhuang, Yumin Suh
-
Publication number: 20240303365
Abstract: Systems and methods are provided for privacy-preserving image feature matching in computer vision applications, including receiving a raw image descriptor, and perturbing the raw image descriptor using a subset selection mechanism to generate a perturbed descriptor set that includes the raw image descriptor and additional descriptors. Each descriptor in the perturbed descriptor set is replaced with its nearest neighbor in a predefined descriptor database to reduce the output domain size of the subset selection mechanism. Local differential privacy (LDP) protocols are employed to further perturb the descriptor set, ensuring formal privacy guarantees, and the perturbed descriptor set is matched against a second set of descriptors for image feature matching.
Type: Application
Filed: March 7, 2024
Publication date: September 12, 2024
Inventors: Francesco Pittaluga, Bingbing Zhuang, Xiang Yu
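The nearest-neighbor replacement step, which shrinks the output domain before the LDP mechanism is applied, can be sketched with toy 2-D descriptors. The names and the database below are hypothetical, and this shows only the snapping step, not the full privacy mechanism.

```python
def snap_to_database(descriptor, database):
    """Replace a descriptor with its nearest neighbor in a fixed database,
    so later perturbation only ever outputs database entries."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(database, key=lambda d: sqdist(d, descriptor))

# Tiny predefined descriptor database (real descriptors are high-dimensional).
database = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
snapped = snap_to_database((0.9, 0.1), database)
```

Restricting outputs to a finite database is what makes the subsequent local differential privacy accounting tractable.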
-
Patent number: 11987236
Abstract: A method is provided for 3D object localization that predicts pairs of 2D bounding boxes. Each pair corresponds to a detected object in each of two consecutive input monocular images. The method generates, for each detected object, a relative motion estimation specifying a relative motion between the two images. The method constructs an object cost volume by aggregating temporal features from the two images using the pairs of 2D bounding boxes and the relative motion estimation to predict a range of object depth candidates, a confidence score for each object depth candidate, and an object depth from the object depth candidates. The method updates the relative motion estimation based on the object cost volume and the object depth to provide a refined object motion and a refined object depth. The method reconstructs a 3D bounding box for each detected object based on the refined object motion and refined object depth.
Type: Grant
Filed: August 23, 2021
Date of Patent: May 21, 2024
Assignee: NEC Corporation
Inventors: Pan Ji, Buyu Liu, Bingbing Zhuang, Manmohan Chandraker, Xiangyu Chen
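Turning per-candidate confidence scores into a single object depth is commonly done with a soft-argmax over the depth candidates (softmax the scores, then take the expectation). The sketch below illustrates that generic operation, not necessarily the estimator claimed in the patent; the candidate depths and scores are made up.

```python
import math

def expected_depth(depth_candidates, confidence_scores):
    """Soft-argmax: softmax the confidence scores, then take the
    expectation over the corresponding depth candidates (metres)."""
    exps = [math.exp(s) for s in confidence_scores]
    total = sum(exps)
    return sum(d * e / total for d, e in zip(depth_candidates, exps))

# Three hypothetical depth candidates; the middle one dominates in confidence.
depth = expected_depth([5.0, 10.0, 15.0], [0.0, 10.0, 0.0])
```

Unlike a hard argmax, the soft-argmax is differentiable, so depth supervision can flow back into the cost volume during training.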
-
Publication number: 20240153251
Abstract: Methods and systems for training a model include performing two-dimensional object detection on a training image to identify an object. The training image is cropped around the object. A category-level shape reconstruction is generated using a neural radiance field model. A normalized coordinate model is trained using the training image and ground truth information from the category-level shape reconstruction.
Type: Application
Filed: November 1, 2023
Publication date: May 9, 2024
Inventors: Bingbing Zhuang, Samuel Schulter, Buyu Liu, Zhixiang Min
-
Publication number: 20240153250
Abstract: Methods and systems for training a model include training a size estimation model to generate an estimated object size using a training dataset with differing levels of annotation. Two-dimensional object detection is performed on a training image to identify an object. The training image is cropped around the object. A category-level shape reconstruction is generated using a neural radiance field model. A normalized coordinate model is trained using the training image and ground truth information from the category-level shape reconstruction.
Type: Application
Filed: November 1, 2023
Publication date: May 9, 2024
Inventors: Bingbing Zhuang, Samuel Schulter, Buyu Liu, Zhixiang Min
-
Publication number: 20240071105
Abstract: Methods and systems for training a model include pre-training a backbone model with a pre-training decoder, using an unlabeled dataset with multiple distinct sensor data modalities that derive from different sensor types. The backbone model is fine-tuned with an output decoder after pre-training, using a labeled dataset with the multiple modalities.
Type: Application
Filed: August 22, 2023
Publication date: February 29, 2024
Inventors: Samuel Schulter, Bingbing Zhuang, Vijay Kumar Baikampady Gopalkrishna, Sparsh Garg, Zhixing Zhang
-
Publication number: 20240037187
Abstract: Video methods and systems include extracting features of a first modality and a second modality from a labeled first training dataset in a first domain and an unlabeled second training dataset in a second domain. A video analysis model is trained using contrastive learning on the extracted features, including optimization of a loss function that includes a cross-domain regularization part and a cross-modality regularization part.
Type: Application
Filed: October 11, 2023
Publication date: February 1, 2024
Inventors: Yi-Hsuan Tsai, Xiang Yu, Bingbing Zhuang, Manmohan Chandraker, Donghyun Kim
-
Publication number: 20240037186
Abstract: Video methods and systems include extracting features of a first modality and a second modality from a labeled first training dataset in a first domain and an unlabeled second training dataset in a second domain. A video analysis model is trained using contrastive learning on the extracted features, including optimization of a loss function that includes a cross-domain regularization part and a cross-modality regularization part.
Type: Application
Filed: October 11, 2023
Publication date: February 1, 2024
Inventors: Yi-Hsuan Tsai, Xiang Yu, Bingbing Zhuang, Manmohan Chandraker, Donghyun Kim