Patents by Inventor Bingbing Zhuang
Bingbing Zhuang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250028153
Abstract: A lens sequentially comprises, from an object side to an image surface: a first lens having negative focal power, an object side surface of the first lens being convex and an image side surface being concave; a second lens having positive focal power, an image side surface of the second lens being convex; a diaphragm; a third lens having positive focal power, an object side surface and an image side surface of the third lens both being convex; a fourth lens having positive focal power, an object side surface and an image side surface of the fourth lens both being convex; and a fifth lens having negative focal power, an object side surface of the fifth lens being concave and an image side surface being convex, the fourth lens and the fifth lens constituting a cemented lens.
Type: Application
Filed: January 17, 2023
Publication date: January 23, 2025
Inventors: Bingbing LING, Yumin BAO, Linfan ZHUANG, Kemin WANG
-
Patent number: 12205324
Abstract: A computer-implemented method for fusing geometric and Convolutional Neural Network (CNN) relative camera pose estimates is provided. The method includes receiving two images having different camera poses. The method further includes inputting the two images into a geometric solver branch to return, as a first solution, an estimated camera pose and an associated pose uncertainty value determined from a Jacobian of a reprojection error function. The method also includes inputting the two images into a CNN branch to return, as a second solution, a predicted camera pose and an associated pose uncertainty value. The method additionally includes fusing, by a processor device, the first solution and the second solution in a probabilistic manner using Bayes' rule to obtain a fused pose.
Type: Grant
Filed: November 5, 2021
Date of Patent: January 21, 2025
Assignee: NEC Corporation
Inventors: Bingbing Zhuang, Manmohan Chandraker
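The abstract does not spell out the fusion arithmetic. Under the common assumption that both branches return Gaussian estimates, Bayes' rule reduces to inverse-variance weighting; the sketch below (function name and 6-DoF vector layout are illustrative, not from the patent) shows that form:

```python
import numpy as np

def fuse_poses(pose_geo, var_geo, pose_cnn, var_cnn):
    """Fuse two pose estimates under a Gaussian assumption (Bayes' rule).

    With Gaussian likelihoods, the posterior precision is the sum of the
    branch precisions, and the fused mean is the precision-weighted
    average of the two estimates.
    """
    prec_geo = 1.0 / var_geo              # precision = inverse variance
    prec_cnn = 1.0 / var_cnn
    var_fused = 1.0 / (prec_geo + prec_cnn)
    pose_fused = var_fused * (prec_geo * pose_geo + prec_cnn * pose_cnn)
    return pose_fused, var_fused

# Illustrative 6-DoF pose vectors (3 rotation, 3 translation components)
geo = np.array([0.10, 0.02, 0.01, 1.0, 0.0, 0.5])
cnn = np.array([0.12, 0.00, 0.03, 1.2, 0.1, 0.4])
fused, var = fuse_poses(geo, 0.01, cnn, 0.04)  # geometric branch more certain
```

Because the geometric branch here has a quarter of the CNN branch's variance, the fused pose lands four times closer to it (exactly 0.8·geo + 0.2·cnn).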
-
Patent number: 12131422
Abstract: A method for achieving high-fidelity novel view synthesis and 3D reconstruction for large-scale scenes is presented. The method includes obtaining images from a video stream received from a plurality of video image capturing devices, grouping the images into different image clusters representing a large-scale 3D scene, training a neural radiance field (NeRF) and an uncertainty multilayer perceptron (MLP) for each of the image clusters to generate a plurality of NeRFs and a plurality of uncertainty MLPs for the large-scale 3D scene, applying a rendering loss and an entropy loss to the plurality of NeRFs, performing uncertainty-based fusion on the plurality of NeRFs to define a fused NeRF, and jointly fine-tuning the plurality of NeRFs and the plurality of uncertainty MLPs. During inference, the fused NeRF is applied for novel view synthesis of the large-scale 3D scene.
Type: Grant
Filed: October 11, 2022
Date of Patent: October 29, 2024
Assignee: NEC Corporation
Inventors: Bingbing Zhuang, Samuel Schulter, Yi-Hsuan Tsai, Buyu Liu, Nanbo Li
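The patent does not give the fusion rule in closed form; one natural reading of "uncertainty-based fusion" is an inverse-uncertainty weighted average of the per-cluster renderings of a shared ray or pixel. A minimal sketch under that assumption (function name and shapes are hypothetical):

```python
import numpy as np

def fuse_renderings(colors, uncertainties, eps=1e-8):
    """Fuse per-cluster NeRF renderings of the same ray/pixel.

    colors: (K, 3) RGB predictions from K cluster NeRFs
    uncertainties: (K,) per-prediction uncertainty from each uncertainty MLP
    Returns the inverse-uncertainty weighted average color.
    """
    w = 1.0 / (np.asarray(uncertainties) + eps)   # low uncertainty -> high weight
    w = w / w.sum()                               # normalize weights
    return (w[:, None] * np.asarray(colors)).sum(axis=0)

# Two clusters disagree on a pixel; the more certain one dominates
colors = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
fused = fuse_renderings(colors, [0.1, 0.3])       # weights ~0.75 / ~0.25
```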
-
Patent number: 12131557
Abstract: A computer-implemented method for road layout prediction is provided. The method includes segmenting, by a first processor-based element, an RGB image to output pixel-level semantic segmentation results for the RGB image in a perspective view for both visible and occluded pixels in the perspective view based on contextual clues. The method further includes learning, by a second processor-based element, a mapping from the pixel-level semantic segmentation results for the RGB image in the perspective view to a top view of the RGB image using a road plane assumption. The method also includes generating, by a third processor-based element, an occlusion-aware parametric road layout prediction for road layout related attributes in the top view.
Type: Grant
Filed: November 8, 2021
Date of Patent: October 29, 2024
Assignee: NEC Corporation
Inventors: Buyu Liu, Bingbing Zhuang, Manmohan Chandraker
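The "road plane assumption" is what makes a perspective-to-top-view mapping well defined: a road pixel's viewing ray can be intersected with a flat ground plane. A minimal back-projection sketch under that assumption (the pinhole model, axis convention, and numbers are illustrative, not from the patent):

```python
import numpy as np

def pixel_to_topdown(u, v, K, cam_height):
    """Back-project a road pixel to the ground plane (road plane assumption).

    Assumes an upright pinhole camera with intrinsics K, mounted at
    `cam_height` meters above a flat road, camera y-axis pointing down.
    Returns (x, z): lateral and forward top-view coordinates in meters.
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing-ray direction
    if ray[1] <= 0:
        raise ValueError("pixel is at or above the horizon")
    t = cam_height / ray[1]                         # scale to hit the plane
    point = t * ray                                 # 3D point on the road
    return point[0], point[2]

K = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]])
x, z = pixel_to_topdown(640, 560, K, cam_height=1.5)  # a pixel below the horizon
```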
-
Publication number: 20240354921
Abstract: Systems and methods for road defect level prediction. A depth map is obtained from an image dataset received from input peripherals by employing a vision transformer model. A plurality of semantic maps is obtained from the image dataset by employing a semantic segmentation model that produces pixel-wise segmentation results of road scenes, from which road pixels are detected. Regions of interest (ROI) are detected by utilizing the road pixels. Road defect levels are predicted by fitting the ROI and the depth map into a road surface model to generate road points classified into road defect levels. The predicted road defect levels are visualized on a road map.
Type: Application
Filed: March 26, 2024
Publication date: October 24, 2024
Inventors: Sparsh Garg, Bingbing Zhuang, Samuel Schulter, Manmohan Chandraker
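The abstract leaves the road surface model unspecified; a simple stand-in is a least-squares plane fit, with defect levels assigned from per-point residuals. In this sketch the plane form, the thresholds, and the three-level scheme are all assumptions:

```python
import numpy as np

def classify_road_points(points, thresholds=(0.03, 0.06)):
    """Fit a plane z = a*x + b*y + c to road points by least squares and
    classify each point into a defect level by its residual depth.

    points: (N, 3) array of (x, y, z) road points from the depth map.
    Returns integer levels: 0 = smooth, 1 = minor, 2 = severe (hypothetical
    thresholds in meters; the patent does not specify the levels).
    """
    pts = np.asarray(points, dtype=float)
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]   # design matrix
    coef, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)  # plane coefficients
    residual = np.abs(pts[:, 2] - A @ coef)               # deviation from plane
    return np.digitize(residual, thresholds)

# Mostly flat road with one pothole-like outlier 10 cm below the surface
pts = [(0, 0, 0.0), (1, 0, 0.0), (0, 1, 0.0), (1, 1, 0.0), (0.5, 0.5, -0.10)]
levels = classify_road_points(pts)
```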
-
Publication number: 20240354583
Abstract: Methods and systems for training a model include annotating a subset of an unlabeled training dataset, that includes images of road scenes, with labels. A road defect detection model is iteratively trained, including adding pseudo-labels to a remainder of examples from the unlabeled training dataset and training the road defect detection model based on the labels and the pseudo-labels.
Type: Application
Filed: March 25, 2024
Publication date: October 24, 2024
Inventors: Sparsh Garg, Samuel Schulter, Bingbing Zhuang, Manmohan Chandraker
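The iterative loop described above (train, pseudo-label confident unlabeled examples, retrain) can be sketched with a toy stand-in classifier. The nearest-centroid model, confidence rule, and 1-D data below are illustrative; only the loop structure mirrors the abstract:

```python
import numpy as np

def self_train(x_lab, y_lab, x_unlab, rounds=3, conf_thresh=0.8):
    """Iterative pseudo-labeling with a toy nearest-centroid classifier.

    Each round: fit centroids on labeled + pseudo-labeled data, then add
    pseudo-labels for unlabeled points the classifier is confident about.
    """
    x_lab, y_lab = list(x_lab), list(y_lab)
    pseudo = {}                                    # index -> pseudo-label
    for _ in range(rounds):
        xs = np.array(x_lab + [x_unlab[i] for i in pseudo])
        ys = np.array(y_lab + list(pseudo.values()))
        centroids = {c: xs[ys == c].mean() for c in np.unique(ys)}
        for i, x in enumerate(x_unlab):
            d = {c: abs(x - m) for c, m in centroids.items()}
            best = min(d, key=d.get)               # nearest centroid
            runner_up = min(v for c, v in d.items() if c != best)
            conf = runner_up / (d[best] + runner_up + 1e-9)
            if conf >= conf_thresh:                # confident -> pseudo-label
                pseudo[i] = best
    return pseudo

# Two labeled anchors; two easy unlabeled points get pseudo-labels,
# the ambiguous midpoint (0.5) stays unlabeled.
pseudo = self_train([0.0, 1.0], [0, 1], [0.05, 0.95, 0.5])
```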
-
Publication number: 20240351582
Abstract: Methods and systems for trajectory prediction include encoding trajectories of agents in a scene from past images of the scene. Lane centerlines are encoded for agents in the scene. The agents in the scene are encoded using the encoded trajectories and the encoded lane centerlines. A hypercolumn trajectory is decoded from the encoded agents to generate predicted trajectories for the agents. A vehicle is automatically operated responsive to the predicted trajectories.
Type: Application
Filed: April 18, 2024
Publication date: October 24, 2024
Inventors: Buyu Liu, Sriram Nochur Narayanan, Bingbing Zhuang, Yumin Suh
-
Publication number: 20240303365
Abstract: Systems and methods are provided for privacy-preserving image feature matching in computer vision applications, including receiving a raw image descriptor, and perturbing the raw image descriptor using a subset selection mechanism to generate a perturbed descriptor set that includes the raw image descriptor and additional descriptors. Each descriptor in the perturbed descriptor set is replaced with its nearest neighbor in a predefined descriptor database to reduce the output domain size of the subset selection mechanism. Local differential privacy (LDP) protocols are employed to further perturb the descriptor set, ensuring formal privacy guarantees, and the perturbed descriptor set is matched against a second set of descriptors for image feature matching.
Type: Application
Filed: March 7, 2024
Publication date: September 12, 2024
Inventors: Francesco Pittaluga, Bingbing Zhuang, Xiang Yu
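The first two stages described above can be sketched concretely: hide the raw descriptor among decoys (subset selection), then snap every candidate to its nearest neighbor in a fixed database to shrink the output domain. The final LDP perturbation step is omitted, and all names and sizes here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_descriptor(desc, database, k_extra=3):
    """Sketch of subset selection + nearest-neighbor domain reduction.

    1. Hide the raw descriptor among `k_extra` decoys drawn from the database.
    2. Replace every candidate with its nearest database entry, shrinking
       the output domain (a prerequisite for the LDP step, omitted here).
    """
    decoys = database[rng.choice(len(database), size=k_extra, replace=False)]
    candidates = np.vstack([desc[None, :], decoys])
    rng.shuffle(candidates)                      # hide which row is real
    # nearest-neighbor replacement in the database
    dists = np.linalg.norm(candidates[:, None, :] - database[None, :, :], axis=2)
    return database[dists.argmin(axis=1)]

database = rng.normal(size=(50, 8)).astype(np.float32)
desc = database[7] + 0.01                        # a descriptor near entry 7
out = perturb_descriptor(desc, database)         # 4 rows, all database entries
```

Note that after snapping, every output row is an exact database entry, so an observer cannot distinguish the raw descriptor from the decoys by value alone.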
-
Patent number: 11987236
Abstract: A method is provided for 3D object localization that predicts pairs of 2D bounding boxes, each pair corresponding to a detected object in each of two consecutive input monocular images. The method generates, for each detected object, a relative motion estimation specifying a relative motion between the two images. The method constructs an object cost volume by aggregating temporal features from the two images using the pairs of 2D bounding boxes and the relative motion estimation to predict a range of object depth candidates, a confidence score for each object depth candidate, and an object depth from the object depth candidates. The method updates the relative motion estimation based on the object cost volume and the object depth to provide a refined object motion and a refined object depth. The method reconstructs a 3D bounding box for each detected object based on the refined object motion and refined object depth.
Type: Grant
Filed: August 23, 2021
Date of Patent: May 21, 2024
Assignee: NEC Corporation
Inventors: Pan Ji, Buyu Liu, Bingbing Zhuang, Manmohan Chandraker, Xiangyu Chen
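The abstract does not say how a single depth is read out of the per-candidate confidence scores; a common choice for cost volumes is a softmax-weighted expectation (soft-argmax), sketched here with illustrative numbers:

```python
import numpy as np

def expected_depth(depth_candidates, scores):
    """Collapse per-candidate confidence scores into one object depth
    via a softmax-weighted expectation (soft-argmax) over the candidates.
    """
    s = np.asarray(scores, dtype=float)
    w = np.exp(s - s.max())        # stable softmax numerator
    w /= w.sum()
    return float((w * np.asarray(depth_candidates)).sum())

depths = np.linspace(5.0, 15.0, 11)    # candidate depths in meters
scores = -((depths - 9.0) ** 2)        # confidence peaks near 9 m
d = expected_depth(depths, scores)     # expectation lands at the peak
```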
-
Publication number: 20240153251
Abstract: Methods and systems for training a model include performing two-dimensional object detection on a training image to identify an object. The training image is cropped around the object. A category-level shape reconstruction is generated using a neural radiance field model. A normalized coordinate model is trained using the training image and ground truth information from the category-level shape reconstruction.
Type: Application
Filed: November 1, 2023
Publication date: May 9, 2024
Inventors: Bingbing Zhuang, Samuel Schulter, Buyu Liu, Zhixiang Min
-
Publication number: 20240153250
Abstract: Methods and systems for training a model include training a size estimation model to generate an estimated object size using a training dataset with differing levels of annotation. Two-dimensional object detection is performed on a training image to identify an object. The training image is cropped around the object. A category-level shape reconstruction is generated using a neural radiance field model. A normalized coordinate model is trained using the training image and ground truth information from the category-level shape reconstruction.
Type: Application
Filed: November 1, 2023
Publication date: May 9, 2024
Inventors: Bingbing Zhuang, Samuel Schulter, Buyu Liu, Zhixiang Min
-
Publication number: 20240071105
Abstract: Methods and systems for training a model include pre-training a backbone model with a pre-training decoder, using an unlabeled dataset with multiple distinct sensor data modalities that derive from different sensor types. The backbone model is fine-tuned with an output decoder after pre-training, using a labeled dataset with the multiple modalities.
Type: Application
Filed: August 22, 2023
Publication date: February 29, 2024
Inventors: Samuel Schulter, Bingbing Zhuang, Vijay Kumar Baikampady Gopalkrishna, Sparsh Garg, Zhixing Zhang
-
Publication number: 20240037187
Abstract: Video methods and systems include extracting features of a first modality and a second modality from a labeled first training dataset in a first domain and an unlabeled second training dataset in a second domain. A video analysis model is trained using contrastive learning on the extracted features, including optimization of a loss function that includes a cross-domain regularization part and a cross-modality regularization part.
Type: Application
Filed: October 11, 2023
Publication date: February 1, 2024
Inventors: Yi-Hsuan Tsai, Xiang Yu, Bingbing Zhuang, Manmohan Chandraker, Donghyun Kim
-
Publication number: 20240037186
Abstract: Video methods and systems include extracting features of a first modality and a second modality from a labeled first training dataset in a first domain and an unlabeled second training dataset in a second domain. A video analysis model is trained using contrastive learning on the extracted features, including optimization of a loss function that includes a cross-domain regularization part and a cross-modality regularization part.
Type: Application
Filed: October 11, 2023
Publication date: February 1, 2024
Inventors: Yi-Hsuan Tsai, Xiang Yu, Bingbing Zhuang, Manmohan Chandraker, Donghyun Kim
-
Publication number: 20240037188
Abstract: Video methods and systems include extracting features of a first modality and a second modality from a labeled first training dataset in a first domain and an unlabeled second training dataset in a second domain. A video analysis model is trained using contrastive learning on the extracted features, including optimization of a loss function that includes a cross-domain regularization part and a cross-modality regularization part.
Type: Application
Filed: October 11, 2023
Publication date: February 1, 2024
Inventors: Yi-Hsuan Tsai, Xiang Yu, Bingbing Zhuang, Manmohan Chandraker, Donghyun Kim
-
Patent number: 11694311
Abstract: A computer-implemented method executed by at least one processor for applying rolling shutter (RS)-aware spatially varying differential homography fields for simultaneous RS distortion removal and image stitching is presented. The method includes inputting two consecutive frames including RS distortions from a video stream, performing keypoint detection and matching to extract correspondences between the two consecutive frames, feeding the correspondences between the two consecutive frames into an RS-aware differential homography estimation component to filter out outlier correspondences, sending inlier correspondences to an RS-aware spatially varying differential homography field estimation component to compute an RS-aware spatially varying differential homography field, and using the RS-aware spatially varying differential homography field in an RS stitching and correction component to produce stitched images with removal of the RS distortions.
Type: Grant
Filed: February 23, 2021
Date of Patent: July 4, 2023
Inventors: Bingbing Zhuang, Quoc-Huy Tran
-
Publication number: 20230154104
Abstract: A method for achieving high-fidelity novel view synthesis and 3D reconstruction for large-scale scenes is presented. The method includes obtaining images from a video stream received from a plurality of video image capturing devices, grouping the images into different image clusters representing a large-scale 3D scene, training a neural radiance field (NeRF) and an uncertainty multilayer perceptron (MLP) for each of the image clusters to generate a plurality of NeRFs and a plurality of uncertainty MLPs for the large-scale 3D scene, applying a rendering loss and an entropy loss to the plurality of NeRFs, performing uncertainty-based fusion on the plurality of NeRFs to define a fused NeRF, and jointly fine-tuning the plurality of NeRFs and the plurality of uncertainty MLPs. During inference, the fused NeRF is applied for novel view synthesis of the large-scale 3D scene.
Type: Application
Filed: October 11, 2022
Publication date: May 18, 2023
Inventors: Bingbing Zhuang, Samuel Schulter, Yi-Hsuan Tsai, Buyu Liu, Nanbo Li
-
Publication number: 20230081913
Abstract: Systems and methods are provided for multi-modal test-time adaptation. The method includes inputting a digital image into a pre-trained Camera Intra-modal Pseudo-label Generator, and inputting a point cloud set into a pre-trained Lidar Intra-modal Pseudo-label Generator. The method further includes applying a fast 2-dimensional (2D) model and a slow 2D model to the inputted digital image to apply pseudo-labels, and applying a fast 3-dimensional (3D) model and a slow 3D model to the inputted point cloud set to apply pseudo-labels. The method further includes fusing pseudo-label predictions from the fast models and the slow models through an Inter-modal Pseudo-label Refinement module to obtain robust pseudo-labels, and measuring a prediction consistency for the pseudo-labels.
Type: Application
Filed: September 6, 2022
Publication date: March 16, 2023
Inventors: Yi-Hsuan Tsai, Bingbing Zhuang, Samuel Schulter, Buyu Liu, Sparsh Garg, Ramin Moslemi, Inkyu Shin
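The fast/slow pairing and the consistency measure can be sketched generically: blend the two models' class-probability maps into pseudo-labels and report how often their argmax predictions agree. The blending weight and toy probabilities are assumptions, not from the patent:

```python
import numpy as np

def fuse_pseudo_labels(p_fast, p_slow, alpha=0.5):
    """Blend class-probability maps from a fast (frequently updated) and a
    slow (momentum) model, and measure their prediction consistency.

    Returns (pseudo_labels, consistency), where consistency is the fraction
    of positions on which the two models' argmax predictions agree.
    """
    p_fast, p_slow = np.asarray(p_fast), np.asarray(p_slow)
    fused = alpha * p_fast + (1 - alpha) * p_slow   # blended probabilities
    labels = fused.argmax(axis=-1)                  # fused pseudo-labels
    consistency = float((p_fast.argmax(-1) == p_slow.argmax(-1)).mean())
    return labels, consistency

# Three positions, two classes; the models disagree on the middle one
p_fast = [[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]]
p_slow = [[0.8, 0.2], [0.7, 0.3], [0.1, 0.9]]
labels, consistency = fuse_pseudo_labels(p_fast, p_slow)
```

A low consistency score flags positions (or whole frames) whose pseudo-labels should be down-weighted or discarded during adaptation.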
-
Patent number: 11599974
Abstract: A method for jointly removing rolling shutter (RS) distortions and blur artifacts in a single input RS and blurred image is presented. The method includes generating a plurality of RS blurred images from a camera, synthesizing RS blurred images from a set of global shutter (GS) sharp images, corresponding GS sharp depth maps, and synthesized RS camera motions by employing a structure-and-motion-aware RS distortion and blur rendering module to generate training data to train a single-view joint RS correction and deblurring convolutional neural network (CNN), and predicting an RS rectified and deblurred image from the single input RS and blurred image by employing the single-view joint RS correction and deblurring CNN.
Type: Grant
Filed: November 5, 2020
Date of Patent: March 7, 2023
Inventors: Quoc-Huy Tran, Bingbing Zhuang, Pan Ji, Manmohan Chandraker
-
Patent number: 11455813
Abstract: Systems and methods are provided for producing a road layout model. The method includes capturing digital images having a perspective view, converting each of the digital images into top-down images, and conveying a top-down image of time t to a neural network that performs a feature transform to form a feature map of time t. The method also includes transferring the feature map of the top-down image of time t to a feature transform module to warp the feature map to a time t+1, and conveying a top-down image of time t+1 to form a feature map of time t+1. The method also includes combining the warped feature map of time t with the feature map of time t+1 to form a combined feature map, transferring the combined feature map to a long short-term memory (LSTM) module to generate the road layout model, and displaying the road layout model.
Type: Grant
Filed: November 12, 2020
Date of Patent: September 27, 2022
Inventors: Buyu Liu, Bingbing Zhuang, Samuel Schulter, Manmohan Chandraker