Zhirui Gao, Renjiao Yi, Chenyang Zhu, Ke Zhuang, Wei Chen, Kai Xu (corresponding author)
Abstract
Radiance fields, including NeRFs and 3D Gaussians, demonstrate great potential for high-fidelity rendering and scene reconstruction, but they require a substantial number of posed images as input. COLMAP is frequently employed for preprocessing to estimate poses, yet it requires a large number of feature matches to operate effectively and struggles with scenes characterized by sparse features, large baselines between images, or a limited number of input images. We aim to tackle few-view NeRF reconstruction using only 3 to 6 unposed scene images. Traditional methods often use calibration boards, but these are rarely present in everyday images. We propose the novel idea of utilizing everyday objects, commonly found in both images and real life, as “pose probes”. The probe object is automatically segmented by SAM, and its shape is initialized from a cube. We apply a dual-branch volume rendering optimization (object NeRF and scene NeRF) to constrain the pose optimization and jointly refine the geometry. Specifically, object poses of two views are first estimated by PnP matching in an SDF representation, which serves as the initial poses. PnP matching, requiring only a few feature matches, is well suited to feature-sparse scenes. Additional views are incrementally incorporated to refine poses from preceding views. In experiments, PoseProbe achieves state-of-the-art performance in both pose estimation and novel view synthesis across multiple datasets. We demonstrate its effectiveness, particularly in few-view and large-baseline scenes where COLMAP struggles. In ablations, using different probe objects in a scene yields comparable performance. Our project page is available at: this https URL
1 Introduction
As a milestone in the realm of computer vision and graphics, neural radiance fields (NeRFs) offer an unprecedented capability for photorealistic rendering of scenes from multi-view posed images. The accuracy of novel-view renderings depends heavily on the precision of the input camera poses and the number of input images, limiting applicability in real-world scenarios. Camera poses of the input views are typically recovered with COLMAP (Schönberger and Frahm 2016) in most works. However, with limited and sparse input views, COLMAP may fail to obtain accurate poses due to wide baselines and insufficient feature matches.
To relax the requirement of accurate input poses, many works estimate or refine poses based on various assumptions. For example, NeRFmm (Wang et al. 2021b) focuses on forward-facing scenes where the baseline is relatively small. BARF (Lin et al. 2022) and SPARF (Truong et al. 2023) assume imperfect initial poses instead of fully accurate ones. GNeRF (Meng et al. 2021) assumes a known distribution of camera poses, as does NeRS (Zhang et al. 2021). Furthermore, recent works (Wang et al. 2021b; Lin et al. 2022; Meng et al. 2021) primarily rely on photometric losses to optimize NeRFs and camera poses. In sparse-view cases, the photometric loss becomes insufficient since the 3D reconstruction is under-constrained. To obtain more constraints, Nope-NeRF (Bian et al. 2023) incorporates monocular depth estimation from dense video frames as additional input; still, the requirement of dense input frames does not hold in few-view cases. SPARF (Truong et al. 2023) is designed for few-view inputs but still requires reasonable initial poses. Therefore, reconstructing NeRFs without any pose priors in the few-view setting remains challenging.
A traditional approach is to place a calibration board in the scene to calibrate accurate poses. However, calibration boards are not easily accessible in everyday scenes. This limitation inspires us to explore the potential of utilizing ubiquitous everyday objects, such as Coke cans or boxes, as calibration probes. Such objects are easily found in photos and offer a practical, low-burden alternative. We adopt SAM to automatically segment the probe object from prompts, and simply use a cube as the shape initialization. We find that most objects with simple shapes can be effectively employed as pose probes; as shown in Tab. 6, using different objects in a scene leads to only a slight performance change (within 7% in PSNR).
In this paper, we introduce a pipeline for NeRF reconstruction from few-view (3 to 6) unposed images. The main idea is to leverage everyday objects as pose probes. As shown in Fig. 2, a dual-branch volume rendering optimization workflow is adopted, targeting the probe object and the entire scene respectively. The object branch uses hybrid volume rendering with a signed distance field (SDF) representation to jointly optimize camera poses and object geometry, where the SDF (the geometry of the PoseProbe) is initialized from a cube and deformed by a DeformNet. Multi-view geometric consistency and multi-layer feature consistency are introduced as training constraints. Similarly, the scene branch learns the scene neural representation and refines the camera poses, also in a self-supervised manner.
Specifically, initial camera poses of two views are first obtained using Perspective-n-Point (PnP) matching. Additional views are then added incrementally using PnP as well. Note that PnP matching requires only a few feature matches and thus works in feature-sparse scenarios, where COLMAP often fails due to insufficient feature matches. As tested in Tab. 7, even when initializing with identical poses or very noisy poses (e.g., 30% noise), the method still achieves comparable performance with only a slight drop in metrics. Once all views are acquired, we enable DeformNet to deform the cube into an accurate object shape. Poses are further optimized jointly with DeformNet by both branches to obtain the final results. In this way, we obtain high-quality novel view synthesis and poses, without any pose priors, even for large-baseline and few-view images. As shown in Fig. 1, aided by the proposed pose probe (the Coke can), our method produces realistic novel-view renderings and accurate poses using only three input images, without relying on pose initialization, outperforming both COLMAP-based and COLMAP-free state-of-the-art methods.
The main contributions include:
- We utilize generic objects as pose calibration probes to tackle challenging feature-sparse scenes using only 3 to 6 images, where COLMAP is inapplicable.
- We propose an explicit-implicit SDF representation to efficiently bridge CAD initialization and implicit deformations. The whole pipeline is end-to-end differentiable and fully self-supervised.
- We generate a synthetic dataset and capture a real dataset, and compare the proposed method with state-of-the-art methods across three benchmarks, where our method achieves PSNR improvements of 31.9%, 28.0%, and 42.1% in novel view synthesis, along with significant enhancements in pose metrics. The proposed method successfully handles sparse-view scenes where COLMAP experiences a 67% initialization failure rate.
2 Related Works
Radiance fields with pose optimization. The reliance on high-precision camera poses as input restricts the applicability of NeRFs and 3D Gaussian Splatting (Kerbl et al. 2023) (3DGS). Several studies have sought to alleviate this dependency. NeRF-based techniques utilize neural networks to represent the radiance fields and jointly optimize camera parameters, as demonstrated by early approaches (Wang et al. 2021b; Jeong et al. 2021; Lin et al. 2022; Chng et al. 2021). L2G-NeRF (Chen et al. 2023) and LU-NeRF (Cheng et al. 2023) incorporate local-to-global registration and boost noise resilience. Additionally, CamP (Park et al. 2023) proposes using a proxy problem to compute a whitening transform, which helps refine the initial camera poses. Furthermore, NoPe-NeRF (Bian et al. 2023) adopts monocular depth estimation to learn scene representations independent of pose priors, but faces challenges with sparse inputs. Similar to our method, NeRS (Zhang et al. 2021) employs a category-level shape template to effectively model object shapes and textures; however, it requires pose initialization and cannot render entire scenes. SPARF (Truong et al. 2023) addresses the challenge of NeRFs with sparse-view, wide-baseline input images but requires initial camera poses to be close to the ground truth, which limits its applicability in real-world scenarios. 3DGS-based approaches utilize explicit 3D Gaussians rather than neural networks and have been explored in various studies. CF-3DGS (Fu et al. 2024) and COGS (Jiang et al. 2024) leverage monocular depth estimators to assist in registering camera poses. Recent work (Fan et al. 2024) proposes using an off-the-shelf model (Wang et al. 2024) to compute initial camera poses and achieve sparse-view, SfM-free optimization. However, 3DGS requires an initialized point cloud, which is often difficult to obtain in unconstrained scenes with sparse viewpoints and unknown poses. Consequently, 3DGS-based methods generally rely on pretrained vision models (Wang et al. 2024; Ranftl et al. 2020), significantly increasing complexity.
Novel-view synthesis from few views. To address the challenge of requiring dense input views, various regularization techniques have proven effective in few-view learning. DS-NeRF (Deng et al. 2022) utilizes depth supervision to avoid overfitting. Additionally, appearance regularization (Niemeyer et al. 2022), geometry regularization (Song, Kwak, and Kim 2022; Niemeyer et al. 2022), and frequency regularization (Yang, Pavone, and Wang 2023) have been introduced to optimize the radiance fields. FSGS (Zhu et al. 2023) and SparseGS (Xiong et al. 2023) utilize monocular depth estimators or diffusion models to enhance Gaussian Splatting in sparse-view scenarios. However, these methods assume the availability of ground-truth camera poses, while Structure-from-Motion algorithms often fail with sparse or few inputs, limiting their practical application. Recent studies (Liu et al. 2023a; Shi et al. 2023; Liu et al. 2024) leverage 2D diffusion models to generate 3D models from a single image, but they still face challenges in scene reconstruction. In our approach, we employ geometry regularization to facilitate scene learning with fewer views.
3 Method
Given sparse (as few as 3) unposed images of a scene, we tackle the challenge of photorealistic novel view synthesis and pose estimation by introducing the novel idea of using common objects as pose probes. Our method does not require any pose initialization, since obtaining initial poses is not always convenient and COLMAP may be inapplicable for few-view, feature-sparse scenes. We propose a dual-branch pipeline, illustrated in Fig. 2, which integrates both neural explicit and implicit volume rendering. In the object branch (Sec. 3.1), we utilize neural volume rendering with a hybrid signed distance field (SDF) to efficiently optimize both camera poses and the object representation. In the scene branch (Sec. 3.2), the scene representation is learned with an implicit NeRF while the camera poses are optimized simultaneously. The joint training is introduced in Sec. 3.3.
3.1 Object NeRF with pose estimation
Inspired by the fast convergence of explicit representations (Sun, Sun, and Chen 2022; Wu et al. 2022) while preserving high-quality rendering, we design a neural volume rendering framework similar to DVGO (Sun, Sun, and Chen 2022) for the object branch. To recover high-fidelity shapes and precise camera poses, we discard the density voxel grid and adopt an SDF (Wang et al. 2021a; Fu et al. 2022) as the rendering field. In particular, we design a hybrid explicit-implicit representation of the SDF that assigns any point $\mathbf{x}$ a scalar value $s$:
$s = f(\mathbf{x}), \quad \mathbf{x} \in \mathbb{R}^{3}$   (1)
To better utilize the geometry of the object, the SDF gradient at each point is embedded into the color rendering process:
$\mathbf{c} = f_{\text{color}}\big(\mathbf{x}, \mathbf{d}, \mathbf{n}\big), \qquad \mathbf{n} = \nabla f(\mathbf{x}) \,/\, \|\nabla f(\mathbf{x})\|_{2}$   (2)
Here, the normal $\mathbf{n}$ is computed as the normalized gradient of the SDF, and $\mathbf{d}$ represents the viewing direction. Next, we introduce how to use the pose probe to obtain the initial camera pose for each frame. Following this, we delve into the hybrid explicit-implicit representation of the SDF and discuss strategies for optimizing the neural fields jointly with the camera poses.
Hybrid SDF representation. In our hybrid explicit-implicit SDF generation network, the explicit template field is a non-learnable voxel grid $G_{\text{temp}}$ initialized from the template object, while the implicit deform field is implemented as MLPs that predict a deformation field and a correction field on top of $G_{\text{temp}}$. The voxel grid is initialized with a similar template, and we find that a coarse mesh (e.g., a cube) is sufficient to learn detailed geometry and appearance. We obtain the SDF values in $G_{\text{temp}}$ by computing the closest distance from each voxel center to the template surface and determining whether the point lies inside or outside the object. This process is efficient and takes only a few seconds. The template field provides a strong prior that reduces the search space and enables detailed geometry representation with fewer parameters.
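The template grid can be filled directly from the cube template. Below is a minimal sketch of this initialization, assuming an axis-aligned unit cube centered at the origin and a voxel grid spanning [-1, 1]^3; the resolution and helper names are illustrative, not the exact configuration used in the paper.

```python
# Minimal sketch: fill a voxel grid with the signed distance to a cube template.
import numpy as np

def cube_sdf(points, half_extent=0.5):
    """Signed distance from points (N, 3) to an axis-aligned cube (negative inside)."""
    q = np.abs(points) - half_extent                 # per-axis distance to the faces
    outside = np.linalg.norm(np.maximum(q, 0.0), axis=-1)
    inside = np.minimum(q.max(axis=-1), 0.0)
    return outside + inside

def init_template_grid(resolution=128, bound=1.0):
    """Evaluate the template SDF at every voxel center of a cubic grid."""
    ticks = np.linspace(-bound, bound, resolution)
    zz, yy, xx = np.meshgrid(ticks, ticks, ticks, indexing="ij")
    centers = np.stack([xx, yy, zz], axis=-1).reshape(-1, 3)
    return cube_sdf(centers).reshape(resolution, resolution, resolution)

grid = init_template_grid()
print(grid.shape, grid.min(), grid.max())            # (128, 128, 128), negative inside
```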
For finer shape details, we use an implicit deformation field to refine the coarse SDF. While optimizing an explicit SDF correction voxel grid on top of the template field is a straightforward choice, it limits information sharing and tends to produce degenerate solutions, especially with sparse views. In contrast, an implicit field inherently provides a smooth and continuous representation, beneficial for capturing fine details and complex deformations. Inspired by (Deng, Yang, and Tong 2021), our deform field predicts a deformation vector $\Delta\mathbf{x}$ and a scalar correction value $\Delta s$ for each point $\mathbf{x}$:
$(\Delta\mathbf{x}, \Delta s) = \Psi_{\text{deform}}(\mathbf{x})$   (3)
The final SDF value of any point $\mathbf{x}$ is determined by interpolating the template field $G_{\text{temp}}$ at the deformed location $\mathbf{x} + \Delta\mathbf{x}$, further refined by the correction scalar $\Delta s$. Therefore, the SDF value of point $\mathbf{x}$ is represented as:
$s(\mathbf{x}) = \operatorname{interp}\big(G_{\text{temp}},\, \mathbf{x} + \Delta\mathbf{x}\big) + \Delta s$   (4)
The predicted SDF value in our hybrid representation is used to estimate volume opacity. However, directly using the SDF values from Eqn. 4 is suboptimal for volume rendering, since their scale is predefined manually. To this end, we propose a mapping function with two learnable parameters, $\alpha$ and $\beta$, that rescales the original SDF to a scene-customized scale:
$\widehat{s}(\mathbf{x}) = \alpha \cdot \max\big(s(\mathbf{x}), 0\big) + \beta \cdot \min\big(s(\mathbf{x}), 0\big)$   (5)
where $\alpha$ and $\beta$ are trainable parameters that control the scale of the SDF voxel grid. To ensure that $\alpha$ and $\beta$ remain positive, so that the original SDF sign is maintained, we apply the Softplus activation function to them. The values of $\alpha$ and $\beta$ vary from scene to scene, as illustrated in the supplementary materials. Our hybrid SDF representation merges the advantages of explicit and implicit representations, balancing rapid convergence with detailed modeling.
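For illustration, a compact sketch of the hybrid SDF query (Eqns. 3-5) is given below. The MLP sizes, symbol names, and the piecewise form assumed for the scale mapping in Eqn. 5 are illustrative, not the exact implementation.

```python
# Hedged PyTorch sketch of the hybrid SDF: DeformNet offset + correction,
# trilinear lookup in the template grid, and a sign-preserving learnable rescale.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridSDF(nn.Module):
    def __init__(self, template_grid, bound=1.0):
        super().__init__()
        # Non-learnable template field (D, H, W), e.g., the cube grid sketched above.
        self.register_buffer("template", template_grid[None, None])   # (1, 1, D, H, W)
        self.bound = bound
        # DeformNet: predicts a 3D offset and a scalar correction per query point.
        self.deform_net = nn.Sequential(
            nn.Linear(3, 128), nn.Softplus(), nn.Linear(128, 128),
            nn.Softplus(), nn.Linear(128, 4))
        # Learnable scale parameters, kept positive through Softplus.
        self.raw_alpha = nn.Parameter(torch.zeros(1))
        self.raw_beta = nn.Parameter(torch.zeros(1))

    def forward(self, x):                              # x: (N, 3) in [-bound, bound]
        out = self.deform_net(x)
        delta_x, delta_s = out[:, :3], out[:, 3]       # deformation and correction
        # Trilinear interpolation of the template at the deformed location.
        p = ((x + delta_x) / self.bound).view(1, -1, 1, 1, 3)
        s = F.grid_sample(self.template, p, align_corners=True).view(-1) + delta_s
        # Scene-customized rescaling that preserves the SDF sign (assumed Eqn. 5 form).
        alpha, beta = F.softplus(self.raw_alpha), F.softplus(self.raw_beta)
        return alpha * s.clamp(min=0) + beta * s.clamp(max=0)

# Usage: sdf = HybridSDF(torch.from_numpy(init_template_grid()).float())
#        values = sdf(torch.rand(1024, 3) * 2 - 1)
```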
Incremental pose optimization. We employ an incremental pose optimization approach, introducing a new image into the training loop at fixed intervals. Given the input images and the corresponding masks of the calibration object, the first image is designated as the reference image $I_0$. Multiple projection views around the object are sampled to render mask images, and the view whose mask best matches is selected as the initial pose of the first frame. For each newly added frame $I_k$, we first compute 2D correspondences with the previous image $I_{k-1}$ using SuperPoint (DeTone, Malisiewicz, and Rabinovich 2018) and SuperGlue (Sarlin et al. 2020). The matching pixels in $I_{k-1}$ cast rays to locate the corresponding 3D points on the object, leveraging its already-optimized pose for precise surface positioning; we detail this process in the supplementary. This forms 2D-3D correspondences between the newly added image and the object, allowing PnP with RANSAC to compute the initial pose of image $I_k$. Finally, the newly added views and the radiance field are jointly optimized.
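As an illustration, a minimal sketch of this initialization step with OpenCV's PnP-RANSAC solver is shown below; the helper names and thresholds are assumptions, not the exact implementation.

```python
# Hedged sketch: PnP-with-RANSAC initialization for a newly added view.
# `surface_points_3d` are object-surface points hit by rays cast from the matched
# pixels of the previous (already posed) frame; `matched_pixels_2d` are the
# corresponding pixels in the new frame; K is the 3x3 camera intrinsic matrix.
import cv2
import numpy as np

def init_pose_pnp(surface_points_3d, matched_pixels_2d, K):
    """Estimate an initial world-to-camera pose (4x4) for the new frame."""
    obj = np.asarray(surface_points_3d, dtype=np.float32)   # (N, 3)
    img = np.asarray(matched_pixels_2d, dtype=np.float32)   # (N, 2)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        obj, img, K, distCoeffs=None,
        reprojectionError=3.0, iterationsCount=1000)
    if not ok:
        raise RuntimeError("PnP failed: too few consistent matches")
    R, _ = cv2.Rodrigues(rvec)                               # 3x3 rotation
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T, inliers          # the pose is refined later by the joint optimization
```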
Multi-view geometric consistency. Recently, SCNeRF (Jeong et al. 2021) and SPARF (Truong et al. 2023) proposed using reprojection error to enforce consistency between geometry and camera poses. We adopt a more direct multi-view projection distance to constrain the camera poses. Formally, given an image pair $(I_i, I_j)$ and a matching pixel pair $(\mathbf{p}_i, \mathbf{p}_j)$, we first locate the corresponding surface points $\mathbf{X}_i$ and $\mathbf{X}_j$ using ray casting. The 3D surface points are then projected back into the image coordinates of the other view to minimize the distance between correspondences. The geometric circle projection distance of the pair $(\mathbf{p}_i, \mathbf{p}_j)$ is defined as:
$d_{\text{proj}}(\mathbf{p}_i, \mathbf{p}_j) = \rho\big(\pi_j(\mathbf{X}_i) - \mathbf{p}_j\big) + \rho\big(\pi_i(\mathbf{X}_j) - \mathbf{p}_i\big)$   (6)
where $\pi_i$ and $\pi_j$ denote the camera projection functions of the two views, and $\rho$ denotes the Huber loss function (Hastie et al. 2009). Additionally, based on the prior that rays emitted from feature points should intersect the object, we introduce a regularization term that minimizes the distance between these rays and the surface of the pose probe to refine the camera poses:
$\mathcal{L}_{\text{ray}} = \sum_{\mathbf{p}} \max\big(0,\; d(\mathbf{o}_c, \mathbf{r}_{\mathbf{p}}) - r_{\max}\big)$   (7)
where $d(\mathbf{o}_c, \mathbf{r}_{\mathbf{p}})$ denotes the shortest distance from the object center $\mathbf{o}_c$ to the ray $\mathbf{r}_{\mathbf{p}}$, and $r_{\max}$ represents the maximum radius of the object. Finally, our multi-view geometric consistency objective is formulated as:
$\mathcal{L}_{\text{geo}} = \sum_{(\mathbf{p}_i, \mathbf{p}_j)} w_{ij}\, d_{\text{proj}}(\mathbf{p}_i, \mathbf{p}_j) + \lambda\, \mathcal{L}_{\text{ray}}$   (8)
Here, $w_{ij}$ represents the matching confidence associated with the pair $(\mathbf{p}_i, \mathbf{p}_j)$, and $\lambda$ is set to 10.
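The sketch below illustrates how the two geometric terms can be assembled, assuming world-to-camera poses, a pinhole intrinsic matrix, and the hinge form of the ray regularizer reconstructed above; the names and reductions are illustrative.

```python
# Hedged sketch of the multi-view geometric consistency terms (Eqns. 6-8).
import torch
import torch.nn.functional as F

def project(X, pose_w2c, K):
    """Project world points X (N, 3) into pixel coordinates (N, 2)."""
    Xc = X @ pose_w2c[:3, :3].T + pose_w2c[:3, 3]
    uv = Xc @ K.T
    return uv[:, :2] / uv[:, 2:3].clamp(min=1e-8)

def projection_distance(X_i, X_j, p_i, p_j, pose_i, pose_j, K):
    """Circle projection distance: project each surface point into the other view."""
    d_ij = F.huber_loss(project(X_i, pose_j, K), p_j, reduction="none").sum(-1)
    d_ji = F.huber_loss(project(X_j, pose_i, K), p_i, reduction="none").sum(-1)
    return d_ij + d_ji

def ray_regularizer(ray_o, ray_d, obj_center, r_max):
    """Penalize rays from feature pixels whose closest approach misses the probe."""
    ray_d = F.normalize(ray_d, dim=-1)
    to_center = obj_center - ray_o
    closest = ray_o + (to_center * ray_d).sum(-1, keepdim=True) * ray_d
    dist = (closest - obj_center).norm(dim=-1)
    return torch.clamp(dist - r_max, min=0.0).mean()

def geometric_consistency(X_i, X_j, p_i, p_j, pose_i, pose_j, K,
                          conf, ray_o, ray_d, obj_center, r_max, lam=10.0):
    proj = (conf * projection_distance(X_i, X_j, p_i, p_j, pose_i, pose_j, K)).mean()
    return proj + lam * ray_regularizer(ray_o, ray_d, obj_center, r_max)
```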
Multi-layer feature-metric consistency. Geometric consistency facilitates rapid convergence in camera pose optimization; however, mismatches can produce misleading supervisory signals, potentially trapping the optimization in local optima. Inspired by dense bundle adjustment (Tang and Tan 2018), we introduce a multi-layer feature-metric consistency. This constraint minimizes the feature difference of aligned pixels using dot-product similarity. The multi-layer feature metric associated with a pixel $\mathbf{p}$ in view $i$ is formulated as:
$E_{\text{feat}}(\mathbf{p}) = \frac{1}{NL} \sum_{j=1}^{N} \sum_{l=1}^{L} \Big(1 - \big\langle F_i^{l}(\mathbf{p}),\, F_j^{l}(\hat{\mathbf{p}}_{i \to j}) \big\rangle \Big)$   (9)
where $F^{l}$ are the multi-layer image features extracted by a pretrained VGG (Simonyan and Zisserman 2015), $\hat{\mathbf{p}}_{i \to j}$ is the projection into view $j$ of the surface point hit by the ray through $\mathbf{p}$, $N$ denotes the number of images, and $L$ is the number of layers. Our feature-metric loss $\mathcal{L}_{\text{feat}}$ accumulates this metric over the sampled pixels. We incorporate a visibility mask to remove points that are out of view or occluded in the other perspective: points whose projected pixels fall outside the object masks are considered out of view, while points with invalid depth values are treated as occluded. This constraint involves far more image pixels than the keypoint-only geometric consistency. In contrast to photometric error, which is sensitive to initialization and increases non-convexity (Engel, Koltun, and Cremers 2017), our feature-based consistency loss provides a smoother optimization.
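Below is a hedged sketch of this term: VGG features from a few layers are bilinearly sampled at a pixel and at its projection in another view, and one minus their cosine similarity is accumulated. The chosen layers and sampling details are assumptions for illustration.

```python
# Hedged sketch of the multi-layer feature-metric consistency (Eqn. 9).
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

class FeatureMetric(torch.nn.Module):
    def __init__(self, layer_ids=(3, 8, 15)):          # relu1_2, relu2_2, relu3_3
        super().__init__()
        self.backbone = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features.eval()
        for p in self.backbone.parameters():
            p.requires_grad_(False)
        self.layer_ids = set(layer_ids)

    def extract(self, image):                           # image: (1, 3, H, W) in [0, 1]
        feats, x = [], image
        for idx, layer in enumerate(self.backbone):
            x = layer(x)
            if idx in self.layer_ids:
                feats.append(x)
        return feats

    @staticmethod
    def sample(feat, pixels, hw):
        """Bilinearly sample feature vectors at pixel locations (N, 2)."""
        h, w = hw
        grid = pixels.clone()
        grid[:, 0] = 2 * pixels[:, 0] / (w - 1) - 1      # x to [-1, 1]
        grid[:, 1] = 2 * pixels[:, 1] / (h - 1) - 1      # y to [-1, 1]
        out = F.grid_sample(feat, grid.view(1, -1, 1, 2), align_corners=True)
        return out.view(feat.shape[1], -1).T             # (N, C)

    def loss(self, img_i, img_j, pix_i, pix_j_proj):
        """1 - cosine similarity, averaged over pixels and layers."""
        hw = img_i.shape[-2:]
        total = 0.0
        feats_i, feats_j = self.extract(img_i), self.extract(img_j)
        for f_i, f_j in zip(feats_i, feats_j):
            a = F.normalize(self.sample(f_i, pix_i, hw), dim=-1)
            b = F.normalize(self.sample(f_j, pix_j_proj, hw), dim=-1)
            total = total + (1 - (a * b).sum(-1)).mean()
        return total / len(feats_i)
```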
3.2 Scene NeRF with pose refinement
While training the object NeRF, we simultaneously train a scene NeRF branch. The aim is to learn the neural scene representation while fine-tuning the camera poses. To validate the effectiveness of our proposed modules, we employ a baseline NeRF model with coarse-to-fine positional encoding (Lin et al. 2022). We also use the projection distance loss (Eqn. 6) as an additional constraint in the scene branch. Furthermore, we observe that adding a depth smoothness prior enhances the geometric perception of the scene. Analogous to RegNeRF (Niemeyer et al. 2022), a depth total variation loss over small patches is introduced:
$\mathcal{L}_{\text{ds}} = \sum_{\mathbf{r} \in \mathcal{R}} \sum_{i,j=1}^{S-1} \Big(\hat{d}(\mathbf{r}_{i,j}) - \hat{d}(\mathbf{r}_{i+1,j})\Big)^{2} + \Big(\hat{d}(\mathbf{r}_{i,j}) - \hat{d}(\mathbf{r}_{i,j+1})\Big)^{2}$   (10)
where $\mathcal{R}$ is the set of sampled rays, $\hat{d}(\mathbf{r}_{i,j})$ is the predicted depth of the ray through pixel $(i, j)$ of a patch, and $S$ is the patch size.
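A minimal sketch of this patch-based depth total-variation term, assuming depths are rendered for S x S patches of rays, is shown below.

```python
# Minimal sketch of the depth smoothness loss (Eqn. 10).
import torch

def depth_smoothness_loss(depth_patches):
    """depth_patches: (B, S, S) predicted depth for B sampled patches."""
    d_h = depth_patches[:, 1:, :] - depth_patches[:, :-1, :]   # vertical differences
    d_w = depth_patches[:, :, 1:] - depth_patches[:, :, :-1]   # horizontal differences
    return (d_h ** 2).mean() + (d_w ** 2).mean()

# Usage: render the depth of each sampled patch, then add this loss to the scene branch.
loss = depth_smoothness_loss(torch.rand(16, 8, 8))
```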
3.3 Joint training
The final training objective consists of all losses for the object NeRF and the scene NeRF: $\mathcal{L} = \mathcal{L}_{\text{obj}} + \mathcal{L}_{\text{scene}}$.
Object NeRF. To encourage smoother deformation and prevent large shape distortion, we incorporate a smoothness loss on the deformation field and a minimal correction prior (Deng, Yang, and Tong 2021) on the correction field:
$\mathcal{L}_{\text{def}} = \sum_{\mathbf{x}} \big\|\nabla \Delta\mathbf{x}\big\|_{2} + \big|\Delta s(\mathbf{x})\big|$   (11)
Besides, we add an Eikonal term (Gropp et al. 2020) to regularize the SDF:
$\mathcal{L}_{\text{eik}} = \sum_{\mathbf{x}} \big(\|\nabla f(\mathbf{x})\|_{2} - 1\big)^{2}$   (12)
The overall objective of the object branch is:

$\mathcal{L}_{\text{obj}} = \mathcal{L}_{\text{rgb}} + \lambda_{1}\mathcal{L}_{\text{mask}} + \lambda_{2}\mathcal{L}_{\text{geo}} + \lambda_{3}\mathcal{L}_{\text{feat}} + \lambda_{4}\mathcal{L}_{\text{def}} + \lambda_{5}\mathcal{L}_{\text{eik}}$   (13)
where $\mathcal{L}_{\text{rgb}}$ and $\mathcal{L}_{\text{mask}}$ represent the photometric loss and the mask loss, respectively.
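For illustration, the Eikonal regularizer can be evaluated with automatic differentiation through the hybrid SDF; the sampling of query points is an assumption here.

```python
# Hedged sketch of the Eikonal regularizer (Eqn. 12) on the hybrid SDF.
import torch

def eikonal_loss(sdf_fn, points):
    """Encourage unit-norm SDF gradients at the sampled points (N, 3)."""
    points = points.clone().requires_grad_(True)
    sdf = sdf_fn(points)
    grad = torch.autograd.grad(sdf.sum(), points, create_graph=True)[0]
    return ((grad.norm(dim=-1) - 1.0) ** 2).mean()
```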
Scene NeRF. In the scene training stage, the total loss is:
$\mathcal{L}_{\text{scene}} = \mathcal{L}_{\text{rgb}} + \lambda_{6}\mathcal{L}_{\text{proj}} + \lambda_{7}\mathcal{L}_{\text{ds}}$   (14)
All $\lambda$s denote the balancing weights for the corresponding loss terms; their values can be found in the supplementary.
4 Experiments
In this section, we compare against state-of-the-art baselines for camera pose estimation and novel view synthesis in few-view (3-6) settings on multiple benchmarks. Furthermore, we conduct a series of ablations to assess the effectiveness and robustness of key components. Please refer to the supplementary PDF and videos for more results and details.
4.1 Experimental settings
Datasets. We propose a synthetic dataset (ShapenetScene) and a real-life dataset (CokeBox). The former provides a benchmark with precise poses for the quantitative evaluation of our method, while the latter demonstrates its practical applicability. Additionally, we conduct experiments on the ToyDesk (Yang et al. 2021) and DTU (Jensen et al. 2014) benchmarks. ShapenetScene is generated using BlenderProc (Denninger et al. 2023) and comprises six scenes rendered jointly from SceneNet (Handa et al. 2016) and ShapeNet (Chang et al. 2015). Each scene includes 100 RGB images and corresponding mask images captured 360° around the object. CokeBox contains four sets of densely posed images with 2D instance segmentation of the calibration objects. We use COLMAP (Schönberger and Frahm 2016) and Grounded-SAM (Kirillov et al. 2023; Liu et al. 2023b) to recover the pseudo ground truth camera poses and mask images. ToyDesk, introduced by Object-NeRF (Yang et al. 2021), contains posed images and 2D instance segmentation masks. The images are partitioned into training and testing sets for training and evaluation. For DTU, we follow the dataset splitting protocol of SPARF (Truong et al. 2023) to separate the training and testing sets.
Metrics. For camera pose evaluation, we report the average rotation and translation errors after aligning the optimized poses with the ground truth. For novel view synthesis, we report PSNR, SSIM (Wang et al. 2004), and LPIPS (Zhang et al. 2018) (with AlexNet (Krizhevsky, Sutskever, and Hinton 2012)). We also present the Average metric (the geometric mean of $\mathrm{MSE}=10^{-\mathrm{PSNR}/10}$, $\sqrt{1-\mathrm{SSIM}}$, and LPIPS) following (Yang, Pavone, and Wang 2023).
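For reference, the sketch below illustrates one way to implement this pose evaluation protocol: the estimated trajectory is aligned to the ground truth with a similarity transform and the rotation and translation errors are then averaged. The camera-to-world convention and alignment details are assumptions for illustration.

```python
# Hedged sketch of pose-error evaluation with Umeyama similarity alignment.
import numpy as np

def umeyama_alignment(src, dst):
    """Similarity transform (s, R, t) mapping src points onto dst (both (N, 3))."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    cov = (dst - mu_d).T @ (src - mu_s) / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0
    R = U @ S @ Vt
    var_src = ((src - mu_s) ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t

def pose_errors(R_est, c_est, R_gt, c_gt):
    """R_*: (N, 3, 3) camera-to-world rotations; c_*: (N, 3) camera centers."""
    s, R, t = umeyama_alignment(np.asarray(c_est), np.asarray(c_gt))
    rot_errs, trans_errs = [], []
    for i in range(len(c_est)):
        R_aligned = R @ R_est[i]
        cos = np.clip((np.trace(R_gt[i].T @ R_aligned) - 1) / 2, -1, 1)
        rot_errs.append(np.degrees(np.arccos(cos)))
        trans_errs.append(np.linalg.norm(s * R @ c_est[i] + t - c_gt[i]))
    return np.mean(rot_errs), np.mean(trans_errs)
```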
4.2 Comparison with State-of-the-arts
Table 1: Quantitative comparison on ShapenetScene with 3 and 6 input views (each cell reports 3-view / 6-view).

| Method | Rot. | Trans. | PSNR | SSIM | LPIPS | Average |
|---|---|---|---|---|---|---|
| Nope-NeRF | 14.39 / 16.18 | 8.61 / 19.90 | 12.90 / 14.40 | 0.46 / 0.54 | 0.68 / 0.68 | 0.30 / 0.26 |
| SCNeRF | 10.95 / 9.88 | 7.72 / 14.65 | 16.39 / 16.74 | 0.51 / 0.53 | 0.58 / 0.55 | 0.21 / 0.20 |
| BARF | 8.25 / 13.15 | 10.53 / 10.02 | 17.95 / 18.97 | 0.56 / 0.58 | 0.65 / 0.64 | 0.18 / 0.17 |
| SPARF | 8.41 / 14.48 | 16.27 / 21.45 | 18.29 / 16.57 | 0.65 / 0.56 | 0.55 / 0.58 | 0.18 / 0.20 |
| CF-3DGS | 56.10 / 35.69 | 27.32 / 20.81 | 16.74 / 18.31 | 0.49 / 0.65 | 0.52 / 0.47 | 0.20 / 0.16 |
| Ours | 0.72 / 0.70 | 1.89 / 1.06 | 23.11 / 26.08 | 0.68 / 0.79 | 0.48 / 0.35 | 0.11 / 0.07 |
We compare our method against state-of-the-art pose-free methods, including BARF (Lin et al. 2022), SCNeRF (Jeong et al. 2021), Nope-NeRF (Bian et al. 2023), and SPARF (Truong et al. 2023), as well as CF-3DGS (Fu et al. 2024).
Table 2: Results on ShapenetScene with initial poses perturbed by 15% additive Gaussian noise.

| Method | Rot. | Trans. | PSNR | SSIM | LPIPS | Average |
|---|---|---|---|---|---|---|
| Nope-NeRF | 15.67 | 16.03 | 14.46 | 0.55 | 0.68 | 0.25 |
| SCNeRF | 3.59 | 6.49 | 19.65 | 0.61 | 0.41 | 0.14 |
| BARF | 10.66 | 27.43 | 16.41 | 0.52 | 0.66 | 0.22 |
| SPARF | 6.04 | 10.65 | 21.21 | 0.64 | 0.51 | 0.16 |
| Ours | 1.31 | 2.73 | 25.07 | 0.73 | 0.38 | 0.09 |
Results on ShapenetScene. We evaluate our method and the baselines with 3 and 6 input views. For a fair comparison, the camera poses derived via PnP in our method serve as the initial poses for all NeRF baselines, exhibiting average rotation and translation errors of approximately 35 and 70, respectively. As shown in Tab. 1 and Fig. 3, we observe that most baselines fail to register poses accurately and produce poor novel views, as they rely on good initial poses or dense input views. In Fig. 4, we display the optimized poses of one scene. To further validate the robustness of our method, we experiment akin to SPARF by adding 15% additive Gaussian noise to the ground truth poses as initial estimates, and compare with state-of-the-art methods including BARF, Nope-NeRF, SCNeRF, and SPARF. The perturbed camera poses have average rotation and translation errors of around 15 and 45, respectively. Quantitative results are presented in Tab. 2. BARF and Nope-NeRF continue to struggle with optimizing camera poses, resulting in poor rendering quality. The geometric losses utilized by SCNeRF and SPARF facilitate improved learning of camera poses. However, SCNeRF faces challenges when rendering with few views, and SPARF similarly struggles with sparser input images. In contrast, our method achieves more accurate pose estimation and more realistic renderings both from scratch and from noisy poses, resulting in higher-quality novel views.
Table 3: Results on DTU with 6 input views.

| Method | Rot. | Trans. | PSNR | SSIM | LPIPS | Average |
|---|---|---|---|---|---|---|
| BARF | 15.37 | 45.13 | 10.08 | 0.29 | 0.70 | 0.39 |
| SCNeRF | 11.63 | 19.13 | 12.62 | 0.45 | 0.57 | 0.28 |
| SPARF | 9.66 | 22.89 | 14.34 | 0.50 | 0.51 | 0.25 |
| CF-3DGS | 90.68 | 30.41 | 10.84 | 0.39 | 0.53 | 0.32 |
| Ours | 1.27 | 3.82 | 18.32 | 0.61 | 0.38 | 0.15 |
Results on DTU. We test on the DTU dataset with 6 input views. The PnP camera poses are used as the initial poses for the NeRF baselines to ensure a fair comparison. As shown in Tab. 3 and Fig. 5, our method performs better in both pose estimation and novel-view synthesis. All baselines suffer from blurriness and inaccurate scene geometry, while our approach produces results closer to the ground truth thanks to the pose probe constraint.
Results on real-life datasets. We conduct qualitative and quantitative evaluations in Fig. 6 and Tab. 4, comparing with state-of-the-art methods (BARF, SCNeRF, and SPARF) on the CokeBox and ToyDesk datasets, using only 3 input views. Pseudo ground truth camera poses are recovered from dense image sequences via COLMAP to facilitate training and evaluation. These pseudo poses are used as the initial poses for all baselines, while our approach operates independently of initial poses. Notably, BARF and SCNeRF struggle with view synthesis despite having COLMAP poses. In comparison to SPARF, also initialized with COLMAP poses, our method demonstrates superior performance. For a more intuitive comparison, we also present the results of the baselines initialized with identical poses in Tab. 4.
Table 4: Results on the real-life datasets (CokeBox and ToyDesk) with 3 input views. Values in parentheses are obtained when the baselines are initialized with identical poses instead of COLMAP poses.

| Method | PSNR | SSIM | LPIPS | Average |
|---|---|---|---|---|
| SCNeRF | 17.88 (12.08) | 0.69 (0.49) | 0.34 (0.56) | 0.15 (0.29) |
| BARF | 19.85 (14.08) | 0.53 (0.56) | 0.33 (0.41) | 0.14 (0.22) |
| SPARF | 24.10 (18.25) | 0.66 (0.69) | 0.27 (0.35) | 0.08 (0.15) |
| Ours | 25.95 | 0.76 | 0.23 | 0.07 |
Table 5: Ablation of key components with 6 input views on ShapenetScene.

| Variant | Rot. | Trans. | PSNR | SSIM | LPIPS | Average |
|---|---|---|---|---|---|---|
| w/o Incre. | 11.83 | 10.53 | 17.54 | 0.62 | 0.64 | 0.191 |
| w/o geometric consistency (Eqn. 8) | 12.85 | 12.37 | 17.02 | 0.61 | 0.66 | 0.200 |
| w/o feature consistency (Eqn. 9) | 2.13 | 3.22 | 25.31 | 0.78 | 0.36 | 0.079 |
| w/o depth smoothness (Eqn. 10) | 0.72 | 1.77 | 25.57 | 0.78 | 0.35 | 0.077 |
| w/o DeformNet | 3.14 | 8.56 | 23.74 | 0.76 | 0.39 | 0.093 |
| Full Model | 0.70 | 1.06 | 26.08 | 0.79 | 0.35 | 0.073 |
4.3 Ablations and analysis
Effectiveness of proposed components.As shown in Tab.5, we ablate key modules using 6 input views on ShapenetScene. Incremental pose optimization improves initial poses for new frames by using the optimized poses from previous frames, making overall pose alignment easier. Removing this strategy results in a significant drop in model performance. Geometric consistency loss (Eqn.8) is crucial for guiding camera pose optimization, while feature consistency (Eqn.9) further refines the precision of pose estimation. Omitting these two constraints causes a noticeable decline in performance, as inaccurate poses result in poor novel view synthesis. Depth smoothness regularization (Eqn.10) enhances image quality with minimal impact on pose optimization. Furthermore, DeformNet is integral to our framework, demonstrating that more accurate geometric constraints can yield more precise camera poses, thereby producing higher-quality novel view synthesis.
Table 6: Impact of using different objects in the same scene as pose probes.

| Probe | Rot. | Trans. | PSNR | SSIM | LPIPS | Average |
|---|---|---|---|---|---|---|
| Candy | 1.91 | 1.36 | 18.11 | 0.65 | 0.41 | 0.156 |
| Face | 1.57 | 0.87 | 19.17 | 0.68 | 0.39 | 0.139 |
| Dragon | 0.65 | 0.83 | 19.52 | 0.69 | 0.38 | 0.135 |
Impact of different pose probes. To investigate the impact of different pose probes, we use a scene shown in the last row of Fig.6, in which there are multiple partially-observed objects. We alternately use the toys (Candy, Face, and Dragon) in the scene as pose probes, with all shapes initialized as cubes. As shown in Tab.6, all pose probes work effectively, with Dragon achieving the lowest pose errors because of its richer features.
Robustness to initial poses and matching pairs. Our method utilizes PnP to compute the initial poses of new frames but does not rely on it. We conduct experiments with 3 input views using various pose initialization strategies, as detailed in Tab. 7. Our method maintains comparable performance when using the previous frame's pose as the initialization (identical poses) and remains effective even with large Gaussian noise added to the ground truth poses. PnP initialization accelerates pose convergence, reducing the number of required optimization iterations.
Additionally, we compare the robustness of COLMAP and our PnP method by categorizing the data into sparse (3 views) and dense (6 views) splits. As illustrated in Tab.8, the state-of-the-art COLMAP with SuperPoint and SuperGlue (COLMAP-SP-SG) often fails in the sparse view split due to an insufficient number of feature pairs for pose initialization. Moreover, we verify that our method remains effective even when using only half of the matching pairs, demonstrating that our approach is less dependent on pose matching compared to COLMAP. PnP operates reliably with significantly fewer feature pairs, making it effective for both sparse and dense views. It is worth noting that the COLMAP poses are further refined using SPARF(Truong etal. 2023).
Table 7: Robustness to different pose initialization strategies with 3 input views.

| Pose init. | Iterations | Rot. | Trans. | PSNR | SSIM | LPIPS | Average |
|---|---|---|---|---|---|---|---|
| Identical | 5k | 1.11 | 3.15 | 22.82 | 0.67 | 0.48 | 0.114 |
| 30% noise | 5k | 2.84 | 7.16 | 22.25 | 0.63 | 0.56 | 0.131 |
| 25% noise | 5k | 0.81 | 1.82 | 22.91 | 0.67 | 0.50 | 0.114 |
| 15% noise | 5k | 0.80 | 2.30 | 22.59 | 0.66 | 0.49 | 0.117 |
| PnP | 3k | 0.72 | 1.89 | 23.11 | 0.68 | 0.48 | 0.111 |
Table 8: Robustness of COLMAP and our PnP initialization on the sparse (3-view) and dense (6-view) splits. SR denotes the success rate; Matches denotes the number of feature matches.

| Pose init. | Rot. (sparse) | Trans. (sparse) | SR (sparse) | Matches (sparse) | Rot. (dense) | Trans. (dense) | SR (dense) | Matches (dense) |
|---|---|---|---|---|---|---|---|---|
| COLMAP | - | - | 0.0% | 202 | 3.38 | 8.82 | 83% | 2271 |
| COLMAP-SP-SG | 10.24 | 11.61 | 33% | 499 | 13.58 | 2.32 | 100% | 3208 |
| Ours-50% | 1.97 | 2.72 | 100% | 137 | 1.48 | 2.91 | 100% | 392 |
| Ours | 0.72 | 1.89 | 100% | 274 | 0.70 | 1.06 | 100% | 783 |
5 Conclusion
We propose PoseProbe, a novel pipeline that uses common objects as calibration probes for joint pose-NeRF training, tailored to challenging few-view, large-baseline scenarios where COLMAP is infeasible. A main limitation is that our method only applies to scenarios where the calibration object is present in all input images. We will explore utilizing multiple pose probes to address this limitation.
References
- Bian etal. (2023)Bian, W.; Wang, Z.; Li, K.; Bian, J.-W.; and Prisacariu, V.A. 2023.Nope-nerf: Optimising neural radiance field with no pose prior.In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4160–4169.
- Chang etal. (2015)Chang, A.X.; Funkhouser, T.; Guibas, L.; Hanrahan, P.; Huang, Q.; Li, Z.; Savarese, S.; Savva, M.; Song, S.; Su, H.; etal. 2015.Shapenet: An information-rich 3d model repository.arXiv preprint arXiv:1512.03012.
- Chen etal. (2023)Chen, Y.; Chen, X.; Wang, X.; Zhang, Q.; Guo, Y.; Shan, Y.; and Wang, F. 2023.Local-to-global registration for bundle-adjusting neural radiance fields.In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8264–8273.
- Cheng etal. (2023)Cheng, Z.; Esteves, C.; Jampani, V.; Kar, A.; Maji, S.; and Makadia, A. 2023.LU-NeRF: Scene and Pose Estimation by Synchronizing Local Unposed NeRFs.arXiv preprint arXiv:2306.05410.
- Chng etal. (2021)Chng, S.-F.; Ramasinghe, S.; Sherrah, J.; and Lucey, S. 2021.GARF: Gaussian Activated Radiance Fields for High Fidelity Reconstruction and Pose Estimation.In ICCV.
- Deng etal. (2022)Deng, K.; Liu, A.; Zhu, J.-Y.; and Ramanan, D. 2022.Depth-supervised NeRF: Fewer Views and Faster Training for Free.In CVPR.
- Deng, Yang, and Tong (2021)Deng, Y.; Yang, J.; and Tong, X. 2021.Deformed implicit field: Modeling 3d shapes with learned dense correspondence.In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10286–10296.
- Denninger etal. (2023)Denninger, M.; Winkelbauer, D.; Sundermeyer, M.; Boerdijk, W.; Knauer, M.; Strobl, K.H.; Humt, M.; and Triebel, R. 2023.BlenderProc2: A Procedural Pipeline for Photorealistic Rendering.Journal of Open Source Software, 8(82): 4901.
- DeTone, Malisiewicz, and Rabinovich (2018)DeTone, D.; Malisiewicz, T.; and Rabinovich, A. 2018.SuperPoint: Self-Supervised Interest Point Detection and Description.
- Engel, Koltun, and Cremers (2017)Engel, J.; Koltun, V.; and Cremers, D. 2017.Direct sparse odometry.IEEE transactions on pattern analysis and machine intelligence, 40(3): 611–625.
- Fan etal. (2024)Fan, Z.; Cong, W.; Wen, K.; Wang, K.; Zhang, J.; Ding, X.; Xu, D.; Ivanovic, B.; Pavone, M.; Pavlakos, G.; etal. 2024.Instantsplat: Unbounded sparse-view pose-free gaussian splatting in 40 seconds.arXiv preprint arXiv:2403.20309.
- Fu etal. (2022)Fu, Q.; Xu, Q.; Ong, Y.S.; and Tao, W. 2022.Geo-neus: Geometry-consistent neural implicit surfaces learning for multi-view reconstruction.Advances in Neural Information Processing Systems, 35: 3403–3416.
- Fu etal. (2024)Fu, Y.; Liu, S.; Kulkarni, A.; Kautz, J.; Efros, A.A.; and Wang, X. 2024.COLMAP-Free 3D Gaussian Splatting.In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 20796–20805.
- Gropp etal. (2020)Gropp, A.; Yariv, L.; Haim, N.; Atzmon, M.; and Lipman, Y. 2020.Implicit geometric regularization for learning shapes.arXiv preprint arXiv:2002.10099.
- Handa etal. (2016)Handa, A.; Pătrăucean, V.; Stent, S.; and Cipolla, R. 2016.Scenenet: An annotated model generator for indoor scene understanding.In 2016 IEEE International Conference on Robotics and Automation (ICRA), 5737–5743. IEEE.
- Hastie etal. (2009)Hastie, T.; Tibshirani, R.; Friedman, J.H.; and Friedman, J.H. 2009.The elements of statistical learning: data mining, inference, and prediction, volume2.Springer.
- Jensen etal. (2014)Jensen, R.; Dahl, A.; Vogiatzis, G.; Tola, E.; and Aanæs, H. 2014.Large scale multi-view stereopsis evaluation.In Proceedings of the IEEE conference on computer vision and pattern recognition, 406–413.
- Jeong etal. (2021)Jeong, Y.; Ahn, S.; Choy, C.; Anandkumar, A.; Cho, M.; and Park, J. 2021.Self-calibrating neural radiance fields.In Proceedings of the IEEE/CVF International Conference on Computer Vision, 5846–5854.
- Jiang etal. (2024)Jiang, K.; Fu, Y.; VarmaT, M.; Belhe, Y.; Wang, X.; Su, H.; and Ramamoorthi, R. 2024.A Construct-Optimize Approach to Sparse View Synthesis without Camera Pose.SIGGRAPH.
- Kerbl etal. (2023)Kerbl, B.; Kopanas, G.; Leimkühler, T.; and Drettakis, G. 2023.3D Gaussian Splatting for Real-Time Radiance Field Rendering.ACM Trans. Graph., 42(4): 139–1.
- Kirillov etal. (2023)Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A.C.; Lo, W.-Y.; Dollár, P.; and Girshick, R. 2023.Segment Anything.arXiv:2304.02643.
- Krizhevsky, Sutskever, and Hinton (2012)Krizhevsky, A.; Sutskever, I.; and Hinton, G.E. 2012.Imagenet classification with deep convolutional neural networks.Advances in neural information processing systems, 25.
- Lin etal. (2022)Lin, C.-H.; Ma, W.-C.; Torralba, A.; and Lucey, S. 2022.BARF: Bundle-Adjusting Neural Radiance Fields.In ECCV.
- Liu etal. (2024)Liu, M.; Xu, C.; Jin, H.; Chen, L.; VarmaT, M.; Xu, Z.; and Su, H. 2024.One-2-3-45: Any single image to 3d mesh in 45 seconds without per-shape optimization.Advances in Neural Information Processing Systems, 36.
- Liu etal. (2023a)Liu, R.; Wu, R.; VanHoorick, B.; Tokmakov, P.; Zakharov, S.; and Vondrick, C. 2023a.Zero-1-to-3: Zero-shot one image to 3d object.In Proceedings of the IEEE/CVF international conference on computer vision, 9298–9309.
- Liu etal. (2023b)Liu, S.; Zeng, Z.; Ren, T.; Li, F.; Zhang, H.; Yang, J.; Li, C.; Yang, J.; Su, H.; Zhu, J.; etal. 2023b.Grounding dino: Marrying dino with grounded pre-training for open-set object detection.arXiv preprint arXiv:2303.05499.
- Meng etal. (2021)Meng, Q.; Chen, A.; Luo, H.; Wu, M.; Su, H.; Xu, L.; He, X.; and Yu, J. 2021.GNeRF: GAN-based Neural Radiance Field without Posed Camera.In ICCV.
- Niemeyer etal. (2022)Niemeyer, M.; Barron, J.T.; Mildenhall, B.; Sajjadi, M. S.M.; Geiger, A.; and Radwan, N. 2022.RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs.In CVPR.
- Park etal. (2023)Park, K.; Henzler, P.; Mildenhall, B.; Barron, J.T.; and Martin-Brualla, R. 2023.CamP: Camera Preconditioning for Neural Radiance Fields.ACM Trans. Graph., 42(6).
- Ranftl etal. (2020)Ranftl, R.; Lasinger, K.; Hafner, D.; Schindler, K.; and Koltun, V. 2020.Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer.IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI).
- Sarlin etal. (2020)Sarlin, P.-E.; DeTone, D.; Malisiewicz, T.; and Rabinovich, A. 2020.Superglue: Learning feature matching with graph neural networks.In CVPR.
- Schönberger and Frahm (2016)Schönberger, J.L.; and Frahm, J.-M. 2016.Structure-from-Motion Revisited.In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 4104–4113.
- Shi etal. (2023)Shi, R.; Chen, H.; Zhang, Z.; Liu, M.; Xu, C.; Wei, X.; Chen, L.; Zeng, C.; and Su, H. 2023.Zero123++: a single image to consistent multi-view diffusion base model.arXiv preprint arXiv:2310.15110.
- Simonyan and Zisserman (2015)Simonyan, K.; and Zisserman, A. 2015.Very deep convolutional networks for large-scale image recognition.In 3rd International Conference on Learning Representations (ICLR 2015). Computational and Biological Learning Society.
- Song, Kwak, and Kim (2022)Song, J.; Kwak, M.-S.; and Kim, S. 2022.Neural Radiance Fields with Geometric Consistency for Few-Shot Novel View Synthesis.
- Sun, Sun, and Chen (2022)Sun, C.; Sun, M.; and Chen, H.-T. 2022.Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction.In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5459–5469.
- Tang and Tan (2018)Tang, C.; and Tan, P. 2018.Ba-net: Dense bundle adjustment network.arXiv preprint arXiv:1806.04807.
- Truong etal. (2023)Truong, P.; Rakotosaona, M.-J.; Manhardt, F.; and Tombari, F. 2023.Sparf: Neural radiance fields from sparse and noisy poses.In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4190–4200.
- Wang etal. (2021a)Wang, P.; Liu, L.; Liu, Y.; Theobalt, C.; Komura, T.; and Wang, W. 2021a.Neus: Learning neural implicit surfaces by volume rendering for multi-view reconstruction.arXiv preprint arXiv:2106.10689.
- Wang etal. (2024)Wang, S.; Leroy, V.; Cabon, Y.; Chidlovskii, B.; and Revaud, J. 2024.DUSt3R: Geometric 3D Vision Made Easy.In CVPR.
- Wang etal. (2004)Wang, Z.; Bovik, A.C.; Sheikh, H.R.; and Simoncelli, E.P. 2004.Image quality assessment: from error visibility to structural similarity.IEEE transactions on image processing, 13(4): 600–612.
- Wang etal. (2021b)Wang, Z.; Wu, S.; Xie, W.; Chen, M.; and Prisacariu, V.A. 2021b.NeRF--: Neural Radiance Fields Without Known Camera Parameters.arXiv preprint arXiv:2102.07064.
- Wu etal. (2022)Wu, T.; Wang, J.; Pan, X.; Xu, X.; Theobalt, C.; Liu, Z.; and Lin, D. 2022.Voxurf: Voxel-based efficient and accurate neural surface reconstruction.arXiv preprint arXiv:2208.12697.
- Xiong etal. (2023)Xiong, H.; Muttukuru, S.; Upadhyay, R.; Chari, P.; and Kadambi, A. 2023.SparseGS: Real-Time 360° Sparse View Synthesis using Gaussian Splatting.
- Yang etal. (2021)Yang, B.; Zhang, Y.; Xu, Y.; Li, Y.; Zhou, H.; Bao, H.; Zhang, G.; and Cui, Z. 2021.Learning Object-Compositional Neural Radiance Field for Editable Scene Rendering.In International Conference on Computer Vision (ICCV).
- Yang, Pavone, and Wang (2023)Yang, J.; Pavone, M.; and Wang, Y. 2023.FreeNeRF: Improving Few-shot Neural Rendering with Free Frequency Regularization.In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8254–8263.
- Zhang etal. (2021)Zhang, J.; Yang, G.; Tulsiani, S.; and Ramanan, D. 2021.Ners: Neural reflectance surfaces for sparse-view 3d reconstruction in the wild.Advances in Neural Information Processing Systems, 34: 29835–29847.
- Zhang etal. (2018)Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; and Wang, O. 2018.The Unreasonable Effectiveness of Deep Features as a Perceptual Metric.In CVPR.
- Zhu etal. (2023)Zhu, Z.; Fan, Z.; Jiang, Y.; and Wang, Z. 2023.FSGS: Real-Time Few-Shot View Synthesis using Gaussian Splatting.arXiv:2312.00451.