Description
Hi! I've been reading your code recently, and I'm confused about why you don't flip the y-axis and z-axis here when converting the extrinsic from the OpenCV coordinate system to the Open3D coordinate system:
2d-gaussian-splatting/utils/mesh_utils.py, lines 64 to 66 in 6e21151:

```python
extrinsic=np.asarray((viewpoint_cam.world_view_transform.T).cpu().numpy())
camera = o3d.camera.PinholeCameraParameters()
camera.extrinsic = extrinsic
```
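For clarity, this is the kind of conversion I had expected to see before the assignment to `camera.extrinsic`. It is only my own minimal sketch, assuming the target were an OpenGL-style camera frame; `extrinsic_cv` is a placeholder name, not something from the repo:

```python
import numpy as np

# Hypothetical sketch (not from the repo): negate the camera-frame y and z axes of an
# OpenCV-style world-to-camera matrix by left-multiplying with diag(1, -1, -1, 1).
flip_yz = np.diag([1.0, -1.0, -1.0, 1.0])
extrinsic_cv = np.eye(4)                    # placeholder world-to-camera matrix
extrinsic_flipped = flip_yz @ extrinsic_cv  # camera y and z axes flipped
```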
I noticed that you do apply this flip in the estimate_bounding_sphere function:
2d-gaussian-splatting/utils/mesh_utils.py, lines 125 to 132 in 6e21151:

```python
def estimate_bounding_sphere(self):
    """
    Estimate the bounding sphere given camera pose
    """
    from utils.render_utils import transform_poses_pca, focus_point_fn
    torch.cuda.empty_cache()
    c2ws = np.array([np.linalg.inv(np.asarray((cam.world_view_transform.T).cpu().numpy())) for cam in self.viewpoint_stack])
    poses = c2ws[:,:3,:] @ np.diag([1, -1, -1, 1])
```
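For reference, here is my own reading of what that last line does (a minimal sketch; the helper name `opencv_c2w_to_opengl_c2w` is mine, not from the repo):

```python
import numpy as np

def opencv_c2w_to_opengl_c2w(c2w_cv: np.ndarray) -> np.ndarray:
    """Negate the y and z columns of a 3x4 or 4x4 camera-to-world matrix,
    i.e. turn an OpenCV-style camera frame (x right, y down, z forward)
    into an OpenGL/NeRF-style one (x right, y up, z backward)."""
    return c2w_cv @ np.diag([1.0, -1.0, -1.0, 1.0])
```

So the flip is applied to the camera-to-world poses passed to transform_poses_pca, but not to the extrinsic handed to Open3D.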
I'm confused about the difference. Could you help explain? Thanks!