Dear author,
I wish to test point clouds with fewer than 100k points. Is there a way to preprocess point clouds with < 100k points into h5py? How should I change the script?
Thanks in advance!
Our method does not require more than 100k points as input; that figure is just part of the dataloader and preprocessing setup for the PCPNet dataset specifically. You can check out this script to see how we preprocess the PCPNet data. Our PCPNet preprocessing code is a bit tedious because it was written to adapt that particular dataset, so you don't have to follow it.
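As a rough illustration (not our actual script), packing variable-size point clouds into an HDF5 file with h5py could look something like the sketch below. The file layout and dataset names here are assumptions for illustration; adapt them to your own data.

```python
# Minimal sketch: store point clouds with differing point counts in one
# HDF5 file. One dataset per cloud, since a single fixed-shape array
# cannot hold clouds with different N.
import os
import h5py
import numpy as np

def pack_point_clouds(xyz_paths, out_path):
    """Write each .xyz point cloud as its own HDF5 dataset."""
    with h5py.File(out_path, 'w') as f:
        for path in xyz_paths:
            pts = np.loadtxt(path).astype(np.float32)  # (N, 3), N may be < 100k
            name = os.path.splitext(os.path.basename(path))[0]
            f.create_dataset(name, data=pts)
```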
To load your data, you can write your own preprocessing code and dataloader, as long as the new dataloader provides input data with the same dict keys and the same dimensions as ours. That way, you can preprocess your data however you want.
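For example, a custom dataloader could look like the sketch below, here assuming a PyTorch `Dataset` reading from the HDF5 file above. The dict key `pcl_clean` is a placeholder, not necessarily the key our code expects; match the exact keys and tensor shapes that our dataloader returns.

```python
# Minimal sketch of a custom dataloader that yields dicts per point cloud.
# Replace the placeholder dict keys with the ones the model actually expects.
import h5py
import torch
from torch.utils.data import Dataset

class CustomPointCloudDataset(Dataset):
    def __init__(self, h5_path):
        # Load all clouds into memory up front; fine for small datasets.
        with h5py.File(h5_path, 'r') as f:
            self.names = list(f.keys())
            self.clouds = [torch.from_numpy(f[k][:]) for k in self.names]

    def __len__(self):
        return len(self.clouds)

    def __getitem__(self, idx):
        return {
            'name': self.names[idx],
            'pcl_clean': self.clouds[idx],  # (N, 3) float tensor; placeholder key
        }
```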
One thing to note is that the provided pretrained checkpoints were trained on data with 100k points per object, so when you switch to point clouds with different densities, you will probably need to retrain the network to get the best performance on your data.