I wish I had caught wind of this project a little earlier. I’ve been buried in my basement like a troll with work this month.
Photogrammetry has been part of my work for about the last 6 years. I have scanned quite a few things, on quite a few cameras, ranging from millimeters to shopping malls. If I’m not too late, I’d love to contribute some of my own personal observations, drawn from a variety of successes and failures.
I have primarily used MetaShape (formerly PhotoScan).
Its documentation heavily emphasizes capturing in a static environment, where the object remains still and the camera orbits around it.
I’ve seen many people try to build static camera rigs that point at a rotating object, and I have yet to see one achieve quality results. The major issue is that you’re constantly changing the surface shading, which is the very thing the photogrammetry software uses to orient itself. The other issue is that you create a situation in which the software has to discern between two different spatial states: 1) the static background and 2) the moving object.
Think about it this way: say someone put you in a large nondescript room that had just a few objects in it (bicycle, lamp, filing cabinet) and then pointed North. Then they blindfolded you, moved you to a different position in the room, took off your blindfold, and asked you to point North. It takes a bit of spatial intelligence, but from your mental markers you could likely reorient yourself to North without too much trouble. Now imagine they blindfold you, change your position, and ALSO change the position of some of the objects in the room… that task very quickly becomes more complicated.
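The analogy above can be sketched numerically: recovering your heading from static landmarks is a tiny rigid-alignment problem. This is a purely illustrative 2-D Kabsch/Procrustes fit in NumPy (not what MetaShape actually runs), showing that one rotation explains the view perfectly when the landmarks stay put, and stops fitting when one of them moves:

```python
import numpy as np

def recover_rotation(before, after):
    """Estimate the single rigid rotation mapping `before` landmark
    positions onto `after` (2-D Kabsch/Procrustes fit)."""
    p = before - before.mean(axis=0)
    q = after - after.mean(axis=0)
    h = p.T @ q                      # cross-covariance of the point sets
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    return vt.T @ np.diag([1.0, d]) @ u.T  # proper rotation (no reflection)

# Static landmarks: bicycle, lamp, filing cabinet (arbitrary positions).
landmarks = np.array([[0.0, 0.0], [4.0, 1.0], [1.0, 5.0]])

# The "viewer" turns 30 degrees; every static landmark appears rotated
# by the same amount in viewer coordinates.
theta = np.radians(30)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
seen = landmarks @ rot.T

r = recover_rotation(landmarks, seen)
angle = np.degrees(np.arctan2(r[1, 0], r[0, 0]))
print(round(angle, 1))  # ~30.0: orientation recovered from static cues alone

# Now move one landmark independently (the "dynamic environment"): a
# single rotation no longer explains the observations, and the fit drifts.
seen_moved = seen.copy()
seen_moved[2] += [3.0, -2.0]
r_bad = recover_rotation(landmarks, seen_moved)
angle_bad = np.degrees(np.arctan2(r_bad[1, 0], r_bad[0, 0]))
print(round(angle_bad, 1))  # no longer ~30.0
```

With all landmarks static the solver lands exactly on the true rotation; perturb one and the best-fit answer is pulled away from it, which is the turntable-rig problem in miniature.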
If you keep a static environment, there are a ton of tools to aid you in this process. You may think you’re helping the algorithm by not moving the camera, but the mathematical precision of modern camera tracking is unbelievable. Most importantly, you DON’T need an expensive camera. Half of the scans I’ve done in the last 2 years have been on my iPhone. You don’t need zoom lenses. Almost all photogrammetry software has lens calibration that can automatically handle camera distortion, seeded straight from the EXIF data.
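For a sense of what "straight from the EXIF data" means, here is a minimal sketch (assuming Pillow is installed) that pulls the fields a calibration step would typically seed from: camera make/model and focal length. The tag numbers (271 Make, 272 Model, 0x920A FocalLength in the Exif sub-IFD) are standard EXIF; real packages of course do far more than this:

```python
import os
import tempfile
from PIL import Image  # assumes Pillow is available

def camera_info(path):
    """Read the EXIF fields commonly used to seed lens calibration."""
    exif = Image.open(path).getexif()
    info = {
        "make": exif.get(271),   # EXIF tag 271 = Make
        "model": exif.get(272),  # EXIF tag 272 = Model
    }
    # Focal length lives in the Exif sub-IFD (pointer tag 0x8769).
    sub = exif.get_ifd(0x8769)
    if 0x920A in sub:
        info["focal_length_mm"] = float(sub[0x920A])
    return info

# Round-trip demo with a synthetic one-pixel JPEG carrying EXIF tags.
img = Image.new("RGB", (1, 1))
exif = Image.Exif()
exif[271] = "Apple"
exif[272] = "iPhone 12"
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "demo.jpg")
    img.save(path, exif=exif)
    info = camera_info(path)
print(info)  # make/model read back from the file's EXIF block
```

A phone photo run through this gives the software enough to pick a sensor profile and an initial focal length, after which distortion coefficients are refined during alignment.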
Regardless of your approach, I feel the project should go the mobile phone route for acquiring photos. Phones are readily available, the average quality is better than high-end Sony or Blackmagic film cameras from 5 years ago, they have networking built in, storage built in, and they can be easily replaced and/or upgraded, etc.
Tangentially, I see a lot of really lowball photogrammetry examples on the internet so I set out to do a few Photogrammetry Precision Tests this last summer that were inspired over beers with the super-awesome @Matt_Flego at Zero Gravity. You remember summer, right? Warm, bright.
I probably can’t make it in person this Tuesday, but if everyone is zooming that night I’d be happy to join.