Electronics Meetup/Member Build - OpenScan 1/18/2021 6pm

Hello all,
We will be meeting up Tuesday the 18th @ 6pm to continue work on the open source 3D scanner project.

We will get the pieces assembled and the device working and iron out the details of which software we will move forward with.

As always we will wrap up with any general electronics project questions.

I hope to have a Zoom link, as that worked well last time, so check back here for it as the time approaches.

2 Likes

Zoom Link!
https://us02web.zoom.us/j/85741081793?pwd=N3EwNjluV1RSOXU5Y1k0aDk5andhQT09

Folks,
I spent some time this morning looking at the Raspberry Pi and Arduino versions of the project.

Similarities

  • Both control the stepper motors and can send signals to trigger external cameras.
  • Both require building “Shield” boards to hold two A4988 stepper drivers and the external camera interfaces.

Differences

  • The Arduino version’s user interface is a small LCD panel and push buttons.
  • The Pi Version
    • provides a web interface (WiFi or Ethernet) for job configuration and control.
    • can directly control an 8MP PiCam V2 and store the accumulated images on the Pi’s SD card.
    • supports a ringlight board for the PiCam to provide uniform illumination.

The Pi Version software has been tested on Pi4 and Pi3.

To me, the Pi Version looks like a more complete and extensible solution. However, I’m probably biased by having spent a lot of time developing Raspberry Pi systems.

Useful links

2 Likes

If it doesn’t introduce specific issues beyond general project complexity, I don’t see a reason why we wouldn’t go with a Pi. It seems like a much easier platform to modify should we feel the first version could use some improvements.

1 Like

Camera Research Part 1:
Canon Rebel (T7) ~ $500-800 (depending on bundle!)
Rebel EOS T7 Kit, and its specs.

It would be good for a streaming setup as well, which would put it at a premium but make it multi-purpose. We would need to get a zoom lens for it to mitigate perspective warp (true for all cameras). Canons have many great lens options.

For DSLRs, I don’t think it gets any cheaper (unless we purchase a camera with a locked lens, which I don’t recommend).

DSLRs seem to have an easy way to do shutter control: here is an article about something called “gphoto2”, and here are the officially supported cameras.

Something of note is that the T7 is not on their official list, but the T7i is… and the difference is around $400. I think it might be worth looking for a T6 (refurbished or otherwise): a refurbished one with a zoom lens goes for around $600.

Again, we could ask for this as an investment in both streaming and scanning. It’s also possible we look at places other than Amazon.

Part two will be looking at Pi-Cameras.

1 Like

@RobinLloydMiller Thanks for digging into this! FWIW, the Pi Version is using gphoto for external control of DSLR cameras that support a USB interface. There’s also an opto-isolated trigger output for cameras that only support wired shutter control.
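For anyone curious what driving a DSLR over USB looks like in practice, here is a minimal sketch of shelling out to the gphoto2 CLI from Python. The wrapper functions and the filename are my own illustration; the flags are standard gphoto2 CLI options.

```python
# Minimal sketch: trigger a USB-connected DSLR via the gphoto2 CLI.
# Assumes gphoto2 is installed and the camera is on its supported list.
import subprocess

def gphoto2_capture_cmd(filename: str) -> list:
    """Build the gphoto2 command line for one capture-and-download."""
    return ["gphoto2", "--capture-image-and-download", "--filename", filename]

def capture(filename: str) -> None:
    """Shell out to gphoto2; raises CalledProcessError on failure."""
    subprocess.run(gphoto2_capture_cmd(filename), check=True)

if __name__ == "__main__":
    # Show the command without invoking the (possibly absent) camera.
    print(" ".join(gphoto2_capture_cmd("scan_0001.jpg")))
```

On a Pi you would call `capture()` once per scan position and let gphoto2 handle the USB conversation with the camera.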

@Gary I looked more closely at the docs for Arduino and Pi versions to try to answer your question about “timed control”. I had the impression you were thinking about a completely open loop situation where someone would come in, set up their smartphone to capture images at some fixed interval and program the OpenScan to move to positions at fixed time intervals – the idea being that they would manually start them both at the same time and trust that they would stay in sync for the duration of the shoot. Is that right?

AFAICT, neither the Arduino nor the Pi supports that exactly. You can set a dwell time in either version to make the OpenScan hold still at a position for that duration, but it doesn’t seem to include the time to move from one position to the next. I can see how that might get tricky unless both axes move at the same angular rate and the programmed steps in each axis are equal.

Both versions do support triggering at each new position and holding still for a fixed dwell time.

Let me know if I’ve misunderstood what you were referring to.

1 Like

Mike,
Open loop was what I was thinking. The Arduino with the camera phone is set up to do exactly this.
They should have a very consistent step timing that can be accounted for with a photo-taking app.

That said I am happy to embrace the goodness that is RaspPi

Robin,
I am not ready to shell out for a camera of that quality as much as that would be awesome. I will wait to see some of the other options you can turn up.

@Gary So it looks like we could support open loop in the Pi Version by choosing External mode, setting “Time Per Foto” to the desired interval, and simply ignoring the Release Time value, which, AFAICT, determines how long the shutter control output is active during the photo interval.

A little experimentation should tell us how much extra time, if any, needs to be added on the camera side to compensate for the time to move to the next position.
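As a back-of-the-envelope sketch of that compensation: the phone’s fixed capture interval has to cover the rig’s dwell plus the worst-case move time between positions. The 1.5 s move time and 0.5 s margin below are made-up placeholders to be replaced by measurement.

```python
# Open-loop sync arithmetic: the camera app's interval must be at least
# the rig's dwell ("Time Per Foto") plus the slowest move between positions.
def phone_interval(time_per_foto_s: float, max_move_s: float,
                   margin_s: float = 0.5) -> float:
    """Smallest safe camera interval for a fixed-rate capture app."""
    return time_per_foto_s + max_move_s + margin_s

def shoot_duration(n_positions: int, interval_s: float) -> float:
    """Total run time for a full scan at that fixed interval."""
    return n_positions * interval_s

if __name__ == "__main__":
    iv = phone_interval(2.0, 1.5)       # 4.0 s per photo
    print(shoot_duration(100, iv))      # 400.0 s for a 100-position scan
```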

I’ve posted a query in the OpenScan repo’s issues pages to verify that this will work as expected.

1 Like

I found two ways to remotely/automatically ask a cell phone to take a picture. The OpenScan project uses a Bluetooth button, available on Amazon for c. $8. Other projects use the volume control found on earbuds to act as a shutter control. This second solution seems easier to me as it’s hardwired and doesn’t require a battery. The link below provides the hardware specs for volume control, which could be easily triggered by an Arduino and, I assume, a Pi. The other link shows someone using the VisualSFM software. I’ve not played with Pis before but am interested in learning.

https://source.android.com/devices/accessories/headset/plug-headset-spec
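To sketch what triggering that wired volume control might look like in software: per the spec linked above, the phone sees a volume key as a particular resistance switched across the mic line, so from the controller’s side a “press” is just closing a circuit briefly. The `Pin` class below is a stand-in I made up so the timing logic can be tested; on real hardware you would swap in an actual GPIO driver (e.g. gpiozero on a Pi) switching a transistor or optocoupler.

```python
# Sketch of a momentary "button press" for a wired headset-style shutter.
import time

class Pin:
    """Minimal GPIO stand-in; replace with a real driver on hardware."""
    def __init__(self):
        self.state = False
        self.history = []            # recorded transitions, handy for testing

    def on(self):
        self.state = True
        self.history.append(True)

    def off(self):
        self.state = False
        self.history.append(False)

def press_shutter(pin: Pin, hold_s: float = 0.1) -> None:
    """Close the circuit, hold briefly, then open it again."""
    pin.on()
    time.sleep(hold_s)
    pin.off()

if __name__ == "__main__":
    p = Pin()
    press_shutter(p, 0.01)
    print(p.history)                 # [True, False]
```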

@PikePorter That’s a really interesting video tutorial. I had no idea open source scanning/meshing/rendering software had become so advanced.

The volume control hack sounds pretty straightforward. It seems more easily shared than the Bluetooth remote approach, since that requires pairing and unpairing with each user’s phone.

It’s definitely a lot for a personal purchase, but perhaps Generator might cover it? (On the basis of investing in a product photo camera, which I think has been a conversation floating around.)

1 Like

I wish I had caught wind of this project a little earlier. I’ve been buried in my basement like a troll with work this month.

Photogrammetry has been part of my work for about the last 6 years. I have scanned quite a few things, on quite a few cameras, ranging from millimeters to shopping malls. If I’m not too late, I’d love to contribute some of my own personal observations that come from a variety of success and failure.

I have primarily used MetaShape (formerly PhotoScan).
Its documentation heavily emphasizes capturing in a static environment where the object remains still and the camera orbits around the object.


I’ve seen many people try to build static camera rigs that focus on an orbiting object, and I have yet to see one achieve quality results. The major issue is that you’re constantly changing the surface shading, which is the very thing the photogrammetry software uses to orient itself. The other issue is that you create a situation in which the software has to discern between two different temporal spaces: 1) the static environment and 2) the dynamic object.

Think about it this way: say someone put you in a large nondescript room that had just a few objects in it (bicycle, lamp, filing cabinet) and then pointed north. Then they blindfolded you, moved you to a different position in the room, took off your blindfold, and asked you to point north. It takes a bit of spatial intelligence, but from your mental markers you could likely reorient yourself to north without too much trouble. Now imagine they blindfold you, change your position, and then ALSO change the position of some of the objects in the room… that task very quickly becomes more complicated.

If you keep a static environment there are a ton of tools to aid you in this process. You may think you’re helping the algorithm by not moving the camera, but the mathematical precision of the camera tracking is unbelievable. Most importantly you DON’T need an expensive camera. Half of the scans I’ve done in the last 2 years have been on my iPhone. You don’t need zoom lenses. Almost all photogrammetry software has lens calibration that can automatically handle camera distortion straight from the EXIF data.

Regardless of your approach, I feel the project should go the mobile phone route for acquiring photos. Phones are readily available, their average quality is better than high-end Sony or Blackmagic film cameras from 5 years ago, they have networking and storage built in, and they can be easily replaced and/or upgraded.

Tangentially, I see a lot of really lowball photogrammetry examples on the internet, so I set out to do a few Photogrammetry Precision Tests this last summer, inspired over beers with the super-awesome @Matt_Flego at Zero Gravity. You remember summer, right? Warm, bright. :slight_smile:

I probably can’t make it in person this Tuesday, but if everyone is zooming that night I’d be happy to join.

-m

2 Likes

I think technically your iPhone is a more expensive camera than the DSLR hahah :wink:

That’s great input, Mike! I also have a couple of extra older devices lying around which have cameras and which I’d happily donate to the project, provided they meet its needs.

Regarding the design: I think (and I don’t speak for the rest) that while we have most of the pieces printed, it makes sense to complete this build, and should it not be good enough, we can begin planning for a more robust camera-rotates-object solution. Just my two cents on that.

I absolutely agree with finishing the current build as it was planned first. Beyond just being good maker practice, there isn’t one rig that can do everything, so why not make multiple scanning rigs?

1 Like

I know this is you just poking fun at Apple because it brings you joy, but it should be genuinely acknowledged that 1) the cost of a project is greatly reduced by using what you already have and 2) it’s more environmentally conscious to do so if you can. The cost of a project does not include the materials you already own, and I would wager high stakes that, from your grandpa to your nephew, 95% of people’s best camera is now their phone.

I don’t mean iPhone specifically; Samsung phones have a comparable image quality/cost ratio. I can only type the phrase “non-denominational mobile phone camera” so many times before I assume everybody knows what I’m saying. :wink:

1 Like

@Mike_Senften Wow. That is great input and a very straightforward explanation of why a fixed object with a moving camera is easier to process reliably than the other way round!

I think this argues for keeping our OpenScan hardware build simple, e.g. don’t integrate the PiCam and instead provide a trigger output with plugs that can be used either for the smartphone audio jack approach or for shutter triggering on a conventional SLR.

Thinking about your input makes me envision a neat little robotic vehicle to move a camera around a fixed object within a large cylinder with flat white interior walls to provide a uniform background.

@Gary I got a reply from OpenScanEu about constant-time triggering. He doesn’t recommend it. You can read the full thread at https://github.com/OpenScanEu/OpenScan/issues/14. In brief, here’s what he said:


"""

… at least without changing the ‘toolpath’, this approach would not be very viable, as the time between each position varies quite a bit. A while ago, I optimized the movement pattern, so that the camera positions are equally spaced. (see the first image of this post: https://www.reddit.com/r/OpenScan/comments/k0q2gy/new_routine_for_equally_spaced_camera_positions/)
The ‘downside’ of this approach is, that the movement feels kinda random and sometimes the turntable moves almost 180° between two positions, sometimes it almost does not move at all.

But why would you use a constantly timed triggering mechanism anyway? The firmware supports the use of external camera triggers (cheap modules which are present in any selfie stick)? Or isn’t this an option?

Edit: time per foto does not include the time for the movement and is just a delay after the external camera pin is set to the value high.
Release Time on the other hand defines the time, that the pin is kept on high. Depending on the camera model longer or shorter times might be required.
In the next firmware update, there will a third delay time, defining the time before the external pin is set to high.

"""
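Putting his description into a little timeline model (values illustrative, not taken from the firmware): the trigger pin goes high, stays high for Release Time, and Time Per Foto is an additional delay measured from the moment the pin goes high, after which the rig moves on.

```python
# Model of the external-trigger pin timing for one photo at one position,
# per the OpenScanEu description above. Movement time is NOT included.
def trigger_timeline(release_time_s: float, time_per_foto_s: float):
    """Return (event, time) pairs for a single trigger cycle."""
    return [
        ("pin_high", 0.0),                                   # shutter output asserted
        ("pin_low", release_time_s),                         # Release Time elapsed
        ("move_to_next", max(release_time_s, time_per_foto_s)),  # dwell over
    ]
```

With, say, a 0.2 s Release Time and a 2.0 s Time Per Foto, the pin drops at 0.2 s but the rig holds position until 2.0 s, which matches "just a delay after the external camera pin is set to the value high."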

I’m not familiar with the approach the group is taking regarding the motor sync, and we can perhaps discuss it more tonight, but I made a motor-synced intervalometer back in 2014, not much more than a month after I first started learning Arduino (which is only to say that if I was able to do it back then, it couldn’t have been that hard).

https://vimeo.com/672348959/f2e03af10d

I remember being surprised to find out that 3.5mm-jack-based intervalometers simply close and open a circuit. This build is a simple UNO with a motor hat, a 7-segment display, and a couple of op amps, and that’s it. The object may actually exist somewhere in my basement, and I may even have the code on a drive somewhere as well.

Not that it would necessarily be useful; it seems most cameras (phones included) are moving to Bluetooth triggering.

Just felt like if I could achieve sync as a total n00b, I have confidence that we could do it as a group rather easily. :slight_smile:

-m

Unable to attend in person or virtually tonight. Looking forward to hearing about what was discussed.

Thanks,
Mike

https://drive.google.com/drive/folders/1nYx3FOuG61VXkLqc755CDL3JCTSybp-h?usp=sharing

Link to my monk photos, which should all be uploaded by 8:30 tonight. There are also files here associated with the various programs’ processes. I’m running low on Drive space, so I’ll remove these after two weeks.

1 Like