Project Video Scanner

Hi,

First time posting here.
Since some members here have shown some interest in my work, I decided to post a link to my program here: Project Video Scanner

What is it? Well, it all started at PCars with the idea of getting a cheap alternative to laser scanning, so instead of lasers I decided to build something with two cameras:

So we can get some point clouds on the cheap.
Please visit my site, where you can download the program and run some tests.
I would love to have some feedback. Is it of any interest? Is it practical?
At first the idea was to use the point cloud as a reference; now I'm starting to wonder whether we could extract a surface from it.

Anyway, this is just a leisure project. If it's useful, great; if not, at least I learned something.
 
Looks interesting enough and could really help with the lack of tracks. One question, though: can it do more of the roadside? I notice that a lot of what it does is just the road, with almost nothing beyond it.

I think this could be useful. The textures it appears to lay down are not of the highest quality, but that can always be fixed later on. This could cut track creation time down by a lot. I wonder if it would work on cars, lol.
 
Hi Harey,

We can get more of the roadside if we increase the depth range of the reconstruction.
But we also lose definition.
In the reconstruction step, try increasing the centre of reconstruction from 6 to 8 and the overlap factor from 2 to 4.
We can also do multiple passes over the same road and match the point clouds. That's harder to do, but we could even rotate the cameras to the sides.
As I said, this is an amateur leisure project, so I still have lots of testing and upgrades to do, but the software can be enhanced to go further with the reconstruction.
I'm working on the next version, where we will reconstruct all the frame points but keep only the close ones as we move along and overlap the point clouds.
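
To give an idea of why that trade-off exists, here is a generic OpenCV stereo sketch (not the program's actual code; the file names and matcher settings are just assumptions) showing how keeping more distant points extends coverage to the sides but costs precision:

```python
# Hedged sketch, not Project Video Scanner's actual code: a generic OpenCV
# stereo pass that shows the same depth-range vs. definition trade-off.
import cv2
import numpy as np

# Assumed input: one rectified left/right frame pair from the stereo rig.
left = cv2.imread("left_0001.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_0001.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # must be a multiple of 16
    blockSize=5,
    P1=8 * 5 * 5,
    P2=32 * 5 * 5,
    uniquenessRatio=10,
)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Depth is proportional to baseline * focal_length / disparity, so the depth
# error of a point grows roughly with the square of its distance. Keeping
# points with smaller disparities (a larger depth range) pulls in more of
# the roadside, but those extra points are much less precise: the same
# "more coverage, less definition" trade-off described above.
min_disparity_kept = 4.0   # lower this to reach further out to the sides
mask = disparity > min_disparity_kept
print("points kept:", int(mask.sum()))
```

With the rig's Q matrix from calibration, cv2.reprojectImageTo3D(disparity, Q) would then turn the kept disparities into the per-frame point cloud.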

One of the main problems with this is reflections, so cars are not good subjects. Just take a look at the ones that got reconstructed: glass and reflective surfaces are problematic.

While working on this I was also thinking about the Euclideon tech, or something similar, so we could use the point clouds directly, but that is all just getting started and would imply a new render engine.

Edit: Made some updates to the program to correct some errors today.
 
Hey there,

It looks like you finally resolved the slow twisting/torsion of the reconstruction and now have a nice, flat, correct reconstruction.

I'll download this later and take a look...!


So if I buy the same kind of equipment you have used and mount the cameras high up (say on a farm vehicle, driving slowly and facing backwards), could I expect a bit better range and more ability to reconstruct to the sides (i.e., will the verges and elements at each side of the road come out better)?


I'm very eager to buy some webcams and give this a go!


Thanks

Dave
 

Hi,

I didn't completely solve the errors. It all depends on the calibration quality.
To address that, I created a module where we can correct the camera dislocation.
Here's the interface in action:

[Images: 1.jpg, 2.jpg]

Give it a try, but please remember this is a leisure, amateur project...
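
For anyone wondering what the calibration step involves, it's essentially a standard two-camera calibration against a printed chart. A hedged OpenCV sketch, not the program's internal routine (the chessboard pattern, corner count and file names are assumptions):

```python
# Hedged sketch of a standard chessboard stereo calibration with OpenCV.
# Not Project Video Scanner's internal routine; pattern size and file
# names are assumptions.
import glob
import cv2
import numpy as np

pattern = (9, 6)   # inner corners of the printed chart
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, left_pts, right_pts = [], [], []
for lf, rf in zip(sorted(glob.glob("cal_left_*.png")),
                  sorted(glob.glob("cal_right_*.png"))):
    gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    okl, cl = cv2.findChessboardCorners(gl, pattern)
    okr, cr = cv2.findChessboardCorners(gr, pattern)
    if okl and okr:
        obj_pts.append(objp)
        left_pts.append(cl)
        right_pts.append(cr)

size = gl.shape[::-1]
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)

# R and T describe the rotation/translation between the two cameras; a poor
# estimate here is exactly what later shows up as drift in the point cloud.
rms, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, d1, K2, d2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
print("RMS reprojection error:", rms)   # aim for well under one pixel
```

The key output is the relative rotation/translation between the two cameras, which is why moving them on the mount means recalibrating.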

I would love to try this with the Euclideon engine; it could be interesting.
That's all based on point clouds. Maybe in the future.
 
Hmm, does the calibration process require that the cameras are fixed relative to each other from video to video? Thinking about how to attach them to my bike handlebars, I'd want to be able to take them off when not using them. I would probably just clamp them on the ends in about the same place every time, then start the capture software and put my laptop in a backpack. It would be nice if I could make that kind of minor adjustment without a complete recalibration.
 

You have to recalibrate them.
Why not put them on a bar and then mount the bar on the bike handlebars?
From my tests, using a car is smoother. I tried my daughter's baby car; vibration and sudden movements make the camera dislocation almost impossible to compute.
If you do it, keep it smooth: take slow, wide corners and avoid vibrations.
 
Hrmmm, it looks like £120 for a pair of cameras, which isn't so bad if they're what you've used and they work OK.

Laptop/s, check.

Vehicles to mount them onto. Check.

So the main thing stopping me is a calibration chart, which I'd have to make up.

Then go do some testing.



The 'correction' software looks interesting. How does it work, exactly? Do you just choose certain frames and adjust their rotation/pitch error, with the frames between these corrections linearly interpolated (assuming the drift is generally linear)?

I'm still wondering how hard it'd be to use more distant object tracking, say 100 frames ahead, to try to solve the drift issues a bit more.


Anyway, off to think about the kit I need to buy and building a rig :D

Dave
 

Hi,

The correction software works as you say. You choose a frame and correct the rotations you want for that frame.
You can fix one or more angles on any frame, and when you change a value on another frame the correction is applied relative to the fixed one.
Correction should really be done with angles only, but I added translations as well.
The translations are applied after the rotations, so it's always better to start with the rotations.
The translations work like the rotations, in that you change the values at a given frame but can fix them at other frames.
The translations are really delta translations, applied to the displacement after the rotation correction.
For example, if you move a specific frame up by a value of 100 and then change the rotation afterwards, the value at that frame is the result of the rotation modification plus 100.
I hope I made myself clear... Maybe not..
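
In rough Python-like terms, the model is something like the sketch below. This is only an illustration of the idea (the frame numbers, angle order and interpolation details are assumptions, not the program's actual code):

```python
# Hedged sketch of the correction model described above: keyframed angle
# corrections, interpolated between frames, with delta translations applied
# after the rotation. Values, frame numbers and axis order are made up.
import numpy as np
from scipy.spatial.transform import Rotation

# Keyframe corrections chosen by the user: frame -> (roll, pitch, yaw) in degrees.
angle_keys = {0: (0.0, 0.0, 0.0), 300: (0.0, -1.5, 0.0), 600: (0.5, -2.0, 0.0)}
# Delta translations, applied *after* the rotation correction.
trans_keys = {0: (0.0, 0.0, 0.0), 600: (0.0, 100.0, 0.0)}

def interp_keys(keys, frame):
    """Linearly interpolate keyframed values for an arbitrary frame."""
    frames = sorted(keys)
    vals = np.array([keys[f] for f in frames], dtype=float)
    return np.array([np.interp(frame, frames, vals[:, i])
                     for i in range(vals.shape[1])])

def correct_position(frame, pos):
    """Rotation correction first, then the delta translation on top."""
    roll, pitch, yaw = interp_keys(angle_keys, frame)
    rot = Rotation.from_euler("xyz", [roll, pitch, yaw], degrees=True)
    corrected = rot.apply(pos)                          # rotated position
    return corrected + interp_keys(trans_keys, frame)   # plus the delta (e.g. +100 up)

print(correct_position(450, np.array([10.0, 0.0, 50.0])))
```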

I'm currently applying some corrections reported by other users. Please keep your files updated, as this was released just a few days ago and it was full of bugs... :O_o:
 
Hi there,

That makes sense to me I think, and it sounds like a good approach.

I have a road I can test this on. I've got GPS traces, vector maps, aerial imagery, digital terrain models etc so I can see how the data lines up and let you know!

Hopefully with GPS traces over large distances (7km or so and ~ 150m altitude changes over that range) I can tweak any subtle drift over the large distances but retain quality data over the smaller distances.


My goal here is just to use it to build over the point cloud as a reference... but surfacing just the road points would be useful so I can conform a road quad mesh onto the surface!
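
For what it's worth, the kind of conforming I have in mind could be as simple as snapping mesh vertex heights to nearby cloud points. A rough sketch (the file names, 0.5 m radius and z-up axis are just assumptions):

```python
# Hedged sketch of conforming a road quad mesh onto the point cloud: snap
# each mesh vertex's height to the median height of nearby cloud points.
# File names and the up-axis (z) are assumptions.
import numpy as np
from scipy.spatial import cKDTree

cloud = np.loadtxt("road_points.xyz")        # N x 3 point cloud (x, y, z)
verts = np.loadtxt("road_mesh_verts.xyz")    # M x 3 quad-mesh vertices

tree = cKDTree(cloud[:, :2])                 # search in plan (x, y) only
for i, v in enumerate(verts):
    idx = tree.query_ball_point(v[:2], r=0.5)    # cloud points within 0.5 m
    if idx:
        verts[i, 2] = np.median(cloud[idx, 2])   # conform height to the cloud

np.savetxt("road_mesh_verts_conformed.xyz", verts)
```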

I'll be sure to let you know how I get on and report back with my example footage for you to review and use to help improve your tool if that is useful to you!


Thanks for sharing :D

Dave
 
Very impressive stuff, FlyPT. This is something I've wanted for quite a while; I've tried many different techniques, but none are as quick as this appears to be.
I wonder how well a quadcopter would work?
You say this is not very good for digitizing cars, but the cars beside the road don't look that bad. If reflection is a problem, we could always just blow a powder onto the car to matte it out.
 
I agree wrt car scanning.

A car can be dirty and/or have features taped onto it for tracking.

But it should be very nice for car interiors, and those are harder to get accurate than exteriors, too!


Not sure about the quadcopter; I suppose you'd need a way to save the data via a transmitter, or maybe on board the quadcopter via a super-light PC like a Raspberry Pi?!

I like the idea of using a pair of SLRs at, say, 5 fps with a high-ish ISO and a very fast shutter speed on constant exposure. I think the main issue is getting them in sync... say, 6Ds with 50mm prime lenses, and you'd get fantastic-quality images with zero blur or rolling-shutter-type distortions, etc.



The main bonus here seems to be the nice collection of road surface data. Single-camera SFM-type collections don't work so well, but stereo appears to track a whole bunch more points on the fairly 'blank' road surface.


Exciting stuff!

Dave
 
Hmmm, I just had a play with the example data and then brought it into Max for review.

The corrections are quite nice indeed; you can tweak the cloud to get it looking quite accurate.

However, choosing the right values for the corrections is quite hard. Roll is easy, because we generally travel through a world where things are built vertically (building sides, plants grow vertically, the horizon is generally flat, etc.)...
The problem I'm seeing the most here is pitch.

I'm not sure what the solution is, but it'd be nice if there were some way to mark straight lines in the footage, so that at certain frames we could draw a line along building verticals and horizontals as guides (say, gutters and corners of buildings or sign posts for verticals, and wall tops for horizontals).

That way you can just scrub through the frames and find useful buildings and features to use as registration guides to correct the pitch/roll through the video.
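
That sort of thing might even be semi-automatable. A rough OpenCV sketch of the idea, purely as an illustration (the frame file name and thresholds are made up), estimating roll from features that should be vertical:

```python
# Hedged sketch: estimate the roll of one frame from near-vertical edges
# (building sides, posts). Not part of the tool; the thresholds and the
# frame file name are assumptions.
import cv2
import numpy as np

frame = cv2.imread("frame_0420.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(frame, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=60, maxLineGap=5)

tilts = []
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        # Angle of the segment measured from the vertical image axis.
        ang = np.degrees(np.arctan2(x2 - x1, y2 - y1))
        ang = (ang + 90.0) % 180.0 - 90.0     # wrap into [-90, 90)
        if abs(ang) < 10.0:                   # keep roughly vertical segments
            tilts.append(ang)

# The median tilt of "should-be-vertical" features approximates the roll
# correction needed for this frame.
print("estimated roll (deg):", float(np.median(tilts)) if tilts else None)
```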

I'm not sure how you'd correct for the other variables.

Perhaps you could have a 'top' view in the correction software where you can load a map or an aerial photo and then use it as a guide to make yaw corrections?


Either way, this tool is very, very impressive already. With many hours spent you could have no problem making a nicely corrected point cloud.
It'd just be nice to have better ways to make the corrections :D


Thanks for sharing this amazing software!

Dave
 
Hey Pedro,

I was just playing with old SFM techniques with your example data.

I managed to use only every 10th pair and still reconstruct pretty well here.

Obviously it's a LOT slower this way, but given that the cameras are checking tracking points over a wider range of frames, there is little to no drift/torsion in the reconstruction.

Here are two images showing the reconstruction.

[Images: 1.JPG, 2.JPG]



I'm not sure how hard it is, but I think if you could pick out, say, every 30th frame (every second), reconstruct the camera positions only, and use those as a guide for the rotational and positional correction values of the stereo camera positions, it would be better.

OK, in the end it's speed vs. accuracy, but for now I think you can give up some speed to get back accuracy.

I.e., if the correction system runs a check every 1 s (or 30 frames) to set its values, you should see only a small impact on speed but a huge increase in accuracy.


I think a hybrid approach is probably best: stereo pairs are very good for dense point reconstruction, and SFM is very good for resisting drift and torsion.
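
As a rough illustration of what I mean by the SFM side (not anything from the tool; the intrinsics, file names and frame step are assumptions), sparse keyframe poses could come from a standard feature-matching step like this:

```python
# Hedged sketch of the hybrid idea: estimate relative camera poses between
# sparse keyframes (every 30th left frame) with a monocular SFM-style step,
# to anchor the dense stereo chain against drift. K and file names are
# assumptions, not the tool's API.
import cv2
import numpy as np

K = np.array([[700.0, 0, 640], [0, 700.0, 360], [0, 0, 1]])  # assumed intrinsics
frames = [cv2.imread(f"left_{i:04d}.png", cv2.IMREAD_GRAYSCALE)
          for i in range(0, 900, 30)]                        # every 30th frame

orb = cv2.ORB_create(4000)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

poses = []
for a, b in zip(frames[:-1], frames[1:]):
    kpa, da = orb.detectAndCompute(a, None)
    kpb, db = orb.detectAndCompute(b, None)
    matches = bf.match(da, db)
    pa = np.float32([kpa[m.queryIdx].pt for m in matches])
    pb = np.float32([kpb[m.trainIdx].pt for m in matches])
    E, inliers = cv2.findEssentialMat(pa, pb, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pa, pb, K, mask=inliers)
    poses.append((R, t))   # keyframe-to-keyframe rotation and (unit) translation

# These sparse R, t estimates could then feed the correction keyframes
# instead of eyeballing rotations frame by frame.
print(len(poses), "keyframe pose estimates")
```

The translations come out only up to scale, so the dense stereo side would still provide the real-world scale, which is what makes the hybrid split sensible.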


I hope this all makes sense, and you are still enjoying working on your software! The potential is still amazing if you keep making it better and better!


Thanks again for all your hard work.

If you want any example clouds/data just let me know!


Thanks

Dave
 
Hehe FlyPT :),

I was one of the first people to talk about this, over a year ago...

Now, the real 'stuff' is to clean the mesh topology and write a program which is 'aware' of it and somehow rebuilds the whole 3D model in a clean way.

That means we need algorithms that detect mesh boundaries, mesh orientation (for proper UV projections later), etc., for proper 3D mesh regeneration.
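
As a baseline before any custom algorithm, an off-the-shelf surface reconstruction is worth a look. A hedged sketch with Open3D (nothing to do with the tool itself; the file names and parameters are assumptions):

```python
# Hedged sketch with Open3D, as a generic baseline only (not Project Video
# Scanner code): normals + Poisson reconstruction to regenerate a mesh from
# the point cloud, with a crude density-based boundary trim.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("road_scan.ply")   # assumed exported cloud
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(30)  # consistent orientation for UVs later

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

# Low-density vertices lie on surface hallucinated far from real points;
# trimming them is a rough way of recovering the mesh boundary.
d = np.asarray(densities)
mesh.remove_vertices_by_mask(d < np.quantile(d, 0.05))
o3d.io.write_triangle_mesh("road_mesh.ply", mesh)
```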

Anyway, I'll check the whole thing and come back later to give you some feedback.
 
Or just use the dense point clouds and build over the top with your own polygons.

Auto-topology would be nice, but I can imagine you'd spend as long trying to tidy it up to suit your exact needs as you would building directly over the point cloud data with point-cloud snapping, or making cross-sections, etc.

Meshers are probably great for certain jobs, but I wouldn't want to let one automate the track and track-side geometry for me, as I think I'd spend longer tidying it up than I'd have spent just making it directly from the point cloud reference.

Dave
 
