These are some very preliminary tests with photogrammetry from open-source images, using tourist photos to reconstruct 3D models of some of the world’s most photographed sites (chosen simply because source images are easy to find for initial tests). I don’t know how useful this technique will be for the future of the project; I just thought I’d start getting to grips with its limitations and potential. The software I’m using is Agisoft’s PhotoScan.
St Paul’s Cathedral – Sampled via 100 Flickr images
It seems to demonstrate the effect I was talking about in the last tutorial: the most photographed elements are the most clearly defined. The distortions in the image seem to occur where the software fails to properly align the cameras or mis-guesses the lens focal lengths.
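To get a feel for why a mis-guessed focal length warps a reconstruction, here is a toy two-camera triangulation sketch (this is my own illustration in plain Python with made-up numbers, not anything from PhotoScan itself): if the solver assumes the wrong focal length, the recovered depth is scaled while lateral positions stay put, so the model stretches or flattens along the viewing axis.

```python
# Toy illustration: how a wrongly guessed focal length distorts
# triangulated depth in a simple two-camera setup.
# All numbers are arbitrary, chosen only to make the effect visible.

def project(f, X, Z, cam_x=0.0):
    """Pinhole projection of the point (X, _, Z) onto the image plane
    of a camera sitting at x=cam_x and facing along +z."""
    return f * (X - cam_x) / Z

def triangulate(f_assumed, u1, u2, baseline):
    """Recover (X, Z) from the two image x-coordinates, using an
    *assumed* focal length that may differ from the true one."""
    disparity = u1 - u2
    Z = f_assumed * baseline / disparity  # depth from disparity
    X = u1 * Z / f_assumed                # back-project to world x
    return X, Z

f_true = 50.0      # true focal length (arbitrary units)
baseline = 1.0     # distance between the two cameras
X, Z = 2.0, 10.0   # true point position

u1 = project(f_true, X, Z, cam_x=0.0)
u2 = project(f_true, X, Z, cam_x=baseline)

# Correct focal length -> the point comes back where it was
print(triangulate(f_true, u1, u2, baseline))        # (2.0, 10.0)

# Focal length over-estimated by 20% -> depth stretched by 20%,
# lateral position unchanged: the model warps along the view axis
print(triangulate(f_true * 1.2, u1, u2, baseline))  # (2.0, 12.0)
```

With hundreds of tourist photos taken on unknown lenses, this kind of per-camera focal-length guesswork compounds, which would explain the smeared, stretched geometry in the less-photographed parts of the scans.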
The Statue of Liberty – Sampled via 200 Google Images
Tower Bridge – Sampled via 100 Google Images
My Flat – Sampled via 60 photos taken on DSLR (Fixed lens length & lighting)
Obviously a PhotoScan built from images taken deliberately for photogrammetry results in a scene with considerably more detail and accuracy. Having done these experiments, though, I think it is potentially more interesting to investigate the strange anomalies produced where the algorithm gets it wrong.
An interactive ‘mash-up’ of humanity by media artist Jonathan Harris. It’s a random collection of videos and sounds, crowd-sourced by Mechanical Turk workers and organised by verbs.
[Click the link below to see the website, but only if you have 8 minutes free, as the site can only be accessed once every 24 hours.]
The sheer quantity of material, shown at such a pace, forces our brains to average out the experiences we see in order to make sense of them. Each word in the piece is itself an average: a container for a set of specificities. For me, the work’s emotional power lies in revealing those specificities, making words that describe the mundane extraordinary by connecting us to fragments of individual videos that are, in themselves, quite banal.