Synthesis Project 2015-2016
Using their eyes and brains, humans can easily distinguish and recognise objects in static images and dynamic environments: people (who), types of objects (what), and characteristics such as colour and material. What could computation add by using raw point cloud datasets for direct calculations and interpretation of the data? How could interactive visualisations contribute to insight into the data?
Point cloud datasets usually represent rich data and are collected with laser scanning technologies, but they can also be derived from highly accurate series of photographs. In many cases this data is translated into objects. This translation reduces the amount of data considerably, but it also reduces the information potentially available, or hidden, in the dataset. The question is: what operations can be carried out on the 'raw' dataset that would directly enrich the use of point cloud data?
The Synthesis Project Fall 2015-2016 focused on the direct use of point cloud data, collected by photography or by laser scanning. Students developed uses for calculations and measurements with the original, raw data. In 2015, three parallel projects on point clouds were run.
Semantically enriching point clouds
The January 2016 issue of GIM International published a three-page article on the project "Semantically enriching point clouds: The case of street level", which was carried out as part of the Synthesis Project 2015-2016. In the article the authors explain how the group (Adrie Rovers, Tim Negelkerke, Irene de Vreede, Stella Psomadaki and Merwin Rook), starting from a raw point cloud with X, Y, Z and colour information, derived additional semantic information, leading to a more usable dataset.
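To give a flavour of what "semantic enrichment" of a raw point cloud can mean, the sketch below assigns a coarse class label to each point using only its X, Y, Z coordinates and RGB colour, without first converting the cloud into objects. This is a minimal illustrative example; the class names, thresholds, and rules are assumptions for illustration, not the method used by the project group.

```python
# Hypothetical rule-based semantic labelling of raw XYZRGB points.
# Thresholds and class names are illustrative assumptions only.

def label_point(x, y, z, r, g, b, ground_height=0.2):
    """Assign a coarse semantic label to a single XYZRGB point."""
    if z <= ground_height:
        return "ground"          # low points: e.g. road surface, pavement
    if g > r and g > b:
        return "vegetation"      # green-dominant colour: e.g. trees, shrubs
    return "other"               # everything else: facades, street furniture

def label_cloud(points):
    """Label every (x, y, z, r, g, b) tuple in the cloud."""
    return [label_point(*p) for p in points]

cloud = [
    (1.0, 2.0, 0.05, 120, 118, 110),   # low grey point
    (1.5, 2.1, 3.40,  60, 160,  70),   # high green point
    (0.8, 1.9, 5.10, 200, 190, 180),   # high light-coloured point
]
print(label_cloud(cloud))  # → ['ground', 'vegetation', 'other']
```

Even such simple per-point rules operate directly on the raw data and preserve every original point, in contrast to object reconstruction, which discards points once surfaces are fitted.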
For more information: http://www.gim-international.com/