GEOmedia 3 2022

Bimonthly magazine - year XXVI - Issue 3/2022 - Sped. in abb. postale 70% - Filiale di Roma

LAND CARTOGRAPHY · GIS · CADASTRE · GEOGRAPHIC INFORMATION · PHOTOGRAMMETRY · 3D · SURVEY · TOPOGRAPHY · CAD · BIM · EARTH OBSERVATION · SPACE · WEBGIS · UAV · URBAN PLANNING · CONSTRUCTION · LBS · SMART CITY · GNSS · ENVIRONMENT · NETWORKS · LiDAR · CULTURAL HERITAGE

May/June 2022 - year XXVI - N°3

Mobile Robotics and Autonomous Mapping
DATA COLLECTION AND PUBLICATION WITH QGIS
PLAYING WITH COLORS ON PANCHROMATIC AERIAL PHOTOGRAPHS
MODELLING WATERSHED PHENOMENA WITH QGIS


Inspiration for a smarter World

This is the motto of the Intergeo Conference 2022, where the English-language edition of GEOmedia, usually issued as the third of the year in the summer, will be distributed next October.

The Conference will highlight current developments in surveying with the following main topics:

Digital Twins and their value creation

4D-geodata and geospatial IoT

Potentials of remote sensing

Industrial surveying, measurement systems and robotics

Smart Cities and mobility in the context of climate change and sustainability

Mobile mapping, Web services and geoIT in disaster management

Spatial reference and positioning

Earth observation and Galileo

Trend topics such as Building Information Modeling (BIM) and the diverse application possibilities of Digital Twins, but also the current requirements of the Smart City and rural areas, have their fixed place in the Conference. Digital Twins will be of particular importance in this edition, with a focus on their use in Building Information Modeling and in smart planning and construction.

But let's look at the content of this issue, where we start with the Focus "From the field to the clouds: data collection and publication with QGIS" by Paolo Cavallini, Matteo Ghetta and Ulisse Cavallini, about the main solutions available for data collection and seamless publication over the web: MerginMaps, Qfield and Lizmap, with an example from a water resources project in The Gambia. A second Focus, "Open-source GIS software and components for modelling watershed phenomena" by Flavio Lupia and Giuseppe Pulighe, covers the recent version of the Soil and Water Assessment Tool (SWAT), implemented as a dedicated QGIS plugin (QSWAT), widening the user base and the potential modelling applications worldwide. Then we move on to the Reports: "Mobile Robotics and Autonomous Mapping: Technology for a more Sustainable Agriculture" by Eleonora Maset, Lorenzo Scalera and Diego Tiozzo Fasiolo, concerning automation in geomatics for agriculture using robotic platforms that must be equipped with appropriate technology; and "Geographical Information: the Italian Scientific Associations and... the Big Tech" by Valerio Zunino, observing that while the world is changing, the Italian Scientific Associations of Geographical Information are not.

Marco Lisi, in "Time and Longitude: an unexpected affinity", talks about time, the fourth dimension, which is becoming increasingly important in all aspects of technology and science.

Finally, don't miss "Potatoes, Artificial Intelligence and other amenities: playing with Colors on Panchromatic Aerial Photographs" by Gianluca Cantoro, from the Italian National AirPhoto Archive (Aerofototeca Nazionale, AFN), discussing how historical photographs, whether taken from the air or from the ground, are usually synonymous with grayscale or sepia prints.

Enjoy your reading,

Renzo Carlucci


In this issue...

FOCUS

REPORT

COLUMNS

6
From the field to the clouds: data collection and publication with QGIS
By Paolo Cavallini, Matteo Ghetta, Ulisse Cavallini

24 ESA Image
32 NEWS
40 AEROFOTOTECA
46 AGENDA

12
Open-source GIS software and components for modelling watershed phenomena
Understanding the soil and water components under different management options with QGIS and SWAT+
By Flavio Lupia and Giuseppe Pulighe

On the cover: the sensorized mobile platform developed at the University of Udine, Italy.

geomediaonline.it

GEOmedia, published bi-monthly, is the Italian magazine for geomatics. For more than 20 years it has been opening a worldwide window to the Italian market and vice versa. Themes are the latest news, developments and applications in the complex field of earth surface sciences. GEOmedia deals with all activities relating to the acquisition, processing, querying, analysis, presentation, dissemination, management and use of geo-data and geo-information. The magazine covers subjects such as surveying, environment, mapping, GNSS systems, GIS, Earth Observation, geospatial data, BIM, UAV and 3D technologies.


16
Mobile Robotics and Autonomous Mapping: Technology for a more Sustainable Agriculture
by Eleonora Maset, Lorenzo Scalera, Diego Tiozzo Fasiolo

ADV

Ampere 45

Epsilon 33

Esri Italia 23

Geomax 31

Gter 15

INTERGEO 35

Nais 39

Planetek 48

Stonex 47

Strumenti Topografici 2

20
Geographical Information: Our Associations and ... the Big Tech
by Valerio Zunino

Teorema 46

In the background image: Bonn, Germany. This ESA Image of the Week, also featured on the Earth from Space video programme, was captured by the Copernicus Sentinel-2 mission, which, with its high-resolution optical camera, can image at up to 10 m ground resolution. (Credits: ESA)

26
Time and Longitude: an unexpected affinity
by Marco Lisi

Chief Editor

RENZO CARLUCCI, direttore@rivistageomedia.it

Editorial Board

Vyron Antoniou, Fabrizio Bernardini, Caterina Balletti,

Roberto Capua, Mattia Crespi, Fabio Crosilla,

Donatella Dominici, Michele Fasolo, Marco Lisi,

Flavio Lupia, Luigi Mundula, Beniamino Murgante,

Aldo Riggio, Monica Sebillo, Attilio Selvini, Donato Tufillaro

Managing Director

FULVIO BERNARDINI, fbernardini@rivistageomedia.it

Editorial Staff

VALERIO CARLUCCI, GIANLUCA PITITTO,

redazione@rivistageomedia.it

Marketing Assistant

TATIANA IASILLO, diffusione@rivistageomedia.it

Design

DANIELE CARLUCCI, dcarlucci@rivistageomedia.it

MediaGEO soc. coop.

Via Palestro, 95 00185 Roma

Tel. 06.64871209 - Fax. 06.62209510

info@rivistageomedia.it

ISSN 1128-8132

Reg. Trib. di Roma N° 243/2003 del 14.05.03

Stampa: System Graphics Srl

Via di Torre Santa Anastasia 61 00134 Roma

Paid subscriptions

Science & Technology Communication

GEOmedia is available bi-monthly on a subscription basis. The annual subscription rate is € 45. It is possible to subscribe at any time via https://geo4all.it/abbonamento. The cost of one issue is € 9; the previous issue costs € 12. Prices and conditions may be subject to change.

Issue closed on: 28/07/2022

a publication of

Science & Technology Communication


FOCUS

From the field to the clouds: data collection and publication with QGIS

By Paolo Cavallini, Matteo Ghetta, Ulisse Cavallini

Open source GIS, and in particular QGIS, has been a leading free and open source solution for desktop mapping for many years. Its versatility, ease of use and analytical power have made it the software of choice for many professionals around the world (see https://analytics.qgis.org). Field data collection and checking, and web publication, have attracted more attention in recent years. A whole suite of integrated tools is now available to implement a complete workflow, all centered on QGIS.

Central to all tools is the QGIS project, designed and created using QGIS Desktop. Its power in creating beautiful and rich styling is probably unsurpassed, with expression-based styling, fusion modes and a huge set of other functions. The same project can be used on a mobile device and exposed through web services (WMS, WFS, WCS, WPS) and a complete web interface.
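As a sketch of how such a project is exposed as a web service, the following builds a standard WMS 1.3.0 GetMap request of the kind a client sends to a QGIS Server endpoint. The base URL and layer name are hypothetical placeholders, not part of the project described here.

```python
# Illustrative sketch: assembling a WMS 1.3.0 GetMap URL from its
# mandatory parameters. Endpoint and layer name are invented.
from urllib.parse import urlencode

def build_getmap_url(base_url, layers, bbox, width, height,
                     crs="EPSG:4326", fmt="image/png"):
    """Return a GetMap request URL for the given layers and extent."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": ",".join(layers),
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,
    }
    return f"{base_url}?{urlencode(params)}"

url = build_getmap_url("https://example.org/qgisserver",
                       ["water_sources"], (13.0, 41.0, 14.0, 42.0),
                       800, 600)
```

Because QGIS Server renders from the same project file, the map returned by such a request carries the exact styling designed on the desktop.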

Mobile

Over the 20+ years of QGIS's life, a number of mobile interfaces have been designed, from the first attempts to run the whole of QGIS on Android, or a simplified interface on Windows Mobile, to proper mobile apps. Currently the most important and widely used solutions for running QGIS on a mobile device are Qfield (https://qfield.org/) and MerginMaps (https://merginmaps.com/). Both are generic tools that can be effectively employed in a wide variety of contexts. Their flexibility stems from the ease with which they can be configured, simply through QGIS projects, which include sophisticated styling and simple to complex forms, with all the expected functionality such as custom menus (drop-down, checkbox, calendar, etc.), relations, constraints, default values, user guiding tips, etc. While still relatively new entries on the market, they have been successfully employed in extensive surveys, from single users up to thousands of field surveyors simultaneously collecting data in the field.

MerginMaps

MerginMaps is a web service, written in Python with Flask, that manages the synchronization process of a QGIS project and all the related files. For media files, it is not unlike other cloud file management services. Unlike other cloud file managers, though, MerginMaps is able to manage geospatial information, primarily in the form of geopackage files. When a new version of a geopackage is uploaded, for example because a surveyor added some features and is uploading them back to the centralized server, the geodiff library is used to check for changes, merge them and solve any conflicts. This enables some flexibility in the downloading and uploading of new data, since multiple surveyors can add features for their area of interest and upload different versions of the modified geopackage, and MerginMaps will take care of adding every new feature to the centralized repository.
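The merge behaviour described above can be illustrated with a toy example: two surveyors start from the same base layer, each adds features, and the server combines both change sets. This is a deliberate simplification of the idea, not the actual geodiff API, and the feature data is invented.

```python
# Toy illustration of additive merging of concurrent edits.
# Real MerginMaps delegates this to the geodiff library and also
# handles updates, deletions and conflicts.

def diff(base, modified):
    """Features present in `modified` but not in `base` (by id)."""
    base_ids = {f["id"] for f in base}
    return [f for f in modified if f["id"] not in base_ids]

def merge(base, *versions):
    """Apply every surveyor's additions on top of the base layer."""
    merged = list(base)
    for version in versions:
        merged.extend(diff(base, version))
    return merged

base = [{"id": 1, "name": "well A"}]
surveyor_1 = base + [{"id": 2, "name": "borehole B"}]
surveyor_2 = base + [{"id": 3, "name": "tap C"}]
merged = merge(base, surveyor_1, surveyor_2)
```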

Lutra Consulting, the firm developing MerginMaps, offers an official hosted instance, a reliable way to use MerginMaps without the need for configuration and installation on a local server. The official hosted instance offers a generous free trial for non-commercial usage. Pricing is clear and reasonable, with no per-user pricing, and the support is quick and responsive.

The surveying process is effectively split in two. The first phase involves the generation of the QGIS project, the related layers and the form structure, and the subsequent upload to the MerginMaps web service. Far from being complicated, this phase still requires a good understanding of GIS software and data formats.

Once the project is uploaded and ready, the surveying phase can begin. Due to the ease of use of the mobile application, the surveying requires minimal technical skill, and operators can be trained in a matter of a few days. From their point of view, the intricacies of the project are invisible: they just need to add or modify the features according to the form, preconfigured through QGIS, and click on the synchronization button once they are online. A very recent addition, the option to automatically upload new changes whenever an internet connection is detected, further simplifies this.

The project folder itself is what is visible from the web interface. In order to manipulate the project and the geospatial data, MerginMaps can be accessed from QGIS, through the official plugin, and through the MerginMaps mobile application, available for Android and iOS. The MerginMaps application has a special focus on simple UI and UX design, in order to be accessible by everyone, regardless of their GIS experience and in demanding field conditions.

Qfield

Qfield was the first native Android mobile application connected to QGIS. Downloaded around half a million times, it is available for Android and now also iOS. The idea is very simple: the user sets up a project in QGIS and, thanks to the QfieldSync plugin, it is packaged into a folder. The folder is copied to the device, and data can be collected in the field with the app. Back in the office, the data collected with the mobile device is copied back to the machine and re-synchronized to the original data source with the QfieldSync plugin.


The App has a very simple design and it comes with a lot of features: snapping facilities, advanced form layout, pictures and interaction with the legend, to name a few.

The manual synchronization can nowadays be avoided thanks to QfieldCloud, a Django application that is able to store and automatically synchronize the data from the computer to the mobile device and vice versa. Open source, QfieldCloud is still in beta and lets the user choose between installing the software on their own server, registering on the web with a free plan (limited space), or buying additional space. The main advantage of QfieldCloud is that the user can log in both on the machine and on the device with the same account and immediately synchronize the data between all the devices. The QfieldSync plugin in QGIS has all the options needed to log in, synchronize and also see the data changes.


As with Mergin, Qfield also offers the possibility of using a server managed by OpenGIS.ch, the firm developing it, without the need for configuration and installation on a local server.

Web

A number of different web interfaces have been designed around QGIS. For all of them the basic mechanism is the same: all user requests are sent to the backend (QGIS Server), which creates the map and other objects (legend, print layouts, etc.) and sends them back to the web app. The main advantages over other free and open source webGIS solutions are the ease of creating both visually sophisticated maps and complex print and reporting layouts through QGIS Desktop, without the need for specific web skills.

The most widely used is Lizmap, created and maintained by 3Liz, a southern French company that has also contributed substantially to core QGIS development for years. As with the other solutions described, Lizmap too can be used without the need for configuration and installation on a local server, through a service managed and maintained by 3Liz, the firm developing it.

Case study

MerginMaps was recently used in a project geared towards the improvement of water resources infrastructure in The Gambia, financed by the African Development Bank; the project is headed by Hydronova, and its GIS section is technically managed by Faunalia. Special acknowledgment for this project goes, for their continuous support, to the Climate Smart Rural WASH Development Project Office Team and to the Department of Water Resources staff, under the Ministry of Fisheries and Water Resources of The Gambia.

Data collection is the first task upon which the whole project is built, since in order to improve resource management, key stakeholders need to know the current situation and distribution of the resources at their disposal. An open source solution is ideal in most contexts, even more so in a context where free access to data is paramount and budget constraints are tight.

The MerginMaps mobile application, backed by the Mergin web service, was chosen due to its ease of use and synchronization. Given the possibility and ease of setting up a Mergin instance, all the data was kept in situ at the relevant ministry, retaining control over this crucial information.

A QGIS project with four layers, each with a custom form, was created. In order to have all the data fully offline, vector tiles were used. These were generated for the whole country by extracting OpenStreetMap data, packaging it in an mbtiles file, and styling it with one of the OpenMapTiles styles. The resulting project was only 16 MB, a small fraction of the 160+ MB that would have been needed for rasterized tiles.
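For reference, the mbtiles container mentioned above is simply an SQLite database with a tiles table keyed by zoom level, column and row. The following minimal sketch builds one in memory; a placeholder blob stands in for the gzipped vector tile data a real file would contain.

```python
# Minimal sketch of the MBTiles container layout (SQLite-based).
# The tile payload here is a placeholder, not a real vector tile.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE metadata (name TEXT, value TEXT)")
con.execute("CREATE TABLE tiles (zoom_level INTEGER,"
            " tile_column INTEGER, tile_row INTEGER, tile_data BLOB)")
# Vector tilesets declare the 'pbf' format in their metadata.
con.execute("INSERT INTO metadata VALUES ('format', 'pbf')")
# Store one placeholder tile at zoom 0 (gzip magic bytes as a stand-in).
con.execute("INSERT INTO tiles VALUES (0, 0, 0, ?)", (b"\x1f\x8b",))

row = con.execute(
    "SELECT tile_data FROM tiles WHERE zoom_level=0"
    " AND tile_column=0 AND tile_row=0").fetchone()
```

Because the whole tileset lives in a single SQLite file, it can be shipped to the tablets as one compact download.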

Using the QGIS drag-and-drop form designer, extensive logic was introduced in the data entry user interface. With this setup, users were guided in choosing the three administrative levels from drop-downs, with automatic filtering of the available options, and constraints with appropriate descriptions were implemented. For water sources, which are of utmost importance, a photo was also required.
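The cascading drop-down logic can be sketched as follows. The administrative names are invented for illustration, and in the real project this filtering is configured through QGIS form expressions rather than Python code.

```python
# Sketch of cascading administrative drop-downs: each level's options
# are filtered by the selection one level up. Names are invented.

ADMIN_TREE = {
    "Region A": {"District A1": ["Village A1a", "Village A1b"]},
    "Region B": {"District B1": ["Village B1a"]},
}

def district_options(region):
    """Districts offered once a region has been picked."""
    return sorted(ADMIN_TREE.get(region, {}))

def village_options(region, district):
    """Villages offered once region and district are picked."""
    return sorted(ADMIN_TREE.get(region, {}).get(district, []))

opts = district_options("Region A")
```

Filtering at data-entry time means invalid region/district/village combinations simply cannot be recorded, which spares a cleaning pass later.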

After the project was tested, teams of surveyors covered the whole country in the span of a few months, while the survey manager constantly analyzed data quality with spot crosschecks.

Periodically, the tablets were brought back online, and the data was synchronized. In this process, the selective sync option, introduced in MerginMaps (at the time called Input) 1.0.1, was crucial. This feature instructs all the tablets to upload the photos that were taken locally, without downloading all of the other media in the project added by other surveyors. Without this, more than 15 GB of photos would have been downloaded onto each tablet, severely impairing the synchronization process and requiring a stable and fast internet connection.
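The selective sync rule reduces to a simple set operation: upload everything new locally, download none of the media added by other devices. The file names below are invented, and this is a conceptual sketch rather than the MerginMaps implementation.

```python
# Toy version of selective sync: each tablet pushes its own new photos
# and pulls nothing, so other surveyors' media stays on the server.

def plan_sync(local_photos, server_photos):
    """Return (to_upload, to_download) under the selective sync rule."""
    to_upload = sorted(set(local_photos) - set(server_photos))
    to_download = []  # media from other surveyors is never fetched
    return to_upload, to_download

up, down = plan_sync({"p1.jpg", "p2.jpg"},
                     {"p1.jpg", "other_team.jpg"})
```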

At the completion of the survey, the data was checked and cleaned, then synchronized with a PostgreSQL/PostGIS database using the mergin-db-sync tool, as described in the "Extensions and integrations" section. This procedure initialized the new database and ensures that any change in the data is reflected in the database tables.

Using the newly initialized database, a second QGIS project based on the same data was created and published on a WebGIS based on QGIS Server and Lizmap, thus reusing QGIS styling without the need for restyling and conversion. In this phase, the advanced forms could be reused, thus showing on the website all the information as entered by the surveyors, including the water source photo. Other layers, such as the administrative subdivisions, were added, as well as the localization tool, which enables any user to quickly find the current location, a village, or an area of interest.

By combining the efficiency of PostgreSQL materialized views and the flexibility of the QGIS print layout, multiple layouts were created and personalized for each administrative level, from country aggregates to the village level. Graphs created with DataPlotly, a recent addition to the QGIS print layout, were also used. These layouts were then exposed in Lizmap with the AtlasPrint QGIS Server plugin.

Extensions and integrations

MerginMaps, being written in Python and well documented, has quite a few extensions that enable it to adapt to the specific needs of most survey projects.

Mergin-db-sync, first released in June 2020, is a crucial part of the MerginMaps offering. Also written in Python, it interfaces with the main Mergin web service and with a PostgreSQL/PostGIS database, keeping the two in constant sync. Whenever a change is detected in the specified geopackage, the changes are propagated to the PostgreSQL database, and vice versa. Strict versioning is still maintained, since the tool creates a new version of the MerginMaps project, just as a user uploading new data would. The tool uses two PostgreSQL schemas: one in which changes can be made directly, and a backup copy used to check for changes. It utilizes the geodiff library to check and merge changes, even if they happen in the two backends at the same time. Mergin-db-sync can also be used to expose the data on a WebGIS such as Lizmap.
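A toy version of the two-schema change detection strategy: a working copy that users edit and a shadow copy kept as the comparison baseline. Plain dictionaries keyed by feature id stand in for the PostgreSQL tables; the real tool delegates this work to geodiff.

```python
# Sketch of change detection between a working schema and its shadow
# copy. Dictionaries keyed by feature id stand in for the tables.

def detect_changes(working, shadow):
    """Classify rows as inserted, updated or deleted vs. the shadow."""
    inserts = {k: v for k, v in working.items() if k not in shadow}
    deletes = [k for k in shadow if k not in working]
    updates = {k: v for k, v in working.items()
               if k in shadow and shadow[k] != v}
    return inserts, updates, deletes

shadow = {1: "well A", 2: "tap B"}
working = {1: "well A (repaired)", 3: "borehole C"}
ins, upd, dels = detect_changes(working, shadow)
```

After the detected changes are pushed to the other backend, the shadow copy is refreshed so the next comparison starts from the new baseline.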

Mergin-media-sync, first released in December 2021, allows for the offloading of the media files, often representing a good chunk of the project size, to a local drive or to the MinIO object storage. When new media files are added, the tool downloads them, uploads them to the configured service, and updates the relevant rows in the geopackage, pointing the media path to the new URL. In a wide-area survey, covering many features and containing photos, this tool can effectively be used to avoid cluttering the MerginMaps project with hundreds of gigabytes of images.
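The path-rewriting step can be sketched as follows. The bucket URL and file names are placeholders for illustration, not the actual mergin-media-sync configuration.

```python
# Sketch of media offloading: after a photo has been moved to object
# storage, its attribute in the geopackage is rewritten from a local
# path to the storage URL. Bucket URL and names are placeholders.

BUCKET = "https://storage.example.org/survey-media"

def offload(rows, uploaded):
    """Point each uploaded photo's path at its object-storage URL."""
    for row in rows:
        name = row["photo"].rsplit("/", 1)[-1]
        if name in uploaded:
            row["photo"] = f"{BUCKET}/{name}"
    return rows

rows = [{"id": 1, "photo": "DCIM/img_001.jpg"},
        {"id": 2, "photo": "DCIM/img_002.jpg"}]
rows = offload(rows, {"img_001.jpg"})
```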

Both MerginMaps and QField can be used with an external GPS/GNSS device, which can be obtained at low cost, enabling high-precision positioning, down to a centimeter of accuracy. These devices, once highly priced, are now accessible and reliable.

METAKEYS

field survey; water resources; qgis; qfield

ABSTRACT

QGIS is the leading free and open source desktop GIS. It is also a complete ecosystem that allows building complete workflows, from field data collection to publication on the web. Central to it are QGIS projects, which define data sources, projections, styling and integration, and are reused from mobile to the web without any need to reconfigure them. We describe the main solutions available for data collection and seamless publication over the web: MerginMaps, Qfield and Lizmap, with an example from a water resources project in The Gambia.

AUTHOR

Paolo Cavallini - cavallini@faunalia.it
Matteo Ghetta - matteo.ghetta@faunalia.eu
Ulisse Cavallini - ulisse.cavallini@faunalia.it

Faunalia - www.faunalia.eu
Piazza Garibaldi 4, 56025 Pontedera (PI), Italy



FOCUS

Open-source GIS software and components for modelling watershed phenomena

Understanding the soil and water components under different management options with QGIS and SWAT+

By Flavio Lupia and Giuseppe Pulighe

Fig. 1 – Workflow with input and output components for running SWAT+ through the QSWAT plugin for QGIS.

Modelling the watershed balance

Current and future climate change is expected to increase our challenges in preserving natural resources and ecosystem services. At the watershed scale, the processes taking place relate to interactions between soil and water and are influenced by land use management. Precipitation, infiltration, runoff, evapotranspiration, soil erosion, and soil and water pollution are the main components to be considered in the simulation of the watershed system under current and future conditions with changing drivers (i.e., human interventions and climate change).

One of the most popular watershed modelling tools is the Soil and Water Assessment Tool (SWAT), a public-domain model jointly developed by the USDA Agricultural Research Service and Texas A&M University (Arnold et al., 1998). SWAT has been used worldwide for different applications (water quality, land use, soil erosion, crop yield, etc.) during the last four decades. As of May 2022, a total of 5,154 articles report SWAT applications in different journals according to the SWAT Literature Database (https://www.card.iastate.edu/swat_articles/).

SWAT enables the simulation of watershed and river basin quantity and quality of surface and ground water under the influence of land use, management and climate change. It can be used to monitor and control soil erosion, non-point source pollution and basin management. An entirely reconstructed version of SWAT, nicknamed SWAT+, was launched in recent years to improve the maintainability of the SWAT code and its future development. Reservoir operation functions have been added to SWAT+, in addition to the new model structure, to increase model simulation performance. Now the physical objects (hydrologic response units (HRUs), aquifers, canals, ponds, reservoirs, point sources and inlets) are built as separate modules.

QSWAT: the QGIS plugin for the new SWAT+ model

The SWAT model was implemented within the GIS environment with dedicated plugins for both commercial and open-source GIS platforms. The GIS implementations have allowed users to manage the watershed modelling process more efficiently in its natural environment, the GIS, where the spatial component of the various datasets can be handled straightforwardly.

QSWAT is the most recent implementation, as a QGIS plugin written in Python, of the new SWAT+ version. As of 6 April 2022, QSWAT3 v1.5 for QGIS3 was released for 32- and 64-bit machines. SWAT+ is written in FORTRAN and is also available as a command-line executable that runs on text file inputs without an interface (SWAT+ installer 2.1.0 was released on 31 March 2022 for Windows, Linux and macOS). QSWAT is increasingly gaining momentum thanks to the spread and robustness of the open-source GIS platform. QGIS has a huge number of users and a solid reputation; it is used in academic and professional settings and has been translated into more than 48 languages. Moreover, the release of the SWAT code as open source has benefitted the diffusion and improvement of the model, making it more robust and suitable for different applications thanks to the collaboration of several users with various expertise. Different channels are available for user collaboration, such as the QSWAT user group, SWAT+ Editor user group and SWAT+ model user group (https://swat.tamu.edu). Other plugins are also available, such as the one developed for the commercial ESRI ArcGIS platform (ArcSWAT).

Fig. 2 – Spatial distribution of annual means of the actual evapotranspiration from the soil at subbasin scale for a watershed through QSWAT.

Beyond the functions provided by QSWAT for setting up the watershed to be analysed, SWAT+ is complemented by additional software: SWAT+ Editor (a user interface for modifying SWAT+ inputs and running the model, installed along with QSWAT), SWAT+ Toolbox (a user-friendly tool for SWAT+ sensitivity analysis and manual and automatic calibration), SWATplus-CUP (the Calibration Uncertainty Program for SWAT+, requiring a license purchase) and SWATplusR (a set of tools taking advantage of the R environment for parameter sensitivity analysis, model calibration and the analysis of model results). Moreover, the SWAT website provides datasets for running the model, even if specific datasets with adequate spatial and temporal resolution are always recommended for the study areas to be analysed. The available datasets often have global coverage and relate to climate, soil, land use and Digital Elevation Models (DEMs). Look-up tables are also supplied with QSWAT to properly match the land use and soil codes to standard legends.

The four-step process and the minimum set of data for running SWAT+

The following spatial and tabular data are required for running SWAT+: land use/cover, DEM, soil data (hydrological group, clay, silt, sand), climate data (temperature, precipitation, humidity, solar radiation, wind speed) and hydrology (river discharge).

QSWAT runs SWAT+ following a four-step procedure: 1) delineate watersheds, 2) create Hydrologic Response Units (HRUs), 3) edit inputs and run SWAT, and 4) visualize.

The first step deals with the definition of the watershed and its structure by extracting the channels and the watershed boundary from the DEM of the study area with the classical Terrain Analysis Using Digital Elevation Models (TauDEM) functions, which divide the watershed into subbasins (areas with a principal stream channel). The second step involves the creation of the HRUs: lumped areas with the same combination of soil, topography and land use, not spatially related to each other (Rathjens et al., 2016). The third step concerns the weather data selection and the set-up of model parameters; the latter relate, for instance, to the potential evapotranspiration method (e.g., Priestley-Taylor, Penman-Monteith or Hargreaves), the curve number method for soil moisture, land use management and conservation practices. The fourth step is dedicated to the visualization of the results at basin and subbasin scale and to the exploration of the outputs for a given channel and gauge.
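The HRU concept in the second step can be illustrated with a small sketch that lumps cell areas by their unique land use/soil/slope combination, regardless of spatial contiguity. The values are invented and this is not the QSWAT algorithm itself, only the grouping idea behind it.

```python
# Illustrative sketch of HRU lumping: cells sharing the same
# (land use, soil, slope class) combination form one HRU, with no
# requirement that they be spatially adjacent. Values are invented.
from collections import defaultdict

def build_hrus(cells):
    """Sum cell areas per unique landuse/soil/slope combination."""
    hrus = defaultdict(float)
    for cell in cells:
        key = (cell["landuse"], cell["soil"], cell["slope"])
        hrus[key] += cell["area_ha"]
    return dict(hrus)

cells = [
    {"landuse": "AGRL", "soil": "clay", "slope": "0-5%", "area_ha": 2.0},
    {"landuse": "AGRL", "soil": "clay", "slope": "0-5%", "area_ha": 3.0},
    {"landuse": "FRST", "soil": "sand", "slope": "5-10%", "area_ha": 1.5},
]
hrus = build_hrus(cells)
```

Lumping keeps the number of simulation units far below the number of raster cells, which is what makes basin-scale runs tractable.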

The model can simulate daily, monthly, yearly and average outputs for different model components (e.g., channel, aquifer, reservoir, etc.), for nutrient balance, water balance, plant weather and losses from the basin, HRUs and landscape units. Results can be printed and exported in different formats, such as tabular or text structure.

Calibration and validation

Fig. 3 – Schematic representation of the hydrology outputs at watershed level through QSWAT.

After the SWAT+ run and the production of the outputs, model calibration and validation are strongly recommended. The model can be calibrated and validated for the hydrologic, sediment, nitrogen and phosphorus components. These last steps guide the user in the fine-tuning of the model parameters to produce results coherent with the real watershed processes. This requires the collection of river discharge data, which are often missing for several watersheds or available only in analogue format; moreover, water quality data (e.g., sediment loads, dissolved oxygen, nitrate and phosphorus concentrations, etc.) can be used. Alternative approaches for hydrology calibration may involve the use of evapotranspiration estimates from satellite data to overcome the lack of river discharge data from the gauge stations at the outlets. SWAT+ allows several options for calibrating and validating the simulation under the different parametrizations defined by the users. A sensitivity analysis is a quite common approach undertaken to pinpoint the main sensitive parameters and to reduce their redundancy during the cal/val process. A literature review is always useful to start listing a set of common parameters affecting the streamflow and sediment yield processes. SWAT Calibration and Uncertainty Programs (SWAT-CUP) is by far the best-known tool for assessing the sensitivity of parameters, providing several model evaluation techniques based on the relevant statistics (e.g., Pearson's correlation coefficient, root mean square error (RMSE), etc.). Following the identification of the most sensitive parameters, the calibration and validation phases are carried out by focussing on specific components (e.g., daily discharge). The time series of the available data (e.g., t0-tn) is divided to provide a reference period for the warm-up (e.g., t0-t5), calibration (e.g., t6-t15) and validation (e.g., t16-tn). Finally, model performance is assessed using classical statistical measures (e.g., RMSE). Calibration and validation can be long and tedious; therefore, it is always recommended to follow a precise work protocol (Abbaspour et al. 2018).
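The split-and-score workflow can be sketched as follows, with an invented discharge series; RMSE is computed only over the calibration window, as a stand-in for the fuller statistics SWAT-CUP provides.

```python
# Sketch of the calibration workflow: split the observed series into
# warm-up, calibration and validation windows, then score a simulated
# series with RMSE. All values are invented.
import math

def split_series(series, warmup, calib):
    """Return (warm-up, calibration, validation) sub-series."""
    return (series[:warmup],
            series[warmup:warmup + calib],
            series[warmup + calib:])

def rmse(observed, simulated):
    """Root mean square error between paired observations."""
    return math.sqrt(
        sum((o - s) ** 2 for o, s in zip(observed, simulated))
        / len(observed))

obs = [10.0, 11.0, 9.0, 12.0, 10.5, 11.5, 9.5, 10.0]
warm, cal, val = split_series(obs, warmup=2, calib=4)
sim = [11.5, 9.5, 12.5, 10.0]  # simulated values, calibration window
score = rmse(cal, sim)
```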

Extending model simulation capabilities:

land use management

and climate change

The impact of alternative land

uses, and climate change are

pressing concerns in different

regions of the world. SWAT+

allows modelling diverse land

use scenarios to meet sustainability

goals (Pulighe et. al.,

2020) through a module where

alternative land management

practices can be defined (e.g.,

ploughing, seeding, tillage, irrigation,

fertilization rates and

crop nutrients uptake, etc.),

by defining dates or specific

land use classes and regions.

Similarly, climate projections

from climate models under different

representative concentration

pathways (RCPs) scenarios

of greenhouses emissions can be

loaded as weather data to create

climate scenarios for the future

decades that can be compared

to the baseline period covering

the historical meteorological

data. SWAT+ can ingest these

14 GEOmedia n°3-2022


data for simulating seasonal changes in precipitation and temperature, hydrological extremes, flow regime alterations and river discharge, future water quality (i.e., nitrogen and phosphorus) and soil erosion conditions, and future biomass production. The estimated effects on the hydrological regime may also strongly affect agricultural activities, posing challenges to land use management and irrigation (Pulighe et al., 2021).
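One common, simple way to turn climate-model output into weather input for scenario runs — implemented, among other approaches, in tools such as CMhyd (Rathjens et al., 2016) — is the multiplicative "delta change" method for precipitation. The sketch below, with made-up numbers, computes monthly change factors (future model mean / baseline model mean) and applies them to observed data:

```python
from statistics import mean

def monthly_delta_factors(model_baseline, model_future):
    """Multiplicative change factors (future/baseline) per month.

    model_baseline / model_future: lists of (month, precipitation) pairs
    produced by a climate model for the historical and scenario periods.
    """
    def monthly_means(pairs):
        by_month = {}
        for m, p in pairs:
            by_month.setdefault(m, []).append(p)
        return {m: mean(v) for m, v in by_month.items()}
    base = monthly_means(model_baseline)
    fut = monthly_means(model_future)
    return {m: fut[m] / base[m] for m in base}

def perturb_observations(observed, factors):
    """Scale the observed series by the change factors to build scenario input."""
    return [(m, p * factors[m]) for m, p in observed]

baseline = [(1, 100.0), (1, 120.0), (2, 80.0)]   # (month, mm) — illustrative
future   = [(1, 90.0),  (1, 110.0), (2, 96.0)]
factors = monthly_delta_factors(baseline, future)
print(round(factors[2], 2))  # 1.2 (February precipitation increases by 20%)
```

Real workflows also bias-correct temperatures (usually additively) and handle dry-day frequencies; this sketch shows only the core idea.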

Conclusions

Open-source GIS (QGIS) and free-to-use models such as SWAT+ can be considered effective and strategic tools for monitoring and assessing water and soil interactions at the watershed level. In addition, the growing availability of public domain geospatial datasets can widen the applicability of watershed process simulation worldwide and for a wide variety of use cases. QSWAT will be a valuable tool for the SWAT scientific community thanks to its full integration with QGIS geospatial functions, the new functionality offered by SWAT+ and the contribution of a wide and growing open-source community. QSWAT could be a powerful tool to assess the effects of climate change and land use management and their impacts on water quality and land degradation. We believe that in the near future the evaluation of the effectiveness of policy interventions and the deployment of sustainable soil/water management practices will become an interesting arena for experimenting with and acknowledging the potential of SWAT+.

REFERENCES

Arnold, J. G., Srinivasan, R., Muttiah, R. S., & Williams, J. R. (1998). Large area hydrologic modeling and assessment part I: Model development. JAWRA Journal of the American Water Resources Association, 34(1), 73-89.

Abbaspour, K. C., Vaghefi, S. A., & Srinivasan, R. (2018). A guideline for successful calibration and uncertainty analysis for soil and water assessment: A review of papers from the 2016 International SWAT Conference. Water, 10(1), 6. https://doi.org/10.3390/w10010006

Pulighe, G., Lupia, F., Chen, H., & Yin, H. (2021). Modeling climate change impacts on water balance of a Mediterranean watershed using SWAT+. Hydrology, 8(4), 157.

Pulighe, G., Bonati, G., Colangeli, M., Traverso, L., Lupia, F., Altobelli, F., Dalla Marta, A., & Napoli, M. (2020). Predicting streamflow and nutrient loadings in a semi-arid Mediterranean watershed with ephemeral streams using the SWAT model. Agronomy, 10(1), 2. https://doi.org/10.3390/agronomy10010002

Rathjens, H., Bieger, K., Srinivasan, R., Chaubey, I., & Arnold, J. G. (2016). CMhyd user manual: Documentation for preparing simulated climate change data for hydrologic impact studies.

https://www.card.iastate.edu/swat_articles/

https://swat.tamu.edu

METAKEYS

QGIS; QSWAT; watershed; river basin; SWAT+; climate change

ABSTRACT

The Soil and Water Assessment Tool (SWAT) enables the simulation of the quantity and quality of surface and ground water at watershed and river basin level under the influence of land use, management, and climate change. It can be used to monitor and control soil erosion, non-point source pollution and basin management. The recent version (SWAT+) is supported by a dedicated QGIS plugin (QSWAT), widening the user base and the potential modelling applications worldwide. QSWAT, along with additional software for preparing the input dataset and for performing the calibration/validation phase, further extends the watershed modelling capabilities. Such tools and the growing diffusion of public open geospatial datasets are expected to increase the range of applications, especially with the availability of climate projection datasets. The latter will enable users to simulate all the water-soil phenomena at watershed level under future conditions, to better understand and plan suitable actions for preserving natural resources.

AUTHOR

Flavio Lupia

flavio.lupia@crea.gov.it

Giuseppe Pulighe

giuseppe.pulighe@crea.gov.it

CREA - Council for Agricultural Research and Economics - Roma (Italy)



REPORT

Mobile Robotics and Autonomous Mapping: Technology for a more Sustainable Agriculture

by Eleonora Maset, Lorenzo Scalera, Diego Tiozzo Fasiolo

Fig. 1 - The mobile robot traversing under the canopy in a maize field (Manish et al., 2021).

Today, food systems account for nearly one-third of global greenhouse gas emissions, consume large amounts of natural resources and are among the causes of biodiversity loss. As part of the actions of the European Green Deal, the Farm to Fork Strategy (European Commission, 2020) therefore plays a crucial role in reaching the ambitious goal of making Europe a climate-neutral continent by 2050. In fact, it aims to accelerate the transition towards a sustainable food system, reducing dependency on pesticides, decreasing excess fertilization and protecting land, soil, water, air, plant and animal health. All actors of the food chain need to contribute to the implementation of this strategy, starting from the transformation of production methods, which can benefit from novel technological and digital solutions to deliver better environmental and climate results.

In this context, we are witnessing an increasing demand for automated solutions to monitor and inspect crops and canopies, which is driving the adoption of autonomous and robotic systems with computational and logical capabilities. The introduction of robotics and automation, coupled with Geomatics techniques, could provide notable benefits not only in terms of crop production and land use optimization, but also in reducing the use of chemical pesticides, improving sustainability and climate performance through a more results-oriented model based on the use of updated data and analyses. For these reasons, the implementation of autonomous and robotic solutions together with advanced monitoring techniques is becoming of paramount importance in view of a resilient and sustainable agriculture.

Applications of mobile robotics in agriculture span a large variety of tasks, such as harvesting, monitoring, phenotyping, sowing, and weeding. A particular task in which mobile robots are currently employed at a faster pace than in previous years is 3D mapping, as testified by a flourishing literature on the topic (Tiozzo Fasiolo et al., 2022). Indeed, 3D maps of agricultural crops can provide valuable information about health, stress and the presence of diseases, as well as morphological and biochemical characteristics. Furthermore, 3D surveys of plants and crops are fundamental for computing geometrical information, such as volume and height, which can be used to reduce pesticide and fertilizer waste and water usage and, therefore, improve sustainability and environmental impact. Obviously, to provide useful information for crop management and to perform the survey in the most automatic way possible, robotic platforms must be equipped with appropriate technology. In the following, we will therefore try to summarize trends and future developments in this domain.

The first requirement of mobile robots in agricultural applications is the availability of onboard sensors and computational capabilities. Common sensors are 2D and 3D LiDAR (Light Detection and Ranging), cameras (monocular, stereo, RGB-D, and time-of-flight ones), as well as RTK-GNSS (Real-Time Kinematic Global Navigation Satellite System) receivers and IMUs (Inertial Measurement Units), the latter two used mainly for localization tasks.

Among the robotic systems recently proposed in the literature for 3D mapping in agriculture, it is worth mentioning the platform developed in (Manish et al., 2021) and shown in Figure 1. That system is capable of collaborating with a drone to build a dense point cloud of the field. Another interesting mobile robot is BoniRob (Figure 2), developed by Bosch Deepfield® Robotics (Chebrolu et al., 2017). It is an omnidirectional robot and carries a multispectral camera, able to register four spectral bands, and an RGB-D sensor to capture high-resolution radiometric data about the inspected plantation. Multiple LiDAR sensors, GNSS receivers and wheel encoders simultaneously provide observations employed for localization, navigation, and mapping. An example of a robot with an onboard manipulator is given by BrambleBee (Ohi et al., 2018). That robotic system features a custom end effector designed to pollinate flowers in a greenhouse.

Fig. 2 - Agricultural field robot BoniRob with onboard sensors (Chebrolu et al., 2017).

Images from standard RGB cameras only supply information about the plants in the visible spectrum. To investigate vegetation indices related to crop vigour, such as the NDVI (Normalized Difference Vegetation Index), multispectral and hyperspectral sensors are needed, which can measure the near-infrared radiation reflected by the vegetation leaves. However, only a paucity of the robotic platforms described in the literature manage to perform this task. The mobile lab developed at the Free University of Bolzano and shown in Figure 3 is among them (Bietresato et al., 2016).
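Since NDVI is just an algebraic combination of two bands, NDVI = (NIR − Red) / (NIR + Red), it is easy to sketch. This minimal pure-Python version is illustrative only; real pipelines operate on georeferenced rasters:

```python
def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index for one pixel.

    nir, red: reflectance in the near-infrared and red bands.
    eps guards against division by zero on dark pixels.
    """
    return (nir - red) / (nir + red + eps)

def ndvi_map(nir_band, red_band):
    """Apply NDVI pixel-wise to two equally sized bands (lists of rows)."""
    return [[ndvi(n, r) for n, r in zip(nrow, rrow)]
            for nrow, rrow in zip(nir_band, red_band)]

# Healthy vegetation reflects NIR strongly and absorbs red -> NDVI near +1
print(round(ndvi(0.5, 0.08), 2))  # 0.72
```

Values close to +1 indicate vigorous vegetation; bare soil sits near zero and water is typically negative.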

A prototype mobile robot for 3D mapping in agriculture is currently being developed at the University of Udine, based on an AgileX Robotics Scout 2.0 platform (Figure 4). The robot can navigate harsh terrain and narrow passages thanks to its four-wheel drive and differential kinematics. The platform is equipped with a low-cost GNSS receiver and a 9-degree-of-freedom (DOF) IMU as direct georeferencing systems. Moreover, it features great computational capability thanks to the NVIDIA Jetson AGX Xavier board, developed to exploit artificial intelligence (AI) algorithms even in embedded systems. The perception of the environment is guaranteed by a rotating 360° LiDAR and an RGB-D camera. Finally, for phenotyping purposes it exploits a forward-pointing multispectral camera to acquire information on the near-infrared and red-edge portions of the light spectrum.

Fig. 3 - Agricultural robot developed at the Free University of Bolzano, Italy: robot in an orchard, and onboard sensors (Bietresato et al., 2016).

Fig. 4 - The sensorized mobile platform developed at the University of Udine, Italy.

As far as the sensorial and computational capabilities are considered, it can be noticed from the literature that most of the robotic platforms operating in the agricultural environment employ physical devices to store the acquired data, which can require frequent manual intervention by an operator. The implementation of Internet of Things (IoT) approaches, together with data storage in the cloud, could be a great improvement, making data remotely available. Future improvements in this context will also include the integration of renewable energy sources, such as solar panels, to increase the autonomy of the systems, especially in large-scale operations (e.g., autonomous 3D mapping of a whole vineyard). Moreover, to avoid the occlusion problems that can occur in image-based phenotyping, sensors can be mounted on a robotic arm that optimizes the camera pose, guaranteeing the best point of view for data acquisition. However, it should also be underlined that eye-in-hand configurations for LiDAR sensors and multispectral cameras have not been exploited yet. A further important aspect is the durability of these systems and sensors, which should be designed to operate in severe outdoor scenarios.

Fig. 5 - Person following with YOLO object detection (Masuzawa et al., 2017).

To navigate autonomously in the surrounding environment, a mobile robot needs a robust localization method that can georeference the data acquired by means of the onboard sensors. Direct georeferencing methods are usually based on RTK-GNSS, which provides positions at a low update rate, generally coupled with a 9-DOF IMU, which is however sensitive to noise on rough terrain. Higher accuracy for the localization of the robot and the generated 3D map can be achieved by additionally using Simultaneous Localization and Mapping (SLAM) approaches.
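To illustrate how a low-rate absolute fix can correct high-rate dead reckoning, here is a minimal one-dimensional complementary filter. The sample rates, gain and data are made up, and real systems typically run a Kalman filter over full 3D states:

```python
def fuse(imu_vel, gnss_fixes, dt=0.01, gain=0.2):
    """1-D complementary filter: integrate high-rate IMU velocity,
    correct with low-rate GNSS positions when available.

    imu_vel: one velocity sample per time step (from the IMU)
    gnss_fixes: dict {step_index: position} with sparse GNSS fixes
    """
    x = 0.0
    track = []
    for k, v in enumerate(imu_vel):
        x += v * dt                   # dead-reckoning prediction
        if k in gnss_fixes:           # blend in the absolute fix
            x += gain * (gnss_fixes[k] - x)
        track.append(x)
    return track

# 1 m/s for 1 s at 100 Hz; a GNSS fix at the last step reports 1.0 m
track = fuse([1.0] * 100, {99: 1.0})
print(round(track[-1], 3))  # 1.0 (the prediction already matched the fix)
```

With a biased IMU (e.g. velocity overestimated by 10%), each fix pulls the drifting estimate back toward the true position, which is exactly the role GNSS plays alongside the IMU on these platforms.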

As is well known also in the Geomatics community, the SLAM problem consists of estimating the pose of the robot/sensor while simultaneously building a map of the environment. State-of-the-art methods are divided into two main groups: visual SLAM and LiDAR SLAM. The former relies on images and sequentially estimates the camera poses by tracking keypoints in the image sequence. The popular approach for LiDAR SLAM is instead based on scan matching: the pose is retrieved by matching the newly acquired point cloud with the previously built map, which is constantly updated as soon as new observations are available. Although not yet fully implemented in mobile robots for precision agriculture applications, an optimal solution could be data fusion, taking advantage of both visual and LiDAR SLAM methods. In addition, since external conditions can vary significantly among different applications and the environment dictates the most advantageous sensor, the robot itself should be able to choose and use the most suited data source according to the environmental conditions.
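The scan-matching step mentioned above can be illustrated by its simplest building block: given two 2-D scans with known point correspondences, the rigid transform between them has a closed-form least-squares solution. This is only the inner step — actual LiDAR SLAM pipelines (e.g. ICP variants) also re-estimate the correspondences iteratively — and the scan data below is synthetic:

```python
import math

def align_2d(source, target):
    """Closed-form 2-D rigid alignment (rotation theta + translation t)
    between two scans with KNOWN point correspondences."""
    n = len(source)
    sx = sum(p[0] for p in source) / n
    sy = sum(p[1] for p in source) / n
    tx = sum(q[0] for q in target) / n
    ty = sum(q[1] for q in target) / n
    dot = cross = 0.0
    for (px, py), (qx, qy) in zip(source, target):
        px, py = px - sx, py - sy          # center both scans
        qx, qy = qx - tx, qy - ty
        dot += px * qx + py * qy
        cross += px * qy - py * qx
    theta = math.atan2(cross, dot)         # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    # translation maps the rotated source centroid onto the target centroid
    return theta, (tx - (c * sx - s * sy), ty - (s * sx + c * sy))

# synthetic check: a scan rotated by 0.3 rad and shifted by (1, 2)
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 3.0), (1.5, 0.5)]
c, s = math.cos(0.3), math.sin(0.3)
moved = [(c * x - s * y + 1.0, s * x + c * y + 2.0) for x, y in pts]
theta, t = align_2d(pts, moved)
print(round(theta, 3), round(t[0], 3), round(t[1], 3))  # 0.3 1.0 2.0
```

Chaining such incremental alignments between consecutive scans yields the robot trajectory, which is what the map-update loop described above accumulates.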

Many open-source SLAM algorithms are currently available, which can run in real time or in post-processing mode. In this regard, a comparison among state-of-the-art SLAM algorithms could be interesting, together with a quantitative evaluation of the obtained 3D maps with respect to ground-truth datasets. Conversely, there is a lack of methods to efficiently fuse spectral data acquired by multispectral and hyperspectral cameras with LiDAR point clouds, which is fundamental for agricultural applications.

Another important aspect that must be considered for the profitable application of mobile platforms is the autonomous navigation ability, guaranteed by path planning algorithms. Path planning is a mature field in mobile robotics and, in the crop monitoring context, it generally consists of providing a global path to map an entire area. This approach is called coverage path planning and is usually coupled with a row-following algorithm to provide local velocity commands to the robot.

A recent trend in coverage path planning is the development of algorithms that avoid repetitive paths to minimize soil compaction. This approach generally relies on prior information about the working area, which could be acquired through collaboration with drones, useful to capture an up-to-date 2D or 3D model of the environment. Furthermore, a promising solution could be extending the range of action with swarm robotics, that is, the collaboration of several unmanned ground vehicles (UGVs).
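A minimal sketch of coverage path planning for row crops is the classic boustrophedon (serpentine) pattern; the field dimensions below are made up, and real planners also handle obstacles, headland turns and irregular field shapes:

```python
def coverage_path(n_rows, row_length, row_spacing, step):
    """Boustrophedon (serpentine) waypoints covering a rectangular field:
    traverse each crop row, alternating direction so the robot never
    re-traverses ground it has already covered."""
    waypoints = []
    for r in range(n_rows):
        y = r * row_spacing
        xs = [i * step for i in range(int(row_length / step) + 1)]
        if r % 2 == 1:            # come back along odd-numbered rows
            xs.reverse()
        waypoints += [(x, y) for x in xs]
    return waypoints

path = coverage_path(n_rows=3, row_length=2.0, row_spacing=1.0, step=1.0)
print(path)
# [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (2.0, 1.0), (1.0, 1.0), (0.0, 1.0),
#  (0.0, 2.0), (1.0, 2.0), (2.0, 2.0)]
```

A row-following controller then converts consecutive waypoints into the local velocity commands mentioned above.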

Nowadays, another fundamental aid for agricultural applications based on mobile robotics is given by artificial intelligence (AI). In fact, AI-based classification and segmentation algorithms for images and point clouds are increasingly used to enrich the 3D map with semantic information, also in real time. This is possible mostly thanks to the advances in the computational performance of modern embedded computers that can be installed onboard a mobile platform.

For instance, a convolutional neural network (CNN) applied to the acquired images can provide bounding boxes of objects of interest, which can constitute the basis to build a topological map with key location estimation and semantic information. This can also be exploited to give the robot person-following capability, as done in the work by Masuzawa et al. (2017) (Figure 5). Another example of a machine learning application is given by the work of Reina et al. (2017), which employed a support vector machine to classify the terrain on which the robot is navigating, by means of the wheel slip, rolling resistance and vibration response experienced by the mobile platform, together with visual data. Recent trends in this field also comprise the use of generative adversarial networks to generate photorealistic agricultural images for model training, as well as the recognition of diseases with CNNs.
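Reina et al. (2017) use a support vector machine; as a self-contained stand-in to illustrate the idea of classifying terrain from proprioceptive features, the sketch below trains a simple perceptron (a different, much simpler linear classifier) on made-up vibration/slip data:

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Train a linear classifier w.x + b > 0 on 2-feature samples.
    (Only a minimal stand-in for the SVM used in the cited work.)"""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):          # y in {-1, +1}
            if y * (w[0] * x[0] + w[1] * x[1] + b) <= 0:   # misclassified
                w[0] += lr * y * x[0]
                w[1] += lr * y * x[1]
                b += lr * y
    return w, b

# made-up features: (vibration RMS, wheel slip); +1 = rough, -1 = smooth
X = [(0.9, 0.8), (0.8, 0.9), (0.1, 0.2), (0.2, 0.1)]
y = [1, 1, -1, -1]
w, b = train_perceptron(X, y)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1
print(predict((0.85, 0.85)), predict((0.15, 0.15)))  # 1 -1
```

An SVM replaces this update rule with a maximum-margin optimization and, via kernels, can separate classes that are not linearly separable.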

In the coming years we will witness great progress in all the domains highlighted by this work, from sensors to mobile platforms, from localization algorithms to artificial intelligence methods, with the hope that these innovations will effectively contribute to the transition to a more sustainable, healthy and environmentally-friendly food system.

REFERENCES

European Commission (2020). Farm to Fork strategy for a fair, healthy and environmentally-friendly food system. https://ec.europa.eu/food/horizontal-topics/farm-fork-strategy_en

Tiozzo Fasiolo, D., Scalera, L., Maset, E., Gasparetto, A. (2022). Recent trends in mobile robotics for 3D mapping in agriculture. In International Conference on Robotics in Alpe-Adria Danube Region (pp. 428-435). Springer, Cham.

Manish, R., Lin, Y. C., Ravi, R., Hasheminasab, S. M., Zhou, T., Habib, A. (2021). Development of a miniaturized mobile mapping system for in-row, under-canopy phenotyping. Remote Sensing, 13(2), 276.

Chebrolu, N., Lottes, P., Schaefer, A., Winterhalter, W., Burgard, W., Stachniss, C. (2017). Agricultural robot dataset for plant classification, localization and mapping on sugar beet fields. The International Journal of Robotics Research, 36(10), 1045-1052.

Ohi, N., Lassak, K., Watson, R., Strader, J., Du, Y., Yang, C., et al. (2018). Design of an autonomous precision pollination robot. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 7711-7718). IEEE.

Bietresato, M., Carabin, G., D'Auria, D., Gallo, R., Ristorto, G., Mazzetto, F., Vidoni, R., Gasparetto, A., Scalera, L. (2016). A tracked mobile robotic lab for monitoring the plants volume and health. In 2016 12th IEEE/ASME International Conference on Mechatronic and Embedded Systems and Applications (MESA). IEEE.

Masuzawa, H., Miura, J., Oishi, S. (2017). Development of a mobile robot for harvest support in greenhouse horticulture - Person following and mapping. In 2017 IEEE/SICE International Symposium on System Integration (SII) (pp. 541-546). IEEE.

Reina, G., Milella, A., Galati, R. (2017). Terrain assessment for precision agriculture using vehicle dynamic modelling. Biosystems Engineering, 162, 124-139.

KEYWORDS

Mobile robotics; autonomous mapping; sustainable agriculture

ABSTRACT

The introduction of robotics and automation, coupled with Geomatics techniques, could provide notable benefits not only in terms of crop production and land use optimization, but also in reducing the use of chemical pesticides, improving sustainability and climate performance through a more results-oriented model based on the use of updated data and analyses. For these reasons, the implementation of autonomous and robotic solutions together with advanced monitoring techniques is becoming of paramount importance in view of a resilient and sustainable agriculture.

AUTHOR

Eleonora Maset

eleonora.maset@uniud.it

Lorenzo Scalera

lorenzo.scalera@uniud.it

Diego Tiozzo Fasiolo

diego.tiozzo@uniud.it

Polytechnic Department of Engineering and Architecture (DPIA), University of Udine, Italy



REPORT

Geographical Information: the Italian Scientific Associations and... the Big Tech

by Valerio Zunino

In Italy, the Scientific Associations that deal with geographical data are simply not up to speed, and just a few of them have some sort of idea of how to get into it. The huge and mind-bending projects that Big Tech is dealing with in our same world of professional expertise must first be understood - deeply -, weighed and brought to the table of our heritage of knowledge, culture, education and, lastly, of the consulting services that our Associations are required to offer to the national market.

These days, we can no longer afford to pretend that we can just barely glimpse the revolutionary contribution that the big techs have made (in our sector) to the world of professionals around the planet, among other things through their generalist geographic portals.

Nor can we continue to brand as scientific approximation their method of strategically tackling the world that we, the Scientific Associations, together with the most established companies in the sector, believed we were presiding over with the exclusivity of knowledge and the most advanced technology: the time it takes to access an immense amount of geographical data residing on the internet has soon become one tenth of what we used to expect, and locking ourselves up within our Tender Special Specifications will simply make us less credible in the eyes of that growing audience of subjects that we call users, which evidently represents the market of the Associates that we have a duty to approach, starting with small and medium-sized enterprises and ending with the individual professional who is struggling at work and in life. If we do not do this, we will disappear.

As for the technological framework and business model outlined by those companies that today - whether one accepts it or not - set the times, methods and rules of consultation and processing of the vast majority of published geographical data, and that have so far often been seen as a kind of obstruction to the scientific conversation, it should be clearly stated as of now that they will have to be carefully studied and, if possible, brought to the table of the Associations, so as first and foremost to qualify them. It is necessary to at least begin to show a general humility on the contents, seek an encounter with the Industry and attempt to generate a global, adjustable and - even more urgently - mutual learning, which is for us the only effective and possible means of sharing.

On 11 June 2001, the global market witnessed the release of Google Earth. It took none of us 'insiders' more than a couple of minutes - the time to get over the surprise effect - to measure an extraordinary performance, not comparable (so far superior was it) to that of products such as Autodesk MapGuide or the direct competitors of the time positioned by ESRI, Intergraph or Bentley, at the time the leading tools dedicated to the web consultation of raster and vector geographic data. The very concept of raster was debased in just a few moments, almost as if it had been overtaken by still unknown words able to refer to, and perhaps even describe, the practicality and simplicity of the dynamic and intelligent management of the remotely sensed images that populated this formidable application, capable of covering the globe in a seamless, uninterrupted geographical representation.

Google Earth then remained essentially the same, integrating some interesting functionalities over time, which, however, could not match the first violent impact of product innovation. It is a fact that the other Big Techs were not able to respond rapidly, and did not want to do so in the years immediately following.

Consequently, it is precisely from 2001 onwards that Google began to dig a trench that for a while increased both in width (in terms of the quantity of the information entered) and in depth (in terms of its quality and geographical accuracy); then the trench was partly filled in by a number of competitors, as a result of which the big geographical data market was hit by an unprecedented rivalry based on quality: an arms race that saw first Apple enter the game, with the initial and fundamental support of a very strong and acclaimed segment brand such as TomTom still is... then Amazon, closely followed by Facebook and Microsoft. All of them, although leading the growth of a robust proprietary mapping sector, are to a greater or lesser extent still heavily anchored - and we are talking about truly significant investments - to a global geocartographic system, probably born (Joe Morrison) out of a conversation between recent graduates in an English pub in 2004, and whose commercial value is now out of control: OpenStreetMap.

Of the interesting and singular reasons that have so far prevented the big corporations from replicating OSM, suggesting that they instead invest in it by bringing in their own teams of editors, we will perhaps speak on another occasion. What seems more on topic now, however, is to report on the evolution of OSM content at the hands of the big brands of the IT world. Within the geographic areas (states, regions, crucial human settlements, etc.) where the white-collar teams of the big names are active, the average incidence of editors operating on a voluntary basis is today less than 25% of the entire OSM geographic road/building data operation, whereas in 2017 this figure was around 70% (Jennings Anderson). Now, given that the big corporations are in this way impoverishing the ideological path from which the spirit of community that inspired the birth of the great free geographic portal had sprung, it remains to be seen what else these corporations are doing at the moment, each on their own account and more or less with the headlights off.

Otherwise known as F.A.A.N.G. (Facebook, Amazon, Apple, Netflix and Google), sometimes with the inclusion of Microsoft, these are the Big Techs that for some time have been influencing our behavior, our choices, our purchases and probably even our attitudes.

In the vicinity of Seattle (not without the support, by the way, of the large Indian headquarters in Hyderabad, announced in September 2019 and now operating with about 15,000 engineers in an area of almost 170,000 square meters) they are working on at least two fronts (we are talking about mapping, of course). On the one hand there is the Amazon Location Service project, born together with ESRI and HERE Technologies B.V. from a rib of Amazon Web Services, the latter today a cloud computing platform with a major competitive advantage over the analogues provided by Microsoft (Azure) and Google (GCP). The above-mentioned partnership serves to fill the gap that Amazon, like the others, suffers in terms of proprietary cartography (or cartography acquired in perpetual license without payment of any fee to the relevant suppliers): while ESRI makes available to its partner some high-definition satellite databases, HERE contributes by providing its own geographic vector data, referring in particular to road circulation, real-time traffic and address location.

Of course, the partners receive payment on an on-demand (per-click) basis whenever the Amazon Location Service user performs an operation on the geography, processes a route request, performs a different search, and so on. And it is also for this reason that Amazon is moving on its own account, in order to free itself from such costs. As is typical for big brands, once the allure of a market has been verified, it is considered a serious strategic mistake to wait too long before being present; consequently Amazon Location Service, like other platforms, simply had to be born, necessarily together with selected and established partners in the segment. So the goal was to get into it right away, more or less, to steady a service and then innovate and improve it, while battling for market share points against its longtime competitors. Before long, Amazon will reduce the quantitative contribution of its consumption-based geographic data providers and resubmit a quasi-proprietary version of Amazon Location Service built on the most important element, the mapping system: you can bet on it.

Facebook and Microsoft are also investing in the same endeavor to reduce the present-day gross imbalance between third-party geographic data and data owned or acquired outright without conditions on publication; the platforms are called, for the uninitiated, Facebook Maps and Microsoft Azure Maps, respectively. But while Zuckerberg's creature (today "Meta") does not seem to show any particular interest in a race for proprietary geographic information endowments - if not for certain categories or thematic classes - around Redmond we are today witnessing an interesting acceleration, which also concerns 2D buildings, published and made available as open data just in the last few weeks by the Bing subsidiary for a long series of countries, including Italy. The completeness of the data is good (we tested it for parts of our country) and the quality certainly more than decent.

And while what happens in Mountain View is always somewhat surrounded by that more or less invisible aura of mystery that almost always intrigues, Cupertino eventually shows detail and quality with its famous proprietary map of California, representing and symbolizing features down to individual plants in some cities. Apple's New Map is indeed the qualitative-quantitative manifesto of the formidable global geographic ambitions of the brand founded by Steve Jobs. From a strictly GIS-oriented point of view, it is the most ambitious project.

In conclusion, will we, as Associations for the protection, dissemination and comparison of geographic data in Italy, be able to keep up with the pace of a category credibility that demands, without compromise, a greater openness of our awareness and a more convincing manifestation of our somewhat repressed humility? These reactions are neither easy nor quick, but they are necessary. To the young people who are entering the world of the Associations federated in ASITA we say, and recommend, that they open up - open their vision of the market that will one day be theirs alone - in the direction of others: the public sector, Big Tech, the professional world, the international segment majors, the national industry, and so on. Geoinformation, sooner or later, will have to become one: we had better realize it sooner rather than later.

KEYWORDS

ASITA; Geographic information; big tech; GIS; AMFM; location services

ABSTRACT

The world is changing. The Italian Scientific Associations of Geographical Information are not. The opportunities are endless, but what is missing is humility and ideas, and often these two shortcomings feed off each other.

AUTHOR

Valerio Zunino

valerio.zunino@studiosit.it

Vice-President, Association AMFM GIS Italia



Rhine River, Germany

The Rhine, the longest river in Germany, is featured in this colourful image captured by the Copernicus Sentinel-2 mission. The river, visible here in black, flows from the Swiss Alps to the North Sea through Switzerland, Liechtenstein, Austria, France, Germany and the Netherlands. In the image, the Rhine flows from bottom-right to top-left. The river is an important waterway with an abundance of shipping traffic, carrying import and export goods from all over the world. The picturesque Rhine Valley has many forested hills topped with castles and includes vineyards and quaint towns and villages along the route of the river. One particular stretch, extending from Bingen in the south to Koblenz and known as the Rhine Gorge, has been declared a UNESCO World Heritage Site (not visible). Cologne is visible at the top of the image.

This composite image was created by combining three separate Normalised Difference Vegetation Index (NDVI) layers from the Copernicus Sentinel-2 mission. The Normalised Difference Vegetation Index is widely used in remote sensing as it gives scientists an accurate measure of the health and status of plant growth. Each colour in this week's image represents the average NDVI value of an entire season between 2018 and 2021. Shades of red depict peak vegetation growth in April and May, green shows changes in June and July, while blue shows August and September. Colourful squares, particularly visible on the left of the image, show different crop types. The nearby white areas are forests and appear white because they retain high NDVI values through most of the growing season, unlike crops, which are planted and harvested at set times. Light pink areas are grasslands, while the dark areas (which have a low NDVI) are built-up areas and water bodies.

[Credits: contains modified Copernicus Sentinel data (2018-21), processed by ESA - Translation: Gianluca Pititto]
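For readers curious how the index behind this composite is computed: NDVI combines, pixel by pixel, the near-infrared and red reflectance of the scene (for Sentinel-2 these are bands B08 and B04). A minimal sketch, not taken from the article:

```python
def ndvi(nir, red):
    """NDVI = (NIR - red) / (NIR + red); ranges from -1 to +1.

    Healthy vegetation reflects strongly in the near-infrared and absorbs
    red light, giving values close to +1; bare soil, built-up areas and
    water give values near or below zero.
    """
    denom = nir + red
    if denom == 0:  # water/shadow pixels may have near-zero reflectance
        return 0.0
    return (nir - red) / denom

print(ndvi(0.5, 0.05))   # dense vegetation: high NDVI (~0.82)
print(ndvi(0.05, 0.04))  # bare soil or built-up area: low NDVI
```

In practice the same formula is applied to whole raster bands (e.g. with NumPy arrays) rather than to single values.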


REPORT

Time and Longitude:
an unexpected affinity
by Marco Lisi

Time, the fourth dimension, is becoming increasingly important in all aspects of technology and science. The generation and distribution of an accurate reference time is a strategic asset on which the most disparate applications depend: from financial transactions to broadband communications, from satellite navigation systems to the large laboratories of basic physical research (so-called "Big Physics").

But time is also the dimension through which technology evolves (as, for example, in Moore's law, which describes the increase in complexity of integrated electronic circuits) and obsolescence spreads. Obsolescence will be the great challenge, often ignored or underestimated, that the economically and technologically advanced societies of the world will have to face in the years to come. The more technology increases its evolutionary pace, the more quickly the things that surround us become "old", as they are no longer able to interface with each other and be maintained. Maintenance and updating of obsolete parts (so-called "logistics") are essential aspects of the operational life of a system, and both have to do with time.

The importance of a precise time reference in our society and economy

The determination and accurate measurement of time are the basis of our technological civilization. The major advances in this field have taken place in the last century, with the invention of the quartz crystal oscillator in 1920 and the first atomic clocks in the 1940s. Nowadays time measurement is by far the most accurate among the measures of the fundamental physical quantities. Even the measurement unit for lengths, once based on the mythical reference meter, a Platinum-Iridium sample preserved in Paris, was internationally redefined in 1983 as "the length of the path travelled by light in vacuum during a time interval of 1/299792458 of a second".

The second (symbol "s") is the unit of measurement of official time in the International System of Units (SI). Its name comes simply from the second division of the hour, the minute being the first. The second was originally defined as the 86400th part of the mean solar day, i.e., the average, taken over a year, of the solar day, defined as the time interval elapsing between two successive passages of the Sun over the same meridian.

In 1884 Greenwich Mean Time (GMT) was officially established as the international standard of time, defined as the mean solar time at the meridian passing through the Royal Observatory in Greenwich (England).


GMT gives the time in each of the 24 zones (time zones) into which the Earth's surface has been divided. The time decreases by one hour for each zone west of Greenwich, and increases by one hour going east. GMT is also known as "Z" time or, in the phonetic alphabet, "Zulu" time.

The time standard underlying the definition of GMT was maintained until astronomers discovered that the mean solar day was not constant, due to the slow (but continuous) slowdown of the Earth's rotation around its axis, a phenomenon essentially linked to the braking action of the tides. It was then decided to refer the mean solar day to a specific date, that of January 1, 1900. This solution was very impractical, since it is not possible to go back in time and measure the duration of that particular day.

In 1967 a new definition of the second was adopted, based on the hyperfine transition of the isotope 133 of Cesium. The second is now defined as the time interval equal to 9192631770 cycles of the radiation corresponding to this transition of Cesium 133. This definition allows scientists anywhere in the world to reconstruct the duration of the second with equal precision, and the concept of International Atomic Time (TAI) is based on it.
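As a back-of-the-envelope illustration of these figures (the one-second-in-six-million-years accuracy quoted in the next paragraph for the NIST cesium clock is from the article; the arithmetic below is only a sketch):

```python
# SI definition: the second is 9_192_631_770 cycles of the radiation from
# the caesium-133 hyperfine transition
CS133_FREQ_HZ = 9_192_631_770

cycle_period = 1.0 / CS133_FREQ_HZ
print(f"one cycle lasts about {cycle_period:.3e} s")  # ~1.088e-10 s

# A clock off by 1 s in six million years has a fractional accuracy of roughly:
seconds_in_6M_years = 6e6 * 365.25 * 86400
print(f"fractional accuracy ~ {1 / seconds_in_6M_years:.1e}")  # ~5.3e-15
```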

The first atomic clock was developed in 1949 and was based on an absorption line of the ammonia molecule. The cesium clock, developed at the legendary NIST (National Institute of Standards and Technology) in Boulder, Colorado, can keep time with an accuracy better than one second in six million years. It was precisely the extreme accuracy of atomic clocks that led to the adoption of atomic time as the official reference worldwide.

Fig. 1 - UTC and critical infrastructures.
Fig. 2 - Stonehenge, a prehistoric astronomical observatory.

However, a new problem had been indirectly created: the discrepancy between the international time reference, based as mentioned on atomic clocks, and mean solar time. The mean solar year lengthens by about 0.8 seconds per century (i.e., about an hour every 450,000 years). Consequently, universal time accumulates a delay of approximately 1 second every 500 days compared to international atomic time. This means that our distant great-grandchildren, in the distant future just 50,000 years from now, would read "noon" on their atomic clocks even though they were actually in the middle of the night. To overcome this and other more serious drawbacks, the concept of Coordinated Universal Time (UTC) was introduced in 1972, definitively replacing GMT.

In the short term, UTC essentially coincides with International Atomic Time (TAI); when the difference between UTC and TAI approaches one second (which occurs approximately every 500 days), an extra second, called a "leap second", is introduced. In this way the two time scales, TAI and UTC, are kept within a maximum discrepancy of 0.9 seconds.
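The bookkeeping of this relationship can be sketched with the offsets actually in force (a 10-second initial offset fixed in 1972, plus the 27 leap seconds inserted through the end of 2016). Leap seconds are announced individually by the IERS rather than on a fixed schedule, so the counts below are lookups, not a formula:

```python
# TAI runs uninterrupted; UTC lags it by an integer number of seconds.
TAI_MINUS_UTC_1972 = 10       # seconds: the initial offset fixed in 1972
leap_seconds_since_1972 = 27  # total leap seconds inserted from 1972 to end of 2016

tai_minus_utc = TAI_MINUS_UTC_1972 + leap_seconds_since_1972
print(f"TAI - UTC = {tai_minus_utc} s")  # 37 s, in force since 1 January 2017
```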

UTC ("Universal Coordinated

Time"), defined by the historic

“Bureau International des Poids

et Mesures” (BIPM) in Sevres

GEOmedia n°3-2022 27


REPORT

Fig. 3 - Ancient Egyptian stone obelisks.

(Paris), is since 1972 the legal

basis for the measurement of

time in the world, permanently

replacing the old GMT. It is

derived from TAI, from which it

differs only by an integer number

of seconds. TAI is in turn

calculated by BIPM from data

of more than 200 atomic clocks

located in metrology institutes

in more than 30 countries over

the world.

But why is it so important to have an accurate and unambiguous definition of time? It is a matter not only for scientists and experts. A universally recognized and very accurate reference time is in fact at the base of most infrastructures of our society (figure 1). All cellular and wireless networks, for example, rely on careful synchronization of their nodes and base stations (obtained by receiving GNSS signals, as we will see). The same is true for electric power distribution networks. Surprisingly, even financial transactions, banking and stock markets all depend on an accurate time reference, given the extreme volatility of equity and currency markets, whose quotations may vary within a few microseconds.

Time and its measurement

The history of the measurement of time is as old as the history of human civilization. In prehistoric England, the megalithic monument of Stonehenge seems to have been a sophisticated astronomical observatory used to determine the length of the seasons and the date of the equinoxes (figure 2). Already in 3500 BC the ancient Egyptians invented the sundial and erected stone obelisks throughout their country whose primary purpose was to mark, with their shadow, the movement of the sun and therefore the passage of time (figure 3). In ancient Roman times, and up until late in the Middle Ages, sundials, marked candles, and water and sand hourglasses were used to measure time (figure 4).

A milestone in the history of the measurement of time was, in more recent times, Galileo's discovery, in 1583, of the constancy of the pendulum's swing period, on which all mechanical clocks are based (figure 5). In 1656 Christiaan Huygens, the Dutch mathematician, astronomer and physicist (famous among other things for the principle of diffraction that bears his name), designed the first weight-driven pendulum clock, which deviated by ten minutes a day (figure 6).

But the major impetus for the development of ever more accurate techniques for measuring time came from the need to determine one's position (particularly longitude) aboard a ship on the open sea. From then on, time and positioning became irreversibly connected.

Fig. 4 - Roman and Medieval time measurement methods.

"Longitude problem" and the measurement of time

The latitude and longitude coordinate system, commonly used to determine and describe one's position on the Earth's surface, was known to astronomers and navigators since Greek and Roman times. Determining latitude north or south of the equator posed no major problems: it could be calculated through angular measurements of the sun and stars made with relatively simple instruments. Measuring longitude, that is, identifying the east-west position on Earth between meridians, the lines running from pole to pole, was a completely different story: longitude was far more difficult than latitude to measure by astronomical observation. Because of the Earth's rotation, the difference in longitude between two locations is equivalent to the difference in their local times: one degree of longitude equals a four-minute time difference, and 15 degrees equals one hour (making 360 degrees, or 24 hours, in total).
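The equivalence above is simple arithmetic and can be captured in two one-line helpers:

```python
def time_diff_to_longitude(hours):
    """Longitude difference (degrees) for a local-time difference in hours:
    the Earth turns 360 degrees in 24 hours, i.e. 15 degrees per hour."""
    return hours * 15.0

def longitude_to_time_diff_minutes(degrees):
    """Local-time difference in minutes for a longitude difference:
    one degree of longitude corresponds to four minutes of time."""
    return degrees * 4.0

print(time_diff_to_longitude(1))          # 15.0 degrees per hour
print(longitude_to_time_diff_minutes(1))  # 4.0 minutes per degree
```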

While a sextant with which to determine the height of the sun at noon was sufficient to establish one's latitude, the determination of longitude, because of the Earth's rotation, required the use of both the sextant and a very precise clock.

Several methods had been proposed over the centuries by scientists and astronomers (including Galileo and Newton), all based on the observation of specific astronomical events, such as lunar eclipses. All these methods turned out to be rather cumbersome, with errors of several hundred kilometers. Even Christopher Columbus made two attempts to use lunar eclipses to determine his longitude during his voyages to the New World, but his results were affected by large errors. The lack of an accurate method of determining longitude created innumerable problems (at times, real disasters) for the sailors of the 15th and 16th centuries.

Fig. 5 - Galileo Galilei discovered in 1581 the isochronism of the pendulum.

At the beginning of the eighteenth century, with the rapid growth of maritime traffic, a sense of urgency had arisen. The search for longitude cast a shadow over the life of every man at sea, and over the safety of every vessel and merchant ship. The exact measurement of longitude seemed at that time an impossible dream, a sort of perpetual motion machine. What was needed was an instrument that kept the time of the place of departure with the utmost precision during long sea voyages, despite the movement of the ship and adverse climatic conditions alternating hot and cold, humid and dry. Seventeenth-century and early eighteenth-century clocks, on the other hand, were crude devices that usually lost or gained up to a quarter of an hour a day.

Fig. 6 - Christiaan Huygens and the first pendulum clock.

The "longitude problem" became so serious that in 1714 the British Parliament formed a group of well-known scientists, the "Board of Longitude", to study a solution. The Board offered twenty thousand pounds, equivalent to more than three million pounds today, to anyone who could find a way to determine the longitude of a ship on the open sea with an accuracy within half a degree (thirty nautical miles, about fifty-five kilometers, at the equator).

Fig. 7 - Right: John Harrison; left (clockwise): Harrison's chronometers H1 through H4.
Fig. 8 - James Cook's Pacific voyages.
Fig. 9 - Galileo Passive Hydrogen Maser (PHM) clock.

The approach was successful, despite the many (and often completely crazy) proposals received. In 1761 a self-educated Yorkshire carpenter and amateur clock-maker named John Harrison built a special mechanical clock to be carried on board ships, the "marine chronometer", capable of losing or gaining no more than one second per day, an incredible accuracy for the time (figure 7). Harrison did not receive the prize from the Board until after fighting for his reward, finally obtaining payment in 1773, after the intervention of the British Parliament.
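To see what Harrison's one-second-per-day figure meant in practice, here is a short sketch converting a clock error into a position error. The 42-day voyage length is a hypothetical example, not a figure from the article:

```python
import math

EARTH_ROT_DEG_PER_SEC = 360.0 / 86400.0  # the Earth turns 360 degrees in 86400 s

def longitude_error_nmi(clock_error_s, latitude_deg=0.0):
    """Position error (nautical miles) implied by a clock error.

    One arc-minute of longitude is one nautical mile at the equator,
    shrinking with cos(latitude) as the meridians converge.
    """
    error_deg = clock_error_s * EARTH_ROT_DEG_PER_SEC
    return error_deg * 60.0 * math.cos(math.radians(latitude_deg))

# A hypothetical 42-day crossing with a chronometer drifting 1 s/day:
print(longitude_error_nmi(42 * 1.0))  # ~10.5 nmi at the equator, well inside the 30 nmi prize limit
```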

And it was thanks to a copy of Harrison's H4 chronometer that Captain James Cook made his second and third legendary explorations of Polynesia and the Pacific islands on board HMS Resolution (figure 8). A copy of the H4 chronometer was also used in 1787 by Lieutenant William Bligh, commander of the famous HMS Bounty, but it was retained by Fletcher Christian following his mutiny. It was later recovered on Pitcairn Island and eventually reached the National Maritime Museum in London.

GNSS and Timing

An extremely accurate UTC reference is today provided worldwide by global navigation satellite systems (GNSS) such as GPS (Global Positioning System), GLONASS, BeiDou and the European Galileo system. These are constellations of satellites orbiting the Earth, each carrying on board extremely precise atomic clocks, all synchronized to a system reference clock. GNSS technologies are intrinsically linked to accurate timing.

This is because of the specific principle (trilateration) on which position determination is based: the distance of a user from each satellite is obtained by measuring the time delay experienced by the signal-in-space.
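At its core, this range measurement is a time-of-flight computation. The sketch below ignores the receiver clock bias, which real receivers estimate as a fourth unknown from at least four satellites, so it only illustrates the delay-to-distance conversion:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def pseudorange_m(transit_time_s):
    """Distance implied by a measured signal transit time (idealized: no clock bias)."""
    return C * transit_time_s

# A signal from a medium-Earth-orbit GNSS satellite (~20,200 km up) takes ~67 ms:
print(pseudorange_m(0.067) / 1000)  # ~20,086 km

# Why timing accuracy matters: a 1-microsecond clock error is ~300 m of range error
print(pseudorange_m(1e-6))  # ~299.8 m
```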

The most accurate and numerous atomic clocks around the world are those belonging to GNSS, which thus contribute substantially to the derivation of TAI and UTC (figure 9). UTC can be derived from the Galileo and GPS signals through a series of corrections based on data provided by the signals themselves. The accuracy obtainable, even with very cheap commercial receivers (or with those integrated into our smartphones), is easily better than one microsecond.

KEYWORDS
GNSS; GPS; Galileo; GLONASS; Beidou; time; longitude; GMT; TAI; UTC

ABSTRACT
To have an accurate and unambiguous definition of time is a matter not only for scientists and experts. A universally recognized and very accurate reference time is in fact at the base of most infrastructures of our society. All cellular and wireless networks, for example, are based on careful synchronization of their nodes and base stations (obtained receiving GNSS signals). The same is true for electric power distribution networks. Surprisingly, even financial transactions, banking and stock markets all depend on an accurate time reference, given the extreme volatility in equity and currency markets, whose quotations might vary within a few microseconds. The history of the measurement of time is as old as the history of human civilization. But the major impetus for the development of ever more accurate techniques for measuring time came from the need to determine one's position (particularly longitude) aboard a ship on the open sea. In 1761, a self-educated Yorkshire carpenter and amateur clock-maker named John Harrison built a special mechanical clock to be loaded on board ships, called the "marine chronometer", capable of losing or gaining no more than one second per day (an incredible accuracy for that time). From then on, time and positioning became irreversibly connected.

AUTHOR
Marco Lisi
ingmarcolisi@gmail.com



NEWS

L3HARRIS INFRARED SPACE TECHNOLOGY TO ENHANCE BATTLEFIELD IMAGERY AND MISSILE DEFENCE DETECTION

L3Harris is providing the instrument as part of a wide-field-of-view satellite that will also help inform future space-based missile defense missions and architectures. The satellite will be positioned 22,000 miles from Earth, enabling the infrared system to see a wide swath and patrol a large area for potential missile launches.

"The L3Harris instrument can stare continuously at a theater of interest to provide ongoing information about the battlespace, which is an improvement over legacy systems," said Ed Zoiss, President, Space & Airborne Systems, L3Harris. "It also provides better resolution, sensitivity and target discrimination at a lower cost."

The instrument was built for Space Systems Command and is integrated into a Millennium Space Systems satellite, scheduled to launch from Cape Canaveral, Florida. The payload, which is more than six feet tall and weighs more than 365 pounds, was developed in Wilmington, Mass. L3Harris is prioritizing investments in space-based missile defense programs and has accelerated the development of resilient, end-to-end satellite solutions in spacecraft, payloads and ground software, and advanced algorithms.

In a related effort, the Missile Defense Agency awarded L3Harris a missile-tracking study contract in 2019 and the prototype demonstration in January 2021. In December 2020, the Space Development Agency selected L3Harris to build and launch four space vehicles to demonstrate the capability to detect and track ballistic and hypersonic missiles.

About L3Harris Technologies
L3Harris Technologies is an agile global aerospace and defense technology innovator, delivering end-to-end solutions that meet customers' mission-critical needs. The company provides advanced defense and commercial technologies across space, air, land, sea and cyber domains. L3Harris has more than $17 billion in annual revenue and 47,000 employees, with customers in more than 100 countries. L3Harris.com.

Forward-Looking Statements
This press release contains forward-looking statements that reflect management's current expectations, assumptions and estimates of future performance and economic conditions. Such statements are made in reliance upon the safe harbor provisions of Section 27A of the Securities Act of 1933 and Section 21E of the Securities Exchange Act of 1934. The company cautions investors that any forward-looking statements are subject to risks and uncertainties that may cause actual results and future trends to differ materially from those matters expressed in or implied by such forward-looking statements. Statements about the value or expected value of orders, contracts or programs are forward-looking and involve risks and uncertainties. L3Harris disclaims any intention or obligation to update or revise any forward-looking statements, whether as a result of new information, future events, or otherwise.


EMLID RELEASES EMLID STUDIO, A PPK APP FOR MAC AND WINDOWS

Emlid has announced new PPK software, Emlid Studio: a cross-platform desktop application designed specifically for post-processing GNSS data. The app is free and available for Windows and Mac users.

Emlid Studio features a simple interface that makes post-processing easier than ever. The app allows users to convert raw GNSS logs into RINEX, post-process static and kinematic data, geotag images from drones (including DJI), and extract points from survey projects completed with the ReachView 3 app.

With Emlid Studio, you can post-process data recorded with Emlid Reach receivers as well as other GNSS receivers or NTRIP services. For post-processing you will need RINEX observation and navigation files. You can also use raw data in the UBX and RTCM3 formats: Emlid Studio will automatically convert them into RINEX.

The post-processing workflow is very straightforward. You can obtain the precise position of a single point or of a track, depending on your positioning mode. Just add the RINEX files and enter the antenna height; click the Process button, and Emlid Studio will do the rest. Once the resulting position file is ready, you will see the result on the plot.

One more tool is available for users of Reach receivers and the ReachView 3 app: the Stop & Go feature allows you to improve the coordinates of points collected in Single or Float modes.

Another helpful feature is geotagging for drone mapping. To add geotags to the images' EXIF data, you'll need the aerial photos and the POS file with the events. Emlid Studio also makes it possible to update data from an RTK drone in case you had a float or single solution during your survey: you will need a set of RINEX logs from the base and the drone, the MRK file, and the images from the drone. Just drag and drop the data into the file slots and you'll see the result in a few seconds.

To start using Emlid Studio, simply download the app for your computer, either Windows or macOS. To learn more about Emlid Studio features, visit the Emlid website.




TOPCON REPRESENTS CONSTRUCTION INDUSTRY IN "CAMPUSOS" 5G RESEARCH PROJECT

Topcon Positioning Germany is one of 22 partners involved in CampusOS, a research project with the goal of developing a modular ecosystem for open 5G campus networks based on open radio technologies and interoperable network components. As part of the German technology program "Campus networks based on 5G communication technologies", innovative solutions for open 5G networks are being developed and tested in conjunction with the German Federal Ministry for Economic Affairs and Climate Protection. The program was launched at the beginning of 2022 and will run through 2025.

The use of artificial intelligence in the operation of autonomous plants and construction machinery requires the highest level of digital sovereignty. If Construction 4.0, including far-reaching automation, is to become a reality in Germany and the rest of the world, the processes of such data-driven solutions must run reliably, quickly and autonomously.

Sponsored by the Federal Ministry of Economics
The German Federal Ministry for Economic Affairs and Climate Protection is providing around 18.1 million euros in funding for the technology program over the next three years; the program will cost around 33 million euros in total. The Fraunhofer Institutes FOKUS and HHI are coordinating the project, and 22 partners from industry and research are involved, including Deutsche Telekom, Siemens, Robert Bosch and more.

"To enable companies to operate their own campus networks, certain requirements must be met, from standardized technology building blocks to network structures. As the sole representative of the construction industry, Topcon will test the technologies on reference test sites and will therefore help shape the solutions for the future," explains Ulrich Hermanski, Chief Marketing Officer of the Topcon Positioning Group. "We look forward to working with our research partners to take the digital construction site to the next level."

The future of the construction industry is digital
With this research project, construction companies will one day be able to operate plants and machinery autonomously in open campus networks. This will allow fluid and uninterrupted monitoring of construction sites in real time, as well as the networking of all sensors and construction machines in use on construction sites. Completely independent of public networks, 5G technology guarantees seamless machine-to-machine communication and transmits data ten times faster than 4G. The campus networks this requires, based on 5G frequencies, are practically digital ecosystems. They operate with open radio technologies and dialog-enabled components. The campus networks are geographically limited and can operate on a factory floor or on a construction site. Hermanski explains: "We will put a lot of time and energy into this project, because 5G campus networks are an important key technology for the construction site of the future."

Lead project CampusOS: the consortium and its partners
In addition to Topcon Deutschland Positioning GmbH, the collaborative partners of the CampusOS lead project include: atesio GmbH, brown-iposs GmbH, BISDN GmbH, Robert Bosch GmbH, Deutsche Telekom AG, EANTC AG, the Fraunhofer Institutes FOKUS and HHI (project coordinators), GPS Gesellschaft für Produktionssysteme GmbH, highstreet technologies GmbH, Kubermatic GmbH, MUGLER SE, Node-H GmbH, Rohde & Schwarz GmbH, rt-solutions.de GmbH, Siemens AG, Smart Mobile Labs AG, STILL GmbH, SysEleven GmbH, the Technical University of Berlin and the Technical University of Kaiserslautern.

About Topcon Positioning Group
Topcon Positioning Group is an industry-leading designer, manufacturer and distributor of precision measurement and workflow solutions for the global construction, geospatial and agriculture markets. Topcon Positioning Group is headquartered in Livermore, California, U.S. (topconpositioning.com, LinkedIn, Twitter, Facebook). Its European head office is in Capelle a/d IJssel, the Netherlands. Topcon Corporation (topcon.com), founded in 1932, is traded on the Tokyo Stock Exchange (7732).





GALILEO GNSS FOR THE ASSET MAPPING PLATFORM FOR EMERGING COUNTRIES ELECTRIFICATION

The purpose of the AMPERE (Asset Mapping Platform for Emerging countRies Electrification) project is to provide a dedicated solution for gathering information on electrical power networks. AMPERE can support decision makers (e.g. institutions and public/private companies in charge of managing electrical networks) in collecting all the information needed to plan electrical network maintenance and upgrades. The need for such a solution arises in particular in emerging countries where, although global electrification rates are progressing significantly, reliable access to electricity is still far from being achieved. Indeed, the challenge facing such communities goes beyond the lack of infrastructure assets: what is needed is a mapping of the already deployed infrastructure (sometimes not well known!) in order to perform a holistic assessment of the energy demand and its expected growth over time. In such a context, Galileo is a key enabler, especially considering its free-of-charge High Accuracy Service (HAS) and its highly precise E5 AltBOC code measurements, as a core component to map electric utilities, optimise the decision-making process about network development and therefore increase time and cost efficiency, offering a more convenient way to manage energy distribution. These aspects give the AMPERE project a worldwide dimension, with European industry having the clear role of bringing innovation and know-how to allow network intervention planning with limited financial risk, above all for emerging non-European countries.

AMPERE proposes a solution based on GIS cloud mapping technology, collecting field data acquired with optical/thermal cameras and LiDAR installed on board a Remotely Piloted Aircraft (RPA). In particular, an RPA will be able to fly over selected areas performing semi-automated operations to collect optical and thermal images as well as 3D LiDAR-based reconstruction products. These products are post-processed on the central cloud GIS platform, whose visualization and analytics tools support operators in planning and monitoring activities, resolve data accessibility issues and improve the decision-making process. In this context, EGNSS represents an essential technology, ensuring automated operations in a reliable manner and guaranteeing high performance.

AMPERE uses Galileo's advanced features, namely the High Accuracy Service (HAS) and E5 AltBOC, as a core element of its added-value asset mapping proposition. The nature of HAS fits the requirements of this application very well, especially after the re-shaping of the once fee-based accuracy capability into an open, free-of-charge service delivering around 20-centimeter accuracy (versus the below-ten-centimeter accuracy of commercial PPP services) with shorter convergence time. The key to Galileo HAS lies in the high bandwidth of its E6-B channel, well suited to transmitting PPP information, which is especially relevant for satellite clock corrections, since the clocks are not as stable in the medium and long term as the orbits. Additionally, the use of E5 AltBOC pseudoranges (which are cm-level precise, with maximum multipath effects in the order of 1 m) supports fast ambiguity resolution for carrier-phase observations. The Alternative BOC (AltBOC) modulation on E5 is one of the most advanced signals the Galileo satellites transmit; Galileo receivers capable of tracking this signal benefit from unequalled performance in terms of measurement accuracy and multipath suppression.

The market is responding actively and positively to the multi-frequency capabilities provided by Galileo. Around 40% of receiver models on the market are now multi-frequency. In the mass market too, with the launch of the world's first dual-frequency GNSS smartphone by Xiaomi, and with u-blox, STM, Intel and Qualcomm launching their first dual-frequency products earlier this year, multi-frequency is becoming a reality for users needing increased accuracy.

More information: https://h2020-ampere.eu


RHETICUS® NETWORK ALERT TO ASSIST

UNITED UTILITIES OF ENGLAND

CHC Navigation (CHCNAV) today announced the availability of the i73+ Pocket GNSS receiver. The i73+ is a compact, powerful and versatile GNSS receiver with an integrated UHF modem that can be used interchangeably as a base station or rover. Powered by 624 full GNSS channels and the latest iStar technology, it delivers survey-grade accuracy in all job-site configurations.

"Building on the legacy of the i73 GNSS, the new i73+ receiver is designed to maintain its proven compact and lightweight concept, but additionally adds the ability to be operated as either a GNSS RTK base station or a rover," said Rachel Wang, Product Manager of CHC Navigation's Surveying and Engineering Division. "To enable this extra feature, we have built in the latest UHF modem technology, allowing the reception and transmission of RTK corrections without sacrificing receiver size and power consumption."

Integrated Tx/Rx UHF modem extends the i73+ capacity
The i73+ has a built-in transceiver radio module compatible with major radio protocols, making it a perfect portable built-in UHF base and rover kit with fewer accessories. The i73+ is a highly productive NTRIP rover when used with a handheld controller or tablet and connected to a GNSS RTK network via CHCNAV LandStar field software.

Best-in-class technology with 624-channel advanced tracking
The integrated advanced 624-channel GNSS technology takes advantage of GPS, GLONASS, Galileo and BeiDou, in particular the latest BeiDou III signals, and provides robust data quality at all times. The i73+ extends GNSS surveying capabilities while maintaining centimeter-level, survey-grade accuracy.

Built-in IMU technology greatly enhances surveyors' work efficiency
With its IMU compensation ready in 3 seconds, the i73+ delivers 3 cm accuracy at up to 30 degrees of pole tilt, increasing point-measurement efficiency by 20% and stakeout by 30%. Surveyors can extend their working boundary near trees, walls and buildings without the use of a total station or offset measurement tools.
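The geometry behind IMU-assisted tilt compensation can be sketched generically: recover the pole tip (the measured ground point) from the antenna phase-centre position, the pole length and the tilt reported by the IMU. This is the textbook pole-tilt correction, not CHCNAV's proprietary algorithm; the function name and all values are illustrative.

```python
# Generic pole-tilt compensation geometry: where is the pole tip when the
# antenna is not plumb over the point? (Illustrative sketch only.)
import math

def pole_tip(antenna_enu, pole_len_m, tilt_deg, heading_deg):
    """antenna_enu: (east, north, up) of the antenna phase centre.
    tilt_deg: pole tilt from the vertical; heading_deg: azimuth of the lean."""
    e, n, u = antenna_enu
    t = math.radians(tilt_deg)
    h = math.radians(heading_deg)
    horiz = pole_len_m * math.sin(t)      # horizontal offset of the tip
    return (e - horiz * math.sin(h),      # tip lies opposite the lean
            n - horiz * math.cos(h),
            u - pole_len_m * math.cos(t))

# A 2 m pole tilted 30 degrees: the tip sits a full metre sideways from
# the antenna, which is the offset the IMU lets the receiver correct.
tip = pole_tip((100.0, 200.0, 50.0), 2.0, 30.0, 90.0)
print(tip)
```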

Compact design, only 0.73 kg including battery
The i73+ is the lightest and smallest receiver in its class, weighing only 0.73 kg including the battery. It is almost 40% lighter than traditional GNSS receivers and easy to carry, use and operate without fatigue. The i73+ is packed with advanced technology, fits in the hand and offers maximum productivity for GNSS surveys.

Learn more about the i73+: https://chcnav.com/product-detail/i73+-imu+-+rtk-gnss

BRINGING REAL-LIFE LOCATION-BASED DATA TO THE METAVERSE WITH METAGEO

METAGEO is an easy-to-use map (GIS) platform that brings imagery, maps, Digital Twins and sensor data into one 3D universe and then streams it to any internet-enabled device or metaverse platform. The new GIS platform aims to enable organizations of all sizes to host, analyze, find and share 3D map datasets between any internet-capable devices. The platform processes any location-based map or sensor data from the real world, combines it into a single 3D virtual environment and streams it to any device or metaverse platform.

Today's traditional GIS platforms are expensive, primarily offer 2D mapping features, are highly complicated and often require an advanced degree to master. 3D map and scan datasets are large, expensive and often hidden. Furthermore, these large files are often unsuitable for viewing on mobile devices or rendering in AR/VR environments. METAGEO addresses these issues with an affordable and easy-to-use platform that can load data from multiple sources, including satellites, drones, mobile devices, public and crowdsourced repositories, IoT sensor data, 3D models and topographic maps. The data is then processed by the METAGEO platform into a 3D world and streamed to any internet-connected device, enabling live collaboration between the office and the field via mobile or AR devices.

Key innovations in the METAGEO 3D map platform include:
a fast and intuitive multi-user interface for easy data sharing and collaboration;
aggregation of map and location-based data from a multitude of sources on a global scale;
seamless import and synchronization of data from multiple different systems into a single platform;
easy hosting and streaming of large datasets between internet-connected devices;
the ability to find open-source and private data;
a plugin SDK that will allow third-party tools to scale and fit any user needs.

METAGEO has been designed for a wide range of applications in academia, architecture, engineering, construction, energy, natural resource management, environmental monitoring, utilities and public safety, among others. Platform uses include planning and managing construction sites, organizing the layouts of events, maps for disaster management and public safety, visualizing inspection imagery from drones and mobile devices, and much more.

"After working with 3D map data for several years, it became apparent that there was no easy way to share big datasets with those who need the information most, those with the boots on the ground," said Paul Spaur, Founder of METAGEO. "Now, with the rapid advancement of mobile hardware, and using advanced processing techniques, we can leverage this data in real life and in the metaverse."

METAGEO will be offered in several affordable subscription tiers, including Free Single User, Free Educational, Standard, Commercial and Enterprise. Each tier provides added features and benefits, enabling organizations to scale. METAGEO is available to a limited number of beta subscribers. Interested parties can get started today at www.metageo.io

www.geoforall.it/kcax


LEICA GEOSYSTEMS ANNOUNCES MAJOR PERFORMANCE INCREASE IN AIRBORNE BATHYMETRIC SURVEY

Leica Geosystems, part of Hexagon, announced today the introduction of the Leica Chiroptera-5, the new high-performance airborne bathymetric LiDAR sensor for coastal and inland water surveys. This latest mapping technology increases the depth penetration, point density and topographic sensitivity of the sensor compared to previous generations. The new system delivers high-resolution LiDAR data supporting numerous applications such as nautical charting, coastal infrastructure planning and environmental monitoring, as well as landslide and erosion risk assessments.

Higher sensor performance enables more cost-effective surveys
The Chiroptera-5 combines airborne bathymetric and topographic LiDAR sensors with a 4-band camera to collect seamless data from the seabed to land. Thanks to a higher pulse repetition frequency (PRF), the new technology increases point density by 40% compared to the previous-generation system, collecting more data during every survey flight. Improved electronics and optics increase water-depth penetration by 20% and double the topographic sensitivity, capturing larger areas of submerged terrain and objects in greater detail. The high-performance sensor is designed to fit a stabilising mount, enabling more efficient area coverage, which decreases the operational costs and carbon footprint of mapping projects.
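The link between pulse rate and point density behind the 40% figure can be sketched with back-of-the-envelope numbers: for a scanner that spreads its pulses evenly over the swath, density scales linearly with PRF at a given ground speed and swath width. The flight parameters below are hypothetical, not Leica specifications.

```python
# Back-of-the-envelope relation for airborne lidar point density:
# density ≈ PRF / (ground speed × swath width). Illustrative figures only.

def point_density(prf_hz, speed_ms, swath_m):
    """Points per square metre for prf_hz pulses spread evenly over a swath
    of swath_m metres, flown at speed_ms metres per second."""
    return prf_hz / (speed_ms * swath_m)

base = point_density(100_000, 60.0, 400.0)    # hypothetical older sensor
plus40 = point_density(140_000, 60.0, 400.0)  # 40% higher PRF, same flight
print(f"{base:.2f} -> {plus40:.2f} pts/m^2")
```

At fixed flight parameters, a 40% higher PRF yields exactly 40% more points per square metre; alternatively the extra pulses can buy a wider swath or faster flight at the original density.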

Leica Geosystems' signature bathymetric workflow supports the sensor's performance. Near real-time data processing enables coverage analysis immediately after landing, allowing operators to quality-control the data quickly before demobilising the system. The Leica LiDAR Survey Studio (LSS) processing suite provides full waveform analysis and offers automatic calibration, refraction correction and data classification, as well as advanced turbid-water enhancement.

Expanding the bathymetric application portfolio to support environmental research
Combining superior resolution, depth penetration and topographic sensitivity, the Chiroptera-5 provides substantial benefits for various environmental applications such as shoreline erosion monitoring, flood simulation and prevention, and benthic habitat classification. Bundled with the FAA/EASA-certified helicopter pod, the system enables advanced terrain-following flight paths for efficient mapping of rivers and complex coastlines. Owners of previous-generation systems are offered an easy upgrade path to the Chiroptera-5 to add capabilities to their existing sensor and leverage their initial investment.

"The first-generation Chiroptera airborne sensor was flown in 2012. During its ten years of operation, the system has seen constant evolution that continuously improved the productivity and efficiency of the entire bathymetric surveying industry," says Anders Ekelund, Vice President of Airborne Bathymetry at Hexagon. "By collecting detailed data on coastal areas and inland waters, the Chiroptera-5 provides an invaluable source of information that supports better decision-making, especially for environmental monitoring and management, in line with Hexagon's commitment to a more sustainable future."

For more information please visit: http://leica-geosystems.com/chiroptera-5



AEROFOTOTECA

L'Aerofototeca Nazionale racconta…

POTATOES, ARTIFICIAL INTELLIGENCE AND OTHER AMENITIES:
PLAYING WITH COLORS ON PANCHROMATIC AERIAL PHOTOGRAPHS

by Gianluca Cantoro

Towards color photography
During the first half of the 19th century, photography stimulated people's imagination and wonder. Despite the impressive quality of the first trials, photographs lacked the realism provided by natural colors, at times added in post-production –as we would say today– by specialized painters (Coleman, 1897, p. 56) who felt threatened by the emergence of photography. Following Rintoul, "when the photographer has succeeded in obtaining a good likeness, it passes into the artist's hands, who, with skill and color, give to it a life-like and natural appearance" (Rintoul, 1872, pp. XIII–XIV).

The French pioneer Louis Ducos du Hauron announced a method for creating color photographs by combining colored pigments instead of light, as suggested by Maxwell's demonstration of 1861. His process required long exposure times, a problem compounded by the absence of photographic materials sensitive to the whole range of the color spectrum. Other inventors and scientists tried to solve the challenge of color photography, but all attempts were quite expensive and needed specific equipment and complex procedures.

Fig. 1 - Example of pan-sharpening between a satellite image and a historical panchromatic photograph (top and bottom left). In the column to the right, three different algorithms, respectively (from top to bottom): Brovey, IHS and PCA.

The first patent for a color photograph combining both screen and emulsion on the same glass support, under the name Autochrome, was registered by Auguste and Louis Lumière in 1895, the same year as their invention of the Cinématographe. The manufacturing of autochrome plates was a complex process, starting with the sieving of potato starch (to isolate individual grains between 10 and 15 microns in diameter); the grains were then dyed red, green and blue-violet, mixed and spread over a glass plate (around four million transparent starch grains on every square inch of it), and coated with a sticky varnish. Next, charcoal powder was spread over the plate to fill any gaps between the colored starch grains.

Autochrome plates were simple to use: they required no special apparatus, and photographers were able to use their existing cameras. Exposure times, however, were long –about 30 times those of conventional plates. Nevertheless, by 1913 the Lumière factory in Lyon was producing 6,000 autochrome plates every day, which testifies to the appeal of color photographs already in those early times.

Is it possible today to convert natively black and white images (raster digital pictures, no longer prints or negatives)? And why should one take the trouble to convert panchromatic images into color ones after all? This paper presents some experiments in colorizing historical photographs, in an effort to boost our capability to extract information from the frozen moments captured by cameras and –ideally– to further promote the use of aerial images in various fields.

Colors in Remote Sensing
Some procedures in remote sensing are well known and frequently applied to satellite images to improve the resolution of a color image with the details of its panchromatic twin. This fusion procedure, known as pan-sharpening, can be applied to satellite imagery through numerous algorithms, and it produces an appreciable increase in the accuracy of photo-analysis and derived feature extraction, modeling and classification (Yang et al., 2012). The most commonly used algorithms include IHS (Intensity, Hue and Saturation) (Schetselaar, 1998), PCA (Principal Component Analysis) (Chavez et al., 1990), Gram-Schmidt Spectral Sharpening (Laben and Brower, 2000) and the Weighted Brovey transform (Chavez et al., 1990).
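As a minimal sketch of one of the algorithms just listed, a weighted Brovey transform can be written in a few lines of NumPy: each band is rescaled by the ratio between the panchromatic image and a weighted synthetic intensity. The band weights and the synthetic arrays below are invented for the demo.

```python
# Minimal weighted Brovey pan-sharpening on synthetic arrays (a sketch of
# the technique, with made-up weights and data).
import numpy as np

def brovey(ms, pan, weights):
    """ms: (bands, H, W) low-res multispectral resampled to the pan grid;
    pan: (H, W) panchromatic band; weights: per-band contribution."""
    w = np.asarray(weights, dtype=float).reshape(-1, 1, 1)
    intensity = (w * ms).sum(axis=0)           # synthetic intensity image
    ratio = pan / np.maximum(intensity, 1e-6)  # inject panchromatic detail
    return ms * ratio                          # each band rescaled by ratio

rng = np.random.default_rng(0)
ms = rng.random((3, 4, 4))   # stand-in for resampled multispectral bands
pan = rng.random((4, 4))     # stand-in for the panchromatic band
sharp = brovey(ms, pan, [0.3, 0.4, 0.3])
print(sharp.shape)  # (3, 4, 4)
```

By construction, the weighted sum of the sharpened bands reproduces the panchromatic image, which is exactly how the transform transfers spatial detail while preserving band ratios.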

The various pan-sharpening techniques have two main factors in common: 1) they are normally applied to satellite images, namely multispectral and panchromatic bands; 2) the two datasets to be fused should have been captured (almost) simultaneously. For these reasons it is apparently not possible to fuse a historical aerial image with a satellite image, which is what we are going to try here. The proposed methods are not conventional and may therefore attract understandable skepticism, but they should be intended as a proof of concept, or as experiments to test the capability of modern computer approaches to obtain a realistic representation of past environments in natural colors for the benefit of photo-readers.

Fig. 2 - Example of visual trick for image colorization inspired by the Color Assimilation Grid Illusion. Top-left: historical vertical image of Ostia of 1985, precisely georeferenced over the bottom satellite image. Bottom-left: Landsat/Copernicus satellite image of the same area of 2019. Top-right: historical panchromatic with over-saturated color grid extracted from the available satellite image. Bottom-right: detail of the image above to show a close-up look at the colored grid and the black and white background.

Fig. 3 - Application of automatic colorization algorithms (Deoldify, Algorithmia and Automatic Colorizer) to three oblique photographs (Original Image). Photographs by Otto Braasch (Musson et al., 2005, figs. 10.8, 10.9, 10.7) edited for the proposed approach.

Fig. 4 - Interactive Deep Colorization user interface. After clicking on a specific point on the black and white image (see colored spots on the left image in the interface), the user can assign a color from the "ab Color Gamut", the "Suggested colors" or the "Recently used colors". Results are presented in real time on the right of the UI and can be saved at any time.

Fig. 5 - Comparison of processing of the same pictures as in Fig. 3 with Interactive Deep Colorization (or iColor).

For example, since our objective is mainly to get a colorized historical image, we can adjust the reciprocal resolution of our vertical and satellite images of exactly the same area. A similar approach has been explored recently (Siok and Ewiak, 2020) with aerial and satellite images of about the same period and without dramatic changes in cultivation or plot sizes. Indeed, the processing of areas that changed over time (i.e. between the date of the historical photograph and the date of the chosen satellite image, in terms of time of day, season, or cultivation/urbanization processes) may produce some unpleasant artifacts (see in Fig. 1 the details of the tree and bush colors, which are larger than in the historical image), but in this case a targeted editing with computer-graphics software may minimize the problem, if needed. Once such a high-resolution color image has been generated, the applicability of multiple operations can be explored, such as image classification and feature extraction.

Another approach, completely different in terms of processing and output, is the use of an operation called the Color Assimilation Grid Illusion (Kolås, 2019). As the name suggests, this approach is a mere visual trick, and it is presented here mainly for dissemination purposes (not for improved photo-interpretation) and as a sort of mildly invasive way to add colors to black and white images. It consists in overlaying grids (or lines, or dots) of over-saturated colors on black and white images; our brain essentially fills in the missing colors that it would anticipate, or expect, to be there in a full-color image.

The image processing is intended to be used on one single color image, which is converted to grayscale and overlaid with the original colors only through a grid. Instead, in our case, starting from the same inspiring principle, we take the historical aerial photograph that we want to colorize, and a satellite image (ideally of about the same season) of the same area; we generate the color grid from the satellite image, over-saturate it and overlay it on the panchromatic picture (Fig. 2).

Fig. 6 - Possible variants generated with iColor of the same image, with deliberate selection of false colors to make specific features more visible or for other applications.
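The grid-overlay procedure described above can be sketched in a few lines, with random arrays standing in for the co-registered panchromatic and satellite rasters; the grid spacing and saturation factor are arbitrary choices for the demo.

```python
# Sketch of the colour-grid overlay: over-saturate a co-registered colour
# reference and keep it only on sparse grid lines over the grayscale image,
# so the eye "fills in" the rest. Arrays stand in for georeferenced rasters.
import numpy as np

def color_grid_overlay(gray, color, step=8, saturation=2.0):
    """gray: (H, W) panchromatic image in [0, 1];
    color: (H, W, 3) colour reference in [0, 1]; step: grid spacing in px."""
    mean = color.mean(axis=2, keepdims=True)
    vivid = np.clip(mean + saturation * (color - mean), 0.0, 1.0)
    out = np.repeat(gray[:, :, None], 3, axis=2)   # grayscale as RGB
    mask = np.zeros(gray.shape, dtype=bool)
    mask[::step, :] = True                          # horizontal grid lines
    mask[:, ::step] = True                          # vertical grid lines
    out[mask] = vivid[mask]                         # colour only on the grid
    return out

gray = np.linspace(0, 1, 64 * 64).reshape(64, 64)
color = np.random.default_rng(1).random((64, 64, 3))
result = color_grid_overlay(gray, color)
print(result.shape)  # (64, 64, 3)
```

Off-grid pixels keep their original gray value, so the underlying photograph is untouched everywhere except on the thin colour lattice.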

Calibration of the above experiment may be needed according to the printing or screen zoom size; nevertheless, it is an easy effect for colorizing historical photographs.

The Machine Learning automatic and semi-automatic approach
Machine learning has been explored in several fields for its ability to "learn" the intended process from A to B with the help of prepared datasets. Algorithms are available to convert black and white images into color ones based precisely on such a learning dataset. In this sense machine learning could be applied to historical aerial images as well but, again, here the intent is to generate an image with colors that are plausible, not to produce an accurate representation of the actual snapshot in time.

Below (Fig. 3) are some examples of oblique photographs originally taken with a digital camera in color, then converted to black and white for the sake of the experiment, and colorized back with three automatic algorithms for image processing: Deoldify (Antic, 2021) (or DeepAI), Algorithmia (Zhang et al., 2016) and Automatic Colorizer (Larsson et al., 2017).
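The automatic colorizers above all share one structure: keep the measured luminance and let a model predict the missing chrominance. The sketch below makes that data flow explicit, with a trivial constant-chroma "model" standing in for the trained network; the BT.601 luma/chroma transform is standard, everything else is a toy assumption.

```python
# Toy illustration of luminance-preserving colorization: the grayscale value
# is kept as luma, and a placeholder "model" supplies the missing chroma.
# Real systems replace predict_chroma with a trained CNN.
import numpy as np

def rgb_to_y(rgb):
    """ITU-R BT.601 luma from an (H, W, 3) array in [0, 1]."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def predict_chroma(y):
    """Placeholder 'model': everything gets a mild sepia chroma."""
    cb = np.full_like(y, -0.04)
    cr = np.full_like(y, 0.06)
    return cb, cr

def colorize(y):
    cb, cr = predict_chroma(y)
    # Invert BT.601: reconstruct RGB from luma plus predicted chroma.
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

gray = 0.2 + 0.6 * np.random.default_rng(2).random((8, 8))  # toy "photo"
out = colorize(gray)
print(out.shape)  # (8, 8, 3)
```

Converting the result back to luma recovers the input image: the colorizer only ever invents chrominance, never brightness, which is why implausible colours can coexist with a perfectly preserved tonal structure.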

The chosen algorithms, selected for their simplicity of use, for their advertised capabilities and for their availability as open-source code or online demos, are examples of a computer problem called image-to-image translation, whose success depends on the provision of sufficient (and compatible) training data (Tripathy et al., 2018). Since the training data consists mostly of ground photographs of natural subjects, portraits or architecture, the results obtained in our case are mostly unsatisfactory, especially when compared with the original (our "ground truth"), but also if we consider the generated images on their own for photo-interpretation.

A different result is instead achievable with another algorithm of the same family, which has an interactive model that allows the user to manually input colors on the black and white image based on a chrominance gamut: this is the case of Interactive Deep Colorization, or iColor (Zhang et al., 2017) (Fig. 4). By default, the first colorized image proposed by this algorithm is very similar to the ones generated by comparable algorithms (see Fig. 3), but once specific colors are selected the result improves considerably, reaching a good proximity to the original images of our test cases (Fig. 5).
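The hint-based idea can be illustrated with a deliberately naive stand-in for iColor's trained network: spread each user click's colour to nearby, similarly bright pixels and re-shade by the original luminance. The distance weighting and all values below are arbitrary choices for the demo, not the published method.

```python
# Naive hint propagation: each pixel takes the colour of the user click
# closest in a combined spatial + luminance distance, then keeps its own
# shading. A sketch of the idea only; iColor does this with a trained CNN.
import numpy as np

def propagate_hints(gray, hints):
    """gray: (H, W) image in [0, 1];
    hints: list of ((row, col), (r, g, b)) user colour clicks."""
    h, w = gray.shape
    rows, cols = np.mgrid[0:h, 0:w]
    best_d = np.full((h, w), np.inf)
    out = np.zeros((h, w, 3))
    for (r0, c0), rgb in hints:
        spatial = np.hypot(rows - r0, cols - c0) / max(h, w)
        lum = np.abs(gray - gray[r0, c0])
        d = spatial + 2.0 * lum            # favour similar brightness
        closer = d < best_d
        best_d = np.where(closer, d, best_d)
        out[closer] = rgb
    return out * gray[:, :, None]           # keep the original shading

gray = np.random.default_rng(3).random((16, 16))
img = propagate_hints(gray, [((2, 2), (0.2, 0.8, 0.2)),    # "vegetation"
                             ((12, 12), (0.7, 0.6, 0.4))]) # "bare soil"
print(img.shape)  # (16, 16, 3)
```

Even this crude rule shows why a handful of clicks goes a long way: each hint claims the region that resembles it, and the panchromatic image supplies all the texture.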

While the Interactive Deep Colorization algorithm allows one to create images with plausible colors, possibly similar to the originally depicted subject, it also provides the option to deliberately choose "wrong" colors and somehow create a completely unreal scenario (Fig. 6), which may make sense when they are employed, as in our case, to highlight specific features or shadows.

Conclusions
In modern photo-interpretation, crop-marks – as well as weed-marks, germination-marks and grass-marks – are by definition made of vegetational stress or differential growth in green fields. Even with soil-marks, shades of brown help us recognize patterns in arable lands.

Seeing black-and-white images in color has the potential to bring to life certain details that would otherwise be missed or hardly visible. This sense of immediacy is why color images feel more relatable. Historical vertical photographs have traditionally served (and still serve) immensely in the study of landscape changes and the identification of archaeological traces (among others) for the reconstruction of landscape palimpsests. They often provide details that so far have no equal in color images and, for training in photo-interpretation, black and white images cannot be ignored or replaced in any way.

This paper presents an effort to push the boundaries of consolidated practice in remote sensing and artificial intelligence, together with an attempt to present a visual trick for dissemination purposes. The proposed methods change the current paradigm with respect to the algorithms employed and the datasets to which they are applied, aiming at a new way of looking at historical aerial photographs and ideally unveiling a new dimension in past-landscape studies and dissemination. The various procedures are all oriented towards the artificial colorization of historical aerial photographs that are natively black and white. These "bizarre" trials are intended as ways to promote new approaches to legacy data, with the ultimate goal of simplifying or enhancing aerial photo-interpretation and involving non-experts in the narration of the past made through photographic documents.

Lastly, artists dealing with historical image colorization admit the intense and time-consuming effort required to achieve a realistic result and a philological reconstruction, involving historical research, comparative materials and interviews with witnesses or experts. Black and white colorization may therefore be a creative process that can increase focus and attention on what we see (or don't see) in historical aerial images.

REFERENCES
Antic, J., 2021. DeOldify, a deep learning based project for coloring and restoring old images. URL https://github.com/jantic/DeOldify (accessed 1.15.21).
Chavez, P.S., Jr, Sides, S.C., Anderson, J.A., 1990. Comparison of three different methods to merge multiresolution and multispectral data: LANDSAT TM and SPOT panchromatic. AAPG Bulletin (American Association of Petroleum Geologists) 74:6.
Coleman, F.M., 1897. Typical Pictures of Indian Natives, Being Reproductions from Specially Prepared Hand-coloured Photographs with Descriptive Letterpress. "Times of India" Office and Thacker & Company, Limited.
Kolås, Ø., 2019. Color Assimilation Grid Illusion. URL https://www.patreon.com/posts/color-grid-28734535 (accessed 2.2.21).
Laben, C.A., Brower, B.V., 2000. Process for Enhancing the Spatial Resolution of Multispectral Imagery Using Pan-sharpening. US Patent US 6011875 A, https://lens.org/135-660-046-023-136.
Larsson, G., Maire, M., Shakhnarovich, G., 2017. Learning Representations for Automatic Colorization. arXiv:1603.06668 [cs].
Musson, C., Palmer, R., Campana, S., 2005. In volo nel passato: aerofotografia e cartografia archeologica. Biblioteca del Dipartimento di Archeologia e Storia delle Arti, Sezione Archeologica, Università di Siena. All'Insegna del Giglio, Florence.
Rintoul, A.N., 1872. A Guide to Painting Photographic Portraits, Draperies, Backgrounds, &c. in Water Colours: with Concise Instructions for Tinting Paper, Glass, & Daguerreotype Pictures and for Painting Photographs in Oil Colours and Photo-chromography.
Schetselaar, E.M., 1998. Fusion by the IHS transform: Should we use cylindrical or spherical coordinates? International Journal of Remote Sensing 19, 759–765. https://doi.org/10.1080/014311698215982
Siok, K., Ewiak, I., 2020. The simulation approach to the interpretation of archival aerial photographs. Open Geosciences 12, 1–10. https://doi.org/10.1515/geo-2020-0001
Tripathy, S., Kannala, J., Rahtu, E., 2018. Learning image-to-image translation using paired and unpaired training samples. arXiv:1805.03189 [cs].
Yang, S., Wang, M., Jiao, L., 2012. Fusion of multispectral and panchromatic images based on support value transform and adaptive principal component analysis. Information Fusion 13, 177–184. https://doi.org/10.1016/j.inffus.2010.09.003
Zhang, R., Isola, P., Efros, A.A., 2016. Colorful Image Colorization. arXiv:1603.08511 [cs].
Zhang, R., Zhu, J.-Y., Isola, P., Geng, X., Lin, A.S., Yu, T., Efros, A.A., 2017. Real-Time User-Guided Image Colorization with Learned Deep Priors. arXiv:1705.02999 [cs].

KEYWORDS
Color photography; artificial intelligence; image-to-image; remote sensing; air-photo interpretation

ABSTRACT
Historical photographs, whether taken from the air or from the ground, are usually synonymous with grayscale or sepia prints. From the very beginning of photography, during the first half of the 19th century, people were amazed by this new medium that could record all aspects of a scene in great detail. Soon, though, everybody started wondering why such an impressive innovation should fail to record colors. A process of trial and error then started (including the most successful pioneering one, involving the use of potato starch, by the Lumière brothers), aiming to add colors to photographs, until the consolidation of new systems (camera and film) capable of collecting photographs directly in color. In the past, before and during this innovative approach, natively black and white photographs were painted in an effort to give them life. Today, only a few methods are available to convert a panchromatic image into a color one, and they need a number of steps and further development to work properly. The paper presents different methods to colorize natively black and white photographs, based on available automatic or interactive Artificial Intelligence (Machine Learning or Deep Learning) algorithms, on revised remote sensing procedures and on visual tricks, aiming to explore the possible improvement in the readability and interpretation of photographed contexts in the usual analytic process of photo-interpretation. At the same time, colorized historical photographs hold a different appeal for the general public and have the potential to attract and involve non-experts in the phases of archaeological/historical reconstruction.

AUTHOR
Gianluca Cantoro
gianluca.cantoro@cnr.it
Institute of Heritage Science (ISPC) – Italian National Research Council (CNR)
Area della Ricerca di Roma 1, Via Salaria km 29,300 - 00010 Montelibretti (RM)
Italian National AirPhoto Archive (Aerofototeca Nazionale, AFN) – Istituto Centrale per il Catalogo e la Documentazione (ICCD)
Via di San Michele 18, 00153 Rome (RM)



AMPERE
A GNSS-based integrated platform for energy decision makers

AMPERE Working Group in Santo Domingo
AMPERE PARTNERS
Barrio Los Tres Brazos, Santo Domingo Este

Asset Mapping Platform for Emerging CountRies Electrification

Although global electrification rates are progressing significantly, access to electricity in emerging countries is still far from being achieved. Indeed, the challenge facing such communities goes beyond the lack of infrastructure assets; what is needed is a holistic assessment of the energy demand and its expected growth over time, based on an accurate assessment of deployed resources and their maintenance status.

AMPERE Consortium
www.h2020-ampere.eu

The AMPERE project has received funding from the European GNSS Agency (grant agreement No 870227) under the European Union's Horizon 2020 research and innovation programme.


EVENTS

22-28 August 2022
FOSS4G 2022 Academic Track
Firenze (Italy)
http://www.geoforall.it/kcp3h

11-15 September 2022
FIG Congress 2022
Warsaw (Poland)
http://www.geoforall.it/kc3kd

17-20 October 2022
IAG International Symposium on Reference Frames for Applications in Geosciences (REFAG 2022)
Thessaloniki (Greece)
http://www.geoforall.it/kc3kr

18-20 October 2022
INTERGEO 2022
Essen (Germany)
http://www.geoforall.it/kc3kh

20-21 October 2022
SAR Analytics Symposium
Berlin (Germany)
http://www.geoforall.it/kc3f8

Cartographic Conference
Firenze
www.geoforall.it/kyk8k

NEW LEICA BLK360
The all-new Leica BLK360 breaks open the possibilities of reality capture. With unprecedented, best-in-class scanning speed, the BLK360 makes you faster.
◗ A supercharged next-gen imaging laser scanner, it captures a full scan with spherical images in only twenty seconds: over five times faster than the BLK360 G1.
◗ With fast and agile in-field workflows, along with live feedback on your mobile device, you can be absolutely sure you've captured everything you need. VIS technology automatically combines your scans to speed up your workflow and help you make sure your datasets are complete.
◗ BLK360 data is highly valuable for so many uses, from AEC to VR. Easily transfer and work with data in your software ecosystem to create immersive and highly accurate deliverables.
◗ You can also upload BLK360 data to HxDR, the Hexagon cloud-based data storage, visualization and collaboration platform.

Contact us to know more!
Via A. Romilli, 20/8 - 20139 Milano • Tel. 02 5398739
E-mail: teorema@geomatica.it
www.geomatica.it • www.disto.it • www.termocamere.com


EASY. RELIABLE. STONEX.

SURVEYING

3D SCANNING

SOFTWARE

www.stonex.it
