Martin Langner, Introduction to Digital Image and Artefact Science (Summer Semester 2021)
III. Analysis: Lesson 8. Modelling and Analysis of Space and Time (https://youtu.be/bZoe7MlcnDY)

[1] Introduction
[2] Cultural Heritage
[4] Data representation of building models
[6] The Problem of Dating Historical Objects
[7] Beginnings of Digital Archaeology
[8] Content of this lecture lesson
[9] 1. Digital Archaeology of Excavations
[10] a) Geoprospecting
[15] b) Excavation documentation
[18] c) Geoarchaeology
[19] d) Geovisualisation
[34] e) Geoinformation systems
[44] Archaeological Information Systems
[51] 2. Dating Problems
[52] a) Date formats and chronological systems
[62] b) Dating possibilities
[73] 3. Digital Modelling of Space and Time
[74] a) Timeline
[80] b) Fuzziness
[90] c) Seriation
[93] Conclusion
[93] Current research questions
[94] What you know and what you should be able to do
[97] Literature

[1] After analysing images and artefacts, today we reach the larger context in which the works were located. I welcome you very warmly to our 8th lesson of the Introduction to Digital Image and Artefact Science. This time we are talking about the modelling and analysis of space and time. [2] What kind of objects and facts do we actually have to digitise and investigate when we talk about modelling space and time? The question is not so easy to answer, because what counts as cultural heritage is subject to selection by the respective society. Accordingly, the concept of "cultural heritage" has changed fundamentally over the past hundred years. While in the 19th century it was still generally used to mean material heritage, such as buildings and material assets, the meaning of the term has increasingly changed from representative ensembles and identity-creating built heritage to cultural heritage sites in general. Since 1975, natural and historical sites have also been included, since 2003 also intangible values such as traditions and customs, and since 2013 really everything that was and is of significance to humanity. In this respect, the works of Goethe and Schiller are included just as much as pop culture or the Mecklenburg Lake District. For the Digital Humanities, however, this means that we want to digitally model and analyse culture in its entirety. [3] But as the concept of cultural heritage has become more complex, there is also a greater need for suitable digital processes to be able to protect and transmit this cultural heritage. In order to visualise this diversity of requirements, it is perhaps sufficient to look at the influences that constantly threaten cultural heritage, which I will only list here: mere use alone causes damage, as do military and general interpersonal conflicts, the countless visitors to the sites, climate change and other natural disasters, the expanding urban development that tends to dredge up and concrete over everything, various political measures that stand in the way of the protection of cultural heritage, as well as financial limits. Problems can also arise from ownership structures, different administrative responsibilities, fundamental principles and theories of what is considered worthy of protection, or scientific findings that call previous practice into question. Globalisation with its new communities and their needs for identity formation or simply for everyday life also plays a major role here, as do different interests and values in the respective cultural context and their views on ecology and sustainability.
These competing influences hinder the interest of society as a whole to preserve the testimonies of the past and present for the future and to pass them on to our descendants. And suitable digital processes open up completely new ways to do this. 4] Corresponding procedures are relatively far advanced. With some of the methods we have already learned about in object digitisation, buildings and rooms can also be digitised or digitally represented. Florent Poux has put them together very clearly in his blog using the example of an abbey in southern France. On the one hand, there are 3D models in the form of point clouds, which were either created photogrammetrically or with a laser pulse-based scanner when flying over the abbey. Multi-view views in the form of RGB raster images can be generated from the point clouds or a polygon mesh can be calculated. Point clouds represent the 3D data in a simple but efficient way and impress above all with their accurate representation, which can be rendered and transformed quickly. So far, they have hardly been used for web representation and virtual reality applications because they are very memory-intensive. 3D models with explicit surfaces such as meshes, which represent the model as polygons, usually as triangular meshes, are more suitable for this. 5] The difference between surface-based and volume-based 3D models can be easily seen in the top row. In the one case, the boundaries of the model are defined by edges - this representation is therefore also called boundary-reps - with surfaces that represent the structure as an empty shell. The associated textures are stored separately as a 2D image if the colour information is not stored with the vertices. In the other case, the solid model, the spatial information consists of voxels, resulting in a compact model. In the case of point cloud data, each point can be represented as a voxel of size x to give a filled view of the voids between the points. In a data structure as an octree, a certain number of points per voxel unit can thus be averaged, depending on the level of refinement desired. A voxelisation is always only an approximation to the original geometry of the building and can sometimes be too abstracted. However, it is quite suitable for shape classification tasks with neural networks because of its structuredness. In a CAD model, the faces and edges are usually the structural components of the building, which by their nature tend to form a polygon mesh of rectangles. A CAD is parametric, meaning that the shape can be scaled as desired by setting a parameter to a target value that modifies the underlying geometry. This is very convenient if you want to model walls, for example, by simply setting their height, length and width. Parametric models use a composition of feature-based, solid and surface modelling to allow manipulation of the model attributes. In the special case mentioned, this is also referred to as BIM: Building Information Model. Generating CAD models from point clouds is time-consuming because many correction steps are necessary. On the other hand, the models, which are structurally divided into segments such as roofs or windows, can be segmented automatically. The advantage of such segmentation is that classes of components can be automatically recognised and stored in databases. Changes to these models can then be made not only to details, but to the entire class of a building component. And such segmentation is also indispensable for the visualisation of the building history. 
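To make the voxel representation described above a little more concrete, here is a minimal sketch in Python with NumPy of how a point cloud can be quantised into voxels of a fixed edge length and averaged per voxel, roughly what an octree does at one fixed level of refinement. The random point cloud and the voxel size are invented stand-ins, not data from the abbey example.

```python
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Quantise a point cloud (N x 3 array of x, y, z coordinates) into a voxel
    grid and return one averaged point per occupied voxel."""
    indices = np.floor(points / voxel_size).astype(int)   # voxel index of each point
    voxels = {}
    for idx, point in zip(map(tuple, indices), points):
        voxels.setdefault(idx, []).append(point)          # group points by voxel
    return np.array([np.mean(grouped, axis=0) for grouped in voxels.values()])

# Stand-in for a scanned point cloud: 100,000 random points in a 50 m cube
cloud = np.random.rand(100_000, 3) * 50.0
print(voxelize(cloud, voxel_size=0.5).shape)   # far fewer points than the input
```

The coarser the voxel size, the more compact but also the more abstracted the model becomes, which is exactly the trade-off described above for shape classification tasks with neural networks.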
Of course, such a 3D model cannot replace a real building survey. 6] However, the modelling of complex historical content not only encounters external limitations, but also difficulties inherent in cultural heritage. For example, dating is very important for the construction of historical models, since historiography is based on the integrity of datable sources. However, a central problem here already lies in the exact dating. This is because absolute dates are rather rare until modern times. Dating is therefore mostly a process of estimation, as perhaps a typical inscription panel in a museum, here in the Berlin Bode Museum, can illustrate. In general, there are four sources with which objects of past times can be dated: First, there are contemporary texts, such as literature and inscriptions, which provide us with clues for classification. Then there are archaeophysical or archaeochemical data, which only seem more exact at first glance. They too are open to interpretation. Third, the stylistic series mentioned in the last lesson; and in space, stratigraphic relationships, which I will explain in more detail in a moment. [7] As far as the digital indexing of cultural property is concerned, archaeology is in the vanguard. This is due to the specific orientation of this discipline since the 1960s. On the one hand, the quantification of research objects began very early in archaeology, because excavations were confronted with increasingly large quantities of finds and features. For example, the International Congress on Computer Applications and Quantitative Methods in Archaeology has been held annually since 1973, and national conferences are added to this every year. On the other hand, archaeology, especially in England and the USA, has been more strongly oriented towards anthropological research approaches since the beginning of the 1960s. The so-called New Archaeology, for example, believed that cultural changes were externally driven by environmental influences to which humans adapted. Therefore, they believed that these cultural processes could be reconstructed almost completely if only sufficiently measurable data were collected. In addition to the associated theoretical debate, this approach also contributed to the development of geo-information systems, which are indispensable for determining environmental changes, just as in general the trend towards environmental archaeology and thus towards the reconstruction of the entire habitat and not just the buildings and their furnishings necessitated the digitisation of archaeology at an early stage. [8] Therefore, I will explain the modelling and analysis of spatial data using the example of digital excavation archaeology. Connected to this are also the various dating problems that arise when one tries to express historical processes in numbers. The second part therefore deals with date formats and dating possibilities and the third with some methods to visualise these data. [9] Let's start with an area where spatial analysis plays a central role, namely digital excavation archaeology. Even if you have nothing to do with archaeology, this should be interesting for you, because here you will find widely developed methods and tools that apply accordingly to other areas of cultural heritage. And the difficulties of modelling historical spaces and in general: acquisition of past times, are particularly significant here. 10] An airborne laser scanner is suitable for the acquisition of a landscape. 
Here, geoprospecting is done by optical distance and speed measurement when flying over the terrain. This method, called "light detection and ranging", is abbreviated LiDAR. Lidar instruments, which are attached to aircraft and satellites, carry out surveys and mapping. In this process, a narrow laser beam can image physical features with very high resolutions. For example, a laser pulse-based scanner with a medium to long range, over two metres focal distance, can image terrain with a resolution of 30 centimetres or better. To do this, the systems use circuitry that measures, to the picosecond, the time it takes for the light to travel from the laser to the object and back to the sensor for millions of laser pulses, and calculate the distance from each. [11] An even simpler method of geoprospecting is aerial archaeology. This is because underground remains can be seen, especially after rain, by a slight discolouration of the ground. With the help of aeroplanes, helicopters, balloons or drones, these can be photographed from a greater height and thus evaluated. The Roman fort of Arnsburg in Hesse, which has not yet been excavated, was first discovered on an aerial photograph. The entire complex, rampart and enclosing wall with west gate and intermediate tower can be seen very clearly. The ground plans of the staff building and a magazine building are also clearly visible inside the fort. 12] These structures are even clearer with geomagnetics. For by electromagnetically measuring anomalies in the (so-called normal) earth's field, magnetised rock bodies and objects can be determined in terms of position, depth and shape. In archaeology, geomagnetic surveys for mapping historical and prehistoric settlements and for planning excavations have become standard. Since clay rock and especially fired clay is more magnetic than e.g. sandstone, geomagnetics is particularly suitable for the detection of brick buildings. 13] Related to this is the use of ground penetrating radar in archaeological prospection. Ground penetrating radar uses electromagnetic radiation in the microwave band of the radio spectrum to detect and map archaeological artefacts, features and patterns below the surface. Subsurface objects and stratigraphies cause reflections that are picked up by a receiver. The travel time of the reflected signal thus indicates the depth of the structures. [14] Subsequently, the data can be recorded as profiles, as plan view maps isolating specific depths, or as three-dimensional models. On the right you can see the result of the geoelectrical measurement of a Roman building in Ammaia at a depth of 60 to 65 cm and below that, as an evaluation, the plan view with a drawing of the once covered areas and the water supply. The electrical conductivity of the soil, the transmitted centre frequency and the radiated power are problematic and can limit the effective depth range of the GPR survey. Higher frequencies can lead to improved resolution, but increasing electrical conductivity attenuates the introduced electromagnetic wave and thus the penetration depth. This is why ground penetrating radar is mostly used when measuring just below the surface. [15] But not only the measurement of the ground is done digitally, digital methods are also indispensable in excavation documentation. An example is the excavation of a villa rustica near Nassenfels in the district of Eichstätt. In the 8 x 9.5 m area lay the south-west corner of a stone outbuilding of the villa, surrounded by disorganised rubble. 
The condition of the features corresponded to the 1st planum of the excavation. On the left you see a traditional hand drawing and on the right the digitally created plan with the measured stones and features in an archaeological information system. It can now be easily correlated with the photos taken from a great height and improved accordingly, resulting in a pictorial plan with wall contours. 16] Tilman Wanke evaluated the different recording methods years ago and drew them into a descriptive graphic, on the left without and on the right with a layer of rubble. On the x-axis you find the standard deviation plotted in mm and on the y-axis the working time in minutes. Especially when it comes to documenting everything precisely, the picture plan with wall contour is superior to all other methods. However, one must also mention that this requires not one photo, but a series of photos from different positions to reduce the distortion. Nowadays, photogrammetric images of the plana on excavations are standard. [17] Mostly, however, a combination of several methods is used. For example, in the excavation of a stoa on the Greek Lycaonsberg in Arcadia, a series of photos were combined to form a photostitch, which was then transferred to AutoCAD along with the hand drawing and the hand measurement, thus using the stitched photo to create a publishable elevation drawing. [18] Geoarchaeology also benefits from the advanced digital possibilities that were first used in geography. By this I mean, on the one hand, the acquisition, storage and analysis of geodata with geoinformation systems (GIS for short) and, on the other hand, the subsequent geovisualisation of historical spaces on maps and in 3D models. For example, on the basis of today's soil and yield potential maps, the ancient utilisation potential for arable farming and livestock breeding in the surroundings of the Glauberg was calculated. According to the technological possibilities of the Iron Age to plough fields up to a slope inclination of 15 ̊ (or 27 %), it was possible to form and map three soil use classes from the arable suitability, the soil units and the slope inclination, namely yellow: potential arable land, green: rough pastures and blue: wet pastures and wet areas. [19] For the creation of a topographic map, the specification of heights above normal level is central. The contour line projection is used for this purpose. You may have to imagine a contour line connecting measured points of the same elevation as a flat circular path around a hill. Let's take the topographic map of Göttingen from 1904. There you can see three different types of lines. The normal lines are the main contour lines. They rise with an always equal height distance (or equidistance) of e.g. five metres from the zero level, i.e. the points on two adjacent contour lines are five metres apart. The contour line projection thus suggests sections through the terrain at always the same distance, just as if a mountain did not rise evenly but in a staircase. The contour lines are labelled in such a way that they have to be read in the direction of the rising terrain. The thick lines, called main contour lines or count lines, are found at round elevation values, such as here at 260 and 280 metres, whereas the dashed lines are only auxiliary contour lines or intermediate contour lines drawn between two adjacent main contour lines in order to be able to better represent the relief form by the contour lines, especially in flatter terrain. 
Without such auxiliary contour lines, it would not be possible to depict relief structures that cannot be represented with the standard equidistance. The contour lines therefore make the shape and slope of a terrain visible: the closer the lines are together, the steeper the terrain; the further apart they are, the flatter it is. [20] Contour lines can be created photogrammetrically by taking at least two photographs of the terrain from different angles. The resulting stereo model is used to move a pointing marker along the terrain at the desired elevation to create the contour line. With airborne survey cameras, orthophotos can be taken in strip-like series or other photo series, from which digital terrain models can be calculated depending on the flight altitude. Digital sections can easily be laid through these and condensed into maps. [21] The number of contour lines as well as the degree of detail depend strongly on the scale of the maps. Applications such as Google Earth or Google Maps therefore use multiresolution on the fly, i.e. instead of zooming into the map, new maps with the corresponding degree of abstraction are always reloaded. [22] The result of a photogrammetric image is a mesh of the terrain, either as a Triangulated Irregular Network or as a rectangular grid. From such a 3D terrain model, the desired contour lines can be generated by intersecting it with horizontal planes at the appropriate height. These planes also help to reduce the number of faces in the meshes. You can see relatively well here how strong the abstraction must be in order to be able to visualise height differences in 3D models as well. Conversely, digital terrain models can be generated from maps by vertically stretching the respective coordinate system. [23] Hypsometric colouring of the terrain model is then also easily possible. Hypsometric means "as a function of terrain elevation", i.e. elevation measures are applied to the relief by colouring or texturing, as here in the case of El-Kala National Park in Algeria. [24] However, the relief does not only serve as a reference surface for terrain heights. As a so-called "image drape", terrain models can be textured with raster graphics, such as scanned topographic maps, aerial photographs or satellite images. The geological map can likewise be draped onto the relief (here again the El-Kala National Park serves as an example), or the potential natural vegetation can be used as a texture of the relief. A prerequisite for image draping is georeferencing of the image data, which must be projected onto the relief parallel to the xy-plane. [25] As you know, several such textures are usually used on maps. Multi-texturing techniques are especially important for the representation of thematic information, e.g. to combine roads, buildings and vegetation in one representation by combining several texture layers. In order to still be able to take in this multitude of information, a simplified relief representation (i.e. "bump mapping") is of course necessary, which does not reproduce unevenness in the terrain in its natural course, but uses specific degrees of abstraction of the visual simulation. [26] Similar to the segmentation of building models, it is important for working with terrain models to group individual parts such as structures, pylons, bridges, tunnels, dams, wind turbines, plants or bore profiles into classes. This can already be done when positioning vector geo-objects in the terrain model by colouring or texturing the relief. Below you will find a small sketch of how such contour lines and hypsometric colouring can be derived from a gridded terrain model.
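The following minimal sketch (in Python with NumPy and Matplotlib) ties together the two ideas just described: contour lines as intersections of the terrain with horizontal planes at a fixed equidistance, and hypsometric colouring as a function of elevation. The synthetic hill and the 5 m interval are invented for illustration and do not come from the lecture materials.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic digital terrain model on a regular grid (stand-in for real survey data)
x, y = np.meshgrid(np.linspace(0, 1000, 200), np.linspace(0, 1000, 200))
elevation = 260 + 40 * np.exp(-((x - 500) ** 2 + (y - 400) ** 2) / 150_000)  # one hill

equidistance = 5  # contour interval in metres
levels = np.arange(elevation.min(), elevation.max() + equidistance, equidistance)

fig, ax = plt.subplots()
# Hypsometric colouring: colour applied as a function of terrain elevation
ax.contourf(x, y, elevation, levels=levels, cmap="terrain")
# Contour lines: intersections of the terrain with horizontal planes at each level
contours = ax.contour(x, y, elevation, levels=levels, colors="black", linewidths=0.5)
ax.clabel(contours, fmt="%d m")  # label each contour line with its elevation
ax.set_aspect("equal")
plt.show()
```

The closer the resulting lines lie together, the steeper the synthetic terrain, exactly as on the topographic map of Göttingen discussed above.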
On this slide, for example, you see a simple terrain model of ancient Rome, where the residential buildings are coloured red. But also linear objects such as streets, land boundaries or building ground plans can be marked in this way. As a rule, we only know the ground plan of ancient buildings, which is why they appear here as simple red cuboids. In the first version of Rome Reborn, which was also available on Google Earth, the developers used this grouping of building types to give them a uniform texture. The result is disconcerting at first glance, but if this type of proxy texturing is applied consistently, a realistic overall impression is created that nevertheless clearly visualises the uncertainties. [27] With corresponding symbols, the time-relatedness of which can also be debated, the city model of ancient Rome in Rome Reborn has been provided with information in pop-up windows, where the archaeological basis for the model is documented. As you can already see, the visualisation of historical or even current spatial references benefits greatly from the digital possibilities of incorporating geoinformation. When I was a student, you had to gather this information from various books and journal articles, and I confess that back then I had a very hard time getting an idea of the city of Rome in imperial times. Whether you now have a much easier time of it is open to question, because on the other hand you come into contact very early on with ready-made visualisations that will get stuck in your head without you being familiar with the basis for them. But we will talk about the basics and commandments of 3D reconstruction in more detail in the 10th lesson. [28] First, let me give you some tools for 2D visualisation of historical data on a map, which I will not read out here. Just try them out yourself! Then you will see how wide the range of applications and the associated possibilities are. [29] In summary, I would like to emphasise once again the advantages of digital geovisualisation. On the one hand, they consist in the fact that the virtual representation space is three-dimensional and spatial references can now be depicted better (and possibly even interactively) than is possible with traditional map representation. On the other hand, the 3D model can be enriched with geo-referenced data, which enables the integration of external or newly added information in a precisely fitting way. Furthermore, there are various possibilities to represent the earth's surface, namely as "relief", "3D topography", "terrain" or as "terrain model". Above all, our terrain models can be enriched with thematic information. This can be done by inserting text and by cartographical implementation in colours and symbols. The GIS connection, i.e. the use of Geographic Information Systems, enables the reading of geo-objects, the writing back of edited geometries and/or thematic attributes, the calling up of GIS analysis methods, and so on. In this way, terrain models can be displayed in a variety of ways, and varying spaces such as different historical phases can be visualised in varying degrees of abstraction, namely as photorealistic 3D models or as more abstract representations. [30] You probably know a prime example of successful geovisualisation, namely the Google Earth project. If you install it on your computer, you have the possibility to switch the database entries on and off on different layers.
A particular advantage here is that you do not need any knowledge or experience with geoinformation systems to be able to work with it. [31] For the Roman and Medieval world, the Digital Atlas of Roman and Medieval Civilization was created at Harvard in a similar way, which additionally includes a large number of historical and archaeological sources. In my screenshot I show you the sites of shipwrecks, but it also lists, for example, mass graves, hoards of coins and other treasure finds, Anglo-Saxon occupied areas or pre-Roman settlement, to name but a few examples. [32] Heritage in general is addressed by Arches, an open-source software platform that uses semantic technologies and is freely available to cultural heritage organisations to help them manage their data. It includes a module for inventorying all types of immovable cultural property, including archaeological sites, buildings, cultural landscapes and heritage groups, thus going far beyond mere mapping. [33] In the expanded definition of cultural heritage that is valid today, the natural and historical habitats of cultures are also a component worthy of equal protection. Geoarchaeology, which is located in the field of settlement archaeology or landscape archaeology and works with scientific methods of geography and geology to reconstruct historical landscapes, is dedicated to these natural and living space conditions. Its methods include, for example, geomorphology, soil geography and settlement geography, and also geological methods such as sediment investigation and raw material analysis. In concrete terms, the analysis of Holocene sediments, pollen analysis, geophysical investigations such as geoelectrics and georadar, and the determination of the origin of rocks by thin section and geochemical analyses are now used above all in prehistoric research. Their aim is the acquisition and analysis of topographic changes, such as the flooding of valleys or the silting up of lakes. This means, however, that it is no longer just a matter of determining the habitats, but rather of describing the potential uses for the ancient population, i.e. the human-environment relationship in general. [34] It should be clear that this involves an almost unmanageable amount of data that can only be managed in geoinformation systems. To create a GIS, it is best to collect data in the field with GPS devices. In addition, paper maps and survey plans must be digitised. However, satellite and aerial photographs can also be digitised on-screen. The sources are then mapped onto each other so that the result is that the photos and maps are on top of each other on different layers and are georeferenced, as shown here in the illustration. For many sites this work has already been done and a comprehensive list of archaeological geoinformation systems, especially in Italy, can be found at the link given. 35] A central component of a GIS is georeferencing, with which existing maps are rectified. Scanned paper maps or survey plans can serve as a basis, from which the legend or unneeded parts of the map can be removed with an image processing programme. It is also a good idea to change colour, contrast or brightness in order to simplify later vectorisation. Afterwards, as many points as possible are marked with an appropriate software and provided with exact localisations in the form of geo-coordinates. There are a number of georeferencers on the market that access coordinate databases online. 
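What such a georeferencer does internally can be sketched in a few lines: from a handful of ground control points, i.e. pixel positions on the scanned map paired with known geo-coordinates, an affine transformation is estimated by least squares and then applied to every pixel. This is only a minimal illustration in Python with NumPy; the control points and coordinates are invented, and real tools such as the QGIS Georeferencer additionally handle map projections and more robust transformation models.

```python
import numpy as np

# Ground control points: pixel coordinates on the scanned map ...
pixels = np.array([[120, 80], [1510, 95], [1480, 990], [140, 1010]], dtype=float)
# ... paired with the corresponding geo-coordinates (here: longitude, latitude)
coords = np.array([[9.91, 51.55], [9.97, 51.55], [9.97, 51.52], [9.91, 51.52]])

# Estimate an affine transformation (coords ~ [px, py, 1] @ A) by least squares
design = np.hstack([pixels, np.ones((len(pixels), 1))])
A, *_ = np.linalg.lstsq(design, coords, rcond=None)

def pixel_to_geo(px: float, py: float) -> np.ndarray:
    """Map a pixel position on the scanned map to geo-coordinates."""
    return np.array([px, py, 1.0]) @ A

print(pixel_to_geo(800, 500))  # approximate geo-coordinates near the map centre
```

The more well-distributed control points are marked, the better the least-squares fit can compensate for distortions in the scanned original.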
Among the georeferencing tools, the service Map Rectify, for example, is free of charge, and QGIS also has a Georeferencer item in the Raster menu after installing the plug-in "Georeferencer GDAL". The result is a calibrated map that can be exported as GeoTIFF and JPEG2000. [36] To determine the correct coordinates manually, a service such as Google Maps or OpenStreetMap can be used, where the coordinates can be read in the address line of the browser, first the latitude and then the longitude. [37] For ancient places, the Digital Atlas of the Roman Empire is highly recommended. [38] Also useful are OldMapsOnline, an access point to historical map collections around the world, Georeferencer, a crowdsourcing online tool that turns scanned paper maps into georeferenced vector maps, and Recogito, which makes it easy to annotate and link files of all kinds. And IIIF-Hosting allows hosting and lightning-fast access to large images via the IIIF protocol, making your referenced maps available on the web as well. [39] As you will now already understand, a GIS is a type of database management system that links each data element to a coordinate-based representation of its location (e.g. as a point, line, polygon or pixel). As a "historical GIS", it offers a wide range of possibilities: geodata technologies can thus also be used for historical questions, and the analytical methods of the geosciences can also be exploited in this field. In addition to the correction of old maps, this includes above all the display and analysis of information that can be linked to any point on earth, the visualisation of information in a geographical and geopolitical context, the examination of this information at different scales, as well as the addition of new data, and the finding, describing and explaining of spatial patterns. There is also the possibility of sharing the data, managing paradata (read: metadata in the GIS domain) and accessing the sources and their documentation. [40] A GIS linked to information on excavation finds and features forms an Archaeological Information System (AIS). The requirements for such a database system are listed here in question form. First: What information is digitally acquired? This question is not entirely trivial, because each excavation produces different data depending on the excavation site and the research question. Accordingly, the question of which programmes are used for acquisition is also important. In the 1990s we tried to develop a universal excavation database for all excavations. I don't want to say that we failed grandly with it. But usability decreases to the extent that data fields and tables that are only relevant on another excavation are taken into account. Nevertheless, standards are important here to ensure the reusability of the data. So right before starting an excavation, one has to think carefully about the form in which the data should be acquired, i.e. which types, structures, standards and formats should be used. How is geodata acquired? With CAD? Or with GIS? This is a decision that has to be made depending on functionality and costs. Meanwhile, 3D visualisations are standard on excavations, not only for documenting the excavation process, sections and plana, but also for visualising the results. Therefore, one has to decide whether one wants to carry out the 3D acquisition with laser scans or rather photogrammetrically with Structure from Motion, and which methods, programmes and data formats should be used for this.
Another very important question is how the collected data can be integrated into existing information systems. This is also a question of reusability. So you have to ask yourself: How sustainable is this data? How long can they be used? And how can long-term or permanent usability or archivability be achieved? For this reason alone, proprietary systems are actually to be rejected. But archaeological information systems must above all function reliably. Therefore, one must ask: What role does free and open source software play in these contexts? Therefore, it is certainly useful to exchange ideas with colleagues and to join the discussions of, for example, the commission "Archaeology and Information Systems" of the Association of State Archaeologists. 41] However, one does not have to start from scratch! ArcGIS, for example, is very widespread as a geoinformation system, although the campus licence for Göttingen has expired. However, its structure is representative of many such systems, which combine a data management component with tools for displaying and designing 2D maps and 3D globes and also include a virtual reality module for special 3D display and analysis. 42] A free opensource alternative is QGIS, formerly QuantumGIS. I can only recommend that you take a closer look at this GIS. There are also a number of useful tutorials, of which I only mention the one by the archaeologist Armin Volkmann. [43] And thirdly, perhaps, GRASS GIS should be mentioned, which is widely used especially on excavations in Italy. [44] How all the components in such an archaeological geoinformation system interlock is illustrated by the example of the Tell El-Daba Archaeological Information System, which uses both ArcGIS and QGIS. Tell el-Daba was an important settlement in the eastern Nile Delta of Egypt in the early 2nd millennium BC with far-reaching contacts into the Minoan culture. 45] In a pilot project, the Austrian Academy of Sciences has digitised all data (photographs, plans, drawings and written documentation) from 50 years of archaeological research and brought them together in a GIS. 46] Distributed over three layers you will find the brick walls, the finds, and the stratigraphic units. Of course, these data can also be combined in one view. The entries in the database for the respective units can be called up easily and select them. Here are the structures of the palace, ... [47] ... which can be viewed and analysed in isolation. All this happens automatically and on the basis of the georeferenced data. [48] The maps and 3D models resulting from this spatio-temporal analysis formed the basis for a comprehensive virtual reconstruction. [49] The use of established geoinformation systems also facilitates the import of official geodata. With Inspire, the Infrastructure For Spatial Information In Europe, the European Union has established a common spatial data infrastructure. Its aim is to support joint environmental policy decisions. For example, questions of flood protection that are stored there often concern fundamental environmental influences that could also be of interest for the protection of cultural assets and historical landscape modelling. Unfortunately, historical data are much more difficult to acquire, but it would be very desirable to be able to use such a geo-infrastructure for topography, traffic, political and administrative structures, population, environmental influences etc. also for past periods. 
[50] In part, this is to be realised in the Time Machine Europe project, which aims to create a vast distributed digital information system to map the social, cultural and geographical development of Europe across the ages. The endeavour is unlikely to be feasible for all of Europe, despite technological advances in Big Data analysis, but the number of local TimeMachines to be created here is quite impressive. And there are more all the time: in 2020, for example, Cologne and Luxembourg were added. 51] This brings us to the temporal aspect of culture. As you know, there is a separate data format for time information: DATE, which stores information in the form: Day/Month/Year. This is perfect for your birthday, for example. However, dates from the past are often not so easy to express in this form. This is what our second part is about. 52] Let's take, for example, our established calendar, which is named differently depending on the world view and culture. It begins with the birth of Christ. However, it was the monk Dionysius Exiguus in the 6th century who founded this chronological system, which takes the year of the Incarnation of the Lord, as he called it, as its starting point. He sets the first year with the birth of Christ on the first day of the year 1, i.e. on 1.1.1. This means that there is no year 0, as is often colloquially said, because the chronology begins with an event. Moreover, this system only became established in the Middle Ages. Important here are the writings of Beda Venerabilis from the early 8th century. 53] However, this is problematic for digital modelling. If we imagine the year sequence in a coordinate system and place the birth of Christ at the intersection of the axes, then according to our convention the year one also begins here. This is because we think of the year from the beginning. For example, the year 4 A.D. lasts from 4 to 5 ... and thus the year one from the time of Christ's birth to 2. 54] Before that there were a number of competing counts. I will mention only a few from Greco-Roman antiquity. There, the calculation according to Olympiads is very common, by which is meant the athletic and musical competitions held every four years in Olympia. The Olympiad is a period of four years. 776 BC is therefore the first year of the first Olympiad, 775 BC the second year of the first Olympiad and so on. That the calculation and fixing of the 1st Olympiad in the year 776 is correct can be deduced from other chronological data. For example, the Greek author Diodorus reports of a solar eclipse in the 117th Olympiad in the third year, which can be astronomically calculated with modern methods to 15 August 310 BC. If we now calculate backwards, we arrive at the year 776 with the first Olympiad, which confirms the synchronisation of the time calculations. 55] One could now simply convert this information. But the customs of antiquity are somewhat more special: for example, the Roman author Pliny the Elder, who died during the eruption of Vesuvius in 79 AD, does not give the life dates in his Natural History, but the Akmé dates (i.e. the flowering period) of the artists. Here we can only estimate the lifetime, especially since some artists were active for much more than just four years. And in an important section of art history he says: "Hereupon art ended, but revived in the 156th Olympiad (= 156-153 B.C.) with artists who, although far below those previously mentioned, were nevertheless highly esteemed". 
He thus sees a new upswing in art shortly before the middle of the 2nd century BC. But how would one model such a date? [56] There are also competing systems. On the one hand, there is the so-called eponymous counting, i.e. certain officials or priests give their name to their term of office. For example, in Athens the years are named after the annually elected archons, in Sparta after the ephors, in Rome first after the consuls and then after the titles of the Roman emperors. Or another example: the Greek historian Hellanikos of Mytilene based his historiography on the list of the Hera priestesses of Argos. In the most fortunate cases, records have come down to us that give the respective names and their order, so that this can be converted. Most of the time, however, this is unfortunately not the case and we can again only give periods of time. In Rome, however, there was another yearly count in addition to the consular dates, namely the one ab urbe condita, i.e. from the foundation of Rome in 753 BC. The Roman historiographical tradition, however, does not begin until the 3rd century BC with authors such as Naevius, Q. Ennius or Fabius Pictor. Before that, we know of no written records at all. This means that there is a time gap of almost 500 years between the beginning of a historically tangible tradition and the date of Rome's foundation, and it is now certain that the historical date of the city's foundation represents a much later construction, which may have been undertaken only in Augustan times. So here, in addition to the different chronology, we also grasp the source-critical problem that the surviving dates should be used with caution. [57] However, caution is also advisable with more recent dates. It is not uncommon for artists to want to be ahead of their time and to backdate their works, which can sometimes be proven retrospectively, e.g. with diary entries or letters. Kandinsky's Composition VII from 1913 is considered one of the first truly abstract paintings in art history. A very similar watercolour in the Centre Pompidou, however, already bears the year 1910 next to the signature. Researchers are fairly certain here that Kandinsky backdated the work. And perhaps you know similar phenomena from backdated sick notes, cheques or even deliberate forgeries. [58] Our usual modelling of time refers to an absolute chronology, as it was already given in ancient Greece with the naming of the archons. These archons were officials about whom we are particularly well informed for Athens. There, nine archons originally formed the college of the highest state officials. One could only be elected archon once in a lifetime. The beginning of the establishment of the one-year archontate was given in later sources as 683/2 BC. From the end of the 5th century at the latest, it was customary to name the year after the highest official, the archon eponymos. However, the Attic year began with the first new light after the summer solstice, i.e. in July or August. Therefore, the year in Athens does not correspond exactly to our year. The Panathenaic prize amphora here must therefore be dated to 323/22 BC. And the document relief on the right contains two documents, one from the year 405/4 and below it one from 403/2 BC. So if it is unknown in which season an event took place, there are two years to be mentioned in our chronological system.
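How such conversions between chronological systems can be made explicit is easy to sketch in code. The two helper functions below are my own illustration, not a tool from the lecture: one converts an Olympiad dating into a year BC, the other expresses the fact that an Attic archon year spans two of our calendar years.

```python
def olympiad_to_bc(olympiad: int, year_in_olympiad: int) -> int:
    """Convert an Olympiad dating into a year BC.
    The 1st year of the 1st Olympiad corresponds to 776 BC; each Olympiad
    lasts four years. (Valid only for results that remain BC, since the
    Christian era has no year 0.)"""
    return 776 - 4 * (olympiad - 1) - (year_in_olympiad - 1)

def attic_year_span(first_year_bc: int) -> tuple[int, int]:
    """An Attic (archon) year begins in mid-summer and therefore spans two
    of our years, e.g. 323/22 BC."""
    return (first_year_bc, first_year_bc - 1)

print(olympiad_to_bc(117, 3))    # 310 -> the solar eclipse reported by Diodorus, 310 BC
print(olympiad_to_bc(156, 1))    # 156 -> the revival of art mentioned by Pliny
print(attic_year_span(323))      # (323, 322) -> the Panathenaic prize amphora of 323/22 BC
```

Even after conversion, such datings remain spans rather than points, which is why a single DATE value is rarely sufficient.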
[60] In particularly fortunate cases, we can thus infer accurate dates. In public building projects in Athens, the building materials were precisely accounted for each year, and for the Parthenon these construction accounts have been preserved for us. Thus we are not only able to place the construction of the Parthenon between 447/46 and 433/32 BC, but can even date individual parts, such as the frieze or the pediment figures, more precisely. Despite this fortunate tradition, however, the dates do not coincide with our date formats. Rather, all the years must be calculated for the relevant periods if the duration of the respective phase is to be expressed in our day/month/year determinations. 61] In the Roman imperial period, dates are given in relation to the emperor's offices and titles. The chronological framework for counting the years of the reign is the indication of tribunicia potestas (tribunician power). What this means in concrete terms can be seen in the list on the right for the Emperor Trajan. As in Athens, the dates cannot simply be transferred to our chronological system. 62] Let us take a concrete example. On this silver coin of Trajan, the legend COS for consule and the Roman numeral 3 can be read on the reverse above the sacrificing goddess of victory. The coin was thus minted during Trajan's third consulate in the year 100. This gives us an exact determination of the coin's date of origin. However, we do not know how long the coin was in circulation. On excavations, such found coins are often the only dating clue. The context in which the coin was found cannot therefore be dated before the year 100. How many years later can only be estimated and determined by hoards. A dating by means of found coins would therefore be "after 1.1.100" We have here a date as a period of time, with a definite beginning and an indefinite end. 63] But coin finds are rare. Most of the time archaeologists have to rely on using a relative chronology. For example, for Greek sculpture of the classical period, one can describe a line of development from still standing to relieved to leaning figures. However, the dates given for this are only approximate values and describe in figures the researcher's idea of earlier, at the same time or later. They have very little to do with the dates of events in history. They are therefore relative dates that have much more concrete significance for other statues than for other works such as buildings or plays, because they represent a chronological system within a genre. This determination is not always easy to use even for experts. In a large database, however, a specification such as "around 340 BC" is difficult to model in such a way that it can be statistically evaluated, precisely because the range of uncertainty is not easy to specify. But if one is too generous here and limits oneself to the century, one falls behind the possibilities of archaeology. 64] And even scientific investigations rarely offer precise dating. Dendrochronology, for example, is based on the fact that trees grow at different rates depending on the influence of the weather and thus form annual rings of different thickness. These sequences, i.e. the sequence of thicker and thinner rings, are so characteristic that one can determine the felling date of the tree. With incomplete sequences, however, this is not possible exactly to the year. [65] This dating method helps e.g. for the history of a building. 
For example, the oldest oak piles of the Roman bridge of Trier are from 18/17 B.C., 71 A.D., 144 A.D. and 315 A.D. Since one would not expect the trees to have been stored for a long time, this method captures the date of construction and a series of repair works. However, the example also shows very clearly that there may well be a number of different dates for one object. This is also something to consider when modelling time, and it is one of the reasons why I recommended event orientation to you in the database lesson. [66] You may have heard of the radiocarbon or C14 method. It measures the decay of radioactive carbon in organic matter and allows a conversion of calibrated 14C ages into calendar years consistently from today back to 50,000 years ago. However, as a rule, only relatively large time spans of more than a hundred years are determined with this method, which is why this dating possibility is relatively imprecise for historical periods, but is important for prehistoric archaeology. 67] For archaeology, the finds are not nearly as important as the features, i.e. the contexts in which the finds were made. For they can be used to reconstruct the living conditions and actions of past times. Stratigraphy is a central dating method for these findings. Unlike in today's asphalted cities, where building rubble and rubbish is taken away to special dumps, until the 20th century destroyed buildings were simply levelled and a new building was erected on the rubble. Floors in the houses were also renewed, for example by applying a new layer of clay on top. This created cultural layers several metres high. Take a look at Göttingen's cityscape: old buildings such as churches are regularly below today's walking level. During an excavation, a clean cut is therefore made at the edges of the excavation, so that the discolouration of the soil can be used to deduce such a sequence of layers. As you can see on the right, photos of such sections can be optimised with an image processing programme to bring out the discolouration even better. Even an untrained eye can recognise a black layer of ash here, which is a sure sign of a fire. In the lucky case, there are also dateable finds such as coins or, as here, amphora stamps, so that the catastrophe can be dated more precisely. The layers above must be younger and those below older. Stratigraphy, or recording the sequence of layers, is thus a method of being able to date cultural layers via historical events such as fire, earthquakes and finds. [68] The trick is that you can date the individual cultural layers by their relationship to each other. Edward Harris has formalised and systematised this method in such a way that it can be evaluated by computer. For the sequence of layers can be complicated, as you can see from the schematic drawing. because here the layers have been disturbed by later excavation. Once you have identified these discolourations as disturbances, it is clear that layers 1 and 4 as well as 9 and 10 must be contemporaneous. These ratios can be abstracted in their sequence and represented as a scheme, which you will find depicted on the right as a so-called Harris matrix. [69] A computer-based analysis of stratigraphy in an archaeological information system must store the sequence of layers accurately in order to be able to react appropriately to changes in the interpretation of the features. Let us take a theoretical example given by Ian Hodder. 
Initially the relationship of layers A and B was thought to be as shown in result 1, so A was thought to be later than B. However, if the finds in B are significantly later than A and there is uncertainty about the stratigraphic relationships, we may decide after excavation to reinterpret the profile/section and consider B to be later than A, which may well be the case. Alternatively, we might decide that we were wrong about the finds in B, and interpret them as earlier than those in A, or as secondary material in B. And this decision should be documented in the archaeological information system. [70] In relation to the excavation as a whole, the sequence of layers is usually very complicated and is only disentangled after the finds have been evaluated. To record this as a Harris matrix on paper means piecing sheets together with a lot of glue. Therefore, a computer-based approach has been in use here for some time [71] and is also regularly integrated into modern archaeological information systems, so that, as here in the Tell el-Daba AIS, when you click on the feature number in the Harris matrix, you can see the feature, here a wall structure, in plan and in the database. [72] In summary, it can be said that there are far more forms of description for dates than one would have thought at first glance. First of all, there is the birth of Christ as a turning point at which counting is done forwards and backwards, and which can also be crossed in the case of time periods. As far as the accuracy of the determination is concerned, dates exact to the day become increasingly rare the further back one goes in the past. Only here can the data type DATE be used. It is not always easy to date exactly to the year, which is why the data type INTEGER (i.e. a whole number) is rarely sufficient either. For antiquity, one must more often refer to precisely defined but not generalisable time periods (such as the year 367/66 BC). They could be defined as a set, i.e. as objects of an abstract data type SET in the manner of an unordered collection of elements of a certain data type, here either integer or date. But it is also conceivable to model the time period as an array, i.e. as a data list; a small code sketch of these options follows below. The following date definitions express a certain fuzziness in the data. This can be an approximate time period, e.g. around 20-10 BC, a stylistic period designation such as Middle Augustan, or a linguistically marked fuzziness, such as "later 1st century BC". Sometimes the fuzzy date space is bounded by a starting or ending point, such as the eruption of Mount Vesuvius in AD 79, or it is defined as a function of other datings, as we have seen with stratigraphy. We shall have to talk about this indeterminacy in the dates in the following section. [73] For now, after the longer considerations on date formats and chronological systems, we shall deal with their digital modelling. In doing so, we go beyond the possibilities of a three-dimensional representation by also taking the factor of time into account as a fourth dimension. In economic contexts, even a fifth dimension is included with the acquisition of costs. [74] The simplest way to visualise the dimension of time is the timeline for the correlation of time periods. In a timeline, a list of events can be displayed in chronological order, with artefacts (or events) located at the points in time where they were produced. [75] With Tiki-Toki you can easily create such a timeline yourself and host it on your website.
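Picking up the summary of date forms a moment ago, here is a minimal sketch of how they might be modelled in a database-oriented way. The class and field names, and the convention of storing BC years as negative integers, are my own illustrative assumptions, not a standard from the lecture.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Dating:
    """A dating modelled as a period of time.
    Years are stored as integers, negative values standing for BC;
    None marks an open (indefinite) end of the period."""
    earliest: Optional[int]   # terminus post quem
    latest: Optional[int]     # terminus ante quem
    label: str = ""           # the original, human-readable dating

    def as_set(self) -> set:
        """The period as an unordered set of years (only for closed periods)."""
        if self.earliest is None or self.latest is None:
            raise ValueError("an open-ended period cannot be enumerated")
        return set(range(self.earliest, self.latest + 1))

# A precisely defined but not generalisable span: the archon year 367/66 BC
archon_year = Dating(earliest=-367, latest=-366, label="367/66 BC")

# A coin-based terminus post quem: definite beginning, indefinite end
coin_context = Dating(earliest=100, latest=None, label="after AD 100")

print(archon_year.as_set())   # {-367, -366}
```

An event-oriented database, as recommended in the database lesson, could then attach one or more such Dating objects to each object or feature, for example one for construction and several for repairs.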
In addition to the usual timelines, there is also a three-dimensional form of representation available, with which, for example, the history of the Tower of London is visualised here, and which in a sense implements a journey back in time, where one can click on the respective persons and places and thus in a sense question them. [76] At the end of the 1990s, I myself programmed a timeline for archaeological databases that made it possible to enter and search for dates as years, as time periods and as epochs. In the process, linguistic datings containing values as text were transferred into corresponding numerical values with areas of uncertainty. In parallel, different timelines with purely numerical values could also be used. My approach was designed to be user-friendly in that the actual timeline was interactive, clickable and zoomable as a visualisation of the inputs. [77] However, the modelling of timelines would also be conceivable in spreadsheet programs that contain a project planning option. [78] Much more complex is the iDAI.chronontology web service, which combines time terms with datings. Here you can see, for example, the existence of C14 dates of the Bronze Age in Europe, with the number of features represented as a heat map. [79] Primarily, ChronOntology is an ontology for time expressions that seeks to harmonise the many different chronology systems. The project was funded by the German Research Foundation (DFG) from 2015 to 2018, with the long-term availability of the data guaranteed by the German Archaeological Institute as part of iDAI.welt. However, the problem of the fuzziness of time and space data was not addressed here. [80] As we have seen, imprecision in spatial and temporal data can have various causes. First of all, there is the inaccuracy or general fuzziness in the data, by which I mean the discrepancy between observation and reality as expressed, for example, in a hand sketch. But this also includes blurring that occurs during scaling, or deliberately undocumented boundary courses. In the field of dating, such blurriness is made clear, for example, by an implied "around" (circa) or a trailing plus-minus. I would like to contrast this with the lack of precision, i.e. the inaccuracy of measurement. It often arises from carelessness or the desire not to have to specify. Examples would be expressions like "near Rome", or coordinates shortened to one decimal place. Similar phenomena also occur with time indications, for example, when one says "once a week" without being specific, or uses "later imperial period" as a dating without precisely specifying the vagueness of the dating. A lack of unambiguity is more often seen with place names. For example, two places such as Neustadt may have the same name, or a place may be known by several names (and with considerable spelling variation). Place references may also be used ambiguously in the sources: in travel guides, for example, more places are assigned to Tuscany than actually lie within the region, and on the Göttingen housing market every house, no matter how remote, is still advertised as Ostviertel. In addition, there are fictitious place names, such as Atlantis or Utopia. As far as dating is concerned, I would just like to point out a linguistic problem. When we say "around 300", we might think of a vagueness of +/- 10-15 years. The further back the year lies, the greater this margin may be. But it is always higher with rounded years than with non-round years like "around 305", where we probably only think of a deviation of about 5 years.
A fourth cause of imprecision in the data is simply their incompleteness, i.e. the fact that information is rarely comprehensively documented. Maps, aerial photographs and survey data can be incomplete due to light conditions or vegetation, for example, and dating information is not infrequently compromised by gaps in tradition, as we saw with dendrochronology. In addition, inconsistencies also occur in the data, i.e. individual pieces of information do not agree with others in the same source. For example, it is quite common to speak of a statue as "from Rome" and mean "in Rome", and not "with Rome as place of discovery or place of manufacture". Or in one source "12h" is used synonymously for 12 hours and 12 o'clock. Similarly, two different terms for Saturday (in German, Samstag and Sonnabend) can be used inconsistently. Furthermore, information has a limited validity, i.e. there are temporal gaps between the first occurrence, the compilation and the use of the information. This has to be considered, for example, when looking at the age of maps. Often we do not know how long the facts marked there, such as property boundaries or street names and courses, were valid or had already existed. We saw the same thing when converting to another chronological system or when asking about the circulation period of coins. Furthermore, the credibility or reliability of the source of information affects the certainty of the data. Thus, whether the author had knowledge of the place or only knew the information from hearsay plays a not insignificant role in source criticism. I have already referred to the problem of backdated documents and works of art. Related to this is the subjectivity of data, i.e. the extent to which they contain interpretations or judgements. For example, a native will certainly grasp the spatial contexts more clearly than a foreigner. And an expert is much more confident in dating finds and features than students are. Finally, the interrelation between pieces of information, i.e. the dependence of the source on other sources, should be mentioned. I am thinking, for example, of altered versions of a known map, where errors are taken over or information that is no longer accurate at the time is passed on. Such an anachronism, i.e. a classification in a wrong chronological context, naturally occurs quite frequently, for example when contemporary costumes are worn in paintings with historical themes. In ancient historiography, efforts were even made to parallelise events. When, for example, the tyrants in Athens and the kings in Rome are driven out in the same year, it makes us sit up and take notice. But errors can also creep in during the acquisition, i.e. entries can contradict each other, and then it is important to determine the causes if possible. In cartography, this is often due to multiple input sources or incorrectly rectified plans, and for time data, conversion problems between the different chronology systems should not be underestimated. A fuzziness arises here when the error is discovered but cannot be traced, so that one has to continue working with competing information. I think this overview can serve as an example to illustrate why humanities studies proceed so differently from information science studies. After all, we have no choice but to work with noisy data. However, we should make an effort to define the fuzziness more precisely in order to be able to model it.
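One simple way of defining such fuzziness more precisely, in the spirit of the fuzzy sets discussed a little further on, is to attach a membership function to a dating: full plausibility within a core interval and linearly fading plausibility at the margins. The concrete core intervals and margins for "around 300" and "around 305" below are only meant to illustrate the linguistic difference just described; they are my own assumptions, not fixed conventions.

```python
def membership(year: float, core_start: float, core_end: float, margin: float) -> float:
    """Degree (0 to 1) to which a year belongs to a fuzzy dating with a fully
    plausible core interval and linearly decreasing plausibility at the edges."""
    if core_start <= year <= core_end:
        return 1.0
    if core_start - margin < year < core_start:
        return (year - (core_start - margin)) / margin
    if core_end < year < core_end + margin:
        return ((core_end + margin) - year) / margin
    return 0.0

# "around 300": a round year suggests a wide margin (roughly +/- 10-15 years)
print(membership(310, 295, 305, 10))   # 0.5
# "around 305": a non-round year suggests a narrower margin (about +/- 5 years)
print(membership(310, 303, 307, 5))    # 0.4
```

Modelled in this way, a query for the year 310 would still match both datings, but with an explicitly lower degree of membership than a query for 302, instead of silently treating the fuzzy expressions as sharp intervals.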
[81] Traditional representation conventions for maps, namely points for cities, polygons for areas, polylines for rivers, are not capable of representing uncertainties. However, there are certainly sensible forms of visualisation to counter this deficiency. As examples, I will show you the mapping of principalities whose domain cannot be determined exactly, and the localisation of the places of action in Theodor Storm's novella "Der Schimmelreiter". Places are indicated here as points with different scattering, boundaries are blurred, and areas are hatched. If a place could not be located exactly, a blur polygon has been drawn around its centre of gravity as a radiating area. [82] In principle, a number of visual variables are available for the visualisation of fuzziness, as long as the aim is to express the degree of indeterminacy locally. Thus, one can use point sizes, colour scales, brightness or degrees of saturation. One can specify the distance of a point from the origin of the coordinate grid, choose different hatchings or symbols, or work with blurred edges or transparency. [83] In 2011, 72 geography students were asked which visual variable they found most appropriate on maps. Intuitively, they thought edge blur was the best symbol, followed by the coordinate symbol and decreasing brightness. Such readability studies are important, but they cannot hide the fundamental problem that we often do not know the exact spatial boundaries that should be drawn. [84] In geographic information systems one can define a degree of uncertainty for the objects and reproduce it accordingly in an automated way. For example, cartographic symbols could be assigned and coloured accordingly: in this case, the certainty of a boundary line is indicated by differently interrupted lines, while building ground plans or areas are either dotted with different densities or outlined and coloured with lines of different thickness. [85] As far as sites are concerned, it will have to be decided case by case whether, for example, an empty frame is chosen, areas are coloured in, symbols are used or differently coloured blurred areas are drawn. [86] One can try to transform such expressions into machine-readable time periods or coordinates, but the modelling of the uncertainties is crucial, otherwise there is a considerable risk of misinterpretation. Take, for example, the expression "mid-16th century" as a dating for Titian's painting. It could perhaps be represented in numerical values as "1545-1555". However, the temporal relation between "mid-16th century" and "24 April 1547" remains unclear insofar as, taken as dates, neither is the first before the second nor vice versa. This only becomes clear when the events are logically linked. If, on the other hand, one understands the data as sets, then the second is contained in the first, but not vice versa. An exponentially increasing uncertainty towards the edges could also be used to determine the probability with which 24 April 1547 still lies within the modelled period; in our modelling, this would be 83 per cent. [87] The uncertainty could also be modelled as a confidence interval. A confidence interval is the range that, with infinite repetition of a random experiment, encompasses the true position of the parameter with a certain probability. Usually, 95% is chosen as the probability. To take up the example from Wikipedia: on the right you see 100 samples from a normally distributed population plotted.
Of these, 94 intervals cover the exact expected value μ = 5; the remaining 6 do not. The confidence intervals established in the natural sciences are not normally used in the humanities, because the depiction of exact-looking values gives the impression of exact knowledge, which is often not only inaccurate but also levels the complexity of the facts. Moreover, this approach presupposes measurable values, which are usually not available for datings. The modelling of fuzziness therefore poses a particular challenge, especially in information technology systems that are normally exact. [88] The application of fuzzy set theory is now established in many everyday and technical areas. It operates with indeterminate conceptual scopes in the sense of referential semantics. This procedure, which is based on the theory of fuzzy sets, could also be applied to humanities data sets and search results by actually visualising the data as fuzzy sets whose density decreases towards the edges, whereby linking and overlapping are also possible. One could thus also model different linguistic fuzziness terms differently, e.g. by choosing linear or exponentially shaped margins. [89] This can perhaps be illustrated quite nicely with subjective linguistic terms for temperature, since each of you is likely to use indeterminate terms such as "lukewarm". In our example, the six temperature terms are assigned triangular fuzzy sets with a range of up to 30°. These ranges can overlap, so that an input value of 33° would have a linguistic equivalent of 50% for "warm" and 20% for "lukewarm". If one were to determine the conventions of the respective technical language in this way, perhaps using text-mining methods, one could derive credible fuzzy sets for the respective historical disciplines. For in addition to the fuzziness of expression deliberately used by authors, there is the problem that the individual disciplines each use terminology and epoch designations differently. That is why I am convinced that Big Data applications in the humanities will not get around fuzzy set theory. And I imagine it to be very appealing to be able to evaluate the entire surviving record of an epoch without generating only banalities through inappropriate equivalences and generalisations. [90] This is where statistical methods come into play, which we will talk about in more detail in the next lesson. However, I cannot resist already setting another founding moment against the established founding myth of the Digital Humanities today, one that begins even before the development of the first universally programmable calculating machines. As early as 1899, the Egyptologist Flinders Petrie presented a method of contextual seriation in the Journal of the Anthropological Institute, with which he was able to evaluate Egyptian predynastic tombs and the finds they contained chronologically. Explained in a highly simplified way, contextual seriation works by first classifying the finds, in this case ceramic vessels, by type and writing the types one below the other. In columns representing the individual graves, one then records the occurrence of each vessel type in the graves. In our example, the simple beaker occurs in graves E and F, vessels with a black rim in A, E and F, and so on. Such a matrix can of course also be made machine-readable by using a 1 for the occurrence of a vessel type in the respective grave and a zero for its absence (a small sketch of such a matrix follows below). Petrie recognised that some urns had handles and others only stylised ridges in the same place.
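Before following Petrie's observation about the handles any further, the occurrence matrix just described can be sketched in a few lines of Python. Only the "simple beaker" (graves E and F) and "black rim" (graves A, E and F) entries are taken from the example above; the remaining types, their distribution over the graves and the re-sorting rule, which orders graves and types by the mean position of their entries, are my own simplified assumptions and not Petrie's actual procedure.

```python
# Rows are vessel types, columns are graves; 1 = type present in the grave.
types  = ["simple beaker", "black rim", "wavy handle", "stylised ridge", "painted"]
graves = ["A", "B", "C", "D", "E", "F"]

occurrence = [
    [0, 0, 0, 0, 1, 1],  # simple beaker (E, F) - from the lecture example
    [1, 0, 0, 0, 1, 1],  # black rim (A, E, F) - from the lecture example
    [1, 1, 0, 0, 0, 0],  # wavy handle - invented toy data
    [0, 1, 1, 1, 0, 0],  # stylised ridge - invented toy data
    [0, 0, 1, 1, 0, 0],  # painted - invented toy data
]

def barycentre(values):
    """Mean position of the 1-entries in a row or column."""
    positions = [i for i, v in enumerate(values) if v]
    return sum(positions) / len(positions)

for _ in range(5):  # a few passes are enough for this toy example
    # sort graves (columns) by the mean row position of their finds
    cols = sorted(range(len(graves)),
                  key=lambda c: barycentre([row[c] for row in occurrence]))
    graves = [graves[c] for c in cols]
    occurrence = [[row[c] for c in cols] for row in occurrence]
    # sort types (rows) by the mean column position of their occurrences
    rows = sorted(range(len(types)), key=lambda r: barycentre(occurrence[r]))
    types = [types[r] for r in rows]
    occurrence = [occurrence[r] for r in rows]

print(" ".join(graves))            # E F A B C D
for name, row in zip(types, occurrence):
    print(name.ljust(15), row)     # entries now cluster along the diagonal
```

On this toy data a few passes are enough to pull the entries into a diagonal band, i.e. into the kind of development series described next.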
He therefore assumed that this could be interpreted as a development. The same applies to changes in funerary rites. In connection with other vessel forms that underwent a similar development, he assumed that one could put the graves into a chronological sequence if one evaluated all the features in combination. Petrie therefore set about re-sorting the rows and columns of his table so that a diagonal emerged. In this way a development series was created, i.e. a sequence of the grave contexts in connected series, as you can see in the figure on the right from his publication. He was thus able to distinguish seven phases of occupation. [91] This method of seriation is still used. For the dating of the individual tombs in the necropolis of Taranto, for example, Daniel Graepler assumed that tombs with similar grave goods were closer in time than those with completely different grave goods. Graepler determined the similarity of the individual graves by evaluating the morphology of the pottery present in the necropolis, which he subdivided into phases A-G. It was important that the morphological separation of the types was based on chronological, culturally determined developments and not on functional differences. [92] With the help of a computer program that carried out the seriation, the graves could be brought into chronological order, this time arranged in the rows; again by sorting the rows and columns until an approximately diagonal sequence of entries emerged. In conjunction with the other grave goods, a chronological framework for Taranto's graves could thus be established for the first time. The data of such a seriation therefore lead (as with the Harris matrix for stratigraphy) to a chronological network of relationships that can be expressed in vectors and thus in numerical values that are independent of the date formats considered so far, and with which one is very well able to model fuzziness. [93] This brings us to the end, because we will discuss the statistical procedures associated with seriation in the next lesson. I hope it has become clear that there are still some challenges in the acquisition and analysis of spatial and temporal data. For the modelling of chronological fuzziness with a view to its statistical evaluability, I have already pointed to fuzzy logic as a possibility for further research. Incidentally, this theory has already been applied to geodata. In the future, spatial analysis will certainly also benefit from the visualisation and evaluation of spatial fuzziness. After a long time of thinking about acquisition standards in excavations, I now see some potential in making acquisition in archaeological information systems more flexible so that they can be used meaningfully in many places. This means that we will increasingly be in a position to evaluate unstructured data with the help of neural networks. Therefore, we must increasingly collect all existing data, digitise old records and try out appropriate methods of analysis in order to arrive at real Big Data applications in the humanities. [94] Finally, I would like to summarise once more what knowledge you should have acquired after this lesson. The definitions and concepts of cultural heritage are relevant to any engagement with spatial and temporal data. After all, as with the broader concept of images, it is about more than the material legacy.
It is also about habitats, cultural practices and traditions. Therefore, as a developed example of geo-applications in the Digital Humanities, you should be familiar with the possibilities of digital geoarchaeology and excavation documentation. In fact, other areas of Digital Cultural Heritage proceed in a very similar way. Knowledge of historical cartography, surveying and geo-referencing as well as their digital implementation is also important. Perhaps you were not even aware until today of the different dating methods and chronology systems that exist. They pose particular challenges for digital modelling. An important category with regard to Big Data applications in the humanities is GIS and CAD data. You should therefore look further into their integration into corresponding projects, into geo-databases and geo-repositories. Heterogeneity and fuzziness may be considered typical characteristics of data in the humanities. You should therefore be familiar with the approaches to modelling and visualising fuzziness in spatial and temporal data. In an introductory lecture, all of this can only be initial basic knowledge, which you should deepen further in the course of your studies. [95] You should therefore be able to handle temporal and geodata routinely by the time you graduate at the latest. It would be good if you could also familiarise yourself with the visualisation of historical situations and archaeological features on maps and in GIS, and gain practical experience in the use of GIS and archaeological databases with regard to different usage scenarios. [96] In the exam, I could ask you what dating possibilities are available to the historical sciences and how datings can be expressed linguistically. What difficulties does this pose for digital processing? I would also be interested to know what advantages you see in digital geovisualisation. Or I would ask you quite specifically to give me two examples of the digital processing of spatial data. And if I asked you: What do you understand by the term cultural heritage? What opportunities does digital acquisition offer here? Then you can draw on examples from all the lessons. And in the 10th and 11th lessons you will also get to know a variety of virtual implementations. Finally, a more far-reaching question where I would be interested in your opinion: What, in your view, should a historical geo-information system be able to do? [97] You can always find important literature on detailed questions directly on the slides, and the attempt to name six central publications on the topic in each lesson is a daring undertaking anyway. This time I have failed, because both the theory and the practice of digital spatial analysis are very multifaceted. Here, survey works from informatics and geography as well as from archaeology and cultural history would all have to be mentioned. [98] And with that, I say goodbye to you for today and wish you much success in learning and revising.