Machine learning and AI
One of the most innovative aspects of the seafloor and habitat mapping research in iAtlantic is the development of machine learning methods to support automated image analysis of deep-sea fauna.
Whilst out on surveys during an expedition like iMirabilis2, instruments such as the AUV can collect tens of thousands of seafloor images and dozens of hours of seafloor video footage. You can imagine the amount of time it would take one scientist to examine all of this material, classify the seafloor substrate and identify all the species seen!
However, to fully understand deep-sea biodiversity, ecosystems and habitats, we do need to be able to identify the different seafloor substrates and species that we observe. To help make this process quicker, scientists are developing an automated image recognition process, so that the thousands of images collected can be analysed quickly and accurately by computer rather than by human eyes alone.
To build, test and refine the algorithms behind this technique, we need real observational material and expert annotations (i.e., the manual identification of features in images) to help “train” the computer programme to recognise different species and their characteristics – this is what we call machine learning. For image analysis, machine learning consists of two steps: 1) detecting parts of an image that show objects of interest, and 2) classifying those objects into categories defined by experts.
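As a purely illustrative example, the sketch below shows what such a two-step pipeline might look like in Python using PyTorch and torchvision. The pretrained models, score threshold and category handling are assumptions made for this example; they are not the actual iAtlantic software.

```python
# A sketch of the two-step approach, assuming PyTorch/torchvision and
# off-the-shelf pretrained models. In practice the classifier would be
# fine-tuned on the expert-defined deep-sea categories.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models import resnet18
from torchvision.transforms import functional as F

detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()  # step 1: find objects of interest
classifier = resnet18(weights="DEFAULT").eval()               # step 2: assign each object a category

def analyse_image(image, score_threshold=0.7):
    """image: float tensor [3, H, W] with values in [0, 1]."""
    with torch.no_grad():
        # Step 1: the detector proposes bounding boxes around possible objects.
        detections = detector([image])[0]
        results = []
        for box, score in zip(detections["boxes"], detections["scores"]):
            if score < score_threshold:
                continue
            # Crop the detected region and prepare it for the classifier.
            x1, y1, x2, y2 = box.int().tolist()
            crop = F.resize(image[:, y1:y2, x1:x2], [224, 224])
            crop = F.normalize(crop, mean=[0.485, 0.456, 0.406],
                               std=[0.229, 0.224, 0.225])
            # Step 2: the classifier labels the crop (here with generic
            # ImageNet classes; a real system would use species categories).
            predicted_class = classifier(crop.unsqueeze(0)).argmax(dim=1).item()
            results.append((box.tolist(), predicted_class, float(score)))
        return results
```

Run on a single photograph, this hypothetical analyse_image function would return a list of bounding boxes, each with a predicted category and a detection confidence, mirroring the detect-then-classify split described above.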
During iMirabilis2, photographic datasets from Autosub6000 will be used to help this process evolve, but the expertise of many biologists (both at sea and back in the research labs on shore) will still be needed to annotate the photographs correctly and make sure the machines don’t learn any bad habits!
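To illustrate how expert annotations feed into training, here is a minimal, hypothetical sketch of fine-tuning an image classifier on biologist-labelled photographs. The file name, column names and simplified training loop are assumptions made purely for this example and do not describe the actual iAtlantic annotation workflow.

```python
# A hypothetical sketch of training a classifier from expert annotations.
# The CSV file, its columns and the bare-bones training loop (no batching,
# no validation) are assumptions made purely for illustration.
import csv
import torch
from torch import nn, optim
from torchvision.io import read_image
from torchvision.models import resnet18
from torchvision.transforms import functional as F

# Each row pairs an AUV photograph (or image crop) with the label a biologist assigned.
samples = []
with open("expert_annotations.csv") as f:          # hypothetical file: image_path,label_id
    for row in csv.DictReader(f):
        samples.append((row["image_path"], int(row["label_id"])))

num_categories = 1 + max(label for _, label in samples)
model = resnet18(weights="DEFAULT")
model.fc = nn.Linear(model.fc.in_features, num_categories)  # new head for expert-defined categories
optimiser = optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for path, label in samples:                        # assumes RGB images; one pass, no batching
    image = F.resize(read_image(path).float() / 255.0, [224, 224])
    loss = loss_fn(model(image.unsqueeze(0)), torch.tensor([label]))
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```

The key point the sketch tries to capture is that the quality of the expert labels directly determines what the model learns – which is exactly why careful annotation by biologists matters so much.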

