The Titanic sank 113 years ago, on April 14–15, 1912, after hitting an iceberg, with human error likely causing the ship to stray into those dangerous waters. Today, autonomous systems built on artificial intelligence can help ships avoid such accidents, but could such a system explain to the captain why it was maneuvering a certain way?
That’s the idea behind explainable AI: a system that can justify its actions should earn more trust from the human operators who rely on it.
Researchers from Osaka Metropolitan University’s Graduate School of Engineering have developed an explainable AI model for ships that quantifies the collision risk for all vessels in a given area, an important capability now that key sea lanes have become increasingly congested. The findings were published in Applied Ocean Research.
Graduate student Hitoshi Yoshioka and Professor Hirotada Hashimoto built the AI model so that it explains the basis for its decisions, and the intentions behind its actions, in terms of numerical collision-risk values.
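The article does not detail how those per-vessel risk numbers are computed, but the general idea of attaching an inspectable collision-risk value to every nearby ship can be illustrated with a classic closest-point-of-approach (CPA) calculation from marine navigation. The sketch below is a hypothetical stand-in, not the authors’ model: the Vessel class, the cpa_risk function, and the exponential risk thresholds (1 nautical mile, 0.5 hours) are all illustrative assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class Vessel:
    name: str
    x: float   # east position, nautical miles
    y: float   # north position, nautical miles
    vx: float  # east velocity, knots
    vy: float  # north velocity, knots

def cpa_risk(own: Vessel, other: Vessel) -> dict:
    """Return DCPA, TCPA, and a 0-1 risk score for one target vessel."""
    # Relative position and velocity of the target with respect to own ship.
    rx, ry = other.x - own.x, other.y - own.y
    dvx, dvy = other.vx - own.vx, other.vy - own.vy
    dv2 = dvx * dvx + dvy * dvy
    # Time to closest point of approach (hours); 0 if relative speed ~ 0.
    tcpa = -(rx * dvx + ry * dvy) / dv2 if dv2 > 1e-9 else 0.0
    tcpa = max(tcpa, 0.0)  # a CPA already in the past poses no future risk
    # Distance at the closest point of approach (nautical miles).
    dcpa = math.hypot(rx + dvx * tcpa, ry + dvy * tcpa)
    # Map (DCPA, TCPA) to a 0-1 risk: close and soon means high risk.
    # The 1 nm / 0.5 h scales are illustrative, not taken from the paper.
    risk = math.exp(-dcpa / 1.0) * math.exp(-tcpa / 0.5)
    return {"target": other.name, "dcpa_nm": round(dcpa, 2),
            "tcpa_h": round(tcpa, 2), "risk": round(risk, 2)}

own_ship = Vessel("own", 0, 0, 0, 12)   # heading north at 12 knots
targets = [Vessel("A", 2, 4, -8, 0),    # crossing from starboard
           Vessel("B", -5, 10, 6, -2)]
for report in sorted((cpa_risk(own_ship, t) for t in targets),
                     key=lambda r: -r["risk"]):
    print(report)  # per-vessel numbers a navigator could inspect
```

Whatever the exact formulation, exposing figures like these alongside each maneuver is what lets a watch officer check the system’s reasoning rather than take it on faith.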
“By being able to explain the basis for the judgments and behavioral intentions of AI-based autonomous ship navigation, I think we can earn the trust of maritime workers,” Professor Hashimoto stated. “I also believe that this research can contribute to the realization of unmanned ships.”
More information:
Hitoshi Yoshioka et al., Explainable AI for ship collision avoidance: Decoding decision-making processes and behavioral intentions, Applied Ocean Research (2025). DOI: 10.1016/j.apor.2025.104471