Perplexity and shipping

When asked, artificial intelligence defines itself as the capability of machines or software to perform tasks that normally require human intelligence, adding that it is designed to process data, identify patterns, and act on them—often improving over time without being explicitly reprogrammed. However, in the massive industry that is shipping, where the margin for error is minimal, can it be trusted?
In the recently published Maritime Cyber Trends Report 2026 from Cydome, Katerina Raptaki of Navios stated that shipping companies are deploying AI faster than they are defining cyber accountability and that, in 2026, the question after an incident won’t be ‘was the AI wrong?’ but ‘why was it trusted?’
So, if we want to look at this from an analytical standpoint, we need numbers. And they show that the maritime industry’s accelerating adoption of artificial intelligence is sharply compressing the cybersecurity response window, with new data showing that up to 60% of newly disclosed software vulnerabilities are weaponised within 48 hours.
That same Cydome report also found that the average time from vulnerability disclosure to active exploitation has fallen from 63 days in 2018 to five days in 2024. Today, AI-driven tools are targeting some systems within 15 minutes of a flaw being detected. The report also notes a 1,600% surge in voice phishing attacks and an 800% rise in assaults on edge network devices in 2025.
A new white paper from maritime cyber specialist CYTUR warns that shipping’s digital revolution has outpaced its defences and that the industry is facing systemic cyber risk. The white paper claimed that maritime cyber incidents in 2025 surged by 103% compared to 2024, emerging as a critical threat to maritime safety.
The Institute of Marine Engineering, Science and Technology (IMarEST), in its survey of 130 maritime professionals and in-depth interviews, found that around 80% of respondents believe AI can improve efficiency. However, it also found that 37% have personally witnessed AI failures. The report added that only 11% have any formal AI policies.
Even though the numbers look troubling, IMO and BIMCO claim that AI-based threat detection reduces incident response times by up to 70%. Maritime organisations integrating AI-driven analytics also benefit from predictive maintenance and early warning of emerging cyber risks.
But the point Raptaki was making was somewhat supported by Bob Doncom, chair of the IMarEST AI special interest group. He said that just because AI may arrive at an answer faster than a human, that answer is not necessarily correct.
“The machine might be wrong, but if you don’t do anything in, for example, a rapidly evolving collision situation you’re wrong as well,” Doncom stated.
However, he points out that people can cope with one, two, or three variables, whereas AI can handle seven, eight, or more, and all sorts of variables can be thrown in, with different scenarios run to see the likely outcomes.
“Human in the loop is absolutely brilliant, unless you can’t afford the time. In rapidly evolving situations, such as collision avoidance, the volume of information and speed of events can overwhelm human operators,” Doncom said.
Seafarers, like any human being, can only take so much input at once, and Doncom believes that as the situation gets more complex and the threat gets closer, humans may be just too slow.
“We’ve got to find out where the benefit of relying on autonomy or AI overcomes the risk, when not relying on it is more of a hazard than relying on it,” Doncom said.
The argument for human supervision of AI, or AI supporting humans in decision-making – whichever way you prefer to look at it – is also supported by Marcus Warrelmann, CEO of SEA.AI.
He believes that there is some truth to the idea that AI adoption is outpacing governance in parts of the maritime industry, but says that’s not unusual for emerging technologies, especially those that deliver immediate value.
According to him, in many cases, AI-powered machine vision systems are being adopted precisely because radar and AIS systems have known limitations, including susceptibility to deliberate manipulation, so machine vision is added as an independent layer of perception to help close these gaps.
“Today, most AI-powered safety systems don’t make autonomous decisions; they support human operators with additional situational awareness. In that context, the question is less about ‘trusting AI’ outright and more about how crews are trained to interpret and act on AI-assisted insights,” Warrelmann explains to Splash Extra.
From SEA.AI’s perspective, accountability remains with the operator because the company’s systems augment human decision-making. They do not replace it. They provide detections, classifications, and alerts, but the final judgment always sits with the crew.
Denis Morais, CEO of Canadian shipbuilding software specialist SSI, advocates an approach that involves much more scrutiny of AI.
In a talk with Splash Extra, he explains how security is too often treated as an afterthought in a race between shipping firms to adopt the latest in AI tech, as has been the case with every major technological shift.
“That short-term mindset creates the illusion of rapid progress, but the moment an AI-enabled system is compromised, the consequences can be catastrophic,” Morais says. “Unlike other sectors where failures are contained, shipping is the circulatory system of the global economy; a single breach can ripple across markets, supply chains, and national security.”
He adds that the pressure to move fast is great, but in shipping, the stakes are too high to bolt on security later, and that a security-first approach is necessary when dealing with AI.
“[That is] not because it slows innovation, but because it’s the only way to ensure that innovation doesn’t become the source of the next global disruption,” Morais warns.
SEA.AI’s Warrelmann understands that the industry is right to consider human factors, including over-reliance on automation and the need for safety, but believes that AI in maritime is progressing at the pace it should.
“That’s why we believe AI systems should be designed to keep operators engaged and informed, not to remove them from the loop. Overall, we see that AI in maritime is progressing along the same path as in aviation and automotive: it starts with decision support, and best practices and standards evolve alongside it,” he tells Splash Extra.
When asked about the topic, Michael Kei, vice president of technology for the Americas at ABS, says that, for ABS, the central challenge is not AI itself, but its integration into established safety, cybersecurity, and risk management frameworks.
This is supported by Chakib Abi Saab, chief technology and innovation officer at Lloyd’s Register, who tells Splash Extra that the challenge is not the technology itself, but “how it is being integrated into operational environments that were not originally designed for this level of autonomy and data dependency”.
He adds that the broader narrative around AI and a tendency to overestimate what can be delivered in the short term create inflated expectations.
“[This] ultimately increases risk when those expectations are not matched by robust implementation. AI is powerful but not infallible, and it should not be treated as such,” Abi Saab points out.
And ‘why was the AI trusted?’ is a question that should be formulated differently, according to ABS’ Kei.
“Expectations for AI-enabled systems must be unambiguous: how risk is managed, how system performance is validated and monitored, and how AI outputs are governed in daily operations. After any incident, the key question should include ‘how did the AI behave based on its design and limitations?’,” he says.
Abi Saab also made a very good point about which processes AI is applied to and what kinds of results it will yield.
“AI will only amplify the quality of the processes it is applied to. If we automate poor or outdated processes, we simply get poor outcomes faster. The real opportunity is to rethink and simplify the underlying processes first and then apply AI in a way that genuinely improves outcomes,” he explains.
He also makes it abundantly clear that he fully endorses the development of AI and emerging technologies, as long as that enthusiasm is balanced by an equal focus on clear accountability, strong governance, and a deep understanding of where human oversight remains critical.
“The organisations that will lead are not simply those moving fastest, but those moving with clarity, discipline, and a strong foundation of trust in how these systems are deployed and managed,” Abi Saab notes.
Now, as the first paragraph suggested, one of the commercially available AIs was asked to define itself. But, to push the exercise a bit further, it was also asked, ‘Should AI systems integrated into maritime systems be trusted, and is AI at fault if an incident occurs due to systems it oversees?’.
It answered that AI in maritime systems “can be trusted—but only within a rigorously engineered, supervised, and regulated framework”. As for the fault, it gave a rather shrewd answer, stating that an AI is not “legally at fault in the current regime” and that “liability sits with human and corporate actors”.
So, just to be clear, if anything bad were ever to happen, artificial intelligence has a court defence ready.
But this is not only an AI’s point of view, so to speak. Anil Kumar Korupoju, a senior surveyor at the Indian Register of Shipping, wrote something similar in a story published on our sister title Splash last week.
“For owners, the implications extend beyond technical reliability. If an AI system contributes to a navigational or maintenance decision, accountability does not sit with the algorithm. It sits with the operator. In the event of an incident, questions will focus on data integrity, validation boundaries, and update control,” he said.
From everything the experts have told us, something of a consensus emerges regarding AI in shipping: it still requires human supervision and improved governance within the industry. AI is improving by leaps and bounds in every direction, and for now, it appears that development is outpacing governance.
source : splash247