Home Estate Planning Eurostar’s new AI chatbot left customers exposed

Eurostar’s new AI chatbot left customers exposed

by
0 comment

Eurostar’s shiny new AI chatbot was billed as a smarter way of helping customers. But the bot was shipped with old-fashioned security flaws that could have left customers exposed, and for weeks, nobody at the train operator seemed willing to listen.

City AM can reveal that multiple security flaws were found in Eurostar’s public-facing AI chatbot, all while the company was rushing to embed AI into a consumer product.

The chatbot, a customer tool sitting atop of a large language model (LLM), was originally designed to handle general enquiries, rather than access any sensitive system.

The European rail operator has claimed that it was never connected to customer accounts or internal platforms, and that no data was put at risk, as all customer information remains protected behind login barriers.

Nonetheless, the flaws, which have now since been fixed, underline how easily ‘AI powered’ front ends could create a false sense of security.

Security researchers at Pen Test Partners raised concerns under Eurostar’s published ‘vulnerability disclosure policy’, flagging a series of weaknesses that showed how the chatbot’s controls could be bypassed in practoce.

Those reports were submitted responsibly and within scope, City AM understands.

Eurostar: Solid-looking guardrails

The most alarming issues in the Eurostar system was a guardrail bypass.

While the chatbot appeared to enforce strict content controls, only the most recent message in a conversation was properly validated server side.

Meanwhile, all the other, prior messages, could be altered client-side and quietly fed back into the model as ‘trusted’ context.

In practice, that meant an attacker could send a harmless final message to pass checks – while actually smuggling a malicious or manipulative prompt earlier in the chat history.

Once past the guardrails, the chatbot could be steered into revealing internal details like its system prompt and underlying information.

The latter risks an awkward exposure for any company, and a potentially dangerous one if the bot were later connected to personal data or account details.

Other weaknesses included conversation and message IDs that weren’t properly verified, and an HTML injection flaw that allowed JavaScript to run inside the chat window.

This was initially a harmless input, but with a plausible path to something more serious should chats ever be replayed or shared.

“No attempt was made to access other users’ conversations or personal data”, PTP said.

“But the same design weaknesses could become far more serious as chatbot functionality expands”.

Eurostar stressed that customer data was in this case not at risk.

A spokesperson told City AM: “The chatbot did not have access to other systems and more importantly no sensitive customer data was at risk. All data is protected by a customer login.”

Disclosure derailment

If the tech issues were concerning, the disclosure process raised even more eyebrows.

The vulnerabilities were first reported to Eurostar on 11 June 2025 via the company’s vulnerability disclosure email address.

There was no acknowledgement, and a follow-up on 18 June also went unanswered.

After nearly a month of silence, the issue was escalated privately via Linkedin to Eurostar’s head of security, who said to use the rail giant’s ‘vulnerability disclosure programme’, which had already been done.

Weeks later, Eurostar had either changed or outsourced its disclosure process mid-way through, meaning there was no longer any record of the disclosure.

During the back and forth, Eurostar even issued blackmail accusations for persisting in trying to get the issues addressed.

Eurostar said it encourages responsible disclosure and reviews of all reports carefully, claiming that “any issues identified during early testing were addressed promptly, and we continue to monitor and strengthen our security controls”.

The flaws were eventually fixed.

Old problems with a new wrapper

Eurostar’s chatbot relied on familiar web and API plumbing like message histories, IDs, and signatures.

The transport giant said the chatbot was an experimental service, and that it has a “well-established cyber security governance framework, including the use of external ethical hacking specialists”.

You may also like

Leave a Comment

Are you sure want to unlock this post?
Unlock left : 0
Are you sure want to cancel subscription?