
Why AI Transparency will build brand trust and loyalty in 2026 – Interview with Nicholas Kontopoulos

Failing to disclose an AI interaction, particularly one that results in a poor service experience, leads to negative sentiment and may push customers toward more transparent competitors.

I recently interviewed Nicholas Kontopoulos, Vice President of Marketing, Asia Pacific & Japan at Twilio, about the indispensable role of transparency and trust when implementing AI. If organisations want to be successful, winning customers over rather than alienating them, they must be on the front foot when it comes to transparency. Nick also highlighted the need to focus on the problem you are trying to solve and the root causes of that problem.

Mark Atterby (MA): Hi Nicholas, thanks for joining me today. What are the risks and negative consequences for both brands and consumers, if organisations are not transparent with their use of AI?

Nicholas Kontopoulos (NK): Here are a couple of references from our research – the Decoding Digital Patience and Inside the Conversational AI Revolution reports. What’s really interesting with the latter is that 90% of consumers fail to correctly identify AI-generated voice clips. I think that’s an important data point to hold onto when we think about AI transparency.

And the other one, when we look at the Decoding Digital Patience research, which we ran regionally across seven markets including Australia, about 50% of Australian consumers said that they use AI.

Australian consumers reported that the use of AI in customer service is making them less patient, which was really surprising. They are more likely to lose patience when interacting with AI agents than when dealing with human agents. Australian consumers are also more likely than their Asia Pacific peers to downgrade their opinion of a brand after a poor service experience, and they are more vocal in sharing their negative sentiment.

We were advocating early with our customers for transparency regarding AI usage. This ‘AI identity crisis’ is evident in the data.

Imagine a conversation—voice or email—where you only realise later you were speaking to an AI. This lack of upfront awareness can lead to a poor brand outcome, and our surveys show this is already manifesting. Given that 90% of consumers fail to correctly identify AI voice clips, the reality is most customers cannot reliably distinguish between human and AI agents anymore.

When the majority of customers can’t tell who they are speaking to, it risks a negative consequence if the experience doesn’t meet expectations. Even positive interactions can lead to awkwardness. The potential for poor outcomes, or simply consumer discomfort, is a critical consideration.

MA: If an organisation fails to inform the consumer that they are interacting with an AI, and the consumer subsequently realises the deception themselves, what is the resulting impact on trust in the organisation, and what are the consequential issues for the brand?

NK: I think the key word is trust. As we’ve discussed, trust is the bedrock of customer experience (CX). If you are a CX professional, a marketing leader, or a customer service leader, every action must be aimed at reinforcing trust.

If a brand promises transparency but fails to declare that a customer is dealing with an AI, that action will surely erode trust.

Many technology leaders and vendors are starting to get ahead of this topic. The opportunity for brands—whether they are FSI, airlines, or any other industry—is to lead the way now before regulation potentially catches up, which I think is highly likely. The question for brands is: Do we want to be proactive in addressing this?

MA: You mentioned previously that AI transparency will likely become an enforceable consumer right. What specific form do you expect this to take? Do you foresee a particular regulatory framework emerging, and which regulatory bodies or agencies would likely be involved in enforcing it?

NK: I suspect we will see governments respond by taking a regulatory approach to AI. They will start placing a requirement on businesses to be more open and transparent about declaring when an AI is in play. In my opinion, this will definitely happen, especially concerning the financial services industry and the public sector.

I anticipate policies similar to what we saw with GDPR in Europe starting to manifest. That’s why I believe it behoves brands to start thinking about this now, rather than just reacting later. It would be beneficial for larger global, regional, and national brands to get ahead of this, proactively helping to shape what that regulation might look like so they are not caught on the back foot.

MA: For organisations currently deploying conversational AI, generative AI, or agentic AI, what are the immediate legal and reputational risks they face right now if they fail to disclose to consumers that they are interacting with an AI?

NK: The risk, drawing from the data points we discussed, is an increase in negative sentiment. Consumers will ultimately start considering alternative brands if they feel they aren’t being treated transparently or given options. If a negative experience occurs, a customer’s response will likely be to seek out competitors who are more open and transparent about how they use data and the tools they employ to engage. I believe this will manifest in lower customer retention and potentially allow competitors who embrace transparency to gain an advantage.

MA: Considering that organisations consistently identify data challenges and skills development as the primary obstacles in their AI experimentation and implementation journeys, what specific, actionable strategies can leaders employ to rapidly overcome these two biggest hurdles?

NK: Most businesses are set up in a very ‘stovepipe’ way, which exacerbates the issue you touched on—and one they absolutely need to crack—which is data. Data must be at the heart of this. A clear AI data strategy needs to be shaped and developed that flows horizontally across the organisation.

Only then can you work back into the technology, identifying use cases that yield the best immediate outcome, preferably tied to a top- or bottom-line result.

For example, looking at how Twilio approached this internally, we started this journey two years ago with our self-serve business. We examined how we could integrate AI agents into the engagement experience for the developers signing up on our platform.

If you sign up on our platform, we have an AI agent named ISA. She will reach out to you and help guide you through the process, ensuring you have the necessary documentation to build and deploy a messaging capability. It’s a back-and-forth correspondence between ISA and the developer. This has had a material impact on our ability to scale and support the tens of thousands of sign-ups we see every quarter in this region alone.

The success of ISA was achieved by:

  1. Starting with the problem: Clearly defining the customer problem we needed to solve.
  2. Working back: Utilising both predictive and generative AI capabilities to address that problem.
  3. Prioritising transparency: We learned early on to make it explicit that ISA is an AI agent, reinforcing trust with our users.

For organisations whose initial AI experiments delivered mixed results, the key learning is to refocus on defining the problem and assessing organisational readiness for AI. Organisations need to approach AI through an organisational lens first, rather than taking a technology-first approach.

MA: Considering the necessity of shifting from a technology-first approach to a problem-first, organisational approach, what specific criteria or characteristics define an organisation as truly “AI ready” to successfully deploy and scale conversational, generative, or agentic AI solutions?

NK: Old process plus new technology equals expensive old process. However, you can avoid that trap. I think that nine times out of ten, people are treating a symptom versus the actual problem. There’s a big difference.

Defining the problem statement is a really critical component, and it can’t be done in isolation. Since most processes cut across the organisation, you need to define that problem in partnership with everyone involved in the delivery. For example, from a CX perspective, marketing, sales, and customer service need to work together. And don’t forget logistics—if you are delivering physical goods, they are a critical component of CX that is often overlooked.

I believe AI, if set up correctly, will significantly empower individuals. Your ability to predict changes in consumer behaviour across different channels will manifest more quickly, allowing you to anticipate potential customer churn.

This empowerment is achieved by enabling humans to interact with consumers more effectively. For instance, a core focus for us is helping agents using our Flex tool leverage a unified data profile. This allows them to anticipate that a customer is calling because of a negative experience already identified through previous interactions. AI can also score the sentiment of a conversation in real time, highlighting customer frustration. This data can then be used by quality assurance teams to identify and address where an agent might not be delivering the required experience.
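To make the pattern Nick describes concrete, here is a minimal sketch of combining a unified customer profile with live sentiment scoring to flag frustrated conversations for QA review. The class names and the keyword-based scorer are purely illustrative assumptions for this article, not Twilio’s Flex API.

```python
# Illustrative sketch only: hypothetical names, not Twilio's Flex API.
from dataclasses import dataclass, field

@dataclass
class CustomerProfile:
    """A unified profile stitched together from previous interactions."""
    customer_id: str
    recent_negative_experience: bool = False

@dataclass
class ConversationMonitor:
    """Scores each utterance and flags frustrated conversations for QA review."""
    frustration_threshold: float = -0.5
    flagged_for_qa: list = field(default_factory=list)

    def score_sentiment(self, utterance: str) -> float:
        # Placeholder: a real system would call a sentiment model here.
        negative_words = {"frustrated", "angry", "useless", "cancel", "waiting"}
        hits = sum(word in utterance.lower() for word in negative_words)
        return -min(1.0, hits / 2)

    def handle_utterance(self, profile: CustomerProfile, utterance: str) -> None:
        score = self.score_sentiment(utterance)
        # Surface context to the human agent: prior negative experience plus live sentiment.
        if profile.recent_negative_experience or score <= self.frustration_threshold:
            self.flagged_for_qa.append((profile.customer_id, utterance, score))

# Usage: a customer with a known poor prior experience starts venting.
monitor = ConversationMonitor()
profile = CustomerProfile(customer_id="cust-42", recent_negative_experience=True)
monitor.handle_utterance(profile, "I've been waiting a week and I'm really frustrated.")
print(monitor.flagged_for_qa)
```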

This is where AI will drive far more possibilities to improve customer experience than ever before. However, you must still get back to the root: understanding the root cause of the problem you are trying to solve and involving the right people to drive that change.

Crucially, transparency will remain at the heart of this. This applies not only to customers, but equally to employees. Being transparent with your employees about how AI will be used to support them in delivering on the brand promises is equally important.

Mark Atterby

Mark Atterby has 18 years of media, publishing and content marketing experience.
