Deloitte and SAP weigh in


Whether you’re creating or customizing an AI policy, or re-evaluating how your company approaches trust, maintaining customer trust has become harder because of the unpredictability of generative AI. We spoke to Michael Bondar, principal and enterprise trust leader at Deloitte, and Shardul Vikram, chief technology officer and head of data and AI at SAP Industries and CX, about how companies can maintain trust in the age of AI.

Organizations benefit from trust

First, Bondar noted that each organization must define trust in terms of its own needs and customers. Deloitte offers tools for doing so, such as the “trusted domain” system found in some of its downloadable frameworks.

Organizations want their customers to trust them, but people involved in conversations about trust often hesitate when asked what exactly trust means, he said. Trusted companies show stronger financial results, better stock performance and greater customer loyalty, Deloitte found.

“And we have seen that almost 80% of employees feel motivated to work for a trusted employer,” Bondar said.

Vikram defined trust as the belief that an organization will act in its customers’ best interests.

When thinking about trust, customers will ask themselves, “What is the uptime of those services?” Vikram said. “Are those services secure? Can I trust that particular partner to keep my data safe and ensure that they comply with local and global regulations?”

Deloitte found that trust “starts with a combination of competence and intent — that is, the organization being capable and reliable in delivering on its promises,” Bondar said. “But also the rationale, the motivation, the why behind those actions is aligned with the values and expectations of the various stakeholders, and humanity and transparency are embedded in those actions.”

Why might organizations struggle to improve trust? Bondar attributed the difficulty to “geopolitical unrest”, “socio-economic pressures” and “apprehension” around new technologies.

Generative AI can erode trust if customers are not informed about its use

Generative AI is top of mind among those new technologies. If it is to be used, it needs to be robust and reliable so it does not diminish trust, Bondar said.

“Privacy is essential,” he said. “Consumer privacy must be respected and customer data must be used only for its intended purposes.”

This includes every step of AI use, from initial data collection to training large language models to allowing consumers to opt out of having AI use their data in any way.
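
As a concrete illustration of that last step, here is a minimal, hypothetical sketch in Python of honoring opt-outs before training. The record layout and consent flag are invented for illustration, not any vendor’s schema.

```python
# Hypothetical sketch: filter records by a consent flag so opted-out
# customers never reach the training set. Field names are invented.

records = [
    {"customer_id": 101, "feedback": "Great service", "ai_training_consent": True},
    {"customer_id": 102, "feedback": "Late delivery", "ai_training_consent": False},
    {"customer_id": 103, "feedback": "Love the app", "ai_training_consent": True},
]

# Keep only rows where the customer explicitly allowed AI training use.
training_set = [r for r in records if r["ai_training_consent"]]

print([r["customer_id"] for r in training_set])  # [101, 103]; customer 102 opted out
```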

In fact, training generative AI and seeing where it fails could be a good time to eliminate outdated or irrelevant data, Vikram said.

SEE: Microsoft delayed the launch of its Recall AI feature in search of more community feedback

He suggested the following methods to maintain trust with customers when adopting AI:

  • Train employees on how to use AI safely. Focus on simulation exercises and media literacy, and consider your organization’s own notions of data trustworthiness.
  • Request data consent and/or intellectual property compliance when developing or working with a generative AI model.
  • Watermark AI content and train employees to recognize AI metadata when possible (a minimal sketch of one approach follows this list).
  • Provide a complete view of your AI models and capabilities, being transparent about the ways you use AI.
  • Create a trust center. A trust center is a “digital and visual connection layer between an organization and its customers, where you teach and share the latest threats, the latest practices and the latest use cases that are emerging. We’ve seen that work wonders when done the right way,” Bondar said.
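
On the watermarking point above, here is one hedged way that could look in code: wrapping generated text with provenance metadata and a signature so downstream tools can recognize and verify AI content. The function names and key handling are assumptions in the spirit of signed content credentials (as in C2PA), not any specific product’s API.

```python
# Minimal sketch of tagging generated content with signed provenance
# metadata. Names (tag_ai_content, verify_ai_content, SECRET_KEY) are
# illustrative only.

import hashlib
import hmac
import json
from datetime import datetime, timezone

SECRET_KEY = b"replace-with-a-managed-signing-key"  # assumption: a real key would come from a KMS

def tag_ai_content(text: str, model: str) -> dict:
    """Wrap generated text with metadata and an HMAC so tampering is detectable."""
    record = {
        "text": text,
        "metadata": {
            "generator": model,
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "ai_generated": True,
        },
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_ai_content(record: dict) -> bool:
    """Recompute the HMAC over text + metadata and compare signatures."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

tagged = tag_ai_content("Draft product description...", model="example-llm")
print(verify_ai_content(tagged))  # True
tagged["text"] = "Edited after the fact"
print(verify_ai_content(tagged))  # False: the signature no longer matches
```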

CRM companies are likely already complying with regulations such as the California Privacy Rights Act, the European Union’s General Data Protection Regulation and the SEC’s cyber disclosure rules, which may also affect how they use customer data and AI.

How SAP builds trust in generative AI products

“At SAP, we have our DevOps team, infrastructure teams, security team, and compliance team deeply embedded within each and every product team,” Vikram said. “This ensures that every time we make a product decision, every time we make an architectural decision, we think about trust as something from day one and not as an afterthought.”

SAP operationalizes trust through these connections between teams, as well as by creating and following its ethics policy.

“We have a policy that we cannot ship anything unless it is approved by the ethics committee,” Vikram said. “It is approved by those responsible for quality… It is approved by those responsible for security. So this actually adds a layer of process to the operational issues, and bringing the two together helps us operationalize trust or reinforce it.”

When SAP launches its own generative AI products, those same policies apply.

SAP has launched several generative AI products, including CX AI Toolkit for CRM, which can write and rewrite content, automate some tasks and analyze business data. CX AI Toolkit will always show its sources when you ask it for information, Vikram said; this is one of the ways SAP is trying to gain the trust of its customers who use AI products.
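
To make that source-showing behavior concrete, here is a minimal, hypothetical sketch of an answer object that carries its citations. The retrieval is a toy keyword match, and nothing here reflects SAP’s actual implementation of CX AI Toolkit.

```python
# Illustrative only: every answer ships with the documents it drew from.
# The corpus, matching logic and names are invented for this sketch.

from dataclasses import dataclass, field

@dataclass
class SourcedAnswer:
    text: str
    sources: list[str] = field(default_factory=list)

def answer_with_sources(question: str, corpus: dict[str, str]) -> SourcedAnswer:
    """Toy keyword retrieval; a real system would call a retriever + LLM."""
    terms = [w.strip("?.,!").lower() for w in question.split() if len(w) > 3]
    hits = [doc_id for doc_id, body in corpus.items()
            if any(t in body.lower() for t in terms)]
    if not hits:
        return SourcedAnswer(text="No supporting documents found.")
    summary = " / ".join(corpus[doc_id] for doc_id in hits)  # stand-in for generation
    return SourcedAnswer(text=summary, sources=hits)

corpus = {
    "crm-note-12": "Q2 churn dropped after the loyalty campaign.",
    "crm-note-48": "Top complaint category this quarter: delivery delays.",
}
result = answer_with_sources("What happened to churn?", corpus)
print(result.text)
print("Sources:", result.sources)  # the answer always names its documents
```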

How to reliably incorporate generative AI into the organization

Generally speaking, companies need to build generative AI and trustworthiness into their KPIs.

“With AI in the landscape, and especially with generative AI, there are additional KPIs or metrics that clients are looking for, which are like: How do we build trust, transparency, and auditability into the results that we get from the generative AI system?” Vikram said. “The systems, by default or by definition, are highly nondeterministic.

“And now, in order to use those particular capabilities in my business applications, in my revenue centers, I need to have a basic level of trust. At least, what are we doing to minimize hallucinations or provide adequate information?”
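
A hedged sketch of what such trust KPIs could look like in code follows. The audit fields and metric definitions are assumptions for illustration, not an SAP or Deloitte metric standard.

```python
# Illustrative sketch: log every generative response with audit fields,
# then report how often answers were grounded and how often reviewers
# rejected them. All names and thresholds are invented.

from dataclasses import dataclass

@dataclass
class ResponseAudit:
    response_id: str
    model: str
    cited_sources: int    # how many sources the answer surfaced
    human_flagged: bool   # reviewer marked the answer as wrong or hallucinated

def trust_kpis(audits: list[ResponseAudit]) -> dict[str, float]:
    """Two illustrative KPIs: citation coverage and flagged-error rate."""
    total = len(audits)
    if total == 0:
        return {"citation_coverage": 0.0, "flagged_rate": 0.0}
    cited = sum(1 for a in audits if a.cited_sources > 0)
    flagged = sum(1 for a in audits if a.human_flagged)
    return {
        "citation_coverage": cited / total,  # share of answers with at least one source
        "flagged_rate": flagged / total,     # share of answers reviewers rejected
    }

log = [
    ResponseAudit("r1", "example-llm", cited_sources=2, human_flagged=False),
    ResponseAudit("r2", "example-llm", cited_sources=0, human_flagged=True),
    ResponseAudit("r3", "example-llm", cited_sources=1, human_flagged=False),
]
print(trust_kpis(log))  # e.g. {'citation_coverage': 0.67, 'flagged_rate': 0.33}
```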

Decision-makers in senior positions are eager to try AI, Vikram said, but they want to start with a few specific use cases at a time. The speed at which new AI products appear can conflict with that measured approach. Concerns about hallucinations or low-quality content are common: generative AI used for legal tasks, for example, has shown “widespread” errors.

But organizations want to try AI, Vikram said. “I’ve been building AI applications for the last 15 years, and it’s never been like this. There has never been this growing appetite, and not just an appetite to know more but to do more with it.”
