Thomson Reuters report on the future of professionals shows cautious optimism regarding AI in the legal field


It has become a common refrain that generative AI can handle simpler tasks but struggles with more difficult ones. If so, how much time does it actually save, and how much does it improve performance at work?

Thomson Reuters, a professional services and technology company in law, tax, compliance and more, explored how professionals are using AI in its Future of Professionals 2024 report. In an exclusive interview timed to the report’s publication, we spoke with Thomson Reuters Chief Product Officer David Wong about generative AI in the workplace.

Thomson Reuters surveyed 2,205 professionals in the legal, tax, risk and compliance industries around the world. The report did not single out generative AI when it asked about AI, but the capabilities it discusses are often associated with generative AI. In the conversation with Wong, we used AI as a general term for generative models that can produce images or text.

The percentage of professionals who believe AI will be “transformative” rose 10 percentage points

The report was largely optimistic about AI, chiefly because of its potential to save time. Additionally, 77% of respondents said they believe AI will have a “major or transformative impact on their work over the next five years,” a figure that was up 10 percentage points from last year’s report.

“I was a little surprised that the strategic relevance increased, because you would have thought last year when ChatGPT and GPT-4 came on the scene that the hype cycle would be in full swing and people would be very excited,” Wong said.

However, interest in the strategic relevance of AI spread from law firms to almost all sectors served by Thomson Reuters. Therefore, Wong said, the higher numbers could reflect broader interest across all sectors, rather than heightened interest from law firms.

The divide between the very cautious and the very ambitious on AI

Wong noted an interesting divide between the cautious and the ambitious when it comes to generative AI. In the report, this shows up in a question Thomson Reuters asked: “In one year, three years, and five years, roughly what percentage of the work your team currently produces do you think will be [performed by either humans or AI].” The survey provided four possible responses (a spectrum from AI-led to human-led work) to determine whether professionals were cautious or ambitious about using the technology for work.

Thomson Reuters found that 13% of professionals fell into the “cautious” category, expecting that only a small share of work will be done by AI assistants even five years from now. At the other extreme, 19% of professionals fell into the “ambitious” category, predicting that AI will do a large portion of their work within five years.

“Many practitioners have woken up to the practical implications and reality of much of the technology,” Wong said. “And, based on the experimentation that has taken place over the last 12 months, we are starting to see how those practitioners translate experimentation into implementation.”

What tasks can’t AI perform?

Expectations for generative AI ran very high in 2023 but are likely to fall before stabilizing, according to Gartner.

For legal professionals and other jobs covered in the Thomson Reuters report, “AI solutions are extremely good at any type of task where you can provide, frankly, a pretty good set of instructions,” Wong said.

That type of work includes research, document summarization or “researching high-level concepts that do not require specific legal citations,” as one respondent put it in the report.

What AI can’t do is make decisions on its own. AI companies want it to get there eventually; indeed, autonomously carrying out actions on behalf of a user is level 3 of 5 in OpenAI’s new AI capabilities ranking. But AI isn’t there yet, and Wong noted that for the industries Thomson Reuters serves, this is as much a question of the technology’s capabilities as of people’s trust in it.

SEE: A modern enterprise data organization needs the right people to thrive.

“I think AI hasn’t really been able to get to a point, at least in terms of trust, to be able to make decisions on its own,” Wong said.

In many cases, Wong said, AI “doesn’t work as well as human reviewers, except on the simplest things.”

According to the report, 83% of legal professionals, 43% of risk, fraud and compliance professionals and 35% of tax professionals think that “using AI to provide strategic advice or recommendations” is “ethically… a step too far.”

The majority of respondents (95% in the legal and tax fields) think that “allowing AI to represent clients in court or make final decisions on complex legal, tax, risk, fraud and compliance issues” would be “a step too far.”

“If you ask the question, ‘How likely is it that an AI will make the right decision or make it as good as a human decision?’ I think the answer might be different than ‘is it ethical?’” Wong said.

Will everyone have an AI assistant in five years?

Despite these doubts, Thomson Reuters made a bold claim in the report: “Every professional will have a genAI assistant within five years.” That assistant will function like a human team member and perform complex tasks, they predicted.

Wong noted that part of the optimism comes down to sheer numbers: the number of companies offering AI products has skyrocketed over the past two years, and now includes the smartphone giants.

“Almost everyone with an iPhone 15 or later and iOS 18 is going to have an AI system in their pocket,” Wong said. “And I’m sure in a couple more years, in every new version and on every Apple device, you’ll be able to access that assistant. Microsoft has also been rolling out Copilot very aggressively. I think in a few years, it will be pretty hard to have a version of Microsoft 365 that doesn’t have Copilot.”

SEE: Learn everything you need to know about Microsoft Copilot with TechRepublic’s cheat sheet.

In addition to using AI to create, analyze or summarize content, organizations are considering how AI could change their products or production processes. According to the report, the majority of senior management respondents believe AI most strongly influences their operational strategy (59%) and product/service strategy (53%).

“I think that’s what virtually every company is considering right now: a company’s operations have a lot of routine, repetitive work that could be described with an instruction manual,” Wong said.

These routine tasks are ideal for AI. In the legal field, he said, AI could change the process for companies to file regulatory or statutory documents.

What responsible, “professional-grade” AI looks like

Respondents had diverse ideas about what constitutes responsible use of AI at work. Many considered data security to be a key aspect. Others valued:

  • Data security at the request or query stage.
  • Mandatory review of results by a human professional.
  • Care in choosing which tasks AI-powered technologies are used for.
  • Transparency about the data sources from which responses are drawn.

“If someone says that [generative AI] does something perfectly, without hallucinations, without errors, then either they are deluded or the claim should be scrutinized,” Wong said. “But you want transparency in the performance.”

Responsible AI systems used in professional settings should be grounded in validated content, and they should be measurable and referenceable, he said. They should also be built with security, reliability and confidentiality in mind.

ChatGPT is “the worst example of a generative AI solution for professionals because it doesn’t meet those needs,” Wong said. “But in fact, you can design a privacy-safe and confidential ChatGPT that doesn’t train on data. Those are system design decisions. They’re not inherent to AI.”
