Anthropic's Generative AI Research Reveals More About How LLMs Affect Safety and Bias


Because large language models operate using neuron-like structures that can link many different concepts and modalities, it can be difficult for AI developers to tune their models to change their behavior. If you don't know which neurons connect which concepts, you won't know which neurons to change.

On May 21, Anthropic released a remarkably detailed map of the inner workings of the fine-tuned version of its Claude 3 Sonnet model. With this map, researchers can explore how neuron-like data points, called features, affect a generative AI's output, rather than only being able to see the output itself.

Some of these features are “safety-relevant,” meaning that if people reliably identify those features, it could help tune generative AI to avoid potentially dangerous topics or actions. Features are useful for fine-tuning classification, and classification could affect bias.

What did Anthropic discover?

Anthropic researchers extracted interpretable features from Claude 3 Sonnet, a current-generation large language model. Interpretable features translate the numbers the model actually works with into concepts humans can understand.

The same interpretable feature can apply to one concept across different languages and in both images and text.

Examining the features reveals which topics the LLM considers related to one another. Here, Anthropic shows a particular feature that activates on words and images connected to the Golden Gate Bridge. Image: Anthropic

“Our high-level goal in this work is to decompose the activations of a model (Claude 3 Sonnet) into more interpretable pieces,” the researchers wrote.

“One hope for interpretability is that it can be a kind of ‘safety test suite,’ allowing us to know whether models that appear safe in training will actually be safe in deployment,” they said.

SEE: Anthropic's Claude Team business plan includes an AI assistant for small and medium-sized businesses.

Features are produced by sparse autoencoders, a type of algorithm. During the AI training process, sparse autoencoders are guided by, among other things, scaling laws. So identifying features can give researchers insight into the rules governing which topics the AI associates with one another. To put it very simply, Anthropic used sparse autoencoders to reveal and analyze features.
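For readers who want a more concrete picture, the sketch below shows the general shape of a sparse autoencoder in PyTorch. It is illustrative only, not Anthropic's implementation: the class, the loss function, and the l1_coeff value are assumptions, and the core idea is simply to reconstruct a model's internal activations through a wider, mostly-zero feature layer.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Toy sparse autoencoder: encodes model activations into a larger,
    mostly-zero feature vector and decodes them back (illustrative only)."""

    def __init__(self, activation_dim: int, num_features: int):
        super().__init__()
        self.encoder = nn.Linear(activation_dim, num_features)
        self.decoder = nn.Linear(num_features, activation_dim)

    def forward(self, activations: torch.Tensor):
        # ReLU keeps feature activations non-negative; the L1 term in the
        # loss below pushes most of them toward exactly zero (sparsity).
        features = torch.relu(self.encoder(activations))
        reconstruction = self.decoder(features)
        return features, reconstruction


def sae_loss(activations, reconstruction, features, l1_coeff=1e-3):
    # Reconstruction error plus a sparsity penalty on the feature activations.
    mse = torch.mean((activations - reconstruction) ** 2)
    sparsity = l1_coeff * features.abs().mean()
    return mse + sparsity
```

In a setup like this, each entry of the feature vector becomes a candidate “feature” that researchers can inspect by checking which inputs activate it most strongly.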

“We found a diversity of very abstract features,” the researchers wrote. “They (the features) respond to and cause abstract behaviors.”

Details of the hypotheses Anthropic used to figure out what is going on under the hood of LLMs can be found in the company's research paper.

How feature manipulation affects bias and cybersecurity

Anthropic found three distinct features that could be relevant to cybersecurity: insecure code, code errors, and backdoors. These features can activate in conversations that do not involve insecure code; for example, the backdoor feature activates for conversations or images about “hidden cameras” and “jewelry with a hidden USB drive.” But Anthropic was able to experiment with “clamping” these specific features (in short, increasing or decreasing their intensity), which could help tune models to avoid or tactfully handle sensitive security topics.
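As a rough illustration of what clamping looks like in practice (again assuming the toy sparse autoencoder sketched above, not Anthropic's actual tooling), the idea is to pin one feature's activation to a chosen value before the features are decoded back into the model's activation space. The feature index and values below are made up.

```python
import torch

def clamp_feature(features: torch.Tensor, feature_idx: int, value: float) -> torch.Tensor:
    """Pin a single feature to a fixed activation value before decoding."""
    clamped = features.clone()
    clamped[..., feature_idx] = value
    return clamped

# Hypothetical usage with the toy autoencoder above: amplify one feature,
# e.g. to 20 times its observed maximum activation, then decode the
# steered features back into the model's activation space.
# features, _ = sae(activations)
# steered = clamp_feature(features, feature_idx=1234, value=20 * observed_max)
# patched_activations = sae.decoder(steered)
```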

Claude's bias or hate speech can be adjusted by clamping features, but Claude will resist some of its own statements. Anthropic researchers “found this response puzzling,” anthropomorphizing the model when Claude expressed “self-deprecation.” For example, Claude generated “That's just racist hate speech from a deplorable robot…” when researchers clamped a feature related to hate and name-calling to 20 times its maximum activation value.

Another feature the researchers examined is flattery; they could adjust the model so that it lavished exaggerated praise on whoever was conversing with it.

What does Anthropic research mean for companies?

Identifying some of the features an LLM uses to connect concepts could help fine-tune an AI to avoid biased speech, or to prevent or fix cases in which the AI could be made to lie to the user. Anthropic's greater understanding of why the LLM behaves the way it does could open up broader tuning options for the company's business customers.

SEE: 8 business AI trends, according to Stanford researchers

Anthropic plans to use some of this research to dig further into topics related to the safety of generative AI and LLMs overall, such as exploring which features activate, or stay inactive, if Claude is asked to give advice on producing weapons.

Another question Anthropic plans to address in the future: “Can we use the feature basis to detect when fine-tuning a model increases the likelihood of undesirable behaviors?”

TechRepublic has reached out to Anthropic for more information.
