A Spectrum of Agency
As AI systems grow more sophisticated, we will need better ways to gauge their potential impact. I'd like to present an idea that bears a passing resemblance to a Kardashev scale: grading an AI by its inherent agency, that is to say its ability and likelihood to influence the world around it.
Third Party
At the lowest level, we have AIs that function like advanced encyclopedias, retrieving information but unable to act on it directly. Like current large language models, they can answer questions, summarize texts, or explain complex topics. At this level their influence is limited purely to informing human decision-making. As an example, an AI might provide detailed information about climate change, but it could only pass that information back to the user.
Second Party
On the second level we have AIs that assist with tasks, much like a digital assistant. These agents can take more complex actions to interact more directly with the user, such as performing data analysis or writing basic programs. They indirectly enable their users by augmenting their capabilities. An AI at this level might, for example, help to optimise a company's supply chain by analysing its data and surfacing improvements that the user could then act upon.
First Party
The next tier includes AIs that are capable of taking direct action on our behalf, such as booking appointments or making purchases. These systems have the authority to interact with the world directly for a user.
An AI at this level could, for example, order groceries or schedule home maintenance without constant human oversight. In a business context, it could autonomously handle customer service interactions or manage certain aspects of a sales pipeline.
Pure Agents
At the highest level of this scale are AIs that could set and pursue their own goals independently. These systems would have the capacity to define their own objectives and the ability to take action to achieve them. While largely speculative at this point, a powerful AI of this kind could decide to optimise global resource allocation for paperclip production, or pursue scientific discoveries based on its own criteria of importance.
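One way to make the ordering of these four tiers concrete is to treat them as an ordered enumeration. This is just a minimal sketch of the idea; the names, numeric values, and the "can act directly" threshold are my own illustrative choices, not part of any standard.

```python
from enum import IntEnum

class AgencyLevel(IntEnum):
    """The four tiers of the proposed agency scale, ordered
    from least to most agentic. Values are illustrative."""
    THIRD_PARTY = 1   # informs: retrieves and explains, cannot act
    SECOND_PARTY = 2  # assists: analyses and drafts for the user
    FIRST_PARTY = 3   # acts: takes direct action on the user's behalf
    PURE_AGENT = 4    # self-directed: sets and pursues its own goals

def can_act_directly(level: AgencyLevel) -> bool:
    """By this sketch's convention, an AI can affect the world
    without a human relay from the first-party tier upward."""
    return level >= AgencyLevel.FIRST_PARTY
```

Encoding the scale as an `IntEnum` rather than a plain `Enum` makes the tiers comparable, which matches the intent of a spectrum: a first-party system subsumes the informational and assistive capabilities of the tiers below it.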
Conclusion
By providing common terms for discussing AI capabilities and risks, we can enable more nuanced and productive dialogues about the future we are creating. This concept is a step toward preparing for the ethical and practical challenges that are coming our way.
Finally, in all likelihood I could have read this whole concept somewhere before and simply forgotten; if that is the case, please do me a huge favour and remind me so I can amend this post.