Cohere CEO Gomez: AI Deployment Requires Human Oversight and Ethical Considerations
Cohere CEO Aidan Gomez emphasized that in certain key areas, AI will always need "humans in the loop" for supervision and can never be fully automated. He argued that both the public and private sectors should focus on deploying AI broadly in fields such as healthcare and finance, rather than concentrating on defense alone.
Gomez believes these applications carry significant ethical stakes and direct consequences for people's livelihoods, which is why Cohere prioritizes working with democratic countries, where institutions can self-correct when violations occur. The remarks, from a co-author of the Transformer paper, come in response to the controversy between Anthropic and the Department of Defense.
Source: Public Information
ABAB AI Insight
The "human in the loop" principle is fundamentally about managing the tail risks of AI decisions and keeping them accountable. Full automation in domains of high uncertainty and high consequence invites systemic failure, whereas human oversight provides a final layer of judgment and accountability, balancing technical capability against institutional constraint.
From a deployment perspective, this reflects AI's shift from a "race for technology" to a "race for governance." Ethical controversies in the defense sector have triggered regulatory backlash, while the impacts in healthcare, finance, and other livelihood-critical areas are more immediate, so errors there amplify crises of social trust. Prioritizing democratic countries amounts to treating a rule-of-law environment as a criterion for market access.
This also exposes a structural trend of global AI divergence: countries with strong technological capabilities may face stricter ethical and oversight constraints, while loosely regulated markets become gray zones for experimentation. Over the long run, this divergence will shape the maturity and reliability of the AI ecosystem.
In essence, Gomez's logic is that technical boundaries are no longer the bottleneck; governance boundaries are what matter. Whoever first builds a reliable framework for human-machine collaboration and ethics will hold the dominant position in sustainable AI applications.