Codex Founder Tibo Calls for Feature Requests Submitted as Images Generated with Images 2.0
Codex founder Tibo stated that users can submit feature requests in the form of images generated by Images 2.0, making it easier for Codex to understand and implement them.
He has seen several high-quality image-based requests, and the Codex team is following up with development work.
Developers are submitting requirements through AI-generated images, shifting feedback from traditional text submissions to visual Agent interactions. This benefits Codex and Images 2.0 users, while traditional text-based feedback tools face pressure from a more efficient replacement.
Source: Public Information
ABAB AI Insight
As the founder of Codex, Tibo has previously positioned the product as a powerful AI Agent coding platform. His public encouragement to use Images 2.0 for feature requests continues his product philosophy that visual language is the optimal human-computer interface. Codex already supports multimodal input and is rapidly iterating its Agent capabilities.
By guiding users to describe requirements with AI-generated images, Codex reduces textual ambiguity and accelerates the team's understanding of user intent. This also channels traffic and usage of Images 2.0 into its ecosystem, forming a closed-loop conversion of 'image generation → feature realization → higher subscription stickiness' and significantly accelerating product iteration.
Much as Cursor accelerates development with visual prototypes and Magic Patterns enables instant UI cloning, Codex is in the early phase of shifting AI coding Agents from text-driven interaction to a dual-modal 'image + Agent' interaction, with several high-quality visual requests already in the development pipeline.
Essentially, this represents a technological replacement: the traditional feedback mechanism built on text and issue lists is being displaced by AI-generated images. Tibo positions visual prompts as a more efficient interface that lets Codex translate user intent into executable code more easily, rebuilding product iteration from a 'text communication' workflow into a 'visual-Agent autonomous realization' closed loop.