
Ant Group moves to claim the AI super entry point: its all-modal general AI assistant "Lingguang" officially launches, generating applications in as little as 30 seconds

On Tuesday, Ant Group officially launched its all-modal general AI assistant "Lingguang," which can generate small applications from a natural-language prompt on mobile devices in as little as 30 seconds; the resulting applications are editable, interactive, and shareable.
Lingguang is billed as the industry's first AI assistant capable of generating multimodal content with full code. Its first batch of features includes "Lingguang Dialogue," "Lingguang Flash Application," and "Lingguang Open Eye," which support output across all modalities, including 3D, audio, video, charts, animations, and maps, making conversations more vivid and communication more efficient. Lingguang is now available on both the Android and Apple app stores.
"Lingguang Dialogue" breaks away from the traditional text Q&A model: rather than piling up words, it treats each conversation like a curated exhibit. Structured thinking lets the AI respond with clear logic and concise expression; generated visual content such as dynamic 3D models, interactive maps, and audio and video makes the presentation more vivid; and high-quality information organization helps users grasp new knowledge in seconds.
With "Lingguang Flash Application," a user can speak or type a single sentence during a conversation, and Lingguang will generate an AI application within one minute, and in as little as 30 seconds. Whether it is a fitness-plan tool or a travel planner, the feature supports one-sentence generation, parameter customization, and immediate sharing.
"Lingguang Open Eye" is equipped with what Ant Group calls AGI camera technology, which observes and interprets the physical world through real-time video-stream analysis. It also supports a range of creative modes, including text-to-image/video and image-to-image/video generation.

