Google has unveiled its latest AI model, Gemini 3.0, described as its “most advanced model for complex tasks.” According to benchmark results, Gemini 3.0 surpasses competitors such as Claude Sonnet 4.5 and GPT-5.1 across multiple performance metrics.
Users can now leverage the model to tackle a wide array of sophisticated tasks, including game generation and reading historical handwritten texts. In Google’s AI Studio, the flagship Gemini 3 Pro is available as a high-performance, natively multimodal AI, capable of handling text, audio, images, video, and code repositories.
Google emphasizes that Gemini 3 is not a mere upgrade of previous models but a completely new architecture built on Mixture-of-Experts (MoE) technology. Its abilities include generating fully functional games, building interactive websites, creating 3D animations, and recognizing historical manuscripts.
Gemini 3 Pro significantly outpaces Gemini 2.5 Pro and other leading AI systems such as Claude Sonnet 4.5 and GPT-5.1, excelling in benchmarks like ARC-AGI-2, MMMU-Pro, CharXiv Reasoning, Video-MMMU, and Terminal-Bench 2.0.
In testing, Gemini 3.0 achieved 91.9% on the GPQA Diamond test and an Elo rating of 2439 on LiveCodeBench Pro, among the highest results for modern AI models.
Google also highlighted improvements in safety:
“Gemini 3 Pro exceeds Gemini 2.5 Pro in both safety and tone, while maintaining a low rate of unjustified refusals,” the company noted.
Despite its advances, Gemini 3 still has limitations, including possible hallucinations, slower response times, and vulnerability to jailbreak attacks.
The launch of Gemini 3.0 continues the rapid expansion of the Gemini ecosystem. In September 2025, the Gemini app reached #1 in the U.S. App Store, surpassing ChatGPT, boosted by the Nano Banana image editor. Downloads increased by 45% to 12.6 million.
In October, Google released Gemini Enterprise for corporate users, and by November, the AI model began integrating into Google Maps, enabling interactive AI features for navigation. Additionally, Gemini became available in Chrome for all U.S. users in September.