
OpenAI sidesteps Nvidia with unusually fast coding model on plate-sized chips - Ars Technica

arstechnica.com·Feb 12, 2026

OpenAI has released its GPT-5.3-Codex-Spark model on Cerebras hardware, achieving coding speeds of over 1,000 tokens per second, roughly 15 times faster than its predecessor. The release marks a strategic shift away from reliance on Nvidia and underscores OpenAI's efforts to speed up AI coding agents while diversifying its hardware partnerships.

For AI professionals, the move illustrates how alternative accelerators can unlock substantial speed gains, and why exploring diverse hardware partnerships matters for optimizing model performance and reducing dependency on dominant chip suppliers like Nvidia.
