Researchers have demonstrated how to fine-tune AI models to stand in for complex physics simulations, making predictions faster and cheaper than running the calculations from scratch.
AI: Events
How AI Learns to Improve Its Own Code: An Experiment in Self-Optimization
Technical context • Research
AMD researchers have demonstrated how an AI agent can iteratively optimize high-performance code without human intervention.
How AMD Optimizes Recommendation Model Training: A Simple Guide to a Complex Task
Technical context • Infrastructure
AMD has shared its approach to simplifying the training of recommendation systems, the algorithms that select movies, products, and news for us, on its GPUs.
25x Inference Speedup: What's Happening with AI Performance on New NVIDIA Hardware
Infrastructure
The new NVIDIA GB300 NVL72 server, paired with the SGLang framework, has demonstrated a 25x performance boost when running language models.
AMD has released JAX-AITER, a library of pre-built, optimized computational blocks for developing large AI models on AMD GPUs using the JAX framework.
AMD and Artificial Intelligence: How the Company Is Catching Up to Market Leaders in Inference Performance
Infrastructure
AMD has shared its progress in supporting AI models on its GPUs: from basic compatibility to optimized performance comparable to that of its competitors.