The latest media appearances by Futurum Analysts.

If you’re deploying AI, your decisions about which compute to use shouldn’t hinge on which three-letter acronym you remember best. We’re breaking down how to think about compute for AI: what CPUs, GPUs, and newer accelerators are best at; where organizations get tripped up; and which constraints matter more than the chip itself.

In a staff memo, OpenAI Chief Revenue Officer Denise Dresser characterized demand for OpenAI’s integration with AWS as "staggering."

This paper explores two market trends and workload architectures that are driving increased demand for CPUs, relative to GPUs, in AI implementations.

Cursor released Composer 2 on March 19, and the pitch is less about being the best model on every benchmark and more about hitting a cost-to-intelligence ratio that changes how teams think about AI coding budgets.

GitHub shipped four secret scanning updates in March that collectively represent the most significant expansion of the platform’s credential detection capabilities in months.

A customer deploys AKS in a regulated environment, hits an issue during node bootstrapping, and wants to know exactly what happens when a node joins the cluster. The question sounds simple.

Planning a complex code change is hard enough. Reviewing it in a terminal window shouldn’t make it harder.