Update: A very interesting revelation in Nvidia's plan: they don't see OpenAI as the only relevant player here, which bodes well for all chipmakers. Their view is that you need AI, and your AI is going to be built on Nvidia. OpenAI is a line item pushing the current wave of AI, not something they see as a requisite for their own growth and capabilities. This is probably why Sam Altman of OpenAI is actively going to the chip space for deals.
All in all, CHIPS are the most important thing to all of this AI. The strategy is to accelerate compute, drive down costs, increase throughput, and expand the reach of generative AI.
Update: GRACE HOPPER is 35,000 parts, WOW. Weighs 70 lbs and everything about it is super complex, even installing it. You need a supercomputer just to test it.
Update: Jensen talks RAG and how Grace Hopper is going to be vital for building inference applications for generative AI. Vector stores and search DBs effectively run better on it, and they are seeing super strong demand.
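For context on the RAG mention: retrieval-augmented generation pairs an LLM with a vector/search DB — documents get embedded, the query retrieves the nearest ones, and those get stuffed into the prompt. A minimal toy sketch of the retrieval step (the `embed` function here is a fake bag-of-words stand-in I made up for illustration, not a real embedding model):

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': bag-of-words counts (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k docs most similar to the query -- the 'R' in RAG."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Grace Hopper pairs a Grace CPU with a Hopper GPU.",
    "InfiniBand links nodes in an AI factory.",
    "GeForce NOW streams games from the cloud.",
]
context = retrieve("what connects nodes in an AI factory?", docs)
prompt = f"Answer using this context: {context[0]}"
print(prompt)
```

The production version swaps the toy `embed` for a GPU-accelerated embedding model and the sorted list for a vector database, which is the workload Jensen is saying runs well on Grace Hopper.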
Update: Jensen is talking about AI factories, which need to generate the inference tokens of AI, and they are seeing tremendous growth from chatbots and copilots. GPU-specialized CSPs are cropping up all over the world. Other countries need their own data and culture to create their own AIs — they're seeing this in India, Sweden, Japan, and France. Sovereign AI clouds are being built. Every country will have its own AI cloud. (Dictatorship for AI, lol?) Got it.
InfiniBand is profoundly important for AI factories. > InfiniBand is a channel-based fabric that facilitates high-speed communication between interconnected nodes.
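Why the fabric matters so much: in multi-node training, gradient all-reduce traffic scales with model size, so link bandwidth directly bounds step time. A back-of-the-envelope sketch (all numbers are illustrative assumptions, not Nvidia specs):

```python
def allreduce_seconds(model_params, bytes_per_param, link_gbps, nodes):
    """Ring all-reduce moves ~2*(n-1)/n of the gradient bytes over each link."""
    grad_bytes = model_params * bytes_per_param
    wire_bytes = 2 * (nodes - 1) / nodes * grad_bytes
    return wire_bytes / (link_gbps * 1e9 / 8)  # Gbit/s -> bytes/s

# Illustrative: 70B params, fp16 gradients, 8 nodes
slow = allreduce_seconds(70e9, 2, link_gbps=100, nodes=8)  # ~100 Gb Ethernet
fast = allreduce_seconds(70e9, 2, link_gbps=400, nodes=8)  # ~400 Gb InfiniBand NDR
print(f"{slow:.1f}s vs {fast:.1f}s per full gradient exchange")
```

Under this toy model a 4x-faster link cuts each full gradient exchange 4x, which is the kind of gap that makes the interconnect a first-class part of an "AI factory" rather than plumbing.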
We're at the beginning of this inflection point/transition.
Update: Demand and inferencing are strong. AI AI AI (who cares about AI now, see Adam). License requirements cover Hopper and Ampere 100- and 800-series parts. China, subject to license requirements, contributes 20–25% and will significantly decline in Q4, but that will be offset by other growth opportunities. We can still sell to China; at the highest performance levels the US requires licenses. The government has clear guidelines and they will comply with each regulatory category, including products that don't even need advance notice. They don't know if this will affect revenue. Many countries want their products. They want countries such as India to get local LLM instances so they can boost their sovereign AI inference architecture, and to fuel advancement in France and Europe. (France wants it bad, HAHAHA. LLMs don't work with parlez-vous français? Sike, JK.)
First quarter with the new Grace Hopper superchips, growing into a billion-dollar-plus product line. Grace Hopper is getting traction with many institutions, including in the UK and Switzerland.
Germany and pretty much everyone wants Grace Hopper superchips, delivering over 90 exaflops of performance,
and that will exceed 200 exaflops.
Inference is improving significantly with Chatbots and copilots. This is just the beginning.
TensorRT-LLM is achieving ~2x inference performance on Nvidia GPUs.
The latest member of the Hopper family, the H200 with HBM3e: faster and better for LLMs, boosting inference speed up to 2x.
TensorRT-LLM can reduce cost ~4x within a year as customers update their stack.
The H200 delivers an 18x performance increase, allowing customers to move to larger models.
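The cost logic behind those multipliers: cost per token is roughly (GPU-hour price) / (tokens per GPU-hour), so a ~2x throughput gain from software stacking with a further ~2x from newer hardware compounds toward the ~4x cost reduction cited above. A sketch with entirely made-up dollar figures and throughputs (not real prices or benchmarks):

```python
def cost_per_million_tokens(gpu_hour_usd, tokens_per_sec):
    """$/1M tokens = hourly GPU cost divided by hourly token throughput."""
    tokens_per_hour = tokens_per_sec * 3600
    return gpu_hour_usd / tokens_per_hour * 1e6

# Illustrative numbers only -- assumed, not from the call
base = cost_per_million_tokens(gpu_hour_usd=2.0, tokens_per_sec=1000)
with_trt = cost_per_million_tokens(2.0, 1000 * 2)       # ~2x from software
with_h200 = cost_per_million_tokens(2.0, 1000 * 2 * 2)  # further ~2x from hardware
print(base, with_trt, with_h200)  # each step halves $/token; ~4x overall
```

Same GPU rental price, doubled throughput, halved cost per token — which is why throughput multipliers translate directly into the inference-cost claims.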
Microsoft: they deepened and expanded their relationship with Microsoft. A custom AI foundry service on Azure for customers to build custom models on Azure cloud and DGX Cloud. SAP and Amdocs are the first customers.
LLMs are growing by orders of magnitude each year.
Networking accounts for a $10B annualized run rate.
Azure/MSFT uses 29K miles of InfiniBand cabling, enough to circle the globe.
Software and services seeing excellent adoption.
Annualized run rate of $1B.
DGX Cloud and Nvidia cloud software are growth opportunities.
DGX Cloud announcement with Genentech: the BioNeMo bio LLM framework for their drug discovery program.
Gaming:
$2.86B, up 30% YoY.
Gaming has doubled relative to pre-COVID levels.***
Ray tracing and AI have exploded; growth from both upgrades and new buyers.
RTX continues to grow with over 100 games and applications.
Generative AI is the new killer app. TensorRT for Windows speeds up inference 4x.
GeForce NOW cloud gaming has 1,000+ games; titles like Cyberpunk 2077 and Starfield run on it.
RTX is the platform of choice for design.
AI AI AI
running inference locally ***
3D virtual worlds. Mercedes uses this for factories.
I'm just realizing GPT could be doing this for me LOL
Automotive revenue up 4%
self driving platforms***
Nvidia DRIVE for next-gen driving automation tooling.
Foxconn things.
GAAP gross margin 74%, non-GAAP 75%, helped by lower cost of sales and net inventory reserve releases (release of previously written-down inventory).
Opex up, reflecting increased comp and benefits.
Q4 OUTLOOK
Revenue $20B, plus or minus 2%.
Gaming will likely decline as it is aligned with notebook seasonality (kids going back to school).
Closing. No mention of the OpenAI shitshow. (Actually there was an indirect reference: they don't care and nothing is stopping AI.)
Update: Call started 5:00. Stock ~$494.
Update: 4:49, call is starting live; elevator music. Stock ~$495. I am on live and will update key points.
4:21: Drops hard at initial release, despite a beat on the top and bottom line.
Q4 revenue guidance 20 Billion.
Data center revenue came in at $14.5B.
Stock is just static. Growth baked in?