Original Link: https://www.anandtech.com/show/2602
NVISION 08 - Jen-Hsun Talks Larrabee, Mobility, VIA and More
by Anand Shimpi & Larry Barber on August 26, 2008 3:00 PM EST - Posted in Trade Shows
The first keynote of NVIDIA's first NVISION was...disappointing. I'm sorry, I just couldn't get into it. Journalists lamented Craig Barrett's IDF keynote as being too touchy-feely, but the issues he was talking about were honestly more important than Larrabee or CUDA.
The Jen-Hsun keynote, on the other hand, just wasn't very good in my opinion. There were a couple of good demos - the Photosynth and multi-touch demos were great - but the rest was a total letdown. Thankfully, Jen-Hsun more than made up for it with the 30 minutes he spent with the press after the keynote.
Why NVISION?
Jen-Hsun started off the Q&A session by answering why he wanted to create NVISION. I'm not sure I totally get the point of this show; everyone at NVIDIA tells me that it's not intended for people like us, it's more consumer focused. But then at the keynotes (which are totally consumer focused), the presenters keep talking about the audience being developers and scientists...which it totally isn't.
Honestly, and I know NVIDIA would hate to hear this, the format needs to be more like IDF. Intel manages to have the perfect balance of interesting technology and demos that the uninformed could be entertained/informed by.
Jen-Hsun and the rest of NVIDIA view this as a convention for the visual computing market, which doesn't currently have a show of its own. The question is - does it need one? At this point I'm not sure I know the answer. But through a few meetings I was able to get some good information.
The Larrabee Question
Let's talk about Larrabee.
NVIDIA pointed out that Larrabee's x86 isn't binary compatible with other Intel x86 processors (since it doesn't support any of the SSE extensions) - so there's no advantage there.
Honestly, x86 today is a burden for Larrabee, not a boon, as it is not the most desirable ISA from anything other than a compatibility standpoint. The difference between G2xx and Larrabee is in the programming model, not the ISA. It's the threading model on G2xx that the developer complaints are really about.
NVIDIA says that it simply takes a new approach to development - focus on data in and data out, rather than conventional top-to-bottom function coding. The issue is that programmers don't like to change the way they work.
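To make that concrete, here's a minimal sketch of the "data in, data out" style (my own illustration, not NVIDIA sample code): instead of one function that loops over the data top to bottom, a CUDA kernel describes what happens to a single element, and you launch thousands of copies of it at once.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Conventional approach: one function walks the data top to bottom.
void saxpy_cpu(int n, float a, const float* x, float* y) {
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// CUDA approach: describe what happens to a single element ("data in,
// data out") and let the hardware run thousands of copies in parallel.
__global__ void saxpy_gpu(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;

    // Host-side input data.
    float* hx = new float[n];
    float* hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Data in: copy the inputs to the GPU.
    float *dx, *dy;
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dy, n * sizeof(float));
    cudaMemcpy(dx, hx, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch one thread per element, 256 threads per block.
    saxpy_gpu<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    // Data out: copy the result back.
    cudaMemcpy(hy, dy, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);  // 2*1 + 2 = 4

    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}
```

The work itself is trivial here; the point is the shift from "write the loop" to "describe one element's transformation and hand the parallelism to the hardware," which is the change in mindset NVIDIA is asking developers to make.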
The real question is: when Larrabee ships, will its threaded programming model be significantly easier than G2xx's? At this point it's simply too early to tell. Intel thinks it will be, and many of the developers I've spoken to agree, but NVIDIA keeps arguing that Larrabee's programming model will be just as different as CUDA's, and that NVIDIA has the inherent advantage here because of the experience it has gained building GPUs for the past 15 years.
Would NVIDIA Integrate a CPU?
David Kirk summarized, quite well, his thoughts on whether NVIDIA would ever pursue putting a CPU on die next to one of its GPUs.
Kirk's view is that at the low end there's a place for a single-chip CPU/GPU; he views integration (rightly so) as a low-cost play. "None of our customers ask us for less performance, why would we ever take away part of our GPU and put a CPU in it?"
NVIDIA currently competes in the low end of the GPU market with its sub-$75 GPUs and IGP chipsets. The integrated CPU/GPU does stand a chance of eating into NVIDIA's highest-volume market, and it doesn't look like NVIDIA stands much chance of competing there - at least in x86 desktops/notebooks. Why would you pay more for an NVIDIA chipset with integrated graphics if you already get integrated graphics on every single CPU you buy?
We've got a future where AMD/Intel ship these hybrid CPU/GPUs on the low end and GPUs like the RV770 and Larrabee at the high end, and NVIDIA is already being pushed out on the chipset side (neither Intel nor AMD wants to be the #2 manufacturer of chipsets for its own CPUs). In the worst-case scenario, if NVIDIA gets squeezed by everything I just mentioned over the next few years, what is NVIDIA's strategy going forward? Jen-Hsun actually highlighted one possible direction...
NVIDIA's Mobile Strategy: "Completely Focus on Smartphones"
Out of nowhere, Jen-Hsun threw out quite possibly one of the most important statements I'd ever heard him make: that NVIDIA's mobile strategy is to "completely focus on smartphones".
Jen-Hsun went on to say NVIDIA believes the "phone will become the next personal computer".
This highlights an important potential strategy for NVIDIA. If Intel/AMD push NVIDIA out of the conventional PC market with their GPUs/platforms/CPUs, NVIDIA needs to look elsewhere: the mobile phone market is an interesting outlet. Remember this slide from IDF?
By 2012 Intel estimates that it will be shipping ~200M desktop CPUs, ~360M notebook CPUs and around 200M Atom CPUs. The mobile CPU market is just as large as the mobile GPU market; the difference is that margins are much lower. But it's quite feasible that NVIDIA could see a good chunk of its revenue coming from that market, should it be forced out of the GPU market. It's worth noting, though, that the Intel/AMD/NVIDIA GPU war looks like it will take at least 3 - 5 years to play out.
NVIDIA's approach could also really hamper Intel's push to get x86 into mobile devices with Atom. NVIDIA would pair its GPU technology with existing CPU manufacturers/architectures (think: ARM), making Intel's competitors much better equipped to compete with Intel.
Jen-Hsun on VIA
One reporter asked Jen-Hsun what his thoughts were on VIA. NVIDIA's CEO responded by saying that VIA has a really good chip (the Nano), and that when paired with a GeForce GTX it can actually deliver a pretty high-end gaming experience. Jen-Hsun committed to supporting VIA in two ways:
"We're going to go and optimize all of our software for Nano"
"We're going to do compatibility testing for all of the Nano platforms"
I'm unsure how much driver optimization NVIDIA can do for Nano, but it is a big deal, since prior to this "announcement" NVIDIA had only optimized its drivers/software for AMD and Intel. Given that the VIA Nano install base is practically nonexistent right now, I'd be willing to bet that the cost of "optimizing" for Nano is pretty small.
Compatibility testing is arguably more important as it means that NVIDIA GPUs should work with whatever Nano platforms end up on the market. Since Intel seems to be interested in leaving Atom crippled on the desktop from a platform standpoint, NVIDIA's support could allow VIA some additional success with Nano desktop machines.
Jen-Hsun Talks about "Bad Chips"
You may have heard that NVIDIA has had some issues with higher than expected GPU failure rates in notebooks. The parts that seem to be impacted are G84/G86-based GPUs, and the failure appears to be related to the physical manufacturing of the GPU itself.
A reporter from The Inquirer asked Jen-Hsun to clarify which other GPUs might be affected by the manufacturing issue and, basically, to get more specific publicly about what users can expect.
Jen-Hsun said that he was the first to admit that this problem existed, having set aside $200M to deal with any potential repairs that had to be made. He characterized the issue as follows:
"We know that there are some failures that are associated with our chips. We know that its related to specific combinations of the chip, the deisgn of the notebook...depending on the design of the thermal solution...and all of the software that goes on top of it...sometimes it will fail. Most of the notebooks are fine...certain notebooks have this problem."
There isn't an official recall, but if your notebook fails and it's got a GPU in it that NVIDIA agrees may be problematic, your OEM should repair it for you.
The question Jen-Hsun didn't answer is which GPUs specifically this problem impacts, and whether or not it extends to the entire line of G8x GPUs, desktop and mobile.
Jen-Hsun did mention that the problem is very specific and can crop up over a long period of time. While NVIDIA's competitors are aware of what caused the problem, none of them appear to be impacted by it (even those that manufacture at the same facilities as NVIDIA); it just seems to be something that is exclusive to NVIDIA.
Who is to blame? According to Jen-Hsun, both NVIDIA and its OEM partners share responsibility for the issue, although he would not go so far as to blame TSMC, its manufacturing partner.
What about Lucid?
What does NVIDIA think about Lucid and Hydra? Well, some good things and some bad things. Of course, no one I asked gave me anywhere near an answer about how they thought SLI would be impacted if Lucid met its goals. And there's really no good way to dodge that question either; it always ends in a sort of trailing-off-into-other-things attempt to change the subject.
Some people were a little more willing to talk about the technology itself, even if they didn't go near how it would impact their platform position.
Jen-Hsun flat out said he thinks it is naive of Lucid to believe that their solution can divide the workload effectively and get linear scaling while taking only the API into account (i.e., that the application doesn't matter). He believes that you must tune for specific applications, and that even high compatibility is difficult to achieve, let alone multi-GPU scaling.
David Kirk had a similar take on things but went into a little more depth. His issue is that even if you can split up the workload, there are resources that every GPU working on a scene will need in order to render it properly. Things rendered to textures, cube maps, and other "funny buffer games" mean that rendering isn't cleanly separable. Some work simply has to happen on all GPUs, and that keeps performance from scaling linearly.
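To put a rough number on Kirk's argument (my own back-of-the-envelope math with made-up figures, not anything NVIDIA or Lucid provided): if some fraction of each frame - shadow maps, cube maps, render-to-texture passes - has to be produced on every GPU, that duplicated work caps the achievable speedup no matter how cleverly the rest is divided.

```cuda
// Hypothetical illustration: duplicated per-frame work limits multi-GPU scaling.
#include <cstdio>

int main() {
    const double shared = 0.25;  // assumed fraction of frame time every GPU must redo
    for (int gpus = 1; gpus <= 4; ++gpus) {
        // Splittable work divides across GPUs; the shared passes do not.
        double frame_time = shared + (1.0 - shared) / gpus;
        printf("%d GPU(s): %.2fx speedup\n", gpus, 1.0 / frame_time);
    }
    return 0;
}
```

With a quarter of the frame duplicated, two GPUs only buy you about 1.6x and four GPUs about 2.3x - exactly the kind of sub-linear behavior Kirk is describing, and the opposite of "linear scaling regardless of the game."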
Which makes sense. But the Lucid guys still keep saying they can get linear scaling regardless of the game on any platform that uses their hardware, which sounds nice even if it is a little far-fetched. They did show off two games at IDF, but we would really like to see more, and to have the opportunity to test these epic scaling claims ourselves.
It's tough to tell whether NVIDIA is just being cocky and writing off a company that it doesn't think can pull something off, or if the logic NVIDIA is laying out is really rock solid enough that it doesn't have to worry. Time will tell, of course.
Personally, I like to dream big. But it is easy to be skeptical about Lucid because their claims are so dramatic - like they say about anything that sounds too good to be true... But really, I still want someone to tell me what happens to SLI and the NVIDIA chipset business if Lucid's product really delivers on its promises.
Final Words
So far we haven't been too impressed with NVISION, but access to folks like Jen-Hsun and David Kirk has been worth it.
We're off to hear Epic's Tim Sweeney speak about Unreal Engine 3 - check back later for more updates from NVISION.