Summary: FiringSquad is an equal opportunity griller – last month we talked with ATI about AVIVO. This month we talk with Scott Vouri about PureVideo. Did you know that NVIDIA has the technology to improve LCD ghosting by overdriving the color signal?
FiringSquad: Why don’t you tell our readers about yourself? What do you do for NVIDIA, how long have you been working there, what’s the greatest achievement you’ve seen at NVIDIA, and what’s the biggest mistake you’ve seen there?
Scott Vouri: My name is Scott Vouri and I’m General Manager of Multimedia for NVIDIA. My group is responsible for NVIDIA’s digital home products and initiatives such as our PureVideo technology, MCE and other home theater products. My home set-up is an HP Digital Entertainment Center with a silent, half-height MSI GeForce 6600 HD, PureVideo Decoder and a prototype NVIDIA dual tuner driving a Sony 36HS420 HD CRT Monitor. While I’ve only been working here 15 months, the greatest thing I have been part of is NVIDIA’s re-taking of the high end of the pixel processing market. From SLI to PureVideo it has been a fun ride. The biggest mistake I’ve seen was not educating the public sooner on NVIDIA’s video technology leadership position.
FiringSquad: The PureVideo brand-name seems to be attached to the following technologies:
1. Capture of the source (NVTV, Dual TV)
2. Video processing on the GPU
3. Software (nStant Media, MPEG-2 decoder)
Are we missing anything?
Scott Vouri: PureVideo is NVIDIA’s unique hardware and software technology for advanced video processing. You probably remember from the GeForce 7800GTX launch back in June, we described how we implemented that technology at all stages of the video pipeline from 3D Comb Filter, Noise Reduction and Weak Signal Amplification technology on NVIDIA TV tuner products to HD and SD broadcast quality compositing, previewing and output on NVIDIA Professional Solutions products. The video processing power our engineers have delivered in every GPU since the GeForce 6x series is truly phenomenal. There are three dedicated video processing cores separate from the 3D rendering engine in every GPU - an MPEG decoding engine, a programmable video processor with its own video programming language, and a motion estimation engine. On top of the video processing cores we have a huge set of microcode which implements our advanced algorithms for MPEG2, WMV and H.264 HD content decode acceleration, Spatial-Temporal De-Interlacing of HD and SD content, Hardware Scaling, Inverse Telecine (3:2 & 2:2 Pulldown correction), and Bad Edit Correction. Then on top of the microcode is our PureVideo driver set and of course the PureVideo Decoder. Finally our TV-Out capabilities provide high quality Composite, S-Video, Component, DVI, HDMI and SDI outputs.
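One item on that list, inverse telecine, is easy to illustrate. The sketch below is my own toy model in Python, not NVIDIA's microcode: it simulates the 3:2 pulldown that converts 24 fps film into 60 fields per second, then recovers the progressive frames by spotting the repeated field that marks each 3-field group of the cadence.

```python
def telecine_32(frames):
    """Telecine progressive frames into a 3-2-3-2 field cadence.
    Each field is modeled as a (frame_id, parity) tuple."""
    fields = []
    for i, f in enumerate(frames):
        top, bot = (f, "top"), (f, "bot")
        # alternate 3-field and 2-field groups: 4 frames -> 10 fields
        fields += [top, bot, top] if i % 2 == 0 else [top, bot]
    return fields

def inverse_telecine(fields):
    """Recover the original frames by detecting the repeated field
    that begins every 3-field group of the cadence."""
    frames, i = [], 0
    while i < len(fields):
        frames.append(fields[i][0])
        # a field repeated two positions later marks a 3-field group
        if i + 2 < len(fields) and fields[i] == fields[i + 2]:
            i += 3
        else:
            i += 2
    return frames
```

A real implementation also has to cope with cadences broken by sloppy edits, which is presumably why Bad Edit Correction appears on the same feature list.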
FiringSquad: Which of those PureVideo features are available on MacOS?
FiringSquad: Since PureVideo isn't a brand-name attached to the display component of a GPU (component video out, etc.) I wanted to ask a few display questions from the get go.
a. Windows Vista may require HDCP support for high-definition playback. Which NVIDIA products support HDCP?
Scott Vouri: All our GeForce 6 Series and GeForce 7 Series GPUs provide HDCP support. In fact we’ve offered it as a design option to our partners since back in May. Two good examples of our technology can be found in the Sony VAIO RA940GN4 (GeForce 6600 HDCP) and the Sony VGX-XL1 Digital Living System (GeForce 6200 HDCP/HDMI).
[Alan's comments: That VGX-XL1 is actually really cool – it's a two-box unit where the second box is a 200-disc DVD changer. It's integrated with Windows Media Center Edition where you can keep track of what discs are in there. You can also populate the changer with DVD-Rs and do automated disc burning! Equally cool is the fact that the sound card reportedly supports DSD playback for use with the DSD mastering software that comes with the system.
This answer actually highlights one of the strengths of vertical integration. Since ATI manufactures their own board, it's possible for ATI to directly translate chip technology down to the board level. With NVIDIA technology you aren't guaranteed to see all of the chip technology exposed at the board level.]
FiringSquad: b. Are there plans to extend SDI support to something either higher or lower-end than the current Quadro FX4000 SDI?
Scott Vouri: Definitely. Today’s professional broadcast standards require the quality of our 10-bit uncompressed Serial Data Interface output, and the industry’s adoption of the Quadro FX4000 SDI as the video card of choice ensures that you will see SDI on many more NVIDIA products.
FiringSquad: 8 questions on "rumored" NVIDIA DualTV.
Scott Vouri: Alan – sorry I can’t answer these yet because they relate to an unannounced product…
[Alan's comment: Doh! Still, if you notice the introduction, Scott Vouri does acknowledge the existence of a prototype NVIDIA dual tuner…]
FiringSquad: Ignoring SDI support, is there a difference in PureVideo capabilities between the Quadro and GeForce line? (i.e. QuadroFX 540 Professional Video Edition versus a GeForce 6600)
Scott Vouri: Starting with the Quadro FX4000 SDI, our PureVideo features include the HD and SDI output we talked about, YUV/RGB 4:2:2, 4:4:4 and 4:4:4:4 support, display resolutions up to 4K via SLI, and Genlock. The Quadro FX540 is the most affordable HD content editing solution on the market today. For about $250 you get high-quality component BNC high-definition output and certified support for Adobe Premiere, Autodesk Combustion, 3ds max, Alias Maya, and many other professional editing packages.
[Alan's comments: NVIDIA didn't answer the question as directly as I'd like to. Reading between the lines, it seems like the differences in the drivers means that only the Quadro supports the higher precision 4:4:4:4 processing. On the FX540, the main difference is the component BNC outputs instead of the RCA outputs.]
FiringSquad: What kind of GPU powers the NASCAR RACE f/x technology? What PureVideo features are being used?
Scott Vouri: The NASCAR RACE f/x uses the Quadro FX 4000 SDI and PureVideo’s high-precision 10-bit SDI for real-time graphics rendering, compositing and broadcast quality output of high-definition live auto-racing action.
The NVIDIA Quadro FX 4000 SDI is used by SportVision to provide the 1st & Ten (TM) virtual first down line in NFL games, the K-Zone virtual strike zone and pitch tracking in MLB games and a host of other sports viewing enhancements.
FiringSquad: Is there anywhere else we can see PureVideo being used in broadcast?
Scott Vouri: Actually most of the major networks in the US, Japan and Europe use NVIDIA Quadro based solutions for their real-time broadcast-quality graphics needs such as weather reporting and virtual studio set-ups. They take advantage of our ability to generate reflections, complex textures, and shadows during live data streaming.
Another really interesting application of PureVideo technology is in movie theaters. There is a company called National CineMedia which broadcasts, via private satellite network, the advertisements that are shown before the movie starts. The ads are broadcast in 1080i MPEG2, downloaded to an HP workstation, processed by a GeForce 6600 with PureVideo technology and projected onto the screen with a Christie HD projector.
[Alan's comments: Christie's HD projectors are part of Dolby's Digital Cinema platform. It's a 3-chip DLP solution. I've seen Sin City and Star Wars III: Revenge of the Sith on a Christie HD projector. The image quality is stunning. Perhaps more stunning is how close you can get to achieving the same image quality at home with a high-end TV. Once the movie starts there's no need for deinterlacing because it's being broadcast in progressive scan format.]
FiringSquad: How many MPEG-2 HD streams can a GeForce 7800GTX decode simultaneously?
[Alan's comments: In our ATI interview, we learned that the X1800 can "easily decode a single HD-MPEG2" stream. ATI used the phrase that they had "not really mapped out beyond a single stream" when it comes to the X1800]
FiringSquad: How much of PureVideo is done with the dedicated VPU components of the chip versus the 3D pipeline?
Scott Vouri: I’m really glad you asked this because it is one of the coolest things about our programmable video processor. Our PureVideo technology actually does processor load-balancing across all the video cores and the 3D rendering engine. That way we can process multiple tasks at once or process different stages of the video pipeline at the same time.
[Alan's comments: That wasn't as much detail as I was interested in hearing, but in a follow-up question, NVIDIA suggested the example of performing hardware accelerated decode and some post-processing effects on the internal VPUs and then doing additional color-enhancements or post-processing on the 3D engine.]
FiringSquad: Are there any plans to make use of both PureVideo VPs in SLI-configured systems for more exotic processing?
Scott Vouri: I can’t comment on unannounced products.
[Alan's comments: So far in this interview, NVIDIA's chosen not to discuss unannounced TV tuners, but they have hinted about SDI support in upcoming Quadro products. I'm not sure how to interpret this answer, as SLI and PureVideo are both fully disclosed technologies. If you believe in test-taking strategy, your gut response is usually right, and my gut says this is something NVIDIA is working on. That said, one of the challenges of deinterlacing is not simply the processing power but also coming up with appropriate algorithms. There's no "brute force" solution. A solution can be elegant enough that you don't need the extra processing. I'll put motion-adaptive noise reduction during playback on my wish list...]
Scott Vouri: The nice thing about having separate video processing cores in our GPUs is that we can move those blocks into any product we want. For example, we recently announced the nForce MCP 430 and GeForce 6150 motherboard solution. This provides PureVideo HD processing with component out without the need for a discrete graphics card. We also offer video processing technology on mobile phones. I can envision many other applications of our PureVideo technology, especially in the consumer electronics field, where our video processing technology will be highly valued.
FiringSquad: Does the VMR offer higher precision computation than the overlay when converting YUV to RGB?
Scott Vouri: Yes. The VMR mode, which provides advanced features such as support for extended color space, allows higher precision computation than overlay mode.
[Alan's comments: In this case, you guys will want to enable the High Quality mode inside of Windows Media Player to force VMR playback. One good way to test this is the Print-Screen test. If you can capture video using the Print Screen button, it's going through the VMR. If you just get a black screen, it's going through the overlay.]
FiringSquad: What sort of software do I need to utilize the noise reduction feature of PureVideo for MPEG encoding? Is there any way to use this in the decoding stage?
Scott Vouri: This is a feature of an unannounced product so I can’t comment.
[Alan's comments: Actually the feature is described in figure 3 of page 7 of NVIDIA's Technical Brief on PureVideo http://www.nvidia.com/object/IO_16213.html]
So in essence, NVIDIA has confirmed that a future product is going to have noise reduction for MPEG encoding.]
FiringSquad: PureVideo seems to do more than regular bob deinterlacing when tested with the HQV Benchmark DVD. Can you give us any more details on what's being done?
Scott Vouri: Yes, we do much more than regular ‘bob’ deinterlacing, but unfortunately we can’t disclose the algorithms behind our de-interlacing technology. I do want to point out that HQV doesn’t even test one of the best things about our spatial-temporal de-interlacing – the fact that we do it on 1080i HD content, which is quite computationally intensive.
[Alan's comments: This is actually the same answer that ATI gave us, so that's still a fair answer. NVIDIA's point about high-definition deinterlacing is well taken. We'll actually be looking into a de-interlacing shoot-out between ATI and NVIDIA in the future, and high-definition test patterns are going to be a component of that future article.
Video deinterlacing is all about throwing away pixels that would otherwise cause feathering artifacts and coming up with ways to guess the best restoration. Only poor-quality video processors are non-motion-adaptive, blindly ignoring 50% of the data. A motion-adaptive deinterlacer selectively discards pixels. It can be as sophisticated as a video processor that tries to map each pixel to a motion vector, or as dumb as one that divides the screen in half and can discard the top half of one field, the bottom half of one field, or both.
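That selective-discard idea can be sketched as a per-pixel decision. This is a toy Python model; the motion test and threshold here are invented for illustration and are nothing like a shipping processor's logic. Where a pixel is static across same-parity fields, weave the real pixel back in at full resolution; where it moves, fall back to an interpolated value.

```python
def motion_adaptive_pixel(woven_px, prev_same_parity_px, interp_px, thresh=10):
    """Choose between weaving and interpolating one missing pixel.

    woven_px:            the pixel from the opposite field (full detail)
    prev_same_parity_px: the same pixel one field-pair earlier
    interp_px:           an intra-field guess (e.g. from bob)
    """
    if abs(woven_px - prev_same_parity_px) <= thresh:
        return woven_px   # static area: keep full vertical resolution
    return interp_px      # moving area: discard and interpolate
```

The interesting engineering is all hidden in that comparison: how large a window to test, and what threshold separates "motion" from noise.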
Once the pixels have been discarded, there is also a wide variety of algorithms that can be used for deinterlacing. Bob de-interlacing is the simplest intra-field (single-field) approach.
You always start with the loss of every other line of pixels. In bob reconstruction, each missing pixel is regenerated by averaging the pixel above and the pixel below. While this takes care of the feathering, you give up 50% of the resolution that's potentially recoverable.
So, suppose you have pixels in odd rows 1 and 3, with the row between them missing. The video processor has to figure out what five pixels should go in between. A simple bob would just average, for each position, the pixel above and the pixel below.
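In code, that simple bob step might look like this. It's an illustrative Python sketch; rows are plain lists of grayscale values, which is obviously not how a GPU stores fields, and it skips the edge rows a real implementation would duplicate.

```python
def bob_deinterlace(field_rows):
    """Rebuild a full frame from one field by averaging adjacent rows.
    field_rows holds the surviving rows (e.g. odd rows 1, 3, 5, ...)."""
    out = []
    for i, row in enumerate(field_rows):
        out.append(row)
        if i + 1 < len(field_rows):
            below = field_rows[i + 1]
            # each missing pixel = average of the pixel above and below
            out.append([(a + b) // 2 for a, b in zip(row, below)])
    return out
```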
A better intra-field method of deinterlacing is edge-adaptive diagonal interpolation, in which you examine several pixels from the rows above and below the discarded pixel instead of just the pixels directly above and below. The video processor first looks for edges in several directions; when it detects an edge, it interpolates along that axis using the nearest pixels along the edge.
The difference doesn't look like much until you take a step back and compare the two. With traditional bob you get artifacts that result in jaggies, but with edge-adaptive directional interpolation the result looks exceptionally smooth.
Amazing, isn't it? Diagonal filtering was a feature pioneered by Faroudja.
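Here's a toy version of that edge search in Python, assuming just three candidate directions and a plain smallest-difference vote; real processors examine larger windows with carefully tuned thresholds.

```python
def edge_adaptive_pixel(above, below, x):
    """Interpolate the missing pixel at column x between two field rows.
    Tests diagonal-left, vertical, and diagonal-right pixel pairs and
    averages along the direction with the smallest absolute difference
    (i.e. the direction most likely to follow an edge)."""
    best_diff, best_val = None, None
    for dx in (-1, 0, 1):
        ax, bx = x + dx, x - dx
        if 0 <= ax < len(above) and 0 <= bx < len(below):
            diff = abs(above[ax] - below[bx])
            if best_diff is None or diff < best_diff:
                best_diff, best_val = diff, (above[ax] + below[bx]) // 2
    return best_val
```

For a dark diagonal running from top-right to bottom-left, `edge_adaptive_pixel([90, 100, 10], [10, 90, 100], 1)` follows the diagonal and returns 10, where a plain bob would smear it to 95.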
Done deal? Not quite. How do I know whether the correct "edge" is the red part, or whether the background happens to be red and the white line is what I'm supposed to interpolate along? Was the picture supposed to represent a red slash on a white background or a bold white backslash on a red background? It's impossible to say.
This again brings up the point that diagnosis is just as important as the treatment when it comes to video processing. You could have two video processors with diagonal interpolation, but one might be more accurate at guessing the appropriate direction. Likewise, you could have an exceptionally exotic deinterlacing strategy that only worked in some conditions.
This is where trade secrets come in and this is where you find the challenge of diagnosis. A better video processor may choose to look at a larger area of surrounding pixels. The algorithms and thresholds used to identify edges can differ. ATI is likely using this general algorithm when it comes to their "vector adaptive deinterlacing" – the real question is how accurate their detection and diagnosis is.
Another approach to video deinterlacing is an inter-field strategy, where you incorporate data from historical fields to make decisions. In theory, additional data should let a video processor make a more informed decision; however, it still boils down to how good the algorithms are – it's what you do with the information that counts. As you can imagine, the complexity of these algorithms is substantially higher.
FiringSquad: Do you have any opinion on Philips Trimension? Good? Bad?
Scott Vouri: The Trimension technology is good, but it is nearly impossible for a CPU to attain the quality levels provided by a dedicated video processing core. In addition, with the computational demands being placed on a CPU today by advanced applications, it is unwise to use that bandwidth for video pixel processing.
[Alan's comments: Well, I'm glad they're aware of the technology. Hopefully future versions of NVIDIA deinterlacing evaluate motion-compensated deinterlacing strategies.]
FiringSquad: How does the "LCD Sharpening" technology that overdrives the color signals compare to the technology used in Viewsonic’s overdrive technology?
Scott Vouri: While the two technologies utilize the same theory, there are subtle differences. The ViewSonic technology works by temporarily driving the requested pixel at a higher voltage than necessary to achieve a given color value. By driving at a higher value initially and then settling back to the voltage level prescribed for the requested color, ViewSonic is able to compensate for the display response-time lag LCDs have versus CRTs. Our LCD Sharpening technology does not physically drive a higher voltage into the panel, but rather uses an algorithm to compute a temporary color value, higher than the requested one and based on the LCD panel’s response time, that will achieve the desired color in a shorter time. PureVideo’s LCD Sharpening technology can work with any LCD panel if it is programmed appropriately.
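The difference Scott describes can be sketched numerically. This is a hypothetical Python model; the linear gain here merely stands in for the per-panel response data a real driver would use. Instead of raising the panel voltage, the algorithm computes an exaggerated intermediate color value for one frame, after which the panel is commanded with the true target.

```python
def overdrive_value(prev, target, gain=0.5, lo=0, hi=255):
    """Compute the temporary color value to command for one frame.
    gain models how hard the panel must be pushed per unit of transition;
    the result is clamped to the panel's representable range."""
    boost = round((target - prev) * gain)
    return max(lo, min(hi, target + boost))
```

With these illustrative numbers, a 0-to-128 transition would be commanded as 192 for one frame, while a pixel already at its target is left alone; large transitions simply clamp at the range limits, which is why overdrive helps mid-tone transitions most.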
FiringSquad: Is this enabled by default? How can a user disable/enable the feature or adjust the amount of overdrive?
Scott Vouri: Our LCD Sharpening technology is currently offered to OEMs so that it can be optimized for the particular LCD panel being used in a given application. Typically OEMs do not expose control of this feature to the end-user.
[Alan's comments: Am I the only one who thinks a user-configurable tool for tweaking LCD panel response would be very useful? I can't believe that OEMs have opted not to expose control of this feature. A long time ago, video board manufacturers would actually tweak the drivers from the chipset manufacturer in order to improve performance. Nowadays, NVIDIA and ATI release drivers so frequently that it's difficult for a board manufacturer to keep up with the changes. Still, if any OEM is looking for a way to distinguish itself from the rest of the pack, enabling this feature would be an easy first step.]
FiringSquad: I brought up Trimension because software like WinDVD 7 supports both PureVideo and Trimension software technologies. How does someone decide between the PureVideo Decoder and WinDVD 7?
Scott Vouri: We brought the PureVideo Decoder to market so that we could fully expose new PureVideo technologies as fast as we bring them to market. That being said, we are not in the business of competing with software companies and in fact work with the decoder vendors to add PureVideo support to their applications. Did you know that there is a user-configurable option to enable PureVideo hardware support in WinDVD 7? We look forward to bringing even more video processing technologies to market with our software partners.
[Alan's comments: Err… of course I knew that WinDVD 7 has PureVideo support – that was part of the question stem.]
FiringSquad: Should people running non-NVIDIA graphics cards consider using the PureVideo decoder?
Scott Vouri: The NVIDIA PureVideo Decoder is very popular among home theater enthusiasts, regardless of their graphics card, because of its great support of MPEG-2 HD transport streams and solid audio and video decoding. However, it does require an NVIDIA GeForce 6 or 7 Series GPU to take advantage of advanced PureVideo features like our spatial-temporal de-interlacing and inverse telecine.
FiringSquad: NVIDIA is known for having excellent Linux graphics drivers. What are the chances of seeing PureVideo and nStant media for Linux?
Scott Vouri: There are many Consumer Electronics applications for PureVideo and Linux. I can foresee the day when we offer those technologies in that market.
[Alan's comments: Woo hoo!]
FiringSquad: Some HTPC owners use multiple sound cards. Are there any plans to allow a future version of the PureVideo Decoder to output a SPDIF stream to one sound card and a decoded stream to another for analog output?
Scott Vouri: We don’t have any plans to do that at this time.
[Alan's comments: This is probably a weekend job for the PureVideo intern. It would be a really cool feature…]
FiringSquad: The Media Center Edition drivers with the wizard-based configuration are actually very cool. That said, I can't help but notice that the MCE drivers always lag behind the standard ForceWare release. What other differences are present between the MCE drivers and the ForceWare drivers?
Scott Vouri: All NVIDIA drivers are built using the Unified Driver Architecture model, so our Media Center drivers come from the exact same code base as our standard ForceWare release. Media Center drivers can take longer to test and release due to the extra QA and certification processes necessary for the Designed for Media Center and Imaging Science Foundation certifications. While this is a situation we strive to avoid, it is worth it in order to provide the home theater quality video output covered by these standards.
FiringSquad: Where do the Digital Vibrance and Image Sharpening algorithms get applied? By the GPU or at some post-processing stage? Would it ever be possible to have application-specific digital vibrance or image sharpening settings?
Scott Vouri: These algorithms are applied as the pixels are output from the graphics core to the DAC, TMDS, or TV encoder. Which means that they benefit the entire display. We are investigating ways to subjectively apply them to application-specific portions of the display.
FiringSquad: How does the advanced mode of color profiling work with ICC/ICM files? It seems like I cannot load a file produced by a hardware calibration tool such as MonacoOptix?
Scott Vouri: ICM profiles are operating-system-level profiles set via the Microsoft-provided control panel and are used by ICM-aware apps. Currently users cannot set separate ICM tables per monitor with other applications.
FiringSquad: Will future revisions of the ForceWare allow me to apply different color LUTs to different monitors in a multiple-monitor setup in Windows XP the same way it does on MacOS X?
Scott Vouri: Actually we support this today. To do this, go to the NVIDIA Control Panel and then browse to the nView page, right click on the monitor you want to change and choose color correction. However, if you are using ICM profiles, the changes get applied to both monitors due to the issues stated above.
FiringSquad: nStant Media seems like an under-advertised component of the PureVideo Decoder. What are NVIDIA's long-term plans for this?
Scott Vouri: nStant Media is our cross-platform home entertainment application that includes a “ten-foot” user interface and supports TV tuning and PVR programming, DVD movie viewing, music listening and photo viewing. Since it is cross-platform from both a GPU and an OS perspective, I think you will see it deployed in a number of different ways over the coming months.
FiringSquad: 1. You are first author on the U.S. patent for switching graphics resolutions without having to restart and close your running applications. If you had registered this patent in your own name instead of assigning it to your company, would you have been rich beyond your wildest dreams from patent licensing fees alone?
Scott Vouri: Well, there were a number of us that collaborated on the technology and the products it went into, so it was only fair that the value of the patent accrue to the shareholders (all of us), so that we could all benefit – which we did when the company was sold.
[Alan's comments: So you did benefit from that patent (along with your investors)!]
FiringSquad: 2. We've asked some tough questions, so the last one is the freebie. If you had one paragraph to directly address our readers and tell them what NVIDIA PureVideo was all about, and could tell them why they should care about PureVideo and not just the 3D stuff, what would you say?
Scott Vouri: To NVIDIA, PureVideo is about achieving digital video perfection, not just on PCs but on any device. We constantly push the limits of our video performance because as we grow our company into new markets, best-of-class video is as important, if not more important, than 3D rendering capabilities. Already you see NVIDIA is in mobile phones, PDAs and many other devices where our video processing quality is highly valued. One can only imagine where you will be viewing an NVIDIA rendered video pixel next.
© Copyright 2003 FS Media, Inc.