Interview Page One
Here We Go
These questions were sent to Sam Lantinga in March 2002, and he responded quickly to the entire lot of them. Perhaps I did not put enough effort in. Maybe I'll have to bug him for a follow-up. Regardless, here is the interview for your wandering eyes to read:
1. FiringSquad: Did you initially envision SDL as a direct competitor to DirectX?
Sam Lantinga: No, SDL was originally designed as an API to provide the services that multimedia applications need across many different platforms. This is still the intent, it just happens that it works very well for developing games on Linux where there isn't a DirectX equivalent.
2. FS: Were you able to use OpenGL and OpenAL “as is” and then develop the rest of the SDL API around them?
SL: SDL doesn't really use OpenAL at all, although games have been written that use both. As for OpenGL, SDL just acts as a sort of cross-platform GLX, setting up the window and GL context, but letting you do all your own work, and introducing no overhead. As such, the SDL API doesn't really interact with the OpenGL API.
3. FS: Was the development of SDL more of a solitary or team effort?
SL: I would call it more of a team effort. I wrote most of the core code, but over the years it has been improved upon by so many different people that it literally would not be the API it is now without everybody's help and suggestions.
4. FS: Can you describe some of the challenges you faced during the development process, from beginning to current day?
SL: The biggest challenge I face during SDL development is balancing the needs of the general application with the services available on different platforms. For example, when designing the audio API, the most common way to access the sound card on Linux is to open the audio device and write a sequence of audio samples to it to play at some point in the future. However on most other platforms, you can't queue audio in that way, you have to keep the audio ready and feed it to the DSP as the DMA buffer is emptied. The audio API I ended up writing reflected this real-time approach.
5. FS: The next version of SDL is said to be a major re-write. Can you go into detail as to why such a re-write is needed, and also what it will entail?
SL: There are a lot of things that would be nice to have in the SDL API that it was never really designed for. Two examples off the top of my head are multi-monitor support and support for hardware audio buffers. These two features alone would require a complete rewrite of the video drivers and a complete redesign of the audio API. There are many other places in the API that could use revisiting, and so the idea has been to start fresh with these concepts in mind.