Thinking Out Loud

When working on math homework, I would sometimes encounter a problem where either I didn’t know how to proceed, or I thought I had the process right but the numbers at the end weren’t adding up.  When that happened, it usually meant I’d missed a small detail somewhere, and the best plan was usually to call for help.

The funny thing is that it doesn’t really matter who you get help from, because the first thing you’re going to do is explain the problem you’re having and the process you’ve been using.  During the explanation, you’ll typically discover exactly what you did wrong, and more often than not you could literally be talking to a brick wall and it would be just as helpful as a genius with multiple doctorates in the relevant field.  So if you don’t mind, I’ll just think out loud for a while.  This is going to be pretty dry, and I don’t expect to post a lot of pictures, but we’ll see.

As you may have gathered from the last couple of posts, I’m working on my procedural image library, specifically the part where I can load new files in instead of hard-coding everything.  Nothing can ever run as fast as a system specially designed to do the one thing, but for a slight cost in speed, you can have a system that can do several different things.  This is one of the driving philosophies of computer design: “If I hard-code this, it’ll be faster to write and faster to run, but if it suddenly needs to do something only a little different, I’ll probably have to scrap everything and start over again.”

Files.  Everything that isn’t hard-coded will have to come from files.  As easy as it would be to just write every single texture, level, and animation frame directly into the program, there are long-term consequences.  In this particular case, we’re dealing with file bloat and turnaround time.  It’s usually a lot faster to alter a media file and re-load it into a running instance of a program than it is to recompile the program.  Moving the media out of the executable also means a smaller executable, and faster start-up of said executable, because the whole thing has to be loaded into system memory anyway; the cost is that the executable becomes absolutely reliant on the media files.  In the case of, say, loading the level objects for a game, that reliance is, in most cases, absolutely worth the benefit of being able to modify the levels easily.  It opens the game up to being modded by end-users, for one thing, and it makes content creation easier on me, because instead of having to think like a programmer AND an artist, I just have to think like an artist.

Now, images are used in more than just the level file, but it would be nice to be able to have a level file where I just drop it in, tell the game to load one file, and everything in it works right away.  One good way to do this is to make the “level” file actually be an archive file.  This is actually what Quake 3 levels (.pk3 files) are, and by changing the file extension to .zip you can actually open the level file up using the Windows compressed file utility (or any other file compression utility you may prefer) and muck about with the resources inside of it.  Kind of fun, actually, and it was a brilliant move on the part of Id Software, because not only did they not have to write their own package format, but it meant anybody with, at the time, a copy of WinZip could package up their own multiplayer maps.  You also gain the advantage of built-in file compression.  Smaller files, lower hard disk usage, shorter network transfer times, the ability to cram more content into the same amount of data space.
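The idea is easy to try out.  My project is in D, but a quick Python sketch shows the shape of it; the entry names inside the archive are made up for illustration:

```python
import io
import zipfile

# Build a pretend "level" archive in memory; the entry names are invented.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as level:
    level.writestr("geometry.dat", b"\x00\x01\x02\x03")
    level.writestr("textures/wall.png", b"fake png bytes")

# Any standard zip tool can open the same bytes -- that's the whole trick.
with zipfile.ZipFile(buf, "r") as level:
    names = level.namelist()              # ['geometry.dat', 'textures/wall.png']
    wall = level.read("textures/wall.png")
```

Rename the file from .zip to whatever extension you like and nothing changes except the extension.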

So I thought I’d do something similar.  Phobos, the standard D library, actually has a few compression utilities built right in.  I’ve played around with them, they’re fairly easy to use, and ultimately everything comes down to an array of bytes, so I could, for example, pack everything up into a standard .zip file, or I could try to outsmart myself and add padding or marker bytes in parts of the file, so that only my special reader can read it.  My special reader, or one written by just about any halfway-clever programmer, anyway.  This would be a really weak encryption, and if I started distributing files like that, I would be shocked if it wasn’t cracked after a week.  Three days, tops.  That’s irrelevant, though.  Let’s say our “level” file is just a compressed archive with a tag at the beginning to tell the program what kind of file it is.
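A sketch of the tag-plus-compressed-payload idea, again in Python for brevity; the `PRI1` magic value is just a placeholder I made up:

```python
import zlib

MAGIC = b"PRI1"  # hypothetical 4-byte tag identifying the file type

def pack(payload: bytes) -> bytes:
    """Prepend the tag and compress the payload."""
    return MAGIC + zlib.compress(payload)

def unpack(blob: bytes) -> bytes:
    """Refuse anything that doesn't carry our tag, then decompress."""
    if blob[:4] != MAGIC:
        raise ValueError("not one of our files")
    return zlib.decompress(blob[4:])

data = pack(b"level contents")
restored = unpack(data)   # b"level contents"
```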

Wave files actually work a lot like this.  (PNG files work similarly too, though they put the length before the identifier and add a checksum, but there are enough libraries out there that I don’t need to look inside one for myself.)  The file has a 4-byte identifier code, followed by a 4-byte value telling the audio player how long it is, then the interesting data, which itself consists of 4-byte identifier codes, 4-byte length values, and then the relevant data, including the track name, bit rate, number of channels, authorship information, et cetera.
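That chunk walk is simple enough to sketch in Python; the two-chunk body below is fabricated for illustration, not a real wave file:

```python
import struct

def chunks(body: bytes):
    """Walk a RIFF-style body: 4-byte id, 4-byte little-endian length, payload."""
    pos = 0
    while pos + 8 <= len(body):
        cid, length = struct.unpack_from("<4sI", body, pos)
        yield cid, body[pos + 8 : pos + 8 + length]
        pos += 8 + length + (length & 1)  # chunks are padded to even lengths

# Two fabricated chunks: a 2-byte "fmt " and a 4-byte "data".
body = (b"fmt " + struct.pack("<I", 2) + b"\x01\x00"
        + b"data" + struct.pack("<I", 4) + b"abcd")
parsed = list(chunks(body))   # [(b'fmt ', b'\x01\x00'), (b'data', b'abcd')]
```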

My level files will be zip archives which contain the level geometry data, that’s for sure, level collision data, definitely, and probably a set of sounds and images, both procedural and PNG format.  Now, the images are where things get tricky.

A PNG format image is easy.  You read it, you display it, it’s got one picture.  That picture may have multiple frames if it’s an Animated PNG, but I’m not supporting those, so I don’t have to worry about it.

Procedural images are trickier.  For one thing, I’m creating the file format as I go, so I can make it as feature-rich or as feature-anemic as I want.

Well, the way I have the loader set up already, it ends up with a collection of image arrays.  Some in floating-point format, some in pixel format.  If I happen to have a procedural file that generates the sources for several different images, it seems like it’d be a waste to make it only good for one image.  So I’ll include the capability to store multiple images in a single PRI (PRocedural Image) file.

But what of textures?  Just because I have an array of image data, that doesn’t mean I’ll be showing it to the user.  Some of those arrays are in floating-point format.  I could use that for all sorts of under-the-hood things, so let’s not automatically dump EVERY image created to a texture.  Well now I’ve got an interesting conundrum.  Do I want the procedural image format to be able to dump ANY images to texture all by itself?  It would certainly be helpful to pack everything together, but it adds one more dependency to the project (the Texture Library), and adds another command for me to code in.  On the other hand, if I don’t include the ability to rip stuff off into textures, then I have to include that capability elsewhere, but I’m rewarded with consistency of data handling.  I don’t load up a PRI file and discover that it’s populated my texture library with a bunch of stuff I may or may not be aware of.  More notably, I can organize all of my image data by PRI, have a single superfile that generates all of the textures used exclusively in a given level, and load them only as I need them.

Actually, I can have that anyway.  Say the Procedural Image Library loads up our PRI file.  There will be a list of images that get converted to textures, and everything can be read out by the system anyway if it needs to.  The Image Library keeps track of any images that are converted, and when it unloads, it simply goes to the Texture Library and says “Hey, unload these, please.”
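Sketched in Python (the real libraries are in D, and every name here is hypothetical), the bookkeeping is just a list and a courtesy call:

```python
class TextureLibrary:
    """Stand-in for the global texture store."""
    def __init__(self):
        self.textures = {}

    def load(self, name, pixels):
        self.textures[name] = pixels

    def unload(self, names):
        for name in names:
            self.textures.pop(name, None)

class ImageLibrary:
    """Remembers which of its images became textures, and cleans up on unload."""
    def __init__(self, texture_lib):
        self.texture_lib = texture_lib
        self.converted = []

    def convert_to_texture(self, name, pixels):
        self.texture_lib.load(name, pixels)
        self.converted.append(name)

    def unload(self):
        # "Hey, unload these, please."
        self.texture_lib.unload(self.converted)
        self.converted.clear()

textures = TextureLibrary()
images = ImageLibrary(textures)
images.convert_to_texture("wall", b"pixels")
images.unload()               # the texture store is empty again
```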

I like this idea, and it simplifies the outside code.  It’s a little questionable as far as Object-Oriented code goes, since now private objects can make changes to the global program state without asking permission, but so long as they clean up after themselves, and everything else is expecting them to make changes without knowing what changes those may be, it should be OK.

So, the next question.  As I said, everything is just arrays of bytes.  Internally, that array of bytes is converted to a set of objects that represent the parameters of every single function the Procedural Image Library may allow the PRI file to call.  The objects aren’t private; anybody can see them, which means that anybody can ALSO simply say “Hey Library, I need you to load up these images.”  This is a good way to standardize interfacing, but it does produce another conundrum.

I can make every function accept a stream of bytes, a group of strings, or a single Parameters object.  Doing all three of these is actually a good idea.  The stream of bytes allows me to feed raw data in, the group of strings is necessary for “compiling” the human-readable files I’m working with initially into raw binary files, which have various speed and size advantages, but are basically impossible for me to alter directly.  In a lot of ways, it’s easier to write a program that accepts a text file and outputs a binary file than it is to write a program that allows the user to directly manipulate data into a binary file.  In my case I’d be directly manipulating the data by typing in text commands, anyway.  The Parameter objects allow me to put together the parameters I desire elsewhere in the program and then feed them directly to the Library.  Because again, I’m not doing that in raw binary.
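All three front doors can funnel into one real entry point.  A Python sketch, with a made-up null-separated binary encoding standing in for my actual format:

```python
from dataclasses import dataclass

@dataclass
class Parameters:
    """Stand-in for one command's worth of function parameters."""
    command: str
    args: tuple

def from_strings(tokens):
    """'Compile' human-readable tokens into a Parameters object."""
    return Parameters(tokens[0], tuple(tokens[1:]))

def from_bytes(blob: bytes):
    """Decode the (made-up) binary form: null-separated UTF-8 tokens."""
    return from_strings(blob.decode("utf-8").split("\x00"))

class ProceduralImageLibrary:
    def run(self, params: Parameters):        # the one real entry point
        return (params.command, params.args)

    def run_strings(self, tokens):            # text commands, for authoring
        return self.run(from_strings(tokens))

    def run_bytes(self, blob):                # raw binary, for shipped files
        return self.run(from_bytes(blob))

lib = ProceduralImageLibrary()
# All three roads lead to the same place:
a = lib.run_strings(["fill", "red"])
b = lib.run_bytes(b"fill\x00red")
c = lib.run(Parameters("fill", ("red",)))
```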

So here’s a question for you.  Do I make the Procedural Image Library understand how to load PRI files, or do I just tell it how to handle a bunch of bytes fed in?  Doing the first once again gives me a whole self-contained system, but at the same time it means that if I want to make a special set of compressed image files (PRC, PRocedural Compressed image), I’m pretty much obligated to include those as well.  Besides which, I don’t expect to be dealing with a lot of naked PRI files, so it would probably be easier to write an external utility class to load files and feed the binary data to the Library.

Actually, I can’t think of a single valid reason not to do this, given that compressed objects are also handled as streams of data.

On the other hand, this means that any PNG images I want to use in my procedural images, which is a feature I want to include, would have to be loaded externally.  This is not a huge issue.  It does mean removing a feature, though: any PNG images have to be loaded separately and handed to the Library by an outside function before loading the PRI file, or else stored outside the archive they’ll be used in.

Which itself isn’t a huge issue.

Well thanks!  I think I’ve figured out all the answers I was looking for, and you didn’t even have to have any idea what I was talking about.  Isn’t that cool?

