I'm currently working on an algorithm that, like the PNG family of formats, is lossless when a picture is rendered. I'll start by explaining the format of a normal 24-bit bitmap image and how one can quickly access the actual picture data and dimensions, and then go into depth on how I've decided the compression routines will work.
On-Computer Absorption, Compression, and Padding (ACP phase)

In a bitmap file, these are the chunks of data we are most concerned with, and how large their values are:
- The offset from the beginning of the file to the raw image data is stored 0x0A bytes from the beginning of the file (32bit)
- The size of the DIB header (not counting the normal BMP header that precedes it) is stored 0x0E bytes from the beginning of the file (32bit)
- Width of the bitmap is 0x12 bytes from the beginning (32bit)
- Height of the bitmap is 0x16 bytes from the beginning (32bit)
- We can check whether the input file is 24-bit encoded by checking that the 16-bit value at offset 0x1C equals 24 (the bytes 0x18 0x00, little-endian)
- The size of the raw pixel data is stored at 0x22 (32bit; this includes the 4-byte-alignment padding)
Some notes:
- The layout of a Windows DIB bitmap file is BMP_HEADER, then DIB_HEADER, then RAWDATA. The BMP_HEADER and DIB_HEADER always start at fixed locations
- The end of the file can be computed as (beginning + *(beginning+0x0A) + *(beginning+0x22)), i.e. the pixel-data offset plus the pixel-data size
- Each row of pixels MUST be a multiple of 4 bytes long, so if a row is not aligned to that, extra padding bytes with the value 0x00 are inserted at the end of it
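As a sketch of the scanning step (in C here rather than the Ruby script mentioned below; the field offsets are per the list above, and reading byte-by-byte keeps it correct regardless of the host's endianness):

```c
#include <stdint.h>

/* Read little-endian values from a byte buffer. */
static uint32_t read_u32le(const uint8_t *p) {
    return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
           ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

static uint16_t read_u16le(const uint8_t *p) {
    return (uint16_t)((uint16_t)p[0] | ((uint16_t)p[1] << 8));
}

typedef struct {
    uint32_t pixel_offset;  /* at 0x0A: where the raw pixel data starts */
    uint32_t dib_size;      /* at 0x0E: size of the DIB header */
    uint32_t width;         /* at 0x12 */
    uint32_t height;        /* at 0x16 */
    uint16_t bpp;           /* at 0x1C: must be 24 for this format */
    uint32_t data_size;     /* at 0x22: pixel data size, padding included */
} BmpInfo;

/* Returns 1 for a plausible 24-bit bitmap, 0 otherwise. */
static int scan_bmp(const uint8_t *file, BmpInfo *out) {
    if (file[0] != 'B' || file[1] != 'M') return 0;  /* magic bytes */
    out->pixel_offset = read_u32le(file + 0x0A);
    out->dib_size     = read_u32le(file + 0x0E);
    out->width        = read_u32le(file + 0x12);
    out->height       = read_u32le(file + 0x16);
    out->bpp          = read_u16le(file + 0x1C);
    out->data_size    = read_u32le(file + 0x22);
    return out->bpp == 24;
}
```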
This is good information for simply scanning a bitmap file for the data within, and I almost have a Ruby script up and working that will absorb the entire bitmap's raw data, minus the padding, into an array, from which it can be loaded into an ACPNG file (Ashbad Compressed Prizm Networks Graphic file -- the file format pioneer gets first pick of the name) and then padded into a Prizm-C-compatible 'const char NAME[] = {' format that can be inserted into a source file as regular picture data.
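A rough sketch of that last step (again in C rather than Ruby; the function name, array name, and exact formatting are my own placeholders), skipping the row padding and emitting the 'const char NAME[] = {' block:

```c
#include <stdint.h>
#include <stdio.h>

/* Write the unpadded 24-bit pixel rows as a Prizm-C style array into
 * 'out'. Each row in the BMP file is padded to a multiple of 4 bytes;
 * we step over that padding and emit only real pixel bytes. Assumes
 * 'out' has enough capacity for the full text. Returns bytes written. */
static size_t emit_c_array(char *out, size_t cap, const char *name,
                           const uint8_t *pixels,
                           uint32_t width, uint32_t height) {
    uint32_t row_bytes = width * 3;            /* 24bpp = 3 bytes/pixel */
    uint32_t padded = (row_bytes + 3) & ~3u;   /* rows are 4-byte aligned */
    size_t n = (size_t)snprintf(out, cap, "const char %s[] = {", name);
    for (uint32_t y = 0; y < height; y++) {
        const uint8_t *row = pixels + y * padded;
        for (uint32_t x = 0; x < row_bytes; x++)
            n += (size_t)snprintf(out + n, cap - n, "%s0x%02X",
                                  (y || x) ? "," : "", row[x]);
    }
    n += (size_t)snprintf(out + n, cap - n, "};");
    return n;
}
```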
Actual Compression Concerns and Ideals

The style of compression planned uses a lossless pixel-prediction algorithm that I have nearly perfected, aiming for theoretically decent rendering speed and high compression rates (which can only be estimated, not stated for sure, until I go into intensive testing mode). This method is very similar to what basic PNG images use, and can be summed up by the picture on Wikipedia -- the pixel X's value is predicted from the values of its neighbors A (left), B (above), and C (above-left).
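For reference, PNG's Paeth filter is one common form of this kind of prediction (whether ACPNG will use exactly this rule is still undecided): it picks whichever of A, B, or C is closest to A + B - C, and the encoder then stores only the usually-small difference between the real pixel and the prediction.

```c
#include <stdint.h>
#include <stdlib.h>

/* PNG-style Paeth predictor: given the left (a), above (b), and
 * above-left (c) neighbor values, predict the current pixel. */
static uint8_t paeth_predict(uint8_t a, uint8_t b, uint8_t c) {
    int p  = (int)a + (int)b - (int)c;  /* initial estimate */
    int pa = abs(p - (int)a);
    int pb = abs(p - (int)b);
    int pc = abs(p - (int)c);
    if (pa <= pb && pa <= pc) return a; /* left is closest */
    if (pb <= pc) return b;             /* above is closest */
    return c;                           /* above-left is closest */
}
```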
Another feature is palette indexing, for pictures with 256 or fewer colors. This is tightly wrapped indexing: if a picture has, say, 126 colors, it would be based on a 7-bit palette with 126 entries. For an 8-bit example (a color count between 128 and 256), a picture with 192 colors would use 8-bit indexing, but would still compress better than a picture with 250 colors because its palette holds only 192 entries. I may later allow indexing for color counts that need 9-11 bit values, but the compression gain from this technique would be drastically less obvious, with the benefit shrinking at each bit level added.
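The index width is just the smallest number of bits that can address the color count (a sketch of the calculation; the function name is my own):

```c
#include <stdint.h>

/* Smallest number of bits that can index 'colors' distinct palette
 * entries, e.g. 126 colors -> 7 bits, 192 or 250 colors -> 8 bits. */
static unsigned palette_bits(uint32_t colors) {
    unsigned bits = 0;
    uint32_t capacity = 1;
    while (capacity < colors) {  /* double capacity until it fits */
        capacity <<= 1;
        bits++;
    }
    return bits;
}
```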
The point of all of these measures (and a few I haven't mentioned yet) is to get an image file as small as technologically possible without any color loss -- that's why PNG-based compressions are generally more of a pain to write than lossy JPEG compressions that can shave off 92% by throwing information away.
If you want an educated guess on how compressed a picture would be with this, I'll take Kerm's obliter8 title screen, with the ~160K data size he regularly laments about publicly. From looking at the image itself, I assume it contains around 200-250 colors total, which with an indexing table would drop it to 80K instantly, no losses. The prediction method's gain is much harder to predict, due to the many factors that relate output size to input size, such as bit depth and (surprisingly) width and height. Still, making a good guess, it should knock off at least an additional 30% from there, bringing his title screen down to a more reasonable ~60K, already cutting 100K off the top.
However, if I use advanced deflation and line-based predictions, along with other features I haven't looked into much yet, it could drop to ~35K. Yay Kerm's title screen!
Prizm-C routine to convert ACPNG data to normal bitmap format

I haven't worked on this nearly as much, but the actual routine that renders ACPNG data will have the following signature, subject to change:
unsigned char Render_ACPNG_Data(void *datapointer, short x, short y, char mode, void *additional_args);
inputs: pointer to ACPNG data, x position to render to in VRAM, y position to render to in VRAM, an 8-bit mode value for how to render, and an optional pointer to additional arguments for communicating with the decompressor directly (pass 0 when unused, since C has no default arguments; mostly for debugging and possibly for getting info on the data and such)
outputs: a status code indicating success or failure.
Quite possible specifications of the actual Prizm-C routine:
- estimated 1-5K function code size
- estimated 6-8 times slower rendering than normal graphical data
- estimated 1-24 hours Kerm will be happy I made a routine that will lower his title screen size
Besides the silly last one, the specs are actual estimates I theorize will be the result, and will most likely not be subject to change.
Questions, Comments, "I WANNA HELP", etc.

Feel free to post anything described in the sub-titles below, or email me at a s h b a d . a l v i n @ g m a i l . c o m (remove the spaces, obviously, AND DO NOT REPEAT IT HERE WITHOUT SPACES.)
~Ashbad