Coding and Dismantling Stuff

Don't thank me, it's what I do.

About the author

Russell is a .Net developer based in Lancashire in the UK.  His day job is as a C# developer for the UK's largest online white-goods retailer, DRL Limited.

His weekend job entails alternately demolishing and constructing various bits of his home, much to the distress of his fiancée Kelly, his 3-year-old daughter Amelie, and a menagerie of pets.


  1. Fix dodgy keywords Google is scraping from my blog
  2. Complete migration of NHaml from Google Code to GitHub
  3. ReTelnet Mock Telnet Server à la Jetty
  4. Learn to use Git
  5. Complete beta release FHEMDotNet
  6. Publish FHEMDotNet on Google Code
  7. Learn NancyFX library
  8. Pull RussPAll/NHaml into NHaml/NHaml
  9. Open Source Blackberry Twitter app
  10. Other stuff

Editing JPEG Photos Without Recompressing - Part 2

Hot on the heels of Editing JPEG Photos Without Recompressing - Part 1, I've had a few days to ponder, and I'm back with more thoughts on editing JPEGs without recompression, and maybe with a conclusion. Where we left off, we'd determined that the JPEG compression steps we'd need to deal with, in decoding order, are:

  1. Huffman decoding
  2. Run-Length Encoding
  3. Zig-Zag scan
  4. Quantisation
  5. DCT transform

We're aiming to do as little of the above as possible, so as to affect the compressed image as little as possible (if at all).

What Have I Learned? It's Not Looking Good!

After digging into the encoding and decoding process above, it turns out the lossy compression that we want to avoid takes place at the quantisation stage. Quantisation, I now know, is a fairly simple dividing and rounding of values, with more brutal dividing in the parts of the frequency domain whose frequencies have less impact on the appearance of that particular image block, something like the following (note I've used a 4x4 block size instead of the normal 8x8 for clarity):

Example of normal JPEG image quantisation
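The quantisation step pictured above can be sketched in a few lines of Python. Note this is a minimal illustration only: the 4x4 coefficient block and quantisation table below are made up for clarity, not taken from a real JPEG file (which uses 8x8 blocks and standard tables).

```python
# Hypothetical 4x4 DCT coefficient block and quantisation table.
# Real JPEGs use 8x8 blocks; these values are illustrative only.
block = [
    [120, 60, 30, 10],
    [ 60, 40, 20,  5],
    [ 30, 20, 10,  2],
    [ 10,  5,  2,  1],
]
quant = [
    [ 4,  8, 16, 32],
    [ 8, 16, 32, 64],
    [16, 32, 64, 64],
    [32, 64, 64, 64],
]

def quantise(block, quant):
    """Divide each coefficient by its quantisation step and round.
    This is the lossy step - the remainder is thrown away for good."""
    return [[round(c / q) for c, q in zip(brow, qrow)]
            for brow, qrow in zip(block, quant)]

def dequantise(qblock, quant):
    """Multiply back out; this recovers only multiples of each step."""
    return [[c * q for c, q in zip(brow, qrow)]
            for brow, qrow in zip(qblock, quant)]

quantised = quantise(block, quant)
restored = dequantise(quantised, quant)
```

The quantised block ends up full of small values and zeros, which is exactly what run-length and Huffman encoding thrive on; comparing `restored` against the original `block` shows where precision has been lost.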

The purpose of quantisation is to reduce the number of unique values, so that they can be compressed effectively using a combination of run-length and Huffman encoding. But because the quantisation process isn't uniform across the block, our saved data is no longer uniform across the block either - we can't apply any simple operation now to adjust the image brightness. But what if we apply the quantisation process as above to reduce the number of unique values, then remultiply the values so that we can apply a uniform brightness adjustment to the saved data? For example:

Fiddling JPEG image quantisation to give 'normal' output values
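The 'fiddled' pipeline pictured above can be sketched like so, again with a made-up quantisation table and coefficient row rather than real JPEG values:

```python
# Sketch of the 'remultiply' idea: quantise to shed information as
# usual, then multiply straight back out so every stored value is on
# the same scale, hoping a uniform brightness tweak then becomes
# possible. Table and coefficients are illustrative, not real JPEG.
quant = [4, 8, 16, 32]          # one row of a hypothetical quant table
coeffs = [120, 60, 30, 10]      # matching row of DCT coefficients

# The normal pipeline stores the small quotients...
stored_normal = [round(c / q) for c, q in zip(coeffs, quant)]

# ...the 'fiddled' pipeline stores requantised full-scale values.
stored_fiddled = [round(c / q) * q for c, q in zip(coeffs, quant)]
```

The fiddled values are back on a single scale, but each one still represents a different frequency, which is where the plan runs into trouble.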

This plan falls down too - because I'm making my changes in the frequency domain, I can't make a uniform change across all my values, because each of them maps very differently into the spatial domain. And we can't apply non-uniform changes to the saved data, because that would introduce a whole host of new values, which would mean either massively inflating the file size or recompressing the image. Fail again.
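A small numeric sketch (illustrative values again) shows the new-values problem: a uniform scale applied to the requantised coefficients mostly lands off each coefficient's own quantisation grid, so the results can no longer be stored without re-rounding.

```python
# Why the fiddle fails: scale the requantised values uniformly and
# check which ones are still exact multiples of their quantisation
# step. Illustrative values only, not real JPEG tables.
quant = [4, 8, 16, 32]
stored = [120, 64, 32, 0]        # requantised, full-scale values

brightened = [v * 1.25 for v in stored]
off_grid = [v for v, q in zip(brightened, quant) if v % q != 0]
# 150.0 and 40.0 are not multiples of their steps (4 and 16), so
# storing them would mean re-rounding - i.e. recompressing - again.
```

Note that 80.0 happens to stay on its grid while its neighbours don't, which is the non-uniformity problem in miniature.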

What's Next

The above is at this point all theory, so the next step is to prove this in code. I'm cooking up a little console app to do just that, so watch this space.
