JPEG Compression, Quality and File Size
When resaving a digital photo, one is often faced with a decision as to what "quality setting" (level of compression) to use. The JPEG file format (more properly, JFIF) allows one to select an appropriate trade-off between file size and image quality. It is important to understand that JPEG (like nearly all lossy file formats) is not suitable for intermediate editing, because repeated saves generally diminish the working file's quality. In addition to the cumulative introduction of visual artefacts (error), repeated recompression also introduces destructive color changes. For these reasons, "lossless" file formats (such as TIFF, PSD, BMP, etc.) are better choices for intermediate processing. JPEG should only be used for storing the final image (ie. after editing) and possibly the initial capture.
How does JPEG compression work?
When a JPEG file is opened in an image editor, a large number of steps must be performed before the raw image (one RGB triplet per pixel) can be displayed or edited. It is easier to understand the process by looking at the reverse process — ie. what happens when one generates a JPEG file (ie. save) from the raw image data. In summary, the JPEG algorithm involves the following stages:
- Color Space Conversion - The image first undergoes a color space conversion, where it is remapped from RGB (Red, Green, Blue) triplets into YCbCr (Luminance, Chrominance Blue, Chrominance Red) triplets. This conversion allows different quantization tables to be used (one for luminance, the other for chrominance).
- Segmentation into Blocks - The raw image data is chopped into 8x8 pixel blocks (these blocks are the Minimum Coded Unit). This means that the JPEG compression algorithm depends heavily on the position and alignment of these boundaries.
- Discrete Cosine Transformation (DCT) - The image is transformed from a spatial domain representation to a frequency domain representation. This is perhaps the most confusing of all steps in the process and hardest to explain. Basically, the contents of the image are converted into a mathematical representation that is essentially a sum of wave (sinusoidal) patterns. For example, the binary sequence 101010 can be represented by a wave that repeats every two pixels. The sequence 1100110011 can be represented by a wave that repeats every four pixels. Similarly, the sequence 1001101011 can be represented by the sum of several simpler waves. Now imagine that this mapping to wave equations (known as the DCT basis functions) is done in both the X and Y directions.
- Quantization - The wave equations resulting from the DCT step are sorted in order from low-frequency components (changes that occur over a longer distance across the image block) to high-frequency components (changes that might occur every pixel). It is widely known that humans are more critical of errors in the low-frequency information than in the high-frequency information. The JPEG algorithm discards many of these high-frequency (noise-like) details and preserves the slowly-changing image information. This is done by dividing all equation coefficients by a corresponding value in a quantization table and then rounding the result to the nearest integer. Components that either had a small coefficient or a large divisor in the quantization table will likely round to zero. The lower the quality setting, the greater the divisor, causing a greater chance of a zero result. Conversely, the highest quality setting would have quantization table values of all 1's, meaning that all of the original DCT data is preserved. (A simplified code sketch follows this list, illustrating this step along with the zigzag scan and run-length encoding described below.)
An important point to realize here is that the quantization table used for this step differs between nearly all digital cameras and software packages. Since this is the most significant contributor to compression or recompression "error", one is almost always going to suffer image degradation when resaving from different compressors / sources. Camera manufacturers independently choose an arbitrary "image quality" name (or level) to assign to the 64-value quantization matrix that they devise, and so the names cannot be compared between makes or even between models by the same manufacturer (e.g. Canon's "Fine" vs Nikon's "Fine").
Please see my article on JPEG Quantization Tables for the actual tables used in Canon, Nikon, Sigma, Photoshop CS2 and IrfanView digital photos.
- Zigzag Scan - The resulting matrix after quantization will contain many zeros. The lower the quality setting, the more zeros will exist in the matrix. By re-ordering the matrix from the top-left corner into a 64-element vector in a zig-zag pattern, the matrix is essentially sorted from low-frequency components to high-frequency components. As the high-frequency components are the most likely to round to zero, one will typically end up with a run of zeros at the end of the 64-entry vector. This is important for the next step.
- DPCM on DC component - On a block-by-block basis, the average value across the entire block (the DC component) is encoded as the difference from the previous block's DC value. This is known as Differential Pulse Code Modulation.
- RLE on AC components - The individual entries in the 64-element vector (the AC components) are stored with a Run Length Encoding: since the vector contains many zeros, it is more efficient to record each non-zero value along with the number of zeros preceding it. The RLE therefore stores a skip and a value, where the skip is the number of zeros before this component and the value is the next non-zero component.
- Entropy Coding / Huffman Coding - A dictionary is created which represents commonly-used strings of values with shorter codes. More common strings / patterns are encoded in only a few bits, while less frequently used strings use longer codes. So long as the dictionary (Huffman table) is stored in the file, it is an easy matter to look up the encoded bit string and recover the original values. See my JPEG Huffman Coding tutorial.
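To make the quantization, zigzag, DPCM and RLE stages more concrete, here is a simplified C++ sketch operating on a single 8x8 block. It is illustrative only (not the code used by any camera or editor), and the coefficient and quantization values are made-up placeholders.

```cpp
// Illustrative sketch only: quantization, zigzag re-ordering, DC delta (DPCM)
// and (skip, value) run-length pairs for a single 8x8 block of DCT
// coefficients. The coefficient and quantization values are placeholders.
#include <array>
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

using Block  = std::array<double, 64>;  // 8x8 DCT coefficients, row-major
using QTable = std::array<int, 64>;     // 8x8 quantization divisors

// Standard JPEG zigzag order: entry i of the output vector comes from this
// row-major position in the 8x8 block.
static const int kZigZag[64] = {
   0,  1,  8, 16,  9,  2,  3, 10,
  17, 24, 32, 25, 18, 11,  4,  5,
  12, 19, 26, 33, 40, 48, 41, 34,
  27, 20, 13,  6,  7, 14, 21, 28,
  35, 42, 49, 56, 57, 50, 43, 36,
  29, 22, 15, 23, 30, 37, 44, 51,
  58, 59, 52, 45, 38, 31, 39, 46,
  53, 60, 61, 54, 47, 55, 62, 63 };

// Divide each DCT coefficient by its quantization divisor and round to the
// nearest integer; this rounding is where the "loss" occurs.
std::array<int, 64> Quantize(const Block& dct, const QTable& q) {
  std::array<int, 64> out{};
  for (int i = 0; i < 64; ++i)
    out[i] = static_cast<int>(std::lround(dct[i] / q[i]));
  return out;
}

// Re-order the quantized block into a 64-entry vector, low frequencies first.
std::array<int, 64> ZigZagScan(const std::array<int, 64>& quant) {
  std::array<int, 64> vec{};
  for (int i = 0; i < 64; ++i) vec[i] = quant[kZigZag[i]];
  return vec;
}

int main() {
  Block dct{};                    // pretend this came from the DCT step
  dct[0] = 620.0;                 // DC (average) term
  dct[1] = -30.2; dct[8] = 12.7; dct[9] = -4.1;   // a few low-freq AC terms

  QTable q;                       // placeholder divisors; real tables vary
  q.fill(16);

  auto vec = ZigZagScan(Quantize(dct, q));

  // DPCM on the DC term: store only the difference from the previous block.
  int prev_dc = 0;                // would persist across blocks in a real encoder
  int dc_diff = vec[0] - prev_dc;
  std::printf("DC delta: %d\n", dc_diff);

  // RLE on the AC terms: emit (zero-run, value) pairs, then an end-of-block.
  std::vector<std::pair<int, int>> rle;
  int run = 0;
  for (int i = 1; i < 64; ++i) {
    if (vec[i] == 0) { ++run; continue; }
    rle.emplace_back(run, vec[i]);
    run = 0;
  }
  for (const auto& p : rle)
    std::printf("(skip=%d, value=%d)\n", p.first, p.second);
  std::printf("EOB\n");           // trailing zeros collapse to one end-of-block code
  return 0;
}
```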
Examine your JPEG Files!
I have written a free Windows utility (JPEGsnoop) that examines and displays all of the details described above in your JPEG files.
Where does the error come from?
By far the biggest contributor to the error (ie. file size savings) in the JPEG algorithm is the quantization step. This is also the step that allows tuning by the user. A user may choose to have a slightly smaller file while preserving much of the original (ie. high quality, or low compression ratio), or a much smaller file size with less accuracy in matching the original (ie. low quality, or high compression ratio). The tuning is simply done by selecting the scaling factor to use with the quantization table.
The act of rounding the coefficients to the nearest integer results in a loss of image information (or more specifically, adds to the error). With larger quality scaling factors (ie. low image quality setting or high numbers in the quantization table), the amount of information that is truncated or discarded becomes significant. It is this stage (when combined with the Run Length Encoding that compresses the zeros) that allows for significant compression capabilities.
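For reference, the widely used IJG libjpeg library derives its scaling factor from the 0..100 quality setting roughly as shown below (see jpeg_quality_scaling() in jcparam.c). Other encoders use their own scales and base tables, so quality numbers are not transferable between programs.

```cpp
// How IJG libjpeg maps its 0..100 "quality" setting onto a percentage
// multiplier for its base quantization tables.
int JpegQualityToScale(int quality) {
  if (quality <= 0)  quality = 1;
  if (quality > 100) quality = 100;
  // Quality 50 maps to 100% (base table used as-is); lower quality settings
  // produce larger divisors and therefore more coefficients rounding to zero.
  return (quality < 50) ? (5000 / quality) : (200 - quality * 2);
}

// Each base-table entry is then scaled and clamped to the legal 1..255 range
// for 8-bit quantization tables.
int ScaleTableEntry(int base, int scale_percent) {
  long v = (static_cast<long>(base) * scale_percent + 50L) / 100L;
  if (v <= 0)  v = 1;
  if (v > 255) v = 255;
  return static_cast<int>(v);
}
```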
There are other contributors to the compression error, such as the color space conversions, but the quantization step is the most important.
Please see my results in the JPEG Quantization Table article for a more accurate comparison between software packages and their quality settings.
JPEG Chroma Subsampling
In order to further improve JPEG compression rates, chroma subsampling is used to reduce the amount of image information to compress. Please refer to my article on Chroma Subsampling for more information on the 2x1 and 2x2 subsampling typically used in digital cameras and image editors such as Photoshop.
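As a minimal sketch of the idea (not tied to any particular encoder), 2x2 subsampling simply averages each 2x2 group of chroma samples into one, quartering the amount of Cb and Cr data before compression:

```cpp
// Minimal sketch of 2x2 (4:2:0-style) chroma subsampling. Each 2x2 group of
// Cb (or Cr) samples is replaced by its average. Plane dimensions are assumed
// to be even for brevity.
#include <cstdint>
#include <vector>

std::vector<uint8_t> Subsample2x2(const std::vector<uint8_t>& plane,
                                  int width, int height) {
  std::vector<uint8_t> out((width / 2) * (height / 2));
  for (int y = 0; y < height; y += 2) {
    for (int x = 0; x < width; x += 2) {
      int sum = plane[y * width + x]       + plane[y * width + x + 1] +
                plane[(y + 1) * width + x] + plane[(y + 1) * width + x + 1];
      out[(y / 2) * (width / 2) + (x / 2)] = static_cast<uint8_t>((sum + 2) / 4);
    }
  }
  return out;
}
```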
Breakthrough in JPEG compression?
Up until now, it has been widely assumed that JPEG image compression is about as good as it gets as far as compression rates are concerned (unless one uses fractal compression, etc.). Compressing JPEG files again with Zip or other generic compression programs typically offers no further reduction in size (and often does the reverse, increasing the size!).
As documented in a whitepaper (no longer available) written by the authors of StuffIt (Allume Systems, formerly Aladdin Systems), they have apparently developed software that will further compress JPEG files by up to an additional 30%! Considering how many years the JPEG algorithm has been around, it is surprising to see any new development that offers this degree of increased compression. Note that the "StuffIt Image Format" (SIF) uses lossless compression of the lossy-compressed original JPEG image; therefore, there is no further image quality reduction in this additional StuffIt compression.
On Slashdot.org, there have been many theories as to how this additional compression could be achieved; many feel that it must come from either replacing the Huffman coding portion (using arithmetic coding instead) or from alternatives to the zig-zag reordering scan. The consensus seems to be that SIF uses an implementation of arithmetic coding.
At first glance, this would seem to have the potential to revolutionize the photo industry. Imagine how this could affect online image hosts or personal archiving needs; saving 30% of the file size is a significant improvement. Unfortunately, a few significant problems are immediately apparent, possibly killing the adoption of this format:
- Proprietary Standard - I cannot see this format taking off, simply because a single company owns the format. Would you trust your entire photo collection to a single company's utility? The company can charge whatever it likes, there are no guarantees about the company's future, and so on. At least Adobe tried to do things right by releasing their DNG (Digital Negative) format specification to the open community, allowing many other developers to back the format. Allume / StuffIt sees this as a potential financial jackpot.
- Processor Intensive / Slow - Unlike the methods used in the standard JPEG file compression scheme, the SIF method is apparently extremely slow. As tested by ACT (Archive Comparison Test website), a 1.8 GHz Pentium computer took nearly 8 seconds to compress or decompress a 3 megapixel file. While this is less of an issue for those wishing to archive photos to CD, for example, it is obvious that this would prevent the algorithm from ever being supported in most embedded applications (including within a digital camera).
Resaving and workflow
When resaving after making changes, I strive to preserve the quality of the original as much as possible and not lose additional detail to compression round-off error. Therefore, one should keep in mind a few suggestions about resaving:
Original | Save as... | Notes |
---|---|---|
TIFF | TIFF | If the original was uncompressed, then it makes sense to resave it as uncompressed |
BMP | BMP | If the original was uncompressed, then it makes sense to resave it as uncompressed |
JPG | TIFF or BMP | Best approach: Allows the best preservation of detail by saving in a lossless format. Unfortunately, this approach complicates things as most catalog programs don't handle the change of file type very well (as it changes the filename). |
JPG | JPG | Alternate approach: While not as good as the previous approach that saved it in a lossless format, this can be adequate if the compression algorithm is the same (ie. same quantization tables & quality settings). If this is not possible, then resaving with a quality setting that is high enough (ie. striving for less compression than the original) might be the only choice. |
If one intends to edit a JPEG file and resave it as a JPEG, the issue of recompression error should be given consideration. If a little additional error is not going to be of much concern (especially if it is nearly invisible), then resaving to match file size might be an adequate solution. If, however, the goal is to preserve the original image's detail as much as possible, then one has to take a closer look at the way the files are saved.
Option 1 - Resaving with no recompression error
All of the software applications that advertise "lossless" operations (such as lossless rotation) will resave the file with no additional recompression error. The only way that this can work is if the settings used in the compression algorithm match the settings of the original, identically. Any differences in the settings (more specifically, the quantization table and the quality setting / factor) will cause additional compression error.
Unfortunately, it is very difficult (as a user) to determine what these settings were, let alone have any control over them (besides the quality factor). Besides cjpeg, I haven't seen any other programs that actually allow you to configure the quantization tables yourself.
In an attempt to identify whether or not this option is even possible, I have compared the quantization tables of my digital camera to a couple imaging applications.
Fortunately, if one is resaving an image in the application that originally created it (eg. an image saved in Photoshop, re-opened, edited and then resaved in Photoshop), one can almost achieve this by simply resaving with the same quality settings as were used the previous time. As the quantization table is hardcoded, the user must ensure that the quality setting exactly matches the original (neither higher nor lower). If one has forgotten what settings were used in the original, it is possible to make an educated guess by performing a couple of test saves and comparing file sizes across quality settings to get a very rough idea, as sketched below.
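As a rough illustration of that last suggestion, here is a hypothetical sketch (not the author's tool) using the TurboJPEG API from libjpeg-turbo. It assumes the image has already been decoded to an RGB buffer in memory; GuessQuality and its parameters are invented names for this example, and the result is only a hint, since quantization tables, subsampling and image content all differ between encoders.

```cpp
// Hypothetical sketch: guess the original quality setting by re-compressing
// the decoded pixels at several quality levels and comparing the resulting
// sizes against the original file size. Error handling trimmed for brevity.
#include <turbojpeg.h>
#include <cstdio>

// 'pixels' is the decoded RGB image; 'origSize' is the original JPEG file size.
int GuessQuality(const unsigned char* pixels, int width, int height,
                 unsigned long origSize) {
  tjhandle tj = tjInitCompress();
  int bestQ = 0;
  unsigned long bestDiff = ~0UL;
  for (int q = 50; q <= 100; q += 5) {
    unsigned char* buf = nullptr;   // TurboJPEG allocates the output buffer
    unsigned long size = 0;
    if (tjCompress2(tj, pixels, width, 0 /*pitch*/, height, TJPF_RGB,
                    &buf, &size, TJSAMP_420, q, 0 /*flags*/) == 0) {
      unsigned long diff = (size > origSize) ? size - origSize : origSize - size;
      std::printf("quality %3d -> %lu bytes\n", q, size);
      if (diff < bestDiff) { bestDiff = diff; bestQ = q; }
    }
    tjFree(buf);
  }
  tjDestroy(tj);
  return bestQ;   // only a rough hint: tables, subsampling and content differ
}
```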
Option 2 - Resaving with minimal recompression error
If one is resaving a photo with a different program than the one that created the original (eg. Photoshop CS resaving an edited version of a photo straight from a digital camera), it is not possible to resave without some additional "loss" (recompression error). The problem here is that the quantization tables and quality settings are either not known or cannot be set. This is the most likely scenario for users editing their digital photos.
In this scenario, the goal is no longer "lossless resaving" but minimizing the additional recompression error that will be introduced. Making a very rough assumption, one can get an equivalent level of detail by resaving with settings that use similar quantization tables. There are many reasons why this ends up being a rough assumption, but it should give a close approximation to the same level of detail.
Compression Quality and File Size
The following details the effect of JPEG quality on file size from several popular image editing programs. Unfortunately, each graphics program tends to use its own compression quality scale and quantization tables, and therefore, one can't simply transfer quality settings from one application to another.
As described in the section above, if one cannot guarantee lossless resaving (because of differences in the quantization tables), then it is worth looking at the quantization table comparison for a guideline.
Knowing what quality level is roughly equivalent to the original image helps in determining an appropriate quality level for resaving. Ideally, one doesn't want to resave at a lower quality level (and therefore lose image detail / quality); on the other hand, one shouldn't save at a higher quality setting, as it simply wastes space and can in fact introduce extra recompression noise!
Digital Photo Source Characteristics
The source file for comparison purposes is a JPEG image shot with a Canon 10D digital SLR camera, recording a 6 megapixel image (3072x2048 pixels) at ISO 400 and super-fine quality. The source file size is 2280 KB.
Photoshop CS - JPEG Compression
For more detailed information, please see the article: Photoshop Save As vs Save for Web.

Notes:
- Photoshop CS2 allows a range of quality settings in the Save As dialog box from 0..12 in integer increments.
- Photoshop CS2 allows a range of quality settings in the Save For Web dialog box from 0..100 in integer increments.
- A JPEG quality setting of around 11 achieved a similar file size to what was originally produced by the Canon 10D digital SLR camera (in super-fine mode).
- Photoshop CS has three modes of saving JPEGs: Baseline, Baseline Optimized and Progressive. The difference in file size between these three modes (progressive was set to 3-scan) was in the order of about 20 KB for a 1 MB file. Of course this is dependent upon the content of the image, but it demonstrates the rough order of magnitude expected in the different modes.
- An ICC profile was attached to the image, but it is simply the default sRGB, which is relatively insignificant (~ 4 KB).
Photoshop CS and Chroma Sub-sampling
Although it is not advertised, I have determined that Photoshop CS uses chroma subsampling only in certain quality level settings.
Photoshop does not allow the user to select whether or not Chroma Subsampling is used in the JPEG compression. Instead, 2x2 subsampling is used for all Save As at Quality 6 and below, while it is disabled (ie. 1x1 subsampling) for Save As at Quality 7 and higher. Similarly, it is used for all Save For Web operations at Quality 50 and below, while it is not used in Save For Web at Quality 51 and above.
Irfanview - JPEG Compression

Notes:
- Irfanview allows a range of quality settings from 0..100 in integer increments.
- It also allows one to select whether or not Chroma Subsampling is used in the JPEG compression.
- With Chroma Subsampling disabled, it appears that a quality setting of 96 achieves comparable file size to the original.
- With Chroma Subsampling enabled, a quality setting of around 98-99 creates a comparable file size to the original. However, it should be noted that the digital camera itself is using chroma subsampling (2x1), so that this figure is not particularly useful. In other words, if the original source used chroma subsampling, then there is no point in resaving it without chroma subsampling — the additional CbCr color information is already gone. In the example image I have used for the above analysis, chroma subsampling offered approximately 25% savings in file size over the same JPEG compression without.
Miscellaneous Topics
Beware of Photoshop Save For Web
Although Photoshop's Save for Web dialog seems great, as it lets one interactively set the image dimensions, palette depth and compression quality, it has one potentially disastrous side effect: the removal of EXIF metadata!
You will find that the file sizes after using "Save for Web" will be smaller than if you had simply chosen "Save As..." with equal compression settings. The difference is in the lost image metadata (time / date / aperture / shutter speed / etc).
The only time it is worth using "Save for Web" is when one is creating web graphics (where one is optimizing for speed) or deliberately wants to eliminate the metadata (eg. for photo galleries, privacy, etc.).
Reader's Comments:
Please leave your comments or suggestions below!
I've read almost all your articles and comments but couldn't find a solution (nor on many other web sources).
In "Resaving and workflow", "Attempt to Recompress Losslessly", etc., you wrote a kind of guideline on how images should be processed while editing/resaving. But it's unclear to me: if I have a JPEG that came directly from a phone's camera with Quality=96.95% & subsampling factor=4:2:0, and I need to downsize this image (4000x3000 px) by half, what would be the best settings for doing it and for minimizing the loss of pixel info while preserving an adequate file size?
Downsize/downscale the image via IM/XnConvert with the following settings: Q=97 with Ss=4:2:0, or Q=100 with Ss=4:4:4, or Q=97 with Ss=4:4:4?
Could you give me your opinion?
Thanks in advance!
You wrote: "... depending on the input (in-camera) color space, it is possible for the color reproduction to be impacted."
Is there a way (a software) to determine the input color space, so we can use the same one when saving to PNG and thus avoid the color shift?
Does this apply to all the formats using lossless compression when saving from JPG?
I would like to crop a JPG image and then save it as a PNG image. IIUC, it involves a conversion from JPG to PNG. Does it have any negative effects on the final image (introducing artifacts, changes in color reproduction etc)?
Thanks!
Please provide suggestions for following problems in implementation of jpeg encoder:
1. If the image is of size 72x72, we will obtain 36x36 sized Cb and Cr components because of 4:2:0 downsampling. But 36 is not a multiple of 8. In that case, how can we adjust these 36x36 blocks to obtain distinct 8x8 blocks of Cb and Cr components?
2. Is there any way to separate the EOB marker of the Luma components from the Huffman codes of the Chroma components? For example, the EOB marker for the Luma part is '1010', while '1010' is itself a code in the Huffman table for the Chroma components. So how can we deal with such conflicts between the EOB code and actual values?
Thanks in advance.
I can NOT find your article on JPEG Quantization Tables, could you please give me a link? Thanks a lot!
Please can you elaborate on this. Thanks in advance.
Do you know why the quantization table for Y is not symmetrical? The distribution of these coefficients over the table looks really strange.
Thanks,
Fyodor
If I edit mainly in Picasa, can I save in a format other than JPEG?
Thank you
Great website...
I have some comments on your amazing piece of software, and some ideas on how to improve it, and I'd like to share some feelings I have about digital image compression, including the unreasonable failure of wavelet-based algorithms :(
Just to let you know that the link to the software from StuffIt is broken.
Hope to hear from you by email.
Cheers,
Alex
How can we make them the same size?
Best regards
In my case I am making a DVD collection of all the birds of the world, and I believe that reducing many of my pictures to less than 30 KB enables this. For most computers today, reduction below 30 KB seems to be pointless from the point of view of saving disk space, because of the block storage scheme used.
I think that the savings in space are significant, and the appearance is satisfactory.
RIOT is available as a stand-alone product, but much handier as a plugin for Irfanview. It is very fast.
I wanted to know how the compression ratio is calculated in the compression stats section of JPEGsnoop.
Thank you
I guess it would be only an approximation, but perhaps useful to compare pictures. I didn't find a satisfying solution.
Thanks for your website and tools, which were very useful to me.
Yves
If a) the camera used a similar scaling factor algorithm and b) the camera based its pre-scaled table on something similar to the table listed in the ITU-T standard, then it may be possible to perform some approximate comparisons of "quality".
image quality using a number q, whose value varies from 0 to 1 inclusive. Design a way to control the quality and size of the JPEG file produced that makes use of q.
But other aspects are repeated values, small deltas, spectra, etc. The fundamental question in compression is "how can I use the fact that the probability of the next bit being a 1 or 0 is not exactly 50%". Huffman takes the static statistical histogram into account but nothing else. Run length takes sequences into account. But there's lots more which can be done.
For many pictures, each line will differ from the one above or below by a small margin, and adjacent pixels won't differ. You will have many outliers, and JPEG's use of differences helps, but a bunch of differences near zero still result in a long string of near-zero codes, which are repeated vertically.
The "lossless" compression is not very efficient, but it is easy to implement in devices like cameras where you have some processing power and memory. (A microcontroller like a PIC or ATmel would have trouble, PowerPC, MIPS, or ARM, would work). We've had Moore's law going on for a few years, so these newer techniques are now possible, but remembering the technology back then (and when much would be in a customized gate array), this was the best compromise.
After reading your articles I have an idea:
I'm wondering if it's possible to recompress a file back to the original state. I mean:
function (it cuts MIME/EXIF data like author, comments, etc. and converts it to RGB)
being converted back to the JPEG format and saved to the file (otherpic.jpg).
the markers like Huffman tables, SOS, JFIF, Quantization)... I mean... they're binary different (visually the same).
Here is my question... how to get reverse compression/decompression of the file otherpic.jpg back to the original state (nicepic.jpg) before all the steps I described above?
And another question came up... Is it possible to foresee (more or less accurately) the binary result of the JPEG after the described conversions? I mean... I want to get a specific sequence of bytes after conversion, and need to know what the input file should look like to get that sequence of bytes.
PS
GD library uses The Independent JPEG Group's jpeglib.
Your second question is more difficult to answer. I believe you are asking: can one determine what "nicepic.jpg" values might produce the output "otherpic.jpg" after the intermediate RGB conversion? The answer is a qualified no. Even though you can perform a JPEG decompression on "otherpic.jpg" to RGB, you would then need to know exactly what compression characteristics (quantization tables, Huffman tables, subsampling, etc.) are required for the JPEG compression step to create a specific "nicepic.jpg" sequence of bytes.
Thanks,
Veeru.
I have just started to dig into the internals of the JPEG format and found this to be a nice article.
Recently I came across a file which has got 4 components in the START OF FRAME (FFC0). Could you please give me more info on the case where a file contains more than three components, and what the purpose is? Material on this would be a great help.
Thanks,
Veeru.
The Independent JPEG Group has just released version 7 of its library for JPEG image compression.
The 27-Jun-2009 release includes many new features, including JPEG arithmetic coding compression/decompression/re-compression.
Looks like the software patents on arithmetic coding have finally expired after 17 years!
I have some c++ code that I use to encode/decode jpegs, and there's no way to estimate the size of the buffer that will hold the encoded image.
Currently, I send in a buffer equal to height * width * bands * sizeof(unsigned char). I will set the buffer to a minimum size (4096) if it is too small.
There must be a better way to estimate this. I know the ultimate size is dependent on the image contents, but I just need a reasonable maximum that won't fail most of the time. Maybe half the size of the original image? I could use image quality (between 0-100), the sampling (411,422,444) and the type of compression (baseline, progressive, or lossless) in the equation.
Thanks for such an informative site. FYI, I'm using the Intel IPP library for the heavy lifting, if it matters.
Jeff
The biggest problem is that the compression ratio is largely determined by the image content (at least for conventional JPEG). Taking a random sampling of MCUs from the image might give you an approximate estimate, but it is not a guarantee.
If anyone knows of a good strategy, please feel free to add!
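One possible approach (an assumption on my part, not taken from the article above) is to size the buffer for the worst case rather than the expected case; libjpeg-turbo's TurboJPEG API exposes such a bound directly:

```cpp
// Worst-case output buffer size for a JPEG of the given dimensions, via the
// TurboJPEG API (libjpeg-turbo). Always sufficient, though usually far larger
// than the actual compressed output.
#include <turbojpeg.h>

unsigned long WorstCaseJpegSize(int width, int height) {
  return tjBufSize(width, height, TJSAMP_444);
}
```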
Is it possible to send you a JPEG file that is in JPEG-B abbreviated format (without quantization and Huffman tables)? In the spec they suggest using the standard Kx tables to decode, but I failed to do it. Your JPEGsnoop also can't open it. I would appreciate your help with decoding it.
Have a question about IrfanView.
I tried to create a new image in IrfanView, but I can't create a new image with 4 or 1 BPP.
Could you give me some help?
Thanks a lot!!!
Let's say I have a photo called '01.jpg' and I bring it into Photoshop, edit some values then save as '02.jpg' to avoid overwriting the original '01.jpg'. Is the original '01.jpg' decompressed and recompressed? Is there any quality loss to the original image over hundreds of "save as" operations or batch resize operations?
Basically, if I NEVER overwrite '01.jpg', am I going to have any degradation of the image through all of this editing? I can't find a solid answer on this, so I'd appreciate your help. Thanks a lot.
The act of opening up an image and then closing it (without making any modifications or "resaving") should never cause any changes to the file itself -- and therefore no recompression will occur either.
The concerns about recompression and generation "losses" only appear when a user instructs the software to save or output the loaded image.
The presumption is that most image editing programs will not resave an image upon closing a file unless you explicitly request it to.
I am a programmer at a mobile phone company and mainly work on mobile phone camera device drivers and JPEG decoders/encoders.
Though I study online material related to this, I want to enhance my knowledge and reach a benchmark in my field.
Kindly tell me if there is any kind of online certification available in these areas?
Thanks
I want to study complete JPEG file format, the way markers are arranged in JPEG file and also about functionality of each marker.
I searched for material but didn't find anything worthwhile. It would be great if you could suggest some book or online material that can give me complete information about it.
From reading about the compression process of JPEG images, one doubt remains in my mind.
According to the compression procedure, the compression depends on the quantization table (a different quantization table must result in a different image quality). Suppose I take an image with my mobile phone and look at the stored image on the phone, and then I copy the same JPEG image to my PC and look at the image on my machine in any viewer.
So will a different quantization table be used on the phone and on the PC, and what effect will that have on the final image?
Also, as far as I know, the quantization table is present in the JPEG image, so which quantization table will be used if I view the image on the PC?
As you may already know, there's a new application called Image Compressor that claims to create better JPEGs.
Have you tried it and compared the results?
I think you can get one from here:
www.image-compressor.com
What do you think? Is it worth it?
I'm a web designer, and when I get photos from clients, sometimes they are already in great shape: sharp at 960 pixels wide for a file size of around 75 KB. Others come in huge (3 MB, 2500 pixels wide), but if I use Photoshop CS3 to resize them and drop the ppi count to 72, they look terrible, with artifacts, etc.! Why is this? Was the first set shot in some particular mode? Will I ever be able to get the big ones down to less than 130 KB with the same clarity as the first set?
Thanks for any help you can give me.
Thanks a lot for your software JPEGsnoop; it helped me a great deal. I was searching for a way to know whether pictures coming from IP cameras were encoded in 4:2:2 or 4:1:1 or something else. I thought JPEGsnoop and your explanations gave enough information to be able to determine it, but apparently it isn't enough. I ended up comparing two pics and concluding they had the same 4:x:y, even though the first was 4:2:2 and the second 4:1:1...
Is there a quick, simple and not-error-prone way to know the 4:x:y of a picture with your software (or even without)?
If so, you would help me a great deal again by saying what should be done.
Thanks again,
Gregory
I'm trying to understand how camera phones store their images. I was wondering if you could help me with that. Thanks!
After comparing a lot of information (in many applications and docs) about the DCT of JPEG, related to the floating-point setting versus the old default mode(s), I'm interested in why floating point is given such a bad position. They say you will see no difference, but it can reduce the file size, which I have already tested many times.
We are talking about several KB, and it's technically supported by close to all CPUs these days.
Nowadays, many processors can handle floating point calculations without as much impact to performance because they have a separate floating point processor (FPU). In my tests in developing JPEGsnoop, I implemented my JPEG color conversion calculations in both integer and floating point routines. Turns out that the performance difference was less than 10 percent!
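For readers curious what the two styles of conversion look like, here is an illustrative sketch (not the JPEGsnoop source) of the same JFIF YCbCr-to-RGB conversion in floating point and in 16.16 fixed-point integer form; the fixed-point constants are the usual roundings of the JFIF coefficients (1.402, 0.344136, 0.714136, 1.772) multiplied by 65536.

```cpp
// Illustrative sketch: JFIF YCbCr -> RGB in floating point and in 16.16
// fixed-point integer form. The fixed-point version assumes an arithmetic
// right shift for negative intermediates, as on all common compilers.
#include <algorithm>
#include <cstdint>

static inline uint8_t Clamp(int v) {
  return static_cast<uint8_t>(std::min(255, std::max(0, v)));
}

void YCbCrToRgbFloat(uint8_t y, uint8_t cb, uint8_t cr,
                     uint8_t& r, uint8_t& g, uint8_t& b) {
  double Y = y, Cb = cb - 128.0, Cr = cr - 128.0;
  r = Clamp(static_cast<int>(Y + 1.402 * Cr + 0.5));
  g = Clamp(static_cast<int>(Y - 0.344136 * Cb - 0.714136 * Cr + 0.5));
  b = Clamp(static_cast<int>(Y + 1.772 * Cb + 0.5));
}

void YCbCrToRgbFixed(uint8_t y, uint8_t cb, uint8_t cr,
                     uint8_t& r, uint8_t& g, uint8_t& b) {
  int Y = y, Cb = cb - 128, Cr = cr - 128;
  r = Clamp(Y + ((91881  * Cr) >> 16));               // 1.402    * 65536
  g = Clamp(Y - ((22554  * Cb + 46802 * Cr) >> 16));  // 0.344136 / 0.714136
  b = Clamp(Y + ((116130 * Cb) >> 16));               // 1.772    * 65536
}
```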
In general, you don't ever need to save in Photoshop over level 10. At that point, any increase in quality typically doesn't justify the increase in file storage requirements. If the original file was saved at a lower quality to begin with, saving at a higher quality level can't improve the image quality further.
http://encode.ru/forum/showthread.php?t=133
I'm just interested in this because XnView, another free picture viewer with basic editing abilities (like IrfanView has), supports DCT with a floating-point setting. After a trip through time and many papers about the invention of MMX and SSE instructions and the past of the x86, I'm more confused than before, because a few said this calculation technique is unsafe. So what are the minimum requirements to use it? My PC supports SSE2 (P4), so it should be technically possible, but I've never read a recommendation anywhere; maybe nobody realized the resulting file is always the smallest of all settings, compared to the fast and the slow default modes.
Surely somebody can say use it or ignore it, but this setting can really reduce the size, and with bigger files a bigger difference is noticeable. For example, a wallpaper at 1204x768 can be ~10 KB smaller, while a little banner gets a maximum reduction of ~1 KB. With a few hundred photo snapshots from a digital camera at around 2 MB per file, this could be a good thing that nobody is thinking of.
The problem is, I'm just not yet sure how accurate it is visually, because there is no visible difference, but from the technical calculation side it takes only <1 sec. with common everyday hardware. I'm looking forward to your opinion about this.
We live in a time where SIMD and other facilities exist, but I think not enough programs use them; anyway, everybody could use them for free, because it's possible and a reality. I forgot to say: without negative effects on stability.
That said, I'd be hesitant to save my images using the faster, less precise implementations until I had evaluated the differences in the quality of output. With the cost of storage falling so greatly, a file savings of 10KB is not worth it to most unless the reduction in quality is absolutely imperceptible.
I need to prevent quality loss when I process and resize photos.
Any help will be highly appreciated. I need to do it in code rather than with some kind of tool like IrfanView.
Thanks
I have a question regarding compounded errors due to recompression. It makes sense that recompressing with a different factor and/or different quantization matrices would increase the error each time, and that if an image is edited, even using the same quality factor and quantization would increase the error. The part that I'm unclear on is this: doesn't the JPEG file contain enough info for you to be able to replicate the compression? That is, if the decompressed image isn't edited, couldn't you use the quantization and scale to produce a JPEG file that has no additional error? (I'm not looking to do this, I just want to make sure I understand the concepts correctly.) I've been reading a bunch of the pages on this site today and I haven't seen how the JPEG stores the scaling that goes with the quantization, but I think that value needs to be in the file to avoid having the result be saturated or muted.
Thanks,
Brett
BTW - this is by far the best set of pages on jpeg I've seen. I wish I'd found your site a few months ago!
In some sense, yes, the JPEG does store enough information to do the resave with the same parameters. The quantization tables (DQT) dictate the array of constants used to scale/descale during compression/decompression. However, the missing link is that most image editing software programs out there do not support the use of arbitrary quantization tables during save. Only a few programs out there do this, and it's really not a difficult thing to do! It would add some value if the major photo editing apps out there provided it as an option (but still allow the user to specify their own compression level if they choose).
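Since the quantization tables are stored in the file's DQT segments, they are at least easy to inspect. The following is a minimal stand-alone sketch (not JPEGsnoop code) that walks a JPEG file and dumps each quantization table; note that the 64 divisors are stored in zigzag order.

```cpp
// Minimal sketch: scan a JPEG file for DQT (quantization table) segments and
// print their contents. A precision nibble of 1 means 16-bit table entries.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
  if (argc < 2) { std::printf("usage: dumpdqt file.jpg\n"); return 1; }
  std::FILE* f = std::fopen(argv[1], "rb");
  if (!f) { std::printf("cannot open %s\n", argv[1]); return 1; }
  std::vector<uint8_t> data;
  int c;
  while ((c = std::fgetc(f)) != EOF) data.push_back(static_cast<uint8_t>(c));
  std::fclose(f);

  for (size_t i = 0; i + 3 < data.size(); ++i) {
    if (data[i] != 0xFF || data[i + 1] != 0xDB) continue;   // DQT marker
    size_t len = (data[i + 2] << 8) | data[i + 3];          // includes length bytes
    size_t pos = i + 4;
    size_t end = std::min(i + 2 + len, data.size());
    while (pos < end) {                    // a segment may hold several tables
      int precision = data[pos] >> 4;      // 0 = 8-bit entries, 1 = 16-bit
      int table_id  = data[pos] & 0x0F;    // 0 = luminance, 1+ = chrominance
      ++pos;
      std::printf("DQT table %d (%d-bit), zigzag order:\n",
                  table_id, precision ? 16 : 8);
      for (int k = 0; k < 64 && pos < end; ++k) {
        int v;
        if (precision) {
          if (pos + 1 >= end) { pos = end; break; }
          v = (data[pos] << 8) | data[pos + 1];
          pos += 2;
        } else {
          v = data[pos++];
        }
        std::printf("%4d%s", v, (k % 8 == 7) ? "\n" : " ");
      }
    }
  }
  return 0;
}
```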
This info helped me a lot... I have a question... I am looking for an image format which has sub-object-level intelligence, but I learned here that JPEG treats the whole image as a single object. Can you tell me of any other image formats having this intelligence, or is this even a relevant question to ask about image formats?
thanks
vishnu
Recently I have been looking for JPEG compression code in C or C++ based on a self-defined quality table. Can any of you give me some information about this topic? Thank you very much!