@arithma, we're in mid-2011 and you're still debating CPU load?
That sounds unprofessional, because compression is a must. This is the era of "how can we reduce bandwidth consumption".
With virtualization and mobile devices (and, more generally, everything else), CPU load is an important factor.
Try this: stream a file through a script, then serve it directly off the hard drive, and enjoy the difference in performance. I am trying to research a hunch, but I couldn't find any concrete references about DMA that web servers could be using (from hard disk to memory) to serve static files.
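For what it's worth, the mechanism being guessed at here does exist: the `sendfile(2)` syscall lets the kernel move bytes from a file descriptor to another without a userspace read/write loop, and web servers commonly use it for static files. A minimal sketch via Python's `os.sendfile` (Linux; in a real server the destination would be a socket, here it's a second file just to demonstrate the call):

```python
import os
import tempfile

# Create a fake "static file" to serve.
src = tempfile.NamedTemporaryFile(delete=False)
src.write(b"hello static file" * 100)
src.flush()

dst = tempfile.NamedTemporaryFile(delete=False)

with open(src.name, "rb") as fin:
    size = os.fstat(fin.fileno()).st_size
    sent = 0
    while sent < size:
        # The kernel copies the bytes directly; the script never
        # touches the data, which is the CPU offload discussed above.
        sent += os.sendfile(dst.fileno(), fin.fileno(), sent, size - sent)

print(sent)  # total bytes transferred
```

A script that `read()`s the file and `echo`s it does the same copy through userspace buffers, which is one plausible reason for the performance gap mentioned above.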
If you are on shitty shared hosting (which most shared hosts are), performance shouldn't be your concern, because it will be fair at best.
This is a very obtuse and strange statement. On whatever hardware I'm allowed, I should give my client, or deliver for my own software, peak performance. What I am suggesting is "measuring", because it is worth measuring. I am not claiming to be right or wrong. It's not too difficult to do either, so the cost savings may justify the research (at least as a write-up by someone for everyone else, so that not everyone has to repeat it).
Even so, compression is still a good option in this environment because of how the TCP protocol works: longer transmissions mean longer CPU wait cycles, and don't forget the network latency and the ARP packets.
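Since the whole debate is about compression cost vs. bytes saved, it's easy to put rough numbers on it. A quick sketch timing zlib (the same DEFLATE used by gzip) at a few levels on a made-up repetitive HTML-ish payload (the payload and sizes are invented for illustration):

```python
import time
import zlib

# Fake HTML-like payload; real pages are similarly repetitive,
# which is why they compress well.
payload = b"<div class='post'><p>some forum text</p></div>\n" * 2000

for level in (1, 6, 9):
    t0 = time.perf_counter()
    out = zlib.compress(payload, level)
    dt = time.perf_counter() - t0
    # level, original size, compressed size, time spent compressing
    print(level, len(payload), len(out), round(dt * 1000, 3), "ms")
```

Fewer bytes on the wire means fewer TCP segments and fewer round trips, which is exactly the trade-off against CPU time being argued here.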
I am talking about lebgeeks in particular. We have a peak time, since all of the people are from the same area and finish work at almost the same time.
samer may be able to give us some figures here.
Although I don't have numbers, I don't think that compression is more costly than a single fairly complex database operation.
I am not sure about this yet, but it could be that reading directly from the hard disk and sending to the network is a great offload from the CPU. That would explain why scripts that stream files usually perform much worse than serving the files directly off the server's disk.
So at worst you wouldn't want to compress frequently changing dynamic content, but at the very least you'd want to compress all the static files: JS, CSS, and so on.
This of course assumes the web server knows how to cache the compressed output.
Although I don't have numbers, I don't think that compression is more costly than a single fairly complex database operation.
Database servers are usually distinct from the web servers, especially in shared environments, so your argument is moot.
and don't forget the network latency and the ARP packets
I can forget about them and just assume a network latency that will always exist, no matter what happens. But if that's so, why should I care about it? I guess we don't.
compression is still a good option in this environment because of how the TCP protocol works: longer transmissions mean longer CPU wait cycles, and don't forget the network latency and the ARP packets.
Usually, servers stream script output. (That's why in PHP, for example, you'd want to echo directly rather than accumulate output in a string.) So, almost always, by the time the script has finished executing, the client has already been receiving content for a while. This typically isn't possible with dynamic compression (the response gets buffered so it can be compressed), and it means the CPU is a little busier with each request. It also means the user starts to receive the response a little later. Which one finishes later is what we're arguing about. I'm saying it's worth benchmarking, since there are just a lot of variables, some of which we're not even familiar with or can't easily reason about: concurrent client loads and their interplay, the threshold number of concurrent requests before the server has to start queuing them, whether the particular web server uses DMA to send files from disk to network without CPU intervention.
If compression is the bottleneck, imo your application is not more complex than a fairly simple html page lol :P
Your argument is that a dynamically generated html page, which a lot of people serve, isn't worth the attention? Something is wrong somewhere: either in your argument, or in your thinking process.
Note: I do understand that compression ought to be enabled by default. However, not everyone can afford decent server technology (which starts at $100 a month). I can't accept the argument that if you're feeling limited by your budget, then you should spend more money. It's illogical.