…so I was about to profile a web application written in PHP, and as we all do when doing so, I scattered echo statements with microtime() differences around the code. (OK, OK, some of you might log them instead.)
To my greatest surprise, the part where the application showed a significant slowdown in some cases was a simple echo statement. Well, I won't be able to optimize that, will I? So why was this happening?
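The ad-hoc timing described above looks roughly like this (a minimal sketch; the usleep() call stands in for whatever code is being measured):

```php
<?php
// Poor man's profiling: take microtime() before and after a section
// and echo the difference.
$t0 = microtime(true);            // high-resolution start time as a float
usleep(50000);                    // stand-in for the code under test (~50 ms)
$elapsed = microtime(true) - $t0;
printf("section took %.4f s\n", $elapsed);
```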
After a bit of googling I came across this site, where they suggest splitting the echo up into smaller chunks to avoid network fragmentation. Hmm, I gave it a try, but it made no difference. It is a rather old post, so I scrolled down through the comments (most of them get bogged down in determining the MTU and completely miss the point) to see whether people still face this problem and what their solutions are. The very last comment (at that time) suggested simply using output buffering.
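For reference, the chunking workaround I tried was along these lines (the helper name and chunk size are my own; the site left those details open):

```php
<?php
// Hypothetical sketch of the "split the echo into smaller chunks" advice:
// emit the string in fixed-size pieces instead of one big echo.
function echo_chunked(string $s, int $chunk = 4096): void
{
    foreach (str_split($s, $chunk) as $piece) {
        echo $piece;
    }
}
```

As noted above, this made no measurable difference in my case.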

Well, my application was already using output buffering, but as I inspected the code more closely, I found that one of the modules included early in the page generation simply switched it off!
Damn! Switching it back on sped the application up again.

So what is happening in the background?
I am not sure about the implementation of echo, but my guess is that when echoing a large string over a slow network without output buffering, echo blocks until the last byte has been sent. In other words, the network latency comes down to the PHP level.
With output buffering, however, you can generate the whole page quickly and then let the webserver send it to the client over the network; all the latency is offloaded from PHP and will not show up in the execution time.
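The fix itself is a one-liner. A minimal sketch of buffered output (using ob_get_clean() here instead of ob_end_flush(), so the demo captures the buffer rather than writing it to the client):

```php
<?php
// With output buffering on, echo writes into a memory buffer instead of
// the client socket, so a large echo returns immediately.
ob_start();                           // start output buffering
echo str_repeat('x', 64 * 1024);      // a "large" echo: goes to the buffer
$len  = ob_get_length();              // bytes currently buffered
$page = ob_get_clean();               // fetch and discard the buffer (demo only);
                                      // in production you'd use ob_end_flush()
                                      // to hand the whole page to the webserver
```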

To sum up, output buffering is your true friend.

This is an archive post. It represents a point of view in the past. Facts might have changed, events might be interpreted differently as of today. Links might be broken.