PHPFixing
Showing posts with label pipe. Show all posts

Tuesday, November 15, 2022

[FIXED] How to set pipefail in a Makefile

 November 15, 2022     error-handling, makefile, pipe     No comments   

Issue

Consider this Makefile:

all: 
    test 1 -eq 2 | cat
    echo 'done'

It executes without reporting any error, even though test 1 -eq 2 fails.

I've heard of set -o pipefail that I may use like this:

all: 
    set -o pipefail;     \
    test 1 -eq 2 | cat;  \
    echo 'done'

Apart from the fact that it does not work, this style is painful to write.

Another solution would be to use temporary files. I would like to avoid it.

What other solution can I use?


Solution

For anything more complicated than single commands I generally prefer using a script. That way you control the interpreter completely (via the shebang line), and you can put more complicated commands together rather than trying to shoe-horn it into effectively a single line. For example:

Makefile:

all:
    ./my.sh

my.sh:

#!/usr/bin/env bash
set -o errexit -o pipefail
test 1 -eq 2 | cat
echo 'done'

That said, the exit code of a Makefile command block like yours is the exit code of the last command, since you separate the commands with ;. You can use && instead, so that execution stops at the first failure (the equivalent of errexit), like this:

set -o pipefail && test 1 -eq 2 | cat && echo 'done'
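To see what pipefail changes, here is a quick demonstration you can run directly in a shell (assuming bash, since pipefail is a bash option and make's default /bin/sh may not support it):

```shell
# Without pipefail, the pipeline's status is that of its last command (cat):
bash -c 'test 1 -eq 2 | cat'; echo "without pipefail: $?"   # without pipefail: 0

# With pipefail, the failing test(1) propagates through the pipe:
bash -c 'set -o pipefail; test 1 -eq 2 | cat'; echo "with pipefail: $?"   # with pipefail: 1
```

This is why the recipe in the question "succeeds": cat exits 0 and masks the failure of test.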


Answered By - l0b0
Answer Checked By - Gilberto Lyons (PHPFixing Admin)

Sunday, October 30, 2022

[FIXED] How to properly fread & fwrite from & to a pipe

 October 30, 2022     c, eof, linux, pipe, stdio     No comments   

Issue

I have this code which acts as a pipe between two shell invocations.

It reads from a pipe, and writes into a different one.

#include <stdio.h>
#include <stdlib.h>


#define BUFF_SIZE (0xFFF)

/*
 *  $ cat /tmp/redirect.txt |less
 */
int main(void)
{
    FILE    *input;
    FILE    *output;
    int     c;
    char    buff[BUFF_SIZE];
    size_t  nmemb;

    input   = popen("cat /tmp/redirect.txt", "r");
    output  = popen("less", "w");
    if (!input || !output)
        exit(EXIT_FAILURE);

#if 01
    while ((c = fgetc(input))  !=  EOF)
        fputc(c, output);
#elif 01
    do {
        nmemb   = fread(buff, 1, sizeof(buff), input);
        fwrite(buff, 1, nmemb, output);
    } while (nmemb);
#elif 01
    while (feof(input) != EOF) {
        nmemb   = fread(buff, 1, sizeof(buff), input);
        fwrite(buff, 1, nmemb, output);
    }
/*
 * EDIT: The previous implementation is incorrect:
 * feof() returns non-zero if EOF is set
 * EDIT2:  Forgot the !.  This solved the problem.
 */
#elif 01
    while (feof(input)) {
        nmemb   = fread(buff, 1, sizeof(buff), input);
        fwrite(buff, 1, nmemb, output);
    }
#endif

    pclose(input);
    pclose(output);

    return  0;
}

I want it to be efficient, so I want to implement it with fread() & fwrite(). These are the three ways I tried.

The first one is implemented with fgetc() & fputc(), so it will be very slow. However, it works fine, because it checks for EOF and therefore waits until cat (or whatever shell invocation I use) finishes its job.

The second one is faster, but I'm concerned that I don't check for EOF: if there is any moment when the pipe is empty (even though the shell invocation hasn't finished, so it may not stay empty), it will close the pipe and end.

The third implementation is what I would like to do, and it mostly works (all the text is received by less), but for some reason it gets stuck and doesn't close the pipe (it seems to never get EOF).

EDIT: The third implementation is buggy. The fourth tries to solve the bug, but now less doesn't receive anything.

How should this be properly done?


Solution

First of all, I think you are having problems with buffering rather than with efficiency. That is a common stumbling block when first dealing with the stdio package.

Second, the best (and simplest) implementation of a simple data copier from input to output is the following snippet (adapted from the first edition of K&R).

while((c = fgetc(input)) != EOF) 
    fputc(c, output);

(Well, not a literal copy: there, K&R use stdin and stdout as the FILE * streams, together with the simpler getchar() and putchar(c) calls.) When you try to do better than this, you normally fall into false assumptions, such as the fallacy that there is no buffering, or that each call costs a system call.

stdio does full buffering when standard output is a pipe (indeed, it does full buffering always, except when the file descriptor makes isatty(3) return true). So if you want to see the output as soon as it is available, you should at least disable output buffering (with something like setbuf(output, NULL);) or fflush() your output at suitable points, so that data doesn't sit in the output buffer while you are waiting on the input side for more.

What seems to be happening is that the output of the less(1) program is not visible because the data is being buffered inside your program. And that is exactly the case: suppose your program (which, despite handling individual characters, is doing full buffering) is fed a full input buffer of BUFSIZ characters. A lot of single fgetc() calls are made in a loop, followed by a lot of fputc() calls (exactly BUFSIZ of each), and the output buffer fills up, but it is not written: it needs one more character to force a flush. So until the first two BUFSIZ chunks of data have arrived, nothing gets written to less(1).

A simple and efficient fix is to check, after each fputc(c, output);, whether the character is a '\n', and call fflush(output); when it is; this way you write a line of output at a time.

fputc(c, output);
if (c == '\n') fflush(output);

If you don't do something like this, buffering happens in BUFSIZ chunks, and normally nothing is written before that amount of data has accumulated on the output side. And remember always to close your streams (stdio does this for you at normal exit), or you can lose output in case your process gets interrupted.

IMHO the code you should use is:

while ((c = fgetc(input))  !=  EOF) {
    fputc(c, output);
    if (c == '\n') fflush(output);
}
pclose(input);   /* streams opened with popen() are closed with pclose() */
pclose(output);

for the best performance, while not blocking unnecessarily the output data in the buffer.

BTW, doing fread() and fwrite() of one char at a time is a waste of time and a way to complicate things a lot (and it is error prone). fwrite() of one char will not avoid the use of buffers, so you won't get more performance than with fputc(c, output);.

BTW (bis): if you want to do your own buffering, don't call stdio functions; use the read(2) and write(2) system calls directly on the file descriptors. A good approach is:

int input_fd = fileno(input); /* input is your old FILE * given by popen() */
int output_fd = fileno(output);
char your_buffer[BUFSIZ];
ssize_t n;

while ((n = read(input_fd, your_buffer, sizeof your_buffer)) > 0) {
    write(output_fd, your_buffer, n);
}
switch (n) {
case 0: /* we got EOF */
    ...
    break;
default: /* we got an error */
    fprintf(stderr, "error: read(): %s\n", strerror(errno));
    ...
    break;
} /* switch */

but this will awaken your program only when the buffer is fully filled with data, or there's no more data.

If you want to feed your data to less(1) as soon as you have one line for less, then you can disable completely the input buffer with:

setbuf(input, NULL);
int c; /* int, never char, see manual page */
while ((c = fgetc(input)) != EOF) {
    fputc(c, output);
    if (c == '\n') fflush(output);
}

And you'll get less(1) working as soon as you have produced a single line of output text.

What exactly are you trying to do? (It would be nice to know, as you seem to be reinventing the cat(1) program, but with reduced functionality.)



Answered By - Luis Colorado
Answer Checked By - Terry (PHPFixing Volunteer)

Thursday, August 18, 2022

[FIXED] How to capture stdout output but also show progress

 August 18, 2022     go, goroutine, output, pipe, stdout     No comments   

Issue

I have a function named print() that prints numbers every 2 seconds; this function runs in a goroutine.

I need to capture its stdout output into a variable and print that, not just once, but until the function finishes.
I need some scanner in an infinite loop that scans the stdout and prints it; once the function is done, the scanner should be done too.

I tried to use this answer but it doesn't print anything.
This is what I tried to do:

package main

import (
    "bufio"
    "fmt"
    "os"
    "sync"
    "time"
)


func print() {

    for i := 0; i < 50; i++ {
        time.Sleep(2 * time.Second)
        fmt.Printf("hello number: %d\n", i)
    }
}

func main() {
    old := os.Stdout // keep backup of the real stdout

    defer func() { os.Stdout = old }()
    r, w, _ := os.Pipe()
    os.Stdout = w

    go print()


    var wg sync.WaitGroup

    c := make(chan struct{})
    wg.Add(1)


    defer wg.Done()
    for {
        <-c
        scanner := bufio.NewScanner(r)
        for scanner.Scan() {
            m := scanner.Text()
            fmt.Println("output: " + m)
        }

    }

    c <- struct{}{}

    wg.Wait()
    fmt.Println("DONE")

}  

I also tried to use io.Copy to read the buffer like this, but it didn't work either:

package main

import (
    "bytes"
    "fmt"
    "io"
    "os"
    "time"
)


func print() {

    for i := 0; i < 50; i++ {
        time.Sleep(2 * time.Second)
        fmt.Printf("hello number: %d\n", i)
    }
}

// https://blog.kowalczyk.info/article/wOYk/advanced-command-execution-in-go-with-osexec.html
func main() {
    old := os.Stdout // keep backup of the real stdout

    defer func() { os.Stdout = old }()
    r, w, _ := os.Pipe()
    os.Stdout = w

    go print()

    fmt.Println("DONE 1")
    outC := make(chan string)

    for {

        var buf bytes.Buffer
        io.Copy(&buf, r)
        outC <- buf.String()

        out := <-outC
        fmt.Println("output: " + out)
    }

    // back to normal state
    w.Close()


    fmt.Println("DONE")

}

Solution

It is possible to run print() as a "black box" and capture its output, though it is a little bit tricky and does not work on the Go Playground.

package main

import (
    "bufio"
    "fmt"
    "os"
    "runtime"
    "time"
)


func print() {
    for i := 0; i < 50; i++ {
        time.Sleep(100 * time.Millisecond)
        fmt.Printf("hello number: %d\n", i)
    }
}

func main() {

    var ttyName string
    if runtime.GOOS == "windows" {
        fmt.Println("*** Using `con`")
        ttyName = "con"
    } else {
        fmt.Println("*** Using `/dev/tty`")
        ttyName = "/dev/tty"
    }

    f, err := os.OpenFile(ttyName, os.O_WRONLY, 0644)
    if err != nil {
        panic(err)
    }
    defer f.Close()

    r, w, _ := os.Pipe()
    oldStdout := os.Stdout
    os.Stdout = w
    defer func() {
        os.Stdout = oldStdout
        fmt.Println("*** DONE")
    }()

    fmt.Fprintln(f, "*** Stdout redirected")

    go func() {
        print()
        w.Close()
        r.Close()
    }()

    c := make(chan struct{})
    go func() { c <- struct{}{} }()
    defer close(c)

    <-c
    scanner := bufio.NewScanner(r)
    for scanner.Scan() {
        m := scanner.Text()
        fmt.Fprintln(f, "output: "+m)
    }
}
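Stripped of the /dev/tty redirection, the core capture pattern in that answer reduces to the following sketch (shorter output here, and the captured lines are printed after restoring os.Stdout):

```go
package main

import (
    "bufio"
    "fmt"
    "os"
)

func main() {
    r, w, _ := os.Pipe()
    oldStdout := os.Stdout
    os.Stdout = w // everything fmt.Print* writes now lands in the pipe

    go func() {
        for i := 0; i < 3; i++ {
            fmt.Printf("hello number: %d\n", i)
        }
        w.Close() // closing the write end terminates the scanner loop below
    }()

    var got []string
    scanner := bufio.NewScanner(r)
    for scanner.Scan() {
        got = append(got, scanner.Text())
    }
    os.Stdout = oldStdout // restore before printing the result

    for _, line := range got {
        fmt.Println("output: " + line)
    }
}
```

The key points are that the producing goroutine must close the write end so the scanner sees EOF, and that anything you want visible on the real terminal must go to the saved oldStdout (or, as in the answer above, to the tty directly).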



Answered By - maxim_ge
Answer Checked By - Terry (PHPFixing Volunteer)

Wednesday, August 17, 2022

[FIXED] How to pipe stdout while keeping it on screen ? (and not to a output file)

 August 17, 2022     bash, output, pipe, shell, stdout     No comments   

Issue

I would like to pipe standard output of a program while keeping it on screen.

With a simple example (echo use here is just for illustration purpose) :

$ echo 'ee' | foo
ee <- the output I would like to see

I know tee can copy stdout to a file, but that's not what I want:
$ echo 'ee' | tee output.txt | foo

I tried
$ echo 'ee' | tee /dev/stdout | foo
but it does not work, since tee's output to /dev/stdout is itself piped to foo.


Solution

Here is a solution that works on any Unix / Linux implementation, assuming it cares to follow the POSIX standard. It works in some non-Unix environments like Cygwin too.

echo 'ee' | tee /dev/tty | foo

Reference: The Open Group Base Specifications Issue 7, IEEE Std 1003.1, 2013 Edition, §10.1:

/dev/tty

In each process, a synonym for the controlling terminal associated with the process group of that process, if any. It is useful for programs or shell procedures that wish to be sure of writing messages to or reading data from the terminal no matter how output has been redirected. It can also be used for applications that demand the name of a file for output, when typed output is desired and it is tiresome to find out what terminal is currently in use.

Some environments like Google Colab have been reported not to implement /dev/tty while still having their tty command returning a usable device. Here is a workaround:

tty=$(tty)
echo 'ee' | tee "$tty" | foo

or with an ancient Bourne shell:

tty=`tty`
echo 'ee' | tee "$tty" | foo


Answered By - jlliagre
Answer Checked By - Willingham (PHPFixing Volunteer)

Tuesday, August 9, 2022

[FIXED] What are the parameters for the number Pipe - Angular 2

 August 09, 2022     angular, decimal, pipe     No comments   

Issue

I have used the number pipe below to limit numbers to two decimal places.

{{ exampleNumber | number : '1.2-2' }}

I was wondering what the logic behind '1.2-2' is. I have played around with these values, trying to get a pipe that formats to zero decimal places, but to no avail.


Solution

The parameter has this syntax:

{minIntegerDigits}.{minFractionDigits}-{maxFractionDigits}

So your example of '1.2-2' means:

  • A minimum of 1 digit will be shown before decimal point
  • It will show at least 2 digits after decimal point
  • But not more than 2 digits
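Following that syntax, the zero-decimal-places case asked about would use '1.0-0' (minimum zero and maximum zero fraction digits). For example, assuming exampleNumber is 3.14159:

```
{{ exampleNumber | number : '1.0-0' }}   <!-- renders "3" -->
```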


Answered By - rinukkusu
Answer Checked By - David Goodson (PHPFixing Volunteer)

Friday, July 22, 2022

[FIXED] How to create self-decompressing executables on linux when you cannot execve a pipe

 July 22, 2022     c, exec, linux, pipe     No comments   

Issue

I've been working on a small hack to get the filesize of an executable down. I'm aware there exist tools that do executable compressing properly, but this is more for my own enjoyment than anything serious.

My idea is to compress the executable with gzip, then embed it in another c program, called the launcher, as an array. When the launcher runs, it sets up a piping system like so:

parent launcher -> fork 1 of launcher -> fork 2 of launcher

fork 1 turns itself into gzip, so it decompresses whatever the parent feeds it, and spits out the decompressed version to fork 2.

Here's where the hack kicks in. Fork 2 tries to exec the file "/dev/fd/n", where n is the file number of the pipe that goes from fork 1 to fork 2. In essence this means fork 2 will try to execute whatever binary gzip spits out.

However, this doesn't work (surprise surprise.) I tried stracing my sample implementation and the line that does the execv on "/dev/fd/n" returns -1 EACCES (Permission denied). However, if I open up a terminal and run ls -l /dev/fd/ I get something like:

lrwx------ 1 blackle users 64 Nov 10 05:14 0 -> /dev/pts/0
lrwx------ 1 blackle users 64 Nov 10 05:14 1 -> /dev/pts/0
lrwx------ 1 blackle users 64 Nov 10 05:14 2 -> /dev/pts/0
lr-x------ 1 blackle users 64 Nov 10 05:14 3 -> /proc/17138/fd

All of them have +x permission for the user (me). This means it should be executable, no? Or is this just a strange kernel edge case that reports a permission error when it really can't execute the target because it's not a regular file?

UPDATE after nearly 7 years

With the advent of linux "memfd"s, it's now possible to create self-decompressing executables on linux that don't touch the filesystem. See: https://gitlab.com/PoroCYon/vondehi


Solution

Only files that can be mmaped can be executed. A pipe unfortunately cannot be mmaped this way, due to its sequential nature and limited buffer size (the loader may need to re-read earlier code, which would already be gone after having been read once).

You would have much more luck if, instead of using a pipe, you created a file in a ramfs, mmaped it into the parent's address space, copied the uncompressed code into the mapping, had the child exec the file in the ramfs, and finally unlinked the file in the parent so it is automatically freed when the child exits.

Hope this helps, if anything is unclear please comment.



Answered By - Vality
Answer Checked By - Marilyn (PHPFixing Volunteer)

Tuesday, July 12, 2022

[FIXED] Which kind of inter process communication (ipc) mechanism should I use at which moment?

 July 12, 2022     ipc, message, pipe, process, sockets     No comments   

Issue

I know that there are several methods of inter-process communication (ipc), like:

  • File
  • Signal
  • Socket
  • Message Queue
  • Pipe
  • Named pipe
  • Semaphore
  • Shared memory
  • Message passing
  • Memory-mapped file

However, I was unable to find a list or a paper comparing these mechanisms to each other and pointing out their benefits in different environments.

E.g. I know that if I use a file which gets written by process A and read by process B, it will work on any OS and is pretty robust. On the other hand, why shouldn't I use a TCP socket? Does anyone have an overview of which methods are the most suitable in which cases?


Solution

Long story short:

  • Use lock files, mutexes, semaphores and barriers when processes compete for a scarce resource. They operate in a similar manner: several processes try to acquire a synchronisation primitive; some of them acquire it, and the others are put into a sleeping state until the primitive is available again. Use semaphores to limit the number of processes that may work with a resource at once. Use a mutex to limit that number to 1.

  • You can partially avoid using synchronisation primitives by using non-blocking thread-safe data structures.

  • Use signals, queues, pipes, events, messages, unix sockets when processes need to exchange data. Signals and events are usually used for notifying a process of something (for instance, ctrl+c in unix terminal sends a SIGINT signal to a process). Pipes, shared memory and unix sockets are for transmitting data.

  • Use sockets for networking (or, speaking formally, for exchanging data between processes located on different machines).

Long story long: take a look at Modern Operating Systems book by Tanenbaum & Bos, namely IPC chapter. The topic is vast and can't be completely covered within a list or a paper.



Answered By - u354356007
Answer Checked By - Timothy Miller (PHPFixing Admin)

Friday, July 8, 2022

[FIXED] How to remove pipe symbol in Wordpress Tags under Post Menu

 July 08, 2022     pipe, posts, tags, wordpress     No comments   

Issue

I've found an issue where the page title shows an extra pipe symbol that is not manually written in the "Edit Tag" section on one of my websites, pharmadesiccant. I just want to remove it.


Solution

The easiest approach would be to install WordPress SEO or All-In-One SEO so you can redefine the page title (without the | ).



Answered By - Mohammed Salah Eldowy
Answer Checked By - Willingham (PHPFixing Volunteer)

Thursday, December 30, 2021

[FIXED] LAMP: How to create .Zip of large files for the user on the fly, without disk/CPU thrashing

 December 30, 2021     bash, lamp, php, pipe, zip     No comments   

Issue

Often a web service needs to zip up several large files for download by the client. The most obvious way to do this is to create a temporary zip file, then either echo it to the user or save it to disk and redirect (deleting it some time in the future).

However, doing things that way has drawbacks:

  • an initial phase of intensive CPU and disk thrashing, resulting in...
  • a considerable initial delay to the user while the archive is prepared
  • very high memory footprint per request
  • use of substantial temporary disk space
  • if the user cancels the download half way through, all resources used in the initial phase (CPU, memory, disk) will have been wasted

Solutions like ZipStream-PHP improve on this by shovelling the data into Apache file by file. However, the result is still high memory usage (files are loaded entirely into memory), and large, thrashy spikes in disk and CPU usage.

In contrast, consider the following bash snippet:

ls -1 | zip -@ - | cat > file.zip
  # Note -@ is not supported on MacOS

Here, zip operates in streaming mode, resulting in a low memory footprint. A pipe has an integral buffer: when the buffer is full, the OS suspends the writing program (the program on the left of the pipe). This ensures that zip works only as fast as its output can be written by cat.

The optimal way, then, would be to do the same: replace cat with a web server process, streaming the zip file to the user as it is created on the fly. This would create little overhead compared to just streaming the files, and would have an unproblematic, non-spiky resource profile.

How can you achieve this on a LAMP stack?


Solution

You can use popen() (docs) or proc_open() (docs) to execute a unix command (eg. zip or gzip), and get back stdout as a php stream. flush() (docs) will do its very best to push the contents of php's output buffer to the browser.

Combining all of this will give you what you want (provided that nothing else gets in the way -- see esp. the caveats on the docs page for flush()).

(Note: don't use flush(). See the update below for details.)

Something like the following can do the trick:

<?php
// make sure to send all headers first
// Content-Type is the most important one (probably)
//
header('Content-Type: application/x-gzip');

// use popen to execute a unix command pipeline
// and grab the stdout as a php stream
// (you can use proc_open instead if you need to 
// control the input of the pipeline too)
//
$fp = popen('tar cf - file1 file2 file3 | gzip -c', 'r');

// pick a bufsize that makes you happy (64k may be a bit too big).
$bufsize = 65535;
$buff = '';
while( !feof($fp) ) {
   $buff = fread($fp, $bufsize);
   echo $buff;
}
pclose($fp);

You asked about "other technologies": to which I'll say, "anything that supports non-blocking i/o for the entire lifecycle of the request". You could build such a component as a stand-alone server in Java or C/C++ (or any of many other available languages), if you were willing to get into the "down and dirty" of non-blocking file access and whatnot.

If you want a non-blocking implementation, but you would rather avoid the "down and dirty", the easiest path (IMHO) would be to use nodeJS. There is plenty of support for all the features you need in the existing release of nodejs: use the http module (of course) for the http server; and use child_process module to spawn the tar/zip/whatever pipeline.

Finally, if (and only if) you're running a multi-processor (or multi-core) server, and you want the most from nodejs, you can use Spark2 to run multiple instances on the same port. Don't run more than one nodejs instance per-processor-core.


Update (from Benji's excellent feedback in the comments section on this answer)

1. The docs for fread() indicate that the function will read only up to 8192 bytes of data at a time from anything that is not a regular file. Therefore, 8192 may be a good choice of buffer size.

[editorial note] 8192 is almost certainly a platform dependent value -- on most platforms, fread() will read data until the operating system's internal buffer is empty, at which point it will return, allowing the os to fill the buffer again asynchronously. 8192 is the size of the default buffer on many popular operating systems.

There are other circumstances that can cause fread to return even less than 8192 bytes -- for example, the "remote" client (or process) is slow to fill the buffer - in most cases, fread() will return the contents of the input buffer as-is without waiting for it to get full. This could mean anywhere from 0..os_buffer_size bytes get returned.

The moral is: the value you pass to fread() as buffsize should be considered a "maximum" size -- never assume that you've received the number of bytes you asked for (or any other number for that matter).

2. According to comments on fread docs, a few caveats: magic quotes may interfere and must be turned off.

3. Setting mb_http_output('pass') (docs) may be a good idea. Though 'pass' is already the default setting, you may need to specify it explicitly if your code or config has previously changed it to something else.

4. If you're creating a zip (as opposed to gzip), you'd want to use the content type header:

Content-type: application/zip

or... 'application/octet-stream' can be used instead. (it's a generic content type used for binary downloads of all different kinds):

Content-type: application/octet-stream

and if you want the user to be prompted to download and save the file to disk (rather than potentially having the browser try to display the file as text), then you'll need the content-disposition header. (where filename indicates the name that should be suggested in the save dialog):

Content-disposition: attachment; filename="file.zip"

One should also send the Content-length header, but this is hard with this technique as you don’t know the zip’s exact size in advance. Is there a header that can be set to indicate that the content is "streaming" or is of unknown length? Does anybody know?


Finally, here's a revised example that uses all of @Benji's suggestions (and that creates a ZIP file instead of a TAR.GZIP file):

<?php
// make sure to send all headers first
// Content-Type is the most important one (probably)
//
header('Content-Type: application/octet-stream');
header('Content-disposition: attachment; filename="file.zip"');

// use popen to execute a unix command pipeline
// and grab the stdout as a php stream
// (you can use proc_open instead if you need to 
// control the input of the pipeline too)
//
$fp = popen('zip -r - file1 file2 file3', 'r');

// pick a bufsize that makes you happy (8192 has been suggested).
$bufsize = 8192;
$buff = '';
while( !feof($fp) ) {
   $buff = fread($fp, $bufsize);
   echo $buff;
}
pclose($fp);

Update: (2012-11-23) I have discovered that calling flush() within the read/echo loop can cause problems when working with very large files and/or very slow networks. At least, this is true when running PHP as cgi/fastcgi behind Apache, and it seems likely that the same problem would occur in other configurations too. The problem appears when PHP flushes output to Apache faster than Apache can actually send it over the socket. For very large files (or slow connections), this eventually causes an overrun of Apache's internal output buffer. This causes Apache to kill the PHP process, which of course causes the download to hang or complete prematurely, with only a partial transfer having taken place.

The solution is not to call flush() at all. I have updated the code examples above to reflect this, and I placed a note in the text at the top of the answer.



Answered By - Lee
Copyright © PHPFixing