
proc_open

Execute a command and open file pointers for input/output (PHP 4 >= 4.3.0, PHP 5)
resource proc_open ( string cmd, array descriptorspec, array &pipes [, string cwd [, array env [, array other_options]]] )

Example: A proc_open() example

<?php
$descriptorspec = array(
   0 => array("pipe", "r"),  // stdin is a pipe that the child will read from
   1 => array("pipe", "w"),  // stdout is a pipe that the child will write to
   2 => array("file", "/tmp/error-output.txt", "a") // stderr is a file to write to
);

$cwd = '/tmp';
$env = array('some_option' => 'aeiou');

$process = proc_open('php', $descriptorspec, $pipes, $cwd, $env);

if (is_resource($process)) {
   // $pipes now looks like this:
   // 0 => writeable handle connected to child stdin
   // 1 => readable handle connected to child stdout
   // Any error output will be appended to /tmp/error-output.txt

   fwrite($pipes[0], '<?php print_r($_ENV); ?>');
   fclose($pipes[0]);

   echo stream_get_contents($pipes[1]);
   fclose($pipes[1]);

   // It is important that you close any pipes before calling
   // proc_close in order to avoid a deadlock
   $return_value = proc_close($process);

   echo "command returned $return_value\n";
}
?>

The above example will output something similar to:

Array
(
   [some_option] => aeiou
   [PWD] => /tmp
   [SHLVL] => 1
   [_] => /usr/local/bin/php
)
command returned 0

Code Examples / Notes » proc_open

richard

[This version includes Frederick Leitner's fix from below, and also fixes another bug: if an empty file was piped into the process, the loop would hang indefinitely.]
The following code works for piping large amounts of data through a filtering program. I find it very strange that so much code is needed for this task... On entry, $stdin contains the standard input for the program. Tested on Debian Linux with PHP 5.1.2.
 $descriptorSpec = array(0 => array("pipe", "r"),
                          1 => array('pipe', 'w'),
                          2 => array('pipe', 'w'));
 $process = proc_open($command, $descriptorSpec, $pipes);
 $txOff = 0; $txLen = strlen($stdin);
 $stdout = ''; $stdoutDone = FALSE;
 $stderr = ''; $stderrDone = FALSE;
 stream_set_blocking($pipes[0], 0); // Make stdin/stdout/stderr non-blocking
 stream_set_blocking($pipes[1], 0);
 stream_set_blocking($pipes[2], 0);
 if ($txLen == 0) fclose($pipes[0]);
 while (TRUE) {
   $rx = array(); // The program's stdout/stderr
   if (!$stdoutDone) $rx[] = $pipes[1];
   if (!$stderrDone) $rx[] = $pipes[2];
   $tx = array(); // The program's stdin
   if ($txOff < $txLen) $tx[] = $pipes[0];
   stream_select($rx, $tx, $ex = NULL, NULL, NULL); // Block til r/w possible
   if (!empty($tx)) {
     $txRet = fwrite($pipes[0], substr($stdin, $txOff, 8192));
     if ($txRet !== FALSE) $txOff += $txRet;
     if ($txOff >= $txLen) fclose($pipes[0]);
   }
   foreach ($rx as $r) {
     if ($r == $pipes[1]) {
       $stdout .= fread($pipes[1], 8192);
       if (feof($pipes[1])) { fclose($pipes[1]); $stdoutDone = TRUE; }
     } else if ($r == $pipes[2]) {
       $stderr .= fread($pipes[2], 8192);
       if (feof($pipes[2])) { fclose($pipes[2]); $stderrDone = TRUE; }
     }
   }
   if (!is_resource($process)) break;
   if ($txOff >= $txLen && $stdoutDone && $stderrDone) break;
 }
 $returnValue = proc_close($process);


bzapf

whenever the result of proc_open is forgotten (e.g. a function  calling it is left), the process will be terminated immediately. This can be extremely confusing. So always globalize the variable that stores the return value of proc_open...
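A minimal sketch of the pitfall described above (the function names here are hypothetical): if only the pipes are returned, the proc_open() resource is freed when the function returns and the child is terminated immediately; returning (or globalizing) the handle keeps the child alive.

```php
<?php
// BUGGY version: $process goes out of scope when the function returns,
// so the child is killed before the caller can use the pipes.
function startWorker() {
    $spec = array(0 => array("pipe", "r"), 1 => array("pipe", "w"));
    $process = proc_open("cat", $spec, $pipes);
    return $pipes;   // $process is garbage-collected here
}

// Safe version: the caller keeps the resource, so the child survives.
function startWorkerSafe() {
    $spec = array(0 => array("pipe", "r"), 1 => array("pipe", "w"));
    $process = proc_open("cat", $spec, $pipes);
    return array($process, $pipes);
}

list($proc, $pipes) = startWorkerSafe();
$status = proc_get_status($proc);
var_dump($status['running']);   // child is still alive
fclose($pipes[0]);
fclose($pipes[1]);
proc_close($proc);
```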

falstaff

Using this function under Windows with large amounts of data is apparently futile.
These functions return 0 but do not appear to do anything useful:
stream_set_write_buffer($pipes[0], 0);
stream_set_write_buffer($pipes[1], 0);
These functions return FALSE and are also apparently useless under Windows:
stream_set_blocking($pipes[0], FALSE);
stream_set_blocking($pipes[1], FALSE);
The magic maximum buffer size I found with WinXP is 63488 bytes (62 KB). Anything larger than this results in a system hang.
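A hedged workaround sketch for the size limit mentioned above: feed the data to the child in chunks well below 62 KB rather than in one giant fwrite(). Unix 'cat' is used here as a stand-in filter, and the 32 KB chunk size is an assumption chosen to stay under the reported limit.

```php
<?php
// Push a large buffer through the pipe in sub-62KB chunks.
$data = str_repeat("x", 200000);                 // more than one write can take
$outFile = tempnam(sys_get_temp_dir(), "po");
$spec = array(
    0 => array("pipe", "r"),
    1 => array("file", $outFile, "w"),           // child stdout goes to a file
    2 => array("file", "/dev/null", "w")
);
$proc = proc_open("cat", $spec, $pipes);

$off = 0;
while ($off < strlen($data)) {
    // 32 KB chunks stay comfortably under the reported 62 KB limit
    $written = fwrite($pipes[0], substr($data, $off, 32768));
    if ($written === FALSE || $written === 0) break;
    $off += $written;
}
fclose($pipes[0]);
proc_close($proc);
clearstatcache();
echo filesize($outFile), "\n";
```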


andrew dot budd

The pty option is actually disabled in the source via an `#if 0 &&` condition. I'm not sure why it's disabled. I removed the `0 &&` and recompiled, after which the pty option works perfectly. Just a note.

mjutras

On Windows, the best way to open a process and then let the PHP script continue is to launch your program via the start command, then kill the "start" process and let your program keep running.
<?php
$descriptorspec = array(
  0 => array("pipe", "r"),   // stdin
  1 => array("pipe", "w"),   // stdout
  2 => array("pipe", "w")    // stderr
);
$process = proc_open('start notepad.exe', $descriptorspec, $pipes);
sleep(1);
proc_close($process);
?>
The start command is executed and opens Notepad; after one second the "start" command is killed, but Notepad stays open and your PHP script can continue!


ralf

The behaviour described in the following may depend on the system PHP runs on. Our platform was "Intel with Debian 3.0 Linux".
If you pass huge amounts of data (well over ~10k) to the application you run, and the application for example echoes it directly to stdout (without buffering the input), you will get a deadlock. This is because there are size-limited buffers (so-called pipes) between PHP and the application. The application puts data into the stdout buffer until it is filled, then blocks waiting for PHP to read from the stdout buffer. In the meantime PHP has filled the stdin buffer and waits for the application to read from it. That is the deadlock.
A solution to this problem may be to set the stdout stream to non-blocking (stream_set_blocking) and alternately write to stdin and read from stdout.
Just imagine the following example:
<?php
/* assume that strlen($in) is about 30k */
$descriptorspec = array(
  0 => array("pipe", "r"),
  1 => array("pipe", "w"),
  2 => array("file", "/tmp/error-output.txt", "a")
);
$process = proc_open("cat", $descriptorspec, $pipes);
if (is_resource($process)) {
  fwrite($pipes[0], $in);
  /* fwrite writes to stdin; 'cat' will immediately write the data from stdin
   * to stdout and block when the stdout buffer is full. It will then not
   * continue reading from stdin, and PHP will block here.
   */
  fclose($pipes[0]);
  $out = '';
  while (!feof($pipes[1])) {
      $out .= fgets($pipes[1], 1024);
  }
  fclose($pipes[1]);
  $return_value = proc_close($process);
}
?>


picaune

The above note on Windows compatibility is not entirely correct.
Windows dutifully passes additional handles above 2 on to the child process, starting with Windows 95 and Windows NT 3.5. Starting with Windows 2000, it even supports this capability from the command line using a special syntax (prefixing the redirection operator with the handle number).
When passed to the child, these handles are preopened for low-level I/O (e.g. _read) by number. The child can reopen them for high-level I/O (e.g. fgets) using the _fdopen or _wfdopen functions, and can then read from or write to them the same way it would stdin or stdout.
However, child processes must be specially coded to use these handles, and if the end user does not know to use them (e.g. "openssl < commands.txt 3< cacert.der") and the program is not smart enough to check, it could cause errors or hangs.


mendoza

Since I don't have access to PAM via Apache, suexec on, nor access to /etc/shadow, I coughed up this way of authenticating users based on the system user's details. It's really hairy and ugly, but it works.
<?php
function authenticate($user, $password) {
 $descriptorspec = array(
    0 => array("pipe", "r"),  // stdin is a pipe that the child will read from
    1 => array("pipe", "w"),  // stdout is a pipe that the child will write to
    2 => array("file", "/dev/null", "w") // stderr is discarded
 );
 $process = proc_open("su ".escapeshellarg($user), $descriptorspec, $pipes);
 if (is_resource($process)) {
   // $pipes now looks like this:
   // 0 => writeable handle connected to child stdin
   // 1 => readable handle connected to child stdout
   fwrite($pipes[0], $password);
   fclose($pipes[0]);
   fclose($pipes[1]);
   // It is important that you close any pipes before calling
   // proc_close in order to avoid a deadlock
   $return_value = proc_close($process);
   return !$return_value;
 }
}
?>


kyle gibson

proc_open is hard coded to use "/bin/sh". So if you're working in a chrooted environment, you need to make sure that /bin/sh exists, for now.

chapman flack

One can learn from the source code in ext/standard/exec.c that the right-hand side of a descriptor assignment does not have to be an array ('file', 'pipe', or 'pty') - it can also be an existing open stream.
<?php
$p = proc_open('myfilter', array( 0 => $infile, ...), $pipes);
?>
I was glad to learn that because it solves the race condition in a scenario like this: you get a file name, open the file, read a little to make sure it's OK to serve to this client, then rewind the file and pass it as input to the filter. Without this feature, you would be limited to <?php array('file', $fname) ?> or passing the name to the filter command. Those choices both involve a race (because the file will be reopened after you have checked it's OK), and the last one invites surprises if not carefully quoted, too.
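A short sketch of the check-then-use pattern described above (the sample file contents and the 'wc -l' filter are stand-ins for any real validation and filter command): the file is opened once, inspected, rewound, and the very same stream becomes the child's stdin, so nothing is reopened behind your back.

```php
<?php
// Create a sample file to stand in for the one being validated.
$fname = tempnam(sys_get_temp_dir(), "po");
file_put_contents($fname, "line1\nline2\nline3\n");

$infile = fopen($fname, "r");
$header = fread($infile, 64);   // peek at the content to decide it's OK
rewind($infile);                // back to the beginning for the filter

$spec = array(
    0 => $infile,               // an already-open stream, not array('file', ...)
    1 => array("pipe", "w"),
    2 => array("file", "/dev/null", "w")
);
$proc = proc_open("wc -l", $spec, $pipes);
$count = trim(stream_get_contents($pipes[1]));
fclose($pipes[1]);
proc_close($proc);
echo $count, "\n";   // the filter saw exactly the stream we validated
unlink($fname);
```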


magicaltux

Note that if you need to be "interactive" with the user *and* the opened application, you can use stream_select to see if something is waiting on the other side of the pipe.
Stream functions can be used on pipes such as:
- pipes from popen and proc_open
- pipes from fopen('php://stdin') (or stdout)
- sockets (Unix or TCP/UDP)
- probably many other things, but the most important ones are listed here
More information about streams (you'll find many useful functions there):
http://www.php.net/manual/en/ref.stream.php
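A minimal sketch of the polling idea above, using 'cat' as a stand-in child: stream_select() waits until the child's stdout actually has data, so the read cannot block indefinitely.

```php
<?php
// Wait for data on the child's stdout before reading it.
$spec = array(0 => array("pipe", "r"), 1 => array("pipe", "w"));
$proc = proc_open("cat", $spec, $pipes);

fwrite($pipes[0], "hello\n");
fclose($pipes[0]);

$read = array($pipes[1]);
$write = NULL;
$except = NULL;
$data = "";
// Block for at most 5 seconds waiting for the child to produce output.
if (stream_select($read, $write, $except, 5) > 0) {
    $data = fread($pipes[1], 8192);   // safe: select said data is waiting
}
fclose($pipes[1]);
proc_close($proc);
echo $data;
```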


antti kauppinen

missilesilo at gmail dot com had a great example. However, error messages didn't work because of a wrong stderr argument.
I changed the last value 'r' of
       $descriptorspec = array(
           0 => array('pipe', 'r'),
           1 => array('pipe', 'w'),
           2 => array('pipe', 'r')
       );
to 'w' so that error messages are actually written:
       $descriptorspec = array(
           0 => array("pipe", "r"),  // stdin is a pipe that the child will read from
           1 => array("pipe", "w"),  // stdout is a pipe that the child will write to
           2 => array("pipe", "w")   // stderr is a pipe that the child will write to
       );


daniela

Just a small note in case it isn't obvious: it's possible to treat the filename as in fopen, so you can pass through the standard input from PHP like this:
$descs = array(
    0 => array("file", "php://stdin", "r"),
    1 => array("pipe", "w"),
    2 => array("pipe", "w")
);
$proc = proc_open("myprogram", $descs, $fp);


ashnazg

It seems that if you configured --enable-sigchild when you compiled PHP (which from my reading is required for you to use Oracle stuff), then return codes from proc_close() cannot be trusted.
Using the proc_open() example code above on the versions I have of PHP4 (4.4.7) and PHP5 (5.2.4), the return code is always "-1". This is also the only return code I can produce by running other shell commands, whether they succeed or fail.
I don't see this caveat mentioned anywhere except in this old bug report -- http://bugs.php.net/bug.php?id=29123


docey

If you're writing a function that processes a resource from another function, it's a good idea to check not only whether a resource has been passed to your function, but also whether it is of the right type, like so:
<?php
function workingonit($resource) {
  if (is_resource($resource)) {
    if (get_resource_type($resource) == "process") {  // e.g. a proc_open handle
      // $resource is a resource of the right type: continue
    } else {
      print("resource is of the wrong type.");
      return false;
    }
  } else {
    print("value passed is not a resource at all.");
    return false;
  }
  // do your stuff with the resource here and return
}
?>
This is especially true when working with files and process pipes, so always check what is being passed to your functions.
Here is a small list of a few resource types:
- files are of type 'file' in PHP4 and 'stream' in PHP5
- 'process' resources are opened by proc_open
- 'pipe' resources are opened by popen
By the way, the 'process' resource type was not mentioned in the documentation; I filed a bug report for this.


enrico

If you want to pass an array in $env, you MUST serialize it!
Bad example:
$env = array('pippo' => 'Hello', 'request' => $_REQUEST);
$process = proc_open('php', $descriptorspec, $pipes, $cwd, $env);
fwrite($pipes[0], '<?php print_r($_ENV["request"]); ?>');
The result is an empty array.
Good example:
$env = array('pippo' => 'Hello', 'request' => serialize($_REQUEST));
$process = proc_open('php', $descriptorspec, $pipes, $cwd, $env);
fwrite($pipes[0], '<?php print_r(unserialize($_ENV["request"])); ?>');
The result is the full array!
Bye,
Enrico


list

If you push a little more data through the pipe, it will hang forever. One simple solution on RH Linux was to do this:
stream_set_blocking($pipes[0], FALSE);
stream_set_blocking($pipes[1], FALSE);
This did not work on Windows XP, though.


missilesilo

If you just want to execute a command and get the output of the program, here is a simple object-oriented way to do it:
If there was an error detected from the STDERR output of the program, the open method of the Process class will throw an Exception. Otherwise, it will return the STDOUT output of the program.
<?php
class Process
{
   public static function open($command)
   {
       $retval = '';
       $error = '';
       $descriptorspec = array(
           0 => array('pipe', 'r'),
           1 => array('pipe', 'w'),
           2 => array('pipe', 'r')
       );
       $resource = proc_open($command, $descriptorspec, $pipes, null, $_ENV);
       if (is_resource($resource))
       {
           $stdin = $pipes[0];
           $stdout = $pipes[1];
           $stderr = $pipes[2];
           while (! feof($stdout))
           {
               $retval .= fgets($stdout);
           }
           while (! feof($stderr))
           {
               $error .= fgets($stderr);
           }
           fclose($stdin);
           fclose($stdout);
           fclose($stderr);
           $exit_code = proc_close($resource);
       }
       if (! empty($error))
           throw new Exception($error);
       else
           return $retval;
   }
}
try
{
   $output = Process::open('cat example.txt');
   // do something with the output
}
catch (Exception $e)
{
   echo $e->getMessage() . "\n";
   // there was a problem executing the command
}
?>


php dot net_manual

If you are going to allow data coming from user input to be passed to this function, then you should keep in mind the following warning that also applies to exec() and system():
http://www.php.net/manual/en/function.exec.php
http://www.php.net/manual/en/function.system.php
Warning:
If you are going to allow data coming from user input to be passed to this function, then you should be using escapeshellarg() or escapeshellcmd() to make sure that users cannot trick the system into executing arbitrary commands.
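A minimal sketch of the warning above (the filename is a made-up example): quoting hostile input with escapeshellarg() turns it into a single harmless argument instead of a second shell command.

```php
<?php
// Quote user-supplied values before interpolating them into a command line.
$userInput = "foo.txt; rm -rf /";        // hostile input from a form, say
$cmd = "ls -l " . escapeshellarg($userInput);
echo $cmd, "\n";
// Unescaped, the shell would execute "rm -rf /" after ls; escaped, the
// whole string is just one (nonexistent) filename passed to ls.
```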


joeldegan

I worked with proc_open for a while before realizing how it works with applications in real time.
This example loads up the eDonkey2000 client and reads data from it and passes in various commands and returns the results.
This is the base for an ncurses gui for edonkey I am writing in PHP.
<?php
define("DASHES", "-------------------------------------------------\n");
define("SEP", "=================================================\n");
function readit($pipes, $len = 2, $end = "> ") {
   $retval = '';
   stream_set_blocking($pipes[1], FALSE);
   while ($ret = fread($pipes[1], $len)) {
      $retval .= $ret;
      if (substr_count($ret, $end) > 0) { break; }
   }
   return $retval;
} // end function
function sendto($pipes, $str){
fwrite($pipes[0], $str."\n");
}//end function
function viewopts($pipes, $opt){
sleep(1);
sendto($pipes, $opt);
return readit($pipes);
}//end function
function sendopts($pipes, $opt){
sendto($pipes, $opt);
usleep(50);
return readit($pipes);
}//end function
$dspec = array(
  0 => array("pipe", "r"),
  1 => array("pipe", "w"),
  2 => array("file", "/tmp/eo.txt", "a"),);
$process = proc_open("donkey", $dspec, $pipes);
if (is_resource($process)) {
   readit($pipes);
   echo DASHES;
   echo viewopts($pipes, "vo");
   echo DASHES; echo SEP;echo DASHES;
   echo sendopts($pipes, "name test".rand(5,5000));
   echo DASHES; echo SEP; echo DASHES;
   echo viewopts($pipes, "vo");
   echo DASHES; echo SEP; echo DASHES;
   echo sendopts($pipes, "temp /tmp");
   echo DASHES; echo SEP; echo DASHES;
   echo viewopts($pipes, "g");
   echo DASHES;
sendto($pipes, "q");
sendto($pipes, "y");
readit($pipes);
   fclose($pipes[0]);
   fclose($pipes[1]);
   $return_value = proc_close($process);
}
?>
returns what looks like the following
-----------------------------------------------------------------
Name:                   test2555
AdminName:              admin
AdminPass:              password
AdminPort:              79
Max Download Speed:     0.00
Max Upload Speed:       0.00
Line Speed Down:        0.00
Door Port:              4662
AutoConnect:            1
Verbose:                0
SaveCorrupted:          1
AutoServerRemove:       1
MaxConnections:         45
> ----------------------------------------------------------------


jeff warner

I wanted to proc_open bash and then send a command and read the output multiple times, instead of opening bash each time. There were several "tricks":
- put a "\n" on the end of each command;
- use fflush($pipes[0]) after each fwrite($pipes[0]);
- put a sleep(1) before reading the output of the command.
Once I added all of that, I was able to send an arbitrary number of commands to bash, read the output, and close the pipes when I was finished.
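A sketch of the three tricks above in one piece, assuming a Unix system with bash on the PATH: one long-lived bash process receives several commands, with a flush after each write and a short pause before each read.

```php
<?php
// One long-lived bash, several commands.
$spec = array(
    0 => array("pipe", "r"),
    1 => array("pipe", "w"),
    2 => array("pipe", "w")
);
$proc = proc_open("bash", $spec, $pipes);
stream_set_blocking($pipes[1], 0);       // don't hang if output is short

function runCmd($pipes, $cmd) {
    fwrite($pipes[0], $cmd . "\n");      // trick 1: terminate with "\n"
    fflush($pipes[0]);                   // trick 2: flush after each write
    sleep(1);                            // trick 3: give bash time to answer
    return fread($pipes[1], 8192);
}

$a = runCmd($pipes, "echo first");
$b = runCmd($pipes, "echo second");
echo $a, $b;

fclose($pipes[0]);
fclose($pipes[1]);
fclose($pipes[2]);
proc_close($proc);
```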


mib

I thought it was highly inadvisable to fork from your web server?
Apart from that, one caveat is that the child process inherits anything that is preserved over fork from the parent (apart from the file descriptors which are explicitly closed).
Importantly, it inherits the signal handling setup, which at least with Apache means that SIGPIPE is ignored. Child processes that expect SIGPIPE to kill them, in order to get sensible pipe handling and not go into a tight write loop, will have problems unless they reset SIGPIPE themselves.
Similar caveats probably apply to other signals like SIGHUP, SIGINT, etc.
Other things preserved over fork include shared memory segments, umask and rlimits.
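A hedged sketch of the child-side fix mentioned above, for the case where the child itself is a PHP script: with the pcntl extension it can restore the default SIGPIPE disposition, so a closed pipe terminates it instead of leaving it in a tight write loop. (The pcntl extension is an assumption here; it is not available in all builds.)

```php
<?php
// Restore default SIGPIPE handling in a PHP child process.
if (function_exists("pcntl_signal")) {
    // Undo the 'ignore' disposition inherited from the Apache parent.
    $ok = pcntl_signal(SIGPIPE, SIG_DFL);
} else {
    $ok = false;   // pcntl not available (e.g. Windows builds)
}
var_dump($ok);
```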


ch

I had trouble with this function, as my script always hung as if deadlocked, until I figured out that I had to keep strictly to the following order. Trying to close everything at the end did not work!
 $process = proc_open($cmd, $descriptorspec, $pipes);
 fwrite($pipes[0], $in); fclose($pipes[0]);                 # stdin first
 $out = stream_get_contents($pipes[1]); fclose($pipes[1]);  # then stdout
 $err = stream_get_contents($pipes[2]); fclose($pipes[2]);  # then stderr
 proc_close($process);


kevin barr

I found that with disabling stream blocking I was sometimes attempting to read a return line before the external application had responded. So, instead, I left blocking alone and used this simple function to add a timeout to the fgets function:
// fgetsPending( $in,$tv_sec ) - Get a pending line of data from stream $in, waiting a maximum of $tv_sec seconds
function fgetsPending(&$in,$tv_sec=10) {
if ( stream_select($read = array($in),$write=NULL,$except=NULL,$tv_sec) ) return fgets($in);
else return FALSE;
}


mike

Here's an extremely useful function to perform
data dumps (pg_dump) on a PostgreSQL database.
It could be easily modified to zip the data into a
zip or gz file and force a browser download window.
Or it could be used as in a nightly backup script.
<?php
/**
* postgresBackup: Creates a backup of a postgresql database
* @author Michael Honaker - Symmetry Technical Consultants, LLC
* @date 2006-06-15
* @param - string - DB name
* @param - string - User name
* @param - string - Password
* @param - string - Backup directory
* @return - bool - Success or failure
**/
function postgresBackup($dbName, $dbUser, $dbPwd, $backupPath)
{
$fSuccess = FALSE;
// get rid of try..catch for PHP 4 or less
try {
ignore_user_abort(TRUE);
$file = date('YmdHis') . "_pgDBBackup.sql";
$buffer = '';
$logFile = "/tmp/pgdump-error-output.txt";
$descriptorspec = array(
 0 => array("pipe", "r"),  // stdin
 1 => array("pipe", "w"),  // stdout
 2 => array("file", "$logFile", "a") // stderr
);
// may need entire path to pg_dump
$cmd = "pg_dump -c -D -U {$dbUser} {$dbName}";
$process = proc_open("$cmd", $descriptorspec, $pipes);
if (is_resource($process)):
// $pipes now looks like this:
// 0 => writeable handle connected to child stdin
// 1 => readable handle connected to child stdout
// Any error output will be written to log file

// send the password and close the stdin
fwrite($pipes[0], "{$dbPwd}\n");
fclose($pipes[0]);

// read in the dump data and close stdout
while(!feof($pipes[1])):
$buffer .= fgets($pipes[1], 1024);
endwhile;
fclose($pipes[1]);

// close any pipes before calling
// proc_close to avoid a deadlock
$return_value = proc_close($process);

// write the file to disk and return true
if($return_value == 0):
// this could be modified to automatically
// force a file download if run from a web page

// may need to change this line for PHP 4 or less
file_put_contents($backupPath . $file, $buffer);
// successfully created backup
$fSuccess =  TRUE;
endif;
endif;
} catch(Exception $ex) {
error_log($ex->getMessage(), 3, $logFile);
return FALSE; // return so log file is not deleted
}
// comment out the following for debug information if backup fails
@unlink($logFile);
return $fSuccess;
}
// example usage
postgresBackup('mydb','myuser','pwd',"/home/db/");
?>


filippo

Example of emulating the press of the special key "F3":
fwrite($pipes[0], chr(27)."[13~");
(For other special keys, use the program 'od -c' on Linux.)
(NEEDED: a timeout for the stdout pipe, otherwise an fgets on $pipes[1] can block forever...)


mgeisler

An example of using proc_open() can be found in my program PHP Shell: http://mgeisler.net/php-shell/

andre caldas

About the comment by ch at westend dot com of 28-Aug-2003 08:46:
File streams are buffered. The data is not actually written if you do not flush the buffer. In your case, fclose has the side effect of flushing the buffer you are closing.
The program "hangs" because it tries to read data that was never written (since it is still buffered).
You must do something like:
<?php
 fwrite($fp, $data);
 fflush($fp);
 $result = fread($fp, 8192);
?>
Good luck,
   Andre Caldas.


See also: escapeshellarg, escapeshellcmd, exec, passthru, proc_close, proc_get_status, proc_nice, proc_terminate, shell_exec, system