R.batch (Was: Re: [R] Calling R from R and specifying "wait until script is finished")

Henrik Bengtsson hb at maths.lth.se
Sun May 22 15:25:47 CEST 2005


Hi. I have a package R.batch in R.classes 
[http://www.maths.lth.se/help/R/R.classes/] to simplify running multiple 
batch jobs, which might interest you.

The idea is as follows. You set up a directory structure defining a 
'JobBatch':

  <path-to>/jobs/
    src/

    input/
    output/

    erroneous/
    failed/
    finished/
    interrupted/
    running/
    todo/
      job01/
      job02/
      job03/

A 'Job' is simply a directory (above: job01/, job02/, job03/). Put code 
shared by all Jobs in src/, and put code unique to each Job in its own 
job directory, e.g. job01/setupParameters.R. All *.R files in src/, and 
then those in the job directory, are source():ed before a Job is started. 
When a Job is run, onRun(job) is called. Thus, you have to define onRun() 
in src/, with the option to override it in each job directory.
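
As a minimal sketch of how this could look (only the onRun(job) convention 
comes from the package as described above; the parameter names nReps and 
sampleSize and the result file are made up for illustration):

# src/onRun.R -- shared onRun() hook (sketch; the body is illustrative)
onRun <- function(job, ...) {
  # nReps and sampleSize are expected to be defined in each job's
  # setupParameters.R, which has already been source():ed at this point.
  res <- replicate(nReps, mean(rnorm(sampleSize)))
  save(res, file="result.RData")
}

# job01/setupParameters.R -- parameters unique to this job (illustrative)
nReps <- 50
sampleSize <- 1000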

As soon as a Job starts being processed, it is moved to running/. When a 
Job completes successfully, onFinally(job) is called and the job is moved 
to finished/. If a Job is interrupted, say by Ctrl+C or by a SIGINT sent 
from elsewhere, onInterrupt() is called and the job is moved to 
interrupted/. Similarly, if an error occurs, say by calling stop(), 
onError(job) is called and the job is moved to failed/. (If an error 
occurs while source():ing the *.R files before the job starts, the job is 
moved to erroneous/.) If you call sourceHotCode(job) once in a while in 
your onRun(job) code, any code in src/hot/ or job01/hot/ will be 
source():ed *and* removed. This allows you to fix problems (redefine 
objective functions, etc.) *while* a long, say 10-hour, job is running.
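
For example, a long-running onRun() might poll for hot fixes, and the 
error/interrupt hooks can be used for cleanup messages (a sketch; 
sourceHotCode(job) and the on*() hook names are the package's as described 
above, the bodies are made up here):

# src/onRun.R -- long-running job that polls hot/ for code updates (sketch)
onRun <- function(job, ...) {
  for (kk in 1:1000) {
    sourceHotCode(job)   # source() and remove any fixes dropped into hot/
    Sys.sleep(1)         # stand-in for one unit of real work
  }
}

# Optional per-event hooks (bodies illustrative)
onError <- function(job, ...) {
  cat("Job failed; see the log files for details.\n")
}

onInterrupt <- function(job, ...) {
  cat("Job was interrupted; partial results remain in the job directory.\n")
}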

When a Job runs, its working directory is set to the job directory itself, 
e.g. running/job01/. Thus, result files, images, etc. that are written end 
up naturally in each job directory. You can also write to getOutputPath(), 
which is the output/ directory. Log files are written to getLogPath(), 
which defaults to output/.
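
For instance, inside onRun(job) one might write per-job files to the 
current (job) directory and a shared file to output/ (a sketch; I assume 
getOutputPath() is called on the job object, and the file names are made 
up):

# Inside onRun(job):
png("trace.png")                    # ends up in running/jobNN/
plot(cumsum(rnorm(100)), type="l")
dev.off()

sumFile <- file.path(getOutputPath(job), "summary.txt")
cat("job done at", date(), "\n", file=sumFile, append=TRUE)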

Each Job can access common data in the input/ directory via getInputPath(). 
Note that under Unix, input/ can be a soft link to another directory. To 
provide the same functionality under Windows, Windows shortcut files, say 
input.lnk, are recognized and followed when getInputPath() is used. [This 
actually holds for other directories too; if multiple batches share the 
same source code, you can link src/.]
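
For example, a data set shared by all jobs could be read like this 
(sketch; again I assume getInputPath() is called on the job object, and 
data.csv is a made-up file name):

# Inside onRun(job):
dataFile <- file.path(getInputPath(job), "data.csv")
dat <- read.csv(dataFile)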

Given the above structure, you run all Jobs, one by one, with:

library(R.batch)
batch <- JobBatch("<path-to>/jobs/")
run(batch)    # processes the jobs one by one; returns when all are done
# Try Ctrl+C; rerun with run(batch).
print(batch)  # gives a summary of the status of all jobs

Logging and everything else is taken care of automatically.

The code is written such that it should be possible for several R sessions 
to operate on the same batch set simultaneously; lock files are used to 
coordinate this. I used this last summer to run batch jobs on 30+ 
computers sharing the same file system.
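
In practice, each machine can start the same small worker script, e.g. 
with R CMD BATCH (a sketch, building only on the assumption above that 
concurrent run() calls are coordinated through lock files):

# worker.R -- launch on each machine, e.g.:  R CMD BATCH worker.R
library(R.batch)
batch <- JobBatch("<path-to>/jobs/")   # the same shared directory everywhere
run(batch)                             # lock files keep workers from colliding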

Want to try it? Try this:

 > install.packages("R.classes", contriburl="http://www.maths.lth.se/help/R")
 > library(R.batch)
 > example(JobBatch)

and a batch of Mandelbrot sets (from Martin Maechler's R-help example) 
will be generated together with images.

Warning: The package works, but the API is not fixed, meaning it may 
change in future releases. However, the general idea should remain the 
same. Currently I feel that the names of some methods and directories are 
a little bit confusing; feedback on this is appreciated.

Future: Recently I have been working on adding dependency control between 
jobs, so that certain jobs are processed before others. This is not 
included in the current version. Some kind of mechanism for restarting 
interrupted jobs from where they were interrupted would also be very nice, 
but this is very tricky and will probably require modification of the R 
engine, which is beyond my skills.

Cheers

Henrik Bengtsson


Lapointe, Pierre wrote:
> Hello,
> 
> Let's say I have 50 R scripts to run.  What would be the most efficient way
> to run them?
> 
> I thought I could do multiple Rterms in a DOS batch file:
> 
> Ex:
> Rterm < 1.R > 1.txt
> Rterm < 2.R > 2.txt
> ...
> Rterm < 50.R > 50.txt
> 
> However, I'm afraid they will all open at the same time.   I know I could
> pause the batch file with something like: 
> 
> PING 1.1.1.1 -n 1 -w 60000 >NUL  (to delay 60 seconds)
> 
> But that would require that I know how long each of my scripts take.
> 
> Is there an easier way?  Something like calling R from R and specifying that
> the script has to be finished before continuing.
> 
> Thanks
> 
> Pierre Lapointe



