[R] Making a series of similar, but modified .r files - suggested method(s)?

Barry Rowlingson b.rowlingson at lancaster.ac.uk
Sat Aug 21 20:14:27 CEST 2010


On Sat, Aug 21, 2010 at 6:48 PM, Laura S <leslaura at gmail.com> wrote:
> Dear all:
>
> Any suggestions are much appreciated. I am looking for a way to make a
> series of similar, but slightly modified, .r files.
>
> My issue is automating making 320 .r files that change the for(i in 1:x) in
> my base .r file (as well as other elements, e.g., the load(...),
> setwd(...)). For smaller jobs running on a single computer with batch files,
> I have been manually changing the for(i in 1:x) line, etc..
>
> Why does this matter to me? I am planning on running a simulation experiment
> on a linux cluster as a serial job. Although not elegant, it has been
> suggested I make 320 .r files so qsub runs one .r file and then selects
> other jobs. Thus, the manual route I am currently using would take a very
> long time (given multiple runs of 320 .r files, given experimental
> replication).

 qsub? Are you using the Sun Grid Engine or some other queue
submission system? It should be possible to pass a parameter that gets
through to your R process. I wrote some docs on something like that,
geared for our local HPC which uses SGE:

http://www.maths.lancs.ac.uk/~rowlings/HPC/RJobs/

 The crux of which is to get the TASK_ID variable from the environment
and use that to do slightly different things in a batched submission.
You get a TASK_ID if you submit the job as a task array.
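A minimal sketch of that approach in R, assuming the job was submitted as a task array (e.g. `qsub -t 1-320 runjob.sh`) so that SGE sets the SGE_TASK_ID environment variable; the parameter grid and directory names below are made up for illustration:

```r
## Read the SGE task id and use it to pick this run's parameters.
task_id <- as.integer(Sys.getenv("SGE_TASK_ID"))

## One row per task: 320 hypothetical combinations of
## sample size and experimental replicate (4 x 80 = 320).
params <- expand.grid(n = c(100, 200, 500, 1000), rep = 1:80)
n    <- params$n[task_id]
repl <- params$rep[task_id]

## Each task works in its own results directory.
setwd(file.path("results", sprintf("task%03d", task_id)))

for (i in 1:n) {
  ## ... one iteration of the simulation for replicate repl ...
}
```

With this, a single .R file serves all 320 jobs and nothing needs to be edited by hand between runs.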

 If you really do have to make 320 .R files, then look into the brew
package, which is a simple templating system. Create an analysis.brew
file that has tagged template variables in it (it's something like <%=
i %>) and then run brew 320 times.
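A sketch of that route, assuming the brew package is installed and that analysis.brew contains template tags such as `load("<%= datafile %>")` and `for (i in 1:<%= x %>)`; the file names and loop bound here are hypothetical:

```r
library(brew)

for (j in 1:320) {
  ## brew() looks these variables up in the calling frame
  datafile <- sprintf("input%03d.RData", j)  # hypothetical input files
  x        <- 1000                           # loop bound for this job

  ## Fill in the template and write out one .R file per job.
  brew("analysis.brew", output = sprintf("analysis%03d.R", j))
}
```

Each generated analysisNNN.R file can then be submitted as its own qsub job.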

Barry


