[BioC] EBS volumes with the Bioconductor AMI: how to change default behaviour

Quin Wills qilin at quinwills.net
Fri Aug 12 09:32:06 CEST 2011


Thanks for the advice Dan.

The reason I like S3 is that I like to run jobs, log out, and have the
instance shut down automatically when done. At the moment I'm running the
following function to automate shutdown from within my R script:

shutdown <- function(time = 0) {
  system(paste0("echo 'sudo halt' | at now + ", time, " min"))
}

Even when I set the instance's shutdown behaviour to "terminate" in the
AWS Management Console, those EBS volumes seem to persist when I automate
shutdown this way.
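
[Ed.: one workaround might be to ask the EC2 API to terminate the instance
rather than halting the OS; terminating deletes any attached volume whose
DeleteOnTermination flag is set. A hedged sketch, assuming the classic EC2
API tools (ec2-terminate-instances) are installed and credentials are
configured; terminate_self_cmd is a name invented here, not part of any
package:]

```r
## Build the shell command that terminates the running instance. The
## instance ID comes from the EC2 metadata service, reachable from
## inside any instance. Assumes ec2-terminate-instances (classic EC2
## API tools) is on the PATH; 'terminate_self_cmd' is hypothetical.
terminate_self_cmd <- function() {
  id_url <- "http://169.254.169.254/latest/meta-data/instance-id"
  paste0("ec2-terminate-instances $(curl -s ", id_url, ")")
}
## system(terminate_self_cmd())  # run from inside the instance
```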

Do you perhaps have a recommendation on how better to make sure my
instance shuts down once the job is done? Ideally it could also fire off
a quick email, but that doesn't seem easy to do unless I create my own
AMI.
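
[Ed.: a completion email may not require a custom AMI if the instance
already has a command-line mailer. A sketch assuming mailx is installed
and outbound mail is allowed; notify_cmd is a made-up name:]

```r
## Compose a shell command that mails a short notice before shutdown;
## 'notify_cmd' is a hypothetical helper assuming mailx is installed.
notify_cmd <- function(to, subject = "EC2 job finished") {
  paste0("echo 'Job complete' | mailx -s '", subject, "' ", to)
}
## system(notify_cmd("me@example.com"))  # then call shutdown(0)
```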

Thanks a ton,
Quin


> On Thu, Aug 11, 2011 at 6:11 AM, Quin Wills <qilin at quinwills.net> wrote:
>> Hello Bioconductor AMI gurus
>>
>> Delighted that Bioconductor has an AMI with pre-loaded bells and whistles.
>> I'm hardly an AWS guru (yet?), and in particular feel like all the dots
>> aren't connecting in my brain regarding EBS.
>>
>> So I see that the Bioconductor AMI automatically initiates 1 x 20 GiB root
>> EBS volume and 3 x 30 GiB extra volumes, correct?
>> What if I don't want
>> these? Presumably just detaching and deleting them in the AWS management
>> console is one way to do it? Is this the only (reasonably easy) way?
>
>
>The AMI "lives" on these EBS volumes, so you don't want to delete them.
>You may find you don't even own them.
>
>
>
>> For the moment I'm just using AWS for CPU-intensive work that I need to
>> speed up. I have an S3 bucket and am using the omegahat RAmazonS3 library to
>> access and save data on a semi-permanent basis. Does this seem like a
>> reasonable tactic? For the moment, the sizes of the data objects in my S3
>> bucket are manageable.
>
>If it works for you, it is reasonable. The reason we don't use S3 is
>that we find it slow, and it is a two-step process: push files from
>your instance to S3, then pull them from S3 to your local machine, as
>opposed to using scp to copy files directly in one step.
>
>But if you find that S3 works for you, there's no reason not to use it.
>Dan
>
>> Perhaps there's a link to an idiot's guide on "EBS vs S3" options and
>> suggestions when using the Bioconductor AMI?
>>
>> Thanks in advance for any wisdom,
>> Quin
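
[Ed.: the one-step copy Dan describes can be sketched as below; the host
name, path, and helper name are placeholders, run from the local machine:]

```r
## Compose the scp command that pulls results straight from the
## instance to the local machine; 'scp_pull_cmd' and the example
## host/path values are invented for illustration.
scp_pull_cmd <- function(host, remote, local = ".") {
  paste("scp", paste0("ubuntu@", host, ":", remote), local)
}
## system(scp_pull_cmd("ec2-1-2-3-4.compute-1.amazonaws.com", "~/results.rda"))
```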


