In a typical academic supercomputer setup, multiple users are allowed to SSH into a head node, where an installed job scheduler lets you submit jobs requesting certain resources; the scheduler then farms those jobs out to the compute nodes.

Is it possible to approximate this workflow on AWS, where you SSH into a very cheap always-on node like a t1.micro, but then from the command line can spin up another EC2 instance that runs a script or pipeline, saves its output to S3, and terminates itself?
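For what it's worth, here is a rough sketch of how this can be done with the AWS CLI alone: launch an instance with `--instance-initiated-shutdown-behavior terminate` so that a shutdown from inside the guest terminates (rather than stops) it, and pass a user-data script that runs the pipeline, copies results to S3, and powers off. The AMI ID, bucket name, and pipeline path below are placeholders, and the instance needs an IAM role with S3 write access; this is a sketch, not a tested setup.

```shell
#!/bin/sh
# Write the user-data script the worker instance will run on first boot.
cat > job.sh <<'EOF'
#!/bin/bash
# Run the pipeline (placeholder path), upload output, then power off.
# Because the instance was launched with shutdown-behavior=terminate,
# this poweroff terminates the instance instead of just stopping it.
/opt/pipeline/run.sh > /tmp/output.txt 2>&1
aws s3 cp /tmp/output.txt s3://my-results-bucket/output-$(date +%s).txt
poweroff
EOF

# Launch a worker from the cheap head node. --iam-instance-profile
# grants the S3 permissions; all names here are placeholders.
aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type c5.large \
  --iam-instance-profile Name=pipeline-worker-role \
  --instance-initiated-shutdown-behavior terminate \
  --user-data file://job.sh
```

This gives you a fire-and-forget worker per job; for anything closer to a real scheduler (queues, priorities, retries) you would layer something like SQS or a batch system on top.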