This is my first time on this site so I am looking forward to hearing from all of you. I have come to this site in search of an answer to one specific question:
I have to conduct some modeling. I do this by running a huge number (13 million+) of instances of one program, with small input changes for each instance. The program takes about 1 hour to run on a single core and needs no more than 2 GB of RAM. I then need to compile and plot all the results at the very end of the sequence. Since each instance is independent of the others, I am not interested in parallelizing a single run across multiple cores; I just want to run many single-core instances at once.
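For what it's worth, this "many independent single-core runs" pattern can be driven from a short script on one machine. Below is a minimal sketch using Python's standard library, under some assumptions not in my post: that the model is a Tcl script (the name `model.tcl` is hypothetical) that takes its varying input as a command-line argument, and that the `OpenSees` executable is on the PATH. The sketch uses a harmless stand-in command so it runs anywhere.

```python
# Sketch: farm out independent single-core runs using only the standard library.
# Assumptions (hypothetical, not confirmed): the model is "model.tcl" and reads
# its case number from a command-line argument; OpenSees is on the PATH.
import subprocess
import sys
from concurrent.futures import ProcessPoolExecutor


def run_case(case_id: int) -> int:
    """Run one independent model instance and return its exit code."""
    # The real invocation might look like:
    #   cmd = ["OpenSees", "model.tcl", str(case_id)]
    # Stand-in command so this sketch is runnable without OpenSees installed:
    cmd = [sys.executable, "-c", f"print('case {case_id} done')"]
    return subprocess.run(cmd, check=False).returncode


if __name__ == "__main__":
    cases = range(8)  # would be range(13_000_000) in the real study
    # One worker process per core; each instance stays single-core.
    with ProcessPoolExecutor() as pool:
        exit_codes = list(pool.map(run_case, cases))
    print("failures:", sum(code != 0 for code in exit_codes))
```

The same idea scales to a lab: each machine would just be handed a different slice of the case numbers.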
My restrictions are:
- The cheaper the better, with a maximum budget of approximately $5,000
- My own coding skills are still what I would consider basic; I would prefer to use pre-developed systems rather than write my own
- I have access to a large number of individual computers (in university computer labs) but do not want to have to run each machine by hand (remote access is obviously better)
- The simulation program itself is a Tcl-based program called OpenSees; I use MATLAB to compile the results
Basically, my understanding is that the most efficient way to handle this workload is to have many individual cores available, each with a modest amount of RAM. I was thinking of a small server, or a computer lab with an SSI (single-system image) type of setup. Any comments or suggestions are greatly appreciated! I am open to any solution.
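To help with sizing, here is the back-of-envelope arithmetic implied by the numbers above (13 million+ runs at roughly 1 hour each, one core per run); the core counts plugged in are just illustrative assumptions:

```python
# Back-of-envelope sizing for an embarrassingly parallel workload:
# total work = runs x hours-per-run, spread across however many cores are free.
def wall_clock_years(n_runs: int, hours_per_run: float, n_cores: int) -> float:
    """Wall-clock time in years if n_cores grind through the cases nonstop."""
    total_core_hours = n_runs * hours_per_run
    return total_core_hours / n_cores / (24 * 365)


# Example: 13 million 1-hour runs on a hypothetical 100-core lab
# -> about 14.8 years of wall-clock time.
print(round(wall_clock_years(13_000_000, 1.0, 100), 1))
```

So the core count (or a reduction in per-run time or run count) matters far more here than RAM, which is why I am focused on "as many cores as the budget allows."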
Edited by stenman, 22 December 2015 - 03:26 PM.